All self-respecting nonlinear scientists know self-organization when they see it. For this reason, if no other, it is important to put some mathematical spine into our floppy intuitive notion of self-organization. Only a few measures of self-organization have been proposed; none can be adopted in good intellectual conscience.
Carnegie Mellon University Thesis Proposal
Structural Learning of Data Regularity and Ensemble Size

Statistical machine learning typically assumes a domain-specific problem that specifies a function class to learn and the dependence structure of the data. However, such specifications may be unobtainable in a timely manner, due to hidden or dynamic environmental features.
In contrast to the usual Bayesian approach to incomplete domain specification, I apply a minimax approach to specify the objective of an adaptive learner, in a "completed" game in which the hidden domain attributes are set adversarially.
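The contrast between the two approaches can be made concrete with a toy completed game (illustrative numbers only, not the proposal's model): a Bayesian learner minimizes risk averaged under a prior over the hidden attributes, while a minimax learner guards against the adversarial attribute setting.

```python
import numpy as np

# Toy "completed game": rows are learner configurations, columns are
# hidden domain attributes set adversarially; entries are risks.
risk = np.array([[0.2, 0.9],
                 [0.5, 0.5],
                 [0.8, 0.1]])

# Bayesian answer for a fixed prior over the hidden attributes:
prior = np.array([0.5, 0.5])
bayes_choice = np.argmin(risk @ prior)      # best average-case row

# Minimax answer: guard against the worst-case attribute.
minimax_choice = np.argmin(risk.max(axis=1))  # best worst-case row

print(bayes_choice, minimax_choice)
```

Note that the two objectives select different configurations here: the Bayesian choice exploits the prior, while the minimax choice caps the worst-case risk at 0.5 regardless of how the hidden attributes are set.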
I go on to develop optimal nonasymptotic regret and risk bounds for online learning over a countably infinite ensemble of experts.
My approach extends the theory of structural risk minimization to an incompletely specified data-dependence structure. Referring to the disutility suffered in the completed experts game as a price, I derive an estimator that attains a finite price; more generally, a finite price corresponds to a risk bound that is optimal up to a constant factor.
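The flavor of such an estimator can be sketched with the classical exponentially weighted average forecaster, run over a truncated countable ensemble with a geometric prior that prices later experts more heavily. This standard construction is an illustration only, not the proposal's estimator.

```python
import numpy as np

def hedge(expert_losses, prior, eta):
    """Exponentially weighted forecaster over a (truncated) countable
    expert ensemble. expert_losses: (T, N) losses in [0, 1];
    prior: length-N prior weights; eta: learning rate."""
    log_w = np.log(prior)            # start from the prior over experts
    total_loss = 0.0
    for losses in expert_losses:
        p = np.exp(log_w - log_w.max())
        p /= p.sum()                 # current mixture over experts
        total_loss += p @ losses     # learner's expected loss this round
        log_w -= eta * losses        # exponential-weight update
    return total_loss

rng = np.random.default_rng(0)
T, N = 500, 32
losses = rng.uniform(size=(T, N))
losses[:, 3] *= 0.1                  # expert 3 is much better
prior = 2.0 ** -np.arange(1, N + 1)  # geometric prior over the ensemble
prior /= prior.sum()
learner = hedge(losses, prior, eta=np.sqrt(2 * np.log(N) / T))
best = losses.sum(axis=0).min()
print(learner - best)                # regret against the best expert
```

The geometric prior plays the role of a structural penalty: an expert deep in the countable ensemble must outperform the early ones by enough to pay off its smaller prior weight, which is how a risk bound with a constant-factor overhead can arise.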
Fast constant-factor approximations (CFAs) would require domain-specific theories to avoid an exponentially increasing number of expert queries. A more general approach is to extend the foregoing theory to randomized decisions and limited feedback, whereby a learner may query only subsets of experts. Because deterministic query schedules would have fatal worst-case performance, queries would be randomized. I propose to investigate whether a CFA remains possible with respect to expected performance over the randomized queries, and what weaker risk bounds are attainable in this query-constrained setting.
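A standard device for randomized, query-limited feedback is importance weighting, as in Exp3-style bandit algorithms: each round the learner queries a random subset of experts and reweights the observed losses so the loss-vector estimate is unbiased. The sketch below uses uniform query sets for simplicity; it is an illustration of the feedback model, not the proposal's algorithm.

```python
import numpy as np

def limited_query_hedge(expert_losses, k, eta, rng):
    """Hedge with limited feedback: each round only k of the N experts'
    losses are observed (uniform random queries), and the update uses
    importance-weighted, unbiased loss estimates."""
    T, N = expert_losses.shape
    log_w = np.zeros(N)
    total = 0.0
    for losses in expert_losses:
        p = np.exp(log_w - log_w.max())
        p /= p.sum()
        total += p @ losses                       # expected mixture loss
        queried = rng.choice(N, size=k, replace=False)
        est = np.zeros(N)
        est[queried] = losses[queried] * (N / k)  # unbiased estimate
        log_w -= eta * est
    return total

rng = np.random.default_rng(1)
T, N, k = 2000, 16, 4
losses = rng.uniform(size=(T, N))
losses[:, 0] *= 0.2                               # expert 0 is best
total = limited_query_hedge(losses, k, eta=0.05, rng=rng)
print(total - losses.sum(axis=0).min())           # regret, partial feedback
```

The importance weights inflate the variance of each update by roughly N/k, which is exactly the mechanism by which query-constrained regret bounds become inferior to the full-information ones.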
Beyond computation, generalization is a cornerstone of stationary time-series analysis. Generalization is stronger than good on-average performance: it guarantees that the expected performance of an estimator improves at every time step of a data stream. Consequently, low regret alone does not imply generalization.
It is, however, possible to replace pure regret with a hybrid objective that becomes a batch instantaneous excess risk when the data is sufficiently regular. I propose to investigate whether a CFA would remain possible in this setting, or, failing that, how a CFA in the pure-regret setting can be made to generalize, if not optimally.
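The bridge between regret and batch excess risk can be illustrated with the standard online-to-batch conversion: when the stream is i.i.d. (the "sufficiently regular" case), averaging an online learner's iterates yields a batch predictor whose expected excess risk is bounded by the regret divided by the horizon T. A minimal sketch, using online gradient descent for mean estimation under squared loss (an illustration of the conversion, not the proposed hybrid objective):

```python
import numpy as np

rng = np.random.default_rng(2)
stream = rng.normal(loc=1.5, scale=1.0, size=4000)  # i.i.d. data stream

# Online gradient descent on the squared loss f_t(w) = (w - x_t)^2.
w, iterates = 0.0, []
for t, x in enumerate(stream, start=1):
    iterates.append(w)                    # predictor used at round t
    w -= (1.0 / (2 * t)) * 2.0 * (w - x)  # step size 1/(2t)

# Online-to-batch conversion: average the per-round predictors into a
# single batch estimator; its expected excess risk is at most regret/T.
w_bar = float(np.mean(iterates))
print(abs(w_bar - 1.5))                   # distance to the risk minimizer
```

The averaged iterate inherits a per-step guarantee from the cumulative regret, which is precisely the property that pure regret alone, without regularity of the data, fails to deliver.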
Causal Architecture, Complexity, and Self-Organization in Time Series and Cellular Automata. Cosma Rohilla Shalizi, Physics Department, University of Wisconsin.