Plato posited a cosmology in which the variability of what we see around us results from the imperfect manifestation of ideal forms. So, for example, the large black horse and the small piebald horse you pass in a field are variant images of an ideal pattern of perfect horsehood. It’s not difficult to square this, at least intuitively, with modern ideas of micromutation and genetic blueprints. It is also a view of the universe mirrored in the idea of a theoretical model – the model being an idealised image of the varying reality (whatever reality may be, once you start looking at it closely). If all of this seems whimsical, consider the work of Dor Abrahamson[1] and others on how students learn statistics and probability concepts.
Statisticians have a very close, if ambivalent, relationship with this business of models and ideals. Like sculptors working a block of marble in search of the form within, they seek to reveal the model which sits as a shining ideal behind the grubby uncertainties of real data – in fact, statistics could be defined as the quantification of deviance from the ideal. This Platonic vision is clearest in classical frequentist statistics, with its attempt to place key population descriptors on one of a predefined list of approved mathematical distributions, but it’s just as real, if less obvious, in Bayesian approaches. Even where models are not themselves statistical, use no statistical methods, and eschew any statistical connection, they usually derive from probabilistic investigation, and their relation to reality is therefore a statistical concern.
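To make that concrete, here is a minimal sketch (in Python, with invented data) of the two stances. The frequentist picks a distribution from the approved list, estimates its parameters, and quantifies deviance from that ideal; the Bayesian treats a parameter itself as uncertain and updates a prior. The data, the prior, and the known-variance simplification are all illustrative assumptions, not anybody’s real analysis.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
data = rng.normal(loc=170.0, scale=8.0, size=200)  # hypothetical heights, cm

# Frequentist: maximum-likelihood fit to a normal distribution,
# then measure deviance from that ideal with a goodness-of-fit test.
mu_hat, sigma_hat = stats.norm.fit(data)
ks = stats.kstest(data, "norm", args=(mu_hat, sigma_hat))
print(f"MLE: mu={mu_hat:.2f}, sigma={sigma_hat:.2f}, KS p={ks.pvalue:.3f}")

# Bayesian: conjugate normal-normal update for the mean (variance
# treated as known, for brevity). The 'ideal' is now a posterior
# distribution over the parameter, not a single point.
prior_mu, prior_var = 160.0, 25.0           # illustrative prior belief
like_var = sigma_hat**2 / len(data)         # sampling variance of the mean
post_var = 1.0 / (1.0 / prior_var + 1.0 / like_var)
post_mu = post_var * (prior_mu / prior_var + data.mean() / like_var)
print(f"Posterior for mu: N({post_mu:.2f}, {post_var:.3f})")
```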
IBM, for instance, is interested in building cerebral cortex biomimetics using confabulation theory. UCSD’s Robert Hecht-Nielsen, in a talk to IBM’s Almaden Institute on Cognitive Computing last year, emphasised that the confabulation architecture contains ‘no algorithms, rules, Bayesian networks, etc.’[2]. Confabulation, however, depends on maximisation of cogency, and cogency is defined by a probability statement relating assumed facts to a candidate conclusion; evaluating its outcomes as mimesis is therefore a statistical exercise in biocomparison (Bayesian and otherwise). I hope to return to this in a future issue.
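For the curious, a toy illustration of what maximising cogency amounts to on this reading: the cogency of a candidate conclusion given some assumed facts is taken as the probability of those facts conditional on the conclusion, approximated by a product of pairwise conditionals. The symbols and probability table below are invented purely for illustration; the real architecture works over learned cortical ‘symbols’, not hand-set numbers.

```python
import math

# Hypothetical conditional probabilities p(fact | conclusion).
p_fact_given = {
    "rain":   {"wet_road": 0.9, "umbrellas": 0.8, "grey_sky": 0.85},
    "parade": {"wet_road": 0.1, "umbrellas": 0.3, "grey_sky": 0.20},
}

def cogency(conclusion, facts):
    """Product of p(fact | conclusion) over the assumed facts."""
    return math.prod(p_fact_given[conclusion][f] for f in facts)

facts = ["wet_road", "umbrellas", "grey_sky"]
winner = max(p_fact_given, key=lambda e: cogency(e, facts))
print(winner, cogency(winner, facts))  # -> rain 0.612
```

Note that the winning conclusion is just an argmax over a probability product – which is why, whatever the architecture’s internals, its outputs remain open to statistical evaluation.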
Less ambitious than modelling the functions of the cerebral cortex itself, though not necessarily less interesting, is statistical study of the braincase which contains it. This is the aim of a research study, still in the unfunded preapproval pilot phase, by a young African academic. Since the work is contentious for sociopolitical and religious reasons, and nascent careers are fragile, I won’t identify the researcher or university more closely than that; my primary interest here is, in any case, not the study itself but the informational framework within which it is to be conducted. It asks whether there is a ‘trajectory’ for the evolutionary development of the cerebral cortex, indirectly inferable from analogous trajectories of three hundred descriptors for the physical forms of the cortex components and the container within which they sit.
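Without pre-empting the study itself, the informational shape of the problem is familiar multivariate territory of the kind Rao[4] and van Vark[5] describe: each specimen becomes a point in a descriptor space of several hundred dimensions, which is then reduced to a few components in which a temporal trajectory can be sought. The sketch below simulates that setup; the descriptor count, epochs and drift are placeholders, not the study’s data.

```python
import numpy as np

rng = np.random.default_rng(0)
n_specimens, n_descriptors = 60, 300                   # ~300 descriptors, as above
epochs = np.sort(rng.uniform(0.0, 2.0, n_specimens))   # toy dates, Myr before present

# Simulate a slow morphological drift along one latent direction, plus noise.
drift = rng.normal(size=n_descriptors)
X = np.outer(epochs, drift) + rng.normal(scale=0.5, size=(n_specimens, n_descriptors))

# Principal components via SVD of the centred descriptor matrix.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U[:, :2] * s[:2]          # first two principal component scores

# Crude trajectory check: does the leading component track the epochs?
r = np.corrcoef(epochs, scores[:, 0])[0, 1]
print(f"correlation of PC1 with epoch: {r:+.2f}")
```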
[1] Abrahamson, D., "Bottom-up stats: Toward an agent-based 'unified' probability and statistics", in Small steps for agents… giant steps for students?: Learning with agent-based models. 2006, San Francisco: American Educational Research Association.
[2] Hecht-Nielsen, R., The Mechanism of Thought. 2006, San Diego: IBM Almaden Institute on Cognitive Computing.
[3] Mithen, S.J., The Singing Neanderthals: The Origins of Music, Language, Mind and Body. 2005, London: Weidenfeld & Nicolson. ISBN 0297643177 (hardback).
[4] Rao, C.R., "The Utilization of Multiple Measurements in Problems of Biological Classification". Journal of the Royal Statistical Society, 1948. 10(2): 45pp.
[5] van Vark, G.N., "Some applications of multivariate statistics to physical anthropology". Statistica Neerlandica, 2005. 59(3): 10pp.
[6] Thomson, A. and Randall-MacIver, R., Ancient Races of the Thebaid. 1905, Oxford: Oxford University Press.