To what degree functions of the mind can be reproduced or simulated by a computer is a question that has become prominent in recent years. For public consumption, it is often posed as a matter of ends – when will computer performance surpass human performance on interesting challenges? From a scientific standpoint, it is more useful to pose it as a question of means – what resources do organisms have available that let them respond adaptively to the situations they encounter in the world?
This is the question that computational cognitive scientists seek to answer. Such research frequently starts from parametric characterizations of empirical behavior. Theorists then develop computational models that capture the quantitative relationship between these parameters and various experimental conditions. A good empirical fit permits further questions of biological and epistemic plausibility to be asked of the model. Models that pass these quasi-philosophical checks graduate to the status of theories. These accounts are, inevitably, challenged as incomplete or erroneous by further iterations of experiments and models.
The cycle of research in cognitive science therefore encompasses, in the order of the workflow sketched above, neuroscience and psychology, statistics, computer science, and philosophy. This course is meant to give both undergraduate and postgraduate students interested in the computational aspects of cognitive science a relatively comprehensive overview of the discipline.
Over 4-5 lectures per area, categories of mental phenomena will be introduced via descriptions of empirical studies, followed by the chronology of models seeking to explain them. Instruction in each topic will conclude with an instructor-mediated discussion of the merits of competing models, culminating in an appreciation of promising future directions for research in the area. The instructor's emphasis will lean towards approximate Bayesian approaches to these problems.
We will follow a cycle of continuous evaluation – a quiz will be conducted for each of the 7 topics covered in the course (see below), each counting for 10% of the course grade, for 70% in total. The remaining 30% will come from a course project, which will require students to implement, critique, and possibly improve upon a state-of-the-art model in one of the 7 areas covered in the course.
Foundations – evidence for invariants in behavior – associativity – Pavlovian conditioning – Minsky, Newell and the strong AI program – the frame problem – production system architectures of the mind – the Bayesian revolution – inference, learning and causation – compositionality and probabilistic programs – approximate computation in the mind – algorithmic accounts of sub-optimal inference
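By way of a first taste of the Bayesian machinery this unit builds toward, here is a minimal Python sketch of exact posterior inference over a coin's bias on a discretized grid. The data, grid resolution, and uniform prior are illustrative choices, not course material.

    # Exact Bayesian inference on a discretized grid: inferring a coin's bias
    import numpy as np

    grid = np.linspace(0.01, 0.99, 99)       # candidate values of the bias theta
    prior = np.ones_like(grid) / grid.size   # uniform prior over the grid
    flips = [1, 1, 0, 1, 1, 1, 0, 1]         # 1 = heads, 0 = tails (toy data)

    # Likelihood of the data under each candidate theta (i.i.d. Bernoulli flips)
    heads = sum(flips)
    tails = len(flips) - heads
    likelihood = grid**heads * (1 - grid)**tails

    # Bayes' rule: posterior is proportional to likelihood times prior
    posterior = likelihood * prior
    posterior /= posterior.sum()
    print("posterior mean of theta:", (grid * posterior).sum())

Replacing the exact normalization with a sampling scheme turns this into the kind of approximate inference the unit's closing topics take up.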
Perception – James, Helmholtz, Wundt – classical psychophysics – perceptual modalities – quantification and analysis methods – Gestalt principles – assimilation and contrast effects – the poverty of the stimulus – Gibsonian psychophysics – Anne Treisman’s feature integration account – recognition by components – David Knill & Eero Simoncelli’s Bayesian visual perception work
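To make classical psychophysics concrete, the following Python sketch simulates a yes/no detection task for a hypothetical observer and recovers a cumulative-Gaussian psychometric function by grid-search maximum likelihood; all parameter values and trial counts are invented for illustration.

    # Simulate a yes/no detection task and fit a psychometric function by MLE
    import math, random

    def phi(x):  # standard normal CDF
        return 0.5 * (1 + math.erf(x / math.sqrt(2)))

    true_mu, true_sigma = 5.0, 1.5           # observer's true threshold and spread
    random.seed(0)
    trials = [(x, random.random() < phi((x - true_mu) / true_sigma))
              for x in [2, 3, 4, 5, 6, 7, 8] for _ in range(50)]

    # Maximum-likelihood fit over a coarse grid of (mu, sigma) candidates
    best, best_ll = None, -math.inf
    for mu in [m / 10 for m in range(30, 71)]:
        for sigma in [s / 10 for s in range(5, 31)]:
            ll = sum(math.log(max(1e-9,
                                  phi((x - mu) / sigma) if said_yes
                                  else 1 - phi((x - mu) / sigma)))
                     for x, said_yes in trials)
            if ll > best_ll:
                best, best_ll = (mu, sigma), ll

    print("estimated threshold (mu) and slope (sigma):", best)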
Memory – early experiments – Miller and the magical number seven – classical experiment settings and analyses – signal detection theory – Tulving’s memory types – Baddeley and the discovery of working memory – Rich Shiffrin’s line of models and their problems – Austerweil’s random walk model – Standing and the fidelity of visual long-term memory, connecting to Tim Brady’s recent work – Tom Hills and memory search
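As a minimal illustration of signal detection theory, the Python sketch below computes sensitivity (d') and response bias (the criterion c) from invented hit and false-alarm counts in an old/new recognition test.

    # Signal detection theory: d' and criterion from hits and false alarms
    from statistics import NormalDist

    z = NormalDist().inv_cdf                       # inverse standard-normal CDF

    hits, misses = 42, 8                           # responses to old items
    false_alarms, correct_rejections = 12, 38      # responses to new items

    hit_rate = hits / (hits + misses)
    fa_rate = false_alarms / (false_alarms + correct_rejections)

    d_prime = z(hit_rate) - z(fa_rate)             # separation of signal and noise
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))  # placement of the decision cut

    print(f"d' = {d_prime:.2f}, c = {criterion:.2f}")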
Decision-making – von Neumann and homo economicus – Rescorla-Wagner, Pearce-Hall and classical conditioning findings – operant conditioning and skill learning – Sutton-Barto-Singh and reinforcement learning building up to skill learning – cognitive biases in decision-making – Tversky’s non-compensatory models – the Gigerenzer fast-and-frugal school of heuristics – fast and slow decisions and their consequences – drift diffusion models and their competitors – frugal preference learning
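The Rescorla-Wagner model discussed in this unit reduces to a one-line delta rule; the following Python sketch, with arbitrary parameter values, simulates acquisition of a conditioned response followed by extinction.

    # Rescorla-Wagner: prediction error drives changes in associative strength V
    alpha, beta = 0.3, 1.0                 # CS salience, US learning rate
    V = 0.0                                # associative strength of the CS

    for trial in range(30):
        lam = 1.0 if trial < 15 else 0.0   # US present on the first 15 trials only
        V += alpha * beta * (lam - V)      # delta rule update
        print(f"trial {trial + 1:2d}: V = {V:.3f}")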
Language – semantics and semiotics – neurobiological foundations of language with empirical evidence – language universals and typology – the Sapir-Whorf hypothesis, with evidence for and against – pragmatics and social signaling – nativist vs emergentist models of language learning – Bayesian accounts of structure learning – non-human languages – Wittgenstein and philosophy as language games
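One concrete Bayesian treatment of pragmatics is the Rational Speech Acts framework, in which a listener reasons about a speaker who chooses informative utterances. The Python sketch below uses a made-up three-object reference game (the objects, utterances, and rationality parameter are all invented) and derives a scalar-implicature-like inference: hearing "square", the pragmatic listener favors the blue square, since the green one would more likely have been called "green".

    # Rational Speech Acts: a pragmatic listener inverts an informative speaker
    import numpy as np

    objects = ["blue circle", "blue square", "green square"]
    utterances = ["blue", "green", "circle", "square"]

    # Literal semantics: truth[u][o] = 1 if utterance u is true of object o
    truth = np.array([[1, 1, 0],    # "blue"
                      [0, 0, 1],    # "green"
                      [1, 0, 0],    # "circle"
                      [0, 1, 1]],   # "square"
                     dtype=float)

    prior = np.ones(3) / 3          # uniform prior over objects
    alpha = 4.0                     # speaker rationality

    L0 = truth * prior              # literal listener: semantics times prior
    L0 /= L0.sum(axis=1, keepdims=True)

    S1 = np.exp(alpha * np.log(L0 + 1e-12)).T   # speaker: soft-max of informativity
    S1 /= S1.sum(axis=1, keepdims=True)

    L1 = S1.T * prior               # pragmatic listener inverts the speaker
    L1 /= L1.sum(axis=1, keepdims=True)

    for u, row in zip(utterances, L1):
        print(u, np.round(row, 2))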
Motor control and learning – systemic principles, feedback, redundancy, coordination – physiological basis – information processing problems – Peter Dayan’s model-free vs model-based accounts of motor control – Daniel Wolpert’s hierarchical motor control models – Paul Schrater’s Bayesian structure learning – hierarchical reinforcement learning – Karl Friston’s free energy approach – Nikolai Bernstein’s beautiful ideas on the value of noise in the motor system – Jeff Beck’s rational sub-optimal inference account
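For a first feel for feedback control and motor noise, here is a toy Python sketch of a one-dimensional reach under proportional feedback, with signal-dependent noise (larger motor commands are noisier), a theme the unit's discussion of noise in the motor system picks up; the gain and noise scale are invented.

    # A 1-D reach with proportional feedback and signal-dependent motor noise
    import random

    random.seed(1)
    target, position = 10.0, 0.0
    gain, noise_scale = 0.4, 0.1      # feedback gain; noise grows with command size

    for step in range(20):
        command = gain * (target - position)                 # act on current error
        noise = random.gauss(0, noise_scale * abs(command))  # signal-dependent noise
        position += command + noise
        print(f"step {step + 1:2d}: position = {position:6.3f}")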
Similarity & categorization – Luce, Shepherd and empirical foundations – exemplars vs prototypes debate with empirical data – Nosofsky, Shiffrin and the rise of cluster models – Anderson’s rational model – Josh Tenenbaum’s Bayesian program – hierarchical Dirichlet models of categorization – compositionality and the generation of new categories – Liane Gabora’s computational models of creativity
Readings will consist mostly of research papers, assigned ahead of lectures.