Source : University of Pennsylvania School of Medicine
PHILADELPHIA – It happens to everyone: You stay up late one night to finish an assignment, and the next day, you’re exhausted. Humans aren’t unique in that; all animals need sleep, and if they don’t get it, they must make it up.
The biological term for that pay-the-piper behavior is “sleep homeostasis,” and now, thanks to a research team at the Perelman School of Medicine, University of Pennsylvania, one of the molecular players in this process has been identified – at least in nematode roundworms.
David Raizen, MD, PhD, assistant professor of Neurology, and his colleagues report in Current Biology that even in Caenorhabditis elegans, a tiny nematode worm that feeds on bacteria, loss of sleep is “stressful.”
The researchers forced the animals to stay awake during a developmental stage when they normally sleep, called “lethargus.” These sleep-deprived worms, like college students after an all-nighter, exhibited signs of sleep homeostasis – they were harder to wake up compared to control worms.
While nematode worms do not sleep as vertebrates do, lethargus is a sleep-like state, says Raizen, characterized by episodic reversible immobility, elevated arousal thresholds, and homeostasis.
On the molecular level, loss of sleep in the worm was associated with migration of the stress-related DNA-binding protein DAF-16, also called FOXO, from the cell cytoplasm into the nucleus. Here, the protein activates expression of stress-related genes. Knocking out the DAF-16 gene eliminated the animals’ homeostatic response – the equivalent of giving an up-all-night college student a free pass on sleep deprivation.
“You might think that is a good thing,” Raizen says, “but a good percentage of DAF-16 mutants died” – as many as half of the worms in some cases. That, Raizen says, suggests that the movement of DAF-16 into the nucleus is not merely a consequence of sleep deprivation, but rather a key to the homeostatic response.
“There’s something important about being able to mount a homeostatic behavioral response,” Raizen concludes. “We don’t know what that is, but it’s clearly important to the animal.”
Sleep homeostasis is critical to human health. Sleep deprivation in humans has been linked to weight gain and insulin resistance and, in laboratory rats, to death, Raizen says.
Whether DAF-16/FOXO will play the same role in humans as in nematodes is an open question. But it turns out that C. elegans is actually a useful model organism for studying vertebrate neurobiology, Raizen says. Many key observations made in the invertebrate have carried over to vertebrate systems.
Interestingly, when the team asked which tissue requires DAF-16 activity in order to restore sleep homeostasis in mutant animals, they found to their surprise that it isn’t neurons. But restoring DAF-16 activity in muscle tissue did restore homeostasis, suggesting an extra-neuronal component of sleep.
“The muscle must somehow communicate with the nervous system to coordinate this response,” Raizen says.
Co-authors include Robert J. Driver and Annesia Lamb from the Department of Neurology, and Abraham Wyner, from the Wharton School.
The research was funded by the National Institutes of Health’s National Institute of Neurological Disorders and Stroke (NS064030) and Office of Research Infrastructure (OD010440) and the Brain & Behavior Research Foundation.
Penn Medicine is one of the world’s leading academic medical centers, dedicated to the related missions of medical education, biomedical research, and excellence in patient care. Penn Medicine consists of the Raymond and Ruth Perelman School of Medicine at the University of Pennsylvania (founded in 1765 as the nation’s first medical school) and the University of Pennsylvania Health System, which together form a $4.3 billion enterprise.
The Perelman School of Medicine is currently ranked #2 in U.S. News & World Report’s survey of research-oriented medical schools. The School is consistently among the nation’s top recipients of funding from the National Institutes of Health, with $398 million awarded in the 2012 fiscal year.
The University of Pennsylvania Health System’s patient care facilities include: The Hospital of the University of Pennsylvania — recognized as one of the nation’s top “Honor Roll” hospitals by U.S. News & World Report; Penn Presbyterian Medical Center; and Pennsylvania Hospital — the nation’s first hospital, founded in 1751. Penn Medicine also includes additional patient care facilities and services throughout the Philadelphia region.
Penn Medicine is committed to improving lives and health through a variety of community-based programs and activities. In fiscal year 2012, Penn Medicine provided $827 million to benefit our community.
- VS265: Neural Computation – Redwood Center
- W. Gerstner – Video Lectures – Computational Neuroscience
I – Models of Single Neurons;
II – Synaptic Changes and Learning
III – Noise and the Neural Code
IV – Structured Networks: Competition, Decision, Field Equations
I. Models of Single Neurons
- Week 1. A first simple neuron model.
- Spiking Neuron Models (Cambridge Univ. Press) Ch. 1 and Ch. 4.1
- http://hebb.mit.edu/courses/9.641/readings/Koch99.pdf (Integrate and fire model)
- D. Johnston and S. Miao-Sin Wu, Foundations of Cellular Neurophysiology, Chapter 2, The MIT Press, Cambridge, Massachusetts, London, England (1995)
- Dayan and Abbott, Sections 5.4 and 5.5
- C. Koch, Biophysics of Computation: Information Processing in Single Neurons, Chapter 6, Oxford University Press, New York, Oxford (1999)
- Linear neuron models
- Linear time-invariant systems and convolution
- Simulating differential equations
- Carandini M, Heeger D (1994) Summation and division by neurons in primate visual cortex. Science, 264: 1333-1336.
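The integrate-and-fire reading above can be made concrete with a short simulation. A minimal sketch in Python follows; the parameter values (membrane time constant, threshold, resistance) are illustrative choices of my own, not taken from the course materials:

```python
def simulate_lif(i_ext, t_max=100.0, dt=0.1, tau=10.0,
                 v_rest=-65.0, v_reset=-65.0, v_thresh=-50.0, r_m=10.0):
    """Leaky integrate-and-fire neuron, forward-Euler integration.

    Returns spike times (ms) for a constant input current i_ext (nA).
    """
    v = v_rest
    spikes = []
    for step in range(int(t_max / dt)):
        # Membrane equation: tau * dV/dt = -(V - V_rest) + R_m * I
        v += dt / tau * (-(v - v_rest) + r_m * i_ext)
        if v >= v_thresh:          # threshold crossing: emit spike, reset
            spikes.append(step * dt)
            v = v_reset
    return spikes

# R_m*I must exceed the 15 mV gap between rest and threshold for firing:
print(len(simulate_lif(1.0)))  # subthreshold (10 mV drive): 0 spikes
print(len(simulate_lif(2.0)))  # suprathreshold (20 mV drive): regular firing
```

With these values the neuron is silent below threshold and fires periodically above it, which is the behavior the Koch reading derives analytically.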
- Week 2. The Hodgkin-Huxley Model and Detailed ion-current based neuron models
- Part A – Reversal potential and Hodgkin-Huxley Equations (37 min)
- Part B – Is there a firing threshold in the Hodgkin-Huxley Model? (20 min)
- Part C – Refractoriness; pulse, step, and ramp currents; molecular basis of ion-current-based neuron models (25 min)
- Part D – Dendrites and cable equation (29 min) (material taught in week 7)
- Reading: Spiking Neuron Models (Cambridge Univ. Press) Ch. 2.1 – 2.3
- Week 3. Two-dimensional models and phase plane analysis (Fitzhugh-Nagumo and Morris-Lecar model)
- Part A – From 4 to 2: Simplifying the Hodgkin-Huxley Equations (35 min)
- Part B – The Role of Nullclines: FitzHugh-Nagumo model (31 min)
- Part C – Phase Plane Analysis: Morris-Lecar and FitzHugh-Nagumo model (31 min)
- Reading: Spiking Neuron Models (Cambridge Univ. Press) Ch. 3.1 and 3.2
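The phase-plane material can also be explored numerically. Below is a short, self-contained sketch of the FitzHugh-Nagumo equations with the commonly used parameters a = 0.7, b = 0.8, τ = 12.5 — an illustrative choice, not necessarily the exact values used in the lectures:

```python
def simulate_fhn(i_ext, t_max=300.0, dt=0.01, a=0.7, b=0.8, tau=12.5):
    """FitzHugh-Nagumo model, forward-Euler integration; returns the v trace.

    dv/dt = v - v**3/3 - w + I_ext
    dw/dt = (v + a - b*w) / tau
    """
    v, w = -1.0, -0.5
    trace = []
    for _ in range(int(t_max / dt)):
        dv = v - v ** 3 / 3 - w + i_ext
        dw = (v + a - b * w) / tau
        v += dt * dv
        w += dt * dw
        trace.append(v)
    return trace

# With these parameters the rest state is stable at I_ext = 0 but loses
# stability at intermediate currents, producing relaxation oscillations.
late = lambda tr: tr[len(tr) // 2:]   # discard the initial transient
print(max(late(simulate_fhn(0.5))) - min(late(simulate_fhn(0.5))))  # large swing
print(max(late(simulate_fhn(0.0))) - min(late(simulate_fhn(0.0))))  # near zero
```

Plotting v against w for the oscillating case reproduces the limit cycle that the nullcline analysis of Parts B and C predicts.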
- Week 4: Complements to Two-dimensional models
- Part A – Separation of time scales; Type I and Type II Models: bifurcation patterns (40 min)
- Reading: Spiking Neuron Models (Cambridge Univ. Press) Ch. 3.3
II. Synaptic Changes and Learning
- Week 4. Synaptic Plasticity and Long-term potentiation
- Part B – Hebbian Learning and Long-Term Potentiation (LTP) of Synapses; BCM Model (35 min)
- Part C – Functional Consequences of Hebbian Learning: Development of Receptive Fields (17 min)
- Reading: Spiking Neuron Models (Cambridge Univ. Press) Ch. 10.1 and 10.2 as well as Ch. 11.1.4.
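As a complement to the Hebbian-learning readings, the sketch below uses Oja's rule, a self-normalizing variant of Hebbian learning. Note the lectures discuss the BCM model, so this is an illustration of the general principle (receptive-field development from input correlations) rather than the course's exact model; input statistics and learning rate are my own choices:

```python
import random

def oja_learn(n_samples=20000, eta=0.01, seed=0):
    """Oja's rule dw = eta*y*(x - y*w) with y = w.x on 2-D inputs.

    Inputs have most of their variance along (1,1)/sqrt(2); the weight
    vector should converge to (plus or minus) that principal direction,
    with unit norm.
    """
    rng = random.Random(seed)
    w = [0.5, -0.3]                   # arbitrary initial weights
    inv = 2 ** -0.5
    for _ in range(n_samples):
        s = rng.gauss(0.0, 1.0)       # strong component along (1, 1)
        n = rng.gauss(0.0, 0.3)       # weak component along (1, -1)
        x = (inv * (s + n), inv * (s - n))
        y = w[0] * x[0] + w[1] * x[1]            # postsynaptic activity
        w = [w[0] + eta * y * (x[0] - y * w[0]),
             w[1] + eta * y * (x[1] - y * w[1])]
    return w

w = oja_learn()
print(w)  # roughly proportional to (1, 1), with norm close to 1
```

The subtractive y²w term keeps the weights bounded, which is exactly the normalization problem plain Hebbian learning leaves open.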
- Week 5. Networks of Neurons and Associative Memory (Hopfield Model)
- Part A – Associative Memory and Classification by Similarity; Model of (Anti-)Ferromagnet (28 min)
- Part B – Storing several patterns: Hopfield model (27 min)
- Part C – Memory Capacity (26 min)
- Reading: A Short Introduction to the Hopfield model (Lecture Notes by W. Gerstner, in French)
- Here is a simple demo of the Hopfield model
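For readers without access to the linked demo, here is a minimal self-contained Hopfield sketch (a toy example of my own, not the linked demo) showing Hebbian storage of two patterns and recall from a corrupted cue:

```python
def hopfield_weights(patterns):
    """Hebbian weight matrix with zero diagonal: w_ij = (1/N) sum_p p_i p_j."""
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j] / n
    return w

def hopfield_recall(w, state, sweeps=5):
    """Deterministic asynchronous updates: s_i <- sign(sum_j w_ij s_j)."""
    s = list(state)
    n = len(s)
    for _ in range(sweeps):
        for i in range(n):
            h = sum(w[i][j] * s[j] for j in range(n))
            s[i] = 1 if h >= 0 else -1
    return s

# Two orthogonal 20-unit patterns; corrupt the first in 3 positions.
p1 = [1] * 10 + [-1] * 10
p2 = [1, -1] * 10
w = hopfield_weights([p1, p2])
noisy = list(p1)
for i in (0, 5, 12):
    noisy[i] = -noisy[i]
print(hopfield_recall(w, noisy) == p1)  # pattern completion: True
```

The corrupted state falls into the basin of attraction of the stored pattern, which is the associative-memory behavior analyzed in Parts A–C.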
- Week 6: Introduction to Reinforcement Learning
- Part A – Reward-based learning: Behavior, Conditioning and Synaptic Changes (28 min)
- Part B – Rat Navigation and Reinforcement Learning Theory (36 min)
- Part C – Continuous state space and eligibility traces (19 min)
- Reading: Reinforcement Learning (MIT Press) by R. Sutton and A. Barto
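A concrete entry point for the temporal-difference ideas covered this week is the classic five-state random walk from Sutton and Barto, estimated with tabular TD(0). Step size and episode count below are my own illustrative choices:

```python
import random

def td0_random_walk(episodes=10000, alpha=0.02, seed=0):
    """TD(0) state-value estimation on a 5-state random walk.

    States 1..5 are non-terminal; 0 and 6 are terminal. Reward is 1 on
    reaching state 6, else 0. True values are i/6 for state i.
    """
    rng = random.Random(seed)
    v = [0.0] * 7
    for _ in range(episodes):
        s = 3                                # start in the middle
        while s not in (0, 6):
            s2 = s + rng.choice((-1, 1))     # unbiased random walk
            r = 1.0 if s2 == 6 else 0.0
            v[s] += alpha * (r + v[s2] - v[s])   # TD(0); terminals stay 0
            s = s2
    return v[1:6]

print([round(x, 2) for x in td0_random_walk()])  # approaches [1/6, ..., 5/6]
```

Adding an eligibility trace to this update (TD(λ)) is the extension discussed in Part C.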
- Week 7: Hand-out of miniproject and more on topics of weeks 2, 5, and 6.
- Part A – Reward-based learning: Eligibility Traces and Application to Morris Water Maze (31 min)
- Part B – Detailed neuron models: Dendrites and cable equation (29 min)
- Part C – Handout of Miniprojects (23 min)
- Reading: Spiking Neuron Models (Cambridge Univ. Press) Ch. 2.4 and 2.5
III. Noise and the Neural Code
- Week 8: Variability of Spike trains, noise and the neural code
- Part A – Variability of Spike trains: Rates, Timing, Noise, and the Poisson Process (40 min)
- Part B – Noise in a passive membrane (20 min)
- Part C – Noise in Integrate-and-Fire models: Escape Noise, Renewal Models, and Stochastic Spike Arrival (27 min)
- Reading: Spiking Neuron Models (Cambridge Univ. Press) Ch. 5.1-5.3, 5.5, and 5.6
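The Poisson description of spike-train variability is easy to check numerically: for a homogeneous Poisson process the inter-spike intervals are exponential and their coefficient of variation (CV) is 1. A short sketch (rate and duration are arbitrary choices):

```python
import random

def poisson_spike_train(rate, t_max, seed=0):
    """Spike times of a homogeneous Poisson process via exponential ISIs."""
    rng = random.Random(seed)
    t, spikes = 0.0, []
    while True:
        t += rng.expovariate(rate)     # draw the next inter-spike interval
        if t > t_max:
            return spikes
        spikes.append(t)

spikes = poisson_spike_train(rate=20.0, t_max=100.0)   # 20 Hz for 100 s
isis = [b - a for a, b in zip(spikes, spikes[1:])]
mean = sum(isis) / len(isis)
var = sum((x - mean) ** 2 for x in isis) / len(isis)
print(round(var ** 0.5 / mean, 2))   # CV of the ISIs; close to 1 for Poisson
```

Comparing the measured CV of real spike trains against this Poisson baseline is the starting point of the rate-versus-timing discussion in Part A.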
- Week 9: Noisy neurons and coding: variance, autocorrelation, and stochastic resonance
- Part A – Variability of Spike trains: Rates, Timing, Noise, and the Poisson Process (41 min)
- Part B – Langevin equation and Ornstein-Uhlenbeck process (12 min)
- Part C – Comparison of Noise Models; Neural Codes and Stochastic Resonance (29 min)
- Reading: Spiking Neuron Models (Cambridge Univ. Press) Ch 5.5, 5.7-5.9
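The Ornstein-Uhlenbeck process from Part B can be simulated with the Euler-Maruyama scheme, and its stationary variance, σ²τ/2, gives a simple correctness check. Parameter values below are arbitrary:

```python
import random

def simulate_ou(tau=10.0, sigma=1.0, dt=0.01, t_max=2000.0, seed=0):
    """Euler-Maruyama integration of dV = -(V/tau) dt + sigma dW."""
    rng = random.Random(seed)
    v, samples = 0.0, []
    for _ in range(int(t_max / dt)):
        # deterministic leak plus Gaussian increment of variance sigma**2 * dt
        v += -(v / tau) * dt + sigma * dt ** 0.5 * rng.gauss(0.0, 1.0)
        samples.append(v)
    return samples

xs = simulate_ou()
var = sum(x * x for x in xs) / len(xs)
print(round(var, 1))   # empirical variance; theory: sigma**2 * tau / 2 = 5.0
```

This is the standard model of a passive membrane driven by stochastic spike arrival, the noise model compared against escape noise in Part C.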
- Week 10. Populations of Neurons and Fokker-Planck equation
- Part A – Membrane potential distribution in a homogeneous population (42 min)
- Part B – Flux across a reference potential and Fokker Planck equation (27 min)
- Part C – Threshold and reset (24 min)
- Reading: Spiking Neuron Models (Cambridge Univ. Press) Ch 5.5 and Ch 6.1-6.2
- Week 11. Population Rate models and Coding — Reverse correlations, PSTH and rapid transients
- Part A – Forward correlation, PSTH, and population response for spiking neurons; reverse correlation as optimal stimulus (25 min)
- Part B – Reverse correlation as linear filter. (32 min)
- Part C – Transient population response: Poisson neuron vs. spiking neurons (41 min)
- Reading: Spiking Neuron Models (Cambridge Univ. Press) Ch. 7 and Ch. 6.3
IV. Structured Networks: Competition, Decision, Field Equations
- Week 12. Spatially Extended Networks and Field Equations
- Part A – Single population – population rate equation. (35 min) – Look also at the slides
- Part B – From a single population to spatially coupled populations: field equations (52 min) – Look also at the slides
- Part C – Bump formation and application to head direction cells; edge enhancement (19 min) – Look also at the slides
- Reading: Spiking Neuron Models (Cambridge Univ. Press) Ch. 6.4 and Ch. 6.5 as well as Ch. 9.1
- Week 13. Perception, Decision, and Competition in Connected Populations
- Part A – Motion Perception and Decisions in Cortex; Theory of Decision Dynamics (44 min)
- Part B – Analysis of Decision Dynamics in two dimensions (13 min)
- Part C – Decision dynamics in connected populations; decisions and free will (22 min)
- Reading: X.J. Wang, Probabilistic Decision Making by Slow Reverberation in Cortical Circuits, NEURON 36:955-968 (2002)
- Week 14. Population Dynamics and Associative Memory
- Part A – Review of Associative Memories and Mean-Field Theory. (44 min)
- Part B – Multiple patterns (29 min)
- Part C – Associative Memory and Population Dynamics of Spiking Neurons (28 min)
- Reading: Hertz, Krogh, Palmer, Introduction to the Theory of Neural Computation (Addison Wesley, 1991) Chapter 2.5; as well as W. Gerstner and J. L. van Hemmen, Associative Memory in a Network of Spiking Neurons, NETWORK 2: 139-164
Recommended textbooks
- Spiking Neuron Models by W. Gerstner and W. Kistler (Cambridge Univ. Press).
- Theoretical Neuroscience by P. Dayan and L.F. Abbott (MIT Press).
- Reinforcement Learning: an Introduction by R. Sutton and A. Barto (MIT Press)
Source : Thomas Jefferson University
GM1 ganglioside slowed progression of disease in patients over at least a 2-year period
PHILADELPHIA—Treating Parkinson’s disease patients with the experimental drug GM1 ganglioside improved symptoms and slowed their progression during a two-and-a-half-year trial, Thomas Jefferson University researchers report in a new study published online November 28 in the Journal of the Neurological Sciences.
Although the precise mechanisms of action of this drug are still unclear, the drug may protect patients’ dopamine-producing neurons from dying and at least partially restore their function, thereby increasing levels of dopamine, the key neurochemical missing in the brain of Parkinson’s patients.
The research team, led by senior author Jay S. Schneider, Ph.D., Director of the Parkinson’s Disease Research Unit and Professor in the Department of Pathology, Anatomy and Cell Biology and the Department of Neurology at Jefferson, found that administration of GM1 ganglioside, a substance naturally enriched in the brain that may be diminished in Parkinson’s disease brains, acted as a “neuroprotective” and a “neurorestorative” agent to improve symptoms and over an extended period of time slow the progression of symptoms.
What’s more, once the study participants went off the drug, their disease worsened. The study enrolled 77 subjects and followed them over a 120-week period; it also followed 17 subjects who received current standard-of-care treatment for comparison.
“The drugs currently available for Parkinson’s disease are designed to treat symptoms and to improve function, but at this time there is no drug that has been shown unequivocally to slow disease progression,” said Dr. Schneider. “Our data suggest that GM1 ganglioside has the potential to have symptomatic and disease-modifying effects on Parkinson’s disease. If this is substantiated in a larger clinical study, GM1 could provide significant benefit for Parkinson’s disease patients.”
Symptoms of Parkinson’s disease, which affects over 1 million people in the United States today and is diagnosed in 60,000 adults every year, include tremors, slowness in movement, difficulty initiating movements, difficulty walking, balance problems, and decreases in speech volume and facial expression. The motor symptoms result from the death of dopamine-producing neurons in the substantia nigra; the cause of this cell death is unknown.
GM1 ganglioside is a chemical normally found in the brain as part of the outer membrane of nerve cells. It plays important roles in neuron development and survival and modulates a wide variety of cell activities. GM1 has been found to rescue damaged neurons and increase dopamine levels in pre-clinical studies, and has been suggested to have beneficial effects in other neurodegenerative conditions.
Dr. Schneider and his team made a case for the use of GM1 for Parkinson’s disease beginning in the 1980s. The pathological processes contributing to the development and progression of Parkinson’s disease are still unclear, but they appear to be multifactorial. Because GM1 has effects on many different cellular functions, it seemed a logical approach to try using a drug like GM1 ganglioside to modify the pathological processes occurring in Parkinson’s disease, rather than focusing on a specific potential disease mechanism, said Dr. Schneider.
“Instead of a magic bullet, we think of it like a magic shotgun,” he said. “This study was truly a success of translational research.”
The GM1 research started in mice, where Dr. Schneider and his team found that animals with an experimentally induced form of Parkinson’s disease and administered GM1 had significantly higher levels of dopamine in their brains and less loss of dopamine neurons than animals that did not receive GM1.
A follow-up study in a non-human primate Parkinson model found similar results: animals that received GM1 had higher levels of dopamine than animals that did not and had a significant improvement in Parkinson symptoms.
In the late 1990s, Dr. Schneider conducted a short-term study (16 weeks) using GM1 in a small number of patients. Improvement in symptoms was observed in patients who received GM1 compared to patients who received a non-active placebo. However, in order to determine if there was potential for GM1 to slow the progression of the disease, they would have to study patients over a longer period of time—which led them to this current study.
There were three main groups of subjects in this controlled, randomized, delayed-start trial funded by the National Institutes of Health: a delayed-start group, given a placebo for the first six months and then GM1 for two years; an early-start group, given GM1 from the beginning and for the duration of the study; and a comparison group, whose members agreed to be observed over the same period but took neither the drug nor a placebo, only the medications prescribed by their doctors.
The delayed start study design has been suggested to be useful for studying a drug that may have effects both on symptoms and disease progression in Parkinson’s disease.
The change over time in the Unified Parkinson’s Disease Rating Scale (UPDRS) motor score was the primary measure used to assess symptoms and disease progression in patients.
At the end of the first six months of the study, the early-start group had significant improvement in UPDRS motor scores versus a significant worsening of scores in the delayed-start group. Over the next two years, early start subjects maintained much of the initial benefit of GM1 treatment, showed relatively minor symptom progression compared to patients using standard anti-Parkinson medications, and at the end of the study, their symptoms were still less severe than at the start of the study over two years earlier.
Delayed start subjects also showed improvement of symptoms after switching to GM1 use and also showed less symptom progression over the next two years compared to the standard-of-care patients. Both groups had significant symptom worsening over the next one to two years after stopping use of GM1.
In short, GM1 appeared to improve symptoms and with extended use, slow symptom progression.
“The data from this small proof-of-concept study suggest that GM1 has the potential to have a very positive effect on the lives of Parkinson’s disease patients,” said Dr. Schneider. “We’ve been working on this for a long time and have some good ideas on how to move this forward. I think it’s important to continue to develop this therapy.”
This is a list of cognitive modeling papers solicited from a wide range of cognitive modelers, by asking them the following: “I wonder if you would do me the honor of sending me a list of your top 2-5 favorite cognitive modeling papers. I would expect that 1-3 of these would be your papers, and 1-3 would be someone else’s. I am looking for papers where someone really nailed the phenomenon, whatever it is. I would lean towards more recent papers, but oldies but goodies are ok too.”
At the bottom of this list are some of the comments received with the papers, organized by the name of the respondent. Please let me know if any of the links are broken: firstname.lastname@example.org
Abstracts, papers, chapters, and other documents are posted on this site as an efficient way to distribute reprints. The respective authors and publishers of these works retain all of the copyrights to this material. Anyone copying, downloading, bookmarking, or printing any of these materials agrees to comply with all of the copyright terms. Other than having an electronic or printed copy for fair personal use, none of these works may be reposted, reprinted, or redistributed without the explicit permission of the relevant copyright holders.
- Allopenna, P.D., Magnuson, J.S., & Tanenhaus, M.K. (1998). Tracking the time course of spoken word recognition using eye movements: Evidence for continuous mapping models. Journal of Memory and Language, 38(4), 419–439. pdf
- Anderson, J.R. (1991). Is human cognition adaptive? Behavioral and Brain Sciences, 14, 471-484. pdf
- Anderson, J.R. (1991). The Adaptive Nature of Human Categorization. Psychological Review, 98(3), 409-429. pdf
- Anderson, J.R., & Milson, R. (1989). Human Memory: An Adaptive Perspective. Psychological Review, 96(4), 703-719. pdf
- Ashby, F.G., & Alfonso-Reese, L. (1995). Categorization as probability density estimation. Journal of Mathematical Psychology, 39, 216-233. pdf
- Baayen, R.H. (2011). Corpus linguistics and naive discriminative learning. Submitted to Brazilian Journal of Applied Linguistics. pdf
- Baayen, R.H., & Hendrix, P. (2011). Sidestepping the combinatorial explosion: Towards a processing model based on discriminative learning. Abstract for the LSA workshop: Empirically examining parsimony and redundancy in usage-based models, January 2011. pdf
- Barrington, L., Marks, T.K., Hsiao, J.H.-W., & Cottrell, G.W. (2008). NIMBLE: A kernel density model of saccade-based visual memory. Journal of Vision, 8(14):17, 1-14. pdf
- Beck, J., Ma, W.J., Kiani, R., Hanks, T., Churchland, A.K., Roitman, J., Shadlen, M.N, Latham, P.E., & Pouget, A. (2008). Probabilistic population codes for Bayesian decision making. Neuron, 60, 1142-1152. pdf
- Botvinick, M., & Plaut, D.C. (2004). Doing Without Schema Hierarchies: A Recurrent Connectionist Approach to Normal and Impaired Routine Sequential Action. Psychological Review, 111(2), 395-429. pdf
- Brown, S.D., & Heathcote, A. (2008). The simplest complete model of choice reaction time: Linear ballistic accumulation. Cognitive Psychology, 57, 153-178. pdf
- Brown, G.D.A., Neath, I., & Chater, N. (2007). A Temporal Ratio Model of Memory. Psychological Review, 114(3), 539-576. pdf
- Brown, S.D., & Steyvers, M. (2009). Detecting and Predicting Changes. Cognitive Psychology, 58, 49-67. pdf
- Cadieu, C., Kouh, M., Pasupathy, A., Conner, C., Riesenhuber, M., & Poggio, T.A. (2007). A Model of V4 Shape Selectivity and Invariance. J Neurophysiol, 98, 1733-1750. pdf
- Chang, F., Dell, G.S., & Bock, K. (2006). Becoming Syntactic. Psychological Review, 113(2), 234-272. pdf
- Christiansen, M.H., Allen, J. & Seidenberg, M.S. (1998). Learning to segment speech using multiple cues: A connectionist model. Language and Cognitive Processes, 13, 221-268. pdf
- Christiansen, M.H., & Chater, N. (2001). Connectionist Psycholinguistics: Capturing the Empirical Data. Trends in Cognitive Sciences, 5(2), 82-88. pdf
- Christiansen, M.H. & Chater, N. (1999). Toward a connectionist model of recursion in human linguistic performance. Cognitive Science, 23, 157-205. pdf
- Clark, H.H. (1973). The Language-as-Fixed-Effect Fallacy: A Critique of Language statistics in Psychological Research. Journal of Verbal Learning and Verbal Behavior, 12, 335-359. pdf
- Cleeremans, A., & McClelland, J.L. (1991). Learning the structure of event sequences. Journal of Experimental Psychology: General, 120, 235-253. pdf
- Cottrell, G.W., Branson, K., and Calder, A. J. (2002) Do expression and identity need separate representations? In Proceedings of the 24th Annual Cognitive Science Society Conference, Fairfax, Va. pdf
- Cottrell, G.W., & Plunkett, K. (1994). Acquiring the mapping from meanings to sounds. Connection Science, 6(4), 379-412. pdf
- Cowell, R.A., Bussey, T.J., & Saksida, L.M. (2006). Why does brain damage impair memory? A connectionist model of object recognition memory in perirhinal cortex. Journal of Neuroscience, 26(47), 12186-12197. pdf
- Criss, A.H., & McClelland, J.L. (2006). Differentiating the differentiation models: A comparison of the retrieving effectively from memory model (REM) and the subjective likelihood model (SLiM). Journal of Memory and Language, 55, 447-460. pdf
- Daw, N.D., O’Doherty, J.P., Dayan, P., Seymour, B., & Dolan, R.J. (2006). Cortical substrates for exploratory decisions in humans. Nature, 441, 876-879. pdf
- Dawson, M.R.W. (1991). The How and Why of What Went Where in Apparent Motion: Modeling Solutions to the Motion Correspondence Problem. Psychological Review, 98(4), 569-603. pdf
- Dell, G.S., Burger, L.K., & Svec, W.R. (1997). Language Production and Serial Order: A Functional Analysis and a Model. Psychological Review, 104(1), 123-147. pdf
- Dell, G.S., Schwartz, M.F., Martin, N., Saffran, E.M., & Gagnon, D.A. (1997) Lexical Access in Aphasic and Nonaphasic Speakers. Psychological Review, 104(4), 801-838. pdf
- Dennis, S., & Humphreys, M.S. (2001). A context noise model of episodic word recognition. Psychological Review, 108(2), 452-478. pdf
- Elman, J.L. (1990). Finding Structure in Time. Cognitive Science, 14, 179-211. pdf
- Elman, J.L. (1991). Distributed Representations, Simple Recurrent Networks and Grammatical structure. Machine Learning, 7, 195-225. pdf
- Elman, J.L. (1993). Learning and development in neural networks: The importance of starting small. Cognition, 48(1), 71-99. pdf
- Fific, M., Little, D.R., & Nosofsky, R.M. (2010). Logical-Rule Models of Classification Response Times: A Synthesis of Mental-Architecture, Random-Walk, and Decision-Bound Approaches. Psychological Review, 117(2), 309-348. pdf
- Frank, T.D., van der Kamp, J., & Savelsbergh, G.J.P. (2010). On a multistable dynamic model of behavioral and perceptual infant development. Developmental Psychobiology, 52, 352–371. pdf
- French, R.M., Mareschal, D., Mermillod, M., & Quinn, P.C. (2004). The Role of Bottom-Up Processing in Perceptual Categorization by 3- to 4-Month-Old Infants: Simulations and Data. Journal of Experimental Psychology: General, 133(3), 382-397. pdf
- Gao, J., Tortell, R., & McClelland, J.L. (2011). Dynamic Integration of Reward and Stimulus Information in Perceptual Decision-Making. PloS One, 6(3), 1-21. pdf
- Goldstein, D.G., & Gigerenzer, G. (2002). Models of ecological rationality: The Recognition Heuristic. Psychological Review, 109(1), 75-90. pdf
- Grant, D.A. (1962). Testing The Null Hypothesis and the Strategy and Tactics of Investigating Theoretical Models. Psychological Review, 69(1), 54-61. pdf
- Griffiths, T.L., Steyvers, M., & Firl, A. (2007). Google and the Mind: Predicting Fluency With PageRank. Psychological Science, 18(12), 1069-1076. pdf
- Griffiths, T.L., Steyvers, M., & Tenenbaum, J.B. (2007). Topics in Semantic Representation. Psychological Review, 114(2), 211-244. pdf
- Griffiths, T.L., & Tenenbaum, J.B. (2006). Optimal predictions in everyday cognition. Psychological Science, 17(9), 767–773. pdf
- Gupta, P. (2008). The Role of Computational Models in Investigating Typical and Pathological Behaviors. Seminars in Speech and Language, 29(3), 211-225. pdf
- Gureckis, T.M., & Love, B.C. (2010). Direct Associations or Internal Transformations? Exploring the Mechanisms Underlying Sequential Learning Behavior. Cognitive Science, 34, 10-50. pdf
- Hahn, U., & Nakisa, R.C. (2000). German Inflection: Single Route or Dual Route?. Cognitive Psychology, 41, 313-360. pdf
- Henson, R.N.A. (1998). Short-term memory for serial order: The start-end model. Cognitive Psychology, 36, 73-137. pdf
- Hinton, G.E. (1986). Learning distributed representations of concepts. In Proceedings of the Eighth Annual Conference of the Cognitive Science Society, Amherst, MA.
- Hinton, G.E., & Nowlan, S.J. (1987). How Learning Can Guide Evolution. Complex Systems, 1, 495-502. pdf
- Hintzman, D.L. (1986). “Schema abstraction” in a Multiple-Trace Memory Model. Psychological Review, 93, 411-428. pdf
- Hopfield, J.J. (1982). Neural networks and physical systems with emergent collective computational abilities. Proc. Natl. Acad. Sci. USA, 79, 2554-2558. pdf
- Hsiao, J.H.-W., Shahbazi, R. & Cottrell, G.W. (2008). Hemispheric Asymmetry in Visual Perception Arises from Differential Encoding beyond the Sensory Level. In Proceedings of the 30th Annual Meeting of the Cognitive Science Society. pdf
- Huber, D.E., Shiffrin, R.M., Lyle, K.B., & Ruys, K.I. (2001). Perception and preference in short-term word priming. Psychological Review, 108, 149-182. pdf
- Jiang, X., Rosen, E., Zeffiro, T., VanMeter, J., Blanz, V., & Riesenhuber, M. (2006). Evaluation of a Shape-Based Model of Human Face Discrimination Using fMRI and Behavioral Techniques. Neuron, 50, 159-172. pdf
- Johns, B.T., & Jones, M.N. (2010). Evaluating the random representation assumption of lexical semantics in cognitive models. Psychonomic Bulletin & Review, 17, 662-672. pdf
- Jones, M., & Love, B. (2010). Bayesian Fundamentalism or Enlightenment? On the Explanatory Status and Theoretical Contributions of Bayesian Models of Cognition (Unpublished Draft). Behavioral and Brain Sciences. pdf
- Jones, M.N., & Mewhort, D.J.K. (2007). Representing word meaning and order information in a composite holographic lexicon. Psychological Review, 114, 1-37. pdf
- Jordan, M.I. (1986). Serial Order: A parallel distributed processing approach. UCSD Cognitive Science Technical Report 8604. html (abstract)
- Jordan, M.I., & Rumelhart, D.E. (1992). Forward models: Supervised learning with a distal teacher. Cognitive Science, 16, 307-354. pdf
- Kanan, C.M., Tong, M.H., Zhang, L., & Cottrell, G.W. (2009). SUN: Top-down saliency using natural statistics. Visual Cognition, 17(6-7), 979-1003. pdf
- Kanerva, P. (1985) Parallel Structures in Human and Computer Memory. Cognitiva 85, Paris, France. pdf
- Kelso, J.A.S. (2008). Haken-Kelso-Bunz model. Scholarpedia, 3(10):1612. html
- Kemp, C., & Tenenbaum, J.B. (2008). The discovery of structural form. Proc. Natl. Acad. Sci. U.S.A., 105, 10687–10692. pdf
- Kemp, C., Perfors, A., & Tenenbaum, J.B. (2007). Learning overhypotheses with hierarchical Bayesian methods. Developmental Science: Bayesian Special Section, 10(3), 307-321. pdf
- Kohonen, T. (1982). Self-organized formation of topologically correct feature maps. Biological Cybernetics, 43, 59-69. pdf
- Kording, K.P., Tenenbaum, J.B., & Shadmehr, R. (2007). The dynamics of memory as a consequence of optimal adaptation to a changing body. Nature Neuroscience, 10, 779–786. pdf
- Kruschke, J.K. (1992). ALCOVE: An Exemplar-Based Connectionist Model of Category Learning. Psychological Review, 99(1), 22-44 pdf
- Kruschke, J.K. (2006). Locally Bayesian Learning with Applications to Retrospective Revaluation and Highlighting. Psychological Review, 113(4), 677-699. pdf
- Landauer, T.K., & Dumais, S.T. (1997). A solution to Plato’s problem: The Latent Semantic Analysis theory of the acquisition, induction, and representation of knowledge. Psychological Review, 104, 211-240. pdf
- Larkey, L.B., & Love, B.C. (2003). CAB: Connectionist analogy builder. Cognitive Science, 27, 781-794. pdf
- Li, Z. (1999). Contextual influences in V1 as a basis for pop out and asymmetry in visual search. Proc. Natl. Acad. Sci. USA, 96, 10530-10535. pdf
- Li, P., Farkas, I., & MacWhinney, B. (2004). Early lexical development in a self-organizing neural network. Neural Networks, 17, 1345–1362. pdf
- MacDonald, M.C. & Christiansen, M.H. (2002). Reassessing working memory: A comment on Just & Carpenter (1992) and Waters & Caplan (1996). Psychological Review, 109, 35-54. pdf
- MacWhinney, B., & Li, P. (2008). Neurolinguistic Computational Models. In B. Stemmer & H. Whitaker (Eds.), Handbook of the neuroscience of language (pp. 229-236). London: Academic Press. pdf
- McCleery, J.P., Zhang, L., Ge, L., Wang, Z., Christiansen, E.M., Lee, K., & Cottrell, G.W. (2008). The roles of visual expertise and visual input in the face inversion effect: Behavioral and neurocomputational evidence. Vision Research, 48, 703-715. pdf
- McClelland, J.L. (2009). The place of modeling in cognitive science. Topics in Cognitive Science, 1(1), 11-38. pdf
- McClelland, J.L., McNaughton, B.L., & O’Reilly, R.C. (1995). Why There Are Complementary Learning Systems in the Hippocampus and Neocortex: Insights From the Successes and Failures of Connectionist Models of Learning and Memory. Psychological Review, 102(3), 419-457. pdf
- McClelland, J.L., & Elman, J.L. (1986). The TRACE Model of Speech Perception. Cognitive Psychology, 18, 1-86. pdf
- McClelland, J.L., & Rumelhart, D.E. (1981). An Interactive Activation Model of Context Effects in Letter Perception: Part 1. An Account of Basic Findings. Psychological Review, 88(5), 375-407. pdf
- McClelland, J.L., & Rogers, T.T. (2003). The parallel distributed processing approach to semantic cognition. Nature Reviews Neuroscience, 4, 310-322. pdf
- McRae, K., de Sa, V.R., & Seidenberg, M.S. (1993). Modeling Property Intercorrelations in Conceptual Memory. In Proceedings of the 15th Annual Meeting of the Cognitive Science Society, 729-734. pdf
- McRae, K., de Sa, V.R., & Seidenberg, M.S. (1997). On the Nature and Scope of Featural Representations of Word Meaning. Journal of Experimental Psychology: General, 126(2), 99-130. pdf
- McRae, K., Spivey-Knowlton, M.J., & Tanenhaus, M.K. (1998). Modeling the influence of thematic fit (and other constraints) in on-line sentence comprehension. Journal of Memory and Language, 38, 283-312. pdf
- Metcalfe, J. (1993). Novelty monitoring, metacognition, and control in a composite holographic associative recall model: Implications for Korsakoff amnesia. Psychological Review, 100(1), 3-22. pdf
- Miikkulainen, R. (1997). Dyslexic and Category-Specific Aphasic Impairments in a Self-Organizing Feature Map Model of the Lexicon. Brain and Language, 59, 334–366. pdf
- Mitchell, M. (1998). Complex-Systems Perspective on the “Computation vs. Dynamics” debate in Cognitive Science. In M.A. Gernsbacher, & S.J. Derry (Eds.), Proceedings of the Twentieth Annual Conference of the Cognitive Science Society. Mahwah, NJ: Lawrence Erlbaum Associates. pdf
- Monaghan, P., Christiansen, M.H., & Fitneva, S.A. (2011). The arbitrariness of the sign: Learning advantages from the structure of the vocabulary. Journal of Experimental Psychology: General. pdf
- Montague, P.R., Dayan, P., & Sejnowski, T.J. (1996). A framework for mesencephalic dopamine systems based on predictive Hebbian learning. Journal of Neuroscience, 16, 1936-1947. pdf
- Mozer, M.C. (2002). Frames of Reference in Unilateral Neglect and Visual Perception: A Computational Perspective. Psychological Review, 109(1), 156-185. pdf
- Munakata, Y., McClelland, J.L., Johnson, M.H., & Siegler, R.S. (1997). Rethinking Infant Knowledge: Toward an Adaptive Process Account of Successes and Failures in Object Permanence Tasks. Psychological Review, 104(4), 686-713. pdf
- Nolfi, S., Elman, J.L., & Parisi, D. (1994). Learning and evolution in neural networks. Adaptive Behavior, 3, 5-28. pdf
- Norman, K.A. & O’Reilly, R.C. (2003). Modeling Hippocampal and Neocortical Contributions to Recognition Memory: A Complementary Learning Systems Approach. Psychological Review, 110, 611-646. pdf
- Nosofsky, R.M. (1984). Choice, Similarity, and the Context Theory of Classification. Journal of Experimental Psychology: Learning, Memory, and Cognition, 10(1), 104-114. pdf
- Nosofsky, R.M. (1986). Attention, similarity, and the identification-categorization relationship. Journal of Experimental Psychology: General, 115(1), 39-57. pdf
- Nosofsky, R.M., & Palmeri, T.J. (1997). An Exemplar-Based Random Walk Model of Speeded Classification. Psychological Review, 104(2), 266-300. pdf
- Nosofsky, R.M., & Palmeri, T.J. (1998). A rule-plus-exception model for classifying objects in continuous-dimension spaces. Psychonomic Bulletin & Review, 5(3), 345-369. pdf
- Oaksford, M. & Chater N. (1994). A Rational Analysis of the Selection Task as Optimal Data Selection. Psychological Review, 101(4), 608-631. pdf
- Oaksford, M., & Chater, N. (2009). Précis of Bayesian Rationality: The Probabilistic Approach to Human Reasoning. Behavioral and Brain Sciences, 32(1), 69-120. pdf
- O’Doherty, J.P., Hampton, A., & Kim, H. (2007) Model-Based fMRI and Its Application to Reward Learning and Decision Making. Ann. N. Y. Acad. Sci., 1104, 35–53. pdf
- O’Reilly, R.C. & Frank, M.J. (2006). Making Working Memory Work: A Computational Model of Learning in the Frontal Cortex and Basal Ganglia. Neural Computation, 18, 283-328. pdf
- O’Toole, A.J., Deffenbacher, K.A., Valentin, D., & Abdi, H. (1994). Structural aspects of face recognition and the other-race effect. Memory and Cognition, 22(2), 208-224. pdf
- Otto, A.R., & Love, B.C. (2010). You don’t want to know what you’re missing: When information about forgone rewards impedes dynamic decision making. Judgment and Decision Making, 5(1), 1-10. pdf
- Palmeri, T.J. (1999). Learning Categories at Different Hierarchical Levels: A Comparison of Category Learning Models. Psychonomic Bulletin & Review. pdf
- Palmeri, T.J., & Gauthier, I. (2004). Visual object understanding. Nature Reviews Neuroscience, 5, 291-303. pdf
- Plaut, D.C., McClelland, J.L., Seidenberg, M.S., & Patterson, K. (1996). Understanding Normal and Impaired Word Reading: Computational Principles in Quasi-Regular Domains. Psychological Review, 103, 56-105. pdf
- Plaut, D.C., & Shallice, T. (1993). Deep dyslexia: A case study of connectionist neuropsychology. Cognitive Neuropsychology, 10(5), 377-500. pdf
- Pouget, A., Dayan, P., & Zemel, R.S. (2003). Inference and computation with population codes. Annual Review of Neuroscience, 26, 381-410. pdf
- Purcell, B.A., Heitz, R.P., Cohen, J.Y., Schall, J.D., Logan, G.D., & Palmeri, T.J. (2010). Neurally Constrained Modeling of Perceptual Decision Making. Psychological Review, 117(4), 1113-1143. pdf
- Raaijmakers, J.G.W., & Shiffrin, R.M. (1980). SAM: A theory of probabilistic search of associative memory. In G.H. Bower (Ed.), The Psychology of Learning and Motivation, 14, 207-262. New York: Academic Press. pdf
- Raaijmakers, J.G.W., & Shiffrin, R.M. (1981). Search of associative memory. Psychological Review, 88, 93-134. pdf
- Ramscar, M., Dye, M., Popick, H.M. & O’Donnell-McCarthy, F. (2010) How children learn to value numbers: Information structure and the acquisition of numerical understanding. pdf
- Ramscar, M., Suh, E., & Dye, M. (2010). For the price of a song: How pitch category learning comes at a cost to absolute frequency information. pdf
- Ramscar, M., Yarlett, D., Dye, M., Denny, K., & Thorpe, K. (2010) The Effects of Feature-Label-Order and their implications for symbolic learning. Cognitive Science, 34(6), 909-957. pdf
- Ratcliff, R., Clark, S.E., & Shiffrin, R.M. (1990). The list-strength effect: I. Data and discussion. Journal of Experimental Psychology: Learning, Memory, and Cognition, 16(2), 163-178. html (abstract)
- Reed, S.K. (1972). Pattern recognition and categorization. Cognitive Psychology, 3, 382-407. pdf
- Ritter, F.E. (2003) Social processes in validation: Comments on Grant (1962) and Roberts and Pashler (2000). Symposium on Model Fitting and Parameter Estimation at the ACT-R Workshop. pdf
- Ritter, F.E., & Bibby, P.A. (2008). Modeling How, When, and What Is Learned in a Simple Fault-Finding Task. Cognitive Science, 32(5), 862-892. pdf
- Ritter, F.E., Schoelles, M.J., Quigley, K.S., & Klein, L.C. (2010). Determining the number of simulation runs: Treating simulations as theories by not sampling their behavior. To appear in: S. Narayanan and L. Rothrock (Eds.), Human-in-the-loop Simulations: Methods and Practice. pdf
- Roberts, S., & Pashler, H. (2000). How Persuasive Is a Good Fit? A Comment on Theory Testing. Psychological Review, 107(2), 358-367. pdf
- Robinson, A.E., Hammon, P.S., & de Sa, V.R. (2007). Explaining brightness illusions using spatial filtering and local response normalization. Vision Research, 47(12), 1631-1644. pdf
- Rogers, T.T., Lambon Ralph, M.A., Garrard, P., Bozeat, S., McClelland, J.L., Hodges, J.R., & Patterson, K. (2004). The structure and deterioration of semantic memory: A neuropsychological and computational investigation. Psychological Review, 111, 205-235. pdf
- Rumelhart, D.E., & McClelland, J.L. (1982). An Interactive Activation Model of Context Effects in Letter Perception: Part 2. The Contextual Enhancement Effect and Some Tests and Extensions of the Model. Psychological Review, 89(1), 60-94. pdf
- Rumelhart, D.E., & McClelland, J.L. (1986). On learning the past tenses of English verbs. In J.L. McClelland, D.E. Rumelhart, and the PDP research group (Eds.), Parallel distributed processing: Explorations in the microstructure of cognition. Volume II. Cambridge, MA: MIT Press. Chapter 18, pp. 216-271. pdf
- Rumelhart, D.E., Hinton, G.E., & Williams, R.J. (1986). Learning representations by back-propagating errors. Nature, 323, 533-536. pdf
- Sanborn, A.N., Griffiths, T.L., & Navarro, D.J. (2010). Rational approximations to rational models: Alternative algorithms for category learning. Psychological Review, 117(4), 1144-1167. pdf
- Schneider, W., & Shiffrin, R.M. (1977). Controlled and automatic human information processing: I. Detection, search, and attention. Psychological Review, 84, 1-66. pdf
- Shenoy, P., & Yu, A.J. (2011). Rational decision-making in inhibitory control. Frontiers in Human Neuroscience, 5, 48. doi: 10.3389/fnhum.2011.00048. pdf
- Shepard, R.N. (1984). Ecological Constraints on Internal Representation: Resonant Kinematics of Perceiving, Imagining, Thinking, and Dreaming. Psychological Review, 91, 417-447. pdf
- Shepard, R.N. (1987). Toward a Universal Law of Generalization for Psychological Science. Science, 237(4820), 1317-1323. pdf
- Shi, L., Griffiths, T.L., Feldman, N.H, & Sanborn, A.N. (2010). Exemplar models as a mechanism for performing Bayesian inference. Psychonomic Bulletin & Review, 17(4), 443-464. pdf
- Shiffrin, R.M., & Schneider, W. (1977). Controlled and automatic human information processing: II. Perceptual learning, automatic attending, and a general theory. Psychological Review, 84, 127-190. html (abstract)
- Shiffrin, R.M., & Steyvers, M. (1997). A model for recognition memory: REM-retrieving effectively from memory. Psychonomic Bulletin & Review, 4(2), 145-166. pdf
- Shiffrin, R.M., Lee, M.D., Kim, W., & Wagenmakers, E.-J. (2008). A Survey of Model Evaluation Approaches with a Tutorial on Hierarchical Bayesian Methods. Cognitive Science, 32, 1248-1284. pdf
- Shiffrin, R.M., Ratcliff, R., & Clark, S.E. (1990). The list-strength effect: II. Theoretical mechanisms. Journal of Experimental Psychology: Learning, Memory, and Cognition, 16(2), 179-195. html (abstract)
- Shultz, T.R. (1998). A computational analysis of conservation. Developmental Science, 1, 103-126. pdf
- Shultz, T.R. (2006). Constructive learning in the modeling of psychological development. In Y. Munakata & M. H. Johnson (Eds.), Processes of change in brain and cognitive development: Attention and performance XXI, 61-86. Oxford: Oxford University Press. pdf
- Shultz, T.R., & Bale, A.C. (2006). Neural networks discover a near-identity relation to distinguish simple syntactic forms. Minds and Machines, 16, 107-139. pdf
- Shultz, T.R., & Lepper, M.R. (1996). Cognitive dissonance reduction as constraint satisfaction. Psychological Review, 103, 219-240. pdf
- Shultz, T.R., Rivest, F., Egri, L., Thivierge, J.-P., & Dandurand, F. (2007). Could knowledge-based neural learning be useful in developmental robotics? The case of KBCC. International Journal of Humanoid Robotics, 4, 245–279. pdf
- Sirois, S., & Shultz, T.R. (1998). Neural network modeling of developmental effects in discrimination shifts. Journal of Experimental Child Psychology, 71, 235-274. pdf
- Smith, J.M. (1987). When learning guides evolution. Nature, 329, 761-762. pdf
- Spivey, M.J., & Dale, R. (2004). On the continuity of mind: Toward a dynamical account of cognition. In B.H. Ross (Ed.), Psychology of learning and motivation, 45, 85-142. Amsterdam: Elsevier. pdf
- St. Clair, M.C., Monaghan, P., & Christiansen, M.H. (2010). Learning grammatical categories from distributional cues: Flexible frames for language acquisition. Cognition, 116, 341-360. pdf
- Stewart, N., Chater, N., & Brown, G.D.A. (2006). Decision by sampling. Cognitive Psychology, 53, 1-26. pdf
- Steyvers, M., Griffiths, T.L., & Dennis, S. (2006). Probabilistic inference in human semantic memory. Trends in Cognitive Science, 10, 327-334. pdf
- Steyvers, M., Lee, M.D., Miller, B., & Hemmer, P. (2009). The Wisdom of Crowds in the Recollection of Order Information. In Y. Bengio and D. Schuurmans and J. Lafferty and C. K. I. Williams and A. Culotta (Eds.), Advances in Neural Information Processing Systems, 22, 1785-1793. MIT Press. pdf
- St. John, M.F., & McClelland, J.L. (1990). Learning and Applying Contextual Constraints in Sentence Comprehension. Artificial Intelligence, 46, 217-257. pdf
- Tabor, W. (2009). A dynamical systems perspective on the relationship between symbolic and non-symbolic computation. Cognitive Neurodynamics, 3(4), 415-427. pdf
- Tenenbaum, J.B. (1999). Bayesian modeling of human concept learning. In M.S. Kearns, S.A. Solla, & D.A. Cohn (Eds.), Advances in Neural Information Processing Systems, 11. Cambridge, MA: MIT Press. pdf
- Tenenbaum, J.B. (2000). Rules and Similarity in Concept Learning. In S.A. Solla, T.K. Leen, & K.-R. Muller (Eds.), Advances in Neural Information Processing Systems, 12, 59-65. Cambridge, MA: MIT Press. pdf
- Tenenbaum, J.B., & Griffiths, T.L. (2001). Generalization, similarity, and Bayesian inference. Behavioral and Brain Sciences, 24, 629-640. pdf
- Tenenbaum, J.B., & Griffiths, T.L. (2003). Theory-based causal inference. In S. Becker, S. Thrun, & K. Obermayer (Eds.), Advances in Neural Information Processing Systems, 15, 35-42. Cambridge, MA: MIT Press. pdf
- Tenenbaum, J.B., Griffiths, T.L., & Kemp, C. (2006). Theory-based Bayesian models of inductive learning and reasoning. Trends in Cognitive Sciences, 10(7), 309–318. pdf
- Thomas, M.S.C., & Karmiloff-Smith, A. (2003). Modeling Language Acquisition in Atypical Phenotypes. Psychological Review, 110(4), 647-682. pdf
- Todd, P.M., & Gigerenzer, G. (2000). Précis of Simple heuristics that make us smart. Behavioral and Brain Sciences, 23, 727-741. pdf
- Tong, M.H., Joyce, C.A., & Cottrell, G.W. (2008). Why is the fusiform face area recruited for novel categories of expertise? A neurocomputational investigation. Brain Research, 1202, 14-24. pdf
- Torralba, A., Oliva, A., Castelhano, M.S., & Henderson, J.M. (2006). Contextual guidance of eye movements and attention in real-world scenes: The role of global features on object search. Psychological Review, 113(4), 766-786. pdf
- Trommershauser, J., Maloney, L.T., & Landy, M.S. (2008). Decision making, movement planning and statistical decision theory. Trends in Cognitive Sciences, 12, 291–97. pdf
- Tversky, A. (1977). Features of Similarity. Psychological Review, 84(4), 327-352. pdf
- Tversky, A. (2004). Preference, Belief, and Similarity: Selected Writings. E. Shafir (Ed.). Cambridge, MA: MIT Press. pdf
- van Rooij, I., Bongers, R.M., & Haselager, W.F.G. (2002). A non-representational approach to imagined action. Cognitive Science, 26, 345-375. pdf
- Vul, E., Frank, M., Alvarez, G., & Tenenbaum, J. (2009). Explaining human multiple object tracking as resource-constrained approximate inference in a dynamic probabilistic model. In Y. Bengio, D. Schuurmans, J. Lafferty, C. K. I. Williams, A. Culotta (Eds.), Advances in Neural Information Processing Systems, 22, 1955–1963. pdf
- Weiss, Y., Simoncelli, E.P., & Adelson, E.H. (2002). Motion illusions as optimal percepts. Nature Neuroscience, 5(6), 598-604. pdf
- Woollams, A., Lambon Ralph, M.A., Plaut, D.C., & Patterson, K. (2007). SD-squared: On the association between semantic dementia and surface dyslexia. Psychological Review, 114, 316-339. pdf
- Yu, A.J., & Dayan, P. (2005). Uncertainty, neuromodulation, and attention. Neuron, 46, 681-692. pdf
- Yu, A.J., Dayan, P., & Cohen, J.D. (2009). Dynamics of attentional selection under conflict: Toward a rational Bayesian account. Journal of Experimental Psychology: Human Perception and Performance, 35, 700-717. pdf
- Zhang, L., Tong, M.H., Marks, T.K., Shan, H., & Cottrell, G.W. (2008). SUN: A Bayesian Framework for Saliency Using Natural Statistics. Journal of Vision, 8(7):32, 1-20. pdf
Remarks by Researchers in Cognitive Science
- Dell et al (1997): “A brilliant example of how models can be used to make predictions for new data collection (for specific individuals)!”
- St. Clair et al (2010): “A recent paper of mine exploring limitations of a past model (Mintz) and proposing a new model of how kids might learn about lexical categories (combining corpus analyses of child-directed speech and connectionist models).”
- MacDonald, M.C. & Christiansen, M.H. (2002). Reassessing working memory: A comment on Just & Carpenter (1992) and Waters & Caplan (1996). Psychological Review, 109, 35-54. “showed how working memory capacity effects can be explained by experience with language”
- Christiansen, M.H., Allen, J. & Seidenberg, M.S. (1998). Learning to segment speech using multiple cues: A connectionist model. Language and Cognitive Processes, 13, 221-268. “first comprehensive multiple-cue integration model in the context of word segmentation”
- Christiansen, M.H. & Chater, N. (1999). Toward a connectionist model of recursion in human linguistic performance. Cognitive Science, 23, 157-205.
“A model of complex recursion before Hauser, Chomsky & Fitch made the topic popular again”
- Elman (1990), Cleeremans & McClelland (1991), and Munakata et al (1997): “All of these have to do with the SRN and in each case, I feel that something has been nailed indeed: the general idea and power of limited recurrence in the first paper; the application to sequence learning in the second; and to cognitive development in the third.”
- Hinton (1986): “This is the first connectionist paper I read. This is truly seminal I think in showing key concepts from distributed representations to functional similarity as well as from analyzing hidden units activity to watching abstract concepts emerge out of mere processing.”
- Plaut & Shallice (1993): “This is a truly insightful paper about how you can get double, graded dissociations out of a single system. I always use it in class as an illustration of the pitfalls of standard neuropsychological thinking.”
- My current favorites are Elman (1991, grammatical structure SRN paper), Plaut & Shallice (1993, Cog Neuroscience), and McClelland, McNaughton, & O’Reilly (1995). For my papers, I like Chang, Dell & Bock (2006) and Dell, Burger & Svec (1997).
- As for the paper that you’ve requested (Dell, Schwartz et al., 1997), the pdf is already on your list, as are these other papers. So, I’m afraid that I can’t really expand on your list. I’m still an old connectionist fogey.
- Dennis & Humphreys (2001): “This is the paper that nailed how they ought to be modified ;-).”
- Elman (1991): “It was just such a different way of looking at language structure and really emphasised the power of statistics.”
- Henson (1998): “Conclusively shows that simple chaining models cannot be an accurate portrayal of serial recall and proposes the Start End model.”
- Landauer & Dumais (1997): “This one also was just very surprising. It taught me that toy examples are not good enough. Sometimes what we think is a complicated process is really just big data. And you can’t see it if your corpus is restricted to ‘man eats. woman eats …’.”
- Ratcliff et al (1990) and Shiffrin et al (1990): “It isn’t often that one really has to concede defeat and move on. There is normally some kind of wiggle room. The List Strength Effect just didn’t give any. The Global Matching Models were all demonstrably wrong and had to be fundamentally modified.”
- French et al (2004): “This paper has had a fair amount of success and clearly illustrates, I think, the importance of modeling as a tool for understanding human behavior.”
- Kanerva (1985): “This is a simple, absolutely clear presentation of sparse distributed memory. In fact it is the clearest description of SDM that I know of. His book, Sparse Distributed Memory, ‘mathematized’ everything in an attempt to put it all on a formal footing and, in so doing, lost a lot of potential fans. This paper is simple, clear, and lends itself perfectly to implementation. It’s impossible to find on the Web, however.”
- Nosofsky (1986): “I think there’s no better example of a model that nailed the phenomenon of interest.”
- Nosofsky (1984): “His GCM model gave rise to everything that has followed in the study of categorization and it’s still a valid contender to this day. But, more importantly, it changed the way that people model by forcing them to consider both representation (as revealed by MDS in this case) and the process (feature attention) simultaneously. Subsequently, John Kruschke demonstrated that GCM is mathematically identical to his ALCOVE neural network model.”
- Roberts & Pashler (2000): “This paper is a good conceptual overview of why fit is not enough and motivates model selection statistics (proper model testing).”
- Sanborn et al (2010): “This paper is a really nice linkage of process and rational models with intuitive explanations of Gibbs sampling and particle filters.”
- Daw et al (2006): “This is a good introduction to Reinforcement Learning (RL) models and using cognitive models to interpret fMRI data.”
- Mitchell et al (2008): “This is a fun paper with a cool twist on ‘mind reading’.”
- Shiffrin et al (2008): “This is a nice and easy to understand overview of Bayesian methods (model selection).”
- McClelland & Rumelhart (1981): “It’s a classic, and probably nobody covers it anymore, and it probably wins an award as the model that was recycled more times than any other, with its units relabeled, to explain other phenomena.”
- Najemnik & Geisler (2005): “It takes some decoding to figure out but it seems like a really pretty and believable Bayesian account that takes into account limitations of the visual system (falloff of acuity with retinal eccentricity).”
- Huber et al (2001): “The model kept predicting correctly in study after study even though our intuitions kept leading us to expect other results.”
- Allopenna et al (1998): “They used McClelland and Elman’s (1986) TRACE interactive-activation model to make remarkable time-course predictions of eye-movement patterns during spoken word recognition in a visual context.”
- McRae et al (1998): “We built a normalized version of an interactive-activation network to simulate the nonlinearities inherent in “the garden-path effect”, where syntactically ambiguous sentences cause slowed reading times, due to a constraint-satisfaction process (rather than a stage-based modular syntax process).”
Wanted to add
- Dayan, P, Hinton, GE, Neal, RM & Zemel, RS (1995). The Helmholtz machine. Neural Computation, 7, 889-904. pdf
One Is the Quirkiest Number: The Freedom, and Perils, of Living Alone
IF there is any doubt that we’re living in the age of the individual, a look at the housing data confirms it. For millenniums, people have huddled together, in caves, in mud huts, in split-levels and Cape Cods. But these days, 1 in every 4 American households is occupied by someone living alone; in Manhattan, mythic land of the singleton, the number is nearly 1 in 2.
Lately, along with the compelling statistics, a stealth P.R. campaign seems to be taking place, as though living alone were a political candidate trying to burnish its image. Two notable examples: Eric Klinenberg, a sociology professor at New York University, recently published “Going Solo: The Extraordinary Rise and Surprising Appeal of Living Alone,” a mash note to domestic solipsism, which he calls “an incredible social experiment” that reveals “the human species is developing new ways to live.” And last fall, an Atlantic magazine cover story examined the rise of the single woman, a piece in which the author Kate Bolick fondly invoked the Barbizon Hotel and visited an Amsterdam apartment complex for women committed to living solo.
“I glamorized people who lived alone — I really wanted it for myself,” said Ms. Bolick, who is in her late 30s and has her own apartment in Brooklyn Heights.
True, the benefits of living alone are many: freedom to come and go as you please; the space and solitude to recharge in a plugged-in world; kingly or queenly domain over the bed.
Still, as TV has taught us, the single-occupant home can be a breeding ground for eccentricities. Think of Claire Danes’s C.I.A. employee in “Homeland,” who turns her Georgetown one-bedroom into a control bunker for an ad hoc spying operation. Or Kramer on “Seinfeld,” washing vegetables in the shower or deciding, on a whim, to ditch his furniture in favor of “levels.”
In a sense, living alone represents the self let loose. In the absence of what Mr. Klinenberg calls “surveilling eyes,” the solo dweller is free to indulge his or her odder habits — what is sometimes referred to as Secret Single Behavior. Feel like standing naked in your kitchen at 2 a.m., eating peanut butter from the jar? Who’s to know?
Amy Kennedy, 28, a schoolteacher who has a two-bedroom apartment in High Point, N.C., all to herself, calls it living without “social checks and balances.”
The effects are noticeable, she said: “I’ve been living alone for six years, and I’ve gotten quirkier and quirkier.”
Among her domestic oddities: running in place during TV commercials; speaking conversational French to herself while making breakfast (she listens to a language CD); singing Journey songs in the shower; and removing only the clothes she needs from her dryer, thus turning it into a makeshift dresser.
“The entire apartment is your room,” Ms. Kennedy said, by way of explanation. “If I leave a bra on the kitchen table, I don’t think much about it.”
In the experience of Ms. Bolick, who has also lived with roommates and boyfriends, living alone breeds “a very indulgent work style.”
“I can work 24/7 for days on end, and I can let my whole apartment fall apart on me and not wash the dishes,” she continued. “And nobody cares.”
Ms. Bolick even has a home-alone outfit. “I have this pair of white flax bloomers that go down to my knee. They’re like pantaloons. They’re so weird,” she said. “If someone comes over, I change out of them.”
Even boyfriends have never seen her in them? “No, no,” Ms. Bolick said, laughing. “That would be the height of intimacy if someone saw those.”
What emerges over time, for those who live alone, is an at-home self that is markedly different — in ways big and small — from the self they present to the world. We all have private selves, of course, but people who live alone spend a good deal more time exploring them.
Rod Sherwood’s living-alone indulgences center on his sleep cycle. A music manager and record producer who works from his railroad apartment in Brooklyn, Mr. Sherwood, 40, said he’ll go to bed at 2 a.m. one night, and then retire later and later by increments, “until I go to bed when the sun comes up.”
He mused: “I wondered how many times in a year I repeat that cycle? I’d be interested to chart it.”
Ronni Bennett, who is 70 and writes a blog on aging, timegoesby.net, has lived alone for all but 10 or so years of her adult life. She said she has adopted a classic living-alone habit: “I never, ever close the bathroom door.”
Leaving it open “is one of those little habits that makes no difference most of the time,” she said. But when guests visit her two-bedroom apartment outside Portland, Ore., she added: “I have to make huge mental efforts to remind myself to close the door. Sometimes I think, Just put a Post-it note by the bathroom door. Well, wait, I don’t want them to see that.”
Like many, Ms. Bennett also talks to herself — or, rather, to her cat. “I’ll try things out on him when I’m writing,” she said. “He’ll look at me like he’s actually listening. I wouldn’t discuss what I’m writing with my cat if someone were around.”
Other people say their greatest eccentricities emerge in the kitchen. Eating can be a personal, even self-conscious act, and in the absence of a roommate or partner, unconventional approaches to food emerge. Drinking from the carton is only the start.
“I very rarely have what you would call ‘meals,’ ” said Steve Zimmer, a computer programmer in his 40s who lives by himself in a Manhattan loft. Instead of adhering to regular meals or meal times, he said, he makes “six or seven” trips an hour to the refrigerator and subsists largely on cereal.
Ms. Bolick, the magazine writer, grazes on nuts and seeds, something that was pointed out to her recently when she shared a house with a married couple in Los Angeles. The husband told Ms. Bolick she would be fine in an earthquake because she eats “the equivalent of emergency rations.”
Sasha Cagen, the founder of the Web site quirkyalone.net, is a kind of unofficial spokeswoman and lobbyist for singletons. Ms. Cagen, who has had roommates in the past but now lives alone, in Oakland, Calif., said that rather than cooking a big meal for one, an unappealing prospect, she fashions dinner out of “discrete objects”: “I’m often, like, here’s a sweet potato,” she said. “Let me throw this in the oven with aluminum foil and eat it.”
It’s a solution to the problem that many face with food spoilage. But for Ms. Cagen, those makeshift dinners also underscore one of the pleasures of going solo. “There’s a freedom to really let loose and be yourself when you live alone that a lot of other people may envy,” she said.
None more so than those who have never experienced it. Take Chad Griffith, 29, a Brooklyn-based photographer, who went straight from his parents’ house to living with roommates during and immediately after college to sharing an apartment with his fiancée.
“I haven’t lived alone a day in my life,” Mr. Griffith said.
Instead, he observes what he calls “The Day of Chad,” something he eagerly anticipates whenever his girlfriend goes out of town. “It consists of me doing the dumbest things possible,” he said. “I would feel guilty if anyone else saw them.”
What are some examples?
“I’ve been known to drink Champagne in the shower at 8 a.m.,” Mr. Griffith said. “I’ll play Madden NFL Football for 10 hours straight, eat a French bread pizza for every meal of the day.”
But living alone is a skill that takes management, and Mr. Griffith has found he isn’t very good at it. The Days of Chad, he said, are about all he can handle.
“I literally have zero self-control,” he said. “If I lived alone and didn’t have somebody to monitor me, I’d be a fat, out-of-work alcoholic.”
For people who are comfortable and even good at living alone, there is often another concern: a fear that the concrete has set, so to speak, on their domestic habits and that it will be difficult to go back to living with someone else. “It’s definitely something that worries me,” Ms. Kennedy said. “I can’t take the quirks back.”
The longer she lives alone, she said, the less flexible she becomes — and the less considerate of others’ needs. “If I go on vacation with a group of friends, I feel a little overwhelmed,” she said. “I’ve got to share this room with other people? We have to organize showers?”
Mr. Zimmer, the computer programmer, said he is also conscious of becoming too set in his ways, especially where sleeping is concerned. “I just do not sleep as well with someone else,” he said. “A lot of homes have double master bedrooms. I can really see the value of that.”
He added: “Looking back, maybe I should have had a roommate.”
During a year he lived with a girlfriend, Mr. Sherwood, the music manager, said his nocturnal habits were hard to break. “I’d be up clicking on the computer until 4 or 5,” he said.
But Ms. Bennett, whose last live-in relationship ended in 1976, said she doesn’t worry about being too quirky to cohabitate. “You know,” she said, “I hear this stuff about ‘I just have too many bad habits.’ If I wanted to live with someone again, I think I could. You pull yourself back together.”
That’s good news for Ms. Kennedy, who has developed the kind of quirky, absentminded habit that’s great if you’re an eccentric character in a Southern novel, but not if you want to be seen as good roommate (or romantic) material.
Pulling a sweater, boots and tights from her dryer-slash-dresser one recent morning, she forgot to grab her skirt, and left the house without wearing one. “I realized it when I got halfway to work — damn it, I forgot my skirt,” she said. And it’s not the first time that’s happened.
When and if she lives with someone again, Ms. Kennedy said: “I think I’ll need to be with someone who has lived alone. We can commiserate and help each other resocialize.”
I want to be alone: the rise and rise of solo living
The number of people living alone has skyrocketed. What is driving the phenomenon? And solo dwellers Colm Tóibín, Alex Zane, Carmen Callil and others reflect on life as a singleton
Human societies, at all times and places, have organised themselves around the will to live with others, not alone. But not any more. During the past half-century, our species has embarked on a remarkable social experiment. For the first time in human history, great numbers of people – at all ages, in all places, of every political persuasion – have begun settling down as singletons. Until the second half of the last century, most of us married young and parted only at death. If death came early, we remarried quickly; if late, we moved in with family, or they with us. Now we marry later. We divorce, and stay single for years or decades. We survive our spouses, and do everything we can to avoid moving in with others – including our children. We cycle in and out of different living arrangements: alone, together, together, alone.
Numbers never tell the whole story, but in this case the statistics are startling. According to the market research firm Euromonitor International, the number of people living alone globally is skyrocketing, rising from about 153 million in 1996 to 277 million in 2011 – a 55% increase in 15 years. In the UK, 34% of households have one person living in them and in the US it’s 27% – roughly one in every seven adults.
Contemporary solo dwellers in the US are primarily women: about 18 million, compared with 14 million men. The majority, more than 16 million, are middle-aged adults between the ages of 35 and 64. The elderly account for about 11 million of the total. Young adults between 18 and 34 number more than 5 million, compared with 500,000 in 1950, making them the fastest-growing segment of the solo-dwelling population. Unlike their predecessors, people who live alone today cluster together in metropolitan areas.
Sweden has more solo dwellers than anywhere else in the world, with 47% of households having one resident, followed by Norway at 40%. In the Scandinavian countries, welfare states protect most citizens from the more difficult aspects of living alone. In Japan, where social life has historically been organised around the family, about 30% of all households have a single dweller, and the rate is far higher in urban areas. The Netherlands and Germany have an even greater proportion of one-person households than the UK. And the nations with the fastest growth in one-person households? China, India and Brazil.
But despite the worldwide prevalence, living alone isn’t really discussed, or understood. We aspire to get our own places as young adults, but fret about whether it’s all right to stay that way, even if we enjoy it. We worry about friends and family members who haven’t found the right match, even if they insist that they’re OK on their own. We struggle to support elderly parents and grandparents who find themselves living alone after losing a spouse, but we are puzzled if they tell us they prefer to remain alone.
In all of these situations, living alone is something that each person, or family, experiences as the most private of matters, when in fact it is an increasingly common condition.
When there is a public debate about the rise of living alone, commentators present it as a sign of fragmentation. In fact, the reality of this great social experiment is far more interesting – and far less isolating – than these conversations would have us believe. The rise of living alone has been a transformative social experience. It changes the way we understand ourselves and our most intimate relationships. It shapes the way we build our cities and develop our economies.
So what is driving it? The wealth generated by economic development and the social security provided by modern welfare states have enabled the spike. One reason that more people live alone than ever before is that they can afford to. Yet there are a great many things that we can afford to do but choose not to, which means the economic explanation is just one piece of the puzzle.
In addition to economic prosperity, the rise stems from the cultural change that Émile Durkheim, a founding figure in sociology in the late 19th century, called the cult of the individual. According to Durkheim, this cult grew out of the transition from traditional rural communities to modern industrial cities. Now the cult of the individual has intensified far beyond what Durkheim envisioned. Not long ago, someone who was dissatisfied with their spouse and wanted a divorce had to justify that decision. Today if someone is not fulfilled by their marriage, they have to justify staying in it, because there is cultural pressure to be good to one’s self.
Another driving force is the communications revolution, which has allowed people to experience the pleasures of social life even when they’re living alone. And people are living longer than ever before – more to the point, women often outlive their spouses by decades rather than years – so ageing alone has become an increasingly common experience.
Although each person who develops the capacity to live alone finds it an intensely personal experience, my research suggests that some elements are widely shared. Today, young solitaires actively reframe living alone as a mark of distinction and success. They use it as a way to invest time in their personal and professional growth. Such investments in the self are necessary, they say, because contemporary families are fragile, as are most jobs, and in the end each of us must be able to depend on ourselves. On the one hand, strengthening the self means undertaking solitary projects and learning to enjoy one’s own company. But on the other it means making great efforts to be social: building up a strong network of friends and work contacts.
Living alone and being alone are hardly the same, yet the two are routinely conflated. In fact, there’s little evidence that the rise of living alone is responsible for making us lonely. Research shows that it’s the quality, not the quantity of social interactions that best predicts loneliness. What matters is not whether we live alone, but whether we feel alone. There’s ample support for this conclusion outside the laboratory. As divorced or separated people often say, there’s nothing lonelier than living with the wrong person.
There is also good evidence that people who never marry are no less content than those who do. According to research, they are significantly happier and less lonely than people who are widowed or divorced.
In theory, the rise of living alone could lead to any number of outcomes, from the decline of community to a more socially active citizenry, from rampant isolation to a more robust public life. I began my exploration of singleton societies with an eye for their most dangerous and disturbing features, including selfishness, loneliness and the horrors of getting sick or dying alone. I found some measure of all of these things. On balance, however, I came away convinced that the problems related to living alone should not define the condition, because the great majority of those who go solo have a richer and more varied experience.
Sometimes they feel lonely, anxious and uncertain about whether they would be happier in another arrangement. But so do those who are married or live with others. The rise of living alone has produced significant social benefits, too. Young and middle-aged solos have helped to revitalise cities, because they are more likely to spend money, socialise and participate in public life.
Despite fears that living alone may be environmentally unsustainable, solos tend to live in apartments rather than in big houses, and in relatively green cities rather than in car-dependent suburbs. There’s good reason to believe that people who live alone in cities consume less energy than if they coupled up and decamped to pursue a single-family home.
Ultimately, it’s too early to say how any particular society will respond to either the problems or the opportunities generated by this extraordinary social transformation. After all, our experiment with living alone is still in its earliest stages, and we are just beginning to understand how it affects our own lives, as well as those of our families, communities and cities.
Ideas are like fire, observed Thomas Jefferson in 1813—information can be passed on without relinquishing it (1). Indeed, the ease and benefit of sharing information select for individuals to aggregate into groups, driving the buildup of complexity in the biological world (2, 3). Once the members of some collective—whether cells of a fruit fly or citizens of a democratic society—have accumulated information, they must integrate that information and make decisions based upon it. When these members share a common interest, as do the stomata on the surface of a plant leaf (4), integrating distributed information may be a computational challenge. But when individuals do not have entirely coincident interests, strategic problems arise. Members of animal herds, for example, face a tension between aggregating information for the benefit of the herd as a whole, and avoiding manipulation by self-interested individuals in the herd. Which collective decision procedures are robust to manipulation by selfish players (5)? On page 1578 of this issue, Couzin et al. (6) show how the presence of uninformed agents can promote democratic outcomes in collective decision problems.
Distributed information processing.
(A) Research in this domain comprises four areas: multisensor integration (12), social choice theory (13), cooperative distributed computation (14), and tactical distributed computation (5). The Couzin et al. study lies in the lower right quadrant, where the challenges of both social choice and distributed computation must be solved. (B) In this schematic of tactical distributed computation, an uncertain world is observed imperfectly by agents with different preferences. By means of local interactions, they aggregate the information and preferences to arrive at a collective decision.
Research in distributed information processing broadly falls into four domains (see the figure, panel A), which differ depending on whether there is local or central control over the collective decision and on whether the agents share common interests (7). The scenario addressed by Couzin et al. lies in the particularly challenging lower right quadrant. In this domain, one must simultaneously consider both the local nature of information exchange in distributed systems and the strategic issues that arise in social choice theory.
We can think of tactical distributed control as having two stages (see the figure, panel B). In an initial social choice stage, each agent imperfectly observes the world and selects a preferred outcome. In a subsequent distributed computation stage, individual preferences are aggregated through local interactions among agents to select a consensus decision. In such a situation, agents can pursue selfish aims not only through strategic choice of preferred outcome (8)—much as a far-left voter might back a moderate Democrat with a chance of winning instead of a fringe candidate with a more liberal platform—but also through tactical behavior during the process of local information exchange.
Couzin et al. consider cases in which the group must decide between two options. Allowing only two options simplifies the problem considerably: There is no incentive for strategic voting, but incentives remain for manipulating the process of information integration. Furthermore, when groups must select among more than two options, they face a host of voting paradoxes. Thomas Jefferson’s acquaintance and correspondent Marquis de Condorcet noted the basic reason for this more than two centuries ago (9): If a group has intransitive preferences—its members collectively prefer A to B and B to C in pairwise comparisons, yet they also prefer C to A—there is no straightforward way to select a single best course of action. This poses a serious social choice problem, because even when no single individual has intransitive preferences, the aggregate preferences of the group can be intransitive.
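Condorcet’s observation is easy to make concrete. The following minimal sketch (the three voter preference orders are the textbook example, not data from the paper) shows three individually transitive voters producing a collectively intransitive result:

```python
from itertools import combinations

# The classic Condorcet cycle: three voters, each with perfectly
# transitive individual preferences (listed best to worst).
voters = [
    ["A", "B", "C"],
    ["B", "C", "A"],
    ["C", "A", "B"],
]

def pairwise_winner(x, y, ballots):
    """Return whichever of x, y a majority of voters ranks higher."""
    x_votes = sum(1 for b in ballots if b.index(x) < b.index(y))
    return x if x_votes > len(ballots) / 2 else y

# Every pairwise contest has a clear 2-1 majority winner...
for x, y in combinations("ABC", 2):
    print(f"{x} vs {y}: majority prefers {pairwise_winner(x, y, voters)}")
# ...yet the group preference is a cycle: A beats B, B beats C,
# and C beats A, so no option beats all others head-to-head.
```

Each voter is perfectly rational, yet pairwise majority rule yields A over B, B over C, and C over A — exactly the intransitivity that makes choosing among three or more options so much harder than choosing between two.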
With two options, the task of determining the majority opinion is analogous to the classic density classifier problem in the study of distributed computation (10). But whereas the vast majority of work on the density classifier looks at cooperative distributed computation, Couzin et al. look at—and even implement in a vertebrate system, namely schooling fish—an extension to the noncooperative case.
The authors develop three different models of the information integration process. In each of these, agents are probabilistically influenced to adopt the opinions of their neighbors, and can promote their own opinions by being reluctant to change them. In this way, an intransigent minority can convert the entire group over to their minority opinion. Such behavior can impose costs on the group, including a reduced responsiveness to the state of the environment, an increased time to make a collective decision, and an increased risk of group fragmentation.
One might expect groups with uninformed members to be particularly susceptible to tactical behavior by minority subpopulations. If that tactical behavior involved some sort of active proselytizing to accelerate conversion to the minority opinion, one would be right. But Couzin et al. show that when the tactical behavior involves intransigence, uninformed individuals have the opposite effect. Their presence allows the majority to wrest control back from a manipulative minority. In each of their models, this occurs because the uninformed individuals tend to adopt the opinions of those around them, amplifying the majority opinion and preventing erosion by an intransigent minority. In this way, adding uninformed individuals to a group can facilitate fair representation during the process of information integration. Jefferson’s passionate arguments on the importance of education for democratic society notwithstanding (11), Couzin et al. have identified circumstances in which ignorance can promote democracy.
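The mechanism can be caricatured in a few lines of code. To be clear, this is not the authors’ model — Couzin et al. use stochastic, spatial simulations (and real fish), whereas the toy below is a deterministic mean-field sketch with illustrative agent counts and an invented "pull" weight standing in for intransigence. It shows only the qualitative point: uninformed agents copy by headcount, not by influence, and so dilute a weighted minority.

```python
def collective_choice(n_major, n_minor, w_minor, n_uninformed, iters=200):
    """Deterministic mean-field cartoon (not the authors' model).

    n_major informed agents prefer option A with influence weight 1;
    n_minor informed agents prefer option B with weight w_minor >= 1
    (intransigence modeled as extra pull per agent); n_uninformed
    agents copy the opinions around them in proportion to headcount,
    ignoring influence weights.
    """
    x = 0.5  # fraction of uninformed agents currently holding opinion A
    for _ in range(iters):
        # Uninformed agents re-sample an opinion by headcount.
        x = (n_major + n_uninformed * x) / (n_major + n_minor + n_uninformed)
    # The collective decision goes to the side with the larger total pull.
    pull_a = n_major * 1.0 + n_uninformed * x
    pull_b = n_minor * w_minor + n_uninformed * (1 - x)
    return "A" if pull_a > pull_b else "B"

# With no intransigence, the numerical majority (6 vs 5) wins.
assert collective_choice(6, 5, 1.0, 0) == "A"
# An intransigent minority (pull 3 per agent) captures the decision...
assert collective_choice(6, 5, 3.0, 0) == "B"
# ...but a large enough uninformed bloc, splitting roughly 6:5 by
# headcount, tips the decision back to the majority.
assert collective_choice(6, 5, 3.0, 120) == "A"
```

In this cartoon, adding uninformed agents only ever helps the numerical majority; the paper’s richer stochastic models also capture the costs noted above, such as slower decisions and the risk of group fragmentation.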
References and Notes
1. T. Jefferson, Letter to Isaac McPherson, 13 August 1813; in The Writings of Thomas Jefferson (Thomas Jefferson Memorial Association of the United States, Washington, DC, 1903), vol. 13, pp. 333–334. “He who receives an idea from me, receives instruction himself without lessening mine; as he who lights his taper at mine, receives light without darkening me…[Ideas are] like fire, expansible over all space, without lessening their density in any point.”
2. J. Maynard Smith, E. Szathmary, Major Transitions in Evolution (Oxford Univ. Press, New York, 1998).
3. M. Lachmann, G. Sella, E. Jablonka, Proc. Biol. Sci. 267, 1287 (2000).
4. D. Peak, J. D. West, S. M. Messinger, K. A. Mott, Proc. Natl. Acad. Sci. U.S.A. 101, 918 (2004).
5. M. Ben-Or, N. Linial, in Proceedings of the 26th Annual Symposium on Foundations of Computer Science (FOCS, Portland, OR, 1985), pp. 408–416.
6. I. D. Couzin et al., Science 334, 1578 (2011).
7. L. Conradt, T. J. Roper
8. A. Gibbard
9. H. P. Young
10. N. H. Packard, Dynamic Patterns in Complex Systems (World Scientific, River Edge, NJ, 1988).
11. T. Jefferson, Letter to Charles Yancey, 6 January 1816; in The Writings of Thomas Jefferson (Thomas Jefferson Memorial Association of the United States, Washington, DC, 1903), vol. 10, p. 4. “If a nation expects to be ignorant and free, in a state of civilization, it expects what never was and never will be. The people cannot be safe without information.”
12. R. C. Luo, M. G. Kay, IEEE Trans. 19, 901 (1989).
13. A. D. Taylor, Social Choice and the Mathematics of Manipulation (Cambridge Univ. Press, Cambridge, 2005).
14. J. P. Crutchfield, M. Mitchell, Proc. Natl. Acad. Sci. U.S.A. 92, 10742 (1995).
15. We thank T. Bergstrom, E. Chastain, M. Rosvall, and D. Peak for helpful discussions. This work was supported in part by NSF grant ATB-1038590 to C.T.B.
Is there a strong correlation between the number of hours you are physically present in a lab and the pace and success of your project?
The furore over Nature’s 24/7 lab feature, published a few weeks ago, is still sending out the occasional ripple. In case you missed it, the 31 August issue of the journal featured three pieces: a beautifully written account of Alfredo Quiñones-Hinojosa’s high-powered, workaholic lab at Johns Hopkins in Baltimore by journalist Heidi Ledford; the opposing viewpoint for the importance of work/life balance, presented in the first person by Julie Overbaugh, a successful group leader at the Fred Hutchinson in Seattle; and an editorial, which seemed, on balance, to come down largely in favor of the turbo-gunner approach. Indeed, it finishes with the rather ominous observation: “As research funding declines in many countries, science will intensify. Anyone lacking the inner intellectual drive and a capacity for relentless focus to get to the heart of the way the world works should stay away.”
All of you have probably known about labs like Quiñones-Hinojosa’s – especially if you’ve spent time in any prestigious research hub in America. And some of you may have lived the 24/7 lab lifestyle yourself at some point in your career.
I, personally, have been there. But rather ironically, my 24/7 epoch – which stretched over the five and a half years I was a PhD student in Seattle – happened in the lab of the aforementioned Julie Overbaugh herself. Hers was a relatively new lab at the time, and I ended up being the first PhD student to graduate from her group. My work ethic wasn’t precisely 24/7, but I would routinely work 12-14 hour days on the weekdays, and 8-10 hours each weekend day: never fewer than 80 hours a week, and sometimes approaching 90 or 100. This was the epoch in which I single-handedly cloned and sequenced more than a megabase of DNA, the old-fashioned manual way, with radioactive sulfur and hand-poured gels, and typed the results in manually at the computer each morning until the G, C, A, and T keys were visibly worn.
You could find me in the lab at six in the morning, or at midnight, or putting in a few hours on Christmas day. I once set off the intruder alert alarm in the Regional Primate Center after coming in from clubbing to check on some cell cultures at 3 AM – the goth clothes I was wearing at the time didn’t seem to reassure the security guard that I was actually an authorized PhD student. I’d ride those long hours on an adrenalin-fuelled buzz that only a 20-something year old kid could carry off for long periods of time, buffered through the failures by sporadic bright sparks of promising data and a constant stream of incredibly loud grunge or indie music.
It is important to stress that all of this behavior was entirely self-imposed. Julie worked normal hours and repeatedly stressed that we – the Overbabes, as we called ourselves back then – could all do as we liked. Some in her lab worked normal hours, and some worked longer. I think we were all a little scared of Julie: someone who applies no pressure is a pressure of its own. But I think the fact that it wasn’t imposed or expected is very important. Nobody has a right to treat a worker like a slave – something I think that some lab heads forget. “That’s how we did it in my day” is no justification.
If you look at my track record, you’ll see that my 24/7 PhD stint bought me four first-author papers, two first-author reviews and two co-authorships. Impressive, perhaps, until you factor in that I didn’t get my first paper out until my fourth year. In many projects, you have to labor for years to get a system up and running; the 24/7 thing isn’t necessarily going to strike gold during your typical post-doctoral short-term contract span. And with age, inevitably, comes the weakness of the flesh. Nowadays, I get tired, I get hungry; I can’t force myself to work the long hours I used to do with such ease. As I approach middle age, and life starts to feel finite, I find that spending time with my loved ones is more important than cranking out so many papers that I never get to see them.
And I think Julie is right about the creativity angle: I get scientific ideas when I’m running in the woods, or swimming laps, or lying sprawled on a sunny blanket in the weekend garden staring at the clouds. Sometimes in the lab or at my computer, I feel a block that won’t ease until I step away from the problem. I abandoned the 24/7 ethic completely from my second post-doc onwards (I’m on only about 50-60 hours now, if you include the time I spend writing papers and reading the literature at home), but my publication record remains as strong as it was at the outset, and some of the work I’m most proud of happened in a 9-to-5 industry culture.
Personally, although I’m glad Nature gave someone like Julie a platform to voice an opinion that many of us feel is obvious, I’m a little bit disappointed that its editorial team decided to side with the sweatshop mentality. Judging by the comment threads on the three pieces, most readers were equally disappointed. Quantity is seldom quality in science, and there are many different styles and diverse approaches in the quest for knowledge. The sleep-in-the-lab scientific stereotype is getting a little stale. As a community, shouldn’t we be moving beyond all that?