Michelle R. Greene

Assistant Professor of Neuroscience



Hathorn Hall, Room 111




Education
  • B.S., Psychobiology, University of Southern California (2004)
  • Ph.D., Cognitive Science, Massachusetts Institute of Technology (2009)
  • Postdoctoral Fellowship, Department of Surgery, Harvard Medical School, Brigham & Women’s Hospital (2011)
  • Postdoctoral Fellowship, Department of Computer Science, Stanford University (2013)
Courses Taught
  • NS/PY 160 Introduction to Neuroscience
  • NRSC 205 Statistical Methods
  • NRSC 208 Neuroscience, Ethics, & Society
  • NRSC 209 Neural Codes: The Language of Thought
  • NS/PY 357 Computational Neuroscience / Lab
  • NS/PY 463 Capstone Seminar on Human Cognitive Neuroscience
Research Interests

One of the enduring open questions in cognitive science is how we so quickly and effortlessly understand the visual world from a brief glance. In a single eye fixation lasting 250 msec, we are not only able to transform the myriad colors and textures hitting our retinae into objects and scenes, but also to make inferences about future events and even assess aesthetic and emotional content. Thus, scene understanding is not a purely visual problem, but one that touches nearly every area of cognitive science, including memory, attention, categorization, and semantics. To date, no computer vision system can make such complex inferences even given unlimited time, let alone in real time.

My primary research goal is to understand the mechanisms that enable rapid, intelligent perception of our environment. To this end, I use computational methods from machine learning and computer vision to model the information that may be used by the human brain for visual understanding. This information can come from the scene’s objects, from global layout, from knowledge of the actions afforded by an environment, or from other prior visual experience. I then compare these models to human performance, whether measured in the laboratory with well-established psychophysical methods, in neural activity and eye-movement patterns, or at scale through crowdsourced human-generated data. This enhanced toolkit has allowed my work to make an impact beyond psychology, generating follow-up work in computer vision, as well as in applications ranging from fingerprint identification to pedagogy. I examine visual perception from two directions: a bottom-up approach of measuring and manipulating the contents of the scene, and a top-down approach of exploring the internal representations and knowledge that influence how a scene is understood.
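
The model-to-human comparison described above is often carried out with representational similarity analysis (RSA): the similarity structure a model assigns to a set of scenes is correlated with the similarity structure measured from humans. The sketch below is purely illustrative, using synthetic data and placeholder names rather than any actual data or code from this research.

```python
# Illustrative RSA sketch with synthetic data: compare the pairwise scene
# (dis)similarity structure of a "model" representation to a noisy
# simulated "human" representation. All names and numbers are placeholders.
import math
import random

def pearson(x, y):
    """Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def rdm(features):
    """Representational dissimilarity matrix: 1 - correlation between
    the feature vectors of every pair of scenes."""
    n = len(features)
    return [[1.0 - pearson(features[i], features[j]) for j in range(n)]
            for i in range(n)]

def upper_triangle(mat):
    """Flatten the above-diagonal cells -- the part RSA compares."""
    n = len(mat)
    return [mat[i][j] for i in range(n) for j in range(i + 1, n)]

random.seed(0)
n_scenes, n_features = 12, 40
# Synthetic model features for each scene.
model = [[random.gauss(0, 1) for _ in range(n_features)]
         for _ in range(n_scenes)]
# "Human" data simulated as a noisy copy of the model representation.
human = [[v + random.gauss(0, 0.7) for v in row] for row in model]

# Agreement between the two dissimilarity structures.
agreement = pearson(upper_triangle(rdm(model)), upper_triangle(rdm(human)))
print(f"model-human RDM correlation: {agreement:.2f}")
```

In practice the model features might come from scene attributes, object labels, or a deep network layer, and the human side from behavioral similarity judgments or neural response patterns; the comparison logic stays the same.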

Selected Publications

2019 Greene, M. R. The information content of scene categories. Knowledge and Vision, 70, 161.

2019 Hansen, B. C., Field, D. J., Greene, M. R., Olson, C., & Miskovic, V. Towards a state-space geometry of neural responses to natural scenes: A steady-state approach. NeuroImage, 201, 116027.

2018 Greene, M. R., & Hansen, B. C. Shared spatiotemporal category representations in biological and artificial deep neural networks. PLoS Computational Biology, 14(7), e1006327.

2018 Groen, I. I., Greene, M. R., Baldassano, C., Fei-Fei, L., Beck, D. M., & Baker, C. I. Distinct contributions of functional and deep neural network features to representational similarity of scenes in human brain and behavior. eLife, 7, e32962.

2016 Vessel, E.A., Biederman, I., Subramaniam, S., & Greene, M.R. Effective signaling of surface boundaries by L-vertices reflect the consistency of their contrast in natural images. Journal of Vision.

2016 Iordan, M.C., Greene, M.R., Beck, D.M., & Fei-Fei, L. Typicality sharpens category representations in object-selective cortex. NeuroImage, 134, 170-179.

2016 Greene, M.R. Estimates of object frequency are frequently overestimated. Cognition, 149, 6-10.

2016 Greene, M.R., Baldassano, C., Esteva, A., Beck, D.M., & Fei-Fei, L. Visual scenes are categorized by function. Journal of Experimental Psychology: General, 145(1), 82-94.

2015 Iordan, M.C., Greene, M.R., Beck, D.M., & Fei-Fei, L. Basic level category structure emerges gradually across human ventral visual cortex. Journal of Cognitive Neuroscience, 27(7), 1427-1446.

2015 Greene, M.R., Botros, A., Beck, D.M., & Fei-Fei, L. What you see is what you expect: Rapid scene understanding benefits from prior experience. Attention, Perception, & Psychophysics, 77(4), 1239-1251.

2014 Greene, M.R., & Fei-Fei, L. Visual categorization is automatic and obligatory: Evidence from a Stroop-like paradigm. Journal of Vision, 14(1).

2013 Greene, M.R. Statistics of high-level scene context. Frontiers in Perception Science, 4, 777.

2013 Boucart, M., Moroni, C., Thiabaut, M., Szaffarczyk, M., & Greene, M.R. Scene categorization at large visual eccentricities. Vision Research, 86, 35-42.

2012 Greene, M.R., Liu, T., & Wolfe, J.M. Reconsidering Yarbus: Pattern classification cannot predict observers’ task from scan paths. Vision Research, 62, 1-8.

2011 Greene, M.R., & Wolfe, J.M. Global image properties do not guide visual search. Journal of Vision, 11(6).

2011 Wolfe, J.M., Vo, M.L.-H., Evans, K.K., & Greene, M.R. Visual search in scenes involves selective and non-selective pathways. Trends in Cognitive Sciences, 15(2), 77-84.

2011 Park, S., Brady, T.F., Greene, M.R., & Oliva, A. Disentangling scene content from spatial boundary: Complementary roles for the PPA and LOC in representing real-world scenes. Journal of Neuroscience, 31(4), 1333-1340.

2010 Greene, M.R., & Oliva, A. Adapting to scene space: High-level aftereffects to global scene properties. Journal of Experimental Psychology: Human Perception and Performance, 36(6), 1430-1432.

2009 Greene, M.R. & Oliva, A. The briefest of glances: Time course of natural scene understanding. Psychological Science, 20(4), 464-472.

2009 Greene, M.R. & Oliva, A. Recognition of natural scenes from global properties: Seeing the forest without representing the trees. Cognitive Psychology, 58(2), 137-176.