{"id":119151,"date":"2018-10-04T13:17:06","date_gmt":"2018-10-04T17:17:06","guid":{"rendered":"https:\/\/www.bates.edu\/news\/?p=119151"},"modified":"2018-10-05T13:28:42","modified_gmt":"2018-10-05T17:28:42","slug":"how-do-we-know-what-were-seeing-a-bates-professor-finds-out","status":"publish","type":"post","link":"https:\/\/www.bates.edu\/news\/2018\/10\/04\/how-do-we-know-what-were-seeing-a-bates-professor-finds-out\/","title":{"rendered":"How do we know what we&#8217;re seeing? A Bates professor finds out"},"content":{"rendered":"<p>In milliseconds, our brains take the information that meets our eyes and categorize it into everyday scenes like offices or kitchens, associated with memories, emotions, and applications for our daily lives.<\/p>\n<p>Neuroscientists who study such scene categorization run into a challenge, says Assistant Professor of Neuroscience <a href=\"https:\/\/www.bates.edu\/neuroscience\/michelle-r-greene\/\">Michelle Greene<\/a>. It\u2019s tough to measure how what Greene calls \u201cfeatures\u201d of a scene \u2014 such as colors, textures, or the words we associate with a scene \u2014 contribute to categorization, especially since each feature affects the others.<\/p>\n<p>So Greene and a colleague set out to mathematically model the role each feature plays in scene categorization over time, part of a research project funded by the National Science Foundation. 
In doing so, they found that complex features like function contribute to categorization as early in the process as simple ones, like color.<\/p>\n<div id=\"attachment_109762\" style=\"width: 310px\" class=\"wp-caption alignright\"><a href=\"https:\/\/www.bates.edu\/news\/files\/2017\/09\/170829_Michelle_Greene_0747_LR.jpg\"><img loading=\"lazy\" decoding=\"async\" aria-describedby=\"caption-attachment-109762\" class=\"wp-image-109762 size-medium\" src=\"https:\/\/www.bates.edu\/news\/files\/2017\/09\/170829_Michelle_Greene_0747_LR-e1538745748901-300x300.jpg\" alt=\"\" width=\"300\" height=\"300\" srcset=\"https:\/\/www.bates.edu\/news\/files\/2017\/09\/170829_Michelle_Greene_0747_LR-e1538745748901-300x300.jpg 300w, https:\/\/www.bates.edu\/news\/files\/2017\/09\/170829_Michelle_Greene_0747_LR-e1538745748901-150x150.jpg 150w, https:\/\/www.bates.edu\/news\/files\/2017\/09\/170829_Michelle_Greene_0747_LR-e1538745748901-900x900.jpg 900w, https:\/\/www.bates.edu\/news\/files\/2017\/09\/170829_Michelle_Greene_0747_LR-e1538745748901-200x200.jpg 200w, https:\/\/www.bates.edu\/news\/files\/2017\/09\/170829_Michelle_Greene_0747_LR-e1538745748901.jpg 1081w\" sizes=\"(max-width: 300px) 100vw, 300px\" \/><\/a><p id=\"caption-attachment-109762\" class=\"wp-caption-text\">Assistant Professor of Neuroscience Michelle Greene. (Theophil Syslo\/Bates College)<\/p><\/div>\n<p>Greene\u2019s resulting paper, \u201cFrom Pixels to Scene Categories: Unique and Early Contributions of Functional and Visual Features,\u201d co-authored with Colgate University\u2019s Bruce C. Hansen, won best paper at the Conference on Cognitive Computational Neuroscience in September.<\/p>\n<p>Hansen and Greene, who uses machine learning to study visual perception, identified 11 features that contribute to scene categorization. Some of them are \u201clow-level\u201d features, meaning a computer could recognize them, Greene says. 
These include colors, textures, and edges.<\/p>\n<p>Some are \u201chigh-level\u201d features, meaning only humans can label them. These include specific objects, like a blender in a kitchen or a computer monitor in an office, as well as function, which refers to what a person might do in a scene, such as sleeping in a bedroom.<\/p>\n<p>Human brains process these features in conjunction with each other to comprehend scenes \u2014 in our minds, at the instant of recognizing a scene, an individual feature can\u2019t be isolated.<\/p>\n<p>\u201cIf you change the geometry of the room, that&#8217;s also going to change the low-level features of the room,\u201d Greene says. \u201cIf you change the low-level features, it&#8217;s going to change the high-level features.\u201d<\/p>\n<div id=\"attachment_119197\" style=\"width: 1929px\" class=\"wp-caption alignleft\"><a href=\"https:\/\/www.bates.edu\/news\/files\/2018\/10\/171201_Neuroscience_EEG_0449.jpg\"><img loading=\"lazy\" decoding=\"async\" aria-describedby=\"caption-attachment-119197\" class=\"wp-image-119197 size-full\" src=\"https:\/\/www.bates.edu\/news\/files\/2018\/10\/171201_Neuroscience_EEG_0449.jpg\" alt=\"\" width=\"1919\" height=\"1279\" srcset=\"https:\/\/www.bates.edu\/news\/files\/2018\/10\/171201_Neuroscience_EEG_0449.jpg 1919w, https:\/\/www.bates.edu\/news\/files\/2018\/10\/171201_Neuroscience_EEG_0449-400x267.jpg 400w, https:\/\/www.bates.edu\/news\/files\/2018\/10\/171201_Neuroscience_EEG_0449-900x600.jpg 900w, https:\/\/www.bates.edu\/news\/files\/2018\/10\/171201_Neuroscience_EEG_0449-200x133.jpg 200w\" sizes=\"(max-width: 1919px) 100vw, 1919px\" \/><\/a><p id=\"caption-attachment-119197\" class=\"wp-caption-text\">In December 2017, Assistant Professor of Neuroscience Michelle Greene works with Hanna De Bruyn \u201918 to run an EEG test with Katie Hartnett \u201918. 
(Phyllis Graber Jensen\/Bates College)<\/p><\/div>\n<p>Greene and Hansen found ways to measure the effects of various features individually. For example, to measure functions, they took the American Time Use Survey, which asks people how they spend their time, and associated the answers \u2014 watching TV, cooking, working \u2014 with different scenes, like a living room, a kitchen, and an office.<\/p>\n<p>Once they had a way to track individual features, they gathered a selection of thousands of images and, using both computer coding and human categorization through Amazon\u2019s Mechanical Turk tool, associated high- and low-level features with each scene.<\/p>\n<p>With all that data in hand, they used a technique in linear algebra called whitening transformation to orthogonalize, or parse out, each individual feature so they could study it independently from the others.<\/p>\n<p>\u201cWe put all of these features together in a nice big matrix and de-correlated them,\u201d Greene says. 
\u201cHere&#8217;s color by itself, here&#8217;s edges by itself, objects by itself, so on and so forth.\u201d<\/p>\n<div id=\"attachment_119152\" style=\"width: 910px\" class=\"wp-caption alignleft\"><a href=\"https:\/\/www.bates.edu\/news\/files\/2018\/10\/180312_Sarah_EEG_0023-2.jpg\"><img loading=\"lazy\" decoding=\"async\" aria-describedby=\"caption-attachment-119152\" class=\"wp-image-119152 size-large\" src=\"https:\/\/www.bates.edu\/news\/files\/2018\/10\/180312_Sarah_EEG_0023-2-900x600.jpg\" alt=\"\" width=\"900\" height=\"600\" srcset=\"https:\/\/www.bates.edu\/news\/files\/2018\/10\/180312_Sarah_EEG_0023-2-900x600.jpg 900w, https:\/\/www.bates.edu\/news\/files\/2018\/10\/180312_Sarah_EEG_0023-2-400x267.jpg 400w, https:\/\/www.bates.edu\/news\/files\/2018\/10\/180312_Sarah_EEG_0023-2-200x133.jpg 200w, https:\/\/www.bates.edu\/news\/files\/2018\/10\/180312_Sarah_EEG_0023-2.jpg 1919w\" sizes=\"(max-width: 900px) 100vw, 900px\" \/><\/a><p id=\"caption-attachment-119152\" class=\"wp-caption-text\">In March, Michelle Greene and Hanna De Bruyn \u201918 prepare to give an EEG test for De Bruyn\u2019s senior thesis. Greene works extensively with students on her research. (Phyllis Graber Jensen\/Bates College)<\/p><\/div>\n<p>Greene and Hansen then compared the results of their model to human brain activity using EEG tests.<\/p>\n<p>\u201cWe can get, in a millisecond-by-millisecond way, the extent to which similarity in the EEG patterns tracks similarity with regard to any of these orthogonalized features,\u201d Greene says.<\/p>\n<p>Greene originally thought that the brain would perceive low-level features like color and texture first, then bring in high-level features in order to identify the scene. Instead, she found that high-level features are involved in visual processing early on.<\/p>\n<p>Greene will delve deeper into how individual features affect visual perception, studying how manipulating one feature in an image affects how we process the image. 
She\u2019ll also see if a brain processes images differently based on whether its owner is told to focus on a specific feature, such as color.<\/p>\n<p>Greene works extensively with Bates students on these questions \u2014 she says the students get to learn computer programming and how to run EEG tests, and Greene herself gets fresh perspectives on her work.<\/p>\n<p>Sometimes, students \u201csee that this obvious assumption is an assumption, and we should test it,\u201d she says.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Michelle Greene and a colleague created a mathematical model of how our brains process the world around us.<\/p>\n","protected":false},"author":1005,"featured_media":119154,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_hide_ai_chatbot":false,"_ai_chatbot_style":"","associated_faculty":[],"_Page_Specific_Css":"","_bates_restrict_mod":false,"_table_of_contents_display":false,"_table_of_contents_location":"","_table_of_contents_disableSticky":false,"_is_featured":false,"footnotes":"","_bates_seo_meta_description":"","_bates_seo_block_robots":false,"_bates_seo_sharing_image_id":0,"_bates_seo_sharing_image_twitter_id":0,"_bates_seo_share_title":"","_bates_seo_canonical_overwrite":"","_bates_seo_twitter_template":""},"categories":[4,14],"tags":[11556,193],"class_list":["post-119151","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-academic-life","category-faculty-staff","tag-michelle-greene","tag-neuroscience"],"_links":{"self":[{"href":"https:\/\/www.bates.edu\/news\/wp-json\/wp\/v2\/posts\/119151","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.bates.edu\/news\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.bates.edu\/news\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.bates.edu\/news\/wp-json\/wp\/v2\/users\/1005"}],"replies":[{"embeddable":true,"href":"https:\/\/www.
bates.edu\/news\/wp-json\/wp\/v2\/comments?post=119151"}],"version-history":[{"count":12,"href":"https:\/\/www.bates.edu\/news\/wp-json\/wp\/v2\/posts\/119151\/revisions"}],"predecessor-version":[{"id":119236,"href":"https:\/\/www.bates.edu\/news\/wp-json\/wp\/v2\/posts\/119151\/revisions\/119236"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.bates.edu\/news\/wp-json\/wp\/v2\/media\/119154"}],"wp:attachment":[{"href":"https:\/\/www.bates.edu\/news\/wp-json\/wp\/v2\/media?parent=119151"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.bates.edu\/news\/wp-json\/wp\/v2\/categories?post=119151"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.bates.edu\/news\/wp-json\/wp\/v2\/tags?post=119151"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}