Are you seeing what I’m seeing?

Emma England

One of the biggest questions in neuroscience is also one of the simplest to ask: what is perception? How do we know that what we see in the world is the same as what someone else sees? For example, could the red we have all learned to name actually look like maroon to someone else, who simply learned to attach the word "red" to that experience? This is a million-dollar question that no one has answered, despite the technology we have. In the article "How Machine Learning Is Helping Us to Understand the Brain," Daniel Bear brings up an interesting point about the metaphors we use to describe and understand the human brain. He explains that scientists often study the bigger picture of the brain using a "systems approach": for example, studying how large groups of communicating neurons lead to our perception of the world. Technology has advanced to the point where it is no longer what restricts our understanding of the brain. Instead, Bear thinks we should approach the brain the way an evolutionary biologist would and study it based on how it does something, whereas so far we have been studying the brain as if seeking a separate explanation for each thing it does.

In my opinion, one of the biggest obstacles we have yet to overcome is the line between sensing something and perceiving it. In the visual system, for example, we know that light acts as the stimulus: it strikes the back of the retina, where specialized cells transduce the physical stimulus (the light wave) into a neural signal the brain can work with. That message is then sent from the retina to the primary visual cortex via the optic nerve, and the brain interprets the signal, building up a figure from the many features of the stimulus. This is how we perceive the world. However, there is still a lot we do not know. We know the stimulus (the light), the pathway that transduces the stimulus into a neural signal (the cells in the retina), how the neural signal travels through the brain (the optic nerve), and which areas the signal synapses in to increase the neural firing rate, or activity, in that part of the brain (V1, the primary visual cortex). But we do not really have any idea HOW these increased firing rates in a specialized portion of the brain allow us to perceive the world; we just know that they DO.
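To make that pipeline concrete, here is a minimal sketch of the kind of "stimulus in, firing rate out" description given above. This is not a model from Bear's article, and the gain, weight, and threshold numbers are invented for illustration; it is just a toy linear-nonlinear cascade of the sort used in introductory computational neuroscience.

```python
import math

# Toy sketch of the pathway described above:
# light intensity -> retinal transduction -> relay to V1 -> firing rate.
# All constants here are illustrative assumptions, not measured values.

def transduce(light_intensity, gain=0.8):
    """Retina: turn a physical light intensity into a graded neural
    signal. The log makes the response compressive, loosely mimicking
    how photoreceptors saturate at high intensities."""
    return gain * math.log1p(light_intensity)

def v1_firing_rate(signal, weight=12.0, threshold=1.5):
    """V1 neuron: a rectified linear response. Firing rate grows with
    input drive but can never fall below zero spikes per second."""
    return max(0.0, weight * signal - threshold)

for intensity in [0.0, 0.5, 2.0, 10.0]:
    rate = v1_firing_rate(transduce(intensity))
    print(f"light = {intensity:5.1f} -> firing rate = {rate:6.2f} spikes/s")
```

Notice what the sketch captures and what it leaves out: it will happily predict a firing rate for any light level, but nothing in it says anything about what seeing that light is actually like.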
Essentially, that gap connects to the mind-body problem as well as the binding problem. The mind-body problem is the question of how physical processes, such as nerve impulses and sodium and potassium ions flowing across membranes (the body part of the problem), become transformed into the richness of perceptual experience (the mind part of the problem). The binding problem, meanwhile, asks how the separate features of a stimulus, processed in different places, come together into a single unified perception. I learned about these two issues in sensation versus perception in the Perception class taught by Professor Grubb at Trinity.

All in all, the article argues that we need to start looking at how the brain functions and allows us to carry out everyday activities and experience the world through a variety of senses. It is not saying that we should discard the earlier technique of identifying what each area does, but that to gain a greater understanding of the brain as a whole, we need to focus on the HOW question. Studying the brain this way will hopefully help us understand it more completely and allow us to answer the question of whether what we are seeing is the same as what other people see.

Bear, Daniel. "How Machine Learning Is Helping Us to Understand the Brain." Salon, Massive, 25 Nov. 2017, www.salon.com/2017/11/25/how-machine-learning-is-helping-us-to-understand-the-brain_partner/.
