I cracked up when I saw the headline: Why your brain is not a computer. I kept on grinning while reading it because it makes some of the same points I’ve tried to make here. It’s nice to know other people see these things, too; it’s not just me.
Because, to quote an old gag line, “If you can keep your head when all about you are losing theirs,… perhaps you’ve misunderstood the situation.” The prevailing attitude seems to be that brains are just machines that we’ll figure out, no big deal. So it’s always possible my skepticism represents my misunderstanding of the situation.
But if so I’m apparently not the only one…
I’m not sure how much I can add to what the article says, so I’ll mostly share some bits that really caught my eye. I recommend reading the article itself for the full text.
It starts with the major progress we’ve made in neuroscience, in understanding how the brain functions. (This leads to a view of the brain as a mechanism that can be reduced to its parts — a view that leads many to reject David Chalmers’s notion of a “hard” problem.)
And yet there is a growing conviction among some neuroscientists that our future path is not clear. It is hard to see where we should be going, apart from simply collecting more data or counting on the latest exciting experimental approach. As the German neuroscientist Olaf Sporns has put it: “Neuroscience still largely lacks organising principles or a theoretical framework for converting brain data into fundamental knowledge and understanding.” Despite the vast number of facts being accumulated, our understanding of the brain appears to be approaching an impasse.
Which perhaps raises the question: Have we simply not gathered enough evidence? Are we just too early in our science to understand how the pieces we have so far work together? Or is there something forever opaque when it comes to the brain?
This article argues in favor of the latter possibility. (Obviously I agree.)
It is possible that repeated calls for more theory may be a pious hope. It can be argued that there is no possible single theory of brain function, not even in a worm, because a brain is not a single thing. (Scientists even find it difficult to come up with a precise definition of what a brain is.)
The author (Matthew Cobb) goes on to write about how the brain is an evolved system, and different parts of it evolved at different times to meet different needs.
That makes the brain an amalgam of sub-systems with more sophisticated systems evolving on top of more primitive ones. We still make reference to our early origins when we talk about thinking with our “lizard” or “hind” brains. A lot of psychology relates to motivation from our more primitive selves.
Churchland and Abbott spelled out the implication: “Global understanding, when it comes, will likely take the form of highly diverse panels loosely stitched together into a patchwork quilt.”
So far nothing very controversial, and I think most would agree about the brain being an amalgam.
But then we get to computationalism:
For more than half a century, all those highly diverse panels of patchwork we have been working on have been framed by thinking that brain processes involve something like those carried out in a computer. But that does not mean this metaphor will continue to be useful in the future. At the very beginning of the digital age, in 1951, the pioneer neuroscientist Karl Lashley argued against the use of any machine-based metaphor.
(1951. As usual, I’m actually late to the party.)
Lashley wrote that our brain metaphors go back to Descartes, who had a hydraulic model. Then came telephone models, electrical field models, and ultimately computer models.
Essentially our models match the current technology. That technology creates a language that informs — and limits — how we model the world.
True to form, now there are quantum models of brain function.
This dismissal of metaphor has recently been taken even further by the French neuroscientist Romain Brette, who has challenged the most fundamental metaphor of brain function: coding. […] Brette’s fundamental criticism was that, in thinking about “code”, researchers inadvertently drift from a technical sense, in which there is a link between a stimulus and the activity of the neuron, to a representational sense, according to which neuronal codes represent that stimulus.
In other words, viewing these systems through a “computer” lens colors the view in ways that may not match what’s really happening.
One issue is that this excludes seeing the interconnectedness of brain systems. A computational view is linear, like a line of dominoes falling, a series of causes and effects. But the brain is a vast mesh of interconnected and interacting systems.
By viewing the brain as a computer that passively responds to inputs and processes data, we forget that it is an active organ, part of a body that is intervening in the world, and which has an evolutionary past that has shaped its structure and function.
(Or, as I’ve said so many times: “The brain is not a computer!” 😀 😉 )
Cobb mentions the Hungarian neuroscientist György Buzsáki, who, in his recent book The Brain from Inside Out, concludes that the brain doesn’t represent information, it constructs it.
I rather liked that turn of phrase. We do construct our personal model of reality.
I’ve been reading about how the eye works and what kinds of signals it sends to the brain. It’s rather astonishing how we create a three-dimensional model of our surroundings from those signals.
That applies to all our senses. Our model of reality is just that, a model we create. We also create imaginary models, often of what will happen in the future. It’s what allows us to make choices.
Cobb writes about how science metaphors are both useful and limiting. They also have a life-span, and we may be coming to the end of the computer metaphor for the brain. (So maybe I’m actually early to the party?)
One problem is that it’s not clear what will replace it (and maybe no metaphor should). And obviously not everyone agrees the view has reached retirement age. But one sign that it may have is that people are questioning it.
Cobb does an interesting bit about emergence. He first differentiates between weak and strong emergence. But…
…weak emergence cannot explain the activity of even the simplest nervous systems, never mind the working of your brain, so we fall back on strong emergence, where the phenomenon that emerges cannot be explained by the activity of the individual components.
However strong emergence can be criticized for lack of an obvious causal mechanism. But…
…faced with the mysteries of neuroscience, emergence is often our only resort. And it is not so daft – the amazing properties of deep-learning programmes, which at root cannot be explained by the people who design them, are essentially emergent properties.
I am sympathetic to this view. (I recently argued in favor of strong emergence.)
Cobb is just getting started about computers:
A related view of the nature of consciousness turns the brain-as-computer metaphor into a strict analogy. Some researchers view the mind as a kind of operating system that is implemented on neural hardware, with the implication that our minds, seen as a particular computational state, could be uploaded on to some device or into another brain. In the way this is generally presented, this is wrong, or at best hopelessly naive.
Which I have to admit echoes my own sentiments. I have argued long and hard against the idea that a “mind algorithm” exists.
As Cobb goes on to write, “It implies that our minds are somehow floating about in our brains, and could be transferred into a different head or replaced by another mind.”
Yes. Exactly. This form of computationalism is old-fashioned dualism.
Later he writes:
Even something as apparently straightforward as working out the storage capacity of a brain falls apart when it is attempted. Such calculations are fraught with conceptual and practical difficulties. Brains are natural, evolved phenomena, not digital devices.
Indeed. The brain is not RAM or a hard drive.
Also, logic gates (which neurons do bear some resemblance to) do not a computer make. They are just a small part of what it takes to make a computer. And for that matter:
A neuron is not like a binary switch that can be turned on or off, forming a wiring diagram. Instead, neurons respond in an analogue way, changing their activity in response to changes in stimulation.
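The contrast can be sketched in a toy snippet. Both functions here are invented purely for illustration (neither is a model of any real neuron): a logic gate flips all-or-nothing at a threshold, while an analog, rate-style response varies smoothly with the strength of stimulation.

```python
import math

def binary_gate(stimulus, threshold=0.5):
    """Idealized switch: output is all-or-nothing at a fixed threshold."""
    return 1.0 if stimulus >= threshold else 0.0

def firing_rate(stimulus, gain=6.0, threshold=0.5):
    """Toy analog response: output varies smoothly with stimulation.
    (A generic sigmoid curve, chosen only to illustrate gradedness.)"""
    return 1.0 / (1.0 + math.exp(-gain * (stimulus - threshold)))

for s in (0.2, 0.45, 0.55, 0.8):
    print(f"stimulus={s:.2f}  gate={binary_gate(s):.0f}  rate={firing_rate(s):.2f}")
```

The gate gives no hint of how close the stimulus was to the threshold; the graded response carries that information in its level of activity, which is closer to how neurons actually behave.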
What’s funny to me is how many times I’ve made these same arguments. I’ve written plenty of posts (and comments) that have said the same things.
This part made me chuckle:
In 2015, the roboticist Rodney Brooks chose the computational metaphor of the brain as his pet hate in his contribution to a collection of essays entitled This Idea Must Die.
And I’ve been accused of having strong feelings on the matter.
[In fact I don’t. But I do enjoy a debate and argue any point with some degree of passion. It’s just a sign I believe in what I’m saying. I’m no sophist.]
Cobb writes about a 2017 experiment in which two neuroscientists attempted to understand a simple computer system using neuroscience techniques.
They were unable to detect the hierarchies of information. They could see all the activity, but they couldn’t really see the operating system or the application it was running.
Which is something I’ve argued before: If we sit inside a computer and watch the information flow, there is nothing that really distinguishes one program from another. An algorithm running on an execution engine is entirely arbitrary and user-defined.
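A toy sketch of that point (the stack machine and both “programs” are invented here for illustration): two computations with entirely different meanings can produce identical streams of primitive machine activity, so watching the activity alone tells you nothing about which program ran.

```python
def run(program, inputs):
    """Minimal stack machine with two opcodes, PUSH and ADD.
    Returns the final result and the trace of operations executed."""
    stack, trace = [], []
    values = iter(inputs)
    for op in program:
        if op == "PUSH":
            stack.append(next(values))
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        trace.append(op)
    return stack.pop(), trace

# Two "programs" with very different user-level meanings...
payroll = ["PUSH", "PUSH", "ADD"]   # wages + bonus
game    = ["PUSH", "PUSH", "ADD"]   # score + power-up

pay, trace_payroll = run(payroll, [2000, 150])
score, trace_game  = run(game, [40, 10])

# ...yet the machine-level activity is indistinguishable.
print(trace_payroll == trace_game)
```

The meaning — payroll versus game — lives entirely in the user’s interpretation, not in anything observable inside the machine.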
I was struck by how, while they could see various synchronization signals, which showed up prominently in their data reduction, they were unable to determine whether those signals were byproducts or crucial inputs to the process.
Brain researchers apparently struggle with the same conundrum regarding synchronous signals in the brain. Byproducts or crucial inputs?
Cobb concludes, in part:
This is the great problem we have to solve. On the one hand, brains are made of neurons and other cells, which interact together in networks, the activity of which is influenced not only by synaptic activity, but also by various factors such as neuromodulators. On the other hand, it is clear that brain function involves complex dynamic patterns of neuronal activity at a population level.
One might even be tempted to call it a “hard” problem.
Obviously this is controversial territory. The study of human consciousness is one of the most fragmented and questing branches of science I know.
Even theoretical physics and cosmology seem more grounded in at least having definitions scientists agree on. Consciousness lacks common definitions for its most fundamental aspects.
(I’m also inclined to think it suffers from too much philosophy.)
In any event, I got a huge kick out of this article. It’s an excerpt from his book, The Idea of the Brain, which will be published this month.
The bottom line, perhaps, is to not get carried away with metaphors and, in particular, to maybe move on from the “brain is a computer” one.
Stay metaphorical, my friends!