I cracked up when I saw the headline: Why your brain is not a computer. I kept on grinning while reading it because it makes some of the same points I’ve tried to make here. It’s nice to know other people see these things, too; it’s not just me.
Because, to quote an old gag line, “If you can keep your head when all about you are losing theirs,… perhaps you’ve misunderstood the situation.” The prevailing attitude seems to be that brains are just machines that we’ll figure out, no big deal. So it’s certainly (as ever) possible my skepticism represents my misunderstanding of the situation.
But if so I’m apparently not the only one…
I’m not sure how much I can add to what the article says, so I’ll mostly share some bits that really caught my eye. I recommend reading the article itself for the full text.
It starts with the major progress we’ve made in neuroscience, in understanding how the brain functions. (This leads to a view of the brain as a mechanism that can be reduced to its parts — a view that leads many to reject David Chalmers’s notion of a “hard” problem.)
But:
And yet there is a growing conviction among some neuroscientists that our future path is not clear. It is hard to see where we should be going, apart from simply collecting more data or counting on the latest exciting experimental approach. As the German neuroscientist Olaf Sporns has put it: “Neuroscience still largely lacks organising principles or a theoretical framework for converting brain data into fundamental knowledge and understanding.” Despite the vast number of facts being accumulated, our understanding of the brain appears to be approaching an impasse.
Which perhaps raises the question: Have we simply not gathered enough evidence? Are we just too early in our science to understand how the pieces we have so far work together? Or is there something forever opaque when it comes to the brain?
This article argues in favor of the latter possibility. (Obviously I agree.)
It is possible that repeated calls for more theory may be a pious hope. It can be argued that there is no possible single theory of brain function, not even in a worm, because a brain is not a single thing. (Scientists even find it difficult to come up with a precise definition of what a brain is.)
The author (Matthew Cobb) goes on to write about how the brain is an evolved system, and different parts of it evolved at different times to meet different needs.
That makes the brain an amalgam of sub-systems with more sophisticated systems evolving on top of more primitive ones. We still make reference to our early origins when we talk about thinking with our “lizard” or “hind” brains. A lot of psychology relates to motivation from our more primitive selves.
Churchland and Abbott spelled out the implication: “Global understanding, when it comes, will likely take the form of highly diverse panels loosely stitched together into a patchwork quilt.”
§
So far nothing very controversial, and I think most would agree about the brain being an amalgam.
But then we get to computationalism:
For more than half a century, all those highly diverse panels of patchwork we have been working on have been framed by thinking that brain processes involve something like those carried out in a computer. But that does not mean this metaphor will continue to be useful in the future. At the very beginning of the digital age, in 1951, the pioneer neuroscientist Karl Lashley argued against the use of any machine-based metaphor.
(1951. As usual, I’m actually late to the party.)
Lashley wrote that our brain metaphors go back to Descartes, who had a hydraulic model. Then came telephone models, electrical field models, and ultimately computer models.
Essentially our models match the current technology. That technology creates a language that informs — and limits — how we model the world.
True to form, now there are quantum models of brain function.
This dismissal of metaphor has recently been taken even further by the French neuroscientist Romain Brette, who has challenged the most fundamental metaphor of brain function: coding. […] Brette’s fundamental criticism was that, in thinking about “code”, researchers inadvertently drift from a technical sense, in which there is a link between a stimulus and the activity of the neuron, to a representational sense, according to which neuronal codes represent that stimulus.
In other words, viewing these systems through a “computer” lens colors the view in ways that may not match what’s really happening.
One issue is that this excludes seeing the interconnectedness of brain systems. A computational view is linear, like a line of dominoes falling, a series of causes and effects. But the brain is a vast mesh of interconnected and interacting systems.
By viewing the brain as a computer that passively responds to inputs and processes data, we forget that it is an active organ, part of a body that is intervening in the world, and which has an evolutionary past that has shaped its structure and function.
(Or, as I’ve said so many times: “The brain is not a computer!” 😀 😉 )
§
Cobb mentions Hungarian neuroscientist György Buzsáki, who, in his recent book The Brain from Inside Out, concludes that the brain doesn’t represent information, it constructs it.
I rather liked that turn of phrase. We do construct our personal model of reality.
I’ve been reading about how the eye works and what kinds of signals it sends to the brain. It’s rather astonishing how we create a three-dimensional model of our surroundings from those signals.
That applies to all our senses. Our model of reality is just that, a model we create. We also create imaginary models, often of what will happen in the future. It’s what allows us to make choices.
§
Cobb writes about how science metaphors are both useful and limiting. They also have a life-span, and we may be coming to the end of the computer metaphor for the brain. (So maybe I’m actually early to the party?)
One problem is that it’s not clear what will replace it (and maybe no metaphor should). And obviously not everyone agrees the metaphor has reached retirement age. But a possible sign that it has is that people are questioning it.
Cobb does an interesting bit about emergence. He first differentiates between weak and strong emergence. But…
…weak emergence cannot explain the activity of even the simplest nervous systems, never mind the working of your brain, so we fall back on strong emergence, where the phenomenon that emerges cannot be explained by the activity of the individual components.
However, strong emergence can be criticized for lack of an obvious causal mechanism. But…
…faced with the mysteries of neuroscience, emergence is often our only resort. And it is not so daft – the amazing properties of deep-learning programmes, which at root cannot be explained by the people who design them, are essentially emergent properties.
I am sympathetic to this view. (I recently argued in favor of strong emergence.)
§
Cobb is just getting started about computers:
A related view of the nature of consciousness turns the brain-as-computer metaphor into a strict analogy. Some researchers view the mind as a kind of operating system that is implemented on neural hardware, with the implication that our minds, seen as a particular computational state, could be uploaded on to some device or into another brain. In the way this is generally presented, this is wrong, or at best hopelessly naive.
Which I have to admit echoes my own sentiments. I have argued long and hard against the idea that a “mind algorithm” exists.
As Cobb goes on to write, “It implies that our minds are somehow floating about in our brains, and could be transferred into a different head or replaced by another mind.”
Yes. Exactly. This form of computationalism is old-fashioned dualism.
Later he writes:
Even something as apparently straightforward as working out the storage capacity of a brain falls apart when it is attempted. Such calculations are fraught with conceptual and practical difficulties. Brains are natural, evolved phenomena, not digital devices.
Indeed. The brain is not RAM or a hard drive.
Also, logic gates (which neurons do bear a resemblance to) do not a computer make. They are just a small part of what it takes to make a computer. And for that matter:
A neuron is not like a binary switch that can be turned on or off, forming a wiring diagram. Instead, neurons respond in an analogue way, changing their activity in response to changes in stimulation.
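To make that concrete, here’s a toy Python sketch (purely illustrative; the gain and threshold values are made up) contrasting a strictly binary gate with a graded, analogue-style response:

```python
import math

def and_gate(a: int, b: int) -> int:
    # A logic gate: binary in, binary out. Nothing in between.
    return a & b

def neuron_rate(stimulus: float, gain: float = 2.0, threshold: float = 1.0) -> float:
    # A toy graded response: "activity" varies continuously with
    # stimulation (a saturating sigmoid) rather than snapping on/off.
    return 1.0 / (1.0 + math.exp(-gain * (stimulus - threshold)))

for s in (0.0, 0.5, 1.0, 1.5, 2.0):
    print(f"stimulus={s:.1f} -> rate={neuron_rate(s):.3f}")
```

And even that cartoon flatters the comparison; a real neuron is far richer than any sigmoid.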
What’s funny to me is how many times I’ve made these same arguments. I’ve written plenty of posts (and comments) that have said the same things.
§
This part made me chuckle:
In 2015, the roboticist Rodney Brooks chose the computational metaphor of the brain as his pet hate in his contribution to a collection of essays entitled This Idea Must Die.
And I’ve been accused of having strong feelings on the matter.
[In fact I don’t. But I do enjoy a debate and argue any point with some degree of passion. It’s just a sign I believe in what I’m saying. I’m no sophist.]
§
Cobb writes about a 2017 experiment in which two neuroscientists attempted to understand a simple computer system using neuroscience techniques.
And failed.
They were unable to detect the hierarchies of information. They could see all the activity, but they couldn’t really see the operating system and the application it’s running.
Which is something I’ve argued before: If we sit inside a computer and watch the information flow, there is nothing that really distinguishes one program from another. An algorithm running on an execution engine is entirely arbitrary and user-defined.
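A toy sketch of that point (a made-up mini stack machine, nothing to do with the actual experiment): two different “programs” produce operation traces that are indistinguishable from inside the machine.

```python
def run(program):
    # A tiny stack machine. The trace records only what the
    # "hardware" sees: a stream of primitive operations.
    stack, trace = [], []
    for op, arg in program:
        trace.append(op)
        if op == "push":
            stack.append(arg)
        elif op == "add":
            stack.append(stack.pop() + stack.pop())
        elif op == "div":
            b = stack.pop()
            stack.append(stack.pop() / b)
    return stack[-1], trace

average = [("push", 4), ("push", 8), ("add", None), ("push", 2), ("div", None)]
score   = [("push", 10), ("push", 25), ("add", None), ("push", 5), ("div", None)]

print(run(average))  # (6.0, ['push', 'push', 'add', 'push', 'div'])
print(run(score))    # (7.0, ['push', 'push', 'add', 'push', 'div'])
```

Identical traces; the meaning (“an average” versus “a game score”) exists only in the eye of the user.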
I was struck by how, while they could see various synchronization signals, which showed up prominently in data-reduction, they were unable to determine whether those signals were byproducts or crucial inputs to the process.
Brain researchers apparently struggle with the same conundrum regarding synchronous signals in the brain. Byproducts or crucial inputs?
§
Cobb concludes, in part:
This is the great problem we have to solve. On the one hand, brains are made of neurons and other cells, which interact together in networks, the activity of which is influenced not only by synaptic activity, but also by various factors such as neuromodulators. On the other hand, it is clear that brain function involves complex dynamic patterns of neuronal activity at a population level.
One might even be tempted to call it a “hard” problem.
§
Obviously this is controversial territory. The study of human consciousness is one of the most fragmented and questing branches of science I know.
Even theoretical physics and cosmology seem more grounded in at least having definitions scientists agree on. Consciousness lacks common definitions for its most fundamental aspects.
(I’m also inclined to think it suffers from too much philosophy.)
In any event, I got a huge kick out of this article. It’s an excerpt from Cobb’s book, The Idea of the Brain, which will be published this month.
The bottom line, perhaps, is to not get carried away with metaphors and, in particular, to maybe move on from the “brain is a computer” one.
Stay metaphorical, my friends!
∇
March 2nd, 2020 at 10:01 am
Here’s a video that would have been useful to include in some previous posts where I talked about computers being just overgrown calculators:
More to the point, the reason why it’s so hard to determine what the software is doing is that this process Tom describes looks pretty much the same no matter what software is running. It’s only when interpreted in its overall effect that the virtual reality of the software emerges.
March 2nd, 2020 at 11:46 am
It may be helpful to distinguish between (what I call) “strong” computationalism — which sees the brain (if not everything) as, not metaphorically but literally a computation and, thus, a computer — and “weak” computationalism — which just claims brain function can be simulated by computation.
The thing is, if Cobb’s point (and mine) about the metaphor being wrong is correct, or at least has argumentative weight, then it has even more weight against a literal view. I find it persuasive, but I already share Cobb’s view so of course I do.
March 2nd, 2020 at 6:15 pm
I disagree. (No doubt you are shocked. 🙂 )
There was a lot of discussion among neuroscientists about this article on Twitter. Much of it making the points I’ve made before. (Which, of course, I got from them, so no originality on my part.)
In summary, Cobb is arguing against the brain being a Von Neumann machine. No serious neuroscientist argues that it is, so he’s arguing against a strawman. Blake Richards makes the typical case in this thread.
Cobb’s implication that neuroscientists are lost and unable to make progress is a notion I keep seeing from mysterian philosophers, psychologists, and others outside of the field (Cobb is a zoologist). It’s discordant with the vast majority of what I see in neuroscience literature.
March 2nd, 2020 at 7:32 pm
“I disagree. (No doubt you are shocked. 🙂 )”
So shocked! 😉
“In summary, Cobb is arguing against the brain being a Von Neumann machine.”
This is kind of the crux of it. Neuroscientists want to call the brain a “computer” while at the same time saying “not that kind of computer.” But if the brain is “not that kind of computer” why would there be any equivalence with computers that are “that kind of computer”?
You’ve said in the past that the brain is not a “conventional computer” and also that the brain is not “a Turing Machine.” Obviously I agree.
Here we’re saying “von Neumann machine,” which I think is a little bit of its own strawdog because there are computers that don’t use the von Neumann architecture. Or if it’s not a strawdog, it’s ignorance of how “computing” is defined. Not by von Neumann architecture, but by what a TM can accomplish (or lambda calculus, which is purely mathematical).
So you were far more on track than Richards when you said the brain wasn’t a TM.
Indeed not.
But conventional computers are TMs, so why would conventional computers be analogous to brains?
As I have said before, we can call what the brain does “computation” only if we also say that an analog transistor radio “computes” audio from radio waves. What the brain does is far more like an analog circuit than a digital computation.
But doesn’t expanding the definition that way destroy the putative equivalence? Doesn’t it admit it will require a special analogue “computer” to do what the brain does?
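To make the radio comparison concrete, here’s a toy numpy sketch of AM envelope detection, the “computation” a diode and an RC filter perform in an analog receiver (all parameters arbitrary):

```python
import numpy as np

fs = 100_000                              # sample rate, Hz
t = np.arange(0, 0.05, 1 / fs)            # 50 ms of signal
audio = np.sin(2 * np.pi * 440 * t)       # the 440 Hz "broadcast"
carrier = np.sin(2 * np.pi * 10_000 * t)  # 10 kHz carrier
am = (1 + 0.5 * audio) * carrier          # amplitude modulation

# Envelope detection, roughly what a diode plus RC filter does:
rectified = np.abs(am)
kernel = np.ones(30) / 30                 # crude low-pass (0.3 ms window)
recovered = np.convolve(rectified, kernel, mode="same")

# The recovered envelope tracks the original audio closely:
r = np.corrcoef(recovered[500:-500], audio[500:-500])[0, 1]
print(f"correlation with original audio: {r:.3f}")
```

No symbols, no program, no stored instructions; just continuous physical transformation. If that counts as “computing,” the word has been stretched a long way.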
“Cobb’s implication that neuroscientists are lost and unable to make progress”
I think that overstates his position. He starts the article with how advanced the science is. He is only arguing against what he sees as an outdated metaphor.
“(Cobb is a zoologist)”
Firstly, attacking his credentials is ad hominem. (We’re not even zoologists. What does that say about our opinions?) Maybe this is his other field of study and he’s been actively following it a long time.
Secondly, zoologists study animals, which have brains. This may well be within his field of study.
“It’s discordant with the vast majority of what I see in neuroscience literature.”
His point is that it’s a global view that needs rethinking. He’s a bit like Sabine Hossenfelder arguing against the prevailing views in the theoretical physics field.
March 3rd, 2020 at 7:41 am
“Or if it’s not a strawdog, it’s ignorance of how “computing” is defined.”
In addition to being a neuroscientist, Blake Richards is also an AI researcher. But for the views of a straight computer scientist:
“I think that overstates his position.”
Cobb’s words:
And then your own takeaway in the post:
“Firstly, attacking his credentials is ad hominem.”
Pointing out that he’s not a neuroscientist, and grouping him with philosophers and psychologists, is not attacking his credentials. If his specialty was how the brain works, then I’m sure he’d list neuroscientist or neurobiologist on his bylines.
I will take (most) neuroscientist opinions on neuroscience over a zoologist’s (or a philosopher’s or psychologist’s). If the subject were ethology, anatomy, or something along those lines, then (most) zoologist opinions would outweigh neuroscientist ones.
We are indeed amateurs, crucially dependent on the experts. The truth is neither of us has first-hand knowledge of the relevant scientific research. It’s why I don’t pretend to have my own scientific theory of consciousness. I might speculate, but always as an amateur, a student. Whenever we think we’re seeing things the experts are missing, we’re most likely making first-year graduate student mistakes (at best).
March 3rd, 2020 at 10:59 am
Any reference to the person making the argument, rather than the content of the argument itself, is, by definition, ad hominem. (I think a good goal in a debate is to never use the word “you” or to characterize the other person in any way. Debates should be strictly about ideas. Not always possible, but a worthy goal.)
Simply put, people with the very best credentials are sometimes very wrong, and people with no credentials at all are sometimes spot on. I agree we depend on experts to learn, but at some point we are no longer novices and are capable of being thoughtful and discerning about the material. We reach a point where we don’t depend so much on a trusted so-and-so saying something but on what is actually said.
As for Cobb overstating or not, that’s also a distraction from the content of his argument. FWIW, I think there is a difference between “Cobb’s implication that neuroscientists are lost and unable to make progress” and what I or Cobb said.
(Besides, if it really is overstating the point, then it should be easy to refute. 😉 )
Getting to the content, the only point in play seems to involve von Neumann architecture. The Melanie Mitchell Twitter thread also invokes a “von Neumann architecture.” (And kudos for using “architecture” rather than “machine” — a von Neumann machine is something different.)
But this is still a strawdog because no one on either side is making any claim about von Neumann architecture. The phrase “von Neumann” doesn’t even appear in Cobb’s article. So, yeah, I think it is a strawdog.
A von Neumann architecture is a specific way to build a computer, and most conventional computers do use it. But it’s not what a computer scientist invokes when talking about computation. Computation is not defined with any reference to von Neumann architecture. Computation is defined in reference to a Turing Machine.
I think Mitchell takes the conversation to a better place with: “But the question remains whether a broader, more abstract notion of *computation* can be useful when thinking about how the brain operates.” (I suspect Cobb might agree it has been useful but might be due for retirement.)
Which goes back to my point about expanding the definition.
I think there’s a succinct way to put it:
CS Axiom: Computer1 ≡ TM
All agree: Brain ≢ TM
Therefore: Brain ≢ Computer1
A Computer1 is a conventional computer equivalent to a Turing Machine. They are characterized by synchronous, serial, digital, deterministic operation.
Then we also have:
Asserted: Brain ≡ Computer2
However: Computer2 ≢ Computer1
Because of the conclusion above. So the bottom line is we’re talking about some other kind of computer. An analogue system characterized by asynchronous, massively parallel, hugely interconnected operation.
Which is a very different ball of wax. (To the extent we’re talking Positronic brain, I’ve always been on board.)
The objection is that the phrase “the brain is a computer” is misleading because it doesn’t mean the sort of computer pretty much everyone associates with the term. We’re talking about something that has far more in common with a transistor radio than a computer.
To bring this all home, Mitchell also suggests: “This seems to me to be the correct approach to the question ‘is the brain a computer?’: To rephrase it as a better question: ‘Does the notion of computation, construed broadly, offer a useful theoretical framework for understanding brain processes?'”
And what Cobb is suggesting is that maybe it was, but the very “vocabulary” Mitchell mentions might be limiting and maybe it’s time to move beyond it to a more systems-oriented approach.
March 3rd, 2020 at 12:29 pm
As an aside, I think there is also a kind of scale problem here. The proposed equivalence between the brain and a computer often places a lot on how neurons are like logic gates.
Which, at a very general level, they are: they do sum inputs into a single output. But that very general property is about all they have in common.
But the behavior of a neuron is far more sophisticated than any logic gate. It isn’t just the vastly larger number of inputs, or that some inhibit. It’s that there are extrasynaptic neurotransmitter effects, and dendritic effects, and likely external effects from glial cells and whatnot.
Neurons are analytical systems in their own right. I think one can argue that even synapses are sophisticated enough to qualify as distinct analytical systems.
So seeing neurons as logic gates is a metaphoric view that abstracts away most of what a neuron actually does.
More importantly, consider the scale difference between a conventional computer made from billions of simple logic gates (themselves made from a dozen or so simple transistors) and a brain made from billions of complex analytical subsystems (themselves made from thousands of complex subsystems).
Calling the brain a “computer” seems like a scale mistake.
March 9th, 2020 at 10:14 am
I had hoped a post like this would draw out a concrete statement along the lines of, “No, brains are computers because X, Y, and Z,” because I really do wonder what the grounding is that results in such an abiding belief — one that seems to react pretty strongly to being challenged.
But the best response so far — really the only response so far — is to complain that this isn’t about von Neumann architecture (which, of course, obviously, it’s not), which is a negative response that seems to miss the target entirely.
What is the affirmative defense that the brain is like a computer? Even Mitchell agrees storage is a problematic comparison. I’ve shown that equating neurons with logic gates is a weak grounding.
What else is there? That’s a question I’d really like answered.
March 9th, 2020 at 10:22 am
The other question I have is what exactly does the comparison of the brain and computer really bring to the table? What is its value?
Cobb suggests it’s a metaphor; Mitchell suggests it brings a language; those amount to the same thing. Has that provided value?
Does it inform theories such as IIT or GWT or HOT? Does it help neuroscientists analyze the workings of the brain?
What makes the metaphor so useful that it must be defended?
March 9th, 2020 at 10:30 am
And finally there is the inherent contradiction that in the phrase “the brain is (like) a computer” people aren’t talking about conventional computers (von Neumann architecture or otherwise — Turing Machines at root) or the standard Computer Science definition of computing.
They mean a different kind of computer and a different kind of computing. An analogue form: asynchronous, massively parallel, hugely interconnected. A totally different kind of computing than Computer Science talks about.
Which is fine. I accepted that expanded definition long ago.
But then how does conventional computing enter into things?
Other than its ability to simulate a natural physical system, how does it apply to the brain? What is its value? And why does the comparison persist, anyway?
Is this literally a case of Descartes and his hydraulics metaphor?
March 12th, 2020 at 7:10 pm
Finally, a brain blog post I think I understand. I couldn’t say whether the brain is like a computer, but I have wondered whether people are forgetting that the computer metaphor is a metaphor. If the metaphor is taken as such, it’s not too difficult to discard it when it no longer makes sense, but if not, I can see how the metaphor might be limiting.
Your point about Descartes, by the way, is well taken.
March 13th, 2020 at 9:50 am
Well, thank you. (You know the joke about Descartes drinking in a bar? The bartender asks him if he’d like another drink, and Descartes replies, “I think not.” — and, poof, he vanishes into thin air.)
One question I ask is what value the metaphor ever had. Is there some understanding we wouldn’t have achieved without it? There are some general similarities, such as the idea of “memory,” but they vanish upon close examination.
I thought Cobb’s article, which was seen by many, might draw out an affirmative defense, but it appears the best response mustered from the industry is the negative complaint about von Neumann architecture. Which doesn’t say much for the argument.
So now it almost seems like a sociological question to me: Why the abiding belief in a metaphor that can’t be strongly justified and which has questionable value?
One theory I have is that it’s wishful thinking by people who are desperate to have brain uploading work — people who really, really, really want to live forever in a computer. (What many don’t seem to realize is that’s the last technology we’ll accomplish. If it’s even possible, effectively or in principle.)
March 13th, 2020 at 11:02 pm
I think we’d be hard-pressed to find another metaphor, given where we are with technology, but yeah, I’ve never loved it. I think you might be right about memory. The funny thing is, I think a lot of computer terminology came from brain terminology, and then that connection crossed back. I can’t help but think of dreaming as deleting the useless crap I picked up from the previous day. I can almost hear that crinkling sound as I click on “empty trash.” 🙂
I can’t say much about the connection from the computer/neurology sides of things, but philosophically, it seems obvious that you should be careful not to take the metaphor too far. For me the problem becomes very clear when you think about embodiment and environment. Computers don’t learn the way we do. They don’t inhabit the world, they don’t—as you’ve pointed out—have history.
Anyway, with computer metaphors, we should be careful not to put Descartes before the horse. 😉
March 14th, 2020 at 11:32 am
I wonder if the use of brain terminology in computing traces back to when it went mainstream. In the early days, the names were very technical. IBM called disk drives “Direct Access Storage Devices” (DASDs — “daz-dees”); memory was first called “core” (because they used tiny iron donuts (“cores”) for bits).
But then, as it became more a thing, people found more familiar terminology. We do tend to see new things in terms of things we know.
“For me the problem becomes very clear when you think about embodiment and environment.”
That is one of the more interesting aspects of the problem. The difference between intelligence and consciousness becomes hugely significant here. Over on Mike’s blog they were talking about robot minesweepers — specifically what they should “think” (if anything) about death.
One proposition is that, given it’s an artificial mind we create, we could give it any worldview, including one based on self-sacrifice and dying on the job. The counter proposition is that anything smart enough to have significant intelligence would be smart enough to figure out self-preservation.
And one would think anything with appreciable consciousness would seem to want to keep on being conscious. Can self-preservation be engineered out of consciousness?
One might hope machine consciousness isn’t possible just due to the moral quicksand it creates.
March 14th, 2020 at 8:59 pm
Can self-preservation be engineered out of consciousness?
That’s an interesting question. If I had to give a quick answer, I’d say no. That would seem to be a necessary aspect of consciousness, but in what sense is it necessary? I don’t know if I could answer that. It just seems everything about our consciousness is tied to that will to survive. Feeling, motivation, even morality, seem tied to that.
In any case, I hope they don’t give minesweepers a desire for self-preservation!
March 15th, 2020 at 10:12 am
No, that would certainly not be something you’d want to build in! There is the idea that, even without such programming, a smart enough general purpose intelligence might figure out that self-preservation is a good goal because it supports whatever goals are programmed in.
I think that’s essentially what our self-preservation amounts to: The desire to preserve what we’ve amassed (knowledge, experience, wealth, goods, land) as well as the desire to go on doing things. The idea that “this unit must continue” for those overall goals emerges pretty easily.
I don’t think we’d give a minesweeper that kind of intelligence, but we might with other machines with more complex goals. One problem with AGI is that it’s likely to operate very fast with perfect memory. Such an intelligence could be difficult to thwart if it got the wrong ideas (especially if it was smart enough to bide its time and consider its next moves).
Here are a couple of good cartoons.
https://abstrusegoose.com/594
And:
https://abstrusegoose.com/595
October 1st, 2022 at 4:03 pm
Hello again,
Back at the start of lockdown (i.e. around the time of this post), I decided to write a book – I had gained all that commuting time, on top of evenings, after all. (After a while I stopped blogging as a result.) At the start of this year, I switched to a new tentative title: “The Brain is not a Computer.” Hmm, I thought today, now I have a first draft, I wonder why I haven’t already looked to see what’s out there with that title. And Mr Google led me to you!
(If you’re interested in a sneak peek, leave me a message on my site somewhere.)
To add to the conversation here, here are some assertions I make…
- Brains are not computers (obviously!) but they do do computation – in a distributed, self-organizing way. (Slime mold can also do computation; high-dimensional, non-linear dynamical systems such as a bucket of water can do computation.)
- Computers are designed, and particularly designed to avoid noise, whereas brains are evolved (obviously!) within a noisy environment and are noisy themselves.
- We no longer need to compare brains to computers, because they are no longer the most advanced technology we have; neural nets like Deep Reinforcement Learning are. We are finally trying to understand the brain in terms of brain-like technology!
- Computers may be able to create consciousness, but then I don’t really even know if you’re conscious; the ‘hard question’ is how do we develop a methodology to tell?
October 3rd, 2022 at 11:57 am
Hey there, welcome back! Given all the posts I’ve written about computationalism, it’s vaguely ironic to discuss it here in a post that’s mostly about a news article. But given your book, it’s obvious why the title was a draw. No matter; here’s as good a place as any!
“Brains are not computers (obviously!) but they do do computation…”
Indeed. And as you go on to point out, so does a bucket of water. I’ve been pondering computationalism for quite some time. Originally, I took it somewhat as a given. A lot of science fiction made it seem not just plausible but likely. But then I read The Emperor’s New Mind (1989) by Roger Penrose, and it planted seeds of skepticism in my mind. The more I thought about it, the more I thought he was right. By the time I began blogging in 2011, I was decidedly on the fence. The more I blogged about it, the more I doubted the premise. At this point, I’ll need to see it actually happen before I believe it. (I agree with Feynman that the ultimate arbiter is always experimental results.)
I’ve come to think the proper question is to ask if mind is algorithmic. (A sub-question being, is anything in nature algorithmic, or are algorithms strictly the product of intelligence?) Penrose’s book is mainly an argument against the notion of an algorithmic mind.
Last year I had what seemed a major insight about (what I define as) computation (aka executing algorithms). First I wrote a three-part series (1, 2, & 3) about what I call “digital dualism” — the dual nature of physical computation and the virtual reality it implements. Later I wrote another three-part series (I, II, & III) about pancomputation — the notion that rocks and pails of water “compute” (my bottom line: not really).
Exactly as you say, computers are engineered to be 100% deterministic. A single bit error is seen as a corruption of the data. Even an artificial neural net gives the same answer for a given input. Noise is only allowed if deliberately introduced (and if done algorithmically is only pseudo-noise like “random” numbers from an algorithm are only pseudo-random).
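A trivial sketch of that last point (illustrative only): a fixed feed-forward “net” is a pure function, and algorithmic “noise” replays exactly when reseeded.

```python
import random

def tiny_net(x, w=(0.7, -0.3)):
    # A fixed feed-forward "net": same input, same output, every time.
    return max(0.0, w[0] * x + w[1])

print(tiny_net(1.0) == tiny_net(1.0))  # True

random.seed(42)
a = [random.random() for _ in range(3)]
random.seed(42)
b = [random.random() for _ in range(3)]
print(a == b)  # True -- pseudo-noise is fully deterministic
```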
And while I agree ANNs are a big step in the right direction, I still see them as only very sophisticated search engines with a vast configuration space. Granted, the current ANNs are teeny compared to our natural ones, and I’ll watch developments with great interest, but I’m not sure they’re sufficient for consciousness. (A question I ask: Can any algorithm invent the notion of an algorithm?)
Except for the possibility of physical networks. I question the notion that biology is necessary (though I have some sympathy for the view), but I have long suspected that physicality is. My analogy is laser light. No algorithm can ever produce photons. Only physical substances can. Could consciousness be like laser light in this regard? Does it arise in consequence of physical interaction? If so, no computer simulation of a brain will ever work.
“Computers may be able to create consciousness, but then I don’t really even know if you’re conscious; the ‘hard question’ is how do we develop a methodology to tell?”
Again, indeed! Given Searle’s Giant File Room, if it passed what I call a “Rich Turing Test” (i.e. prolonged interaction over a period of, say, weeks) and asserted it was conscious, [1] how could we tell the difference, and [2] is there really any difference? However, I am not sympathetic to what’s called “illusionism” — the notion that even we aren’t conscious — that’s obvious nonsense to me. It is an inescapable and self-evident fact to me that our brain machines experience subjectivity. Perhaps any sufficiently complex machine would also experience it, but I can’t help but wonder.
Ultimately, I think these questions depend on a true theory of consciousness, and I’m not sure we can even understand our own brains (let alone ones we construct from metal, sand, and plastic) until we have such a theory.
Tch. Once again, I wax prolix. Ah, well. Haven’t talked about this stuff in a while.
October 4th, 2022 at 4:12 pm
I put some comments over at https://logosconcarne.com/2021/10/04/analog-computing/
There’s some more disagreement here ;-)…
If you ask me to think of the worst popular science book I’ve ever read, The Emperor’s New Mind would be the first to come to mind!
Its thesis seems to be:
1) Roger Penrose is extremely clever and knowledgeable about quantum mechanics and other mathsy/physicksy stuff.
2) Quantum mechanics is *really* weird.
3) Consciousness is *really* weird.
4) Ergo, maybe consciousness is caused by quantum mechanics (not false, but a very weak argument).
I thought maybe I’d missed something. But then I read that the likes of Pat Churchland are even more scornful of it.
Whatever goes on in physics, it should be possible to run a physics simulation to demonstrate the emergence of any emergent phenomenon (if we really understand the underlying physics). Computational Fluid Dynamics runs an algorithm to simulate non-algorithmic stuff.
Can any algorithm invent the notion of an algorithm?: an algorithm could simulate the emergence of the invention of a notion. (It would be a mighty big simulation!)
A typical feed-forward neural net will produce the same output for the same input but any neural net (actually, any Machine Learning) with online learning may produce different outputs as it learns/adapts.
October 4th, 2022 at 8:26 pm
“There’s some more disagreement here”
Indeed! Should make for a lively conversation. 😃
With respect, I think you’ve misunderstood The Emperor’s New Mind. (To be blunt, given some of the really bad pop sci books I’ve read, calling it the worst seems extreme. FWIW, having read several of his books, I think Penrose is Einstein brilliant. One thing that impresses me about him is his clarity on what is considered known and what is his speculation.)
The main thesis of the book is that mind cannot be algorithmic. He makes a long and very complicated argument (it took me three readings to absorb it), but it mainly rests on Gödel’s Incompleteness theorem (which, in turn, rests on Cantor’s diagonal proof of countable versus uncountable infinities). One outcome of those is that, while the number of possible functions is uncountably infinite, the number of computable functions (per Turing Halting, another diagonal proof) is enumerable and thus countably infinite.
He then speculates that, given how quantum mechanics transcends classical mechanics (quantum computation can accomplish things classical computation cannot), perhaps quantum mechanics plays a role in consciousness. It was after the book came out that Stuart Hameroff approached him about his ideas regarding the quantum behavior in microtubules.
FWIW, I am not disdainful of the notion that the mystery of consciousness might indeed be tied to the classical transcendence of quantum physics. For one, it’s easy to forget that chemistry is quantum. We’re just so used to it that we don’t think of it that way. And given that QM underlies everything, is it really surprising that nature would leverage it in something as complex as mind?
“Whatever goes on in physics, it should be possible to run a physics simulation…”
Agreed. There are some considerations I see, though.
A practical one involves the level of granularity necessary to fully simulate a physical system. How far down must a simulation go? Neurons? Synapses? Molecules? Atoms? What about glial bodies and myelin sheathing? Is the physical relationship of brain parts necessary? What about the EMF environment? In any event, a sufficient sim is likely to be vast in size.
A related one involves the limits of classical computation. How much precision is necessary in calculations, and what about chaos entering the computation? What about the amount of data that needs to be moved around? And many natural systems are effectively impossible to simulate accurately with any reasonable amount of computer power (N-body problems, for instance, with a large N).
At the very least, and assuming a simulation works, a classical digital simulation of a mind will be extremely large and expensive. I’ve been learning a little recently about Tesla’s new DOJO supercomputer system (that they’re still building). It’s meant to replace the GPU cluster they’re using now to train a neural net for self-driving cars. What’s impressive about the current system and the intended DOJO system is its sheer size. And power consumption. One goal of DOJO is reducing the extreme power requirements of systems like these. (It still needs water cooling.)
But let’s say we do simulate a brain. No doubt we eventually will. What will happen then?
I’m pretty sure we can simulate the behavior of the meat. Blood flow, nutrient exchange, and so on. We might likewise simulate a heart or kidney. But assuming we’ve even gotten the software right, which in such a large system seems a big ask, would the brain simulation necessarily do anything but just sit there being a body organ? Maybe it will produce outputs that, decoded, amount to the thoughts of a mind, but that’s a perfect and singular goal to my eyes. There’s a much larger space of outcomes: Nothing, the brain sim acts as if in a coma; White noise, nothing useful emerges; Insanity, a mind, but a useless one; Idiocy, a mind, but a poor one. Or it works for a while, but chaotic effects eventually drive it off the rails.
I’m impressed by how we have a full Schrödinger solution for the hydrogen atom, but not for the helium atom (because of the N-body problem). Classical simulations can, in some cases, be very limited, and I can’t think of a more complicated system than the human brain.
“Computational Fluid Dynamics runs an algorithm to simulate non-algorithmic stuff.”
I’m sorry, I don’t follow. (Did you mean “nonlinear” perhaps?) We do have equations for fluid dynamics, and those we can certainly simulate. I’m sure I’ve missed your point here, though.
“…online learning may produce different outputs as it learns/adapts.”
Indeed, but then it’s a different NN and produces the same (but different) output given identical inputs. My point is that NNs are just lookup systems.
October 17th, 2022 at 10:20 am
“Should make for a lively conversation.”
Or… [Two weeks… sound of crickets] …apparently not.
November 14th, 2022 at 4:32 pm
Sorry for the delay in replying. A combination of:
1. Finishing off work before going off on holiday
2. Being on holiday,
3. Catching up after being on holiday.
Hopefully, December will give me some time to respond properly.
In the meantime: Someone mentioned Lex Fridman to me, who I hadn’t watched before. I watched this interview with Penrose, wanting to be persuaded more, but it just left me even more disappointed…
Przemysław Dolata’s comment resonated…
“If all you know is a hammer, everything looks like a nail.” A physics professor specializing in quantum mechanics asked about consciousness will say it’s caused by quantum mechanics. Funny how Penrose admits to not having any idea about neuroscience and only reading up on it after he decided to write his book 😀 At least he’s aware that his ideas are speculative at best.
The last sentence fits in with what you said: “One thing that impresses me about him is his clarity on what is considered known and what is his speculation.” I agree with you on that.
“A practical one involves the level of granularity “: I completely agree. How do we know when we have the right level of granularity? Answer (generally): when the simulation produces results sufficiently similar to reality (without having kludges or other cheat mechanisms in there.) For example, in Ising model simulations, we can see that temperature-dependent ferromagnetism at the macroscopic scale emerges from the self-organization of microscopic elements. Ferromagnetism *emerges* at that level. We don’t need to add anything extra to get the effect. We could modify the randomness, perhaps with some quantum entanglement-like coupling between pixels a long way from each other, but we would still see the self-organizing behavior, even if a little different.
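For the curious, a minimal sketch of such an Ising simulation (Metropolis dynamics; lattice size, sweep counts, and temperatures are arbitrary):

```python
import math
import random

def ising_sweep(spins, n, beta):
    # One Metropolis sweep: attempt n*n random single-spin flips,
    # accepting each with probability min(1, exp(-beta * dE)).
    for _ in range(n * n):
        i, j = random.randrange(n), random.randrange(n)
        nb = (spins[(i + 1) % n][j] + spins[(i - 1) % n][j]
              + spins[i][(j + 1) % n] + spins[i][(j - 1) % n])
        dE = 2 * spins[i][j] * nb
        if dE <= 0 or random.random() < math.exp(-beta * dE):
            spins[i][j] *= -1

n = 32
for beta in (0.2, 0.6):  # hot vs cold (the critical beta is about 0.44)
    spins = [[random.choice([-1, 1]) for _ in range(n)] for _ in range(n)]
    for _ in range(400):
        ising_sweep(spins, n, beta)
    m = abs(sum(sum(row) for row in spins)) / n ** 2
    # The hot lattice stays disordered (m near 0); the cold one
    # self-organizes toward uniform magnetization (m approaching 1).
    print(f"beta={beta}: |magnetization| ~ {m:.2f}")
```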
“NNs are just lookup systems” … with some non-linearity thrown in to make them universal function approximators. And that is for a ‘typical’ feed-forward NN. With feedback, it then has memory and is no longer same-in-same-out.
BTW: Whilst on holiday, I did read Anil Seth’s ‘Being You’. I was also disappointed by it. And it made no reference to Penrose and Hameroff. I would have thought a 360-page tome specifically on consciousness would have.
I’m very aware I’m just responding to some of your last comments rather than providing a proper argument here. Hopefully I can respond properly later.
November 15th, 2022 at 7:30 pm
Welcome back from holiday. Based on the first few minutes, I think I’ve seen that Penrose interview. Is your bone of contention his primary assertion that consciousness is a process that cannot be computed (with digital computation) or the secondary one that perhaps the answer lies in quantum behaviors? Both?
I’m aware of the criticism it’s a mistake to conflate quantum mechanics and consciousness just because both are big science mysteries. But it seems to beg the question in assuming consciousness can’t be quantum. Is it really that impossible the most astonishingly complicated mechanism we know might leverage nature all the way down to the quantum level? Pigeons apparently use a quantum interaction with Earth’s magnetic field to navigate, so brains seem to be capable of using quantum behavior.
I agree when granularity is fine enough low-level system effects emerge. My point is that the granularity required to simulate consciousness may be formidable. Even prohibitive. There is time granularity as well. How fine must the time slices be? Simulations are necessarily much larger than what they simulate, and the brain is already huge in terms of synapses, connections, etc.
(It might be worth noting that I divide these into three major (fuzzy) categories: functional, network, physical simulation. A functional system is a black box that acts conscious but makes no attempt to replicate brain structure. A network system, as the name implies, replicates neural network structure, either physically or logically. A physical simulation replicates the physics of the meat.)
I think it boils down to whether a digital simulation can accurately compute, to the necessary resolution, the trajectory of the physical analog system through its phase space. But weather and multi-body orbital systems show that, over time, chaos makes the computation diverge from reality. Widely!
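A toy illustration of how fast that divergence happens (the logistic map, a standard chaotic system; the specific numbers are arbitrary):

```python
# Two trajectories differing by one part in a billion decorrelate
# completely within a few dozen steps.
x, y = 0.4, 0.4 + 1e-9
for step in range(1, 51):
    x = 3.9 * x * (1 - x)
    y = 3.9 * y * (1 - y)
    if step % 10 == 0:
        print(f"step {step}: x={x:.6f} y={y:.6f} |diff|={abs(x - y):.2e}")
```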
I’m not surprised the Penrose/Hameroff notion wouldn’t get mentioned. There’s an assumption it can’t possibly be right. It gets dismissed out of hand.
FWIW, I think Penrose’s main idea, which rests mainly on Gödel’s Theorems and the existence of uncomputable functions, is pretty solid. I do think analog systems, especially complex ones, might indeed be capable of “computation” impossible for any reasonable digital system to simulate with full fidelity. I’m more skeptical quantum effects are the answer. It might be merely a matter of the complexity of analog systems.
November 14th, 2022 at 6:20 pm
Coincidentally, in an article in New Scientist published *today*, Roger Penrose says (emphatically) “Consciousness must be beyond computable physics”
Behind a paywall…
https://www.newscientist.com/article/mg25634130-100-roger-penrose-consciousness-must-be-beyond-computable-physics/
I may have to go out to a shop, hand over some notes and buy a paper copy next week (how antiquated)!
November 15th, 2022 at 7:33 pm
Yeah, well, that’s his main thesis. I get the sense he’s pretty sure that part is right but sees the quantum effects part (especially the specific idea about tubules) as more of an open question.
November 16th, 2022 at 2:45 am
Quick responses…
WS: Is your bone of contention: consciousness is a process that cannot be computed or that perhaps the answer lies in quantum behaviors?
HB: Both
WS: I’m aware of the criticism it’s a mistake to conflate quantum mechanics and consciousness just because both are big science mysteries. But it seems to beg the question in assuming consciousness can’t be quantum. Is it really that impossible the most astonishingly complicated mechanism we know might leverage nature all the way down to the quantum level?
HB: No, it is possible, just as many things are possible. But why latch on to this? That’s what I’m missing: the justification of why it “has to be”.
WS: Pigeons apparently use a quantum interaction with Earth’s magnetic field to navigate
HB: I didn’t know that. I’m going to have to read https://www.scientificamerican.com/article/how-migrating-birds-use-quantum-effects-to-navigate/.
WS: My point is that the granularity required to simulate consciousness may be formidable. Even prohibitive.
HB: Yes, but that is practicality. I mean it as part of a thought experiment
WS: There is time granularity as well. How fine must the time slices be?
HB: Fine enough for the behaviour to emerge. But the question is: how do you test for consciousness? How do you detect it? (you need to know what it is already)
WS: It might be worth noting that I divide these into three major (fuzzy) categories: functional, network, physical simulation.
WS: A physical simulation replicates the physics of the meat.
HB: Yes – I’m talking about a physical simulation where the underlying models are sufficient for the phenomena to emerge, rather than imitating it for some higher-level purpose.
WS: I think it boils down to whether a digital simulation can accurately compute, to the necessary resolution.
HB: (A first real disagreement!) True for practical purposes but not for scientific explanation. For example, an Ising model simulation can show the self-organization which leads to temperature-dependent ferromagnetism. It is not a question of how accurate it is. It is that it demonstrates the behaviour without any extra information being provided.
WS: I’m not surprised the Penrose/Hameroff notion wouldn’t get mentioned. There’s an assumption it can’t possibly be right. It gets dismissed out of hand.
HB: OK. Seth says that his position on functionalism is one of ‘suspicious agnosticism’, avoiding it.
WS: I think Penrose’s main idea, which rests mainly on Gödel’s Theorems and the existence of uncomputable functions, is pretty solid.
HB: This is the crux of the matter. I think I’d need to go back to ‘Godel, Escher, Bach’ rather than ‘The Emperor’s New Mind’. Both vaguely imprinted in my brain over 30 years now.
WS: I do think analog systems… might indeed be capable of “computation” impossible for any reasonable digital system to simulate with full fidelity.
HB: For me, full fidelity is not necessary (as said above).
WS: I’m more skeptical quantum effects are the answer.
HB: Agreed. I searched out *short* thoughts from Hameroff and found the following Youtube video.
HB: The attraction of calling upon quantum effects is backward time referral as a way of counteracting Libet’s 300 milliseconds in order to salvage free will.
HB: He mentions consciousness being epiphenomenal.
HB: My position: (i) how can we speculate on the Hard Problem when we have so little knowledge of the Easy Problem?
HB: (ii) How does consciousness make a difference (examples to study: blindsight and aphantasia)?
HB: (iii) How do we detect it?
WS: It might be merely a matter of the complexity of analog systems.
HB: Back to practicalities.
Posting now rather than never. Off to work!
(how do you do your blue italic response quoting?)
November 16th, 2022 at 3:21 am
Final comment from me for today: my responses above are not real answers. I think we are merely clearing the ground at this stage.
November 16th, 2022 at 11:24 am
The easy one first:
“(how do you do your blue italic response quoting?)”
For serious conversations I use my text editor (gvim, a Windows GUI version of vi from Unix) to write responses offline, and I have a macro that wraps a paragraph in <EM> tags. It also uses a STYLE attribute to set the color. So, I end up with:
<EM STYLE=”color:#0000bf;”>”(how do you do your blue italic response quoting?)”</EM>
The macro also wraps the paragraph in double-quotes. You, too, can use (most!) HTML tags in comments. For example <EM>use the EMphasis tags to italicize</EM> and <STRONG>the STRONG tags for bold</STRONG>. Note the leading slash in the closing tag. Very important!
Unfortunately, WordPress strips STYLE information in comments not written by the blog’s author, so I can make the text blue but it doesn’t allow it from subscribers. I can’t make text blue when I comment on other blogs, for instance, but I can still use italics and bold.
Apologies for the over-explanation if you’re conversant enough with HTML that all I needed to say was, “I use HTML and CSS.”
Okay, that was easy enough. But not short. In any event, back to the top:
“Both”
Okay, so both axes then. Later you say it’s a physical simulation you have in mind, so I’ll focus on that. My next question is whether you’re a “strong” or “weak” computationalist. The difference being the former asserts “the brain IS a computer” and therefore, per Church-Turing, implementing it in digital computers is just a change of platform. The latter admits the brain is an analog system but asserts its physical simulation would produce outputs indicating conscious thought.
From what you’ve said so far, you support weak computationalism. Do your views extend into strong computationalism? I wrote this post about the two. Funny thing! When I checked the link, I noticed in the comments I’d linked to that same Penrose interview. I knew it was familiar!
“No, it is possible, just as many things are possible. But why latch on to this. That’s what I’m missing: the justification of why it ‘has to be’.”
I don’t think anyone is insisting it ‘has to be’. More like thinking it’s worth looking into. I wonder about our ability to “be of mixed mind” — to have “love/hate relationships” — and wonder if superposition plays a role. It’s only suggestive. What I find interesting on a sociological level is the resistance to even considering the idea. It seems unscientific to me to insist some unexplored branch of the tree isn’t worth exploring. Brains function down to, at least, the molecular level, and that’s definitely quantum territory, so quantum effects seem worth exploring. (It’s easy to forget that chemistry is quantum.)
“Yes, but that is practicality. I mean it as part of a thought experiment”
In the no physical limits sense? If Penrose is right about consciousness not being a computable function (and I suspect he is), then even so. Assuming we stay within the confines of digital computation as defined in Computer Science (i.e. something implementable with lambda calculus or a Turing Machine). If we can have a no physical limits quantum computer, that’s another matter, entirely. A big deal about QC is the ability to simulate reality down to the quantum level, so one big enough to simulate a brain should produce results indistinguishable from reality.
FWIW, I also support the notion of artificial physical brain isomorphs working. Asimov’s “positronic brains” if you’re familiar with his robot stories. Star Trek: TNG paid homage with Data’s positronic brain. The idea is such a thing has a physical network, physical synapses and neurons, etc. I am what I call a structuralist — I think the physical structure matters. Which is why I’m skeptical of sims.
“But the question is: how do you test for consciousness? How do you detect it? (you need to know what it is already)”
Yes, definitely an issue. To me, it’s a key irony of human consciousness studies. It’s not just that we don’t really know what consciousness is, it’s that we can’t even agree on a definition of what it is we’re all studying. My yardstick is the Rich Turing Test (RTT). It involves prolonged interaction over, say, a month or more. If another “intelligence” can convince me over time that I’m talking to a real consciousness, then I’m willing to grant it sovereignty as what an SF story I just read calls a Legal Entity (LE).
“[Penrose’s main idea… of uncomputable functions] is the crux of the matter. I think I’d need to go back to ‘Godel, Escher, Bach’ rather than ‘The Emperor’s New Mind’.”
Heh, GEB is pretty heavy going. I came across my copy in a box a while back and decided to reread it. Enjoyed it, but it covers what is now such familiar territory that I bogged down and never got back to it. Kinda want to, though. Pretty awesome book. (He did a much shorter version, I Am a Strange Loop, that expresses his main points from GEB without going down so many (interesting) rabbit holes. A far more accessible book. This post gets a bit more into it.)
Very briefly, computable functions are enumerable (per Turing Halting). Infinite, but countably so. Non-computable functions are not enumerable and, thus, an uncountable infinity — a larger set than a countable one. So, there are infinitely more non-computable functions than computable ones. Penrose asserts consciousness, if considered a function, must be a non-computable one. Mostly because of Gödel, whose theorem is essentially the same proof as Turing’s Halting, and both depend on Cantor’s diagonalization proof that the real numbers are uncountable. Quintessentially, non-computable functions are like real numbers.
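In symbols (my own gloss of the standard cardinality argument, not Penrose’s notation):

```latex
|\{\text{computable } f\}|
  \;\le\; |\{\text{Turing machines}\}|
  \;=\; \aleph_0
  \;<\; 2^{\aleph_0}
  \;=\; |\{\, f : \mathbb{N} \to \{0,1\} \,\}|
```

In that counting sense, almost every function is non-computable.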
“My position: (i) how can we speculate on the Hard Problem when we have so little knowledge of the Easy Problem?”
I very much agree! Although, I’d say we can speculate but with the understanding of how hypothetical it is and how conditional it is on further knowledge.
This is long enough. I picked out the, as you say, ground clearing stuff I don’t think is contentious. I’ll submit a new comment below to continue the conversation.
November 16th, 2022 at 3:48 pm
More ground clearing…
WS: I use HTML and CSS.
HB: HTML OK, CSS is beyond me. I’ll stick to ‘WS:’/’HB:’.
WS: whether you’re a “strong” or “weak” computationalist.
HB: Weak, as you suspected. If I really am a computationalist!
#include thoughts_on_pragmatism_science_and_metaphysics.txt
WS: I don’t think anyone is insisting it ‘has to be’. More like thinking it’s worth looking into.
HB: I should have said ‘must’, not ‘has to be’ – because I was referring to Penrose’s assertion that “Consciousness must be beyond computable physics” in the New Scientist article.
WS: It seems unscientific to me to insist some unexplored branch of the tree isn’t worth exploring.
HB: Feyerabend’s pluralistic ‘Anything goes’: yes, explore anything. But don’t arbitrarily cling to any old theory.
Now you may say Penrose’s quantum theory of consciousness isn’t ‘any old theory’.
I am failing to understand the merits of it.
WS: It’s easy to forget that chemistry is quantum.
HB: Yes, in connecting explanatory levels, MICRO-scopic quantum effects lead to chemical MACRO-scopic effects.
Hence the migrating birds…
HB: [on pigeons] I’m going to have to read https://www.scientificamerican.com/article/how-migrating-birds-use-quantum-effects-to-navigate/.
HB: I’ve now read it….
The quantum mechanism enables a highly sensitive magnetic field sensor.
A bird’s magnetic sensors are in the retinas.
A photon of blue light hits a protein, creating a radical pair: two molecules, each with an odd number of electrons.
The electron spin on one molecule oscillates rapidly for a while, highly sensitive to Earth’s magnetic field,
eventually settling into a stable, detectable state.
A quantum mechanical effect, but no quantum spookiness.
WS: If Penrose is right about consciousness not being a computable function … Which is why I’m skeptical of sims.
HB: This is where I need to fill in my knowledge. To be revisited at some future date.
WS: It’s not just that we don’t really know what consciousness is, it’s that we can’t even agree on a definition of what it is we’re all studying.
HB: Yes. Consciousness = ‘that shockingly mysterious thing that is entirely subjective, which may or may not be different for you than it is for me’.
WS: My yardstick is the Rich Turing Test (RTT). If another “intelligence” can convince me over time that I’m talking to a real consciousness, then I’m willing to grant it …
HB: Yes. Of course, things can be stupid but conscious or super-intelligent but unconscious, so it is a sufficient test but not a necessary test.
There’s the old puzzle: if consciousness is merely epiphenomenal, how is it possible to express shock at the idea of having consciousness?
But then, a veridical simulation of me would presumably also express shock, whether it is conscious or not.
This is where I would go on about blindsight and aphantasia, after we’ve finished the ground clearing.
#include blindsight_and_aphantasia.txt
WS: He did a much shorter version, I Am a Strange Loop … A far more accessible book.
HB: Thanks for the recommendation. I actually have ‘I Am a Strange Loop’ on my bookshelf but have never read it.
I’ll read your https://logosconcarne.com/2013/07/19/strange-loops/ as tonight’s homework.
WS: I’ll submit a new comment below to continue the conversation.
HB: Please don’t! … just yet. I need to get on with some other stuff instead of this more interesting conversation! Radio silence may follow for a while.
November 16th, 2022 at 7:18 pm
“I was referring to Penrose’s assertion that “Consciousness must be beyond computable physics” in the New Scientist article.”
Ah. Sorry, I misunderstood. I thought you were referring to the Hameroff-Penrose quantum theory. Penrose’s Emperor’s New Mind argument (the ENM argument?) depends on the notion of non-computable functions, and…
“This is where I need to fill in my knowledge. To be revisited at some future date.”
And we can pick it up then!
“Radio silence may follow for a while.”
Understood. Knowing that, I’ll hear from you when I hear from you.
As an aside, about the birds, “A quantum mechanical effect but no quantum spookiness.” If you mean entanglement, agreed. Warning::Minor personal crusade: But that’s not what Einstein meant by “spooky action”. He was referring to the instantaneous collapse of the wavefunction everywhere upon making a “measurement” — a key mystery in QM and something that happens in every measurement. Ol’ Albert hated it.
(Hmmm. I should do a post about science facts everyone knows but which actually aren’t quite what everyone knows. There’s one associated with Heisenberg Uncertainty, that observation plays a role, and one associated with Hawking radiation about virtual particles. The former is dead wrong, and the latter is over-simplified to the point of being wrong.)
November 16th, 2022 at 12:01 pm
In response to “I think it boils down to whether a digital simulation can accurately compute, to the necessary resolution, the trajectory of the physical analog system through its phase space.” you replied:
“It is not a question of how accurate it is. It is that it demonstrates the behaviour without any extra information being provided.”
My question: what constitutes a sufficient demonstration of the behavior? Consider a Solar system simulation. If we plug in the current positions and it shows the planets momentarily orbiting in agreement with reality, but then they go flying off in different orbits, has the simulation passed that test?
[I don’t know if you watched HBO’s Westworld. It had the interesting idea that simulated minds in a VR slowly go off track and devolve into insanity. (Yet for some reason, the minds in the robots didn’t suffer this divergence. Perhaps because their brains were physical isomorphs of human brains?) They seemed to reference the orbital N-body problem (or weather prediction), which does the same thing: short-term accuracy, long-term divergence into nonsense.]
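To make that divergence concrete, here’s a toy sketch (my own made-up three-body setup, not real planetary data): two runs differing by a one-part-in-a-billion nudge track each other at first, then drift into entirely different orbits.

```python
import numpy as np

G = 1.0  # toy units

def accelerations(pos, masses):
    """Pairwise softened Newtonian gravity for N bodies."""
    acc = np.zeros_like(pos)
    for i in range(len(masses)):
        for j in range(len(masses)):
            if i != j:
                r = pos[j] - pos[i]
                acc[i] += G * masses[j] * r / (r @ r + 1e-9) ** 1.5
    return acc

def trajectory(pos, vel, masses, dt=0.01, steps=5000):
    """Leapfrog integration; returns positions of body 0 over time."""
    pos, vel = pos.copy(), vel.copy()
    out = []
    for _ in range(steps):
        vel += 0.5 * dt * accelerations(pos, masses)
        pos += dt * vel
        vel += 0.5 * dt * accelerations(pos, masses)
        out.append(pos[0].copy())
    return np.array(out)

masses = np.array([1.0, 1.0, 1.0])
pos = np.array([[-1.0, 0.0], [1.0, 0.0], [0.0, 0.5]])
vel = np.array([[0.2, 0.3], [0.2, -0.3], [-0.4, 0.0]])

nudged = pos.copy()
nudged[0, 0] += 1e-9  # one-part-in-a-billion difference

a = trajectory(pos, vel, masses)
b = trajectory(nudged, vel, masses)

for step in (100, 1000, 5000):  # separation grows roughly exponentially
    print(step, np.linalg.norm(a[step - 1] - b[step - 1]))
```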
Consciousness seems to occur in a tension between static and regularity (pure noise and pure order), neither of which allows conscious states. That balance, I suspect, is its non-computable aspect. But as you said, our understanding of consciousness is so scant that things could easily go either way.
“The attraction of calling upon quantum effects is backward time referral as a way of counteracting Libet’s 300 milliseconds in order to salvage free will.”
I’m not a fan of retro-causality notions. (I believe time is fundamental, axiomatic, and only runs in one direction.) As I understand it, some research questions Libet’s results, but regardless, I’ve never seen much mystery there (or issues with free will). Our “subconscious” mind or “unconscious” mind or whatever isn’t some other system off to the side. It’s all one iceberg, but much of it is inaccessible to our conscious mind. We tend to think only our conscious mind is the agent of free will, but I say it’s our whole mind that decides. That 300 ms is just the hidden part of the iceberg deciding and presenting its analysis to the conscious mind for action. (This is the source of much of our intuition.)
“He mentions consciousness being epiphenomenal.”
I don’t think it is, but it’s a vexing question how consciousness drives the body. Again, though, I think it comes down to holism. The brain and body are one.
“How does consciousness make a difference (examples to study: blindsight and aphantasia)?”
Solid questions. I expect it’ll turn out to involve the hidden parts of the iceberg.
Okay, this was shorter than I expected. I suppose it boils down to the two basic questions, one about computability, the other about simulation. I do think Penrose is likely right about the former, but the latter is more an open question in my mind. I’m skeptical of sims, but they might work, at least in principle. (I’m a lot more skeptical on a practical basis. I do think the physical requirements are prohibitive.)
FWIW, rather than tackling GEB you might try the Wikipedia article on computability. The SEP has some good articles, too.
December 17th, 2022 at 1:55 am
What’s your opinion on this: (http://disagreeableme.blogspot.com/2021/12/the-computer-metaphor.html?m=1)
December 17th, 2022 at 9:24 am
Heh, well obviously I disagree with Disagreeable Me. 😁 I’ve written a lot of posts here explaining why.
DM’s argument turns mainly on expanding the definition of “computer” to include an analog information processing system such as the brain. If you would call an old-fashioned analog radio a “computer” then, sure, the brain is also a “computer”. I think this muddles the definition of “computer” and isn’t helpful. But the brain is certainly an analog information processing system. Far more similar to an analog radio than a digital computing device.
DM explicitly mentions Turing Machines, and here his argument divides among different propositions. One is that the brain itself functions as a computer (which DM asserts), that there is some Turing Machine that emulates a conscious mind. I think this is highly unlikely. Roger Penrose, in The Emperor’s New Mind, argues that consciousness must transcend what TMs are capable of. His argument turns on Gödel’s Incompleteness theorems as well as the fact that, while computable functions are enumerable, the set of all functions is uncountably infinite. So, Penrose argues, consciousness cannot be algorithmic.
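In rough symbols (my summary of the standard counting argument, not a quote from the book):

```latex
% Countably many machines, uncountably many functions:
\begin{align*}
  \left|\{\text{Turing Machines}\}\right| &= \aleph_0
    && \text{(every machine has a finite description)} \\
  \left|\{\, f : \mathbb{N} \to \{0,1\} \,\}\right| &= 2^{\aleph_0} > \aleph_0
    && \text{(uncountable, by Cantor's diagonal argument)}
\end{align*}
% So "almost all" functions have no machine that computes them.
```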
DM also mentions simulations, where we simulate the physical system of the brain and hope the simulation turns out to be of an awake, conscious mind. I’ve written about this a lot, too. Firstly, such a simulation may be beyond our means. Simulations are necessarily much larger than what they simulate, and brains would have to be simulated to a very low level: certainly at least cellular, possibly even molecular (or worse, atomic). The brain’s connection to the body and the environment seems important, so they may have to be simulated as well. Even if we can simulate the meat, will it be conscious? Maybe such a sim will just produce a brain in a coma, or one filled with white noise, or an insane one, or any of a variety of more likely scenarios.
Bottom line, I’m skeptical it will ever happen, and I’m not sure it’s even possible. Consciousness may not be mechanistic.
January 10th, 2023 at 2:53 pm
Hi WS,
I’ve finally crawled out of my den for a brief mini-communication with the world. Below are some (new) thoughts in response to what is said above.
Re: computation and analog computers
Years ago, I did some work on an Analog Computer, but I would not describe it as a Computer (‘an Analog Computer is not a Computer’!). Bear with me on this for a bit: I would call it an Analog Model. It was being used as a model, being an analog of something real that was too big to bring into the lab. It provided a *mechanically*-programmed analog (electronic) signal path.
It makes sense to me to restrict ‘Computer’ to something that implements a *discrete-time* *algorithm*. That obviously covers what we normally mean by Digital Computers: discrete-time and discrete values. In principle, it would also cover stored-program, discrete-time, *analog*-valued (infinite precision, less noise) machines. *That* would be an ‘Analog Computer’ in my book. Not sure if any exist. I imagine having a switched-capacitor design: an *electrically*-programmed datapath Analog Model. But there would be some digital sequencing stuff implementing the algorithm, electrically controlling the datapath where the computation was being done.
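For what it’s worth, here’s a minimal sketch of that idea (my toy illustration, not any real hardware): a switched-capacitor integrator is discrete-time (values change only at clock ticks) yet the values themselves are continuous voltages.

```python
# Toy sketch of a discrete-time, analog-valued machine: a switched-
# capacitor integrator. Each clock tick dumps the charge on C1 onto C2,
# so y[n] = y[n-1] + (C1/C2) * x[n]. Floats stand in for true analog
# voltages; real hardware would add noise and leakage.

def switched_cap_integrator(x_samples, c1=1e-12, c2=10e-12):
    y = 0.0
    out = []
    for x in x_samples:
        y += (c1 / c2) * x   # charge transfer at the clock edge
        out.append(y)
    return out

# A constant 1 V input ramps up by C1/C2 = 0.1 V per tick:
print(switched_cap_integrator([1.0] * 5))
```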
I think I’ve had a change of mind regarding the word ‘computation’. For example, I previously said (above), “Brains are not computers (obviously!) but they do do computation – in a distributed, self-organizing way.” But with my new thinking, I disagree. I would now say that brains do ‘information processing’ or some such. And that fits in exactly with what you say: ‘but the brain is certainly an analog information processing system’. Brains clearly don’t have any *discrete-time* algorithmic execution, so they are not computers (and hence are not doing ‘computation’).
Splitting electronics up into the 3 ‘C’s (‘Computing’, ‘Communication’ and ‘Control’), we could be particular with our terminology here (radical idea coming up!) and say Computers do computation, Controllers control and Communication devices are used to communicate. And there’s Information everywhere you look.
An Analog radio obviously fits into the ‘Communications’ bucket. There might be some gain control (‘Control’) along the signal path (possibly even implemented with an Analog-Computer-that-is-not-a-computer!), but the goal of Communication is to NOT transform the signal/information – unlike computation. A Digital radio might comprise a Digital Computer (‘Computer’) performing a Forward Error Correction algorithm, for example, in the name of Communications, with the intent of getting information from A to B WITHOUT any information transformation. One ‘C’ can fit inside the implementation of another ‘C’ to help achieve the latter’s goal.
Of course, Digital Computers can be used to SIMULATE Analog Models. That’s what Analog engineers use to help them design analog circuits. Analog signal values would be discretized – to 64-bit resolution, say. This quantization is likely not to be a problem; if the simulation has signals in the signal path with, say, the least-significant 20 bits under the noise floor, it is irrelevant. And it wouldn’t particularly matter if the noise added in the simulation was from a THRNG (True Hardware Random Number Generator) analog circuit (used in cryptography) or from a Pseudo-Random Number Generator (PRNG, with the output shaped to have sufficiently close spectral qualities). With analog models that are *good enough*, the simulation results will match the physical, real-world behaviour *well enough*. The voltage at time t will not match, and may diverge, but the overall higher-level behavior will be the same (it will, for example, amplify with *similar* gain and phase shift, or it will *similarly* oscillate). And that will be true whether you are using a THRNG or a *good enough* PRNG.
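A toy illustration of that last point (my sketch, nothing to do with any particular simulator): estimate the gain of a simulated noisy amplifier twice, with two differently seeded noise sources standing in for a THRNG capture and a PRNG. Sample-by-sample voltages differ; the measured gain does not, to within the noise.

```python
import numpy as np

def measured_gain(noise):
    """Simulate a gain-of-10 amplifier with additive noise; estimate gain."""
    t = np.linspace(0.0, 1.0, 10_000)
    x = 0.1 * np.sin(2 * np.pi * 50 * t)   # 100 mV, 50 Hz test signal
    y = 10.0 * x + noise                    # ideal amplifier plus noise
    return np.sqrt(np.mean(y**2) / np.mean(x**2))  # RMS gain estimate

thrng_like = np.random.default_rng(1).normal(0, 0.01, 10_000)
prng_like  = np.random.default_rng(2).normal(0, 0.01, 10_000)

print(measured_gain(thrng_like))  # ~10.0
print(measured_gain(prng_like))   # ~10.0; samples differ, behavior matches
```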
I’ll stop prattling.
January 11th, 2023 at 3:26 pm
Well, I quite agree with the so-called prattle. And I can appreciate the difficulty of classifying analog computing! I explored that in my pancomputation posts (part I, part II, part III). I decided I could draw a fuzzy line roughly between an abacus and a slide rule. The former is essentially a scratch pad or memory register (not a computer). The latter, though, uses logarithmic scales (a physical model) to do math. With an abacus, the “computer” is the person using it. With a slide rule, some computation is off-loaded to the device.
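That off-loading is literal: the slide rule’s scales physically add logarithms, and adding logarithms multiplies numbers. A one-line sketch:

```python
import math

# What sliding the scales does mechanically: log(a) + log(b) = log(a*b).
def slide_rule_multiply(a, b):
    return 10 ** (math.log10(a) + math.log10(b))

print(slide_rule_multiply(2, 8))  # ~16, to slide-rule (analog) precision
```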
Analog “computers” just get more complicated from there, and I always felt bad about excluding them as “computing” devices. Like you, I tended to equate computing with (synchronous) symbol processing. There was a necessary connection between any “computer” and a Turing Machine that implemented it. (Or a lambda calculus. A formal, finite, discrete system.) But I shifted to a dualism view of model-map-reality that easily includes analog computing.
Bottom line (FWIW), in an analog model — computer assisted or not — I focus on both the “analog” and the “model” as significant, yet opposing. Being analog does pull such devices away from being thought of as computers, but the duality of the model and its map to real-world physical phenomena pulls it towards being seen as a computer. (At least per my “computers have a dual layer” view — expressed most recently in Analog Computers. Which you’ve seen and commented on, but we never picked up the thread there. You didn’t seem to like the dual-layer view then, but maybe you’ll see it differently now?)
I like the three ‘C’s of electronics. I see the harnessing of the electron as I do the harnessing of fire and the harnessing of computing machines. (Obviously, each is required for the next: fire → electrons → computers.) Controllers covers a lot of ground, but it’s interesting how Communication and Computing goals contrast.
Modeling analog systems… resolution, precision, data size… all formidable with large systems. Totally agree “good enough” is good enough, but my question has always been just how good that needs to be. Cellular, at the very least, I should think, if not dipping into the molecular. Could atomic or quantum behavior need to be part of the model?
In contrast, there are neural nets, but it looks like your other comment comes closer to that topic, so I’ll end here.
January 10th, 2023 at 4:26 pm
Hi again,
Re: Penrose
This 90-minute talk by Penrose from 1995 takes his thinking on consciousness a little further than the ENM book:
The chess example from 5 minutes in seems to provide a clear, key part of the argument. At 8 minutes: ‘whatever computers do, they don’t understand chess’. But computers can simulate/emulate/implement bio-inspired machine learning. This seems to be the crux of the matter: do we think it will ever be possible for the implementation of a brain-like thing to form an argument (maybe not articulated in English sentences) for why the pawn or bishop should not take the rook? I think we are a long way away from this, but the short answer is: yes, I believe so. Even sooner, I can imagine a machine-learning computer applying formal logic equivalence-checking techniques and asserting something like “Black CANNOT check IF square x is blocked”, which is a higher-level understanding of chess.
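A propositional toy of that kind of assertion (my abstraction; real equivalence checkers work on circuits or richer logics, and this is nothing like real chess): exhaustively verify that ‘square x is blocked’ implies ‘Black cannot give check’ over a tiny abstract model.

```python
from itertools import product

def black_gives_check(square_x_blocked, rook_on_open_file):
    # Hypothetical encoded rule: this check needs the open file.
    return rook_on_open_file and not square_x_blocked

# Verify the implication blocked -> not check over all assignments,
# the way an equivalence checker exhaustively proves a property.
holds = all(
    not (blocked and black_gives_check(blocked, rook))
    for blocked, rook in product([False, True], repeat=2)
)
print("Black CANNOT check IF square x is blocked:", holds)  # True
```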
To avoid getting bogged down in Penrose tilings and wormholes, is this a suitable case example for uncomputability, for considering the merits or otherwise of his ideas on consciousness?
And I just discovered a 25min (even shorter with faster playback!) overview of ENM which seems to provide a fair presentation of the book… https://www.youtube.com/watch?v=p6WYWQUIQsc
I’m tempted to edit the Youtube transcript down to the core argument – it would highlight just how many ‘could’, ‘might’, etc there are in his causal thinking. (I could have been petty there and written ‘casual’ but I didn’t.)
January 11th, 2023 at 4:20 pm
Caveat: I haven’t watched those videos, yet. I’ll get around to watching the shorter one; not sure about the longer one. Maybe. I do like Penrose. (I’m almost done with The Road to Reality. I’ve got Fashion, Faith, and Fantasy to read next.)
I would guess Penrose would see ANNs as still subject to Turing Halting and Gödelian issues. I suspect any deterministic computing system would necessarily be so subject. I don’t know where Penrose stands, but I’ve long believed an analog physical isomorph of a brain would work like a brain. The question has always been whether a software-only version can produce consciousness — if the numbers it outputs, suitably understood, indicate a conscious mind. (FWIW, for one thing, I think such a system cannot be deterministic except at the lowest levels.)
I divide software versions into three types: functional, topological, physical. ANNs, obviously, are topological (with elements of functional). I believe we’ve already talked about physical simulations of the meat? Maybe so, but working as expected is just one tree in a forest of failure modes. That said, physical simulations don’t seem as “Penrose-limited” as topological or functional ones. In fact, when Penrose wrote the book, I believe functional was the only mode people pursued. Topological was out of reach computationally (and physical was a pipe dream). Functional simulations seem the most limited by Penrose’s view (assuming it’s correct). Topological sims (like ANNs), to the extent they are only software, would seem to run into finiteness (and thus chaos) issues. (A physical isomorph would be topological and — at least to some extent — physical.)
You’ve heard of IIT (Integrated Information Theory)? It sees the network alone as sufficient. I think the topology is necessary but not sufficient. ANNs, likewise, have a necessary topology, but I’m not convinced the topology alone is sufficient. As I mentioned, I see ANNs as very sophisticated search engines. Modern realizations of Searle’s “Chinese Room”. (I call it the Giant File Room.)
As I understand it (and I may well not), a network trained for Chess (or Go) embodies a multi-dimensional “landscape” of potentials. Chess (or Go) board states are points in this landscape, and each has a “gradient” indicating favorable directions of play. Some local part of the landscape is explored to find the best path — the most favorable gradient. The landscape is the result of training involving millions of games, so it’s a superposition of millions of winning paths that together create the gradient.
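In toy form (my sketch; real engines are vastly more elaborate), play is just hill-climbing on that learned landscape:

```python
import random

# A "trained" value function maps board states to scores; choosing a move
# is just following the best local gradient. No history, no reasons.
scores = {}
def value_fn(state):
    # Stand-in for a trained network: a fixed arbitrary score per state.
    if state not in scores:
        scores[state] = random.random()
    return scores[state]

def apply_move(board, move):
    return board + (move,)   # toy states: tuples of moves played

def choose_move(board, legal_moves):
    # "The gradient in that direction was best" is the only rationale.
    return max(legal_moves, key=lambda m: value_fn(apply_move(board, m)))

print(choose_move((), ["e4", "d4", "Nf3"]))
```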
Which is why the ANN has no “understanding” of the game. No history is attached to all those games (other than the effect they had carving the landscape). None were miraculous escapes; none were associated with births, deaths, or marriages. And it’s why it’s so hard to get an answer as to why it made a given move. The only answer it has is that “the gradient in that direction was best.”
What would impress me is having an ANN decide to try an inefficient strategy “just for fun, to see what happens”. Or out of kindness to give the other player a better chance.
January 12th, 2023 at 8:05 pm
“I’m tempted to edit the Youtube transcript down to the core argument – it would highlight just how many ‘could’, ‘might’, etc there are in his causal thinking. “
Why would that be anything other than an indication of Penrose’s intellectual honesty? I’d say those are the appropriate words when discussing such issues. I’ve been reading The Road to Reality, and he often calls attention to the fact that his views are definitely not shared by mainstream physics. But I think he’s quite right to question, for one example, unitarity in quantum mechanics. We know QM can’t be a final answer (nor can GR). But it’s funny, and very possibly wrong, to assume QM is more correct than GR. We need another Newton or Einstein!
February 4th, 2023 at 11:45 am
FWIW, I’ve watched both those videos now. They didn’t really add anything because I’m already familiar with his views via his books. It would be interesting to hear his views in the context of the latest advances in machine learning, but I think he would still feel an ANN did not understand chess so much as act as an excellent search engine in the chess space.
For instance, given a specific board configuration, I would assume it would always make the same moves, based on where that board configuration sits in the configuration space. It would have none of the real-world associations a human player would.
It’s neat that we can solve complicated problems with such systems, but they’re a very long way from being conscious systems.