The original intent is to write a full post about them — which I sometimes do — but often, if the urge to bang out a post right away isn’t there, the email with the link ends up sitting in the folder. The longer those links sit, the less likely I am to post about them.
So occasionally I open the cage and let some of them return to the wild…
The first one is actually a blog post:
The idea that quantum effects have something to do with how minds arise from brains is controversial. The big problem has been specifying exactly where and how such putative effects occur — there is nothing apparently quantum about the physics of the brain as far as we currently know.
There is no obvious gap that demands explaining. There is the physics, the chemistry, and the biology, and together they seem to fully explain the mechanism.
Except for two glaring questions:
- Why is consciousness (whatever it is) phenomenal?
- Why is consciousness not epiphenomenal?
Put another way: Why is there something it is like to have a complex brain? And: How do mental states affect physical reality?
But recently some scientists have been taking a second look at quantum cognition. The linked article is about a paper by theoretical physicist Matthew Fisher.
Nuclear spins, Matthew reasoned, might store QI in our brains. He catalogued the threats that could damage the QI. Hydrogen ions, he concluded, would threaten the QI most. They could entangle with (decohere) the spins via dipole-dipole interactions.
He reasoned that phosphorus was a good candidate, and then designed a molecule that would allow entangled states to persist as long as possible.
Matthew designed this molecule to block decoherence. Then, he found the molecule in the scientific literature. The structure is called a Posner cluster or a Posner molecule. I’ll call it a Posner, for short. Posners appear to exist in simulated biofluids, fluids created to mimic the fluids in us. Posners are believed to exist in us and might participate in bone formation. According to Matthew’s estimates, Posners might protect phosphorus nuclear spins for up to 1-10 days.
So what makes this interesting is that Fisher started from a requirements point of view and then went looking for a system that met those requirements.
And apparently found one.
Hameroff was rather put out that Preskill said: “Penrose and Hameroff had some interesting ideas, but I find Fisher’s arguments more persuasive.”
FWIW, the basic idea seems to be that quantum “computations” among entangled neurons might promote synchronous firing.
A comment by Hameroff caught my eye:
Without quantum effects in the brain (1) real-time causal action is impossible and consciousness is necessarily epiphenomenal, (2) global brain zero-lag coherence/synchrony is (probably) impossible, (3) photosynthesis would be impossible and we probably wouldn’t exist. If a potato can utilize quantum coherence its likely our brains (and life in general) evolved mechanisms to do so.
I like the bit about the potato. Good point!
This next article caught my eye because it’s about the brain operating in a state of criticality:
I’ve long wondered if there isn’t some sort of balance point in the brain.
In particular I’ve wondered if it has anything to do with our apparent sense of free will. Specifically, I’ve wondered if it might be the means by which we have genuine free will.
If the mind is a system with a lot of cognition happening all the time, some of it below our ability to access, then perhaps various related thoughts compete in a very noisy environment, one that operates in a critical, balanced state where very small inputs can tip the outcome.
For example, what really happens when I stand at my pantry door gazing at various cans of soup trying to decide what to have for dinner?
Obviously I’m imagining having each of them and picking which of those possible futures I prefer. Given two similar choices, say minestrone or lentil, what tiny impulse decides on one or the other?
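Purely as a toy illustration (my own sketch, nothing from the linked article), the idea of a tiny input tipping an otherwise balanced, noisy competition can be modeled as two options accumulating noisy evidence, with a small constant nudge toward one of them. The `bias`, `noise`, and `steps` parameters here are all invented for the illustration:

```python
import random

def decide(bias=0.0, noise=1.0, steps=300, rng=None):
    """Accumulate noisy evidence for two options over a fixed window,
    then pick whichever is ahead. `bias` is the tiny nudge toward
    minestrone -- the 'small input' that tips the balance."""
    rng = rng or random.Random()
    a = b = 0.0
    for _ in range(steps):
        a += bias + rng.gauss(0, noise)  # evidence for minestrone
        b += rng.gauss(0, noise)         # evidence for lentil
    return "minestrone" if a > b else "lentil"

# With zero bias the choice is a coin flip; a nudge of 0.05 per step
# (tiny next to noise with standard deviation 1.0) tilts the odds
# well past chance over many trials.
trials = 500
wins = sum(decide(bias=0.05, rng=random.Random(i)) == "minestrone"
           for i in range(trials))
print(wins / trials)
```

The point of the toy is only that a per-step nudge far smaller than the noise still reliably shifts which option wins, which is the flavor of "small inputs tipping a critical balance" I have in mind.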
The argument against (and the linked article only moves the needle a little bit — there is still much to understand) is that the brain’s operation seems too robust to be affected by tiny inputs.
And, as for the brain operating in a critical state (the article’s topic), there is better evidence of such behavior, but issues remain.
They and their colleagues also analyzed public data on monkeys and turtles. Although the data sets were too limited to confirm criticality with the full three-exponent relationship, the team calculated the ratio between two different power-law exponents indicating the distributions of avalanche sizes and durations. This ratio — which represents how quickly avalanches spread out — was always the same, regardless of species and whether the animal was under anesthesia. “To a physicist, this suggests some kind of universal mechanism,” Copelli said.
Alain Destexhe of the National Center for Scientific Research (CNRS) in France, the critic who proposed the equation relating the three exponents as a test of criticality, called the universality of the results “astonishing,” but said he isn’t sure if it means what critical brain proponents say. He points out that because avalanches in alert brains scale similarly to those in brains under deep anesthesia — when they have no sensory input — criticality may have nothing to do with how the brain processes information, and could be due to some other aspect of brain dynamics.
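To make the exponent talk concrete, here is a toy sketch of my own (not the paper’s actual analysis): simulate avalanches with a critical branching process, estimate the size and duration power-law exponents by maximum likelihood, and form the ratio the article describes. The branching model, the `xmin` cutoff, and the continuous-approximation estimator are all my assumptions for illustration:

```python
import math
import random

def avalanche(rng, cap=5000):
    """One avalanche of a critical branching process (mean offspring = 1).
    Returns (size, duration); `cap` just guards against rare runaways."""
    active, size, duration = 1, 1, 1
    while active and size < cap:
        # each active unit triggers 0, 1, or 2 others (mean exactly 1)
        active = sum(rng.choice((0, 1, 2)) for _ in range(active))
        size += active
        duration += 1 if active else 0
    return size, duration

def mle_exponent(xs, xmin):
    """Continuous-approximation maximum-likelihood power-law exponent."""
    tail = [x for x in xs if x >= xmin]
    return 1 + len(tail) / sum(math.log(x / xmin) for x in tail)

rng = random.Random(42)
events = [avalanche(rng) for _ in range(2000)]
sizes = [s for s, _ in events]
durations = [d for _, d in events]

tau = mle_exponent(sizes, xmin=4)        # size exponent (ideally ~3/2 here)
alpha = mle_exponent(durations, xmin=4)  # duration exponent (ideally ~2 here)
ratio = (alpha - 1) / (tau - 1)          # the cross-species ratio in the article
print(f"tau={tau:.2f} alpha={alpha:.2f} ratio={ratio:.2f}")
```

For a critical branching process the textbook mean-field values are 3/2 for sizes and 2 for durations, giving a ratio of 2; the estimates from a short simulation like this land only roughly in that neighborhood, which is itself a reminder of how delicate these exponent measurements are on real data.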
As with ideas about quantum cognition, this is an interesting path to explore. It may come to nothing, but it does seem to show there’s a lot going on with brains that we have yet to understand.
More to the point, it seems to argue that cognition is more than the sum of neural activity alone — the brain may function more holistically than that.
Jumping from human brains to artificial “brains,” this caught my eye (because, to be honest, I just love stories about how AI sucks):
Why do I hate AI? (And, yes, I do hate AI.) Because I see us increasingly putting our eggs in a basket we haven’t fully explored. Because I see us increasingly making it an inextricable part of our lives.
Just like we did with the internet.
And now so many stories in my news feed are about hackers and scammers and spammers and all the vulnerabilities inherent in our (apparently very badly built) systems.
If I were president (and apparently literally any idiot can be), I’d issue an edict making it illegal for any tech company to create new features or products until they prove all their code is bulletproof.
But I digress.
The point of the article is pretty much carried in its title. Our brains process shapes, but these AI vision systems process textures.
Which is why they’re so easily fooled. It also very likely accounts for certain failures in the Tesla self-driving system, a system that relies entirely on vision.
More to the point, it emphasizes that AI works very differently from our brains, and maybe we should stop even trying to make such systems act like us.
It reminds me a bit of the early days of music and video players. Many tried to make their products look onscreen like real objects: a real CD player or a real VCR. But these were usually the worst user interfaces, often clumsy and weird, because they were onscreen interfaces, not physical objects.
A new technology often calls for a whole new way of doing things!
Lastly, an article that caught my eye because it was about something I’d never considered:
Abiogenesis fascinates me. As with the Big Bang or “the hard problem” of consciousness, I see it as one of the great unsolved mysteries of existence.
For me, the big question has always been: How did RNA get started? I’ve never heard a really good theory — just guesses about “self-replicating clays.”
This article concerns something else: How did the first cells get started?
Life is built from tiny cellular bricks, and a question I’d never considered is how those little packages got started.
This article is a pretty neat story of discovery and seems to answer that question!
And that’s the news for now.
Stay newsworthy, my friends!