I have always liked those comparisons that try to illustrate the very tiny by resizing it to more imaginable objects. For instance, one says: if an orange were as big as the Earth, then the atoms of that orange would be as big as grapes. Another says: if an atom were as big as the galaxy, then the Planck length would be the size of a tree.
The question I have with these is: How accurate are these comparisons? Can I trust them to provide any real sense of the scale involved? If I imagine an Earth made of grapes, am I also imagining an orange and its atoms?
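The accuracy question is actually easy to check with a bit of ratio arithmetic: scale everything by the same factor and see what sizes fall out. Here is a quick sanity check using round-number sizes (all in meters); the specific figures are my own ballpark assumptions, not values taken from the original comparisons.

```python
# Rough sanity check of the two scale analogies.
# All sizes in meters; values are ballpark round numbers.

EARTH_DIAMETER  = 1.3e7    # ~12,742 km
ORANGE_DIAMETER = 8e-2     # ~8 cm
ATOM_DIAMETER   = 1e-10    # ~1 angstrom, a typical atom
GALAXY_DIAMETER = 9.5e20   # Milky Way, ~100,000 light-years
PLANCK_LENGTH   = 1.6e-35

# Analogy 1: blow an orange up to Earth size;
# its atoms grow by the same factor.
scaled_atom = ATOM_DIAMETER * (EARTH_DIAMETER / ORANGE_DIAMETER)
print(f"Scaled atom: {scaled_atom:.3f} m")          # ~0.016 m, grape-ish

# Analogy 2: blow an atom up to galaxy size;
# the Planck length grows by the same factor.
scaled_planck = PLANCK_LENGTH * (GALAXY_DIAMETER / ATOM_DIAMETER)
print(f"Scaled Planck length: {scaled_planck:.2e} m")  # ~1.5e-4 m
```

With these numbers, the first analogy holds up nicely: a scaled atom comes out around 1.6 centimeters, roughly grape-sized. The second comes out closer to a tenth of a millimeter than a tree; it only reaches tree height (around 14 meters) if "galaxy" is swapped for the observable universe (~8.8e26 m across), so that may be how the comparison was originally stated.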
Sunday night I watched the new Apollo 11 documentary by Todd Miller. At first, I was really into the show. When the Apollo 11 mission happened I was just starting high school and had been a big fan of the space program going back to Project Mercury. Watching a Saturn V lift off has always induced a profound sense of awe in me.
But I was increasingly struck by how white it all was. And male, but really, really white. That diluted the joy I was feeling with some deep regrets about how we still act today over what are basically paint jobs and some minor accessories.
Given where we find ourselves these days, 50 years hasn’t brought as much progress as it should have. We’re still really stupid about paint jobs.
For Sci-Fi Saturday I thought I’d mention how much I’ve enjoyed some recent Netflix original productions about robots (the very intelligent kind). As usual, I’m a little late to the party. For most people with Netflix, the post’s title probably immediately evoked either or both shows.
I’m speaking, of course, of Love, Death & Robots, an anthology of animated shorts, and of I Am Mother, a movie about a robot raising a child (humanity’s last best hope). I was delighted by the former immediately, but with the latter it wasn’t until I knew the entire story that my opinion changed from poor to good. Through most of the movie it seemed to be a rather flawed story I wasn’t sure I liked.
But the ending put all the plot holes in much better light!
I have a growing list of links to articles that catch my eye, things I’d like to post about (for whatever reason). But there’s a tension between posts based on lists of links or draft posts or idea files versus posts based on what I’m currently thinking about.
I seem to feel the latter isn’t enough, that I need a reserve for “lean times” — which never happen. More and more, I post when something strikes me as worth the effort. The “idea pile” seems almost like homework.
Anyway, here are some things that recently caught my eye.
In the last week or so I read an interesting pair of books: Through Two Doors at Once, by author and journalist Anil Ananthaswamy, and The Order of Time, by theoretical physicist Carlo Rovelli. While I did find them interesting, and I’m not sorry I bought them (as Apple ebooks), I can’t say they added anything to my knowledge or understanding.
I was already familiar with the material Ananthaswamy covers and knew of the experiments he discusses — I’ve been following the topic (the two-slit experiment) since at least the 1970s. It was nice seeing it all in one place. I enjoyed the read and recommend it to anyone with an interest.
I had a little trouble with the Rovelli book, perhaps in part because my intuitions of time are different from his, but also because I found it a bit poetic and hand-wavy.
We ought to be able to run a moldy orange against him and win by a landslide, but parts of this country are fully embracing their ugly underbelly. It’s feeling like the 1960s again.
This ends an arc of exploration of the Combinatorial-State Automaton (CSA), an idea by philosopher and cognitive scientist David Chalmers — who, despite all these posts, is someone whose thinking I regard very highly on multiple counts. (The only place my view diverges much from his is on computationalism, and even there I see some compatibility.)
In the first post I looked closely at the CSA state vector. In the second post I looked closely at the function that generates new states in that vector. Now I’ll consider the system as a whole, for it is only at this level that we actually seek the causal topology Chalmers requires.
It all turns on how much matching abstractions means matching systems.
This is a continuation of an exploration of an idea by philosopher and cognitive scientist David Chalmers — the idea of a Combinatorial-State Automaton (CSA). I’m trying to better express ideas I first wrote about in these three posts.
The previous post explored the state vector part of a CSA intended to emulate human cognition. There I described how illegal transitory states seem to violate any isomorphism between mental states in the brain and the binary numbers in RAM locations that represent them. I’ll return to that in the next post.
In this post I want to explore the function that generates the states.