The previous posts avoided spoilers and talked about HBO’s Westworld in general terms of its themes and characters — stuff that is apparent just from the trailers and basic setup. This post isn’t like that! Do not read this post unless you’ve seen all of season one!
Or unless you really like spoilers or just don’t care about the series. But if you do care, trust me on this: you do not want this spoiled! You may even be better off avoiding any interweb discussion … the fans really did figure out some of the secrets before their big reveal. (On the other hand, the show’s creators have made it clear the truth was always in plain view. And so it was.)
Here are my questions and observations about the last episode and the season as a whole. I think we all have a few questions…
In the previous post I wrote about some of the general themes I saw in HBO’s Westworld. Such big picture topics are inherent in the basic description of the series — intelligent robots used as playthings — and don’t require spoiling plot points or character revelations. Everything I wrote about in the last post is part of the general context of the show.
In this post I want to look more closely at things that struck me in particular, but it requires exposing certain aspects of character or implementation that could count as spoilers if one is very strictly trying to avoid knowing anything about the show.
But if you have some idea about what’s going on, maybe just from trailers, this post shouldn’t spoil anything for you. I won’t give away any of the big secrets or reveals.
Way back in 1958, science fiction author and critic Theodore Sturgeon coined the term Sturgeon’s Revelation: “90% of film, literature, consumer goods, etc. is crap.” This became known as Sturgeon’s Law, while Sturgeon’s actual law (from a 1956 story), that “nothing is always absolutely so,” is largely forgotten. (Philosopher Daniel Dennett expanded the Law to say that 90% of everything is crap!)
I’ve always found this applies especially to science fiction TV. And in this Anno Stella Bella era, there is a lot of SF TV, so naturally there is a lot of crap. (Honestly, I don’t even pay attention to the SyFy channel anymore.)
Happily: HBO’s Westworld … not crap! In fact, it’s a gem that offers many facets worthy of (non-spoiler) thought and discussion…
Blessed be the Force!
As long as I’ve been picking my own reading material, a huge fraction of it has been science fiction. I’ve been doing that picking since about 1963, so let’s just call it 50+ years. Up until around the mid-1990s, it would have been hard to name a science fiction book or movie I didn’t know (and in many cases, own).
But somewhere near the end of the last century, science fiction became a full-fledged, mass-produced commodity that, through sheer over-exposure, grew dull and uninteresting. In a way, I blame George Lucas and Star Wars, so I split SF into two eras:
Before Lucas (B.L.) and Anno Stella Bella (ASB).
Credit where credit is due: both of the major ideas in this post come from Fareed Zakaria on his CNN Sunday program, GPS. If you follow TV news at all, you know Sunday mornings have such long-running standards as Meet the Press (on NBC since 1947!) and Face the Nation (on CBS since 1954). (Or was it Meet the Nation and Face the Press?)
Zakaria is one of the good ones: very intelligent, highly educated, calm, and measured. He’s well worth listening to. (I’ve realized one attraction of TV news is the chance to — at least sometimes — hear educated, intelligent talk. It’s a nice respite from most TV entertainment.)
Two things on Zakaria’s last episode really rang a bell with me.
Over the last few weeks I’ve written a series of posts leading up to the idea of human consciousness in a machine. In particular, I focused on the difference between a physical model and a software model, and especially on the requirements of the software model.
The series is over, and I have nothing particularly new to add, but I’d like to summarize my points and provide an index to the posts in the series. It seems I may have given readers a bit of information overload.
Hopefully I can achieve better clarity and brevity here!
Over the past few weeks we’ve explored background topics regarding calculation, code, and computers. That led to an exploration of software models — in particular a software model of the human brain.
The underlying question all along is whether a software model of a brain — in contrast to a physical model — can be conscious. A related, but separate, question is whether some algorithm (aka Turing Machine) can functionally reproduce human consciousness without regard to the brain’s physical structure.
Now we focus on why a software model isn’t what it models!
Last time I introduced four levels of possibility regarding how mind is related to brain. Behind Door #1 is a Holy Grail of AI research, a fully algorithmic implementation of a human mind. Behind Door #4 is an ineffable metaphysical mind no machine can duplicate.
The two doors between lead to physical models that recapitulate the structure of the human brain. Behind Door #3 is the biology of the brain, a model we know creates mind. Behind Door #2 is the network of the brain, which we presume encodes the mind regardless of its physical construction.
This time we’ll look more closely at some distinguishing details.
Last week we took a look at a simple computer software model of a human brain. (We discovered that it was big, requiring dozens of petabytes!) One goal of such models is replicating consciousness — a human mind. That can involve creating a (potentially superior) new mind or uploading an existing human mind (a very different goal).
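That “dozens of petabytes” figure can be sanity-checked with a quick back-of-envelope calculation. The numbers below are commonly cited rough figures, not figures from the post itself, so treat this as an illustrative sketch rather than the model’s actual accounting:

```python
# Back-of-envelope size of a naive brain model (illustrative figures):
# ~86 billion neurons, ~1,000 synapses per neuron, and an assumed few
# hundred bytes of state per synapse (weights, timing, connectivity).

NEURONS = 86e9               # rough human neuron count
SYNAPSES_PER_NEURON = 1_000  # rough average
BYTES_PER_SYNAPSE = 250      # hypothetical per-synapse state

total_bytes = NEURONS * SYNAPSES_PER_NEURON * BYTES_PER_SYNAPSE
petabytes = total_bytes / 1e15
print(f"{petabytes:.1f} PB")  # prints "21.5 PB"
```

With these assumptions the total lands right in “dozens of petabytes” territory; bump the per-synapse state up or down and the estimate scales linearly.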
Now that we’ve explored the basics of calculation, code (software), computers, and (computer software) models, we’re ready to explore what’s involved in attempting to model a (human) mind.
I’m dividing the possibilities into four basic levels.