I’ve seen objections that simulating a virtual reality is a difficult proposition. Many computer games, and a number of animated movies, illustrate that we’re very far along — at least regarding the visual aspects. Modern audio technology demonstrates another bag of tricks we’ve gotten really good at.
The context here is not a reality rendered on screen and in headphones, but one either for plugged-in biological humans (à la The Matrix) or for uploaded human minds (à la many Greg Egan stories). Both cases do present some challenges.
But generating the virtual reality for them to exist in really isn’t all that hard.
I don’t know if the objections were based primarily on the difficulty of the interfaces or on the difficulty of generating the virtual environment. I’m going to focus on the environment, but I’ll start with a few words about the interfaces.
There are two quite different scenarios:
- Direct neural connection with biological humans.
- Virtual connection with virtual minds.
Regarding item #2: If we know enough about minds to run a simulated one inside a computer, then we necessarily know all about how that mind receives inputs and generates outputs.
So the difficulty of the interface is the same as the difficulty of implementing those minds in the first place. Mind simulation is a different topic (taken for granted here), so item #2 presents no additional difficulties.
Directly connecting with the neural system of a biological human is a considerable challenge, but we currently have technology that reads or stimulates individual nerves.
The issue is the scale and size. There are a lot of nerves involved, and they are very small (and packed together). But those are engineering problems; we’re good at those.
The proposition is an old one, known as the Brain in a Vat.
It’s based on the idea that all your brain receives or sends are impulses in the connecting nerves (mainly through the brain stem). The brain has no way of knowing if those impulses link to a real body or a computer interface.
Ultimately, what we need are two maps and the hardware micro-technology engineering to directly read or stimulate each nerve coming from or going to the brain.
The first map (a software system) converts between virtual reality data and general nervous system data. It provides generic data streams that any user could plug into. More on this in a bit.
The second map (also software) converts that generic data into the specific nervous system of a given user (it’s the adapter plug — it controls the hardware that links with the brain). This map requires identifying the connecting nerves of a given person, which is likely very challenging.
[Presumably it involves presenting stimuli and monitoring nerve activity into the brain as well as motor nerves coming out. That process provides a map of what does what and what the signals need to look like.]
This leaves the difficulty of generating the virtual environment.
Notice that the first map above, which converts between the environment and a generic nervous system, implies a point of view. Certainly the data for that nervous system involves things it sees, hears, smells, touches, or tastes.
Which means the first map extracts these things from the environment given a location and gaze (point of view or POV). “I’m standing here, and I’m looking at that.”
In a game, it’s often the player’s POV. In animated movies, it’s the camera. In a virtual reality, it’s your POV.
The crucial thing about this map, this POV, is that it can freely rove around the virtual environment and look at what it wants.
(In some cases, it can be inside a wall or other object, which rather limits the view.)
But the key point is that the virtual environment exists as a defined model inside the simulation. That’s what allows the POV to roam.
Many games offer a virtual reality one can freely explore, so I’m surprised at the notion there is something canned or preset.
The only video game I ever really got into, Descent (1995), had many levels one could freely explore (once you killed all the robots trying to kill you).
MS Flight Simulator, another virtual reality game, goes back to the 1980s!
A key point is that virtual reality simulations now are really good. Between helmets and haptic gloves and walkable surfaces, they’re pretty amazing.
[In fact, the high quality of virtual environments today is a supporting argument in the Reality Is a Virtual Simulation hypothesis.]
It may help to understand a little about what a virtual reality is.
It starts with a coordinate system. Usually, since the idea is to emulate our 3D reality, that coordinate system is also 3D — typically some version of x, y, and z. This comprises the space the virtual reality exists in.
All objects in this reality (walls, rocks, books, trees, baseballs) have coordinates that describe their location and orientation in that space.
Each object also has a set of properties that describe its shape, weight, surface texture, color, and many other properties (whatever is needed).
An object “exists” in the reality if there is an instance of it somewhere in the environment. Its location and orientation are a matter of coordinates, so it can be rotated or moved just by changing those coordinates.
Generally, the virtual space is a database of some kind, and it contains instances of objects that exist in the space. That database defines the virtual world.
For example, the system has a template for baseball and box objects, so a virtual box of virtual baseballs involves adding to the database an instance of a box and many instances of baseballs. The coordinates of the various baseballs place them at various locations inside the box; the coordinates of the box place it somewhere in the virtual reality.
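As a minimal sketch (in Python, with invented templates, coordinates, and property names — nothing here is any particular engine’s actual format), that database of instances might look like this:

```python
from dataclasses import dataclass, field

@dataclass
class Instance:
    """One object existing in the virtual space."""
    template: str                         # e.g. "baseball" or "box"
    position: tuple = (0.0, 0.0, 0.0)     # x, y, z coordinates
    orientation: tuple = (0.0, 0.0, 0.0)  # rotation about each axis
    properties: dict = field(default_factory=dict)  # color, texture, mass, ...

# The "database" defining the virtual world: a collection of instances.
world = []

# A virtual box of virtual baseballs: one box instance plus many
# baseball instances placed at coordinates inside the box.
box = Instance("box", position=(10.0, 0.0, 5.0), properties={"mass": 0.5})
world.append(box)
for i in range(12):
    bx, by, bz = box.position
    world.append(Instance("baseball",
                          position=(bx + (i % 4) * 0.1,
                                    by + 0.05,
                                    bz + (i // 4) * 0.1),
                          properties={"mass": 0.145}))

print(len(world))  # 13 instances: the box and a dozen baseballs
```

Moving the box would just mean changing its coordinates (and those of the baseballs riding inside it).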
Placing a POV anywhere within that world — that is, giving the coordinates for a point of view — asks the system to render reality for that POV (so we can look at the box of baseballs from any angle).
The process of placing a POV in a virtual space filled with virtual objects and rendering how the space looks from that POV is well-explored, almost ancient, territory. Games and animated movies have been doing it for decades.
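Real renderers involve a lot of machinery, but the core geometric step is simple: project each world point onto an image plane relative to the POV. A toy sketch, assuming an unrotated POV looking down the +z axis (the focal length and coordinates are invented for illustration):

```python
def project(point, pov, focal=1.0):
    """Project a 3D world point onto a 2D image plane for a POV at
    position `pov`, looking down the +z axis (no rotation, to keep
    the sketch minimal)."""
    x = point[0] - pov[0]
    y = point[1] - pov[1]
    z = point[2] - pov[2]
    if z <= 0:
        return None  # behind the viewer; nothing to render
    return (focal * x / z, focal * y / z)

# A baseball 2 units ahead and 1 unit to the right of the POV:
print(project((1.0, 0.0, 2.0), (0.0, 0.0, 0.0)))  # (0.5, 0.0)
```

Move the POV and re-project, and you’re looking at the box of baseballs from a new angle.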
[One trick that makes virtual realities feel more realistic is procedural generation. Rather than having to specify each tree, or alternatively having all trees look alike, there is a procedure for creating a tree that uses randomized parameters to generate similar trees. Clouds, grass, and mountains are all good candidates for procedures.]
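A hedged sketch of what such a procedure might look like (the parameters and ranges are invented for illustration):

```python
import random

def make_tree(seed):
    """Generate a 'tree' description from randomized parameters,
    so every tree is similar but no two are identical."""
    rng = random.Random(seed)
    return {
        "height": rng.uniform(4.0, 12.0),       # meters
        "trunk_radius": rng.uniform(0.1, 0.5),
        "branch_count": rng.randint(5, 20),
        "leaf_color": rng.choice(["light green", "dark green", "yellow-green"]),
    }

# A hundred similar-but-distinct trees from a hundred seeds.
forest = [make_tree(seed) for seed in range(100)]

# The same seed always gives the same tree, so the forest can be
# regenerated on demand instead of being stored in the database.
assert make_tree(42) == make_tree(42)
```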
Note that reality is only rendered for a specific POV. It might be done in real time as the POV moves around, but it is only ever done for that POV in that moment.
In virtual reality, things really only do appear when you look at them!
So, as far as how the virtual reality looks, that’s the easiest part. It’s something we’ve been doing for a long time.
Currently we use a map that converts virtual reality data into pixels for a 2D screen. For a Matrix scenario, we need a map that converts VR data into nervous system data. For an upload scenario, presumably there is an obvious map from virtual object data to virtual mind.
There are other senses besides vision, so our VR objects need properties describing how they feel, smell, sound, and taste. These simply extend the set of visual and physical properties for objects. Rendering an object now includes those additional properties.
Again, the map converts them to nervous system data or to whatever the uploaded or simulated minds understand as sensory inputs.
Now there was a ringer in the list of object properties: weight.
Or more properly: mass.
All the other properties mentioned apply to how the object appears to us (including to all five senses).
Mass concerns how an object behaves.
Which brings up the idea of causality and physical law in the virtual reality… because there isn’t any unless we put it there explicitly.
For example, since objects aren’t physical and only exist as numbers, there’s no problem with them overlapping. Or with just floating in the air (exactly the way bricks do not).
Virtual objects don’t have weight or inertia, so a virtual book is as easy to lift as a virtual bulldozer… unless the virtual reality explicitly implements weight and inertia due to mass.
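A minimal sketch of what “explicitly implements” means here, with an invented object format, a made-up tick rate, and a crude floor at y = 0 — gravity exists in this world only because this loop applies it:

```python
GRAVITY = -9.8   # m/s^2 along the y axis (an assumption of this sketch)
DT = 1.0 / 60.0  # one clock tick at 60 updates per second

def step(obj):
    """Advance one object by one clock tick. `obj` is a dict with
    'position' and 'velocity' as [x, y, z] lists."""
    obj["velocity"][1] += GRAVITY * DT   # gravity accelerates everything equally
    for axis in range(3):
        obj["position"][axis] += obj["velocity"][axis] * DT
    if obj["position"][1] < 0.0:         # crude floor: don't fall through y=0
        obj["position"][1] = 0.0
        obj["velocity"][1] = 0.0

ball = {"position": [0.0, 10.0, 0.0], "velocity": [0.0, 0.0, 0.0]}
for _ in range(600):  # ten simulated seconds
    step(ball)
print(round(ball["position"][1], 2))  # 0.0 — the ball has landed on the floor
```

Delete the first line of `step` and objects float forever, exactly as described above.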
[What gave Neo and his friends “magic” powers in the Matrix was their ability to ignore the system’s programmed virtual rules of physics.]
Something as simple as enforcing the solidity of objects (the walls and floors of buildings, not to mention everything inside them) requires explicit programming that knows the size and shape properties of all objects.
[I remember how one of the advances in CAD-CAM software (circa early 1990s?) was the ability for it to detect when moving parts occupied the same space. We knew how to do it — that’s just math — we just needed fast enough computers to pull it off.]
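The simplest version of that math is an axis-aligned bounding-box overlap test (a sketch; real engines refine this with much more detailed shapes):

```python
def boxes_overlap(a, b):
    """Axis-aligned bounding-box test: do two objects occupy the same
    space? Each box is (min_corner, max_corner), corners in x, y, z."""
    (amin, amax), (bmin, bmax) = a, b
    return all(amin[i] <= bmax[i] and bmin[i] <= amax[i] for i in range(3))

wall = ((0.0, 0.0, 0.0), (10.0, 3.0, 0.2))
player = ((4.0, 0.0, -0.1), (4.5, 1.8, 0.1))  # stepping into the wall
print(boxes_overlap(wall, player))  # True: the engine should push back
```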
A decent VR requires object solidity at the least, but it should go as far beyond that in implementing basic physics as possible. Certainly it should be easier to lift a book than a bulldozer. Gravity should make an appearance (objects shouldn’t be allowed to float).
One question is how real a virtual reality needs to be. The point of the Matrix was fooling biological humans, but uploaded or simulated minds might not need a fully fleshed out reality.
They might not want one! Floating, for one, sounds like fun, and on some level, the whole appeal of being virtual is escaping physics and reality (like aging). One could hike up Everest if one wanted, but one could also just teleport to the top.
So it’s possible a VR doesn’t need to implement all of physics, but it certainly should implement much of it.
The easiest way to do it (the way it’s done now, to the extent it’s done now) is to allow objects to have physical properties that your VR engine uses to give them physical behaviors.
Speaking of behaviors, a virtual space is a coordinate space (usually 3D), but there is also a notion of time and of actions in the environment.
The simulation has a clock of some kind that ticks off moments of time. That enables dynamics within the system.
As with space, the physics of how time behaves depends on explicit programming. For example, the rules of Special Relativity would have to be explicitly coded into the dynamic behavior of the VR.
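As a toy illustration of “explicitly coded in”: in the sketch below (with an invented object format), each object’s on-board clock dilates only because the code divides by the Lorentz factor — remove that division and Special Relativity vanishes from the VR:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def gamma(speed):
    """Lorentz time-dilation factor for an object moving at `speed` (m/s)."""
    return 1.0 / math.sqrt(1.0 - (speed / C) ** 2)

def advance_local_clock(obj, dt):
    """Advance one object's on-board clock by `dt` seconds of simulation
    time. Relativity exists in this VR only because of the division below."""
    obj["local_time"] += dt / gamma(obj["speed"])

ship = {"speed": 0.8 * C, "local_time": 0.0}
advance_local_clock(ship, 1.0)       # one second of simulation time
print(round(ship["local_time"], 2))  # 0.6 — seconds that pass on board
```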
All the laws of physics have to be explicitly coded in, and some of it can be subtle. Friction, for example, or air turbulence, requires fine-grained properties and calculations.
(Temperature is an interesting one. What does it mean to be virtually too hot or too cold? What effect does that have on a virtual person?)
I haven’t gotten into how a person navigates around a virtual space or interacts with it, but this has gotten long and that should follow from what I’ve covered so far.
I can explore it further if anyone has an interest. It mainly involves converting motor neuron outputs into commands for the environment.
Sci-Fi Saturday shout-out to E.E. “Doc” Smith and his 1947 novel Spacehounds of IPC. The main character is a “Computer” named Dr. Percival (“Steve”) Stevens. The book was written back when the term referred to a person who specialized in doing math.
Stay virtually real, my friends!