In the last post I explored how algorithms are defined and what I think is — or is not — an algorithm. The dividing line for me has mainly to do with the requirement for an ordered list of instructions and an execution engine. Physical mechanisms, from what I can see, don’t have those.
For me, the behavior of machines is only metaphorically algorithmic. Living things are biological machines, so this applies to them, too. I would not be inclined to view my kidneys, liver, or heart as embodied algorithms (though their behavior can be described by algorithms).
Of course, this also applies to the brain and, therefore, the mind.
There’s a discussion that’s long lurked in a dusty corner of my thinking about computationalism. It involves the definition and role of algorithms. The definition isn’t particularly tricky, but the question of what fits that definition can be. Their role in our modern life is undeniably huge — algorithms control vast swaths of human experience.
Yet some might say even the ancient lowly thermostat implements an algorithm. In a real sense, any recipe is an algorithm, and any process can be described by some algorithm.
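To make the thermostat claim concrete, here is a minimal sketch of a thermostat's behavior written out as an algorithm. The setpoint, hysteresis band, and function names are illustrative assumptions; the point is that the same behavior a bimetallic strip produces physically can be described as an ordered list of instructions.

```python
# A thermostat's behavior described as an algorithm (illustrative
# values; a real thermostat does this with physics, not instructions).

def thermostat_step(current_temp, setpoint, hysteresis=1.0, heater_on=False):
    """Return the new heater state for one control cycle."""
    if current_temp < setpoint - hysteresis:
        return True   # too cold: turn the heater on
    if current_temp > setpoint + hysteresis:
        return False  # too warm: turn the heater off
    return heater_on  # within the deadband: leave the heater as-is

# One pass over some sampled temperatures:
state = False
for temp in [18.0, 19.5, 21.5, 20.5, 22.5]:
    state = thermostat_step(temp, setpoint=21.0, heater_on=state)
```

Note the distinction this illustrates: the physical thermostat's behavior is *described* by these instructions, but the device itself executes no instructions at all.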
But the ultimate question involves algorithms and the human mind.
For the last two weeks I’ve written a number of posts contrasting physical systems with numeric systems.
(The latter are, of course, also physical, but see many previous posts for details on significant differences. Essentially, the latter involve largely arbitrary maps between real world magnitude values and internal numeric representations of those values.)
I’ve focused on the nature of causality in those two kinds of systems, but part of the program is about clearly distinguishing the two in response to views that conflate them.
Last time I left off with a virtual ball moving towards a virtual wall after touching on the basics of how we determine if and when the mathematical ball virtually hits the mathematical wall. It amounts to detecting when one geometric shape overlaps another geometric shape.
In the physical world, objects simply can’t overlap due to physics — electromagnetic forces prevent it. An object’s solidity is “baked into” its basic nature. In contrast, in the virtual world, the very idea of overlap has no meaning… unless we define one.
This time I want to drill down on exactly how we do that.
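One way such a definition can look, as a sketch: treat the ball as a circle and the wall as an axis-aligned rectangle, and declare that a "hit" occurs when the nearest point on the wall lies within the ball's radius. The shapes and names here are illustrative assumptions, not the actual code behind the posts.

```python
import math

def ball_hits_wall(cx, cy, r, wall_x0, wall_y0, wall_x1, wall_y1):
    """True if the circle (cx, cy, r) overlaps the rectangle."""
    # Find the point on the wall nearest the ball's center...
    nearest_x = max(wall_x0, min(cx, wall_x1))
    nearest_y = max(wall_y0, min(cy, wall_y1))
    # ...and compare that distance to the ball's radius.
    return math.hypot(cx - nearest_x, cy - nearest_y) <= r
```

Nothing in the numbers themselves forbids overlap; the simulation only "knows" a collision happened because we chose this test and chose to act on its result.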
Last time we saw that, while we can describe a maze abstractly in terms of its network of paths, we can implement a more causal (that is: physical) approach by simulating its walls. In particular, this allows us to preserve its basic physical shape, which can be of value in game or art contexts.
This time I want to talk more about virtual walls as causal objects in a maze (or any) simulation. Walls are a basic physical object (as well as a basic metaphysical concept), so naturally they are equally foundational in the abstract and virtual worlds.
And ironically, “Something there is that doesn’t love a wall.”
First I discussed five physical causal systems. Next I considered numeric representations of those systems. Then I began to explore the idea of virtual causality, and now I’ll continue that in the context of virtual mazes (such as we might find in a computer game).
I think mazes make a simple enough example that I should be able to get very specific about how a virtual system implements causality.
With mazes, it’s about walls and paths, but mostly about paths.
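The two views can be sketched side by side. Below, a tiny maze is represented wall-centrically as a character grid, and the path-centric view is then just the question of which adjacent cells are open. The grid layout and function names are my own illustrative assumptions.

```python
# Wall-centric: a grid where '#' is a wall and ' ' is open path.
grid = [
    "#####",
    "#   #",
    "# # #",
    "#   #",
    "#####",
]

def can_move(x, y):
    """A move is legal only if the destination cell isn't a wall."""
    return grid[y][x] != "#"

def neighbors(x, y):
    """Path-centric view: the open cells adjacent to (x, y)."""
    steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    return [(x + dx, y + dy) for dx, dy in steps if can_move(x + dx, y + dy)]
```

The abstract path network can always be derived from the walls, but the walls preserve the maze's physical shape, which a pure graph of paths throws away.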
This is the third of a series of posts about causal systems. In the first post I introduced five physical systems (personal communication, sound recording, light circuit, car engine, digital computer). In the second post I considered numerical representations of those systems — that is, implementing them as computer programs.
Now I’d like to explore further how we represent causality in numeric systems. I’ll return to the five numeric systems and end with a much simpler system I’ll examine in detail next time.
Simply put: How is physical causality implemented in virtual systems?
Last time I explored five physical systems. This time I want to implement those five systems as information systems, by which I mean numeric versions of those five systems. The requirement is that everything has to be done with numbers and simple manipulations of numbers.
Of course, to be useful, some parts of the system need to interact with the physical world, so, in terms of their primary information, these systems convert physical inputs into numbers and convert numbers into physical outputs.
Our goal is for the numeric systems to fully replace the physical systems.
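The conversion at the system's boundary can be sketched like this. The map from magnitude to number is largely arbitrary: we pick a range and a resolution. The range, bit depth, and names below are illustrative assumptions, in the spirit of an analog-to-digital converter.

```python
# An arbitrary map between physical magnitudes and numbers:
# an 8-bit converter for temperatures (range and bit depth are
# illustrative choices, not dictated by the physics).

def to_number(magnitude, lo=-40.0, hi=60.0, bits=8):
    """Map a physical magnitude onto an integer code (ADC-like)."""
    clamped = max(lo, min(magnitude, hi))
    levels = (1 << bits) - 1                # 255 steps for 8 bits
    return round((clamped - lo) / (hi - lo) * levels)

def to_magnitude(code, lo=-40.0, hi=60.0, bits=8):
    """Map an integer code back to an approximate magnitude (DAC-like)."""
    levels = (1 << bits) - 1
    return lo + code / levels * (hi - lo)
```

A different range or bit depth would serve just as well, which is exactly the point: inside the numeric system there are only codes, and what those codes mean is a matter of the map we chose.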
I’ve seen objections that simulating a virtual reality is a difficult proposition. Many computer games, and a number of animated movies, illustrate that we’re very far along — at least regarding the visual aspects. Modern audio technology demonstrates another bag of tricks we’ve gotten really good at.
The context here is not a reality rendered on screen and in headphones, but one either for plugged-in biological humans (à la The Matrix) or for uploaded human minds (à la many Greg Egan stories). Both cases do present some challenges.
But generating the virtual reality for them to exist in really isn’t all that hard.
Maybe it’s a life-long diet of science fiction, but I seem to have written some trilogy posts lately. This post completes yet another, being the third of a triplet exploring the differences between physical objects and numeric models of those objects. [See Magnitudes vs Numbers and Real vs Simulated for the first two in the series.]
The motivation for the series is to argue against a common assertion of computationalism that numeric models are quintessentially the same as what they model. Note that these posts do not argue against computationalism itself, but against an argument that conflates physical and numeric systems.
In fact, this distinction doesn’t argue against computationalism at all!