Irreducible Concepts

Hard to define…

It’s very easy for discussions to get hung up on definitions, so a serious approach to debating a subject begins with synchronizing everyone’s vocabulary watches. Accurate and nuanced communication requires mutually understood ideas and terminology for expressing those ideas.

Yet some concepts seem almost impossible to define clearly. The idea of “consciousness” is notorious for being a definition challenge, but “morality” or “justice” or “love” are also very difficult to pin down. At the same time, we seem to share mutual basic intuitions of these things.

So the question today is: why are some concepts so hard to define?

I think there are at least two problems with defining some concepts. The first involves reduction; the second involves configuration space.

§ §

Reduction analyzes something in terms of its component parts. A key premise is that the parts combine to fully explain the whole. A second premise is that the parts reduce to sub-parts, which themselves reduce, and so on. Reduction is recursive.

Recursion requires a condition that halts the process. Otherwise, it’s “turtles all the way down” — infinite recursion that never ends. In computational recursion, the halt condition is a base case the code tests for.

Definitional recursion ends where the sub-parts are atomic (indivisible, per the original Greek meaning of the word). It’s not possible (or sensible) to further sub-divide them.

(Ironically actual atoms are not atomic in that original sense — atoms reduce to electrons and nuclei; atomic nuclei reduce to protons and neutrons; and those reduce to quarks. We believe electrons and quarks are the end of the line; no more turtles.)

§

A more familiar example: A book reduces into chapters, which reduce into paragraphs, which reduce into sentences, then into words, and finally into individual characters.

Recursion ends there because characters are atomic: they don’t divide into anything (not in printed books, anyway — one could argue characters in electronic books are composed of bits).
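
To see the halt condition concretely, here’s a toy sketch in Python (the nested book structure and names are mine, purely for illustration): the reduction recurses through the parts and stops at the atomic characters.

```python
# Toy sketch: reduce a nested "book" structure down to its atomic parts.
# A book is a list of chapters, a chapter a list of sentences, and so on;
# individual characters are the base case that halts the recursion.
def reduce_to_atoms(part):
    if isinstance(part, str) and len(part) == 1:
        return [part]                       # atomic: a single character
    if isinstance(part, str):
        return list(part)                   # a sentence splits into characters
    atoms = []
    for sub in part:                        # chapters, paragraphs, sentences...
        atoms.extend(reduce_to_atoms(sub))  # recurse into each sub-part
    return atoms

book = [["A dark night.", "Rain fell."], ["The end."]]   # chapters of sentences
print(reduce_to_atoms(book)[:6])   # ['A', ' ', 'd', 'a', 'r', 'k']
```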

We could talk about the ink (or bits), or the atoms involved, or even take it to the quantum level of electrons and quarks, but there is no “bookness” below the level of a sequence of characters. All books look very much the same at the quantum level.

(To be precise, the quantum state of different books would be different, but picking out the book text would be just about impossible. The text information is vastly swamped by the information of the constituent particles.)

§

The key point is this: Definitions consist of (reduce to) other, presumably simpler, definitions.

This has two consequences:

Firstly, that a definition must not circle back to what it’s defining. That creates a loop of definition with no grounding.

Secondly, to avoid such loops, at some point there must be atomic definitions that cannot be defined by simpler concepts. These atomic definitions ground everything else.

The question is how we construct those atomic definitions.

§ §

The idea of a configuration space is harder to describe. (I’ve written several posts exploring the idea. The previous post is intended as a refresher.)

Configuration spaces are just Cartesian spaces with axes — as many as necessary — representing traits that apply to the subject.

If the subject is cars, one obvious axis is number of doors. Others include (but aren’t limited to): number of wheels, number of cup holders, number of engine cylinders, engine displacement, fuel tank size, wheel size, fuel economy, height, width, weight, model, age, and color.

Note that some axes are smooth (e.g. age, fuel economy) whereas others are lumpy (e.g. numbers of things). Lumpy just means values along that axis jump in discrete steps. Cars, for instance, jump from three wheels to four wheels — there are no cars with 3.137 wheels. Fuel economy, on the other hand, can be any reasonable value.

Given the right set of axes, a given car is a point in the configuration space. We can’t visualize such a space, but we can intuit the general idea from 2D and 3D examples.
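
As a rough sketch of what “a car is a point” means (the axes and values here are invented for illustration, not any standard set):

```python
# Hypothetical axes for a car configuration space (values are invented).
axes = ("doors", "wheels", "engine_cylinders", "fuel_economy_mpg", "age_years")

# One particular car is a single point in that space.
my_car = {"doors": 4, "wheels": 4, "engine_cylinders": 4,
          "fuel_economy_mpg": 31.5, "age_years": 7}

point = tuple(my_car[a] for a in axes)   # (4, 4, 4, 31.5, 7)
```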

§

What’s important about a configuration space is that similar objects (for instance, cars) are “close together” in the space. Cars with the same model and year form tiny clusters of points separated only by the small variations among them.

In the 3D Neapolitan ice cream configuration space, people who really like vanilla and chocolate, but not strawberry, create a cloud of close points in one corner of the cubical space. (The high-vanilla, high-chocolate, low-strawberry corner.)

Mathematically, the distance between two points is the square root of the sum of the squares of their separations along each axis — in other words, the Pythagorean distance. (Configuration spaces use Cartesian coordinates in Euclidean spaces, so naturally Pythagoras applies to measuring distance.)
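
In code, that distance is just Pythagoras generalized to however many axes the space has. A minimal sketch, using made-up points for two similar sedans and a pickup:

```python
from math import sqrt

def distance(p, q):
    """Euclidean (Pythagorean) distance between two points in configuration space."""
    return sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

sedan_a = (4, 4, 4, 31.5, 7)    # the car from the earlier sketch
sedan_b = (4, 4, 4, 29.0, 9)    # a similar car: a nearby point
pickup  = (2, 4, 8, 17.0, 2)    # a less similar vehicle: farther away

print(distance(sedan_a, sedan_b))   # small
print(distance(sedan_a, pickup))    # larger
```

(In practice the axes would need comparable scales, some kind of normalization, or the axes with large values dominate the distance.)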

The key is that a collection of similar objects forms a fuzzy cloud of close points — a fuzzy region in configuration space.

§ §

Some things are fairly easy to define. For example: a second, a meter, a baseball bat, or a frying pan.

The first two are precisely defined. Seconds and meters are also simply defined — their definitions are easily stated. More involved definitions can still be precise. The definition of an electron, for example, or of a 1968 Volkswagen Beetle.

Baseball bats are precisely defined, but in a way that allows variation. In contrast, the definition of the baseball itself is narrower — it allows almost no variation. (There is also room for variation among ’68 Beetles.)

Frying pans aren’t as precisely defined. The definition describes a wide flat pan with a single long handle. There is a typical size range, but there are large and small variants.

(Measurements aren’t enough. We used to take “The Frying Pan From Hell” camping with us — a cast iron monstrosity 30″ in diameter with a bolt-on handle just as long. It cooks 8-10 pancakes at once. Or a pound of bacon.)

What’s relevant to the definition is the ratio between width and height. Frying pans are characteristically wide and flat. They have a characteristic shape.

More general definitions tend to involve characteristics rather than specific properties. (Characteristics are properties, of course, but they imply less requirement and more tolerance.)

§

Seconds, meters, and electrons have specific definitions that make them points in their respective configuration spaces. Objects strictly are, or are not, seconds, meters, or electrons — they have to hit the bullseye.

Baseballs and ’68 Beetles have definitions with little variation, so they form rather small volumes in their spaces. The definition of baseball bat allows more variation, so the volume of baseball bats is larger.

The more general definition of frying pans creates a very large and fuzzy space volume. Note, too, that the definition begins to shift from properties to (characteristic) functionality — a frying pan is a pan for frying things.

Something as simple as a chair has a definition that is almost entirely characteristic and functional. A chair is a flat-ish surface a human can sit on. (And yet, is a rock or log a chair?)

The chair definition has a huge volume in configuration space — think of all the things that are legitimately chairs, from bean bags to folding chairs to thrones.

Here’s a key point: The more specific a definition is, the smaller the region in configuration space (including possibly just a point). The more general something is, the larger its region is.

But general definitions involve multiple properties that interact. Pans have width and height. Is a pan flat enough to be a frying pan (or is it just a low-rider sauce pan)?

Such concepts tend to make the region boundaries fuzzy and vague. Determining if an object fits in borderline cases becomes something of a judgement call.
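
As a toy illustration of that judgement call (the thresholds are invented, not any real standard), one crude way to score “frying-pan-ness” is the width-to-height ratio, with a fuzzy band in the middle instead of a hard cutoff:

```python
def frying_pan_score(width_cm, height_cm):
    """Crude 0..1 'frying-pan-ness' score from the width/height ratio.
    Ratios below 3 read as a sauce pan, above 6 clearly a frying pan,
    and the band in between is the fuzzy borderline region."""
    ratio = width_cm / height_cm
    if ratio <= 3:
        return 0.0
    if ratio >= 6:
        return 1.0
    return (ratio - 3) / 3          # linear ramp across the fuzzy band

print(frying_pan_score(28, 5))      # ratio 5.6 -> ~0.87, probably a frying pan
print(frying_pan_score(20, 9))      # ratio ~2.2 -> 0.0, a sauce pan
```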

§

As Justice Potter Stewart notoriously said regarding pornography, ‘I know it when I see it.’

There is truth to that. It’s another general concept with lots of interacting axes and a very large, vague configuration space.

So how do we ‘know it when we see it’?

We do the same thing as Artificial Neural Nets (ANNs): We train our minds with all the examples of movies, books, videos, various forms of erotica, opinions of others, plus our life experiences. As a result, we make judgement calls on whether something is in the region or not.

Given fuzzy boundaries, some things are hard to judge. When an ANN considers unknown input, it provides a confidence percentage: “I’m 87.4% certain this is a picture of a cat.”

Likewise, when we consider something, our minds are testing for a match. Is this porn? Is this justice? Is this art? Or even: is this a frying pan?

Obviously the accuracy of judgements depends on the quality of the training. It also correlates with quantity: given input of equal quality, more experience is better than less.
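
For what it’s worth, the confidence figure an ANN reports typically comes from something like a softmax over its raw output scores. A minimal sketch with invented numbers:

```python
from math import exp

def softmax(scores):
    """Turn raw output scores into a confidence distribution that sums to 1."""
    exps = [exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

labels = ["cat", "dog", "frying pan"]
scores = [4.1, 2.0, 0.3]                 # invented raw network outputs
confidence = softmax(scores)
best = max(range(len(labels)), key=lambda i: confidence[i])
print(f"I'm {confidence[best]:.1%} certain this is a picture of a {labels[best]}.")
```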

§ §

So to put the pieces together, some concepts are irreducible yet general. This may seem contradictory, but many of our most basic ideas are abstract notions about something.

Because definition requires reduction, a basic characteristic of irreducible notions is that they can’t be defined, only described. Describing them involves labeling example points in the configuration space in terms of quality. Providing both good and bad examples defines the boundaries of the concept.

The more examples provided, the more the definition converges on a clear gestalt. The region of configuration space remains fuzzy because of interacting axes, but discrimination — whether a thing is or isn’t — can still become quite acute.
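
Here’s a toy sketch of describing-by-examples (everything in it is invented for illustration): judge a new point by the labels of its nearest known examples, good and bad, rather than by any explicit formula.

```python
from math import sqrt

def nearest_label(point, examples, k=3):
    """Judge a new point by majority vote among its k closest labeled examples."""
    dist = lambda p, q: sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))
    nearest = sorted(examples, key=lambda ex: dist(point, ex[0]))[:k]
    votes = sum(1 for _, label in nearest if label)
    return votes > k / 2

# (point, is_a_frying_pan) -- hypothetical (width, height) examples in cm
examples = [((28, 5), True), ((30, 6), True), ((24, 4), True),
            ((18, 12), False), ((20, 16), False), ((16, 14), False)]

print(nearest_label((26, 7), examples))   # True: lands in the frying-pan region
print(nearest_label((19, 13), examples))  # False: lands among the sauce pans
```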

This all only scratches the surface, but it introduces the basic parts. I’ll pick up the threads another time.

Stay safe, my friends! Wear your masks — COVID-19 is airborne!


7 responses to “Irreducible Concepts”

  • SelfAwarePatterns

I like the configuration space framework. It offers a method, at least in principle, to quantify some pretty broad concepts. Something else that might help would be adding a frequency or proportion to each value in the dimensions.

For example, it might be that values in a particular dimension in specific examples can range from 20-80, but 80% of examples have the values between 50-60, which is something I think our brains take into account in pattern matching. A more specific example might be an animal that triggers all the fringe values for a dog, but because it’s all fringe values, we wouldn’t see it as a dog. (Maybe it’s a small bear instead.)

    Another factor to consider is that many of these concepts have interactions with their environments. A good example is our perception of color, which is often relative to the surroundings, light levels, etc. For some concepts, this interaction with the environment could be a major factor.

    I’m not seeing how this is irreducible though. It seems like with this framework, you’ve provided one recipe to do a reduction.

    • Wyrd Smythe

      “It offers a method, at least in principle, to quantify some pretty broad concepts.”

      I’ve definitely found it to be one of the more useful notions. It has application in so many different contexts. I introduced the idea early on this blog (it was my fourth post), but I was never happy with it (it was my fourth post). I always meant to revisit it; it just took a whole bunch of years. 🙂

      “For example, it might be that values in a particular dimension in specific examples can range from 20-80, but 80% of examples have the values between 50-60,”

      Absolutely. To some extent, that is automatically captured in the training. Among all the input examples, 80% of them would have values between 50-60, so weighting of that kind is built in.

But any property, characteristic, or function of significance can be an axis or in some way modify an axis. For the configuration space to be fully accurate, all of them have to be accounted for.

“…we wouldn’t see it as a dog. (Maybe it’s a small bear instead.)”

      True story: On my morning walks, I see others, many of them walking dogs. Over time I’ve come to recognize regulars. (I vary my route as much as possible, but I still see the same people over a long enough stretch of time. I am out at roughly the same time of the morning.)

      One woman I’ve seen several times now walks her dog. Big dog — shoulder height at least to her waist. Long shaggy black hair. And it has that rolling shoulder gait that bears do. I swear, and I’ve told her so, she looks like she’s taking her small bear for a walk.

      (I’d love to meet the dog, but these days you can’t approach people or their dogs. Kind of a bummer; I liked meeting other people’s dogs.)

      “A good example is our perception of color,…”

      If we were dealing with something where our perceptions were part of the system, that would definitely have to be included, no question.

      “I’m not seeing how this is irreducible though.”

That’s kind of what makes it an interesting paradox. Something complex is necessarily composed of parts, yet at the same time can be in some sense monolithic. The opposite of reduction is emergence, and what’s irreducible is the emergent aspect — what the parts work together to create.

      The problem is figuring out all the axes. Not just what they are, but (as you’ve brought up) all the nuances of how they behave. What are the axes of “justice” or “love”? I’ve tried to create a configuration space for craft beer, and it turns out to be a real challenge. Every possible ingredient has to be an axis. Every possible technique has to be an axis. There’s an open-endedness to it.

      The main points of the metaphor are, firstly, the orthogonality (independence) of the axes, and secondly, the notion of a fuzzy region for a definition. The first one is instrumental in decoupling push-pull arguments that set two properties in opposition.

      That metaphor certainly could be used as a framework for breaking down at least many of the axes that might describe something like “justice” or “love”.

      • SelfAwarePatterns

One of the problems with identifying all the axes for things like justice and love is that many of the associations in our minds are unconscious, existing in hidden layers of processing. So we might often intuitively feel a certain event is just or unjust, and if challenged for that judgment, confabulate a plausible reason, but often it’s just that: a post hoc rationalization for our feeling about it being just or unjust.

        Still, I like the overall concept a lot, and your point about semantic vectors is interesting. I think that statement might have just solidified the notion in my mind better than anything that’s been said until now!

      • Wyrd Smythe

        The irony of some of those post hoc rationalizations is that judgement may be correct because our neural net has made an accurate assessment, but our explanation may be a fabrication that doesn’t correlate well with it. We have a similar problem with ANNs. We can’t tell why they made the judgement they did — the processing is distributed throughout the nodes of the network.

        It’s really hard to go from that distributed processing to a single justified statement other than saying the network puts a given data point inside a concept region or not (and how close to the geometric center of that region it is — that’s where the confidence value comes from).

        So, yeah, trying to figure out the axes involved in that is probably impossible.

Trying to come up with a configuration space for a physical system is challenging enough — in that case there is at least a reasonable chance of decomposing the system into the necessary axes. With abstract concepts it’s probably out of reach (exactly due to their irreducibility).

        (My favorite use of the c-space metaphor is in turning apparent push-pull situations with genuinely orthogonal axes into 2D spaces that encode both axes. On such a small scale, the axis-selecting problems we’re discussing aren’t so severe.)

        Configuration spaces could be thought of as subsets of a semantic vector space. (The key difference being semantic axes are anonymous, which removes certain aspects of closeness from semantic spaces.) Or a semantic space as a superset of all configuration spaces.

Thinking about it after I realized the similarity, I saw more problems with practical application as a deconstruction tool. Firstly, the axes are not unique to a given space — they may appear in any number of different ones. Secondly, the range of values doesn’t center the definition region on the origin of the axes — that region is somewhere in the space.

        The result is that, even if one could identify all the relevant axes, all one can say is that they in some way participate in the definition. A given axis may not even be necessary in a definition. A beer space has axes for chocolate and coffee because some beers contain them, but most don’t, and those ingredients aren’t in any way necessary to the definition of beer. But for all beers to be included in the space, those axes have to exist. Their values are just usually zero.

        Which raises an interesting point about definitions. Some participating concepts are potential. They exist only because some legitimate instances of the class have those properties.

      • SelfAwarePatterns

        Chocolate and coffee in beer? There might be some beers that would work for me after all.

        I think coming up with a perfectly true configuration space for any complex concept is probably a lost cause. Although it strikes me that we could take a pragmatic approach and focus on what about the concept we care about. That’s the approach taken in a paper I just highlighted.

But yeah, when thinking about all the concepts, their configuration spaces are going to overlap heavily. It seems like that highlights how much these concepts are framed in terms of our needs and how we think. There may be many alternate ways to group all the various sub-entities. The combinations seem infinite.

      • Wyrd Smythe

        You’d be amazed at what craft brewers put in beer. Chocolate and coffee are two of the more ordinary ones. (Don’t know if you ever watched The Drew Carey Show — Drew and his pals home-brewed and sold “Buzz Beer” which had caffeine in it. Back then it was a comedy show gag — based on drinking coffee to sober up. But in the modern craft beer era, coffee-infused beers aren’t uncommon.)

        Right, c-space is most useful as a metaphor or in simple cases. Or in reductive cases, as in the paper you just posted about. It can be super useful in cases like that, even if just for illustration and communication purposes. It’s a really handy grounding concept.

        As you say, there can be alternate c-spaces for the same thing, so, with complex things, it’s just a view of them. A view from another perspective would look different.

  • Wyrd Smythe

    There is a degree of correlation between the notion of configuration space and the notion of semantic vectors.

    One key difference, the latter uses anonymous axes — they have no meaning. Another difference is that configuration spaces are usually limited to a specific class of objects (e.g. cars), whereas semantic spaces include all concepts used.
