It’s been said that programming is an exercise in managing complexity, and while that’s true, it’s only part of the picture. (Still, it’s a pretty big part!) More to the point, managing complexity applies to much more than software design. A defining characteristic of modern life is its complexity, so learning to manage it might be a Pretty Good Idea!
Thinking about a friend struggling with the complexity of life led me to remember one of P.J. Plauger’s articles about problem modeling from his book, Programming On Purpose. (The book is a collection of his essays from the long-defunct, but most excellent, Computer Language magazine.)
Thinking about Plauger’s ideas again led me to think they might be worth sharing. Whether fresh or review, these ideas capture six basic analytical techniques nicely. If you’ve never really thought about them before, you may find them useful.
Let me first talk just a bit about analysis and modeling.
One way to analyze something is to break it down, and you can break it down in different ways. Another way to analyze something is to build a smaller, simpler model that gives you a better view of the system in action. When you build a model, you can likewise build it, and see it, in different ways.
This article is about six different ways to model a problem.
Most people are familiar with top-down (or top-to-bottom) and bottom-up (bottom-to-top) analysis or modeling.
The traditional business hierarchy provides an image for both, because both organize vertically, along lines of scope and size.
On top are the few elements (a single one at the very top) with large scope and size (usually in an abstract sense of size: pay, power, or responsibility).
At the bottom are the many elements with decreasing size and scope. At the very bottom, elements usually have a single purpose or task.
This hierarchy of levels is a common way to organize a system. Organizations of all kinds, as well as computer file systems, are usually modeled this way.
Top-down analysis tries to define “big picture” elements and then sub-divide them (and sub-divide them some more) until the problem is well-explored and well-defined. Top-down is a common, and useful, analysis technique. When you explore a new system, you start with the big picture and refine it into a detailed one.
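As a small illustration (with hypothetical names throughout), a top-down sketch of a report generator might look like this in Python: the top-level function states the big picture, and each step below it is refined in turn until every piece is concrete.

```python
# Top-down: start with the big picture, then refine each piece in turn.
# All names here are hypothetical, chosen just to show the shape of the refinement.

def generate_report(path):
    records = read_records(path)    # refined one level down...
    summary = summarize(records)    # ...and again...
    return format_report(summary)   # ...until every step is concrete

def read_records(path):
    # One level down: this step is itself sub-divided.
    with open(path) as f:
        return [parse_line(line) for line in f]

def parse_line(line):
    # The bottom level: a single, well-defined task.
    name, value = line.split(",")
    return (name.strip(), int(value))

def summarize(records):
    # Collapse the parsed records into a name -> value mapping.
    return {name: value for name, value in records}

def format_report(summary):
    # Render the summary one "name: value" line at a time.
    return "\n".join(f"{k}: {v}" for k, v in summary.items())
```

Each function at one level is a vague placeholder until the level below fills in its details, which is exactly the sub-divide-and-sub-divide-again process described above.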
On the other hand, bottom-up analysis starts by looking at the individual components of a system and builds them into larger and larger pieces. It’s not a common way to explore something; it’s usually best to study the forest before you study the trees. It is useful once you know the big picture; then the details stand out. It’s also a useful way to build something. It’s common to start by making tools and building blocks that allow construction of ever-larger pieces. Rather than breaking up big pieces, bottom-up builds a solution by combining little pieces into big pieces.
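Bottom-up construction climbs the same ladder in the other direction. A minimal sketch (hypothetical names again): write the small, general tools first, then combine them into ever-larger pieces.

```python
# Bottom-up: write the small building blocks first...
def words(text):
    # Smallest tool: split text into words.
    return text.split()

def count(items):
    # Another small, general tool: tally occurrences.
    tally = {}
    for item in items:
        tally[item] = tally.get(item, 0) + 1
    return tally

# ...then combine them into a larger piece...
def word_counts(text):
    return count(words(text))

# ...and combine that into something larger still.
def most_common_word(text):
    counts = word_counts(text)
    return max(counts, key=counts.get)
```

Note that `words` and `count` know nothing about the final goal; they are general-purpose blocks that happen to compose into the solution.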
Which technique you use depends on how you wish to approach the problem. Bottom-up implies knowledge of the system’s small-scale operation, so it is most useful when you know the system well enough to build its pieces.
Problems involving data flow or process (for example, reading a file or generating a report) often model well horizontally.
As with top-down and bottom-up, horizontal organization also has its polar opposites: left-to-right and right-to-left. Which fits best depends on whether the problem is input-dominated or output-dominated. Input is thought of as on the left, and output is thought of as on the right.
If the task is mostly to read data, modeling left-to-right is useful.
The first elements are the input elements; conceptually they are on the left, close to the input data. The “shape” of the code matches the shape of the input data. This is the most concrete level; the code knows the details of the input. As you add increasingly abstract elements to process the input, you add layers that build towards the right. Ultimately you end with a single element that encapsulates the input.
Modeling right-to-left starts with the output elements and builds towards the left until there is a single output element. Again, the concrete code elements (on the conceptual right this time) match the shape of the data. This time the level of abstraction increases as we move to the left to encapsulate the output.
Many tasks involve both input and output, and such tasks can combine the two horizontal techniques. Input comes from the left; output goes to the right. Conceptually they meet in the middle.
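A minimal sketch of the two horizontal models meeting in the middle (all names hypothetical): the left side wraps the concrete input code in increasingly abstract layers, the right side does the same for the output, and a single step in the middle joins them.

```python
# Left side: concrete input code, wrapped in more abstract layers.
def read_lines(text):
    return text.splitlines()           # knows the raw shape of the input

def parse_numbers(lines):
    return [int(s) for s in lines]     # one layer more abstract

def input_stage(text):
    # The single element that encapsulates all input handling.
    return parse_numbers(read_lines(text))

# Right side: concrete output code, wrapped the same way.
def format_number(n):
    return f"{n:>6}"                   # knows the raw shape of the output

def output_stage(numbers):
    # The single element that encapsulates all output handling.
    return "\n".join(format_number(n) for n in numbers)

# The middle: where left-to-right input meets right-to-left output.
def process(text):
    numbers = input_stage(text)
    doubled = [n * 2 for n in numbers]  # the actual transformation
    return output_stage(doubled)
```

The input and output stages each match the shape of their data, while the middle step works only with abstract values, free of both formats.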
Likewise, you can combine top-down and bottom-up models to meet in the middle. In general, modeling a problem benefits from a combination of any and all available techniques.
Use what is useful!
The final two techniques are hard-to-easy and easy-to-hard (similar to top-down and bottom-up, but organized by difficulty rather than scale).
The first seeks to meet the most challenging aspects of a problem up front. The second seeks to accomplish as much as possible in the shortest time, getting easy tasks out of the way early.
Combining them (as we can with the above models) helps to discover potential pitfalls while rapidly bootstrapping a project.
Hard-to-easy is useful when the challenges are potential show-stoppers. Discovering if the task is possible at all prevents spending resources on an impossible problem. If your plan involves a faster-than-light warp drive, you’d better solve the challenge of inventing a new physics first!
Easy-to-hard is useful when there are many easy parts that can build a useful tool kit or experience base for attacking the hard parts (best when you know the hard parts are doable, just hard). Some projects may need a tool set built first. Many of my projects needed libraries and tools written first.
I’ve found easy-to-hard useful in reducing a large and overwhelming TODO list. It was that idea I thought might benefit my friend (so this is dedicated to her). Maybe it will benefit others.
Contrary to the age-old advice about not sweating it…
Get the easy stuff done first, and then sweat the hard stuff.
And what do you think?