That is no question

Back in 1854, the renowned mathematician George Boole was the first to describe the concepts of algebra and logic over a binary field, which were eventually named after him and are now regarded as one of the pillars of the information age.

The power and universality of the foundations that Boole’s work gave to IT engineers and scholars had one adverse effect, though. Boolean landed such a major role in software development tools and in developers’ minds that the concept began to be abused and misused, employed in scenarios for which it wasn’t exactly fit.

For as long as software programming was primarily a transcription of logical chains into English words, and consisted largely of unequivocal questions like ‘is the value stored in CX greater than zero?’, everything worked well.

And then everything went out of sync. Starting around the 1970s, software programming began making its way up to higher, much higher abstraction layers. C arrived, followed by OOP and C++, and then Java, Python, and Ruby. The complexity of programming tasks skyrocketed. No one cared about the contents of CX anymore. The questions programmers answered in their code began to resemble the non-trivial, day-to-day questions we come across in real life. Yet the tools in the box, despite looking smart, shiny, and new, remained largely the same.

Let me ask you a simple question.

Can the outcome of a friend-or-foe identification – e.g. that of an aircraft – be represented with a Boolean type?

What could be easier, at first glance – the aircraft is either a friend or a foe, right?

Wrong. There are at least two more possible outcomes: “the aircraft has not been positively identified (it could be either friend or foe)” and “no aircraft has been found at all.” Those two outcomes are no less important than the ‘primary’ ones, and, if ignored, may lead to erroneous or even catastrophic decisions.
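To make those outcomes concrete, here is a minimal sketch in Java of what the full result set might look like; the type and constant names are hypothetical, not taken from any real IFF system:

enum IffResult {
    FRIEND,        // positively identified as friendly
    FOE,           // positively identified as hostile
    UNIDENTIFIED,  // an aircraft was found but could not be classified
    NOT_FOUND      // no aircraft was detected at all
}

Four constants instead of two: nothing clever, just the full set of answers spelled out where a Boolean would silently drop half of them.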

If you answered yes, don’t be too hard on yourself. The human brain is a skilful optimizer. Despite often being referred to as ‘intelligent’, when left to its own devices it actually does everything in its power to think less. It operates an impressive arsenal of corner-cutting techniques – question substitution, simplification, framing, priming, and a hundred others – to avoid actual thinking in favour of pattern-based decisions.

And this doesn’t marry well with the Boolean type. The problem with Boolean is that it offers the illusion of an obvious answer, suggesting a simple choice between two options where there is no actual choice, or where there might be something beyond that choice.

Working hard to optimize its decision-making process, our brain celebrates the chance to replace the whole set of outcomes with an easier choice between two opposites: yes-or-no, friend-or-foe, right-or-left, good-or-bad, a-boy-or-a-girl. Inspired by the simplicity of the answer, the analytic part of our brain gives up and accepts the choice – even if the two opposites together cover only a fraction of the whole variety of outcomes.

Development environments kindly assist the irrational part of our brain by providing the tools. I find it amusing that, as programming languages evolved, Boolean was given an increasingly prominent presence: from none in assembly language, through an int-emulated surrogate in C, to a dedicated type in C# and Java. That is, as software developers had to deal with ever vaguer questions, the development frameworks kindly offered ever simpler answers.

“Wait,” a smart programmer would say, “what about exceptions? What about nullable types? Aren’t those supposed to deal with everything that goes beyond true and false?”

In some scenarios they do – and in others they don’t. Exceptions may work well where there is clearly a yes-or-no choice that falls in, and a marginal alternative that falls out. The problem is that in many instances there is no yes-or-no choice at all, but our little grey cells tell us there is. Apart from that, exceptions are an opt-in technique for our brain: something that has to be considered proactively – and therefore they will be among the first to be ‘optimized’ away and neglected. How many programmers do you personally know who do exception handling right? And how many do you know who don’t?
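Nullable types hit a similar wall. A boxed Boolean in Java, for instance, buys exactly one extra state – null – which still cannot tell ‘unidentified’ apart from ‘not found’. A rough sketch, with identify() as a hypothetical routine:

public class NullableIff {
    // Hypothetical identification routine; null is its only way to signal
    // anything other than a definite friend/foe answer.
    static Boolean identify(String contact) {
        return null; // unidentified? nothing found? the type cannot say
    }

    public static void main(String[] args) {
        Boolean friend = identify("bogey-1");
        if (friend == null) {
            // Three states instead of two, yet two distinct outcomes
            // are still collapsed into one.
            System.out.println("null - but which of the missing outcomes?");
        }
    }
}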

And so it goes. It’s Friday, well after 7pm. Only a programmer and a QA guy remain in the deserted office. Their deadline passed a few days ago; they are rushing to finish the system tonight. The programmer starts typing ‘if…’ and stops for a moment. He quickly glances at the bottom-right corner of his screen: 7:53pm. He sighs, takes a sip of his cooled-down tea, and completes the line:

if (!friend) { missile.launch(); }

His code is complete now. He commits the changes, writes a quick note to the client, and drives home to join his family for a late dinner. The QA chap runs a quick round of positive tests and follows him out.

You already know what happened next.

* * *

This story is not about negligent programmers. Rather, it is about the dangerous mix of the peculiarities of the human mind and the conveniences offered by modern development environments, which together give rise to serious logical errors in programs.

Most real-life questions that arise on the uneven ground under our feet have no black-or-white answers. Yet, for many of them, it is far too easy to fall into the trap of narrowing the whole set of answers down to two mutually exclusive absolutes. The narrower the gap between the programmer’s way of thinking and the human’s becomes, the more clearly this problem exposes itself in the software development profession.
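For contrast, here is how that Friday-evening launch decision might look with all four outcomes modelled explicitly – a sketch reusing the hypothetical IffResult enum from above, with Missile as an assumed interface:

class LaunchControl {
    static void decide(IffResult result, Missile missile) {
        switch (result) {
            case FOE:
                missile.launch(); // only a positive hostile ID may fire
                break;
            case FRIEND:
            case UNIDENTIFIED:    // cannot classify - must not fire
            case NOT_FOUND:       // nothing out there - nothing to do
                missile.hold();
                break;
        }
    }
}

interface Missile { void launch(); void hold(); }

With the full set spelled out, the decision the tired brain would happily skip has to be written down – and the compiler can at least nudge you towards handling every case.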

So the next time you are tempted to think of some characteristic as a Boolean, do make the effort to ask yourself: does this choice really have only two possible options? Have I neglected any important outcomes? Isn’t my mind trying to cut a corner and take advantage of me?

Because it most certainly will.

Pic credit: mindfulyourownbusiness.com