The Myth of (Complex) Decision Making

Can computers contemplate how ridiculous they look in a cowboy hat?

This week Watson, the latest IBM-created supercomputer, trounced two of Jeopardy's greatest champions at their own game. Is this a sign of the dawn of true artificial intelligence, or a red herring?

It seems that this was not a great week for human decision making. After much hype, the human race lost its battle for supremacy on Jeopardy to a computer named Watson. Just as was said when Garry Kasparov was beaten at chess by another IBM product, Deep Blue, in 1997, computers are now considered to be nearing the smarts of humans. It sounds plausible when you consider the kind of knowledge that Jeopardy champions Brad Rutter and Ken Jennings command.

The problem is that Jeopardy is about knowledge of facts organized in a catalogue-like manner. With all due respect to Messrs Rutter and Jennings, the ability to learn facts is relatively straightforward, even if nearly 99% of the population can't do it as well as they can. It is a simple input-storage-output problem, whereby data is entered, encoded and stored, and then brought forth upon request. It is the classic example of something that is simple, but not easy. Indeed, the fact that it took many years of programming to create a computer that could compete with humans shows how difficult the task is.
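To make the point concrete, here is a minimal sketch, in Python, of what an input-storage-output problem looks like in its barest form. The fact store and the example question are invented for illustration only; they are not how Watson actually works, which involved vastly more sophisticated language processing and ranking of candidate answers.

```python
# A minimal sketch of the input-storage-output model of fact recall.
# The facts and the question below are invented examples for illustration.

facts = {}  # storage: a simple lookup table keyed by question


def store(question, answer):
    """Input: encode and store a fact against the question that retrieves it."""
    facts[question.lower()] = answer


def recall(question):
    """Output: bring the stored fact forth upon request, if we have it."""
    return facts.get(question.lower(), "I don't know")


store("Who composed the Etudes-tableaux?", "Sergei Rachmaninoff")
print(recall("who composed the etudes-tableaux?"))  # -> Sergei Rachmaninoff
```

Simple, in this sense, does not mean easy: scaling that lookup to natural-language clues, puns, and categories is precisely what took IBM years of engineering.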

But the discourse suggesting that Watson and its progeny are about to displace humans is misguided for many reasons, most notably because of the nature of the problem at hand. Computers are pretty good, perhaps outstanding, at organizing and recalling simple information that is presented in a linear manner. They might also be good at handling complicated information, the kind that has many interlocking components but is still organized in a relatively ordered way.

Complexity introduces a whole new realm of problem-solving skills that I don't see computers addressing soon. Complexity adds an exponentially larger number of information combinations that become contextually bound. Computers are great at processing algorithms, but not so good at reading landscapes the way humans are. We're wired for it.

It's one thing to ask who wrote the Études-tableaux ("study pictures"), two sets of piano pieces composed by Sergei Rachmaninoff, and quite another to explain how one of those pieces evokes the mood and spirit of the Fair. The former question is simple; the latter is complex, because it is open to multiple interpretations and contexts that overlap with one another.

A complex scenario cannot be broken down into component parts, because we never have perfect, complete information. In chess or Jeopardy! we do: we can know all of the answers and possible combinations, and thus can program something to respond to them. Too often, this model prevails in our decision-making in public health, where we naively presume we know everything. But that is often a fallacy or, at best, a myth.

Fifteen hundred years ago everybody knew the Earth was the center of the universe. Five hundred years ago, everybody knew the Earth was flat, and fifteen minutes ago, you knew that humans were alone on this planet. Imagine what you'll know tomorrow. – Agent K (played by Tommy Lee Jones), Men in Black

We once knew that health was simple, that research knowledge would always translate into use in the way researchers intended, and that the problems we faced would be solved using computers. Imagine, to follow Men in Black, what we will know tomorrow.
