In Lament and Memoriam of Rational Decision-Making Models

“Stop thinking with your feels and think with your brain before you get more people killed.”

That’s a quote that was directed at me yesterday in response to something I wrote about Syria, and as irritating as it was, it really stuck with me because it tied together a few things I’ve been working on. But first, let me state for the record that I’m all for thinking with your “feels” in addition to your brain, and that I have no authority or power to get anyone killed (flattering, though, kind of).

A few weeks ago, I wrote a piece called “Who Needs to Know?” about classified information, followed a week later by a piece on a priori knowledge called “How Do We Know What We Know, if We Know Anything at All?” and yet another piece on challenging assumptions. In the meantime, I was working on a NATO proposal when I started to wonder whether intuition is an overlooked response mechanism in general, but especially in kinetic warfare (really, who has time for analysis in a firefight?). Are you sensing a theme?

As a presidential candidate, Trump had plenty to say about the Syrian Civil War, much of it criticizing President Obama both for his request that Congress authorize airstrikes and for not doing enough, but it had been nearly impossible to discern President Trump’s Syrian foreign policy until two weeks ago, when the US sent 59 Tomahawk guided missiles into a Syrian airbase, reportedly killing close to a dozen soldiers and civilians, including four children.

Separating the acts and knowledge from the process of knowing and acting, we’re left with something different from the last couple of months, though damned if I know exactly what it is. I’m not going to argue here whether 59 Tomahawks were necessary, whether warning the Russians was a good idea, or what might happen next. Here, I’m more interested in how he chose the action, that is, in the decision-making process itself.

Decision-making models are typically built on some variation of similar themes. Peter Drucker’s “effective” decision-making process, as I’ve summarized it below, focuses on:

1. Problem Rationalization: knowing or defining the problem you’re solving.

2. The Boundary Conditions: what will count as success?

3. A Moral Declarative: what is the right thing to do?

Drucker also includes Action and Feedback, but as he describes them, they are in the execution and post-execution stages, and therefore are outside the scope of decision making.

Cardiff and Still’s model includes Conceptualization, Information, and Prediction. Drucker’s model would fold the first two into Problem Rationalization, but Prediction doesn’t explicitly factor into his model, as it does in many others’. This is the interesting part, because a prediction is made in one of two ways: through analysis, or through intuition, which can be either experience-based (as when a firearms expert shoots repeatedly and accurately because of past conditioning to do so) or instinctive (fight or flight), which is useful but often ranges from the highly subjective to the irrational. Most models include some variation of these, and occasionally a fourth area simply called “Judgment,” but I think that is sufficiently covered by Prediction and its sub-parts.
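
To make the shared skeleton of these models concrete, here’s a minimal sketch in Python. It is purely illustrative: the names are mine, not Drucker’s or Cardiff and Still’s terminology, and the prediction step is just a placeholder for whatever analysis or intuition actually produces it.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Callable, List

class PredictionMode(Enum):
    ANALYSIS = auto()               # deliberate, evidence-weighing
    EXPERIENCED_INTUITION = auto()  # trained pattern recognition
    INSTINCT = auto()               # fight or flight; often subjective or irrational

@dataclass
class Decision:
    problem: str            # conceptualization / problem rationalization
    information: List[str]  # what is actually known
    prediction: str         # the expected outcome
    mode: PredictionMode    # how that outcome was predicted

def decide(problem: str,
           information: List[str],
           predict: Callable[[str, List[str]], str],
           mode: PredictionMode) -> Decision:
    """The common skeleton: conceptualize, gather information, predict, then decide."""
    return Decision(problem, information, predict(problem, information), mode)
```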

While the president may have little experience in foreign policy, he certainly has broad experience in decision-making. He also has, or has access to, vast information, more than most of us will ever be privy to. General Mattis was with him at Mar-a-Lago when he “pulled the trigger,” so to speak, and I have every confidence in General Mattis’ ability to conceptualize and convey the problem, the options, and the possible outcomes.

In prediction, the fourth and final step before execution, one assumes the prediction was made either that (A) the consequences of the airstrikes would be minimal, or that (B) they would be minimal plus N but within the US capacity to deal with them. In fact, there are a few old military charts depicting just this kind of modeling.

Probably the most famous decision-making model is President Eisenhower’s Urgent versus Important matrix, a simplified depiction of how little time we spend on decisions that are both urgent and important, and of how urgency tends to obscure importance.
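
To render that matrix in code, here’s a minimal sketch; the quadrant labels are the common textbook ones, not Eisenhower’s own wording.

```python
def eisenhower_quadrant(urgent: bool, important: bool) -> str:
    """Classic urgent/important matrix: urgency alone shouldn't decide;
    importance determines what deserves deliberate attention."""
    if urgent and important:
        return "Do now"      # genuine crises, where we spend surprisingly little time
    if important:
        return "Schedule"    # important but not urgent: planning, strategy
    if urgent:
        return "Delegate"    # urgent but unimportant: urgency obscuring importance
    return "Drop"            # neither: noise
```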

What does this mean? It means that, whatever your take on the wisdom of the action in Syria, Trump used a decision-making process built on equal or unequal parts of a model like the ones above. The problem was conceptualized; information was obtained, assessed, or dismissed; and an outcome was predicted with either low consequences or consequences the US has the capacity to counter.

And yet I’m not satisfied. I have no peace of mind that this was the right decision, with acceptable consequences. That may be my “feels” acting up again, or perhaps something is missing from decision-making models. Before that was knowable, my intuition told me something was missing. That “feeling,” based in experience and in instinct, led me to analyze the common models, and through that analysis I determined they are incomplete.

The missing factors in decision-making models are threefold. They all make three unexpressed assumptions:

1. Rational Actor Status

2. That the outcome is knowable (it is not truly innovative)

3. That the environment the decisions are made in is not one of disorder (extreme chaos)

Particularly with the emergence of fake news, hacking, non-state actors, proxy wars and other challenges to territorial sovereignty, aggressor states, and so on, the assumption of a rational actor or rational state actor, once a given, is now a very large leap. The models also assume that outcomes are knowable, but the less rational an actor is, the more unknowable any outcome under his influence becomes. If an action is truly innovative, like putting a man on the moon, our predictions will be less accurate (which is why probing and piloting are so important). Finally, until about ten years ago, all models were based in simple, complex, or complicated environments, but ordered ones nonetheless. The Cynefin framework began to take the state of disorder into account but has not yet found a prescriptive path for decision-making in this black hole that sits squarely in the middle of the model.
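
To show where that gap sits, here’s a minimal sketch of Cynefin’s usual domain-to-response mapping; the domain names come from the framework, the response strings are my paraphrase, and disorder deliberately gets no prescription.

```python
from typing import Optional

def cynefin_response(domain: str) -> Optional[str]:
    """Map a Cynefin domain to its usual decision heuristic. Disorder
    returns None: the framework offers no prescriptive path there yet,
    which is exactly the hole argued above."""
    responses = {
        "simple":      "sense -> categorize -> respond (apply best practice)",
        "complicated": "sense -> analyze -> respond (bring in the experts)",
        "complex":     "probe -> sense -> respond (safe-to-fail experiments)",
        "chaotic":     "act -> sense -> respond (stabilize first)",
    }
    return responses.get(domain.lower())  # "disorder" falls through to None
```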

Having studied the decision-making models sufficiently for my own satisfaction, I’ve reached the simple and damning conclusion that they all make three very large assumptions that render them inadequate for outcomes-based decisions in the modern world. The breakthrough of Cynefin offers some hope, though, and I think that by utilizing a hybridized agile methodology in the sphere of disorder we can correct those assumptions and offer a new path. Until I publish that, or someone else does, we’re left with the flawed and the unknowable, or what Lewis Carroll once wrote: “advice is only good or bad as the outcome decides,” meaning we’ll only find out after it’s done.

Writer, Principal Consultant at NOVATUM Consulting, Historian, Researcher, Pugilist, Politico https://www.facebook.com/groups/585714198294643/