
# What are our stakeholders doing, even?

### Why decisions are hard, and why we should cut our stakeholders some slack.

*👋 Hello! I’m Robert, CPO of Hyperquery and former data scientist + analyst. Welcome to Win With Data, where we talk weekly about maximizing the impact of data. As always, find me on LinkedIn or Twitter — I’m always happy to chat. And if you enjoyed this post, I’d appreciate a follow/like/share. 🙂*

As many of you know, I am a founder — I run a company called Hyperquery, where we’re agonizing over building the world’s best data notebook. And like many founders, I started a company because (a) I cared deeply about problems in analytics, and (b) I wanted to *build* without being fettered by the decisions of others. In 2020, we got some funding, we built a great team, and destiny seemed to be putty in our hands. Then, of course, shit hit the fan — we pivoted and pivoted until we landed on something that our users love, but in the gut-wrenching, miasmic ambiguity, I learned a hard lesson: making decisions is a lot harder than I ever gave others credit for.

This is a data newsletter, though, so I won’t regale you with our decision-making sagas. I bring this up because, as data people, we often work with folks who are the *decision-makers*. And because *we* don’t make the decisions, it’s only natural that we lament the decisions that are ultimately made. The peanut gallery impulse is strong in us technical folks, so I want to spend some time talking about why making decisions is so difficult. I hope this gives you an interesting mental model for decision-making, but, at minimum, I hope it exposes some of the biases that we tend to have against the decision-making process and, as a result, our beloved stakeholders.

# We don’t see the whole process

Our first bias: we tend to underestimate the work that goes into decisions, simply because we’re not privy to the entire process. Decisions are never just the decision. There’s problem discovery and solution discovery, and these take up the bulk of the time before a decision is actually made. The chain of idea maze navigation almost always travels further back than a particular decision and its proximate justifications.

And as deliberate as business school courses and HBR articles make it seem, making decisions isn’t particularly easy. Decision-making takes place in a high-dimensional decision space of ideas and data and market trends, and the act of decision-making is like identifying the variance-capturing eigenvectors through that space with nothing but blind numerical guesses¹ — or, if you wish, it’s like trying to find patterns in seas of noise. Behind any resolute decision we see, there’s often a long phase of agonizing vacillation prior, where decision-makers try to identify what’s important before they act.

For instance, imagine an executive at Airbnb announces “we’re redesigning our homepage” because “increased competition suggests a need to reinforce our brand”. The decision seems obvious. Decision: redesign. Justification: competition.

But that simple decision may have come from a completely unspoken set of problems to solve — e.g. we’re in the midst of a pandemic, and reminding hesitant travelers of our strong brand identity is how we establish **sufficient trust when the trust baseline has dropped** and competition is undercutting us. (We in fact did do a few homepage redesigns during the pandemic, and while I wasn’t part of these conversations, I imagine this is not so far from the truth.)

I hope this story, however apocryphal, demonstrates one thing to you: decisions are rarely as simple as they seem. And while you might think your executives are shooting from the hip, the best executives I’ve worked with agonize over their decisions to a level that would surprise you — we tend only to see them emerging victorious from their labyrinths, blind to the hedge-hacking that got them there.

# Solutions are obvious to validate, but not obvious to find.

In computer science, there are two important classes of problems: P problems, which can be solved in polynomial time (read: solved quickly); and NP problems, whose solutions can be verified in polynomial time (read: verified quickly, but not necessarily solved quickly). A classic example of an NP problem is integer factorization, and you can *feel* this difference. If I say:

split the number 723 into prime factors

(NP problem)

you’ll notice it’s quite a bit harder than

verify that 241 × 3 = 723

(P problem)

This gets excruciatingly harder with larger numbers, while verification stays simple — you just multiply the numbers together to check.
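
You can feel the asymmetry in code, too. Here’s a quick sketch in Python — trial division on one side, a single multiplication on the other:

```python
def prime_factors(n):
    """Factor n by trial division. Easy for small n, but the work
    grows rapidly as the number gets larger."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:  # whatever remains is itself prime
        factors.append(n)
    return factors


def verify_factorization(factors, n):
    """Verification is just one multiplication, no matter how big n is."""
    product = 1
    for f in factors:
        product *= f
    return product == n


print(prime_factors(723))                   # [3, 241]
print(verify_factorization([3, 241], 723))  # True
```

Finding the factors takes on the order of √n divisions; checking them takes a single multiplication — and that gap only widens as n grows.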

One of the biggest unsolved mysteries in computer science, then, is whether P = NP. That is, whether every problem whose solutions can be verified in polynomial time can also be solved in polynomial time. If it were true, the repercussions would be severe: for any absurdly difficult problem of this kind, we’d know a solution could be found quickly — we’d just have to work to find the right algorithm.

But waving my hands a bit, I think we can all internalize that P problems are generally much *easier* than NP problems, and the process of finding the answer to an NP problem is always going to be practically arduous, regardless of whether or not it is *possible* to reduce it to a P problem.

### P ≠ NP, for decisions

That was an arduous preamble to my point here, so I apologize. But my main point: P ≠ NP, for decisions. That is, **it’s never as easy to find a good decision as it is to verify that a decision is good**.

So even if you understand that there is a lot of work that goes into making decisions, you’ll likely still underestimate the effort required to make a decision simply because decisions are easier to verify than to solve. “Obviously we should do X”, we say. But it’s a delicate thing to get there in the first place.

To make matters worse, a lot of good decision-making follows the midwit curve — the initial thought turns out to be correct, but only after a long, arduous consideration of alternatives does one ultimately return to that initial idea. And so the decision-making journey seems, once again, trivial.

# The meta-problem is not decision-making, but deciding which decisions are important.

One final point: there are a *lot* of decisions non-technical folks have to make, and the key is often not *making all those decisions well*, it’s solving the meta-problem of **deciding which decisions to invest your time into**. Change is driven by power laws, and the business world is no exception. 20% of your decisions are going to drive 80% of the change, and I imagine this is an understatement of the skew.

And consequently, decision-making track records are going to be invariably difficult to evaluate. The executive who gets 500 insignificant decisions wrong but nails the single consequential one is going to be more valuable than the executive who is right a little more often but wrong on the killer decision.

# Final comments

I know, I know — I’ve given you a sob story about overpaid executives. A bit hard to empathize. And as far as I know, it could be entirely inaccurate for your situation. Your decision-makers might be far too emotional (which is probable), or your stakeholders might simply have been historically very lucky, baselessly propelled up the corporate ladder. Give a thousand monkeys $1000 and 10 turns at a roulette table, and you’ll likely end up with a millionaire monkey, after all — outcome bias, etc.

Still, I hope this gives you some insight into how *arduous* the decision-making process can be, even if it only means that you’ll now be able to pummel your stakeholders with more reasonable objections.

¹ Sorry for the esoterica, for those of you who haven’t retained your linear algebra since college. The basic idea: there’s an algorithm called PCA (principal component analysis), which searches for the “principal components” of a group of data points — i.e., the new vectors that most aptly describe those points. It’s like looking at a bunch of data about people’s height and weight, then realizing you could get a more accurate description of a person by looking at BMI and body fat % — those are, roughly speaking, the principal components. Mathematically, these components are the eigenvectors of the covariance matrix. But really, you just have to know that I’m basically saying that decision-making requires you to figure out that BMI and body fat % are more helpful than height and weight, then come up with an answer accordingly.
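
For the curious, the footnote’s idea can be sketched in a few lines of NumPy. The dataset here is synthetic, invented purely for illustration:

```python
import numpy as np

# Hypothetical dataset: rows are people, columns are height (cm) and weight (kg).
rng = np.random.default_rng(0)
height = rng.normal(170, 10, 200)
weight = 0.9 * height - 80 + rng.normal(0, 5, 200)  # weight correlates with height
data = np.column_stack([height, weight])

# PCA by hand: center the data, then take the eigenvectors of the covariance matrix.
centered = data - data.mean(axis=0)
cov = np.cov(centered, rowvar=False)
eigenvalues, eigenvectors = np.linalg.eigh(cov)

# Sort components by the variance they capture, largest first.
order = np.argsort(eigenvalues)[::-1]
eigenvalues, eigenvectors = eigenvalues[order], eigenvectors[:, order]

explained = eigenvalues / eigenvalues.sum()
print(f"variance captured by first component: {explained[0]:.0%}")
```

Because height and weight are strongly correlated, the first component captures the bulk of the variance — that single direction describes the data better than either raw column alone, which is the whole trick.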