There has been a dangerous rhetoric growing over the last couple of years: that the most appropriate measure of an analyst’s ability is speed. While both Cassie and Benn qualify their claims with heavy caveats, and neither, I imagine, believes that bad, fast decisions beat slow ones, I nonetheless find the headline dangerous, and so I wanted to take some time to be equally inflammatory to course-correct the dialogue:
At first blush, setting speed as a primary metric sounds sensible. After all, we analytics folk tend to fall victim to nerd sniping, analytical excess, and overly academic communication styles. Encouraging decreased time-to-decision can certainly mitigate these tendencies.
So why do I disagree? Two reasons:
1. I’m worried about the precedent that praising “speed” will set, because it will inevitably be misinterpreted.
2. Bad decision-making is often worse than slow decision-making.
The effects of decisions can reverberate for quarters, years, even decades.
Let’s spar.
Whiffs of “speed” being praised are inevitably going to be misinterpreted and will ruin analytics.
Analysts across the world have been drowning in I-need-it-now ad-hoc requests from the moment data was first analyzable.
While it’s certainly not the intention of Benn or Cassie to encourage mindless responses to questions, setting “speed” as the one skill to rule them all is bound to be misinterpreted. I’d be loath to work on a product team that has only skimmed these articles and flies its banner under a single reductive takeaway:
“The moment an analyst is asked a question, a timer starts. When a decision gets made based on that question, the timer stops. Analysts’ singular goal should be to minimize the hours and days on that timer.”
The “speed as a metric” argument is itself, ironically, an example of one of the most dangerous things you can do in analytics: measuring something with a proxy metric. While proxies are an important and legitimate tactic for analysts to hold in our toolbelts, we all know they should be used thoughtfully, lest they be bastardized and misapplied. We all have stories of how hand-wavy statements like “let’s just use clicks to measure usage” can go awry. I fear the same facile usage will happen if this argument goes mainstream.
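To make that worry concrete, here’s a minimal sketch of how “clicks as usage” can drift from what we actually care about. Everything in it is hypothetical: the `events` table, its columns, and the “three distinct actions per session” definition of engaged usage are placeholders for illustration, not a real schema or a canonical definition.

```sql
-- Hypothetical schema: events(user_id, session_id, event_type, event_ts)

-- The proxy: raw weekly click volume.
WITH weekly_clicks AS (
    SELECT DATE_TRUNC('week', event_ts) AS week,
           COUNT(*)                     AS click_count
    FROM events
    WHERE event_type = 'click'
    GROUP BY 1
),

-- A stricter (still illustrative) notion of usage: sessions that
-- performed at least three distinct kinds of actions.
engaged_sessions AS (
    SELECT session_id,
           DATE_TRUNC('week', MIN(event_ts)) AS week
    FROM events
    GROUP BY session_id
    HAVING COUNT(DISTINCT event_type) >= 3
)

-- Put the two side by side. If click_count trends up while
-- engaged_session_count stays flat, the proxy is telling a flattering lie.
SELECT c.week,
       c.click_count,
       COUNT(e.session_id) AS engaged_session_count
FROM weekly_clicks c
LEFT JOIN engaged_sessions e USING (week)
GROUP BY c.week, c.click_count
ORDER BY c.week;
```

The query itself isn’t the point; the point is that a proxy and the thing it stands in for have to be checked against each other periodically, and that’s exactly the discipline a blanket “go faster” mandate erodes.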
Yes, we analysts can happily fly the “go faster” banner with full knowledge of the asterisks, but do we really want our stakeholders and our organizations flying this banner as well?
Bad decision-making is often much more costly than slow decision-making.
I detest blanket statements like this - so please note I said “often”. Still, I want to emphasize that there could be real bottom-line repercussions to encouraging analyst speed over all else, particularly when long-lasting decisions are being made.
For instance, if you’re doing an analysis to decide whether to ship an experiment, you’d damn well better be right, because the shipped version and the learnings from it may live on for the life of the company. If you’ve ever pushed to reverse a shipped experiment, you know what a bureaucratic nightmare that is.
If you’re deciding which promising product opportunities to work on for a quarter, quickly delivering a few broad-brush numbers could mean missed opportunities for an entire quarter (or longer!), particularly for mature products where wins are rare.
If you’re estimating the incremental value of a department’s effectiveness, small inaccuracies can mean the difference between reassigning the entire department and investing more in its cause. At Airbnb, an analysis I did was directly responsible for the restructuring of an org. Had I not examined the [positive] input metrics more closely, we would’ve hobbled along, investing tens of millions of dollars in a wasted initiative.
We too often forget the power of data in the decision-making process. We’re holding live grenades, and to encourage them to be thrown faster is generally not advisable.
Final thoughts
Now for some more direct gut reactions to Benn Stancil’s recent post. Even putting aside the effects I mentioned, Benn’s take is, in my opinion, overly optimistic — his connections between Speed and What We Actually Want To Measure don’t seem tenable. He argues, for example, that speed-to-decision correlates with conviction, which in turn correlates with the quality of the analysis (our primary, unmeasurable objective).
“Just as any decisive argument in court is a good one, convincing analysis is, almost by definition, quality analysis.”
But I doubt this is true. If conviction and decision-making were always logical, deterministic consequences of quality analytics work, our world would be a much more measured place. In my experience, analyses can be convincing regardless of whether they’re correct, or of whether the analyst has a full grasp of the context within which their work is being produced and consumed. Product managers, designers, and executives frequently don’t have time to scrutinize our work (nor should they be expected to!) — our job is to deliver a well-reasoned insight and communicate a measured recommendation. We are not lawyers convincing a jury — we are in the evidence-collecting business.
So let’s kill the notion that “analysts’ singular goal is [to be fast]” and just focus on delivering impact instead (which, to be honest, is directly measurable in many instances anyway).
A tangential plug: check out Hyperquery, our modern take on the SQL notebook (thanks, Valentin, for the vote of confidence). In our estimation, the best way for analysts to scale their impact is by crafting analyses with better transparency and discoverability, not through sheer brute-force, IDE-based SQL work, and Hyperquery makes that easier. 🙂
The analogy of data/insights being 'live grenades' is dramatic at best and delusional at worst.
** The first shot has been fired **
This is an ideal for the data analyst role - to drive decisions and have immense impact - but it is just as misleading as the “SQL Monkey” assumption.
In most cases, the data or analysis simply needs to confirm that a decision is correct - that the direction the boat is heading in is logically supported by the available information. This makes sense, because the business is usually more aware of the problem, its complexities, and its dependencies than the analyst is, and plans are already in motion long before a compass check is requested.
Yes, there’s potential for moments of glory - the last-minute three-pointers - but these do not represent the day-to-day responsibilities of an analyst. This potential has created hype that is frequently peddled by SaaS providers, tutors, and content creators. In reality, the “SQL Monkey” role is entirely relevant - hence its existence - whereas the fictitious “SQL Wizard” that marketers keep pushing is unfortunately not.
** The second shot has been fired **
This is clear in the post itself: for example, when is an analyst ever “directly responsible for the restructuring of an org”? Who are we talking about? Someone with executive responsibilities like this is not also being asked to pull data fast. This is a poor comparison, and it leads to the very problem the piece sets out to challenge.
If we equate the value of an analyst to the kind of decision-making that can save “tens of millions of dollars”, it’s perfectly logical for the expectation to become ‘Great! Do it faster!’ That is also why such decisions are actually made collectively by well-informed, experienced people with a history of taking accountability for similar decisions and the confidence to make calls in the moment.
In conclusion, the real rebuttal to the “SQL Monkey” assumption is not to redefine the role as a fictitious “SQL Wizard”. It is to establish the truth and introduce some balance into the dialogue, ensuring analysts are valued while stakeholder expectations are managed. But that’s not buzzwordy enough, so what about calling analysts “Compass Coordinators” and calling it a day? 😊