There has been a dangerous rhetoric growing over the last couple of years: that the most appropriate measure of an analyst's ability is speed. While both Cassie and Benn qualify their claims with heavy caveats, and neither, I imagine, believes that bad, fast decisions beat slow ones, I find the headline nonetheless dangerous, so I wanted to take some time to be equally inflammatory and course-correct the dialogue:
At first blush, setting speed as a primary metric sounds sensible. After all, we analytics folk tend to fall victim to nerd sniping, analytical excess, and overly academic communication styles. Encouraging decreased time-to-decision can certainly mitigate these tendencies.
So why do I disagree? Two reasons:
1. I'm worried about the precedent praising "speed" will set, because it will inevitably be misinterpreted.
2. Bad decision-making is often worse than slow decision-making. The effects of decisions can reverberate for quarters, years, even decades.
Let's spar.
Whiffs of "speed" being praised will inevitably be misinterpreted and ruin analytics.
Analysts across the world have been drowning in I-need-it-now ad-hoc requests from the moment data was first analyzable.
While it's certainly not the intention of Benn or Cassie to encourage mindless responses to questions, setting "speed" as the one skill to rule them all is bound to be misinterpreted. I'd be loath to work on a product team that has only skimmed these articles and flies its banner under a single reductive takeaway:
"The moment an analyst is asked a question, a timer starts. When a decision gets made based on that question, the timer stops. Analysts' singular goal should be to minimize the hours and days on that timer."
Using the "speed as a metric" argument is, ironically, an example of one of the most dangerous things you can do in analytics: measuring something with a proxy metric. While this is an important and legitimate tactic for analysts to hold in our toolbelts, we all know these sorts of proxy rules should be used thoughtfully, lest they be bastardized and improperly applied. We all have stories of how hand-wavy statements like "let's just use clicks to measure usage" can go awry. I fear the same facile usage will happen if this argument goes mainstream.
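To make that concrete, here's a minimal, hypothetical sketch of the clicks-as-usage trap. Every number and column name is invented purely for illustration; none of it comes from the original posts.

```python
# Hypothetical illustration of a proxy metric going awry: "clicks" stands in
# for "usage", trends up, and hides the fact that engaged usage is falling.
# All figures below are made up for demonstration.
import pandas as pd

weeks = pd.DataFrame({
    "week": [1, 2, 3, 4],
    "clicks": [10_000, 12_000, 14_500, 17_000],               # the proxy: up and to the right
    "sessions_with_real_work": [4_000, 3_700, 3_400, 3_100],  # what we actually care about: declining
})

# More clicks per engaged session often signals confusion, not adoption.
weeks["clicks_per_engaged_session"] = (
    weeks["clicks"] / weeks["sessions_with_real_work"]
)

print(weeks)
```

Optimizing the proxy alone rewards exactly the wrong behavior, which is the risk I see with "speed" becoming the headline metric for analysts.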
Yes, we analysts can happily fly our banner behind "go faster", with full knowledge of the asterisks, but do we really want our stakeholders, our organizations flying this banner as well?
Bad decision-making is often much more costly than slow decision-making.
I detest blanket statements like this - so please note I said "often". Still, I want to emphasize that there could be real bottom-line repercussions to encouraging analyst speed over all else, particularly when long-lasting decisions are being made.
For instance, if you need to create an analysis to decide whether to ship an experiment, you had better damn well be right, because the shipped version and the learnings from it may live on for the life of the company. If you've ever pushed to reverse a shipped experiment, you know what a bureaucratic nightmare it is.
If you're deciding which promising product opportunities to work on for a quarter, quickly delivering a few broad-brush numbers could mean missed opportunities for an entire quarter (or longer!), particularly for mature products where wins are rare.
If you're estimating the incremental value of a department's effectiveness, small inaccuracies can mean the difference between reassigning the entire department and investing more in their cause. At Airbnb, an analysis I did was directly responsible for the restructuring of an org. Had I not examined the [positive] input metrics more closely, we would've hobbled along, investing tens of millions of dollars in a wasted initiative.
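To illustrate how thin that margin can be, here's a small, hypothetical sensitivity check. The department cost, reach, and per-user value are entirely invented; they are not the actual Airbnb figures.

```python
# Hypothetical sketch: a small error in an estimated input metric flips a
# large resourcing decision. All numbers are invented for illustration.

def resourcing_call(value_per_user: float, users_reached: int, annual_cost: float) -> str:
    """Invest if the estimated incremental value clears the initiative's annual cost."""
    incremental_value = value_per_user * users_reached
    return "invest more" if incremental_value > annual_cost else "restructure"

annual_cost = 12_000_000   # hypothetical annual cost of the initiative
users_reached = 5_000_000  # hypothetical number of users affected

# A $0.20 difference in the per-user estimate reverses a $12M decision.
print(resourcing_call(2.50, users_reached, annual_cost))  # -> invest more
print(resourcing_call(2.30, users_reached, annual_cost))  # -> restructure
```

Getting that input metric right matters far more than getting it back an afternoon sooner.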
We too often forget the power of data in the decision-making process. We're holding live grenades, and encouraging them to be thrown faster is generally not advisable.
Final thoughts
Now for some more direct gut reactions to Benn Stancil's recent post. Even putting aside the effects I mentioned, Benn's take is, in my opinion, overly optimistic: his connections between Speed and What We Actually Want To Measure don't seem tenable. For example, he argues that speed-to-decision correlates with conviction, which in turn correlates with the quality of the analysis (our primary, unmeasurable objective).
"Just as any decisive argument in court is a good one, convincing analysis is, almost by definition, quality analysis."
But I doubt this is true. If conviction and decision-making were always logical, deterministic consequences of quality analytics work, our world would be a much more measured place. But I've found that analyses can be convincing regardless of their correctness, or of whether the analyst has a full grasp of the context in which their work is being produced and consumed. Product managers, designers, and executives frequently don't have time to scrutinize our work (nor should they be expected to!); our job is to deliver a well-reasoned insight and communicate a measured recommendation. We are not lawyers convincing a jury; we are in the evidence-collecting business.
So let's kill the notion that "analysts' singular goal is [to be fast]" and just focus on delivering impact instead (which, to be honest, is directly measurable in many instances anyway).
A tangential plug: check out Hyperquery, our modern take on the SQL notebook (thanks, Valentin, for the vote of confidence). In our estimation, crafting analyses with better transparency and discoverability, not sheer brute-force, IDE-based SQL work, is the best way for analysts to scale their impact, and Hyperquery makes that easier.