There has been a dangerous rhetoric growing over the last couple of years: that the most appropriate measure of an analyst's ability is speed. While both Cassie and Benn qualify their claims with heavy caveats, and neither, I imagine, believes that bad, fast decisions beat slow ones, I nonetheless find the headline dangerous, and so I wanted to take some time to be equally inflammatory to course-correct the dialogue:
At first blush, setting speed as a primary metric sounds sensible. After all, we analytics folk tend to fall victim to nerd sniping, analytical excess, and overly academic communication styles. Encouraging decreased time-to-decision can certainly mitigate these tendencies.
So why do I disagree? Two reasons:
1. I'm worried about the precedent praising "speed" will set, because it will inevitably be misinterpreted.
2. Bad decision-making is often worse than slow decision-making. The effects of decisions can reverberate for quarters, years, even decades.
Let's spar.
Whiffs of "speed" being praised will inevitably be misinterpreted and ruin analytics.
Analysts across the world have been drowning in I-need-it-now ad-hoc requests from the moment data was first analyzable.
While it's certainly not the intention of Benn or Cassie to encourage mindless responses to questions, setting "speed" as the one skill to rule them all is bound to be misinterpreted. I'd hate to work on a product team that has only skimmed these articles and flies its banner under a single reductive takeaway:
"The moment an analyst is asked a question, a timer starts. When a decision gets made based on that question, the timer stops. Analysts' singular goal should be to minimize the hours and days on that timer."
Using the "speed as a metric" argument is itself, ironically, an example of one of the most dangerous things you can do in analytics: measuring something with a proxy metric. While this is an important and legitimate tactic for analysts to hold in our toolbelts, we all know these sorts of proxy rules should be used thoughtfully lest they be bastardized and misapplied. We all have stories of how hand-wavy statements like "let's just use clicks to measure usage" can go awry. I fear the same facile usage will happen if this argument goes mainstream.
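To make the "clicks to measure usage" failure mode concrete, here's a minimal, hypothetical Python sketch (the event data and numbers are invented purely for illustration, not drawn from any real product) of how a proxy metric can look healthy while the thing it stands in for isn't:

```python
# Hypothetical sketch (made-up data): how "clicks" as a proxy for usage can
# drift away from the thing it's actually meant to measure.
events = (
    [("u1", "click")] * 40          # one power user mashing a button
    + [("bot", "click")] * 200      # automated traffic nobody filtered out
    + [("u2", "click")]             # a second real user
    + [("u3", "page_view")]         # real usage that never registers a click
)

clicks = sum(1 for _, event in events if event == "click")        # the proxy
human_users = len({user for user, _ in events if user != "bot"})  # closer to "usage"

print(f"clicks (proxy metric): {clicks}")       # 241 -- engagement looks like it's booming
print(f"distinct human users:  {human_users}")  # 3   -- the underlying usage is tiny
```

The proxy isn't wrong because clicks are meaningless; it's wrong because, applied without thought, it rewards exactly the behavior (bots, button-mashing) it was never meant to capture. The same thing happens when "speed" becomes the stand-in for analyst quality.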
Yes, we analysts can happily fly our banner behind "go faster", with full knowledge of the asterisks, but do we really want our stakeholders, our organizations, flying this banner as well?
Bad decision-making is often much more costly than slow decision-making.
I detest blanket statements like this, so please note I said "often". Still, I want to emphasize that there can be real bottom-line repercussions to encouraging analyst speed over all else, particularly when long-lasting decisions are being made.
For instance, if you need to create an analysis to decide whether to ship an experiment, you had better be damn sure you're right, because the shipped version and the learnings from it may live on for the life of the company. If you've ever pushed to reverse a shipped experiment, you know what a bureaucratic nightmare it is.
If you're deciding which promising product opportunities to work on for a quarter, quickly delivering a few broad-brush numbers could mean missed opportunities for an entire quarter (or longer!), particularly for mature products where wins are rare.
If you're estimating the incremental value of a department's effectiveness, small inaccuracies can mean the difference between reassigning the entire department and investing more in their cause. At Airbnb, an analysis I did was directly responsible for the restructuring of an org. Had I not examined the [positive] input metrics more closely, we would've hobbled along, investing tens of millions of dollars in a wasted initiative.
We too often forget the power of data in the decision-making process. We're holding live grenades, and encouraging them to be thrown faster is generally not advisable.
Final thoughts
Now for some more direct gut reactions to Benn Stancil's recent post. Even putting aside the effects I mentioned above, Benn's take is, in my opinion, overly optimistic: his connections between Speed and What We Actually Want To Measure don't seem tenable. For example, he argues that speed-to-decision correlates with conviction, which in turn correlates with the quality of the analysis (our primary, unmeasurable objective).
"Just as any decisive argument in court is a good one, convincing analysis is, almost by definition, quality analysis."
But I doubt this is true. If conviction and decision-making were always logical, deterministic consequences of quality analytics work, our world would be a much more measured place. But I've found that analyses can be convincing regardless of their correctness, and regardless of whether the analyst has a full grasp of the context within which their work is produced and consumed. Product managers, designers, and executives frequently don't have time to scrutinize our work (nor should they be expected to!); our job is to deliver a well-reasoned insight and communicate a measured recommendation. We are not lawyers convincing a jury; we are in the evidence-collecting business.
So let's kill the notion that "analysts' singular goal is [to be fast]" and just focus on delivering impact instead (which, to be honest, can often be measured directly anyway).
A tangential plug: check out Hyperquery, our modern take on the SQL notebook (thanks, Valentin, for the vote of confidence). In our estimation, the best way for analysts to scale their impact is by crafting analyses with better transparency and discoverability, not through sheer brute-force, IDE-based SQL work, and Hyperquery makes that easier.