The AI we don't see
Why the abrupt improvement in LLMs has terrifying implications for the targeting models we don't see.
Hello everyone! It’s been a little while — thank you for your patience during my brief writing sabbatical. I thought I’d spend a little time this week talking about AI, given the chaos around OpenAI as of late. We’ll resume our standard data programming soon enough.
As always, find me on LinkedIn or Twitter — I’m always happy to chat. And if you enjoyed this post, I’d appreciate a follow/like/share. 🙂
I used to build uplift models at Wayfair and Airbnb. These are causal models, built to predict the causal impact of an action on an individual. In the advertising world, they maximize the return on ad spend by restricting spend to the people who would purchase if and only if they saw your ad.[1] And the value-add was substantial: we were able to generate tens of millions of dollars in incremental revenue per month above existing targeting models, all while reducing ad spend. It might sound a bit comical, but being able to get millions of people to buy rugs and couches or rent their homes when they otherwise wouldn’t made me feel like some sort of domiciliary marionettist.
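If you’re curious what this looks like mechanically, below is a minimal sketch of one common approach, a “T-learner” fit on simulated experiment data. The columns, models, and numbers here are all illustrative assumptions on my part, not what we actually ran at Wayfair or Airbnb:

```python
# A minimal uplift-modeling sketch (T-learner) on simulated data from a
# randomized ad-exposure experiment. Everything here is hypothetical.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Toy data: 10k users, 5 features, random ad exposure, binary "purchased".
X = rng.normal(size=(10_000, 5))
treated = rng.integers(0, 2, size=10_000)
base = (X[:, 0] > 0.5) * 0.3            # baseline propensity bump
lift = (X[:, 1] > 0.0) * 0.2 * treated  # ad effect for one segment
purchased = rng.random(10_000) < (0.1 + base + lift)

# T-learner: fit separate response models for treated and control users.
m_treat = GradientBoostingClassifier().fit(X[treated == 1], purchased[treated == 1])
m_ctrl = GradientBoostingClassifier().fit(X[treated == 0], purchased[treated == 0])

# Predicted uplift = P(purchase | saw ad) - P(purchase | no ad).
uplift = m_treat.predict_proba(X)[:, 1] - m_ctrl.predict_proba(X)[:, 1]

# Spend only on the high-uplift users: the ones who buy if and only if
# they see the ad. Negative-uplift "sleeping dogs" are left alone.
target = uplift > np.quantile(uplift, 0.8)
print(f"targeting {target.sum()} of {len(target)} users")
```

The score is the whole game: rank users by predicted uplift, spend at the top of the list, and skip both the sure things and the people an ad would actively put off.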
Fortunately, my relentless pursuit of this problem was relatively harmless. In general, the worst repercussion of my work was some frivolous spending. But I bring this up to point out how easily influenced our actions are — just a little tweak here or there to what you’re seeing can make you take an entirely different path down the chaos map. But that’s enough talk about furniture and rental homes. I want to talk about two things today that are a bit scary once you consider how manipulable we are:
1. AI is leaps and bounds more capable than it was even just a couple of years ago, largely owing to the emergence of unpredicted capabilities.

2. There are equivalently complex (sometimes even more complex) targeting models guiding our behavior, and I wonder what emergence looks like there.
Let’s talk about the AI we don’t see.
Bifurcation points
In graduate school, I studied emergence: how macroscopic phenomena can arise from the amalgamation of atomic agents in a manner that far exceeds what you might predict from a linear stacking of their behavior. In academic circles, this work centers on quite ordinary things: rivers, sandpiles, snowflakes, phase transitions. But the same dynamics manifest terrifyingly in the real world. Think avalanches, stock market crashes, mass extinctions.
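To make that concrete, here’s a toy version of the Bak–Tang–Wiesenfeld sandpile, the canonical model from this literature. The rules are trivially local (a cell topples when it reaches four grains), yet the avalanches that fall out span orders of magnitude. This is just an illustrative sketch:

```python
# Bak-Tang-Wiesenfeld sandpile: drop grains one at a time; any cell with
# >= 4 grains topples, shedding one grain to each neighbor. Simple local
# rules, emergent heavy-tailed avalanches.
import numpy as np

rng = np.random.default_rng(0)
N = 30
grid = rng.integers(0, 4, size=(N, N))   # start below the toppling threshold
avalanche_sizes = []

for _ in range(5_000):
    i, j = rng.integers(0, N, size=2)    # drop one grain at a random site
    grid[i, j] += 1
    topples = 0
    while (grid >= 4).any():             # relax until every cell is stable
        unstable = grid >= 4
        topples += unstable.sum()
        grid[unstable] -= 4
        shed = unstable.astype(int)      # one grain to each of 4 neighbors;
        grid[1:, :] += shed[:-1, :]      # grains falling off the edge vanish
        grid[:-1, :] += shed[1:, :]
        grid[:, 1:] += shed[:, :-1]
        grid[:, :-1] += shed[:, 1:]
    avalanche_sizes.append(topples)

sizes = np.array(avalanche_sizes)
print("median avalanche:", np.median(sizes), "| largest:", sizes.max())
```

Most drops do nothing; a few trigger cascades that sweep the whole grid. The per-cell rule gives almost no hint of the avalanche distribution it produces.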
And emergence is the mechanism by which AI has gained, by all accounts, its distinct humanness. Sure, the transformer architecture is the core, human-comprehensible algorithm behind it, but to reduce GPT-4 to its architecture would be to erase all differentiation from the LLMs at Google or Meta that are trying, and failing, to compete. The secret seems to be that OpenAI has figured out how to manifest certain emergent properties in a way that others are still struggling to match.
I’m pontificating far too much, and without a good look inside OpenAI I have to admit I may be overstating their intentionality here. But the emergent properties are certainly there. It’s easy to view progress as smooth and linear, like the steady monotonic rise of capabilities in the plot below. But in viewing the world this way, we miss the black lines (the critical points) beyond which systems such as AI start to behave profoundly differently.
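The textbook toy for a critical point is the logistic map (the chaos map alluded to earlier). Hold the update rule fixed, nudge one parameter, and the long-run behavior changes in kind, not merely in degree. A purely illustrative sketch:

```python
# Logistic map x -> r * x * (1 - x): the same one-line rule settles into a
# fixed point, an oscillation, or chaos depending only on the parameter r.
def long_run_states(r, n_settle=1_000, n_keep=8):
    x = 0.5
    for _ in range(n_settle):       # burn in past transient behavior
        x = r * x * (1 - x)
    states = set()
    for _ in range(n_keep):         # sample the attractor
        x = r * x * (1 - x)
        states.add(round(x, 4))
    return sorted(states)

for r in (2.8, 3.2, 3.5, 3.9):      # cross the critical points one by one
    print(f"r={r}: {long_run_states(r)}")
```

Below r ≈ 3.0 you get one steady state; past it, oscillation; past r ≈ 3.57, chaos. The black lines in the capabilities plot are the same idea: nothing about the rule changes, but the behavior does, profoundly.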
This emergence should be daunting in its own right. It’s not difficult to see the second black line looming on the horizon (the often-discussed point where an AI can improve itself in a meaningful way: the tech singularity). But let’s stay a little less fantastical and talk about why this should worry us, particularly when considering all the other neural nets layered invisibly all over the place.
Mindless scrolling
There’s ML powering everything: in your ads, sure, but also in your Netflix recommendations, in Spotify’s track curation, in what you’re shown on Prime Day, in the particular set of stores that have popped up in your neighborhood, in the prices of things you want to buy, and, notably, in the videos that Instagram, TikTok, and YouTube feed you as you mindlessly scroll. I want to spend a little time on this last point, as it’s one that I think we can all agree has had a net negative impact on our collective mental health.
I’d wager that I have a reasonable amount of self-control. But whenever I open a short-form content platform (e.g. TikTok, YouTube Shorts, Instagram Reels), I’m sucked in for far longer than I plan to be. I’ll scroll mindlessly, and then, even after I’ve put the phone down, feel this constant pull to open it again.
And while it’s easy to self-flagellate here, it’s not entirely my fault. Sure, my lack of self-control is a parameter, but the outcome, the mindless scrolling, is multiply causal. And when you have near-infinite computing resources throwing themselves at you with the sole aim of holding your attention, what hope do you really have of skirting it? Clearly some fault lies with the algorithm.
Now, of course, having worked on these kinds of algorithms through 2021, I can attest that they were once largely incremental improvements layered on the early recommendation systems of the 2010s. But the shift over the last few years has been toward deep learning integration, and this is where the sensationalist in me emerges.[2] If the same level of emergence we’ve seen in LLMs manifests in models with less palpable frontends and more malignant objectives, they will silently orchestrate our behavior in ways that are hardly optimized for our well-being. The standard KPIs in corporations are things like time-on-platform, click-through rate, and long-term retention: all arguably measures of compulsivity, not enlightenment.

At this point, it’s a bit of an overplayed trope to lament the generally negative influence social media has had on our collective mental health, but the trope fails to account for the steady improvement of the algorithms responsible. ChatGPT already does many things better than we can. Now imagine the point where recommendation systems get better than your ability to withstand them, and suddenly we’re all cadaverously addicted.[3]
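To see how little these KPIs demand of the algorithm, here’s a deliberately crude sketch: an epsilon-greedy bandit that learns to serve whatever maximizes simulated watch time. The categories and numbers are invented, and real systems are deep-learning behemoths rather than bandits, but the shape of the objective is the same:

```python
# An engagement-maximizing bandit. Note what's absent: nothing in this loop
# asks whether the user is better off, only whether they kept watching.
import numpy as np

rng = np.random.default_rng(0)
categories = ["news", "cooking", "outrage", "cat videos"]
true_mean_watch = np.array([20.0, 35.0, 90.0, 60.0])  # seconds, hypothetical

counts = np.zeros(len(categories))
total_watch = np.zeros(len(categories))
eps = 0.1

for _ in range(10_000):
    if rng.random() < eps:                        # explore occasionally
        arm = int(rng.integers(len(categories)))
    else:                                         # otherwise exploit dwell time
        arm = int((total_watch / np.maximum(counts, 1)).argmax())
    watch = max(0.0, rng.normal(true_mean_watch[arm], 10.0))
    counts[arm] += 1
    total_watch[arm] += watch

for c, n in zip(categories, counts):
    print(f"{c:>10}: served {int(n)} times")
# Converges on whichever category holds attention longest ("outrage" here,
# purely because of the numbers we assumed).
```

Swap the bandit for a deep net trained on billions of interaction logs with the same objective, and the gap between “what holds your attention” and “what’s good for you” only widens.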
1. Note that this also means that people who would not purchase because they saw your ad are not targeted, increasing revenue beyond the point of just cutting costs.

2. Ha ha.

3. In reality, I’m not really that scared. I think we humans are pretty good at adapting to these sorts of changes. But I’d love it if I could just get a few more people to doomscroll a little less.