Online advertising no longer performs its stated function. It does not inform. It does not discover. It does not even pretend to understand its audience. What it does instead — repeatedly, confidently, mechanically — is miss.
Have you noticed that online advertising has stopped feeling persuasive and started feeling confused?
Not confused in a charming, human way — but confused in a way that reveals the anti-human optimization machinery underneath. Dating ads that assume intimacy where none exists. IQ-test bait that mistakes insecurity for curiosity. Health ads whose imagery collapses into nonsense. The same six or seven obnoxious ads on endless rotation, none of them relevant, all of them faintly insulting.
The irritation isn't that these ads exist. It's that they miss — repeatedly, confidently, and without learning. They simply try again, louder, as if repetition itself might substitute for relevance.
This is how you know something fundamental has broken.
There was a time when advertising — at least at its best — served a discovery function. It surfaced tools, books, games, services, and ideas you might not have known existed. It widened your option space. It treated attention as something to be earned through relevance or novelty.
What we have now is something else entirely. And the phrase most often used to describe it — the attention economy — is not just inadequate. It is actively misleading.
The Lost Function: Advertising as Discovery
Historically, advertising worked when it aligned incentives:
- Advertisers wanted to reach people who might actually care.
- Audiences tolerated ads because they occasionally learned something useful.
- Platforms benefited by acting as intermediaries, not adversaries.
Even crude advertising could still perform discovery. A magazine ad for a new tool. A back-page notice for a book. A banner announcing a game, a service, a place.
Crucially, discovery required exploration. It required showing things that might not convert immediately but could build awareness, curiosity, or trust over time.
Modern ad systems abandoned exploration almost entirely.
Optimization replaced curiosity. Conversion replaced relevance. And once those metrics took over, discovery became an unaffordable luxury.
The Category Error: Why “Attention Economy” Is the Wrong Frame
Calling this system an economy smuggles in assumptions that do not hold.
Economies deal in commodities that are:
- fungible
- transferable
- accumulable
- valuable in themselves
Attention is none of these.
Attention is a capacity, not a currency. It exists only inside a living nervous system. It cannot be stored by others. It cannot be transferred without participation. It disappears the moment it is withdrawn.
Most importantly: attention has no intrinsic value.
A single human's attention, in isolation, is worth nothing. It becomes "valuable" only when a system can convert it into something else — behavior, belief, data, money, influence.
The value does not live in attention. It lives in the attempt to extract from it.
Calling this an economy disguises a conversion pipeline as a neutral marketplace. It reframes extraction as exchange and makes exploitation sound inevitable.
Why “Attention Ecosystem” Doesn’t Quite Work Either
Some have tried to soften the framing by talking about an attention ecosystem. This is closer — but still wrong.
Ecosystems imply balance, mutual dependence, and feedback loops that preserve the participants. They suggest a system in which use and renewal remain in equilibrium.
What we have instead is a structure that rewards depletion. Fatigue carries no penalty. Annoyance accrues no cost. Alienation registers only as churn, not as failure.
The deeper problem is that the ad environment is missing the kind of feedback loops that would force systemic correction.
First, most ad systems do not optimize for human satisfaction. They optimize for measurable proxies — impressions, clicks, conversions — because those are legible to dashboards and easy to monetize. The negative externalities (irritation, fatigue, distrust, loss of meaning) are not measured well, and even when they are felt, they are hard to attribute. You can tell you're sick of the ads, but you cannot point to the precise impression that did the damage.
Second, attribution is broken in a way that protects bad ads. People don't leave a site and file a complaint that says, "I'm done because I saw the same insulting creative twelve times." They simply disengage later, somewhere else, for reasons that look diffuse. The system treats that loss as background noise. In practice, churn becomes a rounding error, not a diagnostic.
Third, the market structure fragments responsibility. Real-time bidding and multi-layer ad supply chains mean no single actor experiences the full cost of annoyance. The advertiser blames "inventory." The platform blames "demand." The network blames "creative." Everyone captures a slice of revenue, and no one owns the downstream damage.
Finally, even when the system does learn, it tends to learn the wrong lesson. If a manipulative ad converts a small percentage of people under pressure, that conversion is recorded as success. If it alienates everyone else, that cost is distributed across the ecosystem and delayed in time. Optimization rewards the short-term win and discounts the long-term harm.
So when a system exhausts a user and loses them forever, that loss is rarely attributed back to the ad strategy itself. It is written off as an acceptable attrition rate.
That is not an ecosystem. It is strip-mining.
What’s Actually Happening: Behavioral Extraction
The modern advertising stack is not an economy of attention. It is a regime of behavioral extraction.
This outcome is not accidental, nor is it primarily the result of bad actors or tasteless creatives. It emerges mechanically from how the system is built and what it is allowed to measure.
At the core of the stack is a simplification: humans are modeled as probabilistic response surfaces. Internal states — curiosity, satisfaction, trust, boredom — are treated as unknowable or irrelevant. What is legible are actions: impressions served, clicks registered, conversions completed. Optimization therefore collapses inward toward whatever reliably produces those signals, regardless of how it does so.
Once action becomes the only currency the system can see, emotional leverage becomes the shortest path. Fear, insecurity, shame, and validation spikes compress decision time and increase the probability of response. A system trained to maximize short-term action will therefore converge on stimuli that reduce deliberation rather than encourage understanding.
Repetition follows naturally. Because understanding a person is computationally expensive and uncertain, while showing the same creative again is cheap and measurable, the system selects for persistence over relevance. Fatigue and irritation are real human responses, but they are weak signals inside the stack: delayed, diffuse, and rarely attributable to a specific ad impression. As a result, annoyance is treated as an acceptable side effect rather than a failure mode.
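To make the shape of that loop concrete, here is a minimal sketch in Python, with purely hypothetical names, of a selector whose only feedback channels are impressions and clicks. Nothing about fatigue or irritation can enter the objective, so re-serving whatever has measured best is always the rational move.

```python
# Minimal sketch (hypothetical names): a selector that can only "see" clicks.
# Fatigue and irritation are real, but they never enter the objective,
# so repeating the best-measured creative is always the rational choice.

from collections import defaultdict

class ClickOnlySelector:
    def __init__(self, creative_ids):
        self.creative_ids = list(creative_ids)
        self.impressions = defaultdict(int)
        self.clicks = defaultdict(int)

    def choose(self):
        # Rank by observed click-through rate alone; nothing else is legible.
        def ctr(creative_id):
            shown = self.impressions[creative_id]
            return self.clicks[creative_id] / shown if shown else 0.0
        return max(self.creative_ids, key=ctr)

    def record(self, creative_id, clicked):
        # The only feedback that exists: an impression and a binary click.
        # "I'm tired of seeing this" has no way to enter the loop.
        self.impressions[creative_id] += 1
        self.clicks[creative_id] += int(clicked)
```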
This is why ads do not need to be good. They need only be cheap enough and occasionally effective.
Insecurity-based advertising dominates not because it is persuasive in any deep sense, but because it exploits a structural asymmetry. A small fraction of people under momentary pressure will convert quickly, producing clean success signals. The much larger population who feels insulted, alienated, or exhausted does not generate an equally clean counter-signal. The harm is distributed, delayed, and invisible to optimization loops.
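A toy simulation, with invented numbers, makes the asymmetry visible. The dashboard compares conversions; the people who were alienated produce no counter-signal at all.

```python
# Toy illustration (invented rates, not real data) of the asymmetry described above:
# the dashboard records conversions; alienation generates no counter-signal.
import random

random.seed(0)
AUDIENCE = 100_000

def serve(convert_rate, alienate_rate):
    conversions = alienated = 0
    for _ in range(AUDIENCE):
        r = random.random()
        if r < convert_rate:
            conversions += 1   # measured, attributed, rewarded
        elif r < convert_rate + alienate_rate:
            alienated += 1     # felt by the person, invisible to the loop
    return conversions, alienated

insecurity_ad = serve(convert_rate=0.02, alienate_rate=0.60)
respectful_ad = serve(convert_rate=0.01, alienate_rate=0.02)

# The optimizer compares only the first number of each pair.
print("insecurity ad:", insecurity_ad)   # roughly 2,000 conversions, 60,000 alienated
print("respectful ad:", respectful_ad)   # roughly 1,000 conversions, 2,000 alienated
```

By the only number the loop can see, the creative that drove away most of its audience is the winner.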
Over time, the system learns exactly what it is allowed to learn: which levers produce extractable behavior at the lowest cost. Trust, long-term relevance, and discovery fall away because they are expensive, slow, and poorly measured. What remains is a machine that burns bridges efficiently — and has no internal mechanism to notice that it is doing so.
Understanding why this system keeps missing requires looking at the mechanisms that prevent correction.
Why the System Keeps Missing
When ads feel insulting, it is not because they misunderstand you personally. It is because the system has structurally lost the ability to form a model of any individual mind.
The first mechanism is population averaging. Modern ad systems infer intent by aggregating past responses across large cohorts and projecting those averages forward. This flattens people into demographic and behavioral shadows — statistical silhouettes rather than subjects. Anything that does not fit cleanly into an existing cluster is treated as noise. The system is not rewarded for noticing difference; it is rewarded for exploiting similarity.
The second mechanism is proxy lock-in. Once a particular emotional lever — fear, insecurity, validation — has shown even modest success across a cohort, it becomes reinforced in the optimization loop. Alternative hypotheses about what might motivate engagement are never explored, because exploration carries opportunity cost. The system does not ask, "What does this person care about?" It asks, "What has extracted behavior from adjacent profiles before?"
The third mechanism is asymmetric error cost. Guessing wrong is cheap. Showing an insulting or irrelevant ad carries almost no immediate penalty, because rejection is passive and silent. Learning, by contrast, is expensive. It requires richer models, slower feedback, and a willingness to forgo short-term gains. In an environment tuned for speed and scale, the rational strategy is to guess loudly and move on.
Finally, the system is blind to why it missed. Non-engagement collapses many causes — disinterest, offense, fatigue, confusion — into a single null signal. Without causal clarity, the system cannot distinguish between "this was irrelevant" and "this was actively alienating." Both look the same: no click. As a result, the corrective gradient never appears.
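Here is a small sketch of that collapse, with illustrative names only and no real platform's schema. Whatever the person actually felt, the logged record is identical, so every update derived from it points in the same direction.

```python
# Sketch of the "single null signal" problem: very different human reactions
# all collapse into the same logged outcome, so the learner cannot tell them apart.
# (Field names are illustrative, not any real ad platform's schema.)

reactions = ["not interested", "offended", "exhausted", "confused", "didn't notice"]

def log_impression(reaction):
    # The reaction is ignored by design: whatever actually happened,
    # the record is the same, namely shown and not clicked.
    return {"shown": 1, "clicked": 0}

records = [log_impression(r) for r in reactions]
assert all(rec == {"shown": 1, "clicked": 0} for rec in records)

# Every update derived from these records pushes the model the same way,
# so "this was irrelevant" and "this was actively alienating" look identical.
```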
Taken together, these mechanisms guarantee a particular outcome. The system will continue to try the same small set of emotionally compressive strategies, over and over, even as they degrade the overall environment. It is not failing despite optimization. It is failing because of optimization constrained to the wrong signals.
In that context, missing is not a bug. It is the cheapest possible equilibrium.
A Different Frame: Attention as Capacity and Consent
A healthier model would treat attention as finite, contextual, owned by the individual, and granted rather than owed.
Under that frame, failure to engage is not lost value. It is a signal that something did not deserve attention in the first place. Silence is not absence; it is judgment.
This reframes advertising from extraction to worthiness. The burden shifts away from asking how attention can be captured and toward asking what would make something worth attending to.
For such a system to exist, it would require fundamentally different incentives and feedback loops. It would need to measure not just action, but aftermath: whether engagement led to satisfaction, understanding, or durable interest rather than regret or fatigue. It would need to treat sustained curiosity as a success signal and rapid churn as a failure, even when short-term conversion looked strong.
A discovery-oriented system would have to reward exploration explicitly. That means allocating budget to showing novel, uncertain, or long-tail offerings whose value cannot be known in advance, and protecting those explorations from being immediately crushed by conversion metrics. It would need mechanisms that tolerate being wrong in the short term in order to learn what might matter in the long term.
Crucially, it would need negative feedback that actually propagates. Repetition-induced fatigue, irritation, and disengagement would have to be treated as first-class signals rather than background noise. Seeing the same creative too often would count against it. Causing people to tune out entirely would be recorded as harm, not merely lost opportunity.
Finally, such a system would require consent as an input, not an obstacle. Users would have to be able to say, implicitly or explicitly, "I am open to discovering things right now," and just as importantly, "I am not." Attention would be contextual, revocable, and respected.
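None of this describes an existing system. As a speculative sketch only, with invented names and weights, the fragment below restates those requirements as a scoring function in which aftermath, repetition, tuning out, novelty, and consent are first-class inputs.

```python
# Speculative sketch, not a proposal for any real system: what the signals
# described above might look like as first-class inputs to scoring.
# All names and weights are invented for illustration.

from dataclasses import dataclass

@dataclass
class Outcome:
    converted: bool        # the only thing today's loops reliably see
    satisfied_later: bool  # aftermath: did the engagement hold up over time?
    repeat_exposures: int  # how many times this person has seen this creative
    tuned_out: bool        # did they disengage from the surface entirely?

def score(outcome: Outcome, consented_to_discovery: bool, is_novel: bool) -> float:
    if not consented_to_discovery:
        return float("-inf")                        # consent as an input, not an obstacle
    s = 1.0 if outcome.converted else 0.0
    s += 2.0 if outcome.satisfied_later else -1.0   # aftermath outweighs the click
    s -= 0.5 * max(0, outcome.repeat_exposures - 3) # repetition counts against it
    s -= 5.0 if outcome.tuned_out else 0.0          # tuning out is harm, not noise
    s += 0.5 if is_novel else 0.0                   # explicit exploration bonus
    return s
```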
This kind of system is not impossible. But it is incompatible with extractive optimization. It values trust over throughput, learning over leverage, and long-term relevance over short-term yield.
That is not economics. It is ethics.
Closing: The Cost of Calling It an Economy
As long as we call this an attention economy, we normalize its worst behaviors. We accept depletion as inevitable and frame resistance as naïve, as though exhaustion were a law of nature rather than the result of design choices.
But attention is not a resource to be mined. It is a human capacity — finite, situational, and voluntary. It can be invited. It can be earned. And it can just as easily be withdrawn.
The next time an ad feels insulting, irrelevant, or strangely desperate, it is worth pausing to notice what is actually happening. That ad is not competing for your attention in a marketplace. It is guessing, cheaply, about which lever might still produce extractable behavior.
Your lack of response is not apathy. It is information the system does not know how to read.
Advertising did not become obnoxious because people became shallow. It became obnoxious because the systems stopped being curious — and because curiosity is not something they are rewarded for sustaining.
Until we change the frame, this will continue. Systems will keep extracting short-term reactions while quietly training people to disengage. They will optimize for clicks while hollowing out the very capacity they depend on.
And one day, perhaps not far off, the ads will still be shouting, but no one will be listening. Attention does not disappear. It simply goes where it is treated as something more than fuel.