Optimising for the surfaceless web

30 October 2025 at 18:14

When I wrote about the machine’s emerging immune system, I argued that AI ecosystems would eventually learn to protect themselves. They’d detect manipulation, filter noise, and preserve coherence. They’d start to decide what kinds of information were safe to keep, and which to reject.

That wasn’t a prediction of some distant future. It’s happening now.

Every day, the surface of the web is scraped, compressed, and folded into the models that power the systems we increasingly rely on. In that process, most of what we publish doesn’t survive contact. Duplicate content dissolves. Contradictions cancel out. Persuasive noise is treated as waste heat and vented into the void.

What remains isn’t the web as we know it – it’s something flatter, quieter, and more internal. A composite understanding of the world, shaped by probability and consensus. A compressed version of reality, organised for machines to reason about rather than for humans to explore.

And that’s the new landscape we’re marketing into: one with no pages, no clicks, and no real surface at all.

We built our disciplines on visibility. On the idea that success meant being seen. But as the machine closes itself off, as the interfaces between human experience and machine understanding grow thinner, visibility starts to lose meaning.

The web we built had surfaces. We could design them, decorate them, and fight for space upon them.
That web is gone. There are no more surfaces – only systems. So the question isn’t how to stand out anymore. It’s how to stay inside.

Visibility is legacy

For most of the web’s history, marketing has been an act of performance. You built something on the surface – a website, a post, a page – and hoped that enough people, and enough algorithms, would look at it. The goal was to be discoverable. Searchable. Indexed.

That surface once mattered. It was where meaning lived; a thin film of presentation stretched across a sea of content. We learned how to decorate it, how to manipulate its texture, how to catch the light.

But surfaces don’t mean much to systems that don’t look.

AI doesn’t browse, or scroll, or click. It doesn’t see the things we publish as artefacts, or experiences, or journeys. It sees them as data: a dense accumulation of words and links and relationships. What was once a web of pages becomes, in its eyes, a map of connections and probabilities.

As those systems grow, the visible layer stops being the web’s living tissue and becomes something closer to a residue. A user interface. A translation layer between the model’s internal state and our human need for touchable things.

What we still see – the search results, the snippets, the website visits – are just reflections of a deeper process. The model has already decided what matters before anything becomes visible to us.

Visibility, then, is no longer a measure of relevance. It’s an echo. A by-product of how well you exist in the model’s internal map of the world.

Marketing, then, can’t rely on that shadow anymore. You can’t simply perform on the surface and expect to be understood below it. You have to exist in the substrate – in the model’s inner representation of reality – where things are no longer seen, only known.

The game isn’t about visibility anymore. It’s about becoming part of the organism – being recognised as useful, true, and integral to its functioning.

The machine’s new selection pressure

Every cycle of training, pruning, and retraining redraws the boundaries of what the machine believes to be true.

Information doesn’t simply disappear; it’s weighed, compared, and either reinforced or allowed to fade. Fragments that align with the model’s broader understanding are retained and strengthened. Those that contradict it, or contribute nothing new, dissolve quietly into statistical noise. This isn’t malice – just maintenance; the system trimming what no longer fits.

From a distance, it looks a lot like natural selection.

Ideas that are clear, consistent, and corroborated tend to persist. They become anchor points in the system’s internal map of the world. Messy, manipulative, or self-serving signals lose fidelity with each generation until they’re effectively gone.

In a web of endless redundancy and synthetic repetition, that selection pressure is profound. Only the most stable patterns survive ingestion, compression, and re-ingestion. Everything else – the campaigns, the slogans, the noise – burns off in the friction between training cycles.

That’s what survival looks like now. Not attention. Not engagement. But persistence.

Survival traits: Useful. True. Integral.

If visibility has given way to persistence, the question becomes: what kinds of information survive? What traits make something worth keeping, when the machine is constantly distilling and rewriting the world?

Three patterns seem to emerge. They’re not rules so much as instincts – signals that the system tends to reinforce rather than erase.

Useful

The model keeps what helps it make better predictions. Content that clarifies, contextualises, or resolves ambiguity adds weight. It’s nutritious to the system. It strengthens its ability to connect ideas and answer questions.

Marketing, by contrast, has often specialised in the opposite – in the art of adding noise to capture attention. In a surface-driven world, that worked. In a closed, self-editing one, it reads as contamination.

To persist, information has to improve the model’s understanding of reality. In practice, that means producing things that make the broader web smarter – content that’s cited, contextualised, and consistent wherever it’s encountered.

True

Truth, in this context, isn’t about moral correctness. It’s about statistical coherence – about holding the same shape across many perspectives.

Signals that remain consistent across time, context, and corroboration develop gravity. They become stable landmarks in the machine’s internal map of the world. Contradictions, rebrands, and rhetorical pivots weaken that gravity until the entity itself loses definition.

If the machine can’t tell what you are, it stops believing you exist.

Integral

The most resilient information isn’t the loudest or the most original. It’s the most connected.

Ideas that are cited, linked, quoted, or built upon become structural. They help to shape the scaffolding that other knowledge depends on. Delete them, and the system’s understanding weakens.

That’s what it means to be integral – to occupy a position where absence would create tension. To be so entangled in the model’s sense of how the world fits together that forgetting you would break something.

These are the new markers of durability: usefulness, truthfulness, and integration – the traits that make you hard to erase.

Marketing as symbiosis

If the machine is learning what to keep, then marketing becomes an act of coexistence. The challenge isn’t to outsmart the filters or to force attention; it’s to make yourself part of the pattern the system depends on.

In a sense, the best marketing has always done this. The most durable brands, ideas, and voices are those that made themselves useful to the environment they occupied – not through volume, but through contribution. What’s changed is that the environment is now synthetic, recursive, and self-curating. The audience is a model.

You don’t persuade a model. You support it – with clarity, consistency, and connection, the same qualities it’s built to recognise and reward.

That means thinking less about performance and more about participation. Your goal isn’t to win visibility, but to earn residency – to become something the machine recognises as part of its metabolism.

The idea of “brand” shifts in that light. A brand isn’t just a promise to people anymore; it’s also a pattern that helps machines resolve ambiguity – a consistent cluster of language, data, and context that supports their confidence. When your language, data, and presence align in ways that improve the system’s accuracy, you stop being external. You become infrastructure.

And once that happens, you’re no longer competing for attention at all. You’re competing for existence.

Temporal persistence

Nothing inside the machine stays still for long.

Each new wave of training reshapes the landscape – compressing, merging, and overwriting its understanding of the world. Entities drift. Meanings decay. Facts are reinterpreted. The machine forgets as quickly as it learns.

In that churn, persistence becomes an active process. You don’t exist once; you exist continuously, by reasserting your pattern every time the world is rewritten.

That doesn’t mean producing endless noise. It means maintaining coherence – allowing the traces of your presence to echo across time, context, and medium. The goal isn’t repetition, but reinforcement.

Old marketing rewarded novelty. The machine rewards consistency. It trusts what endures. It preserves what it can still recognise after compression, translation, and paraphrase.

To survive, treat your reputation, language, and data as living systems – resilient enough to deform without breaking.

Most campaigns are built to capture a moment. Few are built to survive a model update.

Existence as marketing

The machine immune system doesn’t just defend against manipulation. It shapes what becomes visible, repeatable, and credible across the systems we now rely on. Every model update redraws those boundaries a little – deciding what information stays accessible, and what quietly fades.

Our role inside that process is changing. Marketing used to be the art of visibility: we tried to be seen, to be remembered, to be chosen. Now, in a system that no longer looks in any human sense, visibility has no real meaning. What matters is whether we’re included in the model’s internal understanding of how the world works – whether we’re kept.

That’s the strange new responsibility of marketing: to make our presence valuable enough that deletion would leave a gap.

The future belongs to those who remain useful, true, and integral – not as moral virtues, but as traits the system depends on. The rest will fade quietly, not from malice but from optimisation.

The machine will keep compressing the world, folding it inward until there are no real surfaces left – only patterns it trusts to describe reality. Our task is to make sure we’re still one of them.

Because the game isn’t about visibility anymore. It’s about viability – about whether the machine still remembers you when it dreams.

The Hotmail effect

25 October 2025 at 13:29

A plumber drove past me last week with a Hotmail email address painted proudly on the back of his van.

No logo. No tagline. No QR code. Just “Plumbing Services”, and an email address that probably hasn’t changed since Windows XP was still popular.

For most of my career, that might have been all I needed to dismiss them. Probably unprofessional. Probably unsophisticated. Probably cash-in-hand. The kind of person who might install your sink backwards, and vanish with your money.

We used to treat things like that as red flags. If you looked unpolished, you were unprofessional. That was the rule.

And those assumptions didn’t come from nowhere – we engineered them. An entire generation of marketers trained businesses to look the part. We told them that trust was a design problem. That confidence could be manufactured. Custom domains. Grid-aligned logos. Friendly sans-serifs and a reassuring tone of voice. We built an industry around polish and convinced ourselves that polish was proof of competence.

But maybe that plumber’s Hotmail address is saying something else.

A human signal

Because in today’s web – a place that somehow manages to be both over-designed and under-built – a Hotmail address might be the last surviving signal of authenticity.

Most of what we see online now tries to look perfect. It tries to read smoothly, to feel effortless, to remove any friction that might interrupt the illusion. But most of it’s bad. Heavy. Slow. Repetitive. Polished in all the wrong ways.

We’ve built an internet that performs competence without actually being competent.

That van doesn’t need a strategist or a brand guide. It doesn’t need a content calendar or a generative workflow. It’s probably been trading under that same email for twenty years – longer than most marketing agencies survive.

And maybe that’s the point. In a world of synthetic competence – of things that mimic expertise without ever having earned it – the rough edges start to look like proof of life.

Because when everything looks professional, nothing feels human. Every brand, every website, every social feed has been tuned into the same glossy template. Perfect kerning, soft gradients, human-but-not-too-human copy. It’s all clean, confident, and hollow.

We’ve flattened meaning into usability. The same design systems, tone guides, and “authentic” stock photography make everything look trustworthy – and therefore suspicious.

It’s the uncanny valley of professionalism: the closer brands get to looking right, the less we might believe them.

The economy of imperfection

So authenticity becomes the next luxury good.

We start to crave friction. We look for cracks – the typos, the unfiltered moments, the signs of human hands. The emails from Hotmail accounts.

And as soon as that desire exists, someone finds a way to monetise it. The aesthetic of imperfection becomes an asset class.

You can see it everywhere now: “shot on iPhone” campaigns, lo-fi ads pretending to be user-generated content, influencers performing spontaneity with agency scripts and lighting rigs.

It’s a full-blown economy of imperfection, and it’s growing fast. The market has discovered that the easiest way to look real is to fake it badly.

The collapse of signal hierarchies

This is what happens when authenticity itself becomes synthetic.

Every signal that once conveyed truth – professionalism, polish, imperfection – can now be generated, packaged, and optimised.

We can fake competence. We can fake vulnerability. We can fake sincerity.

And when everything can be faked, the signals stop working. Meaning collapses into noise.

That’s the broader story of the synthetic web – a world where provenance has evaporated, and where all signals eventually blur into the same static.

The algorithmic loop of trust

Social media has made this worse.

Platforms are teaching us what “authentic” looks like. They amplify the content that triggers trust – the handheld shot, the stuttered delivery, the rough edge. Creators imitate what performs well. The algorithm learns from that imitation.

Authenticity becomes a closed loop, refined and replicated until it’s indistinguishable from the thing it imitates.

We’ve turned seeming human into a pattern that machines can optimise.

The uncanny mirror

That’s the bit that gets under my skin.

Maybe the plumber with the Hotmail address isn’t a relic at all. Maybe he’s the last authentic node in a network that’s otherwise been wrapped in artificial sincerity.

He’s not optimised. He’s not A/B‑tested. He’s just there. Still trading. Still human.

And maybe that’s why he stands out.

Once, a Hotmail address meant amateur hour. Now, it might mean I’m still real. Or maybe it’s just another costume – the next iteration of authenticity theatre. After all, businesses (and their vans, and their email addresses) can certainly just be bought.

Either way, the lesson’s the same. Real things age. They rust, wear down, and carry their history with them. That’s what makes them trustworthy.

And the most convincing thing you can be might just be imperfect.

Marketing against the machine immune system

24 September 2025 at 15:32

Marketing has always been about the art of misdirection. We take something ordinary, incomplete, or even broken, and we wrap it in story. We build the impression of value, of inevitability, of trustworthiness. The surface gleams, even if the foundations are cracked.

And for decades, that worked – not because the products or experiences were always good, but because the audience was human.

Humans are persuadable. We’re distractible, emotional, and inconsistent. We’ll forgive a slow checkout if the branding feels credible. We’ll look past broken links if the discount seems tempting. We’ll excuse an awkward interface if the ad campaign made us laugh. Marketing thrived in those gaps – in the space between what something is, and how it can be made to feel.

But the audience is shifting.

Increasingly, it isn’t people at the front line of discovery or decision-making. It’s machines. Search engines, recommenders, shopping agents, IoT devices, and large language models. These systems decide which products we see, which services we compare, and which sources we trust. In many cases, they carry the process to completion – making the recommendation, completing the transaction, providing the answer – before a human ever gets involved.

And unlike people, machines don’t shrug and move on when something’s off. Every flaw – a slow page, a misleading data point, a broken flow, a clumsy design choice – gets logged. They remember. Relentlessly. At scale. And at scale, those memories aren’t inert. They accumulate. They shape behaviour. And they may be the difference between being surfaced or never being recommended at all.

How machines remember

Machines log everything. Or, more precisely, they log everything that matters to them.

Every interaction, every transaction, every request leaves a trace somewhere. We know this because it already happens.

  • Web crawlers track status codes, file sizes, and response times.
  • Browsers feed back anonymised performance metrics.
  • Payment processors log retries, declines, and timeouts.
  • IoT devices record whether an API responded in time, or not at all.

And as more of our experiences flow through agents and automation, it’s reasonable to expect the same habits to spread. Checkout assistants, shopping bots, recommendation engines, voice systems – all of them are under pressure to learn from what happens when they interact with the world. Logging is the cheapest, most reliable way to do that.

At small scale, a log is just a line in a file. One record among billions. But as those records accumulate, patterns emerge.

  • A single timeout might be a blip.
  • A thousand timeouts look like unreliability.
  • One contradictory data point is noise.
  • A hundred is evidence of inconsistency.

Logs turn a one-off interaction into something that can be measured, compared, and acted on later.

The challenge is scale. Billions of requests create billions of logs. Too many to store forever. Too noisy to read line by line. Too expensive to process directly.

So machines compress. They smooth detail into summary, so they can carry forward what matters without drowning in history.

  • Latency collapses into an average response time.
  • Error codes collapse into a failure rate.
  • Conflicting inputs collapse into “inconsistent source”.
  • Human behaviour collapses into “low engagement”.

This isn’t emotional judgment. It’s economics. Without compression, systems choke. With it, they remember – not every detail, but the distilled conclusion of how you performed.
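
To make that concrete, here’s a minimal sketch of the kind of compression described above, in Python. The log schema, field names, and the threshold for “unreliable” are illustrative assumptions rather than any real system’s internals.

```python
# A minimal sketch of the compression step described above: raw request logs
# are distilled into the durable summary that gets carried forward. The log
# schema, field names, and the "unreliable" threshold are illustrative
# assumptions, not any real crawler's internals.
from statistics import mean

raw_logs = [
    {"url": "https://example.com/checkout", "status": 200, "latency_ms": 180},
    {"url": "https://example.com/checkout", "status": 504, "latency_ms": 9000},
    {"url": "https://example.com/checkout", "status": 504, "latency_ms": 9000},
    {"url": "https://example.com/checkout", "status": 200, "latency_ms": 210},
]

def summarise(logs):
    """Collapse individual records into the distilled conclusion that persists."""
    latencies = [entry["latency_ms"] for entry in logs]
    failures = [entry for entry in logs if entry["status"] >= 500]
    summary = {
        "avg_latency_ms": mean(latencies),
        "failure_rate": len(failures) / len(logs),
    }
    # The raw detail is discarded; only the judgment travels forward.
    summary["label"] = "unreliable" if summary["failure_rate"] > 0.25 else "healthy"
    return summary

print(summarise(raw_logs))
# e.g. {'avg_latency_ms': 4597.5, 'failure_rate': 0.5, 'label': 'unreliable'}
```

The detail that mattered at the time – which request, which day, what went wrong – disappears; only the conclusion survives to shape the next encounter.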

And once those conclusions exist, they don’t just sit there. They begin to shape how the system treats you the next time it encounters you.

Why machines share their memories

A single system’s memory is useful, but limited. A crawler may learn that your pages are slow, a shopping assistant may discover your checkout fails too often, a voice agent may notice that your hours don’t match reality. Each of these insights has value, but only to the system that recorded it. Everyone else has to pay the cost of re-learning the same lesson.

That duplication is wasteful. And these systems are already under pressure. Crawlers operate under strict compute budgets. Assistants live or die by their credibility with users. Models are insatiable, always hungry for feedback to refine their outputs. For all of them, repeating the same experiments, burning through cycles, and rediscovering the same flaws is an inefficiency they can’t really afford.

So the logic points toward sharing. Not because machines are altruistic, but because cooperation is cheaper than duplication.

We’ve already seen this play out elsewhere. Browsers exchange phishing and malware lists. Banks and payment processors pool fraud fingerprints. Email providers collaborate on spam signatures. Competitors in name, but collaborators in practice – because the cost of facing the same threat in isolation is higher than the cost of sharing intelligence.

There’s every reason to believe that agents, crawlers, and models will follow the same path. Maybe not by Google handing logs to Bing, but through other channels. Third-party clearinghouses. Shared standards and APIs. Training data that encodes the lessons learned elsewhere. However it happens, the effect is the same: what one system concludes about you rarely stays contained for long.

And that’s the part most businesses miss. A slow site, a broken endpoint, a misleading schema – these aren’t just local problems. They have a way of leaking, of spreading, of becoming the version of you that other systems inherit. Your flaws don’t just live where they happened; they circulate. And once they do, they start to shape how the network as a whole decides whether to trust you.

The machine immune system in action

Individually, logs are just traces. Summaries are just shorthand. Sharing is just efficiency. But together, they start to behave like something else.

When patterns are pooled and reinforced across systems, they stop being isolated judgments and begin to act like collective reflexes. What one crawler has concluded about your site’s reliability can quietly shape how other crawlers treat you. What one assistant has flagged as inconsistent data becomes a caution others inherit. Over time, these aren’t just scattered memories; they’re shared responses.

That’s the moment the metaphor shifts.

Because what we’re describing looks less like bookkeeping and more like biology. An immune system doesn’t need perfect recall of every infection or injury. It doesn’t replay the blow-by-blow of each encounter. Instead, it compresses experience into signatures – antibodies – and carries them forward. The next time it encounters a threat, it doesn’t hesitate; it recognises, and it responds.

Machines are beginning to behave the same way.

  • A pattern of timeouts doesn’t just sit in a log; it becomes a reason to crawl you less often.
  • A series of checkout failures doesn’t simply vanish once fixed; it lingers as a reason not to recommend you.
  • A mismatch between your published hours and reality doesn’t just frustrate one user; it creates a systemic doubt about your trustworthiness.

What begins as logging and compression hardens into scar tissue. That scar tissue travels, circulates, and shapes future behaviour. And once it does, the whole ecosystem starts to defend itself – not out of malice, but as a form of optimisation.

This is what emerges when memories are pooled and reinforced: a machine immune system, quietly learning to protect itself from waste, failure, and mistrust.

The flywheel of forgetting

Immune systems don’t just respond; they remember. And that memory is sticky.

For humans, reputations fade. A bad review is eventually buried. A clumsy campaign is forgotten once the next news cycle rolls on. Machines don’t work that way. Once a judgment has been compressed into a summary and circulated, it becomes a durable part of the record.

That durability creates a trap.

  • Fix a slow site, and the raw logs might show you’ve improved – but if those logs have already been smoothed into a shorthand like “site is unreliable”, the shorthand survives.
  • Correct your schema or your store hours, but if assistants have already shared the judgment “data is inconsistent”, that label will continue to shape how they treat you.

And because fewer systems will bother to revisit you, your opportunities to generate fresh, positive signals shrink. You’re starved of the very evidence you’d need to overturn the old diagnosis.

This is the flywheel:

  • Weaknesses are recorded.
  • Records are compressed into durable summaries.
  • Summaries spread across systems.
  • Spreading reduces your chances of rewriting the story.
  • Reduced chances keep the old judgment alive.

It’s not malice. It’s mechanics. In a network optimised for efficiency and trust, bad memories are easier to keep than to re-evaluate.

The result is a form of structural stickiness: once you’ve been marked down, recovery isn’t just difficult – it’s asymmetrical. The effort required to dig yourself out is many times greater than the effort it took to fall in.
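
To illustrate that asymmetry, here’s a toy simulation – every number in it is an invented assumption, not a measurement of any real crawler or agent. The stored judgment controls how often you get revisited, and revisits are the only way to generate fresh evidence.

```python
# Toy model of the flywheel: the stored judgment ("reputation") controls how
# often the system bothers to revisit you, which in turn controls how quickly
# that judgment can change. Every number here is an illustrative assumption.
import random

random.seed(1)
LEARNING_RATE = 0.1  # assumed: how far one fresh observation moves the summary

def simulate(reputation, days, actually_reliable):
    """Advance the stored judgment over a number of days."""
    for _ in range(days):
        if random.random() > reputation:
            continue  # not revisited today: no chance to generate fresh evidence
        observation = 1.0 if actually_reliable else 0.0
        reputation += LEARNING_RATE * (observation - reputation)
    return reputation

rep = 0.9                                               # a healthy starting judgment
rep = simulate(rep, days=30, actually_reliable=False)   # a month of visible failures
print(f"after 30 bad days:  {rep:.2f}")
rep = simulate(rep, days=30, actually_reliable=True)    # the problems are now fixed
print(f"after 30 good days: {rep:.2f}")                 # recovery lags the decline
```

Falling is quick, because a healthy reputation gets checked constantly; climbing back is slow, because a damaged one barely gets checked at all.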

Marketing without misdirection

The tricks that once defined marketing are failing. For decades, you could plaster over weak products, fragile services, or clunky experiences with the right story. Humans could be persuaded. Machines cannot.

That doesn’t just make misdirection harder; it makes it irrelevant. In a machine-mediated ecosystem, every flaw leaves a trace, every failure persists, and every judgment spreads. The immune system doesn’t care what you meant to say. It only cares how you actually performed.

So what takes marketing’s place?

Let’s call it Agent Relations.

If the old discipline was about shaping human perception, the new one is about shaping machine memory. It means understanding how crawlers, recommenders, shopping bots, and language models record, compress, and share their experiences of you. It means designing products, pages, and processes that generate the right kinds of traces. It means maintaining the kind of technical integrity that resists being scarred in the first place.

That doesn’t sound like the marketing we’re used to. It sounds closer to operations, QA, or infrastructure. But in a landscape where machines are the gatekeepers of discovery and recommendation, this is marketing.

The story you tell still matters – but only if it survives contact with the evidence.

Living with machine immune systems

What we are building is bigger than search engines, shopping bots, or voice assistants. It’s an ecosystem that behaves like a body. Crawlers, recommenders, APIs, and models are its cells. Logs are its memories. Shared summaries are its antibodies. Scar tissue is its reputation.

And like any immune system, its priority isn’t your survival. It’s its own.

If the network decides you are a source of friction – too slow, too inconsistent, too misleading, too unreliable – it will defend itself the only way it knows how. It will avoid you. It will stop visiting your site, stop recommending your product, stop trusting your data. Not out of malice, but as a reflex.

For businesses, that means invisibility. For marketers, it means irrelevance.

The old reflex – to polish the story, distract the audience, misdirect their attention – has no traction here. Machines aren’t persuaded by narrative. They’re persuaded by experience.

That’s why the future of marketing isn’t storytelling at all. It’s engineering trust into the systems that machines depend on. It’s building processes, data, and experiences that resist scarring. It’s practising Agent Relations – ensuring that when machines remember you, what they remember is worth carrying forward.

Because in the age of machine immune systems, your brand isn’t what you say about yourself. It’s what survives in their memory.

If you want your blog to sell, stop selling

3 September 2025 at 08:26

Most brand blogs aren’t bad by accident. They’re bad by design.

You can see the assembly line from a mile away: build a keyword list, sort by volume and “difficulty”, pick the ‘best’ intersects on topics, write the blandest takes, wedge in a CTA, hit publish. Repeat until everyone involved can point at a dashboard and say, “Look, productivity”.

That’s industrialised mediocrity: a process optimised to churn out content that looks like content, without ever risking being interesting.

It isn’t just blogs. Knowledge bases, resource hubs, “insights” pages – all the same sausage machine. They’re the cautious cousins of the blog, stripped of even the pretence of perspective. They offer even less opinion, less differentiation, and less reason for anyone (human or machine) to care.

And it doesn’t work. It doesn’t win attention, it doesn’t earn trust, it doesn’t get remembered. It just adds to the sludge. And worse, all of it comes at a cost. Strategy sessions, planning decks, content calendars, review cycles, sign-offs. Designers polishing graphics nobody will notice. Developers pushing pages nobody will read. Whole teams locked into a treadmill that produces nothing memorable, nothing differentiating, nothing anyone wants to share.

Sure – if you churn out enough of it, you might edge out the competition. A post that’s one percent better than the other drivel might scrape a ranking, pick up a few clicks. Large sites and big brands might even ‘drive’ thousands of visits. And on paper, that looks like success.

But here’s the trap: none of those visitors care. They don’t trust you, they don’t remember you, they don’t come back. They certainly don’t convert. Which is why the business ends up confused and angry: “Why are conversion rates so low on this traffic when our landing pages convert at a hundred times the rate?” Because it was never real demand. It was never built on trust or preference. It was a trap, and it was obviously a trap.

And because even shallow wins look like progress, teams double down. They start measuring content the way they measure ads: by clicks, conversions, and cost-per-acquisition. But that’s how you end up mistaking systemisation for strategy.

Because ads and content are not the same thing. Ads are designed to compel an immediate action. Content can lead to action, but it does so indirectly – by building trust, by earning salience, by being the thing people return to in the messy, wibbly-wobbly bit where they don’t know what they don’t know.

And the more you try to make content behave like an ad, the worse it performs – as content and as advertising. You strip out the qualities that make it engaging, and you fail to generate the conversions you were chasing in the first place.

So if you want your blog to sell, you must stop making it behave like a sales page with paragraphs. Stop optimising for the micro-conversion you can attribute tomorrow, and start optimising for the salience, trust, and experiences that actually move the market over time.

Nobody is proud of this work

The writers know they’re producing beige, generic copy. It isn’t fun to research, it isn’t satisfying to write, and it isn’t something you’d ever share with a friend. It’s just filling slots in a calendar.

Managers and stakeholders know it too. They see the hours lost to keyword analysis, briefs, design assets, endless review cycles – and the output still lands with a thud.

The executives look at the system and conclude that “content doesn’t work.” Which only reinforces the problem. Content doesn’t get taken seriously, budgets get cut, and the teams producing it feel even less motivated.

Worse, they see it as expensive. Lots of salaries, lots of meetings, lots of activity – and little return. So the logic goes: why not mechanise it? Why not let ChatGPT churn out “articles” for a fraction of the cost, and fire the writers whose work doesn’t convert anyway?

And so the spiral deepens. Expensive mediocrity gives way to cheap mediocrity. Filler content floods in at scale. The bar drops further. And the chance of producing anything meaningful, opinionated, or differentiated recedes even further into the background.

And the readership? Humans don’t engage with it. They bounce. Or worse, they skim a paragraph, recognise the shallow, vapid tone, and walk away with a little less trust in the brand. Machines don’t engage either. Search engines, recommendation systems, and AI agents are built to surface authority and usefulness. Beige filler doesn’t register: at best it’s ignored; at worst, it drags the rest of your site down with it.

It’s a vicious circle. Content becomes a chore, not a craft. Nobody enjoys it, nobody champions it, nobody believes in it. And the people (and systems) it was meant to serve see it for what it is: mass-produced, risk-averse filler.

Why it persists anyway

If everyone hates the work and the results, why does the machine keep running?

Because it’s measurable.

Traffic numbers, click-through rates, assisted conversions – all of it shows up neatly on a dashboard. It creates the illusion of progress. And in organisations where budgets are defended with charts, that’s often enough.

So content gets judged against the same metrics as ads. If a landing page converts at 5%, then a blog post should surely convert at some fraction of that. If a campaign tracks cost-per-click, then surely content should too. This is how ad logic seeps into content strategy – until every blog post is treated like a sales unit with paragraphs wrapped around it.

The irony is that content’s real value is in the things you can’t attribute neatly: trust, salience, preference. But because those don’t plot cleanly on a graph, they’re sidelined. Dashboards win arguments, even if the numbers are meaningless.

And the blind spots are bigger than most teams admit. A 2% conversion rate gets celebrated as success, but nobody asks about the other 98%. Most of those experiences are probably neutral and inconsequential. But some are negative – and impactfully so. The impact of those negative experiences compounds; it shows up in missing citations, hostile mentions, being excluded from reviews, or simply never being recommended.

That’s survivable when you can keep throwing infinite traffic at the funnel. But in an agentic world, where systems like ChatGPT are effectively “one user” deciding what gets surfaced, you don’t get a hundred chances. You get one. Fail to be the most useful, the most credible, the most compelling, and you’re filtered out.

Mediocrity isn’t just wasteful anymore. It’s actively dangerous.

You can’t have it both ways

This is where the sales logic creeps in. Someone in the room says, “Why not both? Be useful and generate sales. Add a CTA. Drop a promo paragraph. Make sure the content calendar lines up neatly with our product areas.”

That’s the point where the whole thing collapses. Because the moment the content is forced to sell, it stops being useful. It can’t be unbiased while also promoting the thing you happen to sell. It can’t be trusted while also upselling. It becomes cautious, compromised, grey.

And here’s the deeper problem: authentic, opinionated content doesn’t start from sales. It starts from a perspective – an idea, an experience, a frustration, a contrarian take. That’s what makes it readable, citeable, and memorable.

This is why Google keeps hammering on about E-E-A-T. The extra “E” – Experience – is their way of forcing the issue: they don’t want generic words; they want a lived perspective. Something that proves a human was here, who knows what they’re talking about, and who’s prepared to stand behind it.

Try to wrangle an opinion piece into a sales pitch, and you break it. Readers feel the gearshift. The tone becomes disingenuous, the bias becomes obvious, and the trust evaporates.

Flip it around and it’s just as bad. Try to start from a product pitch and expand it into an “opinion” piece, and you end up with something even worse: content that pretends to be independent thought, but is transparently an ad in prose form. Nobody buys it.

And ghostwriting doesn’t solve the problem. Slapping the CEO’s name and face on a cautious, committee-written post doesn’t magically make it human. Readers can tell when there’s no lived experience, no vulnerability, no genuine opinion. It’s still filler – just with a mask.

And if your articles map one-to-one with your service pages, they’re not blog posts at all. They’re brochures with paragraphs. Nobody shares them. Nobody cites them. Nobody trusts them.

The definitive answer to “How will this generate sales?” is: Not directly. Not today, not on the page. Its job is to build trust, salience, and preference – so that sales happen later, elsewhere, because you mattered.

Try to make content carry the sales quota, and you ruin both.

What success really looks like

If conversion rates and click-throughs aren’t the point, what is?

Success isn’t a form fill. It isn’t a demo request or a sale. It isn’t a 2% conversion rate on a thousand blog visits.

Success looks like discovery and salience. It looks like being the brand whose explainer gets bookmarked in a WhatsApp group. The one whose guide is quietly passed around an internal Slack channel. The article that gets cited on Wikipedia, or linked by a journalist writing tomorrow’s feature.

Success looks like becoming part of the messy middle. When people loop endlessly through doubt, reassurance, comparison, and procrastination, your content is the thing they keep stumbling across. Not because you trapped them with a CTA, but because you helped them.

It looks like being the name an analyst drops into a report, the voice invited onto a podcast, or the perspective that gets picked up in an interview. It looks like turning up where people actually make up their minds, not just where they click.

These are the real signals of salience – harder to track, but far more powerful than a trickle of gated downloads.

And here’s the thing: none of it happens if your “content” is just brand-approved filler. People don’t remember “the brand blog” – they remember perspectives, stories, and ideas worth repeating.

That doesn’t mean corporate or anonymous content can never work. It can – but there’s no quicker signal that a piece is going to be generic and forgettable than when the author is listed as “Admin” or simply the company name. If nobody is willing to stand behind it, why should anyone bother to read it?

A blog post is only a blog post if it carries the authentic, interesting opinion of a person (or, perhaps, system). Known or unknown, polished or raw, human or synthetic, what matters is that there’s a voice, a perspective, and a point of view. Otherwise, your blog is just an article repository. And in a world already drowning in corporate sludge, that’s no moat at all.

That means putting people in the loop. Authors with a voice. People with experience, perspective, humour, or even the willingness to be disagreeable. Industrialised mediocrity is safe, scalable, and forgettable. Authored content is risky, personable, and memorable. And only one of those has a future.

“But our competitors don’t do this”

They don’t. And that’s the point.

Most big companies favour systemisation over strategy. They’d rather be trackable than meaningful. They’d rather be safe than useful. They’d rather produce cautious filler that nobody hates, than take the risk of publishing something that someone might actually love.

And the way they get there is identical. They employ the same junior analysts, point them at the same keyword tools, and ask them to churn out the same “content calendars” and to-do lists. The result is inevitable: the same banal articles, repeated across every brand in the category.

That’s why their blogs are indistinguishable. It’s why their “insights” hubs blur into one another. It’s why nobody can remember a single thing they’ve ever said.

If you copy them, you inherit their mediocrity. If you differentiate, you have a chance to matter.

Stop selling to sell

Buying journeys aren’t linear. People loop endlessly through doubt, reassurance, procrastination, and comparison. They don’t need traps; they need help. If your blog is engineered like an ad, it can’t be there for them in those loops.

The irony is that the most commercially valuable content is often the least “optimised” for conversions. The ungated how-to guide that answers the question directly. The explainer that solves a problem outright instead of hiding the answer behind a form. The resource that doesn’t generate a lead on the page, but earns a hundred links, a thousand citations, and a permanent place in the conversation.

That’s what salience looks like. You see it in journalists’ citations, in podcast invitations, in analysts’ reports. Those are measurable signals, just not the ones dashboards were built for. They’re the breadcrumbs of authority and trust – the things that compound into sales over time.

And this isn’t just about blogs. The same applies to your “insights” hub, your knowledge base, your whitepapers. If it’s industrialised mediocrity, it won’t matter. If it’s authored, opinionated, and differentiated, it can.

So stop trying to make every page a conversion engine. Accept that ads and content are different things. Be useful, be generous, be memorable. The sales will follow – not because you forced them, but because you earned them.

Does this post stand up to scrutiny?

I see the irony. You’re reading this on a site with a sidebar and footer, trying to sell you my consultancy. Guilty as charged. But the advert is over there, being an advert. This post is over here, being a post. The problem isn’t advertising. The problem is when you blur the two, pretend your brochure is a blog, and end up with neither: not a real advert, not a real blog – just another forgettable blur in the sludge.

Maybe you’re wondering whether this post lives up to its own argument. Does it have a voice? Does it show experience, perspective, and opinion – or is it just another cleverly-worded filler piece designed to prop up a consultancy?

That’s exactly the tension. Authentic, opinionated writing is hard. It takes time, craft, vulnerability, and the risk of saying something someone might disagree with. It’s much easier to churn out safe words and tick the boxes.

And yes, here’s another irony: does it matter that I used ChatGPT to shortcut some of the labour-intensive parts of the writing and editing process? I don’t think so. Because what matters is that there’s still a human voice, perspective, and experience at the heart of it. The machine helped with polish; it didn’t supply the worldview.

That’s the line. Tools can support. Even systemisation can support. They can speed up editing, remove friction, and help distribute the work. But they can’t replace lived experience, a contrarian stance, or the willingness to risk saying something in your own voice. Strip those away, and you’re back in the land of industrialised mediocrity.

On propaganda, perception, and reputation hacking

14 August 2025 at 17:31

For the last two decades, SEO has been a battle for position. In the age of agentic AI, it becomes a battle for perception.

When an LLM – or whatever powers your future search interface – decides “who” is trustworthy, useful, or relevant, it isn’t weighing an objective truth. It’s synthesising a reality from fragments of information, patterns in human behaviour, and historical residue. Once the model holds a view, it tends to repeat and reinforce it.

That’s propaganda – and the challenge is ensuring the reality the machine constructs reflects you at your best.

Two ideas help navigate this.

  • Perception Engineering is the long game: shaping what machines “know” over time by influencing the enduring sources and narratives they ingest.
  • Reputation Hacking is the nimble, situational work of influencing or correcting narratives in the moment.

Both are forms of propaganda in the machine age – not the crude, deceptive kind, but the careful, factual shaping of how your story is told and retold.

And both matter – because the future of discovery is dynamic and adaptive, but the raw material is often sluggish. And that persistence – what sticks, what lingers, what gets repeated – is where most of the opportunity (and risk) lives.

This brings us to the core difference between human and machine narratives: how they remember. In this game, memory isn’t a passive archive – it’s an active filter, deciding what survives and how it’s retold. Get into the memory for the right reasons, and you can ride the benefits for years; get in for the wrong ones, and the shadow can be just as long.

The perpetual memory of machines

Humans forget. Machines don’t – at least, not in the same way.

When we forget, the edges blur. Details fade, timelines collapse, and the story becomes softer with distance. Machines, by contrast, don’t lose the data; they distil it. Over time, sprawling narratives are boiled down into their most distinctive fragments. That’s why brand histories are rarely undone by a single correction: they’re retold and re-framed until only the high-contrast bits remain.

Models are especially good at this kind of distillation – the scandal becomes the headline; the resolution is relegated to a footnote. In human propaganda, repetition does the work; in machine propaganda, compression and persistence do.

And because the compressed version often becomes the only version a machine recalls at speed, understanding how that memory is formed is crucial.

Two kinds of memory matter here:

  • Training memory: whatever was in the data during the last snapshot. If it was high-profile, repeated, or cited, it picks up “institutional gravity” and is hard to dislodge.
  • Retrieval memory: whatever your agent fetches at runtime – news, documents, databases – and the guardrails that steer how it’s used.

Time decay helps, sometimes. Many systems down-weight stale material so answers feel current. But it’s imperfect. High-visibility events keep their gravity, and low-visibility corrections don’t always get the same reach.
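
As a minimal sketch of why it’s imperfect: here’s one plausible way a retrieval layer might combine relevance, visibility, and freshness. The formula, the half-life, and every number are assumptions made up for the example, not any specific system’s ranking.

```python
# Illustrative only: one plausible way a retrieval layer might trade off
# relevance, visibility, and freshness. The formula and every number here are
# assumptions for the sake of the example, not a real system's scoring.
import math

HALF_LIFE_DAYS = 1095  # assumed: freshness halves every ~3 years

def retrieval_score(relevance, citations, age_days):
    """Combine topical relevance, citation visibility, and time decay."""
    freshness = math.exp(-math.log(2) * age_days / HALF_LIFE_DAYS)
    visibility = math.log1p(citations)
    return relevance * visibility * freshness

old_story = retrieval_score(relevance=0.9, citations=500, age_days=730)   # the scandal
correction = retrieval_score(relevance=0.9, citations=12, age_days=30)    # the fix

print(f"old, widely cited story: {old_story:.2f}")
print(f"recent, barely cited correction: {correction:.2f}")
```

With these invented numbers, the decay helps the recent correction – but it doesn’t overcome the old story’s citation gravity.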

There’s also the “lingering association” problem: co-reference (“old name a.k.a. new name”) or categorical links (“company X, part of scandal Y”) keep the old framing alive in perpetuity. In human terms, it’s like being introduced at a party with a two-year-old anecdote you’d rather forget.

The point isn’t that machines never forget – it’s that they forget selectively, in ways that don’t automatically favour the most recent or most accurate version of your story.

Positive PR as a self-fulfilling loop

If memory can haunt, it can also help.

Language travels. In the best kind of propaganda, it’s the flattering, accurate turn of phrase that does the rounds. When a respected outlet coins one, it doesn’t stay put.

It turns up in analyst notes, conference decks, product reviews, and investor briefings. The repetition turns it into a linguistic anchor – the default way to describe you, even for people who’ve never read the original.

Behaviour travels too. If people expect you to be good, they act accordingly: they search for you by name, click you first, stick around longer, and talk about you in more positive terms. None of that proves you’re the best, but it creates data patterns that make you look like the best to systems that learn from aggregate behaviour.

The loop is subtle: positive framing → positive behaviour → positive framing. It’s not instant, but once established, it can be self-reinforcing for years.

In this context, Perception Engineering is about identifying the phrases, framings, and narratives you’d want to see repeated indefinitely – and ensuring they originate in credible, durable sources. Reputation Hacking, on the other hand, is about spotting those moments in the wild – a conference panel soundbite, a glowing product comparison – and nudging them into places where they’ll be picked up, cited, and echoed.

The trick isn’t to plant advertising copy in disguise; it’s to seed clear, accurate, and repeatable language that works for you when it’s stripped of context and paraphrased by a machine.

The weaponisation of perception

Any system that can be shaped can be distorted. And in an environment where narrative persistence is the real prize, some will try.

Defensive propaganda starts with recognising the quiet ways bias enters the record: selective data, tendentious summaries, strategic omissions. These aren’t always illegal. They’re rarely obvious. But once embedded – especially in formats with long shelf lives – they can tilt the machine’s memory for years.

Weaponisation doesn’t have to look like a smear campaign. It can be as subtle as redefining a term in a trade publication, repeatedly pairing a competitor’s name with an unflattering comparison, or supplying an “expert quote” that’s technically accurate but engineered to leave the wrong impression. Even the order of information can create a lasting skew.

The danger isn’t only in outright falsehoods. Once a distortion is repeated and cited, it becomes part of the machine’s “truth set” – and because models reconcile contradictions into one coherent narrative, the detail they keep is often the one with the sharpest edge, not the one that’s most correct.

The countermeasure is simple, if not easy: make the accurate version so abundant, consistent, and easy to cite that it outweighs the distortion. If there’s going to be a gravitational centre, you want it to be yours.

We’ve seen shades of this in human media ecosystems for decades:

  • A decades-old product recall still mentioned in “history” sections long after the issue was resolved.
  • Industry rankings where the methodology favours one business model over another, subtly reshaping market perception.
  • Persistent category definitions that exclude certain players altogether, not because they’re irrelevant, but because the earliest, most visible framing said so.

Pretending this doesn’t happen is naïve. Copying it is reckless. The more sensible response is to raise the signal-to-noise ratio in your favour – to counter bad propaganda with better propaganda: a clear, consistent truth that’s abundant, easy to cite, and hard to compress into anything less flattering.

The collapse of neutral search

Neutrality is a story we tell ourselves.

Agents don’t simply “retrieve facts”. They synthesise from priors, recency, safety layers, and whatever they can fetch. Even when they hedge (“some say”), they still decide which “some” count – and that decision shapes the story.

In the blue-link era, we optimised for ranking. In the agent era, we optimise for narrative selection: the frames, sources, and categories that get picked when the machine tells the story of your topic. This is exactly where perception engineering and reputation hacking collide: you can’t guarantee the story will be neutral, but you can influence which stories and definitions the machine has to choose from.

Once a framing is dominant, it creates a gravitational field. Competing narratives struggle to break in, because the model is optimising for coherence as much as correctness. That’s why the first widely cited definition of a category, or the earliest comprehensive guide to a topic, often becomes the anchor – whether or not it’s perfect. Every subsequent mention is then interpreted, consciously or not, through that lens.

The real collapse of neutrality isn’t bias in the political sense. It’s that “the truth” is increasingly whatever the machine can construct most coherently from the material at hand. And coherence rewards whoever got there first, spoke the clearest, or was repeated most often.

Which means if you don’t help define your category – its language, its exemplars, its boundaries – the machine will do it for you, using whatever scraps it can find. Perception engineering ensures those scraps are yours; reputation hacking helps you insert them quickly when the window is open.

Recalibrating the marketing stack

To be successful, you must treat the machine’s worldview as a product you can influence – and as an ongoing propaganda campaign you’re running in plain sight – with editorial standards, governance, and measurement.

That means that you need:

  • Governance: someone owns the brand’s “public record”. Not just the site, but the wider corpus that describes you.
  • Observation: regular belief-testing. Ask top agents the awkward questions you fear customers are asking. Record the answers. Track drift.
  • Editorial: create “sources of record” – durable, citable material that others use to explain you.
  • Change management: when reality changes (new product, leadership, policy), plan the narrative update as a programme, not a press release.
  • Crisis hygiene: have a playbook for fast corrections, long-lived clarifications, and calm follow-ups that age well.

This isn’t new work so much as joined-up work. PR, content, SEO, legal, product. Same orchestra, new conductor.

From ideas to action

The principles we’ve covered – perception engineering and reputation hacking – aren’t abstract labels. They’re two complementary operating modes that inform everything from your editorial process to your crisis comms. Perception engineering sets the long-term gravitational field; reputation hacking is the course correction when reality or risk intrudes.

In practice, they draw from the same toolkit – research, content, partnerships, corrections – but the sequencing, pace, and priority are different. Perception engineering is slow-burning and accumulative; reputation hacking is urgent and surgical.

What follows isn’t “SEO tips” or “PR tricks” – it’s the operationalisation of those two modes. Think of it as building a persistent advantage in the machine’s memory while keeping the agility to steer it when you need to.

Practical applications

The battle for perception isn’t won in the heat of a campaign. It’s won in the quiet, unglamorous maintenance of the record the machine depends on. If its “memory” is the raw material, then perception engineering and reputation hacking are the craft – the fieldwork that keeps that raw material current, coherent, and aligned with your preferred story.

What follows isn’t theory. It’s the operational layer: the things you can do – quietly, methodically – to ensure that when the machine tells your story, it’s working from the version you’d want repeated.

Perception Engineering (proactive)

Proactive work is the compound-interest version: it’s slower to show results, but once set, it’s hard to dislodge. This is where you lay down the durable truths, the assets and anchors that will be repeated for years without you having to touch them.

  • Audit the deep web of your brand: Not just your own site, but press releases, partner microsites, supplier portals, open-license repositories, and archived PDFs. Look for outdated product names, superseded logos, retired imagery, and even mismatched colour palettes. Machines will happily pull any of it into their summaries.
  • Maintain staff and leadership profiles: Your own team pages, but also speaker bios on conference sites, partner directories, media appearances, and LinkedIn. An ex-employee still billed as “Head of Innovation” on a high-ranking event page can haunt search summaries for years.
  • Keep organisational clarity: Align public org charts, leadership listings, and governance descriptions across your site, LinkedIn, investor relations, and third-party listings. A machine that sees three different hierarchies will assume the one with the most citations is the “truth” – and it might not be the one you prefer.
  • Refresh high-authority, long-life assets: Identify the logos, diagrams, and “about” text most often re-used by journalists, analysts, and partners. Replace outdated versions in all the places people (and scrapers) are likely to fetch them.
  • Define your narrative anchors: Pick the ideas, phrases, and category definitions you’d like attached to your name for the next five years. Name them well, explain them clearly, and seed them in durable sources – encyclopaedic entries, standards bodies, academic syllabi – not just transient campaign pages.

Perception Engineering (reactive)

Reactive work is about patching holes in the hull before the leak becomes the story. It’s faster, more visible, and sometimes more expensive, because you’re competing with whatever’s already in circulation. The goal isn’t just to fix the record – it’s to do so in a way that ages well and doesn’t keep re-surfacing the old problem.

  • Update the record before the campaign: When something changes – product launch, rebrand, leadership shift – make sure the long-lived references get updated first (Wikipedia, investor materials, industry directories). Campaign assets come second.
  • Clean up legacy debris: Retire or redirect old content that keeps the wrong story alive. Where removal isn’t possible, add clarifying updates so the old version isn’t the only one available to be quoted.

Reputation hacking (proactive)

This is the “social engineering” of credibility – done ethically. You’re placing the right facts and framings in the high-gravity sources that machines and people alike draw from. Done consistently, it builds a kind of reputational armour.

  • Track the gravitational sources: Identify the handful of third-party sites, writers, or communities that punch above their weight in your category. Maintain an accurate, consistent presence there.
  • Synchronise your language: Ensure spokespeople, PR, product, and content teams are describing the brand in the same terms, so repetition works in your favour – and machines see one coherent narrative, not a jumble of similar-but-different descriptors.

Reputation hacking (reactive)

This is triage. You can’t always prevent distortions, but you can choose where and how to counter them so the fix lives longer than the fault. It’s also where the temptation to over-correct can backfire; you want a clean resolution, not an endless duel that keeps the bad version alive.

  • Respond where it will linger: When a skewed narrative surfaces, publish the correction or context in the source most likely to be cited next year – not just the one trending today.
  • Offer clarifications that age well: Use timelines, primary data, and named accountability rather than ephemeral rebuttals. Once that’s in the record, resist the temptation to keep stoking the conversation – you want the durable correction, not the endless back-and-forth.

Where to start

The fastest way to see how the machine sees you is to ask it. Pick three or four leading AI search tools and prompt them the way a customer, investor, or journalist might. Don’t just check the facts – listen for tone, framing, and what gets left out.

Then work backwards: which pieces of the public record are feeding those answers? Which of them could you update, clarify, or strengthen today? You don’t have to rewrite your whole history at once. Just start with the handful of durable, high-visibility assets that most shape the summaries – because those will be the roots every new narrative grows from.

Closing the loop

In the old search era, the prize was the click. In the agent era, the prize is the story – and once a version of that story lodges in the machine’s memory, it calcifies. You can chip at it, polish it, add new chapters… but moving the core narrative takes years.

Propaganda, perception engineering, reputation hacking – call it what you like. The point is the same: you’re no longer just marketing to people; you’re marketing to the machines that will introduce you to them.

Ignore that, and you’re effectively letting someone else write your opening paragraph – the one the machine will read aloud forever. Play it well, and your version becomes the one every other retelling has to work to dislodge.

There’s no such thing as a backlink

12 August 2025 at 08:30

A link is not a “thing” you can own. It’s not a point in a spreadsheet, or a static object you can collect and trade like a baseball card.

And it’s certainly not a “one-way vote of confidence” in your favour.

A link is a relationship.

Every link connects two contexts: a source and a destination. That connection exists only in the relationship between those two points – inheriting its meaning, relevance, and trust from both ends at once. Break either end, and the link collapses. Change either end, and its meaning shifts.

If you want to understand how search engines, LLMs, and AI agents perceive and traverse the web, you have to start from this idea: links are not things. They are edges in a graph of meaning and trust.

Why “backlink” is a problem

That’s why “backlink” is such a loaded, dangerous word.

The moment you call it a backlink, you flatten the concept into something purely about you. You stop thinking about the source. You stop thinking about why the link exists. You strip away its context, its purpose, its role in the broader ecosystem.

And what’s on the other side – is that a “forward link”? Of course not. We’d never use that phrase because it’s absurd. Yet “backlink” has been normalised to the point where we’ve trained ourselves to see only one direction: inbound to us.

This isn’t harmless shorthand. It’s an active simplification – a way of collapsing something messy and multi-dimensional into a clean, one-directional metric that fits neatly in a monthly report.

Flattening complexity for convenience

The real problem with “backlink” isn’t just that it’s inaccurate – it’s that it’s convenient.

Modelling, tracking, and valuing the true nature of a link – as a relationship between two entities, grounded in trust, context, and purpose – is complicated. It’s hard to scale. It doesn’t always fit neatly in a dashboard.

Flatten it into “backlink count,” and suddenly you have a number. You can set a target, buy some, watch the line go up. It doesn’t matter if the links are contextless, untrusted, or fragile – the KPI looks good.

That’s why so many bought links don’t move the needle. They’re designed to satisfy the simplified model, not the underlying reality. You’re optimising for the report, not the algorithm.

The industry’s other convenient fictions

This isn’t just about “backlinks” or “link counts.” The link economy thrives on invented terminology because it turns the intangible into something tradable:

  • “Journalist links” aren’t a distinct species of link. They’re just… links. Links from journalists, sure, but still subject to the same rules of trust, context, and relevance as everything else. Calling them “journalist links” lets agencies sell them as a premium product, implying some magic dust that doesn’t exist.
  • “Niche edits” is a euphemism for “retroactively inserting a link into an existing page.” In reality, the practice often creates a weaker connection than the original content warranted, and risks breaking the source’s context entirely. But “niche edit” sounds tidy, productised, and easy to buy.
  • “DoFollow links” don’t exist. Links are followable by default, and even nofollow is more of a hint than a block. The term was invented to make the normal behaviour of the web sound like a special feature you can pay for.

There are dozens of these terms, all designed to artificially flatten and simplify, in a way which is deeply harmful.

And then there’s “link building”

“Link building” might be the most damaging term of all.

It makes the whole process sound mechanical. Industrial. Like you’re stacking identical units until you hit quota. The phrase itself erases the reality that the value of a link is inseparable from why it exists, who created it, and whether trust flows through it.

Yes, you can “build” a collection of links. You can even hit your targets. But if those links aren’t grounded in trust, context, and mutual relevance, you haven’t built anything with lasting value. You’ve just arranged numbers in a report.

Real links – the kind that carry authority, relevance, and resilience over time – aren’t built. They’re earned. They emerge from relationships, collaboration, and shared purpose.

The web is not a ledger

The mental model that search engines are just “counting backlinks” is hopelessly outdated.

The web is not a static ledger of inbound links. It’s a living, constantly shifting graph of relationships – semantic, topical, and human.

For a search engine, a link is one of many signals. It inherits meaning from:

  • The page it’s on – its quality, trustworthiness, and topic.
  • The words around it – anchor text, surrounding copy, and implicit associations.
  • The nature of the source – how it connects to other sites and pages, its history, and its place in the graph.
  • The wider topology – how that connection interacts with other connections in the ecosystem.

This is the reality that “backlink count” and “link building” both paper over – the algorithm is modelling relationships of trust, not transactions.

Search engines, LLMs, and agents don’t care about “backlinks”

Here’s the crucial shift: the future of discovery won’t be “ten blue links” driven by link-counted rankings.

LLMs and AI agents don’t think in backlinks at all. They parse the web as a network of entities, concepts, and connections. They care about how nodes in that network relate to each other – how trust, authority, and relevance propagate along the edges.

Yes, they may still evaluate links (directly, or indirectly). In fact, links can be a useful grounding signal: a way of connecting claims to sources, validating relationships, and reinforcing topical associations. But those links are never considered in isolation. They’re evaluated alongside everything else – content quality, author credibility, entity relationships, usage data, and more.

That makes artificially “built” links stand out. Contextless, untrusted, or irrelevant links are easy to spot against the backdrop of a richer, more integrated model of the web. And easy to ignore.

In that world, a “backlink” as the SEO industry defines it – a one-way token of PageRank – is almost meaningless. What matters is the relationship: why the link exists, what it connects, what concepts it reinforces, and how it integrates into the larger graph.

Why the language persists

The reason we still say “backlink” and “link building” isn’t because they’re the best descriptors. It’s because they’re useful – for someone else.

Vendors, brokers, and marketplaces love these terms. They make something messy, relational, and human sound like a measurable commodity. That makes it easier to sell, easier to buy, and easier to report on.

If you frame links as “relationships” instead, you make the job harder – and you make the value harder to commoditise. Which is precisely why the industry’s resale economy prefers the simpler fiction.

Optimising for the wrong web

If your mental model is still “get more backlinks” or “build more links,” you’re optimising for the wrong web.

The one we’re already in doesn’t reward accumulation – it rewards integration. It rewards being part of a meaningful network of relevant, trusted, and semantically connected entities.

That means:

  • Stop chasing raw counts.
  • Stop buying neat-sounding products that exist to make reporting easy.
  • Start building relationships that make sense in context.
  • Think about how your site fits into the broader topical and semantic ecosystem.
  • Design links so they deserve to exist, and make sense from both ends.

The takeaway

There is no such thing as a backlink.

There is no such thing as “DoFollow links,” “journalist links,” or “niche edits.”

And if “link building” is your strategy, you’re already thinking in the wrong dimension.

There are only relationships – some of which happen to be expressed through HTML <a> elements.

If you want to thrive in a search environment increasingly shaped by AI, entity graphs, and trust networks, stop flattening complexity and start earning your place in the web’s map of meaning.

Stop chasing backlinks. Stop buying fictions.

Start building relationships worth mapping.

Standing still is falling behind

11 August 2025 at 13:37

“Our traffic’s down, but nothing’s changed on our website.”

This is one of the most common refrains in digital marketing. The assumption is that stability is safe; that if you’ve left your site alone, you’ve insulated yourself from volatility.

But the internet isn’t a museum. It’s a coral reef – a living ecosystem in constant flux. Currents shift. New species arrive. Old ones die. Storms tear chunks away. You can sit perfectly still and still be swept miles off course.

In this environment, “nothing changed” isn’t a defence. It’s an admission of neglect.

The myth of stability

When you measure performance purely against your own activity, it’s easy to believe that you exist in a stable vacuum. That your rankings, your traffic, and your conversion rate are a sort of natural equilibrium.

They’re not.

What you’re looking at is the current balance of power in a chaotic network of content, commerce, and culture. That balance shifts every second. Even if you do nothing, the environment around you is mutating – algorithms are recalibrating, competitors are making moves, new pages are earning links, and public attention is being diverted elsewhere.

The obvious changes

Some of the forces reshaping your position are easy to spot:

  • Competitors launching aggressive sales or product releases.
  • A rival migrating their site, and creating a temporary rankings gap.
  • Search trends shifting as customer needs evolve.

These are the obvious changes. You can see them coming, at least if you’re paying attention.

But often, the biggest hits to your performance come from events so far outside your immediate view that you don’t even think to look for them.

The invisible shifts

The web’s link graph, attention economy, and user behaviour patterns are constantly being reshaped by events you’d never imagine could affect you. Here are just a few ways your numbers can move without you touching a thing.

1. Wikipedia editing sprees

A niche documentary airs on TV, and suddenly thousands of people are editing related Wikipedia articles. Those pages rise in prominence, gain links, and reshape the web’s internal authority flow. Your carefully nurtured evergreen content in that space loses a few points of link equity, and rankings slip.

2. Celebrity deaths

A public figure dies. News sites, fan pages, and archives flood the web. Search demand spikes for their work, quotes, and history. For weeks, this attention warps the SERPs, pushes down unrelated content, and changes linking patterns.

3. Seasonal cultural juggernauts

By mid-October, Michael Bublé and Mariah Carey are already thawing out for Christmas, and seasonal content starts hoovering up clicks, ad inventory, and search attention. Your evergreen “winter wellness” content is suddenly in a knife fight with mince pie recipes and gift guides.

4. Platform and policy changes

Reddit tweaks its API pricing. Popular third-party apps die. Browsing habits change overnight. Millions of users are now encountering, sharing, and linking to content differently. Your steady “traffic from Reddit” graph turns into a cliff.

5. Macro news events

The Suez Canal gets blocked by a container ship. Suddenly, every global shipping blog post from 2017 is back on page one, displacing your carefully optimised supply chain guide.

6. Retail collapses

A high-street chain goes bankrupt. Hundreds of high-authority product and category pages vanish. The link equity they were holding gets redistributed across the web, reshaping rankings even in unrelated verticals.

7. Weird pop culture blips

A Netflix series resurrects a 20-year-old cake recipe. Overnight, tens of thousands search for it. If it’s on your food blog (and easy to find) you ride the wave. If it’s buried on page six of your “Other Baking Ideas” tag archive, or hidden behind a bloated recipe plugin, you don’t even get a crumb.

8. Major sporting events

The Olympics, the World Cup, the Super Bowl – these pull public attention, time, and disposable income into one giant funnel. For weeks, people spend differently, travel differently, and think about entirely different things. You can lose traffic and sales even if your market is nowhere near sports.

9. Political and economic ripples

Political tensions disrupt the supply of rare metals. Prices rise. Manufacturers delay or cancel product launches. Consumer tech coverage dries up. Search interest shifts to alternatives. Somewhere down the chain, your site, which sells something only vaguely connected, sees fewer visits and lower conversions, for reasons you’ll never see in Google Search Console.

How these ripples spread

These events change the digital landscape through a few predictable, but largely invisible, mechanisms:

  • Link graph redistribution – When big, authoritative pages gain or lose prominence, the “trust” and equity they pass shifts across the web.
  • SERP reshuffles – New, high-interest content pushes existing results down, sometimes permanently.
  • Attention cannibalisation – Cultural moments draw clicks and ad spend away from unrelated topics.
  • Behavioural shifts – Users change how they search, where they click, and what they expect to see.

You might never connect these cause-and-effect chains directly, but the effects are real. And they’re happening all the time.

Why ‘nothing changed’ is dangerous

Digital performance is a zero-sum game. Rankings, visibility, and attention are finite. When the environment changes, some people win and others lose.

If you’re standing still while everyone else adapts – or while macro events tilt the playing field – you’re not holding position. You’re drifting backwards. And the longer you stand still, the more ground you lose.

What to do instead

You can’t stop the reef from shifting. But you can make sure you’re swimming with it. That means adopting a mindset and an operating rhythm that treats change as the default state.

  • Monitor markets
    Not just your own, but the cultural, economic, and technological currents that shape your audience’s world. Look for leading indicators – industry chatter, policy debates, seasonal mood shifts.
  • Continually evolve, innovate, and adapt
    Change is oxygen; without it, your strategy asphyxiates. Tweak, test, and adjust regularly – even when things feel “fine.”
  • Remember that nothing is sacred
    No page, product, or process is untouchable. If it’s not delivering value in the current environment, change it.
  • Treat nothing as finished
    Your content, your UX, your strategy – they’re all drafts. There is no final version.
  • Improve 100 small things in 100 small ways every day
    Compounding micro-improvements beat sporadic overhauls. Small gains stack over time. Don’t ever stop and wait 6 months for the site redesign project you’ve been promised (because it’ll almost certainly take 18 months).

The web won’t wait for you

Your website doesn’t live in isolation. It’s part of a sprawling, shifting network of pages, links, and human behaviour. Events you’ll never see coming will keep tilting the playing field.

If your digital strategy is ‘nothing’s changed’, you’re not monitoring the map – you’re standing still while the land beneath you sinks into the ocean.

Change is the baseline. Adapting to it is the job.

Shaping visibility in a multilingual internet

8 August 2025 at 20:05

Everyone thinks they understand localisation.

Translate your content. Add hreflang tags to your pages. Target some regional keywords. Job done.

But that isn’t localisation. That’s sales enablement, with a dash of technical SEO.

Meanwhile, the systems that decide whether you’re found, trusted, and recommended – Google’s algorithms, large language models, social platforms, knowledge graphs – are being shaped by content, conversations, and behaviours happening in languages you’ll never read, in markets you’ll never serve.

And most brands aren’t even aware of it.

The old approach to localisation assumes neat boundaries. You sell in a country, so you translate your site. You want to rank there, so you create localised content and generate local coverage.

It’s tidy, measurable, and built on the comforting idea that you only need to care about the markets you serve.

But the internet doesn’t work like that anymore.

Content leaks. People share. Platforms aggregate. Machines consume indiscriminately.

A blog post in Polish might feed into a model’s understanding of a concept you care about. That model might use that understanding when generating an English-language answer for your audience.

A Japanese forum thread might mention your product, boosting its perceived authority in Germany.

A Spanish-language review site might copy chunks of your English product description into a page that ranks in Mexico, creating a citation network you didn’t build and can’t see.

Even if you’ve never touched those markets, they can (and do) influence how you’re found where you do compete.

And here’s the kicker: if influence can flow in from those markets, it can flow out of them too.

That means you can – and in some cases should – actively create influence in markets you’ll never sell to. Not to acquire customers, but to influence the systems. A strong citation in Turkish, a few strategic mentions in Portuguese, or a cluster of references in Korean might shape how a language model or search engine understands your brand in English.

Yes, some of this looks like old-fashioned international PR. The difference is that we’re not optimising for direct human response. We’re optimising for how machines ingest and interpret those signals. That shift changes the “where” and “why” of the work entirely.

You’re not just trying to be visible in more places. You’re trying to be influential where the influence flows from.

Search isn’t local, and machines aren’t either

The classic SEO playbook felt localised because search engines presented themselves that way. You had a .com for the US, a .de for Germany, a .es for Spain, and Google politely asked which version of your content belonged in which market.

But that neatness was always a façade. The index has always been porous, and now, with language models increasingly integrated into how content is ranked, recommended, and summarised, the boundaries have all but collapsed.

Large language models don’t care about your ccTLD strategy.

They’re trained on vast multilingual datasets. They don’t just learn from English – they absorb patterns, associations, and relationships across every language they can get their hands on.

But that absorption is messy. The training corpus is uneven. Some languages are well-represented; others are fragmented, biased, or dominated by spam and low-quality translations.

That means the model’s understanding of your brand – your products, your reputation, your expertise – might be shaped by poor-quality data in languages you’ve never published in. A scraped product description in Romanian. A mistranslation in Korean. A third-party reference in Turkish that subtly misrepresents what you do.

Worse, models interpolate. If there’s limited information in one language, they fill in the blanks using content from others. Your reputation in English becomes the proxy for how you’re understood in Portuguese. A technical blog post in German might colour how your brand is interpreted in a French answer, even if the original wasn’t about you at all.

You don’t get to decide which pieces get surfaced, or combined, or misunderstood. If you’re not present in the corpus – or if you’re present in low-quality ways – you’re vulnerable to being misrepresented; not just in language, but in meaning.

And while we can’t yet produce a neat chart showing “X citations in Portuguese equals Y uplift in US search”, we can point to decades of evidence that authority, entity associations, and knowledge-graph inputs cross linguistic and geographic boundaries.

Absence is its own liability

And here’s the uncomfortable bit: not being present doesn’t make you safe. It makes you vulnerable.

If your brand has no footprint in a market – no content, no reputation, no signals – that doesn’t mean machines ignore you. It means they guess.

They extrapolate from what exists elsewhere. They fill in the blanks. They make assumptions based on similar-sounding companies, related products, or low-context mentions from other parts of the web.

A brand that does have localised content – even if it’s thin or mediocre – might be treated as more trustworthy or relevant by default. A poorly translated competitor page might become the canonical representation of your product category. A speculative blog post might be treated as the truth.

You don’t have to be a multinational to be affected by this. If your products get reviewed on Amazon in another country, or your services get mentioned in a travel blog, you’re already part of the multilingual ecosystem. The question is whether you want to shape that, or to leave it to chance.

This is the dark side of “AI-powered” summarisation and assistance: it doesn’t know what it doesn’t know, and if you’re not present in a given locale, it will invent or import context from somewhere else.

Sometimes, the most damaging thing you can do is nothing.

What to do about it (and what that doesn’t mean)

This isn’t a call to translate your entire site into 46 languages, or to buy every ccTLD, or to build a localised blog for every market you’ve never entered.

But it is a call to be deliberate.

If your brand is already being interpreted, categorised, and described across languages – by people and machines alike – then your job is to start shaping that system.

This isn’t about expanding your market footprint. It’s about shaping the environment that the machines are learning from.

In some cases, that means being deliberately present in languages, regions, or contexts that will never become customers; but which do feed into the training data, ranking systems, and reputational scaffolding that determine your visibility where it matters.

You’re not building local authority. You’re influencing global interpretation. Here’s how.

🎯 Identify and shape your multilingual footprint

  • Spot opportunities where your brand is being talked about – but not by you – and consider replacing, reframing, or reinforcing those narratives.
  • Audit where your brand is already being mentioned, cited, or discussed across different languages and countries.
  • Prune low-quality, duplicate, or mistranslated content if it’s polluting the ecosystem (especially if it’s scraped or machine-translated).

🎙️ Influence the sources that matter to the machines

  • Focus less on user acquisition and more on shaping the ambient data that teaches the system what your brand is.
  • Find the publications, journalists, influencers, and platforms in non-English markets that LLMs and search engines are likely to trust and ingest.
  • Get your CEO interviewed on a relevant industry podcast in Dutch. Sponsor an academic paper in Portuguese. Show up where the training data lives.

🧱 Create multilingual anchor points – intentionally

  • Place these strategically in markets or languages where influence is leaking in, or where hallucinations are most likely to occur.
  • You don’t need full localisation. Sometimes, a single “About Us” page, or a translated version of your flagship research piece, is enough.
  • Make sure it’s accurate, high-quality, and clearly associated with your brand, so it becomes a source, not just an artefact.

🌐 Target visibility equity – not market share

  • Create resources in other languages not for search traffic, but for the reputational halo – in both search and LLMs.
  • Think of earned media and coverage as multilingual influence building. You’re not just trying to rank in Italy – you’re trying to be known in Italian.
  • Choose where you want to earn mentions and citations based on where those signals might shape visibility elsewhere.

🔍 Understand behavioural and linguistic variance – then act accordingly

Not all searchers behave the same way. Not all languages structure ideas the same way.

If you’re creating content, earning coverage, or trying to generate signals in a new market, you can’t just translate your existing strategy, because:

  • Searcher behaviour varies: some markets prefer long-tail informational queries; others are more transactional or brand-led.
  • Colloquialisms and structures vary: the way people express a need in Spanish isn’t a word-for-word translation of how they’d do it in English.
  • Cultural norms differ: what earns attention in one region might fall flat (or backfire) in another.
  • Buying behaviour varies: local trust factors, pricing sensitivity, and even UX expectations can impact the credibility of your content or product.

Sometimes, the best tactic isn’t to translate your English landing page into Dutch; it’s to write a new one that reflects how Dutch buyers actually think, search, and decide.

The mindset shift

This isn’t localisation for customers. It’s localisation for the systems that decide how customers see you.

You don’t need to be everywhere, but you do need to be understood everywhere. Not because you want to sell there, but because the reputation you build in one language will inevitably leak into others – and into the systems that shape search results, summaries, and recommendations in your actual markets.

Sometimes that means showing up in markets you’ll never monetise. Sometimes it means letting go of the neat, tidy boundaries between “our audience” and “everyone else.”

The choice isn’t between being visible or invisible in those places. It’s between being defined by your own hand, or by the fragments, translations, and half-truths left behind by others.

The brands that understand this and act on it won’t just be present in their markets. They’ll be present in the global dataset that the machines learn from. And that’s where the real competition is now.

Why semantic HTML still matters

21 July 2025 at 20:03

Somewhere along the way, we forgot how to write HTML – or why it mattered in the first place.

Modern development workflows prioritise components, utility classes, and JavaScript-heavy rendering. HTML becomes a byproduct, not a foundation.

And that shift comes at a cost – in performance, accessibility, resilience, and how machines (and people) interpret your content.

I’ve written elsewhere about how JavaScript is killing the web. But one of the most fixable, overlooked parts of that story is semantic HTML.

This piece is about what we’ve lost – and why it still matters.

Semantic HTML is how machines understand meaning

HTML isn’t just how we place elements on a page. It’s a language – with a vocabulary that expresses meaning.

Tags like <article>, <nav> and <section> aren’t decorative. They express intent. They signal hierarchy. They tell machines what your content is, and how it relates to everything else.

Search engines, accessibility tools, AI agents, and task-based systems all rely on structural signals – sometimes explicitly, sometimes heuristically. Not every system requires perfect markup, but when they can take advantage of it, semantic HTML can give them clarity. And in a web full of structurally ambiguous pages, that clarity can be a competitive edge.

Semantic markup doesn’t guarantee better indexing or extraction – but it creates a foundation that systems can use, now and in the future. It’s a signal of quality, structure, and intent.

If everything is a <div> or a <span>, then nothing is meaningful.

It’s not just bad HTML – it’s meaningless markup

It’s easy to dismiss this as a purity issue. Who cares whether you use a <div> or a <section>, as long as it looks right?

But this isn’t about pedantry. Meaningless markup doesn’t just make your site harder to read – it makes it harder to render, harder to maintain, and harder to scale.

This kind of abstraction leads to markup that often looks like this:

<div class="tw-bg-white tw-p-4 tw-shadow tw-rounded-md">
  <div class="tw-flex tw-flex-col tw-gap-2">
    <div class="tw-text-sm tw-font-semibold tw-uppercase tw-text-gray-500">ACME Widget</div>
    <div class="tw-text-xl tw-font-bold tw-text-blue-900">Blue Widget</div>
    <div class="tw-text-md tw-text-gray-700">Our best-selling widget for 2025. Lightweight, fast, and dependable.</div>
    <div class="tw-mt-4 tw-flex tw-items-center tw-justify-between">
      <div class="tw-text-lg tw-font-bold">$49.99</div>
      <button class="tw-bg-blue-600 tw-text-white tw-px-4 tw-py-2 tw-rounded hover:tw-bg-blue-700">Buy now</button>
    </div>
  </div>
</div> 

Sure, this works. It’s styled. It renders. But it’s semantically dead.

It gives you no sense of what this content is. Is it a product listing? A blog post? A call to action?

You can’t tell at a glance – and neither can a screen reader, a crawler, or an agent trying to extract your pricing data.

Here’s the same thing with meaningful structure:

<article class="product-card">
  <header>
    <p class="product-brand">ACME Widget</p>
    <h1 class="product-name">Blue Widget</h1>
  </header>
  <p class="product-description">Our best-selling widget for 2025. Lightweight, fast, and dependable.</p>
  <footer class="product-footer">
    <span class="product-price">$49.99</span>
    <button class="buy-button">Buy now</button>
  </footer>
</article>

Now it tells a story. There’s structure. There’s intent. You can target it in your CSS. You can extract it in a scraper. You can navigate it in a screen reader. It means something.

Semantic HTML is the foundation of accessibility. Without structure and meaning, assistive technologies can’t parse your content. Screen readers don’t know what to announce. Keyboard users get stuck. Voice interfaces can’t find what you’ve buried in divs. Clean, meaningful HTML isn’t just good practice – it’s how people access the web.
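As a bare-bones sketch (not a complete, audited pattern), the landmark elements below give a screen reader a navigable outline without any ARIA bolted on – the placeholder content is invented, purely for illustration:

<header>Site branding and search</header>
<nav>Main navigation links</nav>
<main>
  <h1>Blue Widget</h1>
  <p>Product details and purchase options.</p>
</main>
<footer>Contact details and legal links</footer>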

That’s not to say frameworks are inherently bad, or inaccessible. Tailwind, atomic classes, and inline styles can absolutely be useful – especially in complex projects or large teams where consistency and speed matter. They can reduce cognitive overhead. They can improve velocity.

But they’re tools, not answers. And when every component devolves into a soup of near-duplicate utility classes – tweaked for every layout and breakpoint – you lose the plot. The structure disappears. The purpose is obscured.

This isn’t about abstraction. It’s about what you lose in the process.

And that loss doesn’t just hurt semantics – it hurts performance. In fact, it’s one of the biggest reasons the modern web feels slower, heavier, and more fragile than ever.

Semantic rot wrecks performance

We’ve normalised the idea that HTML is just a render target – that we can throw arbitrary markup at the browser and trust it to figure it out. And it does. Browsers are astonishingly good at fixing our messes.

But that forgiveness has a cost.

Rendering engines are designed to be fault-tolerant. They’ll infer roles, patch up bad structure, and try to render things as you intended. But every time they have to do that – every time they have to guess what your <div> soup is trying to be – it costs time. That’s CPU cycles. That’s GPU time. That’s power, especially on mobile.

Let’s break down where and how the bloat hits hardest – and why it matters.

Big DOMs are slow to render

Every single node in the DOM adds overhead. During rendering, the browser walks the DOM tree, builds the CSSOM, calculates styles, resolves layout, and paints pixels. More nodes mean more work at each stage.

It’s not just about download size (though that matters too – more markup means more bytes, and potentially less efficient compression). It’s about render performance. A bloated DOM means longer layout and paint phases, more memory usage, and higher energy usage.

Even simple interactions – like opening a modal or expanding a list – can trigger reflows that crawl through your bloated DOM. And suddenly your “simple” page lags, stutters, or janks.

You can see this in Chrome DevTools. Open the Performance tab, record a trace, and watch the flame chart light up every time your layout engine spins its wheels.

Fun fact: parsing isn’t the bottleneck – modern browsers like Chromium can chew through HTML remarkably quickly. The real cost comes during CSSOM construction, layout, paint, and composite. And HTML parsing only blocks when you hit a non-deferred <script> or a render-blocking stylesheet – which again underscores why clean markup still matters, but also why you need a smart loading order.
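To make that loading-order point concrete, here’s a minimal sketch of a <head> where nothing blocks the parser unnecessarily (the file names are placeholders, nothing more):

<head>
  <!-- Render-blocking by default, so keep critical CSS small -->
  <link rel="stylesheet" href="critical.css">
  <!-- Non-matching media: fetched at low priority, doesn't block rendering -->
  <link rel="stylesheet" href="print.css" media="print">
  <!-- defer: downloads in parallel with parsing, executes once parsing is done -->
  <script src="app.js" defer></script>
</head>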

Complex trees cause layout thrashing

But it’s not just about how much markup you have – it’s about how it’s structured. Deep nesting, wrapper bloat, and overly abstracted components create DOM trees that are hard to reason about and costly to render. The browser has to work harder to figure out what changes affect what – and that’s where things start to fall apart.

Toggle a single class, and you might invalidate layout across the entire viewport. That change cascades through parent-child chains, triggering layout shifts and visual instability. Components reposition themselves unexpectedly. Scroll anchoring fails, and users lose their position mid-interaction. The whole experience becomes unstable.

And because this all happens in real time – on every interaction – it hits your frame budget. Targeting 60fps? That gives you just ~16ms per frame. Blow that budget, and users feel the lag instantly.

You’ll see it in Chrome’s DevTools – in the “Layout Shift Regions” or in the “Frames” graph as missed frames stack up.

When you mutate the DOM, browsers don’t always re-layout the whole tree – there’s incremental layout processing. But deeply nested or ambiguous markup still triggers expensive ancestor checks. Projects like Facebook’s “Spineless Traversal” show that browsers still pay a performance penalty when many nodes need checking.

Redundant CSS increases recalculation cost

A bloated DOM is bad enough – but bloated stylesheets make things even worse.

Modern CSS workflows – especially in componentised systems – often lead to duplication. Each component declares its own styles – even when they repeat. There’s no cascade. No shared context. Specificity becomes a mess, and overrides are the default.

For example, here’s what that often looks like:

/* button.css */
.btn {
  background-color: #006;
  color: #fff;
  font-weight: bold;
}

/* header.css */
.header .btn {
  background-color: #005;
}

/* card.css */
.card .btn {
  background-color: #004;
}

Each file redefines the same thing. The browser has to parse, apply, and reconcile all of it. Multiply this by hundreds of components, and your CSSOM – the browser’s internal model of all CSS rules – balloons.

Every time something changes (like a class toggle), the browser has to re-evaluate which rules apply where. More rules, more recalculations. And on lower-end devices, that becomes a bottleneck.

Yes, atomic CSS systems like Tailwind can reduce file size and increase reuse. But only when used intentionally. When every component gets wrapped in a dozen layers of utility classes, and each utility is slightly tweaked (margin here, font there), you end up with thousands of unique combinations – many of which are nearly identical.

The cost isn’t just size. It’s churn.

Browsers match selectors from right to left (e.g., for div.card p span, they start at the span and walk up through its ancestors). This is efficient for clear, specific selectors – but bloated, deep trees or overly generic rules force a lot of over-scanning.
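One way out of that duplication – a sketch that assumes the earlier .btn example is representative – is to keep a single base rule and let each context override an inherited custom property, rather than redefining the rule:

/* button.css */
.btn {
  background-color: var(--btn-bg, #006); /* falls back to the base colour */
  color: #fff;
  font-weight: bold;
}

/* contexts override the variable, not the rule */
.header { --btn-bg: #005; }
.card   { --btn-bg: #004; }

Fewer, flatter rules mean a smaller CSSOM and less right-to-left selector scanning on every recalculation.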

Autogenerated classes break caching and targeting

It’s become common to see class names like .sc-a12bc, .jsx-392hf, or .tw-abc123. These are often the result of CSS-in-JS systems, scoped styles, or build-time hashing. The intent is clear: localise styles to avoid global conflicts. And that’s not a bad idea.

But this approach comes with a different kind of fragility.

If your classes are ephemeral – if they change with every build – then:

  • Your analytics tags break.
  • Your end-to-end tests need constant maintenance.
  • Your caching strategies fall apart.
  • Your markup diffs become unreadable.
  • And your CSS becomes non-reusable by default.

From a performance perspective, that last point is critical. Caching only works when things are predictable. The browser’s ability to cache and reuse parsed stylesheets depends on consistent selectors. If every component, every build, every deployment changes its class names, the browser has to reparse and reapply everything.

Worse, it forces tooling to rely on brittle workarounds. Want to target a button in your checkout funnel via your tag manager? Good luck if it’s wrapped in three layers of hashed components.

This isn’t hypothetical. It’s a common pain point in modern frontend stacks, and one that bloats everything – code, tooling, rendering paths.

Predictable, semantic class names don’t just make your life easier. They make the web faster.
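One pragmatic middle ground – sketched here with invented class and attribute names – is to pair whatever your build tool generates with a stable, human-chosen hook that CSS, analytics, and tests can rely on:

<!-- The hashed class changes with every build; the semantic class
     and data attribute are stable hooks for styling, tracking, and testing -->
<button class="sc-a12bc buy-button" data-action="checkout-buy">Buy now</button>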

Semantic tags can provide layout hints

Semantic HTML isn’t just about meaning or accessibility. It’s scaffolding. Structure. And that structure gives both you and the browser something to work with.

Tags like <main>, <nav>, <aside>, and <footer> aren’t just semantic – they’re block-level by default, and they naturally segment the page. That segmentation often lines up with how the browser processes and paints content. They don’t guarantee performance wins, but they create the conditions for them.

When your layout has clear boundaries, the browser can scope its work more effectively. It can isolate style recalculations, avoid unnecessary reflows, and better manage things like scroll containers and sticky elements.

More importantly: in the paint and composite phases, the browser can distribute rendering work across multiple threads. GPU compositing pipelines benefit from well-structured DOM regions – especially when they’re paired with properties like contain: paint or will-change: transform. By creating isolated layers, you reduce the overhead of re-rasterising large portions of the page.

If everything is a giant stack of nested <div>s, there’s no clear opportunity for this kind of isolation. Every interaction, animation, or resize event risks triggering a reflow or repaint that affects the entire tree. You’re not just making it harder for yourself – you’re bottlenecking the rendering engine.

Put simply: semantic tags help you work with the browser instead of fighting it. They’re not magic, but they make the magic possible.
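As a rough sketch of what that pairing can look like – the selectors and class name are hypothetical, not a recipe – containment and compositing hints can be scoped to the semantic regions themselves:

/* work triggered inside the sidebar or footer stays inside it */
aside,
footer {
  contain: layout paint;
}

/* a nav that animates on scroll gets its own compositing layer */
.site-nav--sticky {
  will-change: transform;
}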

Animations and the compositing catastrophe

Animations are where well-structured HTML either shines… or fails catastrophically.

Modern browsers aim to offload animation work to the GPU. That’s what enables silky-smooth transitions at 60fps or higher. But for that to happen, the browser needs to isolate the animated element onto its own compositing layer. Only certain CSS properties qualify for this kind of GPU-accelerated treatment – most notably transform and opacity.

If you animate something like top, left, width, or margin, you’re triggering the layout engine. That means recalculating layout for everything downstream of the change. That’s main-thread work, and it’s expensive.

On a simple page? Maybe you get away with it.

On a deeply nested component with dozens of siblings and dependencies? Every animation becomes a layout thrash. And once your animation frame budget blows past 16ms (the limit for 60fps), things get janky. Animations stutter. Interactions lag. Scroll becomes sluggish.

You can see this in DevTools’ Performance panel – layout recalculations, style invalidations, and paint operations lighting up the flame chart.
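To make the difference concrete, here’s a minimal before-and-after sketch (the class and keyframe names are invented). The first version animates left and forces layout on every frame; the second animates transform and opacity, which the compositor can handle:

/* Layout-bound: every frame recalculates geometry */
.panel--slide-layout {
  position: relative;
  animation: slide-left 300ms ease-out;
}
@keyframes slide-left {
  from { left: -200px; }
  to   { left: 0; }
}

/* Compositor-friendly: transform and opacity can be GPU-accelerated */
.panel--slide-composited {
  animation: slide-in 300ms ease-out;
}
@keyframes slide-in {
  from { transform: translateX(-200px); opacity: 0; }
  to   { transform: translateX(0); opacity: 1; }
}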

Semantic HTML helps here too. Proper structural boundaries allow for more effective use of modern CSS containment strategies:

  • contain: layout; tells the browser it doesn’t need to recalculate layout outside the element.
  • will-change: transform; hints that a compositing layer is needed.
  • isolation: isolate; and contain: paint; can help prevent visual spillover and force GPU layers.

But these tools only work when your DOM is rational. If your animated component is nested inside an unpredictable pile of generic <div>s, the browser can’t isolate it cleanly. It doesn’t know what might be affected – so it plays it safe and recalculates everything.

That’s not a browser flaw. It’s a developer failure.

Animation isn’t just about what moves. It’s about what shouldn’t.

Rendering and painting are parallel operations in modern engines. But DOM/CSS changes often force main-thread syncs, killing that advantage.

Compositing hints like will-change: transform ask the browser to promote an element onto its own GPU layer, so composites are handled separately. That avoids layout and paint work on the main thread – but only when the DOM structure allows distinct layering containers.

CSS containment and visibility: powerful, but fragile

Modern CSS gives us powerful tools to manage performance – but they’re only effective when your HTML gives them room to breathe.

Take contain. You can use contain: layout, paint, or even size to tell the browser “don’t look outside this box – nothing in here affects the rest of the page.” This can drastically reduce the cost of layout recalculations, especially in dynamic interfaces.

But that only works when your markup has clear structural boundaries.

If your content is tangled in a nest of non-semantic wrappers, or if containers inherit unexpected styles or dependencies, then containment becomes unreliable. You can’t safely contain what you can’t isolate. The browser won’t take the risk.

Likewise, content-visibility: auto is one of the most underrated tools in the modern CSS arsenal. It lets the browser skip rendering elements that aren’t visible on-screen – effectively “virtualising” them. That’s huge for long pages, feeds, or infinite scroll components.

But it comes with caveats. It requires predictable layout, scroll anchoring, and structural coherence. If your DOM is messy, or your components leak styles and dependencies up and down the tree, it backfires – introducing layout jumps, rendering bugs, or broken focus states.

These aren’t magic bullets. They’re performance contracts. And messy markup breaks those contracts.

Semantic HTML – and a clean, well-structured DOM – is what makes these tools viable in the first place.
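Here’s a minimal sketch of what that looks like when the structure cooperates – assuming a long feed of reasonably uniform items, with an invented class name:

.feed-item {
  content-visibility: auto;           /* skip rendering work while off-screen */
  contain-intrinsic-size: auto 480px; /* reserve an estimated height so scrolling stays stable */
}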

MDN’s documentation highlights how contain: content (shorthand for layout, paint, and style containment) lets browsers optimise entire subtrees independently. And real-world A/B tests have shown INP improvements on e-commerce pages using content-visibility: auto.

Agents are the new users – and they care about structure

The web isn’t just for humans anymore.

Search engines were the first wave – parsing content, extracting meaning, and ranking based on structure and semantics. But now we’re entering the era of AI agents, assistants, scrapers, task runners, and LLM-backed automation. These systems don’t browse your site. They don’t scroll. They don’t click. They parse.

They look at your markup and ask:

  • What is this?
  • How is it structured?
  • What’s important?
  • How does it relate to everything else?

A clean, semantic DOM answers those questions clearly. A soup of <div>s does not.

And when these agents have to choose between ten sites that all claim to sell the same widget, the one that’s easier to interpret, extract, and summarise will win.

That’s not hypothetical. Google’s shopping systems, summarisation agents like Perplexity, AI browsers like Arc, and assistive tools for accessibility are all examples of this shift in motion. Your site isn’t just a visual experience anymore – it’s an interface. An API. A dataset.

If your markup can’t support that? You’re out of the conversation.

And yes – smart systems can and do infer structure when they have to. But that’s extra work. That’s imprecise. That’s risk.

In a competitive landscape, well-structured markup isn’t just an optimisation – it’s a differentiator.

Structure is resilience

Semantic HTML isn’t just about helping machines understand your content. It’s about building interfaces that hold together under pressure.

Clean markup is easier to debug. Easier to adapt. Easier to progressively enhance. If your JavaScript fails, or your stylesheets don’t load, or your layout breaks on an edge-case screen – semantic HTML means there’s still something usable there.

That’s not just good practice. It’s how you build software for the real world.

Because real users have flaky connections. Real devices have limited power. Real sessions include edge cases you didn’t test for.

Semantic markup gives you a baseline. A fallback. A foundation.

Structure isn’t optional

If you want to build for performance, accessibility, discoverability, or resilience – if you want your site to be fast, understandable, and adaptable – start with HTML that means something.

Don’t treat markup as an afterthought. Don’t let your tooling bury the structure. Don’t build interfaces that only work when the stars align and the JavaScript loads.

Semantic HTML is a foundation. It’s fast. It’s robust. It’s self-descriptive. It’s future-facing.

It doesn’t stop you using Tailwind. It doesn’t stop you using React. But it does ask you to be deliberate. To design your structure with intent. To write code that tells a story – not just to humans, but to browsers, bots, and agents alike.

This isn’t nostalgia. This is infrastructure.

And if the web is going to survive the next wave of complexity, automation, and expectation – we need to remember how to build it properly.

That starts with remembering how to write HTML – and why we write it the way we do. Not as a byproduct of JavaScript, or an output of tooling, but as the foundation of everything that follows.

Adrift in a sea of sameness

9 July 2025 at 13:49

There’s somebody who looks just like you, working for each of your competitors.

They’re doing the same keyword research. Spotting the same low-hanging fruit. Following the same influencers. Reading the same blogs. Building the same slides. Ticking the same SEO checklists. Fighting for the same technical fixes. Arguing with the same developers. Making the same business case, in the same way, to the same stakeholders.

Your product is just like theirs. Same problem, same solution. Same positioning, same pricing, same promise. Swap the logos on your homepages, and nobody would notice.

Your website is like a clone of your competitors. Same structure. Same language. Same design patterns. Same stock photos. Same author bios. Same thin “values”. Same thinking. Same mistakes.

And when someone in your team finally suggests doing something different – something bold, something opinionated, something genuinely useful or original – someone in your leadership will inevitably say, “but competitor X doesn’t do that”. And so the spiral begins.

We don’t do it because they don’t do it. They don’t do it because we don’t do it. Everybody looks to everybody else for permission to be interesting. Nobody acts. Nobody leads. Nobody dares. Just a whole ecosystem of well-meaning people in nice offices running perfectly average businesses, trying not to get fired.

We call it market alignment. Brand protection. Consistency. But really, it’s just fear. Fear of being first. Fear of attention. Fear of being wrong. So we compromise. We polish. We go back to safe. Safe headlines. Safe CTAs. Safe content.

And now the kicker. This whole mess is exactly what AI is trained on.

When the web is beige, the machine learns to serve beige. Every echoed article trains the model to repeat the average. When sameness becomes a survival strategy, we don’t just lose market differentiation. We become fuel for our own redundancy.

If your content looks just like everything else, there’s no reason for a human to choose it, or for a machine to prioritise it. It might as well have been written by an AI, summarised by an AI, and quietly discarded by an AI.

This is your competition now. Not just the business next door with the same three pricing tiers and the same integration with HubSpot, but the agent reading both your sites and deciding which one their user never needs to visit again.

So, where are you unique? Or what could you do uniquely? Because everything else – your content, your tech stack, your keywords, your KPIs, your pages – that’s just table stakes.

And if you’re serious about showing up in search, that means asking harder questions. Not just “what keywords do we want to rank for?” but “what do we believe that nobody else does?”, “what are we brave enough to say?”, and “where can we be the answer, not just an option?”.

That’s not about chasing volume or clustering content by topic. It’s about clarity. Depth. Quality. It’s about knowing your market better than anyone else. Saying what others won’t. Building what others don’t. And tracking your impact like it matters.

The tools are here. The data is here. But what you do with them – that’s where you stop being average.

“Performance Marketing” is just advertising (with a dashboard)

3 July 2025 at 12:14

Somewhere along the way, the word “marketing” got hijacked.

What used to be a broad, strategic, and often creative discipline has been reduced to a euphemism for “running ads”.

Platforms like Google and Meta now refer to their ad-buying interfaces as “marketing platforms”. Their APIs for placing bids and buying reach are called “marketing APIs”. Their dashboards don’t talk about audiences or brand equity or product-market fit – they talk about impressions, conversions, and budgets.

Let’s be clear: that isn’t marketing. That’s advertising.

Definitions matter

Marketing is the umbrella. It’s the process of understanding a market, identifying needs, shaping products and services, crafting narratives, developing positioning, building awareness, nurturing relationships, and, yes, sometimes advertising.

Advertising is just one tool in that kit. A tactic, not a strategy.

When we conflate the two – when we allow platforms, execs, or even colleagues to use the terms interchangeably – we diminish the role, value, and impact of everything else marketing encompasses.

And that’s not just a semantic issue. It’s strategic.

The corruption is convenient

It’s not hard to see why the platforms are happy with the conflation.

If “doing marketing” becomes synonymous with “spending money on ads,” then Google wins. Meta wins. Amazon wins. Their dashboards are your strategy. Your budget is their revenue. And your success is only ever as good as your last CPA.

This model suits shareholders. It suits CFOs. It suits growth-hacking culture.

But it doesn’t serve brands. It doesn’t build long-term relationships. It doesn’t create distinctiveness, loyalty, or emotional connection. It just buys a moment of attention.

The cost of conflation

We’ve seen what happens when marketing is reduced to paid media:

  • Organic strategies are deprioritised.
  • Brand-building becomes a luxury.
  • Long-term vision gets replaced by short-term optimisation.
  • Teams chase metrics that are easy to measure, rather than outcomes that matter.

This affects how organisations invest, hire, and behave. It affects how products are launched, how content is created, and how success is measured.

It’s why SEO gets pigeonholed as a performance channel, rather than a strategic enabler of discoverability and trust. It’s why storytelling gets cut from the budget. It’s why customer insight becomes an afterthought.

Let’s talk about “performance marketing”

One of the most egregious examples of this conflation is the term “performance marketing”. It sounds scientific. Rigorous. Respectable. But it’s just another euphemism for “paid ads with attribution”.

It implies that other forms of marketing don’t perform – that unless you can track every click, every conversion, every penny, it’s not real. Not valuable.

But performance isn’t the same as impact. Brand builds memory. Storytelling builds trust. Relationships build retention. These things matter – and they don’t always fit neatly into a last-click attribution model.

By elevating “performance marketing” as the gold standard, we ignore the slow-burn power of brand, the compounding effects of reputation, and the strategic foundation that real marketing is built on.

Reclaiming the language

If we want to fix this – if we care about the value and future of marketing – we need to start by taking back the word.

Marketing isn’t media buying. It’s not campaign management. It’s not an algorithmic bidding war.

It’s the craft of creating something valuable, positioning it well, and connecting it meaningfully with the people who need it.

That includes product. That includes experience. That includes strategy. That includes search, content, and comms.

If we let the platforms define the boundaries of our work, we’ll never get out from under their thumb.

Common objections (and why they’re wrong)

Let’s address the inevitable pushback – especially from those who live and breathe “performance marketing” dashboards.

“Performance is marketing. If it doesn’t drive results, what’s the point?”

Performance is an outcome, not a methodology. Measuring success is vital – but defining marketing solely by what’s measurable is a category error. Plenty of valuable marketing outcomes (loyalty, awareness, word-of-mouth, brand preference) don’t show up neatly in a ROAS spreadsheet. You can’t optimise for what you refuse to see.

“Advertising is marketing – that’s how we reach people”

Reach without resonance is a waste of budget. Ads are an execution channel, not the sum of the strategy. Marketing decides what you say, how you say it, and to whom – advertising is how that gets distributed. Mistaking the media for the message is exactly the problem.

“Brand is a luxury. Performance pays the bills”

Short-term efficiency often comes at the cost of long-term growth. Brands that only feed the bottom of the funnel eventually dry up the top. Performance may pay this quarter’s bills – but without brand, there’s no demand next quarter. It’s not either/or. It’s both/and – but strategy must lead.

“Attribution is better than guessing”

Measurement matters – but so does understanding what your metrics don’t capture. Most attribution models are flawed, biased towards last-click, and blind to influence that happens before a user even enters a funnel. Relying purely on what’s trackable creates a narrow view that privileges immediate action over lasting impact.
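
To make that concrete, here’s a minimal, hypothetical sketch – the journey, the channel names, and the revenue figure are all invented – showing how two common attribution models would credit the very same conversion:

```python
# Hypothetical illustration: one £100 conversion, credited two different ways.
# The journey and channel names are invented for this example.

journey = ["organic_blog_post", "branded_search_ad", "retargeting_ad"]
revenue = 100.0

def last_click(touchpoints, value):
    """All of the credit goes to the final touchpoint before conversion."""
    return {touchpoints[-1]: value}

def linear(touchpoints, value):
    """Credit is split evenly across every touchpoint in the journey."""
    share = value / len(touchpoints)
    return {t: round(share, 2) for t in touchpoints}

print(last_click(journey, revenue))
# {'retargeting_ad': 100.0} – the blog post that started the journey gets nothing

print(linear(journey, revenue))
# {'organic_blog_post': 33.33, 'branded_search_ad': 33.33, 'retargeting_ad': 33.33}
```

Neither split is “the truth”. The point is that the model you choose – not anything the customer did – decides which channels appear to have “performed”, and last-click, by construction, erases everything that happened before the final ad.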

Advertising isn’t a marketing strategy

If your “marketing strategy” is just an ad budget and a spreadsheet, you don’t have a marketing strategy.

You’re just renting attention.

And what happens when the price goes up?

Worse – what happens when the performance stops?


Stop testing. Start shipping.

30 June 2025 at 22:07

Big brands are often obsessed with SEO testing. And it’s rarely more than performative theatre.

They try to determine whether having alt text on images is worthwhile. They question whether using words their audience actually searches for has any benefit. They debate how much passing Core Web Vitals might help improve UX. And they spend weeks orchestrating tests, interpreting deltas, and presenting charts that promise confidence – but rarely deliver clarity.

Mostly, these tests are busywork chasing the obvious or banal, creating the illusion of control while delaying meaningful progress.

Why?

Because they want certainty. Because they need to justify decisions to risk-averse stakeholders who demand clarity, attribution, and defensibility. Because no one wants to be the person who made a call without a test to point to, or who made the wrong bet on resource prioritisation.

And in most other parts of the organisation, especially paid media, incrementality testing is the norm. There, it’s relatively easy and normal to isolate inputs and outputs, and to justify spend through clean, causal models.

In those channels, the smart way to scale is to turn every decision into data, to build a perfectly optimised incrementality measurement machine. That’s clever. That’s scalable. That’s elegant.

But that only works in systems where inputs and outputs are clean, controlled, and predictable. SEO doesn’t work like that. The same levers don’t exist. The variables aren’t stable. The outcomes aren’t linear.

So the model breaks. And trying to force it anyway only creates friction, waste, and false confidence.

It also massively underestimates the cost of testing, and overstates its value.

Because SEO testing isn’t free. It’s not clean. And it’s rarely conclusive.

And too often, the pursuit of measurability leads to a skewed sense of priority. Teams focus on the things they can test, not the things they should improve. The strategic gives way to the testable. What’s measurable takes precedence over what’s meaningful. Worse, it’s often a distraction from progress. An expensive, well-intentioned form of procrastination.

Because while your test runs, while devs are tied up, while analysts chase significance, while stakeholders debate whether +0.4% is a win, your site is still broken. Your templates are still bloated. Your content is still buried.

You don’t need more proof. You need more conviction.

The future belongs to the brands that move fast, improve things, and ship the obvious improvements without needing a 40-slide test deck to back them up. The ones smart enough to recognise that being brave matters more than being certain.

Not the smartest brands. The bravest.

The mirage of measurability

The idea of SEO testing appeals because it feels scientific. Controlled. Safe. And increasingly, it feels like survival.

You tweak one thing, you measure the outcome, you learn, you scale. It works for paid media, so why not here?

Because SEO isn’t a closed system. It’s not a campaign – it’s infrastructure. It’s architecture, semantics, signals, and systems. And trying to test it like you would test a paid campaign misunderstands how the web – and Google – actually work.

Your site doesn’t exist in a vacuum. Search results are volatile. Crawl budgets fluctuate. Algorithms shift. Competitors move. Even the weather can influence click-through rates.

Trying to isolate the impact of a single change in that chaos isn’t scientific. It’s theatre.

And it’s no wonder the instinct to mechanise SEO has taken hold. Google rolls out algorithm updates that cause mass volatility. Rankings swing. Visibility drops. Budgets come under scrutiny. It’s scary – and that fear creates a powerful market for tools, frameworks, and testing harnesses that promise to bring clarity and control.

Over the last few years, SEO split-testing platforms have risen in popularity by leaning into that fear. What if the change you shipped hurt performance? What if it wasted budget? What if you never know?

That framing is seductive – but it’s also a trap.

Worse, most tests aren’t testing one thing at all. You “add relatable images” to improve engagement, but in the process:

  • You slow down the page on mobile devices
  • You alter the position of various internal links in the initial viewport
  • You alter the structure of the page’s HTML, and the content hierarchy
  • You change the average colour of the pixels in the top 30% of the page
  • You add different images for different audiences, on different locale-specific versions of your pages

So what exactly did you test? What did Google see (in which locales)? What changed? What stayed the same? How did that change their perception of your relevance, value, utility?

You don’t know. You can’t know.

And when performance changes – up or down – you’re left guessing whether it was the thing you meant to test, or something else entirely.

That’s not measurability. That’s an illusion.

And it’s only getting worse.

As Google continues to evolve, it’s increasingly focused on understanding, not just matching. It’s trying to evaluate the inherent value of a page: how helpful, trustworthy, and useful it is. Its relevance. Its originality. Its educational merit.

None of that is cleanly testable.

You can’t A/B test “being genuinely helpful” or meaningfully isolate “editorial integrity” as a metric across 100 variant URLs – at least, not easily. You can build frameworks, run surveys, and establish real human feedback loops to evaluate that kind of quality, but it’s hard. It’s expensive. It’s slow. And it doesn’t scale neatly, nor does it fit the dashboards most teams are built around.

That’s part of why most organisations – especially those who’ve historically succeeded through scale, structure, and brute force – have never had to develop that kind of quality muscle. It’s unfamiliar. It’s messy. It’s harder to consider and wrangle than simpler, more mechanical measures.

So people try to run SEO tests. Because it feels like control. Because it’s familiar. But it’s the wrong game now.

You almost certainly don’t need more SEO tests. You almost certainly need better content. Better pages. Better experience. Better intent alignment.

And you don’t get there with split tests.

You get there by shipping better things.

Meanwhile, obvious improvements are sitting waiting. Unshipped. Untested. Unloved.

Because everyone’s still trying to figure out whether the blue button got 0.6% more impressions than the green one.

It’s nonsense. And it’s killing your momentum.

Why incrementality doesn’t work in SEO

A/B testing, as it’s traditionally understood, doesn’t even cleanly work in SEO.

In paid channels, you test against users – different cohorts seeing different creatives, with clean measurement of results. But SEO isn’t a user-facing test environment. The only ‘user’ who matters in your test is the search engine itself (Google, Bing, ChatGPT; choose your flavour) – and none of them behave predictably. Their algorithms, crawl behaviour, and indexing logic are opaque and ever-changing.

So instead of testing user responses, you’re forced to test on pages. That means segmenting comparable page types – product listings, blog posts, etc. – and testing structural changes across those segments. But this creates huge noise. One page ranks well, another doesn’t, but you have no way to know how Google’s internal scoring, crawling, or understanding shifted. You can’t meaningfully derive any insight into what the ‘user’ experienced, perceived, or came to believe.
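
For illustration only, here’s a minimal sketch of what that page-based setup usually looks like – the URLs and the hash-based assignment are assumptions for the example, not a description of any particular tool:

```python
import hashlib

def assign_bucket(url: str) -> str:
    """Deterministically split comparable pages into a variant and a control group.

    A rough sketch of the page-bucketing approach described above; real
    platforms layer stratification, pre-test matching, and statistical
    modelling on top of something like this. The URLs below are invented.
    """
    digest = hashlib.md5(url.encode("utf-8")).hexdigest()
    return "variant" if int(digest, 16) % 2 == 0 else "control"

category_pages = [
    "/products/red-shoes",
    "/products/blue-shoes",
    "/products/green-shoes",
    "/products/black-boots",
]

for url in category_pages:
    print(url, "->", assign_bucket(url))

# The change ships only to the 'variant' pages; the 'control' pages are left
# untouched – and then you wait, and hope nothing else moves in the meantime.
```

Even this “clean” setup inherits every problem described here: Google is the only observer, and nothing stops a core update, a competitor, or a crawl-budget shift from treating the two groups differently for reasons that have nothing to do with your change.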

That’s why most SEO A/B testing isn’t remotely scientific. It’s just a best-effort simulation, riddled with assumptions and susceptible to confounding variables. Even the cleanest tests can only hint at causality – and only in narrowly defined environments.

Incrementality testing works brilliantly in paid media. You change a variable, control the spend, and measure the outcome. Clear in, clear out.

But in SEO, that model breaks. Here’s why:

1. SEO is interconnected, not isolated

Touch one part of the system and the rest moves. Update a template, and you affect crawl logic, layout, internal links, rendering time, and perceived relevance.

You’re not testing a change. You’re disturbing an ecosystem.

Take a simple headline tweak. Maybe it affects perceived relevance and CTR. But maybe it also reorders keywords on the page, shifts term frequency, or alters how Google understands your content.

Now, imagine you do that across a set of 200 category pages, and traffic goes up. Was it the wording? Or the new layout? Or the improved internal link prominence? You can’t know. You’re only seeing the soup after the ingredients have been blended and cooked.

2. There are no true control groups

Everything in SEO is interdependent. A “control group” of pages can’t be shielded from algorithmic shifts, site-wide changes, or competitive volatility. Google doesn’t respect your test boundaries.

You might split-test changes across 100 product pages and leave another 100 unchanged. But if a Google core update rolls out halfway through your test, or a competitor launches new content, or your site’s crawl budget is reassigned, the playing field tilts. User behaviour can skew results, too – if one page in your test group receives higher engagement, it might rise in rankings and indirectly influence how related pages are perceived. And if searcher intent shifts due to seasonal changes or emerging trends, the makeup of search results will shift with it, in ways your test boundaries can’t contain.

Your “control” group isn’t stable. It’s just less affected – maybe.

3. The test takes too long, and the world changes while you wait

You need weeks or months for significance. In that time, Google rolls out updates, competitors iterate, or the site changes elsewhere. The result is no longer meaningful.
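
To illustrate why the wait is so long, here’s a rough, back-of-the-envelope sample-size calculation for a standard two-proportion test – the baseline CTR, the lift you’re hoping to detect, and the daily traffic are all invented numbers:

```python
from math import ceil
from scipy.stats import norm

def impressions_needed(baseline_ctr: float, relative_lift: float,
                       alpha: float = 0.05, power: float = 0.8) -> int:
    """Approximate per-group sample size for detecting a CTR lift.

    A textbook two-proportion z-test approximation. It assumes stable,
    independent samples – which, as argued above, SEO rarely provides.
    """
    p1 = baseline_ctr
    p2 = baseline_ctr * (1 + relative_lift)
    z_alpha = norm.ppf(1 - alpha / 2)   # two-sided significance threshold
    z_beta = norm.ppf(power)            # desired statistical power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

n = impressions_needed(baseline_ctr=0.05, relative_lift=0.05)
print(n)          # ≈ 122,000 impressions per group
print(n / 5_000)  # at 5,000 impressions/day per group: ~24 days – if nothing else changes
```

And that’s the optimistic case: smaller lifts, or noisier pages, push the numbers out even further.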

A test that started in Q1 may yield data in Q2. But now the seasonality is different, the algorithm has shifted, and your team has shipped unrelated changes that also affect performance. Maybe a competitor shipped a product or ran a sale.

Whatever result you see, it’s no longer answering the question you asked.

4. You can’t observe most of what matters

The most important effects in SEO happen invisibly – crawl prioritisation, canonical resolution, index state, and semantic understanding. You can’t test what you can’t measure.

Did your test change how your entities were interpreted in Google’s NLP pipeline? How would you know?

There’s no dashboard for that. You’re trying to understand a black box through a fogged-up window.

5. Testing often misleads more than it informs

A test concludes. Something changed. But was it your intervention? Or a side effect? Or something external? The illusion of certainty is more dangerous than ambiguity.

Take a hypothetical test on schema markup. You implement the relevant code on a set of PDPs (product detail pages) – something like the structured-data sketch below. Traffic lifts 3%. Great! But in parallel:

  • You added 2% to the overall document weight.
  • Google rolled out new Rich Results eligibility rules.
  • A competitor lost visibility on a subset of pages due to a botched site migration.
  • The overall size of Wikipedia’s website shrank by 1%, but the average length of an article increased by 3.8 words. Oh, and they changed the HTML of their footer.
  • It was unseasonably sunny.

What caused the lift? You don’t know. But the test says “success” – and that’s enough to mislead decision-makers into prioritising rollouts that may do nothing in future iterations.
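
For context, “the relevant code” in a test like that is typically a block of schema.org Product markup on each PDP – something like this deliberately minimal, hypothetical sketch (the field values are invented; a real implementation would pull them from the product catalogue and add reviews, availability, images, and more):

```python
import json

def product_jsonld(name: str, sku: str, price: float, currency: str = "GBP") -> dict:
    """Build a minimal schema.org Product block for a PDP.

    A hypothetical sketch of 'the relevant code' in the example above.
    The values are invented and deliberately sparse.
    """
    return {
        "@context": "https://schema.org",
        "@type": "Product",
        "name": name,
        "sku": sku,
        "offers": {
            "@type": "Offer",
            "price": f"{price:.2f}",
            "priceCurrency": currency,
        },
    }

markup = json.dumps(product_jsonld("Example Running Shoe", "SKU-12345", 89.99), indent=2)
print(markup)
print(len(markup), "characters added to every PDP in the test group")
```

Even that tiny block illustrates the first confound in the list: the “one” change you shipped also made every page in the test group a little heavier.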

6. Most testing is a proxy for fear

Let’s be honest: a lot of testing isn’t about learning – it’s about deferring responsibility. It’s about having a robust story for upward reporting. About ensuring that, if results go south, there’s a paper trail that says you were being cautious and considered. It’s not about discovery – it’s about defensibility.

In that context, testing becomes theatre. A shield. A way to look responsible without actually moving forward.

And it’s corrosive. Because it shifts the culture from one of ownership to one of avoidance. From action to hesitation.

If you’re only allowed to ship something once a test proves it’s safe, and you only test things that feel risk-free, you’re no longer optimising. You’re stagnating.

And worse, you’re probably testing things that don’t even matter, just to justify the process.

If your team needs a test to prove that improving something broken won’t backfire, the issue isn’t uncertainty – it’s fear.

The buy-in trap

A question I hear a lot is: “What if I need demonstrable, testable results to get buy-in for the untestable stuff?” It’s a fair concern – and one that reveals a hidden cultural trap.

When testable wins become the gatekeepers for every investment, the essential but untestable aspects of SEO (like quality, trust, editorial integrity) end up relegated to second-class status. They’re concessions that have to be justified, negotiated, and smuggled through the organisation.

This creates a toxic loop:

  • Quality improvements aren’t seen as baseline, non-negotiable investments – they’re optional extras that compete for limited time and attention.
  • Teams spend more time lobbying, negotiating, and burning social capital for permission than actually doing the right thing.
  • Developers and creators get demotivated, knowing their work requires political finesse and goodwill rather than just good judgment.
  • Stakeholders stay stuck in risk-averse mindsets, demanding ever more proof before committing, which slows progress and rewards incremental, low-risk wins over foundational change.

The real problem? Treating quality as a concession rather than a core principle.

The fix isn’t to keep chasing testable wins to earn the right to work on quality. That only perpetuates the cycle.

Instead, leadership and teams need to shift the mindset:

  • Make quality, trust, and editorial standards strategic pillars that everyone owns.
  • Stop privileging only what’s measurable, and embrace qualitative decision-making alongside quantitative.
  • Recognise that some things can’t be tested but are obviously the right thing to do.
  • Empower teams to act decisively on quality improvements as a default, not an afterthought.

This cultural shift frees teams to focus on real progress rather than political games. It builds momentum and trust. It creates space for quality to become a non-negotiable foundation, which ultimately makes it easier to prove value across the board.

Because when quality is the baseline, you don’t have to fight for it. You just get on with making things better.

Culture, not capability

Part of the issue is that testing lends itself to the mechanical. You can measure impressions. You can test click-through rates. You can change a meta title and maybe see a clean lift.

But the things that matter more – clarity, credibility, helpfulness, trustworthiness – resist that kind of measurement. You can’t A/B test whether users believe you. You can’t split-test authority. At least, not easily.

So we over-invest in the testable and under-invest in the meaningful.

Because frankly, investing in ‘quality’ is scary. It’s intangible. It’s hard to define, and hard to measure. It doesn’t map neatly to a team or a KPI. It’s not that it’s unimportant – it’s just that it’s rarely prioritised. It sits somewhere between editorial, product, engineering, UX, and SEO – and yet belongs to no one.

So it falls through the cracks. Not because people don’t care, but because no one’s incentivised to catch it. And without ownership, it’s deprioritised. Not urgent. Not accountable.

No one gets fired for not investing in quality.

It’s not that things like trustworthiness or editorial integrity can’t be measured – but they’re harder. They require real human feedback, slower feedback loops, and more nuanced assessment frameworks. You can build those systems. But they’re costlier, less convenient, and don’t fit neatly into the A/B dashboards most teams are built around.

So we default to what’s easy, not what’s important.

We tweak the things we can measure, even when they’re marginal, instead of improving the things we can’t – even when they’re fundamental.

The result? A surface-level optimisation culture that neglects what drives long-term success.

Most organisations don’t default to testing because it’s effective. They do it because it’s safe.

Or more precisely, because it’s defensible.

If a test shows no impact, that’s fine. You were being cautious. If a test fails, that’s fine. You learned something. If you ship something without testing, and it goes wrong? That’s a career-limiting move.

So teams run tests. Not because they don’t know what to do, but because they’re not allowed to do it without cover.

The real blockers aren’t technical – they’re cultural:

  • A leadership culture that prizes risk-aversion over results.
  • Incentives that reward defensibility over decisiveness.
  • A lack of trust in SEO as a strategic driver, not just a reporting layer.

In that environment, testing becomes a security blanket.

You don’t test to validate your expertise – you test because nobody will sign off without a graph.

But if every improvement needs a test, and every test needs sign-off, and every sign-off needs consensus, you don’t have a strategy. You have inertia. That’s not caution. That’s a bottleneck.

But what about prioritisation?

Of course, resources are finite. That’s why testing can seem appealing – it offers a way to “prove” that an investment is worth it before spending the effort.

But in practice, that often backfires.

If something is so uncertain or marginal that it needs a multi-week SEO test to justify its existence… maybe it shouldn’t be a priority at all.

And if it’s a clear best practice – improving speed, crawlability, structure, or clarity – then you don’t need a test. You need to ship it.

Testing doesn’t validate good work. It delays it.

So what should you do instead? Use a more honest, practical decision model.

Here’s how to decide:

1. If the change is foundational and clearly aligned with best practice – things like improving site speed, fixing broken navigation, clarifying headings, or making pages more crawlable: → Just ship it. You already know it’s the right thing to do. Don’t waste time testing the obvious.

2. If the change is speculative, complex, or genuinely uncertain – like rolling out AI-generated content, removing large content sections, or redesigning core templates: → Test it, or pilot it. There’s legitimate risk and learning value. Controlled experimentation makes sense here.

3. If the change is minor, marginal, or only matters if it performs demonstrably better – like small content tweaks, cosmetic design changes, or headline experiments: → Deprioritise it. If it only matters under test conditions, it probably doesn’t matter enough to invest in at all.

This isn’t just about prioritising effort. It’s about prioritising momentum. And it’s worth noting that other parts of marketing, like brand or TV, have long operated with only partial measurability. These disciplines haven’t been rendered ineffective by the absence of perfect data. They’ve adapted by anchoring in strategy, principles, and conviction. SEO should be no different.

Yes, sometimes even best-practice changes surprise us. But that’s not a reason to freeze. It’s a reason to improve your culture, your QA, and your confidence in making good decisions. Testing shouldn’t be your first defence – good fundamentals should.

If you’re spending more time building test harnesses than fixing obvious problems, you’re not optimising your roadmap – you’re defending it from progress.

If your organisation can’t ship obvious improvements because it’s addicted to permission structures and dashboards, testing isn’t your salvation. It’s your symptom.

And no amount of incrementality modelling will fix that.

The alternative

This isn’t just idealism – it’s a strategic necessity. In a world where other channels are becoming more expensive, more competitive, and less efficient, the brands that succeed will be the ones who stop dithering and start iterating. Bravery isn’t a rebellion against data – it’s a recognition that over-optimising for certainty can paralyse progress.

What’s the alternative?

Bravery.

Not recklessness. Not guesswork. But conviction – the confidence to act without demanding proof for every obvious improvement.

You don’t need another test. You need someone senior enough, trusted enough, and brave enough to say:

“We’re going to fix this because it’s clearly broken.”

That’s it. That’s the strategy.

A fast site is better than a slow one. A crawlable site is better than an impenetrable one. Clean structure beats chaos. Good content beats thin content. These aren’t radical bets. They’re fundamentals.

You don’t need to test whether good hygiene is worth doing. You need to do it consistently and at scale.

And the only thing standing between you and that outcome isn’t a lack of data. It’s a lack of permission.

Bravery creates permission. Bravery cuts through bureaucracy. Bravery aligns teams and unlocks velocity.

You don’t scale SEO by proving the value of every meta tag and message. You scale by improving everything that needs to be improved, without apology.

The best brands of tomorrow won’t be the most optimised for certainty. They’ll be the ones who shipped. The ones who trusted their people. The ones who moved.

The brave ones.

The strategic fork

Many of the large brands that over-rely on testing do so because they’ve never had to be good at SEO. They’ve never needed to build genuinely useful content. Never had to care about page speed, accessibility, or clarity. They’ve succeeded through scale, spend, or brand equity.

But the landscape is changing. Google is changing. Users are changing.

And if those brands don’t adapt – if they keep waiting for tests to tell them how to be better – they’ll be left with one option: spend.

More money on ads. More dependency on paid visibility. More fragility in the face of competition.

And yes, that route is testable. It’s measurable. It’s incremental.

But it’s also a treadmill – one that gets faster, more expensive, and less effective over time.

Because if you don’t build your organic capability now, you won’t have one when you need it.

And you will need it.

Because the answer isn’t to build some omniscient framework to measure and score every nuance of quality. Sure, you could try – but doing so would be so complex, expensive, and burdensome that you’d spend 10x more time and resources managing the framework than actually fixing the issues it measures. You can’t checklist your way to trust. You can’t spreadsheet your way to impact. There is no 10,000-point rubric that captures what it means to be genuinely helpful, fast, clear, or useful – and even if there were, trying to implement it would be its own kind of failure.

At some point, you have to act. Not because a graph told you to. But because you believe in making things better.

That’s not guesswork. That’s faith. Faith in your team, your users, and your principles.

What happens next

You don’t need more data. You don’t need to test for certainty. You need conviction.

The problems are obvious and many. The opportunities are clear. The question isn’t what to do next – it’s whether you’ve built the confidence to do it without waiting for permission.

If you’re in a position to lead, lead. Say: “We’re going to fix this because it’s clearly broken.”

If you’re in a position to act, act. Don’t wait for a dashboard, or a test, or the illusion of certainty.

Because the brands that win won’t be the ones who proved every improvement was safe.
They’ll be the ones who made them anyway.

Just ship it. Be brave.

