Why Your SEO Agency Can't Do AI Search
They renamed their content team. That is not the same thing.
You're reading an RFP response. It arrived eleven minutes before the deadline, which is the first red flag, but you've learned to ignore that one because all agencies run on deadline fumes the way the rest of us run on oxygen. The cover page has a new logo treatment - they've added a gradient - and the agency's name now includes the word "Intelligence," which it did not include the last time you saw a proposal from them, which was eight months ago, when they were still just a name and an ampersand and another name. You scroll past the capabilities section, past the case studies you've already seen, past the team bios that still include that one guy's marathon time, and you arrive at the new offering. It's on page fourteen. "AI Search Optimization." It has its own section. It has its own pricing tier. It costs 40% more than their regular SEO package.
You lean forward in your chair. You have been waiting for someone to show you something real about AI search, because you know it matters, because your CEO read a Wired article three weeks ago and now brings it up in every leadership meeting with the energy of someone who has recently discovered cold plunge therapy. You need an answer. You need a strategy. You need someone who knows what they're doing.
You look at the deliverables.
They are identical to the regular SEO package. Not similar. Not "enhanced." Identical. Monthly content production. Technical audits. Keyword research. Backlink analysis. Quarterly reporting. The same twelve line items you've been reading in SEO proposals since 2014, except now each one has the word "AI" inserted before it like a prefix that was stamped on at the print shop. "AI Content Strategy." "AI Keyword Research." "AI Technical Optimization." The content team has been renamed the "AI Content Intelligence Team." The reporting dashboard has been renamed the "AI Visibility Dashboard." The quarterly strategy call has been renamed the "AI Strategy Alignment Session."
The work is the same. The people are the same. The process is the same. But now there's a surcharge.
It's like your plumber adding "smart home plumbing" to their business card. The pipes are the same. The wrench is the same. The guy is the same. He still shows up between 9 and 4, which means he shows up at 3:47. But now when he snakes your drain, it's "smart drain optimization," and when he replaces your faucet, it's an "intelligent water delivery endpoint," and the invoice is 40% higher because intelligence, as it turns out, costs extra, even when it's just a sticker on the same toolbox.
This is happening everywhere. I have reviewed nine agency proposals in the last four months that include an AI search tier, and I want to tell you how many of them contained a single deliverable that was genuinely specific to how large language models retrieve and surface information. The number is one. One out of nine. And that one was from a boutique shop run by a former ML engineer who pivoted to SEO, which is not the typical agency origin story (the typical agency origin story involves a WordPress blog, a burst of link building in 2011, and the discovery that you could charge people monthly for something that used to be a one-time project).
The Relabeling Industrial Complex
I have been in SEO for more than twenty years, and in that time I have watched the industry rename itself approximately once every thirty months. We were search engine marketers, then we were inbound marketers, then we were growth hackers (briefly, mercifully), then we were content strategists, then we were "full-stack" something, and now we are AI search optimization specialists. Each rename coincides with a repricing event. The rename is the strategy. The deliverables trail behind it, unchanged, like a dog tied to a truck that's been repainted.
I'm not even angry about it, really. I understand the economics. Agencies operate on margins that would make a grocery store look like a hedge fund, and when a new wave of client demand appears - when every CMO starts asking "what are we doing about AI search" the same week - you have approximately ninety days to have an offering or you lose the RFP cycle entirely. Ninety days is not enough time to build genuine capability. It is enough time to build a slide deck. So that's what gets built.
The slide deck gets a title. The title gets a price. The price gets approved because the CMO needs to tell the CEO they have an AI strategy, and the CEO needs to tell the board the same thing, and the board needs to tell the investors, and so a chain of reassurance extends from the agency account manager all the way up to a limited partner who read the same Wired article your CEO did, and everyone in the chain is satisfied because everyone in the chain has the same need, which is not "results" but "a story about results that sounds credible in a meeting."
This is not a conspiracy. It's an ecosystem. Every organism in it is behaving rationally given its incentives. The agency needs revenue. The CMO needs air cover. The CEO needs a narrative. The investor needs a thesis. AI search provides all four with a single phrase. The fact that nobody in the chain actually understands how LLMs retrieve information is, from an ecosystem perspective, not relevant. The ecosystem does not select for understanding. It selects for plausibility.
What They Don't Know (And Can't Sell You)
Here is the core problem, and I'm going to be specific about it because specificity is the thing that's missing from every AI search pitch I've evaluated.
When a large language model answers a user's question, it is doing something fundamentally different from what Google does when it returns ten blue links. Google matches query terms to document terms, weighted by a thousand signals including link authority, content quality, user engagement, and (increasingly) semantic understanding. The process is retrieval-first: find the documents, then rank them. The user sees the documents. The user clicks. The click is the value.
An LLM does not do this. An LLM generates a response by predicting the next token in a sequence, drawing on patterns absorbed during training and - in the case of retrieval-augmented generation - on documents pulled from a search index in real time. The output is not a ranked list of documents. It is a synthesized answer. The user may never see your document. Your document may have informed the answer without being cited. Your document may have been cited but in a way that answers the question so completely that nobody clicks through. The value chain is structurally different.
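The structural difference the last two paragraphs describe can be sketched in a few lines of Python. Everything here is a toy: the corpus is invented and the term-overlap scorer is a stand-in for whatever index a real retrieval-augmented system queries. The point is the shape of the flow - retrieve, assemble context, then generate one synthesized answer the user reads instead of a list of links:

```python
def retrieve(query, corpus, k=2):
    """Score documents by naive term overlap with the query, return top k."""
    q_terms = set(query.lower().split())
    ranked = sorted(
        corpus,
        key=lambda doc: len(q_terms & set(doc.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def build_prompt(query, docs):
    """Assemble the context-first prompt a RAG system hands to the model."""
    context = "\n".join(f"- {d}" for d in docs)
    return f"Use only this context:\n{context}\n\nQuestion: {query}"

# An invented three-document corpus for illustration.
corpus = [
    "Entity optimization makes a brand machine-readable to models.",
    "Keyword density was a ranking tactic in the 2000s.",
    "Structured data exposes attributes to crawlers and models.",
]

query = "how does entity optimization work"
docs = retrieve(query, corpus)
prompt = build_prompt(query, docs)
# The model then generates ONE synthesized answer from `prompt`.
# The user never sees the ranked list, and may never see a citation.
```

In traditional search, the output of `retrieve` is the product. In RAG, it's an intermediate step the user never observes - which is exactly why click-based metrics stop describing the value chain.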
Most SEO agencies do not understand this. I don't mean they haven't read about it. I mean they don't understand it at the level where understanding translates into different work. They're still thinking in keywords. They're still thinking in rankings. They're still thinking in click-through rates and landing pages and conversion funnels that begin with a search result and end with a form fill. That entire mental model - the model on which their processes, their tools, their hiring decisions, and their pricing are built - assumes a user who sees a list of results, chooses one, and clicks. And for traditional search, that model still works. But for AI search, that model is like using a roadmap to navigate the ocean (if I were the kind of person who said "navigate the landscape," which I am not, because I have read my own voice rules, and I know that sentence was getting dangerously close).
Let me give you a concrete example. An agency tells you they'll optimize your content for AI search. You ask them how. They say they'll do "comprehensive content" that "thoroughly covers the topic" so that LLMs will "prefer your content as a source." This sounds reasonable. It is also exactly what they would have told you about regular SEO. The deliverable is a 2,500-word blog post with an H1, five H2s, a meta description, and some internal links. It is content marketing. It has always been content marketing. The AI label is decorative.
What they can't tell you - because they don't know - is how an LLM decides which source to cite when generating a response. They can't tell you about entity recognition and why being a well-defined entity in the knowledge graph matters more for AI retrieval than any individual piece of content. They can't tell you about the specific structured data patterns that make your information machine-readable in a way that influences training data incorporation. They can't tell you about citation-worthiness - why some sources get cited and others get synthesized without attribution - because the research on this is six months old and published in ML papers, not marketing blogs, and nobody at the agency reads ML papers because nobody at the agency was hired to read ML papers.
They don't know what they don't know. And the gap between what's being sold and what's being delivered is not a small gap. It is the kind of gap that wars are fought in.
The Keyword Hangover
The deepest problem is conceptual, and it goes back to the beginning of the industry. SEO was built on keywords. The entire practice, from its origins in the late 1990s through its maturation in the 2010s, was organized around the idea that people type queries into a box, and you optimize pages to match those queries. The tools are keyword tools. The strategies are keyword strategies. The reports track keyword rankings. The pitches lead with keyword opportunities. The entire cognitive framework of the profession is a keyword framework.
AI search does not work on keywords. I need to say this clearly because I keep seeing agencies apply keyword thinking to LLM systems and it is like watching someone try to unlock a door with a fishing rod - technically you're interacting with the door, but the mechanism of interaction is entirely wrong.
When someone asks ChatGPT "what's the best SEO consultant for mid-market B2B companies," the model is not matching keywords. It is synthesizing a response based on its understanding of the entities "SEO consultant," "mid-market," and "B2B," combined with whatever evidence exists in its training data or retrieval sources about specific consultants who match those criteria. The model is reasoning (or performing something that looks enough like reasoning that the distinction is academic). It is not retrieving a ranked list of keyword-optimized pages.
This means the entire SEO toolkit - keyword research, keyword mapping, keyword density analysis, keyword gap analysis, keyword difficulty scores - is not wrong for AI search, but it's not right either. It's operating at the wrong level of abstraction. It's like using a street map when you need a topographic map. The territory is the same, but the features that matter are different. For AI search, what matters is: Are you an entity? Are you a well-defined, well-connected entity that the model recognizes as authoritative on specific topics? Do you have structured data that makes your attributes machine-readable? Have you produced content that is citation-worthy - content that contains original data, original analysis, original frameworks, things that can't be synthesized from other sources because they didn't exist until you created them?
These are not keyword questions. They are entity questions, authority questions, originality questions. And most agencies are not equipped to answer them because their entire operation - the tools they pay for, the people they hire, the processes they run, the templates they use - is a keyword operation.
The Content Marketing Shell Game
I need to talk about what agencies mean when they say "AI content strategy," because I have now seen this phrase in enough proposals that I feel qualified to translate it.
"AI content strategy" means content marketing. It means blog posts, pillar pages, topic clusters, and editorial calendars. It means the same content strategy they were selling before, targeted at the same keywords (yes, keywords - they haven't actually changed the targeting methodology, just the label), distributed through the same channels, measured by the same metrics. The "AI" part of "AI content strategy" refers to the fact that the content might show up in an AI-generated response, the same way any content might show up in an AI-generated response, the same way any content might show up in a Google snippet, which is not a strategy. It is a hope. Hopes are fine. Hopes are not worth a 40% premium.
Here is what an actual AI search strategy would involve, if one could be built with today's tools and data (and I want to be honest: a fully realized AI search strategy is still somewhat speculative, because the measurement tools genuinely do not exist yet, which is itself a reason to be skeptical of anyone selling one with certainty).
First, entity optimization. Not in the SEO-buzzword sense of "we'll update your Knowledge Panel." In the deep sense of ensuring that your brand, your people, your products, and your expertise are represented as clearly defined entities across the web - in Wikipedia, in industry databases, in professional profiles, in structured data on your own site, in the corpus of content that LLMs are trained on. This is not a content strategy problem. It is a digital presence problem, closer to PR than to content marketing.
Second, citation-worthiness. Producing content that contains things worth citing - original research, proprietary data, novel frameworks, expert analysis that cannot be found elsewhere. Not "comprehensive guides" that summarize what already exists, but source material that other people (and other machines) cite because it contains something new. This requires subject matter expertise, which most agencies do not have, because most agencies are staffed by generalists who can write competently about anything but are experts in nothing (which is a structural feature of the agency model, not a personal failing - you can't hire deep experts at agency rates).
Third, structured data at a level most agencies do not implement. Not just schema markup on your blog posts (which they're already doing, sort of, when they remember). I mean comprehensive structured data that makes your entire information architecture machine-readable - organization schema, person schema, product schema, FAQ schema, how-to schema, dataset schema. The kind of implementation that requires a developer, not a content writer, and that most agencies outsource to a freelancer who implements it once and never touches it again.
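What that third item looks like in practice is easiest to see in JSON-LD, the format most schema.org markup ships in. A minimal sketch, built here as a Python dict so it can be validated before serializing - the organization name, URL, and person are invented placeholders, not a complete property list:

```python
import json

# Illustrative Organization + Person markup using common schema.org
# properties. All names and URLs below are made-up placeholders.
org_schema = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Consulting",
    "url": "https://example.com",
    # sameAs ties the entity to its other web presences - the
    # cross-linking that makes an entity "well-defined."
    "sameAs": [
        "https://www.linkedin.com/company/example-consulting",
        "https://en.wikipedia.org/wiki/Example_Consulting",
    ],
    "founder": {
        "@type": "Person",
        "name": "Jane Doe",
        "jobTitle": "SEO Consultant",
    },
}

# Serialized, this belongs inside a <script type="application/ld+json">
# tag in the page head.
markup = json.dumps(org_schema, indent=2)
```

The "comprehensive" part is doing this for the whole information architecture - organization, people, products, datasets - and keeping it current, which is why it's a developer's job rather than a one-time freelance task.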
Fourth, and this is the one nobody wants to hear: accepting that for some queries, in some categories, the goal is not to get the click but to be the source. Being the entity the model references, the brand the model names, the authority the model defers to - even if the user never visits your site. This requires a fundamental shift in how you measure success, and agencies cannot make that shift because their entire business model is built on reporting metrics that require clicks. An agency that says "we improved your AI visibility but traffic went down" will lose the account. So they don't say it. They track the same metrics. They report the same numbers. The AI label is decoration over a fundamentally unchanged practice.
The People Who Actually Get It
I know this sounds like I'm saying nobody can do AI search well. That's not what I'm saying. Some people do it very well. They are just not the people selling "AI Search Optimization" as a service tier.
The people who actually understand how to perform well in AI search are, overwhelmingly, the same people who were already good at SEO. Not because AI search is the same as regular SEO (I wrote a whole piece about how the foundations overlap but the nuances differ). Because the skillset that makes you good at SEO - deep technical understanding, genuine content expertise, the ability to think about information architecture at a structural level, the patience to build authority over years instead of sprinting for quick wins - is the same skillset that makes you good at AI search.
The best "AI search optimization" work I've seen in the past year was done by people who never called it that. A technical SEO consultant who implemented comprehensive structured data across a client's entire site, not because of AI search, but because structured data has been important since 2015 and she actually does it properly. A content strategist who focused on original research and proprietary data, not because of AI citation, but because original content has always outperformed derivative content. A brand strategist who built her client's entity presence across the web, not because of LLM training data, but because brand authority has been a ranking signal since Google started.
These people didn't rename their services. They didn't create new pricing tiers. They didn't add "AI" to their LinkedIn headlines. They just kept doing the work they were already doing, and that work happened to be exactly what AI search systems reward, because AI search systems are built on the same foundation as traditional search systems, and the people who built strong foundations don't need to rebuild them every time someone adds a new floor.
The gap is not between "AI SEO" and "regular SEO." The gap is between good SEO and bad SEO. Good SEO has always meant building genuine authority, producing content worth referencing, and ensuring your technical foundation makes your information accessible to machines. That's it. The machines changed. The foundation didn't.
What You Should Do Instead of Buying the Service Tier
If you're evaluating agencies or consultants for AI search readiness (and I realize the irony of me, a consultant, telling you how to evaluate consultants - I am aware that the wolf is writing the guide to wolves), here is what to ask.
Ask them to explain how retrieval-augmented generation works. Not in marketing terms. In technical terms. If they can't explain how an LLM decides which sources to pull during RAG, they don't understand the system they're claiming to optimize for. This is the equivalent of asking an SEO person to explain how Googlebot crawls a site. It's foundational. If they can't do it, walk.
Ask them what structured data they implement beyond basic article schema. If the answer is "we add FAQ schema to blog posts," that's fine, but it's not AI search optimization. It's regular technical SEO that most agencies have been doing (or claiming to do) since 2019.
Ask them how they measure AI search performance. This is the trick question, because the honest answer is "we can't, not reliably, not yet." If they show you a dashboard with confident numbers, those numbers are either fabricated or pulled from tools that are sampling from a stochastic system in ways that produce the appearance of data without the substance of it. The honest answer is uncertainty. Uncertainty is not what agencies sell. But it's what honest practitioners feel, because the measurement infrastructure for AI search genuinely does not exist yet, and pretending it does is either ignorance or dishonesty, and you don't want to pay for either.
Ask them what they would do differently for AI search that they wouldn't do for regular SEO. If the answer is nothing, they're honest but also proving that their "AI search tier" is a rebadged SEO package. If the answer is a specific, technical list of activities - entity optimization, citation analysis, training data auditing, structured data expansion, source authority building - they might actually know what they're talking about. But verify. Ask for examples. Ask for results. Ask them to show you a before-and-after where their work specifically improved AI citation, and then ask them how they isolated that variable, because it is nearly impossible to isolate that variable with today's tools, and anyone who claims otherwise is selling you something.
The Honest Version
Here is what I tell clients when they ask me about AI search, and I want to say it plainly because I have been sarcastic for three thousand words and the thing deserves plain language.
AI search matters. It is not a fad. It is not going away. The way people find information is genuinely changing, and if you ignore that change, you will lose ground to competitors who don't. This is real.
But the response to a real change is not to buy a fake service. The response is to do real work. Build your entity. Produce original research. Implement comprehensive structured data. Create content that is worth citing - not content that covers topics, but content that advances them. Be a source, not a summarizer. And do all of this as part of your regular SEO practice, not as a separate initiative with a separate budget and a separate team, because separation is how you end up paying twice for the same foundation.
The agencies will catch up. Some of them are catching up right now, building genuine capability behind the marketing language. Give it eighteen months and the good ones will be doing real work in this space, the same way the good ones eventually did real mobile optimization after years of selling "mobile-friendly" as a checkbox exercise. The market corrects. It just corrects slowly, and in the meantime, a lot of money gets spent on slide decks.
Your plumber doesn't need a new title. He needs to keep the water flowing and the pipes clean and the joints sealed. If he does that well, the smart home will work fine, because the smart home sits on the same pipes the dumb home did, and the water doesn't know the difference.
Neither does the machine.
---
**ARTICLE 4: "I Don't Use AI to Write This"**
A prospect asked if I use AI to write my blog posts. I said no. There was a silence on the call that lasted about three seconds, which in sales call terms is roughly the length of the Hundred Years' War. "Why not?" she asked. And I realized I had never actually articulated the answer.
I'd felt the answer. I'd felt it every time I read an AI-generated blog post and experienced that peculiar sensation of consuming something that technically qualifies as content the same way a gas station sandwich technically qualifies as food - all the components are present, the bread is bread, the meat is meat-adjacent, but the thing itself is missing something that you can't name but can absolutely taste. I'd felt it every time a competitor's website got a sudden injection of thirty new pages in a month and I knew, without checking, that every single one of them read like it had been written by a very polite entity that had never had a bad day, never lost a client, never sat in a meeting watching a decision-maker ignore data that took three weeks to compile. I'd felt it. But I'd never said it out loud, and saying things out loud is, as my therapist has pointed out on multiple occasions, a different skill than feeling them.
So I thought about it. On the call, in those three seconds, I thought about it with the speed of someone who suddenly realizes they're about to have to defend a position they've held so instinctively that they never bothered to build the defense. And what came out of my mouth was something like: "Because writing is how I think, and if I outsource the writing, I outsource the thinking, and the thinking is the product." Which is true. But it's not the whole answer. The whole answer is longer, and more complicated, and involves some things I'm not sure I believe all the way, and some things I'm very sure about, and the line between those two categories moves depending on the day.
This is the whole answer. Or as close to it as I can get.
The Convergence Problem
Every large language model is trained on essentially the same corpus. GPT-4, Claude, Gemini, Llama - the details differ but the bulk of it is the same: the internet, Wikipedia, books, academic papers, code repositories, the collective output of several billion people writing things down over several decades. When you ask any of these models to write a blog post about, say, technical SEO for B2B SaaS companies, they all produce something from the same substrate. The word choices differ. The structure varies slightly. But the substance converges, because the substance is drawn from the same pool.
This is not a flaw. It's a feature of the technology. LLMs produce the statistically likely next token, which means they produce the statistically likely next idea, which means they produce the consensus. They produce what is, on average, correct. What is, on average, well-phrased. What is, on average, the kind of thing that has been said before about this topic by people who wrote about it in the training data. They produce the mean. And the mean is, by definition, average.
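The convergence argument can be made concrete with a toy: give two "models" slightly different next-word probabilities over the same small vocabulary, and greedy decoding still lands on the same modal choice. The numbers below are invented, and real decoding is far more elaborate, but the pull toward the consensus token is the mechanism in question:

```python
def greedy_next(distribution):
    """Pick the single most likely next word (greedy decoding)."""
    return max(distribution, key=distribution.get)

# Two hypothetical models trained on similar data: they disagree at
# the margins but agree on the mode. All probabilities are invented.
model_a = {"comprehensive": 0.41, "actionable": 0.33, "contrarian": 0.26}
model_b = {"comprehensive": 0.38, "actionable": 0.36, "contrarian": 0.26}

# Different weights, identical output: the consensus wins.
choice_a = greedy_next(model_a)
choice_b = greedy_next(model_b)
```

Sampling with temperature adds variation at the word level, but the high-probability mass - the ideas, the structure, the claims - still dominates, which is why the outputs read as the same competent, characterless register.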
Now think about what this means for content marketing. The whole point of content marketing - the entire strategic premise - is differentiation. You produce content to distinguish yourself from competitors. To demonstrate unique expertise. To say something that nobody else is saying, or to say something everyone is saying but in a way that is recognizably, irreducibly yours. The value of content marketing is the extent to which your content is not like everyone else's content.
AI-generated content is converging. Not slowly. Rapidly. Because every company is using the same models (or models trained on the same data) with the same prompts (or prompts that produce the same outputs), and the result is a vast, expanding ocean of content that sounds like it was written by the same competent, characterless, unfailingly polite entity. An entity that has read everything and experienced nothing. An entity that can tell you the best practices for E-E-A-T but has never sat across from a client who just got hit by a core update and doesn't know if they should panic or wait. An entity that can produce a perfectly structured pillar page about link building strategy but has never built a link through a relationship that started as an argument at a conference bar at 11 PM in Salt Lake City (which is how I built one of the best links my site has ever received, and I'd tell you the story except it involves a disagreement about baseball statistics that somehow turned into a guest post opportunity, and that sentence is the kind of thing AI doesn't generate because AI doesn't go to conference bars and argue about baseball).
When everyone's content converges, nobody's content differentiates. And when nobody's content differentiates, content marketing stops working. Not stops working a little. Stops working at the structural level, at the level of the strategy itself, because the strategy depends on differentiation and the tool being used to execute it is an anti-differentiation machine.
I watch this happening in real time. I look at competitor blogs in the SEO consulting space and I can see the shift. There was a time - not long ago, maybe 2022 - when each competitor's blog had a voice. Some were technical and dry. Some were casual and full of screenshots. Some were academic and citation-heavy. Some were personal and anecdotal. You could read a paragraph and know whose site you were on. That is increasingly not the case. The blogs are merging into a single register: clear, professional, well-structured, thoroughly researched, and completely interchangeable. They are better-written than they used to be, in the narrow sense that the grammar is cleaner and the structure is more logical. And they are worse-written than they used to be, in the deep sense that they say nothing that couldn't be said by anyone else, because it wasn't said by anyone at all.
What Google Knows That You're Hoping It Doesn't
I want to set aside the philosophical argument for a moment and talk about the practical one, because some of you are reading this thinking "that's nice, Amos, but I need traffic, not a writing manifesto," and you're not wrong to think that, and the practical argument is, if anything, stronger than the philosophical one.
Google's helpful content system - which was first introduced in August 2022 and has been updated several times since, most recently in a way that folded it into the core ranking system itself - is explicitly designed to identify and demote content that exists primarily for search engine purposes rather than to genuinely help people. The system looks for signals of human expertise, experience, and firsthand knowledge. It looks for what Google internally calls "information gain" - does this page contain something you can't get from other pages on the same topic? It looks for author authority, for specificity, for the kind of granular detail that comes from doing a thing rather than reading about a thing.
AI-generated content, by its nature, fails most of these signals. Not because it's bad. Because it's derived. It is, by definition, a synthesis of existing information. It cannot contain firsthand experience because it has no firsthand experience. It cannot contain original data because it generates no data. It cannot contain the kind of specificity that comes from the author having personally done the thing they're writing about, because the author has never done anything. Every claim it makes is a statistical average of claims other people have made, which means its information gain is approximately zero, which means it is precisely the kind of content the helpful content system was built to detect.
Google has been cagey about whether they can detect AI-generated content at the textual level - whether there's a classifier running on the content itself that flags it as machine-generated. I suspect they can (the technology exists, watermarking aside, and Google has the talent and the data to build a very good one). But honestly, it doesn't matter whether they can detect it at the textual level, because they can detect it at the signal level. Content that provides no information gain. Content that lacks firsthand experience markers. Content that covers a topic comprehensively without adding anything to the topic. Content that reads like a very good summary of everything that already exists on the first page of Google for this query. All of these are signals that the helpful content system already evaluates, and all of them correlate nearly perfectly with AI-generated content, whether or not Google knows (or cares) about the generation method.
The practical argument is this: AI-generated content is optimized for the wrong signals. It is optimized for coverage, for comprehensiveness, for topical completeness. Those were valuable signals in 2018. They are not the signals that matter in 2026. The signals that matter now are experience, expertise, originality, and perspective - and those are the four things that AI is structurally incapable of providing, because they all require having been a specific person in a specific situation, which is not something a language model can fake, no matter how sophisticated the prompt engineering gets.
The Counterfeit Painting Problem

I keep coming back to this analogy, and I apologize for it because it's not original (nothing I say is original - the original people are the ones making things, and I'm a consultant, which means I comment on the things other people make, which is a humbling realization that I try not to think about more than three or four times a day).

AI-written content is like a very convincing counterfeit painting. It has all the right elements. The brushstrokes look correct. The composition follows the rules. The color palette is appropriate for the period. If you showed it to most people, they would say it's a real painting, and they might even say it's a good one. But it was made by someone - something - that has never felt anything, and if you look at it long enough, you can tell.

You can tell because the painting is too correct. A real artist makes choices that don't follow the rules, not because they don't know the rules but because they know something the rules don't capture. A real artist puts a shadow where there shouldn't be one because the shadow makes you feel something. A real artist leaves a line unfinished because the unfinished line creates tension. A real artist makes a mark that is, technically, a mistake, and the mistake is the thing that makes the painting alive.

AI doesn't make those mistakes. And the absence of meaningful mistakes is itself a tell. Not to everyone. Most people scroll through AI-generated content the way most people walk through a museum gift shop - they see the reproduction and it's fine, it's nice, it serves its purpose. But the people who matter - the readers who become clients, the industry peers who become referral sources, the journalists who become amplifiers - those people can tell. They have been reading enough content for long enough that they have developed an instinct for the difference between something written and something generated, the same way an art dealer has an instinct for the difference between a painting and a print, even when the print is very good.

And here's the thing about a counterfeit: its value depends entirely on nobody knowing it's a counterfeit. The moment someone suspects, the value doesn't decrease. It collapses. A counterfeit that's been identified is worth less than nothing - it's evidence of dishonesty. And we are approaching the moment where AI-generated content, on a professional services website, will be received the same way. Not "oh, they used AI to write this, that's efficient." More like "oh, they used AI to write this, so they don't actually have expertise, and everything on this site is potentially a synthesis of other people's ideas presented as their own, and I can't trust any of it." The liability is not that the content is bad. The liability is that the content is untrustworthy, and trust, once lost, does not come back in the same shape.
The Business Argument That Nobody Wants to Hear

I consult for companies that range from startups to publicly traded enterprises, and across that entire range, there is one thing they all want from their content marketing: a competitive advantage. They want their content to do something that their competitors' content doesn't do. They want their blog to be the one people bookmark. They want their newsletter to be the one people actually read. They want their thought leadership to be the thing that makes a prospect think "these people know something the other vendors don't."

You cannot build a competitive advantage on a tool that is available to all of your competitors. This is so obvious that it feels stupid to type it, and yet entire content strategies are being built on exactly this premise. "We'll use AI to produce more content, faster, cheaper." Great. So will your competitor. And their competitor. And the company that launched last Tuesday. You are all using the same tool to produce content from the same training data, which means you are all producing the same content at the same speed, which means you have optimized your way into a perfectly level playing field, which is the opposite of a competitive advantage. It is a competitive wash.

The competitive advantage in content has always been the same thing: saying something only you can say. Having an insight that comes from your specific experience. Making an argument that requires your specific expertise. Telling a story that only you were there for. These are the things that make someone read your post instead of your competitor's post, link to your post instead of your competitor's post, remember your name instead of your competitor's name. And these are precisely the things that AI cannot do, because AI does not have specific experience, does not have specific expertise, was not there for any specific story.

I realize this sounds like I'm romanticizing writing. I'm not. I'm making a business argument. The business argument is: if your content is interchangeable with your competitor's content, it has no strategic value. It may have tactical value - it fills pages, it targets keywords, it gives you something to post on LinkedIn every Tuesday. But strategic value requires differentiation, and AI is a differentiation-erasing technology. The more you use it, the more you sound like everyone else who uses it. The more you sound like everyone else, the less reason anyone has to choose you.
Writing as Thinking

Now I'm going to say the thing I said on that call, the thing that fell out of my mouth before I'd thought it through, and I'm going to try to say it better because I've had time to think about it and because writing this (by hand, word by word, the slow way) is itself the process of thinking it through, which is sort of the point.

Writing is not the output of thinking. Writing is the thinking. When I sit down to write about, say, why AI SEO isn't a real discipline, I don't start with a finished argument that I then transcribe into prose. I start with a hunch, a feeling, a half-formed irritation, and the act of writing is what turns that hunch into an argument. The sentences do the work. The process of finding the right word forces me to clarify what I actually mean. The process of structuring a paragraph forces me to understand the logical sequence of my own ideas. The process of writing a transition between two sections forces me to discover the connection between two thoughts that I didn't know were connected until I tried to connect them.

This is not a mystical claim. It is a well-documented cognitive phenomenon. Writing is a form of externalized thinking. The physical (or digital) act of putting words in sequence engages a different cognitive process than thinking silently, and the output of that process - the ideas, the connections, the arguments - is different from what you would produce if you just sat and thought. Every serious writer knows this. You don't know what you think until you write it. The writing is the discovery.

When I use AI to write, I skip the discovery. I get the output without the process, and the process was where the value was. I get a blog post that sounds like my ideas but isn't - it's a statistical approximation of my ideas based on whatever the model absorbed about the topic, and the approximation is close enough to be satisfying and far enough to be wrong in ways I might not catch, because the wrongness is subtle. It's not factual errors (those are easy to spot). It's the absence of the connections I would have made if I'd done the thinking myself, the insights I would have discovered in the act of writing that I'll never discover because I outsourced the act to a machine.

I am not productive when I skip the writing. I am less productive. Because the writing is not a cost. It is an investment. Every piece I write teaches me something I didn't know I didn't know. Every argument I construct reveals a gap in my thinking that I can fill. Every analogy I reach for shows me a connection between two ideas that becomes part of my permanent mental model. Outsourcing this to AI is not "freeing up my time for higher-value work." It is eliminating the highest-value work I do and replacing it with time I'll probably spend on email.
The Objections, Because I Know What You're Thinking

You're thinking: "This is a luxury. You can afford to hand-write everything because you're a solo consultant with a small blog. If I'm running a content operation that needs to produce fifty pages a month, I can't write them all by hand."

You're right. Sort of. If your content strategy requires fifty pages a month, then yes, you need tools to scale, and AI is a powerful scaling tool, and I'm not here to tell you that you should hire fifty writers instead. But I want to push back on the premise. Why do you need fifty pages a month? Because your SEO strategy says so? Because "more content = more traffic" was true in 2019 and nobody questioned whether it's still true in 2026? Because your agency (the one with the AI service tier) told you that volume is the game?

Volume is not the game. Not anymore. Google's helpful content system is explicitly designed to reward quality over quantity. Sites that publish massive amounts of thin or derivative content are being demoted. Sites that publish less frequently but with more depth, more originality, more genuine expertise are being elevated. The game has changed, and the change favors exactly the approach I'm describing: fewer pieces, more substance, written by people who know what they're talking about.

If your content strategy requires fifty AI-generated pages a month, you don't have a content strategy. You have a keyword-filling strategy, and the era in which that worked is ending, not slowly, but with the kind of algorithmic violence that Google periodically visits upon strategies that game the system rather than serve the user. I have seen it happen before. I have seen it happen to clients who came to me after it happened. It is not fun. It is the kind of meeting where nobody makes eye contact and everyone's water glass stays full because nobody can swallow.

You might also be thinking: "OK, but AI as a first draft. AI as a starting point. You edit it, add your voice, add your experience, and it's still faster than writing from scratch."

I've tried this. Multiple times. And here's what I found: editing AI output into something that sounds like me takes longer than writing from scratch. Because the AI has already made structural choices - about what to cover, in what order, with what emphasis - and those choices are wrong, not factually wrong but strategically wrong, and unwinding them requires more effort than making them fresh. It's like trying to renovate a house by tearing down walls that shouldn't have been built in the first place. At some point you're not renovating. You're demolishing and rebuilding, and you'd have been better off starting with an empty lot.

This is not true for everyone. If your writing is functional - if the purpose is to convey information clearly and you don't care about voice or perspective - then AI as a first draft is genuinely useful. But for content that is supposed to represent your expertise, your personality, your point of view? The first draft is where those things are born. The first draft is where the voice emerges. If you skip it, you spend the rest of the process trying to inject life into something that was born dead, and it never quite works, and you end up with content that is 80% as good as what you'd have written yourself, which sounds efficient until you realize that the 20% you lost is the only 20% that mattered.
What I'm Actually Saying

I am not saying AI is bad. AI is a tool, and like every tool, it is good for some things and bad for others. I use AI in my work - for data analysis, for code, for research synthesis, for generating ideas I can push against, for doing things that are genuinely mechanical and benefit from automation. I'm not a Luddite. I'm not smashing the loom. I'm saying the loom is bad at making paintings.

I am not saying everyone should hand-write everything. For some content - product descriptions, support documentation, internal communications, routine reporting - AI is the right tool. It produces acceptable output at extraordinary speed, and "acceptable" is all those contexts require. I am talking specifically about content that exists to represent your expertise and differentiate your brand, the content that is supposed to make people choose you. That content needs to be the thing that only you can produce, and AI cannot produce the thing that only you can produce, because AI is not you.

I am saying that the rush to automate content creation is, in many cases, a rush to eliminate the only thing that was making the content valuable. It's like automating the flavor out of food because the automation makes the food cheaper. The food is still food. But nobody's driving across town for it anymore.

Every competitor in my space is publishing more than I am. Substantially more. Some of them are publishing five, ten, twenty times more. Their sites are filling up with well-structured, well-optimized, perfectly adequate content that covers every topic in our industry with the thoroughness of a textbook and the personality of a hallway. They are winning the volume game. They are losing the voice game. And in a market where every buyer has access to the same information - where the textbook is free and the hallway is infinite - voice is the only thing left that creates preference.

I write this blog by hand. Every word. It takes me between six and twelve hours per piece, depending on the topic and depending on how many times I delete the first three paragraphs and start over (the answer is usually twice, sometimes three times, once memorably seven times for a piece about attribution models that I still don't think I got right). It's slow. It's inefficient. It cannot scale. And it is the single best business development investment I make, because every piece sounds like me, and sounding like me is my competitive advantage, and my competitive advantage is not something I'm willing to hand to a machine, even a very good machine, even a machine that could probably write something 80% as good in 2% of the time.

The 80% is not the same as the 100%. And the 20% gap is where the trust lives.

She asked me why not. Here's why not. Because the last defensible advantage in a world of infinite content is being the person who actually wrote it, who actually thought it, who actually sat in the chair and did the work and came out the other side knowing something they didn't know when they sat down. That's not romanticism. That's strategy.

The counterfeit hangs in the gallery. It looks right. But the artist isn't in it, and if you look long enough, you can feel the absence. You can always feel the absence.