The SEO Audit Is a Sales Document (And Everyone Knows It)
I wrote my first audit to win a client. The client didn't need half of what I recommended.
It's 2009 and you're writing your first SEO audit. You're twenty-four and you don't know anything but you know how to make a spreadsheet look important. You export every error from Screaming Frog. You flag every missing meta description. You screenshot every 404. The document is 47 pages. You use a table of contents. You use color-coded severity ratings. You use the phrase "critical issue" fourteen times, because the word "critical" makes things sound like they require a professional, and you are trying very hard to be a professional.
The client is impressed. The client is a mid-size e-commerce company that sells industrial supplies - bolts, washers, pneumatic fittings, the kind of products that nobody thinks about until they need one and then they need it immediately. The VP of marketing holds your 47-page audit in both hands like it's a sacred text. He flips through it. He nods. He says "this is very thorough." He says it the way people say things when they're looking at something they don't fully understand but that appears to have cost a lot of effort to produce. He signs the contract. You are hired.
Twelve months later you realize that exactly three of the forty-seven pages contained recommendations that actually mattered. Three pages. Out of forty-seven. The other forty-four pages were (and I'm being generous to my twenty-four-year-old self here, which is more generosity than he deserves) a combination of technical trivia that had zero impact on rankings, cosmetic issues that made the site marginally "better" in a way that no human being would ever notice, and outright padding designed to make the document look like it justified the price tag.
The three pages that mattered? Page one: the site had a massive crawl budget problem because their faceted navigation was generating hundreds of thousands of duplicate URLs. Page two: their top ten product categories had no unique content whatsoever - just auto-generated lists of products with no context, no description, no reason for Google to rank them over anyone else. Page three: their internal linking structure was so flat that their most important pages were getting the same crawl priority as a size filter page for M6 hex bolts.
Those three things - crawl budget, category content, internal link architecture - accounted for approximately one hundred percent of the organic growth they saw over the following year. Everything else I recommended - the missing meta descriptions, the 404s, the image alt text, the duplicate title tags, the pages with thin content warnings in Search Console - fixing all of that was the SEO equivalent of rearranging deck chairs on a ship that needed its engine rebuilt.
But the 47-page audit is what got me hired. Not the three-page version. If I'd walked into that meeting with a three-page document that said "here are the three things wrong with your site, fix them and you'll grow," the VP would have looked at me, looked at my three pages, thought about the five other agencies who'd sent him documents measured in pounds, and gone with one of them.
I know this because I eventually tried it. Years later, once I knew what I was doing, I tried sending a focused audit to a prospect. Five pages. Just the stuff that mattered. Clear prioritization. Honest assessment. The prospect said it was "a little light" and hired someone who sent them eighty pages.
I learned two things from that experience. The first is that the length of an SEO audit has almost nothing to do with its usefulness and almost everything to do with its ability to generate revenue for the person who wrote it. The second is that the entire industry knows this and nobody talks about it, because talking about it would require acknowledging that most of what we produce as an industry is theater.
The Medical Analogy Nobody Wants to Hear
Here's the thing about SEO audits. They work exactly like medical testing, and the parallel is so precise that it's uncomfortable.
If you go to a doctor and say "test me for everything," the doctor will run every test available. Blood panels. Imaging. Biopsies. Endocrine screens. Cardiac stress tests. Allergy panels. They'll test your iron, your B12, your thyroid, your liver enzymes, your cortisol, your vitamin D, your lipid panel, your inflammatory markers, your glucose tolerance, and about four hundred other things you've never heard of.
And they will find something. They will always find something. Your B12 will be slightly low. Your vitamin D will be "suboptimal." One of your liver enzymes will be a point above normal range. Your cortisol might spike at an unusual hour. There will be an anomaly in one of three hundred data points, because human bodies are messy, variable systems that never perfectly match a reference range, and if you measure enough things, the statistical certainty of finding something "wrong" approaches one hundred percent.
The question isn't whether the tests found something. The question is whether what they found matters. Is your slightly elevated ALT a sign of liver disease, or did you have three beers last weekend? Is your low vitamin D causing symptoms, or is it a number on a chart that means nothing to your actual health? Is that tiny nodule on the imaging something to worry about, or is it the kind of incidental finding that shows up in thirty percent of healthy adults and causes nothing but anxiety?
A good doctor knows which findings matter and which don't. A good doctor looks at the full picture - your age, your symptoms, your history, your lifestyle - and says "this matters, this doesn't, ignore this, watch this, treat this." A good doctor triages. A good doctor understands that finding something is not the same as finding something important.
A bad doctor (or more accurately, a doctor operating inside a system that incentivizes throughput over judgment) orders every test, flags every anomaly, schedules follow-ups for every result outside the reference range, and generates a cascade of appointments and procedures that have more to do with liability avoidance and revenue generation than with making you healthier. The American medical system runs on this model and it is wildly profitable and it is also, by most outcome metrics, not particularly good at keeping people healthy.
The SEO audit industry is the same system with different vocabulary. Instead of blood panels, we run Screaming Frog crawls. Instead of liver enzymes, we flag response codes. Instead of "your cortisol is slightly elevated," we write "47 pages have duplicate meta descriptions." Instead of scheduling follow-up appointments, we scope retainers. And just like in medicine, the incentive structure rewards thoroughness over judgment, quantity over prioritization, and finding things over understanding whether those things matter.
The uncomfortable truth: If you run a comprehensive technical crawl on any website - any website, including Google's own properties - you will find hundreds of "issues." Missing alt text. Redirect chains. Orphan pages. Mixed content warnings. Duplicate canonicals. Pages with low word count. Response time anomalies. This is normal. This is how websites work. The existence of these issues does not mean the site has a problem. It means the site is a website.
The Audit Industrial Complex
I want to talk about the economics of SEO audits because the economics explain everything about why they are the way they are, and once you understand the economics, you'll never look at an audit the same way again.
Most SEO agencies use audits as loss leaders. The audit itself is either free, cheap, or built into the discovery phase. The purpose of the audit is not to diagnose the site. The purpose of the audit is to generate a list of problems so large and so alarming that the client feels compelled to hire the agency to fix them. The audit is a sales document. It has always been a sales document. Everyone in the industry knows it's a sales document, and we all pretend it's a diagnostic document, and this pretense is so universal and so longstanding that questioning it feels almost rude, like pointing out that the emperor's wardrobe is suspiciously minimalist.
The economics work like this: an agency spends five to ten hours running crawls, pulling data, and compiling screenshots into a branded PDF. They present this PDF to the prospect. The PDF contains, let's say, 200 "issues." Of those 200 issues, maybe 15 will have meaningful impact on organic performance. The other 185 are technically accurate (yes, that page really does have a missing H1) and functionally irrelevant (no, adding an H1 to your privacy policy is not going to affect your rankings).
But the prospect doesn't know this. The prospect sees 200 issues. The prospect panics. The prospect asks "how long will it take to fix all of this?" and the agency says "well, we'd recommend a six-month engagement" and the prospect signs because they're looking at 200 things that a professional just told them are wrong with their website and they don't have the expertise to know that 185 of those things are noise.
I am not exaggerating. I have reviewed hundreds of audits produced by other agencies - sometimes because a client showed me what they received before hiring me, sometimes because I was brought in to provide a second opinion, sometimes because the client hired the agency, the agency spent six months "fixing" everything in the audit, the organic traffic didn't move, and the client wanted to understand why.
The answer to "why" was almost always the same: because the agency spent six months fixing things that didn't matter while ignoring the two or three things that did. They fixed 47 missing meta descriptions instead of restructuring the category taxonomy. They cleaned up 23 redirect chains instead of building internal links to the pages that needed authority. They optimized image alt text on 300 product pages instead of addressing the fundamental crawl budget problem that prevented Google from even finding those pages.
And the agency didn't do this out of malice. They did it because the audit told them to, and the audit told them to because the audit was designed to find everything, not to find what matters. The audit was a comprehensive list of everything that deviates from a theoretical ideal, and the agency dutifully worked through the list from top to bottom, which is an efficient way to burn through a retainer and an inefficient way to improve organic performance.
What Actually Matters in an Audit
I'm going to tell you what I actually look at when I audit a site, which is going to sound suspiciously simple compared to the 200-item checklists you've seen, because it is simple. The complexity of an SEO audit is not correlated with its value. In my experience (and I've done this for over twenty years now, which means I've done it wrong for about fifteen of those years and gradually less wrong for the remaining five), the most valuable audits are the simplest ones.
Here's what I look at. There are five categories and they are in priority order.
First: Can Google crawl and index the important pages? This is the plumbing check. If Google can't find your pages, nothing else matters. I'm looking at crawl budget allocation (is Google spending its time on your money pages or on your faceted navigation garbage?), indexation rates (how many of your important pages are actually in the index?), and crawl barriers (are you accidentally blocking things with robots.txt, noindex tags, or canonical mistakes?). This is almost always where the biggest wins are, and it's almost always the section that gets buried on page 38 of a traditional audit because it requires actual analysis rather than screenshot-and-paste.
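The crawl-barrier part of that check is mechanical enough to script. Here's a minimal sketch using Python's standard-library robotparser - the rules and paths are invented for illustration, and a real check would also inspect noindex tags and canonicals in the rendered HTML, which this doesn't touch:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt rules: block faceted filter URLs and site search.
robots_txt = (
    "User-agent: *\n"
    "Disallow: /filter/\n"
    "Disallow: /search\n"
).splitlines()

rp = RobotFileParser()
rp.parse(robots_txt)

# The pages that actually make money, plus one facet URL for contrast.
pages = ["/bolts/m6-hex", "/pneumatic-fittings", "/filter/size-m6"]
for path in pages:
    ok = rp.can_fetch("Googlebot", f"https://example.com{path}")
    print(f"{path}: {'crawlable' if ok else 'BLOCKED'}")
```

The point isn't the script - it's that you run it against the money pages specifically, not against every URL the crawler can find.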
Second: Is the site architecture working for or against you? How are pages organized? How deep is the important content? How does link equity flow through the site? Is the internal linking structure directing authority toward the pages that generate revenue, or is it distributing authority democratically across every page including the ones that serve no business purpose? Site architecture is a ranking problem, not a navigation problem, and most audits treat it as neither - they just check whether the navigation menu works and move on.
Third: Do the money pages have what they need to rank? I don't mean "are the title tags optimized." I mean: does the page that's supposed to rank for your most important keyword actually deserve to rank? Is the content better than what's currently ranking? Is the search intent aligned? Is the E-E-A-T sufficient for the topic? This requires actually looking at the SERPs and comparing the client's pages to the competition, which requires judgment, which requires expertise, which is why most audits skip it in favor of automated checks that don't require any of those things.
Fourth: Is the technical foundation solid enough? Note the "enough." This is not "is the technical implementation perfect." Nothing is perfect. The question is whether the technical issues are severe enough to be limiting performance. Page speed that's genuinely terrible (not "could be 0.3 seconds faster" but "takes eight seconds to load on mobile") matters. Core Web Vitals that fail badly matter. HTTPS issues matter. Structured data that's fundamentally broken matters. But most technical issues that show up in crawl tools are in the "could be slightly better" category, and slightly better is not worth the opportunity cost of fixing when there are larger problems to solve.
Fifth: Is there anything that's going to hurt you in the near future? Manual actions. Toxic link profiles that might trigger a penalty. Content quality issues that might not be hurting now but will hurt when the next Helpful Content update rolls through. Security vulnerabilities that could lead to a hack that could lead to a deindexing event. This is the defensive check - not "how do we grow" but "how do we not lose what we have."
That's it. Five categories. Each one gets as much depth as it needs and no more. For some sites, category one reveals a crawl budget disaster that explains everything, and the audit is essentially "fix this one thing." For other sites, the crawling and indexing is fine but the content is thin and the architecture is flat, and the audit focuses on those two areas. The audit fits the site. It doesn't fit a template.
The honest prioritization framework: For each finding, I ask three questions. (1) If we fix this, how much additional revenue can we reasonably expect? (2) How much effort and time does the fix require? (3) What's the confidence level that the fix will actually produce the expected result? Then I rank everything by expected revenue impact per unit of effort, adjusted for confidence. The things at the top of that list are the audit. Everything else is a footnote.
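That framework is simple enough to express as a scoring function. A sketch with invented findings and invented numbers - the estimates themselves are the hard part, and they come from judgment, not from code:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    name: str
    expected_revenue: float  # Q1: estimated revenue impact if fixed (illustrative)
    effort_hours: float      # Q2: estimated effort to implement
    confidence: float        # Q3: 0..1 odds the fix actually produces the impact

    @property
    def score(self) -> float:
        # Expected revenue per hour of effort, discounted by confidence.
        return self.expected_revenue * self.confidence / self.effort_hours

findings = [
    Finding("Fix all 404s from old blog posts", 500, 40, 0.2),
    Finding("Restructure category taxonomy", 250_000, 120, 0.6),
    Finding("Add unique content to top 10 categories", 180_000, 80, 0.7),
    Finding("Clean up duplicate meta descriptions", 1_000, 30, 0.3),
]

# The things at the top of this list are the audit; everything else is a footnote.
for f in sorted(findings, key=lambda f: f.score, reverse=True):
    print(f"{f.name}: {f.score:,.0f} expected $/hour")
```

Notice what the made-up numbers do: the 404 cleanup, the item every tool screams about, lands dead last once you divide by effort and multiply by honest confidence.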
Why Fixing Every 404 Is a Waste of Time
I need to spend a moment on 404 errors specifically because they are the single most overrepresented item in SEO audits and the single most common example of confusing "technically imperfect" with "actually harmful."
Every website has 404s. Every single one. Google has 404s. Amazon has 404s. Wikipedia has 404s. A 404 is what happens when someone requests a URL that doesn't exist, and on any site of meaningful size, there will always be URLs that don't exist. Products get discontinued. Pages get reorganized. Someone on a random forum linked to a URL with a typo in 2014 and that typo has been generating a 404 in your crawl logs ever since. This is normal. This is fine. This is how the internet works.
Google's own documentation says, and I'm paraphrasing only slightly, that 404s are a normal part of the web and do not inherently hurt your site's ranking. John Mueller has said this approximately four hundred times on Twitter, in YouTube videos, in office hours, in blog posts, and in what I imagine are increasingly exasperated internal memos. The message has not gotten through. SEO audits continue to flag every 404 as an issue because crawl tools flag them and audits are built on crawl tool output and nobody stops to ask whether the thing the tool flagged actually matters.
Now. Are there 404s that matter? Yes. If a page that was getting significant traffic and had valuable backlinks returns a 404, that's a problem. You're losing traffic and wasting link equity. That page should be redirected to the most relevant alternative. If a key landing page is 404ing because someone deleted it by accident, that's a problem. Fix it.
But the list of 847 404s that Screaming Frog found? The ones that are mostly old blog posts from 2016 that never got traffic, discontinued product pages that have been off the site for three years, and URLs that were never real pages in the first place? Fixing those will accomplish exactly nothing. You will spend hours setting up redirects. Your traffic will not change. Your rankings will not improve. You will have made your .htaccess file longer and your afternoon shorter and that is all.
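If you do want to triage a 404 export instead of blindly redirecting all 847, the filter is two questions: does the URL still get traffic, and does anything real link to it? A sketch assuming hypothetical field names from an analytics and backlink export - the thresholds are illustrative, not standards:

```python
# Triage 404s: redirect only URLs with real traffic or real backlinks.
# Field names and thresholds are invented assumptions for this sketch.
def triage_404s(errors, min_sessions=10, min_ref_domains=1):
    redirect, ignore = [], []
    for e in errors:
        if (e["sessions_last_12mo"] >= min_sessions
                or e["referring_domains"] >= min_ref_domains):
            redirect.append(e["url"])  # worth a 301 to the closest alternative
        else:
            ignore.append(e["url"])    # normal web entropy; leave it alone
    return redirect, ignore

errors = [
    {"url": "/blog/2016-post", "sessions_last_12mo": 0, "referring_domains": 0},
    {"url": "/products/m6-hex-bolt-old", "sessions_last_12mo": 340, "referring_domains": 4},
    {"url": "/typo-from-a-forum-in-2014", "sessions_last_12mo": 1, "referring_domains": 0},
]
redirect, ignore = triage_404s(errors)
print(f"Redirect {len(redirect)}, ignore {len(ignore)}")
```

Run something like this against the 847 and the redirect list usually shrinks to a dozen URLs - an afternoon of work instead of a billable month.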
And yet I have seen agencies spend entire months on 404 cleanup projects. I have seen clients pay thousands of dollars to redirect URLs that receive zero visits per year. I have seen scopes of work where "resolve all 404 errors" is a line item with a budget attached, and the budget is not small, and the expected outcome is not defined, because if you defined the expected outcome honestly it would be "none."
The 404 fixation is a symptom of the larger problem with SEO audits: the assumption that every deviation from a theoretical ideal is a problem to be solved. It's the medical analogy again. Yes, your white blood cell count is one point above normal. No, you do not have leukemia. Please stop Googling your symptoms and go live your life.
The Audit as Trust Document
Here's where I'm going to say something that might sound contradictory to everything I've said so far, so stay with me.
The SEO audit, despite everything I've just described - the padding, the theater, the misaligned incentives, the 404 hysteria - serves a real and important function. It's just not the function that most people think it is.
The real purpose of an SEO audit is not to diagnose the website. The real purpose is to demonstrate, to the person who will be paying you money, that you understand their business, their site, their competitive landscape, and the specific constraints and opportunities they face. The audit is a trust-building document. Its purpose is to show the client that you've done the work, that you've looked at their specific situation (not just run a generic crawl), and that your recommendations come from understanding, not from a template.
This is why the three-page audit I tried didn't work. Not because it was wrong - it was more right than the 47-page version had ever been. It didn't work because three pages didn't give the client enough evidence that I understood their world. They couldn't see the work behind the conclusions. They couldn't trace the logic. They had to trust my judgment without seeing the judgment process, and trust is exactly what an audit is supposed to build.
The solution isn't to go back to padding. The solution is to write an audit that shows your thinking. Not "here are 200 things wrong with your site" but "here's what I looked at, here's what I found, here's why most of it doesn't matter, and here are the three things that do." Show the client the entire landscape and then show them which mountains are worth climbing. The comprehensiveness should be in the analysis, not in the recommendations.
When I write an audit now (and I've gone through several iterations of my audit format over the past decade, each one slightly less embarrassing than the last), it has a specific structure that I've arrived at through extensive trial and error, where the errors were expensive and the trials were humbling.
The first section is the executive summary. One page. Three to five key findings. Expected revenue impact of fixing them. This is for the person who makes decisions. They will read this page and maybe nothing else, and that's fine, because this page has everything they need.
The second section is the competitive landscape. Who are you competing against? What are they doing that you're not? Where are the gaps? This section is less about the client's site and more about the client's market, which is what most audits completely ignore. An audit that looks only at the client's site without looking at the competitive context is like a coach who studies their own team's plays but never watches game film of the opponent. Understanding what the competition is doing is half the battle.
The third section is the analysis - the detailed findings for each of the five categories I described earlier. This is where the depth lives. Each finding includes the evidence (what I found), the interpretation (why it matters or doesn't), the recommendation (what to do about it), and the expected impact (what will change if you do it). No finding exists without an expected impact, because a finding without an expected impact is trivia, not strategy.
The fourth section is the roadmap. What do we fix first? What do we fix second? What do we defer? What do we ignore entirely? This section is a prioritized action plan organized by expected impact, not by severity. A "critical" 404 error that affects a page with zero traffic is less important than a "moderate" internal linking opportunity that could drive thousands of additional visits per month, and the roadmap reflects that.
There is no fifth section. There is no appendix of every error the crawl tool found. There is no fifty-page spreadsheet of missing alt text. Those things exist in my working files, and if the client wants to see them, I'll share them, but they don't go in the audit because they're not part of the audit. They're ingredients, not the meal.
How to Write an Audit That's Actually Useful
I want to get specific, because I've been talking about philosophy and what I really owe you is mechanics. So here's the process, step by step, as I actually do it in practice, with all the messiness and iteration that implies.
Step one: I talk to the client. Before I open a single tool. Before I run a single crawl. I ask questions. What are your top revenue-generating pages? Which products or services have the highest margins? Where does your traffic come from now? What happened to your traffic in the last twelve months? Did anything change - a redesign, a migration, a CMS switch, a new marketing agency, a shift in business model? What are your competitors doing that concerns you? What have you tried that didn't work?
This conversation is more valuable than any crawl data, because it tells me what matters to the business. An SEO audit that isn't oriented around business goals is a technical exercise, and technical exercises don't generate revenue. They generate retainers, which is different.
Step two: I look at the SERPs. Not the site. The SERPs. I take the client's top twenty target keywords and I search each one manually. I look at who's ranking. I look at what kind of content is ranking. I look at the SERP features present. I look at the intent signals. I do this in an incognito window at different times of day (and yes, sometimes at 2 AM on a Thursday) because the SERPs tell you more about a client's organic opportunity than any tool ever will.
Step three: I crawl the site. Now I open Screaming Frog. Now I pull the data. But I pull it with context. I know what matters because I've already talked to the client and looked at the SERPs. I'm not running a crawl to find everything - I'm running a crawl to investigate specific hypotheses. "I think the faceted navigation is causing a crawl budget problem" is a hypothesis I can test with crawl data. "Let's see what's wrong" is not a hypothesis. It's a fishing expedition, and fishing expeditions catch a lot of things, most of which you'll throw back.
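The faceted-navigation hypothesis, for instance, can be tested directly against a crawl export: normalize each URL and count how many distinct crawled URLs collapse to the same page. A sketch over an invented URL list:

```python
from collections import Counter
from urllib.parse import urlparse, parse_qs

def url_signature(url: str):
    """Normalize to (path, sorted params) so reordered facet URLs collapse."""
    parts = urlparse(url)
    params = tuple(sorted((k, tuple(v)) for k, v in parse_qs(parts.query).items()))
    return (parts.path, params)

crawled = [
    "https://example.com/bolts?size=m6&finish=zinc",
    "https://example.com/bolts?finish=zinc&size=m6",  # same page, params reordered
    "https://example.com/bolts?size=m8",
    "https://example.com/bolts",
    "https://example.com/washers?material=steel",
]

groups = Counter(url_signature(u) for u in crawled)
# Any group with more than one URL is crawl budget spent on duplicates.
duplicates = {sig: n for sig, n in groups.items() if n > 1}
print(f"{len(duplicates)} duplicate group(s) out of {len(groups)}")
```

On a real export this runs over hundreds of thousands of URLs, and the ratio of duplicate groups to unique pages either confirms the hypothesis or kills it in about ten seconds.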
Step four: I look at Search Console and analytics. Traffic trends. Impression and click data. Query performance. Index coverage. Core Web Vitals. Manual actions. This is where the numbers live, and the numbers tell stories that crawl data can't. A page might look perfect in Screaming Frog and still be losing traffic because the search intent shifted, or because a competitor published something better, or because Google's algorithm changed what it rewards for that query. The analytics don't always tell the truth, but they tell a different kind of truth than the tools do.
Step five: I synthesize. This is the step that can't be automated, and it's the step that separates a useful audit from a generated report. I take everything I've gathered - the conversation, the SERP analysis, the crawl data, the analytics - and I ask: what are the three to five things that, if fixed, would have the biggest impact on this site's organic revenue? Not organic traffic. Organic revenue. Because traffic that doesn't convert is a vanity metric, and an audit that optimizes for vanity metrics is a vanity document.
Step six: I write the audit. Following the structure I described above. Executive summary, competitive landscape, detailed analysis, prioritized roadmap. Every recommendation ties back to expected revenue impact. Every finding explains why it matters, not just that it exists. The document is usually fifteen to twenty pages, which is long enough to demonstrate thoroughness and short enough that a human being might actually read it.
The Conversation We Should Be Having
There's a broader point here about the SEO industry that I keep circling back to, and I might as well say it directly because I've been oblique about it for long enough.
The SEO audit, as it exists in most agencies, is not designed to help the client. It's designed to help the agency. It's designed to generate a scope of work. It's designed to justify a retainer. It's designed to create the impression of complexity and urgency so that the client feels they need ongoing professional help. This is not a conspiracy. It's an incentive structure. The agency makes money by finding things to fix. The more things they find, the more money they make. The audit is the mechanism by which things are found.
I'm not saying agencies are dishonest. Most SEO professionals genuinely believe that fixing every issue in the audit will help the site. They've been trained to believe this. The tools reinforce it - every crawl tool presents its findings as "issues" or "errors" or "warnings," using language that implies every finding is a problem to be solved. The certifications reinforce it - every SEO certification test asks about best practices as if best practices are binary, as if there's a list of things that should always be done and the auditor's job is to check whether they've been done.
But the reality is more nuanced than that, and the nuance matters. A missing meta description on a page that gets zero impressions is not an issue. A redirect chain that adds 50 milliseconds to a page that loads in 1.2 seconds is not an issue. A 404 on a URL that nobody visits and nobody links to is not an issue. These things are technically imperfect. They are not problems. The difference between "technically imperfect" and "actually a problem" is the difference between a good audit and a 47-page sales document.
The conversation we should be having, as an industry, is about prioritization. Not "what's wrong with the site" but "what's wrong with the site that matters." Not "here's everything we found" but "here's what we found that will move the needle, and here's everything else we found that won't." The honest audit says "you have 200 issues but only five of them matter, and here's why the other 195 don't." The dishonest audit (or, more charitably, the unthinking audit) says "you have 200 issues, let's fix them all, that'll be $8,000 a month for six months."
The clients who understand this - the ones who've been burned by the comprehensive audit that led to comprehensive billing that led to comprehensively unchanged rankings - are the best clients. They don't want thoroughness. They want judgment. They want someone who can look at their site and say, with confidence born from experience, "this is what matters and this is what doesn't." They want the three-page audit. They just want to understand the work behind it.
The best SEO audit I ever wrote was six pages long. It identified two problems. The client's organic revenue doubled in eight months. The worst SEO audit I ever wrote was forty-seven pages long. It identified two hundred problems. The client's organic revenue didn't change at all.
The difference wasn't the length. The difference wasn't even the findings. The difference was judgment - knowing which findings mattered, having the confidence to say the rest didn't, and having the honesty to write a document that served the client instead of serving my pipeline.
I'm not going to pretend I always get this right. I don't. Sometimes I flag something as important that turns out not to be. Sometimes I dismiss something as trivial that turns out to matter. Judgment is probabilistic, not perfect. But the goal is right, and the goal is this: an audit should make the client's site better, not make the agency richer. When those two outcomes align, you have a good engagement. When they don't, you have a sales document with a table of contents and color-coded severity ratings, and you have a client who's about to spend six months fixing things that don't matter while the real problems quietly eat their traffic.
I know this because I wrote that audit. In 2009. Forty-seven pages. The client was impressed. I was hired. And then I spent a year learning, slowly and expensively, that being hired and being right are not the same thing.
They never were.