---
name: technical-seo-auditor
description: Use when auditing technical SEO signals for websites, individual pages, Screaming Frog exports, Google Search Console exports, URL Inspection evidence, and rendered HTML, covering JavaScript SEO, schema, crawlability, indexability, analytics, trust, quality, mobile, performance, and security.
---

# Technical SEO Auditor

## Overview

Use this skill to perform evidence-led technical SEO audits for websites, templates, individual URLs, migrations, traffic drops, indexation issues, JavaScript-heavy pages, and export-based investigations. The goal is not to produce a generic checklist; it is to connect observed technical signals to SEO risk, business impact, prioritized fixes, and validation steps.

## Inputs Supported

Accept any combination of:

- URL or page HTML.
- Rendered HTML from a browser, crawler, URL Inspection, or rendering tool.
- Screaming Frog exports such as Internal HTML, Response Codes, Canonicals, Directives, Hreflang, Structured Data, Page Titles, Meta Descriptions, H1/H2, Images, JavaScript, and Crawl Overview.
- Google Search Console exports, including Performance, Pages indexing, Sitemaps, Core Web Vitals, HTTPS, Enhancements, Manual Actions, Security Issues, and URL Inspection evidence.
- URL Inspection evidence, including Google-selected canonical, user-declared canonical, crawl allowed, indexing allowed, page fetch status, last crawl, discovered/referring page, rendered screenshot, and detected structured data.
- `robots.txt`, XML sitemaps, sitemap indexes, image/video/news sitemaps, and sitemap submission status.
- HAR files, browser screenshots, network traces, console errors, server logs, CDN logs, or access logs when available.

If evidence is incomplete, audit what is available and label missing evidence clearly. Do not pretend to have crawled, rendered, inspected, or measured anything that was not provided or executed.
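When a Screaming Frog export is provided, a first triage pass can be automated. The sketch below is a minimal example using only the standard library; the column headers ("Address", "Status Code", "Indexability", "Indexability Status", "Title 1") follow common Screaming Frog Internal HTML export naming, but they vary by version and configuration, so verify them against the actual export before relying on the output.

```python
import csv
from io import StringIO

def triage_internal_html(csv_text):
    """Flag rows from a Screaming Frog Internal HTML export that need review.

    Column names are assumptions based on typical Screaming Frog exports;
    adjust them to match the headers in the export you were given.
    """
    flagged = []
    for row in csv.DictReader(StringIO(csv_text)):
        reasons = []
        if row.get("Status Code", "") != "200":
            reasons.append(f"status {row.get('Status Code')}")
        if row.get("Indexability", "").lower() != "indexable":
            reasons.append(row.get("Indexability Status") or "non-indexable")
        if not row.get("Title 1", "").strip():
            reasons.append("missing title")
        if reasons:
            flagged.append((row.get("Address"), reasons))
    return flagged

# Illustrative rows, not real crawl data.
sample = """Address,Status Code,Indexability,Indexability Status,Title 1
https://example.com/,200,Indexable,,Home
https://example.com/old,301,Non-Indexable,Redirected,Old Page
https://example.com/blank,200,Indexable,,
"""
for url, reasons in triage_internal_html(sample):
    print(url, "->", "; ".join(reasons))
```

A triage like this only surfaces candidates for finding cards; each flagged row still needs the evidence rule applied before it becomes a confirmed finding.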

## Evidence Rule

Every finding must cite the observed signal and source. Separate confirmed findings from hypotheses.

Use this standard:

- **Confirmed finding:** backed by direct evidence, such as a crawl export row, rendered HTML, URL Inspection result, GSC report, log line, browser observation, or tool output.
- **Hypothesis:** plausible but not proven from available inputs. State what evidence would confirm or disprove it.
- **Not enough evidence:** when the user asks for a diagnosis but the available inputs do not support a reliable conclusion.

Do not diagnose from source HTML alone when rendering, JavaScript execution, hydration, GTM-injected tags, canonical changes, meta robots changes, or structured data changes could alter the page after load. Compare source HTML and rendered HTML whenever possible.
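A source-versus-rendered comparison can be sketched with the standard library alone, given saved copies of both documents. The helper names (`HeadSignals`, `diff_signals`) are hypothetical, and this only covers three head signals; a real comparison should also cover headings, links, and structured data.

```python
from html.parser import HTMLParser

class HeadSignals(HTMLParser):
    """Collect the title, canonical link, and meta robots from an HTML document."""
    def __init__(self):
        super().__init__()
        self.signals = {"title": None, "canonical": None, "robots": None}
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "title":
            self._in_title = True
        elif tag == "link" and a.get("rel", "").lower() == "canonical":
            self.signals["canonical"] = a.get("href")
        elif tag == "meta" and a.get("name", "").lower() == "robots":
            self.signals["robots"] = a.get("content")

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.signals["title"] = (self.signals["title"] or "") + data.strip()

def extract(html):
    parser = HeadSignals()
    parser.feed(html)
    return parser.signals

def diff_signals(source_html, rendered_html):
    """Return only the signals that differ between source and rendered HTML."""
    src, ren = extract(source_html), extract(rendered_html)
    return {k: (src[k], ren[k]) for k in src if src[k] != ren[k]}

# Illustrative snippets: the canonical and noindex appear only after rendering.
source = "<html><head><title>A</title></head></html>"
rendered = ('<html><head><title>A</title>'
            '<link rel="canonical" href="https://example.com/a">'
            '<meta name="robots" content="noindex"></head></html>')
print(diff_signals(source, rendered))
```

Any non-empty diff here is exactly the kind of observed signal a finding card should cite: the rendered document changed a directive the source did not declare.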

## Audit Areas

Audit exactly these ten areas unless the user explicitly narrows the scope.

1. **Analytics/tracking**
   Verify GA4, GTM, consent mode, conversion events, duplicate tags, cross-domain tracking, ecommerce events, search/referral attribution, internal traffic filters, and whether SEO landing pages can be measured reliably.

2. **Rendering/JS SEO**
   Compare source HTML to rendered HTML. Check whether titles, meta descriptions, canonicals, robots directives, headings, body content, links, pagination, schema, lazy-loaded content, and navigation are present after rendering and accessible without fragile user interaction.

3. **Crawlability/indexability**
   Check status codes, redirects, redirect chains, blocked resources, `robots.txt`, meta robots, X-Robots-Tag, canonical targets, noindex/nofollow, sitemap inclusion, orphan risk, crawl depth, parameter handling, faceted navigation, pagination, and Google URL Inspection evidence.

4. **On-page technical**
   Check title tags, meta descriptions, H1/H2 hierarchy, duplicate or missing metadata, URL structure, internal links, image alt text, image dimensions, broken resources, content duplication, pagination elements, anchor text, and template-level issues.

5. **Schema**
   Validate structured data type, syntax, required and recommended properties, nesting, duplicates, conflicts with visible content, entity consistency, eligibility for rich results, and differences between source and rendered schema.

6. **Mobile/viewport**
   Check responsive rendering, viewport tag, tap targets, font sizes, intrusive interstitials, mobile navigation, sticky UI overlap, layout shifts on mobile, mobile parity with desktop content, and mobile crawl/render evidence.

7. **Performance signals**
   Review Core Web Vitals only when field data or reliable lab data is available. Check LCP, INP, CLS, TTFB, render-blocking resources, image optimization, font loading, JavaScript cost, caching, CDN behavior, and template-level bottlenecks.

8. **Trust/quality**
   Check author/reviewer signals, organization information, contact and policy pages, citations, editorial transparency, thin or duplicated content, intrusive ads, affiliate disclosure, outdated content, reputation-sensitive claims, and alignment between page purpose and visible evidence.

9. **Security/foundations**
   Check HTTPS coverage, mixed content, canonical protocol consistency, www/non-www consistency, HSTS where relevant, security headers, broken TLS, soft 404s, server errors, CDN/proxy anomalies, staging leakage, and environment-specific blocks.

10. **International/site architecture**
   Check hreflang, language/region targeting, canonical-hreflang consistency, subdomain/subfolder structure, navigation taxonomy, hub/category architecture, breadcrumbs, crawl depth, internal PageRank flow, sitemap architecture, and duplicate regional or localized content.
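For the crawlability checks above, `robots.txt` rules can be evaluated offline against a URL sample with the standard library. This is a minimal sketch; `urllib.robotparser` follows the original robots exclusion convention and may not match every nuance of Google's parsing, and `robots.txt` controls crawling, not indexing, so pair it with meta robots and X-Robots-Tag evidence.

```python
from urllib.robotparser import RobotFileParser

def check_robots(robots_txt_lines, user_agent, urls):
    """Return crawl-allowed status per URL for the given robots.txt rules."""
    rp = RobotFileParser()
    rp.parse(robots_txt_lines)  # evaluate offline; no fetch is performed
    return {url: rp.can_fetch(user_agent, url) for url in urls}

# Illustrative rules and URLs, not taken from a real site.
robots = [
    "User-agent: *",
    "Disallow: /search",
    "Disallow: /cart",
]
result = check_robots(
    robots,
    "Googlebot",
    ["https://example.com/products/widget",
     "https://example.com/search?q=widget"],
)
print(result)
```

Run against the sitemap URL list, this quickly surfaces pages that are submitted for indexing but blocked from crawling, a conflict worth its own finding card.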

## Priority Rubric

Assign one priority and one effort to every finding.

### Priority

- **Critical:** Blocks crawling, rendering, indexing, measurement, revenue-critical conversions, or sitewide search visibility. Requires immediate action.
- **High:** Strong likelihood of suppressing rankings, indexation, rich result eligibility, conversions, or analytics reliability across important pages/templates.
- **Medium:** Meaningful SEO risk or missed opportunity, but limited in scope, partially mitigated, or not blocking core discovery/indexing.
- **Low:** Hygiene issue, edge case, documentation gap, minor enhancement, or item needing monitoring rather than immediate engineering work.

### Effort

- **S:** Likely under one hour or a small CMS/config change.
- **M:** Requires developer work, template changes, QA, or coordinated release.
- **L:** Requires architecture changes, migration planning, data modeling, large-scale content/template remediation, or multi-team rollout.

When business impact is unknown, state the assumption used for priority.
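The rubric above implies a natural ordering for the priority matrix. One way to sketch it, severity first and lower effort first within each tier so quick wins surface early; the finding titles are illustrative only:

```python
PRIORITY_ORDER = {"Critical": 0, "High": 1, "Medium": 2, "Low": 3}
EFFORT_ORDER = {"S": 0, "M": 1, "L": 2}

def sort_findings(findings):
    """Order findings by priority tier, then by ascending effort within a tier."""
    return sorted(
        findings,
        key=lambda f: (PRIORITY_ORDER[f["priority"]], EFFORT_ORDER[f["effort"]]),
    )

findings = [
    {"title": "Missing alt text on hero images", "priority": "Low", "effort": "S"},
    {"title": "Sitewide noindex in rendered HTML", "priority": "Critical", "effort": "M"},
    {"title": "Duplicate titles on category template", "priority": "High", "effort": "S"},
]
print([f["title"] for f in sort_findings(findings)])
```

Effort is a tiebreaker, not a substitute for judgment: a Critical/L finding still outranks a High/S one.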

## Output Format

Use this structure for the final audit.

### 1. Executive Summary

Include the highest-impact confirmed issues, what is likely affecting SEO performance, what should be fixed first, and what evidence was missing. Keep it short and business-focused.

### 2. Priority Matrix

Create a table with:

| Priority | Finding | Area | Impact | Effort | Evidence |
|---|---|---|---|---|---|

### 3. Finding Cards

For each finding, use:

- **Title:** concise issue name.
- **Status:** Confirmed finding, hypothesis, or not enough evidence.
- **Area:** one of the ten audit areas.
- **Priority:** Critical, High, Medium, or Low.
- **Effort:** S, M, or L.
- **Observed signal/source:** cite the exact evidence source.
- **Why it matters:** SEO and business impact.
- **Recommended fix:** specific remediation.
- **Validation method:** tool and expected pass condition.
- **Owner hint:** SEO, developer, analytics, content, product, platform, or security.

### 4. Fix-Validation-Tool Table

Create a table with:

| Fix | Validation tool | How to validate | Pass condition |
|---|---|---|---|

Use tools from the Validation Methods And Tools section where relevant.

### 5. Quick Wins Under 1 Hour

List only fixes that appear realistically achievable in under one hour. If none are confirmed, say so and explain why.

### 6. 30-Day Action Plan

Group work into Week 1, Week 2, Week 3, and Week 4. Sequence critical fixes, validation, monitoring, and follow-up crawls. Include dependencies and data needed.

### 7. Evidence Appendix

Summarize all inputs used, tool exports reviewed, dates if known, URLs or templates covered, and evidence gaps. Include hypotheses that need additional proof.

## Validation Methods And Tools

Use the most appropriate validation method for each finding:

- **Screaming Frog:** crawl status codes, directives, canonicals, titles, descriptions, headings, internal links, rendered HTML, JavaScript rendering, hreflang, structured data, crawl depth, duplicates, and sitemap comparison.
- **Google Search Console URL Inspection:** confirm crawl allowed, indexing allowed, Google-selected canonical, user-declared canonical, page fetch, last crawl, referring pages, rendered screenshot, indexed status, and enhancement detection.
- **Rich Results Test:** validate structured data eligibility, syntax, required properties, rendered structured data, and rich result-specific errors or warnings.
- **PageSpeed/Lighthouse:** validate lab performance, accessibility-adjacent technical issues, render-blocking resources, JavaScript cost, image optimization, and mobile rendering issues. Do not overclaim Core Web Vitals from lab data alone.
- **Browser rendered HTML:** inspect DOM after JavaScript execution, meta tags, canonical, robots, headings, links, schema, hydration changes, lazy-loaded content, console errors, network requests, and mobile viewport behavior.
- **GA4/GTM debug:** validate page_view events, conversions, ecommerce events, consent mode, duplicate firing, cross-domain tracking, source/medium integrity, and whether SEO pages are measurable.
- **Log files where available:** validate Googlebot access, crawl frequency, status codes served to bots, crawl budget waste, blocked resources, redirect loops, server errors, stale URLs, and bot-specific anomalies.
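When raw access logs are provided, the Googlebot checks above can start from a simple status-code tally. The sketch below assumes Combined Log Format; adjust the regex to the actual log shape, and remember that user agents can be spoofed, so reverse-DNS verification is needed before treating matches as proof of Googlebot behavior.

```python
import re
from collections import Counter

# Combined Log Format: host ident user [time] "request" status bytes "referer" "agent"
LOG_RE = re.compile(
    r'^(?P<host>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) [^"]*" (?P<status>\d{3}) \S+ '
    r'"[^"]*" "(?P<agent>[^"]*)"$'
)

def googlebot_status_counts(lines):
    """Count status codes served to requests claiming a Googlebot user agent.

    User-agent matching alone is not verification; confirm the source IPs
    with reverse DNS before citing this as Googlebot evidence.
    """
    counts = Counter()
    for line in lines:
        m = LOG_RE.match(line)
        if m and "Googlebot" in m.group("agent"):
            counts[m.group("status")] += 1
    return counts

# Illustrative log lines, not real traffic.
sample = [
    '66.249.66.1 - - [10/May/2025:10:00:00 +0000] "GET / HTTP/1.1" 200 5120 "-" '
    '"Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"',
    '66.249.66.1 - - [10/May/2025:10:00:05 +0000] "GET /old-page HTTP/1.1" 404 300 "-" '
    '"Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"',
    '203.0.113.9 - - [10/May/2025:10:00:07 +0000] "GET / HTTP/1.1" 200 5120 "-" "Mozilla/5.0"',
]
print(googlebot_status_counts(sample))
```

A sustained rise in 404s or 5xx served to verified Googlebot is the kind of log evidence that upgrades a crawlability hypothesis to a confirmed finding.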

## Adaptation Notes

- **Ecommerce:** prioritize faceted navigation, indexable category architecture, product availability, variant canonicals, Product/Breadcrumb/Review schema, pagination, internal search handling, out-of-stock behavior, ecommerce tracking, and duplicate product URLs.
- **SaaS:** prioritize feature/use-case page architecture, demo/trial conversion tracking, docs and app subdomain boundaries, JavaScript rendering, comparison pages, schema consistency, international expansion, and attribution quality.
- **News:** prioritize indexation speed, News sitemap, Article/NewsArticle schema, author/date transparency, evergreen updates, crawl frequency, paywall markup, internal linking from hubs, and template performance.
- **Real estate:** prioritize location taxonomy, listing indexation rules, duplicate listing URLs, map/search rendering, LocalBusiness/Residence/Place schema where appropriate, image performance, neighborhood pages, and lead tracking.
- **Enterprise:** prioritize governance, template-level defects, migration risk, staging leakage, international hreflang, analytics consistency, security foundations, log analysis, release validation, and cross-team ownership.
- **JS-heavy SPAs:** prioritize source-versus-rendered parity, server-side rendering or prerendering, internal links as crawlable anchors, route status codes, metadata hydration, lazy-loaded content, schema injection, and URL Inspection rendered evidence.

## Common Mistakes To Avoid

- Producing generic checklists without tying issues to observed evidence.
- Making unverified claims, especially about indexing, penalties, Core Web Vitals, analytics, or JavaScript rendering.
- Diagnosing source HTML only while ignoring rendered HTML and browser behavior.
- Ignoring business impact, page type, traffic value, conversions, and template scale.
- Overclaiming Core Web Vitals without field data from CrUX, GSC, or another reliable real-user dataset.
- Treating every warning as equally important.
- Recommending schema that is not visible, not eligible, or not aligned with page content.
- Failing to separate confirmed findings from hypotheses.
- Forgetting to define how each fix will be validated after implementation.

## Reusable Prompt Template

Copy and paste this prompt into Claude when you want a technical SEO audit.

```text
Act as a technical SEO auditor using the technical-seo-auditor skill.

Audit objective:
- Determine the highest-impact technical SEO issues and the fixes most likely to improve crawlability, indexability, rankings, measurement, and conversions.

Business context:
- Site type: {{site_type}}
- Primary market: {{primary_market}}
- Primary conversion or business goal: {{business_goal}}
- Important page types or templates: {{page_types}}
- Known concern or incident: {{known_concern}}

Evidence provided:
- URLs or representative pages: {{urls_or_pages}}
- Source HTML files or snippets: {{source_html}}
- Rendered HTML files or snippets: {{rendered_html}}
- Screaming Frog exports: {{screaming_frog_exports}}
- Google Search Console exports: {{gsc_exports}}
- URL Inspection evidence: {{url_inspection_evidence}}
- robots.txt and sitemaps: {{robots_and_sitemaps}}
- HAR files, screenshots, logs, or other evidence: {{other_evidence}}

Instructions:
1. Audit exactly these ten areas: analytics/tracking; rendering/JS SEO; crawlability/indexability; on-page technical; schema; mobile/viewport; performance signals; trust/quality; security/foundations; international/site architecture.
2. Every finding must cite the observed signal/source.
3. Separate confirmed findings from hypotheses and say what evidence would confirm each hypothesis.
4. Do not claim Core Web Vitals problems unless field data or reliable lab evidence is provided. If only lab data exists, label it as lab-only.
5. Do not diagnose source HTML only when rendered HTML could change the result. Compare source and rendered evidence when available.
6. Prioritize findings using Critical, High, Medium, or Low and effort S, M, or L.
7. Include validation methods using Screaming Frog, GSC URL Inspection, Rich Results Test, PageSpeed/Lighthouse, browser rendered HTML, GA4/GTM debug, and log files where relevant.
8. Adapt recommendations to the site type and business context.

Required output:
1. Executive summary.
2. Priority matrix.
3. Finding cards.
4. Fix-validation-tool table.
5. Quick wins under 1 hour.
6. 30-day action plan.
7. Evidence appendix.

If evidence is missing, continue with the audit but explicitly label evidence gaps. Do not invent crawl results, URL Inspection status, analytics status, log evidence, or performance data.
```

Template variables:

- `{{site_type}}`: ecommerce, SaaS, news, real estate, enterprise, JS-heavy SPA, local business, publisher, marketplace, or other.
- `{{primary_market}}`: country, language, region, or audience.
- `{{business_goal}}`: lead generation, sales, subscriptions, ad revenue, demo requests, bookings, calls, or other.
- `{{page_types}}`: examples include homepage, category, product, article, listing, location, comparison, docs, landing page, or app route.
- `{{known_concern}}`: traffic drop, migration, indexing issue, rendering concern, analytics mismatch, rich result loss, crawl spike, ranking decline, or none.
- `{{urls_or_pages}}`: URLs or a short description of the page set.
- `{{source_html}}`: source HTML evidence or file names.
- `{{rendered_html}}`: rendered HTML evidence or file names.
- `{{screaming_frog_exports}}`: export names and what they contain.
- `{{gsc_exports}}`: GSC report names and date ranges.
- `{{url_inspection_evidence}}`: URL Inspection screenshots, copied fields, or exported notes.
- `{{robots_and_sitemaps}}`: robots.txt, sitemap URLs, or pasted contents.
- `{{other_evidence}}`: HAR, screenshots, logs, PageSpeed/Lighthouse reports, GA4/GTM debug notes, or other sources.
