Development · 13 min read
Technical SEO Is Not Optional: The Silent Issues Killing Your Website's Search Performance

Crawl errors, duplicate content, broken redirects, and CMS technical debt don't announce themselves. They quietly drain your search visibility until rankings drop and nobody knows why.

Excelle Escalada
Digital Experience Architect · May 19, 2025

The traffic that disappeared

A municipal communications team noticed that the landing page for their most popular service had dropped from the first page of Google results to somewhere on page three. Nothing had been changed on the page. The content was accurate. The page looked fine in the browser. Analytics showed a 40% drop in organic traffic over six months.

The culprit, found after an audit: a developer had migrated the site eight months earlier and set up a redirect chain. The original URL redirected to an intermediate URL, which redirected to the current URL. Three hops. Google was still crawling the old chain instead of indexing the current page cleanly. No error message had surfaced anywhere. No alarm had gone off. The issue had simply been sitting there, silently costing them search visibility, since the day the migration finished.

This story is not unusual. Technical SEO problems are not like broken links or missing images; you notice those. Technical SEO problems hide in the infrastructure of your site. They show up in log files, crawl reports, and ranking graphs, not in the visual experience. And because they're invisible, they stay unfixed for months or years while the cost compounds.

This guide covers the most common technical SEO issues I find in audits, explains exactly why each one hurts your search performance, and gives you a prioritized checklist you can work through with your team.

What technical SEO actually covers

Technical SEO is everything that affects how search engines discover, crawl, interpret, and index your site. It is distinct from content SEO (what your pages say) and off-page SEO (who links to you). All three matter, but technical SEO is foundational. You can write the best content in your industry and earn excellent backlinks. If Google can't crawl your pages reliably, none of it matters.

The six categories that generate the most technical SEO issues in practice:

  • Crawl errors and accessibility
  • Duplicate content
  • Missing or broken meta
  • Core Web Vitals and load performance
  • Redirect problems
  • CMS-related technical debt

We'll go through each one, explain the mechanism, and then close with a full prioritized audit checklist.


    1. Crawl errors and accessibility

    Search engines discover your pages by following links, the same way a human would navigate your site. A crawler visits a page, follows every link it finds, and adds new pages to a queue to visit later. Anything that interrupts that process costs you indexing.

    Common crawl blockers

    robots.txt misconfiguration: The robots.txt file tells crawlers which parts of your site they're allowed to visit. A common mistake is blocking everything during a site build, launching the site, and forgetting to update the file. Another: blocking a specific directory (like /wp-admin/) and accidentally using a pattern that also blocks a public directory you need indexed. Check your robots.txt file manually and review the robots.txt report in Google Search Console (which replaced the standalone robots.txt tester in 2023).
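As a minimal illustration (the domain and paths are placeholders, and the directives shown use Google-style syntax), here is the staging-lockdown mistake next to what a typical production file looks like:

```
# The leftover staging file that must never ship — this blocks the entire site:
User-agent: *
Disallow: /

# A typical production file instead: block only private paths,
# and point crawlers at the sitemap.
User-agent: *
Disallow: /wp-admin/
Allow: /wp-admin/admin-ajax.php
Sitemap: https://example.com/sitemap.xml
```

The two files differ by a single line, which is exactly why this mistake survives launch day unnoticed.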

    Noindex tags on public pages: The <meta name="robots" content="noindex"> tag tells search engines not to index a page. This is intentional and correct for things like login pages, thank-you pages, and internal search results. It is unintentional and catastrophic when it appears on your homepage, service pages, or any content you want to rank. This happens more often than you'd expect, usually as a misconfigured default in a CMS plugin or staging environment configuration that followed the site into production.

    Orphaned pages: Pages that exist but have no internal links pointing to them. Crawlers follow links. A page with no inbound links may never be discovered, or may be crawled so infrequently that new content takes months to index. Every important page on your site should be reachable from at least one other indexed page within two or three clicks of the homepage.
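Finding orphans is a graph-reachability problem: walk the internal link graph from the homepage and flag anything a crawler could never reach. A minimal sketch, assuming you've already built a link map from a site crawl (the `link_graph` structure and URLs here are illustrative):

```python
from collections import deque

def find_orphans(link_graph, start="/"):
    """Return pages unreachable from the homepage via internal links.

    link_graph maps each known URL to the internal URLs it links to.
    (Illustrative structure -- a real audit builds this from a crawl.)
    """
    reachable = set()
    queue = deque([start])
    while queue:
        page = queue.popleft()
        if page in reachable:
            continue
        reachable.add(page)
        queue.extend(link_graph.get(page, []))
    # Every known page that the walk never reached is an orphan.
    return set(link_graph) - reachable

# /old-campaign links out but nothing links in, so it is orphaned:
graph = {
    "/": ["/services", "/contact"],
    "/services": ["/services/permits"],
    "/services/permits": [],
    "/contact": [],
    "/old-campaign": ["/"],
}
```

Crawl tools like Screaming Frog report this out of the box; the sketch just shows why the check is cheap once you have the link data.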

    XML sitemap problems: Your sitemap should list every URL you want indexed, nothing you don't, and all URLs should return a 200 status code. Common failures: the sitemap includes redirected URLs (crawlers index the destination, not the source), it includes noindexed pages (contradictory signal), or it hasn't been updated since the last site migration and still references old URL patterns.
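The sitemap checks above are mechanical enough to script. A sketch using only the standard library, assuming you already have crawl results to check against (the `status_of` map, `noindexed` set, and URLs are hypothetical):

```python
import xml.etree.ElementTree as ET

NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"

def sitemap_issues(sitemap_xml, status_of, noindexed):
    """Flag sitemap URLs that return non-200 statuses or carry noindex.

    status_of maps URL -> HTTP status from a crawl; noindexed is the
    set of URLs carrying a noindex robots tag.
    """
    issues = []
    for loc in ET.fromstring(sitemap_xml).iter(f"{NS}loc"):
        url = loc.text.strip()
        status = status_of.get(url)
        if status != 200:
            issues.append((url, f"returns {status}, not 200"))
        elif url in noindexed:
            issues.append((url, "listed in sitemap but noindexed"))
    return issues

sitemap = """<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://example.com/services</loc></url>
  <url><loc>https://example.com/old-page</loc></url>
  <url><loc>https://example.com/thank-you</loc></url>
</urlset>"""

statuses = {
    "https://example.com/services": 200,
    "https://example.com/old-page": 301,   # redirected URL in the sitemap
    "https://example.com/thank-you": 200,  # noindexed, so it shouldn't be listed
}
```

Both failure modes from the paragraph above — redirected URLs and contradictory noindex signals — fall out of the same loop.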


    2. Duplicate content

    Search engines index content at a specific URL. When the same content appears at more than one URL, crawlers have to guess which version to index and rank. They usually guess correctly, but the process splits the "ranking signals" (links, engagement data) between the duplicates, weakening both instead of concentrating all the value on one canonical version.

    The most common duplicate content sources

    HTTP vs. HTTPS, www vs. non-www: If your site resolves on all four variants (http://, https://, http://www., https://www.) and all four serve the same content, you have four duplicate versions of your homepage (and every other page). All four variants should redirect to a single canonical version. Check this by typing all four into your browser and confirming they all land on the same final URL with a 301 redirect, not just different URLs serving the same content.

    Trailing slash inconsistency: /services and /services/ are different URLs to a search engine. If both resolve and serve the same content, you have duplicates. Pick one pattern and 301 redirect the other consistently across your entire site.
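Whichever conventions you pick, the normalization itself is a few lines. A sketch that collapses the host variants and trailing-slash forms to one canonical URL (the choice of https, non-www, and no trailing slash is illustrative; what matters is enforcing one pattern everywhere):

```python
from urllib.parse import urlsplit, urlunsplit

def canonicalize(url):
    """Normalize to one canonical form: https, non-www, no trailing slash."""
    scheme, host, path, query, fragment = urlsplit(url)
    host = host.lower()
    if host.startswith("www."):
        host = host[4:]
    if path != "/" and path.endswith("/"):
        path = path.rstrip("/")
    return urlunsplit(("https", host, path or "/", query, fragment))

# All host and slash variants collapse to a single URL:
variants = [
    "http://example.com/services/",
    "https://example.com/services",
    "http://www.example.com/services/",
    "https://www.example.com/services",
]
```

In production this logic lives in your web server or CDN as a 301 redirect rule, not in application code; the sketch just makes the mapping explicit.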

    CMS pagination and parameter URLs: Category pages, search results, and filtered views often generate URL parameters (?page=2, ?sort=date, ?category=news). Without proper handling, these create hundreds of near-duplicate URLs that crawlers waste time on instead of indexing your real content. Handle this with canonical tags or robots.txt disallow rules depending on the situation; Google retired Search Console's URL parameters tool in 2022, so canonicals and crawl rules are now the main levers.

    Printer-friendly pages and session IDs: Older CMS platforms and e-commerce systems sometimes generate separate printer-friendly URLs or append session IDs to URLs. Both create duplicate content at scale. Both should be handled with canonical tags pointing to the primary URL.

    The canonical tag: The <link rel="canonical" href="[primary URL]"> tag in your page's <head> is the primary tool for signalling to search engines which version of a page is the one to index. Every page on your site should have a self-referencing canonical tag. Any page that is intentionally duplicated (a syndicated article, a paginated view) should have a canonical pointing to the primary version.


    3. Missing or broken meta

    Meta tags communicate page context to search engines and to users in search results. Missing or broken meta doesn't usually stop a page from being indexed, but it consistently reduces click-through rates and makes it harder for search engines to understand what a page is about.

    Title tags

    The <title> tag is the most important on-page SEO element. It appears as the clickable blue headline in search results. Problems to look for:

  • Missing title tags: Some CMS setups generate pages without a title tag if the page title field is left blank. Crawl your site and flag any page returning an empty or missing title.
  • Duplicate title tags: Every page should have a unique title. When multiple pages share the same title ("Home" or "Contact Us" without the organization name), search engines struggle to differentiate them and users can't tell from search results which page is which.
  • Too long or too short: Google displays roughly 50-60 characters of a title tag before truncating. Titles under 30 characters often miss the opportunity to communicate the page's purpose. Titles over 60 characters get cut off in results, usually at an awkward point.
  • Front-loading keywords: The words at the beginning of a title carry more weight. "Register for recreation programs | City of Oakville" performs better than "City of Oakville | Recreation Program Registration" because the intent-matching terms are first.

    Meta descriptions

    The meta description is the grey text below the title in search results. Google does not use it as a ranking signal, but it directly affects click-through rate, which does influence rankings indirectly. Problems to look for:

  • Missing meta descriptions: Google will generate a snippet from the page content, but it will rarely be as good as a written description. Any page you care about ranking should have a hand-written meta description.
  • Duplicate meta descriptions: The same problem as duplicate titles. Every page, different description.
  • Too long: Descriptions are truncated at roughly 155-160 characters in desktop results and about 120 characters on mobile. Write for the shorter limit.
  • No call to action: A meta description is an ad for your page. It should tell the user what they'll find and why they should click. "Learn how to register for parks and recreation programs online, including swimming, skating, and summer camps" outperforms "This page contains information about recreation programs."

    Heading tags (H1)

    Every page should have exactly one <h1> tag. It signals the primary topic of the page to both search engines and screen readers. Common failures: pages with no H1 (the CMS uses a styled <div> instead), pages with multiple H1s (a component library that applies H1 styling to section headings), and H1 text that doesn't match the title tag topic (a mismatch between what the title promises and what the page delivers).
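All three meta checks — title length, description presence and length, exactly one H1 — can run in one pass over a page's HTML. A sketch using the standard-library parser (the thresholds mirror the guidance above; the sample page is hypothetical):

```python
from html.parser import HTMLParser

class MetaAudit(HTMLParser):
    """Collect the title, meta description, and H1 count from one page."""
    def __init__(self):
        super().__init__()
        self.title = ""
        self.description = None
        self.h1_count = 0
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "title":
            self._in_title = True
        elif tag == "h1":
            self.h1_count += 1
        elif tag == "meta" and attrs.get("name") == "description":
            self.description = attrs.get("content", "")

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.title += data

def audit(html):
    p = MetaAudit()
    p.feed(html)
    problems = []
    if not (30 <= len(p.title) <= 60):
        problems.append(f"title is {len(p.title)} chars (aim for 30-60)")
    if p.description is None:
        problems.append("missing meta description")
    elif len(p.description) > 155:
        problems.append("meta description over 155 chars")
    if p.h1_count != 1:
        problems.append(f"{p.h1_count} H1 tags (should be exactly 1)")
    return problems

# A page exhibiting three of the failures described above:
page = """<html><head><title>Contact</title></head>
<body><h1>Contact</h1><h1>Office hours</h1></body></html>"""
```

Running `audit` across a full crawl gives you the flagged-pages list the checklist at the end of this article asks for.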


    4. Core Web Vitals and load performance

    Since 2021, Google has used Core Web Vitals as a ranking signal: three user-experience metrics that measure how fast a page loads, how stable the layout is while it loads, and how quickly it responds to user interaction.

    The three metrics:

    | Metric | What it measures | Good threshold |
    | --- | --- | --- |
    | Largest Contentful Paint (LCP) | How long until the main content is visible | Under 2.5 seconds |
    | Cumulative Layout Shift (CLS) | How much the page layout shifts as it loads | Under 0.1 |
    | Interaction to Next Paint (INP) | How long until the page responds to a click or tap | Under 200 milliseconds |

    A score in the "good" range for all three does not guarantee higher rankings. A score in the "poor" range for any of them is a confirmed negative ranking signal as well as a bad user experience.
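Google's published thresholds actually define three bands, not two — "needs improvement" sits between good and poor. A small sketch encoding the documented cutoffs, useful when classifying field data pulled from an API or export:

```python
# Core Web Vitals thresholds as published by Google.
# Between the two cutoffs is the "needs improvement" band.
THRESHOLDS = {
    "LCP": (2.5, 4.0),    # seconds
    "CLS": (0.1, 0.25),   # unitless layout-shift score
    "INP": (200, 500),    # milliseconds
}

def rate(metric, value):
    """Classify a field-data value into Google's three CWV bands."""
    good, poor = THRESHOLDS[metric]
    if value <= good:
        return "good"
    if value <= poor:
        return "needs improvement"
    return "poor"
```

The "poor" band is the one that carries the confirmed negative signal, so it's the band to sort your remediation queue by.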

    The most common performance problems

    Unoptimized images: Images are the single biggest contributor to slow load times on most websites. Every image served should be:

  • The right dimensions (not a 3000px image scaled down to 300px in CSS)
  • Compressed (use WebP format where supported; tools like Squoosh or ImageOptim handle this)
  • Lazy-loaded (the loading="lazy" attribute on <img> tags defers off-screen images until the user scrolls toward them)
  • Served with explicit width and height attributes to prevent layout shift during load

    Render-blocking resources: JavaScript and CSS files loaded in the <head> block the browser from rendering the page until they finish downloading. Move non-critical scripts to the bottom of the <body>, or load them with the defer or async attributes. Defer any CSS that isn't needed for the initial viewport.
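The image and script fixes above look like this in markup — a sketch with illustrative file paths:

```html
<head>
  <!-- Critical stylesheet stays render-blocking; everything else waits -->
  <link rel="stylesheet" href="/css/critical.css">
  <!-- defer: download in parallel, execute after the document is parsed -->
  <script src="/js/app.js" defer></script>
  <!-- async: for independent scripts like analytics that don't depend on
       execution order -->
  <script src="/js/analytics.js" async></script>
</head>
<body>
  <!-- Explicit width/height reserve space and prevent layout shift -->
  <img src="/img/hero.webp" width="1200" height="600" alt="Hero image">
  <!-- loading="lazy" defers off-screen images until the user scrolls -->
  <img src="/img/gallery-1.webp" width="600" height="400" alt="Gallery photo"
       loading="lazy">
</body>
```

Note that the hero image is not lazy-loaded: lazy-loading the largest above-the-fold element delays LCP rather than improving it.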

    No caching or CDN: Resources that can be cached (images, fonts, scripts, stylesheets) should be served with appropriate cache headers. A content delivery network (CDN) serves assets from a server geographically close to the user, reducing latency. Both are especially important for public-sector sites that serve geographically distributed users.

    Third-party scripts: Analytics, chat widgets, social media embeds, and tag managers all add load time. Each third-party script is a separate network request, often loading additional scripts of its own. Audit your third-party scripts quarterly and remove anything you're not actively using. Load remaining scripts asynchronously and consider using a tag manager to control when they fire.

    Check your scores: Google's PageSpeed Insights (pagespeed.web.dev) gives you Core Web Vitals scores and field data for both mobile and desktop for any public URL. The Chrome User Experience Report (CrUX) provides real-world user data. Run your five highest-traffic pages through PageSpeed Insights and address any pages scoring "poor" first.


    5. Redirect problems

    Redirects are necessary and useful. Redirect problems are silent and expensive.

    A 301 redirect passes approximately 90-99% of the original page's "link equity" to the destination. A chain of three redirects passes less. A chain of five passes considerably less. A redirect loop passes none and returns an error.

    Redirect chains

    A redirect chain exists when Page A redirects to Page B, which redirects to Page C. This happens most often after site migrations, where an old redirect from a previous migration wasn't cleaned up before the new one was added.

    The fix is straightforward but requires a crawl to find: map every redirect on your site, identify chains, and update them so Page A redirects directly to the final destination (Page C) in a single hop.

    Redirect loops

    A redirect loop exists when Page A redirects to Page B, which redirects back to Page A. Both browsers and crawlers give up immediately on a loop. Users see a "too many redirects" error. Crawlers stop indexing both pages.
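Both chains and loops fall out of the same walk over a redirect map. A sketch, assuming you've exported source-to-destination pairs from a crawl (the URLs echo the hypothetical migration story from the opening):

```python
def resolve(redirects, start, max_hops=10):
    """Follow a redirect map from `start`.

    redirects maps source URL -> destination URL (from a site crawl).
    Returns (final_url, hops, is_loop).
    """
    seen = []
    url = start
    while url in redirects:
        if url in seen:
            return (url, len(seen), True)   # loop detected
        seen.append(url)
        url = redirects[url]
        if len(seen) >= max_hops:
            break                            # give up, as crawlers do
    return (url, len(seen), False)

# Two stacked migrations produce a two-hop chain:
redirects = {
    "/services/permits/building-permit": "/permits/building",
    "/permits/building": "/building-permits",
}
```

Collapsing every chain is then one comprehension — `{src: resolve(redirects, src)[0] for src in redirects}` gives the single-hop map to deploy.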

    Soft 404s

    A soft 404 is a page that returns a 200 OK status code (meaning "everything is fine") while displaying content that communicates "this page doesn't exist." This happens when a CMS generates a page for a deleted item but doesn't set the response status to 404 or 410. Search engines index the empty shell page and waste crawl budget on it. The fix is ensuring your CMS serves 404 or 410 status codes for removed content rather than blank template pages.

    302 vs. 301

    A 302 redirect signals a temporary move. A 301 signals a permanent move. If you've permanently moved a page or changed a URL structure, using 302 instead of 301 means search engines keep the old URL in their index and don't fully transfer the link equity. Audit your redirects and confirm that permanent changes use 301s, not 302s.


    6. CMS-related technical debt

    Every CMS decision has technical SEO implications. These are the most common CMS-specific issues I find in audits.

    Plugin and extension sprawl

    WordPress sites in particular tend to accumulate plugins. Each plugin can add its own scripts, stylesheets, database queries, and meta tags. Conflicting SEO plugins (two plugins both trying to control title tags and canonical tags) create unpredictable output. Inactive plugins that still load code on every page add load time without benefit. Audit your plugins annually: remove inactive ones, consolidate duplicated functionality, and test performance before and after changes.

    Auto-generated archive pages

    Most CMS platforms generate archive pages for categories, tags, authors, dates, and custom taxonomies. At scale, these generate hundreds of thin, near-duplicate pages that consume crawl budget and dilute your content's relevance signals. Pages like /author/admin/page/47/ or /tag/news/2019/03/ serve no user or search purpose. Noindex non-essential archive types and ensure your sitemap doesn't include them.

    Faceted navigation and filtered URLs

    E-commerce and directory sites often generate thousands of URLs through filtered searches (/services/?type=permit&area=downtown&year=2024). Without careful handling in robots.txt or canonical tags, these exponentially multiply your crawlable URL space with near-duplicate content, drowning out your canonical pages in the crawl budget.
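Where you choose robots.txt over canonicals, Google and Bing support `*` wildcards in disallow patterns (not every crawler does). A sketch with illustrative parameter names:

```
User-agent: *
# Keep crawlers out of filtered and sorted views:
Disallow: /*?*type=
Disallow: /*?*sort=
# Plain category pages stay crawlable; canonical tags handle any
# near-duplicates that are still discovered through links.
```

The trade-off: disallowed URLs save crawl budget but can't consolidate link equity, because crawlers never see the canonical tag on a page they're blocked from fetching. Use canonicals for URLs that attract links, disallow rules for pure crawl-trap parameters.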

    URL structure after migration

    CMS migrations often change URL patterns. A site that was at /services/permits/building-permit may move to /building-permits/. If 301 redirects from old to new aren't set up comprehensively and correctly, any links pointing to the old URLs (from other sites, from your own older pages, from bookmarks) return 404 errors and pass no equity to the new page.

    Always maintain a full URL map during a migration, redirect every changed URL, and verify the redirect setup with a crawl tool before launch.


    The prioritized audit checklist

    Work through this in order. Items at the top affect the most pages with the most severity; items further down are important but can follow once the critical issues are resolved.

    Priority 1: Crawl and indexing (do these first)

  • [ ] Check robots.txt — confirm it's not blocking public content, test with Google Search Console
  • [ ] Check for noindex tags on pages you want indexed — crawl the full site and flag any unintentional noindex
  • [ ] Verify all important pages are in the XML sitemap — and that all sitemap URLs return 200 status codes
  • [ ] Confirm the sitemap is submitted to Google Search Console and Bing Webmaster Tools
  • [ ] Check for orphaned pages — identify any important pages with no inbound internal links
  • [ ] Review Google Search Console for crawl errors and coverage issues — address anything flagged as an error or excluded unintentionally

    Priority 2: Canonicalization and duplicates

  • [ ] Confirm all four HTTP/HTTPS and www/non-www variants redirect to a single canonical version
  • [ ] Confirm trailing slash handling is consistent across all pages
  • [ ] Check that every page has a self-referencing canonical tag
  • [ ] Audit URL parameters — add canonical tags, noindex, or robots.txt rules as appropriate
  • [ ] Check for duplicate title tags — every page should have a unique, descriptive title

    Priority 3: On-page meta

  • [ ] Confirm every indexed page has a title tag (50-60 characters, keyword-first)
  • [ ] Confirm every important page has a unique meta description (under 155 characters)
  • [ ] Confirm every page has exactly one H1 that matches the page's topic
  • [ ] Check for missing or generic Open Graph tags on pages shared on social media

    Priority 4: Performance

  • [ ] Run the top 10 traffic pages through PageSpeed Insights — flag any "poor" Core Web Vitals scores
  • [ ] Audit images: check for uncompressed, oversized, or unoptimized images on high-traffic pages
  • [ ] Check for render-blocking scripts and stylesheets in the <head>
  • [ ] Audit third-party scripts: remove unused ones, defer remaining ones
  • [ ] Confirm caching headers are set correctly for static assets

    Priority 5: Redirects

  • [ ] Crawl the site and identify any redirect chains (3+ hops) — collapse them to single-hop 301s
  • [ ] Check for redirect loops — any URL that returns a loop should be flagged and corrected immediately
  • [ ] Confirm all permanent redirects use 301, not 302
  • [ ] Test the 10 most-linked old URLs after any migration to confirm they redirect correctly

    Priority 6: CMS technical debt

  • [ ] Audit active plugins or extensions — remove inactive ones, consolidate duplicated SEO functionality
  • [ ] Check auto-generated archive pages — noindex any that serve no search or user purpose
  • [ ] Review URL structure for any changes introduced in the last CMS update or migration
  • [ ] Verify structured data markup (schema.org) is valid using Google's Rich Results Test

    How often to run a technical SEO audit

    A full technical audit once a year is a minimum. A partial crawl of your highest-traffic pages and a Google Search Console review every quarter catches newly introduced issues before they compound. After any site migration, CMS update, or large content restructure, run a targeted audit before the change goes live and again two weeks after, to catch anything that slips through.

    The goal is not a perfect score on every metric. It is a system that catches issues early, when they're cheap to fix, rather than late, when they've been silently costing you traffic for months.


    Want a full technical SEO audit for your site? Get in touch for a prioritized findings report covering all six categories, with a remediation roadmap your development team can act on immediately.
