The 100% pass that still failed real users
A municipal recreation department ran their newly redesigned program registration page through an automated accessibility checker before launch. The tool returned zero errors. Zero warnings. The team celebrated, checked "WCAG compliance" off the project list, and went live.
Three weeks later, a resident using a screen reader called their accessibility desk. She couldn't register for programs online. She'd been trying for two weeks. The page looked completely fine to everyone who had tested it visually. To her screen reader, the multi-step form was a series of unlabelled inputs inside a container with no announced structure. Every error message appeared visually in red text next to the relevant field but was never announced to the screen reader at all. The form could be submitted with invalid data, returned a success message that was also never announced, and she'd been submitting blank registrations without knowing it.
Zero errors in the automated audit. Completely unusable for a screen reader user.
This gap is not an edge case or a failure of one particular tool. It is a structural limitation of what automated testing can and cannot do. Understanding that limitation is the starting point for building accessibility practices that actually work.
What automated tools can and can't detect
Automated accessibility checkers work by scanning your page's HTML and comparing it against a ruleset derived from the Web Content Accessibility Guidelines (WCAG). They are fast, consistent, and excellent at catching things that are definitively wrong at the code level.
What they catch reliably:
- Missing alt attributes on images
- Form inputs without associated <label> elements
- Insufficient color contrast between text and background
- A missing language declaration on the document (<html lang="en">)
- Skipped heading levels (jumping from <h1> to <h3>)
- A missing <title> on the page

These are objectively detectable because they're binary: either the attribute is present or it isn't, either the contrast ratio meets 4.5:1 or it doesn't. The tool doesn't need to understand what the page is trying to do. It just checks for presence and value.
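To make "binary and mechanical" concrete, here is a minimal sketch of the contrast check in Python, following the WCAG definition of relative luminance and contrast ratio. The function names are my own; the 4.5:1 threshold is the Level AA requirement for normal-size text.

```python
# Sketch of the kind of binary check an automated tool performs:
# the WCAG contrast ratio between two sRGB colors either meets
# 4.5:1 or it doesn't. No judgment involved.

def _linearize(channel: int) -> float:
    """Convert an 8-bit sRGB channel to linear light (WCAG formula)."""
    c = channel / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb: tuple[int, int, int]) -> float:
    r, g, b = (_linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg: tuple[int, int, int], bg: tuple[int, int, int]) -> float:
    lighter = max(relative_luminance(fg), relative_luminance(bg))
    darker = min(relative_luminance(fg), relative_luminance(bg))
    return (lighter + 0.05) / (darker + 0.05)

def passes_aa_normal_text(fg, bg) -> bool:
    """Level AA threshold for normal-size text."""
    return contrast_ratio(fg, bg) >= 4.5

# Black on white: 21:1, the maximum possible ratio.
# Gray #767676 on white: roughly 4.54:1, just past the AA line.
```

A tool can run this check on every text node in milliseconds, which is exactly why it belongs at the start of the process rather than the end.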
What automated tools cannot detect:
- Whether alt text on an image is accurate and useful (a photo of a staff member labelled "image001.jpg" passes if the alt attribute exists, even if it says "photo")
- Whether the page can actually be operated with a keyboard alone
- Whether the focus order follows a logical sequence
- Whether error messages and status updates are announced to screen readers
- Whether the heading structure makes sense as an outline of the content
- Whether instructions and error messages are written in language users can understand

The frequently cited figure in the accessibility community is that automated tools catch approximately 30-40% of WCAG failures. That number comes from research by WebAIM, Deque, and others who have compared automated audit results against manual expert audits on the same pages. The number varies depending on the page type and the tools used, but the consistent finding is that the majority of real accessibility barriers are invisible to automated scanners.
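The alt text example above can be demonstrated in a few lines. This sketch (using Python's standard-library HTML parser, with hypothetical file names) implements the mechanical check an automated tool performs, and shows why useless alt text sails through it.

```python
# Minimal sketch of why "checks alt presence" is not "checks alt quality".
# A mechanical scanner can only test whether the attribute exists; it
# happily passes alt text that tells a screen reader user nothing.

from html.parser import HTMLParser

class AltPresenceChecker(HTMLParser):
    """Flags <img> tags with no alt attribute -- the mechanical check."""
    def __init__(self):
        super().__init__()
        self.missing_alt = []   # images a scanner would flag
        self.present_alt = []   # images a scanner would pass

    def handle_starttag(self, tag, attrs):
        if tag != "img":
            return
        attrs = dict(attrs)
        if "alt" not in attrs:
            self.missing_alt.append(attrs.get("src", "?"))
        else:
            self.present_alt.append((attrs.get("src", "?"), attrs["alt"]))

checker = AltPresenceChecker()
checker.feed("""
  <img src="staff-portrait.jpg">
  <img src="image001.jpg" alt="photo">
""")
# The first image is flagged; the second passes the scan even though
# "photo" is useless to a screen reader user. Judging alt *quality*
# requires a human who knows what the image is for.
```

This is the structural limit in miniature: presence is computable, usefulness is not.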
The three testing layers automated audits too often replace, but shouldn't
A complete accessibility testing practice has three layers. Automated scanning is one of them, and it belongs at the beginning of the process, not as the final checkpoint.
Layer 1: Automated scanning
Use automated tools early and continuously, not as a gate before launch. The right role for automated scanning is catching the low-hanging fruit: missing labels, contrast failures, empty alt text. These issues are mechanical and can be fixed quickly. Catching them early means your manual testers aren't spending their time on things a scanner could have flagged.
Tools worth using: axe DevTools (Deque's browser extension), WAVE, Lighthouse's accessibility audit, and Siteimprove if your organization uses it. No single tool catches everything; a combination is more comprehensive than any one alone.
Critically, a passing score from an automated tool is not evidence of compliance. It is evidence that the page doesn't have the specific mechanical failures the tool checks for. Both are useful to know. They are not the same thing.
Layer 2: Manual expert review
Manual testing by someone who understands WCAG, assistive technology, and the practical experience of disabled users is where the majority of real issues get found. This is not the same as a developer clicking through the page with a mouse. It requires specific testing protocols.
Keyboard navigation testing: Unplug or disable your mouse. Navigate the entire page and every interaction using only the keyboard (Tab, Shift+Tab, Enter, Space, arrow keys). Can you reach every interactive element? Is the focus indicator visible at every step? Does the focus order follow a logical sequence that matches the visual layout? After a modal dialog opens, does focus move into it? After the dialog closes, does focus return to the trigger? These questions don't require a screen reader. They require a keyboard and patience.
Screen reader testing: Use an actual screen reader to navigate the page. NVDA (free, Windows) with Firefox, JAWS (Windows) with Chrome or Edge, and VoiceOver (built into macOS and iOS) with Safari are the three most used combinations by real users. Navigate by headings, by form elements, by links. Does the heading structure make sense as an outline of the page? Are form labels read aloud when an input receives focus? Are error messages announced when they appear? Are status updates (like "your form has been submitted") announced without requiring the user to navigate to find them?
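A small script can support the "navigate by headings" step by extracting the heading outline a screen reader user would hear. This is a sketch with an invented class name; the script can list the outline and flag skipped levels, but whether the outline actually makes sense as a summary of the page remains the human reviewer's call.

```python
# Extract the heading outline of a page and flag skipped levels.
# Listing the outline is mechanical; judging whether it makes sense
# as an outline of the content is the manual-review part.

from html.parser import HTMLParser

class HeadingOutline(HTMLParser):
    def __init__(self):
        super().__init__()
        self.outline = []            # (level, text) pairs in document order
        self._current_level = None
        self._buffer = []

    def handle_starttag(self, tag, attrs):
        if tag in ("h1", "h2", "h3", "h4", "h5", "h6"):
            self._current_level = int(tag[1])
            self._buffer = []

    def handle_data(self, data):
        if self._current_level is not None:
            self._buffer.append(data)

    def handle_endtag(self, tag):
        if self._current_level is not None and tag == f"h{self._current_level}":
            self.outline.append((self._current_level, "".join(self._buffer).strip()))
            self._current_level = None

    def skipped_levels(self):
        """Consecutive heading pairs where the level jumps by more than one."""
        return [(a, b) for a, b in zip(self.outline, self.outline[1:])
                if b[0] - a[0] > 1]

doc = HeadingOutline()
doc.feed("<h1>Programs</h1><h3>Swimming</h3><h2>Registration</h2>")
# doc.outline          -> [(1, 'Programs'), (3, 'Swimming'), (2, 'Registration')]
# doc.skipped_levels() -> flags the h1 -> h3 jump
```

Reading the extracted outline aloud, in order, is a quick approximation of the experience a screen reader user gets when navigating by headings.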
Zoom and reflow testing: Increase your browser zoom to 200% and then 400%. At 400% zoom (WCAG 1.4.10 Reflow), all content and functionality should be available without horizontal scrolling, in a single column layout. Text that overlaps, buttons that disappear, and menus that become unreachable at high zoom are failures that affect users with low vision who use browser zoom rather than screen magnification software.
Cognitive accessibility review: Read the page as if you're encountering the subject matter for the first time. Are instructions clear and specific? Are error messages written in plain language that explains what went wrong and how to fix it? Are time limits disclosed upfront? Is the page free of content that flashes more than three times per second? Is the reading level appropriate for the audience?
Layer 3: User testing with disabled people
This is the layer most organizations skip entirely, and it is the layer where the most significant and unexpected barriers surface.
Formal user testing with disabled people means recruiting participants who actually use assistive technology in their daily lives and observing them trying to complete real tasks on your site. It does not mean asking your accessibility consultant to use a screen reader for thirty minutes. It means watching a person who has been blind since birth, who has years of screen reader expertise and a completely different mental model of how they interact with websites, try to book a service appointment or pay a utility bill.
The things that surface in these sessions are almost never things that would appear in an automated audit. Confusing but technically valid ARIA patterns that experienced screen reader users only get past through learned workarounds. Tasks that require too many steps for users with motor impairments using switch access. Form flows that assume the user reads the instructions before filling in inputs, when most blind users navigate by form element and encounter the submit button before they find the instructions.
You don't need a large budget to run basic usability testing. A single session with two or three participants using different assistive technologies will surface more actionable issues than a year of automated scanning.
Building an accessibility-first mindset across the whole team
The single biggest predictor of whether a website is actually accessible is not the tools the team uses. It is whether accessibility is a shared responsibility or a single person's job.
When accessibility lives with one person, everything depends on that person's capacity, their ability to review all changes before they go live, and their willingness to push back against timelines. When that person leaves, the knowledge leaves with them. When they're overloaded, things slip. When there's a disagreement about prioritization, they're outnumbered.
When accessibility is shared, it looks like this instead.
Content editors
Content editors are responsible for:

- Writing meaningful alt text for every image they publish
- Using headings in order to structure content, not for visual styling
- Writing link text that describes the destination, not "click here"
- Never conveying information through color alone
None of this requires technical training. It requires awareness of why these things matter and a clear style guide that covers accessibility alongside brand voice.
Designers
Designers are responsible for:

- Choosing color palettes that meet contrast requirements from the start
- Specifying visible focus states for every interactive element
- Designing layouts that reflow at high zoom without horizontal scrolling
- Not using color as the only way to convey status or meaning
Developers
Developers are responsible for:

- Using semantic HTML (<button> for buttons, <nav> for navigation, <main> for main content, not divs with click handlers for everything)
- Making every interaction keyboard-operable, with focus managed correctly in dialogs and dynamic content
- Ensuring form errors and status updates are announced to assistive technology
- Running an automated scan and a keyboard pass before merging new components

Project managers and leadership
Project managers and leadership are responsible for:

- Building accessibility into timelines and budgets from the start, not as a post-launch fix
- Treating accessibility findings as launch blockers rather than backlog items
- Funding manual audits and user testing with disabled people
- Keeping accessibility a shared responsibility across roles rather than one person's job
The AODA reality check for Ontario organizations
If your organization operates in Ontario, the Accessibility for Ontarians with Disabilities Act requires websites to conform with WCAG 2.0 Level AA. The AODA Integrated Accessibility Standards Regulation has been in effect for public sector organizations since 2014 and for private sector organizations (50+ employees) since 2021.
A few things worth being direct about:
Automated tool compliance is not AODA compliance. The AODA requires WCAG 2.0 Level AA conformance. WCAG conformance requires that all success criteria at the specified level are actually met, not that no automated tool flags issues. A page that passes an automated audit but fails a keyboard navigation test or a screen reader test is not WCAG conformant, regardless of what the tool report says.
"Best efforts" is not a defence in enforcement. The AODA is enforced by the Accessibility Directorate of Ontario. Organizations that receive complaints and cannot demonstrate active compliance efforts face penalties. Having an accessibility statement that says "we strive to meet WCAG 2.0 AA" while the site has known barriers is not a compliant posture.
Accessibility statements should be honest. An accessibility statement should describe what level of conformance the site has achieved, list any known barriers and the timeline for remediation, and provide contact information for users who encounter barriers. A statement that claims full WCAG 2.0 AA compliance when the site has not been manually tested is a liability, not a protection.
A practical accessibility review cycle
Here is a sustainable schedule for organizations that want to maintain genuine accessibility rather than checking a box once:
Every time new content is published: content editor checks for alt text, heading structure, link text, and color-only information cues. Takes two minutes per page with a checklist.
Every time a new component or template is launched: developer tests keyboard navigation and runs automated scan before merging. Designer confirms focus states are implemented as specified.
Every quarter: One manual keyboard-and-screen-reader walkthrough of the five highest-traffic pages and any page that received an accessibility complaint in the previous quarter.
Every year: Full manual expert audit of the site, covering keyboard navigation, screen reader testing on NVDA+Firefox and VoiceOver+Safari, zoom and reflow testing, and a cognitive accessibility review. Update the accessibility statement based on findings.
When resources allow: User testing session with two or three participants using assistive technology. Even one session per year will surface issues that no other method finds.
True accessibility is a practice, not a project milestone. If your team needs a starting point, a gap assessment, or a roadmap that goes beyond the automated audit, get in touch for a hands-on accessibility review built around how your real users experience your site.