Accessibility Scans Don’t Prove Compliance — Documentation Does

Organizations increasingly rely on automated accessibility scanners to evaluate their websites. These tools identify potential issues, generate reports, and sometimes assign scores. The output feels authoritative — a list of concrete, technical findings that can be reviewed and acted upon.

But scanning a website is not the same as demonstrating compliance. Detection is the beginning of a process, not the end of one. Organizations that treat scan results as proof of compliance misunderstand what regulators, auditors, and legal reviewers actually look for when examining an accessibility program.

What matters is not just whether issues were found, but what happened next.

What Accessibility Scanners Actually Do

Automated accessibility scanners evaluate web pages against a set of machine-verifiable rules derived from standards like the Web Content Accessibility Guidelines (WCAG). They check for conditions that can be detected programmatically: images without alternative text, form fields without labels, insufficient color contrast, missing document language attributes, and similar patterns.
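To make the idea of a machine-verifiable rule concrete, here is a minimal sketch of one such check, images lacking an alt attribute, built on Python's standard-library HTML parser. The class name and output format are illustrative, not taken from any particular scanner; real tools implement hundreds of rules like this.

```python
from html.parser import HTMLParser

class MissingAltChecker(HTMLParser):
    """Flags <img> tags without an alt attribute -- one machine-verifiable
    rule of the kind described above (text alternatives, WCAG 1.1.1)."""

    def __init__(self):
        super().__init__()
        self.findings = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attr_map = dict(attrs)
            if "alt" not in attr_map:
                src = attr_map.get("src", "(unknown source)")
                self.findings.append(f"img missing alt attribute: {src}")

page = ('<html><body>'
        '<img src="logo.png" alt="Company logo">'
        '<img src="hero.png">'
        '</body></html>')
checker = MissingAltChecker()
checker.feed(page)
print(checker.findings)  # only the second image is flagged
```

Note what this check cannot do: it confirms the attribute exists, but, as the next paragraph explains, it cannot judge whether `alt="Company logo"` is a meaningful description.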

These tools are valuable. They can evaluate hundreds of pages in minutes, identify recurring patterns across a site, and surface issues that might take a manual reviewer hours to find. For organizations managing large or frequently changing websites, automated scanning is a practical necessity.

However, scanners operate within inherent constraints. They evaluate code structure, not user experience. A scanner can confirm that an image has an alt attribute, but it cannot determine whether the alt text meaningfully describes the image. It can detect that a button exists, but not whether the button's purpose is clear to someone using a screen reader. Industry estimates suggest automated tools can detect roughly 30 to 40 percent of WCAG conformance issues. The remaining issues require human judgment.

This is not a criticism of scanning tools. It is a description of what automation can and cannot do. Organizations that understand these boundaries use scanners effectively. Organizations that do not may believe they have achieved compliance when they have only completed the first step.

Why Detection Is Not Enough

When an organization's accessibility practices come under scrutiny — whether through a regulatory inquiry, a legal complaint, or an internal audit — the questions asked are rarely about scanning. They are about institutional response.

Typical questions include: Were issues identified, and when? Who was assigned responsibility for addressing them? What remediation was actually performed? Was each fix reviewed and verified? Do records exist showing that this process was followed consistently?

A scan report answers the first question. It shows that issues were identified at a particular point in time. It does not answer any of the others.

An organization that can produce scan results but cannot demonstrate what happened after those results were generated has a detection program, not a compliance program. The distinction matters because compliance is fundamentally about demonstrating ongoing, responsive effort — not about producing a single artifact that says problems were found.

Regulators understand that accessibility is complex and that organizations cannot fix everything instantly. What they look for is evidence of a systematic process: awareness, prioritization, action, and follow-through. The absence of that process, even in the presence of scan results, suggests that accessibility is being monitored without being managed.

The Importance of Remediation Records

Remediation documentation is where scanning ends and compliance begins. When an issue is detected, the record of what was done about it — and by whom, and when — is what transforms a finding into evidence of institutional accountability.

Effective remediation records typically include several elements. A description of the issue as detected, including where it appeared and what standard it relates to. An account of the action taken to address it, whether that involves a code change, a content update, a design revision, or a policy decision. The identity of the person who performed the remediation and, separately, the person who reviewed it. Timestamps establishing when each step occurred.

These records do not need to be elaborate. A simple, structured entry showing that an issue was identified on a specific date, assigned to a specific person, remediated through a specific action, and reviewed by a specific reviewer is far more defensible than a detailed scan report with no follow-up documentation.
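The elements listed above can be captured in a very small structure. The sketch below uses a Python dataclass; the field names, dates, and roles are illustrative assumptions, not a prescribed schema.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class RemediationRecord:
    """One structured entry per finding: what was found, what was done,
    by whom, when, and who reviewed it."""
    issue: str                  # description of the issue as detected
    location: str               # where it appeared (URL, component)
    standard: str               # standard it relates to, e.g. "WCAG 2.1 SC 1.1.1"
    detected_at: datetime
    assigned_to: str            # person responsible for the fix
    action_taken: Optional[str] = None        # code change, content update, ...
    remediated_at: Optional[datetime] = None
    reviewed_by: Optional[str] = None         # separate person from assigned_to
    reviewed_at: Optional[datetime] = None

# Hypothetical lifecycle of a single finding:
record = RemediationRecord(
    issue="Image missing alt text",
    location="/products, hero banner",
    standard="WCAG 2.1 SC 1.1.1",
    detected_at=datetime(2024, 3, 1, tzinfo=timezone.utc),
    assigned_to="frontend developer",
)
record.action_taken = "Added descriptive alt text to hero image"
record.remediated_at = datetime(2024, 3, 4, tzinfo=timezone.utc)
record.reviewed_by = "accessibility reviewer"
record.reviewed_at = datetime(2024, 3, 5, tzinfo=timezone.utc)
```

The point is not the particular fields but the separation of roles and timestamps: detection, assignment, action, and review each leave an examinable trace.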

The reason is straightforward: remediation records demonstrate that the organization treated accessibility findings as actionable obligations rather than informational outputs. They show a chain of awareness, responsibility, and response that can be examined independently.

Organizations that maintain remediation records also benefit from consistency. When findings are documented systematically, patterns become visible. Recurring issues can be identified and addressed at a structural level rather than being fixed one page at a time. The remediation record becomes not just a compliance artifact but an operational tool for improving accessibility outcomes.
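How such patterns surface can be sketched in a few lines. The findings data here is hypothetical; the technique is just counting how often each rule recurs across pages.

```python
from collections import Counter

# Hypothetical findings log: (page, rule) pairs drawn from remediation records.
findings = [
    ("/home", "image-alt"), ("/products", "image-alt"),
    ("/products", "label"), ("/checkout", "image-alt"),
    ("/checkout", "color-contrast"),
]

by_rule = Counter(rule for _, rule in findings)
# A rule failing across many pages points at a structural cause (e.g. a
# shared template) rather than a page-by-page mistake.
recurring = [rule for rule, count in by_rule.most_common() if count > 1]
print(recurring)  # → ['image-alt']
```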

Verification Matters

Fixing an issue and confirming that the fix actually resolved the problem are two different activities. Organizations often assume that once a change has been made, the issue is resolved. In practice, fixes can be incomplete, can introduce new issues, or can fail to address the underlying barrier.

Verification is the step that closes the loop. It confirms that the remediation was effective — that the issue identified by the scanner or by a manual reviewer is no longer present. Verification can take several forms: a follow-up scan targeting the specific page and issue, a manual review by someone other than the person who performed the fix, or a structured test confirming that the assistive technology interaction now works as expected.
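The first of those forms, a follow-up scan targeting a specific page and issue, can be sketched as a small closing-the-loop check. Here `scan_page` is a stand-in for whatever detection mechanism produced the original finding, reduced to a single crude rule for illustration.

```python
import re

def scan_page(html: str) -> list[str]:
    """Stand-in detector: returns the rule IDs still failing on a page.
    For illustration it checks only one rule, images without alt text."""
    failures = []
    for tag in re.findall(r"<img\b[^>]*>", html):
        if "alt=" not in tag:
            failures.append("image-alt")
    return failures

def verify_fix(page_html: str, rule_id: str) -> bool:
    """Closes the loop: confirms the originally detected rule no longer fails."""
    return rule_id not in scan_page(page_html)

before = '<div><img src="hero.png"></div>'
after = '<div><img src="hero.png" alt="Spring product lineup"></div>'
print(verify_fix(before, "image-alt"))  # False: issue still present
print(verify_fix(after, "image-alt"))   # True: fix confirmed resolved
```

Recording the result of `verify_fix` alongside the remediation entry is what makes the cycle examinable end to end.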

From a compliance perspective, verification is significant because it demonstrates diligence. An organization that detects an issue, fixes it, and then confirms the fix has completed a full cycle of responsive action. An organization that detects an issue and fixes it without verification has completed most of the work but left the most credible part of the record incomplete.

Verification also protects against regression. Issues that were fixed in one release can reappear in the next if the underlying cause was not fully addressed. Periodic verification ensures that resolved issues remain resolved, and creates a longitudinal record that demonstrates sustained attention rather than one-time effort.

Documentation Is the Evidence

Accessibility compliance is not achieved by scanning websites. Scanning is a detection mechanism — an important one, but only the first step in a process that must also include acknowledgment, remediation, review, and verification.

Organizations that invest in scanning without investing in documentation may be doing genuine work to improve accessibility while simultaneously failing to create records that demonstrate that work was done. When scrutiny arrives, the absence of structured documentation leaves the organization unable to answer the most basic questions about its own process.

The goal is not to create paperwork for its own sake. The goal is to ensure that every meaningful step in the accessibility lifecycle — detection, responsibility assignment, remediation, review, and verification — produces a record that can be examined by someone who was not involved in the process. That record, maintained consistently over time, is what transforms accessibility activity into compliance evidence.

Scanners find the issues. Documentation proves you addressed them.