The Nature of Compliance Evidence
What Compliance Evidence Actually Is
The word "evidence" appears frequently in discussions of website accessibility compliance. Organizations speak of having evidence that their sites are compliant. Vendors offer to provide evidence through audits, scans, and reports. Legal teams request evidence when responding to complaints or preparing for litigation.
Yet in most cases, what organizations possess is not evidence in any meaningful sense. They have documentation. They have reports. They have records of activity. But documentation, reports, and activity records are not the same thing as evidence, and the difference becomes visible precisely when those materials are examined by someone who has no reason to trust them.
Ordinary Meaning and Institutional Meaning
In everyday usage, "evidence" means anything that supports a claim. A scan result showing no errors detected feels like evidence of compliance. A remediation report listing resolved issues feels like evidence of good faith. A certification badge feels like evidence that a standard has been met.
In legal and institutional contexts, evidence has a narrower and more demanding meaning. Evidence is a record capable of establishing a fact under scrutiny by an independent party. It must be verifiable, meaning it does not rely solely on the credibility of the organization presenting it. It must be attributable, meaning it shows who created it, when, and under what circumstances. And it must be complete enough to withstand challenge, because partial records invite questions about what has been omitted.
A scan result, by itself, establishes only that a scan was run. It does not establish what was scanned, what the scan could not detect, or what happened afterward. A remediation report establishes only that a document describing remediation exists. It does not establish that the remediation occurred, that it was effective, or that it addressed the barriers a user actually encountered.
The gap between ordinary meaning and institutional meaning is where many compliance programs quietly fail.
Why Reports Do Not Function as Evidence
A compliance report describes a state of affairs at a particular moment. It may be produced by a vendor, an internal team, or an automated tool. It typically contains findings, assessments, or scores.
Structurally, reports are limited in ways that prevent them from functioning as evidence.
A report is an assertion. It states that something is true. When a report says "14 issues were found and remediated," it is asserting a fact, not independently establishing it. Under scrutiny, the question is not whether the assertion is plausible, but how its accuracy can be verified without relying on the report itself.
Reports also collapse time. They capture a snapshot. Compliance, however, is not a momentary condition. A report from last month establishes nothing about the state of a website during the period a user claims to have encountered barriers. Institutional examination looks for timelines, not snapshots.
Most reports lack attribution chains. They rarely show who identified an issue, who determined it required action, who performed the remediation, who verified the result, and when each step occurred. Without attribution, there is no way to establish institutional awareness or accountability. The report becomes an anonymous artifact, detached from decision-making.
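The missing attribution chain can be made concrete. Below is a minimal sketch of what a single attributable remediation step would capture; the record type, field names, issue identifier, and actor names are all hypothetical illustrations of the who/what/when structure described above, not any particular tool's schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class RemediationRecord:
    """One attributed step in handling an issue.

    Hypothetical structure: the fields mirror the attribution chain
    described above (who acted, on what, and when).
    """
    issue_id: str   # which barrier this step concerns
    action: str     # "identified", "triaged", "remediated", or "verified"
    actor: str      # who performed the step
    recorded_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# A report that says "issue fixed" collapses into one assertion what
# should be four separately attributable steps:
steps = [
    RemediationRecord("IMG-ALT-014", "identified", "j.rivera"),
    RemediationRecord("IMG-ALT-014", "triaged", "a.chen"),
    RemediationRecord("IMG-ALT-014", "remediated", "j.rivera"),
    RemediationRecord("IMG-ALT-014", "verified", "m.okafor"),
]
for step in steps:
    print(step.action, step.actor, step.recorded_at)
```

Each step carries its own actor and timestamp, so the question "who verified this, and when?" has an answer that does not depend on anyone's recollection.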
Finally, reports are produced for the organization that commissions them. This does not make them dishonest. It does mean they are created in a context where favorable presentation is expected. When examined externally, the conditions under which a report was produced matter as much as its contents.
Why Scans Do Not Function as Evidence
Automated accessibility scans occupy a special place in compliance discussions because they produce concrete, technical output. A scan returns a list of issues, a score, or a pass/fail determination. This output appears objective in a way narrative assessments do not.
But scan output is not evidence of compliance for reasons that are structural, not technological.
Automated tools evaluate code patterns, not user experience. They can detect the presence of an attribute, but not whether its use meaningfully supports accessibility. This gap is inherent to automation, not a flaw awaiting correction.
Scan results also record output, not awareness. They show what a tool detected at a moment in time. They do not show what the organization knew, what decisions were made, or how findings were handled. A scan returning zero errors establishes only that the scan returned zero errors.
Scans are repeatable but not continuous. Each run captures a moment. Over time, scan results accumulate as disconnected snapshots rather than as a coherent record of ongoing attention.
When multiple scans exist, results are often presented selectively. The existence of selection itself weakens evidentiary value, because it raises questions about what was excluded and why.
What Makes a Record Defensible
If reports and scans do not function as evidence, the solution is not a different kind of document. It lies in how records are created and preserved.
When records withstand scrutiny, they tend to share certain structural qualities. They are created contemporaneously, close to the events they document. They are attributable to specific individuals, establishing who knew and who acted. They form a continuous history rather than isolated artifacts. They exist independently of whether their contents are flattering. And they resist selective presentation because the record exists as a whole.
These qualities describe a record-keeping practice, not a file format. The question is not what tool produced a record, but whether the record can be examined by someone who assumes the organization may be mistaken, biased, or incomplete.
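The structural qualities above can be approximated in code. The following is a minimal sketch, assuming nothing about any particular tool: each entry records an actor and a timestamp (attribution, contemporaneity) and carries a hash of the previous entry, so the log forms a continuous chain. Removing or altering any entry breaks every hash after it, which is what makes selective presentation detectable:

```python
import hashlib
import json
from datetime import datetime, timezone

GENESIS = "0" * 64  # placeholder "previous hash" for the first entry

def append_entry(log, actor, event):
    """Append an attributed, timestamped entry whose hash chains to the
    previous entry, making the log a single continuous record."""
    entry = {
        "actor": actor,                                  # who acted
        "event": event,                                  # what happened
        "at": datetime.now(timezone.utc).isoformat(),    # when, recorded now
        "prev": log[-1]["hash"] if log else GENESIS,     # link to prior entry
    }
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    log.append(entry)
    return entry

def verify_chain(log):
    """Recompute every hash; an omitted or altered entry breaks the chain."""
    prev = GENESIS
    for entry in log:
        if entry["prev"] != prev:
            return False
        body = {k: v for k, v in entry.items() if k != "hash"}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if digest != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

log = []
append_entry(log, "scanner", "scan completed: 14 issues found")
append_entry(log, "a.chen", "issue IMG-ALT-014 assigned")
append_entry(log, "j.rivera", "issue IMG-ALT-014 remediated")

print(verify_chain(log))                 # True: the chain is intact
print(verify_chain(log[:1] + log[2:]))   # False: the omission is detectable
```

The point of the sketch is not the hashing itself but the property it produces: the record exists as a whole, and presenting only the flattering parts is no longer a quiet editorial choice but a verifiable gap.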
Activity Without Evidence
Most organizations engaged in accessibility compliance are doing genuine work. They hire consultants, run scans, remediate issues, train staff, and update policies. The effort is real.
But effort does not automatically produce evidence of effort. Work can occur without leaving behind records capable of demonstrating, in institutionally meaningful terms, that it occurred.
When scrutiny arrives, organizations often assemble documentation: reports, scan outputs, training certificates, remediation logs. On examination, these materials frequently show activity without continuity, outcomes without attribution, and assertions without verification.
The work happened. The institution, however, cannot demonstrate how it knew about issues, how it decided what to do, or how it verified the results over time.
Evidence as Institutional Memory
Evidence is not only a legal concept. It is an institutional one. An organization that cannot produce evidence-grade records lacks durable memory of its own actions.
As people change roles, vendors change, and time passes, undocumented knowledge disappears. Without records that preserve awareness, decisions, and follow-through, the institution cannot reliably answer basic questions about its own history.
This is not a failure of intent. It is a failure of structure. The work occurred, but it was not recorded in a way that allowed it to endure.
Evidence, properly understood, is institutional memory made durable. It is how an organization's past awareness and actions remain accessible to its future self, and to anyone else who must examine them without presumption of trust.