TLDR
An agency that handles accessibility on a case-by-case basis for each client is running accessibility as overhead, not as a practice. Scaling requires three things: a standard toolchain that doesn't require reinvention per project, a process that runs without senior developer involvement for every scan, and retainer economics that cover the ongoing cost.
DEFINITION
- Accessibility retainer: A recurring monthly or quarterly service agreement where an agency provides ongoing accessibility monitoring, issue triage, remediation, and client reporting. Distinct from a one-time audit — a retainer covers the continuous maintenance of compliance as sites receive new content, code updates, and design changes.

DEFINITION
- Accessibility practice: A systematized agency capability — defined process, standard toolchain, trained staff, productized service offerings — for delivering accessibility services to clients. Distinct from ad-hoc accessibility work, where each project handles accessibility independently with no institutional infrastructure.

DEFINITION
- Baseline audit: The initial accessibility audit conducted on a client site before an ongoing monitoring relationship begins. Establishes the starting compliance state, identifies issues to be remediated before monitoring begins, and sets the benchmark against which future scan results are compared.
One accessibility audit per project is not an accessibility practice. Sites change — content editors add images without alt text, developers push component updates that break keyboard navigation, design changes introduce contrast failures. Accessibility degrades over time without ongoing attention.
Building a practice means building the infrastructure to monitor, triage, and report on accessibility continuously, across every client site, without the per-project overhead that makes it unsustainable.
What Makes Accessibility Hard to Scale
No consistent toolchain. When different developers use different tools on different projects, results are inconsistent. One project gets a Lighthouse run, another gets nothing, a third gets an expert audit because the developer happened to care. Clients get different quality levels depending on who ran their project.
No dedicated workflow. Without a defined process, accessibility work lands in the “developer judgment” category — something good developers do when they have time. At project crunch, it gets cut.
No retainer structure. Agencies that do accessibility only at project launch have no revenue model for ongoing monitoring. If a client calls because their site now fails accessibility checks after a content update, that work is unscoped and underbid.
No report templates. Generating a client accessibility report from scratch for each client is slow. Without a template, each report is ad-hoc, inconsistent in quality, and takes longer than it should.
Fixing these four problems is what building an accessibility practice means operationally.
Building the Standard Toolchain
Choose one scanner for portfolio-level work and standardize on it. The tool needs to handle multi-client management — separate clients, separate reports, a portfolio health view — and generate reports without manual assembly. A11yProof is built for this. Other options require workarounds.
Separately, equip developers with the axe browser extension for in-development use. This is free and integrates directly into the workflow developers already use. It catches issues at the point where they are cheapest to fix.
For manual testing, ensure at least one team member is trained on a screen reader (NVDA on Windows is free, VoiceOver on Mac is built-in). Manual testing of interactive components — carousels, modals, dropdowns, form validation — requires human verification that automated tools cannot provide.
Defining the Process
For project work:
- Automated baseline scan at design handoff or start of build
- Developer fixes critical issues during development (not post-launch)
- Pre-launch rescan to confirm fixes and catch anything missed
- Client audit report generated from post-fix scan
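The pre-launch rescan step above amounts to a set comparison against the baseline scan. A minimal sketch, assuming issues can be keyed by a stable identifier such as rule ID plus CSS selector (the keys and field names here are illustrative, not any real scanner's output format):

```python
def compare_scans(baseline: set[str], rescan: set[str]) -> dict[str, set[str]]:
    """Compare a pre-launch rescan against the baseline scan.

    Each issue is identified by a stable key, e.g. 'rule-id:selector'
    (a hypothetical convention for this sketch).
    """
    return {
        "fixed": baseline - rescan,      # present at baseline, gone now
        "remaining": baseline & rescan,  # still open; blocks launch if critical
        "new": rescan - baseline,        # introduced during the build
    }

# Illustrative issue keys, not real scan output.
baseline = {"image-alt:#hero img", "color-contrast:.nav a", "label:#search"}
rescan = {"color-contrast:.nav a", "aria-hidden-focus:#modal"}

result = compare_scans(baseline, rescan)
```

The "new" bucket is the one that justifies the rescan: it catches regressions introduced after the baseline, which a simple fix checklist would miss.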
For ongoing monitoring:
- Scheduled automated scan — monthly or quarterly, depending on how frequently the site changes
- Triage of new issues — categorize by severity, escalate criticals immediately
- Developer fix tickets for any new critical or serious issues
- Monthly or quarterly summary report generated and sent to client
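The triage step in the monitoring loop above can be sketched as a simple severity split. The severity scale mirrors the levels common scanners use (critical, serious, moderate, minor); the issue records are illustrative:

```python
# Severities that get an immediate developer fix ticket; everything
# else goes to the backlog for the next summary report. The threshold
# is an assumption — tune it to your own triage policy.
ESCALATE = {"critical", "serious"}

def triage(issues: list[dict]) -> tuple[list[dict], list[dict]]:
    """Split newly found issues into fix tickets and backlog items."""
    tickets = [i for i in issues if i["severity"] in ESCALATE]
    backlog = [i for i in issues if i["severity"] not in ESCALATE]
    return tickets, backlog

# Illustrative scan results, not real scanner output.
issues = [
    {"rule": "image-alt", "severity": "critical"},
    {"rule": "link-name", "severity": "serious"},
    {"rule": "region", "severity": "moderate"},
]
tickets, backlog = triage(issues)
```

Keeping the escalation threshold explicit makes the retainer scope defensible: the client agreement can name exactly which severities trigger immediate work.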
Structuring the Retainer
A sustainable accessibility retainer has three components:
Monitoring fee: Covers the tool cost, the scheduled scan, and the report generation. This is the baseline monthly charge — clients pay it regardless of whether new issues are found.
Included remediation hours: A monthly hour block (e.g., 1-2 hours of developer time) included in the retainer for fixing newly identified issues within the scan scope. This prevents every new issue from becoming a separate change order.
Additional remediation: Billed separately when the included hours are insufficient. Large content updates or code deployments that introduce many new issues go above the included hours.
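The three-part structure above reduces to a small billing calculation. A sketch with placeholder rates — substitute your own pricing:

```python
def monthly_invoice(monitoring_fee: float, included_hours: float,
                    hours_worked: float, overage_rate: float) -> float:
    """Three-part retainer: flat monitoring fee, an included hour block,
    and per-hour billing only for hours beyond the block."""
    overage_hours = max(0.0, hours_worked - included_hours)
    return monitoring_fee + overage_hours * overage_rate

# Quiet month: 1.5h of fixes, fully covered by a 2h included block.
quiet = monthly_invoice(monitoring_fee=200.0, included_hours=2.0,
                        hours_worked=1.5, overage_rate=150.0)

# Heavy month: a redesign introduced issues, 5h of fixes (3h over).
heavy = monthly_invoice(monitoring_fee=200.0, included_hours=2.0,
                        hours_worked=5.0, overage_rate=150.0)
```

The flat fee never varies, which is what makes the invoice predictable for the client; only the overage line moves, and only when they can see why.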
This structure is transparent and defensible. Clients understand what they are paying for each month and what triggers additional charges.
Team Structure at Scale
At small scale (under 20 clients), one person can manage monitoring alongside other responsibilities. At medium scale (20-50 clients), a dedicated part-time role makes sense: someone responsible for running scans, triaging results, generating reports, and coordinating developer fixes.
Training all developers on the basic WCAG checklist items reduces the remediation volume that flows through the dedicated monitoring role. Issues caught during development don’t appear in post-launch scans.
The Portfolio Health View
Once you have 20+ clients in an accessibility monitoring program, you need a portfolio-level view: which clients are compliant, which have outstanding critical issues, which haven’t been scanned recently. A portfolio dashboard lets you prioritize attention and demonstrate the practice’s value internally.
This is one of the structural requirements that makes tool selection important at scale. Tools built for single-site use require you to reconstruct this view manually. A11yProof’s multi-client dashboard surfaces this view natively.
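If your tool does not surface a portfolio view natively, the rollup logic is straightforward to sketch. The field names and the 45-day staleness threshold below are assumptions for illustration, not any product's schema:

```python
from datetime import date, timedelta

def client_status(last_scan: date, open_critical: int, today: date,
                  stale_after: timedelta = timedelta(days=45)) -> str:
    """Classify one client for the portfolio dashboard.

    Staleness takes priority: an overdue scan means the critical-issue
    count may itself be out of date.
    """
    if today - last_scan > stale_after:
        return "scan overdue"
    if open_critical > 0:
        return "critical issues open"
    return "compliant"

# Illustrative portfolio: client -> (last scan date, open criticals).
today = date(2024, 6, 1)
portfolio = {
    "acme": (date(2024, 5, 20), 0),
    "globex": (date(2024, 5, 28), 3),
    "initech": (date(2024, 3, 1), 0),
}
statuses = {name: client_status(scan, crit, today)
            for name, (scan, crit) in portfolio.items()}
```

Three statuses are enough to drive the weekly prioritization: overdue scans get scheduled first, open criticals get fix tickets, and everything else is evidence of the practice working.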
Starting Small
You don’t need 50 clients to start building an accessibility practice. Start with the clients currently on maintenance retainers. Add accessibility monitoring to existing maintenance agreements at a modest monthly uplift. Run the workflow on those clients for 90 days. Refine the process, the report format, and the triage approach before rolling it out more broadly.
The agencies that have built recurring accessibility revenue typically didn’t plan a launch — they added accessibility monitoring to one client, found it worked, and expanded from there.
Q&A
How do agencies scale accessibility monitoring across many client sites?
At scale, accessibility monitoring requires: automated scanning tools with multi-client management (so one account covers all clients), scheduled scans that run without manual initiation, templated client reports generated from scan data, and a triage process that prioritizes new issues for developer attention. Without these elements, per-site manual effort prevents scaling past a small number of clients.
What is the ROI of building an accessibility practice for a web agency?
Accessibility retainers are recurring revenue on top of existing client relationships. An agency with 30 maintenance clients at $200/month in accessibility monitoring generates $6,000/month without new client acquisition. The incremental cost — tool cost plus triage time — is significantly less than the revenue. Accessibility also creates an upsell path from clients who need remediation services beyond the monitoring scope.
How do agencies avoid scope creep in accessibility work?
Define scope explicitly for each service tier: monitoring covers X pages, Y scans per month, Z hours of developer time for fixes. Remediation beyond the included hours is billed separately. A baseline audit at the start of a monitoring relationship sets the starting state clearly, so both agency and client know what the monitoring is tracking against.