Accessibility Testing Tool Requirements

This document provides an overview of Accessibility Testing Tool Requirements – the general ideals an accessibility testing tool should conform to. It was developed by Level Access' research and development team as part of the Accessibility Management Platform (AMP) Roadmap. It focuses on the features that should be implemented in the testing functionality of an enterprise-level accessibility platform and on how best to make an accessibility testing infrastructure work in a modern development organization.

Automate Everything that can be Automated

The ideal accessibility testing solution is 100% automated and requires no user interaction apart from telling the tool what to test. While the nature of the compliance requirements (Section 508, WCAG 1.0, WCAG 2.0, client-specific) makes it impossible to test for everything automatically, everything that can be automated should be automated. Viewed from a different angle, accessibility requirements that must be validated manually pose the highest testing cost and tend to be the dominant cost in performing accessibility testing. The goal is to minimize these testing costs without negatively impacting the scope and quality of testing. This area offers the biggest opportunity for cost savings: it brings the repeatability of machine testing while surfacing issues for further manual evaluation.

The following principles can be derived from this methodology:

  • Any test that can be automated should be automated. Even a high cost to automate a given test is likely justified, as the savings are amortized across the millions of tests ultimately performed.
  • Any portion of a test that can be automated should be. Many tests fall into the category of “Guided Automatic” tests – tests where it is possible to determine automatically which elements in a document are candidates for testing, but the test itself must be performed by a human. Proper use of guided automatic tests drastically limits the number of manual tests that need to be performed. For example, manual tests on data tables need to be performed only if data tables exist. An automated tool can use heuristic methods to determine, 99% of the time, whether a data table is present on a given page – providing testing savings whenever no tables are found (see the first sketch after this list).
  • If a manual test has been performed, the same test result should be applied in similar situations in the future. In implementation, this concept allows for the automatic replay of the results of any tests that have previously been completed and stored in the accessibility platform. For example, if an image has been reviewed to ensure its alt text provides a meaningful alternative – and the same image is encountered again with the same alt text – the image need not be reviewed again (see the second sketch after this list).
  • Many manual tests are difficult to perform when looking at code but easy to perform with a proper view of the issue at hand. Level Access' first tool, InFocus, pioneered the concept of using previews – special renderings of a page or element – to allow manual testing to be performed rapidly. The modern equivalent is the use of AMP Toolbar preview modes to allow complex semantic and visual equivalence issues – like the proper use of headers – to be diagnosed quickly and easily. These preview tests can be provided in a direct, effective fashion via whatever visual interface is readily available – a browser toolbar, an integrated development environment (IDE), etc.
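
The first sketch, in TypeScript, illustrates the guided automatic idea for data tables: it flags <table> elements that look like data tables so that manual table tests are queued only when candidates exist. The heuristic signals used here (header cells, captions, scope attributes, row shape) are illustrative assumptions, not AMP's actual detection logic.

```typescript
// Hypothetical heuristic: classify <table> elements as likely data tables.
// A production tool would combine many more signals than these.
function findLikelyDataTables(doc: Document): HTMLTableElement[] {
  const tables = Array.from(doc.querySelectorAll<HTMLTableElement>("table"));
  return tables.filter((table) => {
    // Tables explicitly marked as presentational are never data tables.
    const role = table.getAttribute("role");
    if (role === "presentation" || role === "none") return false;

    // Header cells, captions, or scope attributes strongly suggest tabular data.
    if (table.querySelector("th, caption, [scope]")) return true;

    // Fall back to shape: multiple rows with a consistent, multi-cell width
    // looks like data; a single row or ragged rows looks like layout.
    const rows = Array.from(table.rows);
    if (rows.length < 2) return false;
    const cellCounts = rows.map((row) => row.cells.length);
    return cellCounts.every((count) => count === cellCounts[0] && count > 1);
  });
}

// Queue the manual data-table tests only when candidates actually exist.
const candidates = findLikelyDataTables(document);
console.log(`${candidates.length} likely data tables require manual review`);
```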
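
The second sketch shows the replay idea: a minimal cache keyed on a content hash of the image bytes plus the alt text, assuming a Node.js environment. The names and in-memory storage model are hypothetical; a real platform would persist verdicts centrally.

```typescript
import { createHash } from "crypto";

type Verdict = "pass" | "fail";

// In-memory stand-in for the platform's central store of manual results.
const reviewCache = new Map<string, Verdict>();

// The key covers both the image bytes and the alt text, since a stored
// verdict only transfers when both are identical.
function reviewKey(imageBytes: Buffer, altText: string): string {
  return createHash("sha256")
    .update(imageBytes)
    .update("\0")
    .update(altText)
    .digest("hex");
}

function recordManualReview(imageBytes: Buffer, altText: string, verdict: Verdict): void {
  reviewCache.set(reviewKey(imageBytes, altText), verdict);
}

// Returns the stored verdict for an identical image/alt-text pair, or
// undefined when a fresh manual review is still required.
function replayReview(imageBytes: Buffer, altText: string): Verdict | undefined {
  return reviewCache.get(reviewKey(imageBytes, altText));
}
```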

Testing Approaches

Across all our clients, Level Access has seen many different testing approaches to Section 508 compliance or WCAG conformance. These run the gamut from no testing to full, formal audits of IT systems on a per-release basis. Out of all of these, however, four general approaches recur across the marketplace:

  • Automatic Testing – The cheapest and most common approach to validation is pure automatic testing – generally performed with the aid of a spider. In this scenario, you enter a URL, a spider is dispatched to gather pages, and the discovered pages are diagnosed using automated methods (see the sketch after this list).
  • Quick Test – The second type of test is something we generally refer to as a quick test. This uses automatic testing as a baseline but extends it with a limited set of manual tests – generally somewhere between ten and twenty in a basic checklist. The set of best practices can be chosen in any fashion but is generally selected based on the frequency and severity of violations. This approach provides good hybrid coverage of critical accessibility issues while remaining significantly cheaper than a full testing approach – a formal audit.
  • AT Testing – AT testing focuses solely on testing in one or a few assistive technologies and does not perform any normative or rule-based testing on the application. This approach determines whether the system works with a specific technology, but it limits the results to the particular assistive technologies and versions, disability types, and application paths tested. It is difficult to determine the level of conformance using this approach, and because the results tend to be assistive technology specific, translating them into implementation can be a challenge.
  • Audit – A formal audit for Section 508 compliance or WCAG conformance, producing a VPAT or conformance statement (or both). This scenario generally conforms to a formal audit methodology – such as Level Access' Unified Audit Methodology – and includes testing for the full set of normative issues as well as functional validation in specific assistive technologies. This approach provides all the information required to determine compliance with a given accessibility standard but is also the most costly and time-consuming.
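
As a rough illustration of the automatic-testing flow described in the first item above, the following sketch dispatches a naive spider from a seed URL and runs each discovered page through a stubbed rule engine. The runAutomatedChecks function and the regex-based link extraction are placeholders; a production spider would handle robots.txt, redirects, and JavaScript-rendered content.

```typescript
// Stub for whatever automated rule engine the platform provides.
function runAutomatedChecks(url: string, html: string): void {
  console.log(`checked ${url} (${html.length} bytes)`);
}

// Hypothetical sketch: crawl same-origin pages from a seed URL and diagnose
// each one with automated methods as it is discovered.
async function crawlAndTest(seedUrl: string, maxPages = 50): Promise<void> {
  const origin = new URL(seedUrl).origin;
  const queue: string[] = [seedUrl];
  const seen = new Set<string>([seedUrl]);

  while (queue.length > 0 && seen.size <= maxPages) {
    const url = queue.shift()!;
    const html = await (await fetch(url)).text();
    runAutomatedChecks(url, html);

    // Enqueue same-origin links we have not visited yet.
    for (const match of html.matchAll(/href="([^"#]+)"/g)) {
      let next: string;
      try {
        next = new URL(match[1], url).toString();
      } catch {
        continue; // skip malformed URLs
      }
      if (next.startsWith(origin) && !seen.has(next)) {
        seen.add(next);
        queue.push(next);
      }
    }
  }
}
```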

Each of these testing approaches provides benefits and detriments at certain points in the development life cycle. The issue, then, is not one of picking a single approach but of providing an easy way to select the approach that offers the right tradeoff for the situation at hand, ensuring that the most cost-effective testing approach can be executed and the risk reduction / cost tradeoff maximized.
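
As a sketch of what such a selection rule might look like – with entirely illustrative stages and criteria, not a recommendation from this document – consider:

```typescript
type Approach = "automatic" | "quick-test" | "at-testing" | "audit";

// Hypothetical selector: choose the cheapest approach that satisfies the
// risk and lifecycle constraints of the situation at hand.
function pickApproach(
  stage: "commit" | "sprint" | "release",
  highRisk: boolean,
  needsATValidation = false
): Approach {
  if (stage === "release") return "audit"; // formal compliance claim needed
  if (needsATValidation) return "at-testing"; // verify behavior in specific AT
  if (stage === "sprint") return highRisk ? "quick-test" : "automatic";
  return "automatic"; // per-commit checks stay fully automated
}
```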

Browser Centric

The ideal interface for performing manual and guided automatic accessibility testing is the browser, which has become the de facto operating system for the user interface of most applications. Ideally, accessibility testing begins by bringing up a page in a browser, pressing a button to diagnose the page, and then performing any required tests in a guided fashion. For example, to diagnose the Google home page, we would navigate to www.google.com and press a “Diagnose” button; all automatic tests would then be performed on the current page, and a testing tree interface would be displayed to allow completion of the remaining, relevant guided automatic and manual tests. Various tools are provided to assist the evaluator in determining whether a given test passes or fails – for example, a color contrast eye dropper, an option to display the heading structure of a page, or a visual indication of tables. The results are automatically stored on a central server for distribution to whoever owns the relevant asset as part of the diagnosis process.
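
As one example of such an evaluator aid, the following sketch renders the heading structure of the current page as an indented outline, assuming it is wired to a toolbar "Diagnose" or preview button; the presentation is illustrative only.

```typescript
// Build an indented outline of the page's headings so skipped levels or
// misused headings are obvious at a glance.
function headingOutline(doc: Document): string {
  const headings = doc.querySelectorAll<HTMLHeadingElement>("h1, h2, h3, h4, h5, h6");
  return Array.from(headings)
    .map((h) => {
      const level = Number(h.tagName[1]); // "H2" -> 2
      return "  ".repeat(level - 1) + `${h.tagName}: ${h.textContent?.trim()}`;
    })
    .join("\n");
}

console.log(headingOutline(document));
```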

Simple

One of the main challenges in a broad and deep enterprise-level accessibility platform is providing role-based, specific access to the right information out of thousands of best practices and millions of results. Additionally, while final testing must validate an application’s compliance with all relevant Section 508 or WCAG requirements, in practice this may not be the testing method that makes the most sense for a given evaluator at a given point in time. For example, a designer may want to evaluate the design of a site without having to see the code-level requirements that developers use to implement or remediate for compliance.

Another reality of accessibility is that it tends to impact a large number of people with significant variance in technical depth. For some users, a spider and HTML are relatively complex topics; for others, direct access to the source code and application programming interface (API) for AMP is their focus. In practice, however, the makeup of users tends to skew toward those less technical in nature who are looking to get in and out of the system as quickly as possible.

This requires (i) keeping the workflows in AMP as simple as possible and (ii) removing or limiting the number of features exposed in the system. Historically, our focus at Level Access has been on broadly exposing features (more features = good). Going forward, our focus is on enabling the same core business tasks while removing or simplifying the activities required to complete them.

An Up or Down Determination of Compliance is Good

Currently, AMP provides a variety of summary reports that allow you to slice and dice the compliance of a system in different fashions. Level Access was the first company in the market to provide percentage-based compliance reporting, and it remains a core part of the drill-down reporting experience provided.

What Level Access has found, however, is that customers are increasingly looking for a basic way to quickly determine the compliance of a system. This type of reporting takes a red / yellow / green dashboard approach to compliance, where you can quickly see whether an application is compliant, potentially in trouble, or not compliant. These reports are provided in real time, allowing one to quickly check on any high-risk assets to make sure they are currently compliant. When manual testing has not been performed, report viewers must be warned that automated results alone are not sufficient to determine compliance.
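
A minimal sketch of such a roll-up, assuming made-up thresholds and field names rather than AMP's actual logic:

```typescript
type Status = "green" | "yellow" | "red";

interface AssetResults {
  violations: number;         // confirmed failures across all test types
  pendingManualTests: number; // manual and guided tests not yet performed
}

// Reduce an asset's results to a traffic-light status, flagging incomplete
// manual coverage since automated results alone cannot establish compliance.
function complianceStatus(r: AssetResults): { status: Status; warning?: string } {
  const status: Status =
    r.violations === 0 ? "green" : r.violations <= 5 ? "yellow" : "red";
  const warning =
    r.pendingManualTests > 0
      ? `${r.pendingManualTests} manual tests outstanding; automated results ` +
        `alone are not sufficient to determine compliance.`
      : undefined;
  return { status, warning };
}
```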

Learn as you go

It would be great if all developers, designers, project managers, and QA engineers had extensive training on accessibility. In practice, accessibility training occurs with widely varying degrees of formality, and it is rarely a requirement for deploying content to a site or checking in code. The implication is that the vast majority of users of an accessibility platform and its audit reports will have little prior knowledge of accessibility and little to no formal training in the subject. We therefore assume the average user has had little or no training in accessibility, and the AMP in-system workflow focuses on teaching users what is required for conformance as they use the system, rather than expecting all users to take full, formal training courses on accessibility.

Level Access pioneered this approach with the creation of Just-in-Time Learning – where information about best practices was provided to developers in the context of the current violation – allowing users to learn as they go with live code examples. This is an experience we strive to support across all desktop and toolbar clients of the AMP platform.
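
A minimal sketch of the Just-in-Time idea, assuming an invented rule ID and content store; the real AMP best-practice library is far richer:

```typescript
interface BestPractice {
  id: string;
  summary: string;
  example: string; // compliant live code example shown beside the violation
}

// Tiny stand-in for the platform's best-practice library.
const bestPractices = new Map<string, BestPractice>([
  ["img-alt", {
    id: "img-alt",
    summary: "Images must have text alternatives that convey their purpose.",
    example: `<img src="print.svg" alt="Print this page">`,
  }],
]);

// Attach the matching explanation and example to a reported violation so the
// developer learns in the context of the failure itself.
function annotateViolation(ruleId: string, location: string): string {
  const bp = bestPractices.get(ruleId);
  if (!bp) return `Violation of ${ruleId} at ${location}`;
  return [
    `Violation at ${location}`,
    `Why it matters: ${bp.summary}`,
    `Compliant example: ${bp.example}`,
  ].join("\n");
}
```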

It is also critical that training curricula and reference content be available to all team members as part of the enterprise-level accessibility platform. Providing role-based training materials and a streamlined training plan based on the role of the user appears to be the most effective option when organizations mandate accessibility training. Users then feel that the training is relevant to their position and targeted at them. Targeted training is also less time consuming and more focused on job responsibilities, increasing the rate at which people take ownership of the challenge.

 