5.8. Testing for Accessibility

One of the best ways to ensure successful implementation of these guidelines is through testing. How else will you know whether you've hit the mark in providing accessible content? There are four primary methods of testing for accessibility: by developers, by expert review, with real users, and with automated tools.

5.8.1. Testing by Developers

You can find accessibility testing tools online and on the desktop, ranging from small-scale checkers to enterprise-level suites that track progress over time, automate reporting, and allow for manual review in conjunction with automated tests. These tools should be in every web developer's toolkit. Beyond informal accessibility testing, they are often useful for general web development as well.
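As a sketch of the kind of small-scale check a developer might keep in their toolkit, the following hypothetical Python script scans a page for `<img>` tags that lack an `alt` attribute. The names and approach are illustrative only, not taken from any particular tool; it assumes the page markup is available as a string.

```python
from html.parser import HTMLParser


class AltTextChecker(HTMLParser):
    """Collects the src of every <img> tag that has no alt attribute."""

    def __init__(self):
        super().__init__()
        self.missing_alt = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attrs = dict(attrs)
            if "alt" not in attrs:
                # Record which image is missing alt text so it can be reported.
                self.missing_alt.append(attrs.get("src", "(no src)"))


def images_missing_alt(html):
    """Return the src values of images with no alt attribute at all."""
    checker = AltTextChecker()
    checker.feed(html)
    return checker.missing_alt
```

A check like this catches only the mechanical failure (the attribute is absent entirely); as the rest of this section discusses, it says nothing about whether the alt text that is present is any good.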
5.8.2. Expert Review

Expert review testing involves one or more accessibility specialists reviewing a web site, page, or set of pages to perform a heuristic analysis and evaluate conformance against a set of criteria, such as the W3C or Section 508 standards. The analysis is based on experience and common accessibility "rules of thumb." Once the review is complete, the reviewers usually prepare a report that outlines specific accessibility issues, methods for improving accessibility, and areas that need further testing by users with various disabilities (often referred to as "pan-disability" testing). They may or may not assign a severity to each issue, but they will likely make prioritized recommendations, distinguishing the items that should be fixed first from those that should be fixed but are less critical.

5.8.3. Testing with Users

Although it is fine for an expert to review a site, feedback is that much more meaningful and powerful when it comes from people who use assistive technology every day. User testing falls into two categories: general review and testing of specific tasks. General review aims to provide an overall impression of the accessibility of a site, without particular goals in mind. Although this can be useful for finding "obvious" accessibility issues such as missing alt text, spelling mistakes, and confusing content or reading order, it may not be as useful as testing specific, critical tasks.
User testing that provides an overall impression of the accessibility of a site can be useful, but it pales in comparison to watching users attempt to complete tasks that are critical to their use of the application or site. Several things often happen during these facilitated tests: an observer notes difficult areas, task completion is ranked (for example: completed, completed with difficulty, completed with assistance, not completed), and code is reviewed to identify areas for improvement. User testing should not be treated as a final stage of development; it should begin early in the development process, be conducted with multiple users with various disabilities, and be repeated after improvements are made. In some cases, however, we don't have that luxury. So, how much accessibility testing should you do? As much as you can! Some is better than none. If all you can manage is expert review, or testing with a handful of users, then do that, and do it well.

5.8.4. Automated Testing Tools

Used with appropriate caution and judgment, automated tools can be very useful for finding accessibility problems in a site, tracking progress over time, and identifying possible issues that bear further investigation. The W3C maintains an extensive list of testing tools at www.w3.org/WAI/ER/existingtools.html. Keep in mind, however, that ability and disability are relative terms, so testing against black-and-white absolutes is sometimes problematic and always controversial. Remember that with all automated testing tools, you may see reported issues that do not apply to your particular site or that are difficult to test. For example, after scanning a page that uses JavaScript, many automated testing tools will state that you have used JavaScript in the page and therefore must include an alternative in a <noscript>…</noscript> block.
The problem is that the automated test does not know what the script is doing, or what the result will be when the page is used with JavaScript both on and off. As another simple example to illustrate the point: an automated testing tool can test for the presence of alternative text on an image. It can even test whether other images carry the same alternative text, and whether the image is part of a link. However, it cannot run any test that determines whether the alternative text is appropriate for the image in question. Therein lies the problem with automated testing: human judgment is still required and must be factored into testing time, because automated testing on its own is simply not the answer. For the best testing, aim for a combination of automated testing methods, browser-based tools, expert review, and user task completion.
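To make that limitation concrete, here is a hypothetical Python sketch of the mechanical side of the alt-text check described above: it can flag alternative text that is reused across multiple images, but nothing in it can judge whether any given alt string actually describes its image. All names here are illustrative, not drawn from a real tool.

```python
from collections import Counter
from html.parser import HTMLParser


class AltCollector(HTMLParser):
    """Records the alt text of every <img> tag that has one."""

    def __init__(self):
        super().__init__()
        self.alts = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attrs = dict(attrs)
            if "alt" in attrs:
                self.alts.append(attrs["alt"])


def duplicated_alt_text(html):
    """Flag alt strings reused on more than one image -- a purely
    mechanical check. Whether any alt string suits its image is a
    judgment this code cannot make; that remains a human task."""
    collector = AltCollector()
    collector.feed(html)
    return sorted(alt for alt, count in Counter(collector.alts).items()
                  if count > 1)
```

A tool like this will happily pass a page whose every image is labeled "picture," which is exactly why human judgment must be factored into the testing plan.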