5.8. Testing for Accessibility

One of the best ways to ensure successful implementation of these guidelines is through testing. How else will you know when you've hit the mark in terms of providing accessible content? There are four primary methods of testing for accessibility: by developers, by expert review, with real users, and with automated tools.

5.8.1. Testing by Developers

Accessibility testing tools are available both online and on the desktop, ranging from utilities for small-scale spot checks to enterprise-level suites that let you track progress over time, automate reporting, and combine manual review with automated tests.

These items should be in every web developer's toolkit. In addition to their use for informal accessibility testing, they are often useful for general web development as well.


Web Developer Toolbar for Firefox/Mozilla (addons.mozilla.org/extensions/moreinfo.php?id=60)

An extension for Firefox and Mozilla, the Web Developer Toolbar provides a host of tools that are useful for low-level accessibility testing. It allows you to easily disable CSS and JavaScript, as well as replace images with their alt text. Quick access to these tools helps assess your work against the guidelines presented in this chapter.
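
To get a feel for what the image-replacement feature does, the following minimal script (our own illustrative sketch, not part of the toolbar itself) swaps each image on a page for its alternative text, making missing or unhelpful alt attributes immediately visible. It can be saved as a bookmarklet for one-click use:

    // Replace every image with its alternative text so that
    // missing or unhelpful alt attributes stand out visually.
    var images = document.getElementsByTagName("img");
    // Walk backward: replaceChild() shrinks this live collection.
    for (var i = images.length - 1; i >= 0; i--) {
        var img = images[i];
        var alt = img.getAttribute("alt");
        var text = document.createTextNode(
            alt === null ? "[missing alt]" : "[alt: " + alt + "]");
        img.parentNode.replaceChild(text, img);
    }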


Accessibility Toolbar for Internet Explorer (www.nils.org.au/ais/web/resources/toolbar/)

Similar to the Web Developer Toolbar, the Accessibility Toolbar is designed to work in Internet Explorer for Windows. It provides quick access to many of the same types of tools found in the Web Developer Toolbar, as well as one-click launching of several online services covering readability analysis of passages of text, color contrast analysis, simulation of various other vision-related disabilities, and automated testing.
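
To illustrate what a color contrast analyzer computes, the sketch below implements the relative luminance and contrast ratio formulas published by the W3C (the function names are our own). The ratio runs from 1:1 (no contrast) to 21:1 (black on white); higher is better:

    // Relative luminance of an sRGB color (r, g, b from 0-255),
    // using the W3C's formula for contrast checking.
    function luminance(r, g, b) {
        var rgb = [r / 255, g / 255, b / 255];
        for (var i = 0; i < 3; i++) {
            var c = rgb[i];
            rgb[i] = (c <= 0.03928) ? c / 12.92
                                    : Math.pow((c + 0.055) / 1.055, 2.4);
        }
        return 0.2126 * rgb[0] + 0.7152 * rgb[1] + 0.0722 * rgb[2];
    }

    // Contrast ratio between foreground and background colors.
    function contrastRatio(fg, bg) {
        var l1 = luminance(fg[0], fg[1], fg[2]);
        var l2 = luminance(bg[0], bg[1], bg[2]);
        return (Math.max(l1, l2) + 0.05) / (Math.min(l1, l2) + 0.05);
    }

    contrastRatio([0, 0, 0], [255, 255, 255]);  // 21: maximum contrast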


Opera browser (www.opera.com)

The Opera browser is quite a good testing tool on its own. It includes quick access to various browser "modes" that are useful for demonstrating and testing a text-based view of a web site, and it offers built-in voice recognition of commands and speech output, both useful for quick demonstration and testing.


WAT-C online tools and services (www.wat-c.org/tools/)

In September of 2005, a group of web developers and accessibility specialists formed the Web Accessibility Tool Consortium (WAT-C) to provide, under a general public license, a series of tools that can be used both for accessibility testing and for education. These tools include the Accessibility Toolbar for IE and many useful online services developed by Gez Lemon of Juicy Studio (www.juicystudio.com).

5.8.2. Expert Review

Expert review testing involves one or more accessibility specialists reviewing a web site, page, or set of pages to perform a heuristic analysis and evaluation of conformance against a set of criteria, such as W3C or Section 508 standards. The analysis is based on experience and common "rules of thumb" in terms of accessibility issues.

Once the review is completed, the reviewers usually prepare a report that outlines specific accessibility issues, methods for improving accessibility, and areas that need to be tested further by users with various disabilities (often referred to as "pan-disability" testing). They may or may not assign a "severity" to each issue, but will likely make prioritized recommendations on those items that should be fixed first and those that should be fixed but may be less critical.

5.8.3. Testing with Users

Although it is fine for an expert to review a site, feedback is that much more meaningful and powerful when it comes from people who use assistive technology every day.

User testing falls into two categories: general review and testing of specific tasks. General review tends to be focused on providing a general impression of the accessibility of a site but without particular goals in mind. Although this can be useful for finding "obvious" accessibility issues such as missing alt text, spelling mistakes, and confusing content or reading order, it may not be as useful as testing for such specific tasks as:

  • Logging into the application

  • Finding the email address for support/help

  • Performing a typical transaction, such as determining your current bank balance or purchasing a specific item and having it shipped to your address

  • Creating a new account

User testing that provides an overall impression of the accessibility of a site can be useful, but it pales in comparison to actually watching users attempt to complete tasks that are critical to their use of the application or site.

Several things often happen during these facilitated tests: an observer makes notes about areas of difficulty, task completion is ranked (for example: completed, completed with difficulty, completed with assistance, or not completed), and the code is reviewed to identify areas for improvement.

User testing should not be seen as a final stage of development; it should be done early in the development process, conducted with multiple users with various disabilities, and repeated after improvements are made.

In some cases, however, we don't have that luxury. So, how much accessibility testing should you do? As much as you can! Some is better than none. If all you can manage is expert review, or testing with a handful of users, then do that, and do it well.

5.8.4. Automated Testing Tools

Used with appropriate caution and judgment, automated tools can be very useful for finding accessibility problems in a site, tracking progress over time, and identifying possible issues that bear further investigation. The W3C maintains an extensive list of available testing tools at www.w3.org/WAI/ER/existingtools.html. Keep in mind, however, that ability and disability are relative terms, so testing against black-and-white absolutes is sometimes problematic and always controversial.

It is important to remember that with any automated testing tool, you may see reported issues that do not apply to your particular site or that the tool cannot meaningfully evaluate. For example, after scanning a page that includes JavaScript, many automated testing tools will state that you have used JavaScript and therefore must provide an alternative in a <noscript></noscript> block. The problem is that the automated test has no way of knowing what the script does, or how the page behaves with JavaScript turned on versus turned off.
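
As a simple illustration, consider the hypothetical markup below. An automated tool can confirm that a noscript element is present, but only a person can judge whether its content is an adequate substitute for what the script actually does:

    <!-- A script with a noscript fallback. A tool can verify that
         the noscript element exists, but not whether its content
         is an adequate replacement for the script's output. -->
    <script type="text/javascript">
        document.write("Today is " + new Date().toDateString() + ".");
    </script>
    <noscript>
        <p>Enable JavaScript to see today's date here.</p>
    </noscript>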

As another simple example to illustrate the point: an automated testing tool can test for the presence of alternative text for an image. It can even test to see if there are other images with the same alternative text, and it can test to see if that image is part of a link. However, it cannot run any test that will determine whether or not the alternative text is appropriate for the image in question.
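
A rough sketch of such a mechanical check appears below (entirely our own illustration). It flags images that lack an alt attribute and non-empty alt text that repeats, but it has no way of judging whether any particular alt value suits its image:

    // A crude automated check for alt text problems.
    var images = document.getElementsByTagName("img");
    var seen = {};
    for (var i = 0; i < images.length; i++) {
        var img = images[i];
        var alt = img.getAttribute("alt");
        if (alt === null) {
            // Missing entirely: always worth reporting.
            alert("Missing alt attribute: " + img.src);
        } else if (alt !== "" && seen[alt]) {
            // Empty alt ("") is legitimate for decorative images,
            // so only repeated non-empty text is reported.
            alert("Repeated alt text \"" + alt + "\": " + img.src);
        } else {
            seen[alt] = true;
        }
    }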

Therein lies the limitation of automated testing: human judgment is still required, and the time for it must be factored into the testing schedule, because automated testing on its own is simply not the answer.

For the best results, aim for a combination of automated testing, browser-based tools, expert review, and task-based testing with real users.



