QA for A11Y
Conducting quality assurance testing on accessibility issues is often a stumbling block for teams after receiving the results of an accessibility audit. Assuming your audit came back with a long list of issues, there is typically a learning curve for the Dev and QA teams as they work through fixing and re-testing. After the Dev team has implemented changes, it then falls to the QA team to verify the fixes. QA for accessibility issues can be more difficult to get right than QA for other issues. Some reasons for this include:
- Testers may have never dealt with accessibility before, so these types of issues are new to them
- The issue a tester is looking for may not be visible on screen; instead, it may be buried in the underlying code
- Some issues require testing with assistive technologies to properly verify the fix
- Issues often need to be tested in context or relationship to other things
Many accessibility issues that come back from an audit are related to poorly implemented ARIA roles, states, and properties. ARIA, the Accessible Rich Internet Applications suite of web standards, defines markup that can be added to HTML to make complex user interfaces more accessible to users of assistive technology. It addresses the problem of how to create advanced features and controls for the web when no native HTML element meets the need. A QA tester who has never heard much about ARIA may have tested these structures for general functionality before the accessibility audit, unaware of the presence and importance of the ARIA markup in how they work with assistive technologies. If a developer has made updates to an element or component that uses ARIA, the QA tester must thoroughly read the original audit’s issue description, recommended solution, and any linked documentation. They will likely have to open the browser’s developer tools and inspect the code. The tester must ensure that:
- The developer made the fixes as instructed
- The fixes follow the requirements of the ARIA specification
- The fixed component functions as expected during user interaction, both with and without assistive technology
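Some of this verification can be partially automated. As an illustration, here is a minimal sketch of a spot-check a QA tester might run against a fixed component's markup, using Python's standard `html.parser`. The role/required-attribute table is an assumption for this example and covers only a handful of cases from the ARIA specification; a real check would need the full spec's tables.

```python
# Illustrative sketch: flag elements whose ARIA role is missing a required
# state or property. The REQUIRED_ARIA table below is a small, partial
# sample of the ARIA spec's requirements, not a complete list.
from html.parser import HTMLParser

REQUIRED_ARIA = {
    "checkbox": ["aria-checked"],
    "combobox": ["aria-expanded"],
    "heading": ["aria-level"],
    "slider": ["aria-valuenow"],
}

class AriaChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.problems = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        role = attrs.get("role")
        for required in REQUIRED_ARIA.get(role, []):
            if required not in attrs:
                self.problems.append(f"<{tag} role={role}> missing {required}")

def check_aria(markup):
    """Return a list of missing required ARIA attributes in the markup."""
    checker = AriaChecker()
    checker.feed(markup)
    return checker.problems

# A custom checkbox built from a div must carry aria-checked; this one doesn't.
snippet = '<div role="checkbox" tabindex="0">Subscribe</div>'
print(check_aria(snippet))  # ['<div role=checkbox> missing aria-checked']
```

A check like this only catches attributes that are absent entirely; it cannot tell whether the values are kept in sync during interaction, which is why hands-on testing with a keyboard and screen reader remains essential.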
Sometimes only part of the recommendation is followed, or an ARIA attribute is added to an incorrect HTML element. Sometimes the issue requires more guidance than could be logged in the audit report, and additional reading is needed for both Dev and QA. Both must make that effort to understand the fixes they are implementing and testing. And finally, testing what happens to the component at the code level when someone interacts with it using multiple input modalities is crucial. Testing something with only the mouse will not tell you if it works with the keyboard. Failing to test something with a screen reader when it was logged as an issue that impacts screen reader users will often lead to issues passing QA, only to fail when they get to an accessibility tester.
Accessibility issues related to labeling of controls also require viewing the code and possibly testing with assistive technology, such as a screen reader. For form fields, a tester may see visual text labels on the screen, but that does not necessarily mean they are programmatically associated with the input elements. A tester must review the code to ensure they are connected properly. For icon buttons and links, such as “x” for close, or social media icons, the tester must again review the code to ensure there is a text equivalent for the icon, and that it is correct. The label can come in the form of off-screen text or an aria-label attribute, both of which necessitate checking the code. A screen reader can help by revealing what is announced when landing on form fields, buttons, or links, but with a caveat: some screen readers apply heuristics to compensate for deficiencies in the code. Always verify label issues by reviewing the code.
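The programmatic-association check described above can be sketched in code. This example, again using Python's standard `html.parser`, looks for `input` elements that have no `<label for="…">` pointing at them and no `aria-label` or `aria-labelledby` of their own; the form markup is invented for illustration.

```python
# Illustrative sketch: find inputs with no programmatic label.
# Visible text next to a field is not enough; it must be associated
# via <label for>, aria-label, or aria-labelledby.
from html.parser import HTMLParser

class LabelChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.label_for = set()   # ids referenced by <label for="...">
        self.inputs = []         # (input id, has its own ARIA name)

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "label" and "for" in attrs:
            self.label_for.add(attrs["for"])
        elif tag == "input" and attrs.get("type") != "hidden":
            named = "aria-label" in attrs or "aria-labelledby" in attrs
            self.inputs.append((attrs.get("id"), named))

def unlabeled_inputs(markup):
    """Return ids of inputs with no programmatic label in the markup."""
    checker = LabelChecker()
    checker.feed(markup)
    return [i or "(no id)" for i, named in checker.inputs
            if not named and i not in checker.label_for]

form = """
<label for="email">Email</label> <input id="email" type="text">
Phone: <input id="phone" type="text">
"""
print(unlabeled_inputs(form))  # ['phone'] -- visible text, but not associated
```

Note that both fields here have visible text beside them; only the code reveals that “Phone:” is not programmatically connected, which is exactly the failure mode a sighted tester can miss.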
Accessibility issues with error handling on forms are very common. When testing remediated forms for error handling, testers again need to thoroughly read the original issue description and verify all the suggested fixes are applied. It is imperative to interact with the form and deliberately trigger as many error states as possible. This can include attempting to submit with:
- Required fields left blank
- Incorrectly formatted information
- Number or date ranges outside the norm
Test how the errors are communicated when zoomed in on the page, when using a screen reader, and when color is not perceivable (such as in high contrast mode). Users should be able to easily identify which fields are in error in each case.
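One code-level check that complements the manual testing above is verifying that fields in an error state expose it programmatically: `aria-invalid="true"` on the field and an `aria-describedby` that points at the visible error text. The sketch below, with made-up form markup, checks that each flagged field's `aria-describedby` actually resolves to an element in the page.

```python
# Illustrative sketch: for each field marked aria-invalid="true", confirm
# its aria-describedby references an id that exists in the markup, so a
# screen reader can announce the error text with the field.
from html.parser import HTMLParser

class ErrorStateChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.ids = set()       # every id present in the markup
        self.invalid = []      # (field id, aria-describedby value)

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if "id" in attrs:
            self.ids.add(attrs["id"])
        if attrs.get("aria-invalid") == "true":
            self.invalid.append((attrs.get("id"), attrs.get("aria-describedby")))

def check_error_states(markup):
    """Return (field id, True if its error text id resolves) pairs."""
    checker = ErrorStateChecker()
    checker.feed(markup)
    return [(field, bool(desc) and desc in checker.ids)
            for field, desc in checker.invalid]

form = """
<input id="zip" aria-invalid="true" aria-describedby="zip-err">
<span id="zip-err">Enter a 5-digit ZIP code.</span>
"""
print(check_error_states(form))  # [('zip', True)] -- error text is linked
```

As with the other sketches, this only confirms that the wiring exists in the markup; whether the error is actually announced, readable when zoomed, and perceivable without color still has to be verified by hand.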
Time, patience, collaboration
Ultimately, being able to effectively test for accessibility issues and their fixes requires gaining some knowledge in accessibility best practices as a whole and how at least some assistive technologies work. This knowledge comes with time, practice, and, we hope, collaboration with the friendly team in the Stanford Office of Digital Accessibility.