Wai-aria: Tests for the WAI-ARIA Recommendations
================================================

The [WAI ARIA Recommendations](https://www.w3.org/TR/wai-aria)
define extensions to HTML4/5 in support of extended semantics. These
semantics make it easier for Assistive Technologies to interpret and
present content to users with varying physical and cognitive abilities.

The purpose of these tests is to help ensure that user agents support the
requirements of the Recommendations.

The general approach for this testing is to enable both manual and automated
testing, with a preference for automation.


Running Tests
-------------

In order to run these tests in an automated fashion, you will need a
special Assistive Technology Test Adapter (ATTA) for the platform under test.
We will provide a list of these for popular platforms here as they are made
available.

The ATTA will monitor the window under test via the platform's Accessibility
Layer, forwarding information about the Accessibility Tree to the running test
so that it can evaluate support for the various features under test.

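A minimal sketch of the kind of check a test can make against the information
the ATTA forwards. The property names and the flat-dictionary shape used here
are illustrative assumptions, not the actual ATTA protocol:

```python
# Illustrative sketch only: the real ATTA protocol and property names differ.
# Assume the ATTA has forwarded a flat dict of accessibility properties
# for the element under test.

def check_accessibility_properties(expected, actual):
    """Compare expected accessibility properties against those the ATTA
    reported, returning a list of human-readable failure messages."""
    failures = []
    for prop, want in expected.items():
        got = actual.get(prop)
        if got != want:
            failures.append(f"{prop}: expected {want!r}, got {got!r}")
    return failures

# Example: an element with role="checkbox" aria-checked="true" should be
# exposed with a matching role and checked state (names are hypothetical).
expected = {"role": "checkbox", "checked": "true"}
actual = {"role": "checkbox", "checked": "true", "name": "Subscribe"}
print(check_accessibility_properties(expected, actual))  # -> []
```

An empty failure list means the platform exposed every property the test
expected; anything else is reported back to the test driver as a failure.
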
The workflow for running these tests is something like:

1. Start up the ATTA for the platform under test, informing it of the location
   of the test server and the user agent to be tested.
2. Start up the test driver window and select the wai-aria tests to be run
   (e.g., the ARIA 1.1 tests) - click "Start".
3. A window pops up that shows a test - the description of which tells the
   tester what is being tested. In an automated test, the test will proceed
   without user intervention. In a manual test, some user input may be required
   in order to drive the test.
4. The test runs. Success or failure is determined and reported to the test
   driver window, which then cycles to the next test in the sequence.
5. Repeat steps 2-4 until done.
6. Download the JSON format report of test results, which can then be visually
   inspected, reported on using various tools, or passed on to W3C for
   evaluation and collection in the Implementation Report via GitHub.
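
The JSON report from step 6 can be post-processed with ordinary tooling. The
sketch below assumes the report resembles the common wptreport shape (a
`results` array of `{test, status}` records); that schema is an assumption,
not something this document specifies:

```python
import json

# Sketch only: assumes the downloaded report resembles the common
# wptreport shape, {"results": [{"test": ..., "status": ...}, ...]}.
# The real schema may differ.

def summarize_report(report_json):
    """Count test results by status (e.g. PASS, FAIL, TIMEOUT)."""
    counts = {}
    for result in json.loads(report_json)["results"]:
        status = result["status"]
        counts[status] = counts.get(status, 0) + 1
    return counts

sample = json.dumps({"results": [
    {"test": "/wai-aria/roles/checkbox.html", "status": "PASS"},
    {"test": "/wai-aria/roles/switch.html", "status": "FAIL"},
    {"test": "/wai-aria/states/checked.html", "status": "PASS"},
]})
print(summarize_report(sample))  # -> {'PASS': 2, 'FAIL': 1}
```

A quick tally like this is useful for spotting regressions before handing the
report on for collection in the Implementation Report.
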

**Remember that while these tests are written to help exercise implementations,
their other (important) purpose is to increase confidence that there are
interoperable implementations.** So, implementers are the audience, but these
tests are not meant to be a comprehensive collection of tests for a client that
might implement the Recommendation.


Capturing and Reporting Results
-------------------------------

As tests are run against implementations, if the results of testing are
submitted to [test-results](https://github.com/w3c/test-results/) then they will
be automatically included in documents generated by
[wptreport](https://www.github.com/w3c/wptreport). The same tool can be used
locally to view reports about recorded results.


Writing Tests
-------------

If you are interested in writing tests for this environment, see the
associated [CONTRIBUTING](CONTRIBUTING.md) document.