dpub-aria: Tests for the DPUB-ARIA Recommendation
=================================================

The [DPUB ARIA Recommendation](https://www.w3.org/TR/dpub-aria-1.0)
defines extensions to HTML4/5 in support of extended semantics. These
semantics make it easier for Assistive Technologies to interpret and
present content to users with varying physical and cognitive abilities.

The purpose of these tests is to help ensure that user agents support the
requirements of the Recommendation.

The general approach for this testing is to enable both manual and automated
testing, with a preference for automation.


Running Tests
-------------

In order to run these tests in an automated fashion, you will need to have a
special Assistive Technology Test Adapter (ATTA) for the platform under test.
We will provide a list of these for popular platforms here as they are made
available.

The ATTA will monitor the window under test via the platform's Accessibility
Layer, forwarding information about the Accessibility Tree to the running test
so that it can evaluate support for the various features under test.

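As a rough illustration of that evaluation step (hypothetical: the actual ATTA protocol, the property names, and the shape of the reported tree are platform-specific and are not defined here), a test might compare the properties an ATTA reports for an element against what the Recommendation requires:

```python
# Hypothetical sketch only: the property names ("role", "description") and
# the dict-shaped report are assumptions, not the real ATTA wire format.

def check_element(reported, expected):
    """Return a list of mismatches between reported and expected properties."""
    mismatches = []
    for prop, want in expected.items():
        got = reported.get(prop)
        if got != want:
            mismatches.append(f"{prop}: expected {want!r}, got {got!r}")
    return mismatches

# Example: an element with role doc-glossary should be exposed with the
# platform's glossary mapping (the mapping name here is illustrative).
reported = {"role": "glossary", "description": "Terms"}
expected = {"role": "glossary"}
print(check_element(reported, expected))  # → []
```

A real test would receive `reported` from the ATTA rather than construct it inline; the comparison logic is the part being illustrated.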
The workflow for running these tests is as follows:

1. Start up the ATTA for the platform under test.
2. Start up the test driver window and select the dpub-aria tests to be run
   (e.g., the DPUB ARIA 1.0 tests) - click "Start".
3. A window pops up that shows a test - the description of which tells the
   tester what is being tested. In an automated test, the test will proceed
   without user intervention. In a manual test, some user input may be required
   in order to advance the test.
4. The test runs. Success or failure is determined and reported to the test
   driver window, which then cycles to the next test in the sequence.
5. Repeat steps 2-4 until done.
6. Download the JSON format report of test results, which can then be visually
   inspected, reported on using various tools, or passed on to W3C for
   evaluation and collection in the Implementation Report via GitHub.

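The JSON report from the last step can be examined with ordinary tooling. A minimal sketch, assuming the report holds a `results` array whose entries carry `test` and `status` fields (the test paths below are made up for illustration):

```python
import json
from collections import Counter

# Summarize a downloaded JSON results report by status.
# Assumes each entry in "results" has "test" and "status" fields.
report = json.loads("""
{"results": [
  {"test": "/dpub-aria/roles/doc-abstract.html", "status": "PASS"},
  {"test": "/dpub-aria/roles/doc-glossary.html", "status": "FAIL"}
]}
""")

counts = Counter(r["status"] for r in report["results"])
for status, n in sorted(counts.items()):
    print(f"{status}: {n}")
# → FAIL: 1
# → PASS: 1
```

In practice you would load the downloaded file with `json.load(open(path))` instead of the inline string.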
**Remember that while these tests are written to help exercise implementations,
their other (important) purpose is to increase confidence that there are
interoperable implementations.** So, implementers are the audience, but these
tests are not meant to be a comprehensive collection of tests for a client that
might implement the Recommendation.


Capturing and Reporting Results
-------------------------------

As tests are run against implementations, if the results of testing are
submitted to [test-results](https://github.com/w3c/test-results/) then they will
be automatically included in documents generated by
[wptreport](https://www.github.com/w3c/wptreport). The same tool can be used
locally to view reports about recorded results.


Writing Tests
-------------

If you are interested in writing tests for this environment, see the
associated [CONTRIBUTING](CONTRIBUTING.md) document.