Annotation-Protocol: Tests for the Web Annotation Protocol
==========================================================

The [Web Annotation Protocol](https://www.w3.org/TR/annotation-protocol)
specification presents a set of messages that allow annotation clients and
servers to interact seamlessly.

The purpose of these tests is to help validate that clients send, and are
capable of receiving, correctly formatted messages, and that servers are
able to receive and respond to correctly structured requests.
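As an illustration of what "correctly formatted" means here, the sketch below builds a minimal annotation in the JSON-LD serialization defined by the Web Annotation Data Model and applies a rough structural check of the kind these tests perform. The check is illustrative only, not the suite's actual validation logic, and the body/target URLs are example values.

```python
import json

# A minimal annotation in the JSON-LD serialization of the Web Annotation
# Data Model: "@context", "type", and at least one "target" are required.
annotation = {
    "@context": "http://www.w3.org/ns/anno.jsonld",
    "type": "Annotation",
    "body": "http://example.org/comment1",   # example values only
    "target": "http://example.com/page1",
}

def looks_like_annotation(msg: dict) -> bool:
    """Rough structural check; the real tests validate far more."""
    return (
        msg.get("@context") == "http://www.w3.org/ns/anno.jsonld"
        and msg.get("type") == "Annotation"
        and "target" in msg
    )

# Round-trip through JSON, as a message on the wire would be.
print(looks_like_annotation(json.loads(json.dumps(annotation))))  # → True
```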

The general approach for this testing is to enable both manual and
automated testing. However, since the specification has no user
interface requirements, there is no general automation mechanism that
can be offered for testing clients. Likewise, the server tests need to be
pointed at a server implementation before they can run. However, once this
basic information is provided, testing is automated.

Implementers can take advantage of the plumbing we provide here to
help their implementations talk to the endpoint we provide, or to exercise
their own endpoint with the provided server tests. This assumes knowledge
of the requirements of each test or collection of tests, so that the input
data is relevant. Each test or test collection contains sufficient
information for the task.

With regard to server tests, the browser tests we provide can be
pointed at an endpoint and will exercise it using well-defined
messages. This is done semi-automatically, although some set-up
is required.
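To make that exchange concrete, here is a self-contained sketch (not part of the harness) of the kind of message the tests drive at an endpoint: a stand-in "annotation container" built on Python's `http.server` accepts a POSTed annotation and answers `201 Created` with a `Location` header, as the protocol requires of a real server. The container path, the returned annotation URL, and the validation logic are all illustrative.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

class ToyContainer(BaseHTTPRequestHandler):
    """Stand-in annotation container; a real server does much more."""

    def do_POST(self):
        body = self.rfile.read(int(self.headers["Content-Length"]))
        anno = json.loads(body)
        ok = anno.get("type") == "Annotation" and "target" in anno
        self.send_response(201 if ok else 400)
        if ok:
            # A real server mints the new annotation's IRI here.
            self.send_header("Location", "http://localhost/annos/1")
        self.send_header("Content-Length", "0")
        self.end_headers()

    def log_message(self, *args):  # keep the demo quiet
        pass

# Run the toy container on an ephemeral local port.
server = HTTPServer(("127.0.0.1", 0), ToyContainer)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = "http://127.0.0.1:%d/annotations/" % server.server_port
payload = json.dumps({
    "@context": "http://www.w3.org/ns/anno.jsonld",
    "type": "Annotation",
    "target": "http://example.com/page1",
}).encode()
req = Request(url, data=payload, headers={
    # Profiled JSON-LD media type defined by the Web Annotation Protocol.
    "Content-Type": 'application/ld+json; profile="http://www.w3.org/ns/anno.jsonld"',
})
resp = urlopen(req)
print(resp.status, resp.headers["Location"])  # → 201 http://localhost/annos/1
server.shutdown()
```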

Running Tests
-------------

For this test collection we will initially be creating manual
tests. These automatically determine pass or fail and generate output for
the main WPT window. The plan is to minimize the number of such tests to
ease the burden on the testers while still exercising all the features.

The workflow for running these tests is something like:

1. Start up the test driver window and select the annotation-protocol tests -
   either client or server - then click "Start".
2. A window pops up showing a test, whose description tells the
   tester what is required. The window contains fields into which some
   information is provided.
3. In the case of client testing, the tester (presumably in another window)
   brings up their annotation client and points it at the supplied endpoint.
   Then they perform the action specified (annotating content in the test
   window, requesting an annotation from the server, etc.).
4. The server receives the information from the client, evaluates it, and
   reports the result of testing. In the event of multi-step messages, the
   cycle repeats until complete.
5. Repeat steps 2-4 until done.
6. Download the JSON-format report of test results, which can then be visually
   inspected, reported on using various tools, or passed on to W3C for
   evaluation and collection in the Implementation Report via GitHub.
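The evaluate-and-report cycle in steps 2-6 can be sketched as a toy loop. This is purely illustrative and is not the harness's actual code: each message is checked by a stand-in test, and the results accumulate into a JSON report in the spirit of the one downloaded in step 6.

```python
import json

def evaluate(message: dict) -> str:
    """Stand-in for one test's check; real tests validate protocol details."""
    return "PASS" if message.get("type") == "Annotation" else "FAIL"

def run_session(messages) -> str:
    """Repeat the evaluate/report cycle and emit a JSON results report."""
    results = [
        {"test": f"test-{i}", "status": evaluate(msg)}
        for i, msg in enumerate(messages, start=1)
    ]
    return json.dumps({"results": results}, indent=2)

report = run_session([
    {"type": "Annotation", "target": "http://example.com/page1"},
    {"type": "Bookmark"},  # not an annotation, so this one fails
])
print(report)
```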

**Remember that while these tests are written to help exercise implementations,
their other (important) purpose is to increase confidence that there are
interoperable implementations.** So, implementers are our audience, but these
tests are not meant to be a comprehensive test collection for an implementer.
The bulk of the tests are manual because there are no UI requirements in the
Recommendation that would make it possible to effectively stimulate every
client portably.

Having said that, because the structure of these "manual" tests is very rigid,
an implementer who understands test automation can use an open-source tool
such as [Selenium](http://www.seleniumhq.org/) to run these "manual" tests
against their implementation: exercising it against content they provide to
create annotations, feeding the resulting data into our test input field, and
running the test.
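As a sketch of that automation, the function below shows the shape of such a Selenium driver script. The element ids `annotation-input` and `run-test` are hypothetical placeholders, not the harness's real ids; a real script would look them up in the test page's markup. The locator string `"id"` is Selenium's `By.ID` strategy.

```python
def drive_manual_test(driver, test_url: str, annotation_json: str) -> None:
    """Fill a manual test's input field with annotation data and run it.

    `driver` is any Selenium WebDriver instance (e.g. webdriver.Firefox()).
    The element ids used below are hypothetical.
    """
    driver.get(test_url)
    field = driver.find_element("id", "annotation-input")  # hypothetical id
    field.clear()
    field.send_keys(annotation_json)
    driver.find_element("id", "run-test").click()          # hypothetical id
```

Because the function relies only on the WebDriver interface, it can also be exercised with a stub driver when no browser is available.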

Capturing and Reporting Results
-------------------------------

As tests are run against implementations, if the results of testing are
submitted to [test-results](https://github.com/w3c/test-results/) then they will
be automatically included in documents generated by
[wptreport](https://www.github.com/w3c/wptreport). The same tool can be used
locally to view reports about recorded results.

Automating Test Execution
-------------------------

Writing Tests
-------------

If you are interested in writing tests for this environment, see the
associated [CONTRIBUTING](CONTRIBUTING.md) document.