Chromium Code Reviews

Unified Diff: third_party/WebKit/LayoutTests/imported/wpt/annotation-protocol/README.md

Issue 2303013002: Add UMA metric to track usage of sending a mousedown to select elements. (Closed)
Patch Set: W3C auto test import CL. Created 4 years, 3 months ago
Index: third_party/WebKit/LayoutTests/imported/wpt/annotation-protocol/README.md
diff --git a/third_party/WebKit/LayoutTests/imported/wpt/annotation-protocol/README.md b/third_party/WebKit/LayoutTests/imported/wpt/annotation-protocol/README.md
new file mode 100644
index 0000000000000000000000000000000000000000..d0ec93573fedefe140a4052bf448df5edd5cebf3
--- /dev/null
+++ b/third_party/WebKit/LayoutTests/imported/wpt/annotation-protocol/README.md
@@ -0,0 +1,86 @@
+Annotation-Protocol: Tests for the Web Annotation Protocol
+==========================================================
+
+The [Web Annotation Protocol](https://www.w3.org/TR/annotation-protocol)
+specification defines a set of messages that allow annotation clients and
+servers to interact seamlessly.
+
+The purpose of these tests is to help validate that clients send and are
+capable of receiving correctly formatted messages, and that servers are
+able to receive and respond to correctly structured requests.
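+
+To make the message shapes concrete, here is a minimal sketch (Python,
+using the `requests` library) of the kind of exchange the protocol
+describes: a client POSTs a JSON-LD annotation to an annotation container
+and expects a `201 Created` response. The container URL and annotation
+values are illustrative placeholders, not part of this suite.
+
+```python
+# Sketch of a protocol-style create: POST a JSON-LD annotation to an
+# annotation container. The container URL below is a placeholder.
+import requests
+
+CONTAINER = "https://example.org/annotations/"  # hypothetical container
+
+annotation = {
+    "@context": "http://www.w3.org/ns/anno.jsonld",
+    "type": "Annotation",
+    "body": "http://example.org/post1",
+    "target": "http://example.com/page1",
+}
+
+resp = requests.post(
+    CONTAINER,
+    json=annotation,
+    headers={"Content-Type": "application/ld+json"},
+)
+
+# A conforming server creates the annotation and answers 201 Created,
+# with the new annotation's IRI in the Location header.
+assert resp.status_code == 201
+print("Created:", resp.headers.get("Location"))
+```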
+
+The general approach is to enable both manual and automated testing.
+However, since the specification imposes no user interface requirements,
+there is no general mechanism for automating the client tests. The server
+tests likewise need to be pointed at a server implementation to exercise.
+Once that basic information is provided, however, testing is automated.
+
+Implementers can take advantage of the plumbing we provide here to help
+their implementations talk to the endpoint we provide, or to exercise
+their own endpoint with the provided server tests. This assumes knowledge
+of the requirements of each test or collection of tests so that the input
+data is relevant. Each test or test collection contains information
+sufficient for the task.
+
+With regard to server tests, the browser tests we provide can be pointed
+at an endpoint and will exercise it using well-defined messages. This is
+done semi-automatically, although some set-up is required.
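+
+As a rough illustration of "exercising an endpoint with well-defined
+messages", the sketch below (Python, `requests`) probes a hypothetical
+annotation container and checks one of the headers the protocol calls
+for. The endpoint URL is a placeholder and the checks are simplified;
+the real tests encode the normative requirements.
+
+```python
+# Sketch of a server-side probe against a tester-supplied endpoint.
+import requests
+
+ENDPOINT = "https://example.org/annotations/"  # supplied by the tester
+
+resp = requests.get(ENDPOINT, headers={"Accept": "application/ld+json"})
+assert resp.status_code == 200
+
+# Annotation containers are LDP containers, advertised via a Link header.
+assert "ldp#BasicContainer" in resp.headers.get("Link", "")
+
+# The Allow header tells a client which methods the container supports.
+print("Allow:", resp.headers.get("Allow", "(not set)"))
+```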
+
+Running Tests
+-------------
+
+For this test collection we will initially create manual tests. These
+determine pass or fail automatically and report the result to the main
+WPT window. The plan is to minimize the number of such tests to ease the
+burden on testers while still exercising all the features.
+
+The workflow for running these tests is something like:
+
+1. Start up the test driver window and select the annotation-protocol tests -
+ either client or server - then click "Start".
+2. A window pops up that shows a test - its description tells the tester
+   what is required. The window contains fields into which the requested
+   information is entered.
+3. In the case of client testing, the tester (presumably in another
+   window) brings up their annotation client and points it at the
+   supplied endpoint. They then perform the action specified (annotating
+   content in the test window, requesting an annotation from the server,
+   etc.).
+4. The server receives the information from the client, evaluates it, and
+   reports the result of testing. In the event of multi-step messages,
+   the cycle repeats until complete.
+5. Repeat steps 2-4 until done.
+6. Download the JSON format report of test results (a sketch of
+   inspecting such a report follows this list), which can then be
+   visually inspected, reported on using various tools, or passed on to
+   W3C for evaluation and collection in the Implementation Report via
+   GitHub.
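+
+As a quick sketch of step 6, the snippet below (Python) summarizes a
+downloaded results file. It assumes the common WPT report shape - a
+top-level `results` array whose entries carry a test name and a status -
+and the filename is a placeholder.
+
+```python
+# Summarize a downloaded JSON test report by status.
+import json
+from collections import Counter
+
+with open("annotation-protocol-results.json") as f:  # hypothetical name
+    report = json.load(f)
+
+statuses = Counter(entry["status"] for entry in report["results"])
+for status, count in sorted(statuses.items()):
+    print(f"{status}: {count}")
+```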
+
+**Remember that while these tests are written to help exercise
+implementations, their other (important) purpose is to increase
+confidence that there are interoperable implementations.** So,
+implementers are our audience, but these tests are not meant to be a
+comprehensive test suite for an implementation. The bulk of the tests are
+manual because the Recommendation has no UI requirements that would make
+it possible to drive every client portably.
+
+Having said that, because the structure of these "manual" tests is very
+rigid, an implementer familiar with test automation can use an
+open-source tool such as [Selenium](http://www.seleniumhq.org/) to run
+these "manual" tests against their implementation: generating annotations
+from content they provide, feeding the data into our test input field,
+and running the test.
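+
+For example, a Selenium-driven run of one such "manual" test might look
+like the sketch below (Python bindings). The test URL and element
+locators are hypothetical - inspect the actual test pages for the real
+IDs before relying on this.
+
+```python
+# Sketch of automating one "manual" test with Selenium.
+from selenium import webdriver
+from selenium.webdriver.common.by import By
+
+driver = webdriver.Chrome()
+try:
+    # Placeholder URL for a single annotation-protocol test page.
+    driver.get("http://localhost:8000/annotation-protocol/client/test.html")
+
+    # Paste the annotation data your implementation produced into the
+    # test's input field, then run the check. Element IDs are invented.
+    field = driver.find_element(By.ID, "annotation-input")
+    field.send_keys(open("annotation.json").read())
+    driver.find_element(By.ID, "run-test").click()
+
+    # The harness writes its verdict into the page; scrape it.
+    print(driver.find_element(By.ID, "results").text)
+finally:
+    driver.quit()
+```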
+
+Capturing and Reporting Results
+-------------------------------
+
+As tests are run against implementations, any results submitted to
+[test-results](https://github.com/w3c/test-results/) will automatically
+be included in the documents generated by
+[wptreport](https://www.github.com/w3c/wptreport). The same tool can be
+used locally to view reports on recorded results.
+
+Automating Test Execution
+-------------------------
+
+Writing Tests
+-------------
+
+If you are interested in writing tests for this environment, see the
+associated [CONTRIBUTING](CONTRIBUTING.md) document.
