Index: docs/testing/writing_layout_tests.md |
diff --git a/docs/testing/writing_layout_tests.md b/docs/testing/writing_layout_tests.md |
new file mode 100644 |
index 0000000000000000000000000000000000000000..dd31b555d5c375de093ba32547a1028bffa99bce |
--- /dev/null |
+++ b/docs/testing/writing_layout_tests.md |
@@ -0,0 +1,573 @@ |
+# Writing Layout Tests |
+ |
+_Layout tests_ is a bit of a misnomer. This term is |
+[a part of our WebKit heritage](https://webkit.org/blog/1452/layout-tests-theory/), |
+and we use it to refer to every test that is written as a Web page (HTML, SVG, |
+or XHTML) and lives in |
+[third_party/WebKit/LayoutTests/](../../third_party/WebKit/LayoutTests). |
+ |
+[TOC] |
+ |
+## Overview |
+ |
+Layout tests should be used to accomplish one of the following goals: |
+ |
+1. The entire surface of Blink that is exposed to the Web should be covered by |
+ tests that we contribute to the |
+ [Web Platform Tests Project](https://github.com/w3c/web-platform-tests) |
+ (WPT). This helps us avoid regressions, and helps us identify Web Platform |
+ areas where the major browsers don't have interoperable implementations. |
+2. When a Blink feature cannot be tested using the Web Platform, and cannot be |
+ easily covered by |
+ [C++ unit tests](https://cs.chromium.org/chromium/src/third_party/WebKit/Source/web/tests/?q=webframetest&sq=package:chromium&type=cs), |
+ the feature must be covered by layout tests, to avoid unexpected regressions. |
+ These tests will use Blink-specific testing APIs that are only available in |
+ [content_shell](./layout_tests_in_content_shell.md). |
+ |
+### Test Types |
+ |
+There are three broad types of layout tests, listed in the order of preference. |
+ |
+* *JavaScript Tests* are the layout test implementation of |
+ [xUnit tests](https://en.wikipedia.org/wiki/XUnit). These tests contain |
+ assertions written in JavaScript, and pass if the assertions evaluate to |
+ true. |
+* *Reference Tests* render a test page and a reference page, and pass if the two |
+ renderings are identical, according to a pixel-by-pixel comparison. These |
+ tests are less robust, harder to debug, and significantly slower than |
+ JavaScript tests, and are only used when JavaScript tests are insufficient, |
+ such as when testing layout code. |
+* *Pixel Tests* render a test page and compare the result against a pre-rendered |
+ image in the repository. Pixel tests are less robust than JavaScript tests and |
+ reference tests, because the rendering of a page is influenced by many factors |
+ such as the host computer's graphics card and driver, the platform's text |
+ rendering system, and various user-configurable operating system settings. |
+ For this reason, it is not uncommon for a pixel test to have a different |
+ reference image for each platform that Blink is tested on. Pixel tests are |
+ least preferred, because the reference images are |
+ [quite cumbersome to manage](./layout_test_expectations.md). |
+ |
+## General Principles |
+ |
+The principles below are adapted from |
+[Test the Web Forward's Test Format Guidelines](http://testthewebforward.org/docs/test-format-guidelines.html) |
+and |
+[WebKit's Wiki page on Writing good test cases](https://trac.webkit.org/wiki/Writing%20Layout%20Tests%20for%20DumpRenderTree). |
+ |
+* Tests should be as **short** as possible. The page should only include |
+ elements that are necessary and relevant to what is being tested. |
+ |
+* Tests should be as **fast** as possible. Blink has several thousand layout |
+ tests that are run in parallel, and avoiding unnecessary delays is crucial to |
+ keeping our Commit Queue in good shape. |
+ * Avoid [window.setTimeout](https://developer.mozilla.org/en-US/docs/Web/API/WindowTimers/setTimeout), |
+ as it wastes time on the testing infrastructure. Instead, use specific |
+ event handlers, such as |
+ [window.onload](https://developer.mozilla.org/en-US/docs/Web/API/GlobalEventHandlers/onload), |
+ to decide when to proceed with a test. |
+ |
+* Tests should be **reliable** and yield consistent results for a given |
+ implementation. Flaky tests slow down fellow developers' debugging efforts and |
+ the Commit Queue. |
+  * `window.setTimeout` is again a primary offender here. Aside from wasting |
+ time on a fast system, tests that rely on fixed timeouts can fail when run |
+ on systems that are slower than expected. |
+ * Follow the guidelines in this |
+ [PSA on writing reliable layout tests](https://docs.google.com/document/d/1Yl4SnTLBWmY1O99_BTtQvuoffP8YM9HZx2YPkEsaduQ/edit). |
+ |
+* Tests should be **self-describing**, so that a project member can recognize |
+ whether a test passes or fails without having to read the specification of the |
+ feature being tested. `testharness.js` makes a test self-describing when used |
+ correctly, but tests that degrade to manual tests |
+ [must be carefully designed](http://testthewebforward.org/docs/test-style-guidelines.html) |
+ to be self-describing. |
+ |
+* Tests should use the **minimal** set of platform features needed to express |
+ the test scenario efficiently. |
+ * Avoid depending on edge case behavior of features that aren't explicitly |
+ covered by the test. For example, except where testing parsing, tests |
+ should contain valid markup (no parsing errors). |
+ * Prefer JavaScript's |
+ [===](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Operators/Comparison_Operators#Identity_strict_equality_()) |
+ operator to |
+ [==](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Operators/Comparison_Operators#Equality_()) |
+ so that readers don't have to reason about |
+ [type conversion](http://www.ecma-international.org/ecma-262/6.0/#sec-abstract-equality-comparison). |
+ |
+* Tests should be as **cross-platform** as reasonably possible. Avoid |
+ assumptions about device type, screen resolution, etc. Unavoidable assumptions |
+ should be documented. |
+ * When possible, tests should only use Web platform features, as specified |
+ in the relevant standards. |
+    * Test pages should use the HTML5 doctype (`<!doctype html>`) unless they |
+      are specifically testing quirks mode. |
+ * Tests should be written under the assumption that they will be upstreamed |
+ to the WPT project. For example, tests should follow the |
+ [WPT guidelines](http://testthewebforward.org/docs/writing-tests.html). |
+ * Tests that use Blink-specific testing APIs should feature-test for the |
+ presence of the testing APIs and degrade to |
+ [manual tests](http://testthewebforward.org/docs/manual-test.html) |
+ when the testing APIs are not present. |
+ |
+* Tests must be **self-contained** and not depend on external network resources. |
+ Unless used by multiple test files, CSS and JavaScript should be inlined using |
+ `<style>` and `<script>` tags. Content shared by multiple tests should be |
+ placed in a `resources/` directory near the tests that share it. See below for |
+ using multiple origins in a test. |
+ |
+* Test **file names** should describe what is being tested. File names should |
+  use lowercase words separated by hyphens, but should preserve the case of |
+  any embedded API names. For example, prefer `document-createElement.html` to |
+  `document-create-element.html`. |
+ |
+* Tests should prefer **modern features** in JavaScript and in the Web Platform. |
+ * JavaScript code should prefer |
+ [const](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Statements/const) |
+ and |
+ [let](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Statements/let) |
+ over `var`, prefer |
+ [classes](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Classes) |
+ over other OOP constructs, and prefer |
+ [Promises](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Promise) |
+ over other mechanisms for structuring asynchronous code. |
+ * The desire to use modern features must be balanced with the desire for |
+    cross-platform tests. Avoid using features that haven't shipped in the |
+    other major rendering engines (WebKit, Gecko, Edge). When unsure, check |
+ [caniuse.com](http://caniuse.com/). |
+ |
+* Tests must use the UTF-8 **character encoding**, which should be declared by |
+ `<meta charset=utf-8>`. This does not apply when specifically testing |
+ encodings. |
+ |
+* Tests must aim to have a **coding style** that is consistent with |
+ [Google's JavaScript Style Guide](https://google.github.io/styleguide/javascriptguide.xml), |
+ and |
+ [Google's HTML/CSS Style Guide](https://google.github.io/styleguide/htmlcssguide.xml), |
+ with the following exceptions. |
+ * Rules related to Google Closure and JSDoc do not apply. |
+ * Modern Web Platform and JavaScript features should be preferred to legacy |
+ constructs that target old browsers. For example, prefer `const` and `let` |
+ to `var`, and prefer `class` over other OOP constructs. This should be |
+ balanced with the desire to have cross-platform tests. |
+ * Concerns regarding buggy behavior in legacy browsers do not apply. For |
+ example, the garbage collection cycle note in the _Closures_ section does |
+ not apply. |
+ * Per the JavaScript guide, new tests should also follow any per-project |
+ style guide, such as the |
+ [ServiceWorker Tests Style guide](http://www.chromium.org/blink/serviceworker/testing). |
+ |
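+To illustrate the strict equality guideline above, the hypothetical standalone |
+snippet below (not taken from an existing test) shows the type conversions |
+that `==` performs silently and `===` avoids: |
+ |
+```html |
+<script> |
+// Abstract equality converts operand types before comparing. |
+console.assert(('' == 0) === true);    // '' converts to the number 0. |
+console.assert(('0' == 0) === true);   // '0' converts to the number 0. |
+console.assert(('' == '0') === false); // Two different strings. |
+ |
+// Strict equality never converts, so readers don't need to reason about it. |
+console.assert(('' === 0) === false); |
+</script> |
+``` |
+ |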
+## JavaScript Tests |
+ |
+Whenever possible, the testing criteria should be expressed in JavaScript. The |
+alternatives, which are described in later sections, result in slower and |
+less robust tests. |
+ |
+All new JavaScript tests should be written using the |
+[testharness.js](https://github.com/w3c/testharness.js/) testing framework. This |
+framework is used by the tests in the |
+[web-platform-tests](https://github.com/w3c/web-platform-tests) repository, |
+which is shared with all the other browser vendors, so `testharness.js` tests |
+are more accessible to browser developers. |
+ |
+As a shared framework, `testharness.js` enjoys high-quality documentation, such |
+as [a tutorial](http://testthewebforward.org/docs/testharness-tutorial.html) and |
+[API documentation](https://github.com/w3c/testharness.js/blob/master/docs/api.md). |
+Layout tests should follow the recommendations of the above documents. |
+Furthermore, layout tests should include relevant |
+[metadata](http://testthewebforward.org/docs/css-metadata.html). The |
+specification URL (in `<link rel="help">`) is almost always relevant, and is |
+incredibly helpful to a developer who needs to understand the test quickly. |
+ |
+Below is a skeleton for a JavaScript test embedded in an HTML page. Note that, |
+in order to follow the minimality guideline, the test omits the tags `<html>`, |
+`<head>` and `<body>`, as they can be inferred by the HTML parser. |
+ |
+```html |
+<!doctype html> |
+<meta charset="utf-8"> |
+<title>JavaScript: the true literal</title> |
+<link rel="help" href="https://tc39.github.io/ecma262/#sec-boolean-literals"> |
+<meta name="assert" content="The true literal is equal to itself and immutable"> |
+<script src="/resources/testharness.js"></script> |
+<script src="/resources/testharnessreport.js"></script> |
+<script> |
+ |
+// Synchronous test example. |
+test(() => { |
+ const value = true; |
+ assert_true(value, 'true literal'); |
+  assert_equals(!value, false, 'the logical negation of true'); |
+}, 'The literal true in a synchronous test case'); |
+ |
+// Asynchronous test example. |
+async_test(t => { |
+ const originallyTrue = true; |
+ setTimeout(t.step_func_done(() => { |
+    assert_true(originallyTrue); |
+ }), 0); |
+}, 'The literal true in a setTimeout callback'); |
+ |
+// Promise test example. |
+promise_test(() => { |
+ return new Promise((resolve, reject) => { |
+ resolve(true); |
+ }).then(value => { |
+ assert_true(value); |
+ }); |
+}, 'The literal true used to resolve a Promise'); |
+ |
+</script> |
+``` |
+ |
+Some points that are not immediately obvious from the example: |
+ |
+* The `<meta name="assert">` describes the purpose of the entire file, and |
+  must not be redundant with the `<title>`. Don't add a `<meta name="assert">` when the |
+ information in the `<title>` is sufficient. |
+* When calling an `assert_` function that compares two values, the first |
+ argument is the actual value (produced by the functionality being tested), and |
+  the second argument is the expected value (known good, golden). The order |
+  is important, because the testing harness relies on it to generate the |
+  expressive error messages that developers depend on when debugging test |
+  failures. |
+* The assertion description (the string argument to `assert_` methods) conveys |
+ the way the actual value was obtained. |
+ * If the expected value doesn't make it clear, the assertion description |
+ should explain the desired behavior. |
+ * Test cases with a single assertion should omit the assertion's description |
+ when it is sufficiently clear. |
+* Each test case describes the circumstance that it tests, without being |
+ redundant. |
+ * Do not start test case descriptions with redundant terms like "Testing " |
+ or "Test for". |
+ * Test files with a single test case should omit the test case description. |
+ The file's `<title>` should be sufficient to describe the scenario being |
+ tested. |
+* Asynchronous tests have a few subtleties. |
+ * The `async_test` wrapper calls its function with a test case argument that |
+ is used to signal when the test case is done, and to connect assertion |
+ failures to the correct test. |
+ * `t.done()` must be called after all the test case's assertions have |
+ executed. |
+    * Callbacks used by a test case, and the assertions they contain, must be |
+      wrapped in `t.step_func()` calls, so that |
+ assertion failures can be connected to the correct test case. |
+ * `t.step_func_done()` is a shortcut that combines `t.step_func()` with a |
+ `t.done()` call. |
+ |
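+As an illustration of these subtleties, here is a hypothetical asynchronous |
+test that separates `t.step_func()` from `t.done()`, using `MessageChannel`, |
+which always delivers messages asynchronously: |
+ |
+```html |
+<script> |
+async_test(t => { |
+  const channel = new MessageChannel(); |
+  channel.port1.onmessage = t.step_func(event => { |
+    assert_equals(event.data, 'ping'); |
+    t.done(); |
+  }); |
+  channel.port2.postMessage('ping'); |
+}, 'MessageChannel delivers a posted message'); |
+</script> |
+``` |
+ |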
+*** promo |
+Layout tests that load from `file://` origins must currently use relative paths |
+to point to |
+[/resources/testharness.js](../../third_party/WebKit/LayoutTests/resources/testharness.js) |
+and |
+[/resources/testharnessreport.js](../../third_party/WebKit/LayoutTests/resources/testharnessreport.js). |
+This is contrary to the WPT guidelines, which call for absolute paths. |
+This limitation does not apply to the tests in `LayoutTests/http`, which rely on |
+an HTTP server, or to the tests in `LayoutTests/imported/wpt`, which are |
+imported from the [WPT repository](https://github.com/w3c/web-platform-tests). |
+*** |
+ |
+ |
+### Relying on Blink-Specific Testing APIs |
+ |
+Tests that cannot be expressed using the Web Platform APIs rely on |
+Blink-specific testing APIs. These APIs are only available in |
+[content_shell](./layout_tests_in_content_shell.md). |
+ |
+### Manual Tests |
+ |
+Whenever possible, tests that rely on Blink-specific testing APIs should also be |
+usable as [manual tests](http://testthewebforward.org/docs/manual-test.html). |
+This makes it easy to debug the test, and to check whether our behavior matches |
+other browsers. |
+ |
+Manual tests should minimize the chance of user error. This implies keeping the |
+manual steps to a minimum, and having simple and clear instructions that |
+describe all the configuration changes and user gestures that match the effect |
+of the Blink-specific APIs used by the test. |
+ |
+Below is an example of a fairly minimal test that uses a Blink-Specific API |
+(`window.eventSender`), and gracefully degrades to a manual test. |
+ |
+```html |
+<!doctype html> |
+<meta charset="utf-8"> |
+<title>DOM: Event.isTrusted for UI events</title> |
+<link rel="help" href="https://dom.spec.whatwg.org/#dom-event-istrusted"> |
+<link rel="help" href="https://dom.spec.whatwg.org/#constructing-events"> |
+<meta name="assert" |
+ content="Event.isTrusted is true for events generated by user interaction"> |
+<script src="../../resources/testharness.js"></script> |
+<script src="../../resources/testharnessreport.js"></script> |
+ |
+<p>Please click on the button below.</p> |
+<button>Click Me!</button> |
+ |
+<script> |
+ |
+setup({ explicit_timeout: true }); |
+ |
+promise_test(() => { |
+  const button = document.querySelector('button'); |
+  return new Promise((resolve, reject) => { |
+    button.addEventListener('click', (event) => { |
+      resolve(event); |
+    }); |
+ |
+ if (window.eventSender) { |
+ eventSender.mouseMoveTo(button.offsetLeft, button.offsetTop); |
+ eventSender.mouseDown(); |
+ eventSender.mouseUp(); |
+ } |
+ }).then((clickEvent) => { |
+ assert_true(clickEvent.isTrusted); |
+ }); |
+ |
+}, 'Click generated by user interaction'); |
+ |
+</script> |
+``` |
+ |
+The test exhibits the following desirable features: |
+ |
+* It has a second specification URL (`<link rel="help">`), because the paragraph |
+ that documents the tested feature (referenced by the primary URL) is not very |
+ informative on its own. |
+* It links to the |
+ [WHATWG Living Standard](https://wiki.whatwg.org/wiki/FAQ#What_does_.22Living_Standard.22_mean.3F), |
+ rather than to a frozen version of the specification. |
+* It documents its assertions clearly. |
+ * The `<meta name="assert">` describes the purpose of the entire file. |
+ However, don't add a `<meta>` when the information in the `<title>` is |
+ sufficient. |
+    * The assertion descriptions convey the way the actual values were |
+      obtained. If the expected value doesn't make it clear, the assertion |
+      description should explain the desired behavior. |
+ * Each test case describes the circumstance that it tests. |
+* It contains clear instructions for manually triggering the test conditions. |
+ The test starts with a paragraph (`<p>`) that tells the tester exactly what to |
+ do, and the `<button>` that needs to be clicked is clearly labeled. |
+* It disables the timeout mechanism built into `testharness.js` by calling |
+ `setup({ explicit_timeout: true });` |
+* It checks for the presence of the Blink-specific testing APIs |
+ (`window.eventSender`) before invoking them. The test does not automatically |
+ fail when the APIs are not present. |
+* It uses [Promises](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Promise) |
+ to separate the test setup from the assertions. This is particularly helpful |
+ for manual tests that depend on a sequence of events to occur, as Promises |
+ offer a composable way to express waiting for asynchronous events that avoids |
+ [callback hell](http://stackabuse.com/avoiding-callback-hell-in-node-js/). |
+ |
+Notice that the test is pretty heavy compared to a minimal JavaScript test that |
+does not rely on testing APIs. Only use Blink-specific testing APIs when the |
+desired testing conditions cannot be set up using Web Platform APIs. |
+ |
+#### Using Blink-Specific Testing APIs |
+ |
+A downside of Blink-specific APIs is that they are not as well documented as the |
+Web Platform features. Learning to use a Blink-specific feature requires finding |
+other tests that use it, or reading its source code. |
+ |
+For example, the most popular Blink-specific API is `testRunner`, which is |
+implemented in |
+[components/test_runner/test_runner.h](../../components/test_runner/test_runner.h) |
+and |
+[components/test_runner/test_runner.cpp](../../components/test_runner/test_runner.cpp). |
+By skimming the `TestRunnerBindings::Install` method, we learn that the |
+testRunner API is exposed via the `window.testRunner` and |
+`window.layoutTestsController` objects, which are synonyms. Reading the |
+`TestRunnerBindings::GetObjectTemplateBuilder` method tells us what properties |
+are available on the `window.testRunner` object. |
+ |
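+For instance, a test that wants its result compared as text rather than as an |
+image can feature-test for the API before calling `dumpAsText()`, one of the |
+methods installed by `TestRunnerBindings`: |
+ |
+```html |
+<script> |
+// Degrade gracefully in a regular browser, where window.testRunner |
+// does not exist. |
+if (window.testRunner) |
+  testRunner.dumpAsText(); |
+</script> |
+``` |
+ |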
+*** aside |
+`window.testRunner` is the preferred way to access the `testRunner` APIs. |
+`window.layoutTestsController` is still supported because it is used by |
+3rd-party tests. |
+*** |
+ |
+*** note |
+`testRunner` is the most popular testing API because it is also used indirectly |
+by tests that stick to Web Platform APIs. The `testharnessreport.js` file in |
+`testharness.js` is specifically designated to hold glue code that connects |
+`testharness.js` to the testing environment. Our implementation is in |
+[third_party/WebKit/LayoutTests/resources/testharnessreport.js](../../third_party/WebKit/LayoutTests/resources/testharnessreport.js), |
+and uses the `testRunner` API. |
+*** |
+ |
+See the [components/test_runner/](../../components/test_runner/) directory and |
+[WebKit's LayoutTests guide](https://trac.webkit.org/wiki/Writing%20Layout%20Tests%20for%20DumpRenderTree) |
+for other useful APIs. For example, `window.eventSender` |
+([components/test_runner/event_sender.h](../../components/test_runner/event_sender.h) |
+and |
+[components/test_runner/event_sender.cpp](../../components/test_runner/event_sender.cpp)) |
+has methods that simulate input events, such as keyboard and mouse input and |
+drag-and-drop. |
+ |
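+As a sketch, simulating a key press with `eventSender.keyDown()` might look |
+like the snippet below; the `<input>` element is purely illustrative: |
+ |
+```html |
+<input> |
+<script> |
+document.querySelector('input').focus(); |
+if (window.eventSender) |
+  eventSender.keyDown('a'); |
+</script> |
+``` |
+ |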
+Here is a UML diagram of how the `testRunner` bindings fit into Chromium. |
+ |
+[](https://docs.google.com/drawings/d/1KNRNjlxK0Q3Tp8rKxuuM5mpWf4OJQZmvm9_kpwu_Wwg/edit) |
+ |
+### Text Test Baselines |
+ |
+By default, all the test cases in a file that uses `testharness.js` are expected |
+to pass. However, in some cases, we prefer to add failing test cases to the |
+repository, so that we can be notified when the failure modes change (e.g., we |
+want to know if a test starts crashing rather than returning incorrect output). |
+In these situations, a test file will be accompanied by a baseline, which is an |
+`-expected.txt` file that contains the test's expected output. |
+ |
+The baselines are generated automatically when appropriate by |
+`run-webkit-tests`, which is described [here](./layout_tests.md), and by the |
+[rebaselining tools](./layout_test_expectations.md). |
+ |
+Text baselines for `testharness.js` tests should be avoided, as a text |
+baseline associated with a `testharness.js` test indicates the presence of a |
+bug. For this |
+reason, CLs that add text baselines must include a |
+[crbug.com](https://crbug.com) link for an issue tracking the removal of the |
+text expectations. |
+ |
+* When creating tests that will be upstreamed to WPT, and Blink's current |
+ behavior does not match the specification that is being tested, a text |
+ baseline is necessary. Remember to create an issue tracking the expectation's |
+ removal, and to link the issue in the CL description. |
+* Layout tests that cannot be upstreamed to WPT should use JavaScript to |
+ document Blink's current behavior, rather than using JavaScript to document |
+ desired behavior and a text file to document current behavior. |
+ |
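+For reference, a text baseline for a `testharness.js` test records the status |
+of each test case, along the lines of the sketch below (the case names are |
+hypothetical): |
+ |
+``` |
+This is a testharness.js-based test. |
+PASS A test case that behaves as expected |
+FAIL A test case that documents a known bug assert_true: expected true got false |
+Harness: the test ran to completion. |
+``` |
+ |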
+ |
+### The js-test.js Legacy Harness |
+ |
+*** promo |
+For historical reasons, older tests are written using the `js-test` harness. |
+This harness is **deprecated**, and should not be used for new tests. |
+*** |
+ |
+If you need to understand old tests, the best `js-test` documentation is its |
+implementation at |
+[third_party/WebKit/LayoutTests/resources/js-test.js](../../third_party/WebKit/LayoutTests/resources/js-test.js). |
+ |
+`js-test` tests lean heavily on the Blink-specific `testRunner` testing API. |
+In a nutshell, the tests call `testRunner.dumpAsText()` to signal that the page |
+content should be dumped and compared against a text baseline (an |
+`-expected.txt` file). As a consequence, `js-test` tests are always accompanied |
+by text baselines. Asynchronous tests also use `testRunner.waitUntilDone()` and |
+`testRunner.notifyDone()` to tell the testing tools when they are complete. |
+ |
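+For orientation only (do not copy this into new tests), a minimal `js-test` |
+file looks roughly like the sketch below. The `shouldBe` family of functions |
+evaluates its stringified arguments and prints PASS / FAIL lines into the |
+page, which become the text baseline: |
+ |
+```html |
+<!doctype html> |
+<script src="../resources/js-test.js"></script> |
+<script> |
+description('Example of the deprecated js-test style.'); |
+shouldBeTrue('1 + 1 === 2'); |
+shouldBe('typeof "abc"', '"string"'); |
+</script> |
+``` |
+ |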
+### Tests that use an HTTP Server |
+ |
+By default, tests are loaded as if via `file:` URLs. Some web platform features |
+require tests served via HTTP or HTTPS, for example absolute paths (`src=/foo`) |
+or features restricted to secure protocols. |
+ |
+HTTP tests are those tests that are under `LayoutTests/http/tests` (or virtual |
+variants). Use a locally running HTTP server (Apache) to run them. Tests are |
+served off of ports 8000 and 8080 for HTTP, and 8443 for HTTPS. If you run the |
+tests using `run-webkit-tests`, the server will be started automatically. To run |
+the server manually to reproduce or debug a failure: |
+ |
+```bash |
+cd src/third_party/WebKit/Tools/Scripts |
+run-blink-httpd start |
+``` |
+ |
+The layout tests will be served from `http://127.0.0.1:8000`. For example, to |
+run the test `http/tests/serviceworker/chromium/service-worker-allowed.html`, |
+navigate to |
+`http://127.0.0.1:8000/serviceworker/chromium/service-worker-allowed.html`. Some |
+tests will behave differently if you go to 127.0.0.1 instead of localhost, so |
+use 127.0.0.1. |
+ |
+To kill the server, run `run-blink-httpd stop`, or just use `taskkill` |
+or the Task Manager on Windows, and `killall` or Activity Monitor on macOS. |
+ |
+The test server sets up an alias to the `LayoutTests/resources` directory. In |
+tests, you can access the testing framework at e.g. |
+`src="/js-test-resources/js-test.js"`. |
+ |
+TODO: Document [wptserve](http://wptserve.readthedocs.io/) when we are in a |
+position to use it to run layout tests. |
+ |
+## Reference Tests |
+ |
+*** promo |
+In the long term, we intend to express reference tests as |
+[WPT reftests](http://testthewebforward.org/docs/reftests.html). Currently, |
+Chromium's testing infrastructure does not yet fully support WPT reftests. In the |
+meantime, please use the legacy reference tests format described below. |
+*** |
+ |
+*** note |
+TODO: Summarize best practices for reftests and give examples. |
+*** |
+ |
+### Legacy Reference Tests |
+ |
+Blink has also inherited a sizable amount of |
+[reftests](https://trac.webkit.org/wiki/Writing%20Reftests) from WebKit. In |
+these tests, the reference page file name is based on the test page's file name |
+and an `-expected.html` suffix. A test may instead have a mismatch reference, |
+named with an `-expected-mismatch.html` suffix, which must render differently |
+from the test page for the test to pass. |
+ |
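+Below is a sketch of a legacy reference test pair. The file names are |
+hypothetical; the only requirement is the `-expected.html` suffix on the |
+reference page: |
+ |
+```html |
+<!-- css-calc-width.html renders a green square using the feature under test. --> |
+<!doctype html> |
+<style>div { width: calc(50px + 50px); height: 100px; background: green; }</style> |
+<div></div> |
+``` |
+ |
+```html |
+<!-- css-calc-width-expected.html renders the same square without calc(). --> |
+<!doctype html> |
+<style>div { width: 100px; height: 100px; background: green; }</style> |
+<div></div> |
+``` |
+ |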
+## Pixel Tests |
+ |
+`testRunner` APIs such as `window.testRunner.dumpAsTextWithPixelResults()` and |
+`window.testRunner.dumpDragImage()` create an image result that is associated |
+with the test. The image result is compared against an image baseline, which is |
+an `-expected.png` file associated with the test, and the test passes if the |
+image result is identical to the baseline, according to a pixel-by-pixel |
+comparison. Tests that have image results (and baselines) are called **pixel |
+tests**. |
+ |
+Pixel tests should still follow the principles laid out above. Pixel tests pose |
+unique challenges to the desire to have *self-describing* and *cross-platform* |
+tests. The |
+[WPT test style guidelines](http://testthewebforward.org/docs/test-style-guidelines.html) |
+contain useful guidance. The most relevant pieces of advice are below. |
+ |
+* use a green paragraph / page / square to indicate success |
+* use the red color or the word `FAIL` to highlight errors |
+* use the [Ahem font](https://www.w3.org/Style/CSS/Test/Fonts/Ahem/README) to |
+ minimize the variance introduced by the platform's text rendering system |
+ |
+The following snippet includes the Ahem font in a layout test. |
+ |
+```html |
+<style> |
+body { |
+ font: 10px Ahem; |
+} |
+</style> |
+<script src="/resources/ahem.js"></script> |
+``` |
+ |
+*** promo |
+Tests outside `LayoutTests/http` and `LayoutTests/imported/wpt` currently need |
+to use a relative path to |
+[/third_party/WebKit/LayoutTests/resources/ahem.js](../../third_party/WebKit/LayoutTests/resources/ahem.js). |
+*** |
+ |
+### Tests that need to paint, raster, or draw a frame of intermediate output |
+ |
+A layout test does not actually draw frames of output until the test exits. If |
+it is required to generate a painted frame, then use |
+`window.testRunner.displayAsyncThen`, which will run the machinery to put up a |
+frame, then call the passed callback. There is also a library at |
+`fast/repaint/resources/text-based-repaint.js` to help with writing paint |
+invalidation and repaint tests. |
+ |
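+A sketch of this pattern, assuming the usual `waitUntilDone()` / |
+`notifyDone()` protocol: |
+ |
+```html |
+<script> |
+if (window.testRunner) { |
+  testRunner.waitUntilDone(); |
+  testRunner.displayAsyncThen(() => { |
+    // A frame has now been painted; make the change whose repaint |
+    // behavior is under test, then finish. |
+    document.body.style.background = 'green'; |
+    testRunner.notifyDone(); |
+  }); |
+} |
+</script> |
+``` |
+ |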
+## Directory Structure |
+ |
+The [LayoutTests directory](../../third_party/WebKit/LayoutTests) currently |
+lacks a strict, formal structure. The following directories have special |
+meaning: |
+ |
+* The `http/` directory hosts tests that require an HTTP server (see above). |
+* The `resources/` subdirectory in every directory contains binary files, such |
+ as media files, and code that is shared by multiple test files. |
+ |
+*** note |
+Some layout tests consist of a minimal HTML page that references a JavaScript |
+file in `resources/`. Please do not use this pattern for new tests, as it goes |
+against the minimality principle. JavaScript and CSS files should only live in |
+`resources/` if they are shared by at least two test files. |
+*** |