Chromium Code Reviews

Side by Side Diff: docs/testing/webkit_layout_tests.md

Issue 2476573006: Move layout test documentation to Markdown. (Closed)
Patch Set: Move webkit_layout_tests documentation to Markdown. Created 4 years, 1 month ago
1 # Layout Tests
jsbell 2016/11/07 21:13:10 Can we name this file something w/o webkit ? Why n
jsbell 2016/11/07 21:13:10 Also, this is in docs/testing - do we need that ex
pwnall 2016/11/07 22:23:24 I was following the site structure. This page has
pwnall 2016/11/07 22:23:24 Done.
2
3 Layout tests are used by Blink to test many components, including but not
4 limited to layout and rendering. In general, layout tests involve loading pages
5 in a test renderer (`content_shell`) and comparing the rendered output or
6 JavaScript output against an expected output file.
7
8 [TOC]
9
10 ## Running Layout Tests
11
12 ### Initial Setup
13
14 Before you can run the layout tests, you need to build the `blink_tests` target
15 to get `content_shell` and all of the other needed binaries.
16
17 ```shell
18 ninja -C out/Release blink_tests
19 ```
20
21 On **Android** (layout test support
22 [currently limited to KitKat and earlier](https://crbug.com/567947)) you need to
jsbell 2016/11/07 21:13:11 https
pwnall 2016/11/07 22:23:24 Done. Also replaced wherever feasible throughout t
23 build and install `content_shell_apk` instead. See also:
24 [Android Build Instructions](../android_build_instructions.md).
25
26 ```shell
27 ninja -C out/Release content_shell_apk
28 adb install -r out/Release/apks/ContentShell.apk
29 ```
30
31 On **Mac**, you probably want to strip the content_shell binary before starting
32 the tests. If you don't, you'll have 5-10 running concurrently, all stuck being
33 examined by the OS crash reporter. This may cause other failures like timeouts
34 where they normally don't occur.
35
36 ```shell
37 strip ./xcodebuild/{Debug,Release}/content_shell.app/Contents/MacOS/content_shell
38 ```
39
40 ### Running the Tests
jsbell 2016/11/07 21:13:11 Some mention of testing/xvfb.py would be useful in
pwnall 2016/11/07 22:23:24 Added TODO :D
41
42 The test runner script is in
43 `third_party/WebKit/Tools/Scripts/run-webkit-tests`.
44
45 To specify which build directory to use (e.g. out/Release, out/Debug,
46 out/Default) you should pass the `-t` or `--target` parameter. For example, to
47 use the build in `out/Default`, use:
48
49 ```shell
50 python third_party/WebKit/Tools/Scripts/run-webkit-tests -t Default
51 ```
52
53 For Android (if your build directory is `out/android`):
54
55 ```shell
56 python third_party/WebKit/Tools/Scripts/run-webkit-tests -t android --android
57 ```
58
59 Tests marked as `[ Skip ]` in
60 [TestExpectations](../../third_party/WebKit/LayoutTests/TestExpectations)
61 won't be run at all, generally because they cause some intractable tool error.
62 To force one of them to be run, either rename that file or specify the skipped
63 test as the only one on the command line (see below).
64
65 Note that currently only the tests listed in
66 [SmokeTests](../../third_party/WebKit/LayoutTests/SmokeTests)
67 are run on the Android bots, since running all layout tests takes too long on
68 Android (and may still have some infrastructure issues). Most developers focus
69 their Blink testing on Linux. We rely on the fact that the Linux and Android
70 behavior is nearly identical for scenarios outside those covered by the smoke
71 tests.
72
73 To run only some of the tests, specify their directories or filenames as
74 arguments to `run_webkit_tests.py` relative to the layout test directory
75 (`src/third_party/WebKit/LayoutTests`). For example, to run the fast form tests,
76 use:
77
78 ```shell
79 Tools/Scripts/run-webkit-tests fast/forms
80 ```
81
82 Or you could use:
83
84 ```shell
85 Tools/Scripts/run-webkit-tests fast/fo\*
86 ```
87
88 as a shorthand.
89
90 Example: To run the layout tests with a debug build of `content_shell`, but only
jsbell 2016/11/07 21:13:11 Consider using *** promo here for the example bloc
pwnall 2016/11/07 22:23:23 Done.
91 test the SVG tests and run pixel tests, you would run:
92
93 ```shell
94 Tools/Scripts/run-webkit-tests -t Debug svg
95 ```
96
97 As a final quick-but-less-robust alternative, you can also just use the
98 content_shell executable to run specific tests by using (for Windows):
99
100 ```shell
101 out/Debug/content_shell.exe --run-layout-test --no-sandbox full_test_source_path
102 ```
103
104 as in:
105
106 ```shell
107 out/Debug/content_shell.exe --run-layout-test --no-sandbox \
108 c:/chrome/src/third_party/WebKit/LayoutTests/fast/forms/001.html
109 ```
110
111 but this requires a manual diff against expected results, because the shell
112 doesn't do it for you.
113
114 To see a complete list of arguments supported, run: `run-webkit-tests --help`
115
116 **Linux Note**: We try to match the Windows render tree output exactly by
jsbell 2016/11/07 21:13:10 Use *** note ? https://gerrit.googlesource.com/git
pwnall 2016/11/07 22:23:23 Done.
117 matching font metrics and widget metrics. If there's a difference in the render
118 tree output, we should see if we can avoid rebaselining by improving our font
119 metrics. For additional information on Linux Layout Tests, please see
120 [docs/layout_tests_linux.md](../layout_tests_linux.md).
121
122 **Mac Note**: While the tests are running, a bunch of Appearance settings are
jsbell 2016/11/07 21:13:11 Use *** note ? https://gerrit.googlesource.com/git
pwnall 2016/11/07 22:23:24 Done.
123 overridden for you so the right type of scroll bars, colors, etc. are used.
124 Your main display's "Color Profile" is also changed to make sure color
125 correction by ColorSync matches what is expected in the pixel tests. The change
126 is noticeable; how much so depends on the normal level of correction for your
127 display. The tests do their best to restore your settings when done, but if
128 you're left in the wrong state, you can manually reset it by going to
129 System Preferences → Displays → Color and selecting the "right" value.
130
131
132 ### Test Harness Options
133
134 This script has a lot of command line flags. You can pass `--help` to the script
135 to see a full list of options. A few of the most useful options are below:
136
137 | Option | Meaning |
138 |:----------------------------|:--------------------------------------------------|
139 | `--debug` | Run the debug build of the test shell (default is release). Equivalent to `-t Debug` |
jsbell 2016/11/07 21:13:10 extra whitespace?
jsbell 2016/11/07 21:13:11 also, with gn and out/Default is this still a thin
pwnall 2016/11/07 22:23:24 Done.
pwnall 2016/11/07 22:23:24 My personal opinion is that --debug should be remo
jsbell 2016/11/08 00:09:46 sgtm
140 | `--nocheck-sys-deps` | Don't check system dependencies; this allows faster iteration. |
141 | `--verbose` | Produce more verbose output, including a list of tests that pass. |
142 | `--no-pixel-tests` | Disable the pixel-to-pixel PNG comparisons and image checksums for tests that don't call `testRunner.dumpAsText()` |
143 | `--reset-results` | Write all generated results directly into the given directory, overwriting what's there. |
144 | `--new-baseline` | Write all generated results into the most specific platform directory, overwriting what's there. Equivalent to `--reset-results --add-platform-expectations` |
145 | `--renderer-startup-dialog` | Bring up a modal dialog before running the test, useful for attaching a debugger. |
146 | `--fully-parallel` | Run tests in parallel using as many child processes as the system has cores. |
147 | `--driver-logging` | Print C++ logs (`LOG(WARNING)`, etc.). |
148
149 ## Success and Failure
150
151 A test succeeds when its output matches the pre-defined expected results. If any
152 tests fail, the test script will place the actual generated results, along with
153 a diff of the actual and expected results, into
154 `src/out/{Debug,Release}/layout_test_results/`, and by default launch a browser
jsbell 2016/11/07 21:13:11 Now that we use gn should this match defaults, i.e
pwnall 2016/11/07 22:23:24 Done.
155 with a summary and link to the results/diffs.
156
157 The expected results for tests are in the
158 `src/third_party/WebKit/LayoutTests/platform` directory or alongside their
159 respective tests.
160
161 NOTE: Tests which use [testharness.js](https://github.com/w3c/testharness.js/)
jsbell 2016/11/07 21:13:10 Use *** note ? https://gerrit.googlesource.com/git
pwnall 2016/11/07 22:23:24 Done.
162 do not have expected result files if all test cases pass.
163
164 A test that runs but produces the wrong output is marked as "failed", one that
165 causes the test shell to crash is marked as "crashed", and one that takes longer
166 than a certain amount of time to complete is aborted and marked as "timed out".
167 A row of dots in the script's output indicates one or more tests that passed.
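
The outcome classification above can be sketched as follows (an illustrative simplification; the function name and signature are made up, not the harness's actual API):

```python
def classify_result(output_matches, crashed, elapsed_secs, timeout_secs=10):
    """Map a single test run to one of the outcomes described above."""
    if crashed:
        return "crashed"            # test shell crashed
    if elapsed_secs > timeout_secs:
        return "timed out"          # the run is aborted
    return "passed" if output_matches else "failed"
```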
168
169 ## Test expectations
170
171 The
172 [TestExpectations](../../third_party/WebKit/LayoutTests/TestExpectations) file (and related
173 files, including
174 [skia_test_expectations.txt](../../skia/skia_test_expectations.txt))
175 contains the list of all known layout test failures. See
176 [Test Expectations](https://sites.google.com/a/chromium.org/dev/developers/testing/webkit-layout-tests/testexpectations)
177 for more on this.
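
For reference, entries in TestExpectations look roughly like the following (the paths and bug numbers here are made up for illustration):

```
# Fails everywhere, tracked by a bug:
crbug.com/123456 fast/forms/example.html [ Failure ]
# Flaky, but only on Mac:
crbug.com/123457 [ Mac ] fast/repaint/example.html [ Pass Timeout ]
# Never run at all:
fast/dom/example.html [ Skip ]
```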
178
179 ## Testing Runtime Flags
180
181 There are two ways to run layout tests with additional command-line arguments:
182
183 * Using `--additional-driver-flag`:
184
185 ```bash
jsbell 2016/11/07 21:13:11 Be consistent: ```shell is used above, ```bash her
pwnall 2016/11/07 22:23:24 Done. Thank you very much for pointing this out!
186 run-webkit-tests --additional-driver-flag=--blocking-repaint
187 ```
188
189 This tells the test harness to pass `--blocking-repaint` to the
190 content_shell binary.
191
192 It will also look for flag-specific expectations in
193 `LayoutTests/FlagExpectations/blocking-repaint`, if this file exists. The
194 suppressions in this file override the main TestExpectations file.
195
196 * Using a *virtual test suite* defined in
197 [LayoutTests/VirtualTestSuites](https://code.google.com/p/chromium/codesearch#chromium/src/third_party/WebKit/LayoutTests/VirtualTestSuites).
198 A virtual test suite runs a subset of layout tests under a specific path with
199 additional flags. For example, you could test a (hypothetical) new mode for
200 repainting using the following virtual test suite:
201
202 ```json
203 {
204 "prefix": "blocking_repaint",
205 "base": "fast/repaint",
206   "args": ["--blocking-repaint"]
207 }
208 ```
209
210 This will create new "virtual" tests of the form
211 `virtual/blocking_repaint/fast/repaint/...` which correspond to the files
212 under `LayoutTests/fast/repaint` and pass `--blocking-repaint` to
213 content_shell when they are run.
214
215 These virtual tests exist in addition to the original `fast/repaint/...`
216 tests. They can have their own expectations in TestExpectations, and their own
217 baselines. The test harness will use the non-virtual baselines as a fallback.
218 However, the non-virtual expectations are not inherited: if
219 `fast/repaint/foo.html` is marked `[ Fail ]`, the test harness still expects
220 `virtual/blocking_repaint/fast/repaint/foo.html` to pass. If you expect the
221 virtual test to also fail, it needs its own suppression.
222
223 The "prefix" value does not have to be unique. This is useful if you want to
224 run multiple directories with the same flags (but see the notes below about
225 performance). Using the same prefix for different sets of flags is not
226 recommended.
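
The naming scheme above amounts to a simple mapping between virtual and base tests, which can be sketched as follows (illustrative only; the function name is made up, and the real logic lives in the test harness):

```python
def expand_virtual_test(test_name, suites):
    """Map a virtual test name to (base_test, extra_args).

    `suites` is a list of dicts shaped like the VirtualTestSuites
    entries above. Non-virtual tests map to themselves with no args.
    """
    for suite in suites:
        prefix = "virtual/%s/%s/" % (suite["prefix"], suite["base"])
        if test_name.startswith(prefix):
            base = test_name[len("virtual/%s/" % suite["prefix"]):]
            return base, suite["args"]
    return test_name, []
```

For example, `virtual/blocking_repaint/fast/repaint/foo.html` maps back to `fast/repaint/foo.html` run with `--blocking-repaint`.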
227
228 For flags whose implementation is still in progress, virtual test suites and
229 flag-specific expectations represent two alternative strategies for testing.
230 Consider the following when choosing between them:
231
232 * The
233 [waterfall builders](https://dev.chromium.org/developers/testing/chromium-build-infrastructure/tour-of-the-chromium-buildbot)
jsbell 2016/11/07 21:13:11 https
pwnall 2016/11/07 22:23:23 Done.
234 and [try bots](https://dev.chromium.org/developers/testing/try-server-usage)
235 will run all virtual test suites in addition to the non-virtual tests.
236 Conversely, a flag-specific expectations file won't automatically cause the
237 bots to test your flag - if you want bot coverage without virtual test suites,
238 you will need to set up a dedicated bot for your flag.
239
240 * Due to the above, virtual test suites incur a performance penalty for the
241 commit queue and the continuous build infrastructure. This is exacerbated by
242 the need to restart `content_shell` whenever flags change, which limits
243 parallelism. Therefore, you should avoid adding large numbers of virtual test
244 suites. They are well suited to running a subset of tests that are directly
245 related to the feature, but they don't scale to flags that make deep
246 architectural changes that potentially impact all of the tests.
247
248 ## Tracking Test Failures
249
250 All bugs associated with layout test failures must have the
251 [LayoutTests](https://crbug.com/?q=label:LayoutTests) label. Depending on how
jsbell 2016/11/07 21:13:11 https, and update the query
pwnall 2016/11/07 22:23:24 Done.
252 much you know about the bug, assign the status accordingly:
253
254 * **Unconfirmed** -- you aren't sure if this is a simple rebaseline, a possible
jsbell 2016/11/07 21:13:10 Interesting that Untriaged is not used here. I'd a
pwnall 2016/11/07 22:23:23 Done.
255 duplicate of an existing bug, or a real failure.
256 * **Available** -- you know the root cause of the issue.
257 * **Assigned** or **Started** -- you will fix this issue.
258
259 When creating a new layout test bug, please assign the following labels to it --
260 having proper label hygiene is good for everyone:
261
262 * **Type-Bug**
263 * **Pri-2** (**Pri-1** if it's a crash)
264 * **Area-WebKit**
jsbell 2016/11/07 21:13:11 Some of these labels don't make sense w/ monorail
pwnall 2016/11/07 22:23:24 Done. I replaced the label list with the matching
265 * **OS-All** (or whichever OS the failure is on)
266 * **LayoutTests**
267 * **Mstone-9** (or current milestone)
268 * **Tests-Flaky** (if the test is flaky)
269
270 You can also use the Layout Test Failure template, which will pre-set these labels
271 for you.
272
273 ## Writing Layout Tests
274
275 ### Pixel Tests
276
277 TODO: Write documentation here.
278
279 ### Reference Tests
280
281 TODO: Write documentation here.
282
283 ### Script Tests
284
285 These tests use a JavaScript test harness and test cases written in script to
286 exercise features and make assertions about the behavior. Generally, new tests
287 are written using the [testharness.js](https://github.com/w3c/testharness.js/)
288 test harness, which is also heavily used in the cross-vendor
289 [web-platform-tests](https://github.com/w3c/web-platform-tests) project. Tests
290 written with testharness.js generally look something like the following:
291
292 ```html
293 <!DOCTYPE html>
294 <script src="/resources/testharness.js"></script>
295 <script src="/resources/testharnessreport.js"></script>
296 <script>
297 test(t => {
298 var x = true;
299 assert_true(x);
300 }, "Truth is true.");
301 </script>
302 ```
303
304 Many older tests are written using the **js-test**
305 (`LayoutTests/resources/js-test.js`) test harness. This harness is
306 **deprecated**, and should not be used for new tests. The tests call
307 `testRunner.dumpAsText()` to signal that the page content should be dumped and
308 compared against an \*-expected.txt file, and optionally
309 `testRunner.waitUntilDone()` and `testRunner.notifyDone()` for asynchronous
310 tests.
311
312 ### Tests that use an HTTP Server
313
314 By default, tests are loaded as if via `file:` URLs. Some web platform features
315 require tests served via HTTP or HTTPS, for example relative paths (`src=/foo`)
316 or features restricted to secure protocols.
317
318 HTTP tests are those tests under `LayoutTests/http/tests` (or virtual
319 variants). They are run over a locally running HTTP server (Apache). Tests are
320 served from ports 8000 and 8080 for HTTP, and port 8443 for HTTPS. If you run
321 the tests using `run-webkit-tests`, the server is started automatically. To run
322 the server manually to reproduce or debug a failure:
323
324 ```bash
325 cd src/third_party/WebKit/Tools/Scripts
326 run-blink-httpd start
327 ```
328
329 The layout tests will be served from `http://127.0.0.1:8000`. For example, to
330 run the test `http/tests/serviceworker/chromium/service-worker-allowed.html`,
331 navigate to
332 `http://127.0.0.1:8000/serviceworker/chromium/service-worker-allowed.html`. Some
333 tests will behave differently if you go to 127.0.0.1 instead of localhost, so
334 use 127.0.0.1.
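
The path-to-URL mapping works out to something like the following (an illustrative sketch; the function name is made up):

```python
def http_test_url(test_path, port=8000):
    """Return the URL an HTTP test is served at, given its path
    relative to the LayoutTests directory."""
    prefix = "http/tests/"
    assert test_path.startswith(prefix), "not an HTTP test"
    return "http://127.0.0.1:%d/%s" % (port, test_path[len(prefix):])
```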
335
336 To kill the server, run `run-blink-httpd --server stop`, or just use `taskkill`
337 or the Task Manager on Windows, and `killall` or Activity Monitor on MacOS.
338
339 The test server sets up an alias to the `LayoutTests/resources` directory. In HTTP
340 tests, you can access the testing framework at e.g.
341 `src="/js-test-resources/js-test.js"`.
342
343 ### Writing tests that need to paint, raster, or draw a frame of intermediate output
344
345 A layout test does not actually draw frames of output until the test exits. If
346 a test needs to generate a painted frame, use
347 `window.testRunner.displayAsyncThen`, which runs the machinery to put up a
348 frame and then calls the passed callback. There is also a library at
349 `fast/repaint/resources/text-based-repaint.js` to help with writing paint
350 invalidation and repaint tests.
351
352 #### Layout test support for `testRunner`
353
354 Some layout tests rely on the `testRunner` object to expose configuration for
355 mocking the platform. This object is provided by content_shell. Here's a UML
356 diagram of the testRunner bindings configuring the platform implementation:
357
358 [![UML of testRunner bindings configuring platform implementation](https://docs.google.com/drawings/u/1/d/1KNRNjlxK0Q3Tp8rKxuuM5mpWf4OJQZmvm9_kpwu_Wwg/export/svg?id=1KNRNjlxK0Q3Tp8rKxuuM5mpWf4OJQZmvm9_kpwu_Wwg&pageid=p)](https://docs.google.com/drawings/d/1KNRNjlxK0Q3Tp8rKxuuM5mpWf4OJQZmvm9_kpwu_Wwg/edit)
359
360 [Writing reliable layout tests](https://docs.google.com/document/d/1Yl4SnTLBWmY1O99_BTtQvuoffP8YM9HZx2YPkEsaduQ/edit)
361
362 ## Debugging Layout Tests
363
364 After the layout tests run, you should get a summary of tests that pass or fail.
365 If something fails unexpectedly (a new regression), you will get a content_shell
366 window with a summary of the unexpected failures. Or you might have a failing
367 test in mind to investigate. In any case, here are some steps and tips for
368 finding the problem.
369
370 * Take a look at the result. Sometimes tests just need to be rebaselined (see
371 below) to account for changes introduced in your patch.
372 * Load the test into a trunk Chrome or content_shell build and look at its
373 result. (For tests in the http/ directory, start the http server first.
374 See above. Navigate to `http://localhost:8000/` and proceed from there.)
375 The best tests describe what they're looking for, but not all do, and
376 sometimes things they're not explicitly testing are still broken. Compare
377 it to Safari, Firefox, and IE if necessary to see if it's correct. If
378 you're still not sure, find the person who knows the most about it and
379 ask.
380 * Some tests only work properly in content_shell, not Chrome, because they
381 rely on extra APIs exposed there.
382 * Some tests only work properly when they're run in the layout-test
383 framework, not when they're loaded into content_shell directly. The test
384 should mention that in its visible text, but not all do. So try that too.
385 See "Running the tests", above.
386 * If you think the test is correct, confirm your suspicion by looking at the
387 diffs between the expected result and the actual one.
388 * Make sure that the diffs reported aren't important. Small differences in
389 spacing or box sizes are often unimportant, especially around fonts and
390 form controls. Differences in wording of JS error messages are also
391 usually acceptable.
392 * `./run_webkit_tests.py path/to/your/test.html --full-results-html` will
393 produce a page including links to the expected result, actual result, and
394 diff.
395 * Add the `--sources` option to `run_webkit_tests.py` to see exactly which
396 expected result it's comparing to (a file next to the test, something in
397 platform/mac/, something in platform/chromium-win/, etc.)
398 * If you're still sure it's correct, rebaseline the test (see below).
399 Otherwise...
400 * If you're lucky, your test is one that runs properly when you navigate to it
401 in content_shell normally. In that case, build the Debug content_shell
402 project, fire it up in your favorite debugger, and load the test file
403 from a file:// URL.
404 * You'll probably be starting and stopping the content_shell a lot. In VS,
405 to save navigating to the test every time, you can set the URL to your
406 test (file: or http:) as the command argument in the Debugging section of
407 the content_shell project Properties.
408 * If your test contains a JS call, DOM manipulation, or other distinctive
409 piece of code that you think is failing, search for that in the Chrome
410 solution. That's a good place to put a starting breakpoint to start
411 tracking down the issue.
412 * Otherwise, you're running in a standard message loop just like in Chrome.
413 If you have no other information, set a breakpoint on page load.
414 * If your test only works in full layout-test mode, or if you find it simpler to
415 debug without all the overhead of an interactive session, start the
416 content_shell with the command-line flag `--run-layout-test`, followed by the
417 URL (file: or http:) to your test. More information about running layout tests
418 in content_shell can be found
419 [here](https://www.chromium.org/developers/testing/webkit-layout-tests/content-shell).
420 * In VS, you can do this in the Debugging section of the content_shell
421 project Properties.
422 * Now you're running with exactly the same API, theme, and other setup that
423 the layout tests use.
424 * Again, if your test contains a JS call, DOM manipulation, or other
425 distinctive piece of code that you think is failing, search for that in
426 the Chrome solution. That's a good place to put a starting breakpoint to
427 start tracking down the issue.
428 * If you can't find any better place to set a breakpoint, start at the
429 `TestShell::RunFileTest()` call in `content_shell_main.cc`, or at
430 `shell->LoadURL()` within `RunFileTest()` in `content_shell_win.cc`.
431 * Debug as usual. Once you've gotten this far, the failing layout test is just a
432 (hopefully) reduced test case that exposes a problem.
433
434 ### Debugging HTTP Tests
435
436 To run the server manually to reproduce/debug a failure:
437
438 ```bash
439 cd src/third_party/WebKit/Tools/Scripts
440 run-blink-httpd start
441 ```
442
443 The layout tests will be served from `http://127.0.0.1:8000`; see the HTTP
444 tests section above for example URLs and for how to stop the server.
457
458 ### Tips
459
460 Check https://test-results.appspot.com/ to see how a test did in the most recent
461 ~100 builds on each builder (as long as the page is being updated regularly).
462
463 A timeout will often also be a text mismatch, since the wrapper script kills the
464 content_shell before it has a chance to finish. The exception is if the test
465 finishes loading properly, but somehow hangs before it outputs the bit of text
466 that tells the wrapper it's done.
467
468 Why might a test fail (or crash, or timeout) on buildbot, but pass on your local
469 machine?
470 * If the test finishes locally but is slow (more than 10 seconds or so), it
471 will be reported as a timeout on the bot.
472 * Otherwise, try running it as part of a set of tests; it's possible that a test
473 one or two (or ten) before this one is corrupting something that makes this
474 one fail.
475 * If it consistently works locally, make sure your environment looks like the
476 one on the bot (look at the top of the stdio for the webkit_tests step to see
477 all the environment variables and so on).
478 * If none of that helps, and you have access to the bot itself, you may have to
479 log in there and see if you can reproduce the problem manually.
480
481 ### Debugging Inspector Tests
482
483 * Add `window.debugTest = true;` to your test code as follows:
484
485 ```javascript
486 window.debugTest = true;
487 function test() {
488 /* TEST CODE */
489 }
490 ```
491
492 * Do one of the following:
493 * Option A) Run from the chromium/src folder:
494 `blink/tools/run_layout_tests.sh
495 --additional_driver_flag='--remote-debugging-port=9222'
496 --time-out-ms=6000000`
497 * Option B) If you need to debug an http/tests/inspector test, start httpd
498 as described above. Then, run content_shell:
499 `out/Default/content_shell --remote-debugging-port=9222 --run-layout-test
500 http://127.0.0.1:8000/path/to/test.html`
501 * Open `http://localhost:9222` in a stable/beta/canary Chrome, click the single
502 link to open the devtools with the test loaded.
503 * You may need to replace devtools.html with inspector.html in your URL (or you
504 can use local chrome inspection of content_shell from chrome://inspect
505 instead).
506 * In the loaded devtools, set any required breakpoints and execute `test()` in
507 the console to actually start the test.
508
509 ## Rebaselining Layout Tests
510
511 _To automatically re-baseline tests across all Chromium platforms, using the
512 buildbot results, see the
513 [Rebaselining keywords in TestExpectations](https://www.chromium.org/developers/testing/webkit-layout-tests/testexpectations#TOC-Rebaselining)
514 and
515 [Rebaselining Tool](https://trac.webkit.org/wiki/Rebaseline).
516 Alternatively, to manually run a test and rebaseline it on your workstation,
517 read on._
518
519 By default, text-only tests (ones that call `testRunner.dumpAsText()`)
jsbell 2016/11/07 21:13:11 testRunner ?
pwnall 2016/11/07 22:23:24 Done. I agree that we should have a preferred name
520 produce only text results. Other tests produce both new text results and new
521 image results (the image baseline comprises two files, `-expected.png` and
522 `-expected.checksum`). So you'll need either one or three `-expected.\*` files
523 in your new baseline, depending on whether you have a text-only test or not. If
524 you enable `--no-pixel-tests`, only new text results will be produced, even for
525 tests that do image comparisons.
526
527 ```bash
528 cd src/third_party/WebKit
529 Tools/Scripts/run-webkit-tests --new-baseline foo/bar/test.html
530 ```
531
532 The above command will generate a new baseline for
533 `LayoutTests/foo/bar/test.html` and put the output files in the right place,
534 e.g.
535 `LayoutTests/platform/chromium-win/LayoutTests/foo/bar/test-expected.{txt,png,checksum}`.
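
The set of files a rebaseline produces can be sketched as follows (illustrative; the helper is made up, but the suffixes match the description above):

```python
def baseline_files(test_path, text_only):
    """List the -expected.* files produced for a test: one file for
    text-only tests, three when image results are also compared."""
    stem = test_path.rsplit(".", 1)[0] + "-expected"
    files = [stem + ".txt"]
    if not text_only:
        files += [stem + ".png", stem + ".checksum"]
    return files
```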
536
537 When you rebaseline a test, make sure your commit description explains why the
538 test is being re-baselined. If this is a special case (i.e., something we've
539 deliberately chosen to differ from upstream on), please put a README file next to the new
540 expected output explaining the difference.
541
542 ## W3C Tests
jsbell 2016/11/07 21:13:11 This seems like an odd place for this section. (Ov
pwnall 2016/11/07 22:23:24 I very much agree. That seems like a project of it
543
544 In addition to layout tests developed and run just by the Blink team, there are
545 also W3C conformance tests. For more info, see
546 [Importing the W3C Tests](https://www.chromium.org/blink/importing-the-w3c-tests).
547
548 ## Known Issues
549
550 See
551 [bugs with the component Blink>Infra](https://bugs.chromium.org/p/chromium/issue s/list?can=2&q=component%3ABlink%3EInfra)
552 for issues related to Blink tools, including the layout test runner.
553
554 * Windows and Linux: Do not copy and paste while the layout tests are running,
555 as it may interfere with the editing/pasteboard and other clipboard-related
556 tests. (Mac tests swizzle NSClipboard to avoid any conflicts).
557 * If QuickTime is not installed, the plugin tests
558 `fast/dom/object-embed-plugin-scripting.html` and
559 `plugins/embed-attributes-setting.html` are expected to fail.