| 1 # Layout Tests |
| 2 |
| 3 Layout tests are used by Blink to test many components, including but not |
| 4 limited to layout and rendering. In general, layout tests involve loading pages |
| 5 in a test renderer (`content_shell`) and comparing the rendered output or |
| 6 JavaScript output against an expected output file. |
| 7 |
| 8 [TOC] |
| 9 |
| 10 ## Running Layout Tests |
| 11 |
| 12 ### Initial Setup |
| 13 |
| 14 Before you can run the layout tests, you need to build the `blink_tests` target |
| 15 to get `content_shell` and all of the other needed binaries. |
| 16 |
| 17 ```bash |
| 18 ninja -C out/Release blink_tests |
| 19 ``` |
| 20 |
| 21 On **Android** (layout test support |
| 22 [currently limited to KitKat and earlier](https://crbug.com/567947)) you need to |
| 23 build and install `content_shell_apk` instead. See also: |
| 24 [Android Build Instructions](../android_build_instructions.md). |
| 25 |
| 26 ```bash |
| 27 ninja -C out/Default content_shell_apk |
| 28 adb install -r out/Default/apks/ContentShell.apk |
| 29 ``` |
| 30 |
| 31 On **Mac**, you probably want to strip the `content_shell` binary before starting |
| 32 the tests. If you don't, you'll have 5-10 `content_shell` processes running |
| 33 concurrently, all stuck being examined by the OS crash reporter. This may cause |
| 34 other failures like timeouts where they normally don't occur. |
| 35 |
| 36 ```bash |
| 37 strip ./xcodebuild/{Debug,Release}/content_shell.app/Contents/MacOS/content_shell |
| 38 ``` |
| 39 |
| 40 ### Running the Tests |
| 41 |
| 42 TODO: mention `testing/xvfb.py` |
| 43 |
| 44 The test runner script is in |
| 45 `third_party/WebKit/Tools/Scripts/run-webkit-tests`. |
| 46 |
| 47 To specify which build directory to use (e.g. out/Default, out/Release, |
| 48 out/Debug) you should pass the `-t` or `--target` parameter. For example, to |
| 49 use the build in `out/Default`, use: |
| 50 |
| 51 ```bash |
| 52 python third_party/WebKit/Tools/Scripts/run-webkit-tests -t Default |
| 53 ``` |
| 54 |
| 55 For Android (if your build directory is `out/android`): |
| 56 |
| 57 ```bash |
| 58 python third_party/WebKit/Tools/Scripts/run-webkit-tests -t android --android |
| 59 ``` |
| 60 |
| 61 Tests marked as `[ Skip ]` in |
| 62 [TestExpectations](../../third_party/WebKit/LayoutTests/TestExpectations) |
| 63 won't be run at all, generally because they cause some intractable tool error. |
| 64 To force one of them to be run, either rename that file or specify the skipped |
| 65 test as the only one on the command line (see below). |
| 66 |
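For example, to force a skipped test to run anyway, name it explicitly (the test
path here is illustrative):

```bash
# Runs only this test, even if it is marked [ Skip ] in TestExpectations.
Tools/Scripts/run-webkit-tests -t Default fast/forms/001.html
```
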
| 67 Note that currently only the tests listed in |
| 68 [SmokeTests](../../third_party/WebKit/LayoutTests/SmokeTests) |
| 69 are run on the Android bots, since running all layout tests takes too long on |
| 70 Android (and may still have some infrastructure issues). Most developers focus |
| 71 their Blink testing on Linux. We rely on the fact that the Linux and Android |
| 72 behavior is nearly identical for scenarios outside those covered by the smoke |
| 73 tests. |
| 74 |
| 75 To run only some of the tests, specify their directories or filenames as |
| 76 arguments to `run-webkit-tests` relative to the layout test directory |
| 77 (`src/third_party/WebKit/LayoutTests`). For example, to run the fast form tests, |
| 78 use: |
| 79 |
| 80 ```bash |
| 81 Tools/Scripts/run-webkit-tests fast/forms |
| 82 ``` |
| 83 |
| 84 Or you could use the following shorthand: |
| 85 |
| 86 ```bash |
| 87 Tools/Scripts/run-webkit-tests fast/fo\* |
| 88 ``` |
| 89 |
| 90 *** promo |
| 91 Example: To run the layout tests with a debug build of `content_shell`, but only |
| 92 test the SVG tests and run pixel tests, you would run: |
| 93 |
| 94 ```bash |
| 95 Tools/Scripts/run-webkit-tests -t Debug svg |
| 96 ``` |
| 97 *** |
| 98 |
| 99 As a final quick-but-less-robust alternative, you can also just use the |
| 100 content_shell executable to run specific tests by using (for Windows): |
| 101 |
| 102 ```bash |
| 103 out/Default/content_shell.exe --run-layout-test --no-sandbox full_test_source_path |
| 104 ``` |
| 105 |
| 106 as in: |
| 107 |
| 108 ```bash |
| 109 out/Default/content_shell.exe --run-layout-test --no-sandbox \ |
| 110 c:/chrome/src/third_party/WebKit/LayoutTests/fast/forms/001.html |
| 111 ``` |
| 112 |
| 113 but this requires a manual diff against expected results, because the shell |
| 114 doesn't do it for you. |
| 115 |
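A rough manual comparison might look like this (a sketch for Linux; the
baseline may instead live under `LayoutTests/platform/`, and the dump includes
a little harness wrapper text, so expect some noise):

```bash
# Capture the text dump, then diff it against the checked-in baseline.
out/Default/content_shell --run-layout-test --no-sandbox \
    third_party/WebKit/LayoutTests/fast/forms/001.html > /tmp/actual.txt
diff -u third_party/WebKit/LayoutTests/fast/forms/001-expected.txt /tmp/actual.txt
```
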
| 116 To see a complete list of arguments supported, run: `run-webkit-tests --help` |
| 117 |
| 118 *** note |
| 119 **Linux Note:** We try to match the Windows render tree output exactly by |
| 120 matching font metrics and widget metrics. If there's a difference in the render |
| 121 tree output, we should see if we can avoid rebaselining by improving our font |
| 122 metrics. For additional information on Linux Layout Tests, please see |
| 123 [docs/layout_tests_linux.md](../layout_tests_linux.md). |
| 124 *** |
| 125 |
| 126 *** note |
| 127 **Mac Note:** While the tests are running, a bunch of Appearance settings are |
| 128 overridden for you so the right type of scroll bars, colors, etc. are used. |
| 129 Your main display's "Color Profile" is also changed to make sure color |
| 130 correction by ColorSync matches what is expected in the pixel tests. The change |
| 131 is noticeable; how much depends on the normal level of correction for your |
| 132 display. The tests do their best to restore your setting when done, but if |
| 133 you're left in the wrong state, you can manually reset it by going to |
| 134 System Preferences → Displays → Color and selecting the "right" value. |
| 135 *** |
| 136 |
| 137 ### Test Harness Options |
| 138 |
| 139 This script has a lot of command line flags. You can pass `--help` to the script |
| 140 to see a full list of options. A few of the most useful options are below: |
| 141 |
| 142 | Option | Meaning | |
| 143 |:----------------------------|:--------------------------------------------------| |
| 144 | `--debug` | Run the debug build of the test shell (default is release). Equivalent to `-t Debug` | |
| 145 | `--nocheck-sys-deps` | Don't check system dependencies; this allows faster iteration. | |
| 146 | `--verbose` | Produce more verbose output, including a list of tests that pass. | |
| 147 | `--no-pixel-tests` | Disable the pixel-to-pixel PNG comparisons and image checksums for tests that don't call `testRunner.dumpAsText()` | |
| 148 | `--reset-results` | Write all generated results directly into the given directory, overwriting what's there. | |
| 149 | `--new-baseline` | Write all generated results into the most specific platform directory, overwriting what's there. Equivalent to `--reset-results --add-platform-expectations` | |
| 150 | `--renderer-startup-dialog` | Bring up a modal dialog before running the test, useful for attaching a debugger. | |
| 151 | `--fully-parallel` | Run tests in parallel using as many child processes as the system has cores. | |
| 152 | `--driver-logging` | Print C++ logs (LOG(WARNING), etc). | |
| 153 |
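For example, a typical debugging invocation combines several of these flags
(the test name is illustrative):

```bash
# Run one test against the Debug build, skip the system dependency check,
# and pause at renderer startup so a debugger can be attached.
Tools/Scripts/run-webkit-tests -t Debug --nocheck-sys-deps \
    --renderer-startup-dialog fast/forms/001.html
```
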
| 154 ## Success and Failure |
| 155 |
| 156 A test succeeds when its output matches the pre-defined expected results. If any |
| 157 tests fail, the test script will place the actual generated results, along with |
| 158 a diff of the actual and expected results, into |
| 159 `src/out/Default/layout_test_results/`, and by default launch a browser with a |
| 160 summary and link to the results/diffs. |
| 161 |
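If you closed that browser window, you can reopen the summary page by hand; on
Linux, for example (assuming the `Default` build directory; `results.html` is
the usual name of the summary page, adjust if yours differs):

```bash
# Reopen the results page from the most recent run.
xdg-open out/Default/layout_test_results/results.html
```
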
| 162 The expected results for tests are in the |
| 163 `src/third_party/WebKit/LayoutTests/platform` directory or alongside their |
| 164 respective tests. |
| 165 |
| 166 *** note |
| 167 Tests which use [testharness.js](https://github.com/w3c/testharness.js/) |
| 168 do not have expected result files if all test cases pass. |
| 169 *** |
| 170 |
| 171 A test that runs but produces the wrong output is marked as "failed", one that |
| 172 causes the test shell to crash is marked as "crashed", and one that takes longer |
| 173 than a certain amount of time to complete is aborted and marked as "timed out". |
| 174 A row of dots in the script's output indicates one or more tests that passed. |
| 175 |
| 176 ## Test expectations |
| 177 |
| 178 The |
| 179 [TestExpectations](../../third_party/WebKit/LayoutTests/TestExpectations) file (and related |
| 180 files, including |
| 181 [skia_test_expectations.txt](../../skia/skia_test_expectations.txt)) |
| 182 contains the list of all known layout test failures. See |
| 183 [Test Expectations](https://sites.google.com/a/chromium.org/dev/developers/testing/webkit-layout-tests/testexpectations) |
| 184 for more on this. |
| 185 |
| 186 ## Testing Runtime Flags |
| 187 |
| 188 There are two ways to run layout tests with additional command-line arguments: |
| 189 |
| 190 * Using `--additional-driver-flag`: |
| 191 |
| 192 ```bash |
| 193 run-webkit-tests --additional-driver-flag=--blocking-repaint |
| 194 ``` |
| 195 |
| 196 This tells the test harness to pass `--blocking-repaint` to the |
| 197 content_shell binary. |
| 198 |
| 199 It will also look for flag-specific expectations in |
| 200 `LayoutTests/FlagExpectations/blocking-repaint`, if this file exists. The |
| 201 suppressions in this file override the main TestExpectations file (see the example after this list). |
| 202 |
| 203 * Using a *virtual test suite* defined in |
| 204 [LayoutTests/VirtualTestSuites](https://code.google.com/p/chromium/codesearch#chromium/src/third_party/WebKit/LayoutTests/VirtualTestSuites). |
| 205 A virtual test suite runs a subset of layout tests under a specific path with |
| 206 additional flags. For example, you could test a (hypothetical) new mode for |
| 207 repainting using the following virtual test suite: |
| 208 |
| 209 ```json |
| 210 { |
| 211 "prefix": "blocking_repaint", |
| 212 "base": "fast/repaint", |
| 213 "args": ["--blocking-repaint"], |
| 214 } |
| 215 ``` |
| 216 |
| 217 This will create new "virtual" tests of the form |
| 218 `virtual/blocking_repaint/fast/repaint/...` which correspond to the files |
| 219 under `LayoutTests/fast/repaint` and pass `--blocking-repaint` to |
| 220 content_shell when they are run. |
| 221 |
| 222 These virtual tests exist in addition to the original `fast/repaint/...` |
| 223 tests. They can have their own expectations in TestExpectations, and their own |
| 224 baselines. The test harness will use the non-virtual baselines as a fallback. |
| 225 However, the non-virtual expectations are not inherited: if |
| 226 `fast/repaint/foo.html` is marked `[ Fail ]`, the test harness still expects |
| 227 `virtual/blocking_repaint/fast/repaint/foo.html` to pass. If you expect the |
| 228 virtual test to also fail, it needs its own suppression. |
| 229 |
| 230 The "prefix" value does not have to be unique. This is useful if you want to |
| 231 run multiple directories with the same flags (but see the notes below about |
| 232 performance). Using the same prefix for different sets of flags is not |
| 233 recommended. |
| 234 |
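As an example of the flag-specific expectations mentioned in the first bullet
above, a (hypothetical) `LayoutTests/FlagExpectations/blocking-repaint` file
uses the same syntax as the main TestExpectations file:

```
# Failures that only apply when running with --blocking-repaint.
# Bug numbers and test names here are purely illustrative.
crbug.com/123456 fast/repaint/foo.html [ Fail ]
crbug.com/123456 fast/repaint/bar.html [ Timeout ]
```
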
| 235 For flags whose implementation is still in progress, virtual test suites and |
| 236 flag-specific expectations represent two alternative strategies for testing. |
| 237 Consider the following when choosing between them: |
| 238 |
| 239 * The |
| 240 [waterfall builders](https://dev.chromium.org/developers/testing/chromium-build-infrastructure/tour-of-the-chromium-buildbot) |
| 241 and [try bots](https://dev.chromium.org/developers/testing/try-server-usage) |
| 242 will run all virtual test suites in addition to the non-virtual tests. |
| 243 Conversely, a flag-specific expectations file won't automatically cause the |
| 244 bots to test your flag - if you want bot coverage without virtual test suites, |
| 245 you will need to set up a dedicated bot for your flag. |
| 246 |
| 247 * Due to the above, virtual test suites incur a performance penalty for the |
| 248 commit queue and the continuous build infrastructure. This is exacerbated by |
| 249 the need to restart `content_shell` whenever flags change, which limits |
| 250 parallelism. Therefore, you should avoid adding large numbers of virtual test |
| 251 suites. They are well suited to running a subset of tests that are directly |
| 252 related to the feature, but they don't scale to flags that make deep |
| 253 architectural changes that potentially impact all of the tests. |
| 254 |
| 255 ## Tracking Test Failures |
| 256 |
| 257 All bugs associated with layout test failures must have the |
| 258 [Test-Layout](https://crbug.com/?q=label:Test-Layout) label. Depending on how |
| 259 much you know about the bug, assign the status accordingly: |
| 260 |
| 261 * **Unconfirmed** -- You aren't sure if this is a simple rebaseline, a possible |
| 262 duplicate of an existing bug, or a real failure. |
| 263 * **Untriaged** -- Confirmed but unsure of priority or root cause. |
| 264 * **Available** -- You know the root cause of the issue. |
| 265 * **Assigned** or **Started** -- You will fix this issue. |
| 266 |
| 267 When creating a new layout test bug, please set the following properties: |
| 268 |
| 269 * Components: a sub-component of Blink |
| 270 * OS: **All** (or whichever OS the failure is on) |
| 271 * Priority: 2 (1 if it's a crash) |
| 272 * Type: **Bug** |
| 273 * Labels: **Test-Layout** |
| 274 |
| 275 You can also use the _Layout Test Failure_ template, which will pre-set these |
| 276 labels for you. |
| 277 |
| 278 ## Writing Layout Tests |
| 279 |
| 280 ### Pixel Tests |
| 281 |
| 282 TODO: Write documentation here. |
| 283 |
| 284 ### Reference Tests |
| 285 |
| 286 TODO: Write documentation here. |
| 287 |
| 288 ### Script Tests |
| 289 |
| 290 These tests use a JavaScript test harness and test cases written in script to |
| 291 exercise features and make assertions about the behavior. Generally, new tests |
| 292 are written using the [testharness.js](https://github.com/w3c/testharness.js/) |
| 293 test harness, which is also heavily used in the cross-vendor |
| 294 [web-platform-tests](https://github.com/w3c/web-platform-tests) project. Tests |
| 295 written with testharness.js generally look something like the following: |
| 296 |
| 297 ```html |
| 298 <!DOCTYPE html> |
| 299 <script src="/resources/testharness.js"></script> |
| 300 <script src="/resources/testharnessreport.js"></script> |
| 301 <script> |
| 302 test(t => { |
| 303 var x = true; |
| 304 assert_true(x); |
| 305 }, "Truth is true."); |
| 306 </script> |
| 307 ``` |
| 308 |
| 309 Many older tests are written using the **js-test** |
| 310 (`LayoutTests/resources/js-test.js`) test harness. This harness is |
| 311 **deprecated**, and should not be used for new tests. The tests call |
| 312 `testRunner.dumpAsText()` to signal that the page content should be dumped and |
| 313 compared against an \*-expected.txt file, and optionally |
| 314 `testRunner.waitUntilDone()` and `testRunner.notifyDone()` for asynchronous |
| 315 tests. |
| 316 |
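An asynchronous check follows the same pattern with `async_test` (a minimal
sketch using the same harness files as above):

```html
<!DOCTYPE html>
<script src="/resources/testharness.js"></script>
<script src="/resources/testharnessreport.js"></script>
<script>
async_test(t => {
  // step_func_done() wraps the callback so assertion failures are reported
  // and the test completes when the callback returns.
  window.addEventListener('load', t.step_func_done(() => {
    assert_not_equals(document.body, null);
  }));
}, "The document has a body once the load event fires.");
</script>
```
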
| 317 ### Tests that use a HTTP Server |
| 318 |
| 319 By default, tests are loaded as if via `file:` URLs. Some web platform features |
| 320 require tests served via HTTP or HTTPS, for example root-relative paths (`src=/foo`) |
| 321 or features restricted to secure protocols. |
| 322 |
| 323 HTTP tests are the tests under `LayoutTests/http/tests` (or virtual variants). |
| 324 They are run against a locally running HTTP server (Apache). Tests are served |
| 325 on ports 8000 and 8080 for HTTP, and 8443 for HTTPS. If you run the tests using |
| 326 `run-webkit-tests`, the server will be started automatically. To run the server |
| 327 manually to reproduce or debug a failure: |
| 328 |
| 329 ```bash |
| 330 cd src/third_party/WebKit/Tools/Scripts |
| 331 run-blink-httpd start |
| 332 ``` |
| 333 |
| 334 The layout tests will be served from `http://127.0.0.1:8000`. For example, to |
| 335 run the test `http/tests/serviceworker/chromium/service-worker-allowed.html`, |
| 336 navigate to |
| 337 `http://127.0.0.1:8000/serviceworker/chromium/service-worker-allowed.html`. Some |
| 338 tests will behave differently if you go to 127.0.0.1 instead of localhost, so |
| 339 use 127.0.0.1. |
| 340 |
| 341 To kill the server, run `run-blink-httpd --server stop`, or just use `taskkill` |
| 342 or the Task Manager on Windows, and `killall` or Activity Monitor on MacOS. |
| 343 |
| 344 The test server sets up an alias to the `LayoutTests/resources` directory. In HTTP |
| 345 tests, you can access the testing framework at e.g. |
| 346 `src="/js-test-resources/js-test.js"`. |
| 347 |
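To sanity-check that the server and the alias are working, you can fetch a
resource from it directly with any HTTP client, for example:

```bash
# A 200 response here means the server is up and the alias resolves.
curl -I http://127.0.0.1:8000/js-test-resources/js-test.js
```
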
| 348 ### Writing tests that need to paint, raster, or draw a frame of intermediate output |
| 349 |
| 350 A layout test does not actually draw frames of output until the test exits. If |
| 351 it is required to generate a painted frame, then use |
| 352 `window.testRunner.displayAsyncThen`, which will run the machinery to put up a |
| 353 frame, then call the passed callback. There is also a library at |
| 354 `fast/repaint/resources/text-based-repaint.js` to help with writing paint |
| 355 invalidation and repaint tests. |
| 356 |
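A skeleton of that pattern (a sketch rather than a real test; `testRunner` is
only present when the page is run by the test harness):

```html
<!DOCTYPE html>
<body>
<script>
if (window.testRunner)
  testRunner.waitUntilDone();
document.body.style.background = 'green';
if (window.testRunner) {
  // Force a frame to be painted, then make the change whose repaint
  // behavior the test actually wants to exercise.
  testRunner.displayAsyncThen(function() {
    document.body.style.background = 'blue';
    testRunner.notifyDone();
  });
}
</script>
</body>
```
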
| 357 #### Layout test support for `testRunner` |
| 358 |
| 359 Some layout tests rely on the `testRunner` object to expose configuration for |
| 360 mocking the platform. This object is provided by content_shell. Here's a UML |
| 361 diagram of how the testRunner bindings configure the platform implementation: |
| 362 |
| 363 [UML diagram of testRunner bindings configuring the platform implementation](https://docs.google.com/drawings/d/1KNRNjlxK0Q3Tp8rKxuuM5mpWf4OJQZmvm9_kpwu_Wwg/edit) |
| 364 |
| 365 [Writing reliable layout tests](https://docs.google.com/document/d/1Yl4SnTLBWmY1O99_BTtQvuoffP8YM9HZx2YPkEsaduQ/edit) |
| 366 |
| 367 ## Debugging Layout Tests |
| 368 |
| 369 After the layout tests run, you should get a summary of tests that pass or fail. |
| 370 If something fails unexpectedly (a new regression), you will get a content_shell |
| 371 window with a summary of the unexpected failures. Or you might have a failing |
| 372 test in mind to investigate. In any case, here are some steps and tips for |
| 373 finding the problem. |
| 374 |
| 375 * Take a look at the result. Sometimes tests just need to be rebaselined (see |
| 376 below) to account for changes introduced in your patch. |
| 377 * Load the test into a trunk Chrome or content_shell build and look at its |
| 378 result. (For tests in the http/ directory, start the http server first. |
| 379 See above. Navigate to `http://localhost:8000/` and proceed from there.) |
| 380 The best tests describe what they're looking for, but not all do, and |
| 381 sometimes things they're not explicitly testing are still broken. Compare |
| 382 it to Safari, Firefox, and IE if necessary to see if it's correct. If |
| 383 you're still not sure, find the person who knows the most about it and |
| 384 ask. |
| 385 * Some tests only work properly in content_shell, not Chrome, because they |
| 386 rely on extra APIs exposed there. |
| 387 * Some tests only work properly when they're run in the layout-test |
| 388 framework, not when they're loaded into content_shell directly. The test |
| 389 should mention that in its visible text, but not all do. So try that too. |
| 390 See "Running the tests", above. |
| 391 * If you think the test is correct, confirm your suspicion by looking at the |
| 392 diffs between the expected result and the actual one. |
| 393 * Make sure that the diffs reported aren't important. Small differences in |
| 394 spacing or box sizes are often unimportant, especially around fonts and |
| 395 form controls. Differences in wording of JS error messages are also |
| 396 usually acceptable. |
| 397 * `./run_webkit_tests.py path/to/your/test.html --full-results-html` will |
| 398 produce a page including links to the expected result, actual result, and |
| 399 diff. |
| 400 * Add the `--sources` option to `run_webkit_tests.py` to see exactly which |
| 401 expected result it's comparing to (a file next to the test, something in |
| 402 platform/mac/, something in platform/chromium-win/, etc.) |
| 403 * If you're still sure it's correct, rebaseline the test (see below). |
| 404 Otherwise... |
| 405 * If you're lucky, your test is one that runs properly when you navigate to it |
| 406 in content_shell normally. In that case, build the Debug content_shell |
| 407 project, fire it up in your favorite debugger, and load the test file from |
| 408 a file:// URL. |
| 409 * You'll probably be starting and stopping the content_shell a lot. In VS, |
| 410 to save navigating to the test every time, you can set the URL to your |
| 411 test (file: or http:) as the command argument in the Debugging section of |
| 412 the content_shell project Properties. |
| 413 * If your test contains a JS call, DOM manipulation, or other distinctive |
| 414 piece of code that you think is failing, search for that in the Chrome |
| 415 solution. That's a good place to put a starting breakpoint to start |
| 416 tracking down the issue. |
| 417 * Otherwise, you're running in a standard message loop just like in Chrome. |
| 418 If you have no other information, set a breakpoint on page load. |
| 419 * If your test only works in full layout-test mode, or if you find it simpler to |
| 420 debug without all the overhead of an interactive session, start the |
| 421 content_shell with the command-line flag `--run-layout-test`, followed by the |
| 422 URL (file: or http:) to your test. More information about running layout tests |
| 423 in content_shell can be found |
| 424 [here](https://www.chromium.org/developers/testing/webkit-layout-tests/content-shell). |
| 425 * In VS, you can do this in the Debugging section of the content_shell |
| 426 project Properties. |
| 427 * Now you're running with exactly the same API, theme, and other setup that |
| 428 the layout tests use. |
| 429 * Again, if your test contains a JS call, DOM manipulation, or other |
| 430 distinctive piece of code that you think is failing, search for that in |
| 431 the Chrome solution. That's a good place to put a starting breakpoint to |
| 432 start tracking down the issue. |
| 433 * If you can't find any better place to set a breakpoint, start at the |
| 434 `TestShell::RunFileTest()` call in `content_shell_main.cc`, or at |
| 435 `shell->LoadURL()` within `RunFileTest()` in `content_shell_win.cc`. |
| 436 * Debug as usual. Once you've gotten this far, the failing layout test is just a |
| 437 (hopefully) reduced test case that exposes a problem. |
| 438 |
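On Linux, the equivalent of the Visual Studio setup described above is to hand
the same command line to your debugger, for example (paths are illustrative):

```bash
# Build the debug binaries, then run a single layout test under gdb.
ninja -C out/Debug blink_tests
gdb --args out/Debug/content_shell --no-sandbox --run-layout-test \
    third_party/WebKit/LayoutTests/fast/forms/001.html
```
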
| 439 ### Debugging HTTP Tests |
| 440 |
| 441 To run the server manually to reproduce/debug a failure: |
| 442 |
| 443 ```bash |
| 444 cd src/third_party/WebKit/Tools/Scripts |
| 445 run-blink-httpd start |
| 446 ``` |
| 447 |
| 448 The layout tests will be served from `http://127.0.0.1:8000`. For example, to |
| 449 run the test |
| 450 `LayoutTests/http/tests/serviceworker/chromium/service-worker-allowed.html`, |
| 451 navigate to |
| 452 `http://127.0.0.1:8000/serviceworker/chromium/service-worker-allowed.html`. Some |
| 453 tests will behave differently if you go to 127.0.0.1 vs localhost, so use |
| 454 127.0.0.1. |
| 455 |
| 456 To kill the server, run `run-blink-httpd --server stop`, or just use `taskkill` |
| 457 or the Task Manager on Windows, and `killall` or Activity Monitor on MacOS. |
| 458 |
| 459 The test server sets up an alias to the `LayoutTests/resources` directory. In HTTP |
| 460 tests, you can access the testing framework at e.g. |
| 461 `src="/js-test-resources/js-test.js"`. |
| 462 |
| 463 ### Tips |
| 464 |
| 465 Check https://test-results.appspot.com/ to see how a test did in the most recent |
| 466 ~100 builds on each builder (as long as the page is being updated regularly). |
| 467 |
| 468 A timeout will often also be a text mismatch, since the wrapper script kills the |
| 469 content_shell before it has a chance to finish. The exception is if the test |
| 470 finishes loading properly, but somehow hangs before it outputs the bit of text |
| 471 that tells the wrapper it's done. |
| 472 |
| 473 Why might a test fail (or crash, or timeout) on buildbot, but pass on your local |
| 474 machine? |
| 475 * If the test finishes locally but is slow (more than 10 seconds or so), that |
| 476 would explain why it's reported as a timeout on the bot. |
| 477 * Otherwise, try running it as part of a set of tests; it's possible that a test |
| 478 one or two (or ten) before this one is corrupting something that makes this |
| 479 one fail. |
| 480 * If it consistently works locally, make sure your environment looks like the |
| 481 one on the bot (look at the top of the stdio for the webkit_tests step to see |
| 482 all the environment variables and so on). |
| 483 * If none of that helps, and you have access to the bot itself, you may have to |
| 484 log in there and see if you can reproduce the problem manually. |
| 485 |
| 486 ### Debugging Inspector Tests |
| 487 |
| 488 * Add `window.debugTest = true;` to your test code as follows: |
| 489 |
| 490 ```javascript |
| 491 window.debugTest = true; |
| 492 function test() { |
| 493 /* TEST CODE */ |
| 494 } |
| 495 ``` |
| 496 |
| 497 * Do one of the following: |
| 498 * Option A) Run from the chromium/src folder: |
| 499 `blink/tools/run_layout_tests.sh |
| 500 --additional_driver_flag='--remote-debugging-port=9222' |
| 501 --time-out-ms=6000000` |
| 502 * Option B) If you need to debug an http/tests/inspector test, start httpd |
| 503 as described above. Then, run content_shell: |
| 504 `out/Default/content_shell --remote-debugging-port=9222 --run-layout-test |
| 505 http://127.0.0.1:8000/path/to/test.html` |
| 506 * Open `http://localhost:9222` in a stable/beta/canary Chrome, click the single |
| 507 link to open the devtools with the test loaded. |
| 508 * You may need to replace devtools.html with inspector.html in your URL (or you |
| 509 can use local Chrome inspection of content_shell from chrome://inspect |
| 510 instead). |
| 511 * In the loaded devtools, set any required breakpoints and execute `test()` in |
| 512 the console to actually start the test. |
| 513 |
| 514 ## Rebaselining Layout Tests |
| 515 |
| 516 _To automatically re-baseline tests across all Chromium platforms, using the |
| 517 buildbot results, see the |
| 518 [Rebaselining keywords in TestExpectations](https://www.chromium.org/developers/testing/webkit-layout-tests/testexpectations#TOC-Rebaselining) |
| 519 and |
| 520 [Rebaselining Tool](https://trac.webkit.org/wiki/Rebaseline). |
| 521 Alternatively, to manually run a test and rebaseline it on your workstation, |
| 522 read on._ |
| 523 |
| 524 By default, text-only tests (ones that call `testRunner.dumpAsText()`) produce |
| 525 only text results. Other tests produce both new text results and new image |
| 526 results (the image baseline comprises two files, `-expected.png` and |
| 527 `-expected.checksum`). So you'll need either one or three `-expected.\*` files |
| 528 in your new baseline, depending on whether you have a text-only test or not. If |
| 529 you enable `--no-pixel-tests`, only new text results will be produced, even for |
| 530 tests that do image comparisons. |
| 531 |
| 532 ```bash |
| 533 cd src/third_party/WebKit |
| 534 Tools/Scripts/run-webkit-tests --new-baseline foo/bar/test.html |
| 535 ``` |
| 536 |
| 537 The above command will generate a new baseline for |
| 538 `LayoutTests/foo/bar/test.html` and put the output files in the right place, |
| 539 e.g. |
| 540 `LayoutTests/platform/chromium-win/foo/bar/test-expected.{txt,png,checksum}`. |
| 541 |
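Once you are happy with the new baseline, add it to your change like any other
file (paths are illustrative):

```bash
# Review what the rebaseline produced, then stage it.
git status third_party/WebKit/LayoutTests/platform/
git add third_party/WebKit/LayoutTests/platform/chromium-win/foo/bar/test-expected.txt
```
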
| 542 When you rebaseline a test, make sure your commit description explains why the |
| 543 test is being re-baselined. If this is a special case (i.e., something we've |
| 544 decided should differ from upstream), please put a README file next to the new |
| 545 expected output explaining the difference. |
| 546 |
| 547 ## W3C Tests |
| 548 |
| 549 In addition to layout tests developed and run just by the Blink team, there are |
| 550 also W3C conformance tests. For more info, see |
| 551 [Importing the W3C Tests](https://www.chromium.org/blink/importing-the-w3c-tests). |
| 552 |
| 553 ## Known Issues |
| 554 |
| 555 See |
| 556 [bugs with the component Blink>Infra](https://bugs.chromium.org/p/chromium/issues/list?can=2&q=component%3ABlink%3EInfra) |
| 557 for issues related to Blink tools, including the layout test runner. |
| 558 |
| 559 * Windows and Linux: Do not copy and paste while the layout tests are running, |
| 560 as it may interfere with the editing/pasteboard and other clipboard-related |
| 561 tests. (Mac tests swizzle NSClipboard to avoid any conflicts). |
| 562 * If QuickTime is not installed, the plugin tests |
| 563 `fast/dom/object-embed-plugin-scripting.html` and |
| 564 `plugins/embed-attributes-setting.html` are expected to fail. |