Index: third_party/WebKit/PerformanceTests/README.md
diff --git a/third_party/WebKit/PerformanceTests/README.md b/third_party/WebKit/PerformanceTests/README.md
new file mode 100644
index 0000000000000000000000000000000000000000..33b17679f9de9ba676ee30e81266372b3598cc5d
--- /dev/null
+++ b/third_party/WebKit/PerformanceTests/README.md
@@ -0,0 +1,156 @@
+# Blink Performance Tests
+
+[TOC]
+
+## Overview
+
+Blink perf tests are used for micro benchmarking the surface of Blink that
+is exposed to the Web. They are the counterpart of
+[LayoutTests/](https://chromium.googlesource.com/chromium/src/+/master/docs/testing/layout_tests.md)
+but for performance coverage.
+
+## Writing Tests
+Each test entry point is an HTML file written using the
+[runner.js](resources/runner.js)
+testing framework. The test file is placed inside a sub-folder of
+[WebKit/PerformanceTests/](.)
+and is started by importing the `runner.js` script into the document:
+```
+  <script src="../resources/runner.js"></script>
+```
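+
+For illustration, a complete minimal test file could look like this (the
+markup and test body below are a sketch, not an existing test):
+
+```
+<!DOCTYPE html>
+<body>
+<script src="../resources/runner.js"></script>
+<script>
+// Benchmark appending many divs to the document body.
+PerfTestRunner.measureTime({
+    description: 'Measures performance of appending 1000 divs.',
+    run: function () {
+        for (var i = 0; i < 1000; i++)
+            document.body.appendChild(document.createElement('div'));
+    }
+});
+</script>
+</body>
+```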
+
+### Synchronous Perf Tests
+In a nutshell, to measure the speed of synchronous code encapsulated in a test
+run method `F`, synchronous perf tests exercise this loop:
+
+```
+  FOR i = 1 to NUMBER_OF_REPEAT
+     Start timer
+     F()
+     Stop timer
+```
+
+Depending on how fast `F` runs, one can choose between
+`PerfTestRunner.measureTime` or `PerfTestRunner.measureRunsPerSecond`
+(for very fast code). In either case, you create a test object and run it by
+invoking the measure method as follows:
+
+```
+PerfTestRunner.measureTime({ // the "test" object
+    description: '...',
+    setup: function () { ... }, // test setup logic, called once before each run
+    run: function () { ... }, // contains the code to benchmark
+    iterationCount: 5 // repeat the test 5 times
+});
+```
+
+In the case of `PerfTestRunner.measureRunsPerSecond`, each run invokes
+`test.run` multiple times.
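+
+For example, a very fast DOM operation could be measured like this (the test
+body is illustrative):
+
+```
+var div = document.createElement('div');
+div.appendChild(document.createElement('span'));
+
+PerfTestRunner.measureRunsPerSecond({
+    description: 'Measures how fast cloning a small subtree runs.',
+    run: function () {
+        div.cloneNode(true); // very fast; invoked many times per run
+    }
+});
+```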
+
+**Tracing support**
+When the test is run through Telemetry, you can also collect timing of trace
+events that happen during each run by specifying `tracingCategories` and
+`traceEventsToMeasure` in the test object. For example:
+
+```
+PerfTestRunner.measureTime({
+    ...
+    run: foo,
+    iterationCount: 3,
+    tracingCategories: 'blink',
+    traceEventsToMeasure: ['A', 'B'],
+});
+```
+To illustrate what the framework computes, imagine the test timeline as
+follows:
+
+```
+Test run times (time duration of each slice is right under it):
+-----------[ foo ]-----[ foo ]-----[ foo ]------
+              u1          u2          u3
+---------------[ A ]------------------------------------------------------
+                 v0
+-----------------[ A ]--[ A ]---------[ B ]--[A]----------[ B ]--[C]--------
+                   v1     v2            v3    v4            v5    v6
+```
+
+Besides outputting the timeseries `[u1, u2, u3]`, the Telemetry perf test
+runner will also compute the total CPU times of trace events 'A' and 'B' per
+`foo()` run:
+
+*   CPU times of trace events A: `[v0 + v2, v4, 0.0]`
+*   CPU times of trace events B: `[0.0, v3, v5]`
+
+Example tracing synchronous tests:
+[WebKit/PerformanceTests/TestData/](TestData/)
+
+### Asynchronous Perf Tests
+In asynchronous perf tests, you define your test scheduler and do your own
+measurement. For example:
+
+```
+var isDone = false;
+var startTime;
+
+function runTest() {
+    if (startTime) {
+        PerfTestRunner.measureValueAsync(PerfTestRunner.now() - startTime);
+        PerfTestRunner.addRunTestEndMarker(); // For tracing metrics
+    }
+    if (!isDone) {
+        PerfTestRunner.addRunTestStartMarker(); // For tracing metrics
+        startTime = PerfTestRunner.now();
+        // runTest will be invoked after the async operation finishes.
+        runAsyncOperation(runTest);
+    }
+}
+
+PerfTestRunner.startMeasureValuesAsync({
+    unit: 'ms',
+    done: function () {
+        isDone = true;
+    },
+    run: function() {
+        runTest();
+    },
+    iterationCount: 6,
+});
+```
+
+In the example above, the call
+`PerfTestRunner.measureValueAsync(value)` sends the metric of a single run to
+the test runner and also lets the runner know that it has finished a single run.
+Once the number of runs reaches `iterationCount` (6 in the example above), the
+`done` callback is invoked, allowing you to set your test state to finished.
+
+**Tracing support**
+As with synchronous perf tests, tracing metrics are only available when you run
+your tests with Telemetry.
+
+Unlike synchronous perf tests, where the test runner framework handles test
+scheduling and tracing coverage for you, asynchronous tests require you to
+manually mark when each test run begins
+(`PerfTestRunner.addRunTestStartMarker`) and ends
+(`PerfTestRunner.addRunTestEndMarker`). Once those are marked, specifying
+`tracingCategories` and `traceEventsToMeasure` will output CPU time metrics
+of trace events that happen during test runs, in a fashion similar to the
+synchronous tracing example above.
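+
+For instance, the asynchronous example above could collect tracing metrics by
+extending the test object (the category and trace event name below are
+illustrative):
+
+```
+PerfTestRunner.startMeasureValuesAsync({
+    unit: 'ms',
+    done: function () { isDone = true; },
+    run: runTest,
+    iterationCount: 6,
+    tracingCategories: 'blink',
+    traceEventsToMeasure: ['UpdateLayoutTree'],
+});
+```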
+
+## Running Tests
+
+**Running tests directly in browser**
+Most Blink performance tests should be runnable by just opening the test file
+directly in the browser. However, features like tracing metrics and the HTML
+results viewer won't be supported.
+
+**Running tests with Telemetry**
+Assuming your current directory is chromium/src/, you can run tests with:
+```
+./tools/perf/run_benchmark blink_perf [--test-path=<path to your tests>]
+```
+
+For information about all supported options, run:
+```
+./tools/perf/run_benchmark blink_perf --help
+```
+
+**Running tests through run-perf-tests script**
+Note: we are trying to deprecate `run-perf-tests` in favor of running through
+Telemetry. Assuming your current directory is chromium/src/, you can run tests
+with:
+```
+./third_party/WebKit/Tools/Scripts/run-perf-tests [path to your tests]
+```