# Blink Performance Tests

[TOC]

## Overview

Blink perf tests are used for micro benchmarking the surface of Blink that is
exposed to the Web. They are the counterpart of
[LayoutTests/](https://chromium.googlesource.com/chromium/src/+/master/docs/testing/layout_tests.md)
but for performance coverage.

## Writing Tests
Each test entry point is an HTML file written using the
[runner.js](https://cs.chromium.org/chromium/src/third_party/WebKit/PerformanceTests/resources/runner.js)
testing framework. The test file is placed inside a sub folder of
[WebKit/PerformanceTests/](https://cs.chromium.org/chromium/src/third_party/WebKit/PerformanceTests/)
and is started by importing the `runner.js` script into the document:
```
<script src="../resources/runner.js"></script>
```
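
For illustration, a complete (if trivial) test file might look like the sketch
below. The measured workload is made up for this example; the `PerfTestRunner`
methods it uses are described in the sections that follow.

```
<!DOCTYPE html>
<html>
<body>
<script src="../resources/runner.js"></script>
<script>
// Illustrative benchmark: time to build a small DOM fragment.
PerfTestRunner.measureTime({
    description: 'Time to build a small DOM fragment (illustrative).',
    run: function () {
        var container = document.createElement('div');
        for (var i = 0; i < 100; i++)
            container.appendChild(document.createElement('span'));
    },
    iterationCount: 5
});
</script>
</body>
</html>
```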

### Synchronous Perf Tests
In a nutshell, to measure the speed of synchronous code encapsulated in a test
run method `F`, synchronous perf tests exercise this loop:

```
FOR i = 1 to NUMBER_OF_REPEATS
  Start timer
  F()
  Stop timer
```

Depending on how fast `F` runs, you can choose between
`PerfTestRunner.measureTime` and `PerfTestRunner.measureRunsPerSecond` (the
latter for very fast code). In either case, you create a test object and run it
by invoking the measure method as follows:

```
PerfTestRunner.measureTime({ // the "test" object
    description: '...',
    setup: function () { ... }, // test setup logic, called once before each run
    run: function () { ... }, // contains the code to benchmark
    iterationCount: 5 // repeat the test 5 times
});
```

In the case of `PerfTestRunner.measureRunsPerSecond`, each run invokes
`test.run` multiple times.
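
As a sketch (the workload here is illustrative), a
`PerfTestRunner.measureRunsPerSecond` test takes the same kind of test object,
but reports how many times `run` completes per second instead of the duration
of each run:

```
PerfTestRunner.measureRunsPerSecond({
    description: 'Throughput of a very fast operation (illustrative).',
    setup: function () { /* reset any state needed before each run */ },
    run: function () {
        // Very fast code to benchmark.
        document.createElement('div');
    },
    iterationCount: 5
});
```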

**Tracing support**
When the test is run through Telemetry, you can also collect the timing of
trace events that happen during each run by specifying `tracingCategories` and
`traceEventsToMeasure` in the test object. For example:

```
PerfTestRunner.measureTime({
    ...
    run: foo,
    iterationCount: 3,
    tracingCategories: 'blink',
    traceEventsToMeasure: ['A', 'B'],
});
```
To illustrate what the framework computes, imagine the test timeline as
follows:

```
Test run times (the duration of each slice is shown right under it):
-----------[      foo       ]-----[      foo       ]-----[      foo       ]------
                   u1                     u2                     u3
---------------[  A  ]------------------------------------------------------
                 v0
-----------------[ A ]--[ A ]---------[ B ]--[A]----------[ B ]--[C]--------
                  v1     v2            v3    v4            v5    v6
```

Besides outputting the time series `[u1, u2, u3]`, the Telemetry perf test
runner will also compute the total CPU times of trace events 'A' and 'B' per
`foo()` run:

*   CPU times of trace event A: `[v0 + v2, v4, 0.0]`
*   CPU times of trace event B: `[0.0, v3, v5]`

(Note that `v1` does not contribute separately to the first run's total for A:
that slice lies entirely within the `A` event of duration `v0`, and overlapping
slices of the same name are not double counted.)

Example tracing synchronous tests:
[WebKit/PerformanceTests/TestData/](https://cs.chromium.org/chromium/src/third_party/WebKit/PerformanceTests/TestData/)

### Asynchronous Perf Tests
In an asynchronous perf test, you define your own test scheduler and do your
own measurement. For example:

```
var isDone = false;
var startTime;

function runTest() {
    if (startTime) {
        PerfTestRunner.measureValueAsync(PerfTestRunner.now() - startTime);
        PerfTestRunner.addRunTestEndMarker(); // For tracing metrics
    }
    if (!isDone) {
        PerfTestRunner.addRunTestStartMarker(); // For tracing metrics
        startTime = PerfTestRunner.now();
        // runTest will be invoked again after the async operation finishes.
        runAsyncOperation(runTest);
    }
}

PerfTestRunner.startMeasureValuesAsync({
    unit: 'ms',
    done: function () {
        isDone = true;
    },
    run: function () {
        runTest();
    },
    iterationCount: 6,
});
```

In the example above, the call `PerfTestRunner.measureValueAsync(value)` sends
the metric of a single run to the test runner and also lets the runner know
that a single run has finished. Once the number of runs reaches
`iterationCount` (6 in the example above), the `done` callback is invoked,
allowing you to mark your test state as finished.

**Tracing support**
As with synchronous perf tests, tracing metrics are only available when you run
your tests with Telemetry.

Unlike synchronous perf tests, where the test runner framework handles test
scheduling and tracing coverage for you, asynchronous tests require you to
manually mark where each test run begins
(`PerfTestRunner.addRunTestStartMarker`) and ends
(`PerfTestRunner.addRunTestEndMarker`). Once those markers are in place,
specifying `tracingCategories` and `traceEventsToMeasure` will output CPU time
metrics of trace events that happen during test runs, in a fashion similar to
the synchronous tracing test example above.
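
For example, here is a sketch of how this might look for the asynchronous test
above, assuming the tracing fields are specified on the test object as in the
synchronous example (the category and event names are illustrative):

```
PerfTestRunner.startMeasureValuesAsync({
    unit: 'ms',
    done: function () { isDone = true; },
    run: runTest,
    iterationCount: 6,
    tracingCategories: 'blink',
    traceEventsToMeasure: ['A', 'B'],
});
```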

## Running Tests

**Running tests directly in the browser**
Most Blink performance tests should be runnable by simply opening the test file
directly in the browser. However, features like tracing metrics and the HTML
results viewer won't be supported.

**Running tests with Telemetry**
Assuming your current directory is chromium/src/, you can run tests with:
```
./tools/perf/run_benchmark blink_perf [--test-path=<path to your tests>]
```
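
For example, to run only the example tests under `TestData/` (the path below is
illustrative; point `--test-path` at the directory containing your tests):

```
./tools/perf/run_benchmark blink_perf --test-path=third_party/WebKit/PerformanceTests/TestData
```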

For information about all supported options, run:
```
./tools/perf/run_benchmark blink_perf --help
```

**Running tests through the run-perf-tests script**
Assuming your current directory is chromium/src/, you can run tests with:
```
./third_party/WebKit/Tools/Scripts/run-perf-tests [path to your tests]
```