| Index: mojo/devtools/common/docs/mojo_benchmark.md
|
| diff --git a/mojo/devtools/common/docs/mojo_benchmark.md b/mojo/devtools/common/docs/mojo_benchmark.md
|
| index bcca1add06d868292761f752dcdba7f3f683e132..f9cb773c55ae749508cb494d4b074fd39aad228a 100644
|
| --- a/mojo/devtools/common/docs/mojo_benchmark.md
|
| +++ b/mojo/devtools/common/docs/mojo_benchmark.md
|
| @@ -12,7 +12,7 @@ measurements on the collected trace data.
|
| ## Defining benchmarks
|
|
|
| `mojo_benchmark` runs performance tests defined in a benchmark file. The
|
| -benchmark file is a Python dictionary of the following format:
|
| +benchmark file is a Python program that sets a `benchmarks` list of the
| +following format:
|
|
|
| ```python
|
| benchmarks = [
|
| @@ -24,35 +24,72 @@ benchmarks = [
|
|
|
| # List of measurements to make.
|
| 'measurements': [
|
| - '<measurement type>/<event category>/<event name>',
|
| + {
|
| + 'name': <my_measurement>,
|
| + 'spec': <spec>,
|
| + },
|
| + (...)
|
| ],
|
| },
|
| ]
|
| ```
|
|
|
| +The benchmark file may reference the `target_os` global that will be any of
|
| +`'android'` or `'linux'`, indicating the system on which the benchmarks are run.
|
| +
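| +For example, a benchmark file could branch on the `target_os` global. In this
| +sketch the app URL, the shell flag, and the benchmark keys other than
| +`measurements` are hypothetical:
| +
| +```python
| +# target_os is normally provided as a global by the benchmark runner;
| +# it is set here only so that the sketch runs standalone.
| +target_os = 'android'
| +
| +benchmarks = [
| +  {
| +    'name': 'my_app startup',
| +    'app': 'https://example.com/my_app.mojo',
| +    # Pass an extra (hypothetical) shell flag only on Android.
| +    'shell-args': ['--example-flag'] if target_os == 'android' else [],
| +    'measurements': [
| +      {
| +        'name': 'startup_time',
| +        'spec': 'time_until/my_app/initialized',
| +      },
| +    ],
| +  },
| +]
| +```
| +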
|
| +### Measurement specs
|
| +
|
| The following types of measurements are available:
|
|
|
| - - `time_until` - measures time until the first occurence of the specified event
|
| - - `avg_duration` - measures the average duration of all instances of the
|
| - specified event
|
| + - `time_until`
|
| + - `time_between`
|
| + - `avg_duration`
|
| + - `percentile_duration`
|
| +
|
| +`time_until` records the time until the first occurrence of the targeted event.
|
| +The underlying benchmark runner records the time origin just before issuing the
|
| +connection call to the application being benchmarked. Results of `time_until`
|
| +measurements are relative to this time. Spec format:
|
| +
|
| +```
|
| +'time_until/<category>/<event>'
|
| +```
|
| +
|
| +`time_between` records the time between the first occurrence of the first
|
| +targeted event and the first occurrence of the second targeted event. Spec
|
| +format:
|
| +
|
| +```
|
| +'time_between/<category1>/<event1>/<category2>/<event2>'
|
| +```
|
| +
|
| +`avg_duration` records the average duration of all occurrences of the targeted
|
| +event. Spec format:
|
| +
|
| +```
|
| +'avg_duration/<category>/<event>'
|
| +```
|
| +
|
| +`percentile_duration` records the value at the given percentile of durations of
|
| +all occurrences of the targeted event. Spec format:
|
| +
|
| +```
|
| +'percentile_duration/<category>/<event>/<percentile>'
|
| +```
|
| +
|
| +where `<percentile>` is a number between 0.0 and 1.0.
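| +
| +Putting the four spec formats together, a `measurements` list could look like
| +the following sketch, where the categories (`my_app`, `gfx`) and event names
| +are hypothetical:
| +
| +```python
| +measurements = [
| +    # Time from the time origin until 'initialized' first occurs.
| +    {'name': 'startup', 'spec': 'time_until/my_app/initialized'},
| +    # Time between the first 'initialized' and the first 'first_frame'.
| +    {'name': 'init_to_frame',
| +     'spec': 'time_between/my_app/initialized/gfx/first_frame'},
| +    # Average duration of all 'draw' occurrences.
| +    {'name': 'avg_draw', 'spec': 'avg_duration/gfx/draw'},
| +    # 90th percentile of 'draw' durations (percentile given as a fraction).
| +    {'name': 'p90_draw', 'spec': 'percentile_duration/gfx/draw/0.9'},
| +]
| +```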
|
|
|
| ## Caching
|
|
|
| The script runs each benchmark twice. The first run (**cold start**) clears
|
| caches of the following apps on startup:
|
|
|
| - - network_service.mojo
|
| - - url_response_disk_cache.mojo
|
| + - `network_service.mojo`
|
| + - `url_response_disk_cache.mojo`
|
|
|
| The second run (**warm start**) runs immediately afterwards, without clearing
|
| any caches.
|
|
|
| -## Time origin
|
| -
|
| -The underlying benchmark runner records the time origin just before issuing the
|
| -connection call to the application being benchmarked. Results of `time_until`
|
| -measurements are relative to this time.
|
| -
|
| ## Example
|
|
|
| For an app that records a trace event named "initialized" in category "my_app"
|
|
|