Chromium Code Reviews
Unified Diff: mojo/devtools/common/docs/mojo_benchmark.md

Issue 1423233002: Centralize mojo_benchmark documentation. (Closed) Base URL: git@github.com:domokit/mojo.git@master
Patch Set: Created 5 years, 2 months ago
Index: mojo/devtools/common/docs/mojo_benchmark.md
diff --git a/mojo/devtools/common/docs/mojo_benchmark.md b/mojo/devtools/common/docs/mojo_benchmark.md
index bcca1add06d868292761f752dcdba7f3f683e132..f9cb773c55ae749508cb494d4b074fd39aad228a 100644
--- a/mojo/devtools/common/docs/mojo_benchmark.md
+++ b/mojo/devtools/common/docs/mojo_benchmark.md
@@ -12,7 +12,7 @@ measurements on the collected trace data.
## Defining benchmarks
`mojo_benchmark` runs performance tests defined in a benchmark file. The
-benchmark file is a Python dictionary of the following format:
+benchmark file is a Python program setting a `benchmarks` list of the following format:
```python
benchmarks = [
@@ -24,35 +24,72 @@ benchmarks = [
# List of measurements to make.
'measurements': [
- '<measurement type>/<event category>/<event name>',
+ {
+ 'name': <my_measurement>,
+ 'spec': <spec>,
+ },
+ (...)
],
},
]
```
+The benchmark file may reference the `target_os` global, which will be one of
+('android', 'linux'), indicating the system on which the benchmarks are run.
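For illustration, a complete benchmark file might look as follows. This is a minimal sketch: the app URL, the `'name'`, `'app'`, `'shell-args'`, and `'duration'` keys, and the event names are assumptions made for the example, not taken from this change.

```python
# Hypothetical benchmark file; keys other than 'measurements', 'name' and
# 'spec' are assumed here for illustration.
benchmarks = [
  {
    'name': 'my_app startup',
    'app': 'https://example.com/my_app.mojo',
    # `target_os` is set by the runner to either 'android' or 'linux'.
    'shell-args': ['--android-only-flag'] if target_os == 'android' else [],
    'duration': 10,
    'measurements': [
      {
        'name': 'time_to_initialized',
        'spec': 'time_until/my_app/initialized',
      },
    ],
  },
]
```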
+
+### Measurement specs
+
The following types of measurements are available:
- - `time_until` - measures time until the first occurence of the specified event
- - `avg_duration` - measures the average duration of all instances of the
- specified event
+ - `time_until`
+ - `time_between`
+ - `avg_duration`
+ - `percentile_duration`
+
+`time_until` records the time until the first occurrence of the targeted event.
+The underlying benchmark runner records the time origin just before issuing the
+connection call to the application being benchmarked. Results of `time_until`
+measurements are relative to this time. Spec format:
+
+```
+'time_until/<category>/<event>'
+```
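For example, assuming a hypothetical app that emits an "initialized" event in category "my_app", the spec would be:

```
'time_until/my_app/initialized'
```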
+
+`time_between` records the time between the first occurrence of the first
+targeted event and the first occurrence of the second targeted event. Spec
+format:
+
+```
+'time_between/<category1>/<event1>/<category2>/<event2>'
+```
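For example, to measure the time between hypothetical "request_sent" and "response_received" events, both in category "my_app":

```
'time_between/my_app/request_sent/my_app/response_received'
```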
+
+`avg_duration` records the average duration of all occurrences of the targeted
+event. Spec format:
+
+```
+'avg_duration/<category>/<event>'
+```
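For example, for a hypothetical "draw" event in category "my_app":

```
'avg_duration/my_app/draw'
```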
+
+`percentile_duration` records the value at the given percentile of durations of
+all occurrences of the targeted event. Spec format:
+
+```
+'percentile_duration/<category>/<event>/<percentile>'
+```
+
+where `<percentile>` is a number between 0.0 and 1.0.
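For example, to record the 90th percentile of the durations of a hypothetical "draw" event in category "my_app":

```
'percentile_duration/my_app/draw/0.9'
```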
## Caching
The script runs each benchmark twice. The first run (**cold start**) clears
caches of the following apps on startup:
- - network_service.mojo
- - url_response_disk_cache.mojo
+ - `network_service.mojo`
+ - `url_response_disk_cache.mojo`
The second run (**warm start**) runs immediately afterwards, without clearing
any caches.
-## Time origin
-
-The underlying benchmark runner records the time origin just before issuing the
-connection call to the application being benchmarked. Results of `time_until`
-measurements are relative to this time.
-
## Example
For an app that records a trace event named "initialized" in category "my_app"