# Analysis Server Benchmarks

## How to run the benchmarks

To see a list of all available benchmarks, run:

```
dart benchmarks/benchmarks.dart list
```

To run an individual benchmark, run:

```
dart benchmarks/benchmarks.dart run <benchmark-id>
```

## How they're tested

To make sure our benchmarks don't regress in their ability to run, we create
one unit test per benchmark and run those tests as part of our normal CI test
suite.

To save time on the CI, we run only one iteration of each benchmark
(`--repeat=1`) and run it against a smaller data set (`--quick`).
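
For example, a quick single-iteration run of one benchmark looks roughly like
this (combining the `run` command above with those two flags):

```
dart benchmarks/benchmarks.dart run <benchmark-id> --repeat=1 --quick
```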

See `test/benchmark_test.dart`.

## To add a new benchmark

Register the new benchmark in the `main()` method of `benchmarks/benchmarks.dart`.
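
As a rough, self-contained sketch of that pattern (the real base class, result
type, and registration list in `benchmarks/benchmarks.dart` have their own
names and signatures, so treat everything below as stand-ins):

```dart
/// Stand-in for the benchmark base class in benchmarks/benchmarks.dart.
abstract class Benchmark {
  final String id;
  final String description;

  /// Mirrors the `disable` flag described under "On the bots" below;
  /// assumed here to be a simple boolean on the benchmark.
  final bool disable;

  Benchmark(this.id, this.description, {this.disable = false});

  /// `quick` mirrors the `--quick` flag: run against a smaller data set.
  Future<Duration> run({bool quick});
}

/// The hypothetical new benchmark being added.
class MyNewBenchmark extends Benchmark {
  MyNewBenchmark({bool disable = false})
      : super('my-new-benchmark', 'What this benchmark measures.',
            disable: disable);

  @override
  Future<Duration> run({bool quick = false}) async {
    final stopwatch = Stopwatch()..start();
    // ... perform the work being measured, using less data when `quick` ...
    return stopwatch.elapsed;
  }
}

/// Stand-in for the list that main() registers benchmarks into.
final benchmarks = <Benchmark>[];

void main(List<String> args) {
  // Registering the benchmark here is what makes it appear in
  // `benchmarks/benchmarks.dart list` and runnable via `run my-new-benchmark`.
  benchmarks.add(MyNewBenchmark());
  // ... argument parsing and command dispatch elided ...
}
```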

## On the bots

Our benchmarks run on a continuous performance testing system. It will run
any benchmark produced by the `benchmarks/benchmarks.dart list` command.

To keep a benchmark from running on the bot, define the benchmark with the
`disable` flag.
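
In terms of the stand-in sketch above, and assuming the flag is exposed as a
constructor parameter (its real spelling and placement may differ), that would
look like:

```dart
// Hypothetical: the `disable` flag keeps this benchmark from running on the bot.
benchmarks.add(MyNewBenchmark(disable: true));
```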