Description
[Telemetry] Do not have benchmarks validate story name in expectations during runs.
There is a CL out adding a presubmit test for checking names:
https://chromium-review.googlesource.com/c/513162/
This should be the approach we use to make sure names exist. The telemetry
smoke tests do some optimizations that make checking mid-run not viable.
https://cs.chromium.org/chromium/src/tools/perf/benchmarks/benchmark_smoke_unittest.py?type=cs&q=SinglePageBenchmark&l=53
The fact that you can add layers that change what CreateStorySet() returns makes
it hard to determine mid-run whether a story name is valid. Checking during
presubmit sidesteps the smoke tests' optimizations while still giving us
assurance that the story names in expectations exist.
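A presubmit-time check of this kind can be sketched as follows. This is a minimal, hypothetical illustration, not the actual CL linked above: the function name `ValidateStoryNamesInExpectations` and the sample story/expectation names are invented, and plain lists stand in for the story set a real benchmark would build via CreateStorySet().

```python
# Minimal sketch (hypothetical names): a presubmit-style check that every
# story name referenced in expectations exists in the benchmark's story set.
def ValidateStoryNamesInExpectations(story_names, expectation_names):
  """Returns expectation entries that reference no known story, sorted."""
  known = set(story_names)
  return sorted(name for name in expectation_names if name not in known)

# Example: one expectation refers to a story the benchmark no longer defines.
stories = ['load:news:cnn', 'load:news:bbc']
expectations = ['load:news:cnn', 'load:news:reuters']
unknown = ValidateStoryNamesInExpectations(stories, expectations)
# A real presubmit would report 'unknown' entries as errors and block the CL.
```

Because this runs at presubmit time, the full story set can be built once, up front, without interfering with the smoke tests' mid-run shortcuts.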
BUG=chromium:713222
Review-Url: https://codereview.chromium.org/2903023002
Committed: https://chromium.googlesource.com/external/github.com/catapult-project/catapult/+/518df53454b5c951cc4804786f957ffb0639c8ec
Patch Set 1 #
Patch Set 2 : [Telemetry] Do not have benchmarks validate story name in expectations during runs. #
Total comments: 6
Patch Set 3 : [Telemetry] Do not have benchmarks validate story name in expectations during runs. #
Total comments: 8
Patch Set 4 : [Telemetry] Do not have benchmarks validate story name in expectations during runs. #
Total comments: 1
Messages
Total messages: 24 (10 generated)