Description
Updating the script that gets run in the benchmark isolate to also read in
chartjson results, if available, and add a chartjson result to the results
json when present.
I need these results in the chromium_tests SwarmingIsolatedScript class so
that I can upload results to the perf dashboard after a successful swarming
task. I am not completely replacing the json with chartjson because I am
not sure whether the json form is still needed. This will be an iterative
approach while I try to get a successful swarmed benchmark on the
waterfall.
BUG=chromium:633253
Committed: https://crrev.com/4e29701cda37e5f57845fbd616582d7b71a24da2
Cr-Commit-Position: refs/heads/master@{#419740}
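The merge described in the CL might be sketched roughly as follows; the function name and the `chartjson` result key are assumptions for illustration, not the actual code in the CL:

```python
import json
import os
import tempfile

def add_chartjson_results(results, chartjson_path):
    """Fold optional chartjson output into the simplified results dict.

    Hypothetical sketch of the approach in the CL description: the
    benchmark isolate may write a chartjson file, and the wrapper
    script adds it to the results json it already emits.
    """
    try:
        with open(chartjson_path) as f:
            results['chartjson'] = json.load(f)
    except (IOError, ValueError):
        # Chartjson is optional: if the file is missing or malformed,
        # keep emitting the plain json results unchanged.
        pass
    return results
```

Because the read is wrapped in a try/except, wrappers whose benchmarks produce no chartjson still emit the plain results unchanged.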
Patch Set 1 #
Total comments: 2
Patch Set 2 : Updating script to generate more than one output file #
Patch Set 3 : Updating swarming api flag to chartjson #
Total comments: 2
Patch Set 4 : Updating other scripts run in isolates to remove flags #
Patch Set 5 : Removing unwanted code #
Total comments: 4
Patch Set 6 : Adding flag to parser #
Messages
Total messages: 28 (10 generated)
eyaich@chromium.org changed reviewers: + dtu@chromium.org
On 2016/09/12 17:58:53, eyaich1 wrote: To see how these results will be utilized, see WIP cl https://codereview.chromium.org/2330133002/.
kbr@chromium.org changed reviewers: + dpranke@chromium.org, kbr@chromium.org
Adding dpranke@ because he is familiar with the needed upgrade to get flakiness dashboard results out of these tests. https://codereview.chromium.org/2331993003/diff/1/testing/scripts/run_telemet... File testing/scripts/run_telemetry_benchmark_as_googletest.py (right): https://codereview.chromium.org/2331993003/diff/1/testing/scripts/run_telemet... testing/scripts/run_telemetry_benchmark_as_googletest.py:107: }, args.isolated_script_test_output) Rather than overloading the meaning of --isolated-script-test-output, I think it is a better direction to add a new command line argument --isolated-script-chartjson-output taking a separate file to which the chartjson results are written. This simplified JSON format is mainly used to display results on the waterfall, but it's known that we are going to need to output more "full" JSON results for the flakiness dashboard. It would be better to start extending the contract between recipes and IsolatedScriptTest now. Do you agree?
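The separate flag proposed here, rather than overloading --isolated-script-test-output, might look roughly like this (the flag names are from the review; the surrounding parser code is a sketch, not the CL's actual code):

```python
import argparse

parser = argparse.ArgumentParser()
# Existing flag: simplified json results displayed on the waterfall.
parser.add_argument('--isolated-script-test-output', type=str)
# Proposed flag: chartjson results written to a separate file, leaving
# room for a third flag later for the "full" flakiness-dashboard json.
parser.add_argument('--isolated-script-chartjson-output', type=str,
                    required=False)

args = parser.parse_args([
    '--isolated-script-test-output', 'output.json',
    '--isolated-script-chartjson-output', 'chartjson_output.json',
])
```

Keeping each output in its own file keeps the recipe/IsolatedScriptTest contract explicit: one flag, one file, one format.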
https://codereview.chromium.org/2331993003/diff/1/testing/scripts/run_telemet... File testing/scripts/run_telemetry_benchmark_as_googletest.py (right): https://codereview.chromium.org/2331993003/diff/1/testing/scripts/run_telemet... testing/scripts/run_telemetry_benchmark_as_googletest.py:107: }, args.isolated_script_test_output) On 2016/09/12 19:31:42, Ken Russell wrote: > Rather than overloading the meaning of --isolated-script-test-output, I think it > is a better direction to add a new command line argument > --isolated-script-chartjson-output taking a separate file to which the chartjson > results are written. This simplified JSON format is mainly used to display > results on the waterfall, but it's known that we are going to need to output > more "full" JSON results for the flakiness dashboard. It would be better to > start extending the contract between recipes and IsolatedScriptTest now. Do you > agree? Good to know there is another use case coming down the pipe. Adding another output was one of my options, but I wasn't sure if I was still going to need the regular json output or not. My plan was to make the output just the chartjson once I concluded that. Therefore, I was going to iterate on a solution once I got something up on the waterfall and figure out if I needed both from there. I am open to going down this route first though, I don't think it was that much more work, I just chose this route while I was still in investigation phase. I will ping this CL again when changes are made.
On 2016/09/13 11:53:41, eyaich1 wrote: > https://codereview.chromium.org/2331993003/diff/1/testing/scripts/run_telemet... > File testing/scripts/run_telemetry_benchmark_as_googletest.py (right): > > https://codereview.chromium.org/2331993003/diff/1/testing/scripts/run_telemet... > testing/scripts/run_telemetry_benchmark_as_googletest.py:107: }, > args.isolated_script_test_output) > On 2016/09/12 19:31:42, Ken Russell wrote: > > Rather than overloading the meaning of --isolated-script-test-output, I think > it > > is a better direction to add a new command line argument > > --isolated-script-chartjson-output taking a separate file to which the > chartjson > > results are written. This simplified JSON format is mainly used to display > > results on the waterfall, but it's known that we are going to need to output > > more "full" JSON results for the flakiness dashboard. It would be better to > > start extending the contract between recipes and IsolatedScriptTest now. Do > you > > agree? > > Good to know there is another use case coming down the pipe. Adding another > output was one of my options, but I wasn't sure if I was still going to need the > regular json output or not. My plan was to make the output just the chartjson > once I concluded that. Therefore, I was going to iterate on a solution once I > got something up on the waterfall and figure out if I needed both from there. > > I am open to going down this route first though, I don't think it was that much > more work, I just chose this route while I was still in investigation phase. I > will ping this CL again when changes are made. I have updated this script to reflect the changes in naming and the new swarming api. ptal
I think this change is fine for now. I agree that we need to revisit the contract/API for swarming commands, but I don't think that adding individual command line args is a very scalable solution. We might need some more generic way to indicate which files should be part of the output and how to handle them.
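A more generic contract of the kind suggested here is purely speculative at this point, but one shape it could take is a single flag carrying a mapping from output names to destination files (the flag name and JSON encoding below are hypothetical):

```python
import argparse
import json

# Hypothetical sketch: one flag mapping named outputs to files,
# instead of adding a new command line flag per output format.
parser = argparse.ArgumentParser()
parser.add_argument('--isolated-script-outputs', type=json.loads,
                    default={})

args = parser.parse_args([
    '--isolated-script-outputs',
    '{"test-results": "output.json", "chartjson": "chart.json"}',
])
```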
Sorry for the delay re-reviewing this. LGTM. I don't anticipate adding a lot more command line arguments to the isolated script test interface, but I want to keep the meaning of each one, and the file formats, very clear. That's why I'm insisting that the chartjson output be a separate file, and why I'd like the "full" JSON output (compatible with the flakiness dashboard) to be a third command line argument and file. Hopefully we can standardize on the latter and remove the other two, or at least --isolated-script-test-output.
Minor nit: I suggest editing the first line of the CL description to be a brief summary sentence, followed by the paragraphs explaining it.
https://codereview.chromium.org/2331993003/diff/40001/testing/scripts/run_tel... File testing/scripts/run_telemetry_benchmark_as_googletest.py (right): https://codereview.chromium.org/2331993003/diff/40001/testing/scripts/run_tel... testing/scripts/run_telemetry_benchmark_as_googletest.py:49: required=False) Sorry, I only just realized this: since you're passing this argument into the isolated script tests all the time in https://codereview.chromium.org/2336293002 , you need to add this argument to all of the isolated script test wrappers in this directory: run_telemetry_as_googletest.py run_telemetry_benchmark_as_googletest.py run_gpu_integration_test_as_googletest.py and drop it on the floor for the ones that don't care about it.
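For the wrappers that don't produce chartjson, "dropping it on the floor" just means declaring the flag so argument parsing succeeds and then never reading it. A minimal sketch (the helper name is an assumption):

```python
import argparse

def parse_wrapper_args(argv):
    """Argument parsing shared by the isolated-script wrappers (sketch)."""
    parser = argparse.ArgumentParser()
    parser.add_argument('--isolated-script-test-output', type=str)
    # Declared so the recipe can pass it unconditionally to every
    # wrapper; wrappers that don't emit chartjson simply ignore it.
    parser.add_argument('--isolated-script-chartjson-output', type=str,
                        required=False)
    return parser.parse_args(argv)
```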
https://codereview.chromium.org/2331993003/diff/40001/testing/scripts/run_tel... File testing/scripts/run_telemetry_benchmark_as_googletest.py (right): https://codereview.chromium.org/2331993003/diff/40001/testing/scripts/run_tel... testing/scripts/run_telemetry_benchmark_as_googletest.py:49: required=False) On 2016/09/17 02:39:56, Ken Russell wrote: > Sorry, I only just realized this: since you're passing this argument into the > isolated script tests all the time in https://codereview.chromium.org/2336293002 > , you need to add this argument to all of the isolated script test wrappers in > this directory: > run_telemetry_as_googletest.py > run_telemetry_benchmark_as_googletest.py > run_gpu_integration_test_as_googletest.py > > and drop it on the floor for the ones that don't care about it. Done.
The CQ bit was checked by eyaich@chromium.org
The patchset sent to the CQ was uploaded after l-g-t-m from kbr@chromium.org Link to the patchset: https://codereview.chromium.org/2331993003/#ps80001 (title: "Removing unwanted code")
CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/v2/patch-status/codereview.chromium.or...
The CQ bit was unchecked by commit-bot@chromium.org
Try jobs failed on following builders: linux_android_rel_ng on master.tryserver.chromium.android (JOB_FAILED, https://build.chromium.org/p/tryserver.chromium.android/builders/linux_androi...)
Just as well the Android tryserver failed -- would like to ask for one update. https://codereview.chromium.org/2331993003/diff/80001/testing/scripts/run_gpu... File testing/scripts/run_gpu_integration_test_as_googletest.py (right): https://codereview.chromium.org/2331993003/diff/80001/testing/scripts/run_gpu... testing/scripts/run_gpu_integration_test_as_googletest.py:56: index += 1 It's not necessary to manually extract it -- adding it as a non-optional argument like in run_telemetry_benchmark_as_googletest.py , and then ignoring it, would be preferable, since then it's going through the same argument parsing logic. https://codereview.chromium.org/2331993003/diff/80001/testing/scripts/run_tel... File testing/scripts/run_telemetry_as_googletest.py (right): https://codereview.chromium.org/2331993003/diff/80001/testing/scripts/run_tel... testing/scripts/run_telemetry_as_googletest.py:49: index += 1 Same here.
https://codereview.chromium.org/2331993003/diff/80001/testing/scripts/run_gpu... File testing/scripts/run_gpu_integration_test_as_googletest.py (right): https://codereview.chromium.org/2331993003/diff/80001/testing/scripts/run_gpu... testing/scripts/run_gpu_integration_test_as_googletest.py:56: index += 1 On 2016/09/19 20:27:57, Ken Russell wrote: > It's not necessary to manually extract it -- adding it as a non-optional > argument like in run_telemetry_benchmark_as_googletest.py , and then ignoring > it, would be preferable, since then it's going through the same argument parsing > logic. Done. https://codereview.chromium.org/2331993003/diff/80001/testing/scripts/run_tel... File testing/scripts/run_telemetry_as_googletest.py (right): https://codereview.chromium.org/2331993003/diff/80001/testing/scripts/run_tel... testing/scripts/run_telemetry_as_googletest.py:49: index += 1 On 2016/09/19 20:27:57, Ken Russell wrote: > Same here. Done.
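The change requested and made here, replacing manual argv index scanning with a declared parser argument, might look roughly like this (both snippets are sketches, not the actual wrapper code):

```python
import argparse

# Before (sketch): manually scanning argv and skipping the flag's value
# by index, as flagged in the review.
def strip_chartjson_flag(argv):
    rest, index = [], 0
    while index < len(argv):
        if argv[index] == '--isolated-script-chartjson-output':
            index += 2  # skip the flag and its value
        else:
            rest.append(argv[index])
            index += 1
    return rest

# After (sketch): declare it through the parser, so it goes through the
# same argument parsing logic as everything else, and ignore the value.
parser = argparse.ArgumentParser()
parser.add_argument('--isolated-script-chartjson-output', type=str,
                    required=False)
```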
The CQ bit was checked by eyaich@chromium.org
The patchset sent to the CQ was uploaded after l-g-t-m from kbr@chromium.org Link to the patchset: https://codereview.chromium.org/2331993003/#ps100001 (title: "Adding flag to parser")
CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/v2/patch-status/codereview.chromium.or...
Message was sent while issue was closed.
Message was sent while issue was closed.
Committed patchset #6 (id:100001)
Message was sent while issue was closed.
Description was changed to append: Committed: https://crrev.com/4e29701cda37e5f57845fbd616582d7b71a24da2 Cr-Commit-Position: refs/heads/master@{#419740}
Message was sent while issue was closed.
Patchset 6 (id:??) landed as https://crrev.com/4e29701cda37e5f57845fbd616582d7b71a24da2 Cr-Commit-Position: refs/heads/master@{#419740}