Description: Adding performance benchmark for creating ImageBitmap from ImageData
BUG=538253
CQ_EXTRA_TRYBOTS=tryserver.chromium.perf:linux_perf_bisect;tryserver.chromium.perf:mac_10_10_perf_bisect;tryserver.chromium.perf:win_perf_bisect;tryserver.chromium.perf:android_nexus5_perf_bisect
Committed: https://crrev.com/9f0bb323d1b080be08797b13fa77e2d6153c5800
Cr-Commit-Position: refs/heads/master@{#353785}
Patch Set 1 #
Total comments: 4
Patch Set 2 : Fixing issues with function name and comments. #
Patch Set 3 : Enabling --enable-experimental-canvas-features on blink_perf #
Total comments: 1
Messages
Total messages: 36 (8 generated)
xidachen@chromium.org changed reviewers: + chrishtr@chromium.org, junov@chromium.org
This is the benchmark test for createImageBitmap from ImageData. Since createImageBitmap is still an experimental feature, you will need to create an http server under the dir "third_party/WebKit/PerformanceTests", and "--enable-experimental-canvas-features" is needed when running the benchmark. Please review it. Thank you.
On 2015/10/05 20:20:32, xidachen wrote: > This is the benchmark test for createImageBitmap from ImageData. Since > createImageBitmap is still an experimental feature, you will need to create an > http server under the dir "third_party/WebKit/PerformanceTests", and > "--enable-experimental-canvas-features" is needed when running the benchmark. > > Please review it. > > Thank you. Does it run properly with the command line: telemetry/run_benchmark --browser=release blink_perf.canvas This is how the perf bots will run the test.
https://codereview.chromium.org/1387933002/diff/1/third_party/WebKit/Performa... File third_party/WebKit/PerformanceTests/Canvas/createImageBitmapFromImageData.html (right): https://codereview.chromium.org/1387933002/diff/1/third_party/WebKit/Performa... third_party/WebKit/PerformanceTests/Canvas/createImageBitmapFromImageData.html:18: function setImageData() { I would call this initializeImageData. https://codereview.chromium.org/1387933002/diff/1/third_party/WebKit/Performa... third_party/WebKit/PerformanceTests/Canvas/createImageBitmapFromImageData.html:19: for(var i = 0; i < image.data.length; i+= 4) if you just incremented i by 1, you would not need two nested loops. https://codereview.chromium.org/1387933002/diff/1/third_party/WebKit/Performa... third_party/WebKit/PerformanceTests/Canvas/createImageBitmapFromImageData.html:21: image.data[i+j] = rand(255); Not that it matters much, but to produce values between 0 and 255 uniformly, you should call rand(256). Look it up: Math.random() produces a value in the range [0,1) https://codereview.chromium.org/1387933002/diff/1/third_party/WebKit/Performa... third_party/WebKit/PerformanceTests/Canvas/createImageBitmapFromImageData.html:25: createImageBitmap(image, 0, 0, imgWidth, imgHeight); Add a comment to explain that the returned Promise is not retained because this test is meant to only measure the immediate run time of createImageBitmap from an ImageData, which is not to be implemented in a way that does all the work synchronously, even though the API is technically async.
> an ImageData, which is not to be implemented in a way that does all the work typo: "not" -> "known"
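The rand(255) vs. rand(256) point from the review can be illustrated deterministically (Python used here for illustration; the benchmark itself is JavaScript, but Python's `random.random()` has the same [0, 1) range as `Math.random()`):

```python
import math

def rand_upper_edge(n):
    # Math.random() (like Python's random.random()) returns values in [0, 1),
    # so floor(x * n) can reach at most n - 1, never n. We probe the upper
    # edge with a value just below 1 instead of sampling randomly.
    x = 0.999999
    return math.floor(x * n)

# rand(255) can never produce 255; rand(256) covers the full byte range 0..255.
print(rand_upper_edge(255))  # 254
print(rand_upper_edge(256))  # 255
```

This is why filling pixel channels with rand(255) would silently exclude the value 255 from the generated ImageData.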
On 2015/10/06 02:06:32, Justin Novosad wrote: > On 2015/10/05 20:20:32, xidachen wrote: > > This is the benchmark test for createImageBitmap from ImageData. Since > > createImageBitmap is still an experimental feature, you will need to create an > > http server under the dir "third_party/WebKit/PerformanceTests", and > > "--enable-experimental-canvas-features" is needed when running the benchmark. > > > > Please review it. > > > > Thank you. > > Does it run properly with the command line: > telemetry/run_benchmark --browser=release blink_perf.canvas > > This is how the perf bots will run the test. Hi Justin, I thought running this file directly in the browser would measure how many times createImageBitmap() can run in 1 second. For example, if I open "drawimage.html" in the browser, it will show a result like "drawimage runs X times per second". My impression is that this is a direct measure of the performance of this function. Please correct me if this is wrong. Thanks.
On 2015/10/06 02:06:32, Justin Novosad wrote: > On 2015/10/05 20:20:32, xidachen wrote: > > This is the benchmark test for createImageBitmap from ImageData. Since > > createImageBitmap is still an experimental feature, you will need to create an > > http server under the dir "third_party/WebKit/PerformanceTests", and > > "--enable-experimental-canvas-features" is needed when running the benchmark. > > > > Please review it. > > > > Thank you. > > Does it run properly with the command line: > telemetry/run_benchmark --browser=release blink_perf.canvas > > This is how the perf bots will run the test. Hi Justin, Please ignore my previous reply. You can run it using this command line: telemetry/run_benchmark --browser=release --extra-browser-args=--enable-experimental-canvas-features blink_perf.canvas I just successfully ran it and got results. Thank you.
Hi Justin, I have fixed the issues with the function name and added some comments. You can run this test using this command line: ./run_benchmark --browser=default --extra-browser-args=--enable-experimental-canvas-features blink_perf.canvas In my case, default is a release build. Thank you.
junov@chromium.org changed reviewers: + sullivan@chromium.org
+sullivan Xida, that is great, but now you have to find the script that runs these tests on the perf bot and make blink_perf.canvas run with the --enable-experimental-canvas-features flag. Annie, I have no idea how these scripts work. Can you point Xida in the right direction?
On 2015/10/08 00:40:51, Justin Novosad wrote: > +sullivan > > Xida, that is great, but now you have to find the script that runs these tests > on the perf bot and make blink_perf.canvas runs with the > --enable-experimental-canvas-features flag. > > Annie, I have no idea how these scripts work. can you point Xida in the right > direction? https://code.google.com/p/chromium/codesearch#chromium/src/tools/perf/benchma...
On 2015/10/08 00:40:51, Justin Novosad wrote: > +sullivan > > Xida, that is great, but now you have to find the script that runs these tests > on the perf bot and make blink_perf.canvas runs with the > --enable-experimental-canvas-features flag. > > Annie, I have no idea how these scripts work. can you point Xida in the right > direction? Hi Justin, I don't understand the part "run the tests on the perf bot"; how should I run it other than using the command line? Thank you.
On 2015/10/08 12:30:21, xidachen wrote: > On 2015/10/08 00:40:51, Justin Novosad wrote: > > +sullivan > > > > Xida, that is great, but now you have to find the script that runs these tests > > on the perf bot and make blink_perf.canvas runs with the > > --enable-experimental-canvas-features flag. > > > > Annie, I have no idea how these scripts work. can you point Xida in the right > > direction? > > Hi Justin, > > I don't understand the part "run the tests on the perf bot", how should I run it > instead of using the command line. > > Thank you. The perf bots run the command line: tools/perf/run_benchmark blink_perf.canvas
On 2015/10/08 13:39:12, sullivan wrote: > On 2015/10/08 12:30:21, xidachen wrote: > > On 2015/10/08 00:40:51, Justin Novosad wrote: > > > +sullivan > > > > > > Xida, that is great, but now you have to find the script that runs these > tests > > > on the perf bot and make blink_perf.canvas runs with the > > > --enable-experimental-canvas-features flag. > > > > > > Annie, I have no idea how these scripts work. can you point Xida in the > right > > > direction? > > > > Hi Justin, > > > > I don't understand the part "run the tests on the perf bot", how should I run > it > > instead of using the command line. > > > > Thank you. > > The perf bots run the command line: > tools/perf/run_benchmark blink_perf.canvas Thanks Annie. I tried "./tools/perf/run_benchmark --browser=default --extra-browser-args=--enable-experimental-canvas-features blink_perf.canvas", and it works fine. The result I got for this particular benchmark is 642 +- 4.82%. My understanding is that "tools/perf/run_benchmark blink_perf.canvas" will automatically run all the benchmark tests that are placed under the dir "third_party/WebKit/PerformanceTests/Canvas"; is that correct?
On 2015/10/08 13:56:16, xidachen wrote: > On 2015/10/08 13:39:12, sullivan wrote: > > On 2015/10/08 12:30:21, xidachen wrote: > > > On 2015/10/08 00:40:51, Justin Novosad wrote: > > > > +sullivan > > > > > > > > Xida, that is great, but now you have to find the script that runs these > > tests > > > > on the perf bot and make blink_perf.canvas runs with the > > > > --enable-experimental-canvas-features flag. > > > > > > > > Annie, I have no idea how these scripts work. can you point Xida in the > > right > > > > direction? > > > > > > Hi Justin, > > > > > > I don't understand the part "run the tests on the perf bot", how should I > run > > it > > > instead of using the command line. > > > > > > Thank you. > > > > The perf bots run the command line: > > tools/perf/run_benchmark blink_perf.canvas > > Thanks Annie. > I tried "./tools/perf/run_benchmark --browser=default > --extra-browser-args=--enable-experimental-canvas-features blink_perf.canvas", > it works fine. The result I got for this particular benchmark is 642 +- 4.82%. > My understanding is "tools/perf/run_benchmark blink_perf.canvas" will > automatically run all the benchmark tests that are placed under the dir > "third_party/WebKit/PerformanceTests/Canvas", is that correct? That's correct. If you want the perf bots to run with --enable-experimental-canvas-features, you need to modify the benchmark (or add a second one with the flag).
> That's correct. If you want the perf bots to run with > --enable-experimental-canvas-features, you need to modify the benchmark (or add a > second one with the flag). Hi Annie, When I run "tools/perf/run_benchmark blink_perf.canvas", it gives an error like this: CustomizeBrowserOptions at tools/perf/benchmarks/blink_perf.py:92 if 'content-shell' in options.browser_type: TypeError: argument of type 'NoneType' is not iterable But if I do "tools/perf/run_benchmark --browser=default blink_perf.canvas", then it works perfectly fine. Is that an issue with Telemetry? Thank you.
That is normal. The --browser arg is mandatory. The important thing is to modify the script to make it start the browser with the command line options you need without having to specify them on the run_benchmark command line. This is so that the bots run the tests with the right configuration. IMHO, it would be perfectly fine to enable experimental canvas features on all of blink_perf if that is easier than targeting just blink_perf.canvas.
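The TypeError reported above arises because `options.browser_type` is `None` when no --browser argument is given, and a membership test against `None` raises. A minimal Python reproduction (the guard shown at the end is hypothetical, not necessarily what blink_perf.py does):

```python
# Minimal reproduction of the failure mode: membership tests against None raise.
browser_type = None  # what options.browser_type holds when --browser is omitted

try:
    'content-shell' in browser_type
except TypeError as e:
    # CPython reports: argument of type 'NoneType' is not iterable
    print('TypeError:', e)

# A defensive guard (hypothetical): check for None before the membership test.
is_content_shell = browser_type is not None and 'content-shell' in browser_type
print(is_content_shell)  # False
```

As Justin notes, in practice the --browser argument is mandatory, so the bots never hit this path.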
The "--extra-browser-args=--enable-experimental-canvas-features" flag is now on by default for all blink_perf tests. Run "tools/perf/run_benchmark --browser=default blink_perf.canvas" for this particular test.
On 2015/10/08 17:31:38, xidachen wrote: > The "--extra-browser-args=--enable-experimental-canvas-features" should be by > default for all blink_perf tests. > > Run "tools/perf/run_benchmark --browser=default blink_perf_canvas" for this > particular test. lgtm for third_party/WebKit Still need approval from sullivan@ for blink_perf.py
https://codereview.chromium.org/1387933002/diff/40001/tools/perf/benchmarks/b... File tools/perf/benchmarks/blink_perf.py (right): https://codereview.chromium.org/1387933002/diff/40001/tools/perf/benchmarks/b... tools/perf/benchmarks/blink_perf.py:90: '--enable-experimental-canvas-features' This is for all perf tests? That's bad because we would no longer be testing existing behavior. Why not enable the flag within your test like layout tests do?
On 2015/10/08 17:59:05, chrishtr wrote: > https://codereview.chromium.org/1387933002/diff/40001/tools/perf/benchmarks/b... > File tools/perf/benchmarks/blink_perf.py (right): > > https://codereview.chromium.org/1387933002/diff/40001/tools/perf/benchmarks/b... > tools/perf/benchmarks/blink_perf.py:90: '--enable-experimental-canvas-features' > This is for all perf tests? That's bad because we would no longer be testing > existing behavior. Why not enable the flag within your test like layout tests > do? Not all perf, just blink_perf. Also, this flag simply unhides experimental APIs; it is not a behavior modifier for existing APIs. We run the layout tests with this flag always on (and a bunch of other experimental API flags always on).
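Patch set 3 adds the flag to every blink_perf benchmark rather than individual tests. A sketch of what such a hook might look like, with a stand-in options class (`AppendExtraBrowserArgs` is modeled on Telemetry's API; the `FakeBrowserOptions` class here is hypothetical, for illustration only):

```python
class FakeBrowserOptions:
    """Hypothetical stand-in for Telemetry's browser options object."""
    def __init__(self):
        self.extra_browser_args = []

    def AppendExtraBrowserArgs(self, args):
        # Collects extra command-line switches to pass to the browser at launch.
        self.extra_browser_args.extend(args)


def CustomizeBrowserOptions(options):
    # Sketch of the patch: unhide experimental canvas APIs (e.g. createImageBitmap)
    # for blink_perf benchmarks, so the perf bots need no extra flags.
    options.AppendExtraBrowserArgs(['--enable-experimental-canvas-features'])


options = FakeBrowserOptions()
CustomizeBrowserOptions(options)
print(options.extra_browser_args)  # ['--enable-experimental-canvas-features']
```

Putting the flag in the benchmark's own options hook is what lets the bots pick it up automatically, as discussed above.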
On 2015/10/08 at 18:10:33, junov wrote: > On 2015/10/08 17:59:05, chrishtr wrote: > > https://codereview.chromium.org/1387933002/diff/40001/tools/perf/benchmarks/b... > > File tools/perf/benchmarks/blink_perf.py (right): > > > > https://codereview.chromium.org/1387933002/diff/40001/tools/perf/benchmarks/b... > > tools/perf/benchmarks/blink_perf.py:90: '--enable-experimental-canvas-features' > > This is for all perf tests? That's bad because we would no longer be testing > > existing behavior. Why not enable the flag within your test like layout tests > > do? > > Not all perf, just blink_perf. > Also, this flag simply unhides experimental APIs, it is not a behavior modifier. for existing APIs. We run the layout tests with this flag always on (and a bunch of other experimental API flags always on). Maybe in this case it's ok then, since it should have no bearing on performance of existing features.
lgtm blink_perf changes lgtm if we're not concerned about the effect on non-canvas blink_perf tests.
On 2015/10/08 17:42:20, Justin Novosad wrote: > On 2015/10/08 17:31:38, xidachen wrote: > > The "--extra-browser-args=--enable-experimental-canvas-features" should be by > > default for all blink_perf tests. > > > > Run "tools/perf/run_benchmark --browser=default blink_perf_canvas" for this > > particular test. > > lgtm for third_party/WebKit > Still need approval from sullivan@ for blink_perf.py Thanks, Justin -:)
On 2015/10/08 18:13:11, chrishtr wrote: > On 2015/10/08 at 18:10:33, junov wrote: > > On 2015/10/08 17:59:05, chrishtr wrote: > > > > https://codereview.chromium.org/1387933002/diff/40001/tools/perf/benchmarks/b... > > > File tools/perf/benchmarks/blink_perf.py (right): > > > > > > > https://codereview.chromium.org/1387933002/diff/40001/tools/perf/benchmarks/b... > > > tools/perf/benchmarks/blink_perf.py:90: > '--enable-experimental-canvas-features' > > > This is for all perf tests? That's bad because we would no longer be testing > > > existing behavior. Why not enable the flag within your test like layout > tests > > > do? > > > > Not all perf, just blink_perf. > > Also, this flag simply unhides experimental APIs, it is not a behavior > modifier. for existing APIs. We run the layout tests with this flag always on > (and a bunch of other experimental API flags always on). > > Maybe in this case it's ok then, since it should have no bearing on performance > of existing features. That's great, thank you, Chris.
On 2015/10/09 18:41:01, sullivan wrote: > lgtm > > blink_perf changes lgtm if we're not concerned about the affect on non-canvas > blink_perf tests. Thank you for reviewing it -:).
The CQ bit was checked by xidachen@chromium.org
The CQ bit was unchecked by xidachen@chromium.org
The CQ bit was checked by xidachen@chromium.org
CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/patch-status/1387933002/40001 View timeline at https://chromium-cq-status.appspot.com/patch-timeline/1387933002/40001
The CQ bit was unchecked by xidachen@chromium.org
The CQ bit was checked by xidachen@chromium.org
CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/patch-status/1387933002/40001 View timeline at https://chromium-cq-status.appspot.com/patch-timeline/1387933002/40001
Message was sent while issue was closed.
Committed patchset #3 (id:40001)
Message was sent while issue was closed.
Patchset 3 (id:??) landed as https://crrev.com/9f0bb323d1b080be08797b13fa77e2d6153c5800 Cr-Commit-Position: refs/heads/master@{#353785}