Chromium Code Reviews
Description
MD Settings: split browser tests into smaller chunks
Step 1: just move into multiple TEST_Fs (this CL).
Step 2: make separate subclasses of PolymerTest for each
settings-specific browser test (next CL).
R=dpapad@chromium.org
BUG=534718
Committed: https://crrev.com/3aa0bc4821a0e2eee953ee1cb755db92622f66d6
Cr-Commit-Position: refs/heads/master@{#399845}
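Step 1's mechanics can be sketched as follows. `TEST_F` here is a minimal stand-in for the macro in Chromium's JS browser-test harness (the real one emits C++ gtest code rather than recording entries), and the per-area test names other than the fixture are hypothetical:

```javascript
// Stand-in for the TEST_F() macro from Chromium's WebUI test framework:
// the real one generates a C++ gtest; this one just records registrations.
const registeredTests = [];
function TEST_F(fixture, testName, testBody) {
  registeredTests.push({fixture, testName, testBody});
}

// Before: one monolithic browser test that ran every mocha suite:
//   TEST_F('CrSettingsBrowserTest', 'CrSettingsTest', function() {
//     mocha.run();  // all suites in one browser, one shared timeout
//   });

// After (this CL): one TEST_F per area, so each gets its own browser
// process and its own timeout. The area names below are illustrative.
const runMocha = suiteName => () => console.log('would run', suiteName);
TEST_F('CrSettingsBrowserTest', 'Prefs', runMocha('Prefs'));
TEST_F('CrSettingsBrowserTest', 'SiteList', runMocha('SiteList'));
TEST_F('CrSettingsBrowserTest', 'Menu', runMocha('Menu'));

console.log(registeredTests.map(t => t.fixture + '.' + t.testName).join('\n'));
```

Each registration becomes a separately schedulable gtest case, which is what makes per-test timeouts (and per-test flake reporting) possible.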
Patch Set 1 : asdf #
Total comments: 4
Patch Set 2 : asdf #
Messages
Total messages: 23 (9 generated)
Description was changed from

==========
MD Settings: split browser tests into smaller chunks

Step 1: just move into multiple TEST_Fs.
Step 2: make separate subclasses of PolymerTest for each
settings-specific browser test

R=dpapad@chromium.org
BUG=534718
==========

to

==========
MD Settings: split browser tests into smaller chunks

Step 1: just move into multiple TEST_Fs (this CL).
Step 2: make separate subclasses of PolymerTest for each
settings-specific browser test (next CL).

R=dpapad@chromium.org
BUG=534718
==========
Patchset #1 (id:1) has been deleted
https://codereview.chromium.org/2049943003/diff/20001/chrome/test/data/webui/...
File chrome/test/data/webui/settings/cr_settings_browsertest.js (left):

https://codereview.chromium.org/2049943003/diff/20001/chrome/test/data/webui/...
chrome/test/data/webui/settings/cr_settings_browsertest.js:58: GEN('#if defined(MEMORY_SANITIZER)');
Won't removing this line make our tests fail on memory bots again? If this is not an issue anymore, then we should probably also update all the other places where we replicated this ifdef; see https://cs.chromium.org/search/?q=MEMORY_SANITIZER+file:%5Esrc/chrome/test/da....
https://codereview.chromium.org/2049943003/diff/20001/chrome/test/data/webui/...
File chrome/test/data/webui/settings/cr_settings_browsertest.js (left):

https://codereview.chromium.org/2049943003/diff/20001/chrome/test/data/webui/...
chrome/test/data/webui/settings/cr_settings_browsertest.js:58: GEN('#if defined(MEMORY_SANITIZER)');
On 2016/06/09 01:21:29, dpapad wrote:
> Won't removing this line make our tests fail on memory bots again? If this is
> not an issue anymore, then we should probably also update all other places
> where we replicated this ifdef, see
> https://cs.chromium.org/search/?q=MEMORY_SANITIZER+file:%5Esrc/chrome/test/da....

the test previously failed for 3 reasons:

a) the GN migration accidentally used a debug version of v8 instead of an
optimized one (fixed: crbug.com/586511)

b) the test was doing too much (fixed after this CL and crrev.com/2050053002)

c) memory bots are slow cuz they're tracking memory (not fixed)

I'm guessing that fixing 2 out of 3 of these is enough to re-enable on msan
bots
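The line under discussion follows a common pattern in Chromium JS browser tests: `GEN()` emits raw lines into the generated C++ file, so a test can be conditionally renamed to a `DISABLED_` variant under MemorySanitizer. A minimal stand-in sketch (the real `GEN()` writes into the generated C++; here it just collects lines, and the `MAYBE_` macro names are the conventional pattern rather than quoted from this file):

```javascript
// Stand-in for GEN(): the real one emits raw lines into the C++ file
// generated from this JS test; here we just collect them for inspection.
const generatedCpp = [];
function GEN(line) {
  generatedCpp.push(line);
}

// The guard pattern: on MSan bots (slow because they track memory),
// map the test name to its DISABLED_ variant so gtest skips it.
GEN('#if defined(MEMORY_SANITIZER)');
GEN('#define MAYBE_CrSettingsTest DISABLED_CrSettingsTest');
GEN('#else');
GEN('#define MAYBE_CrSettingsTest CrSettingsTest');
GEN('#endif');

console.log(generatedCpp.join('\n'));
```

Because the `#if` is evaluated by the C++ preprocessor at build time, the JS side stays identical across configurations; only the generated gtest name changes.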
michaelpg@chromium.org changed reviewers: + michaelpg@chromium.org
https://codereview.chromium.org/2049943003/diff/20001/chrome/test/data/webui/...
File chrome/test/data/webui/settings/cr_settings_browsertest.js (left):

https://codereview.chromium.org/2049943003/diff/20001/chrome/test/data/webui/...
chrome/test/data/webui/settings/cr_settings_browsertest.js:58: GEN('#if defined(MEMORY_SANITIZER)');
On 2016/06/09 01:37:25, Dan Beam wrote:
> On 2016/06/09 01:21:29, dpapad wrote:
> > Won't removing this line make our tests fail on memory bots again? [...]
>
> the test previously failed for 3 reasons:
>
> a) the GN migration accidentally used a debug version of v8 instead of an
> optimized one (fixed: crbug.com/586511)
>
> b) the test was doing too much (fixed after this CL and crrev.com/2050053002)

this is the first bullet point under "advantages" here:
https://www.chromium.org/developers/updating-webui-for-material-design/testin...

if that philosophy was wrong, we should remove it. What are the timings actually
like here? I suspect the browser test overhead, including navigating to the
browsePreload, accounts for like 90% of the time, and the time taken for all
suites of unrelated mocha tests would be at most 10%.

Put another way: it seems very likely to me that a test with 4 mocha suites
which times out will still time out if you disable 3 of the suites. Unless one
particular suite is just hella slow for some reason and needs to be addressed.

You're almost quadrupling the amount of time running these tests will take by
making us spin up more browsers. Of course that's not a problem from the
flakiness perspective*, because each one has its own timeout; it's just slow
overall. And since our intention with web components is to test more granular
elements, WebUI will ultimately have many, many more browser_tests this way than
we've had before. Is the *total* running time of browser_tests not a concern?

* actually it is a problem for flakiness, as a flake is more likely to occur in
one of 4 correct browser_tests than in just 1 correct browser_test.

> c) memory bots are slow cuz they're tracking memory (not fixed)
>
> I'm guessing that fixing 2 out of 3 of these is enough to re-enable on msan
> bots
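The flakiness footnote above can be quantified: with a per-test flake probability p, splitting one browser_test into n independent tests raises the chance that at least one of them flakes from p to 1 - (1 - p)^n. A quick sketch (the 1% flake rate is a made-up illustrative number, and real flakes often correlate, so independence is an assumption):

```javascript
// Probability that at least one of n independent tests flakes, given a
// per-test flake probability p: 1 - (1 - p)^n.
function anyFlakeProbability(p, n) {
  return 1 - Math.pow(1 - p, n);
}

// With a hypothetical 1% per-test flake rate, 4 browser_tests flake
// almost 4x as often as 1 (0.01 vs ~0.0394).
console.log(anyFlakeProbability(0.01, 1).toFixed(4));  // "0.0100"
console.log(anyFlakeProbability(0.01, 4).toFixed(4));  // "0.0394"
```

So the split trades a lower chance of timing out per test against a higher chance that *some* test in the set flakes on any given run.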
https://codereview.chromium.org/2049943003/diff/20001/chrome/test/data/webui/...
File chrome/test/data/webui/settings/cr_settings_browsertest.js (left):

https://codereview.chromium.org/2049943003/diff/20001/chrome/test/data/webui/...
chrome/test/data/webui/settings/cr_settings_browsertest.js:58: GEN('#if defined(MEMORY_SANITIZER)');
On 2016/06/09 04:06:36, michaelpg wrote:
> On 2016/06/09 01:37:25, Dan Beam wrote:
> > [...]
> > b) the test was doing too much (fixed after this CL and crrev.com/2050053002)
>
> this is the first bullet point under "advantages" here:
> https://www.chromium.org/developers/updating-webui-for-material-design/testin...
>
> if that philosophy was wrong, we should remove it. What are the timings
> actually like here? [...]
>
> You're almost quadrupling the amount of time running these tests will take by
> making us spin up more browsers. [...] Is the *total* running time of
> browser_tests not a concern?
>
> * actually it is a problem for flakiness, as a flake is more likely to occur
> in one of 4 correct browser_tests than in just 1 correct browser_tests.

doing more work will always take more time, BUT I agree with you that we need
better per-step (setup, teardown, each suite) metrics.

I'll run some printf()s on the try bots before I go further so we're not blind.

> > c) memory bots are slow cuz they're tracking memory (not fixed)
> >
> > I'm guessing that fixing 2 out of 3 of these is enough to re-enable on msan
> > bots
On 2016/06/09 04:37:08, Dan Beam wrote:
> https://codereview.chromium.org/2049943003/diff/20001/chrome/test/data/webui/...
> File chrome/test/data/webui/settings/cr_settings_browsertest.js (left):
>
> chrome/test/data/webui/settings/cr_settings_browsertest.js:58: GEN('#if
> defined(MEMORY_SANITIZER)');
> [...]
> doing more work will always take more time, BUT I agree with you that we need
> better per-step (setup, teardown, each suite) metrics.
>
> I'll run some printf()s on the try bots before I go further so we're not blind.

OK, so, running locally (but assuming the *distribution* is similar):

- just mocha.run(): ~3ms
- registering tests: ~6ms
- the function body of the TEST_F: ~9ms (just mocha.run() + registerTests, so
  this makes sense)
- the RunJavascriptTestF() call in the generated code: ~850ms
- from the beginning of cr_settings_browsertest.js until tearDown: ~920ms
- preloading chrome://md-settings/prefs/prefs.html: ~500ms
- the whole WebUIBrowserTest (from ctor to dtor): ~2850ms
- total gtest time: [1/1] CrSettingsBrowserTest.CrSettingsTest ~3000-ish ms

this gives me mixed feelings.

obviously performing the tests (i.e. mocha.run()) is not a bottleneck IF this
distribution holds.

BUT just loading the files and parsing/running the contents MAY be an issue.
also, preloading prefs.html is taking a fairly long time, IMO (but I've never
understood what takes longer about this). and YES, running the browser via
spinning up a new process does seem to take quite a bit of overhead, but I'm
not convinced that packaging stuff together is a good idea.

Note: all this testing was done in Xvfb+icewm and X11+xmonad on Linux (via
testing/xvfb.py) using --gtest_repeat=5
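The shape of those measurements is the whole argument: the mocha work is a rounding error next to the fixed per-browser overhead. A sketch using the rough numbers quoted above (values approximate, taken from the message, not re-measured):

```javascript
// Rough per-step timings (ms) from the local run described above.
const timingsMs = {
  mochaRun: 3,                  // actually executing the suites
  registerTests: 6,
  runJavascriptTestF: 850,      // harness call in the generated code
  preloadPrefsHtml: 500,        // navigating to browsePreload
  wholeWebUIBrowserTest: 2850,  // fixture ctor to dtor
  totalGtest: 3000,
};

// Fraction of total gtest wall time spent actually running mocha: ~0.1%.
// Splitting suites therefore mostly multiplies the fixed ~3s overhead,
// rather than dividing any meaningful amount of test-execution time.
const mochaShare = timingsMs.mochaRun / timingsMs.totalGtest;
console.log((mochaShare * 100).toFixed(1) + '%');  // "0.1%"
```

Under this distribution, n split tests cost roughly n x 3s of wall time, which is the "quadrupling" concern raised earlier in the thread.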
On 2016/06/10 01:59:07, Dan Beam wrote:
> [...]
> OK, so, running locally (but assuming the *distribution* is similar):
> [...]
> also, preloading prefs.html is taking a fairly long time, IMO (but I've never
> understood what takes longer about this).

idk, 500ms seems like a lot, but it's 100ms on my beefy Windows machine for an
official Release build. so if the browser test build/environment is 5x slower,
i guess that makes sense? even so that would imply everything from parsing to
execution is 5x slower.

interestingly, the most time (7ms, but still) is spent in "get usersPrivate",
whatever that means. i guess that's how it accounts for loading the private
extension APIs into the global execution environment or whatever? could that
have a proportionately longer impact in a browser test for some reason?

> and YES, running the browser via spinning up a new process does seem to take
> quite a bit of overhead, but I'm not convinced that packaging stuff together
> is a good idea.

sounds like a good idea to me, if the tests are unit-testy (independent and
don't affect state or global scope), which they mostly are (some singletons
stick around but we try to handle that). obviously any tests we expect to be
slow should be isolated. reporting test run times in the log may help with this
(we can also configure mocha to mark test durations above a certain threshold
as "slow", which we could possibly use to warn about multiple "slow" tests in
one mocha run)

> Note: all this testing was done in Xvfb+icewm and X11+xmonad on Linux (via
> testing/xvfb.py) using --gtest_repeat=5

cool! TIL you don't have to repeat tests inside a for loop.
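The "slow" marking mentioned above corresponds to mocha's per-run `slow` threshold (75ms by default, settable via `mocha.setup({slow: ...})` or `this.slow(ms)` in a suite). The warn-on-multiple-slow-tests idea could be sketched without mocha itself as follows; the test names and durations here are made up:

```javascript
// Sketch of the idea above: flag tests whose duration exceeds a "slow"
// threshold (mirroring mocha's default of 75ms) and warn when one run
// contains several, since those are candidates for isolation.
const SLOW_MS = 75;

function findSlowTests(results) {
  return results.filter(r => r.durationMs > SLOW_MS);
}

// Hypothetical per-test durations as a reporter might record them.
const results = [
  {name: 'prefs initializes', durationMs: 12},
  {name: 'site list renders 1000 rows', durationMs: 180},
  {name: 'menu animates open', durationMs: 95},
];

const slow = findSlowTests(results);
if (slow.length > 1) {
  console.warn(slow.length + ' slow tests in one run; consider isolating them');
}
console.log(slow.map(t => t.name).join(', '));
```

In real usage the durations would come from mocha's reporter events rather than a hand-built array; the point is just that a cheap duration log makes "which suite is hella slow" answerable before deciding what to split.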
dbeam@chromium.org changed reviewers: - dpapad@chromium.org
lgtm, let's see what happens on the bots!
The CQ bit was checked by dbeam@chromium.org
CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/patch-status/2049943003/40001
The CQ bit was unchecked by commit-bot@chromium.org
Try jobs failed on following builders: ios-device on tryserver.chromium.mac (JOB_TIMED_OUT, no build URL) ios-device-gn on tryserver.chromium.mac (JOB_TIMED_OUT, no build URL)
The CQ bit was checked by dbeam@chromium.org
CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/patch-status/2049943003/40001
Message was sent while issue was closed.
Description was changed from

==========
MD Settings: split browser tests into smaller chunks

Step 1: just move into multiple TEST_Fs (this CL).
Step 2: make separate subclasses of PolymerTest for each
settings-specific browser test (next CL).

R=dpapad@chromium.org
BUG=534718
==========

to

==========
MD Settings: split browser tests into smaller chunks

Step 1: just move into multiple TEST_Fs (this CL).
Step 2: make separate subclasses of PolymerTest for each
settings-specific browser test (next CL).

R=dpapad@chromium.org
BUG=534718
==========
Message was sent while issue was closed.
Committed patchset #2 (id:40001)
Message was sent while issue was closed.
CQ bit was unchecked
Message was sent while issue was closed.
Description was changed from

==========
MD Settings: split browser tests into smaller chunks

Step 1: just move into multiple TEST_Fs (this CL).
Step 2: make separate subclasses of PolymerTest for each
settings-specific browser test (next CL).

R=dpapad@chromium.org
BUG=534718
==========

to

==========
MD Settings: split browser tests into smaller chunks

Step 1: just move into multiple TEST_Fs (this CL).
Step 2: make separate subclasses of PolymerTest for each
settings-specific browser test (next CL).

R=dpapad@chromium.org
BUG=534718

Committed: https://crrev.com/3aa0bc4821a0e2eee953ee1cb755db92622f66d6
Cr-Commit-Position: refs/heads/master@{#399845}
==========
Message was sent while issue was closed.
Patchset 2 (id:??) landed as https://crrev.com/3aa0bc4821a0e2eee953ee1cb755db92622f66d6
Cr-Commit-Position: refs/heads/master@{#399845}