Created: 5 years, 10 months ago by jdduke (slow)
Modified: 5 years, 7 months ago
Reviewers: jamesr, cpu_(ooo_6.6-7.5), jam, boliu, davidben, piman, danakj, Nico, pasko, Fabrice (no longer in Chrome)
CC: chromium-reviews, cbentzel+watch_chromium.org, android-webview-reviews_chromium.org, yfriedman+watch_chromium.org, klundberg+watch_chromium.org, erikwright+watch_chromium.org, jbudorick+watch_chromium.org, jshin+watch_chromium.org, andrewhsieh, Tom Hudson, reveman, vmpstr, hans
Base URL: https://chromium.googlesource.com/chromium/src.git@master
Target Ref: refs/pending/heads/master
Project: chromium
Visibility: Public.
Description
Reland "Enable libc++ on Android"
Switch all Android builds to use libc++ instead of stlport.
This change originally landed in crrev.com/315085, but was reverted in
crrev.com/315174 due to CQ execution time regressions.
The root cause of the test time regression was determined to be an issue
with MD5 hash computation, where hashing failed to produce a consistent
result, in turn preventing deployment optimizations. This was resolved in
crbug.com/456396, leaving libc++ test runs comparable with stlport runs.
BUG=456396, 427718
Committed: https://crrev.com/9f355f21c4049c622c71fc66a50e0288c5211335
Cr-Commit-Position: refs/heads/master@{#328296}
Patch Set 1 #
Patch Set 2 : Rebase #
Patch Set 3 : Final rebase #
Total comments: 1
Patch Set 4 : Rebase #
Patch Set 5 : Cleanup #
Patch Set 6 : Fix clang+mesa #
Messages
Total messages: 49 (5 generated)
Created Revert of Revert of Enable libc++ on Android
Looks like it needs a rebase. Also, looking at the patch now, it seems we could move most of the source file changes to a separate patch. Maybe we should do that?
Since this was good enough to land once already, I think just rebasing is enough :-) On Tue, Feb 24, 2015 at 8:26 AM, <fdegans@chromium.org> wrote: > Looks like it needs a rebase. > Also, looking at the patch now, it seems we could move most of the source > file > changes to a separate patch. Maybe we should do that? > > https://codereview.chromium.org/951983002/ > To unsubscribe from this group and stop receiving emails from it, send an email to chromium-reviews+unsubscribe@chromium.org.
On 2015/02/24 16:38:24, Nico wrote: > Since this was good enough to land once already, I think just rebasing is > enough :-) Nico could you verify the changes in PS#3? That should be the only stlport-specific change that has crept in since the revert. Thanks.
lgtm, makes sense
The CQ bit was checked by jdduke@chromium.org
CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/patch-status/951983002/280001
The CQ bit was unchecked by jam@chromium.org
not lgtm, test runs are slower than normal. please watch this closely by doing manual try runs before hitting CQ again.
On 2015/02/24 19:15:25, jam wrote: > not lgtm, > > test runs are slower than normal. please watch this closely by doing manual try > runs before hitting CQ again. Yeah, I was just looking at that. The instrumentation test times look comparable, but many unit tests are 20-40% slower. I wonder if gtest/gmock abuse any operations that are simply slower in libc++?
On 2015/02/24 19:20:13, jdduke wrote: > On 2015/02/24 19:15:25, jam wrote: > > not lgtm, > > > > test runs are slower than normal. please watch this closely by doing manual > try > > runs before hitting CQ again. > > Yeah, I was just looking at that. The instrumentation test times look > comparable, but many unit tests are 20-40% slower. I wonder if gtest/gmock abuse > any operations that are simply slower in libc++? Actually, the biggest regression is in compilation time. I wonder if we need to tweak any infra settings to accommodate the libc++ build?
On 2015/02/24 19:23:21, jdduke wrote: > On 2015/02/24 19:20:13, jdduke wrote: > > On 2015/02/24 19:15:25, jam wrote: > > > not lgtm, > > > > > > test runs are slower than normal. please watch this closely by doing manual > > try > > > runs before hitting CQ again. > > > > Yeah, I was just looking at that. The instrumentation test times look > > comparable, but many unit tests are 20-40% slower. I wonder if gtest/gmock > abuse > > any operations that are simply slower in libc++? > > Actually, the biggest regression is in compilation time. I wonder if we need to > tweak any infra settings to accommodate the libc++ build? comparing the total time on this cl won't be that helpful, because compilation time on this cl should be high as it's doing a full rebuild. however to compare apples to apples, you'd want to compare compile time with another try run that also did full rebuild. that is orthogonal from the test run time issues though.
There will be a compile time hit. libc++ includes more stuff, and that takes longer to parse. (Turning on c++11 also regressed build time measurably.) So some of that we'll have to take. Test runtime shouldn't get worse in rel mode though; if it does we need to fix that. On Tue, Feb 24, 2015 at 11:23 AM, <jdduke@chromium.org> wrote: > On 2015/02/24 19:20:13, jdduke wrote: > >> On 2015/02/24 19:15:25, jam wrote: >> > not lgtm, >> > >> > test runs are slower than normal. please watch this closely by doing >> manual >> try >> > runs before hitting CQ again. >> > > Yeah, I was just looking at that. The instrumentation test times look >> comparable, but many unit tests are 20-40% slower. I wonder if gtest/gmock >> > abuse > >> any operations that are simply slower in libc++? >> > > Actually, the biggest regression is in compilation time. I wonder if we > need to > tweak any infra settings to accommodate the libc++ build? > > https://codereview.chromium.org/951983002/ > To unsubscribe from this group and stop receiving emails from it, send an email to chromium-reviews+unsubscribe@chromium.org.
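For what it's worth, the "more stuff to parse" effect can be ballparked by counting preprocessed lines for a single standard header. The sketch below is only illustrative: it assumes a desktop clang++ that accepts both -stdlib=libc++ and -stdlib=libstdc++ (stlport has no -stdlib spelling at all), so the absolute numbers will not match the Android NDK toolchains discussed here.

import subprocess

def preprocessed_line_count(header, stdlib):
    # Preprocess a tiny translation unit and count how many lines the
    # compiler would then have to parse.
    source = '#include <%s>\nint main() { return 0; }\n' % header
    result = subprocess.run(
        ['clang++', '-std=c++11', '-stdlib=' + stdlib, '-E', '-x', 'c++', '-'],
        input=source, capture_output=True, text=True, check=True)
    return len(result.stdout.splitlines())

for stdlib in ('libc++', 'libstdc++'):
    print(stdlib, preprocessed_line_count('string', stdlib))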
On 2015/02/24 19:26:35, jam wrote: > On 2015/02/24 19:23:21, jdduke wrote: > > On 2015/02/24 19:20:13, jdduke wrote: > > > On 2015/02/24 19:15:25, jam wrote: > > > > not lgtm, > > > > > > > > test runs are slower than normal. please watch this closely by doing > manual > > > try > > > > runs before hitting CQ again. > > > > > > Yeah, I was just looking at that. The instrumentation test times look > > > comparable, but many unit tests are 20-40% slower. I wonder if gtest/gmock > > abuse > > > any operations that are simply slower in libc++? > > > > Actually, the biggest regression is in compilation time. I wonder if we need > to > > tweak any infra settings to accommodate the libc++ build? > > comparing the total time on this cl won't be that helpful, because compilation > time on this cl should be high as it's doing a full rebuild. however to compare > apples to apples, you'd want to compare compile time with another try run that > also did full rebuild. > > that is orthogonal from the test run time issues though. wrt test run times: they're starting significantly later with this CL than without, e.g. unit_tests starts after ~190-200s normally, but starts after ~270s here.
since i found this very confusing the last time too: can you please link to a concrete example, and also to the same thing on the current bots where it's working as intended?
On 2015/02/24 19:30:54, Nico wrote: > since i found this very confusing the last time too: can you please link to a > concrete example, and also to the same thing on the current bots where it's > working as intended? Here's a run of content_unittests without this change (from a previous build, the mips warning change): http://build.chromium.org/p/tryserver.chromium.linux/builders/android_rel_tes... The unit tests start execution after ~88s. Here's the same test run *with* this change (from the above CQ try): http://build.chromium.org/p/tryserver.chromium.linux/builders/android_rel_tes... It takes ~200s before the tests start.
Isn't the way the tests work that they compare the md5 of a bunch of built products and upload them only if they have changed? If so, isn't it expected that the first build after a change that touches every binary takes longer to upload stuff? I suppose I can send a tryjob for another change that modifies every binary to test that theory. On Tue, Feb 24, 2015 at 11:36 AM, <jdduke@chromium.org> wrote: > On 2015/02/24 19:30:54, Nico wrote: > >> since i found this very confusing the last time too: can you please link >> to a >> concrete example, and also to the same thing on the current bots where >> it's >> working as intended? >> > > Here's a run of content_unittests without this change (from a previous > build, > the mips warning change): > http://build.chromium.org/p/tryserver.chromium.linux/ > builders/android_rel_tests_recipe/builds/4956/steps/ > content_unittests/logs/stdio > > The unit tests start execution after ~88s. > > Here's the same test run *with* this change (from the above CQ try): > http://build.chromium.org/p/tryserver.chromium.linux/ > builders/android_rel_tests_recipe/builds/5268/steps/ > content_unittests/logs/stdio > > It takes ~200s before the tests start. > > https://codereview.chromium.org/951983002/ > To unsubscribe from this group and stop receiving emails from it, send an email to chromium-reviews+unsubscribe@chromium.org.
On 2015/02/24 19:48:13, Nico wrote: > Isn't the way the tests work that they compare the md5 of a bunch of built > products and upload them only if they have changed? If so, isn't it > expected that the first build after a change that touches every binary > takes longer to upload stuff?

Our device provisioning wipes everything every run. Rebuilding everything should not affect the time it takes to install/push data to devices. That said, it appears that a number of the pieces that rely on md5sum have been affected. With this CL, we appear to be:
- pushing device_forwarder in each test
- installing the apk multiple times prior to listing tests
and content_unittests (at least) is taking much longer to push its data dependencies.

> > I suppose I can send a tryjob for another change that modifies every binary > to test that theory. > > On Tue, Feb 24, 2015 at 11:36 AM, <mailto:jdduke@chromium.org> wrote: > > > On 2015/02/24 19:30:54, Nico wrote: > > > >> since i found this very confusing the last time too: can you please link > >> to a > >> concrete example, and also to the same thing on the current bots where > >> it's > >> working as intended? > >> > > > > Here's a run of content_unittests without this change (from a previous > > build, > > the mips warning change): > > http://build.chromium.org/p/tryserver.chromium.linux/ > > builders/android_rel_tests_recipe/builds/4956/steps/ > > content_unittests/logs/stdio > > > > The unit tests start execution after ~88s. > > > > Here's the same test run *with* this change (from the above CQ try): > > http://build.chromium.org/p/tryserver.chromium.linux/ > > builders/android_rel_tests_recipe/builds/5268/steps/ > > content_unittests/logs/stdio > > > > It takes ~200s before the tests start. > > > > https://codereview.chromium.org/951983002/ > > > > To unsubscribe from this group and stop receiving emails from it, send an email > to mailto:chromium-reviews+unsubscribe@chromium.org.
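For reference, the deployment optimization being discussed works roughly like the sketch below. This is not the actual pylib code — the helper names (file_md5, push_if_changed, device_md5s, push_fn) are invented for illustration — but it shows why an unstable hash is so costly: every file looks changed, so everything is re-pushed on every run.

import hashlib

def file_md5(path, chunk_size=1 << 20):
    # Hash the build product in chunks so large .so files don't exhaust memory.
    digest = hashlib.md5()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(chunk_size), b''):
            digest.update(chunk)
    return digest.hexdigest()

def push_if_changed(host_path, device_path, device_md5s, push_fn):
    # device_md5s: {device_path: md5} recorded after the previous push.
    # push_fn(host_path, device_path): performs the actual adb push.
    new_md5 = file_md5(host_path)
    if device_md5s.get(device_path) == new_md5:
        return False  # identical copy already on the device, skip the push
    push_fn(host_path, device_path)
    device_md5s[device_path] = new_md5
    return True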
I tried to repro this upload time difference locally and failed so far :-/ I built everything with and without this cl, and ran

$ time build/android/test_runner.py gtest -s content_unittests --verbose --release

with this patch applied:

$ git diff
diff --git a/build/android/pylib/base/test_dispatcher.py b/build/android/pylib/base/test_dispatcher.py
index 929c408..064f335 100644
--- a/build/android/pylib/base/test_dispatcher.py
+++ b/build/android/pylib/base/test_dispatcher.py
@@ -19,6 +19,7 @@ Performs the following steps:
 # been ported to the new environment / test instance model.
 
 import logging
+import sys
 import threading
 
 from pylib import android_commands
@@ -324,6 +325,7 @@ def RunTests(tests, runner_factory, devices, shard=True,
     tag_results_with_device = True
     log_string = 'replicated on each device'
 
+  sys.exit(0)
   logging.info('Will run %d tests (%s): %s',
                len(tests_expanded), log_string, str(tests_expanded))
   runners = _CreateRunners(runner_factory, devices, setup_timeout)

It took the same time with both stlport and libc++.
Message was sent while issue was closed.
danakj@chromium.org changed reviewers: + danakj@chromium.org
Message was sent while issue was closed.
https://codereview.chromium.org/951983002/diff/280001/build/common.gypi
File build/common.gypi (right):

https://codereview.chromium.org/951983002/diff/280001/build/common.gypi#newco...
build/common.gypi:4750: }, { # else: android_webview_build!=0
This appears to be gone from common.gypi. A few other things need less unclear rebasing.
Message was sent while issue was closed.
On 2015/04/24 21:25:30, danakj wrote: > https://codereview.chromium.org/951983002/diff/280001/build/common.gypi > File build/common.gypi (right): > > https://codereview.chromium.org/951983002/diff/280001/build/common.gypi#newco... > build/common.gypi:4750: }, { # else: android_webview_build!=0 > This appears to be gone from common.gypi. A few other things need less unclear > rebasing. Maybe we should go back to https://codereview.chromium.org/835633003/ and re-open that one though.
Message was sent while issue was closed.
On 2015/04/24 21:25:30, danakj wrote: > https://codereview.chromium.org/951983002/diff/280001/build/common.gypi > File build/common.gypi (right): > > https://codereview.chromium.org/951983002/diff/280001/build/common.gypi#newco... > build/common.gypi:4750: }, { # else: android_webview_build!=0 > This appears to be gone from common.gypi. A few other things need less unclear > rebasing. Yeah, I completely removed android_webview_build and the android_aosp bots that used it, so all these conditions have been collapsed into their parents as if android_webview_build==0 was always true. You should be able to find all the android_webview_build==0 paths nearby to where they were in common.gypi, but some of them moved more than a few lines in order to simplify nesting/conditionals.
Message was sent while issue was closed.
I built everything and ran the same command for libcxx vs stlport. With libcxx and stlport the first 30s are the same. Then the libcxx version does the following:

I 19.586s TimeoutThread-1-for-push_data_deps_to_device_dir(04f6f6751e8976ca) [host]> adb -s 04f6f6751e8976ca shell 'getprop ro.build.type; echo %$?;'
I 19.656s TimeoutThread-1-for-push_data_deps_to_device_dir(04f6f6751e8976ca) [host]> adb -s 04f6f6751e8976ca shell 'mkdir /data/local/tmp/bin /data/local/tmp/framework; echo %$?;'
I 19.730s TimeoutThread-1-for-push_data_deps_to_device_dir(04f6f6751e8976ca) [host]> adb -s 04f6f6751e8976ca shell 'echo -n '"'"'#!/system/bin/sh base=/data/local/tmp export CLASSPATH=$base/framework/chromium_commands.jar exec app_process $base/bin org.chromium.android.commands.unzip.Unzip $@ '"'"' > /data/local/tmp/bin/unzip; echo %$?;'
I 19.795s TimeoutThread-1-for-push_data_deps_to_device_dir(04f6f6751e8976ca) [host]> adb -s 04f6f6751e8976ca shell 'chmod 755 /data/local/tmp/bin/unzip; echo %$?;'
I 19.863s TimeoutThread-1-for-push_data_deps_to_device_dir(04f6f6751e8976ca) [host]> adb -s 04f6f6751e8976ca push /usr/local/google/home/danakj/s/c/src/out_android/Release/lib.java/chromium_commands.dex.jar /data/local/tmp/framework/chromium_commands.jar
I 22.349s TimeoutThread-1-for-push_data_deps_to_device_dir(04f6f6751e8976ca) [host]> adb -s 04f6f6751e8976ca push /tmp/tmpCLGIaD.zip /storage/emulated/legacy/tmp.zip
I 34.034s TimeoutThread-1-for-push_data_deps_to_device_dir(04f6f6751e8976ca) [host]> adb -s 04f6f6751e8976ca shell 'su -c ls /root && ! ls /root; echo %$?;'
I 34.113s TimeoutThread-1-for-push_data_deps_to_device_dir(04f6f6751e8976ca) [host]> adb -s 04f6f6751e8976ca shell 'PATH="/data/local/tmp/bin:$PATH" unzip /storage/emulated/legacy/tmp.zip; echo %$?;'
I 52.783s TimeoutThread-1-for-push_data_deps_to_device_dir(04f6f6751e8976ca) [host]> adb -s 04f6f6751e8976ca get-state
I 52.827s TimeoutThread-1-for-push_data_deps_to_device_dir(04f6f6751e8976ca) [host]> adb -s 04f6f6751e8976ca shell 'rm /storage/emulated/legacy/tmp.zip; echo %$?;'
I 52.986s TimeoutThread-1-for-push_data_deps_to_device_dir(04f6f6751e8976ca) [host]> adb -s 04f6f6751e8976ca shell 'chmod -R 777 /storage/emulated/legacy/natives_blob.bin /storage/emulated/legacy/content /storage/emulated/legacy/icudtl.dat /storage/emulated/legacy/net /storage/emulated/legacy/media /storage/emulated/legacy/snapshot_blob.bin /storage/emulated/legacy/paks /storage/emulated/legacy/content_shell; echo %$?;'

I don't know what this is, but it takes ~30 seconds of the total 3m30s. The commands you see are from build/android/pylib/device/commands/install_commands.py which is to install the chromium_commands target. Idk what this is or why it feels like it needs it though.

The difference I see with libcxx vs stlport then is 3m31s vs 2m59s. I think that's on the order people were seeing on the bots? (I read 20-40% and this is only 10% but maybe my machine is faster? This is with an N4.)
Message was sent while issue was closed.
There's a lot of noise in that. Here's what it's doing extra for libcxx:

[host]> adb shell 'getprop ro.build.type'
[host]> adb shell 'mkdir /data/local/tmp/bin /data/local/tmp/framework'
[host]> adb shell 'echo -n '"'"'#!/system/bin/sh base=/data/local/tmp export CLASSPATH=$base/framework/chromium_commands.jar exec app_process $base/bin org.chromium.android.commands.unzip.Unzip $@ '"'"' > /data/local/tmp/bin/unzip'
[host]> adb shell 'chmod 755 /data/local/tmp/bin/unzip'
[host]> adb push /usr/local/google/home/danakj/s/c/src/out_android/Release/lib.java/chromium_commands.dex.jar /data/local/tmp/framework/chromium_commands.jar
[host]> adb push /tmp/tmpCLGIaD.zip /storage/emulated/legacy/tmp.zip
[host]> adb shell 'su -c ls /root && ! ls /root'
[host]> adb shell 'PATH="/data/local/tmp/bin:$PATH" unzip /storage/emulated/legacy/tmp.zip'
[host]> adb get-state
[host]> adb shell 'rm /storage/emulated/legacy/tmp.zip'
[host]> adb -s 04f6f6751e8976ca shell 'chmod -R 777 /storage/emulated/legacy/natives_blob.bin /storage/emulated/legacy/content /storage/emulated/legacy/icudtl.dat /storage/emulated/legacy/net /storage/emulated/legacy/media /storage/emulated/legacy/snapshot_blob.bin /storage/emulated/legacy/paks /storage/emulated/legacy/content_shell'
Message was sent while issue was closed.
All that appears to be to unzip something in /storage/emulated/legacy/ with a bunch of subdirs in it.
Message was sent while issue was closed.
I went ahead and rebased the change; at least basic compilation should work.
Now when I run the libcxx version, it doesn't spend time unzipping /storage/emulated/legacy. It appears to me that the md5sum fails on the first run, and then passes thereafter, making the difference no longer reproducible. I'm not sure whether the bots wipe the device with an image that includes the contents of /storage/emulated/legacy (so it's a problem on every run), or whether this would resolve itself after each bot has run with the new config once.
OK I'm failing to reproduce this locally as well. I'm going to take it to the bug though: https://code.google.com/p/chromium/issues/detail?id=456396
jam@: Any objections to trying to reland this and monitor build times per discussion on crbug.com/456396?
On 2015/04/28 00:37:04, jdduke wrote: > jam@: Any objections to trying to reland this and monitor build times per > discussion on crbug.com/456396? why do we need to commit it and watch the cycle time, instead of say kicking n builds on try bots with this? in general i'm nervous of using the 1000 chrome devs as guinea pigs.
On 2015/04/28 20:03:03, jabdelmalek wrote: > On 2015/04/28 00:37:04, jdduke wrote: > > jam@: Any objections to trying to reland this and monitor build times per > > discussion on crbug.com/456396? > > why do we need to commit it and watch the cycle time, instead of say kicking n > builds on try bots with this? > > in general i'm nervous of using the 1000 chrome devs as guinea pigs. I can send those try jobs again. You think they'd be dramatically different?
On 2015/04/28 20:03:03, jabdelmalek wrote: > On 2015/04/28 00:37:04, jdduke wrote: > > jam@: Any objections to trying to reland this and monitor build times per > > discussion on crbug.com/456396? > > why do we need to commit it and watch the cycle time, instead of say kicking n > builds on try bots with this? > > in general i'm nervous of using the 1000 chrome devs as guinea pigs. See comments on the bug from yesterday. We did send trybots and didn't see a slowdown there. We also tried to reproduce this locally (several times, independently) and failed. We feel we've done due diligence.
On 2015/04/28 20:04:48, danakj wrote: > On 2015/04/28 20:03:03, jabdelmalek wrote: > > On 2015/04/28 00:37:04, jdduke wrote: > > > jam@: Any objections to trying to reland this and monitor build times per > > > discussion on crbug.com/456396? > > > > why do we need to commit it and watch the cycle time, instead of say kicking n > > builds on try bots with this? > > > > in general i'm nervous of using the 1000 chrome devs as guinea pigs. > > I can send those try jobs again. You think they'd be dramatically different? I ran the try jobs again. libcxx was 10m slower again, this time the compile went faster, and the extra time was distributed to tests instead. This doesn't look particularly actionable, and I see no difference in behaviour locally between libcxx and stlport when doing provision_devices.py => install/run content_unittests. Should this really be blocked on anything here still?
On 2015/04/28 21:54:38, danakj wrote: > On 2015/04/28 20:04:48, danakj wrote: > > On 2015/04/28 20:03:03, jabdelmalek wrote: > > > On 2015/04/28 00:37:04, jdduke wrote: > > > > jam@: Any objections to trying to reland this and monitor build times per > > > > discussion on crbug.com/456396? > > > > > > why do we need to commit it and watch the cycle time, instead of say kicking > n > > > builds on try bots with this? > > > > > > in general i'm nervous of using the 1000 chrome devs as guinea pigs. > > > > I can send those try jobs again. You think they'd be dramatically different? > > I ran the try jobs again. libcxx was 10m slower again, this time the compile > went faster, and the extra time was distributed to tests instead. This doesn't > look particularly actionable, and I see no difference in behaviour locally > between libcxx and stlport when doing provision_devices.py => install/run > content_unittests. > > Should this really be blocked on anything here still? @jam: All the details are here https://code.google.com/p/chromium/issues/detail?id=456396#c57
On 2015/04/28 21:54:38, danakj wrote: > On 2015/04/28 20:04:48, danakj wrote: > > On 2015/04/28 20:03:03, jabdelmalek wrote: > > > On 2015/04/28 00:37:04, jdduke wrote: > > > > jam@: Any objections to trying to reland this and monitor build times per > > > > discussion on crbug.com/456396? > > > > > > why do we need to commit it and watch the cycle time, instead of say kicking > n > > > builds on try bots with this? > > > > > > in general i'm nervous of using the 1000 chrome devs as guinea pigs. > > > > I can send those try jobs again. You think they'd be dramatically different? > > I ran the try jobs again. libcxx was 10m slower again, this time the compile > went faster, and the extra time was distributed to tests instead. This doesn't > look particularly actionable, and I see no difference in behaviour locally > between libcxx and stlport when doing provision_devices.py => install/run > content_unittests. > > Should this really be blocked on anything here still?

To compare build times, one would need to do two changes to see how long full builds take with/without this change. It would be good to confirm build time is the same. Intuitively I don't see why it would be different because of goma. For your normal try runs here, they will have artificially higher compile times because they force a full rebuild, which of course won't happen once this cl lands and rolls out.

What's more interesting to look at is test times. I'm comparing a run from this cl http://build.chromium.org/p/tryserver.chromium.linux/builders/linux_android_r... vs http://build.chromium.org/p/tryserver.chromium.linux/builders/linux_android_r... (making sure to pick runs that both ran on a bot with 7 attached working devices)

the test running times seem to be slower with this change. no?
On Tue, Apr 28, 2015 at 5:21 PM, <jam@chromium.org> wrote: > On 2015/04/28 21:54:38, danakj wrote: > >> On 2015/04/28 20:04:48, danakj wrote: >> > On 2015/04/28 20:03:03, jabdelmalek wrote: >> > > On 2015/04/28 00:37:04, jdduke wrote: >> > > > jam@: Any objections to trying to reland this and monitor build >> times >> > per > >> > > > discussion on crbug.com/456396? >> > > >> > > why do we need to commit it and watch the cycle time, instead of say >> > kicking > >> n >> > > builds on try bots with this? >> > > >> > > in general i'm nervous of using the 1000 chrome devs as guinea pigs. >> > >> > I can send those try jobs again. You think they'd be dramatically >> different? >> > > I ran the try jobs again. libcxx was 10m slower again, this time the >> compile >> went faster, and the extra time was distributed to tests instead. This >> doesn't >> look particularly actionable, and I see no difference in behaviour locally >> between libcxx and stlport when doing provision_devices.py => install/run >> content_unittests. >> > > Should this really be blocked on anything here still? >> > > To compare build times, one would need to to do two changes to see how > long full > builds take with/without this change. It would be good to confirm build > time is > the same. Intuitively I don't see why it would be different because of > goma. For > your normal try runs here, they will be artificially higher compile time > because > it'll force a full rebuild, which of course won't happen after this cl > lands > after it rolls out. > My stlport CL changes a compiler define so that it forces a full rebuild also. No? I'm confused here what you mean. I should be comparing a full rebuild with libcxx to a full rebuild with stlport. That's what I intended to do. > > What's more interesting to look at is test times. I'm comparing a run from > this > cl > > http://build.chromium.org/p/tryserver.chromium.linux/builders/linux_android_r... > vs > > http://build.chromium.org/p/tryserver.chromium.linux/builders/linux_android_r... > (making sure to pick runs that both ran on a bot with 7 attached working > devices) > > the test running times seem to be slower with this change. no? > Hm ok I do see that difference if I only compare builds on the same bot instance. Back to the bug. > > https://codereview.chromium.org/951983002/ > To unsubscribe from this group and stop receiving emails from it, send an email to chromium-reviews+unsubscribe@chromium.org.
On 2015/04/29 20:23:22, danakj wrote: > @jam: https://code.google.com/p/chromium/issues/detail?id=456396#c63

Copy paste from the bug for jam, PTAL:

Here are two full re-builds on bots with the same number of devices:

libcxx: http://build.chromium.org/p/tryserver.chromium.linux/builders/linux_android_r...
stlport: http://build.chromium.org/p/tryserver.chromium.linux/builders/linux_android_r...

libcxx is on build118-a4: http://build.chromium.org/p/tryserver.chromium.linux/buildslaves/build118-a4
android device 1: 06b829dd13c869d3 hammerhead KTU84P 29.2C 100%
android device 2: 03d136f6006adc5e hammerhead KTU84P 28.3C 100%
android device 3: 060888b5006235be hammerhead KTU84P 26.1C 100%
android device 4: 06b71e0b00621c73 hammerhead KTU84P 26.6C 100%
android device 5: 06c3870c006b0068 hammerhead KTU84P 24.8C 100%
android device 6: 06085ccf006236dd hammerhead KTU84P 25.3C 100%
architecture: amd64
git version: 2.3.7
memory total: 15.64 GB
osfamily: Debian
processor count: 4
product name: PowerEdge R210 II
os version: 12.04

stlport is on build111-a4: http://build.chromium.org/p/tryserver.chromium.linux/buildslaves/build111-a4
android device 1: 03d27c79006adc13 hammerhead KTU84P 29.6C 100%
android device 2: 0608588513c86e2b hammerhead KTU84P 29.1C 100%
android device 3: 03d28015006adc78 hammerhead KTU84P 31.5C 100%
android device 4: 06b6ab3d006224bf hammerhead KTU84P 33.3C 100%
android device 5: 03d28022006adc6d hammerhead KTU84P 29.5C 100%
android device 6: 06b6a72b006200a5 hammerhead KTU84P 30.8C 100%
architecture: amd64
git version: 2.3.7
memory total: 15.64 GB
osfamily: Debian
processor count: 4
product name: PowerEdge R210 II
os version: 12.04

libcxx total time: 1h15m
stlport total time: 1h19m

Some test steps are slower:
content_unittests: 1m => 2m
unit_tests: 2m => 3m

Some test steps are faster:
content_browsertests: 9.5m => 7.8m

The libcxx compile step here was actually 7m faster. The tests were 3m slower overall, most steps were < 10 seconds slower. And the total time was faster. This feels like mostly noise to me? WDYT jam?
I talked to jam in person. He was concerned that these results have 6 devices connected instead of the usual 7, and he'd like us to understand the content_unittests slowdown better (since that takes fairly consistently ~1 min on http://build.chromium.org/p/tryserver.chromium.linux/stats/linux_android_rel_ng).

He says reproducing this locally is tricky because there are fewer (and different) devices connected to the desktop locally.

As the next step, I'll ssh into one of the slaves with 7 devices (tomorrow) and try to reproduce and investigate the slowdown there.
On 2015/04/30 22:58:35, Nico wrote: > I talked to jam in person. He was concerned that these results have 6 devices > connected instead of the usual 7, and he'd like us to understand the > content_unittests slowdown better (since that takes fairly consistently ~1 min > on > http://build.chromium.org/p/tryserver.chromium.linux/stats/linux_android_rel_ng > > He says reproducing this locally is tricky because there's fewer and different > devices connected to the desktop locally. > > As the next step, I'll ssh into one of the slaves with 7 devices (tomorrow) and > try to reproduce and investigate the slowdown there. Alternatively, you may be able to talk to labs and borrow 7xN5s.
jam: I was able to reproduce the slowdown on a slave, found the cause, fixed it, and verified that the slowdown disappears with the fix (see http://code.google.com/p/chromium/issues/detail?id=456396#c65 - #c70). With this addressed, does this change lgty?
On 2015/05/05 03:00:08, Nico wrote: > jam: I was able to reproduce the slowdown on a slave, found the cause, fixed it, > and verified that the slowdown disappears with the fix (see > http://code.google.com/p/chromium/issues/detail?id=456396#c65 - #c70). > > With this addressed, does this change lgty? Awesome detective work Nico and Dana. CL description updated, and all builds/tests should now be passing (clang included).
On 2015/05/05 03:00:08, Nico wrote: > jam: I was able to reproduce the slowdown on a slave, found the cause, fixed it, > and verified that the slowdown disappears with the fix (see > http://code.google.com/p/chromium/issues/detail?id=456396#c65 - #c70). > > With this addressed, does this change lgty? lgtm, great investigation and tracking this down :)
The CQ bit was checked by jdduke@chromium.org
The patchset sent to the CQ was uploaded after l-g-t-m from thakis@chromium.org
Link to the patchset: https://codereview.chromium.org/951983002/#ps340001 (title: "Fix clang+mesa")
CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/patch-status/951983002/340001
Message was sent while issue was closed.
Committed patchset #6 (id:340001)
Message was sent while issue was closed.
Patchset 6 (id:??) landed as https://crrev.com/9f355f21c4049c622c71fc66a50e0288c5211335
Cr-Commit-Position: refs/heads/master@{#328296}