Created: 4 years, 7 months ago by sdefresne
Modified: 4 years, 6 months ago
Reviewers: Dirk Pranke
CC: chromium-reviews
Base URL: https://chromium.googlesource.com/chromium/src.git@gn-pool
Target Ref: refs/pending/heads/master
Project: chromium
Visibility: Public.
Description

[GN] Configure a pool for copy_bundle_data and compile_xcassets tools.
Reduce the number of tasks using the copy_bundle_data and compile_xcassets
tools as they can cause lots of I/O contention when invoking ninja with a
large number of parallel jobs (e.g. when using distributed build like goma).
Use the same depth as the link_pool (but in a separate pool).
BUG=612786
Committed: https://crrev.com/a78c5b20b9d0d8cb6e883e3efb83dafbc7849655
Cr-Commit-Position: refs/heads/master@{#398585}
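For readers unfamiliar with GN pools: the change amounts to declaring one shared pool in the Mac/iOS toolchain definition and attaching it to the two tools. A minimal sketch, assuming the pool name and surrounding toolchain structure (the landed BUILD.gn may differ in details):

```gn
# Sketch only: the pool name, toolchain name, and commands are assumptions
# based on this review, not the landed Chromium code.
pool("bundle_pool") {
  # Same depth as the link pool, but a separate pool, so bundle copies and
  # asset compilations do not compete with links for the same slots.
  depth = concurrent_links
}

toolchain("clang") {
  tool("copy_bundle_data") {
    command = "..."  # elided; the real step copies bundle resources
    pool = ":bundle_pool"
  }
  tool("compile_xcassets") {
    command = "..."  # elided; the real step wraps xcassets compilation
    pool = ":bundle_pool"
  }
}
```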
Patch Set 1 #
Total comments: 2
Patch Set 2 : Use get_concurrent_links.py #
Patch Set 3 : Rebase #
Total comments: 2
Patch Set 4 : Address comments #

Messages
Total messages: 18 (5 generated)
sdefresne@chromium.org changed reviewers: + dpranke@chromium.org
Please take a look. I've picked the pool depth more or less at random (I used the default number of ninja jobs on my machine rounded to the closest multiple of ten).
https://codereview.chromium.org/2018553003/diff/1/build/toolchain/mac/BUILD.gn
File build/toolchain/mac/BUILD.gn (right):

https://codereview.chromium.org/2018553003/diff/1/build/toolchain/mac/BUILD.g...
build/toolchain/mac/BUILD.gn:29: depth = 20
Your desktop machine is probably a lot beefier than the bots, so this is probably too high by default. It's probably better to re-use the get_concurrent_links.py script to get default values (see, e.g., gcc_toolchain.gni).

You might also want this to be a declare_arg() so that we can tune this on bots as needed.

... which made me just realize that the mac and ios toolchains aren't using get_concurrent_links.py either, which is probably bad.
On 2016/05/31 22:29:44, Dirk Pranke (slow) wrote:
> https://codereview.chromium.org/2018553003/diff/1/build/toolchain/mac/BUILD.gn
> File build/toolchain/mac/BUILD.gn (right):
>
> https://codereview.chromium.org/2018553003/diff/1/build/toolchain/mac/BUILD.g...
> build/toolchain/mac/BUILD.gn:29: depth = 20
> Your desktop machine is probably a lot beefier than the bots, so this is
> probably too high by default. It's probably better to re-use the
> get_concurrent_links.py script to get default values (see, e.g.,
> gcc_toolchain.gni).
>
> You might also want this to be a declare_arg() so that we can tune this on bots
> as needed.
>
> ... which made me just realize that the mac and ios toolchains aren't using
> get_concurrent_links.py either, which is probably bad.

This is an upper limit on the number of tasks that can run; it is also limited by the -j parameter passed to ninja. On my machine, the build is fine with values around 100 and starts to struggle past that point. I'll make it configurable and use the get_concurrent_links.py script, as that value should be okay here too.

Note that the bundle data steps are pretty cheap (most of them just do a "cp", but implemented in Python). What's expensive is opening all the files at the same time (even my beefy SSD gets sad due to I/O).
On 2016/05/31 22:37:08, sdefresne wrote:
> On 2016/05/31 22:29:44, Dirk Pranke (slow) wrote:
> > https://codereview.chromium.org/2018553003/diff/1/build/toolchain/mac/BUILD.gn
> > File build/toolchain/mac/BUILD.gn (right):
> >
> > https://codereview.chromium.org/2018553003/diff/1/build/toolchain/mac/BUILD.g...
> > build/toolchain/mac/BUILD.gn:29: depth = 20
> > Your desktop machine is probably a lot beefier than the bots, so this is
> > probably too high by default. It's probably better to re-use the
> > get_concurrent_links.py script to get default values (see, e.g.,
> > gcc_toolchain.gni).
> >
> > You might also want this to be a declare_arg() so that we can tune this on
> > bots as needed.
> >
> > ... which made me just realize that the mac and ios toolchains aren't using
> > get_concurrent_links.py either, which is probably bad.
>
> This is an upper limit of the number of tasks that can be run. It will be also
> limited by the -j parameter passed to ninja. On my machine, build is fine with
> values around 100, it's starting to struggle after that point. I'll make it
> configurable and will use the get_concurrent_links.py script as it should be
> okay using that value.
>
> Note that the bundle data steps are pretty cheap (most of them just do a "cp"
> but implemented in Python). What's expensive is opening all the files at the
> same time (even my beefy ssd gets sad due to io).

The bots are 8-core machines without a huge amount of I/O, but they use -j 50 for goma, so defaulting to 20 is lower than -j but probably still too high.
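The interplay discussed above is a property of ninja pools: for rules assigned to a pool, the effective parallelism is the minimum of the pool's depth and the global -j value, while other rules still use the full -j. An illustrative hand-written ninja fragment (the command is a placeholder, not Chromium's generated rule):

```ninja
# Illustrative only; Chromium's generated build.ninja differs.
pool bundle_pool
  depth = 20

rule copy_bundle_data
  command = cp $in $out  # placeholder; the real step is a Python wrapper
  pool = bundle_pool

build out/app.bundle/res.png: copy_bundle_data res.png

# With `ninja -j 50`, at most min(20, 50) = 20 copy_bundle_data jobs run
# concurrently, capping the I/O burst without lowering -j globally.
```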
Description was changed from
==========
[GN] Configure a pool for copy_bundle_data and compile_xcassets tools.

Reduce the number of tasks using the copy_bundle_data and compile_xcassets
tools as they can cause lots of I/O contention when invoking ninja with a
large number of parallel jobs (e.g. when using distributed build like goma).

BUG=612786
==========
to
==========
[GN] Configure a pool for copy_bundle_data and compile_xcassets tools.

Reduce the number of tasks using the copy_bundle_data and compile_xcassets
tools as they can cause lots of I/O contention when invoking ninja with a
large number of parallel jobs (e.g. when using distributed build like goma).

Make the pool size configurable via "gn args" and defaults to using the
number of tasks returned by get_concurrent_links.py script.

BUG=612786
==========
https://codereview.chromium.org/2018553003/diff/1/build/toolchain/mac/BUILD.gn
File build/toolchain/mac/BUILD.gn (right):

https://codereview.chromium.org/2018553003/diff/1/build/toolchain/mac/BUILD.g...
build/toolchain/mac/BUILD.gn:29: depth = 20
On 2016/05/31 22:29:44, Dirk Pranke wrote:
> Your desktop machine is probably a lot beefier than the bots, so this is
> probably too high by default. It's probably better to re-use the
> get_concurrent_links.py script to get default values (see, e.g.,
> gcc_toolchain.gni).
>
> You might also want this to be a declare_arg() so that we can tune this on bots
> as needed.
>
> ... which made me just realize that the mac and ios toolchains aren't using
> get_concurrent_links.py either, which is probably bad.

Done.
On 2016/05/31 22:29:44, Dirk Pranke wrote:
> https://codereview.chromium.org/2018553003/diff/1/build/toolchain/mac/BUILD.gn
> File build/toolchain/mac/BUILD.gn (right):
>
> https://codereview.chromium.org/2018553003/diff/1/build/toolchain/mac/BUILD.g...
> build/toolchain/mac/BUILD.gn:29: depth = 20
> Your desktop machine is probably a lot beefier than the bots, so this is
> probably too high by default. It's probably better to re-use the
> get_concurrent_links.py script to get default values (see, e.g.,
> gcc_toolchain.gni).
>
> You might also want this to be a declare_arg() so that we can tune this on bots
> as needed.
>
> ... which made me just realize that the mac and ios toolchains aren't using
> get_concurrent_links.py either, which is probably bad.

Reported as https://bugs.chromium.org/p/chromium/issues/detail?id=616390.
GN has rolled, so can you take another look?
https://codereview.chromium.org/2018553003/diff/40001/build/toolchain/mac/BUI...
File build/toolchain/mac/BUILD.gn (right):

https://codereview.chromium.org/2018553003/diff/40001/build/toolchain/mac/BUI...
build/toolchain/mac/BUILD.gn:39: if (bundle_pool_depth == -1) {
Can we just use the value of concurrent_links here? It seems like adding another build arg for this is overkill.

It also seems like we can just re-use the existing (concurrent_links) value rather than having to call out to the script again. I suppose it's theoretically possible that someone might set concurrent_links=X explicitly and want bundle_pool_depth=Y, but that seems like excessive optimization.
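Concretely, the suggestion replaces a declare_args()-based fallback with a direct reuse of concurrent_links. A hedged GN sketch of the shape under review (variable names beyond bundle_pool_depth and concurrent_links are assumptions):

```gn
# Assumed shape of patch set 3: a dedicated, overridable argument that
# falls back to the link-pool depth when left at its default.
declare_args() {
  # -1 means "use the same depth as the link pool".
  bundle_pool_depth = -1
}

if (bundle_pool_depth == -1) {
  _depth = concurrent_links
} else {
  _depth = bundle_pool_depth
}

pool("bundle_pool") {
  depth = _depth
}
```

The alternative proposed here drops the argument entirely and sets `depth = concurrent_links` directly.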
Description was changed from
==========
[GN] Configure a pool for copy_bundle_data and compile_xcassets tools.

Reduce the number of tasks using the copy_bundle_data and compile_xcassets
tools as they can cause lots of I/O contention when invoking ninja with a
large number of parallel jobs (e.g. when using distributed build like goma).

Make the pool size configurable via "gn args" and defaults to using the
number of tasks returned by get_concurrent_links.py script.

BUG=612786
==========
to
==========
[GN] Configure a pool for copy_bundle_data and compile_xcassets tools.

Reduce the number of tasks using the copy_bundle_data and compile_xcassets
tools as they can cause lots of I/O contention when invoking ninja with a
large number of parallel jobs (e.g. when using distributed build like goma).

Use the same depth as the link_pool (but in a separate pool).

BUG=612786
==========
Please take another look and send to CQ if LGTY.

https://codereview.chromium.org/2018553003/diff/40001/build/toolchain/mac/BUI...
File build/toolchain/mac/BUILD.gn (right):

https://codereview.chromium.org/2018553003/diff/40001/build/toolchain/mac/BUI...
build/toolchain/mac/BUILD.gn:39: if (bundle_pool_depth == -1) {
On 2016/06/07 22:17:46, Dirk Pranke wrote:
> Can we just use the value of concurrent_links here? It seems like adding
> another build arg for this is overkill.
>
> It also seems like we can just re-use the existing (concurrent_links) value
> rather than not having to call out to the script again. I suppose it's
> theoretically possible that someone might set concurrent_links=X explicitly and
> we want bundle_pool_depth=Y, but that seems like excessive optimization.

Done.
The CQ bit was checked by dpranke@chromium.org
lgtm
CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/patch-status/2018553003/60001
Message was sent while issue was closed.
Committed patchset #4 (id:60001)
Message was sent while issue was closed.
Description was changed from
==========
[GN] Configure a pool for copy_bundle_data and compile_xcassets tools.

Reduce the number of tasks using the copy_bundle_data and compile_xcassets
tools as they can cause lots of I/O contention when invoking ninja with a
large number of parallel jobs (e.g. when using distributed build like goma).

Use the same depth as the link_pool (but in a separate pool).

BUG=612786
==========
to
==========
[GN] Configure a pool for copy_bundle_data and compile_xcassets tools.

Reduce the number of tasks using the copy_bundle_data and compile_xcassets
tools as they can cause lots of I/O contention when invoking ninja with a
large number of parallel jobs (e.g. when using distributed build like goma).

Use the same depth as the link_pool (but in a separate pool).

BUG=612786

Committed: https://crrev.com/a78c5b20b9d0d8cb6e883e3efb83dafbc7849655
Cr-Commit-Position: refs/heads/master@{#398585}
==========
Message was sent while issue was closed.
Patchset 4 (id:??) landed as https://crrev.com/a78c5b20b9d0d8cb6e883e3efb83dafbc7849655
Cr-Commit-Position: refs/heads/master@{#398585}