Chromium Code Reviews
Side by Side Diff: docs/linux_faster_builds.md

Issue 1324603002: [Docs] Another round of stylistic fixes. (Closed) Base URL: https://chromium.googlesource.com/chromium/src.git@master
Patch Set: Created 5 years, 3 months ago
# Tips for improving build speed on Linux

This list is sorted such that the largest speedup is first; see
[Linux build instructions](linux_build_instructions.md) for context and
[Faster Builds](common_build_tasks.md) for non-Linux-specific techniques.

[TOC]

## Use goma

If you work at Google, you can use goma for distributed builds; this is similar
to [distcc](http://en.wikipedia.org/wiki/Distcc). See [go/ma](http://go/ma) for
documentation.

Even without goma, you can do distributed builds with distcc (if you have access
to other machines), or a parallel build locally if you have multiple cores.

Whether using goma, distcc, or parallel building, you can specify the number of
build processes with `-jX`, where `X` is the number of processes to start.

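The right `-jX` differs between local and distributed builds. A sketch; the
value 100 below is an illustrative guess, not a tuned recommendation:

```shell
# Local-only parallel build: match the number of CPU cores.
ninja -C out/Debug -j"$(nproc)" chrome

# Distributed build (goma/distcc): compile jobs run remotely, so a -j value
# much larger than the local core count keeps the backend busy.
ninja -C out/Debug -j100 chrome
```
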
## Use Icecc

[Icecc](https://github.com/icecc/icecream) is a distributed compiler with a
central scheduler to share build load. Currently, many external contributors
use it, e.g. Intel, Opera, and Samsung.

When you use Icecc, you need to set some gyp variables:

**`linux_use_bundled_binutils=0`**

The `-B` option is not supported.
[relevant commit](https://github.com/icecc/icecream/commit/b2ce5b9cc4bd1900f55c3684214e409fa81e7a92)

**`linux_use_debug_fission=0`**

[debug fission](http://gcc.gnu.org/wiki/DebugFission) is not supported.
[bug](https://github.com/icecc/icecream/issues/86)

**`clang=0`**

Icecc doesn't support clang yet.

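Putting this section together, the variables can be applied when generating the
build files. The icecc wrapper directory below is a common Debian/Ubuntu
location but an assumption here, as is having icecc's daemons already running:

```shell
# Put the icecc compiler wrappers ahead of the real compilers
# (the wrapper directory varies by distribution; adjust to your install).
export PATH=/usr/lib/icecc/bin:$PATH

# gyp variables required for Icecc, from this section.
export GYP_DEFINES="linux_use_bundled_binutils=0 linux_use_debug_fission=0 clang=0"

build/gyp_chromium
ninja -C out/Debug chrome
```
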
## Build only specific targets

If you specify just the target(s) you want built, the build will only walk that
portion of the dependency graph:

```
cd $CHROMIUM_ROOT/src
ninja -C out/Debug base_unittests
```

## Linking

### Dynamically link

We normally statically link everything into one final executable, which
produces enormous files (nearly 1 GB in debug mode). If you dynamically link,
you save a lot of link time in exchange for a bit of extra startup time, which
is especially worthwhile when you're in an edit/compile/test cycle.

Run gyp with the `-Dcomponent=shared_library` flag to put it in this
configuration. (Or set those flags via the `GYP_DEFINES` environment variable.)

e.g.

```
build/gyp_chromium -D component=shared_library
ninja -C out/Debug chrome
```

See the
[component build page](http://www.chromium.org/developers/how-tos/component-build)
for more information.

### Linking using gold

The experimental "gold" linker is much faster than the standard BFD linker.

On some systems (including Debian experimental, Ubuntu Karmic and beyond),
there exists a `binutils-gold` package. Do not install this version! Having
gold as the default linker is known to break kernel / kernel module builds.

The Chrome tree now includes a binary of gold compiled for x64 Linux. It is
used by default on those systems.

On other systems, to safely install gold, make sure the final binary is named
`ld` and then set `CC`/`CXX` appropriately, e.g.
`export CC="gcc -B/usr/local/gold/bin"` and similarly for `CXX`. Alternatively,
you can add `/usr/local/gold/bin` to your `PATH` in front of `/usr/bin`.

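To check whether a given `ld` binary is actually gold, ask it for its version:
gold identifies itself as "GNU gold", the BFD linker as "GNU ld". The path
below is the example install location used above:

```shell
# Prints e.g. "GNU gold (GNU Binutils ...)" for gold,
# or "GNU ld (GNU Binutils ...)" for the BFD linker.
/usr/local/gold/bin/ld --version | head -n 1
```
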
## WebKit

### Build WebKit without debug symbols

WebKit is about half our weight in terms of debug symbols. (Lots of templates!)
If you're working on UI bits where you don't care to trace into WebKit, you can
cut down the size and slowness of debug builds significantly by building WebKit
without debug symbols.

Set the gyp variable `remove_webcore_debug_symbols=1`, either via the
`GYP_DEFINES` environment variable, the `-D` flag to gyp, or by adding the
following to `~/.gyp/include.gypi`:

```
{
  'variables': {
    'remove_webcore_debug_symbols': 1,
  },
}
```

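For reference, the first two ways of setting the variable might look like this
(the `~/.gyp/include.gypi` route is the snippet above); re-run gyp either way
so the setting takes effect:

```shell
# Via the environment:
export GYP_DEFINES="remove_webcore_debug_symbols=1"
build/gyp_chromium

# Or via the -D flag:
build/gyp_chromium -D remove_webcore_debug_symbols=1
```
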
## Tune ccache for multiple working directories

(Ignore this if you use goma.)

Increase your ccache hit rate by setting `CCACHE_BASEDIR` to a parent directory
that the working directories all have in common (e.g.,
`/home/yourusername/development`). Consider using
`CCACHE_SLOPPINESS=include_file_mtime` (since, if you are using multiple
working directories, header timestamps in the svn-synced portions of your trees
will differ; see
[the ccache troubleshooting section](http://ccache.samba.org/manual.html#_troubleshooting)
for additional information). If you use symbolic links from your home directory
to get to the local physical disk directory where you keep those working
development directories, consider putting

```
alias cd="cd -P"
```

in your `.bashrc` so that `$PWD` or `cwd` always refers to a physical, not
logical, directory (and make sure `CCACHE_BASEDIR` also refers to a physical
parent).

If you tune ccache correctly, a second working directory that uses a branch
tracking trunk, is up to date with trunk, and was gclient-synced at about the
same time should build chrome in about 1/3 the time, and the cache misses as
reported by `ccache -s` should barely increase.

This is especially useful if you use `git-new-workdir` and keep multiple local
working directories going at once.

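Put together, a `.bashrc` fragment for this section might look like the
following; the directory is the example path from the text, so substitute your
own:

```shell
# Share one ccache across all checkouts under this physical directory.
export CCACHE_BASEDIR="$HOME/development"

# Tolerate differing header mtimes between checkouts.
export CCACHE_SLOPPINESS=include_file_mtime

# Always resolve symlinks when changing directory, so $PWD stays on the
# physical path under CCACHE_BASEDIR.
alias cd="cd -P"
```
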
## Using tmpfs

You can use tmpfs for the build output to reduce the amount of disk writes
required, i.e. mount a tmpfs on the directory where the build output goes.

As root:

```
mount -t tmpfs -o size=20G,nr_inodes=40k,mode=1777 tmpfs /path/to/out
```

**Caveat:** You need to have enough RAM + swap to back the tmpfs. For a full
debug build, you will need about 20 GB; less for just building the chrome
target or for a release build.

Quick and dirty benchmark numbers on an HP Z600 (Intel Core i7, 16 cores
hyperthreaded, 12 GB RAM):

| Configuration | Build time |
|:--------------|:-----------|
| With tmpfs    | 12m:20s    |
| Without tmpfs | 15m:40s    |
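
If you want the tmpfs recreated at every boot, a matching `/etc/fstab` entry
can be added. This is a sketch mirroring the mount command above, with
`/path/to/out` standing in for your real output directory:

```
# /etc/fstab entry (single line): tmpfs-backed build output directory.
tmpfs /path/to/out tmpfs size=20G,nr_inodes=40k,mode=1777 0 0
```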