Chromium Code Reviews

| OLD | NEW |
|---|---|
| 1 # The MB (Meta-Build wrapper) design spec | 1 # The MB (Meta-Build wrapper) design spec |
| 2 | 2 |
| 3 [TOC] | 3 [TOC] |
| 4 | 4 |
| 5 ## Intro | 5 ## Intro |
| 6 | 6 |
| 7 MB is intended to address two major aspects of the GYP -> GN transition | 7 MB is intended to address two major aspects of the GYP -> GN transition |
| 8 for Chromium: | 8 for Chromium: |
| 9 | 9 |
| 10 1. "bot toggling" - make it so that we can easily flip a given bot | 10 1. "bot toggling" - make it so that we can easily flip a given bot |
| (...skipping 35 matching lines...) | |
| 46 the master name and builder name (useful on the bots so that they do not need | 46 the master name and builder name (useful on the bots so that they do not need |
| 47 to specify a config directly and can be hidden from the details). | 47 to specify a config directly and can be hidden from the details). |
| 48 | 48 |
| 49 See the [user guide](user_guide.md#mb_config.pyl) for details. | 49 See the [user guide](user_guide.md#mb_config.pyl) for details. |
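As a purely illustrative sketch of the idea (the builder, config, and mixin names below are invented, and the user guide documents the real file format), a config file along these lines maps builders to named configs, and configs to the flags they imply:

    # Hypothetical mb_config.pyl-style fragment. The names below are
    # illustrative only; see the user guide for the real format.
    {
      'masters': {
        'chromium.linux': {
          'Linux GN Builder': 'gn_release_bot',
        },
      },
      'configs': {
        'gn_release_bot': ['gn', 'release_bot'],
      },
      'mixins': {
        'gn': {'type': 'gn'},
        'release_bot': {'gn_args': 'is_debug=false use_goma=true'},
      },
    }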
| 50 | 50 |
| 51 ### Handling the analyze step | 51 ### Handling the analyze step |
| 52 | 52 |
| 53 The interface to `mb analyze` is described in the | 53 The interface to `mb analyze` is described in the |
| 54 [user\_guide](user_guide.md#mb_analyze). | 54 [user\_guide](user_guide.md#mb_analyze). |
| 55 | 55 |
| 56 The way analyze works can be subtle and complicated (see below). | |
| 57 | |
| 56 Since the interface basically mirrors the way the "analyze" step on the bots | 58 Since the interface basically mirrors the way the "analyze" step on the bots |
| 57 invokes gyp\_chromium today, when the config is found to be a gyp config, | 59 invokes `gyp_chromium` today, when the config is found to be a gyp config, |
| 58 the arguments are passed straight through. | 60 the arguments are passed straight through. |
| 59 | 61 |
| 60 It implements the equivalent functionality in GN by calling `'gn refs | 62 It implements the equivalent functionality in GN by calling `gn refs |
| 61 [list of files] --type=executable --all --as=output` and filtering the | 63 [list of files] --type=executable --all --as=output` and filtering the |
| 62 output to match the list of targets. | 64 output to match the list of targets. |
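A minimal sketch of that GN path, assuming a hypothetical helper name and a simplified matching rule (mb.py's real filtering is more involved):

    # Sketch only: ask GN which executable outputs depend on the modified
    # files, then keep the requested targets whose outputs show up in the
    # answer. The helper name, output-directory handling, and matching
    # rule here are assumptions.
    import subprocess

    def affected_test_executables(out_dir, files, targets):
      cmd = (['gn', 'refs', out_dir] + list(files) +
             ['--type=executable', '--all', '--as=output'])
      out = subprocess.check_output(cmd).decode('utf-8')
      outputs = set(line.strip().lstrip('./') for line in out.splitlines())
      return [t for t in targets if t in outputs]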
| 63 | 65 |
| 64 ## Detailed Design Requirements and Rationale | 66 ## Analyze |
| 67 | |
| 68 The goal of the `analyze` step is to speed up the cycle time of the try servers | |
| 69 by only building and running the tests affected by the files in a patch, rather | |
| 70 than everything that might be out of date. Doing this ends up being tricky. | |
| 71 | |
| 72 We start with the following requirements and observations: | |
| 73 | |
| 74 * In an ideal (un-resource-constrained) world, we would build and test | |
| 75 everything that a patch affected on every patch. This does not | |
| 76 necessarily mean that we would build 'all' on every patch (see below). | |
| 77 | |
| 78 * In the real world, however, we do not have an infinite number of machines, | |
| 79 and try jobs are not infinitely fast, so we need to balance the desire | |
| 80 to get maximum test coverage against the desire to have reasonable cycle | |
| 81 times, given the number of machines we have. | |
| 82 | |
| 83 * Also, since we run most try jobs against tip-of-tree Chromium, by | |
| 84 the time one job completes on the bot, new patches have probably landed, | |
| 85 rendering the build out of date. | |
| 86 | |
| 87 * This means that the next try job may have to do a build that is out of | |
| 88 date due to a combination of files affected by a given patch, and files | |
| 89 affected for unrelated reasons. We want to rebuild and test only the | |
| 90 targets affected by the patch, so that we don't blame or punish the | |
| 91 patch author for unrelated changes. | |
| 92 | |
| 93 So: | |
| 94 | |
| 95 1. We need a way to indicate which changed files we care about and which | |
| 96 we don't (the affected files of a patch). | |
| 97 | |
| 98 2. We need to know which tests we might potentially want to run, and how | |
| 99 those are mapped onto build targets. For some kinds of tests (like | |
| 100 GTest-based tests), the mapping is 1:1 - if you want to run base_unittests, | |
| 101 you need to build base_unittests. For others (like the telemetry and | |
| 102 layout tests), you might need to build several executables in order to | |
| 103 run the tests, and that mapping might best be captured by a *meta* | |
| 104 target (a GN group or a GYP 'none' target like `webkit_tests`) that | |
| 105 depends on the right list of files. Because the GN and GYP files know | |
| 106 nothing about test steps, we have to have some way of mapping back | |
| 107 and forth between test steps and build targets. That mapping | |
| 108 is *not* currently available to MB (or GN or GYP), and so we have to | |
| 109 return enough information to make it possible for the caller to do the mapping. | |
| 110 | |
| 111 3. We might also want to know when test targets are affected by data files | |

> **sky** (2015/11/12 16:59:09): nit: remove leading '('?
>
> **Dirk Pranke** (2015/11/12 18:24:12): Will do.

| 112 that aren't compiled (python scripts, or the layout tests themselves). | |
| 113 There's no good way to do this in GYP, but GN supports this. | |
| 114 | |
| 115 4. We also want to ensure that particular targets still compile even if they | |
| 116 are not actually tested; consider testing the installers themselves, or | |
| 117 targets that don't yet have good test coverage. We might want to use meta | |
| 118 targets for this purpose as well. | |
| 119 | |
| 120 5. However, for some meta targets, we don't necessarily want to rebuild the | |
| 121 meta target itself, perhaps just the dependencies of the meta target that | |
| 122 are affected by the patch. For example, a meta target like | |
| 123 `blink_tests` might depend on ten different test binaries. If a patch | |
| 124 only affects one of them (say `wtf_unittests`), you don't want to | |
| 125 build `blink_tests`, because that might actually also build the other nine | |
| 126 targets. In other words, some meta targets are *prunable*. | |
| 127 | |
| 128 6. As noted above, in the ideal case we actually have enough resources and | |
| 129 things are fast enough that we can afford to build everything affected by a | |
| 130 patch, but listing every possible target explicitly would be painful. The | |
| 131 GYP and GN Ninja generators provide an 'all' target that captures (nearly, | |
| 132 see [crbug.com/503241](crbug.com/503241)) everything, but unfortunately | |
| 133 neither GN nor GYP actually represents 'all' as a meta target in the build | |
| 134 graph, so we will need to write code to handle that specially. | |
| 135 | |
| 136 7. In some cases, we will not be able to correctly analyze the build graph to | |
| 137 determine the impact of a patch, and will need to bail out (e.g., if you change a | |
| 138 build file itself, it may not be easy to tell how that affects the graph). | |
| 139 In that case we should simply build and run everything. | |
| 140 | |
| 141 The interaction between 2) and 5) means that we need to treat meta targets | |
| 142 in two different ways, and so we need to know which targets should be | |
| 143 pruned in the sense of 5) and which targets should be returned unchanged | |
| 144 so that we can map them back to the appropriate tests. | |
| 145 | |
| 146 So, we need three things as input: | |
| 147 | |
| 148 * `files`: the list of files in the patch | |
| 149 * `test_targets`: the list of ninja targets which, if affected by the patch, should | |

> **Paweł Hajdan Jr.** (2015/11/12 15:23:03): Wouldn't "affected by the patch" more accurate tha…
>
> **Dirk Pranke** (2015/11/12 18:24:12): You're right, "affected by the patch" is better. W…

| 150 be reported back so that we can map them back to the appropriate tests to | |
| 151 run. Any meta targets in this list should *not* be pruned. | |
| 152 * `additional_compile_targets`: the list of ninja targets we wish to compile | |
| 153 *in addition to* the list in `test_targets`. Any meta targets | |
| 154 present in this list should be pruned (we don't need to return the | |
| 155 meta targets because they aren't mapped back to tests, and we don't want | |
| 156 to build them because we might build too much). | |
| 157 | |
| 158 We can then return two lists as output: | |
| 159 | |
| 160 * `compile_targets`, which is a list of pruned targets to be | |
| 161 passed to Ninja to build. It is acceptable to replace a list of | |
| 162 pruned targets by a meta target if it turns out that all of the | |
| 163 dependencies of the target are affected by the patch (i.e., | |
| 164 all ten binaries that blink_tests depends on), but doing so is | |
| 165 not required. | |
| 166 * `test_targets`, which is a list of unpruned targets to be mapped | |
| 167 back to determine which tests to run. | |
| 168 | |
| 169 There may be substantial overlap between the two lists, but there is | |
| 170 no guarantee that one is a subset of the other and the two cannot be | |
| 171 used interchangeably or merged together without losing information and | |
| 172 causing the wrong thing to happen. | |
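A highly simplified sketch of how the two output lists might be computed from the three inputs; `affected` and `expand` are assumed helper functions, not MB's real code:

    # Sketch only: compute the two output lists from the three inputs.
    # `affected(files)` returns the set of all targets reachable from the
    # modified files, and `expand(t)` returns the leaf dependencies of a
    # meta target (or just [t] for an ordinary target).
    def analyze(files, test_targets, additional_compile_targets,
                affected, expand):
      hit = affected(files)
      # Meta targets in test_targets are *not* pruned, so the caller can
      # still map them back onto test steps.
      out_test_targets = [t for t in test_targets if t in hit]
      # For compile_targets, meta targets *are* pruned down to their
      # affected dependencies so we don't build more than necessary.
      out_compile_targets = set()
      for t in list(test_targets) + list(additional_compile_targets):
        out_compile_targets.update(d for d in expand(t) if d in hit)
      return {
          'status': 'Found dependency',  # error and 'all' handling elided
          'compile_targets': sorted(out_compile_targets),
          'test_targets': out_test_targets,
      }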
| 173 | |
| 174 We also have to specify how to deal with 'all': the implementation is | |
| 175 responsible for recognizing 'all' as a magic string and mapping it onto | |
| 176 the list of all root nodes in the build graph. | |
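For illustration only, assuming the build graph were available as a plain dependency dict (which is not MB's actual data structure), the expansion might look like:

    # Sketch: expand the magic string 'all' into the root nodes of a build
    # graph represented here as {target: [dependencies]}.
    def expand_all(targets, deps):
      if 'all' not in targets:
        return list(targets)
      depended_on = set()
      for children in deps.values():
        depended_on.update(children)
      roots = [t for t in deps if t not in depended_on]
      return [t for t in targets if t != 'all'] + roots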
| 177 | |
| 178 We have to decide how to deal with files that don't actually exist | |
| 179 in the build graph: this could be either the result of an error | |
| 180 (the file should be in the build graph, but isn't), or perfectly fine | |
| 181 (the file doesn't affect the build graph at all). We can't tell these | |
| 182 two apart, so we should ignore missing files. | |
| 183 | |
| 184 We have to decide how to deal with targets that don't exist in the build | |
| 185 graph: unlike missing files, this can only indicate a configuration error, | |
| 186 and so we should return which targets are missing so the caller can | |
| 187 treat this as an error, if so desired. | |
| 188 | |
| 189 We have to decide how to deal with empty lists for the three | |
| 190 fields: | |
| 191 | |
| 192 * It doesn't make sense to call analyze at all if no files were modified, | |
| 193 so this should probably return an error. | |
| 194 | |
| 195 * Passing an empty list for one or the other of test_targets and | |
| 196 additional_compile_targets is perfectly sensible: in the former case, | |
| 197 it can indicate that you don't want to run any tests, and in the latter, | |
| 198 it can indicate that you don't want to build anything else in | |
| 199 addition to the test targets. | |
| 200 | |
| 201 * It doesn't make sense to call analyze if you don't want to compile | |
| 202 anything at all, so passing [] for both test_targets and | |
| 203 additional_compile_targets should probably return an error. | |
| 204 | |
| 205 In the output case, an empty list indicates that there was nothing to | |
| 206 build, or that there were no affected test targets as appropriate. | |
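A small sketch of how those empty-list rules could be enforced (the function name and error strings are illustrative only):

    # Sketch: enforce the empty-list rules described above.
    def validate_analyze_input(inp):
      errors = []
      if not inp.get('files'):
        errors.append('No files were listed as modified.')
      if (not inp.get('test_targets') and
          not inp.get('additional_compile_targets')):
        errors.append('No test or compile targets were given.')
      return errors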
| 207 | |
| 208 Note that passing no arguments to Ninja is equivalent to passing | |
| 209 `all` to Ninja (at least given how GN and GYP work); however, we | |
| 210 don't want to take advantage of this in most cases because we don't | |
| 211 actually want to build every out of date target, only the targets | |
| 212 potentially affected by the files. One could try to indicate | |
| 213 to analyze that we wanted to use no arguments instead of an empty | |
| 214 list, but using the existing fields for this seems fragile and/or | |
| 215 confusing, and adding a new field for this seems unwarranted at this time. | |
| 216 | |
| 217 We also have an "error" field in case something goes wrong (like the | |
| 218 empty file list case, above, or an internal error in MB/GYP/GN). The | |
| 219 analyze code should also return an error code to the shell if appropriate | |
| 220 to indicate that the command failed. | |
| 221 | |
| 222 In the case where build files themselves are modified and analyze may | |

> **sky** (2015/11/12 16:59:09): There are actually two different cases for analyze…
>
> **Dirk Pranke** (2015/11/12 18:24:12): In GN you can't easily distinguish the two cases,…

| 223 not be able to determine a correct answer (point 7 above, where we return | |
| 224 "Found dependency (all)"), we should also return the `test_targets` unmodified | |
| 225 and return the union of `test_targets` and `additional_compile_targets` for | |
| 226 `compile_targets`, to avoid confusion. | |
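A sketch of that fallback, following the rule as stated above (any pruning of meta targets in the union is elided here, and the helper name is an assumption):

    # Sketch: when the analysis cannot be trusted (e.g. a build file itself
    # changed), return the test targets unmodified and compile everything
    # that was asked for.
    def bail_out_to_all(test_targets, additional_compile_targets):
      return {
          'status': 'Found dependency (all)',
          'compile_targets': sorted(set(test_targets) |
                                    set(additional_compile_targets)),
          'test_targets': list(test_targets),
      }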
| 227 | |
| 228 ### Examples | |
| 229 | |
| 230 Continuing the example given above, suppose we have the following build | |
| 231 graph: | |
| 232 | |
| 233 * `blink_tests` is a meta target that depends on `webkit_unit_tests`, | |
| 234 `wtf_unittests`, and `webkit_tests` and represents all of the targets | |
| 235 needed to fully test Blink. Each of those is a separate test step. | |
| 236 * `webkit_tests` is also a meta target; it depends on `content_shell` | |
| 237 and `image_diff`. | |
| 238 * `base_unittests` is a separate test binary. | |
| 239 * `wtf_unittests` depends on `Assertions.cpp` and `AssertionsTest.cpp`. | |
| 240 * `webkit_unit_tests` depends on `WebNode.cpp` and `WebNodeTest.cpp`. | |
| 241 * `content_shell` depends on `WebNode.cpp` and `Assertions.cpp`. | |
| 242 * `base_unittests` depends on `logging.cc` and `logging_unittest.cc`. | |
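For working through the examples, the same graph can be written down as a simple dependency mapping (illustrative only; this is not how GN or GYP store the graph):

    # The example dependency graph from the list above, as a plain dict.
    DEPS = {
        'blink_tests': ['webkit_unit_tests', 'wtf_unittests', 'webkit_tests'],
        'webkit_tests': ['content_shell', 'image_diff'],
        'wtf_unittests': ['Assertions.cpp', 'AssertionsTest.cpp'],
        'webkit_unit_tests': ['WebNode.cpp', 'WebNodeTest.cpp'],
        'content_shell': ['WebNode.cpp', 'Assertions.cpp'],
        'base_unittests': ['logging.cc', 'logging_unittest.cc'],
    }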
| 243 | |
| 244 #### Example 1 | |
| 245 | |
| 246 We wish to run 'wtf_unittests' and 'webkit_tests' on a bot, but not | |
| 247 compile any additional targets. | |
| 248 | |
| 249 If a patch touches WebNode.cpp, then analyze gets as input: | |
| 250 | |
| 251 { | |
| 252 "files": ["WebNode.cpp"], | |
| 253 "test_targets": ["wtf_unittests", "webkit_tests"], | |
| 254 "additional_compile_targets": [] | |
| 255 } | |
| 256 | |
| 257 and should return as output: | |
| 258 | |
| 259 { | |
| 260 "status": "Found dependency", | |
| 261 "compile_targets": ["webkit_unit_tests"], | |
| 262 "test_targets": ["webkit_tests"] | |
| 263 } | |
| 264 | |
| 265 Note how `webkit_tests` was pruned in compile_targets but not in test_targets. | |
| 266 | |
| 267 #### Example 2 | |
| 268 | |
| 269 Using the same patch as Example 1, assume we wish to run only `wtf_unittests`, | |
| 270 but additionally build everything needed to test Blink (`blink_tests`): | |
| 271 | |
| 272 We pass as input: | |
| 273 | |
| 274 { | |
| 275 "files": ["WebNode.cpp"], | |
| 276 "test_targets": ["wtf_unittests"], | |
| 277 "additional_compile_targets": ["blink_tests"] | |
| 278 } | |
| 279 | |
| 280 And should get as output: | |
| 281 | |
| 282 { | |
| 283 "status": "Found dependency", | |
| 284 "compile_targets": ["webkit_unit_tests"], | |
| 285 "test_targets": [] | |
| 286 } | |
| 287 | |
| 288 Here `blink_tests` was pruned in the output compile_targets, and | |
| 289 test_targets was empty, since blink_tests was not listed in the input | |
| 290 test_targets. | |
| 291 | |
| 292 #### Example 3 | |
| 293 | |
| 294 Build everything, but do not run any tests. | |
| 295 | |
| 296 Input: | |
| 297 | |
| 298 { | |
| 299 "files": ["WebNode.cpp"], | |
| 300 "test_targets": [], | |
| 301 "additional_compile_targets": ["all"] | |
| 302 } | |
| 303 | |
| 304 Output: | |
| 305 | |
| 306 { | |
| 307 "status": "Found dependency", | |
| 308 "compile_targets": ["webkit_unit_tests", "content_shell"], | |
| 309 "test_targets": [] | |
| 310 } | |
| 311 | |
| 312 #### Example 4 | |
| 313 | |
| 314 Same as Example 2, but a build file was modified instead of a source file. | |
| 315 | |
| 316 Input: | |
| 317 | |
| 318 { | |
| 319 "files": ["BUILD.gn"], | |
| 320 "test_targets": ["wtf_unittests"], | |
| 321 "additional_compile_targets": ["blink_tests"] | |
| 322 } | |
| 323 | |
| 324 Output: | |
| 325 | |
| 326 { | |
| 327 "status": "Found dependency (all)", | |
| 328 "compile_targets": ["webkit_unit_tests", "wtf_unittests"], | |
| 329 "test_targets": ["wtf_unittests"] | |
| 330 } | |
| 331 | |
| 332 test_targets was returned unchanged, compile_targets was pruned. | |
| 333 | |
| 334 ## Random Requirements and Rationale | |
| 65 | 335 |
| 66 This section is a collection of semi-organized notes on why MB is the way | 336 This section is a collection of semi-organized notes on why MB is the way |
| 67 it is ... | 337 it is ... |
| 68 | 338 |
| 69 ### in-tree or out-of-tree | 339 ### in-tree or out-of-tree |
| 70 | 340 |
| 71 The first issue is whether or not this should exist as a script in | 341 The first issue is whether or not this should exist as a script in |
| 72 Chromium at all; an alternative would be to simply change the bot | 342 Chromium at all; an alternative would be to simply change the bot |
| 73 configurations to know whether to use GYP or GN, and which flags to | 343 configurations to know whether to use GYP or GN, and which flags to |
| 74 pass. | 344 pass. |
| (...skipping 75 matching lines...) | |
| 150 * Some common flags (goma\_dir being the obvious one) may need to be | 420 * Some common flags (goma\_dir being the obvious one) may need to be |
| 151 specified by the user, and it's unclear how to integrate this with | 421 specified by the user, and it's unclear how to integrate this with |
| 152 the concept of build\_configs. | 422 the concept of build\_configs. |
| 153 | 423 |
| 154 Right now, MB has hard-coded support for a few flags (e.g., you can | 424 Right now, MB has hard-coded support for a few flags (e.g., you can |
| 155 pass the --goma-dir flag, and it will know to expand "${goma\_dir}" in | 425 pass the --goma-dir flag, and it will know to expand "${goma\_dir}" in |
| 156 the string before calling out to the tool). We may want to generalize | 426 the string before calling out to the tool). We may want to generalize |
| 157 this to a common key/value approach (perhaps then meeting the | 427 this to a common key/value approach (perhaps then meeting the |
| 158 ChromeOS non-goal, above), or we may want to keep this very strictly | 428 ChromeOS non-goal, above), or we may want to keep this very strictly |
| 159 limited for simplicity. | 429 limited for simplicity. |
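For instance, the expansion amounts to little more than a string substitution; a sketch (not the exact mb.py code):

    # Sketch: substitute "${goma_dir}" into the configured args string
    # before handing it to GN or GYP.
    def expand_goma_dir(args_str, goma_dir):
      return args_str.replace('${goma_dir}', goma_dir)

    # expand_goma_dir('use_goma=true goma_dir="${goma_dir}"', '/b/goma')
    #   -> 'use_goma=true goma_dir="/b/goma"'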