Chromium Code Reviews

Side by Side Diff: tools/mb/docs/design_spec.md

Issue 1412733015: Clarify how the `analyze` step should work in MB (and elsewhere). (Closed) Base URL: https://chromium.googlesource.com/chromium/src.git@master
Patch Set: update w/ more review feedback, clean up a bit Created 5 years, 1 month ago
# The MB (Meta-Build wrapper) design spec

[TOC]

## Intro

MB is intended to address two major aspects of the GYP -> GN transition
for Chromium:

1. "bot toggling" - make it so that we can easily flip a given bot

(...skipping 35 matching lines...)

the master name and builder name (useful on the bots so that they do not need
to specify a config directly and can be hidden from the details).

See the [user guide](user_guide.md#mb_config.pyl) for details.

### Handling the analyze step

The interface to `mb analyze` is described in the
[user\_guide](user_guide.md#mb_analyze).

The way analyze works can be subtle and complicated (see below).

Since the interface basically mirrors the way the "analyze" step on the bots
invokes `gyp_chromium` today, when the config is found to be a gyp config,
the arguments are passed straight through.

It implements the equivalent functionality in GN by calling `gn refs
[list of files] --type=executable --all --as=output` and filtering the
output to match the list of targets.

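A rough sketch, in Python, of the GN path just described (the output
directory argument, the helper name, and the exact output filtering are
illustrative assumptions, not MB's actual code):

    import subprocess

    def affected_executables(out_dir, files, targets):
        # Ask GN which executable outputs depend (transitively) on the files.
        cmd = (['gn', 'refs', out_dir] + files +
               ['--type=executable', '--all', '--as=output'])
        outputs = set(subprocess.check_output(cmd).decode().splitlines())
        # Keep only the targets the caller asked about.
        return [t for t in targets if t in outputs]
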
## Analyze

The goal of the `analyze` step is to speed up the cycle time of the try servers
by only building and running the tests affected by the files in a patch, rather
than everything that might be out of date. Doing this ends up being tricky.

We start with the following requirements and observations:

* In an ideal (un-resource-constrained) world, we would build and test
  everything that a patch affected on every patch. This does not
  necessarily mean that we would build 'all' on every patch (see below).

* In the real world, however, we do not have an infinite number of machines,
  and try jobs are not infinitely fast, so we need to balance the desire
  to get maximum test coverage against the desire to have reasonable cycle
  times, given the number of machines we have.

* Also, since we run most try jobs against tip-of-tree Chromium, by
  the time one job completes on the bot, new patches have probably landed,
  rendering the build out of date.

* This means that the next try job may have to do a build that is out of
  date due to a combination of files affected by a given patch, and files
  affected for unrelated reasons. We want to rebuild and test only the
  targets affected by the patch, so that we don't blame or punish the
  patch author for unrelated changes.

So:

1. We need a way to indicate which changed files we care about and which
   we don't (the affected files of a patch).

2. We need to know which tests we might potentially want to run, and how
   those are mapped onto build targets. For some kinds of tests (like
   GTest-based tests), the mapping is 1:1 - if you want to run base_unittests,
   you need to build base_unittests. For others (like the telemetry and
   layout tests), you might need to build several executables in order to
   run the tests, and that mapping might best be captured by a *meta*
   target (a GN group or a GYP 'none' target like `webkit_tests`) that
   depends on the right list of files. Because the GN and GYP files know
   nothing about test steps, we have to have some way of mapping back
   and forth between test steps and build targets. That mapping
   is *not* currently available to MB (or GN or GYP), and so we have to
   return enough information to make it possible for the caller to do
   the mapping.

3. We might also want to know when test targets are affected by data files
   that aren't compiled (python scripts, or the layout tests themselves).
   There's no good way to do this in GYP, but GN supports this.

4. We also want to ensure that particular targets still compile even if they
   are not actually tested; consider testing the installers themselves, or
   targets that don't yet have good test coverage. We might want to use meta
   targets for this purpose as well.

5. However, for some meta targets, we don't necessarily want to rebuild the
   meta target itself, perhaps just the dependencies of the meta target that
   are affected by the patch. For example, a meta target like `blink_tests`
   might depend on ten different test binaries. If a patch only affects one
   of them (say `wtf_unittests`), you don't want to build `blink_tests`,
   because that might actually also build the other nine targets. In other
   words, some meta targets are *prunable*.

6. As noted above, in the ideal case we actually have enough resources and
   things are fast enough that we can afford to build everything affected by a
   patch, but listing every possible target explicitly would be painful. The
   GYP and GN Ninja generators provide an 'all' target that captures (nearly,
   see [crbug.com/503241](https://crbug.com/503241)) everything, but
   unfortunately neither GN nor GYP actually represents 'all' as a meta
   target in the build graph, so we will need to write code to handle that
   specially.

7. In some cases, we will not be able to correctly analyze the build graph to
   determine the impact of a patch, and we need to bail out (e.g., if you
   change a build file itself, it may not be easy to tell how that affects
   the graph). In that case we should simply build and run everything.

The interaction between 2) and 5) means that we need to treat meta targets
in two different ways, and so we need to know which targets should be
pruned in the sense of 5) and which targets should be returned unchanged
so that we can map them back to the appropriate tests.

So, we need three things as input:

* `files`: the list of files in the patch
* `test_targets`: the list of ninja targets which, if affected by a patch,
  should be reported back so that we can map them back to the appropriate
  tests to run. Any meta targets in this list should *not* be pruned.
* `additional_compile_targets`: the list of ninja targets we wish to compile
  *in addition to* the list in `test_targets`. Any meta targets
  present in this list should be pruned (we don't need to return the
  meta targets because they aren't mapped back to tests, and we don't want
  to build them because we might build too much).

We can then return two lists as output:

* `compile_targets`, which is a list of pruned targets to be
  passed to Ninja to build. It is acceptable to replace a list of
  pruned targets by a meta target if it turns out that all of the
  dependencies of the target are affected by the patch (i.e.,
  all ten binaries that `blink_tests` depends on), but doing so is
  not required.
* `test_targets`, which is a list of unpruned targets to be mapped
  back to determine which tests to run.

There may be substantial overlap between the two lists, but there is
no guarantee that one is a subset of the other and the two cannot be
used interchangeably or merged together without losing information and
causing the wrong thing to happen.

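To make the distinction concrete, here is a rough sketch of how the two
input lists might be treated; the `graph` helper object and its methods are
illustrative assumptions, not MB's actual implementation:

    def analyze(files, test_targets, additional_compile_targets, graph):
        # affected_targets() is assumed to return the set of every target
        # (including meta targets) whose transitive inputs include one of
        # the files; deps() returns a target's transitive dependencies as
        # a set.
        affected = graph.affected_targets(files)

        # test_targets are returned unpruned so that the caller can map
        # them back onto test steps.
        out_test_targets = [t for t in test_targets if t in affected]

        # On the compile side, meta targets are prunable: build only their
        # affected dependencies rather than the meta targets themselves.
        out_compile_targets = set()
        for target in set(test_targets) | set(additional_compile_targets):
            if graph.is_meta(target):
                out_compile_targets |= affected & graph.deps(target)
            elif target in affected:
                out_compile_targets.add(target)

        return {
            'status': 'Found dependency',
            'compile_targets': sorted(out_compile_targets),
            'test_targets': out_test_targets,
        }
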
The implementation is responsible for recognizing 'all' as a magic string
and mapping it onto the list of all root nodes in the build graph.

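For example, a minimal sketch of that mapping (how the root nodes are
obtained is left out; `root_targets` is a hypothetical input):

    def expand_all(targets, root_targets):
        # Replace the magic string 'all' with the root nodes of the graph.
        if 'all' not in targets:
            return list(targets)
        return sorted((set(targets) - {'all'}) | set(root_targets))
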
There may be files listed in the input that don't actually exist in the build
graph: this could be either the result of an error (the file should be in the
build graph, but isn't), or perfectly fine (the file doesn't affect the build
graph at all). We can't tell these two apart, so we should ignore missing
files.

There may be targets listed in the input that don't exist in the build
graph; unlike missing files, this can only indicate a configuration error,
and so we should return which targets are missing so the caller can
treat this as an error, if so desired.

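A small sketch of that policy (again, the `graph` helpers are illustrative
assumptions):

    def check_inputs(files, requested_targets, graph):
        # Silently drop files that aren't in the build graph: we can't tell
        # an error apart from a file that simply doesn't affect the build.
        known_files = [f for f in files if graph.has_file(f)]

        # Unknown targets, by contrast, indicate a configuration error, so
        # collect them and let the caller decide whether to fail.
        missing_targets = [t for t in requested_targets
                           if t != 'all' and not graph.has_target(t)]
        return known_files, missing_targets
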
Any of the three inputs may be an empty list:

* It doesn't make sense to call analyze at all if no files were modified,
  so this should probably return an error.

* Passing an empty list for one or the other of test_targets and
  additional_compile_targets is perfectly sensible: in the former case,
  it can indicate that you don't want to run any tests, and in the latter,
  it can indicate that you don't want to build anything else in
  addition to the test targets.

* It doesn't make sense to call analyze if you don't want to compile
  anything at all, so passing [] for both test_targets and
  additional_compile_targets should probably return an error.

In the output case, an empty list indicates that there was nothing to
build, or that there were no affected test targets as appropriate.

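A minimal sketch of the input validation just described (names are
hypothetical):

    def validate_inputs(files, test_targets, additional_compile_targets):
        # Return an error message, or None if the combination is sensible.
        if not files:
            return 'No files were modified; there is nothing to analyze.'
        if not test_targets and not additional_compile_targets:
            return 'No targets were requested; there is nothing to build.'
        return None  # Either target list (but not both) may be empty.
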
Note that passing no arguments to Ninja is equivalent to passing
`all` to Ninja (at least given how GN and GYP work); however, we
don't want to take advantage of this in most cases because we don't
actually want to build every out-of-date target, only the targets
potentially affected by the files. One could try to indicate
to analyze that we wanted to use no arguments instead of an empty
list, but using the existing fields for this seems fragile and/or
confusing, and adding a new field for this seems unwarranted at this time.

There is an "error" field in case something goes wrong (like the
empty file list case, above, or an internal error in MB/GYP/GN). The
analyze code should also return an error code to the shell if appropriate
to indicate that the command failed.

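For example (how the result is handed back to the caller is an assumption;
only the shape of the "error" field comes from the description above):

    import json
    import sys

    def report_error(message):
        # Emit the "error" field and exit non-zero so the shell sees the
        # command as failed.
        print(json.dumps({'error': message}))
        sys.exit(1)
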
In the case where build files themselves are modified and analyze may
not be able to determine a correct answer (point 7 above, where we return
"Found dependency (all)"), we should also return the `test_targets` unmodified
and return the union of `test_targets` and `additional_compile_targets` for
`compile_targets`, to avoid confusion.

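A sketch of that fallback (illustrative only; as Example 4 below shows, an
implementation may additionally prune meta targets in the returned
`compile_targets`):

    def analyze_unknown_impact(test_targets, additional_compile_targets):
        # We can't tell what the patch affects, so return everything the
        # caller asked about: test_targets unmodified, and the union of
        # both input lists as compile_targets.
        return {
            'status': 'Found dependency (all)',
            'compile_targets': sorted(set(test_targets) |
                                      set(additional_compile_targets)),
            'test_targets': test_targets,
        }
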
### Examples

Continuing the example given above, suppose we have the following build
graph:

* `blink_tests` is a meta target that depends on `webkit_unit_tests`,
  `wtf_unittests`, and `webkit_tests` and represents all of the targets
  needed to fully test Blink. Each of those is a separate test step.
* `webkit_tests` is also a meta target; it depends on `content_shell`
  and `image_diff`.
* `base_unittests` is a separate test binary.
* `wtf_unittests` depends on `Assertions.cpp` and `AssertionsTest.cpp`.
* `webkit_unit_tests` depends on `WebNode.cpp` and `WebNodeTest.cpp`.
* `content_shell` depends on `WebNode.cpp` and `Assertions.cpp`.
* `base_unittests` depends on `logging.cc` and `logging_unittest.cc`.
#### Example 1

We wish to run `wtf_unittests` and `webkit_tests` on a bot, but not
compile any additional targets.

If a patch touches `WebNode.cpp`, then analyze gets as input:

    {
      "files": ["WebNode.cpp"],
      "test_targets": ["wtf_unittests", "webkit_tests"],
      "additional_compile_targets": []
    }

and should return as output:

    {
      "status": "Found dependency",
      "compile_targets": ["webkit_unit_tests"],
      "test_targets": ["webkit_tests"]
    }

Note how `webkit_tests` was pruned in `compile_targets` but not in
`test_targets`.

#### Example 2

Using the same patch as Example 1, assume we wish to run only `wtf_unittests`,
but additionally build everything needed to test Blink (`blink_tests`):

We pass as input:

    {
      "files": ["WebNode.cpp"],
      "test_targets": ["wtf_unittests"],
      "additional_compile_targets": ["blink_tests"]
    }

And should get as output:

    {
      "status": "Found dependency",
      "compile_targets": ["webkit_unit_tests"],
      "test_targets": []
    }

Here `blink_tests` was pruned in the output `compile_targets`, and
`test_targets` was empty, since `blink_tests` was not listed in the input
`test_targets`.

#### Example 3

Build everything, but do not run any tests.

Input:

    {
      "files": ["WebNode.cpp"],
      "test_targets": [],
      "additional_compile_targets": ["all"]
    }

Output:

    {
      "status": "Found dependency",
      "compile_targets": ["webkit_unit_tests", "content_shell"],
      "test_targets": []
    }

#### Example 4

Same as Example 2, but a build file was modified instead of a source file.

Input:

    {
      "files": ["BUILD.gn"],
      "test_targets": ["wtf_unittests"],
      "additional_compile_targets": ["blink_tests"]
    }

Output:

    {
      "status": "Found dependency (all)",
      "compile_targets": ["webkit_unit_tests", "wtf_unittests"],
      "test_targets": ["wtf_unittests"]
    }

`test_targets` was returned unchanged; `compile_targets` was pruned.

## Random Requirements and Rationale

This section is a collection of semi-organized notes on why MB is the way
it is ...

### in-tree or out-of-tree

The first issue is whether or not this should exist as a script in
Chromium at all; an alternative would be to simply change the bot
configurations to know whether to use GYP or GN, and which flags to
pass.

(...skipping 75 matching lines...)

* Some common flags (goma\_dir being the obvious one) may need to be
  specified by the user, and it's unclear how to integrate this with
  the concept of build\_configs.

Right now, MB has hard-coded support for a few flags (e.g., you can
pass the --goma-dir flag, and it will know to expand "${goma\_dir}" in
the string before calling out to the tool). We may want to generalize
this to a common key/value approach (perhaps then meeting the
ChromeOS non-goal, above), or we may want to keep this very strictly
limited for simplicity.
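
A minimal sketch of that expansion (the real substitution code in mb.py may
differ; this only illustrates the idea):

    def expand_goma_dir(args_string, goma_dir):
        # Replace the "${goma_dir}" placeholder in the configured argument
        # string before handing the string to GN or GYP.
        return args_string.replace('${goma_dir}', goma_dir)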