Index: tools/mb/docs/design_spec.md
diff --git a/tools/mb/docs/design_spec.md b/tools/mb/docs/design_spec.md
index d6067dfd3c8b4a57166c71c6ba71703777c4a766..11d9f5c0d3ade7218533af393dd9efb68b6d6d33 100644
--- a/tools/mb/docs/design_spec.md
+++ b/tools/mb/docs/design_spec.md
@@ -53,6 +53,8 @@ See the [user guide](user_guide.md#mb_config.pyl) for details.
The interface to `mb analyze` is described in the
[user\_guide](user_guide.md#mb_analyze).
+The way analyze works can be subtle and complicated (see below).
+
Since the interface basically mirrors the way the "analyze" step on the bots
invokes gyp\_chromium today, when the config is found to be a gyp config,
the arguments are passed straight through.
@@ -61,7 +63,244 @@ It implements the equivalent functionality in GN by calling `gn refs
[list of files] --type=executable --all --as=output` and filtering the
output to match the list of targets.
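
That flow can be sketched roughly as follows. The `gn refs` invocation mirrors the command line quoted above; the output parsing and the filtering helper are illustrative assumptions (in reality the target names must be mapped to output file names before comparing), not MB's actual code:

```python
import subprocess

def filter_refs(gn_output_lines, wanted_targets):
    """Keep only the affected outputs that the caller asked about.

    Simplification: assumes target names and output names coincide.
    """
    affected = set(gn_output_lines)
    return sorted(t for t in wanted_targets if t in affected)

def gn_analyze(out_dir, files, wanted_targets):
    """Ask GN which executables are affected by the files, then filter
    the result down to the targets of interest (sketch)."""
    cmd = ["gn", "refs", out_dir] + files + [
        "--type=executable", "--all", "--as=output"]
    proc = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return filter_refs(proc.stdout.splitlines(), wanted_targets)
```
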
-## Detailed Design Requirements and Rationale
+## Analyze
+
+The goal of the `analyze` step is to speed up the cycle time of the try servers
+by only building and running the tests affected by the files in a patch, rather
+than everything that might be out of date. Doing this ends up being tricky.
+
+We start with the following requirements and observations:
+
+* In an ideal (un-resource-constrained) world, we would build and test
+  everything that a patch affected on every patch.
+
+* In the real world, however, we do not have an infinite number of machines,
+  and try jobs are not infinitely fast, so we need to balance the desire
+  to get maximum test coverage against the desire to have reasonable cycle
+  times, given the number of machines we have. In particular, we have a
+  goal that most try jobs should complete in a half hour.
+
+* Also, since we run most try jobs against tip-of-tree Chromium, by
+  the time one job completes on the bot, new patches have probably landed,
+  rendering the build out of date.
+
+* This means that the next try job may have to do a build that is out of
+  date due to a combination of files affected by a given patch, and files
+  affected for unrelated reasons. We want to rebuild and test only the
+  targets affected by the patch, so that we don't blame or punish the
+  patch author for unrelated changes.
+
+So:
+
+1. We need a way to indicate which changed files we care about and which
+  we don't (the affected files of a patch).
+
+2. We need to know which tests we might potentially want to run, and how
+  those are mapped onto build targets. For some kinds of tests (like
+  GTest-based tests), the mapping is 1:1 - if you want to run base_unittests,
+  you need to build base_unittests. For others (like the telemetry and
+  layout tests), you might need to build several executables in order to
+  run the tests, and that mapping might best be captured by a *meta*
+  target (a GN group or a GYP 'none' target like 'webkit_tests') that
+  depends on the right list of files. Because the GN and GYP files know
+  nothing about test steps, we have to have some way of mapping back
+  and forth between test steps and build targets. That mapping
+  is *not* currently available to MB (or GN or GYP), and so we have to
+  provide enough information to make it possible for the caller to do the
+  mapping.
+
+3. We might also want to know when test targets are affected by data files
+  that aren't compiled (python scripts, or the layout tests themselves).
+  There's no good way to do this in GYP, but GN supports it.
+
+4. We also want to ensure that particular targets still compile even if they
+  are not actually tested; consider testing the installers themselves, or
+  targets that don't yet have good test coverage. We need to be able to
+  specify such targets explicitly, and we might want to use meta targets
+  for this purpose as well.
+
+5. However, for some meta targets, we don't necessarily want to rebuild the
+  meta target itself, perhaps just the dependencies of the meta target that
+  are affected by the patch. For example, a meta target like
+  'blink_tests' might depend on ten different binaries. If a patch only
+  affects one of them (say webkit_unittests), you don't want to build
+  blink_tests, because that might actually also build the other nine targets.
+  In other words, some meta targets are *prunable*.
+
+6. In the ideal case we actually have enough resources and things are fast
+  enough that we can afford to build everything, so we need a way to
+  indicate that without requiring us to list every possible target. In
+  other words, we want to support 'ninja all' (or, equivalently, just
+  'ninja'), but neither GN nor GYP has a built-in concept of what 'all'
+  means, so we need to accommodate that.
+
+7. In some cases, we will not be able to correctly analyze the build graph
+  to determine the impact of a patch, and need to bail out (e.g., if you
+  change a build file itself, it may not be easy to tell how that affects
+  the graph). In that case we should simply build and run everything.
+ |
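To make the mapping in point 2 concrete, the caller might hold a table like the sketch below; the step names and target lists are hypothetical examples, not a real bot configuration:

```python
# Hypothetical mapping from test steps to the compile targets they need.
# GTest-style steps map 1:1; suite-style steps map onto a meta target
# or onto several binaries.
TEST_STEP_TO_COMPILE_TARGETS = {
    "base_unittests": ["base_unittests"],
    "webkit_tests": ["webkit_tests"],
    "telemetry_perf_tests": ["chrome", "telemetry_deps"],
}

def compile_targets_for_steps(steps):
    """Compute the union of the compile targets for the given test steps."""
    targets = set()
    for step in steps:
        targets.update(TEST_STEP_TO_COMPILE_TARGETS[step])
    return sorted(targets)
```
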
+The interaction between 2) and 5) means that we need to treat meta targets
+two different ways, and so we need to know which targets should be
+pruned in the sense of 5) and which targets should be returned unchanged
+so that we can map them back to the appropriate tests.
+
+So, we need two things as input:
+
+* the list of files in a patch
+* the list of targets which are potentially out of date
+
+And, for each target that is a meta target (or 'all'), we need to be able
+to tell whether we should treat that target as 'prunable'.
+
+The way we do this is to actually take three lists as input:
+
+* `files`: the list of files in the patch
+* `compile_targets`: the list of ninja targets to compile, determined by
+  computing the union of all of the compile targets for each test and
+  any additional targets that you might wish to compile. Any meta targets
+  present in this list should be pruned.
+* `test_targets`: the list of ninja targets which, if out of date, should
+  be reported back so that we can map them back to the appropriate tests to
+  run. Any meta targets in this list should *not* be pruned.
+
+We can then return two lists as output, a list of pruned targets, and a
+list of unpruned targets.
+
+Note that `compile_targets` and `test_targets` are actually synthesized
+from the list of tests a bot is configured to run (via the mapping of
+test steps to associated compile targets, plus any
+`additional_compile_targets` that are desired).
+
+We also have to specify how to deal with 'all': the implementation is
+responsible for recognizing 'all' as a magic string and mapping it onto
+the list of all root nodes in the build graph.
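
A minimal sketch of that expansion, assuming the build graph is available as a simple target-to-dependencies map (the data structure is an illustrative assumption):

```python
def expand_all(targets, deps):
    """Replace the magic string 'all' with the root nodes of the build
    graph, i.e. the targets that nothing else depends on (sketch)."""
    if "all" not in targets:
        return list(targets)
    depended_on = {d for ds in deps.values() for d in ds}
    roots = sorted(t for t in deps if t not in depended_on)
    return [t for t in targets if t != "all"] + roots
```
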
+ |
+We have to decide how to deal with files that don't actually exist
+in the build graph: this could be either the result of an error
+(the file should be in the build graph, but isn't), or perfectly fine
+(the file doesn't affect the build graph at all). We can't tell these
+two apart, so we should ignore missing files.
+ |
+We have to decide how to deal with targets that don't exist in the build
+graph: unlike missing files, this can only indicate a configuration error,
+and so it should return an error accordingly.
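
Those two rules might be implemented along these lines; `known_files` and `known_targets` stand in for whatever view of the build graph the implementation has (both names are hypothetical):

```python
def validate_inputs(files, targets, known_files, known_targets):
    """Silently drop files the build graph doesn't know about, but treat
    unknown targets as a configuration error (sketch)."""
    usable_files = [f for f in files if f in known_files]
    bad = sorted(t for t in targets if t not in known_targets)
    if bad:
        return usable_files, "unknown targets: " + ", ".join(bad)
    return usable_files, None
```
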
+
+We have to decide how to deal with empty lists for the three
+fields:
+
+* It doesn't make sense to call analyze at all if no files were modified,
+  so this should probably return an error.
+
+* It doesn't make sense to call analyze if you don't want to compile
+  anything at all, but there is a sense in which compile_targets=[]
+  should be treated the same as compile_targets=["all"], since that's
+  what Ninja does, so we will preserve the Ninja interpretation here.
+  This also means that you can get the ideal behavior of "build everything"
+  by not specifying anything, so it's a reasonable default.
+ |
+* On the other hand, it does make sense for test_targets=[]: it might
+  mean that you don't want to run any tests. And, treating []
+  as ["all"] probably doesn't make any sense since no tests actually
+  depend on every single target in the build.
+ |
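Putting the three bullets together, the input handling could look roughly like this (the error string is illustrative):

```python
def check_inputs(files, compile_targets, test_targets):
    """Apply the empty-list rules described above (sketch)."""
    if not files:
        # Calling analyze with no modified files is pointless.
        return None, "no files to analyze"
    if not compile_targets:
        # Preserve Ninja's interpretation: no targets means build 'all'.
        compile_targets = ["all"]
    # test_targets == [] is legitimate: it means "run no tests".
    return (files, compile_targets, test_targets), None
```
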
+We also have an "error" field in case something goes wrong (like the
+empty file list case, above, or an internal error in MB/GYP/GN).
+ |
+In an error case, or in the case where analyze doesn't know what to do, we
+should default to echoing the compile_targets and test_targets back unchanged,
+which will produce a conservative result where we build and run everything,
+just to be safe. In this way even if analyze fails, the failure does not
+have to be fatal.
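
A sketch of that fallback behavior; `run_analyze` is a stand-in for the real analysis, and the field names follow the description above:

```python
def safe_analyze(files, compile_targets, test_targets, run_analyze):
    """Echo the inputs back (build and test everything) if the real
    analysis fails, so a failure is conservative rather than fatal."""
    try:
        return run_analyze(files, compile_targets, test_targets)
    except Exception as e:
        return {
            "compile_targets": compile_targets,
            "test_targets": test_targets,
            "error": str(e),
        }
```
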
+ |
+### Examples
+
+Continuing the example given above, suppose we have the following build
+graph:
+
+* `blink_tests` is a meta target that depends on `webkit_unittests`,
+  `wtf_unittests`, and `webkit_tests` and represents all of the targets
+  needed to fully test Blink. Each of those is a separate test step.
+* `webkit_tests` is also a meta target; it depends on `content_shell`
+  and `image_diff`.
+* `base_unittests` is a separate test binary.
+* `wtf_unittests` depends on `Assertions.cpp` and `AssertionsTest.cpp`.
+* `webkit_unittests` depends on `WebNode.cpp` and `WebNodeTest.cpp`.
+* `content_shell` depends on `WebNode.cpp` and `Assertions.cpp`.
+* `base_unittests` depends on `logging.cc` and `logging_unittest.cc`.
+ |
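The graph above can be written down directly as data. The sketch below encodes it in plain dictionaries and computes which targets a patch affects, expanding meta targets into their affected dependencies; the traversal illustrates the pruning rule described earlier, and is not MB's actual algorithm:

```python
# Source files each leaf target depends on, per the graph above.
SOURCES = {
    "wtf_unittests": ["Assertions.cpp", "AssertionsTest.cpp"],
    "webkit_unittests": ["WebNode.cpp", "WebNodeTest.cpp"],
    "content_shell": ["WebNode.cpp", "Assertions.cpp"],
    "image_diff": [],
    "base_unittests": ["logging.cc", "logging_unittest.cc"],
}

# Meta targets and the targets they depend on.
META = {
    "blink_tests": ["webkit_unittests", "wtf_unittests", "webkit_tests"],
    "webkit_tests": ["content_shell", "image_diff"],
}

def affected(target, files):
    """True if any file in the patch reaches this target."""
    if target in META:
        return any(affected(dep, files) for dep in META[target])
    return any(f in SOURCES[target] for f in files)

def prune(targets, files):
    """Expand affected meta targets into their affected leaf targets."""
    out = set()
    for t in targets:
        if not affected(t, files):
            continue
        if t in META:
            out.update(prune(META[t], files))
        else:
            out.add(t)
    return sorted(out)
```

With `files = ["WebNode.cpp"]`, `prune(["wtf_unittests", "webkit_tests"], files)` returns `["content_shell"]`, matching Example 1 below.
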
+#### Example 1
+
+We wish to run 'wtf_unittests' and 'webkit_tests' on a bot, but not
+compile any additional targets.
+
+If a patch touches WebNode.cpp, then analyze gets as input:
+
+    {
+      "files": ["WebNode.cpp"],
+      "compile_targets": ["wtf_unittests", "webkit_tests"],
+      "test_targets": ["wtf_unittests", "webkit_tests"]
+    }
+ |
+and should return as output:
+
+    {
+      "compile_targets": ["content_shell"],
+      "test_targets": ["webkit_tests"]
+    }
+
+Note how `webkit_tests` was pruned in compile_targets but not in test_targets.
+ |
+#### Example 2
+
+Using the same patch as Example 1, assume we wish to run only `wtf_unittests`,
+but additionally build everything needed to test Blink (`blink_tests`).
+
+We pass as input:
+
+    {
+      "files": ["WebNode.cpp"],
+      "compile_targets": ["wtf_unittests", "blink_tests"],
+      "test_targets": ["wtf_unittests"]
+    }
+ |
+And should get as output:
+
+    {
+      "compile_targets": ["content_shell", "webkit_unittests"],
+      "test_targets": []
+    }
+ |
+Note that even though `wtf_unittests` is a dependency of `blink_tests`,
+we listed both as input. This is fine and to be expected, since the
+bot scripts don't know that one contains the other.
+
+Also, we pruned `blink_tests` for the output compile_targets, but
+test_targets was empty, since blink_tests was not listed in the input
+test_targets.
+ |
+#### Example 3
+
+Suppose we always wish to build everything on the bot, but not run any
+tests.
+
+Input:
+
+    {
+      "files": ["WebNode.cpp"],
+      "compile_targets": [],
+      "test_targets": []
+    }
+ |
+Output:
+
+    {
+      "compile_targets": ["webkit_unittests", "content_shell"],
+      "test_targets": []
+    }
+
+Note here that [] is being treated as ["all"] for compile_targets, but
+not test_targets.
+
+
+## Random Requirements and Rationale
This section is a collection of semi-organized notes on why MB is the way
it is ...