Chromium Code Reviews

Unified Diff: tools/mb/docs/design_spec.md

Issue 1412733015: Clarify how the `analyze` step should work in MB (and elsewhere). (Closed) Base URL: https://chromium.googlesource.com/chromium/src.git@master
Patch Set: update w/ more review feedback, clean up a bit Created 5 years, 1 month ago
Index: tools/mb/docs/design_spec.md
diff --git a/tools/mb/docs/design_spec.md b/tools/mb/docs/design_spec.md
index d6067dfd3c8b4a57166c71c6ba71703777c4a766..55f8c794d2bfb0680c3765094b4e5add67e79be8 100644
--- a/tools/mb/docs/design_spec.md
+++ b/tools/mb/docs/design_spec.md
@@ -53,15 +53,283 @@ See the [user guide](user_guide.md#mb_config.pyl) for details.
The interface to `mb analyze` is described in the
[user\_guide](user_guide.md#mb_analyze).
+The way analyze works can be subtle and complicated (see below).
+
Since the interface basically mirrors the way the "analyze" step on the bots
-invokes gyp\_chromium today, when the config is found to be a gyp config,
+invokes `gyp_chromium` today, when the config is found to be a gyp config,
the arguments are passed straight through.
-It implements the equivalent functionality in GN by calling `'gn refs
+It implements the equivalent functionality in GN by calling `gn refs
[list of files] --type=executable --all --as=output` and filtering the
output to match the list of targets.
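+
+As a rough illustration, here is a minimal sketch of that GN path. This is
+not MB's actual code: the output-directory argument and the exact filtering
+are simplified, and `affected_targets` is an illustrative name:
+
+    # Minimal sketch: which of the requested targets reference the files?
+    import subprocess
+
+    def affected_targets(out_dir, files, targets):
+        cmd = (['gn', 'refs', out_dir] + files +
+               ['--type=executable', '--all', '--as=output'])
+        out = subprocess.check_output(cmd).decode('utf-8')
+        # --as=output prints one output file per affected executable; keep
+        # only the entries that match the targets we were asked about.
+        affected = set(out.splitlines())
+        return [t for t in targets if t in affected]
+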
-## Detailed Design Requirements and Rationale
+## Analyze
+
+The goal of the `analyze` step is to speed up the cycle time of the try servers
+by building and running only the tests affected by the files in a patch, rather
+than everything that might be out of date. Doing this ends up being tricky.
+
+We start with the following requirements and observations:
+
+* In an ideal (un-resource-constrained) world, we would build and test
+ everything affected by a patch, for every patch. This does not
+ necessarily mean that we would build 'all' on every patch (see below).
+
+* In the real world, however, we do not have an infinite number of machines,
+ and try jobs are not infinitely fast, so we need to balance the desire
+ to get maximum test coverage against the desire to have reasonable cycle
+ times, given the number of machines we have.
+
+* Also, since we run most try jobs against tip-of-tree Chromium, by
+ the time one job completes on the bot, new patches have probably landed,
+ rendering the build out of date.
+
+* This means that the next try job may have to do a build that is out of
+ date due to a combination of the files affected by a given patch and
+ files that changed for unrelated reasons. We want to rebuild and test
+ only the targets affected by the patch, so that we don't blame or punish
+ the patch author for unrelated changes.
+
+So:
+
+1. We need a way to indicate which changed files we care about and which
+ we don't (the affected files of a patch).
+
+2. We need to know which tests we might potentially want to run, and how
+ those are mapped onto build targets. For some kinds of tests (like
+ GTest-based tests), the mapping is 1:1 - if you want to run base_unittests,
+ you need to build base_unittests. For others (like the telemetry and
+ layout tests), you might need to build several executables in order to
+ run the tests, and that mapping might best be captured by a *meta*
+ target (a GN group or a GYP 'none' target like `webkit_tests`) that
+ depends on the right list of files. Because the GN and GYP files know
+ nothing about test steps, we have to have some way of mapping back
+ and forth between test steps and build targets. That mapping
+ is *not* currently available to MB (or GN or GYP), and so we have to
+ return enough information to make it possible for the caller to do the
+ mapping.
+
+3. We might also want to know when test targets are affected by data files
+ that aren't compiled (Python scripts, or the layout tests themselves).
+ There's no good way to do this in GYP, but GN supports it.
+
+4. We also want to ensure that particular targets still compile even if they
+ are not actually tested; consider testing the installers themselves, or
+ targets that don't yet have good test coverage. We might want to use meta
+ targets for this purpose as well.
+
+5. However, for some meta targets, we don't necessarily want to rebuild the
+ meta target itself, perhaps just the dependencies of the meta target that
+ are affected by the patch. For example, a meta target like `blink_tests`
+ might depend on ten different test binaries. If a patch only affects one
+ of them (say `wtf_unittests`), you don't want to build `blink_tests`,
+ because that might actually also build the other nine targets. In other
+ words, some meta targets are *prunable*.
+
+6. As noted above, in the ideal case we actually have enough resources and
+ things are fast enough that we can afford to build everything affected by a
+ patch, but listing every possible target explicitly would be painful. The
+ GYP and GN Ninja generators provide an 'all' target that captures (nearly,
+ see [crbug.com/503241](https://crbug.com/503241)) everything, but unfortunately
+ neither GN nor GYP actually represents 'all' as a meta target in the build
+ graph, so we will need to write code to handle that specially.
+
+7. In some cases, we will not be able to correctly analyze the build graph to
+ determine the impact of a patch, and need to bail out (e.g., if you change a
+ build file itself, it may not be easy to tell how that affects the graph).
+ In that case we should simply build and run everything.
+
+The interaction between 2) and 5) means that we need to treat meta targets
+in two different ways, and so we need to know which targets should be
+pruned in the sense of 5) and which targets should be returned unchanged
+so that we can map them back to the appropriate tests.
+
+So, we need three things as input:
+
+* `files`: the list of files in the patch
+* `test_targets`: the list of Ninja targets which, if affected by a patch,
+ should be reported back so that we can map them back to the appropriate
+ tests to run. Any meta targets in this list should *not* be pruned.
+* `additional_compile_targets`: the list of Ninja targets we wish to compile
+ *in addition to* the list in `test_targets`. Any meta targets
+ present in this list should be pruned (we don't need to return the
+ meta targets because they aren't mapped back to tests, and we don't want
+ to build them because we might build too much).
+
+We can then return two lists as output:
+
+* `compile_targets`, which is a list of pruned targets to be
+ passed to Ninja to build. It is acceptable to replace a list of
+ pruned targets by a meta target if it turns out that all of the
+ dependencies of the target are affected by the patch (i.e.,
+ all ten binaries that `blink_tests` depends on), but doing so is
+ not required.
+* `test_targets`, which is a list of unpruned targets to be mapped
+ back to determine which tests to run.
+
+There may be substantial overlap between the two lists, but there is
+no guarantee that one is a subset of the other and the two cannot be
+used interchangeably or merged together without losing information and
+causing the wrong thing to happen.
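+
+To make the contract above concrete, here is a much-simplified sketch of an
+analyze step that follows these rules. It is not MB's actual implementation;
+it assumes the build graph is a plain dict mapping each target to the set of
+files and targets it depends on:
+
+    def _affected(graph, target, files):
+        """True if any file in `files` is in the target's transitive deps."""
+        deps = graph.get(target, set())
+        return bool(deps & files) or any(
+            d in graph and _affected(graph, d, files) for d in deps)
+
+    def _prune(graph, target, files):
+        """Expands an affected meta target into its affected dependencies."""
+        meta_deps = [d for d in graph.get(target, set()) if d in graph]
+        if not meta_deps:
+            return {target}          # a real (leaf) target; keep it as-is
+        pruned = set()
+        for d in meta_deps:
+            if _affected(graph, d, files):
+                pruned |= _prune(graph, d, files)
+        return pruned
+
+    def analyze(graph, files, test_targets, additional_compile_targets):
+        files = set(files)
+        # Unpruned: affected test targets are reported back verbatim so
+        # the caller can map them onto test steps.
+        out_tests = [t for t in test_targets if _affected(graph, t, files)]
+        # Pruned: everything we were asked to compile, expanded through
+        # any affected meta targets.
+        out_compile = set()
+        for t in set(test_targets) | set(additional_compile_targets):
+            if _affected(graph, t, files):
+                out_compile |= _prune(graph, t, files)
+        status = 'Found dependency' if out_compile else 'No dependency'
+        return {'status': status,
+                'compile_targets': sorted(out_compile),
+                'test_targets': sorted(out_tests)}
+
+Meta targets are recognized here simply as targets whose dependencies include
+other targets; real build graphs are, of course, more complicated.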
+
+The implementation is responsible for recognizing 'all' as a magic string
+and mapping it onto the list of all root nodes in the build graph.
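+
+A sketch of that special case, under the same graph-as-dict assumption:
+
+    def expand_all(graph):
+        """Maps 'all' onto the root targets (nothing depends on them)."""
+        depended_on = {d for deps in graph.values()
+                       for d in deps if d in graph}
+        return sorted(t for t in graph if t not in depended_on)
+
+On the example graph in the Examples section below, this returns
+`['base_unittests', 'blink_tests']`.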
+
+There may be files listed in the input that don't actually exist in the build
+graph: this could be either the result of an error (the file should be in the
+build graph, but isn't), or perfectly fine (the file doesn't affect the build
+graph at all). We can't tell these two apart, so we should ignore missing
+files.
+
+There may be targets listed in the input that don't exist in the build
+graph; unlike missing files, this can only indicate a configuration error,
+and so we should return which targets are missing so the caller can
+treat this as an error, if so desired.
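+
+A sketch of both policies (the names and the shape of the return value are
+illustrative only):
+
+    def check_inputs(known_files, known_targets, files, targets):
+        # Files that aren't in the build graph are silently ignored ...
+        usable_files = [f for f in files if f in known_files]
+        # ... but unknown targets are reported back to the caller.
+        missing_targets = [t for t in targets if t not in known_targets]
+        return usable_files, missing_targets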
+
+Any of the three inputs may be an empty list:
+
+* It doesn't make sense to call analyze at all if no files were modified,
+ so this should probably return an error.
+
+* Passing an empty list for one or the other of `test_targets` and
+ `additional_compile_targets` is perfectly sensible: in the former case,
+ it can indicate that you don't want to run any tests, and in the latter,
+ it can indicate that you don't want to build anything else in
+ addition to the test targets.
+
+* It doesn't make sense to call analyze if you don't want to compile
+ anything at all, so passing `[]` for both `test_targets` and
+ `additional_compile_targets` should probably return an error.
+
+In the output case, an empty list indicates that there was nothing to
+build or that there were no affected test targets, as appropriate.
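+
+A sketch of the input rules above (the error strings are illustrative, not
+MB's actual messages):
+
+    def validate(files, test_targets, additional_compile_targets):
+        """Returns an error dict, or None if the inputs make sense."""
+        if not files:
+            return {'error': 'No files were modified'}
+        if not test_targets and not additional_compile_targets:
+            return {'error': 'No compile or test targets were requested'}
+        return None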
+
+Note that passing no arguments to Ninja is equivalent to passing
+`all` to Ninja (at least given how GN and GYP work); however, we
+don't want to take advantage of this in most cases because we don't
+actually want to build every out-of-date target, only the targets
+potentially affected by the files. One could try to indicate
+to analyze that we wanted to use no arguments instead of an empty
+list, but using the existing fields for this seems fragile and/or
+confusing, and adding a new field for this seems unwarranted at this time.
+
+There is an "error" field in case something goes wrong (like the
+empty file list case, above, or an internal error in MB/GYP/GN). The
+analyze code should also return an error code to the shell if appropriate
+to indicate that the command failed.
+
+In the case where build files themselves are modified and analyze may
+not be able to determine a correct answer (point 7 above, where we return
+"Found dependency (all)"), we should also return the `test_targets` unmodified
+and return the union of `test_targets` and `additional_compile_targets`
+(with meta targets pruned, as usual) for `compile_targets`, to avoid confusion.
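+
+A sketch of this bail-out path, again over a graph-as-dict (the status
+string comes from the spec above; everything else is illustrative):
+
+    def bail_out(graph, test_targets, additional_compile_targets):
+        """Analysis is impossible: build everything that was requested."""
+        def real_targets(target):
+            meta_deps = [d for d in graph.get(target, ()) if d in graph]
+            if not meta_deps:
+                return {target}
+            expanded = set()
+            for d in meta_deps:
+                expanded |= real_targets(d)
+            return expanded
+
+        compile_targets = set()
+        for t in set(test_targets) | set(additional_compile_targets):
+            # We can't tell what is affected, so meta targets expand to
+            # all of their (transitive) real targets.
+            compile_targets |= real_targets(t)
+        return {'status': 'Found dependency (all)',
+                'compile_targets': sorted(compile_targets),
+                'test_targets': list(test_targets)}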
+
+### Examples
+
+Continuing the example given above, suppose we have the following build
+graph:
+
+* `blink_tests` is a meta target that depends on `webkit_unit_tests`,
+ `wtf_unittests`, and `webkit_tests` and represents all of the targets
+ needed to fully test Blink. Each of those is a separate test step.
+* `webkit_tests` is also a meta target; it depends on `content_shell`
+ and `image_diff`.
+* `base_unittests` is a separate test binary.
+* `wtf_unittests` depends on `Assertions.cpp` and `AssertionsTest.cpp`.
+* `webkit_unit_tests` depends on `WebNode.cpp` and `WebNodeTest.cpp`.
+* `content_shell` depends on `WebNode.cpp` and `Assertions.cpp`.
+* `base_unittests` depends on `logging.cc` and `logging_unittest.cc`.
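+
+For reference, here is the same graph as a Python dict in the form the
+earlier sketches consume (`image_diff`'s inputs are not spelled out above,
+so they are left empty here):
+
+    GRAPH = {
+        'blink_tests': {'webkit_unit_tests', 'wtf_unittests', 'webkit_tests'},
+        'webkit_tests': {'content_shell', 'image_diff'},
+        'webkit_unit_tests': {'WebNode.cpp', 'WebNodeTest.cpp'},
+        'wtf_unittests': {'Assertions.cpp', 'AssertionsTest.cpp'},
+        'content_shell': {'WebNode.cpp', 'Assertions.cpp'},
+        'image_diff': set(),
+        'base_unittests': {'logging.cc', 'logging_unittest.cc'},
+    }
+
+Running the `analyze` sketch from earlier against `GRAPH` with the inputs of
+Example 1 reproduces the output shown there.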
+
+#### Example 1
+
+We wish to run `wtf_unittests` and `webkit_tests` on a bot, but not
+compile any additional targets.
+
+If a patch touches `WebNode.cpp`, then analyze gets as input:
+
+ {
+ "files": ["WebNode.cpp"],
+ "test_targets": ["wtf_unittests", "webkit_tests"],
+ "additional_compile_targets": []
+ }
+
+and should return as output:
+
+ {
+ "status": "Found dependency",
+      "compile_targets": ["content_shell"],
+ "test_targets": ["webkit_tests"]
+ }
+
+Note how `webkit_tests` was pruned (to its affected dependency,
+`content_shell`) in compile_targets but not in test_targets.
+
+#### Example 2
+
+Using the same patch as Example 1, assume we wish to run only `wtf_unittests`,
+but additionally build everything needed to test Blink (`blink_tests`).
+
+We pass as input:
+
+ {
+ "files": ["WebNode.cpp"],
+ "test_targets": ["wtf_unittests"],
+ "additional_compile_targets": ["blink_tests"]
+ }
+
+And should get as output:
+
+ {
+ "status": "Found dependency",
+      "compile_targets": ["webkit_unit_tests", "content_shell"],
+ "test_targets": []
+ }
+
+Here `blink_tests` was pruned to its affected dependencies in the output
+compile_targets, and test_targets was empty because `wtf_unittests` (the
+only input test target) was not affected by the patch.
+
+#### Example 3
+
+We wish to build everything affected by the patch, but run no tests.
+
+Input:
+
+ {
+ "files": ["WebNode.cpp"],
+ "test_targets": [],
+ "additional_compile_targets": ["all"]
+ }
+
+Output:
+
+ {
+ "status": "Found dependency",
+ "compile_targets": ["webkit_unit_tests", "content_shell"],
+ "test_targets": []
+ }
+
+#### Example 4
+
+Same as Example 2, but a build file was modified instead of a source file.
+
+Input:
+
+ {
+ "files": ["BUILD.gn"],
+ "test_targets": ["wtf_unittests"],
+ "additional_compile_targets": ["blink_tests"]
+ }
+
+Output:
+
+ {
+ "status": "Found dependency (all)",
+      "compile_targets": ["webkit_unit_tests", "wtf_unittests",
+                          "content_shell", "image_diff"],
+ "test_targets": ["wtf_unittests"]
+ }
+
+test_targets was returned unchanged; compile_targets was the union of the two
+input lists, with the meta target `blink_tests` expanded down to real targets,
+since we cannot tell which of them are actually affected.
+
+## Random Requirements and Rationale
This section is a collection of semi-organized notes on why MB is the way
it is ...