Chromium Code Reviews

Issue 14172018: Add trace-info.json which contains information about perf traces. Also update test-info.json to use… (Closed)

Created:
7 years, 8 months ago by sullivan
Modified:
7 years, 8 months ago
Reviewers:
tonyg
CC:
chromium-reviews, chrome-speed-team+watch_google.com, rcrabb
Visibility:
Public.

Description

Add trace-info.json which contains information about perf traces. Also update test-info.json to use markdown for links.
BUG=
Committed: https://src.chromium.org/viewvc/chrome?view=rev&revision=195284

Patch Set 1 #

Total comments: 25

Patch Set 2 : Addressed review comments #

Unified diffs Side-by-side diffs Delta from patch set Stats (+577 lines, -8 lines) Patch
M tools/perf/test-info.json View 3 chunks +8 lines, -8 lines 0 comments Download
A tools/perf/trace-info.json View 1 1 chunk +569 lines, -0 lines 0 comments Download

Messages

Total messages: 7 (0 generated)
sullivan
7 years, 8 months ago (2013-04-18 02:49:28 UTC) #1
tonyg
lgtm Some nits to fix before landing. https://codereview.chromium.org/14172018/diff/1/tools/perf/trace-info.json File tools/perf/trace-info.json (right): https://codereview.chromium.org/14172018/diff/1/tools/perf/trace-info.json#newcode252 tools/perf/trace-info.json:252: "description": "" ...
7 years, 8 months ago (2013-04-19 18:45:17 UTC) #2
sullivan
Thanks for the comments! Updated and submitting. https://codereview.chromium.org/14172018/diff/1/tools/perf/trace-info.json File tools/perf/trace-info.json (right): https://codereview.chromium.org/14172018/diff/1/tools/perf/trace-info.json#newcode252 tools/perf/trace-info.json:252: "description": "" ...
7 years, 8 months ago (2013-04-19 20:22:30 UTC) #3
commit-bot: I haz the power
CQ is trying da patch. Follow status at https://chromium-status.appspot.com/cq/sullivan@chromium.org/14172018/5001
7 years, 8 months ago (2013-04-19 20:23:17 UTC) #4
sullivan
Committed patchset #2 manually as r195284 (presubmit successful).
7 years, 8 months ago (2013-04-19 21:09:50 UTC) #5
nduca
I'm a bit confused about two things with this: 1. how do we ensure that ...
7 years, 8 months ago (2013-04-22 22:36:56 UTC) #6
sullivan
7 years, 8 months ago (2013-04-23 02:26:36 UTC) #7
Message was sent while issue was closed.
On 2013/04/22 22:36:56, nduca wrote:
> I'm a bit confused about two things with this:
> 1. how do we ensure that checkins to perf/ update this file?

I think in general it's pretty difficult to ensure that people update
documentation when they make changes. Test suites that don't have a description
in test-info.json are hidden from the menus, which is an incentive to add
documentation to tests (we don't hide traces, though). One nice thing about the
separate file is that it's easier to update the documentation in one pass, which
is what seems to happen most often for high-level documentation like the
contents of these files.

> 2. how does an entry in this file relate to the python file?

An entry in test-info.json is the name of a test suite (the "test" part of
"system/test/graph/trace"). For example, "sunspider" or "scrolling_benchmark".
An entry in trace-info.json is the name of a trace (for example,
"V8.MemoryHeapSampleTotalCommitted").

Together, the files contain high-level descriptions of the test suites and of
the individual traces. The dashboard will show these descriptions so that it's
easy to understand what a test is when it regresses. For test suites, the
test-info.json file also lists relevant source code files, and will in the
future list command lines to run the tests.
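As a concrete sketch, an entry in trace-info.json might look roughly like this. The trace name is the one mentioned earlier in this thread; the description text is illustrative, not copied from the actual file:

```json
{
  "V8.MemoryHeapSampleTotalCommitted": {
    "description": "Total committed size of the sampled V8 memory heap."
  }
}
```

An entry in test-info.json would instead be keyed by a suite name such as "sunspider" or "scrolling_benchmark", with a description plus the extra per-suite fields described above (relevant source files, and eventually command lines).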

I'm not really sure what you mean by the Python file. Do you mean
page_cycler.py, robohornetpro.py, etc.? The JSON files contain high-level
information about the test suites we are running. For a lot of the page_cycler
tests, the Python file doesn't really have any sense of what data is in the test
suite, or why we picked that data. For example, page_cycler.py doesn't know
that our new page cyclers will be broken up into fewer languages per suite than
the old intl tests, so that we don't over-exercise the font cache.

> In general, is there a way we can unify this more closely with the benchmark
> system so there's less "these two files have to be kept in sync with one
> another?"

The usual approach to doing this is to use the RESULT lines the test emits:
the code on the buildbots processes those lines, parses out information such as
which traces are "important" and what the units are, and sends it up with the
test results. We didn't think this worked as well for high-level documentation,
for a few reasons:

* Sending up a full description of the test suite and each trace with every
single run creates a lot of extra data to upload and process, especially when
you consider that tests like moz have over 200 traces. I couldn't think of a
good way to have the buildbot "know" it doesn't need to send the documentation
on a given run.
* It seems like parsing buildbot RESULT lines for free-form text like
descriptions could get ugly.
* For page_cyclers, where the code is the same but the test data is different,
where does the description come from?
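For context, here is a rough sketch of the kind of RESULT-line parsing described above. The exact line format ("RESULT graph: trace= value units", with a leading "*" marking an important trace) is an assumption for illustration, not something specified in this review:

```python
import re

# Assumed buildbot output format: "RESULT graph: trace= value units".
# A leading "*" (i.e. "*RESULT ...") is assumed to mark an "important" trace.
RESULT_RE = re.compile(
    r'^(?P<important>\*)?RESULT (?P<graph>[^:]+): (?P<trace>[^=]+)= '
    r'(?P<value>[^ ]+) (?P<units>.+)$')

def parse_result_line(line):
    """Parse one RESULT line into a dict, or return None if it doesn't match."""
    m = RESULT_RE.match(line)
    if not m:
        return None
    return {
        'graph': m.group('graph'),
        'trace': m.group('trace'),
        'value': m.group('value'),
        'units': m.group('units'),
        'important': bool(m.group('important')),
    }
```

This works fine for structured fields like units and importance, but it illustrates the point made above: free-form prose such as a multi-sentence suite description does not fit naturally into a one-line key/value format like this.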

There's also the possibility of having some special format for comments in the
tests with the description. But then we need to make up a special format and
parse it correctly, and also have some kind of glue code that tells the
dashboard where to find the test source code to parse it for the comments.

So we used an approach like UMA histograms, where the high-level descriptions
are all in one file. That said, these files are very new, so if you have a
better idea about how to communicate high-level descriptions of the test suites
and traces to the dashboard, we could definitely switch. It sounds like I am
missing a simpler approach.
