Created: 8 years, 4 months ago by Dai Mikurube (NOT FULLTIME)
Modified: 8 years, 3 months ago
CC: chromium-reviews, dennis_jeffrey, anantha, dyu1, Nirnimesh
Base URL: svn://svn.chromium.org/chrome/trunk/src
Visibility: Public.
Description: Allow multiple performance values in OutputPerfForStandaloneGraphing.
BUG=122119
Committed: http://src.chromium.org/viewvc/chrome?view=rev&revision=154247
Patch Set 1 #
Total comments: 9
Patch Set 2 : fixed the comments. #

Messages
Total messages: 14 (0 generated)
Hi Nirnimesh,

Sorry for my many review requests. Could you take a look? It allows multiple performance values in _OutputPerfForStandaloneGraphing. In other words, it dumps JSON like the following:

{'rev': 1, 'traces': { 'DMP-TCMallocUsed': [['0', '2956.93'], ['20', '6547.11']], 'DMP-TCMallocUnused': [['0', '35.22'], ['20', '12039.21']] }}

by calling _OutputPerfGraphValue with multiple |description| values. My approach may be wrong; I didn't fully understand why we need seen_key and self._seen_graph_lines...
FYI, an example of multiple performance values is at http://codereview.chromium.org/10871038/.
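For reference, here is a minimal sketch of the one-line-per-revision JSON structure shown above. `build_graph_json` is a hypothetical helper for illustration only, not a function in perf.py:

```python
import json

def build_graph_json(revision, traces):
    # One standalone-graphing line: a revision number plus a dict mapping
    # each trace name to a list of [x, y] pairs (stored as strings).
    return {'rev': revision, 'traces': traces}

line = build_graph_json(1, {
    'DMP-TCMallocUsed': [['0', '2956.93'], ['20', '6547.11']],
    'DMP-TCMallocUnused': [['0', '35.22'], ['20', '12039.21']],
})
print(json.dumps(line, sort_keys=True))
```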
I'd suggest waiting for Dennis to look at this CL. Unfortunately he's OOO for the rest of the week.
On 2012/08/23 17:44:13, Nirnimesh wrote:
> I'd suggest waiting for Dennis to look at this CL.
> Unfortunately he's OOO for the rest of the week.

Makes sense. OK, I'm waiting for Dennis.
LGTM with a couple of nits to address before submitting, and a longer FYI (no response needed for that).

I stared at this change for a while and I think it looks fine. It makes it possible to associate multiple traces (lines) with a single graph (which admittedly I did not have in mind when I originally wrote the code), and I don't think it breaks any existing functionality. If I find out later that it does, then we can fix it at that time :-)

http://codereview.chromium.org/10870039/diff/1/chrome/test/functional/perf.py
File chrome/test/functional/perf.py (right):

http://codereview.chromium.org/10870039/diff/1/chrome/test/functional/perf.py...
chrome/test/functional/perf.py:328: # We assume that the first line |existing_line[0]| is the latest.
nit: existing_line --> existing_lines

http://codereview.chromium.org/10870039/diff/1/chrome/test/functional/perf.py...
chrome/test/functional/perf.py:329: # We update only the first line.
I recommend removing this line. There's a case where we may not actually update the first line: when we're creating a new line for the first time during a test run.

http://codereview.chromium.org/10870039/diff/1/chrome/test/functional/perf.py...
chrome/test/functional/perf.py:350: if seen_key in self._seen_graph_lines:
FYI: The reason for "seen_key" and "self._seen_graph_lines" is to allow long-running tests (like Chrome Endure tests) to continually invoke the current function (OutputPerfForStandaloneGraphing) during a test run, and each time the function is invoked, *append* new results to an existing results list.

For example, suppose we're periodically measuring DOM node count during a long test run. The very first time we invoke OutputPerfForStandaloneGraphing, we should create a new JSON line for the output file. But each time we subsequently invoke OutputPerfForStandaloneGraphing for the same metric in the same test run, we need to know that we have to *append* to the existing JSON line, rather than creating a new JSON line. So that's why I associate each metric with a unique key (called "seen_key"), and I store all the keys we've already seen in the dictionary "self._seen_graph_lines".
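The append-or-create bookkeeping Dennis describes can be sketched roughly as follows. This is a simplified, hypothetical reconstruction based only on the description above; the real OutputPerfForStandaloneGraphing also reads and rewrites a -summary.dat file, which is elided here, and the class and method names are invented for the sketch:

```python
class StandaloneGraphingOutput(object):
    """Simplified, hypothetical model of the seen_key bookkeeping."""

    def __init__(self):
        # Keys (graph_name, trace_name) already emitted during this test
        # run, mapped to the points list we must keep appending to.
        self._seen_graph_lines = {}
        # One dict per graph, as it would appear as a JSON line on disk.
        self._lines = {}

    def output(self, graph_name, trace_name, x, value, revision):
        seen_key = (graph_name, trace_name)
        if seen_key in self._seen_graph_lines:
            # Seen earlier in this run: append to the existing trace.
            self._seen_graph_lines[seen_key].append([str(x), str(value)])
            return
        # First time for this metric in this run: create (or extend) the
        # JSON line for this graph, then remember the trace's points list.
        line = self._lines.setdefault(graph_name,
                                      {'rev': revision, 'traces': {}})
        points = line['traces'].setdefault(trace_name, [])
        points.append([str(x), str(value)])
        self._seen_graph_lines[seen_key] = points

    def line_for(self, graph_name):
        return self._lines[graph_name]
```

Because the stored points list is shared between `_seen_graph_lines` and the graph's line, later appends for the same metric land in the same JSON line rather than creating a new one.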
Thanks for looking at it, Dennis. :) I'll be committing the change.

http://codereview.chromium.org/10870039/diff/1/chrome/test/functional/perf.py
File chrome/test/functional/perf.py (right):

http://codereview.chromium.org/10870039/diff/1/chrome/test/functional/perf.py...
chrome/test/functional/perf.py:328: # We assume that the first line |existing_line[0]| is the latest.
On 2012/08/29 18:47:51, dennis_jeffrey wrote:
> nit: existing_line --> existing_lines

Done.

http://codereview.chromium.org/10870039/diff/1/chrome/test/functional/perf.py...
chrome/test/functional/perf.py:329: # We update only the first line.
On 2012/08/29 18:47:51, dennis_jeffrey wrote:
> I recommend removing this line. There's a case where we may not actually
> update the first line, and that's when we're creating a new line for the
> first time during a test run.

Removed. To write it correctly, would it be "We insert a new line at the top, or update the first line"?

http://codereview.chromium.org/10870039/diff/1/chrome/test/functional/perf.py...
chrome/test/functional/perf.py:350: if seen_key in self._seen_graph_lines:
On 2012/08/29 18:47:51, dennis_jeffrey wrote:
> FYI: The reason for "seen_key" and "self._seen_graph_lines" is to allow
> long-running tests (like Chrome Endure tests) to continually invoke the
> current function (OutputPerfForStandaloneGraphing) during a test run, and
> each time the function is invoked, *append* new results to an existing
> results list.
>
> For example, suppose we're periodically measuring DOM node count during a
> long test run. The very first time we invoke OutputPerfForStandaloneGraphing,
> we should create a new JSON line for the output file. But each time we
> subsequently invoke OutputPerfForStandaloneGraphing for the same metric in
> the same test run, we need to know that we have to *append* to the existing
> JSON line, rather than creating a new JSON line. So that's why I associate
> each metric with a unique key (called "seen_key"), and I store all the keys
> we've already seen in the dictionary "self._seen_graph_lines".

Thanks for the description. Hmm, my question is: isn't it enough to compare |revision| with the top line in -summary.dat?

I guess it would have the same effect: if seen_key is found, the data has already been appended to existing_lines[0].
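The simplification Dai asks about (keying on the revision of the newest existing line instead of a separate seen-key set) could look roughly like this. It is a speculative sketch of the review question, not code from perf.py, and `append_or_create` is an invented name:

```python
def append_or_create(existing_lines, revision, trace_name, point):
    # If the newest line already carries the current revision, assume this
    # test run created it and append to it; otherwise insert a fresh line
    # at the top. (Whether this is really equivalent to the seen_key check
    # is exactly the open question in the review.)
    if existing_lines and existing_lines[0]['rev'] == revision:
        traces = existing_lines[0]['traces']
        traces.setdefault(trace_name, []).append(point)
    else:
        existing_lines.insert(0, {'rev': revision,
                                  'traces': {trace_name: [point]}})
    return existing_lines
```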
CQ is trying da patch. Follow status at https://chromium-status.appspot.com/cq/dmikurube@chromium.org/10870039/7001
Try job failure for 10870039-7001 (retry) on win for step "compile" (clobber build). It's a second try, previously, step "compile" failed. http://build.chromium.org/p/tryserver.chromium/buildstatus?builder=win&number...
CQ is trying da patch. Follow status at https://chromium-status.appspot.com/cq/dmikurube@chromium.org/10870039/7001
Try job failure for 10870039-7001 (retry) on linux_chromeos for step "browser_tests". It's a second try, previously, step "browser_tests" failed. http://build.chromium.org/p/tryserver.chromium/buildstatus?builder=linux_chro...
A few replies to your questions. I'll go ahead and click the commit box again for you -- looks like one of the try bots failed last time in the CQ, and I'm pretty sure it's just a flake.

http://codereview.chromium.org/10870039/diff/1/chrome/test/functional/perf.py
File chrome/test/functional/perf.py (right):

http://codereview.chromium.org/10870039/diff/1/chrome/test/functional/perf.py...
chrome/test/functional/perf.py:329: # We update only the first line.
On 2012/08/30 02:24:11, Dai Mikurube wrote:
> On 2012/08/29 18:47:51, dennis_jeffrey wrote:
> > I recommend removing this line. There's a case where we may not actually
> > update the first line, and that's when we're creating a new line for the
> > first time during a test run.
>
> Removed. To write it correctly, would it be "We insert a new line at the top,
> or update the first line"?

Yes, I believe that would be a more accurate statement. But I think it's also OK to just leave out that comment.

http://codereview.chromium.org/10870039/diff/1/chrome/test/functional/perf.py...
chrome/test/functional/perf.py:350: if seen_key in self._seen_graph_lines:
On 2012/08/30 02:24:11, Dai Mikurube wrote:
> On 2012/08/29 18:47:51, dennis_jeffrey wrote:
> > [explanation of seen_key and self._seen_graph_lines snipped]
>
> Thanks for the description. Hmm, my question is: isn't it enough to compare
> |revision| with the top line in -summary.dat?
>
> I guess it would have the same effect: if seen_key is found, the data has
> already been appended to existing_lines[0].

I am not totally clear on exactly what you mean, but you may be correct that we can simplify the code here. If you'd like, feel free to make the change itself and send me a new code review, and then I can check it out. By looking at the code it'll be easier for me to see exactly what you'd like to do :-)
CQ is trying da patch. Follow status at https://chromium-status.appspot.com/cq/dmikurube@chromium.org/10870039/7001
Change committed as 154247
Thanks for checking "Commit", Dennis!

http://codereview.chromium.org/10870039/diff/1/chrome/test/functional/perf.py
File chrome/test/functional/perf.py (right):

http://codereview.chromium.org/10870039/diff/1/chrome/test/functional/perf.py...
chrome/test/functional/perf.py:350: if seen_key in self._seen_graph_lines:
On 2012/08/30 17:13:23, dennis_jeffrey wrote:
> [earlier discussion of seen_key and self._seen_graph_lines snipped]
>
> I am not totally clear on exactly what you mean, but you may be correct that
> we can simplify the code here. If you'd like, feel free to make the change
> itself and send me a new code review, and then I can check it out. By looking
> at the code it'll be easier for me to see exactly what you'd like to do :-)

OK, I'll try it when I have time. Thanks. :)