Description
Add startup metrics that measure the performance of the first web contents.
I added two metrics: "Startup.FirstWebContents.MainFrameLoad" and
"Startup.FirstWebContents.NonEmptyPaint".
BUG=436152
Committed: https://crrev.com/c2cec4c53bd674b77a9cfdb662dd38a7dfff5347
Cr-Commit-Position: refs/heads/master@{#306884}
Patch Set 1 : #
Total comments: 8
Patch Set 2 : Comments from isherman. #
Total comments: 10
Patch Set 3 : Improve documentation. #
Total comments: 1
Patch Set 4 : Rebase against top of tree. #
Patch Set 5 : Comments from asvitkine, rkaplow. #
Total comments: 17
Patch Set 6 : Comments from asvitkine. #
Total comments: 2
Patch Set 7 : Comments from asvitkine, round two. #
Patch Set 8 : Don't define FirstWebContentsProfiler on Android. Rebase against top of tree. #
Messages
Total messages: 43 (17 generated)
Patchset #1 (id:1) has been deleted
Patchset #1 (id:20001) has been deleted
erikchen@chromium.org changed reviewers: + isherman@chromium.org
isherman: Please review. I am intentionally using a high bucket count for the histograms because I want more granularity.
https://codereview.chromium.org/760763002/diff/40001/chrome/browser/metrics/c...
File chrome/browser/metrics/chrome_browser_main_extra_parts_metrics.cc (right):

https://codereview.chromium.org/760763002/diff/40001/chrome/browser/metrics/c...
chrome/browser/metrics/chrome_browser_main_extra_parts_metrics.cc:253: base::TimeDelta::FromMinutes(2), 300);
Could you clarify why you need such fine granularity? Jim tuned the default numbers pretty carefully based on the needs of the networking team, and they're pretty picky about their latency measurements. What do you know that they don't?

https://codereview.chromium.org/760763002/diff/40001/chrome/browser/metrics/c...
chrome/browser/metrics/chrome_browser_main_extra_parts_metrics.cc:298: };
Please split this class out into its own implementation files. Even better would be not to dangle it off of ChromeBrowserMainExtraPartsMetrics at all -- I'd really prefer this file not to become too much of a dumping ground -- but if there's not a better place to put it, that's okay.

https://codereview.chromium.org/760763002/diff/40001/chrome/browser/metrics/c...
chrome/browser/metrics/chrome_browser_main_extra_parts_metrics.cc:354: }
I think this logic could safely be internal to the FirstWebContentsProfiler class.

https://codereview.chromium.org/760763002/diff/40001/chrome/browser/metrics/c...
File chrome/browser/metrics/chrome_browser_main_extra_parts_metrics.h (right):

https://codereview.chromium.org/760763002/diff/40001/chrome/browser/metrics/c...
chrome/browser/metrics/chrome_browser_main_extra_parts_metrics.h:49: void FirstWebContentsProfilerWantsDestruction();
This feels pretty hacky. I wonder if it would be cleaner to just have the FirstWebContentsProfiler own its own memory, i.e. call "delete this" when it currently calls FirstWebContentsProfilerWantsDestruction().
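For context, the diff excerpt at line 253 looks like the tail of a custom-bucket timer macro. A minimal sketch of that kind of call, assuming UMA_HISTOGRAM_CUSTOM_TIMES and illustrative bounds (the histogram name and the 10 ms minimum here are assumptions, not the CL's exact code):

    // UMA macros; the header location varies by Chromium version
    // (base/metrics/histogram.h in trees of this era).
    #include "base/metrics/histogram.h"
    #include "base/time/time.h"

    void RecordMainFrameLoadTime(base::TimeDelta elapsed) {
      // 300 exponentially spaced buckets between 10 ms and 2 minutes,
      // versus the 50-bucket default used by UMA_HISTOGRAM_TIMES.
      UMA_HISTOGRAM_CUSTOM_TIMES("Startup.FirstWebContents.MainFrameLoad",
                                 elapsed,
                                 base::TimeDelta::FromMilliseconds(10),
                                 base::TimeDelta::FromMinutes(2),
                                 300);
    }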
Patchset #2 (id:60001) has been deleted
Patchset #2 (id:80001) has been deleted
Patchset #2 (id:100001) has been deleted
isherman: PTAL

https://codereview.chromium.org/760763002/diff/40001/chrome/browser/metrics/c...
File chrome/browser/metrics/chrome_browser_main_extra_parts_metrics.cc (right):

https://codereview.chromium.org/760763002/diff/40001/chrome/browser/metrics/c...
chrome/browser/metrics/chrome_browser_main_extra_parts_metrics.cc:253: base::TimeDelta::FromMinutes(2), 300);
On 2014/11/26 02:24:55, Ilya Sherman wrote:
> Could you clarify why you need such fine granularity? Jim tuned the default
> numbers pretty carefully based on the needs of the networking team, and they're
> pretty picky about their latency measurements. What do you know that they
> don't?

I am basing these numbers on the Startup.OSX.* histograms that I added a couple of months ago. Those use 100 buckets, with min=10ms and max=1 minute. That works out to 12% increments per bucket, which means that changes that improve startup time by, say, 5% are incredibly hard to discern from the histograms. With the numbers I listed here, the bucket increment is 3.4%. In my updated patch set, I've further reduced the min/max difference and increased the bucket count to 781 to get 1% bucket increments. This is necessary because I expect most startup improvements to be small and incremental, so being able to discern 1% improvements is quite important.

https://codereview.chromium.org/760763002/diff/40001/chrome/browser/metrics/c...
chrome/browser/metrics/chrome_browser_main_extra_parts_metrics.cc:298: };
On 2014/11/26 02:24:55, Ilya Sherman wrote:
> Please split this class out into its own implementation files. Even better
> would be not to dangle it off of ChromeBrowserMainExtraPartsMetrics at all --
> I'd really prefer this file not to become too much of a dumping ground -- but if
> there's not a better place to put it, that's okay.

I split the class out into its own implementation files. This class is still the logical owner of the FirstWebContentsProfiler, but the new design only requires 7 additional lines in this .cc file.

https://codereview.chromium.org/760763002/diff/40001/chrome/browser/metrics/c...
chrome/browser/metrics/chrome_browser_main_extra_parts_metrics.cc:354: }
On 2014/11/26 02:24:55, Ilya Sherman wrote:
> I think this logic could safely be internal to the FirstWebContentsProfiler
> class.

Done.

https://codereview.chromium.org/760763002/diff/40001/chrome/browser/metrics/c...
File chrome/browser/metrics/chrome_browser_main_extra_parts_metrics.h (right):

https://codereview.chromium.org/760763002/diff/40001/chrome/browser/metrics/c...
chrome/browser/metrics/chrome_browser_main_extra_parts_metrics.h:49: void FirstWebContentsProfilerWantsDestruction();
On 2014/11/26 02:24:55, Ilya Sherman wrote:
> This feels pretty hacky. I wonder if it would be cleaner to just have the
> FirstWebContentsProfiler own its own memory, i.e. call "delete this" when it
> currently calls FirstWebContentsProfilerWantsDestruction().

I don't like the style where objects own their own memory, as I find it prone to memory leaks. The logic is the same, but I changed the code to use the delegation pattern (see the sketch below), which I assume will be okay by you in terms of not feeling hacky.
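A minimal, self-contained sketch of the delegation pattern referred to here. The delegate class name matches the one visible later in the review; the callback name is an assumption for illustration, and the content::WebContentsObserver base class is omitted so the example stands alone:

    // The delegate owns the profiler; instead of calling "delete this",
    // the profiler notifies its owner, which is responsible for
    // destroying it.
    class FirstWebContentsProfilerDelegate {
     public:
      virtual ~FirstWebContentsProfilerDelegate() {}
      // Hypothetical callback name: invoked once the profiler has recorded
      // all of its metrics and no longer needs to exist.
      virtual void ProfilerFinishedCollectingMetrics() = 0;
    };

    class FirstWebContentsProfiler {
     public:
      explicit FirstWebContentsProfiler(
          FirstWebContentsProfilerDelegate* delegate)
          : delegate_(delegate) {}

      void FinishedCollectingMetrics() {
        // Hand control back to the owner rather than self-deleting.
        delegate_->ProfilerFinishedCollectingMetrics();
      }

     private:
      FirstWebContentsProfilerDelegate* delegate_;  // Weak; owns this object.
    };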
Thanks for moving the code. I'm ok with the delegation pattern that you used.

https://codereview.chromium.org/760763002/diff/120001/chrome/browser/metrics/...
File chrome/browser/metrics/first_web_contents_profiler.cc (right):

https://codereview.chromium.org/760763002/diff/120001/chrome/browser/metrics/...
chrome/browser/metrics/first_web_contents_profiler.cc:14: namespace {
nit: Please leave a blank line after this one.

https://codereview.chromium.org/760763002/diff/120001/chrome/browser/metrics/...
chrome/browser/metrics/first_web_contents_profiler.cc:20: // improvements, which requires 781 buckets.
1% improvements to what? Histograms measure a precise mean, so the number of buckets is irrelevant for measuring improvements to the mean. If you're not tracking the mean, then what are you tracking?

https://codereview.chromium.org/760763002/diff/120001/chrome/browser/metrics/...
chrome/browser/metrics/first_web_contents_profiler.cc:21: // log(45 * 1000 / 200) / log (2) / 0.01 = 781.
nit: I expect we'll arrive at some different computation; but if you find yourself keeping this one, please just use "lg" in place of "log", and remove the "/ log (2)" part.

https://codereview.chromium.org/760763002/diff/120001/chrome/browser/metrics/...
chrome/browser/metrics/first_web_contents_profiler.cc:21: // log(45 * 1000 / 200) / log (2) / 0.01 = 781.
Sorry, but I don't understand the relevance of this computation. Could you please clarify why dividing the distance, in log space, by 0.01 gives an interesting number?

https://codereview.chromium.org/760763002/diff/120001/chrome/browser/metrics/...
chrome/browser/metrics/first_web_contents_profiler.cc:22: const int kHistogramBucketSize = 781;
nit: Please leave a blank line after this one.
isherman: PTAL

https://codereview.chromium.org/760763002/diff/120001/chrome/browser/metrics/...
File chrome/browser/metrics/first_web_contents_profiler.cc (right):

https://codereview.chromium.org/760763002/diff/120001/chrome/browser/metrics/...
chrome/browser/metrics/first_web_contents_profiler.cc:14: namespace {
On 2014/11/26 04:12:26, Ilya Sherman wrote:
> nit: Please leave a blank line after this one.

Done.

https://codereview.chromium.org/760763002/diff/120001/chrome/browser/metrics/...
chrome/browser/metrics/first_web_contents_profiler.cc:20: // improvements, which requires 781 buckets.
On 2014/11/26 04:12:26, Ilya Sherman wrote:
> 1% improvements to what? Histograms measure a precise mean, so the number of
> buckets is irrelevant for measuring improvements to the mean. If you're not
> tracking the mean, then what are you tracking?

I'm tracking the two modes of a bi-modal distribution. I've significantly expanded the documentation in patch set 3, so take a look at that for a more detailed explanation.

https://codereview.chromium.org/760763002/diff/120001/chrome/browser/metrics/...
chrome/browser/metrics/first_web_contents_profiler.cc:21: // log(45 * 1000 / 200) / log (2) / 0.01 = 781.
On 2014/11/26 04:12:26, Ilya Sherman wrote:
> nit: I expect we'll arrive at some different computation; but if you find
> yourself keeping this one, please just use "lg" in place of "log", and remove
> the "/ log (2)" part.

Done.

https://codereview.chromium.org/760763002/diff/120001/chrome/browser/metrics/...
chrome/browser/metrics/first_web_contents_profiler.cc:21: // log(45 * 1000 / 200) / log (2) / 0.01 = 781.
On 2014/11/26 04:12:26, Ilya Sherman wrote:
> Sorry, but I don't understand the relevance of this computation. Could you
> please clarify why dividing the distance, in log space, by 0.01 gives an
> interesting number?

I've updated the explanation in patch set 3 to break down the computation into documented subcomponents.

https://codereview.chromium.org/760763002/diff/120001/chrome/browser/metrics/...
chrome/browser/metrics/first_web_contents_profiler.cc:22: const int kHistogramBucketSize = 781;
On 2014/11/26 04:12:26, Ilya Sherman wrote:
> nit: Please leave a blank line after this one.

Done.
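To make the disputed arithmetic concrete, here is a hedged sketch (not code from the CL) of what the comment's formula computes. For exponentially spaced buckets where each bucket's upper bound is a fixed ratio times its lower bound, the number of buckets spanning [min, max] is log(max/min) / log(ratio). The CL's 781 corresponds to buckets spaced 0.01 apart in lg (base-2 log) space, i.e. a ratio of 2^0.01 ≈ 1.007 (roughly 0.7% per bucket); exactly 1%-wide buckets (ratio 1.01) would instead need about 544:

    #include <cmath>
    #include <cstdio>

    // Number of exponentially spaced buckets covering [min_ms, max_ms] when
    // each bucket's upper bound is |ratio| times its lower bound.
    int BucketCount(double min_ms, double max_ms, double ratio) {
      return static_cast<int>(
          std::lround(std::log(max_ms / min_ms) / std::log(ratio)));
    }

    int main() {
      // lg(45000 / 200) / 0.01 = 7.81 / 0.01, as in the CL's comment:
      std::printf("%d\n", BucketCount(200, 45 * 1000, std::pow(2, 0.01)));  // 781
      // Exactly 1%-wide buckets:
      std::printf("%d\n", BucketCount(200, 45 * 1000, 1.01));               // 544
      return 0;
    }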
isherman@chromium.org changed reviewers: + asvitkine@chromium.org, rkaplow@chromium.org
+Alexei and Rob for any statistically-minded insights they can provide.

https://codereview.chromium.org/760763002/diff/140001/chrome/browser/metrics/...
File chrome/browser/metrics/first_web_contents_profiler.cc (right):

https://codereview.chromium.org/760763002/diff/140001/chrome/browser/metrics/...
chrome/browser/metrics/first_web_contents_profiler.cc:29: // logarithmic difference between buckets must be 0.01.
What is the connection between 1% and a logarithmic difference of 0.01? Are you interested in 1% improvements in the log space?

Moreover, I'm not convinced that you need 1%-sized deltas between consecutive buckets in order to get 1% granularity on the means, even for a bimodal distribution. The best option would be to identify why the distribution is bimodal, and split it up into two sets of unimodal histograms. If that's not feasible, I'd recommend at least checking with a statistician for how best to compute the number you're interested in. I'm going to be pretty strict in asking you for a solid statistical basis for a time-tracking histogram that has oodles of buckets.
+1 to Ilya's concern. 781 buckets is still quite a lot.

One of the problems is that, because this is not a sparse histogram, it keeps quite a bit of extra memory around in the browser process (an array with 781 entries), which is quite wasteful given that it sounds like you're recording only a single sample per session.

I'm also not sure what the fine-granularity buckets actually buy you. What metric will you be analyzing? The mean? Some percentiles (e.g. the 50th percentile, the 90th percentile)? All of those can be analyzed without that many buckets. Also, since you're just adding this metric now, I'm not sure how you can actually reason about how many buckets you need, given that you don't yet know the underlying distribution.

At the very least, I suggest adding a coarser-grained metric to begin with; once you know the range you want to "zoom in" on, you can always add a more fine-grained metric that's appropriate for the data we're seeing.
isherman, asvitkine: Response requested.

My response got so long that I made a Google Doc for it to make it easier to read. I addressed every concern raised in the last 2 comments. Feel free to add more concerns. https://docs.google.com/a/chromium.org/document/d/1JUbOXqE58Ata17TAoqt9mEDCPO...

There are many metrics collected in Chrome right now, and none of them have needed this many buckets, so both of you naturally wonder why I require so many buckets for these metrics, and whether the buckets are actually necessary. Furthermore, there are even existing "startup" metrics that use far, far fewer buckets. I will attempt to concisely show that the existing "startup" metrics have too few buckets to be useful. (A couple of months ago, I spent quite some time staring at the existing Startup metrics, and even added some of my own. As far as I know, I'm pretty much the only person who actually looks at these histograms. Jeremy used to, once upon a time.)

Take a look at the timeline for Startup.BrowserWindowDisplay (50th percentile). This metric uses 50 buckets, and the "distance" between buckets is 1.35 (i.e. bucket_max / bucket_min = 1.35). https://uma.googleplex.com/p/chrome/timeline/#metric=Startup.BrowserWindowDis...

When you look at the stable channel, the sample size is huge, and the graph is incredibly regular: 5 days flat, then a 2-day spike, repeating at 7-day intervals. During the 5 flat days, the typical shift in median is around 0.25%. If a specific Chrome version were to introduce a 1% improvement to startup, it would be easily measurable.

Now look at the dev channel, which is where most experiments will be running. It has orders of magnitude fewer samples. The typical shift in median is around 10%. The 7-day cycle isn't even noticeable through the noise. If a specific Chrome version introduced a 1% improvement to startup, there would be no way to measure whether the improvement actually worked.

I think it's pretty obvious that the noise is a function of the number of samples and the "distance" between buckets. In order to run experiments on dev (or even beta) that affect startup time, I need to reduce the noise. I'm not a statistician, so I can't tell you off the top of my head how many buckets I need to reduce the noise to 1% or lower, given the sample size of the dev channel. A distance between buckets of 1.01 seems like a reasonable estimate.
Just chatted about this with Rob offline. Let me summarize our thoughts and see if this is consistent with what you're thinking.

First, be aware that the HistogramEventProto has an exact Sum field, which means we can get an exact mean of the data (divide the exact sum by the exact count), regardless of how many or few buckets there are. Just stating this to make sure we're on the same page. However, from your links, it looks like you're planning to use percentiles to analyze this. Is that correct? Which percentiles are you interested in? Or, if not percentiles, how will you analyze it?

With percentiles, the backend calculations are indeed estimates. How it works is that the code figures out which bucket the percentile lands in (i.e. it knows the total count and goes through the buckets, incrementing counts, until it finds the bucket that would go over the percentile). Then, for that bucket, it figures out which sample (e.g. the Nth out of X counts in that bucket) the percentile should land on, and interpolates the bucket range to get the estimated percentile value.

With the above, you can get an inaccurate result (i.e. one that doesn't correspond to reality) if the real samples in that bucket were not uniformly distributed, which is an assumption made by the interpolation calculation; the less uniformly distributed they are, the bigger the error. So fewer samples generally means worse error, and more samples generally means a more accurate estimate, unless those samples follow a much different distribution *within* that bucket.

Increasing the number of buckets means that:

a) There are fewer samples in each bucket, so it's more likely that they won't be evenly distributed.

But also:

b) The maximum delta between the estimate and the actual value is reduced (i.e. it is at most the size of the bucket).

c) There's less chance of a non-uniform distribution *within* a bucket causing a mis-estimate.

Obviously, if you go to the limit (i.e. one bucket per unit, e.g. per ms), then you get exact counts and effect b) wins out over effect a).

Is the above consistent with your take on this? That is, will you in fact be looking at percentiles, and do you want to improve the b) factor at the detriment of the a) factor?

Note: Your argument about having very few users (on dev, where you want to experiment) seems like it would actually make you more susceptible to the a) factor (i.e. with many buckets, and only a single sample in many of them, you get greater interpolation error).

Another thing to consider: since you're planning to analyze this metric via experimentation, you can compare groups directly using the variations dashboard. It provides confidence intervals for the various values (i.e. means and percentiles), computed by sub-sampling (dividing users into 20 buckets and seeing how consistent the results are between groups), which may be sufficient to determine whether there's a statistically significant change without needing the extra buckets.

Finally, there is one factor in your argument that I'm not sure I fully understood. You said there's more noise in the dev data and that it's a function of the size of the buckets and such. It's not clear to me that you can draw that conclusion. For example, the noise could come from the underlying data (e.g. simply too few samples), rather than from anything about how we measure it. If that's indeed the case, then no amount of granularity in how we record the metric can help.
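To illustrate the estimation scheme described above, here is a rough, self-contained sketch of bucket-walking with linear interpolation (illustrative only; the UMA backend's actual implementation surely differs):

    #include <cstdint>
    #include <vector>

    struct Bucket {
      double min;
      double max;
      int64_t count;
    };

    // Walks buckets until the cumulative count reaches the target
    // percentile, then linearly interpolates within that bucket. The
    // interpolation assumes samples are uniformly distributed inside the
    // bucket, so the estimate's error grows as that assumption breaks
    // down and shrinks as buckets get narrower.
    double EstimatePercentile(const std::vector<Bucket>& buckets, double pct) {
      int64_t total = 0;
      for (const Bucket& b : buckets)
        total += b.count;
      const double target = pct / 100.0 * total;
      double cumulative = 0;
      for (const Bucket& b : buckets) {
        if (b.count > 0 && cumulative + b.count >= target) {
          const double fraction = (target - cumulative) / b.count;
          return b.min + fraction * (b.max - b.min);
        }
        cumulative += b.count;
      }
      return buckets.empty() ? 0.0 : buckets.back().max;
    }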
Just had an offline discussion with asvitkine and rkaplow. We are going to release the metric as is into dev/canary channels. We are also going to release a metric with the standard bucket sizing/count into dev/canary channels. Both of these metrics will be explicitly commented as "temporary, slated for reconsideration". Once we have 1-2 weeks of data, we will look at whether increasing the bucket count actually has a useful effect, and then decide how to proceed.
Patchset #5 (id:180001) has been deleted
Patchset #5 (id:200001) has been deleted
Patchset #5 (id:220001) has been deleted
Patchset #5 (id:240001) has been deleted
Patchset #5 (id:260001) has been deleted
asvitkine: Please review. If you only want to see the changes that arose from our discussion, you can diff patch set 5 against patch set 4.
Generally looks good; some suggestions below. Thanks!

https://codereview.chromium.org/760763002/diff/280001/chrome/browser/metrics/...
File chrome/browser/metrics/first_web_contents_profiler.cc (right):

https://codereview.chromium.org/760763002/diff/280001/chrome/browser/metrics/...
chrome/browser/metrics/first_web_contents_profiler.cc:66: base::CurrentProcessInfo::CreationTime();
CreationTime() seems like it's doing a system call - at least on some platforms. Since you call it in two places, how about calling it in the ctor and caching it in a member?

https://codereview.chromium.org/760763002/diff/280001/chrome/browser/metrics/...
chrome/browser/metrics/first_web_contents_profiler.cc:67: if (!process_creation_time.is_null()) {
Nit: Early return instead. Same below.

https://codereview.chromium.org/760763002/diff/280001/chrome/browser/metrics/...
chrome/browser/metrics/first_web_contents_profiler.cc:80: UMA_HISTOGRAM_TIMES(
Use UMA_HISTOGRAM_TIMES_100

https://codereview.chromium.org/760763002/diff/280001/chrome/browser/metrics/...
chrome/browser/metrics/first_web_contents_profiler.cc:108: UMA_HISTOGRAM_TIMES(
Ditto.

https://codereview.chromium.org/760763002/diff/280001/chrome/browser/metrics/...
File chrome/browser/metrics/first_web_contents_profiler.h (right):

https://codereview.chromium.org/760763002/diff/280001/chrome/browser/metrics/...
chrome/browser/metrics/first_web_contents_profiler.h:15: class FirstWebContentsProfilerDelegate {
Nit: Any reason not to just nest it under FirstWebContentsProfiler, so it's just FirstWebContentsProfiler::Delegate?

https://codereview.chromium.org/760763002/diff/280001/chrome/browser/metrics/...
chrome/browser/metrics/first_web_contents_profiler.h:40: // WebContentsObserver overrides.
Nit: I think for new code, the preferred notation is:
// content::WebContentsObserver:

https://codereview.chromium.org/760763002/diff/280001/tools/metrics/histogram...
File tools/metrics/histograms/histograms.xml (right):

https://codereview.chromium.org/760763002/diff/280001/tools/metrics/histogram...
tools/metrics/histograms/histograms.xml:35395: + name="Startup.Experimental.FirstWebContents.MainFrameLoad.ManyBuckets"
Honestly, I'd just remove the Experimental substring here - it just makes the name longer without much benefit. Also, if we decide to keep one of these metrics, we won't need to rename them then.
asvitkine: PTAL

Diff patch set 6 against patch set 5 to see the changes. I did not make any changes to 'tools/metrics/histograms/histograms.xml' (letting you know since a diff isn't available).

https://codereview.chromium.org/760763002/diff/280001/chrome/browser/metrics/...
File chrome/browser/metrics/first_web_contents_profiler.cc (right):

https://codereview.chromium.org/760763002/diff/280001/chrome/browser/metrics/...
chrome/browser/metrics/first_web_contents_profiler.cc:66: base::CurrentProcessInfo::CreationTime();
On 2014/12/03 20:05:57, Alexei Svitkine wrote:
> CreationTime() seems like it's doing a system call - at least on some platforms.
>
> Since you call it in two places, how about doing it in the ctor and caching it
> in a member?

Done. I'm surprised that the function doesn't cache the result itself, since it's called multiple times over the lifetime of Chrome and the result isn't expected to change.

https://codereview.chromium.org/760763002/diff/280001/chrome/browser/metrics/...
chrome/browser/metrics/first_web_contents_profiler.cc:67: if (!process_creation_time.is_null()) {
On 2014/12/03 20:05:57, Alexei Svitkine wrote:
> Nit: Early return instead. Same below.

That would change the behavior of this method (it would prevent FinishedCollectingMetrics() from firing). While it would be possible to make it early return by duplicating lines 85 & 86, that seems less clean. I added documentation to collected_paint_metric_ and collected_load_metric_ to indicate that the member is set to true even if the attempt fails, to prevent this class from sitting around forever. I refrained from renaming the members, since I couldn't think of a reasonably short member name that would convey the meaning more precisely.

https://codereview.chromium.org/760763002/diff/280001/chrome/browser/metrics/...
chrome/browser/metrics/first_web_contents_profiler.cc:80: UMA_HISTOGRAM_TIMES(
On 2014/12/03 20:05:57, Alexei Svitkine wrote:
> Use UMA_HISTOGRAM_TIMES_100

Done.

https://codereview.chromium.org/760763002/diff/280001/chrome/browser/metrics/...
chrome/browser/metrics/first_web_contents_profiler.cc:108: UMA_HISTOGRAM_TIMES(
On 2014/12/03 20:05:57, Alexei Svitkine wrote:
> Ditto.

Done.

https://codereview.chromium.org/760763002/diff/280001/chrome/browser/metrics/...
File chrome/browser/metrics/first_web_contents_profiler.h (right):

https://codereview.chromium.org/760763002/diff/280001/chrome/browser/metrics/...
chrome/browser/metrics/first_web_contents_profiler.h:15: class FirstWebContentsProfilerDelegate {
On 2014/12/03 20:05:57, Alexei Svitkine wrote:
> Nit: Any reason to not just nest it under FirstWebContentsProfiler so then it's
> just FirstWebContentsProfiler::Delegate?

Sounds reasonable to me.

https://codereview.chromium.org/760763002/diff/280001/chrome/browser/metrics/...
chrome/browser/metrics/first_web_contents_profiler.h:40: // WebContentsObserver overrides.
On 2014/12/03 20:05:57, Alexei Svitkine wrote:
> Nit: I think for new code, the preferred notation is:
>
> // content::WebContentsObserver:

Sure, done. I kept the old style for chrome/browser/metrics/chrome_browser_main_extra_parts_metrics.h, since the rest of that file still uses the old style.

https://codereview.chromium.org/760763002/diff/280001/tools/metrics/histogram...
File tools/metrics/histograms/histograms.xml (right):

https://codereview.chromium.org/760763002/diff/280001/tools/metrics/histogram...
tools/metrics/histograms/histograms.xml:35395: + name="Startup.Experimental.FirstWebContents.MainFrameLoad.ManyBuckets"
On 2014/12/03 20:05:57, Alexei Svitkine wrote:
> Honestly, I'd just remove the Experimental substring here - it just makes it
> longer without much benefit.
>
> Also, if we decide to keep one of these metrics, we won't need to rename them,
> then.

I would prefer to keep the names as is, since I intend to rename the metric regardless, and this makes it clearer that these are temporary metrics. Regardless of the final metric, I intend to remove the final affix from the name, since "ManyBuckets" and "StandardBuckets" add no meaning to the metric. If there's some hidden cost that I'm not aware of to orphaning histograms, let me know and I'll happily make the change.
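A hedged sketch of the constructor-caching change discussed above (member names beyond those mentioned in the review, and the parameter list, are assumptions):

    // Cache the process creation time once at construction instead of
    // re-querying base::CurrentProcessInfo::CreationTime() - a system call
    // on some platforms - at each recording site.
    FirstWebContentsProfiler::FirstWebContentsProfiler(
        content::WebContents* web_contents,
        Delegate* delegate)
        : content::WebContentsObserver(web_contents),
          collected_paint_metric_(false),
          collected_load_metric_(false),
          process_creation_time_(base::CurrentProcessInfo::CreationTime()),
          delegate_(delegate) {}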
lgtm, will leave it up to you if you want to rename them or not

https://codereview.chromium.org/760763002/diff/280001/tools/metrics/histogram...
File tools/metrics/histograms/histograms.xml (right):

https://codereview.chromium.org/760763002/diff/280001/tools/metrics/histogram...
tools/metrics/histograms/histograms.xml:35395: + name="Startup.Experimental.FirstWebContents.MainFrameLoad.ManyBuckets"
On 2014/12/03 21:38:41, erikchen wrote:
> On 2014/12/03 20:05:57, Alexei Svitkine wrote:
> > Honestly, I'd just remove the Experimental substring here - it just makes it
> > longer without much benefit.
> >
> > Also, if we decide to keep one of these metrics, we won't need to rename them,
> > then.
>
> I would prefer to keep the names as is, since I intend to rename the metric
> regardless, and this makes it more clear that these are temporary metrics.
>
> Regardless of the final metric, I intend to remove the final affix from the
> name, since "ManyBuckets" and "StandardBuckets" add no meaning to the metric.
>
> If there's some hidden cost that I'm not aware of to orphaning histograms, let me
> know and I'll happily make the change.

The cost is keeping the old entries in the XML file and marking them as obsolete. (Since we want to be able to decode even old data.)

https://codereview.chromium.org/760763002/diff/270007/chrome/browser/metrics/...
File chrome/browser/metrics/first_web_contents_profiler.cc (right):

https://codereview.chromium.org/760763002/diff/270007/chrome/browser/metrics/...
chrome/browser/metrics/first_web_contents_profiler.cc:91: return;
Nit: Add a line after this to match the method above.
https://codereview.chromium.org/760763002/diff/280001/tools/metrics/histogram...
File tools/metrics/histograms/histograms.xml (right):

https://codereview.chromium.org/760763002/diff/280001/tools/metrics/histogram...
tools/metrics/histograms/histograms.xml:35395: + name="Startup.Experimental.FirstWebContents.MainFrameLoad.ManyBuckets"
On 2014/12/03 22:09:33, Alexei Svitkine wrote:
> On 2014/12/03 21:38:41, erikchen wrote:
> > On 2014/12/03 20:05:57, Alexei Svitkine wrote:
> > > Honestly, I'd just remove the Experimental substring here - it just makes it
> > > longer without much benefit.
> > >
> > > Also, if we decide to keep one of these metrics, we won't need to rename them,
> > > then.
> >
> > I would prefer to keep the names as is, since I intend to rename the metric
> > regardless, and this makes it more clear that these are temporary metrics.
> >
> > Regardless of the final metric, I intend to remove the final affix from the
> > name, since "ManyBuckets" and "StandardBuckets" add no meaning to the metric.
> >
> > If there's some hidden cost that I'm not aware of to orphaning histograms, let me
> > know and I'll happily make the change.
>
> The cost is keeping the old entries in the XML file and marking them as
> obsolete. (Since we want to be able to decode even old data.)

Alternately, the cost is losing continuity for your data -- though if you know where to look, it's easy enough to just view both histograms side-by-side on the dashboard.
https://codereview.chromium.org/760763002/diff/280001/tools/metrics/histogram...
File tools/metrics/histograms/histograms.xml (right):

https://codereview.chromium.org/760763002/diff/280001/tools/metrics/histogram...
tools/metrics/histograms/histograms.xml:35395: + name="Startup.Experimental.FirstWebContents.MainFrameLoad.ManyBuckets"
On 2014/12/03 22:16:11, Ilya Sherman wrote:
> On 2014/12/03 22:09:33, Alexei Svitkine wrote:
> > On 2014/12/03 21:38:41, erikchen wrote:
> > > On 2014/12/03 20:05:57, Alexei Svitkine wrote:
> > > > Honestly, I'd just remove the Experimental substring here - it just makes it
> > > > longer without much benefit.
> > > >
> > > > Also, if we decide to keep one of these metrics, we won't need to rename them,
> > > > then.
> > >
> > > I would prefer to keep the names as is, since I intend to rename the metric
> > > regardless, and this makes it more clear that these are temporary metrics.
> > >
> > > Regardless of the final metric, I intend to remove the final affix from the
> > > name, since "ManyBuckets" and "StandardBuckets" add no meaning to the metric.
> > >
> > > If there's some hidden cost that I'm not aware of to orphaning histograms, let me
> > > know and I'll happily make the change.
> >
> > The cost is keeping the old entries in the XML file and marking them as
> > obsolete. (Since we want to be able to decode even old data.)
>
> Alternately, the cost is losing continuity for your data -- though if you know
> where to look, it's easy enough to just view both histograms side-by-side on the
> dashboard.

Understood. Since I plan on renaming the metric anyway to remove the final affix, I will continue with my plan to not rename this temporary histogram.

https://codereview.chromium.org/760763002/diff/270007/chrome/browser/metrics/...
File chrome/browser/metrics/first_web_contents_profiler.cc (right):

https://codereview.chromium.org/760763002/diff/270007/chrome/browser/metrics/...
chrome/browser/metrics/first_web_contents_profiler.cc:91: return;
On 2014/12/03 22:09:33, Alexei Svitkine wrote:
> Nit: Add a line after this to match the method above.

Done.
The CQ bit was checked by erikchen@chromium.org
CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/patch-status/760763002/310001
The CQ bit was unchecked by commit-bot@chromium.org
Try jobs failed on following builders: android_chromium_gn_compile_rel on tryserver.chromium.linux (http://build.chromium.org/p/tryserver.chromium.linux/builders/android_chromiu...)
The CQ bit was checked by erikchen@chromium.org
CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/patch-status/760763002/310001
The CQ bit was unchecked by commit-bot@chromium.org
Try jobs failed on following builders: android_chromium_gn_compile_rel on tryserver.chromium.linux (http://build.chromium.org/p/tryserver.chromium.linux/builders/android_chromiu...)
Looks like this code won't work on Android very well:

../../chrome/browser/metrics/first_web_contents_profiler.cc:59: error: undefined reference to 'base::CurrentProcessInfo::CreationTime()'
../../chrome/browser/metrics/first_web_contents_profiler.cc:34: error: undefined reference to 'chrome::BrowserIterator::BrowserIterator()'
../../chrome/browser/metrics/first_web_contents_profiler.cc:37: error: undefined reference to 'TabStripModel::GetActiveWebContents() const'

The latter two errors arise because those files aren't built on Android, and the first arises because there is no Android-specific version of process_info (there are only win, mac, and linux implementations). I'll update the CL to not build first_web_contents_profiler.cc/first_web_contents_profiler.h on Android.
asvitkine: PTAL

tl;dr: I leave FirstWebContentsProfiler declared on all platforms, but only define it on non-Android platforms.

I considered 3 possibilities for how to disable metric collection on Android:

1. Make first_web_contents_profiler self-owned, and only compile it on non-Android platforms. This solution leaves the smallest footprint in chrome_browser_main_extra_parts_metrics. I dislike self-owned objects, as I find that it is too easy to write poor-quality code that leaks memory.

2. Guard all references to first_web_contents_profiler with preprocessor directives. This is the most common solution I've seen in Chrome code. It would add 5 additional preprocessor directives, one of which would be nested in the class inheritance code.

3. Declare the class FirstWebContentsProfiler on Android, but don't define it.

I am going with solution (3); see the sketch below. It leaves a smaller footprint than (2) in chrome_browser_main_extra_parts_metrics: it adds a single preprocessor directive to chrome_browser_main_extra_parts_metrics.cc and doesn't touch chrome_browser_main_extra_parts_metrics.h. It adds minimal complexity to first_web_contents_profiler.cc.
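The sketch mentioned above: what solution (3) looks like at the call site (the factory method name here is an assumption for illustration, not necessarily the landed code):

    // chrome_browser_main_extra_parts_metrics.cc (sketch). The class is
    // declared for all platforms, but its definition is only compiled on
    // non-Android platforms, so the single call site is guarded:
    #if !defined(OS_ANDROID)
      first_web_contents_profiler_ =
          FirstWebContentsProfiler::CreateProfilerForFirstWebContents(this);
    #endif  // !defined(OS_ANDROID)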
lgtm if it passes the bots
The CQ bit was checked by erikchen@chromium.org
CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/patch-status/760763002/330001
Message was sent while issue was closed.
Committed patchset #8 (id:330001)
Message was sent while issue was closed.
Patchset 8 (id:??) landed as https://crrev.com/c2cec4c53bd674b77a9cfdb662dd38a7dfff5347
Cr-Commit-Position: refs/heads/master@{#306884}