Description
Add Plugin.PpapiSyncIPCTime metric.
BUG=459600
Committed: https://crrev.com/751416e1cab41302b8fecf1512764e266314cf8d
Cr-Commit-Position: refs/heads/master@{#322408}
Patch Set 1 #
Total comments: 7
Patch Set 2 : update histogram description #
Messages
Total messages: 21 (4 generated)
gab@chromium.org changed reviewers: + bbudge@chromium.org, mpearson@chromium.org
Hey Bill, PTAL, this is the metric we talked about with cpu yesterday.

@Mark for histograms.xml

CC jam FYI; CC piman (thanks for the tip on where to put it!)

Thanks,
Gab
histograms.xml lgtm

https://codereview.chromium.org/1034043002/diff/1/tools/metrics/histograms/hi...
File tools/metrics/histograms/histograms.xml (right):

https://codereview.chromium.org/1034043002/diff/1/tools/metrics/histograms/hi...
tools/metrics/histograms/histograms.xml:27400: + Logs the time it took to complete each synchronous IPC made from the PPAPI
nit: omit "Logs"
nit: each -> a
On 2015/03/25 21:51:30, gab wrote:
> Hey Bill, PTAL, this is the metric we talked about with cpu yesterday.
>
> @Mark for histograms.xml
>
> CC jam FYI; CC piman (thanks for the tip on where to put it!)
>
> Thanks,
> Gab

LGTM in principle, but FYI I'm not sure how good the data will be. Most sync messages should be fairly quick, but a few sync messages (I'm thinking GPU ones) can be "long" and add a lot of noise.
On Wed, Mar 25, 2015 at 3:11 PM, <piman@chromium.org> wrote:
> On 2015/03/25 21:51:30, gab wrote:
>> Hey Bill, PTAL, this is the metric we talked about with cpu yesterday.
>>
>> @Mark for histograms.xml
>>
>> CC jam FYI; CC piman (thanks for the tip on where to put it!)
>>
>> Thanks,
>> Gab
>
> LGTM in principle, but FYI I'm not sure how good the data will be. Most sync
> messages should be fairly quick, but a few sync messages (I'm thinking GPU ones)
> can be "long" and add a lot of noise.

In these kinds of cases, you might want to break out into multiple histograms depending on the type of message.

--mark

https://codereview.chromium.org/1034043002/
https://codereview.chromium.org/1034043002/diff/1/ppapi/proxy/plugin_dispatch...
File ppapi/proxy/plugin_dispatcher.cc (right):

https://codereview.chromium.org/1034043002/diff/1/ppapi/proxy/plugin_dispatch...
ppapi/proxy/plugin_dispatcher.cc:210: SCOPED_UMA_HISTOGRAM_TIMER("Plugin.PpapiSyncIPCTime");
There are about a dozen usages of this in Chromium right now, and they all seem like low-frequency things. Could this slow down the Pepper proxy?
On 2015/03/25 22:11:49, piman (Very slow to review) wrote:
> On 2015/03/25 21:51:30, gab wrote:
> > Hey Bill, PTAL, this is the metric we talked about with cpu yesterday.
> >
> > @Mark for histograms.xml
> >
> > CC jam FYI; CC piman (thanks for the tip on where to put it!)
> >
> > Thanks,
> > Gab
>
> LGTM in principle, but FYI I'm not sure how good the data will be. Most sync
> messages should be fairly quick, but a few sync messages (I'm thinking GPU ones)
> can be "long" and add a lot of noise.

The goal here is to determine whether an experiment we're running (see bug and/or [1]) affects this in any way (we are mainly thinking of Flash=>Renderer interactions -- does this cover mostly that or way more?). So while the data itself may not be so interesting on its own, it becomes interesting in an experiment where we can check whether the long tail is impacted in group A vs. group B.

[1] https://groups.google.com/a/google.com/d/topic/chrome-fast/F23xneF4rO4/discus...
https://codereview.chromium.org/1034043002/diff/1/ppapi/proxy/plugin_dispatch...
File ppapi/proxy/plugin_dispatcher.cc (right):

https://codereview.chromium.org/1034043002/diff/1/ppapi/proxy/plugin_dispatch...
ppapi/proxy/plugin_dispatcher.cc:210: SCOPED_UMA_HISTOGRAM_TIMER("Plugin.PpapiSyncIPCTime");
On 2015/03/25 22:28:14, bbudge wrote:
> There are about a dozen usages of this in Chromium right now, and they all seem
> like low-frequency things. Could this slow down the Pepper proxy?

Mark will know better than I, but FWIU, this is a new wrapper for UMA_HISTOGRAM_TIMES (to avoid every user of that in a small scope having to do the local variable/base::Time::Now() dance explicitly). It probably shouldn't be put in a tight loop -- and even then I'm not sure, it's probably very minimal overhead -- but other than that I doubt it makes any difference on the path to a synchronous IPC (which I assume is at least an order of magnitude slower).
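For reference, a minimal sketch of the equivalence gab describes, assuming Chromium's base/metrics macros; the function names here are illustrative, not from the patch:

#include "base/metrics/histogram_macros.h"
#include "base/time/time.h"

// The explicit "local variable / Now() dance":
void SendSyncMessageExplicit() {
  base::TimeTicks begin = base::TimeTicks::Now();
  // ... perform the synchronous IPC ...
  UMA_HISTOGRAM_TIMES("Plugin.PpapiSyncIPCTime",
                      base::TimeTicks::Now() - begin);
}

// The equivalent scoped form: the timer starts here and the sample is
// recorded when the scoped object is destroyed at the end of the scope.
void SendSyncMessageScoped() {
  SCOPED_UMA_HISTOGRAM_TIMER("Plugin.PpapiSyncIPCTime");
  // ... perform the synchronous IPC ...
}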
Actually, @Mark: I remember having a stat that was logged every 10s and it made its way to the top data users on the log clusters; this stat as-is would be **much** worse.

Should we only log every 100th or 1000th call?

@piman: how often do you think this will hit on average with typical browsing?
On Wed, Mar 25, 2015 at 3:48 PM, <gab@chromium.org> wrote:
> Actually, @Mark: I remember having a stat that was logged every 10s and it made
> its way to the top data users on the log clusters; this stat as-is would be
> **much** worse.
>
> Should we only log every 100th or 1000th call?

The stat you're thinking of had to do with *data space* on *logs clusters* [1]. What we're talking about on this code review has to do with *CPU time* in *client browsers*. That issue is not relevant.

That said, if you only want to log every Xth call, you can.

See further comments below.

--mark

[1] In particular, it had to do with sparse histograms, which are a hashtable, unlike the simple array-like maps of regular histograms like the one you're using here.

https://codereview.chromium.org/1034043002/diff/1/ppapi/proxy/plugin_dispatch...
File ppapi/proxy/plugin_dispatcher.cc (right):

https://codereview.chromium.org/1034043002/diff/1/ppapi/proxy/plugin_dispatch...
ppapi/proxy/plugin_dispatcher.cc:210: SCOPED_UMA_HISTOGRAM_TIMER("Plugin.PpapiSyncIPCTime");
On 2015/03/25 22:45:57, gab wrote:
> On 2015/03/25 22:28:14, bbudge wrote:
> > There are about a dozen usages of this in Chromium right now, and they all seem
> > like low-frequency things. Could this slow down the Pepper proxy?
>
> Mark will know better than I, but FWIU, this is a new wrapper for
> UMA_HISTOGRAM_TIMES (to avoid every user of that in a small scope having to do
> the local variable/base::Time::Now() dance explicitly).

Yes, it should be as fast as doing the local variable time dance explicitly.

> It probably shouldn't be put in a tight loop -- and even then I'm not sure, it's
> probably very minimal overhead

All the UMA histogram macros are designed to be extremely low overhead. They were written and optimized to use as few machine instructions as possible. I forget the exact number but I think it was O(10) machine instructions; unless you're putting this in a really tight loop, I doubt it can slow anything down meaningfully.

> -- but other than that I doubt it makes any
> difference on the path to a synchronous IPC (which I assume is at least an order
> of magnitude slower).
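A hedged sketch of the "log every Xth call" option Mark mentions; the helper name and the sampling factor are illustrative, and a std::atomic counter would be needed if calls could race across threads:

#include "base/metrics/histogram_macros.h"
#include "base/time/time.h"

// Illustrative subsampling: record only every 100th sample.
void MaybeRecordSyncIPCTime(base::TimeDelta elapsed) {
  static int call_count = 0;  // Fine on a single thread; use std::atomic otherwise.
  if (++call_count % 100 == 0)
    UMA_HISTOGRAM_TIMES("Plugin.PpapiSyncIPCTime", elapsed);
}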
LGTM

https://codereview.chromium.org/1034043002/diff/1/ppapi/proxy/plugin_dispatch...
File ppapi/proxy/plugin_dispatcher.cc (right):

https://codereview.chromium.org/1034043002/diff/1/ppapi/proxy/plugin_dispatch...
ppapi/proxy/plugin_dispatcher.cc:210: SCOPED_UMA_HISTOGRAM_TIMER("Plugin.PpapiSyncIPCTime");
On 2015/03/25 22:56:56, Mark P wrote:
> On 2015/03/25 22:45:57, gab wrote:
> > On 2015/03/25 22:28:14, bbudge wrote:
> > > There are about a dozen usages of this in Chromium right now, and they all seem
> > > like low-frequency things. Could this slow down the Pepper proxy?
> >
> > Mark will know better than I, but FWIU, this is a new wrapper for
> > UMA_HISTOGRAM_TIMES (to avoid every user of that in a small scope having to do
> > the local variable/base::Time::Now() dance explicitly).
>
> Yes, it should be as fast as doing the local variable time dance explicitly.
>
> > It probably shouldn't be put in a tight loop -- and even then I'm not sure, it's
> > probably very minimal overhead
>
> All the UMA histogram macros are designed to be extremely low overhead. They
> were written and optimized to use as few machine instructions as possible. I
> forget the exact number but I think it was O(10) machine instructions; unless
> you're putting this in a really tight loop, I doubt it can slow anything down
> meaningfully.
>
> > -- but other than that I doubt it makes any
> > difference on the path to a synchronous IPC (which I assume is at least an order
> > of magnitude slower).

It's hard to tell looking at the macro definition, which defines a class and instantiates it.
https://code.google.com/p/chromium/codesearch#chromium/src/base/metrics/histo...

Also, I don't have a good feel for the cost of base::TimeTicks on different platforms. But it does seem that the cost should be very small relative to a sync IPC.
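Roughly, the macro bbudge links expands to something like the following paraphrase (based on base/metrics/histogram_macros.h of that era, not the exact code). The class definition costs nothing at runtime; per call, the cost is one TimeTicks::Now() in the constructor plus one Now() and a histogram add in the destructor. The "static" part gab mentions below is the cached histogram lookup inside UMA_HISTOGRAM_TIMES, not the timer object itself:

#include "base/metrics/histogram_macros.h"
#include "base/time/time.h"

void ExampleScope() {
  // Approximately what SCOPED_UMA_HISTOGRAM_TIMER("Plugin.PpapiSyncIPCTime")
  // becomes: a local class whose destructor records the elapsed time.
  class ScopedHistogramTimer {
   public:
    ScopedHistogramTimer() : constructed_(base::TimeTicks::Now()) {}
    ~ScopedHistogramTimer() {
      UMA_HISTOGRAM_TIMES("Plugin.PpapiSyncIPCTime",
                          base::TimeTicks::Now() - constructed_);
    }
   private:
    const base::TimeTicks constructed_;
  };
  ScopedHistogramTimer scoped_histogram_timer;  // Records on scope exit.
  // ... body of the scope (the sync IPC) ...
}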
@Mark: my question above probably got lost in the noise:

Is the potential high frequency of this metric a problem for the log clusters?
(@piman any estimate on how frequently this gets called?)

https://codereview.chromium.org/1034043002/diff/1/ppapi/proxy/plugin_dispatch...
File ppapi/proxy/plugin_dispatcher.cc (right):

https://codereview.chromium.org/1034043002/diff/1/ppapi/proxy/plugin_dispatch...
ppapi/proxy/plugin_dispatcher.cc:210: SCOPED_UMA_HISTOGRAM_TIMER("Plugin.PpapiSyncIPCTime");
On 2015/03/25 23:24:19, bbudge wrote:
> On 2015/03/25 22:56:56, Mark P wrote:
> > On 2015/03/25 22:45:57, gab wrote:
> > > On 2015/03/25 22:28:14, bbudge wrote:
> > > > There are about a dozen usages of this in Chromium right now, and they all seem
> > > > like low-frequency things. Could this slow down the Pepper proxy?
> > >
> > > Mark will know better than I, but FWIU, this is a new wrapper for
> > > UMA_HISTOGRAM_TIMES (to avoid every user of that in a small scope having to do
> > > the local variable/base::Time::Now() dance explicitly).
> >
> > Yes, it should be as fast as doing the local variable time dance explicitly.
> >
> > > It probably shouldn't be put in a tight loop -- and even then I'm not sure, it's
> > > probably very minimal overhead
> >
> > All the UMA histogram macros are designed to be extremely low overhead. They
> > were written and optimized to use as few machine instructions as possible. I
> > forget the exact number but I think it was O(10) machine instructions; unless
> > you're putting this in a really tight loop, I doubt it can slow anything down
> > meaningfully.
> >
> > > -- but other than that I doubt it makes any
> > > difference on the path to a synchronous IPC (which I assume is at least an order
> > > of magnitude slower).
>
> It's hard to tell looking at the macro definition, which defines a class and
> instantiates it.

But it's a static instantiation so it only happens once.

> https://code.google.com/p/chromium/codesearch#chromium/src/base/metrics/histo...
>
> Also, I don't have a good feel for the cost of base::TimeTicks on different
> platforms. But it does seem that the cost should be very small relative to a
> sync IPC.

https://codereview.chromium.org/1034043002/diff/1/tools/metrics/histograms/hi...
File tools/metrics/histograms/histograms.xml (right):

https://codereview.chromium.org/1034043002/diff/1/tools/metrics/histograms/hi...
tools/metrics/histograms/histograms.xml:27400: + Logs the time it took to complete each synchronous IPC made from the PPAPI
On 2015/03/25 22:09:33, Mark P wrote:
> nit: omit "Logs"
> nit: each -> a

Done.
On 2015/03/26 00:07:58, gab wrote:
> @Mark: my question above probably got lost in the noise:
>
> Is the potential high frequency of this metric a problem for the log clusters?
> (@piman any estimate on how frequently this gets called?)

Oops, never mind, I'm the one who missed your reply...!
On 2015/03/25 22:56:56, Mark P wrote:
> On Wed, Mar 25, 2015 at 3:48 PM, <gab@chromium.org> wrote:
> > Actually, @Mark: I remember having a stat that was logged every 10s and it made
> > its way to the top data users on the log clusters; this stat as-is would be
> > **much** worse.
> >
> > Should we only log every 100th or 1000th call?
>
> The stat you're thinking of had to do with *data space* on *logs clusters* [1].
> What we're talking about on this code review has to do with *CPU time* in
> *client browsers*. That issue is not relevant.

I was actually asking about *data space* on *logs clusters*, i.e. we logged the pref file size on every write (every >=10s) and this was a problem. This stat would potentially trigger a lot more than that.

> That said, if you only want to log every Xth call, you can.
>
> See further comments below.
>
> --mark
>
> [1] In particular, it had to do with sparse histograms, which are a hashtable,
> unlike the simple array-like maps of regular histograms like the one you're using here.
On Wed, Mar 25, 2015 at 5:07 PM, <gab@chromium.org> wrote:
> @Mark: my question above probably got lost in the noise:
>
> Is the potential high frequency of this metric a problem for the log clusters?

No.

(In case you're curious why: it's because histograms are stored using summary statistics, i.e., counts in each bucket. It doesn't matter how many counts are in each bucket; the magnitude of counts doesn't cause additional space. The only problems come from special-purpose histograms that have a potentially huge number of buckets.)

--mark

> (@piman any estimate on how frequently this gets called?)
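A conceptual model of why sample volume doesn't grow the payload (a sketch only, not Chromium's actual histogram code): a regular histogram is a fixed array of bucket counters, so recording is an increment and the uploaded size is the fixed bucket array; a sparse histogram instead keeps a map keyed by sample value, which is why it can grow, as in Mark's footnote [1] above.

#include <array>
#include <cstdint>
#include <map>

// Regular UMA histogram, conceptually: fixed bucket counters.
// Upload size is constant regardless of how many samples are recorded.
struct RegularHistogramModel {
  std::array<uint32_t, 50> bucket_counts{};
  void Add(int bucket_index) { ++bucket_counts[bucket_index]; }
};

// Sparse histogram, conceptually: one entry per distinct sample value,
// so many distinct values mean a bigger payload.
struct SparseHistogramModel {
  std::map<int32_t, uint32_t> counts;
  void Add(int32_t sample) { ++counts[sample]; }
};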
The CQ bit was checked by gab@chromium.org
The patchset sent to the CQ was uploaded after l-g-t-m from mpearson@chromium.org, piman@chromium.org

Link to the patchset: https://codereview.chromium.org/1034043002/#ps20001 (title: "update histogram description")
CQ is trying the patch.

Follow status at https://chromium-cq-status.appspot.com/patch-status/1034043002/20001
Message was sent while issue was closed.
Committed patchset #2 (id:20001)
Message was sent while issue was closed.
Patchset 2 (id:??) landed as https://crrev.com/751416e1cab41302b8fecf1512764e266314cf8d
Cr-Commit-Position: refs/heads/master@{#322408}
Message was sent while issue was closed.
Patchset #3 (id:40001) has been deleted