Created: 5 years, 11 months ago by xdai1
Modified: 5 years, 10 months ago
CC: chromium-reviews, erikwright+watch_chromium.org, stevenjb+watch_chromium.org, oshima+watch_chromium.org
Base URL: https://chromium.googlesource.com/chromium/src.git@master
Target Ref: refs/pending/heads/master
Project: chromium
Visibility: Public.
Description: Add UMA stats to record the memory pressure level periodically (once per second) in ChromeOS. This is used to show the relative frequency of the system being in a critical vs. relaxed memory pressure state.
BUG=442829
Committed: https://crrev.com/c919cea5dbf733fe6e817df779196968ed04c737
Cr-Commit-Position: refs/heads/master@{#314702}
Patch Set 1 : #
Patch Set 2 : #
Total comments: 1
Patch Set 3 : Address the code review comment. #
Total comments: 12
Patch Set 4 : Address the review comments. #
Total comments: 3
Patch Set 5 : Address the code review comments. #
Total comments: 1
Patch Set 6 : Modify MemoryPressureListener::MemoryPressureLevel #
Patch Set 7 : Add MemoryPressureLevelUMA #
Patch Set 8 : Just add a variable kNumMemoryPressureLevels #
Total comments: 4
Patch Set 9 : Address the review comments. #
Total comments: 3
Patch Set 10 : Address Ilya's code review comments. #
Messages
Total messages: 32 (10 generated)
Patchset #1 (id:1) has been deleted
xdai@chromium.org changed reviewers: + skuhne@chromium.org
Hi Stefan, can you take a look at the CL? Thanks! I'm not quite sure about the meaning of ([how long not in crunch mode] / [how long in crunch mode]). Since the system may go back and forth between critical and non-critical mode, we can end up with many separate time intervals for the two modes. So am I supposed to compute piecewise ratios, or just one total ratio like I did in this CL? Thanks!
xdai, sorry for the delay, but with the interviews going on it is tricky to allocate time for review. But from now on it should get faster again... See comment! https://codereview.chromium.org/874483008/diff/40001/base/chromeos/memory_pre... File base/chromeos/memory_pressure_observer_chromeos.cc (right): https://codereview.chromium.org/874483008/diff/40001/base/chromeos/memory_pre... base/chromeos/memory_pressure_observer_chromeos.cc:135: MemoryPressureListener::MEMORY_PRESSURE_LEVEL_CRITICAL) Instead of adding the ms value here, you should remember the time stamp at event time and subtract the stamps to get the real time delta. Reason: the time between events is "roughly" what we are asking for, but in the end the task scheduler and many other factors decide when/how this is called, and the error is cumulative.
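Stefan's point about cumulative timer error can be sketched outside of Chromium like this (a minimal illustration using std::chrono in place of base::TimeTicks; the class and method names are hypothetical):

```cpp
#include <cassert>
#include <chrono>

// Sketch: store the time stamp when the critical state is entered, and
// subtract time stamps when it is left. This yields the real elapsed
// time no matter when the periodic timer actually fired, whereas adding
// a fixed 1000 ms per tick lets scheduler jitter accumulate as error.
using Clock = std::chrono::steady_clock;

class CriticalStateTimer {
 public:
  void OnEnterCritical() { critical_start_ = Clock::now(); }

  // One subtraction of real time stamps: no cumulative error.
  std::chrono::milliseconds OnLeaveCritical() const {
    return std::chrono::duration_cast<std::chrono::milliseconds>(
        Clock::now() - critical_start_);
  }

 private:
  Clock::time_point critical_start_;
};
```

A steady (monotonic) clock is the right choice here, since wall-clock time can jump backwards.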
Patchset #3 (id:60001) has been deleted
Patchset #3 (id:80001) has been deleted
Hi Stefan, please take another look at the CL. I changed the UMA stat from [how long the system stayed in non-critical mode] / [how long the system stayed in critical mode] to the percentage of time the system stayed in non-critical mode: [100 * how long the system stayed in non-critical mode] / [how long the system stayed in critical and non-critical mode]. I think the advantage of using a percentage is that the range is fixed from 0 to 100; also, the larger the percentage, the better the pressure handling scheme.
Nearly there - only a few nits! Many thanks for doing this! https://codereview.chromium.org/874483008/diff/100001/base/chromeos/memory_pr... File base/chromeos/memory_pressure_observer_chromeos.cc (right): https://codereview.chromium.org/874483008/diff/100001/base/chromeos/memory_pr... base/chromeos/memory_pressure_observer_chromeos.cc:93: ((non_critical_delta_ + critical_delta_).InMilliseconds()); For line 90 - 93 I would create a function which can be called from here and from line 135. It would return then an int value which is the percentage. I would then create a static function which writes the UMA stat and which gets an int. Pass either the retrieved number or 100. (this reduces the number of this macro expansion and the danger of typos). https://codereview.chromium.org/874483008/diff/100001/base/chromeos/memory_pr... base/chromeos/memory_pressure_observer_chromeos.cc:131: // Record quality of service of memory pressure handling scheme in ChromeOS. Maybe: // Record the quality of service for memory pressure handling. ? https://codereview.chromium.org/874483008/diff/100001/base/chromeos/memory_pr... base/chromeos/memory_pressure_observer_chromeos.cc:135: critical_delta_ = TimeTicks::Now() - critical_start_time_; Note: As stated in the header file we do not need the delta variables inside the data structure. Suggestion: You call Now() twice. It is better to call it only once and use that value in both locations since time can pass between the two calls, which would then introduce a gap / error. https://codereview.chromium.org/874483008/diff/100001/base/chromeos/memory_pr... base/chromeos/memory_pressure_observer_chromeos.cc:137: What about: When leaving the critical memory pressure, we compute the fraction of time spent in the non critical vs. the critical state. A larger value indicates a better management scheme. ? https://codereview.chromium.org/874483008/diff/100001/base/chromeos/memory_pr... 
File base/chromeos/memory_pressure_observer_chromeos.h (right): https://codereview.chromium.org/874483008/diff/100001/base/chromeos/memory_pr... base/chromeos/memory_pressure_observer_chromeos.h:105: base::TimeDelta non_critical_delta_; We do not need this value. It can be derived from the existing non_critical_start_time and critical_start_time at the time when we write the UMA stats. https://codereview.chromium.org/874483008/diff/100001/base/chromeos/memory_pr... base/chromeos/memory_pressure_observer_chromeos.h:106: base::TimeDelta critical_delta_; On a second thought - we do not need to store either of these.
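The refactoring Stefan suggests above could look roughly like this (a sketch with hypothetical names; the real patch uses base::TimeTicks and the UMA histogram macros, which are shown here only as a comment):

```cpp
#include <cassert>

// Hypothetical helper: compute the percentage in exactly one place,
// callable both from the periodic check and from the shutdown path.
int NonCriticalPercent(long long non_critical_ms, long long critical_ms) {
  long long total = non_critical_ms + critical_ms;
  if (total == 0)
    return 100;  // No observed time yet: treat as fully non-critical.
  return static_cast<int>(100 * non_critical_ms / total);
}

// Hypothetical static helper: the histogram macro would be expanded
// only here, reducing repeated macro expansions and the risk of typos.
// static void RecordNonCriticalPercent(int percent) {
//   UMA_HISTOGRAM_PERCENTAGE("Memory.NonCriticalPercent", percent);
// }
```

The histogram name in the comment is illustrative, not the one from the patch.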
Oh - one more: fixing it to 0..100 is probably not good. I would estimate that the result would be in the 99 region for the most part, so an improvement would be impossible to see. As such we might want to multiply by e.g. 100000.
Patchset #4 (id:120001) has been deleted
Hi Stefan, Thanks for the review! Please take another look at the CL, thanks! https://codereview.chromium.org/874483008/diff/100001/base/chromeos/memory_pr... File base/chromeos/memory_pressure_observer_chromeos.cc (right): https://codereview.chromium.org/874483008/diff/100001/base/chromeos/memory_pr... base/chromeos/memory_pressure_observer_chromeos.cc:93: ((non_critical_delta_ + critical_delta_).InMilliseconds()); On 2015/01/29 08:17:53, Mr4D (OOO - Slow response) wrote: > For line 90 - 93 I would create a function which can be called from here and > from line 135. It would return then an int value which is the percentage. > > I would then create a static function which writes the UMA stat and which gets > an int. Pass either the retrieved number or 100. (this reduces the number of > this macro expansion and the danger of typos). Why should the function that writes the UMA stat be static? https://codereview.chromium.org/874483008/diff/100001/base/chromeos/memory_pr... base/chromeos/memory_pressure_observer_chromeos.cc:131: // Record quality of service of memory pressure handling scheme in ChromeOS. On 2015/01/29 08:17:53, Mr4D (OOO - Slow response) wrote: > Maybe: > > // Record the quality of service for memory pressure handling. > > ? Done. https://codereview.chromium.org/874483008/diff/100001/base/chromeos/memory_pr... base/chromeos/memory_pressure_observer_chromeos.cc:135: critical_delta_ = TimeTicks::Now() - critical_start_time_; On 2015/01/29 08:17:53, Mr4D (OOO - Slow response) wrote: > Note: As stated in the header file we do not need the delta variables inside the > data structure. > > Suggestion: You call Now() twice. It is better to call it only once and use that > value in both locations since time can pass between the two calls, which would > then introduce a gap / error. Done. https://codereview.chromium.org/874483008/diff/100001/base/chromeos/memory_pr... 
base/chromeos/memory_pressure_observer_chromeos.cc:137: On 2015/01/29 08:17:53, Mr4D (OOO - Slow response) wrote: > What about: > > When leaving the critical memory pressure, we compute the fraction of time spent > in the non critical vs. the critical state. A larger value indicates a better > management scheme. > > ? Done. https://codereview.chromium.org/874483008/diff/100001/base/chromeos/memory_pr... File base/chromeos/memory_pressure_observer_chromeos.h (right): https://codereview.chromium.org/874483008/diff/100001/base/chromeos/memory_pr... base/chromeos/memory_pressure_observer_chromeos.h:105: base::TimeDelta non_critical_delta_; On 2015/01/29 08:17:53, Mr4D (OOO - Slow response) wrote: > We do not need this value. It can be derived from the existing > non_critical_start_time and critical_start_time at the time when we write the > UMA stats. You're right. Deleted these two values. https://codereview.chromium.org/874483008/diff/100001/base/chromeos/memory_pr... base/chromeos/memory_pressure_observer_chromeos.h:106: base::TimeDelta critical_delta_; On 2015/01/29 08:17:53, Mr4D (OOO - Slow response) wrote: > On a second thought - we do not need to store either of these. Done.
lgtm after a few nits. Thanks! https://codereview.chromium.org/874483008/diff/140001/base/chromeos/memory_pr... File base/chromeos/memory_pressure_observer_chromeos.cc (right): https://codereview.chromium.org/874483008/diff/140001/base/chromeos/memory_pr... base/chromeos/memory_pressure_observer_chromeos.cc:59: // fraction with 1000000 to magnify the result for better comparison between Maybe: Computes the fraction of time spent in the non-critical state vs. the critical + non-critical state, scaled by kNonCriticalTimeFractionMagnification. https://codereview.chromium.org/874483008/diff/140001/base/chromeos/memory_pr... base/chromeos/memory_pressure_observer_chromeos.cc:103: // Record the percentage for the last time. What about: // Before shutting down, send the quality of service statistics. https://codereview.chromium.org/874483008/diff/140001/tools/metrics/histogram... File tools/metrics/histograms/histograms.xml (right): https://codereview.chromium.org/874483008/diff/140001/tools/metrics/histogram... tools/metrics/histograms/histograms.xml:2707: + The fraction that the time system stays in non-critical memory pressure What about: A fraction of time spent in non-critical vs. non-critical + critical time, multiplied by 1000%. This can be used to compare different memory pressure handling schemes against each other, whereas a higher number is better.
xdai@chromium.org changed reviewers: + isherman@chromium.org
isherman@, please help review the tools/metrics/histograms/histograms.xml file, thanks!
https://codereview.chromium.org/874483008/diff/160001/tools/metrics/histogram... File tools/metrics/histograms/histograms.xml (right): https://codereview.chromium.org/874483008/diff/160001/tools/metrics/histogram... tools/metrics/histograms/histograms.xml:2710: + number is better. Hmm, from looking at this description, I only partially understand this metric. I think it might help in part to write the ratio as a ratio, with parentheses enclosing the numerator and denominator, like so: (non-critical) / (non-critical + critical). Then, you say "10000%" -- which is equal to 100 -- but it looks like the code multiplies by 1 million. I'd recommend writing this without a percent, as that just makes the value confusing. I'd also mention *why* you're scaling the number -- I assume it's because UMA can only record integers, so this is how you're representing the fractional values that you actually care about. Now, all that said, I see that you're using an exponentially-spaced histogram, i.e. a histogram that has higher resolution for smaller values, and relatively low resolution for larger values. Is that appropriate for the metric that you're computing? Also, multiplying by 1 million suggests that you believe that you have at least 6 significant figures in the measurement. Is that true? That's pretty surprising to me -- I would have expected something closer to two significant digits. In addition, it's not clear to me how to interpret the data in this histogram -- probably because I don't work on memory pressure. It would be helpful for the reader to know whether higher or lower non-critical ratios are better; and ideally a little bit of insight as to why this ratio, in particular, is a useful one to measure. Finally, please document when this metric is recorded. I'm guessing it's either periodic -- say once every minute -- or triggered by some event -- maybe process termination? But I'm really not sure.
The goal is to see "quality of service" improvements. The idea being that we compare time spent in "relaxed memory pressure" vs. "critical memory pressure". The bigger the number, the better the service. So ideally we should see things like: 65 minutes in non-crunch mode vs. 10 seconds in crunch mode, which would then come out to 996168 - and that is better than having 20 seconds of crunch time every 10 minutes -> 967741. But as you can see, the number itself is likely to always be close to the million with some change in the last digits. I was just thinking: maybe it is better to subtract the result from 1 million and then get smaller numbers? Would that be better for the UMA resolution? To answer your last question: it will be recorded when we change from critical to non-critical memory pressure and upon system shutdown.
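The scaled-fraction arithmetic being discussed can be sketched as follows (the helper name is hypothetical; the 1,000,000 multiplier is the magnification from the patch, and integer division truncates):

```cpp
#include <cassert>

// Scaled fraction of time spent in the relaxed (non-critical) state.
// For example, 10 minutes relaxed with 20 seconds critical gives
// 1000000 * 600 / 620 = 967741.
long long ScaledNonCriticalFraction(long long non_critical_s,
                                    long long critical_s) {
  return 1000000LL * non_critical_s / (non_critical_s + critical_s);
}
```

As the discussion notes, such values cluster just below 1,000,000, which is why the bucket layout matters.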
On 2015/02/03 00:05:41, Mr4D (OOO - Slow response) wrote: > The goal is to see "quality of service" improvements. The idea being that we > compare time spent in "relaxed memory pressure" vs. "critical memory pressure". > The bigger the number the better the service. > > So ideally we should see things like: 65 minutes in non crunch mode vs. 10 > seconds in crunch mode which would then come out to 996168 - and that is better > than having every e.g. 10 minutes 20 seconds crunch time -> 967741. > > But as you can see, the number itself is likely to be always close to the > million with some change in the last digits. I was just thinking: Maybe it is > better to subtract the result from 1 million and get then to smaller numbers? > Would that be better for the UMA resolution? I see. In that case, I do agree that measuring the ratio of (critical)/(total) is likely to be more compatible with the exponentially sized histogram buckets than measuring (non-critical)/(total). I'd also recommend tightening the histogram range to the range that you actually expect to see samples in. If most of the histogram buckets are empty 99.999999% of the time, you probably don't need those buckets. You can either reduce the number of buckets (and save a bit of memory), or you can instead increase the granularity for the data that *is* typically recorded. By the way, I found the phrasing "critical memory pressure" vs. "relaxed memory pressure" much easier to understand than "critical time" vs. "non-critical time". I'd recommend using that phrasing in the histogram description. > To answer your last question: It will be recorded when we change from critical > to non critical memory pressure and upon system shutdown. Thanks. It's important to document this in the histogram description, as the choice of when to record a metric introduces a source of bias, and it's important for viewers of the histogram to be able to reason about that bias. 
Your explanation also raises further questions for me: (1) Over what time period is the ratio measured? If the state of the system goes from "non-critical" to "critical" to "non-critical" to "critical" to "non-critical" to shutdown, will the final value emitted to the histogram measure the memory use since the system was started, or only since the last non-critical state? (2) Is there a minimum threshold for the duration of the period that is measured for this metric? It's useful to think about what the metric will look like if the system rapidly alternates between critical and non-critical, vs. slowly alternating. Is it useful to have more records emitted from a system that is rapidly alternating? Would rapid alternation introduce a bunch of noise into the metric? Maybe really rapid alternation is not a realistic scenario, but I'd be surprised if there weren't different rates of alternation for different workloads. It sounds to me like you might be biasing toward recording more samples from workloads that switch more frequently between critical and relaxed memory pressure; and that at the same time samples recorded from these workloads might be noisier (due to measuring shorter durations). You might want to think about whether this is a worrisome source of bias; and if it is, consider redesigning the metric to try to gather a more even sampling.
On 2015/02/03 00:35:37, Ilya Sherman wrote: > On 2015/02/03 00:05:41, Mr4D (OOO - Slow response) wrote: > > The goal is to see "quality of service" improvements. The idea being that we > > compare time spent in "relaxed memory pressure" vs. "critical memory > pressure". > > The bigger the number the better the service. > > > > So ideally we should see things like: 65 minutes in non crunch mode vs. 10 > > seconds in crunch mode which would then come out to 996168 - and that is > better > > than having every e.g. 10 minutes 20 seconds crunch time -> 967741. > > > > But as you can see, the number itself is likely to be always close to the > > million with some change in the last digits. I was just thinking: Maybe it is > > better to subtract the result from 1 million and get then to smaller numbers? > > Would that be better for the UMA resolution? > > I see. In that case, I do agree that measuring the ratio of (critical)/(total) > is likely to be more compatible with the exponentially sized histogram buckets > than measuring (non-critical)/(total). I'd also recommend tightening the > histogram range to the range that you actually expect to see samples in. If > most of the histogram buckets are empty 99.999999% of the time, you probably > don't need those buckets. You can either reduce the number of buckets (and save > a bit of memory), or you can instead increase the granularity for the data that > *is* typically recorded. > > By the way, I found the phrasing "critical memory pressure" vs. "relaxed memory > pressure" much easier to understand than "critical time" vs. "non-critical > time". I'd recommend using that phrasing in the histogram description. > > > To answer your last question: It will be recorded when we change from critical > > to non critical memory pressure and upon system shutdown. > > Thanks. 
It's important to document this in the histogram description, as the > choice of when to record a metric introduces a source of bias, and it's > important for viewers of the histogram to be able to reason about that bias. > > Your explanation also raises further questions for me: > (1) Over what time period is the ratio measured? If the state of the system > goes from "non-critical" to "critical" to "non-critical" to "critical" to > "non-critical" to shutdown, will the final value emitted to the histogram > measure the memory use since the system was started, or only since the last > non-critical state? Yes, I am afraid of that as well. That is the problem of a "fraction" vs. an absolute value. However, I hoped that this would not become a problem. One thing I did not mention yet, however, is that the measurement is in regular (1 second) intervals. So one possibility would be to do the accounting once a second with each call, but then write the UMA stats only every e.g. 600 sampling calls (10 minutes). That might be less noisy, but would possibly lead to lots of 100% = "no problem" segments. Hmmm.. In that case, however, we might get away with leaving values with up to 100(%) again since we could count the 100% counters for the most part. Interesting thought! > (2) Is there a minimum threshold for the duration of the period that is > measured for this metric? > Yes. It is sampling (and deciding for an event) every second. > It's useful to think about what the metric will look like if the system rapidly > alternates between critical and non-critical, vs. slowly alternating. Is it > useful to have more records emitted from a system that is rapidly alternating? > Would rapid alternation introduce a bunch of noise into the metric? Maybe > really rapid alternation is not a realistic scenario, but I'd be surprised if > there weren't different rates of alternation for different workloads. 
It sounds > to me like you might be biasing toward recording more samples from workloads > that switch more frequently between critical and relaxed memory pressure; and > that at the same time samples recorded from these workloads might be noisier > (due to measuring shorter durations). You might want to think about whether > this is a worrisome source of bias; and if it is, consider redesigning the > metric to try to gather a more even sampling.
On 2015/02/03 01:17:59, Mr4D (OOO - Slow response) wrote: > On 2015/02/03 00:35:37, Ilya Sherman wrote: > > On 2015/02/03 00:05:41, Mr4D (OOO - Slow response) wrote: > > > The goal is to see "quality of service" improvements. The idea being that we > > > compare time spent in "relaxed memory pressure" vs. "critical memory > > pressure". > > > The bigger the number the better the service. > > > > > > So ideally we should see things like: 65 minutes in non crunch mode vs. 10 > > > seconds in crunch mode which would then come out to 996168 - and that is > > better > > > than having every e.g. 10 minutes 20 seconds crunch time -> 967741. > > > > > > But as you can see, the number itself is likely to be always close to the > > > million with some change in the last digits. I was just thinking: Maybe it > is > > > better to subtract the result from 1 million and get then to smaller > numbers? > > > Would that be better for the UMA resolution? > > > > I see. In that case, I do agree that measuring the ratio of > (critical)/(total) > > is likely to be more compatible with the exponentially sized histogram buckets > > than measuring (non-critical)/(total). I'd also recommend tightening the > > histogram range to the range that you actually expect to see samples in. If > > most of the histogram buckets are empty 99.999999% of the time, you probably > > don't need those buckets. You can either reduce the number of buckets (and > save > > a bit of memory), or you can instead increase the granularity for the data > that > > *is* typically recorded. > > > > By the way, I found the phrasing "critical memory pressure" vs. "relaxed > memory > > pressure" much easier to understand than "critical time" vs. "non-critical > > time". I'd recommend using that phrasing in the histogram description. > > > > > To answer your last question: It will be recorded when we change from > critical > > > to non critical memory pressure and upon system shutdown. > > > > Thanks. 
It's important to document this in the histogram description, as the > > choice of when to record a metric introduces a source of bias, and it's > > important for viewers of the histogram to be able to reason about that bias. > > > > Your explanation also raises further questions for me: > > (1) Over what time period is the ratio measured? If the state of the system > > goes from "non-critical" to "critical" to "non-critical" to "critical" to > > "non-critical" to shutdown, will the final value emitted to the histogram > > measure the memory use since the system was started, or only since the last > > non-critical state? > > Yes, I am afraid of that as well. That is the problem of a "fraction" vs. an > absolute value. However I hoped that this would not get a problem. > > One thing I did not mention yet however is that the measurement is in regular (1 > second) intervals. > So one possibility would be to do the accounting once a second with each call, > but write the UMA stats then every e.g. 600 sampling calls (10 minutes). > That might be less noisy, but would possibly lead into lots of 100% = "no > problem" segments. > Hmmm.. In that case however we might get away with leaving values with up to > 100(%) again since we could count the 100% counters for the most part. > Interesting thought! You might have already considered this, but you could just emit to the histogram once per sample, i.e. once per second. That is, you could have a histogram with two buckets: "critical memory pressure" and "relaxed memory pressure". Once per second, you could emit the current state of the system. In aggregate, that would then show the relative frequency of the system being in a critical vs. relaxed state.
> You might have already considered this, but you could just emit to the histogram > once per sample, i.e. once per second. That is, you could have a histogram with > two buckets: "critical memory pressure" and "relaxed memory pressure". Once per > second, you could emit the current state of the system. In aggregate, that > would then show the relative frequency of the system being in a critical vs. > relaxed state. I thought that this might be too costly to do, but as discussed offline this should work. There are some other considerations though: 1. Under load the timer will not run every second, but rather whenever it (roughly) gets called, whereas in non-critical times it will really get called every second. So in total the critical "pings" will be comparably fewer than the others. 2. It might have been of interest to get a feeling for how long it takes to come back from a bad state vs. how long we stay in the good state. As pointed out - this value would be useful, but "stutters" could make these numbers worthless. So I think that we should do it by counting each second. It would also resolve the question of sampling bucket granularity, and it makes the numbers super easy to compare.
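The per-second sampling approach agreed on here can be sketched as follows (the enum and struct are hypothetical stand-ins; the real code records into a UMA enumeration histogram rather than a local counter array):

```cpp
#include <cassert>

// Each one-second timer tick records the current pressure state into a
// per-state counter. In aggregate, the counts give the relative
// frequency of the system being under critical vs. relaxed pressure.
enum PressureState { RELAXED = 0, CRITICAL = 1, PRESSURE_STATE_COUNT };

struct PressureSampler {
  int counts[PRESSURE_STATE_COUNT] = {0};

  // Stand-in for the once-per-second UMA_HISTOGRAM_ENUMERATION call.
  void OnSecondTick(PressureState s) { ++counts[s]; }

  // Fraction of sampled seconds spent under critical pressure.
  double CriticalFrequency() const {
    int total = counts[RELAXED] + counts[CRITICAL];
    return total == 0 ? 0.0
                      : static_cast<double>(counts[CRITICAL]) / total;
  }
};
```

This sidesteps the fraction-scaling and bucket-granularity questions entirely: each sample is just one of a small number of discrete states.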
Patchset #7 (id:200001) has been deleted
Patchset #7 (id:220001) has been deleted
I think this looks good! A few nits and we are there. https://codereview.chromium.org/874483008/diff/250001/base/chromeos/memory_pr... File base/chromeos/memory_pressure_observer_chromeos.cc (right): https://codereview.chromium.org/874483008/diff/250001/base/chromeos/memory_pr... base/chromeos/memory_pressure_observer_chromeos.cc:34: // pressure level is introduced in MemoryPressureListener::MemoryPressureLevel. ... if more memory pressure levels are introduced ... Looking at the constants, there is a range of -1 .. 2. I think it would be a good idea to map the pressure values to a new enum and store that instead (e.g. something like UMAMemoryPressureLevel). The new enum could then also have something like a UMA_MEMORY_PRESSURE_MAX value denoting the upper limit, which is cleaner than this. https://codereview.chromium.org/874483008/diff/250001/base/chromeos/memory_pr... base/chromeos/memory_pressure_observer_chromeos.cc:119: RecordMemoryStatistics(current_memory_pressure_level_); In this case you can pull the content of the function into this function and add a switch statement here to convert the MemoryPressureLevel into a UMAMemoryPressureLevel. I would say that there is no need to create a function which gets called from only one place. https://codereview.chromium.org/874483008/diff/250001/tools/metrics/histogram... File tools/metrics/histograms/histograms.xml (right): https://codereview.chromium.org/874483008/diff/250001/tools/metrics/histogram... tools/metrics/histograms/histograms.xml:2708: + The current memory pressure level in ChromeOS, which is recorded I think you can get rid of the "current" here. https://codereview.chromium.org/874483008/diff/250001/tools/metrics/histogram... tools/metrics/histograms/histograms.xml:43908: + <int value="-1" label="Low memory pressure"/> Great that you added this enum! This would then start from 0, .. and this entry in particular would be "no memory pressure".
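The enum mapping Stefan proposes could look like this sketch (the source enum mirrors the -1 .. 2 range he mentions, but all names and the mapping function itself are hypothetical):

```cpp
#include <cassert>

// Hypothetical stand-in for MemoryPressureListener::MemoryPressureLevel,
// which at the time of this review spanned -1 .. 2.
enum MemoryPressureLevel {
  MEMORY_PRESSURE_LEVEL_NONE = -1,
  MEMORY_PRESSURE_LEVEL_LOW = 0,
  MEMORY_PRESSURE_LEVEL_MODERATE = 1,
  MEMORY_PRESSURE_LEVEL_CRITICAL = 2,
};

// Zero-based enum suitable for a UMA enumeration histogram, with an
// explicit MAX value denoting the histogram's upper limit.
enum UMAMemoryPressureLevel {
  UMA_PRESSURE_NONE = 0,
  UMA_PRESSURE_LOW,
  UMA_PRESSURE_MODERATE,
  UMA_PRESSURE_CRITICAL,
  UMA_PRESSURE_MAX,
};

UMAMemoryPressureLevel ToUMALevel(MemoryPressureLevel level) {
  switch (level) {
    case MEMORY_PRESSURE_LEVEL_NONE:     return UMA_PRESSURE_NONE;
    case MEMORY_PRESSURE_LEVEL_LOW:      return UMA_PRESSURE_LOW;
    case MEMORY_PRESSURE_LEVEL_MODERATE: return UMA_PRESSURE_MODERATE;
    case MEMORY_PRESSURE_LEVEL_CRITICAL: return UMA_PRESSURE_CRITICAL;
  }
  return UMA_PRESSURE_MAX;  // Unreachable for valid input.
}
```

Keeping the UMA enum zero-based and bounded by a MAX value is the usual shape for enumeration histograms, since bucket indices must be non-negative and the recording macro needs an exclusive upper bound.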
Please take another look at the CL, thanks!
lgtm
Thanks, this looks much better! LGTM % nits. https://codereview.chromium.org/874483008/diff/270001/base/chromeos/memory_pr... File base/chromeos/memory_pressure_observer_chromeos.h (right): https://codereview.chromium.org/874483008/diff/270001/base/chromeos/memory_pr... base/chromeos/memory_pressure_observer_chromeos.h:79: // The function periodically check the memory pressure changes and record nit: "check" -> "checks"; "record" -> "records" https://codereview.chromium.org/874483008/diff/270001/tools/metrics/histogram... File tools/metrics/histograms/histograms.xml (right): https://codereview.chromium.org/874483008/diff/270001/tools/metrics/histogram... tools/metrics/histograms/histograms.xml:43915: + <int value="0" label="No memory pressure"/> nit: Perhaps "Relaxed memory pressure" is a little clearer than "No memory pressure"? Up to you. https://codereview.chromium.org/874483008/diff/270001/tools/metrics/histogram... tools/metrics/histograms/histograms.xml:43917: + <int value="2" label="High Memory pressure"/> nit: "Memory" -> "memory" (lowercase "m")
New patchsets have been uploaded after l-g-t-m from skuhne@chromium.org,isherman@chromium.org
On 2015/02/04 23:13:17, Ilya Sherman wrote: > Thanks, this looks much better! LGTM % nits. > > https://codereview.chromium.org/874483008/diff/270001/base/chromeos/memory_pr... > File base/chromeos/memory_pressure_observer_chromeos.h (right): > > https://codereview.chromium.org/874483008/diff/270001/base/chromeos/memory_pr... > base/chromeos/memory_pressure_observer_chromeos.h:79: // The function > periodically check the memory pressure changes and record > nit: "check" -> "checks"; "record" -> "records" > > https://codereview.chromium.org/874483008/diff/270001/tools/metrics/histogram... > File tools/metrics/histograms/histograms.xml (right): > > https://codereview.chromium.org/874483008/diff/270001/tools/metrics/histogram... > tools/metrics/histograms/histograms.xml:43915: + <int value="0" label="No > memory pressure"/> > nit: Perhaps "Relaxed memory pressure" is a little clearer than "No memory > pressure"? Up to you. > > https://codereview.chromium.org/874483008/diff/270001/tools/metrics/histogram... > tools/metrics/histograms/histograms.xml:43917: + <int value="2" label="High > Memory pressure"/> > nit: "Memory" -> "memory" (lowercase "m") Thanks Ilya! Comments are addressed.
The CQ bit was checked by xdai@chromium.org
CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/patch-status/874483008/290001
Message was sent while issue was closed.
Committed patchset #10 (id:290001)
Message was sent while issue was closed.
Patchset 10 (id:??) landed as https://crrev.com/c919cea5dbf733fe6e817df779196968ed04c737 Cr-Commit-Position: refs/heads/master@{#314702}