Chromium Code Reviews

Issue 4092: Change to Hi Res timers on Windows.... (Closed)

Created:
12 years, 3 months ago by Mike Belshe
Modified:
9 years, 5 months ago
CC:
chromium-reviews_googlegroups.com
Visibility:
Public.

Description

Change to Hi Res timers on Windows.  There are two parts to this:

1) TimeTicks::Now()
   Don't call timeBeginPeriod() in all cases.  Use the new SystemMonitor class to watch battery on/off transitions and use timeBeginPeriod() only when we're using the battery.

2) TimeTicks::UnreliableHiResNow()
   Rename this function from UnreliableHiResNow() to HiResNow().  We still use QPC, but we detect whether we're on an AMD Athlon XP machine, where QPC is known to fail; on those systems we fall back to TimeTicks::Now().

Updated the tests to detect hardware specifics of the timers.  Output of the test will contain lines such as these:

[ RUN ] TimeTicks.SubMillisecondTimers
Min timer is: 1us
[ OK ] TimeTicks.SubMillisecondTimers
[ RUN ] TimeTicks.TimeGetTimeCaps
timeGetTime range is 1 to 1000000ms
[ OK ] TimeTicks.TimeGetTimeCaps
[ RUN ] TimeTicks.QueryPerformanceFrequency
QueryPerformanceFrequency is 2394.18MHz
[ OK ] TimeTicks.QueryPerformanceFrequency
[ RUN ] TimeTicks.TimerPerformance
Time::Now: 0.11us per call
TimeTicks::Now: 0.09us per call
TimeTicks::HighResNow: 0.26us per call
[ OK ] TimeTicks.TimerPerformance

Committed: http://src.chromium.org/viewvc/chrome?view=rev&revision=2625
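For readers skimming the change, a rough sketch of the HiResNow() shape described above (hypothetical names and structure only; the committed base/time_win.cc is the authoritative version):

  #include <stdint.h>
  #include <windows.h>

  class HighResClock {
   public:
    HighResClock() : ticks_per_microsecond_(0) {
      LARGE_INTEGER freq;
      // Leave ticks_per_microsecond_ at 0 if QPF fails or the CPU is one of
      // the Athlon XP parts with unreliable QPC.
      if (QueryPerformanceFrequency(&freq) && !IsBuggyAthlon())
        ticks_per_microsecond_ = static_cast<float>(freq.QuadPart) / 1000000.0f;
    }

    // Returns microseconds from QPC, or 0 to tell the caller to fall back
    // to the timeGetTime()-based TimeTicks::Now().
    int64_t NowMicroseconds() const {
      if (ticks_per_microsecond_ == 0)
        return 0;
      LARGE_INTEGER now;
      QueryPerformanceCounter(&now);
      return static_cast<int64_t>(now.QuadPart / ticks_per_microsecond_);
    }

   private:
    static bool IsBuggyAthlon();   // hypothetical: detects the bad AMD parts
    float ticks_per_microsecond_;  // 0 means QPC is unusable on this machine
  };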

Patch Set 1 #

Patch Set 2 : '' #

Unified diffs Side-by-side diffs Delta from patch set Stats (+248 lines, -51 lines) Patch
M base/time.h View 1 1 chunk +6 lines, -6 lines 0 comments Download
M base/time_posix.cc View 1 chunk +1 line, -1 line 0 comments Download
M base/time_unittest.cc View 1 1 chunk +28 lines, -11 lines 0 comments Download
M base/time_unittest_win.cc View 1 3 chunks +91 lines, -1 line 0 comments Download
M base/time_win.cc View 1 7 chunks +121 lines, -31 lines 0 comments Download
M base/trace_event.cc View 1 chunk +1 line, -1 line 0 comments Download

Messages

Total messages: 5 (0 generated)
Mike Belshe
12 years, 3 months ago (2008-09-25 20:46:55 UTC) #1
darin (slow to review)
LG overall. Here's some minor comments and a couple questions: http://codereview.chromium.org/4092/diff/1/8 File base/time.h (right): http://codereview.chromium.org/4092/diff/1/8#newcode358 ...
12 years, 3 months ago (2008-09-25 22:54:51 UTC) #2
Mike Belshe
http://codereview.chromium.org/4092/diff/1/3 File base/time_unittest.cc (right): http://codereview.chromium.org/4092/diff/1/3#newcode110 Line 110: EXPECT_GE(delta.InMicroseconds(), 9000); On 2008/09/25 22:54:51, Darin Fisher (Google) ...
12 years, 3 months ago (2008-09-25 23:56:30 UTC) #3
Dean McNamee
I didn't do a real review yet, will do it on Monday. In general I ...
12 years, 3 months ago (2008-09-26 08:22:39 UTC) #4
Mike Belshe
12 years, 2 months ago (2008-09-29 17:46:13 UTC) #5
On 2008/09/26 08:22:39, Dean McNamee wrote:
> I didn't do a real review yet, will do it on Monday.  In general I think the
> idea looks ok... I don't want to say I'm skeptical, I believe in the interrupt
> battery problem.  But on the other hand, it would be nice to have like, data
> and stuff?  How much difference is this really making?  How do we know there
> isn't another part of Chrome that is effectively voiding out these changes?
> Can we hook up a WattsUp to a laptop and actually measure a power difference?
> Can we time how long a battery lasts or something?

I don't have a good way to measure this, but if we trust the Intel report, here
are some thoughts.
http://software.intel.com/en-us/articles/cpu-power-utilization-on-intel-archi...

In these tests, they measured an 11% to 28% increase in CPU usage when moving
from a 10ms timer period to a 1ms timer period.

My laptop battery is rated for 10.8V / 7.2Ah.  If the CPU were the only drain on
the battery, Intel's numbers say it draws 1.1W with 10ms timers and 1.24W with
1ms timers.  That difference works out to roughly 8 hours of battery life.  Of
course, there are additional drains on the battery which are much larger....
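Spelling out the arithmetic behind that estimate (same numbers as above):

  10.8V * 7.2Ah ≈ 77.8Wh of capacity
  77.8Wh / 1.10W ≈ 70.7 hours  (10ms timers)
  77.8Wh / 1.24W ≈ 62.7 hours  (1ms timers)
  difference ≈ 8 hours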

> I mean, you know, I don't want to make things overly difficult, but on the
> other hand, we have no validation that this actually helps, or how big the
> problem was in the first place?

I think the Intel report is good?

> 
> http://codereview.chromium.org/4092/diff/1/4
> File base/time_unittest_win.cc (right):
> 
> http://codereview.chromium.org/4092/diff/1/4#newcode183
> Line 183: (stop - start).InMillisecondsF() * 1000 / kLoops);
> Do we really just want printfs in our unit tests?  Seems like a LOG would be
> more appropriate.

I thought the printfs were nice because you can run these tests on any machine
and see some characteristics of the machine.  I have seen other tests which use
stdio, but I don't know if we have a policy on this?

> 
> http://codereview.chromium.org/4092/diff/1/2
> File base/time_win.cc (right):
> 
> http://codereview.chromium.org/4092/diff/1/2#newcode311
> Line 311: base::CPU cpu;
> I realize not your job, but I think CPU should move into sys_info.

Yeah, maybe that would be a good idea.  I didn't think of this when I was doing
it...  I filed a bug (2956) on that.  I hope you agree it's not high priority?

 
> http://codereview.chromium.org/4092/diff/1/2#newcode357
> Line 357: return static_cast<int64>(now.QuadPart / ticks_per_microsecond_);
> On 2008/09/25 23:56:30, mbelshe wrote:
> > On 2008/09/25 22:54:51, Darin Fisher (Google) wrote:
> > > so i thought part of the "problem" with QPC was that ticks_per_microsecond_
> > > can actually change?  e.g., intel speed step.  is that not something to
> > > worry about?
> > 
> > I don't think so:
> > http://www.gamedev.net/community/forums/topic.asp?topic_id=453522
> > http://msdn.microsoft.com/en-us/library/ms644905(VS.85).aspx
> > 
> > "The frequency cannot change while the system is running"
>
> I think this also means that they adjust for the worst case.  So if you have
> speed step, take the slowest frequency?  I know on my laptop it was 3.5MHz,
> while Mike's desktop was like 1000 times that?

Yup.  BTW - you can run the tests on any machine and it will print the 
frequency for you :-) (I confirmed your 3.5MHz on my laptop too)
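(For reference, the frequency line in the test output comes from something along these lines; this is a self-contained sketch, not the exact test code:)

  #include <stdio.h>
  #include <windows.h>

  int main() {
    LARGE_INTEGER freq;
    QueryPerformanceFrequency(&freq);
    // Prints the QPC frequency in MHz, e.g. the 2394.18MHz in the output above.
    printf("QueryPerformanceFrequency is %.2fMHz\n", freq.QuadPart / 1000000.0);
    return 0;
  }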


> So while I don't think the result of this function call will ever change, I
> think there could still be a noticeable effect on the result of QPC during
> frequency scaling.

Yes; different machines will have vastly different rollovers and resolution.

> 
> http://codereview.chromium.org/4092/diff/1/2#newcode367
> Line 367: float ticks_per_microsecond_;  // 0 indicates QPF failed and we're
> broken.
> We talked about this before, and I suggested maybe a float.  I thin it would
be
> simpler / faster / more accurate to avoid floating point.  Is there a way we
can
> just keep the original int64 returned by QPF, instead of scaling it down to
> ticks_per_microsecond, leave it in 100 nanos and do the math so we aren't
> truncating like before?

I actually tested this; I really was trying to make QPC viable, but the effect
on performance due to the float math was completely dwarfed by the time to call
QPC itself.  So we might be able to come up with something here, but QPC will
still be very slow compared to timeGetTime().
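To make that concrete, a sketch of the integer-only scaling being suggested (a hypothetical helper for illustration, not part of this patch):

  #include <stdint.h>

  // Convert raw QPC ticks to microseconds using only 64-bit integer math.
  // Splitting the value into whole seconds plus leftover ticks avoids
  // overflowing int64_t when multiplying the tick count by 1,000,000.
  int64_t QPCTicksToMicroseconds(int64_t ticks, int64_t ticks_per_second) {
    int64_t whole_seconds = ticks / ticks_per_second;
    int64_t leftover_ticks = ticks % ticks_per_second;
    return whole_seconds * 1000000 +
           (leftover_ticks * 1000000) / ticks_per_second;
  }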

> 
> http://codereview.chromium.org/4092/diff/1/2#newcode368
> Line 368: int64 skew_;  // Skew between lo-res and hi-res clocks (for
> debugging).
> If it's for debugging, we should only have it when DEBUG?

Yes, you are right.
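Something along these lines would do it (sketch only):

  #ifndef NDEBUG
    int64 skew_;  // Skew between lo-res and hi-res clocks, kept only for debugging.
  #endif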
