Chromium Code Reviews

Unified diff: functional/perf.py

Issue 8824002: Update _OutputPerfGraphValue to handle Chrome. (Closed)
Base URL: svn://svn.chromium.org/chrome/trunk/src/chrome/test/
Patch Set: '' Created 9 years ago
 #!/usr/bin/env python
 # Copyright (c) 2011 The Chromium Authors. All rights reserved.
 # Use of this source code is governed by a BSD-style license that can be
 # found in the LICENSE file.

 """Basic pyauto performance tests.

 For tests that need to be run for multiple iterations (e.g., so that average
 and standard deviation values can be reported), the default number of iterations
 run for each of these tests is specified by |_DEFAULT_NUM_ITERATIONS|.
(...skipping 49 matching lines...)
   def setUp(self):
     """Performs necessary setup work before running each test."""
     self._num_iterations = self._DEFAULT_NUM_ITERATIONS
     if 'NUM_ITERATIONS' in os.environ:
       self._num_iterations = int(os.environ['NUM_ITERATIONS'])
     self._max_timeout_count = self._DEFAULT_MAX_TIMEOUT_COUNT
     if 'MAX_TIMEOUT_COUNT' in os.environ:
       self._max_timeout_count = int(os.environ['MAX_TIMEOUT_COUNT'])
     self._timeout_count = 0
     pyauto.PyUITest.setUp(self)
+    print ('CHROME VERSION: ' +
+           self.GetBrowserInfo()['properties']['ChromeVersion'])
dennis_jeffrey 2011/12/07 22:25:29 This is not actually needed right now, until we su
chrisphan 2011/12/07 22:34:13 Done.
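
The two environment-variable checks above share one pattern: a class default
that the environment may override. A minimal standalone sketch of that
pattern (the default value below is invented for illustration):

import os

# Fall back to a default unless the environment overrides it, mirroring
# the NUM_ITERATIONS / MAX_TIMEOUT_COUNT handling in setUp().
_DEFAULT_NUM_ITERATIONS = 50  # illustrative value only

num_iterations = _DEFAULT_NUM_ITERATIONS
if 'NUM_ITERATIONS' in os.environ:
  num_iterations = int(os.environ['NUM_ITERATIONS'])
print 'Running %d iterations' % num_iterations
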
   def _AppendTab(self, url):
     """Appends a tab and increments a counter if the automation call times out.

     Args:
       url: The string url to which the appended tab should be navigated.
     """
     if not self.AppendTab(pyauto.GURL(url)):
       self._timeout_count += 1

(...skipping 26 matching lines...)
106 """ 108 """
107 avg = 0.0 109 avg = 0.0
108 std_dev = 0.0 110 std_dev = 0.0
109 if values: 111 if values:
110 avg = float(sum(values)) / len(values) 112 avg = float(sum(values)) / len(values)
111 if len(values) > 1: 113 if len(values) > 1:
112 temp_vals = [math.pow(x - avg, 2) for x in values] 114 temp_vals = [math.pow(x - avg, 2) for x in values]
113 std_dev = math.sqrt(sum(temp_vals) / (len(temp_vals) - 1)) 115 std_dev = math.sqrt(sum(temp_vals) / (len(temp_vals) - 1))
114 return avg, std_dev 116 return avg, std_dev
115 117
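A standalone sketch of the computation in _AvgAndStdDev above; note the
n - 1 denominator, so this is the *sample* standard deviation:

import math

def avg_and_std_dev(values):
  # Mean and sample standard deviation, as in _AvgAndStdDev: (0.0, 0.0)
  # for empty input, and std dev 0.0 for a single measurement.
  avg = 0.0
  std_dev = 0.0
  if values:
    avg = float(sum(values)) / len(values)
    if len(values) > 1:
      squares = [math.pow(x - avg, 2) for x in values]
      std_dev = math.sqrt(sum(squares) / (len(squares) - 1))
  return avg, std_dev

print avg_and_std_dev([100.0, 110.0, 120.0])  # prints (110.0, 10.0)
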
-  def _OutputPerfGraphValue(self, description, value):
+  def _OutputPerfGraphValue(self, description, value, units,
+                            graph_name='Default-Graph'):
     """Outputs a performance value to have it graphed on the performance bots.

-    Only used for ChromeOS. The performance bots have a 30-character limit on
-    the length of the description for a performance value. Any characters
-    beyond that are truncated before results are stored in the autotest
+    The output format differs, depending on whether the current platform is
+    Chrome desktop or ChromeOS.
+
+
dennis_jeffrey 2011/12/07 22:25:29 remove one of these blank lines
chrisphan 2011/12/07 22:34:13 Done.
+    For ChromeOS, the performance bots have a 30-character limit on the length
+    of the key associated with a performance value. A key on ChromeOS is
+    considered to be of the form "units_description" (for example,
+    "milliseconds_NewTabPage"), and is created from the |units| and
+    |description| passed as input to this function. Any characters beyond the
+    length 30 limit are truncated before results are stored in the autotest
     database.

     Args:
-      description: A string description of the performance value. The string
-                   should be of the form "units_identifier" (for example,
-                   "milliseconds_NewTabPage"). This description will be
-                   truncated to 30 characters when stored in the autotest
-                   database.
+      description: A string description of the performance value. Should not
+                   include spaces.
       value: A numeric value representing a single performance measurement.
+      units: A string representing the units of the performance value. Should
+             not include spaces.
+      graph_name: A string name for the graph associated with this performance
+                  value. Only used on Chrome desktop.
+
     """
     if self.IsChromeOS():
-      if len(description) > 30:
+      perf_key = '%s_%s' % (units, description)
+      if len(perf_key) > 30:
         logging.warning('The description "%s" will be truncated to "%s" '
                         '(length 30) when added to the autotest database.',
-                        description, description[:30])
-      print '\n%s(\'%s\', %.2f)%s' % (self._PERF_OUTPUT_MARKER_PRE, description,
-                                      value, self._PERF_OUTPUT_MARKER_POST)
+                        perf_key, perf_key[:30])
+      print '\n%s(\'%s\', %.2f)%s' % (self._PERF_OUTPUT_MARKER_PRE,
+                                      perf_key, value,
+                                      self._PERF_OUTPUT_MARKER_POST)
       sys.stdout.flush()
+    else:
+      pyauto_utils.PrintPerfResult(graph_name, description, value, units)

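To make the new contract concrete: on ChromeOS the autotest key is built
from |units| and |description|, while on desktop Chrome the value goes to
pyauto_utils.PrintPerfResult under |graph_name|. A sketch of just the key
construction and length check; the real _PERF_OUTPUT_MARKER_PRE/POST
strings are defined in a part of perf.py not shown in this diff, so
placeholders stand in for them:

import logging

MARKER_PRE = '<PERF-PRE>'    # placeholder, not the real marker
MARKER_POST = '<PERF-POST>'  # placeholder, not the real marker

units, description, value = 'milliseconds', 'NewTabPage', 123.45
perf_key = '%s_%s' % (units, description)  # 'milliseconds_NewTabPage'
if len(perf_key) > 30:
  logging.warning('Key "%s" will be truncated to "%s" (length 30).',
                  perf_key, perf_key[:30])
print '\n%s(\'%s\', %.2f)%s' % (MARKER_PRE, perf_key, value, MARKER_POST)
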
-  def _PrintSummaryResults(self, description, values, units):
+  def _PrintSummaryResults(self, description, values, units,
+                             graph_name='Default-Graph'):
dennis_jeffrey 2011/12/07 22:25:29 indent this line by 1 fewer space
chrisphan 2011/12/07 22:34:13 Done.
142 """Logs summary measurement information. 159 """Logs summary measurement information.
143 160
144 This function computes and outputs the average and standard deviation of 161 This function computes and outputs the average and standard deviation of
145 the specified list of value measurements. It also invokes 162 the specified list of value measurements. It also invokes
146 _OutputPerfGraphValue() with the computed *average* value, to ensure the 163 _OutputPerfGraphValue() with the computed *average* value, to ensure the
147 average value can be plotted in a performance graph. 164 average value can be plotted in a performance graph.
148 165
149 Args: 166 Args:
150 description: A string description for the specified results. 167 description: A string description for the specified results.
151 values: A list of numeric value measurements. 168 values: A list of numeric value measurements.
152 units: A string specifying the units for the specified measurements. 169 units: A string specifying the units for the specified measurements.
dennis_jeffrey 2011/12/07 22:25:29 add a description for the new "graph_name" argument
chrisphan 2011/12/07 22:34:13 Done.
153 """ 170 """
154 logging.info('Results for: ' + description) 171 logging.info('Results for: ' + description)
155 if values: 172 if values:
156 avg, std_dev = self._AvgAndStdDev(values) 173 avg, std_dev = self._AvgAndStdDev(values)
157 logging.info('Number of iterations: %d', len(values)) 174 logging.info('Number of iterations: %d', len(values))
158 for val in values: 175 for val in values:
159 logging.info(' %.2f %s', val, units) 176 logging.info(' %.2f %s', val, units)
160 logging.info(' --------------------------') 177 logging.info(' --------------------------')
161 logging.info(' Average: %.2f %s', avg, units) 178 logging.info(' Average: %.2f %s', avg, units)
162 logging.info(' Std dev: %.2f %s', std_dev, units) 179 logging.info(' Std dev: %.2f %s', std_dev, units)
163 self._OutputPerfGraphValue('%s_%s' % (units, description), avg) 180 self._OutputPerfGraphValue(description, avg, units, graph_name)
dennis_jeffrey 2011/12/07 22:25:29 pass the last argument as a named argument: graph_
chrisphan 2011/12/07 22:34:13 Done.
     else:
       logging.info('No results to report.')

   def _RunNewTabTest(self, description, open_tab_command, num_tabs=1):
     """Runs a perf test that involves opening new tab(s).

     This helper function can be called from different tests to do perf testing
     with different types of tabs. It is assumed that the |open_tab_command|
     will open up a single tab.

(...skipping 13 matching lines...)
       # Only count the timing measurement if no automation call timed out.
       if self._timeout_count == orig_timeout_count:
         timings.append(elapsed_time)
     self.assertTrue(self._timeout_count <= self._max_timeout_count,
                     msg='Test exceeded automation timeout threshold.')
     self.assertEqual(1 + num_tabs, self.GetTabCount(),
                      msg='Did not open %d new tab(s).' % num_tabs)
     for _ in range(num_tabs):
       self.GetBrowserWindow(0).GetTab(1).Close(True)

-    self._PrintSummaryResults(description, timings, 'milliseconds')
+    self._PrintSummaryResults(description, timings, 'milliseconds',
+                              description)

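The loop elided above times each tab-opening call and, per the comment,
keeps a measurement only when no automation call timed out. A
self-contained sketch of that pattern (the action callable is invented;
the real code uses |open_tab_command| and self._timeout_count):

import time

def run_timed_iterations(action, num_iterations):
  # Time each call to |action|; keep the measurement only when the
  # action reports success.
  timings = []
  for _ in xrange(num_iterations):
    start = time.time()
    succeeded = action()
    elapsed_ms = (time.time() - start) * 1000.0
    if succeeded:
      timings.append(elapsed_ms)
  return timings

print run_timed_iterations(lambda: True, 3)  # three small timings, in ms
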
   def _LoginToGoogleAccount(self):
     """Logs in to a testing Google account."""
     creds = self.GetPrivateInfo()['test_google_account']
     test_utils.GoogleAccountsLogin(self, creds['username'], creds['password'])
     self.NavigateToURL('about:blank')  # Clear the existing tab.

   def _GetCPUUsage(self):
     """Returns machine's CPU usage.

(...skipping 112 matching lines...)

     timings = {}
     for _ in xrange(self._num_iterations):
       result_dict = _RunBenchmarkOnce(url)
       for key, val in result_dict.items():
         timings.setdefault(key, []).append(val)

     for key, val in timings.items():
       print
       if key == 'final_score':
-        self._PrintSummaryResults('V8Benchmark', val, 'score')
+        self._PrintSummaryResults('V8Benchmark', val, 'score',
+                                  'V8Benchmark-final')
       else:
-        self._PrintSummaryResults('V8Benchmark-%s' % key, val, 'score')
+        self._PrintSummaryResults('V8Benchmark-%s' % key, val, 'score',
+                                  'V8Benchmark-individual')

   def testSunSpider(self):
     """Runs the SunSpider javascript benchmark suite."""
     url = self.GetFileURLForDataPath('sunspider', 'sunspider-driver.html')
     self.assertTrue(self.AppendTab(pyauto.GURL(url)),
                     msg='Failed to append tab for SunSpider benchmark suite.')

     js_is_done = """
         var done = false;
         if (document.getElementById("console"))
           done = true;
         window.domAutomationController.send(JSON.stringify(done));
     """
     self.assertTrue(
         self.WaitUntil(
             lambda: self.ExecuteJavascript(js_is_done, tab_index=1) == 'true',
             timeout=300, retry_sleep=1),
         msg='Timed out when waiting for SunSpider benchmark score.')

     js_get_results = """
         window.domAutomationController.send(
             document.getElementById("console").innerHTML);
     """
     # Append '<br>' to the result to simplify regular expression matching.
     results = self.ExecuteJavascript(js_get_results, tab_index=1) + '<br>'
     total = re.search('Total:\s*([\d.]+)ms', results).group(1)
     logging.info('Total: %.2f ms' % float(total))
-    self._OutputPerfGraphValue('ms_SunSpider-total', float(total))
+    self._OutputPerfGraphValue('SunSpider', float(total), 'ms',
dennis_jeffrey 2011/12/07 22:25:29 please change the first argument to "SunSpider-tot
chrisphan 2011/12/07 22:34:13 Done.
+                               graph_name='SunSpider-total')

     for match_category in re.finditer('\s\s(\w+):\s*([\d.]+)ms.+?<br><br>',
                                       results):
       category_name = match_category.group(1)
       category_result = match_category.group(2)
       logging.info('Benchmark "%s": %.2f ms', category_name,
                    float(category_result))
-      self._OutputPerfGraphValue('ms_SunSpider-%s' % category_name,
-                                 float(category_result))
+      self._OutputPerfGraphValue('SunSpider-' + category_name,
+                                 float(category_result), 'ms',
+                                 graph_name='SunSpider-individual')
+
       for match_result in re.finditer('<br>\s\s\s\s([\w-]+):\s*([\d.]+)ms',
                                       match_category.group(0)):
         result_name = match_result.group(1)
         result_value = match_result.group(2)
         logging.info('  Result "%s-%s": %.2f ms', category_name, result_name,
                      float(result_value))
-        self._OutputPerfGraphValue(
-            'ms_SunSpider-%s-%s' % (category_name, result_name),
-            float(result_value))
+        self._OutputPerfGraphValue(
+            'SunSpider-%s-%s' % (category_name, result_name),
+            float(result_value), 'ms',
+            graph_name='SunSpider-individual')

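The two regular expressions above depend on the '<br>'-terminated console
format. A standalone run against a made-up results string in that format
shows how the category and per-result matches fall out:

import re

# Invented SunSpider console snippet, already suffixed with '<br>'.
results = ('Total: 523.4ms +/- 1.2%<br><br>'
           '  3d: 80.1ms +/- 2.0%<br>'
           '    cube: 30.0ms +/- 1.0%<br>'
           '    morph: 50.1ms +/- 3.0%<br><br>')

total = re.search('Total:\s*([\d.]+)ms', results).group(1)
print 'Total: %s ms' % total
for match_category in re.finditer('\s\s(\w+):\s*([\d.]+)ms.+?<br><br>',
                                  results):
  print 'Benchmark "%s": %s ms' % (match_category.group(1),
                                   match_category.group(2))
  for match_result in re.finditer('<br>\s\s\s\s([\w-]+):\s*([\d.]+)ms',
                                  match_category.group(0)):
    print '  Result "%s": %s ms' % (match_result.group(1),
                                    match_result.group(2))
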
   def testDromaeoSuite(self):
     """Measures results from Dromaeo benchmark suite."""
     url = self.GetFileURLForDataPath('dromaeo', 'index.html')
     self.assertTrue(self.AppendTab(pyauto.GURL(url + '?dromaeo')),
                     msg='Failed to append tab for Dromaeo benchmark suite.')

     js_is_ready = """
         var val = document.getElementById('pause').value;
         window.domAutomationController.send(val);
(...skipping 36 matching lines...)
           group_results['sub_groups'][sub_name] =
               $(this).text().match(/: ([\d.]+)/)[1]
         });
         result['all_results'][group_name] = group_results;
       });
       window.domAutomationController.send(JSON.stringify(result));
     """
     results = eval(self.ExecuteJavascript(js_get_results, tab_index=1))
     total_result = results['total_result']
     logging.info('Total result: ' + total_result)
-    self._OutputPerfGraphValue('runsPerSec_Dromaeo-total', float(total_result))
+    self._OutputPerfGraphValue(
+        'Dromaeo-total',
+        float(total_result), 'runsPerSec',
+        graph_name='Dromaeo-total')

     for group_name, group in results['all_results'].iteritems():
       logging.info('Benchmark "%s": %s', group_name, group['result'])
-      self._OutputPerfGraphValue(
-          'runsPerSec_Dromaeo-%s' % group_name.replace(' ', ''),
-          float(group['result']))
-
+      self._OutputPerfGraphValue(
+          'Dromaeo-' + group_name.replace(' ', ''),
+          float(group['result']), 'runsPerSec',
+          graph_name='Dromaeo-individual')
       for benchmark_name, benchmark_score in group['sub_groups'].iteritems():
         logging.info('  Result "%s": %s', benchmark_name, benchmark_score)

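js_get_results sends back one nested dictionary keyed by benchmark group.
A hypothetical example of that structure, with invented scores, and the
same iteration the test performs over it:

results = {
    'total_result': '150.3',
    'all_results': {
        'DOM Attributes': {
            'result': '300.5',
            'sub_groups': {'getAttribute': '600.1', 'setAttribute': '450.2'},
        },
    },
}

print 'Total result: ' + results['total_result']
for group_name, group in results['all_results'].iteritems():
  print 'Benchmark "%s": %s' % (group_name, group['result'])
  for benchmark_name, benchmark_score in group['sub_groups'].iteritems():
    print '  Result "%s": %s' % (benchmark_name, benchmark_score)
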
 class LiveWebappLoadTest(BasePerfTest):
   """Tests that involve performance measurements of live webapps.

   These tests connect to live webpages (e.g., Gmail, Calendar, Docs) and are
   therefore subject to network conditions. These tests are meant to generate
   "ball-park" numbers only (to see roughly how long things take to occur from a
(...skipping 129 matching lines...)
     time.sleep(60)
     total_video_frames = self._GetVideoFrames() - init_video_frames
     total_dropped_frames = self._GetVideoDroppedFrames() - init_dropped_frames
     cpu_usage_end = self._GetCPUUsage()
     fraction_non_idle_time = \
         self._GetFractionNonIdleCPUTime(cpu_usage_start, cpu_usage_end)
     # Counting extrapolation for utilization to play the video.
     extrapolation_value = fraction_non_idle_time * \
         (total_video_frames + total_dropped_frames) / total_video_frames
     logging.info('Netflix CPU extrapolation: %.2f' % extrapolation_value)
-    self._OutputPerfGraphValue('extrapolation_NetflixCPUExtrapolation',
-                               extrapolation_value)
+    self._OutputPerfGraphValue(
+        'NetflixCPUExtrapolation',
+        extrapolation_value, 'extrapolation',
+        graph_name='NetflixCPUExtrapolation')

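The extrapolation above scales the measured non-idle CPU fraction by the
ratio of frames the video should have shown (shown plus dropped) to frames
actually shown. A worked example with invented numbers; float() is added
here to sidestep Python 2 integer division:

fraction_non_idle_time = 0.30   # 30% of CPU time was non-idle
total_video_frames = 1800       # frames actually shown in 60 seconds
total_dropped_frames = 200      # frames dropped in the same period

# Estimated CPU fraction had every frame been shown: 0.30 * 2000 / 1800.
extrapolation_value = fraction_non_idle_time * \
    (total_video_frames + total_dropped_frames) / float(total_video_frames)
print 'CPU extrapolation: %.2f' % extrapolation_value  # 0.33
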

 class YoutubePerfTest(BasePerfTest, YoutubeTestHelper):
   """Test Youtube video performance."""

   def __init__(self, methodName='runTest', **kwargs):
     pyauto.PyUITest.__init__(self, methodName, **kwargs)
     YoutubeTestHelper.__init__(self, self)

   def _VerifyVideoTotalBytes(self):
(...skipping 72 matching lines...)
     total_dropped_frames = self.GetVideoDroppedFrames() - init_dropped_frames
     cpu_usage_end = self._GetCPUUsage()

     fraction_non_idle_time = self._GetFractionNonIdleCPUTime(
         cpu_usage_start, cpu_usage_end)
     total_frames = total_shown_frames + total_dropped_frames
     # Counting extrapolation for utilization to play the video.
     extrapolation_value = (fraction_non_idle_time *
                            (total_frames / total_shown_frames))
     logging.info('Youtube CPU extrapolation: %.2f' % extrapolation_value)
-    self._OutputPerfGraphValue('extrapolation_YoutubeCPUExtrapolation',
-                               extrapolation_value)
+    self._OutputPerfGraphValue(
+        'YoutubeCPUExtrapolation',
+        extrapolation_value, 'extrapolation',
+        graph_name='YoutubeCPUExtrapolation')


 class WebGLTest(BasePerfTest):
   """Tests for WebGL performance."""

   def _RunWebGLTest(self, url, description):
     """Measures FPS using a specified WebGL demo.

     Args:
       url: The string URL that, once loaded, will run the WebGL demo (default
(...skipping 426 matching lines...)
             window.domAutomationController.send(
                 JSON.stringify(final_average_fps));
     """
     self.assertTrue(
         self.WaitUntil(
             lambda: self.ExecuteJavascript(js, tab_index=1) != '-1',
             timeout=300, expect_retval=True, retry_sleep=1),
         msg='Timed out when waiting for test result.')
     result = float(self.ExecuteJavascript(js, tab_index=1))
     logging.info('Result for %s: %.2f FPS (average)', description, result)
-    self._OutputPerfGraphValue('%s_%s' % ('FPS', description), result)
+    self._OutputPerfGraphValue(
+        description,
+        result, 'FPS',
+        graph_name=description)

   def testFlashGaming(self):
     """Runs a simple flash gaming benchmark test."""
     webpage_url = self.GetHttpURLForDataPath('pyauto_private', 'flash',
                                              'FlashGamingTest2.html')
     self._RunFlashTestForAverageFPS(webpage_url, 'FlashGaming')

   def testFlashText(self):
     """Runs a simple flash text benchmark test."""
     webpage_url = self.GetHttpURLForDataPath('pyauto_private', 'flash',
(...skipping 27 matching lines...)
     result = eval(self.ExecuteJavascript(js_result, tab_index=1))
     for benchmark in result:
       mflops = float(result[benchmark][0])
       mem = float(result[benchmark][1])
       if benchmark.endswith('_mflops'):
         benchmark = benchmark[:benchmark.find('_mflops')]
       logging.info('Results for ScimarkGui_' + benchmark + ':')
       logging.info('  %.2f MFLOPS', mflops)
       logging.info('  %.2f MB', mem)
-      self._OutputPerfGraphValue(
-          '%s_ScimarkGui-%s-MFLOPS' % ('MFLOPS', benchmark), mflops)
-      self._OutputPerfGraphValue(
-          '%s_ScimarkGui-%s-Mem' % ('MB', benchmark), mem)
+      self._OutputPerfGraphValue(
+          'ScimarkGui-%s-MFLOPS' % benchmark,
+          mflops, 'MFLOPS',
+          graph_name='ScimarkGui')
+      self._OutputPerfGraphValue(
+          'ScimarkGui-%s-Mem' % benchmark,
+          mem, 'MB',
+          graph_name='ScimarkGui')


 class LiveGamePerfTest(BasePerfTest):
   """Tests to measure performance of live gaming webapps."""

   def ExtraChromeFlags(self):
     """Ensures Chrome is launched with custom flags.

     Returns:
       A list of extra flags to pass to Chrome when it is launched.
(...skipping 28 matching lines...)

     # Let the app run for 1 minute.
     time.sleep(60)

     cpu_usage_end = self._GetCPUUsage()
     fraction_non_idle_time = self._GetFractionNonIdleCPUTime(
         cpu_usage_start, cpu_usage_end)

     logging.info('Fraction of CPU time spent non-idle: %.2f' %
                  fraction_non_idle_time)
-    self._OutputPerfGraphValue('Fraction_%sCpuBusy' % description,
-                               fraction_non_idle_time)
+    self._OutputPerfGraphValue(
+        description + 'CpuBusy',
+        fraction_non_idle_time, 'Fraction',
+        graph_name='CpuBusy')
     snapshotter = perf_snapshot.PerformanceSnapshotter()
     snapshot = snapshotter.HeapSnapshot()[0]
     v8_heap_size = snapshot['total_heap_size'] / (1024.0 * 1024.0)
     logging.info('Total v8 heap size: %.2f MB' % v8_heap_size)
-    self._OutputPerfGraphValue('MB_%sV8HeapSize' % description, v8_heap_size)
+    self._OutputPerfGraphValue(
+        description + 'V8HeapSize',
+        v8_heap_size, 'MB',
+        graph_name='V8HeapSize')

   def testAngryBirds(self):
     """Measures performance for Angry Birds."""
     self._RunLiveGamePerfTest('http://chrome.angrybirds.com', 'Angry Birds',
                               'AngryBirds')


 class PerfTestServerRequestHandler(SimpleHTTPServer.SimpleHTTPRequestHandler):
   """Request handler for the local performance test server."""

(...skipping 220 matching lines...)
1470 """Identifies the port number to which the server is currently bound. 1513 """Identifies the port number to which the server is currently bound.
1471 1514
1472 Returns: 1515 Returns:
1473 The numeric port number to which the server is currently bound. 1516 The numeric port number to which the server is currently bound.
1474 """ 1517 """
1475 return self._server.server_address[1] 1518 return self._server.server_address[1]
1476 1519
1477 1520
1478 if __name__ == '__main__': 1521 if __name__ == '__main__':
1479 pyauto_functional.Main() 1522 pyauto_functional.Main()