Chromium Code Reviews
Diff: third_party/WebKit/Tools/Scripts/webkitpy/w3c/wpt_expectations_updater.py

Issue 2877123002: When importing wpt tests, add skip expectations for tests with missing results. (Closed)
Patch Set: Created 3 years, 7 months ago
 # Copyright 2016 The Chromium Authors. All rights reserved.
 # Use of this source code is governed by a BSD-style license that can be
 # found in the LICENSE file.

 """Updates expectations and baselines when updating web-platform-tests.

 Specifically, this class fetches results from try bots for the current CL, then
 (1) downloads new baseline files for any tests that can be rebaselined, and
 (2) updates the generic TestExpectations file for any other failing tests.
 """
(...skipping 183 matching lines...)
             if next_item == keys[-1]:
                 if found_match:
                     merged_dict[tuple(matching_value_keys)] = dictionary[current_key]
                     keys = [k for k in keys if k not in matching_value_keys]
                 else:
                     merged_dict[current_key] = dictionary[current_key]
                     keys.remove(current_key)
                 matching_value_keys = set()
         return merged_dict

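The fragment above is the tail of a key-merging helper: keys whose values compare equal are collapsed into a single tuple key. The following is a simplified standalone sketch of that idea (it is not the class's actual method, whose loop header is elided here):

```python
def merge_same_valued_keys(dictionary):
    """Returns a copy of |dictionary| where keys with equal values are
    merged into one tuple key mapping to the shared value."""
    merged_dict = {}
    remaining = list(dictionary)
    while remaining:
        current_key = remaining.pop(0)
        # Collect every later key whose value matches the current one.
        matching = [k for k in remaining
                    if dictionary[k] == dictionary[current_key]]
        if matching:
            merged_dict[tuple([current_key] + matching)] = dictionary[current_key]
            remaining = [k for k in remaining if k not in matching]
        else:
            merged_dict[current_key] = dictionary[current_key]
    return merged_dict
```

For example, `merge_same_valued_keys({'a': 1, 'b': 1, 'c': 2})` yields `{('a', 'b'): 1, 'c': 2}`.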
-    def get_expectations(self, results):
+    def get_expectations(self, results, test_name=''):
+
         """Returns a set of test expectations to use based on results.

         Returns a set of one or more test expectations based on the expected
         and actual results of a given test name. This function is to decide
         expectations for tests that could not be rebaselined.

         Args:
             results: A dictionary that maps one test to its results. Example:
                 {
                     'test_name': {
                         'expected': 'PASS',
                         'actual': 'FAIL',
                         'bug': 'crbug.com/11111'
                     }
                 }
+            test_name: The test name string (optional).

         Returns:
             A set of one or more test expectation strings with the first letter
             capitalized. Example: set(['Failure', 'Timeout']).
         """
+        # If the result is MISSING, this implies that the test was not
+        # rebaselined and has an actual result but no baseline. We can't
+        # add a Missing expectation (this is not allowed), but no other
+        # expectation is correct.
+        # We also want to skip any new manual tests that are not automated;
+        # see crbug.com/708241 for context.
+        if (results['actual'] == 'MISSING' or
+                '-manual.' in test_name and results['actual'] == 'TIMEOUT'):
+            return {'Skip'}
         expectations = set()
         failure_types = ('TEXT', 'IMAGE+TEXT', 'IMAGE', 'AUDIO')
         other_types = ('TIMEOUT', 'CRASH', 'PASS')
         for actual in results['actual'].split():
             if actual in failure_types:
                 expectations.add('Failure')
             if actual in other_types:
                 expectations.add(actual.capitalize())
         return expectations

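The updated mapping is a pure function of the results dictionary and the test name, so it can be sketched standalone (the real version is a method on the updater class):

```python
def get_expectations(results, test_name=''):
    """Maps a try-bot result dict to a set of expectation keywords."""
    # MISSING results (no baseline) and manual tests that time out because
    # they were never automated can only be skipped; see crbug.com/708241.
    if (results['actual'] == 'MISSING' or
            ('-manual.' in test_name and results['actual'] == 'TIMEOUT')):
        return {'Skip'}
    expectations = set()
    failure_types = ('TEXT', 'IMAGE+TEXT', 'IMAGE', 'AUDIO')
    other_types = ('TIMEOUT', 'CRASH', 'PASS')
    # 'actual' may list several results, e.g. 'TEXT TIMEOUT' for a flake.
    for actual in results['actual'].split():
        if actual in failure_types:
            expectations.add('Failure')
        if actual in other_types:
            expectations.add(actual.capitalize())
    return expectations
```

For example, a flaky `'TEXT TIMEOUT'` result maps to `{'Failure', 'Timeout'}`, while a `'TIMEOUT'` on `external/wpt/foo-manual.html` maps to `{'Skip'}`.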
(...skipping 27 matching lines...)
             line_list.append(self._create_line(test_name, port_names, results))
         return line_list

     def _create_line(self, test_name, port_names, results):
         """Constructs one test expectations line string."""
         line_parts = [results['bug']]
         specifier_part = self.specifier_part(self.to_list(port_names), test_name)
         if specifier_part:
             line_parts.append(specifier_part)
         line_parts.append(test_name)
+        line_parts.append('[ %s ]' % ' '.join(self.get_expectations(results, test_name)))

-        # Skip new manual tests; see crbug.com/708241 for context.
-        if '-manual.' in test_name and results['actual'] in ('MISSING', 'TIMEOUT'):
-            line_parts.append('[ Skip ]')
-        else:
-            line_parts.append('[ %s ]' % ' '.join(self.get_expectations(results)))
         return ' '.join(line_parts)

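After this change, `_create_line` just assembles bug, optional platform specifiers, test name, and bracketed expectations; the skip decision lives entirely in `get_expectations`. A hypothetical standalone version of the assembly step (the real method computes the specifier via `self.specifier_part`, which is passed in directly here, and does not sort the set, which is sorted here only for deterministic output):

```python
def create_line(test_name, results, expectations, specifier_part=''):
    """Builds one TestExpectations line, e.g.
    'crbug.com/11111 [ Linux ] external/wpt/foo.html [ Failure ]'."""
    line_parts = [results['bug']]
    if specifier_part:
        line_parts.append(specifier_part)
    line_parts.append(test_name)
    line_parts.append('[ %s ]' % ' '.join(sorted(expectations)))
    return ' '.join(line_parts)
```

For example, `create_line('external/wpt/foo.html', {'bug': 'crbug.com/11111'}, {'Failure'}, '[ Linux ]')` produces the line quoted in the docstring above.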
     def specifier_part(self, port_names, test_name):
         """Returns the specifier part for a new test expectations line.

         Args:
             port_names: A list of full port names that the line should apply to.
             test_name: The test name for the expectation line.

         Returns:
(...skipping 158 matching lines...)
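The body of `specifier_part` is elided above. Conceptually it translates full port names into the bracketed version specifiers used in TestExpectations lines; a simplified sketch, assuming a hypothetical port-to-specifier lookup table in place of the real builder/port machinery:

```python
# Hypothetical mapping; the real code derives specifiers from the
# configured ports and builders rather than a static table.
PORT_TO_SPECIFIER = {
    'test-mac-mac10.11': 'Mac10.11',
    'test-linux-trusty': 'Trusty',
}

def specifier_part(port_names):
    """Returns e.g. '[ Mac10.11 Trusty ]', or '' for no specifiers."""
    specifiers = sorted(PORT_TO_SPECIFIER[name] for name in port_names)
    return '[ %s ]' % ' '.join(specifiers) if specifiers else ''
```

An empty specifier part means the line applies to every platform, so nothing is emitted between the bug and the test name.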
         if result['actual'] in ('CRASH', 'TIMEOUT', 'MISSING'):
             return False
         return True

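This fragment is the rebaseline gate: results that crashed, timed out, or are missing produce no usable baseline, so those tests fall through to TestExpectations instead. A standalone illustration (the surrounding method, whose signature is elided above, also checks other conditions):

```python
def can_rebaseline(result):
    """True if the actual result can yield a new baseline file."""
    return result['actual'] not in ('CRASH', 'TIMEOUT', 'MISSING')
```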
     def is_reference_test(self, test_path):
         """Checks whether a given file is a reference test."""
         return bool(self.port.reference_files(test_path))

     def _get_try_bots(self):
         return self.host.builders.all_try_builder_names()

This is Rietveld 408576698