Chromium Code Reviews

Unified Diff: scripts/slave/recipe_modules/chromium_tests/steps.py

Issue 2375663003: Add json test results format support for SwarmingIsolatedScriptTest (Closed)
Patch Set: Rebase Created 4 years, 2 months ago
Index: scripts/slave/recipe_modules/chromium_tests/steps.py
diff --git a/scripts/slave/recipe_modules/chromium_tests/steps.py b/scripts/slave/recipe_modules/chromium_tests/steps.py
index 0f79e0d774659ec91183b2aa3b54f8c015ae3688..ea311ef821d8ce5fc52d890d343ee988e35f1c97 100644
--- a/scripts/slave/recipe_modules/chromium_tests/steps.py
+++ b/scripts/slave/recipe_modules/chromium_tests/steps.py
@@ -1080,20 +1080,35 @@ class SwarmingIsolatedScriptTest(SwarmingTest):
title=self._step_name(suffix), isolated_hash=isolated_hash,
shards=self._shards, idempotent=False, extra_args=args)
+ def validate_simplified_results(self, results):
+ failures = results['failures']
+ valid = results['valid']
Sergiy Byelozyorov 2016/10/05 15:36:06 Please remove two lines above... they are not needed…
nednguyen 2016/10/05 18:50:53 Done.
+ return results['valid'], results['failures']
+
+ def validate_json_test_results(self, api, results):
+ test_results = api.test_utils.create_results_from_json(results)
+ tests = test_results.tests
+ failures = list(t for t in tests if
+ tests[t]['expected'] != tests[t]['actual'])
Sergiy Byelozyorov 2016/10/05 15:36:06 AFAIK, 'expected' and 'actual' are lists of results…
nednguyen 2016/10/05 18:50:53 I see. The spec says: "actual" is an ordered space-separated list…
Sergiy Byelozyorov 2016/10/09 14:43:10 I think, to get a list of failures, you'd need to…
nednguyen 2016/10/10 13:24:51 Done.
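The thread above settles on treating 'expected' and 'actual' as space-separated result lists, per the JSON Test Results Format. As a hedged illustration only (not the reviewed recipe code; the flat `tests` dict and the `compute_failures` name are invented for this sketch, and the real format nests `tests` as a trie), failures can be derived by comparing each test's final retry result against its expected set:

```python
# Sketch: deriving failures from JSON Test Results Format (version 3)
# entries. 'actual' is an ordered space-separated string of results
# across retries; 'expected' is a space-separated set of acceptable
# results. A test counts as a failure when its final actual result is
# not among the expected ones.

def compute_failures(tests):
    failures = []
    for name, entry in tests.items():
        actual = entry.get('actual', '').split()
        expected = set(entry.get('expected', '').split())
        # The last entry in 'actual' is the result after all retries.
        if actual and actual[-1] not in expected:
            failures.append(name)
    return failures
```

For example, a test with `actual: "FAIL PASS"` and `expected: "PASS"` was flaky but ultimately passed, so it is not reported as a failure.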
+ valid = results['num_failures_by_type'].get('FAIL', 0) == len(failures)
Sergiy Byelozyorov 2016/10/05 15:36:05 FAIL is not the only type of failure.
nednguyen 2016/10/05 18:50:53 But it should be the only one we use for checking…
Dirk Pranke 2016/10/05 19:39:46 "valid" is used to detect whether something went h…
nednguyen 2016/10/05 20:14:29 Done. We have a try catch block at 1105 to check v…
+ return valid, failures
+
def validate_task_results(self, api, step_result):
results = getattr(step_result, 'isolated_script_results', None) or {}
-
+ valid = True
+ failures = []
try:
- failures = results['failures']
- valid = results['valid']
- if not failures and step_result.retcode != 0:
- failures = ['%s (entire test suite)' % self.name]
- valid = False
-
+ if results.get('version', 0) == 3:
+ valid, failures = self.validate_json_test_results(api, results)
+ else:
+ valid, failures = self.validate_simplified_results(results)
except (ValueError, KeyError) as e:
step_result.presentation.logs['invalid_results_exc'] = [str(e)]
valid = False
failures = None
+ if not failures and step_result.retcode != 0:
+ failures = ['%s (entire test suite)' % self.name]
+ valid = False
if valid:
step_result.presentation.step_text += api.test_utils.format_step_text([
['failures:', failures]
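For reference, the control flow that the patched validate_task_results implements can be sketched as a standalone function (an approximation under stated assumptions, not the recipe module itself: the name `parse_isolated_script_results` is invented, the version-3 branch is simplified to a flat `tests` dict, and the real code delegates to `api.test_utils`):

```python
def parse_isolated_script_results(results, retcode, suite_name):
    """Dispatch on the results format: payloads carrying "version": 3 are
    treated as JSON Test Results Format; anything else as the simplified
    {"valid": ..., "failures": ...} format. Malformed payloads mark the
    step invalid instead of raising."""
    valid = True
    failures = []
    try:
        if results.get('version', 0) == 3:
            # JSON Test Results Format: a test fails when its final
            # retry result is not among the expected results.
            tests = results['tests']
            failures = [t for t in tests
                        if tests[t]['actual'].split()[-1]
                        not in tests[t]['expected'].split()]
        else:
            valid, failures = results['valid'], results['failures']
    except (ValueError, KeyError, IndexError):
        valid, failures = False, None
    # A non-zero return code with no recorded failures means the harness
    # itself broke; blame the whole suite and mark the results invalid.
    if not failures and retcode != 0:
        failures = ['%s (entire test suite)' % suite_name]
        valid = False
    return valid, failures
```

Note how the retcode check runs after both branches, matching the patch's move of that block out of the try/except so it applies to either format.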

This is Rietveld 408576698