Chromium Code Reviews

Created: 4 years, 5 months ago by alexandermont
Modified: 4 years, 5 months ago
Reviewers: Zhen Wang, aschulman, nednguyen, rnephew (Reviews Here), charliea (OOO until 10-5), sullivan
CC: catapult-reviews@chromium.org, telemetry-reviews@chromium.org
Base URL: git@github.com:catapult-project/catapult@master
Target Ref: refs/heads/master
Project: catapult
Visibility: Public
Description

Optimize JSON dump in telemetry.
This change makes the JSON dump go much faster (change from about
8.5s to about 1.4s for a 330000-event trace)
BUG=chromium:624164
Committed: https://chromium.googlesource.com/external/github.com/catapult-project/catapult/+/07aa51b63e8c04002a8696e63beacff9a8f2fbfc
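The optimization described in this CL boils down to serializing the trace to a string first and writing it in a single call, instead of letting `json.dump()` stream many small writes into the file object. A minimal sketch of the idea follows; the `serialize_trace` name and the trace structure are illustrative, not the actual telemetry code:

```python
import json


def serialize_trace(trace, fp):
    # Building the whole JSON string with json.dumps() and writing it in
    # one call is typically much faster than json.dump(), which performs
    # many small writes to the file object. The trade-off is that the
    # entire serialized string is held in memory at once.
    fp.write(json.dumps(trace))
```

The speedup comes from avoiding per-fragment write calls; the memory cost of the temporary string is the concern discussed in the review below.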
Patch Set 1 #
Total comments: 1
Patch Set 2 : add comment #
Total comments: 1
Messages
Total messages: 22 (9 generated)
alexandermont@chromium.org changed reviewers: + charliea@chromium.org, rnephew@chromium.org, zhenw@chromium.org
Ooh, learnt something new... lgtm
alexandermont@chromium.org changed reviewers: + aschulman@chromium.org
On 2016/07/21 23:01:43, alexandermont wrote: My only concern would be how this will tax memory. I guess this is likely building the string all at once rather than incrementally. Can you provide some evidence that having the entire JSON string in memory won't significantly increase our memory requirements?
sullivan@chromium.org changed reviewers: + sullivan@chromium.org
https://codereview.chromium.org/2174543002/diff/1/telemetry/telemetry/value/t...
File telemetry/telemetry/value/trace.py (right):

https://codereview.chromium.org/2174543002/diff/1/telemetry/telemetry/value/t...
telemetry/telemetry/value/trace.py:52: fp.write(json.dumps(trace))
If the other comments on the CL are resolved and we decide to go this route, please add a comment in the code explaining why we do it this way, so it doesn't get "cleaned up" to the previous state.
I'm not sure of the best way to profile memory usage in Python, but a quick hack with a dictionary of 3 million entries on my desktop shows 1.3% memory usage in top using dump, and 2.0% memory usage using dumps. I'm not sure whether that is an acceptable increase in memory usage, but since this code only processes one trace at a time, not all of them at once, it might be.
On 2016/07/22 16:30:55, rnephew (Reviews Here) wrote: > I'm not sure of the best way to profile memory usage in Python, but a quick hack > with a dictionary of 3 million entries on my desktop shows 1.3% memory usage in top > using dump, and 2.0% memory usage using dumps. I'm not sure whether that is an > acceptable increase in memory usage, but since this code only processes one > trace at a time, not all of them at once, it might be. My concern is just that laptops with limited memory won't be able to do the dump. But given how difficult it is to profile Python memory usage, we may just have to deploy it and see if it fails on any of the bots. lgtm
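A hypothetical micro-benchmark in the spirit of the "quick hack" described above: build a large dict and compare `json.dump()` against a single write of `json.dumps()`. The dict size and timings here are illustrative and will vary by machine; both approaches produce identical output, so the choice is purely a speed/memory trade-off.

```python
import io
import json
import time

# Illustrative payload: a large dict standing in for a big trace.
events = {str(i): {"ts": i, "name": "event"} for i in range(100000)}

# json.dump(): streams many small writes into the file object.
buf_dump = io.StringIO()
t0 = time.time()
json.dump(events, buf_dump)
dump_seconds = time.time() - t0

# json.dumps() + one write: builds the whole string in memory first.
buf_dumps = io.StringIO()
t0 = time.time()
buf_dumps.write(json.dumps(events))
dumps_seconds = time.time() - t0

# The serialized bytes are identical either way.
same_output = buf_dump.getvalue() == buf_dumps.getvalue()
```

For real memory profiling one would watch RSS in top (as rnephew did) or use a tool such as tracemalloc, since wall-clock timing says nothing about peak memory.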
Description was changed from ========== Optimize JSON dump in telemetry. This change makes the JSON dump go much faster (change from about 8.5s to about 1.4s for a 330000-event trace) BUG=catapult:# ========== to ========== Optimize JSON dump in telemetry. This change makes the JSON dump go much faster (change from about 8.5s to about 1.4s for a 330000-event trace) ==========
The CQ bit was checked by alexandermont@chromium.org
The patchset sent to the CQ was uploaded after l-g-t-m from aschulman@chromium.org, zhenw@chromium.org Link to the patchset: https://codereview.chromium.org/2174543002/#ps20001 (title: "add comment")
CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/v2/patch-status/codereview.chromium.or...
Message was sent while issue was closed.
Description was changed from ========== Optimize JSON dump in telemetry. This change makes the JSON dump go much faster (change from about 8.5s to about 1.4s for a 330000-event trace) ========== to ========== Optimize JSON dump in telemetry. This change makes the JSON dump go much faster (change from about 8.5s to about 1.4s for a 330000-event trace) Committed: https://chromium.googlesource.com/external/github.com/catapult-project/catapu... ==========
Message was sent while issue was closed.
Committed patchset #2 (id:20001) as https://chromium.googlesource.com/external/github.com/catapult-project/catapu...
Message was sent while issue was closed.
On 2016/07/22 18:51:30, commit-bot: I haz the power wrote: > Committed patchset #2 (id:20001) as > https://chromium.googlesource.com/external/github.com/catapult-project/catapu... Can you edit this patch's description to add a reference to the bug this helps to fix, https://crbug.com/624164?
Message was sent while issue was closed.
Description was changed from ========== Optimize JSON dump in telemetry. This change makes the JSON dump go much faster (change from about 8.5s to about 1.4s for a 330000-event trace) Committed: https://chromium.googlesource.com/external/github.com/catapult-project/catapu... ========== to ========== Optimize JSON dump in telemetry. This change makes the JSON dump go much faster (change from about 8.5s to about 1.4s for a 330000-event trace) BUG=chromium:624164 Committed: https://chromium.googlesource.com/external/github.com/catapult-project/catapu... ==========
Message was sent while issue was closed.
nednguyen@google.com changed reviewers: + nednguyen@google.com
Message was sent while issue was closed.
https://codereview.chromium.org/2174543002/diff/20001/telemetry/telemetry/val...
File telemetry/telemetry/value/trace.py (right):

https://codereview.chromium.org/2174543002/diff/20001/telemetry/telemetry/val...
telemetry/telemetry/value/trace.py:53: # ~5x faster than using json.dump().
Even better if we never keep the dict/list form in telemetry :-)
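The suggestion above, never keeping the dict/list form in memory, could be taken further by streaming events to the file one at a time. This sketch is purely illustrative of that idea (the function name and event format are hypothetical, not telemetry APIs), and it avoids materializing both the full list and the full JSON string:

```python
import json


def write_events_incrementally(events_iter, fp):
    # Stream an iterable of events to fp as a JSON array, serializing
    # one event at a time so neither the full list nor the full JSON
    # string is ever held in memory.
    fp.write('[')
    first = True
    for event in events_iter:
        if not first:
            fp.write(',')
        fp.write(json.dumps(event))
        first = False
    fp.write(']')
```

This trades the single fast `json.dumps()` call for bounded memory, so it targets the memory concern raised earlier in the review rather than raw speed.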
Message was sent while issue was closed.
A revert of this CL (patchset #2 id:20001) has been created in https://codereview.chromium.org/2177323002/ by rnephew@chromium.org. The reason for reverting is: https://github.com/catapult-project/catapult/issues/2535.
