Created: 3 years, 7 months ago by chiniforooshan
Modified: 3 years, 7 months ago
CC: catapult-reviews_chromium.org, ehmaldonado_chromium, nednguyen, tracing-review_chromium.org
Target Ref: refs/heads/master
Project: catapult
Visibility: Public
Description

tracing: Enable stream processing when input is large
By default, data is still processed as a single string. But when the
unzipped data does not fit in a V8 string, we try to process it as a
trace stream instead. In other words, when trace processing would crash
because of the input's size, we fall back to stream processing. Although
not all importers support trace streams, the ones that do
(trace_event_importer, gzip_importer, and ftrace_importer) cover many
cases. For example, after this CL webrtc.perf_benchmark can run
successfully, whereas otherwise some stories that generate large traces,
like multiple_peerconnections and 30s_datachannel_transfer (being
introduced in https://codereview.chromium.org/2790553003/), would
crash.
BUG=catapult:#2826
BUG=chromium:679768
Review-Url: https://codereview.chromium.org/2864743002
Committed: https://chromium.googlesource.com/external/github.com/catapult-project/catapult/+/9e43159fc62c2ba87c5dd5202e38aeb5e207bb68
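The fallback described in the CL (single-string processing by default, streaming only when the string path would fail) can be sketched roughly as below. This is an illustration, not the actual catapult code; `importTrace`, `stringImporter`, and `streamImporter` are hypothetical names:

```javascript
// Rough sketch of the fallback, not the actual catapult implementation:
// try the default single-string path first, and only if building the
// string overflows V8's limit (surfacing as a RangeError) retry with a
// chunk-by-chunk stream importer.
function importTrace(chunks, stringImporter, streamImporter) {
  try {
    // Default path: concatenate everything into one string.
    return stringImporter(chunks.join(''));
  } catch (e) {
    // Only the "string too large" failure should trigger the fallback.
    if (!(e instanceof RangeError)) throw e;
    // Fallback path: hand the chunks to a stream-aware importer.
    return streamImporter(chunks);
  }
}
```

The key design point is that the string path stays the default, so the slower streaming path only runs for inputs that would otherwise crash.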
Patch Set 1
Patch Set 2: sync
Messages
Total messages: 23 (15 generated)
Description was changed.
chiniforooshan@chromium.org changed reviewers: + eakuefner@chromium.org
PTAL
nednguyen@google.com changed reviewers: + charliea@chromium.org
On 2017/05/05 21:33:33, chiniforooshan wrote:
> PTAL

ping
Description was changed: title prefixed with "tracing:", and the example reworded to name the stories and the CL introducing them (https://codereview.chromium.org/2790553003/).
Description was changed.
lgtm, but a couple of typo nits in your description: s/successfull/successfully/, and BUG=catapult: should be BUG=catapult:#. We're supposed to have a PRESUBMIT check that enforces the latter, so let me know if that wasn't caught for you. Finally, do you have any intuition about why some traces that are too large won't import using stream processing, as you described in the description? I understand that JSON isn't fundamentally a streaming format, so maybe it's just parsing limitations of some kind? It would be really cool to figure out why stream processing breaks in some cases as a follow-up.
Description was changed: s/successfull/successfully/.
Description was changed: BUG=catapult:2826 -> BUG=catapult:#2826.
Description was changed: BUG=catapult:#2826 -> BUG=catapult:2826.
Description was changed: BUG=catapult:2826 -> BUG=catapult:#2826.
Thanks!

On 2017/05/09 15:18:35, eakuefner wrote:
> lgtm, but couple of typo nits in your description: s/successfull/successfully/
> and BUG=catapult: should be BUG=catapult:#. We're supposed to have a PRESUBMIT
> that enforces the latter, so let me know if that wasn't caught for you.

Done. git cl upload didn't prevent me from uploading the CL.

> Finally, do you have any intuition on why some traces that are too large won't
> import using stream processing, as you described in the description? I
> understand that JSON isn't fundamentally a streaming format, so maybe it's just
> parsing limitations of some kind? It would be really cool to figure out why
> stream processing breaks in some cases as a follow-up.

The current limitations of stream processing are:

1. Some importers assume the input is a string. We have to modify them to understand inputs of TraceStream type. So far, I have done this for the linux ftrace, gzip, and chrome trace event importers.
2. Stream processing uses a third-party library that is 2-3x slower than JSON.parse. We should investigate how we can improve its performance.
3. If the input JSON has string fields whose values are larger than the V8 string limit, we still have a problem.
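Point 1 above (importers that must accept either a whole-trace string or a TraceStream) might look roughly like the following. The chunk interface shown (`forEachChunk`) is a made-up stand-in for whatever the real TraceStream type exposes:

```javascript
// Illustrative only: an importer helper that handles both a whole-trace
// string and a stream-like object delivering the trace in chunks.
// Counting newlines is boundary-safe, so no state needs to carry across
// chunk boundaries; real importers (e.g. for ftrace lines) would need to
// buffer partial records that straddle two chunks.
function countTraceLines(trace) {
  let lines = 0;
  const scan = (s) => {
    for (let i = 0; i < s.length; i++) {
      if (s[i] === '\n') lines++;
    }
  };
  if (typeof trace === 'string') {
    scan(trace);               // string path: one pass over the whole input
  } else {
    trace.forEachChunk(scan);  // stream path: one pass per chunk
  }
  return lines;
}
```

This also hints at why point 3 is hard: chunking helps only when no single token (such as one JSON string value) exceeds the V8 string limit on its own.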
The CQ bit was checked by chiniforooshan@chromium.org
CQ is trying the patch. Follow status at: https://chromium-cq-status.appspot.com/v2/patch-status/codereview.chromium.or...
The CQ bit was unchecked by commit-bot@chromium.org
Try jobs failed on the following builders: Catapult Presubmit on master.tryserver.client.catapult (JOB_FAILED, https://build.chromium.org/p/tryserver.client.catapult/builders/Catapult%20Pr...)
The CQ bit was checked by chiniforooshan@chromium.org
The patchset sent to the CQ was uploaded after l-g-t-m from eakuefner@chromium.org Link to the patchset: https://codereview.chromium.org/2864743002/#ps20001 (title: "sync")
CQ is trying the patch. Follow status at: https://chromium-cq-status.appspot.com/v2/patch-status/codereview.chromium.or...
CQ is committing the patch. Bot data: {"patchset_id": 20001, "attempt_start_ts": 1494354456458780, "parent_rev": "a957433308c7e7751ce777d31414e24f8791842a", "commit_rev": "9e43159fc62c2ba87c5dd5202e38aeb5e207bb68"}
Message was sent while issue was closed.
Description was changed: appended the Review-Url and Committed links.
Message was sent while issue was closed.
Committed patchset #2 (id:20001) as https://chromium.googlesource.com/external/github.com/catapult-project/catapu...