Created: 5 years ago by Primiano Tucci (use gerrit)
Modified: 5 years ago
CC: chromium-reviews, tracing+reviews_chromium.org, wfh+watch_chromium.org, vmpstr+watch_chromium.org
Base URL: https://chromium.googlesource.com/chromium/src.git@master
Target Ref: refs/pending/heads/master
Project: chromium
Visibility: Public
Description: [tracing] Simplify logic of MemoryDumpManager
This CL simplifies the unregistration logic of MemoryDumpManager.
This is to make the overall code more readable and simplify upcoming
changes.
Prior to this CL, the MemoryDumpManager was keeping only one list to
keep track of the registered dump providers. This caused the
unregistration logic to be tricky because:
- We couldn't remove the MDPInfo straight away, as that might
cause invalidation of the |next_dump_provider| iterator.
- We had to flag the MDPInfo as unregistered and postpone the deletion
on the next dump.
This has a major drawback: if we keep registering and unregistering
dump providers when no tracing is happening at all, the dump_providers_
list grows without bounds. This is bad.
This CL changes the logic as follows:
- MDPInfo becomes refcounted. At any time it can be referenced by:
- The MDM's |dump_providers_| list.
- The |ProcessMemoryDumpAsyncState.pending_dump_providers| list.
- Upon each dump, the dump provider list is snapshotted in the
|ProcessMemoryDumpAsyncState.pending_dump_providers|
- Upon unregistration of a dump provider we just remove the MDPInfo
from the MDM's |dump_providers_|. If a dump is ongoing, the
ProcessMemoryDumpAsyncState will keep it refcounted and delete it
during the dump.
This CL does not add or change any behavior of the MemoryDumpManager,
with the exception of:
- Fixing a corner case when dumping with no dump providers registered
(See changes in the unittest).
- Making the fail-safe logic more strict: if a dumper fails once, it
will stay disabled forever.
BUG=461788
Committed: https://crrev.com/9734733909e7cb41ef5c153f3c2d0927e8209133
Cr-Commit-Position: refs/heads/master@{#366374}
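The refcounted-snapshot scheme described above can be sketched roughly as follows. This is an illustrative model, not the actual Chromium code: std::shared_ptr stands in for scoped_refptr, and the Manager/MdpInfo names are simplified stand-ins for MemoryDumpManager/MemoryDumpProviderInfo.

```cpp
#include <algorithm>
#include <memory>
#include <string>
#include <vector>

// Provider info is refcounted; the manager owns one list, and each dump
// snapshots that list. Unregistration just erases from the manager's
// list; an in-flight snapshot keeps its entries alive until it is done.
struct MdpInfo {
  std::string name;
  bool disabled = false;
};

class Manager {
 public:
  void Register(const std::string& name) {
    auto info = std::make_shared<MdpInfo>();
    info->name = name;
    providers_.push_back(std::move(info));
  }

  void Unregister(const std::string& name) {
    providers_.erase(
        std::remove_if(providers_.begin(), providers_.end(),
                       [&name](const std::shared_ptr<MdpInfo>& p) {
                         return p->name == name;
                       }),
        providers_.end());
  }

  // Snapshot taken when a dump starts; entries unregistered while the
  // dump is in flight stay alive until the snapshot drops them.
  std::vector<std::shared_ptr<MdpInfo>> SnapshotForDump() const {
    return providers_;
  }

  size_t registered_count() const { return providers_.size(); }

 private:
  std::vector<std::shared_ptr<MdpInfo>> providers_;
};
```

This is what lets the manager's list stay bounded: nothing is ever flagged-and-kept; unregistered entries are dropped immediately and only outstanding snapshots extend their lifetime.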
Patch Set 1
Patch Set 2
Total comments: 18
Patch Set 3: ignore_Result
Total comments: 8
Patch Set 4: before lock removal
Patch Set 5: final without lock
Total comments: 8
Patch Set 6: more simplicity
Total comments: 6
Patch Set 7: comments
Patch Set 8: fix on double registration
Patch Set 9: remove DCHECK
Dependent Patchsets:
Messages
Total messages: 33 (15 generated)
primiano@chromium.org changed reviewers: + ruuda@google.com, ssid@chromium.org
Ok, lemme try again. This does not solve the UnregisterAsync problem, just makes the situation cleaner and easier to solve. PTAL
Awesome, this looks much more elegant than before. I have a few minor comments. https://codereview.chromium.org/1536533004/diff/20001/base/trace_event/memory... File base/trace_event/memory_dump_manager.cc (right): https://codereview.chromium.org/1536533004/diff/20001/base/trace_event/memory... base/trace_event/memory_dump_manager.cc:236: // The MDPInfo instance can still be refcounted by the /s/refcounted/referenced/ https://codereview.chromium.org/1536533004/diff/20001/base/trace_event/memory... base/trace_event/memory_dump_manager.cc:385: // disable the dump provider and abort this dump. This no longer aborts the dump, does it? https://codereview.chromium.org/1536533004/diff/20001/base/trace_event/memory... base/trace_event/memory_dump_manager.cc:420: pmd_async_state->pending_dump_providers.pop_front(); This is abusing the global lock a bit. Perhaps the MDPInfo should be atomically refcounted, so this can be outside of the lock? https://codereview.chromium.org/1536533004/diff/20001/base/trace_event/memory... base/trace_event/memory_dump_manager.cc:613: // Ensure that unbound providers (task_runner == nullptr) always run last. Why do unbound providers need to run last again? If some ordering is all you need, you could use |std::tie| here (with task runner as first element). https://codereview.chromium.org/1536533004/diff/20001/base/trace_event/memory... File base/trace_event/memory_dump_manager.h (right): https://codereview.chromium.org/1536533004/diff/20001/base/trace_event/memory... base/trace_event/memory_dump_manager.h:151: // - It is safe to read the const fields without holind the |lock_|. Changes /s/holind/holding/ https://codereview.chromium.org/1536533004/diff/20001/base/trace_event/memory... base/trace_event/memory_dump_manager.h:220: std::deque<scoped_refptr<MemoryDumpProviderInfo>> pending_dump_providers; I hate to ruin your chance to use a deque, but why not use a vector here? You can call |pop_back| on it. 
Or a |std::stack<T, std::vector<T>>| even, because you only ever look at elements at the ends. If you need to respect the ordering of the set, just reverse the comparison operator. (That would allow getting rid of the awkward !(x < Y) in there too.) https://codereview.chromium.org/1536533004/diff/20001/base/trace_event/memory... File base/trace_event/memory_dump_manager_unittest.cc (right): https://codereview.chromium.org/1536533004/diff/20001/base/trace_event/memory... base/trace_event/memory_dump_manager_unittest.cc:586: // its dump provider should be skipped but the dump itself shall succeed. Nit: /s/shall/should/? (I imagine Gandalf shouting at this test, "You shall pass!") https://codereview.chromium.org/1536533004/diff/20001/base/trace_event/memory... base/trace_event/memory_dump_manager_unittest.cc:627: ASSERT_EQ(true, last_callback_success_); Note: below the test is using |EXPECT_TRUE|, here it is |ASSERT_EQ|. (You could also use |ASSERT_TRUE|.) https://codereview.chromium.org/1536533004/diff/40001/base/trace_event/memory... File base/trace_event/memory_dump_manager.cc (right): https://codereview.chromium.org/1536533004/diff/40001/base/trace_event/memory... base/trace_event/memory_dump_manager.cc:624: pending_dump_providers(dump_providers.begin(), dump_providers.end()), If you make this a vector, this will cause insertion of the elements one by one because set does not have a random access iterator. That will likely waste memory too due to the growth factor. Instead you can call |reserve| and |assign| then.
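The |reserve|/|assign| suggestion, with the vector then consumed like a stack via |pop_back|, might look like this. This is a sketch, not the actual patch: plain ints stand in for the scoped_refptr<MemoryDumpProviderInfo> entries.

```cpp
#include <set>
#include <vector>

// Build the pending-provider list from the registered set with a single
// allocation (reserve + assign), then consume it like a stack with
// pop_back. Copying in reverse set order makes pop_back hand the
// elements back in the set's own order.
std::vector<int> MakePendingList(const std::set<int>& registered) {
  std::vector<int> pending;
  pending.reserve(registered.size());
  pending.assign(registered.rbegin(), registered.rend());
  return pending;
}
```

(In most standard-library implementations |assign| can already size the vector in one step from bidirectional set iterators, so the explicit |reserve| mainly documents the intent.)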
A few thoughts. https://codereview.chromium.org/1536533004/diff/40001/base/trace_event/memory... File base/trace_event/memory_dump_manager.cc (right): https://codereview.chromium.org/1536533004/diff/40001/base/trace_event/memory... base/trace_event/memory_dump_manager.cc:339: // Unfortunatelly, in fact, PostTask() destroyes the scoped_ptr arguments upon s/Unfortunatelly/Unfortunately s/destroyes/destroys https://codereview.chromium.org/1536533004/diff/40001/base/trace_event/memory... File base/trace_event/memory_dump_manager.h (right): https://codereview.chromium.org/1536533004/diff/40001/base/trace_event/memory... base/trace_event/memory_dump_manager.h:151: // - It is safe to read the const fields without holind the |lock_|. Changes s/holind/holding/ https://codereview.chromium.org/1536533004/diff/40001/base/trace_event/memory... base/trace_event/memory_dump_manager.h:151: // - It is safe to read the const fields without holind the |lock_|. Changes 1. It is not safe to access the mdp pointer. Even if the mdp pointer is a const member, the object could have been destroyed while the pointer is just kept around. The comment is misleading when it says const members can be accessed. 2. Currently the code assumes: Only thread-associated dump providers can be unregistered. Only that thread can dump or unregister the dump provider, so only that thread should be able to access or change the mdp_info. So, there is no case where the dump happens while mdp_info is changed. A lock would not be needed to change the members of mdp_info when only one thread can actually change it. The only reason I see for the lock is the disabled check that happens on the previous thread so that a thread hop is avoided. It is easier to remove the lock and do the thread hop before this check. It will be a rare case where this thread hop goes to waste because the dump provider got un-registered after the dump started. If such a thread is registering and unregistering providers, then a thread hop to that thread would happen for the next provider anyway. 
The exception is when the PostTask fails; then it is safe to set it disabled while holding the lock. This will be the only case where the lock is needed, and it is used rarely. This would reduce locking and unlocking while dumping.
Thanks a lot guys, super useful feedback. I simplified even further and removed the lock entirely from ContinueAsyncProcessDump. Mind taking another careful look? https://codereview.chromium.org/1536533004/diff/20001/base/trace_event/memory... File base/trace_event/memory_dump_manager.cc (right): https://codereview.chromium.org/1536533004/diff/20001/base/trace_event/memory... base/trace_event/memory_dump_manager.cc:236: // The MDPInfo instance can still be refcounted by the On 2015/12/17 20:20:39, Ruud van Asseldonk wrote: > /s/refcounted/referenced/ Done. https://codereview.chromium.org/1536533004/diff/20001/base/trace_event/memory... base/trace_event/memory_dump_manager.cc:385: // disable the dump provider and abort this dump. On 2015/12/17 20:20:39, Ruud van Asseldonk wrote: > This does no longer abort the dump, does it? Oh, good catch. Correct; fixed the comment. https://codereview.chromium.org/1536533004/diff/20001/base/trace_event/memory... base/trace_event/memory_dump_manager.cc:420: pmd_async_state->pending_dump_providers.pop_front(); On 2015/12/17 20:20:39, Ruud van Asseldonk wrote: > This is abusing the global lock a bit. Perhaps the MDPInfo should be atomically > refcounted, so this can be outside of the lock? Ok, ok. I originally did that because I didn't like doing ~30 (N dump providers) atomic increments when creating the pending copy vector. But now it turns out I can actually remove the lock completely here, so it feels really bad to keep the lock just for an alleged perf optimization. Made the MDPInfo thread-safe. https://codereview.chromium.org/1536533004/diff/20001/base/trace_event/memory... base/trace_event/memory_dump_manager.cc:613: // Ensure that unbound providers (task_runner == nullptr) always run last. On 2015/12/17 20:20:39, Ruud van Asseldonk wrote: > Why do unbound providers need to run last again? Because I know that some of those unbound providers are slow and I want them to not affect timing of the other dump providers. 
I know, I know, I should have a proper priority field (but what should the order be then?). > If some ordering is all you > need, you could use |std::tie| here (with task runner as first element). TIL about std::tie. That's nice. Also I realized that I want a more stable sorting, not based on pointer arith. See registration_order in the new patchset. https://codereview.chromium.org/1536533004/diff/20001/base/trace_event/memory... File base/trace_event/memory_dump_manager.h (right): https://codereview.chromium.org/1536533004/diff/20001/base/trace_event/memory... base/trace_event/memory_dump_manager.h:151: // - It is safe to read the const fields without holind the |lock_|. Changes On 2015/12/17 20:20:39, Ruud van Asseldonk wrote: > /s/holind/holding/ Done. https://codereview.chromium.org/1536533004/diff/20001/base/trace_event/memory... base/trace_event/memory_dump_manager.h:220: std::deque<scoped_refptr<MemoryDumpProviderInfo>> pending_dump_providers; On 2015/12/17 20:20:39, Ruud van Asseldonk wrote: > I hate to ruin your chance to use a deque, but why not use a vector here? You > can call |pop_back| on it. Or a |std::stack<T, std::vector<T>>| even, because > you only ever look at elements at the ends. If you need to respect the ordering > of the set, just reverse the comparison operator. (That would allow getting rid > of the awkward !(x < Y) in there too.) Switched to a vector (using it like a stack). I don't like the idea that set and vector would have reverse orders, though. https://codereview.chromium.org/1536533004/diff/20001/base/trace_event/memory... File base/trace_event/memory_dump_manager_unittest.cc (right): https://codereview.chromium.org/1536533004/diff/20001/base/trace_event/memory... base/trace_event/memory_dump_manager_unittest.cc:586: // its dump provider should be skipped but the dump itself shall succeed. On 2015/12/17 20:20:39, Ruud van Asseldonk wrote: > Nit: /s/shall/should/? > > (I imagine Gandalf shouting at this test, "You shall pass!") Done. 
https://codereview.chromium.org/1536533004/diff/20001/base/trace_event/memory... base/trace_event/memory_dump_manager_unittest.cc:627: ASSERT_EQ(true, last_callback_success_); On 2015/12/17 20:20:39, Ruud van Asseldonk wrote: > Note: below the test is using |EXPECT_TRUE|, here it is |ASSERT_EQ|. (You could > also use |ASSERT_TRUE|.) Done. https://codereview.chromium.org/1536533004/diff/40001/base/trace_event/memory... File base/trace_event/memory_dump_manager.cc (right): https://codereview.chromium.org/1536533004/diff/40001/base/trace_event/memory... base/trace_event/memory_dump_manager.cc:339: // Unfortunatelly, in fact, PostTask() destroyes the scoped_ptr arguments upon On 2015/12/17 22:09:36, ssid wrote: > s/Unfortunatelly/Unfortunately > s/destroyes/destroys Done. https://codereview.chromium.org/1536533004/diff/40001/base/trace_event/memory... base/trace_event/memory_dump_manager.cc:624: pending_dump_providers(dump_providers.begin(), dump_providers.end()), On 2015/12/17 20:20:39, Ruud van Asseldonk wrote: > If you make this a vector, this will cause insertion of the elements one by one > because set does not have a random access iterator. That will likely waste > memory too due to the growth factor. Instead you can call |reserve| and |assign| > then. Ok, switched to a vector (and using it as a stack). https://codereview.chromium.org/1536533004/diff/40001/base/trace_event/memory... File base/trace_event/memory_dump_manager.h (right): https://codereview.chromium.org/1536533004/diff/40001/base/trace_event/memory... base/trace_event/memory_dump_manager.h:151: // - It is safe to read the const fields without holind the |lock_|. Changes On 2015/12/17 22:09:36, ssid wrote: > s/holind/holding/ Done. https://codereview.chromium.org/1536533004/diff/40001/base/trace_event/memory... base/trace_event/memory_dump_manager.h:151: // - It is safe to read the const fields without holind the |lock_|. Changes On 2015/12/17 22:09:36, ssid wrote: > 1. 
It is not safe to access the mdp pointer. Even if the mdp pointer is const > member, the object could have been destroyed while the pointer is just kept > around. The comment is misleading when it says const members can be accessed. Absolutely correct. Added another point. > 2. Currently the code assumes: Only thread-associated dump providers can be > unregistered. Only that thread can dump or unregister the dump provider should > be able to access or change the mdp_info. > So, there is no case where the dump happens when mdp_info is changed. > Lock would not be needed to change the members of mdp_info, when only one thread > can actually change it. But this is the case today, isn't it? At most one ContinueAsyncProcessDump can be executed (on random threads) at a time, so it is intrinsically safe (and has lock regions before and after). You have to use the lock if you write fields in other parts of MDM. > The only reason I see for the lock is the disabled check that happens in > previous thread and thread hop is avoided. It is easier to remove lock and do > thread hop before this check. It will be rare case where this thread hop goes > waste since the dump provider got un-registered after dump started. If such a > thread is registering and unregistering providers then a thread hop to that > thread would anyway happen for next provider. Ok, I see your point. Let me try to rewrite that part. I think you are right and locking can be further avoided. > The exception is when the PostTask fails, then it is safe to set it disabled > with holding the lock. This will be only case lock will be needed and used > rarely. Yeah, agreed, this is rare. Thanks for the feedback. Let me give it a try.
Threading is hard ... https://codereview.chromium.org/1536533004/diff/20001/base/trace_event/memory... File base/trace_event/memory_dump_manager.cc (right): https://codereview.chromium.org/1536533004/diff/20001/base/trace_event/memory... base/trace_event/memory_dump_manager.cc:420: pmd_async_state->pending_dump_providers.pop_front(); On 2015/12/18 12:00:31, Primiano Tucci wrote: > On 2015/12/17 20:20:39, Ruud van Asseldonk wrote: > > This is abusing the global lock a bit. Perhaps the MDPInfo should be > atomically > > refcounted, so this can be outside of the lock? > > Ok, ok. i originally did that because I didn't like doing ~30 (N dump providers) > atomic increments when creading the pending copy vector. > But now turns out I can actually remove the lock completely here, so feels > really back to keep the lock just for an alledged perf optimization. > Made the MDPInfo threadsafe. We are making far too many assumptions about performance without measuring anything. I would just do the simplest possible thing. It is always possible to optimise later if this turns out to be a real performance issue. https://codereview.chromium.org/1536533004/diff/20001/base/trace_event/memory... base/trace_event/memory_dump_manager.cc:613: // Ensure that unbound providers (task_runner == nullptr) always run last. On 2015/12/18 12:00:31, Primiano Tucci wrote: > On 2015/12/17 20:20:39, Ruud van Asseldonk wrote: > > Why do unbound providers need to run last again? > Because I know that some of those unbound providers are slow and I want them to > not affect timing of the other dump providers. Ugh, yuk. And tomorrow somebody writes a slow dump provider that has a thread affinity, and all of this breaks down. It’s ok for this CL but this needs to be fixed. (At the very least document it somewhere.) > Also I realized that I want a more stable sorting, not based on pointer arith. > See registration_order in the new patchset. Why? 
Do you want consistent ordering across different runs of Chrome? And if that is the case, do dump providers register in a well-defined order? Perhaps you can use the friendly name (with deep comparison) for ordering? (That would not fix the issue if it is not unique though.) https://codereview.chromium.org/1536533004/diff/80001/base/trace_event/memory... File base/trace_event/memory_dump_manager.cc (right): https://codereview.chromium.org/1536533004/diff/80001/base/trace_event/memory... base/trace_event/memory_dump_manager.cc:376: mdpinfo->disabled = true; Now that this is no longer inside the lock, is it possible for other threads to read stale data? https://codereview.chromium.org/1536533004/diff/80001/base/trace_event/memory... base/trace_event/memory_dump_manager.cc:380: // Tehcnically there is a data race here while accessing |mdpinfo.disabled| /s/Tehcnically/Technically/ https://codereview.chromium.org/1536533004/diff/80001/base/trace_event/memory... base/trace_event/memory_dump_manager.cc:388: if (!mdpinfo->disabled) { No, you need to keep the lock. (Or add a lock per MDPInfo.) Scenario: 1. This threads loads |mdpinfo->disabled|, sees it is not disabled, and takes the branch. 2. Somewhere during |TRACE_EVENT_WITH_FLOW1| this thread gets preempted. 3. On a different thread, the MDProvider is unsubscribed and deleted. The MDPInfo is kept alive properly, but the dump provider is gone. 4. The dumping thread resumes and uses the MDProvider after free. https://codereview.chromium.org/1536533004/diff/80001/base/trace_event/memory... base/trace_event/memory_dump_manager.cc:604: std::tie(b->task_runner, b->registration_order); Looks much nicer than before :D
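The std::tie comparator discussed in this exchange could look roughly like the sketch below. This is illustrative, not the actual patch: ProviderInfo is a simplified stand-in for MemoryDumpProviderInfo, and a raw pointer stands in for the task runner.

```cpp
#include <tuple>

// std::tie builds a lexicographic ordering: an "is unbound" flag comes
// first so that unbound providers (task_runner == nullptr) always sort
// last, and a registration counter is a stable tie-breaker that avoids
// ordering by pointer arithmetic.
struct ProviderInfo {
  const void* task_runner;  // nullptr means unbound.
  int registration_order;
};

bool DumpsBefore(const ProviderInfo& a, const ProviderInfo& b) {
  const bool a_unbound = (a.task_runner == nullptr);
  const bool b_unbound = (b.task_runner == nullptr);
  return std::tie(a_unbound, a.registration_order) <
         std::tie(b_unbound, b.registration_order);
}
```

Using a registration counter rather than the pointers themselves gives the same relative order on every run, which is the stability concern raised above.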
Description was changed to add, under the list of behavioral exceptions, the bullet: "Making the fail-safe logic more strict: if a dumper fails once, it will stay disabled forever." The rest of the description, quoted in full above, is unchanged.
https://codereview.chromium.org/1536533004/diff/80001/base/trace_event/memory... File base/trace_event/memory_dump_manager.cc (right): https://codereview.chromium.org/1536533004/diff/80001/base/trace_event/memory... base/trace_event/memory_dump_manager.cc:376: mdpinfo->disabled = true; On 2015/12/18 12:40:43, Ruud van Asseldonk wrote: > Now that this is no longer inside the lock, is it possible for other threads to > read stale data? This is precisely why I wrote the large block below. There is only one instance, and adding a lock wouldn't really solve anything. https://codereview.chromium.org/1536533004/diff/80001/base/trace_event/memory... base/trace_event/memory_dump_manager.cc:380: // Tehcnically there is a data race here while accessing |mdpinfo.disabled| On 2015/12/18 12:40:42, Ruud van Asseldonk wrote: > /s/Tehcnically/Technically/ Done. https://codereview.chromium.org/1536533004/diff/80001/base/trace_event/memory... base/trace_event/memory_dump_manager.cc:388: if (!mdpinfo->disabled) { On 2015/12/18 12:40:42, Ruud van Asseldonk wrote: > 3. On a different thread, the MDProvider is unsubscribed and deleted. > The MDPInfo is kept alive properly, but the dump provider is gone. This cannot happen. An MDProvider can be unsubscribed only on the same task_runner where it lives. And you cannot be pre-empted and unregistered by the same thread. https://codereview.chromium.org/1536533004/diff/80001/base/trace_event/memory... base/trace_event/memory_dump_manager.cc:604: std::tie(b->task_runner, b->registration_order); On 2015/12/18 12:40:42, Ruud van Asseldonk wrote: > Looks much nicer than before :D Acknowledged.
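The thread-affinity invariant defended in this reply can be modeled in a few lines. This is a toy sketch, not Chromium code: the names are illustrative, and std::thread::id stands in for the task-runner affinity check.

```cpp
#include <thread>

// A dump provider is owned by one thread; only that thread may dump or
// unregister it, so lock-free reads of its mutable state (|disabled|)
// are safe exactly when we are on the owning thread.
struct BoundProviderInfo {
  std::thread::id owner_thread = std::this_thread::get_id();
  bool disabled = false;

  // A lock-free read of |disabled| is safe only under this condition.
  bool OnOwnerThread() const {
    return std::this_thread::get_id() == owner_thread;
  }
};
```

Because unregistration and dumping both happen on the owner thread, the preempt-then-unregister-then-use-after-free scenario above cannot interleave: a thread cannot be preempted by itself.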
lgtm, thanks. https://codereview.chromium.org/1536533004/diff/100001/base/trace_event/memor... File base/trace_event/memory_dump_manager.cc (right): https://codereview.chromium.org/1536533004/diff/100001/base/trace_event/memor... base/trace_event/memory_dump_manager.cc:377: // We are now on the right thread to read non-const |mdpinfo| fields. Comment could be: Either we are on the right thread or the right thread is gone, so it is safe to read. https://codereview.chromium.org/1536533004/diff/100001/base/trace_event/memor... base/trace_event/memory_dump_manager.cc:586: // to avoid skewing timings of the other dump providers. I think there are much slower dump providers now than the memory maps. The cc, gpu and MemoryCache take a lot more time than memory maps.
Sorry, one more thought. https://codereview.chromium.org/1536533004/diff/100001/base/trace_event/memor... File base/trace_event/memory_dump_manager.h (right): https://codereview.chromium.org/1536533004/diff/100001/base/trace_event/memor... base/trace_event/memory_dump_manager.h:146: // the copy inside ProcessMemoryDumpAsyncState is erase()-d. Maybe also bring back the comment that says the non-const elements can only be accessed on the task_runner thread if it is not null, unless the thread no longer exists. Not really sure if it is needed.
https://codereview.chromium.org/1536533004/diff/100001/base/trace_event/memor... File base/trace_event/memory_dump_manager.cc (right): https://codereview.chromium.org/1536533004/diff/100001/base/trace_event/memor... base/trace_event/memory_dump_manager.cc:377: // We are now on the right thread to read non-const |mdpinfo| fields. On 2015/12/18 15:16:53, ssid wrote: > Comment could be : Either we are on right thread or the right thread is gone. > So, it is safe to read. Done. https://codereview.chromium.org/1536533004/diff/100001/base/trace_event/memor... base/trace_event/memory_dump_manager.cc:586: // to avoid skewing timings of the other dump providers. On 2015/12/18 15:16:53, ssid wrote: > I think there are much slower dump providers now, than the memory maps. The cc, > gpu and MemoryCache take a lot more time than memory maps. Yeah I think in future CLs I'll need to introduce some statically defined priority. https://codereview.chromium.org/1536533004/diff/100001/base/trace_event/memor... File base/trace_event/memory_dump_manager.h (right): https://codereview.chromium.org/1536533004/diff/100001/base/trace_event/memor... base/trace_event/memory_dump_manager.h:146: // the copy inside ProcessMemoryDumpAsyncState is erase()-d. On 2015/12/18 15:18:46, ssid wrote: > Maybe also get back the comment that says, the non-const elements can only be > accessed in the task_runner thread if it is not null, unless the thread does not > exist. Not really sure if needed. Done.
The CQ bit was checked by primiano@chromium.org
The patchset sent to the CQ was uploaded after l-g-t-m from ssid@chromium.org Link to the patchset: https://codereview.chromium.org/1536533004/#ps120001 (title: "comments")
CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/patch-status/1536533004/120001 View timeline at https://chromium-cq-status.appspot.com/patch-timeline/1536533004/120001
The CQ bit was unchecked by primiano@chromium.org
The CQ bit was checked by primiano@chromium.org
The patchset sent to the CQ was uploaded after l-g-t-m from ssid@chromium.org Link to the patchset: https://codereview.chromium.org/1536533004/#ps140001 (title: "fix on double registration")
CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/patch-status/1536533004/140001 View timeline at https://chromium-cq-status.appspot.com/patch-timeline/1536533004/140001
The CQ bit was unchecked by commit-bot@chromium.org
Try jobs failed on following builders: cast_shell_linux on tryserver.chromium.linux (JOB_FAILED, http://build.chromium.org/p/tryserver.chromium.linux/builders/cast_shell_linu...)
The CQ bit was checked by primiano@chromium.org
CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/patch-status/1536533004/140001 View timeline at https://chromium-cq-status.appspot.com/patch-timeline/1536533004/140001
The CQ bit was unchecked by primiano@chromium.org
Description was changed to set BUG=461788 (previously empty). The rest of the description, quoted in full above, is unchanged.
The CQ bit was checked by primiano@chromium.org
The patchset sent to the CQ was uploaded after l-g-t-m from ssid@chromium.org Link to the patchset: https://codereview.chromium.org/1536533004/#ps160001 (title: "remove DCHECK")
CQ is trying the patch. Follow status at https://chromium-cq-status.appspot.com/patch-status/1536533004/160001 View timeline at https://chromium-cq-status.appspot.com/patch-timeline/1536533004/160001
Message was sent while issue was closed.
Committed patchset #9 (id:160001)
Message was sent while issue was closed.
Description was changed: the following lines were appended after BUG=461788 (the rest of the description is unchanged):

Committed: https://crrev.com/9734733909e7cb41ef5c153f3c2d0927e8209133
Cr-Commit-Position: refs/heads/master@{#366374}
Message was sent while issue was closed.
Patchset 9 (id:??) landed as https://crrev.com/9734733909e7cb41ef5c153f3c2d0927e8209133 Cr-Commit-Position: refs/heads/master@{#366374}
Message was sent while issue was closed.
A revert of this CL (patchset #9 id:160001) has been created in https://codereview.chromium.org/1540283003/ by primiano@chromium.org. The reason for reverting is: Caused crash in pending-version-change-stuck-works-with-terminate.html layout test. See: https://build.chromium.org/p/chromium.webkit/builders/WebKit%20Mac10.10/build... .
Message was sent while issue was closed.
On 2015/12/21 15:23:30, Primiano Tucci wrote:
> A revert of this CL (patchset #9 id:160001) has been created in
> https://codereview.chromium.org/1540283003/ by primiano@chromium.org.
>
> The reason for reverting is: Caused crash in
> pending-version-change-stuck-works-with-terminate.html layout test.
>
> See:
> https://build.chromium.org/p/chromium.webkit/builders/WebKit%20Mac10.10/build...

Relanding in crrev.com/1546583002
The race condition has been fixed by crrev.com/1544663002