Created: 4 years, 4 months ago by gab
Modified: 4 years, 3 months ago
CC: chromium-reviews, fdoray+watch_chromium.org, robliao+watch_chromium.org, gab+watch_chromium.org, sadrul, alokp, fdoray
Base URL: https://chromium.googlesource.com/chromium/src.git@master
Target Ref: refs/pending/heads/master
Project: chromium
Visibility: Public
Description: Fix incorrect memory barrier usage I previously asked to be introduced.
I previously thought memory barriers forced a cache flush after writes
and a cache invalidation before reads. It turns out this is not true:
atomics do that by default; memory barriers don't change anything in
this regard.
What memory barriers *do* is guarantee a "happens-after" relationship:
- *if* thread B can see thread A's write to an atomic variable X (and
barriers were used after the write on A and before the read on B --
at a minimum this is ReleaseStore and AcquireLoad) then B is
guaranteed to also see at least all the other writes that A made
before writing X (for both atomic and non-atomic variables).
This is in fact how mutexes are implemented:
- adding to the above the requirement that only one caller can set M at
  a time (others block -- a test-and-set loop or whatever) and having a
  memory barrier after releasing M and before acquiring it, you get
  that anyone who acquires M is also guaranteed to see everything else
  the previous owner wrote before releasing M (atomic or not).
Now why barriers are not needed for the callsites addressed in this CL:
A) MessageLoop::GetThreadName(): I asked for these memory barriers in
https://codereview.chromium.org/1942053002 thinking they would force
a flush. As described above, they don't, so they've been useless. We
have two options:
   (1) enforce that MessageLoop::GetThreadName() callers have a
       "happens-after" relationship with
       MessageLoop::BindToCurrentThread() -- making |thread_id_| atomic
       wouldn't break the race; it would merely make it less likely
   (2) Add a WaitableEvent to enforce it.
I went with (1) since the existing DCHECK would have fired if this
wasn't the case (and (2) would be annoying to implement because of
AssertWaitAllowed()).
B) SchedulerWorker::Delegate::num_single_threaded_runners_: once again,
   barriers do nothing to single atomic variables, and sequential
   consistency isn't required around this variable (it's checked in
   isolation and other state takes care of its own synchronization; no
   thread assumes anything about other state based on this value).
C) TaskTracker::State::bits_: the increments/decrements follow the same
reasoning as (B). The only new requirement is that callers of
HasShutdownStarted() not assume sequential consistency with the
caller of Shutdown (i.e. not assume side-effects of things that "must
have happened" before the Shutdown() call -- which is true outside of
   |shutdown_lock_| and thus safe). Thanks to AtomicIncrement's RMW
   (read-modify-write) semantics, two threads racing for conflicting
   updates will always be serialized by the hardware such that one sees
   the side-effects of the other (i.e. this is the same race as if
   TaskTracker::State was using locks).
See this thread (sorry, internal) for extra discussion of the topic:
https://groups.google.com/a/google.com/d/topic/lucky-luke-dev/DfdrYh8XKI0/discussion
Useful external articles:
- http://preshing.com/20130618/atomic-vs-non-atomic-operations/
- http://preshing.com/20120710/memory-barriers-are-like-source-control-operations/
- http://preshing.com/20120913/acquire-and-release-semantics/
- http://preshing.com/20120612/an-introduction-to-lock-free-programming/
- http://preshing.com/20130922/acquire-and-release-fences/
- https://fgiesen.wordpress.com/2014/07/07/cache-coherency/
BUG=553459
Committed: https://crrev.com/70de8f5b3896ce8f44d0e52f2c851b2209b39b2a
Cr-Commit-Position: refs/heads/master@{#411860}
Patch Set 1 #
Messages
Total messages: 28 (19 generated)
The CQ bit was checked by gab@chromium.org to run a CQ dry run
Dry run: CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/v2/patch-status/codereview.chromium.or...
The CQ bit was checked by gab@chromium.org to run a CQ dry run
Patchset #1 (id:1) has been deleted
Dry run: CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/v2/patch-status/codereview.chromium.or...
gab@chromium.org changed reviewers: + danakj@chromium.org, robliao@chromium.org
@Dana/Rob: PTAL. @Bruce: please confirm that the documentation in the CL description is correct (and you're welcome to look at the code too if you want). CC alokp, whom I initially told to add the barrier in MessageLoop -- it was incorrect, so I want to make sure incorrect knowledge/usage doesn't spread to other portions of the codebase! CC fdoray FYI. Thanks! Gab
The CQ bit was unchecked by commit-bot@chromium.org
Dry run: This issue passed the CQ dry run.
> What memory barriers *do* is guarantee sequential consistency: I'd tweak that to say that what they *do* is guarantee a "happens-after" relationship: sequential consistency is one type of such relationship, with ReleaseStore and AcquireLoad being weaker versions of it.
Description was changed from

==========
Fix incorrect memory barrier usage I previously asked to be introduced.

I previously thought memory barriers forced a flush after writes and a cache-invalidation before reads. It turns out this is not true, atomics do that by default, memory barriers don't change anything in this regard.

What memory barriers *do* is guarantee sequential consistency:
- *if* thread B can see thread A's write to an atomic variable X (and barriers were used after the write on A and before the read on B -- at a minimum this is ReleaseStore and AcquireLoad) then B is guaranteed to also see at least all the other writes that A made before writing X (for both atomic and non-atomic variables).

This is in fact how mutexes are implemented:
- adding to the above the requirement that only one caller can set M at a time (others block -- test-and-set loop or wtv) and having a memory barrier after releasing M and before acquiring it, you get that anyone that acquires M is also guaranteed to see everything else the previous owner wrote before releasing M (atomic or not).

Now why barriers are not needed for the callsites addressed in this CL:

A) MessageLoop::GetThreadName(): I asked for these memory barriers in https://codereview.chromium.org/1942053002 thinking they would force a flush. As described above, they don't, so they've been useless. We have two options:
(1) enforce that MessageLoop::GetThreadName() callers have an "happens-after" relationship with MessageLoop::BindToCurrentThread() -- making |thread_id_| atomic wouldn't break the race it would merely make it less likely
(2) Add a WaitableEvent to enforce it.
I went with (1) since the existing DCHECK would have fired if this wasn't the case (and (2) would be annoying to implement because of AssertWaitAllowed()).

B) SchedulerWorker::Delegate::num_single_threaded_runners_: once again, barriers do nothing to single atomic variables. Since sequential consistency isn't required around this variable (it's checked in isolation and other state takes care of its own synchronization; no thread assumes anything about other state based on this value).

C) TaskTracker::State::bits_: the increments/decrements follow the same reasoning as (B). The only new requirement is that callers of HasShutdownStarted() not assume sequential consistency with the caller of Shutdown (i.e. not assume side-effects of things that "must have happened" before the Shutdown() call -- which is true outside of |shutdown_lock_| and thus safe). Thanks to AtomicIncrement's RMW (read-modify- write) semantics, two threads racing for conflicting updates will always be synchronized by the kernel such that one sees the side- effects of the other (i.e. this is the same race as if TaskTracker::State was using locks)

See this thread (sorry internal) for extra discussions on the topic:
https://groups.google.com/a/google.com/d/topic/lucky-luke-dev/DfdrYh8XKI0/dis...

Useful external articles:
- http://preshing.com/20130618/atomic-vs-non-atomic-operations/
- http://preshing.com/20120710/memory-barriers-are-like-source-control-operations/
- http://preshing.com/20120913/acquire-and-release-semantics/
- http://preshing.com/20120612/an-introduction-to-lock-free-programming/
- http://preshing.com/20130922/acquire-and-release-fences/
- https://fgiesen.wordpress.com/2014/07/07/cache-coherency/

BUG=553459
==========

to

==========
Fix incorrect memory barrier usage I previously asked to be introduced.

I previously thought memory barriers forced a flush after writes and a cache-invalidation before reads. It turns out this is not true, atomics do that by default, memory barriers don't change anything in this regard.

What memory barriers *do* is guarantee an "happens-after" relationship:
- *if* thread B can see thread A's write to an atomic variable X (and barriers were used after the write on A and before the read on B -- at a minimum this is ReleaseStore and AcquireLoad) then B is guaranteed to also see at least all the other writes that A made before writing X (for both atomic and non-atomic variables).

This is in fact how mutexes are implemented:
- adding to the above the requirement that only one caller can set M at a time (others block -- test-and-set loop or wtv) and having a memory barrier after releasing M and before acquiring it, you get that anyone that acquires M is also guaranteed to see everything else the previous owner wrote before releasing M (atomic or not).

Now why barriers are not needed for the callsites addressed in this CL:

A) MessageLoop::GetThreadName(): I asked for these memory barriers in https://codereview.chromium.org/1942053002 thinking they would force a flush. As described above, they don't, so they've been useless. We have two options:
(1) enforce that MessageLoop::GetThreadName() callers have an "happens-after" relationship with MessageLoop::BindToCurrentThread() -- making |thread_id_| atomic wouldn't break the race it would merely make it less likely
(2) Add a WaitableEvent to enforce it.
I went with (1) since the existing DCHECK would have fired if this wasn't the case (and (2) would be annoying to implement because of AssertWaitAllowed()).

B) SchedulerWorker::Delegate::num_single_threaded_runners_: once again, barriers do nothing to single atomic variables. Since sequential consistency isn't required around this variable (it's checked in isolation and other state takes care of its own synchronization; no thread assumes anything about other state based on this value).

C) TaskTracker::State::bits_: the increments/decrements follow the same reasoning as (B). The only new requirement is that callers of HasShutdownStarted() not assume sequential consistency with the caller of Shutdown (i.e. not assume side-effects of things that "must have happened" before the Shutdown() call -- which is true outside of |shutdown_lock_| and thus safe). Thanks to AtomicIncrement's RMW (read-modify- write) semantics, two threads racing for conflicting updates will always be synchronized by the kernel such that one sees the side- effects of the other (i.e. this is the same race as if TaskTracker::State was using locks)

See this thread (sorry internal) for extra discussions on the topic:
https://groups.google.com/a/google.com/d/topic/lucky-luke-dev/DfdrYh8XKI0/dis...

Useful external articles:
- http://preshing.com/20130618/atomic-vs-non-atomic-operations/
- http://preshing.com/20120710/memory-barriers-are-like-source-control-operations/
- http://preshing.com/20120913/acquire-and-release-semantics/
- http://preshing.com/20120612/an-introduction-to-lock-free-programming/
- http://preshing.com/20130922/acquire-and-release-fences/
- https://fgiesen.wordpress.com/2014/07/07/cache-coherency/

BUG=553459
==========
On 2016/08/11 21:32:37, brucedawson wrote: > > What memory barriers *do* is guarantee sequential consistency: > > I'd tweak that to say that what they *do* is guarantee a "happens-after" > relationship - sequential consistency is one type of such relationship with > ReleaseStore and AcquireLoad being weaker versions of this relationship. Thanks, done.
On 2016/08/11 21:43:11, gab (OOO soon - Sep 1) wrote: > On 2016/08/11 21:32:37, brucedawson wrote: > > > What memory barriers *do* is guarantee sequential consistency: > > > > I'd tweak that to say that what they *do* is guarantee a "happens-after" > > relationship - sequential consistency is one type of such relationship with > > ReleaseStore and AcquireLoad being weaker versions of this relationship. > > Thanks, done. lgtm+nits In your CL description, point C s/side- effects/side-effects/
On 2016/08/11 22:13:49, robliao wrote: > On 2016/08/11 21:43:11, gab (OOO soon - Sep 1) wrote: > > On 2016/08/11 21:32:37, brucedawson wrote: > > > > What memory barriers *do* is guarantee sequential consistency: > > > > > > I'd tweak that to say that what they *do* is guarantee a "happens-after" > > > relationship - sequential consistency is one type of such relationship with > > > ReleaseStore and AcquireLoad being weaker versions of this relationship. > > > > Thanks, done. > > lgtm+nits > > In your CL description, point C > s/side- effects/side-effects/ Thanks, done.
LGTM
The CQ bit was checked by gab@chromium.org
CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/v2/patch-status/codereview.chromium.or...
Message was sent while issue was closed.
Committed patchset #1 (id:20001)
Message was sent while issue was closed.
Patchset 1 (id:??) landed as https://crrev.com/70de8f5b3896ce8f44d0e52f2c851b2209b39b2a Cr-Commit-Position: refs/heads/master@{#411860} |