Chromium Code Reviews

Unified Diff: include/private/SkWeakRefCnt.h

Issue 1867863002: Convert SkRefCnt to std::atomic. (Closed) Base URL: https://skia.googlesource.com/skia.git@master
Patch Set: Restore to 1. Created 4 years, 8 months ago
 /*
  * Copyright 2012 Google Inc.
  *
  * Use of this source code is governed by a BSD-style license that can be
  * found in the LICENSE file.
  */
 
 #ifndef SkWeakRefCnt_DEFINED
 #define SkWeakRefCnt_DEFINED
 
 #include "SkRefCnt.h"
-#include "../private/SkAtomics.h"
+#include <atomic>
 
 /** \class SkWeakRefCnt
 
     SkWeakRefCnt is the base class for objects that may be shared by multiple
     objects. When an existing strong owner wants to share a reference, it calls
     ref(). When a strong owner wants to release its reference, it calls
     unref(). When the shared object's strong reference count goes to zero as
     the result of an unref() call, its (virtual) weak_dispose method is called.
     It is an error for the destructor to be called explicitly (or via the
     object going out of scope on the stack or calling delete) if
 
(...skipping 32 matching lines...)
 
         The strong references collectively hold one weak reference. When the
         strong reference count goes to zero, the collectively held weak
         reference is released.
     */
     SkWeakRefCnt() : SkRefCnt(), fWeakCnt(1) {}
 
     /** Destruct, asserting that the weak reference count is 1.
     */
     virtual ~SkWeakRefCnt() {
 #ifdef SK_DEBUG
-        SkASSERT(fWeakCnt == 1);
-        fWeakCnt = 0;
+        SkASSERT(getWeakCnt() == 1);
+        fWeakCnt.store(0, std::memory_order_relaxed);
 #endif
     }
 
-    /** Return the weak reference count.
-    */
-    int32_t getWeakCnt() const { return fWeakCnt; }
+#ifdef SK_DEBUG
+    /** Return the weak reference count. */
+    int32_t getWeakCnt() const {
+        return fWeakCnt.load(std::memory_order_relaxed);
+    }
 
-#ifdef SK_DEBUG
     void validate() const {
         this->INHERITED::validate();
-        SkASSERT(fWeakCnt > 0);
+        SkASSERT(getWeakCnt() > 0);
     }
 #endif
 
+private:
+    /** If fRefCnt is 0, returns 0.
+     *  Otherwise increments fRefCnt, acquires, and returns the old value.
+     */
+    int32_t atomic_conditional_acquire_strong_ref() const {
+        int32_t prev = fRefCnt.load(std::memory_order_relaxed);
+        do {
+            if (0 == prev) {
+                break;
+            }
+        } while(!fRefCnt.compare_exchange_weak(prev, prev+1, std::memory_order_acquire,
+                                                             std::memory_order_relaxed));
+        return prev;
+    }
+
+public:
     /** Creates a strong reference from a weak reference, if possible. The
         caller must already be an owner. If try_ref() returns true the owner
         is in possession of an additional strong reference. Both the original
         reference and new reference must be properly unreferenced. If try_ref()
         returns false, no strong reference could be created and the owner's
         reference is in the same state as before the call.
     */
     bool SK_WARN_UNUSED_RESULT try_ref() const {
-        if (sk_atomic_conditional_inc(&fRefCnt) != 0) {
+        if (atomic_conditional_acquire_strong_ref() != 0) {
             // Acquire barrier (L/SL), if not provided above.
             // Prevents subsequent code from happening before the increment.
-            sk_membar_acquire__after_atomic_conditional_inc();
             return true;
         }
         return false;
     }
 
     /** Increment the weak reference count. Must be balanced by a call to
         weak_unref().
     */
     void weak_ref() const {
-        SkASSERT(fRefCnt > 0);
-        SkASSERT(fWeakCnt > 0);
-        sk_atomic_inc(&fWeakCnt); // No barrier required.
+        SkASSERT(getRefCnt() > 0);
+        SkASSERT(getWeakCnt() > 0);
+        // No barrier required.
+        (void)fWeakCnt.fetch_add(+1, std::memory_order_relaxed);
     }
 
     /** Decrement the weak reference count. If the weak reference count is 1
         before the decrement, then call delete on the object. Note that if this
         is the case, then the object needs to have been allocated via new, and
         not on the stack.
     */
     void weak_unref() const {
-        SkASSERT(fWeakCnt > 0);
-        // Release barrier (SL/S), if not provided below.
-        if (sk_atomic_dec(&fWeakCnt) == 1) {
-            // Acquire barrier (L/SL), if not provided above.
-            // Prevents code in destructor from happening before the decrement.
-            sk_membar_acquire__after_atomic_dec();
+        SkASSERT(getWeakCnt() > 0);
+        // A release here acts in place of all releases we "should" have been doing in ref().
+        if (1 == fWeakCnt.fetch_add(-1, std::memory_order_acq_rel)) {
+            // Like try_ref(), the acquire is only needed on success, to make sure
+            // code in internal_dispose() doesn't happen before the decrement.
 #ifdef SK_DEBUG
             // so our destructor won't complain
-            fWeakCnt = 1;
+            fWeakCnt.store(1, std::memory_order_relaxed);
 #endif
             this->INHERITED::internal_dispose();
         }
     }
 
     /** Returns true if there are no strong references to the object. When this
         is the case all future calls to try_ref() will return false.
     */
     bool weak_expired() const {
-        return fRefCnt == 0;
+        return fRefCnt.load(std::memory_order_relaxed) == 0;
     }
 
 protected:
     /** Called when the strong reference count goes to zero. This allows the
         object to free any resources it may be holding. Weak references may
         still exist and their level of allowed access to the object is defined
         by the object's class.
     */
     virtual void weak_dispose() const {
     }
 
 private:
     /** Called when the strong reference count goes to zero. Calls weak_dispose
         on the object and releases the implicit weak reference held
         collectively by the strong references.
     */
     void internal_dispose() const override {
         weak_dispose();
         weak_unref();
     }
 
     /* Invariant: fWeakCnt = #weak + (fRefCnt > 0 ? 1 : 0) */
-    mutable int32_t fWeakCnt;
+    mutable std::atomic<int32_t> fWeakCnt;
 
     typedef SkRefCnt INHERITED;
 };
 
 #endif
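
The heart of the new code is atomic_conditional_acquire_strong_ref(): a compare_exchange_weak loop that bumps fRefCnt only while it is still non-zero, so a weak holder can never resurrect an object whose strong count has already reached zero, and the acquire ordering is paid only on the successful increment. Below is a minimal standalone sketch of the same idiom; conditional_inc and the demo in main() are illustrative names, not Skia API.

#include <atomic>
#include <cstdint>
#include <cstdio>

// Increment 'count' only if it is currently non-zero; return the old value.
// Success uses memory_order_acquire so later reads of the object cannot be
// reordered before the increment; failure can stay relaxed, we just retry.
static int32_t conditional_inc(std::atomic<int32_t>& count) {
    int32_t prev = count.load(std::memory_order_relaxed);
    do {
        if (prev == 0) {
            break;  // expired: report 0 and leave the count untouched
        }
        // compare_exchange_weak may fail spuriously or because another
        // thread changed the value; 'prev' is reloaded on failure.
    } while (!count.compare_exchange_weak(prev, prev + 1,
                                          std::memory_order_acquire,
                                          std::memory_order_relaxed));
    return prev;
}

int main() {
    std::atomic<int32_t> c{1};
    std::printf("%d\n", conditional_inc(c));  // 1: succeeded, c is now 2
    c.store(0, std::memory_order_relaxed);
    std::printf("%d\n", conditional_inc(c));  // 0: object expired, c stays 0
}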
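
weak_unref() replaces the old sk_atomic_dec() plus explicit barrier pair with a single fetch_add(-1, std::memory_order_acq_rel). That is the standard reference-count recipe: every decrement releases the decrementing thread's writes, and the thread that sees the old value 1 (taking the count to zero) acquires all of them before tearing the object down. A cut-down illustration using a hypothetical RefCounted type, not the Skia classes:

#include <atomic>
#include <cstdint>

struct RefCounted {
    mutable std::atomic<int32_t> fCnt{1};

    void ref() const {
        // Taking another reference publishes nothing, so relaxed suffices.
        fCnt.fetch_add(1, std::memory_order_relaxed);
    }

    void unref() const {
        // Release: makes this thread's writes visible to the thread that
        // performs the final decrement. Acquire: on the zero path, keeps
        // the delete from moving ahead of the decrement. acq_rel does both,
        // mirroring the fetch_add(-1, acq_rel) in the patch.
        if (fCnt.fetch_add(-1, std::memory_order_acq_rel) == 1) {
            delete this;
        }
    }

    virtual ~RefCounted() = default;
};

A common refinement decrements with memory_order_release and issues std::atomic_thread_fence(std::memory_order_acquire) only on the zero path, avoiding the acquire cost on every unref; the patch keeps the simpler acq_rel form.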
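
The invariant recorded at the bottom of the file, fWeakCnt = #weak + (fRefCnt > 0 ? 1 : 0), is what makes the protocol usable from the caller's side: the strong owners collectively hold one weak reference, so the counts stay alive (and try_ref() stays callable) until the last weak reference is released. Here is a sketch of that caller-side pattern; WeakHandle is a hypothetical helper that assumes only the weak_ref()/weak_unref()/try_ref()/unref() contract documented in the header:

// Hypothetical RAII holder for a weak reference to any SkWeakRefCnt-like T.
template <typename T>
class WeakHandle {
public:
    explicit WeakHandle(T* obj) : fObj(obj) { fObj->weak_ref(); }
    ~WeakHandle() { fObj->weak_unref(); }
    WeakHandle(const WeakHandle&) = delete;
    WeakHandle& operator=(const WeakHandle&) = delete;

    // Try to upgrade to a strong reference. Returns nullptr if the object
    // has expired; otherwise the caller owns one strong ref and must unref().
    T* lock() const { return fObj->try_ref() ? fObj : nullptr; }

private:
    T* fObj;
};

// Usage (given a T* the caller already strongly owns):
//   WeakHandle<T> weak(obj);
//   ...
//   if (T* strong = weak.lock()) {
//       // use strong...
//       strong->unref();   // balance the ref taken by try_ref()
//   }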
