Chromium Code Reviews

Side by Side Diff: base/threading/thread_local_storage.cc

Issue 60743004: Implement chromium's TLS. (Closed) Base URL: https://chromium.googlesource.com/chromium/src.git@master
Patch Set: Created 7 years ago
1 // Copyright (c) 2012 The Chromium Authors. All rights reserved. 1 // Copyright (c) 2012 The Chromium Authors. All rights reserved.
2 // Use of this source code is governed by a BSD-style license that can be 2 // Use of this source code is governed by a BSD-style license that can be
3 // found in the LICENSE file. 3 // found in the LICENSE file.
4 4
5 #include "base/threading/thread_local_storage.h" 5 #include "base/threading/thread_local_storage.h"
6 6
7 #include <windows.h> 7 #include "base/atomicops.h"
8
9 #include "base/logging.h" 8 #include "base/logging.h"
10 9
10 using base::internal::PlatformThreadLocalStorage;
11 11
12 namespace { 12 namespace {
13 // In order to make TLS destructors work, we need to keep function 13 // In order to make TLS destructors work, we need to keep function
14 // pointers to the destructor for each TLS that we allocate. 14 // pointers to the destructor for each TLS that we allocate.
15 // We make this work by allocating a single OS-level TLS, which 15 // We make this work by allocating a single OS-level TLS, which
16 // contains an array of slots for the application to use. In 16 // contains an array of slots for the application to use. In
17 // parallel, we also allocate an array of destructors, which we 17 // parallel, we also allocate an array of destructors, which we
18 // keep track of and call when threads terminate. 18 // keep track of and call when threads terminate.
jar (doing other things) 2013/11/26 20:05:24 nit: Starting on line 13, better comment would be:
michaelbai 2013/11/27 00:29:26 Done.
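For orientation, the scheme described in the comment above boils down to two parallel structures: one OS-level key whose per-thread value is a fixed-size array of slot values, and a process-wide array of destructor pointers indexed by slot. The sketch below is an illustration only, written against raw pthreads with simplified names (kSlots, g_native_key, g_destructors and GetSlotValue are not the patch's identifiers); the real code goes through PlatformThreadLocalStorage so the same logic also works on Windows.

#include <pthread.h>
#include <cstddef>

namespace sketch {

const int kSlots = 64;                 // mirrors kThreadLocalStorageSize
typedef void (*DestructorFn)(void*);   // mirrors TLSDestructorFunc

// The single OS-level TLS key. Its per-thread value is a void*[kSlots]
// array, one entry per chromium-level slot.
pthread_key_t g_native_key;

// Process-wide parallel table: g_destructors[i] is run against slot i's
// value when a thread exits (if both are non-NULL).
DestructorFn g_destructors[kSlots];

// Reading a slot is just an index into the calling thread's array.
void* GetSlotValue(int slot) {
  void** vector = static_cast<void**>(pthread_getspecific(g_native_key));
  return vector ? vector[slot] : NULL;
}

}  // namespace sketch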
19 19
20 // g_native_tls_key is the one native TLS that we use. It stores our table. 20 // g_native_tls_key is the one native TLS that we use. It stores our table.
21 long g_native_tls_key = TLS_OUT_OF_INDEXES; 21 PlatformThreadLocalStorage::TLSKey g_native_tls_key =
22 PlatformThreadLocalStorage::TLS_KEY_OUT_OF_INDEXES;
22 23
23 // g_last_used_tls_key is the high-water-mark of allocated thread local storage. 24 // g_last_used_tls_key is the high-water-mark of allocated thread local storage.
24 // Each allocation is an index into our g_tls_destructors[]. Each such index is 25 // Each allocation is an index into our g_tls_destructors[]. Each such index is
25 // assigned to the instance variable slot_ in a ThreadLocalStorage::Slot 26 // assigned to the instance variable slot_ in a ThreadLocalStorage::Slot
26 // instance. We reserve the value slot_ == 0 to indicate that the corresponding 27 // instance. We reserve the value slot_ == 0 to indicate that the corresponding
27 // instance of ThreadLocalStorage::Slot has been freed (i.e., destructor called, 28 // instance of ThreadLocalStorage::Slot has been freed (i.e., destructor called,
28 // etc.). This reserved use of 0 is then stated as the initial value of 29 // etc.). This reserved use of 0 is then stated as the initial value of
29 // g_last_used_tls_key, so that the first issued index will be 1. 30 // g_last_used_tls_key, so that the first issued index will be 1.
30 long g_last_used_tls_key = 0; 31 base::subtle::Atomic32 g_last_used_tls_key = 0;
31 32
32 // The maximum number of 'slots' in our thread local storage stack. 33 // The maximum number of 'slots' in our thread local storage stack.
33 const int kThreadLocalStorageSize = 64; 34 const int kThreadLocalStorageSize = 64;
34 35
35 // The maximum number of times to try to clear slots by calling destructors. 36 // The maximum number of times to try to clear slots by calling destructors.
36 // Use pthread naming convention for clarity. 37 // Use pthread naming convention for clarity.
37 const int kMaxDestructorIterations = kThreadLocalStorageSize; 38 const int kMaxDestructorIterations = kThreadLocalStorageSize;
38 39
39 // An array of destructor function pointers for the slots. If a slot has a 40 // An array of destructor function pointers for the slots. If a slot has a
40 // destructor, it will be stored in its corresponding entry in this array. 41 // destructor, it will be stored in its corresponding entry in this array.
41 // The elements are volatile to ensure that when the compiler reads the value 42 // The elements are volatile to ensure that when the compiler reads the value
42 // to potentially call the destructor, it does so once, and that value is tested 43 // to potentially call the destructor, it does so once, and that value is tested
43 // for null-ness and then used. Yes, that would be a weird de-optimization, 44 // for null-ness and then used. Yes, that would be a weird de-optimization,
44 // but I can imagine some register machines where it was just as easy to 45 // but I can imagine some register machines where it was just as easy to
45 // re-fetch an array element, and I want to be sure a call to free the key 46 // re-fetch an array element, and I want to be sure a call to free the key
46 // (i.e., null out the destructor entry) that happens on a separate thread can't 47 // (i.e., null out the destructor entry) that happens on a separate thread can't
47 // hurt the racy calls to the destructors on another thread. 48 // hurt the racy calls to the destructors on another thread.
48 volatile base::ThreadLocalStorage::TLSDestructorFunc 49 volatile base::ThreadLocalStorage::TLSDestructorFunc
49 g_tls_destructors[kThreadLocalStorageSize]; 50 g_tls_destructors[kThreadLocalStorageSize];
50 51
51 void** ConstructTlsVector() { 52 void** ConstructTlsVector() {
jar (doing other things) 2013/11/26 20:05:24 Please add a comment: This function is called to
michaelbai 2013/11/27 00:29:26 Done.
52 if (g_native_tls_key == TLS_OUT_OF_INDEXES) { 53 if (g_native_tls_key == PlatformThreadLocalStorage::TLS_KEY_OUT_OF_INDEXES) {
jar (doing other things) 2013/11/26 20:05:24 I think you need an atomic access of g_native_tls_key
michaelbai 2013/11/27 00:29:26 Done.
53 long value = TlsAlloc(); 54 PlatformThreadLocalStorage::TLSKey key;
54 DCHECK(value != TLS_OUT_OF_INDEXES); 55 CHECK(PlatformThreadLocalStorage::AllocTLS(&key));
55 56
 57 // TLS_KEY_OUT_OF_INDEXES is used to find out whether the key is set or
 58 // not in NoBarrier_CompareAndSwap. POSIX doesn't have an invalid key, so
 59 // we define an almost-impossible value to stand in for it.
 60 // If we really do get TLS_KEY_OUT_OF_INDEXES as the value of a key, just
 61 // allocate another TLS slot.
62 if (key == PlatformThreadLocalStorage::TLS_KEY_OUT_OF_INDEXES) {
63 PlatformThreadLocalStorage::TLSKey tmp = key;
64 CHECK(PlatformThreadLocalStorage::AllocTLS(&key) &&
65 key != PlatformThreadLocalStorage::TLS_KEY_OUT_OF_INDEXES);
66 PlatformThreadLocalStorage::FreeTLS(tmp);
jar (doing other things) 2013/11/26 20:05:24 This and line 78 are perhaps your only uses of FreeTLS
michaelbai 2013/11/27 00:29:26 Right, I want it to be clean; it looks weird if there
67 }
56 // Atomically test-and-set the tls_key. If the key is TLS_OUT_OF_INDEXES, 68 // Atomically test-and-set the tls_key. If the key is TLS_OUT_OF_INDEXES,
jar (doing other things) 2013/11/26 20:05:24 This comment about TLS_OUT_OF_INDEXES seems to have
michaelbai 2013/11/27 00:29:26 I changed TLS_OUT_OF_INDEXES to TLS_KEY_OUT_OF_INDEXES
57 // go ahead and set it. Otherwise, do nothing, as another 69 // go ahead and set it. Otherwise, do nothing, as another
58 // thread already did our dirty work. 70 // thread already did our dirty work.
59 if (TLS_OUT_OF_INDEXES != InterlockedCompareExchange( 71 if (PlatformThreadLocalStorage::TLS_KEY_OUT_OF_INDEXES !=
60 &g_native_tls_key, value, TLS_OUT_OF_INDEXES)) { 72 base::subtle::NoBarrier_CompareAndSwap(
73 reinterpret_cast<base::subtle::Atomic32*>(&g_native_tls_key),
74 PlatformThreadLocalStorage::TLS_KEY_OUT_OF_INDEXES, key)) {
61 // We've been shortcut. Another thread replaced g_native_tls_key first so 75 // We've been shortcut. Another thread replaced g_native_tls_key first so
62 // we need to destroy our index and use the one the other thread got 76 // we need to destroy our index and use the one the other thread got
63 // first. 77 // first.
64 TlsFree(value); 78 PlatformThreadLocalStorage::FreeTLS(key);
65 } 79 }
66 } 80 }
67 DCHECK(!TlsGetValue(g_native_tls_key)); 81 DCHECK(!PlatformThreadLocalStorage::GetTLSValue(g_native_tls_key));
jar (doing other things) 2013/11/26 20:05:24 I think that once you declare a value to be Atomic32
michaelbai 2013/11/27 00:29:26 Thanks, Done.
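The race handled by this compare-and-swap is two threads both noticing the key is unset: each allocates a native key, one wins the swap on g_native_tls_key, and the loser frees the key it just allocated. Below is a stand-alone sketch of that pattern, using std::atomic and pthreads instead of the patch's base::subtle and PlatformThreadLocalStorage wrappers (kInvalidKey and EnsureKeyCreated are illustrative names):

#include <pthread.h>
#include <atomic>

namespace sketch {

// Sentinel meaning "no key allocated yet", analogous to TLS_KEY_OUT_OF_INDEXES.
// Assumes pthread_key_t is a trivially copyable integral type, as it is on Linux.
const pthread_key_t kInvalidKey = static_cast<pthread_key_t>(-1);
std::atomic<pthread_key_t> g_key(kInvalidKey);

void EnsureKeyCreated() {
  if (g_key.load() != kInvalidKey)
    return;                                   // fast path: key already published
  pthread_key_t candidate;
  pthread_key_create(&candidate, NULL);
  pthread_key_t expected = kInvalidKey;
  // Publish our key only if no other thread beat us to it.
  if (!g_key.compare_exchange_strong(expected, candidate)) {
    // Lost the race: another thread's key is already published, so throw
    // away the one we just allocated and use theirs from now on.
    pthread_key_delete(candidate);
  }
}

}  // namespace sketch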
68 82
69 // Some allocators, such as TCMalloc, make use of thread local storage. 83 // Some allocators, such as TCMalloc, make use of thread local storage.
70 // As a result, any attempt to call new (or malloc) will lazily cause such a 84 // As a result, any attempt to call new (or malloc) will lazily cause such a
71 // system to initialize, which will include registering for a TLS key. If we 85 // system to initialize, which will include registering for a TLS key. If we
72 // are not careful here, then that request to create a key will call new back, 86 // are not careful here, then that request to create a key will call new back,
73 // and we'll have an infinite loop. We avoid that as follows: 87 // and we'll have an infinite loop. We avoid that as follows:
74 // Use a stack allocated vector, so that we don't have dependence on our 88 // Use a stack allocated vector, so that we don't have dependence on our
75 // allocator until our service is in place. (i.e., don't even call new until 89 // allocator until our service is in place. (i.e., don't even call new until
76 // after we're setup) 90 // after we're setup)
77 void* stack_allocated_tls_data[kThreadLocalStorageSize]; 91 void* stack_allocated_tls_data[kThreadLocalStorageSize];
78 memset(stack_allocated_tls_data, 0, sizeof(stack_allocated_tls_data)); 92 memset(stack_allocated_tls_data, 0, sizeof(stack_allocated_tls_data));
 79 // Ensure that any re-entrant calls change the temp version. 93 // Ensure that any re-entrant calls change the temp version.
80 TlsSetValue(g_native_tls_key, stack_allocated_tls_data); 94 PlatformThreadLocalStorage::SetTLSValue(g_native_tls_key,
95 stack_allocated_tls_data);
81 96
82 // Allocate an array to store our data. 97 // Allocate an array to store our data.
83 void** tls_data = new void*[kThreadLocalStorageSize]; 98 void** tls_data = new void*[kThreadLocalStorageSize];
84 memcpy(tls_data, stack_allocated_tls_data, sizeof(stack_allocated_tls_data)); 99 memcpy(tls_data, stack_allocated_tls_data, sizeof(stack_allocated_tls_data));
85 TlsSetValue(g_native_tls_key, tls_data); 100 PlatformThreadLocalStorage::SetTLSValue(g_native_tls_key, tls_data);
86 return tls_data; 101 return tls_data;
87 } 102 }
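The stack-buffer dance above is the interesting part of ConstructTlsVector: operator new may itself touch TLS (e.g. via TCMalloc), so a temporary stack table is published before the heap table is allocated. Reduced to its essentials, again as an illustrative pthread sketch rather than the patch's code:

#include <pthread.h>
#include <cstring>

namespace sketch {

const int kSlots = 64;
extern pthread_key_t g_native_key;   // created once, as sketched earlier

void** ConstructVector() {
  // Publish a stack-allocated table first, so that if operator new re-enters
  // TLS it finds something usable instead of recursing back in here.
  void* stack_vector[kSlots];
  std::memset(stack_vector, 0, sizeof(stack_vector));
  pthread_setspecific(g_native_key, stack_vector);

  // Now it is safe to allocate the durable table and switch over to it.
  void** heap_vector = new void*[kSlots];
  std::memcpy(heap_vector, stack_vector, sizeof(stack_vector));
  pthread_setspecific(g_native_key, heap_vector);
  return heap_vector;
}

}  // namespace sketch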
88 103
89 // Called when we terminate a thread, this function calls any TLS destructors 104 } // namespace
90 // that are pending for this thread. 105
91 void WinThreadExit() { 106 namespace base {
92 if (g_native_tls_key == TLS_OUT_OF_INDEXES) 107
108 namespace internal {
109
110 void PlatformThreadLocalStorage::OnThreadExit() {
111 if (g_native_tls_key == PlatformThreadLocalStorage::TLS_KEY_OUT_OF_INDEXES)
93 return; 112 return;
94 113
95 void** tls_data = static_cast<void**>(TlsGetValue(g_native_tls_key)); 114 void** tls_data = static_cast<void**>(GetTLSValue(g_native_tls_key));
115
96 // Maybe we have never initialized TLS for this thread. 116 // Maybe we have never initialized TLS for this thread.
97 if (!tls_data) 117 if (!tls_data)
98 return; 118 return;
99 119
100 // Some allocators, such as TCMalloc, use TLS. As a result, when a thread 120 // Some allocators, such as TCMalloc, use TLS. As a result, when a thread
101 // terminates, one of the destructor calls we make may be to shut down an 121 // terminates, one of the destructor calls we make may be to shut down an
102 // allocator. We have to be careful that after we've shutdown all of the 122 // allocator. We have to be careful that after we've shutdown all of the
103 // known destructors (perchance including an allocator), that we don't call 123 // known destructors (perchance including an allocator), that we don't call
 104 // the allocator and cause it to resurrect itself (with no possible destructor 124 // the allocator and cause it to resurrect itself (with no possible destructor
105 // call to follow). We handle this problem as follows: 125 // call to follow). We handle this problem as follows:
106 // Switch to using a stack allocated vector, so that we don't have dependence 126 // Switch to using a stack allocated vector, so that we don't have dependence
107 // on our allocator after we have called all g_tls_destructors. (i.e., don't 127 // on our allocator after we have called all g_tls_destructors. (i.e., don't
108 // even call delete[] after we're done with destructors.) 128 // even call delete[] after we're done with destructors.)
109 void* stack_allocated_tls_data[kThreadLocalStorageSize]; 129 void* stack_allocated_tls_data[kThreadLocalStorageSize];
110 memcpy(stack_allocated_tls_data, tls_data, sizeof(stack_allocated_tls_data)); 130 memcpy(stack_allocated_tls_data, tls_data, sizeof(stack_allocated_tls_data));
111 // Ensure that any re-entrant calls change the temp version. 131 // Ensure that any re-entrant calls change the temp version.
112 TlsSetValue(g_native_tls_key, stack_allocated_tls_data); 132 PlatformThreadLocalStorage::SetTLSValue(g_native_tls_key,
133 stack_allocated_tls_data);
113 delete[] tls_data; // Our last dependence on an allocator. 134 delete[] tls_data; // Our last dependence on an allocator.
114 135
115 int remaining_attempts = kMaxDestructorIterations; 136 int remaining_attempts = kMaxDestructorIterations;
116 bool need_to_scan_destructors = true; 137 bool need_to_scan_destructors = true;
117 while (need_to_scan_destructors) { 138 while (need_to_scan_destructors) {
118 need_to_scan_destructors = false; 139 need_to_scan_destructors = false;
119 // Try to destroy the first-created-slot (which is slot 1) in our last 140 // Try to destroy the first-created-slot (which is slot 1) in our last
120 // destructor call. That user was able to function, and define a slot with 141 // destructor call. That user was able to function, and define a slot with
121 // no other services running, so perhaps it is a basic service (like an 142 // no other services running, so perhaps it is a basic service (like an
122 // allocator) and should also be destroyed last. If we get the order wrong, 143 // allocator) and should also be destroyed last. If we get the order wrong,
(...skipping 14 matching lines...)
137 // the whole vector again. This is a pthread standard. 158 // the whole vector again. This is a pthread standard.
138 need_to_scan_destructors = true; 159 need_to_scan_destructors = true;
139 } 160 }
140 if (--remaining_attempts <= 0) { 161 if (--remaining_attempts <= 0) {
141 NOTREACHED(); // Destructors might not have been called. 162 NOTREACHED(); // Destructors might not have been called.
142 break; 163 break;
143 } 164 }
144 } 165 }
145 166
146 // Remove our stack allocated vector. 167 // Remove our stack allocated vector.
147 TlsSetValue(g_native_tls_key, NULL); 168 PlatformThreadLocalStorage::SetTLSValue(g_native_tls_key, NULL);
148 } 169 }
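Most of the destructor loop is collapsed in the view above. Its shape follows the pthread destructor contract: a destructor may store fresh non-NULL values into other slots, so the table is re-scanned until a pass runs nothing, with a hard cap on the number of passes. The following is a stand-alone sketch of that general scheme, not the exact elided code, and RunDestructors plus the other names are illustrative:

#include <cstddef>

namespace sketch {

const int kSlots = 64;
const int kMaxPasses = kSlots;              // mirrors kMaxDestructorIterations
typedef void (*DestructorFn)(void*);
extern DestructorFn g_destructors[kSlots];  // parallel destructor table

void RunDestructors(void** slots) {
  for (int pass = 0; pass < kMaxPasses; ++pass) {
    bool ran_any = false;
    // Walk from the newest slot down to slot 1 (slot 0 is reserved), so the
    // earliest-created slots (likely basic services such as an allocator)
    // are destroyed last.
    for (int i = kSlots - 1; i > 0; --i) {
      void* value = slots[i];
      DestructorFn destructor = g_destructors[i];
      if (!value || !destructor)
        continue;
      slots[i] = NULL;         // clear before calling, per pthread semantics
      destructor(value);
      ran_any = true;
    }
    if (!ran_any)
      return;                  // a clean pass: nothing left to destroy
  }
  // If we get here, destructors may not all have been called; the real code
  // hits NOTREACHED() in this case.
}

}  // namespace sketch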
149 170
150 } // namespace 171 } // namespace internal
151 172
152 namespace base {
153 173
154 ThreadLocalStorage::Slot::Slot(TLSDestructorFunc destructor) { 174 ThreadLocalStorage::Slot::Slot(TLSDestructorFunc destructor) {
155 initialized_ = false; 175 initialized_ = false;
156 slot_ = 0; 176 slot_ = 0;
157 Initialize(destructor); 177 Initialize(destructor);
158 } 178 }
159 179
160 bool ThreadLocalStorage::StaticSlot::Initialize(TLSDestructorFunc destructor) { 180 bool ThreadLocalStorage::StaticSlot::Initialize(TLSDestructorFunc destructor) {
161 if (g_native_tls_key == TLS_OUT_OF_INDEXES || !TlsGetValue(g_native_tls_key)) 181 if (g_native_tls_key == PlatformThreadLocalStorage::TLS_KEY_OUT_OF_INDEXES ||
182 !PlatformThreadLocalStorage::GetTLSValue(g_native_tls_key))
162 ConstructTlsVector(); 183 ConstructTlsVector();
163 184
164 // Grab a new slot. 185 // Grab a new slot.
165 slot_ = InterlockedIncrement(&g_last_used_tls_key); 186 slot_ = base::subtle::NoBarrier_AtomicIncrement(
187 reinterpret_cast<base::subtle::Atomic32*>(&g_last_used_tls_key), 1);
166 DCHECK_GT(slot_, 0); 188 DCHECK_GT(slot_, 0);
167 if (slot_ >= kThreadLocalStorageSize) { 189 if (slot_ >= kThreadLocalStorageSize) {
168 NOTREACHED(); 190 NOTREACHED();
169 return false; 191 return false;
170 } 192 }
171 193
172 // Setup our destructor. 194 // Setup our destructor.
173 g_tls_destructors[slot_] = destructor; 195 g_tls_destructors[slot_] = destructor;
174 initialized_ = true; 196 initialized_ = true;
175 return true; 197 return true;
176 } 198 }
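Slot indices are handed out by a single atomic increment and are never recycled; Free() below only wipes the destructor entry. A stand-alone illustration of that allocation scheme follows (AllocateSlot is an illustrative name; base's NoBarrier_AtomicIncrement returns the incremented value, which std::atomic's fetch_add does not, hence the +1):

#include <atomic>

namespace sketch {

const int kSlots = 64;

// Index 0 is reserved to mean "freed / never initialized", so the counter
// starts at 0 and the first index handed out is 1.
std::atomic<int> g_last_used_slot(0);

// Returns a new slot index in [1, kSlots), or -1 when the table is exhausted
// (the real code treats exhaustion as NOTREACHED()).
int AllocateSlot() {
  int slot = g_last_used_slot.fetch_add(1) + 1;
  if (slot >= kSlots)
    return -1;
  return slot;
}

}  // namespace sketch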
177 199
178 void ThreadLocalStorage::StaticSlot::Free() { 200 void ThreadLocalStorage::StaticSlot::Free() {
179 // At this time, we don't reclaim old indices for TLS slots. 201 // At this time, we don't reclaim old indices for TLS slots.
180 // So all we need to do is wipe the destructor. 202 // So all we need to do is wipe the destructor.
181 DCHECK_GT(slot_, 0); 203 DCHECK_GT(slot_, 0);
182 DCHECK_LT(slot_, kThreadLocalStorageSize); 204 DCHECK_LT(slot_, kThreadLocalStorageSize);
183 g_tls_destructors[slot_] = NULL; 205 g_tls_destructors[slot_] = NULL;
184 slot_ = 0; 206 slot_ = 0;
185 initialized_ = false; 207 initialized_ = false;
186 } 208 }
187 209
188 void* ThreadLocalStorage::StaticSlot::Get() const { 210 void* ThreadLocalStorage::StaticSlot::Get() const {
189 void** tls_data = static_cast<void**>(TlsGetValue(g_native_tls_key)); 211 void** tls_data = static_cast<void**>(
212 PlatformThreadLocalStorage::GetTLSValue(g_native_tls_key));
190 if (!tls_data) 213 if (!tls_data)
191 tls_data = ConstructTlsVector(); 214 tls_data = ConstructTlsVector();
192 DCHECK_GT(slot_, 0); 215 DCHECK_GT(slot_, 0);
193 DCHECK_LT(slot_, kThreadLocalStorageSize); 216 DCHECK_LT(slot_, kThreadLocalStorageSize);
194 return tls_data[slot_]; 217 return tls_data[slot_];
195 } 218 }
196 219
197 void ThreadLocalStorage::StaticSlot::Set(void* value) { 220 void ThreadLocalStorage::StaticSlot::Set(void* value) {
198 void** tls_data = static_cast<void**>(TlsGetValue(g_native_tls_key)); 221 void** tls_data = static_cast<void**>(
222 PlatformThreadLocalStorage::GetTLSValue(g_native_tls_key));
199 if (!tls_data) 223 if (!tls_data)
200 tls_data = ConstructTlsVector(); 224 tls_data = ConstructTlsVector();
201 DCHECK_GT(slot_, 0); 225 DCHECK_GT(slot_, 0);
202 DCHECK_LT(slot_, kThreadLocalStorageSize); 226 DCHECK_LT(slot_, kThreadLocalStorageSize);
203 tls_data[slot_] = value; 227 tls_data[slot_] = value;
204 } 228 }
205 229
206 } // namespace base 230 } // namespace base
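For context, a typical client of this API wraps a Slot in a lazily-filled accessor. The example below is hypothetical (ThreadState, CleanupThreadState and g_state_slot are not real Chromium names) and uses only the constructor, Get() and Set() shown in this file; the destructor it registers is what OnThreadExit above eventually runs for each thread.

#include "base/threading/thread_local_storage.h"

namespace {

// Hypothetical per-thread state, used only to illustrate the Slot API.
struct ThreadState {
  int request_count;
};

// Runs at each thread's exit for that thread's non-NULL slot value.
void CleanupThreadState(void* value) {
  delete static_cast<ThreadState*>(value);
}

base::ThreadLocalStorage::Slot g_state_slot(&CleanupThreadState);

ThreadState* GetThreadState() {
  ThreadState* state = static_cast<ThreadState*>(g_state_slot.Get());
  if (!state) {
    state = new ThreadState();
    state->request_count = 0;
    g_state_slot.Set(state);
  }
  return state;
}

}  // namespace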
207
208 // Thread Termination Callbacks.
209 // Windows doesn't support a per-thread destructor with its
210 // TLS primitives. So, we build it manually by inserting a
211 // function to be called on each thread's exit.
212 // This magic is from http://www.codeproject.com/threads/tls.asp
213 // and it works for VC++ 7.0 and later.
214
215 // Force a reference to _tls_used to make the linker create the TLS directory
216 // if it's not already there. (e.g. if __declspec(thread) is not used).
217 // Force a reference to p_thread_callback_base to prevent whole program
218 // optimization from discarding the variable.
219 #ifdef _WIN64
220
221 #pragma comment(linker, "/INCLUDE:_tls_used")
222 #pragma comment(linker, "/INCLUDE:p_thread_callback_base")
223
224 #else // _WIN64
225
226 #pragma comment(linker, "/INCLUDE:__tls_used")
227 #pragma comment(linker, "/INCLUDE:_p_thread_callback_base")
228
229 #endif // _WIN64
230
231 // Static callback function to call with each thread termination.
232 void NTAPI OnThreadExit(PVOID module, DWORD reason, PVOID reserved) {
233 // On XP SP0 & SP1, the DLL_PROCESS_ATTACH is never seen. It is sent on SP2+
234 // and on W2K and W2K3. So don't assume it is sent.
235 if (DLL_THREAD_DETACH == reason || DLL_PROCESS_DETACH == reason)
236 WinThreadExit();
237 }
238
239 // .CRT$XLA to .CRT$XLZ is an array of PIMAGE_TLS_CALLBACK pointers that are
240 // called automatically by the OS loader code (not the CRT) when the module is
241 // loaded and on thread creation. They are NOT called if the module has been
242 // loaded by a LoadLibrary() call. It must have implicitly been loaded at
243 // process startup.
244 // By implicitly loaded, I mean that it is directly referenced by the main EXE
245 // or by one of its dependent DLLs. Delay-loaded DLL doesn't count as being
246 // implicitly loaded.
247 //
248 // See VC\crt\src\tlssup.c for reference.
249
250 // extern "C" suppresses C++ name mangling so we know the symbol name for the
251 // linker /INCLUDE:symbol pragma above.
252 extern "C" {
253 // The linker must not discard p_thread_callback_base. (We force a reference
254 // to this variable with a linker /INCLUDE:symbol pragma to ensure that.) If
255 // this variable is discarded, the OnThreadExit function will never be called.
256 #ifdef _WIN64
257
258 // .CRT section is merged with .rdata on x64 so it must be constant data.
259 #pragma const_seg(".CRT$XLB")
260 // When defining a const variable, it must have external linkage to be sure the
261 // linker doesn't discard it.
262 extern const PIMAGE_TLS_CALLBACK p_thread_callback_base;
263 const PIMAGE_TLS_CALLBACK p_thread_callback_base = OnThreadExit;
264
265 // Reset the default section.
266 #pragma const_seg()
267
268 #else // _WIN64
269
270 #pragma data_seg(".CRT$XLB")
271 PIMAGE_TLS_CALLBACK p_thread_callback_base = OnThreadExit;
272
273 // Reset the default section.
274 #pragma data_seg()
275
276 #endif // _WIN64
277 } // extern "C"
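All of this linker and .CRT$XLB machinery exists only because the Win32 TLS primitives have no per-thread destructor. On POSIX the equivalent hook presumably falls out of pthreads directly, since pthread_key_create accepts a destructor that is run with the key's non-NULL value at thread exit; a minimal sketch of that arrangement (illustrative only, not the platform code added by this patch):

#include <pthread.h>

namespace sketch {

// Stand-in for the real per-thread cleanup: it would walk the slot vector
// and run the registered destructors, as OnThreadExit does above.
void OnThreadExitHook(void* tls_vector) {
  (void)tls_vector;
}

pthread_key_t g_native_key;

// On POSIX the thread-exit hook is simply the key's own destructor; no
// linker tricks are required.
void CreateNativeKey() {
  pthread_key_create(&g_native_key, &OnThreadExitHook);
}

}  // namespace sketch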