Chromium Code Reviews

Side by Side Diff: base/threading/thread_local_storage.cc

Issue 60743004: Implement chromium's TLS. (Closed) Base URL: https://chromium.googlesource.com/chromium/src.git@master
Patch Set: Created 7 years ago
1 // Copyright (c) 2012 The Chromium Authors. All rights reserved. 1 // Copyright (c) 2012 The Chromium Authors. All rights reserved.
2 // Use of this source code is governed by a BSD-style license that can be 2 // Use of this source code is governed by a BSD-style license that can be
3 // found in the LICENSE file. 3 // found in the LICENSE file.
4 4
5 #include "base/threading/thread_local_storage.h" 5 #include "base/threading/thread_local_storage.h"
6 6
7 #include <windows.h> 7 #include "base/atomicops.h"
8
9 #include "base/logging.h" 8 #include "base/logging.h"
10 9
10 using base::internal::PlatformThreadLocalStorage;
11 11
12 namespace { 12 namespace {
13 // In order to make TLS destructors work, we need to keep function 13 // In order to make TLS destructors work, we need to keep around a function
14 // pointers to the destructor for each TLS that we allocate. 14 // pointer to the destructor for each slot. We keep this array of pointers in a
15 // We make this work by allocating a single OS-level TLS, which 15 // global (static) array.
16 // contains an array of slots for the application to use. In 16 // We use the single OS-level TLS slot (giving us one pointer per thread) to
17 // parallel, we also allocate an array of destructors, which we 17 // hold a pointer to a per-thread array (table) of slots that we allocate to
18 // keep track of and call when threads terminate. 18 // Chromium consumers.
19 19
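The two-level design described in the comment above (one native OS TLS slot per thread holding a pointer to a table of Chromium slots, plus a global array of destructor pointers indexed by slot number) can be sketched roughly as follows. This is an illustrative reduction, not the actual patch's API: `AllocateSlot`, `Set`, and `Get` are hypothetical stand-ins, and `thread_local` stands in for the single native TLS key.

```cpp
#include <cassert>
#include <cstddef>

// Illustrative sketch of the two-level TLS scheme (not the real Chromium API).
const int kSlots = 64;

typedef void (*DestructorFunc)(void* value);

// Global, shared by all threads: one destructor pointer per slot.
DestructorFunc g_destructors[kSlots];

// Stand-in for the per-thread table reached through the one native TLS key.
thread_local void* g_table[kSlots];

// High-water mark of issued slots; slot 0 is reserved to mean "freed".
int g_last_used = 0;

int AllocateSlot(DestructorFunc destructor) {
  int slot = ++g_last_used;  // first issued index is 1
  g_destructors[slot] = destructor;
  return slot;
}

void Set(int slot, void* value) { g_table[slot] = value; }
void* Get(int slot) { return g_table[slot]; }
```

The destructor array is global because destructors are per-slot, not per-thread; only the value table is thread-local.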
20 // g_native_tls_key is the one native TLS that we use. It stores our table. 20 // g_native_tls_key is the one native TLS that we use. It stores our table.
21 long g_native_tls_key = TLS_OUT_OF_INDEXES; 21 base::subtle::AtomicWord g_native_tls_key =
22 PlatformThreadLocalStorage::TLS_KEY_OUT_OF_INDEXES;
22 23
23 // g_last_used_tls_key is the high-water-mark of allocated thread local storage. 24 // g_last_used_tls_key is the high-water-mark of allocated thread local storage.
24 // Each allocation is an index into our g_tls_destructors[]. Each such index is 25 // Each allocation is an index into our g_tls_destructors[]. Each such index is
25 // assigned to the instance variable slot_ in a ThreadLocalStorage::Slot 26 // assigned to the instance variable slot_ in a ThreadLocalStorage::Slot
26 // instance. We reserve the value slot_ == 0 to indicate that the corresponding 27 // instance. We reserve the value slot_ == 0 to indicate that the corresponding
27 // instance of ThreadLocalStorage::Slot has been freed (i.e., destructor called, 28 // instance of ThreadLocalStorage::Slot has been freed (i.e., destructor called,
28 // etc.). This reserved use of 0 is then stated as the initial value of 29 // etc.). This reserved use of 0 is then stated as the initial value of
29 // g_last_used_tls_key, so that the first issued index will be 1. 30 // g_last_used_tls_key, so that the first issued index will be 1.
30 long g_last_used_tls_key = 0; 31 base::subtle::Atomic32 g_last_used_tls_key = 0;
31 32
32 // The maximum number of 'slots' in our thread local storage stack. 33 // The maximum number of 'slots' in our thread local storage stack.
33 const int kThreadLocalStorageSize = 64; 34 const int kThreadLocalStorageSize = 64;
34 35
35 // The maximum number of times to try to clear slots by calling destructors. 36 // The maximum number of times to try to clear slots by calling destructors.
36 // Use pthread naming convention for clarity. 37 // Use pthread naming convention for clarity.
37 const int kMaxDestructorIterations = kThreadLocalStorageSize; 38 const int kMaxDestructorIterations = kThreadLocalStorageSize;
38 39
39 // An array of destructor function pointers for the slots. If a slot has a 40 // An array of destructor function pointers for the slots. If a slot has a
40 // destructor, it will be stored in its corresponding entry in this array. 41 // destructor, it will be stored in its corresponding entry in this array.
41 // The elements are volatile to ensure that when the compiler reads the value 42 // The elements are volatile to ensure that when the compiler reads the value
42 // to potentially call the destructor, it does so once, and that value is tested 43 // to potentially call the destructor, it does so once, and that value is tested
43 // for null-ness and then used. Yes, that would be a weird de-optimization, 44 // for null-ness and then used. Yes, that would be a weird de-optimization,
44 // but I can imagine some register machines where it was just as easy to 45 // but I can imagine some register machines where it was just as easy to
45 // re-fetch an array element, and I want to be sure a call to free the key 46 // re-fetch an array element, and I want to be sure a call to free the key
46 // (i.e., null out the destructor entry) that happens on a separate thread can't 47 // (i.e., null out the destructor entry) that happens on a separate thread can't
47 // hurt the racy calls to the destructors on another thread. 48 // hurt the racy calls to the destructors on another thread.
48 volatile base::ThreadLocalStorage::TLSDestructorFunc 49 volatile base::ThreadLocalStorage::TLSDestructorFunc
49 g_tls_destructors[kThreadLocalStorageSize]; 50 g_tls_destructors[kThreadLocalStorageSize];
50 51
52 // This function is called to initialize our entire Chromium TLS system.
 53 // It may be called very early, and we need to complete almost all of the setup
54 // (initialization) before calling *any* memory allocator functions, which may
55 // recursively depend on this initialization.
56 // As a result, we use Atomics, and avoid anything (like a singleton) that might
57 // require memory allocations.
51 void** ConstructTlsVector() { 58 void** ConstructTlsVector() {
52 if (g_native_tls_key == TLS_OUT_OF_INDEXES) { 59 PlatformThreadLocalStorage::TLSKey key =
53 long value = TlsAlloc(); 60 base::subtle::NoBarrier_Load(&g_native_tls_key);
54 DCHECK(value != TLS_OUT_OF_INDEXES); 61 if (key == PlatformThreadLocalStorage::TLS_KEY_OUT_OF_INDEXES) {
62 CHECK(PlatformThreadLocalStorage::AllocTLS(&key));
55 63
 56 // Atomically test-and-set the tls_key. If the key is TLS_OUT_OF_INDEXES, 64 // TLS_KEY_OUT_OF_INDEXES is used in NoBarrier_CompareAndSwap to tell whether
 57 // go ahead and set it. Otherwise, do nothing, as another 65 // the key has been set. POSIX has no invalid key value, so we define an
 58 // thread already did our dirty work. 66 // almost impossible value to serve as one.
 59 if (TLS_OUT_OF_INDEXES != InterlockedCompareExchange( 67 // If we really do get TLS_KEY_OUT_OF_INDEXES as the key, just allocate
 60 &g_native_tls_key, value, TLS_OUT_OF_INDEXES)) { 68 // another TLS slot.
69 if (key == PlatformThreadLocalStorage::TLS_KEY_OUT_OF_INDEXES) {
70 PlatformThreadLocalStorage::TLSKey tmp = key;
71 CHECK(PlatformThreadLocalStorage::AllocTLS(&key) &&
72 key != PlatformThreadLocalStorage::TLS_KEY_OUT_OF_INDEXES);
73 PlatformThreadLocalStorage::FreeTLS(tmp);
74 }
75 // Atomically test-and-set the tls_key. If the key is
76 // TLS_KEY_OUT_OF_INDEXES, go ahead and set it. Otherwise, do nothing, as
77 // another thread already did our dirty work.
78 if (PlatformThreadLocalStorage::TLS_KEY_OUT_OF_INDEXES !=
79 base::subtle::NoBarrier_CompareAndSwap(&g_native_tls_key,
80 PlatformThreadLocalStorage::TLS_KEY_OUT_OF_INDEXES, key)) {
61 // We've been shortcut. Another thread replaced g_native_tls_key first so 81 // We've been shortcut. Another thread replaced g_native_tls_key first so
62 // we need to destroy our index and use the one the other thread got 82 // we need to destroy our index and use the one the other thread got
63 // first. 83 // first.
64 TlsFree(value); 84 PlatformThreadLocalStorage::FreeTLS(key);
65 } 85 }
86 key = base::subtle::NoBarrier_Load(&g_native_tls_key);
jar (doing other things) 2013/12/05 19:53:32 nit: I think we only need to do this when some oth
michaelbai 2013/12/05 22:56:15 Right, it should be between line 84 and 85. On 20
66 } 87 }
67 DCHECK(!TlsGetValue(g_native_tls_key)); 88 DCHECK(!PlatformThreadLocalStorage::GetTLSValue(key));
68 89
69 // Some allocators, such as TCMalloc, make use of thread local storage. 90 // Some allocators, such as TCMalloc, make use of thread local storage.
70 // As a result, any attempt to call new (or malloc) will lazily cause such a 91 // As a result, any attempt to call new (or malloc) will lazily cause such a
71 // system to initialize, which will include registering for a TLS key. If we 92 // system to initialize, which will include registering for a TLS key. If we
72 // are not careful here, then that request to create a key will call new back, 93 // are not careful here, then that request to create a key will call new back,
73 // and we'll have an infinite loop. We avoid that as follows: 94 // and we'll have an infinite loop. We avoid that as follows:
74 // Use a stack allocated vector, so that we don't have dependence on our 95 // Use a stack allocated vector, so that we don't have dependence on our
75 // allocator until our service is in place. (i.e., don't even call new until 96 // allocator until our service is in place. (i.e., don't even call new until
76 // after we're setup) 97 // after we're setup)
77 void* stack_allocated_tls_data[kThreadLocalStorageSize]; 98 void* stack_allocated_tls_data[kThreadLocalStorageSize];
78 memset(stack_allocated_tls_data, 0, sizeof(stack_allocated_tls_data)); 99 memset(stack_allocated_tls_data, 0, sizeof(stack_allocated_tls_data));
 79 // Ensure that any re-entrant calls change the temp version. 100 // Ensure that any re-entrant calls change the temp version.
80 TlsSetValue(g_native_tls_key, stack_allocated_tls_data); 101 PlatformThreadLocalStorage::SetTLSValue(key, stack_allocated_tls_data);
81 102
82 // Allocate an array to store our data. 103 // Allocate an array to store our data.
83 void** tls_data = new void*[kThreadLocalStorageSize]; 104 void** tls_data = new void*[kThreadLocalStorageSize];
84 memcpy(tls_data, stack_allocated_tls_data, sizeof(stack_allocated_tls_data)); 105 memcpy(tls_data, stack_allocated_tls_data, sizeof(stack_allocated_tls_data));
85 TlsSetValue(g_native_tls_key, tls_data); 106 PlatformThreadLocalStorage::SetTLSValue(key, tls_data);
86 return tls_data; 107 return tls_data;
87 } 108 }
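The lock-free initialization in `ConstructTlsVector` follows a common pattern: optimistically allocate a resource, publish it with a compare-and-swap, and free the local copy if another thread won the race. A minimal sketch of that pattern using `std::atomic` (rather than the `base::subtle` atomics the patch uses; `FakeAllocTLS`/`FakeFreeTLS` are stand-ins for `TlsAlloc`/`pthread_key_create` and their free counterparts):

```cpp
#include <atomic>
#include <cassert>

const long kInvalidKey = -1;
std::atomic<long> g_key(kInvalidKey);

long next_key = 0;
long FakeAllocTLS() { return next_key++; }  // stand-in for the OS allocator
void FakeFreeTLS(long) {}                   // stand-in for the OS free call

long EnsureKey() {
  long key = g_key.load(std::memory_order_relaxed);
  if (key == kInvalidKey) {
    // Optimistically allocate, then try to publish our key.
    long candidate = FakeAllocTLS();
    long expected = kInvalidKey;
    if (!g_key.compare_exchange_strong(expected, candidate)) {
      // Another thread published its key first; discard ours and use theirs.
      FakeFreeTLS(candidate);
    }
    key = g_key.load(std::memory_order_relaxed);
  }
  return key;
}
```

The final re-load picks up whichever key actually won the race, which is the same reason the patch re-reads `g_native_tls_key` after the `NoBarrier_CompareAndSwap`.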
88 109
89 // Called when we terminate a thread, this function calls any TLS destructors 110 } // namespace
90 // that are pending for this thread.
91 void WinThreadExit() {
92 if (g_native_tls_key == TLS_OUT_OF_INDEXES)
93 return;
94 111
95 void** tls_data = static_cast<void**>(TlsGetValue(g_native_tls_key)); 112 namespace base {
113
114 namespace internal {
115
116 void PlatformThreadLocalStorage::OnThreadExit(void* value) {
117 void** tls_data = static_cast<void**>(value);
 118 // |value| is NULL on Windows, which doesn't support TLS destructors;
jar (doing other things) 2013/12/05 19:53:32 Best would be to have the platform specific code f
michaelbai 2013/12/05 22:56:15 I wanted to avoid #if defined(OS_WIN) code here, I
 119 // in that case we get the value from the TLS key.
120 PlatformThreadLocalStorage::TLSKey key =
121 base::subtle::NoBarrier_Load(&g_native_tls_key);
122 if (!tls_data) {
123 if (key == PlatformThreadLocalStorage::TLS_KEY_OUT_OF_INDEXES)
124 return;
125
126 tls_data = static_cast<void**>(GetTLSValue(key));
127 }
96 // Maybe we have never initialized TLS for this thread. 128 // Maybe we have never initialized TLS for this thread.
97 if (!tls_data) 129 if (!tls_data)
98 return; 130 return;
99 131
100 // Some allocators, such as TCMalloc, use TLS. As a result, when a thread 132 // Some allocators, such as TCMalloc, use TLS. As a result, when a thread
101 // terminates, one of the destructor calls we make may be to shut down an 133 // terminates, one of the destructor calls we make may be to shut down an
 102 // allocator. We have to be careful that after we've shut down all of the 134 // allocator. We have to be careful that after we've shut down all of the
103 // known destructors (perchance including an allocator), that we don't call 135 // known destructors (perchance including an allocator), that we don't call
 104 // the allocator and cause it to resurrect itself (with no possible destructor 136 // the allocator and cause it to resurrect itself (with no possible destructor
105 // call to follow). We handle this problem as follows: 137 // call to follow). We handle this problem as follows:
106 // Switch to using a stack allocated vector, so that we don't have dependence 138 // Switch to using a stack allocated vector, so that we don't have dependence
107 // on our allocator after we have called all g_tls_destructors. (i.e., don't 139 // on our allocator after we have called all g_tls_destructors. (i.e., don't
108 // even call delete[] after we're done with destructors.) 140 // even call delete[] after we're done with destructors.)
109 void* stack_allocated_tls_data[kThreadLocalStorageSize]; 141 void* stack_allocated_tls_data[kThreadLocalStorageSize];
110 memcpy(stack_allocated_tls_data, tls_data, sizeof(stack_allocated_tls_data)); 142 memcpy(stack_allocated_tls_data, tls_data, sizeof(stack_allocated_tls_data));
111 // Ensure that any re-entrant calls change the temp version. 143 // Ensure that any re-entrant calls change the temp version.
112 TlsSetValue(g_native_tls_key, stack_allocated_tls_data); 144 PlatformThreadLocalStorage::SetTLSValue(key, stack_allocated_tls_data);
113 delete[] tls_data; // Our last dependence on an allocator. 145 delete[] tls_data; // Our last dependence on an allocator.
114 146
115 int remaining_attempts = kMaxDestructorIterations; 147 int remaining_attempts = kMaxDestructorIterations;
116 bool need_to_scan_destructors = true; 148 bool need_to_scan_destructors = true;
117 while (need_to_scan_destructors) { 149 while (need_to_scan_destructors) {
118 need_to_scan_destructors = false; 150 need_to_scan_destructors = false;
119 // Try to destroy the first-created-slot (which is slot 1) in our last 151 // Try to destroy the first-created-slot (which is slot 1) in our last
120 // destructor call. That user was able to function, and define a slot with 152 // destructor call. That user was able to function, and define a slot with
121 // no other services running, so perhaps it is a basic service (like an 153 // no other services running, so perhaps it is a basic service (like an
122 // allocator) and should also be destroyed last. If we get the order wrong, 154 // allocator) and should also be destroyed last. If we get the order wrong,
 123 // then we'll iterate several more times, so it is really not that 155 // then we'll iterate several more times, so it is really not that
124 // critical (but it might help). 156 // critical (but it might help).
125 for (int slot = g_last_used_tls_key; slot > 0; --slot) { 157 base::subtle::Atomic32 last_used_tls_key =
158 base::subtle::NoBarrier_Load(&g_last_used_tls_key);
159 for (int slot = last_used_tls_key; slot > 0; --slot) {
126 void* value = stack_allocated_tls_data[slot]; 160 void* value = stack_allocated_tls_data[slot];
127 if (value == NULL) 161 if (value == NULL)
128 continue; 162 continue;
163
129 base::ThreadLocalStorage::TLSDestructorFunc destructor = 164 base::ThreadLocalStorage::TLSDestructorFunc destructor =
130 g_tls_destructors[slot]; 165 g_tls_destructors[slot];
131 if (destructor == NULL) 166 if (destructor == NULL)
132 continue; 167 continue;
133 stack_allocated_tls_data[slot] = NULL; // pre-clear the slot. 168 stack_allocated_tls_data[slot] = NULL; // pre-clear the slot.
134 destructor(value); 169 destructor(value);
135 // Any destructor might have called a different service, which then set 170 // Any destructor might have called a different service, which then set
136 // a different slot to a non-NULL value. Hence we need to check 171 // a different slot to a non-NULL value. Hence we need to check
 137 // the whole vector again. This is required by the pthread standard. 172 // the whole vector again. This is required by the pthread standard.
138 need_to_scan_destructors = true; 173 need_to_scan_destructors = true;
139 } 174 }
140 if (--remaining_attempts <= 0) { 175 if (--remaining_attempts <= 0) {
141 NOTREACHED(); // Destructors might not have been called. 176 NOTREACHED(); // Destructors might not have been called.
142 break; 177 break;
143 } 178 }
144 } 179 }
145 180
146 // Remove our stack allocated vector. 181 // Remove our stack allocated vector.
147 TlsSetValue(g_native_tls_key, NULL); 182 PlatformThreadLocalStorage::SetTLSValue(key, NULL);
148 } 183 }
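The destructor loop in `OnThreadExit` mirrors the POSIX rule that pthread key destructors are re-run (up to `PTHREAD_DESTRUCTOR_ITERATIONS` times) whenever a destructor stores a new non-NULL value into some slot. A condensed, single-file sketch of that repeated scan; the names and the small `kSlots` here are illustrative only:

```cpp
#include <cassert>
#include <cstddef>

const int kSlots = 4;
const int kMaxIterations = kSlots;

typedef void (*Destructor)(void* value);

void* g_table[kSlots];
Destructor g_dtors[kSlots];

// Returns the number of full passes made. Keeps rescanning the whole table
// because any destructor call may have stored a new non-NULL value elsewhere.
int RunDestructors() {
  int passes = 0;
  bool need_scan = true;
  while (need_scan && passes < kMaxIterations) {
    need_scan = false;
    ++passes;
    for (int slot = kSlots - 1; slot > 0; --slot) {  // slot 0 is reserved
      void* value = g_table[slot];
      if (!value || !g_dtors[slot])
        continue;
      g_table[slot] = NULL;  // pre-clear the slot before calling out
      g_dtors[slot](value);
      need_scan = true;      // slot values may have changed; rescan
    }
  }
  return passes;
}
```

The pass counter plays the role of `kMaxDestructorIterations` in the patch: it bounds the work when destructors keep repopulating slots, at the cost of possibly leaving some destructors uncalled.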
149 184
150 } // namespace 185 } // namespace internal
151 186
152 namespace base {
153 187
154 ThreadLocalStorage::Slot::Slot(TLSDestructorFunc destructor) { 188 ThreadLocalStorage::Slot::Slot(TLSDestructorFunc destructor) {
155 initialized_ = false; 189 initialized_ = false;
156 slot_ = 0; 190 slot_ = 0;
157 Initialize(destructor); 191 Initialize(destructor);
158 } 192 }
159 193
160 bool ThreadLocalStorage::StaticSlot::Initialize(TLSDestructorFunc destructor) { 194 bool ThreadLocalStorage::StaticSlot::Initialize(TLSDestructorFunc destructor) {
161 if (g_native_tls_key == TLS_OUT_OF_INDEXES || !TlsGetValue(g_native_tls_key)) 195 PlatformThreadLocalStorage::TLSKey key =
196 base::subtle::NoBarrier_Load(&g_native_tls_key);
197 if (key == PlatformThreadLocalStorage::TLS_KEY_OUT_OF_INDEXES ||
198 !PlatformThreadLocalStorage::GetTLSValue(key))
162 ConstructTlsVector(); 199 ConstructTlsVector();
163 200
164 // Grab a new slot. 201 // Grab a new slot.
165 slot_ = InterlockedIncrement(&g_last_used_tls_key); 202 slot_ = base::subtle::NoBarrier_AtomicIncrement(&g_last_used_tls_key, 1);
166 DCHECK_GT(slot_, 0); 203 DCHECK_GT(slot_, 0);
167 if (slot_ >= kThreadLocalStorageSize) { 204 if (slot_ >= kThreadLocalStorageSize) {
jar (doing other things) 2013/12/05 19:53:32 Can you change this to a: CHECK_LT(slot_, kThreadL
michaelbai 2013/12/05 22:56:15 Done.
168 NOTREACHED(); 205 NOTREACHED();
169 return false; 206 return false;
170 } 207 }
171 208
172 // Setup our destructor. 209 // Setup our destructor.
173 g_tls_destructors[slot_] = destructor; 210 g_tls_destructors[slot_] = destructor;
174 initialized_ = true; 211 initialized_ = true;
175 return true; 212 return true;
176 } 213 }
177 214
178 void ThreadLocalStorage::StaticSlot::Free() { 215 void ThreadLocalStorage::StaticSlot::Free() {
179 // At this time, we don't reclaim old indices for TLS slots. 216 // At this time, we don't reclaim old indices for TLS slots.
180 // So all we need to do is wipe the destructor. 217 // So all we need to do is wipe the destructor.
181 DCHECK_GT(slot_, 0); 218 DCHECK_GT(slot_, 0);
182 DCHECK_LT(slot_, kThreadLocalStorageSize); 219 DCHECK_LT(slot_, kThreadLocalStorageSize);
183 g_tls_destructors[slot_] = NULL; 220 g_tls_destructors[slot_] = NULL;
184 slot_ = 0; 221 slot_ = 0;
185 initialized_ = false; 222 initialized_ = false;
186 } 223 }
187 224
188 void* ThreadLocalStorage::StaticSlot::Get() const { 225 void* ThreadLocalStorage::StaticSlot::Get() const {
189 void** tls_data = static_cast<void**>(TlsGetValue(g_native_tls_key)); 226 void** tls_data = static_cast<void**>(
227 PlatformThreadLocalStorage::GetTLSValue(
228 base::subtle::NoBarrier_Load(&g_native_tls_key)));
190 if (!tls_data) 229 if (!tls_data)
191 tls_data = ConstructTlsVector(); 230 tls_data = ConstructTlsVector();
192 DCHECK_GT(slot_, 0); 231 DCHECK_GT(slot_, 0);
193 DCHECK_LT(slot_, kThreadLocalStorageSize); 232 DCHECK_LT(slot_, kThreadLocalStorageSize);
194 return tls_data[slot_]; 233 return tls_data[slot_];
195 } 234 }
196 235
197 void ThreadLocalStorage::StaticSlot::Set(void* value) { 236 void ThreadLocalStorage::StaticSlot::Set(void* value) {
198 void** tls_data = static_cast<void**>(TlsGetValue(g_native_tls_key)); 237 void** tls_data = static_cast<void**>(
238 PlatformThreadLocalStorage::GetTLSValue(
239 base::subtle::NoBarrier_Load(&g_native_tls_key)));
199 if (!tls_data) 240 if (!tls_data)
200 tls_data = ConstructTlsVector(); 241 tls_data = ConstructTlsVector();
201 DCHECK_GT(slot_, 0); 242 DCHECK_GT(slot_, 0);
202 DCHECK_LT(slot_, kThreadLocalStorageSize); 243 DCHECK_LT(slot_, kThreadLocalStorageSize);
203 tls_data[slot_] = value; 244 tls_data[slot_] = value;
204 } 245 }
205 246
206 } // namespace base 247 } // namespace base
207
208 // Thread Termination Callbacks.
209 // Windows doesn't support a per-thread destructor with its
210 // TLS primitives. So, we build it manually by inserting a
211 // function to be called on each thread's exit.
212 // This magic is from http://www.codeproject.com/threads/tls.asp
213 // and it works for VC++ 7.0 and later.
214
215 // Force a reference to _tls_used to make the linker create the TLS directory
216 // if it's not already there. (e.g. if __declspec(thread) is not used).
217 // Force a reference to p_thread_callback_base to prevent whole program
218 // optimization from discarding the variable.
219 #ifdef _WIN64
220
221 #pragma comment(linker, "/INCLUDE:_tls_used")
222 #pragma comment(linker, "/INCLUDE:p_thread_callback_base")
223
224 #else // _WIN64
225
226 #pragma comment(linker, "/INCLUDE:__tls_used")
227 #pragma comment(linker, "/INCLUDE:_p_thread_callback_base")
228
229 #endif // _WIN64
230
231 // Static callback function to call with each thread termination.
232 void NTAPI OnThreadExit(PVOID module, DWORD reason, PVOID reserved) {
233 // On XP SP0 & SP1, the DLL_PROCESS_ATTACH is never seen. It is sent on SP2+
234 // and on W2K and W2K3. So don't assume it is sent.
235 if (DLL_THREAD_DETACH == reason || DLL_PROCESS_DETACH == reason)
236 WinThreadExit();
237 }
238
239 // .CRT$XLA to .CRT$XLZ is an array of PIMAGE_TLS_CALLBACK pointers that are
240 // called automatically by the OS loader code (not the CRT) when the module is
241 // loaded and on thread creation. They are NOT called if the module has been
242 // loaded by a LoadLibrary() call. It must have implicitly been loaded at
243 // process startup.
244 // By implicitly loaded, I mean that it is directly referenced by the main EXE
245 // or by one of its dependent DLLs. Delay-loaded DLL doesn't count as being
246 // implicitly loaded.
247 //
248 // See VC\crt\src\tlssup.c for reference.
249
250 // extern "C" suppresses C++ name mangling so we know the symbol name for the
251 // linker /INCLUDE:symbol pragma above.
252 extern "C" {
253 // The linker must not discard p_thread_callback_base. (We force a reference
254 // to this variable with a linker /INCLUDE:symbol pragma to ensure that.) If
255 // this variable is discarded, the OnThreadExit function will never be called.
256 #ifdef _WIN64
257
258 // .CRT section is merged with .rdata on x64 so it must be constant data.
259 #pragma const_seg(".CRT$XLB")
260 // When defining a const variable, it must have external linkage to be sure the
261 // linker doesn't discard it.
262 extern const PIMAGE_TLS_CALLBACK p_thread_callback_base;
263 const PIMAGE_TLS_CALLBACK p_thread_callback_base = OnThreadExit;
264
265 // Reset the default section.
266 #pragma const_seg()
267
268 #else // _WIN64
269
270 #pragma data_seg(".CRT$XLB")
271 PIMAGE_TLS_CALLBACK p_thread_callback_base = OnThreadExit;
272
273 // Reset the default section.
274 #pragma data_seg()
275
276 #endif // _WIN64
277 } // extern "C"
