Chromium Code Reviews

Side by Side Diff: src/third_party/valgrind/valgrind.h

Issue 8337007: Merge r8721, r8729 and r9334 to 3.4 branch (Closed) Base URL: http://v8.googlecode.com/svn/branches/3.4/
Patch Set: Created 9 years, 2 months ago
OLD | NEW
1 /* -*- c -*- 1 /* -*- c -*-
2 ---------------------------------------------------------------- 2 ----------------------------------------------------------------
3 3
4 Notice that the following BSD-style license applies to this one 4 Notice that the following BSD-style license applies to this one
5 file (valgrind.h) only. The rest of Valgrind is licensed under the 5 file (valgrind.h) only. The rest of Valgrind is licensed under the
6 terms of the GNU General Public License, version 2, unless 6 terms of the GNU General Public License, version 2, unless
7 otherwise indicated. See the COPYING file in the source 7 otherwise indicated. See the COPYING file in the source
8 distribution for details. 8 distribution for details.
9 9
10 ---------------------------------------------------------------- 10 ----------------------------------------------------------------
11 11
12 This file is part of Valgrind, a dynamic binary instrumentation 12 This file is part of Valgrind, a dynamic binary instrumentation
13 framework. 13 framework.
14 14
15 Copyright (C) 2000-2007 Julian Seward. All rights reserved. 15 Copyright (C) 2000-2010 Julian Seward. All rights reserved.
16 16
17 Redistribution and use in source and binary forms, with or without 17 Redistribution and use in source and binary forms, with or without
18 modification, are permitted provided that the following conditions 18 modification, are permitted provided that the following conditions
19 are met: 19 are met:
20 20
21 1. Redistributions of source code must retain the above copyright 21 1. Redistributions of source code must retain the above copyright
22 notice, this list of conditions and the following disclaimer. 22 notice, this list of conditions and the following disclaimer.
23 23
24 2. The origin of this software must not be misrepresented; you must 24 2. The origin of this software must not be misrepresented; you must
25 not claim that you wrote the original software. If you use this 25 not claim that you wrote the original software. If you use this
(...skipping 40 matching lines...)
66 unchanged. When not running on valgrind, each client request 66 unchanged. When not running on valgrind, each client request
67 consumes very few (eg. 7) instructions, so the resulting performance 67 consumes very few (eg. 7) instructions, so the resulting performance
68 loss is negligible unless you plan to execute client requests 68 loss is negligible unless you plan to execute client requests
69 millions of times per second. Nevertheless, if that is still a 69 millions of times per second. Nevertheless, if that is still a
70 problem, you can compile with the NVALGRIND symbol defined (gcc 70 problem, you can compile with the NVALGRIND symbol defined (gcc
71 -DNVALGRIND) so that client requests are not even compiled in. */ 71 -DNVALGRIND) so that client requests are not even compiled in. */
72 72
73 #ifndef __VALGRIND_H 73 #ifndef __VALGRIND_H
74 #define __VALGRIND_H 74 #define __VALGRIND_H
75 75
76
77 /* ------------------------------------------------------------------ */
78 /* VERSION NUMBER OF VALGRIND */
79 /* ------------------------------------------------------------------ */
80
81 /* Specify Valgrind's version number, so that user code can
82 conditionally compile based on our version number. Note that these
83 were introduced at version 3.6 and so do not exist in version 3.5
84 or earlier. The recommended way to use them to check for "version
85 X.Y or later" is (eg)
86
87 #if defined(__VALGRIND_MAJOR__) && defined(__VALGRIND_MINOR__) \
88 && (__VALGRIND_MAJOR__ > 3 \
89 || (__VALGRIND_MAJOR__ == 3 && __VALGRIND_MINOR__ >= 6))
90 */
91 #define __VALGRIND_MAJOR__ 3
92 #define __VALGRIND_MINOR__ 6
93
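A minimal, self-contained sketch of the version-check idiom the new comment recommends; HAVE_VALGRIND_3_6 is only an illustrative name, not something defined by this header:

    #if defined(__VALGRIND_MAJOR__) && defined(__VALGRIND_MINOR__) \
        && (__VALGRIND_MAJOR__ > 3 \
            || (__VALGRIND_MAJOR__ == 3 && __VALGRIND_MINOR__ >= 6))
    /* Compiling against valgrind.h from Valgrind 3.6 or later. */
    #  define HAVE_VALGRIND_3_6 1   /* illustrative name only */
    #else
    #  define HAVE_VALGRIND_3_6 0
    #endif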
94
76 #include <stdarg.h> 95 #include <stdarg.h>
77 #include <stdint.h> 96 #include <stdint.h>
78 97
79 /* Nb: this file might be included in a file compiled with -ansi. So 98 /* Nb: this file might be included in a file compiled with -ansi. So
80 we can't use C++ style "//" comments nor the "asm" keyword (instead 99 we can't use C++ style "//" comments nor the "asm" keyword (instead
81 use "__asm__"). */ 100 use "__asm__"). */
82 101
83 /* Derive some tags indicating what the target platform is. Note 102 /* Derive some tags indicating what the target platform is. Note
84 that in this file we're using the compiler's CPP symbols for 103 that in this file we're using the compiler's CPP symbols for
85 identifying architectures, which are different to the ones we use 104 identifying architectures, which are different to the ones we use
86 within the rest of Valgrind. Note, __powerpc__ is active for both 105 within the rest of Valgrind. Note, __powerpc__ is active for both
87 32 and 64-bit PPC, whereas __powerpc64__ is only active for the 106 32 and 64-bit PPC, whereas __powerpc64__ is only active for the
88 latter (on Linux, that is). */ 107 latter (on Linux, that is).
108
109 Misc note: how to find out what's predefined in gcc by default:
110 gcc -Wp,-dM somefile.c
111 */
112 #undef PLAT_x86_darwin
113 #undef PLAT_amd64_darwin
114 #undef PLAT_x86_win32
89 #undef PLAT_x86_linux 115 #undef PLAT_x86_linux
90 #undef PLAT_amd64_linux 116 #undef PLAT_amd64_linux
91 #undef PLAT_ppc32_linux 117 #undef PLAT_ppc32_linux
92 #undef PLAT_ppc64_linux 118 #undef PLAT_ppc64_linux
93 #undef PLAT_ppc32_aix5 119 #undef PLAT_arm_linux
94 #undef PLAT_ppc64_aix5 120 #undef PLAT_s390x_linux
95
96 #if !defined(_AIX) && defined(__i386__)
97 # define PLAT_x86_linux 1
98 #elif !defined(_AIX) && defined(__x86_64__)
99 # define PLAT_amd64_linux 1
100 #elif !defined(_AIX) && defined(__powerpc__) && !defined(__powerpc64__)
101 # define PLAT_ppc32_linux 1
102 #elif !defined(_AIX) && defined(__powerpc__) && defined(__powerpc64__)
103 # define PLAT_ppc64_linux 1
104 #elif defined(_AIX) && defined(__64BIT__)
105 # define PLAT_ppc64_aix5 1
106 #elif defined(_AIX) && !defined(__64BIT__)
107 # define PLAT_ppc32_aix5 1
108 #endif
109 121
110 122
123 #if defined(__APPLE__) && defined(__i386__)
124 # define PLAT_x86_darwin 1
125 #elif defined(__APPLE__) && defined(__x86_64__)
126 # define PLAT_amd64_darwin 1
127 #elif defined(__MINGW32__) || defined(__CYGWIN32__) \
128 || (defined(_WIN32) && defined(_M_IX86))
129 # define PLAT_x86_win32 1
130 #elif defined(__linux__) && defined(__i386__)
131 # define PLAT_x86_linux 1
132 #elif defined(__linux__) && defined(__x86_64__)
133 # define PLAT_amd64_linux 1
134 #elif defined(__linux__) && defined(__powerpc__) && !defined(__powerpc64__)
135 # define PLAT_ppc32_linux 1
136 #elif defined(__linux__) && defined(__powerpc__) && defined(__powerpc64__)
137 # define PLAT_ppc64_linux 1
138 #elif defined(__linux__) && defined(__arm__)
139 # define PLAT_arm_linux 1
140 #elif defined(__linux__) && defined(__s390__) && defined(__s390x__)
141 # define PLAT_s390x_linux 1
142 #else
111 /* If we're not compiling for our target platform, don't generate 143 /* If we're not compiling for our target platform, don't generate
112 any inline asms. */ 144 any inline asms. */
113 #if !defined(PLAT_x86_linux) && !defined(PLAT_amd64_linux) \
114 && !defined(PLAT_ppc32_linux) && !defined(PLAT_ppc64_linux) \
115 && !defined(PLAT_ppc32_aix5) && !defined(PLAT_ppc64_aix5)
116 # if !defined(NVALGRIND) 145 # if !defined(NVALGRIND)
117 # define NVALGRIND 1 146 # define NVALGRIND 1
118 # endif 147 # endif
119 #endif 148 #endif
120 149
121 150
122 /* ------------------------------------------------------------------ */ 151 /* ------------------------------------------------------------------ */
123 /* ARCHITECTURE SPECIFICS for SPECIAL INSTRUCTIONS. There is nothing */ 152 /* ARCHITECTURE SPECIFICS for SPECIAL INSTRUCTIONS. There is nothing */
124 /* in here of use to end-users -- skip to the next section. */ 153 /* in here of use to end-users -- skip to the next section. */
125 /* ------------------------------------------------------------------ */ 154 /* ------------------------------------------------------------------ */
126 155
156 /*
157 * VALGRIND_DO_CLIENT_REQUEST(): a statement that invokes a Valgrind client
158 * request. Accepts both pointers and integers as arguments.
159 *
160 * VALGRIND_DO_CLIENT_REQUEST_EXPR(): a C expression that invokes a Valgrind
161 * client request and whose value equals the client request result. Accepts
162 * both pointers and integers as arguments.
163 */
164
165 #define VALGRIND_DO_CLIENT_REQUEST(_zzq_rlval, _zzq_default, \
166 _zzq_request, _zzq_arg1, _zzq_arg2, \
167 _zzq_arg3, _zzq_arg4, _zzq_arg5) \
168 { (_zzq_rlval) = VALGRIND_DO_CLIENT_REQUEST_EXPR((_zzq_default), \
169 (_zzq_request), (_zzq_arg1), (_zzq_arg2), \
170 (_zzq_arg3), (_zzq_arg4), (_zzq_arg5)); }
171
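A minimal usage sketch of the two request forms, assuming a placeholder request code (0x1001) instead of a real VG_USERREQ__* value defined elsewhere in the Valgrind headers:

    /* Sketch only: 0x1001 is a placeholder request code, and
       query_on_valgrind is a hypothetical helper, not part of this header. */
    static unsigned query_on_valgrind(void)
    {
       unsigned st_result;
       /* Statement form: stores the result into st_result; yields the
          default (0 here) when the program is not running under Valgrind. */
       VALGRIND_DO_CLIENT_REQUEST(st_result, 0, 0x1001, 0, 0, 0, 0, 0);
       /* Expression form: the request result is the value of the expression. */
       return st_result + VALGRIND_DO_CLIENT_REQUEST_EXPR(0, 0x1001, 0, 0, 0, 0, 0);
    }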
127 #if defined(NVALGRIND) 172 #if defined(NVALGRIND)
128 173
129 /* Define NVALGRIND to completely remove the Valgrind magic sequence 174 /* Define NVALGRIND to completely remove the Valgrind magic sequence
130 from the compiled code (analogous to NDEBUG's effects on 175 from the compiled code (analogous to NDEBUG's effects on
131 assert()) */ 176 assert()) */
132 #define VALGRIND_DO_CLIENT_REQUEST( \ 177 #define VALGRIND_DO_CLIENT_REQUEST_EXPR( \
133 _zzq_rlval, _zzq_default, _zzq_request, \ 178 _zzq_default, _zzq_request, \
134 _zzq_arg1, _zzq_arg2, _zzq_arg3, _zzq_arg4, _zzq_arg5) \ 179 _zzq_arg1, _zzq_arg2, _zzq_arg3, _zzq_arg4, _zzq_arg5) \
135 { \ 180 (_zzq_default)
136 (_zzq_rlval) = (_zzq_default); \
137 }
138 181
139 #else /* ! NVALGRIND */ 182 #else /* ! NVALGRIND */
140 183
141 /* The following defines the magic code sequences which the JITter 184 /* The following defines the magic code sequences which the JITter
142 spots and handles magically. Don't look too closely at them as 185 spots and handles magically. Don't look too closely at them as
143 they will rot your brain. 186 they will rot your brain.
144 187
145 The assembly code sequences for all architectures is in this one 188 The assembly code sequences for all architectures is in this one
146 file. This is because this file must be stand-alone, and we don't 189 file. This is because this file must be stand-alone, and we don't
147 want to have multiple files. 190 want to have multiple files.
(...skipping 18 matching lines...)
166 information is abstracted into a user-visible type, OrigFn. 209 information is abstracted into a user-visible type, OrigFn.
167 210
168 VALGRIND_CALL_NOREDIR_* behaves the same as the following on the 211 VALGRIND_CALL_NOREDIR_* behaves the same as the following on the
169 guest, but guarantees that the branch instruction will not be 212 guest, but guarantees that the branch instruction will not be
170 redirected: x86: call *%eax, amd64: call *%rax, ppc32/ppc64: 213 redirected: x86: call *%eax, amd64: call *%rax, ppc32/ppc64:
171 branch-and-link-to-r11. VALGRIND_CALL_NOREDIR is just text, not a 214 branch-and-link-to-r11. VALGRIND_CALL_NOREDIR is just text, not a
172 complete inline asm, since it needs to be combined with more magic 215 complete inline asm, since it needs to be combined with more magic
173 inline asm stuff to be useful. 216 inline asm stuff to be useful.
174 */ 217 */
175 218
176 /* ------------------------- x86-linux ------------------------- */ 219 /* ------------------------- x86-{linux,darwin} ---------------- */
177 220
178 #if defined(PLAT_x86_linux) 221 #if defined(PLAT_x86_linux) || defined(PLAT_x86_darwin) \
222 || (defined(PLAT_x86_win32) && defined(__GNUC__))
179 223
180 typedef 224 typedef
181 struct { 225 struct {
182 unsigned int nraddr; /* where's the code? */ 226 unsigned int nraddr; /* where's the code? */
183 } 227 }
184 OrigFn; 228 OrigFn;
185 229
186 #define __SPECIAL_INSTRUCTION_PREAMBLE \ 230 #define __SPECIAL_INSTRUCTION_PREAMBLE \
187 "roll $3, %%edi ; roll $13, %%edi\n\t" \ 231 "roll $3, %%edi ; roll $13, %%edi\n\t" \
188 "roll $29, %%edi ; roll $19, %%edi\n\t" 232 "roll $29, %%edi ; roll $19, %%edi\n\t"
189 233
190 #define VALGRIND_DO_CLIENT_REQUEST( \ 234 #define VALGRIND_DO_CLIENT_REQUEST_EXPR( \
191 _zzq_rlval, _zzq_default, _zzq_request, \ 235 _zzq_default, _zzq_request, \
192 _zzq_arg1, _zzq_arg2, _zzq_arg3, _zzq_arg4, _zzq_arg5) \ 236 _zzq_arg1, _zzq_arg2, _zzq_arg3, _zzq_arg4, _zzq_arg5) \
193 { volatile unsigned int _zzq_args[6]; \ 237 __extension__ \
238 ({volatile unsigned int _zzq_args[6]; \
194 volatile unsigned int _zzq_result; \ 239 volatile unsigned int _zzq_result; \
195 _zzq_args[0] = (unsigned int)(_zzq_request); \ 240 _zzq_args[0] = (unsigned int)(_zzq_request); \
196 _zzq_args[1] = (unsigned int)(_zzq_arg1); \ 241 _zzq_args[1] = (unsigned int)(_zzq_arg1); \
197 _zzq_args[2] = (unsigned int)(_zzq_arg2); \ 242 _zzq_args[2] = (unsigned int)(_zzq_arg2); \
198 _zzq_args[3] = (unsigned int)(_zzq_arg3); \ 243 _zzq_args[3] = (unsigned int)(_zzq_arg3); \
199 _zzq_args[4] = (unsigned int)(_zzq_arg4); \ 244 _zzq_args[4] = (unsigned int)(_zzq_arg4); \
200 _zzq_args[5] = (unsigned int)(_zzq_arg5); \ 245 _zzq_args[5] = (unsigned int)(_zzq_arg5); \
201 __asm__ volatile(__SPECIAL_INSTRUCTION_PREAMBLE \ 246 __asm__ volatile(__SPECIAL_INSTRUCTION_PREAMBLE \
202 /* %EDX = client_request ( %EAX ) */ \ 247 /* %EDX = client_request ( %EAX ) */ \
203 "xchgl %%ebx,%%ebx" \ 248 "xchgl %%ebx,%%ebx" \
204 : "=d" (_zzq_result) \ 249 : "=d" (_zzq_result) \
205 : "a" (&_zzq_args[0]), "0" (_zzq_default) \ 250 : "a" (&_zzq_args[0]), "0" (_zzq_default) \
206 : "cc", "memory" \ 251 : "cc", "memory" \
207 ); \ 252 ); \
208 _zzq_rlval = _zzq_result; \ 253 _zzq_result; \
209 } 254 })
210 255
211 #define VALGRIND_GET_NR_CONTEXT(_zzq_rlval) \ 256 #define VALGRIND_GET_NR_CONTEXT(_zzq_rlval) \
212 { volatile OrigFn* _zzq_orig = &(_zzq_rlval); \ 257 { volatile OrigFn* _zzq_orig = &(_zzq_rlval); \
213 volatile unsigned int __addr; \ 258 volatile unsigned int __addr; \
214 __asm__ volatile(__SPECIAL_INSTRUCTION_PREAMBLE \ 259 __asm__ volatile(__SPECIAL_INSTRUCTION_PREAMBLE \
215 /* %EAX = guest_NRADDR */ \ 260 /* %EAX = guest_NRADDR */ \
216 "xchgl %%ecx,%%ecx" \ 261 "xchgl %%ecx,%%ecx" \
217 : "=a" (__addr) \ 262 : "=a" (__addr) \
218 : \ 263 : \
219 : "cc", "memory" \ 264 : "cc", "memory" \
220 ); \ 265 ); \
221 _zzq_orig->nraddr = __addr; \ 266 _zzq_orig->nraddr = __addr; \
222 } 267 }
223 268
224 #define VALGRIND_CALL_NOREDIR_EAX \ 269 #define VALGRIND_CALL_NOREDIR_EAX \
225 __SPECIAL_INSTRUCTION_PREAMBLE \ 270 __SPECIAL_INSTRUCTION_PREAMBLE \
226 /* call-noredir *%EAX */ \ 271 /* call-noredir *%EAX */ \
227 "xchgl %%edx,%%edx\n\t" 272 "xchgl %%edx,%%edx\n\t"
228 #endif /* PLAT_x86_linux */ 273 #endif /* PLAT_x86_linux || PLAT_x86_darwin || (PLAT_x86_win32 && __GNUC__) */
229 274
230 /* ------------------------ amd64-linux ------------------------ */ 275 /* ------------------------- x86-Win32 ------------------------- */
231 276
232 #if defined(PLAT_amd64_linux) 277 #if defined(PLAT_x86_win32) && !defined(__GNUC__)
278
279 typedef
280 struct {
281 unsigned int nraddr; /* where's the code? */
282 }
283 OrigFn;
284
285 #if defined(_MSC_VER)
286
287 #define __SPECIAL_INSTRUCTION_PREAMBLE \
288 __asm rol edi, 3 __asm rol edi, 13 \
289 __asm rol edi, 29 __asm rol edi, 19
290
291 #define VALGRIND_DO_CLIENT_REQUEST_EXPR( \
292 _zzq_default, _zzq_request, \
293 _zzq_arg1, _zzq_arg2, _zzq_arg3, _zzq_arg4, _zzq_arg5) \
294 valgrind_do_client_request_expr((uintptr_t)(_zzq_default), \
295 (uintptr_t)(_zzq_request), (uintptr_t)(_zzq_arg1), \
296 (uintptr_t)(_zzq_arg2), (uintptr_t)(_zzq_arg3), \
297 (uintptr_t)(_zzq_arg4), (uintptr_t)(_zzq_arg5))
298
299 static __inline uintptr_t
300 valgrind_do_client_request_expr(uintptr_t _zzq_default, uintptr_t _zzq_request,
301 uintptr_t _zzq_arg1, uintptr_t _zzq_arg2,
302 uintptr_t _zzq_arg3, uintptr_t _zzq_arg4,
303 uintptr_t _zzq_arg5)
304 {
305 volatile uintptr_t _zzq_args[6];
306 volatile unsigned int _zzq_result;
307 _zzq_args[0] = (uintptr_t)(_zzq_request);
308 _zzq_args[1] = (uintptr_t)(_zzq_arg1);
309 _zzq_args[2] = (uintptr_t)(_zzq_arg2);
310 _zzq_args[3] = (uintptr_t)(_zzq_arg3);
311 _zzq_args[4] = (uintptr_t)(_zzq_arg4);
312 _zzq_args[5] = (uintptr_t)(_zzq_arg5);
313 __asm { __asm lea eax, _zzq_args __asm mov edx, _zzq_default
314 __SPECIAL_INSTRUCTION_PREAMBLE
315 /* %EDX = client_request ( %EAX ) */
316 __asm xchg ebx,ebx
317 __asm mov _zzq_result, edx
318 }
319 return _zzq_result;
320 }
321
322 #define VALGRIND_GET_NR_CONTEXT(_zzq_rlval) \
323 { volatile OrigFn* _zzq_orig = &(_zzq_rlval); \
324 volatile unsigned int __addr; \
325 __asm { __SPECIAL_INSTRUCTION_PREAMBLE \
326 /* %EAX = guest_NRADDR */ \
327 __asm xchg ecx,ecx \
328 __asm mov __addr, eax \
329 } \
330 _zzq_orig->nraddr = __addr; \
331 }
332
333 #define VALGRIND_CALL_NOREDIR_EAX ERROR
334
335 #else
336 #error Unsupported compiler.
337 #endif
338
339 #endif /* PLAT_x86_win32 */
340
341 /* ------------------------ amd64-{linux,darwin} --------------- */
342
343 #if defined(PLAT_amd64_linux) || defined(PLAT_amd64_darwin)
233 344
234 typedef 345 typedef
235 struct { 346 struct {
236 uint64_t nraddr; /* where's the code? */ 347 uint64_t nraddr; /* where's the code? */
237 } 348 }
238 OrigFn; 349 OrigFn;
239 350
240 #define __SPECIAL_INSTRUCTION_PREAMBLE \ 351 #define __SPECIAL_INSTRUCTION_PREAMBLE \
241 "rolq $3, %%rdi ; rolq $13, %%rdi\n\t" \ 352 "rolq $3, %%rdi ; rolq $13, %%rdi\n\t" \
242 "rolq $61, %%rdi ; rolq $51, %%rdi\n\t" 353 "rolq $61, %%rdi ; rolq $51, %%rdi\n\t"
243 354
244 #define VALGRIND_DO_CLIENT_REQUEST( \ 355 #define VALGRIND_DO_CLIENT_REQUEST_EXPR( \
245 _zzq_rlval, _zzq_default, _zzq_request, \ 356 _zzq_default, _zzq_request, \
246 _zzq_arg1, _zzq_arg2, _zzq_arg3, _zzq_arg4, _zzq_arg5) \ 357 _zzq_arg1, _zzq_arg2, _zzq_arg3, _zzq_arg4, _zzq_arg5) \
247 { volatile uint64_t _zzq_args[6]; \ 358 __extension__ \
359 ({ volatile uint64_t _zzq_args[6]; \
248 volatile uint64_t _zzq_result; \ 360 volatile uint64_t _zzq_result; \
249 _zzq_args[0] = (uint64_t)(_zzq_request); \ 361 _zzq_args[0] = (uint64_t)(_zzq_request); \
250 _zzq_args[1] = (uint64_t)(_zzq_arg1); \ 362 _zzq_args[1] = (uint64_t)(_zzq_arg1); \
251 _zzq_args[2] = (uint64_t)(_zzq_arg2); \ 363 _zzq_args[2] = (uint64_t)(_zzq_arg2); \
252 _zzq_args[3] = (uint64_t)(_zzq_arg3); \ 364 _zzq_args[3] = (uint64_t)(_zzq_arg3); \
253 _zzq_args[4] = (uint64_t)(_zzq_arg4); \ 365 _zzq_args[4] = (uint64_t)(_zzq_arg4); \
254 _zzq_args[5] = (uint64_t)(_zzq_arg5); \ 366 _zzq_args[5] = (uint64_t)(_zzq_arg5); \
255 __asm__ volatile(__SPECIAL_INSTRUCTION_PREAMBLE \ 367 __asm__ volatile(__SPECIAL_INSTRUCTION_PREAMBLE \
256 /* %RDX = client_request ( %RAX ) */ \ 368 /* %RDX = client_request ( %RAX ) */ \
257 "xchgq %%rbx,%%rbx" \ 369 "xchgq %%rbx,%%rbx" \
258 : "=d" (_zzq_result) \ 370 : "=d" (_zzq_result) \
259 : "a" (&_zzq_args[0]), "0" (_zzq_default) \ 371 : "a" (&_zzq_args[0]), "0" (_zzq_default) \
260 : "cc", "memory" \ 372 : "cc", "memory" \
261 ); \ 373 ); \
262 _zzq_rlval = _zzq_result; \ 374 _zzq_result; \
263 } 375 })
264 376
265 #define VALGRIND_GET_NR_CONTEXT(_zzq_rlval) \ 377 #define VALGRIND_GET_NR_CONTEXT(_zzq_rlval) \
266 { volatile OrigFn* _zzq_orig = &(_zzq_rlval); \ 378 { volatile OrigFn* _zzq_orig = &(_zzq_rlval); \
267 volatile uint64_t __addr; \ 379 volatile uint64_t __addr; \
268 __asm__ volatile(__SPECIAL_INSTRUCTION_PREAMBLE \ 380 __asm__ volatile(__SPECIAL_INSTRUCTION_PREAMBLE \
269 /* %RAX = guest_NRADDR */ \ 381 /* %RAX = guest_NRADDR */ \
270 "xchgq %%rcx,%%rcx" \ 382 "xchgq %%rcx,%%rcx" \
271 : "=a" (__addr) \ 383 : "=a" (__addr) \
272 : \ 384 : \
273 : "cc", "memory" \ 385 : "cc", "memory" \
274 ); \ 386 ); \
275 _zzq_orig->nraddr = __addr; \ 387 _zzq_orig->nraddr = __addr; \
276 } 388 }
277 389
278 #define VALGRIND_CALL_NOREDIR_RAX \ 390 #define VALGRIND_CALL_NOREDIR_RAX \
279 __SPECIAL_INSTRUCTION_PREAMBLE \ 391 __SPECIAL_INSTRUCTION_PREAMBLE \
280 /* call-noredir *%RAX */ \ 392 /* call-noredir *%RAX */ \
281 "xchgq %%rdx,%%rdx\n\t" 393 "xchgq %%rdx,%%rdx\n\t"
282 #endif /* PLAT_amd64_linux */ 394 #endif /* PLAT_amd64_linux || PLAT_amd64_darwin */
283 395
284 /* ------------------------ ppc32-linux ------------------------ */ 396 /* ------------------------ ppc32-linux ------------------------ */
285 397
286 #if defined(PLAT_ppc32_linux) 398 #if defined(PLAT_ppc32_linux)
287 399
288 typedef 400 typedef
289 struct { 401 struct {
290 unsigned int nraddr; /* where's the code? */ 402 unsigned int nraddr; /* where's the code? */
291 } 403 }
292 OrigFn; 404 OrigFn;
293 405
294 #define __SPECIAL_INSTRUCTION_PREAMBLE \ 406 #define __SPECIAL_INSTRUCTION_PREAMBLE \
295 "rlwinm 0,0,3,0,0 ; rlwinm 0,0,13,0,0\n\t" \ 407 "rlwinm 0,0,3,0,0 ; rlwinm 0,0,13,0,0\n\t" \
296 "rlwinm 0,0,29,0,0 ; rlwinm 0,0,19,0,0\n\t" 408 "rlwinm 0,0,29,0,0 ; rlwinm 0,0,19,0,0\n\t"
297 409
298 #define VALGRIND_DO_CLIENT_REQUEST( \ 410 #define VALGRIND_DO_CLIENT_REQUEST_EXPR( \
299 _zzq_rlval, _zzq_default, _zzq_request, \ 411 _zzq_default, _zzq_request, \
300 _zzq_arg1, _zzq_arg2, _zzq_arg3, _zzq_arg4, _zzq_arg5) \ 412 _zzq_arg1, _zzq_arg2, _zzq_arg3, _zzq_arg4, _zzq_arg5) \
301 \ 413 \
302 { unsigned int _zzq_args[6]; \ 414 __extension__ \
415 ({ unsigned int _zzq_args[6]; \
303 unsigned int _zzq_result; \ 416 unsigned int _zzq_result; \
304 unsigned int* _zzq_ptr; \ 417 unsigned int* _zzq_ptr; \
305 _zzq_args[0] = (unsigned int)(_zzq_request); \ 418 _zzq_args[0] = (unsigned int)(_zzq_request); \
306 _zzq_args[1] = (unsigned int)(_zzq_arg1); \ 419 _zzq_args[1] = (unsigned int)(_zzq_arg1); \
307 _zzq_args[2] = (unsigned int)(_zzq_arg2); \ 420 _zzq_args[2] = (unsigned int)(_zzq_arg2); \
308 _zzq_args[3] = (unsigned int)(_zzq_arg3); \ 421 _zzq_args[3] = (unsigned int)(_zzq_arg3); \
309 _zzq_args[4] = (unsigned int)(_zzq_arg4); \ 422 _zzq_args[4] = (unsigned int)(_zzq_arg4); \
310 _zzq_args[5] = (unsigned int)(_zzq_arg5); \ 423 _zzq_args[5] = (unsigned int)(_zzq_arg5); \
311 _zzq_ptr = _zzq_args; \ 424 _zzq_ptr = _zzq_args; \
312 __asm__ volatile("mr 3,%1\n\t" /*default*/ \ 425 __asm__ volatile("mr 3,%1\n\t" /*default*/ \
313 "mr 4,%2\n\t" /*ptr*/ \ 426 "mr 4,%2\n\t" /*ptr*/ \
314 __SPECIAL_INSTRUCTION_PREAMBLE \ 427 __SPECIAL_INSTRUCTION_PREAMBLE \
315 /* %R3 = client_request ( %R4 ) */ \ 428 /* %R3 = client_request ( %R4 ) */ \
316 "or 1,1,1\n\t" \ 429 "or 1,1,1\n\t" \
317 "mr %0,3" /*result*/ \ 430 "mr %0,3" /*result*/ \
318 : "=b" (_zzq_result) \ 431 : "=b" (_zzq_result) \
319 : "b" (_zzq_default), "b" (_zzq_ptr) \ 432 : "b" (_zzq_default), "b" (_zzq_ptr) \
320 : "cc", "memory", "r3", "r4"); \ 433 : "cc", "memory", "r3", "r4"); \
321 _zzq_rlval = _zzq_result; \ 434 _zzq_result; \
322 } 435 })
323 436
324 #define VALGRIND_GET_NR_CONTEXT(_zzq_rlval) \ 437 #define VALGRIND_GET_NR_CONTEXT(_zzq_rlval) \
325 { volatile OrigFn* _zzq_orig = &(_zzq_rlval); \ 438 { volatile OrigFn* _zzq_orig = &(_zzq_rlval); \
326 unsigned int __addr; \ 439 unsigned int __addr; \
327 __asm__ volatile(__SPECIAL_INSTRUCTION_PREAMBLE \ 440 __asm__ volatile(__SPECIAL_INSTRUCTION_PREAMBLE \
328 /* %R3 = guest_NRADDR */ \ 441 /* %R3 = guest_NRADDR */ \
329 "or 2,2,2\n\t" \ 442 "or 2,2,2\n\t" \
330 "mr %0,3" \ 443 "mr %0,3" \
331 : "=b" (__addr) \ 444 : "=b" (__addr) \
332 : \ 445 : \
(...skipping 16 matching lines...)
349 struct { 462 struct {
350 uint64_t nraddr; /* where's the code? */ 463 uint64_t nraddr; /* where's the code? */
351 uint64_t r2; /* what tocptr do we need? */ 464 uint64_t r2; /* what tocptr do we need? */
352 } 465 }
353 OrigFn; 466 OrigFn;
354 467
355 #define __SPECIAL_INSTRUCTION_PREAMBLE \ 468 #define __SPECIAL_INSTRUCTION_PREAMBLE \
356 "rotldi 0,0,3 ; rotldi 0,0,13\n\t" \ 469 "rotldi 0,0,3 ; rotldi 0,0,13\n\t" \
357 "rotldi 0,0,61 ; rotldi 0,0,51\n\t" 470 "rotldi 0,0,61 ; rotldi 0,0,51\n\t"
358 471
359 #define VALGRIND_DO_CLIENT_REQUEST( \ 472 #define VALGRIND_DO_CLIENT_REQUEST_EXPR( \
360 _zzq_rlval, _zzq_default, _zzq_request, \ 473 _zzq_default, _zzq_request, \
361 _zzq_arg1, _zzq_arg2, _zzq_arg3, _zzq_arg4, _zzq_arg5) \ 474 _zzq_arg1, _zzq_arg2, _zzq_arg3, _zzq_arg4, _zzq_arg5) \
362 \ 475 \
363 { uint64_t _zzq_args[6]; \ 476 __extension__ \
477 ({ uint64_t _zzq_args[6]; \
364 register uint64_t _zzq_result __asm__("r3"); \ 478 register uint64_t _zzq_result __asm__("r3"); \
365 register uint64_t* _zzq_ptr __asm__("r4"); \ 479 register uint64_t* _zzq_ptr __asm__("r4"); \
366 _zzq_args[0] = (uint64_t)(_zzq_request); \ 480 _zzq_args[0] = (uint64_t)(_zzq_request); \
367 _zzq_args[1] = (uint64_t)(_zzq_arg1); \ 481 _zzq_args[1] = (uint64_t)(_zzq_arg1); \
368 _zzq_args[2] = (uint64_t)(_zzq_arg2); \ 482 _zzq_args[2] = (uint64_t)(_zzq_arg2); \
369 _zzq_args[3] = (uint64_t)(_zzq_arg3); \ 483 _zzq_args[3] = (uint64_t)(_zzq_arg3); \
370 _zzq_args[4] = (uint64_t)(_zzq_arg4); \ 484 _zzq_args[4] = (uint64_t)(_zzq_arg4); \
371 _zzq_args[5] = (uint64_t)(_zzq_arg5); \ 485 _zzq_args[5] = (uint64_t)(_zzq_arg5); \
372 _zzq_ptr = _zzq_args; \ 486 _zzq_ptr = _zzq_args; \
373 __asm__ volatile(__SPECIAL_INSTRUCTION_PREAMBLE \ 487 __asm__ volatile(__SPECIAL_INSTRUCTION_PREAMBLE \
374 /* %R3 = client_request ( %R4 ) */ \ 488 /* %R3 = client_request ( %R4 ) */ \
375 "or 1,1,1" \ 489 "or 1,1,1" \
376 : "=r" (_zzq_result) \ 490 : "=r" (_zzq_result) \
377 : "0" (_zzq_default), "r" (_zzq_ptr) \ 491 : "0" (_zzq_default), "r" (_zzq_ptr) \
378 : "cc", "memory"); \ 492 : "cc", "memory"); \
379 _zzq_rlval = _zzq_result; \ 493 _zzq_result; \
380 } 494 })
381 495
382 #define VALGRIND_GET_NR_CONTEXT(_zzq_rlval) \ 496 #define VALGRIND_GET_NR_CONTEXT(_zzq_rlval) \
383 { volatile OrigFn* _zzq_orig = &(_zzq_rlval); \ 497 { volatile OrigFn* _zzq_orig = &(_zzq_rlval); \
384 register uint64_t __addr __asm__("r3"); \ 498 register uint64_t __addr __asm__("r3"); \
385 __asm__ volatile(__SPECIAL_INSTRUCTION_PREAMBLE \ 499 __asm__ volatile(__SPECIAL_INSTRUCTION_PREAMBLE \
386 /* %R3 = guest_NRADDR */ \ 500 /* %R3 = guest_NRADDR */ \
387 "or 2,2,2" \ 501 "or 2,2,2" \
388 : "=r" (__addr) \ 502 : "=r" (__addr) \
389 : \ 503 : \
390 : "cc", "memory" \ 504 : "cc", "memory" \
391 ); \ 505 ); \
392 _zzq_orig->nraddr = __addr; \ 506 _zzq_orig->nraddr = __addr; \
393 __asm__ volatile(__SPECIAL_INSTRUCTION_PREAMBLE \ 507 __asm__ volatile(__SPECIAL_INSTRUCTION_PREAMBLE \
394 /* %R3 = guest_NRADDR_GPR2 */ \ 508 /* %R3 = guest_NRADDR_GPR2 */ \
395 "or 4,4,4" \ 509 "or 4,4,4" \
396 : "=r" (__addr) \ 510 : "=r" (__addr) \
397 : \ 511 : \
398 : "cc", "memory" \ 512 : "cc", "memory" \
399 ); \ 513 ); \
400 _zzq_orig->r2 = __addr; \ 514 _zzq_orig->r2 = __addr; \
401 } 515 }
402 516
403 #define VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R11 \ 517 #define VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R11 \
404 __SPECIAL_INSTRUCTION_PREAMBLE \ 518 __SPECIAL_INSTRUCTION_PREAMBLE \
405 /* branch-and-link-to-noredir *%R11 */ \ 519 /* branch-and-link-to-noredir *%R11 */ \
406 "or 3,3,3\n\t" 520 "or 3,3,3\n\t"
407 521
408 #endif /* PLAT_ppc64_linux */ 522 #endif /* PLAT_ppc64_linux */
409 523
410 /* ------------------------ ppc32-aix5 ------------------------- */ 524 /* ------------------------- arm-linux ------------------------- */
411 525
412 #if defined(PLAT_ppc32_aix5) 526 #if defined(PLAT_arm_linux)
413 527
414 typedef 528 typedef
415 struct { 529 struct {
416 unsigned int nraddr; /* where's the code? */ 530 unsigned int nraddr; /* where's the code? */
417 unsigned int r2; /* what tocptr do we need? */
418 } 531 }
419 OrigFn; 532 OrigFn;
420 533
421 #define __SPECIAL_INSTRUCTION_PREAMBLE \ 534 #define __SPECIAL_INSTRUCTION_PREAMBLE \
422 "rlwinm 0,0,3,0,0 ; rlwinm 0,0,13,0,0\n\t" \ 535 "mov r12, r12, ror #3 ; mov r12, r12, ror #13 \n\t" \
423 "rlwinm 0,0,29,0,0 ; rlwinm 0,0,19,0,0\n\t" 536 "mov r12, r12, ror #29 ; mov r12, r12, ror #19 \n\t"
424 537
425 #define VALGRIND_DO_CLIENT_REQUEST( \ 538 #define VALGRIND_DO_CLIENT_REQUEST_EXPR( \
426 _zzq_rlval, _zzq_default, _zzq_request, \ 539 _zzq_default, _zzq_request, \
427 _zzq_arg1, _zzq_arg2, _zzq_arg3, _zzq_arg4, _zzq_arg5) \ 540 _zzq_arg1, _zzq_arg2, _zzq_arg3, _zzq_arg4, _zzq_arg5) \
428 \ 541 \
429 { unsigned int _zzq_args[7]; \ 542 __extension__ \
430 register unsigned int _zzq_result; \ 543 ({volatile unsigned int _zzq_args[6]; \
431 register unsigned int* _zzq_ptr; \ 544 volatile unsigned int _zzq_result; \
432 _zzq_args[0] = (unsigned int)(_zzq_request); \ 545 _zzq_args[0] = (unsigned int)(_zzq_request); \
433 _zzq_args[1] = (unsigned int)(_zzq_arg1); \ 546 _zzq_args[1] = (unsigned int)(_zzq_arg1); \
434 _zzq_args[2] = (unsigned int)(_zzq_arg2); \ 547 _zzq_args[2] = (unsigned int)(_zzq_arg2); \
435 _zzq_args[3] = (unsigned int)(_zzq_arg3); \ 548 _zzq_args[3] = (unsigned int)(_zzq_arg3); \
436 _zzq_args[4] = (unsigned int)(_zzq_arg4); \ 549 _zzq_args[4] = (unsigned int)(_zzq_arg4); \
437 _zzq_args[5] = (unsigned int)(_zzq_arg5); \ 550 _zzq_args[5] = (unsigned int)(_zzq_arg5); \
438 _zzq_args[6] = (unsigned int)(_zzq_default); \ 551 __asm__ volatile("mov r3, %1\n\t" /*default*/ \
439 _zzq_ptr = _zzq_args; \ 552 "mov r4, %2\n\t" /*ptr*/ \
440 __asm__ volatile("mr 4,%1\n\t" \
441 "lwz 3, 24(4)\n\t" \
442 __SPECIAL_INSTRUCTION_PREAMBLE \ 553 __SPECIAL_INSTRUCTION_PREAMBLE \
443 /* %R3 = client_request ( %R4 ) */ \ 554 /* R3 = client_request ( R4 ) */ \
444 "or 1,1,1\n\t" \ 555 "orr r10, r10, r10\n\t" \
445 "mr %0,3" \ 556 "mov %0, r3" /*result*/ \
446 : "=b" (_zzq_result) \ 557 : "=r" (_zzq_result) \
447 : "b" (_zzq_ptr) \ 558 : "r" (_zzq_default), "r" (&_zzq_args[0]) \
448 : "r3", "r4", "cc", "memory"); \ 559 : "cc","memory", "r3", "r4"); \
449 _zzq_rlval = _zzq_result; \ 560 _zzq_result; \
450 } 561 })
451 562
452 #define VALGRIND_GET_NR_CONTEXT(_zzq_rlval) \ 563 #define VALGRIND_GET_NR_CONTEXT(_zzq_rlval) \
453 { volatile OrigFn* _zzq_orig = &(_zzq_rlval); \ 564 { volatile OrigFn* _zzq_orig = &(_zzq_rlval); \
454 register unsigned int __addr; \ 565 unsigned int __addr; \
455 __asm__ volatile(__SPECIAL_INSTRUCTION_PREAMBLE \ 566 __asm__ volatile(__SPECIAL_INSTRUCTION_PREAMBLE \
456 /* %R3 = guest_NRADDR */ \ 567 /* R3 = guest_NRADDR */ \
457 "or 2,2,2\n\t" \ 568 "orr r11, r11, r11\n\t" \
458 "mr %0,3" \ 569 "mov %0, r3" \
459 : "=b" (__addr) \ 570 : "=r" (__addr) \
460 : \ 571 : \
461 : "r3", "cc", "memory" \ 572 : "cc", "memory", "r3" \
462 ); \ 573 ); \
463 _zzq_orig->nraddr = __addr; \ 574 _zzq_orig->nraddr = __addr; \
464 __asm__ volatile(__SPECIAL_INSTRUCTION_PREAMBLE \
465 /* %R3 = guest_NRADDR_GPR2 */ \
466 "or 4,4,4\n\t" \
467 "mr %0,3" \
468 : "=b" (__addr) \
469 : \
470 : "r3", "cc", "memory" \
471 ); \
472 _zzq_orig->r2 = __addr; \
473 } 575 }
474 576
475 #define VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R11 \ 577 #define VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R4 \
476 __SPECIAL_INSTRUCTION_PREAMBLE \ 578 __SPECIAL_INSTRUCTION_PREAMBLE \
477 /* branch-and-link-to-noredir *%R11 */ \ 579 /* branch-and-link-to-noredir *%R4 */ \
478 "or 3,3,3\n\t" 580 "orr r12, r12, r12\n\t"
479 581
480 #endif /* PLAT_ppc32_aix5 */ 582 #endif /* PLAT_arm_linux */
481 583
482 /* ------------------------ ppc64-aix5 ------------------------- */ 584 /* ------------------------ s390x-linux ------------------------ */
483 585
484 #if defined(PLAT_ppc64_aix5) 586 #if defined(PLAT_s390x_linux)
485 587
486 typedef 588 typedef
487 struct { 589 struct {
488 uint64_t nraddr; /* where's the code? */ 590 uint64_t nraddr; /* where's the code? */
489 uint64_t r2; /* what tocptr do we need? */ 591 }
490 } 592 OrigFn;
491 OrigFn;
492 593
493 #define __SPECIAL_INSTRUCTION_PREAMBLE \ 594 /* __SPECIAL_INSTRUCTION_PREAMBLE will be used to identify Valgrind specific
494 "rotldi 0,0,3 ; rotldi 0,0,13\n\t" \ 595 * code. This detection is implemented in platform specific toIR.c
495 "rotldi 0,0,61 ; rotldi 0,0,51\n\t" 596 * (e.g. VEX/priv/guest_s390_decoder.c).
597 */
598 #define __SPECIAL_INSTRUCTION_PREAMBLE \
599 "lr 15,15\n\t" \
600 "lr 1,1\n\t" \
601 "lr 2,2\n\t" \
602 "lr 3,3\n\t"
496 603
497 #define VALGRIND_DO_CLIENT_REQUEST( \ 604 #define __CLIENT_REQUEST_CODE "lr 2,2\n\t"
498 _zzq_rlval, _zzq_default, _zzq_request, \ 605 #define __GET_NR_CONTEXT_CODE "lr 3,3\n\t"
499 _zzq_arg1, _zzq_arg2, _zzq_arg3, _zzq_arg4, _zzq_arg5) \ 606 #define __CALL_NO_REDIR_CODE "lr 4,4\n\t"
500 \
501 { uint64_t _zzq_args[7]; \
502 register uint64_t _zzq_result; \
503 register uint64_t* _zzq_ptr; \
504 _zzq_args[0] = (unsigned int long long)(_zzq_request); \
505 _zzq_args[1] = (unsigned int long long)(_zzq_arg1); \
506 _zzq_args[2] = (unsigned int long long)(_zzq_arg2); \
507 _zzq_args[3] = (unsigned int long long)(_zzq_arg3); \
508 _zzq_args[4] = (unsigned int long long)(_zzq_arg4); \
509 _zzq_args[5] = (unsigned int long long)(_zzq_arg5); \
510 _zzq_args[6] = (unsigned int long long)(_zzq_default); \
511 _zzq_ptr = _zzq_args; \
512 __asm__ volatile("mr 4,%1\n\t" \
513 "ld 3, 48(4)\n\t" \
514 __SPECIAL_INSTRUCTION_PREAMBLE \
515 /* %R3 = client_request ( %R4 ) */ \
516 "or 1,1,1\n\t" \
517 "mr %0,3" \
518 : "=b" (_zzq_result) \
519 : "b" (_zzq_ptr) \
520 : "r3", "r4", "cc", "memory"); \
521 _zzq_rlval = _zzq_result; \
522 }
523 607
524 #define VALGRIND_GET_NR_CONTEXT(_zzq_rlval) \ 608 #define VALGRIND_DO_CLIENT_REQUEST_EXPR( \
525 { volatile OrigFn* _zzq_orig = &(_zzq_rlval); \ 609 _zzq_default, _zzq_request, \
526 register uint64_t __addr; \ 610 _zzq_arg1, _zzq_arg2, _zzq_arg3, _zzq_arg4, _zzq_arg5) \
527 __asm__ volatile(__SPECIAL_INSTRUCTION_PREAMBLE \ 611 __extension__ \
528 /* %R3 = guest_NRADDR */ \ 612 ({volatile uint64_t _zzq_args[6]; \
529 "or 2,2,2\n\t" \ 613 volatile uint64_t _zzq_result; \
530 "mr %0,3" \ 614 _zzq_args[0] = (uint64_t)(_zzq_request); \
531 : "=b" (__addr) \ 615 _zzq_args[1] = (uint64_t)(_zzq_arg1); \
532 : \ 616 _zzq_args[2] = (uint64_t)(_zzq_arg2); \
533 : "r3", "cc", "memory" \ 617 _zzq_args[3] = (uint64_t)(_zzq_arg3); \
534 ); \ 618 _zzq_args[4] = (uint64_t)(_zzq_arg4); \
535 _zzq_orig->nraddr = __addr; \ 619 _zzq_args[5] = (uint64_t)(_zzq_arg5); \
536 __asm__ volatile(__SPECIAL_INSTRUCTION_PREAMBLE \ 620 __asm__ volatile(/* r2 = args */ \
537 /* %R3 = guest_NRADDR_GPR2 */ \ 621 "lgr 2,%1\n\t" \
538 "or 4,4,4\n\t" \ 622 /* r3 = default */ \
539 "mr %0,3" \ 623 "lgr 3,%2\n\t" \
540 : "=b" (__addr) \ 624 __SPECIAL_INSTRUCTION_PREAMBLE \
541 : \ 625 __CLIENT_REQUEST_CODE \
542 : "r3", "cc", "memory" \ 626 /* results = r3 */ \
543 ); \ 627 "lgr %0, 3\n\t" \
544 _zzq_orig->r2 = __addr; \ 628 : "=d" (_zzq_result) \
545 } 629 : "a" (&_zzq_args[0]), "0" (_zzq_default) \
630 : "cc", "2", "3", "memory" \
631 ); \
632 _zzq_result; \
633 })
546 634
547 #define VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R11 \ 635 #define VALGRIND_GET_NR_CONTEXT(_zzq_rlval) \
548 __SPECIAL_INSTRUCTION_PREAMBLE \ 636 { volatile OrigFn* _zzq_orig = &(_zzq_rlval); \
549 /* branch-and-link-to-noredir *%R11 */ \ 637 volatile uint64_t __addr; \
550 "or 3,3,3\n\t" 638 __asm__ volatile(__SPECIAL_INSTRUCTION_PREAMBLE \
639 __GET_NR_CONTEXT_CODE \
640 "lgr %0, 3\n\t" \
641 : "=a" (__addr) \
642 : \
643 : "cc", "3", "memory" \
644 ); \
645 _zzq_orig->nraddr = __addr; \
646 }
551 647
552 #endif /* PLAT_ppc64_aix5 */ 648 #define VALGRIND_CALL_NOREDIR_R1 \
649 __SPECIAL_INSTRUCTION_PREAMBLE \
650 __CALL_NO_REDIR_CODE
651
652 #endif /* PLAT_s390x_linux */
553 653
554 /* Insert assembly code for other platforms here... */ 654 /* Insert assembly code for other platforms here... */
555 655
556 #endif /* NVALGRIND */ 656 #endif /* NVALGRIND */
557 657
558 658
559 /* ------------------------------------------------------------------ */ 659 /* ------------------------------------------------------------------ */
560 /* PLATFORM SPECIFICS for FUNCTION WRAPPING. This is all very */ 660 /* PLATFORM SPECIFICS for FUNCTION WRAPPING. This is all very */
561 /* ugly. It's the least-worst tradeoff I can think of. */ 661 /* ugly. It's the least-worst tradeoff I can think of. */
562 /* ------------------------------------------------------------------ */ 662 /* ------------------------------------------------------------------ */
(...skipping 12 matching lines...)
575 675
576 'W' stands for "word" and 'v' for "void". Hence there are 676 'W' stands for "word" and 'v' for "void". Hence there are
577 different macros for calling arity 0, 1, 2, 3, 4, etc, functions, 677 different macros for calling arity 0, 1, 2, 3, 4, etc, functions,
578 and for each, the possibility of returning a word-typed result, or 678 and for each, the possibility of returning a word-typed result, or
579 no result. 679 no result.
580 */ 680 */
581 681
582 /* Use these to write the name of your wrapper. NOTE: duplicates 682 /* Use these to write the name of your wrapper. NOTE: duplicates
583 VG_WRAP_FUNCTION_Z{U,Z} in pub_tool_redir.h. */ 683 VG_WRAP_FUNCTION_Z{U,Z} in pub_tool_redir.h. */
584 684
685 /* Use an extra level of macroisation so as to ensure the soname/fnname
686 args are fully macro-expanded before pasting them together. */
687 #define VG_CONCAT4(_aa,_bb,_cc,_dd) _aa##_bb##_cc##_dd
688
585 #define I_WRAP_SONAME_FNNAME_ZU(soname,fnname) \ 689 #define I_WRAP_SONAME_FNNAME_ZU(soname,fnname) \
586 _vgwZU_##soname##_##fnname 690 VG_CONCAT4(_vgwZU_,soname,_,fnname)
587 691
588 #define I_WRAP_SONAME_FNNAME_ZZ(soname,fnname) \ 692 #define I_WRAP_SONAME_FNNAME_ZZ(soname,fnname) \
589 _vgwZZ_##soname##_##fnname 693 VG_CONCAT4(_vgwZZ_,soname,_,fnname)
590 694
591 /* Use this macro from within a wrapper function to collect the 695 /* Use this macro from within a wrapper function to collect the
592 context (address and possibly other info) of the original function. 696 context (address and possibly other info) of the original function.
593 Once you have that you can then use it in one of the CALL_FN_ 697 Once you have that you can then use it in one of the CALL_FN_
594 macros. The type of the argument _lval is OrigFn. */ 698 macros. The type of the argument _lval is OrigFn. */
595 #define VALGRIND_GET_ORIG_FN(_lval) VALGRIND_GET_NR_CONTEXT(_lval) 699 #define VALGRIND_GET_ORIG_FN(_lval) VALGRIND_GET_NR_CONTEXT(_lval)
596 700
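A rough end-to-end sketch of how these pieces combine into a wrapper, under illustrative assumptions: foo is a hypothetical long foo(long, long) exported by some libc.so.*, whose soname is written in Z-encoded form as libcZdsoZa:

    #include <stdio.h>
    #include "valgrind.h"

    /* Wrapper for a hypothetical long foo(long, long) in libc.so.*. */
    long I_WRAP_SONAME_FNNAME_ZU(libcZdsoZa, foo)(long a1, long a2)
    {
       long   r;
       OrigFn fn;
       VALGRIND_GET_ORIG_FN(fn);     /* capture the original function's address */
       printf("foo wrapper: args %ld %ld\n", a1, a2);
       CALL_FN_W_WW(r, fn, a1, a2);  /* call the real foo without redirection */
       return r;
    }

CALL_FN_W_WW is the word-returning, two-word-argument variant from the naming scheme described above.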
597 /* Derivatives of the main macros below, for calling functions 701 /* Derivatives of the main macros below, for calling functions
598 returning void. */ 702 returning void. */
599 703
600 #define CALL_FN_v_v(fnptr) \ 704 #define CALL_FN_v_v(fnptr) \
601 do { volatile unsigned long _junk; \ 705 do { volatile unsigned long _junk; \
602 CALL_FN_W_v(_junk,fnptr); } while (0) 706 CALL_FN_W_v(_junk,fnptr); } while (0)
603 707
604 #define CALL_FN_v_W(fnptr, arg1) \ 708 #define CALL_FN_v_W(fnptr, arg1) \
605 do { volatile unsigned long _junk; \ 709 do { volatile unsigned long _junk; \
606 CALL_FN_W_W(_junk,fnptr,arg1); } while (0) 710 CALL_FN_W_W(_junk,fnptr,arg1); } while (0)
607 711
608 #define CALL_FN_v_WW(fnptr, arg1,arg2) \ 712 #define CALL_FN_v_WW(fnptr, arg1,arg2) \
609 do { volatile unsigned long _junk; \ 713 do { volatile unsigned long _junk; \
610 CALL_FN_W_WW(_junk,fnptr,arg1,arg2); } while (0) 714 CALL_FN_W_WW(_junk,fnptr,arg1,arg2); } while (0)
611 715
612 #define CALL_FN_v_WWW(fnptr, arg1,arg2,arg3) \ 716 #define CALL_FN_v_WWW(fnptr, arg1,arg2,arg3) \
613 do { volatile unsigned long _junk; \ 717 do { volatile unsigned long _junk; \
614 CALL_FN_W_WWW(_junk,fnptr,arg1,arg2,arg3); } while (0) 718 CALL_FN_W_WWW(_junk,fnptr,arg1,arg2,arg3); } while (0)
615 719
616 /* ------------------------- x86-linux ------------------------- */ 720 #define CALL_FN_v_WWWW(fnptr, arg1,arg2,arg3,arg4) \
721 do { volatile unsigned long _junk; \
722 CALL_FN_W_WWWW(_junk,fnptr,arg1,arg2,arg3,arg4); } while (0)
617 723
618 #if defined(PLAT_x86_linux) 724 #define CALL_FN_v_5W(fnptr, arg1,arg2,arg3,arg4,arg5) \
725 do { volatile unsigned long _junk; \
726 CALL_FN_W_5W(_junk,fnptr,arg1,arg2,arg3,arg4,arg5); } while (0)
727
728 #define CALL_FN_v_6W(fnptr, arg1,arg2,arg3,arg4,arg5,arg6) \
729 do { volatile unsigned long _junk; \
730 CALL_FN_W_6W(_junk,fnptr,arg1,arg2,arg3,arg4,arg5,arg6); } while (0)
731
732 #define CALL_FN_v_7W(fnptr, arg1,arg2,arg3,arg4,arg5,arg6,arg7) \
733 do { volatile unsigned long _junk; \
734 CALL_FN_W_7W(_junk,fnptr,arg1,arg2,arg3,arg4,arg5,arg6,arg7); } while (0)
735
736 /* ------------------------- x86-{linux,darwin} ---------------- */
737
738 #if defined(PLAT_x86_linux) || defined(PLAT_x86_darwin)
619 739
620 /* These regs are trashed by the hidden call. No need to mention eax 740 /* These regs are trashed by the hidden call. No need to mention eax
621 as gcc can already see that, plus causes gcc to bomb. */ 741 as gcc can already see that, plus causes gcc to bomb. */
622 #define __CALLER_SAVED_REGS /*"eax"*/ "ecx", "edx" 742 #define __CALLER_SAVED_REGS /*"eax"*/ "ecx", "edx"
623 743
624 /* These CALL_FN_ macros assume that on x86-linux, sizeof(unsigned 744 /* These CALL_FN_ macros assume that on x86-linux, sizeof(unsigned
625 long) == 4. */ 745 long) == 4. */
626 746
627 #define CALL_FN_W_v(lval, orig) \ 747 #define CALL_FN_W_v(lval, orig) \
628 do { \ 748 do { \
(...skipping 12 matching lines...)
641 } while (0) 761 } while (0)
642 762
643 #define CALL_FN_W_W(lval, orig, arg1) \ 763 #define CALL_FN_W_W(lval, orig, arg1) \
644 do { \ 764 do { \
645 volatile OrigFn _orig = (orig); \ 765 volatile OrigFn _orig = (orig); \
646 volatile unsigned long _argvec[2]; \ 766 volatile unsigned long _argvec[2]; \
647 volatile unsigned long _res; \ 767 volatile unsigned long _res; \
648 _argvec[0] = (unsigned long)_orig.nraddr; \ 768 _argvec[0] = (unsigned long)_orig.nraddr; \
649 _argvec[1] = (unsigned long)(arg1); \ 769 _argvec[1] = (unsigned long)(arg1); \
650 __asm__ volatile( \ 770 __asm__ volatile( \
771 "subl $12, %%esp\n\t" \
651 "pushl 4(%%eax)\n\t" \ 772 "pushl 4(%%eax)\n\t" \
652 "movl (%%eax), %%eax\n\t" /* target->%eax */ \ 773 "movl (%%eax), %%eax\n\t" /* target->%eax */ \
653 VALGRIND_CALL_NOREDIR_EAX \ 774 VALGRIND_CALL_NOREDIR_EAX \
654 "addl $4, %%esp\n" \ 775 "addl $16, %%esp\n" \
655 : /*out*/ "=a" (_res) \ 776 : /*out*/ "=a" (_res) \
656 : /*in*/ "a" (&_argvec[0]) \ 777 : /*in*/ "a" (&_argvec[0]) \
657 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \ 778 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \
658 ); \ 779 ); \
659 lval = (__typeof__(lval)) _res; \ 780 lval = (__typeof__(lval)) _res; \
660 } while (0) 781 } while (0)
661 782
662 #define CALL_FN_W_WW(lval, orig, arg1,arg2) \ 783 #define CALL_FN_W_WW(lval, orig, arg1,arg2) \
663 do { \ 784 do { \
664 volatile OrigFn _orig = (orig); \ 785 volatile OrigFn _orig = (orig); \
665 volatile unsigned long _argvec[3]; \ 786 volatile unsigned long _argvec[3]; \
666 volatile unsigned long _res; \ 787 volatile unsigned long _res; \
667 _argvec[0] = (unsigned long)_orig.nraddr; \ 788 _argvec[0] = (unsigned long)_orig.nraddr; \
668 _argvec[1] = (unsigned long)(arg1); \ 789 _argvec[1] = (unsigned long)(arg1); \
669 _argvec[2] = (unsigned long)(arg2); \ 790 _argvec[2] = (unsigned long)(arg2); \
670 __asm__ volatile( \ 791 __asm__ volatile( \
792 "subl $8, %%esp\n\t" \
671 "pushl 8(%%eax)\n\t" \ 793 "pushl 8(%%eax)\n\t" \
672 "pushl 4(%%eax)\n\t" \ 794 "pushl 4(%%eax)\n\t" \
673 "movl (%%eax), %%eax\n\t" /* target->%eax */ \ 795 "movl (%%eax), %%eax\n\t" /* target->%eax */ \
674 VALGRIND_CALL_NOREDIR_EAX \ 796 VALGRIND_CALL_NOREDIR_EAX \
675 "addl $8, %%esp\n" \ 797 "addl $16, %%esp\n" \
676 : /*out*/ "=a" (_res) \ 798 : /*out*/ "=a" (_res) \
677 : /*in*/ "a" (&_argvec[0]) \ 799 : /*in*/ "a" (&_argvec[0]) \
678 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \ 800 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \
679 ); \ 801 ); \
680 lval = (__typeof__(lval)) _res; \ 802 lval = (__typeof__(lval)) _res; \
681 } while (0) 803 } while (0)
682 804
683 #define CALL_FN_W_WWW(lval, orig, arg1,arg2,arg3) \ 805 #define CALL_FN_W_WWW(lval, orig, arg1,arg2,arg3) \
684 do { \ 806 do { \
685 volatile OrigFn _orig = (orig); \ 807 volatile OrigFn _orig = (orig); \
686 volatile unsigned long _argvec[4]; \ 808 volatile unsigned long _argvec[4]; \
687 volatile unsigned long _res; \ 809 volatile unsigned long _res; \
688 _argvec[0] = (unsigned long)_orig.nraddr; \ 810 _argvec[0] = (unsigned long)_orig.nraddr; \
689 _argvec[1] = (unsigned long)(arg1); \ 811 _argvec[1] = (unsigned long)(arg1); \
690 _argvec[2] = (unsigned long)(arg2); \ 812 _argvec[2] = (unsigned long)(arg2); \
691 _argvec[3] = (unsigned long)(arg3); \ 813 _argvec[3] = (unsigned long)(arg3); \
692 __asm__ volatile( \ 814 __asm__ volatile( \
815 "subl $4, %%esp\n\t" \
693 "pushl 12(%%eax)\n\t" \ 816 "pushl 12(%%eax)\n\t" \
694 "pushl 8(%%eax)\n\t" \ 817 "pushl 8(%%eax)\n\t" \
695 "pushl 4(%%eax)\n\t" \ 818 "pushl 4(%%eax)\n\t" \
696 "movl (%%eax), %%eax\n\t" /* target->%eax */ \ 819 "movl (%%eax), %%eax\n\t" /* target->%eax */ \
697 VALGRIND_CALL_NOREDIR_EAX \ 820 VALGRIND_CALL_NOREDIR_EAX \
698 "addl $12, %%esp\n" \ 821 "addl $16, %%esp\n" \
699 : /*out*/ "=a" (_res) \ 822 : /*out*/ "=a" (_res) \
700 : /*in*/ "a" (&_argvec[0]) \ 823 : /*in*/ "a" (&_argvec[0]) \
701 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \ 824 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \
702 ); \ 825 ); \
703 lval = (__typeof__(lval)) _res; \ 826 lval = (__typeof__(lval)) _res; \
704 } while (0) 827 } while (0)
705 828
706 #define CALL_FN_W_WWWW(lval, orig, arg1,arg2,arg3,arg4) \ 829 #define CALL_FN_W_WWWW(lval, orig, arg1,arg2,arg3,arg4) \
707 do { \ 830 do { \
708 volatile OrigFn _orig = (orig); \ 831 volatile OrigFn _orig = (orig); \
(...skipping 24 matching lines...)
733 volatile OrigFn _orig = (orig); \ 856 volatile OrigFn _orig = (orig); \
734 volatile unsigned long _argvec[6]; \ 857 volatile unsigned long _argvec[6]; \
735 volatile unsigned long _res; \ 858 volatile unsigned long _res; \
736 _argvec[0] = (unsigned long)_orig.nraddr; \ 859 _argvec[0] = (unsigned long)_orig.nraddr; \
737 _argvec[1] = (unsigned long)(arg1); \ 860 _argvec[1] = (unsigned long)(arg1); \
738 _argvec[2] = (unsigned long)(arg2); \ 861 _argvec[2] = (unsigned long)(arg2); \
739 _argvec[3] = (unsigned long)(arg3); \ 862 _argvec[3] = (unsigned long)(arg3); \
740 _argvec[4] = (unsigned long)(arg4); \ 863 _argvec[4] = (unsigned long)(arg4); \
741 _argvec[5] = (unsigned long)(arg5); \ 864 _argvec[5] = (unsigned long)(arg5); \
742 __asm__ volatile( \ 865 __asm__ volatile( \
866 "subl $12, %%esp\n\t" \
743 "pushl 20(%%eax)\n\t" \ 867 "pushl 20(%%eax)\n\t" \
744 "pushl 16(%%eax)\n\t" \ 868 "pushl 16(%%eax)\n\t" \
745 "pushl 12(%%eax)\n\t" \ 869 "pushl 12(%%eax)\n\t" \
746 "pushl 8(%%eax)\n\t" \ 870 "pushl 8(%%eax)\n\t" \
747 "pushl 4(%%eax)\n\t" \ 871 "pushl 4(%%eax)\n\t" \
748 "movl (%%eax), %%eax\n\t" /* target->%eax */ \ 872 "movl (%%eax), %%eax\n\t" /* target->%eax */ \
749 VALGRIND_CALL_NOREDIR_EAX \ 873 VALGRIND_CALL_NOREDIR_EAX \
750 "addl $20, %%esp\n" \ 874 "addl $32, %%esp\n" \
751 : /*out*/ "=a" (_res) \ 875 : /*out*/ "=a" (_res) \
752 : /*in*/ "a" (&_argvec[0]) \ 876 : /*in*/ "a" (&_argvec[0]) \
753 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \ 877 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \
754 ); \ 878 ); \
755 lval = (__typeof__(lval)) _res; \ 879 lval = (__typeof__(lval)) _res; \
756 } while (0) 880 } while (0)
757 881
758 #define CALL_FN_W_6W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6) \ 882 #define CALL_FN_W_6W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6) \
759 do { \ 883 do { \
760 volatile OrigFn _orig = (orig); \ 884 volatile OrigFn _orig = (orig); \
761 volatile unsigned long _argvec[7]; \ 885 volatile unsigned long _argvec[7]; \
762 volatile unsigned long _res; \ 886 volatile unsigned long _res; \
763 _argvec[0] = (unsigned long)_orig.nraddr; \ 887 _argvec[0] = (unsigned long)_orig.nraddr; \
764 _argvec[1] = (unsigned long)(arg1); \ 888 _argvec[1] = (unsigned long)(arg1); \
765 _argvec[2] = (unsigned long)(arg2); \ 889 _argvec[2] = (unsigned long)(arg2); \
766 _argvec[3] = (unsigned long)(arg3); \ 890 _argvec[3] = (unsigned long)(arg3); \
767 _argvec[4] = (unsigned long)(arg4); \ 891 _argvec[4] = (unsigned long)(arg4); \
768 _argvec[5] = (unsigned long)(arg5); \ 892 _argvec[5] = (unsigned long)(arg5); \
769 _argvec[6] = (unsigned long)(arg6); \ 893 _argvec[6] = (unsigned long)(arg6); \
770 __asm__ volatile( \ 894 __asm__ volatile( \
895 "subl $8, %%esp\n\t" \
771 "pushl 24(%%eax)\n\t" \ 896 "pushl 24(%%eax)\n\t" \
772 "pushl 20(%%eax)\n\t" \ 897 "pushl 20(%%eax)\n\t" \
773 "pushl 16(%%eax)\n\t" \ 898 "pushl 16(%%eax)\n\t" \
774 "pushl 12(%%eax)\n\t" \ 899 "pushl 12(%%eax)\n\t" \
775 "pushl 8(%%eax)\n\t" \ 900 "pushl 8(%%eax)\n\t" \
776 "pushl 4(%%eax)\n\t" \ 901 "pushl 4(%%eax)\n\t" \
777 "movl (%%eax), %%eax\n\t" /* target->%eax */ \ 902 "movl (%%eax), %%eax\n\t" /* target->%eax */ \
778 VALGRIND_CALL_NOREDIR_EAX \ 903 VALGRIND_CALL_NOREDIR_EAX \
779 "addl $24, %%esp\n" \ 904 "addl $32, %%esp\n" \
780 : /*out*/ "=a" (_res) \ 905 : /*out*/ "=a" (_res) \
781 : /*in*/ "a" (&_argvec[0]) \ 906 : /*in*/ "a" (&_argvec[0]) \
782 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \ 907 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \
783 ); \ 908 ); \
784 lval = (__typeof__(lval)) _res; \ 909 lval = (__typeof__(lval)) _res; \
785 } while (0) 910 } while (0)
786 911
787 #define CALL_FN_W_7W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \ 912 #define CALL_FN_W_7W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \
788 arg7) \ 913 arg7) \
789 do { \ 914 do { \
790 volatile OrigFn _orig = (orig); \ 915 volatile OrigFn _orig = (orig); \
791 volatile unsigned long _argvec[8]; \ 916 volatile unsigned long _argvec[8]; \
792 volatile unsigned long _res; \ 917 volatile unsigned long _res; \
793 _argvec[0] = (unsigned long)_orig.nraddr; \ 918 _argvec[0] = (unsigned long)_orig.nraddr; \
794 _argvec[1] = (unsigned long)(arg1); \ 919 _argvec[1] = (unsigned long)(arg1); \
795 _argvec[2] = (unsigned long)(arg2); \ 920 _argvec[2] = (unsigned long)(arg2); \
796 _argvec[3] = (unsigned long)(arg3); \ 921 _argvec[3] = (unsigned long)(arg3); \
797 _argvec[4] = (unsigned long)(arg4); \ 922 _argvec[4] = (unsigned long)(arg4); \
798 _argvec[5] = (unsigned long)(arg5); \ 923 _argvec[5] = (unsigned long)(arg5); \
799 _argvec[6] = (unsigned long)(arg6); \ 924 _argvec[6] = (unsigned long)(arg6); \
800 _argvec[7] = (unsigned long)(arg7); \ 925 _argvec[7] = (unsigned long)(arg7); \
801 __asm__ volatile( \ 926 __asm__ volatile( \
927 "subl $4, %%esp\n\t" \
802 "pushl 28(%%eax)\n\t" \ 928 "pushl 28(%%eax)\n\t" \
803 "pushl 24(%%eax)\n\t" \ 929 "pushl 24(%%eax)\n\t" \
804 "pushl 20(%%eax)\n\t" \ 930 "pushl 20(%%eax)\n\t" \
805 "pushl 16(%%eax)\n\t" \ 931 "pushl 16(%%eax)\n\t" \
806 "pushl 12(%%eax)\n\t" \ 932 "pushl 12(%%eax)\n\t" \
807 "pushl 8(%%eax)\n\t" \ 933 "pushl 8(%%eax)\n\t" \
808 "pushl 4(%%eax)\n\t" \ 934 "pushl 4(%%eax)\n\t" \
809 "movl (%%eax), %%eax\n\t" /* target->%eax */ \ 935 "movl (%%eax), %%eax\n\t" /* target->%eax */ \
810 VALGRIND_CALL_NOREDIR_EAX \ 936 VALGRIND_CALL_NOREDIR_EAX \
811 "addl $28, %%esp\n" \ 937 "addl $32, %%esp\n" \
812 : /*out*/ "=a" (_res) \ 938 : /*out*/ "=a" (_res) \
813 : /*in*/ "a" (&_argvec[0]) \ 939 : /*in*/ "a" (&_argvec[0]) \
814 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \ 940 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \
815 ); \ 941 ); \
816 lval = (__typeof__(lval)) _res; \ 942 lval = (__typeof__(lval)) _res; \
817 } while (0) 943 } while (0)
818 944
819 #define CALL_FN_W_8W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \ 945 #define CALL_FN_W_8W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \
820 arg7,arg8) \ 946 arg7,arg8) \
821 do { \ 947 do { \
(...skipping 38 matching lines...)
860 _argvec[1] = (unsigned long)(arg1); \ 986 _argvec[1] = (unsigned long)(arg1); \
861 _argvec[2] = (unsigned long)(arg2); \ 987 _argvec[2] = (unsigned long)(arg2); \
862 _argvec[3] = (unsigned long)(arg3); \ 988 _argvec[3] = (unsigned long)(arg3); \
863 _argvec[4] = (unsigned long)(arg4); \ 989 _argvec[4] = (unsigned long)(arg4); \
864 _argvec[5] = (unsigned long)(arg5); \ 990 _argvec[5] = (unsigned long)(arg5); \
865 _argvec[6] = (unsigned long)(arg6); \ 991 _argvec[6] = (unsigned long)(arg6); \
866 _argvec[7] = (unsigned long)(arg7); \ 992 _argvec[7] = (unsigned long)(arg7); \
867 _argvec[8] = (unsigned long)(arg8); \ 993 _argvec[8] = (unsigned long)(arg8); \
868 _argvec[9] = (unsigned long)(arg9); \ 994 _argvec[9] = (unsigned long)(arg9); \
869 __asm__ volatile( \ 995 __asm__ volatile( \
996 "subl $12, %%esp\n\t" \
870 "pushl 36(%%eax)\n\t" \ 997 "pushl 36(%%eax)\n\t" \
871 "pushl 32(%%eax)\n\t" \ 998 "pushl 32(%%eax)\n\t" \
872 "pushl 28(%%eax)\n\t" \ 999 "pushl 28(%%eax)\n\t" \
873 "pushl 24(%%eax)\n\t" \ 1000 "pushl 24(%%eax)\n\t" \
874 "pushl 20(%%eax)\n\t" \ 1001 "pushl 20(%%eax)\n\t" \
875 "pushl 16(%%eax)\n\t" \ 1002 "pushl 16(%%eax)\n\t" \
876 "pushl 12(%%eax)\n\t" \ 1003 "pushl 12(%%eax)\n\t" \
877 "pushl 8(%%eax)\n\t" \ 1004 "pushl 8(%%eax)\n\t" \
878 "pushl 4(%%eax)\n\t" \ 1005 "pushl 4(%%eax)\n\t" \
879 "movl (%%eax), %%eax\n\t" /* target->%eax */ \ 1006 "movl (%%eax), %%eax\n\t" /* target->%eax */ \
880 VALGRIND_CALL_NOREDIR_EAX \ 1007 VALGRIND_CALL_NOREDIR_EAX \
881 "addl $36, %%esp\n" \ 1008 "addl $48, %%esp\n" \
882 : /*out*/ "=a" (_res) \ 1009 : /*out*/ "=a" (_res) \
883 : /*in*/ "a" (&_argvec[0]) \ 1010 : /*in*/ "a" (&_argvec[0]) \
884 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \ 1011 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \
885 ); \ 1012 ); \
886 lval = (__typeof__(lval)) _res; \ 1013 lval = (__typeof__(lval)) _res; \
887 } while (0) 1014 } while (0)
888 1015
889 #define CALL_FN_W_10W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \ 1016 #define CALL_FN_W_10W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \
890 arg7,arg8,arg9,arg10) \ 1017 arg7,arg8,arg9,arg10) \
891 do { \ 1018 do { \
892 volatile OrigFn _orig = (orig); \ 1019 volatile OrigFn _orig = (orig); \
893 volatile unsigned long _argvec[11]; \ 1020 volatile unsigned long _argvec[11]; \
894 volatile unsigned long _res; \ 1021 volatile unsigned long _res; \
895 _argvec[0] = (unsigned long)_orig.nraddr; \ 1022 _argvec[0] = (unsigned long)_orig.nraddr; \
896 _argvec[1] = (unsigned long)(arg1); \ 1023 _argvec[1] = (unsigned long)(arg1); \
897 _argvec[2] = (unsigned long)(arg2); \ 1024 _argvec[2] = (unsigned long)(arg2); \
898 _argvec[3] = (unsigned long)(arg3); \ 1025 _argvec[3] = (unsigned long)(arg3); \
899 _argvec[4] = (unsigned long)(arg4); \ 1026 _argvec[4] = (unsigned long)(arg4); \
900 _argvec[5] = (unsigned long)(arg5); \ 1027 _argvec[5] = (unsigned long)(arg5); \
901 _argvec[6] = (unsigned long)(arg6); \ 1028 _argvec[6] = (unsigned long)(arg6); \
902 _argvec[7] = (unsigned long)(arg7); \ 1029 _argvec[7] = (unsigned long)(arg7); \
903 _argvec[8] = (unsigned long)(arg8); \ 1030 _argvec[8] = (unsigned long)(arg8); \
904 _argvec[9] = (unsigned long)(arg9); \ 1031 _argvec[9] = (unsigned long)(arg9); \
905 _argvec[10] = (unsigned long)(arg10); \ 1032 _argvec[10] = (unsigned long)(arg10); \
906 __asm__ volatile( \ 1033 __asm__ volatile( \
1034 "subl $8, %%esp\n\t" \
907 "pushl 40(%%eax)\n\t" \ 1035 "pushl 40(%%eax)\n\t" \
908 "pushl 36(%%eax)\n\t" \ 1036 "pushl 36(%%eax)\n\t" \
909 "pushl 32(%%eax)\n\t" \ 1037 "pushl 32(%%eax)\n\t" \
910 "pushl 28(%%eax)\n\t" \ 1038 "pushl 28(%%eax)\n\t" \
911 "pushl 24(%%eax)\n\t" \ 1039 "pushl 24(%%eax)\n\t" \
912 "pushl 20(%%eax)\n\t" \ 1040 "pushl 20(%%eax)\n\t" \
913 "pushl 16(%%eax)\n\t" \ 1041 "pushl 16(%%eax)\n\t" \
914 "pushl 12(%%eax)\n\t" \ 1042 "pushl 12(%%eax)\n\t" \
915 "pushl 8(%%eax)\n\t" \ 1043 "pushl 8(%%eax)\n\t" \
916 "pushl 4(%%eax)\n\t" \ 1044 "pushl 4(%%eax)\n\t" \
917 "movl (%%eax), %%eax\n\t" /* target->%eax */ \ 1045 "movl (%%eax), %%eax\n\t" /* target->%eax */ \
918 VALGRIND_CALL_NOREDIR_EAX \ 1046 VALGRIND_CALL_NOREDIR_EAX \
919 "addl $40, %%esp\n" \ 1047 "addl $48, %%esp\n" \
920 : /*out*/ "=a" (_res) \ 1048 : /*out*/ "=a" (_res) \
921 : /*in*/ "a" (&_argvec[0]) \ 1049 : /*in*/ "a" (&_argvec[0]) \
922 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \ 1050 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \
923 ); \ 1051 ); \
924 lval = (__typeof__(lval)) _res; \ 1052 lval = (__typeof__(lval)) _res; \
925 } while (0) 1053 } while (0)
926 1054
927 #define CALL_FN_W_11W(lval, orig, arg1,arg2,arg3,arg4,arg5, \ 1055 #define CALL_FN_W_11W(lval, orig, arg1,arg2,arg3,arg4,arg5, \
928 arg6,arg7,arg8,arg9,arg10, \ 1056 arg6,arg7,arg8,arg9,arg10, \
929 arg11) \ 1057 arg11) \
930 do { \ 1058 do { \
931 volatile OrigFn _orig = (orig); \ 1059 volatile OrigFn _orig = (orig); \
932 volatile unsigned long _argvec[12]; \ 1060 volatile unsigned long _argvec[12]; \
933 volatile unsigned long _res; \ 1061 volatile unsigned long _res; \
934 _argvec[0] = (unsigned long)_orig.nraddr; \ 1062 _argvec[0] = (unsigned long)_orig.nraddr; \
935 _argvec[1] = (unsigned long)(arg1); \ 1063 _argvec[1] = (unsigned long)(arg1); \
936 _argvec[2] = (unsigned long)(arg2); \ 1064 _argvec[2] = (unsigned long)(arg2); \
937 _argvec[3] = (unsigned long)(arg3); \ 1065 _argvec[3] = (unsigned long)(arg3); \
938 _argvec[4] = (unsigned long)(arg4); \ 1066 _argvec[4] = (unsigned long)(arg4); \
939 _argvec[5] = (unsigned long)(arg5); \ 1067 _argvec[5] = (unsigned long)(arg5); \
940 _argvec[6] = (unsigned long)(arg6); \ 1068 _argvec[6] = (unsigned long)(arg6); \
941 _argvec[7] = (unsigned long)(arg7); \ 1069 _argvec[7] = (unsigned long)(arg7); \
942 _argvec[8] = (unsigned long)(arg8); \ 1070 _argvec[8] = (unsigned long)(arg8); \
943 _argvec[9] = (unsigned long)(arg9); \ 1071 _argvec[9] = (unsigned long)(arg9); \
944 _argvec[10] = (unsigned long)(arg10); \ 1072 _argvec[10] = (unsigned long)(arg10); \
945 _argvec[11] = (unsigned long)(arg11); \ 1073 _argvec[11] = (unsigned long)(arg11); \
946 __asm__ volatile( \ 1074 __asm__ volatile( \
1075 "subl $4, %%esp\n\t" \
947 "pushl 44(%%eax)\n\t" \ 1076 "pushl 44(%%eax)\n\t" \
948 "pushl 40(%%eax)\n\t" \ 1077 "pushl 40(%%eax)\n\t" \
949 "pushl 36(%%eax)\n\t" \ 1078 "pushl 36(%%eax)\n\t" \
950 "pushl 32(%%eax)\n\t" \ 1079 "pushl 32(%%eax)\n\t" \
951 "pushl 28(%%eax)\n\t" \ 1080 "pushl 28(%%eax)\n\t" \
952 "pushl 24(%%eax)\n\t" \ 1081 "pushl 24(%%eax)\n\t" \
953 "pushl 20(%%eax)\n\t" \ 1082 "pushl 20(%%eax)\n\t" \
954 "pushl 16(%%eax)\n\t" \ 1083 "pushl 16(%%eax)\n\t" \
955 "pushl 12(%%eax)\n\t" \ 1084 "pushl 12(%%eax)\n\t" \
956 "pushl 8(%%eax)\n\t" \ 1085 "pushl 8(%%eax)\n\t" \
957 "pushl 4(%%eax)\n\t" \ 1086 "pushl 4(%%eax)\n\t" \
958 "movl (%%eax), %%eax\n\t" /* target->%eax */ \ 1087 "movl (%%eax), %%eax\n\t" /* target->%eax */ \
959 VALGRIND_CALL_NOREDIR_EAX \ 1088 VALGRIND_CALL_NOREDIR_EAX \
960 "addl $44, %%esp\n" \ 1089 "addl $48, %%esp\n" \
961 : /*out*/ "=a" (_res) \ 1090 : /*out*/ "=a" (_res) \
962 : /*in*/ "a" (&_argvec[0]) \ 1091 : /*in*/ "a" (&_argvec[0]) \
963 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \ 1092 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \
964 ); \ 1093 ); \
965 lval = (__typeof__(lval)) _res; \ 1094 lval = (__typeof__(lval)) _res; \
966 } while (0) 1095 } while (0)
967 1096
968 #define CALL_FN_W_12W(lval, orig, arg1,arg2,arg3,arg4,arg5, \ 1097 #define CALL_FN_W_12W(lval, orig, arg1,arg2,arg3,arg4,arg5, \
969 arg6,arg7,arg8,arg9,arg10, \ 1098 arg6,arg7,arg8,arg9,arg10, \
970 arg11,arg12) \ 1099 arg11,arg12) \
(...skipping 30 matching lines...)
1001 "movl (%%eax), %%eax\n\t" /* target->%eax */ \ 1130 "movl (%%eax), %%eax\n\t" /* target->%eax */ \
1002 VALGRIND_CALL_NOREDIR_EAX \ 1131 VALGRIND_CALL_NOREDIR_EAX \
1003 "addl $48, %%esp\n" \ 1132 "addl $48, %%esp\n" \
1004 : /*out*/ "=a" (_res) \ 1133 : /*out*/ "=a" (_res) \
1005 : /*in*/ "a" (&_argvec[0]) \ 1134 : /*in*/ "a" (&_argvec[0]) \
1006 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \ 1135 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \
1007 ); \ 1136 ); \
1008 lval = (__typeof__(lval)) _res; \ 1137 lval = (__typeof__(lval)) _res; \
1009 } while (0) 1138 } while (0)
1010 1139
1011 #endif /* PLAT_x86_linux */ 1140 #endif /* PLAT_x86_linux || PLAT_x86_darwin */
1012 1141
1013 /* ------------------------ amd64-linux ------------------------ */ 1142 /* ------------------------ amd64-{linux,darwin} --------------- */
1014 1143
1015 #if defined(PLAT_amd64_linux) 1144 #if defined(PLAT_amd64_linux) || defined(PLAT_amd64_darwin)
1016 1145
1017 /* ARGREGS: rdi rsi rdx rcx r8 r9 (the rest on stack in R-to-L order) */ 1146 /* ARGREGS: rdi rsi rdx rcx r8 r9 (the rest on stack in R-to-L order) */
1018 1147
1019 /* These regs are trashed by the hidden call. */ 1148 /* These regs are trashed by the hidden call. */
1020 #define __CALLER_SAVED_REGS /*"rax",*/ "rcx", "rdx", "rsi", \ 1149 #define __CALLER_SAVED_REGS /*"rax",*/ "rcx", "rdx", "rsi", \
1021 "rdi", "r8", "r9", "r10", "r11" 1150 "rdi", "r8", "r9", "r10", "r11"
1022 1151
1152 /* This is all pretty complex. It's so as to make stack unwinding
1153 work reliably. See bug 243270. The basic problem is the sub and
1154 add of 128 of %rsp in all of the following macros. If gcc believes
1155 the CFA is in %rsp, then unwinding may fail, because what's at the
1156 CFA is not what gcc "expected" when it constructs the CFIs for the
1157 places where the macros are instantiated.
1158
1159 But we can't just add a CFI annotation to increase the CFA offset
1160 by 128, to match the sub of 128 from %rsp, because we don't know
1161 whether gcc has chosen %rsp as the CFA at that point, or whether it
1162 has chosen some other register (eg, %rbp). In the latter case,
1163 adding a CFI annotation to change the CFA offset is simply wrong.
1164
1165 So the solution is to get hold of the CFA using
1166 __builtin_dwarf_cfa(), put it in a known register, and add a
1167 CFI annotation to say what the register is. We choose %rbp for
1168 this (perhaps perversely), because:
1169
1170 (1) %rbp is already subject to unwinding. If a new register was
1171 chosen then the unwinder would have to unwind it in all stack
1172 traces, which is expensive, and
1173
1174 (2) %rbp is already subject to precise exception updates in the
1175 JIT. If a new register was chosen, we'd have to have precise
1176 exceptions for it too, which reduces performance of the
1177 generated code.
1178
1179 However .. one extra complication. We can't just whack the result
1180 of __builtin_dwarf_cfa() into %rbp and then add %rbp to the
1181 list of trashed registers at the end of the inline assembly
1182 fragments; gcc won't allow %rbp to appear in that list. Hence
1183 instead we need to stash %rbp in %r15 for the duration of the asm,
1184 and say that %r15 is trashed instead. gcc seems happy to go with
1185 that.
1186
1187 Oh .. and this all needs to be conditionalised so that it is
1188 unchanged from before this commit, when compiled with older gccs
1189 that don't support __builtin_dwarf_cfa. Furthermore, since
1190 this header file is freestanding, it has to be independent of
1191 config.h, and so the following conditionalisation cannot depend on
1192 configure time checks.
1193
1194 Although it's not clear from
1195 'defined(__GNUC__) && defined(__GCC_HAVE_DWARF2_CFI_ASM)',
1196 this expression excludes Darwin.
1197 .cfi directives in Darwin assembly appear to be completely
1198 different and I haven't investigated how they work.
1199
1200 For even more entertainment value, note we have to use the
1201 completely undocumented __builtin_dwarf_cfa(), which appears to
1202 really compute the CFA, whereas __builtin_frame_address(0) claims
1203 to but actually doesn't. See
1204 https://bugs.kde.org/show_bug.cgi?id=243270#c47
1205 */
1206 #if defined(__GNUC__) && defined(__GCC_HAVE_DWARF2_CFI_ASM)
1207 # define __FRAME_POINTER \
1208 ,"r"(__builtin_dwarf_cfa())
1209 # define VALGRIND_CFI_PROLOGUE \
1210 "movq %%rbp, %%r15\n\t" \
1211 "movq %2, %%rbp\n\t" \
1212 ".cfi_remember_state\n\t" \
1213 ".cfi_def_cfa rbp, 0\n\t"
1214 # define VALGRIND_CFI_EPILOGUE \
1215 "movq %%r15, %%rbp\n\t" \
1216 ".cfi_restore_state\n\t"
1217 #else
1218 # define __FRAME_POINTER
1219 # define VALGRIND_CFI_PROLOGUE
1220 # define VALGRIND_CFI_EPILOGUE
1221 #endif
1222
1223
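/* A minimal probe of the point made in the comment above, assuming a gcc
   recent enough to provide both builtins: it prints the value computed by
   the (undocumented) __builtin_dwarf_cfa() next to the value returned by
   __builtin_frame_address(0), so the two can be compared against the CFI
   data for the frame.  This is only an illustrative sketch, not part of
   the macros below. */

#include <stdio.h>

static void __attribute__((noinline)) show_cfa ( void )
{
   void* cfa = __builtin_dwarf_cfa();        /* the canonical frame address */
   void* fa  = __builtin_frame_address(0);   /* what gcc claims is the frame */
   printf("cfa = %p, frame_address(0) = %p\n", cfa, fa);
}

int main ( void )
{
   show_cfa();
   return 0;
}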
1023 /* These CALL_FN_ macros assume that on amd64-linux, sizeof(unsigned 1224 /* These CALL_FN_ macros assume that on amd64-linux, sizeof(unsigned
1024 long) == 8. */ 1225 long) == 8. */
1025 1226
1026 /* NB 9 Sept 07. There is a nasty kludge here in all these CALL_FN_ 1227 /* NB 9 Sept 07. There is a nasty kludge here in all these CALL_FN_
1027 macros. In order not to trash the stack redzone, we need to drop 1228 macros. In order not to trash the stack redzone, we need to drop
1028 %rsp by 128 before the hidden call, and restore afterwards. The 1229 %rsp by 128 before the hidden call, and restore afterwards. The
1029 nastyness is that it is only by luck that the stack still appears 1230 nastyness is that it is only by luck that the stack still appears
1030 to be unwindable during the hidden call - since then the behaviour 1231 to be unwindable during the hidden call - since then the behaviour
1031 of any routine using this macro does not match what the CFI data 1232 of any routine using this macro does not match what the CFI data
1032 says. Sigh. 1233 says. Sigh.
1033 1234
1034 Why is this important? Imagine that a wrapper has a stack 1235 Why is this important? Imagine that a wrapper has a stack
1035 allocated local, and passes to the hidden call, a pointer to it. 1236 allocated local, and passes to the hidden call, a pointer to it.
1036 Because gcc does not know about the hidden call, it may allocate 1237 Because gcc does not know about the hidden call, it may allocate
1037 that local in the redzone. Unfortunately the hidden call may then 1238 that local in the redzone. Unfortunately the hidden call may then
1038 trash it before it comes to use it. So we must step clear of the 1239 trash it before it comes to use it. So we must step clear of the
1039 redzone, for the duration of the hidden call, to make it safe. 1240 redzone, for the duration of the hidden call, to make it safe.
1040 1241
1041 Probably the same problem afflicts the other redzone-style ABIs too 1242 Probably the same problem afflicts the other redzone-style ABIs too
1042 (ppc64-linux, ppc32-aix5, ppc64-aix5); but for those, the stack is 1243 (ppc64-linux); but for those, the stack is
1043 self describing (none of this CFI nonsense) so at least messing 1244 self describing (none of this CFI nonsense) so at least messing
1044 with the stack pointer doesn't give a danger of non-unwindable 1245 with the stack pointer doesn't give a danger of non-unwindable
1045 stack. */ 1246 stack. */
1046 1247
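/* A minimal sketch of the hazard described above, assuming the function
   wrapping helpers (OrigFn, VALGRIND_GET_ORIG_FN, I_WRAP_SONAME_FN_ZU)
   declared elsewhere in this header.  The library name libfoo.so* (Z-encoded
   as libfooZdsoZa) and the function get_count are hypothetical.  The wrapper
   passes the address of a stack local to the hidden call; without the
   sub/add of 128 around the call, gcc could legitimately place 'tmp' in the
   red zone, where the hidden call would be free to trash it. */

#include "valgrind.h"

/* hypothetical original: int get_count ( unsigned long* out ); */
int I_WRAP_SONAME_FN_ZU(libfooZdsoZa, get_count) ( unsigned long* out )
{
   int           result;
   OrigFn        fn;
   unsigned long tmp = 0;      /* stack local; gcc may put it in the red zone */
   VALGRIND_GET_ORIG_FN(fn);
   /* hidden call: the real get_count writes through &tmp */
   CALL_FN_W_W(result, fn, (unsigned long)&tmp);
   *out = tmp;                 /* safe only because the macro steps clear of the red zone */
   return result;
}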
1047 #define CALL_FN_W_v(lval, orig) \ 1248 #define CALL_FN_W_v(lval, orig) \
1048 do { \ 1249 do { \
1049 volatile OrigFn _orig = (orig); \ 1250 volatile OrigFn _orig = (orig); \
1050 volatile unsigned long _argvec[1]; \ 1251 volatile unsigned long _argvec[1]; \
1051 volatile unsigned long _res; \ 1252 volatile unsigned long _res; \
1052 _argvec[0] = (unsigned long)_orig.nraddr; \ 1253 _argvec[0] = (unsigned long)_orig.nraddr; \
1053 __asm__ volatile( \ 1254 __asm__ volatile( \
1255 VALGRIND_CFI_PROLOGUE \
1054 "subq $128,%%rsp\n\t" \ 1256 "subq $128,%%rsp\n\t" \
1055 "movq (%%rax), %%rax\n\t" /* target->%rax */ \ 1257 "movq (%%rax), %%rax\n\t" /* target->%rax */ \
1056 VALGRIND_CALL_NOREDIR_RAX \ 1258 VALGRIND_CALL_NOREDIR_RAX \
1057 "addq $128,%%rsp\n\t" \ 1259 "addq $128,%%rsp\n\t" \
1260 VALGRIND_CFI_EPILOGUE \
1058 : /*out*/ "=a" (_res) \ 1261 : /*out*/ "=a" (_res) \
1059 : /*in*/ "a" (&_argvec[0]) \ 1262 : /*in*/ "a" (&_argvec[0]) __FRAME_POINTER \
1060 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \ 1263 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r15" \
1061 ); \ 1264 ); \
1062 lval = (__typeof__(lval)) _res; \ 1265 lval = (__typeof__(lval)) _res; \
1063 } while (0) 1266 } while (0)
1064 1267
1065 #define CALL_FN_W_W(lval, orig, arg1) \ 1268 #define CALL_FN_W_W(lval, orig, arg1) \
1066 do { \ 1269 do { \
1067 volatile OrigFn _orig = (orig); \ 1270 volatile OrigFn _orig = (orig); \
1068 volatile unsigned long _argvec[2]; \ 1271 volatile unsigned long _argvec[2]; \
1069 volatile unsigned long _res; \ 1272 volatile unsigned long _res; \
1070 _argvec[0] = (unsigned long)_orig.nraddr; \ 1273 _argvec[0] = (unsigned long)_orig.nraddr; \
1071 _argvec[1] = (unsigned long)(arg1); \ 1274 _argvec[1] = (unsigned long)(arg1); \
1072 __asm__ volatile( \ 1275 __asm__ volatile( \
1276 VALGRIND_CFI_PROLOGUE \
1073 "subq $128,%%rsp\n\t" \ 1277 "subq $128,%%rsp\n\t" \
1074 "movq 8(%%rax), %%rdi\n\t" \ 1278 "movq 8(%%rax), %%rdi\n\t" \
1075 "movq (%%rax), %%rax\n\t" /* target->%rax */ \ 1279 "movq (%%rax), %%rax\n\t" /* target->%rax */ \
1076 VALGRIND_CALL_NOREDIR_RAX \ 1280 VALGRIND_CALL_NOREDIR_RAX \
1077 "addq $128,%%rsp\n\t" \ 1281 "addq $128,%%rsp\n\t" \
1282 VALGRIND_CFI_EPILOGUE \
1078 : /*out*/ "=a" (_res) \ 1283 : /*out*/ "=a" (_res) \
1079 : /*in*/ "a" (&_argvec[0]) \ 1284 : /*in*/ "a" (&_argvec[0]) __FRAME_POINTER \
1080 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \ 1285 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r15" \
1081 ); \ 1286 ); \
1082 lval = (__typeof__(lval)) _res; \ 1287 lval = (__typeof__(lval)) _res; \
1083 } while (0) 1288 } while (0)
1084 1289
1085 #define CALL_FN_W_WW(lval, orig, arg1,arg2) \ 1290 #define CALL_FN_W_WW(lval, orig, arg1,arg2) \
1086 do { \ 1291 do { \
1087 volatile OrigFn _orig = (orig); \ 1292 volatile OrigFn _orig = (orig); \
1088 volatile unsigned long _argvec[3]; \ 1293 volatile unsigned long _argvec[3]; \
1089 volatile unsigned long _res; \ 1294 volatile unsigned long _res; \
1090 _argvec[0] = (unsigned long)_orig.nraddr; \ 1295 _argvec[0] = (unsigned long)_orig.nraddr; \
1091 _argvec[1] = (unsigned long)(arg1); \ 1296 _argvec[1] = (unsigned long)(arg1); \
1092 _argvec[2] = (unsigned long)(arg2); \ 1297 _argvec[2] = (unsigned long)(arg2); \
1093 __asm__ volatile( \ 1298 __asm__ volatile( \
1299 VALGRIND_CFI_PROLOGUE \
1094 "subq $128,%%rsp\n\t" \ 1300 "subq $128,%%rsp\n\t" \
1095 "movq 16(%%rax), %%rsi\n\t" \ 1301 "movq 16(%%rax), %%rsi\n\t" \
1096 "movq 8(%%rax), %%rdi\n\t" \ 1302 "movq 8(%%rax), %%rdi\n\t" \
1097 "movq (%%rax), %%rax\n\t" /* target->%rax */ \ 1303 "movq (%%rax), %%rax\n\t" /* target->%rax */ \
1098 VALGRIND_CALL_NOREDIR_RAX \ 1304 VALGRIND_CALL_NOREDIR_RAX \
1099 "addq $128,%%rsp\n\t" \ 1305 "addq $128,%%rsp\n\t" \
1306 VALGRIND_CFI_EPILOGUE \
1100 : /*out*/ "=a" (_res) \ 1307 : /*out*/ "=a" (_res) \
1101 : /*in*/ "a" (&_argvec[0]) \ 1308 : /*in*/ "a" (&_argvec[0]) __FRAME_POINTER \
1102 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \ 1309 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r15" \
1103 ); \ 1310 ); \
1104 lval = (__typeof__(lval)) _res; \ 1311 lval = (__typeof__(lval)) _res; \
1105 } while (0) 1312 } while (0)
1106 1313
1107 #define CALL_FN_W_WWW(lval, orig, arg1,arg2,arg3) \ 1314 #define CALL_FN_W_WWW(lval, orig, arg1,arg2,arg3) \
1108 do { \ 1315 do { \
1109 volatile OrigFn _orig = (orig); \ 1316 volatile OrigFn _orig = (orig); \
1110 volatile unsigned long _argvec[4]; \ 1317 volatile unsigned long _argvec[4]; \
1111 volatile unsigned long _res; \ 1318 volatile unsigned long _res; \
1112 _argvec[0] = (unsigned long)_orig.nraddr; \ 1319 _argvec[0] = (unsigned long)_orig.nraddr; \
1113 _argvec[1] = (unsigned long)(arg1); \ 1320 _argvec[1] = (unsigned long)(arg1); \
1114 _argvec[2] = (unsigned long)(arg2); \ 1321 _argvec[2] = (unsigned long)(arg2); \
1115 _argvec[3] = (unsigned long)(arg3); \ 1322 _argvec[3] = (unsigned long)(arg3); \
1116 __asm__ volatile( \ 1323 __asm__ volatile( \
1324 VALGRIND_CFI_PROLOGUE \
1117 "subq $128,%%rsp\n\t" \ 1325 "subq $128,%%rsp\n\t" \
1118 "movq 24(%%rax), %%rdx\n\t" \ 1326 "movq 24(%%rax), %%rdx\n\t" \
1119 "movq 16(%%rax), %%rsi\n\t" \ 1327 "movq 16(%%rax), %%rsi\n\t" \
1120 "movq 8(%%rax), %%rdi\n\t" \ 1328 "movq 8(%%rax), %%rdi\n\t" \
1121 "movq (%%rax), %%rax\n\t" /* target->%rax */ \ 1329 "movq (%%rax), %%rax\n\t" /* target->%rax */ \
1122 VALGRIND_CALL_NOREDIR_RAX \ 1330 VALGRIND_CALL_NOREDIR_RAX \
1123 "addq $128,%%rsp\n\t" \ 1331 "addq $128,%%rsp\n\t" \
1332 VALGRIND_CFI_EPILOGUE \
1124 : /*out*/ "=a" (_res) \ 1333 : /*out*/ "=a" (_res) \
1125 : /*in*/ "a" (&_argvec[0]) \ 1334 : /*in*/ "a" (&_argvec[0]) __FRAME_POINTER \
1126 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \ 1335 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r15" \
1127 ); \ 1336 ); \
1128 lval = (__typeof__(lval)) _res; \ 1337 lval = (__typeof__(lval)) _res; \
1129 } while (0) 1338 } while (0)
1130 1339
1131 #define CALL_FN_W_WWWW(lval, orig, arg1,arg2,arg3,arg4) \ 1340 #define CALL_FN_W_WWWW(lval, orig, arg1,arg2,arg3,arg4) \
1132 do { \ 1341 do { \
1133 volatile OrigFn _orig = (orig); \ 1342 volatile OrigFn _orig = (orig); \
1134 volatile unsigned long _argvec[5]; \ 1343 volatile unsigned long _argvec[5]; \
1135 volatile unsigned long _res; \ 1344 volatile unsigned long _res; \
1136 _argvec[0] = (unsigned long)_orig.nraddr; \ 1345 _argvec[0] = (unsigned long)_orig.nraddr; \
1137 _argvec[1] = (unsigned long)(arg1); \ 1346 _argvec[1] = (unsigned long)(arg1); \
1138 _argvec[2] = (unsigned long)(arg2); \ 1347 _argvec[2] = (unsigned long)(arg2); \
1139 _argvec[3] = (unsigned long)(arg3); \ 1348 _argvec[3] = (unsigned long)(arg3); \
1140 _argvec[4] = (unsigned long)(arg4); \ 1349 _argvec[4] = (unsigned long)(arg4); \
1141 __asm__ volatile( \ 1350 __asm__ volatile( \
1351 VALGRIND_CFI_PROLOGUE \
1142 "subq $128,%%rsp\n\t" \ 1352 "subq $128,%%rsp\n\t" \
1143 "movq 32(%%rax), %%rcx\n\t" \ 1353 "movq 32(%%rax), %%rcx\n\t" \
1144 "movq 24(%%rax), %%rdx\n\t" \ 1354 "movq 24(%%rax), %%rdx\n\t" \
1145 "movq 16(%%rax), %%rsi\n\t" \ 1355 "movq 16(%%rax), %%rsi\n\t" \
1146 "movq 8(%%rax), %%rdi\n\t" \ 1356 "movq 8(%%rax), %%rdi\n\t" \
1147 "movq (%%rax), %%rax\n\t" /* target->%rax */ \ 1357 "movq (%%rax), %%rax\n\t" /* target->%rax */ \
1148 VALGRIND_CALL_NOREDIR_RAX \ 1358 VALGRIND_CALL_NOREDIR_RAX \
1149 "addq $128,%%rsp\n\t" \ 1359 "addq $128,%%rsp\n\t" \
1360 VALGRIND_CFI_EPILOGUE \
1150 : /*out*/ "=a" (_res) \ 1361 : /*out*/ "=a" (_res) \
1151 : /*in*/ "a" (&_argvec[0]) \ 1362 : /*in*/ "a" (&_argvec[0]) __FRAME_POINTER \
1152 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \ 1363 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r15" \
1153 ); \ 1364 ); \
1154 lval = (__typeof__(lval)) _res; \ 1365 lval = (__typeof__(lval)) _res; \
1155 } while (0) 1366 } while (0)
1156 1367
1157 #define CALL_FN_W_5W(lval, orig, arg1,arg2,arg3,arg4,arg5) \ 1368 #define CALL_FN_W_5W(lval, orig, arg1,arg2,arg3,arg4,arg5) \
1158 do { \ 1369 do { \
1159 volatile OrigFn _orig = (orig); \ 1370 volatile OrigFn _orig = (orig); \
1160 volatile unsigned long _argvec[6]; \ 1371 volatile unsigned long _argvec[6]; \
1161 volatile unsigned long _res; \ 1372 volatile unsigned long _res; \
1162 _argvec[0] = (unsigned long)_orig.nraddr; \ 1373 _argvec[0] = (unsigned long)_orig.nraddr; \
1163 _argvec[1] = (unsigned long)(arg1); \ 1374 _argvec[1] = (unsigned long)(arg1); \
1164 _argvec[2] = (unsigned long)(arg2); \ 1375 _argvec[2] = (unsigned long)(arg2); \
1165 _argvec[3] = (unsigned long)(arg3); \ 1376 _argvec[3] = (unsigned long)(arg3); \
1166 _argvec[4] = (unsigned long)(arg4); \ 1377 _argvec[4] = (unsigned long)(arg4); \
1167 _argvec[5] = (unsigned long)(arg5); \ 1378 _argvec[5] = (unsigned long)(arg5); \
1168 __asm__ volatile( \ 1379 __asm__ volatile( \
1380 VALGRIND_CFI_PROLOGUE \
1169 "subq $128,%%rsp\n\t" \ 1381 "subq $128,%%rsp\n\t" \
1170 "movq 40(%%rax), %%r8\n\t" \ 1382 "movq 40(%%rax), %%r8\n\t" \
1171 "movq 32(%%rax), %%rcx\n\t" \ 1383 "movq 32(%%rax), %%rcx\n\t" \
1172 "movq 24(%%rax), %%rdx\n\t" \ 1384 "movq 24(%%rax), %%rdx\n\t" \
1173 "movq 16(%%rax), %%rsi\n\t" \ 1385 "movq 16(%%rax), %%rsi\n\t" \
1174 "movq 8(%%rax), %%rdi\n\t" \ 1386 "movq 8(%%rax), %%rdi\n\t" \
1175 "movq (%%rax), %%rax\n\t" /* target->%rax */ \ 1387 "movq (%%rax), %%rax\n\t" /* target->%rax */ \
1176 VALGRIND_CALL_NOREDIR_RAX \ 1388 VALGRIND_CALL_NOREDIR_RAX \
1177 "addq $128,%%rsp\n\t" \ 1389 "addq $128,%%rsp\n\t" \
1390 VALGRIND_CFI_EPILOGUE \
1178 : /*out*/ "=a" (_res) \ 1391 : /*out*/ "=a" (_res) \
1179 : /*in*/ "a" (&_argvec[0]) \ 1392 : /*in*/ "a" (&_argvec[0]) __FRAME_POINTER \
1180 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \ 1393 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r15" \
1181 ); \ 1394 ); \
1182 lval = (__typeof__(lval)) _res; \ 1395 lval = (__typeof__(lval)) _res; \
1183 } while (0) 1396 } while (0)
1184 1397
1185 #define CALL_FN_W_6W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6) \ 1398 #define CALL_FN_W_6W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6) \
1186 do { \ 1399 do { \
1187 volatile OrigFn _orig = (orig); \ 1400 volatile OrigFn _orig = (orig); \
1188 volatile unsigned long _argvec[7]; \ 1401 volatile unsigned long _argvec[7]; \
1189 volatile unsigned long _res; \ 1402 volatile unsigned long _res; \
1190 _argvec[0] = (unsigned long)_orig.nraddr; \ 1403 _argvec[0] = (unsigned long)_orig.nraddr; \
1191 _argvec[1] = (unsigned long)(arg1); \ 1404 _argvec[1] = (unsigned long)(arg1); \
1192 _argvec[2] = (unsigned long)(arg2); \ 1405 _argvec[2] = (unsigned long)(arg2); \
1193 _argvec[3] = (unsigned long)(arg3); \ 1406 _argvec[3] = (unsigned long)(arg3); \
1194 _argvec[4] = (unsigned long)(arg4); \ 1407 _argvec[4] = (unsigned long)(arg4); \
1195 _argvec[5] = (unsigned long)(arg5); \ 1408 _argvec[5] = (unsigned long)(arg5); \
1196 _argvec[6] = (unsigned long)(arg6); \ 1409 _argvec[6] = (unsigned long)(arg6); \
1197 __asm__ volatile( \ 1410 __asm__ volatile( \
1411 VALGRIND_CFI_PROLOGUE \
1198 "subq $128,%%rsp\n\t" \ 1412 "subq $128,%%rsp\n\t" \
1199 "movq 48(%%rax), %%r9\n\t" \ 1413 "movq 48(%%rax), %%r9\n\t" \
1200 "movq 40(%%rax), %%r8\n\t" \ 1414 "movq 40(%%rax), %%r8\n\t" \
1201 "movq 32(%%rax), %%rcx\n\t" \ 1415 "movq 32(%%rax), %%rcx\n\t" \
1202 "movq 24(%%rax), %%rdx\n\t" \ 1416 "movq 24(%%rax), %%rdx\n\t" \
1203 "movq 16(%%rax), %%rsi\n\t" \ 1417 "movq 16(%%rax), %%rsi\n\t" \
1204 "movq 8(%%rax), %%rdi\n\t" \ 1418 "movq 8(%%rax), %%rdi\n\t" \
1205 "movq (%%rax), %%rax\n\t" /* target->%rax */ \ 1419 "movq (%%rax), %%rax\n\t" /* target->%rax */ \
1420 VALGRIND_CALL_NOREDIR_RAX \
1206 "addq $128,%%rsp\n\t" \ 1421 "addq $128,%%rsp\n\t" \
1207 VALGRIND_CALL_NOREDIR_RAX \ 1422 VALGRIND_CFI_EPILOGUE \
1208 : /*out*/ "=a" (_res) \ 1423 : /*out*/ "=a" (_res) \
1209 : /*in*/ "a" (&_argvec[0]) \ 1424 : /*in*/ "a" (&_argvec[0]) __FRAME_POINTER \
1210 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \ 1425 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r15" \
1211 ); \ 1426 ); \
1212 lval = (__typeof__(lval)) _res; \ 1427 lval = (__typeof__(lval)) _res; \
1213 } while (0) 1428 } while (0)
1214 1429
1215 #define CALL_FN_W_7W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \ 1430 #define CALL_FN_W_7W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \
1216 arg7) \ 1431 arg7) \
1217 do { \ 1432 do { \
1218 volatile OrigFn _orig = (orig); \ 1433 volatile OrigFn _orig = (orig); \
1219 volatile unsigned long _argvec[8]; \ 1434 volatile unsigned long _argvec[8]; \
1220 volatile unsigned long _res; \ 1435 volatile unsigned long _res; \
1221 _argvec[0] = (unsigned long)_orig.nraddr; \ 1436 _argvec[0] = (unsigned long)_orig.nraddr; \
1222 _argvec[1] = (unsigned long)(arg1); \ 1437 _argvec[1] = (unsigned long)(arg1); \
1223 _argvec[2] = (unsigned long)(arg2); \ 1438 _argvec[2] = (unsigned long)(arg2); \
1224 _argvec[3] = (unsigned long)(arg3); \ 1439 _argvec[3] = (unsigned long)(arg3); \
1225 _argvec[4] = (unsigned long)(arg4); \ 1440 _argvec[4] = (unsigned long)(arg4); \
1226 _argvec[5] = (unsigned long)(arg5); \ 1441 _argvec[5] = (unsigned long)(arg5); \
1227 _argvec[6] = (unsigned long)(arg6); \ 1442 _argvec[6] = (unsigned long)(arg6); \
1228 _argvec[7] = (unsigned long)(arg7); \ 1443 _argvec[7] = (unsigned long)(arg7); \
1229 __asm__ volatile( \ 1444 __asm__ volatile( \
1230 "subq $128,%%rsp\n\t" \ 1445 VALGRIND_CFI_PROLOGUE \
1446 "subq $136,%%rsp\n\t" \
1231 "pushq 56(%%rax)\n\t" \ 1447 "pushq 56(%%rax)\n\t" \
1232 "movq 48(%%rax), %%r9\n\t" \ 1448 "movq 48(%%rax), %%r9\n\t" \
1233 "movq 40(%%rax), %%r8\n\t" \ 1449 "movq 40(%%rax), %%r8\n\t" \
1234 "movq 32(%%rax), %%rcx\n\t" \ 1450 "movq 32(%%rax), %%rcx\n\t" \
1235 "movq 24(%%rax), %%rdx\n\t" \ 1451 "movq 24(%%rax), %%rdx\n\t" \
1236 "movq 16(%%rax), %%rsi\n\t" \ 1452 "movq 16(%%rax), %%rsi\n\t" \
1237 "movq 8(%%rax), %%rdi\n\t" \ 1453 "movq 8(%%rax), %%rdi\n\t" \
1238 "movq (%%rax), %%rax\n\t" /* target->%rax */ \ 1454 "movq (%%rax), %%rax\n\t" /* target->%rax */ \
1239 VALGRIND_CALL_NOREDIR_RAX \ 1455 VALGRIND_CALL_NOREDIR_RAX \
1240 "addq $8, %%rsp\n" \ 1456 "addq $8, %%rsp\n" \
1241 "addq $128,%%rsp\n\t" \ 1457 "addq $136,%%rsp\n\t" \
1458 VALGRIND_CFI_EPILOGUE \
1242 : /*out*/ "=a" (_res) \ 1459 : /*out*/ "=a" (_res) \
1243 : /*in*/ "a" (&_argvec[0]) \ 1460 : /*in*/ "a" (&_argvec[0]) __FRAME_POINTER \
1244 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \ 1461 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r15" \
1245 ); \ 1462 ); \
1246 lval = (__typeof__(lval)) _res; \ 1463 lval = (__typeof__(lval)) _res; \
1247 } while (0) 1464 } while (0)
1248 1465
1249 #define CALL_FN_W_8W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \ 1466 #define CALL_FN_W_8W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \
1250 arg7,arg8) \ 1467 arg7,arg8) \
1251 do { \ 1468 do { \
1252 volatile OrigFn _orig = (orig); \ 1469 volatile OrigFn _orig = (orig); \
1253 volatile unsigned long _argvec[9]; \ 1470 volatile unsigned long _argvec[9]; \
1254 volatile unsigned long _res; \ 1471 volatile unsigned long _res; \
1255 _argvec[0] = (unsigned long)_orig.nraddr; \ 1472 _argvec[0] = (unsigned long)_orig.nraddr; \
1256 _argvec[1] = (unsigned long)(arg1); \ 1473 _argvec[1] = (unsigned long)(arg1); \
1257 _argvec[2] = (unsigned long)(arg2); \ 1474 _argvec[2] = (unsigned long)(arg2); \
1258 _argvec[3] = (unsigned long)(arg3); \ 1475 _argvec[3] = (unsigned long)(arg3); \
1259 _argvec[4] = (unsigned long)(arg4); \ 1476 _argvec[4] = (unsigned long)(arg4); \
1260 _argvec[5] = (unsigned long)(arg5); \ 1477 _argvec[5] = (unsigned long)(arg5); \
1261 _argvec[6] = (unsigned long)(arg6); \ 1478 _argvec[6] = (unsigned long)(arg6); \
1262 _argvec[7] = (unsigned long)(arg7); \ 1479 _argvec[7] = (unsigned long)(arg7); \
1263 _argvec[8] = (unsigned long)(arg8); \ 1480 _argvec[8] = (unsigned long)(arg8); \
1264 __asm__ volatile( \ 1481 __asm__ volatile( \
1482 VALGRIND_CFI_PROLOGUE \
1265 "subq $128,%%rsp\n\t" \ 1483 "subq $128,%%rsp\n\t" \
1266 "pushq 64(%%rax)\n\t" \ 1484 "pushq 64(%%rax)\n\t" \
1267 "pushq 56(%%rax)\n\t" \ 1485 "pushq 56(%%rax)\n\t" \
1268 "movq 48(%%rax), %%r9\n\t" \ 1486 "movq 48(%%rax), %%r9\n\t" \
1269 "movq 40(%%rax), %%r8\n\t" \ 1487 "movq 40(%%rax), %%r8\n\t" \
1270 "movq 32(%%rax), %%rcx\n\t" \ 1488 "movq 32(%%rax), %%rcx\n\t" \
1271 "movq 24(%%rax), %%rdx\n\t" \ 1489 "movq 24(%%rax), %%rdx\n\t" \
1272 "movq 16(%%rax), %%rsi\n\t" \ 1490 "movq 16(%%rax), %%rsi\n\t" \
1273 "movq 8(%%rax), %%rdi\n\t" \ 1491 "movq 8(%%rax), %%rdi\n\t" \
1274 "movq (%%rax), %%rax\n\t" /* target->%rax */ \ 1492 "movq (%%rax), %%rax\n\t" /* target->%rax */ \
1275 VALGRIND_CALL_NOREDIR_RAX \ 1493 VALGRIND_CALL_NOREDIR_RAX \
1276 "addq $16, %%rsp\n" \ 1494 "addq $16, %%rsp\n" \
1277 "addq $128,%%rsp\n\t" \ 1495 "addq $128,%%rsp\n\t" \
1496 VALGRIND_CFI_EPILOGUE \
1278 : /*out*/ "=a" (_res) \ 1497 : /*out*/ "=a" (_res) \
1279 : /*in*/ "a" (&_argvec[0]) \ 1498 : /*in*/ "a" (&_argvec[0]) __FRAME_POINTER \
1280 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \ 1499 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r15" \
1281 ); \ 1500 ); \
1282 lval = (__typeof__(lval)) _res; \ 1501 lval = (__typeof__(lval)) _res; \
1283 } while (0) 1502 } while (0)
1284 1503
1285 #define CALL_FN_W_9W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \ 1504 #define CALL_FN_W_9W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \
1286 arg7,arg8,arg9) \ 1505 arg7,arg8,arg9) \
1287 do { \ 1506 do { \
1288 volatile OrigFn _orig = (orig); \ 1507 volatile OrigFn _orig = (orig); \
1289 volatile unsigned long _argvec[10]; \ 1508 volatile unsigned long _argvec[10]; \
1290 volatile unsigned long _res; \ 1509 volatile unsigned long _res; \
1291 _argvec[0] = (unsigned long)_orig.nraddr; \ 1510 _argvec[0] = (unsigned long)_orig.nraddr; \
1292 _argvec[1] = (unsigned long)(arg1); \ 1511 _argvec[1] = (unsigned long)(arg1); \
1293 _argvec[2] = (unsigned long)(arg2); \ 1512 _argvec[2] = (unsigned long)(arg2); \
1294 _argvec[3] = (unsigned long)(arg3); \ 1513 _argvec[3] = (unsigned long)(arg3); \
1295 _argvec[4] = (unsigned long)(arg4); \ 1514 _argvec[4] = (unsigned long)(arg4); \
1296 _argvec[5] = (unsigned long)(arg5); \ 1515 _argvec[5] = (unsigned long)(arg5); \
1297 _argvec[6] = (unsigned long)(arg6); \ 1516 _argvec[6] = (unsigned long)(arg6); \
1298 _argvec[7] = (unsigned long)(arg7); \ 1517 _argvec[7] = (unsigned long)(arg7); \
1299 _argvec[8] = (unsigned long)(arg8); \ 1518 _argvec[8] = (unsigned long)(arg8); \
1300 _argvec[9] = (unsigned long)(arg9); \ 1519 _argvec[9] = (unsigned long)(arg9); \
1301 __asm__ volatile( \ 1520 __asm__ volatile( \
1302 "subq $128,%%rsp\n\t" \ 1521 VALGRIND_CFI_PROLOGUE \
1522 "subq $136,%%rsp\n\t" \
1303 "pushq 72(%%rax)\n\t" \ 1523 "pushq 72(%%rax)\n\t" \
1304 "pushq 64(%%rax)\n\t" \ 1524 "pushq 64(%%rax)\n\t" \
1305 "pushq 56(%%rax)\n\t" \ 1525 "pushq 56(%%rax)\n\t" \
1306 "movq 48(%%rax), %%r9\n\t" \ 1526 "movq 48(%%rax), %%r9\n\t" \
1307 "movq 40(%%rax), %%r8\n\t" \ 1527 "movq 40(%%rax), %%r8\n\t" \
1308 "movq 32(%%rax), %%rcx\n\t" \ 1528 "movq 32(%%rax), %%rcx\n\t" \
1309 "movq 24(%%rax), %%rdx\n\t" \ 1529 "movq 24(%%rax), %%rdx\n\t" \
1310 "movq 16(%%rax), %%rsi\n\t" \ 1530 "movq 16(%%rax), %%rsi\n\t" \
1311 "movq 8(%%rax), %%rdi\n\t" \ 1531 "movq 8(%%rax), %%rdi\n\t" \
1312 "movq (%%rax), %%rax\n\t" /* target->%rax */ \ 1532 "movq (%%rax), %%rax\n\t" /* target->%rax */ \
1313 VALGRIND_CALL_NOREDIR_RAX \ 1533 VALGRIND_CALL_NOREDIR_RAX \
1314 "addq $24, %%rsp\n" \ 1534 "addq $24, %%rsp\n" \
1315 "addq $128,%%rsp\n\t" \ 1535 "addq $136,%%rsp\n\t" \
1536 VALGRIND_CFI_EPILOGUE \
1316 : /*out*/ "=a" (_res) \ 1537 : /*out*/ "=a" (_res) \
1317 : /*in*/ "a" (&_argvec[0]) \ 1538 : /*in*/ "a" (&_argvec[0]) __FRAME_POINTER \
1318 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \ 1539 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r15" \
1319 ); \ 1540 ); \
1320 lval = (__typeof__(lval)) _res; \ 1541 lval = (__typeof__(lval)) _res; \
1321 } while (0) 1542 } while (0)
1322 1543
1323 #define CALL_FN_W_10W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \ 1544 #define CALL_FN_W_10W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \
1324 arg7,arg8,arg9,arg10) \ 1545 arg7,arg8,arg9,arg10) \
1325 do { \ 1546 do { \
1326 volatile OrigFn _orig = (orig); \ 1547 volatile OrigFn _orig = (orig); \
1327 volatile unsigned long _argvec[11]; \ 1548 volatile unsigned long _argvec[11]; \
1328 volatile unsigned long _res; \ 1549 volatile unsigned long _res; \
1329 _argvec[0] = (unsigned long)_orig.nraddr; \ 1550 _argvec[0] = (unsigned long)_orig.nraddr; \
1330 _argvec[1] = (unsigned long)(arg1); \ 1551 _argvec[1] = (unsigned long)(arg1); \
1331 _argvec[2] = (unsigned long)(arg2); \ 1552 _argvec[2] = (unsigned long)(arg2); \
1332 _argvec[3] = (unsigned long)(arg3); \ 1553 _argvec[3] = (unsigned long)(arg3); \
1333 _argvec[4] = (unsigned long)(arg4); \ 1554 _argvec[4] = (unsigned long)(arg4); \
1334 _argvec[5] = (unsigned long)(arg5); \ 1555 _argvec[5] = (unsigned long)(arg5); \
1335 _argvec[6] = (unsigned long)(arg6); \ 1556 _argvec[6] = (unsigned long)(arg6); \
1336 _argvec[7] = (unsigned long)(arg7); \ 1557 _argvec[7] = (unsigned long)(arg7); \
1337 _argvec[8] = (unsigned long)(arg8); \ 1558 _argvec[8] = (unsigned long)(arg8); \
1338 _argvec[9] = (unsigned long)(arg9); \ 1559 _argvec[9] = (unsigned long)(arg9); \
1339 _argvec[10] = (unsigned long)(arg10); \ 1560 _argvec[10] = (unsigned long)(arg10); \
1340 __asm__ volatile( \ 1561 __asm__ volatile( \
1562 VALGRIND_CFI_PROLOGUE \
1341 "subq $128,%%rsp\n\t" \ 1563 "subq $128,%%rsp\n\t" \
1342 "pushq 80(%%rax)\n\t" \ 1564 "pushq 80(%%rax)\n\t" \
1343 "pushq 72(%%rax)\n\t" \ 1565 "pushq 72(%%rax)\n\t" \
1344 "pushq 64(%%rax)\n\t" \ 1566 "pushq 64(%%rax)\n\t" \
1345 "pushq 56(%%rax)\n\t" \ 1567 "pushq 56(%%rax)\n\t" \
1346 "movq 48(%%rax), %%r9\n\t" \ 1568 "movq 48(%%rax), %%r9\n\t" \
1347 "movq 40(%%rax), %%r8\n\t" \ 1569 "movq 40(%%rax), %%r8\n\t" \
1348 "movq 32(%%rax), %%rcx\n\t" \ 1570 "movq 32(%%rax), %%rcx\n\t" \
1349 "movq 24(%%rax), %%rdx\n\t" \ 1571 "movq 24(%%rax), %%rdx\n\t" \
1350 "movq 16(%%rax), %%rsi\n\t" \ 1572 "movq 16(%%rax), %%rsi\n\t" \
1351 "movq 8(%%rax), %%rdi\n\t" \ 1573 "movq 8(%%rax), %%rdi\n\t" \
1352 "movq (%%rax), %%rax\n\t" /* target->%rax */ \ 1574 "movq (%%rax), %%rax\n\t" /* target->%rax */ \
1353 VALGRIND_CALL_NOREDIR_RAX \ 1575 VALGRIND_CALL_NOREDIR_RAX \
1354 "addq $32, %%rsp\n" \ 1576 "addq $32, %%rsp\n" \
1355 "addq $128,%%rsp\n\t" \ 1577 "addq $128,%%rsp\n\t" \
1578 VALGRIND_CFI_EPILOGUE \
1356 : /*out*/ "=a" (_res) \ 1579 : /*out*/ "=a" (_res) \
1357 : /*in*/ "a" (&_argvec[0]) \ 1580 : /*in*/ "a" (&_argvec[0]) __FRAME_POINTER \
1358 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \ 1581 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r15" \
1359 ); \ 1582 ); \
1360 lval = (__typeof__(lval)) _res; \ 1583 lval = (__typeof__(lval)) _res; \
1361 } while (0) 1584 } while (0)
1362 1585
1363 #define CALL_FN_W_11W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \ 1586 #define CALL_FN_W_11W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \
1364 arg7,arg8,arg9,arg10,arg11) \ 1587 arg7,arg8,arg9,arg10,arg11) \
1365 do { \ 1588 do { \
1366 volatile OrigFn _orig = (orig); \ 1589 volatile OrigFn _orig = (orig); \
1367 volatile unsigned long _argvec[12]; \ 1590 volatile unsigned long _argvec[12]; \
1368 volatile unsigned long _res; \ 1591 volatile unsigned long _res; \
1369 _argvec[0] = (unsigned long)_orig.nraddr; \ 1592 _argvec[0] = (unsigned long)_orig.nraddr; \
1370 _argvec[1] = (unsigned long)(arg1); \ 1593 _argvec[1] = (unsigned long)(arg1); \
1371 _argvec[2] = (unsigned long)(arg2); \ 1594 _argvec[2] = (unsigned long)(arg2); \
1372 _argvec[3] = (unsigned long)(arg3); \ 1595 _argvec[3] = (unsigned long)(arg3); \
1373 _argvec[4] = (unsigned long)(arg4); \ 1596 _argvec[4] = (unsigned long)(arg4); \
1374 _argvec[5] = (unsigned long)(arg5); \ 1597 _argvec[5] = (unsigned long)(arg5); \
1375 _argvec[6] = (unsigned long)(arg6); \ 1598 _argvec[6] = (unsigned long)(arg6); \
1376 _argvec[7] = (unsigned long)(arg7); \ 1599 _argvec[7] = (unsigned long)(arg7); \
1377 _argvec[8] = (unsigned long)(arg8); \ 1600 _argvec[8] = (unsigned long)(arg8); \
1378 _argvec[9] = (unsigned long)(arg9); \ 1601 _argvec[9] = (unsigned long)(arg9); \
1379 _argvec[10] = (unsigned long)(arg10); \ 1602 _argvec[10] = (unsigned long)(arg10); \
1380 _argvec[11] = (unsigned long)(arg11); \ 1603 _argvec[11] = (unsigned long)(arg11); \
1381 __asm__ volatile( \ 1604 __asm__ volatile( \
1382 "subq $128,%%rsp\n\t" \ 1605 VALGRIND_CFI_PROLOGUE \
1606 "subq $136,%%rsp\n\t" \
1383 "pushq 88(%%rax)\n\t" \ 1607 "pushq 88(%%rax)\n\t" \
1384 "pushq 80(%%rax)\n\t" \ 1608 "pushq 80(%%rax)\n\t" \
1385 "pushq 72(%%rax)\n\t" \ 1609 "pushq 72(%%rax)\n\t" \
1386 "pushq 64(%%rax)\n\t" \ 1610 "pushq 64(%%rax)\n\t" \
1387 "pushq 56(%%rax)\n\t" \ 1611 "pushq 56(%%rax)\n\t" \
1388 "movq 48(%%rax), %%r9\n\t" \ 1612 "movq 48(%%rax), %%r9\n\t" \
1389 "movq 40(%%rax), %%r8\n\t" \ 1613 "movq 40(%%rax), %%r8\n\t" \
1390 "movq 32(%%rax), %%rcx\n\t" \ 1614 "movq 32(%%rax), %%rcx\n\t" \
1391 "movq 24(%%rax), %%rdx\n\t" \ 1615 "movq 24(%%rax), %%rdx\n\t" \
1392 "movq 16(%%rax), %%rsi\n\t" \ 1616 "movq 16(%%rax), %%rsi\n\t" \
1393 "movq 8(%%rax), %%rdi\n\t" \ 1617 "movq 8(%%rax), %%rdi\n\t" \
1394 "movq (%%rax), %%rax\n\t" /* target->%rax */ \ 1618 "movq (%%rax), %%rax\n\t" /* target->%rax */ \
1395 VALGRIND_CALL_NOREDIR_RAX \ 1619 VALGRIND_CALL_NOREDIR_RAX \
1396 "addq $40, %%rsp\n" \ 1620 "addq $40, %%rsp\n" \
1397 "addq $128,%%rsp\n\t" \ 1621 "addq $136,%%rsp\n\t" \
1622 VALGRIND_CFI_EPILOGUE \
1398 : /*out*/ "=a" (_res) \ 1623 : /*out*/ "=a" (_res) \
1399 : /*in*/ "a" (&_argvec[0]) \ 1624 : /*in*/ "a" (&_argvec[0]) __FRAME_POINTER \
1400 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \ 1625 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r15" \
1401 ); \ 1626 ); \
1402 lval = (__typeof__(lval)) _res; \ 1627 lval = (__typeof__(lval)) _res; \
1403 } while (0) 1628 } while (0)
1404 1629
1405 #define CALL_FN_W_12W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \ 1630 #define CALL_FN_W_12W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \
1406 arg7,arg8,arg9,arg10,arg11,arg12) \ 1631 arg7,arg8,arg9,arg10,arg11,arg12) \
1407 do { \ 1632 do { \
1408 volatile OrigFn _orig = (orig); \ 1633 volatile OrigFn _orig = (orig); \
1409 volatile unsigned long _argvec[13]; \ 1634 volatile unsigned long _argvec[13]; \
1410 volatile unsigned long _res; \ 1635 volatile unsigned long _res; \
1411 _argvec[0] = (unsigned long)_orig.nraddr; \ 1636 _argvec[0] = (unsigned long)_orig.nraddr; \
1412 _argvec[1] = (unsigned long)(arg1); \ 1637 _argvec[1] = (unsigned long)(arg1); \
1413 _argvec[2] = (unsigned long)(arg2); \ 1638 _argvec[2] = (unsigned long)(arg2); \
1414 _argvec[3] = (unsigned long)(arg3); \ 1639 _argvec[3] = (unsigned long)(arg3); \
1415 _argvec[4] = (unsigned long)(arg4); \ 1640 _argvec[4] = (unsigned long)(arg4); \
1416 _argvec[5] = (unsigned long)(arg5); \ 1641 _argvec[5] = (unsigned long)(arg5); \
1417 _argvec[6] = (unsigned long)(arg6); \ 1642 _argvec[6] = (unsigned long)(arg6); \
1418 _argvec[7] = (unsigned long)(arg7); \ 1643 _argvec[7] = (unsigned long)(arg7); \
1419 _argvec[8] = (unsigned long)(arg8); \ 1644 _argvec[8] = (unsigned long)(arg8); \
1420 _argvec[9] = (unsigned long)(arg9); \ 1645 _argvec[9] = (unsigned long)(arg9); \
1421 _argvec[10] = (unsigned long)(arg10); \ 1646 _argvec[10] = (unsigned long)(arg10); \
1422 _argvec[11] = (unsigned long)(arg11); \ 1647 _argvec[11] = (unsigned long)(arg11); \
1423 _argvec[12] = (unsigned long)(arg12); \ 1648 _argvec[12] = (unsigned long)(arg12); \
1424 __asm__ volatile( \ 1649 __asm__ volatile( \
1650 VALGRIND_CFI_PROLOGUE \
1425 "subq $128,%%rsp\n\t" \ 1651 "subq $128,%%rsp\n\t" \
1426 "pushq 96(%%rax)\n\t" \ 1652 "pushq 96(%%rax)\n\t" \
1427 "pushq 88(%%rax)\n\t" \ 1653 "pushq 88(%%rax)\n\t" \
1428 "pushq 80(%%rax)\n\t" \ 1654 "pushq 80(%%rax)\n\t" \
1429 "pushq 72(%%rax)\n\t" \ 1655 "pushq 72(%%rax)\n\t" \
1430 "pushq 64(%%rax)\n\t" \ 1656 "pushq 64(%%rax)\n\t" \
1431 "pushq 56(%%rax)\n\t" \ 1657 "pushq 56(%%rax)\n\t" \
1432 "movq 48(%%rax), %%r9\n\t" \ 1658 "movq 48(%%rax), %%r9\n\t" \
1433 "movq 40(%%rax), %%r8\n\t" \ 1659 "movq 40(%%rax), %%r8\n\t" \
1434 "movq 32(%%rax), %%rcx\n\t" \ 1660 "movq 32(%%rax), %%rcx\n\t" \
1435 "movq 24(%%rax), %%rdx\n\t" \ 1661 "movq 24(%%rax), %%rdx\n\t" \
1436 "movq 16(%%rax), %%rsi\n\t" \ 1662 "movq 16(%%rax), %%rsi\n\t" \
1437 "movq 8(%%rax), %%rdi\n\t" \ 1663 "movq 8(%%rax), %%rdi\n\t" \
1438 "movq (%%rax), %%rax\n\t" /* target->%rax */ \ 1664 "movq (%%rax), %%rax\n\t" /* target->%rax */ \
1439 VALGRIND_CALL_NOREDIR_RAX \ 1665 VALGRIND_CALL_NOREDIR_RAX \
1440 "addq $48, %%rsp\n" \ 1666 "addq $48, %%rsp\n" \
1441 "addq $128,%%rsp\n\t" \ 1667 "addq $128,%%rsp\n\t" \
1668 VALGRIND_CFI_EPILOGUE \
1442 : /*out*/ "=a" (_res) \ 1669 : /*out*/ "=a" (_res) \
1443 : /*in*/ "a" (&_argvec[0]) \ 1670 : /*in*/ "a" (&_argvec[0]) __FRAME_POINTER \
1444 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \ 1671 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r15" \
1445 ); \ 1672 ); \
1446 lval = (__typeof__(lval)) _res; \ 1673 lval = (__typeof__(lval)) _res; \
1447 } while (0) 1674 } while (0)
1448 1675
1449 #endif /* PLAT_amd64_linux */ 1676 #endif /* PLAT_amd64_linux || PLAT_amd64_darwin */
1450 1677
1451 /* ------------------------ ppc32-linux ------------------------ */ 1678 /* ------------------------ ppc32-linux ------------------------ */
1452 1679
1453 #if defined(PLAT_ppc32_linux) 1680 #if defined(PLAT_ppc32_linux)
1454 1681
1455 /* This is useful for finding out about the on-stack stuff: 1682 /* This is useful for finding out about the on-stack stuff:
1456 1683
1457 extern int f9 ( int,int,int,int,int,int,int,int,int ); 1684 extern int f9 ( int,int,int,int,int,int,int,int,int );
1458 extern int f10 ( int,int,int,int,int,int,int,int,int,int ); 1685 extern int f10 ( int,int,int,int,int,int,int,int,int,int );
1459 extern int f11 ( int,int,int,int,int,int,int,int,int,int,int ); 1686 extern int f11 ( int,int,int,int,int,int,int,int,int,int,int );
(...skipping 972 matching lines...)
2432 "addi 1,1,144" /* restore frame */ \ 2659 "addi 1,1,144" /* restore frame */ \
2433 : /*out*/ "=r" (_res) \ 2660 : /*out*/ "=r" (_res) \
2434 : /*in*/ "r" (&_argvec[2]) \ 2661 : /*in*/ "r" (&_argvec[2]) \
2435 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \ 2662 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \
2436 ); \ 2663 ); \
2437 lval = (__typeof__(lval)) _res; \ 2664 lval = (__typeof__(lval)) _res; \
2438 } while (0) 2665 } while (0)
2439 2666
2440 #endif /* PLAT_ppc64_linux */ 2667 #endif /* PLAT_ppc64_linux */
2441 2668
2442 /* ------------------------ ppc32-aix5 ------------------------- */ 2669 /* ------------------------- arm-linux ------------------------- */
2443 2670
2444 #if defined(PLAT_ppc32_aix5) 2671 #if defined(PLAT_arm_linux)
2445
2446 /* ARGREGS: r3 r4 r5 r6 r7 r8 r9 r10 (the rest on stack somewhere) */
2447 2672
2448 /* These regs are trashed by the hidden call. */ 2673 /* These regs are trashed by the hidden call. */
2449 #define __CALLER_SAVED_REGS \ 2674 #define __CALLER_SAVED_REGS "r0", "r1", "r2", "r3","r4","r14"
2450 "lr", "ctr", "xer", \
2451 "cr0", "cr1", "cr2", "cr3", "cr4", "cr5", "cr6", "cr7", \
2452 "r0", "r2", "r3", "r4", "r5", "r6", "r7", "r8", "r9", "r10", \
2453 "r11", "r12", "r13"
2454 2675
2455 /* Expand the stack frame, copying enough info that unwinding 2676 /* These CALL_FN_ macros assume that on arm-linux, sizeof(unsigned
2456 still works. Trashes r3. */
2457
2458 #define VG_EXPAND_FRAME_BY_trashes_r3(_n_fr) \
2459 "addi 1,1,-" #_n_fr "\n\t" \
2460 "lwz 3," #_n_fr "(1)\n\t" \
2461 "stw 3,0(1)\n\t"
2462
2463 #define VG_CONTRACT_FRAME_BY(_n_fr) \
2464 "addi 1,1," #_n_fr "\n\t"
2465
2466 /* These CALL_FN_ macros assume that on ppc32-aix5, sizeof(unsigned
2467 long) == 4. */ 2677 long) == 4. */
2468 2678
2469 #define CALL_FN_W_v(lval, orig) \ 2679 #define CALL_FN_W_v(lval, orig) \
2470 do { \ 2680 do { \
2471 volatile OrigFn _orig = (orig); \ 2681 volatile OrigFn _orig = (orig); \
2472 volatile unsigned long _argvec[3+0]; \ 2682 volatile unsigned long _argvec[1]; \
2473 volatile unsigned long _res; \ 2683 volatile unsigned long _res; \
2474 /* _argvec[0] holds current r2 across the call */ \ 2684 _argvec[0] = (unsigned long)_orig.nraddr; \
2475 _argvec[1] = (unsigned long)_orig.r2; \
2476 _argvec[2] = (unsigned long)_orig.nraddr; \
2477 __asm__ volatile( \ 2685 __asm__ volatile( \
2478 "mr 11,%1\n\t" \ 2686 "ldr r4, [%1] \n\t" /* target->r4 */ \
2479 VG_EXPAND_FRAME_BY_trashes_r3(512) \ 2687 VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R4 \
2480 "stw 2,-8(11)\n\t" /* save tocptr */ \ 2688 "mov %0, r0\n" \
2481 "lwz 2,-4(11)\n\t" /* use nraddr's tocptr */ \
2482 "lwz 11, 0(11)\n\t" /* target->r11 */ \
2483 VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R11 \
2484 "mr 11,%1\n\t" \
2485 "mr %0,3\n\t" \
2486 "lwz 2,-8(11)\n\t" /* restore tocptr */ \
2487 VG_CONTRACT_FRAME_BY(512) \
2488 : /*out*/ "=r" (_res) \ 2689 : /*out*/ "=r" (_res) \
2489 : /*in*/ "r" (&_argvec[2]) \ 2690 : /*in*/ "0" (&_argvec[0]) \
2490 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \ 2691 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \
2491 ); \ 2692 ); \
2492 lval = (__typeof__(lval)) _res; \ 2693 lval = (__typeof__(lval)) _res; \
2493 } while (0) 2694 } while (0)
2494 2695
2495 #define CALL_FN_W_W(lval, orig, arg1) \ 2696 #define CALL_FN_W_W(lval, orig, arg1) \
2496 do { \ 2697 do { \
2497 volatile OrigFn _orig = (orig); \ 2698 volatile OrigFn _orig = (orig); \
2498 volatile unsigned long _argvec[3+1]; \ 2699 volatile unsigned long _argvec[2]; \
2499 volatile unsigned long _res; \ 2700 volatile unsigned long _res; \
2500 /* _argvec[0] holds current r2 across the call */ \ 2701 _argvec[0] = (unsigned long)_orig.nraddr; \
2501 _argvec[1] = (unsigned long)_orig.r2; \ 2702 _argvec[1] = (unsigned long)(arg1); \
2502 _argvec[2] = (unsigned long)_orig.nraddr; \
2503 _argvec[2+1] = (unsigned long)arg1; \
2504 __asm__ volatile( \ 2703 __asm__ volatile( \
2505 "mr 11,%1\n\t" \ 2704 "ldr r0, [%1, #4] \n\t" \
2506 VG_EXPAND_FRAME_BY_trashes_r3(512) \ 2705 "ldr r4, [%1] \n\t" /* target->r4 */ \
2507 "stw 2,-8(11)\n\t" /* save tocptr */ \ 2706 VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R4 \
2508 "lwz 2,-4(11)\n\t" /* use nraddr's tocptr */ \ 2707 "mov %0, r0\n" \
2509 "lwz 3, 4(11)\n\t" /* arg1->r3 */ \
2510 "lwz 11, 0(11)\n\t" /* target->r11 */ \
2511 VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R11 \
2512 "mr 11,%1\n\t" \
2513 "mr %0,3\n\t" \
2514 "lwz 2,-8(11)\n\t" /* restore tocptr */ \
2515 VG_CONTRACT_FRAME_BY(512) \
2516 : /*out*/ "=r" (_res) \ 2708 : /*out*/ "=r" (_res) \
2517 : /*in*/ "r" (&_argvec[2]) \ 2709 : /*in*/ "0" (&_argvec[0]) \
2518 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \ 2710 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \
2519 ); \ 2711 ); \
2520 lval = (__typeof__(lval)) _res; \ 2712 lval = (__typeof__(lval)) _res; \
2521 } while (0) 2713 } while (0)
2522 2714
2523 #define CALL_FN_W_WW(lval, orig, arg1,arg2) \ 2715 #define CALL_FN_W_WW(lval, orig, arg1,arg2) \
2524 do { \ 2716 do { \
2525 volatile OrigFn _orig = (orig); \ 2717 volatile OrigFn _orig = (orig); \
2526 volatile unsigned long _argvec[3+2]; \ 2718 volatile unsigned long _argvec[3]; \
2527 volatile unsigned long _res; \ 2719 volatile unsigned long _res; \
2528 /* _argvec[0] holds current r2 across the call */ \ 2720 _argvec[0] = (unsigned long)_orig.nraddr; \
2529 _argvec[1] = (unsigned long)_orig.r2; \ 2721 _argvec[1] = (unsigned long)(arg1); \
2530 _argvec[2] = (unsigned long)_orig.nraddr; \ 2722 _argvec[2] = (unsigned long)(arg2); \
2531 _argvec[2+1] = (unsigned long)arg1; \
2532 _argvec[2+2] = (unsigned long)arg2; \
2533 __asm__ volatile( \ 2723 __asm__ volatile( \
2534 "mr 11,%1\n\t" \ 2724 "ldr r0, [%1, #4] \n\t" \
2535 VG_EXPAND_FRAME_BY_trashes_r3(512) \ 2725 "ldr r1, [%1, #8] \n\t" \
2536 "stw 2,-8(11)\n\t" /* save tocptr */ \ 2726 "ldr r4, [%1] \n\t" /* target->r4 */ \
2537 "lwz 2,-4(11)\n\t" /* use nraddr's tocptr */ \ 2727 VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R4 \
2538 "lwz 3, 4(11)\n\t" /* arg1->r3 */ \ 2728 "mov %0, r0\n" \
2539 "lwz 4, 8(11)\n\t" /* arg2->r4 */ \
2540 "lwz 11, 0(11)\n\t" /* target->r11 */ \
2541 VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R11 \
2542 "mr 11,%1\n\t" \
2543 "mr %0,3\n\t" \
2544 "lwz 2,-8(11)\n\t" /* restore tocptr */ \
2545 VG_CONTRACT_FRAME_BY(512) \
2546 : /*out*/ "=r" (_res) \ 2729 : /*out*/ "=r" (_res) \
2547 : /*in*/ "r" (&_argvec[2]) \ 2730 : /*in*/ "0" (&_argvec[0]) \
2548 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \ 2731 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \
2549 ); \ 2732 ); \
2550 lval = (__typeof__(lval)) _res; \ 2733 lval = (__typeof__(lval)) _res; \
2551 } while (0) 2734 } while (0)
2552 2735
2553 #define CALL_FN_W_WWW(lval, orig, arg1,arg2,arg3) \ 2736 #define CALL_FN_W_WWW(lval, orig, arg1,arg2,arg3) \
2554 do { \ 2737 do { \
2555 volatile OrigFn _orig = (orig); \ 2738 volatile OrigFn _orig = (orig); \
2556 volatile unsigned long _argvec[3+3]; \ 2739 volatile unsigned long _argvec[4]; \
2557 volatile unsigned long _res; \ 2740 volatile unsigned long _res; \
2558 /* _argvec[0] holds current r2 across the call */ \ 2741 _argvec[0] = (unsigned long)_orig.nraddr; \
2559 _argvec[1] = (unsigned long)_orig.r2; \ 2742 _argvec[1] = (unsigned long)(arg1); \
2560 _argvec[2] = (unsigned long)_orig.nraddr; \ 2743 _argvec[2] = (unsigned long)(arg2); \
2561 _argvec[2+1] = (unsigned long)arg1; \ 2744 _argvec[3] = (unsigned long)(arg3); \
2562 _argvec[2+2] = (unsigned long)arg2; \
2563 _argvec[2+3] = (unsigned long)arg3; \
2564 __asm__ volatile( \ 2745 __asm__ volatile( \
2565 "mr 11,%1\n\t" \ 2746 "ldr r0, [%1, #4] \n\t" \
2566 VG_EXPAND_FRAME_BY_trashes_r3(512) \ 2747 "ldr r1, [%1, #8] \n\t" \
2567 "stw 2,-8(11)\n\t" /* save tocptr */ \ 2748 "ldr r2, [%1, #12] \n\t" \
2568 "lwz 2,-4(11)\n\t" /* use nraddr's tocptr */ \ 2749 "ldr r4, [%1] \n\t" /* target->r4 */ \
2569 "lwz 3, 4(11)\n\t" /* arg1->r3 */ \ 2750 VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R4 \
2570 "lwz 4, 8(11)\n\t" /* arg2->r4 */ \ 2751 "mov %0, r0\n" \
2571 "lwz 5, 12(11)\n\t" /* arg3->r5 */ \
2572 "lwz 11, 0(11)\n\t" /* target->r11 */ \
2573 VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R11 \
2574 "mr 11,%1\n\t" \
2575 "mr %0,3\n\t" \
2576 "lwz 2,-8(11)\n\t" /* restore tocptr */ \
2577 VG_CONTRACT_FRAME_BY(512) \
2578 : /*out*/ "=r" (_res) \ 2752 : /*out*/ "=r" (_res) \
2579 : /*in*/ "r" (&_argvec[2]) \ 2753 : /*in*/ "0" (&_argvec[0]) \
2580 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \ 2754 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \
2581 ); \ 2755 ); \
2582 lval = (__typeof__(lval)) _res; \ 2756 lval = (__typeof__(lval)) _res; \
2583 } while (0) 2757 } while (0)
2584 2758
2585 #define CALL_FN_W_WWWW(lval, orig, arg1,arg2,arg3,arg4) \ 2759 #define CALL_FN_W_WWWW(lval, orig, arg1,arg2,arg3,arg4) \
2586 do { \ 2760 do { \
2587 volatile OrigFn _orig = (orig); \ 2761 volatile OrigFn _orig = (orig); \
2588 volatile unsigned long _argvec[3+4]; \ 2762 volatile unsigned long _argvec[5]; \
2589 volatile unsigned long _res; \ 2763 volatile unsigned long _res; \
2590 /* _argvec[0] holds current r2 across the call */ \ 2764 _argvec[0] = (unsigned long)_orig.nraddr; \
2591 _argvec[1] = (unsigned long)_orig.r2; \ 2765 _argvec[1] = (unsigned long)(arg1); \
2592 _argvec[2] = (unsigned long)_orig.nraddr; \ 2766 _argvec[2] = (unsigned long)(arg2); \
2593 _argvec[2+1] = (unsigned long)arg1; \ 2767 _argvec[3] = (unsigned long)(arg3); \
2594 _argvec[2+2] = (unsigned long)arg2; \ 2768 _argvec[4] = (unsigned long)(arg4); \
2595 _argvec[2+3] = (unsigned long)arg3; \
2596 _argvec[2+4] = (unsigned long)arg4; \
2597 __asm__ volatile( \ 2769 __asm__ volatile( \
2598 "mr 11,%1\n\t" \ 2770 "ldr r0, [%1, #4] \n\t" \
2599 VG_EXPAND_FRAME_BY_trashes_r3(512) \ 2771 "ldr r1, [%1, #8] \n\t" \
2600 "stw 2,-8(11)\n\t" /* save tocptr */ \ 2772 "ldr r2, [%1, #12] \n\t" \
2601 "lwz 2,-4(11)\n\t" /* use nraddr's tocptr */ \ 2773 "ldr r3, [%1, #16] \n\t" \
2602 "lwz 3, 4(11)\n\t" /* arg1->r3 */ \ 2774 "ldr r4, [%1] \n\t" /* target->r4 */ \
2603 "lwz 4, 8(11)\n\t" /* arg2->r4 */ \ 2775 VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R4 \
2604 "lwz 5, 12(11)\n\t" /* arg3->r5 */ \ 2776 "mov %0, r0" \
2605 "lwz 6, 16(11)\n\t" /* arg4->r6 */ \
2606 "lwz 11, 0(11)\n\t" /* target->r11 */ \
2607 VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R11 \
2608 "mr 11,%1\n\t" \
2609 "mr %0,3\n\t" \
2610 "lwz 2,-8(11)\n\t" /* restore tocptr */ \
2611 VG_CONTRACT_FRAME_BY(512) \
2612 : /*out*/ "=r" (_res) \ 2777 : /*out*/ "=r" (_res) \
2613 : /*in*/ "r" (&_argvec[2]) \ 2778 : /*in*/ "0" (&_argvec[0]) \
2614 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \ 2779 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \
2615 ); \ 2780 ); \
2616 lval = (__typeof__(lval)) _res; \ 2781 lval = (__typeof__(lval)) _res; \
2617 } while (0) 2782 } while (0)
2618 2783
2619 #define CALL_FN_W_5W(lval, orig, arg1,arg2,arg3,arg4,arg5) \ 2784 #define CALL_FN_W_5W(lval, orig, arg1,arg2,arg3,arg4,arg5) \
2620 do { \ 2785 do { \
2621 volatile OrigFn _orig = (orig); \ 2786 volatile OrigFn _orig = (orig); \
2622 volatile unsigned long _argvec[3+5]; \ 2787 volatile unsigned long _argvec[6]; \
2623 volatile unsigned long _res; \ 2788 volatile unsigned long _res; \
2624 /* _argvec[0] holds current r2 across the call */ \ 2789 _argvec[0] = (unsigned long)_orig.nraddr; \
2625 _argvec[1] = (unsigned long)_orig.r2; \ 2790 _argvec[1] = (unsigned long)(arg1); \
2626 _argvec[2] = (unsigned long)_orig.nraddr; \ 2791 _argvec[2] = (unsigned long)(arg2); \
2627 _argvec[2+1] = (unsigned long)arg1; \ 2792 _argvec[3] = (unsigned long)(arg3); \
2628 _argvec[2+2] = (unsigned long)arg2; \ 2793 _argvec[4] = (unsigned long)(arg4); \
2629 _argvec[2+3] = (unsigned long)arg3; \ 2794 _argvec[5] = (unsigned long)(arg5); \
2630 _argvec[2+4] = (unsigned long)arg4; \
2631 _argvec[2+5] = (unsigned long)arg5; \
2632 __asm__ volatile( \ 2795 __asm__ volatile( \
2633 "mr 11,%1\n\t" \ 2796 "ldr r0, [%1, #20] \n\t" \
2634 VG_EXPAND_FRAME_BY_trashes_r3(512) \ 2797 "push {r0} \n\t" \
2635 "stw 2,-8(11)\n\t" /* save tocptr */ \ 2798 "ldr r0, [%1, #4] \n\t" \
2636 "lwz 2,-4(11)\n\t" /* use nraddr's tocptr */ \ 2799 "ldr r1, [%1, #8] \n\t" \
2637 "lwz 3, 4(11)\n\t" /* arg1->r3 */ \ 2800 "ldr r2, [%1, #12] \n\t" \
2638 "lwz 4, 8(11)\n\t" /* arg2->r4 */ \ 2801 "ldr r3, [%1, #16] \n\t" \
2639 "lwz 5, 12(11)\n\t" /* arg3->r5 */ \ 2802 "ldr r4, [%1] \n\t" /* target->r4 */ \
2640 "lwz 6, 16(11)\n\t" /* arg4->r6 */ \ 2803 VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R4 \
2641 "lwz 7, 20(11)\n\t" /* arg5->r7 */ \ 2804 "add sp, sp, #4 \n\t" \
2642 "lwz 11, 0(11)\n\t" /* target->r11 */ \ 2805 "mov %0, r0" \
2643 VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R11 \
2644 "mr 11,%1\n\t" \
2645 "mr %0,3\n\t" \
2646 "lwz 2,-8(11)\n\t" /* restore tocptr */ \
2647 VG_CONTRACT_FRAME_BY(512) \
2648 : /*out*/ "=r" (_res) \ 2806 : /*out*/ "=r" (_res) \
2649 : /*in*/ "r" (&_argvec[2]) \ 2807 : /*in*/ "0" (&_argvec[0]) \
2650 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \ 2808 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \
2651 ); \ 2809 ); \
2652 lval = (__typeof__(lval)) _res; \ 2810 lval = (__typeof__(lval)) _res; \
2653 } while (0) 2811 } while (0)
2654 2812
2655 #define CALL_FN_W_6W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6) \ 2813 #define CALL_FN_W_6W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6) \
2656 do { \ 2814 do { \
2657 volatile OrigFn _orig = (orig); \ 2815 volatile OrigFn _orig = (orig); \
2658 volatile unsigned long _argvec[3+6]; \ 2816 volatile unsigned long _argvec[7]; \
2659 volatile unsigned long _res; \ 2817 volatile unsigned long _res; \
2660 /* _argvec[0] holds current r2 across the call */ \ 2818 _argvec[0] = (unsigned long)_orig.nraddr; \
2661 _argvec[1] = (unsigned long)_orig.r2; \ 2819 _argvec[1] = (unsigned long)(arg1); \
2662 _argvec[2] = (unsigned long)_orig.nraddr; \ 2820 _argvec[2] = (unsigned long)(arg2); \
2663 _argvec[2+1] = (unsigned long)arg1; \ 2821 _argvec[3] = (unsigned long)(arg3); \
2664 _argvec[2+2] = (unsigned long)arg2; \ 2822 _argvec[4] = (unsigned long)(arg4); \
2665 _argvec[2+3] = (unsigned long)arg3; \ 2823 _argvec[5] = (unsigned long)(arg5); \
2666 _argvec[2+4] = (unsigned long)arg4; \ 2824 _argvec[6] = (unsigned long)(arg6); \
2667 _argvec[2+5] = (unsigned long)arg5; \
2668 _argvec[2+6] = (unsigned long)arg6; \
2669 __asm__ volatile( \ 2825 __asm__ volatile( \
2670 "mr 11,%1\n\t" \ 2826 "ldr r0, [%1, #20] \n\t" \
2671 VG_EXPAND_FRAME_BY_trashes_r3(512) \ 2827 "ldr r1, [%1, #24] \n\t" \
2672 "stw 2,-8(11)\n\t" /* save tocptr */ \ 2828 "push {r0, r1} \n\t" \
2673 "lwz 2,-4(11)\n\t" /* use nraddr's tocptr */ \ 2829 "ldr r0, [%1, #4] \n\t" \
2674 "lwz 3, 4(11)\n\t" /* arg1->r3 */ \ 2830 "ldr r1, [%1, #8] \n\t" \
2675 "lwz 4, 8(11)\n\t" /* arg2->r4 */ \ 2831 "ldr r2, [%1, #12] \n\t" \
2676 "lwz 5, 12(11)\n\t" /* arg3->r5 */ \ 2832 "ldr r3, [%1, #16] \n\t" \
2677 "lwz 6, 16(11)\n\t" /* arg4->r6 */ \ 2833 "ldr r4, [%1] \n\t" /* target->r4 */ \
2678 "lwz 7, 20(11)\n\t" /* arg5->r7 */ \ 2834 VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R4 \
2679 "lwz 8, 24(11)\n\t" /* arg6->r8 */ \ 2835 "add sp, sp, #8 \n\t" \
2680 "lwz 11, 0(11)\n\t" /* target->r11 */ \ 2836 "mov %0, r0" \
2681 VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R11 \
2682 "mr 11,%1\n\t" \
2683 "mr %0,3\n\t" \
2684 "lwz 2,-8(11)\n\t" /* restore tocptr */ \
2685 VG_CONTRACT_FRAME_BY(512) \
2686 : /*out*/ "=r" (_res) \ 2837 : /*out*/ "=r" (_res) \
2687 : /*in*/ "r" (&_argvec[2]) \ 2838 : /*in*/ "0" (&_argvec[0]) \
2688 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \ 2839 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \
2689 ); \ 2840 ); \
2690 lval = (__typeof__(lval)) _res; \ 2841 lval = (__typeof__(lval)) _res; \
2691 } while (0) 2842 } while (0)
2692 2843
2693 #define CALL_FN_W_7W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \ 2844 #define CALL_FN_W_7W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \
2694 arg7) \ 2845 arg7) \
2695 do { \ 2846 do { \
2696 volatile OrigFn _orig = (orig); \ 2847 volatile OrigFn _orig = (orig); \
2697 volatile unsigned long _argvec[3+7]; \ 2848 volatile unsigned long _argvec[8]; \
2698 volatile unsigned long _res; \ 2849 volatile unsigned long _res; \
2699 /* _argvec[0] holds current r2 across the call */ \ 2850 _argvec[0] = (unsigned long)_orig.nraddr; \
2700 _argvec[1] = (unsigned long)_orig.r2; \ 2851 _argvec[1] = (unsigned long)(arg1); \
2701 _argvec[2] = (unsigned long)_orig.nraddr; \ 2852 _argvec[2] = (unsigned long)(arg2); \
2702 _argvec[2+1] = (unsigned long)arg1; \ 2853 _argvec[3] = (unsigned long)(arg3); \
2703 _argvec[2+2] = (unsigned long)arg2; \ 2854 _argvec[4] = (unsigned long)(arg4); \
2704 _argvec[2+3] = (unsigned long)arg3; \ 2855 _argvec[5] = (unsigned long)(arg5); \
2705 _argvec[2+4] = (unsigned long)arg4; \ 2856 _argvec[6] = (unsigned long)(arg6); \
2706 _argvec[2+5] = (unsigned long)arg5; \ 2857 _argvec[7] = (unsigned long)(arg7); \
2707 _argvec[2+6] = (unsigned long)arg6; \
2708 _argvec[2+7] = (unsigned long)arg7; \
2709 __asm__ volatile( \ 2858 __asm__ volatile( \
2710 "mr 11,%1\n\t" \ 2859 "ldr r0, [%1, #20] \n\t" \
2711 VG_EXPAND_FRAME_BY_trashes_r3(512) \ 2860 "ldr r1, [%1, #24] \n\t" \
2712 "stw 2,-8(11)\n\t" /* save tocptr */ \ 2861 "ldr r2, [%1, #28] \n\t" \
2713 "lwz 2,-4(11)\n\t" /* use nraddr's tocptr */ \ 2862 "push {r0, r1, r2} \n\t" \
2714 "lwz 3, 4(11)\n\t" /* arg1->r3 */ \ 2863 "ldr r0, [%1, #4] \n\t" \
2715 "lwz 4, 8(11)\n\t" /* arg2->r4 */ \ 2864 "ldr r1, [%1, #8] \n\t" \
2716 "lwz 5, 12(11)\n\t" /* arg3->r5 */ \ 2865 "ldr r2, [%1, #12] \n\t" \
2717 "lwz 6, 16(11)\n\t" /* arg4->r6 */ \ 2866 "ldr r3, [%1, #16] \n\t" \
2718 "lwz 7, 20(11)\n\t" /* arg5->r7 */ \ 2867 "ldr r4, [%1] \n\t" /* target->r4 */ \
2719 "lwz 8, 24(11)\n\t" /* arg6->r8 */ \ 2868 VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R4 \
2720 "lwz 9, 28(11)\n\t" /* arg7->r9 */ \ 2869 "add sp, sp, #12 \n\t" \
2721 "lwz 11, 0(11)\n\t" /* target->r11 */ \ 2870 "mov %0, r0" \
2722 VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R11 \
2723 "mr 11,%1\n\t" \
2724 "mr %0,3\n\t" \
2725 "lwz 2,-8(11)\n\t" /* restore tocptr */ \
2726 VG_CONTRACT_FRAME_BY(512) \
2727 : /*out*/ "=r" (_res) \ 2871 : /*out*/ "=r" (_res) \
2728 : /*in*/ "r" (&_argvec[2]) \ 2872 : /*in*/ "0" (&_argvec[0]) \
2729 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \ 2873 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \
2730 ); \ 2874 ); \
2731 lval = (__typeof__(lval)) _res; \ 2875 lval = (__typeof__(lval)) _res; \
2732 } while (0) 2876 } while (0)
2733 2877
2734 #define CALL_FN_W_8W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \ 2878 #define CALL_FN_W_8W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \
2735 arg7,arg8) \ 2879 arg7,arg8) \
2736 do { \ 2880 do { \
2737 volatile OrigFn _orig = (orig); \ 2881 volatile OrigFn _orig = (orig); \
2738 volatile unsigned long _argvec[3+8]; \ 2882 volatile unsigned long _argvec[9]; \
2739 volatile unsigned long _res; \ 2883 volatile unsigned long _res; \
2740 /* _argvec[0] holds current r2 across the call */ \ 2884 _argvec[0] = (unsigned long)_orig.nraddr; \
2741 _argvec[1] = (unsigned long)_orig.r2; \ 2885 _argvec[1] = (unsigned long)(arg1); \
2742 _argvec[2] = (unsigned long)_orig.nraddr; \ 2886 _argvec[2] = (unsigned long)(arg2); \
2743 _argvec[2+1] = (unsigned long)arg1; \ 2887 _argvec[3] = (unsigned long)(arg3); \
2744 _argvec[2+2] = (unsigned long)arg2; \ 2888 _argvec[4] = (unsigned long)(arg4); \
2745 _argvec[2+3] = (unsigned long)arg3; \ 2889 _argvec[5] = (unsigned long)(arg5); \
2746 _argvec[2+4] = (unsigned long)arg4; \ 2890 _argvec[6] = (unsigned long)(arg6); \
2747 _argvec[2+5] = (unsigned long)arg5; \ 2891 _argvec[7] = (unsigned long)(arg7); \
2748 _argvec[2+6] = (unsigned long)arg6; \ 2892 _argvec[8] = (unsigned long)(arg8); \
2749 _argvec[2+7] = (unsigned long)arg7; \
2750 _argvec[2+8] = (unsigned long)arg8; \
2751 __asm__ volatile( \ 2893 __asm__ volatile( \
2752 "mr 11,%1\n\t" \ 2894 "ldr r0, [%1, #20] \n\t" \
2753 VG_EXPAND_FRAME_BY_trashes_r3(512) \ 2895 "ldr r1, [%1, #24] \n\t" \
2754 "stw 2,-8(11)\n\t" /* save tocptr */ \ 2896 "ldr r2, [%1, #28] \n\t" \
2755 "lwz 2,-4(11)\n\t" /* use nraddr's tocptr */ \ 2897 "ldr r3, [%1, #32] \n\t" \
2756 "lwz 3, 4(11)\n\t" /* arg1->r3 */ \ 2898 "push {r0, r1, r2, r3} \n\t" \
2757 "lwz 4, 8(11)\n\t" /* arg2->r4 */ \ 2899 "ldr r0, [%1, #4] \n\t" \
2758 "lwz 5, 12(11)\n\t" /* arg3->r5 */ \ 2900 "ldr r1, [%1, #8] \n\t" \
2759 "lwz 6, 16(11)\n\t" /* arg4->r6 */ \ 2901 "ldr r2, [%1, #12] \n\t" \
2760 "lwz 7, 20(11)\n\t" /* arg5->r7 */ \ 2902 "ldr r3, [%1, #16] \n\t" \
2761 "lwz 8, 24(11)\n\t" /* arg6->r8 */ \ 2903 "ldr r4, [%1] \n\t" /* target->r4 */ \
2762 "lwz 9, 28(11)\n\t" /* arg7->r9 */ \ 2904 VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R4 \
2763 "lwz 10, 32(11)\n\t" /* arg8->r10 */ \ 2905 "add sp, sp, #16 \n\t" \
2764 "lwz 11, 0(11)\n\t" /* target->r11 */ \ 2906 "mov %0, r0" \
2765 VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R11 \
2766 "mr 11,%1\n\t" \
2767 "mr %0,3\n\t" \
2768 "lwz 2,-8(11)\n\t" /* restore tocptr */ \
2769 VG_CONTRACT_FRAME_BY(512) \
2770 : /*out*/ "=r" (_res) \ 2907 : /*out*/ "=r" (_res) \
2771 : /*in*/ "r" (&_argvec[2]) \ 2908 : /*in*/ "0" (&_argvec[0]) \
2772 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \ 2909 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \
2773 ); \ 2910 ); \
2774 lval = (__typeof__(lval)) _res; \ 2911 lval = (__typeof__(lval)) _res; \
2775 } while (0) 2912 } while (0)
2776 2913
2777 #define CALL_FN_W_9W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \ 2914 #define CALL_FN_W_9W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \
2778 arg7,arg8,arg9) \ 2915 arg7,arg8,arg9) \
2779 do { \ 2916 do { \
2780 volatile OrigFn _orig = (orig); \ 2917 volatile OrigFn _orig = (orig); \
2781 volatile unsigned long _argvec[3+9]; \ 2918 volatile unsigned long _argvec[10]; \
2782 volatile unsigned long _res; \ 2919 volatile unsigned long _res; \
2783 /* _argvec[0] holds current r2 across the call */ \ 2920 _argvec[0] = (unsigned long)_orig.nraddr; \
2784 _argvec[1] = (unsigned long)_orig.r2; \ 2921 _argvec[1] = (unsigned long)(arg1); \
2785 _argvec[2] = (unsigned long)_orig.nraddr; \ 2922 _argvec[2] = (unsigned long)(arg2); \
2786 _argvec[2+1] = (unsigned long)arg1; \ 2923 _argvec[3] = (unsigned long)(arg3); \
2787 _argvec[2+2] = (unsigned long)arg2; \ 2924 _argvec[4] = (unsigned long)(arg4); \
2788 _argvec[2+3] = (unsigned long)arg3; \ 2925 _argvec[5] = (unsigned long)(arg5); \
2789 _argvec[2+4] = (unsigned long)arg4; \ 2926 _argvec[6] = (unsigned long)(arg6); \
2790 _argvec[2+5] = (unsigned long)arg5; \ 2927 _argvec[7] = (unsigned long)(arg7); \
2791 _argvec[2+6] = (unsigned long)arg6; \ 2928 _argvec[8] = (unsigned long)(arg8); \
2792 _argvec[2+7] = (unsigned long)arg7; \ 2929 _argvec[9] = (unsigned long)(arg9); \
2793 _argvec[2+8] = (unsigned long)arg8; \
2794 _argvec[2+9] = (unsigned long)arg9; \
2795 __asm__ volatile( \ 2930 __asm__ volatile( \
2796 "mr 11,%1\n\t" \ 2931 "ldr r0, [%1, #20] \n\t" \
2797 VG_EXPAND_FRAME_BY_trashes_r3(512) \ 2932 "ldr r1, [%1, #24] \n\t" \
2798 "stw 2,-8(11)\n\t" /* save tocptr */ \ 2933 "ldr r2, [%1, #28] \n\t" \
2799 "lwz 2,-4(11)\n\t" /* use nraddr's tocptr */ \ 2934 "ldr r3, [%1, #32] \n\t" \
2800 VG_EXPAND_FRAME_BY_trashes_r3(64) \ 2935 "ldr r4, [%1, #36] \n\t" \
2801 /* arg9 */ \ 2936 "push {r0, r1, r2, r3, r4} \n\t" \
2802 "lwz 3,36(11)\n\t" \ 2937 "ldr r0, [%1, #4] \n\t" \
2803 "stw 3,56(1)\n\t" \ 2938 "ldr r1, [%1, #8] \n\t" \
2804 /* args1-8 */ \ 2939 "ldr r2, [%1, #12] \n\t" \
2805 "lwz 3, 4(11)\n\t" /* arg1->r3 */ \ 2940 "ldr r3, [%1, #16] \n\t" \
2806 "lwz 4, 8(11)\n\t" /* arg2->r4 */ \ 2941 "ldr r4, [%1] \n\t" /* target->r4 */ \
2807 "lwz 5, 12(11)\n\t" /* arg3->r5 */ \ 2942 VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R4 \
2808 "lwz 6, 16(11)\n\t" /* arg4->r6 */ \ 2943 "add sp, sp, #20 \n\t" \
2809 "lwz 7, 20(11)\n\t" /* arg5->r7 */ \ 2944 "mov %0, r0" \
2810 "lwz 8, 24(11)\n\t" /* arg6->r8 */ \
2811 "lwz 9, 28(11)\n\t" /* arg7->r9 */ \
2812 "lwz 10, 32(11)\n\t" /* arg8->r10 */ \
2813 "lwz 11, 0(11)\n\t" /* target->r11 */ \
2814 VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R11 \
2815 "mr 11,%1\n\t" \
2816 "mr %0,3\n\t" \
2817 "lwz 2,-8(11)\n\t" /* restore tocptr */ \
2818 VG_CONTRACT_FRAME_BY(64) \
2819 VG_CONTRACT_FRAME_BY(512) \
2820 : /*out*/ "=r" (_res) \ 2945 : /*out*/ "=r" (_res) \
2821 : /*in*/ "r" (&_argvec[2]) \ 2946 : /*in*/ "0" (&_argvec[0]) \
2822 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \ 2947 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \
2823 ); \ 2948 ); \
2824 lval = (__typeof__(lval)) _res; \ 2949 lval = (__typeof__(lval)) _res; \
2825 } while (0) 2950 } while (0)
2826 2951
2827 #define CALL_FN_W_10W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \ 2952 #define CALL_FN_W_10W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \
2828 arg7,arg8,arg9,arg10) \ 2953 arg7,arg8,arg9,arg10) \
2829 do { \ 2954 do { \
2830 volatile OrigFn _orig = (orig); \ 2955 volatile OrigFn _orig = (orig); \
2831 volatile unsigned long _argvec[3+10]; \ 2956 volatile unsigned long _argvec[11]; \
2832 volatile unsigned long _res; \ 2957 volatile unsigned long _res; \
2833 /* _argvec[0] holds current r2 across the call */ \ 2958 _argvec[0] = (unsigned long)_orig.nraddr; \
2834 _argvec[1] = (unsigned long)_orig.r2; \ 2959 _argvec[1] = (unsigned long)(arg1); \
2835 _argvec[2] = (unsigned long)_orig.nraddr; \ 2960 _argvec[2] = (unsigned long)(arg2); \
2836 _argvec[2+1] = (unsigned long)arg1; \ 2961 _argvec[3] = (unsigned long)(arg3); \
2837 _argvec[2+2] = (unsigned long)arg2; \ 2962 _argvec[4] = (unsigned long)(arg4); \
2838 _argvec[2+3] = (unsigned long)arg3; \ 2963 _argvec[5] = (unsigned long)(arg5); \
2839 _argvec[2+4] = (unsigned long)arg4; \ 2964 _argvec[6] = (unsigned long)(arg6); \
2840 _argvec[2+5] = (unsigned long)arg5; \ 2965 _argvec[7] = (unsigned long)(arg7); \
2841 _argvec[2+6] = (unsigned long)arg6; \ 2966 _argvec[8] = (unsigned long)(arg8); \
2842 _argvec[2+7] = (unsigned long)arg7; \ 2967 _argvec[9] = (unsigned long)(arg9); \
2843 _argvec[2+8] = (unsigned long)arg8; \ 2968 _argvec[10] = (unsigned long)(arg10); \
2844 _argvec[2+9] = (unsigned long)arg9; \
2845 _argvec[2+10] = (unsigned long)arg10; \
2846 __asm__ volatile( \ 2969 __asm__ volatile( \
2847 "mr 11,%1\n\t" \ 2970 "ldr r0, [%1, #40] \n\t" \
2848 VG_EXPAND_FRAME_BY_trashes_r3(512) \ 2971 "push {r0} \n\t" \
2849 "stw 2,-8(11)\n\t" /* save tocptr */ \ 2972 "ldr r0, [%1, #20] \n\t" \
2850 "lwz 2,-4(11)\n\t" /* use nraddr's tocptr */ \ 2973 "ldr r1, [%1, #24] \n\t" \
2851 VG_EXPAND_FRAME_BY_trashes_r3(64) \ 2974 "ldr r2, [%1, #28] \n\t" \
2852 /* arg10 */ \ 2975 "ldr r3, [%1, #32] \n\t" \
2853 "lwz 3,40(11)\n\t" \ 2976 "ldr r4, [%1, #36] \n\t" \
2854 "stw 3,60(1)\n\t" \ 2977 "push {r0, r1, r2, r3, r4} \n\t" \
2855 /* arg9 */ \ 2978 "ldr r0, [%1, #4] \n\t" \
2856 "lwz 3,36(11)\n\t" \ 2979 "ldr r1, [%1, #8] \n\t" \
2857 "stw 3,56(1)\n\t" \ 2980 "ldr r2, [%1, #12] \n\t" \
2858 /* args1-8 */ \ 2981 "ldr r3, [%1, #16] \n\t" \
2859 "lwz 3, 4(11)\n\t" /* arg1->r3 */ \ 2982 "ldr r4, [%1] \n\t" /* target->r4 */ \
2860 "lwz 4, 8(11)\n\t" /* arg2->r4 */ \ 2983 VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R4 \
2861 "lwz 5, 12(11)\n\t" /* arg3->r5 */ \ 2984 "add sp, sp, #24 \n\t" \
2862 "lwz 6, 16(11)\n\t" /* arg4->r6 */ \ 2985 "mov %0, r0" \
2863 "lwz 7, 20(11)\n\t" /* arg5->r7 */ \
2864 "lwz 8, 24(11)\n\t" /* arg6->r8 */ \
2865 "lwz 9, 28(11)\n\t" /* arg7->r9 */ \
2866 "lwz 10, 32(11)\n\t" /* arg8->r10 */ \
2867 "lwz 11, 0(11)\n\t" /* target->r11 */ \
2868 VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R11 \
2869 "mr 11,%1\n\t" \
2870 "mr %0,3\n\t" \
2871 "lwz 2,-8(11)\n\t" /* restore tocptr */ \
2872 VG_CONTRACT_FRAME_BY(64) \
2873 VG_CONTRACT_FRAME_BY(512) \
2874 : /*out*/ "=r" (_res) \ 2986 : /*out*/ "=r" (_res) \
2875 : /*in*/ "r" (&_argvec[2]) \ 2987 : /*in*/ "0" (&_argvec[0]) \
2876 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \ 2988 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \
2877 ); \ 2989 ); \
2878 lval = (__typeof__(lval)) _res; \ 2990 lval = (__typeof__(lval)) _res; \
2879 } while (0) 2991 } while (0)
2880 2992
2881 #define CALL_FN_W_11W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \ 2993 #define CALL_FN_W_11W(lval, orig, arg1,arg2,arg3,arg4,arg5, \
2882 arg7,arg8,arg9,arg10,arg11) \ 2994 arg6,arg7,arg8,arg9,arg10, \
2995 arg11) \
2883 do { \ 2996 do { \
2884 volatile OrigFn _orig = (orig); \ 2997 volatile OrigFn _orig = (orig); \
2885 volatile unsigned long _argvec[3+11]; \ 2998 volatile unsigned long _argvec[12]; \
2886 volatile unsigned long _res; \ 2999 volatile unsigned long _res; \
2887 /* _argvec[0] holds current r2 across the call */ \ 3000 _argvec[0] = (unsigned long)_orig.nraddr; \
2888 _argvec[1] = (unsigned long)_orig.r2; \ 3001 _argvec[1] = (unsigned long)(arg1); \
2889 _argvec[2] = (unsigned long)_orig.nraddr; \ 3002 _argvec[2] = (unsigned long)(arg2); \
2890 _argvec[2+1] = (unsigned long)arg1; \ 3003 _argvec[3] = (unsigned long)(arg3); \
2891 _argvec[2+2] = (unsigned long)arg2; \ 3004 _argvec[4] = (unsigned long)(arg4); \
2892 _argvec[2+3] = (unsigned long)arg3; \ 3005 _argvec[5] = (unsigned long)(arg5); \
2893 _argvec[2+4] = (unsigned long)arg4; \ 3006 _argvec[6] = (unsigned long)(arg6); \
2894 _argvec[2+5] = (unsigned long)arg5; \ 3007 _argvec[7] = (unsigned long)(arg7); \
2895 _argvec[2+6] = (unsigned long)arg6; \ 3008 _argvec[8] = (unsigned long)(arg8); \
2896 _argvec[2+7] = (unsigned long)arg7; \ 3009 _argvec[9] = (unsigned long)(arg9); \
2897 _argvec[2+8] = (unsigned long)arg8; \ 3010 _argvec[10] = (unsigned long)(arg10); \
2898 _argvec[2+9] = (unsigned long)arg9; \ 3011 _argvec[11] = (unsigned long)(arg11); \
2899 _argvec[2+10] = (unsigned long)arg10; \
2900 _argvec[2+11] = (unsigned long)arg11; \
2901 __asm__ volatile( \ 3012 __asm__ volatile( \
2902 "mr 11,%1\n\t" \ 3013 "ldr r0, [%1, #40] \n\t" \
2903 VG_EXPAND_FRAME_BY_trashes_r3(512) \ 3014 "ldr r1, [%1, #44] \n\t" \
2904 "stw 2,-8(11)\n\t" /* save tocptr */ \ 3015 "push {r0, r1} \n\t" \
2905 "lwz 2,-4(11)\n\t" /* use nraddr's tocptr */ \ 3016 "ldr r0, [%1, #20] \n\t" \
2906 VG_EXPAND_FRAME_BY_trashes_r3(72) \ 3017 "ldr r1, [%1, #24] \n\t" \
2907 /* arg11 */ \ 3018 "ldr r2, [%1, #28] \n\t" \
2908 "lwz 3,44(11)\n\t" \ 3019 "ldr r3, [%1, #32] \n\t" \
2909 "stw 3,64(1)\n\t" \ 3020 "ldr r4, [%1, #36] \n\t" \
2910 /* arg10 */ \ 3021 "push {r0, r1, r2, r3, r4} \n\t" \
2911 "lwz 3,40(11)\n\t" \ 3022 "ldr r0, [%1, #4] \n\t" \
2912 "stw 3,60(1)\n\t" \ 3023 "ldr r1, [%1, #8] \n\t" \
2913 /* arg9 */ \ 3024 "ldr r2, [%1, #12] \n\t" \
2914 "lwz 3,36(11)\n\t" \ 3025 "ldr r3, [%1, #16] \n\t" \
2915 "stw 3,56(1)\n\t" \ 3026 "ldr r4, [%1] \n\t" /* target->r4 */ \
2916 /* args1-8 */ \ 3027 VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R4 \
2917 "lwz 3, 4(11)\n\t" /* arg1->r3 */ \ 3028 "add sp, sp, #28 \n\t" \
2918 "lwz 4, 8(11)\n\t" /* arg2->r4 */ \ 3029 "mov %0, r0" \
2919 "lwz 5, 12(11)\n\t" /* arg3->r5 */ \
2920 "lwz 6, 16(11)\n\t" /* arg4->r6 */ \
2921 "lwz 7, 20(11)\n\t" /* arg5->r7 */ \
2922 "lwz 8, 24(11)\n\t" /* arg6->r8 */ \
2923 "lwz 9, 28(11)\n\t" /* arg7->r9 */ \
2924 "lwz 10, 32(11)\n\t" /* arg8->r10 */ \
2925 "lwz 11, 0(11)\n\t" /* target->r11 */ \
2926 VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R11 \
2927 "mr 11,%1\n\t" \
2928 "mr %0,3\n\t" \
2929 "lwz 2,-8(11)\n\t" /* restore tocptr */ \
2930 VG_CONTRACT_FRAME_BY(72) \
2931 VG_CONTRACT_FRAME_BY(512) \
2932 : /*out*/ "=r" (_res) \ 3030 : /*out*/ "=r" (_res) \
2933 : /*in*/ "r" (&_argvec[2]) \ 3031 : /*in*/ "0" (&_argvec[0]) \
3032 : /*trash*/ "cc", "memory",__CALLER_SAVED_REGS \
3033 ); \
3034 lval = (__typeof__(lval)) _res; \
3035 } while (0)
3036
3037 #define CALL_FN_W_12W(lval, orig, arg1,arg2,arg3,arg4,arg5, \
3038 arg6,arg7,arg8,arg9,arg10, \
3039 arg11,arg12) \
3040 do { \
3041 volatile OrigFn _orig = (orig); \
3042 volatile unsigned long _argvec[13]; \
3043 volatile unsigned long _res; \
3044 _argvec[0] = (unsigned long)_orig.nraddr; \
3045 _argvec[1] = (unsigned long)(arg1); \
3046 _argvec[2] = (unsigned long)(arg2); \
3047 _argvec[3] = (unsigned long)(arg3); \
3048 _argvec[4] = (unsigned long)(arg4); \
3049 _argvec[5] = (unsigned long)(arg5); \
3050 _argvec[6] = (unsigned long)(arg6); \
3051 _argvec[7] = (unsigned long)(arg7); \
3052 _argvec[8] = (unsigned long)(arg8); \
3053 _argvec[9] = (unsigned long)(arg9); \
3054 _argvec[10] = (unsigned long)(arg10); \
3055 _argvec[11] = (unsigned long)(arg11); \
3056 _argvec[12] = (unsigned long)(arg12); \
3057 __asm__ volatile( \
3058 "ldr r0, [%1, #40] \n\t" \
3059 "ldr r1, [%1, #44] \n\t" \
3060 "ldr r2, [%1, #48] \n\t" \
3061 "push {r0, r1, r2} \n\t" \
3062 "ldr r0, [%1, #20] \n\t" \
3063 "ldr r1, [%1, #24] \n\t" \
3064 "ldr r2, [%1, #28] \n\t" \
3065 "ldr r3, [%1, #32] \n\t" \
3066 "ldr r4, [%1, #36] \n\t" \
3067 "push {r0, r1, r2, r3, r4} \n\t" \
3068 "ldr r0, [%1, #4] \n\t" \
3069 "ldr r1, [%1, #8] \n\t" \
3070 "ldr r2, [%1, #12] \n\t" \
3071 "ldr r3, [%1, #16] \n\t" \
3072 "ldr r4, [%1] \n\t" /* target->r4 */ \
3073 VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R4 \
3074 "add sp, sp, #32 \n\t" \
3075 "mov %0, r0" \
3076 : /*out*/ "=r" (_res) \
3077 : /*in*/ "0" (&_argvec[0]) \
2934 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \ 3078 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \
2935 ); \ 3079 ); \
2936 lval = (__typeof__(lval)) _res; \ 3080 lval = (__typeof__(lval)) _res; \
2937 } while (0) 3081 } while (0)
2938 3082
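Note: the per-platform CALL_FN_W_* macros above are normally not invoked directly; they are consumed by function wrappers built with the wrapping macros defined elsewhere in this header (I_WRAP_SONAME_FN_ZU, OrigFn, VALGRIND_GET_ORIG_FN). A minimal sketch of that pattern follows, modelled on the example in the Valgrind manual; the wrapped function foo(int, int) and its behaviour are hypothetical, chosen only to show the calling sequence.

    #include <stdio.h>
    #include "valgrind.h"

    /* Wrapper for a hypothetical int foo(int, int) in the main executable
       ("NONE" soname).  VALGRIND_GET_ORIG_FN fills in the non-redirected
       address of the real foo, and CALL_FN_W_WW performs the hidden call
       via the platform-specific macro selected above. */
    int I_WRAP_SONAME_FN_ZU(NONE, foo)(int x, int y)
    {
       int    result;
       OrigFn fn;
       VALGRIND_GET_ORIG_FN(fn);
       printf("foo wrapper: args %d %d\n", x, y);
       CALL_FN_W_WW(result, fn, x, y);   /* call the real foo */
       printf("foo wrapper: result %d\n", result);
       return result;
    }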
2939 #define CALL_FN_W_12W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \ 3083 #endif /* PLAT_arm_linux */
2940 arg7,arg8,arg9,arg10,arg11,arg12) \ 3084
2941 do { \ 3085 /* ------------------------- s390x-linux ------------------------- */
2942 volatile OrigFn _orig = (orig); \ 3086
2943 volatile unsigned long _argvec[3+12]; \ 3087 #if defined(PLAT_s390x_linux)
2944 volatile unsigned long _res; \ 3088
2945 /* _argvec[0] holds current r2 across the call */ \ 3089 /* Similar workaround as amd64 (see above), but we use r11 as frame
2946 _argvec[1] = (unsigned long)_orig.r2; \ 3090 pointer and save the old r11 in r7. r11 might be used for
2947 _argvec[2] = (unsigned long)_orig.nraddr; \ 3091 argvec, therefore we copy argvec in r1 since r1 is clobbered
2948 _argvec[2+1] = (unsigned long)arg1; \ 3092 after the call anyway. */
2949 _argvec[2+2] = (unsigned long)arg2; \ 3093 #if defined(__GNUC__) && defined(__GCC_HAVE_DWARF2_CFI_ASM)
2950 _argvec[2+3] = (unsigned long)arg3; \ 3094 # define __FRAME_POINTER \
2951 _argvec[2+4] = (unsigned long)arg4; \ 3095 ,"d"(__builtin_dwarf_cfa())
2952 _argvec[2+5] = (unsigned long)arg5; \ 3096 # define VALGRIND_CFI_PROLOGUE \
2953 _argvec[2+6] = (unsigned long)arg6; \ 3097 ".cfi_remember_state\n\t" \
2954 _argvec[2+7] = (unsigned long)arg7; \ 3098 "lgr 1,%1\n\t" /* copy the argvec pointer in r1 */ \
2955 _argvec[2+8] = (unsigned long)arg8; \ 3099 "lgr 7,11\n\t" \
2956 _argvec[2+9] = (unsigned long)arg9; \ 3100 "lgr 11,%2\n\t" \
2957 _argvec[2+10] = (unsigned long)arg10; \ 3101 ".cfi_def_cfa r11, 0\n\t"
2958 _argvec[2+11] = (unsigned long)arg11; \ 3102 # define VALGRIND_CFI_EPILOGUE \
2959 _argvec[2+12] = (unsigned long)arg12; \ 3103 "lgr 11, 7\n\t" \
2960 __asm__ volatile( \ 3104 ".cfi_restore_state\n\t"
2961 "mr 11,%1\n\t" \ 3105 #else
2962 VG_EXPAND_FRAME_BY_trashes_r3(512) \ 3106 # define __FRAME_POINTER
2963 "stw 2,-8(11)\n\t" /* save tocptr */ \ 3107 # define VALGRIND_CFI_PROLOGUE \
2964 "lwz 2,-4(11)\n\t" /* use nraddr's tocptr */ \ 3108 "lgr 1,%1\n\t"
2965 VG_EXPAND_FRAME_BY_trashes_r3(72) \ 3109 # define VALGRIND_CFI_EPILOGUE
2966 /* arg12 */ \ 3110 #endif
2967 "lwz 3,48(11)\n\t" \ 3111
2968 "stw 3,68(1)\n\t" \ 3112
2969 /* arg11 */ \ 3113
2970 "lwz 3,44(11)\n\t" \ 3114
2971 "stw 3,64(1)\n\t" \ 3115 /* These regs are trashed by the hidden call. Note that we overwrite
2972 /* arg10 */ \ 3116 r14 in s390_irgen_noredir (VEX/priv/guest_s390_irgen.c) to give the
2973 "lwz 3,40(11)\n\t" \ 3117 function a proper return address. All others are ABI defined call
2974 "stw 3,60(1)\n\t" \ 3118 clobbers. */
2975 /* arg9 */ \ 3119 #define __CALLER_SAVED_REGS "0","1","2","3","4","5","14", \
2976 "lwz 3,36(11)\n\t" \ 3120 "f0","f1","f2","f3","f4","f5","f6","f7"
2977 "stw 3,56(1)\n\t" \ 3121
2978 /* args1-8 */ \ 3122
2979 "lwz 3, 4(11)\n\t" /* arg1->r3 */ \ 3123 #define CALL_FN_W_v(lval, orig) \
2980 "lwz 4, 8(11)\n\t" /* arg2->r4 */ \ 3124 do { \
2981 "lwz 5, 12(11)\n\t" /* arg3->r5 */ \ 3125 volatile OrigFn _orig = (orig); \
2982 "lwz 6, 16(11)\n\t" /* arg4->r6 */ \ 3126 volatile unsigned long _argvec[1]; \
2983 "lwz 7, 20(11)\n\t" /* arg5->r7 */ \ 3127 volatile unsigned long _res; \
2984 "lwz 8, 24(11)\n\t" /* arg6->r8 */ \ 3128 _argvec[0] = (unsigned long)_orig.nraddr; \
2985 "lwz 9, 28(11)\n\t" /* arg7->r9 */ \ 3129 __asm__ volatile( \
2986 "lwz 10, 32(11)\n\t" /* arg8->r10 */ \ 3130 VALGRIND_CFI_PROLOGUE \
2987 "lwz 11, 0(11)\n\t" /* target->r11 */ \ 3131 "aghi 15,-160\n\t" \
2988 VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R11 \ 3132 "lg 1, 0(1)\n\t" /* target->r1 */ \
2989 "mr 11,%1\n\t" \ 3133 VALGRIND_CALL_NOREDIR_R1 \
2990 "mr %0,3\n\t" \ 3134 "lgr %0, 2\n\t" \
2991 "lwz 2,-8(11)\n\t" /* restore tocptr */ \ 3135 "aghi 15,160\n\t" \
2992 VG_CONTRACT_FRAME_BY(72) \ 3136 VALGRIND_CFI_EPILOGUE \
2993 VG_CONTRACT_FRAME_BY(512) \ 3137 : /*out*/ "=d" (_res) \
2994 : /*out*/ "=r" (_res) \ 3138 : /*in*/ "d" (&_argvec[0]) __FRAME_POINTER \
2995 : /*in*/ "r" (&_argvec[2]) \ 3139 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS,"7" \
2996 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \ 3140 ); \
2997 ); \ 3141 lval = (__typeof__(lval)) _res; \
2998 lval = (__typeof__(lval)) _res; \ 3142 } while (0)
2999 } while (0) 3143
3000 3144 /* The call abi has the arguments in r2-r6 and stack */
3001 #endif /* PLAT_ppc32_aix5 */ 3145 #define CALL_FN_W_W(lval, orig, arg1) \
3002 3146 do { \
3003 /* ------------------------ ppc64-aix5 ------------------------- */ 3147 volatile OrigFn _orig = (orig); \
3004 3148 volatile unsigned long _argvec[2]; \
3005 #if defined(PLAT_ppc64_aix5) 3149 volatile unsigned long _res; \
3006 3150 _argvec[0] = (unsigned long)_orig.nraddr; \
3007 /* ARGREGS: r3 r4 r5 r6 r7 r8 r9 r10 (the rest on stack somewhere) */ 3151 _argvec[1] = (unsigned long)arg1; \
3008 3152 __asm__ volatile( \
3009 /* These regs are trashed by the hidden call. */ 3153 VALGRIND_CFI_PROLOGUE \
3010 #define __CALLER_SAVED_REGS \ 3154 "aghi 15,-160\n\t" \
3011 "lr", "ctr", "xer", \ 3155 "lg 2, 8(1)\n\t" \
3012 "cr0", "cr1", "cr2", "cr3", "cr4", "cr5", "cr6", "cr7", \ 3156 "lg 1, 0(1)\n\t" \
3013 "r0", "r2", "r3", "r4", "r5", "r6", "r7", "r8", "r9", "r10", \ 3157 VALGRIND_CALL_NOREDIR_R1 \
3014 "r11", "r12", "r13" 3158 "lgr %0, 2\n\t" \
3015 3159 "aghi 15,160\n\t" \
3016 /* Expand the stack frame, copying enough info that unwinding 3160 VALGRIND_CFI_EPILOGUE \
3017 still works. Trashes r3. */ 3161 : /*out*/ "=d" (_res) \
3018 3162 : /*in*/ "a" (&_argvec[0]) __FRAME_POINTER \
3019 #define VG_EXPAND_FRAME_BY_trashes_r3(_n_fr) \ 3163 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS,"7" \
3020 "addi 1,1,-" #_n_fr "\n\t" \ 3164 ); \
3021 "ld 3," #_n_fr "(1)\n\t" \ 3165 lval = (__typeof__(lval)) _res; \
3022 "std 3,0(1)\n\t" 3166 } while (0)
3023 3167
3024 #define VG_CONTRACT_FRAME_BY(_n_fr) \ 3168 #define CALL_FN_W_WW(lval, orig, arg1, arg2) \
3025 "addi 1,1," #_n_fr "\n\t" 3169 do { \
3026 3170 volatile OrigFn _orig = (orig); \
3027 /* These CALL_FN_ macros assume that on ppc64-aix5, sizeof(unsigned 3171 volatile unsigned long _argvec[3]; \
3028 long) == 8. */ 3172 volatile unsigned long _res; \
3029 3173 _argvec[0] = (unsigned long)_orig.nraddr; \
3030 #define CALL_FN_W_v(lval, orig) \ 3174 _argvec[1] = (unsigned long)arg1; \
3031 do { \ 3175 _argvec[2] = (unsigned long)arg2; \
3032 volatile OrigFn _orig = (orig); \ 3176 __asm__ volatile( \
3033 volatile unsigned long _argvec[3+0]; \ 3177 VALGRIND_CFI_PROLOGUE \
3034 volatile unsigned long _res; \ 3178 "aghi 15,-160\n\t" \
3035 /* _argvec[0] holds current r2 across the call */ \ 3179 "lg 2, 8(1)\n\t" \
3036 _argvec[1] = (unsigned long)_orig.r2; \ 3180 "lg 3,16(1)\n\t" \
3037 _argvec[2] = (unsigned long)_orig.nraddr; \ 3181 "lg 1, 0(1)\n\t" \
3038 __asm__ volatile( \ 3182 VALGRIND_CALL_NOREDIR_R1 \
3039 "mr 11,%1\n\t" \ 3183 "lgr %0, 2\n\t" \
3040 VG_EXPAND_FRAME_BY_trashes_r3(512) \ 3184 "aghi 15,160\n\t" \
3041 "std 2,-16(11)\n\t" /* save tocptr */ \ 3185 VALGRIND_CFI_EPILOGUE \
3042 "ld 2,-8(11)\n\t" /* use nraddr's tocptr */ \ 3186 : /*out*/ "=d" (_res) \
3043 "ld 11, 0(11)\n\t" /* target->r11 */ \ 3187 : /*in*/ "a" (&_argvec[0]) __FRAME_POINTER \
3044 VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R11 \ 3188 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS,"7" \
3045 "mr 11,%1\n\t" \ 3189 ); \
3046 "mr %0,3\n\t" \ 3190 lval = (__typeof__(lval)) _res; \
3047 "ld 2,-16(11)\n\t" /* restore tocptr */ \ 3191 } while (0)
3048 VG_CONTRACT_FRAME_BY(512) \ 3192
3049 : /*out*/ "=r" (_res) \ 3193 #define CALL_FN_W_WWW(lval, orig, arg1, arg2, arg3) \
3050 : /*in*/ "r" (&_argvec[2]) \ 3194 do { \
3051 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \ 3195 volatile OrigFn _orig = (orig); \
3052 ); \ 3196 volatile unsigned long _argvec[4]; \
3053 lval = (__typeof__(lval)) _res; \ 3197 volatile unsigned long _res; \
3054 } while (0) 3198 _argvec[0] = (unsigned long)_orig.nraddr; \
3055 3199 _argvec[1] = (unsigned long)arg1; \
3056 #define CALL_FN_W_W(lval, orig, arg1) \ 3200 _argvec[2] = (unsigned long)arg2; \
3057 do { \ 3201 _argvec[3] = (unsigned long)arg3; \
3058 volatile OrigFn _orig = (orig); \ 3202 __asm__ volatile( \
3059 volatile unsigned long _argvec[3+1]; \ 3203 VALGRIND_CFI_PROLOGUE \
3060 volatile unsigned long _res; \ 3204 "aghi 15,-160\n\t" \
3061 /* _argvec[0] holds current r2 across the call */ \ 3205 "lg 2, 8(1)\n\t" \
3062 _argvec[1] = (unsigned long)_orig.r2; \ 3206 "lg 3,16(1)\n\t" \
3063 _argvec[2] = (unsigned long)_orig.nraddr; \ 3207 "lg 4,24(1)\n\t" \
3064 _argvec[2+1] = (unsigned long)arg1; \ 3208 "lg 1, 0(1)\n\t" \
3065 __asm__ volatile( \ 3209 VALGRIND_CALL_NOREDIR_R1 \
3066 "mr 11,%1\n\t" \ 3210 "lgr %0, 2\n\t" \
3067 VG_EXPAND_FRAME_BY_trashes_r3(512) \ 3211 "aghi 15,160\n\t" \
3068 "std 2,-16(11)\n\t" /* save tocptr */ \ 3212 VALGRIND_CFI_EPILOGUE \
3069 "ld 2,-8(11)\n\t" /* use nraddr's tocptr */ \ 3213 : /*out*/ "=d" (_res) \
3070 "ld 3, 8(11)\n\t" /* arg1->r3 */ \ 3214 : /*in*/ "a" (&_argvec[0]) __FRAME_POINTER \
3071 "ld 11, 0(11)\n\t" /* target->r11 */ \ 3215 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS,"7" \
3072 VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R11 \ 3216 ); \
3073 "mr 11,%1\n\t" \ 3217 lval = (__typeof__(lval)) _res; \
3074 "mr %0,3\n\t" \ 3218 } while (0)
3075 "ld 2,-16(11)\n\t" /* restore tocptr */ \ 3219
3076 VG_CONTRACT_FRAME_BY(512) \ 3220 #define CALL_FN_W_WWWW(lval, orig, arg1, arg2, arg3, arg4) \
3077 : /*out*/ "=r" (_res) \ 3221 do { \
3078 : /*in*/ "r" (&_argvec[2]) \ 3222 volatile OrigFn _orig = (orig); \
3079 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \ 3223 volatile unsigned long _argvec[5]; \
3080 ); \ 3224 volatile unsigned long _res; \
3081 lval = (__typeof__(lval)) _res; \ 3225 _argvec[0] = (unsigned long)_orig.nraddr; \
3082 } while (0) 3226 _argvec[1] = (unsigned long)arg1; \
3083 3227 _argvec[2] = (unsigned long)arg2; \
3084 #define CALL_FN_W_WW(lval, orig, arg1,arg2) \ 3228 _argvec[3] = (unsigned long)arg3; \
3085 do { \ 3229 _argvec[4] = (unsigned long)arg4; \
3086 volatile OrigFn _orig = (orig); \ 3230 __asm__ volatile( \
3087 volatile unsigned long _argvec[3+2]; \ 3231 VALGRIND_CFI_PROLOGUE \
3088 volatile unsigned long _res; \ 3232 "aghi 15,-160\n\t" \
3089 /* _argvec[0] holds current r2 across the call */ \ 3233 "lg 2, 8(1)\n\t" \
3090 _argvec[1] = (unsigned long)_orig.r2; \ 3234 "lg 3,16(1)\n\t" \
3091 _argvec[2] = (unsigned long)_orig.nraddr; \ 3235 "lg 4,24(1)\n\t" \
3092 _argvec[2+1] = (unsigned long)arg1; \ 3236 "lg 5,32(1)\n\t" \
3093 _argvec[2+2] = (unsigned long)arg2; \ 3237 "lg 1, 0(1)\n\t" \
3094 __asm__ volatile( \ 3238 VALGRIND_CALL_NOREDIR_R1 \
3095 "mr 11,%1\n\t" \ 3239 "lgr %0, 2\n\t" \
3096 VG_EXPAND_FRAME_BY_trashes_r3(512) \ 3240 "aghi 15,160\n\t" \
3097 "std 2,-16(11)\n\t" /* save tocptr */ \ 3241 VALGRIND_CFI_EPILOGUE \
3098 "ld 2,-8(11)\n\t" /* use nraddr's tocptr */ \ 3242 : /*out*/ "=d" (_res) \
3099 "ld 3, 8(11)\n\t" /* arg1->r3 */ \ 3243 : /*in*/ "a" (&_argvec[0]) __FRAME_POINTER \
3100 "ld 4, 16(11)\n\t" /* arg2->r4 */ \ 3244 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS,"7" \
3101 "ld 11, 0(11)\n\t" /* target->r11 */ \ 3245 ); \
3102 VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R11 \ 3246 lval = (__typeof__(lval)) _res; \
3103 "mr 11,%1\n\t" \ 3247 } while (0)
3104 "mr %0,3\n\t" \ 3248
3105 "ld 2,-16(11)\n\t" /* restore tocptr */ \ 3249 #define CALL_FN_W_5W(lval, orig, arg1, arg2, arg3, arg4, arg5) \
3106 VG_CONTRACT_FRAME_BY(512) \ 3250 do { \
3107 : /*out*/ "=r" (_res) \ 3251 volatile OrigFn _orig = (orig); \
3108 : /*in*/ "r" (&_argvec[2]) \ 3252 volatile unsigned long _argvec[6]; \
3109 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \ 3253 volatile unsigned long _res; \
3110 ); \ 3254 _argvec[0] = (unsigned long)_orig.nraddr; \
3111 lval = (__typeof__(lval)) _res; \ 3255 _argvec[1] = (unsigned long)arg1; \
3112 } while (0) 3256 _argvec[2] = (unsigned long)arg2; \
3113 3257 _argvec[3] = (unsigned long)arg3; \
3114 #define CALL_FN_W_WWW(lval, orig, arg1,arg2,arg3) \ 3258 _argvec[4] = (unsigned long)arg4; \
3115 do { \ 3259 _argvec[5] = (unsigned long)arg5; \
3116 volatile OrigFn _orig = (orig); \ 3260 __asm__ volatile( \
3117 volatile unsigned long _argvec[3+3]; \ 3261 VALGRIND_CFI_PROLOGUE \
3118 volatile unsigned long _res; \ 3262 "aghi 15,-160\n\t" \
3119 /* _argvec[0] holds current r2 across the call */ \ 3263 "lg 2, 8(1)\n\t" \
3120 _argvec[1] = (unsigned long)_orig.r2; \ 3264 "lg 3,16(1)\n\t" \
3121 _argvec[2] = (unsigned long)_orig.nraddr; \ 3265 "lg 4,24(1)\n\t" \
3122 _argvec[2+1] = (unsigned long)arg1; \ 3266 "lg 5,32(1)\n\t" \
3123 _argvec[2+2] = (unsigned long)arg2; \ 3267 "lg 6,40(1)\n\t" \
3124 _argvec[2+3] = (unsigned long)arg3; \ 3268 "lg 1, 0(1)\n\t" \
3125 __asm__ volatile( \ 3269 VALGRIND_CALL_NOREDIR_R1 \
3126 "mr 11,%1\n\t" \ 3270 "lgr %0, 2\n\t" \
3127 VG_EXPAND_FRAME_BY_trashes_r3(512) \ 3271 "aghi 15,160\n\t" \
3128 "std 2,-16(11)\n\t" /* save tocptr */ \ 3272 VALGRIND_CFI_EPILOGUE \
3129 "ld 2,-8(11)\n\t" /* use nraddr's tocptr */ \ 3273 : /*out*/ "=d" (_res) \
3130 "ld 3, 8(11)\n\t" /* arg1->r3 */ \ 3274 : /*in*/ "a" (&_argvec[0]) __FRAME_POINTER \
3131 "ld 4, 16(11)\n\t" /* arg2->r4 */ \ 3275 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS,"6","7" \
3132 "ld 5, 24(11)\n\t" /* arg3->r5 */ \ 3276 ); \
3133 "ld 11, 0(11)\n\t" /* target->r11 */ \ 3277 lval = (__typeof__(lval)) _res; \
3134 VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R11 \ 3278 } while (0)
3135 "mr 11,%1\n\t" \ 3279
3136 "mr %0,3\n\t" \ 3280 #define CALL_FN_W_6W(lval, orig, arg1, arg2, arg3, arg4, arg5, \
3137 "ld 2,-16(11)\n\t" /* restore tocptr */ \ 3281 arg6) \
3138 VG_CONTRACT_FRAME_BY(512) \ 3282 do { \
3139 : /*out*/ "=r" (_res) \ 3283 volatile OrigFn _orig = (orig); \
3140 : /*in*/ "r" (&_argvec[2]) \ 3284 volatile unsigned long _argvec[7]; \
3141 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \ 3285 volatile unsigned long _res; \
3142 ); \ 3286 _argvec[0] = (unsigned long)_orig.nraddr; \
3143 lval = (__typeof__(lval)) _res; \ 3287 _argvec[1] = (unsigned long)arg1; \
3144 } while (0) 3288 _argvec[2] = (unsigned long)arg2; \
3145 3289 _argvec[3] = (unsigned long)arg3; \
3146 #define CALL_FN_W_WWWW(lval, orig, arg1,arg2,arg3,arg4) \ 3290 _argvec[4] = (unsigned long)arg4; \
3147 do { \ 3291 _argvec[5] = (unsigned long)arg5; \
3148 volatile OrigFn _orig = (orig); \ 3292 _argvec[6] = (unsigned long)arg6; \
3149 volatile unsigned long _argvec[3+4]; \ 3293 __asm__ volatile( \
3150 volatile unsigned long _res; \ 3294 VALGRIND_CFI_PROLOGUE \
3151 /* _argvec[0] holds current r2 across the call */ \ 3295 "aghi 15,-168\n\t" \
3152 _argvec[1] = (unsigned long)_orig.r2; \ 3296 "lg 2, 8(1)\n\t" \
3153 _argvec[2] = (unsigned long)_orig.nraddr; \ 3297 "lg 3,16(1)\n\t" \
3154 _argvec[2+1] = (unsigned long)arg1; \ 3298 "lg 4,24(1)\n\t" \
3155 _argvec[2+2] = (unsigned long)arg2; \ 3299 "lg 5,32(1)\n\t" \
3156 _argvec[2+3] = (unsigned long)arg3; \ 3300 "lg 6,40(1)\n\t" \
3157 _argvec[2+4] = (unsigned long)arg4; \ 3301 "mvc 160(8,15), 48(1)\n\t" \
3158 __asm__ volatile( \ 3302 "lg 1, 0(1)\n\t" \
3159 "mr 11,%1\n\t" \ 3303 VALGRIND_CALL_NOREDIR_R1 \
3160 VG_EXPAND_FRAME_BY_trashes_r3(512) \ 3304 "lgr %0, 2\n\t" \
3161 "std 2,-16(11)\n\t" /* save tocptr */ \ 3305 "aghi 15,168\n\t" \
3162 "ld 2,-8(11)\n\t" /* use nraddr's tocptr */ \ 3306 VALGRIND_CFI_EPILOGUE \
3163 "ld 3, 8(11)\n\t" /* arg1->r3 */ \ 3307 : /*out*/ "=d" (_res) \
3164 "ld 4, 16(11)\n\t" /* arg2->r4 */ \ 3308 : /*in*/ "a" (&_argvec[0]) __FRAME_POINTER \
3165 "ld 5, 24(11)\n\t" /* arg3->r5 */ \ 3309 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS,"6","7" \
3166 "ld 6, 32(11)\n\t" /* arg4->r6 */ \ 3310 ); \
3167 "ld 11, 0(11)\n\t" /* target->r11 */ \ 3311 lval = (__typeof__(lval)) _res; \
3168 VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R11 \ 3312 } while (0)
3169 "mr 11,%1\n\t" \ 3313
3170 "mr %0,3\n\t" \ 3314 #define CALL_FN_W_7W(lval, orig, arg1, arg2, arg3, arg4, arg5, \
3171 "ld 2,-16(11)\n\t" /* restore tocptr */ \ 3315 arg6, arg7) \
3172 VG_CONTRACT_FRAME_BY(512) \ 3316 do { \
3173 : /*out*/ "=r" (_res) \ 3317 volatile OrigFn _orig = (orig); \
3174 : /*in*/ "r" (&_argvec[2]) \ 3318 volatile unsigned long _argvec[8]; \
3175 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \ 3319 volatile unsigned long _res; \
3176 ); \ 3320 _argvec[0] = (unsigned long)_orig.nraddr; \
3177 lval = (__typeof__(lval)) _res; \ 3321 _argvec[1] = (unsigned long)arg1; \
3178 } while (0) 3322 _argvec[2] = (unsigned long)arg2; \
3179 3323 _argvec[3] = (unsigned long)arg3; \
3180 #define CALL_FN_W_5W(lval, orig, arg1,arg2,arg3,arg4,arg5) \ 3324 _argvec[4] = (unsigned long)arg4; \
3181 do { \ 3325 _argvec[5] = (unsigned long)arg5; \
3182 volatile OrigFn _orig = (orig); \ 3326 _argvec[6] = (unsigned long)arg6; \
3183 volatile unsigned long _argvec[3+5]; \ 3327 _argvec[7] = (unsigned long)arg7; \
3184 volatile unsigned long _res; \ 3328 __asm__ volatile( \
3185 /* _argvec[0] holds current r2 across the call */ \ 3329 VALGRIND_CFI_PROLOGUE \
3186 _argvec[1] = (unsigned long)_orig.r2; \ 3330 "aghi 15,-176\n\t" \
3187 _argvec[2] = (unsigned long)_orig.nraddr; \ 3331 "lg 2, 8(1)\n\t" \
3188 _argvec[2+1] = (unsigned long)arg1; \ 3332 "lg 3,16(1)\n\t" \
3189 _argvec[2+2] = (unsigned long)arg2; \ 3333 "lg 4,24(1)\n\t" \
3190 _argvec[2+3] = (unsigned long)arg3; \ 3334 "lg 5,32(1)\n\t" \
3191 _argvec[2+4] = (unsigned long)arg4; \ 3335 "lg 6,40(1)\n\t" \
3192 _argvec[2+5] = (unsigned long)arg5; \ 3336 "mvc 160(8,15), 48(1)\n\t" \
3193 __asm__ volatile( \ 3337 "mvc 168(8,15), 56(1)\n\t" \
3194 "mr 11,%1\n\t" \ 3338 "lg 1, 0(1)\n\t" \
3195 VG_EXPAND_FRAME_BY_trashes_r3(512) \ 3339 VALGRIND_CALL_NOREDIR_R1 \
3196 "std 2,-16(11)\n\t" /* save tocptr */ \ 3340 "lgr %0, 2\n\t" \
3197 "ld 2,-8(11)\n\t" /* use nraddr's tocptr */ \ 3341 "aghi 15,176\n\t" \
3198 "ld 3, 8(11)\n\t" /* arg1->r3 */ \ 3342 VALGRIND_CFI_EPILOGUE \
3199 "ld 4, 16(11)\n\t" /* arg2->r4 */ \ 3343 : /*out*/ "=d" (_res) \
3200 "ld 5, 24(11)\n\t" /* arg3->r5 */ \ 3344 : /*in*/ "a" (&_argvec[0]) __FRAME_POINTER \
3201 "ld 6, 32(11)\n\t" /* arg4->r6 */ \ 3345 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS,"6","7" \
3202 "ld 7, 40(11)\n\t" /* arg5->r7 */ \ 3346 ); \
3203 "ld 11, 0(11)\n\t" /* target->r11 */ \ 3347 lval = (__typeof__(lval)) _res; \
3204 VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R11 \ 3348 } while (0)
3205 "mr 11,%1\n\t" \ 3349
3206 "mr %0,3\n\t" \ 3350 #define CALL_FN_W_8W(lval, orig, arg1, arg2, arg3, arg4, arg5, \
3207 "ld 2,-16(11)\n\t" /* restore tocptr */ \ 3351 arg6, arg7 ,arg8) \
3208 VG_CONTRACT_FRAME_BY(512) \ 3352 do { \
3209 : /*out*/ "=r" (_res) \ 3353 volatile OrigFn _orig = (orig); \
3210 : /*in*/ "r" (&_argvec[2]) \ 3354 volatile unsigned long _argvec[9]; \
3211 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \ 3355 volatile unsigned long _res; \
3212 ); \ 3356 _argvec[0] = (unsigned long)_orig.nraddr; \
3213 lval = (__typeof__(lval)) _res; \ 3357 _argvec[1] = (unsigned long)arg1; \
3214 } while (0) 3358 _argvec[2] = (unsigned long)arg2; \
3215 3359 _argvec[3] = (unsigned long)arg3; \
3216 #define CALL_FN_W_6W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6) \ 3360 _argvec[4] = (unsigned long)arg4; \
3217 do { \ 3361 _argvec[5] = (unsigned long)arg5; \
3218 volatile OrigFn _orig = (orig); \ 3362 _argvec[6] = (unsigned long)arg6; \
3219 volatile unsigned long _argvec[3+6]; \ 3363 _argvec[7] = (unsigned long)arg7; \
3220 volatile unsigned long _res; \ 3364 _argvec[8] = (unsigned long)arg8; \
3221 /* _argvec[0] holds current r2 across the call */ \ 3365 __asm__ volatile( \
3222 _argvec[1] = (unsigned long)_orig.r2; \ 3366 VALGRIND_CFI_PROLOGUE \
3223 _argvec[2] = (unsigned long)_orig.nraddr; \ 3367 "aghi 15,-184\n\t" \
3224 _argvec[2+1] = (unsigned long)arg1; \ 3368 "lg 2, 8(1)\n\t" \
3225 _argvec[2+2] = (unsigned long)arg2; \ 3369 "lg 3,16(1)\n\t" \
3226 _argvec[2+3] = (unsigned long)arg3; \ 3370 "lg 4,24(1)\n\t" \
3227 _argvec[2+4] = (unsigned long)arg4; \ 3371 "lg 5,32(1)\n\t" \
3228 _argvec[2+5] = (unsigned long)arg5; \ 3372 "lg 6,40(1)\n\t" \
3229 _argvec[2+6] = (unsigned long)arg6; \ 3373 "mvc 160(8,15), 48(1)\n\t" \
3230 __asm__ volatile( \ 3374 "mvc 168(8,15), 56(1)\n\t" \
3231 "mr 11,%1\n\t" \ 3375 "mvc 176(8,15), 64(1)\n\t" \
3232 VG_EXPAND_FRAME_BY_trashes_r3(512) \ 3376 "lg 1, 0(1)\n\t" \
3233 "std 2,-16(11)\n\t" /* save tocptr */ \ 3377 VALGRIND_CALL_NOREDIR_R1 \
3234 "ld 2,-8(11)\n\t" /* use nraddr's tocptr */ \ 3378 "lgr %0, 2\n\t" \
3235 "ld 3, 8(11)\n\t" /* arg1->r3 */ \ 3379 "aghi 15,184\n\t" \
3236 "ld 4, 16(11)\n\t" /* arg2->r4 */ \ 3380 VALGRIND_CFI_EPILOGUE \
3237 "ld 5, 24(11)\n\t" /* arg3->r5 */ \ 3381 : /*out*/ "=d" (_res) \
3238 "ld 6, 32(11)\n\t" /* arg4->r6 */ \ 3382 : /*in*/ "a" (&_argvec[0]) __FRAME_POINTER \
3239 "ld 7, 40(11)\n\t" /* arg5->r7 */ \ 3383 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS,"6","7" \
3240 "ld 8, 48(11)\n\t" /* arg6->r8 */ \ 3384 ); \
3241 "ld 11, 0(11)\n\t" /* target->r11 */ \ 3385 lval = (__typeof__(lval)) _res; \
3242 VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R11 \ 3386 } while (0)
3243 "mr 11,%1\n\t" \ 3387
3244 "mr %0,3\n\t" \ 3388 #define CALL_FN_W_9W(lval, orig, arg1, arg2, arg3, arg4, arg5, \
3245 "ld 2,-16(11)\n\t" /* restore tocptr */ \ 3389 arg6, arg7 ,arg8, arg9) \
3246 VG_CONTRACT_FRAME_BY(512) \ 3390 do { \
3247 : /*out*/ "=r" (_res) \ 3391 volatile OrigFn _orig = (orig); \
3248 : /*in*/ "r" (&_argvec[2]) \ 3392 volatile unsigned long _argvec[10]; \
3249 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \ 3393 volatile unsigned long _res; \
3250 ); \ 3394 _argvec[0] = (unsigned long)_orig.nraddr; \
3251 lval = (__typeof__(lval)) _res; \ 3395 _argvec[1] = (unsigned long)arg1; \
3252 } while (0) 3396 _argvec[2] = (unsigned long)arg2; \
3253 3397 _argvec[3] = (unsigned long)arg3; \
3254 #define CALL_FN_W_7W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \ 3398 _argvec[4] = (unsigned long)arg4; \
3255 arg7) \ 3399 _argvec[5] = (unsigned long)arg5; \
3256 do { \ 3400 _argvec[6] = (unsigned long)arg6; \
3257 volatile OrigFn _orig = (orig); \ 3401 _argvec[7] = (unsigned long)arg7; \
3258 volatile unsigned long _argvec[3+7]; \ 3402 _argvec[8] = (unsigned long)arg8; \
3259 volatile unsigned long _res; \ 3403 _argvec[9] = (unsigned long)arg9; \
3260 /* _argvec[0] holds current r2 across the call */ \ 3404 __asm__ volatile( \
3261 _argvec[1] = (unsigned long)_orig.r2; \ 3405 VALGRIND_CFI_PROLOGUE \
3262 _argvec[2] = (unsigned long)_orig.nraddr; \ 3406 "aghi 15,-192\n\t" \
3263 _argvec[2+1] = (unsigned long)arg1; \ 3407 "lg 2, 8(1)\n\t" \
3264 _argvec[2+2] = (unsigned long)arg2; \ 3408 "lg 3,16(1)\n\t" \
3265 _argvec[2+3] = (unsigned long)arg3; \ 3409 "lg 4,24(1)\n\t" \
3266 _argvec[2+4] = (unsigned long)arg4; \ 3410 "lg 5,32(1)\n\t" \
3267 _argvec[2+5] = (unsigned long)arg5; \ 3411 "lg 6,40(1)\n\t" \
3268 _argvec[2+6] = (unsigned long)arg6; \ 3412 "mvc 160(8,15), 48(1)\n\t" \
3269 _argvec[2+7] = (unsigned long)arg7; \ 3413 "mvc 168(8,15), 56(1)\n\t" \
3270 __asm__ volatile( \ 3414 "mvc 176(8,15), 64(1)\n\t" \
3271 "mr 11,%1\n\t" \ 3415 "mvc 184(8,15), 72(1)\n\t" \
3272 VG_EXPAND_FRAME_BY_trashes_r3(512) \ 3416 "lg 1, 0(1)\n\t" \
3273 "std 2,-16(11)\n\t" /* save tocptr */ \ 3417 VALGRIND_CALL_NOREDIR_R1 \
3274 "ld 2,-8(11)\n\t" /* use nraddr's tocptr */ \ 3418 "lgr %0, 2\n\t" \
3275 "ld 3, 8(11)\n\t" /* arg1->r3 */ \ 3419 "aghi 15,192\n\t" \
3276 "ld 4, 16(11)\n\t" /* arg2->r4 */ \ 3420 VALGRIND_CFI_EPILOGUE \
3277 "ld 5, 24(11)\n\t" /* arg3->r5 */ \ 3421 : /*out*/ "=d" (_res) \
3278 "ld 6, 32(11)\n\t" /* arg4->r6 */ \ 3422 : /*in*/ "a" (&_argvec[0]) __FRAME_POINTER \
3279 "ld 7, 40(11)\n\t" /* arg5->r7 */ \ 3423 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS,"6","7" \
3280 "ld 8, 48(11)\n\t" /* arg6->r8 */ \ 3424 ); \
3281 "ld 9, 56(11)\n\t" /* arg7->r9 */ \ 3425 lval = (__typeof__(lval)) _res; \
3282 "ld 11, 0(11)\n\t" /* target->r11 */ \ 3426 } while (0)
3283 VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R11 \ 3427
3284 "mr 11,%1\n\t" \ 3428 #define CALL_FN_W_10W(lval, orig, arg1, arg2, arg3, arg4, arg5, \
3285 "mr %0,3\n\t" \ 3429 arg6, arg7 ,arg8, arg9, arg10) \
3286 "ld 2,-16(11)\n\t" /* restore tocptr */ \ 3430 do { \
3287 VG_CONTRACT_FRAME_BY(512) \ 3431 volatile OrigFn _orig = (orig); \
3288 : /*out*/ "=r" (_res) \ 3432 volatile unsigned long _argvec[11]; \
3289 : /*in*/ "r" (&_argvec[2]) \ 3433 volatile unsigned long _res; \
3290 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \ 3434 _argvec[0] = (unsigned long)_orig.nraddr; \
3291 ); \ 3435 _argvec[1] = (unsigned long)arg1; \
3292 lval = (__typeof__(lval)) _res; \ 3436 _argvec[2] = (unsigned long)arg2; \
3293 } while (0) 3437 _argvec[3] = (unsigned long)arg3; \
3294 3438 _argvec[4] = (unsigned long)arg4; \
3295 #define CALL_FN_W_8W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \ 3439 _argvec[5] = (unsigned long)arg5; \
3296 arg7,arg8) \ 3440 _argvec[6] = (unsigned long)arg6; \
3297 do { \ 3441 _argvec[7] = (unsigned long)arg7; \
3298 volatile OrigFn _orig = (orig); \ 3442 _argvec[8] = (unsigned long)arg8; \
3299 volatile unsigned long _argvec[3+8]; \ 3443 _argvec[9] = (unsigned long)arg9; \
3300 volatile unsigned long _res; \ 3444 _argvec[10] = (unsigned long)arg10; \
3301 /* _argvec[0] holds current r2 across the call */ \ 3445 __asm__ volatile( \
3302 _argvec[1] = (unsigned long)_orig.r2; \ 3446 VALGRIND_CFI_PROLOGUE \
3303 _argvec[2] = (unsigned long)_orig.nraddr; \ 3447 "aghi 15,-200\n\t" \
3304 _argvec[2+1] = (unsigned long)arg1; \ 3448 "lg 2, 8(1)\n\t" \
3305 _argvec[2+2] = (unsigned long)arg2; \ 3449 "lg 3,16(1)\n\t" \
3306 _argvec[2+3] = (unsigned long)arg3; \ 3450 "lg 4,24(1)\n\t" \
3307 _argvec[2+4] = (unsigned long)arg4; \ 3451 "lg 5,32(1)\n\t" \
3308 _argvec[2+5] = (unsigned long)arg5; \ 3452 "lg 6,40(1)\n\t" \
3309 _argvec[2+6] = (unsigned long)arg6; \ 3453 "mvc 160(8,15), 48(1)\n\t" \
3310 _argvec[2+7] = (unsigned long)arg7; \ 3454 "mvc 168(8,15), 56(1)\n\t" \
3311 _argvec[2+8] = (unsigned long)arg8; \ 3455 "mvc 176(8,15), 64(1)\n\t" \
3312 __asm__ volatile( \ 3456 "mvc 184(8,15), 72(1)\n\t" \
3313 "mr 11,%1\n\t" \ 3457 "mvc 192(8,15), 80(1)\n\t" \
3314 VG_EXPAND_FRAME_BY_trashes_r3(512) \ 3458 "lg 1, 0(1)\n\t" \
3315 "std 2,-16(11)\n\t" /* save tocptr */ \ 3459 VALGRIND_CALL_NOREDIR_R1 \
3316 "ld 2,-8(11)\n\t" /* use nraddr's tocptr */ \ 3460 "lgr %0, 2\n\t" \
3317 "ld 3, 8(11)\n\t" /* arg1->r3 */ \ 3461 "aghi 15,200\n\t" \
3318 "ld 4, 16(11)\n\t" /* arg2->r4 */ \ 3462 VALGRIND_CFI_EPILOGUE \
3319 "ld 5, 24(11)\n\t" /* arg3->r5 */ \ 3463 : /*out*/ "=d" (_res) \
3320 "ld 6, 32(11)\n\t" /* arg4->r6 */ \ 3464 : /*in*/ "a" (&_argvec[0]) __FRAME_POINTER \
3321 "ld 7, 40(11)\n\t" /* arg5->r7 */ \ 3465 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS,"6","7" \
3322 "ld 8, 48(11)\n\t" /* arg6->r8 */ \ 3466 ); \
3323 "ld 9, 56(11)\n\t" /* arg7->r9 */ \ 3467 lval = (__typeof__(lval)) _res; \
3324 "ld 10, 64(11)\n\t" /* arg8->r10 */ \ 3468 } while (0)
3325 "ld 11, 0(11)\n\t" /* target->r11 */ \ 3469
3326 VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R11 \ 3470 #define CALL_FN_W_11W(lval, orig, arg1, arg2, arg3, arg4, arg5, \
3327 "mr 11,%1\n\t" \ 3471 arg6, arg7 ,arg8, arg9, arg10, arg11) \
3328 "mr %0,3\n\t" \ 3472 do { \
3329 "ld 2,-16(11)\n\t" /* restore tocptr */ \ 3473 volatile OrigFn _orig = (orig); \
3330 VG_CONTRACT_FRAME_BY(512) \ 3474 volatile unsigned long _argvec[12]; \
3331 : /*out*/ "=r" (_res) \ 3475 volatile unsigned long _res; \
3332 : /*in*/ "r" (&_argvec[2]) \ 3476 _argvec[0] = (unsigned long)_orig.nraddr; \
3333 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \ 3477 _argvec[1] = (unsigned long)arg1; \
3334 ); \ 3478 _argvec[2] = (unsigned long)arg2; \
3335 lval = (__typeof__(lval)) _res; \ 3479 _argvec[3] = (unsigned long)arg3; \
3336 } while (0) 3480 _argvec[4] = (unsigned long)arg4; \
3337 3481 _argvec[5] = (unsigned long)arg5; \
3338 #define CALL_FN_W_9W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \ 3482 _argvec[6] = (unsigned long)arg6; \
3339 arg7,arg8,arg9) \ 3483 _argvec[7] = (unsigned long)arg7; \
3340 do { \ 3484 _argvec[8] = (unsigned long)arg8; \
3341 volatile OrigFn _orig = (orig); \ 3485 _argvec[9] = (unsigned long)arg9; \
3342 volatile unsigned long _argvec[3+9]; \ 3486 _argvec[10] = (unsigned long)arg10; \
3343 volatile unsigned long _res; \ 3487 _argvec[11] = (unsigned long)arg11; \
3344 /* _argvec[0] holds current r2 across the call */ \ 3488 __asm__ volatile( \
3345 _argvec[1] = (unsigned long)_orig.r2; \ 3489 VALGRIND_CFI_PROLOGUE \
3346 _argvec[2] = (unsigned long)_orig.nraddr; \ 3490 "aghi 15,-208\n\t" \
3347 _argvec[2+1] = (unsigned long)arg1; \ 3491 "lg 2, 8(1)\n\t" \
3348 _argvec[2+2] = (unsigned long)arg2; \ 3492 "lg 3,16(1)\n\t" \
3349 _argvec[2+3] = (unsigned long)arg3; \ 3493 "lg 4,24(1)\n\t" \
3350 _argvec[2+4] = (unsigned long)arg4; \ 3494 "lg 5,32(1)\n\t" \
3351 _argvec[2+5] = (unsigned long)arg5; \ 3495 "lg 6,40(1)\n\t" \
3352 _argvec[2+6] = (unsigned long)arg6; \ 3496 "mvc 160(8,15), 48(1)\n\t" \
3353 _argvec[2+7] = (unsigned long)arg7; \ 3497 "mvc 168(8,15), 56(1)\n\t" \
3354 _argvec[2+8] = (unsigned long)arg8; \ 3498 "mvc 176(8,15), 64(1)\n\t" \
3355 _argvec[2+9] = (unsigned long)arg9; \ 3499 "mvc 184(8,15), 72(1)\n\t" \
3356 __asm__ volatile( \ 3500 "mvc 192(8,15), 80(1)\n\t" \
3357 "mr 11,%1\n\t" \ 3501 "mvc 200(8,15), 88(1)\n\t" \
3358 VG_EXPAND_FRAME_BY_trashes_r3(512) \ 3502 "lg 1, 0(1)\n\t" \
3359 "std 2,-16(11)\n\t" /* save tocptr */ \ 3503 VALGRIND_CALL_NOREDIR_R1 \
3360 "ld 2,-8(11)\n\t" /* use nraddr's tocptr */ \ 3504 "lgr %0, 2\n\t" \
3361 VG_EXPAND_FRAME_BY_trashes_r3(128) \ 3505 "aghi 15,208\n\t" \
3362 /* arg9 */ \ 3506 VALGRIND_CFI_EPILOGUE \
3363 "ld 3,72(11)\n\t" \ 3507 : /*out*/ "=d" (_res) \
3364 "std 3,112(1)\n\t" \ 3508 : /*in*/ "a" (&_argvec[0]) __FRAME_POINTER \
3365 /* args1-8 */ \ 3509 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS,"6","7" \
3366 "ld 3, 8(11)\n\t" /* arg1->r3 */ \ 3510 ); \
3367 "ld 4, 16(11)\n\t" /* arg2->r4 */ \ 3511 lval = (__typeof__(lval)) _res; \
3368 "ld 5, 24(11)\n\t" /* arg3->r5 */ \ 3512 } while (0)
3369 "ld 6, 32(11)\n\t" /* arg4->r6 */ \ 3513
3370 "ld 7, 40(11)\n\t" /* arg5->r7 */ \ 3514 #define CALL_FN_W_12W(lval, orig, arg1, arg2, arg3, arg4, arg5, \
3371 "ld 8, 48(11)\n\t" /* arg6->r8 */ \ 3515 arg6, arg7 ,arg8, arg9, arg10, arg11, arg12)\
3372 "ld 9, 56(11)\n\t" /* arg7->r9 */ \ 3516 do { \
3373 "ld 10, 64(11)\n\t" /* arg8->r10 */ \ 3517 volatile OrigFn _orig = (orig); \
3374 "ld 11, 0(11)\n\t" /* target->r11 */ \ 3518 volatile unsigned long _argvec[13]; \
3375 VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R11 \ 3519 volatile unsigned long _res; \
3376 "mr 11,%1\n\t" \ 3520 _argvec[0] = (unsigned long)_orig.nraddr; \
3377 "mr %0,3\n\t" \ 3521 _argvec[1] = (unsigned long)arg1; \
3378 "ld 2,-16(11)\n\t" /* restore tocptr */ \ 3522 _argvec[2] = (unsigned long)arg2; \
3379 VG_CONTRACT_FRAME_BY(128) \ 3523 _argvec[3] = (unsigned long)arg3; \
3380 VG_CONTRACT_FRAME_BY(512) \ 3524 _argvec[4] = (unsigned long)arg4; \
3381 : /*out*/ "=r" (_res) \ 3525 _argvec[5] = (unsigned long)arg5; \
3382 : /*in*/ "r" (&_argvec[2]) \ 3526 _argvec[6] = (unsigned long)arg6; \
3383 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \ 3527 _argvec[7] = (unsigned long)arg7; \
3384 ); \ 3528 _argvec[8] = (unsigned long)arg8; \
3385 lval = (__typeof__(lval)) _res; \ 3529 _argvec[9] = (unsigned long)arg9; \
3386 } while (0) 3530 _argvec[10] = (unsigned long)arg10; \
3387 3531 _argvec[11] = (unsigned long)arg11; \
3388 #define CALL_FN_W_10W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \ 3532 _argvec[12] = (unsigned long)arg12; \
3389 arg7,arg8,arg9,arg10) \ 3533 __asm__ volatile( \
3390 do { \ 3534 VALGRIND_CFI_PROLOGUE \
3391 volatile OrigFn _orig = (orig); \ 3535 "aghi 15,-216\n\t" \
3392 volatile unsigned long _argvec[3+10]; \ 3536 "lg 2, 8(1)\n\t" \
3393 volatile unsigned long _res; \ 3537 "lg 3,16(1)\n\t" \
3394 /* _argvec[0] holds current r2 across the call */ \ 3538 "lg 4,24(1)\n\t" \
3395 _argvec[1] = (unsigned long)_orig.r2; \ 3539 "lg 5,32(1)\n\t" \
3396 _argvec[2] = (unsigned long)_orig.nraddr; \ 3540 "lg 6,40(1)\n\t" \
3397 _argvec[2+1] = (unsigned long)arg1; \ 3541 "mvc 160(8,15), 48(1)\n\t" \
3398 _argvec[2+2] = (unsigned long)arg2; \ 3542 "mvc 168(8,15), 56(1)\n\t" \
3399 _argvec[2+3] = (unsigned long)arg3; \ 3543 "mvc 176(8,15), 64(1)\n\t" \
3400 _argvec[2+4] = (unsigned long)arg4; \ 3544 "mvc 184(8,15), 72(1)\n\t" \
3401 _argvec[2+5] = (unsigned long)arg5; \ 3545 "mvc 192(8,15), 80(1)\n\t" \
3402 _argvec[2+6] = (unsigned long)arg6; \ 3546 "mvc 200(8,15), 88(1)\n\t" \
3403 _argvec[2+7] = (unsigned long)arg7; \ 3547 "mvc 208(8,15), 96(1)\n\t" \
3404 _argvec[2+8] = (unsigned long)arg8; \ 3548 "lg 1, 0(1)\n\t" \
3405 _argvec[2+9] = (unsigned long)arg9; \ 3549 VALGRIND_CALL_NOREDIR_R1 \
3406 _argvec[2+10] = (unsigned long)arg10; \ 3550 "lgr %0, 2\n\t" \
3407 __asm__ volatile( \ 3551 "aghi 15,216\n\t" \
3408 "mr 11,%1\n\t" \ 3552 VALGRIND_CFI_EPILOGUE \
3409 VG_EXPAND_FRAME_BY_trashes_r3(512) \ 3553 : /*out*/ "=d" (_res) \
3410 "std 2,-16(11)\n\t" /* save tocptr */ \ 3554 : /*in*/ "a" (&_argvec[0]) __FRAME_POINTER \
3411 "ld 2,-8(11)\n\t" /* use nraddr's tocptr */ \ 3555 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS,"6","7" \
3412 VG_EXPAND_FRAME_BY_trashes_r3(128) \ 3556 ); \
3413 /* arg10 */ \ 3557 lval = (__typeof__(lval)) _res; \
3414 "ld 3,80(11)\n\t" \ 3558 } while (0)
3415 "std 3,120(1)\n\t" \ 3559
3416 /* arg9 */ \ 3560
3417 "ld 3,72(11)\n\t" \ 3561 #endif /* PLAT_s390x_linux */
3418 "std 3,112(1)\n\t" \
3419 /* args1-8 */ \
3420 "ld 3, 8(11)\n\t" /* arg1->r3 */ \
3421 "ld 4, 16(11)\n\t" /* arg2->r4 */ \
3422 "ld 5, 24(11)\n\t" /* arg3->r5 */ \
3423 "ld 6, 32(11)\n\t" /* arg4->r6 */ \
3424 "ld 7, 40(11)\n\t" /* arg5->r7 */ \
3425 "ld 8, 48(11)\n\t" /* arg6->r8 */ \
3426 "ld 9, 56(11)\n\t" /* arg7->r9 */ \
3427 "ld 10, 64(11)\n\t" /* arg8->r10 */ \
3428 "ld 11, 0(11)\n\t" /* target->r11 */ \
3429 VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R11 \
3430 "mr 11,%1\n\t" \
3431 "mr %0,3\n\t" \
3432 "ld 2,-16(11)\n\t" /* restore tocptr */ \
3433 VG_CONTRACT_FRAME_BY(128) \
3434 VG_CONTRACT_FRAME_BY(512) \
3435 : /*out*/ "=r" (_res) \
3436 : /*in*/ "r" (&_argvec[2]) \
3437 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \
3438 ); \
3439 lval = (__typeof__(lval)) _res; \
3440 } while (0)
3441
3442 #define CALL_FN_W_11W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \
3443 arg7,arg8,arg9,arg10,arg11) \
3444 do { \
3445 volatile OrigFn _orig = (orig); \
3446 volatile unsigned long _argvec[3+11]; \
3447 volatile unsigned long _res; \
3448 /* _argvec[0] holds current r2 across the call */ \
3449 _argvec[1] = (unsigned long)_orig.r2; \
3450 _argvec[2] = (unsigned long)_orig.nraddr; \
3451 _argvec[2+1] = (unsigned long)arg1; \
3452 _argvec[2+2] = (unsigned long)arg2; \
3453 _argvec[2+3] = (unsigned long)arg3; \
3454 _argvec[2+4] = (unsigned long)arg4; \
3455 _argvec[2+5] = (unsigned long)arg5; \
3456 _argvec[2+6] = (unsigned long)arg6; \
3457 _argvec[2+7] = (unsigned long)arg7; \
3458 _argvec[2+8] = (unsigned long)arg8; \
3459 _argvec[2+9] = (unsigned long)arg9; \
3460 _argvec[2+10] = (unsigned long)arg10; \
3461 _argvec[2+11] = (unsigned long)arg11; \
3462 __asm__ volatile( \
3463 "mr 11,%1\n\t" \
3464 VG_EXPAND_FRAME_BY_trashes_r3(512) \
3465 "std 2,-16(11)\n\t" /* save tocptr */ \
3466 "ld 2,-8(11)\n\t" /* use nraddr's tocptr */ \
3467 VG_EXPAND_FRAME_BY_trashes_r3(144) \
3468 /* arg11 */ \
3469 "ld 3,88(11)\n\t" \
3470 "std 3,128(1)\n\t" \
3471 /* arg10 */ \
3472 "ld 3,80(11)\n\t" \
3473 "std 3,120(1)\n\t" \
3474 /* arg9 */ \
3475 "ld 3,72(11)\n\t" \
3476 "std 3,112(1)\n\t" \
3477 /* args1-8 */ \
3478 "ld 3, 8(11)\n\t" /* arg1->r3 */ \
3479 "ld 4, 16(11)\n\t" /* arg2->r4 */ \
3480 "ld 5, 24(11)\n\t" /* arg3->r5 */ \
3481 "ld 6, 32(11)\n\t" /* arg4->r6 */ \
3482 "ld 7, 40(11)\n\t" /* arg5->r7 */ \
3483 "ld 8, 48(11)\n\t" /* arg6->r8 */ \
3484 "ld 9, 56(11)\n\t" /* arg7->r9 */ \
3485 "ld 10, 64(11)\n\t" /* arg8->r10 */ \
3486 "ld 11, 0(11)\n\t" /* target->r11 */ \
3487 VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R11 \
3488 "mr 11,%1\n\t" \
3489 "mr %0,3\n\t" \
3490 "ld 2,-16(11)\n\t" /* restore tocptr */ \
3491 VG_CONTRACT_FRAME_BY(144) \
3492 VG_CONTRACT_FRAME_BY(512) \
3493 : /*out*/ "=r" (_res) \
3494 : /*in*/ "r" (&_argvec[2]) \
3495 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \
3496 ); \
3497 lval = (__typeof__(lval)) _res; \
3498 } while (0)
3499
3500 #define CALL_FN_W_12W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \
3501 arg7,arg8,arg9,arg10,arg11,arg12) \
3502 do { \
3503 volatile OrigFn _orig = (orig); \
3504 volatile unsigned long _argvec[3+12]; \
3505 volatile unsigned long _res; \
3506 /* _argvec[0] holds current r2 across the call */ \
3507 _argvec[1] = (unsigned long)_orig.r2; \
3508 _argvec[2] = (unsigned long)_orig.nraddr; \
3509 _argvec[2+1] = (unsigned long)arg1; \
3510 _argvec[2+2] = (unsigned long)arg2; \
3511 _argvec[2+3] = (unsigned long)arg3; \
3512 _argvec[2+4] = (unsigned long)arg4; \
3513 _argvec[2+5] = (unsigned long)arg5; \
3514 _argvec[2+6] = (unsigned long)arg6; \
3515 _argvec[2+7] = (unsigned long)arg7; \
3516 _argvec[2+8] = (unsigned long)arg8; \
3517 _argvec[2+9] = (unsigned long)arg9; \
3518 _argvec[2+10] = (unsigned long)arg10; \
3519 _argvec[2+11] = (unsigned long)arg11; \
3520 _argvec[2+12] = (unsigned long)arg12; \
3521 __asm__ volatile( \
3522 "mr 11,%1\n\t" \
3523 VG_EXPAND_FRAME_BY_trashes_r3(512) \
3524 "std 2,-16(11)\n\t" /* save tocptr */ \
3525 "ld 2,-8(11)\n\t" /* use nraddr's tocptr */ \
3526 VG_EXPAND_FRAME_BY_trashes_r3(144) \
3527 /* arg12 */ \
3528 "ld 3,96(11)\n\t" \
3529 "std 3,136(1)\n\t" \
3530 /* arg11 */ \
3531 "ld 3,88(11)\n\t" \
3532 "std 3,128(1)\n\t" \
3533 /* arg10 */ \
3534 "ld 3,80(11)\n\t" \
3535 "std 3,120(1)\n\t" \
3536 /* arg9 */ \
3537 "ld 3,72(11)\n\t" \
3538 "std 3,112(1)\n\t" \
3539 /* args1-8 */ \
3540 "ld 3, 8(11)\n\t" /* arg1->r3 */ \
3541 "ld 4, 16(11)\n\t" /* arg2->r4 */ \
3542 "ld 5, 24(11)\n\t" /* arg3->r5 */ \
3543 "ld 6, 32(11)\n\t" /* arg4->r6 */ \
3544 "ld 7, 40(11)\n\t" /* arg5->r7 */ \
3545 "ld 8, 48(11)\n\t" /* arg6->r8 */ \
3546 "ld 9, 56(11)\n\t" /* arg7->r9 */ \
3547 "ld 10, 64(11)\n\t" /* arg8->r10 */ \
3548 "ld 11, 0(11)\n\t" /* target->r11 */ \
3549 VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R11 \
3550 "mr 11,%1\n\t" \
3551 "mr %0,3\n\t" \
3552 "ld 2,-16(11)\n\t" /* restore tocptr */ \
3553 VG_CONTRACT_FRAME_BY(144) \
3554 VG_CONTRACT_FRAME_BY(512) \
3555 : /*out*/ "=r" (_res) \
3556 : /*in*/ "r" (&_argvec[2]) \
3557 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \
3558 ); \
3559 lval = (__typeof__(lval)) _res; \
3560 } while (0)
3561
3562 #endif /* PLAT_ppc64_aix5 */
3563 3562
3564 3563
3565 /* ------------------------------------------------------------------ */ 3564 /* ------------------------------------------------------------------ */
3566 /* ARCHITECTURE INDEPENDENT MACROS for CLIENT REQUESTS. */ 3565 /* ARCHITECTURE INDEPENDENT MACROS for CLIENT REQUESTS. */
3567 /* */ 3566 /* */
3568 /* ------------------------------------------------------------------ */ 3567 /* ------------------------------------------------------------------ */
3569 3568
3570 /* Some request codes. There are many more of these, but most are not 3569 /* Some request codes. There are many more of these, but most are not
3571 exposed to end-user view. These are the public ones, all of the 3570 exposed to end-user view. These are the public ones, all of the
3572 form 0x1000 + small_number. 3571 form 0x1000 + small_number.
(...skipping 25 matching lines...)
3598 VG_USERREQ__CLIENT_CALL0 = 0x1101, 3597 VG_USERREQ__CLIENT_CALL0 = 0x1101,
3599 VG_USERREQ__CLIENT_CALL1 = 0x1102, 3598 VG_USERREQ__CLIENT_CALL1 = 0x1102,
3600 VG_USERREQ__CLIENT_CALL2 = 0x1103, 3599 VG_USERREQ__CLIENT_CALL2 = 0x1103,
3601 VG_USERREQ__CLIENT_CALL3 = 0x1104, 3600 VG_USERREQ__CLIENT_CALL3 = 0x1104,
3602 3601
3603 /* Can be useful in regression testing suites -- eg. can 3602 /* Can be useful in regression testing suites -- eg. can
3604 send Valgrind's output to /dev/null and still count 3603 send Valgrind's output to /dev/null and still count
3605 errors. */ 3604 errors. */
3606 VG_USERREQ__COUNT_ERRORS = 0x1201, 3605 VG_USERREQ__COUNT_ERRORS = 0x1201,
3607 3606
3607 /* Allows a string (gdb monitor command) to be passed to the tool
3608 Used for interaction with vgdb/gdb */
3609 VG_USERREQ__GDB_MONITOR_COMMAND = 0x1202,
3610
3608 /* These are useful and can be interpreted by any tool that 3611 /* These are useful and can be interpreted by any tool that
3609 tracks malloc() et al, by using vg_replace_malloc.c. */ 3612 tracks malloc() et al, by using vg_replace_malloc.c. */
3610 VG_USERREQ__MALLOCLIKE_BLOCK = 0x1301, 3613 VG_USERREQ__MALLOCLIKE_BLOCK = 0x1301,
3614 VG_USERREQ__RESIZEINPLACE_BLOCK = 0x130b,
3611 VG_USERREQ__FREELIKE_BLOCK = 0x1302, 3615 VG_USERREQ__FREELIKE_BLOCK = 0x1302,
3612 /* Memory pool support. */ 3616 /* Memory pool support. */
3613 VG_USERREQ__CREATE_MEMPOOL = 0x1303, 3617 VG_USERREQ__CREATE_MEMPOOL = 0x1303,
3614 VG_USERREQ__DESTROY_MEMPOOL = 0x1304, 3618 VG_USERREQ__DESTROY_MEMPOOL = 0x1304,
3615 VG_USERREQ__MEMPOOL_ALLOC = 0x1305, 3619 VG_USERREQ__MEMPOOL_ALLOC = 0x1305,
3616 VG_USERREQ__MEMPOOL_FREE = 0x1306, 3620 VG_USERREQ__MEMPOOL_FREE = 0x1306,
3617 VG_USERREQ__MEMPOOL_TRIM = 0x1307, 3621 VG_USERREQ__MEMPOOL_TRIM = 0x1307,
3618 VG_USERREQ__MOVE_MEMPOOL = 0x1308, 3622 VG_USERREQ__MOVE_MEMPOOL = 0x1308,
3619 VG_USERREQ__MEMPOOL_CHANGE = 0x1309, 3623 VG_USERREQ__MEMPOOL_CHANGE = 0x1309,
3620 VG_USERREQ__MEMPOOL_EXISTS = 0x130a, 3624 VG_USERREQ__MEMPOOL_EXISTS = 0x130a,
3621 3625
3622 /* Allow printfs to valgrind log. */ 3626 /* Allow printfs to valgrind log. */
3627 /* The first two pass the va_list argument by value, which
3628 assumes it is the same size as or smaller than a UWord,
 3629 which generally isn't the case. Hence they are deprecated.
3630 The second two pass the vargs by reference and so are
3631 immune to this problem. */
3632 /* both :: char* fmt, va_list vargs (DEPRECATED) */
3623 VG_USERREQ__PRINTF = 0x1401, 3633 VG_USERREQ__PRINTF = 0x1401,
3624 VG_USERREQ__PRINTF_BACKTRACE = 0x1402, 3634 VG_USERREQ__PRINTF_BACKTRACE = 0x1402,
3635 /* both :: char* fmt, va_list* vargs */
3636 VG_USERREQ__PRINTF_VALIST_BY_REF = 0x1403,
3637 VG_USERREQ__PRINTF_BACKTRACE_VALIST_BY_REF = 0x1404,
3625 3638
3626 /* Stack support. */ 3639 /* Stack support. */
3627 VG_USERREQ__STACK_REGISTER = 0x1501, 3640 VG_USERREQ__STACK_REGISTER = 0x1501,
3628 VG_USERREQ__STACK_DEREGISTER = 0x1502, 3641 VG_USERREQ__STACK_DEREGISTER = 0x1502,
3629 VG_USERREQ__STACK_CHANGE = 0x1503 3642 VG_USERREQ__STACK_CHANGE = 0x1503,
3643
3644 /* Wine support */
3645 VG_USERREQ__LOAD_PDB_DEBUGINFO = 0x1601,
3646
3647 /* Querying of debug info. */
3648 VG_USERREQ__MAP_IP_TO_SRCLOC = 0x1701
3630 } Vg_ClientRequest; 3649 } Vg_ClientRequest;
3631 3650
3632 #if !defined(__GNUC__) 3651 #if !defined(__GNUC__)
3633 # define __extension__ /* */ 3652 # define __extension__ /* */
3634 #endif 3653 #endif
3635 3654
3655
3636 /* Returns the number of Valgrinds this code is running under. That 3656 /* Returns the number of Valgrinds this code is running under. That
3637 is, 0 if running natively, 1 if running under Valgrind, 2 if 3657 is, 0 if running natively, 1 if running under Valgrind, 2 if
3638 running under Valgrind which is running under another Valgrind, 3658 running under Valgrind which is running under another Valgrind,
3639 etc. */ 3659 etc. */
3640 #define RUNNING_ON_VALGRIND __extension__ \ 3660 #define RUNNING_ON_VALGRIND \
3641 ({unsigned int _qzz_res; \ 3661 (unsigned)VALGRIND_DO_CLIENT_REQUEST_EXPR(0 /* if not */, \
3642 VALGRIND_DO_CLIENT_REQUEST(_qzz_res, 0 /* if not */, \ 3662 VG_USERREQ__RUNNING_ON_VALGRIND, \
3643 VG_USERREQ__RUNNING_ON_VALGRIND, \ 3663 0, 0, 0, 0, 0) \
3644 0, 0, 0, 0, 0); \
3645 _qzz_res; \
3646 })
3647 3664
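A minimal usage sketch of the macro above (the helper function and variable names are illustrative, not from this file): read the nesting count once and branch on it.

    #include <stdio.h>
    #include "valgrind.h"

    static void maybe_enable_slow_checks(void)
    {
       /* 0 = native, 1 = under Valgrind, 2 = Valgrind on Valgrind, ... */
       unsigned depth = RUNNING_ON_VALGRIND;
       if (depth > 0)
          printf("running under %u level(s) of Valgrind\n", depth);
    }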
3648 3665
3649 /* Discard translation of code in the range [_qzz_addr .. _qzz_addr + 3666 /* Discard translation of code in the range [_qzz_addr .. _qzz_addr +
3650 _qzz_len - 1]. Useful if you are debugging a JITter or some such, 3667 _qzz_len - 1]. Useful if you are debugging a JITter or some such,
3651 since it provides a way to make sure valgrind will retranslate the 3668 since it provides a way to make sure valgrind will retranslate the
3652 invalidated area. Returns no value. */ 3669 invalidated area. Returns no value. */
3653 #define VALGRIND_DISCARD_TRANSLATIONS(_qzz_addr,_qzz_len) \ 3670 #define VALGRIND_DISCARD_TRANSLATIONS(_qzz_addr,_qzz_len) \
3654 {unsigned int _qzz_res; \ 3671 (unsigned)VALGRIND_DO_CLIENT_REQUEST_EXPR(0, \
3655 VALGRIND_DO_CLIENT_REQUEST(_qzz_res, 0, \
3656 VG_USERREQ__DISCARD_TRANSLATIONS, \ 3672 VG_USERREQ__DISCARD_TRANSLATIONS, \
3657 _qzz_addr, _qzz_len, 0, 0, 0); \ 3673 _qzz_addr, _qzz_len, 0, 0, 0)
3658 }
3659 3674
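A hedged sketch of the JIT use case mentioned above; the code-buffer helper below is hypothetical, but the request itself is the one defined here.

    #include <string.h>
    #include "valgrind.h"

    /* Hypothetical JIT helper: overwrite previously emitted machine code. */
    static void patch_code(unsigned char* code_buf, size_t len,
                           const unsigned char* new_bytes)
    {
       memcpy(code_buf, new_bytes, len);
       /* Valgrind may still cache translations of the old bytes; drop them
          so the patched range is retranslated on next execution. */
       VALGRIND_DISCARD_TRANSLATIONS(code_buf, len);
    }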
3660 3675
3661 /* These requests are for getting Valgrind itself to print something. 3676 /* These requests are for getting Valgrind itself to print something.
3662 Possibly with a backtrace. This is a really ugly hack. */ 3677 Possibly with a backtrace. This is a really ugly hack. The return value
3678 is the number of characters printed, excluding the "**<pid>** " part at the
3679 start and the backtrace (if present). */
3663 3680
3664 #if defined(NVALGRIND) 3681 #if defined(__GNUC__) || defined(__INTEL_COMPILER)
3665
3666 # define VALGRIND_PRINTF(...)
3667 # define VALGRIND_PRINTF_BACKTRACE(...)
3668
3669 #else /* NVALGRIND */
3670
3671 /* Modern GCC will optimize the static routine out if unused, 3682 /* Modern GCC will optimize the static routine out if unused,
 3672 and the unused attribute will suppress warnings about it. */ 3683 and the unused attribute will suppress warnings about it. */
3673 static int VALGRIND_PRINTF(const char *format, ...) 3684 static int VALGRIND_PRINTF(const char *format, ...)
3674 __attribute__((format(__printf__, 1, 2), __unused__)); 3685 __attribute__((format(__printf__, 1, 2), __unused__));
3686 #endif
3675 static int 3687 static int
3688 #if defined(_MSC_VER)
3689 __inline
3690 #endif
3676 VALGRIND_PRINTF(const char *format, ...) 3691 VALGRIND_PRINTF(const char *format, ...)
3677 { 3692 {
3693 #if defined(NVALGRIND)
3694 return 0;
3695 #else /* NVALGRIND */
3696 #if defined(_MSC_VER)
3697 uintptr_t _qzz_res;
3698 #else
3678 unsigned long _qzz_res; 3699 unsigned long _qzz_res;
3700 #endif
3679 va_list vargs; 3701 va_list vargs;
3680 va_start(vargs, format); 3702 va_start(vargs, format);
3681 VALGRIND_DO_CLIENT_REQUEST(_qzz_res, 0, VG_USERREQ__PRINTF, 3703 #if defined(_MSC_VER)
3682 (unsigned long)format, (unsigned long)vargs, 3704 _qzz_res = VALGRIND_DO_CLIENT_REQUEST_EXPR(0,
3705 VG_USERREQ__PRINTF_VALIST_BY_REF,
3706 (uintptr_t)format,
3707 (uintptr_t)&vargs,
3683 0, 0, 0); 3708 0, 0, 0);
3709 #else
3710 _qzz_res = VALGRIND_DO_CLIENT_REQUEST_EXPR(0,
3711 VG_USERREQ__PRINTF_VALIST_BY_REF,
3712 (unsigned long)format,
3713 (unsigned long)&vargs,
3714 0, 0, 0);
3715 #endif
3684 va_end(vargs); 3716 va_end(vargs);
3685 return (int)_qzz_res; 3717 return (int)_qzz_res;
3718 #endif /* NVALGRIND */
3686 } 3719 }
3687 3720
3721 #if defined(__GNUC__) || defined(__INTEL_COMPILER)
3688 static int VALGRIND_PRINTF_BACKTRACE(const char *format, ...) 3722 static int VALGRIND_PRINTF_BACKTRACE(const char *format, ...)
3689 __attribute__((format(__printf__, 1, 2), __unused__)); 3723 __attribute__((format(__printf__, 1, 2), __unused__));
3724 #endif
3690 static int 3725 static int
3726 #if defined(_MSC_VER)
3727 __inline
3728 #endif
3691 VALGRIND_PRINTF_BACKTRACE(const char *format, ...) 3729 VALGRIND_PRINTF_BACKTRACE(const char *format, ...)
3692 { 3730 {
3731 #if defined(NVALGRIND)
3732 return 0;
3733 #else /* NVALGRIND */
3734 #if defined(_MSC_VER)
3735 uintptr_t _qzz_res;
3736 #else
3693 unsigned long _qzz_res; 3737 unsigned long _qzz_res;
3738 #endif
3694 va_list vargs; 3739 va_list vargs;
3695 va_start(vargs, format); 3740 va_start(vargs, format);
3696 VALGRIND_DO_CLIENT_REQUEST(_qzz_res, 0, VG_USERREQ__PRINTF_BACKTRACE, 3741 #if defined(_MSC_VER)
3697 (unsigned long)format, (unsigned long)vargs, 3742 _qzz_res = VALGRIND_DO_CLIENT_REQUEST_EXPR(0,
3743 VG_USERREQ__PRINTF_BACKTRACE_VALIST_BY_REF,
3744 (uintptr_t)format,
3745 (uintptr_t)&vargs,
3698 0, 0, 0); 3746 0, 0, 0);
3747 #else
3748 _qzz_res = VALGRIND_DO_CLIENT_REQUEST_EXPR(0,
3749 VG_USERREQ__PRINTF_BACKTRACE_VALIST_BY_REF,
3750 (unsigned long)format,
3751 (unsigned long)&vargs,
3752 0, 0, 0);
3753 #endif
3699 va_end(vargs); 3754 va_end(vargs);
3700 return (int)_qzz_res; 3755 return (int)_qzz_res;
3756 #endif /* NVALGRIND */
3701 } 3757 }
3702 3758
3703 #endif /* NVALGRIND */
3704
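An illustrative call site for the two printf-style helpers defined above (the surrounding function is hypothetical):

    #include "valgrind.h"

    static void report_progress(int iteration)
    {
       /* Goes to Valgrind's log when running under Valgrind; a no-op natively. */
       VALGRIND_PRINTF("iteration %d complete\n", iteration);
       if (iteration < 0)
          VALGRIND_PRINTF_BACKTRACE("unexpected iteration %d\n", iteration);
    }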
3705 3759
3706 /* These requests allow control to move from the simulated CPU to the 3760 /* These requests allow control to move from the simulated CPU to the
 3707 real CPU, calling an arbitrary function. 3761 real CPU, calling an arbitrary function.
3708 3762
3709 Note that the current ThreadId is inserted as the first argument. 3763 Note that the current ThreadId is inserted as the first argument.
3710 So this call: 3764 So this call:
3711 3765
3712 VALGRIND_NON_SIMD_CALL2(f, arg1, arg2) 3766 VALGRIND_NON_SIMD_CALL2(f, arg1, arg2)
3713 3767
3714 requires f to have this signature: 3768 requires f to have this signature:
3715 3769
3716 Word f(Word tid, Word arg1, Word arg2) 3770 Word f(Word tid, Word arg1, Word arg2)
3717 3771
3718 where "Word" is a word-sized type. 3772 where "Word" is a word-sized type.
3719 3773
3720 Note that these client requests are not entirely reliable. For example, 3774 Note that these client requests are not entirely reliable. For example,
3721 if you call a function with them that subsequently calls printf(), 3775 if you call a function with them that subsequently calls printf(),
3722 there's a high chance Valgrind will crash. Generally, your prospects of 3776 there's a high chance Valgrind will crash. Generally, your prospects of
 3723 these working are better if the called function does not refer to 3777 these working are better if the called function does not refer to
3724 any global variables, and does not refer to any libc or other functions 3778 any global variables, and does not refer to any libc or other functions
3725 (printf et al). Any kind of entanglement with libc or dynamic linking is 3779 (printf et al). Any kind of entanglement with libc or dynamic linking is
3726 likely to have a bad outcome, for tricky reasons which we've grappled 3780 likely to have a bad outcome, for tricky reasons which we've grappled
3727 with a lot in the past. 3781 with a lot in the past.
3728 */ 3782 */
3729 #define VALGRIND_NON_SIMD_CALL0(_qyy_fn) \ 3783 #define VALGRIND_NON_SIMD_CALL0(_qyy_fn) \
3730 __extension__ \ 3784 VALGRIND_DO_CLIENT_REQUEST_EXPR(0 /* default return */, \
3731 ({unsigned long _qyy_res; \ 3785 VG_USERREQ__CLIENT_CALL0, \
3732 VALGRIND_DO_CLIENT_REQUEST(_qyy_res, 0 /* default return */, \ 3786 _qyy_fn, \
3733 VG_USERREQ__CLIENT_CALL0, \ 3787 0, 0, 0, 0)
3734 _qyy_fn, \ 3788
3735 0, 0, 0, 0); \ 3789 #define VALGRIND_NON_SIMD_CALL1(_qyy_fn, _qyy_arg1) \
3736 _qyy_res; \ 3790 VALGRIND_DO_CLIENT_REQUEST_EXPR(0 /* default return */, \
3737 }) 3791 VG_USERREQ__CLIENT_CALL1, \
3738 3792 _qyy_fn, \
3739 #define VALGRIND_NON_SIMD_CALL1(_qyy_fn, _qyy_arg1) \ 3793 _qyy_arg1, 0, 0, 0)
3740 __extension__ \ 3794
3741 ({unsigned long _qyy_res; \ 3795 #define VALGRIND_NON_SIMD_CALL2(_qyy_fn, _qyy_arg1, _qyy_arg2) \
3742 VALGRIND_DO_CLIENT_REQUEST(_qyy_res, 0 /* default return */, \ 3796 VALGRIND_DO_CLIENT_REQUEST_EXPR(0 /* default return */, \
3743 VG_USERREQ__CLIENT_CALL1, \ 3797 VG_USERREQ__CLIENT_CALL2, \
3744 _qyy_fn, \ 3798 _qyy_fn, \
3745 _qyy_arg1, 0, 0, 0); \ 3799 _qyy_arg1, _qyy_arg2, 0, 0)
3746 _qyy_res; \
3747 })
3748
3749 #define VALGRIND_NON_SIMD_CALL2(_qyy_fn, _qyy_arg1, _qyy_arg2) \
3750 __extension__ \
3751 ({unsigned long _qyy_res; \
3752 VALGRIND_DO_CLIENT_REQUEST(_qyy_res, 0 /* default return */, \
3753 VG_USERREQ__CLIENT_CALL2, \
3754 _qyy_fn, \
3755 _qyy_arg1, _qyy_arg2, 0, 0); \
3756 _qyy_res; \
3757 })
3758 3800
3759 #define VALGRIND_NON_SIMD_CALL3(_qyy_fn, _qyy_arg1, _qyy_arg2, _qyy_arg3) \ 3801 #define VALGRIND_NON_SIMD_CALL3(_qyy_fn, _qyy_arg1, _qyy_arg2, _qyy_arg3) \
3760 __extension__ \ 3802 VALGRIND_DO_CLIENT_REQUEST_EXPR(0 /* default return */, \
3761 ({unsigned long _qyy_res; \ 3803 VG_USERREQ__CLIENT_CALL3, \
3762 VALGRIND_DO_CLIENT_REQUEST(_qyy_res, 0 /* default return */, \ 3804 _qyy_fn, \
3763 VG_USERREQ__CLIENT_CALL3, \ 3805 _qyy_arg1, _qyy_arg2, \
3764 _qyy_fn, \ 3806 _qyy_arg3, 0)
3765 _qyy_arg1, _qyy_arg2, \
3766 _qyy_arg3, 0); \
3767 _qyy_res; \
3768 })
3769 3807
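A sketch of the calling convention described above; "Word" is approximated with long here, and the function names are illustrative.

    #include "valgrind.h"

    /* Matches the documented shape: the ThreadId is inserted as argument 1. */
    static long add_on_real_cpu(long tid, long a, long b)
    {
       (void)tid;      /* current ThreadId, supplied by Valgrind */
       return a + b;   /* keep the body free of libc calls and globals */
    }

    static long call_it(void)
    {
       /* Under Valgrind this runs add_on_real_cpu on the real CPU;
          running natively, the request is a no-op and 0 is returned. */
       return (long)VALGRIND_NON_SIMD_CALL2(add_on_real_cpu, 5, 7);
    }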
3770 3808
3771 /* Counts the number of errors that have been recorded by a tool. Nb: 3809 /* Counts the number of errors that have been recorded by a tool. Nb:
3772 the tool must record the errors with VG_(maybe_record_error)() or 3810 the tool must record the errors with VG_(maybe_record_error)() or
3773 VG_(unique_error)() for them to be counted. */ 3811 VG_(unique_error)() for them to be counted. */
3774 #define VALGRIND_COUNT_ERRORS \ 3812 #define VALGRIND_COUNT_ERRORS \
3775 __extension__ \ 3813 (unsigned)VALGRIND_DO_CLIENT_REQUEST_EXPR( \
3776 ({unsigned int _qyy_res; \ 3814 0 /* default return */, \
3777 VALGRIND_DO_CLIENT_REQUEST(_qyy_res, 0 /* default return */, \
3778 VG_USERREQ__COUNT_ERRORS, \ 3815 VG_USERREQ__COUNT_ERRORS, \
3779 0, 0, 0, 0, 0); \ 3816 0, 0, 0, 0, 0)
3780 _qyy_res; \ 3817
3781 }) 3818 /* Several Valgrind tools (Memcheck, Massif, Helgrind, DRD) rely on knowing
3782 3819 when heap blocks are allocated in order to give accurate results. This
3783 /* Mark a block of memory as having been allocated by a malloc()-like 3820 happens automatically for the standard allocator functions such as
3784 function. `addr' is the start of the usable block (ie. after any 3821 malloc(), calloc(), realloc(), memalign(), new, new[], free(), delete,
3785 redzone) `rzB' is redzone size if the allocator can apply redzones; 3822 delete[], etc.
3786 use '0' if not. Adding redzones makes it more likely Valgrind will spot 3823
3787 block overruns. `is_zeroed' indicates if the memory is zeroed, as it is 3824 But if your program uses a custom allocator, this doesn't automatically
3788 for calloc(). Put it immediately after the point where a block is 3825 happen, and Valgrind will not do as well. For example, if you allocate
 3789 allocated. 3826 superblocks with mmap() and then allocate chunks of the superblocks, all
3827 Valgrind's observations will be at the mmap() level and it won't know that
3828 the chunks should be considered separate entities. In Memcheck's case,
3829 that means you probably won't get heap block overrun detection (because
3830 there won't be redzones marked as unaddressable) and you definitely won't
3831 get any leak detection.
3832
3833 The following client requests allow a custom allocator to be annotated so
3834 that it can be handled accurately by Valgrind.
3835
3836 VALGRIND_MALLOCLIKE_BLOCK marks a region of memory as having been allocated
3837 by a malloc()-like function. For Memcheck (an illustrative case), this
3838 does two things:
3839
3840 - It records that the block has been allocated. This means any addresses
3841 within the block mentioned in error messages will be
3842 identified as belonging to the block. It also means that if the block
3843 isn't freed it will be detected by the leak checker.
3844
3845 - It marks the block as being addressable and undefined (if 'is_zeroed' is
3846 not set), or addressable and defined (if 'is_zeroed' is set). This
3847 controls how accesses to the block by the program are handled.
3790 3848
3791 If you're using Memcheck: If you're allocating memory via superblocks, 3849 'addr' is the start of the usable block (ie. after any
3792 and then handing out small chunks of each superblock, if you don't have 3850 redzone), 'sizeB' is its size. 'rzB' is the redzone size if the allocator
3793 redzones on your small blocks, it's worth marking the superblock with 3851 can apply redzones -- these are blocks of padding at the start and end of
3794 VALGRIND_MAKE_MEM_NOACCESS when it's created, so that block overruns are 3852 each block. Adding redzones is recommended as it makes it much more likely
3795 detected. But if you can put redzones on, it's probably better to not do 3853 Valgrind will spot block overruns. `is_zeroed' indicates if the memory is
3796 this, so that messages for small overruns are described in terms of the 3854 zeroed (or filled with another predictable value), as is the case for
3797 small block rather than the superblock (but if you have a big overrun 3855 calloc().
3798 that skips over a redzone, you could miss an error this way). See 3856
3799 memcheck/tests/custom_alloc.c for an example. 3857 VALGRIND_MALLOCLIKE_BLOCK should be put immediately after the point where a
3800 3858 heap block -- that will be used by the client program -- is allocated.
3801 WARNING: if your allocator uses malloc() or 'new' to allocate 3859 It's best to put it at the outermost level of the allocator if possible;
3802 superblocks, rather than mmap() or brk(), this will not work properly -- 3860 for example, if you have a function my_alloc() which calls
3803 you'll likely get assertion failures during leak detection. This is 3861 internal_alloc(), and the client request is put inside internal_alloc(),
3804 because Valgrind doesn't like seeing overlapping heap blocks. Sorry. 3862 stack traces relating to the heap block will contain entries for both
3805 3863 my_alloc() and internal_alloc(), which is probably not what you want.
3806 Nb: block must be freed via a free()-like function specified 3864
3807 with VALGRIND_FREELIKE_BLOCK or mismatch errors will occur. */ 3865 For Memcheck users: if you use VALGRIND_MALLOCLIKE_BLOCK to carve out
3866 custom blocks from within a heap block, B, that has been allocated with
3867 malloc/calloc/new/etc, then block B will be *ignored* during leak-checking
3868 -- the custom blocks will take precedence.
3869
3870 VALGRIND_FREELIKE_BLOCK is the partner to VALGRIND_MALLOCLIKE_BLOCK. For
3871 Memcheck, it does two things:
3872
3873 - It records that the block has been deallocated. This assumes that the
3874 block was annotated as having been allocated via
3875 VALGRIND_MALLOCLIKE_BLOCK. Otherwise, an error will be issued.
3876
3877 - It marks the block as being unaddressable.
3878
3879 VALGRIND_FREELIKE_BLOCK should be put immediately after the point where a
3880 heap block is deallocated.
3881
3882 VALGRIND_RESIZEINPLACE_BLOCK informs a tool about reallocation. For
3883 Memcheck, it does four things:
3884
3885 - It records that the size of a block has been changed. This assumes that
3886 the block was annotated as having been allocated via
3887 VALGRIND_MALLOCLIKE_BLOCK. Otherwise, an error will be issued.
3888
 3889 - If the block shrank, it marks the freed memory as being unaddressable.
3890
3891 - If the block grew, it marks the new area as undefined and defines a red
3892 zone past the end of the new block.
3893
3894 - The V-bits of the overlap between the old and the new block are preserved.
3895
3896 VALGRIND_RESIZEINPLACE_BLOCK should be put after allocation of the new block
3897 and before deallocation of the old block.
3898
3899 In many cases, these three client requests will not be enough to get your
3900 allocator working well with Memcheck. More specifically, if your allocator
3901 writes to freed blocks in any way then a VALGRIND_MAKE_MEM_UNDEFINED call
3902 will be necessary to mark the memory as addressable just before the zeroing
3903 occurs, otherwise you'll get a lot of invalid write errors. For example,
3904 you'll need to do this if your allocator recycles freed blocks, but it
3905 zeroes them before handing them back out (via VALGRIND_MALLOCLIKE_BLOCK).
3906 Alternatively, if your allocator reuses freed blocks for allocator-internal
3907 data structures, VALGRIND_MAKE_MEM_UNDEFINED calls will also be necessary.
3908
3909 Really, what's happening is a blurring of the lines between the client
3910 program and the allocator... after VALGRIND_FREELIKE_BLOCK is called, the
3911 memory should be considered unaddressable to the client program, but the
3912 allocator knows more than the rest of the client program and so may be able
3913 to safely access it. Extra client requests are necessary for Valgrind to
3914 understand the distinction between the allocator and the rest of the
3915 program.
3916
3917 Ignored if addr == 0.
3918 */
3808 #define VALGRIND_MALLOCLIKE_BLOCK(addr, sizeB, rzB, is_zeroed) \ 3919 #define VALGRIND_MALLOCLIKE_BLOCK(addr, sizeB, rzB, is_zeroed) \
3809 {unsigned int _qzz_res; \ 3920 VALGRIND_DO_CLIENT_REQUEST_EXPR(0, \
3810 VALGRIND_DO_CLIENT_REQUEST(_qzz_res, 0, \
3811 VG_USERREQ__MALLOCLIKE_BLOCK, \ 3921 VG_USERREQ__MALLOCLIKE_BLOCK, \
3812 addr, sizeB, rzB, is_zeroed, 0); \ 3922 addr, sizeB, rzB, is_zeroed, 0)
3813 } 3923
3814 3924 /* See the comment for VALGRIND_MALLOCLIKE_BLOCK for details.
3815 /* Mark a block of memory as having been freed by a free()-like function. 3925 Ignored if addr == 0.
3816 `rzB' is redzone size; it must match that given to 3926 */
3817 VALGRIND_MALLOCLIKE_BLOCK. Memory not freed will be detected by the leak 3927 #define VALGRIND_RESIZEINPLACE_BLOCK(addr, oldSizeB, newSizeB, rzB) \
3818 checker. Put it immediately after the point where the block is freed. */ 3928 VALGRIND_DO_CLIENT_REQUEST_EXPR(0, \
3929 VG_USERREQ__RESIZEINPLACE_BLOCK, \
3930 addr, oldSizeB, newSizeB, rzB, 0)
3931
3932 /* See the comment for VALGRIND_MALLOCLIKE_BLOCK for details.
3933 Ignored if addr == 0.
3934 */
3819 #define VALGRIND_FREELIKE_BLOCK(addr, rzB) \ 3935 #define VALGRIND_FREELIKE_BLOCK(addr, rzB) \
3820 {unsigned int _qzz_res; \ 3936 VALGRIND_DO_CLIENT_REQUEST_EXPR(0, \
3821 VALGRIND_DO_CLIENT_REQUEST(_qzz_res, 0, \
3822 VG_USERREQ__FREELIKE_BLOCK, \ 3937 VG_USERREQ__FREELIKE_BLOCK, \
3823 addr, rzB, 0, 0, 0); \ 3938 addr, rzB, 0, 0, 0)
3824 }
3825 3939
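A compact sketch of the annotation pattern described above, placed at the outermost level of a custom allocator. The superblock helpers and the 16-byte redzone are hypothetical; a resize-in-place path would additionally call VALGRIND_RESIZEINPLACE_BLOCK(addr, oldSizeB, newSizeB, rzB).

    #include <stddef.h>
    #include "valgrind.h"

    #define MY_RZ 16   /* redzone this allocator reserves around each block */

    /* Hypothetical backing store, e.g. carved out of an mmap()'d superblock. */
    extern char* grab_from_superblock(size_t bytes);
    extern void  return_to_superblock(char* raw);

    static void* my_alloc(size_t n)
    {
       char* raw = grab_from_superblock(n + 2*MY_RZ);
       if (!raw) return NULL;
       /* The usable region starts after the leading redzone. */
       VALGRIND_MALLOCLIKE_BLOCK(raw + MY_RZ, n, MY_RZ, /*is_zeroed*/0);
       return raw + MY_RZ;
    }

    static void my_free(void* p)
    {
       if (!p) return;
       VALGRIND_FREELIKE_BLOCK(p, MY_RZ);
       return_to_superblock((char*)p - MY_RZ);
    }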
3826 /* Create a memory pool. */ 3940 /* Create a memory pool. */
3827 #define VALGRIND_CREATE_MEMPOOL(pool, rzB, is_zeroed) \ 3941 #define VALGRIND_CREATE_MEMPOOL(pool, rzB, is_zeroed) \
3828 {unsigned int _qzz_res; \ 3942 VALGRIND_DO_CLIENT_REQUEST_EXPR(0, \
3829 VALGRIND_DO_CLIENT_REQUEST(_qzz_res, 0, \
3830 VG_USERREQ__CREATE_MEMPOOL, \ 3943 VG_USERREQ__CREATE_MEMPOOL, \
3831 pool, rzB, is_zeroed, 0, 0); \ 3944 pool, rzB, is_zeroed, 0, 0)
3832 }
3833 3945
3834 /* Destroy a memory pool. */ 3946 /* Destroy a memory pool. */
3835 #define VALGRIND_DESTROY_MEMPOOL(pool) \ 3947 #define VALGRIND_DESTROY_MEMPOOL(pool) \
3836 {unsigned int _qzz_res; \ 3948 VALGRIND_DO_CLIENT_REQUEST_EXPR(0, \
3837 VALGRIND_DO_CLIENT_REQUEST(_qzz_res, 0, \
3838 VG_USERREQ__DESTROY_MEMPOOL, \ 3949 VG_USERREQ__DESTROY_MEMPOOL, \
3839 pool, 0, 0, 0, 0); \ 3950 pool, 0, 0, 0, 0)
3840 }
3841 3951
3842 /* Associate a piece of memory with a memory pool. */ 3952 /* Associate a piece of memory with a memory pool. */
3843 #define VALGRIND_MEMPOOL_ALLOC(pool, addr, size) \ 3953 #define VALGRIND_MEMPOOL_ALLOC(pool, addr, size) \
3844 {unsigned int _qzz_res; \ 3954 VALGRIND_DO_CLIENT_REQUEST_EXPR(0, \
3845 VALGRIND_DO_CLIENT_REQUEST(_qzz_res, 0, \
3846 VG_USERREQ__MEMPOOL_ALLOC, \ 3955 VG_USERREQ__MEMPOOL_ALLOC, \
3847 pool, addr, size, 0, 0); \ 3956 pool, addr, size, 0, 0)
3848 }
3849 3957
3850 /* Disassociate a piece of memory from a memory pool. */ 3958 /* Disassociate a piece of memory from a memory pool. */
3851 #define VALGRIND_MEMPOOL_FREE(pool, addr) \ 3959 #define VALGRIND_MEMPOOL_FREE(pool, addr) \
3852 {unsigned int _qzz_res; \ 3960 VALGRIND_DO_CLIENT_REQUEST_EXPR(0, \
3853 VALGRIND_DO_CLIENT_REQUEST(_qzz_res, 0, \
3854 VG_USERREQ__MEMPOOL_FREE, \ 3961 VG_USERREQ__MEMPOOL_FREE, \
3855 pool, addr, 0, 0, 0); \ 3962 pool, addr, 0, 0, 0)
3856 }
3857 3963
3858 /* Disassociate any pieces outside a particular range. */ 3964 /* Disassociate any pieces outside a particular range. */
3859 #define VALGRIND_MEMPOOL_TRIM(pool, addr, size) \ 3965 #define VALGRIND_MEMPOOL_TRIM(pool, addr, size) \
3860 {unsigned int _qzz_res; \ 3966 VALGRIND_DO_CLIENT_REQUEST_EXPR(0, \
3861 VALGRIND_DO_CLIENT_REQUEST(_qzz_res, 0, \
3862 VG_USERREQ__MEMPOOL_TRIM, \ 3967 VG_USERREQ__MEMPOOL_TRIM, \
3863 pool, addr, size, 0, 0); \ 3968 pool, addr, size, 0, 0)
3864 }
3865 3969
3866 /* Resize and/or move a piece associated with a memory pool. */ 3970 /* Resize and/or move a piece associated with a memory pool. */
3867 #define VALGRIND_MOVE_MEMPOOL(poolA, poolB) \ 3971 #define VALGRIND_MOVE_MEMPOOL(poolA, poolB) \
3868 {unsigned int _qzz_res; \ 3972 VALGRIND_DO_CLIENT_REQUEST_EXPR(0, \
3869 VALGRIND_DO_CLIENT_REQUEST(_qzz_res, 0, \
3870 VG_USERREQ__MOVE_MEMPOOL, \ 3973 VG_USERREQ__MOVE_MEMPOOL, \
3871 poolA, poolB, 0, 0, 0); \ 3974 poolA, poolB, 0, 0, 0)
3872 }
3873 3975
3874 /* Resize and/or move a piece associated with a memory pool. */ 3976 /* Resize and/or move a piece associated with a memory pool. */
3875 #define VALGRIND_MEMPOOL_CHANGE(pool, addrA, addrB, size) \ 3977 #define VALGRIND_MEMPOOL_CHANGE(pool, addrA, addrB, size) \
3876 {unsigned int _qzz_res; \ 3978 VALGRIND_DO_CLIENT_REQUEST_EXPR(0, \
3877 VALGRIND_DO_CLIENT_REQUEST(_qzz_res, 0, \
3878 VG_USERREQ__MEMPOOL_CHANGE, \ 3979 VG_USERREQ__MEMPOOL_CHANGE, \
3879 pool, addrA, addrB, size, 0); \ 3980 pool, addrA, addrB, size, 0)
3880 }
3881 3981
3882 /* Return 1 if a mempool exists, else 0. */ 3982 /* Return 1 if a mempool exists, else 0. */
3883 #define VALGRIND_MEMPOOL_EXISTS(pool) \ 3983 #define VALGRIND_MEMPOOL_EXISTS(pool) \
3884 ({unsigned int _qzz_res; \ 3984 (unsigned)VALGRIND_DO_CLIENT_REQUEST_EXPR(0, \
3885 VALGRIND_DO_CLIENT_REQUEST(_qzz_res, 0, \
3886 VG_USERREQ__MEMPOOL_EXISTS, \ 3985 VG_USERREQ__MEMPOOL_EXISTS, \
3887 pool, 0, 0, 0, 0); \ 3986 pool, 0, 0, 0, 0)
3888 _qzz_res; \
3889 })
3890 3987
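An illustrative pool annotation using the requests above; the arena layout is hypothetical, and the arena's own address doubles as the pool handle.

    #include <stddef.h>
    #include "valgrind.h"

    typedef struct { char* base; size_t used, size; } arena_t;  /* hypothetical */

    static void arena_init(arena_t* a, char* backing, size_t size)
    {
       a->base = backing; a->used = 0; a->size = size;
       VALGRIND_CREATE_MEMPOOL(a, /*rzB*/0, /*is_zeroed*/0);
    }

    static void* arena_alloc(arena_t* a, size_t n)
    {
       if (a->used + n > a->size) return NULL;
       void* p = a->base + a->used;
       a->used += n;
       VALGRIND_MEMPOOL_ALLOC(a, p, n);   /* associate the chunk with the pool */
       return p;
    }

    static void arena_destroy(arena_t* a)
    {
       VALGRIND_DESTROY_MEMPOOL(a);       /* also forgets every chunk in it */
       a->base = NULL; a->used = a->size = 0;
    }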
3891 /* Mark a piece of memory as being a stack. Returns a stack id. */ 3988 /* Mark a piece of memory as being a stack. Returns a stack id. */
3892 #define VALGRIND_STACK_REGISTER(start, end) \ 3989 #define VALGRIND_STACK_REGISTER(start, end) \
3893 ({unsigned int _qzz_res; \ 3990 (unsigned)VALGRIND_DO_CLIENT_REQUEST_EXPR(0, \
3894 VALGRIND_DO_CLIENT_REQUEST(_qzz_res, 0, \
3895 VG_USERREQ__STACK_REGISTER, \ 3991 VG_USERREQ__STACK_REGISTER, \
3896 start, end, 0, 0, 0); \ 3992 start, end, 0, 0, 0)
3897 _qzz_res; \
3898 })
3899 3993
3900 /* Unmark the piece of memory associated with a stack id as being a 3994 /* Unmark the piece of memory associated with a stack id as being a
3901 stack. */ 3995 stack. */
3902 #define VALGRIND_STACK_DEREGISTER(id) \ 3996 #define VALGRIND_STACK_DEREGISTER(id) \
3903 {unsigned int _qzz_res; \ 3997 (unsigned)VALGRIND_DO_CLIENT_REQUEST_EXPR(0, \
3904 VALGRIND_DO_CLIENT_REQUEST(_qzz_res, 0, \
3905 VG_USERREQ__STACK_DEREGISTER, \ 3998 VG_USERREQ__STACK_DEREGISTER, \
3906 id, 0, 0, 0, 0); \ 3999 id, 0, 0, 0, 0)
3907 }
3908 4000
3909 /* Change the start and end address of the stack id. */ 4001 /* Change the start and end address of the stack id. */
3910 #define VALGRIND_STACK_CHANGE(id, start, end) \ 4002 #define VALGRIND_STACK_CHANGE(id, start, end) \
3911 {unsigned int _qzz_res; \ 4003 VALGRIND_DO_CLIENT_REQUEST_EXPR(0, \
3912 VALGRIND_DO_CLIENT_REQUEST(_qzz_res, 0, \
3913 VG_USERREQ__STACK_CHANGE, \ 4004 VG_USERREQ__STACK_CHANGE, \
3914 id, start, end, 0, 0); \ 4005 id, start, end, 0, 0)
3915 } 4006
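A sketch of registering an alternate stack (for example one handed to a coroutine); the size and helper names are illustrative.

    #include <stdlib.h>
    #include "valgrind.h"

    #define ALT_STACK_SIZE (64 * 1024)

    static unsigned register_alt_stack(char** out_base)
    {
       char* base = malloc(ALT_STACK_SIZE);
       *out_base = base;
       if (!base) return 0;   /* caller must check *out_base */
       /* Tell Valgrind this range will be used as a stack; keep the id. */
       return VALGRIND_STACK_REGISTER(base, base + ALT_STACK_SIZE);
    }

    static void release_alt_stack(unsigned id, char* base)
    {
       VALGRIND_STACK_DEREGISTER(id);
       free(base);
    }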
3916 4007 /* Load PDB debug info for Wine PE image_map. */
3917 4008 #define VALGRIND_LOAD_PDB_DEBUGINFO(fd, ptr, total_size, delta) \
4009 VALGRIND_DO_CLIENT_REQUEST_EXPR(0, \
4010 VG_USERREQ__LOAD_PDB_DEBUGINFO, \
4011 fd, ptr, total_size, delta, 0)
4012
4013 /* Map a code address to a source file name and line number. buf64
4014 must point to a 64-byte buffer in the caller's address space. The
4015 result will be dumped in there and is guaranteed to be zero
4016 terminated. If no info is found, the first byte is set to zero. */
4017 #define VALGRIND_MAP_IP_TO_SRCLOC(addr, buf64) \
4018 (unsigned)VALGRIND_DO_CLIENT_REQUEST_EXPR(0, \
4019 VG_USERREQ__MAP_IP_TO_SRCLOC, \
4020 addr, buf64, 0, 0, 0)
4021
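An illustrative lookup using the 64-byte buffer contract documented above (the wrapper function is hypothetical):

    #include <stdio.h>
    #include "valgrind.h"

    static void print_srcloc_of(void* code_addr)
    {
       char buf64[64];   /* must be exactly 64 bytes, per the contract above */
       VALGRIND_MAP_IP_TO_SRCLOC(code_addr, buf64);
       if (buf64[0] != '\0')
          printf("%p maps to %s\n", code_addr, buf64);
       else
          printf("no source info for %p\n", code_addr);
    }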
4022
4023 #undef PLAT_x86_darwin
4024 #undef PLAT_amd64_darwin
4025 #undef PLAT_x86_win32
3918 #undef PLAT_x86_linux 4026 #undef PLAT_x86_linux
3919 #undef PLAT_amd64_linux 4027 #undef PLAT_amd64_linux
3920 #undef PLAT_ppc32_linux 4028 #undef PLAT_ppc32_linux
3921 #undef PLAT_ppc64_linux 4029 #undef PLAT_ppc64_linux
3922 #undef PLAT_ppc32_aix5 4030 #undef PLAT_arm_linux
3923 #undef PLAT_ppc64_aix5 4031 #undef PLAT_s390x_linux
3924 4032
3925 #endif /* __VALGRIND_H */ 4033 #endif /* __VALGRIND_H */