Chromium Code Reviews

Side by Side Diff: base/third_party/valgrind/valgrind.h

Issue 6327018: Add one more Valgrind header (memcheck.h) and update the one we already have. Base URL: http://src.chromium.org/svn/trunk/src/
Patch Set: '' Created 9 years, 11 months ago
1 /* -*- c -*- 1 /* -*- c -*-
2 ---------------------------------------------------------------- 2 ----------------------------------------------------------------
3 3
4 Notice that the following BSD-style license applies to this one 4 Notice that the following BSD-style license applies to this one
5 file (valgrind.h) only. The rest of Valgrind is licensed under the 5 file (valgrind.h) only. The rest of Valgrind is licensed under the
6 terms of the GNU General Public License, version 2, unless 6 terms of the GNU General Public License, version 2, unless
7 otherwise indicated. See the COPYING file in the source 7 otherwise indicated. See the COPYING file in the source
8 distribution for details. 8 distribution for details.
9 9
10 ---------------------------------------------------------------- 10 ----------------------------------------------------------------
11 11
12 This file is part of Valgrind, a dynamic binary instrumentation 12 This file is part of Valgrind, a dynamic binary instrumentation
13 framework. 13 framework.
14 14
15 Copyright (C) 2000-2008 Julian Seward. All rights reserved. 15 Copyright (C) 2000-2010 Julian Seward. All rights reserved.
16 16
17 Redistribution and use in source and binary forms, with or without 17 Redistribution and use in source and binary forms, with or without
18 modification, are permitted provided that the following conditions 18 modification, are permitted provided that the following conditions
19 are met: 19 are met:
20 20
21 1. Redistributions of source code must retain the above copyright 21 1. Redistributions of source code must retain the above copyright
22 notice, this list of conditions and the following disclaimer. 22 notice, this list of conditions and the following disclaimer.
23 23
24 2. The origin of this software must not be misrepresented; you must 24 2. The origin of this software must not be misrepresented; you must
25 not claim that you wrote the original software. If you use this 25 not claim that you wrote the original software. If you use this
(...skipping 40 matching lines...)
66 unchanged. When not running on valgrind, each client request 66 unchanged. When not running on valgrind, each client request
67 consumes very few (eg. 7) instructions, so the resulting performance 67 consumes very few (eg. 7) instructions, so the resulting performance
68 loss is negligible unless you plan to execute client requests 68 loss is negligible unless you plan to execute client requests
69 millions of times per second. Nevertheless, if that is still a 69 millions of times per second. Nevertheless, if that is still a
70 problem, you can compile with the NVALGRIND symbol defined (gcc 70 problem, you can compile with the NVALGRIND symbol defined (gcc
71 -DNVALGRIND) so that client requests are not even compiled in. */ 71 -DNVALGRIND) so that client requests are not even compiled in. */
72 72
73 #ifndef __VALGRIND_H 73 #ifndef __VALGRIND_H
74 #define __VALGRIND_H 74 #define __VALGRIND_H
75 75
76
77 /* ------------------------------------------------------------------ */
78 /* VERSION NUMBER OF VALGRIND */
79 /* ------------------------------------------------------------------ */
80
81 /* Specify Valgrind's version number, so that user code can
82 conditionally compile based on our version number. Note that these
83 were introduced at version 3.6 and so do not exist in version 3.5
84 or earlier. The recommended way to use them to check for "version
85 X.Y or later" is (eg)
86
87 #if defined(__VALGRIND_MAJOR__) && defined(__VALGRIND_MINOR__) \
88 && (__VALGRIND_MAJOR__ > 3 \
89 || (__VALGRIND_MAJOR__ == 3 && __VALGRIND_MINOR__ >= 6))
90 */
91 #define __VALGRIND_MAJOR__ 3
92 #define __VALGRIND_MINOR__ 6
93
94
76 #include <stdarg.h> 95 #include <stdarg.h>
77 96
78 /* Nb: this file might be included in a file compiled with -ansi. So 97 /* Nb: this file might be included in a file compiled with -ansi. So
79 we can't use C++ style "//" comments nor the "asm" keyword (instead 98 we can't use C++ style "//" comments nor the "asm" keyword (instead
80 use "__asm__"). */ 99 use "__asm__"). */
81 100
82 /* Derive some tags indicating what the target platform is. Note 101 /* Derive some tags indicating what the target platform is. Note
83 that in this file we're using the compiler's CPP symbols for 102 that in this file we're using the compiler's CPP symbols for
84 identifying architectures, which are different to the ones we use 103 identifying architectures, which are different to the ones we use
85 within the rest of Valgrind. Note, __powerpc__ is active for both 104 within the rest of Valgrind. Note, __powerpc__ is active for both
86 32 and 64-bit PPC, whereas __powerpc64__ is only active for the 105 32 and 64-bit PPC, whereas __powerpc64__ is only active for the
87 latter (on Linux, that is). */ 106 latter (on Linux, that is).
107
108 Misc note: how to find out what's predefined in gcc by default:
109 gcc -Wp,-dM somefile.c
110 */
111 #undef PLAT_ppc64_aix5
112 #undef PLAT_ppc32_aix5
113 #undef PLAT_x86_darwin
114 #undef PLAT_amd64_darwin
115 #undef PLAT_x86_win32
88 #undef PLAT_x86_linux 116 #undef PLAT_x86_linux
89 #undef PLAT_amd64_linux 117 #undef PLAT_amd64_linux
90 #undef PLAT_ppc32_linux 118 #undef PLAT_ppc32_linux
91 #undef PLAT_ppc64_linux 119 #undef PLAT_ppc64_linux
92 #undef PLAT_ppc32_aix5 120 #undef PLAT_arm_linux
93 #undef PLAT_ppc64_aix5
94 121
95 #if !defined(_AIX) && defined(__i386__) 122 #if defined(_AIX) && defined(__64BIT__)
96 # define PLAT_x86_linux 1
97 #elif !defined(_AIX) && defined(__x86_64__)
98 # define PLAT_amd64_linux 1
99 #elif !defined(_AIX) && defined(__powerpc__) && !defined(__powerpc64__)
100 # define PLAT_ppc32_linux 1
101 #elif !defined(_AIX) && defined(__powerpc__) && defined(__powerpc64__)
102 # define PLAT_ppc64_linux 1
103 #elif defined(_AIX) && defined(__64BIT__)
104 # define PLAT_ppc64_aix5 1 123 # define PLAT_ppc64_aix5 1
105 #elif defined(_AIX) && !defined(__64BIT__) 124 #elif defined(_AIX) && !defined(__64BIT__)
106 # define PLAT_ppc32_aix5 1 125 # define PLAT_ppc32_aix5 1
107 #endif 126 #elif defined(__APPLE__) && defined(__i386__)
108 127 # define PLAT_x86_darwin 1
109 128 #elif defined(__APPLE__) && defined(__x86_64__)
129 # define PLAT_amd64_darwin 1
 130 #elif defined(__MINGW32__) || defined(__CYGWIN32__) || defined(_WIN32) && defined(_M_IX86)
131 # define PLAT_x86_win32 1
132 #elif defined(__linux__) && defined(__i386__)
133 # define PLAT_x86_linux 1
134 #elif defined(__linux__) && defined(__x86_64__)
135 # define PLAT_amd64_linux 1
136 #elif defined(__linux__) && defined(__powerpc__) && !defined(__powerpc64__)
137 # define PLAT_ppc32_linux 1
138 #elif defined(__linux__) && defined(__powerpc__) && defined(__powerpc64__)
139 # define PLAT_ppc64_linux 1
140 #elif defined(__linux__) && defined(__arm__)
141 # define PLAT_arm_linux 1
142 #else
110 /* If we're not compiling for our target platform, don't generate 143 /* If we're not compiling for our target platform, don't generate
111 any inline asms. */ 144 any inline asms. */
112 #if !defined(PLAT_x86_linux) && !defined(PLAT_amd64_linux) \
113 && !defined(PLAT_ppc32_linux) && !defined(PLAT_ppc64_linux) \
114 && !defined(PLAT_ppc32_aix5) && !defined(PLAT_ppc64_aix5)
115 # if !defined(NVALGRIND) 145 # if !defined(NVALGRIND)
116 # define NVALGRIND 1 146 # define NVALGRIND 1
117 # endif 147 # endif
118 #endif 148 #endif
119 149
120 150
121 /* ------------------------------------------------------------------ */ 151 /* ------------------------------------------------------------------ */
122 /* ARCHITECTURE SPECIFICS for SPECIAL INSTRUCTIONS. There is nothing */ 152 /* ARCHITECTURE SPECIFICS for SPECIAL INSTRUCTIONS. There is nothing */
123 /* in here of use to end-users -- skip to the next section. */ 153 /* in here of use to end-users -- skip to the next section. */
124 /* ------------------------------------------------------------------ */ 154 /* ------------------------------------------------------------------ */
(...skipping 40 matching lines...)
165 information is abstracted into a user-visible type, OrigFn. 195 information is abstracted into a user-visible type, OrigFn.
166 196
167 VALGRIND_CALL_NOREDIR_* behaves the same as the following on the 197 VALGRIND_CALL_NOREDIR_* behaves the same as the following on the
168 guest, but guarantees that the branch instruction will not be 198 guest, but guarantees that the branch instruction will not be
169 redirected: x86: call *%eax, amd64: call *%rax, ppc32/ppc64: 199 redirected: x86: call *%eax, amd64: call *%rax, ppc32/ppc64:
170 branch-and-link-to-r11. VALGRIND_CALL_NOREDIR is just text, not a 200 branch-and-link-to-r11. VALGRIND_CALL_NOREDIR is just text, not a
171 complete inline asm, since it needs to be combined with more magic 201 complete inline asm, since it needs to be combined with more magic
172 inline asm stuff to be useful. 202 inline asm stuff to be useful.
173 */ 203 */
174 204
175 /* ------------------------- x86-linux ------------------------- */ 205 /* ------------------------- x86-{linux,darwin} ---------------- */
176 206
177 #if defined(PLAT_x86_linux) 207 #if defined(PLAT_x86_linux) || defined(PLAT_x86_darwin) \
208 || (defined(PLAT_x86_win32) && defined(__GNUC__))
178 209
179 typedef 210 typedef
180 struct { 211 struct {
181 unsigned int nraddr; /* where's the code? */ 212 unsigned int nraddr; /* where's the code? */
182 } 213 }
183 OrigFn; 214 OrigFn;
184 215
185 #define __SPECIAL_INSTRUCTION_PREAMBLE \ 216 #define __SPECIAL_INSTRUCTION_PREAMBLE \
186 "roll $3, %%edi ; roll $13, %%edi\n\t" \ 217 "roll $3, %%edi ; roll $13, %%edi\n\t" \
187 "roll $29, %%edi ; roll $19, %%edi\n\t" 218 "roll $29, %%edi ; roll $19, %%edi\n\t"
(...skipping 29 matching lines...)
217 : \ 248 : \
218 : "cc", "memory" \ 249 : "cc", "memory" \
219 ); \ 250 ); \
220 _zzq_orig->nraddr = __addr; \ 251 _zzq_orig->nraddr = __addr; \
221 } 252 }
222 253
223 #define VALGRIND_CALL_NOREDIR_EAX \ 254 #define VALGRIND_CALL_NOREDIR_EAX \
224 __SPECIAL_INSTRUCTION_PREAMBLE \ 255 __SPECIAL_INSTRUCTION_PREAMBLE \
225 /* call-noredir *%EAX */ \ 256 /* call-noredir *%EAX */ \
226 "xchgl %%edx,%%edx\n\t" 257 "xchgl %%edx,%%edx\n\t"
227 #endif /* PLAT_x86_linux */ 258 #endif /* PLAT_x86_linux || PLAT_x86_darwin || (PLAT_x86_win32 && __GNUC__) */
228 259
229 /* ------------------------ amd64-linux ------------------------ */ 260 /* ------------------------- x86-Win32 ------------------------- */
230 261
231 #if defined(PLAT_amd64_linux) 262 #if defined(PLAT_x86_win32) && !defined(__GNUC__)
263
264 typedef
265 struct {
266 unsigned int nraddr; /* where's the code? */
267 }
268 OrigFn;
269
270 #if defined(_MSC_VER)
271
272 #define __SPECIAL_INSTRUCTION_PREAMBLE \
273 __asm rol edi, 3 __asm rol edi, 13 \
274 __asm rol edi, 29 __asm rol edi, 19
275
276 #define VALGRIND_DO_CLIENT_REQUEST( \
277 _zzq_rlval, _zzq_default, _zzq_request, \
278 _zzq_arg1, _zzq_arg2, _zzq_arg3, _zzq_arg4, _zzq_arg5) \
279 { volatile uintptr_t _zzq_args[6]; \
280 volatile unsigned int _zzq_result; \
281 _zzq_args[0] = (uintptr_t)(_zzq_request); \
282 _zzq_args[1] = (uintptr_t)(_zzq_arg1); \
283 _zzq_args[2] = (uintptr_t)(_zzq_arg2); \
284 _zzq_args[3] = (uintptr_t)(_zzq_arg3); \
285 _zzq_args[4] = (uintptr_t)(_zzq_arg4); \
286 _zzq_args[5] = (uintptr_t)(_zzq_arg5); \
287 __asm { __asm lea eax, _zzq_args __asm mov edx, _zzq_default \
288 __SPECIAL_INSTRUCTION_PREAMBLE \
289 /* %EDX = client_request ( %EAX ) */ \
290 __asm xchg ebx,ebx \
291 __asm mov _zzq_result, edx \
292 } \
293 _zzq_rlval = _zzq_result; \
294 }
295
296 #define VALGRIND_GET_NR_CONTEXT(_zzq_rlval) \
297 { volatile OrigFn* _zzq_orig = &(_zzq_rlval); \
298 volatile unsigned int __addr; \
299 __asm { __SPECIAL_INSTRUCTION_PREAMBLE \
300 /* %EAX = guest_NRADDR */ \
301 __asm xchg ecx,ecx \
302 __asm mov __addr, eax \
303 } \
304 _zzq_orig->nraddr = __addr; \
305 }
306
307 #define VALGRIND_CALL_NOREDIR_EAX ERROR
308
309 #else
310 #error Unsupported compiler.
311 #endif
312
313 #endif /* PLAT_x86_win32 */
314
315 /* ------------------------ amd64-{linux,darwin} --------------- */
316
317 #if defined(PLAT_amd64_linux) || defined(PLAT_amd64_darwin)
232 318
233 typedef 319 typedef
234 struct { 320 struct {
235 unsigned long long int nraddr; /* where's the code? */ 321 unsigned long long int nraddr; /* where's the code? */
236 } 322 }
237 OrigFn; 323 OrigFn;
238 324
239 #define __SPECIAL_INSTRUCTION_PREAMBLE \ 325 #define __SPECIAL_INSTRUCTION_PREAMBLE \
240 "rolq $3, %%rdi ; rolq $13, %%rdi\n\t" \ 326 "rolq $3, %%rdi ; rolq $13, %%rdi\n\t" \
241 "rolq $61, %%rdi ; rolq $51, %%rdi\n\t" 327 "rolq $61, %%rdi ; rolq $51, %%rdi\n\t"
(...skipping 29 matching lines...)
271 : \ 357 : \
272 : "cc", "memory" \ 358 : "cc", "memory" \
273 ); \ 359 ); \
274 _zzq_orig->nraddr = __addr; \ 360 _zzq_orig->nraddr = __addr; \
275 } 361 }
276 362
277 #define VALGRIND_CALL_NOREDIR_RAX \ 363 #define VALGRIND_CALL_NOREDIR_RAX \
278 __SPECIAL_INSTRUCTION_PREAMBLE \ 364 __SPECIAL_INSTRUCTION_PREAMBLE \
279 /* call-noredir *%RAX */ \ 365 /* call-noredir *%RAX */ \
280 "xchgq %%rdx,%%rdx\n\t" 366 "xchgq %%rdx,%%rdx\n\t"
281 #endif /* PLAT_amd64_linux */ 367 #endif /* PLAT_amd64_linux || PLAT_amd64_darwin */
282 368
283 /* ------------------------ ppc32-linux ------------------------ */ 369 /* ------------------------ ppc32-linux ------------------------ */
284 370
285 #if defined(PLAT_ppc32_linux) 371 #if defined(PLAT_ppc32_linux)
286 372
287 typedef 373 typedef
288 struct { 374 struct {
289 unsigned int nraddr; /* where's the code? */ 375 unsigned int nraddr; /* where's the code? */
290 } 376 }
291 OrigFn; 377 OrigFn;
(...skipping 107 matching lines...)
399 _zzq_orig->r2 = __addr; \ 485 _zzq_orig->r2 = __addr; \
400 } 486 }
401 487
402 #define VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R11 \ 488 #define VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R11 \
403 __SPECIAL_INSTRUCTION_PREAMBLE \ 489 __SPECIAL_INSTRUCTION_PREAMBLE \
404 /* branch-and-link-to-noredir *%R11 */ \ 490 /* branch-and-link-to-noredir *%R11 */ \
405 "or 3,3,3\n\t" 491 "or 3,3,3\n\t"
406 492
407 #endif /* PLAT_ppc64_linux */ 493 #endif /* PLAT_ppc64_linux */
408 494
495 /* ------------------------- arm-linux ------------------------- */
496
497 #if defined(PLAT_arm_linux)
498
499 typedef
500 struct {
501 unsigned int nraddr; /* where's the code? */
502 }
503 OrigFn;
504
505 #define __SPECIAL_INSTRUCTION_PREAMBLE \
506 "mov r12, r12, ror #3 ; mov r12, r12, ror #13 \n\t" \
507 "mov r12, r12, ror #29 ; mov r12, r12, ror #19 \n\t"
508
509 #define VALGRIND_DO_CLIENT_REQUEST( \
510 _zzq_rlval, _zzq_default, _zzq_request, \
511 _zzq_arg1, _zzq_arg2, _zzq_arg3, _zzq_arg4, _zzq_arg5) \
512 \
513 { volatile unsigned int _zzq_args[6]; \
514 volatile unsigned int _zzq_result; \
515 _zzq_args[0] = (unsigned int)(_zzq_request); \
516 _zzq_args[1] = (unsigned int)(_zzq_arg1); \
517 _zzq_args[2] = (unsigned int)(_zzq_arg2); \
518 _zzq_args[3] = (unsigned int)(_zzq_arg3); \
519 _zzq_args[4] = (unsigned int)(_zzq_arg4); \
520 _zzq_args[5] = (unsigned int)(_zzq_arg5); \
521 __asm__ volatile("mov r3, %1\n\t" /*default*/ \
522 "mov r4, %2\n\t" /*ptr*/ \
523 __SPECIAL_INSTRUCTION_PREAMBLE \
524 /* R3 = client_request ( R4 ) */ \
525 "orr r10, r10, r10\n\t" \
526 "mov %0, r3" /*result*/ \
527 : "=r" (_zzq_result) \
528 : "r" (_zzq_default), "r" (&_zzq_args[0]) \
529 : "cc","memory", "r3", "r4"); \
530 _zzq_rlval = _zzq_result; \
531 }
532
533 #define VALGRIND_GET_NR_CONTEXT(_zzq_rlval) \
534 { volatile OrigFn* _zzq_orig = &(_zzq_rlval); \
535 unsigned int __addr; \
536 __asm__ volatile(__SPECIAL_INSTRUCTION_PREAMBLE \
537 /* R3 = guest_NRADDR */ \
538 "orr r11, r11, r11\n\t" \
539 "mov %0, r3" \
540 : "=r" (__addr) \
541 : \
542 : "cc", "memory", "r3" \
543 ); \
544 _zzq_orig->nraddr = __addr; \
545 }
546
547 #define VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R4 \
548 __SPECIAL_INSTRUCTION_PREAMBLE \
549 /* branch-and-link-to-noredir *%R4 */ \
550 "orr r12, r12, r12\n\t"
551
552 #endif /* PLAT_arm_linux */
553
409 /* ------------------------ ppc32-aix5 ------------------------- */ 554 /* ------------------------ ppc32-aix5 ------------------------- */
410 555
411 #if defined(PLAT_ppc32_aix5) 556 #if defined(PLAT_ppc32_aix5)
412 557
413 typedef 558 typedef
414 struct { 559 struct {
415 unsigned int nraddr; /* where's the code? */ 560 unsigned int nraddr; /* where's the code? */
416 unsigned int r2; /* what tocptr do we need? */ 561 unsigned int r2; /* what tocptr do we need? */
417 } 562 }
418 OrigFn; 563 OrigFn;
(...skipping 155 matching lines...)
574 719
575 'W' stands for "word" and 'v' for "void". Hence there are 720 'W' stands for "word" and 'v' for "void". Hence there are
576 different macros for calling arity 0, 1, 2, 3, 4, etc, functions, 721 different macros for calling arity 0, 1, 2, 3, 4, etc, functions,
577 and for each, the possibility of returning a word-typed result, or 722 and for each, the possibility of returning a word-typed result, or
578 no result. 723 no result.
579 */ 724 */
580 725
581 /* Use these to write the name of your wrapper. NOTE: duplicates 726 /* Use these to write the name of your wrapper. NOTE: duplicates
582 VG_WRAP_FUNCTION_Z{U,Z} in pub_tool_redir.h. */ 727 VG_WRAP_FUNCTION_Z{U,Z} in pub_tool_redir.h. */
583 728
729 /* Use an extra level of macroisation so as to ensure the soname/fnname
730 args are fully macro-expanded before pasting them together. */
731 #define VG_CONCAT4(_aa,_bb,_cc,_dd) _aa##_bb##_cc##_dd
732
584 #define I_WRAP_SONAME_FNNAME_ZU(soname,fnname) \ 733 #define I_WRAP_SONAME_FNNAME_ZU(soname,fnname) \
585 _vgwZU_##soname##_##fnname 734 VG_CONCAT4(_vgwZU_,soname,_,fnname)
586 735
587 #define I_WRAP_SONAME_FNNAME_ZZ(soname,fnname) \ 736 #define I_WRAP_SONAME_FNNAME_ZZ(soname,fnname) \
588 _vgwZZ_##soname##_##fnname 737 VG_CONCAT4(_vgwZZ_,soname,_,fnname)
589 738
590 /* Use this macro from within a wrapper function to collect the 739 /* Use this macro from within a wrapper function to collect the
591 context (address and possibly other info) of the original function. 740 context (address and possibly other info) of the original function.
592 Once you have that you can then use it in one of the CALL_FN_ 741 Once you have that you can then use it in one of the CALL_FN_
593 macros. The type of the argument _lval is OrigFn. */ 742 macros. The type of the argument _lval is OrigFn. */
594 #define VALGRIND_GET_ORIG_FN(_lval) VALGRIND_GET_NR_CONTEXT(_lval) 743 #define VALGRIND_GET_ORIG_FN(_lval) VALGRIND_GET_NR_CONTEXT(_lval)
595 744
596 /* Derivatives of the main macros below, for calling functions 745 /* Derivatives of the main macros below, for calling functions
597 returning void. */ 746 returning void. */
598 747
599 #define CALL_FN_v_v(fnptr) \ 748 #define CALL_FN_v_v(fnptr) \
600 do { volatile unsigned long _junk; \ 749 do { volatile unsigned long _junk; \
601 CALL_FN_W_v(_junk,fnptr); } while (0) 750 CALL_FN_W_v(_junk,fnptr); } while (0)
602 751
603 #define CALL_FN_v_W(fnptr, arg1) \ 752 #define CALL_FN_v_W(fnptr, arg1) \
604 do { volatile unsigned long _junk; \ 753 do { volatile unsigned long _junk; \
605 CALL_FN_W_W(_junk,fnptr,arg1); } while (0) 754 CALL_FN_W_W(_junk,fnptr,arg1); } while (0)
606 755
607 #define CALL_FN_v_WW(fnptr, arg1,arg2) \ 756 #define CALL_FN_v_WW(fnptr, arg1,arg2) \
608 do { volatile unsigned long _junk; \ 757 do { volatile unsigned long _junk; \
609 CALL_FN_W_WW(_junk,fnptr,arg1,arg2); } while (0) 758 CALL_FN_W_WW(_junk,fnptr,arg1,arg2); } while (0)
610 759
611 #define CALL_FN_v_WWW(fnptr, arg1,arg2,arg3) \ 760 #define CALL_FN_v_WWW(fnptr, arg1,arg2,arg3) \
612 do { volatile unsigned long _junk; \ 761 do { volatile unsigned long _junk; \
613 CALL_FN_W_WWW(_junk,fnptr,arg1,arg2,arg3); } while (0) 762 CALL_FN_W_WWW(_junk,fnptr,arg1,arg2,arg3); } while (0)
614 763
615 /* ------------------------- x86-linux ------------------------- */ 764 #define CALL_FN_v_WWWW(fnptr, arg1,arg2,arg3,arg4) \
765 do { volatile unsigned long _junk; \
766 CALL_FN_W_WWWW(_junk,fnptr,arg1,arg2,arg3,arg4); } while (0)
616 767
617 #if defined(PLAT_x86_linux) 768 #define CALL_FN_v_5W(fnptr, arg1,arg2,arg3,arg4,arg5) \
769 do { volatile unsigned long _junk; \
770 CALL_FN_W_5W(_junk,fnptr,arg1,arg2,arg3,arg4,arg5); } while (0)
771
772 #define CALL_FN_v_6W(fnptr, arg1,arg2,arg3,arg4,arg5,arg6) \
773 do { volatile unsigned long _junk; \
774 CALL_FN_W_6W(_junk,fnptr,arg1,arg2,arg3,arg4,arg5,arg6); } while (0)
775
776 #define CALL_FN_v_7W(fnptr, arg1,arg2,arg3,arg4,arg5,arg6,arg7) \
777 do { volatile unsigned long _junk; \
 778 CALL_FN_W_7W(_junk,fnptr,arg1,arg2,arg3,arg4,arg5,arg6,arg7); } while (0)
779
780 /* ------------------------- x86-{linux,darwin} ---------------- */
781
782 #if defined(PLAT_x86_linux) || defined(PLAT_x86_darwin)
618 783
619 /* These regs are trashed by the hidden call. No need to mention eax 784 /* These regs are trashed by the hidden call. No need to mention eax
620 as gcc can already see that, plus causes gcc to bomb. */ 785 as gcc can already see that, plus causes gcc to bomb. */
621 #define __CALLER_SAVED_REGS /*"eax"*/ "ecx", "edx" 786 #define __CALLER_SAVED_REGS /*"eax"*/ "ecx", "edx"
622 787
623 /* These CALL_FN_ macros assume that on x86-linux, sizeof(unsigned 788 /* These CALL_FN_ macros assume that on x86-linux, sizeof(unsigned
624 long) == 4. */ 789 long) == 4. */
625 790
626 #define CALL_FN_W_v(lval, orig) \ 791 #define CALL_FN_W_v(lval, orig) \
627 do { \ 792 do { \
(...skipping 12 matching lines...)
640 } while (0) 805 } while (0)
641 806
642 #define CALL_FN_W_W(lval, orig, arg1) \ 807 #define CALL_FN_W_W(lval, orig, arg1) \
643 do { \ 808 do { \
644 volatile OrigFn _orig = (orig); \ 809 volatile OrigFn _orig = (orig); \
645 volatile unsigned long _argvec[2]; \ 810 volatile unsigned long _argvec[2]; \
646 volatile unsigned long _res; \ 811 volatile unsigned long _res; \
647 _argvec[0] = (unsigned long)_orig.nraddr; \ 812 _argvec[0] = (unsigned long)_orig.nraddr; \
648 _argvec[1] = (unsigned long)(arg1); \ 813 _argvec[1] = (unsigned long)(arg1); \
649 __asm__ volatile( \ 814 __asm__ volatile( \
815 "subl $12, %%esp\n\t" \
650 "pushl 4(%%eax)\n\t" \ 816 "pushl 4(%%eax)\n\t" \
651 "movl (%%eax), %%eax\n\t" /* target->%eax */ \ 817 "movl (%%eax), %%eax\n\t" /* target->%eax */ \
652 VALGRIND_CALL_NOREDIR_EAX \ 818 VALGRIND_CALL_NOREDIR_EAX \
653 "addl $4, %%esp\n" \ 819 "addl $16, %%esp\n" \
654 : /*out*/ "=a" (_res) \ 820 : /*out*/ "=a" (_res) \
655 : /*in*/ "a" (&_argvec[0]) \ 821 : /*in*/ "a" (&_argvec[0]) \
656 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \ 822 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \
657 ); \ 823 ); \
658 lval = (__typeof__(lval)) _res; \ 824 lval = (__typeof__(lval)) _res; \
659 } while (0) 825 } while (0)
660 826
661 #define CALL_FN_W_WW(lval, orig, arg1,arg2) \ 827 #define CALL_FN_W_WW(lval, orig, arg1,arg2) \
662 do { \ 828 do { \
663 volatile OrigFn _orig = (orig); \ 829 volatile OrigFn _orig = (orig); \
664 volatile unsigned long _argvec[3]; \ 830 volatile unsigned long _argvec[3]; \
665 volatile unsigned long _res; \ 831 volatile unsigned long _res; \
666 _argvec[0] = (unsigned long)_orig.nraddr; \ 832 _argvec[0] = (unsigned long)_orig.nraddr; \
667 _argvec[1] = (unsigned long)(arg1); \ 833 _argvec[1] = (unsigned long)(arg1); \
668 _argvec[2] = (unsigned long)(arg2); \ 834 _argvec[2] = (unsigned long)(arg2); \
669 __asm__ volatile( \ 835 __asm__ volatile( \
836 "subl $8, %%esp\n\t" \
670 "pushl 8(%%eax)\n\t" \ 837 "pushl 8(%%eax)\n\t" \
671 "pushl 4(%%eax)\n\t" \ 838 "pushl 4(%%eax)\n\t" \
672 "movl (%%eax), %%eax\n\t" /* target->%eax */ \ 839 "movl (%%eax), %%eax\n\t" /* target->%eax */ \
673 VALGRIND_CALL_NOREDIR_EAX \ 840 VALGRIND_CALL_NOREDIR_EAX \
674 "addl $8, %%esp\n" \ 841 "addl $16, %%esp\n" \
675 : /*out*/ "=a" (_res) \ 842 : /*out*/ "=a" (_res) \
676 : /*in*/ "a" (&_argvec[0]) \ 843 : /*in*/ "a" (&_argvec[0]) \
677 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \ 844 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \
678 ); \ 845 ); \
679 lval = (__typeof__(lval)) _res; \ 846 lval = (__typeof__(lval)) _res; \
680 } while (0) 847 } while (0)
681 848
682 #define CALL_FN_W_WWW(lval, orig, arg1,arg2,arg3) \ 849 #define CALL_FN_W_WWW(lval, orig, arg1,arg2,arg3) \
683 do { \ 850 do { \
684 volatile OrigFn _orig = (orig); \ 851 volatile OrigFn _orig = (orig); \
685 volatile unsigned long _argvec[4]; \ 852 volatile unsigned long _argvec[4]; \
686 volatile unsigned long _res; \ 853 volatile unsigned long _res; \
687 _argvec[0] = (unsigned long)_orig.nraddr; \ 854 _argvec[0] = (unsigned long)_orig.nraddr; \
688 _argvec[1] = (unsigned long)(arg1); \ 855 _argvec[1] = (unsigned long)(arg1); \
689 _argvec[2] = (unsigned long)(arg2); \ 856 _argvec[2] = (unsigned long)(arg2); \
690 _argvec[3] = (unsigned long)(arg3); \ 857 _argvec[3] = (unsigned long)(arg3); \
691 __asm__ volatile( \ 858 __asm__ volatile( \
859 "subl $4, %%esp\n\t" \
692 "pushl 12(%%eax)\n\t" \ 860 "pushl 12(%%eax)\n\t" \
693 "pushl 8(%%eax)\n\t" \ 861 "pushl 8(%%eax)\n\t" \
694 "pushl 4(%%eax)\n\t" \ 862 "pushl 4(%%eax)\n\t" \
695 "movl (%%eax), %%eax\n\t" /* target->%eax */ \ 863 "movl (%%eax), %%eax\n\t" /* target->%eax */ \
696 VALGRIND_CALL_NOREDIR_EAX \ 864 VALGRIND_CALL_NOREDIR_EAX \
697 "addl $12, %%esp\n" \ 865 "addl $16, %%esp\n" \
698 : /*out*/ "=a" (_res) \ 866 : /*out*/ "=a" (_res) \
699 : /*in*/ "a" (&_argvec[0]) \ 867 : /*in*/ "a" (&_argvec[0]) \
700 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \ 868 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \
701 ); \ 869 ); \
702 lval = (__typeof__(lval)) _res; \ 870 lval = (__typeof__(lval)) _res; \
703 } while (0) 871 } while (0)
704 872
705 #define CALL_FN_W_WWWW(lval, orig, arg1,arg2,arg3,arg4) \ 873 #define CALL_FN_W_WWWW(lval, orig, arg1,arg2,arg3,arg4) \
706 do { \ 874 do { \
707 volatile OrigFn _orig = (orig); \ 875 volatile OrigFn _orig = (orig); \
(...skipping 24 matching lines...)
732 volatile OrigFn _orig = (orig); \ 900 volatile OrigFn _orig = (orig); \
733 volatile unsigned long _argvec[6]; \ 901 volatile unsigned long _argvec[6]; \
734 volatile unsigned long _res; \ 902 volatile unsigned long _res; \
735 _argvec[0] = (unsigned long)_orig.nraddr; \ 903 _argvec[0] = (unsigned long)_orig.nraddr; \
736 _argvec[1] = (unsigned long)(arg1); \ 904 _argvec[1] = (unsigned long)(arg1); \
737 _argvec[2] = (unsigned long)(arg2); \ 905 _argvec[2] = (unsigned long)(arg2); \
738 _argvec[3] = (unsigned long)(arg3); \ 906 _argvec[3] = (unsigned long)(arg3); \
739 _argvec[4] = (unsigned long)(arg4); \ 907 _argvec[4] = (unsigned long)(arg4); \
740 _argvec[5] = (unsigned long)(arg5); \ 908 _argvec[5] = (unsigned long)(arg5); \
741 __asm__ volatile( \ 909 __asm__ volatile( \
910 "subl $12, %%esp\n\t" \
742 "pushl 20(%%eax)\n\t" \ 911 "pushl 20(%%eax)\n\t" \
743 "pushl 16(%%eax)\n\t" \ 912 "pushl 16(%%eax)\n\t" \
744 "pushl 12(%%eax)\n\t" \ 913 "pushl 12(%%eax)\n\t" \
745 "pushl 8(%%eax)\n\t" \ 914 "pushl 8(%%eax)\n\t" \
746 "pushl 4(%%eax)\n\t" \ 915 "pushl 4(%%eax)\n\t" \
747 "movl (%%eax), %%eax\n\t" /* target->%eax */ \ 916 "movl (%%eax), %%eax\n\t" /* target->%eax */ \
748 VALGRIND_CALL_NOREDIR_EAX \ 917 VALGRIND_CALL_NOREDIR_EAX \
749 "addl $20, %%esp\n" \ 918 "addl $32, %%esp\n" \
750 : /*out*/ "=a" (_res) \ 919 : /*out*/ "=a" (_res) \
751 : /*in*/ "a" (&_argvec[0]) \ 920 : /*in*/ "a" (&_argvec[0]) \
752 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \ 921 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \
753 ); \ 922 ); \
754 lval = (__typeof__(lval)) _res; \ 923 lval = (__typeof__(lval)) _res; \
755 } while (0) 924 } while (0)
756 925
757 #define CALL_FN_W_6W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6) \ 926 #define CALL_FN_W_6W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6) \
758 do { \ 927 do { \
759 volatile OrigFn _orig = (orig); \ 928 volatile OrigFn _orig = (orig); \
760 volatile unsigned long _argvec[7]; \ 929 volatile unsigned long _argvec[7]; \
761 volatile unsigned long _res; \ 930 volatile unsigned long _res; \
762 _argvec[0] = (unsigned long)_orig.nraddr; \ 931 _argvec[0] = (unsigned long)_orig.nraddr; \
763 _argvec[1] = (unsigned long)(arg1); \ 932 _argvec[1] = (unsigned long)(arg1); \
764 _argvec[2] = (unsigned long)(arg2); \ 933 _argvec[2] = (unsigned long)(arg2); \
765 _argvec[3] = (unsigned long)(arg3); \ 934 _argvec[3] = (unsigned long)(arg3); \
766 _argvec[4] = (unsigned long)(arg4); \ 935 _argvec[4] = (unsigned long)(arg4); \
767 _argvec[5] = (unsigned long)(arg5); \ 936 _argvec[5] = (unsigned long)(arg5); \
768 _argvec[6] = (unsigned long)(arg6); \ 937 _argvec[6] = (unsigned long)(arg6); \
769 __asm__ volatile( \ 938 __asm__ volatile( \
939 "subl $8, %%esp\n\t" \
770 "pushl 24(%%eax)\n\t" \ 940 "pushl 24(%%eax)\n\t" \
771 "pushl 20(%%eax)\n\t" \ 941 "pushl 20(%%eax)\n\t" \
772 "pushl 16(%%eax)\n\t" \ 942 "pushl 16(%%eax)\n\t" \
773 "pushl 12(%%eax)\n\t" \ 943 "pushl 12(%%eax)\n\t" \
774 "pushl 8(%%eax)\n\t" \ 944 "pushl 8(%%eax)\n\t" \
775 "pushl 4(%%eax)\n\t" \ 945 "pushl 4(%%eax)\n\t" \
776 "movl (%%eax), %%eax\n\t" /* target->%eax */ \ 946 "movl (%%eax), %%eax\n\t" /* target->%eax */ \
777 VALGRIND_CALL_NOREDIR_EAX \ 947 VALGRIND_CALL_NOREDIR_EAX \
778 "addl $24, %%esp\n" \ 948 "addl $32, %%esp\n" \
779 : /*out*/ "=a" (_res) \ 949 : /*out*/ "=a" (_res) \
780 : /*in*/ "a" (&_argvec[0]) \ 950 : /*in*/ "a" (&_argvec[0]) \
781 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \ 951 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \
782 ); \ 952 ); \
783 lval = (__typeof__(lval)) _res; \ 953 lval = (__typeof__(lval)) _res; \
784 } while (0) 954 } while (0)
785 955
786 #define CALL_FN_W_7W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \ 956 #define CALL_FN_W_7W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \
787 arg7) \ 957 arg7) \
788 do { \ 958 do { \
789 volatile OrigFn _orig = (orig); \ 959 volatile OrigFn _orig = (orig); \
790 volatile unsigned long _argvec[8]; \ 960 volatile unsigned long _argvec[8]; \
791 volatile unsigned long _res; \ 961 volatile unsigned long _res; \
792 _argvec[0] = (unsigned long)_orig.nraddr; \ 962 _argvec[0] = (unsigned long)_orig.nraddr; \
793 _argvec[1] = (unsigned long)(arg1); \ 963 _argvec[1] = (unsigned long)(arg1); \
794 _argvec[2] = (unsigned long)(arg2); \ 964 _argvec[2] = (unsigned long)(arg2); \
795 _argvec[3] = (unsigned long)(arg3); \ 965 _argvec[3] = (unsigned long)(arg3); \
796 _argvec[4] = (unsigned long)(arg4); \ 966 _argvec[4] = (unsigned long)(arg4); \
797 _argvec[5] = (unsigned long)(arg5); \ 967 _argvec[5] = (unsigned long)(arg5); \
798 _argvec[6] = (unsigned long)(arg6); \ 968 _argvec[6] = (unsigned long)(arg6); \
799 _argvec[7] = (unsigned long)(arg7); \ 969 _argvec[7] = (unsigned long)(arg7); \
800 __asm__ volatile( \ 970 __asm__ volatile( \
971 "subl $4, %%esp\n\t" \
801 "pushl 28(%%eax)\n\t" \ 972 "pushl 28(%%eax)\n\t" \
802 "pushl 24(%%eax)\n\t" \ 973 "pushl 24(%%eax)\n\t" \
803 "pushl 20(%%eax)\n\t" \ 974 "pushl 20(%%eax)\n\t" \
804 "pushl 16(%%eax)\n\t" \ 975 "pushl 16(%%eax)\n\t" \
805 "pushl 12(%%eax)\n\t" \ 976 "pushl 12(%%eax)\n\t" \
806 "pushl 8(%%eax)\n\t" \ 977 "pushl 8(%%eax)\n\t" \
807 "pushl 4(%%eax)\n\t" \ 978 "pushl 4(%%eax)\n\t" \
808 "movl (%%eax), %%eax\n\t" /* target->%eax */ \ 979 "movl (%%eax), %%eax\n\t" /* target->%eax */ \
809 VALGRIND_CALL_NOREDIR_EAX \ 980 VALGRIND_CALL_NOREDIR_EAX \
810 "addl $28, %%esp\n" \ 981 "addl $32, %%esp\n" \
811 : /*out*/ "=a" (_res) \ 982 : /*out*/ "=a" (_res) \
812 : /*in*/ "a" (&_argvec[0]) \ 983 : /*in*/ "a" (&_argvec[0]) \
813 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \ 984 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \
814 ); \ 985 ); \
815 lval = (__typeof__(lval)) _res; \ 986 lval = (__typeof__(lval)) _res; \
816 } while (0) 987 } while (0)
817 988
818 #define CALL_FN_W_8W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \ 989 #define CALL_FN_W_8W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \
819 arg7,arg8) \ 990 arg7,arg8) \
820 do { \ 991 do { \
(...skipping 38 matching lines...)
859 _argvec[1] = (unsigned long)(arg1); \ 1030 _argvec[1] = (unsigned long)(arg1); \
860 _argvec[2] = (unsigned long)(arg2); \ 1031 _argvec[2] = (unsigned long)(arg2); \
861 _argvec[3] = (unsigned long)(arg3); \ 1032 _argvec[3] = (unsigned long)(arg3); \
862 _argvec[4] = (unsigned long)(arg4); \ 1033 _argvec[4] = (unsigned long)(arg4); \
863 _argvec[5] = (unsigned long)(arg5); \ 1034 _argvec[5] = (unsigned long)(arg5); \
864 _argvec[6] = (unsigned long)(arg6); \ 1035 _argvec[6] = (unsigned long)(arg6); \
865 _argvec[7] = (unsigned long)(arg7); \ 1036 _argvec[7] = (unsigned long)(arg7); \
866 _argvec[8] = (unsigned long)(arg8); \ 1037 _argvec[8] = (unsigned long)(arg8); \
867 _argvec[9] = (unsigned long)(arg9); \ 1038 _argvec[9] = (unsigned long)(arg9); \
868 __asm__ volatile( \ 1039 __asm__ volatile( \
1040 "subl $12, %%esp\n\t" \
869 "pushl 36(%%eax)\n\t" \ 1041 "pushl 36(%%eax)\n\t" \
870 "pushl 32(%%eax)\n\t" \ 1042 "pushl 32(%%eax)\n\t" \
871 "pushl 28(%%eax)\n\t" \ 1043 "pushl 28(%%eax)\n\t" \
872 "pushl 24(%%eax)\n\t" \ 1044 "pushl 24(%%eax)\n\t" \
873 "pushl 20(%%eax)\n\t" \ 1045 "pushl 20(%%eax)\n\t" \
874 "pushl 16(%%eax)\n\t" \ 1046 "pushl 16(%%eax)\n\t" \
875 "pushl 12(%%eax)\n\t" \ 1047 "pushl 12(%%eax)\n\t" \
876 "pushl 8(%%eax)\n\t" \ 1048 "pushl 8(%%eax)\n\t" \
877 "pushl 4(%%eax)\n\t" \ 1049 "pushl 4(%%eax)\n\t" \
878 "movl (%%eax), %%eax\n\t" /* target->%eax */ \ 1050 "movl (%%eax), %%eax\n\t" /* target->%eax */ \
879 VALGRIND_CALL_NOREDIR_EAX \ 1051 VALGRIND_CALL_NOREDIR_EAX \
880 "addl $36, %%esp\n" \ 1052 "addl $48, %%esp\n" \
881 : /*out*/ "=a" (_res) \ 1053 : /*out*/ "=a" (_res) \
882 : /*in*/ "a" (&_argvec[0]) \ 1054 : /*in*/ "a" (&_argvec[0]) \
883 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \ 1055 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \
884 ); \ 1056 ); \
885 lval = (__typeof__(lval)) _res; \ 1057 lval = (__typeof__(lval)) _res; \
886 } while (0) 1058 } while (0)
887 1059
888 #define CALL_FN_W_10W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \ 1060 #define CALL_FN_W_10W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \
889 arg7,arg8,arg9,arg10) \ 1061 arg7,arg8,arg9,arg10) \
890 do { \ 1062 do { \
891 volatile OrigFn _orig = (orig); \ 1063 volatile OrigFn _orig = (orig); \
892 volatile unsigned long _argvec[11]; \ 1064 volatile unsigned long _argvec[11]; \
893 volatile unsigned long _res; \ 1065 volatile unsigned long _res; \
894 _argvec[0] = (unsigned long)_orig.nraddr; \ 1066 _argvec[0] = (unsigned long)_orig.nraddr; \
895 _argvec[1] = (unsigned long)(arg1); \ 1067 _argvec[1] = (unsigned long)(arg1); \
896 _argvec[2] = (unsigned long)(arg2); \ 1068 _argvec[2] = (unsigned long)(arg2); \
897 _argvec[3] = (unsigned long)(arg3); \ 1069 _argvec[3] = (unsigned long)(arg3); \
898 _argvec[4] = (unsigned long)(arg4); \ 1070 _argvec[4] = (unsigned long)(arg4); \
899 _argvec[5] = (unsigned long)(arg5); \ 1071 _argvec[5] = (unsigned long)(arg5); \
900 _argvec[6] = (unsigned long)(arg6); \ 1072 _argvec[6] = (unsigned long)(arg6); \
901 _argvec[7] = (unsigned long)(arg7); \ 1073 _argvec[7] = (unsigned long)(arg7); \
902 _argvec[8] = (unsigned long)(arg8); \ 1074 _argvec[8] = (unsigned long)(arg8); \
903 _argvec[9] = (unsigned long)(arg9); \ 1075 _argvec[9] = (unsigned long)(arg9); \
904 _argvec[10] = (unsigned long)(arg10); \ 1076 _argvec[10] = (unsigned long)(arg10); \
905 __asm__ volatile( \ 1077 __asm__ volatile( \
1078 "subl $8, %%esp\n\t" \
906 "pushl 40(%%eax)\n\t" \ 1079 "pushl 40(%%eax)\n\t" \
907 "pushl 36(%%eax)\n\t" \ 1080 "pushl 36(%%eax)\n\t" \
908 "pushl 32(%%eax)\n\t" \ 1081 "pushl 32(%%eax)\n\t" \
909 "pushl 28(%%eax)\n\t" \ 1082 "pushl 28(%%eax)\n\t" \
910 "pushl 24(%%eax)\n\t" \ 1083 "pushl 24(%%eax)\n\t" \
911 "pushl 20(%%eax)\n\t" \ 1084 "pushl 20(%%eax)\n\t" \
912 "pushl 16(%%eax)\n\t" \ 1085 "pushl 16(%%eax)\n\t" \
913 "pushl 12(%%eax)\n\t" \ 1086 "pushl 12(%%eax)\n\t" \
914 "pushl 8(%%eax)\n\t" \ 1087 "pushl 8(%%eax)\n\t" \
915 "pushl 4(%%eax)\n\t" \ 1088 "pushl 4(%%eax)\n\t" \
916 "movl (%%eax), %%eax\n\t" /* target->%eax */ \ 1089 "movl (%%eax), %%eax\n\t" /* target->%eax */ \
917 VALGRIND_CALL_NOREDIR_EAX \ 1090 VALGRIND_CALL_NOREDIR_EAX \
918 "addl $40, %%esp\n" \ 1091 "addl $48, %%esp\n" \
919 : /*out*/ "=a" (_res) \ 1092 : /*out*/ "=a" (_res) \
920 : /*in*/ "a" (&_argvec[0]) \ 1093 : /*in*/ "a" (&_argvec[0]) \
921 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \ 1094 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \
922 ); \ 1095 ); \
923 lval = (__typeof__(lval)) _res; \ 1096 lval = (__typeof__(lval)) _res; \
924 } while (0) 1097 } while (0)
925 1098
926 #define CALL_FN_W_11W(lval, orig, arg1,arg2,arg3,arg4,arg5, \ 1099 #define CALL_FN_W_11W(lval, orig, arg1,arg2,arg3,arg4,arg5, \
927 arg6,arg7,arg8,arg9,arg10, \ 1100 arg6,arg7,arg8,arg9,arg10, \
928 arg11) \ 1101 arg11) \
929 do { \ 1102 do { \
930 volatile OrigFn _orig = (orig); \ 1103 volatile OrigFn _orig = (orig); \
931 volatile unsigned long _argvec[12]; \ 1104 volatile unsigned long _argvec[12]; \
932 volatile unsigned long _res; \ 1105 volatile unsigned long _res; \
933 _argvec[0] = (unsigned long)_orig.nraddr; \ 1106 _argvec[0] = (unsigned long)_orig.nraddr; \
934 _argvec[1] = (unsigned long)(arg1); \ 1107 _argvec[1] = (unsigned long)(arg1); \
935 _argvec[2] = (unsigned long)(arg2); \ 1108 _argvec[2] = (unsigned long)(arg2); \
936 _argvec[3] = (unsigned long)(arg3); \ 1109 _argvec[3] = (unsigned long)(arg3); \
937 _argvec[4] = (unsigned long)(arg4); \ 1110 _argvec[4] = (unsigned long)(arg4); \
938 _argvec[5] = (unsigned long)(arg5); \ 1111 _argvec[5] = (unsigned long)(arg5); \
939 _argvec[6] = (unsigned long)(arg6); \ 1112 _argvec[6] = (unsigned long)(arg6); \
940 _argvec[7] = (unsigned long)(arg7); \ 1113 _argvec[7] = (unsigned long)(arg7); \
941 _argvec[8] = (unsigned long)(arg8); \ 1114 _argvec[8] = (unsigned long)(arg8); \
942 _argvec[9] = (unsigned long)(arg9); \ 1115 _argvec[9] = (unsigned long)(arg9); \
943 _argvec[10] = (unsigned long)(arg10); \ 1116 _argvec[10] = (unsigned long)(arg10); \
944 _argvec[11] = (unsigned long)(arg11); \ 1117 _argvec[11] = (unsigned long)(arg11); \
945 __asm__ volatile( \ 1118 __asm__ volatile( \
1119 "subl $4, %%esp\n\t" \
946 "pushl 44(%%eax)\n\t" \ 1120 "pushl 44(%%eax)\n\t" \
947 "pushl 40(%%eax)\n\t" \ 1121 "pushl 40(%%eax)\n\t" \
948 "pushl 36(%%eax)\n\t" \ 1122 "pushl 36(%%eax)\n\t" \
949 "pushl 32(%%eax)\n\t" \ 1123 "pushl 32(%%eax)\n\t" \
950 "pushl 28(%%eax)\n\t" \ 1124 "pushl 28(%%eax)\n\t" \
951 "pushl 24(%%eax)\n\t" \ 1125 "pushl 24(%%eax)\n\t" \
952 "pushl 20(%%eax)\n\t" \ 1126 "pushl 20(%%eax)\n\t" \
953 "pushl 16(%%eax)\n\t" \ 1127 "pushl 16(%%eax)\n\t" \
954 "pushl 12(%%eax)\n\t" \ 1128 "pushl 12(%%eax)\n\t" \
955 "pushl 8(%%eax)\n\t" \ 1129 "pushl 8(%%eax)\n\t" \
956 "pushl 4(%%eax)\n\t" \ 1130 "pushl 4(%%eax)\n\t" \
957 "movl (%%eax), %%eax\n\t" /* target->%eax */ \ 1131 "movl (%%eax), %%eax\n\t" /* target->%eax */ \
958 VALGRIND_CALL_NOREDIR_EAX \ 1132 VALGRIND_CALL_NOREDIR_EAX \
959 "addl $44, %%esp\n" \ 1133 "addl $48, %%esp\n" \
960 : /*out*/ "=a" (_res) \ 1134 : /*out*/ "=a" (_res) \
961 : /*in*/ "a" (&_argvec[0]) \ 1135 : /*in*/ "a" (&_argvec[0]) \
962 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \ 1136 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \
963 ); \ 1137 ); \
964 lval = (__typeof__(lval)) _res; \ 1138 lval = (__typeof__(lval)) _res; \
965 } while (0) 1139 } while (0)
966 1140
967 #define CALL_FN_W_12W(lval, orig, arg1,arg2,arg3,arg4,arg5, \ 1141 #define CALL_FN_W_12W(lval, orig, arg1,arg2,arg3,arg4,arg5, \
968 arg6,arg7,arg8,arg9,arg10, \ 1142 arg6,arg7,arg8,arg9,arg10, \
969 arg11,arg12) \ 1143 arg11,arg12) \
(...skipping 30 matching lines...)
1000 "movl (%%eax), %%eax\n\t" /* target->%eax */ \ 1174 "movl (%%eax), %%eax\n\t" /* target->%eax */ \
1001 VALGRIND_CALL_NOREDIR_EAX \ 1175 VALGRIND_CALL_NOREDIR_EAX \
1002 "addl $48, %%esp\n" \ 1176 "addl $48, %%esp\n" \
1003 : /*out*/ "=a" (_res) \ 1177 : /*out*/ "=a" (_res) \
1004 : /*in*/ "a" (&_argvec[0]) \ 1178 : /*in*/ "a" (&_argvec[0]) \
1005 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \ 1179 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \
1006 ); \ 1180 ); \
1007 lval = (__typeof__(lval)) _res; \ 1181 lval = (__typeof__(lval)) _res; \
1008 } while (0) 1182 } while (0)
1009 1183
1010 #endif /* PLAT_x86_linux */ 1184 #endif /* PLAT_x86_linux || PLAT_x86_darwin */
1011 1185
1012 /* ------------------------ amd64-linux ------------------------ */ 1186 /* ------------------------ amd64-{linux,darwin} --------------- */
1013 1187
1014 #if defined(PLAT_amd64_linux) 1188 #if defined(PLAT_amd64_linux) || defined(PLAT_amd64_darwin)
1015 1189
1016 /* ARGREGS: rdi rsi rdx rcx r8 r9 (the rest on stack in R-to-L order) */ 1190 /* ARGREGS: rdi rsi rdx rcx r8 r9 (the rest on stack in R-to-L order) */
1017 1191
1018 /* These regs are trashed by the hidden call. */ 1192 /* These regs are trashed by the hidden call. */
1019 #define __CALLER_SAVED_REGS /*"rax",*/ "rcx", "rdx", "rsi", \ 1193 #define __CALLER_SAVED_REGS /*"rax",*/ "rcx", "rdx", "rsi", \
1020 "rdi", "r8", "r9", "r10", "r11" 1194 "rdi", "r8", "r9", "r10", "r11"
1021 1195
1196 /* This is all pretty complex. It's so as to make stack unwinding
1197 work reliably. See bug 243270. The basic problem is the sub and
1198 add of 128 of %rsp in all of the following macros. If gcc believes
1199 the CFA is in %rsp, then unwinding may fail, because what's at the
1200 CFA is not what gcc "expected" when it constructs the CFIs for the
1201 places where the macros are instantiated.
1202
1203 But we can't just add a CFI annotation to increase the CFA offset
1204 by 128, to match the sub of 128 from %rsp, because we don't know
1205 whether gcc has chosen %rsp as the CFA at that point, or whether it
1206 has chosen some other register (eg, %rbp). In the latter case,
1207 adding a CFI annotation to change the CFA offset is simply wrong.
1208
1209 So the solution is to get hold of the CFA using
1210 __builtin_dwarf_cfa(), put it in a known register, and add a
1211 CFI annotation to say what the register is. We choose %rbp for
1212 this (perhaps perversely), because:
1213
1214 (1) %rbp is already subject to unwinding. If a new register was
1215 chosen then the unwinder would have to unwind it in all stack
1216 traces, which is expensive, and
1217
1218 (2) %rbp is already subject to precise exception updates in the
1219 JIT. If a new register was chosen, we'd have to have precise
1220 exceptions for it too, which reduces performance of the
1221 generated code.
1222
1223 However .. one extra complication. We can't just whack the result
1224 of __builtin_dwarf_cfa() into %rbp and then add %rbp to the
1225 list of trashed registers at the end of the inline assembly
1226 fragments; gcc won't allow %rbp to appear in that list. Hence
1227 instead we need to stash %rbp in %r15 for the duration of the asm,
1228 and say that %r15 is trashed instead. gcc seems happy to go with
1229 that.
1230
1231 Oh .. and this all needs to be conditionalised so that it is
1232 unchanged from before this commit, when compiled with older gccs
1233 that don't support __builtin_dwarf_cfa. Furthermore, since
1234 this header file is freestanding, it has to be independent of
1235 config.h, and so the following conditionalisation cannot depend on
1236 configure time checks.
1237
1238 Although it's not clear from
1239 'defined(__GNUC__) && defined(__GCC_HAVE_DWARF2_CFI_ASM)',
1240 this expression excludes Darwin.
1241 .cfi directives in Darwin assembly appear to be completely
1242 different and I haven't investigated how they work.
1243
1244 For even more entertainment value, note we have to use the
1245 completely undocumented __builtin_dwarf_cfa(), which appears to
1246 really compute the CFA, whereas __builtin_frame_address(0) claims
1247 to but actually doesn't. See
1248 https://bugs.kde.org/show_bug.cgi?id=243270#c47
1249 */
1250 #if defined(__GNUC__) && defined(__GCC_HAVE_DWARF2_CFI_ASM)
1251 # define __FRAME_POINTER \
1252 ,"r"(__builtin_dwarf_cfa())
1253 # define VALGRIND_CFI_PROLOGUE \
1254 "movq %%rbp, %%r15\n\t" \
1255 "movq %2, %%rbp\n\t" \
1256 ".cfi_remember_state\n\t" \
1257 ".cfi_def_cfa rbp, 0\n\t"
1258 # define VALGRIND_CFI_EPILOGUE \
1259 "movq %%r15, %%rbp\n\t" \
1260 ".cfi_restore_state\n\t"
1261 #else
1262 # define __FRAME_POINTER
1263 # define VALGRIND_CFI_PROLOGUE
1264 # define VALGRIND_CFI_EPILOGUE
1265 #endif
1266
1267
1022 /* These CALL_FN_ macros assume that on amd64-linux, sizeof(unsigned 1268 /* These CALL_FN_ macros assume that on amd64-linux, sizeof(unsigned
1023 long) == 8. */ 1269 long) == 8. */
1024 1270
1025 /* NB 9 Sept 07. There is a nasty kludge here in all these CALL_FN_ 1271 /* NB 9 Sept 07. There is a nasty kludge here in all these CALL_FN_
1026 macros. In order not to trash the stack redzone, we need to drop 1272 macros. In order not to trash the stack redzone, we need to drop
1027 %rsp by 128 before the hidden call, and restore afterwards. The 1273 %rsp by 128 before the hidden call, and restore afterwards. The
1028 nastyness is that it is only by luck that the stack still appears 1274 nastyness is that it is only by luck that the stack still appears
1029 to be unwindable during the hidden call - since then the behaviour 1275 to be unwindable during the hidden call - since then the behaviour
1030 of any routine using this macro does not match what the CFI data 1276 of any routine using this macro does not match what the CFI data
1031 says. Sigh. 1277 says. Sigh.
(...skipping 11 matching lines...)
1043 with the stack pointer doesn't give a danger of non-unwindable 1289 with the stack pointer doesn't give a danger of non-unwindable
1044 stack. */ 1290 stack. */
1045 1291
1046 #define CALL_FN_W_v(lval, orig) \ 1292 #define CALL_FN_W_v(lval, orig) \
1047 do { \ 1293 do { \
1048 volatile OrigFn _orig = (orig); \ 1294 volatile OrigFn _orig = (orig); \
1049 volatile unsigned long _argvec[1]; \ 1295 volatile unsigned long _argvec[1]; \
1050 volatile unsigned long _res; \ 1296 volatile unsigned long _res; \
1051 _argvec[0] = (unsigned long)_orig.nraddr; \ 1297 _argvec[0] = (unsigned long)_orig.nraddr; \
1052 __asm__ volatile( \ 1298 __asm__ volatile( \
1299 VALGRIND_CFI_PROLOGUE \
1053 "subq $128,%%rsp\n\t" \ 1300 "subq $128,%%rsp\n\t" \
1054 "movq (%%rax), %%rax\n\t" /* target->%rax */ \ 1301 "movq (%%rax), %%rax\n\t" /* target->%rax */ \
1055 VALGRIND_CALL_NOREDIR_RAX \ 1302 VALGRIND_CALL_NOREDIR_RAX \
1056 "addq $128,%%rsp\n\t" \ 1303 "addq $128,%%rsp\n\t" \
1304 VALGRIND_CFI_EPILOGUE \
1057 : /*out*/ "=a" (_res) \ 1305 : /*out*/ "=a" (_res) \
1058 : /*in*/ "a" (&_argvec[0]) \ 1306 : /*in*/ "a" (&_argvec[0]) __FRAME_POINTER \
1059 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \ 1307 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r15" \
1060 ); \ 1308 ); \
1061 lval = (__typeof__(lval)) _res; \ 1309 lval = (__typeof__(lval)) _res; \
1062 } while (0) 1310 } while (0)
1063 1311
1064 #define CALL_FN_W_W(lval, orig, arg1) \ 1312 #define CALL_FN_W_W(lval, orig, arg1) \
1065 do { \ 1313 do { \
1066 volatile OrigFn _orig = (orig); \ 1314 volatile OrigFn _orig = (orig); \
1067 volatile unsigned long _argvec[2]; \ 1315 volatile unsigned long _argvec[2]; \
1068 volatile unsigned long _res; \ 1316 volatile unsigned long _res; \
1069 _argvec[0] = (unsigned long)_orig.nraddr; \ 1317 _argvec[0] = (unsigned long)_orig.nraddr; \
1070 _argvec[1] = (unsigned long)(arg1); \ 1318 _argvec[1] = (unsigned long)(arg1); \
1071 __asm__ volatile( \ 1319 __asm__ volatile( \
1320 VALGRIND_CFI_PROLOGUE \
1072 "subq $128,%%rsp\n\t" \ 1321 "subq $128,%%rsp\n\t" \
1073 "movq 8(%%rax), %%rdi\n\t" \ 1322 "movq 8(%%rax), %%rdi\n\t" \
1074 "movq (%%rax), %%rax\n\t" /* target->%rax */ \ 1323 "movq (%%rax), %%rax\n\t" /* target->%rax */ \
1075 VALGRIND_CALL_NOREDIR_RAX \ 1324 VALGRIND_CALL_NOREDIR_RAX \
1076 "addq $128,%%rsp\n\t" \ 1325 "addq $128,%%rsp\n\t" \
1326 VALGRIND_CFI_EPILOGUE \
1077 : /*out*/ "=a" (_res) \ 1327 : /*out*/ "=a" (_res) \
1078 : /*in*/ "a" (&_argvec[0]) \ 1328 : /*in*/ "a" (&_argvec[0]) __FRAME_POINTER \
1079 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \ 1329 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r15" \
1080 ); \ 1330 ); \
1081 lval = (__typeof__(lval)) _res; \ 1331 lval = (__typeof__(lval)) _res; \
1082 } while (0) 1332 } while (0)
1083 1333
1084 #define CALL_FN_W_WW(lval, orig, arg1,arg2) \ 1334 #define CALL_FN_W_WW(lval, orig, arg1,arg2) \
1085 do { \ 1335 do { \
1086 volatile OrigFn _orig = (orig); \ 1336 volatile OrigFn _orig = (orig); \
1087 volatile unsigned long _argvec[3]; \ 1337 volatile unsigned long _argvec[3]; \
1088 volatile unsigned long _res; \ 1338 volatile unsigned long _res; \
1089 _argvec[0] = (unsigned long)_orig.nraddr; \ 1339 _argvec[0] = (unsigned long)_orig.nraddr; \
1090 _argvec[1] = (unsigned long)(arg1); \ 1340 _argvec[1] = (unsigned long)(arg1); \
1091 _argvec[2] = (unsigned long)(arg2); \ 1341 _argvec[2] = (unsigned long)(arg2); \
1092 __asm__ volatile( \ 1342 __asm__ volatile( \
1343 VALGRIND_CFI_PROLOGUE \
1093 "subq $128,%%rsp\n\t" \ 1344 "subq $128,%%rsp\n\t" \
1094 "movq 16(%%rax), %%rsi\n\t" \ 1345 "movq 16(%%rax), %%rsi\n\t" \
1095 "movq 8(%%rax), %%rdi\n\t" \ 1346 "movq 8(%%rax), %%rdi\n\t" \
1096 "movq (%%rax), %%rax\n\t" /* target->%rax */ \ 1347 "movq (%%rax), %%rax\n\t" /* target->%rax */ \
1097 VALGRIND_CALL_NOREDIR_RAX \ 1348 VALGRIND_CALL_NOREDIR_RAX \
1098 "addq $128,%%rsp\n\t" \ 1349 "addq $128,%%rsp\n\t" \
1350 VALGRIND_CFI_EPILOGUE \
1099 : /*out*/ "=a" (_res) \ 1351 : /*out*/ "=a" (_res) \
1100 : /*in*/ "a" (&_argvec[0]) \ 1352 : /*in*/ "a" (&_argvec[0]) __FRAME_POINTER \
1101 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \ 1353 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r15" \
1102 ); \ 1354 ); \
1103 lval = (__typeof__(lval)) _res; \ 1355 lval = (__typeof__(lval)) _res; \
1104 } while (0) 1356 } while (0)
1105 1357
1106 #define CALL_FN_W_WWW(lval, orig, arg1,arg2,arg3) \ 1358 #define CALL_FN_W_WWW(lval, orig, arg1,arg2,arg3) \
1107 do { \ 1359 do { \
1108 volatile OrigFn _orig = (orig); \ 1360 volatile OrigFn _orig = (orig); \
1109 volatile unsigned long _argvec[4]; \ 1361 volatile unsigned long _argvec[4]; \
1110 volatile unsigned long _res; \ 1362 volatile unsigned long _res; \
1111 _argvec[0] = (unsigned long)_orig.nraddr; \ 1363 _argvec[0] = (unsigned long)_orig.nraddr; \
1112 _argvec[1] = (unsigned long)(arg1); \ 1364 _argvec[1] = (unsigned long)(arg1); \
1113 _argvec[2] = (unsigned long)(arg2); \ 1365 _argvec[2] = (unsigned long)(arg2); \
1114 _argvec[3] = (unsigned long)(arg3); \ 1366 _argvec[3] = (unsigned long)(arg3); \
1115 __asm__ volatile( \ 1367 __asm__ volatile( \
1368 VALGRIND_CFI_PROLOGUE \
1116 "subq $128,%%rsp\n\t" \ 1369 "subq $128,%%rsp\n\t" \
1117 "movq 24(%%rax), %%rdx\n\t" \ 1370 "movq 24(%%rax), %%rdx\n\t" \
1118 "movq 16(%%rax), %%rsi\n\t" \ 1371 "movq 16(%%rax), %%rsi\n\t" \
1119 "movq 8(%%rax), %%rdi\n\t" \ 1372 "movq 8(%%rax), %%rdi\n\t" \
1120 "movq (%%rax), %%rax\n\t" /* target->%rax */ \ 1373 "movq (%%rax), %%rax\n\t" /* target->%rax */ \
1121 VALGRIND_CALL_NOREDIR_RAX \ 1374 VALGRIND_CALL_NOREDIR_RAX \
1122 "addq $128,%%rsp\n\t" \ 1375 "addq $128,%%rsp\n\t" \
1376 VALGRIND_CFI_EPILOGUE \
1123 : /*out*/ "=a" (_res) \ 1377 : /*out*/ "=a" (_res) \
1124 : /*in*/ "a" (&_argvec[0]) \ 1378 : /*in*/ "a" (&_argvec[0]) __FRAME_POINTER \
1125 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \ 1379 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r15" \
1126 ); \ 1380 ); \
1127 lval = (__typeof__(lval)) _res; \ 1381 lval = (__typeof__(lval)) _res; \
1128 } while (0) 1382 } while (0)
1129 1383
1130 #define CALL_FN_W_WWWW(lval, orig, arg1,arg2,arg3,arg4) \ 1384 #define CALL_FN_W_WWWW(lval, orig, arg1,arg2,arg3,arg4) \
1131 do { \ 1385 do { \
1132 volatile OrigFn _orig = (orig); \ 1386 volatile OrigFn _orig = (orig); \
1133 volatile unsigned long _argvec[5]; \ 1387 volatile unsigned long _argvec[5]; \
1134 volatile unsigned long _res; \ 1388 volatile unsigned long _res; \
1135 _argvec[0] = (unsigned long)_orig.nraddr; \ 1389 _argvec[0] = (unsigned long)_orig.nraddr; \
1136 _argvec[1] = (unsigned long)(arg1); \ 1390 _argvec[1] = (unsigned long)(arg1); \
1137 _argvec[2] = (unsigned long)(arg2); \ 1391 _argvec[2] = (unsigned long)(arg2); \
1138 _argvec[3] = (unsigned long)(arg3); \ 1392 _argvec[3] = (unsigned long)(arg3); \
1139 _argvec[4] = (unsigned long)(arg4); \ 1393 _argvec[4] = (unsigned long)(arg4); \
1140 __asm__ volatile( \ 1394 __asm__ volatile( \
1395 VALGRIND_CFI_PROLOGUE \
1141 "subq $128,%%rsp\n\t" \ 1396 "subq $128,%%rsp\n\t" \
1142 "movq 32(%%rax), %%rcx\n\t" \ 1397 "movq 32(%%rax), %%rcx\n\t" \
1143 "movq 24(%%rax), %%rdx\n\t" \ 1398 "movq 24(%%rax), %%rdx\n\t" \
1144 "movq 16(%%rax), %%rsi\n\t" \ 1399 "movq 16(%%rax), %%rsi\n\t" \
1145 "movq 8(%%rax), %%rdi\n\t" \ 1400 "movq 8(%%rax), %%rdi\n\t" \
1146 "movq (%%rax), %%rax\n\t" /* target->%rax */ \ 1401 "movq (%%rax), %%rax\n\t" /* target->%rax */ \
1147 VALGRIND_CALL_NOREDIR_RAX \ 1402 VALGRIND_CALL_NOREDIR_RAX \
1148 "addq $128,%%rsp\n\t" \ 1403 "addq $128,%%rsp\n\t" \
1404 VALGRIND_CFI_EPILOGUE \
1149 : /*out*/ "=a" (_res) \ 1405 : /*out*/ "=a" (_res) \
1150 : /*in*/ "a" (&_argvec[0]) \ 1406 : /*in*/ "a" (&_argvec[0]) __FRAME_POINTER \
1151 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \ 1407 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r15" \
1152 ); \ 1408 ); \
1153 lval = (__typeof__(lval)) _res; \ 1409 lval = (__typeof__(lval)) _res; \
1154 } while (0) 1410 } while (0)
1155 1411
1156 #define CALL_FN_W_5W(lval, orig, arg1,arg2,arg3,arg4,arg5) \ 1412 #define CALL_FN_W_5W(lval, orig, arg1,arg2,arg3,arg4,arg5) \
1157 do { \ 1413 do { \
1158 volatile OrigFn _orig = (orig); \ 1414 volatile OrigFn _orig = (orig); \
1159 volatile unsigned long _argvec[6]; \ 1415 volatile unsigned long _argvec[6]; \
1160 volatile unsigned long _res; \ 1416 volatile unsigned long _res; \
1161 _argvec[0] = (unsigned long)_orig.nraddr; \ 1417 _argvec[0] = (unsigned long)_orig.nraddr; \
1162 _argvec[1] = (unsigned long)(arg1); \ 1418 _argvec[1] = (unsigned long)(arg1); \
1163 _argvec[2] = (unsigned long)(arg2); \ 1419 _argvec[2] = (unsigned long)(arg2); \
1164 _argvec[3] = (unsigned long)(arg3); \ 1420 _argvec[3] = (unsigned long)(arg3); \
1165 _argvec[4] = (unsigned long)(arg4); \ 1421 _argvec[4] = (unsigned long)(arg4); \
1166 _argvec[5] = (unsigned long)(arg5); \ 1422 _argvec[5] = (unsigned long)(arg5); \
1167 __asm__ volatile( \ 1423 __asm__ volatile( \
1424 VALGRIND_CFI_PROLOGUE \
1168 "subq $128,%%rsp\n\t" \ 1425 "subq $128,%%rsp\n\t" \
1169 "movq 40(%%rax), %%r8\n\t" \ 1426 "movq 40(%%rax), %%r8\n\t" \
1170 "movq 32(%%rax), %%rcx\n\t" \ 1427 "movq 32(%%rax), %%rcx\n\t" \
1171 "movq 24(%%rax), %%rdx\n\t" \ 1428 "movq 24(%%rax), %%rdx\n\t" \
1172 "movq 16(%%rax), %%rsi\n\t" \ 1429 "movq 16(%%rax), %%rsi\n\t" \
1173 "movq 8(%%rax), %%rdi\n\t" \ 1430 "movq 8(%%rax), %%rdi\n\t" \
1174 "movq (%%rax), %%rax\n\t" /* target->%rax */ \ 1431 "movq (%%rax), %%rax\n\t" /* target->%rax */ \
1175 VALGRIND_CALL_NOREDIR_RAX \ 1432 VALGRIND_CALL_NOREDIR_RAX \
1176 "addq $128,%%rsp\n\t" \ 1433 "addq $128,%%rsp\n\t" \
1434 VALGRIND_CFI_EPILOGUE \
1177 : /*out*/ "=a" (_res) \ 1435 : /*out*/ "=a" (_res) \
1178 : /*in*/ "a" (&_argvec[0]) \ 1436 : /*in*/ "a" (&_argvec[0]) __FRAME_POINTER \
1179 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \ 1437 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r15" \
1180 ); \ 1438 ); \
1181 lval = (__typeof__(lval)) _res; \ 1439 lval = (__typeof__(lval)) _res; \
1182 } while (0) 1440 } while (0)
1183 1441
1184 #define CALL_FN_W_6W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6) \ 1442 #define CALL_FN_W_6W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6) \
1185 do { \ 1443 do { \
1186 volatile OrigFn _orig = (orig); \ 1444 volatile OrigFn _orig = (orig); \
1187 volatile unsigned long _argvec[7]; \ 1445 volatile unsigned long _argvec[7]; \
1188 volatile unsigned long _res; \ 1446 volatile unsigned long _res; \
1189 _argvec[0] = (unsigned long)_orig.nraddr; \ 1447 _argvec[0] = (unsigned long)_orig.nraddr; \
1190 _argvec[1] = (unsigned long)(arg1); \ 1448 _argvec[1] = (unsigned long)(arg1); \
1191 _argvec[2] = (unsigned long)(arg2); \ 1449 _argvec[2] = (unsigned long)(arg2); \
1192 _argvec[3] = (unsigned long)(arg3); \ 1450 _argvec[3] = (unsigned long)(arg3); \
1193 _argvec[4] = (unsigned long)(arg4); \ 1451 _argvec[4] = (unsigned long)(arg4); \
1194 _argvec[5] = (unsigned long)(arg5); \ 1452 _argvec[5] = (unsigned long)(arg5); \
1195 _argvec[6] = (unsigned long)(arg6); \ 1453 _argvec[6] = (unsigned long)(arg6); \
1196 __asm__ volatile( \ 1454 __asm__ volatile( \
1455 VALGRIND_CFI_PROLOGUE \
1197 "subq $128,%%rsp\n\t" \ 1456 "subq $128,%%rsp\n\t" \
1198 "movq 48(%%rax), %%r9\n\t" \ 1457 "movq 48(%%rax), %%r9\n\t" \
1199 "movq 40(%%rax), %%r8\n\t" \ 1458 "movq 40(%%rax), %%r8\n\t" \
1200 "movq 32(%%rax), %%rcx\n\t" \ 1459 "movq 32(%%rax), %%rcx\n\t" \
1201 "movq 24(%%rax), %%rdx\n\t" \ 1460 "movq 24(%%rax), %%rdx\n\t" \
1202 "movq 16(%%rax), %%rsi\n\t" \ 1461 "movq 16(%%rax), %%rsi\n\t" \
1203 "movq 8(%%rax), %%rdi\n\t" \ 1462 "movq 8(%%rax), %%rdi\n\t" \
1204 "movq (%%rax), %%rax\n\t" /* target->%rax */ \ 1463 "movq (%%rax), %%rax\n\t" /* target->%rax */ \
1464 VALGRIND_CALL_NOREDIR_RAX \
1205 "addq $128,%%rsp\n\t" \ 1465 "addq $128,%%rsp\n\t" \
1206 VALGRIND_CALL_NOREDIR_RAX \ 1466 VALGRIND_CFI_EPILOGUE \
1207 : /*out*/ "=a" (_res) \ 1467 : /*out*/ "=a" (_res) \
1208 : /*in*/ "a" (&_argvec[0]) \ 1468 : /*in*/ "a" (&_argvec[0]) __FRAME_POINTER \
1209 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \ 1469 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r15" \
1210 ); \ 1470 ); \
1211 lval = (__typeof__(lval)) _res; \ 1471 lval = (__typeof__(lval)) _res; \
1212 } while (0) 1472 } while (0)
1213 1473
1214 #define CALL_FN_W_7W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \ 1474 #define CALL_FN_W_7W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \
1215 arg7) \ 1475 arg7) \
1216 do { \ 1476 do { \
1217 volatile OrigFn _orig = (orig); \ 1477 volatile OrigFn _orig = (orig); \
1218 volatile unsigned long _argvec[8]; \ 1478 volatile unsigned long _argvec[8]; \
1219 volatile unsigned long _res; \ 1479 volatile unsigned long _res; \
1220 _argvec[0] = (unsigned long)_orig.nraddr; \ 1480 _argvec[0] = (unsigned long)_orig.nraddr; \
1221 _argvec[1] = (unsigned long)(arg1); \ 1481 _argvec[1] = (unsigned long)(arg1); \
1222 _argvec[2] = (unsigned long)(arg2); \ 1482 _argvec[2] = (unsigned long)(arg2); \
1223 _argvec[3] = (unsigned long)(arg3); \ 1483 _argvec[3] = (unsigned long)(arg3); \
1224 _argvec[4] = (unsigned long)(arg4); \ 1484 _argvec[4] = (unsigned long)(arg4); \
1225 _argvec[5] = (unsigned long)(arg5); \ 1485 _argvec[5] = (unsigned long)(arg5); \
1226 _argvec[6] = (unsigned long)(arg6); \ 1486 _argvec[6] = (unsigned long)(arg6); \
1227 _argvec[7] = (unsigned long)(arg7); \ 1487 _argvec[7] = (unsigned long)(arg7); \
1228 __asm__ volatile( \ 1488 __asm__ volatile( \
1229 "subq $128,%%rsp\n\t" \ 1489 VALGRIND_CFI_PROLOGUE \
1490 "subq $136,%%rsp\n\t" \
1230 "pushq 56(%%rax)\n\t" \ 1491 "pushq 56(%%rax)\n\t" \
1231 "movq 48(%%rax), %%r9\n\t" \ 1492 "movq 48(%%rax), %%r9\n\t" \
1232 "movq 40(%%rax), %%r8\n\t" \ 1493 "movq 40(%%rax), %%r8\n\t" \
1233 "movq 32(%%rax), %%rcx\n\t" \ 1494 "movq 32(%%rax), %%rcx\n\t" \
1234 "movq 24(%%rax), %%rdx\n\t" \ 1495 "movq 24(%%rax), %%rdx\n\t" \
1235 "movq 16(%%rax), %%rsi\n\t" \ 1496 "movq 16(%%rax), %%rsi\n\t" \
1236 "movq 8(%%rax), %%rdi\n\t" \ 1497 "movq 8(%%rax), %%rdi\n\t" \
1237 "movq (%%rax), %%rax\n\t" /* target->%rax */ \ 1498 "movq (%%rax), %%rax\n\t" /* target->%rax */ \
1238 VALGRIND_CALL_NOREDIR_RAX \ 1499 VALGRIND_CALL_NOREDIR_RAX \
1239 "addq $8, %%rsp\n" \ 1500 "addq $8, %%rsp\n" \
1240 "addq $128,%%rsp\n\t" \ 1501 "addq $136,%%rsp\n\t" \
1502 VALGRIND_CFI_EPILOGUE \
1241 : /*out*/ "=a" (_res) \ 1503 : /*out*/ "=a" (_res) \
1242 : /*in*/ "a" (&_argvec[0]) \ 1504 : /*in*/ "a" (&_argvec[0]) __FRAME_POINTER \
1243 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \ 1505 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r15" \
1244 ); \ 1506 ); \
1245 lval = (__typeof__(lval)) _res; \ 1507 lval = (__typeof__(lval)) _res; \
1246 } while (0) 1508 } while (0)
1247 1509
1248 #define CALL_FN_W_8W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \ 1510 #define CALL_FN_W_8W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \
1249 arg7,arg8) \ 1511 arg7,arg8) \
1250 do { \ 1512 do { \
1251 volatile OrigFn _orig = (orig); \ 1513 volatile OrigFn _orig = (orig); \
1252 volatile unsigned long _argvec[9]; \ 1514 volatile unsigned long _argvec[9]; \
1253 volatile unsigned long _res; \ 1515 volatile unsigned long _res; \
1254 _argvec[0] = (unsigned long)_orig.nraddr; \ 1516 _argvec[0] = (unsigned long)_orig.nraddr; \
1255 _argvec[1] = (unsigned long)(arg1); \ 1517 _argvec[1] = (unsigned long)(arg1); \
1256 _argvec[2] = (unsigned long)(arg2); \ 1518 _argvec[2] = (unsigned long)(arg2); \
1257 _argvec[3] = (unsigned long)(arg3); \ 1519 _argvec[3] = (unsigned long)(arg3); \
1258 _argvec[4] = (unsigned long)(arg4); \ 1520 _argvec[4] = (unsigned long)(arg4); \
1259 _argvec[5] = (unsigned long)(arg5); \ 1521 _argvec[5] = (unsigned long)(arg5); \
1260 _argvec[6] = (unsigned long)(arg6); \ 1522 _argvec[6] = (unsigned long)(arg6); \
1261 _argvec[7] = (unsigned long)(arg7); \ 1523 _argvec[7] = (unsigned long)(arg7); \
1262 _argvec[8] = (unsigned long)(arg8); \ 1524 _argvec[8] = (unsigned long)(arg8); \
1263 __asm__ volatile( \ 1525 __asm__ volatile( \
1526 VALGRIND_CFI_PROLOGUE \
1264 "subq $128,%%rsp\n\t" \ 1527 "subq $128,%%rsp\n\t" \
1265 "pushq 64(%%rax)\n\t" \ 1528 "pushq 64(%%rax)\n\t" \
1266 "pushq 56(%%rax)\n\t" \ 1529 "pushq 56(%%rax)\n\t" \
1267 "movq 48(%%rax), %%r9\n\t" \ 1530 "movq 48(%%rax), %%r9\n\t" \
1268 "movq 40(%%rax), %%r8\n\t" \ 1531 "movq 40(%%rax), %%r8\n\t" \
1269 "movq 32(%%rax), %%rcx\n\t" \ 1532 "movq 32(%%rax), %%rcx\n\t" \
1270 "movq 24(%%rax), %%rdx\n\t" \ 1533 "movq 24(%%rax), %%rdx\n\t" \
1271 "movq 16(%%rax), %%rsi\n\t" \ 1534 "movq 16(%%rax), %%rsi\n\t" \
1272 "movq 8(%%rax), %%rdi\n\t" \ 1535 "movq 8(%%rax), %%rdi\n\t" \
1273 "movq (%%rax), %%rax\n\t" /* target->%rax */ \ 1536 "movq (%%rax), %%rax\n\t" /* target->%rax */ \
1274 VALGRIND_CALL_NOREDIR_RAX \ 1537 VALGRIND_CALL_NOREDIR_RAX \
1275 "addq $16, %%rsp\n" \ 1538 "addq $16, %%rsp\n" \
1276 "addq $128,%%rsp\n\t" \ 1539 "addq $128,%%rsp\n\t" \
1540 VALGRIND_CFI_EPILOGUE \
1277 : /*out*/ "=a" (_res) \ 1541 : /*out*/ "=a" (_res) \
1278 : /*in*/ "a" (&_argvec[0]) \ 1542 : /*in*/ "a" (&_argvec[0]) __FRAME_POINTER \
1279 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \ 1543 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r15" \
1280 ); \ 1544 ); \
1281 lval = (__typeof__(lval)) _res; \ 1545 lval = (__typeof__(lval)) _res; \
1282 } while (0) 1546 } while (0)
1283 1547
1284 #define CALL_FN_W_9W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \ 1548 #define CALL_FN_W_9W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \
1285 arg7,arg8,arg9) \ 1549 arg7,arg8,arg9) \
1286 do { \ 1550 do { \
1287 volatile OrigFn _orig = (orig); \ 1551 volatile OrigFn _orig = (orig); \
1288 volatile unsigned long _argvec[10]; \ 1552 volatile unsigned long _argvec[10]; \
1289 volatile unsigned long _res; \ 1553 volatile unsigned long _res; \
1290 _argvec[0] = (unsigned long)_orig.nraddr; \ 1554 _argvec[0] = (unsigned long)_orig.nraddr; \
1291 _argvec[1] = (unsigned long)(arg1); \ 1555 _argvec[1] = (unsigned long)(arg1); \
1292 _argvec[2] = (unsigned long)(arg2); \ 1556 _argvec[2] = (unsigned long)(arg2); \
1293 _argvec[3] = (unsigned long)(arg3); \ 1557 _argvec[3] = (unsigned long)(arg3); \
1294 _argvec[4] = (unsigned long)(arg4); \ 1558 _argvec[4] = (unsigned long)(arg4); \
1295 _argvec[5] = (unsigned long)(arg5); \ 1559 _argvec[5] = (unsigned long)(arg5); \
1296 _argvec[6] = (unsigned long)(arg6); \ 1560 _argvec[6] = (unsigned long)(arg6); \
1297 _argvec[7] = (unsigned long)(arg7); \ 1561 _argvec[7] = (unsigned long)(arg7); \
1298 _argvec[8] = (unsigned long)(arg8); \ 1562 _argvec[8] = (unsigned long)(arg8); \
1299 _argvec[9] = (unsigned long)(arg9); \ 1563 _argvec[9] = (unsigned long)(arg9); \
1300 __asm__ volatile( \ 1564 __asm__ volatile( \
1301 "subq $128,%%rsp\n\t" \ 1565 VALGRIND_CFI_PROLOGUE \
1566 "subq $136,%%rsp\n\t" \
1302 "pushq 72(%%rax)\n\t" \ 1567 "pushq 72(%%rax)\n\t" \
1303 "pushq 64(%%rax)\n\t" \ 1568 "pushq 64(%%rax)\n\t" \
1304 "pushq 56(%%rax)\n\t" \ 1569 "pushq 56(%%rax)\n\t" \
1305 "movq 48(%%rax), %%r9\n\t" \ 1570 "movq 48(%%rax), %%r9\n\t" \
1306 "movq 40(%%rax), %%r8\n\t" \ 1571 "movq 40(%%rax), %%r8\n\t" \
1307 "movq 32(%%rax), %%rcx\n\t" \ 1572 "movq 32(%%rax), %%rcx\n\t" \
1308 "movq 24(%%rax), %%rdx\n\t" \ 1573 "movq 24(%%rax), %%rdx\n\t" \
1309 "movq 16(%%rax), %%rsi\n\t" \ 1574 "movq 16(%%rax), %%rsi\n\t" \
1310 "movq 8(%%rax), %%rdi\n\t" \ 1575 "movq 8(%%rax), %%rdi\n\t" \
1311 "movq (%%rax), %%rax\n\t" /* target->%rax */ \ 1576 "movq (%%rax), %%rax\n\t" /* target->%rax */ \
1312 VALGRIND_CALL_NOREDIR_RAX \ 1577 VALGRIND_CALL_NOREDIR_RAX \
1313 "addq $24, %%rsp\n" \ 1578 "addq $24, %%rsp\n" \
1314 "addq $128,%%rsp\n\t" \ 1579 "addq $136,%%rsp\n\t" \
1580 VALGRIND_CFI_EPILOGUE \
1315 : /*out*/ "=a" (_res) \ 1581 : /*out*/ "=a" (_res) \
1316 : /*in*/ "a" (&_argvec[0]) \ 1582 : /*in*/ "a" (&_argvec[0]) __FRAME_POINTER \
1317 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \ 1583 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r15" \
1318 ); \ 1584 ); \
1319 lval = (__typeof__(lval)) _res; \ 1585 lval = (__typeof__(lval)) _res; \
1320 } while (0) 1586 } while (0)
1321 1587
1322 #define CALL_FN_W_10W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \ 1588 #define CALL_FN_W_10W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \
1323 arg7,arg8,arg9,arg10) \ 1589 arg7,arg8,arg9,arg10) \
1324 do { \ 1590 do { \
1325 volatile OrigFn _orig = (orig); \ 1591 volatile OrigFn _orig = (orig); \
1326 volatile unsigned long _argvec[11]; \ 1592 volatile unsigned long _argvec[11]; \
1327 volatile unsigned long _res; \ 1593 volatile unsigned long _res; \
1328 _argvec[0] = (unsigned long)_orig.nraddr; \ 1594 _argvec[0] = (unsigned long)_orig.nraddr; \
1329 _argvec[1] = (unsigned long)(arg1); \ 1595 _argvec[1] = (unsigned long)(arg1); \
1330 _argvec[2] = (unsigned long)(arg2); \ 1596 _argvec[2] = (unsigned long)(arg2); \
1331 _argvec[3] = (unsigned long)(arg3); \ 1597 _argvec[3] = (unsigned long)(arg3); \
1332 _argvec[4] = (unsigned long)(arg4); \ 1598 _argvec[4] = (unsigned long)(arg4); \
1333 _argvec[5] = (unsigned long)(arg5); \ 1599 _argvec[5] = (unsigned long)(arg5); \
1334 _argvec[6] = (unsigned long)(arg6); \ 1600 _argvec[6] = (unsigned long)(arg6); \
1335 _argvec[7] = (unsigned long)(arg7); \ 1601 _argvec[7] = (unsigned long)(arg7); \
1336 _argvec[8] = (unsigned long)(arg8); \ 1602 _argvec[8] = (unsigned long)(arg8); \
1337 _argvec[9] = (unsigned long)(arg9); \ 1603 _argvec[9] = (unsigned long)(arg9); \
1338 _argvec[10] = (unsigned long)(arg10); \ 1604 _argvec[10] = (unsigned long)(arg10); \
1339 __asm__ volatile( \ 1605 __asm__ volatile( \
1606 VALGRIND_CFI_PROLOGUE \
1340 "subq $128,%%rsp\n\t" \ 1607 "subq $128,%%rsp\n\t" \
1341 "pushq 80(%%rax)\n\t" \ 1608 "pushq 80(%%rax)\n\t" \
1342 "pushq 72(%%rax)\n\t" \ 1609 "pushq 72(%%rax)\n\t" \
1343 "pushq 64(%%rax)\n\t" \ 1610 "pushq 64(%%rax)\n\t" \
1344 "pushq 56(%%rax)\n\t" \ 1611 "pushq 56(%%rax)\n\t" \
1345 "movq 48(%%rax), %%r9\n\t" \ 1612 "movq 48(%%rax), %%r9\n\t" \
1346 "movq 40(%%rax), %%r8\n\t" \ 1613 "movq 40(%%rax), %%r8\n\t" \
1347 "movq 32(%%rax), %%rcx\n\t" \ 1614 "movq 32(%%rax), %%rcx\n\t" \
1348 "movq 24(%%rax), %%rdx\n\t" \ 1615 "movq 24(%%rax), %%rdx\n\t" \
1349 "movq 16(%%rax), %%rsi\n\t" \ 1616 "movq 16(%%rax), %%rsi\n\t" \
1350 "movq 8(%%rax), %%rdi\n\t" \ 1617 "movq 8(%%rax), %%rdi\n\t" \
1351 "movq (%%rax), %%rax\n\t" /* target->%rax */ \ 1618 "movq (%%rax), %%rax\n\t" /* target->%rax */ \
1352 VALGRIND_CALL_NOREDIR_RAX \ 1619 VALGRIND_CALL_NOREDIR_RAX \
1353 "addq $32, %%rsp\n" \ 1620 "addq $32, %%rsp\n" \
1354 "addq $128,%%rsp\n\t" \ 1621 "addq $128,%%rsp\n\t" \
1622 VALGRIND_CFI_EPILOGUE \
1355 : /*out*/ "=a" (_res) \ 1623 : /*out*/ "=a" (_res) \
1356 : /*in*/ "a" (&_argvec[0]) \ 1624 : /*in*/ "a" (&_argvec[0]) __FRAME_POINTER \
1357 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \ 1625 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r15" \
1358 ); \ 1626 ); \
1359 lval = (__typeof__(lval)) _res; \ 1627 lval = (__typeof__(lval)) _res; \
1360 } while (0) 1628 } while (0)
1361 1629
1362 #define CALL_FN_W_11W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \ 1630 #define CALL_FN_W_11W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \
1363 arg7,arg8,arg9,arg10,arg11) \ 1631 arg7,arg8,arg9,arg10,arg11) \
1364 do { \ 1632 do { \
1365 volatile OrigFn _orig = (orig); \ 1633 volatile OrigFn _orig = (orig); \
1366 volatile unsigned long _argvec[12]; \ 1634 volatile unsigned long _argvec[12]; \
1367 volatile unsigned long _res; \ 1635 volatile unsigned long _res; \
1368 _argvec[0] = (unsigned long)_orig.nraddr; \ 1636 _argvec[0] = (unsigned long)_orig.nraddr; \
1369 _argvec[1] = (unsigned long)(arg1); \ 1637 _argvec[1] = (unsigned long)(arg1); \
1370 _argvec[2] = (unsigned long)(arg2); \ 1638 _argvec[2] = (unsigned long)(arg2); \
1371 _argvec[3] = (unsigned long)(arg3); \ 1639 _argvec[3] = (unsigned long)(arg3); \
1372 _argvec[4] = (unsigned long)(arg4); \ 1640 _argvec[4] = (unsigned long)(arg4); \
1373 _argvec[5] = (unsigned long)(arg5); \ 1641 _argvec[5] = (unsigned long)(arg5); \
1374 _argvec[6] = (unsigned long)(arg6); \ 1642 _argvec[6] = (unsigned long)(arg6); \
1375 _argvec[7] = (unsigned long)(arg7); \ 1643 _argvec[7] = (unsigned long)(arg7); \
1376 _argvec[8] = (unsigned long)(arg8); \ 1644 _argvec[8] = (unsigned long)(arg8); \
1377 _argvec[9] = (unsigned long)(arg9); \ 1645 _argvec[9] = (unsigned long)(arg9); \
1378 _argvec[10] = (unsigned long)(arg10); \ 1646 _argvec[10] = (unsigned long)(arg10); \
1379 _argvec[11] = (unsigned long)(arg11); \ 1647 _argvec[11] = (unsigned long)(arg11); \
1380 __asm__ volatile( \ 1648 __asm__ volatile( \
1381 "subq $128,%%rsp\n\t" \ 1649 VALGRIND_CFI_PROLOGUE \
1650 "subq $136,%%rsp\n\t" \
1382 "pushq 88(%%rax)\n\t" \ 1651 "pushq 88(%%rax)\n\t" \
1383 "pushq 80(%%rax)\n\t" \ 1652 "pushq 80(%%rax)\n\t" \
1384 "pushq 72(%%rax)\n\t" \ 1653 "pushq 72(%%rax)\n\t" \
1385 "pushq 64(%%rax)\n\t" \ 1654 "pushq 64(%%rax)\n\t" \
1386 "pushq 56(%%rax)\n\t" \ 1655 "pushq 56(%%rax)\n\t" \
1387 "movq 48(%%rax), %%r9\n\t" \ 1656 "movq 48(%%rax), %%r9\n\t" \
1388 "movq 40(%%rax), %%r8\n\t" \ 1657 "movq 40(%%rax), %%r8\n\t" \
1389 "movq 32(%%rax), %%rcx\n\t" \ 1658 "movq 32(%%rax), %%rcx\n\t" \
1390 "movq 24(%%rax), %%rdx\n\t" \ 1659 "movq 24(%%rax), %%rdx\n\t" \
1391 "movq 16(%%rax), %%rsi\n\t" \ 1660 "movq 16(%%rax), %%rsi\n\t" \
1392 "movq 8(%%rax), %%rdi\n\t" \ 1661 "movq 8(%%rax), %%rdi\n\t" \
1393 "movq (%%rax), %%rax\n\t" /* target->%rax */ \ 1662 "movq (%%rax), %%rax\n\t" /* target->%rax */ \
1394 VALGRIND_CALL_NOREDIR_RAX \ 1663 VALGRIND_CALL_NOREDIR_RAX \
1395 "addq $40, %%rsp\n" \ 1664 "addq $40, %%rsp\n" \
1396 "addq $128,%%rsp\n\t" \ 1665 "addq $136,%%rsp\n\t" \
1666 VALGRIND_CFI_EPILOGUE \
1397 : /*out*/ "=a" (_res) \ 1667 : /*out*/ "=a" (_res) \
1398 : /*in*/ "a" (&_argvec[0]) \ 1668 : /*in*/ "a" (&_argvec[0]) __FRAME_POINTER \
1399 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \ 1669 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r15" \
1400 ); \ 1670 ); \
1401 lval = (__typeof__(lval)) _res; \ 1671 lval = (__typeof__(lval)) _res; \
1402 } while (0) 1672 } while (0)
1403 1673
1404 #define CALL_FN_W_12W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \ 1674 #define CALL_FN_W_12W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \
1405 arg7,arg8,arg9,arg10,arg11,arg12) \ 1675 arg7,arg8,arg9,arg10,arg11,arg12) \
1406 do { \ 1676 do { \
1407 volatile OrigFn _orig = (orig); \ 1677 volatile OrigFn _orig = (orig); \
1408 volatile unsigned long _argvec[13]; \ 1678 volatile unsigned long _argvec[13]; \
1409 volatile unsigned long _res; \ 1679 volatile unsigned long _res; \
1410 _argvec[0] = (unsigned long)_orig.nraddr; \ 1680 _argvec[0] = (unsigned long)_orig.nraddr; \
1411 _argvec[1] = (unsigned long)(arg1); \ 1681 _argvec[1] = (unsigned long)(arg1); \
1412 _argvec[2] = (unsigned long)(arg2); \ 1682 _argvec[2] = (unsigned long)(arg2); \
1413 _argvec[3] = (unsigned long)(arg3); \ 1683 _argvec[3] = (unsigned long)(arg3); \
1414 _argvec[4] = (unsigned long)(arg4); \ 1684 _argvec[4] = (unsigned long)(arg4); \
1415 _argvec[5] = (unsigned long)(arg5); \ 1685 _argvec[5] = (unsigned long)(arg5); \
1416 _argvec[6] = (unsigned long)(arg6); \ 1686 _argvec[6] = (unsigned long)(arg6); \
1417 _argvec[7] = (unsigned long)(arg7); \ 1687 _argvec[7] = (unsigned long)(arg7); \
1418 _argvec[8] = (unsigned long)(arg8); \ 1688 _argvec[8] = (unsigned long)(arg8); \
1419 _argvec[9] = (unsigned long)(arg9); \ 1689 _argvec[9] = (unsigned long)(arg9); \
1420 _argvec[10] = (unsigned long)(arg10); \ 1690 _argvec[10] = (unsigned long)(arg10); \
1421 _argvec[11] = (unsigned long)(arg11); \ 1691 _argvec[11] = (unsigned long)(arg11); \
1422 _argvec[12] = (unsigned long)(arg12); \ 1692 _argvec[12] = (unsigned long)(arg12); \
1423 __asm__ volatile( \ 1693 __asm__ volatile( \
1694 VALGRIND_CFI_PROLOGUE \
1424 "subq $128,%%rsp\n\t" \ 1695 "subq $128,%%rsp\n\t" \
1425 "pushq 96(%%rax)\n\t" \ 1696 "pushq 96(%%rax)\n\t" \
1426 "pushq 88(%%rax)\n\t" \ 1697 "pushq 88(%%rax)\n\t" \
1427 "pushq 80(%%rax)\n\t" \ 1698 "pushq 80(%%rax)\n\t" \
1428 "pushq 72(%%rax)\n\t" \ 1699 "pushq 72(%%rax)\n\t" \
1429 "pushq 64(%%rax)\n\t" \ 1700 "pushq 64(%%rax)\n\t" \
1430 "pushq 56(%%rax)\n\t" \ 1701 "pushq 56(%%rax)\n\t" \
1431 "movq 48(%%rax), %%r9\n\t" \ 1702 "movq 48(%%rax), %%r9\n\t" \
1432 "movq 40(%%rax), %%r8\n\t" \ 1703 "movq 40(%%rax), %%r8\n\t" \
1433 "movq 32(%%rax), %%rcx\n\t" \ 1704 "movq 32(%%rax), %%rcx\n\t" \
1434 "movq 24(%%rax), %%rdx\n\t" \ 1705 "movq 24(%%rax), %%rdx\n\t" \
1435 "movq 16(%%rax), %%rsi\n\t" \ 1706 "movq 16(%%rax), %%rsi\n\t" \
1436 "movq 8(%%rax), %%rdi\n\t" \ 1707 "movq 8(%%rax), %%rdi\n\t" \
1437 "movq (%%rax), %%rax\n\t" /* target->%rax */ \ 1708 "movq (%%rax), %%rax\n\t" /* target->%rax */ \
1438 VALGRIND_CALL_NOREDIR_RAX \ 1709 VALGRIND_CALL_NOREDIR_RAX \
1439 "addq $48, %%rsp\n" \ 1710 "addq $48, %%rsp\n" \
1440 "addq $128,%%rsp\n\t" \ 1711 "addq $128,%%rsp\n\t" \
1712 VALGRIND_CFI_EPILOGUE \
1441 : /*out*/ "=a" (_res) \ 1713 : /*out*/ "=a" (_res) \
1442 : /*in*/ "a" (&_argvec[0]) \ 1714 : /*in*/ "a" (&_argvec[0]) __FRAME_POINTER \
1443 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \ 1715 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS, "r15" \
1444 ); \ 1716 ); \
1445 lval = (__typeof__(lval)) _res; \ 1717 lval = (__typeof__(lval)) _res; \
1446 } while (0) 1718 } while (0)
1447 1719
1448 #endif /* PLAT_amd64_linux */ 1720 #endif /* PLAT_amd64_linux || PLAT_amd64_darwin */
1449 1721
1450 /* ------------------------ ppc32-linux ------------------------ */ 1722 /* ------------------------ ppc32-linux ------------------------ */
1451 1723
1452 #if defined(PLAT_ppc32_linux) 1724 #if defined(PLAT_ppc32_linux)
1453 1725
1454 /* This is useful for finding out about the on-stack stuff: 1726 /* This is useful for finding out about the on-stack stuff:
1455 1727
1456 extern int f9 ( int,int,int,int,int,int,int,int,int ); 1728 extern int f9 ( int,int,int,int,int,int,int,int,int );
1457 extern int f10 ( int,int,int,int,int,int,int,int,int,int ); 1729 extern int f10 ( int,int,int,int,int,int,int,int,int,int );
1458 extern int f11 ( int,int,int,int,int,int,int,int,int,int,int ); 1730 extern int f11 ( int,int,int,int,int,int,int,int,int,int,int );
(...skipping 972 matching lines...)
2431 "addi 1,1,144" /* restore frame */ \ 2703 "addi 1,1,144" /* restore frame */ \
2432 : /*out*/ "=r" (_res) \ 2704 : /*out*/ "=r" (_res) \
2433 : /*in*/ "r" (&_argvec[2]) \ 2705 : /*in*/ "r" (&_argvec[2]) \
2434 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \ 2706 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \
2435 ); \ 2707 ); \
2436 lval = (__typeof__(lval)) _res; \ 2708 lval = (__typeof__(lval)) _res; \
2437 } while (0) 2709 } while (0)
2438 2710
2439 #endif /* PLAT_ppc64_linux */ 2711 #endif /* PLAT_ppc64_linux */
2440 2712
2713 /* ------------------------- arm-linux ------------------------- */
2714
2715 #if defined(PLAT_arm_linux)
2716
2717 /* These regs are trashed by the hidden call. */
2718 #define __CALLER_SAVED_REGS "r0", "r1", "r2", "r3","r4","r14"
2719
2720 /* These CALL_FN_ macros assume that on arm-linux, sizeof(unsigned
2721 long) == 4. */
2722
2723 #define CALL_FN_W_v(lval, orig) \
2724 do { \
2725 volatile OrigFn _orig = (orig); \
2726 volatile unsigned long _argvec[1]; \
2727 volatile unsigned long _res; \
2728 _argvec[0] = (unsigned long)_orig.nraddr; \
2729 __asm__ volatile( \
2730 "ldr r4, [%1] \n\t" /* target->r4 */ \
2731 VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R4 \
2732 "mov %0, r0\n" \
2733 : /*out*/ "=r" (_res) \
2734 : /*in*/ "0" (&_argvec[0]) \
2735 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \
2736 ); \
2737 lval = (__typeof__(lval)) _res; \
2738 } while (0)
2739
2740 #define CALL_FN_W_W(lval, orig, arg1) \
2741 do { \
2742 volatile OrigFn _orig = (orig); \
2743 volatile unsigned long _argvec[2]; \
2744 volatile unsigned long _res; \
2745 _argvec[0] = (unsigned long)_orig.nraddr; \
2746 _argvec[1] = (unsigned long)(arg1); \
2747 __asm__ volatile( \
2748 "ldr r0, [%1, #4] \n\t" \
2749 "ldr r4, [%1] \n\t" /* target->r4 */ \
2750 VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R4 \
2751 "mov %0, r0\n" \
2752 : /*out*/ "=r" (_res) \
2753 : /*in*/ "0" (&_argvec[0]) \
2754 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \
2755 ); \
2756 lval = (__typeof__(lval)) _res; \
2757 } while (0)
2758
2759 #define CALL_FN_W_WW(lval, orig, arg1,arg2) \
2760 do { \
2761 volatile OrigFn _orig = (orig); \
2762 volatile unsigned long _argvec[3]; \
2763 volatile unsigned long _res; \
2764 _argvec[0] = (unsigned long)_orig.nraddr; \
2765 _argvec[1] = (unsigned long)(arg1); \
2766 _argvec[2] = (unsigned long)(arg2); \
2767 __asm__ volatile( \
2768 "ldr r0, [%1, #4] \n\t" \
2769 "ldr r1, [%1, #8] \n\t" \
2770 "ldr r4, [%1] \n\t" /* target->r4 */ \
2771 VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R4 \
2772 "mov %0, r0\n" \
2773 : /*out*/ "=r" (_res) \
2774 : /*in*/ "0" (&_argvec[0]) \
2775 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \
2776 ); \
2777 lval = (__typeof__(lval)) _res; \
2778 } while (0)
2779
2780 #define CALL_FN_W_WWW(lval, orig, arg1,arg2,arg3) \
2781 do { \
2782 volatile OrigFn _orig = (orig); \
2783 volatile unsigned long _argvec[4]; \
2784 volatile unsigned long _res; \
2785 _argvec[0] = (unsigned long)_orig.nraddr; \
2786 _argvec[1] = (unsigned long)(arg1); \
2787 _argvec[2] = (unsigned long)(arg2); \
2788 _argvec[3] = (unsigned long)(arg3); \
2789 __asm__ volatile( \
2790 "ldr r0, [%1, #4] \n\t" \
2791 "ldr r1, [%1, #8] \n\t" \
2792 "ldr r2, [%1, #12] \n\t" \
2793 "ldr r4, [%1] \n\t" /* target->r4 */ \
2794 VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R4 \
2795 "mov %0, r0\n" \
2796 : /*out*/ "=r" (_res) \
2797 : /*in*/ "0" (&_argvec[0]) \
2798 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \
2799 ); \
2800 lval = (__typeof__(lval)) _res; \
2801 } while (0)
2802
2803 #define CALL_FN_W_WWWW(lval, orig, arg1,arg2,arg3,arg4) \
2804 do { \
2805 volatile OrigFn _orig = (orig); \
2806 volatile unsigned long _argvec[5]; \
2807 volatile unsigned long _res; \
2808 _argvec[0] = (unsigned long)_orig.nraddr; \
2809 _argvec[1] = (unsigned long)(arg1); \
2810 _argvec[2] = (unsigned long)(arg2); \
2811 _argvec[3] = (unsigned long)(arg3); \
2812 _argvec[4] = (unsigned long)(arg4); \
2813 __asm__ volatile( \
2814 "ldr r0, [%1, #4] \n\t" \
2815 "ldr r1, [%1, #8] \n\t" \
2816 "ldr r2, [%1, #12] \n\t" \
2817 "ldr r3, [%1, #16] \n\t" \
2818 "ldr r4, [%1] \n\t" /* target->r4 */ \
2819 VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R4 \
2820 "mov %0, r0" \
2821 : /*out*/ "=r" (_res) \
2822 : /*in*/ "0" (&_argvec[0]) \
2823 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \
2824 ); \
2825 lval = (__typeof__(lval)) _res; \
2826 } while (0)
2827
2828 #define CALL_FN_W_5W(lval, orig, arg1,arg2,arg3,arg4,arg5) \
2829 do { \
2830 volatile OrigFn _orig = (orig); \
2831 volatile unsigned long _argvec[6]; \
2832 volatile unsigned long _res; \
2833 _argvec[0] = (unsigned long)_orig.nraddr; \
2834 _argvec[1] = (unsigned long)(arg1); \
2835 _argvec[2] = (unsigned long)(arg2); \
2836 _argvec[3] = (unsigned long)(arg3); \
2837 _argvec[4] = (unsigned long)(arg4); \
2838 _argvec[5] = (unsigned long)(arg5); \
2839 __asm__ volatile( \
2840 "ldr r0, [%1, #20] \n\t" \
2841 "push {r0} \n\t" \
2842 "ldr r0, [%1, #4] \n\t" \
2843 "ldr r1, [%1, #8] \n\t" \
2844 "ldr r2, [%1, #12] \n\t" \
2845 "ldr r3, [%1, #16] \n\t" \
2846 "ldr r4, [%1] \n\t" /* target->r4 */ \
2847 VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R4 \
2848 "add sp, sp, #4 \n\t" \
2849 "mov %0, r0" \
2850 : /*out*/ "=r" (_res) \
2851 : /*in*/ "0" (&_argvec[0]) \
2852 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \
2853 ); \
2854 lval = (__typeof__(lval)) _res; \
2855 } while (0)
2856
2857 #define CALL_FN_W_6W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6) \
2858 do { \
2859 volatile OrigFn _orig = (orig); \
2860 volatile unsigned long _argvec[7]; \
2861 volatile unsigned long _res; \
2862 _argvec[0] = (unsigned long)_orig.nraddr; \
2863 _argvec[1] = (unsigned long)(arg1); \
2864 _argvec[2] = (unsigned long)(arg2); \
2865 _argvec[3] = (unsigned long)(arg3); \
2866 _argvec[4] = (unsigned long)(arg4); \
2867 _argvec[5] = (unsigned long)(arg5); \
2868 _argvec[6] = (unsigned long)(arg6); \
2869 __asm__ volatile( \
2870 "ldr r0, [%1, #20] \n\t" \
2871 "ldr r1, [%1, #24] \n\t" \
2872 "push {r0, r1} \n\t" \
2873 "ldr r0, [%1, #4] \n\t" \
2874 "ldr r1, [%1, #8] \n\t" \
2875 "ldr r2, [%1, #12] \n\t" \
2876 "ldr r3, [%1, #16] \n\t" \
2877 "ldr r4, [%1] \n\t" /* target->r4 */ \
2878 VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R4 \
2879 "add sp, sp, #8 \n\t" \
2880 "mov %0, r0" \
2881 : /*out*/ "=r" (_res) \
2882 : /*in*/ "0" (&_argvec[0]) \
2883 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \
2884 ); \
2885 lval = (__typeof__(lval)) _res; \
2886 } while (0)
2887
2888 #define CALL_FN_W_7W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \
2889 arg7) \
2890 do { \
2891 volatile OrigFn _orig = (orig); \
2892 volatile unsigned long _argvec[8]; \
2893 volatile unsigned long _res; \
2894 _argvec[0] = (unsigned long)_orig.nraddr; \
2895 _argvec[1] = (unsigned long)(arg1); \
2896 _argvec[2] = (unsigned long)(arg2); \
2897 _argvec[3] = (unsigned long)(arg3); \
2898 _argvec[4] = (unsigned long)(arg4); \
2899 _argvec[5] = (unsigned long)(arg5); \
2900 _argvec[6] = (unsigned long)(arg6); \
2901 _argvec[7] = (unsigned long)(arg7); \
2902 __asm__ volatile( \
2903 "ldr r0, [%1, #20] \n\t" \
2904 "ldr r1, [%1, #24] \n\t" \
2905 "ldr r2, [%1, #28] \n\t" \
2906 "push {r0, r1, r2} \n\t" \
2907 "ldr r0, [%1, #4] \n\t" \
2908 "ldr r1, [%1, #8] \n\t" \
2909 "ldr r2, [%1, #12] \n\t" \
2910 "ldr r3, [%1, #16] \n\t" \
2911 "ldr r4, [%1] \n\t" /* target->r4 */ \
2912 VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R4 \
2913 "add sp, sp, #12 \n\t" \
2914 "mov %0, r0" \
2915 : /*out*/ "=r" (_res) \
2916 : /*in*/ "0" (&_argvec[0]) \
2917 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \
2918 ); \
2919 lval = (__typeof__(lval)) _res; \
2920 } while (0)
2921
2922 #define CALL_FN_W_8W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \
2923 arg7,arg8) \
2924 do { \
2925 volatile OrigFn _orig = (orig); \
2926 volatile unsigned long _argvec[9]; \
2927 volatile unsigned long _res; \
2928 _argvec[0] = (unsigned long)_orig.nraddr; \
2929 _argvec[1] = (unsigned long)(arg1); \
2930 _argvec[2] = (unsigned long)(arg2); \
2931 _argvec[3] = (unsigned long)(arg3); \
2932 _argvec[4] = (unsigned long)(arg4); \
2933 _argvec[5] = (unsigned long)(arg5); \
2934 _argvec[6] = (unsigned long)(arg6); \
2935 _argvec[7] = (unsigned long)(arg7); \
2936 _argvec[8] = (unsigned long)(arg8); \
2937 __asm__ volatile( \
2938 "ldr r0, [%1, #20] \n\t" \
2939 "ldr r1, [%1, #24] \n\t" \
2940 "ldr r2, [%1, #28] \n\t" \
2941 "ldr r3, [%1, #32] \n\t" \
2942 "push {r0, r1, r2, r3} \n\t" \
2943 "ldr r0, [%1, #4] \n\t" \
2944 "ldr r1, [%1, #8] \n\t" \
2945 "ldr r2, [%1, #12] \n\t" \
2946 "ldr r3, [%1, #16] \n\t" \
2947 "ldr r4, [%1] \n\t" /* target->r4 */ \
2948 VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R4 \
2949 "add sp, sp, #16 \n\t" \
2950 "mov %0, r0" \
2951 : /*out*/ "=r" (_res) \
2952 : /*in*/ "0" (&_argvec[0]) \
2953 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \
2954 ); \
2955 lval = (__typeof__(lval)) _res; \
2956 } while (0)
2957
2958 #define CALL_FN_W_9W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \
2959 arg7,arg8,arg9) \
2960 do { \
2961 volatile OrigFn _orig = (orig); \
2962 volatile unsigned long _argvec[10]; \
2963 volatile unsigned long _res; \
2964 _argvec[0] = (unsigned long)_orig.nraddr; \
2965 _argvec[1] = (unsigned long)(arg1); \
2966 _argvec[2] = (unsigned long)(arg2); \
2967 _argvec[3] = (unsigned long)(arg3); \
2968 _argvec[4] = (unsigned long)(arg4); \
2969 _argvec[5] = (unsigned long)(arg5); \
2970 _argvec[6] = (unsigned long)(arg6); \
2971 _argvec[7] = (unsigned long)(arg7); \
2972 _argvec[8] = (unsigned long)(arg8); \
2973 _argvec[9] = (unsigned long)(arg9); \
2974 __asm__ volatile( \
2975 "ldr r0, [%1, #20] \n\t" \
2976 "ldr r1, [%1, #24] \n\t" \
2977 "ldr r2, [%1, #28] \n\t" \
2978 "ldr r3, [%1, #32] \n\t" \
2979 "ldr r4, [%1, #36] \n\t" \
2980 "push {r0, r1, r2, r3, r4} \n\t" \
2981 "ldr r0, [%1, #4] \n\t" \
2982 "ldr r1, [%1, #8] \n\t" \
2983 "ldr r2, [%1, #12] \n\t" \
2984 "ldr r3, [%1, #16] \n\t" \
2985 "ldr r4, [%1] \n\t" /* target->r4 */ \
2986 VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R4 \
2987 "add sp, sp, #20 \n\t" \
2988 "mov %0, r0" \
2989 : /*out*/ "=r" (_res) \
2990 : /*in*/ "0" (&_argvec[0]) \
2991 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \
2992 ); \
2993 lval = (__typeof__(lval)) _res; \
2994 } while (0)
2995
2996 #define CALL_FN_W_10W(lval, orig, arg1,arg2,arg3,arg4,arg5,arg6, \
2997 arg7,arg8,arg9,arg10) \
2998 do { \
2999 volatile OrigFn _orig = (orig); \
3000 volatile unsigned long _argvec[11]; \
3001 volatile unsigned long _res; \
3002 _argvec[0] = (unsigned long)_orig.nraddr; \
3003 _argvec[1] = (unsigned long)(arg1); \
3004 _argvec[2] = (unsigned long)(arg2); \
3005 _argvec[3] = (unsigned long)(arg3); \
3006 _argvec[4] = (unsigned long)(arg4); \
3007 _argvec[5] = (unsigned long)(arg5); \
3008 _argvec[6] = (unsigned long)(arg6); \
3009 _argvec[7] = (unsigned long)(arg7); \
3010 _argvec[8] = (unsigned long)(arg8); \
3011 _argvec[9] = (unsigned long)(arg9); \
3012 _argvec[10] = (unsigned long)(arg10); \
3013 __asm__ volatile( \
3014 "ldr r0, [%1, #40] \n\t" \
3015 "push {r0} \n\t" \
3016 "ldr r0, [%1, #20] \n\t" \
3017 "ldr r1, [%1, #24] \n\t" \
3018 "ldr r2, [%1, #28] \n\t" \
3019 "ldr r3, [%1, #32] \n\t" \
3020 "ldr r4, [%1, #36] \n\t" \
3021 "push {r0, r1, r2, r3, r4} \n\t" \
3022 "ldr r0, [%1, #4] \n\t" \
3023 "ldr r1, [%1, #8] \n\t" \
3024 "ldr r2, [%1, #12] \n\t" \
3025 "ldr r3, [%1, #16] \n\t" \
3026 "ldr r4, [%1] \n\t" /* target->r4 */ \
3027 VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R4 \
3028 "add sp, sp, #24 \n\t" \
3029 "mov %0, r0" \
3030 : /*out*/ "=r" (_res) \
3031 : /*in*/ "0" (&_argvec[0]) \
3032 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \
3033 ); \
3034 lval = (__typeof__(lval)) _res; \
3035 } while (0)
3036
3037 #define CALL_FN_W_11W(lval, orig, arg1,arg2,arg3,arg4,arg5, \
3038 arg6,arg7,arg8,arg9,arg10, \
3039 arg11) \
3040 do { \
3041 volatile OrigFn _orig = (orig); \
3042 volatile unsigned long _argvec[12]; \
3043 volatile unsigned long _res; \
3044 _argvec[0] = (unsigned long)_orig.nraddr; \
3045 _argvec[1] = (unsigned long)(arg1); \
3046 _argvec[2] = (unsigned long)(arg2); \
3047 _argvec[3] = (unsigned long)(arg3); \
3048 _argvec[4] = (unsigned long)(arg4); \
3049 _argvec[5] = (unsigned long)(arg5); \
3050 _argvec[6] = (unsigned long)(arg6); \
3051 _argvec[7] = (unsigned long)(arg7); \
3052 _argvec[8] = (unsigned long)(arg8); \
3053 _argvec[9] = (unsigned long)(arg9); \
3054 _argvec[10] = (unsigned long)(arg10); \
3055 _argvec[11] = (unsigned long)(arg11); \
3056 __asm__ volatile( \
3057 "ldr r0, [%1, #40] \n\t" \
3058 "ldr r1, [%1, #44] \n\t" \
3059 "push {r0, r1} \n\t" \
3060 "ldr r0, [%1, #20] \n\t" \
3061 "ldr r1, [%1, #24] \n\t" \
3062 "ldr r2, [%1, #28] \n\t" \
3063 "ldr r3, [%1, #32] \n\t" \
3064 "ldr r4, [%1, #36] \n\t" \
3065 "push {r0, r1, r2, r3, r4} \n\t" \
3066 "ldr r0, [%1, #4] \n\t" \
3067 "ldr r1, [%1, #8] \n\t" \
3068 "ldr r2, [%1, #12] \n\t" \
3069 "ldr r3, [%1, #16] \n\t" \
3070 "ldr r4, [%1] \n\t" /* target->r4 */ \
3071 VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R4 \
3072 "add sp, sp, #28 \n\t" \
3073 "mov %0, r0" \
3074 : /*out*/ "=r" (_res) \
3075 : /*in*/ "0" (&_argvec[0]) \
3076 : /*trash*/ "cc", "memory",__CALLER_SAVED_REGS \
3077 ); \
3078 lval = (__typeof__(lval)) _res; \
3079 } while (0)
3080
3081 #define CALL_FN_W_12W(lval, orig, arg1,arg2,arg3,arg4,arg5, \
3082 arg6,arg7,arg8,arg9,arg10, \
3083 arg11,arg12) \
3084 do { \
3085 volatile OrigFn _orig = (orig); \
3086 volatile unsigned long _argvec[13]; \
3087 volatile unsigned long _res; \
3088 _argvec[0] = (unsigned long)_orig.nraddr; \
3089 _argvec[1] = (unsigned long)(arg1); \
3090 _argvec[2] = (unsigned long)(arg2); \
3091 _argvec[3] = (unsigned long)(arg3); \
3092 _argvec[4] = (unsigned long)(arg4); \
3093 _argvec[5] = (unsigned long)(arg5); \
3094 _argvec[6] = (unsigned long)(arg6); \
3095 _argvec[7] = (unsigned long)(arg7); \
3096 _argvec[8] = (unsigned long)(arg8); \
3097 _argvec[9] = (unsigned long)(arg9); \
3098 _argvec[10] = (unsigned long)(arg10); \
3099 _argvec[11] = (unsigned long)(arg11); \
3100 _argvec[12] = (unsigned long)(arg12); \
3101 __asm__ volatile( \
3102 "ldr r0, [%1, #40] \n\t" \
3103 "ldr r1, [%1, #44] \n\t" \
3104 "ldr r2, [%1, #48] \n\t" \
3105 "push {r0, r1, r2} \n\t" \
3106 "ldr r0, [%1, #20] \n\t" \
3107 "ldr r1, [%1, #24] \n\t" \
3108 "ldr r2, [%1, #28] \n\t" \
3109 "ldr r3, [%1, #32] \n\t" \
3110 "ldr r4, [%1, #36] \n\t" \
3111 "push {r0, r1, r2, r3, r4} \n\t" \
3112 "ldr r0, [%1, #4] \n\t" \
3113 "ldr r1, [%1, #8] \n\t" \
3114 "ldr r2, [%1, #12] \n\t" \
3115 "ldr r3, [%1, #16] \n\t" \
3116 "ldr r4, [%1] \n\t" /* target->r4 */ \
3117 VALGRIND_BRANCH_AND_LINK_TO_NOREDIR_R4 \
3118 "add sp, sp, #32 \n\t" \
3119 "mov %0, r0" \
3120 : /*out*/ "=r" (_res) \
3121 : /*in*/ "0" (&_argvec[0]) \
3122 : /*trash*/ "cc", "memory", __CALLER_SAVED_REGS \
3123 ); \
3124 lval = (__typeof__(lval)) _res; \
3125 } while (0)
3126
3127 #endif /* PLAT_arm_linux */
3128
2441 /* ------------------------ ppc32-aix5 ------------------------- */ 3129 /* ------------------------ ppc32-aix5 ------------------------- */
2442 3130
2443 #if defined(PLAT_ppc32_aix5) 3131 #if defined(PLAT_ppc32_aix5)
2444 3132
2445 /* ARGREGS: r3 r4 r5 r6 r7 r8 r9 r10 (the rest on stack somewhere) */ 3133 /* ARGREGS: r3 r4 r5 r6 r7 r8 r9 r10 (the rest on stack somewhere) */
2446 3134
2447 /* These regs are trashed by the hidden call. */ 3135 /* These regs are trashed by the hidden call. */
2448 #define __CALLER_SAVED_REGS \ 3136 #define __CALLER_SAVED_REGS \
2449 "lr", "ctr", "xer", \ 3137 "lr", "ctr", "xer", \
2450 "cr0", "cr1", "cr2", "cr3", "cr4", "cr5", "cr6", "cr7", \ 3138 "cr0", "cr1", "cr2", "cr3", "cr4", "cr5", "cr6", "cr7", \
(...skipping 1161 matching lines...)
3612 VG_USERREQ__CREATE_MEMPOOL = 0x1303, 4300 VG_USERREQ__CREATE_MEMPOOL = 0x1303,
3613 VG_USERREQ__DESTROY_MEMPOOL = 0x1304, 4301 VG_USERREQ__DESTROY_MEMPOOL = 0x1304,
3614 VG_USERREQ__MEMPOOL_ALLOC = 0x1305, 4302 VG_USERREQ__MEMPOOL_ALLOC = 0x1305,
3615 VG_USERREQ__MEMPOOL_FREE = 0x1306, 4303 VG_USERREQ__MEMPOOL_FREE = 0x1306,
3616 VG_USERREQ__MEMPOOL_TRIM = 0x1307, 4304 VG_USERREQ__MEMPOOL_TRIM = 0x1307,
3617 VG_USERREQ__MOVE_MEMPOOL = 0x1308, 4305 VG_USERREQ__MOVE_MEMPOOL = 0x1308,
3618 VG_USERREQ__MEMPOOL_CHANGE = 0x1309, 4306 VG_USERREQ__MEMPOOL_CHANGE = 0x1309,
3619 VG_USERREQ__MEMPOOL_EXISTS = 0x130a, 4307 VG_USERREQ__MEMPOOL_EXISTS = 0x130a,
3620 4308
3621 /* Allow printfs to valgrind log. */ 4309 /* Allow printfs to valgrind log. */
4310 /* The first two pass the va_list argument by value, which
4311 assumes it is the same size as or smaller than a UWord,
4312 which generally isn't the case. Hence are deprecated.
4313 The second two pass the vargs by reference and so are
4314 immune to this problem. */
4315 /* both :: char* fmt, va_list vargs (DEPRECATED) */
3622 VG_USERREQ__PRINTF = 0x1401, 4316 VG_USERREQ__PRINTF = 0x1401,
3623 VG_USERREQ__PRINTF_BACKTRACE = 0x1402, 4317 VG_USERREQ__PRINTF_BACKTRACE = 0x1402,
4318 /* both :: char* fmt, va_list* vargs */
4319 VG_USERREQ__PRINTF_VALIST_BY_REF = 0x1403,
4320 VG_USERREQ__PRINTF_BACKTRACE_VALIST_BY_REF = 0x1404,
3624 4321
3625 /* Stack support. */ 4322 /* Stack support. */
3626 VG_USERREQ__STACK_REGISTER = 0x1501, 4323 VG_USERREQ__STACK_REGISTER = 0x1501,
3627 VG_USERREQ__STACK_DEREGISTER = 0x1502, 4324 VG_USERREQ__STACK_DEREGISTER = 0x1502,
3628 VG_USERREQ__STACK_CHANGE = 0x1503 4325 VG_USERREQ__STACK_CHANGE = 0x1503,
4326
4327 /* Wine support */
4328 VG_USERREQ__LOAD_PDB_DEBUGINFO = 0x1601,
4329
4330 /* Querying of debug info. */
4331 VG_USERREQ__MAP_IP_TO_SRCLOC = 0x1701
3629 } Vg_ClientRequest; 4332 } Vg_ClientRequest;
3630 4333
3631 #if !defined(__GNUC__) 4334 #if !defined(__GNUC__)
3632 # define __extension__ /* */ 4335 # define __extension__ /* */
3633 #endif 4336 #endif
3634 4337
4338
4339 /*
4340 * VALGRIND_DO_CLIENT_REQUEST_EXPR(): a C expression that invokes a Valgrind
4341 * client request and whose value equals the client request result.
4342 */
4343
4344 #if defined(NVALGRIND)
4345
4346 #define VALGRIND_DO_CLIENT_REQUEST_EXPR( \
4347 _zzq_default, _zzq_request, \
4348 _zzq_arg1, _zzq_arg2, _zzq_arg3, _zzq_arg4, _zzq_arg5) \
4349 (_zzq_default)
4350
4351 #else /*defined(NVALGRIND)*/
4352
4353 #if defined(_MSC_VER)
4354
4355 #define VALGRIND_DO_CLIENT_REQUEST_EXPR( \
4356 _zzq_default, _zzq_request, \
4357 _zzq_arg1, _zzq_arg2, _zzq_arg3, _zzq_arg4, _zzq_arg5) \
4358 (vg_VALGRIND_DO_CLIENT_REQUEST_EXPR((uintptr_t)(_zzq_default), \
4359 (_zzq_request), (uintptr_t)(_zzq_arg1), (uintptr_t)(_zzq_arg2), \
4360 (uintptr_t)(_zzq_arg3), (uintptr_t)(_zzq_arg4), \
4361 (uintptr_t)(_zzq_arg5)))
4362
4363 static __inline unsigned
4364 vg_VALGRIND_DO_CLIENT_REQUEST_EXPR(uintptr_t _zzq_default,
4365 unsigned _zzq_request, uintptr_t _zzq_arg1,
4366 uintptr_t _zzq_arg2, uintptr_t _zzq_arg3,
4367 uintptr_t _zzq_arg4, uintptr_t _zzq_arg5)
4368 {
4369 unsigned _zzq_rlval;
4370 VALGRIND_DO_CLIENT_REQUEST(_zzq_rlval, _zzq_default, _zzq_request,
4371 _zzq_arg1, _zzq_arg2, _zzq_arg3, _zzq_arg4, _zzq_arg5);
4372 return _zzq_rlval;
4373 }
4374
4375 #else /*defined(_MSC_VER)*/
4376
4377 #define VALGRIND_DO_CLIENT_REQUEST_EXPR( \
4378 _zzq_default, _zzq_request, \
4379 _zzq_arg1, _zzq_arg2, _zzq_arg3, _zzq_arg4, _zzq_arg5) \
4380 (__extension__({unsigned int _zzq_rlval; \
4381 VALGRIND_DO_CLIENT_REQUEST(_zzq_rlval, _zzq_default, _zzq_request, \
4382 _zzq_arg1, _zzq_arg2, _zzq_arg3, _zzq_arg4, _zzq_arg5) \
4383 _zzq_rlval; \
4384 }))
4385
4386 #endif /*defined(_MSC_VER)*/
4387
4388 #endif /*defined(NVALGRIND)*/
4389
4390
3635 /* Returns the number of Valgrinds this code is running under. That 4391 /* Returns the number of Valgrinds this code is running under. That
3636 is, 0 if running natively, 1 if running under Valgrind, 2 if 4392 is, 0 if running natively, 1 if running under Valgrind, 2 if
3637 running under Valgrind which is running under another Valgrind, 4393 running under Valgrind which is running under another Valgrind,
3638 etc. */ 4394 etc. */
3639 #define RUNNING_ON_VALGRIND __extension__ \ 4395 #define RUNNING_ON_VALGRIND \
3640 ({unsigned int _qzz_res; \ 4396 VALGRIND_DO_CLIENT_REQUEST_EXPR(0 /* if not */, \
3641 VALGRIND_DO_CLIENT_REQUEST(_qzz_res, 0 /* if not */, \ 4397 VG_USERREQ__RUNNING_ON_VALGRIND, \
3642 VG_USERREQ__RUNNING_ON_VALGRIND, \ 4398 0, 0, 0, 0, 0) \
3643 0, 0, 0, 0, 0); \
3644 _qzz_res; \
3645 })
3646 4399
3647 4400
3648 /* Discard translation of code in the range [_qzz_addr .. _qzz_addr + 4401 /* Discard translation of code in the range [_qzz_addr .. _qzz_addr +
3649 _qzz_len - 1]. Useful if you are debugging a JITter or some such, 4402 _qzz_len - 1]. Useful if you are debugging a JITter or some such,
3650 since it provides a way to make sure valgrind will retranslate the 4403 since it provides a way to make sure valgrind will retranslate the
3651 invalidated area. Returns no value. */ 4404 invalidated area. Returns no value. */
3652 #define VALGRIND_DISCARD_TRANSLATIONS(_qzz_addr,_qzz_len) \ 4405 #define VALGRIND_DISCARD_TRANSLATIONS(_qzz_addr,_qzz_len) \
3653 {unsigned int _qzz_res; \ 4406 {unsigned int _qzz_res; \
3654 VALGRIND_DO_CLIENT_REQUEST(_qzz_res, 0, \ 4407 VALGRIND_DO_CLIENT_REQUEST(_qzz_res, 0, \
3655 VG_USERREQ__DISCARD_TRANSLATIONS, \ 4408 VG_USERREQ__DISCARD_TRANSLATIONS, \
3656 _qzz_addr, _qzz_len, 0, 0, 0); \ 4409 _qzz_addr, _qzz_len, 0, 0, 0); \
3657 } 4410 }
3658 4411
3659 4412
3660 /* These requests are for getting Valgrind itself to print something. 4413 /* These requests are for getting Valgrind itself to print something.
3661 Possibly with a backtrace. This is a really ugly hack. */ 4414 Possibly with a backtrace. This is a really ugly hack. The return value
4415 is the number of characters printed, excluding the "**<pid>** " part at the
4416 start and the backtrace (if present). */
3662 4417
3663 #if defined(NVALGRIND) 4418 #if defined(NVALGRIND)
3664 4419
3665 # define VALGRIND_PRINTF(...) 4420 # define VALGRIND_PRINTF(...)
3666 # define VALGRIND_PRINTF_BACKTRACE(...) 4421 # define VALGRIND_PRINTF_BACKTRACE(...)
3667 4422
3668 #else /* NVALGRIND */ 4423 #else /* NVALGRIND */
3669 4424
4425 #if !defined(_MSC_VER)
3670 /* Modern GCC will optimize the static routine out if unused, 4426 /* Modern GCC will optimize the static routine out if unused,
3671 and unused attribute will shut down warnings about it. */ 4427 and unused attribute will shut down warnings about it. */
3672 static int VALGRIND_PRINTF(const char *format, ...) 4428 static int VALGRIND_PRINTF(const char *format, ...)
3673 __attribute__((format(__printf__, 1, 2), __unused__)); 4429 __attribute__((format(__printf__, 1, 2), __unused__));
4430 #endif
3674 static int 4431 static int
4432 #if defined(_MSC_VER)
4433 __inline
4434 #endif
3675 VALGRIND_PRINTF(const char *format, ...) 4435 VALGRIND_PRINTF(const char *format, ...)
3676 { 4436 {
3677 unsigned long _qzz_res; 4437 unsigned long _qzz_res;
3678 va_list vargs; 4438 va_list vargs;
3679 va_start(vargs, format); 4439 va_start(vargs, format);
3680 VALGRIND_DO_CLIENT_REQUEST(_qzz_res, 0, VG_USERREQ__PRINTF, 4440 #if defined(_MSC_VER)
3681 (unsigned long)format, (unsigned long)vargs, 4441 VALGRIND_DO_CLIENT_REQUEST(_qzz_res, 0,
4442 VG_USERREQ__PRINTF_VALIST_BY_REF,
4443 (uintptr_t)format,
4444 (uintptr_t)&vargs,
3682 0, 0, 0); 4445 0, 0, 0);
4446 #else
4447 VALGRIND_DO_CLIENT_REQUEST(_qzz_res, 0,
4448 VG_USERREQ__PRINTF_VALIST_BY_REF,
4449 (unsigned long)format,
4450 (unsigned long)&vargs,
4451 0, 0, 0);
4452 #endif
3683 va_end(vargs); 4453 va_end(vargs);
3684 return (int)_qzz_res; 4454 return (int)_qzz_res;
3685 } 4455 }
3686 4456
4457 #if !defined(_MSC_VER)
3687 static int VALGRIND_PRINTF_BACKTRACE(const char *format, ...) 4458 static int VALGRIND_PRINTF_BACKTRACE(const char *format, ...)
3688 __attribute__((format(__printf__, 1, 2), __unused__)); 4459 __attribute__((format(__printf__, 1, 2), __unused__));
4460 #endif
3689 static int 4461 static int
4462 #if defined(_MSC_VER)
4463 __inline
4464 #endif
3690 VALGRIND_PRINTF_BACKTRACE(const char *format, ...) 4465 VALGRIND_PRINTF_BACKTRACE(const char *format, ...)
3691 { 4466 {
3692 unsigned long _qzz_res; 4467 unsigned long _qzz_res;
3693 va_list vargs; 4468 va_list vargs;
3694 va_start(vargs, format); 4469 va_start(vargs, format);
3695 VALGRIND_DO_CLIENT_REQUEST(_qzz_res, 0, VG_USERREQ__PRINTF_BACKTRACE, 4470 #if defined(_MSC_VER)
3696 (unsigned long)format, (unsigned long)vargs, 4471 VALGRIND_DO_CLIENT_REQUEST(_qzz_res, 0,
4472 VG_USERREQ__PRINTF_BACKTRACE_VALIST_BY_REF,
4473 (uintptr_t)format,
4474 (uintptr_t)&vargs,
3697 0, 0, 0); 4475 0, 0, 0);
4476 #else
4477 VALGRIND_DO_CLIENT_REQUEST(_qzz_res, 0,
4478 VG_USERREQ__PRINTF_BACKTRACE_VALIST_BY_REF,
4479 (unsigned long)format,
4480 (unsigned long)&vargs,
4481 0, 0, 0);
4482 #endif
3698 va_end(vargs); 4483 va_end(vargs);
3699 return (int)_qzz_res; 4484 return (int)_qzz_res;
3700 } 4485 }
3701 4486
3702 #endif /* NVALGRIND */ 4487 #endif /* NVALGRIND */
3703 4488
3704 4489
3705 /* These requests allow control to move from the simulated CPU to the 4490 /* These requests allow control to move from the simulated CPU to the
3706 real CPU, calling an arbitrary function. 4491 real CPU, calling an arbitrary function.
3707 4492
(...skipping 64 matching lines...)
3772 VG_(unique_error)() for them to be counted. */ 4557 VG_(unique_error)() for them to be counted. */
3773 #define VALGRIND_COUNT_ERRORS \ 4558 #define VALGRIND_COUNT_ERRORS \
3774 __extension__ \ 4559 __extension__ \
3775 ({unsigned int _qyy_res; \ 4560 ({unsigned int _qyy_res; \
3776 VALGRIND_DO_CLIENT_REQUEST(_qyy_res, 0 /* default return */, \ 4561 VALGRIND_DO_CLIENT_REQUEST(_qyy_res, 0 /* default return */, \
3777 VG_USERREQ__COUNT_ERRORS, \ 4562 VG_USERREQ__COUNT_ERRORS, \
3778 0, 0, 0, 0, 0); \ 4563 0, 0, 0, 0, 0); \
3779 _qyy_res; \ 4564 _qyy_res; \
3780 }) 4565 })
3781 4566
3782 /* Mark a block of memory as having been allocated by a malloc()-like 4567 /* Several Valgrind tools (Memcheck, Massif, Helgrind, DRD) rely on knowing
3783 function. `addr' is the start of the usable block (ie. after any 4568 when heap blocks are allocated in order to give accurate results. This
3784 redzone) `rzB' is redzone size if the allocator can apply redzones; 4569 happens automatically for the standard allocator functions such as
3785 use '0' if not. Adding redzones makes it more likely Valgrind will spot 4570 malloc(), calloc(), realloc(), memalign(), new, new[], free(), delete,
3786 block overruns. `is_zeroed' indicates if the memory is zeroed, as it is 4571 delete[], etc.
3787 for calloc(). Put it immediately after the point where a block is 4572
3788 allocated. 4573 But if your program uses a custom allocator, this doesn't automatically
4574 happen, and Valgrind will not do as well. For example, if you allocate
4575 superblocks with mmap() and then allocate chunks of the superblocks, all
4576 Valgrind's observations will be at the mmap() level and it won't know that
4577 the chunks should be considered separate entities. In Memcheck's case,
4578 that means you probably won't get heap block overrun detection (because
4579 there won't be redzones marked as unaddressable) and you definitely won't
4580 get any leak detection.
4581
4582 The following client requests allow a custom allocator to be annotated so
4583 that it can be handled accurately by Valgrind.
4584
4585 VALGRIND_MALLOCLIKE_BLOCK marks a region of memory as having been allocated
4586 by a malloc()-like function. For Memcheck (an illustrative case), this
4587 does two things:
4588
4589 - It records that the block has been allocated. This means any addresses
4590 within the block mentioned in error messages will be
4591 identified as belonging to the block. It also means that if the block
4592 isn't freed it will be detected by the leak checker.
4593
4594 - It marks the block as being addressable and undefined (if 'is_zeroed' is
4595 not set), or addressable and defined (if 'is_zeroed' is set). This
4596 controls how accesses to the block by the program are handled.
3789 4597
3790 If you're using Memcheck: If you're allocating memory via superblocks, 4598 'addr' is the start of the usable block (ie. after any
3791 and then handing out small chunks of each superblock, if you don't have 4599 redzone), 'sizeB' is its size. 'rzB' is the redzone size if the allocator
3792 redzones on your small blocks, it's worth marking the superblock with 4600 can apply redzones -- these are blocks of padding at the start and end of
3793 VALGRIND_MAKE_MEM_NOACCESS when it's created, so that block overruns are 4601 each block. Adding redzones is recommended as it makes it much more likely
3794 detected. But if you can put redzones on, it's probably better to not do 4602 Valgrind will spot block overruns. `is_zeroed' indicates if the memory is
3795 this, so that messages for small overruns are described in terms of the 4603 zeroed (or filled with another predictable value), as is the case for
3796 small block rather than the superblock (but if you have a big overrun 4604 calloc().
3797 that skips over a redzone, you could miss an error this way). See 4605
3798 memcheck/tests/custom_alloc.c for an example. 4606 VALGRIND_MALLOCLIKE_BLOCK should be put immediately after the point where a
4607 heap block -- that will be used by the client program -- is allocated.
4608 It's best to put it at the outermost level of the allocator if possible;
4609 for example, if you have a function my_alloc() which calls
4610 internal_alloc(), and the client request is put inside internal_alloc(),
4611 stack traces relating to the heap block will contain entries for both
4612 my_alloc() and internal_alloc(), which is probably not what you want.
3799 4613
3800 WARNING: if your allocator uses malloc() or 'new' to allocate 4614 For Memcheck users: if you use VALGRIND_MALLOCLIKE_BLOCK to carve out
3801 superblocks, rather than mmap() or brk(), this will not work properly -- 4615 custom blocks from within a heap block, B, that has been allocated with
3802 you'll likely get assertion failures during leak detection. This is 4616 malloc/calloc/new/etc, then block B will be *ignored* during leak-checking
3803 because Valgrind doesn't like seeing overlapping heap blocks. Sorry. 4617 -- the custom blocks will take precedence.
3804 4618
3805 Nb: block must be freed via a free()-like function specified 4619 VALGRIND_FREELIKE_BLOCK is the partner to VALGRIND_MALLOCLIKE_BLOCK. For
3806 with VALGRIND_FREELIKE_BLOCK or mismatch errors will occur. */ 4620 Memcheck, it does two things:
4621
4622 - It records that the block has been deallocated. This assumes that the
4623 block was annotated as having been allocated via
4624 VALGRIND_MALLOCLIKE_BLOCK. Otherwise, an error will be issued.
4625
4626 - It marks the block as being unaddressable.
4627
4628 VALGRIND_FREELIKE_BLOCK should be put immediately after the point where a
4629 heap block is deallocated.
4630
4631 In many cases, these two client requests will not be enough to get your
4632 allocator working well with Memcheck. More specifically, if your allocator
4633 writes to freed blocks in any way then a VALGRIND_MAKE_MEM_UNDEFINED call
4634 will be necessary to mark the memory as addressable just before the zeroing
4635 occurs, otherwise you'll get a lot of invalid write errors. For example,
4636 you'll need to do this if your allocator recycles freed blocks, but it
4637 zeroes them before handing them back out (via VALGRIND_MALLOCLIKE_BLOCK).
4638 Alternatively, if your allocator reuses freed blocks for allocator-internal
4639 data structures, VALGRIND_MAKE_MEM_UNDEFINED calls will also be necessary.
4640
4641 Really, what's happening is a blurring of the lines between the client
4642 program and the allocator... after VALGRIND_FREELIKE_BLOCK is called, the
4643 memory should be considered unaddressable to the client program, but the
4644 allocator knows more than the rest of the client program and so may be able
4645 to safely access it. Extra client requests are necessary for Valgrind to
4646 understand the distinction between the allocator and the rest of the
4647 program.
4648
4649 Note: there is currently no VALGRIND_REALLOCLIKE_BLOCK client request; it
4650 has to be emulated with MALLOCLIKE/FREELIKE and memory copying.
4651
4652 Ignored if addr == 0.
4653 */
3807 #define VALGRIND_MALLOCLIKE_BLOCK(addr, sizeB, rzB, is_zeroed) \ 4654 #define VALGRIND_MALLOCLIKE_BLOCK(addr, sizeB, rzB, is_zeroed) \
3808 {unsigned int _qzz_res; \ 4655 {unsigned int _qzz_res; \
3809 VALGRIND_DO_CLIENT_REQUEST(_qzz_res, 0, \ 4656 VALGRIND_DO_CLIENT_REQUEST(_qzz_res, 0, \
3810 VG_USERREQ__MALLOCLIKE_BLOCK, \ 4657 VG_USERREQ__MALLOCLIKE_BLOCK, \
3811 addr, sizeB, rzB, is_zeroed, 0); \ 4658 addr, sizeB, rzB, is_zeroed, 0); \
3812 } 4659 }
3813 4660
3814 /* Mark a block of memory as having been freed by a free()-like function. 4661 /* See the comment for VALGRIND_MALLOCLIKE_BLOCK for details.
3815 `rzB' is redzone size; it must match that given to 4662 Ignored if addr == 0.
3816 VALGRIND_MALLOCLIKE_BLOCK. Memory not freed will be detected by the leak 4663 */
3817 checker. Put it immediately after the point where the block is freed. */
3818 #define VALGRIND_FREELIKE_BLOCK(addr, rzB) \ 4664 #define VALGRIND_FREELIKE_BLOCK(addr, rzB) \
3819 {unsigned int _qzz_res; \ 4665 {unsigned int _qzz_res; \
3820 VALGRIND_DO_CLIENT_REQUEST(_qzz_res, 0, \ 4666 VALGRIND_DO_CLIENT_REQUEST(_qzz_res, 0, \
3821 VG_USERREQ__FREELIKE_BLOCK, \ 4667 VG_USERREQ__FREELIKE_BLOCK, \
3822 addr, rzB, 0, 0, 0); \ 4668 addr, rzB, 0, 0, 0); \
3823 } 4669 }
3824 4670
3825 /* Create a memory pool. */ 4671 /* Create a memory pool. */
3826 #define VALGRIND_CREATE_MEMPOOL(pool, rzB, is_zeroed) \ 4672 #define VALGRIND_CREATE_MEMPOOL(pool, rzB, is_zeroed) \
3827 {unsigned int _qzz_res; \ 4673 {unsigned int _qzz_res; \
(...skipping 45 matching lines...)
3873 /* Resize and/or move a piece associated with a memory pool. */ 4719 /* Resize and/or move a piece associated with a memory pool. */
3874 #define VALGRIND_MEMPOOL_CHANGE(pool, addrA, addrB, size) \ 4720 #define VALGRIND_MEMPOOL_CHANGE(pool, addrA, addrB, size) \
3875 {unsigned int _qzz_res; \ 4721 {unsigned int _qzz_res; \
3876 VALGRIND_DO_CLIENT_REQUEST(_qzz_res, 0, \ 4722 VALGRIND_DO_CLIENT_REQUEST(_qzz_res, 0, \
3877 VG_USERREQ__MEMPOOL_CHANGE, \ 4723 VG_USERREQ__MEMPOOL_CHANGE, \
3878 pool, addrA, addrB, size, 0); \ 4724 pool, addrA, addrB, size, 0); \
3879 } 4725 }
3880 4726
3881 /* Return 1 if a mempool exists, else 0. */ 4727 /* Return 1 if a mempool exists, else 0. */
3882 #define VALGRIND_MEMPOOL_EXISTS(pool) \ 4728 #define VALGRIND_MEMPOOL_EXISTS(pool) \
4729 __extension__ \
3883 ({unsigned int _qzz_res; \ 4730 ({unsigned int _qzz_res; \
3884 VALGRIND_DO_CLIENT_REQUEST(_qzz_res, 0, \ 4731 VALGRIND_DO_CLIENT_REQUEST(_qzz_res, 0, \
3885 VG_USERREQ__MEMPOOL_EXISTS, \ 4732 VG_USERREQ__MEMPOOL_EXISTS, \
3886 pool, 0, 0, 0, 0); \ 4733 pool, 0, 0, 0, 0); \
3887 _qzz_res; \ 4734 _qzz_res; \
3888 }) 4735 })
3889 4736
3890 /* Mark a piece of memory as being a stack. Returns a stack id. */ 4737 /* Mark a piece of memory as being a stack. Returns a stack id. */
3891 #define VALGRIND_STACK_REGISTER(start, end) \ 4738 #define VALGRIND_STACK_REGISTER(start, end) \
4739 __extension__ \
3892 ({unsigned int _qzz_res; \ 4740 ({unsigned int _qzz_res; \
3893 VALGRIND_DO_CLIENT_REQUEST(_qzz_res, 0, \ 4741 VALGRIND_DO_CLIENT_REQUEST(_qzz_res, 0, \
3894 VG_USERREQ__STACK_REGISTER, \ 4742 VG_USERREQ__STACK_REGISTER, \
3895 start, end, 0, 0, 0); \ 4743 start, end, 0, 0, 0); \
3896 _qzz_res; \ 4744 _qzz_res; \
3897 }) 4745 })
3898 4746
3899 /* Unmark the piece of memory associated with a stack id as being a 4747 /* Unmark the piece of memory associated with a stack id as being a
3900 stack. */ 4748 stack. */
3901 #define VALGRIND_STACK_DEREGISTER(id) \ 4749 #define VALGRIND_STACK_DEREGISTER(id) \
3902 {unsigned int _qzz_res; \ 4750 {unsigned int _qzz_res; \
3903 VALGRIND_DO_CLIENT_REQUEST(_qzz_res, 0, \ 4751 VALGRIND_DO_CLIENT_REQUEST(_qzz_res, 0, \
3904 VG_USERREQ__STACK_DEREGISTER, \ 4752 VG_USERREQ__STACK_DEREGISTER, \
3905 id, 0, 0, 0, 0); \ 4753 id, 0, 0, 0, 0); \
3906 } 4754 }
3907 4755
3908 /* Change the start and end address of the stack id. */ 4756 /* Change the start and end address of the stack id. */
3909 #define VALGRIND_STACK_CHANGE(id, start, end) \ 4757 #define VALGRIND_STACK_CHANGE(id, start, end) \
3910 {unsigned int _qzz_res; \ 4758 {unsigned int _qzz_res; \
3911 VALGRIND_DO_CLIENT_REQUEST(_qzz_res, 0, \ 4759 VALGRIND_DO_CLIENT_REQUEST(_qzz_res, 0, \
3912 VG_USERREQ__STACK_CHANGE, \ 4760 VG_USERREQ__STACK_CHANGE, \
3913 id, start, end, 0, 0); \ 4761 id, start, end, 0, 0); \
3914 } 4762 }
3915 4763
4764 /* Load PDB debug info for Wine PE image_map. */
4765 #define VALGRIND_LOAD_PDB_DEBUGINFO(fd, ptr, total_size, delta) \
4766 {unsigned int _qzz_res; \
4767 VALGRIND_DO_CLIENT_REQUEST(_qzz_res, 0, \
4768 VG_USERREQ__LOAD_PDB_DEBUGINFO, \
4769 fd, ptr, total_size, delta, 0); \
4770 }
4771
4772 /* Map a code address to a source file name and line number. buf64
4773 must point to a 64-byte buffer in the caller's address space. The
4774 result will be dumped in there and is guaranteed to be zero
4775 terminated. If no info is found, the first byte is set to zero. */
4776 #define VALGRIND_MAP_IP_TO_SRCLOC(addr, buf64) \
4777 {unsigned int _qzz_res; \
4778 VALGRIND_DO_CLIENT_REQUEST(_qzz_res, 0, \
4779 VG_USERREQ__MAP_IP_TO_SRCLOC, \
4780 addr, buf64, 0, 0, 0); \
4781 }
4782
3916 4783
3917 #undef PLAT_x86_linux 4784 #undef PLAT_x86_linux
3918 #undef PLAT_amd64_linux 4785 #undef PLAT_amd64_linux
3919 #undef PLAT_ppc32_linux 4786 #undef PLAT_ppc32_linux
3920 #undef PLAT_ppc64_linux 4787 #undef PLAT_ppc64_linux
4788 #undef PLAT_arm_linux
3921 #undef PLAT_ppc32_aix5 4789 #undef PLAT_ppc32_aix5
3922 #undef PLAT_ppc64_aix5 4790 #undef PLAT_ppc64_aix5
3923 4791
3924 #endif /* __VALGRIND_H */ 4792 #endif /* __VALGRIND_H */