| 1 /* |
| 2 This is a version (aka dlmalloc) of malloc/free/realloc written by |
| 3 Doug Lea and released to the public domain, as explained at |
| 4 http://creativecommons.org/publicdomain/zero/1.0/ Send questions, |
| 5 comments, complaints, performance data, etc to dl@cs.oswego.edu |
| 6 |
| 7 * Version 2.8.6 Wed Aug 29 06:57:58 2012 Doug Lea |
| 8 Note: There may be an updated version of this malloc obtainable at |
| 9 ftp://gee.cs.oswego.edu/pub/misc/malloc.c |
| 10 Check before installing! |
| 11 |
| 12 * Quickstart |
| 13 |
| 14 This library is all in one file to simplify the most common usage: |
| 15 ftp it, compile it (-O3), and link it into another program. All of |
| 16 the compile-time options default to reasonable values for use on |
| 17 most platforms. You might later want to step through various |
| 18 compile-time and dynamic tuning options. |
| 19 |
| 20 For convenience, an include file for code using this malloc is at: |
| 21 ftp://gee.cs.oswego.edu/pub/misc/malloc-2.8.6.h |
| 22 You don't really need this .h file unless you call functions not |
| 23 defined in your system include files. The .h file contains only the |
| 24 excerpts from this file needed for using this malloc on ANSI C/C++ |
| 25 systems, so long as you haven't changed compile-time options about |
| 26 naming and tuning parameters. If you do, then you can create your |
| 27 own malloc.h that does include all settings by cutting at the point |
| 28 indicated below. Note that you may already by default be using a C |
| 29 library containing a malloc that is based on some version of this |
| 30 malloc (for example in linux). You might still want to use the one |
| 31 in this file to customize settings or to avoid overheads associated |
| 32 with library versions. |
| 33 |
| 34 * Vital statistics: |
| 35 |
| 36 Supported pointer/size_t representation: 4 or 8 bytes |
| 37 size_t MUST be an unsigned type of the same width as |
| 38 pointers. (If you are using an ancient system that declares |
| 39 size_t as a signed type, or need it to be a different width |
| 40 than pointers, you can use a previous release of this malloc |
| 41 (e.g. 2.7.2) supporting these.) |
| 42 |
| 43 Alignment: 8 bytes (minimum) |
| 44 This suffices for nearly all current machines and C compilers. |
| 45 However, you can define MALLOC_ALIGNMENT to be wider than this |
| 46 if necessary (up to 128 bytes), at the expense of using more space.
| 47 |
| 48 Minimum overhead per allocated chunk: 4 or 8 bytes (if 4byte sizes) |
| 49 8 or 16 bytes (if 8byte sizes) |
| 50 Each malloced chunk has a hidden word of overhead holding size |
| 51 and status information, and additional cross-check word |
| 52 if FOOTERS is defined. |
| 53 |
| 54 Minimum allocated size: 4-byte ptrs: 16 bytes (including overhead) |
| 55 8-byte ptrs: 32 bytes (including overhead) |
| 56 |
| 57 Even a request for zero bytes (i.e., malloc(0)) returns a |
| 58 pointer to something of the minimum allocatable size. |
| 59 The maximum overhead wastage (i.e., the number of extra bytes
| 60 allocated beyond what was requested in malloc) is less than or equal
| 61 to the minimum size, except for requests >= mmap_threshold that |
| 62 are serviced via mmap(), where the worst case wastage is about |
| 63 32 bytes plus the remainder from a system page (the minimal |
| 64 mmap unit); typically 4096 or 8192 bytes. |
| 65 |
| 66 Security: static-safe; optionally more or less |
| 67 The "security" of malloc refers to the ability of malicious |
| 68 code to accentuate the effects of errors (for example, freeing |
| 69 space that is not currently malloc'ed or overwriting past the |
| 70 ends of chunks) in code that calls malloc. This malloc |
| 71 guarantees not to modify any memory locations below the base of |
| 72 heap, i.e., static variables, even in the presence of usage |
| 73 errors. The routines additionally detect most improper frees |
| 74 and reallocs. All this holds as long as the static bookkeeping |
| 75 for malloc itself is not corrupted by some other means. This |
| 76 is only one aspect of security -- these checks do not, and |
| 77 cannot, detect all possible programming errors. |
| 78 |
| 79 If FOOTERS is defined nonzero, then each allocated chunk |
| 80 carries an additional check word to verify that it was malloced |
| 81 from its space. These check words are the same within each |
| 82 execution of a program using malloc, but differ across |
| 83 executions, so externally crafted fake chunks cannot be |
| 84 freed. This improves security by rejecting frees/reallocs that |
| 85 could corrupt heap memory, in addition to the checks preventing |
| 86 writes to statics that are always on. This may further improve |
| 87 security at the expense of time and space overhead. (Note that |
| 88 FOOTERS may also be worth using with MSPACES.) |
| 89 |
| 90 By default detected errors cause the program to abort (calling |
| 91 "abort()"). You can override this to instead proceed past |
| 92 errors by defining PROCEED_ON_ERROR. In this case, a bad free |
| 93 has no effect, and a malloc that encounters a bad address |
| 94 caused by user overwrites will ignore the bad address by |
| 95 dropping pointers and indices to all known memory. This may |
| 96 be appropriate for programs that should continue if at all |
| 97 possible in the face of programming errors, although they may |
| 98 run out of memory because dropped memory is never reclaimed. |
| 99 |
| 100 If you don't like either of these options, you can define |
| 101 CORRUPTION_ERROR_ACTION and USAGE_ERROR_ACTION to do anything |
| 102 else. And if you are sure that your program using malloc has
| 103 no errors or vulnerabilities, you can define INSECURE to 1, |
| 104 which might (or might not) provide a small performance improvement. |
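|
| As a sketch of overriding these hooks (the handler shown is
| hypothetical; the one-argument CORRUPTION_ERROR_ACTION(m) and
| two-argument USAGE_ERROR_ACTION(m,p) forms match the defaults
| defined later in this file):
| #define CORRUPTION_ERROR_ACTION(m)  log_and_abort("heap corrupt")
| #define USAGE_ERROR_ACTION(m,p)     log_and_abort("bad address")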
| 105 |
| 106 It is also possible to limit the maximum total allocatable |
| 107 space, using malloc_set_footprint_limit. This is not |
| 108 designed as a security feature in itself (calls to set limits |
| 109 are not screened or privileged), but may be useful as one |
| 110 aspect of a secure implementation. |
| 111 |
| 112 Thread-safety: NOT thread-safe unless USE_LOCKS defined non-zero |
| 113 When USE_LOCKS is defined, each public call to malloc, free, |
| 114 etc is surrounded with a lock. By default, this uses a plain |
| 115 pthread mutex, win32 critical section, or a spin-lock if
| 116 available for the platform and not disabled by setting |
| 117 USE_SPIN_LOCKS=0. However, if USE_RECURSIVE_LOCKS is defined, |
| 118 recursive versions are used instead (which are not required for |
| 119 base functionality but may be needed in layered extensions). |
| 120 Using a global lock is not especially fast, and can be a major |
| 121 bottleneck. It is designed only to provide minimal protection |
| 122 in concurrent environments, and to provide a basis for |
| 123 extensions. If you are using malloc in a concurrent program, |
| 124 consider instead using nedmalloc |
| 125 (http://www.nedprod.com/programs/portable/nedmalloc/) or |
| 126 ptmalloc (See http://www.malloc.de), which are derived from |
| 127 versions of this malloc. |
| 128 |
| 129 System requirements: Any combination of MORECORE and/or MMAP/MUNMAP |
| 130 This malloc can use unix sbrk or any emulation (invoked using |
| 131 the CALL_MORECORE macro) and/or mmap/munmap or any emulation |
| 132 (invoked using CALL_MMAP/CALL_MUNMAP) to get and release system |
| 133 memory. On most unix systems, it tends to work best if both |
| 134 MORECORE and MMAP are enabled. On Win32, it uses emulations |
| 135 based on VirtualAlloc. It also uses common C library functions |
| 136 like memset. |
| 137 |
| 138 Compliance: I believe it is compliant with the Single Unix Specification |
| 139 (See http://www.unix.org). Also SVID/XPG, ANSI C, and probably |
| 140 others as well. |
| 141 |
| 142 * Overview of algorithms |
| 143 |
| 144 This is not the fastest, most space-conserving, most portable, or |
| 145 most tunable malloc ever written. However it is among the fastest |
| 146 while also being among the most space-conserving, portable and |
| 147 tunable. Consistent balance across these factors results in a good |
| 148 general-purpose allocator for malloc-intensive programs. |
| 149 |
| 150 In most ways, this malloc is a best-fit allocator. Generally, it |
| 151 chooses the best-fitting existing chunk for a request, with ties |
| 152 broken in approximately least-recently-used order. (This strategy |
| 153 normally maintains low fragmentation.) However, for requests less |
| 154 than 256 bytes, it deviates from best-fit when there is not an
| 155 exactly fitting available chunk by preferring to use space adjacent |
| 156 to that used for the previous small request, as well as by breaking |
| 157 ties in approximately most-recently-used order. (These enhance |
| 158 locality of series of small allocations.) And for very large requests |
| 159 (>= 256Kb by default), it relies on system memory mapping |
| 160 facilities, if supported. (This helps avoid carrying around and |
| 161 possibly fragmenting memory used only for large chunks.) |
| 162 |
| 163 All operations (except malloc_stats and mallinfo) have execution |
| 164 times that are bounded by a constant factor of the number of bits in |
| 165 a size_t, not counting any clearing in calloc or copying in realloc, |
| 166 or actions surrounding MORECORE and MMAP that have times |
| 167 proportional to the number of non-contiguous regions returned by |
| 168 system allocation routines, which is often just 1. In real-time |
| 169 applications, you can optionally suppress segment traversals using |
| 170 NO_SEGMENT_TRAVERSAL, which assures bounded execution even when |
| 171 system allocators return non-contiguous spaces, at the typical |
| 172 expense of carrying around more memory and increased fragmentation. |
| 173 |
| 174 The implementation is not very modular and seriously overuses |
| 175 macros. Perhaps someday all C compilers will do as good a job |
| 176 inlining modular code as can now be done by brute-force expansion, |
| 177 but for now, enough of them seem not to.
| 178 |
| 179 Some compilers issue a lot of warnings about code that is |
| 180 dead/unreachable only on some platforms, and also about intentional |
| 181 uses of negation on unsigned types. All known cases of each can be |
| 182 ignored. |
| 183 |
| 184 For a longer but out of date high-level description, see |
| 185 http://gee.cs.oswego.edu/dl/html/malloc.html |
| 186 |
| 187 * MSPACES |
| 188 If MSPACES is defined, then in addition to malloc, free, etc., |
| 189 this file also defines mspace_malloc, mspace_free, etc. These |
| 190 are versions of malloc routines that take an "mspace" argument |
| 191 obtained using create_mspace, to control all internal bookkeeping. |
| 192 If ONLY_MSPACES is defined, only these versions are compiled. |
| 193 So if you would like to use this allocator for only some allocations, |
| 194 and your system malloc for others, you can compile with |
| 195 ONLY_MSPACES and then do something like... |
| 196 static mspace mymspace = create_mspace(0,0); // for example |
| 197 #define mymalloc(bytes) mspace_malloc(mymspace, bytes) |
| 198 |
| 199 (Note: If you only need one instance of an mspace, you can instead |
| 200 use "USE_DL_PREFIX" to relabel the global malloc.) |
| 201 |
| 202 You can similarly create thread-local allocators by storing |
| 203 mspaces as thread-locals. For example: |
| 204 static __thread mspace tlms = 0; |
| 205 void* tlmalloc(size_t bytes) { |
| 206 if (tlms == 0) tlms = create_mspace(0, 0); |
| 207 return mspace_malloc(tlms, bytes); |
| 208 } |
| 209 void tlfree(void* mem) { mspace_free(tlms, mem); } |
| 210 |
| 211 Unless FOOTERS is defined, each mspace is completely independent. |
| 212 You cannot allocate from one and free to another (although |
| 213 conformance is only weakly checked, so usage errors are not always |
| 214 caught). If FOOTERS is defined, then each chunk carries around a tag |
| 215 indicating its originating mspace, and frees are directed to their |
| 216 originating spaces. Normally, this requires use of locks. |
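|
| A minimal lifecycle sketch (create_mspace and destroy_mspace are
| declared later in this file; capacity 0 requests the default
| initial size, and locked 0 disables locking for this space):
| mspace ms = create_mspace(0, 0);
| void* p = mspace_malloc(ms, 128);
| mspace_free(ms, p);
| destroy_mspace(ms); /* releases all memory held by ms at once */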
| 217 |
| 218 ------------------------- Compile-time options --------------------------- |
| 219 |
| 220 Be careful in setting #define values for numerical constants of type |
| 221 size_t. On some systems, literal values are not automatically extended |
| 222 to size_t precision unless they are explicitly cast. You can also
| 223 use the symbolic values MAX_SIZE_T, SIZE_T_ONE, etc below. |
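|
| For example, a 2MB constant written so the arithmetic is carried
| out in size_t precision on all platforms (the same pattern used
| for the defaults later in this file):
| #define DEFAULT_TRIM_THRESHOLD ((size_t)2U * (size_t)1024U * (size_t)1024U)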
| 224 |
| 225 WIN32 default: defined if _WIN32 defined |
| 226 Defining WIN32 sets up defaults for MS environment and compilers. |
| 227 Otherwise defaults are for unix. Beware that there seem to be some |
| 228 cases where this malloc might not be a pure drop-in replacement for |
| 229 Win32 malloc: Random-looking failures from Win32 GDI APIs (e.g.,
| 230 SetDIBits()) may be due to bugs in some video driver implementations |
| 231 when pixel buffers are malloc()ed, and the region spans more than |
| 232 one VirtualAlloc()ed region. Because dlmalloc uses a small (64Kb) |
| 233 default granularity, pixel buffers may straddle virtual allocation |
| 234 regions more often than when using the Microsoft allocator. You can |
| 235 avoid this by using VirtualAlloc() and VirtualFree() for all pixel |
| 236 buffers rather than using malloc(). If this is not possible, |
| 237 recompile this malloc with a larger DEFAULT_GRANULARITY. Note: |
| 238 in cases where MSC and gcc (cygwin) are known to differ on WIN32, |
| 239 conditions use _MSC_VER to distinguish them. |
| 240 |
| 241 DLMALLOC_EXPORT default: extern |
| 242 Defines how public APIs are declared. If you want to export via a |
| 243 Windows DLL, you might define this as |
| 244 #define DLMALLOC_EXPORT extern __declspec(dllexport) |
| 245 If you want a POSIX ELF shared object, you might use |
| 246 #define DLMALLOC_EXPORT extern __attribute__((visibility("default"))) |
| 247 |
| 248 MALLOC_ALIGNMENT default: (size_t)(2 * sizeof(void *)) |
| 249 Controls the minimum alignment for malloc'ed chunks. It must be a |
| 250 power of two and at least 8, even on machines for which smaller |
| 251 alignments would suffice. It may be defined as larger than this |
| 252 though. Note however that code and data structures are optimized for |
| 253 the case of 8-byte alignment. |
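|
| For example, to force 16-byte alignment (the value this file
| itself selects by default on Darwin):
| #define MALLOC_ALIGNMENT ((size_t)16U)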
| 254 |
| 255 MSPACES default: 0 (false) |
| 256 If true, compile in support for independent allocation spaces. |
| 257 This is only supported if HAVE_MMAP is true. |
| 258 |
| 259 ONLY_MSPACES default: 0 (false) |
| 260 If true, only compile in mspace versions, not regular versions. |
| 261 |
| 262 USE_LOCKS default: 0 (false) |
| 263 Causes each call to each public routine to be surrounded with |
| 264 pthread or WIN32 mutex lock/unlock. (If set true, this can be |
| 265 overridden on a per-mspace basis for mspace versions.) If set to a |
| 266 non-zero value other than 1, locks are used, but their |
| 267 implementation is left out, so lock functions must be supplied manually, |
| 268 as described below. |
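|
| As a sketch of supplying locks manually with USE_LOCKS=2 (a
| pthread-based version; these macro names follow the user-defined
| lock section later in this file, which lists the full set a
| build must provide):
| #define MLOCK_T            pthread_mutex_t
| #define INITIAL_LOCK(lk)   pthread_mutex_init(lk, NULL)
| #define DESTROY_LOCK(lk)   pthread_mutex_destroy(lk)
| #define ACQUIRE_LOCK(lk)   pthread_mutex_lock(lk)
| #define RELEASE_LOCK(lk)   pthread_mutex_unlock(lk)
| #define TRY_LOCK(lk)       (!pthread_mutex_trylock(lk))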
| 269 |
| 270 USE_SPIN_LOCKS default: 1 iff USE_LOCKS and spin locks available |
| 271 If true, uses custom spin locks for locking. This is currently |
| 272 supported only for gcc >= 4.1, older gccs on x86 platforms, and recent
| 273 MS compilers. Otherwise, posix locks or win32 critical sections are |
| 274 used. |
| 275 |
| 276 USE_RECURSIVE_LOCKS default: not defined |
| 277 If defined nonzero, uses recursive (aka reentrant) locks, otherwise |
| 278 uses plain mutexes. This is not required for malloc proper, but may |
| 279 be needed for layered allocators such as nedmalloc. |
| 280 |
| 281 LOCK_AT_FORK default: not defined |
| 282 If defined nonzero, performs pthread_atfork upon initialization |
| 283 to initialize child lock while holding parent lock. The implementation |
| 284 assumes that pthread locks (not custom locks) are being used. In other |
| 285 cases, you may need to customize the implementation. |
| 286 |
| 287 FOOTERS default: 0 |
| 288 If true, provide extra checking and dispatching by placing |
| 289 information in the footers of allocated chunks. This adds |
| 290 space and time overhead. |
| 291 |
| 292 INSECURE default: 0 |
| 293 If true, omit checks for usage errors and heap space overwrites. |
| 294 |
| 295 USE_DL_PREFIX default: NOT defined |
| 296 Causes compiler to prefix all public routines with the string 'dl'. |
| 297 This can be useful when you only want to use this malloc in one part |
| 298 of a program, using your regular system malloc elsewhere. |
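|
| For example, after compiling this file with -DUSE_DL_PREFIX,
| client code uses the prefixed names while the system malloc
| remains untouched:
| void* p = dlmalloc(100);
| dlfree(p);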
| 299 |
| 300 MALLOC_INSPECT_ALL default: NOT defined |
| 301 If defined, compiles malloc_inspect_all and mspace_inspect_all, that |
| 302 perform traversal of all heap space. Unless access to these |
| 303 functions is otherwise restricted, you probably do not want to |
| 304 include them in secure implementations. |
| 305 |
| 306 ABORT default: defined as abort() |
| 307 Defines how to abort on failed checks. On most systems, a failed |
| 308 check cannot die with an "assert" or even print an informative |
| 309 message, because the underlying print routines in turn call malloc, |
| 310 which will fail again. Generally, the best policy is to simply call |
| 311 abort(). It's not very useful to do more than this because many |
| 312 errors due to overwriting will show up as address faults (null, odd |
| 313 addresses etc) rather than malloc-triggered checks, so will also |
| 314 abort. Also, most compilers know that abort() does not return, so |
| 315 can better optimize code conditionally calling it. |
| 316 |
| 317 PROCEED_ON_ERROR default: defined as 0 (false) |
| 318 Controls whether detected bad addresses cause them to be bypassed
| 319 rather than aborting. If set, detected bad arguments to free and |
| 320 realloc are ignored. And all bookkeeping information is zeroed out |
| 321 upon a detected overwrite of freed heap space, thus losing the |
| 322 ability to ever return it from malloc again, but enabling the |
| 323 application to proceed. If PROCEED_ON_ERROR is defined, the |
| 324 static variable malloc_corruption_error_count is compiled in |
| 325 and can be examined to see if errors have occurred. This option |
| 326 generates slower code than the default abort policy. |
| 327 |
| 328 DEBUG default: NOT defined |
| 329 The DEBUG setting is mainly intended for people trying to modify |
| 330 this code or diagnose problems when porting to new platforms. |
| 331 However, it may also be able to better isolate user errors than just |
| 332 using runtime checks. The assertions in the check routines spell |
| 333 out in more detail the assumptions and invariants underlying the |
| 334 algorithms. The checking is fairly extensive, and will slow down |
| 335 execution noticeably. Calling malloc_stats or mallinfo with DEBUG |
| 336 set will attempt to check every non-mmapped allocated and free chunk |
| 337 in the course of computing the summaries. |
| 338 |
| 339 ABORT_ON_ASSERT_FAILURE default: defined as 1 (true) |
| 340 Debugging assertion failures can be nearly impossible if your |
| 341 version of the assert macro causes malloc to be called, which will |
| 342 lead to a cascade of further failures, blowing the runtime stack. |
| 343 ABORT_ON_ASSERT_FAILURE causes assertion failures to call abort(),
| 344 which will usually make debugging easier. |
| 345 |
| 346 MALLOC_FAILURE_ACTION default: sets errno to ENOMEM, or no-op on win32 |
| 347 The action to take before "return 0" when malloc fails because
| 348 no memory is available.
| 349 |
| 350 HAVE_MORECORE default: 1 (true) unless win32 or ONLY_MSPACES |
| 351 True if this system supports sbrk or an emulation of it. |
| 352 |
| 353 MORECORE default: sbrk |
| 354 The name of the sbrk-style system routine to call to obtain more |
| 355 memory. See below for guidance on writing custom MORECORE |
| 356 functions. The type of the argument to sbrk/MORECORE varies across |
| 357 systems. It cannot be size_t, because it supports negative |
| 358 arguments, so it is normally the signed type of the same width as |
| 359 size_t (sometimes declared as "intptr_t"). It doesn't much matter |
| 360 though. Internally, we only call it with arguments less than half |
| 361 the max value of a size_t, which should work across all reasonable |
| 362 possibilities, although sometimes generating compiler warnings. |
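|
| A sketch of a custom MORECORE over a fixed static arena (names
| and the 1MB size are hypothetical; (void*)-1 is the conventional
| sbrk-style failure value, and trimming is refused, so you would
| also define MORECORE_CANNOT_TRIM):
| static char arena[1 << 20];
| static size_t arena_used = 0;
| void* my_morecore(intptr_t increment) {
|   if (increment >= 0 &&
|       arena_used + (size_t)increment <= sizeof(arena)) {
|     char* p = arena + arena_used;
|     arena_used += (size_t)increment;
|     return p;
|   }
|   return (void*)-1; /* cannot extend or trim */
| }
| #define MORECORE my_morecore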
| 363 |
| 364 MORECORE_CONTIGUOUS default: 1 (true) if HAVE_MORECORE |
| 365 If true, take advantage of the fact that consecutive calls to MORECORE
| 366 with positive arguments always return contiguous increasing |
| 367 addresses. This is true of unix sbrk. It does not hurt too much to |
| 368 set it true anyway, since malloc copes with non-contiguities. |
| 369 Setting it false when MORECORE is definitely non-contiguous saves
| 370 the time and possibly wasted space it would take to discover this.
| 371 |
| 372 MORECORE_CANNOT_TRIM default: NOT defined |
| 373 True if MORECORE cannot release space back to the system when given |
| 374 negative arguments. This is generally necessary only if you are |
| 375 using a hand-crafted MORECORE function that cannot handle negative |
| 376 arguments. |
| 377 |
| 378 NO_SEGMENT_TRAVERSAL default: 0 |
| 379 If non-zero, suppresses traversals of memory segments |
| 380 returned by either MORECORE or CALL_MMAP. This disables |
| 381 merging of contiguous segments, and the selective release
| 382 of unused segments to the OS, but bounds execution times.
| 383 |
| 384 HAVE_MMAP default: 1 (true) |
| 385 True if this system supports mmap or an emulation of it. If so, and |
| 386 HAVE_MORECORE is not true, MMAP is used for all system |
| 387 allocation. If set and HAVE_MORECORE is true as well, MMAP is |
| 388 primarily used to directly allocate very large blocks. It is also |
| 389 used as a backup strategy in cases where MORECORE fails to provide |
| 390 space from system. Note: A single call to MUNMAP is assumed to be |
| 391 able to unmap memory that may have been allocated using multiple calls
| 392 to MMAP, so long as they are adjacent. |
| 393 |
| 394 HAVE_MREMAP default: 1 on linux, else 0 |
| 395 If true realloc() uses mremap() to re-allocate large blocks and |
| 396 extend or shrink allocation spaces. |
| 397 |
| 398 MMAP_CLEARS default: 1 except on WINCE. |
| 399 True if mmap clears memory so calloc doesn't need to. This is true |
| 400 for standard unix mmap using /dev/zero and on WIN32 except for WINCE. |
| 401 |
| 402 USE_BUILTIN_FFS default: 0 (i.e., not used) |
| 403 Causes malloc to use the builtin ffs() function to compute indices. |
| 404 Some compilers may recognize and intrinsify ffs to be faster than the |
| 405 supplied C version. Also, the case of x86 using gcc is special-cased |
| 406 to an asm instruction, so is already as fast as it can be, and so |
| 407 this setting has no effect. Similarly for Win32 under recent MS compilers. |
| 408 (On most x86s, the asm version is only slightly faster than the C version.) |
| 409 |
| 410 malloc_getpagesize default: derive from system includes, or 4096. |
| 411 The system page size. To the extent possible, this malloc manages |
| 412 memory from the system in page-size units. This may be (and |
| 413 usually is) a function rather than a constant. This is ignored |
| 414 if WIN32, where page size is determined using GetSystemInfo during
| 415 initialization. |
| 416 |
| 417 USE_DEV_RANDOM default: 0 (i.e., not used) |
| 418 Causes malloc to use /dev/random to initialize secure magic seed for |
| 419 stamping footers. Otherwise, the current time is used. |
| 420 |
| 421 NO_MALLINFO default: 0 |
| 422 If defined, don't compile "mallinfo". This can be a simple way |
| 423 of dealing with mismatches between system declarations and |
| 424 those in this file. |
| 425 |
| 426 MALLINFO_FIELD_TYPE default: size_t |
| 427 The type of the fields in the mallinfo struct. This was originally |
| 428 defined as "int" in SVID etc, but is more usefully defined as |
| 429 size_t. The value is used only if HAVE_USR_INCLUDE_MALLOC_H is not set.
| 430 |
| 431 NO_MALLOC_STATS default: 0 |
| 432 If defined, don't compile "malloc_stats". This avoids calls to |
| 433 fprintf and bringing in stdio dependencies you might not want. |
| 434 |
| 435 REALLOC_ZERO_BYTES_FREES default: not defined |
| 436 This should be set if a call to realloc with zero bytes should |
| 437 be the same as a call to free. Some people think it should. Otherwise, |
| 438 since this malloc returns a unique pointer for malloc(0), so does |
| 439 realloc(p, 0). |
| 440 |
| 441 LACKS_UNISTD_H, LACKS_FCNTL_H, LACKS_SYS_PARAM_H, LACKS_SYS_MMAN_H |
| 442 LACKS_STRINGS_H, LACKS_STRING_H, LACKS_SYS_TYPES_H, LACKS_ERRNO_H |
| 443 LACKS_STDLIB_H LACKS_SCHED_H LACKS_TIME_H default: NOT defined unless on WIN32 |
| 444 Define these if your system does not have these header files. |
| 445 You might need to manually insert some of the declarations they provide. |
| 446 |
| 447 DEFAULT_GRANULARITY default: page size if MORECORE_CONTIGUOUS, |
| 448 system_info.dwAllocationGranularity in WIN32, |
| 449 otherwise 64K. |
| 450 Also settable using mallopt(M_GRANULARITY, x) |
| 451 The unit for allocating and deallocating memory from the system. On |
| 452 most systems with contiguous MORECORE, there is no reason to |
| 453 make this more than a page. However, systems with MMAP tend to |
| 454 either require or encourage larger granularities. You can increase |
| 455 this value to prevent system allocation functions from being called so
| 456 often, especially if they are slow. The value must be at least one |
| 457 page and must be a power of two. Setting to 0 causes initialization |
| 458 to either page size or win32 region size. (Note: In previous |
| 459 versions of malloc, the equivalent of this option was called |
| 460 "TOP_PAD") |
| 461 |
| 462 DEFAULT_TRIM_THRESHOLD default: 2MB |
| 463 Also settable using mallopt(M_TRIM_THRESHOLD, x) |
| 464 The maximum amount of unused top-most memory to keep before |
| 465 releasing via malloc_trim in free(). Automatic trimming is mainly |
| 466 useful in long-lived programs using contiguous MORECORE. Because |
| 467 trimming via sbrk can be slow on some systems, and can sometimes be |
| 468 wasteful (in cases where programs immediately afterward allocate |
| 469 more large chunks) the value should be high enough so that your |
| 470 overall system performance would improve by releasing this much |
| 471 memory. As a rough guide, you might set to a value close to the |
| 472 average size of a process (program) running on your system. |
| 473 Releasing this much memory would allow such a process to run in |
| 474 memory. Generally, it is worth tuning trim thresholds when a |
| 475 program undergoes phases where several large chunks are allocated |
| 476 and released in ways that can reuse each other's storage, perhaps |
| 477 mixed with phases where there are no such chunks at all. The trim |
| 478 value must be greater than page size to have any useful effect. To |
| 479 disable trimming completely, you can set to MAX_SIZE_T. Note that the trick |
| 480 some people use of mallocing a huge space and then freeing it at |
| 481 program startup, in an attempt to reserve system memory, doesn't |
| 482 have the intended effect under automatic trimming, since that memory |
| 483 will immediately be returned to the system. |
| 484 |
| 485 DEFAULT_MMAP_THRESHOLD default: 256K |
| 486 Also settable using mallopt(M_MMAP_THRESHOLD, x) |
| 487 The request size threshold for using MMAP to directly service a |
| 488 request. Requests of at least this size that cannot be allocated |
| 489 using already-existing space will be serviced via mmap. (If enough |
| 490 normal freed space already exists it is used instead.) Using mmap |
| 491 segregates relatively large chunks of memory so that they can be |
| 492 individually obtained and released from the host system. A request |
| 493 serviced through mmap is never reused by any other request (at least |
| 494 not directly; the system may just so happen to remap successive |
| 495 requests to the same locations). Segregating space in this way has |
| 496 the benefits that: Mmapped space can always be individually released |
| 497 back to the system, which helps keep the system level memory demands |
| 498 of a long-lived program low. Also, mapped memory doesn't become |
| 499 `locked' between other chunks, as can happen with normally allocated |
| 500 chunks, which means that even trimming via malloc_trim would not |
| 501 release them. However, it has the disadvantage that the space |
| 502 cannot be reclaimed, consolidated, and then used to service later |
| 503 requests, as happens with normal chunks. The advantages of mmap |
| 504 nearly always outweigh disadvantages for "large" chunks, but the |
| 505 value of "large" may vary across systems. The default is an |
| 506 empirically derived value that works well in most systems. You can |
| 507 disable mmap by setting to MAX_SIZE_T. |
| 508 |
| 509 MAX_RELEASE_CHECK_RATE default: 4095 unless not HAVE_MMAP |
| 510 The number of consolidated frees between checks to release |
| 511 unused segments when freeing. When using non-contiguous segments, |
| 512 especially with multiple mspaces, checking only for topmost space |
| 513 doesn't always suffice to trigger trimming. To compensate for this, |
| 514 free() will, with a period of MAX_RELEASE_CHECK_RATE (or the |
| 515 current number of segments, if greater) try to release unused |
| 516 segments to the OS when freeing chunks that result in |
| 517 consolidation. The best value for this parameter is a compromise |
| 518 between slowing down frees with relatively costly checks that |
| 519 rarely trigger versus holding on to unused memory. To effectively |
| 520 disable, set to MAX_SIZE_T. This may lead to a very slight speed |
| 521 improvement at the expense of carrying around more memory. |
| 522 */ |
| 523 |
| 524 /* Version identifier to allow people to support multiple versions */ |
| 525 #ifndef DLMALLOC_VERSION |
| 526 #define DLMALLOC_VERSION 20806 |
| 527 #endif /* DLMALLOC_VERSION */ |
| 528 |
| 529 #ifndef DLMALLOC_EXPORT |
| 530 #define DLMALLOC_EXPORT extern |
| 531 #endif |
| 532 |
| 533 #if defined(__ANDROID__) |
| 534 #include <android/log.h> |
| 535 #include "../../../third_party/ashmem/ashmem.h" |
| 536 #endif |
| 537 |
| 538 #ifndef WIN32 |
| 539 #ifdef _WIN32 |
| 540 #define WIN32 1 |
| 541 #endif /* _WIN32 */ |
| 542 #ifdef _WIN32_WCE |
| 543 #define LACKS_FCNTL_H |
| 544 #define WIN32 1 |
| 545 #endif /* _WIN32_WCE */ |
| 546 #endif /* WIN32 */ |
| 547 #ifdef WIN32 |
| 548 #define WIN32_LEAN_AND_MEAN |
| 549 #include <windows.h> |
| 550 #include <tchar.h> |
| 551 #define HAVE_MMAP 1 |
| 552 #define HAVE_MORECORE 0 |
| 553 #define LACKS_UNISTD_H |
| 554 #define LACKS_SYS_PARAM_H |
| 555 #define LACKS_SYS_MMAN_H |
| 556 #define LACKS_STRING_H |
| 557 #define LACKS_STRINGS_H |
| 558 #define LACKS_SYS_TYPES_H |
| 559 #define LACKS_ERRNO_H |
| 560 #define LACKS_SCHED_H |
| 561 #ifndef MALLOC_FAILURE_ACTION |
| 562 #define MALLOC_FAILURE_ACTION |
| 563 #endif /* MALLOC_FAILURE_ACTION */ |
| 564 #ifndef MMAP_CLEARS |
| 565 #ifdef _WIN32_WCE /* WINCE reportedly does not clear */ |
| 566 #define MMAP_CLEARS 0 |
| 567 #else |
| 568 #define MMAP_CLEARS 1 |
| 569 #endif /* _WIN32_WCE */ |
| 570 #endif /*MMAP_CLEARS */ |
| 571 #endif /* WIN32 */ |
| 572 |
| 573 #if defined(DARWIN) || defined(_DARWIN) |
| 574 /* Mac OSX docs advise not to use sbrk; it seems better to use mmap */ |
| 575 #ifndef HAVE_MORECORE |
| 576 #define HAVE_MORECORE 0 |
| 577 #define HAVE_MMAP 1 |
| 578 /* OSX allocators provide 16 byte alignment */ |
| 579 #ifndef MALLOC_ALIGNMENT |
| 580 #define MALLOC_ALIGNMENT ((size_t)16U) |
| 581 #endif |
| 582 #endif /* HAVE_MORECORE */ |
| 583 #endif /* DARWIN */ |
| 584 |
| 585 #ifndef LACKS_SYS_TYPES_H |
| 586 #include <sys/types.h> /* For size_t */ |
| 587 #endif /* LACKS_SYS_TYPES_H */ |
| 588 |
| 589 /* The maximum possible size_t value has all bits set */ |
| 590 #define MAX_SIZE_T (~(size_t)0) |
| 591 |
| 592 #ifndef USE_LOCKS /* ensure true if spin or recursive locks set */ |
| 593 #define USE_LOCKS ((defined(USE_SPIN_LOCKS) && USE_SPIN_LOCKS != 0) || \ |
| 594 (defined(USE_RECURSIVE_LOCKS) && USE_RECURSIVE_LOCKS != 0)) |
| 595 #endif /* USE_LOCKS */ |
| 596 |
| 597 #if USE_LOCKS /* Spin locks for gcc >= 4.1, older gcc on x86, MSC >= 1310 */ |
| 598 #if ((defined(__GNUC__) && \ |
| 599 ((__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 1)) || \ |
| 600 defined(__i386__) || defined(__x86_64__))) || \ |
| 601 (defined(_MSC_VER) && _MSC_VER>=1310)) |
| 602 #ifndef USE_SPIN_LOCKS |
| 603 #define USE_SPIN_LOCKS 1 |
| 604 #endif /* USE_SPIN_LOCKS */ |
| 605 #elif USE_SPIN_LOCKS |
| 606 #error "USE_SPIN_LOCKS defined without implementation" |
| 607 #endif /* ... locks available... */ |
| 608 #elif !defined(USE_SPIN_LOCKS) |
| 609 #define USE_SPIN_LOCKS 0 |
| 610 #endif /* USE_LOCKS */ |
| 611 |
| 612 #ifndef ONLY_MSPACES |
| 613 #define ONLY_MSPACES 0 |
| 614 #endif /* ONLY_MSPACES */ |
| 615 #ifndef MSPACES |
| 616 #if ONLY_MSPACES |
| 617 #define MSPACES 1 |
| 618 #else /* ONLY_MSPACES */ |
| 619 #define MSPACES 0 |
| 620 #endif /* ONLY_MSPACES */ |
| 621 #endif /* MSPACES */ |
| 622 #ifndef MALLOC_ALIGNMENT |
| 623 #define MALLOC_ALIGNMENT ((size_t)(2 * sizeof(void *))) |
| 624 #endif /* MALLOC_ALIGNMENT */ |
| 625 #ifndef FOOTERS |
| 626 #define FOOTERS 0 |
| 627 #endif /* FOOTERS */ |
| 628 #ifndef ABORT |
| 629 #define ABORT abort() |
| 630 #endif /* ABORT */ |
| 631 #ifndef ABORT_ON_ASSERT_FAILURE |
| 632 #define ABORT_ON_ASSERT_FAILURE 1 |
| 633 #endif /* ABORT_ON_ASSERT_FAILURE */ |
| 634 #ifndef PROCEED_ON_ERROR |
| 635 #define PROCEED_ON_ERROR 0 |
| 636 #endif /* PROCEED_ON_ERROR */ |
| 637 |
| 638 #ifndef INSECURE |
| 639 #define INSECURE 0 |
| 640 #endif /* INSECURE */ |
| 641 #ifndef MALLOC_INSPECT_ALL |
| 642 #define MALLOC_INSPECT_ALL 0 |
| 643 #endif /* MALLOC_INSPECT_ALL */ |
| 644 #ifndef HAVE_MMAP |
| 645 #define HAVE_MMAP 1 |
| 646 #endif /* HAVE_MMAP */ |
| 647 #ifndef MMAP_CLEARS |
| 648 #define MMAP_CLEARS 1 |
| 649 #endif /* MMAP_CLEARS */ |
| 650 #ifndef HAVE_MREMAP |
| 651 #ifdef linux |
| 652 #define HAVE_MREMAP 1 |
| 653 #define _GNU_SOURCE /* Turns on mremap() definition */ |
| 654 #else /* linux */ |
| 655 #define HAVE_MREMAP 0 |
| 656 #endif /* linux */ |
| 657 #endif /* HAVE_MREMAP */ |
| 658 #ifndef MALLOC_FAILURE_ACTION |
| 659 #define MALLOC_FAILURE_ACTION errno = ENOMEM; |
| 660 #endif /* MALLOC_FAILURE_ACTION */ |
| 661 #ifndef HAVE_MORECORE |
| 662 #if ONLY_MSPACES |
| 663 #define HAVE_MORECORE 0 |
| 664 #else /* ONLY_MSPACES */ |
| 665 #define HAVE_MORECORE 1 |
| 666 #endif /* ONLY_MSPACES */ |
| 667 #endif /* HAVE_MORECORE */ |
| 668 #if !HAVE_MORECORE |
| 669 #define MORECORE_CONTIGUOUS 0 |
| 670 #else /* !HAVE_MORECORE */ |
| 671 #define MORECORE_DEFAULT sbrk |
| 672 #ifndef MORECORE_CONTIGUOUS |
| 673 #define MORECORE_CONTIGUOUS 1 |
| 674 #endif /* MORECORE_CONTIGUOUS */ |
| 675 #endif /* HAVE_MORECORE */ |
| 676 #ifndef DEFAULT_GRANULARITY |
| 677 #if (MORECORE_CONTIGUOUS || defined(WIN32)) |
| 678 #define DEFAULT_GRANULARITY (0) /* 0 means to compute in init_mparams */ |
| 679 #else /* MORECORE_CONTIGUOUS */ |
| 680 #define DEFAULT_GRANULARITY ((size_t)64U * (size_t)1024U) |
| 681 #endif /* MORECORE_CONTIGUOUS */ |
| 682 #endif /* DEFAULT_GRANULARITY */ |
| 683 #ifndef DEFAULT_TRIM_THRESHOLD |
| 684 #ifndef MORECORE_CANNOT_TRIM |
| 685 #define DEFAULT_TRIM_THRESHOLD ((size_t)2U * (size_t)1024U * (size_t)1024U) |
| 686 #else /* MORECORE_CANNOT_TRIM */ |
| 687 #define DEFAULT_TRIM_THRESHOLD MAX_SIZE_T |
| 688 #endif /* MORECORE_CANNOT_TRIM */ |
| 689 #endif /* DEFAULT_TRIM_THRESHOLD */ |
| 690 #ifndef DEFAULT_MMAP_THRESHOLD |
| 691 #if HAVE_MMAP |
| 692 #define DEFAULT_MMAP_THRESHOLD ((size_t)256U * (size_t)1024U) |
| 693 #else /* HAVE_MMAP */ |
| 694 #define DEFAULT_MMAP_THRESHOLD MAX_SIZE_T |
| 695 #endif /* HAVE_MMAP */ |
| 696 #endif /* DEFAULT_MMAP_THRESHOLD */ |
| 697 #ifndef MAX_RELEASE_CHECK_RATE |
| 698 #if HAVE_MMAP |
| 699 #define MAX_RELEASE_CHECK_RATE 4095 |
| 700 #else |
| 701 #define MAX_RELEASE_CHECK_RATE MAX_SIZE_T |
| 702 #endif /* HAVE_MMAP */ |
| 703 #endif /* MAX_RELEASE_CHECK_RATE */ |
| 704 #ifndef USE_BUILTIN_FFS |
| 705 #define USE_BUILTIN_FFS 0 |
| 706 #endif /* USE_BUILTIN_FFS */ |
| 707 #ifndef USE_DEV_RANDOM |
| 708 #define USE_DEV_RANDOM 0 |
| 709 #endif /* USE_DEV_RANDOM */ |
| 710 #ifndef NO_MALLINFO |
| 711 #define NO_MALLINFO 0 |
| 712 #endif /* NO_MALLINFO */ |
| 713 #ifndef MALLINFO_FIELD_TYPE |
| 714 #define MALLINFO_FIELD_TYPE size_t |
| 715 #endif /* MALLINFO_FIELD_TYPE */ |
| 716 #ifndef NO_MALLOC_STATS |
| 717 #define NO_MALLOC_STATS 0 |
| 718 #endif /* NO_MALLOC_STATS */ |
| 719 #ifndef NO_SEGMENT_TRAVERSAL |
| 720 #define NO_SEGMENT_TRAVERSAL 0 |
| 721 #endif /* NO_SEGMENT_TRAVERSAL */ |
| 722 |
| 723 /* |
| 724 mallopt tuning options. SVID/XPG defines four standard parameter |
| 725 numbers for mallopt, normally defined in malloc.h. None of these |
| 726 are used in this malloc, so setting them has no effect. But this |
| 727 malloc does support the following options. |
| 728 */ |
| 729 |
| 730 #define M_TRIM_THRESHOLD (-1) |
| 731 #define M_GRANULARITY (-2) |
| 732 #define M_MMAP_THRESHOLD (-3) |
| 733 |
| 734 /* ------------------------ Mallinfo declarations ------------------------ */ |
| 735 |
| 736 #if !NO_MALLINFO |
| 737 /* |
| 738 This version of malloc supports the standard SVID/XPG mallinfo |
| 739 routine that returns a struct containing usage properties and |
| 740 statistics. It should work on any system that has a |
| 741 /usr/include/malloc.h defining struct mallinfo. The main |
| 742 declaration needed is the mallinfo struct that is returned (by-copy) |
| 743 by mallinfo(). The mallinfo struct contains a number of fields that
| 744 are not even meaningful in this version of malloc. These fields
| 745 are instead filled by mallinfo() with other numbers that might be of
| 746 interest. |
| 747 |
| 748 HAVE_USR_INCLUDE_MALLOC_H should be set if you have a |
| 749 /usr/include/malloc.h file that includes a declaration of struct |
| 750 mallinfo. If so, it is included; else a compliant version is |
| 751 declared below. These must be precisely the same for mallinfo() to |
| 752 work. The original SVID version of this struct, defined on most |
| 753 systems with mallinfo, declares all fields as ints. But some others |
| 754 define as unsigned long. If your system defines the fields using a |
| 755 type of different width than listed here, you MUST #include your |
| 756 system version and #define HAVE_USR_INCLUDE_MALLOC_H. |
| 757 */ |
| 758 |
| 759 /* #define HAVE_USR_INCLUDE_MALLOC_H */ |
| 760 |
| 761 #ifdef HAVE_USR_INCLUDE_MALLOC_H |
| 762 #include "/usr/include/malloc.h" |
| 763 #else /* HAVE_USR_INCLUDE_MALLOC_H */ |
| 764 #ifndef STRUCT_MALLINFO_DECLARED |
| 765 /* HP-UX (and others?) redefines mallinfo unless _STRUCT_MALLINFO is defined */ |
| 766 #define _STRUCT_MALLINFO |
| 767 #define STRUCT_MALLINFO_DECLARED 1 |
| 768 struct mallinfo { |
| 769 MALLINFO_FIELD_TYPE arena; /* non-mmapped space allocated from system */ |
| 770 MALLINFO_FIELD_TYPE ordblks; /* number of free chunks */ |
| 771 MALLINFO_FIELD_TYPE smblks; /* always 0 */ |
| 772 MALLINFO_FIELD_TYPE hblks; /* always 0 */ |
| 773 MALLINFO_FIELD_TYPE hblkhd; /* space in mmapped regions */ |
| 774 MALLINFO_FIELD_TYPE usmblks; /* maximum total allocated space */ |
| 775 MALLINFO_FIELD_TYPE fsmblks; /* always 0 */ |
| 776 MALLINFO_FIELD_TYPE uordblks; /* total allocated space */ |
| 777 MALLINFO_FIELD_TYPE fordblks; /* total free space */ |
| 778 MALLINFO_FIELD_TYPE keepcost; /* releasable (via malloc_trim) space */ |
| 779 }; |
| 780 #endif /* STRUCT_MALLINFO_DECLARED */ |
| 781 #endif /* HAVE_USR_INCLUDE_MALLOC_H */ |
| 782 #endif /* NO_MALLINFO */ |
| 783 |
| 784 /* |
| 785 Try to persuade compilers to inline. The most critical functions for |
| 786 inlining are defined as macros, so these aren't used for them. |
| 787 */ |
| 788 |
| 789 #ifndef FORCEINLINE |
| 790 #if defined(__GNUC__) |
| 791 #define FORCEINLINE __inline __attribute__ ((always_inline)) |
| 792 #elif defined(_MSC_VER) |
| 793 #define FORCEINLINE __forceinline |
| 794 #endif |
| 795 #endif |
| 796 #ifndef NOINLINE |
| 797 #if defined(__GNUC__) |
| 798 #define NOINLINE __attribute__ ((noinline)) |
| 799 #elif defined(_MSC_VER) |
| 800 #define NOINLINE __declspec(noinline) |
| 801 #else |
| 802 #define NOINLINE |
| 803 #endif |
| 804 #endif |
| 805 |
| 806 #ifdef __cplusplus |
| 807 extern "C" { |
| 808 #ifndef FORCEINLINE |
| 809 #define FORCEINLINE inline |
| 810 #endif |
| 811 #endif /* __cplusplus */ |
| 812 #ifndef FORCEINLINE |
| 813 #define FORCEINLINE |
| 814 #endif |
| 815 |
| 816 #if !ONLY_MSPACES |
| 817 |
| 818 /* ------------------- Declarations of public routines ------------------- */ |
| 819 |
| 820 #ifndef USE_DL_PREFIX |
| 821 #define dlcalloc calloc |
| 822 #define dlfree free |
| 823 #define dlmalloc malloc |
| 824 #define dlmemalign memalign |
| 825 #define dlposix_memalign posix_memalign |
| 826 #define dlrealloc realloc |
| 827 #define dlrealloc_in_place realloc_in_place |
| 828 #define dlvalloc valloc |
| 829 #define dlpvalloc pvalloc |
| 830 #define dlmallinfo mallinfo |
| 831 #define dlmallopt mallopt |
| 832 #define dlmalloc_trim malloc_trim |
| 833 #define dlmalloc_stats malloc_stats |
| 834 #define dlmalloc_usable_size malloc_usable_size |
| 835 #define dlmalloc_footprint malloc_footprint |
| 836 #define dlmalloc_max_footprint malloc_max_footprint |
| 837 #define dlmalloc_footprint_limit malloc_footprint_limit |
| 838 #define dlmalloc_set_footprint_limit malloc_set_footprint_limit |
| 839 #define dlmalloc_inspect_all malloc_inspect_all |
| 840 #define dlindependent_calloc independent_calloc |
| 841 #define dlindependent_comalloc independent_comalloc |
| 842 #define dlbulk_free bulk_free |
| 843 #endif /* USE_DL_PREFIX */ |
| 844 |
| 845 /* |
| 846 malloc(size_t n) |
| 847 Returns a pointer to a newly allocated chunk of at least n bytes, or |
| 848 null if no space is available, in which case errno is set to ENOMEM |
| 849 on ANSI C systems. |
| 850 |
| 851 If n is zero, malloc returns a minimum-sized chunk. (The minimum |
| 852 size is 16 bytes on most 32bit systems, and 32 bytes on 64bit |
| 853 systems.) Note that size_t is an unsigned type, so calls with |
| 854 arguments that would be negative if signed are interpreted as |
| 855 requests for huge amounts of space, which will often fail. The |
| 856 maximum supported value of n differs across systems, but is in all |
| 857 cases less than the maximum representable value of a size_t. |
| 858 */ |
| 859 DLMALLOC_EXPORT void* dlmalloc(size_t); |
| 860 |
| 861 /* |
| 862 free(void* p) |
| 863 Releases the chunk of memory pointed to by p, that had been previously |
| 864 allocated using malloc or a related routine such as realloc. |
| 865 It has no effect if p is null. If p was not malloced or already |
| 866 freed, free(p) will by default cause the current program to abort. |
| 867 */ |
| 868 DLMALLOC_EXPORT void dlfree(void*); |
| 869 |
| 870 /* |
| 871 calloc(size_t n_elements, size_t element_size); |
| 872 Returns a pointer to n_elements * element_size bytes, with all locations |
| 873 set to zero. |
| 874 */ |
| 875 DLMALLOC_EXPORT void* dlcalloc(size_t, size_t); |
| 876 |
| 877 /* |
| 878 realloc(void* p, size_t n) |
| 879 Returns a pointer to a chunk of size n that contains the same data |
| 880 as does chunk p up to the minimum of (n, p's size) bytes, or null |
| 881 if no space is available. |
| 882 |
| 883 The returned pointer may or may not be the same as p. The algorithm |
| 884 prefers extending p in most cases when possible, otherwise it |
| 885 employs the equivalent of a malloc-copy-free sequence. |
| 886 |
| 887 If p is null, realloc is equivalent to malloc. |
| 888 |
| 889 If space is not available, realloc returns null, errno is set (if on |
| 890 ANSI) and p is NOT freed. |
| 891 |
| 892 if n is for fewer bytes than already held by p, the newly unused |
| 893 space is lopped off and freed if possible. realloc with a size |
| 894 argument of zero (re)allocates a minimum-sized chunk. |
| 895 |
| 896 The old unix realloc convention of allowing the last-free'd chunk |
| 897 to be used as an argument to realloc is not supported. |
| 898 */ |
| 899 DLMALLOC_EXPORT void* dlrealloc(void*, size_t); |
| 900 |
| 901 /* |
| 902 realloc_in_place(void* p, size_t n) |
| 903 Resizes the space allocated for p to size n, only if this can be |
| 904 done without moving p (i.e., only if there is adjacent space |
| 905 available if n is greater than p's current allocated size, or n is |
| 906 less than or equal to p's size). This may be used instead of plain |
| 907 realloc if an alternative allocation strategy is needed upon failure |
| 908 to expand space; for example, reallocation of a buffer that must be |
| 909 memory-aligned or cleared. You can use realloc_in_place to trigger |
| 910 these alternatives only when needed. |
| 911 |
| 912 Returns p if successful; otherwise null. |
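|
| A sketch of the fallback pattern this enables (grow_buffer is a
| hypothetical caller-side helper):
| void* grow_buffer(void* p, size_t old_n, size_t new_n) {
|   if (realloc_in_place(p, new_n) != 0)
|     return p;                  /* grown without moving */
|   void* q = malloc(new_n);     /* alternative strategy */
|   if (q != 0) {
|     memcpy(q, p, old_n);
|     free(p);
|   }
|   return q;
| }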
| 913 */ |
| 914 DLMALLOC_EXPORT void* dlrealloc_in_place(void*, size_t); |
| 915 |
| 916 /* |
| 917 memalign(size_t alignment, size_t n); |
| 918 Returns a pointer to a newly allocated chunk of n bytes, aligned |
| 919 in accord with the alignment argument. |
| 920 |
| 921 The alignment argument should be a power of two. If the argument is |
| 922 not a power of two, the nearest greater power is used. |
| 923 8-byte alignment is guaranteed by normal malloc calls, so don't |
| 924 bother calling memalign with an argument of 8 or less. |
| 925 |
| 926 Overreliance on memalign is a sure way to fragment space. |
| 927 */ |
| 928 DLMALLOC_EXPORT void* dlmemalign(size_t, size_t); |
| 929 |
| 930 /* |
| 931 int posix_memalign(void** pp, size_t alignment, size_t n); |
| 932 Allocates a chunk of n bytes, aligned in accord with the alignment |
| 933 argument. Differs from memalign only in that it (1) assigns the |
| 934 allocated memory to *pp rather than returning it, (2) fails and |
| 935 returns EINVAL if the alignment is not a power of two, and (3) fails and
| 936 returns ENOMEM if memory cannot be allocated. |
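|
| For example, to obtain a 64-byte-aligned 1KB buffer (64 is a
| power of two, as required):
| void* p = 0;
| if (posix_memalign(&p, 64, 1024) == 0) {
|   /* ... use p ... */
|   free(p);
| }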
| 937 */ |
| 938 DLMALLOC_EXPORT int dlposix_memalign(void**, size_t, size_t); |
| 939 |
| 940 /* |
| 941 valloc(size_t n); |
| 942 Equivalent to memalign(pagesize, n), where pagesize is the page |
| 943 size of the system. If the pagesize is unknown, 4096 is used. |
| 944 */ |
| 945 DLMALLOC_EXPORT void* dlvalloc(size_t); |
| 946 |
| 947 /* |
| 948 mallopt(int parameter_number, int parameter_value) |
| 949 Sets tunable parameters. The format is to provide a
| 950 (parameter-number, parameter-value) pair. mallopt then sets the |
| 951 corresponding parameter to the argument value if it can (i.e., so |
| 952 long as the value is meaningful), and returns 1 if successful else |
| 953 0. To work around the fact that mallopt is specified to use int,
| 954 not size_t parameters, the value -1 is specially treated as the |
| 955 maximum unsigned size_t value. |
| 956 |
| 957 SVID/XPG/ANSI defines four standard param numbers for mallopt, |
| 958 normally defined in malloc.h. None of these are used in this malloc,
| 959 so setting them has no effect. But this malloc also supports other |
| 960 options in mallopt. See below for details. Briefly, supported |
| 961 parameters are as follows (listed defaults are for "typical" |
| 962 configurations). |
| 963 |
| 964 Symbol param # default allowed param values |
| 965 M_TRIM_THRESHOLD -1 2*1024*1024 any (-1 disables) |
| 966 M_GRANULARITY -2 page size any power of 2 >= page size |
| 967 M_MMAP_THRESHOLD -3 256*1024 any (or 0 if no MMAP support) |
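|
| For example, to raise the mmap threshold to 1MB (parameters are
| ints, per the workaround described above; the return value is 1
| on success, 0 otherwise):
| if (mallopt(M_MMAP_THRESHOLD, 1024 * 1024) == 0)
|   handle_rejected_value();   /* hypothetical error path */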
| 968 */ |
| 969 DLMALLOC_EXPORT int dlmallopt(int, int); |
| 970 |
| 971 /* |
| 972 malloc_footprint(); |
| 973 Returns the number of bytes obtained from the system. The total |
| 974 number of bytes allocated by malloc, realloc etc., is less than this |
| 975 value. Unlike mallinfo, this function returns only a precomputed |
| 976 result, so can be called frequently to monitor memory consumption. |
| 977 Even if locks are otherwise defined, this function does not use them, |
| 978 so results might not be up to date. |
| 979 */ |
| 980 DLMALLOC_EXPORT size_t dlmalloc_footprint(void); |
| 981 |
| 982 /* |
| 983 malloc_max_footprint(); |
| 984 Returns the maximum number of bytes obtained from the system. This |
| 985 value will be greater than current footprint if deallocated space |
| 986 has been reclaimed by the system. The peak number of bytes allocated |
| 987 by malloc, realloc etc., is less than this value. Unlike mallinfo, |
| 988 this function returns only a precomputed result, so can be called |
| 989 frequently to monitor memory consumption. Even if locks are |
| 990 otherwise defined, this function does not use them, so results might |
| 991 not be up to date. |
| 992 */ |
| 993 DLMALLOC_EXPORT size_t dlmalloc_max_footprint(void); |
| 994 |
| 995 /* |
| 996 malloc_footprint_limit(); |
| 997 Returns the number of bytes that the heap is allowed to obtain from |
| 998 the system, returning the last value returned by |
| 999 malloc_set_footprint_limit, or the maximum size_t value if |
| 1000 never set. The returned value reflects a permission. There is no |
| 1001 guarantee that this number of bytes can actually be obtained from |
| 1002 the system. |
| 1003 */ |
| 1004 DLMALLOC_EXPORT size_t dlmalloc_footprint_limit(); |
| 1005 |
| 1006 /* |
| 1007 malloc_set_footprint_limit(); |
| 1008 Sets the maximum number of bytes to obtain from the system, causing |
| 1009 failure returns from malloc and related functions upon attempts to |
| 1010 exceed this value. The argument value may be subject to page |
| 1011 rounding to an enforceable limit; the actual value imposed is returned.
| 1012 Using an argument of the maximum possible size_t effectively |
| 1013 disables checks. If the argument is less than or equal to the |
| 1014 current malloc_footprint, then all future allocations that require |
| 1015 additional system memory will fail. However, invocation cannot |
| 1016 retroactively deallocate existing used memory. |
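|
| For example, to cap the heap at roughly 64MB (the value actually
| imposed, after any page rounding, is returned):
| size_t imposed = malloc_set_footprint_limit((size_t)64 * 1024 * 1024);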
| 1017 */ |
| 1018 DLMALLOC_EXPORT size_t dlmalloc_set_footprint_limit(size_t bytes); |
| 1019 |
| 1020 #if MALLOC_INSPECT_ALL |
| 1021 /* |
| 1022 malloc_inspect_all(void(*handler)(void *start, |
| 1023 void *end, |
| 1024 size_t used_bytes, |
| 1025 void* callback_arg), |
| 1026 void* arg); |
| 1027 Traverses the heap and calls the given handler for each managed |
| 1028 region, skipping all bytes that are (or may be) used for bookkeeping |
| 1029 purposes. Traversal does not include chunks that have been
| 1030 directly memory mapped. Each reported region begins at the start |
| 1031 address, and continues up to but not including the end address. The |
| 1032 first used_bytes of the region contain allocated data. If |
| 1033 used_bytes is zero, the region is unallocated. The handler is |
| 1034 invoked with the given callback argument. If locks are defined, they |
| 1035 are held during the entire traversal. It is a bad idea to invoke |
| 1036 other malloc functions from within the handler. |
| 1037 |
| 1038 For example, to count the number of in-use chunks with size greater |
| 1039 than 1000, you could write: |
| 1040 static int count = 0; |
| 1041 void count_chunks(void* start, void* end, size_t used, void* arg) { |
| 1042 if (used >= 1000) ++count; |
| 1043 } |
| 1044 then: |
| 1045 malloc_inspect_all(count_chunks, NULL); |
| 1046 |
| 1047 malloc_inspect_all is compiled only if MALLOC_INSPECT_ALL is defined. |
| 1048 */ |
| 1049 DLMALLOC_EXPORT void dlmalloc_inspect_all(void(*handler)(void*, void *, size_t, void*),
| 1050 void* arg); |
| 1051 |
| 1052 #endif /* MALLOC_INSPECT_ALL */ |
| 1053 |
| 1054 #if !NO_MALLINFO |
| 1055 /* |
| 1056 mallinfo() |
| 1057 Returns (by copy) a struct containing various summary statistics: |
| 1058 |
| 1059 arena: current total non-mmapped bytes allocated from system |
| 1060 ordblks: the number of free chunks |
| 1061 smblks: always zero. |
| 1062 hblks: current number of mmapped regions |
| 1063 hblkhd: total bytes held in mmapped regions |
| 1064 usmblks: the maximum total allocated space. This will be greater |
| 1065 than current total if trimming has occurred. |
| 1066 fsmblks: always zero |
| 1067 uordblks: current total allocated space (normal or mmapped) |
| 1068 fordblks: total free space |
| 1069 keepcost: the maximum number of bytes that could ideally be released |
| 1070 back to system via malloc_trim. ("ideally" means that |
| 1071 it ignores page restrictions etc.) |
| 1072 |
| 1073 Because these fields are ints, but internal bookkeeping may |
| 1074 be kept as longs, the reported values may wrap around zero and |
| 1075 thus be inaccurate. |
| 1076 */ |
| 1077 DLMALLOC_EXPORT struct mallinfo dlmallinfo(void); |
| 1078 #endif /* NO_MALLINFO */ |
| 1079 |
| 1080 /* |
| 1081 independent_calloc(size_t n_elements, size_t element_size, void* chunks[]); |
| 1082 |
| 1083 independent_calloc is similar to calloc, but instead of returning a |
| 1084 single cleared space, it returns an array of pointers to n_elements |
| 1085 independent elements that can hold contents of size elem_size, each |
| 1086 of which starts out cleared, and can be independently freed, |
| 1087 realloc'ed etc. The elements are guaranteed to be adjacently |
| 1088 allocated (this is not guaranteed to occur with multiple callocs or |
| 1089 mallocs), which may also improve cache locality in some |
| 1090 applications. |
| 1091 |
| 1092 The "chunks" argument is optional (i.e., may be null, which is |
| 1093 probably the most typical usage). If it is null, the returned array |
| 1094 is itself dynamically allocated and should also be freed when it is |
| 1095 no longer needed. Otherwise, the chunks array must be of at least |
| 1096 n_elements in length. It is filled in with the pointers to the |
| 1097 chunks. |
| 1098 |
| 1099 In either case, independent_calloc returns this pointer array, or |
| 1100 null if the allocation failed. If n_elements is zero and "chunks" |
| 1101 is null, it returns a chunk representing an array with zero elements |
| 1102 (which should be freed if not wanted). |
| 1103 |
| 1104 Each element must be freed when it is no longer needed. This can be |
| 1105 done all at once using bulk_free. |
| 1106 |
| 1107 independent_calloc simplifies and speeds up implementations of many |
| 1108 kinds of pools. It may also be useful when constructing large data |
| 1109 structures that initially have a fixed number of fixed-sized nodes, |
| 1110 but the number is not known at compile time, and some of the nodes |
| 1111 may later need to be freed. For example: |
| 1112 |
| 1113 struct Node { int item; struct Node* next; }; |
| 1114 |
| 1115 struct Node* build_list() { |
| 1116 struct Node** pool; |
| 1117 int n = read_number_of_nodes_needed(); |
| 1118 if (n <= 0) return 0; |
| 1119 pool = (struct Node**) independent_calloc(n, sizeof(struct Node), 0);
| 1120 if (pool == 0) die(); |
| 1121 // organize into a linked list... |
| 1122 struct Node* first = pool[0]; |
| 1123 for (int i = 0; i < n-1; ++i)
| 1124 pool[i]->next = pool[i+1]; |
| 1125 free(pool); // Can now free the array (or not, if it is needed later) |
| 1126 return first; |
| 1127 } |
| 1128 */ |
| 1129 DLMALLOC_EXPORT void** dlindependent_calloc(size_t, size_t, void**); |
| 1130 |
| 1131 /* |
| 1132 independent_comalloc(size_t n_elements, size_t sizes[], void* chunks[]); |
| 1133 |
| 1134 independent_comalloc allocates, all at once, a set of n_elements |
| 1135 chunks with sizes indicated in the "sizes" array. It returns |
| 1136 an array of pointers to these elements, each of which can be |
| 1137 independently freed, realloc'ed etc. The elements are guaranteed to |
| 1138 be adjacently allocated (this is not guaranteed to occur with |
| 1139 multiple callocs or mallocs), which may also improve cache locality |
| 1140 in some applications. |
| 1141 |
| 1142 The "chunks" argument is optional (i.e., may be null). If it is null |
| 1143 the returned array is itself dynamically allocated and should also |
| 1144 be freed when it is no longer needed. Otherwise, the chunks array |
| 1145 must be of at least n_elements in length. It is filled in with the |
| 1146 pointers to the chunks. |
| 1147 |
| 1148 In either case, independent_comalloc returns this pointer array, or |
| 1149 null if the allocation failed. If n_elements is zero and chunks is |
| 1150 null, it returns a chunk representing an array with zero elements |
| 1151 (which should be freed if not wanted). |
| 1152 |
| 1153 Each element must be freed when it is no longer needed. This can be |
| 1154 done all at once using bulk_free. |
| 1155 |
  independent_comalloc differs from independent_calloc in that each
| 1157 element may have a different size, and also that it does not |
| 1158 automatically clear elements. |
| 1159 |
| 1160 independent_comalloc can be used to speed up allocation in cases |
| 1161 where several structs or objects must always be allocated at the |
| 1162 same time. For example: |
| 1163 |
  struct Head { ... };
  struct Foot { ... };
| 1166 |
| 1167 void send_message(char* msg) { |
    size_t msglen = strlen(msg);
| 1169 size_t sizes[3] = { sizeof(struct Head), msglen, sizeof(struct Foot) }; |
| 1170 void* chunks[3]; |
| 1171 if (independent_comalloc(3, sizes, chunks) == 0) |
| 1172 die(); |
| 1173 struct Head* head = (struct Head*)(chunks[0]); |
| 1174 char* body = (char*)(chunks[1]); |
| 1175 struct Foot* foot = (struct Foot*)(chunks[2]); |
| 1176 // ... |
| 1177 } |
| 1178 |
| 1179 In general though, independent_comalloc is worth using only for |
| 1180 larger values of n_elements. For small values, you probably won't |
  detect enough difference from a series of malloc calls to bother.
| 1182 |
| 1183 Overuse of independent_comalloc can increase overall memory usage, |
| 1184 since it cannot reuse existing noncontiguous small chunks that |
| 1185 might be available for some of the elements. |
| 1186 */ |
| 1187 DLMALLOC_EXPORT void** dlindependent_comalloc(size_t, size_t*, void**); |
| 1188 |
| 1189 /* |
| 1190 bulk_free(void* array[], size_t n_elements) |
| 1191 Frees and clears (sets to null) each non-null pointer in the given |
| 1192 array. This is likely to be faster than freeing them one-by-one. |
| 1193 If footers are used, pointers that have been allocated in different |
| 1194 mspaces are not freed or cleared, and the count of all such pointers |
| 1195 is returned. For large arrays of pointers with poor locality, it |
| 1196 may be worthwhile to sort this array before calling bulk_free. |
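
  For example, a typical bulk teardown (a sketch; assumes every entry
  in ptrs[] was obtained from this malloc):

    void release_all(void* ptrs[], size_t n) {
      size_t unfreed = bulk_free(ptrs, n); // frees and nulls each entry
      assert(unfreed == 0); // nonzero only with FOOTERS + foreign mspaces
    }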
| 1197 */ |
| 1198 DLMALLOC_EXPORT size_t dlbulk_free(void**, size_t n_elements); |
| 1199 |
| 1200 /* |
| 1201 pvalloc(size_t n); |
| 1202 Equivalent to valloc(minimum-page-that-holds(n)), that is, |
  round n up to the nearest multiple of the page size.
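
  For example, with a 4096-byte page size, pvalloc(1) and pvalloc(4096)
  each return 4096 page-aligned bytes, while pvalloc(4097) returns 8192.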
| 1204 */ |
| 1205 DLMALLOC_EXPORT void* dlpvalloc(size_t); |
| 1206 |
| 1207 /* |
| 1208 malloc_trim(size_t pad); |
| 1209 |
| 1210 If possible, gives memory back to the system (via negative arguments |
| 1211 to sbrk) if there is unused memory at the `high' end of the malloc |
| 1212 pool or in unused MMAP segments. You can call this after freeing |
| 1213 large blocks of memory to potentially reduce the system-level memory |
| 1214 requirements of a program. However, it cannot guarantee to reduce |
| 1215 memory. Under some allocation patterns, some large free blocks of |
| 1216 memory will be locked between two used chunks, so they cannot be |
| 1217 given back to the system. |
| 1218 |
| 1219 The `pad' argument to malloc_trim represents the amount of free |
| 1220 trailing space to leave untrimmed. If this argument is zero, only |
| 1221 the minimum amount of memory to maintain internal data structures |
| 1222 will be left. Non-zero arguments can be supplied to maintain enough |
| 1223 trailing space to service future expected allocations without having |
| 1224 to re-obtain memory from the system. |
| 1225 |
| 1226 Malloc_trim returns 1 if it actually released any memory, else 0. |
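
  For example (a sketch; big_cache is a hypothetical long-lived buffer):

    free(big_cache);
    if (malloc_trim(64 * 1024)) {
      // memory above a 64KB cushion was returned to the system
    }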
| 1227 */ |
| 1228 DLMALLOC_EXPORT int dlmalloc_trim(size_t); |
| 1229 |
| 1230 /* |
| 1231 malloc_stats(); |
| 1232 Prints on stderr the amount of space obtained from the system (both |
| 1233 via sbrk and mmap), the maximum amount (which may be more than |
| 1234 current if malloc_trim and/or munmap got called), and the current |
| 1235 number of bytes allocated via malloc (or realloc, etc) but not yet |
| 1236 freed. Note that this is the number of bytes allocated, not the |
| 1237 number requested. It will be larger than the number requested |
| 1238 because of alignment and bookkeeping overhead. Because it includes |
| 1239 alignment wastage as being in use, this figure may be greater than |
| 1240 zero even when no user-level chunks are allocated. |
| 1241 |
| 1242 The reported current and maximum system memory can be inaccurate if |
| 1243 a program makes other calls to system memory allocation functions |
| 1244 (normally sbrk) outside of malloc. |
| 1245 |
| 1246 malloc_stats prints only the most commonly interesting statistics. |
| 1247 More information can be obtained by calling mallinfo. |
| 1248 */ |
| 1249 DLMALLOC_EXPORT void dlmalloc_stats(void); |
| 1250 |
| 1251 #endif /* ONLY_MSPACES */ |
| 1252 |
| 1253 #ifdef __cplusplus |
| 1254 } /* end of extern "C" */ |
| 1255 #endif /* __cplusplus */ |
| 1256 |
| 1257 /* |
| 1258 ======================================================================== |
| 1259 To make a fully customizable malloc.h header file, cut everything |
| 1260 above this line, put into file malloc.h, edit to suit, and #include it |
| 1261 on the next line, as well as in programs that use this malloc. |
| 1262 ======================================================================== |
| 1263 */ |
| 1264 |
| 1265 /* #include "malloc.h" */ |
| 1266 |
| 1267 /*------------------------------ internal #includes ---------------------- */ |
| 1268 |
| 1269 #ifdef _MSC_VER |
| 1270 #pragma warning( disable : 4146 ) /* no "unsigned" warnings */ |
| 1271 #endif /* _MSC_VER */ |
| 1272 #if !NO_MALLOC_STATS |
| 1273 #include <stdio.h> /* for printing in malloc_stats */ |
| 1274 #endif /* NO_MALLOC_STATS */ |
| 1275 #ifndef LACKS_ERRNO_H |
| 1276 #include <errno.h> /* for MALLOC_FAILURE_ACTION */ |
| 1277 #endif /* LACKS_ERRNO_H */ |
| 1278 #ifdef DEBUG |
| 1279 #if ABORT_ON_ASSERT_FAILURE |
| 1280 #undef assert |
| 1281 #define assert(x) if(!(x)) ABORT |
| 1282 #else /* ABORT_ON_ASSERT_FAILURE */ |
| 1283 #include <assert.h> |
| 1284 #endif /* ABORT_ON_ASSERT_FAILURE */ |
| 1285 #else /* DEBUG */ |
| 1286 #ifndef assert |
| 1287 #define assert(x) |
| 1288 #endif |
| 1289 #define DEBUG 0 |
| 1290 #endif /* DEBUG */ |
| 1291 #if !defined(WIN32) && !defined(LACKS_TIME_H) |
| 1292 #include <time.h> /* for magic initialization */ |
| 1293 #endif /* WIN32 */ |
| 1294 #ifndef LACKS_STDLIB_H |
| 1295 #include <stdlib.h> /* for abort() */ |
| 1296 #endif /* LACKS_STDLIB_H */ |
| 1297 #ifndef LACKS_STRING_H |
| 1298 #include <string.h> /* for memset etc */ |
| 1299 #endif /* LACKS_STRING_H */ |
| 1300 #if USE_BUILTIN_FFS |
| 1301 #ifndef LACKS_STRINGS_H |
| 1302 #include <strings.h> /* for ffs */ |
| 1303 #endif /* LACKS_STRINGS_H */ |
| 1304 #endif /* USE_BUILTIN_FFS */ |
| 1305 #if HAVE_MMAP |
| 1306 #ifndef LACKS_SYS_MMAN_H |
| 1307 /* On some versions of linux, mremap decl in mman.h needs __USE_GNU set */ |
| 1308 #if (defined(linux) && !defined(__USE_GNU)) |
| 1309 #define __USE_GNU 1 |
| 1310 #include <sys/mman.h> /* for mmap */ |
| 1311 #undef __USE_GNU |
| 1312 #else |
| 1313 #include <sys/mman.h> /* for mmap */ |
| 1314 #endif /* linux */ |
| 1315 #endif /* LACKS_SYS_MMAN_H */ |
| 1316 #ifndef LACKS_FCNTL_H |
| 1317 #include <fcntl.h> |
| 1318 #endif /* LACKS_FCNTL_H */ |
| 1319 #endif /* HAVE_MMAP */ |
| 1320 #ifndef LACKS_UNISTD_H |
| 1321 #include <unistd.h> /* for sbrk, sysconf */ |
| 1322 #else /* LACKS_UNISTD_H */ |
| 1323 #if !defined(__FreeBSD__) && !defined(__OpenBSD__) && !defined(__NetBSD__) |
| 1324 extern void* sbrk(ptrdiff_t); |
| 1325 #endif /* FreeBSD etc */ |
| 1326 #endif /* LACKS_UNISTD_H */ |
| 1327 |
| 1328 /* Declarations for locking */ |
| 1329 #if USE_LOCKS |
| 1330 #ifndef WIN32 |
| 1331 #if defined (__SVR4) && defined (__sun) /* solaris */ |
| 1332 #include <thread.h> |
| 1333 #elif !defined(LACKS_SCHED_H) |
| 1334 #include <sched.h> |
| 1335 #endif /* solaris or LACKS_SCHED_H */ |
#if (defined(USE_RECURSIVE_LOCKS) && USE_RECURSIVE_LOCKS != 0) || !USE_SPIN_LOCKS
| 1337 #include <pthread.h> |
| 1338 #endif /* USE_RECURSIVE_LOCKS ... */ |
| 1339 #elif defined(_MSC_VER) |
| 1340 #ifndef _M_AMD64 |
| 1341 /* These are already defined on AMD64 builds */ |
| 1342 #ifdef __cplusplus |
| 1343 extern "C" { |
| 1344 #endif /* __cplusplus */ |
LONG __cdecl _InterlockedCompareExchange(LONG volatile *Dest, LONG Exchange, LONG Comp);
| 1346 LONG __cdecl _InterlockedExchange(LONG volatile *Target, LONG Value); |
| 1347 #ifdef __cplusplus |
| 1348 } |
| 1349 #endif /* __cplusplus */ |
| 1350 #endif /* _M_AMD64 */ |
| 1351 #pragma intrinsic (_InterlockedCompareExchange) |
| 1352 #pragma intrinsic (_InterlockedExchange) |
| 1353 #define interlockedcompareexchange _InterlockedCompareExchange |
| 1354 #define interlockedexchange _InterlockedExchange |
| 1355 #elif defined(WIN32) && defined(__GNUC__) |
| 1356 #define interlockedcompareexchange(a, b, c) __sync_val_compare_and_swap(a, c, b) |
| 1357 #define interlockedexchange __sync_lock_test_and_set |
| 1358 #endif /* Win32 */ |
| 1359 #else /* USE_LOCKS */ |
| 1360 #endif /* USE_LOCKS */ |
| 1361 |
| 1362 #ifndef LOCK_AT_FORK |
| 1363 #define LOCK_AT_FORK 0 |
| 1364 #endif |
| 1365 |
| 1366 /* Declarations for bit scanning on win32 */ |
| 1367 #if defined(_MSC_VER) && _MSC_VER>=1300 |
| 1368 #ifndef BitScanForward /* Try to avoid pulling in WinNT.h */ |
| 1369 #ifdef __cplusplus |
| 1370 extern "C" { |
| 1371 #endif /* __cplusplus */ |
| 1372 unsigned char _BitScanForward(unsigned long *index, unsigned long mask); |
| 1373 unsigned char _BitScanReverse(unsigned long *index, unsigned long mask); |
| 1374 #ifdef __cplusplus |
| 1375 } |
| 1376 #endif /* __cplusplus */ |
| 1377 |
| 1378 #define BitScanForward _BitScanForward |
| 1379 #define BitScanReverse _BitScanReverse |
| 1380 #pragma intrinsic(_BitScanForward) |
| 1381 #pragma intrinsic(_BitScanReverse) |
| 1382 #endif /* BitScanForward */ |
| 1383 #endif /* defined(_MSC_VER) && _MSC_VER>=1300 */ |
| 1384 |
| 1385 #ifndef WIN32 |
| 1386 #ifndef malloc_getpagesize |
| 1387 # ifdef _SC_PAGESIZE /* some SVR4 systems omit an underscore */ |
| 1388 # ifndef _SC_PAGE_SIZE |
| 1389 # define _SC_PAGE_SIZE _SC_PAGESIZE |
| 1390 # endif |
| 1391 # endif |
| 1392 # ifdef _SC_PAGE_SIZE |
| 1393 # define malloc_getpagesize sysconf(_SC_PAGE_SIZE) |
| 1394 # else |
| 1395 # if defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE) |
| 1396 extern size_t getpagesize(); |
| 1397 # define malloc_getpagesize getpagesize() |
| 1398 # else |
| 1399 # ifdef WIN32 /* use supplied emulation of getpagesize */ |
| 1400 # define malloc_getpagesize getpagesize() |
| 1401 # else |
| 1402 # ifndef LACKS_SYS_PARAM_H |
| 1403 # include <sys/param.h> |
| 1404 # endif |
| 1405 # ifdef EXEC_PAGESIZE |
| 1406 # define malloc_getpagesize EXEC_PAGESIZE |
| 1407 # else |
| 1408 # ifdef NBPG |
| 1409 # ifndef CLSIZE |
| 1410 # define malloc_getpagesize NBPG |
| 1411 # else |
| 1412 # define malloc_getpagesize (NBPG * CLSIZE) |
| 1413 # endif |
| 1414 # else |
| 1415 # ifdef NBPC |
| 1416 # define malloc_getpagesize NBPC |
| 1417 # else |
| 1418 # ifdef PAGESIZE |
| 1419 # define malloc_getpagesize PAGESIZE |
| 1420 # else /* just guess */ |
| 1421 # define malloc_getpagesize ((size_t)4096U) |
| 1422 # endif |
| 1423 # endif |
| 1424 # endif |
| 1425 # endif |
| 1426 # endif |
| 1427 # endif |
| 1428 # endif |
| 1429 #endif |
| 1430 #endif |
| 1431 |
| 1432 /* ------------------- size_t and alignment properties -------------------- */ |
| 1433 |
| 1434 /* The byte and bit size of a size_t */ |
| 1435 #define SIZE_T_SIZE (sizeof(size_t)) |
| 1436 #define SIZE_T_BITSIZE (sizeof(size_t) << 3) |
| 1437 |
| 1438 /* Some constants coerced to size_t */ |
| 1439 /* Annoying but necessary to avoid errors on some platforms */ |
| 1440 #define SIZE_T_ZERO ((size_t)0) |
| 1441 #define SIZE_T_ONE ((size_t)1) |
| 1442 #define SIZE_T_TWO ((size_t)2) |
| 1443 #define SIZE_T_FOUR ((size_t)4) |
| 1444 #define TWO_SIZE_T_SIZES (SIZE_T_SIZE<<1) |
| 1445 #define FOUR_SIZE_T_SIZES (SIZE_T_SIZE<<2) |
| 1446 #define SIX_SIZE_T_SIZES (FOUR_SIZE_T_SIZES+TWO_SIZE_T_SIZES) |
| 1447 #define HALF_MAX_SIZE_T (MAX_SIZE_T / 2U) |
| 1448 |
| 1449 /* The bit mask value corresponding to MALLOC_ALIGNMENT */ |
| 1450 #define CHUNK_ALIGN_MASK (MALLOC_ALIGNMENT - SIZE_T_ONE) |
| 1451 |
| 1452 /* True if address a has acceptable alignment */ |
| 1453 #define is_aligned(A) (((size_t)((A)) & (CHUNK_ALIGN_MASK)) == 0) |
| 1454 |
| 1455 /* the number of bytes to offset an address to align it */ |
| 1456 #define align_offset(A)\ |
| 1457 ((((size_t)(A) & CHUNK_ALIGN_MASK) == 0)? 0 :\ |
| 1458 ((MALLOC_ALIGNMENT - ((size_t)(A) & CHUNK_ALIGN_MASK)) & CHUNK_ALIGN_MASK)) |
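
/*
  Worked example, assuming MALLOC_ALIGNMENT == 8 (so CHUNK_ALIGN_MASK == 7):
  for A == 0x1004, align_offset(A) == (8 - 4) & 7 == 4, and 0x1004 + 4 ==
  0x1008 is aligned; for an already-aligned A == 0x1008, the offset is 0.
*/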
| 1459 |
| 1460 /* -------------------------- MMAP preliminaries ------------------------- */ |
| 1461 |
| 1462 /* |
| 1463 If HAVE_MORECORE or HAVE_MMAP are false, we just define calls and |
  checks to fail so the compiler optimizer can delete code rather than
| 1465 using so many "#if"s. |
| 1466 */ |
| 1467 |
| 1468 |
| 1469 /* MORECORE and MMAP must return MFAIL on failure */ |
| 1470 #define MFAIL ((void*)(MAX_SIZE_T)) |
| 1471 #define CMFAIL ((char*)(MFAIL)) /* defined for convenience */ |
| 1472 |
| 1473 #if HAVE_MMAP |
| 1474 |
| 1475 #ifndef WIN32 |
| 1476 #define MUNMAP_DEFAULT(a, s) munmap((a), (s)) |
| 1477 #define MMAP_PROT (PROT_READ|PROT_WRITE) |
| 1478 #if !defined(MAP_ANONYMOUS) && defined(MAP_ANON) |
| 1479 #define MAP_ANONYMOUS MAP_ANON |
| 1480 #endif /* MAP_ANON */ |
| 1481 #ifdef MAP_ANONYMOUS |
| 1482 #define MMAP_FLAGS (MAP_PRIVATE) |
| 1483 #define MMAP_DEFAULT(s) mmap(0, (s), MMAP_PROT, MMAP_FLAGS, -1, 0) |
| 1484 #else /* MAP_ANONYMOUS */ |
| 1485 /* |
| 1486 Nearly all versions of mmap support MAP_ANONYMOUS, so the following |
| 1487 is unlikely to be needed, but is supplied just in case. |
| 1488 */ |
| 1489 #define MMAP_FLAGS (MAP_PRIVATE) |
| 1490 static int dev_zero_fd = -1; /* Cached file descriptor for /dev/zero. */ |
| 1491 #define MMAP_DEFAULT(s) ((dev_zero_fd < 0) ? \ |
| 1492 (dev_zero_fd = open("/dev/zero", O_RDWR), \ |
| 1493 mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0)) : \ |
| 1494 mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0)) |
| 1495 #endif /* MAP_ANONYMOUS */ |
| 1496 |
| 1497 #define DIRECT_MMAP_DEFAULT(s) MMAP_DEFAULT(s) |
| 1498 |
| 1499 #if defined(__ANDROID__) |
| 1500 |
| 1501 static void android_log_error(const char* msg) { |
| 1502 __android_log_write(ANDROID_LOG_ERROR, "dlmalloc", msg); |
| 1503 } |
| 1504 |
| 1505 static int create_ashmem(size_t size) { |
| 1506 const int ashmem_fd = ashmem_create_region("dlmalloc", size); |
| 1507 if (ashmem_fd < 0) { |
| 1508 char buf[128]; |
    snprintf(buf, sizeof(buf), "Could not create ashmem with size: %zu\n", size);
| 1510 android_log_error(buf); |
| 1511 } |
| 1512 return ashmem_fd; |
| 1513 } |
| 1514 |
| 1515 static bool close_ashmem(int ashmem_fd) { |
| 1516 if (close(ashmem_fd) < 0) { |
| 1517 char buf[128]; |
| 1518 snprintf( |
| 1519 buf, sizeof(buf), "Could not close ashmem with fd: %d\n", ashmem_fd); |
| 1520 android_log_error(buf); |
| 1521 return false; |
| 1522 } |
| 1523 return true; |
| 1524 } |
| 1525 |
| 1526 #endif |
| 1527 |
| 1528 #else /* WIN32 */ |
| 1529 |
| 1530 /* Win32 MMAP via VirtualAlloc */ |
| 1531 static FORCEINLINE void* win32mmap(size_t size) { |
| 1532 void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT, PAGE_READWRITE); |
| 1533 return (ptr != 0)? ptr: MFAIL; |
| 1534 } |
| 1535 |
| 1536 /* For direct MMAP, use MEM_TOP_DOWN to minimize interference */ |
| 1537 static FORCEINLINE void* win32direct_mmap(size_t size) { |
| 1538 void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT|MEM_TOP_DOWN, |
| 1539 PAGE_READWRITE); |
| 1540 return (ptr != 0)? ptr: MFAIL; |
| 1541 } |
| 1542 |
/* This function supports releasing coalesced segments */
| 1544 static FORCEINLINE int win32munmap(void* ptr, size_t size) { |
| 1545 MEMORY_BASIC_INFORMATION minfo; |
| 1546 char* cptr = (char*)ptr; |
| 1547 while (size) { |
| 1548 if (VirtualQuery(cptr, &minfo, sizeof(minfo)) == 0) |
| 1549 return -1; |
| 1550 if (minfo.BaseAddress != cptr || minfo.AllocationBase != cptr || |
| 1551 minfo.State != MEM_COMMIT || minfo.RegionSize > size) |
| 1552 return -1; |
| 1553 if (VirtualFree(cptr, 0, MEM_RELEASE) == 0) |
| 1554 return -1; |
| 1555 cptr += minfo.RegionSize; |
| 1556 size -= minfo.RegionSize; |
| 1557 } |
| 1558 return 0; |
| 1559 } |
| 1560 |
| 1561 #define MMAP_DEFAULT(s) win32mmap(s) |
| 1562 #define MUNMAP_DEFAULT(a, s) win32munmap((a), (s)) |
| 1563 #define DIRECT_MMAP_DEFAULT(s) win32direct_mmap(s) |
| 1564 #endif /* WIN32 */ |
| 1565 #endif /* HAVE_MMAP */ |
| 1566 |
| 1567 #if HAVE_MREMAP |
| 1568 #ifndef WIN32 |
| 1569 #define MREMAP_DEFAULT(addr, osz, nsz, mv) mremap((addr), (osz), (nsz), (mv)) |
| 1570 #endif /* WIN32 */ |
| 1571 #endif /* HAVE_MREMAP */ |
| 1572 |
| 1573 /** |
| 1574 * Define CALL_MORECORE |
| 1575 */ |
| 1576 #if HAVE_MORECORE |
| 1577 #ifdef MORECORE |
| 1578 #define CALL_MORECORE(S) MORECORE(S) |
| 1579 #else /* MORECORE */ |
| 1580 #define CALL_MORECORE(S) MORECORE_DEFAULT(S) |
| 1581 #endif /* MORECORE */ |
| 1582 #else /* HAVE_MORECORE */ |
| 1583 #define CALL_MORECORE(S) MFAIL |
| 1584 #endif /* HAVE_MORECORE */ |
| 1585 |
| 1586 /** |
| 1587 * Define CALL_MMAP/CALL_MUNMAP/CALL_DIRECT_MMAP |
| 1588 */ |
| 1589 #if HAVE_MMAP |
| 1590 #define USE_MMAP_BIT (SIZE_T_ONE) |
| 1591 |
| 1592 #ifdef MMAP |
| 1593 #define CALL_MMAP(s) MMAP(s) |
| 1594 #else /* MMAP */ |
| 1595 #define CALL_MMAP(s) MMAP_DEFAULT(s) |
| 1596 #endif /* MMAP */ |
| 1597 #ifdef MUNMAP |
| 1598 #define CALL_MUNMAP(a, s) MUNMAP((a), (s)) |
| 1599 #else /* MUNMAP */ |
| 1600 #define CALL_MUNMAP(a, s) MUNMAP_DEFAULT((a), (s)) |
| 1601 #endif /* MUNMAP */ |
| 1602 #ifdef DIRECT_MMAP |
| 1603 #define CALL_DIRECT_MMAP(s) DIRECT_MMAP(s) |
| 1604 #else /* DIRECT_MMAP */ |
| 1605 #define CALL_DIRECT_MMAP(s) DIRECT_MMAP_DEFAULT(s) |
| 1606 #endif /* DIRECT_MMAP */ |
| 1607 #else /* HAVE_MMAP */ |
| 1608 #define USE_MMAP_BIT (SIZE_T_ZERO) |
| 1609 |
| 1610 #define MMAP(s) MFAIL |
| 1611 #define MUNMAP(a, s) (-1) |
| 1612 #define DIRECT_MMAP(s) MFAIL |
| 1613 #define CALL_DIRECT_MMAP(s) DIRECT_MMAP(s) |
| 1614 #define CALL_MMAP(s) MMAP(s) |
| 1615 #define CALL_MUNMAP(a, s) MUNMAP((a), (s)) |
| 1616 #endif /* HAVE_MMAP */ |
| 1617 |
| 1618 /** |
| 1619 * Define CALL_MREMAP |
| 1620 */ |
| 1621 #if HAVE_MMAP && HAVE_MREMAP |
| 1622 #ifdef MREMAP |
    #define CALL_MREMAP(addr, osz, nsz, mv) MREMAP((addr), (osz), (nsz), (mv))
#else    /* MREMAP */
    #define CALL_MREMAP(addr, osz, nsz, mv) MREMAP_DEFAULT((addr), (osz), (nsz), (mv))
| 1626 #endif /* MREMAP */ |
| 1627 #else /* HAVE_MMAP && HAVE_MREMAP */ |
| 1628 #define CALL_MREMAP(addr, osz, nsz, mv) MFAIL |
| 1629 #endif /* HAVE_MMAP && HAVE_MREMAP */ |
| 1630 |
/* mstate bit set if contiguous morecore disabled or failed */
| 1632 #define USE_NONCONTIGUOUS_BIT (4U) |
| 1633 |
| 1634 /* segment bit set in create_mspace_with_base */ |
| 1635 #define EXTERN_BIT (8U) |
| 1636 |
| 1637 |
| 1638 /* --------------------------- Lock preliminaries ------------------------ */ |
| 1639 |
| 1640 /* |
| 1641 When locks are defined, there is one global lock, plus |
| 1642 one per-mspace lock. |
| 1643 |
  The global lock ensures that mparams.magic and other unique
  mparams values are initialized only once. It also protects
  sequences of calls to MORECORE. In many cases sys_alloc requires
  two calls that should not be interleaved with calls by other
  threads. This does not protect against direct calls to MORECORE
  by other threads not using this lock, so there is still code to
  cope as best we can with interference.
| 1651 |
| 1652 Per-mspace locks surround calls to malloc, free, etc. |
| 1653 By default, locks are simple non-reentrant mutexes. |
| 1654 |
| 1655 Because lock-protected regions generally have bounded times, it is |
| 1656 OK to use the supplied simple spinlocks. Spinlocks are likely to |
| 1657 improve performance for lightly contended applications, but worsen |
| 1658 performance under heavy contention. |
| 1659 |
| 1660 If USE_LOCKS is > 1, the definitions of lock routines here are |
| 1661 bypassed, in which case you will need to define the type MLOCK_T, |
| 1662 and at least INITIAL_LOCK, DESTROY_LOCK, ACQUIRE_LOCK, RELEASE_LOCK |
| 1663 and TRY_LOCK. You must also declare a |
  static MLOCK_T malloc_global_mutex = { initialization values };
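
  For example, a minimal USE_LOCKS > 1 setup could simply reuse plain
  pthreads mutexes (a sketch; the pthreads-based defaults later in this
  section do essentially this, plus optional recursion support):

    #define MLOCK_T           pthread_mutex_t
    #define INITIAL_LOCK(lk)  pthread_mutex_init(lk, NULL)
    #define DESTROY_LOCK(lk)  pthread_mutex_destroy(lk)
    #define ACQUIRE_LOCK(lk)  pthread_mutex_lock(lk)
    #define RELEASE_LOCK(lk)  pthread_mutex_unlock(lk)
    #define TRY_LOCK(lk)      (!pthread_mutex_trylock(lk))
    static MLOCK_T malloc_global_mutex = PTHREAD_MUTEX_INITIALIZER;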
| 1665 |
| 1666 */ |
| 1667 |
| 1668 #if !USE_LOCKS |
| 1669 #define USE_LOCK_BIT (0U) |
| 1670 #define INITIAL_LOCK(l) (0) |
| 1671 #define DESTROY_LOCK(l) (0) |
| 1672 #define ACQUIRE_MALLOC_GLOBAL_LOCK() |
| 1673 #define RELEASE_MALLOC_GLOBAL_LOCK() |
| 1674 |
| 1675 #else |
| 1676 #if USE_LOCKS > 1 |
| 1677 /* ----------------------- User-defined locks ------------------------ */ |
| 1678 /* Define your own lock implementation here */ |
| 1679 /* #define INITIAL_LOCK(lk) ... */ |
| 1680 /* #define DESTROY_LOCK(lk) ... */ |
| 1681 /* #define ACQUIRE_LOCK(lk) ... */ |
| 1682 /* #define RELEASE_LOCK(lk) ... */ |
| 1683 /* #define TRY_LOCK(lk) ... */ |
| 1684 /* static MLOCK_T malloc_global_mutex = ... */ |
| 1685 |
| 1686 #elif USE_SPIN_LOCKS |
| 1687 |
| 1688 /* First, define CAS_LOCK and CLEAR_LOCK on ints */ |
| 1689 /* Note CAS_LOCK defined to return 0 on success */ |
| 1690 |
| 1691 #if defined(__GNUC__)&& (__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 1)) |
| 1692 #define CAS_LOCK(sl) __sync_lock_test_and_set(sl, 1) |
| 1693 #define CLEAR_LOCK(sl) __sync_lock_release(sl) |
| 1694 |
| 1695 #elif (defined(__GNUC__) && (defined(__i386__) || defined(__x86_64__))) |
| 1696 /* Custom spin locks for older gcc on x86 */ |
| 1697 static FORCEINLINE int x86_cas_lock(int *sl) { |
| 1698 int ret; |
| 1699 int val = 1; |
| 1700 int cmp = 0; |
| 1701 __asm__ __volatile__ ("lock; cmpxchgl %1, %2" |
| 1702 : "=a" (ret) |
| 1703 : "r" (val), "m" (*(sl)), "0"(cmp) |
| 1704 : "memory", "cc"); |
| 1705 return ret; |
| 1706 } |
| 1707 |
| 1708 static FORCEINLINE void x86_clear_lock(int* sl) { |
| 1709 assert(*sl != 0); |
| 1710 int prev = 0; |
| 1711 int ret; |
| 1712 __asm__ __volatile__ ("lock; xchgl %0, %1" |
| 1713 : "=r" (ret) |
| 1714 : "m" (*(sl)), "0"(prev) |
| 1715 : "memory"); |
| 1716 } |
| 1717 |
| 1718 #define CAS_LOCK(sl) x86_cas_lock(sl) |
| 1719 #define CLEAR_LOCK(sl) x86_clear_lock(sl) |
| 1720 |
| 1721 #else /* Win32 MSC */ |
| 1722 #define CAS_LOCK(sl) interlockedexchange(sl, (LONG)1) |
| 1723 #define CLEAR_LOCK(sl) interlockedexchange (sl, (LONG)0) |
| 1724 |
#endif /* ... gcc spin locks ... */
| 1726 |
| 1727 /* How to yield for a spin lock */ |
| 1728 #define SPINS_PER_YIELD 63 |
| 1729 #if defined(_MSC_VER) |
| 1730 #define SLEEP_EX_DURATION 50 /* delay for yield/sleep */ |
| 1731 #define SPIN_LOCK_YIELD SleepEx(SLEEP_EX_DURATION, FALSE) |
| 1732 #elif defined (__SVR4) && defined (__sun) /* solaris */ |
| 1733 #define SPIN_LOCK_YIELD thr_yield(); |
| 1734 #elif !defined(LACKS_SCHED_H) |
| 1735 #define SPIN_LOCK_YIELD sched_yield(); |
| 1736 #else |
| 1737 #define SPIN_LOCK_YIELD |
| 1738 #endif /* ... yield ... */ |
| 1739 |
| 1740 #if !defined(USE_RECURSIVE_LOCKS) || USE_RECURSIVE_LOCKS == 0 |
| 1741 /* Plain spin locks use single word (embedded in malloc_states) */ |
| 1742 static int spin_acquire_lock(int *sl) { |
| 1743 int spins = 0; |
| 1744 while (*(volatile int *)sl != 0 || CAS_LOCK(sl)) { |
| 1745 if ((++spins & SPINS_PER_YIELD) == 0) { |
| 1746 SPIN_LOCK_YIELD; |
| 1747 } |
| 1748 } |
| 1749 return 0; |
| 1750 } |
| 1751 |
| 1752 #define MLOCK_T int |
| 1753 #define TRY_LOCK(sl) !CAS_LOCK(sl) |
| 1754 #define RELEASE_LOCK(sl) CLEAR_LOCK(sl) |
| 1755 #define ACQUIRE_LOCK(sl) (CAS_LOCK(sl)? spin_acquire_lock(sl) : 0) |
| 1756 #define INITIAL_LOCK(sl) (*sl = 0) |
| 1757 #define DESTROY_LOCK(sl) (0) |
| 1758 static MLOCK_T malloc_global_mutex = 0; |
| 1759 |
| 1760 #else /* USE_RECURSIVE_LOCKS */ |
| 1761 /* types for lock owners */ |
| 1762 #ifdef WIN32 |
| 1763 #define THREAD_ID_T DWORD |
| 1764 #define CURRENT_THREAD GetCurrentThreadId() |
| 1765 #define EQ_OWNER(X,Y) ((X) == (Y)) |
| 1766 #else |
| 1767 /* |
| 1768 Note: the following assume that pthread_t is a type that can be |
| 1769 initialized to (casted) zero. If this is not the case, you will need to |
| 1770 somehow redefine these or not use spin locks. |
| 1771 */ |
| 1772 #define THREAD_ID_T pthread_t |
| 1773 #define CURRENT_THREAD pthread_self() |
| 1774 #define EQ_OWNER(X,Y) pthread_equal(X, Y) |
| 1775 #endif |
| 1776 |
| 1777 struct malloc_recursive_lock { |
| 1778 int sl; |
| 1779 unsigned int c; |
| 1780 THREAD_ID_T threadid; |
| 1781 }; |
| 1782 |
| 1783 #define MLOCK_T struct malloc_recursive_lock |
| 1784 static MLOCK_T malloc_global_mutex = { 0, 0, (THREAD_ID_T)0}; |
| 1785 |
| 1786 static FORCEINLINE void recursive_release_lock(MLOCK_T *lk) { |
| 1787 assert(lk->sl != 0); |
| 1788 if (--lk->c == 0) { |
| 1789 CLEAR_LOCK(&lk->sl); |
| 1790 } |
| 1791 } |
| 1792 |
| 1793 static FORCEINLINE int recursive_acquire_lock(MLOCK_T *lk) { |
| 1794 THREAD_ID_T mythreadid = CURRENT_THREAD; |
| 1795 int spins = 0; |
| 1796 for (;;) { |
| 1797 if (*((volatile int *)(&lk->sl)) == 0) { |
| 1798 if (!CAS_LOCK(&lk->sl)) { |
| 1799 lk->threadid = mythreadid; |
| 1800 lk->c = 1; |
| 1801 return 0; |
| 1802 } |
| 1803 } |
| 1804 else if (EQ_OWNER(lk->threadid, mythreadid)) { |
| 1805 ++lk->c; |
| 1806 return 0; |
| 1807 } |
| 1808 if ((++spins & SPINS_PER_YIELD) == 0) { |
| 1809 SPIN_LOCK_YIELD; |
| 1810 } |
| 1811 } |
| 1812 } |
| 1813 |
| 1814 static FORCEINLINE int recursive_try_lock(MLOCK_T *lk) { |
| 1815 THREAD_ID_T mythreadid = CURRENT_THREAD; |
| 1816 if (*((volatile int *)(&lk->sl)) == 0) { |
| 1817 if (!CAS_LOCK(&lk->sl)) { |
| 1818 lk->threadid = mythreadid; |
| 1819 lk->c = 1; |
| 1820 return 1; |
| 1821 } |
| 1822 } |
| 1823 else if (EQ_OWNER(lk->threadid, mythreadid)) { |
| 1824 ++lk->c; |
| 1825 return 1; |
| 1826 } |
| 1827 return 0; |
| 1828 } |
| 1829 |
| 1830 #define RELEASE_LOCK(lk) recursive_release_lock(lk) |
| 1831 #define TRY_LOCK(lk) recursive_try_lock(lk) |
| 1832 #define ACQUIRE_LOCK(lk) recursive_acquire_lock(lk) |
#define INITIAL_LOCK(lk)      ((lk)->threadid = (THREAD_ID_T)0, (lk)->sl = 0, (lk)->c = 0)
| 1834 #define DESTROY_LOCK(lk) (0) |
| 1835 #endif /* USE_RECURSIVE_LOCKS */ |
| 1836 |
| 1837 #elif defined(WIN32) /* Win32 critical sections */ |
| 1838 #define MLOCK_T CRITICAL_SECTION |
| 1839 #define ACQUIRE_LOCK(lk) (EnterCriticalSection(lk), 0) |
| 1840 #define RELEASE_LOCK(lk) LeaveCriticalSection(lk) |
| 1841 #define TRY_LOCK(lk) TryEnterCriticalSection(lk) |
#define INITIAL_LOCK(lk)      (!InitializeCriticalSectionAndSpinCount((lk), 0x80000000|4000))
| 1843 #define DESTROY_LOCK(lk) (DeleteCriticalSection(lk), 0) |
| 1844 #define NEED_GLOBAL_LOCK_INIT |
| 1845 |
| 1846 static MLOCK_T malloc_global_mutex; |
| 1847 static volatile LONG malloc_global_mutex_status; |
| 1848 |
| 1849 /* Use spin loop to initialize global lock */ |
| 1850 static void init_malloc_global_mutex() { |
| 1851 for (;;) { |
| 1852 long stat = malloc_global_mutex_status; |
| 1853 if (stat > 0) |
| 1854 return; |
    /* transition to < 0 while initializing, then to > 0 */
| 1856 if (stat == 0 && |
        interlockedcompareexchange(&malloc_global_mutex_status, (LONG)-1, (LONG)0) == 0) {
| 1858 InitializeCriticalSection(&malloc_global_mutex); |
| 1859 interlockedexchange(&malloc_global_mutex_status, (LONG)1); |
| 1860 return; |
| 1861 } |
| 1862 SleepEx(0, FALSE); |
| 1863 } |
| 1864 } |
| 1865 |
| 1866 #else /* pthreads-based locks */ |
| 1867 #define MLOCK_T pthread_mutex_t |
| 1868 #define ACQUIRE_LOCK(lk) pthread_mutex_lock(lk) |
| 1869 #define RELEASE_LOCK(lk) pthread_mutex_unlock(lk) |
| 1870 #define TRY_LOCK(lk) (!pthread_mutex_trylock(lk)) |
| 1871 #define INITIAL_LOCK(lk) pthread_init_lock(lk) |
| 1872 #define DESTROY_LOCK(lk) pthread_mutex_destroy(lk) |
| 1873 |
#if defined(USE_RECURSIVE_LOCKS) && USE_RECURSIVE_LOCKS != 0 && defined(linux) && !defined(PTHREAD_MUTEX_RECURSIVE)
| 1875 /* Cope with old-style linux recursive lock initialization by adding */ |
| 1876 /* skipped internal declaration from pthread.h */ |
| 1877 extern int pthread_mutexattr_setkind_np __P ((pthread_mutexattr_t *__attr, |
| 1878 int __kind)); |
| 1879 #define PTHREAD_MUTEX_RECURSIVE PTHREAD_MUTEX_RECURSIVE_NP |
| 1880 #define pthread_mutexattr_settype(x,y) pthread_mutexattr_setkind_np(x,y) |
| 1881 #endif /* USE_RECURSIVE_LOCKS ... */ |
| 1882 |
| 1883 static MLOCK_T malloc_global_mutex = PTHREAD_MUTEX_INITIALIZER; |
| 1884 |
| 1885 static int pthread_init_lock (MLOCK_T *lk) { |
| 1886 pthread_mutexattr_t attr; |
| 1887 if (pthread_mutexattr_init(&attr)) return 1; |
| 1888 #if defined(USE_RECURSIVE_LOCKS) && USE_RECURSIVE_LOCKS != 0 |
| 1889 if (pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE)) return 1; |
| 1890 #endif |
| 1891 if (pthread_mutex_init(lk, &attr)) return 1; |
| 1892 if (pthread_mutexattr_destroy(&attr)) return 1; |
| 1893 return 0; |
| 1894 } |
| 1895 |
| 1896 #endif /* ... lock types ... */ |
| 1897 |
| 1898 /* Common code for all lock types */ |
| 1899 #define USE_LOCK_BIT (2U) |
| 1900 |
| 1901 #ifndef ACQUIRE_MALLOC_GLOBAL_LOCK |
| 1902 #define ACQUIRE_MALLOC_GLOBAL_LOCK() ACQUIRE_LOCK(&malloc_global_mutex); |
| 1903 #endif |
| 1904 |
| 1905 #ifndef RELEASE_MALLOC_GLOBAL_LOCK |
| 1906 #define RELEASE_MALLOC_GLOBAL_LOCK() RELEASE_LOCK(&malloc_global_mutex); |
| 1907 #endif |
| 1908 |
| 1909 #endif /* USE_LOCKS */ |
| 1910 |
| 1911 /* ----------------------- Chunk representations ------------------------ */ |
| 1912 |
| 1913 /* |
| 1914 (The following includes lightly edited explanations by Colin Plumb.) |
| 1915 |
| 1916 The malloc_chunk declaration below is misleading (but accurate and |
| 1917 necessary). It declares a "view" into memory allowing access to |
| 1918 necessary fields at known offsets from a given base. |
| 1919 |
| 1920 Chunks of memory are maintained using a `boundary tag' method as |
| 1921 originally described by Knuth. (See the paper by Paul Wilson |
| 1922 ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a survey of such |
| 1923 techniques.) Sizes of free chunks are stored both in the front of |
| 1924 each chunk and at the end. This makes consolidating fragmented |
| 1925 chunks into bigger chunks fast. The head fields also hold bits |
| 1926 representing whether chunks are free or in use. |
| 1927 |
| 1928 Here are some pictures to make it clearer. They are "exploded" to |
| 1929 show that the state of a chunk can be thought of as extending from |
| 1930 the high 31 bits of the head field of its header through the |
| 1931 prev_foot and PINUSE_BIT bit of the following chunk header. |
| 1932 |
| 1933 A chunk that's in use looks like: |
| 1934 |
| 1935 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |
| 1936 | Size of previous chunk (if P = 0) | |
| 1937 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |
| 1938 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P| |
| 1939 | Size of this chunk 1| +-+ |
| 1940 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |
| 1941 | | |
| 1942 +- -+ |
| 1943 | | |
| 1944 +- -+ |
| 1945 | : |
| 1946 +- size - sizeof(size_t) available payload bytes -+ |
| 1947 : | |
| 1948 chunk-> +- -+ |
| 1949 | | |
| 1950 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |
| 1951 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |1| |
| 1952 | Size of next chunk (may or may not be in use) | +-+ |
| 1953 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |
| 1954 |
| 1955 And if it's free, it looks like this: |
| 1956 |
| 1957 chunk-> +- -+ |
| 1958 | User payload (must be in use, or we would have merged!) | |
| 1959 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |
| 1960 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P| |
| 1961 | Size of this chunk 0| +-+ |
| 1962 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |
| 1963 | Next pointer | |
| 1964 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |
| 1965 | Prev pointer | |
| 1966 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |
| 1967 | : |
| 1968 +- size - sizeof(struct chunk) unused bytes -+ |
| 1969 : | |
| 1970 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |
| 1971 | Size of this chunk | |
| 1972 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |
| 1973 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |0| |
| 1974 | Size of next chunk (must be in use, or we would have merged)| +-+ |
| 1975 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |
| 1976 | : |
| 1977 +- User payload -+ |
| 1978 : | |
| 1979 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |
| 1980 |0| |
| 1981 +-+ |
| 1982 Note that since we always merge adjacent free chunks, the chunks |
| 1983 adjacent to a free chunk must be in use. |
| 1984 |
| 1985 Given a pointer to a chunk (which can be derived trivially from the |
| 1986 payload pointer) we can, in O(1) time, find out whether the adjacent |
| 1987 chunks are free, and if so, unlink them from the lists that they |
| 1988 are on and merge them with the current chunk. |
| 1989 |
| 1990 Chunks always begin on even word boundaries, so the mem portion |
| 1991 (which is returned to the user) is also on an even word boundary, and |
| 1992 thus at least double-word aligned. |
| 1993 |
| 1994 The P (PINUSE_BIT) bit, stored in the unused low-order bit of the |
| 1995 chunk size (which is always a multiple of two words), is an in-use |
| 1996 bit for the *previous* chunk. If that bit is *clear*, then the |
| 1997 word before the current chunk size contains the previous chunk |
| 1998 size, and can be used to find the front of the previous chunk. |
| 1999 The very first chunk allocated always has this bit set, preventing |
| 2000 access to non-existent (or non-owned) memory. If pinuse is set for |
| 2001 any given chunk, then you CANNOT determine the size of the |
| 2002 previous chunk, and might even get a memory addressing fault when |
| 2003 trying to do so. |
| 2004 |
| 2005 The C (CINUSE_BIT) bit, stored in the unused second-lowest bit of |
| 2006 the chunk size redundantly records whether the current chunk is |
| 2007 inuse (unless the chunk is mmapped). This redundancy enables usage |
| 2008 checks within free and realloc, and reduces indirection when freeing |
| 2009 and consolidating chunks. |
| 2010 |
| 2011 Each freshly allocated chunk must have both cinuse and pinuse set. |
| 2012 That is, each allocated chunk borders either a previously allocated |
| 2013 and still in-use chunk, or the base of its memory arena. This is |
| 2014 ensured by making all allocations from the `lowest' part of any |
| 2015 found chunk. Further, no free chunk physically borders another one, |
| 2016 so each free chunk is known to be preceded and followed by either |
| 2017 inuse chunks or the ends of memory. |
| 2018 |
| 2019 Note that the `foot' of the current chunk is actually represented |
| 2020 as the prev_foot of the NEXT chunk. This makes it easier to |
| 2021 deal with alignments etc but can be very confusing when trying |
| 2022 to extend or adapt this code. |
| 2023 |
| 2024 The exceptions to all this are |
| 2025 |
| 2026 1. The special chunk `top' is the top-most available chunk (i.e., |
| 2027 the one bordering the end of available memory). It is treated |
| 2028 specially. Top is never included in any bin, is used only if |
| 2029 no other chunk is available, and is released back to the |
| 2030 system if it is very large (see M_TRIM_THRESHOLD). In effect, |
| 2031 the top chunk is treated as larger (and thus less well |
| 2032 fitting) than any other available chunk. The top chunk |
| 2033 doesn't update its trailing size field since there is no next |
| 2034 contiguous chunk that would have to index off it. However, |
| 2035 space is still allocated for it (TOP_FOOT_SIZE) to enable |
| 2036 separation or merging when space is extended. |
| 2037 |
     2. Chunks allocated via mmap have both cinuse and pinuse bits
        cleared in their head fields. Because they are allocated
        one-by-one, each must carry its own prev_foot field, which is
        also used to hold the offset this chunk has within its mmapped
        region, which is needed to preserve alignment. Each mmapped
        chunk is trailed by the first two fields of a fake next-chunk
        for the sake of usage checks.
| 2045 |
| 2046 */ |
| 2047 |
| 2048 struct malloc_chunk { |
| 2049 public: |
| 2050 void set_prev_foot(size_t prev_foot) { |
    assert((prev_foot & 0xffc00000) == 0); /* must fit in the 22-bit field (32-bit builds) */
| 2052 prev_foot_ = prev_foot; |
| 2053 } |
| 2054 |
| 2055 size_t prev_foot() const { return prev_foot_; } |
| 2056 |
| 2057 int ashmem_fd() const { return ashmem_fd_; } |
| 2058 |
| 2059 void set_ashmem_fd(int fd) { |
    assert((fd & 0xfffffc00) == 0); /* fd must fit in the 10-bit field */
| 2061 ashmem_fd_ = fd; |
| 2062 } |
| 2063 |
| 2064 static bool unmap(malloc_chunk* chunk, size_t size) { |
| 2065 const int ashmem_fd = chunk->ashmem_fd(); |
| 2066 if (CALL_MUNMAP(chunk, size) < 0) |
| 2067 return false; |
| 2068 return close_ashmem(ashmem_fd); |
| 2069 } |
| 2070 |
| 2071 private: |
| 2072 size_t prev_foot_ : sizeof(size_t) * 8 - 10; |
| 2073 unsigned ashmem_fd_ : 10; |
| 2074 |
| 2075 public: |
| 2076 size_t head; // Size and inuse bits. |
| 2077 struct malloc_chunk* fd; /* double links -- used only if free. */ |
| 2078 struct malloc_chunk* bk; |
| 2079 }; |
| 2080 |
| 2081 typedef struct malloc_chunk mchunk; |
| 2082 typedef struct malloc_chunk* mchunkptr; |
| 2083 typedef struct malloc_chunk* sbinptr; /* The type of bins of chunks */ |
| 2084 typedef unsigned int bindex_t; /* Described below */ |
| 2085 typedef unsigned int binmap_t; /* Described below */ |
| 2086 typedef unsigned int flag_t; /* The type of various bit flag sets */ |
| 2087 |
| 2088 /* ------------------- Chunks sizes and alignments ----------------------- */ |
| 2089 |
| 2090 #define MCHUNK_SIZE (sizeof(mchunk)) |
| 2091 |
| 2092 #if FOOTERS |
| 2093 #define CHUNK_OVERHEAD (TWO_SIZE_T_SIZES) |
| 2094 #else /* FOOTERS */ |
| 2095 #define CHUNK_OVERHEAD (SIZE_T_SIZE) |
| 2096 #endif /* FOOTERS */ |
| 2097 |
| 2098 /* MMapped chunks need a second word of overhead ... */ |
| 2099 #define MMAP_CHUNK_OVERHEAD (TWO_SIZE_T_SIZES) |
| 2100 /* ... and additional padding for fake next-chunk at foot */ |
| 2101 #define MMAP_FOOT_PAD (FOUR_SIZE_T_SIZES) |
| 2102 |
| 2103 /* The smallest size we can malloc is an aligned minimal chunk */ |
| 2104 #define MIN_CHUNK_SIZE\ |
| 2105 ((MCHUNK_SIZE + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK) |
| 2106 |
| 2107 /* conversion from malloc headers to user pointers, and back */ |
| 2108 #define chunk2mem(p) ((void*)((char*)(p) + TWO_SIZE_T_SIZES)) |
| 2109 #define mem2chunk(mem) ((mchunkptr)((char*)(mem) - TWO_SIZE_T_SIZES)) |
| 2110 /* chunk associated with aligned address A */ |
| 2111 #define align_as_chunk(A) (mchunkptr)((A) + align_offset(chunk2mem(A))) |
| 2112 |
| 2113 /* Bounds on request (not chunk) sizes. */ |
| 2114 #define MAX_REQUEST ((-MIN_CHUNK_SIZE) << 2) |
| 2115 #define MIN_REQUEST (MIN_CHUNK_SIZE - CHUNK_OVERHEAD - SIZE_T_ONE) |
| 2116 |
| 2117 /* pad request bytes into a usable size */ |
| 2118 #define pad_request(req) \ |
| 2119 (((req) + CHUNK_OVERHEAD + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK) |
| 2120 |
| 2121 /* pad request, checking for minimum (but not maximum) */ |
| 2122 #define request2size(req) \ |
| 2123 (((req) < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(req)) |
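
/*
  Worked example, assuming 4-byte size_t, MALLOC_ALIGNMENT == 8, and
  FOOTERS off (so CHUNK_OVERHEAD == 4 and MIN_CHUNK_SIZE == 16):
    request2size(1)  == MIN_CHUNK_SIZE == 16   (below MIN_REQUEST)
    request2size(12) == (12 + 4 + 7) & ~7 == 16
    request2size(13) == (13 + 4 + 7) & ~7 == 24
*/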
| 2124 |
| 2125 |
| 2126 /* ------------------ Operations on head and foot fields ----------------- */ |
| 2127 |
| 2128 /* |
| 2129 The head field of a chunk is or'ed with PINUSE_BIT when previous |
| 2130 adjacent chunk in use, and or'ed with CINUSE_BIT if this chunk is in |
| 2131 use, unless mmapped, in which case both bits are cleared. |
| 2132 |
| 2133 FLAG4_BIT is not used by this malloc, but might be useful in extensions. |
| 2134 */ |
| 2135 |
| 2136 #define PINUSE_BIT (SIZE_T_ONE) |
| 2137 #define CINUSE_BIT (SIZE_T_TWO) |
| 2138 #define FLAG4_BIT (SIZE_T_FOUR) |
| 2139 #define INUSE_BITS (PINUSE_BIT|CINUSE_BIT) |
| 2140 #define FLAG_BITS (PINUSE_BIT|CINUSE_BIT|FLAG4_BIT) |
| 2141 |
| 2142 /* Head value for fenceposts */ |
| 2143 #define FENCEPOST_HEAD (INUSE_BITS|SIZE_T_SIZE) |
| 2144 |
| 2145 /* extraction of fields from head words */ |
| 2146 #define cinuse(p) ((p)->head & CINUSE_BIT) |
| 2147 #define pinuse(p) ((p)->head & PINUSE_BIT) |
| 2148 #define flag4inuse(p) ((p)->head & FLAG4_BIT) |
| 2149 #define is_inuse(p) (((p)->head & INUSE_BITS) != PINUSE_BIT) |
| 2150 #define is_mmapped(p) (((p)->head & INUSE_BITS) == 0) |
| 2151 |
| 2152 #define chunksize(p) ((p)->head & ~(FLAG_BITS)) |
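
/*
  Worked example: a 48-byte in-use chunk whose predecessor is also in use
  has head == 48 | CINUSE_BIT | PINUSE_BIT == 0x33, so chunksize(p) == 48,
  cinuse(p) and pinuse(p) are nonzero, and is_mmapped(p) is false.
*/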
| 2153 |
| 2154 #define clear_pinuse(p) ((p)->head &= ~PINUSE_BIT) |
| 2155 #define set_flag4(p) ((p)->head |= FLAG4_BIT) |
| 2156 #define clear_flag4(p) ((p)->head &= ~FLAG4_BIT) |
| 2157 |
| 2158 /* Treat space at ptr +/- offset as a chunk */ |
| 2159 #define chunk_plus_offset(p, s) ((mchunkptr)(((char*)(p)) + (s))) |
| 2160 #define chunk_minus_offset(p, s) ((mchunkptr)(((char*)(p)) - (s))) |
| 2161 |
| 2162 /* Ptr to next or previous physical malloc_chunk. */ |
| 2163 #define next_chunk(p) ((mchunkptr)( ((char*)(p)) + ((p)->head & ~FLAG_BITS))) |
| 2164 #define prev_chunk(p) ((mchunkptr)( ((char*)(p)) - ((p)->prev_foot()) )) |
| 2165 |
| 2166 /* extract next chunk's pinuse bit */ |
| 2167 #define next_pinuse(p) ((next_chunk(p)->head) & PINUSE_BIT) |
| 2168 |
| 2169 /* Get/set size at footer */ |
| 2170 #define get_foot(p, s) (((mchunkptr)((char*)(p) + (s)))->prev_foot()) |
| 2171 #define set_foot(p, s) (((mchunkptr)((char*)(p) + (s)))->set_prev_foot((s))) |
| 2172 |
| 2173 /* Set size, pinuse bit, and foot */ |
| 2174 #define set_size_and_pinuse_of_free_chunk(p, s)\ |
| 2175 ((p)->head = (s|PINUSE_BIT), set_foot(p, s)) |
| 2176 |
| 2177 /* Set size, pinuse bit, foot, and clear next pinuse */ |
| 2178 #define set_free_with_pinuse(p, s, n)\ |
| 2179 (clear_pinuse(n), set_size_and_pinuse_of_free_chunk(p, s)) |
| 2180 |
| 2181 /* Get the internal overhead associated with chunk p */ |
| 2182 #define overhead_for(p)\ |
| 2183 (is_mmapped(p)? MMAP_CHUNK_OVERHEAD : CHUNK_OVERHEAD) |
| 2184 |
| 2185 /* Return true if malloced space is not necessarily cleared */ |
| 2186 #if MMAP_CLEARS |
| 2187 #define calloc_must_clear(p) (!is_mmapped(p)) |
| 2188 #else /* MMAP_CLEARS */ |
| 2189 #define calloc_must_clear(p) (1) |
| 2190 #endif /* MMAP_CLEARS */ |
| 2191 |
| 2192 /* ---------------------- Overlaid data structures ----------------------- */ |
| 2193 |
| 2194 /* |
| 2195 When chunks are not in use, they are treated as nodes of either |
| 2196 lists or trees. |
| 2197 |
| 2198 "Small" chunks are stored in circular doubly-linked lists, and look |
| 2199 like this: |
| 2200 |
| 2201 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |
| 2202 | Size of previous chunk | |
| 2203 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |
| 2204 `head:' | Size of chunk, in bytes |P| |
| 2205 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |
| 2206 | Forward pointer to next chunk in list | |
| 2207 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |
| 2208 | Back pointer to previous chunk in list | |
| 2209 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |
| 2210 | Unused space (may be 0 bytes long) . |
| 2211 . . |
| 2212 . | |
| 2213 nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |
| 2214 `foot:' | Size of chunk, in bytes | |
| 2215 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |
| 2216 |
| 2217 Larger chunks are kept in a form of bitwise digital trees (aka |
| 2218 tries) keyed on chunksizes. Because malloc_tree_chunks are only for |
| 2219 free chunks greater than 256 bytes, their size doesn't impose any |
| 2220 constraints on user chunk sizes. Each node looks like: |
| 2221 |
| 2222 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |
| 2223 | Size of previous chunk | |
| 2224 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |
| 2225 `head:' | Size of chunk, in bytes |P| |
| 2226 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |
| 2227 | Forward pointer to next chunk of same size | |
| 2228 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |
| 2229 | Back pointer to previous chunk of same size | |
| 2230 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |
| 2231 | Pointer to left child (child[0]) | |
| 2232 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |
| 2233 | Pointer to right child (child[1]) | |
| 2234 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |
| 2235 | Pointer to parent | |
| 2236 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |
| 2237 | bin index of this chunk | |
| 2238 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |
| 2239 | Unused space . |
| 2240 . | |
| 2241 nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |
| 2242 `foot:' | Size of chunk, in bytes | |
| 2243 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |
| 2244 |
| 2245 Each tree holding treenodes is a tree of unique chunk sizes. Chunks |
| 2246 of the same size are arranged in a circularly-linked list, with only |
| 2247 the oldest chunk (the next to be used, in our FIFO ordering) |
| 2248 actually in the tree. (Tree members are distinguished by a non-null |
  parent pointer.) If a chunk with the same size as an existing node
| 2250 is inserted, it is linked off the existing node using pointers that |
| 2251 work in the same way as fd/bk pointers of small chunks. |
| 2252 |
| 2253 Each tree contains a power of 2 sized range of chunk sizes (the |
  smallest is 0x100 <= x < 0x180), which is divided in half at each
  tree level, with the chunks in the smaller half of the range (0x100
  <= x < 0x140 for the top node) in the left subtree and the larger
| 2257 half (0x140 <= x < 0x180) in the right subtree. This is, of course, |
| 2258 done by inspecting individual bits. |
| 2259 |
| 2260 Using these rules, each node's left subtree contains all smaller |
| 2261 sizes than its right subtree. However, the node at the root of each |
| 2262 subtree has no particular ordering relationship to either. (The |
| 2263 dividing line between the subtree sizes is based on trie relation.) |
| 2264 If we remove the last chunk of a given size from the interior of the |
| 2265 tree, we need to replace it with a leaf node. The tree ordering |
| 2266 rules permit a node to be replaced by any leaf below it. |
| 2267 |
| 2268 The smallest chunk in a tree (a common operation in a best-fit |
| 2269 allocator) can be found by walking a path to the leftmost leaf in |
| 2270 the tree. Unlike a usual binary tree, where we follow left child |
| 2271 pointers until we reach a null, here we follow the right child |
| 2272 pointer any time the left one is null, until we reach a leaf with |
| 2273 both child pointers null. The smallest chunk in the tree will be |
| 2274 somewhere along that path. |
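
  A sketch of that walk (using the leftmost_child helper defined below,
  and tracking the smallest size seen along the path):

    static tchunkptr smallest_in_tree(tchunkptr t) {
      tchunkptr best = t;
      while ((t = leftmost_child(t)) != 0) {
        if (chunksize(t) < chunksize(best))
          best = t;
      }
      return best;
    }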
| 2275 |
| 2276 The worst case number of steps to add, find, or remove a node is |
| 2277 bounded by the number of bits differentiating chunks within |
| 2278 bins. Under current bin calculations, this ranges from 6 up to 21 |
| 2279 (for 32 bit sizes) or up to 53 (for 64 bit sizes). The typical case |
| 2280 is of course much better. |
| 2281 */ |
| 2282 |
| 2283 struct malloc_tree_chunk { |
| 2284 private: |
| 2285 /* The first four fields must be compatible with malloc_chunk */ |
| 2286 size_t prev_foot_; |
| 2287 |
| 2288 public: |
| 2289 size_t prev_foot() const { return prev_foot_; } |
| 2290 void set_prev_foot(size_t prev_foot) { prev_foot_ = prev_foot; } |
| 2291 |
| 2292 size_t head; |
| 2293 struct malloc_tree_chunk* fd; |
| 2294 struct malloc_tree_chunk* bk; |
| 2295 |
| 2296 struct malloc_tree_chunk* child[2]; |
| 2297 struct malloc_tree_chunk* parent; |
| 2298 bindex_t index; |
| 2299 }; |
| 2300 |
| 2301 typedef struct malloc_tree_chunk tchunk; |
| 2302 typedef struct malloc_tree_chunk* tchunkptr; |
| 2303 typedef struct malloc_tree_chunk* tbinptr; /* The type of bins of trees */ |
| 2304 |
| 2305 /* A little helper macro for trees */ |
| 2306 #define leftmost_child(t) ((t)->child[0] != 0? (t)->child[0] : (t)->child[1]) |
| 2307 |
| 2308 /* ----------------------------- Segments -------------------------------- */ |
| 2309 |
| 2310 /* |
| 2311 Each malloc space may include non-contiguous segments, held in a |
| 2312 list headed by an embedded malloc_segment record representing the |
| 2313 top-most space. Segments also include flags holding properties of |
| 2314 the space. Large chunks that are directly allocated by mmap are not |
| 2315 included in this list. They are instead independently created and |
| 2316 destroyed without otherwise keeping track of them. |
| 2317 |
| 2318 Segment management mainly comes into play for spaces allocated by |
| 2319 MMAP. Any call to MMAP might or might not return memory that is |
| 2320 adjacent to an existing segment. MORECORE normally contiguously |
| 2321 extends the current space, so this space is almost always adjacent, |
| 2322 which is simpler and faster to deal with. (This is why MORECORE is |
| 2323 used preferentially to MMAP when both are available -- see |
| 2324 sys_alloc.) When allocating using MMAP, we don't use any of the |
| 2325 hinting mechanisms (inconsistently) supported in various |
| 2326 implementations of unix mmap, or distinguish reserving from |
| 2327 committing memory. Instead, we just ask for space, and exploit |
| 2328 contiguity when we get it. It is probably possible to do |
| 2329 better than this on some systems, but no general scheme seems |
| 2330 to be significantly better. |
| 2331 |
| 2332 Management entails a simpler variant of the consolidation scheme |
| 2333 used for chunks to reduce fragmentation -- new adjacent memory is |
| 2334 normally prepended or appended to an existing segment. However, |
| 2335 there are limitations compared to chunk consolidation that mostly |
| 2336 reflect the fact that segment processing is relatively infrequent |
| 2337 (occurring only when getting memory from system) and that we |
| 2338 don't expect to have huge numbers of segments: |
| 2339 |
| 2340 * Segments are not indexed, so traversal requires linear scans. (It |
| 2341 would be possible to index these, but is not worth the extra |
| 2342 overhead and complexity for most programs on most platforms.) |
| 2343 * New segments are only appended to old ones when holding top-most |
| 2344 memory; if they cannot be prepended to others, they are held in |
| 2345 different segments. |
| 2346 |
| 2347 Except for the top-most segment of an mstate, each segment record |
| 2348 is kept at the tail of its segment. Segments are added by pushing |
| 2349 segment records onto the list headed by &mstate.seg for the |
| 2350 containing mstate. |
| 2351 |
| 2352 Segment flags control allocation/merge/deallocation policies: |
| 2353 * If EXTERN_BIT set, then we did not allocate this segment, |
| 2354 and so should not try to deallocate or merge with others. |
| 2355 (This currently holds only for the initial segment passed |
| 2356 into create_mspace_with_base.) |
| 2357 * If USE_MMAP_BIT set, the segment may be merged with |
| 2358 other surrounding mmapped segments and trimmed/de-allocated |
| 2359 using munmap. |
| 2360 * If neither bit is set, then the segment was obtained using |
| 2361 MORECORE so can be merged with surrounding MORECORE'd segments |
| 2362 and deallocated/trimmed using MORECORE with negative arguments. |
| 2363 */ |
| 2364 |
| 2365 struct malloc_segment { |
| 2366 char* base; /* base address */ |
| 2367 size_t size; /* allocated size */ |
| 2368 struct malloc_segment* next; /* ptr to next segment */ |
| 2369 |
| 2370 flag_t sflags() const { return sflags_; } |
| 2371 |
| 2372 void set_sflags(flag_t flags) { |
    assert((flags & 0xffc00000) == 0);
| 2374 sflags_ = flags; |
| 2375 } |
| 2376 |
| 2377 int ashmem_fd() const { return ashmem_fd_; } |
| 2378 |
| 2379 void set_ashmem_fd(int ashmem_fd) { |
    assert((ashmem_fd & 0xfffffc00) == 0); /* must fit in the 10-bit field */
| 2381 ashmem_fd_ = ashmem_fd; |
| 2382 } |
| 2383 |
| 2384 private: |
  flag_t sflags_ : sizeof(flag_t) * 8 - 10; /* mmap and extern flags */
  unsigned ashmem_fd_ : 10; /* unsigned, matching malloc_chunk's field */
| 2387 }; |
| 2388 |
| 2389 #define is_mmapped_segment(S) ((S)->sflags() & USE_MMAP_BIT) |
| 2390 #define is_extern_segment(S) ((S)->sflags() & EXTERN_BIT) |
| 2391 |
| 2392 typedef struct malloc_segment msegment; |
| 2393 typedef struct malloc_segment* msegmentptr; |
| 2394 |
| 2395 /* ---------------------------- malloc_state ----------------------------- */ |
| 2396 |
| 2397 /* |
| 2398 A malloc_state holds all of the bookkeeping for a space. |
| 2399 The main fields are: |
| 2400 |
| 2401 Top |
| 2402 The topmost chunk of the currently active segment. Its size is |
| 2403 cached in topsize. The actual size of topmost space is |
| 2404 topsize+TOP_FOOT_SIZE, which includes space reserved for adding |
| 2405 fenceposts and segment records if necessary when getting more |
| 2406 space from the system. The size at which to autotrim top is |
| 2407 cached from mparams in trim_check, except that it is disabled if |
| 2408 an autotrim fails. |
| 2409 |
| 2410 Designated victim (dv) |
| 2411 This is the preferred chunk for servicing small requests that |
| 2412 don't have exact fits. It is normally the chunk split off most |
| 2413 recently to service another small request. Its size is cached in |
| 2414 dvsize. The link fields of this chunk are not maintained since it |
| 2415 is not kept in a bin. |
| 2416 |
| 2417 SmallBins |
| 2418 An array of bin headers for free chunks. These bins hold chunks |
| 2419 with sizes less than MIN_LARGE_SIZE bytes. Each bin contains |
| 2420 chunks of all the same size, spaced 8 bytes apart. To simplify |
| 2421 use in double-linked lists, each bin header acts as a malloc_chunk |
| 2422 pointing to the real first node, if it exists (else pointing to |
| 2423 itself). This avoids special-casing for headers. But to avoid |
| 2424 waste, we allocate only the fd/bk pointers of bins, and then use |
| 2425 repositioning tricks to treat these as the fields of a chunk. |
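
    For example, the repositioning trick amounts to (a sketch matching
    the smallbins array declared below):

      #define smallbin_at(M, i) \
        ((sbinptr)((char*)&((M)->smallbins[(i) << 1])))

    so bin header i overlays just its own two pointer slots, at the
    offset where a real chunk's fd/bk fields would sit.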
| 2426 |
| 2427 TreeBins |
| 2428 Treebins are pointers to the roots of trees holding a range of |
| 2429 sizes. There are 2 equally spaced treebins for each power of two |
| 2430 from TREE_SHIFT to TREE_SHIFT+16. The last bin holds anything |
| 2431 larger. |
| 2432 |
| 2433 Bin maps |
| 2434 There is one bit map for small bins ("smallmap") and one for |
| 2435 treebins ("treemap). Each bin sets its bit when non-empty, and |
| 2436 clears the bit when empty. Bit operations are then used to avoid |
| 2437 bin-by-bin searching -- nearly all "search" is done without ever |
| 2438 looking at bins that won't be selected. The bit maps |
    conservatively use 32 bits per map word, even on a 64bit system.
| 2440 For a good description of some of the bit-based techniques used |
| 2441 here, see Henry S. Warren Jr's book "Hacker's Delight" (and |
| 2442 supplement at http://hackersdelight.org/). Many of these are |
| 2443 intended to reduce the branchiness of paths through malloc etc, as |
| 2444 well as to reduce the number of memory locations read or written. |
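
    For example, the marking scheme reduces to simple shifts and masks
    (a sketch of macros defined later in this file):

      #define idx2bit(i)              ((binmap_t)(1) << (i))
      #define mark_smallmap(M,i)      ((M)->smallmap |=  idx2bit(i))
      #define clear_smallmap(M,i)     ((M)->smallmap &= ~idx2bit(i))
      #define smallmap_is_marked(M,i) ((M)->smallmap &   idx2bit(i))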
| 2445 |
| 2446 Segments |
| 2447 A list of segments headed by an embedded malloc_segment record |
| 2448 representing the initial space. |
| 2449 |
| 2450 Address check support |
| 2451 The least_addr field is the least address ever obtained from |
| 2452 MORECORE or MMAP. Attempted frees and reallocs of any address less |
| 2453 than this are trapped (unless INSECURE is defined). |
| 2454 |
| 2455 Magic tag |
| 2456 A cross-check field that should always hold same value as mparams.magic. |
| 2457 |
| 2458 Max allowed footprint |
| 2459 The maximum allowed bytes to allocate from system (zero means no limit) |
| 2460 |
| 2461 Flags |
| 2462 Bits recording whether to use MMAP, locks, or contiguous MORECORE |
| 2463 |
| 2464 Statistics |
| 2465 Each space keeps track of current and maximum system memory |
| 2466 obtained via MORECORE or MMAP. |
| 2467 |
| 2468 Trim support |
| 2469 Fields holding the amount of unused topmost memory that should trigger |
| 2470 trimming, and a counter to force periodic scanning to release unused |
| 2471 non-topmost segments. |
| 2472 |
| 2473 Locking |
| 2474 If USE_LOCKS is defined, the "mutex" lock is acquired and released |
| 2475 around every public call using this mspace. |
| 2476 |
| 2477 Extension support |
| 2478 A void* pointer and a size_t field that can be used to help implement |
| 2479 extensions to this malloc. |
| 2480 */ |
| 2481 |
| 2482 /* Bin types, widths and sizes */ |
| 2483 #define NSMALLBINS (32U) |
| 2484 #define NTREEBINS (32U) |
| 2485 #define SMALLBIN_SHIFT (3U) |
| 2486 #define SMALLBIN_WIDTH (SIZE_T_ONE << SMALLBIN_SHIFT) |
| 2487 #define TREEBIN_SHIFT (8U) |
| 2488 #define MIN_LARGE_SIZE (SIZE_T_ONE << TREEBIN_SHIFT) |
| 2489 #define MAX_SMALL_SIZE (MIN_LARGE_SIZE - SIZE_T_ONE) |
| 2490 #define MAX_SMALL_REQUEST (MAX_SMALL_SIZE - CHUNK_ALIGN_MASK - CHUNK_OVERHEAD) |
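| |
| /* |
|   For illustration (not normative; values depend on configuration): |
|   with SMALLBIN_SHIFT == 3, smallbins hold chunks spaced 8 bytes |
|   apart, from MIN_CHUNK_SIZE up to 248 bytes, and MIN_LARGE_SIZE == |
|   256, so chunks of 256 bytes or more live in treebins. If |
|   CHUNK_OVERHEAD were SIZE_T_SIZE == 4 (32-bit build, no FOOTERS), |
|   MAX_SMALL_REQUEST would be 255 - 7 - 4 == 244. |
| */ |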
| 2491 |
| 2492 struct malloc_state { |
| 2493 binmap_t smallmap; |
| 2494 binmap_t treemap; |
| 2495 size_t dvsize; |
| 2496 size_t topsize; |
| 2497 char* least_addr; |
| 2498 mchunkptr dv; |
| 2499 mchunkptr top; |
| 2500 size_t trim_check; |
| 2501 size_t release_checks; |
| 2502 size_t magic; |
| 2503 mchunkptr smallbins[(NSMALLBINS+1)*2]; |
| 2504 tbinptr treebins[NTREEBINS]; |
| 2505 size_t footprint; |
| 2506 size_t max_footprint; |
| 2507 size_t footprint_limit; /* zero means no limit */ |
| 2508 flag_t mflags; |
| 2509 #if USE_LOCKS |
| 2510 MLOCK_T mutex; /* locate lock among fields that rarely change */ |
| 2511 #endif /* USE_LOCKS */ |
| 2512 msegment seg; |
| 2513 void* extp; /* Unused but available for extensions */ |
| 2514 size_t exts; |
| 2515 }; |
| 2516 |
| 2517 typedef struct malloc_state* mstate; |
| 2518 |
| 2519 /* ------------- Global malloc_state and malloc_params ------------------- */ |
| 2520 |
| 2521 /* |
| 2522 malloc_params holds global properties, including those that can be |
| 2523 dynamically set using mallopt. There is a single instance, mparams, |
| 2524 initialized in init_mparams. Note that the non-zeroness of "magic" |
| 2525 also serves as an initialization flag. |
| 2526 */ |
| 2527 |
| 2528 struct malloc_params { |
| 2529 size_t magic; |
| 2530 size_t page_size; |
| 2531 size_t granularity; |
| 2532 size_t mmap_threshold; |
| 2533 size_t trim_threshold; |
| 2534 flag_t default_mflags; |
| 2535 }; |
| 2536 |
| 2537 static struct malloc_params mparams; |
| 2538 |
| 2539 /* Ensure mparams initialized */ |
| 2540 #define ensure_initialization() (void)(mparams.magic != 0 || init_mparams()) |
| 2541 |
| 2542 #if !ONLY_MSPACES |
| 2543 |
| 2544 /* The global malloc_state used for all non-"mspace" calls */ |
| 2545 static struct malloc_state _gm_; |
| 2546 #define gm (&_gm_) |
| 2547 #define is_global(M) ((M) == &_gm_) |
| 2548 |
| 2549 #endif /* !ONLY_MSPACES */ |
| 2550 |
| 2551 #define is_initialized(M) ((M)->top != 0) |
| 2552 |
| 2553 /* -------------------------- system alloc setup ------------------------- */ |
| 2554 |
| 2555 /* Operations on mflags */ |
| 2556 |
| 2557 #define use_lock(M) ((M)->mflags & USE_LOCK_BIT) |
| 2558 #define enable_lock(M) ((M)->mflags |= USE_LOCK_BIT) |
| 2559 #if USE_LOCKS |
| 2560 #define disable_lock(M) ((M)->mflags &= ~USE_LOCK_BIT) |
| 2561 #else |
| 2562 #define disable_lock(M) |
| 2563 #endif |
| 2564 |
| 2565 #define use_mmap(M) ((M)->mflags & USE_MMAP_BIT) |
| 2566 #define enable_mmap(M) ((M)->mflags |= USE_MMAP_BIT) |
| 2567 #if HAVE_MMAP |
| 2568 #define disable_mmap(M) ((M)->mflags &= ~USE_MMAP_BIT) |
| 2569 #else |
| 2570 #define disable_mmap(M) |
| 2571 #endif |
| 2572 |
| 2573 #define use_noncontiguous(M) ((M)->mflags & USE_NONCONTIGUOUS_BIT) |
| 2574 #define disable_contiguous(M) ((M)->mflags |= USE_NONCONTIGUOUS_BIT) |
| 2575 |
| 2576 #define set_lock(M,L)\ |
| 2577 ((M)->mflags = (L)?\ |
| 2578 ((M)->mflags | USE_LOCK_BIT) :\ |
| 2579 ((M)->mflags & ~USE_LOCK_BIT)) |
| 2580 |
| 2581 /* page-align a size */ |
| 2582 #define page_align(S)\ |
| 2583 (((S) + (mparams.page_size - SIZE_T_ONE)) & ~(mparams.page_size - SIZE_T_ONE)) |
| 2584 |
| 2585 /* granularity-align a size */ |
| 2586 #define granularity_align(S)\ |
| 2587 (((S) + (mparams.granularity - SIZE_T_ONE))\ |
| 2588 & ~(mparams.granularity - SIZE_T_ONE)) |
| 2589 |
| 2590 |
| 2591 /* For mmap, use granularity alignment on Windows, else page-align */ |
| 2592 #ifdef WIN32 |
| 2593 #define mmap_align(S) granularity_align(S) |
| 2594 #else |
| 2595 #define mmap_align(S) page_align(S) |
| 2596 #endif |
| 2597 |
| 2598 /* For sys_alloc, enough padding to ensure the request can be malloc'ed on success */ |
| 2599 #define SYS_ALLOC_PADDING (TOP_FOOT_SIZE + MALLOC_ALIGNMENT) |
| 2600 |
| 2601 #define is_page_aligned(S)\ |
| 2602 (((size_t)(S) & (mparams.page_size - SIZE_T_ONE)) == 0) |
| 2603 #define is_granularity_aligned(S)\ |
| 2604 (((size_t)(S) & (mparams.granularity - SIZE_T_ONE)) == 0) |
| 2605 |
| 2606 /* True if segment S holds address A */ |
| 2607 #define segment_holds(S, A)\ |
| 2608 ((char*)(A) >= S->base && (char*)(A) < S->base + S->size) |
| 2609 |
| 2610 /* Return segment holding given address */ |
| 2611 static msegmentptr segment_holding(mstate m, char* addr) { |
| 2612 msegmentptr sp = &m->seg; |
| 2613 for (;;) { |
| 2614 if (addr >= sp->base && addr < sp->base + sp->size) |
| 2615 return sp; |
| 2616 if ((sp = sp->next) == 0) |
| 2617 return 0; |
| 2618 } |
| 2619 } |
| 2620 |
| 2621 /* Return true if segment contains a segment link */ |
| 2622 static int has_segment_link(mstate m, msegmentptr ss) { |
| 2623 msegmentptr sp = &m->seg; |
| 2624 for (;;) { |
| 2625 if ((char*)sp >= ss->base && (char*)sp < ss->base + ss->size) |
| 2626 return 1; |
| 2627 if ((sp = sp->next) == 0) |
| 2628 return 0; |
| 2629 } |
| 2630 } |
| 2631 |
| 2632 #ifndef MORECORE_CANNOT_TRIM |
| 2633 #define should_trim(M,s) ((s) > (M)->trim_check) |
| 2634 #else /* MORECORE_CANNOT_TRIM */ |
| 2635 #define should_trim(M,s) (0) |
| 2636 #endif /* MORECORE_CANNOT_TRIM */ |
| 2637 |
| 2638 /* |
| 2639 TOP_FOOT_SIZE is padding at the end of a segment, including space |
| 2640 that may be needed to place segment records and fenceposts when new |
| 2641 noncontiguous segments are added. |
| 2642 */ |
| 2643 #define TOP_FOOT_SIZE\ |
| 2644 (align_offset(chunk2mem(0))+pad_request(sizeof(struct malloc_segment))+MIN_CHUNK_SIZE) |
| 2645 |
| 2646 |
| 2647 /* ------------------------------- Hooks -------------------------------- */ |
| 2648 |
| 2649 /* |
| 2650 PREACTION should be defined to return 0 on success, and nonzero on |
| 2651 failure. If you are not using locking, you can redefine these to do |
| 2652 anything you like. |
| 2653 */ |
| 2654 |
| 2655 #if USE_LOCKS |
| 2656 #define PREACTION(M) ((use_lock(M))? ACQUIRE_LOCK(&(M)->mutex) : 0) |
| 2657 #define POSTACTION(M) { if (use_lock(M)) RELEASE_LOCK(&(M)->mutex); } |
| 2658 #else /* USE_LOCKS */ |
| 2659 |
| 2660 #ifndef PREACTION |
| 2661 #define PREACTION(M) (0) |
| 2662 #endif /* PREACTION */ |
| 2663 |
| 2664 #ifndef POSTACTION |
| 2665 #define POSTACTION(M) |
| 2666 #endif /* POSTACTION */ |
| 2667 |
| 2668 #endif /* USE_LOCKS */ |
| 2669 |
| 2670 /* |
| 2671 CORRUPTION_ERROR_ACTION is triggered upon detected bad addresses. |
| 2672 USAGE_ERROR_ACTION is triggered on detected bad frees and |
| 2673 reallocs. The argument p is an address that might have triggered the |
| 2674 fault. It is ignored by the two predefined actions, but might be |
| 2675 useful in custom actions that try to help diagnose errors. |
| 2676 */ |
| 2677 |
| 2678 #if PROCEED_ON_ERROR |
| 2679 |
| 2680 /* A count of the number of corruption errors causing resets */ |
| 2681 int malloc_corruption_error_count; |
| 2682 |
| 2683 /* default corruption action */ |
| 2684 static void reset_on_error(mstate m); |
| 2685 |
| 2686 #define CORRUPTION_ERROR_ACTION(m) reset_on_error(m) |
| 2687 #define USAGE_ERROR_ACTION(m, p) |
| 2688 |
| 2689 #else /* PROCEED_ON_ERROR */ |
| 2690 |
| 2691 #ifndef CORRUPTION_ERROR_ACTION |
| 2692 #define CORRUPTION_ERROR_ACTION(m) ABORT |
| 2693 #endif /* CORRUPTION_ERROR_ACTION */ |
| 2694 |
| 2695 #ifndef USAGE_ERROR_ACTION |
| 2696 #define USAGE_ERROR_ACTION(m,p) ABORT |
| 2697 #endif /* USAGE_ERROR_ACTION */ |
| 2698 |
| 2699 #endif /* PROCEED_ON_ERROR */ |
| 2700 |
| 2701 |
| 2702 /* -------------------------- Debugging setup ---------------------------- */ |
| 2703 |
| 2704 #if ! DEBUG |
| 2705 |
| 2706 #define check_free_chunk(M,P) |
| 2707 #define check_inuse_chunk(M,P) |
| 2708 #define check_malloced_chunk(M,P,N) |
| 2709 #define check_mmapped_chunk(M,P) |
| 2710 #define check_malloc_state(M) |
| 2711 #define check_top_chunk(M,P) |
| 2712 |
| 2713 #else /* DEBUG */ |
| 2714 #define check_free_chunk(M,P) do_check_free_chunk(M,P) |
| 2715 #define check_inuse_chunk(M,P) do_check_inuse_chunk(M,P) |
| 2716 #define check_top_chunk(M,P) do_check_top_chunk(M,P) |
| 2717 #define check_malloced_chunk(M,P,N) do_check_malloced_chunk(M,P,N) |
| 2718 #define check_mmapped_chunk(M,P) do_check_mmapped_chunk(M,P) |
| 2719 #define check_malloc_state(M) do_check_malloc_state(M) |
| 2720 |
| 2721 static void do_check_any_chunk(mstate m, mchunkptr p); |
| 2722 static void do_check_top_chunk(mstate m, mchunkptr p); |
| 2723 static void do_check_mmapped_chunk(mstate m, mchunkptr p); |
| 2724 static void do_check_inuse_chunk(mstate m, mchunkptr p); |
| 2725 static void do_check_free_chunk(mstate m, mchunkptr p); |
| 2726 static void do_check_malloced_chunk(mstate m, void* mem, size_t s); |
| 2727 static void do_check_tree(mstate m, tchunkptr t); |
| 2728 static void do_check_treebin(mstate m, bindex_t i); |
| 2729 static void do_check_smallbin(mstate m, bindex_t i); |
| 2730 static void do_check_malloc_state(mstate m); |
| 2731 static int bin_find(mstate m, mchunkptr x); |
| 2732 static size_t traverse_and_check(mstate m); |
| 2733 #endif /* DEBUG */ |
| 2734 |
| 2735 /* ---------------------------- Indexing Bins ---------------------------- */ |
| 2736 |
| 2737 #define is_small(s) (((s) >> SMALLBIN_SHIFT) < NSMALLBINS) |
| 2738 #define small_index(s) (bindex_t)((s) >> SMALLBIN_SHIFT) |
| 2739 #define small_index2size(i) ((i) << SMALLBIN_SHIFT) |
| 2740 #define MIN_SMALL_INDEX (small_index(MIN_CHUNK_SIZE)) |
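| |
| /* |
|   For illustration: a 72-byte chunk has small_index(72) == 9 and |
|   small_index2size(9) == 72; is_small(s) holds for any s below |
|   NSMALLBINS << SMALLBIN_SHIFT == 256. |
| */ |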
| 2741 |
| 2742 /* addressing by index. See above about smallbin repositioning */ |
| 2743 /* BEGIN android-changed: strict aliasing change: char* cast to void* */ |
| 2744 #define smallbin_at(M, i) ((sbinptr)((void*)&((M)->smallbins[(i)<<1]))) |
| 2745 /* END android-changed */ |
| 2746 #define treebin_at(M,i) (&((M)->treebins[i])) |
| 2747 |
| 2748 /* assign tree index for size S to variable I. Use x86 asm if possible */ |
| 2749 #if defined(__GNUC__) && (defined(__i386__) || defined(__x86_64__)) |
| 2750 #define compute_tree_index(S, I)\ |
| 2751 {\ |
| 2752 unsigned int X = S >> TREEBIN_SHIFT;\ |
| 2753 if (X == 0)\ |
| 2754 I = 0;\ |
| 2755 else if (X > 0xFFFF)\ |
| 2756 I = NTREEBINS-1;\ |
| 2757 else {\ |
| 2758 unsigned int K = (unsigned) sizeof(X)*__CHAR_BIT__ - 1 - (unsigned) __builtin_clz(X); \ |
| 2759 I = (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\ |
| 2760 }\ |
| 2761 } |
| 2762 |
| 2763 #elif defined (__INTEL_COMPILER) |
| 2764 #define compute_tree_index(S, I)\ |
| 2765 {\ |
| 2766 size_t X = S >> TREEBIN_SHIFT;\ |
| 2767 if (X == 0)\ |
| 2768 I = 0;\ |
| 2769 else if (X > 0xFFFF)\ |
| 2770 I = NTREEBINS-1;\ |
| 2771 else {\ |
| 2772 unsigned int K = _bit_scan_reverse (X); \ |
| 2773 I = (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\ |
| 2774 }\ |
| 2775 } |
| 2776 |
| 2777 #elif defined(_MSC_VER) && _MSC_VER>=1300 |
| 2778 #define compute_tree_index(S, I)\ |
| 2779 {\ |
| 2780 size_t X = S >> TREEBIN_SHIFT;\ |
| 2781 if (X == 0)\ |
| 2782 I = 0;\ |
| 2783 else if (X > 0xFFFF)\ |
| 2784 I = NTREEBINS-1;\ |
| 2785 else {\ |
| 2786 unsigned int K;\ |
| 2787 _BitScanReverse((DWORD *) &K, (DWORD) X);\ |
| 2788 I = (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\ |
| 2789 }\ |
| 2790 } |
| 2791 |
| 2792 #else /* GNUC */ |
| 2793 #define compute_tree_index(S, I)\ |
| 2794 {\ |
| 2795 size_t X = S >> TREEBIN_SHIFT;\ |
| 2796 if (X == 0)\ |
| 2797 I = 0;\ |
| 2798 else if (X > 0xFFFF)\ |
| 2799 I = NTREEBINS-1;\ |
| 2800 else {\ |
| 2801 unsigned int Y = (unsigned int)X;\ |
| 2802 unsigned int N = ((Y - 0x100) >> 16) & 8;\ |
| 2803 unsigned int K = (((Y <<= N) - 0x1000) >> 16) & 4;\ |
| 2804 N += K;\ |
| 2805 N += K = (((Y <<= K) - 0x4000) >> 16) & 2;\ |
| 2806 K = 14 - N + ((Y <<= K) >> 15);\ |
| 2807 I = (K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1));\ |
| 2808 }\ |
| 2809 } |
| 2810 #endif /* GNUC */ |
| 2811 |
| 2812 /* Bit representing maximum resolved size in a treebin at i */ |
| 2813 #define bit_for_tree_index(i) \ |
| 2814 (((i) == NTREEBINS-1)? (SIZE_T_BITSIZE-1) : (((i) >> 1) + TREEBIN_SHIFT - 2)) |
| 2815 |
| 2816 /* Shift placing maximum resolved bit in a treebin at i as sign bit */ |
| 2817 #define leftshift_for_tree_index(i) \ |
| 2818 ((i == NTREEBINS-1)? 0 : \ |
| 2819 ((SIZE_T_BITSIZE-SIZE_T_ONE) - (((i) >> 1) + TREEBIN_SHIFT - 2))) |
| 2820 |
| 2821 /* The size of the smallest chunk held in bin with index i */ |
| 2822 #define minsize_for_tree_index(i) \ |
| 2823 ((SIZE_T_ONE << (((i) >> 1) + TREEBIN_SHIFT)) | \ |
| 2824 (((size_t)((i) & SIZE_T_ONE)) << (((i) >> 1) + TREEBIN_SHIFT - 1))) |
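| |
| /* |
|   Worked example: for S == 640, X == (640 >> TREEBIN_SHIFT) == 2, so |
|   K == 1 and I == (1 << 1) + ((640 >> 8) & 1) == 2. Consistently, |
|   minsize_for_tree_index(2) == 512 and minsize_for_tree_index(3) == |
|   768, so treebin 2 holds sizes in [512, 768). |
| */ |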
| 2825 |
| 2826 |
| 2827 /* ------------------------ Operations on bin maps ----------------------- */ |
| 2828 |
| 2829 /* bit corresponding to given index */ |
| 2830 #define idx2bit(i) ((binmap_t)(1) << (i)) |
| 2831 |
| 2832 /* Mark/Clear bits with given index */ |
| 2833 #define mark_smallmap(M,i) ((M)->smallmap |= idx2bit(i)) |
| 2834 #define clear_smallmap(M,i) ((M)->smallmap &= ~idx2bit(i)) |
| 2835 #define smallmap_is_marked(M,i) ((M)->smallmap & idx2bit(i)) |
| 2836 |
| 2837 #define mark_treemap(M,i) ((M)->treemap |= idx2bit(i)) |
| 2838 #define clear_treemap(M,i) ((M)->treemap &= ~idx2bit(i)) |
| 2839 #define treemap_is_marked(M,i) ((M)->treemap & idx2bit(i)) |
| 2840 |
| 2841 /* isolate the least set bit of a bitmap */ |
| 2842 #define least_bit(x) ((x) & -(x)) |
| 2843 |
| 2844 /* mask with all bits to left of least bit of x on */ |
| 2845 #define left_bits(x) ((x<<1) | -(x<<1)) |
| 2846 |
| 2847 /* mask with all bits to left of or equal to least bit of x on */ |
| 2848 #define same_or_left_bits(x) ((x) | -(x)) |
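| |
| /* |
|   For illustration: if smallmap == 0x14 (bins 2 and 4 non-empty), |
|   least_bit(0x14) == 0x04 selects bin 2, and left_bits(idx2bit(2)) |
|   turns on every bit at index 3 or above, so the next non-empty bin |
|   past 2 follows from (smallmap & left_bits(idx2bit(2))) == 0x10. |
| */ |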
| 2849 |
| 2850 /* index corresponding to given bit. Use x86 asm if possible */ |
| 2851 |
| 2852 #if defined(__GNUC__) && (defined(__i386__) || defined(__x86_64__)) |
| 2853 #define compute_bit2idx(X, I)\ |
| 2854 {\ |
| 2855 unsigned int J;\ |
| 2856 J = __builtin_ctz(X); \ |
| 2857 I = (bindex_t)J;\ |
| 2858 } |
| 2859 |
| 2860 #elif defined (__INTEL_COMPILER) |
| 2861 #define compute_bit2idx(X, I)\ |
| 2862 {\ |
| 2863 unsigned int J;\ |
| 2864 J = _bit_scan_forward (X); \ |
| 2865 I = (bindex_t)J;\ |
| 2866 } |
| 2867 |
| 2868 #elif defined(_MSC_VER) && _MSC_VER>=1300 |
| 2869 #define compute_bit2idx(X, I)\ |
| 2870 {\ |
| 2871 unsigned int J;\ |
| 2872 _BitScanForward((DWORD *) &J, X);\ |
| 2873 I = (bindex_t)J;\ |
| 2874 } |
| 2875 |
| 2876 #elif USE_BUILTIN_FFS |
| 2877 #define compute_bit2idx(X, I) I = ffs(X)-1 |
| 2878 |
| 2879 #else |
| 2880 #define compute_bit2idx(X, I)\ |
| 2881 {\ |
| 2882 unsigned int Y = X - 1;\ |
| 2883 unsigned int K = Y >> (16-4) & 16;\ |
| 2884 unsigned int N = K; Y >>= K;\ |
| 2885 N += K = Y >> (8-3) & 8; Y >>= K;\ |
| 2886 N += K = Y >> (4-2) & 4; Y >>= K;\ |
| 2887 N += K = Y >> (2-1) & 2; Y >>= K;\ |
| 2888 N += K = Y >> (1-0) & 1; Y >>= K;\ |
| 2889 I = (bindex_t)(N + Y);\ |
| 2890 } |
| 2891 #endif /* GNUC */ |
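| |
| /* |
|   For illustration: compute_bit2idx expects X to have exactly one set |
|   bit (typically produced by least_bit), so compute_bit2idx(0x20, I) |
|   yields I == 5 under every variant above, including the portable |
|   fallback. |
| */ |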
| 2892 |
| 2893 |
| 2894 /* ----------------------- Runtime Check Support ------------------------- */ |
| 2895 |
| 2896 /* |
| 2897 For security, the main invariant is that malloc/free/etc never |
| 2898 writes to a static address other than malloc_state, unless static |
| 2899 malloc_state itself has been corrupted, which cannot occur via |
| 2900 malloc (because of these checks). In essence this means that we |
| 2901 believe all pointers, sizes, maps etc held in malloc_state, but |
| 2902   check all of those linked or offset from other embedded data |
| 2903 structures. These checks are interspersed with main code in a way |
| 2904 that tends to minimize their run-time cost. |
| 2905 |
| 2906 When FOOTERS is defined, in addition to range checking, we also |
| 2907   verify footer fields of inuse chunks, which can be used to guarantee |
| 2908 that the mstate controlling malloc/free is intact. This is a |
| 2909 streamlined version of the approach described by William Robertson |
| 2910 et al in "Run-time Detection of Heap-based Overflows" LISA'03 |
| 2911 http://www.usenix.org/events/lisa03/tech/robertson.html The footer |
| 2912 of an inuse chunk holds the xor of its mstate and a random seed, |
| 2913 that is checked upon calls to free() and realloc(). This is |
| 2914   (probabilistically) unguessable from outside the program, but can be |
| 2915 computed by any code successfully malloc'ing any chunk, so does not |
| 2916 itself provide protection against code that has already broken |
| 2917 security through some other means. Unlike Robertson et al, we |
| 2918 always dynamically check addresses of all offset chunks (previous, |
| 2919 next, etc). This turns out to be cheaper than relying on hashes. |
| 2920 */ |
| 2921 |
| 2922 #if !INSECURE |
| 2923 /* Check if address a is at least as high as any from MORECORE or MMAP */ |
| 2924 #define ok_address(M, a) ((char*)(a) >= (M)->least_addr) |
| 2925 /* Check if address of next chunk n is higher than base chunk p */ |
| 2926 #define ok_next(p, n) ((char*)(p) < (char*)(n)) |
| 2927 /* Check if p has inuse status */ |
| 2928 #define ok_inuse(p) is_inuse(p) |
| 2929 /* Check if p has its pinuse bit on */ |
| 2930 #define ok_pinuse(p) pinuse(p) |
| 2931 |
| 2932 #else /* !INSECURE */ |
| 2933 #define ok_address(M, a) (1) |
| 2934 #define ok_next(b, n) (1) |
| 2935 #define ok_inuse(p) (1) |
| 2936 #define ok_pinuse(p) (1) |
| 2937 #endif /* !INSECURE */ |
| 2938 |
| 2939 #if (FOOTERS && !INSECURE) |
| 2940 /* Check if (alleged) mstate m has expected magic field */ |
| 2941 #define ok_magic(M) ((M)->magic == mparams.magic) |
| 2942 #else /* (FOOTERS && !INSECURE) */ |
| 2943 #define ok_magic(M) (1) |
| 2944 #endif /* (FOOTERS && !INSECURE) */ |
| 2945 |
| 2946 /* In gcc, use __builtin_expect to minimize impact of checks */ |
| 2947 #if !INSECURE |
| 2948 #if defined(__GNUC__) && __GNUC__ >= 3 |
| 2949 #define RTCHECK(e) __builtin_expect(e, 1) |
| 2950 #else /* GNUC */ |
| 2951 #define RTCHECK(e) (e) |
| 2952 #endif /* GNUC */ |
| 2953 #else /* !INSECURE */ |
| 2954 #define RTCHECK(e) (1) |
| 2955 #endif /* !INSECURE */ |
| 2956 |
| 2957 /* macros to set up inuse chunks with or without footers */ |
| 2958 |
| 2959 #if !FOOTERS |
| 2960 |
| 2961 #define mark_inuse_foot(M,p,s) |
| 2962 |
| 2963 /* Macros for setting head/foot of non-mmapped chunks */ |
| 2964 |
| 2965 /* Set cinuse bit and pinuse bit of next chunk */ |
| 2966 #define set_inuse(M,p,s)\ |
| 2967 ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\ |
| 2968 ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT) |
| 2969 |
| 2970 /* Set cinuse and pinuse of this chunk and pinuse of next chunk */ |
| 2971 #define set_inuse_and_pinuse(M,p,s)\ |
| 2972 ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\ |
| 2973 ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT) |
| 2974 |
| 2975 /* Set size, cinuse and pinuse bit of this chunk */ |
| 2976 #define set_size_and_pinuse_of_inuse_chunk(M, p, s)\ |
| 2977 ((p)->head = (s|PINUSE_BIT|CINUSE_BIT)) |
| 2978 |
| 2979 #else /* FOOTERS */ |
| 2980 |
| 2981 /* Set foot of inuse chunk to be xor of mstate and seed */ |
| 2982 #define mark_inuse_foot(M,p,s)\ |
| 2983 (((mchunkptr)((char*)(p) + (s)))->prev_foot = ((size_t)(M) ^ mparams.magic)) |
| 2984 |
| 2985 #define get_mstate_for(p)\ |
| 2986 ((mstate)(((mchunkptr)((char*)(p) +\ |
| 2987 (chunksize(p))))->prev_foot ^ mparams.magic)) |
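| |
| /* |
|   For illustration: mark_inuse_foot stores ((size_t)m ^ mparams.magic) |
|   just past a chunk allocated from mstate m, so get_mstate_for xors |
|   that word with mparams.magic to recover m; a corrupted footer yields |
|   a bogus mstate that then fails the ok_magic check. |
| */ |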
| 2988 |
| 2989 #define set_inuse(M,p,s)\ |
| 2990 ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\ |
| 2991 (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT), \ |
| 2992 mark_inuse_foot(M,p,s)) |
| 2993 |
| 2994 #define set_inuse_and_pinuse(M,p,s)\ |
| 2995 ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\ |
| 2996 (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT),\ |
| 2997 mark_inuse_foot(M,p,s)) |
| 2998 |
| 2999 #define set_size_and_pinuse_of_inuse_chunk(M, p, s)\ |
| 3000 ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\ |
| 3001 mark_inuse_foot(M, p, s)) |
| 3002 |
| 3003 #endif /* !FOOTERS */ |
| 3004 |
| 3005 /* ---------------------------- setting mparams -------------------------- */ |
| 3006 |
| 3007 #if LOCK_AT_FORK |
| 3008 static void pre_fork(void) { ACQUIRE_LOCK(&(gm)->mutex); } |
| 3009 static void post_fork_parent(void) { RELEASE_LOCK(&(gm)->mutex); } |
| 3010 static void post_fork_child(void) { INITIAL_LOCK(&(gm)->mutex); } |
| 3011 #endif /* LOCK_AT_FORK */ |
| 3012 |
| 3013 /* Initialize mparams */ |
| 3014 static int init_mparams(void) { |
| 3015 /* BEGIN android-added: move pthread_atfork outside of lock */ |
| 3016 int first_run = 0; |
| 3017 /* END android-added */ |
| 3018 #ifdef NEED_GLOBAL_LOCK_INIT |
| 3019 if (malloc_global_mutex_status <= 0) |
| 3020 init_malloc_global_mutex(); |
| 3021 #endif |
| 3022 |
| 3023 ACQUIRE_MALLOC_GLOBAL_LOCK(); |
| 3024 if (mparams.magic == 0) { |
| 3025 size_t magic; |
| 3026 size_t psize; |
| 3027 size_t gsize; |
| 3028 /* BEGIN android-added: move pthread_atfork outside of lock */ |
| 3029 first_run = 1; |
| 3030 /* END android-added */ |
| 3031 |
| 3032 #ifndef WIN32 |
| 3033 psize = malloc_getpagesize; |
| 3034 gsize = ((DEFAULT_GRANULARITY != 0)? DEFAULT_GRANULARITY : psize); |
| 3035 #else /* WIN32 */ |
| 3036 { |
| 3037 SYSTEM_INFO system_info; |
| 3038 GetSystemInfo(&system_info); |
| 3039 psize = system_info.dwPageSize; |
| 3040 gsize = ((DEFAULT_GRANULARITY != 0)? |
| 3041 DEFAULT_GRANULARITY : system_info.dwAllocationGranularity); |
| 3042 } |
| 3043 #endif /* WIN32 */ |
| 3044 |
| 3045 /* Sanity-check configuration: |
| 3046 size_t must be unsigned and as wide as pointer type. |
| 3047 ints must be at least 4 bytes. |
| 3048 alignment must be at least 8. |
| 3049 Alignment, min chunk size, and page size must all be powers of 2. |
| 3050 */ |
| 3051 if ((sizeof(size_t) != sizeof(char*)) || |
| 3052 (MAX_SIZE_T < MIN_CHUNK_SIZE) || |
| 3053 (sizeof(int) < 4) || |
| 3054 (MALLOC_ALIGNMENT < (size_t)8U) || |
| 3055 ((MALLOC_ALIGNMENT & (MALLOC_ALIGNMENT-SIZE_T_ONE)) != 0) || |
| 3056 ((MCHUNK_SIZE & (MCHUNK_SIZE-SIZE_T_ONE)) != 0) || |
| 3057 ((gsize & (gsize-SIZE_T_ONE)) != 0) || |
| 3058 ((psize & (psize-SIZE_T_ONE)) != 0)) |
| 3059 ABORT; |
| 3060 mparams.granularity = gsize; |
| 3061 mparams.page_size = psize; |
| 3062 mparams.mmap_threshold = DEFAULT_MMAP_THRESHOLD; |
| 3063 mparams.trim_threshold = DEFAULT_TRIM_THRESHOLD; |
| 3064 #if MORECORE_CONTIGUOUS |
| 3065 mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT; |
| 3066 #else /* MORECORE_CONTIGUOUS */ |
| 3067 mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT|USE_NONCONTIGUOUS_BIT; |
| 3068 #endif /* MORECORE_CONTIGUOUS */ |
| 3069 |
| 3070 #if !ONLY_MSPACES |
| 3071 /* Set up lock for main malloc area */ |
| 3072 gm->mflags = mparams.default_mflags; |
| 3073 (void)INITIAL_LOCK(&gm->mutex); |
| 3074 #endif |
| 3075 /* BEGIN android-removed: move pthread_atfork outside of lock */ |
| 3076 #if 0 && LOCK_AT_FORK |
| 3077 pthread_atfork(&pre_fork, &post_fork_parent, &post_fork_child); |
| 3078 #endif |
| 3079 /* END android-removed */ |
| 3080 |
| 3081 { |
| 3082 #if USE_DEV_RANDOM |
| 3083 int fd; |
| 3084 unsigned char buf[sizeof(size_t)]; |
| 3085 /* Try to use /dev/urandom, else fall back on using time */ |
| 3086 if ((fd = open("/dev/urandom", O_RDONLY)) >= 0 && |
| 3087 read(fd, buf, sizeof(buf)) == sizeof(buf)) { |
| 3088 magic = *((size_t *) buf); |
| 3089 close(fd); |
| 3090 } |
| 3091 else |
| 3092 #endif /* USE_DEV_RANDOM */ |
| 3093 #ifdef WIN32 |
| 3094 magic = (size_t)(GetTickCount() ^ (size_t)0x55555555U); |
| 3095 #elif defined(LACKS_TIME_H) |
| 3096 magic = (size_t)&magic ^ (size_t)0x55555555U; |
| 3097 #else |
| 3098 magic = (size_t)(time(0) ^ (size_t)0x55555555U); |
| 3099 #endif |
| 3100 magic |= (size_t)8U; /* ensure nonzero */ |
| 3101 magic &= ~(size_t)7U; /* improve chances of fault for bad values */ |
| 3102       /* Until memory-ordering primitives are commonly available, use a volatile write */ |
| 3103 (*(volatile size_t *)(&(mparams.magic))) = magic; |
| 3104 } |
| 3105 } |
| 3106 |
| 3107 RELEASE_MALLOC_GLOBAL_LOCK(); |
| 3108 /* BEGIN android-added: move pthread_atfork outside of lock */ |
| 3109 if (first_run != 0) { |
| 3110 #if LOCK_AT_FORK |
| 3111     pthread_atfork(&pre_fork, &post_fork_parent, &post_fork_child); |
| 3112 #endif |
| 3113 } |
| 3114 /* END android-added */ |
| 3115 return 1; |
| 3116 } |
| 3117 |
| 3118 /* support for mallopt */ |
| 3119 static int change_mparam(int param_number, int value) { |
| 3120 size_t val; |
| 3121 ensure_initialization(); |
| 3122 val = (value == -1)? MAX_SIZE_T : (size_t)value; |
| 3123 switch(param_number) { |
| 3124 case M_TRIM_THRESHOLD: |
| 3125 mparams.trim_threshold = val; |
| 3126 return 1; |
| 3127 case M_GRANULARITY: |
| 3128 if (val >= mparams.page_size && ((val & (val-1)) == 0)) { |
| 3129 mparams.granularity = val; |
| 3130 return 1; |
| 3131 } |
| 3132 else |
| 3133 return 0; |
| 3134 case M_MMAP_THRESHOLD: |
| 3135 mparams.mmap_threshold = val; |
| 3136 return 1; |
| 3137 default: |
| 3138 return 0; |
| 3139 } |
| 3140 } |
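| |
| /* |
|   For illustration: mallopt(M_GRANULARITY, 64 * 1024) reaches |
|   change_mparam, which accepts 65536 because it is a power of two no |
|   smaller than the page size, whereas a value such as 3000 would be |
|   rejected and mallopt would return 0. |
| */ |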
| 3141 |
| 3142 #if DEBUG |
| 3143 /* ------------------------- Debugging Support --------------------------- */ |
| 3144 |
| 3145 /* Check properties of any chunk, whether free, inuse, mmapped etc */ |
| 3146 static void do_check_any_chunk(mstate m, mchunkptr p) { |
| 3147 assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD)); |
| 3148 assert(ok_address(m, p)); |
| 3149 } |
| 3150 |
| 3151 /* Check properties of top chunk */ |
| 3152 static void do_check_top_chunk(mstate m, mchunkptr p) { |
| 3153 msegmentptr sp = segment_holding(m, (char*)p); |
| 3154 size_t sz = p->head & ~INUSE_BITS; /* third-lowest bit can be set! */ |
| 3155 assert(sp != 0); |
| 3156 assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD)); |
| 3157 assert(ok_address(m, p)); |
| 3158 assert(sz == m->topsize); |
| 3159 assert(sz > 0); |
| 3160 assert(sz == ((sp->base + sp->size) - (char*)p) - TOP_FOOT_SIZE); |
| 3161 assert(pinuse(p)); |
| 3162 assert(!pinuse(chunk_plus_offset(p, sz))); |
| 3163 } |
| 3164 |
| 3165 /* Check properties of (inuse) mmapped chunks */ |
| 3166 static void do_check_mmapped_chunk(mstate m, mchunkptr p) { |
| 3167 size_t sz = chunksize(p); |
| 3168 size_t len = (sz + (p->prev_foot) + MMAP_FOOT_PAD); |
| 3169 assert(is_mmapped(p)); |
| 3170 assert(use_mmap(m)); |
| 3171 assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD)); |
| 3172 assert(ok_address(m, p)); |
| 3173 assert(!is_small(sz)); |
| 3174 assert((len & (mparams.page_size-SIZE_T_ONE)) == 0); |
| 3175 assert(chunk_plus_offset(p, sz)->head == FENCEPOST_HEAD); |
| 3176 assert(chunk_plus_offset(p, sz+SIZE_T_SIZE)->head == 0); |
| 3177 } |
| 3178 |
| 3179 /* Check properties of inuse chunks */ |
| 3180 static void do_check_inuse_chunk(mstate m, mchunkptr p) { |
| 3181 do_check_any_chunk(m, p); |
| 3182 assert(is_inuse(p)); |
| 3183 assert(next_pinuse(p)); |
| 3184 /* If not pinuse and not mmapped, previous chunk has OK offset */ |
| 3185 assert(is_mmapped(p) || pinuse(p) || next_chunk(prev_chunk(p)) == p); |
| 3186 if (is_mmapped(p)) |
| 3187 do_check_mmapped_chunk(m, p); |
| 3188 } |
| 3189 |
| 3190 /* Check properties of free chunks */ |
| 3191 static void do_check_free_chunk(mstate m, mchunkptr p) { |
| 3192 size_t sz = chunksize(p); |
| 3193 mchunkptr next = chunk_plus_offset(p, sz); |
| 3194 do_check_any_chunk(m, p); |
| 3195 assert(!is_inuse(p)); |
| 3196 assert(!next_pinuse(p)); |
| 3197 assert (!is_mmapped(p)); |
| 3198 if (p != m->dv && p != m->top) { |
| 3199 if (sz >= MIN_CHUNK_SIZE) { |
| 3200 assert((sz & CHUNK_ALIGN_MASK) == 0); |
| 3201 assert(is_aligned(chunk2mem(p))); |
| 3202 assert(next->prev_foot == sz); |
| 3203 assert(pinuse(p)); |
| 3204 assert (next == m->top || is_inuse(next)); |
| 3205 assert(p->fd->bk == p); |
| 3206 assert(p->bk->fd == p); |
| 3207 } |
| 3208 else /* markers are always of size SIZE_T_SIZE */ |
| 3209 assert(sz == SIZE_T_SIZE); |
| 3210 } |
| 3211 } |
| 3212 |
| 3213 /* Check properties of malloced chunks at the point they are malloced */ |
| 3214 static void do_check_malloced_chunk(mstate m, void* mem, size_t s) { |
| 3215 if (mem != 0) { |
| 3216 mchunkptr p = mem2chunk(mem); |
| 3217 size_t sz = p->head & ~INUSE_BITS; |
| 3218 do_check_inuse_chunk(m, p); |
| 3219 assert((sz & CHUNK_ALIGN_MASK) == 0); |
| 3220 assert(sz >= MIN_CHUNK_SIZE); |
| 3221 assert(sz >= s); |
| 3222 /* unless mmapped, size is less than MIN_CHUNK_SIZE more than request */ |
| 3223 assert(is_mmapped(p) || sz < (s + MIN_CHUNK_SIZE)); |
| 3224 } |
| 3225 } |
| 3226 |
| 3227 /* Check a tree and its subtrees. */ |
| 3228 static void do_check_tree(mstate m, tchunkptr t) { |
| 3229 tchunkptr head = 0; |
| 3230 tchunkptr u = t; |
| 3231 bindex_t tindex = t->index; |
| 3232 size_t tsize = chunksize(t); |
| 3233 bindex_t idx; |
| 3234 compute_tree_index(tsize, idx); |
| 3235 assert(tindex == idx); |
| 3236 assert(tsize >= MIN_LARGE_SIZE); |
| 3237 assert(tsize >= minsize_for_tree_index(idx)); |
| 3238 assert((idx == NTREEBINS-1) || (tsize < minsize_for_tree_index((idx+1)))); |
| 3239 |
| 3240 do { /* traverse through chain of same-sized nodes */ |
| 3241 do_check_any_chunk(m, ((mchunkptr)u)); |
| 3242 assert(u->index == tindex); |
| 3243 assert(chunksize(u) == tsize); |
| 3244 assert(!is_inuse(u)); |
| 3245 assert(!next_pinuse(u)); |
| 3246 assert(u->fd->bk == u); |
| 3247 assert(u->bk->fd == u); |
| 3248 if (u->parent == 0) { |
| 3249 assert(u->child[0] == 0); |
| 3250 assert(u->child[1] == 0); |
| 3251 } |
| 3252 else { |
| 3253 assert(head == 0); /* only one node on chain has parent */ |
| 3254 head = u; |
| 3255 assert(u->parent != u); |
| 3256 assert (u->parent->child[0] == u || |
| 3257 u->parent->child[1] == u || |
| 3258 *((tbinptr*)(u->parent)) == u); |
| 3259 if (u->child[0] != 0) { |
| 3260 assert(u->child[0]->parent == u); |
| 3261 assert(u->child[0] != u); |
| 3262 do_check_tree(m, u->child[0]); |
| 3263 } |
| 3264 if (u->child[1] != 0) { |
| 3265 assert(u->child[1]->parent == u); |
| 3266 assert(u->child[1] != u); |
| 3267 do_check_tree(m, u->child[1]); |
| 3268 } |
| 3269 if (u->child[0] != 0 && u->child[1] != 0) { |
| 3270 assert(chunksize(u->child[0]) < chunksize(u->child[1])); |
| 3271 } |
| 3272 } |
| 3273 u = u->fd; |
| 3274 } while (u != t); |
| 3275 assert(head != 0); |
| 3276 } |
| 3277 |
| 3278 /* Check all the chunks in a treebin. */ |
| 3279 static void do_check_treebin(mstate m, bindex_t i) { |
| 3280 tbinptr* tb = treebin_at(m, i); |
| 3281 tchunkptr t = *tb; |
| 3282 int empty = (m->treemap & (1U << i)) == 0; |
| 3283 if (t == 0) |
| 3284 assert(empty); |
| 3285 if (!empty) |
| 3286 do_check_tree(m, t); |
| 3287 } |
| 3288 |
| 3289 /* Check all the chunks in a smallbin. */ |
| 3290 static void do_check_smallbin(mstate m, bindex_t i) { |
| 3291 sbinptr b = smallbin_at(m, i); |
| 3292 mchunkptr p = b->bk; |
| 3293 unsigned int empty = (m->smallmap & (1U << i)) == 0; |
| 3294 if (p == b) |
| 3295 assert(empty); |
| 3296 if (!empty) { |
| 3297 for (; p != b; p = p->bk) { |
| 3298 size_t size = chunksize(p); |
| 3299 mchunkptr q; |
| 3300 /* each chunk claims to be free */ |
| 3301 do_check_free_chunk(m, p); |
| 3302 /* chunk belongs in bin */ |
| 3303 assert(small_index(size) == i); |
| 3304 assert(p->bk == b || chunksize(p->bk) == chunksize(p)); |
| 3305 /* chunk is followed by an inuse chunk */ |
| 3306 q = next_chunk(p); |
| 3307 if (q->head != FENCEPOST_HEAD) |
| 3308 do_check_inuse_chunk(m, q); |
| 3309 } |
| 3310 } |
| 3311 } |
| 3312 |
| 3313 /* Find x in a bin. Used in other check functions. */ |
| 3314 static int bin_find(mstate m, mchunkptr x) { |
| 3315 size_t size = chunksize(x); |
| 3316 if (is_small(size)) { |
| 3317 bindex_t sidx = small_index(size); |
| 3318 sbinptr b = smallbin_at(m, sidx); |
| 3319 if (smallmap_is_marked(m, sidx)) { |
| 3320 mchunkptr p = b; |
| 3321 do { |
| 3322 if (p == x) |
| 3323 return 1; |
| 3324 } while ((p = p->fd) != b); |
| 3325 } |
| 3326 } |
| 3327 else { |
| 3328 bindex_t tidx; |
| 3329 compute_tree_index(size, tidx); |
| 3330 if (treemap_is_marked(m, tidx)) { |
| 3331 tchunkptr t = *treebin_at(m, tidx); |
| 3332 size_t sizebits = size << leftshift_for_tree_index(tidx); |
| 3333 while (t != 0 && chunksize(t) != size) { |
| 3334 t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1]; |
| 3335 sizebits <<= 1; |
| 3336 } |
| 3337 if (t != 0) { |
| 3338 tchunkptr u = t; |
| 3339 do { |
| 3340 if (u == (tchunkptr)x) |
| 3341 return 1; |
| 3342 } while ((u = u->fd) != t); |
| 3343 } |
| 3344 } |
| 3345 } |
| 3346 return 0; |
| 3347 } |
| 3348 |
| 3349 /* Traverse each chunk and check it; return total */ |
| 3350 static size_t traverse_and_check(mstate m) { |
| 3351 size_t sum = 0; |
| 3352 if (is_initialized(m)) { |
| 3353 msegmentptr s = &m->seg; |
| 3354 sum += m->topsize + TOP_FOOT_SIZE; |
| 3355 while (s != 0) { |
| 3356 mchunkptr q = align_as_chunk(s->base); |
| 3357 mchunkptr lastq = 0; |
| 3358 assert(pinuse(q)); |
| 3359 while (segment_holds(s, q) && |
| 3360 q != m->top && q->head != FENCEPOST_HEAD) { |
| 3361 sum += chunksize(q); |
| 3362 if (is_inuse(q)) { |
| 3363 assert(!bin_find(m, q)); |
| 3364 do_check_inuse_chunk(m, q); |
| 3365 } |
| 3366 else { |
| 3367 assert(q == m->dv || bin_find(m, q)); |
| 3368 assert(lastq == 0 || is_inuse(lastq)); /* Not 2 consecutive free */ |
| 3369 do_check_free_chunk(m, q); |
| 3370 } |
| 3371 lastq = q; |
| 3372 q = next_chunk(q); |
| 3373 } |
| 3374 s = s->next; |
| 3375 } |
| 3376 } |
| 3377 return sum; |
| 3378 } |
| 3379 |
| 3380 |
| 3381 /* Check all properties of malloc_state. */ |
| 3382 static void do_check_malloc_state(mstate m) { |
| 3383 bindex_t i; |
| 3384 size_t total; |
| 3385 /* check bins */ |
| 3386 for (i = 0; i < NSMALLBINS; ++i) |
| 3387 do_check_smallbin(m, i); |
| 3388 for (i = 0; i < NTREEBINS; ++i) |
| 3389 do_check_treebin(m, i); |
| 3390 |
| 3391 if (m->dvsize != 0) { /* check dv chunk */ |
| 3392 do_check_any_chunk(m, m->dv); |
| 3393 assert(m->dvsize == chunksize(m->dv)); |
| 3394 assert(m->dvsize >= MIN_CHUNK_SIZE); |
| 3395 assert(bin_find(m, m->dv) == 0); |
| 3396 } |
| 3397 |
| 3398 if (m->top != 0) { /* check top chunk */ |
| 3399 do_check_top_chunk(m, m->top); |
| 3400 /*assert(m->topsize == chunksize(m->top)); redundant */ |
| 3401 assert(m->topsize > 0); |
| 3402 assert(bin_find(m, m->top) == 0); |
| 3403 } |
| 3404 |
| 3405 total = traverse_and_check(m); |
| 3406 assert(total <= m->footprint); |
| 3407 assert(m->footprint <= m->max_footprint); |
| 3408 } |
| 3409 #endif /* DEBUG */ |
| 3410 |
| 3411 /* ----------------------------- statistics ------------------------------ */ |
| 3412 |
| 3413 #if !NO_MALLINFO |
| 3414 static struct mallinfo internal_mallinfo(mstate m) { |
| 3415 struct mallinfo nm = { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 }; |
| 3416 ensure_initialization(); |
| 3417 if (!PREACTION(m)) { |
| 3418 check_malloc_state(m); |
| 3419 if (is_initialized(m)) { |
| 3420 size_t nfree = SIZE_T_ONE; /* top always free */ |
| 3421 size_t mfree = m->topsize + TOP_FOOT_SIZE; |
| 3422 size_t sum = mfree; |
| 3423 msegmentptr s = &m->seg; |
| 3424 while (s != 0) { |
| 3425 mchunkptr q = align_as_chunk(s->base); |
| 3426 while (segment_holds(s, q) && |
| 3427 q != m->top && q->head != FENCEPOST_HEAD) { |
| 3428 size_t sz = chunksize(q); |
| 3429 sum += sz; |
| 3430 if (!is_inuse(q)) { |
| 3431 mfree += sz; |
| 3432 ++nfree; |
| 3433 } |
| 3434 q = next_chunk(q); |
| 3435 } |
| 3436 s = s->next; |
| 3437 } |
| 3438 |
| 3439 nm.arena = sum; |
| 3440 nm.ordblks = nfree; |
| 3441 nm.hblkhd = m->footprint - sum; |
| 3442 nm.usmblks = m->max_footprint; |
| 3443 nm.uordblks = m->footprint - mfree; |
| 3444 nm.fordblks = mfree; |
| 3445 nm.keepcost = m->topsize; |
| 3446 } |
| 3447 |
| 3448 POSTACTION(m); |
| 3449 } |
| 3450 return nm; |
| 3451 } |
| 3452 #endif /* !NO_MALLINFO */ |
| 3453 |
| 3454 #if !NO_MALLOC_STATS |
| 3455 static void internal_malloc_stats(mstate m) { |
| 3456 ensure_initialization(); |
| 3457 if (!PREACTION(m)) { |
| 3458 size_t maxfp = 0; |
| 3459 size_t fp = 0; |
| 3460 size_t used = 0; |
| 3461 check_malloc_state(m); |
| 3462 if (is_initialized(m)) { |
| 3463 msegmentptr s = &m->seg; |
| 3464 maxfp = m->max_footprint; |
| 3465 fp = m->footprint; |
| 3466 used = fp - (m->topsize + TOP_FOOT_SIZE); |
| 3467 |
| 3468 while (s != 0) { |
| 3469 mchunkptr q = align_as_chunk(s->base); |
| 3470 while (segment_holds(s, q) && |
| 3471 q != m->top && q->head != FENCEPOST_HEAD) { |
| 3472 if (!is_inuse(q)) |
| 3473 used -= chunksize(q); |
| 3474 q = next_chunk(q); |
| 3475 } |
| 3476 s = s->next; |
| 3477 } |
| 3478 } |
| 3479 POSTACTION(m); /* drop lock */ |
| 3480 fprintf(stderr, "max system bytes = %10lu\n", (unsigned long)(maxfp)); |
| 3481 fprintf(stderr, "system bytes = %10lu\n", (unsigned long)(fp)); |
| 3482 fprintf(stderr, "in use bytes = %10lu\n", (unsigned long)(used)); |
| 3483 } |
| 3484 } |
| 3485 #endif /* !NO_MALLOC_STATS */ |
| 3486 |
| 3487 /* ----------------------- Operations on smallbins ----------------------- */ |
| 3488 |
| 3489 /* |
| 3490 Various forms of linking and unlinking are defined as macros. Even |
| 3491 the ones for trees, which are very long but have very short typical |
| 3492 paths. This is ugly but reduces reliance on inlining support of |
| 3493 compilers. |
| 3494 */ |
| 3495 |
| 3496 /* Link a free chunk into a smallbin */ |
| 3497 #define insert_small_chunk(M, P, S) {\ |
| 3498 bindex_t I = small_index(S);\ |
| 3499 mchunkptr B = smallbin_at(M, I);\ |
| 3500 mchunkptr F = B;\ |
| 3501 assert(S >= MIN_CHUNK_SIZE);\ |
| 3502 if (!smallmap_is_marked(M, I))\ |
| 3503 mark_smallmap(M, I);\ |
| 3504 else if (RTCHECK(ok_address(M, B->fd)))\ |
| 3505 F = B->fd;\ |
| 3506 else {\ |
| 3507 CORRUPTION_ERROR_ACTION(M);\ |
| 3508 }\ |
| 3509 B->fd = P;\ |
| 3510 F->bk = P;\ |
| 3511 P->fd = F;\ |
| 3512 P->bk = B;\ |
| 3513 } |
| 3514 |
| 3515 /* Unlink a chunk from a smallbin */ |
| 3516 #define unlink_small_chunk(M, P, S) {\ |
| 3517 mchunkptr F = P->fd;\ |
| 3518 mchunkptr B = P->bk;\ |
| 3519 bindex_t I = small_index(S);\ |
| 3520 assert(P != B);\ |
| 3521 assert(P != F);\ |
| 3522 assert(chunksize(P) == small_index2size(I));\ |
| 3523 if (RTCHECK(F == smallbin_at(M,I) || (ok_address(M, F) && F->bk == P))) { \ |
| 3524 if (B == F) {\ |
| 3525 clear_smallmap(M, I);\ |
| 3526 }\ |
| 3527 else if (RTCHECK(B == smallbin_at(M,I) ||\ |
| 3528 (ok_address(M, B) && B->fd == P))) {\ |
| 3529 F->bk = B;\ |
| 3530 B->fd = F;\ |
| 3531 }\ |
| 3532 else {\ |
| 3533 CORRUPTION_ERROR_ACTION(M);\ |
| 3534 }\ |
| 3535 }\ |
| 3536 else {\ |
| 3537 CORRUPTION_ERROR_ACTION(M);\ |
| 3538 }\ |
| 3539 } |
| 3540 |
| 3541 /* Unlink the first chunk from a smallbin */ |
| 3542 #define unlink_first_small_chunk(M, B, P, I) {\ |
| 3543 mchunkptr F = P->fd;\ |
| 3544 assert(P != B);\ |
| 3545 assert(P != F);\ |
| 3546 assert(chunksize(P) == small_index2size(I));\ |
| 3547 if (B == F) {\ |
| 3548 clear_smallmap(M, I);\ |
| 3549 }\ |
| 3550 else if (RTCHECK(ok_address(M, F) && F->bk == P)) {\ |
| 3551 F->bk = B;\ |
| 3552 B->fd = F;\ |
| 3553 }\ |
| 3554 else {\ |
| 3555 CORRUPTION_ERROR_ACTION(M);\ |
| 3556 }\ |
| 3557 } |
| 3558 |
| 3559 /* Replace dv node, binning the old one */ |
| 3560 /* Used only when dvsize known to be small */ |
| 3561 #define replace_dv(M, P, S) {\ |
| 3562 size_t DVS = M->dvsize;\ |
| 3563 assert(is_small(DVS));\ |
| 3564 if (DVS != 0) {\ |
| 3565 mchunkptr DV = M->dv;\ |
| 3566 insert_small_chunk(M, DV, DVS);\ |
| 3567 }\ |
| 3568 M->dvsize = S;\ |
| 3569 M->dv = P;\ |
| 3570 } |
| 3571 |
| 3572 /* ------------------------- Operations on trees ------------------------- */ |
| 3573 |
| 3574 /* Insert chunk into tree */ |
| 3575 #define insert_large_chunk(M, X, S) {\ |
| 3576 tbinptr* H;\ |
| 3577 bindex_t I;\ |
| 3578 compute_tree_index(S, I);\ |
| 3579 H = treebin_at(M, I);\ |
| 3580 X->index = I;\ |
| 3581 X->child[0] = X->child[1] = 0;\ |
| 3582 if (!treemap_is_marked(M, I)) {\ |
| 3583 mark_treemap(M, I);\ |
| 3584 *H = X;\ |
| 3585 X->parent = (tchunkptr)H;\ |
| 3586 X->fd = X->bk = X;\ |
| 3587 }\ |
| 3588 else {\ |
| 3589 tchunkptr T = *H;\ |
| 3590 size_t K = S << leftshift_for_tree_index(I);\ |
| 3591 for (;;) {\ |
| 3592 if (chunksize(T) != S) {\ |
| 3593 tchunkptr* C = &(T->child[(K >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1]);\ |
| 3594 K <<= 1;\ |
| 3595 if (*C != 0)\ |
| 3596 T = *C;\ |
| 3597 else if (RTCHECK(ok_address(M, C))) {\ |
| 3598 *C = X;\ |
| 3599 X->parent = T;\ |
| 3600 X->fd = X->bk = X;\ |
| 3601 break;\ |
| 3602 }\ |
| 3603 else {\ |
| 3604 CORRUPTION_ERROR_ACTION(M);\ |
| 3605 break;\ |
| 3606 }\ |
| 3607 }\ |
| 3608 else {\ |
| 3609 tchunkptr F = T->fd;\ |
| 3610 if (RTCHECK(ok_address(M, T) && ok_address(M, F))) {\ |
| 3611 T->fd = F->bk = X;\ |
| 3612 X->fd = F;\ |
| 3613 X->bk = T;\ |
| 3614 X->parent = 0;\ |
| 3615 break;\ |
| 3616 }\ |
| 3617 else {\ |
| 3618 CORRUPTION_ERROR_ACTION(M);\ |
| 3619 break;\ |
| 3620 }\ |
| 3621 }\ |
| 3622 }\ |
| 3623 }\ |
| 3624 } |
| 3625 |
| 3626 /* |
| 3627 Unlink steps: |
| 3628 |
| 3629 1. If x is a chained node, unlink it from its same-sized fd/bk links |
| 3630 and choose its bk node as its replacement. |
| 3631 2. If x was the last node of its size, but not a leaf node, it must |
| 3632 be replaced with a leaf node (not merely one with an open left or |
| 3633      right), to make sure that lefts and rights of descendants |
| 3634      correspond properly to bit masks. We use the rightmost descendant |
| 3635 of x. We could use any other leaf, but this is easy to locate and |
| 3636 tends to counteract removal of leftmosts elsewhere, and so keeps |
| 3637 paths shorter than minimally guaranteed. This doesn't loop much |
| 3638 because on average a node in a tree is near the bottom. |
| 3639 3. If x is the base of a chain (i.e., has parent links) relink |
| 3640 x's parent and children to x's replacement (or null if none). |
| 3641 */ |
| 3642 |
| 3643 #define unlink_large_chunk(M, X) {\ |
| 3644 tchunkptr XP = X->parent;\ |
| 3645 tchunkptr R;\ |
| 3646 if (X->bk != X) {\ |
| 3647 tchunkptr F = X->fd;\ |
| 3648 R = X->bk;\ |
| 3649 if (RTCHECK(ok_address(M, F) && F->bk == X && R->fd == X)) {\ |
| 3650 F->bk = R;\ |
| 3651 R->fd = F;\ |
| 3652 }\ |
| 3653 else {\ |
| 3654 CORRUPTION_ERROR_ACTION(M);\ |
| 3655 }\ |
| 3656 }\ |
| 3657 else {\ |
| 3658 tchunkptr* RP;\ |
| 3659 if (((R = *(RP = &(X->child[1]))) != 0) ||\ |
| 3660 ((R = *(RP = &(X->child[0]))) != 0)) {\ |
| 3661 tchunkptr* CP;\ |
| 3662 while ((*(CP = &(R->child[1])) != 0) ||\ |
| 3663 (*(CP = &(R->child[0])) != 0)) {\ |
| 3664 R = *(RP = CP);\ |
| 3665 }\ |
| 3666 if (RTCHECK(ok_address(M, RP)))\ |
| 3667 *RP = 0;\ |
| 3668 else {\ |
| 3669 CORRUPTION_ERROR_ACTION(M);\ |
| 3670 }\ |
| 3671 }\ |
| 3672 }\ |
| 3673 if (XP != 0) {\ |
| 3674 tbinptr* H = treebin_at(M, X->index);\ |
| 3675 if (X == *H) {\ |
| 3676 if ((*H = R) == 0) \ |
| 3677 clear_treemap(M, X->index);\ |
| 3678 }\ |
| 3679 else if (RTCHECK(ok_address(M, XP))) {\ |
| 3680 if (XP->child[0] == X) \ |
| 3681 XP->child[0] = R;\ |
| 3682 else \ |
| 3683 XP->child[1] = R;\ |
| 3684 }\ |
| 3685 else\ |
| 3686 CORRUPTION_ERROR_ACTION(M);\ |
| 3687 if (R != 0) {\ |
| 3688 if (RTCHECK(ok_address(M, R))) {\ |
| 3689 tchunkptr C0, C1;\ |
| 3690 R->parent = XP;\ |
| 3691 if ((C0 = X->child[0]) != 0) {\ |
| 3692 if (RTCHECK(ok_address(M, C0))) {\ |
| 3693 R->child[0] = C0;\ |
| 3694 C0->parent = R;\ |
| 3695 }\ |
| 3696 else\ |
| 3697 CORRUPTION_ERROR_ACTION(M);\ |
| 3698 }\ |
| 3699 if ((C1 = X->child[1]) != 0) {\ |
| 3700 if (RTCHECK(ok_address(M, C1))) {\ |
| 3701 R->child[1] = C1;\ |
| 3702 C1->parent = R;\ |
| 3703 }\ |
| 3704 else\ |
| 3705 CORRUPTION_ERROR_ACTION(M);\ |
| 3706 }\ |
| 3707 }\ |
| 3708 else\ |
| 3709 CORRUPTION_ERROR_ACTION(M);\ |
| 3710 }\ |
| 3711 }\ |
| 3712 } |
| 3713 |
| 3714 /* Relays to large vs small bin operations */ |
| 3715 |
| 3716 #define insert_chunk(M, P, S)\ |
| 3717 if (is_small(S)) insert_small_chunk(M, P, S)\ |
| 3718 else { tchunkptr TP = (tchunkptr)(P); insert_large_chunk(M, TP, S); } |
| 3719 |
| 3720 #define unlink_chunk(M, P, S)\ |
| 3721 if (is_small(S)) unlink_small_chunk(M, P, S)\ |
| 3722 else { tchunkptr TP = (tchunkptr)(P); unlink_large_chunk(M, TP); } |
| 3723 |
| 3724 |
| 3725 /* Relays to internal calls to malloc/free from realloc, memalign etc */ |
| 3726 |
| 3727 #if ONLY_MSPACES |
| 3728 #define internal_malloc(m, b) mspace_malloc(m, b) |
| 3729 #define internal_free(m, mem) mspace_free(m,mem); |
| 3730 #else /* ONLY_MSPACES */ |
| 3731 #if MSPACES |
| 3732 #define internal_malloc(m, b)\ |
| 3733 ((m == gm)? dlmalloc(b) : mspace_malloc(m, b)) |
| 3734 #define internal_free(m, mem)\ |
| 3735 if (m == gm) dlfree(mem); else mspace_free(m,mem); |
| 3736 #else /* MSPACES */ |
| 3737 #define internal_malloc(m, b) dlmalloc(b) |
| 3738 #define internal_free(m, mem) dlfree(mem) |
| 3739 #endif /* MSPACES */ |
| 3740 #endif /* ONLY_MSPACES */ |
| 3741 |
| 3742 /* ----------------------- Direct-mmapping chunks ----------------------- */ |
| 3743 |
| 3744 /* |
| 3745 Directly mmapped chunks are set up with an offset to the start of |
| 3746 the mmapped region stored in the prev_foot field of the chunk. This |
| 3747 allows reconstruction of the required argument to MUNMAP when freed, |
| 3748 and also allows adjustment of the returned chunk to meet alignment |
| 3749 requirements (especially in memalign). |
| 3750 */ |
| 3751 |
| 3752 /* Malloc using mmap */ |
| 3753 static void* mmap_alloc(mstate m, size_t nb) { |
| 3754 size_t mmsize = mmap_align(nb + SIX_SIZE_T_SIZES + CHUNK_ALIGN_MASK); |
| 3755 if (m->footprint_limit != 0) { |
| 3756 size_t fp = m->footprint + mmsize; |
| 3757 if (fp <= m->footprint || fp > m->footprint_limit) |
| 3758 return 0; |
| 3759 } |
| 3760 if (mmsize > nb) { /* Check for wrap around 0 */ |
| 3761 const int ashmem_fd = create_ashmem(mmsize); |
| 3762 if (ashmem_fd < 0) |
| 3763 return 0; |
| 3764 char* mm = (char*)(mmap(NULL, mmsize, MMAP_PROT, MMAP_FLAGS, ashmem_fd, 0)); |
| 3765 if (mm != CMFAIL) { |
| 3766 size_t offset = align_offset(chunk2mem(mm)); |
| 3767 size_t psize = mmsize - offset - MMAP_FOOT_PAD; |
| 3768 mchunkptr p = (mchunkptr)(mm + offset); |
| 3769 p->set_prev_foot(offset); |
| 3770 p->set_ashmem_fd(ashmem_fd); |
| 3771 p->head = psize; |
| 3772 mark_inuse_foot(m, p, psize); |
| 3773 chunk_plus_offset(p, psize)->head = FENCEPOST_HEAD; |
| 3774 chunk_plus_offset(p, psize+SIZE_T_SIZE)->head = 0; |
| 3775 |
| 3776 if (m->least_addr == 0 || mm < m->least_addr) |
| 3777 m->least_addr = mm; |
| 3778 if ((m->footprint += mmsize) > m->max_footprint) |
| 3779 m->max_footprint = m->footprint; |
| 3780 assert(is_aligned(chunk2mem(p))); |
| 3781 check_mmapped_chunk(m, p); |
| 3782 return chunk2mem(p); |
| 3783 } |
| 3784 } |
| 3785 return 0; |
| 3786 } |
| 3787 |
| 3788 /* Realloc using mmap */ |
| 3789 static mchunkptr mmap_resize(mstate m, mchunkptr oldp, size_t nb, int flags) { |
| 3790 size_t oldsize = chunksize(oldp); |
| 3791 (void)flags; /* placate people compiling -Wunused */ |
| 3792 if (is_small(nb)) /* Can't shrink mmap regions below small size */ |
| 3793 return 0; |
| 3794 /* Keep old chunk if big enough but not too big */ |
| 3795 if (oldsize >= nb + SIZE_T_SIZE && |
| 3796 (oldsize - nb) <= (mparams.granularity << 1)) |
| 3797 return oldp; |
| 3798 else { |
| 3799 size_t offset = oldp->prev_foot(); |
| 3800 size_t oldmmsize = oldsize + offset + MMAP_FOOT_PAD; |
| 3801 size_t newmmsize = mmap_align(nb + SIX_SIZE_T_SIZES + CHUNK_ALIGN_MASK); |
| 3802 char* cp = (char*)CALL_MREMAP((char*)oldp - offset, |
| 3803 oldmmsize, newmmsize, flags); |
| 3804 if (cp != CMFAIL) { |
| 3805 mchunkptr newp = (mchunkptr)(cp + offset); |
| 3806 size_t psize = newmmsize - offset - MMAP_FOOT_PAD; |
| 3807 newp->head = psize; |
| 3808 mark_inuse_foot(m, newp, psize); |
| 3809 chunk_plus_offset(newp, psize)->head = FENCEPOST_HEAD; |
| 3810 chunk_plus_offset(newp, psize+SIZE_T_SIZE)->head = 0; |
| 3811 |
| 3812 if (cp < m->least_addr) |
| 3813 m->least_addr = cp; |
| 3814 if ((m->footprint += newmmsize - oldmmsize) > m->max_footprint) |
| 3815 m->max_footprint = m->footprint; |
| 3816 check_mmapped_chunk(m, newp); |
| 3817 return newp; |
| 3818 } |
| 3819 } |
| 3820 return 0; |
| 3821 } |
| 3822 |
| 3823 |
| 3824 /* -------------------------- mspace management -------------------------- */ |
| 3825 |
| 3826 /* Initialize top chunk and its size */ |
| 3827 static void init_top(mstate m, mchunkptr p, size_t psize) { |
| 3828 /* Ensure alignment */ |
| 3829 size_t offset = align_offset(chunk2mem(p)); |
| 3830 p = (mchunkptr)((char*)p + offset); |
| 3831 psize -= offset; |
| 3832 |
| 3833 m->top = p; |
| 3834 m->topsize = psize; |
| 3835 p->head = psize | PINUSE_BIT; |
| 3836 /* set size of fake trailing chunk holding overhead space only once */ |
| 3837 chunk_plus_offset(p, psize)->head = TOP_FOOT_SIZE; |
| 3838 m->trim_check = mparams.trim_threshold; /* reset on each update */ |
| 3839 } |
| 3840 |
| 3841 /* Initialize bins for a new mstate that is otherwise zeroed out */ |
| 3842 static void init_bins(mstate m) { |
| 3843 /* Establish circular links for smallbins */ |
| 3844 bindex_t i; |
| 3845 for (i = 0; i < NSMALLBINS; ++i) { |
| 3846 sbinptr bin = smallbin_at(m,i); |
| 3847 bin->fd = bin->bk = bin; |
| 3848 } |
| 3849 } |
| 3850 |
| 3851 #if PROCEED_ON_ERROR |
| 3852 |
| 3853 /* default corruption action */ |
| 3854 static void reset_on_error(mstate m) { |
| 3855 int i; |
| 3856 ++malloc_corruption_error_count; |
| 3857 /* Reinitialize fields to forget about all memory */ |
| 3858 m->smallmap = m->treemap = 0; |
| 3859 m->dvsize = m->topsize = 0; |
| 3860 m->seg.base = 0; |
| 3861 m->seg.size = 0; |
| 3862 m->seg.next = 0; |
| 3863 m->top = m->dv = 0; |
| 3864 for (i = 0; i < NTREEBINS; ++i) |
| 3865 *treebin_at(m, i) = 0; |
| 3866 init_bins(m); |
| 3867 } |
| 3868 #endif /* PROCEED_ON_ERROR */ |
| 3869 |
| 3870 /* Allocate chunk and prepend remainder with chunk in successor base. */ |
| 3871 static void* prepend_alloc(mstate m, char* newbase, char* oldbase, |
| 3872 size_t nb) { |
| 3873 mchunkptr p = align_as_chunk(newbase); |
| 3874 mchunkptr oldfirst = align_as_chunk(oldbase); |
| 3875 size_t psize = (char*)oldfirst - (char*)p; |
| 3876 mchunkptr q = chunk_plus_offset(p, nb); |
| 3877 size_t qsize = psize - nb; |
| 3878 set_size_and_pinuse_of_inuse_chunk(m, p, nb); |
| 3879 |
| 3880 assert((char*)oldfirst > (char*)q); |
| 3881 assert(pinuse(oldfirst)); |
| 3882 assert(qsize >= MIN_CHUNK_SIZE); |
| 3883 |
| 3884 /* consolidate remainder with first chunk of old base */ |
| 3885 if (oldfirst == m->top) { |
| 3886 size_t tsize = m->topsize += qsize; |
| 3887 m->top = q; |
| 3888 q->head = tsize | PINUSE_BIT; |
| 3889 check_top_chunk(m, q); |
| 3890 } |
| 3891 else if (oldfirst == m->dv) { |
| 3892 size_t dsize = m->dvsize += qsize; |
| 3893 m->dv = q; |
| 3894 set_size_and_pinuse_of_free_chunk(q, dsize); |
| 3895 } |
| 3896 else { |
| 3897 if (!is_inuse(oldfirst)) { |
| 3898 size_t nsize = chunksize(oldfirst); |
| 3899 unlink_chunk(m, oldfirst, nsize); |
| 3900 oldfirst = chunk_plus_offset(oldfirst, nsize); |
| 3901 qsize += nsize; |
| 3902 } |
| 3903 set_free_with_pinuse(q, qsize, oldfirst); |
| 3904 insert_chunk(m, q, qsize); |
| 3905 check_free_chunk(m, q); |
| 3906 } |
| 3907 |
| 3908 check_malloced_chunk(m, chunk2mem(p), nb); |
| 3909 return chunk2mem(p); |
| 3910 } |
| 3911 |
| 3912 /* Add a segment to hold a new noncontiguous region */ |
| 3913 static void add_segment(mstate m, char* tbase, size_t tsize, flag_t mmapped) { |
| 3914 /* Determine locations and sizes of segment, fenceposts, old top */ |
| 3915 char* old_top = (char*)m->top; |
| 3916 msegmentptr oldsp = segment_holding(m, old_top); |
| 3917 char* old_end = oldsp->base + oldsp->size; |
| 3918 size_t ssize = pad_request(sizeof(struct malloc_segment)); |
| 3919 char* rawsp = old_end - (ssize + FOUR_SIZE_T_SIZES + CHUNK_ALIGN_MASK); |
| 3920 size_t offset = align_offset(chunk2mem(rawsp)); |
| 3921 char* asp = rawsp + offset; |
| 3922 char* csp = (asp < (old_top + MIN_CHUNK_SIZE))? old_top : asp; |
| 3923 mchunkptr sp = (mchunkptr)csp; |
| 3924 msegmentptr ss = (msegmentptr)(chunk2mem(sp)); |
| 3925 mchunkptr tnext = chunk_plus_offset(sp, ssize); |
| 3926 mchunkptr p = tnext; |
| 3927 int nfences = 0; |
| 3928 |
| 3929 /* reset top to new space */ |
| 3930 init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE); |
| 3931 |
| 3932 /* Set up segment record */ |
| 3933 assert(is_aligned(ss)); |
| 3934 set_size_and_pinuse_of_inuse_chunk(m, sp, ssize); |
| 3935 *ss = m->seg; /* Push current record */ |
| 3936 m->seg.base = tbase; |
| 3937 m->seg.size = tsize; |
| 3938 m->seg.set_sflags(mmapped); |
| 3939 m->seg.next = ss; |
| 3940 |
| 3941 /* Insert trailing fenceposts */ |
| 3942 for (;;) { |
| 3943 mchunkptr nextp = chunk_plus_offset(p, SIZE_T_SIZE); |
| 3944 p->head = FENCEPOST_HEAD; |
| 3945 ++nfences; |
| 3946 if ((char*)(&(nextp->head)) < old_end) |
| 3947 p = nextp; |
| 3948 else |
| 3949 break; |
| 3950 } |
| 3951 assert(nfences >= 2); |
| 3952 |
| 3953 /* Insert the rest of old top into a bin as an ordinary free chunk */ |
| 3954 if (csp != old_top) { |
| 3955 mchunkptr q = (mchunkptr)old_top; |
| 3956 size_t psize = csp - old_top; |
| 3957 mchunkptr tn = chunk_plus_offset(q, psize); |
| 3958 set_free_with_pinuse(q, psize, tn); |
| 3959 insert_chunk(m, q, psize); |
| 3960 } |
| 3961 |
| 3962 check_top_chunk(m, m->top); |
| 3963 } |
| 3964 |
| 3965 /* -------------------------- System allocation -------------------------- */ |
| 3966 |
| 3967 /* Get memory from system using MORECORE or MMAP */ |
| 3968 static void* sys_alloc(mstate m, size_t nb) { |
| 3969 char* tbase = CMFAIL; |
| 3970 size_t tsize = 0; |
| 3971 flag_t mmap_flag = 0; |
| 3972 size_t asize; /* allocation size */ |
| 3973 |
| 3974 ensure_initialization(); |
| 3975 |
| 3976 /* Directly map large chunks, but only if already initialized */ |
| 3977 if (use_mmap(m) && nb >= mparams.mmap_threshold && m->topsize != 0) { |
| 3978 void* mem = mmap_alloc(m, nb); |
| 3979 if (mem != 0) |
| 3980 return mem; |
| 3981 } |
| 3982 |
| 3983 asize = granularity_align(nb + SYS_ALLOC_PADDING); |
| 3984 if (asize <= nb) { |
| 3985 /* BEGIN android-added: set errno */ |
| 3986 MALLOC_FAILURE_ACTION; |
| 3987 /* END android-added */ |
| 3988 return 0; /* wraparound */ |
| 3989 } |
| 3990 if (m->footprint_limit != 0) { |
| 3991 size_t fp = m->footprint + asize; |
| 3992 if (fp <= m->footprint || fp > m->footprint_limit) { |
| 3993 /* BEGIN android-added: set errno */ |
| 3994 MALLOC_FAILURE_ACTION; |
| 3995 /* END android-added */ |
| 3996 return 0; |
| 3997 } |
| 3998 } |
| 3999 |
| 4000 /* |
| 4001 Try getting memory in any of three ways (in most-preferred to |
| 4002 least-preferred order): |
| 4003 1. A call to MORECORE that can normally contiguously extend memory. |
| 4004 (disabled if not MORECORE_CONTIGUOUS or not HAVE_MORECORE or |
| 4005      main space is mmapped or a previous contiguous call failed) |
| 4006 2. A call to MMAP new space (disabled if not HAVE_MMAP). |
| 4007 Note that under the default settings, if MORECORE is unable to |
| 4008 fulfill a request, and HAVE_MMAP is true, then mmap is |
| 4009 used as a noncontiguous system allocator. This is a useful backup |
| 4010 strategy for systems with holes in address spaces -- in this case |
| 4011 sbrk cannot contiguously expand the heap, but mmap may be able to |
| 4012 find space. |
| 4013 3. A call to MORECORE that cannot usually contiguously extend memory. |
| 4014 (disabled if not HAVE_MORECORE) |
| 4015 |
| 4016 In all cases, we need to request enough bytes from system to ensure |
| 4017 we can malloc nb bytes upon success, so pad with enough space for |
| 4018 top_foot, plus alignment-pad to make sure we don't lose bytes if |
| 4019 not on boundary, and round this up to a granularity unit. |
| 4020 */ |
| 4021 |
| 4022 if (MORECORE_CONTIGUOUS && !use_noncontiguous(m)) { |
| 4023 char* br = CMFAIL; |
| 4024 size_t ssize = asize; /* sbrk call size */ |
| 4025 msegmentptr ss = (m->top == 0)? 0 : segment_holding(m, (char*)m->top); |
| 4026 ACQUIRE_MALLOC_GLOBAL_LOCK(); |
| 4027 |
| 4028 if (ss == 0) { /* First time through or recovery */ |
| 4029 char* base = (char*)CALL_MORECORE(0); |
| 4030 if (base != CMFAIL) { |
| 4031 size_t fp; |
| 4032 /* Adjust to end on a page boundary */ |
| 4033 if (!is_page_aligned(base)) |
| 4034 ssize += (page_align((size_t)base) - (size_t)base); |
| 4035 fp = m->footprint + ssize; /* recheck limits */ |
| 4036 if (ssize > nb && ssize < HALF_MAX_SIZE_T && |
| 4037 (m->footprint_limit == 0 || |
| 4038 (fp > m->footprint && fp <= m->footprint_limit)) && |
| 4039 (br = (char*)(CALL_MORECORE(ssize))) == base) { |
| 4040 tbase = base; |
| 4041 tsize = ssize; |
| 4042 } |
| 4043 } |
| 4044 } |
| 4045 else { |
| 4046 /* Subtract out existing available top space from MORECORE request. */ |
| 4047 ssize = granularity_align(nb - m->topsize + SYS_ALLOC_PADDING); |
| 4048 /* Use mem here only if it did continuously extend old space */ |
| 4049 if (ssize < HALF_MAX_SIZE_T && |
| 4050 (br = (char*)(CALL_MORECORE(ssize))) == ss->base+ss->size) { |
| 4051 tbase = br; |
| 4052 tsize = ssize; |
| 4053 } |
| 4054 } |
| 4055 |
| 4056 if (tbase == CMFAIL) { /* Cope with partial failure */ |
| 4057 if (br != CMFAIL) { /* Try to use/extend the space we did get */ |
| 4058 if (ssize < HALF_MAX_SIZE_T && |
| 4059 ssize < nb + SYS_ALLOC_PADDING) { |
| 4060 size_t esize = granularity_align(nb + SYS_ALLOC_PADDING - ssize); |
| 4061 if (esize < HALF_MAX_SIZE_T) { |
| 4062 char* end = (char*)CALL_MORECORE(esize); |
| 4063 if (end != CMFAIL) |
| 4064 ssize += esize; |
| 4065 else { /* Can't use; try to release */ |
| 4066 (void) CALL_MORECORE(-ssize); |
| 4067 br = CMFAIL; |
| 4068 } |
| 4069 } |
| 4070 } |
| 4071 } |
| 4072 if (br != CMFAIL) { /* Use the space we did get */ |
| 4073 tbase = br; |
| 4074 tsize = ssize; |
| 4075 } |
| 4076 else |
| 4077 disable_contiguous(m); /* Don't try contiguous path in the future */ |
| 4078 } |
| 4079 |
| 4080 RELEASE_MALLOC_GLOBAL_LOCK(); |
| 4081 } |
| 4082 |
| 4083 int ashmem_fd = -1; |
| 4084 if (HAVE_MMAP && tbase == CMFAIL) { /* Try MMAP */ |
| 4085 ashmem_fd = create_ashmem(asize); |
| 4086 if (ashmem_fd > 0) { |
| 4087 char* mp = (char*)mmap(NULL, asize, MMAP_PROT, MMAP_FLAGS, ashmem_fd, 0); |
| 4088 if (mp != CMFAIL) { |
| 4089 tbase = mp; |
| 4090 tsize = asize; |
| 4091 mmap_flag = USE_MMAP_BIT; |
| 4092 } |
| 4093 } |
| 4094 } |
| 4095 |
| 4096 if (HAVE_MORECORE && tbase == CMFAIL) { /* Try noncontiguous MORECORE */ |
| 4097 if (asize < HALF_MAX_SIZE_T) { |
| 4098 char* br = CMFAIL; |
| 4099 char* end = CMFAIL; |
| 4100 ACQUIRE_MALLOC_GLOBAL_LOCK(); |
| 4101 br = (char*)(CALL_MORECORE(asize)); |
| 4102 end = (char*)(CALL_MORECORE(0)); |
| 4103 RELEASE_MALLOC_GLOBAL_LOCK(); |
| 4104 if (br != CMFAIL && end != CMFAIL && br < end) { |
| 4105 size_t ssize = end - br; |
| 4106 if (ssize > nb + TOP_FOOT_SIZE) { |
| 4107 tbase = br; |
| 4108 tsize = ssize; |
| 4109 } |
| 4110 } |
| 4111 } |
| 4112 } |
| 4113 |
| 4114 if (tbase != CMFAIL) { |
| 4115 |
| 4116 if ((m->footprint += tsize) > m->max_footprint) |
| 4117 m->max_footprint = m->footprint; |
| 4118 |
| 4119 if (!is_initialized(m)) { /* first-time initialization */ |
| 4120 if (m->least_addr == 0 || tbase < m->least_addr) |
| 4121 m->least_addr = tbase; |
| 4122 m->seg.base = tbase; |
| 4123 assert(ashmem_fd != -1); |
| 4124 m->seg.set_ashmem_fd(ashmem_fd); |
| 4125 m->seg.size = tsize; |
| 4126 m->seg.set_sflags(mmap_flag); |
| 4127 m->magic = mparams.magic; |
| 4128 m->release_checks = MAX_RELEASE_CHECK_RATE; |
| 4129 init_bins(m); |
| 4130 #if !ONLY_MSPACES |
| 4131 if (is_global(m)) |
| 4132 init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE); |
| 4133 else |
| 4134 #endif |
| 4135 { |
| 4136 /* Offset top by embedded malloc_state */ |
| 4137 mchunkptr mn = next_chunk(mem2chunk(m)); |
| 4138 init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) -TOP_FOOT_SIZE); |
| 4139 } |
| 4140 } |
| 4141 |
| 4142 else { |
| 4143 /* Try to merge with an existing segment */ |
| 4144 msegmentptr sp = &m->seg; |
| 4145 /* Only consider most recent segment if traversal suppressed */ |
| 4146 while (sp != 0 && tbase != sp->base + sp->size) |
| 4147 sp = (NO_SEGMENT_TRAVERSAL) ? 0 : sp->next; |
| 4148 if (sp != 0 && |
| 4149 !is_extern_segment(sp) && |
| 4150 (sp->sflags() & USE_MMAP_BIT) == mmap_flag && |
| 4151 segment_holds(sp, m->top)) { /* append */ |
| 4152 sp->size += tsize; |
| 4153 init_top(m, m->top, m->topsize + tsize); |
| 4154 } |
| 4155 else { |
| 4156 if (tbase < m->least_addr) |
| 4157 m->least_addr = tbase; |
| 4158 sp = &m->seg; |
| 4159 while (sp != 0 && sp->base != tbase + tsize) |
| 4160 sp = (NO_SEGMENT_TRAVERSAL) ? 0 : sp->next; |
| 4161 if (sp != 0 && |
| 4162 !is_extern_segment(sp) && |
| 4163 (sp->sflags() & USE_MMAP_BIT) == mmap_flag) { |
| 4164 char* oldbase = sp->base; |
| 4165 sp->base = tbase; |
| 4166 sp->size += tsize; |
| 4167 return prepend_alloc(m, tbase, oldbase, nb); |
| 4168 } |
| 4169 else |
| 4170 add_segment(m, tbase, tsize, mmap_flag); |
| 4171 } |
| 4172 } |
| 4173 |
| 4174 if (nb < m->topsize) { /* Allocate from new or extended top space */ |
| 4175 size_t rsize = m->topsize -= nb; |
| 4176 mchunkptr p = m->top; |
| 4177 mchunkptr r = m->top = chunk_plus_offset(p, nb); |
| 4178 r->head = rsize | PINUSE_BIT; |
| 4179 set_size_and_pinuse_of_inuse_chunk(m, p, nb); |
| 4180 check_top_chunk(m, m->top); |
| 4181 check_malloced_chunk(m, chunk2mem(p), nb); |
| 4182 return chunk2mem(p); |
| 4183 } |
| 4184 } |
| 4185 |
| 4186 MALLOC_FAILURE_ACTION; |
| 4187 return 0; |
| 4188 } |
| 4189 |
| 4190 /* ----------------------- system deallocation -------------------------- */ |
| 4191 |
| 4192 /* Unmap and unlink any mmapped segments that don't contain used chunks */ |
| 4193 static size_t release_unused_segments(mstate m) { |
| 4194 size_t released = 0; |
| 4195 int nsegs = 0; |
| 4196 msegmentptr pred = &m->seg; |
| 4197 msegmentptr sp = pred->next; |
| 4198 while (sp != 0) { |
| 4199 char* base = sp->base; |
| 4200 size_t size = sp->size; |
| 4201 msegmentptr next = sp->next; |
| 4202 ++nsegs; |
| 4203 if (is_mmapped_segment(sp) && !is_extern_segment(sp)) { |
| 4204 mchunkptr p = align_as_chunk(base); |
| 4205 size_t psize = chunksize(p); |
| 4206 /* Can unmap if first chunk holds entire segment and not pinned */ |
| 4207 if (!is_inuse(p) && (char*)p + psize >= base + size - TOP_FOOT_SIZE) { |
| 4208 tchunkptr tp = (tchunkptr)p; |
| 4209 assert(segment_holds(sp, (char*)sp)); |
| 4210 if (p == m->dv) { |
| 4211 m->dv = 0; |
| 4212 m->dvsize = 0; |
| 4213 } |
| 4214 else { |
| 4215 unlink_large_chunk(m, tp); |
| 4216 } |
| 4217 const int ashmem_fd = sp->ashmem_fd(); |
| 4218 if (CALL_MUNMAP(base, size) == 0) { |
| 4219 close_ashmem(ashmem_fd); |
| 4220 released += size; |
| 4221 m->footprint -= size; |
| 4222 /* unlink obsoleted record */ |
| 4223 sp = pred; |
| 4224 sp->next = next; |
| 4225 } |
| 4226 else { /* back out if cannot unmap */ |
| 4227 insert_large_chunk(m, tp, psize); |
| 4228 } |
| 4229 } |
| 4230 } |
| 4231 if (NO_SEGMENT_TRAVERSAL) /* scan only first segment */ |
| 4232 break; |
| 4233 pred = sp; |
| 4234 sp = next; |
| 4235 } |
| 4236 /* Reset check counter */ |
| 4237 m->release_checks = (((size_t) nsegs > (size_t) MAX_RELEASE_CHECK_RATE)? |
| 4238 (size_t) nsegs : (size_t) MAX_RELEASE_CHECK_RATE); |
| 4239 return released; |
| 4240 } |
| 4241 |
| 4242 static int sys_trim(mstate m, size_t pad) { |
| 4243 size_t released = 0; |
| 4244 ensure_initialization(); |
| 4245 if (pad < MAX_REQUEST && is_initialized(m)) { |
| 4246 pad += TOP_FOOT_SIZE; /* ensure enough room for segment overhead */ |
| 4247 |
| 4248 if (m->topsize > pad) { |
| 4249 /* Shrink top space in granularity-size units, keeping at least one */ |
| 4250 size_t unit = mparams.granularity; |
| 4251 size_t extra = ((m->topsize - pad + (unit - SIZE_T_ONE)) / unit - |
| 4252 SIZE_T_ONE) * unit; |
| 4253 msegmentptr sp = segment_holding(m, (char*)m->top); |
| 4254 |
| 4255 if (!is_extern_segment(sp)) { |
| 4256 if (is_mmapped_segment(sp)) { |
| 4257 if (HAVE_MMAP && |
| 4258 sp->size >= extra && |
| 4259 !has_segment_link(m, sp)) { /* can't shrink if pinned */ |
| 4260 size_t newsize = sp->size - extra; |
| 4261 (void)newsize; /* placate people compiling -Wunused-variable */ |
| 4262 /* Prefer mremap, fall back to munmap */ |
| 4263 if ((CALL_MREMAP(sp->base, sp->size, newsize, 0) != MFAIL) || |
| 4264 (CALL_MUNMAP(sp->base + newsize, extra) == 0)) { |
| 4265 released = extra; |
| 4266 if (ashmem_unpin_region(sp->ashmem_fd(), newsize, extra)) |
| 4267 android_log_error("Could not unpin ashmem region"); |
| 4268 } |
| 4269 } |
| 4270 } |
| 4271 else if (HAVE_MORECORE) { |
| 4272 if (extra >= HALF_MAX_SIZE_T) /* Avoid wrapping negative */ |
| 4273 extra = (HALF_MAX_SIZE_T) + SIZE_T_ONE - unit; |
| 4274 ACQUIRE_MALLOC_GLOBAL_LOCK(); |
| 4275 { |
| 4276 /* Make sure end of memory is where we last set it. */ |
| 4277 char* old_br = (char*)(CALL_MORECORE(0)); |
| 4278 if (old_br == sp->base + sp->size) { |
| 4279 char* rel_br = (char*)(CALL_MORECORE(-extra)); |
| 4280 char* new_br = (char*)(CALL_MORECORE(0)); |
| 4281 if (rel_br != CMFAIL && new_br < old_br) |
| 4282 released = old_br - new_br; |
| 4283 } |
| 4284 } |
| 4285 RELEASE_MALLOC_GLOBAL_LOCK(); |
| 4286 } |
| 4287 } |
| 4288 |
| 4289 if (released != 0) { |
| 4290 sp->size -= released; |
| 4291 m->footprint -= released; |
| 4292 init_top(m, m->top, m->topsize - released); |
| 4293 check_top_chunk(m, m->top); |
| 4294 } |
| 4295 } |
| 4296 |
| 4297 /* Unmap any unused mmapped segments */ |
| 4298 if (HAVE_MMAP) |
| 4299 released += release_unused_segments(m); |
| 4300 |
| 4301 /* On failure, disable autotrim to avoid repeated failed future calls */ |
| 4302 if (released == 0 && m->topsize > m->trim_check) |
| 4303 m->trim_check = MAX_SIZE_T; |
| 4304 } |
| 4305 |
| 4306 return (released != 0)? 1 : 0; |
| 4307 } |
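| 
| /*
|   Usage sketch (hypothetical driver code; dlmalloc_trim and
|   dlmalloc_footprint are the exported wrappers defined below):
| 
|     void shrink_heap_example(void) {
|       size_t before = dlmalloc_footprint();
|       dlmalloc_trim(0);                   /* keep no extra pad in top */
|       size_t after = dlmalloc_footprint();
|       (void)before; (void)after;          /* e.g. log the difference */
|     }
| */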
| 4308 |
| 4309 /* Consolidate and bin a chunk. Differs from exported versions |
| 4310 of free mainly in that the chunk need not be marked as inuse. |
| 4311 */ |
| 4312 static void dispose_chunk(mstate m, mchunkptr p, size_t psize) { |
| 4313 mchunkptr next = chunk_plus_offset(p, psize); |
| 4314 if (!pinuse(p)) { |
| 4315 mchunkptr prev; |
| 4316 size_t prevsize = p->prev_foot(); |
| 4317 if (is_mmapped(p)) { |
| 4318 psize += prevsize + MMAP_FOOT_PAD; |
| 4319 mchunkptr const chunk = (mchunkptr) ((char*) p - prevsize); |
| 4320 const int ashmem_fd = chunk->ashmem_fd(); |
| 4321 if (malloc_chunk::unmap(chunk, psize)) |
| 4322 m->footprint -= psize; |
| 4323 return; |
| 4324 } |
| 4325 prev = chunk_minus_offset(p, prevsize); |
| 4326 psize += prevsize; |
| 4327 p = prev; |
| 4328 if (RTCHECK(ok_address(m, prev))) { /* consolidate backward */ |
| 4329 if (p != m->dv) { |
| 4330 unlink_chunk(m, p, prevsize); |
| 4331 } |
| 4332 else if ((next->head & INUSE_BITS) == INUSE_BITS) { |
| 4333 m->dvsize = psize; |
| 4334 set_free_with_pinuse(p, psize, next); |
| 4335 return; |
| 4336 } |
| 4337 } |
| 4338 else { |
| 4339 CORRUPTION_ERROR_ACTION(m); |
| 4340 return; |
| 4341 } |
| 4342 } |
| 4343 if (RTCHECK(ok_address(m, next))) { |
| 4344 if (!cinuse(next)) { /* consolidate forward */ |
| 4345 if (next == m->top) { |
| 4346 size_t tsize = m->topsize += psize; |
| 4347 m->top = p; |
| 4348 p->head = tsize | PINUSE_BIT; |
| 4349 if (p == m->dv) { |
| 4350 m->dv = 0; |
| 4351 m->dvsize = 0; |
| 4352 } |
| 4353 return; |
| 4354 } |
| 4355 else if (next == m->dv) { |
| 4356 size_t dsize = m->dvsize += psize; |
| 4357 m->dv = p; |
| 4358 set_size_and_pinuse_of_free_chunk(p, dsize); |
| 4359 return; |
| 4360 } |
| 4361 else { |
| 4362 size_t nsize = chunksize(next); |
| 4363 psize += nsize; |
| 4364 unlink_chunk(m, next, nsize); |
| 4365 set_size_and_pinuse_of_free_chunk(p, psize); |
| 4366 if (p == m->dv) { |
| 4367 m->dvsize = psize; |
| 4368 return; |
| 4369 } |
| 4370 } |
| 4371 } |
| 4372 else { |
| 4373 set_free_with_pinuse(p, psize, next); |
| 4374 } |
| 4375 insert_chunk(m, p, psize); |
| 4376 } |
| 4377 else { |
| 4378 CORRUPTION_ERROR_ACTION(m); |
| 4379 } |
| 4380 } |
| 4381 |
| 4382 /* ---------------------------- malloc --------------------------- */ |
| 4383 |
| 4384 /* allocate a large request from the best fitting chunk in a treebin */ |
| 4385 static void* tmalloc_large(mstate m, size_t nb) { |
| 4386 tchunkptr v = 0; |
| 4387 size_t rsize = -nb; /* Unsigned negation */ |
| 4388 tchunkptr t; |
| 4389 bindex_t idx; |
| 4390 compute_tree_index(nb, idx); |
| 4391 if ((t = *treebin_at(m, idx)) != 0) { |
| 4392 /* Traverse tree for this bin looking for node with size == nb */ |
| 4393 size_t sizebits = nb << leftshift_for_tree_index(idx); |
| 4394 tchunkptr rst = 0; /* The deepest untaken right subtree */ |
| 4395 for (;;) { |
| 4396 tchunkptr rt; |
| 4397 size_t trem = chunksize(t) - nb; |
| 4398 if (trem < rsize) { |
| 4399 v = t; |
| 4400 if ((rsize = trem) == 0) |
| 4401 break; |
| 4402 } |
| 4403 rt = t->child[1]; |
| 4404 t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1]; |
| 4405 if (rt != 0 && rt != t) |
| 4406 rst = rt; |
| 4407 if (t == 0) { |
| 4408 t = rst; /* set t to least subtree holding sizes > nb */ |
| 4409 break; |
| 4410 } |
| 4411 sizebits <<= 1; |
| 4412 } |
| 4413 } |
| 4414 if (t == 0 && v == 0) { /* set t to root of next non-empty treebin */ |
| 4415 binmap_t leftbits = left_bits(idx2bit(idx)) & m->treemap; |
| 4416 if (leftbits != 0) { |
| 4417 bindex_t i; |
| 4418 binmap_t leastbit = least_bit(leftbits); |
| 4419 compute_bit2idx(leastbit, i); |
| 4420 t = *treebin_at(m, i); |
| 4421 } |
| 4422 } |
| 4423 |
| 4424 while (t != 0) { /* find smallest of tree or subtree */ |
| 4425 size_t trem = chunksize(t) - nb; |
| 4426 if (trem < rsize) { |
| 4427 rsize = trem; |
| 4428 v = t; |
| 4429 } |
| 4430 t = leftmost_child(t); |
| 4431 } |
| 4432 |
| 4433 /* If dv is a better fit, return 0 so malloc will use it */ |
| 4434 if (v != 0 && rsize < (size_t)(m->dvsize - nb)) { |
| 4435 if (RTCHECK(ok_address(m, v))) { /* split */ |
| 4436 mchunkptr r = chunk_plus_offset(v, nb); |
| 4437 assert(chunksize(v) == rsize + nb); |
| 4438 if (RTCHECK(ok_next(v, r))) { |
| 4439 unlink_large_chunk(m, v); |
| 4440 if (rsize < MIN_CHUNK_SIZE) |
| 4441 set_inuse_and_pinuse(m, v, (rsize + nb)); |
| 4442 else { |
| 4443 set_size_and_pinuse_of_inuse_chunk(m, v, nb); |
| 4444 set_size_and_pinuse_of_free_chunk(r, rsize); |
| 4445 insert_chunk(m, r, rsize); |
| 4446 } |
| 4447 return chunk2mem(v); |
| 4448 } |
| 4449 } |
| 4450 CORRUPTION_ERROR_ACTION(m); |
| 4451 } |
| 4452 return 0; |
| 4453 } |
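| 
| /*
|   Illustrative sketch (not this file's implementation) of the
|   bitwise-trie descent above: each level discriminates on the next
|   size bit, so the walk takes O(word-size) steps. This simplified
|   stand-in for malloc_tree_chunk omits the "deepest untaken right
|   subtree" (rst) fallback the real loop uses to recover the least
|   larger size when the bit path dead-ends.
| 
|     struct tnode { size_t size; struct tnode* child[2]; };
| 
|     static struct tnode* descend(struct tnode* t, size_t nb, int bit) {
|       struct tnode* best = 0;
|       size_t brem = (size_t)-1;
|       while (t != 0) {
|         if (t->size >= nb && t->size - nb < brem) {
|           brem = t->size - nb;
|           best = t;
|           if (brem == 0)
|             break;                        /* exact fit */
|         }
|         /* follow nb's bits; go leftmost once they are exhausted */
|         t = t->child[(bit >= 0) ? ((nb >> bit--) & 1) : 0];
|       }
|       return best;
|     }
| */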
| 4454 |
| 4455 /* allocate a small request from the best fitting chunk in a treebin */ |
| 4456 static void* tmalloc_small(mstate m, size_t nb) { |
| 4457 tchunkptr t, v; |
| 4458 size_t rsize; |
| 4459 bindex_t i; |
| 4460 binmap_t leastbit = least_bit(m->treemap); |
| 4461 compute_bit2idx(leastbit, i); |
| 4462 v = t = *treebin_at(m, i); |
| 4463 rsize = chunksize(t) - nb; |
| 4464 |
| 4465 while ((t = leftmost_child(t)) != 0) { |
| 4466 size_t trem = chunksize(t) - nb; |
| 4467 if (trem < rsize) { |
| 4468 rsize = trem; |
| 4469 v = t; |
| 4470 } |
| 4471 } |
| 4472 |
| 4473 if (RTCHECK(ok_address(m, v))) { |
| 4474 mchunkptr r = chunk_plus_offset(v, nb); |
| 4475 assert(chunksize(v) == rsize + nb); |
| 4476 if (RTCHECK(ok_next(v, r))) { |
| 4477 unlink_large_chunk(m, v); |
| 4478 if (rsize < MIN_CHUNK_SIZE) |
| 4479 set_inuse_and_pinuse(m, v, (rsize + nb)); |
| 4480 else { |
| 4481 set_size_and_pinuse_of_inuse_chunk(m, v, nb); |
| 4482 set_size_and_pinuse_of_free_chunk(r, rsize); |
| 4483 replace_dv(m, r, rsize); |
| 4484 } |
| 4485 return chunk2mem(v); |
| 4486 } |
| 4487 } |
| 4488 |
| 4489 CORRUPTION_ERROR_ACTION(m); |
| 4490 return 0; |
| 4491 } |
| 4492 |
| 4493 #if !ONLY_MSPACES |
| 4494 |
| 4495 void* dlmalloc(size_t bytes) { |
| 4496 /* |
| 4497 Basic algorithm: |
| 4498 If a small request (< 256 bytes minus per-chunk overhead): |
| 4499 1. If one exists, use a remainderless chunk in associated smallbin. |
| 4500 (Remainderless means that there are too few excess bytes to |
| 4501 represent as a chunk.) |
| 4502 2. If it is big enough, use the dv chunk, which is normally the |
| 4503 chunk adjacent to the one used for the most recent small request. |
| 4504 3. If one exists, split the smallest available chunk in a bin, |
| 4505 saving remainder in dv. |
| 4506 4. If it is big enough, use the top chunk. |
| 4507 5. If available, get memory from system and use it |
| 4508 Otherwise, for a large request: |
| 4509 1. Find the smallest available binned chunk that fits, and use it |
| 4510 if it is better fitting than dv chunk, splitting if necessary. |
| 4511 2. If better fitting than any binned chunk, use the dv chunk. |
| 4512 3. If it is big enough, use the top chunk. |
| 4513 4. If request size >= mmap threshold, try to directly mmap this chunk. |
| 4514 5. If available, get memory from system and use it |
| 4515 |
| 4516 The ugly goto's here ensure that postaction occurs along all paths. |
| 4517 */ |
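| 
|   /*
|     Sizing example (illustrative; assumes 4-byte size_t, 8-byte
|     alignment, no FOOTERS): a request of 20 bytes is padded to
|     nb = pad_request(20) = (20 + 4 + 7) & ~7 = 24, which maps to
|     smallbin index small_index(24) = 24 >> 3 = 3, so step 1 probes
|     bins 3 and 4 for a remainderless fit.
|   */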
| 4518 |
| 4519 #if USE_LOCKS |
| 4520 ensure_initialization(); /* initialize in sys_alloc if not using locks */ |
| 4521 #endif |
| 4522 |
| 4523 if (!PREACTION(gm)) { |
| 4524 void* mem; |
| 4525 size_t nb; |
| 4526 if (bytes <= MAX_SMALL_REQUEST) { |
| 4527 bindex_t idx; |
| 4528 binmap_t smallbits; |
| 4529 nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes); |
| 4530 idx = small_index(nb); |
| 4531 smallbits = gm->smallmap >> idx; |
| 4532 |
| 4533 if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */ |
| 4534 mchunkptr b, p; |
| 4535 idx += ~smallbits & 1; /* Uses next bin if idx empty */ |
| 4536 b = smallbin_at(gm, idx); |
| 4537 p = b->fd; |
| 4538 assert(chunksize(p) == small_index2size(idx)); |
| 4539 unlink_first_small_chunk(gm, b, p, idx); |
| 4540 set_inuse_and_pinuse(gm, p, small_index2size(idx)); |
| 4541 mem = chunk2mem(p); |
| 4542 check_malloced_chunk(gm, mem, nb); |
| 4543 goto postaction; |
| 4544 } |
| 4545 |
| 4546 else if (nb > gm->dvsize) { |
| 4547 if (smallbits != 0) { /* Use chunk in next nonempty smallbin */ |
| 4548 mchunkptr b, p, r; |
| 4549 size_t rsize; |
| 4550 bindex_t i; |
| 4551 binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx)); |
| 4552 binmap_t leastbit = least_bit(leftbits); |
| 4553 compute_bit2idx(leastbit, i); |
| 4554 b = smallbin_at(gm, i); |
| 4555 p = b->fd; |
| 4556 assert(chunksize(p) == small_index2size(i)); |
| 4557 unlink_first_small_chunk(gm, b, p, i); |
| 4558 rsize = small_index2size(i) - nb; |
| 4559 /* Fit here cannot be remainderless if 4byte sizes */ |
| 4560 if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE) |
| 4561 set_inuse_and_pinuse(gm, p, small_index2size(i)); |
| 4562 else { |
| 4563 set_size_and_pinuse_of_inuse_chunk(gm, p, nb); |
| 4564 r = chunk_plus_offset(p, nb); |
| 4565 set_size_and_pinuse_of_free_chunk(r, rsize); |
| 4566 replace_dv(gm, r, rsize); |
| 4567 } |
| 4568 mem = chunk2mem(p); |
| 4569 check_malloced_chunk(gm, mem, nb); |
| 4570 goto postaction; |
| 4571 } |
| 4572 |
| 4573 else if (gm->treemap != 0 && (mem = tmalloc_small(gm, nb)) != 0) { |
| 4574 check_malloced_chunk(gm, mem, nb); |
| 4575 goto postaction; |
| 4576 } |
| 4577 } |
| 4578 } |
| 4579 else if (bytes >= MAX_REQUEST) |
| 4580 nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */ |
| 4581 else { |
| 4582 nb = pad_request(bytes); |
| 4583 if (gm->treemap != 0 && (mem = tmalloc_large(gm, nb)) != 0) { |
| 4584 check_malloced_chunk(gm, mem, nb); |
| 4585 goto postaction; |
| 4586 } |
| 4587 } |
| 4588 |
| 4589 if (nb <= gm->dvsize) { |
| 4590 size_t rsize = gm->dvsize - nb; |
| 4591 mchunkptr p = gm->dv; |
| 4592 if (rsize >= MIN_CHUNK_SIZE) { /* split dv */ |
| 4593 mchunkptr r = gm->dv = chunk_plus_offset(p, nb); |
| 4594 gm->dvsize = rsize; |
| 4595 set_size_and_pinuse_of_free_chunk(r, rsize); |
| 4596 set_size_and_pinuse_of_inuse_chunk(gm, p, nb); |
| 4597 } |
| 4598 else { /* exhaust dv */ |
| 4599 size_t dvs = gm->dvsize; |
| 4600 gm->dvsize = 0; |
| 4601 gm->dv = 0; |
| 4602 set_inuse_and_pinuse(gm, p, dvs); |
| 4603 } |
| 4604 mem = chunk2mem(p); |
| 4605 check_malloced_chunk(gm, mem, nb); |
| 4606 goto postaction; |
| 4607 } |
| 4608 |
| 4609 else if (nb < gm->topsize) { /* Split top */ |
| 4610 size_t rsize = gm->topsize -= nb; |
| 4611 mchunkptr p = gm->top; |
| 4612 mchunkptr r = gm->top = chunk_plus_offset(p, nb); |
| 4613 r->head = rsize | PINUSE_BIT; |
| 4614 set_size_and_pinuse_of_inuse_chunk(gm, p, nb); |
| 4615 mem = chunk2mem(p); |
| 4616 check_top_chunk(gm, gm->top); |
| 4617 check_malloced_chunk(gm, mem, nb); |
| 4618 goto postaction; |
| 4619 } |
| 4620 |
| 4621 mem = sys_alloc(gm, nb); |
| 4622 |
| 4623 postaction: |
| 4624 POSTACTION(gm); |
| 4625 return mem; |
| 4626 } |
| 4627 |
| 4628 return 0; |
| 4629 } |
| 4630 |
| 4631 /* ---------------------------- free --------------------------- */ |
| 4632 |
| 4633 void dlfree(void* mem) { |
| 4634 /* |
| 4635     Consolidate freed chunks with preceding or succeeding bordering
| 4636 free chunks, if they exist, and then place in a bin. Intermixed |
| 4637 with special cases for top, dv, mmapped chunks, and usage errors. |
| 4638 */ |
| 4639 |
| 4640 if (mem != 0) { |
| 4641 mchunkptr p = mem2chunk(mem); |
| 4642 #if FOOTERS |
| 4643 mstate fm = get_mstate_for(p); |
| 4644 if (!ok_magic(fm)) { |
| 4645 USAGE_ERROR_ACTION(fm, p); |
| 4646 return; |
| 4647 } |
| 4648 #else /* FOOTERS */ |
| 4649 #define fm gm |
| 4650 #endif /* FOOTERS */ |
| 4651 if (!PREACTION(fm)) { |
| 4652 check_inuse_chunk(fm, p); |
| 4653 if (RTCHECK(ok_address(fm, p) && ok_inuse(p))) { |
| 4654 size_t psize = chunksize(p); |
| 4655 mchunkptr next = chunk_plus_offset(p, psize); |
| 4656 if (!pinuse(p)) { |
| 4657 size_t prevsize = p->prev_foot(); |
| 4658 if (is_mmapped(p)) { |
| 4659 psize += prevsize + MMAP_FOOT_PAD; |
| 4660 if (malloc_chunk::unmap((mchunkptr)((char*) p - prevsize), psize)) |
| 4661 fm->footprint -= psize; |
| 4662 goto postaction; |
| 4663 } |
| 4664 else { |
| 4665 mchunkptr prev = chunk_minus_offset(p, prevsize); |
| 4666 psize += prevsize; |
| 4667 p = prev; |
| 4668 if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */ |
| 4669 if (p != fm->dv) { |
| 4670 unlink_chunk(fm, p, prevsize); |
| 4671 } |
| 4672 else if ((next->head & INUSE_BITS) == INUSE_BITS) { |
| 4673 fm->dvsize = psize; |
| 4674 set_free_with_pinuse(p, psize, next); |
| 4675 goto postaction; |
| 4676 } |
| 4677 } |
| 4678 else |
| 4679 goto erroraction; |
| 4680 } |
| 4681 } |
| 4682 |
| 4683 if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) { |
| 4684 if (!cinuse(next)) { /* consolidate forward */ |
| 4685 if (next == fm->top) { |
| 4686 size_t tsize = fm->topsize += psize; |
| 4687 fm->top = p; |
| 4688 p->head = tsize | PINUSE_BIT; |
| 4689 if (p == fm->dv) { |
| 4690 fm->dv = 0; |
| 4691 fm->dvsize = 0; |
| 4692 } |
| 4693 if (should_trim(fm, tsize)) |
| 4694 sys_trim(fm, 0); |
| 4695 goto postaction; |
| 4696 } |
| 4697 else if (next == fm->dv) { |
| 4698 size_t dsize = fm->dvsize += psize; |
| 4699 fm->dv = p; |
| 4700 set_size_and_pinuse_of_free_chunk(p, dsize); |
| 4701 goto postaction; |
| 4702 } |
| 4703 else { |
| 4704 size_t nsize = chunksize(next); |
| 4705 psize += nsize; |
| 4706 unlink_chunk(fm, next, nsize); |
| 4707 set_size_and_pinuse_of_free_chunk(p, psize); |
| 4708 if (p == fm->dv) { |
| 4709 fm->dvsize = psize; |
| 4710 goto postaction; |
| 4711 } |
| 4712 } |
| 4713 } |
| 4714 else |
| 4715 set_free_with_pinuse(p, psize, next); |
| 4716 |
| 4717 if (is_small(psize)) { |
| 4718 insert_small_chunk(fm, p, psize); |
| 4719 check_free_chunk(fm, p); |
| 4720 } |
| 4721 else { |
| 4722 tchunkptr tp = (tchunkptr)p; |
| 4723 insert_large_chunk(fm, tp, psize); |
| 4724 check_free_chunk(fm, p); |
| 4725 if (--fm->release_checks == 0) |
| 4726 release_unused_segments(fm); |
| 4727 } |
| 4728 goto postaction; |
| 4729 } |
| 4730 } |
| 4731 erroraction: |
| 4732 USAGE_ERROR_ACTION(fm, p); |
| 4733 postaction: |
| 4734 POSTACTION(fm); |
| 4735 } |
| 4736 } |
| 4737 #if !FOOTERS |
| 4738 #undef fm |
| 4739 #endif /* FOOTERS */ |
| 4740 } |
| 4741 |
| 4742 void* dlcalloc(size_t n_elements, size_t elem_size) { |
| 4743 void* mem; |
| 4744 size_t req = 0; |
| 4745 if (n_elements != 0) { |
| 4746 req = n_elements * elem_size; |
| 4747 if (((n_elements | elem_size) & ~(size_t)0xffff) && |
| 4748 (req / n_elements != elem_size)) |
| 4749 req = MAX_SIZE_T; /* force downstream failure on overflow */ |
| 4750 } |
| 4751 mem = dlmalloc(req); |
| 4752 if (mem != 0 && calloc_must_clear(mem2chunk(mem))) |
| 4753 memset(mem, 0, req); |
| 4754 return mem; |
| 4755 } |
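| 
| /*
|   The guard above skips the division in the common case: when both
|   operands fit in 16 bits, the product is at most 0xffff * 0xffff =
|   0xfffe0001, which fits in any 32-bit-or-wider size_t. A standalone
|   equivalent (illustrative):
| 
|     static int mul_overflows(size_t n, size_t s) {
|       if (((n | s) & ~(size_t)0xffff) == 0)
|         return 0;                         /* both < 2^16: cannot wrap */
|       return n != 0 && (n * s) / n != s;  /* otherwise verify by division */
|     }
| */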
| 4756 |
| 4757 #endif /* !ONLY_MSPACES */ |
| 4758 |
| 4759 /* ------------ Internal support for realloc, memalign, etc -------------- */ |
| 4760 |
| 4761 /* Try to realloc; only in-place unless can_move true */ |
| 4762 static mchunkptr try_realloc_chunk(mstate m, mchunkptr p, size_t nb, |
| 4763 int can_move) { |
| 4764 mchunkptr newp = 0; |
| 4765 size_t oldsize = chunksize(p); |
| 4766 mchunkptr next = chunk_plus_offset(p, oldsize); |
| 4767 if (RTCHECK(ok_address(m, p) && ok_inuse(p) && |
| 4768 ok_next(p, next) && ok_pinuse(next))) { |
| 4769 if (is_mmapped(p)) { |
| 4770 newp = mmap_resize(m, p, nb, can_move); |
| 4771 } |
| 4772 else if (oldsize >= nb) { /* already big enough */ |
| 4773 size_t rsize = oldsize - nb; |
| 4774 if (rsize >= MIN_CHUNK_SIZE) { /* split off remainder */ |
| 4775 mchunkptr r = chunk_plus_offset(p, nb); |
| 4776 set_inuse(m, p, nb); |
| 4777 set_inuse(m, r, rsize); |
| 4778 dispose_chunk(m, r, rsize); |
| 4779 } |
| 4780 newp = p; |
| 4781 } |
| 4782 else if (next == m->top) { /* extend into top */ |
| 4783 if (oldsize + m->topsize > nb) { |
| 4784 size_t newsize = oldsize + m->topsize; |
| 4785 size_t newtopsize = newsize - nb; |
| 4786 mchunkptr newtop = chunk_plus_offset(p, nb); |
| 4787 set_inuse(m, p, nb); |
| 4788 newtop->head = newtopsize |PINUSE_BIT; |
| 4789 m->top = newtop; |
| 4790 m->topsize = newtopsize; |
| 4791 newp = p; |
| 4792 } |
| 4793 } |
| 4794 else if (next == m->dv) { /* extend into dv */ |
| 4795 size_t dvs = m->dvsize; |
| 4796 if (oldsize + dvs >= nb) { |
| 4797 size_t dsize = oldsize + dvs - nb; |
| 4798 if (dsize >= MIN_CHUNK_SIZE) { |
| 4799 mchunkptr r = chunk_plus_offset(p, nb); |
| 4800 mchunkptr n = chunk_plus_offset(r, dsize); |
| 4801 set_inuse(m, p, nb); |
| 4802 set_size_and_pinuse_of_free_chunk(r, dsize); |
| 4803 clear_pinuse(n); |
| 4804 m->dvsize = dsize; |
| 4805 m->dv = r; |
| 4806 } |
| 4807 else { /* exhaust dv */ |
| 4808 size_t newsize = oldsize + dvs; |
| 4809 set_inuse(m, p, newsize); |
| 4810 m->dvsize = 0; |
| 4811 m->dv = 0; |
| 4812 } |
| 4813 newp = p; |
| 4814 } |
| 4815 } |
| 4816 else if (!cinuse(next)) { /* extend into next free chunk */ |
| 4817 size_t nextsize = chunksize(next); |
| 4818 if (oldsize + nextsize >= nb) { |
| 4819 size_t rsize = oldsize + nextsize - nb; |
| 4820 unlink_chunk(m, next, nextsize); |
| 4821 if (rsize < MIN_CHUNK_SIZE) { |
| 4822 size_t newsize = oldsize + nextsize; |
| 4823 set_inuse(m, p, newsize); |
| 4824 } |
| 4825 else { |
| 4826 mchunkptr r = chunk_plus_offset(p, nb); |
| 4827 set_inuse(m, p, nb); |
| 4828 set_inuse(m, r, rsize); |
| 4829 dispose_chunk(m, r, rsize); |
| 4830 } |
| 4831 newp = p; |
| 4832 } |
| 4833 } |
| 4834 } |
| 4835 else { |
| 4836 USAGE_ERROR_ACTION(m, chunk2mem(p)); |
| 4837 } |
| 4838 return newp; |
| 4839 } |
| 4840 |
| 4841 static void* internal_memalign(mstate m, size_t alignment, size_t bytes) { |
| 4842 void* mem = 0; |
| 4843 if (alignment < MIN_CHUNK_SIZE) /* must be at least a minimum chunk size */ |
| 4844 alignment = MIN_CHUNK_SIZE; |
| 4845 if ((alignment & (alignment-SIZE_T_ONE)) != 0) {/* Ensure a power of 2 */ |
| 4846 size_t a = MALLOC_ALIGNMENT << 1; |
| 4847 while (a < alignment) a <<= 1; |
| 4848 alignment = a; |
| 4849 } |
| 4850 if (bytes >= MAX_REQUEST - alignment) { |
| 4851 if (m != 0) { /* Test isn't needed but avoids compiler warning */ |
| 4852 MALLOC_FAILURE_ACTION; |
| 4853 } |
| 4854 } |
| 4855 else { |
| 4856 size_t nb = request2size(bytes); |
| 4857 size_t req = nb + alignment + MIN_CHUNK_SIZE - CHUNK_OVERHEAD; |
| 4858 mem = internal_malloc(m, req); |
| 4859 if (mem != 0) { |
| 4860 mchunkptr p = mem2chunk(mem); |
| 4861 if (PREACTION(m)) |
| 4862 return 0; |
| 4863 if ((((size_t)(mem)) & (alignment - 1)) != 0) { /* misaligned */ |
| 4864 /* |
| 4865 Find an aligned spot inside chunk. Since we need to give |
| 4866 back leading space in a chunk of at least MIN_CHUNK_SIZE, if |
| 4867 the first calculation places us at a spot with less than |
| 4868 MIN_CHUNK_SIZE leader, we can move to the next aligned spot. |
| 4869 We've allocated enough total room so that this is always |
| 4870 possible. |
| 4871 */ |
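|           /*
|             Worked example (illustrative; assumes a 32-bit build with
|             MIN_CHUNK_SIZE 16): with alignment = 64 and mem = 0x10028,
|             the aligned spot is (0x10028 + 63) & -64 = 0x10040, giving
|             a 24-byte leader; 24 >= MIN_CHUNK_SIZE, so the leader is
|             split off below. A leader under 16 bytes would push pos
|             forward by one more 64-byte unit.
|           */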
| 4872 char* br = (char*)mem2chunk((size_t)(((size_t)((char*)mem + alignment - |
| 4873 SIZE_T_ONE)) & |
| 4874 -alignment)); |
| 4875 char* pos = ((size_t)(br - (char*)(p)) >= MIN_CHUNK_SIZE)? |
| 4876 br : br+alignment; |
| 4877 mchunkptr newp = (mchunkptr)pos; |
| 4878 size_t leadsize = pos - (char*)(p); |
| 4879 size_t newsize = chunksize(p) - leadsize; |
| 4880 |
| 4881 if (is_mmapped(p)) { /* For mmapped chunks, just adjust offset */ |
| 4882 newp->set_prev_foot(p->prev_foot() + leadsize); |
| 4883 newp->head = newsize; |
| 4884 } |
| 4885 else { /* Otherwise, give back leader, use the rest */ |
| 4886 set_inuse(m, newp, newsize); |
| 4887 set_inuse(m, p, leadsize); |
| 4888 dispose_chunk(m, p, leadsize); |
| 4889 } |
| 4890 p = newp; |
| 4891 } |
| 4892 |
| 4893 /* Give back spare room at the end */ |
| 4894 if (!is_mmapped(p)) { |
| 4895 size_t size = chunksize(p); |
| 4896 if (size > nb + MIN_CHUNK_SIZE) { |
| 4897 size_t remainder_size = size - nb; |
| 4898 mchunkptr remainder = chunk_plus_offset(p, nb); |
| 4899 set_inuse(m, p, nb); |
| 4900 set_inuse(m, remainder, remainder_size); |
| 4901 dispose_chunk(m, remainder, remainder_size); |
| 4902 } |
| 4903 } |
| 4904 |
| 4905 mem = chunk2mem(p); |
| 4906 assert (chunksize(p) >= nb); |
| 4907 assert(((size_t)mem & (alignment - 1)) == 0); |
| 4908 check_inuse_chunk(m, p); |
| 4909 POSTACTION(m); |
| 4910 } |
| 4911 } |
| 4912 return mem; |
| 4913 } |
| 4914 |
| 4915 /* |
| 4916 Common support for independent_X routines, handling |
| 4917 all of the combinations that can result. |
| 4918 The opts arg has: |
| 4919 bit 0 set if all elements are same size (using sizes[0]) |
| 4920 bit 1 set if elements should be zeroed |
| 4921 */ |
| 4922 static void** ialloc(mstate m, |
| 4923 size_t n_elements, |
| 4924 size_t* sizes, |
| 4925 int opts, |
| 4926 void* chunks[]) { |
| 4927 |
| 4928 size_t element_size; /* chunksize of each element, if all same */ |
| 4929 size_t contents_size; /* total size of elements */ |
| 4930 size_t array_size; /* request size of pointer array */ |
| 4931 void* mem; /* malloced aggregate space */ |
| 4932 mchunkptr p; /* corresponding chunk */ |
| 4933 size_t remainder_size; /* remaining bytes while splitting */ |
| 4934 void** marray; /* either "chunks" or malloced ptr array */ |
| 4935 mchunkptr array_chunk; /* chunk for malloced ptr array */ |
| 4936 flag_t was_enabled; /* to disable mmap */ |
| 4937 size_t size; |
| 4938 size_t i; |
| 4939 |
| 4940 ensure_initialization(); |
| 4941 /* compute array length, if needed */ |
| 4942 if (chunks != 0) { |
| 4943 if (n_elements == 0) |
| 4944 return chunks; /* nothing to do */ |
| 4945 marray = chunks; |
| 4946 array_size = 0; |
| 4947 } |
| 4948 else { |
| 4949 /* if empty req, must still return chunk representing empty array */ |
| 4950 if (n_elements == 0) |
| 4951 return (void**)internal_malloc(m, 0); |
| 4952 marray = 0; |
| 4953 array_size = request2size(n_elements * (sizeof(void*))); |
| 4954 } |
| 4955 |
| 4956 /* compute total element size */ |
| 4957 if (opts & 0x1) { /* all-same-size */ |
| 4958 element_size = request2size(*sizes); |
| 4959 contents_size = n_elements * element_size; |
| 4960 } |
| 4961 else { /* add up all the sizes */ |
| 4962 element_size = 0; |
| 4963 contents_size = 0; |
| 4964 for (i = 0; i != n_elements; ++i) |
| 4965 contents_size += request2size(sizes[i]); |
| 4966 } |
| 4967 |
| 4968 size = contents_size + array_size; |
| 4969 |
| 4970 /* |
| 4971 Allocate the aggregate chunk. First disable direct-mmapping so |
| 4972 malloc won't use it, since we would not be able to later |
| 4973 free/realloc space internal to a segregated mmap region. |
| 4974 */ |
| 4975 was_enabled = use_mmap(m); |
| 4976 disable_mmap(m); |
| 4977 mem = internal_malloc(m, size - CHUNK_OVERHEAD); |
| 4978 if (was_enabled) |
| 4979 enable_mmap(m); |
| 4980 if (mem == 0) |
| 4981 return 0; |
| 4982 |
| 4983 if (PREACTION(m)) return 0; |
| 4984 p = mem2chunk(mem); |
| 4985 remainder_size = chunksize(p); |
| 4986 |
| 4987 assert(!is_mmapped(p)); |
| 4988 |
| 4989 if (opts & 0x2) { /* optionally clear the elements */ |
| 4990 memset((size_t*)mem, 0, remainder_size - SIZE_T_SIZE - array_size); |
| 4991 } |
| 4992 |
| 4993 /* If not provided, allocate the pointer array as final part of chunk */ |
| 4994 if (marray == 0) { |
| 4995 size_t array_chunk_size; |
| 4996 array_chunk = chunk_plus_offset(p, contents_size); |
| 4997 array_chunk_size = remainder_size - contents_size; |
| 4998 marray = (void**) (chunk2mem(array_chunk)); |
| 4999 set_size_and_pinuse_of_inuse_chunk(m, array_chunk, array_chunk_size); |
| 5000 remainder_size = contents_size; |
| 5001 } |
| 5002 |
| 5003 /* split out elements */ |
| 5004 for (i = 0; ; ++i) { |
| 5005 marray[i] = chunk2mem(p); |
| 5006 if (i != n_elements-1) { |
| 5007 if (element_size != 0) |
| 5008 size = element_size; |
| 5009 else |
| 5010 size = request2size(sizes[i]); |
| 5011 remainder_size -= size; |
| 5012 set_size_and_pinuse_of_inuse_chunk(m, p, size); |
| 5013 p = chunk_plus_offset(p, size); |
| 5014 } |
| 5015 else { /* the final element absorbs any overallocation slop */ |
| 5016 set_size_and_pinuse_of_inuse_chunk(m, p, remainder_size); |
| 5017 break; |
| 5018 } |
| 5019 } |
| 5020 |
| 5021 #if DEBUG |
| 5022 if (marray != chunks) { |
| 5023 /* final element must have exactly exhausted chunk */ |
| 5024 if (element_size != 0) { |
| 5025 assert(remainder_size == element_size); |
| 5026 } |
| 5027 else { |
| 5028 assert(remainder_size == request2size(sizes[i])); |
| 5029 } |
| 5030 check_inuse_chunk(m, mem2chunk(marray)); |
| 5031 } |
| 5032 for (i = 0; i != n_elements; ++i) |
| 5033 check_inuse_chunk(m, mem2chunk(marray[i])); |
| 5034 |
| 5035 #endif /* DEBUG */ |
| 5036 |
| 5037 POSTACTION(m); |
| 5038 return marray; |
| 5039 } |
| 5040 |
| 5041 /* Try to free all pointers in the given array. |
| 5042    Note: this could be made faster by delaying consolidation,
| 5043    at the price of disabling some user integrity checks. We
| 5044 still optimize some consolidations by combining adjacent |
| 5045 chunks before freeing, which will occur often if allocated |
| 5046 with ialloc or the array is sorted. |
| 5047 */ |
| 5048 static size_t internal_bulk_free(mstate m, void* array[], size_t nelem) { |
| 5049 size_t unfreed = 0; |
| 5050 if (!PREACTION(m)) { |
| 5051 void** a; |
| 5052 void** fence = &(array[nelem]); |
| 5053 for (a = array; a != fence; ++a) { |
| 5054 void* mem = *a; |
| 5055 if (mem != 0) { |
| 5056 mchunkptr p = mem2chunk(mem); |
| 5057 size_t psize = chunksize(p); |
| 5058 #if FOOTERS |
| 5059 if (get_mstate_for(p) != m) { |
| 5060 ++unfreed; |
| 5061 continue; |
| 5062 } |
| 5063 #endif |
| 5064 check_inuse_chunk(m, p); |
| 5065 *a = 0; |
| 5066 if (RTCHECK(ok_address(m, p) && ok_inuse(p))) { |
| 5067 void ** b = a + 1; /* try to merge with next chunk */ |
| 5068 mchunkptr next = next_chunk(p); |
| 5069 if (b != fence && *b == chunk2mem(next)) { |
| 5070 size_t newsize = chunksize(next) + psize; |
| 5071 set_inuse(m, p, newsize); |
| 5072 *b = chunk2mem(p); |
| 5073 } |
| 5074 else |
| 5075 dispose_chunk(m, p, psize); |
| 5076 } |
| 5077 else { |
| 5078 CORRUPTION_ERROR_ACTION(m); |
| 5079 break; |
| 5080 } |
| 5081 } |
| 5082 } |
| 5083 if (should_trim(m, m->topsize)) |
| 5084 sys_trim(m, 0); |
| 5085 POSTACTION(m); |
| 5086 } |
| 5087 return unfreed; |
| 5088 } |
| 5089 |
| 5090 /* Traversal */ |
| 5091 #if MALLOC_INSPECT_ALL |
| 5092 static void internal_inspect_all(mstate m, |
| 5093 void(*handler)(void *start, |
| 5094 void *end, |
| 5095 size_t used_bytes, |
| 5096 void* callback_arg), |
| 5097 void* arg) { |
| 5098 if (is_initialized(m)) { |
| 5099 mchunkptr top = m->top; |
| 5100 msegmentptr s; |
| 5101 for (s = &m->seg; s != 0; s = s->next) { |
| 5102 mchunkptr q = align_as_chunk(s->base); |
| 5103 while (segment_holds(s, q) && q->head != FENCEPOST_HEAD) { |
| 5104 mchunkptr next = next_chunk(q); |
| 5105 size_t sz = chunksize(q); |
| 5106 size_t used; |
| 5107 void* start; |
| 5108 if (is_inuse(q)) { |
| 5109 used = sz - CHUNK_OVERHEAD; /* must not be mmapped */ |
| 5110 start = chunk2mem(q); |
| 5111 } |
| 5112 else { |
| 5113 used = 0; |
| 5114 if (is_small(sz)) { /* offset by possible bookkeeping */ |
| 5115 start = (void*)((char*)q + sizeof(struct malloc_chunk)); |
| 5116 } |
| 5117 else { |
| 5118 start = (void*)((char*)q + sizeof(struct malloc_tree_chunk)); |
| 5119 } |
| 5120 } |
| 5121 if (start < (void*)next) /* skip if all space is bookkeeping */ |
| 5122 handler(start, next, used, arg); |
| 5123 if (q == top) |
| 5124 break; |
| 5125 q = next; |
| 5126 } |
| 5127 } |
| 5128 } |
| 5129 } |
| 5130 #endif /* MALLOC_INSPECT_ALL */ |
| 5131 |
| 5132 /* ------------------ Exported realloc, memalign, etc -------------------- */ |
| 5133 |
| 5134 #if !ONLY_MSPACES |
| 5135 |
| 5136 void* dlrealloc(void* oldmem, size_t bytes) { |
| 5137 void* mem = 0; |
| 5138 if (oldmem == 0) { |
| 5139 mem = dlmalloc(bytes); |
| 5140 } |
| 5141 else if (bytes >= MAX_REQUEST) { |
| 5142 MALLOC_FAILURE_ACTION; |
| 5143 } |
| 5144 #ifdef REALLOC_ZERO_BYTES_FREES |
| 5145 else if (bytes == 0) { |
| 5146 dlfree(oldmem); |
| 5147 } |
| 5148 #endif /* REALLOC_ZERO_BYTES_FREES */ |
| 5149 else { |
| 5150 size_t nb = request2size(bytes); |
| 5151 mchunkptr oldp = mem2chunk(oldmem); |
| 5152 #if ! FOOTERS |
| 5153 mstate m = gm; |
| 5154 #else /* FOOTERS */ |
| 5155 mstate m = get_mstate_for(oldp); |
| 5156 if (!ok_magic(m)) { |
| 5157 USAGE_ERROR_ACTION(m, oldmem); |
| 5158 return 0; |
| 5159 } |
| 5160 #endif /* FOOTERS */ |
| 5161 if (!PREACTION(m)) { |
| 5162 mchunkptr newp = try_realloc_chunk(m, oldp, nb, 1); |
| 5163 POSTACTION(m); |
| 5164 if (newp != 0) { |
| 5165 check_inuse_chunk(m, newp); |
| 5166 mem = chunk2mem(newp); |
| 5167 } |
| 5168 else { |
| 5169 mem = internal_malloc(m, bytes); |
| 5170 if (mem != 0) { |
| 5171 size_t oc = chunksize(oldp) - overhead_for(oldp); |
| 5172 memcpy(mem, oldmem, (oc < bytes)? oc : bytes); |
| 5173 internal_free(m, oldmem); |
| 5174 } |
| 5175 } |
| 5176 } |
| 5177 } |
| 5178 return mem; |
| 5179 } |
| 5180 |
| 5181 void* dlrealloc_in_place(void* oldmem, size_t bytes) { |
| 5182 void* mem = 0; |
| 5183 if (oldmem != 0) { |
| 5184 if (bytes >= MAX_REQUEST) { |
| 5185 MALLOC_FAILURE_ACTION; |
| 5186 } |
| 5187 else { |
| 5188 size_t nb = request2size(bytes); |
| 5189 mchunkptr oldp = mem2chunk(oldmem); |
| 5190 #if ! FOOTERS |
| 5191 mstate m = gm; |
| 5192 #else /* FOOTERS */ |
| 5193 mstate m = get_mstate_for(oldp); |
| 5194 if (!ok_magic(m)) { |
| 5195 USAGE_ERROR_ACTION(m, oldmem); |
| 5196 return 0; |
| 5197 } |
| 5198 #endif /* FOOTERS */ |
| 5199 if (!PREACTION(m)) { |
| 5200 mchunkptr newp = try_realloc_chunk(m, oldp, nb, 0); |
| 5201 POSTACTION(m); |
| 5202 if (newp == oldp) { |
| 5203 check_inuse_chunk(m, newp); |
| 5204 mem = oldmem; |
| 5205 } |
| 5206 } |
| 5207 } |
| 5208 } |
| 5209 return mem; |
| 5210 } |
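| 
| /*
|   Usage sketch (hypothetical): try to grow a block without moving it,
|   leaving any fallback policy to the caller.
| 
|     char* buf = (char*)dlmalloc(100);
|     if (buf != 0 && dlrealloc_in_place(buf, 400) == 0) {
|       /* could not extend in place; buf is still valid at its old size */
|     }
| */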
| 5211 |
| 5212 void* dlmemalign(size_t alignment, size_t bytes) { |
| 5213 if (alignment <= MALLOC_ALIGNMENT) { |
| 5214 return dlmalloc(bytes); |
| 5215 } |
| 5216 return internal_memalign(gm, alignment, bytes); |
| 5217 } |
| 5218 |
| 5219 int dlposix_memalign(void** pp, size_t alignment, size_t bytes) { |
| 5220 void* mem = 0; |
| 5221 if (alignment == MALLOC_ALIGNMENT) |
| 5222 mem = dlmalloc(bytes); |
| 5223 else { |
| 5224 size_t d = alignment / sizeof(void*); |
| 5225 size_t r = alignment % sizeof(void*); |
| 5226 if (r != 0 || d == 0 || (d & (d-SIZE_T_ONE)) != 0) |
| 5227 return EINVAL; |
| 5228 else if (bytes <= MAX_REQUEST - alignment) { |
| 5229 if (alignment < MIN_CHUNK_SIZE) |
| 5230 alignment = MIN_CHUNK_SIZE; |
| 5231 mem = internal_memalign(gm, alignment, bytes); |
| 5232 } |
| 5233 } |
| 5234 if (mem == 0) |
| 5235 return ENOMEM; |
| 5236 else { |
| 5237 *pp = mem; |
| 5238 return 0; |
| 5239 } |
| 5240 } |
| 5241 |
| 5242 void* dlvalloc(size_t bytes) { |
| 5243 size_t pagesz; |
| 5244 ensure_initialization(); |
| 5245 pagesz = mparams.page_size; |
| 5246 return dlmemalign(pagesz, bytes); |
| 5247 } |
| 5248 |
| 5249 void* dlpvalloc(size_t bytes) { |
| 5250 size_t pagesz; |
| 5251 ensure_initialization(); |
| 5252 pagesz = mparams.page_size; |
| 5253   return dlmemalign(pagesz, (bytes + pagesz - SIZE_T_ONE) & ~(pagesz - SIZE_T_ONE));
| 5254 } |
| 5255 |
| 5256 void** dlindependent_calloc(size_t n_elements, size_t elem_size, |
| 5257 void* chunks[]) { |
| 5258 size_t sz = elem_size; /* serves as 1-element array */ |
| 5259 return ialloc(gm, n_elements, &sz, 3, chunks); |
| 5260 } |
| 5261 |
| 5262 void** dlindependent_comalloc(size_t n_elements, size_t sizes[], |
| 5263 void* chunks[]) { |
| 5264 return ialloc(gm, n_elements, sizes, 0, chunks); |
| 5265 } |
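| 
| /*
|   Usage sketch (hypothetical sizes): carve a header block and a data
|   block from one aggregate allocation, adjacent in memory yet each
|   individually freeable with dlfree.
| 
|     size_t sizes[2] = { 64, 1000 };      /* header bytes, data bytes */
|     void* parts[2];
|     if (dlindependent_comalloc(2, sizes, parts) != 0) {
|       void* header = parts[0];
|       void* data = parts[1];
|       (void)header; (void)data;
|     }
| */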
| 5266 |
| 5267 size_t dlbulk_free(void* array[], size_t nelem) { |
| 5268 return internal_bulk_free(gm, array, nelem); |
| 5269 } |
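| 
| /*
|   Usage sketch: free a batch under one lock acquisition; freed array
|   slots are zeroed, and the return value counts pointers not freed.
| 
|     void* ptrs[3] = { dlmalloc(10), dlmalloc(20), dlmalloc(30) };
|     size_t unfreed = dlbulk_free(ptrs, 3);
|     (void)unfreed;
| */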
| 5270 |
| 5271 #if MALLOC_INSPECT_ALL |
| 5272 void dlmalloc_inspect_all(void(*handler)(void *start, |
| 5273 void *end, |
| 5274 size_t used_bytes, |
| 5275 void* callback_arg), |
| 5276 void* arg) { |
| 5277 ensure_initialization(); |
| 5278 if (!PREACTION(gm)) { |
| 5279 internal_inspect_all(gm, handler, arg); |
| 5280 POSTACTION(gm); |
| 5281 } |
| 5282 } |
| 5283 #endif /* MALLOC_INSPECT_ALL */ |
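| 
| /*
|   Usage sketch (requires building with MALLOC_INSPECT_ALL): a handler
|   that tallies used bytes. The heap must not be modified from within
|   the handler, since the malloc lock is held during traversal.
| 
|     static void tally(void* start, void* end, size_t used, void* arg) {
|       (void)start; (void)end;
|       *(size_t*)arg += used;
|     }
| 
|     size_t used = 0;
|     dlmalloc_inspect_all(tally, &used);
| */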
| 5284 |
| 5285 int dlmalloc_trim(size_t pad) { |
| 5286 int result = 0; |
| 5287 ensure_initialization(); |
| 5288 if (!PREACTION(gm)) { |
| 5289 result = sys_trim(gm, pad); |
| 5290 POSTACTION(gm); |
| 5291 } |
| 5292 return result; |
| 5293 } |
| 5294 |
| 5295 size_t dlmalloc_footprint(void) { |
| 5296 return gm->footprint; |
| 5297 } |
| 5298 |
| 5299 size_t dlmalloc_max_footprint(void) { |
| 5300 return gm->max_footprint; |
| 5301 } |
| 5302 |
| 5303 size_t dlmalloc_footprint_limit(void) { |
| 5304 size_t maf = gm->footprint_limit; |
| 5305 return maf == 0 ? MAX_SIZE_T : maf; |
| 5306 } |
| 5307 |
| 5308 size_t dlmalloc_set_footprint_limit(size_t bytes) { |
| 5309 size_t result; /* invert sense of 0 */ |
| 5310 if (bytes == 0) |
| 5311 result = granularity_align(1); /* Use minimal size */ |
| 5312   else if (bytes == MAX_SIZE_T)  /* else: keep the bytes==0 result */
| 5313 result = 0; /* disable */ |
| 5314 else |
| 5315 result = granularity_align(bytes); |
| 5316 return gm->footprint_limit = result; |
| 5317 } |
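| 
| /*
|   Usage sketch: cap the heap near 8 MiB. The granted value is rounded
|   to a granularity multiple; 0 requests the minimal cap, and
|   MAX_SIZE_T disables the limit.
| 
|     size_t granted = dlmalloc_set_footprint_limit((size_t)8 << 20);
|     (void)granted;
| */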
| 5318 |
| 5319 #if !NO_MALLINFO |
| 5320 struct mallinfo dlmallinfo(void) { |
| 5321 return internal_mallinfo(gm); |
| 5322 } |
| 5323 #endif /* NO_MALLINFO */ |
| 5324 |
| 5325 #if !NO_MALLOC_STATS |
| 5326 void dlmalloc_stats() { |
| 5327 internal_malloc_stats(gm); |
| 5328 } |
| 5329 #endif /* NO_MALLOC_STATS */ |
| 5330 |
| 5331 int dlmallopt(int param_number, int value) { |
| 5332 return change_mparam(param_number, value); |
| 5333 } |
| 5334 |
| 5335 /* BEGIN android-changed: added const */ |
| 5336 size_t dlmalloc_usable_size(const void* mem) { |
| 5337 /* END android-change */ |
| 5338 if (mem != 0) { |
| 5339 mchunkptr p = mem2chunk(mem); |
| 5340 if (is_inuse(p)) |
| 5341 return chunksize(p) - overhead_for(p); |
| 5342 } |
| 5343 return 0; |
| 5344 } |
| 5345 |
| 5346 #endif /* !ONLY_MSPACES */ |
| 5347 |
| 5348 |
| 5349 /* -------------------- Alternative MORECORE functions ------------------- */ |
| 5350 |
| 5351 /* |
| 5352 Guidelines for creating a custom version of MORECORE: |
| 5353 |
| 5354 * For best performance, MORECORE should allocate in multiples of pagesize. |
| 5355 * MORECORE may allocate more memory than requested. (Or even less, |
| 5356 but this will usually result in a malloc failure.) |
| 5357 * MORECORE must not allocate memory when given argument zero, but |
| 5358 instead return one past the end address of memory from previous |
| 5359 nonzero call. |
| 5360 * For best performance, consecutive calls to MORECORE with positive |
| 5361 arguments should return increasing addresses, indicating that |
| 5362 space has been contiguously extended. |
| 5363 * Even though consecutive calls to MORECORE need not return contiguous |
| 5364 addresses, it must be OK for malloc'ed chunks to span multiple |
| 5365 regions in those cases where they do happen to be contiguous. |
| 5366 * MORECORE need not handle negative arguments -- it may instead |
| 5367 just return MFAIL when given negative arguments. |
| 5368 Negative arguments are always multiples of pagesize. MORECORE |
| 5369 must not misinterpret negative args as large positive unsigned |
| 5370 args. You can suppress all such calls from even occurring by defining |
| 5371    MORECORE_CANNOT_TRIM.
| 5372 |
| 5373 As an example alternative MORECORE, here is a custom allocator |
| 5374 kindly contributed for pre-OSX macOS. It uses virtually but not |
| 5375 necessarily physically contiguous non-paged memory (locked in, |
| 5376 present and won't get swapped out). You can use it by uncommenting |
| 5377 this section, adding some #includes, and setting up the appropriate |
| 5378 defines above: |
| 5379 |
| 5380 #define MORECORE osMoreCore |
| 5381 |
| 5382 There is also a shutdown routine that should somehow be called for |
| 5383 cleanup upon program exit. |
| 5384 |
| 5385 #define MAX_POOL_ENTRIES 100 |
| 5386 #define MINIMUM_MORECORE_SIZE (64 * 1024U) |
| 5387 static int next_os_pool; |
| 5388 void *our_os_pools[MAX_POOL_ENTRIES]; |
| 5389 |
| 5390 void *osMoreCore(int size) |
| 5391 { |
| 5392 void *ptr = 0; |
| 5393 static void *sbrk_top = 0; |
| 5394 |
| 5395 if (size > 0) |
| 5396 { |
| 5397 if (size < MINIMUM_MORECORE_SIZE) |
| 5398 size = MINIMUM_MORECORE_SIZE; |
| 5399 if (CurrentExecutionLevel() == kTaskLevel) |
| 5400 ptr = PoolAllocateResident(size + RM_PAGE_SIZE, 0); |
| 5401 if (ptr == 0) |
| 5402 { |
| 5403 return (void *) MFAIL; |
| 5404 } |
| 5405 // save ptrs so they can be freed during cleanup |
| 5406 our_os_pools[next_os_pool] = ptr; |
| 5407 next_os_pool++; |
| 5408 ptr = (void *) ((((size_t) ptr) + RM_PAGE_MASK) & ~RM_PAGE_MASK); |
| 5409 sbrk_top = (char *) ptr + size; |
| 5410 return ptr; |
| 5411 } |
| 5412 else if (size < 0) |
| 5413 { |
| 5414 // we don't currently support shrink behavior |
| 5415 return (void *) MFAIL; |
| 5416 } |
| 5417 else |
| 5418 { |
| 5419 return sbrk_top; |
| 5420 } |
| 5421 } |
| 5422 |
| 5423 // cleanup any allocated memory pools |
| 5424 // called as last thing before shutting down driver |
| 5425 |
| 5426 void osCleanupMem(void) |
| 5427 { |
| 5428 void **ptr; |
| 5429 |
| 5430 for (ptr = our_os_pools; ptr < &our_os_pools[MAX_POOL_ENTRIES]; ptr++) |
| 5431 if (*ptr) |
| 5432 { |
| 5433 PoolDeallocate(*ptr); |
| 5434 *ptr = 0; |
| 5435 } |
| 5436 } |
| 5437 |
| 5438 */ |
| 5439 |
| 5440 |
| 5441 /* ----------------------------------------------------------------------- |
| 5442 History: |
| 5443 v2.8.6 Wed Aug 29 06:57:58 2012 Doug Lea |
| 5444 * fix bad comparison in dlposix_memalign |
| 5445 * don't reuse adjusted asize in sys_alloc |
| 5446 * add LOCK_AT_FORK -- thanks to Kirill Artamonov for the suggestion |
| 5447 * reduce compiler warnings -- thanks to all who reported/suggested these |
| 5448 |
| 5449 v2.8.5 Sun May 22 10:26:02 2011 Doug Lea (dl at gee) |
| 5450 * Always perform unlink checks unless INSECURE |
| 5451 * Add posix_memalign. |
| 5452 * Improve realloc to expand in more cases; expose realloc_in_place. |
| 5453 Thanks to Peter Buhr for the suggestion. |
| 5454 * Add footprint_limit, inspect_all, bulk_free. Thanks |
| 5455 to Barry Hayes and others for the suggestions. |
| 5456 * Internal refactorings to avoid calls while holding locks |
| 5457 * Use non-reentrant locks by default. Thanks to Roland McGrath |
| 5458 for the suggestion. |
| 5459 * Small fixes to mspace_destroy, reset_on_error. |
| 5460 * Various configuration extensions/changes. Thanks |
| 5461 to all who contributed these. |
| 5462 |
| 5463 V2.8.4a Thu Apr 28 14:39:43 2011 (dl at gee.cs.oswego.edu) |
| 5464 * Update Creative Commons URL |
| 5465 |
| 5466 V2.8.4 Wed May 27 09:56:23 2009 Doug Lea (dl at gee) |
| 5467 * Use zeros instead of prev foot for is_mmapped |
| 5468 * Add mspace_track_large_chunks; thanks to Jean Brouwers |
| 5469 * Fix set_inuse in internal_realloc; thanks to Jean Brouwers |
| 5470 * Fix insufficient sys_alloc padding when using 16byte alignment |
| 5471 * Fix bad error check in mspace_footprint |
| 5472 * Adaptations for ptmalloc; thanks to Wolfram Gloger. |
| 5473 * Reentrant spin locks; thanks to Earl Chew and others |
| 5474 * Win32 improvements; thanks to Niall Douglas and Earl Chew |
| 5475 * Add NO_SEGMENT_TRAVERSAL and MAX_RELEASE_CHECK_RATE options |
| 5476 * Extension hook in malloc_state |
| 5477 * Various small adjustments to reduce warnings on some compilers |
| 5478 * Various configuration extensions/changes for more platforms. Thanks |
| 5479 to all who contributed these. |
| 5480 |
| 5481 V2.8.3 Thu Sep 22 11:16:32 2005 Doug Lea (dl at gee) |
| 5482 * Add max_footprint functions |
| 5483 * Ensure all appropriate literals are size_t |
| 5484 * Fix conditional compilation problem for some #define settings |
| 5485 * Avoid concatenating segments with the one provided |
| 5486 in create_mspace_with_base |
| 5487 * Rename some variables to avoid compiler shadowing warnings |
| 5488 * Use explicit lock initialization. |
| 5489 * Better handling of sbrk interference. |
| 5490 * Simplify and fix segment insertion, trimming and mspace_destroy |
| 5491 * Reinstate REALLOC_ZERO_BYTES_FREES option from 2.7.x |
| 5492 * Thanks especially to Dennis Flanagan for help on these. |
| 5493 |
| 5494 V2.8.2 Sun Jun 12 16:01:10 2005 Doug Lea (dl at gee) |
| 5495 * Fix memalign brace error. |
| 5496 |
| 5497 V2.8.1 Wed Jun 8 16:11:46 2005 Doug Lea (dl at gee) |
| 5498 * Fix improper #endif nesting in C++ |
| 5499 * Add explicit casts needed for C++ |
| 5500 |
| 5501 V2.8.0 Mon May 30 14:09:02 2005 Doug Lea (dl at gee) |
| 5502 * Use trees for large bins |
| 5503 * Support mspaces |
| 5504 * Use segments to unify sbrk-based and mmap-based system allocation, |
| 5505 removing need for emulation on most platforms without sbrk. |
| 5506 * Default safety checks |
| 5507 * Optional footer checks. Thanks to William Robertson for the idea. |
| 5508 * Internal code refactoring |
| 5509 * Incorporate suggestions and platform-specific changes. |
| 5510 Thanks to Dennis Flanagan, Colin Plumb, Niall Douglas, |
| 5511 Aaron Bachmann, Emery Berger, and others. |
| 5512 * Speed up non-fastbin processing enough to remove fastbins. |
| 5513 * Remove useless cfree() to avoid conflicts with other apps. |
| 5514 * Remove internal memcpy, memset. Compilers handle builtins better. |
| 5515 * Remove some options that no one ever used and rename others. |
| 5516 |
| 5517 V2.7.2 Sat Aug 17 09:07:30 2002 Doug Lea (dl at gee) |
| 5518 * Fix malloc_state bitmap array misdeclaration |
| 5519 |
| 5520 V2.7.1 Thu Jul 25 10:58:03 2002 Doug Lea (dl at gee) |
| 5521 * Allow tuning of FIRST_SORTED_BIN_SIZE |
| 5522 * Use PTR_UINT as type for all ptr->int casts. Thanks to John Belmonte. |
| 5523 * Better detection and support for non-contiguousness of MORECORE. |
| 5524 Thanks to Andreas Mueller, Conal Walsh, and Wolfram Gloger |
| 5525 * Bypass most of malloc if no frees. Thanks To Emery Berger. |
| 5526  * Fix freeing of old top non-contiguous chunk in sysmalloc.
| 5527 * Raised default trim and map thresholds to 256K. |
| 5528 * Fix mmap-related #defines. Thanks to Lubos Lunak. |
| 5529 * Fix copy macros; added LACKS_FCNTL_H. Thanks to Neal Walfield. |
| 5530 * Branch-free bin calculation |
| 5531 * Default trim and mmap thresholds now 256K. |
| 5532 |
| 5533 V2.7.0 Sun Mar 11 14:14:06 2001 Doug Lea (dl at gee) |
| 5534 * Introduce independent_comalloc and independent_calloc. |
| 5535 Thanks to Michael Pachos for motivation and help. |
| 5536 * Make optional .h file available |
| 5537 * Allow > 2GB requests on 32bit systems. |
| 5538 * new WIN32 sbrk, mmap, munmap, lock code from <Walter@GeNeSys-e.de>. |
| 5539 Thanks also to Andreas Mueller <a.mueller at paradatec.de>, |
| 5540 and Anonymous. |
| 5541 * Allow override of MALLOC_ALIGNMENT (Thanks to Ruud Waij for |
| 5542 helping test this.) |
| 5543 * memalign: check alignment arg |
| 5544 * realloc: don't try to shift chunks backwards, since this |
| 5545 leads to more fragmentation in some programs and doesn't |
| 5546 seem to help in any others. |
| 5547 * Collect all cases in malloc requiring system memory into sysmalloc |
| 5548 * Use mmap as backup to sbrk |
| 5549 * Place all internal state in malloc_state |
| 5550 * Introduce fastbins (although similar to 2.5.1) |
| 5551 * Many minor tunings and cosmetic improvements |
| 5552 * Introduce USE_PUBLIC_MALLOC_WRAPPERS, USE_MALLOC_LOCK |
| 5553 * Introduce MALLOC_FAILURE_ACTION, MORECORE_CONTIGUOUS |
| 5554 Thanks to Tony E. Bennett <tbennett@nvidia.com> and others. |
| 5555 * Include errno.h to support default failure action. |
| 5556 |
| 5557 V2.6.6 Sun Dec 5 07:42:19 1999 Doug Lea (dl at gee) |
| 5558 * return null for negative arguments |
| 5559 * Added Several WIN32 cleanups from Martin C. Fong <mcfong at yahoo.com> |
| 5560 * Add 'LACKS_SYS_PARAM_H' for those systems without 'sys/param.h' |
| 5561 (e.g. WIN32 platforms) |
| 5562 * Cleanup header file inclusion for WIN32 platforms |
| 5563 * Cleanup code to avoid Microsoft Visual C++ compiler complaints |
| 5564 * Add 'USE_DL_PREFIX' to quickly allow co-existence with existing |
| 5565 memory allocation routines |
| 5566 * Set 'malloc_getpagesize' for WIN32 platforms (needs more work) |
| 5567 * Use 'assert' rather than 'ASSERT' in WIN32 code to conform to |
| 5568 usage of 'assert' in non-WIN32 code |
| 5569 * Improve WIN32 'sbrk()' emulation's 'findRegion()' routine to |
| 5570 avoid infinite loop |
| 5571 * Always call 'fREe()' rather than 'free()' |
| 5572 |
| 5573 V2.6.5 Wed Jun 17 15:57:31 1998 Doug Lea (dl at gee) |
| 5574 * Fixed ordering problem with boundary-stamping |
| 5575 |
| 5576 V2.6.3 Sun May 19 08:17:58 1996 Doug Lea (dl at gee) |
| 5577 * Added pvalloc, as recommended by H.J. Liu |
| 5578 * Added 64bit pointer support mainly from Wolfram Gloger |
| 5579 * Added anonymously donated WIN32 sbrk emulation |
| 5580 * Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen |
| 5581 * malloc_extend_top: fix mask error that caused wastage after |
| 5582 foreign sbrks |
| 5583 * Add linux mremap support code from HJ Liu |
| 5584 |
| 5585 V2.6.2 Tue Dec 5 06:52:55 1995 Doug Lea (dl at gee) |
| 5586 * Integrated most documentation with the code. |
| 5587 * Add support for mmap, with help from |
| 5588 Wolfram Gloger (Gloger@lrz.uni-muenchen.de). |
| 5589 * Use last_remainder in more cases. |
| 5590 * Pack bins using idea from colin@nyx10.cs.du.edu |
| 5591  * Use ordered bins instead of best-fit threshold
| 5592 * Eliminate block-local decls to simplify tracing and debugging. |
| 5593 * Support another case of realloc via move into top |
| 5594  * Fix error occurring when initial sbrk_base not word-aligned.
| 5595 * Rely on page size for units instead of SBRK_UNIT to |
| 5596 avoid surprises about sbrk alignment conventions. |
| 5597 * Add mallinfo, mallopt. Thanks to Raymond Nijssen |
| 5598 (raymond@es.ele.tue.nl) for the suggestion. |
| 5599 * Add `pad' argument to malloc_trim and top_pad mallopt parameter. |
| 5600 * More precautions for cases where other routines call sbrk, |
| 5601 courtesy of Wolfram Gloger (Gloger@lrz.uni-muenchen.de). |
| 5602 * Added macros etc., allowing use in linux libc from |
| 5603 H.J. Lu (hjl@gnu.ai.mit.edu) |
| 5604 * Inverted this history list |
| 5605 |
| 5606 V2.6.1 Sat Dec 2 14:10:57 1995 Doug Lea (dl at gee) |
| 5607 * Re-tuned and fixed to behave more nicely with V2.6.0 changes. |
| 5608 * Removed all preallocation code since under current scheme |
| 5609 the work required to undo bad preallocations exceeds |
| 5610 the work saved in good cases for most test programs. |
| 5611 * No longer use return list or unconsolidated bins since |
| 5612 no scheme using them consistently outperforms those that don't |
| 5613 given above changes. |
| 5614 * Use best fit for very large chunks to prevent some worst-cases. |
| 5615 * Added some support for debugging |
| 5616 |
| 5617 V2.6.0 Sat Nov 4 07:05:23 1995 Doug Lea (dl at gee) |
| 5618 * Removed footers when chunks are in use. Thanks to |
| 5619 Paul Wilson (wilson@cs.texas.edu) for the suggestion. |
| 5620 |
| 5621 V2.5.4 Wed Nov 1 07:54:51 1995 Doug Lea (dl at gee) |
| 5622 * Added malloc_trim, with help from Wolfram Gloger |
| 5623 (wmglo@Dent.MED.Uni-Muenchen.DE). |
| 5624 |
| 5625 V2.5.3 Tue Apr 26 10:16:01 1994 Doug Lea (dl at g) |
| 5626 |
| 5627 V2.5.2 Tue Apr 5 16:20:40 1994 Doug Lea (dl at g) |
| 5628 * realloc: try to expand in both directions |
| 5629 * malloc: swap order of clean-bin strategy; |
| 5630 * realloc: only conditionally expand backwards |
| 5631 * Try not to scavenge used bins |
| 5632 * Use bin counts as a guide to preallocation |
| 5633 * Occasionally bin return list chunks in first scan |
| 5634 * Add a few optimizations from colin@nyx10.cs.du.edu |
| 5635 |
| 5636 V2.5.1 Sat Aug 14 15:40:43 1993 Doug Lea (dl at g) |
| 5637 * faster bin computation & slightly different binning |
| 5638 * merged all consolidations to one part of malloc proper |
| 5639 (eliminating old malloc_find_space & malloc_clean_bin) |
| 5640 * Scan 2 returns chunks (not just 1) |
| 5641 * Propagate failure in realloc if malloc returns 0 |
| 5642 * Add stuff to allow compilation on non-ANSI compilers |
| 5643 from kpv@research.att.com |
| 5644 |
| 5645 V2.5 Sat Aug 7 07:41:59 1993 Doug Lea (dl at g.oswego.edu) |
| 5646 * removed potential for odd address access in prev_chunk |
| 5647 * removed dependency on getpagesize.h |
| 5648 * misc cosmetics and a bit more internal documentation |
| 5649 * anticosmetics: mangled names in macros to evade debugger strangeness |
| 5650 * tested on sparc, hp-700, dec-mips, rs6000 |
| 5651 with gcc & native cc (hp, dec only) allowing |
| 5652 Detlefs & Zorn comparison study (in SIGPLAN Notices.) |
| 5653 |
| 5654 Trial version Fri Aug 28 13:14:29 1992 Doug Lea (dl at g.oswego.edu) |
| 5655 * Based loosely on libg++-1.2X malloc. (It retains some of the overall |
| 5656 structure of old version, but most details differ.) |
| 5657 |
| 5658 */ |