General-purpose scalable concurrent malloc implementation
http://www.canonware.com/jemalloc
This distribution is the stand-alone "portable" implementation of jemalloc.
- Developed at devel:libraries:c_c++
- 4 derived packages
- Checkout Package:
osc -A https://api.opensuse.org checkout openSUSE:Factory/jemalloc && cd $_
Source Files
Filename | Size | Changed |
---|---|---|
0001-ARM-Don-t-extend-bit-LG_VADDR-to-compute-high-addres.patch | 1.55 KB | |
0001-remove-CPU_SPINWAIT.patch | 297 Bytes | |
jemalloc-5.0.1.tar.bz2 | 488 KB | |
jemalloc.changes | 30.2 KB | |
jemalloc.spec | 3.56 KB | |
Revision 23 (latest revision is 36)
Dominique Leuenberger (dimstar_suse) accepted request 531585 from Martin Liška (marxin) (revision 23)
- Add 0001-ARM-Don-t-extend-bit-LG_VADDR-to-compute-high-addres.patch: fixes #979.
- Add 0001-remove-CPU_SPINWAIT.patch: revert 701daa5298b3befe2aff05ce590533165abb9ba4 in order to fix #761.
- Update to version 5.0.1
  Bug fixes:
  * Update decay->nunpurged before purging, in order to avoid potential update races and subsequent incorrect purging volume. (@interwq)
  * Only abort on dlsym(3) error if the failure impacts an enabled feature (lazy locking and/or background threads). This mitigates an initialization failure bug for which we still do not have a clear reproduction test case. (@interwq)
  * Modify tsd management so that it neither crashes nor leaks if a thread's only allocation activity is to call free() after TLS destructors have been executed. This behavior was observed when operating with GNU libc, and is unlikely to be an issue with other libc implementations. (@interwq)
  * Mask signals during background thread creation. This prevents signals from being inadvertently delivered to background threads. (@jasone, @davidtgoldblatt, @interwq)
  * Avoid inactivity checks within background threads, in order to prevent recursive mutex acquisition. (@interwq)
  * Fix extent_grow_retained() to use the specified hooks when the arena.<i>.extent_hooks mallctl is used to override the default hooks. (@interwq)
  * Add missing reentrancy support for custom extent hooks which allocate. (@interwq)
  * Post-fork(2), re-initialize the list of tcaches associated with each arena to contain no tcaches except the forking thread's. (@interwq)
  * Add missing post-fork(2) mutex reinitialization for extent_grow_mtx. This fixes potential deadlocks after fork(2). (@interwq)
  * Enforce minimum autoconf version (currently 2.68), since 2.63 is known to generate corrupt configure scripts. (@jasone)
  * Ensure that the configured page size (--with-lg-page) is no larger than the configured huge page size (--with-lg-hugepage). (@jasone)
  New features:
  * Implement optional per-CPU arena support; threads choose which arena to use based on current CPU rather than on fixed thread-->arena associations. (@interwq)
  * Implement two-phase decay of unused dirty pages. Pages transition from dirty-->muzzy-->clean, where the first phase transition relies on madvise(... MADV_FREE) semantics, and the second phase transition discards pages such that they are replaced with demand-zeroed pages on next access. (@jasone)
  * Increase decay time resolution from seconds to milliseconds. (@jasone)
  * Implement opt-in per CPU background threads, and use them for asynchronous decay-driven unused dirty page purging. (@interwq)
  * Add mutex profiling, which collects a variety of statistics useful for diagnosing overhead/contention issues. (@interwq)
  * Add C++ new/delete operator bindings. (@djwatson)
  * Support manually created arena destruction, such that all data and metadata are discarded. Add MALLCTL_ARENAS_DESTROYED for accessing merged stats associated with destroyed arenas. (@jasone)
  * Add MALLCTL_ARENAS_ALL as a fixed index for use in accessing merged/destroyed arena statistics via mallctl. (@jasone)
  * Add opt.abort_conf to optionally abort if invalid configuration options are detected during initialization. (@interwq)
  * Add opt.stats_print_opts, so that e.g. JSON output can be selected for the stats dumped during exit if opt.stats_print is true. (@jasone)
  * Add --with-version=VERSION for use when embedding jemalloc into another project's git repository. (@jasone)
  * Add --disable-thp to support cross compiling. (@jasone)
  * Add --with-lg-hugepage to support cross compiling. (@jasone)
  * Add mallctl interfaces (various authors):
    + background_thread
    + opt.abort_conf
    + opt.retain
    + opt.percpu_arena
    + opt.background_thread
    + opt.{dirty,muzzy}_decay_ms
    + opt.stats_print_opts
    + arena.<i>.initialized
    + arena.<i>.destroy
    + arena.<i>.{dirty,muzzy}_decay_ms
    + arena.<i>.extent_hooks
    + arenas.{dirty,muzzy}_decay_ms
    + arenas.bin.<i>.slab_size
    + arenas.nlextents
    + arenas.lextent.<i>.size
    + arenas.create
    + stats.background_thread.{num_threads,num_runs,run_interval}
    + stats.mutexes.{ctl,background_thread,prof,reset}.{num_ops,num_spin_acq,num_wait,max_wait_time,total_wait_time,max_num_thds,num_owner_switch}
    + stats.arenas.<i>.{dirty,muzzy}_decay_ms
    + stats.arenas.<i>.uptime
    + stats.arenas.<i>.{pmuzzy,base,internal,resident}
    + stats.arenas.<i>.{dirty,muzzy}_{npurge,nmadvise,purged}
    + stats.arenas.<i>.bins.<j>.{nslabs,reslabs,curslabs}
    + stats.arenas.<i>.bins.<j>.mutex.{num_ops,num_spin_acq,num_wait,max_wait_time,total_wait_time,max_num_thds,num_owner_switch}
    + stats.arenas.<i>.lextents.<j>.{nmalloc,ndalloc,nrequests,curlextents}
    + stats.arenas.<i>.mutexes.{large,extent_avail,extents_dirty,extents_muzzy,extents_retained,decay_dirty,decay_muzzy,base,tcache_list}.{num_ops,num_spin_acq,num_wait,max_wait_time,total_wait_time,max_num_thds,num_owner_switch}
  Portability improvements:
  * Improve reentrant allocation support, such that deadlock is less likely if e.g. a system library call in turn allocates memory. (@davidtgoldblatt, @interwq)
  * Support static linking of jemalloc with glibc. (@djwatson)
  Optimizations and refactors:
  * Organize virtual memory as "extents" of virtual memory pages, rather than as naturally aligned "chunks", and store all metadata in arbitrarily distant locations. This reduces virtual memory external fragmentation, and will interact better with huge pages (not yet explicitly supported). (@jasone)
  * Fold large and huge size classes together; only small and large size classes remain. (@jasone)
  * Unify the allocation paths, and merge most fast-path branching decisions. (@davidtgoldblatt, @interwq)
  * Embed per thread automatic tcache into thread-specific data, which reduces conditional branches and dereferences. Also reorganize tcache to increase fast-path data locality. (@interwq)
  * Rewrite atomics to closely model the C11 API, convert various synchronization from mutex-based to atomic, and use the explicit memory ordering control to resolve various hypothetical races without increasing synchronization overhead. (@davidtgoldblatt)
  * Extensively optimize rtree via various methods:
    + Add multiple layers of rtree lookup caching, since rtree lookups are now part of fast-path deallocation. (@interwq)
    + Determine rtree layout at compile time. (@jasone)
    + Make the tree shallower for common configurations. (@jasone)
    + Embed the root node in the top-level rtree data structure, thus avoiding one level of indirection. (@jasone)
    + Further specialize leaf elements as compared to internal node elements, and directly embed extent metadata needed for fast-path deallocation. (@jasone)
    + Ignore leading always-zero address bits (architecture-specific). (@jasone)
  * Reorganize headers (ongoing work) to make them hermetic, and disentangle various module dependencies. (@davidtgoldblatt)
  * Convert various internal data structures such as size class metadata from boot-time-initialized to compile-time-initialized. Propagate resulting data structure simplifications, such as making arena metadata fixed-size. (@jasone)
  * Simplify size class lookups when constrained to size classes that are multiples of the page size. This speeds lookups, but the primary benefit is complexity reduction in code that was the source of numerous regressions. (@jasone)
  * Lock individual extents when possible for localized extent operations, rather than relying on a top-level arena lock. (@davidtgoldblatt, @jasone)
  * Use first fit layout policy instead of best fit, in order to improve packing. (@jasone)
  * If munmap(2) is not in use, use an exponential series to grow each arena's virtual memory, so that the number of disjoint virtual memory mappings remains low. (@jasone)
  * Implement per arena base allocators, so that arenas never share any virtual memory pages. (@jasone)
  * Automatically generate private symbol name mangling macros. (@jasone)
  Incompatible changes:
  * Replace chunk hooks with an expanded/normalized set of extent hooks. (@jasone)
  * Remove ratio-based purging. (@jasone)
  * Remove --disable-tcache. (@jasone)
  * Remove --disable-tls. (@jasone)
  * Remove --enable-ivsalloc. (@jasone)
  * Remove --with-lg-size-class-group. (@jasone)
  * Remove --with-lg-tiny-min. (@jasone)
  * Remove --disable-cc-silence. (@jasone)
  * Remove --enable-code-coverage. (@jasone)
  * Remove --disable-munmap (replaced by opt.retain). (@jasone)
  * Remove Valgrind support. (@jasone)
  * Remove quarantine support. (@jasone)
  * Remove redzone support. (@jasone)
  * Remove mallctl interfaces (various authors):
    + config.munmap
    + config.tcache
    + config.tls
    + config.valgrind
    + opt.lg_chunk
    + opt.purge
    + opt.lg_dirty_mult
    + opt.decay_time
    + opt.quarantine
    + opt.redzone
    + opt.thp
    + arena.<i>.lg_dirty_mult
    + arena.<i>.decay_time
    + arena.<i>.chunk_hooks
    + arenas.initialized
    + arenas.lg_dirty_mult
    + arenas.decay_time
    + arenas.bin.<i>.run_size
    + arenas.nlruns
    + arenas.lrun.<i>.size
    + arenas.nhchunks
    + arenas.hchunk.<i>.size
    + arenas.extend
    + stats.cactive
    + stats.arenas.<i>.lg_dirty_mult
    + stats.arenas.<i>.decay_time
    + stats.arenas.<i>.metadata.{mapped,allocated}
    + stats.arenas.<i>.{npurge,nmadvise,purged}
    + stats.arenas.<i>.huge.{allocated,nmalloc,ndalloc,nrequests}
    + stats.arenas.<i>.bins.<j>.{nruns,reruns,curruns}
    + stats.arenas.<i>.lruns.<j>.{nmalloc,ndalloc,nrequests,curruns}
    + stats.arenas.<i>.hchunks.<j>.{nmalloc,ndalloc,nrequests,curhchunks}
  Bug fixes:
  * Improve interval-based profile dump triggering to dump only one profile when a single allocation's size exceeds the interval. (@jasone)
  * Use prefixed function names (as controlled by --with-jemalloc-prefix) when pruning backtrace frames in jeprof. (@jasone)
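Several of the new opt.* controls in the changelog above (opt.percpu_arena, opt.{dirty,muzzy}_decay_ms, opt.background_thread, opt.abort_conf) are ordinary runtime options, so they can be set through the MALLOC_CONF environment variable or through the application-defined malloc_conf string. The following is only a minimal sketch: it assumes an unprefixed jemalloc 5.0.1 build linked in (e.g. via -ljemalloc), and the option values are purely illustrative.

```c
#include <stdlib.h>

/* jemalloc reads this application-provided string while it initializes,
 * between the compiled-in defaults and the MALLOC_CONF environment
 * variable.  The keys map to the 5.0 mallctls opt.percpu_arena,
 * opt.dirty_decay_ms, opt.muzzy_decay_ms, opt.background_thread and
 * opt.abort_conf; the values here are examples, not recommendations. */
const char *malloc_conf =
    "percpu_arena:percpu,"      /* pick arenas by current CPU          */
    "dirty_decay_ms:5000,"      /* dirty -> muzzy after ~5 s of disuse */
    "muzzy_decay_ms:30000,"     /* muzzy -> clean after ~30 s          */
    "background_thread:true,"   /* purge asynchronously                */
    "abort_conf:true";          /* abort on invalid option strings     */

int main(void) {
    void *p = malloc(1 << 20);  /* served by the calling CPU's arena */
    free(p);
    return 0;
}
```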
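The new background_thread mallctl and the stats.background_thread.* namespace can also be driven at run time. A hedged sketch, assuming the header is installed as <jemalloc/jemalloc.h>, the build is unprefixed (so the control function is mallctl()), and statistics support is compiled in (the default):

```c
#include <jemalloc/jemalloc.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* Flip the top-level "background_thread" switch added in 5.0. */
    bool enable = true;
    if (mallctl("background_thread", NULL, NULL, &enable, sizeof(enable)) != 0) {
        fprintf(stderr, "enabling background threads failed\n");
        return 1;
    }

    /* Statistics are snapshotted per "epoch"; bump it before reading. */
    uint64_t epoch = 1;
    size_t sz = sizeof(epoch);
    (void)mallctl("epoch", &epoch, &sz, &epoch, sizeof(epoch));

    /* One of the new read-only stats added alongside background threads. */
    size_t nthreads;
    sz = sizeof(nthreads);
    if (mallctl("stats.background_thread.num_threads", &nthreads, &sz, NULL, 0) == 0)
        printf("background threads running: %zu\n", nthreads);
    return 0;
}
```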
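opt.stats_print_opts controls the exit-time dump, but the same option characters are accepted directly by malloc_stats_print(), which is also the easiest way to look at the new mutex-profiling and per-arena decay statistics. A minimal sketch, again assuming an unprefixed build with <jemalloc/jemalloc.h> available:

```c
#include <jemalloc/jemalloc.h>

int main(void) {
    /* NULL callback and opaque pointer send the dump to stderr; "J"
     * requests JSON, the same encoding opt.stats_print_opts can select
     * for the dump that opt.stats_print triggers at exit. */
    malloc_stats_print(NULL, NULL, "J");
    return 0;
}
```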
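The arenas.create and arena.<i>.destroy mallctls listed above make the full lifecycle of a manually created arena drivable from application code. A hedged sketch of that lifecycle under the same unprefixed-build assumption; the allocation size and flags are arbitrary:

```c
#include <jemalloc/jemalloc.h>
#include <stdio.h>

int main(void) {
    /* Create a fresh arena with the default extent hooks (newp could
     * instead pass a pointer to a custom extent_hooks_t table). */
    unsigned arena_ind;
    size_t sz = sizeof(arena_ind);
    if (mallctl("arenas.create", &arena_ind, &sz, NULL, 0) != 0) {
        fprintf(stderr, "arenas.create failed\n");
        return 1;
    }

    /* Route an allocation to that arena explicitly, bypassing the tcache
     * so nothing from the arena lingers in per-thread caches. */
    void *p = mallocx(4096, MALLOCX_ARENA(arena_ind) | MALLOCX_TCACHE_NONE);
    if (p != NULL)
        dallocx(p, MALLOCX_TCACHE_NONE);

    /* Destroy the arena: its data and metadata are discarded and its
     * statistics are merged under MALLCTL_ARENAS_DESTROYED. */
    char cmd[64];
    snprintf(cmd, sizeof(cmd), "arena.%u.destroy", arena_ind);
    if (mallctl(cmd, NULL, NULL, NULL, 0) != 0) {
        fprintf(stderr, "%s failed\n", cmd);
        return 1;
    }
    return 0;
}
```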
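Decay times moved from seconds to milliseconds in this release, and the arena.<i>.{dirty,muzzy}_decay_ms mallctls allow per-arena tuning at run time. A small sketch with illustrative values only; per the jemalloc 5 documentation the type is ssize_t and -1 disables time-based purging:

```c
#include <jemalloc/jemalloc.h>
#include <sys/types.h>
#include <stdio.h>

int main(void) {
    /* Shorten arena 0's dirty-page decay to ~10 s (the value is
     * per-arena and expressed in milliseconds as of 5.0). */
    ssize_t decay_ms = 10000;
    if (mallctl("arena.0.dirty_decay_ms", NULL, NULL,
                &decay_ms, sizeof(decay_ms)) != 0) {
        fprintf(stderr, "setting arena.0.dirty_decay_ms failed\n");
        return 1;
    }

    /* Read back the second-phase (muzzy) decay time for the same arena. */
    ssize_t muzzy_ms;
    size_t sz = sizeof(muzzy_ms);
    if (mallctl("arena.0.muzzy_decay_ms", &muzzy_ms, &sz, NULL, 0) == 0)
        printf("arena 0 muzzy_decay_ms: %zd\n", muzzy_ms);
    return 0;
}
```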
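Chunk hooks are gone in 5.x; their replacement, extent hooks, is installed per arena through the arena.<i>.extent_hooks mallctl mentioned in the changelog. The sketch below only reads the hook table currently in effect for arena 0; installing custom hooks would mean writing a pointer to a statically allocated extent_hooks_t through newp, which is left out here to keep the example self-contained.

```c
#include <jemalloc/jemalloc.h>
#include <stdio.h>

int main(void) {
    /* extent_hooks_t is declared in the public header; the mallctl's
     * old value is the hook table arena 0 is currently using. */
    extent_hooks_t *hooks;
    size_t sz = sizeof(hooks);
    if (mallctl("arena.0.extent_hooks", &hooks, &sz, NULL, 0) == 0)
        printf("arena 0 extent hooks live at %p\n", (void *)hooks);
    return 0;
}
```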