fix(deps): upgrade mimalloc #13944
📝 Benchmark detail: Open

Base persistent cache hit rate:

Threshold exceeded:
- `arco-pro_development-mode_hmr + rss memory`
- `arco-pro_production-mode_persistent-hot + rss memory`
- `arco-pro_production-mode_source-map + rss memory`
- `threejs_production-mode_10x_persistent-cold + rss memory`
📦 Binary Size-limit

❌ Size increased by 28.04KB from 61.92MB to 61.95MB (⬆️0.04%)
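The reported percentage is consistent with the absolute numbers; a quick cross-check of the bot's arithmetic:

```shell
# Cross-check the reported delta: +28.04KB over a 61.92MB baseline,
# converting MB to KB before dividing.
awk 'BEGIN { printf "+%.2f%%\n", 28.04 / (61.92 * 1024) * 100 }'
```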
📊 Rsdoctor Bundle Diff Analysis

Found 6 projects in monorepo, 0 projects with changes.

Generated by Rsdoctor GitHub Action
📝 Benchmark detail: Open

Base persistent cache hit rate:

Threshold exceeded:
- `10000_big_production-mode_disable-minimize + exec`
- `10000_development-mode + exec`
- `10000_development-mode_noop-loader + exec`
- `10000_production-mode + exec`
- `10000_production-mode_persistent-cold + exec`
- `10000_production-mode_persistent-hot + exec`
- `10000_production-mode_source-map + exec`
- `arco-pro_development-mode + exec`
- `arco-pro_production-mode + exec`
- `arco-pro_production-mode_generate-package-json-webpack-plugin + exec`
- `arco-pro_production-mode_persistent-cold + exec`
- `arco-pro_production-mode_persistent-hot + exec`
- `arco-pro_production-mode_source-map + exec`
- `arco-pro_production-mode_traverse-chunk-modules + exec`
- `large-dyn-imports_development-mode + exec`
- `large-dyn-imports_production-mode + exec`
- `threejs_development-mode_10x + exec`
- `threejs_production-mode_10x + exec`
- `threejs_production-mode_10x_persistent-cold + exec`
- `threejs_production-mode_10x_persistent-hot + exec`
- `threejs_production-mode_10x_source-map + exec`
📝 Benchmark detail: Open

Base persistent cache hit rate:

Threshold exceeded:
- `10000_big_production-mode_disable-minimize + exec`
- `arco-pro_development-mode_hmr + rss memory`
- `arco-pro_production-mode_persistent-hot + rss memory`
- `threejs_production-mode_10x_persistent-cold + rss memory`
## Summary

Related to #13942.

Upgrade `mimalloc` from `0.1.48` to `0.1.50`. This also removes the stale `v3` feature usage from `rspack_allocator` and `rspack_benchmark`: `mimalloc 0.1.50` no longer exposes that feature, since it already vendors mimalloc v3.

This PR intentionally does not change mimalloc runtime defaults such as `allow_thp`, `arena_purge_mult`, `purge_delay`, or `disallow_arena_alloc`.

## Binding RSS Investigation

The observed regression is from the binding ecosystem benchmark, not the Rust benchmark. The benchmark samples the Node.js process RSS via `process.memoryUsage().rss` after the native `.node` binding runs a build.

The RSS increase is not caused by a `v3.3.1` arena purge behavior change relative to `v3.3.0`. The relevant arena purge defaults and the main purge path are effectively the same between `v3.3.0` and `v3.3.1`:

- `purge_delay = 1000ms`
- `arena_purge_mult = 1`
- the effective arena purge delay is `purge_delay * arena_purge_mult`

The root cause is the larger allocator policy change from mimalloc `v3.1.5` to `v3.3.x`, combined with the binding benchmark's sampling point:

- `mimalloc 0.1.48` vendors mimalloc `v3.1.5`; `mimalloc 0.1.50` vendors mimalloc `v3.3.1`.
- `v3.3.x` enables mimalloc's large page size class by default (`MI_ENABLE_LARGE_PAGES=1`; `v3.1.5` defaulted to `0`).
- `v3.3.x` adds Linux THP-aware purge behavior. When THP is available and `allow_thp` is enabled, `_mi_os_minimal_purge_size()` returns the large OS page size, usually `2MiB`, so arena purge avoids fragmenting THP.

In short: the benchmark is observing retained arena memory from the newer mimalloc policy. This is an allocator tradeoff for less frequent decommit/recommit and less THP fragmentation, not a live-heap leak in Rspack.
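The THP-aware purge granularity is the key lever here. A quick calculation shows how much coarser the purge unit becomes (the 4KiB base page is an assumption about a typical Linux host; the 2MiB large page size is the usual value mentioned above):

```shell
# With THP-aware purge, the minimal purge unit grows from a normal OS
# page (typically 4KiB on Linux) to the large OS page size (typically
# 2MiB), so free spans smaller than 2MiB stay committed instead of
# being decommitted by the arena purge.
base_page=$(( 4 * 1024 ))
large_page=$(( 2 * 1024 * 1024 ))
echo "purge granularity grows $(( large_page / base_page ))x"
```

A 512x coarser purge unit means many partially-free regions that `v3.1.5` would have decommitted now remain counted in RSS.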
## Evidence

Same-machine artifact reruns for `threejs_production-mode_10x_persistent-cold`:

| Options | mimalloc | Time | RSS |
|---|---|---|---|
| (default) | `0.1.48` / `v3.1.5` | 2.83s | 744MiB |
| (default) | `0.1.50` / `v3.3.1` | 2.86s | 870MiB |
| `MIMALLOC_PURGE_DELAY=10` | `0.1.50` / `v3.3.1` | 2.64s | 759MiB |
| `MIMALLOC_PURGE_DELAY=20` | `0.1.50` / `v3.3.1` | 2.67s | 758MiB |

`MIMALLOC_SHOW_STATS=1` also points in the same direction:

- `v3.1.5`: usually one arena, about `1GiB` reserved, current committed around `570-590MiB`, purged around `360-380MiB`.
- `v3.3.1`: usually two arenas, about `2GiB` reserved, current committed around `1.0GiB`, purged around `440-515MiB`.

Custom mimalloc instrumentation showed that the second arena is not caused by one huge allocation; it can be triggered by normal page allocations.
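Reading the reruns above as deltas against the `0.1.48`/`v3.1.5` baseline makes the purge-delay effect explicit:

```shell
# RSS deltas (MiB) for threejs_production-mode_10x_persistent-cold,
# relative to the 744MiB baseline measured with mimalloc 0.1.48/v3.1.5.
base=744
echo "default 0.1.50/v3.3.1:   +$(( 870 - base )) MiB"
echo "MIMALLOC_PURGE_DELAY=10: +$(( 759 - base )) MiB"
echo "MIMALLOC_PURGE_DELAY=20: +$(( 758 - base )) MiB"
```

So most of the 126MiB regression is retained-but-purgeable arena memory: shortening the purge delay recovers all but roughly 15MiB.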
Options that reduce RSS were tested but are not used in this PR:

- `MIMALLOC_DISALLOW_ARENA_ALLOC=1` lowers RSS but changes allocation strategy broadly and can slow binding builds.
- `arena_purge_mult=0` lowers RSS by making arena purge immediate, but it changes runtime allocator behavior and may increase decommit/recommit overhead.
- `allow_thp=0` or compile-time `MI_ENABLE_LARGE_PAGES=0` only explains part of the RSS difference and changes allocator/runtime tradeoffs.
- Setting `arena_max_object_size` to `16M`/`32M` did not materially recover RSS for this workload.

## Validation
```shell
cargo update -p mimalloc --precise 0.1.50
cargo tree -p rspack_allocator --target x86_64-unknown-linux-gnu -i mimalloc -e features
cargo tree -p rspack_benchmark --target x86_64-unknown-linux-gnu -i mimalloc -e features
cargo check -p rspack_allocator --target x86_64-unknown-linux-gnu
cargo check -p rspack_benchmark --target x86_64-unknown-linux-gnu
cargo fmt --all --check
pnpm dlx @taplo/cli@0.7.0 format --check Cargo.toml crates/rspack_allocator/Cargo.toml xtask/benchmark/Cargo.toml
git diff --check
```

## Related links

- 9199d54b: enabled mimalloc large page size class by default again.
- #1246: previous arena purge scheduling issue; the relevant fixes are already in the v3.3 line.

## Checklist