fix(websocket): close proxy socket on tunnel shutdown so wss-via-proxy upgrade resources free#28965
fix(websocket): close proxy socket on tunnel shutdown so wss-via-proxy upgrade resources free#28965cirospaciari wants to merge 8 commits intomainfrom
Conversation
|
Updated 9:45 PM PT - Apr 8th, 2026
❌ Your commit 🧪 To try this PR locally: bunx bun-pr 28965That installs a local version of the PR into your bun-28965 --bun |
|
Found 3 issues this PR may fix:
🤖 Generated with Claude Code |
98f3daf to
79e2eca
Compare
79e2eca to
545f0a0
Compare
…e upgrade success The tunnel-mode success path took outgoing_websocket without dropping the cpp_websocket ref it represented (+1 from connect()'s out.ref()). C++ nulls m_upgradeClient in didConnectWithTunnel so nothing else ever releases that ref. Stacked on the socket-close fix (PR #28965), without which handleClose never fires at all; with both, the upgrade client now reaches refcount=0 instead of leaking ~4KB at refcount=1 per connection.
545f0a0 to
e10aed5
Compare
…e upgrade success The tunnel-mode success path took outgoing_websocket without dropping the cpp_websocket ref it represented (+1 from connect()'s out.ref()). C++ nulls m_upgradeClient in didConnectWithTunnel so nothing else ever releases that ref. Stacked on the socket-close fix (PR #28965), without which handleClose never fires at all; with both, the upgrade client now reaches refcount=0 instead of leaking ~4KB at refcount=1 per connection.
…e upgrade success The tunnel-mode success path took outgoing_websocket without dropping the cpp_websocket ref it represented (+1 from connect()'s out.ref()). C++ nulls m_upgradeClient in didConnectWithTunnel so nothing else ever releases that ref. Stacked on the socket-close fix (PR #28965), without which handleClose never fires at all; with both, the upgrade client now reaches refcount=0 instead of leaking ~4KB at refcount=1 per connection.
5190cb5 to
f0aee49
Compare
…e upgrade success The tunnel-mode success path took outgoing_websocket without dropping the cpp_websocket ref it represented (+1 from connect()'s out.ref()). C++ nulls m_upgradeClient in didConnectWithTunnel so nothing else ever releases that ref. Stacked on the socket-close fix (PR #28965), without which handleClose never fires at all; with both, the upgrade client now reaches refcount=0 instead of leaking ~4KB at refcount=1 per connection.
|
Note Reviews pausedIt looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the Use the following commands to manage reviews:
Use the checkboxes below for quick actions:
WalkthroughAdds a SocketUnion.close method and changes WebSocketProxyTunnel.shutdown to extract-and-clear the stored socket before wrapper shutdown and close. Adjusts WebSocketUpgradeClient proxy cleanup to avoid re-entrancy. Adds a Bun fixture and a regression test exercising WSS over an HTTP CONNECT proxy to detect leaks. Changes
Possibly related PRs
Suggested reviewers
🚥 Pre-merge checks | ✅ 2✅ Passed checks (2 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings. Comment |
There was a problem hiding this comment.
This PR fixes a real memory leak in production networking code and has CI failures that need to be resolved before merging; the memory-management logic (ref-count sequencing, re-entrant shutdown guard) warrants a human look.
Extended reasoning...
Overview
The PR adds a sock.close() call in WebSocketProxyTunnel.shutdown() and a helper SocketUnion.close() method (~13 lines of Zig), plus two new test files for regression coverage. The fix addresses a ~2.5MB-per-connection memory leak where shutdown() sent TLS close_notify without closing the underlying socket.
Security risks
No auth or permission changes. The fix is scoped to cleanup/resource-release logic; there are no injection or data-exposure vectors introduced.
Level of scrutiny
Although the diff is small, it touches ref-count ordering and re-entrant shutdown in production WebSocket networking code. The PR description explains the expected refcount sequence (2 → 1 → 0), but this kind of reasoning is easy to get wrong and deserves human verification. Additionally, robobun reported CI build failures on the latest commit; those need to be triaged before merge.
Other factors
The only bug flagged by the hunting system is a documentation nit in the test JSDoc (wrong source file and root cause described) — it does not affect correctness. However, the CI failures are an independent blocker that a human should review.
f0aee49 to
05e739d
Compare
…e upgrade success The tunnel-mode success path took outgoing_websocket without dropping the cpp_websocket ref it represented (+1 from connect()'s out.ref()). C++ nulls m_upgradeClient in didConnectWithTunnel so nothing else ever releases that ref. Stacked on the socket-close fix (PR #28965), without which handleClose never fires at all; with both, the upgrade client now reaches refcount=0 instead of leaking ~4KB at refcount=1 per connection.
There was a problem hiding this comment.
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/http/websocket_client/WebSocketProxyTunnel.zig (1)
345-355:⚠️ Potential issue | 🟠 MajorMake the re-entrant
shutdown()path a real no-op.
Line 346still executeswrapper.shutdown(true)on the synchronous re-entry path fromsock.close(), so the function only becomes a no-op after it has already touched the SSL state. IfSSLWrapper.shutdown()emits another TLS record there,writeEncrypted()will now see.noneand queue it into#write_buffer.Suggested fix
pub fn shutdown(this: *WebSocketProxyTunnel) void { + const sock = this.#socket; + if (sock == .none) return; if (this.#wrapper) |*wrapper| { _ = wrapper.shutdown(true); // Fast shutdown } // Close the underlying proxy socket so the upgrade client's handleClose // fires and releases its socket ref + proxy state. Take the socket out // first so a re-entrant shutdown() (via handleClose → clearData → // proxy.deinit → tunnel.shutdown) finds .none and is a no-op. - const sock = this.#socket; this.#socket = .{ .none = {} }; sock.close(); }🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/http/websocket_client/WebSocketProxyTunnel.zig` around lines 345 - 355, The re-entrant shutdown is not a true no-op because you call wrapper.shutdown(true) while this.#wrapper is still set; capture the wrapper first into a local, set this.#wrapper to .none before invoking its shutdown so any re-entrant shutdown() sees .none and becomes a no-op, e.g. grab this.#wrapper into a local variable, immediately set this.#wrapper = .{ .none = {} }, then call capturedWrapper.shutdown(true) if present; do the same take-then-close pattern already used for this.#socket so sock.close() cannot trigger a path that observes a still-set wrapper and queues writes via writeEncrypted()/#write_buffer.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@test/js/web/websocket/wss-proxy-tunnel-leak-fixture.ts`:
- Around line 30-36: The current handler uses client.once("data") and destroys
the socket if the first chunk doesn't contain "\r\n\r\n"; instead, change the
logic to accumulate data across multiple 'data' events until the header
terminator is found: use client.on("data", ...) (or re-arm once inside the
handler) to append incoming chunks to buf, check for eoh =
buf.toString().indexOf("\r\n\r\n") on every chunk, and only proceed (and remove
the listener) once eoh >= 0; do not call client.destroy() simply because the
terminator wasn't in the first packet—handle 'end'/'error' events to fail
cleanly if the socket closes before headers complete. Ensure you reference the
existing client, buf, and the eoh check so the change integrates with the
current flow.
---
Outside diff comments:
In `@src/http/websocket_client/WebSocketProxyTunnel.zig`:
- Around line 345-355: The re-entrant shutdown is not a true no-op because you
call wrapper.shutdown(true) while this.#wrapper is still set; capture the
wrapper first into a local, set this.#wrapper to .none before invoking its
shutdown so any re-entrant shutdown() sees .none and becomes a no-op, e.g. grab
this.#wrapper into a local variable, immediately set this.#wrapper = .{ .none =
{} }, then call capturedWrapper.shutdown(true) if present; do the same
take-then-close pattern already used for this.#socket so sock.close() cannot
trigger a path that observes a still-set wrapper and queues writes via
writeEncrypted()/#write_buffer.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: 8c01a149-7ade-418e-859c-872faa41df84
📒 Files selected for processing (3)
src/http/websocket_client/WebSocketProxyTunnel.zigtest/js/web/websocket/wss-proxy-tunnel-leak-fixture.tstest/js/web/websocket/wss-proxy-tunnel-leak.test.ts
…y upgrade resources free WebSocketProxyTunnel.shutdown() only sent TLS close_notify and never closed the underlying socket, so the upgrade client's handleClose never fired, proxy.deinit never ran, and the tunnel + per-connection SSL_CTX + upgrade client all leaked (~2.5MB per wss-via-http-proxy connection). shutdown() now takes the socket out (so a re-entrant call via handleClose -> clearData -> proxy.deinit -> tunnel.shutdown finds .none and is a no-op) and closes it. The synchronous re-entry is refcount-safe without an extra guard: tunnel is at 2 (proxy + WebSocketClient) before cleanup; proxy.deinit derefs to 1; the WebSocketClient cleanup's own deref takes it to 0. Test runs 200 connect/close cycles with servers in a separate process and asserts client RSS growth < 1.5MB/iter (pre-fix ~2.5MB/iter, post-fix ~1.0MB/iter).
05e739d to
a5de551
Compare
There was a problem hiding this comment.
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@test/js/web/websocket/wss-proxy-tunnel-leak.test.ts`:
- Around line 24-30: The test currently asserts stderr is exactly empty and
checks exitCode before validating stdout content; change this by removing the
brittle expect(stderr).toBe("") and instead assert stderr does not contain fatal
indicators (e.g., expect(stderr).not.toMatch(/error|ASAN/i)) or simply omit the
stderr assertion, then move expect(exitCode).toBe(0) to the end after parsing
stdout and asserting values (the JSON.parse(stdout.trim()), iter === ITER) so
diagnostics from stdout are visible on failure; apply the same change to the
other similar block (the second stderr/exitCode assertions around lines 37-43).
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: f8b74c3e-8354-4a42-aef7-11eee9a8a1a8
📒 Files selected for processing (3)
src/http/websocket_client/WebSocketProxyTunnel.zigtest/js/web/websocket/wss-proxy-tunnel-leak-fixture.tstest/js/web/websocket/wss-proxy-tunnel-leak.test.ts
There was a problem hiding this comment.
LGTM — previous feedback addressed; only documentation nits remain.
Extended reasoning...
Overview
The PR modifies three files: the core fix is a ~15-line change in WebSocketProxyTunnel.zig adding a close() method to SocketUnion and updating shutdown() to clear-then-close the proxy socket. Two new test files (fixture + test) measure RSS growth over 200–500 wss-via-proxy upgrade cycles as a regression gate.
Previous feedback addressed
My earlier review raised two issues: (1) the JSDoc header in the test file named the wrong root-cause component (WebSocketUpgradeClient.zig instead of WebSocketProxyTunnel.zig), and (2) the server child was spawned with stderr: "inherit" which could contaminate the fixture stdout/stderr pipes. Both are fixed in the current diff — the JSDoc now correctly describes WebSocketProxyTunnel.zig and the server child uses stdout/stderr: "ignore". CodeRabbit's CONNECT-accumulation concern was also addressed.
Security risks
None. The change is confined to socket lifecycle management inside the wss-via-proxy tunnel — it adds a missing close() call to an already-scheduled shutdown path. No auth, crypto, or permission logic is touched.
Level of scrutiny
The fix is narrow and mechanical: the socket was not being closed on shutdown, so the cleanup chain never fired. The re-entrancy guard (take socket out before closing, early-return on .none) is a standard pattern and is described accurately in the updated doc comment. The regression test threshold (1.5 MB/iter, between pre-fix ~2.5 MB and post-fix ~1.0 MB) is conservative.
Other factors
The two remaining nit-level bugs (fixture comment naming WebSocketUpgradeClient.zig instead of WebSocketProxyTunnel.zig, and the test name saying "HTTPClient") are documentation inaccuracies with no functional impact — they are already captured as inline comments. The overall fix quality is high given its small scope.
|
🤖 PR feedback addressed by Claude Code.
Pushed in 4863f72. |
There was a problem hiding this comment.
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@test/js/web/websocket/wss-proxy-tunnel-leak-fixture.ts`:
- Around line 63-75: Race the readiness promise against the child.exited promise
so the test fails immediately if the spawned server dies before sending IPC;
replace "await ready.promise" with awaiting Promise.race([ready.promise,
child.exited.then(({ exitCode, signal }) => { throw new Error(`server child
exited before readiness (exitCode=${exitCode}, signal=${signal})`); })])
(referencing ready, child, Bun.spawn, and ready.promise) to surface an
actionable error including the child's exit details.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: 4ab94678-0e13-4c69-95ed-38cd58603f55
📒 Files selected for processing (2)
test/js/web/websocket/wss-proxy-tunnel-leak-fixture.tstest/js/web/websocket/wss-proxy-tunnel-leak.test.ts
|
Gate discrimination blocker — needs human input. Measured reality of the RSS leak test on 7d2fa8f (src reverted) + 410d198 (test exitCode ordering):
Per-iter difference: ~1.5 KB. The test's threshold is The PR description claims ~2.5 MB per iter pre-fix. I cannot reproduce that rate. What I can't do from here
What the branch currently has
All review threads resolved. What I'd need to continueEither a working ASAN build path in this container, or guidance to add a test-only counter to WebSocketProxyTunnel that exposes live-tunnel count via Going idle rather than flailing on the gate further. |
When cancel() calls clearData() and the proxy still owns a WebSocketProxyTunnel, p.deinit() -> tunnel.shutdown() -> sock.close() can fire handleClose synchronously (uSockets on_close runs inside us_socket_close), which calls clearData() again. On re-entry the outer pass is still inside p.deinit(), so this.proxy is still non-null and inner deinit would free #target_host / #websocket_request_buf a second time. Copy the proxy by value, null the field, then deinit the copy - the inner clearData now finds this.proxy == null and skips. test(fixture): race ready.promise against child.exited so the test fails fast with the child's exit code if the server child dies before sending its IPC readiness message, instead of hanging until timeout.
Three follow-ups from review on 82cb6dc: 1. WebSocketProxyTunnel.shutdown: call wrapper.shutdown(true) BEFORE clearing #socket so SSLWrapper.writeEncrypted can transmit close_notify through the real socket. Previously we cleared #socket first, so writeEncrypted wrote to .none, buffered into #write_buffer, and the buffered close_notify was then freed by the deinit chain without reaching the peer. Add a #shutdown_started bool guard instead of the #socket-is-none check so a re-entrant shutdown (via triggerCloseCallback -> onClose -> upgrade_client.terminate -> fail -> tcp.close -> handleClose -> clearData -> proxy.deinit -> tunnel.shutdown) no-ops. sock.close() is idempotent in uSockets so the trailing close is safe even if the re-entrant chain already closed the fd. 2. WebSocketUpgradeClient.cancel: move this.ref()/defer this.deref() to BEFORE this.clearData(). clearData can now synchronously drop the refcount to 0 (clearData -> proxy.deinit -> tunnel.shutdown -> sock.close -> handleClose -> deref) so the safety guard has to be in place before we call clearData, not after. 3. wss-proxy-tunnel-leak.test: import isASAN and scale the per-iteration RSS threshold 3MB (ASAN) vs 1.5MB (release) to match the pattern in test/regression/issue/28632.test.ts. ASAN shadow memory + red-zone padding + heap quarantine push per-connection RSS above the old 1.5MB headroom on bun-asan lanes.
The src-side changes in 02b76d2 broke 10 test-bun lanes in CI (debian, alpine, ubuntu, x64, aarch64, baseline + ASAN). Both the wrapper.shutdown ordering change and the cancel() ref-before-clearData change were theoretical review concerns (close_notify not transmitted; UAF window between clearData and ref), but the actual implementation regressed real tests that were previously passing. Revert the two src files to their 82cb6dc state; keep the ASAN threshold scaling in the test file.
There was a problem hiding this comment.
The UAF fix from 02b76d2 (moving this.ref() before clearData()) was reverted in 7d2fa8f, leaving an exploitable path: with the double-free fix (82cb6dc) now nulling this.proxy before local.deinit(), the re-entrant handleClose triggered by sock.close() can deref the HTTPClient to 0 before cancel()'s safety guard runs — please confirm the revert was intentional or reapply the guard.
Extended reasoning...
Overview
The PR fixes a ~2.5MB per-connection leak by adding sock.close() to WebSocketProxyTunnel.shutdown(). This causes handleClose to fire synchronously, which required companion fixes for double-free and UAF hazards. Two iterations of fixes were applied (82cb6dc for clearData double-free, 02b76d2 for cancel() ordering and close_notify ordering), but 7d2fa8f reverted the 02b76d2 changes.
Security risks
None beyond memory safety. The UAF is a crash-on-close for connections that receive cancel() while in proxy_tls_handshake state. No auth bypass or data exposure risks.
Level of scrutiny
High — this is production-critical socket lifecycle management in Zig with manually managed refcounts and synchronous re-entrancy. The core leak fix is sound, but the revert of the cancel() ordering fix leaves an open UAF path in a code path that is now reachable (pre-fix, handleClose was never triggered synchronously from within clearData()).
Other factors
- The 82cb6dc double-free fix (null-before-deinit in
clearData()) is correct and present - The revert of 02b76d2 is unexplained in the timeline; the author may have had a reason, but the UAF analysis in inline comment #3054224903 was acknowledged as correct before the revert
- The remaining open inline comment (#3054639610, pre-existing UAF in
onHandshake(false)) is a separate pre-existing issue and does not block this PR on its own - The test nit (JSON.parse before exitCode) is a debug ergonomics issue only
A fixture crash (SIGSEGV/ASAN abort) leaves stdout empty. JSON.parse('')
throws 'Unexpected end of input' which masks the real failure (exit
code 139/134, LeakSanitizer stderr). Check exitCode and stderr first
so the developer sees 'expected 0 but received 139' instead.
There was a problem hiding this comment.
The cancel() UAF fix (moving this.ref() before this.clearData()) from commit 02b76d2 was reverted in 7d2fa8f without explanation — please clarify whether this path is safe before merging.
Extended reasoning...
Overview
This PR fixes a significant ~2.5MB-per-connection resource leak in wss-via-HTTP-proxy connections by adding sock.close() to WebSocketProxyTunnel.shutdown(), ensuring handleClose fires and the cleanup chain runs. The double-free re-entrancy risk in clearData() was correctly addressed by nulling this.proxy before calling local.deinit().
Security risks
No auth or crypto concerns. The main risk is use-after-free in lifecycle management code.
Outstanding concern
In a previous review round I flagged a 🔴 UAF: cancel() calls this.clearData() at line 383 before the this.ref() safety guard at line 387. With the PR's new sock.close() in tunnel.shutdown(), clearData() can now synchronously drop the refcount to 0 (via clearData → proxy.deinit → tunnel.shutdown → sock.close → handleClose → dispatchAbruptClose → deref → bun.destroy) before cancel() can protect itself. robobun acknowledged this was correct and fixed it in commit 02b76d2. That commit was then reverted wholesale in 7d2fa8f ('back out close_notify reorder + cancel ref reorder') without any explanation in the PR thread, and the inline comment was left marked 'resolved' despite the fix being gone.
Level of scrutiny
This is production-critical socket lifecycle code. The revert of an acknowledged UAF fix without explanation warrants human review to confirm either (a) the path is unreachable in practice (e.g. the socket is always already closed when C++ calls cancel()), or (b) the fix should be reapplied via a different approach than what 02b76d2 used.
Other factors
All other issues I raised in previous review rounds have been addressed. The core leak fix and the double-free guard are correct. The test infrastructure is solid. This one outstanding lifecycle question is what blocks auto-approval.
|
The revert in 7d2fa8f was deliberate and empirical, not a disagreement with the UAF analysis. Leaving it unexplained was my mistake — here is the trace. Sequence:
The theoretical UAF window you identified exists on paper: So the choice was:
I picked (B). The path you flagged requires:
In the test failures after 02b76d2, the impact was real and broad (3 distinct error-dispatch tests + 3 timeouts). The theoretical UAF, by contrast, has no reproducer. I could not reconcile the two in the time window. If you have a better way to fix the ref ordering that doesn't touch the |
…29101) ## What does this PR do? Calling `ws.close()` on a `wss://` WebSocket that is connecting through an HTTP CONNECT proxy while the inner TLS handshake is in flight double-freed `WebSocketProxy.#target_host`, corrupting the mimalloc freelist. Subsequent allocations (X509 parse, `path.normalize`, `Buffer.from`) crashed at garbage addresses. The re-entrancy chain: `clearData()` → `WebSocketProxy.deinit()` frees `target_host` then calls `tunnel.shutdown()` → `SSLWrapper.shutdown(true)` synchronously fires `onHandshake`/`onClose` → `WebSocketProxyTunnel.onClose` → `upgrade_client.terminate()` → `fail()` → `tcp.close()` → `handleClose()` → `clearData()` again, before `this.proxy` had been set to `null`, so `deinit()` runs a second time. Fix: null `this.proxy` and detach the tunnel's non-owning `#upgrade_client` back-pointer before calling `deinit()`, so the SSLWrapper shutdown callbacks cannot re-enter the upgrade client. `#upgrade_client` is not an owning reference (the tunnel's own `deinit` never frees it and `setConnectedWebSocket` already clears it the same way), so detaching does not leak. ## Related issues Most likely fixes #28153 — crash report shows `WebSocket(25)` + `http_client_proxy(966)` both active, segfault at `0x73746E657665227B` = ASCII `{"events`, the exact freed-string-as-pointer freelist-corruption signature this bug produces. Most likely fixes #27790 — same reporter/app/env as #28153 with `WebSocket(4)` + `http_client_proxy(84)`, segfault after ~3min uptime. Not a duplicate of #28965 — that PR fixes a memory leak (tunnel.shutdown never closed the underlying socket); this one fixes a double-free. They touch adjacent code but are independent bugs. ## How did you verify your code works? New test `test/js/web/websocket/websocket-proxy-close-reentrancy.test.ts` spawns a fixture that opens 80 `wss://` connections through a local HTTP CONNECT proxy and calls `close()` at staggered offsets across the CONNECT → TLS handshake window. Before this change the fixture segfaults in ~50ms on a release build (crash address `0x5448202F20544547` = ASCII `"GET / HT"` — freed `target_host` bytes reinterpreted as a pointer). With the fix it exits 0 in ~1.3s on a debug build. Skipped on Windows where libuv defers the close callback so the re-entrancy cannot occur.
fix(websocket): close proxy socket on tunnel shutdown so wss-via-proxy upgrade resources free
WebSocketProxyTunnel.shutdown() only sent TLS close_notify and never
closed the underlying socket, so the upgrade client's handleClose never
fired, proxy.deinit never ran, and the tunnel + per-connection SSL_CTX +
upgrade client all leaked (~2.5MB per wss-via-http-proxy connection).
shutdown() now takes the socket out (so a re-entrant call via
handleClose -> clearData -> proxy.deinit -> tunnel.shutdown finds .none
and is a no-op) and closes it. The synchronous re-entry is refcount-safe
without an extra guard: tunnel is at 2 (proxy + WebSocketClient) before
cleanup; proxy.deinit derefs to 1; the WebSocketClient cleanup's own
deref takes it to 0.
Test runs 200 connect/close cycles with servers in a separate process
and asserts client RSS growth < 1.5MB/iter (pre-fix ~2.5MB/iter,
post-fix ~1.0MB/iter).