
fix(core): Resolve PagedAttention VRAM leak and scheduler deadlock during OOM #2045

Open

glaziermag wants to merge 1 commit into EricLBuehler:master from glaziermag:fix-paged-attention-oom-deadlock

Conversation

@glaziermag (Contributor) commented Apr 2, 2026

Fixes the root cause of the PagedAttention scheduler freeze under VRAM exhaustion and other pipeline faults.

The Problem

When the pipeline faults under memory stress (CUDA OOM) or other GPU errors (e.g., a shape mismatch), the engine freezes permanently. The scheduler keeps reporting a full KV cache and rejects incoming requests, with the running/waiting counters stuck at their last values (16 running, 240 waiting in the trace below).

This occurs because:

  1. The error-handling macro handle_pipeline_forward_error! catches the pipeline error and sends it back to the client.
  2. The macro then issues continue 'lp;, jumping back to the head of the main engine loop and skipping the remainder of the loop body.
  3. This bypasses the cleanup call scheduler.free_finished_sequence_groups() at mod.rs:837, so aborted sequences keep their KV-cache blocks until the process is restarted (see the sketch after this list).
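
For illustration, here is a minimal, self-contained sketch of the failure path. All types and helpers below are stand-ins I introduce for this example; only `continue 'lp;` and `free_finished_sequence_groups()` come from the actual engine code.

```rust
// Minimal sketch of the buggy control flow. All names are stand-ins
// except `continue 'lp;` and `free_finished_sequence_groups()`.

struct Scheduler {
    held_groups: usize, // sequence groups still holding KV-cache blocks
}

impl Scheduler {
    fn free_finished_sequence_groups(&mut self) {
        self.held_groups = 0; // reclaim blocks of finished groups
    }
}

// Simulate a pipeline that faults on every step (e.g. CUDA OOM).
fn pipeline_forward() -> Result<(), &'static str> {
    Err("CUDA_ERROR_OUT_OF_MEMORY")
}

fn main() {
    let mut scheduler = Scheduler { held_groups: 16 };
    let mut steps = 0;
    'lp: loop {
        steps += 1;
        if steps > 3 {
            break; // cap iterations so the demo terminates
        }
        // handle_pipeline_forward_error! expands to roughly this:
        if let Err(e) = pipeline_forward() {
            eprintln!("step - Model failed with error: {e}");
            continue 'lp; // jumps past the cleanup below
        }
        // Never reached after a fault: errored groups keep their
        // KV-cache blocks, so the scheduler reports a full cache forever.
        scheduler.free_finished_sequence_groups();
    }
    assert_eq!(scheduler.held_groups, 16); // the leak in miniature
}
```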

The Solution

  1. Adds SequenceState::Error to the states that is_finished_paged_attn() treats as finished, so errored sequences become eligible for cleanup.
  2. Discards the result of seq.responder().send(...) so that clients that dropped their connection during memory starvation cannot derail the error path.
  3. Calls get_mut_arcmutex!($scheduler).free_finished_sequence_groups() inside handle_pipeline_forward_error!, immediately before the continue 'lp; jump, restoring the normal cleanup flow (a sketch of all three fixes follows this list).
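
A sketch of the patched flow under the same stand-in types, assuming a plain mpsc channel in place of the real responder; only SequenceState::Error, is_finished_paged_attn(), the responder send, and free_finished_sequence_groups() are names from the actual change.

```rust
// Sketch of the three fixes with stand-in types; the real responder
// is accessed as seq.responder().send(...), modeled here as a field.

use std::sync::mpsc;

#[allow(dead_code)]
enum SequenceState {
    Running,
    Done,
    Error, // sequences that hit a pipeline fault
}

struct Sequence {
    state: SequenceState,
    responder: mpsc::Sender<String>,
}

impl Sequence {
    // Fix 1: Error now counts as finished, making the group eligible
    // for the scheduler's cleanup sweep.
    fn is_finished_paged_attn(&self) -> bool {
        matches!(self.state, SequenceState::Done | SequenceState::Error)
    }
}

fn free_finished_sequence_groups(seqs: &mut Vec<Sequence>) {
    seqs.retain(|s| !s.is_finished_paged_attn()); // reclaim their blocks
}

fn main() {
    let (tx, rx) = mpsc::channel();
    drop(rx); // simulate a client that hung up during memory starvation

    let mut seqs = vec![Sequence { state: SequenceState::Error, responder: tx }];

    // On a pipeline fault, the patched macro does the following before
    // jumping back to the loop head with `continue 'lp;`:
    for seq in &seqs {
        // Fix 2: discard the send result; a dead connection must not
        // abort the error path.
        let _ = seq.responder.send("out of memory".to_string());
    }
    // Fix 3: sweep finished (now including errored) groups before the jump.
    free_finished_sequence_groups(&mut seqs);

    assert!(seqs.is_empty()); // blocks reclaimed; scheduler unblocked
}
```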

Empirical Evidence

Before Patch: Engine Freezes at First Fault

2026-04-02T01:43:12.778848Z ERROR mistralrs_core::engine: step - Model failed with error: DriverError(CUDA_ERROR_OUT_OF_MEMORY, "out of memory")
2026-04-02T01:43:18.244533Z  INFO mistralrs_core::engine::logger: Throughput (T/s) 0.00, Prefix cache hitrate 0.00%, 16 running, 240 waiting

After Patch: Graceful Eviction with No Side Effects
Tested with 128 concurrent curl clients while injecting pipeline faults. Note that the waiting count drops immediately after the fault (112 → 111) and throughput holds steady at 21.80 T/s.

2026-04-02T02:08:47.491999Z  INFO mistralrs_core::engine::logger: Throughput (T/s) 22.40, Prefix cache hitrate 0.00%, 16 running, 112 waiting
2026-04-02T02:08:52.492156Z  INFO mistralrs_core::engine::logger: Throughput (T/s) 19.20, Prefix cache hitrate 0.00%, 16 running, 112 waiting
2026-04-02T02:08:57.208693Z ERROR mistralrs_core::engine: step - Model failed with error: shape mismatch in matmul, lhs: [15, 32, 13, 128], rhs: [1, 32, 128, 675]
2026-04-02T02:08:57.492321Z  INFO mistralrs_core::engine::logger: Throughput (T/s) 21.80, Prefix cache hitrate 11.72%, 16 running, 111 waiting

Reproducibility & Environment

The deadlock traces and recovery metrics above were collected and verified on the following setup:

  • Instance: GCP g2-standard-32
  • GPU: NVIDIA L4 (24GB VRAM)
  • Compute Capability: 8.9
  • Build Target: --features cuda
  • Model: mistralai/Mistral-7B-Instruct-v0.1

