Enforce global stream limit per WebSocket subscription #8536

Open
peterargue wants to merge 4 commits into master from peter/improve-max-stream-enforcement-ws

Conversation

@peterargue

@peterargue peterargue commented Apr 8, 2026

Summary

  • Moves the global stream limiter from the WebSocket connection level to the subscription level
  • The /ws endpoint multiplexes many subscriptions over a single connection, so the previous connection-level enforcement was incorrect: one connection could bypass MaxGlobalStreams, while idle connections consumed the budget
  • Adds Acquire()/Release() methods to ConcurrencyLimiter for lifecycle-scoped slot management
  • The limiter is now passed into the WebSocket controller and acquired/released per subscription in handleSubscribe
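A minimal sketch of what lifecycle-scoped Acquire()/Release() on a ConcurrencyLimiter can look like. The CAS loop, the underflow guard in Release(), and the field names are illustrative assumptions, not the exact flow-go implementation:

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// ConcurrencyLimiter sketch: a bounded counter with explicit acquire/release.
type ConcurrencyLimiter struct {
	max             int32
	totalConcurrent atomic.Int32
}

func NewConcurrencyLimiter(max int32) *ConcurrencyLimiter {
	return &ConcurrencyLimiter{max: max}
}

// Acquire reserves a slot, returning false once the limit is reached.
func (l *ConcurrencyLimiter) Acquire() bool {
	for {
		cur := l.totalConcurrent.Load()
		if cur >= l.max {
			return false
		}
		if l.totalConcurrent.CompareAndSwap(cur, cur+1) {
			return true
		}
	}
}

// Release returns a slot; the load-and-CAS guard keeps a double Release
// from underflowing the counter below zero.
func (l *ConcurrencyLimiter) Release() {
	for {
		cur := l.totalConcurrent.Load()
		if cur == 0 {
			return
		}
		if l.totalConcurrent.CompareAndSwap(cur, cur-1) {
			return
		}
	}
}

func main() {
	l := NewConcurrencyLimiter(2)
	fmt.Println(l.Acquire(), l.Acquire(), l.Acquire()) // true true false
	l.Release()
	fmt.Println(l.Acquire()) // true
}
```

Pairing each successful Acquire() with exactly one Release() is what lets the slot track a subscription's lifetime rather than a connection's.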

Test plan

  • ConcurrencyLimiter tests for Acquire/Release (within limit, at limit, concurrent)
  • Controller test: subscription rejected with 429 when global limit exhausted
  • Controller test: slot released when provider creation fails
  • Controller test: slot released when provider completes
  • Full websocket test suite passes

Summary by CodeRabbit

Release Notes

  • New Features

    • Added concurrency limiting for WebSocket subscriptions to prevent server overload and ensure stable stream handling.
  • Bug Fixes

    • Added validation to enforce that state streaming maximum global stream configuration must be greater than zero, preventing invalid startup configurations.
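The startup validation described above can be sketched as follows; the function name is hypothetical, but the flag name and error string match the ones quoted in the review below:

```go
package main

import (
	"errors"
	"fmt"
)

// validateStateStreamConfig rejects a zero stream limit at flag validation
// time, before the limiter is ever constructed.
func validateStateStreamConfig(maxGlobalStreams uint32) error {
	if maxGlobalStreams == 0 {
		return errors.New("state-stream-global-max-streams must be greater than 0")
	}
	return nil
}

func main() {
	fmt.Println(validateStateStreamConfig(0))   // state-stream-global-max-streams must be greater than 0
	fmt.Println(validateStateStreamConfig(100)) // <nil>
}
```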

@github-actions

github-actions Bot commented Apr 8, 2026

Dependency Review

✅ No vulnerabilities, license issues, or OpenSSF Scorecard issues found.

Snapshot Warnings

⚠️: No snapshots were found for the head SHA 247595a.
Ensure that dependencies are being submitted on PR branches and consider enabling retry-on-snapshot-warnings. See the documentation for more information and troubleshooting advice.

Scanned Files

None

@coderabbitai

coderabbitai Bot commented Apr 8, 2026

📝 Walkthrough

Walkthrough

This PR implements WebSocket stream concurrency limiting for Flow's access node. It adds validation for state stream configuration, refactors the ConcurrencyLimiter to expose explicit Acquire() and Release() methods, and integrates stream limiting into the WebSocket handler and controller to restrict concurrent subscriptions.

Changes

  • Configuration Validation — cmd/access/node_builder/access_node_builder.go, cmd/observer/node_builder/observer_builder.go: Added validation in extraFlags() to reject configurations where MaxGlobalStreams == 0, returning the error "state-stream-global-max-streams must be greater than 0".
  • Concurrency Limiter Core — module/limiters/concurrency_limiter.go, module/limiters/concurrency_limiter_test.go: Refactored ConcurrencyLimiter to expose public Acquire() and Release() methods for explicit acquire/release pairing, with Allow() updated to use them. Added unit tests covering limit exhaustion, recovery after release, and concurrent stress scenarios.
  • Router and Server Wiring — engine/access/rest/router/router.go, engine/access/rest/server.go: Extended NewRouterBuilder and updated AddLegacyWebsocketsRoutes and AddWebsocketsRoute to accept *limiters.ConcurrencyLimiter parameters. Added validation in NewServer to reject a nil limiter when websocket routes are enabled.
  • WebSocket Handler and Controller — engine/access/rest/websockets/handler.go, engine/access/rest/websockets/controller.go: Updated the handler to accept and pass streamLimiter to the controller. Renamed the controller's rate limiter field to rateLimiter, added a streamLimiter field, and made NewWebSocketController require it (returns an error if nil). Updated handleSubscribe to acquire the limiter before processing a subscription and to release it on all exit paths.
  • WebSocket Controller Tests — engine/access/rest/websockets/controller_test.go: Added streamLimiter to the test suite and updated all constructor calls. Introduced TestGlobalStreamLimiter with three subtests validating limiter-exhaustion rejection, failure-path cleanup, and successful acquisition/release sequencing.

Sequence Diagram

sequenceDiagram
    participant Client
    participant Handler as WebSocket Handler
    participant Limiter as ConcurrencyLimiter
    participant Controller as Controller
    participant Provider as DataProvider

    Client->>Handler: WebSocket Subscribe Request
    Handler->>Controller: NewWebSocketController(streamLimiter)
    Controller->>Limiter: Acquire()
    alt Limiter Exhausted
        Limiter-->>Controller: false
        Controller-->>Handler: error (429 Too Many Requests)
        Handler-->>Client: Reject with 429
    else Limiter Available
        Limiter-->>Controller: true
        Controller->>Provider: NewDataProvider()
        alt Provider Creation Fails
            Provider-->>Controller: error
            Controller->>Limiter: Release()
            Controller-->>Handler: error (400 Bad Request)
            Handler-->>Client: Reject with 400
        else Provider Created
            Provider-->>Controller: provider
            Controller->>Controller: handleSubscribe (provider runs in goroutine)
            Note over Controller: defer streamLimiter.Release()
            Controller-->>Handler: Subscribe Response
            Handler-->>Client: Success
            Provider->>Provider: Process events...
            Provider->>Limiter: Release() (on goroutine completion)
            Limiter-->>Provider: slot released
        end
    end

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Possibly related PRs

Suggested reviewers

  • zhangchiqing
  • fxamacker

Poem

🐰 A limiter takes the stream's reins so tight,
Acquiring slots as subscriptions ignite,
When limits are reached, we gracefully deny,
Then release the tokens as goroutines say goodbye,
Bounded concurrency—a rabbit's delight! 🎯

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage ⚠️ Warning — Docstring coverage is 56.00%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.

✅ Passed checks (2 passed)

  • Description Check ✅ Passed — Check skipped; CodeRabbit's high-level summary is enabled.
  • Title Check ✅ Passed — The title 'Enforce global stream limit per WebSocket subscription' directly and accurately describes the main change: moving global stream limiter enforcement from the connection level to the subscription level.



@codecov-commenter

codecov-commenter commented Apr 8, 2026


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 4

🧹 Nitpick comments (3)
cmd/observer/node_builder/observer_builder.go (1)

1132-1141: Validate state-stream-global-max-streams during flag validation, not during build.

Now that limiter creation is unconditional, a bad value (e.g., 0) fails later in Build(). Prefer failing earlier in flag validation with a direct config error.

💡 Suggested patch
@@
 		if builder.rpcConf.RestConfig.MaxRequestSize <= 0 {
 			return errors.New("rest-max-request-size must be greater than 0")
 		}
+		if builder.stateStreamConf.MaxGlobalStreams == 0 {
+			return errors.New("state-stream-global-max-streams must be greater than 0")
+		}
 
 		return nil
 	})
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cmd/observer/node_builder/observer_builder.go` around lines 1132 - 1141, The
stream limiter is being created unconditionally in builder.Module (the "stream
limiter" block) causing Build() to fail later for invalid flag values like 0;
instead validate builder.stateStreamConf.MaxGlobalStreams during flag/config
validation and return a clear config error there. Add a check (e.g., ensure
MaxGlobalStreams > 0) in the flag validation path that sets/validates
builder.stateStreamConf before Build() runs, and only call
limiters.NewConcurrencyLimiter in the builder.Module after the validated value
is guaranteed correct; reference the builder.stateStreamConf.MaxGlobalStreams
field and the limiters.NewConcurrencyLimiter call to locate where to add the
pre-check and error.
engine/access/rest/websockets/connection_limited_handler_test.go (1)

31-37: Fail fast if the saturation goroutine does not acquire the slot.

If limiter.Allow ever returns false, started is never closed and Line 37 hangs until the test times out. Please surface the Allow result back to the test so this fails deterministically instead of deadlocking.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@engine/access/rest/websockets/connection_limited_handler_test.go` around
lines 31 - 37, The goroutine calling limiter.Allow can return false and cause
the test to hang because started is never closed; modify the goroutine that
calls limiter.Allow (the call to limiter.Allow in the anonymous go func) to
capture the boolean result and send it back to the test (e.g., via a result
channel) and then close started only if Allow returned true; in the main test
goroutine receive that result and call t.Fatalf or t.Fatalf-like assertion
immediately if Allow returned false so the test fails fast instead of
deadlocking (refer to limiter.Allow, started, unblock).
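The fail-fast pattern suggested above can be sketched as follows; tryAcquire is a hypothetical stand-in for the test's limiter.Allow call, and the channel names are illustrative:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// saturate runs tryAcquire in a goroutine and reports the result on a
// buffered channel, so the caller can fail fast when acquisition fails
// instead of blocking forever on a channel that is never closed.
func saturate(tryAcquire func() bool, hold <-chan struct{}) <-chan error {
	result := make(chan error, 1)
	go func() {
		if !tryAcquire() {
			result <- errors.New("saturation goroutine failed to acquire the slot")
			return
		}
		result <- nil // success is reported before the goroutine parks on hold
		<-hold
	}()
	return result
}

func main() {
	hold := make(chan struct{})
	defer close(hold)
	select {
	case err := <-saturate(func() bool { return false }, hold):
		fmt.Println(err) // saturation goroutine failed to acquire the slot
	case <-time.After(time.Second):
		fmt.Println("timed out waiting for acquisition result")
	}
}
```

The test can then assert on the received error with t.Fatalf immediately, rather than deadlocking until the suite timeout.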
engine/access/rest/websockets/controller_test.go (1)

34-50: Avoid sharing one limiter across t.Parallel() subtests.

SetupTest builds a single s.streamLimiter, and the subtests in this suite reuse it while running in parallel. Any missed Release() in one path will bleed into sibling cases and make the suite flaky. TestGlobalStreamLimiter already uses the safer pattern here: create a fresh limiter per subtest, or drop t.Parallel() for the cases that share suite state.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@engine/access/rest/websockets/controller_test.go` around lines 34 - 50, The
suite currently creates a shared limiter in SetupTest (s.streamLimiter) which is
reused by parallel subtests; instead make each parallel subtest construct its
own limiter (call limiters.NewConcurrencyLimiter(...) inside the individual test
function or subtest) or stop using t.Parallel() for tests that rely on shared
WsControllerSuite state; locate references to s.streamLimiter in the suite tests
(and compare the safer pattern used in TestGlobalStreamLimiter) and change those
tests to create and use a fresh limiter per subtest (or remove parallelization)
so a missed Release() cannot affect sibling cases.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@cmd/access/node_builder/access_node_builder.go`:
- Around line 2170-2179: The startup now constructs the stream limiter
unconditionally (limiters.NewConcurrencyLimiter using
builder.stateStreamConf.MaxGlobalStreams) so you must validate
state-stream-global-max-streams earlier in the flag/config validation path (the
same place other RPC-required settings are validated) rather than letting
Build() fail late; add an unconditional check for
builder.stateStreamConf.MaxGlobalStreams in the existing validation routine and
return a clear validation error if it is out of the allowed range/invalid,
ensuring the invalid value is reported during flag validation instead of when
NewConcurrencyLimiter is called in Build().

In `@engine/access/rest/server.go`:
- Around line 56-61: In NewServer, if stateStreamApi (the websocket routes flag)
is non-nil but limiter (the *limiters.ConcurrencyLimiter) is nil, return an
error immediately instead of allowing a nil limiter into
AddLegacyWebsocketsRoutes; add a nil check before calling
builder.AddLegacyWebsocketsRoutes and return a descriptive error (e.g.,
fmt.Errorf or errors.New) identifying that limiter is required when
stateStreamApi is enabled. Also apply the same nil-check logic at the second
websocket-route site referenced around the other AddLegacyWebsocketsRoutes call
so no code path can pass a nil limiter into websocket handlers.

In `@engine/access/rest/websockets/controller.go`:
- Around line 449-458: The code calls c.streamLimiter.Acquire() (and later
.Release()) without nil checks which can panic if NewWebSocketController was
given a nil limiter; add a defensive nil guard similar to checkRateLimit: before
calling c.streamLimiter.Acquire() verify if c.streamLimiter != nil and treat a
nil limiter as "no limit" (skip Acquire) or log/deny as appropriate, and
likewise wrap all subsequent c.streamLimiter.Release() calls (lines referenced
around the subscription flow) with nil checks; make sure the error handling path
that calls c.writeErrorResponse(..., wrapErrorMessage(...,
models.SubscribeAction, msg.SubscriptionID)) remains unchanged except for
guarding the Acquire/Release calls.

In `@module/limiters/concurrency_limiter.go`:
- Around line 46-47: The Release() method unconditionally calls
totalConcurrent.Sub(1) which can underflow if Release() is called too many
times; change Release in ConcurrencyLimiter to guard against underflow by
reading the current counter (totalConcurrent.Load() or equivalent), returning
early (or logging) if it is zero, otherwise perform a safe atomic decrement
using a CAS loop (atomic.CompareAndSwap/CompareAndSwapUint32) or conditional
FetchSub only when the loaded value > 0; reference the
ConcurrencyLimiter.Release method and the totalConcurrent field and ensure the
fix preserves concurrency semantics with Acquire().

---

Nitpick comments:
In `@cmd/observer/node_builder/observer_builder.go`:
- Around line 1132-1141: The stream limiter is being created unconditionally in
builder.Module (the "stream limiter" block) causing Build() to fail later for
invalid flag values like 0; instead validate
builder.stateStreamConf.MaxGlobalStreams during flag/config validation and
return a clear config error there. Add a check (e.g., ensure MaxGlobalStreams >
0) in the flag validation path that sets/validates builder.stateStreamConf
before Build() runs, and only call limiters.NewConcurrencyLimiter in the
builder.Module after the validated value is guaranteed correct; reference the
builder.stateStreamConf.MaxGlobalStreams field and the
limiters.NewConcurrencyLimiter call to locate where to add the pre-check and
error.

In `@engine/access/rest/websockets/connection_limited_handler_test.go`:
- Around line 31-37: The goroutine calling limiter.Allow can return false and
cause the test to hang because started is never closed; modify the goroutine
that calls limiter.Allow (the call to limiter.Allow in the anonymous go func) to
capture the boolean result and send it back to the test (e.g., via a result
channel) and then close started only if Allow returned true; in the main test
goroutine receive that result and call t.Fatalf or t.Fatalf-like assertion
immediately if Allow returned false so the test fails fast instead of
deadlocking (refer to limiter.Allow, started, unblock).

In `@engine/access/rest/websockets/controller_test.go`:
- Around line 34-50: The suite currently creates a shared limiter in SetupTest
(s.streamLimiter) which is reused by parallel subtests; instead make each
parallel subtest construct its own limiter (call
limiters.NewConcurrencyLimiter(...) inside the individual test function or
subtest) or stop using t.Parallel() for tests that rely on shared
WsControllerSuite state; locate references to s.streamLimiter in the suite tests
(and compare the safer pattern used in TestGlobalStreamLimiter) and change those
tests to create and use a fresh limiter per subtest (or remove parallelization)
so a missed Release() cannot affect sibling cases.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 6025dcca-ed94-41d3-9830-420704534277

📥 Commits

Reviewing files that changed from the base of the PR and between eb3fcc3 and 32226a9.

📒 Files selected for processing (29)
  • cmd/access/node_builder/access_node_builder.go
  • cmd/observer/node_builder/observer_builder.go
  • cmd/util/cmd/run-script/cmd.go
  • engine/access/access_test.go
  • engine/access/handle_irrecoverable_state_test.go
  • engine/access/integration_unsecure_grpc_server_test.go
  • engine/access/rest/router/router.go
  • engine/access/rest/router/router_test_helpers.go
  • engine/access/rest/server.go
  • engine/access/rest/websockets/connection_limited_handler.go
  • engine/access/rest/websockets/connection_limited_handler_test.go
  • engine/access/rest/websockets/controller.go
  • engine/access/rest/websockets/controller_test.go
  • engine/access/rest/websockets/handler.go
  • engine/access/rest/websockets/legacy/routes/subscribe_events_test.go
  • engine/access/rest/websockets/legacy/websocket_handler.go
  • engine/access/rest_api_test.go
  • engine/access/rpc/engine.go
  • engine/access/rpc/engine_builder.go
  • engine/access/rpc/handler.go
  • engine/access/rpc/handler_test.go
  • engine/access/rpc/rate_limit_test.go
  • engine/access/secure_grpcr_test.go
  • engine/access/state_stream/backend/engine.go
  • engine/access/state_stream/backend/handler.go
  • engine/access/state_stream/backend/handler_test.go
  • engine/access/subscription/streaming_data.go
  • module/limiters/concurrency_limiter.go
  • module/limiters/concurrency_limiter_test.go
💤 Files with no reviewable changes (2)
  • engine/access/rest/websockets/legacy/websocket_handler.go
  • engine/access/subscription/streaming_data.go

Comment threads:
  • cmd/access/node_builder/access_node_builder.go
  • engine/access/rest/server.go
  • engine/access/rest/websockets/controller.go
  • module/limiters/concurrency_limiter.go (outdated)

@coderabbitai coderabbitai Bot left a comment


🧹 Nitpick comments (1)
cmd/observer/node_builder/observer_builder.go (1)

1133-1142: Validate state-stream-global-max-streams independently of state-stream-addr.

Line 1137 now initializes the limiter unconditionally, but flag validation for state-stream config is mostly gated by ListenAddr. Consider validating MaxGlobalStreams > 0 unconditionally for earlier, clearer startup failures.

♻️ Suggested validation update
 	}).ValidateFlags(func() error {
 		if builder.executionDataSyncEnabled {
 			...
 		}
+		if builder.stateStreamConf.MaxGlobalStreams == 0 {
+			return errors.New("state-stream-global-max-streams must be greater than 0")
+		}
 		if builder.stateStreamConf.ListenAddr != "" {
 			if builder.stateStreamConf.ExecutionDataCacheSize == 0 {
 				return errors.New("execution-data-cache-size must be greater than 0")
 			}
 			...
 		}

As per coding guidelines: "treat all inputs as potentially byzantine ... ALWAYS explicitly handle errors rather than logging and continuing".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cmd/observer/node_builder/observer_builder.go` around lines 1133 - 1142, The
stream limiter is being created unconditionally but the numeric flag
builder.stateStreamConf.MaxGlobalStreams is not validated unless
state-stream-addr is set; add an unconditional validation that
builder.stateStreamConf.MaxGlobalStreams > 0 early in startup (before calling
limiters.NewConcurrencyLimiter) and return a clear error if it's not valid so
NewConcurrencyLimiter is never called with a bad value; reference
builder.stateStreamConf.MaxGlobalStreams, limiters.NewConcurrencyLimiter and
builder.streamLimiter when making the check and error return.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Nitpick comments:
In `@cmd/observer/node_builder/observer_builder.go`:
- Around line 1133-1142: The stream limiter is being created unconditionally but
the numeric flag builder.stateStreamConf.MaxGlobalStreams is not validated
unless state-stream-addr is set; add an unconditional validation that
builder.stateStreamConf.MaxGlobalStreams > 0 early in startup (before calling
limiters.NewConcurrencyLimiter) and return a clear error if it's not valid so
NewConcurrencyLimiter is never called with a bad value; reference
builder.stateStreamConf.MaxGlobalStreams, limiters.NewConcurrencyLimiter and
builder.streamLimiter when making the check and error return.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: d2fdb25a-4bbf-4a00-a0f3-26d51e29d507

📥 Commits

Reviewing files that changed from the base of the PR and between 32226a9 and 648e56b.

📒 Files selected for processing (2)
  • cmd/access/node_builder/access_node_builder.go
  • cmd/observer/node_builder/observer_builder.go
🚧 Files skipped from review as they are similar to previous changes (1)
  • cmd/access/node_builder/access_node_builder.go


@coderabbitai coderabbitai Bot left a comment


🧹 Nitpick comments (2)
module/limiters/concurrency_limiter_test.go (1)

121-163: Make this test prove actual overlap.

This only asserts peak <= maxConcurrent, so it still passes if the goroutines run mostly serially or if Acquire() starts spuriously failing under contention. Hold successful acquirers behind a barrier and assert the test actually reaches full capacity.

♻️ Suggested test tightening
 func TestConcurrencyLimiter_Acquire_ConcurrentCalls(t *testing.T) {
 	const maxConcurrent = 5
 	const totalGoroutines = 50
@@
-	start := make(chan struct{})
+	start := make(chan struct{})
+	hold := make(chan struct{})
@@
 			<-start
 			if limiter.Acquire() {
 				n := current.Add(1)
 				for {
 					old := peak.Load()
 					if n <= old || peak.CompareAndSwap(old, n) {
 						break
 					}
 				}
-				time.Sleep(time.Millisecond)
+				<-hold
 				current.Add(-1)
 				limiter.Release()
 			}
 		}()
 	}
 
 	close(start)
+	require.Eventually(t, func() bool {
+		return peak.Load() == int32(maxConcurrent)
+	}, time.Second, time.Millisecond)
+	close(hold)
 	wg.Wait()
 
-	assert.LessOrEqual(t, peak.Load(), int32(maxConcurrent),
-		"peak concurrent acquisitions must not exceed maxConcurrent")
+	assert.Equal(t, int32(maxConcurrent), peak.Load(),
+		"test should observe the limiter at full capacity")
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@module/limiters/concurrency_limiter_test.go` around lines 121 - 163, Update
TestConcurrencyLimiter_Acquire_ConcurrentCalls so it proves real overlap by
blocking successful acquirers on a barrier until at least maxConcurrent have
acquired: after limiter.Acquire() succeeds in the goroutine, increment current
and send a token on a rendezvous channel (or increment a sync.WaitGroup counter)
and then wait on a separate release channel (or waitgroup) that is only
closed/released once you have observed maxConcurrent tokens; only then let
acquirers sleep and release. After the test runs, assert that you observed at
least maxConcurrent simultaneous acquirers (e.g., check a counter or that
peak.Load() >= int32(maxConcurrent)) instead of just peak <= maxConcurrent;
reference symbols: TestConcurrencyLimiter_Acquire_ConcurrentCalls,
limiter.Acquire, limiter.Release, peak, current, maxConcurrent, totalGoroutines.
engine/access/rest/websockets/controller_test.go (1)

258-399: Add a malformed-subscription case for the remaining Release() path.

handleSubscribe() now releases the global slot when parseOrCreateSubscriptionID() fails, but this suite only covers exhaustion, provider creation failure, and provider completion. A regression there would leak a slot on bad client input without tripping these tests.

🧪 Suggested subtest
 func (s *WsControllerSuite) TestGlobalStreamLimiter() {
+	s.T().Run("Releases slot when subscription ID parsing fails", func(t *testing.T) {
+		t.Parallel()
+
+		streamLimiter, err := limiters.NewConcurrencyLimiter(1)
+		require.NoError(t, err)
+
+		conn, dataProviderFactory, _ := newControllerMocks(t)
+		controller, err := NewWebSocketController(s.logger, s.wsConfig, conn, dataProviderFactory, streamLimiter)
+		require.NoError(t, err)
+
+		request := models.SubscribeMessageRequest{
+			BaseMessageRequest: models.BaseMessageRequest{
+				SubscriptionID: uuid.New().String() + " .42",
+				Action:         models.SubscribeAction,
+			},
+			Topic: dp.BlocksTopic,
+		}
+		requestJSON, err := json.Marshal(request)
+		require.NoError(t, err)
+
+		done := make(chan struct{})
+		conn.
+			On("ReadJSON", mock.Anything).
+			Run(func(args mock.Arguments) {
+				msg, ok := args.Get(0).(*json.RawMessage)
+				require.True(t, ok)
+				*msg = requestJSON
+			}).
+			Return(nil).
+			Once()
+
+		conn.
+			On("WriteJSON", mock.Anything).
+			Return(func(msg interface{}) error {
+				defer close(done)
+
+				response, ok := msg.(models.BaseMessageResponse)
+				require.True(t, ok)
+				require.NotEmpty(t, response.Error)
+				require.Equal(t, http.StatusBadRequest, response.Error.Code)
+
+				return &websocket.CloseError{Code: websocket.CloseNormalClosure}
+			}).
+			Once()
+
+		s.expectCloseConnection(conn, done)
+		controller.HandleConnection(context.Background())
+
+		require.True(t, streamLimiter.Acquire(), "slot should be released after subscription ID parse failure")
+		streamLimiter.Release()
+
+		conn.AssertExpectations(t)
+		dataProviderFactory.AssertExpectations(t)
+	})
+
 	s.T().Run("Rejects subscription when global limit reached", func(t *testing.T) {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@engine/access/rest/websockets/controller_test.go` around lines 258 - 399, Add
a subtest to TestGlobalStreamLimiter that simulates a malformed subscription to
exercise the code path where handleSubscribe calls parseOrCreateSubscriptionID
and fails: create a ConcurrencyLimiter(1), acquire its slot to simulate full
capacity, instantiate the controller via NewWebSocketController with that
limiter, arrange the mock connection to send a subscribe payload that will cause
parseOrCreateSubscriptionID to fail (use the existing s.expectSubscribeRequest
helper if available or craft a bad subscribe message), set expectations that
conn.WriteJSON is called with a models.BaseMessageResponse containing an error
(HTTP 400) and the connection is closed, ensure
dataProviderFactory.NewDataProvider is never called, call
controller.HandleConnection, and finally assert the limiter slot was released by
checking streamLimiter.Acquire() returns true; this verifies handleSubscribe
releases the global slot on malformed input without leaking.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Nitpick comments:
In `@engine/access/rest/websockets/controller_test.go`:
- Around line 258-399: Add a subtest to TestGlobalStreamLimiter that simulates a
malformed subscription to exercise the code path where handleSubscribe calls
parseOrCreateSubscriptionID and fails: create a ConcurrencyLimiter(1), acquire
its slot to simulate full capacity, instantiate the controller via
NewWebSocketController with that limiter, arrange the mock connection to send a
subscribe payload that will cause parseOrCreateSubscriptionID to fail (use the
existing s.expectSubscribeRequest helper if available or craft a bad subscribe
message), set expectations that conn.WriteJSON is called with a
models.BaseMessageResponse containing an error (HTTP 400) and the connection is
closed, ensure dataProviderFactory.NewDataProvider is never called, call
controller.HandleConnection, and finally assert the limiter slot was released by
checking streamLimiter.Acquire() returns true; this verifies handleSubscribe
releases the global slot on malformed input without leaking.

In `@module/limiters/concurrency_limiter_test.go`:
- Around line 121-163: Update TestConcurrencyLimiter_Acquire_ConcurrentCalls so
it proves real overlap by blocking successful acquirers on a barrier until at
least maxConcurrent have acquired: after limiter.Acquire() succeeds in the
goroutine, increment current and send a token on a rendezvous channel (or
increment a sync.WaitGroup counter) and then wait on a separate release channel
(or waitgroup) that is only closed/released once you have observed maxConcurrent
tokens; only then let acquirers sleep and release. After the test runs, assert
that you observed at least maxConcurrent simultaneous acquirers (e.g., check a
counter or that peak.Load() >= int32(maxConcurrent)) instead of just peak <=
maxConcurrent; reference symbols:
TestConcurrencyLimiter_Acquire_ConcurrentCalls, limiter.Acquire,
limiter.Release, peak, current, maxConcurrent, totalGoroutines.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 2779b87a-6506-437d-9a62-bc28c901e6a0

📥 Commits

Reviewing files that changed from the base of the PR and between 648e56b and 247595a.

📒 Files selected for processing (8)
  • cmd/access/node_builder/access_node_builder.go
  • cmd/observer/node_builder/observer_builder.go
  • engine/access/rest/server.go
  • engine/access/rest/websockets/controller.go
  • engine/access/rest/websockets/controller_test.go
  • engine/access/rest/websockets/handler.go
  • module/limiters/concurrency_limiter.go
  • module/limiters/concurrency_limiter_test.go
✅ Files skipped from review due to trivial changes (1)
  • cmd/observer/node_builder/observer_builder.go
🚧 Files skipped from review as they are similar to previous changes (2)
  • module/limiters/concurrency_limiter.go
  • engine/access/rest/websockets/handler.go
