fix(cloudflare): Stabilize MCP OAuth refresh reuse #882
Merged
Cursor Bugbot has reviewed your changes and found 1 potential issue.
Reviewed by Cursor Bugbot for commit 4c21009.
Keep legacy MCP OAuth grants usable while their cached upstream access token still works, instead of revoking them immediately when refresh metadata is missing. This narrows one avoidable logout path and makes the refresh callback classify local-valid, probe-valid, invalid, and indeterminate outcomes explicitly. Expand MCP OAuth integration coverage across authorize, callback, token exchange, refresh reuse, and scoped MCP requests so the normal token lifecycle is exercised through the real entry points. Update the Cloudflare OAuth docs to match the current refresh architecture. Co-Authored-By: OpenAI Codex <noreply@openai.com>
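The four outcomes described above can be sketched as a small classifier. This is an illustrative stand-in, not the repository's actual code: the type name, function name, and TTL/probe-status inputs are all assumptions.

```typescript
// Hypothetical sketch of the four refresh outcomes; names are illustrative.
type RefreshOutcome =
  | "local_valid" // cached upstream token is still within its local TTL
  | "probe_valid" // expired locally, but an upstream probe accepted it
  | "invalid" // upstream definitively rejected the token (4xx)
  | "verification_indeterminate"; // probe failed for transient reasons

function classifyRefresh(opts: {
  expiresAt: number; // cached token expiry, epoch milliseconds
  probeStatus?: number; // HTTP status from an optional upstream probe
  now?: number;
}): RefreshOutcome {
  const now = opts.now ?? Date.now();
  if (opts.expiresAt > now) return "local_valid";
  if (opts.probeStatus === undefined) return "verification_indeterminate";
  if (opts.probeStatus >= 200 && opts.probeStatus < 300) return "probe_valid";
  // Only definite client errors prove the token is bad; 5xx and other
  // failures stay indeterminate so transient outages do not look like expiry.
  if (opts.probeStatus >= 400 && opts.probeStatus < 500) return "invalid";
  return "verification_indeterminate";
}
```

The key property is that only a 2xx probe clears the token and only a 4xx probe condemns it; everything else leaves the grant untouched.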
Update the Cloudflare worker test stack to the newer Vitest v4 and workers pool APIs. This replaces the deprecated vitest-pool-workers config entrypoint, aligns wrangler and related test dependencies, and makes the worker-pool startup failure explicit instead of surfacing as an opaque undefined error. The upgraded stack still cannot run locally on this Ubuntu 20.04 WSL environment because workerd now requires a newer glibc, so CI remains the verification path for these tests. Co-Authored-By: OpenAI Codex <noreply@openai.com>
Adjust the OpenAI provider integration tests to use the Vitest v4 test signature. The previous argument order typechecked under Vitest v3 but now fails the workspace typecheck before the Cloudflare test suite can run in CI. Co-Authored-By: OpenAI Codex <noreply@openai.com>
Adjust the Cloudflare worker test setup for the newer fetchMock lifecycle. The upgraded worker test stack no longer exposes the old deactivate() helper, and calling it caused the entire mcp-cloudflare Vitest suite to fail during teardown before the real test results were visible. Co-Authored-By: OpenAI Codex <noreply@openai.com>
Avoid importing the cloudflare:test fetch mock helpers at setup-file module evaluation time. The Workers Vitest integration can resolve setup-file imports in the wrong environment, which left fetchMock undefined and caused the entire mcp-cloudflare suite to fail before any meaningful assertions ran. Co-Authored-By: OpenAI Codex <noreply@openai.com>
Load the cloudflare:test fetchMock binding inside the shared test helpers instead of at module evaluation time. The upgraded Workers Vitest integration was still leaving fetchMock undefined in the mcp-cloudflare harness, so this defers resolution until the test hook actually runs in the worker runtime. Co-Authored-By: OpenAI Codex <noreply@openai.com>
Move the shared Cloudflare fetch-mock hooks out of Vitest setup-files and into the actual mcp-cloudflare test modules. The Workers Vitest integration only supports cloudflare:test imports from files running inside the worker runtime, so keeping fetchMock registration in shared setup infrastructure left the binding undefined and caused the entire suite to fail before reaching real assertions. Co-Authored-By: OpenAI Codex <noreply@openai.com>
Cloudflare's test APIs are only supported when imported from Worker test modules. Stop routing fetchMock through helper modules and have each actual test file import it directly, while keeping the shared mock registration logic in a runtime-agnostic helper. This keeps the change narrow and aligns the suite with the documented vitest-pool-workers boundary instead of layering more setup indirection. Co-Authored-By: OpenAI Codex <noreply@openai.com>
The upgraded vitest-pool-workers stack is running the suite, but cloudflare:test still does not expose fetchMock at runtime. Make the worker entrypoint explicit in both the pool config and the test Wrangler config so the pool is exercising the same main Worker shape we use in production. This is the narrowest current-version fix to try before rewriting the outbound mocking approach around Miniflare directly. Co-Authored-By: OpenAI Codex <noreply@openai.com>
The upgraded vitest-pool-workers stack does not provide a usable cloudflare:test fetchMock export at runtime, but Miniflare still supports outbound request mocking through its fetchMock worker option. Move the shared Sentry API interceptors into a Miniflare-configured MockAgent and keep the per-test hook as a compatibility no-op. This keeps the current Cloudflare stack and uses the mocking surface it still explicitly supports. Co-Authored-By: OpenAI Codex <noreply@openai.com>
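A minimal sketch of what this configuration can look like, assuming Miniflare's fetchMock worker option accepts an undici MockAgent (the origin and interceptor path below are placeholders, not the repository's actual mocks):

```typescript
import { defineWorkersConfig } from "@cloudflare/vitest-pool-workers/config";
import { MockAgent } from "undici";

// Register shared outbound mocks at config time instead of relying on a
// cloudflare:test fetchMock binding inside the worker runtime.
const fetchMock = new MockAgent();
fetchMock.disableNetConnect(); // fail fast on unmocked outbound requests

// Placeholder interceptor for upstream Sentry API calls.
fetchMock
  .get("https://sentry.io")
  .intercept({ path: /^\/api\//, method: "GET" })
  .reply(200, {})
  .persist();

export default defineWorkersConfig({
  test: {
    poolOptions: {
      workers: {
        // Hand the MockAgent to Miniflare so the worker's outbound
        // fetches are routed through the registered interceptors.
        miniflare: { fetchMock },
      },
    },
  },
});
```

Per-test hooks can then stay as no-ops, since the interceptors persist for the whole suite.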
The Miniflare-configured Vitest setup now loads at config time, which made a bad named import visible: @sentry/mcp-server-mocks/payloads exports eventFixture, not eventsFixture. Correct the import so the upgraded Cloudflare test config can finish loading and the suite can move on to runtime execution. Co-Authored-By: OpenAI Codex <noreply@openai.com>
Replace the metadata route mocks with a real token-backed integration flow so the upgraded worker runtime exercises the actual /api/metadata to /mcp path. This avoids depending on brittle module mocking inside workerd. Also align the callback test env with the wrapped handler and make the indeterminate refresh regression hit a constrained /mcp path that actually forces upstream verification. Co-Authored-By: OpenAI Codex <noreply@openai.com>
Update the Cloudflare route and worker tests to match the upgraded runtime and the actual OAuth resource semantics. The old expectations were mixing hosts, relying on constructor-incompatible mocks, and assuming initialize would prove upstream token validity. Keep the integration coverage, but narrow the metadata assertions to what the route actually guarantees when MCP tool discovery fails closed to an empty list. Co-Authored-By: OpenAI Codex <noreply@openai.com>
Restore a constructor-compatible OAuthProvider spy in the worker entrypoint tests and provide an OpenAI API key in the chat integration worker env. Both failures were caused by upgraded test harness expectations rather than runtime behavior changes. Co-Authored-By: OpenAI Codex <noreply@openai.com>
Drop the over-mocked /api/chat integration file. Under the upgraded Cloudflare worker runtime it no longer provides reliable signal, and it is outside the MCP OAuth surface we are reconciling here. Co-Authored-By: OpenAI Codex <noreply@openai.com>
Treat only 4xx probe failures as definite invalid-token results during MCP refresh reuse. Leave 5xx and other non-client failures in the indeterminate path so transient upstream outages are not mislabeled as token expiry. Co-Authored-By: OpenAI Codex <noreply@openai.com>
Drop the no-op per-file fetchMock installation and rely on the centralized Miniflare setup in vitest config. This trims repeated test boilerplate without changing how outbound mocks are registered. Also stop treating grants without refresh tokens as a supported compatibility shape. Revoke those stale grants again, record the explicit grant-revoked metric, and tighten WorkerProps so current valid props always include a refresh token while legacy stored props are handled only at the trust boundary. Co-Authored-By: OpenAI Codex <noreply@openai.com>
Probe the ambient OPENAI_API_KEY before running the live OpenAI integration suite so local and CI test runs skip cleanly when the key is unset, invalid, or otherwise unusable. Also keep MCP handler logging consistent with the validated client ID used elsewhere in the stale-grant path. Co-Authored-By: OpenAI Codex <noreply@openai.com>
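The key probe can be sketched as a small helper gated with Vitest's skipIf. Everything here is an assumption for illustration: the helper name, the choice of endpoint, and the injectable fetch parameter (added so the sketch is testable without network access).

```typescript
// Illustrative key probe, not the repository's actual code: hit a cheap
// authenticated endpoint once and report whether the key is usable.
async function hasUsableOpenAIKey(
  key: string | undefined,
  doFetch: typeof fetch = fetch,
): Promise<boolean> {
  if (!key) return false; // unset: skip without a network round trip
  try {
    const res = await doFetch("https://api.openai.com/v1/models", {
      headers: { Authorization: `Bearer ${key}` },
    });
    return res.ok; // 401/403 means the key is invalid or unusable
  } catch {
    return false; // network failure: treat the key as unusable and skip
  }
}

// Assumed usage in the integration suite:
//   const usable = await hasUsableOpenAIKey(process.env.OPENAI_API_KEY);
//   describe.skipIf(!usable)("live OpenAI integration", () => { /* ... */ });
```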
Keep the Cloudflare MCP handler aligned with the shared ServerContext type in mcp-core. The refresh token remains in OAuth worker props, but it is not part of the MCP server runtime context. This fixes the clean CI build that caught the extra property during mcp-cloudflare compilation. Co-Authored-By: OpenAI Codex <noreply@openai.com>
Keep token refresh from dropping unknown stored grant properties when reissuing OAuth props, while still validating the fields we require for current grants. Also escape error descriptions before placing them in the WWW-Authenticate header so future callers cannot create malformed header values. Co-Authored-By: OpenAI Codex <noreply@openai.com>
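The header-escaping fix can be sketched as follows. The function names are made up; the rule applied is RFC 6750's quoted-string restriction, which forbids double quotes, backslashes, and non-printable characters inside an auth-param value.

```typescript
// Illustrative helpers (not the actual implementation): strip characters
// that would terminate or escape a quoted WWW-Authenticate parameter.
function sanitizeAuthParam(value: string): string {
  return value
    .replace(/[^\x20-\x7e]/g, "") // drop control and non-ASCII characters
    .replace(/["\\]/g, ""); // drop double quotes and backslashes
}

function buildWwwAuthenticate(description: string): string {
  return `Bearer error="invalid_token", error_description="${sanitizeAuthParam(description)}"`;
}
```

With this in place, a caller-supplied description can no longer inject extra parameters or break the header's quoting.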
Restore plain skipIf gating for the OpenAI integration tests and align the OAuth architecture docs with the current stale-grant behavior. Also split invalid upstream tokens from indeterminate refresh probes. Invalid probes now mark the grant so /mcp rejects it unconditionally, while transient failures like 429s remain indeterminate and do not force re-authorization. Co-Authored-By: OpenAI Codex <noreply@openai.com>
Clear any stale upstreamTokenInvalid flag when a refresh probe succeeds. Without this, a previously poisoned grant could keep revoking itself even after the upstream Sentry token was validated again. Co-Authored-By: OpenAI Codex <noreply@openai.com>
4c21009 to a46e266
…r directly

The 429 test was using mockRejectedValue(new ApiRateLimitError(...)), which caused SentryApiService.request() to wrap it in a plain Error. The instanceof ApiRateLimitError check in probeUpstreamAccessToken never matched; the test passed only because the catch-all also returned "verification_indeterminate". Switch to mockResolvedValue with a 429 Response so the error flows through the normal API client path. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
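The mock mix-up described above can be reproduced with a minimal stand-in client. The class and function names are simplified placeholders for the real API client, and the distinct return values are for illustration only:

```typescript
// Minimal reproduction of the failure mode: a typed error thrown by the
// mocked fetch gets wrapped, so downstream instanceof checks never match.
class ApiRateLimitError extends Error {}

// Stand-in for the API client: thrown fetch errors are wrapped in a plain
// Error, but a 429 *response* is converted into the typed error.
async function request(doFetch: () => Promise<Response>): Promise<Response> {
  let res: Response;
  try {
    res = await doFetch();
  } catch (err) {
    throw new Error(`request failed: ${String(err)}`); // original class lost
  }
  if (res.status === 429) throw new ApiRateLimitError("rate limited");
  return res;
}

async function probe(doFetch: () => Promise<Response>): Promise<string> {
  try {
    await request(doFetch);
    return "valid";
  } catch (err) {
    // A mockRejectedValue(new ApiRateLimitError(...)) never reaches this
    // branch, because request() has already wrapped it in a plain Error.
    if (err instanceof ApiRateLimitError) return "rate_limited";
    return "indeterminate";
  }
}
```

Resolving the mock with a 429 Response instead of rejecting it lets the typed error be raised by the client itself, so the instanceof branch is actually exercised.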

Tighten the Cloudflare MCP OAuth refresh path so we stop logging users out when we can still safely reuse the upstream Sentry token.
The main behavioral change is to distinguish locally valid cached tokens, probe-validated cached tokens, definitively invalid upstream tokens, and indeterminate verification failures. Legacy grants without a refresh token remain usable while the embedded upstream access token still works instead of being revoked immediately.
This also expands the MCP OAuth coverage around /oauth/callback, /oauth/token, and /mcp, and upgrades the Cloudflare worker test stack to the current vitest-pool-workers / Wrangler APIs. That upgrade makes the previous opaque local startup failure explicit, but this Ubuntu 20.04 WSL environment still cannot run the worker pool because the newer workerd requires a newer glibc. CI on ubuntu-latest is the intended verification path for these tests.

I did not add a separate workflow because the existing workspace test:ci job already runs @sentry/mcp-cloudflare.