Add Databricks Vector Search MCP token exchange with per-workspace token isolation#27
Merged
…ken isolation Signed-off-by: yosrixp <yosrixp@yahoo.com>
MartinTrojans approved these changes Apr 16, 2026
Summary
- `TokenExchangeServiceDatabricksImpl` is scoped as `@Dependent` and configured via `Instance<>` injection in `TokenExchangeServiceProducer` to create two distinct instances -- one for SQL (`scope=sql`) and one for Vector Search (`scope=vector-search`) -- each with its own `DatabricksTokenExchangeConfig`.
- Rename `DatabricksSqlWorkspaceResolver` to `DatabricksWorkspaceResolver` (handles multi-segment paths like `/<workspace>/<catalog>/<schema>/mcp`) and `DatabricksSqlTokenClient` to `DatabricksTokenClient`, and extract a common `DatabricksTokenExchangeConfig` interface extended by both the SQL and Vector Search config mappings.
- Fix the `/userinfo` "Okta token record not found" error that affects all providers (Databricks SQL/Vector Search, Glean, Google Monitoring/Logging, Splunk) whose exchanged access token outlives the Okta row TTL. When the Okta `id_token` row is missing or stale, `/userinfo` now attempts an upstream Okta refresh via `UpstreamRefreshService.refreshUpstream()` with the same distributed lock and optimistic versioning as the `/token` `refresh_token` grant, then stores the refreshed tokens and continues.

Test plan
- `UserInfoResourceTest` -- 5 new tests covering upstream refresh on Okta row missing, `id_token` null, lock failure, and `UpstreamRefreshException`, with metric verification (`mop_token_exchange_step_total{exchange_step=upstream_refresh}`)
- `DatabricksWorkspaceResolverTest` -- new cases for vector search multi-segment URL patterns
- `TokenExchangeServiceDatabricksImplTest` -- vector search token exchange with `scope=vector-search`
- `TokenExchangeServiceProducerTest` -- both `databricks-sql` and `databricks-vector-search` resolve to distinct instances
- `ExchangedTokenUserinfoStoreProviderResolverTest` -- vector search audience resolution
- `ConfigServiceTest` -- glob-to-regex matching for `databricks-vector-search/*/*/*/mcp` wildcard resource URIs
- `OauthProviderLabelTest` -- `databricks-vector-search` normalizes correctly
- Manual: `/token` with a Databricks Vector Search resource, then `/userinfo` with the returned access token -- confirm userinfo returns Okta claims with `mcp_resource_idp=databricks-vector-search-<hostname>`
- Manual: `/userinfo` again -- confirm it auto-refreshes and returns 200 instead of 401
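The two-instance wiring described in the summary can be sketched in plain Java. This is a hedged illustration only: the real `DatabricksTokenExchangeConfig` and service types are stand-ins here, and the actual PR obtains each instance through CDI `@Dependent` scoping and `Instance<>` lookup rather than a static factory.

```java
// Simplified stand-ins for the real config and service types. In the PR,
// TokenExchangeServiceDatabricksImpl is @Dependent-scoped and the producer
// uses Instance<> so each get() yields a fresh, independently configured bean.
interface DatabricksTokenExchangeConfig {
    String scope();
}

class TokenExchangeServiceDatabricksImpl {
    private final DatabricksTokenExchangeConfig config;

    TokenExchangeServiceDatabricksImpl(DatabricksTokenExchangeConfig config) {
        this.config = config;
    }

    String scope() {
        return config.scope();
    }
}

public class TokenExchangeServiceProducer {
    // Each call yields a distinct instance bound to its own config,
    // mirroring the sql vs. vector-search split in the PR.
    static TokenExchangeServiceDatabricksImpl produce(String scope) {
        return new TokenExchangeServiceDatabricksImpl(() -> scope);
    }

    public static void main(String[] args) {
        TokenExchangeServiceDatabricksImpl sql = produce("sql");
        TokenExchangeServiceDatabricksImpl vector = produce("vector-search");
        System.out.println(sql != vector);   // distinct instances
        System.out.println(sql.scope());     // sql
        System.out.println(vector.scope());  // vector-search
    }
}
```

The point of `@Dependent` here is that the bean is not a shared singleton: each injection point gets its own instance, so the SQL and Vector Search services can carry different configs without interfering.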
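A minimal sketch of how a renamed `DatabricksWorkspaceResolver` might handle both the single-segment SQL path and the multi-segment Vector Search path (`/<workspace>/<catalog>/<schema>/mcp`). The regex patterns and the `resolveWorkspace` method name are assumptions; the PR's actual patterns may differ.

```java
import java.util.Optional;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Illustrative resolver: extracts the workspace name from either the SQL
// path shape (/<workspace>/mcp) or the Vector Search path shape
// (/<workspace>/<catalog>/<schema>/mcp).
public class DatabricksWorkspaceResolver {
    private static final Pattern SQL_PATH =
        Pattern.compile("^/([^/]+)/mcp$");
    private static final Pattern VECTOR_SEARCH_PATH =
        Pattern.compile("^/([^/]+)/([^/]+)/([^/]+)/mcp$");

    static Optional<String> resolveWorkspace(String path) {
        Matcher m = SQL_PATH.matcher(path);
        if (m.matches()) {
            return Optional.of(m.group(1));
        }
        m = VECTOR_SEARCH_PATH.matcher(path);
        if (m.matches()) {
            return Optional.of(m.group(1)); // workspace is the first segment
        }
        return Optional.empty();
    }

    public static void main(String[] args) {
        System.out.println(resolveWorkspace("/ws1/mcp"));              // Optional[ws1]
        System.out.println(resolveWorkspace("/ws1/main/default/mcp")); // Optional[ws1]
        System.out.println(resolveWorkspace("/bad/path"));             // Optional.empty
    }
}
```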
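The `/userinfo` fallback in the summary can be sketched as a cache-or-refresh lookup. All names below are simplified stand-ins for the PR's real types, and the in-memory map omits the distributed lock and optimistic versioning that the real `UpstreamRefreshService.refreshUpstream()` path shares with the `/token` `refresh_token` grant.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Hedged sketch: when the Okta id_token row is missing (previously a 401),
// fall back to an upstream refresh, persist the result, and continue.
public class UserInfoRefreshSketch {
    interface UpstreamRefreshService {
        String refreshUpstream(String sessionId); // returns a fresh id_token
    }

    private final Map<String, String> oktaRows = new HashMap<>();
    private final UpstreamRefreshService refresher;

    UserInfoRefreshSketch(UpstreamRefreshService refresher) {
        this.refresher = refresher;
    }

    void storeIdToken(String sessionId, String idToken) {
        oktaRows.put(sessionId, idToken);
    }

    String idTokenOrRefresh(String sessionId) {
        return Optional.ofNullable(oktaRows.get(sessionId)).orElseGet(() -> {
            // Real implementation takes a distributed lock and checks an
            // optimistic version before calling Okta; omitted here.
            String refreshed = refresher.refreshUpstream(sessionId);
            oktaRows.put(sessionId, refreshed);
            return refreshed;
        });
    }

    public static void main(String[] args) {
        UserInfoRefreshSketch svc =
            new UserInfoRefreshSketch(sessionId -> "refreshed-id-token");
        svc.storeIdToken("s1", "cached-id-token");
        System.out.println(svc.idTokenOrRefresh("s1")); // cached-id-token
        System.out.println(svc.idTokenOrRefresh("s2")); // refreshed-id-token (row was missing)
    }
}
```

This matches why the bug hit every provider: the exchanged access token can outlive the Okta row TTL, so the lookup side must tolerate a missing row rather than treat it as fatal.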
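The glob-to-regex matching exercised by `ConfigServiceTest` can be illustrated with a minimal converter. The assumption here (each `*` matches exactly one non-empty path segment) is mine, not confirmed by the PR; the real `ConfigService` may treat wildcards differently.

```java
import java.util.regex.Pattern;

// Minimal glob-to-regex sketch: "*" becomes "[^/]+" so each wildcard
// matches exactly one path segment, and all other characters are quoted.
public class GlobMatcher {
    static Pattern compileGlob(String glob) {
        StringBuilder regex = new StringBuilder("^");
        for (int i = 0; i < glob.length(); i++) {
            char c = glob.charAt(i);
            if (c == '*') {
                regex.append("[^/]+"); // one non-empty path segment
            } else {
                regex.append(Pattern.quote(String.valueOf(c)));
            }
        }
        return Pattern.compile(regex.append("$").toString());
    }

    public static void main(String[] args) {
        Pattern p = compileGlob("databricks-vector-search/*/*/*/mcp");
        // true: three wildcard segments filled
        System.out.println(p.matcher("databricks-vector-search/ws1/main/default/mcp").matches());
        // false: too few segments
        System.out.println(p.matcher("databricks-vector-search/ws1/mcp").matches());
    }
}
```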