
[FEAT] Raw OPRF (no ZK no MPC) for browser extension #66

Open
Scratch-net wants to merge 5 commits into main from oprf-raw

Conversation

@Scratch-net (Contributor) commented Mar 27, 2026

Summary by CodeRabbit

  • New Features

    • Added "oprf-raw" server-side OPRF mode: marked plaintext regions can be revealed to attestors, tracked via markers, and support spans that cross message blocks with overshoot reporting.
    • Reveal metadata (markers and overshot lengths) now flows through claim processing, packet generation, and packet verification.
  • Tests

    • Added tests for oprf-raw redaction, cross-block handling, and nullifier consistency.
  • Chores

    • Extended schemas/types to accept "oprf-raw" as a valid hash option.

@coderabbitai coderabbitai bot commented Mar 27, 2026

📝 Walkthrough

Adds an "oprf-raw" mode: clients mark plaintext byte ranges in ZK reveal packets; servers compute server-side OPRF nullifiers for those ranges (including cross-packet overshot), substitute nullifier text into decrypted transcripts, and validate provider parameters against the canonicalized, replaced params.

Changes

  • Proto schema & generated (proto/api.proto, src/proto/api.ts): Added MessageReveal.MessageRevealZk.OPRFRawMarker (dataLocation: DataSlice) and two fields on MessageReveal.MessageRevealZk: repeated OPRFRawMarker oprfRawMarkers and uint32 overshotOprfRawLength; updated codecs, JSON, and partial helpers.
  • Type definitions (src/types/zk.ts, src/types/general.ts, src/types/providers.gen.ts): Added the OPRFRawMarker type; added oprfRawMarkers / overshotOprfRawFromPrevBlock to the ZK reveal types; added OPRFRawReplacement and IDecryptedTranscript.oprfRawReplacements; added 'oprf-raw' to the redaction hash enums and provider schema.
  • Client: redaction & packet prep (src/utils/redactions.ts, src/utils/prepare-packets.ts, src/client/create-claim.ts): The redaction flow special-cases hash === 'oprf-raw', recording oprfRawMarkers and overshotOprfRawFromPrevBlock without performing client-side OPRF/nullifier replacement; zkReveal payloads include oprfRawMarkers and overshotOprfRawLength, and markers are passed into reveal metadata.
  • Server: OPRF raw flow & validation (src/server/utils/oprf-raw.ts, src/server/utils/assert-valid-claim-request.ts): New computeOPRFRaw computes server-side TOPRF nullifiers from plaintext slices using env TOPRF key shares; decryptTranscript handles in-block and cross-packet markers, writes nullifier bytes into transcript buffers, and returns oprfRawReplacements; assertValidProviderTranscript applies replacements before parameter verification.
  • ZK verification & helpers (src/utils/zk.ts, src/utils/*): verifyZkPacket now returns oprfRawMarkers alongside redactedPlaintext; minor import/formatting tweaks; adjusted return shape consumed by server decryption.
  • Tests (src/tests/zk.test.ts): Added tests covering oprf-raw redaction slices, cross-block overshoot handling, and nullifier consistency between the ZK and server-side flows; added explicit oprfRawMarkers: [] to the ZK test vectors.
  • Schemas / generated providers (provider-schemas/http/parameters.yaml, src/types/providers.gen.ts): Allowed oprf-raw in the responseRedactions[].hash enum and updated the generated provider types/schema.
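
The summary above implies reveal-side shapes roughly like the following. This is a sketch inferred from the change list; the field names come from the summary, but the exact generated proto/types will differ.

```typescript
// Sketch of the new reveal-side shapes, inferred from the change summary.
// Exact generated types in src/proto/api.ts and src/types/zk.ts will differ.
interface OPRFRawMarker {
	// block-local slice of plaintext the server should nullify
	dataLocation?: { fromIndex: number, length: number }
}

interface OPRFRawReplacement {
	originalText: string   // plaintext the attestor saw
	nullifierText: string  // TOPRF nullifier substituted into the transcript
}

// A ZK reveal then carries its markers plus any carry-over from a span
// that started in the previous block
interface ZkRevealExtras {
	oprfRawMarkers: OPRFRawMarker[]
	overshotOprfRawFromPrevBlock?: number
}

// small illustrative constructor
function makeMarker(fromIndex: number, length: number): OPRFRawMarker {
	return { dataLocation: { fromIndex, length } }
}
```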

Sequence Diagram

sequenceDiagram
    participant Client
    participant ZKProver as ZK Prover
    participant Server
    participant Validator

    Client->>ZKProver: build zkReveal { proofs, toprfs, oprfRawMarkers, overshotOprfRawLength }
    Client->>Server: send ZK packet with zkReveal
    Server->>Server: verifyZkPacket() → { redactedPlaintext, oprfRawMarkers }
    Server->>Server: decryptTranscript() reads markers
    Server->>Server: computeOPRFRaw(plaintext, markers)
    Server->>Server: load TOPRF key shares from env
    Server->>Server: generate/evaluate/finalise TOPRF → nullifiers
    Server->>Server: write nullifier bytes into transcript, collect replacements
    Server->>Validator: assertValidProviderTranscript(canonicalParamsWithReplacements)
    Validator->>Validator: apply replacements and continue validation

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Suggested reviewers

  • adiwajshing

Poem

🐰
I nibble bytes where secrets lie,
The server hums a TOPRF sigh.
Nullifiers stitched across the stream,
Cross-block hops fixed, a midnight dream. ✨

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage ⚠️ Warning: docstring coverage is 36.84%, below the required 80.00% threshold. Resolution: write docstrings for the functions missing them.

✅ Passed checks (2 passed)

  • Description Check ✅ Passed: check skipped because CodeRabbit's high-level summary is enabled.
  • Title check ✅ Passed: the title directly and accurately describes the main feature addition, implementing Raw OPRF functionality for browser-extension use, which is the central theme across all changed files.



promptless bot commented Mar 27, 2026

📝 Documentation updates detected!

New suggestion: Add changelog entry for oprf-raw OPRF mode in attestor-core




@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

🧹 Nitpick comments (1)
src/server/utils/oprf-raw.ts (1)

51-65: Consider validating markers don't overlap.

Overlapping markers could lead to undefined behavior where the same plaintext bytes are processed multiple times, potentially producing inconsistent nullifiers or corrupted replacements.

Add overlap validation
+	// Sort markers by fromIndex and validate no overlaps
+	const sortedMarkers = [...markers]
+		.filter(m => m.dataLocation)
+		.sort((a, b) => a.dataLocation!.fromIndex - b.dataLocation!.fromIndex)
+	
+	for(let i = 1; i < sortedMarkers.length; i++) {
+		const prev = sortedMarkers[i - 1].dataLocation!
+		const curr = sortedMarkers[i].dataLocation!
+		if(prev.fromIndex + prev.length > curr.fromIndex) {
+			throw new Error(
+				`oprf-raw markers overlap: [${prev.fromIndex}, ${prev.fromIndex + prev.length}) and [${curr.fromIndex}, ${curr.fromIndex + curr.length})`
+			)
+		}
+	}
+
 	for(const marker of markers) {
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/server/utils/assert-valid-claim-request.ts`:
- Around line 141-155: The current code does global string replacements on
canonicalStringify(params) using oprfRawReplacements which can corrupt unrelated
fields; instead parse params into an object (use canonicalStringify/JSON.parse
if needed), then for each replacement iterate the object and only replace string
values that exactly match originalText (or match a provided parameter path if
oprfRawReplacements includes one), e.g., implement a small recursive walker that
checks typeof value === 'string' and replaces value === originalText with
nullifierText, update params with the modified object, set info.parameters =
canonicalStringify(params) (not the raw string with blind replacements), and
keep the logger.debug call with replacements count; reference symbols:
oprfRawReplacements, canonicalStringify, params, info.parameters, logger.debug.
- Around line 349-351: originalText is being decoded from plaintext after
earlier marker replacements in the same loop, which can corrupt captures when
markers overlap/adjacent; modify the logic in the loop that handles dataLocation
to first iterate and collect (map) all originalText values by decoding
plaintext.slice(dataLocation.fromIndex, dataLocation.fromIndex +
dataLocation.length) for every dataLocation, storing them in an array or map
keyed by index, and only after all originals are captured perform the
replacements using those stored original strings; update references to
originalText to use the pre-captured values (variables: originalText, plaintext,
dataLocation, and the loop that applies replacements) so no decode reads
plaintext that has been mutated by earlier replacements.

In `@src/utils/prepare-packets.ts`:
- Around line 69-77: The switch case for 'zk' contains a lexical declaration
(oprfRawMarkers) directly in the case clause which violates Biome's
noSwitchDeclarations rule; wrap the entire case 'zk' body in a block (add { ...
} around the existing case body) so oprfRawMarkers and the subsequent await
proofGenerator.addPacketToProve(...) callback are lexically scoped inside the
block and no declarations leak between cases.

In `@src/utils/redactions.ts`:
- Around line 124-140: The current handler for slice.hash === 'oprf-raw'
incorrectly pushes a single marker for the full slice length onto
slicesWithReveal[blockIdx].oprfRawMarkers even when the slice crosses block
boundaries; instead, iterate across the slice range using cursor/advance() and
for each block chunk compute its local fromIndex and length and push a marker
into the corresponding slicesWithReveal[blockIdx].oprfRawMarkers (create the
array if missing) so each block gets a block-local dataLocation; keep using
cursorInBlock, blockIdx, cursor, advance(), slice.fromIndex and slice.toIndex to
split the slice into per-block chunks while advancing the cursor.


ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: b5f760f9-4226-4362-b2f3-7c67fb79d48e

📥 Commits

Reviewing files that changed from the base of the PR and between c036b80 and 077d826.

📒 Files selected for processing (11)
  • proto/api.proto
  • src/client/create-claim.ts
  • src/proto/api.ts
  • src/server/utils/assert-valid-claim-request.ts
  • src/server/utils/oprf-raw.ts
  • src/tests/zk.test.ts
  • src/types/general.ts
  • src/types/zk.ts
  • src/utils/prepare-packets.ts
  • src/utils/redactions.ts
  • src/utils/zk.ts


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (2)
src/server/utils/assert-valid-claim-request.ts (1)

141-154: ⚠️ Potential issue | 🟠 Major

Avoid replaceAll on serialized JSON.

This rewrites every matching substring in canonicalStringify(params), including unrelated fields or keys, and it also misses values whose JSON form is escaped (\n, \", \\, etc.). Replace exact string leaves on the parsed object, then re-canonicalize once.
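
A leaf-only walker along the lines this comment suggests could look like the following. This is a minimal sketch; the name replaceStringLeaves and the replacement shape are illustrative, not the PR's code.

```typescript
// Sketch: replace only exact string leaf values, never keys or substrings.
// The replacement shape mirrors { originalText, nullifierText } from the PR.
type Replacement = { originalText: string, nullifierText: string }

function replaceStringLeaves(value: unknown, replacements: Replacement[]): unknown {
	if(typeof value === 'string') {
		// only whole-value matches are replaced
		const match = replacements.find(r => r.originalText === value)
		return match ? match.nullifierText : value
	}
	if(Array.isArray(value)) {
		return value.map(v => replaceStringLeaves(v, replacements))
	}
	if(value && typeof value === 'object') {
		return Object.fromEntries(
			Object.entries(value as Record<string, unknown>)
				.map(([k, v]) => [k, replaceStringLeaves(v, replacements)])
		)
	}
	return value
}
```

The result would then be re-serialized once (e.g. via canonicalStringify), so JSON escaping and key names are never touched by the replacement.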

src/utils/redactions.ts (1)

129-160: ⚠️ Potential issue | 🔴 Critical

oprf-raw continuation bookkeeping breaks on boundary-aligned and longer spans.

By Line 157, advance() has already moved blockIdx past the last byte that belonged to the marker. If the slice ends on a block boundary, the overshot flag is written onto the next block, or off the array entirely at EOF. Even when it does not, only the final touched block gets tagged, so an A -> B -> C span leaves packet B untracked and the server cannot reconstruct the full OPRF input. Record continuation bytes on each touched block before advancing past them, and let the receiver accumulate until the original marker length is satisfied.
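
The per-block splitting this comment asks for can be sketched as follows, assuming fixed-size blocks for illustration; the real flow walks actual packet boundaries with cursor/advance().

```typescript
// Sketch: split a transcript-level slice [from, to) into block-local markers,
// one per touched block, so intermediate blocks are never left untracked.
interface BlockMarker {
	blockIdx: number
	fromIndex: number // offset local to the block
	length: number
}

function splitSliceAcrossBlocks(from: number, to: number, blockSize: number): BlockMarker[] {
	const markers: BlockMarker[] = []
	let cursor = from
	while(cursor < to) {
		const blockIdx = Math.floor(cursor / blockSize)
		const blockEnd = (blockIdx + 1) * blockSize
		const chunkEnd = Math.min(to, blockEnd)
		markers.push({
			blockIdx,
			fromIndex: cursor - blockIdx * blockSize,
			length: chunkEnd - cursor,
		})
		cursor = chunkEnd
	}
	return markers
}
```

An A -> B -> C span then yields three markers, one per block, and the receiver can accumulate until the original marker length is satisfied.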

🧹 Nitpick comments (1)
src/tests/zk.test.ts (1)

275-355: This “raw” consistency test never exercises the production raw path.

Both branches still run generateOPRFRequestData(...) → toprf(...) → finaliseOPRF(...) directly. The code added in src/server/utils/oprf-raw.ts uses evaluateOPRF(...) with env-loaded key shares instead, so a regression there could pass this test unchanged. Consider asserting against computeOPRFRaw(...) itself here.

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/server/utils/assert-valid-claim-request.ts`:
- Around line 348-387: The code currently finalizes a pending OPRF-raw marker in
the pendingOprfRaw handling block (pendingForThis, overshotOprfRawLength)
without verifying the assembled length equals the declared length; change the
logic to compare fullData.length to pendingForThis.dataLocation.length and only
call computeOPRFRaw/update plaintexts and delete pendingOprfRaw[i] when they
match; if fullData is shorter, keep pendingOprfRaw[i] with the concatenated
partialData so future packets can continue accumulation, and after processing
the whole transcript ensure any remaining pendingOprfRaw entries cause the claim
to fail (use the same failure path as other validation errors).


ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: ea35213d-f66c-460f-aabe-56137e2e5e65

📥 Commits

Reviewing files that changed from the base of the PR and between 077d826 and 5b96a13.

📒 Files selected for processing (7)
  • proto/api.proto
  • src/proto/api.ts
  • src/server/utils/assert-valid-claim-request.ts
  • src/tests/zk.test.ts
  • src/types/general.ts
  • src/utils/prepare-packets.ts
  • src/utils/redactions.ts
🚧 Files skipped from review as they are similar to previous changes (2)
  • src/utils/prepare-packets.ts
  • src/proto/api.ts


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
src/server/utils/assert-valid-claim-request.ts (1)

276-283: ⚠️ Potential issue | 🟠 Major

Missing validation for incomplete pending markers at end of transcript.

After processing all packets, there's no check for remaining entries in pendingOprfRaw. If a marker declared a span that extended beyond the transcript, it would be silently ignored rather than failing the claim. This could lead to incorrect OPRF computation or security gaps.

🐛 Proposed fix
 	}
+
+	// Fail if any pending oprf-raw markers were not completed
+	const remainingPending = Object.keys(pendingOprfRaw)
+	if(remainingPending.length) {
+		throw new AttestorError(
+			'ERROR_INVALID_CLAIM',
+			`Incomplete oprf-raw markers: expected data in packets ${remainingPending.join(', ')}`
+		)
+	}

 	return {
 		transcript: decryptedTranscript,
 		hostname: hostname,
 		tlsVersion: tlsVersion,
 		oprfRawReplacements: oprfRawReplacements.length ? oprfRawReplacements : undefined
 	}
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/server/utils/assert-valid-claim-request.ts`:
- Around line 398-405: The assignment to pendingOprfRaw[pendingMarker.nextIdx]
can clobber an existing entry when multiple source packets target the same
nextIdx; update the logic in the block handling pendingMarker (symbols:
pendingMarker, pendingMarker.nextIdx, pendingMarker.pending,
pendingMarker.pending.partialData.set(...), pendingOprfRaw) to avoid collisions
by first checking if pendingOprfRaw[pendingMarker.nextIdx] already exists and
either (a) push both pending entries into an array at that key, (b) merge
partialData into the existing pending entry appropriately, or (c) throw/log a
clear error if collisions are invalid in your domain; implement one of these
defensive behaviors instead of unconditionally overwriting
pendingOprfRaw[pendingMarker.nextIdx].
- Around line 495-517: The loop that handles oprf-raw markers currently
overwrites the single variable pendingMarker when a marker spans to the next
packet, dropping earlier spanning markers; change pendingMarker from a single
object to an array (e.g., pendingMarkers) and push each pending marker object
instead of reassigning it (keep the structure created with nextIdx and pending
fields), update any uses of pendingMarker in this module (and callers like
functions that consume pendingMarker) to iterate over the array or handle
multiple pending entries, and ensure findNextPacketIdx() usage and the
logger.debug call remain correct for each pushed pending marker.


ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: c38cccca-9f3e-47ab-94af-26f9d64d9a8c

📥 Commits

Reviewing files that changed from the base of the PR and between 5b96a13 and 078067a.

📒 Files selected for processing (1)
  • src/server/utils/assert-valid-claim-request.ts


@coderabbitai coderabbitai bot left a comment


♻️ Duplicate comments (3)
src/server/utils/assert-valid-claim-request.ts (3)

139-148: ⚠️ Potential issue | 🟠 Major

Avoid blind replacement in the serialized params JSON.

Replacing originalText inside canonicalStringify(params) can rewrite unrelated substrings/keys and also miss values whose JSON escaping differs from the plaintext. That mutates provider parameters before validation and hashing. Walk the parsed object and only replace matching string leaf values, then reserialize.


245-254: ⚠️ Potential issue | 🟠 Major

Do not collapse spanning oprf-raw markers to a single pending slot.

result.oprfRawMarkers is an array, but separateOprfRawMarkers() returns only one pendingMarker, and pendingOprfRaw[nextIdx] stores only one entry. If two raw spans in the same packet cross the boundary, the earlier one is overwritten and never replaced. Return/store arrays here and consume all pending markers for the continuation packet.

Also applies to: 406-423, 480-537
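
One way to avoid the clobbering is to key pending spans by their continuation-packet index and store arrays per key, roughly as below. The names (pendingOprfRaw, schedulePending, consumePending) are illustrative, not the PR's actual symbols.

```typescript
// Sketch: pending cross-packet markers keyed by continuation packet index,
// stored as arrays so two spans targeting the same packet don't overwrite
// each other.
interface PendingSpan {
	declaredLength: number
	partialData: Uint8Array
}

const pendingOprfRaw: Record<number, PendingSpan[]> = {}

function schedulePending(nextIdx: number, span: PendingSpan) {
	// append rather than assign, preserving earlier spanning markers
	(pendingOprfRaw[nextIdx] ||= []).push(span)
}

// Consume *all* spans waiting on this packet, not just the last one scheduled
function consumePending(idx: number): PendingSpan[] {
	const spans = pendingOprfRaw[idx] || []
	delete pendingOprfRaw[idx]
	return spans
}
```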


355-403: ⚠️ Potential issue | 🟠 Major

Carry pending raw spans across more than one follow-up packet.

This state machine still only supports a single continuation hop. Client-side marker generation in src/utils/redactions.ts:127-160 can overshoot across multiple later blocks, but here the marker is scheduled for the immediate next packet and then resolved only once. A three-packet span will either never be consumed or fail the exact-length check too early. Keep appending bytes until dataLocation.length is satisfied, and only reject after the transcript ends with leftovers.

Also applies to: 406-423, 489-529
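
The accumulate-until-satisfied behaviour could look roughly like this. The helper and its name are hypothetical, not the PR's API.

```typescript
// Sketch: keep appending bytes from successive packets until the declared
// marker length is satisfied; only then is the span complete. A span that is
// still pending when the transcript ends should fail the claim.
interface PendingRawSpan {
	declaredLength: number
	chunks: Uint8Array[]
}

// Feed one packet's worth of bytes; returns the assembled span once the
// declared length is reached, or null if more packets are still needed.
function feedPending(span: PendingRawSpan, packetBytes: Uint8Array): Uint8Array | null {
	span.chunks.push(packetBytes)
	const total = span.chunks.reduce((n, c) => n + c.length, 0)
	if(total < span.declaredLength) {
		return null // carry into the next packet instead of failing early
	}
	// assemble exactly declaredLength bytes from the accumulated chunks
	const full = new Uint8Array(span.declaredLength)
	let offset = 0
	for(const chunk of span.chunks) {
		const take = Math.min(chunk.length, span.declaredLength - offset)
		full.set(chunk.subarray(0, take), offset)
		offset += take
	}
	return full
}
```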


ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 958703a5-0278-4c80-aedd-d21ce353acfc

📥 Commits

Reviewing files that changed from the base of the PR and between 078067a and 07ac63a.

📒 Files selected for processing (3)
  • provider-schemas/http/parameters.yaml
  • src/server/utils/assert-valid-claim-request.ts
  • src/types/providers.gen.ts
✅ Files skipped from review due to trivial changes (1)
  • provider-schemas/http/parameters.yaml
