
Opt : smaller proof size by using alternate randomness sampling approach #236

Closed
ocdbytes wants to merge 16 commits into WizardOfMenlo:main from ocdbytes:aj/opt/proof-size

Conversation

@ocdbytes (Contributor) commented Mar 17, 2026

Summary

  • Implements zkWHIR 2.0 using the "Alternative Randomness Sampling" approach, replacing the previous per-round blinding strategy. Proof size drops from (μ+1)·q(δ) to (ν+1)·q(δ) field
    elements, where ν = ⌊μ/ℓ⌋ + 1 ≪ μ.
  • Extracts shared WHIR round logic into whir/rounds.rs, deduplicating the fold-commit-sumcheck loop between base WHIR and zkWHIR.
  • Adds open_at_indices / verify_at_indices to irs_commit for the Γ consistency check (opening at caller-provided codeword positions instead of transcript-sampled ones).
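The headline count in the first bullet can be sanity-checked with a toy calculation (function names and the sizes are illustrative, not the PR's API; the formula follows the summary's own convention ν = ⌊μ/ℓ⌋ + 1):

```rust
// Toy sanity check of the proof-size formula above: field elements drop from
// (mu + 1) * q(delta) to (nu + 1) * q(delta), with nu = mu / ell + 1.
fn old_elems(mu: usize, q: usize) -> usize {
    (mu + 1) * q
}

fn new_elems(mu: usize, ell: usize, q: usize) -> usize {
    let nu = mu / ell + 1; // nu = floor(mu / ell) + 1, as in the summary
    (nu + 1) * q
}

fn main() {
    let (mu, ell, q) = (24, 6, 100); // illustrative sizes
    assert_eq!(old_elems(mu, q), 2_500);
    assert_eq!(new_elems(mu, ell, q), 600); // nu = 5, so 6 * 100
}
```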

Design

Two WHIR instances run as sub-protocols:

  1. Blinded polynomial (f̂ = f + msk(Φ₀)): standard WHIR rounds on f_zk = ρ·f + g, with a Γ consistency check verifying [[f̂]] openings match [[H]].
  2. Blinding polynomial (M, ĝ₁..ĝ_ν): batched proof over ν + n committed vectors using weight covectors derived from beq tables.

Protocol steps:

  1. Commitment — sample masking/blinding polynomials, commit both instances
  2. Blinding claims — verifier sends β, prover sends G claims
  3. Combination — verifier sends ρ ≠ 0, prover forms f_zk = ρ·f + g
  4. Initial sumcheck on f_zk
  5. OOD/STIR queries + remaining WHIR rounds (shared via rounds.rs)
  6. Γ consistency check — verify [[f̂]] openings match [[H]] at FRI query indices
  7. Batched blinding proof via second WHIR instance
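Step 3's combination can be sketched over a toy 64-bit prime field (the modulus and names are illustrative, not the crate's field type): the prover forms f_zk = ρ·f + g coefficient-wise after checking ρ ≠ 0.

```rust
// Minimal sketch of step 3, assuming a toy prime field represented as u64.
const P: u64 = 0xffff_ffff_0000_0001; // Goldilocks-style prime, for illustration

fn combine(rho: u64, f: &[u64], g: &[u64]) -> Vec<u64> {
    assert_eq!(f.len(), g.len());
    assert_ne!(rho % P, 0, "verifier must send rho != 0");
    f.iter()
        .zip(g)
        .map(|(&fi, &gi)| {
            // rho * fi + gi (mod P), with a u128 intermediate to avoid overflow
            ((rho as u128 * fi as u128 % P as u128 + gi as u128) % P as u128) as u64
        })
        .collect()
}

fn main() {
    let f_zk = combine(3, &[1, 2], &[10, 20]);
    assert_eq!(f_zk, vec![13, 26]); // 3*1 + 10 and 3*2 + 20
}
```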

Reference Issue : #230

Reference Doc : [ ZK WHIR updated params doc by @yswami-tfh ]

| Metric | Previous Implementation | New Implementation | Change |
| --- | --- | --- | --- |
| Proof Size | 3.2 MB | 740 KB | -77% |
| Verifier Time | 243 ms | 136 ms | -44% |

@codspeed-hq (Bot) commented Mar 17, 2026

Merging this PR will not alter performance

✅ 10 untouched benchmarks
⏩ 22 skipped benchmarks¹


Comparing ocdbytes:aj/opt/proof-size (77340f3) with main (716b457)


Footnotes

  1. 22 benchmarks were skipped, so the baseline results were used instead. If they were deleted from the codebase, archive them to remove them from the performance reports.

@ocdbytes ocdbytes marked this pull request as ready for review March 23, 2026 09:02
.position(|&p| p == alpha_base)
.expect("query point must be in IRS domain")
/// `leak(δ, k, μ, d) := k · [q(δ) + stir(δ)] + T(δ) + ood(δ) + (d+1) · μ`
const fn leak(&self) -> usize {

leak() regresses from the old code's saturating_mul/saturating_add to plain */+, which wrap silently in release mode. If any intermediate product wraps, q_ub becomes tiny, ℓ becomes too small, and ZK is compromised.

Both saturating_mul and saturating_add are const fn since Rust 1.47, so the fix is drop-in:

const fn leak(&self) -> usize {
    self.k.saturating_mul(self.q_delta.saturating_add(self.stir_delta))
        .saturating_add(self.t_delta)
        .saturating_add(self.ood_delta)
        .saturating_add((self.d + 1).saturating_mul(self.mu))
}
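The difference is easy to demonstrate in isolation: with optimizations on, plain arithmetic wraps instead of panicking, so one wrapped product silently collapses the leakage bound, whereas the saturating variants clamp at usize::MAX and keep the bound conservative.

```rust
// Illustrative, std-only: wrapping vs. saturating multiplication on usize.
fn main() {
    let big = usize::MAX / 2 + 1; // 2^(bits-1)
    assert_eq!(big.wrapping_mul(2), 0);            // silent wrap: bound collapses
    assert_eq!(big.saturating_mul(2), usize::MAX); // clamped: bound stays safe
}
```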

/// Compute `q_ub` — the total leakage upper bound across both WHIR instances.
///
/// `q_ub ≤ leak(δ₁, k₁, μ, d) + leak(δ₂, k₂, μ, 3)`
const fn query_upper_bound(witness: &InstanceLeak, blinding: &InstanceLeak) -> usize {

Same issue as leak(); use witness.leak().saturating_add(blinding.leak()).

Comment thread src/protocols/whir_zk/prover.rs Outdated
// and proves: ρ·F + G = Σ_{b̄} w(f_zk(b̄), b̄)
// =====================================================================
let rho: F = self.prover_state.verifier_message();
debug_assert_ne!(

This is debug_assert_ne!, which is stripped in release builds. The verifier uses verify!(rho != F::ZERO) which returns a proper error. If the astronomically unlikely rho == 0 happens in a release build, the prover would silently produce a broken proof. Consider assert_ne! for consistency.
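A small std-only illustration of the behavioral difference (the helper is hypothetical, with u64 standing in for the field type): debug_assert_ne! is compiled out of release builds, while assert_ne! fires in every profile, matching the verifier's verify!(rho != F::ZERO).

```rust
// Illustrative sketch: a nonzero check that survives release builds.
fn check_rho(rho: u64) {
    assert_ne!(rho, 0, "rho must be nonzero"); // enforced in release too
}

fn main() {
    check_rho(7); // a nonzero rho passes
    let result = std::panic::catch_unwind(|| check_rho(0));
    assert!(result.is_err()); // rho == 0 is rejected in every build profile
}
```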

}

const TEST_NUM_VARIABLES: usize = 8;
const TEST_NUM_VARIABLES: usize = 12;

All integration tests use num_variables = 12. Depending on the derived ell, this may or may not produce rem = mu % ell != 0. Consider adding a test with parameters that force a non-zero rem (where Φ₀ and Φ₁ extract different bit windows) to exercise the uneven-tiling path end-to-end. The phi_i_bits unit tests cover the indexing, but no round-trip test confirms the full protocol when windows don't tile evenly.
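A hypothetical parameter pick for such a test (names are illustrative): choose mu not divisible by ell, so the remainder window differs from the full windows.

```rust
// Illustrative predicate: does this (mu, ell) pair exercise rem != 0?
fn tiles_unevenly(mu: usize, ell: usize) -> bool {
    mu % ell != 0
}

fn main() {
    assert!(!tiles_unevenly(12, 4)); // even tiling: uneven path never runs
    assert!(tiles_unevenly(13, 4));  // rem = 1: exercises the uneven path
}
```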

Comment thread src/protocols/whir_zk/utils.rs Outdated

gamma_points
.iter()
.map(|&gamma| discrete_log_pow2(gamma, gen_h, log_round0_len) * stride)

discrete_log_pow2 computes gen.inverse() on every call. Since gen_h is the same for all gamma points, the inverse could be precomputed once and passed in (or discrete_log_pow2 could accept a precomputed inverse). Not a bottleneck at current query counts, but a free win.
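A sketch of the suggested hoisting (toy field mod 17; the real code works in a large field with a power-of-two subgroup, and the real discrete log is not this naive walk): the discrete-log helper accepts a precomputed generator inverse instead of recomputing it per gamma point.

```rust
// Toy prime field for illustration only.
const P: u64 = 17;

fn mul(a: u64, b: u64) -> u64 { a * b % P }

fn inv(a: u64) -> u64 {
    // Fermat inverse: a^(P-2) mod P, by square-and-multiply
    let (mut base, mut e, mut acc) = (a % P, P - 2, 1u64);
    while e > 0 {
        if e & 1 == 1 { acc = mul(acc, base); }
        base = mul(base, base);
        e >>= 1;
    }
    acc
}

// Takes the generator inverse as an argument, so callers compute it once.
fn discrete_log_with_inv(mut x: u64, gen_inv: u64, order: u64) -> u64 {
    let mut k = 0;
    while x != 1 {
        x = mul(x, gen_inv); // step back toward the identity
        k += 1;
        assert!(k < order, "x is not in the subgroup");
    }
    k
}

fn main() {
    let gen = 3; // 3 generates the full order-16 group mod 17
    let gen_inv = inv(gen); // hoisted out of the per-gamma loop
    let logs: Vec<u64> = [5, 13]
        .iter()
        .map(|&g| discrete_log_with_inv(g, gen_inv, 16))
        .collect();
    assert_eq!(logs, vec![5, 4]); // 3^5 = 5 and 3^4 = 13 (mod 17)
}
```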

Comment thread src/protocols/whir_zk/mod.rs Outdated
/// zkWHIR 2.0 — Zero-Knowledge WHIR with poly-logarithmic overhead.
///
/// Uses the "Alternative Randomness Sampling" approach which samples only
/// ν = ⌊μ/ℓ⌋ + 1 blinding polynomials (instead of μ + 1), reducing proof

The doc comment says "ν = ⌊μ/ℓ⌋ + 1 blinding polynomials" but the code variable nu = ⌊μ/ℓ⌋ (line 140). The code's nu is actually ν − 1 in this convention, and num_g_polys() = nu + 1 is the real count. The line 139 comment also calls nu the "number of blinding polynomials" which is off by one.

Suggest: change comment to // nu = ⌊mu/ell⌋ — highest blinding polynomial index (ν+1 = nu+1 total g-polynomials) and update the top doc to match.
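A sketch of the suggested convention (identifiers are illustrative): keep nu as the highest blinding-polynomial index and derive the count from it explicitly, so the two quantities can never be confused.

```rust
// nu is the highest g-polynomial index; the count is always nu + 1.
fn nu(mu: usize, ell: usize) -> usize {
    mu / ell // g-polynomials are g_0 ..= g_nu
}

fn num_g_polys(mu: usize, ell: usize) -> usize {
    nu(mu, ell) + 1 // total committed g-polynomials
}

fn main() {
    assert_eq!(nu(12, 4), 3);          // highest index
    assert_eq!(num_g_polys(12, 4), 4); // g_0, g_1, g_2, g_3
}
```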

Comment thread src/protocols/whir_zk/mod.rs Outdated
pub fn num_blinding_variables(&self) -> usize {
self.blinding_commitment.initial_num_variables() - 1
}
// nu = ⌊mu/ell⌋ — number of blinding polynomials (alternative sampling)

Same naming issue as above, nu is used both as ⌊μ/ℓ⌋ here and as a subscript in ĝ₁..ĝ_ν in other comments, creating ambiguity about whether ν means nu or nu+1. Consider renaming to something unambiguous or adding a one-line clarification.

/// 1a. Sample n random ℓ-variate masking polynomials mskᵢ
/// 1b. Compute f̂ᵢ = fᵢ + mskᵢ(Φ₀) and commit [[f̂]]
/// 1c. Sample ν + 1 random ℓ-variate blinding polynomials ĝ₀..ĝ_ν
/// 1d. Build committed vectors: n M-polynomials Mᵢ(ȳ,t) = ĝ₀(ȳ) + t·mskᵢ(ȳ)

Comment says Mᵢ(ȳ,t) = ĝ₀(ȳ) + t·mskᵢ(ȳ) but the interleaving [g₀[k], mskᵢ[k]] gives MLE (1−t)·ĝ₀(ȳ) + t·mskᵢ(ȳ). No correctness impact (code works on raw vectors, not MLEs), but this will mislead anyone reasoning about the polynomial algebra.
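The corrected identity is easy to verify on one coordinate pair (toy u64 field, names illustrative): interleaving [g₀[k], mskᵢ[k]] along the low variable t yields the multilinear extension (1−t)·ĝ₀ + t·mskᵢ, which reduces to ĝ₀ at t = 0 and mskᵢ at t = 1.

```rust
// Toy prime field for illustration.
const P: u64 = 101;

// Linear interpolation along t: (1 - t)*g0[k] + t*msk[k]  (mod P)
fn mle_in_t(g0_k: u64, msk_k: u64, t: u64) -> u64 {
    ((1 + P - t % P) % P * g0_k + t * msk_k) % P
}

fn main() {
    let (g0_k, msk_k) = (7, 11);
    assert_eq!(mle_in_t(g0_k, msk_k, 0), 7);  // t = 0 recovers g0
    assert_eq!(mle_in_t(g0_k, msk_k, 1), 11); // t = 1 recovers msk
    // at t = 2: (1 - 2)*7 + 2*11 = 15 (mod 101) — not 7 + 2*11 = 29
    assert_eq!(mle_in_t(g0_k, msk_k, 2), 15);
}
```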

witness_leak.t_delta = witness_t_delta;
// For the blinding instance, T(δ) is approximated as q(δ) since its config
// depends on ℓ, which we're computing here (conservative upper bound).
let blinding_leak = InstanceLeak::new(

The blinding instance InstanceLeak uses num_variables_main (μ) for the (d+1)·μ sumcheck term and borrows ood_delta from the blinded instance's IRS config. Both overestimate since the blinding polynomial actually has ℓ+1 variables and a smaller vector size. This is necessary (ℓ isn't known yet), but an iterative tightening pass could reduce ℓ/ν and shave proof size.

ell: usize,
rem: usize,
) -> usize {
let start = if phi_index == 0 {

When rem = 0 (μ divisible by ℓ), Φ₀ and Φ₁ extract the same bit window (start = 0 for both). This wastes one blinding polynomial: g₁ adds no new bit coverage beyond g₀. ZK still holds (all μ bits are covered by Φ₂..Φ_ν), but you could save one committed vector by special-casing rem = 0 to skip the duplicate projection.
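One plausible reading of the window indexing described above (the else-branch formula is an assumption for illustration, not the PR's exact code): Φ₀ starts at bit 0, and later windows start after the remainder window, so rem = 0 makes Φ₀ and Φ₁ coincide.

```rust
// Hypothetical reconstruction of the bit-window start index.
fn window_start(phi_index: usize, ell: usize, rem: usize) -> usize {
    if phi_index == 0 {
        0
    } else {
        rem + (phi_index - 1) * ell // assumed layout: remainder window first
    }
}

fn main() {
    // mu = 12, ell = 4, rem = 0: Phi_0 and Phi_1 share start = 0 (duplicate)
    assert_eq!(window_start(0, 4, 0), 0);
    assert_eq!(window_start(1, 4, 0), 0);
    // mu = 13, ell = 4, rem = 1: the windows differ
    assert_eq!(window_start(1, 4, 1), 1);
}
```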

@Bisht13 (Collaborator) commented Apr 6, 2026

Replaced with #239 and meant to land in v1

@Bisht13 Bisht13 closed this Apr 6, 2026
@Bisht13 Bisht13 deleted the aj/opt/proof-size branch April 6, 2026 18:54