Opt : smaller proof size by using alternate randomness sampling approach #236
ocdbytes wants to merge 16 commits into WizardOfMenlo:main
Conversation
Merging this PR will not alter performance
    .position(|&p| p == alpha_base)
    .expect("query point must be in IRS domain")

    /// `leak(δ, k, μ, d) := k · [q(δ) + stir(δ)] + T(δ) + ood(δ) + (d+1) · μ`
    const fn leak(&self) -> usize {
leak() regresses from the old code's saturating_mul/saturating_add to plain */+, which wrap silently in release mode. If any intermediate product wraps, q_ub becomes tiny, ℓ becomes too small, and ZK is compromised.

Both saturating_mul and saturating_add are const fn since Rust 1.47, so the fix is drop-in:

    const fn leak(&self) -> usize {
        self.k.saturating_mul(self.q_delta.saturating_add(self.stir_delta))
            .saturating_add(self.t_delta)
            .saturating_add(self.ood_delta)
            .saturating_add((self.d + 1).saturating_mul(self.mu))
    }

    /// Compute `q_ub` — the total leakage upper bound across both WHIR instances.
    ///
    /// `q_ub ≤ leak(δ₁, k₁, μ, d) + leak(δ₂, k₂, μ, 3)`
    const fn query_upper_bound(witness: &InstanceLeak, blinding: &InstanceLeak) -> usize {
Same issue as leak(), use witness.leak().saturating_add(blinding.leak()).
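A runnable sketch of the combined fix. The struct below is a stand-in whose field names are assumed from the leak() formula in the PR, not its real definition; the point is that saturation fails in the safe direction, clamping q_ub to usize::MAX on overflow instead of wrapping to a tiny value.

```rust
// Stand-in for the PR's InstanceLeak; field names are assumed from the
// formula: leak = k·[q(δ)+stir(δ)] + T(δ) + ood(δ) + (d+1)·μ.
struct InstanceLeak {
    k: usize,
    q_delta: usize,
    stir_delta: usize,
    t_delta: usize,
    ood_delta: usize,
    d: usize,
    mu: usize,
}

impl InstanceLeak {
    // Saturating throughout: an overflow can only overestimate leakage,
    // which keeps the derived ℓ (and thus ZK) on the safe side.
    const fn leak(&self) -> usize {
        self.k
            .saturating_mul(self.q_delta.saturating_add(self.stir_delta))
            .saturating_add(self.t_delta)
            .saturating_add(self.ood_delta)
            .saturating_add((self.d + 1).saturating_mul(self.mu))
    }
}

const fn query_upper_bound(witness: &InstanceLeak, blinding: &InstanceLeak) -> usize {
    witness.leak().saturating_add(blinding.leak())
}

fn main() {
    let small = InstanceLeak { k: 1, q_delta: 1, stir_delta: 1, t_delta: 1, ood_delta: 1, d: 3, mu: 8 };
    let huge = InstanceLeak { k: usize::MAX, q_delta: 2, stir_delta: 3, t_delta: 0, ood_delta: 0, d: 0, mu: 0 };
    assert_eq!(small.leak(), 36); // 1·(1+1) + 1 + 1 + 4·8
    // With plain `*`/`+`, `huge` would wrap to a small q_ub; saturation clamps it.
    assert_eq!(query_upper_bound(&huge, &small), usize::MAX);
    println!("q_ub = {}", query_upper_bound(&small, &small));
}
```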
    // and proves: ρ·F + G = Σ_{b̄} w(f_zk(b̄), b̄)
    // =====================================================================
    let rho: F = self.prover_state.verifier_message();
    debug_assert_ne!(
This is debug_assert_ne!, which is stripped in release builds. The verifier uses verify!(rho != F::ZERO) which returns a proper error. If the astronomically unlikely rho == 0 happens in a release build, the prover would silently produce a broken proof. Consider assert_ne! for consistency.
    - const TEST_NUM_VARIABLES: usize = 8;
    + const TEST_NUM_VARIABLES: usize = 12;
All integration tests use num_variables = 12. Depending on derived ell, this may or may not produce rem = mu % ell != 0. Consider adding a test with parameters that force a non-zero rem (where Φ₀ and Φ₁ extract different bit windows) to exercise the uneven-tiling path end-to-end. The phi_i_bits unit tests cover the indexing, but no round-trip test confirms the full protocol when windows don't tile evenly.
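A quick parameter scan makes the suggestion concrete. Only the rem = μ mod ℓ definition is taken from the PR; the helper name is hypothetical.

```rust
// Hypothetical scan: for μ = 12, which window lengths ℓ force the
// uneven-tiling path (rem = μ mod ℓ ≠ 0) that the new test should hit?
fn tiling_rem(mu: usize, ell: usize) -> usize {
    mu % ell
}

fn main() {
    let mu = 12usize;
    for ell in 2..=6 {
        let rem = tiling_rem(mu, ell);
        println!(
            "ell = {ell}: nu = {}, rem = {rem} -> {}",
            mu / ell,
            if rem != 0 { "uneven (Φ₀ and Φ₁ differ)" } else { "even" }
        );
    }
    // e.g. ℓ = 5 gives rem = 2, so Φ₀ and Φ₁ extract different bit windows
    assert_ne!(tiling_rem(12, 5), 0);
}
```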
    gamma_points
        .iter()
        .map(|&gamma| discrete_log_pow2(gamma, gen_h, log_round0_len) * stride)
discrete_log_pow2 computes gen.inverse() on every call. Since gen_h is the same for all gamma points, the inverse could be precomputed once and passed in (or discrete_log_pow2 could accept a precomputed inverse). Not a bottleneck at current query counts, but a free win.
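A sketch of the hoisting over the toy field F₂₅₇, chosen so the multiplicative group has order 256 = 2⁸ and the code stays short. The prime, the naive dlog walk, and all names here are stand-ins for the PR's real field and helpers.

```rust
const P: u64 = 257;   // Fermat prime: multiplicative group has order 2^8
const GEN_H: u64 = 3; // 3 is a primitive root mod 257

fn mul(a: u64, b: u64) -> u64 { a * b % P }

// Fermat inversion: x^(P-2) mod P, via square-and-multiply.
fn inverse(x: u64) -> u64 {
    let (mut acc, mut base, mut e) = (1, x, P - 2);
    while e > 0 {
        if e & 1 == 1 { acc = mul(acc, base); }
        base = mul(base, base);
        e >>= 1;
    }
    acc
}

// Naive dlog in the 2^log_n-order subgroup: walk gamma back to 1 by
// repeated multiplication with a *precomputed* inverse of the generator.
fn discrete_log_pow2(gamma: u64, gen_inv: u64, log_n: usize) -> usize {
    let (mut cur, mut k) = (gamma, 0);
    while cur != 1 {
        cur = mul(cur, gen_inv);
        k += 1;
        assert!(k < 1usize << log_n, "gamma not in the 2^log_n subgroup");
    }
    k
}

fn main() {
    // Hoisted out of the loop: the inverse is computed once, not per point.
    let gen_inv = inverse(GEN_H);
    let gamma_points: Vec<u64> = [5usize, 17, 100]
        .iter()
        .map(|&k| (0..k).fold(1, |acc, _| mul(acc, GEN_H))) // gamma = gen^k
        .collect();
    let stride = 4usize;
    let positions: Vec<usize> = gamma_points
        .iter()
        .map(|&gamma| discrete_log_pow2(gamma, gen_inv, 8) * stride)
        .collect();
    assert_eq!(positions, vec![20, 68, 400]);
    println!("{positions:?}");
}
```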
    /// zkWHIR 2.0 — Zero-Knowledge WHIR with poly-logarithmic overhead.
    ///
    /// Uses the "Alternative Randomness Sampling" approach which samples only
    /// ν = ⌊μ/ℓ⌋ + 1 blinding polynomials (instead of μ + 1), reducing proof
The doc comment says "ν = ⌊μ/ℓ⌋ + 1 blinding polynomials" but the code variable nu = ⌊μ/ℓ⌋ (line 140). The code's nu is actually ν − 1 in this convention, and num_g_polys() = nu + 1 is the real count. The line 139 comment also calls nu the "number of blinding polynomials" which is off by one.
Suggest: change comment to // nu = ⌊mu/ell⌋ — highest blinding polynomial index (ν+1 = nu+1 total g-polynomials) and update the top doc to match.
    pub fn num_blinding_variables(&self) -> usize {
        self.blinding_commitment.initial_num_variables() - 1
    }

    // nu = ⌊mu/ell⌋ — number of blinding polynomials (alternative sampling)
Same naming issue as above, nu is used both as ⌊μ/ℓ⌋ here and as a subscript in ĝ₁..ĝ_ν in other comments, creating ambiguity about whether ν means nu or nu+1. Consider renaming to something unambiguous or adding a one-line clarification.
    /// 1a. Sample n random ℓ-variate masking polynomials mskᵢ
    /// 1b. Compute f̂ᵢ = fᵢ + mskᵢ(Φ₀) and commit [[f̂]]
    /// 1c. Sample ν + 1 random ℓ-variate blinding polynomials ĝ₀..ĝ_ν
    /// 1d. Build committed vectors: n M-polynomials Mᵢ(ȳ,t) = ĝ₀(ȳ) + t·mskᵢ(ȳ)
Comment says Mᵢ(ȳ,t) = ĝ₀(ȳ) + t·mskᵢ(ȳ) but the interleaving [g₀[k], mskᵢ[k]] gives MLE (1−t)·ĝ₀(ȳ) + t·mskᵢ(ȳ). No correctness impact (code works on raw vectors, not MLEs), but this will mislead anyone reasoning about the polynomial algebra.
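A toy check of the algebra over f64 (all names and values here are illustrative, not from the PR): folding the interleaved vector on its low variable t recovers (1−t)·ĝ₀(ȳ) + t·mskᵢ(ȳ), not ĝ₀(ȳ) + t·mskᵢ(ȳ).

```rust
// Standard multilinear extension by successive folding on the low variable:
// evals[i] is the value at the boolean point whose low bit is i & 1.
fn mle_eval(evals: &[f64], point: &[f64]) -> f64 {
    assert_eq!(evals.len(), 1usize << point.len());
    let mut cur = evals.to_vec();
    for &x in point {
        cur = cur.chunks(2).map(|p| (1.0 - x) * p[0] + x * p[1]).collect();
    }
    cur[0]
}

fn main() {
    let (g0, msk) = ([2.0, 5.0], [7.0, 11.0]); // ĝ₀ and mskᵢ on {0,1}
    // Interleaved committed vector [g0[k], msk[k]]: t is the LOW bit.
    let m = [g0[0], msk[0], g0[1], msk[1]];
    let (y, t) = (0.25, 0.5);
    let lhs = mle_eval(&m, &[t, y]); // fold t first, then y
    let g0_y = (1.0 - y) * g0[0] + y * g0[1];
    let msk_y = (1.0 - y) * msk[0] + y * msk[1];
    // Matches (1−t)·ĝ₀(y) + t·msk(y) ...
    assert!((lhs - ((1.0 - t) * g0_y + t * msk_y)).abs() < 1e-12);
    // ... and differs from the comment's ĝ₀(y) + t·msk(y).
    assert!((lhs - (g0_y + t * msk_y)).abs() > 1e-12);
    println!("M(y={y}, t={t}) = {lhs}");
}
```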
    witness_leak.t_delta = witness_t_delta;
    // For the blinding instance, T(δ) is approximated as q(δ) since its config
    // depends on ℓ, which we're computing here (conservative upper bound).
    let blinding_leak = InstanceLeak::new(
The blinding instance InstanceLeak uses num_variables_main (μ) for the (d+1)·μ sumcheck term and borrows ood_delta from the blinded instance's IRS config. Both overestimate since the blinding polynomial actually has ℓ+1 variables and a smaller vector size. This is necessary (ℓ isn't known yet), but an iterative tightening pass could reduce ℓ/ν and shave proof size.
        ell: usize,
        rem: usize,
    ) -> usize {
        let start = if phi_index == 0 {
When rem = 0 (μ divisible by ℓ), Φ₀ and Φ₁ extract the same bit window (start = 0 for both). This wastes one blinding polynomial, g₁ adds no new bit coverage beyond g₀. ZK still holds (all μ bits covered by Φ₂..Φ_ν), but you could save one committed vector by special-casing rem = 0 to skip the duplicate projection.
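A hypothetical reconstruction of the window-start rule described above (the real function lives in the PR; this sketch only makes the rem = 0 duplication concrete): Φ₀ covers bits [0, ℓ), and Φᵢ for i ≥ 1 covers [rem + (i−1)·ℓ, rem + i·ℓ).

```rust
// Hypothetical stand-in for the PR's window-start computation.
fn phi_window_start(phi_index: usize, ell: usize, rem: usize) -> usize {
    if phi_index == 0 { 0 } else { rem + (phi_index - 1) * ell }
}

fn main() {
    // μ = 12, ℓ = 4 ⇒ rem = 0: Φ₀ and Φ₁ both start at bit 0 (duplicate window)
    assert_eq!(phi_window_start(0, 4, 0), phi_window_start(1, 4, 0));
    // μ = 12, ℓ = 5 ⇒ rem = 2: Φ₀ starts at 0, Φ₁ at 2 (distinct windows)
    assert_ne!(phi_window_start(0, 5, 2), phi_window_start(1, 5, 2));
    println!("rem = 0 duplicates: start₀ = start₁ = {}", phi_window_start(1, 4, 0));
}
```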
|
Replaced with #239 and meant to land in v1
Summary
elements, where ν = ⌊μ/ℓ⌋ + 1 ≪ μ.
Design
Two WHIR instances run as sub-protocols:
Protocol steps:
Reference Issue : #230
Reference Doc : [ ZK WHIR updated params doc by @yswami-tfh ]