
Commit 101c119

Authored by RadNi, quangvdao, and cursoragent
perf: batching advice polynomials (#1172)
* first draft
* fmt
* clippy
* WIP
* fmt
* wired in new sumchecks
* more progress
* refactor: move RamHammingBooleanity to Stage 6 for unified r_cycle
  - Move RamHammingBooleanity sumcheck from Stage 5 to Stage 6. This ensures RAM HW claims use r_cycle_stage6, matching HammingWeightClaimReduction
  - Remove ram_hw_claims from JoltProof struct; RAM HW claims now flow through the accumulator like all other claims
  - Simplify HammingWeightClaimReduction params
  - Fetch RAM HW claims from the accumulator (VirtualPolynomial::RamHammingWeight)
  - Remove separate ram_hw_claims handling in prover/verifier
  - Update tracking document with new stage layout
  All e2e tests pass when run individually.
* refactor(sumcheck): unify booleanity, consolidate claim reductions, remove opening reduction
  This is a major refactoring of the sumcheck infrastructure for Jolt Stages 6-7.
  1. Unified Booleanity Sumcheck
     - Add unified_booleanity.rs to batch booleanity checks across all ra polynomials (instruction, bytecode, ram) into a single sumcheck
     - Replaces separate booleanity sumchecks per polynomial family
  2. Claim Reductions Consolidation
     - Move all claim reduction sumchecks into zkvm/claim_reductions/:
       - hamming_weight.rs - fused HammingWeight + RA address reduction
       - increments.rs - increment counter claim reduction
       - instruction_lookups.rs - instruction lookups claim reduction
       - ram_ra.rs - RAM RA claim reduction
     - Remove scattered implementations from subprotocols/, zkvm/ram/, zkvm/spartan/
  3. Remove Deprecated Opening Reduction (Old Stage 7)
     - Delete subprotocols/opening_reduction.rs
     - Delete subprotocols/hamming_weight.rs (replaced by fused version)
     - Delete subprotocols/hamming_weight_claim_reduction.rs
     - Delete subprotocols/inc_reduction.rs
     - Delete zkvm/ram/ra_reduction.rs
     - Delete zkvm/spartan/claim_reductions.rs
     - Clean up deprecated Stage 7 methods from opening_proof.rs
  The new architecture reduces the number of sumcheck instances and aligns all ra polynomials to a common opening point for efficient batch opening proofs.
* Refactor: SharedRaPolynomials with shared eq table and non-transposed indices
  - Add SharedRaPolynomials type that stores ONE shared eq table instead of N copies
  - Store Vec<RaIndices> (non-transposed) instead of Vec<Vec<Option<u16>>>
  - Implement Round1/Round2/Round3/RoundN state machine for delayed binding
  - Add compute_all_G and compute_ra_indices for parallel computation
  - Refactor UnifiedBooleanityProver to use SharedRaPolynomials
  - Remove separate compute_instruction_G/bytecode_G/ram_G functions
  - Use rayon::join for parallel G and ra_indices computation
* WIP: Stage 7 debugging - r_cycle source mismatch investigation
  Debug state:
  - SharedRaRound3::get_bound_coeff: Fixed LowToHigh offset ordering (F_10 <-> F_01)
  - SharedRaRound3::bind: Fixed to create 8 separate F tables as in RaPolynomialRound3
  - compute_all_G: Optimized to use flat N*K vector with unsafe_allocate_zero_vec
  Current issue:
  - HammingWeight's G·eq(r_addr) doesn't match UnifiedBooleanity's claims_bool
  - Debug shows ~0.3% difference, suggesting r_cycle mismatch
  - HammingWeight now extracts r_cycle from Stage 5 (same source as UnifiedBooleanity)
  - eq_bool mle vs bound mismatch indicates a potential endianness issue
  Hypothesis:
  - UnifiedBooleanity claims are at (ρ_addr, r_cycle_stage5), where ρ_addr = sumcheck challenges for address and r_cycle_stage5 = original r_cycle from Stage 5
  - HammingWeight should use the same r_cycle for G computation
* more opt
* refactoring, still failing
* removing more dead code related to one hot polys
* fixed bug, add streaming VMV from materialized trace
* more fix
* Checkpoint before follow-up message (Co-authored-by: qvd <qvd@andrew.cmu.edu>)
* Checkpoint before follow-up message (Co-authored-by: qvd <qvd@andrew.cmu.edu>)
* Fix clippy warning: remove debug eprintln from sumcheck
  Remove temporary Stage 7 debugging code that was causing a clippy uninlined_format_args warning.
* Checkpoint before follow-up message (Co-authored-by: qvd <qvd@andrew.cmu.edu>)
* Refactor: Rename UnifiedBooleanity to Booleanity
  This commit renames the UnifiedBooleanity subprotocol to Booleanity to better reflect its purpose. The functionality remains the same.
* Refactor: Improve code formatting and imports
* feat: Add initial core functionality
* Refactor: Integrate advice polynomial claim reduction
  This commit refactors advice polynomial handling by introducing a new claim reduction sumcheck. This sumcheck consolidates multiple advice claims into fewer claims, which are then batched into the Stage 8 opening proof. This change removes the need for separate advice opening proofs and simplifies the overall proof structure.
* feat: Add trusted advice hint to prover and SDK
* optimize ra reduction init
* cleaning up
* turn remainder into bitmask in some places
* add documentation to eq poly
* added more inlines
* simpler split eq implementation of compute G
* optimization to streaming vmv
* final small change
* name change booleanity => booleanity sumcheck
* change back threshold
* clippy
* Fix advice claim reduction and Stage 8 batching issues
  - Fixed AdviceClaimReductionParams::new to check whether advice openings actually exist in the accumulator, not just whether the memory layout allows advice. This fixes tests like SHA3 that don't use advice.
  - Fixed SDK macro typo: <<< -> << in the OpeningProofHint type parameter
  - Removed advice from Stage 8 batch opening: advice is committed with different dimensions (max_padded_trace_length) than the main polynomials (actual padded_trace_len), so they cannot be batched together. Advice claims are still reduced via AdviceClaimReduction in Stage 6.
  - Reverted the advice test to use the TrustedAdvice Dory context instead of the Main context
* delete executable
* remove redundant stuff in booleanity params
* delete redundant split eq poly method
* I think everything is working?
* add advice for streaming rlc as well
* added more tests
* delete executable
* added extra e2e tests for advice
* clippy
* more clippy
* final clippy I swear
* okay final final clippy
* optimize rlc computation
* fmt
* can batch advice with arbitrary dimensions now?
* fmt
* revert to old unmodified batched sumcheck impl
* fmt
* append C_mid claim to transcript
* addressing comments (part 1)
* more cleanups
* more cleanup
* final changes
* small rename
* final removal of chunk ranges
* final changes
* reorganize sumcheck ids: remove old ones, rename some, and reorder according to appearance in stages
* no fixedbitset for ram sumcheck inits
* tidy up comments in prover & verifier
* final removal of chunk ranges & fixed bit set for compute G
* fix padding issue
* have unreduced addition in rlc accumulating ra poly evals
* increase threshold for increasing log_k_chunk (from 23 to 25)
* revert to FixedBitSet impl for compute G
* Fix test code to use new DoryOpeningState pattern
  - Remove old test block that used the dory_opening_state field
  - Move joint_commitment_for_test computation into prove_stage8_impl
  - Update the advice_row_coords test to get opening_point from OpeningAccumulator
  - Add missing imports (OpeningAccumulator, CommittedPolynomial)
* Simplify prove_stage8: remove unnecessary test scaffolding
  The better-opening-reduction branch never actually sets joint_commitment_for_test, so the verifier cross-check (if let Some...) never triggers. Match that pattern by removing the commitments_map passing and the two-impl split.
* Remove dead joint_commitment_for_test field
  This field was initialized to None and never set, so the verifier cross-check never triggered. Remove from prover, verifier, and proof struct.
* fmt + minor refactor
* cleaned up tests
* clippy
* clippy
* Added test for advice polynomial size exceeding the maximum and a few other minor changes
* failure in building python files
* refactoring
* removed mds
* minor fix in comments
* more small fixes
* added serializer for max_padded_trace_length in shared preprocessing
* clippy
* removed extra advice constructors
* replaced manual eq zero_selector

Co-authored-by: Quang Dao <quang.dao@layerzerolabs.org>
Co-authored-by: Cursor Agent <cursoragent@cursor.com>
Co-authored-by: qvd <qvd@andrew.cmu.edu>
Co-authored-by: Quang Dao <>
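The "unified booleanity" work in this commit batches booleanity checks for several ra polynomials into one sumcheck. As a rough conceptual illustration only (a toy prime field and a direct hypercube check, not the jolt-core sumcheck protocol or its API): a value v is boolean iff v·(v−1) = 0, and N separate such checks can be merged into one by a random linear combination with challenge γ.

```rust
// Toy illustration of batching booleanity checks via a random linear combination.
// All names here are hypothetical; this is NOT the jolt-core implementation.
const P: i128 = 2_147_483_647; // toy prime field modulus

// Zero exactly when v is 0 or 1 (mod P).
fn booleanity_term(v: i128) -> i128 {
    v.rem_euclid(P) * (v - 1).rem_euclid(P) % P
}

// Checks sum_j gamma^j * v_j(x) * (v_j(x) - 1) == 0 at every point x.
// With a random gamma, this passes (w.h.p.) only if every poly is boolean everywhere.
fn batched_booleanity(polys: &[Vec<i128>], gamma: i128) -> bool {
    let n = polys[0].len();
    (0..n).all(|x| {
        let mut acc = 0i128;
        let mut coeff = 1i128;
        for p in polys {
            acc = (acc + coeff * booleanity_term(p[x])) % P;
            coeff = coeff * gamma % P;
        }
        acc == 0
    })
}

fn main() {
    let boolean = vec![vec![0, 1, 1, 0], vec![1, 1, 0, 0]];
    assert!(batched_booleanity(&boolean, 7));

    // A single non-boolean entry (the 2) makes the batched check fail.
    let not_boolean = vec![vec![0, 2, 1, 0], vec![1, 1, 0, 0]];
    assert!(!batched_booleanity(&not_boolean, 7));
}
```

In the real protocol the per-point check is replaced by a sumcheck over the boolean hypercube, which is what lets the prover avoid opening each polynomial family separately.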
1 parent 1889740 commit 101c119

File tree: 28 files changed (+2135, −477 lines)


examples/merkle-tree/src/main.rs

Lines changed: 2 additions & 1 deletion
```diff
@@ -20,7 +20,7 @@ pub fn main() {
     let leaf3 = [7u8; 32];
     let leaf4 = [8u8; 32];
 
-    let (trusted_advice_commitment, _hint) = guest::commit_trusted_advice_merkle_tree(
+    let (trusted_advice_commitment, trusted_advice_hint) = guest::commit_trusted_advice_merkle_tree(
         TrustedAdvice::new(leaf2),
         TrustedAdvice::new(leaf3),
         &prover_preprocessing,
@@ -36,6 +36,7 @@ pub fn main() {
         TrustedAdvice::new(leaf3),
         UntrustedAdvice::new(leaf4),
         trusted_advice_commitment,
+        trusted_advice_hint,
     );
     info!("Prover runtime: {} s", now.elapsed().as_secs_f64());
```

examples/recursion/src/main.rs

Lines changed: 2 additions & 0 deletions
```diff
@@ -338,6 +338,7 @@ fn collect_guest_proofs(guest: GuestProgram, target_dir: &str, use_embed: bool)
         &[],
         &[],
         None,
+        None,
         &mut output_bytes,
         &guest_prover_preprocessing,
     );
@@ -511,6 +512,7 @@ fn run_recursion_proof(
         &[],
         &[],
         None,
+        None,
         &mut output_bytes,
         &recursion_prover_preprocessing,
     );
```

jolt-core/benches/commit.rs

Lines changed: 4 additions & 4 deletions
```diff
@@ -1,6 +1,6 @@
 use criterion::Criterion;
 use jolt_core::poly::commitment::commitment_scheme::CommitmentScheme;
-use jolt_core::poly::commitment::dory::{DoryCommitmentScheme, DoryGlobals};
+use jolt_core::poly::commitment::dory::{DoryCommitmentScheme, DoryContext, DoryGlobals};
 use jolt_core::poly::multilinear_polynomial::MultilinearPolynomial;
 use jolt_core::utils::math::Math;
 use rand::Rng;
@@ -9,7 +9,7 @@ use rand_core::{RngCore, SeedableRng};
 // use rayon::prelude::*;
 
 fn benchmark_dory_dense(c: &mut Criterion, name: &str, k: usize, t: usize) {
-    let globals = DoryGlobals::initialize(k, t);
+    let globals = DoryGlobals::initialize_context(k, t, DoryContext::Main);
     let setup = <DoryCommitmentScheme as CommitmentScheme>::setup_prover(k.log_2() + t.log_2());
     let mut rng = ChaCha20Rng::seed_from_u64(111111u64);
 
@@ -26,7 +26,7 @@ fn benchmark_dory_dense(c: &mut Criterion, name: &str, k: usize, t: usize) {
 }
 
 fn benchmark_dory_one_hot_batch(c: &mut Criterion, name: &str, k: usize, t: usize) {
-    let globals = DoryGlobals::initialize(k, t);
+    let globals = DoryGlobals::initialize_context(k, t, DoryContext::Main);
     let setup = <DoryCommitmentScheme as CommitmentScheme>::setup_prover(k.log_2() + t.log_2());
     let mut rng = ChaCha20Rng::seed_from_u64(111111u64);
 
@@ -52,7 +52,7 @@ fn benchmark_dory_one_hot_batch(c: &mut Criterion, name: &str, k: usize, t: usiz
 }
 
 fn benchmark_dory_mixed_batch(c: &mut Criterion, name: &str, k: usize, t: usize) {
-    let globals = DoryGlobals::initialize(k, t);
+    let globals = DoryGlobals::initialize_context(k, t, DoryContext::Main);
     let setup = <DoryCommitmentScheme as CommitmentScheme>::setup_prover(k.log_2() + t.log_2());
     let mut rng = ChaCha20Rng::seed_from_u64(111111u64);
 
```

jolt-core/benches/e2e_profiling.rs

Lines changed: 6 additions & 4 deletions
```diff
@@ -211,9 +211,9 @@ fn prove_example(
         bytecode,
         program_io.memory_layout.clone(),
         init_memory_state,
+        padded_trace_len,
     );
-    let preprocessing =
-        JoltProverPreprocessing::new(shared_preprocessing.clone(), padded_trace_len);
+    let preprocessing = JoltProverPreprocessing::new(shared_preprocessing.clone());
 
     let elf_contents_opt = program.get_elf_contents();
     let elf_contents = elf_contents_opt.as_deref().expect("elf contents is None");
@@ -224,6 +224,7 @@ fn prove_example(
         &[],
         &[],
         None,
+        None,
     );
     let program_io = prover.program_io.clone();
     let (jolt_proof, _) = prover.prove();
@@ -266,9 +267,9 @@ fn prove_example_with_trace(
         bytecode.clone(),
         program_io.memory_layout.clone(),
         init_memory_state,
+        trace.len().next_power_of_two(),
     );
-    let preprocessing =
-        JoltProverPreprocessing::new(shared_preprocessing, trace.len().next_power_of_two());
+    let preprocessing = JoltProverPreprocessing::new(shared_preprocessing);
 
     let elf_contents_opt = program.get_elf_contents();
     let elf_contents = elf_contents_opt.as_deref().expect("elf contents is None");
@@ -281,6 +282,7 @@ fn prove_example_with_trace(
         &[],
         &[],
         None,
+        None,
     );
     let now = Instant::now();
     let (jolt_proof, _) = prover.prove();
```

jolt-core/src/guest/prover.rs

Lines changed: 6 additions & 3 deletions
```diff
@@ -23,18 +23,20 @@ pub fn preprocess(
     let mut memory_config = guest.memory_config;
     memory_config.program_size = Some(program_size);
     let memory_layout = MemoryLayout::new(&memory_config);
-    let shared_preprocessing = JoltSharedPreprocessing::new(bytecode, memory_layout, memory_init);
-    JoltProverPreprocessing::new(shared_preprocessing, max_trace_length)
+    let shared_preprocessing =
+        JoltSharedPreprocessing::new(bytecode, memory_layout, memory_init, max_trace_length);
+    JoltProverPreprocessing::new(shared_preprocessing)
 }
 
-#[allow(clippy::type_complexity)]
+#[allow(clippy::type_complexity, clippy::too_many_arguments)]
 #[cfg(feature = "prover")]
 pub fn prove<F: JoltField, PCS: StreamingCommitmentScheme<Field = F>, FS: Transcript>(
     guest: &Program,
     inputs_bytes: &[u8],
     untrusted_advice_bytes: &[u8],
     trusted_advice_bytes: &[u8],
     trusted_advice_commitment: Option<<PCS as CommitmentScheme>::Commitment>,
+    trusted_advice_hint: Option<<PCS as CommitmentScheme>::OpeningProofHint>,
    output_bytes: &mut [u8],
    preprocessing: &JoltProverPreprocessing<F, PCS>,
 ) -> (
@@ -51,6 +53,7 @@ pub fn prove<F: JoltField, PCS: StreamingCommitmentScheme<Field = F>, FS: Transc
         untrusted_advice_bytes,
         trusted_advice_bytes,
         trusted_advice_commitment,
+        trusted_advice_hint,
     );
     let io_device = prover.program_io.clone();
     let (proof, debug_info) = prover.prove();
```

jolt-core/src/guest/verifier.rs

Lines changed: 3 additions & 1 deletion
```diff
@@ -15,14 +15,16 @@ use common::jolt_device::MemoryLayout;
 
 pub fn preprocess(
     guest: &Program,
+    max_trace_length: usize,
     verifier_setup: <DoryCommitmentScheme as CommitmentScheme>::VerifierSetup,
 ) -> JoltVerifierPreprocessing<ark_bn254::Fr, DoryCommitmentScheme> {
     let (bytecode, memory_init, program_size) = guest.decode();
 
     let mut memory_config = guest.memory_config;
     memory_config.program_size = Some(program_size);
     let memory_layout = MemoryLayout::new(&memory_config);
-    let shared = JoltSharedPreprocessing::new(bytecode, memory_layout, memory_init);
+    let shared =
+        JoltSharedPreprocessing::new(bytecode, memory_layout, memory_init, max_trace_length);
     JoltVerifierPreprocessing::new(shared, verifier_setup)
 }
 
```

jolt-core/src/poly/commitment/dory/commitment_scheme.rs

Lines changed: 5 additions & 0 deletions
```diff
@@ -48,6 +48,11 @@ impl CommitmentScheme for DoryCommitmentScheme {
         let mut rng = ChaCha20Rng::from_seed(seed);
         let setup = ArkworksProverSetup::new_from_urs(&mut rng, max_num_vars);
 
+        // The prepared-point cache in dory-pcs is global and can only be initialized once.
+        // In unit tests, multiple setups with different sizes are created, so initializing the
+        // cache with a small setup can break later tests that need more generators.
+        // We therefore disable cache initialization in `cfg(test)` builds.
+        #[cfg(not(test))]
         DoryGlobals::init_prepared_cache(&setup.g1_vec, &setup.g2_vec);
 
         setup
```
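The comment added above describes a write-once global cache. The behavior it relies on can be sketched with `std::sync::OnceLock` (a minimal stand-in; the names and the actual dory-pcs cache type here are hypothetical, not the library's API):

```rust
// Minimal sketch of a write-once global cache, assuming a OnceLock-style design.
// `PREPARED_CACHE` and `init_prepared_cache` are illustrative names only.
use std::sync::OnceLock;

static PREPARED_CACHE: OnceLock<Vec<u64>> = OnceLock::new();

// Returns false if the cache was already initialized. A later, larger setup
// cannot grow an already-initialized cache, which is why tests skip this step.
fn init_prepared_cache(generators: &[u64]) -> bool {
    PREPARED_CACHE.set(generators.to_vec()).is_ok()
}

fn main() {
    assert!(init_prepared_cache(&[1, 2])); // first initialization wins
    assert!(!init_prepared_cache(&[1, 2, 3, 4])); // later initialization is rejected
    assert_eq!(PREPARED_CACHE.get().unwrap().len(), 2); // small cache persists
}
```

This illustrates why a small test setup initializing the cache first would starve later tests that need more generators, motivating the `#[cfg(not(test))]` gate.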

jolt-core/src/poly/commitment/dory/dory_globals.rs

Lines changed: 50 additions & 23 deletions
```diff
@@ -58,6 +58,45 @@ impl Drop for DoryContextGuard {
 pub struct DoryGlobals;
 
 impl DoryGlobals {
+    /// Split `total_vars` into a *balanced* pair `(sigma, nu)` where:
+    /// - **sigma** is the number of **column** variables
+    /// - **nu** is the number of **row** variables
+    ///
+    /// Dory matrices are conceptually shaped as `2^nu` rows × `2^sigma` columns (row-major).
+    /// We use the balanced policy `sigma = ceil(total_vars / 2)` and `nu = total_vars - sigma`.
+    #[inline]
+    pub fn balanced_sigma_nu(total_vars: usize) -> (usize, usize) {
+        let sigma = total_vars.div_ceil(2);
+        let nu = total_vars - sigma;
+        (sigma, nu)
+    }
+
+    /// Convenience helper for the main Dory matrix where `total_vars = log_k_chunk + log_t`.
+    #[inline]
+    pub fn main_sigma_nu(log_k_chunk: usize, log_t: usize) -> (usize, usize) {
+        Self::balanced_sigma_nu(log_k_chunk + log_t)
+    }
+
+    /// Computes balanced `(sigma, nu)` dimensions directly from a max advice byte budget.
+    ///
+    /// - `max_advice_size_bytes` is interpreted as bytes of 64-bit words.
+    /// - Rounds word count up to the next power of two (minimum 1) and computes log2 as `advice_vars`.
+    /// - Returns `(sigma, nu)` where `sigma = ⌈advice_vars/2⌉` and `nu = advice_vars - sigma`.
+    #[inline]
+    pub fn advice_sigma_nu_from_max_bytes(max_advice_size_bytes: usize) -> (usize, usize) {
+        let words = max_advice_size_bytes / 8;
+        let len = words.next_power_of_two().max(1);
+        let advice_vars = len.log_2();
+        Self::balanced_sigma_nu(advice_vars)
+    }
+
+    /// How many row variables of the *cycle* segment exist in the unified point:
+    /// `row_cycle_len = max(0, log_t - sigma_main)`.
+    #[inline]
+    pub fn cycle_row_len(log_t: usize, sigma_main: usize) -> usize {
+        log_t.saturating_sub(sigma_main)
+    }
+
     /// Get the current Dory context
     pub fn current_context() -> DoryContext {
         CURRENT_CONTEXT.load(Ordering::SeqCst).into()
@@ -182,46 +221,34 @@ impl DoryGlobals {
             (side, side)
         } else {
             // Odd total vars: almost square (columns = 2*rows)
-            let sigma = total_vars.div_ceil(2);
-            let nu = total_vars - sigma;
+            let (sigma, nu) = Self::balanced_sigma_nu(total_vars);
             (1 << sigma, 1 << nu)
         };
 
         (num_columns, num_rows, T)
     }
 
-    /// Initialize the globals for the main Dory matrix
+    /// Initialize the globals for a specific Dory context
     ///
     /// # Arguments
     /// * `K` - Maximum address space size (K in OneHot polynomials)
     /// * `T` - Maximum trace length (cycle count)
+    /// * `context` - The Dory context to initialize (Main, TrustedAdvice, or UntrustedAdvice)
     ///
     /// The matrix dimensions are calculated to minimize padding:
     /// - If log2(K*T) is even: creates a square matrix
    /// - If log2(K*T) is odd: creates an almost-square matrix (columns = 2*rows)
-    pub fn initialize(K: usize, T: usize) -> Option<()> {
+    pub fn initialize_context(K: usize, T: usize, context: DoryContext) -> Option<()> {
         let (num_columns, num_rows, t) = Self::calculate_dimensions(K, T);
-        Self::set_num_columns_for_context(num_columns, DoryContext::Main);
-        Self::set_T_for_context(t, DoryContext::Main);
-        Self::set_max_num_rows_for_context(num_rows, DoryContext::Main);
-        Some(())
-    }
+        Self::set_num_columns_for_context(num_columns, context);
+        Self::set_T_for_context(t, context);
+        Self::set_max_num_rows_for_context(num_rows, context);
 
-    /// Initialize the globals for trusted advice commitments
-    pub fn initialize_trusted_advice(K: usize, T: usize) -> Option<()> {
-        let (num_columns, num_rows, t) = Self::calculate_dimensions(K, T);
-        Self::set_num_columns_for_context(num_columns, DoryContext::TrustedAdvice);
-        Self::set_T_for_context(t, DoryContext::TrustedAdvice);
-        Self::set_max_num_rows_for_context(num_rows, DoryContext::TrustedAdvice);
-        Some(())
-    }
+        // For Main context, ensure subsequent uses of `get_*` read from it by default
+        if context == DoryContext::Main {
+            CURRENT_CONTEXT.store(DoryContext::Main as u8, Ordering::SeqCst);
+        }
 
-    /// Initialize the globals for untrusted advice commitments
-    pub fn initialize_untrusted_advice(K: usize, T: usize) -> Option<()> {
-        let (num_columns, num_rows, t) = Self::calculate_dimensions(K, T);
-        Self::set_num_columns_for_context(num_columns, DoryContext::UntrustedAdvice);
-        Self::set_T_for_context(t, DoryContext::UntrustedAdvice);
-        Self::set_max_num_rows_for_context(num_rows, DoryContext::UntrustedAdvice);
         Some(())
     }
 
```
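The doc comments on the new helpers fully determine their arithmetic. A standalone sketch with worked examples (reimplemented for illustration, using `trailing_zeros` in place of jolt-core's `Math::log_2`; not the jolt-core API):

```rust
// Illustrative reimplementation of the balanced (sigma, nu) split and the
// advice-byte-budget variant; standalone, not DoryGlobals itself.

fn balanced_sigma_nu(total_vars: usize) -> (usize, usize) {
    let sigma = total_vars.div_ceil(2); // column variables: ceil(total/2)
    let nu = total_vars - sigma;        // row variables: the remainder
    (sigma, nu)
}

fn advice_sigma_nu_from_max_bytes(max_advice_size_bytes: usize) -> (usize, usize) {
    let words = max_advice_size_bytes / 8;           // budget in 64-bit words
    let len = words.next_power_of_two().max(1);      // pad up to a power of two
    let advice_vars = len.trailing_zeros() as usize; // log2 of the padded length
    balanced_sigma_nu(advice_vars)
}

fn main() {
    // Even variable count: a square 2^3 x 2^3 matrix.
    assert_eq!(balanced_sigma_nu(6), (3, 3));
    // Odd variable count: almost square, columns = 2 * rows.
    assert_eq!(balanced_sigma_nu(7), (4, 3));
    // 100 bytes -> 12 words -> padded to 16 -> 4 variables -> (2, 2).
    assert_eq!(advice_sigma_nu_from_max_bytes(100), (2, 2));
}
```

The examples show why the "odd" branch of `calculate_dimensions` yields columns = 2·rows: with 7 variables, `sigma = 4` and `nu = 3`, i.e. a 2^3 x 2^4 matrix.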

jolt-core/src/poly/commitment/dory/tests.rs

Lines changed: 6 additions & 5 deletions
```diff
@@ -4,6 +4,7 @@ mod tests {
     use super::super::*;
     use crate::field::JoltField;
     use crate::poly::commitment::commitment_scheme::CommitmentScheme;
+    use crate::poly::commitment::dory::DoryContext;
     use crate::poly::dense_mlpoly::DensePolynomial;
     use crate::poly::multilinear_polynomial::{MultilinearPolynomial, PolynomialEvaluation};
     use crate::transcripts::{Blake2bTranscript, Transcript};
@@ -64,7 +65,7 @@ mod tests {
 
         let num_coeffs = 1 << num_vars;
         // Dense polynomial: K = 1, T = num_coeffs
-        let _guard = DoryGlobals::initialize(1, num_coeffs);
+        let _guard = DoryGlobals::initialize_context(1, num_coeffs, DoryContext::Main);
 
         let prover_setup = DoryCommitmentScheme::setup_prover(num_vars);
         let verifier_setup = DoryCommitmentScheme::setup_verifier(&prover_setup);
@@ -241,7 +242,7 @@ mod tests {
         let num_coeffs = 1 << num_vars;
 
         // Dense polynomial: K = 1, T = num_coeffs
-        let _guard = DoryGlobals::initialize(1, num_coeffs);
+        let _guard = DoryGlobals::initialize_context(1, num_coeffs, DoryContext::Main);
 
         let mut rng = thread_rng();
         let coeffs: Vec<Fr> = (0..num_coeffs).map(|_| Fr::rand(&mut rng)).collect();
@@ -385,7 +386,7 @@ mod tests {
         let K = 8;
         let T = 8;
 
-        let _guard = DoryGlobals::initialize(K, T);
+        let _guard = DoryGlobals::initialize_context(K, T, DoryContext::Main);
 
         let mut rng = thread_rng();
         let nonzero_indices: Vec<Option<u8>> = (0..T)
@@ -450,7 +451,7 @@ mod tests {
         let num_coeffs = 1 << num_vars;
         let num_polys = 5;
 
-        let _guard = DoryGlobals::initialize(1, num_coeffs);
+        let _guard = DoryGlobals::initialize_context(1, num_coeffs, DoryContext::Main);
 
         let mut rng = thread_rng();
 
@@ -534,7 +535,7 @@ mod tests {
         let num_coeffs = 1 << num_vars;
         let num_polys = 5;
 
-        let _guard = DoryGlobals::initialize(1, num_coeffs);
+        let _guard = DoryGlobals::initialize_context(1, num_coeffs, DoryContext::Main);
 
         let mut rng = thread_rng();
 
```
540541

jolt-core/src/poly/eq_poly.rs

Lines changed: 22 additions & 0 deletions
```diff
@@ -55,6 +55,28 @@ impl<F: JoltField> EqPolynomial<F> {
         }
     }
 
+    /// Computes the zero selector: `eq(r, [0, 0, ...]) = ∏ᵢ (1 - rᵢ)`.
+    ///
+    /// This is equivalent to `mle(r, &vec![F::zero(); r.len()])` but more efficient
+    /// as it avoids allocating the zeros vector. Commonly used for Lagrange factors
+    /// when computing embeddings (e.g., advice polynomial embeddings in Dory batch openings).
+    ///
+    /// # Mathematical Interpretation
+    /// - `eq(r, 0) = ∏ᵢ (1 - rᵢ)` selects the "all-zeros" vertex of the boolean hypercube
+    /// - Returns 1 when all `rᵢ = 0`, and decays multiplicatively as more bits become non-zero
+    ///
+    /// # Arguments
+    /// - `r`: Point at which to evaluate
+    ///
+    /// # Returns
+    /// The product `∏ᵢ (1 - rᵢ)` over all elements in `r`
+    pub fn zero_selector<C>(r: &[C]) -> F
+    where
+        C: Copy + Send + Sync + Into<F>,
+    {
+        r.par_iter().map(|r_i| F::one() - (*r_i).into()).product()
+    }
+
     #[tracing::instrument(skip_all, name = "EqPolynomial::evals")]
     /// Computes the table of evaluations: `{ eq(r, x) : x ∈ {0, 1}^n }`.
     ///
```
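The zero selector is just the product ∏ᵢ (1 − rᵢ). A toy sketch over a small prime field (jolt-core's version is generic over its `JoltField` trait and uses rayon's `par_iter`; plain `u64` arithmetic here for illustration):

```rust
// Toy zero selector over F_101; NOT the jolt-core EqPolynomial implementation.
const P: u64 = 101;

// eq(r, 0...0) = prod_i (1 - r_i) mod P
fn zero_selector(r: &[u64]) -> u64 {
    r.iter().fold(1, |acc, &ri| acc * ((1 + P - ri % P) % P) % P)
}

fn main() {
    // At the all-zeros vertex the selector is 1.
    assert_eq!(zero_selector(&[0, 0, 0]), 1);
    // Any coordinate equal to 1 zeroes the product.
    assert_eq!(zero_selector(&[0, 1, 0]), 0);
    // Off the hypercube: (1 - 2)(1 - 3) = (-1)(-2) = 2 mod 101.
    assert_eq!(zero_selector(&[2, 3]), 2);
}
```

This also shows why the helper is a drop-in replacement for `mle(r, &zeros)`: both evaluate the eq polynomial with its second argument pinned to the all-zeros point.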
