JOLT Atlas is a zero-knowledge machine learning (zkML) framework that extends the JOLT proving system to support ML inference verification from ONNX models.
Made with ❤️ by ICME Labs.
JOLT Atlas enables practical zero-knowledge machine learning by leveraging Just One Lookup Table (JOLT) technology. Traditional circuit-based approaches are prohibitively expensive at representing non-linear functions such as ReLU and softmax; lookup arguments eliminate the need for circuit representations of these functions entirely, since the prover only shows that each (input, output) pair appears in a precomputed table.
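To make the lookup idea concrete, here is a toy, self-contained Rust sketch (not JOLT's actual tables or proof machinery): instead of arithmetizing ReLU gate by gate, its outputs over a small domain are precomputed once, and every evaluation becomes a table index.

```rust
fn main() {
    // Precompute ReLU over a toy signed 8-bit domain [-128, 127].
    let table: Vec<i32> = (-128..=127).map(|x: i32| x.max(0)).collect();

    // A "lookup" is just an index into the table (offset by 128),
    // with no circuit representation of the max(x, 0) logic at all.
    let relu = |x: i32| table[(x + 128) as usize];

    assert_eq!(relu(-7), 0);
    assert_eq!(relu(42), 42);
    println!("lookup ReLU agrees with direct evaluation");
}
```

In a real lookup argument the table is never materialized naively like this; the prover instead proves membership of committed (input, output) pairs, but the cost model is the same: one lookup per evaluation, no gates.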
In JOLT Atlas, we eliminate the complexity that plagues other approaches: no quotient polynomials, no byte decomposition, no grand products, no permutation checks, and most importantly — no complicated circuits.
Our core ethos is to reduce commitment costs via sumcheck while committing only to small-value polynomials.
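The protocol shape behind that ethos can be illustrated with a minimal, self-contained sumcheck round loop over a toy 31-bit prime field. This is only a sketch of the interactive structure: the real system uses a cryptographic field, Fiat-Shamir challenges, and polynomial commitments rather than the fixed challenges and in-the-clear table below.

```rust
// Toy prime field arithmetic (Mersenne prime 2^31 - 1).
const P: u128 = (1 << 31) - 1;

fn add(a: u128, b: u128) -> u128 { (a + b) % P }
fn sub(a: u128, b: u128) -> u128 { (a + P - b) % P }
fn mul(a: u128, b: u128) -> u128 { (a * b) % P }

// Evaluate the multilinear extension of `evals` (length 2^n) at `point`.
fn mle_eval(evals: &[u128], point: &[u128]) -> u128 {
    let mut cur = evals.to_vec();
    for &r in point {
        let half = cur.len() / 2;
        cur = (0..half)
            .map(|i| add(mul(cur[i], sub(1, r)), mul(cur[i + half], r)))
            .collect();
    }
    cur[0]
}

fn main() {
    // Prover's table: a 3-variable multilinear polynomial on {0,1}^3.
    let evals: Vec<u128> = vec![3, 1, 4, 1, 5, 9, 2, 6];
    let mut claim: u128 = evals.iter().fold(0, |s, &v| add(s, v)); // claimed sum = 31

    // Verifier challenges (fixed here for reproducibility).
    let challenges = [7u128, 1234567, 42];

    let mut cur = evals.clone();
    let mut point = Vec::new();
    for &r in &challenges {
        let half = cur.len() / 2;
        // Prover sends the degree-1 round polynomial g via g(0), g(1).
        let g0 = cur[..half].iter().fold(0, |s, &v| add(s, v));
        let g1 = cur[half..].iter().fold(0, |s, &v| add(s, v));
        // Verifier checks g(0) + g(1) equals the running claim...
        assert_eq!(add(g0, g1), claim);
        // ...then reduces to a claim about g(r) = g(0)*(1-r) + g(1)*r.
        claim = add(mul(g0, sub(1, r)), mul(g1, r));
        point.push(r);
        // Prover folds the table for the next round.
        cur = (0..half)
            .map(|i| add(mul(cur[i], sub(1, r)), mul(cur[i + half], r)))
            .collect();
    }
    // Final check: a single evaluation of the multilinear extension.
    assert_eq!(claim, mle_eval(&evals, &point));
    println!("sumcheck verified: sum = 31");
}
```

The point of the sketch is the cost profile: the verifier does O(n) field work plus one polynomial evaluation, and the only committed data is the (small-valued) evaluation table.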
Examples live in `jolt-atlas-core/examples/` and demonstrate end-to-end prove → verify flows for various ONNX models.
- GPT-2 proof and verification flow:

  ```shell
  cargo run --release --package jolt-atlas-core --example gpt2
  ```

- nanoGPT: a ~0.25M-parameter GPT model (4 transformer layers). Loads the ONNX graph, generates a SNARK proof of inference, and verifies it:

  ```shell
  cargo run --release --package jolt-atlas-core --example nanoGPT
  ```

- Single self-attention block proof:

  ```shell
  cargo run --release --package jolt-atlas-core --example transformer
  ```

- Smaller GPT variants useful for quick iteration and debugging:

  ```shell
  cargo run --release --package jolt-atlas-core --example minigpt
  cargo run --release --package jolt-atlas-core --example microgpt
  ```

System specs: MacBook Pro M3, 16GB RAM
GPT-2 is a 125-million-parameter transformer model from OpenAI.
JOLT Atlas:

| Stage | Wall clock |
|---|---|
| Proving/verifying key generation (`setup_prover`) | 1.003 s |
| Witness + commitment phase (`ONNXProof::commit_witness_polynomials`) | 0.762 s |
| IOP proving (`ONNXProof::iop`) | 5.997 s |
| Reduction opening proof (excluding `HyperKZG::prove`) | 1.899 s |
| HyperKZG prove (`HyperKZG::prove`) | 2.392 s |
| Proof time (`ONNXProof::prove`) | 14.889 s |
| Verify time (`ONNXProof::verify`) | 1.038 s |
| End-to-end total (`setup_prover` + prove + verify) | 16.930 s |
nanoGPT is the standard workload we use for cross-project comparison. It is a ~250k-parameter GPT model with 4 transformer layers.
JOLT Atlas:

| Stage | Wall clock |
|---|---|
| Verifying key generation (`setup_verifier`) | <0.001 s |
| Proving key generation (`setup_prover`) | 0.263 s |
| Proof time (`ONNXProof::prove`) | 2.288 s |
| Verify time (`ONNXProof::verify`) | 0.127 s |
| End-to-end total (`setup_prover` + prove + verify) | 2.678 s |
ezkl on the same model (source):
| Stage | Wall clock |
|---|---|
| Verifying key generation | 192 s |
| Proving key generation | 212 s |
| Proof time | 237 s |
| Verify time | 0.34 s |
JOLT Atlas produces a proof for nanoGPT in ~2.29 s versus ezkl's ~237 s proof time (not counting their 400+ s of key generation). That is roughly a 104× speed-up on proof generation alone.
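The headline ratio can be reproduced directly from the proof-time rows of the two tables above:

```rust
fn main() {
    // Proof-time figures from the nanoGPT benchmark tables above.
    let jolt_prove = 2.288_f64; // ONNXProof::prove, seconds
    let ezkl_prove = 237.0_f64; // ezkl proof time, seconds

    let speedup = ezkl_prove / jolt_prove;
    assert!((speedup - 103.6).abs() < 0.1);
    println!("proof-generation speedup: ~{:.0}x", speedup); // prints ~104x
}
```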
```shell
# from repo root
cargo run --release --package jolt-atlas-core --example gpt2
```

Add `-- --trace` for Chrome Tracing JSON output (view in `chrome://tracing`), or `-- --trace-terminal` for timing printed to the terminal.
GPT-2 uses a Hugging Face–hosted ONNX model that is not checked into the repo. A helper script downloads and prepares it automatically.
- Clone the repository.
- Install Rust and Cargo.
- Download the model:
  ```shell
  # Create a virtual environment (one-time)
  python3 -m venv .venv
  source .venv/bin/activate

  # Run the download script
  python scripts/download_gpt2.py
  ```

  This exports GPT-2 via Hugging Face Optimum into `atlas-onnx-tracer/models/gpt2/` and copies `model.onnx` → `network.onnx`.
- Test the model (trace only, no proof):

  ```shell
  cargo run --release --package atlas-onnx-tracer --example gpt2
  ```

  You should see the model graph printed and an output shape like `[1, 16, 65536]` (vocab size 50257 padded to the next power of two).
- Prove & verify GPT-2:

  ```shell
  cargo run --release --package jolt-atlas-core --example gpt2
  ```

  A successful run prints `Proof verified successfully!`.
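The padded output dimension reported by the trace step can be checked with Rust's standard library, which provides `next_power_of_two` on unsigned integers:

```rust
fn main() {
    // GPT-2's vocab size, padded up as in the traced output shape.
    let vocab: u32 = 50257;
    let padded = vocab.next_power_of_two();
    assert_eq!(padded, 65536); // the last dimension of [1, 16, 65536]
    println!("vocab {vocab} pads to {padded}");
}
```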
Thanks to the Jolt team for their foundational work. We are standing on the shoulders of giants.