As we enter the new year, we wanted to take a moment to reconnect with you, our community, and share EZKL’s journey - both where we stand today and our vision for the path ahead.
EZKL started as an exploration in privacy-preserving machine learning (ML) using zero-knowledge (ZK) proofs - back then you had to hand-code neural nets in Rust, wait 10 minutes to prove a small convnet, and were limited to basic operations like linear layers and a few non-linearities such as ReLU and Sigmoid.
Today the library can automatically compile almost any computational graph you throw at it. We’ve ZK’ed transformer models, random forests, and generative models for worm brains - and the library keeps getting faster. Although every model is different, our popular xgboost example got 15x faster in 2024.
But while the ezkl compiler remains our core offering, we’ve grown into much more than that.
The EZKL ecosystem
We’ve kept the team lean while consistently shipping major performance updates and serving a rapidly growing roster of projects that have built their future on our stack. Our ecosystem has grown organically - without launching tokens or chains, and without compromising our core product - and we couldn’t be prouder of that.
Risk assessment models can run continuously and trustlessly off-chain to safely update on-chain lending protocols like Sentiment.
New kinds of automated market makers (AMMs), like QuantAMM, can be dynamically and verifiably rebalanced using quant strategies that were too complex to run fully on-chain.
AMMs have long sought dynamic pool fees that adapt to market conditions, rewarding liquidity providers for assuming additional risk. OpenGradient have been using EZKL to create a dynamic fee mechanism for Uniswap V3.
Strobe are creating a perpetual futures market for *everything,* leveraging EZKL to trustlessly verify and resolve complex outcomes for these futures.
Large decentralized networks like OpenGradient, Ritual, Omron, and Bagel are using EZKL to deliver always-on, verifiable, and fully decentralized machine learning inference networks.
Projects like Inference Labs are building on-chain NPCs.
On-chain games have been using us for a while to build on-chain physics and basic NPCs - check out our NPC tutorials built on Lattice’s MUD framework.
Support for builders
Proving complex models often overwhelms standard computers. To address this we launched Lilith, our closed beta high-performance compute cluster. Lilith lets developers upload ML models, convert them to ZK-circuits, and receive an instant REST API for generating proofs.
After finding standard off-the-shelf solutions inadequate, we ignored the advice against reinventing the wheel and built our own custom orchestration system from the ground up. The result: job dispatch times under 1ms across 200+ workers, 200k+ proofs per day at peak load, and zero downtime for over 6 months.
We’re also taking a more hands-on approach with partners to help navigate ZK-specific architecture challenges. Our expertise has helped reduce project timelines from months to days. If you’re looking to build an application but need guidance, reach out - we’d love to help you get started, and we know how to build fun and weird end-to-end ZK applications.
What’s coming next?
Big things coming in 2025:
The core compiler, our proving system, and our smart contracts are all being audited as we speak.
We’re launching new cryptography to make Lilith deployments completely private for developers.
Several major partnerships and applications are going live in Q1.
And here’s the exciting part: entirely new sectors are about to start using our library publicly (more on this soon!)
As we enter 2025, we’re incredibly grateful for our community. You’ve focused on building real applications with EZKL that make a genuine impact and serve real users. We can’t wait to see what you build next.