Commoditizing the petaflop through radically simplified software and uncompromising hardware.
- A Radical Simplification of AI Software: Tinygrad strips complex neural networks down to just three fundamental operation types, offering a significantly leaner, highly optimizable alternative to giant frameworks like PyTorch.
- Accessible, Powerful Hardware: The “tinybox” disrupts AI hardware economics by delivering top-tier MLPerf benchmark performance at a fraction of the cost of traditional, high-end computing rigs.
- A Developer-Driven Mission: Driven by a unique, contribution-based hiring model, the “tiny corp” aims to democratize AI, asking supporters to “invest with PRs” rather than traditional capital.
The artificial intelligence space is currently dominated by massive frameworks and prohibitively expensive hardware. But a quiet rebellion is brewing under the banner of “tiny.” Maintained by the recently funded tiny corp, tinygrad is rapidly earning its title as the fastest-growing neural network framework. By marrying an elegant software architecture with high-performance, cost-effective hardware, the tiny corp has a singular, ambitious goal: to accelerate AI and commoditize the petaflop for everyone.

The Software: Stripping Down the Neural Network
At the heart of the tinygrad philosophy is extreme simplicity. Rather than relying on a bloated library of thousands of specific functions, tinygrad breaks down even the most complex neural networks into just three fundamental Operation Types (OpTypes):
- ElementwiseOps: These operate on one to three tensors on an element-by-element basis (e.g., ADD, MUL, SQRT, LOG2, WHERE).
- ReduceOps: These operations take one tensor and condense it into a smaller one (e.g., SUM, MAX).
- MovementOps: These are virtual operations that move data around seamlessly without copying it (e.g., RESHAPE, PERMUTE, EXPAND).
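To make the decomposition concrete, here is a sketch (using NumPy as a stand-in, not tinygrad's actual code) of how even a matrix multiplication reduces to just these three op types: movement ops to line the tensors up, an elementwise multiply, and a sum reduction.

```python
import numpy as np

def matmul_from_three_ops(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Express an (m, n) @ (n, k) matmul using only the three op types:
    MovementOps (reshape/broadcast views), an ElementwiseOp (MUL),
    and a ReduceOp (SUM). Illustrative sketch, not tinygrad source."""
    m, n = a.shape
    n2, k = b.shape
    assert n == n2, "inner dimensions must match"
    # MovementOps: reshape into broadcastable views -- no data is copied
    a_view = a.reshape(m, n, 1)   # RESHAPE
    b_view = b.reshape(1, n, k)   # RESHAPE (broadcasting acts like EXPAND)
    # ElementwiseOp: MUL over the broadcast (m, n, k) grid
    prod = a_view * b_view
    # ReduceOp: SUM over the shared axis collapses it back to (m, k)
    return prod.sum(axis=1)

a = np.arange(6, dtype=np.float32).reshape(2, 3)
b = np.arange(12, dtype=np.float32).reshape(3, 4)
assert np.allclose(matmul_from_three_ops(a, b), a @ b)
```

A real implementation would of course fuse these steps rather than materialize the full (m, n, k) intermediate, which is exactly the kind of optimization a small, uniform op set makes tractable.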
If you are wondering where the traditional heavy lifters like Convolutions (CONVs) and Matrix Multiplications (MATMULs) are hiding, the tiny corp invites you to dive into the code to solve the mystery. This minimalist approach is not just for show; it is highly functional. Tinygrad supports full forward and backward passes with autodiff. Because this is implemented at a high level of abstraction, any port to a new accelerator gets this functionality for free.
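The "autodiff for free" idea can be illustrated with a toy reverse-mode example (a deliberately minimal sketch, not tinygrad's implementation): once each primitive op knows how to push gradients backward, any graph built from those ops gets a backward pass automatically, regardless of which accelerator runs the forward math.

```python
class Value:
    """Toy scalar reverse-mode autodiff node. Each primitive op records
    how to route gradients to its inputs; backward() then walks the graph.
    Illustrative only -- tinygrad does this over tensors, not scalars."""
    def __init__(self, data, parents=(), backward=lambda g: ()):
        self.data, self.grad = data, 0.0
        self._parents, self._backward = parents, backward

    def __add__(self, other):  # ElementwiseOp: ADD
        return Value(self.data + other.data, (self, other),
                     lambda g: ((self, g), (other, g)))

    def __mul__(self, other):  # ElementwiseOp: MUL
        return Value(self.data * other.data, (self, other),
                     lambda g: ((self, g * other.data), (other, g * self.data)))

    def backward(self):
        # topological order, then accumulate gradients from output to inputs
        topo, seen = [], set()
        def visit(v):
            if v not in seen:
                seen.add(v)
                for p in v._parents:
                    visit(p)
                topo.append(v)
        visit(self)
        self.grad = 1.0
        for v in reversed(topo):
            for node, g in v._backward(v.grad):
                node.grad += g

x, y = Value(2.0), Value(3.0)
z = x * y + x        # dz/dx = y + 1, dz/dy = x
z.backward()
assert x.grad == 4.0 and y.grad == 2.0
```

Because the gradient rules live at the op level, porting the forward ops to a new backend is all it takes to get training support there too.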
While currently in its alpha stage, tinygrad boasts a refined API similar to PyTorch but with a fraction of the complexity. How does it plan to outpace the giants? By leveraging aggressive lazy tensor fusion, compiling custom kernels for every specific shape, and maintaining a backend over ten times simpler than PyTorch’s. The framework will officially leave alpha when it can reproduce a common set of papers on a single NVIDIA GPU twice as fast as PyTorch, with a target ETA of Q2 next year.
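The lazy-fusion idea is easy to sketch in miniature (a hypothetical toy, not tinygrad’s code): instead of executing each op immediately and writing an intermediate buffer after every step, a lazy tensor records the op chain and only runs it when the result is needed, applying the whole chain in a single pass per element, i.e., one fused “kernel” instead of one kernel per op.

```python
import math

class LazyBuffer:
    """Minimal sketch of lazy elementwise fusion (illustrative only).
    Ops are recorded, not executed; realize() runs the entire chain in
    one pass over the data, avoiding per-op intermediate buffers."""
    def __init__(self, data, ops=()):
        self.data, self.ops = data, ops  # ops: pending (name, fn) pairs

    def apply(self, fn, name):
        # record the op lazily -- no computation happens here
        return LazyBuffer(self.data, self.ops + ((name, fn),))

    def mul(self, k):  return self.apply(lambda x: x * k, "MUL")
    def add(self, k):  return self.apply(lambda x: x + k, "ADD")
    def sqrt(self):    return self.apply(math.sqrt, "SQRT")

    def realize(self):
        # the "fused kernel": every recorded op runs per element, in one loop
        out = []
        for x in self.data:
            for _, fn in self.ops:
                x = fn(x)
            out.append(x)
        return out

t = LazyBuffer([1.0, 4.0, 9.0]).mul(4.0).sqrt().add(1.0)
print([name for name, _ in t.ops])  # ops are queued; nothing computed yet
print(t.realize())                  # the whole chain runs in one pass
```

In a real compiler the recorded graph would be lowered to a shape-specialized GPU kernel; the payoff is the same: fewer kernel launches and no memory traffic for intermediates.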
It is also already proving its worth in the real world. Tinygrad currently powers the driving model in openpilot, running efficiently on a Snapdragon 845 GPU. It fully replaces SNPE, runs faster, loads ONNX files, supports training, and, crucially, handles the attention mechanisms that previous systems could not.
Unleashing the Tinybox
Software is only half the battle; AI needs compute. Enter the tinybox, a powerhouse computer purpose-built for deep learning. Designed to offer the absolute best performance-to-dollar ratio on the market, the tinybox has already proven its mettle by benchmarking in MLPerf Training 4.0 against machines that cost ten times as much. And, as the tiny corp notes, any machine capable of training can effortlessly handle inference.
Getting your hands on a tinybox is an exercise in no-nonsense efficiency. The factory is operational, and units ship worldwide (or are available for pickup in San Diego) within a week of receiving payment. However, to keep prices low and quality exceptionally high, the tiny corp operates with strict boundaries: there are no custom orders, no supplier onboarding forms, and wire transfer is the only accepted payment method. Once you buy it, the hardware is entirely yours to customize and tweak as you see fit.
The Culture: Investing with Pull Requests
The tiny corp is building a community as unique as its tech stack. Development thrives out in the open on GitHub and Discord. Now fully funded, the company is actively hiring software engineers, operations staff, hardware experts, and highly talented interns.
Their hiring process, however, flips the traditional corporate model on its head. Instead of standard interviews, prospective engineers are encouraged to tackle bounties—getting paid to prove they are a good fit. For non-engineering roles, the barrier to entry remains deeply tied to the product: if you haven’t contributed to the tinygrad repository, your application simply won’t be considered. Even for those looking to back the company financially, the message is clear and uniquely open-source: “Invest with your PRs.”
For organizations wanting to accelerate their own workflows, George Hotz and the tiny corp are open to contracts and sponsorships to further improve the tinygrad ecosystem.
By stripping away software bloat, cutting out hardware middlemen, and demanding hands-on engineering excellence from its team, the tiny corp isn’t just building another ML framework. They are attempting to rewrite the economics of artificial intelligence entirely.