Open-Source Artificial Super-Intelligence Institute.
Intelligence is the substrate of civilization. It has to be open. AER Labs is the IAS / IHES of the ASI era — native to the substrate it is built on.
The ASI-Native Thesis
The acronym AER comes from the Latin word for air, the invisible medium that surrounds and enables every living thing. The choice is intended to be evocative rather than literal: it points to the idea that intelligence, like air, is best understood as a substrate — a medium through which other activities are made possible. The stronger claim, which the rest of this note will attempt to make plausible, is that intelligence is the substrate of civilization, and that this substrate must be open.
I will argue this in three steps, using a structure suggested by the acronym itself.
A. Acceleration. It is a familiar empirical observation that hard mathematical and physical problems have historically been resolved by individual human cognition operating at relatively slow serial rates. The Clay Millennium Problems offer a useful calibration: of the seven, only the Poincaré Conjecture has been resolved, and that resolution required roughly a decade of focused work by a single researcher. One may reasonably interpret the remaining six not as evidence of intrinsic difficulty but as evidence of insufficient cognitive throughput — what I will call, somewhat informally, the supply of capable mind-decades available to the field at any given time. If one accepts this framing, then sufficiently capable AI systems should be expected to expand that supply by perhaps two or three orders of magnitude. Whether the resulting expansion in scientific output scales linearly with the input — that is, whether one obtains a thousandfold increase in the rate of paradigm-level discoveries — is, I admit, a non-trivial conjecture. There are reasonable models in which it does (open contribution graphs, parallelizable subproblems, recursive refinement) and reasonable models in which it does not. I lean toward the first class of models, though I emphasize this is a conjecture, not a theorem.
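The "two or three orders of magnitude" claim can be made explicit as back-of-envelope arithmetic. The sketch below is illustrative only: the researcher count and the agent-equivalents-per-researcher figure are assumptions chosen to show the shape of the calculation, not estimates.

```python
# Illustrative back-of-envelope for the "mind-decades" framing.
# All numbers are assumptions chosen for illustration, not measurements.

def mind_decades(researchers: int, years: int) -> float:
    """Total cognitive supply, measured in researcher-decades."""
    return researchers * years / 10

# Baseline: a specialized field with ~1,000 active researchers over 10 years.
baseline = mind_decades(researchers=1_000, years=10)

# Hypothetical ASI regime: each researcher paired with ~500 agent-equivalents.
expanded = mind_decades(researchers=1_000 * 500, years=10)

print(expanded / baseline)  # 500.0 -- between two and three orders of magnitude
```

The open question in the text is precisely whether discovery rates track this input multiplier linearly, which the arithmetic alone cannot settle.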
E. Efficiency. Even granting Acceleration, one then faces the architectural question of how the multiplied cognitive supply is to be organized. Here the case is, I think, somewhat clearer. Open contribution graphs — the canonical examples being Linux, Wikipedia, and the modern open machine-learning ecosystem (vLLM, PyTorch, llama.cpp) — appear to compound by something resembling Metcalfe's law, while closed alternatives compound primarily by accumulation of capital. Over horizons of several decades the difference becomes asymptotically dramatic; over a century, the closed approach loses by what I would estimate to be at least one or two orders of magnitude. There is also a narrower, more rigorous argument: alignment, in any meaningful sense, requires verification, and verification requires read-access to weights, training data, and inference traces. A closed system in which these are unavailable fails verification by construction, and one is left with assertion — trust us — as the only available substitute. For systems whose failure mode is civilization-scale, I do not think that substitute is adequate.
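The divergence between network-style and capital-style compounding can be sketched numerically. Assuming Metcalfe-style value proportional to the square of the contributor count, against a closed lab whose capacity accumulates roughly linearly with capital, the gap after a few decades falls out directly. The growth rates and starting values here are illustrative assumptions, not empirical estimates.

```python
# Illustrative comparison of open (network) vs closed (capital) compounding.
# Growth rates and starting values are assumptions for the sketch.

def open_value(contributors0: float, growth: float, years: int) -> float:
    """Metcalfe-style: value scales with the square of the contributor count."""
    n = contributors0 * (1 + growth) ** years
    return n ** 2

def closed_value(capacity0: float, added_per_year: float, years: int) -> float:
    """Capital-style: capacity accumulates roughly linearly."""
    return capacity0 + added_per_year * years

# Start both at the same nominal value so only the compounding shape differs.
v_open_30 = open_value(contributors0=100, growth=0.10, years=30)
v_closed_30 = closed_value(capacity0=100**2, added_per_year=10_000, years=30)

print(v_open_30 / v_closed_30)  # roughly one order of magnitude after 30 years
```

Extending the horizon to a century widens the ratio dramatically, which is the sense in which the closed approach "loses by at least one or two orders of magnitude" over that span.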
R. Recursion. The third principle is more operational than conjectural. If an institute is going to bet on Acceleration and demand Efficiency, it must also be capable of using its own outputs as inputs: the systems it ships into the open ecosystem must return to it as accelerants for subsequent work. The atomic unit of research at AER is therefore the triple (human, frontier model, open codebase), in continuous interaction — a different organizational primitive from the lone-investigator model of the Cavendish or the small-team model of the Institute for Advanced Study. Whether it is the correct primitive is something only operating experience will settle. The structural argument, however, is, I think, clear: a theorem nobody can run is, at best, half a result, and the institute should be measured by the substrate it leaves behind, not only by the papers it publishes.
Closing the loop. The three principles are not independent claims; they are mutually entangled. Acceleration without Efficiency dissipates into wasted compute. Efficiency without Recursion produces one-shot gains that do not compound. Recursion without Acceleration is a feedback loop running too slowly to matter on civilizational timescales. The three are, I think, best understood as three faces of one institutional form, each forcing the others.
On historical context. Pre-ASI institutes — the Royal Society, the Cavendish Laboratory, the Institute for Advanced Study, IHES — were each optimized under the same fixed constraint: too few capable minds operating at human serial speeds. That constraint is now, plausibly, releasable. AER is, to my knowledge, the first institute deliberately designed for the regime that follows.
On the unit of measurement. Once cognitive supply expands substantially, the existing prizes (Nobel, Fields, Turing) will, I suspect, begin to feel as if they are calibrated to the wrong scale. I would not abolish them. But I think the relevant unit of progress for an ASI-native institute is something closer to paradigms per decade, and the frontier itself shifts from the questions we have to the questions we could not previously pose. This is, in spirit, what Kardashev did for civilization-scale energy. The energy axis has a Kardashev scale; the intelligence axis does not yet, but I think it should. AER is, in part, an argument for one. The horizon is K-I, then K-II.
Open Source
50+ merged PRs into the top-tier OSS projects the field runs on.
- DeepSeek · 6 PRs
- Google DeepMind · 4 PRs
- PyTorch · TorchAO · 1 PR
- llama.cpp · 1 PR
- Unsloth · 13 PRs
- verl · ByteDance RLHF · 9 PRs
- Qwen · FlashQLA · 2 PRs
- Flash Linear Attention · 2 PRs
- TileLang · 2 PRs
- OpenRLHF · 1 PR
- Axolotl · 1 PR
- LlamaFactory · 1 PR
Fellows
Highlighted Events & Seminars
See the full list of events and seminars →
- AI Research Summit 2026 (AER Labs · Brian Chau)
- Red Hat Meetup at Network School (Red Hat · Network School · Ayush Satyam)
- PyTorch Meetup Singapore (Red Hat Asia Pacific)
- Ralphthon @ Singapore (OpenAI)
- Forest City: Claude Code for Entrepreneurs (Anthropic · Network School)
Join Us
The most important institute of the next decade is being built now.