Imagine you were a programmer in the 1950s. Your day-to-day was punching machine code into cards.
In the 1970s, you were wrestling with FORTRAN. By the 1990s, maybe C++. Today? You’re basically just talking to the computer in English. Turing’s childhood dream is now your reality with coding agents.
Each generation got to think a little less about the machine and a little more about the fun problem they were actually trying to solve.
Modern compilers are one-way DAGs. They take Python and lower it down the stack through multiple intermediate representations until it becomes machine code. There’s some upward flow - as you type in your IDE, a partial compilation happens via the Language Server Protocol, feeding information back up to create those squiggly lines and suggestions. But compilers never go fully back up the abstraction ladder. Disassembly exists, but it doesn’t reconstruct your original high-level intent.
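You can watch this one-way lowering in a few lines of Python, using only the standard library:

```python
import ast
import dis

# Toy illustration of one-way lowering: source -> AST -> bytecode.
# Each stage discards surface detail, which is why the reverse trip
# (disassembly) recovers operations but not the original intent.
source = "total = price * quantity"

tree = ast.parse(source)                 # high-level representation
print(ast.dump(tree, indent=2))

code = compile(tree, "<demo>", "exec")   # lowered to CPython bytecode
dis.dis(code)                            # stack-machine ops, not intent
```

The disassembly tells you what multiplies what, but nothing about prices or quantities beyond whatever variable names happen to survive.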
Mathematics needs something compilers never achieved: a true bidirectional cycle. For thousands of years, mathematicians have thought in high-level, intuitive leaps, not formal logic. Yet math is undergoing a modernization. With proofs now spanning hundreds of pages, they are often riddled with hidden errors. Time and again, when a proof stubbornly resists formalization, the informal human source turns out to be wrong - just recently, mistakes were found and fixed during the ongoing formalization of Fermat’s Last Theorem. The bottom formal layer catches what high-level intuition misses.
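A toy example of the same effect, assuming a recent Lean 4 toolchain (a hand-picked illustration, not one of the FLT fixes):

```lean
-- Intuition says n - 1 + 1 = n is obvious. Over Lean's natural numbers
-- it is false at n = 0, because Nat subtraction truncates (0 - 1 = 0),
-- so the kernel rejects the unguarded claim. Stating the hypothesis
-- that intuition silently assumed makes the proof go through.
example (n : Nat) (h : 1 ≤ n) : n - 1 + 1 = n := by
  omega
```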
Meanwhile, autoformalization - the process of converting natural-language proofs to Lean code - is a form of hierarchical planning, bridging the abstraction layers.
Going down the stack: Autoformalization translates intuitive mathematical reasoning into formal proof – the compiler’s traditional direction.
Going up the stack: Autoinformalization translates formal proofs back into human intuition – something compilers never truly do.
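Here is a hand-written sketch of what one round trip might look like (illustrative only; the closing comment stands in for an autoinformalizer’s output):

```lean
-- Down the stack (autoformalization): the informal claim
-- "the sum of two even numbers is even", rendered as Lean.
theorem even_add_even (a b : Nat)
    (ha : ∃ x, a = 2 * x) (hb : ∃ y, b = 2 * y) :
    ∃ k, a + b = 2 * k :=
  match ha, hb with
  | ⟨x, hx⟩, ⟨y, hy⟩ => ⟨x + y, by omega⟩

-- Up the stack (autoinformalization), a faithful English reading:
-- "Write a = 2x and b = 2y; then a + b = 2(x + y), which is even."
```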
When combined, these create an interactive prover in natural language, freeing mathematicians to explore dozens of strategies and rapidly prototype ideas while the machine handles formalization.
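What might that loop look like? Here is a minimal Python sketch; every component is a stub and none of the names are real APIs - a real system would call an LLM for the two translators and the Lean kernel for checking:

```python
from dataclasses import dataclass

# Hedged sketch of the bidirectional loop. All three components are
# stubs with made-up names, standing in for LLM translators and Lean.

@dataclass
class LeanResult:
    ok: bool
    detail: str  # new proof state on success, error message on failure

def autoformalize(informal_step: str) -> str:
    """Stub: translate an English proof step into a Lean tactic."""
    return f"-- tactic for: {informal_step}"

def check_with_lean(tactic: str) -> LeanResult:
    """Stub: run the tactic through the Lean kernel."""
    return LeanResult(ok=True, detail=f"goals remaining after {tactic!r}")

def autoinformalize(lean_detail: str) -> str:
    """Stub: render Lean output back as English."""
    return f"In plain terms: {lean_detail}"

def prove_step(informal_step: str) -> str:
    tactic = autoformalize(informal_step)   # down the stack
    result = check_with_lean(tactic)        # formal ground truth
    # Success or failure, the answer surfaces back up as natural
    # language, so the mathematician revises intuition, not syntax.
    return autoinformalize(result.detail)

print(prove_step("apply induction on n"))
```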
But here’s the most powerful part: machine discoveries at the formal level feed back up. The machine finds patterns, lemmas, and connections in the formal space that humans never intuited, then surfaces them as new high-level insights. The mathematician’s intuition becomes augmented by discoveries happening in a space they couldn’t naturally explore.
The compiler revolution took decades; the mathematical compiler revolution is happening now.