When code walks before it runs, who’s auditing the steps?
In which compile-time execution creates a blind spot in supply chain security, and a developer realises we’ve been trusting the compiler a bit too much
…brainzzz, brainzzzzz, serverzzzzzz…
“In the beginning the Universe was created. This has made a lot of people very angry and been widely regarded as a bad move.” — Douglas Adams, *The Restaurant at the End of the Universe*
I **ignored Zig** for two years.
Not out of any particular animosity — God knows **there are enough programming languages to be actively hostile toward without adding more to the list** — but out of sheer exhaustion. Another systems language. Another attempt to be a “better C” in a landscape already crowded with Rust’s borrow checker sermons, Go’s pragmatic minimalism, and approximately seventeen other contenders I can’t even remember the names of anymore.
The Bun hype train rolled through. I watched it pass. Then it rolled back through when Claude picked it up. Rollercoaster. The “Zig is the future” posts accumulated in my feed. I scrolled past them. The same pattern, over and over: new tool promises revolution, developers get excited, I get older and more tired.
Just another hype cycle. I was too exhausted to care.
I had other things keeping me busy. Then, for instance, came the supply chain attacks. Not the first ones — we’ve had those for years, escalating in both frequency and management panic levels — but the recent wave that hit the Node ecosystem particularly hard. The kind that makes your Teams channels light up at 3 AM and your management look like they need more Relanium than usual. The kind where you’re scrambling to audit dependencies you didn’t even know you had, checking whether some package seventeen layers deep in your node_modules decided to phone home to an interesting IP address.
I’ve been on the receiving end of these incidents enough times that I’ve developed what you might call expertise through repeated beating. Not the kind where you become a security expert — I’m not a pentester, I don’t write exploits for fun, I just work as what you might generously call the court poet for programmable things in a fairly large international manufacturing company. My cybersecurity credentials are simple: I’ve been hit in the face by enough attacks that, through sheer repetition, quantity eventually transitions into something resembling quality.
Unless you were hit in the head too many times. Then all bets are off.
…ordinary day…
It was during one of these delightful episodes — management spiraling, Teams on fire, me drinking coffee and wondering if Socrates was right about wisdom being awareness of ignorance or just deeply depressed — that I stumbled across an article on Medium.
“Why I’m Falling in Love with Zig” by Olenin Slava.
Normally I’d have scrolled past it. Another enthusiastic love letter to another hyped language. The internet is full of them. But this one caught my eye precisely because it wasn’t trying to sell me on Zig as the next revolutionary thing that would solve all of programming’s problems and possibly bring about world peace.
Slava was writing about something much simpler, much more personal: his experience moving from C++ to Zig (including a ping-pong detour onto the Rust bandwagon) and discovering a workflow that just… worked. He wasn’t an evangelist. He was a developer who’d found a language that fit his brain, describing how the simple grammar made AI assistants genuinely useful, how fast compilation meant he could iterate every few seconds, how the whole thing felt “like a non-stop pipeline.”
He was celebrating productivity. Developer happiness. The joy of tools that get out of your way.
I read it and saw a mutation engine.
Not because of anything malicious in his article — Slava was being perfectly sincere about loving a language for entirely reasonable reasons. But because when he described how comptime code was *“just standard Zig”* — not some *“gibberish-like language”* that AI struggles with — and how this made AI-assisted development seamless, something clicked in my caffeine-addled brain.
Not about Zig specifically. About where we’ve been looking for supply chain vulnerabilities.
We audit our dependencies. We scan our binaries. We set up SBOMs and vulnerability databases and probably sacrifice a chicken to the OWASP gods every full moon.
But the compiler? The actual transformation layer between source and binary, where code can execute before the binary exists, in a language simple enough that AI can generate it as easily as regular code, with a feedback loop measured in seconds?
We just… trusted it. Assumed it was deterministic. A pure function: code goes in, binary comes out. No side effects. Just translation.
Turns out that assumption is becoming increasingly adorable. But let’s put our boots back on the ground with a bit of a disclaimer first…
A Confession: I Don’t Actually Live Here
…the machine? what machine?…
“Real knowledge is to know the extent of one’s ignorance.” — Confucius *(which is comforting when you’re about to write about a language you’ve never used in production)*
Let me be clear about my expertise in systems programming: I have almost none.
**Like any sensible programmer, I’m fundamentally lazy.** Not in the pejorative sense — lazy in the efficient sense. We automate boring bits. We reach for tools that solve the problem with minimum cognitive overhead. Which means that my day-to-day existence involves Python for when I need something done quickly, TypeScript for when I’m forced to acknowledge that browsers exist, and Go for when I need the performance of a systems language but also the convenience of a garbage collector doing the thinking about memory so I don’t have to.
**What you use is what you need.** And I rarely need to descend into the catacombs of systems programming.
The exception — and every rule has one — is Rust. But that’s not because I’m building performance-critical systems. That’s therapy. Specifically, therapy for PTSD acquired from prolonged TypeScript exposure, where `as unknown as any` is apparently a solution to type problems rather than an admission that the type system has given up and gone to the pub.
Rust’s borrow checker, after you stop fighting it and accept that it might actually know what it’s talking about, feels like someone finally cares about correctness. It’s the programming equivalent of a weighted blanket for anxiety — constricting, yes, but in a way that’s oddly comforting.
C? Anecdotally. Usually when debugging something that went wrong in a dependency three layers down, which is to say: when someone else’s problem becomes my emergency at 2 AM. One does not choose to write C. One is chosen by circumstances, usually unfortunate ones.
So when I say I *“ignored Zig for two years,”* I’m not speaking from the position of a systems programming expert evaluating a new entrant to the field. I’m speaking as someone who operates comfortably in the garbage-collected safety of languages that handle memory management so I can focus on actual problems, and who views systems programming the same way one might view open-heart surgery: absolutely essential, glad someone knows how to do it, rather hoping I never have to learn.
This makes what follows either more or less credible depending on your perspective. I’m not warning you about Zig from deep expertise. I’m warning you about a pattern I noticed while reading about Zig — a pattern in how we think about supply chain security, compilation, and trust.
The fact that I don’t live in this space might actually be an advantage. Sometimes you need distance to see the shape of the problem. Or possibly I’m just another person on the internet with opinions about technologies they don’t use, in which case I’m upholding a fine tradition.
Either way, here’s what I saw.
What Compile-Time Execution Actually Means (And Why I Should Have Paid Attention Sooner)
…the very “zong” moment…
“We are what we pretend to be, so we must be careful about what we pretend to be.” — Kurt Vonnegut, *Mother Night* *(equally applicable to code pretending to be passive)*
Here’s what I thought I knew about compilation: source code gets transformed into machine code through a deterministic, inspectable process. Macros exist, sure — little text-replacement demons that make C++ developers feel clever and everyone else feel nauseated — but fundamentally, the compiler is a translator, not an interpreter.
Build tools can execute code, obviously. Make, CMake, npm scripts — half of them are Turing-complete accidentally and the other half are Turing-complete out of spite. But the language compiler itself? That’s supposed to be the deterministic bit. The trustworthy layer. The bit that doesn’t have opinions about what your code should become beyond what the language specification demands.
Zig looked at that model and said: “What if we just… didn’t?”
The `comptime` keyword in Zig doesn’t give you macros. It gives you actual program execution during compilation. Any code you can write in Zig, you can run at compile time. Full Turing-complete computation happening before your binary even exists.
This isn’t Rust’s `const fn`, which lets you run a carefully constrained subset of the language in compile-time contexts — functions marked as *“safe enough to run when the compiler needs them.”* This isn’t C macros, which operate on tokens like a particularly aggressive sed script that’s achieved sentience and become rather pleased with itself. This is the entire language, available during compilation, with the same capabilities as runtime code.
```zig
comptime {
    // This runs during compilation.
    // Not as a macro. As actual code.
    // With full language capabilities.
    const message = processEnvironmentAndGenerateCode();
    // Whatever that function does, it happens
    // before the binary exists.
}
```
My first reaction: “That’s… actually elegant. Code generation without a separate meta-language.”
My second reaction, approximately ten minutes later while my coffee went cold: “Oh. Oh fuck.”
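To make that concrete with something benign, here’s a minimal sketch of my own (not from Slava’s article) of comptime building a lookup table while the compiler runs. Syntax targets a recent Zig release and may need small tweaks across versions:
```zig
const std = @import("std");

// Comptime-generated lookup table: the squares of 0..255, computed while the
// binary is being built. Only the recipe exists in source; the finished table
// is baked straight into the data section.
const squares: [256]u32 = blk: {
    var table: [256]u32 = undefined;
    for (&table, 0..) |*entry, i| {
        entry.* = @as(u32, @intCast(i * i));
    }
    break :blk table;
};

pub fn main() void {
    // Nothing is computed here at runtime; the value already exists.
    std.debug.print("12 squared is {d}\n", .{squares[12]});
}
```
Nothing sinister about it; this is exactly the zero-cost code generation the language promises. The point is that the same machinery can run arbitrary logic, and it runs before anything you’d normally scan even exists.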
Because **here’s the thing about supply chain security: we’ve built our entire defensive model around the assumption that we can audit what runs**. Source code review. Dependency scanning. Binary analysis. SBOM generation. The whole apparatus assumes a world where malicious code has to exist in some inspectable form.
But what about code that generates itself during compilation based on the environment it’s being built in?
What about payloads that only materialise when compiled on specific machines, at specific times, by specific users?
What about malicious logic that lives purely in the transformation, never appearing in either the source or the final binary in recognisable form?
We’ve been auditing the ingredients and inspecting the meal. Nobody thought to watch the kitchen while the chef was cooking. Which, in retrospect, seems like rather an oversight.
The Audit Gap: We’ve Been Looking at the Wrong Layer
…not here means nowhere…
“You can’t depend on your eyes when your imagination is out of focus.” — Mark Twain *(and our imagination about compilation has been very out of focus indeed)*
Supply chain security has a mental model. It’s not written down anywhere officially, but it’s implicit in every tool we use, every process we follow:
Source → [COMPILATION (TRUSTED)] → Binary
We **audit the source**. We scan the binary. We trust the middle bit to be a deterministic transformation. A sort of programming equivalent of *“innocent until proven guilty,”* except we never bother with the trial.
This model works remarkably well for most languages:
- C/C++: Macros are textual, visible in preprocessed output
- Go: Virtually no metaprogramming, compilation is straightforward transformation
- Rust: `const fn` is constrained; macros operate on tokens and can be expanded for inspection
- Python/JavaScript: No compilation phase to worry about (the interpreter is its own problem, but at least it’s a visible problem)
But Zig — and I’m not singling out Zig maliciously here, it’s just the purest current expression of this pattern — breaks the model rather thoroughly.
Source → [ARBITRARY CODE EXECUTION] → Binary
The compilation phase isn’t a passive transformation anymore. It’s an execution environment. And we have exactly zero tooling designed to audit it.
Think about where our security tooling operates:
- SAST (Static Application Security Testing): Analyses source code
- Dependency scanners: Check known vulnerabilities in declared dependencies
- SBOM generators: Track what went into the build
- Binary scanners: Look for patterns in the output
- Sandboxing: Limits what the final program can do
Notice what’s missing? Anything that audits what the compiler did during compilation.
The transformation layer is a black box. We put source in one end, we get binary out the other, and we just… assume nothing interesting happened in between. It’s like trusting that your food is safe because you inspected the raw ingredients and then checked that the final dish didn’t smell funny, without ever watching what the cook did in between. What could possibly go wrong?
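For a sense of what lives inside that black box, here’s a benign, hypothetical sketch of comptime logic whose output depends on the build environment. It only consults `@import("builtin")`, which exposes the compilation target and optimisation mode and is perfectly legitimate and very common. But the branch not taken simply never appears in the binary, and nothing in the tooling list above ever watches this decision being made:
```zig
const std = @import("std");
const builtin = @import("builtin");

// Benign sketch: what ends up in the binary depends on how, and for what
// target, it is being compiled. For a native build, `builtin` effectively
// describes the machine doing the building.
const banner: []const u8 = if (builtin.os.tag == .linux and builtin.mode == .Debug)
    "debug build for Linux"
else
    "some other build entirely";

pub fn main() void {
    std.debug.print("{s}\n", .{banner});
}
```
Swap the string literals for anything else the language can express, and the shape of the audit gap becomes obvious.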
The AI Acceleration Problem: When Simple Becomes Dangerous
…unleash whatever works…
“Simplicity is the ultimate sophistication.” — Leonardo da Vinci *(though he didn’t have to worry about AI generating thousands of compile-time attack variants)*
Reading Slava’s article, I hit this paragraph that stopped me cold:
“Unlike Rust macros, comptime code isn’t some absolutely different gibberish-like language; it’s just standard Zig. You’ve probably guessed that this helps AI agents a lot. They can easily read existing comptime code and, of course, write new code.”
He was celebrating this. The simplicity. The accessibility. The fact that AI coding assistants can understand and generate comptime code as easily as they handle a regular for loop.
And he’s absolutely right to celebrate it — from a productivity standpoint, it’s brilliant. The same language at compile time and runtime. No context switching. No learning a separate macro DSL that reads like line noise had a disagreement with a keyboard. AI agents that can help you write sophisticated metaprogramming as naturally as they help with a for loop.
But I read that sentence and saw something else entirely.
Here’s what Slava described about his workflow: fast compilation (even with the older, slower Zig compiler), file-watch enabled, AI agents making changes every few seconds. He described iterations feeling “like a non-stop pipeline.” Change, compile, check, change and compile again. The loop is measured in seconds.
For legitimate development, this is paradise. Tight feedback loops accelerate learning and experimentation. It’s the programming equivalent of having a conversation rather than exchanging letters by post.
**For an attacker?** This is a mutation engine.
The Feedback Loop From Hell: AI + Comptime + Fast Compilation
…while True: but worse…
“The difference between screwing around and science is writing it down.” — Adam Savage *(and the difference between malware and an epidemic is iteration speed)*
Let me connect the dots Slava laid out, but viewed through a rather less optimistic lens.
First: Zig’s grammar is simple. Deliberately, beautifully simple. No complex macro syntax. No separate metaprogramming DSL. This means AI models — which struggle with Byzantine syntax the way I struggle with morning meetings — can generate Zig comptime code with the same confidence they generate regular code.
Second: Zig compiles fast. Particularly for the simple, direct code that both the language philosophy encourages and AI naturally produces. Slava mentioned that even on the older compiler, his iteration loop was nearly instant. Blink and you’ve missed it.
Third: Comptime executes during compilation. Which means the feedback loop for “did my generated payload work?” happens at compile time, not deploy time. No need to actually run the program. No need to worry about runtime detection. Just: did it compile? Good. What did the comptime code do? Let’s check.
Put these three together and you get something genuinely concerning: an AI-powered mutation engine for compile-time attacks with a feedback loop measured in seconds.
Here’s the workflow for an attacker:
1. AI generates a comptime payload variant
2. Compilation attempt (< 5 seconds)
3. Did it compile? Did the comptime logic execute?
4. If no: AI generates a new variant based on the error
5. If yes: does it produce the desired effect?
6. Iterate until successful
7. Generate 50 more variants with slight mutations
8. Package the most reliable ones into dependencies
**This isn’t hypothetical.** This is literally the workflow Slava described for legitimate development, just with different goals. And possibly less coffee.
The AI doesn’t need to understand why something works. It just needs to observe that compilation succeeded or failed, that certain environment variables were or weren’t present, and that the output binary did or didn’t contain certain patterns. The simple grammar means mutations are less likely to break syntax. The fast compilation means thousands of iterations per hour, or at least far more than any human could review. The comptime execution means the testing happens during build, invisibly.
Compare this to attacking a language like Rust:
- Complex macro syntax → AI struggles to generate valid macros
- Constrained `const fn` → Many attack patterns simply won’t compile
- Slower compilation → Longer feedback loop, fewer iterations
- Separate macro DSL → More for AI to learn, more ways to fail
Or C++:
- Template metaprogramming → Byzantine syntax, even AI makes mistakes
- Slow compilation → Feedback loop measured in minutes, not seconds
- Preprocessor visibility → Macros can be inspected separately
Zig’s advantages for legitimate developers — simplicity, speed, unified syntax — become advantages for AI-generated attacks. The language isn’t designed to be hostile to AI generation. It’s designed to be friendly to it. Which is wonderful, until it isn’t.
The Epidemic Metaphor: Why This Spreads Faster Than We Can Track
…pure science fiction horror, but looks funny enough…
“In theory, theory and practice are the same. In practice, they are not.” — Attributed to various *(and in practice, we’re rather unprepared for this)*
Think about why flu epidemics are dangerous. Not because any single strain is unstoppable, but because the virus mutates faster than we can develop immunity. By the time we’ve created a vaccine for this year’s flu, next year’s variant has already evolved. It’s rather unsporting of the virus, really.
Now apply that to compile-time attacks generated by AI:
Traditional malware: Human writes payload → defense detects signature → human must write new payload. Slow. Rate-limited by human creativity, effort, and the need to occasionally sleep.
AI-generated runtime malware: AI generates variants → some get caught → AI learns and iterates. Faster, but still rate-limited by deployment and detection cycles.
AI-generated comptime attacks: AI generates variant → compiles in seconds → comptime logic succeeds or fails → AI observes and mutates → new variant ready for next build. Feedback loop measured in seconds. No deployment needed to test. No runtime detection to evade. Just: does it compile? Does the comptime logic execute? Does it generate the desired payload?
And because Zig’s simple grammar means fewer syntax errors, and fast compilation means rapid iteration, the mutation rate is limited only by how fast you can run `zig build`. Which, as Slava helpfully demonstrated, is "very fast indeed."
Slava mentioned his workflow felt like *“a non-stop pipeline”* with AI making changes every few seconds. That’s wonderful for legitimate development.
It’s terrifying for defence.
Because while you’re analysing one variant, teaching your SAST tool to recognise its pattern, updating your binary scanner for its signature — the AI has already generated fifty new mutations. Different variable names. Different environment checks. Different payload encoding. Same ultimate effect. It’s like playing Whack-A-Mole except the moles are multiplying exponentially, and you’re still looking for your mallet.
The language wasn’t designed to make this easy. But the combination of simplicity (AI-friendly), speed (rapid iteration), and compile-time execution (invisible testing) creates an environment where mutations can evolve faster than defences can adapt.
It’s not any single attack that’s the problem. It’s the rate at which new attacks can be generated and tested. The flu comparison isn’t hyperbole — it’s uncomfortably accurate. Except this flu has a laboratory that can test thousands of mutations per hour, and we’re still trying to figure out how to take its temperature.
Why “Cowboy Code” Becomes Camouflage
…feel the vibe and let’s break things, chaps…
“Perfect is the enemy of good.” — Voltaire *(and in this case, “good enough” is the enemy of “secure”)*
Here’s where **Zig’s current state becomes particularly interesting from an attacker’s perspective**.
The language is young. Version 0.x. Still evolving. The documentation freely admits that breaking changes will happen. Stability is a future goal, not a current reality. For legitimate development, this is a consideration — do you bet your production systems on a moving target?
For a black-hat operator? This is perfect.
Ugly code? Doesn’t matter. Hard to maintain? Irrelevant. Might break in future Zig versions? Who cares — it only needs to compile once. Cowboy language with rough edges? Even better.
This isn’t gentlemanly fencing in white uniforms with rules and judges scoring points for style. This is MMA in the parking lot of a pub on Friday night. **Messy. Brutal. Whatever works.** No style points, just results. And possibly some regrettable decisions.
The very characteristics that make some developers hesitant about Zig — the instability, the sharp edges, the lack of established patterns — are advantages for disposable attack code. You’re not building a maintainable system. You’re building a one-shot payload generator that needs to work exactly once, on one machine, and then can be forgotten like a bad Tinder date.
Zig’s “cowboy” reputation? That’s not a bug for this use case. That’s camouflage. Messy, experimental-looking comptime code doesn’t raise red flags — it looks like someone learning the language, trying things out, following the Zig philosophy of simplicity over abstraction. “Oh, they’re just experimenting with comptime,” you think. “How enthusiastic.”
And because the language is still young and fast (precisely because there aren’t layers of abstraction yet), an AI can iterate on attack variants quickly. The feedback loop is tight. Generate code, compile, test, mutate. Over and over. Evolution in fast-forward. Darwin would be fascinated, if also slightly horrified.
The Horde Problem: When AI Generates the Apocalypse
…you can run, but you cannot hide…
“Quantity has a quality all its own.” — Disputed attribution *(Stalin? Lenin? Someone who understood exponential growth?)*
Here’s where the zombie metaphor becomes uncomfortably apt.
Traditional malware has a problem: it needs to be written, tested, and deployed. Each variant requires human effort. Signature-based detection works because attackers reuse code — it’s economically rational. Why write fifty variants when you can reuse one that works?
But AI changes the economics. Completely. Rather thoroughly, actually.
An AI can generate thousands of comptime variants per hour. Each one is slightly different. Each one targets different environment variables, different build signatures, and different conditions. Not because some human sat down and carefully crafted them, but because you fed the AI a template and the Zig compiler’s error messages as feedback.
Horde mode. Except the zombies are code that writes itself, and they all walk during the compilation phase, where we’re not looking.
Something doesn’t work? The AI has already generated fifty more variants based on the compiler error. One of them will work. Probably several, actually.
Binary fails signature check? The comptime logic that generated it gets mutated. New payload structure, same ultimate function, generated from slightly different compile-time code that your scanner has never seen before.
Defence catches the pattern? The AI has already spawned a hundred new approaches you haven’t seen yet, because the iteration loop is measured in seconds and the simple grammar means most mutations compile successfully. It’s like playing chess against an opponent who gets a thousand moves for every one of yours. And they’re not even particularly good moves, but eventually one of them will be.
This isn’t careful, surgical hacking. This is evolutionary pressure applied to attack vectors. Generate, test, mutate, repeat. Like a virus optimising itself against immune systems, except the virus is code that generates itself during compilation based on who’s building it and where.
**We’ve dealt with polymorphic malware before. But that was runtime polymorphism — code that changes itself after deployment.** This is compile-time polymorphism. The attack vector adapts before the binary even exists.
The walking dead, except they walk during the build phase, they evolve at the speed of compilation, and by the time you’re looking at the binary, they’ve already shambled past your defences, baked themselves into production, and are probably having a cup of tea while you’re still trying to figure out what happened.
And because Zig’s design makes this feedback loop so fast, so simple, so AI-friendly — we have exactly zero infrastructure to detect it at the rate it can now evolve.
Why This Isn’t Just Zig (But Zig Is the Clearest Example)
…i mean language is not to be blamed…
“The road to hell is paved with good intentions.” — Proverb *(and Zig’s intentions are genuinely, thoroughly good)*
I need to be clear: this is not an attack on Zig. The language isn’t malicious. The feature isn’t a bug. Compile-time execution has legitimate uses:
- Hardware abstraction without runtime cost
- Compile-time verification of properties
- Code generation for performance-critical paths
- Build-time configuration
These are good things. Powerful things. The kind of capabilities that make systems programming more expressive and safer. The kind of features that make developers go “oh, that’s elegant” rather than “oh God, what fresh hell is this.”
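To ground the “compile-time verification” point, here’s a minimal sketch of the benign case (my example, not from the article): the build itself enforces a layout invariant, and a violation becomes a compile error instead of a production incident.
```zig
const std = @import("std");

// A wire-format header whose layout we depend on.
const Header = extern struct {
    version: u8,
    flags: u8,
    length: u16,
};

// Compile-time verification: if someone later breaks the layout assumption,
// the build fails with a readable message instead of shipping a subtle bug.
comptime {
    if (@sizeOf(Header) != 4) {
        @compileError("Header must be exactly 4 bytes on the wire");
    }
}

pub fn main() void {
    std.debug.print("header is {d} bytes\n", .{@sizeOf(Header)});
}
```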
**But power is dual-use by nature.** Has been since someone figured out that fire was useful for both cooking food and burning down rival villages.
The same feature that lets you generate optimal code paths at compile time also lets you generate conditional code paths based on who’s compiling and where. The same capability that enables zero-cost abstractions also enables zero-trace payload generation. The same simplicity that makes AI-assisted development delightful also makes AI-generated attacks trivial.
Zig isn’t unique in having powerful compile-time features. But it is unique in how accessible those features are:
- Rust: Powerful, but `const fn` is constrained and macros require learning a different syntax (and possibly therapy)
- C++: Templates are Turing-complete but so Byzantine that even generating them with AI is challenging (AI looks at C++ templates and quietly backs away)
- Zig: “It’s just standard Zig” — same syntax, same semantics, AI-friendly, fast feedback loop
That accessibility is a feature for legitimate developers. It’s also a feature for attack vectors that can now evolve at machine speed. Which is one of those “yes, and” situations that nobody really wanted.
The Blue Team Problem: How Do You Defend Against This?
…no clue whatsoever…
“In preparing for battle I have always found that plans are useless, but planning is indispensable.” — Dwight D. Eisenhower *(and we don’t even have plans for this battle yet)*
Here’s the uncomfortable truth: I don’t have a solution. I have awareness of a problem, which is philosophically the beginning of wisdom and practically the beginning of anxiety. Possibly also the beginning of a drinking problem, but we’re trying to remain professional here.
Our existing defences don’t work:
- Static analysis? Looks at the source code. The malicious logic might be conditional, environment-dependent, or generated dynamically during compilation. And even if you catch one variant, fifty more are already queued up, like a very patient, very malicious line at the post office.
- Binary scanning? Looks at the output. The payload is already baked in, potentially obfuscated, possibly different on every build. You’re always examining yesterday’s attack while today’s is already compiling.
- Dependency auditing? Checks declared dependencies. Doesn’t check what those dependencies do during compilation. The audit happens at the wrong layer. It’s like checking if someone has a driver’s license without ever watching them actually drive.
- Reproducible builds? Helps, but only if you can verify that the compilation environment itself is clean. If the attack vector is environment-dependent, reproducible builds just reproduce the attack. Very reliably. With perfect consistency. Which is almost admirable, really.
- Code review? Human reviewers looking at comptime code that “looks like normal Zig.” How do you spot malicious intent in code that legitimately needs to check environment variables for cross-compilation? And how do you review fast enough when new variants appear every few seconds? Humans blink. AI doesn’t.
The only realistic mitigations I can think of:
- Trusted build environments — hermetically sealed, audited, paranoid (the infrastructure equivalent of never leaving your house)
- Multi-party compilation — different organisations build from source, compare binaries (democracy for your build pipeline)
- Compiler provenance — cryptographically verify the compiler itself (but who compiles the compiler? It’s turtles all the way down, and some of them might be compromised)
- Compile-time sandboxing — limit what comptime code can access (breaks legitimate use cases, makes developers sad)
- AI-assisted audit — fight fire with fire, use AI to detect suspicious comptime patterns (arms race at machine speed, what could go wrong?)
Every single one of these is:
- Expensive
- Dependent on organisational discipline
- Premised on knowing you have a problem in the first place
- A source of friction in the development process
- Possibly still too slow to keep pace with mutation rates
The attacker needs to succeed once. The defender needs to be perfect always. And now the attacker can try a thousand times per hour while having a cup of tea.
This is the fundamental asymmetry of supply chain security, and compile-time execution combined with AI-generated mutations makes it exponentially worse. It’s like playing a game where the rules are *“you must win every time”*, *“they only need to win once”*, and *“also they get a thousand tries to your one.”* Quite unsporting, really.
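Of those mitigations, the cheapest place to start is probably the multi-party angle: have two independent environments build the same source and refuse to promote the artifact if the outputs differ. A minimal sketch of the comparison step, in Zig for consistency (the two-builder setup is hypothetical; standard-library names are as of recent Zig releases and occasionally shift between versions):
```zig
const std = @import("std");

// Minimal sketch of the comparison step in a multi-party build check:
// hash two independently produced artifacts and fail loudly if they differ.
pub fn main() !void {
    var gpa = std.heap.GeneralPurposeAllocator(.{}){};
    defer _ = gpa.deinit();
    const allocator = gpa.allocator();

    const args = try std.process.argsAlloc(allocator);
    defer std.process.argsFree(allocator, args);
    if (args.len != 3) {
        std.debug.print("usage: {s} <artifact-a> <artifact-b>\n", .{args[0]});
        return error.BadUsage;
    }

    const a = try hashFile(allocator, args[1]);
    const b = try hashFile(allocator, args[2]);

    if (std.mem.eql(u8, &a, &b)) {
        std.debug.print("artifacts match\n", .{});
    } else {
        std.debug.print("MISMATCH: builds are not reproducible, do not ship\n", .{});
        return error.ArtifactMismatch;
    }
}

fn hashFile(allocator: std.mem.Allocator, path: []const u8) ![32]u8 {
    // Read the whole artifact (up to 1 GiB here) and hash it with SHA-256.
    const data = try std.fs.cwd().readFileAlloc(allocator, path, 1 << 30);
    defer allocator.free(data);
    var digest: [32]u8 = undefined;
    std.crypto.hash.sha2.Sha256.hash(data, &digest, .{});
    return digest;
}
```
It doesn’t solve the problem (if both environments are compromised the same way, they’ll happily agree), but it turns “trust the middle bit” into something an attacker has to subvert at least twice.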
Why This Pattern Will Spread (Whether We Like It or Not)
…s**t just happens…
“Those who cannot remember the past are condemned to repeat it.” — George Santayana *(and we’re about to repeat every meta-programming security mistake at AI speed)*
All historically powerful meta-technologies get weaponised. It’s not a moral judgment; it’s a pattern. Rather like how every communications technology eventually gets used for both coordination and spam.
- C preprocessor macros: Used for obfuscation and code injection
- Executable packers: Legitimate compression tool, widely used for malware
- JIT compilation: Performance optimisation, also attack vector
- Reflection: Runtime introspection, also an anti-analysis technique
The pattern is consistent: powerful abstraction appears → legitimate uses flourish → someone realises it’s also useful for attacks → arms race begins → security conferences get new material for presentations.
Zig’s comptime fits this pattern perfectly:
- Powerful: Turing-complete execution
- Accessible: Same syntax as runtime code, AI-friendly
- Fast: Tight feedback loops enable rapid iteration
- Under-audited: Existing tools don’t check this layer
- Legitimate use cases: Can’t just ban it without breaking the language
**The question isn’t whether compile-time execution will be weaponised.** The question is whether we’ll notice when it happens, whether we can adapt our defences fast enough, and what we’ll do about it. Also whether we’ll have finished our coffee before the next incident.
Given that we’re still struggling with supply chain attacks in ecosystems with much simpler threat models — looking at you, npm — and those attacks don’t mutate at machine speed, **I’m not optimistic. But then again, I’m rarely optimistic before noon.** Or after noon, come to think of it.
Conclusion: Power That Precedes the Program
We spent forty-plus years teaching ourselves to *think like machines*. Now we’re teaching machines to write code. And we’ve created languages where code can think before it exists, where the transformation layer is invisible, and where AI can iterate on attack variants faster than humans can analyse them.
…interesting, my dear lizard, interesting indeed…
The combination is… interesting. In the ancient curse sense of the word. The kind of interesting that historians will write about, assuming we survive to have historians.
**I don’t have solutions. I don’t even have complete expertise.** What I have is observations from the receiving end of enough attacks to recognise a pattern when I see one, and enough coffee-fueled paranoia to wonder if we’ve been looking at the wrong layer this entire time:
- One: Our supply chain security model assumes compilation is a passive transformation. That assumption is increasingly wrong. Adorably, charmingly, dangerously wrong.
- Two: AI makes generating complex, environment-dependent, compile-time code trivial. Combined with fast compilation and simple grammar, this creates feedback loops measured in seconds. This is great for productivity. It’s also perfect for rapidly mutating attacks. The law of unintended consequences strikes again.
- Three: The transformation layer — the gap between source and binary — is unaudited, under-tooled, and largely invisible to our existing defences. We’re not even watching the layer where the mutations are happening. It’s like having a security guard who only watches the front door while the thieves are coming through the chimney.
- Four: Black-hat operators don’t need elegant code. They need code that works once, on one machine, and then disappears. Compile-time execution is perfect for that. And AI generation makes it scalable in ways we haven’t had to defend against before. Evolution in fast-forward, Darwin on amphetamines.
- Five: This isn’t Zig’s fault. But Zig makes the problem clearest because comptime is powerful, accessible, simple, fast, and AI-friendly. Other languages will follow this pattern. Some already have. The difference is in iteration speed. And whether we’re paying attention.
The question isn’t whether this capability will be abused. The **question is whether we’ll detect it** when it is, whether our defences can adapt at machine speed, and whether we’ll care enough to fix the underlying trust model before the next supply chain attack makes management reach for the Relanium.
Meanwhile, I’m going to go audit our build pipeline. Again. Because while I was writing this, seventeen new dependencies probably snuck into our stack, at least three of them are doing something interesting, and somewhere an AI just generated fifty new variants I’ve never seen before and probably never will.
Probably nothing malicious.
***Probably.*** Hopefully…
…let’s call it “fin”…
Though if I’m being honest, the probability is getting less comforting by the day.
The author works as a court poet for programmable things at a fairly large manufacturing company, where “interesting times” remains a standing operational condition. He can be found reviewing build logs with increasing paranoia, wondering whether Socrates had imposter syndrome or just honest self-awareness, and occasionally writing about the intersection of code and philosophy. He maintains that learning Rust without joining the cult should count as a therapeutic achievement.
—
Credit: Thanks to Olenin Slava’s article “Why I’m Falling in Love with Zig” for inadvertently inspiring this paranoia. His enthusiasm for the fast feedback loop made me realise just how fast an attack feedback loop could be.