There are two really good ways to learn the deep fundamentals of a field. One we could call the Carmack/Ilya method: get an expert to give you a list of the seminal papers, systematically work through them, and in the process develop a deep, grounded intuition. This seems to work. The second is: funny tweets.
A case in point:
ok so: engram is moe over ngramed memory mHC is moe over the residual stream NSA is moe over attention MoE is moe over FFNs … im sensing a theme ….
— xjdr (@_xjdr) January 13, 2026
Other than the fact you have to be in a very particular niche in order to understand all the acronyms in that tweet, the idea that everything is an MoE feels right? Pretty much every notable model release, and probably all the secret frontier models, are MoE.
Like every other idea in deep learning, this goes back to something Hinton did in the 90s, specifically the paper Adaptive Mixtures of Local Experts by Jacobs, Jordan, Nowlan and Hinton:
If backpropagation is used to train a single, multilayer network to perform different subtasks on different occasions, there will generally be strong interference effects that lead to slow learning and poor generalization. If we know in advance that a set of training cases may be naturally divided into subsets that correspond to distinct subtasks, interference can be reduced by using a system composed of several different “expert” networks plus a gating network that decides which of the experts should be used for each training case. […] The idea behind such a system is that the gating network allocates a new case to one or a few experts, and, if the output is incorrect, the weight changes are localized to these experts (and the gating network).
The idea is that if your data naturally clusters, then having separate networks avoids smearing understanding across the weights. A dataset with both German and English training data might produce a model that mixes up both languages. If we train two different experts and learn a gating network, we can get a clean “German-speaking” model, and a clean “English-speaking” model, in one.
Also, like every other idea in deep learning, this was very clever, but painful to train. In particular, this was because the decision about which expert to choose was a bit of a cliff. If you choose the German expert when you needed the English expert then the German expert would get some loss, but the English expert would get none. This could lead to the awkward situation where the German expert performed better for both English and German: you ended up with a smaller, smeared model, and a dead expert.
Noam Shazeer and co came to the rescue in 2017 with the excellently titled “Outrageously Large Neural Networks”. They introduced concepts that didn’t fundamentally change the approach, but did make it practical.
One key trick was an auxiliary loss that penalized the model for leaning too heavily on any single expert. Another was adding noise to the gating decision, which kept routing soft enough for gradients to flow back to the near-miss experts. Together these gave the training process a much better chance of avoiding the “winner-takes-all” collapse.
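A minimal NumPy sketch of the noisy top-k gate (shapes and names are mine, and the load-balancing loss is omitted; this is the routing shape, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_top_k_gate(x, w_gate, w_noise, k=2):
    """Shazeer-style noisy top-k gating, schematically.

    x: (d,) token representation; w_gate, w_noise: (d, n_experts) learned.
    Returns a distribution over experts with only k non-zero entries.
    """
    clean = x @ w_gate
    # Input-dependent noise keeps near-tied experts in contention, so the
    # "losing" experts still get picked (and trained) some of the time.
    noise_scale = np.log1p(np.exp(x @ w_noise))          # softplus
    noisy = clean + rng.standard_normal(clean.shape) * noise_scale
    winners = np.argsort(noisy)[-k:]                     # top-k experts
    masked = np.full_like(noisy, -np.inf)
    masked[winners] = noisy[winners]
    e = np.exp(masked - masked[winners].max())           # softmax over winners
    return e / e.sum()

d, n_experts = 8, 4
x = rng.standard_normal(d)
gates = noisy_top_k_gate(x, rng.standard_normal((d, n_experts)),
                         rng.standard_normal((d, n_experts)))
print(gates)   # exactly two experts get non-zero weight
```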

Over time these methods were refined. In a contemporary MoE like DeepSeek v3, sigmoid-based routing replaces the noisy gate, and the auxiliary loss is dropped in favor of what they call bias updates: during training they just put their thumb on the scale for experts that aren’t getting enough samples, which seems to work great.
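To make that concrete, here is a toy simulation of the bias-update idea (the update rule, step size, and setup are simplified guesses, not DeepSeek’s actual recipe): experts with a built-in advantage hog tokens at first, and the bias gradually compensates.

```python
import numpy as np

rng = np.random.default_rng(1)

n_tokens, n_experts, k = 512, 8, 2
target = n_tokens * k / n_experts         # ideal load per expert
skew = np.linspace(-1.0, 1.0, n_experts)  # some experts are "naturally" favored
gamma = 0.01                              # bias step size (made-up value)

def route(bias):
    """Top-k selection on sigmoid affinities plus a per-expert bias.
    The bias only affects *selection*; expert outputs would still be
    weighted by the unbiased affinity."""
    logits = rng.standard_normal((n_tokens, n_experts)) + skew
    scores = 1.0 / (1.0 + np.exp(-logits))
    chosen = np.argsort(scores + bias, axis=1)[:, -k:]
    return np.bincount(chosen.ravel(), minlength=n_experts)

bias = np.zeros(n_experts)
before = route(bias)
for _ in range(300):
    load = route(bias)
    bias -= gamma * np.sign(load - target)   # thumb on the scale
after = route(bias)
print(before, after)   # loads even out once the bias has adapted
```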
All of that is about how we got MoEs to scale, but doesn’t really say… why? Intuitively, if you can train a model with X parameters, it seems like it would be better to have all of them doing something (a dense model), rather than only a subset[1]?
The main reason this has taken over the field is it is a way of decoupling capacity (how much can the network “know”) from compute (how much work does it do for each input).
In a dense model, every token you train on is sent through all parts of the model: every bit of capacity touches it, and each bit uses some compute to process it. MoEs are a form of sparsity: a way of ignoring some of the parameters. They let you add capacity without adding compute[2].
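Back-of-envelope, with made-up dimensions, the decoupling looks like this: replace each FFN with 64 expert copies but route every token through only 4 of them.

```python
def ffn_params(d_model, d_ff):
    """Parameter count of a simple two-matmul FFN (up- and down-projection)."""
    return 2 * d_model * d_ff

# Hypothetical dimensions, chosen just to show the ratio.
d_model, d_ff = 4096, 16384
n_experts, top_k = 64, 4

dense = ffn_params(d_model, d_ff)                    # capacity == compute
moe_capacity = n_experts * ffn_params(d_model, d_ff) # what the layer can "know"
moe_compute = top_k * ffn_params(d_model, d_ff)      # work done per token

print(moe_capacity // dense, moe_compute // dense)   # 64x capacity, 4x compute
```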
There are other ways of achieving the same result, but the MoE approach is very hardware friendly. You’re still mostly doing dense matmuls, just split between experts. In parallelism terms, Expert Parallelism is efficient: you do need an all-to-all to move tokens between devices, but the data volumes are manageable.
The tweet calls out NSA, engram and mHC, all recent papers from DeepSeek. But underneath it calls out the design pattern: make a few alternative compute or memory paths, then use a learned gate to pick (or mix) a subset of them, per token. You get sparsity at the routing level, decoupling formerly coupled aspects, while each path can remain fairly dense and hardware-friendly.
Engram makes the argument that language models have to do two things: reasoning and looking stuff up. The reasoning works great with stacks of Transformers, but the looking-stuff-up part is approximated through computation rather than just… looking stuff up.
This process essentially amounts to an expensive runtime reconstruction of a static lookup table, wasting valuable sequential depth on trivial operations that could otherwise be allocated to higher-level reasoning.
Classically, Natural Language Processing used a lot of N-grams: representations of more than one token at a time. Language models pretty much dropped them in favor of a fixed single-token vocabulary; DeepSeek is bringing them back. These extra embeddings are retrieved for subsets[3] of the tokens in the context window, the resulting vectors are summed[4], and the model then gates how much of the retrieved information to incorporate based on its current state.
It’s the same move of decoupling compute and capacity. Here they are adding a bunch of extra storage parameters but letting the model learn whether or not to use them. Because the retrieval is based on tokens, the table doesn’t have to live in VRAM but can be loaded with the input[5].
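A sketch of that lookup-plus-gate shape (the hashing, the scalar gate, and all names here are my invention; the real engram module is more careful):

```python
import numpy as np

rng = np.random.default_rng(2)
d, table_size = 16, 1000
# Stand-in for the big n-gram embedding table. Because it is indexed purely
# by token ids, it can live outside VRAM and be fetched per input.
table = rng.standard_normal((table_size, d))

def ngram_memory(token_ids, n=2):
    """Hash each n-gram of the input into the table and sum what comes back
    (the paper uses a weighted sum; plain summing keeps the sketch short)."""
    vecs = [table[hash(tuple(token_ids[i:i + n])) % table_size]
            for i in range(len(token_ids) - n + 1)]
    return np.sum(vecs, axis=0)

def gated_inject(hidden, token_ids):
    """Let the current hidden state gate how much retrieved memory to mix in."""
    mem = ngram_memory(token_ids)
    gate = 1.0 / (1.0 + np.exp(-(hidden @ mem) / np.sqrt(d)))  # toy scalar gate
    return hidden + gate * mem

out = gated_inject(rng.standard_normal(d), [3, 14, 15, 92, 65])
print(out.shape)   # (16,)
```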
The second paper, Manifold-Constrained Hyper-Connections (mHC), is the most math-heavy of the recent releases, and it builds on possibly the most cited paper in ML: ResNet.
In the bad old days, the “Deep” in Deep Neural Nets didn’t really exist: you could theorize, but if you tried to train one, the early layers received basically no useful loss signal. ResNets fixed this in the simplest way possible: as well as sending through the “output” of a layer, you sent through the input too. This gave loss gradients an efficient highway to flow back along, and enabled successfully training much, much deeper models.
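In code the fix is almost embarrassingly small (a schematic, not any particular framework’s API):

```python
import numpy as np

def residual_block(x, layer):
    """The ResNet trick in one line: pass the input through alongside the
    layer's output. The layer only needs to learn a *correction* to x, and
    the identity path gives gradients a direct route back to early layers."""
    return x + layer(x)

x = np.ones(4)
out = residual_block(x, lambda v: 0.1 * v)   # a toy "layer"
print(out)   # [1.1 1.1 1.1 1.1]
```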
mHC builds on the observation that ResNets hard-code another compute/capacity tradeoff: the size of the residual channel. Think of a layer of a transformer: it has an input of C tokens, and an output of the same size. The residual connection works by summing the input tokens and the output tokens. That assigns as much information capacity to the residual channel as to the processing channel. E.g.
- Layer 0 gets raw tokens, and outputs a sum of raw+contextualized tokens
- Layer 1 gets layer 0 tokens and outputs a sum of layer0+contextualized tokens
- Etc.
- At the end you get a cake recipe
But maybe that cake recipe would be better if Layer 2 had access not just to the layer0 tokens, but also to the raw tokens? We don’t really have a way to express that outside of adding extra skip connections. Hyper Connections widen the ResNet channel into multiple lanes, and mHC lets the model decide what to put in each: so you could have layer 1 putting layer0 context in one lane, and raw tokens in another lane[6]. If MoE lets you take a bunch of parameters and selectively route tokens to a subset, then mHC lets you take a bunch of residual bandwidth and selectively mix the information flow from your module into a subset of it.
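A shape-level sketch of the lanes idea (names and dimensions are hypothetical, and the manifold constraint that gives mHC its name is omitted):

```python
import numpy as np

rng = np.random.default_rng(3)
n_lanes, d = 4, 8

def hyper_connect(lanes, layer_out, w_mix):
    """Toy widened residual: n_lanes streams instead of one.

    lanes: (n_lanes, d) residual streams; layer_out: (d,) this layer's output;
    w_mix: (n_lanes + 1, n_lanes) learned mixing weights. mHC additionally
    constrains this mixing so that stacking many layers doesn't blow the
    streams up; that constraint is left out of this sketch.
    """
    stacked = np.vstack([lanes, layer_out[None, :]])   # (n_lanes + 1, d)
    return w_mix.T @ stacked                           # new (n_lanes, d) lanes

lanes = rng.standard_normal((n_lanes, d))
new_lanes = hyper_connect(lanes, rng.standard_normal(d),
                          rng.standard_normal((n_lanes + 1, n_lanes)))
print(new_lanes.shape)   # (4, 8)
```

With `n_lanes = 1` and `w_mix = [[1], [1]]` this collapses back to a plain residual connection, which is a nice sanity check that it really is a generalization of ResNet.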
Finally, Native Sparse Attention follows the classic DeepSeek move of throwing a bunch of engineering wins together. Instead of assuming the amount of attention compute for each token is the same, it scales the compute dynamically based on the content itself. It mixes the outputs of three branches: attention over a pooled version of the context window (a compressed representation), a MoE-style gated selection from the full context window[7], and a classic sliding-window attention.
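The selection branch is the most MoE-ish of the three. Schematically (block size, mean pooling, and the scoring are simplifications of mine, not the paper’s exact design):

```python
import numpy as np

rng = np.random.default_rng(4)

def select_blocks(q, keys, block=8, top_k=2):
    """Toy version of NSA-style block selection: score a pooled summary of
    each block of the context against the query, then keep only the
    top-scoring blocks for full attention."""
    n, d = keys.shape
    pooled = keys.reshape(n // block, block, d).mean(axis=1)  # compressed view
    scores = pooled @ q                                       # per-block affinity
    best = np.argsort(scores)[-top_k:]                        # MoE-style routing
    return np.concatenate([keys[b * block:(b + 1) * block] for b in best])

q = rng.standard_normal(16)
keys = rng.standard_normal((64, 16))
kept = select_blocks(q, keys)
print(kept.shape)   # (16, 16): only 2 of 8 blocks get full attention
```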
This is the pattern MoE exemplified:
- look at what is constrained
- add more of it, but make it conditional to avoid scaling other things at the same time
It’s a thread that runs through an awful lot of the industry right now, and understanding it is useful when anticipating where things are going to go next.
Or, you could have saved yourself a lot of time and just liked the tweet.
[1] MoEs do have some inference advantages: if you have a 100bn-parameter model where just 20bn are active for a given token, you simply have to do less work than a 100bn-parameter dense model. That’s a win for latency! But you still have to store all 100bn parameters, meaning you need quite a lot of memory kicking around.
[2] More specifically, they make the ratio of added capacity to added compute very flexible: modern MoEs often have many experts and activate several at a time.
[3] In this case DeepSeek uses 2-grams and 3-grams.
[4] Weighted-summed, to be precise.
[5] In practice they inject the n-gram embeddings at a couple of different points later in the model, where empirically there seemed to be enough context for the model to make useful mixing decisions.
[6] The specific clever thing the DeepSeek folks added was a constraint to stop this mixing from exploding, using the wonderfully named Sinkhorn-Knopp algorithm (apparently).
[7] Based on those pooled tokens. Effectively it’s taking the “summarized” context window, and using runtime gating to decide which bits of the context window to add in full.