🗃️ Otaku Theory · Azuma Hiroki, database animals, moe, anime culture
Scoured 151,746 posts in 25.5 ms

Resources
🧵 Digital Folklore · learnjapanese.moe · 1d

Embarrassingly Simple Self-Distillation Technique
🧠 AI Knowledge Work · mail.bycloud.ai · 3d

Symbiotic-MoE: Unlocking the Synergy between Generation and Understanding
🧠 AI Knowledge Work · arxiv.org · 15h

Lightning Talks V (sotm2025)
🌐 Media Ecology · cdn.media.ccc.de · 1d

We rebuilt how MoE models generate tokens on Blackwell GPUs, resulting in 1.84x faster inference and more accurate outputs.
🧠 AI Knowledge Work · twitter.macworks.dev · 3d

JordiSilvestre/Spectral-AI: "O(log N) MoE Expert routing via RT Core ray tracing. BVH traversal replaces matrix multiplication in neural language models."
🧠 AI Knowledge Work · github.com · 1d · Hacker News

Google Gemma 4 26B A4B now available on Workers AI
🧠 AI Knowledge Work · developers.cloudflare.com · 6d

Better MoE model inference with warp decode
🖥️ Retro Computing · cursor.com · 4d · Hacker News

Frontier Pretraining Infrastructure Is Already Open Source: GPT-OSS on TPU with MaxText
🖥️ Retro Computing · patricktoulme.substack.com · 3d · Substack

Seeing but Not Thinking: Routing Distraction in Multimodal Mixture-of-Experts
🧠 AI Knowledge Work · arxiv.org · 15h

Alloc-MoE: Budget-Aware Expert Activation Allocation for Efficient Mixture-of-Experts Inference
🧠 AI Knowledge Work · arxiv.org · 15h

Do Domain-specific Experts Exist in MoE-based LLMs?
🗂️ Zettelkasten · arxiv.org · 2d

trevorgordon981/alfred-abliterate: Residual-stream abliteration toolkit for MoE models (Qwen3.5-397B-A10B) on Apple Silicon. Removes PRC-aligned content policies from local inference. Tested on Mac Studio M3 Ultra 512GB.
📺 Glitch Art · github.com · 4d · r/LocalLLaMA

MoBiE: Efficient Inference of Mixture of Binary Experts under Post-Training Quantization
🧠 AI Knowledge Work · arxiv.org · 1d

QA-MoE: Towards a Continuous Reliability Spectrum with Quality-Aware Mixture of Experts for Robust Multimodal Sentiment Analysis
💻 Digital Humanities · arxiv.org · 2d

Gemma 4, Phi-4, and Qwen3: Accuracy-Efficiency Tradeoffs in Dense and MoE Reasoning Language Models
🧠 AI Knowledge Work · arxiv.org · 1d

TalkLoRA: Communication-Aware Mixture of Low-Rank Adaptation for Large Language Models
💻 Digital Humanities · arxiv.org · 1d

Efficient Quantization of Mixture-of-Experts with Theoretical Generalization Guarantees
🧠 AI Knowledge Work · arxiv.org · 1d

HI-MoE: Hierarchical Instance-Conditioned Mixture-of-Experts for Object Detection
🧠 AI Knowledge Work · arxiv.org · 3d

MoE Routing Testbed: Studying Expert Specialization and Routing Behavior at Small Scale
🧠 AI Knowledge Work · arxiv.org · 1d
