What I learned building a language-learning app
🤖AI
“Reasoning with Sampling” — Notes on Karan & Du (2025)
kosti.bearblog.dev·2d
🤖AI
AI Isn't Alchemy: Not Mystical, Just Messy
🤖AI
Constrained and Robust Policy Synthesis with Satisfiability-Modulo-Probabilistic-Model-Checking
arxiv.org·5h
🤖AI
Why Language Models Are “Lost in the Middle”
pub.towardsai.net·1d
🤖AI
Beyond Redundancy: Diverse and Specialized Multi-Expert Sparse Autoencoder
arxiv.org·1d
🤖AI
E2E-VGuard: Adversarial Prevention for Production LLM-based End-To-End Speech Synthesis
arxiv.org·1d
🤖AI
Bayesian Uncertainty Quantification with Anchored Ensembles for Robust EV Power Consumption Prediction
arxiv.org·1d
🤖AI
Sensitivity of Small Language Models to Fine-tuning Data Contamination
arxiv.org·1d
🤖AI
NOTAM-Evolve: A Knowledge-Guided Self-Evolving Optimization Framework with LLMs for NOTAM Interpretation
arxiv.org·5h
🤖AI
LLM-Guided Reinforcement Learning with Representative Agents for Traffic Modeling
arxiv.org·1d
🤖AI
Anchors in the Machine: Behavioral and Attributional Evidence of Anchoring Bias in LLMs
arxiv.org·1d
🤖AI
On Text Simplification Metrics and General-Purpose LLMs for Accessible Health Information, and A Potential Architectural Advantage of The Instruction-Tuned LLM ...
arxiv.org·2d
💎Ruby
Why is MiniMax M2 a Full Attention model?
🤖AI
Lookahead Unmasking Elicits Accurate Decoding in Diffusion Language Models
arxiv.org·1d
🤖AI