bennorichters/taal.nvim
dotfyle.com·19h
📝NLP
Even G2 – Smartglasses
evenrealities.com·1h·Discuss: Hacker News
📝NLP
Hades II Isn’t A Story—It’s Maintenance
kotaku.com·1d
🚀Science fiction
Did you solve it? Two dead at the drink-off – a brilliant new lateral thinking puzzle
theguardian.com·2d
Productivity
Cognitive Biases and A.I. – A.I. shows worse biases than human practitioners
ai.nejm.org·1d·Discuss: Hacker News
🎲Bayesian Cognition
Why giving up on goals is good for you, and how to know which to ditch
newscientist.com·2d
💡Cognitive Science
Why Do Great Engineers Lose Architecture Debates?
pushtoprod.substack.com·1d·Discuss: Substack
🎲Bayesian Cognition
Literature Is Not a Vibe: On ChatGPT and the Humanities
lareviewofbooks.org·6d·Discuss: r/Longreads
🚀Science fiction
A bunch of books I read (or at least started)
reedybear.bearblog.dev·3d
📚Reading
Chicaneries, Contradictions, Justifications
znetwork.org·3h
🧩Morphology
Beyond Chat: Scaling Operations, Not Conversations
reddit.com·1d·Discuss: r/LLM
📝NLP
Book Review: ‘The American Revolution,’ by Geoffrey C. Ward and Ken Burns
nytimes.com·2d
🚀Science fiction
Resurrection-As-A-Service? Inside The Coming AI Afterlife Boom
forbes.com·7h
🚀Science fiction
Hidden Gem "Rue Valley" in Review: Again and Again and Again a Hit
heise.de·2d
Productivity
The Trauma Era: From Awareness to Misunderstanding
psychologytoday.com·2d
💡Cognitive Science
Pareto-Improvement-Driven Opinion Dynamics Explaining the Emergence of Pluralistic Ignorance
arxiv.org·1d
🎲Bayesian Cognition
IMDMR: An Intelligent Multi-Dimensional Memory Retrieval System for Enhanced Conversational AI
arxiv.org·1d
🔄Transformers
Asking Eric: All my friend wanted to talk about was one other person, and I couldn’t take it anymore
orlandosentinel.com·1d
🔤Linguistics
Attention and Compression is all you need for Controllably Efficient Language Models
arxiv.org·2d
🔄Transformers