Europe can build its own social media
japantimes.co.jp·1d
🇸🇪Swedish Protocols
Secure your infrastructure
blog.james.rcpt.to·3h
📊Homelab Monitoring
Re-factoring a large Flask template to accommodate Jinja and AI coding
circusscientist.com·4d
🌀Brotli Internals
Bitcoin Core 30.0
bitcoincore.org·1d
🧬Bitstream Evolution
Why Feeds Fun normalizes tags — and how
blog.feeds.fun·1d
Discuss: Hacker News, r/rss
📡RSS Extensions
Do AI Reasoning Models Abstract and Reason Like Humans?
aiguide.substack.com·7h
Discuss: Substack
🔲Cellular Automata
Zippers: Making Functional "Updates" Efficient (2010)
goodmath.org·4d
🌳Incremental Parsing
Why Self-Host?
romanzipp.com·4d
Discuss: Hacker News
🏠Personal Archives
Curious about this
reddit.com·7h
Discuss: r/artificial
✅Verification Codecs
Transforming the physical world with AI: the next frontier in intelligent automation
aws.amazon.com·7h
🏠Homelab Orchestration
Cracking Blackjack with Go: A Step-by-Step Guide to Your First Move
dev.to·13h
Discuss: DEV
🎯Proof Tactics
Knowledge-Aware Mamba for Joint Change Detection and Classification from MODIS Time Series
arxiv.org·1h
🧠Machine Learning
ABLEIST: Intersectional Disability Bias in LLM-Generated Hiring Scenarios
arxiv.org·1h
🌳Context-free grammars
You're Not Gonna Believe This: A Computational Analysis of Factual Appeals and Sourcing in Partisan News
arxiv.org·1h
📰Content Curation
Check out my Mini Homelab Build!
reddit.com·1d
Discuss: r/homelab
🏠HomeLab
Cattle-CLIP: A Multimodal Framework for Cattle Behaviour Recognition
arxiv.org·1d
🤖Advanced OCR
Iterative LLM-Based Generation and Refinement of Distracting Conditions in Math Word Problems
arxiv.org·1d
🧮SMT Solvers
Gender Bias in Large Language Models for Healthcare: Assignment Consistency and Clinical Implications
arxiv.org·1d
💻Programming languages
Path Drift in Large Reasoning Models: How First-Person Commitments Override Safety
arxiv.org·1h
💻Programming languages
The Curious Case of Factual (Mis)Alignment between LLMs' Short- and Long-Form Answers
arxiv.org·1h
🧠Intelligence Compression