🐿️ Scour
📼 Tape Linguistics

Sequential Grammar, Linear Syntax, Magnetic Semantics, Format Theory

Watch the Very First YouTube Video, “Me at the Zoo,” Now 20 Years Old
openculture.com·15h
🖼️ JPEG Archaeology
Jane Street’s sneaky retention tactic
economist.com·15h·
Discuss: Hacker News
🦠 Parasitic Storage
Researchers train AI to generate long-form text using only reinforcement learning
the-decoder.com·2d
🧠 Intelligence Compression
Study Finds LLM Users Have Weaker Understanding After Research
slashdot.org·9h
🧠 Intelligence Compression
LFCS Seminar Tuesday 1st July: John Longley
informatics.ed.ac.uk·3d
💻 Programming Languages
I wrote my PhD Thesis in Typst
fransskarman.com·4d·
Discuss: Lobsters, Hacker News
📝 Concrete Syntax
From expected to actual: Kotlin doesn't reinvent, it reuses 🔄
dev.to·48m·
Discuss: DEV
💧 Liquid Types
🧠 My First Rosetta Stone: When OrKa Proved AI Can Think Structurally
dev.to·1d·
Discuss: DEV
⚡ Proof Automation
COIN: Uncertainty-Guarding Selective Question Answering for Foundation Models with Provable Risk Guarantees
arxiv.org·20h
🧠 Intelligence Compression
Programming Entry Level: How to Write an Interpreter
dev.to·2d·
Discuss: DEV
🔗 Lisp
The Bitter Lesson is coming for Tokenization
lucalp.dev·2d·
Discuss: Lobsters, Hacker News, r/programming
🔗 Monadic Parsing
ByteSpan: Information-Driven Subword Tokenisation
arxiv.org·2d
💾 Binary Linguistics
No ban planned: Creative Commons is working on licenses for AI training
heise.de·4h
🕵️ Vector Smuggling
A Nested Watermark for Large Language Models
arxiv.org·2d
💧 Manuscript Watermarks
Machine Learning Fundamentals: Active Learning
dev.to·2d·
Discuss: DEV
🤖 Grammar Induction
Learning-based safety monitoring system for lifting cranes on construction sites
arxiv.org·20h
📄 Document Digitization
LARP: Learner-Agnostic Robust Data Prefiltering
arxiv.org·20h
💻 Local LLMs
Brains over bots: Why toddlers still beat AI at learning language
phys.org·2d
🤖 Grammar Induction
SlimMoE: Structured Compression of Large MoE Models via Expert Slimming and Distillation
arxiv.org·2d
⚡ Modern Compression
The value of humans and machines in machine-generated creative content
arxiv.org·2d
🧭 Content Discovery