Context-sensitive Grammars, Type-dependent Syntax, Proof-carrying Parsers, Verified Parsing

Cactus Language • Semantics 1
inquiryintoinquiry.com·1d
🔢Denotational Semantics
LLM Poisoning [1/3] - Reading the Transformer's Thoughts
synacktiv.com·2d
💻Local LLMs
Python 3.14 Released with Template String Literals, Deferred Annotations, and
socket.dev·13h·
Discuss: Hacker News
💧Liquid Types
LaDiR: Latent Diffusion Enhances LLMs for Text Reasoning
arxiv.org·1d
💻Local LLMs
CAM: A Constructivist View of Agentic Memory for LLM-Based Reading Comprehension
arxiv.org·7h
📝Concrete Syntax
Context Length Alone Hurts LLM Performance Despite Perfect Retrieval
arxiv.org·7h
🧮Kolmogorov Complexity
Learning from Failures: Understanding LLM Alignment through Failure-Aware Inverse RL
arxiv.org·7h
💻Local LLMs
Training Dynamics of Parametric and In-Context Knowledge Utilization in Language Models
arxiv.org·2d
🌲Parse Trees
LLM Optimization Notes: Memory, Compute and Inference Techniques
gaurigupta19.github.io·1d·
Discuss: Hacker News
💻Local LLMs
PoLi-RL: A Point-to-List Reinforcement Learning Framework for Conditional Semantic Textual Similarity
arxiv.org·1d
🗂️Vector Search
Latency vs. Accuracy for LLM Apps — How to Choose and How a Memory Layer Lets You Win Both
dev.to·1d·
Discuss: DEV
Performance Mythology
Understanding the 4 Main Approaches to LLM Evaluation (From Scratch)
magazine.sebastianraschka.com·3d·
Discuss: Hacker News
Automated Theorem Proving
CARE: Cognitive-reasoning Augmented Reinforcement for Emotional Support Conversation
arxiv.org·7h
🔲Cellular Automata
H1B-KV: Hybrid One-Bit Caches for Memory-Efficient Large Language Model Inference
arxiv.org·7h
💨Cache Optimization
BanglaLlama: LLaMA for Bangla Language
arxiv.org·7h
🌀Brotli Dictionary
Constraint Satisfaction Approaches to Wordle: Novel Heuristics and Cross-Lexicon Validation
arxiv.org·2d
🧮SMT Solvers
Thinking on the Fly: Test-Time Reasoning Enhancement via Latent Thought Policy Optimization
arxiv.org·1d
💻Local LLMs