The Chip That Spoke Lisp
jxself.org·4d
🤖Lisp Machines
Toy Binary Decision Diagrams
philipzucker.com·6d
🧮Algebraic Datatypes
Lessons from building 15 iOS apps serving 1M daily users
gist.github.com·3d·
Discuss: Hacker News
🔌Interface Evolution
Beyond the AI Hype: Guido van Rossum on Python's Philosophy, Simplicity, and Th
odbms.org·1d·
📊APL Heritage
Advanced Multiphase Flow Characterization via Hybrid Acoustic-Optical Tomography
dev.to·6h·
Discuss: DEV
🏺Computational Archaeology
Structured Cognition for Behavioral Intelligence in Large Language Model Agents: Preliminary Study
arxiv.org·4d
🧠Intelligence Compression
Optimal Stopping in Latent Diffusion Models
arxiv.org·2d
🧠Machine Learning
Code Green: How Big Data and AI are Engineering a Sustainable Planet
dev.to·20h·
Discuss: DEV
🌊Stream Processing
Autonomous Reef Mapping & Predictive Maintenance via Hydrodynamic Field Modeling
dev.to·1d·
Discuss: DEV
🌊Stream Processing
Unlock Deep Learning Stability: Navigate the Activation Function Galaxy with 9 Dimensions!
dev.to·10h·
Discuss: DEV
🧠Machine Learning
Tech With Tim: Why 1M People Tried This AI Coding Tool (Full Vibe Coding Tutorial)
dev.to·4h·
Discuss: DEV
🎬WebCodecs
Scalable Anomaly Detection in Oxidizer Tank Vent Lines via Hyperdimensional Vector Analysis
dev.to·2d·
Discuss: DEV
👁️Observatory Systems
Quantum Agents: The Algorithmic Alchemists Reshaping Discovery
dev.to·20h·
Discuss: DEV
⚛️Quantum Algorithms
Parameterized Complexity of Temporal Connected Components: Treewidth and k-Path Graphs
arxiv.org·4d
🎨Graph Coloring
Build a Private AI Chatbot for Your PDFs with Genkit and Gaia
dev.to·2h·
Discuss: DEV
📄Document Streaming
Unveiling the Power of Queues: A Journey into Data Structures and Algorithms
dev.to·2d·
Discuss: DEV
⚡Cache Theory
Which Heads Matter for Reasoning? RL-Guided KV Cache Compression
arxiv.org·2d
📼Cassette Combinators
Expanding the Action Space of LLMs to Reason Beyond Language
arxiv.org·2d
💻Local LLMs
OBCache: Optimal Brain KV Cache Pruning for Efficient Long-Context LLM Inference
arxiv.org·2d
💻Local LLMs