I/O Multiplexing (select vs. poll vs. epoll/kqueue)
nima101.github.io·2d·
Discuss: Hacker News
🔍eBPF
IBM invites CockroachDB to infest its mainframes with PostgreSQL
theregister.com·3d·
Discuss: Hacker News
🗃️database engineering
Walrus, A 1M ops/sec, 1 GB/s Write Ahead Log in Rust
nubskr.com·3d
🔌Embedded Rust
Why I switched from HTMX to Datastar
everydaysuperpowers.dev·2d
🌐JAMstack
Let's Write a Macro in Rust
hackeryarn.com·23h·
Discuss: Hacker News
🔌Embedded Rust
Meet Amazon Quick Suite: The agentic AI application reshaping how work gets done
aboutamazon.com·1d·
Discuss: Hacker News
🧩Low-code
Enhancing Synthetic Data Generation via Adaptive Kernel Density Estimation with Bayesian Optimization
dev.to·2d·
Discuss: DEV
โฑ๏ธTime-series Optimization
We built a CUDA emulator that profiles GPU code with zero hardware
rightnowai.co·4d·
Discuss: Hacker News
⚡Hardware Acceleration
Indexing, Hashing
dev.to·2d·
Discuss: DEV
🔧Database Engines
Over-indexed databases are silent AI killers
singlestore.com·1d
📊Database Monitoring
Relational Transformer: Toward Zero-Shot Foundation Models for Relational Data
arxiv.org·2d
🎯Vector Databases
rule-router: I built a high-performance rule engine for NATS in Go
reddit.com·1d·
Discuss: r/golang
🌐Rust Networking
Get RICH or Die Scaling: Profitably Trading Inference Compute for Robustness
arxiv.org·2d
📱Edge AI
Operable Software
ferd.ca·1d·
Discuss: Hacker News
🧩Low-code
Hardware Vulnerability Allows Attackers to Hack AI Training Data – NC State News
news.ncsu.edu·18h·
Discuss: Hacker News
⚡Hardware Acceleration
CPU Cache-Friendly Data Structures in Go: 10x Speed with Same Algorithm
skoredin.pro·5d
🔧Database Engines
Software Architecture Horror Story
blog.mihaisafta.com·7h·
Discuss: Hacker News
🧩Low-code
GNN Blind Spots: The Hidden Cost of Powerful Graph Models
dev.to·12h·
Discuss: DEV
๐Ÿ—๏ธAI Infrastructure
Which Heads Matter for Reasoning? RL-Guided KV Cache Compression
arxiv.org·1d
🏗️AI Infrastructure
OBCache: Optimal Brain KV Cache Pruning for Efficient Long-Context LLM Inference
arxiv.org·1d
💻Local LLMs