Valuing platforms and R&D with real options
blog.42futures.com·19h·
Discuss: Hacker News
📊Code Metrics
Demystifying Automatic Instrumentation: How the Magic Actually Works
opentelemetry.io·13h
🧪Testing Compilers
smartgo: I wish for a Go-like language with Rust-like pointers
iio.ie·3d
🔒Rust Borrowing
Nvidia Chief's Not-So-Subtle Dig at AMD
theinformation.com·7h
🔮Speculative Execution
(PR) Cisco Rolls Out 8223 Router with 51.2 Tbps, Powered by In-House Chip
techpowerup.com·17h·
Discuss: Hacker News
📡Protocol Stacks
4 Linux kernel tweaks I made that actually improved performance
xda-developers.com·4d
📊perf Tools
Dual-stage and Lightweight Patient Chart Summarization for Emergency Physicians
arxiv.org·3h
🌱Forth Kernels
SEMI Reports Global 300mm Fab Equipment Spending Expected to Total $374 Billion Over Next Three Years
prnewswire.com·5h
🔌Microcontrollers
Executing Very Complex Projects
reddit.com·2d·
Discuss: r/rust
⚡Performance
Leaker Clears The Air On Intel Core Ultra X Series, Models To Feature Full Xe3 iGPU
pokde.net·3d
🔧RISC-V
When Internal Tools Become Your Hidden Liability (And One Real Example That Does It Right)
dev.to·1d·
Discuss: DEV
🥾Bootstrapping Strategies
This AI Agent Fixes Security Bugs Automatically (While Senior Devs Sleep)
dev.to·1d·
Discuss: DEV
⚡Live Coding
Stop Treating AI Coding Assistants Like Magic. Here's the Context-Aware System That Actually Works.
pub.towardsai.net·7h
🎮Language Ergonomics
Multi-AI Agents: The Good, the Bad, and the Ugly
dev.to·12h·
Discuss: DEV
🎭Program Synthesis
Structured Cognition for Behavioral Intelligence in Large Language Model Agents: Preliminary Study
arxiv.org·1d
🎯Finite Automata
How to run LLMs on a 1GB (e-waste) GPU without changing a single line of code
reddit.com·2d·
Discuss: r/LocalLLaMA
💾Cache Algorithms
Mechanisms for Quantum Advantage in Global Optimization of Nonconvex Functions
arxiv.org·2d
⚡Partial Evaluation
Revisiting Long-context Modeling from Context Denoising Perspective
arxiv.org·1d
📊LR Parsing
Attention Sinks and Compression Valleys in LLMs are Two Sides of the Same Coin
arxiv.org·1d
🪜Recursive Descent