2025-10-10: An Internship Experience With the Internet Archive as a Google Summer of Code Contributor
ws-dl.blogspot.com·18h·
🔓Open Source Software
Experience the Magic of Falling Sand
sand-blast.org·11h·
Discuss: Hacker News
🏺Computational Archaeology
GNU Health
gnuhealth.org·1h·
Discuss: Hacker News
🔓Open Source Software
Keeping my Nix inputs fresh
jimmyff.co.uk·2d·
❄️Nix Flakes
How to store ordered information in a Relational Database (2015)
softwareengineering.stackexchange.com·2d·
🧮Algebraic Datatypes
Kubernetes 1.34 Features Explained
scaleops.com·2d·
Discuss: Hacker News
📦Container Security
PLA Gears Fail To Fail In 3D Printed Bicycle Drivetrain
hackaday.com·18h
⚙️Tape Transport
Sony Teases New GPU Tech For the PS6
games.slashdot.org·9h
🏺Gaming Archaeology
Why Go (Golang) Is Worth Learning in 2025
dev.to·1h·
Discuss: DEV
🧠Lisp Dialects
Unsupervised Radio Map Construction in Mixed LoS/NLoS Indoor Environments
arxiv.org·1d
📊Computational Geometry
Scholarship, Hackathons, and Swahili Words: Wikimania 2025 Through My Eyes
diff.wikimedia.org·1d
🌍Cultural Algorithms
The highly-rated DJI Osmo Action 5 Pro drops to a record-low price on Amazon
techradar.com·19h
🔬Optical Physics
Automated Anomaly Detection in Time-Series Statistical Spreadsheets via Hyperdimensional Vector Similarity
dev.to·20h·
Discuss: DEV
🔤Character Classification
Real-Time Visualization of Glial Glymphatic Clearance Dynamics via Multi-Modal Deep Learning Fusion
dev.to·1d·
Discuss: DEV
🧲Magnetic Resonance
Efficient Test-Time Scaling for Small Vision-Language Models
arxiv.org·4d
🗜️LZW Variants
From Moments to Models: Graphon Mixture-Aware Mixup and Contrastive Learning
arxiv.org·4d
🧠Machine Learning
CubicLog – A single-binary logging server with zero-config smart analytics
github.com·1d·
Discuss: Hacker News
๐Ÿ“Log Parsing
Evaluating Fundus-Specific Foundation Models for Diabetic Macular Edema Detection
arxiv.org·2d
🌀Riemannian Computing
OBCache: Optimal Brain KV Cache Pruning for Efficient Long-Context LLM Inference
arxiv.org·1d
💻Local LLMs