Graphics API, GPU Computing, WebGPU Backend, Rust Graphics

Show HN: KV Marketplace – share LLM attention caches across GPUs like memcached
github.com·14h·
Discuss: DEV, Hacker News
Hardware Acceleration
Why Rust is Revolutionizing Game Development: Performance, Safety, and Future-Ready Code
dev.to·1d·
Discuss: DEV
🦀Rust
Red Hat Enterprise Linux 10.1: Top features for developers
developers.redhat.com·16h
🐧Linux
How the PolyBlocks AI Compiler Works
docs.polymagelabs.com·1h·
Discuss: Hacker News
🌐SIMD.js
Rusty-R2: Open source AI you can actually train yourself on consumer hardware
github.com·1d·
Discuss: r/LocalLLaMA
📱Edge AI
How System76 & Red Hat Hope To Finally Improve The Linux Multi-GPU Experience
phoronix.com·2d
🐧Linux
Why Renting a GPU Server Can Help Startups Jumpstart AI Projects
dev.to·2d·
Discuss: DEV
🎮WebGPU
Conj 2025 Workshop: Sharing your Data Analysis
clojurecivitas.github.io·1d
🔧Abseil
Mojo: MLIR-Based Performance-Portable HPC Science Kernels on GPUs for the Python Ecosystem
arxiv.org·2d·
Discuss: Lobsters
📊Profile-Guided Optimization
Setting up Linagora’s OpenRAG locally
dev.to·22h·
Discuss: DEV
🦙Ollama
Imagination Meets Intelligence in GMI Cloud's Inference Engine 2.0
gmicloud.ai·1d·
Discuss: Hacker News
🎨Creative Coding
Rust and JavaScript are a perfectly valid combination with no problems
jakobmeier.ch·13h·
Discuss: r/rust
🦀Rust Macros
Open-source AI browser. Switch between ChatGPT, Claude, Gemini, or local LLMs
github.com·13h·
Discuss: Hacker News
🦙Ollama
10× Faster Log Processing at Scale: Beating Logstash Bottlenecks with Timeplus
timeplus.com·18h·
Discuss: Hacker News
🔎Quickwit
What Caused Performance Issues in My Tiny RPG
jslegenddev.substack.com·2d·
Discuss: Substack
🎮Game Engines
Building an autograd engine in pure Rust
evis.dev·11h·
Discuss: Hacker News
🦀Rust Macros
Fast and Affordable LLMs serving on Intel Arc Pro B-Series GPUs with vLLM
blog.vllm.ai·2d·
Discuss: r/LocalLLaMA
🧩mimalloc
Loom: Universal AI Runtime for Local, Cross-Platform Inference
medium.com·1d·
🦙Ollama