Benchmarking LLM Inference on RTX 4090 / RTX 5090 / RTX PRO 6000 #2
reddit.com·11h·
Discuss: r/LocalLLaMA
🏗️LLM Infrastructure
BQN "Macros" with •Decompose (2023)
saltysylvi.github.io·7h·
Discuss: Hacker News
🎭Rust Macros
Looking at my Arduino
boswell.bearblog.dev·12h
🖥️Hardware Architecture
How I Built My Own Tool for Disk Space Cleanup
debamitro.github.io·7h
🔬Rust Profiling
Explicit Lossless Vertex Expanders!
gilkalai.wordpress.com·19h
🔬RaBitQ
Progress on porting the AMD OpenSIL Turin PoC to Coreboot on a Gigabyte MZ33-AR1
blog.3mdeb.com·8h
🖥GPUs
SLip - An aspiring Common Lisp environment in the browser.
lisperator.net·16h·
Discuss: r/programming
🌿Leptos
GCC Patches Posted For C++26 SIMD Support
phoronix.com·18h
SIMD
QUIC! Jump to User Space!
hackaday.com·13h
QUIC Protocol
Show HN: A Realization of Jsmn in Pure Zig
github.com·19h·
Discuss: Hacker News
🔤Tokenization
Size doesn't matter: Just a small number of malicious files can corrupt LLMs of any size
techxplore.com·14h
🕳LLM Vulnerabilities
FileAccessErrorView 1.33
majorgeeks.com·2h
📄File Formats
Trusted Execution Environments? More Like "Trust Us, Bro" Environments
libroot.org·10h·
Discuss: Hacker News
🔐Hardware Security
(Forward) automatic implicit differentiation in Rust with num-dual 0.12.0
reddit.com·13h·
Discuss: r/rust
🎭Rust Macros
Let's Write a Macro in Rust
hackeryarn.com·13h·
Discuss: Hacker News
🎭Rust Macros
Run Containers and VMs Easily With OrbStack GUI
thenewstack.io·14h
🏠Self-hosting
Parallelizing Cellular Automata with WebGPU Compute Shaders
vectrx.substack.com·19h·
Discuss: Substack
🏟️Arena Allocators
LLM-Based AI Agent That Automates The Transistor Sizing Process (Univ. of Edinburgh)
semiengineering.com·8h
🆕New AI
Iterated Development and Study of Schemers (IDSS)
lesswrong.com·14h
🆕New AI