Anyone else running their whole AI stack as Proxmox LXC containers? I'm currently using Open WebUI as the front-end, LiteLLM as a router, and a vLLM container per mod...
🔧Hardware
The Development of Pie
🦀Rust
Libevpl – event loop engine with unified abstractions for network and block I/O
🔧Hardware
Runs-On: Mac
🦀Rust
Scaling Embeddings with Feast and KubeRay
🚀CUDA Kernels
Sparse Adaptive Attention “MoE”: How I Solved OpenAI’s $650B Problem With a £700 GPU
📱Edge AI
The AI Capability Gap
📱Edge AI
How Well Does RL Scale?
📱Edge AI
How We Saved 70% of CPU and 60% of Memory in Refinery’s Go Code, No Rust Required.
🦀Rust