Scour
🧮 Jemalloc · Specific
Memory Allocator, Thread Caching, Arena Allocation, Performance
Scoured 122,579 posts in 40.0 ms
Stack vs malloc: real-world benchmark shows 2–6x difference
💾 Memory Allocators · blog.stackademic.com · 22h · DEV · …
Iteratively optimizing an SPSC queue
⭕ Ring Buffers · blog.c21-mac.com · 3d · r/cpp · …
Finding Hidden Bottlenecks in Go Apps: A Lazy, Hacky, and Bruteforce Method
🔍 eBPF · dev.to · 3h · DEV · …
Finding performance bottlenecks with Pyroscope and Alloy: An example using TON blockchain
🌳 Merkle Trees · grafana.com · 2d · …
facebookincubator/dispenso: The project provides high-performance concurrency, enabling highly parallel computation.
🌀 Naiad · github.com · 7h · Hacker News · …
Compiling Code LLMs into Lightweight Executables
🌲 CedarDB · arxiv.org · 1d · …
Why I’m Building a Database Engine in C#
🔨 Incremental Compilation · nockawa.github.io · 5d · Hacker News · …
Metal Quantized Attention: pulling M5 Max ahead with Int8 matrix multiplication
⚡ Hardware Acceleration · releases.drawthings.ai · 17h · Hacker News · …
Uplevel your workload scaling performance with GKE active buffer
☸️ Kubernetes · cloud.google.com · 1d · Hacker News · …
Intel Delivers Open, Scalable AI Performance in MLPerf Inference v6.0
🎯 Intel IPP · newsroom.intel.com · 19h · …
New Reproducibility Standard Exposes Hidden Variables in Database-Kernel Performance Research
🚀 Performance · hackernoon.com · 2d · …
Systematic Analysis of CPU-Induced Slowdowns in Multi-GPU LLM Inference (Georgia Tech)
🎮 WebGPU · semiengineering.com · 5d · …
JetStream 3: A modern benchmark for high-performance, compute-intensive Web applications
🚀 Performance · blog.chromium.org · 1d · Hacker News, Blogger · …
MXFP8 GEMM: Up to 99% of cuBLAS Performance Using CUDA and PTX
🧩 mimalloc · danielvegamyhre.github.io · 4d · Hacker News · …
Running local models on Macs gets faster with Ollama's MLX support
🦙 Ollama · arstechnica.com · 1d · Hacker News · …
Building a Production-Grade Vector Database in Rust: What We Shipped
🚀 Shuttle · ferres.io · 1d · DEV · …
Stop obsessing over your GPU's core clock — memory clock matters more for local LLM inference
🧩 mimalloc · xda-developers.com · 4d · …
Discussion - The 9950X3D2 performance speculation thread - The most divisive CPU ever?
⚙️ Performance Profiling · forums.anandtech.com · 2d · …
Accelerate CPU-based AI inference workloads using Intel AMX on Amazon EC2
🔢 Intel AMX · aws.amazon.com · 2d · …
Optimizing Python Web Apps: Reducing High Memory Usage on Shared Servers for Improved Performance
⚡ Cache Optimization · dev.to · 10h · DEV · …