Scour
⚡ Performance Engineering
Optimization, Profiling, Benchmarking, Tuning
Scoured 13,979 posts in 19.6 ms
Engineering approach: Startup Mode v/s Big Tech Mode
🚀 Performance · dev.to · 1d · DEV

sdeonvacation/throttle: A sophisticated task execution framework in Java that automatically adapts to system resource availability.
🚀 Performance · github.com · 5d · DEV

[WIP] Benchmarking Local LLMs Against Coding Agent Harnesses
🚀 Performance · neuralnoise.com · 2d · Hacker News
Java Performance Tuning and Event-Driven System Design for Scalable Systems
📡 Event-Driven Architecture · medium.com · 6d

TurboQuant on a MacBook Pro, part 2: perplexity, KL divergence, and asymmetric K/V on M5 Max
🚀 Performance · llmkube.com · 1d · r/LocalLLaMA

Letting Claude Code's Routines continuously tune my CLI's performance
🚀 Performance · dev.to · 5h · DEV

Easily benchmark all your app's endpoints at once
⚡ FastAPI · dev.to · 23h · DEV
vnmoorthy/pavo-bench: A 50K-turn voice pipeline benchmark and an 85K-param meta-controller that cuts P95 latency 10.3% and energy 71% vs fixed cloud. TMLR 2026.
🚀 Performance · github.com · 2d · Hacker News

Key Principles of SaaS Performance Optimization for Speed, Scalability, and Reliability
🏗 Budget Infrastructure · dev.to · 3d · DEV

Performance Test: Ollama 0.5.0 vs. vLLM 0.4.0 Local LLM Inference Latency on NVIDIA RTX 5090 and AMD Radeon RX 8900 in 2026
🦙 Ollama · dev.to · 1d · DEV

Benchmark: Vector 0.40 vs. Fluent Bit 3.0 Log Processing Throughput for 100k Logs/Second
🚀 Performance · dev.to · 1d · DEV
How to Benchmark LLM Inference Performance: TTFT, ITL, and Throughput Metrics
💸 Affordable LLMs · dev.to · 4d · DEV
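The metrics named in this entry have standard definitions: TTFT (time to first token), ITL (inter-token latency), and throughput (tokens per second). A minimal sketch of how they fall out of per-token arrival timestamps, assuming a hypothetical `inference_metrics` helper (names are illustrative, not from the linked article):

```python
def inference_metrics(request_start: float, token_times: list[float]) -> dict:
    """Compute TTFT, mean ITL, and throughput from token arrival times (seconds)."""
    ttft = token_times[0] - request_start  # time to first token
    # Inter-token latency: gaps between consecutive token arrivals.
    gaps = [b - a for a, b in zip(token_times, token_times[1:])]
    itl = sum(gaps) / len(gaps) if gaps else 0.0
    total = token_times[-1] - request_start
    throughput = len(token_times) / total  # tokens per second
    return {"ttft": ttft, "itl": itl, "throughput": throughput}

# 4 tokens: first arrives at 0.25 s, then one every 0.05 s.
m = inference_metrics(0.0, [0.25, 0.30, 0.35, 0.40])
```

TTFT dominates perceived responsiveness for interactive use, while ITL and throughput matter more for long generations; benchmarks typically report all three.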
Architecture Teardown: Kubernetes 1.32 Control Plane Internals and Performance Optimizations
📬 Message Queues · dev.to · 2d · DEV

The 70B Threshold: How the RTX 5090 Rewrites the Home Lab Equation
🧩 LLM Integration · dev.to · 5d · DEV

Case Study: Reducing Data Ingestion Latency by 96.4% (24.5x Speedup)
🚀 Performance · dev.to · 1d · DEV
Caddy 2.8 vs Nginx 1.26: Static File Serving Speed Benchmark 2026
🚀 Performance · dev.to · 1d · DEV

RTX 4090 Cooling, LLM KV Cache Quantization, & Deepseek V4 Flash Models
🚀 Performance · dev.to · 5d · DEV

What 200 Concurrent Users Taught Me About SQLite Performance
🚀 Performance · dev.to · 2d · DEV

The Most Important Announcement at NEXT '26 Was a Sidecar
🚀 Performance · dev.to · 4d · DEV

Under the Hood: Go 1.24 and pprof 1.10's New Garbage Collector Improvements for Long-Running Services
⚡ Caching Strategies · dev.to · 1d · DEV