Scour
⚡ Performance Engineering
profiling, perf, latency, performance tuning, benchmarking
Scoured 12,457 posts in 11.2 ms
How to achieve P90 sub-microsecond latency in a C++ FIX engine
✍️ Prompt Engineering · akinocal1.substack.com · 5d · Substack, r/cpp

ConfigSpec: Profiling-Based Configuration Selection for Distributed Edge–Cloud Speculative LLM Serving
🏗️ System Design · arxiv.org · 1d

Shifting to an Observability Mindset from a Developer's Point-of-view
🏗️ System Design · dev.to · 11h · DEV

Edge Computing Explained for Developers: When Cloud Is Too Slow
🏗️ System Design · edstellar.com · 2h · DEV

Using JIT Compilation to Improve Performance and Reduce Cloud Spend
🏗️ System Design · hackernoon.com · 2d

Stop benchmarking inference providers, a guide to easy evaluation
🔲 ML Hardware · huggingface.co · 15h · r/LocalLLaMA

Presentation: Latency: The Race to Zero...Are We There Yet?
🏗️ System Design · infoq.com · 4d

Model API Performance
🔲 ML Hardware · news.ycombinator.com · 19h · Hacker News

Sustainable GPU FinOps: Optimizing AI Compute for Cost and Carbon
☁️ Cloud Computing · github.com · 1d · DEV

I "Rewrote" My ORM Again with AI. And Ended Up Benchmarking Every PHP ORM in the Process.
🗄️ Database Internals · technex.us · 15h · Hacker News

Benchmarking LLMs with Marimo Pair
🧠 LLMs · ericmjl.github.io · 5d · Hacker News

reviser: Analyzing Real-Time Data Revisions in R
💰 US Economy · r-bloggers.com · 2d

I-DLM: Introspective Diffusion Language Models
🤖 LLM · introspective-diffusion.github.io · 22h · Hacker News, r/LocalLLaMA

CerebroChain Supply Chain & Market Data – Enterprise supply chain AI, warehouse optimization, logistics intelligence, real-time crypto/forex/stock data, shippin...
📋 Formal Methods · glama.ai · 2d · r/mcp

Gemma 4 vs Qwen3.5: benchmarking quantized local LLMs on Go coding
🧠 LLMs · msf.github.io · 4d · r/LocalLLaMA

Quantization, LoRA, and the 8% Problem: Benchmarking Local LLMs for Production AI
🧠 LLMs · walsenburgtech.com · 3d · Hacker News

447 Terabytes per Square Centimetre at Zero Retention Energy: Non-Volatile Memory at the Atomic Scale on Fluorographane
🔌 Embedded Systems · zenodo.org · 3d · Hacker News, r/hardware

Why More Code Doesn’t Necessarily Mean More Progress
🛠️ Software Craft · hackernoon.com · 2d

Reinforcement fine-tuning on Amazon Bedrock: Best practices
✍️ Prompt Engineering · aws.amazon.com · 6d

How I dropped LLM latency from 500ms to 0ms in real-time physics loops
✍️ Prompt Engineering · dev.to · 22h · DEV