Scour
💾 CPU Caching: L1/L2/L3, Cache Lines, False Sharing, Alignment
Scoured 372 posts in 16.7 ms
Enhancing Performance Insight at Scale: A Heterogeneous Framework for Exascale Diagnostics · ⚙️ Mechanical Sympathy · arxiv.org · 8h
GhostServe: A Lightweight Checkpointing System in the Shadow for Fault-Tolerant LLM Serving · 🔐 MVCC · arxiv.org · 1d
Efficient Non-Interactive Key Refresh with Multiple Independent Refreshers for Threshold Cryptography · 🤝 Paxos Consensus · eprint.iacr.org · 6d
Implementing True MPI Sessions and Evaluating MPI Initialization Scalability · 🌐 Distributed Systems · arxiv.org · 8h
Run an ALTER TABLE for a huge table in Aurora · ⚙️ Database Internals · percona.com · 5d
SURGE: SuperBatch Unified Resource-efficient GPU Encoding for Heterogeneous Partitioned Data · ⚙️ Mechanical Sympathy · arxiv.org · 1d
Lifting to tensors when compiling scientific computing workloads for AI Engines · ⚡ SIMD Vectorization · arxiv.org · 8h
Undirected Replacement Paths: Dual Fault Reduces to Single Source · 📐 E-graphs · arxiv.org · 1d
MANOJAVAM: A Scalable, Unified FPGA Accelerator for Matrix Multiplication and Singular Value Decomposition in Principal Component Analysis · ⚡ SIMD Vectorization · arxiv.org · 1d
Tempus: A Temporally Scalable Resource-Invariant GEMM Streaming Framework for Versal AI Edge · 🔄 Incremental Computation · arxiv.org · 2d
Network Digital Untwinning: Towards Backward Optimization of Digital Twins · 📡 Low-Level Networking · arxiv.org · 2d
VUDA: Breaking CUDA-Vulkan Isolation for Spatial Sharing of Compute and Graphics on the Same GPU · ⚙️ Mechanical Sympathy · arxiv.org · 1d
LLM-Emu: Native Runtime Emulation of LLM Inference via Profile-Driven Sampling · 🧮 SMT Solvers · arxiv.org · 2d
Eliminating Hidden Serialization in Multi-Node Megakernel Communication · 🔐 MVCC · arxiv.org · 2d
SplitZip: Ultra Fast Lossless KV Compression for Disaggregated LLM Serving · 🗜️ Compression Algorithms · arxiv.org · 1d
Predictive Multi-Tier Memory Management for KV Cache in Large-Scale GPU Inference · ⚙️ Mechanical Sympathy · arxiv.org · 5d
Taming Request Imbalance: SLO-Aware Scheduling for Disaggregated LLM Inference · 💰 Cost-Based Optimization · arxiv.org · 1d
Affinity Tailor: Dynamic Locality-Aware Scheduling at Scale · ⚙️ Mechanical Sympathy · arxiv.org · 5d
Write-Read Decoupling in Modern Large-Scale Search Engines: Architectures, Techniques, and Emerging Approaches · 🔍 Search Indexing · arxiv.org · 1d
CvxCluster: Solving Large, Complex, Granular Resource Allocation Problems 100-1000x Faster · 🧠 Query Planners · arxiv.org · 1d