⚡ Vectorized Execution
Query Processing, SIMD, Columnar Storage, Batch Processing
Scoured 20,713 posts in 431.8 ms

TimelyFreeze: Adaptive Parameter Freezing Mechanism for Pipeline Parallelism
arxiv.org · 1d · 🚀 Async Optimization

Human-like Search for Modern Applications
anvitra.ai · 1h · Discuss: Hacker News · 🎯 Vector Search

feldera/feldera: The Feldera Incremental Computation Engine
github.com · 18h · ⚡ DataFusion

Oatmeal - Constraint propagation for fun
eli.li · 1h · 🧮 SMT Solvers

Performance Tip of the Week #79: Make at most one tradeoff at a time
abseil.io · 5h · ⚙️ Mechanical Sympathy

Show HN: LocalGPT – A local-first AI assistant in Rust with persistent memory
news.ycombinator.com · 2m · Discuss: Hacker News · 🔎 Tantivy

Why Real-Time Execution Is Now Expected in Lakehouse Architectures
singlestore.com · 1d · 📦 In-process Databases

I Built a 6 BIPS JIT in Five Months
unlikelyemphasis.substack.com · 1d · Discuss: Substack · ⚙️ Language Runtimes

Open source USearch library jumpstarts ScyllaDB vector search
thenewstack.io · 2d · 🎨 ChromaDB

The Top 10 Best Practices for AI/BI Dashboards Performance Optimization (Part 2)
databricks.com · 3d · ⚡ SQL Optimization

ggml: backend-agnostic tensor parallelism by JohannesGaessler · Pull Request #19378
github.com · 2d · Discuss: r/LocalLLaMA · ⚡ Hardware Acceleration

Retro PC breakthrough: NVMe SSD running on Pentium III via PCIe slot adapter
generationamiga.com · 10h · ⚙️ Mechanical Sympathy

Speeding Up HTML Generation by 2000%
bobrubbens.nl · 2d · 💾 Prompt Caching

Fast Autoscheduling for Sparse ML Frameworks
ajroot.pl · 3d · Discuss: Hacker News, r/Compilers · 🕯️ Candle

Geospatial System Design Patterns
systemdr.substack.com · 17m · Discuss: Substack · 🎯 Data Locality

Deterministic Retrieval at Scale: Optimal-Space LCP Indexing and 308x Energy Reduction on Modern GPUs
arxiv.org · 1d · 🗂️ Vector Indexes

Modern Trends In Floating-Point
semiengineering.com · 2d · ⚡ Hardware Acceleration

From Questions to Insights: Data Analysis with LangChain’s Built-In Tools
pub.towardsai.net · 2d · 🏗️ LLM Infrastructure

How we cut Vertex AI latency by 35% with GKE Inference Gateway
cloud.google.com · 1d · 🧠 Inference Serving

How can computing for AI and other demands be more energy efficient?
techxplore.com · 9h · 🖥 GPUs