⚡ Hardware Acceleration
GPU Computing, Tensor Cores, Specialized Chips, SIMD Instructions
Scoured 27,030 posts in 310.7 ms
anulum/sc-neurocore: Verified Rust-based Neuromorphic Compiler. 512x Real-Time Speed. Bit-True FPGA Equivalence. (AGPLv3 / Commercial) · github.com · 5h · Discuss: Hacker News · 🔄 SIMD Programming
AI Inference Needs A Mix-And-Match Memory Strategy · semiengineering.com · 8h · 🏗️ LLM Infrastructure
Timing and Memory Telemetry on GPUs for AI Governance · arxiv.org · 1d · 🖥 GPUs
NVIDIA DGX Spark Powers Big Projects in Higher Education · blogs.nvidia.com · 1h · 🖥 GPUs
From Buffers to Registers: Unlocking Fine-Grained FlashAttention with Hybrid-Bonded 3D NPU Co-Design · arxiv.org · 11h · 🖥️ Hardware Architecture
Porting an INT8 VHDL CNN from Intel Agilex 3 to Lattice Certus-NX · news.ycombinator.com · 3h · Discuss: Hacker News · 🖥️ Hardware Architecture
Linux 7.0 Graphics Drivers See New AMD Hardware, Intel Xe SR-IOV + Multi-Device SVM · phoronix.com · 15h · 🤖 AI
Samsung starts mass production of next-gen AI memory chip · techxplore.com · 7h · 🔬 Chip Fabrication
New Ovis2.6-30B-A3B, a lil better than Qwen3-VL-30B-A3B · huggingface.co · 4h · Discuss: r/LocalLLaMA · 🚀 Astral
Edge AI Chip Solutions · hailo.ai · 2d · 📱 Edge AI Optimization
MolmoSpaces, an open ecosystem for embodied AI · allenai.org · 23h · ✨ Gemini
Semidynamics Unveils 3nm AI Inference Silicon and Full-Stack Systems · semiwiki.com · 1d · 🏗️ LLM Infrastructure
Rolling out the carpet for Spin Qubits in new quantum chip architecture · tudelft.nl · 6h · 🔢 BitNet Inference
AI, GPU, And HPC Data Centers: The Infrastructure Behind Modern AI · semiengineering.com · 8h · 🖥 GPUs
Supercharging Inference for AI Factories: KV Cache Offload as a Memory-Hierarchy Problem · blog.min.io · 1h · 🏗️ LLM Infrastructure
Parallel Track Transformers: Enabling Fast GPU Inference with Reduced Synchronization · machinelearning.apple.com · 2d · 📦 Batch Embeddings
Introducing Dedicated Container Inference: Delivering 2.6x faster inference for custom AI models · together.ai · 16h · 🏗️ LLM Infrastructure
A Conceptual Framework for Exploration Hacking · lesswrong.com · 18m · 🕳 LLM Vulnerabilities
NVIDIA GeForce NOW Turns Screens Into a Gaming Machine · elevenforum.com · 2h · 🖥 GPUs
Results from the Advent of FPGA Challenge · blog.janestreet.com · 12h · Discuss: Hacker News · ⚙️ Mechanical Sympathy