Scour
📱 Edge AI
On-Device Inference, NPU, Neural Processing, Mobile ML
Scoured 154,106 posts in 13.9 ms
Multi-Turn Reasoning LLMs for Task Offloading in Mobile Edge Computing
🤖 LLM · arxiv.org · 2d

Radar Reference Platform Improves Identification in Edge AI
🧠 AI Security · embedded.com · 20h

Google's Gemma 4 is finally bringing real on-device AI to Android phones
♊ Gemini · phandroid.com · 3d

Fast Isn't Fast Enough: Redefining Metrics for Edge AI
🤖 AI · semiengineering.com · 2d

I found 7 Windows apps that use your PC's NPU to improve efficiency and performance with AI — You might be surprised at what's on the list
💾 Local-first Software · windowscentral.com · 5d

F&S M.2 AI Accelerator Uses NXP Ara-240 for Edge Inference Workloads
🖥️ Homelab · linuxgizmos.com · 1d

Edge AI Is Forcing a Rethink of Predictive Maintenance Architecture
📜 AI Regulation · eetimes.com · 2d

How to Run Gemma 4 on Your Phone Without Internet: A Hands-On Guide
💾 Local-first Software · analyticsvidhya.com · 2d

Vibhor Kumar: AI at the Edge, Truth in Postgres
👥 P2P Networks · vibhorkumar.wordpress.com · 2d

PCA-Driven Adaptive Sensor Triage for Edge AI Inference
🤖 AI · arxiv.org · 3d

"If it's this easy, why don't more Windows apps use a PC's NPU?" — Microsoft MVP demonstrates how he added meaningful AI to an app in just 10 minutes
✍️ Prompt Engineering · windowscentral.com · 4d

Position Paper: From Edge AI to Adaptive Edge AI
🤖 AI Agents · arxiv.org · 1d

ENEC: A Lossless AI Model Compression Method Enabling Fast Inference on Ascend NPUs
🧠 AI Security · arxiv.org · 4d

Networking-Aware Energy Efficiency in Agentic AI Inference: A Survey
🤖 AI · arxiv.org · 1d

From LLM to Silicon: RL-Driven ASIC Architecture Exploration for On-Device AI Inference
✍️ Prompt Engineering · arxiv.org · 1d

L-SPINE: A Low-Precision SIMD Spiking Neural Compute Engine for Resource-efficient Edge Inference
🤖 LLM · arxiv.org · 4d

SHIELD: A Segmented Hierarchical Memory Architecture for Energy-Efficient LLM Inference on Edge NPUs
🤖 LLM · arxiv.org · 1d

Edge Intelligence for Satellite-based Earth Observation: Scheduling Image Acquisition and Processing
♊ Gemini · arxiv.org · 3d

AutoSOTA: An End-to-End Automated Research System for State-of-the-Art AI Model Discovery
✍️ Prompt Engineering · arxiv.org · 3d

DHFP-PE: Dual-Precision Hybrid Floating Point Processing Element for AI Acceleration
🧠 AI Security · arxiv.org · 4d