Scour
📱 Edge AI (Specific) · Model Quantization, ONNX Runtime, Embedded Inference, TinyML
Scoured 170,955 posts in 11.1 ms
Paper: From Edge AI to Adaptive Edge AI · 🛡️ AI Security · arxiv.org · 4d

Google Released Gemma 4 with a Focus On Local-First, On-Device AI Inference · 🦙 Ollama · infoq.com · 1d

Your developers are already running AI locally: Why on-device inference is the CISO's new blind spot · 🚀 MLOps · venturebeat.com · 2d

Fast Isn't Fast Enough: Redefining Metrics for Edge AI · 🚀 Performance · semiengineering.com · 5d

Ultra-efficient on-device AI, now even faster - MiniCPM · 💬 Prompt Engineering · producthunt.com · 1d

Radar Reference Platform Improves Identification in Edge AI · 🛡️ AI Security · embedded.com · 4d

F&S M.2 AI Accelerator Uses NXP Ara-240 for Edge Inference Workloads · ⚡ Hardware Acceleration · linuxgizmos.com · 4d

AEG: A Baremetal Framework for AI Acceleration via Direct Hardware Access in Heterogeneous Accelerators · ⚡ Hardware Acceleration · arxiv.org · 17h

Hardware Utilization and Inference Performance of Edge Object Detection Under Fault Injection · 🎯 Intel IPP · arxiv.org · 17h

A-IO: Adaptive Inference Orchestration for Memory-Bound NPUs · 📏 Linear Types · arxiv.org · 17h

LoDAdaC: A Unified Local Training-Based Decentralized Framework with Adaptive Gradients and Compressed Communication · 🤖 TVM · arxiv.org · 17h

Modality-Aware Zero-Shot Pruning and Sparse Attention for Efficient Multimodal Edge Inference · 🤖 TVM · arxiv.org · 1d

EdgeFlow: Fast Cold Starts for LLMs on Mobile Devices · ⏰ Timely Dataflow · arxiv.org · 1d

Networking-Aware Energy Efficiency in Agentic AI Inference: A Survey · 🛡️ AI Security · arxiv.org · 4d

AutoSOTA: An End-to-End Automated Research System for State-of-the-Art AI Model Discovery · 💬 Prompt Engineering · arxiv.org · 6d

Multi-Turn Reasoning LLMs for Task Offloading in Mobile Edge Computing · 🦙 Ollama · arxiv.org · 5d

PCA-Driven Adaptive Sensor Triage for Edge AI Inference · 🧠 Machine Learning · arxiv.org · 6d

From LLM to Silicon: RL-Driven ASIC Architecture Exploration for On-Device AI Inference · 🧮 Intel MKL-DNN · arxiv.org · 4d

Edge Intelligence for Satellite-based Earth Observation: Scheduling Image Acquisition and Processing · 🏗️ System Design · arxiv.org · 6d

SHIELD: A Segmented Hierarchical Memory Architecture for Energy-Efficient LLM Inference on Edge NPUs · 🔄 Hardware Transactional Memory · arxiv.org · 4d