[MEGATHREAD] Local AI Hardware - November 2025
reddit.com·1d·
Discuss: r/LocalLLaMA

This is the monthly thread for sharing your local AI setups and the models you’re running.

Whether you’re using a single CPU, a gaming GPU, or a full rack, post what you’re running and how it performs.

Post in any format you like. The list below is just a guide:

Hardware: CPU, GPU(s), RAM, storage, OS

Model(s): name + size/quant

Stack: (e.g. llama.cpp + custom UI)

Performance: t/s, latency, context length, batch size, etc.

Power consumption

Notes: purpose, quirks, comments
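For the throughput number, the usual convention is tokens generated divided by wall-clock generation time. A minimal sketch (the figures below are hypothetical, not from any real run):

```python
# Compute tokens/second from a hypothetical generation run.
tokens_generated = 256   # number of output tokens (hypothetical)
elapsed_seconds = 8.0    # wall-clock time for generation (hypothetical)

tps = tokens_generated / elapsed_seconds
print(f"{tps:.1f} t/s")  # 32.0 t/s
```

Tools like llama.cpp's `llama-bench` report prompt-processing and generation speeds separately, which is worth noting when you post numbers.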

Please share setup pics for eye candy!

Quick reminder: You can share hardware purely to ask questions or get feedback. All experience levels welcome.

House rules: no buying/selling/promo.
