An Affordable AI Server

Two AMD MI60s from eBay cost me about $1,000 total and gave me 64GB of VRAM. That’s enough to run Llama 3.3 70B at home with a 32K context window.

When I started looking into running large language models locally, the obvious limiting factor was VRAM. Consumer GPUs top out at 24GB, and even that requires a high-end card like the RTX 4090. I wanted to run 70B-parameter models locally, on hardware I own.

Datacenter Castoff, Homelab Treasure

The MI60 is a 2018 server GPU that AMD built for datacenters. It has 32GB of HBM2 memory, the same high-bandwidth memory you find in modern AI accelerators, and you can pick one up for around $500 on eBay. Two of them give you 64GB of VRAM, more than enough for Llama 3.3 70B.
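As a sanity check, the VRAM math works out with room to spare. The sketch below is a back-of-the-envelope estimate, assuming a Q4_K_M-style quantization (~4.85 bits per weight effective), an fp16 KV cache, and the published Llama 3 70B architecture (80 layers, 8 KV heads under grouped-query attention, head dimension 128); it ignores smaller compute buffers, so treat the totals as rough.

```python
# Back-of-the-envelope VRAM budget for Llama 3.3 70B on 2x MI60 (64 GB).
# Assumptions: Q4_K_M-style quantization (~4.85 bits/weight effective),
# fp16 KV cache, published Llama 3 70B architecture numbers.
PARAMS = 70e9       # model parameters
BPW = 4.85          # effective bits per weight for a Q4_K_M-style quant
LAYERS = 80         # transformer layers
KV_HEADS = 8        # KV heads (grouped-query attention)
HEAD_DIM = 128      # dimension per attention head
CONTEXT = 32_768    # target context window
GIB = 1024**3

# Quantized weights
weights_gib = PARAMS * BPW / 8 / GIB

# KV cache per token: a K and a V tensor per layer, 2 bytes each at fp16
kv_bytes_per_token = 2 * LAYERS * KV_HEADS * HEAD_DIM * 2
kv_cache_gib = CONTEXT * kv_bytes_per_token / GIB

print(f"weights:  {weights_gib:5.1f} GiB")                 # ~39.5 GiB
print(f"KV cache: {kv_cache_gib:5.1f} GiB")                # ~10.0 GiB
print(f"total:    {weights_gib + kv_cache_gib:5.1f} GiB")  # ~49.5 of 64 GiB
```

Roughly 50 GiB against 64 GiB of HBM2, which leaves headroom for compute buffers, and it's grouped-query attention that keeps the 32K-token KV cache down to about 10 GiB.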

One problem: they’re passive-cooled cards designed for server chassis with ser…
