Apple
- 3,882 Responses
"Apple unveils new Mac Studio, the most powerful Mac ever, featuring M4 Max and new M3 Ultra"
- M3 Ultra goes up to 512GB RAM
- But I'll wait for the next 1TB RAM version; then running LLMs locally will be great
- I mean, a 512GB machine could theoretically run a model of 200 billion parameters (kingsteven)
- Llama for sure, but for R1 I think you need more, so a 1TB RAM next iteration should do it for good performance
- R1 is 671B parameters, so you'd really need 2TB (kingsteven)
- Although, the more the better. (kingsteven)
- Found the specs to run R1 models: https://apxml.com/po…
- Yeah, 1,543 GB, so it would require 2TB (kingsteven)
- Looks like DeepSeek-R1-Zero = 671B at ~436 GB, which would run on the M3 Ultra 512GB (kingsteven)
- Crazy specs though, if it can compete with 6 x A100 80GB in any way; that's a datacenter with $200k of hardware (kingsteven)
- Saying that, I've seen people build systems to run R1 locally for £2k, and it looks to be fast enough for personal use. (kingsteven)
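The RAM figures being thrown around above come down to simple arithmetic: weight memory is roughly parameter count times bits per parameter. A back-of-the-envelope sketch (illustrative only; real runtimes add KV-cache and activation overhead, which is why the linked spec quotes 1,543 GB rather than the raw 1,342 GB):

```python
# Rough memory estimate for hosting an LLM's weights in RAM.
# Weights only; inference overhead (KV cache, activations) comes on top.

def weight_memory_gb(params_billions: float, bits_per_param: float) -> float:
    """Approximate RAM needed just for the weights, in decimal GB."""
    bytes_total = params_billions * 1e9 * bits_per_param / 8
    return bytes_total / 1e9

# DeepSeek-R1's 671B parameters at FP16 vs. ~4-bit quantization:
print(weight_memory_gb(671, 16))  # 1342.0 GB -> needs ~2TB with overhead
print(weight_memory_gb(671, 4))   # 335.5 GB  -> within a 512GB M3 Ultra
```

This matches the thread: full-precision R1 overflows even 1TB, while an aggressively quantized build (the ~436 GB figure) squeezes into 512GB.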
