Apple
Out of context: Reply #3682
- 3,882 Responses
From Reddit:
MacBooks use unified memory. On a standard laptop or PC you have fixed RAM and fixed GPU memory, but on a MacBook you can decide how much of the memory is used by the GPU and how much by the CPU. So if you buy a MacBook with 128GB of unified memory, you can leave 16GB for the system and have 112GB of GPU memory. In machine learning, the biggest limiting factor is not GPU or CPU performance, it is GPU memory. The most memory you can get on a consumer GPU in the PC world is the 4090 with 24GB. If you want more, you can buy an NVIDIA A40 with 48GB for around 7,000 USD. If you need more for your neural net model, you can buy an Nvidia A100 with 80GB for around 20,000 USD. If you want even more, you would have to go way, way above 30K. But a MacBook Pro M4 Max with 128GB still has more GPU memory, and it is only 4,999 USD. So it is by far the cheapest computer you can buy for this kind of purpose.
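The capacity argument above is easy to sanity-check with arithmetic: a model's weights need roughly (parameter count × bytes per parameter) of memory. A rough sketch, using a hypothetical 70B-parameter model and the 24GB / 112GB figures from the post as illustrative thresholds:

```python
# Back-of-envelope check of the "memory is the limit" claim:
# how much memory do a model's weights alone need at common precisions?
# The 70B model size and capacity thresholds are illustrative assumptions.

def weight_memory_gb(params_billions: float, bytes_per_param: float) -> float:
    """Memory needed just to hold the weights, in GB (10^9 bytes)."""
    return params_billions * bytes_per_param

for label, bytes_pp in [("fp16", 2.0), ("int8", 1.0), ("int4", 0.5)]:
    gb = weight_memory_gb(70, bytes_pp)
    fits_24gb = gb <= 24    # single consumer GPU (e.g. a 4090)
    fits_112gb = gb <= 112  # 128GB unified memory minus ~16GB for the system
    print(f"70B @ {label}: {gb:.0f} GB | fits 24GB GPU: {fits_24gb} | fits 112GB unified: {fits_112gb}")
```

At fp16 the weights alone are 140GB and fit neither machine; quantized to int8 or int4 they fit in 112GB of unified memory but still not in 24GB of VRAM, which is the scenario the post is describing.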
- Nvidia cards aren't utilizing all PCI-E lanes; they could put multiple TB of RAM on a card if they wanted.
- They don’t. Do they? (monospaced)
- The bottleneck is RAM speed. M4 chips use LPDDR5 memory, which can be several times slower than the GDDR6 and HBM memory used on graphics cards. (monNom)
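The bandwidth point matters because token generation is usually memory-bound: each generated token streams the model's weights through memory roughly once, so tokens/s is capped at about bandwidth ÷ model size. A sketch with approximate public bandwidth specs (illustrative figures, not benchmarks), assuming a hypothetical 40GB model:

```python
# Why memory bandwidth (not just capacity) bounds generation speed:
# upper bound on tokens/s ≈ memory bandwidth / model size in memory.
# Bandwidth numbers below are approximate published specs, for illustration only.

def tokens_per_second(bandwidth_gb_s: float, model_gb: float) -> float:
    """Bandwidth-limited upper bound on tokens generated per second."""
    return bandwidth_gb_s / model_gb

MODEL_GB = 40.0  # hypothetical quantized large model

for name, bw in [("M4 Max LPDDR5 (~546 GB/s)", 546.0),
                 ("RTX 4090 GDDR6X (~1008 GB/s)", 1008.0),
                 ("A100 HBM2e (~2039 GB/s)", 2039.0)]:
    print(f"{name}: ~{tokens_per_second(bw, MODEL_GB):.0f} tokens/s upper bound")
```

So a Mac with enough unified memory can *hold* a model a 24GB GPU cannot, but once it fits on both, the higher-bandwidth GPU memory generates tokens faster.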
- You can run just about any model on CPU with system memory. It just might be really slow. (monNom)
- yep, unless you're developing your own AI or using something Apple architecture specific this is not the case... GPUs are so much faster they can swap back and forth between GPU memory and system memory and still be many times faster. + most AI applications are designed to work in this way (kingsteven)
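The swap-back-and-forth tradeoff kingsteven describes also reduces to arithmetic: a GPU that pages model layers in over PCI-E pays a transfer cost per pass, bounded by the link bandwidth. A sketch with the theoretical PCIe 4.0 x16 figure (all numbers illustrative assumptions, not measurements):

```python
# Cost of paging a model over PCI-E instead of keeping it resident in VRAM:
# time per full-model transfer = model size / link bandwidth.
# 32 GB/s is the theoretical one-direction PCIe 4.0 x16 rate (illustrative).

PCIE4_X16_GB_S = 32.0

def transfer_seconds(model_gb: float, link_gb_s: float = PCIE4_X16_GB_S) -> float:
    """Time to stream the whole model across the link once, in seconds."""
    return model_gb / link_gb_s

# Streaming a hypothetical 40 GB model over PCIe 4.0 x16:
print(f"{transfer_seconds(40.0):.2f} s per full-model transfer")
```

Whether the GPU still wins depends on how much of that transfer overlaps with compute, which is why well-designed AI applications stream the next chunk while the current one is being processed.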
- i have a £700 4070Ti SUPER in my 6 y/o PC and it'll beat the ass off an M4 Max in anything AI graphics (other than theoretical benchmarks), and heat my house... (kingsteven)
- still love my M1 Max though. it's a case of choosing (speed + cost effectiveness) or (memory + energy efficiency). nobody has both, and in 99.9% of applications the GPU wins (kingsteven)
- don't get me started on the compatibility/documentation of ARM-specific builds of machine learning tools (kingsteven)