Holy shit… it costs $10K, but you can get a Mac Studio (the little computer) with as much as 512GB of unified memory. That's right: RAM the GPU can use as VRAM. Meaning you can natively run the largest DeepSeek and Llama models, with tons of room to spare, on this single device.
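For a rough sanity check on that claim, here's a back-of-envelope sketch in Python. The parameter counts are the published sizes of the biggest open models; the 4-bit figure and the overhead factor are my own assumptions, not anything from the post:

```python
# Back-of-envelope: do the largest open-weight models fit in 512 GB
# of unified memory when quantized to 4 bits per weight?

models = {
    "DeepSeek-R1 (671B params)": 671e9,
    "Llama 3.1 405B": 405e9,
}

BYTES_PER_PARAM_Q4 = 0.5   # 4-bit quantization ~= half a byte per weight
OVERHEAD = 1.15            # rough allowance for KV cache, activations, runtime
UNIFIED_MEMORY_GB = 512

for name, params in models.items():
    gb = params * BYTES_PER_PARAM_Q4 * OVERHEAD / 1e9
    verdict = "fits" if gb < UNIFIED_MEMORY_GB else "does NOT fit"
    print(f"{name}: ~{gb:.0f} GB -> {verdict} in {UNIFIED_MEMORY_GB} GB")
```

That works out to roughly 386 GB for DeepSeek-R1 and 233 GB for Llama 3.1 405B, so both land under 512 GB with headroom left over, which is the whole point.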
The acceleration of hardware toward AI optimization is going to be crazy. I get the sense we will see double, triple, even quadruple the VRAM equivalent (though it'll all go unified) in just the next couple of years of product iterations.
Running huge LLMs and video/image models locally will get easier and easier.
