Do LLMs or that AI image stuff run on CUDA?
CUDA is what's used to interface with Nvidia GPUs for compute. AI stuff almost always needs a GPU for decent performance.
Nearly all such software supports CUDA (which until now has been Nvidia-only), and some also supports AMD through ROCm, DirectML, ONNX, or other means, but CUDA is the most common. This will open more of those tools up to users with AMD hardware.
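To make that concrete, here's roughly what it looks like from the application side: PyTorch's ROCm builds reuse the torch.cuda API (via HIP), so code written against the "cuda" device typically runs unchanged on supported AMD cards. A minimal sketch, assuming a PyTorch build with either CUDA or ROCm support installed:

    # Minimal sketch: the ROCm build of PyTorch exposes AMD GPUs through the
    # same torch.cuda interface, so one code path covers both vendors.
    import torch

    if torch.cuda.is_available():
        # True on Nvidia GPUs (CUDA build) and on supported AMD GPUs (ROCm build).
        device = torch.device("cuda")
        print("GPU:", torch.cuda.get_device_name(0))
    else:
        device = torch.device("cpu")
        print("No supported GPU, falling back to CPU")

    x = torch.randn(1024, 1024, device=device)
    y = x @ x  # matrix multiply runs on whichever backend was selected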
Thanks, that is what I was curious about. So good news!
Yes, llama.cpp and its derivatives, and Stable Diffusion, also run on ROCm. LLM fine-tuning is still mostly CUDA; ROCm implementations aren't as far along there, but they're coming.
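For the Stable Diffusion case, the same idea applies: on a ROCm build of PyTorch the usual "cuda" device string maps to the AMD GPU, so a typical script doesn't need vendor-specific changes. A rough sketch, assuming the Hugging Face diffusers package (the model id below is just an example checkpoint):

    # Rough sketch, assuming the "diffusers" package and a PyTorch build with
    # CUDA or ROCm support; the model id is only an example.
    import torch
    from diffusers import StableDiffusionPipeline

    device = "cuda" if torch.cuda.is_available() else "cpu"

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",  # example checkpoint id
        torch_dtype=torch.float16 if device == "cuda" else torch.float32,
    )
    pipe = pipe.to(device)  # same call whether the backend is CUDA or ROCm

    image = pipe("a watercolor lighthouse at dusk").images[0]
    image.save("out.png")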
They are usually released for CUDA first, and if a project gets popular enough, someone comes along and ports it to other platforms, which can take a while, especially for ROCm. Apple M-series ports usually appear before ROCm ones, which shows how much the dev community dislikes working with ROCm, with famous examples such as geohot throwing in the towel after working with ROCm for a while.