NVIDIA's Open-Source Linux Kernel Driver Performing At Parity To Proprietary Driver Review - Phoronix
![NVIDIA's Open-Source Linux Kernel Driver Performing At Parity To Proprietary Driver Review](https://lemmy.ml/pictrs/image/126a5fe1-caef-44cd-a3d4-55f4a1e34342.webp?format=webp)
As a reminder, the same (closed-source) user-space components for OpenGL / OpenCL / Vulkan / CUDA are used regardless of the NVIDIA kernel driver option with their official driver stack.
CUDA hell remains. :(
AMD needs to get their ducks in a row. They already have the advantage of not being Nvidia.
> They already have the advantage of not being Nvidia
That's just because they release worse products.
If AMD had Nvidia's marketshare, they would be just as scummy as the business climate allows.
In fact, AMD piggybacks off of Nvidia's scumbaggery to charge more for their GPUs rather than engage in an actual price war.
It's breaking down. PyTorch supports ROCm now.
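A minimal sketch of what that looks like in practice, assuming a ROCm (or CUDA) build of PyTorch is installed: the ROCm builds reuse the `torch.cuda` API, so the same code runs on either backend, and `torch.version.hip` tells you which one you actually got.

```python
# Minimal sketch: runs on both CUDA and ROCm builds of PyTorch,
# since the ROCm build exposes the same torch.cuda API via HIP.
import torch

print("accelerator available:", torch.cuda.is_available())
print("built against CUDA:", torch.version.cuda)      # None on ROCm builds
print("built against ROCm/HIP:", torch.version.hip)   # None on CUDA builds

if torch.cuda.is_available():
    # "cuda" as a device string also targets ROCm GPUs on ROCm builds.
    x = torch.randn(1024, 1024, device="cuda")
    print((x @ x).sum().item())
```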
Yes, CUDA is the only reason why I consider NVIDIA. I really hate this company, but the AMD tech stack is really inferior.
I've heard this but don't really understand it... At a high level, what makes CUDA so much better?
So is CUDA good or bad?
I keep reading that it's hell, but also the best. Apparently it's the single reason why Nvidia is so big in AI, yet it sucks.
What is it?
Both.
The good: CUDA is required for maximum performance and compatibility with machine learning (ML) frameworks and applications. It is a legitimate reason to choose Nvidia, and if you have an Nvidia card you will want to make sure you have CUDA acceleration working for any compatible ML workloads.
The bad: Getting CUDA to actually install and run correctly is a giant pain in the ass for anything but the absolute most basic use case. You will likely need to maintain multiple framework versions, because new ones are not backwards-compatible. You'll need to source custom versions of Python modules compiled against specific versions of CUDA, which opens a whole new circle of Dependency Hell. And you know how everyone and their dog publishes shit with Docker now? Yeah, have fun with that.
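If you want to see where you stand before wading into that, here is a minimal sketch, assuming PyTorch is the framework in question (just one example of many): it reports which CUDA version the installed build was compiled against and whether the driver can actually run it, which is where most mismatches show up.

```python
# Minimal sketch: report the CUDA version this PyTorch build was compiled
# against versus what the installed driver can actually run. A mismatch here
# is the usual source of "CUDA not available" surprises.
import torch

print("PyTorch:", torch.__version__)
print("compiled against CUDA:", torch.version.cuda)
print("cuDNN:", torch.backends.cudnn.version() if torch.backends.cudnn.is_available() else None)

if torch.cuda.is_available():
    for i in range(torch.cuda.device_count()):
        name = torch.cuda.get_device_name(i)
        major, minor = torch.cuda.get_device_capability(i)
        print(f"GPU {i}: {name} (compute capability {major}.{minor})")
else:
    print("No usable CUDA device: driver too old for this build, or the wrong wheel was installed.")
```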
That said, AMD's equivalent (ROCm) is just as bad, and AMD is lagging about a full generation behind Nvidia in terms of ML performance.
The easy way is to just use OpenCL. But that's not going to give you the best performance, and it's not going to be compatible with everything out there.
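For comparison, OpenCL is at least easy to probe. A minimal sketch using the `pyopencl` package (an assumption here, along with a working OpenCL ICD being installed):

```python
# Minimal sketch: enumerate OpenCL platforms and devices. Vendor-neutral,
# but frameworks that expect CUDA or ROCm won't use this path.
import pyopencl as cl

for platform in cl.get_platforms():
    print("Platform:", platform.name, "-", platform.version)
    for device in platform.get_devices():
        print("  Device:", device.name)
        print("    global memory (MiB):", device.global_mem_size // (1024 * 1024))
        print("    compute units:", device.max_compute_units)
```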
The fact that "cuda" means 'wonders' in Polish has been living in my mind rent-free for several days after I read the Nvidia news.
I think this will change. Nvidia hired devs to work on Nouveau, NVK is coming along, etc.
Well... it is an out-of-tree kernel driver that is made by the same company, and the userspace drivers are still proprietary.
This says NOTHING other than "wow, NVIDIA can write good (open source) code that doesn't suck"?
How is it different? Wouldn't it just be the same software with the source code available?
It’s not, they’re not open sourcing their driver. They’ve made an open source driver.
Yes
Woohoo!
Anyone tried this beta version yet? Any idea how stable it is?
I've been using the open kernel driver on my Debian workstation with the Debian backports kernel, and it has worked far better than the default driver. I installed it from the Nvidia CUDA repo.
Performance parity? Heck no, not until this bug with the GSP firmware is solved: https://github.com/NVIDIA/open-gpu-kernel-modules/issues/538