Hello, this will be on that subject, and please do not take it too seriously; it is only my personal opinion on the comparison, as an associate scientist. And this is an opinion about mainstream-level cards, not about the professional segment, which is above my budget… highly above, to be honest. I tried to express my thoughts, and I think I did it right. My opinion is based on GNU/Linux Ubuntu 20.04.3 and today's latest "standard" packages. The thing is that NVIDIA with CUDA crushes AMD with OpenCL. Even an NVIDIA RTX 3090 running OpenCL destroys the AMD Radeon RX 6900 XT. I am sorry to say that this comes down to drivers, and what more can I say? If someone asks me about it, I will recommend doing any research work on an NVIDIA card. Sorry, AMD!
And I am frustrated: below you can see the setup, an Ubuntu 20.04.3 PC with four AMD Radeon RX 6900 XT cards on 4 × PCIe 3.0 x8 slots, with a powerful motherboard and CPU. What can I conclude after weeks of tests? It was the most significant waste of money in my life, especially since I bought the cards on eBay from scalpers. The only good thing is 234 MH/s in ETH mining.
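For anyone who wants to sanity-check a similar setup, the first thing to look at is which OpenCL runtime the ICD loader actually picks up, because on AMD it can be ROCm, AMDGPU-PRO, or Mesa, and that alone changes the results a lot. A minimal sketch using pyopencl (my assumption: pip install pyopencl and a working ICD loader; package names may differ on your distribution):

    import pyopencl as cl

    # List every OpenCL platform (vendor runtime/ICD) the loader can see,
    # and the devices behind each one. On NVIDIA you normally get a single
    # platform from the proprietary driver; on AMD, which runtime shows up
    # here is exactly the driver question I am complaining about.
    for platform in cl.get_platforms():
        print(f"Platform: {platform.name} ({platform.version})")
        for device in platform.get_devices():
            print(f"  Device: {device.name}")
            print(f"    Driver version: {device.driver_version}")
            print(f"    Compute units:  {device.max_compute_units}")
            print(f"    Global memory:  {device.global_mem_size // 2**20} MiB")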
Thanks for reading!
p ;).
Wow. I do agree that NVIDIA GPUs are better than AMD's when it comes to drivers, especially in the AI/ML scope of things.
I am interested to see the benchmark results from your multi-AMD-card setup. I spoke with the OpenCL guys in their IRC channel (Libera Chat), and they say that AMD has performance similar to Nvidia's in terms of OpenCL.
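If you do post numbers, running something small and identical on every card would make them comparable. Here is a micro-benchmark sketch with pyopencl; the saxpy kernel and the 16M-element size are arbitrary choices of mine, not any standard benchmark:

    import numpy as np
    import pyopencl as cl

    # Trivial saxpy kernel: y = a*x + y, mostly memory-bandwidth bound.
    KERNEL = """
    __kernel void saxpy(__global const float *x, __global float *y, float a) {
        int i = get_global_id(0);
        y[i] = a * x[i] + y[i];
    }
    """

    N = 1 << 24  # 16M floats per buffer, arbitrary
    x = np.random.rand(N).astype(np.float32)
    y = np.random.rand(N).astype(np.float32)

    for platform in cl.get_platforms():
        for device in platform.get_devices(device_type=cl.device_type.GPU):
            ctx = cl.Context([device])
            queue = cl.CommandQueue(
                ctx, properties=cl.command_queue_properties.PROFILING_ENABLE)
            prg = cl.Program(ctx, KERNEL).build()
            mf = cl.mem_flags
            x_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=x)
            y_buf = cl.Buffer(ctx, mf.READ_WRITE | mf.COPY_HOST_PTR, hostbuf=y)
            # Event profiling gives the pure kernel time in nanoseconds.
            evt = prg.saxpy(queue, (N,), None, x_buf, y_buf, np.float32(2.0))
            evt.wait()
            ms = (evt.profile.end - evt.profile.start) * 1e-6
            print(f"{device.name}: {ms:.3f} ms for a {N}-element saxpy")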
Nvidia has tensor cores, and these really boost AI/ML performance, whereas AMD GPUs do not have any tensor cores.
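For what it's worth, you don't need exotic code to exercise them: on Volta and newer, a plain FP16 matmul in PyTorch is already dispatched to the tensor cores. A rough timing sketch, assuming a PyTorch build with CUDA support (the 4096 size is arbitrary):

    import torch

    assert torch.cuda.is_available()
    # Keep the FP32 path on the regular CUDA cores, so the comparison below
    # roughly isolates the tensor-core speedup (newer PyTorch versions may
    # otherwise route FP32 matmuls through TF32 tensor cores).
    torch.backends.cuda.matmul.allow_tf32 = False

    n = 4096
    for dtype in (torch.float32, torch.float16):
        a = torch.randn(n, n, device="cuda", dtype=dtype)
        b = torch.randn(n, n, device="cuda", dtype=dtype)
        for _ in range(3):        # warm-up: cuBLAS heuristics, lazy init
            a @ b
        torch.cuda.synchronize()
        start = torch.cuda.Event(enable_timing=True)
        end = torch.cuda.Event(enable_timing=True)
        start.record()
        c = a @ b                 # the FP16 case runs on the tensor cores
        end.record()
        torch.cuda.synchronize()
        print(f"{dtype}: {start.elapsed_time(end):.2f} ms")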
I’m sure you’re glad that Intel has now finally joined the GPU market.
Two months ago I did some heavy research because of the problems with Nvidia and AMD GPUs for open-source AI/ML, and I concluded that for price-to-performance, custom hardware is best.
I determined that SoCs from MediaTek have great potential when used in a cluster: they are low-priced, down to $17–$20 a piece, and super powerful. They have built-in tensor cores with 4.5 TOPS of performance, ARM Mali GPUs, and 8-core ARM CPUs, and they also support OpenCL.
From my calculations, 30 of these SoCs will match one RTX 3090 in equivalent tensor cores and cost about half the price (the back-of-the-envelope math is after the spec list below). Also imagine what 30 × 8 ARM CPU cores and 30 Mali GPUs can provide on top of that…
Example:
Dimensity 1100 (MT6891 / MT6891Z/CZA) SoC, ~$17:
6 nm N6 lithography
4× Cortex-A78 @ 2.6 GHz
4× Cortex-A55 @ 2.0 GHz
Video encoding: HEVC 2160p 4K @ 60 FPS
Tensor Core: MediaTek APU 3.0
GPU: Mali-G77 MC9 @ 836 MHz
GNSS: GPS
5G Sub-6:
DL = 4700 Mbps (200 MHz 2CA, 256-QAM, 4×4 MIMO)
UL = 2500 Mbps (200 MHz 2CA, 256-QAM, 2×2 MIMO)
LTE Category 19 DL
Bluetooth 5.2
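And the back-of-the-envelope math behind my 30-SoC claim, so you can plug in your own numbers; note that the RTX 3090 baseline you compare against is an assumption that depends on which precision (INT8 vs FP16) and which street price you take:

    # Rough cluster math. Per-SoC figures are from the spec list above;
    # everything on the RTX 3090 side is left for you to fill in.
    socs = 30
    tops_per_soc = 4.5       # MediaTek APU 3.0 rating
    price_per_soc = 17.0     # USD, low end of the quoted range

    cluster_tops = socs * tops_per_soc    # 135 TOPS
    cluster_cost = socs * price_per_soc   # $510 in bare silicon
    print(f"{cluster_tops:.0f} TOPS for ~${cluster_cost:.0f} in SoCs alone")
    # Not counted: carrier boards, power, cooling, and the interconnect,
    # which is the hard part of using 30 SoCs as one accelerator.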
But now Intel has just joined the GPU market, and looking into their AI/ML hardware, I concluded that it's better than Nvidia's AI hardware. Their drivers might take some time to be at their best, but the Intel Arc GPUs have great hardware, and they do open-source drivers. If Intel keeps their new Arc GPUs low-priced and powerful, I think it will benefit everyone in the AI/ML industry greatly, without needing to look into custom hardware.