What NVIDIA calls CUDA cores, AMD calls Stream Processors; they are the same thing and serve the same purpose: the GPU's general-purpose compute units. Without them, GPU-accelerated video processing isn't possible at all. Don't spread misleading information.
VEAI originally ran on CUDA cores; AMD GPUs were not supported until 2020. Later, VEAI switched to FP16 on Tensor Cores for better quality and performance on newer GPUs, which led to slower performance on low-end GPUs. Recently, VEAI seems to have added an FP32 mode, which has even better precision than FP16 but at a cost: performance is roughly cut in half. Ampere GPUs, however, were released with FP16 throughput equal to FP32, which may help increase quality at the same performance.
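The FP16-vs-FP32 precision difference is easy to see even on a CPU with NumPy. This is only a sketch of the precision side of the trade-off; actual throughput depends entirely on the GPU in question:

```python
import numpy as np

# float16 keeps roughly 3 decimal digits, float32 roughly 7:
# the same constant rounds differently in each format.
half = np.float16(0.1)
single = np.float32(0.1)
print(float(half))    # 0.0999755859375 -> visible rounding error
print(float(single))  # ~0.1            -> far smaller error

# float16 also overflows early: its maximum is 65504, and anything
# much larger becomes inf. This is one reason FP16 inference can
# lose quality on extreme intermediate values.
print(np.finfo(np.float16).max)  # 65504.0
print(np.float16(70000.0))       # inf
```

The rounding error is why an FP32 mode can look better, and the halved storage per value is why FP16 runs faster on hardware with double-rate FP16 units.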
Whether VEAI uses 100% of the GPU depends on the model. Power and voltage rise and fall with every frame: after one frame finishes, power draw drops, then climbs again for the next frame. It WON'T be consistent. With 1080p-to-UHD or 8K upscaling, VEAI can draw close to 400 W.

Also note that the default Task Manager graph only shows the 3D engine (what games use). For machine-learning workloads, only the Tensor Cores/CUDA cores (or Stream Processors) are busy; you can switch a Task Manager graph to the "Compute" or "Cuda" engine to see that activity. Not every GPU component is forced or required to run. Again, this is not gaming, where you can see 100% GPU usage, unless you push VEAI heavily with 8K upscaling or similar (and even then, not all components are used). Some models also can't reach 100% GPU because of CPU/SSD/RAM bottlenecks.

Consumer GPUs are built mostly for gaming, and gaming exercises the whole chip; rendering and machine-learning workloads are NOT gaming and use it differently. It took me two seconds to look this up, but people still refuse to spend those two seconds. Google shows everything about machine learning. Don't just let the machines learn by themselves and get smarter than you.
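You can watch the per-frame power swing yourself on an NVIDIA card by polling the driver's `nvidia-smi` tool once per second with `nvidia-smi --query-gpu=utilization.gpu,power.draw --format=csv,noheader -l 1`. A minimal sketch of parsing one line of that CSV output in Python (the sample line below is illustrative, not a real measurement):

```python
def parse_gpu_sample(line):
    """Parse one nvidia-smi CSV line like '87 %, 312.45 W'
    into (utilization_percent, power_watts)."""
    util_str, power_str = [field.strip() for field in line.split(",")]
    # Strip the unit suffixes reported by nvidia-smi.
    return int(util_str.rstrip(" %")), float(power_str.rstrip(" W"))

# Illustrative sample, not a real reading from any GPU.
util, watts = parse_gpu_sample("87 %, 312.45 W")
print(util, watts)  # 87 312.45
```

Logging these pairs over a run makes the sawtooth pattern between frames visible, instead of relying on Task Manager's 3D graph.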
Still, people just don't look things up on the internet, and believe that they already know everything.