Most open-source diffusion models and workflows are optimized for CUDA, so running them through DirectML on AMD GPUs tends to take roughly 10-20x longer than on similarly tiered Nvidia GPUs.
Is this also the case for Gigapixel’s Recovery mode?