Does a cloud-server-based AI task still need a maxed-out GPU/CPU?
Use case generally determines hardware priorities (constrained by budget).
Use case example: Topaz Photo AI Super Focus and other intensive enhancements (or similar functions in LR, like AI Denoise). All of these enhancements are painfully slow for everyone right now (based on chatroom entries).
If these are cloud-server-based AI functions, does that lessen the necessity for top-of-the-line (i.e., expensive) GPUs with maxed-out cores and VRAM, or CPUs with a maximum number of cores and lots of RAM? Sorry if that reveals my poor understanding of how this all works.
What I am trying to ask is whether an end user can mitigate this bottleneck by upgrading hardware, and which upgrade gives the most bang for the buck. It strikes me that a fast bidirectional internet connection may be a big factor (not sure why that is not discussed more), but how far does one need to go to eliminate the bottleneck? Or is this simply a matter of these being new beta functions in Topaz Photo AI that will improve with time, without the need to spend $5,000 on a maxed-out Mac Studio?
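To make the "where is the bottleneck" question concrete, here is a rough back-of-the-envelope sketch in plain Python. Every number in it is an assumption I made up for illustration (file sizes, link speeds, processing times), not a measurement of Topaz or Lightroom. The point it illustrates: if the work runs in the cloud, upload speed and file size dominate the wait; if it runs locally, the GPU dominates.

```python
# Rough comparison of cloud vs. local processing time for one image enhancement.
# All constants below are illustrative assumptions, not measured values.

RAW_FILE_MB = 60        # assumed size of the RAW file sent to the server
RESULT_FILE_MB = 120    # assumed size of the enhanced file sent back
UPLOAD_MBPS = 20        # assumed upload bandwidth (megabits per second)
DOWNLOAD_MBPS = 200     # assumed download bandwidth (megabits per second)
CLOUD_COMPUTE_S = 15    # assumed server-side processing time per image
LOCAL_COMPUTE_S = 90    # assumed local processing time on a mid-range GPU


def transfer_seconds(size_mb: float, link_mbps: float) -> float:
    """Time in seconds to move size_mb megabytes over a link_mbps connection."""
    return (size_mb * 8) / link_mbps


cloud_total = (
    transfer_seconds(RAW_FILE_MB, UPLOAD_MBPS)          # upload the RAW
    + CLOUD_COMPUTE_S                                   # server does the work
    + transfer_seconds(RESULT_FILE_MB, DOWNLOAD_MBPS)   # download the result
)

print(f"Cloud round trip: ~{cloud_total:.0f} s per image")
print(f"Local processing: ~{LOCAL_COMPUTE_S:.0f} s per image")
```

With those made-up numbers, the upload alone is about 24 seconds of a roughly 44-second cloud round trip, which is why upload bandwidth would matter far more than local VRAM for a truly cloud-based function, and why a pricey GPU only pays off when the processing actually happens on your machine.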
Sidebar: the question for Apple users next month will be M4 Mac Mini vs. M2 Max Mac Studio if hardware is the priority, but I need to answer the first question before going down that rabbit hole.