Does a cloud server based AI task still need maxed out GPU/CPU?
Use case generally determines hardware priorities (impacted by cost limits).
Use Case Example: Topaz Photo AI's Super Focus and other intensive enhancements (or similar functions in LR, like AI Denoise). All of these enhancements are painfully slow for everyone right now (based on chatroom entries).
If these are cloud-server-based AI functions, does that lessen the necessity for top-of-the-line (i.e., expensive) GPUs with maxed-out cores and VRAM, or CPUs with a maxed-out core count and RAM? Sorry if that reveals my poor understanding of how this all works.
What I am trying to ask is whether an end user can mitigate this bottleneck by upgrading hardware, and which upgrade gives the most bang for the buck. It strikes me that a fast bidirectional internet connection may be a big factor (I'm not sure why that isn't discussed more), but how far does one need to go to eliminate the bottleneck? Or is this simply a matter of these being new beta functions of Topaz Photo AI that will improve with time, without the need to spend $5,000 on a maxed-out Mac Studio?
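One way to reason about whether the internet connection is the bottleneck is simple transfer-time arithmetic. The sketch below is a back-of-the-envelope estimate only; the file size and link speeds are hypothetical examples, not measurements of any Topaz service.

```python
# Back-of-the-envelope estimate of how much connection speed matters for
# cloud-based processing. All numbers below are hypothetical examples.

def transfer_seconds(size_mb: float, link_mbps: float) -> float:
    """Time to move size_mb megabytes over a link of link_mbps megabits/s."""
    return (size_mb * 8) / link_mbps  # 8 bits per byte

# Example: a 75 MB RAW file uploaded, a similar-sized result downloaded,
# on a typical asymmetric home connection (20 Mbps up, 200 Mbps down).
upload_s = transfer_seconds(75, 20)
download_s = transfer_seconds(75, 200)
print(f"upload ~{upload_s:.0f}s, download ~{download_s:.0f}s")  # upload ~30s, download ~3s
```

The point of the sketch: on an asymmetric connection, the upstream leg dominates, so if remote processing takes minutes per image anyway, a faster downstream plan buys little.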
Sidebar: the question for next month, for Apple users, is M4 Mac Mini or M2 Max Mac Studio if hardware is the priority, but I need to answer the first question before going down that rabbit hole.
Following this question, as I may or may not upgrade to a Mac Mini M4 from an M2. I am guessing that some improvement in processing speed would be noticeable, but would it be significant enough to be worth the money spent on what is essentially a hobby?
Second question: in a cloud-processing scenario, is the target video file uploaded completely to the cloud before processing starts on the cloud service? If so, and the output file is downloaded afterwards, there isn't a bunch of file I/O going on over the internet connection during processing, and one's connection speed has less overall impact. Is this how it works?
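The "batch" flow described above can be sketched in code. This is purely illustrative: the `upload`, `status`, and `download` functions below are hypothetical stubs, not real Topaz endpoints; they only show the shape of an upload-then-process-then-download workflow, where the connection matters only at the start and end.

```python
import time

# Hypothetical stubs standing in for a cloud service's API; not real endpoints.
_jobs = {}

def upload(path: str) -> str:
    """Step 1: the whole file is uploaded before any processing starts
    (this leg is bounded by upstream speed)."""
    job_id = f"job-{len(_jobs)}"
    _jobs[job_id] = {"path": path, "polls": 0}
    return job_id

def status(job_id: str) -> str:
    """Step 2: remote compute; the local machine just polls while the
    server does the work, so local GPU/CPU sit idle."""
    _jobs[job_id]["polls"] += 1
    return "done" if _jobs[job_id]["polls"] >= 2 else "running"

def download(job_id: str) -> str:
    """Step 3: one download of the finished output
    (bounded by downstream speed)."""
    return _jobs[job_id]["path"] + ".enhanced"

def cloud_batch_process(path: str) -> str:
    job_id = upload(path)
    while status(job_id) != "done":
        time.sleep(0.01)  # poll interval, shortened for the sketch
    return download(job_id)

print(cloud_batch_process("clip.mp4"))  # -> clip.mp4.enhanced
```

If a service really works this way, there is no continuous file I/O over the internet during processing, which matches the intuition in the question: connection speed affects only the bookend transfers, not the compute phase.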
Main Thesis and Opportunity
End-users of Topaz software need tailored, actionable recommendations to make the smartest investment in image processing performance.
I fully expect the minimum or recommended hardware configuration to run all future models of TPAI to increase in the years to come. Today's "8 GB VRAM" might become tomorrow's "12 GB". And that's progress, because it comes with new, more capable models. If we don't want the new model and don't want to upgrade the GPU, that choice is entirely within our control.