Cloud solution to run Video AI?

  1. Do you have any recommended solutions for running Video AI without buying the hardware?
  2. Does it work inside a VM?
  3. What about Google TPUs?

That’s a great question! One of the biggest challenges is that, at least according to the system requirements page, there doesn’t appear to be any Linux support. That said, I wonder if it could work under WINE on Linux.

I think running it in a VM would not make much sense if the VM is on your local machine at home, since you need the hardware anyway.

I think it’s painful waiting for a 40GB file to transfer from one computer to another on a 1 gigabit network.
Now try that on a 200 megabit internet upload connection. Chances are, even if you have faster internet, the place you are uploading to won’t allow for the full speed.
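
To put rough numbers on that (just back-of-the-envelope math using the link speeds mentioned above; real throughput will be lower once overhead and the receiving end’s limits kick in):

```python
# Rough transfer-time estimate for a 40 GB file (back-of-the-envelope only).
file_gigabits = 40 * 8  # 40 GB is roughly 320 gigabits

for label, gbps in [("1 Gbit/s LAN", 1.0), ("200 Mbit/s upload", 0.2)]:
    minutes = file_gigabits / gbps / 60
    print(f"{label}: ~{minutes:.0f} minutes")

# Prints roughly:
#   1 Gbit/s LAN: ~5 minutes
#   200 Mbit/s upload: ~27 minutes
```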

Would it be great to rent hardware through the cloud? Yes. Would you be able to get around the slow file transfers? Probably not. I’d say it’s worth buying the hardware just to be able to handle files fast enough to see if you’re on the right track.

Another issue mentioned not long ago on this forum: you have to have a monitor plugged into the video card for TVAI to work. I don’t remember the technical reasons, but I do know that will never be an option for cloud computing. That should be resolved, though, if they ever get the Linux version made.

Your comments about file transfer are definitely true and worth consideration in any workflow of this kind. I will say though that I’m able to run TPAI and TVAI on a Mac Mini and a Mac Pro that do not have monitors plugged into them. So perhaps it is only some machines that this is a limitation on.

It might be specific to Nvidia GPUs.

I know there are cloud VM services that will run Windows instances. I’m not sure what kinds of GPU options they have available. There is a chance that one of them would work right now, but it might take a lot of work to find out which one does.

I have an Nvidia GPU (an RTX 3070) that’s not plugged into a monitor, and TVAI works perfectly.

Ah this post is what I was thinking of when I said that.

Has anyone tried this? I have two decent machines processing videos simultaneously, and I have to think I could have uploaded the files, processed them on large GPU instances, and downloaded them in half the time, albeit with some additional cost for the instances. Surely there has to be a better way for this type of video enhancement processing.

You could process the first half of the video on the first machine and the second half on the second machine: export the video to an image sequence, run each half through separately, and then assemble the frames back into a video file once both machines have finished their work!
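
A minimal sketch of that split-and-rejoin idea, driving ffmpeg from Python (the file names, frame rate, and codec settings are placeholders, and the actual TVAI pass on each half is left as a manual step):

```python
# Minimal sketch of the split-and-rejoin idea, driving ffmpeg from Python.
# File names, frame rate, and codec settings are placeholders; each machine
# runs TVAI on its own half of the frames before the final reassembly.
import os
import subprocess

SRC = "input.mp4"  # placeholder source clip
FPS = "23.976"     # must match the source frame rate

# 1. Export the source to a numbered image sequence (lossless PNGs here).
os.makedirs("frames", exist_ok=True)
subprocess.run(["ffmpeg", "-i", SRC, "frames/%08d.png"], check=True)

# 2. Send the first half of the frames to machine A and the second half to
#    machine B, run TVAI on each half, then gather the enhanced frames back
#    into one "enhanced/" directory, keeping their original numbering.

# 3. Reassemble the enhanced frames into a video, muxing the audio from SRC.
subprocess.run([
    "ffmpeg", "-framerate", FPS, "-i", "enhanced/%08d.png",
    "-i", SRC, "-map", "0:v", "-map", "1:a?",
    "-c:v", "libx264", "-crf", "16", "-pix_fmt", "yuv420p",
    "-c:a", "copy", "output.mp4",
], check=True)
```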

What Imo wrote is the most elegant solution, but it sort of assumes you already have a rather sophisticated workflow and custom tools to begin with. Likely the case already for big media companies.

However, to answer your question: yes. I did a test a while back spinning up a few Windows VMs (instances) on AWS with TVAI installed, plus a simple Python script to transfer the files to the machines and run the TVAI CLI on them. It then polled for file completion, downloaded the results once they were done, and finally killed the VMs so the “cloud tax” didn’t bankrupt me.

I’d say this is the easiest and cheapest cloud option right now for running TVAI on rented hardware. If you just want to run a couple of clips now and then I think this method is sufficient.

From this basic setup, you can make it a lot more elaborate if you want, but I’m trying to keep this answer as simple as possible.
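
For the curious, the loop in skeleton form might look roughly like this (the AMI, instance type, key pair, and the transfer/polling helpers are all placeholders, not the exact script from my test):

```python
# Bare-bones skeleton of the "rent, run, retrieve, terminate" loop using boto3.
# The AMI, instance type, key pair, and the transfer/TVAI-CLI/polling steps are
# placeholders, not the exact script from my test.
import time

import boto3

ec2 = boto3.resource("ec2")


def output_ready(instance) -> bool:
    """Placeholder: e.g. check whether the expected output file exists in S3."""
    return False


def download_results(instance) -> None:
    """Placeholder: e.g. copy the enhanced files back from S3 or the instance."""


# 1. Launch a Windows GPU instance from an image with TVAI already installed.
instance = ec2.create_instances(
    ImageId="ami-xxxxxxxx",      # placeholder: a prepared Windows + TVAI AMI
    InstanceType="g4dn.xlarge",  # placeholder GPU instance type
    KeyName="my-key",            # placeholder key pair
    MinCount=1,
    MaxCount=1,
)[0]
instance.wait_until_running()

try:
    # 2. Transfer the source files and kick off the TVAI CLI on the instance
    #    (e.g. via a shared S3 bucket, SSM, or WinRM -- omitted here).

    # 3. Poll until the enhanced output shows up, then pull it down.
    while not output_ready(instance):
        time.sleep(60)
    download_results(instance)
finally:
    # 4. Kill the VM so the "cloud tax" stops accruing, even if something failed.
    instance.terminate()
```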

As to the discussion about the slow transfer speeds over the internet for the humongous file sizes media (video) work entails, I 100% agree. In a former life, when I worked at one of the major cloud providers, the story was basically: “Do all your media work in the cloud to begin with, from content creation and editing to distribution. That way transfer speeds are a non-issue.” While this stance is true, it also plays into the cloud provider’s business objective :wink: And the large media firms I worked with all had multi-gigabit links (e.g. 20 Gbit to 2x100 Gbit fiber lines from their office to the cloud provider’s backbone), so they could continue doing editing locally and just save to something like S3 through a mounted disk volume, or via some in-house transfer job.

Nowadays, being just a customer of the providers, having to “pay for cloud” and stuck with a 1 GbE link, I just submit my jobs in the background while I continue to work on other things. A few hours later I have the TVAI-enhanced files on my local machine again, and my CPU and GPU have been free for other work the whole time. Works pretty well.

I’m having some success using a Windows VM on Paperspace. The best/most efficient setup thus far (if you’re only doing Topaz and trying to max out the GPUs) seems to be 4x A4000 GPUs. I’m mostly doing Enhancement with Proteus, and thus far have been CPU limited. Initially I ran it with 4x A6000 GPUs, but they rarely got above 40% utilization while the CPU was often maxed. FFmpeg seems to be the culprit, so I’m going to do more investigation to see whether it’s not fully utilizing GPU acceleration, or whether I can shift the balance. I might also experiment with losslessly encoded video, as I believe de/encoding is taking up much of that processing.
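
One way to sanity-check where the bottleneck sits is to log CPU and GPU utilization side by side during a run. A minimal sketch (assuming nvidia-smi is on the PATH and psutil is installed; the sampling interval is arbitrary):

```python
# Minimal sketch: sample GPU and CPU utilization while a TVAI job runs, to see
# whether the GPUs or the CPU (e.g. FFmpeg's de/encoding) is the bottleneck.
# Assumes nvidia-smi is on PATH and psutil is installed; interval is arbitrary.
import subprocess
import time

import psutil

for _ in range(60):  # sample once a second for a minute
    gpus = subprocess.run(
        ["nvidia-smi", "--query-gpu=utilization.gpu",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    ).stdout.split()
    cpu = psutil.cpu_percent(interval=None)
    print(f"CPU {cpu:5.1f}% | GPU " + " ".join(f"{g}%" for g in gpus))
    time.sleep(1)
```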

Most of the cloud GPU providers I’ve seen provide instructions on their websites for configuring a VM to use their services. And quite a few of them will give you a couple of hours’ worth of free service, which I presume is both a marketing promotion and a provision for you to configure and test your setup on them.

What’s important to remember is that, as far as your VM is concerned, it will not “know” that the cloud GPU is not actual hardware on your local computer. Also, TVAI has to have models that support whatever GPU you configure, so you can cross the idea of running your enhancements on accelerators like Nvidia HGX and AMD Instinct off your bucket list.

Is there a list of which GPUs are supported? Are any datacenter GPUs supported, or is it just consumer/professional cards? We’re trying to use an Amazon EC2 instance, but it doesn’t appear to be working with the GPUs we’re provisioning (Nvidia Tesla V100, Tesla A100, Tesla M60, T4, A10G). I feel like I read somewhere that they don’t support any datacenter GPUs, but I haven’t been able to find where I read that. Paperspace is a decent solution but a bit expensive and harder to work with programmatically. AWS/Azure would be super clutch.

There used to be a list of recommended minimums, but I don’t think it’s been updated in ages. As far as I can see, it’s Nvidia and AMD graphics cards, plus Intel Arc (though not supported very well). I can’t remember ever seeing any mention of datacenter or other accelerator cards.