I upscaled a video on my MacBook Pro using the same settings that I used on my Windows machine, and it created a file that is 5 times as large. When I zoomed in, I noticed the quality isn’t as good either. This was unexpected. I thought it would use the same algorithms and produce the same video file.
The Macbook is using the M2 chip, and my desktop has an Nvidia GeForce RTX 3080. I set the program to use 4 cores on both of them.
Hey, Tony Tiger.
I just purchased my MacBook Pro 16” and successfully upgraded/enhanced my first project (15 min long) using the new version and ProRes 4444 XQ. However, the output file size was unbelievably huge at 450 GB. With the previous version, most of my completed projects (10–15 min long, using Artemis and H264) are only about 3–5 GB.
Generally speaking for AI stuff, yes. The power of the hardware influences the final results. The best way to check this is with Stable Diffusion: if you have two machines with different power, try to generate the same image and you will see that the less powerful machine produces a worse result. The same can happen here.
This is expected. ProRes 4444 XQ takes up a lot of space because it doesn’t use temporal compression (every frame is encoded independently, like an intra-only codec). The lack of temporal compression means ProRes can’t be nearly as efficient at compressing video as H264.
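You can sanity-check the size difference with simple bitrate arithmetic. The numbers below are rough codec target bitrates (my assumption, not measured from your file; ProRes 4444 XQ targets roughly 2 Gbps at UHD per Apple’s published figures, while a typical H264 export might sit around 30 Mbps):

```python
# Rough file-size estimate for a roughly constant-bitrate codec.
# Bitrates here are approximate targets, not exact values for any project.
def file_size_gb(bitrate_mbps: float, minutes: float) -> float:
    """Size in GB = bits/sec * seconds / 8 bits-per-byte / 1e9."""
    return bitrate_mbps * 1e6 * minutes * 60 / 8 / 1e9

# ProRes 4444 XQ at UHD (~2090 Mbps) vs a typical H264 export (~30 Mbps),
# both for a 15-minute project:
print(round(file_size_gb(2090, 15)))   # hundreds of GB
print(round(file_size_gb(30, 15), 1))  # a few GB
```

So a few-hundred-GB ProRes 4444 XQ master from a 15-minute upscale is in the expected ballpark, not a bug.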
With AI stuff like TVAI or Stable Diffusion, as long as all the parameters are the same, the only difference between a slow device and a fast device should be speed. The output should be the same.
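A tiny illustration of that point (not TVAI’s actual code, just the general principle): when every parameter, including the random seed, is fixed, the computation is deterministic, so a slow machine and a fast machine produce bit-identical results.

```python
import numpy as np

# Same seed, same parameters -> same output, regardless of how fast
# the machine that runs it is.
out_machine_a = np.random.default_rng(42).standard_normal(4)
out_machine_b = np.random.default_rng(42).standard_normal(4)

print(np.array_equal(out_machine_a, out_machine_b))  # True
```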
Once you start changing the backend that runs the AI, or enable optimizations, that’s when differences can start to appear.
For example, some AI applications will enable the use of Tensor cores on RTX GPUs. This typically results in faster processing, but Tensor cores usually operate at a different (reduced) precision than standard FP32 processing, so slightly different results can appear.
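Here’s a quick sketch of why a reduced-precision path changes the numbers (using NumPy’s float16 as a stand-in for a lower-precision hardware path; real Tensor core behavior differs, this is just the principle):

```python
import numpy as np

# The same matrix product computed in float32 vs float16.
# A lower-precision path rounds intermediate values differently,
# so the two results disagree slightly -- the kind of drift that
# accumulates through a deep network and alters the final frame.
rng = np.random.default_rng(0)
a = rng.standard_normal((256, 256)).astype(np.float32)
b = rng.standard_normal((256, 256)).astype(np.float32)

full_precision = a @ b
half_precision = (a.astype(np.float16) @ b.astype(np.float16)).astype(np.float32)

# Maximum elementwise difference is small but nonzero.
print(np.abs(full_precision - half_precision).max())
```

Each individual difference is tiny, but a video model applies thousands of such operations per frame, so the divergence can become visible.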