Video AI 3.0.0.40a

  1. It should use 2x, not 1x; this looks like a bug and will be fixed in the next version.
  2. We use the scale filter because it is available on all platforms and has no licensing issues. CUDA-based scaling, as you pointed out, can also run out of VRAM at larger resolutions.
  3. The current priority is stability and feature parity with 2.6. Unless performance is unacceptable, as with the GUI right now, all other GPU usage and performance issues will be addressed in the near future.
  4. Unfortunately, the models are RNN based, so the output from the previous frame is required to process the next frame. However, if you chain two filters together, such as upscale and fps conversion, they should run in parallel.
  5. The threads setting does not matter when the device option is used; if device is not set, then vram and threads are both set automatically as well.
  6. Unfortunately, due to licensing issues we cannot release an ffmpeg build compiled with libx264/libx265. @ibobbyts already compiles his own version for Mac. This is why we provide the required libraries to build the backend from source; maybe someone can do it for Windows and share it.
  7. The AI models themselves support only 1x, 2x, and 4x upscaling. The only difference is that in 3.0 you get a peek under the curtain; 2.6 does the same thing directly in the code instead of exposing it as a filter.
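To illustrate point 2, this is roughly what a plain scale-filter invocation looks like in stock ffmpeg; the file names and target resolution here are placeholders, not values from the app:

```shell
# Software scaling with the scale filter: portable across platforms,
# no GPU licensing issues, and no VRAM pressure at large resolutions.
# "input.mp4" / "output.mp4" are placeholder names.
ffmpeg -i input.mp4 -vf "scale=3840:2160:flags=lanczos" -c:a copy output.mp4
```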
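To illustrate the chaining in point 4 with stock ffmpeg filters (not the AI filters themselves; file names are placeholders): two stages in one filtergraph form a pipeline, so while one frame sits in the second stage, the next frame can already be processed by the first:

```shell
# Stage 1: 2x upscale; stage 2: motion-interpolated conversion to 60 fps.
# The stages run as a pipeline across successive frames.
ffmpeg -i input.mp4 -vf "scale=iw*2:ih*2,minterpolate=fps=60" output.mp4
```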
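For anyone attempting the build mentioned in point 6, the relevant flags on a stock ffmpeg source tree are roughly as follows; this is a sketch for a GPL build on Linux or Mac, and the job-count command may differ on your platform:

```shell
# Enable the GPL x264/x265 encoders; the resulting binary is GPL-licensed
# and cannot be redistributed alongside proprietary components.
./configure --enable-gpl --enable-libx264 --enable-libx265
make -j"$(nproc)"
```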