I am trying to improve the processing speed of the Proteus model. I am driving it from the command line with ffmpeg and a DOS batch script that reads a sequence of images from an Input folder and writes the enhanced files to an Output folder. To get the syntax right, I use the command line generated by the GUI in my batch job. The images are TIFF at 1440 x 1080, so it is a very heavy task (circa 28,000 frames). Right now the algorithm is enhancing at well under 1 frame per second.
From time to time the batch script also crashes with out-of-memory errors, but it is designed to pick up where it last left off and thus eventually cover all the images. (Doing this from the Topaz GUI would be impossible, by the way.)
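A resume-after-crash loop like the one described can be sketched as follows. This is a minimal bash sketch, not the actual script: the Input/Output folder names come from the post, while a plain file copy stands in for the real Topaz ffmpeg invocation.

```shell
#!/bin/bash
# Minimal sketch of a resume-capable batch loop. A frame counts as done
# when its output file already exists, so a run interrupted by an
# out-of-memory crash can simply be restarted.

IN=Input
OUT=Output
mkdir -p "$IN" "$OUT"

# Create two sample frames so the sketch is runnable as-is.
touch "$IN/frame0001.tiff" "$IN/frame0002.tiff"
# Simulate a previous run that got through the first frame.
touch "$OUT/frame0001.tiff"

for f in "$IN"/*.tiff; do
  base=$(basename "$f")
  # Resume logic: skip frames that were already enhanced.
  [ -e "$OUT/$base" ] && continue
  # Placeholder for the Topaz ffmpeg command copied from the GUI;
  # here we just copy the file so the sketch runs anywhere.
  cp "$f" "$OUT/$base"
done
```

The existence check is what makes restarts cheap: on each rerun, already-finished frames are skipped in a fraction of a second.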
But now I am ambitious and want to go one step further: I want to force the Topaz AI model to use TensorRT. Initially I imagined that simply rebuilding ffmpeg with --enable-tensorrt would be enough, but then I realized that only Topaz's own ffmpeg build works, because it enables the TVAI parameters, which require Topaz's proprietary code and libraries.
Below are my CUDA, TensorRT, and cuDNN environment settings. Can somebody help me a bit?
CUDA
export CUDA_HOME=/c/PROGRA~1/NVIDIA~2/CUDA/v11.7
export CUDA_INCLUDE_DIR=/c/PROGRA~1/NVIDIA~2/CUDA/v11.7/include
export CUDA_LIBRARY=/c/PROGRA~1/NVIDIA~2/CUDA/v11.7/lib/x64
export NV_CODEC_INCLUDE=/c/nv-codec-headers/include/ffnvcodec
TENSOR RT
export TENSORRT_HOME="/c/Program Files/NVIDIA GPU Computing Toolkit/TensorRT-8.4.1.5"
export TENSORRT_INCLUDE_DIR="/c/Program Files/NVIDIA GPU Computing Toolkit/TensorRT-8.4.1.5/include"
export TENSORRT_LIBRARY="/c/Program Files/NVIDIA GPU Computing Toolkit/TensorRT-8.4.1.5/lib"
CUDNN
export CUDNN_HOME=/c/PROGRA~1/NVIDIA/CUDNN/v9.6
export CUDNN_BIN_DIR=/c/PROGRA~1/NVIDIA/CUDNN/v9.6/bin/11.8
export CUDNN_INCLUDE_DIR=/c/PROGRA~1/NVIDIA/CUDNN/v9.6/include/11.8
export CUDNN_LIBRARY=/c/PROGRA~1/NVIDIA/CUDNN/v9.6/lib/11.8/x64
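Before attempting any build against these settings, it may be worth sanity-checking that every path actually resolves, since 8.3 short names (PROGRA~1, NVIDIA~2) and the mixed CUDA v11.7 / cuDNN 11.8 subdirectories are easy to get wrong. A small helper sketch (run after sourcing the exports above):

```shell
# Sketch: report which toolkit directories actually exist on disk.
# Any MISSING line points at a typo or a wrong short (8.3) name.
check_path() {
  if [ -d "$1" ]; then
    echo "OK      $1"
  else
    echo "MISSING $1"
  fi
}

for p in "$CUDA_HOME" "$CUDA_INCLUDE_DIR" "$CUDA_LIBRARY" \
         "$TENSORRT_HOME" "$TENSORRT_INCLUDE_DIR" "$TENSORRT_LIBRARY" \
         "$CUDNN_HOME" "$CUDNN_INCLUDE_DIR" "$CUDNN_LIBRARY"; do
  check_path "$p"
done
```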