Video upscale and frame interpolation for circular fisheye footage

Hello

Not sure if this belongs here. I shoot a lot of stereo 3D in VR-180 SBS and 360 TB. That means I have footage from two or more synchronized cameras producing circular fisheye video.

Since high resolution and high frame rates are important for VR when watching in an HMD (like the Meta Quest, HTC Vive, and so on), I use Video AI for upscaling and frame interpolation of the footage.

I can clearly see an improvement, so it definitely helps.

I am not sure whether the software “de-fishes” the video before enhancing it. I tend to assume it does not account for lens distortion and other lens-specific “faults” unique to each lens (the kind captured in the lens profiles you find in camera raw plugins) before it enhances the footage.

I further assume the final result would improve if the software knew, or could accept input for, the lens characteristics (something like “equivalent to an 8 mm full-frame fisheye”), so it could unwrap the footage before running the enhancement AI and then wrap it back again.
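For what it's worth, the unwrap step I have in mind can be sketched in a few lines. This is a minimal, hypothetical example in pure NumPy, assuming an ideal 180° equidistant fisheye of radius R centered at (cx, cy) — a real lens would need a measured profile. It builds the per-pixel sampling map from a VR-180 equirectangular output grid back into the fisheye source image:

```python
import numpy as np

def fisheye_to_equirect_map(out_w, out_h, R, cx, cy):
    """Sampling map from an ideal 180-deg equidistant circular fisheye
    (radius R, center cx/cy) to a VR-180 equirectangular grid."""
    # Output grid: longitude/latitude, each spanning [-pi/2, pi/2].
    lon = (np.arange(out_w) + 0.5) / out_w * np.pi - np.pi / 2
    lat = (np.arange(out_h) + 0.5) / out_h * np.pi - np.pi / 2
    lam, phi = np.meshgrid(lon, lat)
    # Unit view vector per output pixel (z along the optical axis).
    x = np.cos(phi) * np.sin(lam)
    y = np.sin(phi)
    z = np.cos(phi) * np.cos(lam)
    theta = np.arccos(np.clip(z, -1.0, 1.0))  # angle off the optical axis
    alpha = np.arctan2(y, x)                  # azimuth in the image plane
    r = R * theta / (np.pi / 2)               # equidistant model: r = f * theta
    u = cx + r * np.cos(alpha)                # source x in the fisheye frame
    v = cy + r * np.sin(alpha)                # source y in the fisheye frame
    return u, v
```

Resampling the fisheye frame with this (u, v) map (bilinear interpolation, say) gives the unwrapped image; inverting the same geometry wraps the enhanced result back.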

Is that already implemented in the software, or is this something that could be done in the future?

I know some people are already upscaling their VR footage and increasing the frame rate to enhance the quality of VR videos, so I definitely think this would be a nice feature for some to have.

While Video AI can process VR and 3D footage, it’s not specifically trained on equirectangular or lens-distorted inputs, so it doesn’t “de-fish” footage or use lens profiles. This is something that could be considered for future updates. In the meantime, correcting lens distortion before processing may improve your results. Thanks for sharing your experience!
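As one possible pre-processing route (my suggestion, not an official workflow), FFmpeg's v360 filter can unwrap a 180° circular fisheye to equirectangular before enhancement and wrap it back afterwards. Filenames here are placeholders, and the FOV values should match the actual lens:

```shell
# Unwrap a 180-degree circular fisheye to equirectangular before enhancement.
ffmpeg -i fisheye_in.mp4 \
  -vf "v360=input=fisheye:output=equirect:ih_fov=180:iv_fov=180" \
  equirect_out.mp4

# After enhancement, wrap back to fisheye for VR playback.
ffmpeg -i equirect_enhanced.mp4 \
  -vf "v360=input=equirect:output=fisheye:h_fov=180:v_fov=180" \
  fisheye_out.mp4
```

Note that the unwrap/rewrap round trip resamples the frame twice, so it trades some sharpness for geometric correctness.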