I’m keen to hear your thoughts. I work at a VR platform and I’m curious whether Topaz Video AI can improve the look of VR content for our users. I’ve connected with someone who has done some short tests, but I’d like to hear from people who have explored this kind of enhancement in detail.
If there’s no response, perhaps run a few tests yourself on video representative of your platform’s content. The cost is pretty negligible.
I have run Proteus on 3D SBS content and the results were fantastic. I haven’t tried VR yet.
I’ve upscaled hundreds of hours of VR footage with Video AI, with very good results. Artemis gave the best results, and now Iris does, because of how it handles faces.
I upscaled most footage from 4096x2880 (Vuze VR camera) to 8192x4320 (the proven maximum the Meta Quest 2 and 3 can play), and some from 5760x2880 (Insta360 Evo) to 8192x4320. I used the FFmpeg batch converter and replaced its ffmpeg binary (together with the necessary files) in the process to unlock more options, such as -tune hq, -preset p7 (slowest), and -maxrate for NVENC.
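For anyone wanting to try this, the NVENC settings mentioned above roughly translate to an ffmpeg invocation like the sketch below. The file names and bitrate values are placeholders I’ve made up, not the poster’s exact settings, and an NVENC-capable GPU plus a recent ffmpeg build are assumed; the script only prints the command rather than running it.

```shell
#!/bin/sh
# Sketch of the hevc_nvenc settings described above.
# INPUT/OUTPUT names and the -maxrate/-bufsize caps are placeholders;
# -preset p7 is the slowest (highest-quality) NVENC preset.
INPUT="input_sbs.mp4"
OUTPUT="output_8192x4320.mp4"

CMD="ffmpeg -i $INPUT \
  -c:v hevc_nvenc -preset p7 -tune hq \
  -maxrate 120M -bufsize 240M \
  -c:a copy $OUTPUT"

# Print the command instead of executing it, since the source
# footage and GPU are assumed here, not provided.
echo "$CMD"
```

Capping the bitrate with -maxrate matters on standalone headsets: the Quest’s hardware decoder has a throughput ceiling, so an unconstrained 8K encode can stutter even when the resolution itself is playable.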
In some instances, details in the left and right eye differed a bit, but it didn’t bother me.
This was upscaled to 8K 60 fps, for example: Kutterfahrt von Norddeich zu den Seehundbänken mit Kapitän Heiner Tholen (“Cutter trip from Norddeich to the seal banks with Captain Heiner Tholen”)
That’s really interesting @vista.joel! I work for a VR video site and I’m looking into this at the moment for an article. Would it be OK to speak to you about it more? Not sure if I can DM on here, but let me know what’s best, thanks!