Feature Idea: Skip Processing on High- and Low-Quality Video Segments

Hello Topaz team, greetings to you all! I really love and appreciate Topaz Video AI and all the wonderful, hard work you are doing.
I was thinking of a feature that maybe you could implement in Topaz Video AI in a future update: let the AI analyze the video and decide which parts have really good quality and which parts have very low or extremely low quality. With a customizable upper and lower limit, the AI could then decide not to process the parts of the video that are either too good in quality (no need to enhance) or too bad in quality (unable to fix, because they were shot on a very poor camera or with poor photography technique).
As a result, we would get much faster processing times because of all the parts the AI decided not to process, and much smaller output file sizes for the same reason.
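The decision rule being asked for could be sketched roughly like this (a hypothetical illustration only, not anything Topaz has implemented; the per-segment quality score itself would have to come from some separate estimator):

```python
# Hypothetical sketch: pick which segments to enhance based on a
# per-segment quality score (0-100) and user-set thresholds.

def segments_to_process(scores, lower=20, upper=80):
    """Return indices of segments whose score falls between the
    lower and upper thresholds (inclusive).

    Segments below `lower` are treated as unfixable; segments above
    `upper` as already good enough. Both kinds are skipped."""
    return [i for i, s in enumerate(scores) if lower <= s <= upper]

# Example: a compilation with five segments of varying quality.
scores = [95, 40, 10, 60, 85]
print(segments_to_process(scores))  # -> [1, 3]
```

Only segments 1 and 3 would be run through the enhancement model; the rest would be copied through untouched, which is where the speed and file-size savings would come from.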
:slightly_smiling_face::slightly_smiling_face::heart::heart: I hope to hear from you soon concerning this feature.
Thank you

To a degree, this is exactly what Proteus, Iris, Nyx and Rhea are made to do. There’s no speed gain on high quality videos because they still get processed, but otherwise, this request has already been fulfilled.

If you want to understand my point, here is an example: I have a compilation video (family footage, or a collection of other types of videos put together in one video).
When you apply Proteus or any other model, you will immediately notice that some parts of the compilation are decent quality, and using Proteus or Artemis will make them look just perfect, while the other parts will get too many artifacts, get it? :smiley:
In a compilation of videos shot on different cameras with different quality levels, the parts don't respond in the same way to the settings in the model used. That is why I suggested a threshold for the AI to decide not to process these parts at all, either because it considers them unfixable or, vice versa, because they are already too good to need further enhancement. Of course, this threshold would be set by us users, and we could choose to enable or disable the feature.

We tried to get them to change the UI to something that would allow us to define models for multiple sections. Yes, that would be harder than having an AI that can correctly apply the best model and settings to each part of the video, but I explained the state of what it is currently able to do in my post.

What you’re asking for is not a new idea—they can make the claim that they have already implemented it. It’s just not nearly as useful as you need it to be.

In other words, the models should simply do everything perfectly. Easy, just do it :grin:

Seriously, I think Topaz Labs could take the positives from what they have and merge models into a better one. For example, the overlay trick that I and others use: two encodes blended together with a set opacity into one result. That could be made an integrated feature.
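The overlay trick mentioned above is essentially an opacity (alpha) blend of two encodes of the same frame. A minimal sketch, assuming pixels are plain RGB tuples rather than full video frames:

```python
# Hypothetical sketch of blending two encodes of the same frame with a
# fixed opacity: result = a * (1 - opacity) + b * opacity per channel.

def blend(pixel_a, pixel_b, opacity=0.5):
    """Blend two RGB pixels; opacity is the weight of pixel_b."""
    return tuple(round(a * (1 - opacity) + b * opacity)
                 for a, b in zip(pixel_a, pixel_b))

# Example: a pixel from encode A overlaid with encode B at 50% opacity.
print(blend((200, 100, 0), (100, 200, 50), opacity=0.5))  # -> (150, 150, 25)
```

A real implementation would apply this per pixel across every frame of the two encoded outputs, which is exactly what stacking two renders in an editor with reduced opacity does today.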
