Add an option to choose the stitching box size

Hello Topaz Labs team,

In my tests I discovered that this TVAI filter:

(screenshot of the filter settings)

… stabilizes the image better in 4K than in 1080p (so to stabilize properly I have to upscale first, then apply the filter to the upscaled video).

What seems to be happening is that in 4K your processing method divides each frame into more pieces before stitching the image back together, and this approach apparently makes the video more stable.

So, if possible, please add a way to select how many pieces each frame will be divided into during processing (so far, more pieces = more stable).
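To make the request concrete, here is a minimal sketch of what a user-selectable split/stitch grid could look like. Everything in it is an assumption (the `enhance()` stand-in, the overlap blending, the grid sizes), not Topaz's actual pipeline:

```python
import numpy as np

def enhance(tile: np.ndarray) -> np.ndarray:
    """Stand-in for the per-tile AI model (identity here; pure assumption)."""
    return tile

def process_tiled(frame: np.ndarray, cols: int, rows: int, overlap: int = 16) -> np.ndarray:
    """Split an H x W x C frame into a user-chosen cols x rows grid,
    enhance each tile, and stitch back with overlap averaging to soften seams."""
    h, w = frame.shape[:2]
    out = np.zeros(frame.shape, dtype=np.float64)
    weight = np.zeros((h, w, 1), dtype=np.float64)
    tile_w, tile_h = w // cols, h // rows
    for r in range(rows):
        for c in range(cols):
            # Tile bounds padded by `overlap` pixels, clamped to the frame;
            # the last row/column extends to the frame edge.
            x0 = max(c * tile_w - overlap, 0)
            y0 = max(r * tile_h - overlap, 0)
            x1 = w if c == cols - 1 else min((c + 1) * tile_w + overlap, w)
            y1 = h if r == rows - 1 else min((r + 1) * tile_h + overlap, h)
            out[y0:y1, x0:x1] += enhance(frame[y0:y1, x0:x1])
            weight[y0:y1, x0:x1] += 1.0
    # Average the overlapping regions so seams blend instead of hard-cutting.
    return (out / weight).astype(frame.dtype)

# e.g. the grids observed below: 3x2 (6 boxes) for 1080p vs. 7x4 (28) for 4K
frame = np.zeros((1080, 1920, 3), dtype=np.uint8)
out_6 = process_tiled(frame, cols=3, rows=2)
out_28 = process_tiled(frame, cols=7, rows=4)
```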

Note: how do I know your method works this way? Because when the AI fails to recognize camera motion flawlessly, the stitching areas become pronounced.

1080p (6 visible stitching areas)

4K (28)

8K (can’t count, but you get the idea)

I understand that you are working hard to create a magical tool that solves tons of video problems in one click, but not all filters work as intended, and since the software is mostly CPU-bound and current hardware takes too long to process before we can see the result, I would also like to suggest two things within this idea that would make our lives much easier and more productive.

  1. Allow changing the stitching box sizes, and not only to equal boxes: in areas where we need more stabilization/detail the boxes could be smaller, and bigger elsewhere (see the sketch after this list).

  2. It would also be faster if it could process one box at a time, like “nail here”: one entire box is generated start to finish while the rest stays blurred (or maybe the original). The user then selects the next box (near the first one) to be generated, checks whether the stitch is correct, and regenerates only the incorrect areas.
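For suggestion 1, one hypothetical way to choose variable box sizes automatically would be a quadtree-style subdivision that splits boxes only where the frame is detailed. A minimal sketch, assuming local pixel variance as the "needs more detail" signal (a motion estimate would be another candidate):

```python
import numpy as np

def split_adaptive(frame: np.ndarray, x: int, y: int, w: int, h: int,
                   detail_threshold: float = 200.0, min_size: int = 128) -> list:
    """Return (x, y, w, h) boxes, subdividing only where the region is detailed."""
    region = frame[y:y + h, x:x + w]
    if region.var() < detail_threshold or min(w, h) // 2 < min_size:
        return [(x, y, w, h)]            # flat or small enough: keep one big box
    hw, hh = w // 2, h // 2              # detailed: split into four smaller boxes
    boxes = []
    for (nx, ny, nw, nh) in [(x, y, hw, hh), (x + hw, y, w - hw, hh),
                             (x, y + hh, hw, h - hh), (x + hw, y + hh, w - hw, h - hh)]:
        boxes += split_adaptive(frame, nx, ny, nw, nh, detail_threshold, min_size)
    return boxes

frame = np.random.randint(0, 255, (1080, 1920, 3), dtype=np.uint8)
boxes = split_adaptive(frame, 0, 0, 1920, 1080)
print(len(boxes), "boxes of varying sizes")
```

The same box list would also give suggestion 2 its unit of work: process one box from the list at a time while leaving the rest untouched.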

This would not only allow your software to focus all available processing power on a single area, but would also reduce the hardware stress of dealing with an insane amount of files at the same time, and make our lives easier when quickly checking, retrying, and applying different filters and tests (especially because most of the time we only need a certain area to be perfect). Not to mention it would potentially reduce the minimum requirements to use your software, which means more customers and money for you.


Thanks.

It is still the case that stabilizing can create artifacts. When I know that the person filming was a professional, like the one who filmed my marriage 27 years ago, I do not use it, but I do use it when the person was an amateur. Even then I only use the “reduce jittery motions” checkbox, at 70 percent with 4 passes.

So it depends. I also use H.265, because H.265 does a higher-quality rewrite than H.264.

Hey Anne, thanks for the H.265 tip.

I think in my case it is more challenging for the AI to deal with the imperfections of the video, since the footage I’m trying to fix was filmed with a Galaxy S7 on a bright day, and at the time I didn’t know that I should have used a polarizing lens to avoid image wobbling due to reflections.

Topaz Video AI was able to fix it in a particular part of the video (the part I need the most); fun fact, it was this capability that made me buy the software. However, it only fixes it at higher resolutions, and from the imperfections generated by the camera motion blur in other parts of the video (in my case, more evident above 50 strength with all passes when “reduce jittery motions” is on), it became clear why. That is why the idea of allowing more boxes per frame at lower resolutions appeared.

I think with a video it will be easier to spot the issue I’m fixing, and the collateral issues that appeared after the fix:

I’ll probably create an “Ekaterina’s Palace” challenge to check the improvements between TVAI versions, because most of what could challenge the AI to fix is in this video. :rofl:

This is interesting. It would mean that, given that the number of frames in the source is the same, any video should be converted to 4K to ensure that a block in the frame gets the same amount of parsing time as a single-frame conversion… hence becoming less complex for the AI to process, with better results, only much larger files and longer processing than a 1:1 conversion where input is output.

I just did a conversion of an old Prince 1988 DVD with low pixelation, from old SD to HD; the HD took so much longer but looks far better…

I am now going to 4K to check again, then see what happens if I convert it back to HD; maybe that works better and ends up with a smaller file.

Thanks, I never thought about it this way, but yeah, the time and length of the file suggest it is apparently not just an upscale, but maybe first a split-up into blocks across multiple similar frames, and then the AI, which allows for more precise handling.


Exactly. If it helps: during my tests I noticed that it is better to upscale first, then drag and drop the upscaled version and apply the enhancement/stabilization on it, because when you apply them together with the upscale, the AI processes using the same number of “stitching boxes” as the original size.
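Assuming the grid really is derived from the input resolution with a roughly fixed tile size (a guess based on the seam counts above, not confirmed behavior), the difference between the two workflows is easy to see:

```python
import math

def grid(width: int, height: int, tile: int = 640) -> tuple:
    """Hypothetical tile grid for a given input resolution. A ~640 px tile
    reproduces the 6 boxes seen at 1080p and gets close to the 28 seen at 4K
    (24 here); the exact tile size is pure guesswork."""
    return math.ceil(width / tile), math.ceil(height / tile)

# One-pass (stabilize + upscale together): grid chosen from the 1080p input.
print(grid(1920, 1080))   # (3, 2) -> 6 boxes
# Two-pass (upscale first, then stabilize the 4K file): grid from the 4K input.
print(grid(3840, 2160))   # (6, 4) -> 24 boxes
```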

I believe the perfect world would be one where we could select the size, the number, and the placement of the “stitching boxes”; meaning, point the AI to where I want it to improve things instead of re-working the entire frame or video (it would also make the processing time way faster). It will probably be a challenging nightmare for the UI/UX team to make it coherent.