Best algorithm and settings for reducing pixelation at the centre of symmetric video FX

Hi Topaz Video,

I am upscaling high-quality 4K visual-FX videos based on quality photographic resources [EDIT: and computer-generated resources] with computer visual FX applied. (I am upscaling them to 6K so that I can use a trick of moving a 4K selection window within the 6K frame to break up the symmetry, then apply additional FX, layering, etc. for a final video that is 4K.)
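To make the selection-window trick concrete, here is a minimal NumPy sketch of cropping a slowly drifting 4K window out of a 6K frame. The 6K dimensions (5760x3240), the sinusoidal drift pattern, and the drift amplitude are my own illustrative assumptions, not the actual Resolume setup:

```python
import numpy as np

def crop_window(frame_6k, t, drift_px=64):
    """Crop a drifting 3840x2160 window from a 5760x3240 frame.

    The window centre drifts sinusoidally over time t (seconds) so the
    symmetry centre of the FX is no longer locked to the centre of the
    final 4K output. Drift amount and rates are illustrative only.
    """
    out_h, out_w = 2160, 3840
    h, w = frame_6k.shape[:2]
    # Start centred, then offset by a slow sinusoidal drift.
    y0 = (h - out_h) // 2 + int(drift_px * np.sin(0.2 * t))
    x0 = (w - out_w) // 2 + int(drift_px * np.cos(0.13 * t))
    # Clamp so the window always stays inside the 6K frame.
    y0 = max(0, min(y0, h - out_h))
    x0 = max(0, min(x0, w - out_w))
    return frame_6k[y0:y0 + out_h, x0:x0 + out_w]
```

Called once per frame with the frame's timestamp, this yields a 4K stream whose centre wanders around the kaleidoscope centre rather than sitting on it.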

Some of the highly symmetrical FX, for example the classic kaleidoscope-type FX, give rise to distinct and distracting pixelation in the centre: lines get “stretched out” into pixels with gaps. It is noticeable in stills, but more prominent when the video is driven with parametrics; you can see the pixel dots streaming into or out of the centre of the video, and there is aliasing.
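For what it's worth, the dots-with-gaps look is characteristic of thin features passing through a nearest-neighbour coordinate remap whose distortion grows toward the centre. A toy NumPy illustration follows; the twirl formula and strength value are my own invention for demonstration, not what Resolume/Wire actually computes:

```python
import numpy as np

def twirl_nearest(img, strength=6.0):
    """Inverse-map a twirl with nearest-neighbour sampling.

    The angular offset grows as the radius shrinks, so near the centre
    adjacent output pixels sample source pixels that are far apart --
    a one-pixel-wide line comes out as a string of dots with gaps.
    """
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    dy, dx = yy - cy, xx - cx
    r = np.hypot(dx, dy)
    theta = np.arctan2(dy, dx) + strength / (r + 1.0)  # more twist near the centre
    sx = np.clip(np.round(cx + r * np.cos(theta)), 0, w - 1).astype(int)
    sy = np.clip(np.round(cy + r * np.sin(theta)), 0, h - 1).astype(int)
    return img[sy, sx]

# A single vertical 1-px line becomes a spiral of isolated dots near the
# centre; bilinear sampling or supersampling would fill the gaps at the
# cost of softness, roughly the trade-off the upscaling models face too.
line = np.zeros((101, 101))
line[:, 50] = 1.0
twirled = twirl_nearest(line)
```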

I’ve attached a small portion of a still to show the typical pixelation [EDIT: This example is from a fully computer generated resource, but a similar issue arises when circular symmetry FX are applied to photographic resources].

I’d welcome any advice on which Topaz Video AI settings and algorithms I can use to help reduce such pixelation while I upscale. I do have lots of other video tools I can use, but I’d like to try to tackle the problem first with Topaz “upstream” while upscaling to 6K, to gain any benefit before further application of FX.

It would be helpful to know what models you have tried and what model produced the image you shared.

Thanks for the reply.

In the original posting I said ‘based on quality photographic resources with computer visual FX applied’. I am also doing that, but in the central-pixelation example I uploaded, the original 4K video was created from a “thick line art” style logo image generated using Wire, the patch-based (“node-based”) visual programming environment from Resolume, fed through (mostly custom or highly customised) Wire and Resolume FX. For some Topaz Video AI algorithms the source therefore appropriately counts as Computer Generated. In most cases the logo image had anti-aliasing applied to the thick lines before other FX processing.

(BTW Wire gives me the ability to do some tricks like adding Blur or PixelBlur to a central region and then overlaying it, which helps a bit but is not great; there is no AntiAlias effect in Wire yet. That’s a separate but related topic I’m discussing on the Resolume forums.)
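As a rough illustration of the blur-then-overlay idea, here is a NumPy sketch that confines a blur to the centre with a soft circular mask. The repeated-average blur is a crude stand-in for Wire's Blur effect, and the radius, feather, and pass counts are arbitrary assumptions:

```python
import numpy as np

def soften_centre(frame, radius=60, feather=40, passes=8):
    """Cross-fade a blurred copy over the centre of the frame.

    A soft radial mask (1 inside `radius`, fading to 0 over `feather`
    pixels) confines the blur to the problem region around the symmetry
    centre, leaving the rest of the frame untouched.
    """
    h, w = frame.shape[:2]
    # Crude blur: repeated 4-neighbour averaging. A real pipeline would
    # use a proper Gaussian blur; np.roll wraps at the frame edges, which
    # is harmless here because the mask is zero near the edges anyway.
    blurred = frame.astype(np.float64)
    for _ in range(passes):
        blurred = (blurred
                   + np.roll(blurred, 1, axis=0) + np.roll(blurred, -1, axis=0)
                   + np.roll(blurred, 1, axis=1) + np.roll(blurred, -1, axis=1)) / 5.0
    # Soft radial mask centred on the frame.
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - (h - 1) / 2.0, xx - (w - 1) / 2.0)
    mask = np.clip((radius + feather - r) / feather, 0.0, 1.0)[..., None]
    out = frame * (1.0 - mask) + blurred * mask
    return out.astype(frame.dtype)
```

The feathered mask is the important part: a hard-edged blur region would just trade the centre artifact for a visible circular seam.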

I am working through some comparative Topaz Video AI runs; the app helpfully records the algorithm used in the export filename, and I mostly also keep screenshots of the settings.

I’ve attached two examples of the same 620x620-pixel central region of the same frame after Topaz Video AI processing, the first with Artemis Denoise/Sharpen vs High Quality and the second with Gaia Upscale vs Computer Generated (CG). In this case I’d say Gaia gave a slightly better result for this purpose.

I’m holding off from playing with all the Proteus options until I’ve tried some of the other algorithms (which also helps me learn what each is for). Comparisons across the many Proteus options will take a lot of experimentation, so I’m hoping forum feedback here will help inform and guide my later Proteus experiments.

I appreciate this strategy can’t fully address the problem on its own, but my idea is to squeeze as much benefit as possible out of each Topaz Video AI run (since I’m upscaling anyway) before applying Resolume, Wire, or other processing.

Currently, on an M1 Max MacBook Pro, it takes about one to two hours to process one minute of 4K video to 6K, depending of course on the other settings and the algorithm chosen.

All up, I have about 30 videos to convert, so for now I’m going to do one run per night over the next weeks until I’ve settled on the best settings.

This project is important to me, but not urgent. It’s a very long-term project, so if I don’t reply promptly, please don’t interpret it as a lack of interest in forum feedback; it just means I’m busy with client work.

2023-01-12 01-28-01

2023-01-12 01-30-49