Topaz Video AI Alpha 5.3.2.1.a

Hi everyone,

Today we’re releasing a new alpha build for Video AI.

This update includes v4 of the Hyperion SDR-to-HDR model, along with a second version of RXL.


5.3.2.1.a.hyp.rxl


Hyperion Alpha 4

  • Color accuracy is vastly improved with or without exposure changes. This fixes the color shifts in Alpha #3 (and to a lesser extent Alpha #2), particularly in skin tones.

  • The adjust exposure slider has been tuned. It remains relative, allowing exposure to be raised or lowered by up to 1 stop. The new tooltip explains this more clearly than before.

  • The recover highlights slider has been removed. We found that it wasn’t as effective as we had hoped, and the results are better without that extra parameter.

  • Support for ProRes 4444 XQ on macOS has been enabled.
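
For context on the exposure slider above: one photographic stop corresponds to doubling or halving linear light. A minimal sketch of that arithmetic (the function name and the clamping are illustrative assumptions, not Topaz's actual implementation):

```python
def apply_stops(linear_value, stops):
    """Scale a linear light value by a relative exposure adjustment.

    One stop is a factor of 2: +1 stop doubles the value, -1 stop
    halves it. `stops` is clamped to mirror the slider's 1-stop range
    (an assumption for this sketch).
    """
    stops = max(-1.0, min(1.0, stops))
    return linear_value * (2.0 ** stops)

print(apply_stops(0.25, 1.0))   # +1 stop doubles: 0.5
print(apply_stops(0.25, -1.0))  # -1 stop halves: 0.125
```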


RXL Alpha 2

  • Fixed texture/pattern “flowing” artifacts

We also wanted to present some detail from our research team on why the model uses 4x upscaling for all renders:

For a model that is designed to produce detail or restore content (e.g. Iris, Proteus, Rhea), we prefer to start with 4x upscaling.

Compared to a 1x or 2x model, a 4x model must have stronger restoration capabilities, mostly because it is trained to generate 16x as many pixels, and it is generally larger in terms of parameter count.
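
The 16x figure follows from the scale factor applying to both dimensions; a quick sanity check of the pixel arithmetic:

```python
def upscaled_pixels(width, height, scale):
    """Pixel count after upscaling by `scale` in each dimension."""
    return (width * scale) * (height * scale)

base = 1280 * 720
print(upscaled_pixels(1280, 720, 4) / base)  # 16.0: a 4x model synthesizes 16x the pixels
print(upscaled_pixels(1280, 720, 2) / base)  # 4.0
```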

So, when we experiment with a new model architecture or a different training scheme, it is better to evaluate the new strategy with the 4x models. That gives us a benchmark for the level of detail/restoration we can achieve with the new strategy.

Once we have a satisfactory 4x model, the 1x/2x models generally follow a similar architecture and training recipe, but with slightly different objectives (such as higher fidelity, fewer artifacts, faster processing, etc.) than the 4x model.

On the other hand, for models that are not intended for restoration and expected to be applied to high quality/resolution input (e.g., Nyx, Hyperion, Themis), we generally start with 1x/2x.


Backend changes

  • Improved duplicate frame detection to handle noise better
  • Fixed an issue where a frame could be missing, or an extra frame added, at the end of an export when using enhancement
  • Upgraded to latest FFmpeg master
  • FFmpeg command line now supports multiple specific GPU selections (e.g. “device=1.3.4” to select the first, third and fourth GPU for processing)
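
On the duplicate-frame improvement above: a noise-tolerant duplicate check typically compares frames by mean absolute difference rather than exact equality, so sensor noise alone doesn't defeat the match. A minimal sketch under that assumption (the threshold and data layout are illustrative, not the actual backend):

```python
def mean_abs_diff(frame_a, frame_b):
    """Average per-pixel absolute difference between two equal-size
    grayscale frames, represented here as flat lists of 0-255 values."""
    return sum(abs(a - b) for a, b in zip(frame_a, frame_b)) / len(frame_a)

def is_duplicate(frame_a, frame_b, noise_threshold=2.0):
    """Treat frames as duplicates when their difference stays below an
    assumed noise floor, instead of requiring bit-exact equality."""
    return mean_abs_diff(frame_a, frame_b) < noise_threshold

clean = [100] * 8
noisy = [101, 99, 100, 102, 98, 100, 101, 99]   # same frame plus slight noise
moved = [100, 100, 140, 140, 100, 100, 60, 60]  # actual content change

print(is_duplicate(clean, noisy))  # True: difference is within the noise floor
print(is_duplicate(clean, moved))  # False: real motion exceeds it
```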

Known Issues

  • Exporting Hyperion results with H.265 Main 10 on macOS can create color shifts over time. We are not aware of any other encoder or setting with this issue.

As always, we welcome feedback (particularly on the Hyperion model) as we get closer to launch.

Does this model cover your use-case? Are there some controls you’d like that are currently missing? Is there a major issue that is stopping you from using this model?

11 Likes

Can I ask - what is the intended target source type for RXL?

SD / HD / FHD ? And what about quality level - LQ / MQ ?

Thanks

8 Likes

Suggestion #1 (not necessarily useful)

It would be cool if there was a more… verbose real-time status option that could be toggled during upscaling. It would show things like what model is running, what it’s doing to the image, how many frames it’s processed so far, etc.

It’s not necessary, I just think it would be cool.

Suggestion #2 (a useful one)
It would also be cool to get an option to stop processing BUT have it finalize the video right where it is and save it to the output directory the user picked, while still allowing a resume later.

Suggestion #3 IGNORE THIS ONE LOL

Lastly, it would be nice to be able to set the location where the video temp files are stored during processing, so if you (for example) have 3 very fast SSDs you can read from one, use another as the temp directory, and then use the third for the final output.

I don’t know if there would be any performance advantages (or disadvantages?) to doing it that way, but even if it would only save a couple of minutes per hour it would really add up for someone like me who is basically upscaling 24/7 for 3/4 of the month.

I have a 4090 (soon a 5090), a 7950X3D, and four 990 Pros to take advantage of as many PCIe lanes as possible on my ROG X670E Extreme. I don’t really use drives 3 or 4 often except for games, and my output is on a very large Seagate Barracuda HDD, so if I can speed it up even a little I’d definitely do it.

I’ve tested using just SSDs for read/output instead of reading from an SSD and writing to the HDD, but there was no performance difference in that scenario.

I’m usually going from 480p30 or 720p30 to 4K60 with Proteus/Aion (but sometimes I’ll use Chronos for the speed advantage even if quality is juuuuust slightly lower in fast motion), and I’ll also use Rhea, which really slows things down, so any gains I’m going to jump at lol.

Otherwise everything has been pretty fantastic, even if I’m not a big fan of the new carousel-style UI, but that’s only because at first I couldn’t find a couple of options; maybe I was just tired that day.

2 Likes

For the 3rd point you already kinda have this option: you set your temp directory to one fast SSD (in the app settings) and load your video files (video directory) from a different fast SSD.
But splitting the temp and video source across 2 drives might slow things down, as it will need to copy a (potentially large) file from one drive to another. Keeping everything on the same drive/directory avoids that: in the app settings you can bypass the temp folder so the temp file is written to the video’s source directory, and it just renames the file from _temp to the final name on completion (at least that is what I noticed on my rig).

But honestly, I don’t think it would make much of a difference, as the bottleneck is not your HDD/SSD but the CPU/GPU rendering. Your drive writes the data much quicker than your GPU/CPU can output the rendered video.

So the best drive performance would come from having the temp and video source in the same directory. That way you avoid a file copy between drives.

1 Like

Thanks. That’s what I figured but I thought I’d mention it anyway for funsies lol.

My drives are never being “taxed” even when my GPU/CPU is as loaded up as Topaz will make it. I generally output AV1 using the custom NVIDIA AV1 build of FFmpeg and it’s never let me down… I’m just a chaser of speed lol. Obviously.

Right now my temp and video source are the same drive, that’s what I found to be quickest the last time I did any comparison testing.

1 Like

Same here, I do the same.

1 Like

This, times 100. I’m a stickler against default settings in general, as I’m somewhat of a control freak when it comes to my PC/programs/games/etc…

But yeah, verbosity and minutiae-level tinkering for every setting is always welcomed!

2 Likes

A few immediate observations:

  1. This version of RXL is dramatically better than the first.
  2. RXL has a great amount of sensitivity to the parameter sliders available.
  3. RXL is phenomenal at deinterlacing. Results when using RXL alone are far better than Dione → RXL.
  4. RXL does a great job restoring faces.

4 Likes

Seeing some artifacting with this release of RXL, auto settings:

Look at the red uniforms: same export, but in the later frame, for some reason, line artifacts show up on the uniforms:

Looks good:

Artifacting:

Artifacting also present somewhat on their faces.

Also, the depth of field in this shot is still being ignored/overridden by the face restoration, making the background faces look like bobble-heads - same issue from original Rhea persists:

Other than that RXL does a nice job with textures and preserving details, but the artifacts need to be fixed and we need a slider to adjust depth of field for face restoration, if possible.

I can upload the source clip if needed for further investigation.

6 Likes

I can also reproduce this issue. In my testing, when something only has small movements, the quality starts off high, then slowly degrades, typically producing block or line artifacts like the ones you’ve shown.

2 Likes

@tony.topazlabs From what I remember, HDR10 is 10 stops and Dolby Vision is 14.

This is the kind of communication and explanation I (we, some of us, no one) expected: more detail, the course of development, and the reasons for your choices.

I really like this !

EDIT: The slider + text box still takes the #1 spot among the broken things, but I won’t complain if this alpha improves quality :grin:

2 Likes

The inability to preview the HDR video in the app is a show stopper for me. Being able to compare two videos side by side in the app is important for tuning settings, and we just can’t do that at the moment without external tools.

Ideally a video player upgrade (that includes HDR viewing support) should land before the Hyperion model is released to the public.

RXL still looks way too artificial for my taste, with too much added detail that doesn’t fit the rest of the scene (tested on various SD sources), especially for faces. Even when I turn the values down to a manual 0 at 4x upscaling.

3 Likes

When I select the custom resolution, I can’t find the dialog to set up my custom resolution anywhere unless I open the “Enhancements” menu.

The minimum application window width is set too small so that parts of the GUI are cut off. This issue has been reported before.

1 Like

It seems that “replace duplicate frames” doesn’t work anymore when the frame rate is set to the video’s actual frame rate and sensitivity is at 10 or 20 (and with it set to, for example, 60 fps, it barely works). I’ve tested it on that destroyed Dungeons & Dragons cartoon (29.97 fps). Either there’s something wrong with sensitivity, or the algorithm got a bit screwed. :slight_smile:

I also noticed that the Iris v2 model on auto parameters performs a bit underwhelmingly in terms of quality compared to, for example, VEAI 5.0.4 or even the last stable 5.3.2. The frames where the interpolation finally kicks in look like they’re not processed by the enhancement model at all, or barely touched.

But the amount of RAM does matter, and keep in mind that by default Windows 10/11 compresses memory even when it’s not necessary.

Look at the indexing part, even if you have SSDs:
How to speed up Windows 11 – Computerworld

I am talking about HDD/SSD (storage), not about volatile memory.
Storage is not the bottleneck in the rendering process flow.
For OS activities or other apps that write heavily to local storage (e.g. databases, file transfers, etc.), local storage would have an effect on speed.

But the Topaz process is more of a system-resource-hogging process, and local storage “waits” for the CPU/GPU data to be fed and written, not the other way around.

That’s about the exact same conclusion I came to.
I got really excited for a second. Then I realized that I had clicked on something that reset the enhancement model to Proteus.

1 Like

Wait, you got “replace duplicate frames” to work? On Apollo and Aion, it has simply never worked. If I remember right, with Chronos it adjusts the amount of blur added to each motion.