Topaz Video AI 5.2.1

Hello everyone,

Today we’re excited to release Video AI 5.2!

This release includes the new Rhea Enhancement model, Pro seat management, Frame Interpolation for After Effects, Alpha layer support, and many UI and backend fixes.

  1. Rhea Enhancement Model

Rhea represents a combination of Proteus and Iris. The model is intended to be more accurate in preserving fine details across a wider range of subjects, while also handling text in a less destructive way.

Rhea internally processes inputs at 4x scale and then downscales to your selected output resolution.

We’re very excited to see your results with Rhea. Here are some example frames from this model:

  2. Frame Interpolation for After Effects

Today we are also launching Frame Interpolation in Adobe After Effects. This plugin addition includes access to all Frame Interpolation models for up to 16x slow motion conversion.

For more information on using the After Effects Frame Interpolation plugin, see this post in the Video AI Plug-ins forum section.

  3. Alpha (transparency) layer support

Video AI can now copy alpha layers from inputs and merge them back with AI model output videos. Alpha layer copying is supported in the following codecs:

  • QuickTime Animation
  • TIFF
  • PNG
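Conceptually, an alpha round-trip like the one described above can be sketched with standalone ffmpeg commands. This is a hypothetical illustration only, not the app's actual pipeline; the filenames and the assumption that an enhanced color-only file (`enhanced.mov`) exists are both inventions for the example:

```shell
# 1. Extract the alpha plane from the source as its own grayscale video.
ffmpeg -i input.mov -vf alphaextract -c:v png alpha.mov

# 2. After enhancing the color channels separately (enhanced.mov is assumed
#    to exist), merge the original alpha back in, writing to an
#    alpha-capable codec (PNG in a MOV container here).
ffmpeg -i enhanced.mov -i alpha.mov -filter_complex alphamerge -c:v png output.mov
```

Because the alpha plane is carried through unchanged, only codecs that store a transparency channel (QuickTime Animation, TIFF, PNG) can hold the merged result.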


  4. Pro seat management

After a pre-launch period with several Pro teams around the world, Video AI Pro will be officially launching later this week. This new license tier offers seat management, commercial usage rights, and multi-GPU support for teams working with Video AI.

As part of this transition in licensing, we will be discontinuing the previous implementation of multi-GPU rendering for the standard license. Our team found that less than 2% of users made use of the multi-GPU setting, and further development will be focused on multi-GPU optimization for Pro teams.

If you are affected by this change and would like to discuss further options, please contact

Bug fixes & other improvements:

  • Preview set saving functionality restored.
  • UI lag greatly reduced, particularly during long operations.
  • Ensures colorspace correctness for input videos with a wider list of colorspace-related flags.
    – Uses the setparams filter instead of the -colorspace, ... flags to explicitly override the input metadata, ensuring that only the metadata tags are updated without changing the video data itself.
  • Timeline width now adjusts correctly between different video inputs.
  • Export preferences now save consistently.
  • Added pause/resume controls to right-click menu.
  • Fixed crop settings auto-populating after first digit.
  • Time remaining/elapsed toggle now available for both Previews and Exports.
  • Fixed “Open in Explorer/Finder” button for previews and exports.
  • Disabled automatic generation of TIFF sequences for Previews. These temporary files are no longer in use and the app now reads preview files in the selected Export format.
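The setparams change in the colorspace bullet above can be illustrated with a standalone ffmpeg command. The filenames and tag values are examples only, and this is a sketch of the general technique rather than the app's exact invocation:

```shell
# The source is actually BT.709 but carries wrong or missing colorspace tags.
# setparams overrides the metadata on the decoded frames so that downstream
# filters and the encoder interpret the input correctly; it changes only the
# frame metadata, never the pixel values themselves.
ffmpeg -i input.mp4 \
  -vf "setparams=colorspace=bt709:color_primaries=bt709:color_trc=bt709" \
  -c:v libx264 -c:a copy output.mp4
```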

Update 07/11: Video AI 5.2.1 is now released with a fix for videos stored on network shares sometimes not importing correctly.


So what will “further development” of multi-GPU optimization consist of? The change I see in this release already contradicts your earlier post, which said that multiple GPUs for single videos would continue to be supported in the standard version. This scenario is once again taking away an existing feature and further reducing the value of the existing license.

Will Pro receive greater attention to stabilization issues than standard?

Is there going to be an upgrade price vs. having to buy an entirely new license, and will there be a trial period so we can decide whether it’s worth it?


The Rhea model seems pretty good for a relatively good-quality source.
With low-resolution and compressed videos, it smooths out people too much, though it keeps more of their original shape than Iris does.

I really would like an updated version of Iris for low-resolution and compressed sources. Iris MQ is good for people, but not for everything else.

A more aggressive model, combined with Proteus for everything that is not a face or human, along with some kind of internal deflicker engine, would smooth out the frames.


Great update! Can’t wait to try out this new model. I hope it works well with low resolution videos. Upscaling from 720p to 1080p isn’t too exciting because a video at 720p already looks pretty good, but anything at 480p really needs the extra help.


I don’t think you’re really out anything. It sounds like Pro multi-GPU will be focused on fast rendering of a single clip or video, something that I have not seen TVAI able to do yet. Maybe 5 has changed dramatically in that regard and I’m wrong. I have not specifically tested that, but… My reasoning that nothing has changed is that, if they had found a way to use multi-GPU on a single clip, everyone who gets better speeds from processing multiple instances at the same time would have seen a big increase in solo run speeds.

The “All GPUs” option has been removed from the AI Processor selection in the 5.2 release build. Granted, it never worked well compared to splitting a video between separate instances, but it was only a couple of weeks ago that Tony said it would continue to be available in the standard version.

I cannot think of any way that multi-GPU processing in one system would improve things for people working in teams. Maybe if they figured out how to make one team member’s seat reach out and distribute processing to multiple GPUs in other team members’ systems…


I only want to preview for a selected amount of time (5 frames).
Do not automatically connect A, B, and C.
Don’t take any more of my time!


Any improvement with rendering performance under DaVinci Resolve? In addition, there is no easy way to do an A-B compare in Resolve … the only way I can make this work is to test on a very small clip in Resolve (I guess similar to Video AI preview mode) and do a “Render in Place” that really just generates another clip and replaces the existing clip in the timeline.

In addition, how are we supposed to use Video AI in a workflow that uses RAW (S-Log3) with a LUT and then goes through color grading? Every Video AI model I’ve used adjusts color and brightness levels. Ideally, when I’m working with S-Log3, I really only need Video AI to remove noise/grain from low-light footage.

My Sony FX6 does a really good job in low light, but sometimes I’m working with my Sony A7VR, which isn’t as good with low-light footage (I know it’s more of a stills camera, but it can do 4K 60 in reasonably good light) … and then I have the DJI drone, pocket, and action cameras, all 4K 60 as well but with very different levels of image quality.

Is there some method to get Video AI to just denoise and nothing else?

Cheers, Rob.


I beg to differ - there are plenty of 720p videos that are noisy, full of compression artifacts, chromatic aberration, the list goes on. 720p resolution does not mean it’s guaranteed to be good. 720p for me is pretty much the starting point resolution to upscale.

I don’t waste time with 360p sources etc. as I know it will always look like garbage.


I don’t have the Rhea model in my list of models. Anyone else?

“As part of this transition in licensing, we will be discontinuing the previous implementation of multi-GPU rendering for the standard license.”

In a previous announcement, it was assured that the existing multi-GPU support would remain. What is this other than a lie? :frowning:
I have used 2x 3080Ti so far and was very satisfied with almost double the performance.
This is how a company loses its reputation! :frowning:


Does using the multi GPU require the same GPU twice or can you use a 4080 with a 3090?

As in every version before, the “Show in Explorer” option just does nothing. I also mentioned something regarding the minimum window size. I am getting tired of reporting the same issues over and over only to find them still present in every release. :eyes:

I don’t think telling them over and over is going to do any good, because at this point the issue is likely specific to your pc. I don’t recall when the “show in explorer” option hasn’t worked correctly. I think your best option is to raise a support ticket to try and work out why it’s happening.

I wanted to test out this new model, but I cannot find it either. Back to 5.0.4 for me.

I don’t even have it in 5.2

Maybe they forgot to add the model to this version, or it won’t be added until another 2 or 3 versions down the road.

The constant bitrate failure when using H.265 Main 10 is still not fixed.


Is the final 5.2 slower than the last beta? In the beta I was processing 3 clips simultaneously at an average of 17 fps on each clip. Now I process only two clips at the same time, averaging 13 fps per clip, because processing three would be even slower.