Topaz Video AI v3.3.0

Hello Everyone!

A new release of Topaz Video AI is now available.

Released June 13th, 2023

Downloads: Windows | Mac

Changes from v3.2.9

  • Adds a new enhancement model, Iris, for face enhancement and/or improving low- to medium-quality progressive/interlaced videos
  • Adds Recover Original Detail option for Proteus, Iris, and Artemis models
  • Adds DPX support
  • Maps all audio streams with metadata to output
  • Fixes issue with estimate not working for certain image sequences
  • Misc bug fixes and improvements

Known Issues:

  • Videos with mismatched metadata and streams will display incorrect duration
  • Frame number preview length may shorten on app restart

Submit files here: Dropbox

Please take a look at the Video Roadmap Update for more context on what we’re focusing on right now. Thanks in advance for your feedback!

Enhancement model - Iris

  • Primary aim of the model is to improve human faces
  • This model is also designed to enhance low- to medium-quality video, particularly video with noise and compression artifacts. Note that the model tends to lose detail on high-quality videos
  • This model is trained to work on all three categories: progressive, interlaced, and interlaced progressive
  • Please look at some examples in the roadmap post

Recover Original Detail

  • The purpose of this slider is to retain detail from the original source when the model’s output is overly smoothed
  • It adds the source’s texture details to the model’s output to bring back the original detail
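For intuition, a common way this kind of control is implemented in image processing (a guess at the general idea, not Topaz’s actual algorithm) is high-frequency detail transfer: isolate the source’s texture by subtracting a blurred copy of it, then blend that texture into the model’s output, scaled by the slider:

```python
import numpy as np

def box_blur(img: np.ndarray, radius: int = 2) -> np.ndarray:
    """Simple box blur over a 2D (grayscale) frame, edge-padded."""
    n = 2 * radius + 1
    padded = np.pad(img, radius, mode="edge")
    out = np.zeros_like(img, dtype=np.float64)
    for dy in range(n):
        for dx in range(n):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (n * n)

def recover_detail(model_out: np.ndarray, source: np.ndarray, amount: float) -> np.ndarray:
    """Blend high-frequency source texture back in; `amount` plays the slider's role."""
    texture = source - box_blur(source)  # high-frequency detail of the source
    return np.clip(model_out + amount * texture, 0.0, 1.0)
```

At `amount = 0` the model output is untouched; higher values reintroduce more of the source’s grain and texture, which matches the behaviour the release notes describe.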

Very happy to see the new facial AI enhancement; it will be very helpful to me!


Congrats to the Topaz team on this release!

Recover Original Detail is a game changer - kudos to whoever came up with that innovative idea!

And a special thank you to ‘all y’all’ who worked so hard to resolve the NVENC bug we saw in the beta releases!

I can confirm that content encoded with NVENC is playing without issue on plain old vanilla VLC 3.0.18.

Well done!


This is what I’m talking about! Fantastic update! I knew you guys would figure it out. I can’t wait to try this!


The Iris model is so great!! I didn’t expect it

The previews are still buggy af. It sits somewhere between an inconvenience and an annoyance


Dang it - another killer update while I’m stuck at work. The example videos for the Iris model look incredible. Outstanding work, guys!


Is there a particular reason why I keep reporting problems on the Beta versions so they can be looked at, only to have to repeat myself ad nauseam because no one ever responds that they are looking into it? This release has the same Iris version that you cannot use properly due to the image constantly morphing, which I reported for the last two Beta versions and which was apparently ignored.

Happy for a wider array of people to test, but if you don’t follow feedback on the Beta thread, should I be making them as bug reports instead?


It’s probably a good idea to make it a proper bug report with full system details, as it might be a system specific issue.

I’ve not seen constant morphing, just occasional disocclusion artifacts that can affect some videos, but without examples it’s impossible to comment further. Can you point to a post that shows this?


My second-to-last post had screenshots and exact settings to reproduce the issues, so taking some of my posts and assuming I don’t do that is a bit rich. I follow up on the same bug reports without reposting the details so that not every post repeats the same information.

This is a zoomed-out screenshot (zoomed out because of how much I put in it) of a bug report from the 3.2.9 Beta:


Another follow up:

As for the morphing problem, I gave an image-slider link to show it in action, which has been viewed a few times, and expected that if it wasn’t understood, someone might ask me to clarify.

Last night, before seeing this release, I spent some time trying to put together a gif to show it, because it is one of those things that isn’t straightforward, and I still don’t know if this is the best showcase of it.

The below is 100 frames taken from 3.2.9 Beta:
3.2.9 Iris V1

When compared to the current version:
3.3 Iris V1

Note that I have not uploaded gifs to this site before, so if it doesn’t work I suppose I will find out shortly. Also, I compressed them, so the images won’t be quite as clear as the originals, which were too large to upload.

The only way I can explain it is to look at the left side of his shirt (his left, our right) and watch the pattern. It’s also visible on his hands and head, but is clearest on his shirt.

Putting aside normal noise, the pattern moves around like it’s an amorphous blob. Unfortunately, the compression on the gif makes it hard to see, but I was trying to show it moving.

The original image-slider link I posted with the report is here: Iris V1 Comparison - Imgsli
There I referenced his right (my left) ear, where you can see the image has changed between the two versions. That morphing, where the image is skewed, doesn’t happen in Proteus or in any version of Iris prior to the 3.3 Betas.

This is another example of it taken from the originals these gifs were made from:

His shirt changes shape, his vest moves, his nose changes shape. This has been happening since the last couple of Betas of Iris, and the main issue is that it changes on each and every frame, meaning that everything pulses slightly, like it’s alive. It is more noticeable in some scenes than others, but occurs through the entire set of 65k frames - it’s not specific to a single sequence, and it’s not limited to faces, so it isn’t explicitly tied to the model trying to repair faces.
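The per-frame “pulsing” described here can be put in numbers with a simple temporal-stability check (my own sketch, not a Topaz feature): the mean absolute difference between consecutive frames. On a near-static shot, a temporally stable enhancement should score close to the source, while a “morphing” output scores noticeably higher:

```python
import numpy as np

def temporal_instability(frames: np.ndarray) -> float:
    """Mean absolute difference between consecutive frames.

    `frames` has shape (num_frames, height, width) with values in [0, 1].
    Higher scores mean more frame-to-frame change, i.e. motion or flicker.
    """
    return float(np.mean(np.abs(np.diff(frames.astype(np.float64), axis=0))))
```

Running this on the same frame range of the source clip and of each Iris build would show objectively whether the newer build churns more per frame.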

My problem is that I do not know enough about the back end to explain what it is or how the model is suddenly doing it, only that it is happening and I was reporting on it.

EDIT: To clarify, this has happened on every setting of Iris I have thrown at it, so I cannot yet isolate it to any specific slider, or I would have reported that - auto, relative to auto, manual, they all do it as long as Iris is selected.


(v3.2.9) Really - all filters from the Enhancement category distort facial features.
Out of 8 videos, 2 had distorted faces - so I did the “upscale” in another program and used Topaz only for the 60 fps conversion with Chronos

here is a video (source)
1- [MV] NS윤지 1st Album / 머리아파 on Vimeo
2- Girls' Generation 소녀시대 'Gee' MV (Dance White Ver.) - YouTube


I was responding to the accusation you just made that I don’t post any evidence, only anecdotes. I do NOT repost all the evidence in every post I make; otherwise they would be copy-pastes of previous posts. Those posts are reminders that the same issue still exists.

The significantly larger post above, which you didn’t screenshot, was about Iris, but I note you ignored that as well. Given your entire first post was nothing but having a go at me over nothing to piss me off, I will assume you have no beneficial feedback to provide.

I thought that this effect - the distortion of colors and lines - was a special feature of Topaz, and not a bug =)))))))))))


Will the lack of audio with previews be added to the “known issues” until it is fixed?


The preview gets stuck again shortly after the beginning of processing. I’ve put the source file in the Dropbox mentioned above. :eyes:


Are we going to get proper interpretation of half-float EXRs generated by Unreal any time soon? Currently, when VAI is fed an EXR sequence from UE5, it gets the gamma completely wrong at input, and it’s somehow even more wrong at the output! :laughing:

Currently the only workaround is to first convert the source sequence to lossless TIFF before giving it to VAI, which is very inconvenient and consumes vastly more storage space than is reasonable.


Hi, something that could be useful is to show the file name in the output list. When I’m doing a lot of previews/exports, I can feel lost…


This is the best way I can think of to show definitively what I am referring to with the morphing.

This is a blown-up, side-by-side output of the same face over the same frames. Note that this morphing issue occurs across the entire image and is not limited to faces - but we are good at detecting problems with faces, so it stands out more.

Left is Iris from Beta 3.2.9, right is this current release Iris. It is played back at 12fps instead of 24, and the sequence is repeated several times.

His face is literally changing shape constantly. His hair moves, his eyes move, his shirt is moving - it’s like there is an army of ants under the skin readjusting constantly. It makes the output unwatchable once you notice it - the video feels wrong all the time, even if you can’t work out what the cause is at first.

Note that this is with the 2x upscale. I have already noticed that the 2x and 4x Iris upscale models haven’t been the same since first release, so I cannot give an opinion on the 4x. The 2x model started bad but interesting, got progressively better until 3.2.9 - and then inexplicably got significantly worse before this release.

In terms of the cause, there is only one similar effect that I am aware of, and I had never encountered it in Topaz before. If you run QTGMC in progressive mode on a source that has duplicate frames, something about the duplicate frames causes them to be treated differently in QTGMC, with the result that the two identical frames show the same kind of morphing seen here.

I learnt the hard way early on that if I wanted to convert a VFR clip to a fixed-FPS clip that used duplicate frames to adjust the FPS, I could not run it through QTGMC without this issue occurring. Why it happens here suddenly, I do not know. There are no duplicate frames in the input. Though if Iris duplicates frames in the background as part of its process, and then applies whatever filter causes this in QTGMC (I never worked out which internal filter it was), that may explain it.
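To rule the duplicate-frame theory in or out, a quick check like this (a hypothetical helper of my own, not part of any tool mentioned here) can scan an extracted frame sequence for consecutive near-identical frames before it goes into a temporal filter:

```python
import numpy as np

def duplicate_frame_indices(frames: np.ndarray, tol: float = 0.0) -> list:
    """Return indices i where frame i is (near-)identical to frame i-1.

    Handy for spotting frames inserted by a VFR-to-CFR conversion, which
    a temporal filter may then treat inconsistently.
    """
    diffs = np.mean(np.abs(np.diff(frames.astype(np.float64), axis=0)), axis=(1, 2))
    return [i + 1 for i, d in enumerate(diffs) if d <= tol]
```

If the input truly has no duplicates but the tool still morphs, the problem would have to be in whatever frame handling happens internally.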


did you use the ProRes codec? try nvidia h.264

Thanks for the reply, but unfortunately the output isn’t really where the problem lies. Unreal produces EXRs that pretend to be 32-bit, but they’re not - they’re 16-bit :grinning:

Every other application manages to interpret them just fine, but when they come into VAI they appear maybe 3 or 4 stops underexposed, and the gamma is completely incorrect.

We have to use EXR image sequences for our purposes - outputting any other format from the renderer isn’t feasible unfortunately.
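For reference, UE5 writes scene-linear data into those EXRs, and a viewer or converter has to apply a display transfer function, or the image looks several stops underexposed, exactly as described. The standard sRGB encoding step looks like this (an illustration of the expected interpretation, not VAI’s internal code):

```python
import numpy as np

def linear_to_srgb(x: np.ndarray) -> np.ndarray:
    """Standard sRGB transfer function applied to scene-linear values in [0, 1]."""
    x = np.clip(x, 0.0, 1.0)
    return np.where(x <= 0.0031308,
                    12.92 * x,
                    1.055 * np.power(x, 1.0 / 2.4) - 0.055)
```

Linear mid-grey (0.18) encodes to roughly 0.46; displayed raw, without this curve, it sits far down the range, which reads exactly as an underexposed, wrong-gamma image.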


WOW, Recover Original Detail REALLY helps when I am doing MASSIVE noise reduction (you know, older SANDSTORM videos like Ghostbusters), trying to recover some detail and depth in faces when you go that hard.