Topaz Video AI v3.0.4

Hello Everyone!

We have a smaller release for y’all this week, and there will be no release next week. We’re working on some larger, longer-term improvements for the end of the year.

Released November 15, 2022

Download: Windows | Mac (.dmg) | Mac (.pkg)

Changes from v3.0.3

  • Starting an update in app no longer shows pending exports warning when no previews/exports are running
  • Installer no longer prompts to remove VEAI 2.
    • The UNINSTALLOLDVEAI=1 command line option can perform this step for users who still want it
  • Fixed login program wrongly returning unowned status for some users
  • Misc UI tweaks

Known Issues:

  • Quality mismatch between 2.6.4 and 3.0.x for some models.
  • Some users may experience reduced performance compared to 2.6.4
  • Preview experience needs to be improved

File Submission Dropbox: Submit Files


The models have not been updated, and no new ones added, for more than a year. Please update the models. They used to be updated every week, and now nothing has happened for almost 20 months. That is the core task of this program.


Topaz develops software to the beat of their own drum. However, they did add some models recently.

that’s all? now I feel disappointed. :neutral_face:


Stabilization, Apollo are new. Also if you believe some here, the Proteus and other models have all been changed (re-trained?) since 2.6.4. We all share your desire for perfect video restoration tools yesterday, but your post is inaccurate, not helpful.


Hello there. I’m not sure if this is the right place to post upgrade suggestions, but here are mine.
The major missing functionality for my use case is:
1- An option to not lose HDR metadata. H.265 Main 10 should have a “keep HDR” option or something like it.
2- Copy Audio is very good; it keeps the audio format most of the time even when the video is trimmed, BUT it keeps only one audio track… what if multiple audio tracks exist?
3- AI video upscaling is very resource-intensive and takes a lot of time on a long video. It often crashes (the app, or the app crashes the computer). It would be good to have a Resume function. Having the unfinished TVAI video file in the same output folder helps, since we can just redo the rest of the video by checking where TVAI stopped.
4- A PAUSE option alongside STOP processing in the … menu. Sometimes it takes days to process a video, but I may need my computer for something else, like playing a game. For most people it’s impossible to do both at the same time, so a PAUSE would be very helpful.
5- In version 2.6.4 it was possible to compare 4 different models at the same time; now we have to go back and forth from one processed video to another. It would be good to have that function back.
6- Basic adjustments in the options on the right would be good too: brightness, contrast, hue, saturation.
7- There were so many models in the previous version; where are they now?
8- Recover Faces in Photo AI works quite well with very low-resolution pictures, but Video AI is awful with faces when the video is very low resolution. It deforms more than it corrects. So why isn’t a Recover Faces function available somewhere?
9- Why does playing a preview run in slow motion? There is a button beside Play for that, for when watching the preview in slow motion is actually wanted.
10- In version 2.6.4 there was an option to add a prefix or suffix to the file name, which was awesome. Now we instead get some random number and have to name the file manually every time using Export As….
11- It would be good if Export As… pointed first to the same folder as the source when possible; now we have to browse to that folder manually. Or at least there should be an option for that in the preferences.
12- The TRIM function is not as intuitive as it was in 2.6.4. We have to manually type the timecode where we want the cut, and that doesn’t work well for the last 2 digits, which are the frame number. When I type 12, it puts 8… Maybe that happens because the timecode is rescaled to a basic 24 frames/sec or something, but it’s really not convenient. Before, it was possible to just seek the video to where you wanted the cut, even going frame by frame, then click the button for it.
13- Sometimes, in single-video preview mode, the processed preview is time-offset from the original; clicking the video shows the original at a different time. I suspect this is due to a bad timecode calculation based on a frame rate that differs between the app and the video.
14- I’m not sure what the ADD NOISE option is for. For some models it does add noise, but for Proteus it doesn’t add any. It seems to do something good for low-resolution video: setting noise to max avoids some big face deformations. But that’s surely not what it was designed for, is it?
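For what it’s worth, requests 1 and 2 can be approximated today outside the app with a plain ffmpeg pass. A minimal sketch, assuming an HDR10 (BT.2020/PQ) source; the filenames are placeholders:

```shell
# Sketch only, not a TVAI feature. -map 0 keeps ALL streams (so every audio
# track survives), -c:a copy keeps the original audio format, and the
# -color_* flags tag the output as HDR so players don't fall back to SDR.
ffmpeg -i hdr_source.mkv -map 0 \
  -c:v libx265 -pix_fmt yuv420p10le \
  -color_primaries bt2020 -color_trc smpte2084 -colorspace bt2020nc \
  -c:a copy hdr_output.mkv
```

Note these flags only re-tag the color metadata; they don’t tone-map anything, and HDR10+ / Dolby Vision dynamic metadata would need extra handling.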


there is certainly a good reason for that.

I’ve heard people say this several times over the past few months. Maybe I’ll try it with my GTX 1060, but with the RTX 3080 Ti I can have three instances of TVAI running and play multiplayer games at 300 FPS. (That is Splitgate specifically. I’m sure there are other, more demanding games that might suffer, but I don’t own them.)

I’m afraid I can answer almost all your questions, as your requests are the same ones people here have asked every week since 3.x development started.
From what I’ve seen during development, the software is not progressing as fast as people would like, but over time the requests do seem to register on the dev team’s side, and each one eventually makes it into a release, with some (sometimes big) patience and time.

For the trim issue, it will certainly be fixed as soon as the “frame number added to timecode” request lands, along with a fix for the preview playback speed (yes, that’s a bug, not a feature, lol, or at least not well implemented yet!). Once it plays at normal speed, the little button next to it is meant to slow playback down by 20%, but as you can see, playback is a bit chaotic at the moment, so the button is not really helpful for now.
There is a small workaround for trimming: use the left and right arrow keys to find a precise frame, then use the shortcuts to set the trim in/out points (they are listed in the Edit menu).

For requests not asked before, there is a separate dedicated thread on this forum, but as I said, almost all of yours have already been requested.

For the Add Noise button, I’m not sure; it may add some noise to the input file, since more noise can make the model’s denoiser work harder, but I may be wrong, I haven’t tried it yet. Maybe someone can answer.
It would be good if someone who knows, or a dev, explained it!


According to the developers, adding a bit of extra noise helps the AI get rid of blocky compression artifacts.


You are not alone my friend. :eyes:


To be honest, I really have no idea what they are currently working on. Yes, they made a big improvement with the new standard model for Gigapixel, so they are still capable of it. On the other hand, another week has passed by and I can’t see any progress in Video AI; so many good tips were given months ago, but I still don’t know what they are working on. It’s probably very basic GUI and background functionality that must be gotten right before they can improve other aspects of the application. Because of the long development time on that basic part, it looks like they may have run into serious trouble getting it to work as intended. But all this is only a rough guess, and I will do my best to test whatever appears week after week; only my hope is getting weak.


Since V3 is so fundamentally broken, can you at least add an NVIDIA AV1 encoder export option to 2.6.4 while we wait for a functional V3?


MacOS Intel user, I submitted this to support but thought to share here in case anyone knows what the deal is… My computer locks up and reboots part way through running “Upscale to 4K” (from 720x480 SDR) using AI Model “Gaia” and “Computer Generated”. Looks spectacular in the preview but I can never complete the export as the computer freezes every time I try to export. TVAI 3.0.4 logs say “out of memory”, but the system shows I have plenty of RAM and AMD graphics memory that are unused. I have 64GB RAM and an AMD Radeon Pro 5700 XT 16 GB.

From the log:

  • 2022-11-16 00-18-29 Thread: 0x7ff8573094c0 Debug Updating video info {"sar":0.9090909090909091,"framerate":29.97,"startNumber":1,"frames":8678,"progress":9,"status":0,"frame":805,"procStatus":{"status":0,"eta":3291.8289750896815,"fps":2.3906795372959833,"message":"Out of memory","pass":1,"error":"- Process ran out of memory.\n- Process ran out of memory. Couldn't generate output from the model for: VideoSR_Unet/Out4X/output/add.","progress":9,"frame":805,"priority":3,"requestPos":0,"processorIndex":-1}}
  • 2022-11-16 00-18-29 Thread: 0x7ff8573094c0 Info ~TProcess(): destroyed
  • 2022-11-16 00:18:29.877 Topaz Video AI[1186:12200] HIToolbox: received notification of WindowServer event port death.
  • 2022-11-16 00:18:29.877 Topaz Video AI[1186:12200] port matched the WindowServer port created in BindCGSToRunLoop

I have been meaning to ask, what’s with the file naming convention? How is that 9-digit random number supposed to help me find a file or understand what’s in it? What happened to having the dimensions suffixed to the file name like in 2.6.4?


v3 has simplified my workflow tremendously and the basic concept with the use of ffmpeg + command line is more future-proof than v2.
Main benefits for me over v2:

  • variable framerate issues gone (before I had to re-encode videos before using them in TVEAI)
  • built-in cropping of black bars giving performance boost
  • command line option allows customization
  • command line allows for easy pause/resume
  • easy parallelization of tasks

They just need to address the wonky UI and performance problems and v3 will be way better than v2
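On the pause/resume point: with the ffmpeg-based CLI this is just standard POSIX job control, nothing TVAI-specific. A minimal sketch, using `sleep` as a stand-in for a long encode:

```shell
# Start a long-running encode in the background; `sleep` stands in for the
# real ffmpeg/TVAI command here.
sleep 300 &
job=$!

# Pause: SIGSTOP freezes the process. It stops consuming CPU, but all of its
# state stays in memory.
kill -STOP "$job"

# ...play your game, then resume exactly where it left off:
kill -CONT "$job"

# (In an interactive shell, Ctrl+Z and `fg` do the same for a foreground job.)
kill "$job"    # cleanup for this demo
```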


So I wanted to convert some GoPro 5k 30p footage to 4k 60p (as YouTube don’t support 5k any more and I wanted to smooth out some fairly fast pans). Using 3.0.3 it is currently talking about 19 days to completion, so does anyone have any suggestions as to which of the options (I was quite liberal in my selections, but it’s only a 19 minute video) should be de-selected to get it within reasonable limits, or if starting again with 3.0.4 would be a plan? (Note anything that doesn’t use my Nvidia RTX2060 is a more likely culprit.) Also the preview looks odd?

Oh, and does my TVAI licence (I started with v3) allow me to run its predecessor app as a temporary workaround? Is there anything special I need to do (and where do I get it, plus any extra models I’d need)?



FYI the original video is here (uploaded at 5k and downscaled to 4k by YouTube):

“command line allows for easy pause/resume”

Would you please explain how you can do that?

Hello. I also reverted to 2.6.4.
3.0 has degraded the image quality of DDV3.
I thought Proteus auto-correction was convenient in 3.0,
but even with the parameters set to zero, some correction is still performed.
2.6.4 is better if you do it manually.
2.6.4 can be downloaded here.


If TVAI v3 is telling you 19 days, v2 may be faster but it is not orders of magnitude faster, so I would explore other options.

I would probably use ffmpeg scale filter for the spatial downsample (you can pick the algorithm you want) and Flowframes for temporal upsample. Maybe use Flowframes first, to maximize the amount of available detail for the upsample part. The default RIFE model should be fine.

For the ffmpeg scale filter, you could experiment with lanczos, bilinear, bicubic (or others). Lanczos can add some apparent sharpness, which may be good or bad, sometimes it looks like oversharpening when downsampling. Unless the source is very bad, Topaz’ strength is really in upsampling, not downsampling, so I think using it for what you’re trying to do is overkill and a waste of compute power (especially for your spatial 5k->4k objective).
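Concretely, for the spatial 5K→4K step, the scale-filter invocation would look something like this (hypothetical filenames; swap `lanczos` for `bicubic` or `bilinear` to compare):

```shell
# Plain spatial downscale, 5K (5120x2880) -> 4K UHD (3840x2160), lanczos kernel.
# -c:a copy passes the audio through untouched; -crf 18 is a near-transparent
# x264 quality setting, tune to taste.
ffmpeg -i gopro_5k.mp4 \
  -vf "scale=3840:2160:flags=lanczos" \
  -c:v libx264 -crf 18 -c:a copy \
  gopro_4k.mp4
```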

Just like ffmpeg, Flowframes is free so you have little to lose in trying. Flowframes’ optical flow is very different but much faster than what Topaz does. For different kinds of footage I have observed it produces better or worse results than Chronos or Apollo, but always much faster so I always try it first. For interpolating a single frame between each frame pair, I think the results could be quite good with optical flow. Again, this depends heavily on what the footage is.

Regarding the other 99% of the ranting in this thread, I agree V3 needs a lot of work and was not ready to come out of beta, but here we are.

A common refrain is to “just add face enhancement.” I encourage everyone to try out current ML face-enhancement tech themselves if they have the ability; most of the published approaches for face enhancement work on standalone images. This has been reiterated many times by other posters, but some readers just do not hear it. Without temporal coherence integrated into the model, the facial identity fluctuates wildly as the aspect changes. This is very different from single-image enhancement. If you want to give it a try, look up Codeformer or GFPGAN; there are easy-to-use GUIs that anyone could operate. The only hurdle is following the setup instructions on GitHub. I mention these two because they are free, but of course you can try Topaz Gigapixel if you have it.

In any of these you can input either video clips or image sequences, and if a subject’s face is not too blurred and not moving, the results look good. A single frame of motion blur or a partial face creates the worst kind of single-frame result you can imagine: blurred, mangled features punctuated by sharp eyes and misplaced teeth. True horror, in many ways worse than what is done by any of the face-unaware models Topaz has. I think a better short-term objective for Video AI would be integrating one of the readily available face-detection models with a tick box to disable upscaling on faces (when the model truly fails), where a small blurred mask patch of the original sequence, upscaled traditionally, can be overlaid on the upscaled result (a blurry face is certainly better than a sharp monster face, in the short term). The latest literature has a few algorithms describing attempts at temporal coherence, but they do not appear mature enough to be applied to the general footage Topaz is targeting. If you have a tremendous amount of compute power and are upscaling only specific types of footage with specific people, there are more options, but again, that is outside what Topaz is aiming for, so you’re basically asking them to move beyond the envelope of cutting-edge basic research. Don’t hold your breath.
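For anyone who wants to experiment, the usual pattern is: explode the clip to frames, run the still-image enhancer over them, reassemble. Roughly, with hypothetical paths (the GFPGAN step follows that repo’s documented CLI, but check its README, since flags and output folders can change between versions):

```shell
# 1) Explode the clip into numbered PNG frames
mkdir -p frames
ffmpeg -i clip.mp4 frames/%06d.png

# 2) Run a still-image face enhancer over them (GFPGAN shown; per its README
#    the restored frames land in restored/restored_imgs/)
python inference_gfpgan.py -i frames -o restored -v 1.4

# 3) Reassemble at the source frame rate (29.97 assumed here) and carry the
#    original audio track back over (-map 1:a? makes audio optional)
ffmpeg -framerate 30000/1001 -i restored/restored_imgs/%06d.png -i clip.mp4 \
  -map 0:v -map 1:a? -c:v libx264 -pix_fmt yuv420p -shortest out.mp4
```

You will see the identity flicker described above as soon as the face moves; that is exactly why “just add face enhancement” is harder than it sounds.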

By the way, I think the comments about “the team” are bordering on personal attacks; not cool. Settle down and have some sense of decorum; this behavior reflects poorly on the poster. From the outside you have little insight into what is going on with the team. They are not obligated to let you behind the curtain, as much as you wish for it. Besides, if you’re ranting about how you think the team is this, that, and the other thing, why would they be inclined to share MORE with you? You can catch more flies with honey than with vinegar, as they say. Anyway, that is just my two cents. I have been a long-time lurker, and it’s unfortunate to see the hostility growing in a few of the regular posters in these threads.

Back to the software: hopefully we will see some good enhancements in the coming months, but I wouldn’t hold my breath. The next 6-7 weeks are very dense with holidays in the US, where the development happens AFAIK. This is obviously a good explanation for why there will be no new release next week. The developers are humans too, and they have families and lives outside of feverishly trying to address every scatterbrained screed on the forums.