Video Roadmap Update (Jan 2023)

Since the last roadmap update, we’ve substantially improved export performance, sped up in-app playback, and added a new Motion Blur model.

With this performance update, we’ve revised our core focus areas:

  1. Full speed ahead on video quality models for denoising/upscaling. Afterwards, explore options for Face Recovery.
  2. Continue building a smoother in-app previewing experience, particularly scrub performance and the ease of comparing before/after results.

Now that we’ve shipped the major performance improvement, we’ll start solidifying the app and improving quality of life in other ways:

  • Improve stability and error message handling
  • Improve the video trim experience
  • Allow downloading all usable models at once instead of as-needed
  • Explore options to pause / resume exports

Please comment with your thoughts, positive or negative. Feedback is a gift to us; while we may not be able to implement every suggested feature or respond to every post, we do read and consider everything you write.

Thanks again. The whole team is really excited about the improved Topaz Video AI that we’ll build together with you in 2023!

(If you’re having technical issues, contact us and we’ll get you squared away. If you’re curious why Topaz Video AI v3 has changed so much from v2, read the previous roadmap post.)


Thanks a lot devs for all your efforts!

Please consider adding these features as well…


Good job.
Is there any possibility of implementing a lossless codec like UT Video?

Running good so far.
I convert mostly VOB files with PCM audio.
I would like to see PCM audio passthrough when converting to Apple ProRes.
Also a detailed manual explaining each setting and model would be great.
Thanks for the quick updates.


I hope we can still integrate models ourselves, even ones not in the list, as we could with 2.3.0.

+1 to exploring options for face recovery. That would REALLY be great. It would be nice if the program could find a high-res face in a frame of the current video and use it for frames where the face is further in the background and a bit blurry, OR if we could somehow feed the program a few high-resolution images of the face so that it could use those for reconstruction in the video.


Could you please add support for AMD’s new AI Cores in the 7000 series GPUs?


Could you make sure all the ProRes formats on PC (including ProRes 4444 with alpha channel) can be both read and written, preserving any alpha channel?

And could you keep all existing AI models available instead of turning some off?

Someone mentioned the new Topaz Video AI didn’t improve a cartoon-type video. You could give the full list of AI models, describing more clearly what each is best for, including models for CGI or 2D cartoons, etc. E.g., with Gigapixel you could choose the model for those different things more easily in some ways.

It could have models designed for increased detail that take many frames into account (or a user-specified number of frames, or different models for different frame counts, so some could be faster and others higher quality). Like the post above about face recovery, but for anything in the video (such as a car): frames where an object is closer to the camera should be able to supply detail that can be added when it is further from the camera. It would be good if each model mentioned how many frames it takes into account (and if some let you edit that number).

I don’t know if this would work well, but if it would help: when you’re chaining different models and options (such as stabilization), could a node-based way of working be a good option and make things simpler?

Maybe you could specify what is or isn’t in the video so it could fine-tune the AI model a bit or pick the best one, as well as whether it’s live action, 3D CGI, 2D cartoon, etc.

Maybe the video roadmap could also include easily creating other types of animations or effects in a simple way. Since it does stabilization, maybe it could also track things, and you could use that tracking info for other purposes (e.g., placing other objects/clips into the scenes/videos). E.g., a compositor that is cheap but high enough quality and simple enough to use, alongside the existing features of Topaz Video AI / Photo AI.

Maybe an AI rotoscoping feature could be added, or several (e.g., for removing the background from videos; there could also be more artistic rotoscoping options).

Maybe the video roadmap could include adding Stable Diffusion-type animations and images into the video, ones where there would definitely be no rights issues (e.g., trained entirely on properly licensed content, including by the user on content they own the rights to).

Maybe if the output (e.g., of upscaling or interpolation) isn’t accurate enough or giving the results wanted, the user could help correct it in different, new ways. E.g., a different interpolation app can use masks to correct frame interpolation; maybe similar techniques could be used here.

Maybe there could be some way for the user to run additional training of the model(s) on their own video content, if that would then make upscaling etc. better.


The dev team are truly badass, and I’m so grateful that I bought this prog; it has become an invaluable tool for restoring/repairing badly compressed videos! I suppose if there’s something I would add, it would be a slider for the zoom-stabilising feature, as I’d still like to control the strength, etc.

But to be honest, I’m stupefyingly impressed by what they accomplished with this model. I frequently use VirtualDub2 and the DeShaker plugin, where you can control every single aspect of the plugin, which is perfect for control freaks like myself.

As a test, I ran a horribly edited video through TVAI and holy shit, was I impressed; the video in question had idiotic constant zooming which made it hard to watch, but with TVAI all the zooming was nullified. I may have to cut VD2 from my pipeline!

EDIT: This IS fast! I’m currently encoding a 4:57.21 (that’s 4 minutes 57 seconds) 1152x648 video, 30 fps ⇒ 120 fps, using the stabiliser/Themis/Chronos/Proteus models, and it’s estimated to take only 2 hours rather than the usual 5-10. NOTE: finished rendering, and yeah, it only took 2 hours 20 min!


Do you usually have to deinterlace your VOBs? I find that all of mine have that comb effect.



I’d like to see the ability to output SMPTE VC-5 (CineForm) .avi video added at some point (in addition to ProRes and H.264/H.265 as it is now). It would help my overall workflow a lot to have a codec and format native to my work environment (Windows and DaVinci Resolve)!

And we have hundreds of hours of older SD video that we want to deinterlace and uprez to FHD and/or UHD; some of it was shot on S-VHS and some on DVCAM, so MORE speed and the ability to use render farms would help make such a thing practical for us!

I’ve been able to deinterlace and uprez some of this material to FHD with the older v2 and was able to use it interchangeably with some more recent FHD video, so we’re excited about the prospect of remastering the other material in this library with v3!


On the macOS version, store presets outside of the application container.

Maybe in ~/Library/Preferences? Unless I’m missing something, I have to copy my custom presets out and then back in when I update TVAI on macOS. The Windows version is much better about this.



Please restore the ability to change the interface from timestamps to frame numbers. I really need the ability to fully stop an upscale (not just pause) and then resume. In 2.6.4 it was easy to identify which frame you left off on and pick up right from there, but in 3.1 there’s no timestamps-vs-frames toggle, so it isn’t possible.
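In the meantime, converting between the two representations is simple arithmetic. A minimal sketch (the function names are my own, not anything from TVAI; `fps` must match the clip’s actual frame rate, and the rounding convention may differ from what VEAI 2.6.4 displayed):

```python
def timestamp_to_frame(timestamp: str, fps: float) -> int:
    """Convert an HH:MM:SS.mmm timestamp to a zero-based frame number."""
    h, m, s = timestamp.split(":")
    total_seconds = int(h) * 3600 + int(m) * 60 + float(s)
    return round(total_seconds * fps)

def frame_to_timestamp(frame: int, fps: float) -> str:
    """Convert a frame number back to an HH:MM:SS.mmm timestamp."""
    total_seconds = frame / fps
    h, rem = divmod(total_seconds, 3600)
    m, s = divmod(rem, 60)
    return f"{int(h):02d}:{int(m):02d}:{s:06.3f}"
```

For example, if a 29.97 fps export stopped at timestamp 00:12:34.500, `timestamp_to_frame("00:12:34.500", 29.97)` gives the frame to resume from.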


Is this true?

Facts that everybody forgets (or forgot):

1/ 2.6.4 and versions before used frame numbers only.
2/ One day, “someone” complained about the missing timecode and said frame numbers were totally useless. Some people believed him and added their voices to the complaints (as happens here and there every day on this forum).
3/ Well… lol… I think you get it.
4/ From what I know, displaying frame numbers in the current 3.x version is not as easy as we might think; it’s on their roadmap as a priority, but it will certainly take time, so it needs some patience.
5/ It’s perfectly possible to do what Jason wants, but it needs a bit of video-editing knowledge (or being used to it): cut the video into several parts, process them, and then reassemble, etc., until the pause/resume/save/frame-number feature is implemented.
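The split-and-reassemble workaround in point 5 can at least be planned mechanically. Here is a rough sketch (the helper and the frame-range convention are my own, not anything TVAI provides) that computes the inclusive frame ranges to trim before processing each part separately:

```python
def plan_segments(total_frames: int, chunk_frames: int) -> list[tuple[int, int]]:
    """Split a clip of total_frames into inclusive (start, end) frame ranges
    of at most chunk_frames each, for trimming and processing in separate passes."""
    segments = []
    start = 0
    while start < total_frames:
        # Last segment may be shorter than chunk_frames.
        end = min(start + chunk_frames, total_frames) - 1
        segments.append((start, end))
        start = end + 1
    return segments
```

After each range is exported, the parts can be concatenated back in order with any editor or remuxing tool; if a run dies mid-segment, only that segment needs redoing.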


V2.6.4 allowed me to toggle between timecode or frame number.

I don’t understand what you are trying to say.

If it is now timecode only then it is no longer a feature of the software.


He’s saying that 2.6.4 was originally built with frame numbers as its basis, and timestamps were added as a feature later. Meanwhile, 3.0 uses timestamps as its basis and hasn’t added the ability to use/view frame numbers in the GUI; because it’s built that way, adding them takes time and development that hasn’t happened yet.


I do agree though, frame numbering is a must have.


So let’s say it is a requested feature on this new piece of software. Just say what it is.

The confusion is that we are working with version numbers for two completely different software applications that also have two different names.

Organization is the key.

I’m not trying to argue something or pick a fight — but just trying to be factual without hearsay or misinformation.

We have a new software application. The EOL for the other software (VEAI 2.6.4) was, I guess, mid last year, or whenever they released 2.6.4.

Usually you get notice about when the EOL date will occur. We didn’t. The fact is - support or future upgrades for VEAI 2.6.4 ended abruptly.

Bad PR move in my opinion, but it is not my company.

So past is past. I paid $299 last year for VEAI. I received the version before 2.6.4.

In my year of upgrades - I got one - 2.6.4. It works and is not buggy for the most part.

I received access to a new software application, called TVAI. It remained buggy until v3.1. I view it as a Beta version to try.

My access to updates ended in November 2022, but I bought VEAI, not TVAI. My updates ended when they discontinued VEAI.

In theory - I should not be complaining and I will accept that fact. A little bitter - yes. Oh well.

Early adopters usually don’t get the best deal, but I got access to software that others did not have yet.

That’s life. It’s just $299. Luckily I did not buy 10 copies.

I can see now that it is best not to compare one software application with another. They aren’t the same.

I just wish that Topaz Labs would state the obvious as well.

We should all move on (especially me). But let’s not sugarcoat what happened too.