Video Roadmap Update (May 2023)

Product Updates

Since last month’s update, you can now seek or trim by frame, use a significantly faster frame interpolation model, and re-enable legacy AI models from previous versions of the app.

Seek or trim by frame

You can now display the playhead position by frame number after selecting the option in Preferences → General → Timecode Display Format.

This means that you can now seek or trim by frame number instead of timecode.


After enabling this option, you can also specify the desired preview duration in frames rather than seconds. We’ve put a lot of effort into improving timeline accuracy as part of this change, so you will get a more consistent seek experience in both frame and timecode modes. Read more in the Trim documentation.
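For reference, this is how frame numbers and timecode relate at a constant frame rate. A minimal sketch, not the app’s internal implementation; it assumes an integer, non-drop-frame rate:

```python
def frame_to_timecode(frame: int, fps: int) -> str:
    """Convert a zero-based frame number to HH:MM:SS:FF timecode."""
    total_seconds, ff = divmod(frame, fps)
    minutes, ss = divmod(total_seconds, 60)
    hh, mm = divmod(minutes, 60)
    return f"{hh:02d}:{mm:02d}:{ss:02d}:{ff:02d}"


def timecode_to_frame(tc: str, fps: int) -> int:
    """Convert HH:MM:SS:FF timecode back to a zero-based frame number."""
    hh, mm, ss, ff = (int(part) for part in tc.split(":"))
    return ((hh * 60 + mm) * 60 + ss) * fps + ff


print(frame_to_timecode(1234, 24))            # 00:00:51:10
print(timecode_to_frame("00:00:51:10", 24))   # 1234
```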

Significantly faster Apollo model

You can now select a new “Apollo Fast” model in the Frame Interpolation filter.

While we’ve also increased the speed of regular Apollo, Apollo Fast is still about 300-400% faster for generating 2x or 4x frames.

There are a few things to consider when selecting a frame interpolation model. In general:

  • Use the Apollo models when you need an exact 2x, 4x, or 8x frame-count multiple. For example, converting from 30 → 60 fps at 200% slow motion requires exactly 4x the frames.
    • Use Apollo Fast when speed is a factor and you require exactly 2x or 4x frames.
    • Use Apollo when you want the best quality or need 8x frames.
  • Use Chronos (Fast) for other frame multiples, like converting from 24 → 60 fps.

These aren’t hard and fast rules; sometimes one model might unexpectedly give you better results than another more “optimal” model. We also plan on further improving the quality of Apollo Fast in a future update. For more details and recommendations, please read the Frame Interpolation docs.
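To make the multiplier arithmetic concrete, here is a small sketch of the rule of thumb above (an illustration only, not app code; the function names are made up):

```python
def required_multiplier(in_fps: float, out_fps: float, slow_motion: float = 1.0) -> float:
    """Frame-count multiplier needed to reach out_fps at a given slow-motion
    factor (2.0 == 200% slow motion, i.e. half playback speed)."""
    return (out_fps / in_fps) * slow_motion


def suggested_model(multiplier: float) -> str:
    """Rule of thumb: Apollo (Fast) for exact 2x/4x, Apollo for 8x,
    Chronos (Fast) for everything else."""
    if multiplier in (2.0, 4.0):
        return "Apollo or Apollo Fast"
    if multiplier == 8.0:
        return "Apollo"
    return "Chronos (Fast)"


# 30 -> 60 fps at 200% slow motion needs exactly 4x the frames:
m = required_multiplier(30, 60, slow_motion=2.0)
print(m, "->", suggested_model(m))   # 4.0 -> Apollo or Apollo Fast

# 24 -> 60 fps is a 2.5x multiple, so Chronos (Fast) is the better fit:
m = required_multiplier(24, 60)
print(m, "->", suggested_model(m))   # 2.5 -> Chronos (Fast)
```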

Enable legacy AI models

If you want to use an AI model from a previous version of the app, you can now enable the Legacy Models option in Preferences → Application.

You will then be able to select earlier versions of most AI models in each filter.

Our core value comes from the quality of our AI models, so we’ve shipped dozens of model releases over the last few years. While newer models will generally perform better on most videos, because of the nature of model development we can’t guarantee that they will perform better on all videos. This new feature makes it convenient for you to use older versions if you already know they perform well on your footage.

That said, we want to focus all our energy on developing new models instead of maintaining older ones that will likely become obsolete soon. Thus, we consider the Legacy Models unsupported, and we won’t optimize them or fix future compatibility issues. We’ll continue working hard on improving model quality, but we hope this option gives you more choice in the meantime. Read more in the Legacy Models docs.

Other improvements

  • Added a preference option to automatically remove older previews
  • Maximum Processes preference includes warning for higher settings
  • Added overwrite warning for image sequence exports and fixed export location
  • Fixed persistent temp files on export
  • Double-clicking Mac title bar now maximizes correctly
  • Removed the timeline preview duration indicator
  • Many interface, stability, and quality of life improvements

For the full list of changes, please read the individual release threads for v3.2.6, v3.2.5, v3.2.4, and v3.2.3.

Coming Soon

We’re focusing on improving model visual quality and app usability in the next couple of months:

  • New enhancement and upscaling model with better face recovery (beta)
  • Add scene change detection to improve Stabilization and Frame Interpolation quality for videos with multiple scenes
  • Add option to pause and resume exports
  • Faster playback and seeking in the in-app preview
  • Allow applying a second Enhancement pass without exporting and re-importing
  • Improve Themis (Motion Deblur) quality with object motion
  • Improve Output Settings interface and audio transcode options, including bitrate control
  • Improve Video Output list interface for comparing multiple previews and parameters

Topaz Video AI is intended to be a visual quality tool for professional creators, so in the longer term we’re also considering closer integration with popular NLEs. If you’re interested in testing new features before they’re released, please apply for the video beta program.

We look forward to hearing your thoughts. Thanks for using Topaz Video AI!

14 Likes

Does this include addressing the previously reported black spots, afterimages, and color alterations?
The Proteus 4 beta, which is currently being tested, is worse than version 3 and earlier rather than addressing them.
Is internal testing being done before offering it as a beta? :upside_down_face:

4 Likes

Hello,

We’re currently running some long test exports to try to reproduce the black spots seen in some exports, and our team is looking into options for preserving the full color range in enhanced videos. Currently the app has some limitations with wide color gamut and high bit-depth content that we hope to address in an update to our model processing.

Proteus 4 is also being trained on more and more video types to improve facial recovery and scaling quality. Models released during the beta period are not recommended for production use, but early access is an important part of our development cycle.

6 Likes

You mentioned that you perform ‘long test exports’, but why are numerous artifacts and bugs frequently found after a beta version is released and beta testers have used it for only a few hours?

Comparing the amount of in-house testing to the number of beta testers, I assume the beta testers outnumber your internal team, but even so, so many artifacts are obvious after just a few uses that I have to wonder whether the in-house testing is functioning properly.

If the in-house testing is not functioning properly, the direction of development becomes unstable and time spent on improvements is wasted.
I am concerned that the one-year usage fee I paid will be wasted.

1 Like

Is there a way to test the new version for free? I have an expired license, and the versions I tried before it expired in March had worse issues than previous versions. Before purchasing another license, I’d like to confirm it’s an improvement to my workflow without uninstalling my current version.

2 Likes
  • New enhancement and upscaling model with better face recovery (beta)
  • Add scene change detection to improve Stabilization and Frame Interpolation quality for videos with multiple scenes

I would like to purchase again when these two improvements are applied. When do you think they will be released?

2 Likes

yay! thanks for the great work!!

2 Likes

Hello Eric Yang,
The latest iteration of Topaz Labs’ Video AI is welcome. I have been enjoying the various tools in this app, yet I still find room for improvement. For example, when de-blocking, de-noising, and sharpening, an option to blend enhancements back into the clip would be welcome (as it would be in Photo AI).

I do not find the newest de-interlacing tool as effective as previous versions of this tool, although the process now requires only one pass.

With regards to time remapping, or at least Frame Interpolation, I have found this tool infinitely better than After Effects and Premiere Pro for retiming small-gauge scanned film footage (16 fps & 18 fps) to, say, 25 fps (PAL). However, pixel artefacts can occur using Frame Interpolation, not to mention unexplained colour shifts and banding. When creating slow motion, it would be a bonus to be able to keyframe output speeds across a clip, similar to the keyframe options in ReVision FX’s Twixtor.

Finally, to add my voice to previous posts, it would be useful to have guidance from Topaz Labs on using Video AI’s Duplicate Frames tool. To date, use of this tool has been pure guesswork. Is there a rough calculation end users can do to estimate the number of duplicate frames within a time-remapped clip? And if so, how does this work? Thank you. :slight_smile:

1 Like

I know Video AI will NOT be it, but are you by any chance working on a text-to-video app, where you describe the setting and the AI generates it? This would be similar to text-to-image, but with full motion.

Yes, absolutely! Go ahead and update, and you will be in trial mode. If you decide to wait to purchase an upgrade license, you can uninstall the current build and roll back to your owned version from the “My Products” page. As a note, if you own v3.2, you also own all minor patches (v3.2.x).

Keep an eye out for the release notes :slight_smile: We do not yet have definitive dates for these releases.

2 Likes

Not something we are currently exploring, but that is a great idea!

1 Like

Hello, I had great hopes when I read about legacy models, but when I tested it, I didn’t find the Artemis V8 models included. For some footage, the Artemis V8 models work better for me. It would be great to include Artemis V8 as part of the legacy models in a future release (with no support and no optimization, of course).

Hi Denis, which specific model are you referring to (quality-wise)? The models excluded are ones that we found to create artifacts or other unwanted behavior.

I am referring to Artemis V8 Low Quality and High Quality, which I still use in some cases. In many cases you get artefacts with these models, but in some other cases (when there are no fine patterns and no upscaling is involved) they do a better job for me.

1 Like

Hello, can you give us some insight into how Video AI utilizes the M1/M2 CPU for processing? Or does it use the GPUs to assist? And what about the Neural Engine? I’m told the Neural Engine can provide better image quality and faster processing. Are there any plans to leverage it in the future?

Currently, I have the M1 with 8 high-performance cores. However, according to the Video AI settings, I can only employ 4 processes. I know more cores may not necessarily improve performance, but is there any hope of using additional machines like a “render farm”?

Thanks.

Great to see that we can use previous versions of models for comparisons! I started on 2.5/2.6, so hopefully I can use this to see how things have come along.

Any chance of SVT-AV1 encoding in addition to VP9? VP9 Good/Best preset encoding performance and thread splitting aren’t good (even with lots of tweaks in ffmpeg outside of VEAI, I see 4 cores max for a 1080p video), and image-quality-wise it’s worse than AV1/H.265 hardware encoding, in my opinion. It loves to smooth details out; AV1 does that too, but at least it’s not as bad, and it’s a future-proof format.

Video AI does make use of the Neural Engine on the M-series processors for most tasks. The primary processing is still done on the GPU cores, but the Neural Engine is also assisting.

The macOS Activity Monitor does not display these cores in any of the graphs available for CPU and GPU usage, but using a tool like asitop, you can monitor the power and utilization of these cores.

We’re currently working on some major improvements to the export options available in Video AI, and while there is no current plan to add support for SVT-AV1, we do support AV1 hardware encoding on the NVIDIA RTX 4000 series and Intel Arc GPUs.
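In the meantime, one workaround is to export a high-quality intermediate from Video AI and re-encode it with a standalone ffmpeg build that includes libsvtav1. A rough sketch (file names are placeholders; older ffmpeg builds may expose the quality control as -qp instead of -crf):

```python
import subprocess

# Re-encode a Video AI export to SVT-AV1 with a standalone ffmpeg build.
# Requires ffmpeg compiled with libsvtav1; file names are placeholders.
subprocess.run(
    [
        "ffmpeg", "-i", "videoai_export.mov",
        "-c:v", "libsvtav1",
        "-preset", "6",   # 0 (slowest/best) .. 13 (fastest)
        "-crf", "30",     # constant-quality target; lower = higher quality
        "-c:a", "copy",   # pass the audio through untouched
        "av1_output.mkv",
    ],
    check=True,
)
```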

Unfortunately, there is no mention of, or future plans for, color management. Topaz Video AI tends to change the original color appearance in its output. For professional use this is important, and color consistency with a properly color-managed workflow is needed. When can we expect development in that area?

4 Likes