Video Enhance AI thoughts and requests

Please make it available on the Mac (iMac) platform.

Over the last few days I ran further tests, again with bad-quality video footage.
The original footage was 640x480 and below.
Sometimes the results are very impressive. I never thought it would be possible to deblock, denoise and sharpen such videos at such impressive quality. But the results vary a lot from video to video. I guess Topaz has to feed the AI with much more reference footage.
And again: faces and hair often get destroyed.
ONLY the mode "Upsampling (HQ-CG)" sometimes brings such impressive results.
The mode "Upsampling (HQ)" brings more sharpness but destroys faces and hair much more.
The mode "Upsampling (LQ)" does not work for me on any footage. In every rendered video it adds spots that were not part of the original footage.

My wishes for improvements so far:

  • add better face and structure recognition
  • implement slider controls for sharpness and noise reduction, to be more flexible in getting the best results
  • implement audio support
  • improve the mode "Upsampling (LQ)"; e.g. add threshold sliders here too for manual adjustments
4 Likes

2 posts were split to a new topic: Video Enhance AI - Current Issues

I’ll chime in on the HQ-CG mode seeming to produce better results in many cases. It handles lines and edges very well, anything where aliasing would be an issue, and deals nicely with potentially jagged edges (stair-step appearance on diagonal lines).
But then it seems to handle textures less well in some situations, meaning it smooths out fine details somewhat between such lines and edges, making some surfaces look plastic. The HQ mode seems to keep textures better, but seems to have aliasing issues with lines and edges.

Suggestion:
Some sort of combination of the two modes might do wonders. Maybe a slider to add in textures and fine details from the HQ mode into an otherwise clean/smooth HQ-CG upscaled video would solve this. It would make it possible to have nice upscaled edges without aliasing issues, and keep more detail where it might otherwise get smoothed out.
A slider for sharpening might also be good.
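
In the meantime, a rough way to get that mix yourself is to render the clip twice (once with HQ-CG, once with HQ) and blend the two results. A minimal ffmpeg sketch, assuming two hypothetical renders hqcg.mp4 and hq.mp4 with identical resolution and frame count:

ffmpeg -i hqcg.mp4 -i hq.mp4 -filter_complex "[0:v][1:v]blend=all_expr=A*0.7+B*0.3" -c:v libx264 -crf 18 -pix_fmt yuv420p blended.mp4

The 0.7/0.3 weights are just a starting point; raising the second weight brings back more of the HQ texture (along with its aliasing).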

Regarding the sharpening: as it is now I get pretty good results by upscaling to about 1.5x the target resolution, then applying some sharpening and scaling down to the target resolution in Premiere/AE or something similar.
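
If you would rather script that step instead of doing it in Premiere/AE, roughly the same thing can be done with ffmpeg's unsharp and scale filters. A sketch, assuming a hypothetical upscaled.mp4 rendered at about 1.5x a final 1920x1080 target:

ffmpeg -i upscaled.mp4 -vf "unsharp=5:5:0.8:5:5:0.0,scale=1920:1080:flags=lanczos" -c:v libx264 -crf 18 -pix_fmt yuv420p sharpened_1080p.mp4

unsharp=5:5:0.8 is a mild luma sharpen applied before the Lanczos downscale; adjust the third number (the amount) to taste.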

2 Likes

I tested several more pieces of footage.
The deblocking and denoising are often very impressive.
The smoothing and straightening of edges is also nearly perfect.

In many cases the denoising and smoothing of the footage is too strong, and in the end the video looks unnatural because fine structures are gone.
Here it needs some adjustment sliders to increase/decrease:

  • denoising
  • deblocking
  • sharpening

In other cases, e.g. with grass, the tool adds unnatural structures into the grass.

Is a next update with first improvements foreseeable?

1 Like

4 posts were merged into an existing topic: Video Enhance AI - Current Issues

I have tried it with various inputs and so far I am very impressed with the results. One area where it doesn’t really work all that well is digitized analog video. I have quite a few videos from analog sources which show all the signs of analog footage (wobbly image, scanlines, combing). If the product had a way of upscaling these just as convincingly as it upscales videos with lots of compression artifacts, that would be a real game changer for people doing restoration of old footage.

1 Like

Please add AMD GPU support! :heart:

11 Likes

I had the same experience. I tried some old Hi8 video clips, which I digitized years ago, with different settings.
The improvements are minimal. Details are not really improved, and edges are not stabilized after rendering.
Same with old VHS footage.
Apparently at least DV footage is required as a minimum in order to get improvements.

5 posts were split to a new topic: Video Enhance AI Install & Update

Also agree that sliders for various DeNoise settings would be useful.

Please add industry standard output codes like ProRes and ProRes HQ. The h263.Mp4 format is not useful at all for pro applications.

Keep up the great work!

3 Likes

This morning I received a Topaz newsletter saying that a new version of Video Enhance AI is available.
Is this true?
If yes: which features were changed and which were added?

Sorry, typo…I meant codecs…ProRes please!

1 Like

It’s written in the newsletter:

✓ Upgraded AI Models
✓ DeNoise/Deblock processing
✓ Audio track preservation (if the output is mp4)

It also indicates the GPU VRAM usage when processing

Some additional info on how the program handles certain video is really needed for professional users. Mainly:

  • The program will now open interlaced video. What deinterlacing algorithm is used?
    This actually isn’t true… most DVDs are actually progressive but flagged as interlaced. It would have been nice if Topaz had communicated here.

  • The program will open anamorphic video (e.g., DVD) and desqueeze 720x480 or 720x576 according to the metadata (for example 720x576 at 1.78 = 1024x576). What sort of algorithm is used for the desqueezing? (A manual desqueeze sketch follows after this list.)

  • In looking in the tldb directory, I can see there are 1x, 2x and 4x training data. How does the program determine what data is used? For example, if I scale 1024x576 to 1920x1080, that’s 188%. Is it using the 2x model data? At what percent does it decide on which training data? If the final result is 202%, does that mean it switches to 4x?

  • For final resolutions that are not 2x or 4x, what scaling algorithm is used to get to the final resolution? For example, if I scale 480x360 to 1080x810, that’s 2.25x. Does that mean it uses the 4x training data, then downscales the result, or the 2x training data, then upscales the result? If so, what downscaling or upscaling algorithm does it use (lanczos, spline36, spline64, etc.)?
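
Until Topaz documents this, one workaround is to take the desqueeze out of the program's hands and do it yourself before importing. A hedged ffmpeg sketch for the 720x576 at 1.78 example above (the file names are placeholders):

ffmpeg -i input.vob -vf "scale=1024:576:flags=lanczos,setsar=1" -c:v libx264 -crf 16 -pix_fmt yuv420p desqueezed.mp4

scale=1024:576 stretches the anamorphic frame to its 16:9 display shape (horizontal only, so interlaced fields are untouched), and setsar=1 flags the result as square-pixel, leaving Video Enhance AI with only the upscaling to do.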

03/22/2018 - Questions still not answered. Why won’t Topaz communicate? :frowning:

2 Likes

It’s clear that Nvidia has a bigger market share than AMD, but with the RDNA architecture that tendency is changing and AMD’s share is growing.
Just out of curiosity, it would be good to know why AMD GPUs aren’t supported. Do Nvidia and Intel implement something that AMD doesn’t? Is this specific to Video Enhance AI, or is it something general to the AI engine in all the Topaz apps?

2 Likes

I’m sure there are many things being worked on to improve Video Enhance AI, so maybe the concerns I express here will be nonexistent after additional versions come out.

After processing a clip with Video Enhance AI, I’m not able to do any further processing of that clip in my video editing software. I’m using PowerDirector 16. A Video Enhance AI clip can be placed into the timeline just like any other clip, but I can’t use any of PD 16’s tools to do the many things it lets me do with an unprocessed clip. I can’t even edit the length of the clip. At this point, anything I need to do to a clip needs to be done before it even gets imported into PowerDirector 16.

It works okay on small videos. Anything more than 1 hour of processing and it just crashes without an error. Tried 2 PCs with different setups. It’s simply too crashy to be considered a professional tool.

Faulting application name: Topaz Video Enhance AI.exe, version: 0.0.0.0, time stamp: 0x5e612cd7
Faulting module name: ucrtbase.dll, version: 10.0.18362.387, time stamp: 0x4361b720
Exception code: 0xc0000409
Fault offset: 0x000000000006db8e
Faulting process id: 0x26b8
Faulting application start time: 0x01d5f75b35964d06
Faulting application path: C:\Program Files\Topaz Labs LLC\Topaz Video Enhance AI\Topaz Video Enhance AI.exe
Faulting module path: C:\WINDOWS\System32\ucrtbase.dll
Report Id: b8da3011-11f4-4021-ad42-cbb85f7d9b38
Faulting package full name:
Faulting package-relative application ID:

It barely uses half of my mid-range GPU, which is a clear sign it’s not optimized.

I would buy a couple of copies for work, but considering the product can’t remain open long enough to finish a decent-sized job, it’s not worth the aggravation and wasted time.

1 Like

Hi, if you are using Windows 10 please raise a support request at the main website, topazlabs.com. I take it you meet the technical requirements, which are:

  1. Needs an Nvidia GPU with >4GB VRAM to run fast (it can run on the CPU, but quite slowly).
  2. Cannot handle “interlaced” video directly; footage needs to be de-interlaced first (see the ffmpeg sketch after this list).
  3. Windows 10 platform ONLY, Mac OS 10.12 or higher
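
For requirement 2, a minimal de-interlacing pass before loading the clip into Video Enhance AI could look like the following ffmpeg sketch (bwdif is just one possible deinterlacer; yadif also works, and the file names are placeholders):

ffmpeg -i interlaced_source.mp4 -vf bwdif=mode=0 -c:v libx264 -crf 16 -pix_fmt yuv420p progressive.mp4

mode=0 outputs one progressive frame per pair of fields, keeping the original frame rate; use mode=1 for one frame per field (double rate).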

I tried a DVD with interlaced PAL. The VOB files were remuxed (not re-encoded) into an mkv container using mkvmerge. Checked with MediaInfo, the video stream in the mkv is definitely interlaced. The mkv opens and deinterlaces in Video Enhance AI. So I think you should clarify what you mean by “Cannot handle “interlaced” video directly, footage needs to be de-interlaced first,” because what you’re saying doesn’t appear to be true. Edit - I was wrong; the DVD I used was only flagged as interlaced, but was encoded as progressive, so Video Enhance AI handled it. It would have been nice if someone from Topaz had communicated here.
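
If you want to check whether a given file is actually interlaced or merely flagged that way, ffmpeg's idet filter can sample the frames and report what it finds; a rough example (the file name is a placeholder):

ffmpeg -i dvd_remux.mkv -vf idet -frames:v 1000 -an -f null -

At the end of the run it prints counts of TFF / BFF / progressive / undetermined frames for the sampled frames; a stream that is flagged interlaced but encoded progressive will report nearly everything as progressive.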

terryleemartin13, as a workaround you could save / export as png in Video Enhance AI, if you have the disk space. That way, if it crashes, you should be able to restart the program and pick it back up at the exact frame where it crashed.

To encode the individual png frames into video:

ffmpeg -fflags +genpts -framerate 25 -f image2 -i filename%06d.png -profile:v high -level 4.0 -preset veryslow -crf 10 -pix_fmt yuv420p filename.mp4

Notes:

%06d matches the six-digit frame counter at the end of the file names; adjust the pattern so it matches the exported names exactly (e.g. filename_%06d.png if the files are named filename_000001.png, filename_000002.png, etc.).

“-framerate 25” is very important: change it to the exact frame rate of the original video (29.97, 30, 24, etc.), and keep it before the -i option so the image sequence is read in at that rate.

Finally, “-crf 10” is the constant rate factor (the quality setting); you can change this as you please. 23 is the default, 17 or 18 is considered “visually lossless,” so 10 is a very high quality that is suitable for further processing until you get your final result. If the output from Video Enhance AI is the final result, you might change this to 18 or 23.
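
One more note: the png route drops the audio, so if your source has an audio track you can copy it back from the original file after encoding. A hedged example, assuming the original is original.mp4:

ffmpeg -i filename.mp4 -i original.mp4 -map 0:v:0 -map 1:a:0 -c copy filename_with_audio.mp4

Both streams are stream-copied; only the container is rewritten, so nothing is re-encoded.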

Alternatively, you could split your source video up into say 15 minute segments, process those separately in Video Enhance AI, then join them back when finished.

For source video:

ffmpeg -i input.mp4 -c copy -map 0 -segment_time 900 -f segment output%03d.mp4

“-segment_time 900” is the size of your segments in seconds, 15 minutes in this case.

Process output000.mp4, output001.mp4, etc. in Video Enhance AI. When finished, to join the results, first make a small text file (e.g. segments.txt) listing the processed files in order:

file 'output000.mp4'
file 'output001.mp4'
file 'output002.mp4'

Then concatenate them with ffmpeg’s concat demuxer (the “concat:” protocol does not work with mp4 files):

ffmpeg -f concat -safe 0 -i segments.txt -c copy output.mp4

Note that the above ffmpeg commands for splitting and joining do NOT re-encode anything; the streams are bit for bit the same, just split / combined.

2 Likes