Video Enhance AI v1.6.1

You can use ffmpeg to replace the audio without touching the video quality. Just drop the re-encoded audio, copy the audio stream from the original video, and pass `-c:v copy` to avoid re-encoding the video. It shouldn't take longer than 10 minutes, even if you've never touched ffmpeg before.
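A minimal sketch of that remux (the filenames are placeholders): take the enhanced video stream from the VEAI output, take the audio from the original file, and stream-copy both so nothing is re-encoded:

```shell
# Video from the VEAI output, audio from the original source.
# -c copy stream-copies both, so neither video nor audio is re-encoded.
ffmpeg -i veai_output.mp4 -i original.mkv \
  -map 0:v:0 -map 1:a \
  -c copy \
  merged.mkv
```

`-map 1:a` grabs every audio stream from the original, so a 5.1 track comes through untouched.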
Personally, I don’t even bother exporting a container video from Topaz; it’s better to control the encoding process yourself by feeding the PNG frames into the editing software of your choice, or by using ffmpeg directly.
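If you go the PNG route, encoding the frame sequence with ffmpeg might look like this; the frame rate, filename pattern, and CRF below are assumptions you’d match to your own source:

```shell
# Encode a numbered PNG sequence to H.264 and mux in the original audio untouched.
# 24000/1001 (23.976 fps) and frame_%06d.png are assumptions -- match your export.
ffmpeg -framerate 24000/1001 -i frame_%06d.png -i original.mkv \
  -map 0:v -map 1:a \
  -c:v libx264 -crf 16 -pix_fmt yuv420p \
  -c:a copy \
  encoded.mkv
```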

Update: Sorry, I see you already know about ffmpeg. I think Topaz should just copy all audio streams from the source to the result without re-encoding the audio.


As I said in my post, thankfully I know how to use ffmpeg to fix the audio… How does your comment, which suggests I use ffmpeg to fix the audio, help?

Hi there. Please contact support so we can get your file. We did test 5.1 audio and it normally converts to stereo just fine. It may be something specific about your video files.

Please turn on logging and then send your logs into support.

Matt, what do you mean by “we did test 5.1 audio and it normally converts to stereo just fine”? It shouldn’t convert to stereo, it should stay at 5.1!!! Your product is supposed to enhance my videos, which FYI includes the audio, NOT deliberately downscale part of a video… However, what your comment suggests is that you’re doing just that! I’m utterly gobsmacked by your admission.


I refer to my previous post about you downscaling 5.1 audio. What you’re asking me to do now is turn on logging, wait another 4 hours of processing time, and redo a 23-minute video, even though I know that you’ve deliberately downscaled the audio… I’m sorry, but I’m so angry at the moment that I don’t think I should tell you what I think of that.

I’d suggest cutting the video down and just processing a small chunk. Unfortunately, the only way issues get fixed is if people take the time to contact support.

We tried copying the audio, but it proved too unreliable. There are just too many audio formats, and many of them had issues (Dolby 5.1 being one of them). In our testing, every video file we processed did successfully keep its audio in sync with a stereo track, so we kept that. For most users, we found an in-sync stereo track to be more useful than an out-of-sync multi-channel track.

VEAI is about enhancing the video tracks, not the audio. We do our best to keep the audio in sync so that you can align higher-resolution tracks in post if you want (many tools, such as ffmpeg, can do the job), but this isn’t a core focus of the app. It’s more of a convenience feature.

I’m sorry that we didn’t set your expectations correctly in this regard. VEAI is more of a tool in a toolbox than a full solution. If you would like to make a suggestion for a product enhancement, I encourage you to send it to support. We review these every morning and look for trends in what people are asking for.


Matt, your current audio offering is absolutely worthless to me, as I suspect it will be to many other users. As far as I’m aware, you have not published notes detailing the audio formats you’ve chosen to downscale or change. This means I have no idea whether any given VEAI output has “affected” audio, and therefore I have to assume EVERY video needs correction. Isn’t it time you produced proper documentation for your product? What’s currently available is extremely poor and inadequate.

All audio is transcoded to stereo AAC. Even if it was stereo AAC to start with, it is re-encoded at a new bit rate. I can’t remember off the top of my head what that bit rate is, though.
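One way to verify exactly what happened to the audio in a given output is to inspect it with ffprobe (the filename is a placeholder):

```shell
# List codec, channel count, and bit rate for every audio stream in the file.
ffprobe -v error -select_streams a \
  -show_entries stream=codec_name,channels,bit_rate \
  -of default=noprint_wrappers=1 veai_output.mp4
```

If the source was 5.1 and this reports `channels=2`, the track was downmixed.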

This is precisely why you need to produce documentation.


@matt.lathrop

I’ve been testing 1.6.1 for several days now, and it’s definitely better than previous versions. I am getting great results with the Theia HF model, with the following settings:
[Screenshot: VEAI Theia settings]

Here’s a comparison between VEAI v1.6.1 and VEAI v1.5.1:

It’s obviously a great improvement: the “Matrix Grid” artifacts are greatly reduced and barely visible in VEAI v1.6.1 (Theia-HF 2.0). Great job to the team!!

I’m also getting better performance with the Theia model compared to the Gaia-HQ model, about 70% faster (0.3 s/frame vs. 0.51 s/frame).

The Theia model has become my favorite due to the ability to adjust the processing parameters.


My previous settings for Theia HF were Sharpen = 12, Restore = 97.
After testing v1.6.1, my new settings are Sharpen = 55, Restore = 82, De-noise = 7.
Pushing the Restore setting up toward 100 gives a beneficial glare reduction in some areas, but a loss of detail around some shadowed areas. 80 seems to be a good starting point.

For some footage, Theia HF is giving better results than Gaia CG.
The grid pattern artifacts are also reduced compared to Gaia CG.

How do your tests with ‘Restore Detail = 80’ fit with hello.tien’s comment?
Thanks

I occasionally run into a file that has an incorrect Display Aspect Ratio in its metadata. For example, the frames might be 512x384, which is 4:3, but the DAR in the metadata says 5:4. When I play this in most players, they ignore the DAR and display the frame at its proper size.

If I take that file and put it in VEAI at 200%, the product is 1920x1536 with a DAR of 5:4 instead of the expected 2048x1536.

Once found, I correct the metadata without re-encoding the original using `ffmpeg -i <INPUT_FILE> -aspect 512:384 -c copy <OUTPUT_FILE>`, and then I can process it. But often I find this only after a few hours of incorrect processing and have to start over.
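To catch these files before losing hours of processing, you could check the stored vs. declared aspect ratios up front with ffprobe (filename is a placeholder):

```shell
# Print pixel dimensions plus the sample (pixel) and display aspect ratios
# declared in the metadata for the first video stream.
ffprobe -v error -select_streams v:0 \
  -show_entries stream=width,height,sample_aspect_ratio,display_aspect_ratio \
  -of default=noprint_wrappers=1 input.mp4
```

If `width:height` doesn’t reduce to the reported `display_aspect_ratio` (and `sample_aspect_ratio` isn’t 1:1 on purpose), the metadata is suspect.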

I can’t think of a circumstance where you should ignore the actual pixel dimensions of the frame and force it into a different ratio, which is what you’re doing.

The AI processing can’t handle non-square pixels so we have to use the aspect ratio in the metadata to convert to a different resolution with square pixels.

Hmm. I understand the square pixel need, but I just don’t see how it applies here. When I change the metadata, without any change to the pixels, VEAI handles it just fine.

Incidentally, I mistakenly said 200% when the numbers show I meant 400%, but the same problem would occur at 200%, or probably at any other resizing.

The metadata is the only way to know the aspect ratio. You could have two 720x480 videos, each with a different aspect ratio, so we need to read the metadata to know what size to output them. Does that make sense?

Not really. The aspect ratio of a 720x480 video is 3:2. If the metadata incorrectly says 4:3 and you then squeeze the horizontal to match that incorrect ratio, you wind up with a weird-looking video.

Ah, I think you should go read up on things like anamorphic video. The ratio of the pixel dimensions (w:h) only coincides with the display aspect ratio in the case of square pixels. If you have non-square pixels, w:h will not equal the display aspect ratio.
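The relationship can be sketched in a few lines: the display aspect ratio (DAR) is the storage ratio w:h multiplied by the pixel aspect ratio (PAR). The 8:9 and 32:27 values below are the standard NTSC DV pixel aspect ratios, used purely as an illustration of two 720x480 files with different display shapes:

```python
from fractions import Fraction

def display_aspect(width, height, pixel_aspect):
    """DAR = storage aspect (w/h) * pixel aspect ratio (PAR)."""
    return Fraction(width, height) * pixel_aspect

# Two 720x480 videos, same pixel dimensions, different display aspect ratios:
print(display_aspect(720, 480, Fraction(8, 9)))    # 4/3  (NTSC DV 4:3)
print(display_aspect(720, 480, Fraction(32, 27)))  # 16/9 (NTSC DV widescreen)

# With square pixels (PAR = 1), DAR is just w:h, e.g. 512x384 -> 4:3:
print(display_aspect(512, 384, Fraction(1, 1)))    # 4/3
```

This is why the metadata matters: the pixel count alone can’t distinguish the two 720x480 cases.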
