Topaz Video AI 4.2.2

9 days?! :eyes:

1 Like

We might need holodeck technology for that to become a reality.

1 Like

Shot/scene backgrounds can be somewhat expanded like that with real footage from nearby frames if there is camera or zoom movement. Boris FX Mocha Pro does this, for example. There tend to still be some missing bits, but the AI would have much less work to do because it would have more than the original 4:3 frames to work with (the totality of the background across the entire shot).

Trouble is that those pesky directors tend to cut to a new shot every 4 or 5 seconds in some genres, and not a lot longer in most others, so your AI might have to work on hundreds or thousands of short clips and try to find consistency between clips that often isn’t there. Heck, AI is already inconsistent at duplication even when everything ‘real’ is identical. AI would invent the missing bits of the same backgrounds differently in different clips, and we’d notice.

But it should be possible to fill the missing parts of an expanded (4:3 → 16:9, etc.) static background in, say, a 5-second clip, probably at lowish resolution, I guess. Making it look the same in other shots, or doing it for animated objects including people, I suspect won’t happen though, not in the foreseeable future anyway.

I don’t think you’re likely to see much effort toward that goal, since original aspect ratio with black bars became the SOP over stretched widescreen way back in the early days of DVDs.

It’s not a stupid idea :sweat_smile:. But unfortunately it doesn’t yet exist in reality.

Maybe it’ll come one day. Maybe in 10 or 20 years. But to do it right, you’d need a model created specifically for the 4:3 Star Trek series so that it could analyze every sequence in every episode. That way, it could better determine the fill areas for 16:9, based, as you say, on camera movements and the neighboring frames, and keep the whole thing coherent.

1 Like

Aion always fails with an error within a few minutes of starting a conversion. Not enough VRAM on a 3070 to use this model, I’m assuming?

1 Like

I have been struggling with this problem for a long time. I explained all my latest findings in the last post of the thread I opened some time ago about it:

1 Like

Are there any plans to support ProRes 4444 output?

That’s right!
That’s what happens when Aion has such strict memory requirements that I had to fall back to the CPU.
The first run didn’t produce a file, for a reason I don’t know, so after updating my OS and the program, I am now checking every 12 hours to confirm that the output file is growing.
P.S. I wish partial files were playable. I’m pretty sure mpv and VLC support that in principle, but it doesn’t work here.

Oh no! If I can help you out by processing it for you, don’t hesitate to ping me. :slight_smile:

I think it’s exclusive to Mac. I don’t know for sure, but I see someone say that about once a week on here.

1 Like

Even when I don’t use mark-in and mark-out, I’m getting Aion failing in the same way on my 3070 8GB.

I’m trying to take a 720p 30fps file to 1080p 60fps with Proteus and Aion, so nothing particularly demanding, I wouldn’t think.

Given that Aion’s quality is VRAM-dependent according to Topaz, I probably won’t get great results with it anyway.

I just wanted to experiment with it, I guess, because Apollo still leaves something to be desired: while it’s supposedly AI-powered, I’d say its quality is merely on par with Adobe’s non-AI Optical Flow interpolation introduced five years ago. Well, maybe it’s slightly better than Optical Flow, but not by a lot.

Here are a few clips I did as a test. You can try the supplied original clip to see how Adobe Optical Flow compares. If they’re just using Nvidia Optical Flow, then all of the interpolation models in TVAI are easily better. (You might have some trouble, since Adobe seems loath to admit that Matroska is a competent digital video container worth supporting.)

I guess my opinion is based mostly on the anecdotal evidence that I still frequently see obvious artifacting with Apollo at a 2x frame rate when someone waves their arm across the frame, the same way I do with Optical Flow, even if it’s a little less obvious with Apollo. In both cases, the footage with the artifacting is still unusable and has to be edited around, or that segment of the video has to be replaced with non-interpolated footage.

I’ve heard that said too, and I just cannot fathom why. I paid an arm and a leg to upgrade from a 4:2:2 to a 4:4:4 recorder for my camera, and now I have to buy a Mac too?

Could we get an official statement on this @tony.topazlabs?
It feels like I’m missing something when I have to sacrifice half of my color resolution in such advanced software.

I was considering purchasing an upgrade, but it won’t make sense until this issue gets resolved.
Could I please get an update/confirmation that this is being looked into?

9 days? But that’s madness. You’re going to burn up your CPU running your PC 24/7 for 9 days, especially for rendering. It’s not good for the machine. And hello, energy consumption, too.

Hmmm, it’s only bad for the machine when the cooling is inadequate.
Normally a PC should be able to run at max load, especially if it’s built for overclocking.

You only burn up your hardware if it’s not well cooled. Besides, the CPU throttles its own speed when it runs hot.
A cooling system that isn’t designed for sustained full load should never be in a PC for long-term use anyway…

beautiful teeth!