Topaz Video AI v3.1.7

I am noticing a shimmering artifact when using Proteus - I first saw it in 3.1.5 in only a select few frames, but it's back again in 3.1.7. The older models back in V2 don't reproduce it, so I assume it's a bug in the newer versions of Proteus.

I have uploaded the test sequence to the dropbox. Use Proteus Manual with settings 20, 55, 25, 25, 25, 25. Watch the mouth area of the person in the middle and preview - you will see constant shimmering around his face.

If you use the same settings in Proteus V1, no shimmer. If you stay on this version and switch to Artemis Halo, no shimmer. Drilling down further, it appears to be primarily linked to the noise reduction, as turning that option to 0 virtually eliminates it.

I am just not sure whether the bug is in Proteus itself or in the speed improvements of these newer versions - either way, something is not quite right with the noise reduction part of Proteus at the moment.

1 Like

No, I stress that Chronos is generally unsuitable for converting to 60 fps if there is panning camera movement in the frame. There will be strange flicker on trees, mountains, and sand, which spoils the whole video…

There are different uses for Chronos/Apollo, and for some people increasing the frame rate is not important. Also, some people just want to make a quick comparison - for example, of how a face changes in one particular scene, in some cases only a few frames - and the preview method in 2.x allows this to be done very quickly and intuitively. This was very useful with the various upscaling models, for example, or even with the different flavours of Chronos.

The original comparison method should have been retained and the new method in 3.x offered as an option. Devs should not be second-guessing existing customers' use cases on important core features like previews; offering the change as an option is better.

2 Likes

If Apollo can overcome those problems, there basically won't be any reason to use Chronos Fast; it just seems like that's a long way off.

Yes, it's crazy this issue has been around forever.
I have a 4K display, so I have Windows scaling set to 130%. Knowing it's an issue with Topaz, I couldn't be bothered testing it at 100% Windows scaling.

I always set the preview at 74%, which is closest to the true size. This worked great in V2; the app would remember it for the entire session. Now in V3, VEAI just assumes I want fit-to-screen or 100% and I have to change it constantly. Yet another (!) reason why I'm still on 2.6.4.

2 Likes

I assume it's just a matter of time before AI will recognize the shots and apply models accordingly. I don't see any other way. You can't expect hobbyist consumers to process every scene separately - it's too time-consuming.

We're still in beta, and frankly I don't care about hobbyists - they can keep doing what they do now ("Auto") - but for the rest of us who would use it, it would be nice to have.
I agree that eventually AI will be able to take my settings, see the difference I have set compared to the original, and then use that to choose how to adjust the scenes as it goes. Maybe I find some "key frames", for lack of a better term, and then it uses the settings I chose for that section of video to adjust and apply to the rest of the film.

Yeah, as a professional product Topaz is not it (yet). Everything about Topaz screams consumer. From the interface (development) to the marketing tactics (juggling with discounts), I can see it's aimed at them :slight_smile:

ACDSee, a consumer product (when they add 'Ultimate/Professional' you know it's not pro), uses the same marketing but worse, throwing out discounts like it's Christmas every day, trying to sell me an update with an EXCLUSIVE offer FROM THE CEO (wooaah) at a reduced price which is literally the regular upgrade price they have all year long :rofl:
But hey, it’s ‘coming from the CEO’ so must be good :rofl: :crazy_face:

It’s not strange to dislike grain because, since the late 1990s, shows and movies have used it sparingly. There are many shows and movies I’ve seen where there appears to be no grain at all, just a clear, sharp image.

I only use Proteus because it lets me avoid that plastic look the other models give, which in my opinion is more of a remastered look - and that can't really be pulled off yet with shows that are not in HD.

I only use upscale, and my settings are very conservative, so I hardly have any problems. My stuff looks as if it was done by a pro. I’m very nitpicky because I want to share my work with family, and I want it to look as natural as if, like I said, it were done by the pros.

It is interesting that you mention selecting different models, but I don't know if that will ever be done, because I think the program would get confused. I do like that you have a very deep mind, and I believe technology is always improving, so maybe one day a program can do that complex work you speak of.

2 Likes

Well, I just said it so people would understand what I mean. My use case would be multiple instances of the same model with different settings for different time periods within the same movie.
Let's take a movie like Clash of the Titans, the older one with the mechanical owl. The dark scenes in that movie have a solid 50% more artifacts and noise. I would want to use COMPLETELY different settings for those scenes to make them look anything like the brighter scenes, but I cannot currently do that.

For the future, when they have different models… I might want to use some AI face detection for scenes that are face-heavy, and then something better suited to motion or scenery recreation elsewhere. I might want to use Chronos for low-motion scenes and Apollo for high-motion scenes… I am thinking ahead here while also applying how I would use it now. Heck, maybe there is a scene showing a "ye olde timey" television with a black-and-white image on it, where the whole point is for that image to be noisy and blurry - if I have the same model and settings on for the whole movie, it would clean up that image and ruin the entire effect.

It is 100% a tool or function the minority would use regularly, but for that minority this would be a game changer as much as Video AI was over something like MeGUI.
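
Just to make the idea concrete, here's a very rough sketch of what per-scene settings could look like. It uses PySceneDetect to find the cuts; pick_settings() and process_scene() are purely hypothetical stand-ins for whatever would actually choose and render the settings - nothing like this exists in Video AI today.

```python
# Very rough sketch of per-scene settings - NOT a real Video AI feature.
# Assumes PySceneDetect is installed (pip install scenedetect[opencv]);
# pick_settings() and process_scene() are hypothetical placeholders.
from scenedetect import detect, ContentDetector

def pick_settings(start_frame: int, end_frame: int) -> dict:
    """Hypothetical lookup: map a frame range to the settings chosen for its 'key frame' section."""
    # e.g. dark cave scenes get heavier noise reduction than bright daylight scenes
    if start_frame >= 52_000:
        return {"model": "Proteus", "noise": 40, "deblur": 10}
    return {"model": "Proteus", "noise": 15, "deblur": 10}

def process_scene(src: str, start_frame: int, end_frame: int, settings: dict) -> None:
    """Hypothetical: render frames [start_frame, end_frame) of src with the given settings."""
    print(f"{src}: frames {start_frame}-{end_frame} -> {settings}")

scenes = detect("clash_of_the_titans.mkv", ContentDetector())  # list of (start, end) timecodes
for start, end in scenes:
    s, e = start.get_frames(), end.get_frames()
    process_scene("clash_of_the_titans.mkv", s, e, pick_settings(s, e))
```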

Apollo 7 does not replace duplicate frames.
Source: a typical DVD of CG animation. You don't get any cleaner than that at DVD-level quality, meaning the duplicate frames will match more closely than they would in more complex content.
Set slowmo to 2x.
Tried with sensitivity at 10 and at 90. Results were the same.
In both cases, the duplicate frames produce frames that look like a quarter-step back in time.

Uploading the video to the drop-box.

2 Likes

When I got that backwards jump, it was the field order that was wrong.

When playing back the source file, every fifth frame is a duplicate. It plays like most other DVDs. I just thought I'd try out the replace-frames feature instead of my usual method of dropping frames with ffmpeg.
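
For reference, the usual ffmpeg trick for that is the mpdecimate filter - roughly something like this sketch (placeholder file names, not necessarily the exact command I use, and you'd add codec options to taste):

```python
# Sketch: drop near-duplicate frames with ffmpeg's mpdecimate filter.
# "in.mkv"/"out.mkv" are placeholders; this re-encodes, so add codec options as needed.
import subprocess

subprocess.run([
    "ffmpeg", "-i", "in.mkv",
    # mpdecimate drops frames that barely differ from the previous one;
    # setpts rebuilds the timestamps so playback has no gaps
    "-vf", "mpdecimate,setpts=N/FRAME_RATE/TB",
    # variable frame rate output, so ffmpeg doesn't re-duplicate frames
    "-vsync", "vfr",
    "out.mkv",
], check=True)
```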

1 Like

Could you reach out to our support team? They can provide you with instructions on how to get your installer logs to us.

Hi… I am trying the new Themis, but I am seeing serious color shifting between the original image and the upscaled image.

1 Like

Getting an error (as others have also mentioned) with these newer versions. I uploaded the log to dropbox already.

I went through the log to look at the error and this seems to be the important bit

2023-03-03 14-44-12,273 Thread: 
2023-03-03 14-44-12 Thread: 7468 Info OUT: 2 7844 Critical ONNX problem:  Run:  Non-zero status code returned while running DmlFusedNode_0_5 node. Name:'DmlExecutionProvider_DmlFusedNode_0_5_0' Status Message: D:\a\_work\1\s\onnxruntime\core\providers\dml\DmlExecutionProvider\src\FusedGraphKernel.cpp(479)\onnxruntime.dll!00007FF9C1CBF3C4: (caller: 00007FF9C1CBD815) Exception(2) tid(1ea4) 887A0006 The GPU will not respond to more commands, most likely because of an invalid command passed by the calling application.


2023-03-03 14-44-12,273 Thread: 7844 Critical Model couldn't be run for outputName out out
2023-03-03 14-44-12,273 Thread: 7844 Critical Error debug information
2023-03-03 14-44-12,273 Thread: 7844 Critical Unable to run model with index  0  it had error:  
2023-03-03 14-44-12,273 Thread: 20244 Critical Model Backend state is invalidiated due to previous errors
2023-03-03 14-44-12,273 Thread: 7844 Critical Caught exception in tile processing thread and stopped it msg: unable to run model with index 0 1

2023-03-03 14-44-12 Thread: 7468 Info OUT: 2 2023-03-03 14-44-12,273 Thread: 20244 Critical Unable to run model with index  0  it had error:  

2023-03-03 14-44-12 Thread: 7468 Info OUT: 2 2023-03-03 14-44-12,273 Thread: 20244 Critical Caught exception in tile processing thread and stopped it msg: unable to run model with index 0 2

I also managed to see the error occur while using the computer this time; on Windows 11, the message below appeared:
“application blocked from accessing graphics hardware”

And from my Event Viewer log in Windows:

Faulting application name: ffmpeg.exe, version: 0.0.0.0, time stamp: 0x63fd363a
Faulting module name: ucrtbase.dll, version: 10.0.22621.608, time stamp: 0xf5fc15a3
Exception code: 0xc0000409
Fault offset: 0x000000000007f61e
Faulting process id: 0x0x28F4
Faulting application start time: 0x0x1D94E185236CA94
Faulting application path: C:\Program Files\Topaz Labs LLC\Topaz Video AI\ffmpeg.exe
Faulting module path: C:\Windows\System32\ucrtbase.dll
Report Id: 746ed149-5dcd-4961-887b-cebea193dd61
Faulting package full name: 
Faulting package-relative application ID: 

Hopefully this is something that can be resolved. I have now seen this error in versions 3.1.4 through 3.1.7.

1 Like

It would be great to have the option to trim clips using the frame number. That option was available in previous Topaz Video AI versions, but I cannot find it anymore.

5 Likes

See that orange bar? You slide it now.

I am pretty sure that is not what he is asking. Since v3 the timecode used in Video AI is SMPTE (hour:minute:second:frame). Previously you could choose in preferences to use that or the number of frames, which comes in very handy when the application is crashing, sometimes after 48 hours of processing. Using the number of frames, you can split, let's say, an entire movie into sections that are easier to manage and join them afterwards to reconstruct the movie. I asked last week for this feature to be re-implemented, if possible.

In the meantime I came up with a temporary solution. I split the movies into sections of 20,000 frames using Adobe Premiere Pro, process all the parts with Video AI, then join them all with MKVToolnix. I wasn't able to find any other app, at least for macOS, that can split videos by frame number.
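
For what it's worth, ffmpeg itself can cut by frame number with its trim filter if you accept a re-encode (frame-accurate cuts can't just copy the stream) - a rough sketch with placeholder file names:

```python
# Sketch: cut chunks of a movie by frame number with ffmpeg's trim filter.
# Frame-accurate cuts require re-encoding; file names and ranges are placeholders.
import subprocess

def cut_by_frames(src: str, dst: str, start: int, end: int) -> None:
    subprocess.run([
        "ffmpeg", "-i", src,
        # trim keeps frames [start, end); setpts restarts timestamps at zero
        "-vf", f"trim=start_frame={start}:end_frame={end},setpts=PTS-STARTPTS",
        "-an",  # video only here; audio would need a matching atrim/asetpts chain
        dst,
    ], check=True)

cut_by_frames("movie.mkv", "movie_part01.mkv", 0, 20_000)       # frames 0-19,999
cut_by_frames("movie.mkv", "movie_part02.mkv", 20_000, 40_000)  # frames 20,000-39,999
```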

2 Likes

Ohh, I never knew it was possible to split out certain parts.

1 Like