Download: Windows, Mac. Released January 19, 2020.
This release brings the ability to directly enhance interlaced video. The new “Dione” series of models denoises, sharpens, and doubles the frame rate of interlaced videos in one pass.
“Dione-DV” is trained mainly to deal with high-quality interlaced digital sources such as DV, DVCPRO, DVCAM, etc.
“Dione-TV” is designed to handle analog TV/VHS/8mm sources.
“Dione-TD” is developed to be more robust for the mixed frame type in DVDs.
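The reason deinterlacing can double the frame rate is that each interlaced frame carries two temporally distinct fields (the even and odd scan lines). A minimal sketch of the idea in Python, assuming a frame is just a list of scan lines (this is purely illustrative, not how the Dione models work internally):

```python
# Sketch: why deinterlacing doubles frame rate. Each interlaced frame
# holds two fields captured at different moments; a "bob" deinterlacer
# emits one full-height frame per field.

def split_fields(frame):
    """Separate an interlaced frame (list of scan lines) into its
    top field (even lines) and bottom field (odd lines)."""
    return frame[0::2], frame[1::2]

def bob_field(field):
    """Naive bob: rebuild a full-height frame from one field by
    line-doubling. Real deinterlacers interpolate the missing lines."""
    out = []
    for line in field:
        out.append(line)
        out.append(line)  # repeated line; NNEDI3 etc. predict it instead
    return out

interlaced = ["t0", "b0", "t1", "b1"]  # 4 scan lines, two woven fields
top, bottom = split_fields(interlaced)
frames = [bob_field(top), bob_field(bottom)]
print(len(frames))  # one interlaced frame becomes two frames
```

The AI models replace the crude line-doubling step with learned interpolation, which is where the denoising and sharpening come in.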
This release also updates Artemis-HQ/MQ/LQ to v10. These models improve upon v9 and should produce output with better denoising, less residual color flicker, and fewer artifacts.
Changelog:
New Dione-DV/TV/TD v1 models for de-interlacing/enhancing/upscaling interlaced videos
Update Artemis HQ/LQ/MQ to v10
Add adaptive grain parameters to reduce “plastic”-looking output.
Support reading .dv, .vob and .mxf video files.
Change the “scale” to a real number for setting precise output dimensions.
Improved processing speed on certain systems.
The installer has an option to keep existing models
Bug fixes:
The “All GPUs” mode no longer causes a model-loading error.
Fixed inconsistent behavior when changing model parameters.
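The changelog item about “scale” becoming a real number means output dimensions can now be set with fractional factors. A small sketch of how a fractional scale might map to output dimensions (illustrative only; the even-rounding behavior is my assumption, since most codecs require even dimensions, not something Topaz documents here):

```python
# Sketch: mapping a real-valued scale factor to output dimensions.
# Rounding to the nearest even number is an assumption for codec
# compatibility, not confirmed Video Enhance AI behavior.

def scaled_size(width, height, scale):
    """Scale dimensions by a real-valued factor, rounded to even."""
    def even(x):
        return int(round(x / 2.0)) * 2
    return even(width * scale), even(height * scale)

print(scaled_size(720, 576, 1.5))   # PAL DVD at 1.5x -> (1080, 864)
print(scaled_size(720, 480, 2.25))  # NTSC DVD at 2.25x -> (1620, 1080)
```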
My experience with Artemis v10 and low-quality source material:
great deblocking capabilities
great recovery of all sorts of edges
great smoothing of gradients
limited ability to recover surface structures/textures
Overall it often ends up looking a bit artificial/sketchy. The ability to recover believable surface details may need additional training. But overall the results are already impressive.
We have been told (a while ago) that a custom appdata directory choice is in the works …
But now several weeks and even months later (and several releases later) it’s still not available in the new release…
The models folder is on the C drive by default and it can get really big: it’s a problem, and it can become a big one (the size increases with each model downloaded).
Gigapixel has had this option for a very long time …
So is there a problem specific to Video Enhance AI to implement a custom appdata directory?
Just finished my first “test-drive” with the new version and I think VEAI is not quite there yet but definitely on the right track.
Artemis, for example, I hated when it first came out, but now in version 10 I’m really starting to prefer it over Gaia.
Dione, I think, needs some more training, since it over-pronounces details in a slightly strange way, but it is miles away from the weird quirks that Gaia often shows with things like hair or vegetation. Some details look astonishingly better than with Gaia and QTGMC deinterlacing; in other words, more real. It makes me think that all the time I spent trying different filters or resolutions as preparation was actually counterproductive.
Overall it’s really good that there are now models for specific source formats like DVDs. One thing bugs me, though: I wonder why digital and analogue sources are targeted with one and the same model? I’m surely not an expert, but the typical problems of VHS tapes and DVDs don’t seem to overlap much. That said, I would love to see codec-specific models, for example DivX or Xvid.
Keep up the good work! BTW, one thing that would be a nice addition, since VOBs are now supported, is to add them in sequence. With some DVDs I always had an odd jump between different files that I could only solve by exporting the entire stream as one file.
I am not exactly sure if it’s the enhancement model itself or the built-in deinterlacing that’s making the difference, but I think it looks pretty good for the first version of this new model.
Regarding QTGMC, I would say it’s not the deinterlacing but rather the re-encoding that’s the problem. The official guides do state it’s best to leave everything except deinterlacing (until now) to the AI. But I never cared much for that advice, since a lot of other people on this forum have shown some pretty good results by pre-editing some things, like color grading, grain filtering, etc. For me this has so far only helped to reduce some quirky artifacts, but it also made the picture blurry, or other artifacts started appearing instead, as if the AI no longer recognized some details, even though to my eyes there was no difference.
I’m much the same: I don’t pre-edit at a cosmetic level unless the video is in evident need of care before hitting the upscaler. I use QTGMC and Neat Video to get the video structurally into shape before going to Topaz, but I’m starting to research ways to speed up my workflow without sacrificing quality.
I tried a lot, also playing with resolution quite a bit, since somehow the AI really shines when applied to low-res material, but this has its own flaws. So far I have found no method that actually does better than the actual original source (except for interlaced material, of course). At least for me… I don’t know if I’m just doing something wrong or just have mostly poor sources, since I never knew before how much variance there actually is between different DVDs.
But within QTGMC there are also mind-bogglingly many differences to take into account. When I started I used the AviSynth version and cranked everything up in terms of quality, but once I switched to VapourSynth/Hybrid I realized there is no “up”, only a difference, even when using all the placebo options.
BTW, I have found a good method to get a bit of both worlds when I encounter one of those artifact-riddled but still too-“blurry to blur even more” types of video. If you do two runs (or even three), one with Gaia (for details) and one with Artemis (for sharp edges), you can composite them afterwards by layering them in, for example, Adobe Premiere. I usually play around with 40% to 60% opacity. If you’re really desperate, this solution can also be applied to specific spots that are especially impacted.
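The layer-opacity trick above is, per pixel, just linear interpolation between the two renders. A minimal sketch, with the layer names and values purely illustrative:

```python
# Sketch of the two-pass compositing trick: a Gaia pass layered over
# an Artemis pass at a given opacity is per-pixel linear interpolation.
# Pixel values and layer roles here are illustrative assumptions.

def blend(top_pixel, bottom_pixel, opacity):
    """Blend two 8-bit pixel values; opacity is the top layer's
    weight in [0, 1], as in Premiere's layer opacity slider."""
    return round(top_pixel * opacity + bottom_pixel * (1.0 - opacity))

gaia_pixel = 200     # detail-oriented pass
artemis_pixel = 120  # edge-oriented pass
print(blend(gaia_pixel, artemis_pixel, 0.5))  # 160: halfway mix
```

At 40–60% opacity the result sits between the two models’ renderings, which is exactly why the trick softens each model’s failure modes.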
Technically, QTGMC is not a deinterlacer but a sophisticated AviSynth script calling many other filters (dependencies); the main deinterlacer inside QTGMC is NNEDI3. QTGMC is more than just a deinterlacer: it’s also a denoiser, temporal smoother, and repair tool.
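That “script calling many filters” structure can be sketched as a simple pipeline of composed stages. The stage functions below are placeholders that only tag each frame, not QTGMC’s actual internals:

```python
# Illustrative pipeline mirroring the description of QTGMC above: a
# deinterlacing core (NNEDI3 in QTGMC) chained with denoising,
# temporal smoothing, and repair stages. Stages are stubs that tag
# frames so the ordering is visible.

def deinterlace(frames):      # core step (NNEDI3 in the real script)
    return [f + ">deint" for f in frames]

def denoise(frames):
    return [f + ">denoise" for f in frames]

def temporal_smooth(frames):
    return [f + ">smooth" for f in frames]

def repair(frames):
    return [f + ">repair" for f in frames]

def qtgmc_like(frames):
    """Apply the stages in order, like the script chaining filters."""
    for stage in (deinterlace, denoise, temporal_smooth, repair):
        frames = stage(frames)
    return frames

print(qtgmc_like(["frame0"]))  # ['frame0>deint>denoise>smooth>repair']
```

This also explains the observation below that NNEDI3 alone warps some details: the later stages exist partly to clean up after the core deinterlacer.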
Then the question is, what does Topaz use to deinterlace? Is it a customised QTGMC variant or something of their own making? I’d wager the latter. I’m going to do side-by-side tests to see if there’s any difference, even marginal, when it comes to problem areas for deinterlacers, like videos with heavy contrast, sharp edges, and intense artificial lighting. I tried out some DV captures earlier and they shined up like a new penny, but for professional use I want to be 100% sure I’m not just happy to be skipping QTGMC and blinding myself to any pitfalls.
I honestly thought I’d be using QTGMC up to the end of time but Topaz are delivering pretty interesting updates.
Indeed, I also tried NNEDI3 on its own with ffmpeg. At first I thought the result looked pretty good, but some details were really weirdly warped. I guess the other components of QTGMC do actually correct for that.
@Jupiter_500
I think the deinterlacer is the AI itself; wasn’t that the point? BTW, in my tests I found that, at least in the previews, it’s sometimes visible that there were scan lines, more like a shadow than an actual scan line. But only with quite fast movements, so it’s impossible to see when playing at normal speed.
I remember finally getting QTGMC to work after a few aborted attempts and thinking all my Christmases had come at once. I could finally deinterlace and not have it look hellish.
I was thinking that. I’d prefer a pure deinterlacing step as part of the VEAI process, as those shadows where scanlines were need to be carefully compared with pure QTGMC output. But at that point the differences admittedly become marginal.
I must say it was not a Christmas for me but quite an awakening, since I had never engaged with deinterlacing, or had outright failed to identify the problem. I just knew to keep away from interlaced video. :DDD To be honest that’s quite embarrassing for me, working in IT, but I am still baffled today by how scarce information sometimes is when it comes to video problems. I can google my butt off for some problem I’ve encountered 100 times in my life and never knew the name of. It makes me think how hellish this learning process might be if it were actually my job.
These links show identical frames processed via QTGMC and VEAI. VEAI shows significant anomalies around the arm where QTGMC deinterlaces perfectly. The VEAI example was processed with the Dione TV model. I can’t think what this might be. The same thing happens with Dione DV. Dione TD is of no use, as it doesn’t output a doubled frame rate.
Interesting, what model did you use for the upscaling after applying QTGMC? Regarding the artifacts, I see the problem too, but I also see that the picture is truly sharper with VEAI.
I didn’t upscale the examples; that’s pure QTGMC output through AvsPmod, before any sort of processing. The VEAI example was kept at the same size, and yes, it sharpens up the overall image, but my workflow for footage that is pure 50i or 59.94i is to process it via a QTGMC script within VirtualDub, crop the borders, then go on to VEAI. Artemis LQ is my go-to model, as it removes the sludge of analogue noise infinitely better than anything I’ve ever seen before and brings out details without looking artificial. I’ve processed a lot of videos using QTGMC and then VEAI at the same scale, no upscaling, simply to provide the best standard-definition source that can be obtained from the original video. I have dreamed of the day when I could remove DVD compression without sacrificing detail within the image, and it’s essentially here.
Let’s see where this goes from here. I guess two iterations down the line you can have the best of both worlds, sharpness and good deinterlacing. The big question for me, since you mentioned analogue noise, is whether this will be a main focus of Dione going forward, since I find a lot of artifacts on tapes that I would say are pretty much impossible to fix with normal methods. Over-sharpening, for example, is something I have so far been unable to get rid of, only lessen a bit with Adobe After Effects. Then there is ghosting, color bleed, etc., all stuff that pretty much cries out for its own special AI to fix.