Video Enhance AI thoughts and requests

Have you tried the 1.2.0 update?

OK, I tested the new 1.2.0 version.
The rendering process with my GTX is faster.

Again: I tested my usual old VHS/S-VHS video clips at different pre-rendered resolutions:
640x480, 720x576, 720p and 1080p.
In every one of the new (named) modes I tested in VEAI, I see no improvements on landscape footage. For many weeks now I have been testing landscape footage with several clips of Monument Valley. They offer a good mix of large rocks, mesas and greenery (plants). On the mesas, and even more on green plants, VEAI adds the often-mentioned small dot structures that look like a net.

The 3 variations of the Artemis mode are not usable for low-resolution, noisy, blocky footage.
The results are still noisy and blocky, with no clear edges, unlike e.g. the CG mode.

I also tested other landscape footage such as Yosemite, Sequoia and so on. Same result as with the footage above: the only mode that brings a real improvement is the CG mode. But here again the added dot-net structures on e.g. greens/green plants/fields are not helpful.

I get a small improvement (also with earlier versions) if I very slightly denoise my footage with Neat Video and use it afterwards in VEAI. But even then, on some clips the dot-net structures are added.

It seems the common thing I hear around different areas of the net about this program is “LQ (now renamed Artemis) is no good.” So it sounds like the 1.2.0 update didn’t change things much. When I tested it, I could see what people meant: it seemed to try to create detail that didn’t exist in the source, making it look more like artifacts. Hopefully this will get better in the future. But I still think we’re a ways off from feeding in crappy EP-mode VHS and getting footage that looks like it was shot in modern HD. With the way computers and technology are progressing, though, I really do think we will get to that point some time in the future.

Thanks, that’s what I’ve started doing. The other issue I run into is that the mp4 produced by Video Enhance AI has the audio horribly out of sync. I’m not sure, but I think it may be because my source is 24 fps and the program seems to default to 29.97 fps, and I don’t see an option to adjust the fps.
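If that really is the mechanism (the program re-tagging ~24 fps frames as 29.97 fps while the audio keeps its original duration), the drift would grow linearly through the clip. A quick back-of-the-envelope sketch of the math (my own toy illustration, not anything from VEAI itself):

```python
# Toy calculation: how far the picture drifts from the audio if frames shot
# at 23.976 fps are played back at 29.97 fps while the audio keeps its
# original timing.
def desync_seconds(source_fps: float, playback_fps: float, source_minutes: float) -> float:
    source_seconds = source_minutes * 60
    frames = source_seconds * source_fps      # total frames in the clip
    video_seconds = frames / playback_fps     # how long those frames last at the new rate
    return source_seconds - video_seconds     # positive = video finishes early vs. the audio

# 23.976 / 29.97 is exactly 0.8, so the video runs 20% fast:
drift = desync_seconds(23.976, 29.97, 40)
print(f"{drift / 60:.1f} minutes of drift over a 40-minute clip")  # ~8 minutes
```

A 20% speed error would be audible within seconds, which fits the “horribly out of sync” description above.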

It’s already known that the idea/concept behind VEAI from Topaz can work pretty well.
For example: I have old, bad S-VHS footage of Las Vegas from 1992. It’s very noisy, has no clear edges/lines and has the usual blocking. Overall the picture is really “dirty”. Here VEAI works real “magic”. The results are really, really impressive. Nobody, including myself, could believe the before/after comparison. No noticeable noise, clear edges and lines, a deblocked and sharper picture. Great, great “magic” :slight_smile:
Also impressive are the results on the night/dark footage of Las Vegas. Really impressive!
Mostly the same with footage from e.g. San Francisco (also from 1992). BUT: footage taken e.g. in San Francisco’s parks shows the problem: greens/plants/grass/trees get the aforementioned additional structures.
Perhaps it needs much, much more AI training on landscapes, especially on handling plants/trees/grass. Perhaps in the end it’s simply not possible… I don’t know. But I very much hope Topaz will work hard on this, because using VEAI only for town/city footage can’t be the business case.
For my part, I keep pushing for adjustment sliders. Perhaps in the end it only needs one, to tone down the added fishnet/dot-net structure.

Have you checked fps before and after you render with VEAI? I’ve only had the out-of-sync issue when I render a video that was previously rendered in VEAI. In other words, I take the original footage, run it through VEAI at 100%, then run it once more and it will have the out-of-sync issue.

Using MediaInfo, I see the fps before and after are the same: 23.976 for both the original footage and the VEAI-rendered clip, so I doubt that’s the problem. I have not tried it in version 1.2.0. I will wait for the new release next week and some release notes.

Have you checked fps before and after you render with VEAI?

I just ran this test with a short clip. The source was 24 fps and after encoding it with VEAI the frame rate is 29.97. I’m using 1.1.0.

EDIT: I may have found the issue. MediaInfo is telling me the source file is 24 fps (which makes sense since this is a DVD rip of a TV show) and VEAI is detecting it as 29.97 when I load it. So maybe VEAI is just reading the fps incorrectly?

Oh wow, maybe that is the problem. I can’t test right now because it’s currently processing a file and still has a few hours left. But I’d be interested to see if VEAI reads the files that it produces incorrectly. Even so, MediaInfo still tells me the final file has the correct fps. Will update once I test.

Sounds like your source might have some kind of Telecine or 3:2 pulldown going on.

How to check this: https://tommycatkins.com/2020/Neuron2_Video_Frame_Structure.htm

If you have such source material, it should be deinterlaced first, then decimated (removing the duplicate frames inserted by the telecine process, restoring the original 24 fps).

Try these Hybrid settings if your source is 30fps telecined and you want to get it to 24 fps:
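To make the deinterlace-then-decimate idea concrete: 3:2 pulldown turns every 4 film frames into 5 video frames by duplicating material, and decimation removes those duplicates again. A deliberately simplified frame-level simulation (real pulldown repeats fields rather than whole frames, and real decimators have to detect near-duplicates in noisy footage, but the frame counting works the same way):

```python
# Toy model of 3:2 pulldown at the frame level: every group of 4 film
# frames (A B C D) becomes 5 video frames (A B C C D), giving 30 fps
# from 24 fps. Assumes the frame count is a multiple of 4.
def pulldown_32(film_frames):
    video = []
    for i in range(0, len(film_frames), 4):
        a, b, c, d = film_frames[i:i + 4]
        video += [a, b, c, c, d]   # the third frame is duplicated
    return video

def decimate(video_frames):
    # Remove consecutive duplicates to recover the 24 fps cadence.
    out = []
    for f in video_frames:
        if not out or f != out[-1]:
            out.append(f)
    return out

film = list(range(24))            # one second of 24 fps film
video = pulldown_32(film)
print(len(video))                 # 30 frames: one second of ~30 fps video
print(decimate(video) == film)    # True: decimation restores the original frames
```

In real footage the duplicates are never bit-identical, which is why the duplicate detection in tools like Hybrid/AviSynth is the hard part.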

Further testing at home with DVD:

Gaia HQ seems to do the least. It looks like a regular upscale with a little detail enhancement. The tooltip says this is really intended for HD sources (so I’m guessing 1080p to 4K, basically).

Gaia HQ-CG is a mode a lot of people seem to like. I have found that currently, in some situations, it adds “pattern dither” types of artifacts that are clearly visible even when viewing at a distance on a TV screen. It looks similar to, say, dithering an image down to a 64- or 32-color GIF with pattern dithering. I think it’s coming from the training data trying to re-create detail. This effect doesn’t appear in other modes.

The “Artemis” mode is greatly improved. THESE are the settings to use for scaling standard video (480p) to HD (1080p).

Artemis-HQ looks good but sometimes leaves details more blurred than Artemis-MQ. It can also “enhance” certain slightly noisy elements. I think it really needs to be used only on VERY clean sources.

Artemis-MQ seems to be the main setting for upscaling DVDs and other average- to good-quality sources. It produces the most sharpness in fine detail while denoising and removing some other slight artifacts. The file size is only slightly smaller than Artemis-HQ’s, which tells me some denoising is taking place. Film grain is still mostly visible.

Artemis-LQ, when used on the same sources, starts to resemble Gaia HQ. Detail is once again lost and many scenes take on a “painterly” effect. I believe this is due to heavier denoising and deblocking. This mode is intended for low-bitrate sources with visible blocks and artifacts, so it’s mainly trying to remove those. Don’t expect it to save very poor quality VHS or 1990s “postage stamp” sized video encodes from RealPlayer.

Across several discs, including good-quality 1970s films with visible film grain (which you want to keep) and recent 2000s movies with no visible grain, Artemis-MQ seems to win every time.

Hi there
If I understand correctly, AI works by using a lot of processing power from huge servers, and the more it runs, the more it “learns”, thus improving the program.
Wouldn’t it be great if they made an app, something like the one used for the SETI stuff, so we could contribute the unused portion of our own CPU/GPU to improve it?

I’m not sure if this is the right place for this or not, but I have a few requests for usability. Simple little things that probably wouldn’t take much to implement.

  1. If a job is running and the program is closed, I’d love a warning message to pop up… maybe even prevent closing altogether while a job is running.

  2. Provide a queue system whose state can be saved. If I add a few jobs to be processed and close the program, when it’s restarted give me the option to reload the previous jobs… even if it’s just saved in some sort of project file.

  3. <deleted - in version 1.2>

  4. <deleted - in version 1.2>

  5. Please allow custom preview times. Sometimes I’d just like to see a slightly longer preview!


If this were a free app I would agree, but it’s a paid one and not on the cheap side. I would discourage devs from asking for users’ horsepower for DL unless we somehow get paid for it. It also depends on how they got their neural-network / deep-learning training done. I hardly believe they do their own training; most likely they rent the power from another company. If I’m correct, NVIDIA rents their own GAN database to others. They use huge numbers of their server GPUs (like the DGX-2) to create it, feeding in millions of images per day, so I’m not sure a user outside NVIDIA’s network could feed anything into it, unless NVIDIA themselves makes something along the lines of SETI@home and Folding@home.

To be clear here, I’m not a dev, but I am certain that this program works 100% locally. It does not rely on any online sources for its work. That’s why it’s so slow.

Well, it’s just that I’m amazed by how much the program has improved from the first version I tried to the current version (1.2).
I was just converting a PAL DVD movie to 1080p, and 1.2 managed to work at 2x or more speed and with better quality (Artemis-HQ seems to work better than the older CG for that).
I just hope it keeps getting better and better, and I think we can all agree that if we can help achieve that faster, well…
Trying to get Aliens in 4K from the Blu-ray (Gaia-HQ) and getting 1.9 frames/s with my 1080 Ti… still way too slow! I think it will probably take all day to do 50 minutes.

If you were referring to my comment, I never said the app works online; I’m pretty sure everyone knows it works with your own GPU. @gargamel9 was referring to DL training, so I commented on that. What VEAI does is DL-based image enhancing. DL training is another thing; VEAI doesn’t do DL training, and training is a MUCH more demanding task. Our PCs would only manage around a hundred training images per day running 24/7 (using FP32; INT8/FP16 would actually be preferred, but I’m comparing FP32 because VEAI uses FP32), compared to the thousands of frames per day you can enhance with VEAI (based on my RTX 2070; of course each GPU will differ).

I actually had to roll back to 1.1 because 1.2 was getting errors and taking much longer to encode than 1.1.

I have been using Video Enhance AI extensively for weeks now; still in the trial version that let me extend it.
Version 1.2.0 has now been released and can be downloaded.
Since the last few updates, audio is also processed, and with a GTX/RTX you can clearly see performance improvements.
Fortunately, faces and structures have only been “destroyed” in rare cases since the last versions.
Only with the CG mode can I achieve significant improvements on old VHS / S-VHS / Hi8 material. And they are usually very, very impressive. If you consider that the source material is really very “dirty” (picture noise at the highest level, blocking over the whole picture, frayed/indistinct edges, blurring anyway), what Video Enhance AI does borders on magic at times. In the past I had learned “garbage in / garbage out” and only made such videos worse. Over the past few weeks I’ve had a very different and very positive experience.
HOWEVER: the weakness of the tool in CG mode lies in the addition of new artifacts/structures. It doesn’t happen with every clip. Most of the time the net-like / dot-like artifacts are added in landscape shots, e.g. to plants, trees and grass; sometimes stronger/larger, sometimes less pronounced. Apparently the AI is still learning how to deal with plants, grasses, etc. I don’t know if that will ever be possible.
But I have long since found a workflow to get this under control to some extent. AND the results are sometimes incredible, especially in the before/after comparison:
1. Deinterlace the AVI material with TMPGEnc Masterworks 6 or 7 using the HIGH PRECISION deinterlace mode, and render at 1280x720, 25 fps, 30 Mbit to MP4
2. Denoise the new MP4 with Neat Video… but only very gently… until the noise in the sky has almost disappeared (only almost!) (I use it in Magix Video Pro X)
3. Color-grade the sky >>> on VHS / S-VHS / Hi8 the sky was mostly purple back then… (I use the color adjustment tool in Magix Video Pro X… very quick/easy for this)
4. Run Video Enhance AI in CG mode and render at 200%
5. Final color grading; best in my case = ColorDirector from Cyberlink (works similarly to e.g. Luminar… therefore simple and fast, and it has motion tracking)
That sounds like a lot of work before and after… true! :slight_smile: But the results are definitely worth it. I never thought I could get so much out of the old, bad material. Of course the end result doesn’t look like 1080p / 2K or 4K…
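The purple-sky fix in step 3 is essentially a hue correction: pixels that drifted into the magenta/purple range get rotated back toward blue. A self-contained toy sketch of that idea in Python (my own illustration with arbitrary thresholds; it has nothing to do with how Magix’s color tool actually works internally):

```python
import colorsys

# Toy version of the "purple sky" fix: detect pixels whose hue sits in the
# magenta/purple band and rotate them back toward blue. The 0.70-0.90 hue
# band and the shift amount are arbitrary values chosen for illustration.
def fix_purple(rgb, shift=0.08):
    r, g, b = (c / 255 for c in rgb)
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    # Blue sits around hue 0.66 on the HSV wheel; purple/magenta just above it.
    if 0.70 < h < 0.90 and s > 0.2:
        h = max(0.66, h - shift)  # rotate toward blue, never past it
    r, g, b = colorsys.hsv_to_rgb(h, s, v)
    return tuple(round(c * 255) for c in (r, g, b))

print(fix_purple((150, 90, 200)))  # magenta-tinted "sky" pixel: red is reduced
print(fix_purple((90, 140, 60)))   # foliage pixel: hue outside the band, untouched
```

A real grading tool would of course apply this selectively (masks, keying) rather than per-pixel over the whole frame.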

Disagree. With my RTX 2070, 1.2.0 is a significant regression and has only made things slower, to the point of being unusable: 40 minutes of 24 fps video, from 720p to 1440p, with HQ (now Gaia-HQ) had taken around 24 hours ever since 1.0; with 1.2 it shows more than 5 days, and it was getting slower and slower as the encode progressed. Had to move back to 1.1.


Appreciate the info about the commercial software you use.

For your steps 1 and 2, I would encourage you to try QTGMC for both deinterlacing and denoising.

You don’t have to spend all kinds of time configuring AviSynth or anything, just install Hybrid, run it, and set the options.

I would start with these settings for QTGMC:

Turn “Final Temporal Smoothing” down from 3 to lessen noise removal; “0” should be off. You can also turn off the “Source Matching” and “Lossless” settings, as these can add noise back; for example, turn those off and try turning “Final Temporal Smoothing” up to 1, 2, etc. The upper button in the lower right turns on a real-time preview. You can also set “Sharpness” to 0; I find settings of 0.1 to 0.5 useful. I would not mess with any of the other settings.
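To give a feel for what a temporal-smoothing radius does conceptually: each output value is averaged with the corresponding values in neighboring frames, so a larger radius removes more shimmer and noise but can smear motion. This is a deliberately simplified single-pixel illustration of that trade-off; QTGMC’s actual smoothing is motion-compensated and far more sophisticated:

```python
# Toy temporal smoothing: track one pixel's value across frames and
# average it with up to `radius` neighboring frames on each side.
def temporal_smooth(frames, radius):
    if radius == 0:
        return list(frames)        # radius 0 = smoothing off
    out = []
    for i in range(len(frames)):
        lo = max(0, i - radius)
        hi = min(len(frames), i + radius + 1)
        window = frames[lo:hi]
        out.append(sum(window) / len(window))
    return out

# A static pixel with per-frame noise: a larger radius pulls the values
# toward the true level, at the cost of genuine temporal detail.
noisy = [100, 104, 97, 102, 99, 103, 98]
print(temporal_smooth(noisy, 0))   # identical to the input
print(temporal_smooth(noisy, 2))   # a much flatter sequence
```

This is why turning the setting down preserves noise (and fine detail), while turning it up scrubs both away.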