Advice on upsizing and enhancing low-quality video

I’m using the Video Enhance AI trial and wanted to see if I could upsize and enhance a few videos I have that are very low quality. MediaInfo shows them as 146 kb/s, 160x112 at 25 fps. I believe I recorded them with a “Flip Video” device many years ago. I know I may be fighting a lost cause here, but I want to make sure I’m making the most of the tool. I’ve tried different settings, like the Artemis LQ v9 model, but I really don’t know what I’m doing. It certainly upsizes the video but, for lack of a better term, overdoes it. If it helps, I could share a small video that I’m working with.

Thank you.

Yes, please upload your video so we can give it a try, along with a short description of what you expect from the result. What do you mean by saying it “overdoes” the video?

At such a small input resolution, I believe you should exercise sensible judgement regarding the output resolution. I wouldn’t go larger than 540p vertically if you want acceptable quality. VEAI can produce impressive results at times, but it doesn’t exactly do magic. :slight_smile:

I had good luck with Theia Fidelity on some old XviD (MPEG-4 ASP) compressed 320x240 videos, so you may want to try that one. I seem to recall setting the detail value fairly high (70 or 80) and leaving the rest at default.

I’m in the same boat experience-wise, BUT what I have noticed with Artemis v9 and very low-res video is that if you try all three variants, HQ will have the least “overdone” look and LQ will have the most. I experiment a lot with this software, and in this situation I might run it through HQ and then MQ and see what it looks like, either by doing a short clip or just looking at the preview. The lower the resolution, the higher up the Artemis v9 quality ladder I start, and I usually land on Artemis MQ v9 in the end.

“Overdone” situations happen often and usually require some sort of pre-processing to give Artemis v9 as much detail as possible to “stick to”. I sometimes use Artemis HQ v9 as the pre-processing pass if I don’t want to use other tools and programs.

I’ve also been doing a lot of research on different tools, like Avisynth and VirtualDub2, to help with pre-processing or even post-processing if it comes to that. So far it’s working out great, but it takes a lot of time to set up.

Yes, you are right: in many cases, especially with very low-res or bad-quality input, Artemis HQ v9 handles the situation with much less “over-done” aberration.
You could try pre-processing the video with QTGMC (using StaxRip for example, probably one of the best video-processing GUIs ever made for managing tools like ffmpeg, AviSynth or VapourSynth); a minimal sketch is below. Often these low-res videos suffer from extra deterioration beyond bad de-interlacing, and QTGMC can often improve this.
But do not expect miracles. When your source is really too bad, there is not much you can do, and sometimes you cannot even hope to go beyond 576p output and keep a convincing result. The NN models currently are not able to “guess” missing detail from so little base video data.
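
Here is a minimal VapourSynth sketch of that QTGMC pre-processing step, in case it helps (the input path, the ffms2 source filter and the TFF field order are assumptions about your setup; the havsfunc script and its dependencies must be installed):

    import vapoursynth as vs
    import havsfunc as haf  # community script that provides the QTGMC port

    core = vs.core
    # Load the interlaced source (placeholder path)
    clip = core.ffms2.Source('input_interlaced.avi')
    # Deinterlace with QTGMC; TFF=True assumes a top-field-first source
    clip = haf.QTGMC(clip, Preset='Slower', TFF=True)
    clip.set_output()

StaxRip can generate and run a script like this for you; the point is only to show how little is needed for a basic QTGMC pass.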

As others have noted, you need more resolution than you have to get workable results.

I’m pushing the limits of what is possible, and unfortunately I have yet to get acceptable results with 720x480i and 720x576i DVD film and video content. I can upscale 720x480p to 1280x720p with barely acceptable results. But with interlaced SD sources, even deinterlaced with QTGMC, VEAI can’t yet give acceptable results (IMO). I’m experienced with QTGMC, and yet I can’t get 720p that has decent detail. My best results have used:
QTGMC(Preset="Slower", EZKeepGrain=1.0, SourceMatch=3, Lossless=2, MatchEnhance=0.8, Sharpness=1.0, Sbb=0, FPSDivisor=2)
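
For anyone working in VapourSynth instead, those settings translate almost one-to-one to the havsfunc port; here is a rough sketch with the approximate meaning of each parameter (source loading and field order are assumptions):

    import vapoursynth as vs
    import havsfunc as haf

    core = vs.core
    clip = core.ffms2.Source('dvd_source.vob')  # placeholder path
    clip = haf.QTGMC(
        clip, Preset='Slower', TFF=True,  # TFF=True is an assumption
        EZKeepGrain=1.0,    # retain grain rather than scrubbing it away
        SourceMatch=3,      # strongest source-matching for detail retention
        Lossless=2,         # re-insert original source fields after processing
        MatchEnhance=0.8,   # boost detail recovered by source-matching
        Sharpness=1.0,      # well above the ~0.2 usually advised with Lossless
        Sbb=0,              # no sharpening back-blend
        FPSDivisor=2)       # keep the source frame rate instead of doubling it
    clip.set_output()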

That Sharpness setting is much higher than recommended with Lossless enabled (when not using VEAI). Anything over 1.0 doesn’t help, and over 1.5 halos begin to form.

In my testing, Artemis-MQ-v9 and Artemis-HQ-v9 (in 1.8.0) give the highest detail. I’ve tried most of the models, including Artemis-MQ from 1.2.0 and Gaia-CG in 1.6.1, read up on a bunch of sharpening and upscaling methods using Avisynth, and did some tests, but I still can’t get 480i and 576i looking good at 720p. CAS and/or vsMSharpen have provided the best sharpening, but they can only do so much with turds.
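
For reference, CAS is also available as a VapourSynth plugin; a minimal sketch, assuming the HolyWu CAS plugin is installed (the path and sharpness value are only illustrative):

    import vapoursynth as vs

    core = vs.core
    clip = core.ffms2.Source('upscaled_720p.mkv')  # placeholder path
    # Contrast Adaptive Sharpening; sharpness ranges 0.0-1.0 (0.5 default)
    clip = core.cas.CAS(clip, sharpness=0.7)
    clip.set_output()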

I have tried Artemis-MQ-v9 (1.8.0) followed by Gaia-HQ-v5. I also tried Gaia-CG (1.6.1) as a follow-up. Either I have exceptional visual perception, or those who think this produces acceptable results are near blind (IMO). To me, the two-step method looks like overprocessed crap.

Tomorrow I’m going to try the new beta with the deinterlacer model.

I read all the posts on the Facebook page, and I gather that deinterlacing is new territory for the developers. My impression is that they have ignored the work that went into making QTGMC and haven’t even studied how it works. Apologies if they actually did dissect QTGMC. Maybe QTGMC just isn’t compatible with the processing VEAI does?

Quoted from the QTGMC wiki:
"The core algorithm is this:

  0. Bob the source clip. Temporally smooth the bob to remove shimmer, then analyse its motion.
  1. More accurately interpolate the source clip (e.g. NNEDI3). Use the motion analysis from the previous step to temporally smooth this interpolation with motion compensation. This removes shimmer whilst retaining detail. Resharpen the result to counteract any blurring.
  2. A final light temporal smooth to clean the result.

Stages 0 & 1 use a binomial smooth (similar to a Gaussian) to remove deinterlacing shimmer. Stage 2 uses a simple linear smoothing. Each stage’s temporal radius (the number of frames out from the current) is given in the settings TR0, TR1 and TR2.

The shimmer reduction is critical for the algorithm so TR0 and TR1 should be at least 1. TR0 only affects the motion analysis and is only indirectly visible, increasing it to 2 will generally give a better motion match. Increasing TR1 and TR2 will create a smoother and more stable output and more strongly denoise; the downside is increased blurring and possibly lost detail, and potentially can cause stronger artifacts where motion analysis is inaccurate. The blur is partially counteracted by the sharpening settings.

The deinterlacer primarily tries to reduce “bob shimmer”: horizontal lines of shimmer created when interpolating an interlaced stream. Consequently any changes made to the initial interpolation (e.g. NNEDI3) are expected to be horizontal lines of change only. The repair stages Rep0, Rep1 and Rep2 occur after each temporal smooth. They only allow such horizontal lines of change - shimmer fixes, discarding other changes. This prevents the motion blur that temporal smoothing could generate. The repX settings control the size of areas to allow through."

QTGMC wiki
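
To connect that description to actual knobs: the temporal radii and repair stages map directly onto QTGMC parameters. A sketch using the havsfunc port (values are illustrative, not recommendations; source and field order are assumptions):

    import vapoursynth as vs
    import havsfunc as haf

    core = vs.core
    clip = core.ffms2.Source('source.avi')  # placeholder path
    clip = haf.QTGMC(
        clip, Preset='Slow', TFF=True,
        TR0=2,  # radius of the binomial smooth feeding motion analysis (stage 0)
        TR1=1,  # radius of the main smooth applied to the interpolation (stage 1)
        TR2=1,  # radius of the final light cleanup smooth (stage 2)
        Rep0=1, Rep1=0, Rep2=4)  # shimmer-repair after each temporal smooth
    clip.set_output()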

I might not have enough video knowledge to say this with total confidence, but it seems to me VapourSynth’s QTGMC port has a better interpolation filter than AVISynth’s NNEDI3: it’s called znedi3.
As for the “SourceMatch=3” and “Lossless=2” settings, they tend to add too many artifacts with bad-quality sources, especially with a higher Sharpness than the recommended 0.2 (a conservative example is sketched below).
They only seem to do wonders on good-quality interlaced or badly de-interlaced progressive material.
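
In the havsfunc port you can choose the interpolator explicitly, so a conservative call for bad sources might look like this (just a sketch; the znedi3 plugin must be installed, and the rest of the setup is assumed as in the earlier examples):

    import vapoursynth as vs
    import havsfunc as haf

    core = vs.core
    clip = core.ffms2.Source('bad_source.avi')  # placeholder path
    # znedi3 as interpolator, recommended Sharpness, no SourceMatch/Lossless
    clip = haf.QTGMC(clip, Preset='Slower', TFF=True,
                     EdiMode='znedi3', Sharpness=0.2)
    clip.set_output()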
Besides that, I seem to have come to the same conclusion as you, but with decent-quality DVD sources, using progressive upscaling* can sometimes push this limit a bit further.
There will always be serious issues with blurred or zoomed-out faces, which definitely cannot be given convincing replacement detail with the current models (it will probably require tricks like matching generic face-part patterns, as in some recent innovative mobile applications). That’s all we can hope for, fingers crossed that random actors’ faces are not added accidentally like what happened with Gigapixel :slight_smile:
It will always be about compromises, as some information simply isn’t there anymore in low-res or bad-quality footage; the best we can expect, even in the future, is convincing replacement detail from much more accurate context-aware models. That might take some time, although technology sometimes advances in sudden leaps.

  • I meant progressive as in step-by-step: from 480i to 576p, then 720p, then 1080p, using some light processing at each step. When I do this I use Artemis HD v9, which doesn’t oversharpen like the other two.

I’m trying to restore this vid. I have tried all the models and various presets, and I wonder if anyone can recommend a way of getting the best out of it, or is it unsaveable due to being 360p? I can get it looking really good with Artemis LQ; however, the faces are badly blurred and distorted. Thanks.

Unfortunately you can’t do much about blurred faces. You might notice that the original video has almost no detail there, maybe a few pixels per face. The AI can only go so far in restoring the missing details in a face. I have some very bad-quality episodes of The Goodies like that :frowning: