Upscaling with better detail using Proteus in TVAI v3.x

Proteus does wonderful upscaling. I use it almost exclusively for that. However, the input must always be cleaned up at its original resolution first, but not so aggressively that the 'detail' gets cleaned out along with the noise. (Proteus can do this cleanup too, if you're careful not to overdo it.)

FYI: after a careful cleanup at original size (exported to a low-loss or lossless format), use Proteus (Relative) to upscale in increments. If your source is 720, go to roughly 1200 first, adjusting carefully. That gives the Proteus AI room to build on the 'clues' it found in the original image.

IMPORTANT: This small upsize doesn't force the AI past its normal limits. If you crank the decompression, denoise, or sharpen settings too high, it will 'accommodate' you by 'faking it,' and that makes a mess!

Repeat the Proteus enhancement to full size or another intermediate step. Once again, don’t push the settings.

The final step: upscaling to full resolution, where you can finally raise the sharpness and detail settings to get a very clean, detailed result.
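To make the increments concrete, here is a minimal sketch (a hypothetical helper of mine, not a TVAI feature) that plans intermediate heights so no single jump exceeds a modest factor, in the spirit of the 720 to ~1200 example above:

```python
# Hypothetical step planner for incremental upscaling (not part of TVAI).
# Each pass enlarges by at most `max_step`, mirroring the advice to go
# 720 -> ~1200 -> final rather than jumping straight to the target.

def plan_passes(src_height, target_height, max_step=1.67):
    """Return the list of pass output heights, ending at target_height."""
    heights = []
    h = src_height
    while h * max_step < target_height:
        h = int(h * max_step)
        h -= h % 2  # keep dimensions even for video codecs
        heights.append(h)
    heights.append(target_height)
    return heights

print(plan_passes(720, 1440))  # [1202, 1440]
```

The 1.67 default is just my reading of the "modest increments" advice; adjust it to taste.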

Sure, the multiple passes take longer, but the difference is really significant.

3 Likes

I have not seen a performance enhancement compared to running multiple instances.

They have just done what should have been automatic all along.

Nothing has changed in rendering efficiency; they are just now using the resources correctly.

1 Like

I disagree 100%. That has not been my experience using the same video clip from 2.6.4 until now. I have images that prove this to be true.

If you post - back it up with facts.

Precisely! Which (shameless self-plug) is why I made this topic:

Some pre-processing is simply best not left to A.I.

2 Likes

I agree 100%. Give TVAI your best video possible. Then let it do its thing.

There are many applications out there for PC and Mac to accomplish it.

TVAI should post minimum criteria on what source material it will accept for best results.

1 Like

Although I do have a few outside utilities I use for massaging video into good shape for TVAI, I have noticed that several of their tools for deinterlacing and clean-up have improved significantly. They have become much more useful as a result.

In any case, never denoise or decompress to the point where detail degrades or the image gets distorted. And always at original resolution.

2 Likes

That’s all going to change fairly soon; although that also depends on the hardware you’re using.

Interesting factoid: the cost of 4 TB high-speed NVMe is dropping. Also, there are a lot of new adapters evolving to provide up to 1000 TB of NVMe SSD on one machine. I'm sure it will be a few more years before they get the wrinkles out of that last 'fact,' but it's coming.

Computer processing power is getting cheap quickly.

For me I want to see the next generation of enhancement from this company. AI tech is moving extremely fast.

Nothing dramatic has changed since I bought my original copy (on the AI end). It has been over a year.

If the models were getting better, I would be seeing it with my reference video. But it is not happening. The frames are identical.

Without using the same reference video for testing each version (and saving the processed video) - you will never be certain that a model or process has gotten better or been updated.

So when people give opinions of better quality - back it up with the original image, the previous version and the new and improved version.

Otherwise it cannot be taken as fact.

I see a lot of the same people on these forums with very strong opinions. I have no idea who they really are and certainly hope they have no stake in this company.

Time will tell.

1 Like

For me, in many cases, v2.6.4 still outputs a higher quality image (I’ve posted several comparison images in these threads).

Not for TVAI, it seems. :slight_smile: What is highly needed, IMHO, is some sort of facial recognition, especially for small/low-res faces that take up a very small portion of the video still (like people in a crowd). I'm not saying TVAI needs to create the equivalent of the human fusiform gyrus, but it should recognize faces well enough to leave well enough alone. Most processed videos I have tossed over time were tossed because TVAI horribly mangled small faces. To wit:

A human will understand that the guy didn't get his lip busted and swollen from having been in a huge fight, and that it's just shadow underneath his lips. Now, granted, this was a small face in the overall scene:

But that's kinda my point exactly: if TVAI possessed even the most rudimentary way of recognizing faces, it could simply decide to skip them altogether, instead of being all smart-alecky dumb about it and ruining my video:

EDIT: This is ‘auto’, btw:

3 Likes

Out of curiosity, what processing are you doing to your ‘reference’ video? I know you’re doing enhancement, but are you also rescaling?

Just some mild denoising to get rid of the worst, like (VapourSynth):

import vapoursynth as vs
import havsfunc as haf  # community script collection that provides QTGMC

vid = haf.QTGMC(vid, InputType=1, Preset="Very Slow", TR2=3, EdiQual=2, EZDenoise=0.5, NoisePreset="Slower", TFF=True, Denoiser="KNLMeansCL")

Certainly no rescaling. Source is 1080p already.

1 Like

If you aren't rescaling, most enhancers, such as Proteus, can't make use of their real potential.

To see that, you should start with a lower-resolution video, make sure it is clean and reasonably denoised, and then use an enhancement model like Proteus to upsize it. I recommend Proteus (Relative) for this initially, as it puts all the controls at your disposal.

With a reasonably clean image (at original resolution), a good enhancement algorithm can do more. Enlarging the image in small increments, such as 150%-200%, actually gives the AI enough room to work and to add or enhance the missing detail.

On the other hand, if you try to upscale in big jumps, you can create worse results.
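As a sanity check in the same spirit, a tiny hypothetical helper (mine, not TVAI's) can flag whether a proposed jump stays inside the suggested 150%-200% window:

```python
# Hypothetical check (not a TVAI feature): is a single upscale step inside
# the modest 150%-200% range suggested above?

def step_in_range(src_height, dst_height, lo=1.5, hi=2.0):
    factor = dst_height / src_height
    return lo <= factor <= hi

print(step_in_range(720, 1200))  # True  (~1.67x)
print(step_in_range(480, 1920))  # False (4x is a big jump)
```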

1 Like

Oh, that's what I'm doing. :slight_smile: Slight miscommunication. I meant I do no rescaling in the pre-pass. Source is 1080p being upscaled by TVAI to 4K.

I’d like some presets or tips for enhancing and cleaning a 1080p source.

Best settings for cleaning compression artifacts before upscaling. (Is AviSynth needed?)
Best Proteus settings for 1080p to 4K without removing too much grain.

I am so tired of the numbers game hahaha

First, don't worry about existing grain; we need a clean image. (Grain can be put back afterwards; enlarged grain is not pretty.)

Set rescale to (nominally) 150% of original size. This is a multi-pass method, so you may want to experiment with a short piece of the original video to save time.

Using Proteus Relative:

  • If necessary, push up decompression.
  • Raise Denoise until you begin to lose detail, then back off slightly.
  • Big note: you will be able to refine settings on each pass. It is better to keep
    your initial settings a little short of perfection; the next pass will use the
    previous pass's partial enhancements as its clues.
  • If Restore Detail begins to show 'dittoed' detail, you have pushed it too far and
    the AI is 'inventing' detail for you. Try to avoid this, even if the detail setting
    is not as high as you'd like. (Backing off Denoise can help, too.)
  • Sharpen lightly.
  • Some folks like to turn Dehalo up to 1 or 2; I'm not certain this really helps.
  • Push Restore Detail until you begin to see detail more clearly. (You may need to
    go back to Denoise to get these two 'opposites' back into balance.)
  • Save the settings and try previewing them in several places in your video. After
    saving any needed modifications, export at as high a bitrate (or as close to
    lossless) as possible, titled "Pass 1 - Your Title".
  • Wait until it finishes.

Next or final pass:

  • This assumes you are going to the final pass at full resolution. If this is still
    an intermediate step, treat the settings like the first pass above.
  • Load your Pass 1 video as the source.
  • Using Proteus Relative:
  • Set resolution to the next (or final) size.
  • Decompression should be nearly unnecessary (leave it at 0).
  • Only if there is residual noise: raise Denoise until you begin to lose detail,
    then back off slightly.
  • Crank up detail. If Restore Detail begins to show 'dittoed' detail, you have
    pushed it too far and the AI is 'inventing' detail for you. Try to avoid this,
    even if the detail setting is not as high as you'd like. (Backing off Denoise can
    help, too.)
  • Some folks like to turn Dehalo up to 1 or 2; I'm not certain this really helps.
  • Push Restore Detail until you begin to see detail more clearly. (You may need to
    go back to Denoise to get these two 'opposites' back into balance.)
  • Sharpen more, but don't overdo it. The output can be so sharp it seems to cut
    your eyes.
  • Save the settings and try previewing them in several places in your video. After
    saving any needed modifications (including putting grain back, if you like that),
    export as "Pass 2 (or Final) - Your Title".
  • Wait until it finishes.
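For reference, the two passes above could be summarized as plain data. Every number here is an illustrative starting point of mine, not an official Topaz preset, and should be tuned per source using the preview:

```python
# Illustrative multi-pass plan (hypothetical values, not official presets).
PASSES = [
    {  # Pass 1: original -> ~150%, conservative settings
        "scale_percent": 150,
        "decompress": 15,      # only if the source needs it
        "denoise": 10,         # back off as soon as detail starts to go
        "restore_detail": 10,  # stop before 'dittoed' detail appears
        "sharpen": 5,          # light
        "dehalo": 0,
    },
    {  # Pass 2 / final: -> target size
        "scale_percent": "to final resolution",
        "decompress": 0,       # nearly unnecessary on a clean intermediate
        "denoise": 5,          # only for residual noise
        "restore_detail": 15,
        "sharpen": 10,         # more, but don't overdo
        "dehalo": 0,
    },
]
```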

Your final output should be superior to the result of going from 100% to full resolution in one swell foop. (Note: sometimes multi-pass is not necessary, depending on the detail of the original source.)

Starting at 1080 is much easier than starting at 720 and going to FHD or 4K, as there is more original detail data for AI to work with.

I hope this helps…

3 Likes

Ok. Thanks for clarification…

Thanks for the tips! But it sounds like you are calibrating a still. How can you slide the denoise, for example, to the right value when you don't see what is happening?

On your original video, denoise and detail are roughly coupled. Think of it this way: noise is roughly a speck of momentary duration, while a detail may or may not be distinguishable from noise. Therefore, denoising too much can degrade or erode detail. The detail and denoise settings must be set together for the cleanest image that doesn't eat details.

Turning detail up too high pushes the AI's recognition to try too hard. If it doesn't make a perfect recognition, it makes a 'best guess.' This can become very noticeable, especially when it misrecognizes foliage or over-saturated color.

On top of all this is decompression. MPEG is a lossy compression method. Too little decompression leaves the image incompletely recovered, while pushing decompression too hard can cause excess noise and image distortion.

These three settings are the key to getting the most out of your original input.

Using a higher output bit rate or a lossless CODEC will decrease image degradation between processing passes.

After your final video is complete you can use your enhanced low-loss output to create video files of the type and bitrate normally used for that kind of media.
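For the low-loss intermediates between passes, one option is a lossless codec such as FFV1 via ffmpeg. This snippet only builds an example command line; the file names are placeholders, and the flags are standard ffmpeg options rather than anything TVAI-specific:

```python
import shlex

# Illustrative only: build an ffmpeg command line for a lossless
# intermediate file between passes.  FFV1 is lossless, so no quality is
# lost between processing passes.  File names are placeholders.
cmd = [
    "ffmpeg", "-i", "pass1_output.mov",
    "-c:v", "ffv1", "-level", "3",  # FFV1 version 3, lossless
    "-c:a", "copy",                 # pass audio through untouched
    "intermediate.mkv",
]
print(shlex.join(cmd))
```

High-bitrate ProRes or similar would serve the same purpose if FFV1 files are too large for your workflow.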

3 Likes

What I’m describing is what you see while doing a preview. You can play it at a high magnification and also step through it frame-by-frame. Look for bits of noise. Look for details. - You can’t really sharpen and enhance until you have a clean image.

Attempting to upscale and sharpen a deficient image will bake defects in and yield a bigger, more deficient result.

2 Likes

Yeah. I'm redoing several movies now, after I rediscovered the usefulness of AVFS with TVAI. A pre-pass denoise phase is paramount, really. It's not just about sharpening, though. Often the A.I. is simply trying to be too clever, interpreting things the wrong way, especially with noise, of course. So, yeah, the more of these 'detractor' factors you can pre-remove from the image, the better.

I know TVAI does not recognize faces, at least not locally (on our computers). Nothing prevents Topaz from making a concerted effort to train Proteus on what I simply call 'deviation from normal faces,' though, instead of essentially doing a mere 'lazy' statistical analysis of a large data set of images. I.e., training their models needs sanity checks, such as, indeed, rejecting faces that come out too distorted, or even outright contorted. I've shown many examples of this here already (as have others). A human immediately sees these faces are mangled, belonging to would-be monstrosities. Like I said, I understand this cannot be done at our home-computer level (too little computing power, for one); but it can, and should, be done on the big machines Topaz trains their models on. I would go so far as to say 'getting (small) faces right' is the biggest challenge facing TVAI to date.

1 Like