Proteus 3 - Dialing it In

Don,
You got to the root of the biggest problem. The first thing needed to do any kind of image enhancement is to be able to feed it clean video. - And that can be a major problem, or no problem at all. - It just depends on your source footage.

If your video is new, clean, good quality, and good resolution, and just needs a little tweaking, that's one thing; dirty, noisy, poor-resolution footage is quite another. And most video will fall somewhere between the two.

IMO: VEAI’s Proteus enhancement is probably most useful for getting poor quality video up to a level where it will look good when watched on a big HD screen.

Back to your initial point. Getting old, noisy video clean enough to actually run through the enhancement filters is a huge challenge.

Personally, I’m into restoring old video, so I expect that it’s going to be in rough shape and in low resolution, too. Even worse, a lot of really old video was originally on film, so the noise comes from film grain as well as compression. A lot of it is interlaced, as well.
(I do have several video utilities I frequently need to massage the source footage through before opening it in VEAI, but those are details we should bring up a bit further down in this discussion. - I don’t really know enough about Neat Video, and I hope that everyone taking part in this topic will want to mention what other tools they use…)

As such, job one is getting the video clean enough to enhance - and in a lossless format, so it can be run through numerous intermediate processes with minimum degradation. Unfortunately, while I can get my video into a format like that with 3rd-party utilities, I would like to keep it lossless the whole way through, and the VEAI GUI doesn’t really give us the necessary output options.
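For anyone looking to set up a lossless intermediate outside of VEAI, here is a sketch with ffmpeg (assuming ffmpeg is installed; the paths and codec choices are just examples, not something specific from this thread):

```shell
# Re-encode to a lossless intermediate so repeated cleanup passes
# don't stack new compression artifacts on top of the old ones.
# FFV1 in MKV; -level 3 enables sliced, multithreaded encoding.
ffmpeg -i IN.mp4 -c:v ffv1 -level 3 -c:a copy OUT_ffv1.mkv

# Alternative: UT Video in an AVI container, for tools that want AVI.
ffmpeg -i IN.mp4 -c:v utvideo -c:a pcm_s16le OUT_utvideo.avi
```

The files get large, but there is no generation loss between saves, so the order of cleanup passes stops costing quality.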

As far as cleaning up goes, I think the revert compression and despeckle features in several of the enhancements are crucial to cleaning up the source video properly. Within a lossless workflow, I would like to be able to run those two operations independently, prior to running the footage through the enhancements.

So I am asking, what would be an effective methodology for doing that?


Deinterlace:
# bwdif with send_field outputs one frame per field, doubling the frame rate
ffmpeg -i C:\IN.mp4 -filter:v bwdif=mode=send_field:parity=auto:deint=all C:\OUT.mp4

Get up to speed with Neat Video; it is simply the most powerful noise-reduction tool, full stop.
It takes a long time to learn and master the controls, but it really is unsurpassed.
Remember to leave some noise LOL

The problem with addressing multiple issues at once is that we end up correcting problems with the wrong tools, where a smart pre-process adjustment would remove them all.

Running features separately and saving lossless before next process is the right way to go IMO.
Topaz has never made UI changes to support this. Have you looked at VEAI as a plugin host with independent models?
Like audio plugins that are chained together or run as chained, sequenced events.
Each could be enabled/disabled.

Does bwdif work better than yadif in ffmpeg for deinterlacing?

Don,
I downloaded a demo of NeatVideo 5 and tried it in Vegas. The demo looked nice, but the problem is the sample size. Perhaps there is a workaround. I’m trying to denoise SD scale video. (Actually it’s widescreen in SD) As such, NeatVideo is complaining that my sample boxes are too small. I tried using a generic setting but got only mediocre results.

The application I’m using currently can read/write/filter just about anything. I simply set it to remove luma and chroma artifacts and to change the input from 29.97i to 29.97p, then wrote it to a lossless AVI. (It got really big!) But VEAI read the file in nice and clean. So, I cropped off the black bars and ran it through Artemis De-halo into a .mov @ 180 Mb/s. (I wish there were more output choices.)

My next step will be to see if I can resize the export result to HD. Or, I may go back to my original AVI and try to de-halo and resize using Proteus. The main difference is that I didn’t have to contend with the problems stemming from Revert Compression and Reduce Noise.

If VEAI had an enhancement dedicated to decompressing MPEG and removing compression artifacts and noise, so that original-size output could be stored in a lossless format, a great deal of the angst involved in using their enhancements to resize would vanish. - Oh, and I forgot to mention cleaning up after deinterlacing.
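Lacking such a dedicated mode, one way to approximate that pre-enhancement pass outside VEAI is an ffmpeg filter chain. A sketch, assuming ffmpeg; the filter strengths here are untuned starting points, not recommendations:

```shell
# Deinterlace, lightly deblock, denoise, and store losslessly in one pass.
# bwdif with send_frame keeps the original frame rate; deblock and hqdn3d
# address compression blocking and noise respectively.
ffmpeg -i IN.mp4 -vf "bwdif=mode=send_frame,deblock=filter=weak,hqdn3d=3:3:6:6" -c:v ffv1 -level 3 -c:a copy CLEANED.mkv
```

The lossless FFV1 output can then go into VEAI (or any other enhancer) without adding a new round of blocking.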

(FYI: The app I used to prep the video is called Acrovid FootageStudio 2.)

Oh yeah, the RTX 3090 will do a crop without enhancement at between 1000 and 2300 FPS. Running that crop through Artemis Strong Halo cuts it back to about 62 FPS, but the output at the original resolution is beautiful, and I believe it will scale up to FHD very well. (Doing that one tomorrow.)

I wish they hadn’t taken Yadif out of VEAI. I don’t know what FootageStudio used to deinterlace, but it is the best I’ve found so far.

Hello! FYI: I am creating a new topic in suggestions. It is due to one of the problems I had preparing my outline for this discussion. Please visit it at We need a dedicated source video pre-enhancement feature!


Possibly, but what is really needed is to be able to work in a lossless format. Then blocking should cease to be an issue. That is, unless you ‘baked’ the deblocking artifacts into it before you went to lossless.

Yes, the sample size can be very limiting with the low-res video we are using!
If your blue sky has faint clouds, your sample will filter out the clouds.

Finding a flat, featureless area that lasts 5-6 frames can be hard.
In some cases I have used skin, or even edited a sample extension into a set of frames.

When you have a series of clips from the same camera and settings, search all for the sample area.

Unfortunately, universal samples do not cut it for most material.

My original use of Neat Video was to sample and remove artifacts from earlier VEAI versions.
That task was easy because I could apply VEAI to any chosen clip to create a sample, which could then be used on any video of the same or similar dimensions.

Neat video is very good where you have blocking and good sample area.
Just depends what type of scenery is in the footage.

Some of the footage is highly compressed with bad blocking.
The lossless save is to keep 100% of your processing improvements, with no new blocking added between saves.

I will not be talking about size and quality here, because there will be many waiting to argue their position for a particular process.

Good quality is whatever someone wants it to be, on whatever platform or hardware they wish to use! Nobody cares unless they watch it.

That’s just what I have been using.
The mp4 footage I have used it on has infrequent interlace in some areas of movement.

I do not know which would be best.
What is your experience?

Cheers

PROTEUS 3:
Mp4 13.3mb 480x848
High Compression.
Moderate Blocking Present.

Final Proteus Settings:
Revert Compression 38
Recover Detail 22
Sharpen 6
Reduce Noise 17

Not used:
Dehalo
Antialias/DeBlur

Comments: Sharpen literally goes from no effect to over-sharpened between 6 and 7.
I adjust each setting in order, working down the list, then do a second pass to fine-tune.

As mentioned previously, I also carry out other processes before and or after Topaz.

AFTER TOPAZ:
Lift -0.02
Gain 0.02
Tint -2
Exposure 0.06

DE:NOISE
Spatial Rad 2.15
Spatial Thre 4.52
Temporal Thre 4.52

I’m using Intertake to convert my source into a lossless AVI format. I also activate deinterlace and turn on the artifact filters and denoise, if necessary.

The result is usually clean enough to feed directly to VEAI with few adjustments. - I still need to deal with ghosting now and then.

Things won’t get better until they add some AI to the raw import section of the application, and not just to the enhancers.

What is the best Topaz model for deinterlacing only, saving 16-bit TIFF at 100% scale, without denoise/deblock?

When you talk of Topaz use and the order of processing, this is my first task.
I would think Topaz should include an option to “Only process where interlace is found”!

The only model I have been able to test was “Interlaced Robust Dehalo v1”
Other models would not work for me.

What have been your most used interlaced models?
What happens if you use an interlaced dehalo model on video without halos?
I ask because it might be my only option that works.
I do not even know what dehalo is. LOL


Halo is generally a white band around edges (it can also be a dark band), left where a contrast adjustment has been made to make things appear sharper. It’s most noticeable around light/dark edges. In my opinion VEAI does not deal with halo at all; it just blurs the entire image, and leaving it to Sharpen just makes the halos stand out even more. I think it does the best de-interlacing around, but unfortunately all the de-interlace models do additional processing. I’d love to see just a pure de-interlace option with nothing else. Denoise can do an excellent job on very poor VHS video, but it tends to be over the top for most things, making skin in particular lose any texture and giving it that plastic look.

Deinterlace should be a standalone process.
An interlace-identification setting could disable processing below a threshold.
In many cases only a small percentage of the total footage would be processed.

So “Interlaced Robust Dehalo v1” is the best option?
For these models that double frames, if I can get them to work, could I remove the duplicates with ffmpeg as a workaround, to leave the correct number of frames?

Care to share what you use for interlaced footage?

When selecting any of the Dione models to de-interlace, you have no control over settings. I tend to use the Dione TV model for old DVDs and don’t worry about the frame rate unless it needs some repair work. There are a multitude of tools out there to change the frame rate without de-syncing the audio. As I said, I don’t think VEAI deals with halo correctly, so I wouldn’t use it for that. I usually import into Blackmagic to reduce halo after de-interlacing, if necessary, using an edge-detect filter to de-sharpen only the edge halo and not the entire picture.

I save as TIFF.
If the number of frames was doubled, removing every second frame leaves it correct.
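If the extra frames really are duplicates, ffmpeg can drop them. A sketch, assuming ffmpeg; filenames are placeholders:

```shell
# Keep only even-numbered frames (n = 0, 2, 4, ...) from a double-rate
# clip, halving the frame count back to the original.
ffmpeg -i doubled.mov -vf "select='not(mod(n,2))'" -vsync vfr half_rate.mov

# If duplicates are scattered rather than strictly alternating,
# mpdecimate drops frames that barely differ from the previous one.
ffmpeg -i doubled.mov -vf mpdecimate -vsync vfr deduped.mov
```

Worth checking a motion-heavy section first, since frames with movement may not be exact duplicates.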

As I understand it:
You can just choose “Custom Setting” and manually set the scale to 100%, to avoid use of Denoise/Deblock.

In that case, the Dione Interlaced TV or DV models should only apply the deinterlace process!

As I understand it, these models deinterlace frames but also double the number of frames?

I’m guessing the doubling of frames is done with copies, not newly processed frames as with FPS conversion? I doubt unique frames are created.

Seems to me that every second frame could be deleted leaving what is needed.
What am I missing here?

No, interlaced footage comes from the days of CRT televisions, where electrons were fired at the screen to draw the picture. NTSC 720x480i ran at 29.97 Hz, but interlacing effectively means half the vertical resolution at double the frequency, so it becomes 720x240 @ 59.94. For PAL, 720x576i at 25 Hz becomes 720x288 @ 50. You are effectively joining two half-height fields to give one full-height frame, so any movement between the capture of the first field and the second makes the two appear offset from each other; this is why you get those interlaced lines on fast movement but not during slow or no motion. What I assume VEAI does is take each of those half-height fields, fill in the missing vertical resolution from the frames before and after, and then do some interpolation to fill in the missing details and produce one progressive frame. So while there can be effectively duplicate frames where there is no motion, every other frame won’t be a duplicate where there is motion. This is why you only tend to see interlacing on objects that are moving.
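The field arithmetic above can be checked with simple integer math (rates scaled by 100 to avoid floating point):

```shell
# Each interlaced frame carries two fields; deinterlacing with one
# output frame per field doubles the rate and halves the field height.
fields=2
ntsc_x100=2997                 # 29.97 Hz, scaled by 100
pal_x100=2500                  # 25 Hz, scaled by 100
echo "NTSC: $((ntsc_x100 * fields)) (59.94 fps), field height $((480 / fields))"
echo "PAL:  $((pal_x100 * fields)) (50.00 fps), field height $((576 / fields))"
```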

Perhaps this link will make it clearer What is an interlaced display and how does it work?

Yes, some video was stored on CD with one dimension reduced and then stretched back for playback, sort of like a poor man’s compression.
LOL, when I first came across this I actually did laugh out loud.

It changed the way I process videos.
After seeing how many ways self-proclaimed expert editors can screw up a film, you never know what is going on until you look.

My point is, either way, it seems like the models could be used