Proteus 3 - Dialing it In

At popular request (of one person), this topic has been moved from the suggested-feature discussions to this location.

(There is only one thing better than being right; that is being absolutely right.) :nerd_face:

Hello!

I have just finished a session with the VEAI 3.0.0.6 beta where I was taking some really bad, old film-based source material and converting it to FHD, leveraging Proteus 3 to do the heavy lifting. This is a piece of terrible video I roll out whenever I want to see if VEAI's enhancement capabilities have evolved enough to make a silk purse out of a sow's ear.

One of the things that struck me was that simply having Proteus 3 and some lousy footage to work with wasn't enough; it's kind of like having a set of expensive "pro" golf clubs and still playing lousy golf. - It's not the tool's fault if the techniques aren't good… :thinking: - (That's a simile I'm using here to illustrate my point. And, I actually hate golf.)

As I've been discovering, Proteus 3 is a tool. Even though it isn't perfect (yet), you can get a lot more out of it by perfecting your technique. - Let's call it "Dialing it in."

What I am hoping to do is start a discussion of what VEAI users' methodologies are when it comes to setting Proteus 3 to do its best on your video. - I'm hoping everyone will contribute their comments here and maybe we can collaborate to find an approach that will get the best enhancement out of Proteus 3.

I am composing an outline of a method I've more or less 'tripped over' in the last week or two that is helping me get much better results than I did previously. - I'm going to submit it separately, in another post to this topic.

If you have made some discoveries and formed your own approach to getting good results, please share them here as well.

4 Likes


Thanks for starting this thread.

I only use Proteus, and I have found that the old "garbage in, garbage out" rule is never more real than when you are trying to upscale, even with something like VEAI. That doesn't mean you can't take a poor-quality video and use VEAI and Proteus, but recognize what you're working with first.

I like to first inspect the video at its input resolution, compared to a known high-quality video of the same resolution. If the video to be upscaled is of lesser quality, then preprocessing is probably going to help: anti-aliasing, deblocking, mild noise reduction, maybe smoothing. And the encoding should be done at the highest rate you can manage; I personally use a CRF of 0 or whatever equates to lossless.
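
As a rough sketch (the filter strengths here are just placeholders, not settings I'm prescribing; adjust to taste), that kind of pre-clean pass might look something like this in ffmpeg:

ffmpeg -i C:\IN.mp4 -vf "hqdn3d=2:1:3:3,deblock" -c:v libx264 -qp 0 -pix_fmt yuv420p -c:a copy C:\CLEAN.mp4

hqdn3d does the mild noise reduction, deblock handles the blocking, and x264 at QP 0 keeps the intermediate lossless so nothing new gets baked in before VEAI sees it.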

I also downscale or upscale my input videos to 854x480 for better time management if the quality is not great to begin with. So if an input video is 1280 wide, it gets reduced to 854; or if the video is 640 wide, it goes up to 854. I've not had enough success taking anything lower than 640 up to FHD, so I don't try; those go to 1280 and I'm fine with that. The exception is if the video is already 1920x1080; in that case I clean up the video in preprocessing and then run VEAI at 100 percent scaling.
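
A sketch of that resize step (assuming a 16:9 source, and again encoding losslessly so the rescale itself costs nothing extra):

ffmpeg -i C:\IN.mp4 -vf "scale=854:480:flags=lanczos" -c:v libx264 -qp 0 -pix_fmt yuv420p -c:a copy C:\854x480.mp4

The same command works whether you are coming down from 1280 or up from 640; only the input changes.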

For VEAI itself I use the encoder.json mod that @david.123 came up with; it gives me the best output file for my purposes. Lastly, I do post-processing to the rate I require, plus any other touch-up that might help.

That's my standard pre- and post-VEAI Proteus workflow. As far as VEAI settings go, the better the input video, the better Proteus Auto works. Currently I find Auto mostly OK, though still slightly soft for my use case, but not objectionable. Depending on the monitor or TV/streaming device I'm using to view the video, it makes a big difference, which is why I say Auto is mostly OK. But I cannot stress enough that Auto can be all over the place if the input video is poor and varies in quality during the video. Proteus and VEAI can do wonders, but they can't do miracles.

[
{
"text": "NVENC QP (NVIDIA)",
"encoder": "-c:v h264_nvenc -qp 0 -pix_fmt yuv420p",
"ext": [
"mov",
"mkv",
"mp4"
],
"transcode": "aac",
"os": "windows",
"device": "nvidia"
}
]
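
If I understand the file correctly, that "encoder" string is just the argument list handed to ffmpeg, so the same encode done standalone would look roughly like the following (illustrative only; the "transcode": "aac" entry covers the audio):

ffmpeg -i C:\VEAI_OUT.mov -c:v h264_nvenc -qp 0 -pix_fmt yuv420p -c:a aac C:\OUT.mp4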

1 Like

I know, I could get much better results if I used different settings for each scene… but I don't care that much. Maybe if I was getting paid to do it, I would.
Because of that, I'll take the losses with running the whole thing on the same settings.
Here's what I have found:
The model for 480 to 1080 processes faster than 720 to 1080.
I'm not interested in anything higher than 1080, so I have no idea if my findings hold true with resolutions higher than 1080.

Revert Compression. In most cases I can keep this around 30; for more compressed sources I go up to ~60. This setting seems to fix more artifacts than it creates. Even in a source with no compression artifacts, I feel it's safe to keep this around 30.

Recover details. I notice more artifacts when it's set to more than 10.
Sharpen. This seems to smear things when set to more than 10.
These two can be set higher and do amazing things, but only in the context of one scene. They tend to destroy other scenes.

Reduce noise. I like removing noise, but it's tricky. You want to set this one to the smallest number you can while still cleaning things enough. If it's too low, it will bake noise into solid-colored objects, giving them spots. If it's too high, it will smear or blotch things very noticeably, mainly backgrounds and far-away faces. Maybe you can leave this at 0. I never do, because noise makes the final file too big for my storage.

Dehalo. 10 seems to be the sweet spot. 0 amplifies artifacts from the last slider. More than 10 just blurs things too much.

Antialias / Deblur. I want Deblur to work, but the artifacts have always been too bad to forgive. Any positive value seems to add a circle-with-a-dot-in-the-center texture to tree leaves and rocky or dirt ground.
Anyway, I've left this at -4 for the last batch of files I've run. It's not too invasive at that value. Lower than that makes those circle textures come back.

When deciding what settings to run on the whole source, I look for sections with far-away faces and others with trees. If I can get those to look about as blurry as the source, close-up faces and such still gain great improvements.

1 Like

Mike,
I agree with just about everything you've said here, although I don't know much about the encoder.json mod by @david.123. (Especially) while beta testing, I prefer to keep everything as vanilla as possible; but if I come across a change that resolves a problem, like the one you mention above, I do let the dev team know about it. - Which sounds like it has already been done.

One of the things I hope we can put together here is a guide to the technique (a.k.a. skill) of tweaking Proteus-3 Auto to get the best results. It's a useful tool, but how good the results are depends on how it's used.

Two old platitudes here:
1.) From computer programming: There is no such thing as automatic.
2.) From traditional crafts: It is a poor craftsman that blames his tools. (In reality, there are some tools that are better suited than others.)

In this topic, I'll be putting in my approach and (hopefully) provoking an interactive discussion. The sharing of ideas will help us all get more out of VEAI, and possibly influence the evolution of Proteus itself…

So, what do you think this topic should cover? We need a scope. Please suggest…

1 Like

(Probably) unnecessary comment: I've just moved to the new VEAI beta 3.0.0.7.b. I really don't know whether there will be much different about Proteus itself, but I suspect the models may be changing or the rendering engine improved. - I noticed a somewhat faster FPS rate at the end of the week…

I think a lot of people might think in almost a 2D manner when it comes to the Proteus controls. In other words, you adjust one thing and that works on one type of problem, but in reality changing one parameter can affect others.

I'd like to see discussions on things like how to control heavy-contrast areas like eyes and teeth, or very shiny objects. It's not difficult, but there are some compromises that you sort of have to live with.

Or, when is Dehalo too much? This is one where I find close inspection of the original video is needed.

Another is understanding what you're trying to get Proteus to do. That may sound weird, but there are two main things I use Proteus for: cleanup only, or upscale. Upscaling can include cleanup as part of the process, but as anyone who has used regular methods of upscaling knows, general upscaling is never perfect, or to put it a better way, never the same as the original in terms of artifacts. If it were, would we be using AI?

I guess as this thread continues on we will see where people want to take this.

One last thing: I am a retired test engineer by trade. When testing for future production, repeatability is everything. During prototyping, nothing is repeatable at first; as time goes on, everyone understands the design better, adjustments are made, and section by section things come together. In order to judge that measure of repeatability, you need gold standards to test against. For VEAI that would be a set of videos that never change. I keep a few that are only a few minutes long at most; those are the videos I run through a new build of VEAI first. I can make a lot of quick judgements for my use case based on how those videos respond to the new build.

Anyway, I hope in the future we see more filters added, like AI face recognition with controls to increase or decrease the amount added.

I also think a new Proteus with a somewhat more controlled and complex Recover Details would be good. That might be a separate filter that could be used in other models, who knows.

Anyway looking forward to what others have been doing with Proteus.

Mike,
I hear you. VEAI+Proteus is a great tool for enhancing videos. However, it is not a panacea for all problems. And, as V3 is still in beta, we're in a somewhat new ballgame every week. That said, it appears to be getting better each week, and is also getting somewhat faster… :slightly_smiling_face:

I understand what you're saying about the interrelationship between the Proteus controls. And one of the issues there is that making the settings can be like playing digital Whack-a-Mole.

The main purpose of this topic is to see if we can discuss and formulate a methodology for setting up Proteus for optimum output. Putting our heads together and discussing the pros and cons of various approaches should serve to clarify the situation in everyone's minds.

FYI: I've been working on my pet method for configuring Proteus. I'll be posting that when I've fleshed out an outline I've made with a few 'punchy' paragraphs and remarkably long run-on sentences. :roll_eyes: - OK, just kidding about the run-on sentences…

One of the by-products of this exercise is that the process of documenting my/your methodology also makes it obvious where some of the weaknesses are, how to work around them, and what we might want to suggest to the VEAI devs to resolve them. - I already have a few of these jotted down for later.

Ultimately, this discussion should help improve our technique, and it may help improve VEAI as well.

Thanks for joining the discussion.

Phil
:cowboy_hat_face: :nerd_face:

1 Like

I started with 30, 20, 10, 20
I find Proteus over-processes and loses clarity compared to Gaia HQ.
I apply Neat Video only to break up blocking, which means zooming in and using Neat Video settings that leave most of the blocking in the footage; Gaia can then remove most of the remaining blocking.

For those who are interested, I will share a very time-expensive tip.
When using Neat Video, you want to find the area with the most noise for the profile.
First use Gaia at 2x on the clip, then watch it to find the worst blocking in the processed footage.
Because most of the blocking has been removed, only the troublesome blocking is left.

Or I use Proteus before post-process adjustments like exposure, lift, and clarity in other apps.
Things might have changed with the new models, but this was my last experience.

IMO, if there is no noise or blocking left, the footage is over-processed.

Don,
You got to the root of the biggest problem. The first thing needed to do any kind of image enhancement is to be able to feed it clean video. - And that can be a major problem, or no problem at all. - It just depends on your source footage.

If your video is new, clean, good quality, good resolution, and just needs a little tweaking, that's one thing; having dirty, noisy, poor-resolution video is quite another. Most video will fall somewhere between the two.

IMO: VEAI's Proteus enhancement is probably most useful for getting poor-quality video up to a level where it will look good when it's watched on a big HD screen.

Back to your initial point. Getting old, noisy video clean enough to actually run through the enhancement filters is a huge challenge.

Personally, I'm into restoring old video, and so I expect that it's going to be in rough shape and in low resolution, too. Even worse, a lot of really old video was originally on film, so the noise comes from film grain as well as compression. A lot of it is interlaced, as well.
(I do have several video utilities I frequently need to massage the source footage through before opening it in VEAI, but those are details we should bring up just a bit further down in this discussion. - I don't really know enough about Neat Video, and I hope that everyone taking part in this topic will mention what other tools they use…)

As such, job one is getting the video clean enough to enhance, and into a lossless format so it can be run through numerous intermediate processes with minimum degradation. I can get my video into a format like that with 3rd-party utilities, but I would like to be able to keep it lossless all the way through, and unfortunately the VEAI GUI doesn't really give us the necessary output options.

As far as cleaning up goes, I think the revert compression and despeckle features in several of the enhancements are crucial to being able to clean up the source video properly. In the framework of using a lossless source, I would like to be able to run those two operations independently, prior to running through the enhancements.

So I am asking, what would be an effective methodology for doing that?

1 Like

Deinterlace:
ffmpeg -i C:\IN.mp4 -filter:v bwdif=mode=send_field:parity=auto:deint=all C:\OUT.mp4
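
For comparison, yadif takes the same mode/parity/deint options, so an A/B test against bwdif is just a filter-name swap:

ffmpeg -i C:\IN.mp4 -filter:v yadif=mode=send_field:parity=auto:deint=all C:\OUT_yadif.mp4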

Get up to speed with Neat Video; it is simply the most powerful noise-reduction UI, full stop.
It takes a long time to learn the controls and master them, but it really is unsurpassed.
Remember to leave some noise LOL

The problem with addressing multiple issues at once is that we end up correcting problems with the wrong tools, where a smart pre-process adjustment would remove all of them.

Running features separately and saving lossless before the next process is the right way to go IMO.
Topaz has never made changes to the UI to do this. Have they looked at VAI as a plugin with independent models?
Like audio plugins that are chained together, or have chained, sequenced events.
Each can be enabled/disabled.

Does bwdif work better than yadif in ffmpeg for deinterlacing?

Don,
I downloaded a demo of NeatVideo 5 and tried it in Vegas. The demo looked nice, but the problem is the sample size. Perhaps there is a workaround. I'm trying to denoise SD-scale video (actually it's widescreen in SD). As such, NeatVideo is complaining that my sample boxes are too small. I tried using a generic setting but got only mediocre results.

The application I'm currently using can read/write/filter just about anything. I simply set it to remove luma and chroma artifacts and to change the input from 29.97i to 29.97p, and wrote it out to a lossless AVI. (It got really big!) But VEAI read the file in nice and clean. So, I cropped off the black bars and ran it through Artemis De-halo into a .mov at 180 Mb/s. (I wish there were more output choices.)
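
(For anyone without that application, a rough ffmpeg equivalent of the intermediate step might be the sketch below: deinterlace to progressive and write a lossless Ut Video AVI. It won't replicate the luma/chroma artifact cleanup, and yes, the file gets huge.)

ffmpeg -i C:\IN.mp4 -filter:v bwdif=mode=send_frame -c:v utvideo -c:a pcm_s16le C:\LOSSLESS.avi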

My next step will be to see if I can resize the exported result to HD. Or, I may go back to my original AVI and try to de-halo and resize using Proteus. The main difference is that I didn't have to contend with the problems stemming from Revert Compression and Reduce Noise.

If VEAI had an enhancement dedicated to decompressing MPEG, removing compression artifacts and noise so the original-size output could be stored in a lossless format, a great deal of the angst involved in using the enhancements to resize would vanish. - Oh, and I forgot to mention cleaning up deinterlacing.

(FYI: The app I used to prep the video is called Acrovid Footage Studio 2.)

Oh yeah, the RTX 3090 will do a crop without enhancement at between 1000 and 2300 FPS. Running that crop through Artemis Strong Halo cuts it back to about 62 FPS, but the output at the original resolution is beautiful and I believe it will scale up to FHD very well. (Doing that one tomorrow.)

I wish they hadn't taken Yadif out of VEAI. I don't know what FootageStudio used to deinterlace, but it is the best I've found so far.

Hello! FYI: I am creating a new topic in suggestions. It is due to one of the problems I had preparing my outline for this discussion. Please visit it at We need a dedicated source video pre-enhancement feature!

1 Like

Possibly, but what is really needed is to be able to work in a lossless format. Then blocking should cease to be an issue. That is, unless you 'baked' the blocking artifacts into it before you went to lossless.

Yes, the sample size can be very limiting with the low-res video we are using!
If your blue sky has faint clouds, your sample will filter out the clouds.

Finding a flat, eventless scene 5-6 frames long can be hard.
In some cases I have used skin, or even edited a sample extension into a set of frames.

When you have a series of clips from the same camera and settings, search all for the sample area.

Unfortunately, universal samples do not cut it for most material.

My original use of Neat Video was to sample and remove artefacts from earlier VEAI versions.
This task was easy because I could apply VEAI to any chosen clip to create a sample, which could then be used on any video of the same or similar dimensions.

Neat Video is very good where you have blocking and a good sample area.
It just depends on what type of scenery is in the footage.

Some of the footage is highly compressed with bad blocking.
The lossless save is to keep 100% of your processing improvements, with no new blocking added between saves.

I will not be talking about size and quality here, because there will be many hanging around to argue their position for a particular process.

Good quality is whatever someone wants it to be, on whatever platform or hardware they wish to use! Nobody cares unless they watch it.

That's just what I have been using.
The mp4 footage I have used it on has infrequent interlacing in some areas of movement.

I do not know which would be best.
What is your experience?

Cheers