I have a very noisy 720p video at 23.97 fps.
The task was to reduce the noise as much as possible while adding sharpness and detail.
The mission is almost impossible! You can crush the noise flat, but as soon as you try to bring back detail, artifacts constantly appear…
But I tested over 100 settings combinations and came to some conclusions!
I never managed to get rid of artifacts when I enhanced the video and upscaled it to 1080p at the same time. The conclusion: enhance the video at the same resolution as the original, because upscaling during processing multiplies the distortions that are already in the source.
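The upscaling-order point can be shown with a toy sketch (this is not TVAI's actual pipeline, just the general principle): once noise has been enlarged by nearest-neighbour upscaling, a small denoise filter can no longer distinguish it from detail, so denoising at the source resolution first wins.

```python
# Toy model: a 1-D "frame" as a brightness list with alternating +/-2 sensor
# noise, a 2x nearest-neighbour "upscale", and a 3-tap median "denoise".

def median3(xs):
    """3-tap median filter with edge replication."""
    padded = [xs[0]] + xs + [xs[-1]]
    return [sorted(padded[i:i + 3])[1] for i in range(len(xs))]

def upscale2x(xs):
    """Nearest-neighbour 2x upscale: each sample duplicated."""
    return [v for v in xs for _ in range(2)]

def mae(a, b):
    """Mean absolute error between two equal-length signals."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

clean  = list(range(16))                                    # smooth brightness ramp
noisy  = [v + (2 if i % 2 == 0 else -2) for i, v in enumerate(clean)]
target = upscale2x(clean)                                   # ideal 2x result

denoise_first = upscale2x(median3(noisy))   # enhance at source resolution, then upscale
upscale_first = median3(upscale2x(noisy))   # upscale during processing

print(mae(denoise_first, target))  # 1.125: most of the noise is removed
print(mae(upscale_first, target))  # 2.0: duplicated noise passes straight through the filter
```

After duplication the noise occupies two samples per value, so the filter's 3-sample window always sees a majority of noisy duplicates and removes nothing; that is the sense in which upscaling "multiplies" the distortion.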
Interpolating frames to 60 fps, when it comes out clean without extra distortion, visually improves almost any video.
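For anyone curious what interpolation to 60 fps means mechanically: each output frame lands at a timestamp between two source frames. TVAI's interpolation models do motion-compensated synthesis, but the underlying frame-mapping arithmetic can be sketched like this (a simplified 24 to 60 example, not TVAI's actual algorithm):

```python
def blend_plan(src_fps, dst_fps, n_out):
    """For each output frame, return (left source frame, right source frame,
    fractional position between them on the source timeline)."""
    plan = []
    for k in range(n_out):
        pos = (k / dst_fps) * src_fps   # output timestamp mapped onto the source timeline
        i = int(pos)
        plan.append((i, i + 1, round(pos - i, 4)))
    return plan

# 24 -> 60 fps: five output frames span every two source frames
print(blend_plan(24, 60, 5))
# [(0, 1, 0.0), (0, 1, 0.4), (0, 1, 0.8), (1, 2, 0.2), (1, 2, 0.6)]
```

The fractional positions are exactly where an interpolator has to invent a frame; noise that flickers between source frames makes that invention harder, which is one reason denoising first tends to help interpolation too.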
In Proteus, the Reduce Noise and Revert Compression settings reduce noise and smooth the video, while Recover Details and Sharpen add sharpness; but when there is a lot of noise, they also add distortion.
By the way, Artemis Low Quality handles noisy video best, but it makes the result too synthetic, sometimes lifeless. And if you run a second pass with Artemis after Proteus, the smooth 60 fps motion is disturbed…
Does anyone have any thoughts about what I wrote? Can you suggest something better for processing a very noisy video?
Unfortunately, the screenshot does not show how noisy the original is. A freeze-frame hides about 50% of the noise: in a still the noise just sits there and does not flicker, while in the video it flickers like colorful dirty snow.
The Topaz Labs developers are working on a new AI model that handles denoising better. It's not out yet, and probably won't be for a while, as it's still in the early stages of development. But once it's released, maybe it will help you.
I just uploaded a file called C0109.MP4. I had to revert to v2.6.4 to do any enhancement processing in Video Enhance AI without freezing my iMac Pro. The only changes from the defaults were sharpening and detail enhancement, both at 74 percent. The old version works fine, but the latest release only outputs a few frames and then my entire system hangs. The first sign of trouble is the mouse becoming unresponsive.
Artemis V3 still produces unusual noise artifacts in almost everything I try to upscale with it.
As it happens, I was upscaling this episode last night, so I was very curious to replicate your settings. This screenshot is taken from roughly the same spot, using Artemis Anti Aliasing with no upscale, followed by your Proteus Relative to Auto settings of 30/20/100/20/0/50, and it looks revolting.
And Kira starts to look alien:
Now, there were two differences: I have a couple of filters on the source to get rid of issues with the source itself, and I can tell from your framerate that you are using a PAL source.
So I thought I would try again with no processing at all on the original, but there was no change:
Which makes me wonder how different your source file is in terms of base quality. Can you extract a small section containing any of these frames and send it to me for testing? I want to see if it's replicable here with the same source file and do some direct comparisons between that source and this one.
This is not the correct place to post questions about your problem. I advise you to create a new topic in the General section of the forum, explaining your problem the same way you did here. I'm sure you will get more users helping you.
If that were the case, YouTube would have taken down the thousands of video snippets from hundreds of television shows on their website. It's considered fair use and doesn't violate copyright law.
The skin textures look a bit plastic in some areas of the face, but maybe that's just my impression. I think there is still room for improvement. Nonetheless, the result is very good!
Yeah, that part was slightly in motion, so it's not easy to get the details back. If the original image is smooth, the model cannot recreate the skin detail. Anyway, thanks for the compliment at the end.
I asked ChatGPT this, and here was its answer.
What do you think Topaz Video AI needs to improve?
Although Topaz Video AI is a fairly advanced program in terms of applied artificial intelligence for video processing, there are still some areas where it could be improved to further increase its effectiveness and versatility. Some possible improvements could include:
Faster processing speed: Although Topaz Video AI is already quite fast compared to other similar programs, there may still be room for improvement in terms of processing speed.
Greater manual adjustment capacity: Although Topaz Video AI offers a wide variety of manual adjustments and settings, some users may want greater manual adjustment capacity to more precisely adjust video output.
Greater integration with other video editing programs: Although Topaz Video AI is already compatible with many popular video editing programs, greater integration with other popular video editing programs could be useful in streamlining and improving workflow for users.
Improvement in output quality in certain situations: Although Topaz Video AI is capable of significantly improving the quality of many types of video, in some situations there may be room for further improvement in output quality, especially for videos with difficult lighting conditions or low resolution.
Greater support for less common video file formats: Although Topaz Video AI is compatible with many popular video file formats, some users may want greater support for less common or more exotic file formats.
Apart from that, I would add that it's missing the ability to do previews like in previous versions, with four different views at once. It also lacks a way to interact more with benchmarks and send data directly from the application on each user's computer, for anyone who wants to help the engineers gather data to improve the software. And it needs more direct integration with Premiere Pro and DaVinci Resolve.
PS: I have also noticed that depending on the style of cinematography in a given video, you have to use one AI model or another. For example, if there is background blur from interchangeable lenses and the footage is more cinematic in lighting and environment, Proteus in Manual mode with only 15% Revert Compression works for me, as long as there aren't many artifacts dirtying the image. On the other hand, for footage from a fixed-lens camera with little or no blur and a more documentary, video-style look, I use Gaia. The problem I always see is compression: if it's too strong, it's difficult to fix.
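The rule of thumb above could be written down as a tiny chooser. This is purely my encoding of the post's heuristic; the model names are TVAI's, but the branching and thresholds are the poster's taste, not anything official:

```python
def choose_model(cinematic_with_lens_blur: bool, heavy_compression: bool) -> str:
    """Encode the heuristic: Proteus for cinematic footage with background
    blur, Gaia for flat fixed-lens/documentary footage; heavy compression is
    flagged as hard to fix either way."""
    if heavy_compression:
        return "hard to fix: reduce compression artifacts first, expect leftovers"
    if cinematic_with_lens_blur:
        return "Proteus (Manual, Revert Compression ~15%)"
    return "Gaia"

print(choose_model(True, False))   # Proteus (Manual, Revert Compression ~15%)
print(choose_model(False, False))  # Gaia
```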
I have sometimes gotten better results by pre-processing noisy clips with the Neat Video plugin in my other editor first; it generally does a great job. It would be nice to see TVAI do everything eventually, though. I have not been impressed with the denoising in any of the TVAI models so far.
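If Neat Video isn't available, a rough free analogue is a pre-pass through ffmpeg's `hqdn3d` spatial/temporal denoiser before importing into TVAI. The filter name, the `hqdn3d` parameter order, and the flags below are real ffmpeg options, but the strength values and filenames are illustrative assumptions to tune per clip:

```python
def build_denoise_cmd(src, dst, strength="4:3:6:4"):
    """Build an ffmpeg argv list for a denoise pre-pass.
    strength = luma_spatial:chroma_spatial:luma_tmp:chroma_tmp."""
    return [
        "ffmpeg", "-i", src,
        "-vf", f"hqdn3d={strength}",
        "-c:v", "libx264", "-crf", "16",   # near-lossless, so TVAI isn't fighting fresh compression
        "-c:a", "copy",
        dst,
    ]

print(" ".join(build_denoise_cmd("input.mp4", "input_denoised.mp4")))
```

The temporal terms (`luma_tmp`, `chroma_tmp`) are what attack the flickering "dirty snow" kind of noise; the spatial terms soften the image, so keep them low if you want TVAI to do the detail recovery.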
Below is me finding the exact same frame after doing precisely the same process:
This is not a perfect comparison, but here is an image slider comparing a cutout from yours against mine, which is the clearest way to see how different they are.
I am not mentioning this because I expect you to know the answer, but rather: does anyone here have a suggestion as to why the same source file and the same steps are producing two completely different results?
Is Proteus so hardware-dependent that the output depends entirely on which PC you run it on?