Video Enhance AI v2.1.1

I would like to see smooth settings for the parameters Sharpen, Deblock, and Reduce Noise, just like in the Theia model. More generally, I would like these to be separate filters available for any model, as is done with Grain Settings: each filter gets an on/off toggle, and in its settings you can smoothly adjust the strength up or down. Standard filters could include: Deinterlace, Crop, Sharpen, Deblock, Reduce Noise, reduce cinema stripes and scratches, reduce VHS horizontal noise flash stripes, replace the crumpled areas of VHS film, remove chromatic aberrations, correct color, automatic white and black balance, and adjust brightness and contrast.

After selecting and tuning the filters, you could then choose different models to restore detail in the AI-upscaled image. Models could be made for specific scenes and types of video, where the nature of the textures and structures is fundamentally different: a tree or bush should come out branched, while a window or wall should come out flat (right now the result is random). Ideally, the AI should be smart enough to recognize objects and reconstruct them in 3D, taking the basics from the video itself and, after recognition, building models that match the original. I think that in the near future the program will restore an object from a worldwide database, even reproducing text from recognized newspapers and magazines, or restoring a face, hair, and a whole body with clothes based on what it recognizes in the video. It would be amazing!


I was able to reproduce this bug. Indeed, if you change the frame preview value, for example from 30 to 50, it is only saved if you press the Enter key or drag the max VRAM slider. In that case you can see the icons appear to the left of the “V” field. But if you change the frame value again and then just close the settings window, or click with the mouse somewhere outside it to close it, the frame setting is not remembered.

Yes, faces get distorted with many models, and it is a really big nuisance, because the face is distorted beyond recognition. I noticed that it mainly comes from over-darkening the pupils and eyebrows and over-outlining the pupil and the area under the eyes. As a result, one eye or pupil can end up twice as large as the other, and the eye slits become black and elongated. The tip of the nose also gets cut off, so its shape becomes rounded and unnatural for that person.

Here you either need to simply apply a mask to the face and do only clarity and removal of floating noise, keeping clean contours without much amplification. Or use an AI that analyzes the scene in 3D, memorizes the shape of the face (as in a rotating head scan), and then scales that face, with its expressions, into the given shot, overlaying it like a texture.

Or you could do a more thorough analysis of the scene: for example, when a shot zooms in from a far position to a near one, process the scene not from the first frame to the last but the other way around, from the last to the first. This would give the best result, because the last frames are already crisp and large, while the first one is still far away and blurry. With this reverse technique, especially at a scene change, the model would already have all the data on the noise and on the actual shape of the object, which could then be modeled and restored in reverse order with good quality.

In one noisy video I had two people in the background walking towards the camera. The AI identified one of them as a tree, and only when they came closer and grew in size did the second person take the form of a person instead of a tree… With reverse analysis starting from the last frames this would not have happened: both people would have been recognized all the way back to the first frame and then reproduced correctly and well filtered from noise.

Fantastic, thank you! I’ll get this fixed ASAP.


It seems there are a lot of different approaches. Maybe a two-step remaster? I once read that cleaning old 480p footage with CG and then upscaling with Artemis HQ can give good results. I must check it too and will upload the results here. As I can see, there is some interest among users in finding a good way to upscale 480p to 1080p. At least my files have a high bitrate, around 20,000 kb/s.

Are you on Mac or Windows?


I am not totally sure a trained AI model can work in smooth steps between, say, 0% and 100%. The AI is trained to “guess” what the result might look like, and that's it. It may well be that there is no in-between step to get a bit more or less of the guess. :neutral_face:


What type of footage did you compare? What was the target?

Thank you. And what did you do with the clip once it was upscaled to 4K?


So you have a Blu-ray in 1080 and wanted to clean it up? I don't get it. Blu-ray at 24p or 50i/60i, and then what?

But what exactly did you do using the ALQ model?

Is there a new beta? I don’t see anything about one.

You’re absolutely right, you have to test different approaches and try to achieve the best desired result based on your criteria.

Personally, I already have 4 different workflows. Each time I tell myself: this is it, eureka. And then a new idea comes to me which I experiment with. Often it's a flop, but sometimes it's better. My ideal workflow is not yet 100% found. But I have time ahead of me…

Is gaia v6 on the next release ? :slight_smile:

Where are people seeing AHQ12 and ALQ13?

Guys, devs! How is VEAI performing on the M1?


Edit the json files :slight_smile:
Might still need some work with Artemis HQ 12… getting 12 sec per frame :open_mouth:
(RTX 2070)

To return to this subject: as of today I have the option to upgrade to 20H2. No need to force it; it arrived naturally, as usual with the previous versions.

But before doing it, I will make a backup as I do before every big update.

Perhaps I am doing it wrong. I know I have to open the json files in an editor, but could someone please explain how to edit them correctly (where to find them and what to change), and which models can currently be “updated” this way that are not in the official 2.1.1? Also, is there a way to preserve the models that are already present (i.e. not change the values of an existing model, for example by copying one model's json, changing the copy, and putting it back)?

Sorry if it has already been explained somewhere, I guess that one went by me. :slight_smile:

Close Topaz if open.
Go to C:\Users\<YOUR USER>\AppData\Roaming\Topaz Labs LLC\Video Enhance AI\models
Make a copy of artemis-mq-12.json (for example) and rename it to artemis-mq-13.json.
Edit it and change the version (line 3) from 12 to 13. Save and enjoy :slight_smile: