Video Enhance AI v2.2.0

Check your NVIDIA drivers. Another person fixed it by updating to the latest Studio driver.

Moving a window with keyboard shortcuts in Windows:
Alt+Space, then M
then move it with the arrow keys.

To resize any window:
Alt+Space, then S
Press one arrow key to choose which side you want to resize; after that, the arrow keys resize that side (left/right or top/bottom).

Finish with the Enter key to confirm your window size and position.

Doesn't fix your issue, but it could work as a workaround for now.

I personally use "Add > Restoration > ColorDeband > f3kdb_neo" (High (default) or Very High depending on the source) for debanding, then clean up the grain it generates using Artemis High/Medium v9.
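
For anyone who wants to try roughly the same deband pass outside of a GUI, here is a minimal VapourSynth sketch using the neo_f3kdb plugin. The threshold and grain values are my own assumptions meant to approximate a "high"-strength preset, not the exact numbers any GUI uses, so adjust to taste:

```python
# Rough deband pass with neo_f3kdb (VapourSynth).
# The values below are assumptions approximating a "high"-strength preset.
import vapoursynth as vs

core = vs.core

clip = core.ffms2.Source("input.mp4")      # any source filter works here
clip = core.neo_f3kdb.Deband(
    clip,
    range=15,              # pixel radius used when detecting banding
    y=64, cb=64, cr=64,    # deband strength per plane
    grainy=32, grainc=32,  # dither grain added to mask the smoothed gradients
)
clip.set_output()
```

The added grain is what you would then clean up afterwards with a denoising pass (Artemis High/Medium in the workflow above).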

Edit: Here are the Artemis v9 and v10 models that you asked for: Artemis - Forced*
*Forces VEAI to use the (sometimes) higher-quality FP32 models on FP16-capable GPUs. (Barely any speed reduction on Ampere GPUs or similar.)

Edit: Using the FP32 Gaia models halves the speed with barely any improvement at all, so the link for those was removed.

Question: I was led to believe there is no visible quality difference between using the FP16 and FP32 models. Is there a difference? Also, FP32 would not be nearly as fast as FP16 on newer Nvidia RTX cards, true?

Nobody knows for sure how Topaz handles things as far as their inner workings go, but the way I understand it is that the neural network needs to be written fully in FP16 in order to stand equal to FP32. FP16 has lower accuracy by design because it has far fewer bits to represent the same number, and will consequently run into a lot of overflow and underflow issues (sometimes even when done properly), but I could be wrong…
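
As a quick illustration of the kind of precision loss being described (this has nothing to do with Topaz's actual implementation, it's just plain numpy):

```python
# Why FP16 can lose accuracy compared with FP32: fewer exponent and mantissa bits.
import numpy as np

# FP16 tops out around 65504, so larger intermediate values overflow to inf.
print(np.float16(70000.0), np.float32(70000.0))   # inf 70000.0

# FP16 also has far fewer mantissa bits, so small contributions near 1.0 round away.
print(np.float16(1.0) + np.float16(1e-4))          # 1.0    (the 1e-4 is lost)
print(np.float32(1.0) + np.float32(1e-4))          # 1.0001
```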

So, did they write their neural network to use FP16 fully or only partially (mixed precision)? Or are they just using FP32 weights and converting some of them to FP16, sacrificing accuracy for the sake of processing speed? I honestly haven't got a clue, but one thing's for sure: when I ABX'd the results of both the FP16 and FP32 versions of the same AI model(s), I noticed some improvements in favor of the latter, and that was honestly the only deciding factor that prompted me to make the switch. I strongly suggest you do a couple of tests yourself to see how things fare on your end.
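
If you want to put a number on such a comparison rather than relying on eyeballing alone, one simple (assumed) approach is to export the same frame from an FP16 run and an FP32 run and measure how far apart they are, e.g. with PSNR. The file names here are hypothetical:

```python
# Rough PSNR check between two exported frames (hypothetical file names).
import numpy as np
from PIL import Image

def psnr(path_a: str, path_b: str) -> float:
    a = np.asarray(Image.open(path_a).convert("RGB"), dtype=np.float64)
    b = np.asarray(Image.open(path_b).convert("RGB"), dtype=np.float64)
    mse = np.mean((a - b) ** 2)
    return float("inf") if mse == 0 else float(10 * np.log10(255.0 ** 2 / mse))

print(psnr("frame_fp16.png", "frame_fp32.png"))  # higher = closer to identical
```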

AFAIK Ampere GPUs should have the same speed in FP32 as in FP16, as opposed to the previous generation, where you'll get roughly twice the speed with FP16 as with FP32.

Thanks for answering. I sort of understand the technical differences between FP16 and FP32, but my question is: does anyone see a difference when comparing VEAI results side by side? Do you see a difference? I have heard those who are 'in the know' say that FP16 is technically equivalent to 16-bit video, which is a superior bit depth to most video sources we see today.

If you can notice it - Yes!

Absolutely no correlation whatsoever.

Hmm, I'm surprised. The person who said it is quite knowledgeable about such things and highly regarded.

In your samples, are you sure everything else was equal? The one on the right looks crisper, but I'm not sure if that's because it is oversharpened.

Of course, but mind you, that was one of the few occasions where the difference was that drastic. For the most part there was virtually no difference between the two, except that maybe FP32 was better at handling rounding errors, but in the end there's nothing to boast about really.

I just thought to myself that even if FP32 usually produces identical results to FP16, it can sometimes just "pop off" out of the blue, and with no speed penalty (on an Ampere GPU) to justify staying with FP16, forcing VEAI to use the FP32 models by default seems like a no-brainer to me.

What about those pesky compression artifacts, with large patches of boxes that ruin the image? What filter will you use in that case? I will upload an example soon.

I've run a test with Gaia-HQ forcing FP32 on an RTX 3090: not only is it two times slower, but there is also absolutely no difference visible to the eye versus FP16, even at 400% zoom. Just a very, very tiny difference only visible on a vectorscope.
Maybe it was just the test video I picked, but why would it be two times slower when for you there is no speed difference? Or was that for Artemis?

Hi guys, check this method out. I just discovered it. No artifacts or detail smoothing with any of the models.

Can you post screenshots? I had trouble replicating your claims. The only way I could get "ERR" to show up on the lower left was if I set the default to SD, HD, or 4K. After importing a video, go to "sizing" and choose "Custom setting". Now "ERR" will appear behind whichever model you choose… but it doesn't look any better. In fact it looks exactly the same. If you try changing the Scale % to 200 or whatever, the "ERR" disappears and the program behaves normally.

It makes a big difference. If you upscale low-resolution footage, you will see what I'm talking about. I had to upscale a 25-minute video over 10 days, and I know where the artifacts appear. But after doing this, no artifacts appeared. I saw 4K UHD + 200%/450% and ERR, and indeed ERR has fewer artifacts and most details are preserved. Only the 4K/8K UHD, Full HD, and SD presets give artifacts.

It would be really useful if you could post a screenshot of the interface when 'ERR' shows up, and also a comparison of the same frame with and without artifacts, as you claim. Personally, I use custom sizing a lot of the time and have never noticed this. It also occurred to me that if 'ERR' (indicating an error) does show up for whatever reason, it could mean that it is only a normal upscale without AI enhancement.

Why? The clip has the typical problems from my old Canon G9: flickering, artifacts, dancing pixels even in daylight. And the clip shows scenery with potentially plenty of detail and some architecture. I have analysed it, and it seems the file you uploaded does not contain those details; plenty of them have been removed. Of course, thank you for the help, it is an important contribution to my workflow.
Below is my attempt as described earlier:
https://we.tl/t-QnJCgHeyGq
So far this method gives me the best results. I have not sharpened the footage, so some sharpening could probably also be beneficial.

But I have used a similar approach; I just chose 225% to upscale my 640x480 clips to 1080p. What is the difference?

The Full HD option and the Custom setting option make a major difference regarding artifacts. Almost no artifacts or detail smoothing like I used to get with Artemis.

It's really hard to compare because the pictures will be compressed when I upload them here.

I tried everything, and I can't get the 'ERR' to show up. I tried on both versions, 2.2.0 and 2.1.1. When I enter any percentage in custom size, it shows up beside the model, e.g. Artemis HQ v12 150%. Also, there is no ERR where the size presets are listed. That's why I wanted to see what your interface looks like.