Sharpen AI v4.0

I noticed something a few months ago concerning Topaz Sharpen AI, Denoise AI, and Gigapixel AI. In Preferences, Topaz added an Auto option to the AI Processor drop-down menu, in addition to the choices that were there before, and on my laptop that setting is the fastest. My current laptop, which I bought in 2019, has these specs:

i7-8565U, 4 cores (1.8 GHz, Turbo Boost 4.6 GHz)
Nvidia GeForce MX250, 2 GB
Intel UHD 620
32 GB DDR4-2666 RAM

Sharpen AI 4.0.2 has these choices on my laptop: Auto, CPU, GeForce MX250, Intel UHD 620, and All GPUs (Experimental).

I set Auto Update Preview to off to make it easier to time, and I used my 1920x1200 monitor. This is what I did:

I rebooted to clear everything out on the computer and then waited about 3 minutes after booting to give all the initialization time to finish.

  1. Then I started the standalone Sharpen AI with the processor set to Auto.
  2. I opened a 16mp photo using Auto for everything (model, settings).
  3. I set the size to Zoom to Fit because that would display the full image, would take the longest to update the screen, and would give me more granularity for timing. Naturally, it is much slower than using 100%, which only displays a portion of the photo.

I didn't time saving it to file because then the time to do that is included, and that can vary a lot depending on what kind of drive you are saving to, the speed of the drive, etc. I was just interested in the time to update the display while showing the whole photo.

I ran it 3 times selecting Auto and 3 times selecting the MX250. I didn't bother with the other choices because I have tried them in the past and they are much slower. On each run I started Sharpen AI, opened the photo, set it to Zoom to Fit, and then hit the Update Preview button while timing with my phone. After each run I closed Sharpen AI, waited for about a minute, and then restarted it. This was to make sure nothing was cached in the program and also to allow the fan to slow down again. Here are the results:

Auto: 1:10, 1:10, 1:08
MX250: 1:40, 1:37, 1:36
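
If anyone wants to crunch their own stopwatch readings the same way, here is a minimal sketch in plain Python (the times are typed in by hand from my phone; nothing is read out of Sharpen AI itself):

```python
# Minimal sketch: average hand-timed m:ss stopwatch readings and compare
# two AI Processor settings. The times below are the ones from this post;
# swap in your own.

def to_seconds(mmss: str) -> float:
    """Convert an 'm:ss' stopwatch reading to seconds."""
    minutes, seconds = mmss.split(":")
    return int(minutes) * 60 + int(seconds)

runs = {
    "Auto":  ["1:10", "1:10", "1:08"],
    "MX250": ["1:40", "1:37", "1:36"],
}

averages = {name: sum(map(to_seconds, times)) / len(times)
            for name, times in runs.items()}

for name, avg in averages.items():
    print(f"{name}: {avg:.1f} s average")

print(f"MX250 takes about {averages['MX250'] / averages['Auto']:.2f}x "
      f"as long as Auto")
```

On these numbers that works out to roughly 69 s for Auto versus roughly 98 s for the MX250, so the MX250 runs take about 40% longer.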

Apparently Topaz is not just choosing the MX250 when I select Auto. Maybe it is trying to make more optimal use of the MX250 and/or the 620 and/or the CPU. Topaz doesn't seem to tell us anything about what they are doing, though.

I will note that Neat Image 9 has an Optimize Settings button that runs through all the combinations including how many CPU threads to use. It takes a couple of minutes or so. This is the result it chose for my laptop:

[screenshot: Neat Image 9 Optimize Settings result for my laptop]

That is pretty cool, but I donā€™t think Topaz is doing anything like that.
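
For what it's worth, the idea behind that Optimize Settings button can be sketched in a few lines. This is only an illustration of the brute-force approach, not Neat Image's or Topaz's actual code, and benchmark_once() is a hypothetical stand-in for running the real filter on a test image:

```python
import itertools
import time

# Illustration of an "Optimize Settings" style auto-tune: time every
# device/thread combination on a real workload and keep the fastest.
# benchmark_once() is a placeholder; a real tool would run its actual
# sharpening/denoising kernel on a test tile here.

DEVICES = ["CPU only", "GeForce MX250", "UHD 620", "CPU + MX250"]
CPU_THREADS = [1, 2, 4, 8]

def benchmark_once(device: str, threads: int) -> float:
    """Run the filter once with the given settings and return seconds taken."""
    start = time.perf_counter()
    # ... run the real filter on a test image with these settings ...
    return time.perf_counter() - start

best = None
for device, threads in itertools.product(DEVICES, CPU_THREADS):
    elapsed = benchmark_once(device, threads)
    if best is None or elapsed < best[2]:
        best = (device, threads, elapsed)

print(f"Fastest combination: {best[0]} with {best[1]} CPU threads "
      f"({best[2]:.3f} s)")
```

The nice thing about doing it this way is that it measures your machine as it actually is, drivers and all, which is also why rerunning it after updates can give a different answer.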

Anyone else want to find out what kind of results you get? Of course, we are not comparing to the timings on my PC since we are using different photos, probably different megapixels, and different screen resolutions, but you can see if Auto is faster than explicitly selecting the GPU on your computer.


Remember that the MX250 is a low-powered, entry-level GPU and, without looking at your logs, it may well be that Auto is choosing the Intel UHD 620, as the performance is about the same.

Are you saying that on your computer choosing Auto or the GPU yields the same time?

Woah. I didn't even know about this update until today; I was still using 3.3.3, with no in-app or email notification of the new 4.0 version.

ETA: The way I learned of it is that the version number was mentioned in one of those ubiquitous $20-off ads.


Larry, thanks for bringing this to our attention! I hadn't noticed, as I use TIF files, but this JPG quality reduction is not acceptable unless it is user-selected.

Do you know if the 3.x versions do the same?

Yes it does.

I went back to some notes I made 11 months ago when I bought the 3-program package. I see that at that time, using Sharpen AI 3.0.3, the MX250 was a bit faster than the 620. Today I ran things again using the same photo I used yesterday and got these timings:

Auto: 1:10
CPU: 3:36
MX250: 1:29
620: 1:08, 1:10
All GPUs (Experimental): 1:28, 1:30

So, using Auto seems to mean that it uses the 620. It is strange that with 4.0.2 the 620 is quite a bit faster than the MX250, but with 3.0.3 last year the MX250 was a little faster than the 620. In any benchmark you look at, the MX250 is quite a bit faster than the 620. This is just one example:

UHD Graphics 620 vs GeForce MX250 [in 8 benchmarks] (technical.city)

This is all very perplexing. While the MX250 is no speed demon, it is much faster than the 620. Is Sharpen AI doing something wrong with the MX250 that makes it significantly slower than the 620?

Last year when I was checking Denoise AI 3.0.3 the MX250 was 3 times faster than the 620. I will have to check again now with 3.5 to see if the situation is also reversed. Last year Gigapixel AI 5.5.1 was definitely faster using the 620 than the MX250 (which was the same as the CPU).

What is not acceptable? JPEG is always lossy. 100% quality is a fiction; even after one save you can often see artifacts in smooth gradient areas, and after several saves you can see artifacts everywhere. The point Topaz made is that when one saves a 100%-quality JPEG after applying SAI or DAI, the file size often increases, in my experience by up to ~50%. Hence they made a reasonable decision to reduce the size. Continue to work with TIFF, as I and many others do; what's the problem?

The MX250 is only marginally faster than the UHD 620. And at a 16:9 aspect ratio, the UHD 620 is much faster.

Thanks for your reply. Each of us is entitled to our own standards. Yours may be lower than mine.

I am well aware of how JPEG works and what it does. I worked for a while with Dr. Joan Mitchell, co-inventor of the JPEG format, and read the bible of JPEG (may she R.I.P.):

Dr. Joan Mitchell

Still Image Data Compression Standard

My problem with this arbitrary setting in Topaz is that I want to determine the acceptable quality level for my needs. If I have to save a JPG out of Sharpen AI (for whatever reason) and then need to process that output again in another application (it can happen), then I'm already at a greater disadvantage.

The second point is that when I need to reduce the quality level to shrink the file size, I want to select the application that does it. I have tested different applications and their compression output, and I can assure you that no two applications use exactly the same algorithms, so some are more to my liking than others.
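
As a concrete illustration of keeping that decision in your own hands, here is a minimal sketch using Pillow as just one example encoder (it is not what Topaz uses, and the file names are hypothetical). It re-saves a lossless master at quality levels you pick and reports what each setting costs in file size:

```python
from pathlib import Path
from PIL import Image  # Pillow: just one example JPEG encoder, not Topaz's

# Minimal sketch: keep the lossless file as the master and make the JPEG
# quality decision yourself. File names here are hypothetical.
src = Path("sharpened_output.tif")
img = Image.open(src).convert("RGB")  # JPEG needs 8-bit RGB

for quality in (100, 92, 85):
    dst = src.with_name(f"{src.stem}_q{quality}.jpg")
    img.save(dst, "JPEG", quality=quality, subsampling=0)
    print(f"quality={quality}: {dst.stat().st_size / 1_000_000:.2f} MB")
```

Since every encoder tunes its quantization and subsampling differently, the same nominal quality number looks and weighs differently from one application to the next, which is exactly why I want to choose the tool.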


I'm not sure what release of DeNoise it started with. I know with Sharpen it started with release 4.0.2.

I'm not sure I understand why there is such a drastic reduction in file size. I can see some reduction, but a two-thirds reduction in size is huge. Before, the file size would be reduced by about 1 MB, maybe 2 MB.

I tried a 16 MP JPG photo. My GPU is fairly fast, and the results for updating the full picture (using Zoom to Fit) were 6.04 s for the GPU and 13.97 s for Auto. I closed Sharpen and restarted it between tests.


DxO-DeepPrime + Capture One + midtones strongly brightened: Interesting, I didn't know that the two together would produce patterns in dark areas.



First Stage Sharpen - Midtones strongly brightened.



Second Stage Sharpen - the image was sharpened, then downsized - midtones strongly brightened.
I wanted to make the lunar surface more visible.



Image I wanted to achieve.

Nothing special, but I noticed the bar that Sharpen produces on the right side of the image.

To the right of the moon, the area that can be seen as an artifact (in the second-stage image) has some separation from the background; it's not completely black.

It's hard to see, but I noticed it in Photoshop even though I had lights on.

My laptop is connected to an external monitor, which is 16:10. The benchmarks show that the MX250 is generally twice as powerful as the 620; some benchmarks show an even bigger gap.

However, you did not even notice it until you were told, with all your standards :slight_smile:
Not to mention that your workflow is lossy anyway and mine is not. Good luck with your problems!

Thank you for checking. It is interesting that on your computer selecting Auto was MUCH slower than manually selecting the GPU. I wonder why Sharpen AI didn't use the GPU when set to Auto? Is the 13.97 s time the same as selecting CPU? As I said, in my case selecting Auto resulted in Sharpen AI choosing the fastest way; I tried them all manually to find out.

Oddly, the time using the CPU is 1 min 16 sec. I re-ran the tests and found that using my GPU actually takes 16.72 seconds while Auto takes 13.97 s. I think I will set my preference to Auto, as it seems to be faster, like you found.
I think on the first test, the one at 9 sec, the process may have already been started when I stopped it and restarted (I don't really remember).

On a Windows machine it is easy to check with Task Manager what is actually doing all the work. You might be surprised :wink:
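
If you want numbers rather than the Task Manager graphs, here is a minimal sketch that polls the Nvidia GPU while Sharpen AI works. It assumes nvidia-smi (installed with the Nvidia driver) is on your PATH; the Intel iGPU will not show up here, so if this stays near 0% while the preview updates, the work is going somewhere else:

```python
import subprocess
import time

# Minimal sketch: sample the Nvidia GPU's load once a second while the
# preview is updating. Assumes nvidia-smi is on the PATH. power.draw may
# report "[N/A]" on some laptop GPUs; the utilization column is the one
# that matters here.
CMD = ["nvidia-smi",
       "--query-gpu=utilization.gpu,power.draw",
       "--format=csv,noheader,nounits"]

for _ in range(60):  # roughly one minute of samples
    out = subprocess.run(CMD, capture_output=True, text=True).stdout.strip()
    util, power = (field.strip() for field in out.split(","))
    print(f"GPU {util:>3}%   {power} W")
    time.sleep(1)
```

Start it, hit Update Preview, and watch whether the dedicated GPU ever actually gets busy.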

What are the choices in your menu? When set to Auto you get 13.97 s, which is faster than selecting the GPU and much, much faster than selecting the CPU. What are the other choices, and which one matches the time for Auto?

This whole thing is rather mysterious, and I am wondering how the software decides what to use with Auto. Does the software have a table of every CPU and GPU and use that to decide, based on Topaz testing? Maybe, but that seems rather unlikely. The Neat Image 9 example I gave above is ideal: it actually checks the performance of all the combinations on your computer and decides which gives the best real-world performance. And you can rerun it any time a driver changes, etc., to see if the best choice changes. Above I showed what mine got; that was the result of running it last year. Since then I have had Win10 updates and Nvidia driver updates, so I ran it again, and this time the best result was a slightly different combination than last time.

[screenshot: Neat Image Optimize Settings result after rerunning it today]

I clicked on Optimize Settings today (note it ignores the 620 for some reason):

[screenshot: Neat Image Optimize Settings result]

It may be that it selects the best combination.

But from my Neat Image days, before Denoise AI, I know that the difference between optimum speed and "what makes sense" can be 200 watts of extra power consumption, because 5 CPU cores plus the GPU are maybe 5% faster than the GPU alone, for example.

As you can see, the CPU and GPU are equally fast when the CPU only uses two cores, because the GPU is so slow.

The sweet spot is 5 CPU cores plus the GPU; that's 0.02 sec slower than 8 cores + GPU.
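
To put rough numbers on that trade-off, here is a tiny sketch. The wattages and the GPU-only time are illustrative assumptions, not measurements (only the "200 watts more" and the relative speed differences come from the post above); the point is the energy-per-image arithmetic:

```python
# Illustrative only: seconds/image for GPU-only and all wattages are
# assumed round numbers, not measurements. The point is the math:
# a few percent of speed can cost a lot of energy per image.
configs = {
    # name              (seconds per image, watts while running)
    "GPU only":         (1.00,  75),
    "5 cores + GPU":    (0.95, 275),  # "200 watts more" than GPU alone
    "8 cores + GPU":    (0.93, 320),
}

for name, (seconds, watts) in configs.items():
    joules = seconds * watts  # energy spent per image
    print(f"{name:15s} {seconds:.2f} s/image  ~{joules:5.1f} J/image")
```

On those assumed numbers, the 5% speed gain more than triples the energy per image, which is why GPU-only can still "make sense" even when it is not the fastest row in the table.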