Gigapixel v4.4.0: any insights from users with older Intel CPUs?

Hi all

Firstly, I have not updated from v4.1.2 yet, and I noted this in the changelog for v4.4.0:

" * Intel devices optimization: Now a new option “Intel optimization” is in advanced preferences. If you choose “Yes”, it will optimize 6th-10th generation Intel CPUs and Intel iGPUs for Gigapixel. Typically, it will make Gigapixel run 3-5 times fasterthan the previous “CPU” mode. But we cannot guarantee if it can work for CPUs from other vendors (e.g., AMD) or any Intel CPUs before 5th generation. If you still have very high performance desktop GPUs (e.g., NVIDIA 1080), “Enable dedicated GPU” (Yes) option will be the best choice. However, if you have relatively better Intel CPUs or Intel iGPUs than low performance GPU, you will see faster performance on Intel optimization (Yes) option.

  • Automatic detection for the best devices choice: Now you do not have to choose between GPU/CPU or enable/disable Intel optimization manually. Gigapixel will automatically run a benchmark and choose the best advanced preferences settings for you to achieve the highest speed on your computer. The benchmark will only be triggered once when you process the first image. Thus, the first image will take longer time than usual."
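Reading between the lines, the auto-detection presumably boils down to timing the same job on each available device the first time and remembering the winner. Here is a minimal sketch of that idea in Python, purely illustrative; the backend names, cache file, and `process_image` stub are my assumptions, not Topaz's actual code:

```python
import json
import time
from pathlib import Path

# Illustrative names only; not part of Gigapixel.
CACHE = Path("device_benchmark.json")
BACKENDS = ["cpu", "intel_optimized", "dedicated_gpu"]

def process_image(image, backend):
    """Stand-in for the real upscaling call on the chosen backend."""
    pass

def pick_backend(sample_image):
    # Reuse an earlier benchmark result if one was saved.
    if CACHE.exists():
        return json.loads(CACHE.read_text())["backend"]

    # First image only: time each backend and keep the fastest.
    timings = {}
    for backend in BACKENDS:
        start = time.perf_counter()
        process_image(sample_image, backend)
        timings[backend] = time.perf_counter() - start

    best = min(timings, key=timings.get)
    CACHE.write_text(json.dumps({"backend": best}))
    return best
```

If it works roughly like that, it would also explain why the first image after an update (or after a benchmark reset) takes longer than usual.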

Now, my CPU is an Intel i5-760 2.8GHz, and having looked it up I have found that it is 1st generation (please don’t laugh :wink:).
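For anyone else unsure where their chip falls, the generation can usually be read straight off the model number: the original 3-digit Core i models (like the i5-760) are 1st gen, while for later 4- and 5-digit models the digits before the last three give the generation (i7-7700K is 7th gen, i5-10400 is 10th). A rough sketch of that rule of thumb, not an official Intel tool:

```python
import re

def intel_core_generation(cpu_name: str):
    """Rough guess at the Core i generation from the marketing name."""
    match = re.search(r"i[3579]-(\d{3,5})", cpu_name)
    if not match:
        return None              # not a recognisable Core i3/i5/i7/i9 model number
    digits = match.group(1)
    if len(digits) == 3:         # 3-digit models such as the i5-760 are 1st gen
        return 1
    return int(digits[:-3])      # i7-7700 -> 7, i5-10400 -> 10

print(intel_core_generation("Intel Core i5-760"))    # 1
print(intel_core_generation("Intel Core i7-7700K"))  # 7
```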

So, has anyone running such an early Intel CPU updated to v4.4.0, and is all well? If not, was it ‘made to run’ OK by manually setting the preferences as appropriate?

TIA for any replies :slight_smile:

I have a 7th-generation Intel i7-7700K CPU and an older GTX 980 dedicated graphics card.

I ran a few time trials on a TIF file to see how effective the new internal benchmark function was at finding the best settings.

The initial Auto detect process after install set the CPU optimization on and turned the GPU off. The file took 28 seconds to process.

I then manually set it back to GPU on, using High Memory, and Intel Optimization off. The file took 18 seconds to process.

I used the reset button they provided to have it run the benchmark test again and it set the CPU optimization back on, but this time it also left the GPU on with High Memory. The file took 28 seconds to process.

I repeated the test manually several times, setting either the GPU on or the CPU on (not both), and consistently got 18 seconds with the GPU versus 28 seconds with the CPU.

Manually setting both on (as the reset button had done) turned in a 28-second result, indicating that the CPU, not the GPU, was the one being used.

P.S.
I probably should have run this test before updating to the new version, so I would have had a baseline to compare these timings against what we had before the CPU optimizations were added. Perhaps someone who hasn’t updated yet can do that and report the before-and-after results for the CPU times.

I’ve updated to v4.4.0.
My Intel i7 CPU is a Sandy Bridge, so I wouldn’t expect the Intel devices optimization to be a benefit. As my graphics card is a GTX 1050 Ti 4GB, I would expect setting Enable dedicated GPU to Yes to be the best option. System RAM, incidentally, is 16GB.

So, some comparison tests.
Using the same 1074 x 2272 .jpg source photo, the same 2x resize, manual Suppress Noise = 0.50, manual Remove Blur = 0.50, Process Images as Background Task = No, Use maximum quality AI models = Yes:

(a) Enable dedicated GPU = No, Intel optimization = No.
Time: 7 minutes 3 seconds

(b) Enable dedicated GPU = No, Intel optimization = Yes.
Crash back to desktop about 3 seconds into processing.

(c) Enable dedicated GPU = Yes, Intel optimization = No. Allow Graphics memory consumption = High
Time: 48 seconds

(d) Enable dedicated GPU = Yes, Intel optimization = Yes. Allow Graphics memory consumption = High
Time: 48 seconds

(e) Automatic detection. Clicked the reset button to activate it. On the next process, a crash back to desktop with only 1% showing.

So as expected, the Intel devices optimization does nothing positive on my system, and I will return to using the GPU for processing as I have been.

I do think the software should be smart enough to detect CPU generations that will not benefit from the optimization and not offer it in the first place, particularly when enabling it can cause a crash back to desktop.

Hi @Greyfox

Ah! My old i5 was Q3/09, compared to your younger i7’s Q1/11 :wink:

Many thanks for the insightful tests, especially as the rest of my PC specs are identical: GTX 1050 Ti 4GB plus 16GB system RAM.

:slight_smile:

I have an AMD Ryzen CPU, a Windows 10 PC, and an AMD Radeon RX 580 GPU, so I didn’t try the Intel optimization. I did run a test of version 4.3.1 vs 4.4.0 using a 2272 x 1074 pixel picture so it could be compared to those above.
I found that having my browser open (25 tabs) greatly affected the results. With the browser open, the 2x scaling took 64 seconds; with it closed, it dropped to 17 seconds. I have 32 GB of fast RAM, so I don’t think memory was the problem. I didn’t test it again, so I don’t know if this was a fluke.

Settings were 0.50 for blur and noise, GPU memory (4GB) = high, 2x scaling, Face refine = off.
v4.3.1: 17 seconds with Max quality = on, 9 seconds with Max quality = off.
v4.4.0: 16 seconds with Max quality = on, 8 seconds with Max quality = off.

I opened each result picture (Max quality vs normal) from 4.4.0 in Affinity Photo and compared them (see below) at 100%. They looked identical.
Max quality = no: [image]

Max quality = yes: [image]
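If anyone wants to go beyond eyeballing the two results at 100%, a pixel difference will confirm whether they are literally identical. A minimal sketch using Pillow, with placeholder filenames for the two exported images:

```python
from PIL import Image, ImageChops  # Pillow

# Placeholder filenames for the two exported results.
a = Image.open("upscaled_max_quality.png").convert("RGB")
b = Image.open("upscaled_normal_quality.png").convert("RGB")

diff = ImageChops.difference(a, b)
bbox = diff.getbbox()  # None if the images are pixel-identical

if bbox is None:
    print("Pixel-identical")
else:
    # Per-channel (min, max) differences show how big the deviation is.
    print("Differences in region", bbox, "extrema:", diff.getextrema())
```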
