No. They are using two different model repository control mechanisms. You will probably be able to do this in the future, but not now.
I was thinking of downloading the models not one by one, but as a single zip package from your servers.
This version's results are not as clear as v5.7.3's when using the CPU. I reverted back to 5.7.3 and all is good!
The problem is the inconsistency in GUI between the various Topaz products. After processing an image in Sharpen AI, the screen changes as below. I guess something like this should be introduced for Gigapixel.
I found a bug: open a JPEG image with drag & drop and the image size is reported as 0x0.
When I switch to any model, nothing happens. I restarted the program; same thing in both CPU and GPU mode.
Here are my logs:
Logs.zip (7.0 kB)
Many thanks for your reply. Indeed, going to ‘auto’ seems to solve a lot of issues, especially the one with ‘faceless’ outputs. However, face refinement still seems to stop working at random moments (again the problematic image is the ‘lady in the middle’ one). Here are some images I tested right now:
‘Single-face’ selfies: everything works like a charm, even when I set noise reduction and deblur to 100%. But for the ‘lady in the middle’ photo I had to restart Gigapixel to make face refinement work; it worked for some time, then stopped working again, and now doesn’t work for this image even after another restart.
Yazi.saradest - Would you be okay if I sent you my interesting test image? (It’s not actually mine, but I find it the most stressful image for GPAI that doesn’t completely fall apart.)
(Actually I sent a zip to the Dropbox since I have to go out; look at it or ignore it as you have time, but I think it’s interesting, especially re 5.8.0 and designer stubble, amongst other things…)
BTW it won’t let me embed images in this post. Am I doing something wrong, or have I just not hit some magic post count?
Now that 5.8 is out, can we hope that GPAI will soonish learn to do upscaling higher than x6 again (like it did up to v4.2.2)?
I just noticed this:
Fixed using 1x model when scale factor over 6x
So I will give this a test-run.
Seems like my license expired, so others will have to test whether upscaling over x6 works better now; it did not in the last beta version. I have been waiting for an improvement in this area since version 4.2.2, so it would be nice if it were finally looked into.
I purchased Gigapixel AI last year and really enjoyed using the product. However, since that time I have switched to an M1-based Mac. I will hold off on re-downloading or any upgrade purchases until you release a native version.
Can you send us the log file for one of the sessions in which you reproduce this face-not-updating issue? Go to Help->File Logging->Open log folder. The log has the suffix -Main.tzlog. You can sort the folder by modified time to find the correct file.
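If you're comfortable in a terminal, you can also pick out the newest main log without eyeballing timestamps. A minimal sketch, assuming a Unix-like shell; the folder below is a throwaway demo directory, so substitute whatever path Help->File Logging->Open log folder actually opens on your machine:

```shell
# Demo: find the most recently modified *-Main.tzlog in a folder.
# LOG_DIR here is a temporary demo directory with two fake logs;
# point it at the real log folder in practice.
LOG_DIR=$(mktemp -d)
touch -t 202001010000 "$LOG_DIR/old-Main.tzlog"   # backdated fake log
touch "$LOG_DIR/new-Main.tzlog"                   # freshly created fake log
ls -t "$LOG_DIR"/*-Main.tzlog | head -n 1         # newest log first
```

`ls -t` sorts by modification time, newest first, so the first line printed is the file to attach.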
Further experiments let me draw some conclusions for the specific computer configuration described above.
Other configurations may well show different results.
In my configuration, the program turned out to be inoperable “out of the box”.
That is: with the ability to download the “fast” models disabled, GPU processing mode does not work at all.
Instead of this mode, the program unsuccessfully tries to process images on the CPU.
When trying to enable CPU processing (still with downloading of the “light” models disabled), blue rectangular artifacts appear in the output image, as in the attached sample.
This raises the question: if the program cannot work correctly with the models supplied in the installer, why include such models at all?
Only after enabling model downloads and downloading the “fast” models to the hard disk do both processing modes (CPU and GPU) work without artifacts.
Processing quality, though, is a separate topic.
I should note that I was changing filenames at some point during today’s tests, so I’d suggest having a look at the most recent logs first, especially those that refer to ‘2.jpg’.
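For anyone digging through those logs, a quick way to narrow things down is to search for the filename directly. A minimal sketch, assuming a Unix-like shell; the demo folder and log contents below are made up for illustration, so substitute the real log folder:

```shell
# Demo: list only the log files that mention a particular image filename.
LOG_DIR=$(mktemp -d)
printf 'processing 2.jpg\n'     > "$LOG_DIR/a-Main.tzlog"
printf 'processing other.jpg\n' > "$LOG_DIR/b-Main.tzlog"
grep -lF "2.jpg" "$LOG_DIR"/*.tzlog   # -l: filenames only, -F: literal match
```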
Awesome update. The stability and speed have been phenomenal. I am able to batch process and 4x upscale 4,000 images at a time (so far) without a single crash. The last batch of images took 30 minutes to process. Before the update it took me 2 weeks to process 20,000 images due to crashing and having to do batches of 500 manually.
I love Gigapixel and it’s been an essential part of my data science workflow.
If you did that, you would still have to install them. Just open a picture and click on each scale and each processing model, and they will download and install automatically. A few minutes (if you have fast internet) and you are done.
I found something odd. I had already downloaded the GPU models yesterday and today I loaded a picture and set my preference to CPU and ran the Standard model 2X. This took 19 seconds. I then changed to GPU (Radeon RX5600 XT) and ran the same process. Models A and B downloaded again. Why would this happen when they were downloaded yesterday? I normally only use GPU and after downloading the process took 11 seconds.
OK, I can explain that. The installer (kit) includes a base model file for each model. That base model file supports all the backends (hardware setups). However, it does not give optimized performance for a specific user's hardware configuration.
This base model file has a known issue of producing square-shaped artifacts. It only appears on a small number of machines (we can reproduce it on certain test machines we have). We still decided to ship this file as the bundled base model, because any other model format would always fail on certain types of machines.
We recommend keeping "allow model download" turned on to get the best experience.
Interesting findings. But what are the fast/light models you are referring to? How did you disable loading them in the software?
Here is a sample picture; or download a photo from Facebook or Instagram and try it. You get the 0x0 image bug:
Very pleased with the latest version. It's the first time I have noticed the noise reduction being effective: handling of low-resolution, very compressed images seems to have improved, and color-bleed correction is more effective.