Can you please summarize which features work or don’t work with Intel Mac, in my case with AMD RX6800XT e-GPU? Also, which of these Gigapixel versions work best with it? I am considering upgrading my license if it’s truly compatible with the latest features.
Hi John.
The current version of Topaz Gigapixel is compatible with Intel Macs, and Intel users now have access to all of Topaz Gigapixel's models.
However, the new generative models can only be run with cloud rendering; local rendering is not available for them on Intel Macs.
Don't forget, all image cloud rendering is now free, so no credits are required.
Hope this helps
Can you specify which generative models are not available locally?
Wonder is unavailable for local render on Personal tier subscriptions.
On Intel-based Macs, generative models must use cloud rendering; they are not compatible with local rendering on these systems.
The generative models are the following:
I also tested it with AI-generated images. And it’s the same problem. If I only scale them up by 2x, the quality is degraded. I have to scale them up by 4x to get a good result. Regardless of the original resolution.
That may not be directly related to your issue but I have also experienced that the quality of the rendering varies significantly with the scale factor (model). For example, when it comes to animal fur, Redefine realistic does the best job at 2x while 1x and 4x are inferior.
It depends. When I use it, I usually do it at 1x with 50-megapixel images because it renders faster. If I have a 12.5-megapixel photo (4080x3060) and I upscale it by 2x with Redefine, it takes longer and the result isn’t much better. But it does improve slightly when you start zooming in to see the details. At least, that’s what I’ve noticed with my photos.
Can you reply to the email thread you had with examples so we can test that? I don’t see any issues on my end for 2x versus 4x. Obviously, 4x is adding a lot more pixels, so the results are different.
I just read something about context size and offloading to SSDs. No wonder memory prices are rising when the context (KV cache) that an AI model can process is offloaded to SSDs (via chips like NVIDIA BlueField-4). I would be delighted if TL would also use this technology to build better models. For my work, models such as Standard Max and Wonder are state of the art. However, I haven't quite reached my goal of creating really large images with lots of detail down to the last pixel.
Gemini said that to render a 24 MP image to 100 MP with fine detail in fp32 precision, a VRAM size of about 500 GB would be needed.
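For what it's worth, that figure is at least plausible as a back-of-envelope estimate. A minimal sketch, assuming (hypothetically, since the actual model architecture isn't public) that an upscaler keeps fp32 feature maps with a few hundred channels at full output resolution, and that several such maps stay resident at once without tiling:

```python
# Back-of-envelope VRAM estimate for upscaling without tiling.
# All parameters here (channel count, number of resident maps) are
# illustrative assumptions, not details of any actual Topaz model.

def feature_map_gib(megapixels: float, channels: int, bytes_per_value: int = 4) -> float:
    """Size of one full-resolution feature map in GiB (fp32 = 4 bytes/value)."""
    return megapixels * 1e6 * channels * bytes_per_value / 2**30

# A 100 MP output with 256-channel fp32 feature maps:
one_map = feature_map_gib(100, 256)  # roughly 95 GiB per map
total = one_map * 5                  # ~5 resident maps lands near 500 GB
print(f"{one_map:.0f} GiB per map, {total:.0f} GiB total")
```

Under those assumptions, five resident 256-channel maps already approach the 500 GB figure, which is why tiling (processing the image in overlapping patches) is the usual workaround on consumer hardware.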
I hope you’re doing well. I recall you mentioning in the Gigapixel beta thread that a new beta would hopefully be in the works after Black Friday/Cyber Monday.
Since it has been about 50 days and we have seen some minor patches released in the meantime (but no betas), I wanted to kindly check in. Is there a new beta version on the horizon soon? We are eager to test out what's coming next!
Thanks!
500 GB? That's huge. Apart from a cloud server (and even then, I don't know if any exist with that much VRAM), no one in the general public could have that.
Here is an example with the following 3 images. The first is the source. The second is a 2x scale and the third a 4x scale. I’m using a zip link to keep the best quality because if I post the images directly here, they will be compressed and the difference won’t necessarily be noticeable.
The link expires in 7 days.
Hi!
Thanks for patiently waiting! We are definitely working on a few things, so hold on a little bit longer. Glad to hear that you're excited to try out some new things!
Always.
There is a local saying here that ‘patience is a virtue,’ so I am happy to wait.
That said, my ultimate wishlist would feature a new HQ Max model alongside a refined iteration of the Wonder model that eliminates those tiny artifacts.
Additionally, an eventual 2nd Pass upscale option would be a fantastic addition. :)) Best of luck with the development!
Hi.
Doesn't look like you'll have to wait long for the new BETA version; this topic was set up by Lingyu yesterday.
Hope this helps
Cloud servers use huge amounts of VRAM. Multiple GPUs are combined, giving up to 1,000 GB of VRAM. This is what we use for our cloud rendering service.
I can’t open the zipped file. Not sure if it’s just me.
It appears that thread has been closed.
It’s coming soon, but not available today.