Gigapixel v8.4.1

Personally, I preferred the postimage placeholder picture. :joy:

2 Likes

I don’t find it funny

It is quite interesting to experience how generative AI can beef up input images that lack significant information, such as video stills. I hope that one day Redefine, which is still in beta, will come up with a creative mode that does not make fur explode even at the lowest creativity settings. Maybe this could be achieved with better prompt adherence. Pure genAI models generate lions with lion fur and mammoths with mammoth fur; it is just GPAI that stays on the track of (too) long fur for everything. I don’t know to what degree Topaz models can be trained, but it should be feasible to come up with a Redefine model that generates correct fur (hair length) for at least the most common animal species.

2 Likes

When you posted the images side by side, one had richer, darker, more saturated colors than the other. It seemed the one that disappeared was lighter.

So I was curious whether the GAI processing stripped out some of that color richness from the original, or whether you did a second step to lighten your image.

Just a bit of fun Harald. No offence intended. For what it’s worth I like your picture.

Regards. :smiley:

Again the GUI has changed a lot, for no reason and with no improvement. Why?

Splitting Redefine into Creative and Realistic makes everything more complicated. When testing which one works better, I lose the text prompt and settings when switching between them.

Apropos text prompt and settings: why can’t you always keep the last settings and text prompt, even after a restart? Or better, keep the last 10 text prompts used in a list, or give us the possibility to save presets. That would make much more sense than constantly changing the GUI without any usability improvement for users.
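For what it’s worth, a rolling prompt history would be cheap to implement. Here is a minimal sketch in Python, purely hypothetical and not Topaz code (the file name, the 10-entry cap and all function names are my own assumptions):

```python
import json
from pathlib import Path

HISTORY_FILE = Path("prompt_history.json")  # hypothetical location
MAX_PROMPTS = 10                            # "last 10 prompts" as suggested above


def load_history():
    """Return the saved prompt list, newest first (empty if none saved yet)."""
    if HISTORY_FILE.exists():
        return json.loads(HISTORY_FILE.read_text(encoding="utf-8"))
    return []


def save_prompt(prompt):
    """Push a prompt onto the history, de-duplicate, keep only the newest 10."""
    history = load_history()
    if prompt in history:
        history.remove(prompt)  # a re-used prompt moves back to the front
    history.insert(0, prompt)
    del history[MAX_PROMPTS:]
    HISTORY_FILE.write_text(json.dumps(history, indent=2), encoding="utf-8")
    return history
```

Because the list survives on disk, the last prompts would still be there after a restart — which is the whole point of the request.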

9 Likes

Yeah, Redefine still needs to be improved. In fact, I have to make at least 3-4 variants of the same image and then recompose it from the best parts of each one, because Redefine adds a lot of artefacts despite a very detailed prompt. As for the fur, it tends to add it all over the image, so I need to create a variant without the animal. Then a variant with Recover v2 for certain details, and sometimes a fourth with High Fidelity or Low Res v2 to soften certain details, particularly in blurred areas that become super sharp when they should remain blurred (depth-of-field blur).

2 Likes

No, I didn’t do a colour pass again after the GPAI run; I did it before GPAI :wink:. But if it looks less saturated, that’s not a problem in my case, because I think I had over-saturated the image a little.

1 Like

OK, and thank you :slightly_smiling_face:

1 Like

Personally, I think it’s better to have split Redefine into 2 distinct groups. It means you can’t go wrong when you want either realism or creativity.

1 Like

Hello Rene,
Good comments - I think it’s called “Fiddling while Rome burns to the ground”.
I especially like your ideas for the text prompts. It drives me crazy that if you select or deselect AutoPilot (thunderbolt icon), change the Texture level, or flip from Realistic to Artistic, the darn text prompt vanishes, and you can’t pop it back by right-clicking the mouse and clicking paste.
In the worst cases you have to jump out of Topaz into Notepad (or whatever) to re-copy the text prompt, then step back into Topaz, click in the text prompt box and press Ctrl+V to paste - uuuggghhh!!! In every other piece of software I use, my right hand never has to leave the mouse, and I can hit single or double shortcut keys with my left hand, making editing so much easier and faster.
I fully agree with you regarding unnecessary superficial changes to the user interface, and I detect an almost stubborn refusal by Topaz to acknowledge and resolve these and other long-standing user suggestions for improving usability.
One more grump - I can’t afford a 5000-series GPU, but it seems unacceptable that four months after their general market release the Topaz coders have not caught up. I’ve always understood that bigger software companies can get pre-production hardware and collaborate with manufacturers so that working software is ready at launch.
If you share any of my frustrations, please look at my post below. As users, maybe if we squeak enough, Topaz will have to grease the wheels and address some of these usability criticisms.

FOCUS ON CORE FUNCTIONS

2 Likes

Just ran Recover v2 with pre-downscaling on a 32Mp photo at 1x in Gigapixel AI and the result is running around in circles laughing at what I previously got out of PAI.

On the negative side - rendering time on my RX 6700 was just over 12 minutes…
-Won’t do any larger batches anytime soon - at least not until I upgrade my GPU. :laughing:

Hello Harald,

But it doesn’t add any functionality, and when doing repeated tests with different model options it makes stepping through the range of settings a two-screen process instead of choosing any one of 6 levels from one screen menu.
The more clicks you have to make, the worse the usability and the longer it takes. One render of one image is no problem, but if you are doing, say, 50 or more test variations, those clicks add up - in wasted minutes now and, in years to come, arthritis and/or carpal tunnel syndrome.
I don’t see any advantage in two separate screen menus - do you really get confused with 6 choices on one screen?
Don’t most of us probably easily manage making selections from far more options in say LrC Presets or DxO Film Pack thumbnails ?

1 Like

Yes - your multi-render and compositing solution is a basic workaround for one of the current weaknesses of the Topaz software: its inability to adequately meld the core functions of AI denoising, sharpening and upscaling in one single step.

In the mid 90s I did a lot of work with AutoCAD, Maya, Blender etc. on 2D photorealistic architectural single-view rendering, leading to very decent 3D architectural walk-through and fly-by animations. That was somewhat concurrent with the early Lord of the Rings special effects.
In the 1990s, multi-layer Photoshop compositing was all we had for artistic architectural simulations, but surely now, in a post-Avatar 1 & 2 and Top Gun 2 era, we should not be forced back to Photoshop layers?

Personally, I would like Topaz to concentrate on achieving that one-step solution with excellence, and not on individual, tangential features like colourisation, dust and scratches, HDR, panorama, halftoning and so on. Other software already does those things well enough. My concern is that I see Topaz facing quality, reliability and usability threats from competitors. Candles and more icing on the cake will not cut it if the sponge falls flat.

Downscaling
This afternoon, with my RTX 4070 Super 12GB GPU, Gigapixel AI v8.4.1 took nearly 23 minutes to downscale an 8,000x6,000px RAW file to a Redefine Artistic Low Creativity Texture-1 2160H jpg image. Normally my Redefine renders at around 4000x3000 final image size take between 2-4 minutes, and my largest 6x upscaled Local Redefine image, to 10000x7500px, took only some 12 minutes.

On several occasions previously I have noticed that Gigapixel AI and Photo AI seem to take a very, very long time over a 4x downscale (0.25x) - is that normal?
Photoshop does the same downscale in seconds.
Is Photoshop’s standard bicubic reduction adequate, in terms of quality, as input for Topaz?
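For scale, a plain 0.25x reduction is computationally trivial compared with a generative render. Here is a naive box-average downscale in pure Python, just to illustrate the kind of operation involved - real resamplers such as Photoshop’s bicubic or Lanczos use weighted kernels, but the cost is the same order (one pass over the pixels); the function and its names are my own illustration, not Topaz or Adobe code:

```python
def downscale_box(pixels, width, height, factor=4):
    """Naive box-filter downscale: average every factor x factor block.

    `pixels` is a flat, row-major list of grayscale values. A 4x
    downscale (factor=4) maps each 4x4 block to one output pixel,
    which is why classic resampling finishes in seconds.
    """
    out_w, out_h = width // factor, height // factor
    out = []
    for oy in range(out_h):
        for ox in range(out_w):
            total = 0
            for dy in range(factor):
                for dx in range(factor):
                    total += pixels[(oy * factor + dy) * width + (ox * factor + dx)]
            out.append(total // (factor * factor))
    return out, out_w, out_h
```

If the AI apps run a model even for pure downscaling, that would explain the minutes-versus-seconds gap, but that is speculation on my part.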

1 Like

Hi Rene,

I can understand why Topaz has split Realistic from Creative - it makes more sense.

I like your suggestion about remembering previous text prompts, so why not post it under the Ideas category? I’ll certainly place a vote.

2 Likes

It probably depends on how one works and what one is aiming for. It is true that when Redefine was a single whole (the original slider 1-6), that big Topaz Girl jump appeared somewhere above value 2 - so the division has its own logic. On the other hand, I liked having the whole gradation in one place, because I could move between Realistic and Creative without jumping through the menu - I just had to move the slider to 3 or 4. For example, with birds photographed in flight high in the sky, a lot of detail (feathers, eyes, etc.) disappears, so Creative 3 or 4, rarely 5, sometimes helped there (and 6 only for Topaz Girl). When artifacts appeared, or the bird’s head started to appear at both front and back (outstretched legs changed into a beak), I just backed off the slider value and checked the result, without having to go back to the parent menu and from there to Realistic. There should probably be an option for users to choose between the original and the new system (perhaps with complications for the developers), or a way to avoid having to go back up a level in the menu. I was satisfied with the previous system; I don’t like traveling through complex decision trees :slightly_smiling_face:.

Thx for the explanation!

I just wondered because earlier beta and release generations of the product did strip out color during Topaz processing, and I wondered if that issue was recurring.

But, if you purposely made color and/or light adjustments, that’s not an artifact of the processing. It’s a personal choice. I’m not judging the choice, just making sure the software isn’t usurping the choice.

It’s a nice pic. And I think it’s cool you can extract stills of that quality from video frames. Good to know! You should be able to create interesting outputs with that ability - it sparks ideas for composites (in Ps) for me.

1 Like

do you really get confused with 6 choices on one screen?

I adapt very easily to new interfaces

Don’t most of us probably easily manage making selections from far more options in say LrC Presets or DxO Film Pack thumbnails ?

I’ve never tested DxO FilmPack.

I’m getting to know the software better and better. So I know what to expect in terms of output rendering depending on the creativity setting. In my case, I often use level 2 (currently subtle) because level 3 gives a painted effect. This wasn’t the case in early versions of Topaz 7, when Redefine was first introduced. I rarely use level 4, because level 2 is very good in my case on my photos. And 5 and 6, well that’s even rarer, only if I want to do stuff that’s a bit more HDR.

Thank you for your appreciation of the photo. Basically, I extracted one of the images from my video because I’d forgotten to take a direct photo of the lion family. But on reviewing the video, I noticed on one shot that I had the whole family. Hence the idea of extracting an image from this shot :wink:. Then, a good description with Gemini and several variations with Redefine and Recover V2 enabled me to achieve this superb result. A result almost as good as a photo from a very expensive camera.

1 Like