Education | Artifacts in generative models

Gigapixel still has issues with Redefine when processing clear skies, even in the cloud. There are still haloes of various kinds. I raised this issue MONTHS ago, but it has not been resolved.

Local renderings are even worse.


Try subtle with a prompt.

That should not be necessary, since there are no blotches or ghosting in the original images. It is something that needs to be addressed. ChatGPT works fine.

Besides, the result with a prompt is even worse!

Could you tell me the prompt, please?

You don't know what ChatGPT does in the background with your images; maybe there is some image-recognition system that tells the AI where the image was taken and what kind of image it is.

Something like “Enhance this photo, removing any ghosting or blotches in the sky”. Since there aren’t any in the original low-res photo, I can understand why this wouldn’t work.

Here is the result from ChatGPT, although I would still want to upscale this.

Often, of course, ChatGPT changes an image substantially.

Try: The image of a city taken from a high-rise building, trees can be seen from above and the city stretches to the horizon, the sky takes up a very large area of the image and a skyscraper can be seen, the image is very noisy and details are difficult to see. The weather is very cloudy.

You need to tell it where the image was made, since vegetation and how buildings are designed will change from country to country.

So you add, to the prompt text I presented, where the image was taken.


Ask ChatGPT where the image was made.
If it's able to tell, then you know the model that created the image was guided by a language model, one that can scan an image for information about it.

The ghosting and blotches have nothing to do with interpretation of the image in my view. It is a problem with Topaz’s AI algorithms.

With ChatGPT, I usually write something like “Enhance this photo taken at such-and-such location.” In this case I merely said CDMX, but it came back saying it was an image of the monument to the Niños Héroes, which is correct.

It tries to insert something it recognises from its training material that isn't actually there, and maybe the model is running out of ideas at the same time because there's nothing there. :sweat_smile:

ChatGPT is much bigger; OpenAI scraped practically the whole internet to build ChatGPT. It's nothing that could fit on our home computers.

FYI, we are working on improving the models so they don’t add artifacts like you mentioned. This is coming in the next versions :slight_smile:

I have to say, the results from ChatGPT are not good for this example. There is a very bad texture in the sky; it’s just not upscaled yet. The issue here is that some models, like Redefine, interpret this “noise” (the pixels in the sky) as a texture. The model then tries to create a higher-resolution texture to replace the low-resolution one.

ChatGPT on the left, Gigapixel on the right:

I would add that Redefine is not meant for this type of picture. If you want, you can share the original, and we can test on our end to see which model would work best.
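To illustrate why noise in flat areas can fool an upscaler, and why lightly smoothing the sky before upscaling can help: averaging neighbouring pixels flattens low-amplitude noise in smooth regions, so there is less false “texture” for the model to amplify. This is a minimal pure-Python sketch of a box blur for illustration only; it is not anything Gigapixel actually does, and the `box_blur` function and grayscale-list representation are my own assumptions.

```python
def box_blur(gray, radius=1):
    """Box blur on a 2-D list of grayscale values (0-255).

    Averaging each pixel with its neighbours suppresses the
    small random variations an AI upscaler might otherwise
    interpret as real texture in a flat region like a clear sky.
    """
    h, w = len(gray), len(gray[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total, count = 0, 0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    # Clamp coordinates at the image border.
                    ny = min(max(y + dy, 0), h - 1)
                    nx = min(max(x + dx, 0), w - 1)
                    total += gray[ny][nx]
                    count += 1
            out[y][x] = total // count
    return out

# A flat 5x5 "sky" patch with one noisy pixel in the middle:
sky = [[100] * 5 for _ in range(5)]
sky[2][2] = 140
smoothed = box_blur(sky, radius=1)
# The noisy pixel is pulled back toward its neighbours,
# while uniform areas are left unchanged.
```

In practice a dedicated denoiser (or simply running a noise-reduction pass before upscaling) does the same job far better than a box blur, which softens real edges too.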


Thanks for the input. I will look at the other models myself.
