Enhancement Improvement | Wonder Beta 2 Model | Large Image Improvement

@Ange.topazlabs
@Lingyu

This is not a quantization or compression error.

Details have been shifted; it may be that one of the models is starting at the wrong position.
Or in the wrong resolution.
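To illustrate the suspicion: in a tiled pipeline, if the reassembly step uses a different tile origin or scale than the cutting step, every tile lands displaced and details shift. A minimal sketch of that failure mode in Python (all names are hypothetical, not Topaz's actual code):

```python
import numpy as np

def assemble_tiles(tiles, tile_size, grid, scale=1.0, offset=(0, 0)):
    """Paste upscaled tiles back onto one canvas.

    If `scale` or `offset` differs from the values used when the image
    was cut into tiles, every tile lands displaced and details shift,
    which is the kind of artifact described above. Offsets are assumed
    to be non-negative in this sketch.
    """
    h = int(grid[0] * tile_size * scale)
    w = int(grid[1] * tile_size * scale)
    canvas = np.zeros((h, w, 3), dtype=np.float32)
    for (row, col), tile in tiles.items():
        y = int(row * tile_size * scale) + offset[0]  # wrong offset -> shifted rows
        x = int(col * tile_size * scale) + offset[1]  # wrong offset -> shifted cols
        canvas[y:y + tile.shape[0], x:x + tile.shape[1]] = tile[:h - y, :w - x]
    return canvas
```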

I imagine you can fix this and that Wonder 2 Local will then also work with large images.

@Lingyu
@Ange.topazlabs
@partha.acharjee

Or a different architecture is needed.

Mamba Zigzag Diffuser

1 Like

@Lingyu
@Ange.topazlabs

Or this one.

I think this should be a bug report but I voted anyway

1 Like

It's already a bug report.

It could be that it's related to the localisation problem.
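If the locale theory holds, the mechanism could be as simple as number formatting: a German-locale Windows system writes decimals with a comma, so a parser that expects a dot reads the wrong value. A minimal illustration (just a guess at the mechanism, not Topaz's code):

```python
text = "1,5"  # how a scale factor of 1.5 is written on a German-locale system

# A strict parser that expects a dot fails outright:
try:
    float(text)
except ValueError:
    print("strict parser rejects:", text)

# A C-style strtod-like parser stops at the comma and silently reads 1.0
# instead of 1.5 -- exactly the kind of wrong position/resolution that
# would shift details in the output.
print(float(text.split(",")[0]))  # -> 1.0
```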

What does 30 min fine-tuning mean?

Whose 30 minutes? The initial AI trainers' or ours?

Where did you read that?

Could you show me a screenshot?

@Ange.topazlabs
@alexandre.topazlabs
@Lingyu
I can confirm that the artifacts are gone in version 1.3.3 after switching to the US system format.

And yes, memory usage is now in line with what you’d expect for this model.

At one point, Wonder 2 alone was using 45 GB system memory.


2 Likes

Confirmed with a second photo.

Great! The issue is referenced in a development team ticket for foreign Windows machines, but it's still good to keep this feature request for a model that handles large images or files from modern cameras.

It was in the “CleanDIFT: Diffusion Features without Noise” link you provided above.

I’ll attach a snip, Thomas.

The “30 minutes” is mentioned in the center of the page. It seemed to imply that getting CleanDIFT quality would take 30 minutes. But perhaps I misunderstood what it was showing… do you know what that 30 minutes represents? You’re much more tech-oriented than I am.

The point here is that a clean output can be achieved if the model has been fine-tuned on an A100 for 30 minutes.
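So the 30 minutes is a one-time training cost on the developers' side, not something that runs per image. A generic sketch of what a short, wall-clock-budgeted fine-tune looks like (not CleanDIFT's actual code; the model and data loader are placeholders):

```python
import time
import torch

def short_finetune(model, loader, minutes=30, lr=1e-5, device="cuda"):
    """Train until the wall-clock budget is spent, then stop."""
    model.to(device).train()
    opt = torch.optim.AdamW(model.parameters(), lr=lr)
    deadline = time.time() + minutes * 60
    while time.time() < deadline:
        for x, target in loader:
            loss = torch.nn.functional.mse_loss(model(x.to(device)),
                                                target.to(device))
            opt.zero_grad()
            loss.backward()
            opt.step()
            if time.time() >= deadline:
                break
    return model
```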

That would have to be handled by TL.

1 Like

@Lingyu

At the end of the processing (20K output), CPU usage was very high and so was the RAM load.

Is this normal, or is something running on the CPU that should run on the GPU?

Got it! That’s what I wondered, whether it was us or them. You cleared that up. Thx!


I’m currently enlarging a 5K image by 4X.

Wonder 2 with Stream and Neuroserver is using the entire system.

When the CPU is in use, in the third step, it uses 90 GB of RAM in short bursts.
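For scale, that number is plausible from simple arithmetic: assuming a 16:9 5K source (5120x2880), the 4X output is 20480x11520 px, and a single float32 RGB copy of that is already about 2.6 GB, so a few dozen live buffers or feature maps reach 90 GB. A quick back-of-the-envelope check (the source aspect ratio and buffer count are my assumptions):

```python
# Rough RAM estimate for the 20K output described above.
# Assumes a 16:9 5K source (5120 x 2880) upscaled 4X.
width, height, channels = 5120 * 4, 2880 * 4, 3
bytes_per_value = 4  # float32

one_copy_gb = width * height * channels * bytes_per_value / 1024**3
print(f"one float32 RGB copy: {one_copy_gb:.1f} GB")            # ~2.6 GB
print(f"90 GB is roughly {90 / one_copy_gb:.0f} such buffers")  # ~34
```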



Its name could be “Massive”.

I’ve been testing Wonder. It’s pretty awesome. However, sometimes it seems to do nothing if the image is huge, like 1500x1500 or even larger. Is that how it is supposed to be? I literally could see no difference when running it at 1x or 2x.

Does Wonder only work when you upscale a low-resolution image? I was hoping I could clean up larger images that have pixelation or other artifacts. Of course I can simply shrink the image and then use Wonder, but I am just trying to understand how it works.

1 Like

@Moebius - Correct, Wonder is meant for small, low-resolution images with compression artifacts to fix. You can downscale a large image to force Wonder onto it; that isn't its intended use, but some users mention it works for them. If you downscale the image enough, you can then upscale at 4x on Cloud, which gives the best results the model can produce at that scale factor.
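The arithmetic behind "downscale enough, then 4x": pick the input size so the 4x output still covers the size you need. A tiny sketch with example numbers (not an official rule):

```python
# Example: force Wonder onto a 6000 x 4000 photo and still get the
# original size back after the 4x Cloud upscale.
original = (6000, 4000)
cloud_upscale = 4

wonder_input = tuple(side // cloud_upscale for side in original)  # (1500, 1000)
output = tuple(side * cloud_upscale for side in wonder_input)     # (6000, 4000)
print(wonder_input, "->", output)
```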

I've added you to the thread for large-image requests for a Wonder-like model; make sure to add a vote!

1 Like

Thanks, I’ll do that! Is there a maximum resolution that is recommended to stay below for Wonder 2?

The model is trained on images from AI generation platforms, which are usually 1024 pixels at 72 dpi, as well as low-quality web images and compressed mobile phone files. Downscaling to around that pixel size and resolution is ideal when forcing the model onto files that fall outside its intended use.
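In practice that means resizing the long edge to roughly 1024 px before running Wonder. A simple Pillow sketch (file names and the quality setting are placeholders):

```python
from PIL import Image

TARGET_LONG_EDGE = 1024  # roughly the size of Wonder's training images

img = Image.open("large_photo.jpg")          # placeholder path
scale = TARGET_LONG_EDGE / max(img.size)
if scale < 1.0:                              # only shrink, never enlarge
    new_size = (round(img.width * scale), round(img.height * scale))
    img = img.resize(new_size, Image.LANCZOS)
img.save("wonder_input.jpg", quality=95)
```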

If you have examples of good results on files that were originally large, feel free to add them here; it helps to see cases like this.

That would explain why I sometimes see repetitive patterns and why it does really well with AI-generated images.

Hmmm, that gives me an idea. :thinking: