Workflow | Studio and iOS

Alright,

I haven’t seen any example use of the iOS Gigapixel App, so I am going to share my experience, along with a quick comparison with the desktop version (I tried both the Mac and PC versions and saw no difference). This is not exhaustive, just my first impressions.

Hopefully the attached PNG makes sense (from left to right: A - original low-res image → B - desktop results → C/D/E - app results).

I took some video and images on my iPhone at a VERY dark small concert. Needless to say, the “best” usable images were still frames from the video. After exporting a still frame, the resolution and quality were too low (see Column A).

Since I was traveling, I downloaded the Gigapixel App for iOS, selected the image, and uploaded it using the “Pro” Everything option, which processes in the cloud. Column E was the result, and I was floored. It’s not perfect, but for cropping in and doing some Lightroom adjustments it was more than I expected.

So here is the conundrum/confusion, and I’m hoping to get some comments from Topaz.
Back at my desktop Mac running Gigapixel 8.2.2, I tried a variety of models, settings, etc., and the best I could get was similar to Column B. (If I use creative settings that are too strong, it goes sideways real fast.)

Just to verify, I sent the original image back to the cloud from the iPhone app and tried a couple of different options in the app settings (not very adjustable); the results are Columns C and D.

Questions/Observations:
It seems that the cloud processing used by the app relies on “better” models for generative upscaling? Also, there is no control over the final output size, which, combined with the generic settings, leaves little room for adjustment.
How can I get similar-quality generative enhancement and upscaling in the desktop app? What explains this huge discrepancy? It seems like the desktop app should be able to get much closer to the iOS app’s result.

Thanks,
Norbert


I think the difference here is more cloud rendering vs. offline rendering than desktop app vs. iOS.

I, too, have experienced that cloud renderings tend to come out better than the offline ones in most cases.

I think that they use a bigger model in the cloud that wouldn’t fit in the VRAM of most GPUs.

(Which is a bit unfortunate, because on Apple Silicon there’s plenty of VRAM available compared to standalone graphics cards on the PC.)

I have to strongly agree with your point.

I guess the next question is HOW do I intentionally select the cloud model the iOS app uses when sending to the cloud from the desktop version? I also tried a cloud rendering or two from the desktop version, but they looked as bad as the locally generated desktop results (the same output, just processed faster, of course).

I feel the iOS settings (though simple, with generic descriptions) map to better-optimized settings once they are received and processed in the cloud. I would like transparency and access to those “settings” when sending to the cloud from the desktop version, rather than having to move my image to a mobile device just to send it to the cloud.

Using Redefine on the desktop will provide the closest results to renders on iOS.
Depending on the input dimensions of the image, one tip is to downsize the image first.
Downscaling the source gives Generative models more freedom of interpretation when upscaling the image.
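For anyone who wants to try the downsizing step outside of Gigapixel first, here is a minimal sketch using Python and Pillow. The file names and the 0.5 factor are just placeholder assumptions (not anything Topaz-specific), and you would still run the downsized file through Redefine afterwards:

```python
# Minimal pre-downscale sketch using Pillow (pip install pillow).
# Paths and the scale factor are placeholders; adjust before
# feeding the result into Gigapixel's Redefine model.
from PIL import Image

SRC = "concert_still.png"        # hypothetical exported video frame
DST = "concert_still_small.png"  # downsized copy to upscale instead
FACTOR = 0.5                     # halve each dimension as a starting point

with Image.open(SRC) as img:
    new_size = (max(1, int(img.width * FACTOR)),
                max(1, int(img.height * FACTOR)))
    # LANCZOS keeps the downscale clean, so the generative model
    # reinterprets detail rather than amplifying compression noise.
    img.resize(new_size, Image.Resampling.LANCZOS).save(DST)
```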

I’ll give that a test to see what comes back.

Thanks