For recovery exports it might be useful to re-export as the results can differ depending on the random seed used for the diffusion (or something like that).
Regardless, the same thing can be achieved with a quick export button.
I didn’t keep track of which images I used Spd vs Quality on, Dakota. But I can do so in the future if it’s of interest.
The Spd model, in the portrait test, was a ‘chaser’ to a round of Quality using Ps plugin on my PC.
And I’m not sure which model PlugsNPixels used for his initial run that I piggybacked off of for my own 2nd & 3rd runs.
Once images get above 1K px I don’t have the patience to wait for the fabulous Quality model (+ I’m always afraid doing so will fry my PC’s brain from the ongoing strain).
Indeed - yeah I’m just curious if we have any Recovery (Speed) model fans out there.
We are looking at other ways to speed things up for the more powerful diffusion models without sacrificing the quality of results, so it is top of mind.
The results from some of the big diffusion models are incredible - but some of them require quite a bit of juice to get the best results… Even more than the Recovery model.
I think “export” is different than “render” or “re-render”.
I know there is an option to re-render before you get to the export screen, so that gives me the impression that export is just converting the rendered photo into a chosen format (JPEG, TIFF, PNG, etc.).
If that’s all “export” is doing, it makes the extra clicks have less potential to be useful.
Bring them on! I’m cool with waiting longer for higher quality! I do have an RTX 4090, though, so my wait times will still be significantly shorter than on average hardware.
Okay, @dakota.wixom, a photo buddy just sent around a .jpg of musicians at a campfire (very low light, no ancillary light or flash) that he took with his iPhone (handheld, with a longish exposure and open aperture to add more light to the dark scene).
Needless to say there was some blur (handholding + movement of the instrument-playing musicians; it wasn’t a posed, static portrait shot).
I didn’t run it through PAI at all. I just experimented with the current GAI 7.4.1 Ps plugin (W11) to see what it could do, using Recovery Quality and Speed as separate runs. They’re labelled in the filename snips below. The original, in the GAI UI, is first.
Original .jpg downloaded from email (so his original original may be higher res; IDK):
GAI 2x Upscale - Speed Recovery - NO Face Recovery (ran with ORIG, not w/prior Quality Recovery - that layer eye was off and I launched from the orig’s Ps layer):
GAI 1x - 2nd Pass Speed Recovery - Face Recovery Activated, All Faces Detected (including the face in profile) - 2nd pass meaning I used the prior Spd Recovery output as my launch layer into GAI, not the Orig.:
Here’s how my layer stack in Ps looks after those steps. I turned off any layer eye icons when not working with a given layer. I typically label my layers when processing to keep track of what’s what:
My perceptions/analysis of the outputs (see for yourselves):
The initial 2x Quality Recovery did a great job, but it would likely have improved if I’d activated Face Recovery with it (which I didn’t).
The initial 2x Speed Recovery (Face Recovery = Off) added quite a bit of detail, particularly to the more stationary objects/people in the scene (look at the guitar strings and the guys who seem to be moving less, i.e., almost all except the white-bearded gent at center: significant clarity improvements). I don’t know whether adding the % details feature to Speed would essentially knock its speed down to what Quality is. But if there could be a bit more control, it might be a good compromise… (???).
The 2nd pass (i.e., using the 2x Spd No Face Recovery output as a new launch layer) had Face Recovery (latest beta vers.) turned On. Because I’d increased the size in the prior step, I used 1x sizing and only turned Face Recovery On, then ran the Spd Recovery a 2nd time. This added significant visible detail to the faces, providing useful definition, while adding only very subtle clarity (some folks might not see anything…) to the more stationary objects that were improved in the 1st pass. At least, that’s my perception.
Well, I had to try the 1.5x upsize Quality Recovery with Face Recovery On (Realistic, Gen2) just for my own curiosity. Here it is as output in Ps workspace:
If of any consideration, I think I set the Face Recovery Gen2 Realistic (when used) to around 60%.
I kept dialing it back because: [A] I didn’t want the heads to have overclarification relative to the rest of their bodies; [B] I didn’t want them to look too much like cutouts against a dark, uncluttered background. (I think I still got that effect with the centered standing guy and the Art Garfunkel-like guy at right, so maybe I could have dialed back more, or should have selectively masked and lowered opacity on the Ps output layer. IDK if that would work.)
I don’t take pictures of people, so I haven’t dealt with complaints about Face Recovery, including Recovery (Beta). By chance, I was asked to crop and possibly enlarge one of my acquaintances from a group photo. I tried it with Face Recovery turned on; overall it was passable (compared to the original). But I have to say one thing: that acquaintance does not squint so terribly! He really doesn’t squint at all. Unlike the AI’s version, I guess. So I gave up; I have too weak a temper for such things…
Is this how your image was framed that you tried Face Recovery with? Also, did you choose the “Realistic” or “Creative” Face Recovery (using which model generation - 1 or 2)?
If cropped as shown above 1st, it’s not what Face Recovery is designed to handle. It’s designed to work with small faces in a larger scene. Not high res faces or headshots that fill an image’s frame.
Is this example you’ve posted above how the face looked after you ran Face Recovery on the full group shot of smaller faces and then cropped in on the person of interest to show the result? Or, did you crop, enlarge and then run Face Recovery?
The eyes in your example - one is cross-eyed, one is squinty.
I am attaching some illustrations of how it really was. The framing of the faces in the photo seems fine to me. I tried both Gen1 and Gen2 Realistic; in terms of the eyes, it turned out equally ugly with both models (and some of the teeth are pretty crappy too). The unlucky guy (with perfectly good eyes in reality!) is third from the right. Poor man, an innocent victim of the artificial intelligence rebellion already! Here are the illustrations (I haven’t tried editing the text on the photos, so there are only cryptic characters):
Thx! Yeah, that sort of pic run in that framing should work okay. Wild. Wonder what is confusing the robot…
The light direction is from the left. And, most of the guys have shaded right eyes (except for him). Yet the guy in the back whose eyes are also brighter (glasses man) doesn’t seem to have had his eye altered.
I suppose if worst comes to worst you could copy & horizontally flip the left eye and put it over his right eye.
Would you mind sharing the original pic without the face ID squares? And with no pre-brightening of the image (like not opening up any shadows on anyone as a 1st step). Things like this intrigue me.
It was the result of running your full group shot (not cropped) through GAI with Face Recovery Gen 1 at around 40%. It seemed as though once I got above that level the robot started moving your buddy’s right eye into a cross-eyed position. So I stopped when that eye was still somewhat pixelated but still centered in his eye socket.
Then I cheated: because all the guys are squinting their left eyes due to the bright sun from that direction (and that ambient light is also giving all their left eyes, certainly the ones camera right, a reddish cast), I copied and flipped his left eye onto the right-eye side, masked where the shadowing wasn’t correct, and retouched your buddy’s ‘new’ eye by sampling colors from the shadows surrounding the adjacent guys’ right eyes. I also added a Ps Hue/Saturation layer, selected all the nearby guys around your friend (incl. him), sampled that red ‘mouse eyes’ color, and desaturated and brightened the reds to try to get those areas back to a non-sun-affected color.
Next, I ran that doctored image thru PAI’s Face Recovery. Look at the differences in Face Recovery between those layers (the Photoshop layers are labelled so you can see which outputs are from GAI and which from PAI). PAI’s Face Recovery helped a lot, thanks to the doctoring.
On that output I cropped tighter to your buddy and the guys directly adjacent to him, then sent that back into GAI 7.4.0 and ran a Quality Recovery on that crop (4x upscale), NO Face Recovery. What I’ve posted up top is the result of all that.
I ended up with a ton of tiny thumbnail snips, so goodness knows if I’ve attached the right ones above to the proper steps I followed. But, there you go!
If you are a Ps aficionado you could probably go into Liquify and do a bit of “plastic surgery” on his eyes to compensate for the effects of the sun and cast shadows…
p.s. Denoise (at least in PAI) did hideous things to the face of the guy in the rear who’s wearing glasses, so I turned denoising off for that run.
Ran the image I’ve shown as the final thru Radiant (I way toned down its initial processing, but it eliminates a lot of haze), then back in Ps 2024 I selected the men’s heads (b/c they still looked very yellow to me) and reduced the yellow cast with Hue/Sat. Here’s that final final. Now I’m going to the gym to do my circuit…