Masking for Recover & Redefine

Hi,

I’m sure this has already been suggested, but it would be really nice to have the ability to either select or exclude areas of an image when using the Recovery and Redefine modes. For example, I’d like to use the Redefine function for an image of myself but I don’t want the AI to alter my face in any way.

On the other side of the coin, I’d like to be able to use the recovery function for a small area of a photograph instead of the whole image. Similar to how the preview square works.

James

I also hope that we can help train the Redefine model on specific object recognition so we can advance some uses of its generative functions. For example, if it could better recognize my shirt, I could test changing the shirt’s color.

Agreed. Even better than just including or excluding areas would be adding a layer to paint the intensity (creativity). For example, you might want to redefine an image with high creativity overall but reduce the creativity in some areas, such as the hands or feet. It’s a similar idea to the face recovery strength, but applied as a layer.
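As a rough illustration of what such a per-pixel intensity layer would do, here is a minimal sketch in Python with Pillow and NumPy (my own assumption for illustration; Gigapixel exposes no such API). It blends an original image with an AI-processed version through a grayscale strength map, where black keeps the original pixel and white takes the processed one:

```python
# Illustrative only: blend an original image with a processed version
# using a grayscale "creativity" map. Black = keep the original pixel,
# white = take the processed pixel, gray = blend proportionally.
import numpy as np
from PIL import Image

def blend_with_strength(original: Image.Image,
                        processed: Image.Image,
                        strength: Image.Image) -> Image.Image:
    orig = np.asarray(original.convert("RGB"), dtype=np.float32)
    proc = np.asarray(processed.convert("RGB"), dtype=np.float32)
    # Normalize the strength layer to [0, 1] and broadcast over RGB.
    w = np.asarray(strength.convert("L"), dtype=np.float32)[..., None] / 255.0
    out = orig * (1.0 - w) + proc * w
    return Image.fromarray(out.astype(np.uint8))
```

Painting the strength layer darker over the hands or feet would keep those areas closer to the original, which is exactly the behavior being requested here.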


This is simple to achieve today. Since Gigapixel has no idea which areas you want to redefine or recover (or to what degree), you can load two layers of your image into Adobe Photoshop or Photoshop Elements: the original, and a copy you have run through the Recover/Redefine model. Then, on the original layer, you can use the eraser tool on the parts of the image you want to change. Another option is to use the layer’s opacity to tune the percentage of the original image you want to keep.

Doing this takes effect immediately, so it is very simple to have full control over the changes you are making. The eraser tool also lets you make the edges as soft as you like, so the areas where you have added pieces of the Recover/Redefine image will blend as seamlessly as you like.
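For anyone who would rather script that workaround than do it by hand, here is a minimal sketch in Python with Pillow (my assumption for illustration; nothing here is part of Gigapixel). It composites the Recover/Redefine output over the original through a soft-edged rectangular mask, which is the code equivalent of erasing with a soft brush:

```python
# Illustrative sketch: keep the processed output only inside `box`,
# with a feathered (Gaussian-blurred) edge so the seam is soft.
from PIL import Image, ImageDraw, ImageFilter

def composite_with_soft_mask(original: Image.Image,
                             processed: Image.Image,
                             box: tuple, feather: int = 8) -> Image.Image:
    mask = Image.new("L", original.size, 0)
    ImageDraw.Draw(mask).rectangle(box, fill=255)
    mask = mask.filter(ImageFilter.GaussianBlur(feather))
    # Where the mask is white, take the processed pixels; elsewhere
    # keep the original.
    return Image.composite(processed, original, mask)
```

A larger `feather` value behaves like a softer eraser edge; the freehand shapes you would paint in Photoshop could replace the rectangle with any grayscale mask.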

Br
BT


I agree, though I do see high value in having it available in-app, even if it were just a hard mask that defined the boundary of what you want to redefine vs. not. That way, as you iterate over and over to achieve the look you are going for, you aren’t having to move back and forth so much between applications.

I’m not a developer, but sometimes it helps me to think a little outside the box: what if Gigapixel didn’t have to consider what I wanted to redefine or recover, as you mentioned it does today (or the inverse, what I don’t want it to redefine or recover), but instead continued as normal and redefined/recovered the whole picture, while an area I masked (i.e., using the edit selection function) remained untouched?

I run into this a fair amount: the whole image turned out great except for, say, a hand that was really low resolution. Low creativity or not, and regardless of other healing efforts beforehand (say, in Photo AI), Gigapixel turns that hand into a furry monster. Then I have to go through the cycle of jumping to another program, which, as you said, is relatively simple (for some folks), but adds minutes for many of us. I think the suggestion is good. I do acknowledge there is a way around it, and of course Photoshop would be the superior platform for very detailed work like that, as you point out.

I get what you are saying, though; the same would be true for me if I were dealing with color temperature. Photoshop is the best but a little complex for that; Lightroom is perfect for making those color temperature edits and still provides a lot of control (not to PS level, of course), but in an intuitive, non-graphic-designer sort of way. I don’t leverage it in Photo AI because I haven’t run into a scenario where that was the only slider I needed to move to get the photo looking the way I wanted.

That also applied to lighting, and I have to say Lightroom and Photoshop overall do better in that area as well, though sometimes they don’t. There is something about the new Photo AI revamp of lighting that I can’t put my finger on (probably because I don’t want to spend the cycles jumping back to a previous version to compare), but it often seems to just work better now. So its value to me is more like: I have another reason to go to Photo AI when I have a challenge other than denoise, focusing, and face recovery. I realize my example is a different product line, but that is what I could think of at the time.

Hmm, on that note, I wonder if text masking sort of works that way; if so, that might be a workaround (for my scenario at least). Much of my creative use of Gigapixel tends to be along the lines of: “Yikes, that spot in the picture came out so bad, but I have to make this photo work somehow; will Gigapixel’s Redefine save my butt on this one?” (It sort of makes up for my lack of artistic skill and hand dexterity.)

That doesn’t work, because Topaz changes the area so much in texture, color, etc. that trying to use only parts of the result doesn’t blend. Having an on/off brush, or even the ability to redo a portion, would be a game changer. Right now I don’t use it because I can’t control the results. For example, I had a painting of flowers that I wanted more detail in; I tried 1 and 2, which weren’t enough, then did 3, and it changed some of the petals into birds. Sadly, I like what it did to the rest of the image. It would be great to be able to highlight and redo just that area. I tried saving the result, putting it over the original, and brushing off the birds, but then that part of the original doesn’t match the rest of what Gigapixel did. Hopefully that makes it a little clearer what I believe people are asking for.