I’m sure this has already been suggested, but it would be really nice to have the ability to either select or exclude areas of an image when using the Recovery and Redefine modes. For example, I’d like to use the Redefine function for an image of myself but I don’t want the AI to alter my face in any way.
On the other side of the coin, I’d like to be able to use the Recovery function on a small area of a photograph instead of the whole image, similar to how the preview square works.
I also hope we can help train the Redefine model to recognize specific objects, so its generative functions become more useful. For example, if it could better recognize that my shirt is a shirt, I would test changing just the color of the shirt.
Agreed. Even better than just including or excluding areas would be a layer where you can paint the intensity (creativity). For example, you might want to redefine an image with high creativity overall but dial the creativity down in certain areas, such as the hands or feet. It’s the same idea as the face recovery strength, but applied as a paintable layer.
This is simple to achieve today. Since Gigapixel has no way of knowing which areas you want to redefine or recover (or to what degree), you can load two layers of your image into Adobe PS or PS Elements: the original, and a copy you have run through the Recover/Redefine model. Then, on the original layer, use the eraser tool on the parts of the image you want to change. Another option is to use the layer opacity to tune the percentage of the original image you want to keep.
The effect is immediate, so it is very simple to keep full control of the changes you are making. The eraser tool also lets you make the edges as soft as you like, so the areas where the Recover/Redefine result shows through blend in as seamlessly as you want.
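If you prefer scripting to Photoshop layers, the same masked blend takes only a few lines of Python with Pillow and NumPy. This is just a sketch under the assumption that you have exported the untouched image, the Recover/Redefine output, and a grayscale mask (white where you want the AI result, black where you want the original); the filenames original.png, redefined.png and mask.png are placeholders.

```python
from PIL import Image, ImageFilter
import numpy as np

# Load the untouched image and the Recover/Redefine output (placeholder filenames).
orig_img = Image.open("original.png").convert("RGB")
redef_img = Image.open("redefined.png").convert("RGB").resize(orig_img.size)

# "mask.png" plays the role of the eraser / intensity layer:
# white = keep the AI result, black = keep the original, grey = partial blend.
mask_img = Image.open("mask.png").convert("L").resize(orig_img.size)
# Soften the mask edges, like a soft eraser brush, so the transition is seamless.
mask_img = mask_img.filter(ImageFilter.GaussianBlur(radius=8))

original = np.asarray(orig_img, dtype=np.float32)
redefined = np.asarray(redef_img, dtype=np.float32)
weight = np.asarray(mask_img, dtype=np.float32)[..., None] / 255.0

# Per-pixel weighted blend between the two versions.
blended = original * (1.0 - weight) + redefined * weight
Image.fromarray(np.clip(blended, 0, 255).astype(np.uint8)).save("blended.png")
```

A constant grey mask reproduces the layer-opacity trick, and painting the mask darker over faces or hands is effectively the “creativity layer” suggested above.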