My results still come out poor.
They are sometimes even confusing to me. For example, I will use the green "keep" marker, and the very pixels I have gone over in green aren’t kept. That doesn’t make sense to me.
Given all that it seems like I would have to do by hand to get a quality mask, I am wondering about a different approach.
I can easily make a rough mask (a binary image) of my original image using Python/OpenCV or a program I’ve been using from GitHub. The result is not perfect (or I wouldn’t be here, of course), but it is decent. I could then apply morphological operations – open and close to get rid of small specks and holes, and then erode to make the objects a little smaller. If I could take the result of this and somehow use it so that the white portions (which are objects) were translated or assigned to green/"keep" within Topaz AI Mask, that would seem to go a very long way, providing a huge amount of correct "keep" marking for AI Mask to work from – and hopefully translating into much better (and faster) results.
Is this possible? Can one input a binary mask image (or something like it) to give AI Mask a great starting point for its work?