Request | Adding Contextual Knowledge

New user here, please excuse me if this has been brought up before…
Are there plans to give Gigapixel project-level knowledge about people or locations?

Whether you work on your family album or recover over-compressed e-commerce imagery with dozens of images showing the same model in many variations, I wish I could tag people and have Gigapixel build up knowledge about their physical features. This would avoid notorious mistakes, like idealized teeth or a smile that you know is wrong because you know the person well.

I recently ran into a case where Gigapixel, in all its modes, insisted that the smiling person was showing teeth when the lips were in fact closed. I wanted to mark the lips and tell Gigapixel to give me variations (similar to how Photoshop does this).

When the shot shows an often-photographed scene, such as the recently posted image of the iconic Gherkin in London, couldn’t Gigapixel pull in external data to complete the picture?

An AI-assisted text and research tool we use lets me store large amounts of project-specific information that is considered in every new job. The tool learns our tone of voice and also ensures that new texts do not repeat what was said before.

I’d like to build similar libraries, either by teaching Gigapixel through normal use or by feeding in context deliberately (Anna as a child, as a young woman, as a middle-aged mother…).

In the post above, I already suggested making Gigapixel learn about the features of people and objects. I see another use case that, if tackled efficiently, could save most of the processing time for certain images.

In e-commerce, it is quite common not to have a full series of images for all available colours. Hence, editors change colours in post-production. Unfortunately, the CDN often creates small renditions with harsh compression, and such imagery may end up as input for Gigapixel.

With low-quality stacks of images showing a shirt in blue, green and yellow, it should suffice to fully process only the images in one colour. If Gigapixel detects an identical pixel distribution and identical colours across, say, 80% of the image, it could concentrate on the shirt. For the majority of the pixels, Gigapixel could simply reapply the recipe it used earlier.
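
To illustrate what I mean (this is only a rough sketch of the idea in Python, not a claim about how Gigapixel works internally; the file names and the tolerance value are made up):

```python
# Sketch: compare a new colour variant against an already-processed reference,
# mask the region that actually differs, and restrict fresh processing to that
# mask while the unchanged area reuses the earlier result.
import numpy as np
from PIL import Image

def change_mask(reference_path, variant_path, tolerance=12):
    """Return a boolean mask of pixels that differ between two colour variants.

    `tolerance` is a hypothetical per-channel threshold meant to absorb
    compression noise; both paths are placeholders.
    """
    ref = np.asarray(Image.open(reference_path).convert("RGB"), dtype=np.int16)
    var = np.asarray(Image.open(variant_path).convert("RGB"), dtype=np.int16)
    if ref.shape != var.shape:
        raise ValueError("Variants must share the same dimensions")
    # A pixel counts as 'changed' if any channel differs by more than the tolerance.
    return (np.abs(ref - var) > tolerance).any(axis=-1)

def reuse_ratio(mask):
    """Fraction of the image that is unchanged and could reuse the earlier recipe."""
    return 1.0 - mask.mean()

# Hypothetical usage: if ~80% of the pixels are unchanged, only the shirt
# region would need a fresh pass; the rest could copy the prior output.
# mask = change_mask("shirt_blue_small.png", "shirt_green_small.png")
# print(f"Reusable area: {reuse_ratio(mask):.0%}")
```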

Of course, the user could also point Gigapixel to the version with the richest contrast and texture (the original photo).