Better UX for choosing models

Currently, choosing a model is like trying to “read Greek” for the uninitiated: the model names themselves say nothing about how the models differ, and the written descriptions overlap so heavily that, IMO, they aren’t useful. It’s not apparent when or why you’d pick one over another, at least beyond whether a human face is involved in the footage.

I propose a form of “abstraction layer”: instead of opaque model names that convey nothing to a new or unfamiliar user, rename the options after the scenarios/use-cases where you’d choose that particular model over another. The technical model name could then be footnoted in the description for reference. A rough sketch of the mapping I mean is below.
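
To make that concrete, here’s the kind of structure I have in mind (everything below is hypothetical, including the model names and labels; it’s only meant to illustrate use-case labels sitting in front of the technical names):

```ts
// Hypothetical sketch: use-case-first labels in front of technical model names.
// None of these identifiers are real; they only illustrate the mapping.
interface ModelOption {
  useCaseLabel: string;   // what the user sees in the dropdown
  description: string;    // plain-language "when to pick this"
  technicalName: string;  // footnoted in the description for reference
}

const modelOptions: ModelOption[] = [
  {
    useCaseLabel: "Clean up compressed web video",
    description: "Removes blockiness and banding from low-bitrate sources.",
    technicalName: "model-a-v2", // hypothetical
  },
  {
    useCaseLabel: "Sharpen clean, high-quality footage",
    description: "Adds fine detail without over-smoothing.",
    technicalName: "model-b-v4", // hypothetical
  },
];
```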

Or, at minimum, implement some sort of visual example/comparison UX showing why a particular model should be chosen over another. For example, as the user’s cursor hovers over a model in the dropdown, a popup modal appears with a 2x2 grid of image/video examples of what that model excels at, e.g. removing compression artifacts/macroblocking, improving detail, or an animated GIF showing a changed framerate / added slow motion. Something like the sketch below.
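
Again purely to illustrate, a hypothetical sketch of that hover behavior (all CSS classes, asset paths, and model IDs are made up):

```ts
// Hypothetical sketch of the hover-preview idea: when the cursor rests on a
// model in the dropdown, show a small modal with a 2x2 grid of examples.
interface PreviewExample {
  caption: string;  // e.g. "Macroblocking removed"
  mediaUrl: string; // short clip, GIF, or image demonstrating the effect
}

const previewsByModel: Record<string, PreviewExample[]> = {
  "model-a": [
    { caption: "Compression artifacts removed", mediaUrl: "examples/a-deblock.gif" },
    { caption: "Detail improved", mediaUrl: "examples/a-detail.gif" },
    { caption: "Framerate changed", mediaUrl: "examples/a-fps.gif" },
    { caption: "Slow motion added", mediaUrl: "examples/a-slomo.gif" },
  ],
};

function showPreviewModal(modelId: string, anchor: HTMLElement): void {
  const modal = document.createElement("div");
  modal.className = "model-preview-modal"; // styled as a 2x2 grid via CSS
  for (const ex of previewsByModel[modelId] ?? []) {
    const cell = document.createElement("figure");
    const img = document.createElement("img");
    img.src = ex.mediaUrl;
    const cap = document.createElement("figcaption");
    cap.textContent = ex.caption;
    cell.append(img, cap);
    modal.append(cell);
  }
  anchor.after(modal); // attach the popup next to the hovered dropdown item
}
```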

Another possibility is a “buffet sampler” function: a checkbox that, when ticked, processes a short sample through each model using optimal (or most-used) parameters, then presents the user with the “plate” of results. The user clicks whichever result they like best, and that model is announced like a “game reward” (teaching/reinforcing the user). Optionally, the process could rinse-and-repeat further down the video if the user wants (I’m aware of the “Second Enhancement” function).
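
Roughly, the sampler loop might look like this; `enhanceSample` is a stand-in for whatever the app’s real processing call would be, not an actual API:

```ts
// Hypothetical sketch of the "buffet sampler": run one short clip through
// each model with its default/most-used settings, plate the results, and
// record which one the user picks.
interface SampleResult {
  modelId: string;
  outputUrl: string; // rendered sample clip
}

// Assumed processing call; returns the URL of the processed sample.
declare function enhanceSample(
  sourceClipUrl: string,
  modelId: string,
): Promise<string>;

async function buffetSampler(
  sourceClipUrl: string,
  modelIds: string[],
): Promise<SampleResult[]> {
  // Process the same sample through every model in parallel.
  return Promise.all(
    modelIds.map(async (modelId) => ({
      modelId,
      outputUrl: await enhanceSample(sourceClipUrl, modelId),
    })),
  );
}

function onUserPick(pick: SampleResult): void {
  // "Game reward" style announcement, teaching the user which model won.
  console.log(`You picked ${pick.modelId} — great for this kind of footage!`);
}
```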

And then this could go further via some sort of in-program “User Notebook” UX that travels with the user: it would keep comparison snapshots of the models used, with the changes/improvements instantly accessible via floating modals, giving the user an instant reminder of what they used previously and how it performed.
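
Something like this, roughly (all names hypothetical):

```ts
// Hypothetical sketch of the "User Notebook": a per-project log of which
// models were tried, with comparison snapshots retrievable on demand.
interface NotebookEntry {
  modelId: string;
  timestamp: Date;
  beforeSnapshotUrl: string;
  afterSnapshotUrl: string;
  userNote?: string; // e.g. "great on grain, soft on faces"
}

class UserNotebook {
  private entries: NotebookEntry[] = [];

  record(entry: NotebookEntry): void {
    this.entries.push(entry);
  }

  // Most-recent-first history, e.g. to feed a floating "Recent Models" modal.
  recent(limit = 5): NotebookEntry[] {
    return [...this.entries]
      .sort((a, b) => b.timestamp.getTime() - a.timestamp.getTime())
      .slice(0, limit);
  }
}
```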

This could even be taken further as a form of user feedback for refining the models: the user picks their favorite from the “buffet sampler”, and/or timeline-scrubs the buffet samples to particular points of interest, revealing where a model excelled (or didn’t) and which one was chosen for that particular need. Obviously, this feedback would only be collected with user permission.
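
And a sketch of what an opt-in feedback payload might contain (the endpoint is made up for illustration; the point is the permission gate):

```ts
// Hypothetical sketch of the opt-in feedback idea: only if the user agrees,
// package up which sample they preferred and where they scrubbed to, so the
// developers can see where each model excelled or fell short.
interface SamplerFeedback {
  chosenModelId: string;
  rejectedModelIds: string[];
  scrubbedTimestamps: number[]; // seconds into the sample the user inspected
}

function maybeSubmitFeedback(
  feedback: SamplerFeedback,
  userHasOptedIn: boolean,
): void {
  if (!userHasOptedIn) return; // never send anything without permission
  void fetch("/api/sampler-feedback", { // endpoint is made up
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(feedback),
  });
}
```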

Just some thoughts.

