I’ve been using TVAI for almost a year now, and the thing I struggle with the most is finding the optimal models and values for a given input video. Granted, I only use 480i as input (restoring old family VHS tapes), but I’d think many people go through the same struggle.
To overcome this, I’ve created a simple Python script that iterates through the models (including second enhancements) and through value ranges for “Add Noise” and “Recover Detail” (prenoise and blend, in CLI language), generating 5-frame previews for every possible combination.
Now, considering just 3 models and a range of 10 different values each for prenoise and blend, this amounts to 7,776 different possible value combinations… so obviously, the struggle is real. And that leaves the individual per-model settings (when not in Auto mode) completely out of the equation.
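For anyone curious what such a sweep looks like, here is a minimal sketch. The model names, value ranges, and the ffmpeg command template are placeholders, not the real TVAI CLI syntax (check `ffmpeg -h filter=tvai_up` in your TVAI install for the actual filter options):

```python
import itertools
import shlex

# Hypothetical search space; model names and ranges are illustrative only.
models = ["prob-3", "ahq-12", "iris-2"]
prenoise_values = [round(0.1 * i, 1) for i in range(10)]  # 0.0 .. 0.9
blend_values = [round(0.1 * i, 1) for i in range(10)]     # 0.0 .. 0.9

def preview_commands(source="input.mp4", frames=5):
    """Yield one (label, command) pair per model/prenoise/blend combination."""
    for model, prenoise, blend in itertools.product(models, prenoise_values, blend_values):
        label = f"{model}_pn{prenoise}_bl{blend}"
        # Placeholder command template; substitute your real TVAI filter string.
        cmd = (f"ffmpeg -i {shlex.quote(source)} -frames:v {frames} "
               f"-vf 'tvai_up=model={model}' previews/{label}_%02d.png")
        yield label, cmd

combos = list(preview_commands())
print(len(combos))  # 3 models x 10 prenoise x 10 blend = 300 single-pass combos
```

Even this single-pass grid is 300 previews; allowing a second enhancement pass multiplies the count again, which is where the combinatorial explosion comes from.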
There should be an easier way to do this within TVAI: iterating through a set of models (and pairs of models when second enhancement is enabled) and different values, but in a sane manner, so there aren’t so many combinations that comparing all the outcomes becomes absurd. I believe no one outside of Topaz understands the weight of each setting and how different values affect one another, so they’re in the best position to judge what a “sane” number of combinations and values would be, to help users choose the one that best fits their input video and preferred output.
It sounds like you’ve put in a lot of effort to streamline the process, and I completely agree—having to manually test so many combinations can be overwhelming. Your Python script seems like a smart workaround, but you’re right: this kind of functionality should really be built into TVAI itself. Having a system that intelligently narrows down the options, based on the input video and common preferences, would save users a ton of time. Topaz, with their deep understanding of how the models and values interact, could definitely help create a more user-friendly approach to this problem. Hopefully, they take note of this!
What if they make a championship bracket style compare AI tool?
It would work something like this: it would make one preview per model, then show you two, and you pick the better one. Then it shows you two more, and so on. The AI part comes in by making better predictions about what settings to use in the next previews based on what the user picks.
For a simplified example, let’s say the AI makes 4 previews: Artemis HQ, Artemis LQ, Proteus Auto and Iris Auto. The user picks Artemis HQ over Artemis LQ and Proteus Auto over Iris Auto. The AI would then generate more previews for the next round, say Artemis MQ and Proteus with manual settings.
Anyway, let’s say eventually Artemis gets defeated by Proteus, and Proteus Auto gets defeated by Proteus with manual settings. The AI would then base its next round only on Proteus manual previews, and only on the settings that tend to win.
In this way, even though there are thousands of possible combinations of settings for the Proteus model alone, most of them would be weeded out quickly and somewhat efficiently.
I think the biggest shortcoming of this style of preview competition is that the user would already need to know which scene in the movie they are trying to enhance is the most problematic one.
Another problem with this idea is that the AI would have to know a bunch of good starting settings for models like Proteus. I have at least three combinations that work on most DVDs. The fact that Proteus Auto doesn’t come close to the results I get with my combos means Topaz doesn’t already know about those excellent starting settings for DVDs.