In Runway there's a feature where I can take a shot that's 9x16 and generate new pixels to the sides, so that the output video is 1x1 or 16x9. It would be great to have something similar built into Topaz Video AI. You can go the other way as well, from 16x9 to 1x1 or 9x16. This feature is very useful when re-purposing footage for online use, as the source is often framed for the wrong aspect ratio, and if we could generate new material to extend the edges of the shot (and change the aspect ratio), that would solve the problem without needing letterboxing or cropping.
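To make the request concrete, here's a rough sketch of the geometry involved (Pillow only; the frame size and centered placement are my own assumptions, not how Runway or Topaz actually does it). It just computes the wider canvas and a mask of the regions an outpainting model would have to fill:

```python
from PIL import Image

def expand_canvas(frame, target_ratio):
    """Place the frame on a canvas with the target aspect ratio and
    return (canvas, mask); white mask pixels mark regions to generate."""
    w, h = frame.size
    new_w = max(w, round(h * target_ratio))
    new_h = max(h, round(w / target_ratio))
    canvas = Image.new("RGB", (new_w, new_h))       # black working canvas
    mask = Image.new("L", (new_w, new_h), 255)      # 255 = generate here
    off_x, off_y = (new_w - w) // 2, (new_h - h) // 2
    canvas.paste(frame, (off_x, off_y))
    mask.paste(0, (off_x, off_y, off_x + w, off_y + h))  # 0 = keep source
    return canvas, mask

# 9x16 portrait frame -> 16x9 canvas: ~1166 px of new material per side
canvas, mask = expand_canvas(Image.new("RGB", (1080, 1920)), 16 / 9)
print(canvas.size)  # (3413, 1920)
```

The mask is the easy part, of course; generating temporally consistent pixels for those strips is where the hard work would be.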
Adding features that are already available in other apps would just be bloat and another potential source of bugs that never seem to get fixed.
That's a valid opinion. However, for me one of the main features of Topaz is "high quality upscale", and my use case is conceptually similar: I want more pixels from the same source material. So I think the new feature fits with one of the core strengths of Topaz, and what many users know it for.
I agree; stop implementing new features. I even think SDR to HDR was a mistake, because they should put all their energy into bug fixes and model updates now. Take RheaXL, for example: yes, the model shows great potential, but it's still in beta and can produce so many artifacts. And all the other models, when will they receive updates?
They should focus on the core tasks, and certainly not create new UIs that customers don't want and that only Topaz's marketing department thinks are great.
I don't want to be a downer either, but I agree that I don't want to see any more AI models until they pick a good user interface design and stick with it long enough to get the bugs under control.
For this model idea, I honestly don't think it's something that would work locally. It would need to generate based on similar scenes, and more often than not I think it would come up empty, either producing non-matching edges or erroring out. To solve that, it would probably need every video ever recorded as training data, and it would only be able to run on a supercomputer.
The examples of generative fill that I've seen work by using other frames from the same video, selected by the user because they contain additional views of the same background, e.g. from a pan, zoom, or different camera angle.
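For what it's worth, you can get surprisingly far with that approach using plain computer vision, with no generation at all. Here's a rough OpenCV sketch (the file names and padding amount are placeholders I made up) that estimates a homography from a wider "donor" frame in the same pan and warps its real pixels onto an extended canvas:

```python
import cv2
import numpy as np

ref = cv2.imread("frame_0100.png")    # frame being extended (placeholder path)
donor = cv2.imread("frame_0250.png")  # later frame from the same pan (placeholder)

# Match features between the two frames
orb = cv2.ORB_create(2000)
kp1, des1 = orb.detectAndCompute(ref, None)
kp2, des2 = orb.detectAndCompute(donor, None)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:200]

# Homography mapping donor coordinates into the reference frame's coordinates
src = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

# Warp the donor onto a wider canvas, then drop the original frame back on
# top; the side strips the donor covers are filled with real footage.
pad = 420  # extra width per side; illustrative
h, w = ref.shape[:2]
T = np.array([[1, 0, pad], [0, 1, 0], [0, 0, 1]], dtype=np.float64)
canvas = cv2.warpPerspective(donor, T @ H, (w + 2 * pad, h))
canvas[0:h, pad:pad + w] = ref
cv2.imwrite("frame_0100_extended.png", canvas)
```

This only helps when another frame genuinely shows the missing background, which is exactly why a disclaimer like the one suggested below would be needed.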
IIRC, Premiere and Resolve are both adding this function, so I wouldn't expect Topaz to add it to a product they're pitching as a plug-in for both.
If there's a disclaimer on the model that it only works with that kind of content, that would be fine.