Happy to see the roadmap, and happy to know what the future holds for what should really be labeled beta software. I can’t use any version of 3.x.x at all, because the Grain implementation is utterly useless in its current form. Who thought that adding increasing amounts of chroma noise, with no ability to change grain size or saturation, was a good idea? If the feature “needs improvement”—which I’ve seen discussed elsewhere—why not just disable it, or at least label it “beta” as Adobe does with its products?
Thank God that 2.6 still works. It’s an absolutely indispensable product. I’m hoping that version 3 gets close quickly. But for right now, I’m REALLY P----D OFF that I paid for a 12-month upgrade cycle for utterly useless beta crap. How about you extend my subscription expiration to a year after you move this junk to a usable version?
Have to agree with kevinbarre here. Paying for a 12-month upgrade on a product that currently has lower quality and fewer features than the previous version makes customers feel very bad about the product, and about Topaz Labs as a company, even with the promise of improvements after the rewrite. Do the right thing and show support for your customers as they show support for you with their upgrade dollars.
Face restoration would be huge! Right now I’m piping image sequences of my worst videos through Photo AI, and the issue with that is that not every frame gets processed the same way (for example, circles around the eyes sometimes cause a couple of frames to gain glasses, or noise can cause one or two frames to significantly age the face). Having temporal awareness would be great (plus Video AI is significantly faster).
TVAI obviously does both. Put differently, the whole idea of upscaling intelligently is restoration; otherwise there would be no point. For example, a heavily anti-aliased curve in SD can be restored to a near-perfectly smooth one when upscaled to 4K, simply because there’s more pixel room, and the AI does this extra cleverly, like DLSS for games. And small, blurry faces in particular are subject to restoration attempts (at varying degrees of success, of course).
"Upscaling and restoration are two different processes that are often used to improve the quality of images or video.
Upscaling refers to the process of increasing the size or resolution of an image or video, often by using algorithms to add additional pixels or data. This can be used to make a low-resolution image or video appear clearer or more detailed when viewed on a larger display or at a higher resolution.
Restoration, on the other hand, refers to the process of repairing or improving the quality of an image or video that has been damaged or degraded over time. This can involve a variety of techniques, such as removing dirt, scratches, or other blemishes, correcting color balance or contrast, and removing noise or other artifacts. Restoration can be used to repair or improve the quality of old photographs, videos, or other media that has been damaged or degraded over time.
In general, upscaling is used to make an image or video appear clearer or more detailed, while restoration is used to repair or improve the overall quality of an image or video that has been damaged or degraded."
— OpenAI / ChatGPT
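To make the quoted distinction concrete, here is a minimal, hypothetical sketch of the simplest form of upscaling (nearest-neighbour, in plain Python). Real upscalers like TVAI use learned models, not this; the point of the sketch is that naive upscaling invents no new detail at all, which is exactly the gap restoration is meant to fill:

```python
def upscale_nearest(pixels, factor):
    """Nearest-neighbour upscale of a 2-D grid of pixel values.

    Each source pixel is simply repeated factor x factor times.
    No new detail is created, which is why plain upscaling is
    not restoration.
    """
    out = []
    for row in pixels:
        # Stretch the row horizontally, then repeat it vertically.
        stretched = [p for p in row for _ in range(factor)]
        for _ in range(factor):
            out.append(list(stretched))
    return out

# A 2x2 image becomes 4x4; every pixel is just duplicated.
small = [[10, 20],
         [30, 40]]
big = upscale_nearest(small, 2)
# big == [[10, 10, 20, 20],
#         [10, 10, 20, 20],
#         [30, 30, 40, 40],
#         [30, 30, 40, 40]]
```

(Function name and structure are mine, purely for illustration.)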
P.S.: actually, Video AI does not do both, though a lot of people think it does. Maybe in the future it will, but for now it doesn’t; you need several tools before and after to get a restoration job done correctly.
It’s the same with Photo AI. To repair face problems, I use another AI tool after the Photo AI / Gigapixel treatment (sometimes doing it in the reverse order gives a nice result, even better). It’s the same here, except that such tools aren’t ready yet for video.
Removing scratches etc. is likewise done externally by other software. In the future Topaz will certainly offer a bunch of nice tools for a full restoration, but for now it’s mainly focused on upscaling. That’s why so many people complain about ugly face results in Topaz ;)… you currently can’t get a good result going from a 320x240 video to 4K. That’s the same issue as Miss Gimenez… but don’t forget her example! A bad result can make you very popular (the answer above is from an AI).
Agreed. As I have stated before, video may appear to have been restored when upscaling, but that is not the intent of this software. Yes, it does seem to filter unwanted noise to make upscaling clearer, but it is not advertised as software that will restore your video; it is software to upscale using simple AI and various well-known, tried-and-true algorithms.
Nowhere have I seen that facial restoration would be a future feature of this software. I can’t even imagine the processing power required to apply such a technique to 30 fps video.
Have to admit, it’s funny ChatGPT gave such an intelligent, coherent answer.
However, the distinction is also largely, pardon the pun, artificial, as ‘the process of repairing or improving the quality of an image or video that has been damaged or degraded over time’ is precisely what TVAI does. Except the damage isn’t caused by physical deterioration of the material, but degradation as the result of compression, low bitrate, re-encoding, etc. Hence why TVAI (Proteus) has sliders for ‘revert compression’, ‘recover detail’, etc. Those are all restorative measures. And why it takes almost a full day to complete for a full-length movie.
If I wanted a quick upscale, I’d just slap a Lanczos resize into my VapourSynth script and be done with it. I use TVAI precisely because it’s so good at restoring low-quality video (this isn’t magic, and has its limits, but TVAI is pretty darn good at it).
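For context, Lanczos is a classical, non-AI resampling filter: a weighted windowed-sinc average of nearby samples. Below is a pure-Python 1-D sketch of the kernel and resampler, purely illustrative; VapourSynth’s `resize.Lanczos` is a tuned native implementation, not this code:

```python
import math

def lanczos_kernel(x, a=3):
    """Lanczos window: sinc(x) * sinc(x/a) for |x| < a, else 0."""
    if x == 0:
        return 1.0
    if abs(x) >= a:
        return 0.0
    px = math.pi * x
    return a * math.sin(px) * math.sin(px / a) / (px * px)

def lanczos_resample(samples, new_len, a=3):
    """Resample a 1-D signal to new_len points with a Lanczos filter.

    Edge samples are clamped; weights are normalised so a constant
    signal stays constant. A 2-D image resize applies this same
    filter separably, once per axis.
    """
    old_len = len(samples)
    scale = old_len / new_len
    out = []
    for i in range(new_len):
        # Centre of output sample i in input coordinates.
        x = (i + 0.5) * scale - 0.5
        lo = math.floor(x) - a + 1
        hi = math.floor(x) + a
        acc = wsum = 0.0
        for j in range(lo, hi + 1):
            w = lanczos_kernel(x - j, a)
            acc += w * samples[min(max(j, 0), old_len - 1)]
            wsum += w
        out.append(acc / wsum)
    return out
```

Notice there is no model and no learned prior anywhere in this: the filter can only interpolate what is already in the signal, which is why a plain Lanczos resize is fast but does nothing restorative.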
There are many different types of video processed by TVAI, and some AI models are not worth using on certain types. When improving the AI models, we would like to see priority given to those that are effective for more video types. For example:
- Completed content such as TV programs and DVDs:
  - Stabilization and blur removal do not make much sense, because the shake or blur may be intentional video-art effects.
  - Face enhancement should also not be used, as current technology changes the actor’s face.
- CG and animation:
  - Stabilization and blur removal will not be effective.
  - Will not benefit from face enhancement.
- Reminiscence (home) videos:
  - Stabilization and blur removal would be effective.
  - Face enhancement would not work, because it may change the faces of relatives.
On the other hand, noise reduction, upscaling, and fps enhancement can be applied to almost any video type, so I think improving those AI models should have high priority.
I have noticed on my Ryzen 5950X / RTX 3080 system that running one export, I get about 3.7 fps. If I run three jobs simultaneously, I get about 6 fps total. Will the faster export speeds improve performance when running only one job?
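For what it’s worth, the back-of-the-envelope math on those quoted figures (my framing and variable names, not Topaz’s) suggests a single job leaves a lot of throughput on the table:

```python
# Figures quoted above; treat them as rough observations, not benchmarks.
single_job_fps = 3.7     # one export running alone
three_jobs_fps = 6.0     # combined throughput of three simultaneous exports
jobs = 3

per_job_fps = three_jobs_fps / jobs        # fps each concurrent job gets
speedup = three_jobs_fps / single_job_fps  # total throughput gain
efficiency = speedup / jobs                # 1.0 would be perfect scaling

print(f"{per_job_fps:.2f} fps/job, {speedup:.2f}x throughput, "
      f"{efficiency:.0%} parallel efficiency")
# → 2.00 fps/job, 1.62x throughput, 54% parallel efficiency
```

The sub-linear but real gain from stacking jobs could indicate per-job overhead (decode, model setup, CPU-side work) rather than a raw GPU limit, which is exactly why a faster single-export path is the interesting question.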
@adam.mains I now have two TVs I can use as monitors: a Vizio M55Q7-H1 and a Sony 43X85K. There is also the free DaVinci Resolve for iPad, with Studio features available for 94 inside the app.
The TVs now only show JPEG for photos; both have MOV for QuickTime.