Have to admit, it’s funny ChatGPT gave such an intelligent, coherent answer.
However, the distinction is also largely, pardon the pun, artificial, as ‘the process of repairing or improving the quality of an image or video that has been damaged or degraded over time’ is precisely what TVAI does. Except the damage isn’t caused by physical deterioration of the material, but degradation as the result of compression, low bitrate, re-encoding, etc. Hence why TVAI (Proteus) has sliders for ‘revert compression’, ‘recover detail’, etc. Those are all restorative measures. And why it takes almost a full day to complete for a full-length movie.
If I wanted a quick upscale, I’d just slap a Lanczos resize into my VapourSynth script, and be done with it. I use TVAI precisely because it’s so good at restoring low-quality video (this isn’t magic, and has its limits, but TVAI is pretty darn good at it).
TVAI is used on many different types of video, and for some of them certain AI models are not worth using.
When improving the AI models, we would like to see priority given to those that are effective across more video types.
Finished productions such as TV programs and DVDs:
- Stabilization and blur removal do not make much sense, because shake or blur may be deliberate artistic effects.
- Face enhancement should also not be used, as current technology changes the actor’s face.

CG and animation:
- Stabilization and blur removal will not be effective.
- These will not benefit from face enhancement either.

Home footage (e.g., reminiscence videos):
- Stabilization and blur removal would be effective.
- Face enhancement would not work, because it may change the faces of relatives.
On the other hand, noise reduction, upscaling, and fps enhancement can be used on virtually any video type, so I think they should have a high priority for AI model improvement.
I have noticed on my ryzen 5950x, rtx 3080 system, that running one export, I can get about 3.7fps. If I run 3 jobs simultaneously, I can get about 6fps total. Will the faster export speeds improve performance while running only 1 job?
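The throughput pattern above (one job at ~3.7 fps, three jobs totaling ~6 fps) is easy to reproduce with a small driver that launches exports in parallel. The sketch below uses harmless placeholder commands; the real invocation depends on whatever CLI or batch interface your TVAI version provides, so treat the job list as a stand-in.

```python
import subprocess
import sys
from concurrent.futures import ThreadPoolExecutor

def run_export(args):
    """Launch one export job and wait for it to finish."""
    return subprocess.run(args).returncode

# Placeholder jobs: each list stands in for one TVAI export invocation.
# Replace with the real CLI call your TVAI version provides.
jobs = [[sys.executable, "-c", "pass"] for _ in range(3)]

# Three concurrent jobs: each runs slower than a lone job, but the GPU sits
# idle less often, so aggregate throughput rises (the ~6 fps vs ~3.7 fps above).
with ThreadPoolExecutor(max_workers=3) as pool:
    results = list(pool.map(run_export, jobs))

print(results)  # [0, 0, 0] when all jobs succeed
```

Whether this helps depends on how much of the GPU a single job already saturates; the gap between 3.7 and 6 fps suggests a single export leaves headroom.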
@firstname.lastname@example.org @adam.mains I now have 2 TVs I can use as monitors, a Vizio and a Sony: M55Q7-H1 and 43X85K. There is also DaVinci Resolve for iPad for free, with Studio features for $94 inside the app.
The TVs only show JPEG now for photos. Both have MOV for QuickTime.
I am having several problems with the two stabilization functions. They work fine in the PREVIEW, but when I export, there are always ERRORS. The same thing happens with Convert to 60 fps: everything is fine in the Preview, but the Export produces errors.
All of the updates sound really good and I can certainly understand the issues with bringing out something new.
However, there is NEVER a good reason to ditch a proven and tested USER INTERFACE. I am having major problems, and I am sure I am not alone, doing some simple video processing with the extremely confusing and intolerably poor interface. There is too much crap all over the screen, and it is hard to tell whether you are about to mess with the preview or actually process what you want.
Even more inexcusable is the TRIM feature. This is just about useless, and forces you to start over searching for the starting point you had already reached. There have been lots of complaints about this, but they appear to fall on deaf ears.
Please put back the prior BRACKET method for choosing the area to be processed. It worked, the new version is a waste of time and no rational person would change something that works properly and has had, as far as I know, no complaints.
It’s nice to hear that you want feedback, but it would be even better if you really paid attention to it.
There is never a good reason to make radical changes to the USER INTERFACE. Sure, you can add new features, but please don’t &*(E#@ with things that work.
Just imagine some idiot swapping the position of the accelerator and the brake in a car, or, not quite as bad, changing the location and direction of the turn signal. We learn to do things, we learn the location of things (especially important), and when things change it is not only frustrating but a huge burden to have to relearn something that you have already learned to use easily.
I cannot emphasize enough how this concept is overlooked by almost all software designers and programmers. It is a continual problem that when software is updated there are interface changes which annoy the hell out of most users, especially when these changes are totally unnecessary.
I am about to order a new computer, potentially a Ryzen 5700X or better based system, as a replacement for my long-in-use HP z800 workstation. I found that the latest builds of Video AI and Photo AI no longer support the older Xeon CPUs. I guess you won’t provide a separate build compatible with these any longer?
You write that you are about to implement smoother in-app preview. Just in case (I was involved in several video player developments in the past): make sure not to compute color processing per pixel, but to use 12-bit or better lookup tables instead. If you haven’t done this yet, you will be blown away by the inherent speed improvement. But I guess you guys know that already.
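For readers unfamiliar with the technique: the idea is to precompute the color transform once per possible input value instead of once per pixel. A minimal NumPy sketch, assuming a 12-bit input range and using a gamma transfer curve as the stand-in transform:

```python
import numpy as np

# Build a 12-bit (4096-entry) lookup table for a gamma transfer function.
# The expensive pow() runs once per table entry instead of once per pixel.
BITS = 12
SIZE = 1 << BITS  # 4096 entries
lut = ((np.arange(SIZE) / (SIZE - 1)) ** (1 / 2.2) * 255).astype(np.uint8)

def apply_gamma_lut(frame12):
    """Map 12-bit pixel values to 8-bit output via a single table lookup."""
    return lut[frame12]

# A full-HD frame of 12-bit values: ~2 million lookups, zero pow() calls.
frame = np.random.randint(0, SIZE, size=(1080, 1920), dtype=np.uint16)
out = apply_gamma_lut(frame)
```

The same trick extends to 3D LUTs for full color-space conversions; the table grows, but per-pixel cost stays at one (interpolated) lookup.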
Have you ever considered opening a path for using Video AI (and e.g. Photo AI as well) as plug-ins for host apps like BMD DaVinci Resolve or Adobe Premiere Pro?
That could streamline pipeline integration a bit, and those hosts offer so many other helpful tools that I find myself going back and forth constantly. Why not integrate with each other at some point?
There are SDKs for creating plugins for these hosts, though.
You wrote that V3 provides proprietary FFmpeg filters. Does that mean the actual AI processing is already inside an FFmpeg filter and can therefore be integrated into an FFmpeg-compatible pipeline? Is that what you are saying?
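If so, invoking it from a script would presumably look like an ordinary FFmpeg filter-graph call. The filter name and options below are assumptions, purely to illustrate the shape of such a pipeline; check `ffmpeg -filters` in the Topaz-bundled binary for the real names.

```python
# Hypothetical filter name and options -- verify against the bundled ffmpeg.
filtergraph = "tvai_up=model=prob-3:scale=2"

# Build the command without running it; the Topaz binary and its proprietary
# filters must exist on the machine for this to actually execute.
cmd = ["ffmpeg", "-i", "in.mp4", "-vf", filtergraph, "out.mp4"]
print(" ".join(cmd))
```

The appeal is that anything speaking FFmpeg's filter-graph syntax (scripts, batch tools, other pipelines) could then slot the AI stage in between decode and encode.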
What about predicting the best settings for a video conversion by analyzing its content?
I see you are taking first steps here, but I guess there is a long road ahead.
I am happy to do heavy bug hunting and reporting as soon as I get a machine that can run my tools again.
Well, that depends on the specific Xeon.
All HP z800 workstations only support Xeon CPUs without AVX support. But since mid/late 2022, Topaz has made AVX a requirement for several apps, like Video AI and Photo AI. If you look at the specs you’ll quickly find it.
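As a quick check: on Linux the CPU feature flags appear on the `flags` line of /proc/cpuinfo; on Windows, tools like CPU-Z report the same set. A small helper that parses such a flags line (the sample flag strings below are illustrative, not exhaustive):

```python
def has_avx(flags: str) -> bool:
    """Return True if an 'avx' flag appears in a space-separated CPU flag
    list, the format used by /proc/cpuinfo's 'flags' line on Linux."""
    return "avx" in flags.split()

# Westmere-era Xeon (HP z800 generation): SSE4.2 but no AVX.
westmere = "fpu vme sse sse2 ssse3 sse4_1 sse4_2 popcnt aes"
# Sandy Bridge or newer: AVX present.
modern = "fpu sse sse4_2 avx avx2 fma"

print(has_avx(westmere))  # False
print(has_avx(modern))    # True
```

Splitting into tokens matters: a plain substring search would wrongly match `avx` inside `avx2` even on a hypothetical flag list without plain AVX.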
I really liked my HP z800 workstations, but it seems they will soon need to be replaced. They are still solid and strong machines and run at least Win 10 fine. But that doesn’t help if the latest software requires CPU features introduced a decade ago, and my machines are simply a bit older than that.
Road to perdition. Time to say goodbye. Anyone around needing two HP z800?
On the other hand, the latest CPUs are so much more power efficient that the new machine will pay for itself through the power bill alone.