Not satisfied with TVAI

To be very clear, I am satisfied with TVAI.

This topic is to address some of the common complaints I have seen, almost weekly, for the past two years.
Complaint number one:
I have a computer, TVAI should run on it, but it doesn’t.
Come with me to the world of the past. It's the year nineteen ninety-six. Your computer cost you two thousand US dollars. It has (I'll be super generous) two gigabytes of storage on its hard drive. Images are stored in bitmap files. One frame from a DVD, a format not yet released to the public, takes over one megabyte of storage. Audio is stored in wave files. One minute of DVD-quality stereo audio takes over ten megabytes of storage.
Let us do some math. I'll stop spelling out numbers. Rounding images to 1MB per frame and audio to 10MB per minute: 24 frames a second for one minute = 1440MB, plus your audio = 1450MB total. You now have one minute of pure uncompressed video and audio on your 2GB hard drive, and you cannot fit another.
The point of this history lesson is that no matter what compression is used to make digital videos smaller and take up less storage, they must be uncompressed before any pixel manipulation can happen. It has been many years since 1996. Images come in bigger resolutions and can carry more colors per pixel; both make them larger than my example when uncompressed. Videos can have higher framerates. Audio can have more channels and higher sample rates. On computers today, any video you view has been compressed, and there are pieces of hardware in the computer made to decompress those videos just for viewing. Since no permanent pixel manipulation is needed, once an image has been rendered to the screen it is essentially gone from the computer. It is one thing to view a digital video and a completely different thing to modify the pixels of a video and save those modifications.
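To put rough numbers on this, here is a quick back-of-the-envelope sketch in Python. The 720×480 8-bit RGB frame and 48 kHz 16-bit stereo PCM audio are stand-ins I picked for the DVD-era formats above, not exact specs:

```python
def frame_bytes(width, height, bits_per_channel=8, channels=3):
    """Size of one uncompressed video frame, in bytes."""
    return width * height * channels * bits_per_channel // 8

def audio_bytes_per_minute(sample_rate=48000, channels=2, bits=16):
    """One minute of uncompressed PCM audio, in bytes."""
    return sample_rate * channels * (bits // 8) * 60

MB = 1024 * 1024

# A DVD-era frame: roughly 1 MB, as claimed above.
print(frame_bytes(720, 480) / MB)                         # ~0.99 MB

# One minute of 24 fps video plus stereo audio: most of a 2 GB drive.
minute = frame_bytes(720, 480) * 24 * 60 + audio_bytes_per_minute()
print(minute / MB)                                        # ~1435 MB

# A modern 4K frame at 10 bits per channel is far bigger.
print(frame_bytes(3840, 2160, bits_per_channel=10) / MB)  # ~29.7 MB per frame
```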

A.I., or more accurately machine learning, is the iterative process of creating hundreds of random logic bits and weeding them out based on desired outcomes. Keep the bits that seem closer to the target goal and delete the ones that are farther away. Randomly generate more logic bits to fill in for what just got deleted and repeat the process until you are happy with the result. The final trained set of logic bits can be quite large and difficult for modern computers to run. Hence large companies saying their new hardware now has dedicated AI 'cores' and the like. The reality is that there are several ways to implement an AI model, and those 'AI cores' are probably only good at computing one specific implementation. (TVAI has been around longer than dedicated AI cores. I would not be surprised if their models are not compatible.)
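That keep-the-closest, regenerate-the-rest loop is closer to random search than to how most modern models are actually trained (they mostly use gradient descent), but the idea can be sketched in a few lines of Python. This is a toy illustration only, not how TVAI's models are built:

```python
import random

def score(candidate, target):
    """Lower is better: how far this 'logic bit' is from the desired outcome."""
    return abs(candidate - target)

def train(target, population=100, keep=10, rounds=50):
    # Start with a pile of random candidates ("random logic bits").
    pool = [random.uniform(-100, 100) for _ in range(population)]
    for _ in range(rounds):
        # Keep the candidates closest to the goal, delete the rest.
        pool.sort(key=lambda c: score(c, target))
        survivors = pool[:keep]
        # Refill the pool with random variations of the survivors and repeat.
        pool = survivors + [random.choice(survivors) + random.gauss(0, 1)
                            for _ in range(population - keep)]
    return min(pool, key=lambda c: score(c, target))

print(train(target=42.0))   # ends up very close to 42
```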
TL;DR:
TVAI must be run on a powerful computer, and claims of 'AI computation power' are specific to the products of the brands that made the claim.

Complaint number two:
I shouldn’t have to pay to beta test TVAI.
Short answer: you are not beta testing TVAI. You are just another victim of the now-needless complexities that arose from how old storage media and broadcast technologies had to be digitized. Oh, and everything I said before about computers and digital videos still applies, except now imagine all the different combinations of computer hardware that exist.
Back to the history lesson. Everyone wanted to solve the giant storage problem of digital video. All the money would go to whoever did it first, and if not first, then best. I don't know any of the technical details, but I do know the results: there are so many digital video storage formats that you cannot count them on your fingers. Most of them are good at most of those aforementioned complexities, but none of them are perfect at all of them. It is very possible to spend hundreds of [insert currency here] on a video editing program that does almost everything you need, yet you cannot open certain types of video in it. In fact, you can only create a few select video formats with it. The reason for this is to reduce the number of incompatibilities. (Or they created the video format and think it's the best, and forcing people to use it is their way of making all their hard work worth it.)
Along comes a new video technology; let's say high frame rates as an example. You want to get in on the action. The program you have been using does not support the new technology, and its video format cannot handle it anyway. You can either wait to see if they ever support it, or spend loads of money again to switch to a video editor that can. What you don't know is that the new format is incompatible with some feature of the old one, like variable frame rate, something you've used on all of your videos to save a bit of space. The more I describe this hypothetical situation, the worse and more complicated it gets. I'm getting more annoyed the more I think about it.
Can one program account for all of the features that have been integrated into digital video formats over the years? Remember, some of those features are probably proprietary. FFMPEG has been worked on and refined for years, and it has probably had the most work put into crossing compatibility barriers between digital video formats. Even so, not everything is accounted for.
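As a small illustration of how much variety there is, here is one way to ask ffprobe (part of the FFMPEG project) what is actually inside a file. The file name is made up, and comparing r_frame_rate with avg_frame_rate is just one rough hint that a clip might use variable frame rate:

```python
import json
import subprocess

def probe(path):
    """Ask ffprobe what is actually inside a video file."""
    result = subprocess.run(
        ["ffprobe", "-v", "error", "-print_format", "json",
         "-show_format", "-show_streams", path],
        capture_output=True, text=True, check=True)
    return json.loads(result.stdout)

info = probe("some_old_clip.avi")   # hypothetical file name
for stream in info["streams"]:
    if stream.get("codec_type") == "video":
        # When r_frame_rate and avg_frame_rate disagree, the clip is often
        # variable frame rate -- exactly the kind of detail that trips tools up.
        print(stream["codec_name"], stream["width"], stream["height"],
              stream["r_frame_rate"], stream.get("avg_frame_rate"))
```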
Should anyone blame Topaz for the history of digital video formats?

TL;DR: Topaz did the best they could to be compatible with all the nonsense video formats that exist. It's not their fault. I think we're all glad they didn't try to reinvent digital video.
It is unrealistic for anyone to expect no issues out of the mess that is digital video.

Disclaimer: This is not meant to be a source of correct history and facts. I did not check any of my sources nor my math. This is just me trying to compile the things I've learned over the years into one place. Would I have ever written all of this if TVAI did not work for me? No. No, I would not have. Is all of this biased by the perspective of some extreme computer nerd? Yes. Yes, it is.


Concerning my storage vault, I have problems with h264 because the files get really big at good quality. I want h265, but then I can't put them on a USB stick and show family and friends old clips. It's a lottery whether their TV or Windows machine can play them, so I'd better store all the high-res masters in h264.

And the upscaled ones I have in 10-bit h265 take a lot of space.

Why are you using 10-bit encoding when your source is only 8-bit?

I think those were from Topaz 16-bit TIFFs → DaVinci Resolve (color grading), lossless 10-bit ProRes out → Handbrake h265 10-bit. To be honest, the 10-bit looked a bit better when quickly swapping between the 8-bit and 10-bit h265 versions.
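For what it's worth, the last step of a pipeline like that can also be done with plain ffmpeg. This is only a sketch: the file names are placeholders, and the CRF/preset values are one reasonable starting point, not the actual Handbrake settings used above:

```python
import subprocess

# Encode a 10-bit ProRes master to 10-bit HEVC (h265).
# File names are placeholders; CRF and preset are just a sensible default.
subprocess.run([
    "ffmpeg", "-i", "graded_master_prores.mov",
    "-c:v", "libx265",            # h265 encoder
    "-pix_fmt", "yuv420p10le",    # keep 10 bits per channel
    "-crf", "18",                 # quality-targeted rate control
    "-preset", "slow",
    "-c:a", "copy",               # pass the audio through untouched
    "hevc_10bit.mkv",
], check=True)
```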

TVAI might be able to add those extra colors. I don’t have anything that can view them, so I make sure the final output is in 8 bit.
Hehe, that kind of leads me to another rant.
My TV screen is pretty big, I think. About 55 inches, as is the popular way to report screen sizes in the US. I sit 13 feet back from it and cannot pick out individual pixels. It's a 1920 by 1080 screen resolution. If I got a 4K screen, the seating can't move, so the screen would have to be twice as big for me to even notice the resolution upgrade. At that size, I would need to move my eyes around a lot to see all parts of the image.
It's the same story with my computer: 24-inch screen, 1920 by 1080, about two and a half feet away. I would need double the size in 4K to appreciate the increase in resolution, and at that distance I would need to turn my head around a lot to see all corners of the screen.
There was a research paper put out by Dolby regarding their Dolby Vision setups. I couldn't find it when I halfheartedly looked for it yesterday. If I remember it correctly, it backs up my experience of screen size versus resolution versus viewing distance.
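The geometry behind that is easy to check. Here is a rough sketch, using the common (and debatable) rule of thumb that 20/20 vision resolves about one arcminute, with the numbers from the two setups above:

```python
import math

def pixel_arcmin(diagonal_in, horizontal_px, distance_in, aspect=(16, 9)):
    """Angular size of one pixel, in arcminutes, on a flat 16:9 screen."""
    w, h = aspect
    width_in = diagonal_in * w / math.hypot(w, h)   # screen width from diagonal
    pitch_in = width_in / horizontal_px             # width of a single pixel
    return math.degrees(math.atan2(pitch_in, distance_in)) * 60

# 55" 1080p TV viewed from 13 feet: ~0.55 arcmin per pixel, already below
# the ~1 arcmin rule of thumb, so individual pixels are invisible.
print(pixel_arcmin(55, 1920, 13 * 12))

# 24" 1080p monitor viewed from 2.5 feet: ~1.25 arcmin per pixel,
# right around the threshold.
print(pixel_arcmin(24, 1920, 2.5 * 12))
```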

From all that, I know I won’t benefit from any 4K screen. I want to see the measurements of those who claim they do benefit from it. Maybe I just need to see it for myself to be converted. My 4K laptop screen is idiotic. It might be the reason I started searching and found the Dolby paper in the first place.

It is true that with 4K you can go to a larger screen size and still get good quality, but it's not limited to that use case.
You also get the benefit of a 4K TV with 4K content even at the same screen size.
The picture is much sharper and higher resolution; you notice it immediately.

I switched from a 50" 1080p TV to a 49" 4K screen, and 4K content is much sharper and higher resolution to the eye. You do see the difference, and quite a noticeable one too… and my eyes are still locked on the same spot, since I didn't increase the TV size or the distance from the screen. The distance from the TV is around 12 ft.

Click bait title! You got me! lol


I guess I understand the premise of this thread, but I don't really agree with all the points. Topaz gets a huge pass from me since they build a Linux beta that in 50% of scenarios is completely usable. 30% I'll chalk up to a lack of user testing, and the other 20% is broken in Windows too.

Topaz charges $300 for TVAI now, and $150 annually for updates. My problem with the whole thing is that it's never something that's "fully functional". Every update breaks something else, and then you're left waiting for the next version, hoping it fixes the bugs. It's a cycle of adding features and breaking old ones, so you're always gonna renew that license, cuz 3.3.x worked well but didn't have the newer model, and 3.5.x has the models but xxx is broken, so… shit, my license ran out, but maybe 3.6.x will fix xxx… It's been going on since I started using it with V2. And this is all on Windows.

This one…

Complaint number two:
I shouldn’t have to pay to beta test TVAI.

Is a double-edged sword. Topaz should be doing a lot more in-house beta testing themselves. Not sure what kind of team they have, but they ship some really obvious bugs. I was just in a position where my license ran out and my GPUs died due to a water leak. It was said that Intel ARCs work with TVAI, but no one really posts about them, so I figured I'd try them. They didn't really work on the last version I owned (Windows or Linux), and I couldn't really tell if they worked with the newer beta, since I was restricted to the demo and wasn't going to spend hours running videos with a giant watermark. So I bought a new license, and ARC works like crap in TVAI. Topaz doesn't test Intel ARCs, so it's not really their fault, but they also say that they work… they did not work for crap in 3.5.1 or 3.5.2, and I returned the two ARC A770s I purchased and just picked up a single 4070 for now. It's not Topaz's fault the A770s don't work, but they should not have said they do, because they are not testing them. The best I could do was get one to work for a few minutes… then failed videos and all sorts of crap. I know Intel is still working out drivers, but outside of Topaz the A770s work fine. TVAI's models do not work right on A770s, and Topaz won't tell anyone which models are actually available at this moment.

So yeah, anyone willing to test that sort of thing and report back ought to get free license extensions, since Topaz won't be hiring beta testers for $150/yr. I can certainly see how people would try to get free rides on beta testing, but I think if you are testing something like Intel ARC or Linux and reporting back, then you should probably be granted freebies. For one, you likely won't be using a reasonably functional program, and for two, they're getting really cheap beta testing. Maybe I would have kept one of the ARCs if it were offset by the cost of a license extension. All I know is that it was a waste for me to use when it didn't produce any results, and the only mention of it from the Topaz group is "not all models for ARC are available yet".

That said, since 90% of my use case works well in the Linux beta, I'm glad to report back about it as a "thanks" for making it available. I hate Windows, and the only box I kept Windows on was the box running TVAI. No more Windows in my house, so I can't complain.

I am not an official beta tester. I got a beta version once while actively working with support to fix an issue, and I don't remember any watermark. Do you have to have a current license to get the beta builds?
Your profile has the Beta Tester tag. I presume that means you download all the betas and report back all the issues you have with them.

I agree that it does feel like they don't do enough alpha testing. (Beta is always done by those not in the company.) I can also understand not having enough hardware configurations to make the alpha testing cover much. That's just for hardware issues. They do keep doing things like removing abilities from the user interface in ways that seem accidental.

Yeah, okay. If I had to use the user interface to process a video with TVAI, I would probably hate it. And that's what VEAI 2 was for me. I really wanted to use it, but it couldn't handle any movie correctly. Since the command line interface came out, I've been consistently able to process the videos I want with the models I want.
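For context, the "command line interface" here is the custom ffmpeg build that ships with TVAI, and the sketch below only shows the general shape of a call. The install path, the tvai_up filter name, the model name, and the option syntax are all assumptions from memory, so check the bundled build (for example with `ffmpeg -h filter=tvai_up`) before copying anything:

```python
import subprocess

# Assumption-heavy sketch, not official Topaz documentation: the executable
# path, the "tvai_up" filter, and its model/scale options may differ between
# TVAI versions and platforms.
TVAI_FFMPEG = r"C:\Program Files\Topaz Labs LLC\Topaz Video AI\ffmpeg.exe"

subprocess.run([
    TVAI_FFMPEG, "-i", "input.mp4",
    "-vf", "tvai_up=model=prob-3:scale=2",   # assumed filter name and syntax
    "-c:v", "libx264", "-crf", "18",
    "-c:a", "copy",
    "upscaled.mp4",
], check=True)
```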

I requested to be a beta tester back when they first mentioned a Linux beta. I never tried it until recently because I didn't even realize they'd added the UI to it. Just cuz I use Linux doesn't mean I want to do everything in the CLI, lol. Beta testing is just whoever wants to download a beta and report back if they feel like it. There's no obligation to do either, and if it weren't for Linux I wouldn't bother. I'm a bit surprised how few people use the Linux version, considering the number of people who replied in a Linux thread asking for it. I doubt it's a very hard thing to maintain the Linux version anyway; the UI is built with Qt afaict, and that's cross-platform, as are ffmpeg and most of the other external applications.

I can also understand not having enough hardware configurations to make the alpha testing cover much.

And you'd figure that if there's a GPU they don't want to buy or spend time testing, it would be in their best interest to compensate someone who has it and is willing to test it with a license extension. The cost of the GPU alone vs. a license extension…

That's just for hardware issues. They do keep doing things like removing abilities from the user interface in ways that seem accidental.

Right, and most of the things I mentioned could be resolved by them offering bugfix updates to expired licenses. I understand bugs happen, OK… but nobody purchased the bugs. They purchased software with XYZ features and expected zero bugs. So realistically they should revamp their license model for feature updates and keep bugfixes separate. I understand that it is probably much harder to deal with versioning that way, but really, you should not have to extend your license to get bugs fixed.

Since the command line interface came out, I've been consistently able to process the videos I want with the models I want.

I can't even imagine using just the CLI for TVAI. I need to tweak all my stuff, and I need to see the previews. No idea how to manage that with the CLI. That said, I have no qualms with the UI in general. My issue is the problems with the models and the GPUs that are supposed to work but don't.

By the correct definition, as part of the software release life cycle, beta testing is done by an audience that is not part of product development. They could also be from the same company. :slight_smile: Just FYI…

I knew that one was going to come back to me. It doesn’t help that we never have any beta testing done at my job.


I think the term "beta" is a moot point. Topaz has posted "alphas" here, and I've tested one or two alphas that were not posted.

Around here, I just take the terms to mean the state of readiness.

Alpha is used to test new features and functionality, while beta is used to test for bugs and other issues. That is the general rule of thumb.

But with Topaz it is slightly different, since they release a build every week. On builds where they introduce new features or functionality, they don't have much alpha time, and sometimes it goes directly to public beta with the beta tester members.