The VFR issue in v5.5.0 and 5.5.1 makes the program unusable. Please work hard to fix it.
I even reverted to 5.3.6. That version works without issues.
BTW… thanks for hosting and continuously developing Video AI! From the start until now:
the difference is worlds apart!
Keep going!
Remember, commenting on an issue does not equal a commitment to correcting it (nor does marking it FIXED or RESOLVED, which is often only a band-aid that merely circumvents [hides, etc.] the issue).
Why are you whining like a grandmother? Do your upscaling in 5.5.0 (it runs twice as fast as previous versions) and your interpolation in 5.3.6.
Well, let’s be honest: there is no AI here, only algorithms. It cannot distinguish a person in the background from one in the foreground (as a simple example), and there is no self-learning; the program applies whatever the developers told it to apply - that is just an algorithm.
Here is how it should work: the user indicates to the program that he did not like some parts of the processed video, and the artificial intelligence reprocesses the specified interval ITSELF in several variants and offers the best option…
All this “artificial intelligence” is a big soap bubble; such technologies do not exist and will not exist in the next 25 years. The tales about ChatGPT - that is an algorithm too.
I apologize for my English.
When I was growing up in my home country we had an expression:
“If one person tells you that you are drunk, you may disregard the remark/brush it off. If 2 other people tell you the same thing, you better go and take a nap to clear it off.”
That’s kind of a rough translation but you get the gist.
EDIT: The fact that the other people don’t get as drunk as I do is simply alcohol tolerance. But legally, if I get pulled over while driving, both of us will still go to jail.
Obviously there is an issue, since multiple users with multiple machine configurations and multiple input/output settings are reporting it. The output settings and/or container shouldn’t matter if the codec integration is done properly.
Let’s say I decide I have to wear purple lipstick (even though I am a man): I shouldn’t have to apply red and blue in multiple passes to get the right look. In this case, if I want to process with the H264 or H265 codec, I should be able to do that directly, not export FFV1 in a MOV container (which I didn’t even try, because I don’t want to re-encode the output in a separate step).
We do support H264 and H265 as export codecs. These codecs will use NVENC/AMF/VideoToolbox depending on your OS and GPU.
If you specifically want to use the CPU-based x264/x265 codecs, we do not have a license to distribute them commercially, so you would need to export an intermediate codec out of Video AI and re-encode it yourself.
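For anyone taking the intermediate-codec route described above, here is a minimal sketch of the re-encoding step with FFmpeg. It assumes you exported a lossless intermediate (e.g. FFV1 or ProRes in a MOV container) from Video AI, that your `ffmpeg` build includes libx264, and that the filenames are placeholders:

```shell
# Re-encode a lossless intermediate to H.264 using the CPU-based libx264
# encoder; -crf 18 is visually near-lossless, and the audio stream is
# copied through untouched.
ffmpeg -i intermediate.mov -c:v libx264 -crf 18 -preset slow -c:a copy output.mp4
```

Swap `libx264` for `libx265` (and `.mp4` stays valid) if you want HEVC output instead; the CRF scale differs slightly between the two encoders, so you may need to adjust it.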