Models. We want the models improved. What about Artemis, Proteus, or Iris, which haven't evolved in months or years? Or have they really reached their ceiling, with nothing more to be gained from them? Tell us what the reality is.
All these configuration improvements for the Pro version, or as complements to the models... most of us don't even bother to update because of these trivial additions. Don't lose sight of the product you have in your hands: you built your reputation on creating models that improve video, especially low-resolution video. That should remain the goal, because that's where the challenge lies. Enough configurations and plugins for other products. Everything is supposedly built on hours of training, so we want to know what you are doing with the models' training and why they have stopped evolving.
As soon as you push Proteus above the estimated values, it starts producing artifacts. Yet it is only under those conditions that it gives satisfactory overall results, and even then with significant aberrations on teeth, water surfaces, rain, fog, buildings, cars...
As an NVIDIA 3000-series user, I was wondering where the AV1 option is that some people said they could see. I guess that explains it: you only expose it to users whose hardware has an AV1 encoder?
Why not expose a software encoder like SVT-AV1? That would be a huge advantage for me, since I wouldn't need to export videos to FFV1 just to encode them to AV1 separately...
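For reference, the two-step workaround this poster describes (lossless FFV1 export, then a separate software AV1 pass) boils down to one extra ffmpeg invocation. The sketch below builds that command in Python; the file names, CRF, and preset values are illustrative assumptions, and it presumes an ffmpeg build that includes libsvtav1:

```python
# Hypothetical sketch of the second step: encode a lossless FFV1 intermediate
# to AV1 with the SVT-AV1 software encoder (no NVENC hardware required).
# File names and quality settings are placeholder assumptions.
import shlex

def av1_encode_cmd(ffv1_src: str, av1_dst: str, crf: int = 30, preset: int = 6) -> list[str]:
    return [
        "ffmpeg", "-i", ffv1_src,
        "-c:v", "libsvtav1",     # software AV1 encoder from the SVT-AV1 project
        "-crf", str(crf),        # quality target (lower = higher quality)
        "-preset", str(preset),  # speed/efficiency trade-off (0 = slowest)
        "-c:a", "copy",          # leave the audio stream untouched
        av1_dst,
    ]

print(shlex.join(av1_encode_cmd("export_ffv1.mkv", "final_av1.mkv")))
```

The downside the poster is pointing at is exactly this: the FFV1 intermediate can be enormous, so a built-in SVT-AV1 option would remove both the extra step and the temporary disk usage.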
Why does the live preview no longer appear in the new versions when we click to preview 2, 5, or 10 seconds of a configuration? In previous versions the preview would start within a few seconds; now the whole clip has to finish processing before the preview appears. Why can't we see the preview immediately?
I am still very upset that you took away a feature I already paid for and relied on (multi-GPU rendering) and are now trying to make me pay 3x more to get it back.
Neatvideo is a quarter of the price at $79, lets me use both my GPUs to halve rendering time, supports all the color spaces I use, AND they actually reply to my support emails. It also handles path-tracing noise much better.
Leave local multi-GPU rendering in the Standard version and put networked GPU rendering in the Pro version if you really want to add a professional feature without alienating your current users.
I completely share your opinion; for my part, I'm investing in an RTX 4090.
Having decided not to give Topaz any more money, I resign myself to ultra-slow rendering, but they don't care.
I am very surprised that the only communication is the release post and one reply to each thread, after which they go into ghost mode, maybe because of our shared frustration and anger.
In fact, I think 99% of users are in this situation and there are simply no 'pro' users...
Even ChatGPT is aware of these problems, stability issues, and that insane fork.
There is no pro base for this product, because it’s simply not a professional-grade product. No studio is going to tolerate a product that is continually broken. And slapping a $1k price tag on it won’t turn it into a professional product. Version 2.x was really good, but since then it’s been like watching a blind guy in a doorknob factory. Half-baked code and poorly trained models.
For quite a few versions now, exports I do via the command line (the only way I really do them) end up with broken time data when using ProRes LT or ProRes Std: the duration shows up as 00:00:00. Repairing the file with ffmpeg fixes the time, but it's very annoying to do that for every export, and it takes double the disk space to end up with a usable file.
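The poster doesn't show their exact repair command, but the usual form of this kind of fix is a stream-copy remux, which rewrites the container (and its duration/timing metadata) without re-encoding. A minimal sketch, with placeholder file names, built in Python:

```python
# Hypothetical sketch of the ffmpeg repair step described above: remux the
# broken ProRes export with stream copy so the container metadata is
# rebuilt. File names are placeholder assumptions.
import shlex

def remux_cmd(broken: str, fixed: str) -> list[str]:
    return [
        "ffmpeg", "-i", broken,
        "-c", "copy",  # copy all streams: fast, lossless, rewrites the container
        fixed,
    ]

print(shlex.join(remux_cmd("export_prores.mov", "export_fixed.mov")))
```

Because `-c copy` never touches the encoded frames, the repair is quick, but it still produces a second full-size file, which is the doubled disk usage the poster complains about.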