RTX 3090 worth it?

Any word on when it's coming out? It's been a while since the last update, and my subscription ends in March.

Thanks

Are you referring to V2.6.4 or V3? :thinking:
v3.0.0.7 was just released a few hours ago.

In the last few days of September, they said "next month" for Version 3.
I've only seen version 3.0.0.6, and it's not one month away from being ready for the public. Maybe if I saw 3.0.0.8 it would put my mind at ease.

James,

I think I recall they were implying that VEAI 3.0 would be ready sometime in October. IMO, they were being overly optimistic when they said that. The new 3.x version is a big departure from the current VEAI 2.x, and the number of problems that need resolving in any new body of software can be orders of magnitude greater than in simply republishing the older version with a few new features.

In addition to bugs, VEAI 3.0 still has an Achilles' heel in input preparation prior to enhancement. It still needs work. Presently, I'm running 3.0.0.9.b and it's still not ready for prime time.

As I’m somewhat skeptical about a release in October.

From Linus Tech Tips: "Please Buy Intel GPUs - Arc A750 & A770 Review" (YouTube)

Here are some more results :grinning:

Seems like the GPU numbers in Photo AI were off. :sweat_smile:

Here are some Gigapixel & DeNoise benchmarks with the A770.

https://techgage.com/article/intel-arc-a750-a770-workstation-review/2/

Please take note: since VEAI leverages these devices through their drivers, the timings will change significantly as drivers are updated.

Yeah, if they did that, I think it would hurt them a lot. They don't want to be the next company that gets a bad name from releasing before all the fixes are in.

And (as I mentioned elsewhere) I'm concerned that they need to add more functionality in terms of rendering settings to the GUI. If they don't, the GUI will become secondary to the CLI.

I can suggest an approach for adding additional output configuration features to the GUI while also keeping it simple and intuitive: something that would appeal to both GUI and CLI users.

Recently I got an RTX 3090 (Colorful Vulcan OC-V 24GB LHR), and compared with my old 1070 Ti on the same project, processing speed nearly doubled. In Proteus FT mode it was 0.52s per frame before; now it's a 0.28s median over a 10,000-frame range.
And the bad news: nvidia-smi shows the GPU in the P3 state while processing (P0 is the fastest), and the card itself isn't even hot. After 2 hours of VEAI processing the temperature was 52C, and the fans only spun up maybe 3 or 4 times for several seconds each.
I tried 5 different NVIDIA drivers (the latest, one 5xx, two 4xx, and even 381) with the same result, and I have the feeling that VEAI just can't use the full power of the GPU properly.
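
If anyone wants to reproduce that check, here's a minimal polling sketch (assuming the nvidia-ml-py package, which wraps NVML, the same interface nvidia-smi reads from; the package choice and sampling loop are my own, not anything VEAI ships). If the P-state stays above P0 while utilization is low, the card really isn't being pushed:

```python
# Poll performance state, utilization, and temperature once a second
# while VEAI is processing. Requires: pip install nvidia-ml-py
import time
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU in the system

try:
    for _ in range(30):  # sample for ~30 seconds
        pstate = pynvml.nvmlDeviceGetPerformanceState(handle)  # 0 = P0 (max clocks)
        util = pynvml.nvmlDeviceGetUtilizationRates(handle)
        temp = pynvml.nvmlDeviceGetTemperature(handle, pynvml.NVML_TEMPERATURE_GPU)
        print(f"P{pstate}  GPU {util.gpu}%  MEM {util.memory}%  {temp}C")
        time.sleep(1)
finally:
    pynvml.nvmlShutdown()
```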

VEAI 2.6.4? Yeah. Version 3.0.0.7 can use more of the GPU on some of the models. For example, at scale 100%, Artemis Medium Quality now uses 95% of my RTX 3080 Ti.

Yep, 2.6.4…
95% looks interesting, but unfortunately I can't imagine working with video without AviSynth, and VEAI 3 has no .avs support. So I prefer to wait :slight_smile:

4090 has ECC.

He did write 3090 Ti too???

https://techgage.com/article/nvidia-geforce-rtx-4090-the-new-rendering-champion/

Seems like the 4090 is 2x the 3090 at almost the same power consumption.

It's more than a month until the RX 7900 XT <- without ECC?

EposVox has done some testing of the 4090 with Topaz Labs VEAI, GAI, and PAI.

Here are the results:

So the RTX 4090 is 12% faster than the A770 in Video Enhance AI but costs 4.57x more?

The RTX 4090 takes 3 PCI slots in your case; in that space you could fit 3 A770 units instead. For 65% of the cost of the RTX 4090 you would get 2.7x the workload.

Is this right?
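
The arithmetic seems to hold up, assuming launch prices of roughly $349 for the A770 and $1599 for the RTX 4090 (my assumption; those figures give the 4.57x ratio quoted above):

```python
# Rough sanity check of the cost/perf claim. Prices are assumed launch MSRPs
# (A770 ~$349, RTX 4090 ~$1599); adjust for street prices in your region.
a770_price, rtx4090_price = 349.0, 1599.0

print(f"price ratio: {rtx4090_price / a770_price:.2f}x")  # ~4.58x

# If the 4090 is 12% faster than one A770, one A770 does 1/1.12 of its work,
# so three of them do about 2.7x the work of a single 4090.
print(f"3x A770 workload vs one 4090: {3 * (1 / 1.12):.2f}x")  # ~2.68x
print(f"3x A770 cost vs one 4090: {3 * a770_price / rtx4090_price:.0%}")  # ~65%
```

That said, this assumes VEAI jobs split cleanly across three cards, which isn't a given.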

I'm going to take the guy's graph with a big grain of salt. He's on prerelease drivers, a VEAI build that hasn't been updated for the 40-series, and who knows what other settings he used that may have bottlenecked the GPU. In my experience, the sec/frame margin between the 3090 and 3080 is wider than what his graph shows.

As usual, you can't assume YouTubers know what they're doing, and while it's an interesting data point, you ultimately have to do your own testing in the configuration and with the presets you actually use.

Are you sure that EposVox actually tested something and is showing correct numbers on the VEAI graph? I just tried to convert 1000 720p frames to 4K using a 3090, and the result was 0.58s per frame.
But more questionable is the 0.511 for the RTX 3060…
I asked my grandson to use his PC with an RTX 3070 Ti for a few minutes, and VEAI 2.6.4 shows 0.82s per frame on the same 720p source. I'd suppose that in the case of the 3060, the number must be more than 1s?
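
A rough plausibility check, assuming VEAI throughput scales with peak FP32 TFLOPS (a crude assumption; real scaling depends on the model, resolution, and drivers):

```python
# Back-of-envelope estimate: scale the measured 3070 Ti time by the ratio of
# published peak FP32 throughput figures for the two cards.
tflops = {"RTX 3070 Ti": 21.7, "RTX 3060": 12.7}

measured_3070ti = 0.82  # s/frame on the 720p source, measured above
estimated_3060 = measured_3070ti * tflops["RTX 3070 Ti"] / tflops["RTX 3060"]
print(f"estimated RTX 3060: {estimated_3060:.2f} s/frame")  # ~1.40 s/frame
```

That lands well above 1s per frame, which would make the 0.511 figure on the graph look off.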