RTX 3090 worth it?

The fastest general single-core CPUs are Intel's Raptor Lake CPUs (e.g. the 13700K, which has fewer cores than the 13900K, but TVEAI usually doesn't make use of that many cores anyway).
Second fastest would be the latest AMD Zen 4 CPUs (e.g. 7900X or 7950X), which generally have slightly slower cores than Intel but support AVX-512 instructions, which might be utilized by the Topaz software, though I am not sure about that.
Memory speed (a combination of bandwidth and latency) might also play a role in TVEAI performance, since the operations are quite memory intensive. DDR5 should have an advantage here, but I have no data.
IO performance probably doesn't play a big role as long as you use at least a decent SATA SSD. Sustained write speed might only matter if you don't encode the output immediately and instead output single images, as the rough numbers below show.
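
As a back-of-the-envelope check (the frame format, bit depth, and frame rate below are assumptions on my part, not anything Topaz documents), an uncompressed image-sequence export can demand this kind of write bandwidth:

```python
# Rough estimate of the sustained write bandwidth needed when exporting
# single images instead of encoding directly (all figures are assumptions).
width, height = 1920, 1080      # FHD output
bytes_per_pixel = 6             # assumed uncompressed 16-bit RGB TIFF
fps = 30                        # assumed frames written per second

frame_mb = width * height * bytes_per_pixel / 1e6
sustained_mb_s = frame_mb * fps

print(f"~{frame_mb:.1f} MB per frame, ~{sustained_mb_s:.0f} MB/s sustained")
# ~12.4 MB per frame, ~373 MB/s sustained: within reach of a decent SATA SSD,
# but a slow drive or a busy NAS could become the bottleneck.
```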

One more little detail regarding the power connector they have chosen for the RTX 4090 and many of the newer-generation video cards. - In the past, connectors of that type were mainly intended to carry signals, not power; certainly not a lot of power. IMO: if the newer GPUs are going to consume as much power as they suggest, using a miniaturized connector for power is a big mistake.

The small number of failures due to cables or adapters overheating and damaging cards is actually rather remarkable, IMO - especially when you consider the cheaply made third-party cable extensions and adapters on the market and take Murphy's Law into consideration.

While I haven't installed my RTX 4090 card yet, I have noticed that my existing 3090 doesn't draw anywhere near the wattage its documentation says it requires. This is so even when running strenuous video benchmarks for lengthy periods.
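
If anyone wants to check their own card, here is a minimal polling sketch around nvidia-smi that logs the driver-reported board power during a benchmark run (the one-second interval is arbitrary, and it will miss very short spikes):

```python
# Poll the GPU's reported board power once per second via nvidia-smi
# and track the peak seen during a benchmark run. Stop with Ctrl+C.
import subprocess
import time

peak = 0.0
try:
    while True:
        out = subprocess.check_output(
            ["nvidia-smi", "--query-gpu=power.draw",
             "--format=csv,noheader,nounits"],
            text=True,
        )
        watts = float(out.strip().splitlines()[0])   # first GPU only
        peak = max(peak, watts)
        print(f"now {watts:6.1f} W   peak {peak:6.1f} W")
        time.sleep(1)
except KeyboardInterrupt:
    print(f"Peak board power observed: {peak:.1f} W")
```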

Not sure what you mean by that. The usual 8-pin connectors (on video cards) have, as far as I know, never been used for signals, but always for power only. Signals go through the PCIe connector.

As for the power required, tests (by Gamers Nexus) have shown you can even run the 4090 on 2 pins only (sic!). The only problem arises when the connector isn't plugged in properly and one of the pins makes contact with the metal side (because the connector is seated a bit lopsided), thus causing extra resistance and, in turn, (a lot of) extra heat.
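
A quick I²R estimate shows why a bit of extra contact resistance matters so much at these currents; the 600 W load, the even six-pin current split, and the contact-resistance values below are illustrative assumptions, not measurements:

```python
# Back-of-the-envelope: heat dissipated in a single 12VHPWR pin contact.
board_power_w = 600          # assumed worst-case draw through the connector
voltage = 12.0
power_pins = 6               # the 12VHPWR connector has six 12 V supply pins

amps_per_pin = board_power_w / voltage / power_pins   # about 8.3 A per pin

for milliohms in (5, 20, 50):    # assumed good vs. degraded contact resistance
    heat_w = amps_per_pin ** 2 * (milliohms / 1000)
    print(f"{milliohms:>3} mΩ contact: ~{heat_w:.2f} W dissipated in that one pin")
# Going from a few mΩ to tens of mΩ turns negligible warmth into enough
# localized heat to soften the connector housing over time.
```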

Generally speaking, power connectors for circuit boards have heavier pins, which also require wider spacing. The size of these connector pins compared to the wires connected to them is very different. They are more like what I remember being used for data cables between circuit boards, and very rarely for power unless the power requirement is very small. (Which is not the case for RTX 4090 cards.)

When someone finds RDNA 3 workstation benchmarks, please post them here.

Would be nice.

This is an FYI: the price of the RTX 4090 was well worth the investment. The current version of TVAI doesn't just support it - the RTX 4090 now supports Proteus as well. The framerate has picked up, though not by a staggering amount, but the extra processing power is being used by the AI to get far better results! - I've redone a couple of videos from back when I had my RTX 3090 in the same machine. The picture quality has gone way up.

Conclusion: Currently, an RTX 4090 may not make the processing enormously faster (yet), but it just allowed me to redo a few older videos with quality that blows VEAI 2.x and the RTX 3090 out of the water!

Oh yes! Forgot to mention: with the RTX 4090 on board, my Gigabyte Z590 system with 32 GB of memory, 14 TB of storage, and an i9-11900K CPU is drawing around 225 watts at idle and about 390 watts when processing SD to FHD using Proteus. So a lot of that chatter about needing a 900-1000 watt PSU may be nonsense. - We shall see - after I start overclocking…
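
Here is the rough headroom arithmetic behind that; the PSU efficiency, the GPU's share of the load, and the spike factor are guesses on my part, not measurements:

```python
# Rough PSU headroom check for the figures above (assumed margins).
wall_load_w = 390            # measured at the wall, SD to FHD with Proteus
psu_efficiency = 0.90        # assumed PSU efficiency at this load
dc_load_w = wall_load_w * psu_efficiency          # ~351 W delivered to components

gpu_share_w = 250            # rough guess at the GPU's slice of that load
spike_factor = 2.0           # assumed brief transient multiplier for the GPU

worst_case_w = (dc_load_w - gpu_share_w) + gpu_share_w * spike_factor
print(f"steady ~{dc_load_w:.0f} W, worst-case spike estimate ~{worst_case_w:.0f} W")
# steady ~351 W, spike ~601 W: on paper even a 750-850 W unit covers this, so the
# 900-1000 W advice is mostly about transient headroom and over-current margins.
```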

As far as I know, the models were updated, which means that even with the 3090 the same quality would have come out … at least that is my assumption.
I made a comparison between an RTX Q 5000, a W8100, and a 2080S: the output from the 2080S was 100% different from the two pro GPUs, but the image did not look different.
The pixel grid was 100% different, yet the image was not worse.
The output from the two pro GPUs was 100% identical.
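
For anyone who wants to quantify that kind of difference, here is a small sketch that compares two exported frames; the file names are placeholders, and it assumes numpy and Pillow are installed:

```python
# Quantify "the pixels differ but the image doesn't look worse":
# count non-identical pixels and compute PSNR between two exported frames.
import numpy as np
from PIL import Image

a = np.asarray(Image.open("frame_card_a.png").convert("RGB"), dtype=np.float64)
b = np.asarray(Image.open("frame_card_b.png").convert("RGB"), dtype=np.float64)

differing = np.mean(np.any(a != b, axis=-1)) * 100    # % of pixels not bit-identical
mse = np.mean((a - b) ** 2)
psnr = 10 * np.log10(255.0 ** 2 / mse) if mse else float("inf")

print(f"{differing:.1f}% of pixels differ, PSNR {psnr:.1f} dB")
# Nearly every pixel can differ at the bit level while the PSNR stays high
# enough (roughly 45 dB and up) that the frames look identical to the eye.
```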

Could you show a sample screenshot, please?

The power supply recommendation is based on transient load handling - if you look at more detailed reviews of graphics cards, they measure transients, and you can see quite high spikes (even with the 3xxx series).

“PCI SIG has basically outlined the capability of a GPU to exceed the maximum sustained power of the card by 3x. That means a 600 watt card on a PCIe 5.0 12VHPWR connector is allowed to spike to 1,800 watts for 100 micro-seconds.”

Reference
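
Putting the quoted spec into plain numbers (nothing here beyond what the quote states):

```python
# Simple arithmetic on the PCI SIG excursion allowance quoted above.
sustained_w = 600            # maximum sustained power over the 12VHPWR connector
spike_multiplier = 3         # allowed excursion factor
spike_duration_s = 100e-6    # 100 microseconds

spike_w = sustained_w * spike_multiplier     # 1800 W
energy_j = spike_w * spike_duration_s        # 0.18 J per excursion

print(f"allowed spike: {spike_w} W for 100 us, about {energy_j:.2f} J of energy")
# The energy per spike is tiny; the concern is the instantaneous current,
# which is what can trip a power supply's over-current protection.
```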

I am not certain I have two comparable copies of the same video at hand just now. Along with the holidays, I'm moving everything around on several machines and moving a lot of it from an older NAS box to a newer (much bigger) one. (I never dreamed I would ever need to manage so many terabytes of storage.) Until I get all this stuff squared away, I'm not going to have sufficient time to produce the 'evidence' you've requested. - I'm not even at home this week. Hopefully sometime in early January.

I do know the (stored) Proteus settings used for rendering a given video using a 3090 vs. a 4090 are very different. Most of the stored presets I used with my 3090 are set way too high to get the same results using the 4090.

That is a very interesting statistic. - I’m certain that none of my monitoring software could cut things that fine. And it’s nice to see a technical explanation.

And, thank you for the reference link. It contains some very interesting information on the subject.