RTX 3090 worth it?

I believe you can make do with an 850W PSU, but to do any OC you will need to go to 1000 watts. (I bought the OC version.)

I’m also concerned about VEAI’s throughput and stability on both RTX 30x0 and 40x0 series video cards. It has become obvious that addressing this is now a necessity.

By default, most 4090s have a 450W power limit. OC versions only have slightly increased clock speeds with the same power limit. If you want additional OC headroom, you can increase the power limit up to 600W (depending on GPU model) with MSI Afterburner.
Even when raising the voltage to 1.1v, I couldn’t really reach 600W usage of my 4090 TUF during OC benchmarking. Highest was maybe 560W. And all that was 3D-Mark stable with my 5950 and an 850W Seasonic platinum power supply.
Of course, it’s better to have a (quality) power supply with higher wattage, for peace of mind :slight_smile: . In practice, it’s often not needed.
The sweet spot for efficiency with the 4090 is around 300W, imho, with some undervolting. For maybe 10% more performance you need almost 100W more.
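As a rough illustration of that trade-off (the 300W baseline and the "+10% performance for +100W" figure are the estimates above, not measurements):

```python
# Rough perf-per-watt comparison for an undervolted vs. OC'd 4090,
# using the estimates quoted above (illustrative numbers only).
undervolted = {"watts": 300, "perf": 1.00}   # normalized performance
stock_oc    = {"watts": 400, "perf": 1.10}   # ~10% faster for ~100W more

for name, cfg in (("undervolted", undervolted), ("stock/OC", stock_oc)):
    ppw = cfg["perf"] / cfg["watts"]
    print(f"{name}: {ppw:.5f} perf/W")
```

On these numbers the undervolted card is about 20% more efficient per watt, which is what makes ~300W the "sweet spot" claimed above.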

I tend to agree about the speed and the wattage. I buy the OC version so I can make small adjustments, and I don’t know if I will need the 1000W PSU, OR the 2000VA UPS I’ll need to support that!

Found this article, for what it’s worth, in the absence of an official statement from Nvidia:

Update on Cables.

https://wccftech.com/nvidia-16-pin-adapter-comes-in-two-flavors-300v-is-good-for-geforce-rtx-4090-but-150v-is-a-fire-hazard/

So, you’re not even watching!

:dizzy_face:

From the latest articles, some cards are shipped with adapters rated for 150V instead of 300V.
And these cables have bad solder joints, 4 instead of two “something”. It’s well explained in the link I gave and the one from Thomas D. If it’s that, Nvidia will have to replace the 150V cables.

That is actually interesting, but puzzling as well. The local or “house current” voltage and the number of cycles per second do vary from country to country and between continents.

Example:

US/Canada 120v 60 cycle
Europe/UK 208v 50 cycle

However, as far as computers are concerned, these differences stop at the AC side of the (ATX-family) PSU. Once the incoming line voltage has been rectified and regulated down to the various standard DC voltages used inside the computer, they are almost universally the same.

If there is a manufacturer’s connection problem, it would most likely be due to the likelihood of overheating and shorting out, not the line voltage. One common cause of computer wiring and connector overheating is using “competitively priced” parts, made by manufacturing cables with as little copper as possible. The fundamental fact is that at any given voltage, a heavier-gauge cable will carry more current than a thinner one will.

In any case, after looking at some of the fried power adapters that came packed in RTX 4090 boxes, I’d be unwilling to use one of them. (See note.) My 4090 is still in its anti-static wrapping and won’t get plugged in until my correctly engineered GPU power cable arrives.

Note: I can also imagine the excuses the video card maker would offer if I had to make a warranty claim because my board was damaged by their cheap-o cable adapter.


The situation is being taken very seriously by Nvidia, who asked the people who had the problem to send their cards and cables in for investigation. So if such a thing happened to you, and no other damage occurred, I’m pretty sure the warranty would work. Nvidia has a lot to lose here, as AMD has already advertised that their upcoming cards will not have this issue (lol). Fortunately, it seems the issue is not with the card but mostly with the cable shipped with it.
If you read both articles, they talk about bad soldering too.

But I can see a possible warranty problem if you had such an issue with a cable not suited for the card, or not the one provided by the manufacturer (lol, that would be very funny, depending on whether it’s stated in the warranty). But I doubt it; as said, Nvidia has too much money to lose in such a situation.

What a shitty situation. You also only experience this (frequently) with gaming cards, i.e. the situations where something is not right.

Every time a new card comes out I think I’m going to get a gaming card, but I don’t, because the pro cards are just fine, even if they are more expensive.

If I do get a Radeon Pro, it won’t be until this time next year.

Actually, I had planned to use a card for a maximum of two years and then sell it again while the loss in value is not yet so high.

If I use it until the end of its life or performance, I won’t get as much for it.


Marty,

This is a case where the real Murphy’s Law applies:

  • Standard Version: “Anything that can go wrong will go wrong.”
  • Computer (Programmer’s) Extended Version: “Anything that can go wrong will go wrong, AND at the worst possible time.”

Most likely, the problem lies in the adapter cable getting too hot due to any or a combination of these causes:

  1. Cheap pins and sockets in connectors: metal that is too thin breaks or distorts when misaligned, pushed too hard, or bent. Any of these creates a situation where the metal heats up, melting insulators and plastic connectors, which can result in lower current flow or melting into a much hotter short circuit (a.k.a. “a short circus” :nerd_face:).

  2. Skimpy wire: Copper is a relatively expensive metal. Using a higher-gauge wire (meaning thinner) makes it cost less, and whether you are producing a wired part or buying it, that lowers the cost. This is especially true in industrial quantities. At a given voltage and wattage, a thinner wire has more resistance and heats up more than a lower-gauge (heavier, more expensive) wire will.

  3. The people buying the part to pack in the box inevitably understand price better than the physics of electrical conduction. So, they go with what they know.

In the case of cheap vs. good, cheap usually wins.

To prevent an overheating/shorting problem such as this one, it is mandatory to make a part that is totally goof-proof, and theirs is NOT.

I hope you all can appreciate the Murphyness in what I’ve explained above. :nerd_face:

:thinking: :heavy_dollar_sign:
:scream: :sob: :cry:

Actually, Europe (not including UK) runs on 240v/50Hz and the UK runs on 250v/50Hz. AFAIK, no country runs on a 208v base (i.e. a single phase supply). 208v/60Hz is found in the US on 3 phase circuits.

TVAI 3.0 changes more than just the models and UI. By switching to ffmpeg filtergraphs they opened up the possibility of launching multiple processes from a single instance of the app. So, you now have to consider the cumulative processing rate, which appears to be non-linear. I have done some comparison of running 1, 2, 3 or 4 jobs simultaneously (I run out of VRAM on my 3080Ti after 4). One job runs at about 7.5 fps. Two jobs run at about 4 fps, or about a half frame per second faster than one job in aggregate. 3 jobs run at about 3.3 fps each, or almost another 2 fps faster. 4 jobs run at about 2 fps, so a bit slower than 3.
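Multiplying those per-job rates by the job count makes the non-linear aggregate scaling easier to see (figures taken directly from the measurements above):

```python
# Aggregate throughput from the per-job fps figures reported above
# for simultaneous TVAI 3.0 jobs on a 3080 Ti.
per_job_fps = {1: 7.5, 2: 4.0, 3: 3.3, 4: 2.0}

for jobs, fps in per_job_fps.items():
    print(f"{jobs} job(s) x {fps} fps = {jobs * fps:.1f} fps total")
```

The totals come out to 7.5, 8.0, 9.9, and 8.0 fps, matching the observations above: two jobs gain only half a frame per second over one, three jobs gain almost another 2 fps, and four jobs fall back below three.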

Compare this to TVEAI 2.x where each Topaz instance could only run a single job, and multiple Topaz instances chewed up resources so much that running more than 2 jobs would crawl.

BTW: The drivers released for Nvidia’s retail GPUs (the RTX/GTX variants) have a hard coded limit of 3 NVENC streams, so I have patched mine to allow more. However, it would appear that the drivers are optimized for 3 streams anyway.

Not any more:

"The voltage used throughout Europe (including the UK) has been harmonised since January 2003 at a nominal 230v 50 Hz (formerly 240V in UK, 220V in the rest of Europe) but this does not mean there has been a real change in the supply.

Instead, the new “harmonised voltage limits” in most of Europe (the former 220V nominal countries) are now:

230V -10% +6% (i.e. 207.0 V-243.8 V)

In the UK (former 240V nominal) they are:

230V -6% +10% (i.e. 216.2 V – 253.0 V)

This effectively means there is no real change of supply voltage, only a change in the “label”, with no incentive for electricity supply companies to actually change the supply voltage.

To cope with both sets of limits all modern equipment will therefore be able to accept 230V +/-10% i.e. 207-253V."
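The tolerance bands in that quote can be checked with a line of arithmetic each (this is just a sanity check of the quoted numbers):

```python
# Verify the harmonised voltage limits quoted above.
nominal = 230.0

# Most of Europe (former 220V countries): -10% / +6%
eu_low, eu_high = nominal * 0.90, nominal * 1.06
# UK (former 240V): -6% / +10%
uk_low, uk_high = nominal * 0.94, nominal * 1.10

print(f"Europe: {eu_low:.1f} V - {eu_high:.1f} V")
print(f"UK:     {uk_low:.1f} V - {uk_high:.1f} V")
```

Both bands come out exactly as quoted (207.0–243.8 V and 216.2–253.0 V), and their overlap is the 230V ±10% (207–253V) range that modern equipment is built to accept.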


Thank you, Paul.

I stand corrected. And pleased for the information.

Phil


From Videocardz:

“AMD RDNA3 architecture has ‘Dual Issue’ design, which can now execute not one but two FP32 arithmetic commands at the same time. What this means is that each CU can now do 128 FP32 calculations instead of 64 (RDNA2). To reach the advertised 61 TFLOPs, one would have to multiply 6144 SP × 4 × 2.5 GHz ≅ 61 TFLOPs, or use the same method we use for every modern GPU: 12288 SP × 2 × 2.5 GHz. Obviously, the second option should be more readable to users.”

Does anyone have more info on that?
Seems like a fully parallel/async architecture.
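The two FLOPS calculations in that quote are equivalent, as a quick check shows:

```python
# Two equivalent ways of counting RDNA3 FP32 throughput, per the quote.
clock_hz = 2.5e9  # 2.5 GHz

# 6144 stream processors, x2 for FMA (2 FLOPs/instruction), x2 for dual-issue
tflops_a = 6144 * 4 * clock_hz / 1e12

# or count the dual-issue lanes as 12288 SPs, x2 for FMA
tflops_b = 12288 * 2 * clock_hz / 1e12

print(tflops_a, tflops_b)  # both 61.44
```

Either way you land on the advertised ~61 TFLOPs; the second form is just the counting convention used for every modern GPU.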

Update:

“Each CU also features two AI acceleration components that provide a 2.7x uplift in AI inference performance over SIMD.

The AI cores are not exposed through software; software developers cannot use them directly (unlike NVIDIA’s Tensor Cores). They are used exclusively by the GPU’s internal engines. Later today AMD will give us a more technical breakdown of the RDNA3 architecture.”


Until now, there has always been the problem that GPUs are not fully utilized, because the workloads they are given are not large enough to keep them busy.

Maybe AMD fixes this problem; that would be great.

Still pure speculation from me.


Update#2

Taken from Anandtech:

"But, as with all dual-issue configurations, there is a trade-off involved. The SIMDs can only issue a second instruction when AMD’s hardware and software can extract a second instruction from the current wavefront. This means that RDNA 3 is now explicitly reliant on extracting Instruction Level Parallelism (ILP) from wavefronts in order to hit maximum utilization. If the next instruction in a wavefront cannot be executed in parallel with the current instruction, then those additional ALUs will go unfilled.

This is a notable change because AMD developed RDNA (1) in part to get away from a reliance on ILP, which was identified as a weakness of GCN – which was why AMD’s real-world throughput was not as fast as their on-paper FLOPS numbers would indicate. So AMD has, in some respects, walked backwards on that change by re-introducing an ILP dependence."
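A toy model makes the cost of that ILP dependence concrete. The 61.44 TFLOPs peak comes from the figures quoted earlier; the "dual-issue fraction" is a made-up parameter purely for illustration, not an AMD figure:

```python
# Toy model: effective FP32 throughput vs. the fraction of cycles where
# a second independent instruction can be extracted from the wavefront.
peak_tflops = 61.44  # dual-issue peak from the earlier quote

def effective_tflops(dual_issue_fraction):
    # With no ILP found, only half of the ALUs are fed -> half of peak.
    return peak_tflops * (1 + dual_issue_fraction) / 2

for f in (0.0, 0.5, 1.0):
    print(f"ILP found {f:.0%} of the time: {effective_tflops(f):.1f} TFLOPs")
```

In other words, a purely dependent instruction stream would see only half the on-paper number, which is why real-world throughput can again lag the FLOPS figure, as it did with GCN.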


I just took a better look at the power cable I bought for my 4090. It will not be long enough to plug directly into the card and still reach the sockets on the PSU (although it is about the ‘standard’ length for these cables).

So, I checked out CableMod’s RTX 4090 adapters; they have several 90- and 180-degree adapters for various manufacturers’ 4090 cards. The one I need for my system is pictured below. It will lessen the possibility of an overheating problem and also allow my power cable to reach the PSU.

I’m ordering it today directly from them. It appears they have a waiting list. (I wonder why? :grin:)

CableMod 12VHPWR Angled Adapter – CableMod Global Store

[image: CableMod 12VHPWR angled adapter]

I bought a used RTX 3090 and I’m very disappointed: 1080p-to-4K performance is identical to the 3070.
Gaming performance is excellent and temperature is normal, but Topaz 2.6.4 performs better by 20–25% only when increasing video FPS.
The card runs with an R9 5950X CPU.
I’m thinking maybe the Radeon RX 6900 can get the job done faster?

The bottleneck is your CPU, not the GPU. In most models a lot of work is done by the CPU. Either get a faster CPU (e.g., AMD 79XX or Intel Raptor Lake might give you 30% more GPU utilization/speed) or run multiple tasks in parallel.
