The situation is taken very seriously by Nvidia, who asked the people who had the problem to send in their cards and cables for investigation. So if such a thing happened to you and no other damage occurred, I'm pretty sure the warranty would of course apply. Nvidia has a lot to lose here, as AMD has already advertised that their upcoming cards will not have this issue (lol). But fortunately, it seems that the issue is not with the card itself but mostly with the cable shipped with it.
If you read both articles, they talk about bad soldering too.
But I can see a possible warranty problem if you had such an issue with a cable not suited for the card, or not the one provided by the manufacturer (lol, that would be very funny, depending on what's stated in the warranty). But I doubt it; as said, Nvidia has too much money to lose in such a situation.
What a shitty situation. You also only seem to run into this kind of thing (frequently) with gaming cards.
Every time a new card comes out I think I’m going to get a gaming card, but I don’t, because the pro cards are just fine, even if they are more expensive.
If I do get a Radeon Pro, it won’t be until this time next year.
Actually, I had planned to use a card for a maximum of two years and then sell it while the loss in value is not yet so high.
If I use it until the end of its life or performance, I won’t get as much for it.
This is a case where the real Murphy’s Law applies:
Standard Version: “Anything that can go wrong will go wrong.”
Computer (Programmer’s) Extended Version: “Anything that can go wrong will go wrong, AND at the worst possible time.”
Most likely, the problem lies in the adapter cable getting too hot due to any one, or a combination, of these causes:
Cheap pins and sockets in connectors: Metal that is too thin breaks or distorts when misaligned, pushed too hard, or bent. Any of these creates a situation where the metal heats up, melting insulators and plastic connectors, which can result in lower current flow or melting into a much hotter short circuit (a.k.a., “a short circus”).
Skimpy wire: Copper is a relatively expensive metal. Using a higher-gauge wire (meaning thinner) makes it cost less. And whether you are producing a wired part or buying it, that lowers the cost, especially if you’re working in industrial quantities. At a given voltage and wattage, a thinner wire has more resistance and heats up more than a lower-gauge (heavier/more expensive) wire will.
The people buying the part to pack in the box inevitably understand price better than the physics of electrical conduction, so they go with what they know.
In the case of cheap vs. good, cheap usually wins.
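To put rough numbers on the “skimpy wire” point, here is a back-of-the-envelope sketch. The resistivity and AWG diameters are standard values; the ~8.3 A per conductor is my assumption (600 W at 12 V split across six 12VHPWR wires):

```python
import math

RHO_CU = 1.68e-8                      # copper resistivity, ohm*m
DIAMETER_MM = {16: 1.291, 18: 1.024}  # standard AWG wire diameters, mm

def ohms_per_meter(awg: int) -> float:
    """Resistance per meter of a solid copper conductor of the given gauge."""
    area = math.pi * (DIAMETER_MM[awg] * 1e-3) ** 2 / 4  # cross-section, m^2
    return RHO_CU / area

def watts_per_meter(awg: int, amps: float) -> float:
    """Heat dissipated per meter of wire: P = I^2 * R."""
    return amps ** 2 * ohms_per_meter(awg)

for awg in (16, 18):
    print(f"AWG {awg}: {ohms_per_meter(awg) * 1000:.1f} mohm/m, "
          f"{watts_per_meter(awg, 8.3):.2f} W/m at 8.3 A")
```

Dropping from AWG 16 to the thinner AWG 18 raises the resistance, and therefore the heat per meter, by roughly 60%: exactly the kind of saving that looks free on a spreadsheet and cooks a connector in practice.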
To prevent an overheating/shorting problem like this one, it is mandatory to make a part that is totally goof-proof, and theirs is NOT.
I hope you all can appreciate the Murphyness in what I’ve explained above.
Actually, Europe (not including the UK) runs on 240v/50Hz and the UK runs on 250v/50Hz. AFAIK, no country runs on a 208v base (i.e. a single-phase supply). 208v/60Hz is found in the US on 3-phase circuits.
TVAI 3.0 changes more than just the models and UI. By switching to ffmpeg filtergraphs they opened up the possibility of launching multiple processes from a single instance of the app. So, you now have to consider the cumulative processing rate, which appears to be non-linear. I have done some comparisons running 1, 2, 3 or 4 jobs simultaneously (I run out of VRAM on my 3080Ti after 4). One job runs at about 7.5 fps. Two jobs run at about 4 fps, or about a half frame per second faster than one job in aggregate. 3 jobs run at about 3.3 fps each, or almost another 2 fps faster. 4 jobs run at about 2 fps, so a bit slower than 3.
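To make the aggregate rates explicit, here is the same data restated in a short sketch (the per-job fps figures are the ones I measured above):

```python
# Per-job fps measured on a 3080 Ti; aggregate throughput = jobs * fps-per-job.
per_job_fps = {1: 7.5, 2: 4.0, 3: 3.3, 4: 2.0}

for jobs, fps in per_job_fps.items():
    print(f"{jobs} job(s): {fps} fps each -> {jobs * fps:.1f} fps aggregate")
```

That works out to 7.5, 8.0, 9.9 and 8.0 fps aggregate, so on this card three simultaneous jobs is the sweet spot.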
Compare this to TVEAI 2.x where each Topaz instance could only run a single job, and multiple Topaz instances chewed up resources so much that running more than 2 jobs would crawl.
BTW: The drivers released for Nvidia’s retail GPUs (the RTX/GTX variants) have a hard coded limit of 3 NVENC streams, so I have patched mine to allow more. However, it would appear that the drivers are optimized for 3 streams anyway.
"The voltage used throughout Europe (including the UK) has been harmonised since January 2003 at a nominal 230v 50 Hz (formerly 240V in UK, 220V in the rest of Europe) but this does not mean there has been a real change in the supply.
Instead, the new “harmonised voltage limits” in most of Europe (the former 220V nominal countries) are now:
230V -10% +6% (i.e. 207.0 V-243.8 V)
In the UK (former 240V nominal) they are:
230V -6% +10% (i.e. 216.2 V – 253.0 V)
This effectively means there is no real change of supply voltage, only a change in the “label”, with no incentive for electricity supply companies to actually change the supply voltage.
To cope with both sets of limits all modern equipment will therefore be able to accept 230V +/-10% i.e. 207-253V."
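A quick sketch of the quoted tolerance arithmetic (the nominal voltage and percentages are taken from the quote above):

```python
# Harmonised supply-voltage limits computed from the quoted tolerances.
nominal = 230.0  # harmonised nominal voltage, volts
tolerances = {"most of Europe (former 220V)": (-10, 6),
              "UK (former 240V)": (-6, 10)}

limits = {region: (nominal * (1 + lo / 100), nominal * (1 + hi / 100))
          for region, (lo, hi) in tolerances.items()}

for region, (lo, hi) in limits.items():
    print(f"{region}: {lo:.1f} V to {hi:.1f} V")
```

This reproduces the 207.0–243.8 V and 216.2–253.0 V windows from the quote, and the union of both is the 207–253 V range all modern equipment has to accept.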
“AMD RDNA3 architecture has ‘Dual Issue’ design, which can now execute not one but two FP32 arithmetic commands at the same time. What this means is that each CU can now do 128 FP32 calculations instead of 64 (RDNA2). To reach the advertised 61 TFLOPs, one would have to multiply 6144 SP × 4 × 2.5 GHz ≅ 61 TFLOPs, or use the same method we use for every modern GPU: 12288 SP × 2 × 2.5 GHz. Obviously, the second option should be more readable to users.”
Does anyone have more info on that?
Seems like a fully parallel/async architecture.
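For what it’s worth, the arithmetic in the quote checks out; both formulas give the same number:

```python
# Both TFLOPs formulas from the quote, restated.
sp, clock_ghz = 6144, 2.5
tflops_dual_issue = sp * 4 * clock_ghz / 1000    # 6144 SP x 4 FP32 ops x 2.5 GHz
tflops_flat = 12288 * 2 * clock_ghz / 1000       # 12288 SP x 2 ops (FMA) x 2.5 GHz
print(tflops_dual_issue, tflops_flat)            # both ~61.4 TFLOPs
```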
Update:
“Each CU also features two AI acceleration components that provide a 2.7x uplift in AI inference performance over SIMD”
The AI cores are not exposed through software; developers cannot use them directly (unlike NVIDIA’s Tensor Cores). They are used exclusively by the GPU’s internal engines. Later today AMD will give us a more technical breakdown of the RDNA3 architecture."
Until now, there has always been the problem that GPUs are not fully utilized because the work fed to them is not enough to keep them busy.
Maybe AMD fixes this problem; that would be great.
"But, as with all dual-issue configurations, there is a trade-off involved. The SIMDs can only issue a second instruction when AMD’s hardware and software can extract a second instruction from the current wavefront. This means that RDNA 3 is now explicitly reliant on extracting Instruction Level Parallelism (ILP) from wavefronts in order to hit maximum utilization. If the next instruction in a wavefront cannot be executed in parallel with the current instruction, then those additional ALUs will go unfilled.
This is a notable change because AMD developed RDNA (1) in part to get away from a reliance on ILP, which was identified as a weakness of GCN – which was why AMD’s real-world throughput was not as fast as their on-paper FLOPS numbers would indicate. So AMD has, in some respects, walked backwards on that change by re-introducing an ILP dependence."
I just took a better look at the power cable I bought for my 4090. It will not be long enough to plug directly into the card and still reach the sockets on the PSU. (although it is about the ‘standard’ length for these cables.)
So, I checked out the CableMod RTX 4090 adapters; they have several 90- and 180-degree adapters for various manufacturers’ 4090 cards. The one I need for my system is pictured below. It will lessen the possibility of an overheating problem and also allow my power cable to reach the PSU.
I’m ordering it today directly from them. It appears they have a waiting list. (I wonder why?)
I bought a used RTX 3090 and I’m very disappointed: 1080p-to-4K performance is identical to a 3070.
Gaming performance is excellent and the temperature is normal, but in Topaz 2.6.4 it performs only 20–25% better, and only when increasing video FPS.
The card runs with a Ryzen 9 5950X CPU.
I’m thinking maybe the Radeon RX 6900 could get the job done faster?
The bottleneck is your CPU, not the GPU. In most models a lot of work is done by the CPU. Either get a faster CPU (e.g. AMD 79XX or Intel Raptor Lake might give you 30% more GPU utilization/speed) or run multiple tasks in parallel.
Yes, that power adapter design is bad, especially because it’s difficult for the user to notice whether it’s properly inserted, and if it isn’t, the power is not interrupted but runs at full throttle through a bad connection.
That said, the issue seems a bit overrated. The latest update is that the failure rate is about 0.04% (50 cases out of 125,000 units sold), and it seems to be essentially user error, due to the adapter not being fully inserted or becoming loose after a while from vibration/movement (the adapter could use a better design for secure insertion).
During TVEAI use, power usage of my 4090 does not really exceed 250W even with 3 parallel 1080p tasks. So I don’t worry.
The processor in this case is not so important. On my second computer, with a 6-core 5600G and a 3070, I got almost identical results processing 1080p to 4K with Artemis High Quality. The difference from the first computer only appeared when interpolating 25 to 60 fps with Chronos; there the faster processor let me finish the work sooner. The 5950X is a good 16-core processor.
I understand, but it is better to err on the side of caution, especially with expensive components.
I may install a temporary set of short modular VGA power extension cables until the CableMod adapter is available. (I’m on the waiting list…)
Note: Generally, I try to avoid power extenders. Most are made with very cheap pins and sockets which can easily bend, fail to make a solid connection, and heat up.
I have a 5950X myself. For Artemis it’s clearly bottlenecking a 3080 (most likely a 3070 too, and now I have a 4090, which is of course even more underutilized).
You can check that if you use the AMD Ryzen Master software to change the all-core frequency of your CPU while running Topaz. If you reduce the clock speed by 10%, the task will take about 10% longer and the GPU will be proportionally less utilized. You simply can’t clock a 5950X high enough to fully utilize a 3080 or 3070.
One of the reasons is that the CPU load from TVEAI is not very well multi-threaded. Most of your 16 cores will be essentially idling while running TVEAI. You can take a look with the Windows Task Manager: you will see that your CPU is far from 100% utilization, which doesn’t mean the GPU is too slow (it’s not at 100% either) but that only a few cores are actually used. You can also test this by letting TVEAI use only 4 cores at once (using the CPU core affinity function in Task Manager). You will see that the speed of TVEAI will not decrease, even though you just took away 75% of all CPU cores.
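A toy model of that bottleneck argument (this is not TVEAI’s actual pipeline, and the stage times are made-up numbers): if the CPU stage is the slower one, lowering the CPU clock by 10% lowers throughput by 10%, and the GPU just waits.

```python
def fps(cpu_ms: float, gpu_ms: float) -> float:
    """Throughput when CPU and GPU stages overlap; the slower stage limits fps."""
    return 1000.0 / max(cpu_ms, gpu_ms)

base = fps(cpu_ms=140.0, gpu_ms=100.0)               # CPU-bound case
downclocked = fps(cpu_ms=140.0 / 0.9, gpu_ms=100.0)  # CPU clocked 10% lower
print(f"{base:.2f} fps -> {downclocked:.2f} fps ({downclocked / base:.0%})")
```

The same model explains the affinity test: shrinking the core count only hurts once it actually raises the CPU stage time, which it doesn’t if only a few cores are used anyway.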
Thanks for the reply, sorry I didn’t know that earlier. Perhaps it would be necessary to invest in a more powerful processor. The gaming performance of the 3090 is amazing, of course, but gaming is not my main concern right now.
Which processor will be optimal in terms of price / performance ratio?
Let’s hope that Topaz will continue to optimize their software to make better use of the 4090’s capabilities. When the 3090 arrived on the scene, early adopters had to wait for their software providers to integrate the 3090’s potential into their applications. I’m certain that will also happen here.
One thing I noticed is that nearly all Topaz products have “3D” settings listed for them in the Nvidia Control Panel, except for TVAI, which seems odd. If anything, video apps make much greater demands on the GPU for image processing than those “single frame” apps do.
Why is this the case? Are there settings which will make the Nvidia GPUs work more closely with TVAI’s demands? If so, where are they?
I did a few experiments with the 5950X: I tried keeping only two cores (with SMT) active, and the performance was almost the same in Artemis High.
I think the developers still need to improve a lot.