RTX 3090 worth it?

:nerd_face:

@suraj

FYI: If you are looking for a Gigabyte RTX 4090 overclockable video card, Gigabyte’s new online store has them in stock today, October 12th (US only, currently at list price with free shipping).

I just bought mine!!!

:cowboy_hat_face: :nerd_face: :grinning_face_with_smiling_eyes:

1 Like

I’m not sure about the A770 (I would need a whitepaper on its architecture to see what’s going on).

But the numbers for the 3080 vs the 2080 are legit; it’s 29 vs 22 teraflops.

And PCI-E 4 plays a role.

The RX 7900X will have PCI-E 4 & 132 teraflops FP16. ← so maybe a good bit faster than the 4090.
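
For anyone who wants to sanity-check those teraflop figures, the usual back-of-the-envelope is 2 FLOPs (one fused multiply-add) per shader core per clock. A minimal sketch, assuming the commonly quoted launch specs for the 3080; the core count and boost clock are approximations, not measured values:

```python
# Rough peak-throughput estimate: 2 FLOPs (one FMA) per shader core per clock.
# The core count and boost clock below are approximate published launch specs.
def peak_tflops(shader_cores: int, boost_ghz: float, flops_per_clock: int = 2) -> float:
    return shader_cores * boost_ghz * flops_per_clock / 1000.0

# RTX 3080: ~8704 cores at ~1.71 GHz boost -> roughly 29.8 TFLOPS FP32,
# which lines up with the ~29 figure above. FP16 rates depend on the
# architecture (packed math, tensor/matrix units), so they don't follow
# directly from this simple formula.
print(f"RTX 3080: ~{peak_tflops(8704, 1.71):.1f} TFLOPS FP32")
```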

Update: It seems the A770 has a tensor-like (matrix operations) core.

https://www.intel.com/content/www/us/en/products/docs/arc-discrete-graphics/xe-hpg-microarchitecture.html

I’ve heard similar about RDNA3.

Update 2:

https://videocardz.com/newz/amd-adds-wmma-wave-matrix-multiply-accumulate-support-to-gfx11-rdna3-architecture-amds-tensor-core

1 Like

Hmmm,

I wonder if the SDKs provided by the makers of this hardware will support all these cards or just the more popular ones on the market. - I know that programmers like to keep what’s called ‘scope creep’ to a minimum, so they may only want to write and debug code that will run on the most widely used hardware. - In that case the 3090, 4090 and Intel’s (advanced) video hardware have a greater chance of being considered for optimization.

1 Like

That’s why Apple’s hardware works really fast for its raw performance.

You don’t have to optimize it for a lot of different hardware.

I heard that in gaming you optimise for the weakest hardware of a generation.

Personally, I am not much of an Apple fan. But, having the same company that manufactures the stuff and provides the OS, drivers and SDKs in control of it all does give them a certain advantage when optimizing.

I know their history and I know the evolution of the IBM PC to what it has morphed into today. - It may be more chaotic, but I’m a fan of structured code, knowledgeable systems programming and the freedom of choice it provides.

I’m also not a gamer, so I have no idea what their programming philosophy is. But that sounds reasonable. - Why deny an eager gamer a chance to BUY and play your game just because their system is a bit too old or underpowered?
:heavy_dollar_sign::scream: :nerd_face: :disappointed_relieved: :heavy_dollar_sign:

aaaannnnnd?

So slow that you can’t show it? :wink:

The other thread showed only a 22% speedup (best case, vs the 3090), Proteus 720p to 1080p.

Well,
When it gets here, we’ll find out. I’m sure it will be faster than my 3090, which has been moving faster lately; I think that Topaz has gotten into the Nvidia SDKs.

I’m hoping they’ll implement the additional stuff available to take advantage of the 4090’s capabilities, too.

One of the problems I see in the speed comparisons posted so far is that they don’t mention whether the benchmarks provide driver support for all the various GPUs, or just use “generic” calls.
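
Side note on the “generic” calls point: for ML-based tools this usually comes down to which inference backend actually gets loaded. As a rough illustration (assuming ONNX Runtime here, which is not necessarily what Topaz uses, plus a placeholder model file), you can at least see which execution providers are available and which ones a session ends up binding to:

```python
# Minimal sketch: check which ONNX Runtime execution providers are available,
# and which ones a session actually uses. "model.onnx" is a placeholder path.
import onnxruntime as ort

available = ort.get_available_providers()
print("Available providers:", available)

preferred = ["TensorrtExecutionProvider",  # vendor-optimized (NVIDIA)
             "CUDAExecutionProvider",      # generic CUDA
             "DmlExecutionProvider",       # DirectML (also covers AMD/Intel)
             "CPUExecutionProvider"]       # fallback

sess = ort.InferenceSession("model.onnx",
                            providers=[p for p in preferred if p in available])
print("Session is using:", sess.get_providers())
```

If a benchmark never gets past the CPU or a generic fallback on some of the cards, the comparison says more about the backend than about the GPUs.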

It’s starting to look to me like it depends more on the CPU than the GPU, at a certain level.

For ADA and AMP to work properly, everything has to be floating point.

I would like to see everything loaded into GPU memory; then the performance should be outstanding, like rendering.
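
A minimal sketch of that idea, assuming PyTorch (which may or may not resemble what VEAI does internally) and an NVIDIA GPU: move the frames to the GPU once, run everything in FP16 under autocast, and only copy the finished result back.

```python
# Sketch only: process a batch of frames entirely in GPU memory in FP16.
# PyTorch and CUDA are assumptions; the Conv2d stands in for the real network.
import torch

device = torch.device("cuda")
model = torch.nn.Conv2d(3, 3, 3, padding=1).to(device).eval()  # placeholder model

frames = torch.rand(8, 3, 720, 1280)           # fake 720p frames in host memory
frames = frames.to(device, non_blocking=True)  # one copy into GPU memory

with torch.inference_mode(), torch.autocast("cuda", dtype=torch.float16):
    out = model(frames)                        # stays on the GPU, runs in FP16

result = out.float().cpu()                     # copy back only the final result
```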

I agree. And, the Nvidia 3000 and 4000 series have more than adequate memory and a wide data path to the GPU. I would think their driver SDK has been designed to optimize how the various cores and onboard memory are used.

Having a fast line between the system CPU and the video card resources is also very important. On 11th- and 12th-gen Intel chipsets, the first PCIe x4 slot feeds directly to the CPU. The same PCIe x4 (in my Z590 machine with an i9-11900K CPU in it) also services my first NVMe drive. - It’s pretty fast.
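
If anyone wants to put a number on that “fast line”, here is a quick way to time a pinned-memory copy from system RAM to the GPU; PyTorch with CUDA is an assumption, and this is an approximation rather than a proper benchmark, but the GB/s it reports should land somewhere near what the PCIe link can actually deliver:

```python
# Rough host-to-GPU transfer bandwidth check (PyTorch + CUDA assumed).
import time
import torch

size_mb = 1024  # copy 1 GiB of pinned host memory
buf = torch.empty(size_mb * 1024 * 1024, dtype=torch.uint8).pin_memory()

torch.cuda.synchronize()
t0 = time.perf_counter()
gpu_buf = buf.to("cuda", non_blocking=True)
torch.cuda.synchronize()
elapsed = time.perf_counter() - t0

print(f"~{size_mb / 1024 / elapsed:.1f} GB/s host -> device")
```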

I have a fair number of monitoring utilities on my system, and occasionally activate a few just to see how much work the various components are doing. It wasn’t all that long ago when VEAI only used a few % of the Nvidia card’s capabilities. Now that they offer Nvidia-specific drivers for some CODECs and their containers, things have sped up significantly.
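
For anyone who wants to watch that utilisation without a stack of GUI tools, nvidia-smi can be polled from a tiny script. The query fields below are standard nvidia-smi ones; what the numbers show while VEAI is running is, of course, another matter:

```python
# Poll GPU utilisation and VRAM use once per second via nvidia-smi
# (assumes an NVIDIA driver with nvidia-smi on the PATH; first GPU only).
import subprocess
import time

QUERY = ["nvidia-smi",
         "--query-gpu=utilization.gpu,memory.used,memory.total",
         "--format=csv,noheader,nounits"]

while True:
    line = subprocess.check_output(QUERY, text=True).splitlines()[0]
    util, used, total = (v.strip() for v in line.split(","))
    print(f"GPU {util}% | VRAM {used}/{total} MiB")
    time.sleep(1)
```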

That said, the thing that will truly make VEAI’s enhancement capabilities really pop will be the adoption of a few lossless codecs.

In fact, compressing slows down processing a lot; that’s why I always use TIFFs for pictures (and of course for the quality).

The primary need for lossless video is when you are editing or enhancing. It should never need to be compressed until it gets finalized for distribution on whatever media it is intended for.

What VEAI needs is a lossless CODEC that is optimized to work on modern high-capability hardware. The Nvidia GeForce RTX GPUs are only one example. (But good ones.)
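
Until then, a practical stop-gap is to keep the intermediate files lossless with ffmpeg and only compress at the very end. Two commonly used options, wrapped in small Python calls; ffmpeg being on the PATH and the filenames are assumptions here:

```python
# Sketch: lossless intermediate encodes with ffmpeg (assumed to be installed).
import subprocess

SRC = "input.mov"  # placeholder

# Option 1: FFV1 in MKV - true lossless with decent compression.
subprocess.run(["ffmpeg", "-i", SRC, "-c:v", "ffv1", "-level", "3",
                "-c:a", "copy", "intermediate_ffv1.mkv"], check=True)

# Option 2: mathematically lossless H.264.
subprocess.run(["ffmpeg", "-i", SRC, "-c:v", "libx264", "-qp", "0",
                "-preset", "veryslow", "-c:a", "copy",
                "intermediate_x264.mkv"], check=True)
```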

These results can be skewed by the compatibility of the benchmark being used. If the benchmark doesn’t use a driver that can utilize the explicit capabilities of the GPUs and just runs a “generic” standard graphics driver on all of them, the results are meaningless for most practical purposes.

1 Like

Why not work with 8/16-bit TIFF?

That can be done, but it is relatively inefficient.

The primary problem is that unless the process of decompressing, deinterlacing and denoising can be done perfectly, the flaws are permanently baked into the pictures.

Secondly, it can’t carry an audio track, which can lead to audio sync complications.

Generally, exporting to a stream of single images is most useful to prepare for re-editing in a graphic editing program that can run macros on each image to effectively animate special effects into the image. It is also a useful way to facilitate finding a single frame that will be “perfect” for a still graphic composition, such as a poster or photo for an article.
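
If you do go the image-sequence route, the usual workaround for the audio problem is to demux the audio untouched at export time and mux it back once the frames have been reassembled. A rough ffmpeg round trip; the filenames, the 25 fps rate, an existing frames/ folder and an audio stream that fits the .m4a container are all assumptions:

```python
# Sketch: export a TIFF frame sequence plus the untouched audio, then rebuild.
import subprocess

# 1) Split: frames to TIFFs, audio to its own file (no re-encode).
subprocess.run(["ffmpeg", "-i", "input.mp4", "frames/frame_%06d.tiff"], check=True)
subprocess.run(["ffmpeg", "-i", "input.mp4", "-vn", "-c:a", "copy", "audio.m4a"],
               check=True)

# 2) ...edit / enhance the TIFFs...

# 3) Rejoin: reassemble the frames losslessly and mux the original audio back.
subprocess.run(["ffmpeg", "-framerate", "25", "-i", "frames/frame_%06d.tiff",
                "-i", "audio.m4a", "-c:v", "ffv1", "-c:a", "copy",
                "rebuilt.mkv"], check=True)
```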

As such, a truly lossless CODEC is still the best way to go.

1 Like

Hah, I’m an idiot, I forgot about the audio.


I’m really pissed off about this benchmark and I’ll save my swear words.

You can see that the author has no idea what the GPUs accelerate within the apps from his comment on the memory; he thinks that the size plays a role and that this makes the apps faster, but you could run 12x Capture One in the 4090’s memory, since C1 only needs 2GB.

In Capture One, the on-screen display is also GPU-accelerated; it’s jerky without a GPU, even with the 24-core 3960X.

And what really annoys me is that the TL (Topaz Labs) products are left out, and Neat Image, Helicon Phocus, PT-Gui and DxO PL6 too; all GPU-accelerated, all specialised pro-user apps.

Of course, you can also do that with a slow GPU if you use the wrong apps and work like it’s 15 years ago.

GPGPU computing is always a symbiosis of the CPU with the most MHz, the fastest RAM connection and the widest PCI-E bandwidth, together with the GPU with the fastest memory connection and the most cores.


And why aren’t additional Radeon GPUs tested? Are they all afraid that team green will suddenly be overtaken by the mid-range model?

I’ve seen someone use an outdated Gigapixel version to make Radeon look bad.

I remember in 2020 when I remotely tested a Vega Pro VII and was told that a Radeon Pro W5700 was just as fast in Denoise as an RTX 6000.

I didn’t want to hear that back then either.

https://petapixel.com/2022/10/17/the-nvidia-rtx-4090-is-amazing-and-photographers-should-not-buy-it/

I’m not very concerned about whether they tested on any Topaz apps or not. I really do disagree about how useful all that GPU power is for still (or single-image) photo/graphics processing.

I am a voracious photographer. It is just a dream come true for someone like me that you can now have a pocket-sized Hasselblad digital camera that can make UHD video, with lots of other photo options and XPAN, too. (And yes, it also has a phone built into it. :nerd_face:)

Obviously, enhancing video needs more GPU power than single still images do, but that isn’t the sole consideration.

I own several photo editing apps, several of which make very good use of AI and a powerful GPU; they have taken complex editing operations that seemed to take all day and made them happen in seconds.

Another thing to consider: making a photo WAY larger than needed for editing and refinement purposes and then reducing it to a ‘normal’ resolution for output. Using that method, coupled with a few editing tricks and a bit of TLC, you can really make those photos pop.
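
That work-big-then-shrink trick is easy to script, too. A minimal Pillow sketch; Pillow (a version recent enough for Image.Resampling), the 2x factor and the Lanczos filter are assumptions, and the actual AI upscale / editing step is left as a placeholder:

```python
# Sketch of the work-large-then-downsample workflow (Pillow assumed).
from PIL import Image

img = Image.open("photo.jpg")  # placeholder file
big = img.resize((img.width * 2, img.height * 2),
                 Image.Resampling.LANCZOS)  # or an AI upscaler here

# ...do the detailed retouching / enhancement on `big`...

final = big.resize((img.width, img.height), Image.Resampling.LANCZOS)
final.save("photo_final.tiff")  # lossless output
```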

There will always be articles (especially on the net) that have outrageous headlines, completely ignore the facts and then spout some outrageous BS on the subject. And then sit back and make $$$ based on the click-tracking numbers for the advertising that gets washed into your eyeballs when you load their page.

Unfortunately, there are still people out there who will believe anything they read. :roll_eyes:
:thinking:

I think it will be delivered tomorrow morning…
:cowboy_hat_face:

1 Like

I do not see any advertising.

uBlock & NoScript.

1 Like

If you have time, I’d be interested in seeing times of Gigapixel, Denoise AI and DxO Deep Prime Extreme Detail.

OK! However, I should also mention that the 3090 I currently have installed is getting used at a higher and higher percentage for rendering and preview since the last few beta releases. Very gratifying. But I don’t know if the improvements Topaz has recently made for the Nvidia 3000 series will also be present for the 4000 series yet. - I assume the Nvidia drivers for these cards will be the same in many ways, but I’m certain they will also have differences. - We’ll see. I don’t expect to have time to change over to the 4090 for a few more days…