AI Video Enhance with the new RTX 3090 GPU?

Look at the TensorFlow benchmark at Puget Systems.

The 3080 is twice as fast as a 2080.

But that Linux benchmark wasn’t really optimised for it; they used an old version.

The 3090 is targeted at creatives; it’s the new Titan. Look at the GA102 whitepaper.

Sadly it does not support the 3090 yet. Videos rendered with a 3090 come out damaged.


ok, how?

What does it look like?

I cannot post a picture here.

You can take a look at this picture if you can access the following link:
https://tieba.baidu.com/p/7024269756

Did you try to lower the Memory Clock and GPU Clock?

Give it a try.

But I think the 30x0 series first needs to be added to the Topaz Labs ecosystem to work.

I’m slowly having my doubts about the 3000 series.

I have heard many rumors in the meantime.

I will wait a long time before doing a GPU upgrade.

Hi there,

First I tried Topaz VEAI v1.7.1 on an Asus RTX 3070 OC 8GB.
The render times dropped 60% compared to my 1050 Ti. Nice power boost.
(My system: i9-10900KF CPU with 32 GB DDR4-4266 RAM, Z490 chipset, Win10 Pro 2004.)
(I know that my system only has PCIe 3, but professional hardware testers like Tom’s Hardware say the difference between PCIe 3 and PCIe 4 with an RTX 3090 is just 1-2%.)
Hard disks are 1 TB and 2 TB NVMe … 3500 MB/s read, 3300 MB/s write.

But I still was not satisfied with the “long” rendering times,
so I decided to buy an Nvidia 3090 Founders Edition.

I was hoping the 3090 would be at least twice as fast as the 3070 with Topaz.
Comparing the specs, it should be, and looking at 3x the price of a 3070, it REALLY should.

But before I exchanged the cards, I prepared a list of videos to be rendered and noted the RTX 3070 render times (seconds per frame). They were all kinds of videos, in resolutions from 240p up to 2160p.

After the hardware change and a reinstall of the Studio drivers, I reloaded the 3070 queue list and tested the videos again on the RTX 3090: a 1:1 comparison, 3070 vs 3090.

And the results were more than disappointing.
I will not copy my complete results, but I can say that with the 3070 it took 7 hours to upscale a 1-hour video from 720p to 4K (2160p).

With the 3090 it takes “only = still” 6 hours to upscale the same video to 4K.

So the 3090 saved me “just” 1 hour of rendering time.
Shouldn’t that be at least 3 saved hours?

          RTX 3070                   RTX 3090
video1    8h30min  0.25 s/frame     6h40min  0.20 s/frame
video2    6h55min  0.25 s/frame     5h40min  0.21 s/frame
video3    8h45min  0.34 s/frame     7h05min  0.27 s/frame
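
As a sanity check, seconds-per-frame converts to wall-clock time like this (a minimal sketch; the 25 fps clip length is an assumption for illustration, not taken from the posts above):

```python
def total_seconds(num_frames, sec_per_frame):
    """Total render time in seconds for a clip."""
    return num_frames * sec_per_frame

def speedup(spf_old, spf_new):
    """Relative speedup when moving from one seconds-per-frame figure to another."""
    return spf_old / spf_new

# A hypothetical 1-hour clip at 25 fps = 90,000 frames.
frames = 60 * 60 * 25
hours_3070 = total_seconds(frames, 0.25) / 3600   # 6.25 h
hours_3090 = total_seconds(frames, 0.20) / 3600   # 5.0 h
print(hours_3070, hours_3090, speedup(0.25, 0.20))  # 6.25 5.0 1.25
```

So going from 0.25 to 0.20 s/frame is a 1.25x speedup (~20% less render time), which matches the roughly one saved hour reported above.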

At this moment my results don’t justify the 3x higher price of the 3090.
The 3090 is hardly 20-30% faster than the 3070. How come?
Did I miss something? Did I calculate wrong?

(BTW, before the Nvidia RTX 3090 FE I had a Zotac RTX 3090 … the results were exactly the same.)
The most recent Game Ready or Studio drivers make no difference.

I hope the “bad = slow” results of the 3090 come from drivers that are not fully developed yet (12/2020).

OR Topaz VEAI does not fully support/use all capabilities of the RTX 3090 yet?
I hope future updates will boost the performance.
I also noticed (via the Asus GPU tools) that VEAI 1.7.1 uses a maximum of only 8 GB of my 24 GB GPU RAM.
Even when I start VEAI twice and render two videos at the same time, it never takes more than 8 GB.

Someone really needs to look at this poor performance…
Otherwise the 3090 will never be an option for Topaz VEAI users.
It’s not just the price … that will drop one day…
But remember that the 3090 needs far more electric power than the 3070:
3090 350 W vs 3070 220 W … During winter that’s okay for me, as I can switch off the floor heating… The 3090 is quite a hot hairdryer :wink: I now fear a hot summer day…

So IMHO the 3090 should save ~50% of the rendering time
compared to a 2080 or 3070 … otherwise it’s a waste of money.

P.S. At least I never had broken videos from the 3090 like others mentioned above…


I think your expectations of the 3090 are a little unrealistic. The 3090 does cost a lot, but it’s only a bit faster than the 3080 (you’re mostly paying for the extra VRAM). Compared to the 3070, the compute (non-gaming) performance is probably about 50% higher on the 3090, based on the few benchmarks I’ve seen.

So while there’s probably some performance still to be gained through driver updates and updates to VEAI, 2-3 times faster than the 3070 seems highly unlikely.

I never expected the 3090 to be 2-3 times faster than the 3070.
(That would be 200%-300%.)

BUT I expected it to be at least 50% faster (= 150% of the 3070),
and it is now only ~20% faster … that’s disappointing.
I never see the 50% you stated. And I have no benefit from gaming performance.
The 3090 is not “really” made for gaming, since it is the only card left with SLI support (twin 3090s).

Especially when you see almost double the electricity costs … for months/years.

When I render, the whole system draws 600 W for hours/days.
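
To put a rough number on the power argument (a minimal sketch; the 0.30-per-kWh electricity price is an assumed example, not from the thread):

```python
def energy_kwh(watts, hours):
    """Energy drawn in kilowatt-hours."""
    return watts / 1000.0 * hours

# Whole system at 600 W for a 6-hour render:
kwh = energy_kwh(600, 6)          # 3.6 kWh
cost = kwh * 0.30                 # ~1.08 at an assumed 0.30/kWh

# Card-only difference, 3090 (350 W) vs 3070 (220 W), over the same 6 h:
extra_kwh = energy_kwh(350 - 220, 6)   # 0.78 kWh
```

Per render the cost is small; it only adds up when the machine runs for days on end.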

I think quite a few VEAI users have discovered that overall performance isn’t tied just to the theoretical performance of the GPU. The rest of the system plays quite a significant role, especially once you get down to 0.3 s frame times.

lol no.
              RTX 3070          RTX 3090          Ratio
FP32/16       20.31 TFLOPS      35.58 TFLOPS      1.75 (75% more for the 3090)

What you can get is 75% more, MAX. Then there are CPU, RAM, storage, transfer rate/bus and of course software bottlenecks sitting between those numbers. You’re also only likely to get anywhere near the 75% boost when using high transfer rates. Remember that those TFLOPS are theoretical numbers that assume you’re not only maxing out GPU usage (i.e. the clock and compute units in the calculation), but also maxing out bandwidth, as you’re going to need that to max out raster operations (or instructions; IPC or RO in the calculation). If you’re interested: max theoretical FP operations = clock × instructions per cycle × number of CUs.
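
That formula can be sketched quickly (assuming the published Ampere specs: 5888 CUDA cores at ~1.73 GHz boost for the 3070, 10496 cores at ~1.70 GHz for the 3090, and 2 FP32 operations per core per clock, i.e. one fused multiply-add):

```python
def peak_tflops(cuda_cores, boost_clock_ghz, ops_per_clock=2):
    """Theoretical peak FP32 throughput: core count * clock * ops per cycle.
    An Ampere CUDA core issues one FMA per clock = 2 FP operations."""
    return cuda_cores * boost_clock_ghz * ops_per_clock / 1000.0

tflops_3070 = peak_tflops(5888, 1.725)    # ~20.3 TFLOPS
tflops_3090 = peak_tflops(10496, 1.695)   # ~35.6 TFLOPS
print(tflops_3090 / tflops_3070)          # ~1.75
```

That 1.75x is a theoretical ceiling; any CPU, memory, or software bottleneck pulls the real-world ratio down toward the ~1.2x reported above.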

Sounds like this is all on Windows. I notice TensorFlow uses CUDA.

Thanks for the testing, great info. I wish we had more of this, and more info from the devs on the best ways to optimize. I had posts like this years back, but it was hard to get much support. Let’s keep it up.

I will probably buy a 3080 Ti, which should come in about 1/3 cheaper than the 3090 but still with a massive 20 GB (the 3090 has 24 GB), so I will test it then. I should say the 3080 Ti is not coming out till Feb 2021 at the earliest.

I make art and art movies (besides my lab’s AI research work) that can take weeks to run. FYI, I batch a lot of my work so I can run all night or on several machines. I have my own code to batch different types of jobs with Topaz Labs tools, with timing stats and such. I use the older Topaz Studio 1 for most of my work in conjunction with my own AI systems that I write (I am a professor/researcher with a lab of PhDs in AI). This is why I have my own batch code, as it moves between many commercial systems (like Topaz) and our own AI systems per frame (and between Linux and Windows at times).

Most of our work is on 1080 Tis. I literally buy them used now, as they are still a good sweet spot (we mainly want that VRAM for the AI work), and I need a lot of GPUs for all our home and lab computers.

If you have other tests or suggestions of ways to reduce bottlenecks - let us know. Thanks again.

From my tests I found that the 3090 behaves as if it had 20 teraflops.

Tested Apps:

Capture One: the 3090 is 50% faster than a Quadro RTX 5000; with a second RTX 5000 added, the pair is 65% faster (together 22 TFLOPS FP32 / 44 TFLOPS FP16).

Denoise AI: the 3090 is 26% faster than a Quadro RTX 5000 (the same margin as Quadro RTX 6000 vs. RTX 5000; I did a test here in June).

Gigapixel: an RTX 3080 is as fast as a Quadro RTX 5000 here.


I’m upgrading from a 1070 Ti card. Sounds like I should just go for the 3090… if they ever become available.
Steve

As I have had the RTX 3090 for two months now, I can tell you the price difference and power consumption compared to the RTX 3080 are far too high.

33% = $500 higher price just for the 24 GB VRAM, of which even 3 parallel VEAI instances use only up to 10 GB … 14 GB stays unused.

And you hardly get 5% to 10% better frames-per-second rates…

So I would take the 3080 today…

Thank you.
Now I just have to wait till some are in stock.
Steve

AMD Ryzen 9 5950X 16-core, 32 GB DDR4-3600, RTX 3070 8 GB, Strix X570-E Gaming, SSD 980 Pro 500 GB, all PCIe 4.0.
This is the test without overclock:
0.09 s/frame, 720×480 to 1920×1080.


You upscaled to 1080p;
I upscaled to 4K.

So the results are not comparable.

P.S. I often see 0.07 s/frame when upscaling to 1080p; it mostly depends on the source, the model, and the number of parallel tasks.

I did not see anyone getting 0.07 s/frame; that is not the point!
I suggest you try changing to a CPU with more cores. Even the best graphics card cannot do all the work alone.
It is up to you.

My RTX 3090 runs on an i9-10900K with 32 GB 4266 RAM on a Z490 chipset, PCIe 3.
All upscaling drives are 2 TB Samsung Pro NVMe with 3500/3300 MB/s.

That’s just perfect for upscaling. BTW, my CPU never goes over 50% while upscaling.
Professional hardware magazines say the difference from PCIe 3 to PCIe 4 is less than 2%.

I have no idea why you are suggesting a hardware upgrade.