AI Video Enhance with new RTX 3090 GPU?

This question is mostly directed to the team behind the AI Video Enhance program. I'm currently running an overclocked 2080 Ti with a custom cooler and getting around 0.27 sec per frame when upscaling DVD 4:3 video.
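For a sense of what that frame time means in practice, here is a minimal Python sketch converting sec/frame into total render time. The 0.27 sec/frame figure is from above; the 25 fps frame rate is an assumption (PAL DVD; NTSC would be 29.97 fps).

```python
# Convert a per-frame render time into total render time for a video.
# sec_per_frame is the figure reported above; fps = 25 assumes a PAL DVD.
sec_per_frame = 0.27
fps = 25
video_hours = 1

frames = video_hours * 3600 * fps              # 90,000 frames in one hour of 25 fps video
render_hours = frames * sec_per_frame / 3600   # total render time in hours
print(f"{render_hours:.2f} hours to render {video_hours} hour of video")
```

So even at 0.27 sec/frame, an hour of DVD footage takes the better part of seven hours to upscale, which is why per-frame speed matters so much here.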

I am wondering: what exactly gives the biggest performance increase for this program? The new Nvidia 3090 card boasts extra memory (24 GB) and also a lot more tensor and RT processing power. As for CUDA cores, they say it's double, but it's more that one CUDA core on the new 3000 series has some type of hyperthreading, so I don't think you can count on exactly twice the performance in CUDA calculations.

Damn, I went on so long… My question really is just: do you think it's worth investing in a 3090 GPU so AI Video Enhance renders faster, or do you think the performance gain won't be that big? (Most reviews, when they come out, will most likely just focus on gaming, and that's something I don't do.)


As long as only floating-point numbers are calculated and no integers, Ampere can bring its increased FP32 power to bear.

As soon as integer operations come through, Ampere falls back to Turing speed.

The way I see it, you can compare Topaz's AI software more to rendering software, so when the reviews come out you can orient yourself a bit by those.

I expect a good performance increase.

I'm also looking to get a 3090 this time, not a Quadro card.

I did a lot of testing over the last 3 days, and the differences are so small that they really don't matter.

The upsides of Quadro are: very good drivers, very good hardware (best chips, doesn't run as hot as GeForce), ECC, and support you can chat with any time.

But maybe in half a year I'll be of a different opinion, who knows.

That's a great tip, thanks. Hopefully we'll see some actual reviews before release. It seems the 3090 chips will be very hard to get after launch, because Samsung has problems with the 8 nm manufacturing process and only the finest wafers are good enough for the 3090.

I expect the 3090 to sell out instantly and then not be back on the shelves until early next year.

The board shortage is a rumor.

Something gets released today at 15:00 Berlin time, maybe an overclocking tool. (Update: no, it's LDAT.)

But I don’t know if I dreamed that or if I really read it. I really enjoy rumors.

I think I read somewhere in June that they probably limit Ampere artificially and that the brakes come off in October.

But this time they really have to make a fuss so they can sell as much as possible; the consoles have really attractive prices. (But that's not what we're concerned with here.)

On my side, I would need Capture One to export twice as fast, same for luminance layers in CO. Since 2015 I couldn't get more out of it, and the only things that help here are dual GPUs (CO scales with CUDA cores) and memory bandwidth; with the Ampere 3090 we would have both.

Look at the TensorFlow benchmarks at Puget Systems.

The 3080 is twice as fast as a 2080.

But it wasn't really optimised for this Linux benchmark; they used an old one.

The 3090 is targeted at creatives; it's the new Titan. Look at the GA102 whitepaper.

Sadly it does not support the 3090 yet; video rendered on a 3090 comes out damaged.


OK, how?

What does it look like?

I cannot post a picture here.

You can take a look at this picture, if you can access the following link:
https://tieba.baidu.com/p/7024269756

Did you try lowering the memory clock and GPU clock?

Give it a try.

But I think the 30x0 series needs to be added to the TL ecosystem first before it will work.

Slowly I have my doubts about the 3000 series.

I have heard many rumors in the meantime.

I will wait for a very long time for a GPU update.

Hi there,

First I tried Topaz VEAI v1.7.1 on an Asus RTX 3070 OC 8GB.
The render times dropped 60% compared to my 1050 Ti. Nice power boost.
(My system: i9-10900KF CPU with 32 GB 4266 RAM, Z490 chipset, Win10 Pro 2004.)
(I know my system only has PCIe 3, but professional hardware testers like Tom's Hardware say the difference between PCIe 3 and PCIe 4 with an RTX 3090 is just 1-2%.)
Hard disks are 1 TB and 2 TB NVMe … 3500 MB/s read, 3300 MB/s write.

But I still wasn't satisfied with the "long" rendering times,
so I decided to buy an Nvidia 3090 Founders Edition.

I was hoping the 3090 would be at least twice as fast as the 3070 with Topaz.
Comparing the specs it should be, and looking at 3x the price of a 3070, it REALLY should.

But before I exchanged the cards, I prepared a decent list of videos to be rendered and noted the RTX 3070 render times (frames per second). It was all kinds of different videos, in resolutions from 240p up to 2160p.

After the hardware change and a reinstall of the Studio drivers, I reloaded the 3070 queue list and tested the videos again on the RTX 3090: 1:1, 3070 vs 3090.

And the results were more than disappointing.
I won't copy my complete results, but I can say that with the 3070 it took 7 hours to upscale a 1-hour video from 720p to 4K (2160p).

With the 3090 it takes "only = still" 6 hours to upscale the same video to 4K.

So the 3090 saved me "just" 1 hour of rendering time.
Shouldn't that be at least 3 saved hours?

         RTX 3070                    RTX 3090
video1   8h30min (0.25 sec/frame)    6h40min (0.20 sec/frame)
video2   6h55min (0.25 sec/frame)    5h40min (0.21 sec/frame)
video3   8h45min (0.34 sec/frame)    7h05min (0.27 sec/frame)
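The speedup implied by these sec/frame figures can be checked with a few lines of Python (the numbers are the ones reported above):

```python
# Sanity-check the 3070 -> 3090 speedup from the reported sec/frame figures.
# A ratio of old_time / new_time above 1.0 means the 3090 is faster.
pairs = {
    "video1": (0.25, 0.20),
    "video2": (0.25, 0.21),
    "video3": (0.34, 0.27),
}

for name, (t3070, t3090) in pairs.items():
    speedup = t3070 / t3090
    print(f"{name}: 3090 is {100 * (speedup - 1):.0f}% faster")
```

This works out to roughly 19-26% per video, which matches the "hardly 20-30% faster" conclusion below.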

At this moment, my results don't justify the 3x higher price of the 3090.
The 3090 is hardly 20-30% faster than the 3070. How come???
Did I miss something? Am I calculating wrong?

(btw, before the Nvidia RTX 3090 FE I had a Zotac RTX 3090 … the results were exactly the same)
The most recent Game Ready or Studio drivers make no difference.

I hope the "bad = slow" results of the 3090 come from drivers that aren't fully developed yet. (12/2020)

OR Topaz VEAI does not fully support/use all the capabilities of the RTX 3090 yet?
I hope future updates will boost the performance.
I also noticed (via Asus GPU tools) that VEAI 1.7.1 only uses 8 GB max of my 24 GB GPU RAM.
Even when I start VEAI twice and render 2 videos at the same time, it never takes more than 8 GB.

Someone really needs to look at this bad performance…
Otherwise the 3090 will never be an option for Topaz VEAI users.
It's not just the price … that will drop one day…
But remember the 3090 needs far more electric power than the 3070:
3090 350 W vs 3070 220 W. During winter that's OK for me, as I can switch off the floor heating… The 3090 is quite a hot hairdryer :wink: I now fear a hot summer day …

Soooo IMHO the 3090 should save ~50% of the rendering time
compared to a 2080 or 3070 … otherwise it's a waste of money.

P.S. At least I never had broken videos from the 3090 like others mentioned above…

I think your expectations of the 3090 are a little unrealistic. The 3090 does cost a lot, but it's only a bit faster than the 3080 (you're mostly paying for the extra VRAM). Compared to the 3070, the compute performance (non-gaming performance) is probably about 50% higher on the 3090, based on the few benchmarks I've seen.

So while there’s probably some performance still to be gained through driver updates and updates to VEAI, 2 - 3 times faster than the 3070 seems highly unlikely.

I never expected the 3090 to be 2-3 times faster than the 3070
(that would be 200-300%).

BUT I expected it to be at least 50% faster (= 150% of the 3070),
and it is now only ~20% faster … that's disappointing.
I never see the 50% you stated. And I have no benefit from gaming performance.
The 3090 isn't "really" made for gaming anyway, since it's the only card left with SLI (twin 3090).

Especially when you see almost double the electricity cost … for months/years.

When I render, the whole system draws 600 W for hours/days.

I think quite a few VEAI users have discovered that overall performance isn't just tied to the theoretical performance of the GPU. The rest of the system plays quite a significant role, especially once you get down to 0.3 sec frame times.

lol no.

            RTX 3070          RTX 3090          Ratio
FP32/16     20.31 TFLOPS      35.58 TFLOPS      1.75 (75% more for the 3090)

What you can get is 75% more, MAX. Then there's CPU, RAM, storage, transfer rate/bus, and of course software bottlenecks in between those numbers. You're also only likely to get anywhere near the 75% boost at high transfer rates. Remember that those TFLOPS are theoretical numbers that assume you're not only maxing out GPU usage (i.e. clock and compute units in the calculation) but also maxing out bandwidth, since you'll need that to max out raster operations (or instructions; IPC or RO in the calculation). If you're interested: max theoretical FP operations = clock × instructions per cycle × number of CUs.
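That formula reproduces the table's TFLOPS figures. A minimal sketch, assuming Nvidia's published core counts and boost clocks (3070: 5888 cores at 1.725 GHz; 3090: 10496 cores at 1.695 GHz) and 2 FLOPs per core per cycle (one fused multiply-add):

```python
# Max theoretical FP32 throughput = clock * instructions per cycle * number of cores.
# Each Ampere shader core does one FMA per clock, i.e. 2 FLOPs per core per cycle.
def fp32_tflops(cores: int, boost_ghz: float, flops_per_cycle: int = 2) -> float:
    """TFLOPS = cores * GHz * FLOPs/cycle / 1000 (GFLOPS -> TFLOPS)."""
    return cores * boost_ghz * flops_per_cycle / 1000

rtx3070 = fp32_tflops(5888, 1.725)    # ≈ 20.31 TFLOPS
rtx3090 = fp32_tflops(10496, 1.695)   # ≈ 35.58 TFLOPS
print(f"3070: {rtx3070:.2f} TFLOPS, 3090: {rtx3090:.2f} TFLOPS, "
      f"ratio {rtx3090 / rtx3070:.2f}")
```

The ratio comes out to ~1.75, i.e. the 75% theoretical ceiling quoted above; real workloads sit well below it because of the CPU, RAM, and software bottlenecks mentioned.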

Sounds like this is all on Windows. I notice TensorFlow uses CUDA.

Thanks for the testing, great info. I wish we had more of this, and more info from the devs on the best ways to optimize. I had posts like this years back, but it was hard to get much support. Let's keep it up.

I will probably buy a 3080 Ti, which comes in about 1/3 cheaper than the 3090 but still has a massive 20 GB (the 3090 has 24 GB), so I will test it then. I should say the 3080 Ti isn't coming out until Feb 2021 at the earliest.

I make art and art movies (besides my lab's AI research work) that can take weeks to run. FYI, I batch a lot of my work so I can run all night or on several machines. I have my own code to batch different types of jobs with Topaz Labs, with timing stats and such. I use the older Topaz Labs Studio 1 for most of my work in conjunction with AI systems I write myself (I am a professor/researcher with a lab of PhDs in AI). This is why I have my own batch code, as it moves between many commercial tools (like Topaz) and our own AI systems per frame (and between Linux and Windows at times).

Most of our work is on 1080 Tis. I literally buy them used now, as they are still a good sweet spot (we mainly want the VRAM for the AI work) and I need a lot of GPUs for all our home and lab computers.

If you have other tests or suggestions for reducing bottlenecks, let us know. Thanks again.

From my tests I found that the 3090 behaves as if it had 20 teraflops.

Tested Apps:

Capture One: the 3090 is 50% faster than a Quadro RTX 5000; with a second RTX 5000 added, the pair is 65% faster (together 22 TFLOPS FP32 / 44 TFLOPS FP16).

Denoise AI: the 3090 is 26% faster than a Quadro RTX 5000. (Same margin as Quadro RTX 6000 vs. RTX 5000; I did a test here in June.)

Gigapixel: an RTX 3080 is as fast as a Quadro RTX 5000 here.

I'm upgrading from a 1070 Ti card. Sounds like I should just go for the 3090… if they ever become available.
Steve

As I've had the RTX 3090 for 2 months now, I can tell you the price difference and power consumption compared to the RTX 3080 are far too high.

33% = $500 higher price just for the 24 GB VRAM, of which even 3 parallel VEAI instances use only up to 10 GB … 14 GB stays unused.

And you hardly get 5% to 10% better frames-per-second rates…

So I would take the 3080 today…

Thank you.
Now I just have to wait until some are in stock.
Steve