VEAI Performance

Hi, I have a Ryzen 7 2700, a GTX 1660, and 16 GB of RAM, and I get 0.30 s/frame with Dione DV from SD to HD. Do you find that reasonable?

Please,

a crop tool, like in Gigapixel.


Yes

I guess with a better GPU like the RTX 3070 you could get 0.13 s/frame on the same task.
On my RTX 3090 I get 0.08 s/frame on a similar task.

OK, thanks! :slight_smile:

I just ordered an HP desktop with a Ryzen 7 and 16 GB. It will be here on Monday and I'll begin testing.

My present computer is a 2013 Mac Pro. The D500 with one channel dedicated to VEAI takes about 2.6 s/frame, which is agonizingly slow. I'm in at $600 for the new hardware and would be reasonably happy with 0.3 s/frame, as this is a 100x increase.

The RTX 3090 that @mxrevolution referenced is around $2700.

It will change your life! The RTX 3000 cards are too expensive for me ... haha

I messed up in my post. It's a 10x improvement over the Mac Pro, not 100x.

I'm multitasking.
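
A quick back-of-the-envelope check of those numbers (the 90-minute, 30 fps source is my own assumption, just for illustration):

```python
# Rough speed-up check using the seconds-per-frame figures quoted above.
old_spf, new_spf = 2.6, 0.3      # Mac Pro D500 vs. the new Ryzen/GTX box
frames = 90 * 60 * 30            # hypothetical 90-minute, 30 fps source

print(f"speed-up: {old_spf / new_spf:.1f}x")          # ~8.7x, i.e. roughly 10x
print(f"old:  {frames * old_spf / 3600:.0f} hours")   # ~117 hours
print(f"new:  {frames * new_spf / 3600:.1f} hours")   # ~13.5 hours
```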

Hey guys, has anyone experienced a loss in performance after updating to 1.9.0?

I was previously on 1.8.0 and wanted to run a quick comparison test of Artemis HQ v10 vs v9, so I took an old 1-minute, 320p clip and tried to upscale it to 4K (I know VEAI only does 2x and 4x; this was mostly to get a bigger picture and to spot differences).

For reference, I was previously getting around 0.2 seconds per frame for that, and now I'm getting around 0.6 :expressionless: What makes it worse is that the preview button seems to be working properly, as it only takes the normal 0.2 seconds. And after checking GPU utilization, it's not reaching the full 100% while processing the video.
On 1.8 I was getting 100% utilization while doing that. While creating previews the GPU does still go up to 100% in brief spikes right now.
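
One way to sanity-check this is to log utilization while a render runs. A minimal sketch, assuming a single NVIDIA card with `nvidia-smi` on the PATH (stop it with Ctrl-C):

```python
# Sample GPU utilization and VRAM use once per second via nvidia-smi.
import subprocess
import time

samples = []
try:
    while True:
        out = subprocess.check_output(
            ["nvidia-smi",
             "--query-gpu=utilization.gpu,memory.used",
             "--format=csv,noheader,nounits"],
            text=True,
        ).strip()
        util, mem = (int(x) for x in out.split(", "))  # one GPU assumed
        samples.append(util)
        print(f"GPU {util:3d}%  VRAM {mem} MiB")
        time.sleep(1)
except KeyboardInterrupt:
    if samples:
        print(f"avg {sum(samples) / len(samples):.0f}%  peak {max(samples)}%")
```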

Is this normal? Should I delete everything and do a clean installation? If it helps, I have Max VRAM Usage set to High even though my card only has 4 GB of VRAM. Though I don't think that should cause any trouble, as it was like that on 1.8 too.

That is not entirely correct ... well, in my case.
I "only" paid $1,499 for the original NVIDIA Founders Edition in mid-December; I was just lucky ...

But you are also right, there are some "gangsters" asking up to $3,200,
as these cards are still hardly available on the market.

The RTX 3070 is available for $600 and has 80% of the RTX 3090's power;
the RTX 3080 goes for $1,000 at 90% of the RTX 3090's power ...

Before I knew VEAI I was doing just fine with an ultra-slow 1050 Ti,
as I am no gamer anymore.
As soon as you get into the amazing world of AI upscaling, you WANT more GPU power :wink:

On a single VEAI instance I never get 100% GPU utilization (RTX 3090),
no matter what version: 1.7.0, 1.8.0, 1.9.0 ...

I cannot confirm your experience: 1.9 is as fast as 1.8 on my system.

Even on 3 parallel VEAI instances I hardly get 100% use of the GPU ...
(which is what some developers recommended ... to make use of the new GPUs).

The VEAI algorithms still don't push the RTX 30x0 cards to their limits.
But Topaz Gigapixel does ... so I expect VEAI to step up soon ...

I don't expect better utilization from re-installing the app,
BUT it can help to do a fresh system reboot before upscaling.

If you have "only" 4 GB of VRAM ... you need to set the preference to VRAM use: LOW anyway;
otherwise you can lose hours of upscaling work to a crash.

I tried that HIGH setting on a 4 GB 1050 Ti ... and I regret it.


Hey, thanks for the answer, I appreciate the input.

However, I was expecting 100% utilization precisely because I don't have a high-end card like yours, so I thought my card was actually weak enough that it would hit 100% utilization easily. (I have a GTX 1650.) So, all things considered, and given the shortage of new video cards, I cannot upgrade right now.

Btw, can you explain why I should put VRAM usage on low? I'm probably too dumb to understand it, but I thought the slider was suggesting that, of the 4 GB I have, high would use all of them, and putting it lower might use only, let's say, 2 GB or so. Now I feel a bit confused; maybe I've been using it wrong this whole time :expressionless:

Anyway as said, thanks for your input, I appreciate it!

I wish I could explain.
I just remember that when I used the high setting I had some crashes,
and there was a recommendation in this forum not to use high on 4 GB cards.

I understood the max-VRAM setting like this:

high only on 10+ GB cards
medium on 4+ GB cards
low on cards with 4 GB or less
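
That rule of thumb as a sketch (the thresholds are this thread's guesses, not official Topaz guidance):

```python
# Map a card's VRAM (in GB) to the Max VRAM Usage setting suggested above.
def suggested_vram_setting(vram_gb: float) -> str:
    if vram_gb >= 10:
        return "high"
    if vram_gb > 4:
        return "medium"
    return "low"

assert suggested_vram_setting(24) == "high"    # e.g. RTX 3090
assert suggested_vram_setting(8) == "medium"   # e.g. RTX 3070
assert suggested_vram_setting(4) == "low"      # e.g. GTX 1650 / 1050 Ti
```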

I must confess that on my 24 GB card ... I have never seen VEAI use more than 10-12 GB ... even when I run 3 parallel instances ...


Thanks. Yesterday I forgot to answer; after you told me that, I started looking everywhere for where the VRAM amounts were stated, and the only thing I could find was two mentions of the subject in the VEAI 1.7 release notes. Indeed, a developer said not to set high unless you have more than 6 GB of VRAM, but there wasn't any proper explanation, nor a suggestion of correct values for other cards.

I did some tests with the slider at different levels and couldn't find a difference :confused: I guess it doesn't matter for my use case ...

Still thanks!!!

I have the new setup running:

https://www.amazon.com/gp/product/B08RF2TXXP

I have doubled the memory to 16 GB:

https://www.amazon.com/gp/product/B0143UM4TC

I put the source file on the local SSD and am running a test with a file I previously ran on my Mac Pro. The performance is averaging 0.25 s/frame. This is the 10x performance increase that was anticipated. It's fully within acceptable limits for me at this time.

My cost is < $700

I'm disabled and on a pension, so my budget is tight.

Now, to buy Babylon 5 from HBO Max!!!

Gigapixel still runs on the old engine; no wonder it hammers your GPU.

Whoever holds most of the market share will set the direction of software development. NVIDIA, AMD, or Intel? Or jointly supported DirectML.

I don't know why VEAI doesn't demux the audio, render the video, then tack it back on at the end?
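
For anyone wanting that workflow today, here is a rough sketch of doing it by hand with ffmpeg (file names are hypothetical, and it assumes the source audio can simply be stream-copied):

```python
# Demux the audio, upscale the video in VEAI, then remux the original audio.
import subprocess

src, upscaled, final = "source.mp4", "upscaled.mp4", "final.mp4"

# 1. Extract the audio from the original without re-encoding it.
subprocess.run(["ffmpeg", "-i", src, "-vn", "-acodec", "copy", "audio.m4a"],
               check=True)

# 2. (Upscale the video in VEAI here, producing `upscaled`.)

# 3. Mux the original audio onto the upscaled video, stream-copying both.
subprocess.run(["ffmpeg", "-i", upscaled, "-i", "audio.m4a",
                "-c", "copy", "-map", "0:v:0", "-map", "1:a:0", final],
               check=True)
```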


I assume you've got a 3090. I know they're still working on optimizing performance on Ampere. My 3080 practically never uses more than 5-6 GB of VRAM either.


Why not have a calibrated VRAM slider ranging from 2 GB to 24 GB, in 2 GB increments? Then all a user needs to do is set it to the VRAM they have in their GPU.
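
A sketch of how such a slider could snap to a card's VRAM (purely illustrative, not an existing VEAI feature):

```python
# Snap a detected VRAM size (in GB) down to the nearest 2 GB slider step.
STEPS = list(range(2, 25, 2))  # [2, 4, 6, ..., 24]

def snap_to_step(vram_gb: float) -> int:
    eligible = [s for s in STEPS if s <= vram_gb]
    return eligible[-1] if eligible else STEPS[0]

print(snap_to_step(11))   # 10 -> a GTX 1080 Ti (11 GB) lands on the 10 GB step
print(snap_to_step(24))   # 24 -> RTX 3090
```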

I think they'll make it all work as it should, and what has been done here in one year is really worth applauding. :clap:

By the way, at the moment I'm interested in a Tesla T4; it's a Turing-architecture GPU with 65 teraflops of FP16 (its tensor-core figure).

The RTX 3090 has 35 TF of FP16.

Update: Seems like ServeTheHome benchmarked the T4 some time ago, and the T4 performs like a 2060.