I’ve searched through all the RTX 4080 SUPER benchmarks I could find, and mine are consistently lower than the rest of the community’s. I’ve had v4.0.9 for a long time and thought upgrading to the latest version might improve performance, but it actually got worse.
I’m posting my benchmark results from v5.5.0 and v4.0.9. In non-benchmark, real-life scenarios the performance is slow in general.
The only thing I can think of is RAM. Mine runs at 3600 MHz, which is supposed to be the best speed for this CPU. If you have RAM of that speed, check in Task Manager whether it’s actually running at it.
I suppose I am using a year-old driver: 537.58. It’s the last driver they made that actually turns off the monitor after inactivity when the refresh rate is set higher than 60 Hz.
My memory speed is currently at 2133. I swapped my RAM modules for DDR4-4000 a few weeks ago, but it made no difference, so I returned them. I’m currently on the 560.94 Nvidia driver.
I currently have a super ultrawide monitor at 120Hz. Maybe that could be it?
Did you enable the RAM speed in your BIOS? It’s always at the default low speed if you don’t.
The monitor makes no difference. Mine’s at 244Hz and I have it set to that for all of Windows, not just full screen applications.
Oh wow, changing the RAM speed in my BIOS already improved the numbers by 20–30%. It was set to Auto, so I changed it to 3000. Here are the new numbers:
You are just hurting me… ha-ha… by starving that RTX 4080 SUPER!
Before you pull the trigger on just adding more RAM, check out the great Black Friday deals (active now at many outlets) on a new motherboard chipset that could easily give you an additional ~80%+ performance jump.
I don’t understand how it could get worse. One difference is that I now have 2 RAM modules (2x16GB) instead of 4 (4x8GB) as before, but I don’t think that should make a difference.
Noob mistake. I had the RAM sticks in slots 1 and 2; I’ve moved them to 2 and 4 now. That restored my old performance, but the difference between 3000 MHz and 3600 MHz isn’t that significant.
It’s too late now and not worth buying other RAM sticks, but the timings on the RAM can make an impact. Generally, the smaller the timing numbers, the faster the RAM.
So for example, I have some DDR3 sitting on my desk. It’s at 1333 MHz with timings CL9-9-9-24. If I were to swap it for 2666 MHz RAM but with timings like CL20-20-20-60, it would be slower.
And there’s also the matter of if your CPU memory controller can run at the speeds and timings the RAM is advertised at. There are CPUs that cannot.
While your example does show higher latency on the faster memory, that does not necessarily mean the PC will perform slower.
As a general rule, every time you double the speed you double the timings too, since they are a relative measure against the RAM clock. So 1333 MHz CL9-9-9-24 would have identical latency to 2666 MHz CL18-18-18-48, except the latter has double the bandwidth. And even when the timings more than double, whether it ends up faster or slower depends on what your PC is doing; it’s never as simple as “faster RAM with worse timings is bad”, although I would expect Topaz to be latency sensitive.
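The rule above is easy to check with simple arithmetic: first-word latency in nanoseconds is roughly the CAS latency times the clock period, i.e. CL × 2000 / data rate (the factor 2000 accounts for DDR making two transfers per clock). A quick sketch using the example kits from this thread:

```python
# First-word latency (ns) ~= CL * 2000 / data_rate (MT/s).
# The 2000 converts the DDR data rate (two transfers per clock)
# into the clock period in nanoseconds.

def first_word_latency_ns(cl: int, data_rate_mts: int) -> float:
    return cl * 2000 / data_rate_mts

# The DDR3 kit from the example above:
print(round(first_word_latency_ns(9, 1333), 1))   # 1333 MT/s CL9  -> 13.5 ns
# Doubled speed with doubled timings: identical latency, double the bandwidth:
print(round(first_word_latency_ns(18, 2666), 1))  # 2666 MT/s CL18 -> 13.5 ns
# The CL20 kit from the example: only slightly worse latency, still double bandwidth:
print(round(first_word_latency_ns(20, 2666), 1))  # 2666 MT/s CL20 -> 15.0 ns
```

So the “slower” 2666 CL20 kit is only about 1.5 ns behind on latency while delivering twice the bandwidth, which is why the answer depends on the workload.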
Before I purchased a TVAI licence, I ran some AI rendering with Real-ESRGAN, an open-source image and video upscaling/enhancement model. It hasn’t been updated for some 3 years now, but its different AI models, using different complex algorithms, scale with the file size of the model at hand. Real-ESRGAN ships with 3 pre-trained AI models: one for “real-life” enhancement of images, one for anime upscaling of images, and one for anime upscaling of frames extracted from a video. The two models focused on enhancing a single image are about 15x the file size of the model made for video. Since the frames extracted from a video can easily number more than 100,000 depending on length and framerate, running either of the more refined single-image models on every extracted frame would take days.
Switching from the video-focused model to one of the image-focused models increases the time to enhance all frames by roughly 5–10x. The smaller video model took about 8 hours to enhance a 1-hour 1080p video at 30 fps; the image-focused models would take well over several days to render all the extracted frames.
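The back-of-the-envelope arithmetic behind those estimates looks like this (the 8 h figure is from my run above; the 5–10x factor is the rough ratio mentioned, not a measured value):

```python
# Rough render-time estimate for per-frame video upscaling.
frames = 1 * 60 * 60 * 30            # 1 hour of 30 fps video -> 108,000 frames
video_model_hours = 8.0              # observed: small video model on the whole clip
secs_per_frame = video_model_hours * 3600 / frames
print(frames)                        # 108000
print(round(secs_per_frame, 2))      # ~0.27 s per frame

# If an image-focused model is roughly 5-10x slower per frame:
for factor in (5, 10):
    total_hours = video_model_hours * factor
    print(factor, round(total_hours / 24, 1))  # 5x -> ~1.7 days, 10x -> ~3.3 days
```

Even at the optimistic 5x end you’re past a full day of rendering for one hour of footage, which matches the “days” estimate above.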
The more trained an AI model gets, the larger it becomes both in file size and knowledge.
I suspect that is the reason for the slower render rates. You are essentially choosing to stay with an older version because it renders faster, but it will yield worse results.
That’s not what they sound like they’re reporting, though. Most people are saying the same version of the AI model needlessly runs slower in the newer version of TVAI. If those reports are true, the only explanation I can think of is that the new TVAI UI is forcing 4X upscaling where the older versions used 2X.