Thank you for the compliments.
It was very nice to photograph the butterflies; since they were busy eating, I only had to wait until they were in the right position to photograph them.
A long focal length is necessary to keep enough distance, because they flee very quickly. It depends on the species, though; each butterfly species behaves differently.
Generative AI requires a significant amount of VRAM due to its large model size. You can verify this by checking the ProgramData Model folder, where the Generative AI model occupies 1.5GB of space, whereas other models for upscaling, sharpening, and denoising are only around 100-200MB in size. Moreover, to ensure optimal performance, the software needs to load the AI model into VRAM; otherwise, it will run extremely slowly. It is possible to further reduce VRAM usage by implementing techniques such as xFormers or enabling settings like medvram or lowvram. (It seems xFormers is only available for Nvidia GPUs.)
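For reference, the xFormers/medvram/lowvram options mentioned above come from the Stable Diffusion world, where they are launch flags rather than in-app settings. A sketch of how they are used, assuming the AUTOMATIC1111 stable-diffusion-webui (Topaz itself does not expose such flags):

```shell
# Launch AUTOMATIC1111 stable-diffusion-webui with reduced VRAM usage.
# --xformers : memory-efficient attention (Nvidia GPUs only)
# --medvram  : split the model and move parts to VRAM only as needed
# --lowvram  : more aggressive splitting; fits in very little VRAM but is much slower
python launch.py --xformers --medvram
```

The trade-off is speed for memory: medvram is a modest slowdown, while lowvram is noticeably slower but lets the model run on cards with very little VRAM.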
Currently, the GPU utilization is low because the software is still in the beta stage, and performance optimization is underway. The Remove tools still rely on OpenVINO, ONNX fp16, and CoreML, as TensorRT for Nvidia GPUs is not available yet.
Yes, Nvidia GPUs usually consume less VRAM than AMD GPUs, including during gaming, because Nvidia GPUs have better data compression techniques that help reduce VRAM usage.
I remember watching a YouTube video where a YouTuber tested several games at 4k maximum detail and texture settings. With a 16GB AMD GPU, the frame rate dropped and the game froze due to a lack of VRAM. However, with a 16GB Nvidia GPU, the game played smoothly without any problems.
Furthermore, they tested the same scene with a 24GB AMD GPU and found that it consumed around 18-20GB of VRAM, while the 24GB Nvidia GPU only consumed 16GB of VRAM.
(I can't remember the exact numbers in the video, so they may not be precise.)
Just to clarify, I am not saying Nvidia is better than AMD. I believe they simply use different approaches. Nvidia GPUs may require extra hardware or chip space for data compression, which enables them to save some money by having less VRAM.
On the other hand, AMD GPUs may save some money on data compression in exchange for allocating more VRAM to their GPUs, making the specs more appealing to customers. This is because most consumers tend to compare numbers without fully understanding the technology behind them.
So what you are saying is, you don't need as many GB of VRAM on Nvidia GPUs. So if AMD needs 8GB, then Nvidia only needs 4 or 6GB to run as well. Not all GPUs are the same; compare the RTX 4060 with the RTX 3050, both with 8GB of VRAM, and the 4060 is much faster. Not sure how relevant gaming tests are when using a GPU for photo editing. Should users be downloading Studio drivers for their GPUs, instead of gaming drivers?
I have read similar things about AMD, that's why I'm getting Nvidia.
A rough and dirty try resulted in something only remotely useable - maybe for small mobile phone screens, but not for viewing on a bigger monitor and for sure not for printing.
And even this needed MANY steps (30+) and would need heavy afterwork with the copy stamp to get some skin texture back, plus the borders aren't clean/sharp and it trounced the model's shoulder. So I'm not sure if you wouldn't be faster using just the copy stamp from the start.
If you have a drawing tablet with a stylus, it might be more convenient and more exact than having to do this with a mouse, but still, this kinda defeats the purpose of AI (getting decent results with little effort).
So (Topaz') AI currently isn't really up to that task, I guess.
For the same amount of VRAM, Nvidia GPUs have the advantage.
But if you ask smaller-VRAM Nvidia vs. larger-VRAM AMD, I don't think there is a simple answer to that question. The effectiveness of data compression can vary significantly depending on the application.
It all boils down to price/performance. How you get there is irrelevant.
So Nvidia has better compression, but then at least here you often get an AMD GPU with more RAM for the same price or less than the Nvidia card…
I also played with this image yesterday; it is a bit difficult to deal with in TPAI. It requires lots of trial & error and regenerating multiple times.
On the other hand, I tried Lama Cleaner, and it removed the tattoo within seconds. Here is the result
But in Lama Cleaner the uniform grey skin color of the arm is off / looks unnatural. And while not as over-smoothed as the TPAI result, the "texture" really isn't a texture but just noise if you look closer :-/
Maybe Lama Cleaner and then some afterwork with the copy stamp would be the fastest in this case.
Yes, AMD prices tend to be lower because of lack of demand. Although looking at the choices from my supplier, prices don't seem that different. The RTX 4060 is around £300; what AMD GPU matches that?
I will give PAI more time; I've only been using it for 2 weeks. I really only use Sharpen, DeNoise and Gigapixel. If I buy the DxO raw converter, then I won't need to use DeNoise much.
I meant the RTX 4060 - which really isn't the best/fastest card, btw. It's limited by several bad design decisions.
But the PC was built for a totally different use case, and TPAI / TVAI run there just for interest. And for the intended use the RTX 4060 was a good choice price/performance wise. A 3060 would've also done, but that wasn't really available at the time.
For photo / video work I have the MacBook Pro and the Mac Studio.