I just retired and, while I have plenty of time, I cannot afford to buy a 12th-generation CPU and an Nvidia 4xxx card. Nor will I buy a Mac.
I would be very interested in the CPU and GPU specs from users who have built “low-end” systems that work with ALL FEATURES of Video Enhance 3.0. Basically, what CPU model number and what GPU model will actually do Gaia and Theia, interpolation and stabilization, up to 2K video? Desktop computer model numbers are not helpful. The free Belarc Advisor download will give you your CPU and GPU model numbers.
Currently I can run Topaz Video Enhance 3.0 from 640×480 to 1920×1080, up to the Proteus Fine-tune/enhance level. Gaia and Theia crash my PC. Interpolation and stabilization give “process failed” errors. My existing system has a fourth-generation 3.8 GHz Intel i7-4770 and an NVIDIA GTX 970, with 12 GB of RAM.
Topaz support is understandably not interested in troubleshooting obsolete hardware. I am prepared to upgrade but as economically as possible. Speed is NOT essential.
Take a look at the Task Manager Performance tab and see if anything is maxing out (besides VRAM), which could indicate a bottleneck. Do you have an SSD in there? An HDD can be quite the bottleneck with video. VRAM is going to be the likely issue. You mention 12 GB of RAM, but I assume that’s normal system RAM, not VRAM. My guess is that 6 GB of VRAM is going to be the minimum for this type of work. Your card also lacks AI cores for caching the models; this is game-changing, and running models without them may be a source of your crashes.
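If you want a number instead of eyeballing Task Manager, you can query the VRAM directly with `nvidia-smi` (it ships with NVIDIA’s driver). A minimal sketch, assuming `nvidia-smi` is on your PATH; the parsing is split into its own function, and the helper name `parse_smi_csv` is just my own:

```python
import shutil
import subprocess

def parse_smi_csv(text):
    """Parse 'memory.used, memory.total' CSV rows (values in MiB) from nvidia-smi."""
    gpus = []
    for line in text.strip().splitlines():
        used, total = (int(field.split()[0]) for field in line.split(","))
        gpus.append({"used_mib": used, "total_mib": total})
    return gpus

if shutil.which("nvidia-smi"):  # only query if the NVIDIA driver tools are installed
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=memory.used,memory.total",
         "--format=csv,noheader"],
        capture_output=True, text=True,
    ).stdout
    for i, gpu in enumerate(parse_smi_csv(out)):
        print(f"GPU {i}: {gpu['used_mib']} / {gpu['total_mib']} MiB VRAM in use")
else:
    print("nvidia-smi not found; check Task Manager > Performance > GPU instead")
```

Run it while TVAI is processing: if used VRAM is pinned near the total, that’s your bottleneck.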
Now that the software uses GPU-based encoders for exporting, having a newer-gen onboard encoding chip is likely to increase your stability. The 2xxx series onward has some insane encoding chips for this (with Nvidia leading the way). Intel chips have really good encoders too, and yours may have one. Check to see if you can export using Intel’s; if not, google how to enable QuickSync so you can take advantage of it.
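One way to see which hardware encoders your machine exposes is to ask an FFmpeg build for its encoder list and look for the vendor-specific names. A sketch under some assumptions: it needs a standalone `ffmpeg` on your PATH (TVAI bundles its own copy, so this only approximates what the app itself sees), and `hw_encoders` is my own helper name:

```python
import shutil
import subprocess

# Substrings that mark vendor hardware encoders: NVIDIA NVENC, Intel QuickSync,
# AMD AMF, and Apple VideoToolbox.
HW_HINTS = ("nvenc", "qsv", "amf", "videotoolbox")

def hw_encoders(encoder_listing):
    """Pick hardware-encoder names out of `ffmpeg -encoders` style output."""
    names = []
    for line in encoder_listing.splitlines():
        parts = line.split()
        if len(parts) >= 2 and any(hint in parts[1] for hint in HW_HINTS):
            names.append(parts[1])
    return names

if shutil.which("ffmpeg"):
    out = subprocess.run(["ffmpeg", "-hide_banner", "-encoders"],
                         capture_output=True, text=True).stdout
    print("hardware encoders:", hw_encoders(out) or "none listed")
else:
    print("ffmpeg not found on PATH")
```

If `h264_nvenc`/`hevc_nvenc` or the `_qsv` variants show up, your GPU or iGPU encoder is available at the driver level.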
It ran well for me on an 8th-gen i5 with a 1660 Super (which still holds its ground without being pricey).
I moved up to a 3070 Ti and it’s snappier, but not proportionally to the cost. I’m upscaling a lot of videos though, so eventually it’ll pay off.
If you are thinking of upgrading the whole PC while staying on a budget, I recommend just looking for a laptop deal. You’ll get a modern GPU and CPU, and the prices can be a steal now. My office snagged some Dell XPS laptops with 3050s in them for $600 each a few weeks back. Check the “computers” section on slickdeals.net for a few weeks and see what comes up. That’s my best advice, being a computer-buying miser myself while still wanting to mess around with video/AI.
If it boils down to a choice between GPUs, more VRAM is going to give you more performance and fewer crashes with this software (well, theoretically). Get something with Tensor cores if you are working with AI models (these cores do the work of the model). Any modern CPU is likely to work; no need to overspend there.
If you just want to upgrade your GPU on the cheap, I’d get a used RTX 2070 Super off of eBay for just under $200.
I am testing Video AI on my Lenovo laptop. It offers only integrated Radeon RX Vega 8 graphics; the CPU is an AMD Ryzen 7 5800U and system memory is 16 GB. Processing times are long, of course.
I have a very similar system, but with an i5-4690K and a GTX 1060 3GB card. It has 8 GB of RAM, but it never seems to use more than 3 GB. It has never crashed, but I have also never tried to go as high as 2K on it. When it’s done with its current task, maybe I can give it a try.
Hi Matt - would you want to comment on my hardware and whether it makes sense to upgrade in order to speed things up with Photo AI autopilot? I do only stills/photos, upload the RAW file (24 MB) to Photo AI, and let autopilot show me what it reckons (then usually tinker around the edges and process). The preview for each pic takes some 20 seconds, and rendering when I ‘process’ is about 1 minute per file. I upload a bunch (10-20 pics), process, and then walk away and make a cuppa. Ideally I’d like to see this go in less than half the time… My system:
Processor Intel(R) Core™ i5-10400F CPU @ 2.90GHz 2.90 GHz
Installed RAM 16.0 GB (15.9 GB usable)
System type 64-bit operating system, x64-based processor
NVIDIA GTX 1660, graphics boost clock 1830 MHz, memory data rate 8 Gbps, memory bandwidth 192 GB/s, available graphics memory 14280 MB, dedicated video RAM 6144 MB GDDR5, shared system memory 8136 MB, bus PCI Express x16 Gen3
I don’t really know what a lot of this means… the question is, would 32 GB or more of RAM make a difference, or will a 2× speedup require an upgrade to the processor and/or video card as well? Many thanks!
I have not used Photo AI yet, so I’m not familiar with its engine and what hardware it takes advantage of. I would follow the same advice I posted above: look at the hardware/performance monitor in Windows and see if anything is maxing out while processing. That will clue you in on what to upgrade. If you don’t see anything maxing out, it’s likely a lack of optimization on the software side.
Regarding the feature of maximum 6 processes in the TVAI settings, I’d be most interested to hear from anyone who has managed to get 5 or more processes to run in parallel and what their hardware is that allows that to happen. For some bizarre reason (which I’m awaiting a response from Topaz) TVAI limits my Mac Studio to 4 processes per app instance. If I start another instance of TVAI, I can easily run another 4 processes in parallel with memory to spare! My Mac Studio is the base spec 10 core CPU with 32 GB unified memory.
Today is the first time I actually tried 2 concurrent processes of TVAI. That seems to be the limit for me: all my P-cores are almost maxed out (good saturation), and my RTX 3080 Ti uses 8 GB of memory (4 GB for each TVAI instance). From the latter alone, I cannot add a third, as 12 GB would assuredly crash something, since the card only has 12 GB max (and I need a wee bit for YouTube and such too). So, 2 is really the sweet spot for me, also CPU-wise (i9-12900K).
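That back-of-the-envelope math (about 4 GB per instance on a 12 GB card, keeping a bit aside for the browser and desktop) generalizes to a one-liner. A hypothetical sketch; the 4 GB-per-instance figure is just what I observed, and it will vary with model, resolution, and driver:

```python
def max_parallel_instances(vram_gb, per_instance_gb, headroom_gb=1.0):
    """How many TVAI instances fit in VRAM, keeping some headroom for the OS/browser."""
    return max(0, int((vram_gb - headroom_gb) // per_instance_gb))

# RTX 3080 Ti: 12 GB total, ~4 GB per instance, ~1 GB kept for YouTube etc.
print(max_parallel_instances(12, 4, headroom_gb=1.0))  # 2 fit; a third would overcommit
```

So on my card two instances is the ceiling, which matches what I saw in practice.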
Update - I’ve heard back from support that 4 parallel processes is a hard-coded maximum per app instance (I’ve just checked and that still applies to v3.08). That’s despite the maximum of 6 being presented as an option in the settings. Well, at least I can start a new instance and run more in parallel. I’ve found that 6 in total is optimum on my Mac Studio 10 core / 32 GB.