You can’t compare specs between Apple and PC computers.
Apple's built-in M1 and M2 chips are designed to perform certain tasks much faster than a typical PC chip.
It is apples and oranges. I guarantee you that a Mac Mini M2 at $499 will easily convert most video formats faster than any $1000 PC-based system. Adobe products will run faster on equipment at half the price if you go the Apple route.
That is the problem with this software: for Mac users, it does not utilize the machine's full capabilities. Other big brands, like Adobe, write code that takes advantage of the high-performing GPU cores in Mac computers.
Now it appears that this software is not designed to fully utilize Apple's GPUs. That is their choice, but even if it says it is compatible with an Apple M2 chip, that does not mean it fully utilizes it.
AI is moving super fast. Other options for Macs are appearing quickly, and for professional video producers this software is just becoming a headache. It should not crash anymore.
For PC users, you can add any GPU you want, at any price, at the cost of high electricity usage and noisy fans.
I have seen zero performance gains and a bizarre workflow with no real-time previewing after going to V4. For me it comes down to the value of my time. For a hobbyist, maybe not.
The lack of even near-real-time viewing of the rendering ends my interest in this product. I want to see whether it is rendering the way I want it to, not find out after I have rendered the entire project.
Hardware is hardware. A processor that runs at 1.50 GHz is slower than a processor that runs at 3.00 GHz. It does not matter if it is Intel, AMD, or whatever brand is out there.
Having 8 GB of RAM is not the same as having 16 GB.
A brand-new AMD GPU is not the same as a brand-new NVIDIA GPU, nor as an Intel GPU.
macOS is not the best thing in the world; many things do not work well on it.
Windows is no marvel either, and while Linux may be optimized, doing advanced things there is mostly command-line based.
If I sell you Windows PC hardware with a 3.00 GHz Intel i9, 32 GB of RAM, and a 2023 NVIDIA graphics card at $XXX, it is better value than a MacBook Air with a 2.00 GHz CPU, 16 GB of RAM, and an early-2022 AMD graphics chip. Facts are facts. Look up some benchmark comparisons between your favorite Mac hardware and similar Windows PC hardware: cheaper and better performance.
Arm processors can execute many more millions of instructions per second than Intel processors. By stripping out unneeded instructions and optimizing pathways, an Arm processor can deliver outstanding performance while using much less energy than a CISC-based processor.
Sounds like it levels out. ARM can execute lots of simple instructions, while x86 executes more complex instructions, each equivalent to several simple ones. I'm not finding anything that supports your claim of "many more millions of instructions per second than Intel processors".
The most I found was that comparing them is like comparing a car on water and a boat on a road.
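The "levels out" intuition can be sketched with a toy throughput model. The numbers below are made up purely for illustration (they are not real IPC figures or benchmarks), but they show why a raw instructions-per-second comparison between RISC and CISC cores can mislead:

```python
# Purely illustrative numbers -- not real benchmarks or real IPC figures.
# Toy model: work done per second = instructions retired per second
#            * average amount of work each instruction performs.
def work_per_second(million_instr_per_sec: float, work_per_instr: float) -> float:
    """Toy throughput model: raw instruction rate times instruction 'size'."""
    return million_instr_per_sec * work_per_instr

# RISC-style core: retires more instructions, each doing less.
risc = work_per_second(6000, 1.0)

# CISC-style core: retires fewer instructions, each doing more
# (e.g. one memory-to-register add that RISC splits into load + add + store).
cisc = work_per_second(2000, 3.0)

print(risc, cisc)  # 6000.0 6000.0 -- equal work despite a 3x instruction-rate gap
```

Real cores differ in pipelines, caches, and memory bandwidth, so this proves nothing about actual chips; it only shows why "more instructions per second" alone is not a meaningful win.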
Ouch, ouch. Apparently you don't know recent Apple architecture at all, which of course explains your previous statement.
Yes, earlier MacBooks were quite slow because they relied on PC hardware (Intel CPU with Intel or AMD graphics) for the mobile notebook/mini-PC segment (so exactly what you suggested using :-/), and that was the main reason Apple switched to their own CPUs/GPUs.
So: yes, current PC hardware IS faster, at the cost of power consumption and heat (and it is nearly as expensive if you look at the prices for 4080 or even 4090 GPUs), but not really so with components from the mobile/notebook sector.
Besides, gaming laptops from a known brand with decent build quality are really quite expensive. A base-configuration M2 MacBook Air can definitely compete there, just not for gaming.
I thought you were unhappy about the performance of your new gaming laptop.
Personally, I do not recommend using a laptop for TVAI. Laptops are not designed to work at 100% CPU/GPU load 24/7. They easily overheat and start thermal throttling. For long periods of heavy-load AI processing, a real desktop tower is preferred (not a mini PC). If you have a powerful CPU, it is preferable to use a decent CPU cooler, such as a beefy dual-tower CPU cooler or a 360 AIO, to prevent the CPU from thermal throttling.
You quoted me out of context: I am complaining about how awful the performance of TVAI is, not my laptop PC. My laptop runs any game at ultra settings and won't lag for a second, especially at 60 fps. I always preferred PC towers until I realized CPUs change their sockets roughly every two years, so if I am not upgrading my PC every two years, even my motherboard becomes outdated, and I may as well just get a new tower and carry over whatever good hardware I can from the previous one. I prefer a laptop that will last a minimum of five years running at top notch and deal with that, rather than needing to swap the motherboard too if I want to upgrade my CPU or RAM after two years, with every other component going outdated along with it.
It pains me to read these words. I've done that. It wasted so much of my time. It takes far less time to render around five ten-second previews of scenes that are harder for the AI to get right than to stop the full processing near the end when you see it's going wrong. That might be a good way to learn what types of things the AI struggles with, but once you know those things, don't waste your time. Go straight to them and run previews.
At this point the preview rendering should be optional. I prefer to pick certain important points in the video (for example, a very low-res section or a very decent-res one) and do 10-second previews; then, if I am happy, render the whole thing. If the live preview is going to consume rendering power and slow the process down, that's a no-go for me: I prefer max speed over live preview. But if you need it, it should be available. I think the software has an option to turn it on or off; they need to fix the live preview and let us choose whether it's on or off.
Platform wars aside, I chose not to renew my Topaz products this year, as I've seen performance on my M1 Max Studio get worse rather than better. I plan to renew in 2024 if/when I see improvements as the software continues to mature. I was a Windows user for decades and I will never go back.
From what I am seeing, this software does not use the M1/M2/Pro GPU cores. You can just watch the charts: GPU usage is next to nothing, and if it shows anything, it is from the OS or something else.
It just uses the CPU on all Apple devices, so it is not designed to take advantage of Apple hardware. It was misrepresented to me, as an Apple user, in a written email when the M1 chip came out. This is just a port.
I believe you’re right, which is quite unfortunate. The raw CPU power of the M-series chips isn’t impressive other than on a per-watt basis, so taking advantage of the other parts of the silicon (GPU + Neural engine) is what we really need. I’m not a developer so I don’t know how much of this Apple exposes to developers…
Because here the GPU is used at nearly 100% with the most-used models (Proteus, Iris), but processing is still slower than expected (and we definitely know from earlier versions that it can be faster).