Is Priority on PC speed now?

That’s not entirely true.
You’ll have a really hard time finding anything Windows-based that can compete with a MacBook Air M1/M2 in its basic configurations on several points:

  • sheer CPU/GPU performance relative to the cost of around 800€
  • build quality
  • heat / noise
  • power consumption / running costs, with the MacBook drawing around 10 W for most tasks
  • stability and overall “elegance” of the system, user friendliness (OK, those points are of course debatable)
  • really nice ecosystem if you also have an iPad and/or iPhone

Only with the more expensive configurations does the price/performance ratio of those Apple devices drop drastically.

And, of course, if you’re even remotely into gaming you should avoid the Mac.

P.S.: Look at your own slow speeds with your gaming laptop…

Look at the specs of any decently priced Windows laptop vs. a brand-new MacBook, or a Windows tower vs. a Mac competitor like the Mac mini. You will see the price is lower and the specs are higher.

You can’t compare specs between Apple and PC computers.

The built-in Apple M1 and M2 chips are designed to perform certain tasks much faster than a typical PC chip.

It is apples and oranges. I guarantee you that a Mac mini M2 at $499 will easily convert most video formats faster than any $1000 PC-based system. Adobe products will run faster on equipment at easily half the price if you go the Apple route.

That is the problem with this software: for Mac users, it is not utilizing the machine’s full capabilities. Other big brands, like Adobe, are writing code to utilize the high-performing GPU cores in Mac computers.

Now it appears that this software is not designed to fully utilize Apple’s GPU cores. That is their choice – but even if it says it is compatible with an Apple M2 chip, that does not mean it fully utilizes it.

AI is moving super fast. Other options for Macs are appearing quickly, and for professional video producers this software is just becoming a headache. It should not crash anymore.

For PC users – you can add any GPU you want, at any price, at the cost of high electricity usage and noisy fans.

I have seen zero performance gains and a bizarre workflow with no real-time previewing after going to V4. For me it comes down to the value of my time. For a hobbyist – maybe not.

The lack of any near real-time view of the rendering ends my interest in this product. I want to see whether it is rendering the way I want it to – not find out after I have rendered the entire project.

Bizarre.

Hardware is hardware. A processor that runs at 1.50 GHz is slower than a processor that runs at 3.00 GHz; it does not matter if it is Intel, AMD or whatever brand is out there.
It’s not the same having 8 GB of RAM as having 16 GB.
It’s not the same having a brand-new AMD GPU as a brand-new NVIDIA GPU or an Intel GPU.
macOS is not the best thing in the world; many, many things do not work well on it.
Windows is no marvel either, and Linux may be optimized, but doing advanced stuff there is all command based.
If I sell you Windows PC hardware with a 3.00 GHz Intel i9, 32 GB of RAM and a 2023 NVIDIA graphics card at $XXX, it is better value than a MacBook Air with a 2.00 GHz CPU, 16 GB of RAM and an early-2022 AMD graphics chip. Facts are facts. Look for some benchmark comparisons between your favorite Mac hardware and similar Windows PC hardware: cheaper and better performance.

Have you heard of the word ARM?

Yes, but that is relatively new hardware, and Windows 11 does support ARM.

ARM is all about power saving, not powerful processing.

Well I think it is both:

Arm processors can execute many more millions of instructions per second than Intel processors. By stripping out unneeded instructions and optimizing pathways, an Arm processor can deliver outstanding performance while using much less energy than a CISC-based processor.

Sounds like it levels out. ARM can do lots of simple instructions, and x86 does more complex instructions that would be equal to several simple ones. I’m not finding anything that agrees with your statement about “many more millions of instructions per second than Intel processors”.
The most I found was that they’re like comparing a car on water and a boat on a road.

I guess you never really used Win on ARM :rofl::rofl:

Ouch. Ever heard of the NetBurst architecture?

Ouch, ouch. Apparently you don’t know recent Apple architecture at all – which of course explains your previous statement.
Yes, earlier MacBooks were quite slow because they relied on PC hardware (Intel CPUs with Intel or AMD graphics) from the mobile notebook/mini-PC segment (so just what you suggested using :-/), and that was the main reason Apple switched to their own CPUs/GPUs.

So: yes, current PC hardware IS faster, at the cost of power consumption and heat (and also being nearly as expensive if you look at the prices for 4080 or even 4090 GPUs) – but not really so with components from the mobile/notebook sector.
Besides, those gaming laptops are really quite expensive if you want one from a known brand with decent build quality. So there an M2 MacBook Air in the basic configuration can definitely compete, just not for gaming.

Interesting… :thinking:
I thought you were unhappy about the performance of your new gaming laptop. :thinking:

Personally, I do not recommend using a laptop for TVAI. Laptops are not designed to work at 100% CPU/GPU load 24/7. They easily overheat and start thermal throttling. For long periods of heavy-load AI processing, a real desktop tower is preferred (not a mini PC). If you have a powerful CPU, it is preferable to use a decent CPU cooler, such as a beefy dual-tower CPU cooler or a 360 AIO, to prevent the CPU from thermal throttling.
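
If you want to check whether a machine is actually throttling during a long TVAI run, something like the sketch below can help. This is only a minimal example, assuming an NVIDIA GPU with nvidia-smi on the PATH; the log file name and sample interval are arbitrary choices.

```python
import csv
import subprocess
import time

# Minimal sketch: poll nvidia-smi every few seconds and log temperature,
# SM clock, utilization and power draw so you can spot thermal throttling
# during a long Topaz Video AI run.
QUERY = "temperature.gpu,clocks.sm,utilization.gpu,power.draw"

def sample() -> list[str]:
    out = subprocess.run(
        ["nvidia-smi", f"--query-gpu={QUERY}", "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    return [field.strip() for field in out.split(",")]

with open("gpu_log.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["time", "temp_c", "sm_clock_mhz", "util_pct", "power_w"])
    while True:
        writer.writerow([time.strftime("%H:%M:%S"), *sample()])
        f.flush()
        time.sleep(5)  # sample every 5 seconds; stop with Ctrl+C
```

If the SM clock keeps dropping while the temperature sits at its limit, the machine is thermal throttling – exactly the pattern that makes a well-cooled desktop preferable for long runs.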

4080 Laptop vs 4070 Desktop
Topaz Video AI  v3.5.4
System Information
OS: Windows v11.22
CPU: 13th Gen Intel(R) Core(TM) i7-13700H  15.737 GB
GPU: NVIDIA GeForce RTX 4080 Laptop GPU  11.729 GB
Processing Settings
device: 0 vram: 0.89 instances: 0
Input Resolution: 1920x1080
Benchmark Results
Artemis		1X: 	16.87 fps 	2X: 	10.64 fps 	4X: 	03.11 fps 	
Iris		1X: 	18.81 fps 	2X: 	09.47 fps 	4X: 	02.73 fps 	
Proteus		1X: 	15.74 fps 	2X: 	10.07 fps 	4X: 	03.19 fps 	
Gaia		1X: 	05.50 fps 	2X: 	03.64 fps 	4X: 	02.43 fps 	
Nyx		1X: 	06.98 fps 	
4X Slowmo		Apollo: 	23.38 fps 	APFast: 	70.35 fps 	Chronos: 	12.64 fps 	CHFast: 	20.89 fps 	
Topaz Video AI  v3.5.0
System Information
OS: Windows v11.22
CPU: 13th Gen Intel(R) Core(TM) i7-13700K  31.773 GB
GPU: NVIDIA GeForce RTX 4070  11.744 GB
GPU: Intel(R) UHD Graphics 770  0.125 GB
Processing Settings
device: 0 vram: 1 instances: 1
Input Resolution: 1920x1080
Benchmark Results
Artemis		1X: 	19.47 fps 	2X: 	13.15 fps 	4X: 	04.33 fps 	
Iris		1X: 	20.48 fps 	2X: 	11.19 fps 	4X: 	03.59 fps 	
Proteus		1X: 	18.01 fps 	2X: 	12.12 fps 	4X: 	04.49 fps 	
Gaia		1X: 	06.64 fps 	2X: 	04.59 fps 	4X: 	03.16 fps 	
Nyx		1X: 	07.96 fps 	
4X Slowmo		Apollo: 	24.93 fps 	APFast: 	72.91 fps 	Chronos: 	14.21 fps 	CHFast: 	22.26 fps 	
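
To put those two runs side by side, here is a quick throwaway script; the numbers are copied straight from the benchmark output above, comparing only the enhancement models (the slow-motion figures would work the same way). Note that the two runs also use different CPUs and slightly different TVAI versions, so it is not a clean GPU-only comparison.

```python
# Quick comparison of the benchmark numbers posted above:
# laptop RTX 4080 vs desktop RTX 4070, 1080p input, fps per model/scale.
laptop_4080 = {
    "Artemis": (16.87, 10.64, 3.11),
    "Iris":    (18.81,  9.47, 2.73),
    "Proteus": (15.74, 10.07, 3.19),
    "Gaia":    ( 5.50,  3.64, 2.43),
}
desktop_4070 = {
    "Artemis": (19.47, 13.15, 4.33),
    "Iris":    (20.48, 11.19, 3.59),
    "Proteus": (18.01, 12.12, 4.49),
    "Gaia":    ( 6.64,  4.59, 3.16),
}

for model, laptop in laptop_4080.items():
    desktop = desktop_4070[model]
    gains = [f"{scale}: +{(d / l - 1) * 100:3.0f}%"
             for scale, l, d in zip(("1X", "2X", "4X"), laptop, desktop)]
    print(f"{model:8s} " + "   ".join(gains))
```

On these numbers the desktop 4070 comes out roughly 10–40 % ahead of the laptop 4080, with the gap widening at the higher scale factors.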

You quoted me out of context: I am complaining about how awful the performance of TVAI is, not my laptop PC. My laptop runs any game at ultra settings and won’t lag for a second, especially at 60 fps. I always preferred PC towers until I realized CPUs change their sockets roughly every 2 years, so if I’m not upgrading my PC every 2 years, even my motherboard becomes outdated and I may as well just get a new tower and carry over whatever decent hardware I can from the previous one. I prefer a laptop that will last a minimum of 5 years running at top notch and deal with that, rather than having to swap the motherboard as well if I want to upgrade my CPU or RAM after 2 years, with every other component becoming outdated along with it.

But we are talking about a computer for TVAI, not gaming. :sweat_smile:

You can upgrade your graphics card without changing the motherboard.
Also, if you choose an AMD system, they keep their sockets for about 6 years.
(AM4: 2016~2022)

It pains me to read these words. I’ve done that. It wasted so much of my time. It takes far less time to render around five ten-second previews on scenes that are more difficult for the AI to get right than to stop the full processing near the end when you see it’s going wrong. That might be a good way to learn what types of things the AI struggles with – but once you know those things, don’t waste your time. Go straight to them and run previews.
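
For what it’s worth, the cutting doesn’t even need to happen inside TVAI: you can slice a few seconds out of the difficult scenes beforehand and feed only those in as preview clips. A minimal sketch, assuming a standard ffmpeg build on the PATH; the source name and timestamps are placeholders.

```python
import subprocess

# Cut short test clips from the scenes the AI tends to struggle with,
# then run only these through Topaz Video AI before committing to the
# full render. Source file and timestamps are placeholders.
SOURCE = "project_master.mp4"
HARD_SCENES = ["00:02:10", "00:17:45", "00:41:30"]  # start times of tricky scenes
CLIP_LENGTH = "10"  # seconds

for i, start in enumerate(HARD_SCENES, 1):
    subprocess.run(
        ["ffmpeg", "-ss", start, "-t", CLIP_LENGTH, "-i", SOURCE,
         "-c", "copy", f"preview_{i:02d}.mp4"],  # stream copy cuts near keyframes, close enough for previews
        check=True,
    )
```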

At this point the preview rendering should be optional. I prefer to take certain important points in the video (for example a very low-res and a very decent-res section), do 10-second previews, and then, if I’m happy, just render the whole thing. If the live preview is going to consume rendering power and slow the process down, that’s a no-go for me: I prefer max speed over live preview. If you need it, then it should be available. I think the software has an option to turn it on or off. They need to fix the live preview and let us choose whether it is on or off.

And I believe Topaz has said that they are working on getting it back.

Platform wars aside, I chose not to renew my Topaz products this year, as I’ve seen performance on my M1 Max Studio get worse rather than better. I plan to renew in 2024 if/when I see improvements as the software continues to mature. I was a Windows user for decades and I will never go back. :upside_down_face:

From what I am seeing, this software does not use the M1 / M2 / Pro GPU cores. You can just run the charts: GPU usage is basically nothing, and if it shows anything, it is for the OS or something else.
It just uses the CPU on all Apple devices, so it is not designed to take advantage of Apple hardware. It was misrepresented to me (in a written email) as an Apple user when the M1 chip came out. This is just a port-over.

I believe you’re right, which is quite unfortunate. The raw CPU power of the M-series chips isn’t impressive other than on a per-watt basis, so taking advantage of the other parts of the silicon (GPU + Neural Engine) is what we really need. I’m not a developer, so I don’t know how much of this Apple exposes to developers…

Which tool do you use to monitor GPU usage?

Because here the GPU is used at nearly 100% with the most-used models (Proteus, Iris) – but processing is still slower than expected (and we also definitely know from earlier versions that it really can be faster).
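
On Apple Silicon, Activity Monitor’s GPU History window (Window → GPU History) is the quickest way to answer that question. If you want numbers you can log during a render, something like the sketch below also works; it is only a rough example, assuming macOS with admin rights, and the sample interval and count are arbitrary.

```python
import subprocess

# Minimal sketch for Apple Silicon: log GPU power/utilization samples
# from powermetrics while TVAI is rendering (requires admin rights).
result = subprocess.run(
    ["sudo", "powermetrics", "--samplers", "gpu_power",
     "-i", "1000",   # one sample per second
     "-n", "30"],    # stop after 30 samples
    capture_output=True, text=True, check=True,
)
print(result.stdout)  # look at the GPU residency / power figures per sample
```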