The new Mac Studio with M3 Ultra can now be configured with up to 512 GB of unified RAM. Since the unified RAM can be used as VRAM, would Topaz VEAI be able to take advantage of this and process faster when a local Starlight model releases?
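For what it's worth, on Apple Silicon the GPU shares that unified RAM, and Metal will report how much of it the GPU is expected to keep resident. A minimal Swift sketch (my own illustration, not anything from Topaz) to check what your machine reports:

```swift
import Foundation
import Metal

// Query the default Metal device (the Apple Silicon GPU) and report
// how much unified memory Metal recommends keeping resident at once.
if let device = MTLCreateSystemDefaultDevice() {
    let gib = Double(device.recommendedMaxWorkingSetSize) / 1_073_741_824
    print("GPU: \(device.name)")
    print("Unified memory: \(device.hasUnifiedMemory)")
    print("Recommended max working set: \(String(format: "%.1f", gib)) GiB")
}
```

On a 512 GB Mac Studio that figure should cover the large majority of the RAM, which is why a memory-reduced local Starlight model could in principle fit entirely on the GPU.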
The team is working on reducing the Starlight model's memory requirements so that it can be run locally. How this will look and what it will need are still being worked out, as it is very early in the process.
Sounds good, Kyle!
Any progress now?
Any update, Kyle?
@kyle.topazlabs, Mac users also need some updates since they use these products too; they deserve discounted prices for getting fewer updates than PC users!
This is still being worked on and developed; the frustration and feedback are being shared directly with the devs as well.
While you’re at it, tell them we want Astra locally on M1 Max as well. Don’t care if it’s 0.001 fps hehe
It just launched, and for now it will only be available as the web-based app. Who knows what the future holds, as the entire AI industry is constantly changing.