I for one would be very happy if there were a way to run the models on Linux.
I already run most of my ML training and inference on my headless Linux box. In fact, just like Topaz, I convert my PyTorch and TensorFlow models to ONNX for the exact same reason: portability and ease of inference with ONNX Runtime, with no dependency on any other Microsoft cruft. Definitely not DirectX, as someone else speculated in this thread. A native Linux runtime is entirely feasible.
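To show what that portability buys, here's a minimal sketch of the Linux inference path I mean. The model path, input name handling, and provider list are my assumptions, not Topaz's actual setup:

```python
# Hedged sketch: run an exported ONNX model with ONNX Runtime on Linux.
# The model path is a placeholder; the real tensor layout and input names
# would come from the model's metadata.
import numpy as np

def infer(model_path: str, batch: np.ndarray) -> np.ndarray:
    import onnxruntime as ort  # pip install onnxruntime-gpu on the Linux box
    sess = ort.InferenceSession(
        model_path,
        # Prefer the GPU, fall back to CPU if CUDA is unavailable.
        providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
    )
    input_name = sess.get_inputs()[0].name
    (output,) = sess.run(None, {input_name: batch})
    return output
```

The same script runs unchanged on my local box or a cloud GPU instance, which is the whole point.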
My workflow is as follows:
- Prepare source clips for a project in some GUI NLE on Mac or Windows.
- Copy the clips that need enhancement to a Samba share on the Linux box and launch a CLI there to run the number crunching (custom filters or ONNX ML models) overnight.
- Pull the processed clips back into the NLE.
If the work is really heavy, I spin up a small farm on AWS and have the CLIs wrapping my models kick off inference in response to file uploads to a certain S3 bucket. Since that uses segmented processing, what would take weeks on my local 32-core Linux machine with a single RTX 3090 is done in minutes to hours on a farm of GPU-equipped cloud machines.
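The trigger side of that farm is simple. Here's a hedged sketch of an AWS Lambda handler reacting to an S3 upload; the bucket layout is hypothetical and the actual launch of the model CLI on a GPU instance is stubbed out:

```python
# Sketch: react to S3 upload events and record which clip segments to
# enhance. In the real farm, each (bucket, key) pair would be handed to
# the model CLI on a GPU worker, e.g. via an SQS queue or an ECS task.
def handler(event, context=None):
    jobs = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        jobs.append((bucket, key))  # stub: enqueue for GPU processing here
    return jobs
```

Each uploaded segment becomes one independent job, which is what makes the weeks-to-hours speedup possible.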
I for one would be content if Topaz just provided the passwords for the model .tz zip files, plus a document describing the model input parameters, the image (frame) preprocessing required to make frames model-compatible, and the post-processing needed to turn the prediction results back into image data that can be serialized to TIFF/BMP/PNG.
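To be concrete about the kind of pre/post-processing I mean, here's a hedged sketch assuming a model that takes float32 NCHW frames in [0, 1] — the actual layout and normalization are exactly what the document I'm asking for would specify:

```python
# Assumed convention (not Topaz's documented one): uint8 HWC frames in,
# float32 1xCxHxW tensors in [0, 1] to the model, and back again.
import numpy as np

def preprocess(frame_u8: np.ndarray) -> np.ndarray:
    """uint8 HxWxC frame -> float32 1xCxHxW tensor in [0, 1]."""
    x = frame_u8.astype(np.float32) / 255.0
    return np.transpose(x, (2, 0, 1))[None, ...]

def postprocess(pred: np.ndarray) -> np.ndarray:
    """float32 1xCxHxW prediction -> uint8 HxWxC frame for TIFF/BMP/PNG."""
    x = np.clip(pred[0], 0.0, 1.0)
    return (np.transpose(x, (1, 2, 0)) * 255.0 + 0.5).astype(np.uint8)
```

With that documented, serializing the result is a one-liner with any image library.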
Now I realize that the model files are half of the Topaz R&D investment, so I don’t expect you to do that. Instead, a compromise would be to offer a CLI with flags for configuring the model(s) to be executed. I expect this would be really easy for you guys, since you’ve already structured the code along those lines: you already provide metadata for the models (JSON files) defining their input parameters, so I’d bet good money you could wrap a simple CLI around the code that preprocesses the frames and hands the tensors over to ONNX Runtime.
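Something like the following is all I'm picturing — the flag names, the model name, and the metadata flag are my guesses, not Topaz's actual JSON schema or product naming:

```python
# Hypothetical CLI surface for a model-wrapping tool. Flag names and the
# "artemis-hq" model name are illustrative assumptions.
import argparse

def build_parser() -> argparse.ArgumentParser:
    p = argparse.ArgumentParser(prog="topaz-cli")
    p.add_argument("--model", required=True, help="model name, e.g. artemis-hq")
    p.add_argument("--meta", help="path to the model's JSON metadata file")
    p.add_argument("--scale", type=int, default=2, help="upscale factor")
    p.add_argument("inputs", nargs="+", help="frames/clips to process")
    return p

def parse_job(argv):
    """Turn command-line args into a job description the runner can execute."""
    args = build_parser().parse_args(argv)
    return {"model": args.model, "scale": args.scale, "inputs": args.inputs}
```

Since the GUI already drives the same preprocessing and ONNX Runtime code paths, a wrapper at roughly this level of complexity seems like the bulk of the work.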
Having one instance of the CLI included in the license would be awesome. You could verify it the same way you verify the license in the GUI: make a web call and write the license info to the machine the CLI runs on. For “farm use” there’s new revenue potential: a discounted CLI license for those of us who want to speed up processing on a set of beefier cloud machines.
For licensing inspiration, look at Duplicacy’s licensing. You’d need to discount the CLI significantly for farm use, though, or you’d get zero sales. Have a conversation with us, the community, to help calibrate a suitable price point.