We have another beta available for your testing. It was a bit of a slower week on our end with some of the team out sick and on vacation, but we have a few things for you all to try out.
Adds dynamic bitrate options for Apple Silicon computers.
Adds “Import” section to preferences and moves “Default Import File Type” to this section.
Adds a preference for the default FPS used when importing image sequences. Changing the FPS on an image sequence that has already been imported will no longer be “sticky” - set your default here instead.
Fixes the aspect ratio input so that it resets to Custom after applying Reset or Auto Crop.
Nyx Beta:
A new model for denoising high-quality video.
The focus of the model is on footage from high-quality cameras captured in less-than-ideal conditions.
Nyx Beta Updates:
Reduced low-frequency noise in outputs.
Updated parameters and tooltips.
Support for TensorRT backend.
Iris v2 Beta:
Adds Iris-v2 (beta 1).
Known Issues:
Nyx “Auto” may not perform well on all videos; we recommend using manual parameters for now.
Iris-v2’s performance is not optimized.
Iris versions are not selectable even with previous model versions enabled.
Intel ARC is currently unsupported.
Videos whose container metadata does not match their streams will display an incorrect duration (a quick way to check a file for this is sketched below).
Frame number preview length may shorten on app restart.
Please upload problem videos and logs here: Submit Files.
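If you want to check whether a problem file falls into the mismatched-duration case before uploading it, a small comparison like the sketch below can help. This is only an illustration, not part of the app: it assumes ffprobe (shipped with FFmpeg) is installed and on your PATH, and the file path is whatever you pass on the command line.

```python
# Hypothetical helper (not part of Topaz Video AI): compare the container's
# reported duration with the video stream's duration using ffprobe.
import json
import subprocess
import sys

def probe_durations(path: str):
    """Return (container_duration, stream_duration) in seconds; stream may be None."""
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-print_format", "json",
         "-show_format", "-show_streams", path],
        capture_output=True, text=True, check=True,
    ).stdout
    info = json.loads(out)
    container = float(info["format"]["duration"])
    video = next(s for s in info["streams"] if s["codec_type"] == "video")
    # Some containers do not report a per-stream duration at all.
    stream = float(video["duration"]) if "duration" in video else None
    return container, stream

if __name__ == "__main__":
    container, stream = probe_durations(sys.argv[1])
    print(f"container duration: {container:.3f} s")
    print(f"stream duration:    {'n/a' if stream is None else f'{stream:.3f} s'}")
    if stream is not None and abs(container - stream) > 0.5:
        print("durations disagree - this file likely hits the known issue above")
```

If the two values disagree by more than a fraction of a second, the file is a good candidate to include with your logs.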
Cannot preview the new Iris on macOS (processing error). As soon as I find the Dropbox link I will upload the logs (please add it to the first message next time). Thanks.
Another area where I feel communication should be better.
What is Iris v2’s goal/use case?
I assume it’s similar to Iris v1 (does a better job restoring faces than other models) but what’s different between v1 and v2? Quality of faces? Quality of everything else? Both? Types of content it’s designed to be used with?
Nyx should have a note mentioning it only has a 1x model so people don’t complain about it producing bad quality at 2x and 4x.
Also, are there plans to release a 2x or 4x model?
First, I tried Iris only. I will convert the file via HandBrake now and then try Iris again. EDIT: this did not help. Nyx showed the red cross error immediately and did not even begin to process. Proteus showed the red cross error just like Iris did. But Artemis LQ seems to work.
Iris worked fine on my GTX 1060 at both 2x and 4x (with 4x running very slowly as expected).
Nyx doesn’t seem to want to use the GPU. I tried a couple of videos at different resolutions, and in both cases the CPU maxed out after downloading a model while the GPU idled. I stopped the CPU processing as it wasn’t progressing after almost a minute. I didn’t get a red cross error, though.
Will upload logs to the Dropbox link in the OP.
Edit: Retrying Nyx results in TVAI attempting to download models again.
Now Iris ran for a while until it ended up with a red cross error again. I sent some log files to the Dropbox, though, but will check whether my new PC runs stably. I will also try the NVIDIA Studio driver again.
My license was kindly extended for testing on 27 July 2022 but that has now expired.
Anyway, I got an error trying a preview with an Interlaced Progressive file I often use for testing, with the Iris model selected: "- Error message from AI engine: download failed. - Error message from AI engine: model failed."
The error seems to be because I was not logged in, even though I had logged in.
Iris v2 is fantastic! It handles temporal denoising better, which removes less detail from very fast movement (in my case, water flowing from a fountain, where the details are more visible).
There is an artifact in the model, introduced around 3.2.9-ish, that caused Iris to generate wobbly motion in static images - it could be unnoticeable in some places but really noticeable in others, sort of like dancing ants under the skin.
At the time, the devs said it might be fixed in the next model release - but then I moved house and only started looking at Topaz again a week ago, some seven or so betas after that issue was raised - and when I checked the previous beta, the issue was still present.
I sent a message asking if there was an update, but Ida didn’t respond. However, this version looks like it may have fixed it, or at least reduced it to less than noticeable. I just did some testing and the issue is not resurfacing, making this the first version of Iris in about eight betas that is actually usable for me again, which is great.
Separately, it also appears to produce clearer faces. For example, this image slider comparison shows the current live Iris against this beta’s Iris (live on the left, beta on the right):
Both use identical settings, relative to Auto. Now obviously that could be a change in Auto, since the settings are relative rather than absolute, but it definitely appears to be an improvement.
The only downside I have seen so far is that the naming of image sequences is broken again. It says it will name them by the input frame numbers, which it does on live, but in this test on the beta the image sequence starts from 1 regardless of whether it is set to input frame number.