– Send us a few seconds of input video and the corresponding output (see the ffmpeg trim sketch below this list).
– Share the ffmpeg command or settings that you used.
– Leave a short note on the issue (e.g. artifact type) or what you’d like to see improved.
– Any comparison videos/screenshots against our existing models.
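For anyone trimming a sample to share, a short ffmpeg cut like the one below usually does the job. This is just a sketch: the file name, start time, and duration are placeholders, and `-c copy` cuts on keyframes, so the exact start point may shift slightly.

```
# Cut ~5 seconds starting at the 1-minute mark without re-encoding.
# input.mkv, the start time, and the duration are placeholders.
ffmpeg -ss 00:01:00 -i input.mkv -t 5 -c copy sample_clip.mkv
```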
I did a test with IrisLQV2 by itself, and with Artemis as a second enhancement. Adding Artemis made it report a frame rate about 50% higher. Going to let it run a bit and see if it really is faster or if the numbers are just wrong.
Edit: the numbers are wrong. The alpha is importing a file with a 29.97 fps framerate and reporting the input as 59.94 fps.
Running Iris 2x on M1 Max 1080p video (Babylon 5 again!) @ 4 fps. I have all 5 seasons to enhance, still undecided between Iris v2 and Proteus v4, but I think I will like Iris more…
Following up on my earlier post: I imported a wmv file and the alpha incorrectly read the 29.97 fps as 59.94. I converted the wmv to an mp4 and the alpha read it correctly.
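For anyone who wants to double-check what a container actually reports before importing, something along these lines should work (a sketch assuming ffmpeg/ffprobe are installed; the file names are placeholders, and whether the mismatch comes from the wmv metadata or the alpha's reader I can't say):

```
# Print the nominal and average frame rate of the first video stream.
ffprobe -v error -select_streams v:0 \
  -show_entries stream=r_frame_rate,avg_frame_rate \
  -of default=noprint_wrappers=1 input.wmv

# Re-encode the wmv into an mp4 container (WMV video generally can't be
# stream-copied into mp4, so this converts to H.264/AAC).
ffmpeg -i input.wmv -c:v libx264 -crf 18 -preset slow -c:a aac output.mp4
```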
And adding Artemis as a second enhancement on top of Iris actually did increase the frame rate by nearly 50%. Any chance that can be made to happen with Proteus?
Is Aion the same multi-scale Apollo model as before?
Also, are there any plans to allow us to run the models with no tiling? Since some of us probably do meet the VRAM requirements for running models without tiling, it would probably improve temporal consistency for some models.
OK, Iris 2x is about two times faster now, yes - that's good.
BUT: it's still not quite at the speed it could be (or formerly was), and overall performance is still quite uneven on Apple Silicon.
And what's with the Iris 4x upscale being dramatically slow now? Also, why is the speed gain only there for Iris LQ and not for Iris LQ V2?
There are also still quite big fluctuations in speed between having RAM set to 10% or 100%, where 10% is much faster for SD upscales but slower for HD upscaling.
Another thing:
While Iris LQ V2 does look good at first glance for MQ material when doing 2x upscales (and there it's slightly better than V1), it's still highly unnatural when doing 4x upscales :-/
Left 4x, right 2x. Look at the artificial lines the 4x model introduces on the forehead and at the right eye. Also, looking at e.g. the border of the left cheek, the face looks somewhat as if it had been "pasted" onto the background with the 4x model.
Only the logo and text are MUCH better with 4x.
EDIT:
Due to the forum compression the above image doesn't show the downsides of Iris 4x very well, so here is the image on IMGUR:
(You should right-click the image and open it in a new tab/window, full screen.)
Yes, I'm also waiting for the premiere of the Aion v1 model on Windows. I hope it will be included in the next alpha build. There is progress on the Iris LQ2 model, but I still mainly use the Proteus v4 model in 2nd-pass mode.
I’m not seeing very much difference in faces that are large on screen, so I thought I’d see what happens with faces that are much smaller.
1920x1080 x 2. Proteus is about 20% faster. Iris seems to do better on both the face and the background, but whether anyone would notice from a normal viewing distance is questionable.