You write above:
“For normal videos I’ve been”
and then:
“ProRes is not lossless.”
Okay, but that seems contradictory to me.
Lossless is only needed at the very top professional level.
As a private user, I work with ProRes LT for all intermediate stages—Topaz, DaVinci, etc.
For short things, such as slow motion, I also use ProRes Std…
(on two NVMe drives, 4 TB and 2 TB)
What does your Task Manager say about resource utilization?
Do you have other analysis tools such as TechPowerUp?
What PCIe version do you have for your graphics card and your fast NVMe drive?
How fast is your RAM? (I have 4400 MT/s DDR4.)
What resolution are you working with, 2K or 4K?
… just my opinion:…
to avoid frustration when using Topaz in 2K and 4K, you need a higher-end 40xx or 50xx card …
and, of course, the entire computer environment must be right (PCIe; RAM - max MT/s)
To reduce the load on the system, use ProRes.
→ No interframe encoding necessary – significantly reduces the load on the system!
→ However, if it is also due to the write performance (which I suspect), use ProRes Proxy
→ still tolerable data rate.
→ This way, you can first test whether AION runs at all…
→ and don’t mix in other things (enhancement, scaling), for that you need a 4090 or 5090
(I only have a 5080, and it’s constantly at its ‘limit’.)
I would like to return to the core topic of this thread,
addressed to the Topaz developers:
Accurate frame interpolation cannot consist solely of models that only use base 2 as a divisor. (Apollo: 8; AION: 16)
Multiple division itself is great, of course, because it results in the high quality that Chronos doesn’t have…
We also need base 3 and maybe even base 5 to at least start with a mathematically excellent approach to all possible fps conversions or slow motion. (No jitter!)
Since 24->60 (or 120) is probably needed very often, a model for this would also be good:
It would have to have a 10x or 20x divider, but spanning a step of two frames!
Then a 2.5 times or even 5 times multiplication would be mathematically exact.
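To make the timing argument concrete, here is a small Python sketch (my own illustration, not anything from Topaz) that uses exact fractions to check whether every output frame of a conversion lands on a time that a model’s fixed divider can actually generate:

```python
from fractions import Fraction

def is_exact(src_fps, dst_fps, divider, span_frames=1):
    """True if every output frame time is a multiple of the smallest
    step the model can generate: span_frames source frames
    subdivided `divider` times."""
    step = Fraction(span_frames, src_fps * divider)  # smallest generatable step
    out = Fraction(1, dst_fps)                       # output frame interval
    return (out / step).denominator == 1             # exact multiple?

# Base-2 dividers can never hit 24 -> 60 (a 2.5x conversion) ...
print(is_exact(24, 60, 8))    # Apollo-style 8x divider  -> False
print(is_exact(24, 60, 16))   # AION-style 16x divider   -> False
# ... but a 10x (or 20x) divider spanning TWO source frames can:
print(is_exact(24, 60, 10, span_frames=2))   # -> True
print(is_exact(24, 120, 20, span_frames=2))  # -> True
# Base-2 conversions stay exact, as expected:
print(is_exact(24, 48, 2))    # -> True
```

With an 8x or 16x base-2 divider, 24→60 can never line up on the grid, while a 10x divider over a two-frame step makes the 2.5x conversion mathematically exact, just as described above.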
… Of course, there will still be artifacts of other kinds if we instruct AI to reinvent things (images).
I found that the best solution of all is to take a 24 fps movie to 25 fps with Shutter Encoder (it speeds the video up without inserting frames, and the change isn’t noticeable), then go from 25 fps to 50 fps with Chronos and set the TV to 50 Hz. Try it out. It’s a very fast encode with better results than AION or the two-pass model. I can then take that 50 Hz movie and use my TV’s motion interpolation on top of it as well, and it looks the best I’ve seen so far.
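For reference, the numbers behind the 24→25 conform trick are easy to verify; this is plain arithmetic, nothing specific to Shutter Encoder:

```python
import math
from fractions import Fraction

# Conforming 24 fps material to 25 fps is a pure speed change:
speed = Fraction(25, 24)
print(f"speed-up: {float(speed - 1) * 100:.2f} %")   # -> 4.17 %

# A 2-hour film gets about 5 minutes shorter:
runtime_min = 120 / speed
print(f"new runtime: {float(runtime_min):.1f} min")  # -> 115.2 min

# Without correction, the audio pitch would rise by:
semitones = 12 * math.log2(25 / 24)
print(f"pitch shift: {semitones:.2f} semitones")     # -> 0.71
```

A roughly 4 % speed-up and a ~0.7-semitone pitch rise are what a conform function has to compensate for, which matches the claim above that the sound stays in sync and on pitch when the tool handles it.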
The problem with these models arises when you try to go from 24 fps to 60. When you simply double the frame rate, you’re just adding one frame in between, which is easier for the models.
You’re actually creating a variable frame rate video when going from 24 to 60 fps.
So I’m now doing 25 fps to 50 fps for all my movies. I use Chronos because it isn’t jittery like Apollo is. Pans are much smoother with Chronos.
No… Since I output to images and then encode them later with another program, it is not variable frame rate. Is the motion correct, with smooth pans? Maybe not. That’s where doing Chronos as the second pass comes in to help.
Speeding up 24 fps movies to 25 fps will throw the sound out of sync. And Chronos alone below 40 fps tends to randomly blur things.
Ok, outputting to images is completely different, obviously. I don’t know about that.
The sound will not go out of sync when using Shutter Encoder. Sound is perfect, including pitch. Try it out. Its “conform” function will align everything perfectly.
Conform 23.976 fps to 25 fps in Shutter Encoder, then feed it into Topaz and 2x the frames to 50 fps.
I don’t understand. What is that supposed to be, and how does it work?
What results is judder.
And that is already present in the individual images because they are created from/with the non-homogeneous divider of the slow-motion models in Topaz.
(That is, if the divider set by the user is not 2, 4, 8, or, in the case of AION, 16.)
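One way to picture that non-homogeneous divider: if the desired 60 fps timestamps have to be snapped onto the grid that an 8x subdivision of 24 fps material can produce, the resulting frame intervals come out uneven. The snapping rule below is my guess at what any fixed-divider pipeline must do, not Topaz’s actual implementation:

```python
from fractions import Fraction

SRC, DST, DIVIDER = 24, 60, 8   # 24 fps -> 60 fps with an 8x divider

# Times the model can actually generate: multiples of 1/(24*8) s.
grid = Fraction(1, SRC * DIVIDER)

def snapped_times(n_frames):
    """Desired 60 fps timestamps, snapped to the nearest generatable time."""
    out = []
    for i in range(n_frames):
        want = Fraction(i, DST)                 # ideal output timestamp
        out.append(round(want / grid) * grid)   # nearest grid point
    return out

times = snapped_times(6)
deltas = [float(b - a) for a, b in zip(times, times[1:])]
print(deltas)   # the intervals are NOT all equal -> judder
```

The frame-to-frame intervals alternate between 3/192 s and 4/192 s instead of a constant 1/60 s, and that unevenness is exactly the judder described above.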
Variable frame rate is a form of compression where it will store less frames and display them according to timestamp instead of a single time interval. So like, one part of the video with no motion will play back at 12fps, but a part with more action will be at 24fps.
This is according to me. I might have just made all that up, since I did not fact check it.
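That description does match how VFR containers behave: each frame carries its own presentation timestamp instead of assuming one fixed interval, so the instantaneous rate varies from section to section. A toy illustration with invented timestamps:

```python
# A toy VFR timestamp track (seconds): a still passage, then action.
pts = [0.0, 0.5, 1.0, 1.5, 1.542, 1.583, 1.625, 1.667]

# Instantaneous rate = 1 / gap between consecutive timestamps.
rates = [round(1 / (b - a)) for a, b in zip(pts, pts[1:])]
print(rates)   # -> [2, 2, 2, 24, 24, 24, 24]: 2 fps while still, ~24 fps in action
```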
I mean Topaz AI can’t go straight from 24 to 60 fps without creating a variable frame rate.
It can only do 2x, 4x, etc. The Unifab program has the same limitation and has major stutter if you try anything other than an even multiple.
Regardless of VFR, motion is so much smoother when using an even multiple in Topaz. No comparison. I would never use motion interpolation in Topaz without even multiples at this point.
That’s what this topic is all about.
Can we get a model that generates two correctly timed frames between the original frames?
I’m not really sure how you would train such a model. The best I can think of is to have a camera that films at both 24 fps and 60 fps or 120 fps at the same time.
Hey, one other question: do you notice that using any of the frame interpolation models causes “low bit rate color banding” to be introduced in occasional dark scenes? Even with 4K 10-bit video. I tested it out. Any way to avoid this? I’m using H.265 Main 10.
Btw, I’m so happy now with my motion to 100 fps with Apollo. Incredible results.
1. Temporally correct - no judder:
Chronos always achieves this, as it generates only the single intermediate image that is needed.
→ AION and APOLLO only achieve this if you select a factor by which their internal, model-fixed division ratio is exactly divisible. …
2. Minimal artifacts:
Since Chronos only calculates one intermediate image, it has the most artifacts - you have confirmed this yourself
(essentially errors in the assessment of the movement sequence between the two original images)
→ Apollo performs much better here, as it generates 8 intermediate stages - even if fewer are needed, it has much better accuracy in reconstructing the movement sequence …
→ AION outperforms all of them, and I have seen two cases so far where only AION worked without artifacts, as it generates 16 intermediate stages between two images to reconstruct the motion sequence. (The very best 50% slow motion for me)
→ So far, I haven’t noticed any artifacts… but it’s best to use a second or third computer for this, where you don’t care how long it takes to calculate…
The motion interpolation models definitely introduce color banding. Multiple scenes have color banding that didn’t exist in the original. It’s very mild color banding in solid dark scenes. Even with FFLV or whatever it’s called. The motion models have trouble figuring out color gradients, and they also reduce noise a little (which I don’t mind).
Also, I only noticed because at 100 Hz my TV loses its color gradation smoothing feature.
I’m not complaining though because the motion is incredible.