I've been converting a series I own to 4K 60 FPS, and I've dialed in the settings that look best to me. Each episode is about an hour.
So here's the issue: I convert to 4K, and that's fine, then I do frame interpolation to make it 60 FPS. Before the last two updates this was going pretty smoothly and making files around 100 GB. Now it's making files over 400 GB. It takes two days to render, and then bam, at 99% done it hits the wall and deletes my work because it's gone over the free space on the drive. Is anyone else noticing frame interpolation doubling the file size more than it used to? To make the best of it, I toss the result into Handbrake and get a 3 GB file back.
MP4 with audio copied. The settings are all the same since it's a saved preset. But since the last two updates, the frame interpolation stage is outputting files well over 400 GB, where before the updates it only produced 100 GB to 200 GB. I'm just wondering if something changed in the model. I can't imagine what the size would be if I did 60 FPS on a video longer than an hour with more detail in the picture; I'd probably need a drive with 3 or 4 terabytes of storage.
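Rough back-of-the-envelope math on those numbers: 400 GB over a one-hour episode works out to about 400 × 8 / 3600 ≈ 0.9 Gbps of video, versus roughly 0.2 to 0.45 Gbps for the old 100 to 200 GB outputs. So whatever changed, the encoder is now writing video at two to four times the bitrate it used to.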
I never use the file output as my storage file; I always run it through Handbrake using x265 (libx265) with a CRF of 21-23.
That way I get near-lossless quality at a much, much smaller file size.
You can read more here about the CRF values that might work best for you.
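If you'd rather script it than click through Handbrake, a plain-ffmpeg equivalent looks roughly like this (the filenames are placeholders, and it assumes a standard ffmpeg build on your PATH):

    ffmpeg -i tvai_output.mp4 -c:v libx265 -crf 22 -preset slow -c:a copy archive.mp4

Lower CRF means higher quality and a bigger file; 21-23 has been the sweet spot for me.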
I use Handbrake once the project is done, as I mentioned in my first post. As I said, I upscale to 4K, which still behaves normally. Then I do frame interpolation to create 60 FPS, and that's the step that has changed since the last two updates, creating unreasonable double-to-triple-size files in the 400 GB range compared to the 100 to 200 GB files it used to create. Then I use Handbrake as my final step, which brings the file down to 2 to 3 GB.
The issue is that whatever happened to frame interpolation is causing me problems. It literally takes two days of rendering, only for me to come back on the morning of day two to find it failed at 99% done and deleted all the data because it ran out of storage on a drive that had 400 GB of free space. That's the issue: it wasn't using that much space on this same project until the last two updates. So I'm just wondering if anyone else is seeing frame interpolation now create files double and triple the size it did before.
The model didn’t change, I think they changed the FFMpeg output settings and now the file sizes are much larger.
It would be interesting to see if the CLI export commands differ between v4 and v5. You can check that from the GUI via Process → Show Export Command.
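If you still have the older version installed somewhere, paste each version's command into a text file and compare them, e.g.:

    diff v4_export_cmd.txt v5_export_cmd.txt

(those file names are just examples). I'd look first at the rate-control flags, like -crf or -b:v, and the encoder preset.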
I now have to re-encode my output to get around the ridiculously large file sizes.
I checked; it only shows the command for the current job that's taking over 400 GB of space. It doesn't show the CLI export command of the videos rendered with the same preset on the older version.
My power bill is going to love me: two days of rendering turned into four because it fails when it's almost done with no space left. I just had it fail again for the same reason on a drive with 685 GB of space. What is this program doing???
Can I export this as individual frame images instead, then put it all back together as a video and pass through the audio? I really can't have this erroring out every time it's almost done. At least if it kicks the bucket again at 98% or 99%, I'd only have to redo the last few frames.
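Something like this is what I have in mind: export the frames as an image sequence, then stitch them back together with a standard ffmpeg build and copy the audio from the original. A rough sketch, where the paths and the %08d filename pattern are placeholders to adjust to whatever the export actually produces:

    ffmpeg -framerate 60 -i frames/%08d.png -i original_episode.mkv -map 0:v -map 1:a -c:v libx265 -crf 22 -pix_fmt yuv420p -c:a copy rebuilt_60fps.mp4

That way, if a run dies partway through, I'd only need to regenerate the missing frames and rerun the stitch.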
I can’t help wonder if all these bizarre errors are because of ffmpeg. I believe that TVAI is now using ffmpeg v7.x. I was using the previous ffmpeg v6.x for CLI work and it worked (and still works) fine. But when I tried ffmpeg v7.0 I noticed serious problems during simple routines similar to what has been reported with TVAI.
This could be due to purely ffmpeg version bugs alone, CPU compatibility or a combination of certain GPU drivers. Not sure.
Anyway, ffmpeg v6.x works great and newer versions of ffmpeg 7 should eventually clean things up. However, the Topaz developers will need to update ffmpeg within TVAI too.
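You can check which ffmpeg TVAI itself ships by running its bundled binary with -version. On my Windows install the binary sits in the program folder; your path may differ:

    "C:\Program Files\Topaz Labs LLC\Topaz Video AI\ffmpeg.exe" -version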
I don’t man, all I know is the preset settings haven’t changed. The 1st 2 disks I upscaled to 4K, then did the 60 FPS frame interpolation. File sizes were only 80 to 100 GB’s. Then the rest of the work was in handbrake to compress it in 265. But after the last 2 updates to Video AI, the frame interpolation step on the last disk I’m finishing off has been creating file sizes of 300 GB +. Heck it ran out of space on a drive that had close to 700GB’s free.
Right now it has wasted four days by failing at the very end of rendering because the file sizes take up all the space on the drives.
I’m currently in the middle of a heat wave at 40% trying to get this video to finish with 1d and 6 hours left to go for rendering time on a GTX 2080 Ti. This time the drive has 1.30TB of free space. If it fails at 98% this time, I don’t know what to do. I got another program that can do AI frame interpolation but I already did the first 2 disk by Video AI so I don’t want mixed results.
I don’t understand why they delete the whole render if it runs out of space. Why not save the file, give the user the option to make space or move the file to another drive, then start the rendering where it left off and append it to the rest of the video file.
Depending on the version you have, I thought you could enable saving a temporary file in Preferences that you could use if needed.
Regarding the ffmpeg engine in TVAI… I'm not sure what they're using, but when I use a stand-alone ffmpeg v7.0.x it's incredibly slow and the rendered files are larger. I never had an issue with the last v6.0.x release.
I have it set up for temporary files, but that's only for pausing the render.
If you run out of space during rendering, the program gives you a failure message and deletes all rendering progress.
Which makes no sense: why wouldn't the program just save the progress it has, ask the user to make room or pick a new location, then restart the rendering where it left off and append the rest of the data to the file?
If it can pause, this shouldn't be hard to implement for the end user. Yet the solution Video AI takes is to delete all progress and make you start over.
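The appending part shouldn't even be hard; ffmpeg's concat demuxer can already join segments losslessly as long as they share the same codec settings. A minimal sketch with placeholder names, using a list.txt containing:

    file 'part1.mp4'
    file 'part2.mp4'

and then:

    ffmpeg -f concat -safe 0 -i list.txt -c copy joined.mp4

In the meantime I may just render the episode in chunks myself and join them this way, so a failure only costs one chunk instead of two days.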