I’ve been processing some files one at a time, and they usually process at about 0.7 fps.
I then tried running three files in parallel. I was expecting the processing speed to drop by a factor of about three, but instead each process only dropped to about 0.6 fps.
That surprised me. I must not understand how resources are used when processing a file. What is the usual bottleneck in processing?
If relevant: I am running with Stabilization off, the Chronos Fast model, deinterlacing, and Dione: TV 2X FPS. My output frame rate matches the input at 29.97 fps. My input files are crappy VHS tapes at 720×480 (as is the output).
(I need to process some large VHS files and am now planning to split them apart, process the pieces in parallel, and then rejoin them. This will save me days of time.)
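For the split step, here’s a minimal sketch (assuming ffmpeg is installed; the input name and chunk length are placeholder choices) that cuts a file into 10-minute chunks with stream copy, so the split itself re-encodes nothing:

```python
# Build an ffmpeg command that splits a file into fixed-length chunks
# without re-encoding (stream copy), so the split step is lossless.
# "tape1.mp4" and the 600-second chunk length are placeholders.

def split_command(src, chunk_seconds=600):
    return [
        "ffmpeg", "-i", src,
        "-c", "copy",                      # no re-encode
        "-f", "segment",                   # ffmpeg's segment muxer
        "-segment_time", str(chunk_seconds),
        "-reset_timestamps", "1",          # each chunk starts at t=0
        "seg_%03d.mp4",                    # seg_000.mp4, seg_001.mp4, ...
    ]

print(" ".join(split_command("tape1.mp4")))
# to actually run it: subprocess.run(split_command("tape1.mp4"), check=True)
```

One caveat: with stream copy the cuts can only land on keyframes, so chunk lengths will be approximate rather than exactly 10 minutes.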
That’s the smartest thing you can do.
We’ll see if anything changes in the future.
In principle, I personally have nothing against running multiple processes in parallel.
I now notice that when I ran only one process at a time, my GPU would go to 100% and my CPU was unaffected. With three processes running in parallel, the GPU continues at 100% and the CPU now runs at about 50%.
I am encoding with VP9 Best as MP4, auto bitrate, and Audio Settings as Copy.
Any tips, or a link, for minimizing loss on re-encoding (when re-assembling a split file)? I need to use something other people can be reasonably sure to have on their PCs (the results will be played on PCs). So should I encode as MP4 at the highest quality settings? If expressed as a bitrate, I’m thinking 4 Mbps is sufficient for VHS-sourced material. Happy to be told otherwise; I’m new to all this.
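For what it’s worth, one common way to express “highest quality MP4 that plays anywhere” is H.264 with a quality target (CRF) rather than a fixed bitrate. This is a sketch of an ffmpeg command, not TVAI’s own encoder settings, and the file names are placeholders:

```python
def encode_command(src, dst, crf=17):
    # CRF ~17-18 is visually near-transparent for most sources; for
    # 720x480 VHS material it usually lands well under 4 Mbps anyway.
    return [
        "ffmpeg", "-i", src,
        "-c:v", "libx264", "-crf", str(crf), "-preset", "slow",
        "-c:a", "copy",        # leave the audio untouched
        dst,
    ]

print(" ".join(encode_command("joined.mp4", "final.mp4")))
```

H.264 in MP4 is about the safest bet for playback on other people’s PCs; VP9 is more efficient but less universally supported by hardware decoders.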
Personally, I like outputting to image files. TVAI even numbers them correctly by default, so you can just drag and drop all the images into one folder, then encode them back into a video and add the sound back in.
If you don’t do it that way, I don’t know how to avoid adding sound gaps in the final movie.
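One way to avoid sound gaps (a sketch, assuming ffmpeg and placeholder file names): pull the audio off the original file once, process the video on its own, and mux the untouched audio back at the very end, so the audio stream is never cut or re-joined:

```python
def extract_audio(src, dst="audio.m4a"):
    # copy the audio stream out untouched, dropping the video (-vn)
    return ["ffmpeg", "-i", src, "-vn", "-c:a", "copy", dst]

def mux_audio(video, audio, dst="final.mp4"):
    # re-attach the original audio to the processed video, no re-encode
    return ["ffmpeg", "-i", video, "-i", audio,
            "-c", "copy", "-map", "0:v:0", "-map", "1:a:0", dst]

print(" ".join(extract_audio("tape1.mp4")))
print(" ".join(mux_audio("upscaled.mp4", "audio.m4a")))
```

Because the audio is one continuous stream from the original tape, sync only depends on the video having the same frame count and rate after processing.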
Actually, I got lazy earlier: it was 0.20 spf per process with three processes running, which works out to an effective 0.067 spf overall.
There does seem to be some variation in the total remaining time, so I’d need an independent timer.
I’m thinking the delay every couple of seconds may be adding time that is not reported in the spf. This delay seems to be slowly increasing, and I’m no longer sure whether I’m actually gaining anything. Perhaps I should just test two processes.
I am using SSD drives, but maybe the ProRes LT output is starting to bog them down.
Sorry, my seconds-per-frame calculations are not accurate, because the estimated-time clock goes up and then down, and you end up gaining more time than it estimates.
For a 60-minute clip going from 480p to 720p using Artemis (Medium), it estimated about 4.5 hours throughout the whole render. The first clip was 60 minutes and the second clip was 64 minutes.
But in reality it took over 5.5 hours, based on the file-creation date and the screenshot date. You can see the clock jump all over the place over just a 40-second interval.
So if it were indeed 0.15 spf, the completion time would be 4.5 hours. But the completion time was actually over 5.5 hours, making it closer to 0.20 spf.
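A quick check of those numbers (a sketch, assuming the source really is 29.97 fps):

```python
fps = 29.97
frames = round(60 * 60 * fps)            # 60-minute clip -> 107892 frames

print(round(frames * 0.15 / 3600, 1))    # hours at 0.15 spf   -> 4.5
print(round(5.5 * 3600 / frames, 2))     # spf at a 5.5 h finish -> 0.18
print(round(6.0 * 3600 / frames, 2))     # spf at a 6 h finish   -> 0.2
```

So 0.15 spf does come out to 4.5 hours, and a finish somewhere between 5.5 and 6 hours corresponds to roughly 0.18 to 0.20 spf, which matches the discrepancy described above.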
I don’t know what point I’m trying to make, but the average frame rate it reports is wrong.
As mentioned above: output to single images, leave out the audio, and encode afterwards to your desired end format (is VP9 your goal?). Then remux the audio stream back in. That’s the most crash-resilient, safest way to do it, and it also gives you the opportunity to tune the encoding afterwards.
I use LosslessCut to split a video into four segments (which takes just a few seconds) for parallel processing in TVAI, and then to join the upscaled segments afterwards. I have discovered that I can upscale 480p/25fps footage to 1080p a bit faster than real time on a base-spec Mac Studio! Each of the four processes runs at around 6.5 fps when in parallel. If I run just one process, the best I can get is around 10 fps. I’m SO much looking forward to TVAI speed/efficiency improvements so I don’t have to go through these extra steps to take advantage of parallel processing.
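The join step can also be scripted with ffmpeg’s concat demuxer instead of LosslessCut. A sketch with placeholder segment names, again using stream copy so the join itself loses nothing:

```python
# Write the list file the concat demuxer expects, then join with
# stream copy. Segment names are placeholders for the TVAI outputs.
segments = ["seg_000.mp4", "seg_001.mp4", "seg_002.mp4", "seg_003.mp4"]

with open("mylist.txt", "w") as f:
    for name in segments:
        f.write(f"file '{name}'\n")

join_cmd = ["ffmpeg", "-f", "concat", "-safe", "0",
            "-i", "mylist.txt", "-c", "copy", "joined.mp4"]
print(" ".join(join_cmd))
```

This only works cleanly when all segments share the same codec, resolution, and frame rate, which they will if they all came out of the same TVAI settings.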
Yes, image output can even be the fastest output method.
PNG and TIFF are lossless; JPG is not. Look up “image compression” on Wikipedia to get a grasp of the lossless/lossy concept.
Of course you need a lot of disk space, but you will keep most of the quality. 16-bit TIFF will keep the most possible; PNG and TIFF stay at the same level of quality (PNG being a compressed format, while TIFF in this case is purely uncompressed — it’s just there to give you a choice).
Putting single files back into a video can be done in a lot of ways; one very simple, free option is VirtualDub2. Of course you can try whatever video software you have at hand; most can import an image sequence and spit it out in whatever format you want.
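If you’d rather script that step than use a GUI, ffmpeg handles image sequences too. A sketch assuming frames numbered frame_000001.png at 29.97 fps (pattern, rate, and output name are all placeholder choices):

```python
def sequence_command(pattern="frame_%06d.png", fps="29.97", dst="video.mp4"):
    # -framerate sets the input rate of the image sequence;
    # yuv420p keeps the result playable in ordinary players
    return ["ffmpeg", "-framerate", fps, "-i", pattern,
            "-c:v", "libx264", "-crf", "17", "-pix_fmt", "yuv420p", dst]

print(" ".join(sequence_command()))
```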
Muxers: a very robust example for the MKV container is MKVToolNix; for the MP4 container, there are plenty of MP4 tool GUIs for every operating system.
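For the MKV route, MKVToolNix ships a command-line muxer (mkvmerge) alongside the GUI. A sketch with placeholder file names:

```python
def mkvmerge_command(video, audio, dst="final.mkv"):
    # mkvmerge combines the streams without touching the encoded data
    return ["mkvmerge", "-o", dst, video, audio]

print(" ".join(mkvmerge_command("video.mp4", "audio.m4a")))
```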