I’m definitely a novice at understanding what’s going on under the hood, but from my observation, when you process something like a movie, the software works through the file linearly on a single pipeline.
Would it not be more efficient to scan the file, look for transitions, offer up the detected transitions as recommendations, split the video into segments between those transitions, process the segments in parallel across multiple threads, and then recombine them at the end? This would also allow shutting down threads, so you could take a break after a few segments have processed and queue up the rest later.
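To make the idea concrete, here is a minimal sketch of the split / process-in-parallel / recombine flow. This is not Topaz’s actual pipeline; the function names and the string-based "processing" are placeholders purely for illustration:

```python
from concurrent.futures import ThreadPoolExecutor

def process_segment(segment):
    # Placeholder for the real per-segment upscaling/enhancement work.
    return f"processed({segment})"

def process_in_parallel(segments, max_workers=3):
    # Each segment between transitions is independent, so up to
    # max_workers of them (the user-defined limit) run at once.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        results = list(pool.map(process_segment, segments))
    # pool.map preserves input order, so the recombine step can
    # simply join the results back in timeline order.
    return " + ".join(results)
```

The key point is that `pool.map` returns results in submission order regardless of which segment finishes first, so recombining at the end is trivial.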
Use case, 120 minute movie file:
- Topaz scans file
- Topaz detects 8 transitions
- User confirms 6 of them as true transitions
  – Opportunity for training feedback on detecting transitions
- Topaz begins processing and fills GPU threads up to a user-defined limit (let’s say 3)
  – Opportunity to process in user-defined priority order (shortest first, longest first, etc.)
- As Topaz completes each segment, it fills the next slot; toward the end, processing speeds up since fewer threads are burdening the GPU
- Topaz takes all completed segments and recombines them to create the final output file
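The priority-ordering idea in the steps above could be sketched like this: segments keep their original timeline position but are handed to the workers in the user’s chosen order. Everything here (the tuple layout, the `order` values) is a hypothetical illustration, not an existing Topaz setting:

```python
def schedule(segments, order="shortest"):
    """Return segments in processing order.

    segments: list of (timeline_index, duration_seconds) tuples.
    The timeline_index is kept so the final recombine can restore
    the original order no matter when each segment finishes.
    """
    reverse = (order == "longest")  # "shortest" → ascending duration
    return sorted(segments, key=lambda s: s[1], reverse=reverse)
```

So a 120-minute file cut at 6 confirmed transitions yields 7 segments, and "shortest first" would surface quick wins early while the recombine step still uses the stored timeline index.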