You are correct, up to a point: this video was originally created on film. The standard speed for film is 24 FPS. It was converted for use on TVs, which (NTSC USA) run at 29.97 FPS. So they apply a little maneuver called [2:3 Pulldown] (Three-two pull down - Wikipedia) - this explains a lot! (Including some stuff about another “half” frame.)
I think this little detail may explain a lot about the complications involved in deinterlacing issues from the bumpy evolution of MPEG, the be-all and end-all standard of video encoding. There are a lot of great, accurate, and to-the-point articles about MPEG and its many forms and containers available on Wikipedia.
It may also explain the need for emphasizing temporal factors when deinterlacing video that originated on movie film and was converted to 29.97 FPS for television.
The bottom line: if the original video was shot on 24 FPS film and was interlaced for TV, it may actually have been labeled as progressive at 29.97 FPS, with each frame consisting of two interleaved scan fields. The actual frame rate was increased for TV via 2:3 pulldown (and some gar-bagé about a
Bottom line: converting to ‘modern’ 24p for MP4 or MOV involves removing a lot of that Mickey Mouse stuff.
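To make the 2:3 pulldown arithmetic concrete, here is a minimal Python sketch (the frame labels A–D and the function name are my own, for illustration only). It spreads each group of 4 film frames across 10 fields, which pair up into 5 interlaced video frames:

```python
def pulldown_fields(film_frames):
    """Map film frames to interlaced fields using the 2:3 cadence."""
    cadence = [2, 3, 2, 3]  # fields emitted per film frame, repeating
    fields = []
    for i, frame in enumerate(film_frames):
        fields.extend([frame] * cadence[i % len(cadence)])
    return fields

# Four film frames (A, B, C, D) become ten fields -> five video frames.
fields = pulldown_fields(list("ABCD"))
video_frames = [tuple(fields[i:i + 2]) for i in range(0, len(fields), 2)]
print(video_frames)  # [('A', 'A'), ('B', 'B'), ('B', 'C'), ('C', 'D'), ('D', 'D')]
```

Note that 4 film frames become 5 video frames, so 24 FPS becomes 30 FPS, which NTSC then slows by 1000/1001 to 29.97. The two ‘dirty’ frames that mix fields from different film frames - (B, C) and (C, D) above - are exactly the Mickey Mouse stuff that inverse telecine has to detect and remove.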
The max of 4 processes may work. But, as I discovered, don’t select all four (or more) of your videos and attempt to export them at the same time. It will make a mess.
BUG REPORT: This version doesn’t like working offline with Gaia models, even though they are all already in the models folder. It deletes the model it wants to use and tries to re-download it. This results in an unknown error. Once I went back online, it downloaded the model and worked fine.
There seems to be no issue working offline with the other models.
Currently, not overclocked. Gigabyte Z590 Vision mobo, 32 GB G.Skill SDRAM @ 4000, Intel i9-11900K CPU, and a Gigabyte GeForce RTX 4090 Gaming OC (not overclocked, yet). For drives, I am running three 2-TB NVMe M.2 drives and an 8-TB Seagate SATA-3 drive for local bulk storage.
This system is very fast, stays fairly cool, and, according to my UPS software status, draws approx. 220 watts at idle and as high as 470 watts when processing video with TVAI and other utilities. (I am told that the GPU can cause very brief ‘power spikes,’ but I have no way of measuring this.) The whole thing runs on an EVGA 850-watt PSU and is cooled by a Corsair iCUE cooler with three fans.
Oh yes, I’m running the most recent Windows 10. I’ll move to Windows 11 when (if) they fix the aberration they first released called the Start Menu.
If I crank it up, this rig can bench in the 99th percentile world-wide. (At least until some newer hardware comes along.)
V3.x has a very different feel from what we were accustomed to in V2.x. It can do a lot more, but the technique of using it is very different. The feeling is akin to the reaction of those of us who have just moved to the newest version of Windows from an older one, especially if there have been major changes. And TVAI v3.x is a major change from its predecessors.
The new GUI and functionality are a big step up from what went before, but they’re still cleaning up many of the glitches that come with any newly developed software.
One of the chronic problems is that the import code isn’t always interpreting the source file’s specs correctly, which results in other problems, such as deinterlacing and noise issues.
The deinterlacing itself is actually very good, but until they get the source-video spec identification down, it’s going to be a problem. (Although accomplishing this is not going to be easy.)
One of my pet peeves is that the Preview/Export codec spec ‘automatically’ changes itself back to the default now and again when I don’t expect it to. I wish they would add a lock button to prevent this annoying ‘feature.’ @yazi.saradest
Since good deinterlacing is key to preparing source video for enhancement and upscaling (especially, judging by the forums, with the most recent versions of TVAI 3.x), I’ve started to do a bit of web surfing in the hope of finding a definitive guide to a better understanding of the archeology and processes necessary to accomplish this successfully.
So far, I’m certain I haven’t yet found the ‘Rosetta Stone’ of deinterlacing, but I did run across this little nugget. It is a PDF of a research paper that does shed a lot of light on the subject.
I haven’t had a chance to absorb the whole thing yet, but I think it contains a lot of relevant and useful information for those looking to do (optimal) deinterlacing.
Yes, the offline option works, as long as you have all the models in place. As per my post above, it still works for all the other models I’ve tried, except Gaia.
Since we’re making our wishes known: I want my 90-minute movie to process in 30 minutes. Hop to it, Topaz! Just for reference, two other AI video enhancement ‘competitors,’ AVCLabs and HitPaw, are at least three times slower than Topaz.
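Just to put a number on that wish, here is a trivial Python sketch of the arithmetic (the function name is mine):

```python
def speed_factor(runtime_min, processing_min):
    """How many times faster than real time a job completed."""
    return runtime_min / processing_min

# A 90-minute movie processed in 30 minutes would be 3x real time.
print(speed_factor(90, 30))  # 3.0
```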
Maybe I’m missing something, but I set a specific export folder in the preferences and my exports always end up there. I never use Export As, only Export.
You can set it in preferences to export to the same location every time. I like to set it to the source folder, so it’s always right beside the original video.
Preview panes are noticeably out of sync by several frames. In fact, if you start playing back the preview, the left pane starts playing and the right pane follows after a delay of several frames, leaving them out of sync.
Yes. Why isn’t the export-location functionality working? Previously, putting a “.” in the preferences would set the destination to the original import folder.
I don’t have a clue why, as I’ve always exported to a different location since the choice has been available. I also always start my video processing from a set folder, moving the file or files there first. It’s just an easier way for me to keep track of my files.
What are you processing? With my RTX 4090, I’m getting at least double the frame rate on Proteus SD to FHD compared to the v3.0.10 release. I think the rendering speed will keep going up, but I’d be skeptical of being able to enhance using AI faster than real time, at least in the near future…
SD to SD on Artemis Medium and High Quality is over 2x faster than real time, at least on my 3080 Ti system.
But yeah, I also don’t think any upscaling will get faster than real time any time soon.