Video Enhance AI v1.7.1

I split a long video into several smaller parts so I could render it at different times of the day (I use MP4 CRF 0, theoretically lossless), and 2 days later, when I joined the files, I realized that some frames are missing at the beginning of each part.
I found out that this didn’t happen with AVI files, only with MKVs as far as I tested; I haven’t tried MP4 yet.
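One workaround, assuming the parts were all encoded with identical settings, is to do the final join with ffmpeg’s concat demuxer in stream-copy mode, so nothing is re-encoded and no frames are dropped at the segment boundaries (a sketch; the file names are placeholders):

```shell
# Build the concat list for the parts (names here are hypothetical).
printf "file '%s'\n" part1.mp4 part2.mp4 part3.mp4 > list.txt
cat list.txt

# Then join without re-encoding (command shown, not run here):
# ffmpeg -f concat -safe 0 -i list.txt -c copy joined.mp4
```

Because `-c copy` is a pure stream copy, the joined file should be bit-identical to the parts laid end to end.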

For my use cases Artemis tends to be my favorite, and the boxes are not quite as noticeable when upscaling, but I tend to use it to clean up grainy images, so upscaling is not typically my goal. Really hoping they address this with Artemis v9, or vWhatever, because it’s pretty unsightly.

Wow again! This is the AI model that others have said is so awesome? Why did I buy an AMD Vega 64 for my new PC? Why was I planning on putting 1.7.1 on? FORGET IT!

With those artifacts, I’m not going to waste my time! The card is going up for sale when I receive it and I’ll get another used GTX1080Ti and use 1.6.1 which does NOT have this artifact issue with Gaia-CG.

2.5x faster is meaningless when video is mangled like this. Newer is not always better!

I was getting blocking artifacts. I was using the x264 output at (the default?) CRF17.
I tried setting the CRF to 12 and the blocking artifacts went away.
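For anyone doing the encode outside VEAI, the same fix maps onto a single x264 flag; a sketch with ffmpeg (file names are assumptions, and the command is only echoed here, not run):

```shell
# Lower CRF means higher quality and larger files; dropping from the default
# 17 to 12 is what removed the blocking described above.
cmd="ffmpeg -i input.mkv -c:v libx264 -crf 12 -c:a copy output.mkv"
echo "$cmd"
```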

Importing the video into VEAI with my own AVS scripts resolved the frame stuttering issues I was seeing. This is a workaround, not a fix. The issue is definitely with VEAI’s frame server/ingest.

Audio processing in VEAI is broken. Disable it and you can also leave it out of your source file. There is currently no way to tune the mp4 encoder other than bitrate. I use the png image sequence option and then encode with x264 in Virtualdub2. I use Avisynth’s ImageSource function to import the pngs into Virtualdub2. I also do any final tweaks with Avisynth and Virtualdub2 before saving the mp4.
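If you’d rather skip the VirtualDub2 round trip, ffmpeg can encode a PNG sequence with x264 directly (a sketch; the frame-name pattern, frame rate, and CRF here are my assumptions, not VEAI settings):

```shell
# 24000/1001 is the exact NTSC-film rate (~23.976 fps):
awk 'BEGIN{printf "%.3f\n", 24000/1001}'

# Encode the numbered PNGs (command shown, not run here):
# ffmpeg -framerate 24000/1001 -i frame_%05d.png -c:v libx264 -crf 12 \
#        -pix_fmt yuv420p video_only.mp4
```

The audio would still be muxed back in afterwards, as described above.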

You should upscale your whole video, not work in sections.

Audio must be demuxed from the video before VEAI and remuxed back in at the end with MP4Box or MkvMerge. If you do try to use audio in VEAI, it re-encodes the audio to AAC stereo (I forget the bitrate). If it was 5.1 channel, it won’t be after VEAI. Another reason to demux and remux with the original audio.
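A minimal sketch of that round trip with ffmpeg and mkvmerge (file names are placeholders; both steps are stream copies, so the original audio is never touched):

```shell
# Pull the audio out before VEAI, put the original back in afterwards.
demux="ffmpeg -i source.mkv -vn -c:a copy audio.mka"
remux="mkvmerge -o final.mkv upscaled.mkv audio.mka"
echo "$demux"
echo "$remux"
```

Since VEAI only ever sees the video, the 5.1 track survives intact.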

The source filter in VEAI is no good. They appear to be using something similar to Avisynth’s DirectShowSource which has issues with AVC and MPEG2. I use Virtualdub2 + Avisynth and a proper source filter like DGMPGDec, DGAVCDecNV, DGAVCDec, and save as Lagarith avi. For MPEG2, sometimes the frame count is wrong with all source filters (you’ll know because of AV sync issues later), and I have to use VirtualdubMPEG2 and save to Lagarith avi. Feed VEAI only Lagarith or RGB avi files (one person here has success with image sequences). I think someone mentioned here that 1.7.1 can open Avisynth scripts now. But I’m not touching 1.7.1 because of the blocks in video issue.

VFR is a pain. The best option I’ve seen was posted here:

Sep 30

The best way to deal with vfr is to extract the timecode in V2 format from the original video.
I use avisynth for that

FFVideoSource("yourvideo", timecodes="timecodes.txt")

Then load your Avisynth script in VirtualDub2; in just a few seconds the timecodes text file will be written and you can close VirtualDub. Demux your audio; the program to use will depend on your original container. There are tons of freeware tools available for that task, or you can still use VirtualDub to re-encode just the audio.

Process your video normally in VEAI by importing it there directly. Once finished, use the MKVToolNix GUI to remux your video/audio/timecodes together. And that’s it: you will have the same VFR video, upscaled.
And if you prefer MP4 format over MKV, then use Avidemux: import your MKV, choose copy on both streams, and MP4 output. This will normally keep the variable frame rate.
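If the source is already an MKV, the same v2 timecode file can also be pulled without Avisynth, using mkvextract (a sketch; I’m assuming track 0 is the video, and that you have a current MKVToolNix, where the mode is called `timestamps_v2`):

```shell
# A v2 timecode file is just this header plus one timestamp in ms per frame:
printf '# timecode format v2\n0\n40\n80\n' > timecodes.txt
cat timecodes.txt

# Extract the real one from the source, then reapply it when remuxing
# (commands shown, not run here):
# mkvextract source.mkv timestamps_v2 0:timecodes.txt
# mkvmerge -o final.mkv --timestamps 0:timecodes.txt upscaled.mkv audio.mka
```

The three example timestamps above (0, 40, 80 ms) are what a constant 25 fps stretch looks like; in a real VFR file the spacing varies.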

When I get to the Voyager upscale project, I’ll have fun with that…

Maximum quality is everything with the projects I do, so I work with Huffyuv and Lagarith and png for intermediate files. I have 80TB and 96TB file servers and 16TB local. When HAMR drives are available, I’m going to build a new file server with around 1000TB. I use two 8TB externals for archiving data and get new ones after they fill up (2 copies of the data). Windows 7 is the fastest at video processing but for assembling it all I use XP with an Intel DCS3700 SSD (fast datacenter drive with MLC flash). I do have Premiere and Vegas on one of my Win7 boxes but rarely need a NLE.

This is just a hobby for me. Maybe eventually I’ll figure out a way to make money with all this stuff.


The GTX1080Ti is using the FP32 models, whereas the RTX 20 series switched to FP16. I don’t know what the AMD cards are using in the current release.

I’d like to see H265 output for good quality and smaller file sizes.


I’m making some money doing VHS transfers of homemade tapes; the results I get from Artemis LQ are like nothing I’ve ever seen, especially with deteriorated tapes. Most people who do this kind of job use just a VCR-to-DVD converter, and the results are usually terrible.
I actually hope people step up and start using software like this; it’s not my job, just a hobby.

Sorry for late reply. I am trying to merge PAL audio with NTSC video, but I found that NTSC video contains more frames than PAL. What can I do?

I’ve tried QTGMC to fix the source and it actually helped a lot with this.

AMD is using FP16; I read that in this thread somewhere. AMD support was added in 1.7.0 per the release notes. That’s why I won an AMD Vega 64, which does 20.4 TFLOPS at FP16. The GTX1080Ti is 0.166(!) TFLOPS at FP16. I just looked them up again on Wikipedia.

AMD is not supported in 1.6.1. Can anyone confirm that?

I’m not going to bother with 1.7.1 based on what I’ve seen posted here re: the block artifacts. Thus the reason I plan to sell the Vega 64 when it arrives (I won the auction a few days ago)… I’m not a happy camper at the moment. I just re-verified the specs, and it does seem the GTX1080Ti is the best for 1.6.1. So I’m watching several eBay auctions at the moment.

I just did a VHS (commercial tape) transfer much better than I was able to do in 2009. This time I used a prosumer SVHS editing deck with built-in TBC and an HVR-1150 capture card. Using an external TBC is supposed to be better, but I’m very happy with the results. With VEAI, I was able to bump it from 480p to 576p in the final file. It looks massively better than the old file, almost DVD quality, which seems impossible with VHS topping out at about 250 lines of horizontal resolution.

It took 10 rips of the tape before I got a good one (no frame drops and perfect AV sync). I ended up reinstalling XP and everything. Something was borked in the install causing Virtualdub to drop 10% of the frames. I made an Acronis True Image backup immediately.

QTGMC helped a lot, but finding the right settings took a lot of tries. Well at least now I know how to do it right…

Yes, it’s been supported since 1.7.0.

Additional performance numbers from the Gigapixel thread:
Not directly comparable, but indicative of the trend.

Performance numbers for a Quadro RTX 5000 (pro GPU) and an RTX 3080 (gaming GPU).

Both 7 - 8 sec for 1080p to 4K.

RTX 5000: FP32: 11.15 TFLOPS / FP16: 22.30 TFLOPS
RTX 3080: FP32: 29.77 TFLOPS / FP16: 29.77 TFLOPS

If the Tensor Cores get used in the future, the numbers will not change much here (Quadro RTX 5000 vs RTX 3080; though maybe they will for RTX A5000 vs RTX 3080), because:

Peak FP16 Tensor TFLOPS with FP16 accumulate:
RTX 5000: 89 TFLOPS
RTX 2080: 84 TFLOPS
RTX 3080: 119/238 <-- second is Sparse Feature.

Peak FP16 Tensor TFLOPS with FP32 accumulate:
RTX 5000: 89 TFLOPS
RTX 2080: 40 TFLOPS
RTX 3080: 59.5/119 <-- second is Sparse Feature.

Just to get a feel for where things really stand:
There is also the Nvidia A100, Nvidia’s data center GPU; look at the numbers here.

Peak FP32: 19.5 TFLOPS
Peak FP16: 78 TFLOPS
Peak TF32 Tensor Core: 156 TFLOPS | 312 TFLOPS <-- second is Sparse Feature
Peak FP16 Tensor Core: 312 TFLOPS | 624 TFLOPS <-- second is Sparse Feature

Price for an A100 PCI-E: 9,125 €


Sadly the output settings seem to be irrelevant, as that would have been an easy fix. I already tested with CRF 0 and it made no difference, and the same goes for all the other output methods. It’s definitely just an issue with the Artemis v8 model. It’s just significantly more noticeable when using the LQ version and leaving it at 100% scaling.

I have a similar project: PAL video with TV audio (from a divx download). It’s one of those 90s series where they replaced much of the original music in the DVD release. I bought both the PAL and NTSC boxsets. The PAL has noticeably better quality, which means they may have done a separate film scan rather than create the PAL from the NTSC release, as is sometimes done. Like your project, the frame counts are different and the audio doesn’t sync up, mostly because the audio comes from broadcast TV. The frame rates are of course different: 23.976 for NTSC (after TFM().TDecimate()), 25 for PAL, 29.97 for the TV broadcast. The audio lengths sometimes don’t match even after correction in Audition, and it varies by episode. Some episodes are a different edit from the TV broadcast. It’s a mess.

Segmenting each episode into clips (the video between the commercial breaks) has helped, but I’m going to need an automated way to sync the audio. I bought Plural Eyes for Vegas, but the audio sync is worse than it can handle. Plural Eyes is supposed to handle slight length differences automatically, and it does if the audio is not too far off (within about ±2%).

So I’m writing my own, better Plural Eyes. It’ll be a commercial product. It’s been a very steep learning curve so far, between figuring out the algorithms and writing code I’ve not done before.

With your puzzle, first make sure the NTSC and PAL are the same edit. I would expect the NTSC to have fewer frames than the PAL.

It may be solvable with Plural Eyes and Vegas. Split the PAL and NTSC video into segments (the video between commercial breaks). Demux the PAL video so you have audio files. Import the NTSC video segments into Vegas. Import the PAL audio segments into Vegas. Drag them onto the timeline in the proper order. Use Plural Eyes to sync the PAL audio with the NTSC video clips. Mute the NTSC audio. Export (render) your project into mp4 (or whatever you like).

Premiere was useless for me. I couldn’t figure out how to make PE work with it.

I am using the NTSC source because it has much better quality than the PAL one, but I need to merge the output file with the PAL audio.
I checked, and the NTSC source has 700 more frames than the PAL one (at 25 fps that works out to something like 28 s more).
Not all episodes have this issue; sometimes it’s only 1 s more, the audio doesn’t go too far out of sync, and the result is good. But many other episodes do have this problem, and doing it manually every time would be too time-consuming; an automated way would be best.
If there isn’t any way to automate the process, I think I’ll use the PAL sources only for the affected episodes. Not the best thing to do, but after days of this I’m running out of options that wouldn’t require too much time. If you have any other ideas, I’d really appreciate them!
Thank you for your post!
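For what it’s worth, the numbers in the post above check out, and they also show why the audio drifts in the first place. A quick sketch (assuming the NTSC set is film content running at 24000/1001 fps):

```shell
# 700 extra frames, counted against PAL's 25 fps, in seconds:
echo $((700 / 25))

# PAL speedup factor: PAL material runs this much faster than 23.976 fps
# timing, so PAL audio must be stretched (slowed) by the same factor to
# stay in sync with NTSC video:
awk 'BEGIN{printf "%.5f\n", 25 * 1001 / 24000}'
```

That ~4.27% stretch is the first correction to try before chasing per-episode edit differences.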

Just bought the software.

“login failed” email or password don’t match, or no topaz account etc…

How do I continue from here? I tried removing and re-installing (twice), rebooting, and removing registry entries, and I verified the user details are correct around 10 times. The website works fine with the same login info… I even tried installing an older version…

There went 250 euros?

Not cool. :frowning:

You’ve got to use the gibberish password Topaz assigns you, not the Topaz account password you made.