Aurora - Nidarosdomen, Trondheim, Norway (1080p50fps) upscale

Here’s my upscaled, frame-doubled version of Aurora’s fantastic performance at Nidarosdomen, Trondheim, Norway in 2017.
It is copyrighted, but freely available on the official website (you can see the original and compare). It’s on 1337x, so if you live in some barbaric place that locks you up just for visiting a torrent site (India), you may not want to click the following:
Aurora - 2017-11-02 Nidarosdomen, Trondheim, Norway v2.0 (1080p50fps)

After capturing, I muxed the files with MkvMerge, then demuxed the mkv with MkvExtract. I use v9.6, as it doesn’t have the corruption issue some later versions have.
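For reference, the MkvExtract step is a pair of one-liners; the track IDs and filenames below are assumptions, so list the tracks first and adjust:

```
mkvmerge -i capture.mkv                                   # shows the track IDs
mkvextract tracks capture.mkv 0:video.h264 1:audio.aac
```

This is the old (v9.x) `mkvextract tracks <file>` syntax; newer MKVToolNix versions reorder the arguments.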
I used DGIndexNV to demux the stream and create a project file (the .dgi).
Then in Notepad++ I created 01-ant.avs with the following:
DGSource("ant.dgi", deinterlace=0)

This is the Avisynth script that frameserves the video to Virtualdub2 so you can convert it to a Lagarith avi to feed to VEAI. Open it in Virtualdub2, set audio to none, set video compression to Lagarith, and save the avi.

Folks who have trouble opening mp4 files directly in VEAI: you can’t do that. VEAI, like most programs, has issues opening AVC streams. There is only one source filter that gets it right, and that is DGSource in the DGDecNV package ($15). A lot of effort and discussion with Nvidia went into creating it. You can read the details on the author’s website. You’ll need an Nvidia GPU to use it.

So now you have a big Lagarith avi. Open that with VEAI. At the time of this posting, I highly recommend v1.6.1, as it doesn’t have the blocking artifact issue of 1.7.1, and Gaia-CG works very well (IMO).
From my notes:
run2 Gaia-CG 2560x1440 200per cropoff

That means I created a folder called run2 for the png image sequence, and set VEAI to Gaia-CG with 200% upscale. Do not use VEAI’s downscaler. 150% is what’s needed to get 1920x1080, but VEAI can only do 200% and 400% properly (100% doesn’t work), so it would have to upscale to 200% first, then downscale that to 150%. Don’t let it: who knows what downscaling algorithm they chose, and most resizers produce artifacts (maybe not that visible, but they’re there).
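The percentage arithmetic, spelled out with this project’s resolutions:

```python
src_w, src_h = 1280, 720          # source resolution
target_w, target_h = 1920, 1080   # desired output

print(target_w / src_w)           # 1.5 -> a 150% scale, which VEAI can't do cleanly

# So upscale 200% in VEAI...
veai_w, veai_h = src_w * 2, src_h * 2   # 2560x1440
# ...and do the downscale to the target later, in Avisynth, with a resizer you trust.
```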

Have it process the video. On 1.6.1 you should have a GTX1080Ti GPU for maximum performance.

When done open this Avisynth script in Virtualdub2:
02-ant-imgsrc-svp-resize.avs containing:

super=SVSuper("{gpu:1}")
vectors=SVAnalyse(super, "{}")
SVSmoothFps(super, vectors, "{}", mt=1)

The ImageSource function opens the pngs. Change the path to whatever you used; my main file server is mapped to the U drive. There is a backslash between run2 and % (the forum software is hiding it). You can change the framerate here if so desired (like doing a 25fps PAL DVD at the original film framerate of 23.976). end must be set to your last frame (look in the run2 folder with Windows Explorer to find it).

ConvertToYV12 changes the colorspace (CAS requires this). It doesn’t visibly affect the video.
Spline64Resize is an excellent, artifact-free resizer that’s built into Avisynth.

super=SVSuper("{gpu:1}")
vectors=SVAnalyse(super, "{}")
SVSmoothFps(super, vectors, "{}", mt=1)

These 3 lines invoke SVPFlow’s frame doubler, changing the framerate to 50fps, which is needed for live action to avoid judder. The filter is 2 DLLs you have to locate, download, and copy to your Avisynth plugin folder. I use SVPflow v4.2.0.142; later versions add nothing and are not free. Like DGDecNV, you’ll need an Nvidia GPU to use it.

I can’t begin to describe just how difficult it is to properly double the framerate. The algorithm is hugely complex and way over my head.

I had to do this several times. I first tried FrameRateConverter with the normal preset: the video was filled with blocking artifacts. The slow preset was better, but not perfect. Only the slowest preset delivered artifact-free video, and it took 3 days to encode on my fastest Win7 box with a quad-core Xeon at 3.6GHz… SVPFlow did the same in about 18 hours; the GPU hugely speeds up the calculations. I could see no visible difference between the two. SVPFlow has had several more years of development and more developers coding it (it’s part of an open source project designed for realtime framerate conversion in players like MPC-HC).
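For comparison, the FrameRateConverter attempt is a one-liner along these lines (parameter names per the FrameRateConverter documentation; my exact call may have differed):

```
# CPU-only alternative: MVTools-based frame doubling, slowest preset
FrameRateConverter(NewNum=50, NewDen=1, Preset="slowest")
```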

CAS is a new sharpening algorithm AMD developed. It’s now available on Github for Avisynth because I requested it on doom9 (previously only the source code existed). CAS is amazing on some videos, like this one, and not so great on others; you have to experiment with each project.
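Putting the pieces described above together, 02-ant-imgsrc-svp-resize.avs looks roughly like this. The path, end frame, CAS strength, and the position of CAS in the chain are placeholders and guesses, not my actual values:

```
ImageSource("U:\run2\%06d.png", start=0, end=99999, fps=25)  # path and end are placeholders
ConvertToYV12()                           # CAS requires YV12
Spline64Resize(1920, 1080)                # 2560x1440 -> 1920x1080
super=SVSuper("{gpu:1}")
vectors=SVAnalyse(super, "{}")
SVSmoothFps(super, vectors, "{}", mt=1)   # 25fps -> 50fps
CAS(0.5)                                  # sharpening strength varies per project
```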

I used x264 8 bit that’s built into Virtualdub2:
Slower preset, Film tune, High profile, Level 4.2, YUV 4:2:0, SAR 1/1, 1-pass CBR at 25742 Kbps.
Save as MP4 (MPEG-4 Part 14).

I’m not a fan of 2-pass, as it can lead to inconsistent encodes. With some experience, you’ll learn what the proper bitrate should be for a given resolution and framerate. Bits/(Pixel*Frame), as shown in Mediainfo, gives a clue how much compression was used on a video. Youtube is around 0.040 and awful: way overcompressed. The minimum for decent quality is around 0.120; better is 0.200 and higher. This video is 0.248. The original file is 0.098, which is barely enough for VEAI to work its magic. The original file is 1009MB at 2406Kbps, 1280x720p. With VEAI, SVPFlow, and CAS at 1920x1080p 50fps, it needs about 25000Kbps to look good.
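The metric is easy to compute yourself: bitrate in bits per second divided by pixels per second. The numbers below are this project’s finished encode:

```python
def bits_per_pixel_frame(bitrate_bps, width, height, fps):
    # Mediainfo's Bits/(Pixel*Frame): bits per second over pixels per second
    return bitrate_bps / (width * height * fps)

# the finished 1920x1080 50fps encode at 25742 Kbps
print(round(bits_per_pixel_frame(25_742_000, 1920, 1080, 50), 3))  # 0.248
```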

To know what level to use, study this wiki:
AVC wiki

For the subtitles, I used SubtitleEdit to edit out the lyrics, as they were distracting.

I then remuxed the new mp4, aac audio, and srt subtitles with MkvMerge to the final file and renamed it to what it is. I don’t normally remux with the subtitles enabled, but in this case, they were useful to have on during the Norwegian dialog.

If you’re lucky, the audio and video will be in sync and you’re done. If not, make sure the video and audio are exactly the same length. If they are, you can use MPC-HC to find the offset (plus and minus on the keypad) and remux with that audio offset in MkvMerge (it might take a few iterations to get it perfect).
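MkvMerge’s command line can apply that offset directly with --sync; a sketch, where the track ID, offset value, and filenames are examples rather than this project’s:

```
# delay the audio by -120 ms (track 0 of the aac file); the value is in milliseconds
mkvmerge -o final.mkv new-video.mp4 --sync 0:-120 audio.aac subs.srt
```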

Hi, thanks for sharing your process.

I am new to video stuff, and I want to use your workflow as a base.

I have double-checked that the “preferred output” when using VEAI is TIFF 16. (I also read over there that it is the fastest output.)

I have double-checked that the TIFF format output is RGB48 (3x16-bit channels) as .tif.

As of version 1.7.1, VEAI is capable of loading the raw 000000.tif sequence.

  1. Why not take the output of DGSource("ant.dgi", deinterlace=0)
    *Optionally LinearTransformation(Input="SLog3", Output="Linear_BT709")

and feed it directly to VEAI (which internally is probably working in RGB48)?

  2. Also, I think the CAS implementations do not require YV12; they require a planar format,
    and RGB48 is planar (16-bit RGB).

Thank you again for your workflow,
Kind Regards.

Thank you. I plan to post several of these tutorials from the various projects as I complete them. It would be great if others shared their processes too.

There’s not much point in 10-bit color (from RGB48), as very few folks have 10-bit capability (not even me). Everything has to support 10-bit: monitor, driver, video card, all the plugins used, every processing step, the source video, the final codec, and all the intermediate files. Good luck with that. Maybe in 10 or 15 years… As I archive my work files (the ones not deleted) to 3 different drives, including the png image sequence, 600GB vs. 4TB matters. I 7z (store mode) the whole png folder to a single file for speed and checksum sanity. For me, VEAI and Avisynth are fastest with pngs.
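For a sense of scale, a single uncompressed RGB48 frame at the 200% upscale resolution works out to about 21 MB. Quick arithmetic, not a measured number:

```python
w, h = 2560, 1440                   # VEAI output at 200% of 1280x720
channels, bytes_per_sample = 3, 2   # RGB48 = 3 channels x 16 bits
frame_bytes = w * h * channels * bytes_per_sample
print(frame_bytes / 2**20)          # 21.09375 MB per frame before TIFF/PNG compression
```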

I’m not using 1.7.1 due to its issues, primarily poor quality output (compared to 1.2.0 and 1.6.1) and the blocking artifacts. I use 1.6.1 and 1.2.0 (for Artemis-MQ). If they can’t get Gaia-CG 1.6.1 and Artemis-MQ 1.2.0 quality back, well, I guess I’ll be using those versions forever. Speed is much less important than high quality. If I need more speed, I’ll build more VEAI boxes (I have 2 now, both with a GTX1080Ti GPU).

CAS needs ConvertToYV12 in Avisynth (it even says so on the github page). If the input is not YV12, an error is generated.