I am trying to use the image sequence output from Video Enhance AI (as I have to interrupt processing at times).
My question is: what is the easiest way to assemble the enlarged output video from the image sequence and add back the original sound?
I typically reassemble the images with ffmpeg. Something like:
ffmpeg -framerate 29.970 -pattern_type glob -i "*.jpg" -c:v libx264 -pix_fmt yuv420p new_video.mp4
But you have to make sure to match the framerate of the input source (NTSC 29.970 is really 30000/1001, and ffmpeg accepts the fraction directly). Then I extract the audio from the original source, e.g.:
ffmpeg -i original.wmv -vn -acodec copy original.wma
Here you have to use the correct filenames and an output extension that matches the source's audio codec. In the end I merge the audio track and the video file:
ffmpeg -i new_video.mp4 -i original.wma -map 0:v -map 1:a -c:v copy -shortest new_video_with_audio.mp4
I am sure this is not the best way to do it and will not give you reliable results with variable frame rates, but for me it does the trick. I would also very much welcome hints to do it better, though!
Read the thread at:
My post there:
"Audio processing in VEAI is broken. Disable it and you can also leave it out of your source file. There is currently no way to tune the mp4 encoder other than bitrate. I use the png image sequence option and then encode with x264 in Virtualdub2. I use Avisynth’s ImageSource function to import the pngs into Virtualdub2. I also do any final tweaks with Avisynth and Virtualdub2 before saving the mp4.
You should upscale your whole video, not work in sections.
Audio must be demuxed from the video before VEAI and remuxed back in at the end with MP4Box or MkvMerge. If you do try to use audio in VEAI, it reencodes the audio to AAC stereo (I forget the bitrate). If it was 5.1 channel, it won’t be after VEAI. Another reason to demux and remux with the original audio.
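The demux/remux round trip described there can also be sketched with ffmpeg, if MP4Box or MkvMerge isn't handy; the idea is identical. Filenames are placeholders, and the first command only synthesizes a stand-in source so the rest can run anywhere:

```shell
# Stand-in source (use your real file instead).
ffmpeg -y -f lavfi -i testsrc=duration=1:size=320x240:rate=25 \
       -f lavfi -i sine=frequency=440:duration=1 \
       -c:v libx264 -pix_fmt yuv420p -c:a aac source.mp4

# 1) Demux: keep the original audio bit-exact, give VEAI the video only.
ffmpeg -y -i source.mp4 -vn -c:a copy original_audio.m4a
ffmpeg -y -i source.mp4 -an -c:v copy video_only.mp4

# ...VEAI upscaling happens here, on video_only.mp4 only...

# 2) Remux: marry the (upscaled) video to the untouched original audio.
ffmpeg -y -i video_only.mp4 -i original_audio.m4a \
       -map 0:v -map 1:a -c copy remuxed.mp4
```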
The source filter in VEAI is no good. They appear to be using something similar to Avisynth’s DirectShowSource which has issues with AVC and MPEG2. I use Virtualdub2 + Avisynth and a proper source filter like DGMPGDec, DGAVCDecNV, DGAVCDec, and save as Lagarith avi. For MPEG2, sometimes the frame count is wrong with all source filters (you’ll know because of AV sync issues later), and I have to use VirtualdubMPEG2 and save to Lagarith avi. Feed VEAI only Lagarith or RGB avi files (one person here has success with image sequences). I think someone mentioned here that 1.7.1 can open Avisynth scripts now. But I’m not touching 1.7.1 because of the blocks in video issue."
Example AviSynth scripts for a 480-line 29.97fps telecined interlaced DVD with a bit of noise:
All the standard MPEG2 source filters failed to find the correct frame count, so I used VirtualdubMPEG2 to open the mpg and save as a Huffyuv avi. This is after demuxing the mkv file with MkvExtract (which also creates the audio file you’ll need later). Note I used Huffyuv at this point rather than Lagarith as it’s faster and works properly with interlaced content. Lagarith is progressive only (despite what its documentation says); Huffyuv, conversely, is interlaced only.
01c.avs (the filename to be opened in Virtualdub2 and processed by Avisynth; use any text editor like Notepad to create avs files)
In that file I have:
QTGMC( Preset="Fast", EZDenoise=5, NoiseProcess=1, NoisePreset="Slower", TR0=1 )
That does an IVTC and a high quality deinterlace, outputting 23.976fps video. Note I disabled audio in Virtualdub2 and VEAI. Whether to use AssumeTFF or AssumeBFF depends on what Mediainfo tells you: look at the original video file with it, and it’ll tell you the field order.
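For readers without Avisynth: ffmpeg's fieldmatch/decimate chain is a rough (and noticeably lower quality) analogue of the IVTC part of that QTGMC step. A hedged sketch, where the first command just synthesizes telecined test footage to stand in for your capture:

```shell
# Synthesize 23.976fps footage and telecine it to 29.97 (stand-in source).
ffmpeg -y -f lavfi -i testsrc=duration=1:size=320x240:rate=24000/1001 \
       -vf telecine -c:v libx264 -pix_fmt yuv420p telecined.mp4

# Inverse telecine: match fields back into whole frames, deinterlace any
# leftover combing, then drop the duplicate frame in each 5-frame cycle.
ffmpeg -y -i telecined.mp4 \
       -vf "fieldmatch,yadif=deint=interlaced,decimate" \
       -c:v libx264 -pix_fmt yuv420p progressive.mp4
```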
Set to Lagarith compression in Vdub2 and Save video. Note my filenames clue me in to what processing was done:
After that finishes, open the avi in VEAI and upscale (I use 1.6.1 due to blocking artifacts in 1.7.1; Gaia-CG is by far the best model in 1.6.1, IMO). Output in png.
Notes I keep (yes, this is the 4th try… on my first run the QTGMC settings were too soft, limiting upscaling, and 200 percent didn’t provide the detail I was after):
run4 Gaia-CG 2880x1920 400per cropoff
This is the answer to your question. Change the image folder, fps, and end frame to suit. This is also a very handy way to easily change the framerate (like if you’re doing a PAL 25fps video that was shot on film at 23.976fps):
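Since this thread already leans on ffmpeg, the same retiming trick can be sketched there too: an image sequence carries no timing of its own, so whatever `-framerate` you assemble at becomes the new rate, and the audio just needs slowing by the matching 25 → 23.976 ratio (atempo preserves pitch; purists would use asetrate/aresample to also undo the PAL pitch shift). The testsrc/sine lines only create stand-in inputs:

```shell
# Stand-in PAL-rate image sequence and audio.
mkdir -p seq
ffmpeg -y -f lavfi -i testsrc=duration=1:size=320x240:rate=25 seq/%06d.png
ffmpeg -y -f lavfi -i sine=frequency=440:duration=1 -c:a aac pal_audio.m4a

# Reassemble at film rate; slow the audio by 23.976/25 = 0.95904 to match.
ffmpeg -y -framerate 24000/1001 -i "seq/%06d.png" -i pal_audio.m4a \
       -map 0:v -map 1:a -filter:a "atempo=0.95904" \
       -c:v libx264 -pix_fmt yuv420p film_rate.mp4
```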
I did some sharpening with CAS (a brand new algorithm by AMD that often works better than older methods). Spline64Resize is typically better than Lanczos, Bicubic, and Bilinear: essentially a perfect resizer that is artifact free (see forum.doom9.org). CropVD is a simple function I wrote that fixes the bizarre Crop dimension ordering and values Avisynth uses, making them VDub style; I still need to publish it to doom9.
In VDub 8bit x264 I used Placebo Film High L4.1 YUV 4:2:0 SAR 1/1 10722Kbps (1 pass, CBR):
Remux your video, audio, and subtitles (if any) with MkvMerge.
Rename the final file as desired.
If you think this is not a simple answer, you are right. Video processing is a complex process.
Wow - that’s more than I bargained for! Thank you anyway; I’ll need to work through this in time.
There are many different ways to process audio and video, and it takes time to learn some of the more popular methods. Some folks prefer to only use GUI apps. I don’t care either way; just whatever works. And every day you’ll learn something new.
Extract the sound from the original file with a tool. Load the first image of the series into VirtualDub, set the fps you want, select the audio file, and export the final video.
I have Pinnacle Studio 21.5 Ultimate. I put the whole sequence on track 1, add the original video on track 2, separate the original video and audio, delete the video track, and render the image sequence with the original audio. Just make sure to use the same framerate as the original file so the audio stays in sync with the image sequence.
Thanks, I used ffmpeg to assemble the video, then Magix Movie Studio to add the original audio.
The frame numbering output by VEAI is a bit confusing as it starts at 0, so I ended up with one duplicate frame when I started the next session at what I thought was the first unprocessed frame.
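A quick way to avoid that off-by-one: since the sequence starts at 000000, a folder containing N frames ends at frame N-1, and the next session should begin at frame N (not N+1). A small sanity check in plain shell, with a made-up folder name:

```shell
# Simulate a partial VEAI run: five frames numbered 000000..000004.
mkdir -p out
for i in 0 1 2 3 4; do : > "out/$(printf '%06d' "$i").png"; done

count=$(ls out/*.png | wc -l)          # 5 frames on disk
last=$(printf '%06d' $((count - 1)))   # last written frame: 000004
next=$count                            # so resume the next session at frame 5
echo "last written: $last, resume at frame: $next"
```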
How long does it take to get the images into the timeline? I tried this with Magix Movie Edit Pro and gave up after about 10 minutes. I ended up with command line ffmpeg and then using Magix just to add the original audio to the ffmpeg generated output video.
It all depends on how many frames you want to import. I tried around 1,000 and that was pretty quick. At around 8,000 Pinnacle starts to have problems, so it’s probably best to import in batches with no compression and, once all the parts are in, put them all on the timeline.
That’s why I like to use Avisynth’s ImageSource function + Virtualdub2 to handle the pngs: it opens instantly no matter how many images are in your output folder. Virtualdub2’s own image sequence importer, by contrast, takes 5 minutes for 100,000 pngs over the LAN, and about a minute locally on the file server. That’s for Windows 7; XP takes about twice as long, and Windows 10 seems to never open the sequence (I ended Vdub2 in Task Manager after an hour or two…).
And once it’s in Avisynth, you can Crop, use the fabulous Spline64Resize, Tweak the saturation, brightness, contrast, gamma, etc., and use CAS sharpening (CAS works best with high def; it seems to only amplify noise with std def). I find I can do most post-VEAI processing in Avisynth and just use Vdub2 for making the mp4 with x264. I often reuse a script, or parts of several scripts, on the next project via copy/paste.
You can add your audio back in using Avisynth too. I prefer doing that with MkvMerge as most of my projects require a MKV file due to SRT subtitles, XML chapters, or using AC3 or FLAC audio with H264 video (the MP4 container is quite limited in what it allows).
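If MkvMerge isn't installed, ffmpeg can do the same final MKV remux (MKV happily holds H264 + AC3 + SRT together; XML chapters are left out of this sketch for brevity). All three inputs here are synthesized stand-ins so the commands can run anywhere:

```shell
# Stand-in upscaled video, original AC3 audio, and an SRT subtitle file.
ffmpeg -y -f lavfi -i testsrc=duration=1:size=320x240:rate=25 \
       -c:v libx264 -pix_fmt yuv420p video.mp4
ffmpeg -y -f lavfi -i sine=frequency=440:duration=1 -c:a ac3 audio.ac3
printf '1\n00:00:00,000 --> 00:00:01,000\nHello\n' > subs.srt

# Remux everything into MKV without reencoding the video or audio.
ffmpeg -y -i video.mp4 -i audio.ac3 -i subs.srt \
       -map 0:v -map 1:a -map 2:s -c:v copy -c:a copy -c:s srt final.mkv
```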
So VirtualDub2 isn’t really optimized for Windows 10, or Windows 10 isn’t optimized for VirtualDub2.
Virtualdub came out around 1998, well before Windows 10. Virtualdub2 is a fork of the original project with added features. I don’t know if any Windows 10 specific optimizations were done.
Virtualdub2 sort of works in Windows 10. I say “sort of” because there’s a bug in Windows 10 with allocating memory to 32bit processes. For example, you won’t be able to use SVPFlow (via Avisynth) at the same time you use Virtualdub2’s x264. That’ll get you a “corrupt data” “-100” error. Otherwise Virtualdub2 and Avisynth work on Windows 10. On Windows 7 or XP, they both run fine with no issues.
A lot of folks have a hard time with Avisynth because it is controlled by an avs file you open in a video editor (like Virtualdub2). Once you understand that abstraction, it’s easy to use. You create and edit the avs file with any text editor; I use Notepad++ because it has a tab for each open file, and you’ll usually have 2 or 3 avs files for each video when working with VEAI.
If it wasn’t clear in my previous post, don’t open image sequences with Virtualdub2’s file open dialog (which invokes the slow image sequence importer). Open them indirectly via Avisynth by opening an avs file with the ImageSource function in Virtualdub2. Done that way, it opens instantly.
Here’s a post VEAI script I have open at the moment:
avs filename (I made up that filename to hint to me what I’m doing):
You’ll note the ImageSource function opens the image sequence. I set it to 50fps. The original video was 25fps interlaced, but after QTGMC (without fpsdivisor=2) it was 50 unique frames per second (found by analyzing the video in Virtualdub2 and toggling the QTGMC line on/off in the avs script, then reopening the video). Some interlaced video is like that, and it’s great if you want a high framerate (best for live action like a concert). If each frame is doubled (same content), that’s when you use SelectEven() or fpsdivisor=2. If you still want a real doubled framerate, that’s when you use SVPFlow in your Avisynth script to interpolate new frames. Most people prefer movies at 23.976fps (if doubled you get the soap opera effect). End is ??? because the video is still processing in VEAI and I don’t yet know what the last frame number is (yes, it would be nice if ImageSource were smart enough to figure that out for you).
The path to the images is set, and %06d.png tells it to open a png sequence with 6-digit numbers in the filenames. I crop 2 pixels off the left, 4 off the right, and 10 off the top and bottom (they were black). I set the colorspace to YV12 for CAS to function. I resize and slightly increase the saturation. I use Contrast Adaptive Sharpening (CAS) with 0.7 for a moderate amount of sharpening. These values I tweak once VEAI processing is done, and sometimes I’ll add additional commands to the Avisynth script. Sometimes I’ll use a filter or three in Virtualdub2. This flexibility lets you tweak to get the best results.
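For the ffmpeg-inclined, a rough analogue of that Avisynth chain (crop, spline resize, saturation tweak, CAS sharpening). This is a sketch, not the poster's actual script: ffmpeg's `cas` filter requires ffmpeg 4.3 or newer, its `flags=spline` scaler is only an approximation of Spline64Resize, and the png folder is a synthesized stand-in:

```shell
# Stand-in png sequence at 50fps.
mkdir -p pngs
ffmpeg -y -f lavfi -i testsrc=duration=1:size=320x240:rate=50 pngs/%06d.png

# Crop 2 left / 4 right / 10 top and bottom, spline resize, nudge the
# saturation up, then CAS sharpen at strength 0.7.
ffmpeg -y -framerate 50 -i "pngs/%06d.png" \
       -vf "crop=iw-6:ih-20:2:10,scale=640:480:flags=spline,eq=saturation=1.1,cas=strength=0.7" \
       -c:v libx264 -pix_fmt yuv420p tweaked.mp4
```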
I do very little tweaking. Go to forum.doom9.org to see what the hardcore videogeeks do. A lot of it is way over my head.