AI Gigapixel batch processing


Is it possible to get batch processing in AI Gigapixel?

I made this video recently (Jacob Larsen - A belated #Halloween treat :) But to make up... | Facebook), where I used AI Gigapixel to rescale every single image of the video. The original video was shot in SD and heavily compressed, but upscaling it to HD still gave great results, so having this ability for more video work would be great.

I had to load and process a small group of images at a time. Needless to say this was quite a laborious task :slight_smile: since the video consists of thousands and thousands of single image frames.

In Topaz Studio batch processing is a breeze :slight_smile: so if it’s possible to get something similar in AI Gigapixel, that would be nice.

Thank you.


Open the desired number of images (Shift-select them in the file dialog) and run.

Thank you.

Yes, I know I can open multiple images, but the problem is that AI Gigapixel slows to a crawl when the number exceeds a few hundred.

Try loading 15,000 images… it’s too much for AI Gigapixel and it simply chokes.

I’m not sure why, but it seems to update some internal list each time a new image is loaded (when you load multiple images in one go), and it appears to walk the full list for every single image, which makes loading slower and slower. After it has ingested a few hundred images it becomes so slow that loading 15,000 images would take days or weeks to complete (I’m not talking about the resize processing time, which is actually quite fast).

In Topaz Studio this loading problem does not exist, because batch-processing mode doesn’t need to load all the images at once; it processes the images one after another.

1 Like

I admit I do not know that. Try support.

1 Like

Oh ok :slight_smile: I thought Topaz read these forums, but I will email them directly.

1 Like

A.I. Gigapixel will only be limited by your hardware. We’ve had reports of users successfully running up to 10,000 images at once, though, admittedly, they have some wicked powerful hardware.

The actual processing and the interface rendering are handled a bit differently, and I suspect the limit on your interface loading has to do with system RAM size. Do you think you could provide a DxDiag report? Here’s how:

  • From the Windows desktop, press the Windows key and the R key to open the Run window.
  • In the Open: field of the Run window, type “dxdiag”. Click OK or press Enter to open the DirectX Diagnostic Tool (if prompted to check whether your drivers are digitally signed, click Yes).
  • In the DirectX Diagnostic Tool window, click the Save All Information button.
  • In the Save As window, the DirectX information will be saved as a text (.TXT) file. Designate where the file will be saved (such as the Desktop or other easily accessible location) and then click on the Save button.
  • After you’ve saved the file, please attach it to your reply.

Thanks so much for your patience and I will await your reply.

1 Like

I did some tests and found that AI Gigapixel (at least on my system) got progressively slower at loading as the number of images increased. Loading 10 images in one go was fast, but 100 took longer than 10×10, and 1,000 took longer than 100×10; above 1,000 it just took too long. So I ended up using the ‘sweet spot’ of 1,000 images per go and repeated that 16 times until all the images were processed.
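The chunked workflow described above can be sketched in a few lines of Python. This is only an illustration: the 1,000-image "sweet spot" is the empirically found batch size from the post, and the frame filenames are made up.

```python
def chunk(items, chunk_size=1000):
    """Yield successive chunk_size-sized slices of items."""
    for i in range(0, len(items), chunk_size):
        yield items[i:i + chunk_size]

# 16,000 hypothetical frame filenames, split into 16 batches of 1,000
frames = [f"frame_{n:05d}.png" for n in range(16000)]
batches = list(chunk(frames))
```

Each batch could then be loaded and processed as one go, sidestepping the slowdown that kicks in past a couple of thousand images.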

1 Like

Thanks for sending the system report to me, Jacob. I see you have a pretty powerful system, and I think your comment here describes why the performance might degrade as the image list gets larger. I’m going to use your example to describe this issue to the developers.

1 Like

If we could run Topaz Labs Studio and plugins like AI Gigapixel as command-line commands, these problems (many different ones) would go away. Most modern tools (including Photoshop and other filter plugins) have a command-line version with options. So imagine `topazStudio.exe -i input.jpg -s <adjustment script>`, which would start TS, load the input file, and run an already-made adjustment script/macro. Then on Windows 10 (and similarly on Mac) you could use a .bat script to run anything you want, as many times as you want: for all the files in a folder, save them to another folder, running as many topazStudio.exe commands in between as you like, including Photoshop actions and other programs. This would let folks like us who do extensive batching and smart scripting do more and do it faster. Better to develop this than to make cheap batching tools in TS.
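A minimal Python sketch of that wrapper loop (a .bat FOR loop would do the same job). To be clear, no such CLI actually exists today: the `-i`/`-s` flags come from the post itself, while `-o`, the output folder, and the `.tsp` script name are invented placeholders.

```python
def build_commands(filenames, out_dir, script):
    """Build one hypothetical topazStudio.exe invocation per input file."""
    return [
        ["topazStudio.exe", "-i", name, "-o", f"{out_dir}/{name}", "-s", script]
        for name in filenames
    ]

# Example: two frames, one imaginary adjustment script
cmds = build_commands(["frame_001.jpg", "frame_002.jpg"], "out", "halloween.tsp")
```

A real wrapper would hand each command list to the shell (or `subprocess`) in turn, which is exactly what a .bat script over a folder would be doing.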


That could work.

Personally I don’t care how it’s done, as long as I can get the software to process a folder full of images without needing my intervention during the process.

I’m not sure editing .bat files is for all people though. It’s easy enough when you know how, but can seem daunting at first.

But yeah, it would certainly make the problem I have go away :slight_smile:

I have experimented more to see whether my Windows system is the cause of the issue (since I understand other people have no problems loading thousands of images in one go), but so far I have not found anything that made any difference.

I’ve tried turning ‘background processing’ and thumbnail display on and off, but it doesn’t make any difference.

I can load 1,000 images without a prolonged hang of the UI.
But if I load 2,000, it just hangs.
I wonder if Windows does something in the background that causes things to get bogged down, but since I don’t know what to look for, I can’t really make any progress at the moment.

1 Like

I also tried it for some videos using VirtualDub frame extraction. With ffmpeg you can quickly merge the extracted frames back into a video. With this technique you miss information from adjacent frames, and the tools I tested that use it (and that promise excellent results) were not that good. A single-image upscale with AI Gigapixel gave better results, imho.
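The round trip described above can be done entirely with ffmpeg. Here is a hedged sketch showing the two command lines as Python lists (printed, not executed); the file and folder names and the 24 fps rate are assumptions about the source material.

```python
# Step 1: extract every frame of the source video to numbered PNGs
extract = ["ffmpeg", "-i", "input.mp4", "frames/frame_%05d.png"]

# Step 2 (after upscaling the frames externally): merge them back into a video
merge = ["ffmpeg", "-framerate", "24", "-i", "upscaled/frame_%05d.png",
         "-c:v", "libx264", "-pix_fmt", "yuv420p", "output.mp4"]

print(" ".join(extract))
print(" ".join(merge))
```

Note that audio is not carried through this pipeline; it would have to be copied back from the original in a separate ffmpeg pass.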

1 Like

We’re aware of the utility and flexibility of CLIs, but we just don’t have that anywhere on our priority list at the moment. For now, our technologies will be accessed exclusively through our GUIs.

1 Like

I’m running into the same problem here. I can batch a few hundred images, but at some point, my computer will become completely unresponsive. My machine is relatively high-spec (64GB RAM, i7-5960x, Quadro K4200). If I watch the batch processing, RAM usage will climb higher and higher until all system RAM is used. RAM is not freed until I close the application. It seems like a leak, or that memory is not being freed after each image.

I’ll also throw in a vote for a CLI. That would allow me to distribute processing to my render farm and actually use AI Gigapixel in production.

1 Like

I see the continuous RAM increase too, when I try to load a huge number of images.

I submitted this to Topaz tech support, but thought I would put it out here as well, since Gigapixel is absolutely amazing!

Recently, I was trying to upscale a 720p video by exporting it as an image sequence in Premiere (24 frames per second), and then upscale each image using the Gigapixel software to 4K UHD.

The video exported to 37,700 images.

Unfortunately, when trying to import 37,000 images into Gigapixel… the memory usage skyrockets (I think to the total size of the images), and then just crashes.


  1. Fix the importing so files are loaded into memory one at a time (or per the number of available GPUs), or imported without a preview image (a simple solution, I’m sure).

  2. Offer a stand-alone back-end server process, like Adobe’s Media Encoder, that simply listens to a folder and renders accordingly.

  3. Allow multi-GPU rendering (I actually never got to this step, because Nvidia no longer allows consumer GPUs on Windows Server, so maybe this DOES work, but I don’t know). Renting a cloud-based Windows server with 4x-8x Nvidia 1080 GPUs is very doable nowadays.

  4. Perhaps do the video upscaling in-app (Topaz mentioned this will be in R&D eventually; I’m sure it won’t be easy).

  5. Use Tensor Cores on Nvidia GPUs (and their AMD equivalents). This would drastically speed up render time once fully implemented, since these units are built specifically for AI-based rendering.

Obviously the process of rendering 37,000 images, then importing them into Gigapixel, upscaling each image, and then re-importing to Premiere is slow and cumbersome…

But the quality of Upscaling in Gigapixel is absolutely incredible… about 2-3x better than the competition (Waifu, Red Giant 4K, Alienskin Blowup, Photoshop, and After Effects).


P.S. needs an SSL certificate ASAP.

1 Like

Funny, I tried to do the same with 37,700 images (my thread got merged into yours).

Lucky you, managing to get to 8 minutes of video. My setup kept crashing after 7 seconds at 24 fps.

What settings did you use? I had set strong noise & blur reduction.

I can see in your video there are a lot of artifacts from the render process - but still quite interesting.

1 Like

In that case a command-line .exe would be even easier: you could wrap it in your own scripts with as many threads as you like, and if one instance crashes and its exit code is non-zero, you retry it with a new instance. The config file could be generated with the GUI app on a single image.
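That retry loop is a few lines with Python’s `subprocess` module. The wrapper below is a hedged sketch: `subprocess.run` is real, but any Gigapixel command you would pass to it is hypothetical, since no such CLI exists today.

```python
import subprocess
import sys

def run_with_retry(cmd, max_attempts=3):
    """Run cmd; relaunch a fresh instance while the exit code is non-zero.

    Returns the attempt number that succeeded, or raises after
    max_attempts failures. E.g. cmd could be a (hypothetical)
    ["gigapixel-cli", "-i", "frame_001.png"] invocation.
    """
    for attempt in range(1, max_attempts + 1):
        if subprocess.run(cmd).returncode == 0:
            return attempt
    raise RuntimeError(f"{cmd[0]} failed {max_attempts} times")
```

Running several of these wrappers in parallel (one per thread or process) would give exactly the crash-tolerant batching described above.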

1 Like

I loaded 1,000 images per go (it took a couple of minutes before the images were loaded into AI Gigapixel).

Then I processed those 1000 images.

Then I repeated the process on the next 1,000 images until I had made it through all 16,000 images.

Cumbersome? Yes :slight_smile:

Finally I loaded the 16000 upscaled images into the video editor and exported a video from there.

1 Like

Has anyone tested this on a multi-GPU setup?

With a 16-core CPU and a single GPU (I think it was a Tesla M50 or some other), I was only at about 30-40% CPU usage, with 100% GPU usage.

The GPU usage in Gigapixel is amazing (I still wonder why Adobe can’t fix Premiere’s abysmal CPU/GPU usage).

1 Like

Just thought I’d chime in here. I had no idea I wasn’t the only one using AI for upscaling movies. I just did this for about 160,000 frames (it only manages 2,000 at a time), and although the results were pretty good, I’m not sure they were worth the month or so it took, on and off, compared to Adobe After Effects’ new upscale algorithm. Perhaps my de-blurring was too strong, but of course I’m not going to do the whole lot again! I’d love to see this technology available for movie input!