Topaz Video AI Beta v4.0.9.0.b + v4.0.9.1.b

Hello everyone,

We have a new beta available for testing.

4.0.9.1.b

4.0.9.0.b

Changelog from 4.0.9.0.b

  • Fixed Image Sequence crashing on import.

Changelog from 4.0.8

  • Updated EULA and added it to the Help menu for viewing.
    • These changes only affect Enterprise users working with Video AI in certain venues such as movie theaters and premium streaming services.
  • Adobe After Effects plug-in now available for macOS.
  • Audio is now disabled automatically when the input has no audio tracks.
  • Batch processing now preserves more of the video input settings on export.
  • Setting priority on outputs now respects your priority choices.
  • Fixed a rotation bug where the video would not rotate around the 180-degree axis of the currently selected rotation.
  • Batch-processing crop now applies only to cropped input videos, not to all selected ones.
  • Fixed incorrect version labels in the UI.
  • Added details for the "Recover Original Detail" slider to its labeling.

Known Issues

  • Batch processing selection will overwrite individual video filter settings.
  • Inconsistent “Preview X frames” enable status.
  • Rotation with previews will intermittently be over-rotated.
  • Looping previews causes a stutter.
  • Live preview can have frame de-sync.

    Whiskers on cat now sharp.


    Area between whiskers blurry.


    With Preview, the comparison only works with the first preview; every one after it is off in time.

    My choice for an AI decompression algorithm would be:

    We distinguish:

    Color = color
    Pixel = resolution

    Restore colors:
    Restore colors that were shifted out of position by compression (8x8 pixel block compression) back to their correct positions.

    Recalculate compression:
    Recalculate the 8x8 pixel pattern (or another pattern): arrange the pixels correctly and replace missing pixels (recalculate empty areas).

    Correct the colors and pixels in a separate model.

    Then the actual AI adjustments, such as size, fps, and so on, can be carried out.

    First, a basis must always be created that the AI can work with; if we leave the AI to do the work alone, the result will be a coincidence.
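    The block-repair step above can be sketched as a naive deblocking pass. This is a hypothetical illustration only (the function name, the fixed 8x8 grid, and the blending weight are my assumptions, and real codecs use adaptive in-loop deblocking filters instead):

```python
import numpy as np

def deblock_8x8(img, strength=0.5):
    """Naively smooth the seams between 8x8 compression blocks.

    Hypothetical sketch of the 'recalculate the 8x8 pixel pattern' idea:
    blend each row/column sitting on a block boundary with its neighbour.
    """
    out = img.astype(np.float32)
    h, w = out.shape[:2]
    for y in range(8, h, 8):          # horizontal block seams
        out[y - 1] = (1 - strength) * out[y - 1] + strength * out[y]
        out[y] = (1 - strength) * out[y] + strength * out[y - 1]
    for x in range(8, w, 8):          # vertical block seams
        out[:, x - 1] = (1 - strength) * out[:, x - 1] + strength * out[:, x]
        out[:, x] = (1 - strength) * out[:, x] + strength * out[:, x - 1]
    return out.astype(img.dtype)
```

    A filter like this only hides the seam; actually re-estimating the missing detail inside each block is the part that would need a trained model.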



    Well, it’s important not to change the bit depth from start to finish; mixing bit depths inevitably leads to errors.

    Converting from 16-bit to 8-bit and back does not work either; what is lost in the 8-bit conversion does not come back by converting back to 16-bit.


    Here I converted the image to 8-bit, darkened the exposure, lightened it again, and converted it back to 16-bit; the histogram shows how the information has been lost.
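    That round trip is easy to reproduce in a few lines of NumPy (a minimal sketch using a synthetic gradient instead of a real photo): the low byte discarded in the 8-bit step is gone for good.

```python
import numpy as np

# Synthetic 16-bit "image": a smooth gradient with 1024 distinct levels.
img16 = np.linspace(0, 65535, 1024).astype(np.uint16)

img8 = (img16 >> 8).astype(np.uint8)    # 16-bit -> 8-bit: low byte discarded
back16 = img8.astype(np.uint16) << 8    # 8-bit -> 16-bit: low byte NOT restored

# Difference map between original and round-tripped values.
loss = np.abs(img16.astype(np.int32) - back16.astype(np.int32))
# The 1024 distinct levels collapse to at most 256, and the loss map is non-zero.
```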




    This is the loss map; the loss is everywhere.

    I have seen samples showing that this change of bit depth takes place somewhere in this and the other TL programs.

    Especially in the dark areas this is very suspicious.



    Another example:


    Loss map



    16 bit



    8 bit - look at the histogram.



    These are the effects.

    Bit depth determines how many tonal gradations an image can display, i.e. images with a higher bit depth can also display a finer tonal resolution.


    8bit



    16 bit

    If the material with which the network is trained has a low bit depth, the model cannot correct the errors caused by a reduction in bit depth.

    Images with low bit depths are also noisier because they cannot show as many differences.

    Freely available image libraries that consist of images stolen from the internet, as used by all the big companies like OpenAI or Midjourney, are virtually contaminated with poor quality, because the quality of images on the internet has been declining for years.

    These companies are very lucky that these errors get buried in the newly generated images.

    Also, all images on the net are 8-bit; 8-bit is the target result, so that the file stays small and portable.

    The trends over the years have also meant that images have become worse and worse.

    On wedding photographers' sites you see the boho styles: desaturated everywhere, the sky is white, the picture itself shifts toward red, and people look like vampires because of the skin color.

    Some images are full of editing errors.

    As described above with the bit depth, the problem is that the people who create the images don’t know this either, because many of them are newcomers who will never learn the profession (photographer or videographer).


    The list of problems caused by inexperience is endless, because the errors can be combined in all possible ways.

    For example, I can make the image softer by reducing the contrast. I can do this by removing noise, by simply setting the slider in software xyz to 100% and blending in the mud with a brush, or by reducing the contrast with a gradation curve and applying it with a brush; the possibilities are endless.

    The same applies to videos.

    The tools available to us for manipulation only change the brightness and color of the pixels within a certain radius.


    Back from an unfortunate hiatus. I was not doing too well, and my doctor had to change my meds around, but I can’t wait to test this out!

    What ever happened to Apollo SR1 and the pan/rotate/zoom settings for stabilisation? I’m hoping it will come back soon, as those were such promising features!


    Both still in progress and will return in a later alpha/beta!


    Thanks, yeah, I’m looking forward to it as it’s a pretty cool model. In the meantime I’m still using that beta, though my GPU memory usage is over 10 GB when using it; thankfully my 3090 can handle it! :smiley:

    I don’t know if this is going to carry over to a full render, but I am getting a watermark on output previews.

    It does not say anything about trial mode in the upper right.

    The alpha build is not doing this.

    A screenshot of that would be nice…