Urgent: Topaz Video AI's Unwanted Color & Contrast Shifts—Vote to Fix & Share Your Findings!

Dear Topaz Labs and Community:

From the first version through the latest 4.0 release, I have been encountering a significant issue with Topaz Video AI that disrupts my professional workflow. Specifically, the software alters the color space and contrast levels of footage after applying any of the AI models. While the changes are subtle in well-lit shots, they become glaringly evident in darker footage, lifting the shadows to an unacceptable degree. This forces me to take extra steps in my color grading process, adding unnecessary work and potentially compromising the final output.

As of now, I can only utilize Topaz’s stabilization feature, which performs exceptionally well. However, the color and contrast changes have made it impossible for me to fully integrate the software into my workflow.

I strongly urge Topaz Labs to address these issues immediately, as they are critical for professionals like myself who require precise control over their footage. If anyone else has experienced similar issues, please share your findings to help prompt a swift resolution.

When importing shots directly from the camera (i.e., shots that have not been graded yet), the software drastically changes what I think is the color space: I see lifted shadows and reduced or shifted saturation in some colors.

## Footage and Camera Settings Tested with Topaz Video AI

Extremely Noticeable Color and Contrast Changes:

  • Panasonic GH5 Shooting V-log: Extremely noticeable changes
  • Sony FX6 Shooting Slog3: Extremely noticeable changes

Very Noticeable to Noticeable Changes in Dark Shots:

  • Panasonic GH5 Shooting CineD: Noticeable changes, very noticeable in dark shots
  • Panasonic S5MK2X Shooting 422 10-bit CineD Profile: Noticeable changes

Less Noticeable Changes:

  • DJI Phantom 4 Pro: Less noticeable, likely due to typical bright, daytime drone shots
  • iPhone 14: Less noticeable

Noticeable Even in Graded Footage:

  • Prores 422 Already Graded: Less noticeable but still present, making it unsuitable for professional workflows requiring 100% color accuracy

Thank you for your attention to this matter.

Best regards, Bing Bang @stonjaus.films

A lot of this is inherited from FFmpeg’s color conversion from YUV>RGB48>YUV. Color issues are compounded when the source is not tagged correctly with range / space / transfer and is interpreted as [2] Unknown by FFmpeg/TVAI. And finally there is the AI synthesis.

So how should the software deal with range / space / primaries / characteristics?

I would like to see TVAI allow the user to select “treat input as [ BT.601 | 709 ]” on the input, and to specify [ BT.601 | 709 | 2020 | RGB | sRGB ] on the output, producing correctly converted and tagged output for all of { range, space, primaries, transfer characteristics }, as per industry-standard specifications. As TVAI is a professional tool, it should allow the user to explicitly state the color characteristics of the input and select the desired output characteristics, and both the video frames and the output file should be accurately tagged. There should be templates for the main professional, broadcast and production standards of [ BT.601/525 | BT.601/625 | 709 | 2020 | RGB | sRGB ].
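Until something like that exists in the product, the idea can at least be sketched with the underlying FFmpeg. The command below is only an illustration (the file names and the ProRes encoder are placeholders, and it is not what TVAI does internally): the decoder flags force the input to be treated as BT.601/525, the colorspace filter performs the actual conversion, and the output flags tag the result as BT.709.

$ # Sketch only - file names and encoder are placeholders, not TVAI behaviour
$ # Treat the input as BT.601/525, convert it, and tag the output as BT.709
$ ffmpeg -hide_banner -color_range 'tv' -colorspace:v 'smpte170m' -color_primaries:v 'smpte170m' -color_trc:v 'smpte170m' -i input_sd.mov -vf colorspace=all='bt709':format='yuv422p10' -color_range 'tv' -colorspace:v 'bt709' -color_primaries:v 'bt709' -color_trc:v 'bt709' -c:v prores_ks -profile:v 'hq' output_709.mov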

Incorrect tagging of full/limited color range in source content is very common, especially when dealing with digitally created content (or amateur content), and can lead to stretching, compressing or clipping of the colors within the scale (crushed blacks etc.). This isn’t Topaz’s fault - it is sources that are left untagged and interpreted as [2] Unknown. The solution is to offer the user template overrides of the professional color standards. Some software (like zscale & MPV) makes assumptions about the color characteristics based on the resolution, but in the world of AI upscaling the resolution may no longer be a useful hint. Color characteristic templates for input and output would be preferable.
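To illustrate just the range part (the file names are placeholders, and this is plain FFmpeg rather than anything TVAI-specific): if you know an untagged source is actually full range, you can tell FFmpeg to treat it that way and convert it to limited range cleanly, instead of letting downstream tools clip it.

$ # Sketch only - treat an untagged source as full range, convert it to limited range, and tag the output
$ ffmpeg -hide_banner -i untagged_full_range.mp4 -vf scale=in_range='pc':out_range='tv',setparams=range='tv' -color_range 'tv' -c:v libx264 output_limited.mp4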

There are multiple places where FFmpeg reads or writes color tags - and one area where Topaz could help is confirming what the color range / space / transfer / primaries are when the video leaves the tvai_up filter. Are the colors RGB, sRGB, Adobe RGB, IEC 61966-2 or BT.709? Which color characteristics are always passed through from the source, and which are always changed within the tvai_up filter? Is the tvai filter outputting full range?
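One way to at least see what ends up in the file (the file name below is a placeholder) is to export a short clip from TVAI and inspect the per-frame tags with stock ffprobe:

$ # Print the pixel format and color tags of the first decoded frame of a TVAI export
$ ffprobe -hide_banner -select_streams v:0 -read_intervals '%+#1' -show_entries frame=pix_fmt,color_range,color_space,color_primaries,color_transfer tvai_output.mov

That does not prove what happens inside the filter graph, but it does show what downstream tools and players will be told.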

That’s the tagging part done. If containers, video frames and metadata describe the color characteristics correctly, the downstream process or device will be in a better position to render it as intended.

Yikes! My content is YUV. TVAI only thinks in RGB.

Unfortunately, TVAI does not have separate models for YUV and RGB workflows. The conversion from YUV > BGR > YUV can be imperfect for some pixel formats and bit depths, and oversampling to 16-bit BGR48 and subsequently downsampling to 8-bit YUV will, by definition, cause sampling errors.

Test case…

I’m going to use an objective example with the very simplest content I can come up with… a limited-range black frame of YUV 16,16,16, programmatically generated using FFmpeg’s geq filter. Only a single video frame is needed for this test. We’ll use YUV444 to avoid any inaccuracies from YUV420p chroma subsampling…

$ ffmpeg-topaz -hide_banner -color_range 'tv' -colorspace:v 'smpte170m' -color_primaries:v 'smpte170m' -color_trc:v 'smpte170m' -f 'lavfi' -i nullsrc=size='ntsc':rate='ntsc',format=pix_fmts='yuv444p',trim=start_frame=0:end_frame=1,geq=lum_expr=16:cb_expr=16:cr_expr=16 -vf showinfo,signalstats,metadata=mode='print' -f 'null' -

We measure the output using the signalstats filter. In this case, the output from the signalstats filter is YUV 16,16,16. Great! As expected.

Now let’s do the same test, going from YUV > BGR48 > YUV, which is what happens when the TVAI filter forces BGR48 for model processing. We won’t even need to include the TVAI filter in this example; we’ll just use the underlying FFmpeg to force the same conversion to BGR48 that TVAI would have done…

$ ffmpeg-topaz -hide_banner -color_range 'tv' -colorspace:v 'smpte170m' -color_primaries:v 'smpte170m' -color_trc:v 'smpte170m' -f 'lavfi' -i nullsrc=size='ntsc':rate='ntsc',format=pix_fmts='yuv444p',trim=start_frame=0:end_frame=1,geq=lum_expr=16:cb_expr=16:cr_expr=16 -vf showinfo,format=pix_fmts='bgr48',format=pix_fmts='yuv444p',signalstats,metadata=mode='print' -f 'null' -

The output is now YUV 84.0156, 88.7656, 78.3281. It may be impossible to spot this by eye, but you have already incurred an objectively measured color shift from 16,16,16 to roughly 80,80,80 - just on a simple black frame.

You can do the same tests at various colorpoints - YUV 32,128,128 etc.

So, irrespective of what TVAI’s filter is doing, the fact that Topaz operates in the BGR48 domain means the processing will most likely be mathematically imperfect when dealing with YUV sources and YUV output, because everything goes through a YUV>BGR48>YUV conversion.

I have not done the same tests using an RGB source, since I only work in YUV.

I have no idea whether it would be practical for TVAI’s models to have an alternative that operates in the YUV domain rather than RGB48, but as long as it goes YUV > RGB48 > YUV, there will always be some mathematical imperfection after supersampling and then subsequently subsampling.

So how does the community help Topaz come up with a color accurate workflow?

Firstly, general subjective opinions are useless. “My color is shifting” is a common cry, but it is not much use as an agent for change. Furthermore, displaying screenshots and captures in a browser is also useless, since some browsers are themselves not color accurate, not everyone sees the same colors on their system, and most users have not calibrated their monitor/TV with SMPTE bars or a calibration disc. The community needs to use objective measures to help Topaz.

Those who can generate programmatic examples (like the FFmpeg commands above that demonstrate the YUV>BGR48>YUV issue) could produce some test cases so that Topaz can add them to their system / regression test suite.

Don’t always assume it is the TVAI model that is to blame - it could be FFmpeg or the codec. If you speak FFmpeg and are seeing color shift, remove the TVAI filter from the command line. If you are still getting color shift, it is not the model. My example above shows that you will get some level of color shift from the YUV > RGB48 > YUV conversion in FFmpeg before you even add the TVAI filter or model.
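For example, on your own footage (the file name is a placeholder, and the tvai_up parameters shown are only an assumption - copy the exact filter string from a command exported by the TVAI UI):

$ # Baseline: the colorspace round-trip only, no TVAI filter
$ ffmpeg-topaz -hide_banner -i source.mov -vf format=pix_fmts='bgr48',format=pix_fmts='yuv444p',signalstats,metadata=mode='print' -f 'null' -

$ # The same measurement with a model in the chain (tvai_up parameters are an assumption - use your exported ones)
$ ffmpeg-topaz -hide_banner -i source.mov -vf tvai_up=model='prob-3':scale=2,format=pix_fmts='yuv444p',signalstats,metadata=mode='print' -f 'null' -

If the first command already shows a shift relative to your source values, then FFmpeg’s conversion - not the model - is at least part of the problem.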

If you have public domain test patterns or calibration content that demonstrate color shift after conversion, upload and share them. Professionals typically include leaders (aka bars and tone) on content. The SMPTE HD test cards are great. The movie industry has historically used the rather archaically named China Girl (or the photo industry’s Kodak Shirley Cards equivalent).

Some operating systems include a Digital Color Meter utility, where the RGB value is displayed on mouse-over. When combined with a color-accurate player (like QuickTime on macOS), a color meter can be used to objectively measure any color shift.

Some professionals may have X-Rite probes and colorimeters that can be used to test the end-to-end workflow.

And then, even once measured, there needs to be some consensus from the community about what level of color shift is acceptable from either the workflow or the AI model.

There’s a lot here to digest - and before we all start giving Topaz a hard time about the models themselves, the fundamentals are: interpreting the color characteristics tags (range, space, primaries, transfer) of the source correctly, allowing the user to override the characteristics of the source, ensuring the RGB frames TVAI outputs are tagged accurately, and appreciating that most professional video content is typically distributed in YUV.

6 Likes

This also happens with “normal” movies! Look at the shadows on the houses and the car on the right. THIS IS UNUSABLE! PLEASE FIX!


Before anyone says otherwise: These colorspace issues existed when I started using TVAI, when it was VEAI 2. I would guess that it has always been an issue.

2 Likes

Hello.
In my tests, there are color changes that occur during the YUV→RGB→YUV conversion and color changes that occur within the TVAI filter (RGB → RGB).
FFmpeg has a function to generate test images, but when I apply the TVAI filter to test images generated in RGB and output them to PNG (RGB), color alteration occurs.

For example, red (255,0,0).
For Ahq-12, Gcg-5, and Prob-3, it transforms to (252-254, 0-1, 0-1).
For Iris-1 and Iris-2, it alters to (240-243, 7-11, 7-11).

I have been reporting this for some time now, but there has been no improvement.

Attached comparison frames (frame 010 of each):

  • testsrc → FFmpeg scale 2x (testsrc-scale2x_010)
  • testsrc → Iris-2 2x (testsrc-iris2-2x_010)
  • testsrc → Prob-3 2x (testsrc-prob3-2x_010)
testsrcbatpng.zip (11.0 MB)
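For reference, a plain-FFmpeg baseline like the scale-2x frames above could be generated with something along these lines (a sketch only - the exact testsrc size, scaler and frame numbering used above may differ):

$ # Generate testsrc frames scaled 2x with plain FFmpeg, as a reference against the TVAI output
$ ffmpeg -hide_banner -f lavfi -i testsrc=size=320x240:rate=25:duration=1 -vf scale=640:480 testsrc-scale2x_%03d.png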

2 Likes

Agree @TicoRodriguez - and thanks! testsrc is useful since it starts in RGB24, whereas many of the other FFmpeg test sources are YUV. My test methodology ignored the tvai filter itself, in order to highlight that there are colorspace conversion challenges in FFmpeg before the tvai filters are even involved. Your post eloquently highlights issues introduced by the various tvai filters. Thanks.

QQ - How are you choosing to measure the RGB values? Are you using Digital Color Meter or are you inspecting the raw frames to get the RGB values? For measuring in RGB, FFmpeg’s datascope filter is really neat - it zooms into a particular area and overlays the RGB values of pixels. You can specify an area for the datascope to zoom in on by passing x, y values to the filter.

ffmpeg-topaz -hide_banner -color_range 'pc' -f 'lavfi' -i testsrc=duration=10,setparams=range='pc',datascope=mode='color2':axis=true -codec:v 'rawvideo' -f 'nut' - | ffplay -hide_banner -

And you can then insert tvai_up in the filterchain before datascope. That will visualize the accurate RGB values of the raw video frames straight out of the tvai filter and bypass any issues from subsequent color conversion.

Combined with your testsrc, it should be a very objective and unambiguous test.
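Something along these lines should work (the tvai_up parameters are an assumption - substitute the exact filter string from an exported TVAI command, and run it in the TVAI environment so the models can be found):

$ # View the raw RGB pixel values straight out of the tvai filter, before any further conversion
$ ffmpeg-topaz -hide_banner -color_range 'pc' -f 'lavfi' -i testsrc=duration=10,setparams=range='pc' -vf tvai_up=model='prob-3':scale=2,datascope=mode='color2':axis=true -codec:v 'rawvideo' -f 'nut' - | ffplay -hide_banner -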

Your tests of 255 becoming 240 with Iris sound like there is some limited/full range clipping going on inside the model. Hmmm.

[ I do worry that we may be asking too much of an AI model to be fully color accurate, which is why I focused on the workflow side of things where there is room for improvement. In terms of TVAI, we are kinda saying “please synthesize some pixels based on what you assume the objects to be based on adversarial network trained models”. On the other hand, big blocks of uniform color such as test-bars should be pretty easy to train for.]

Thanks again!

1 Like

RGB measurements are taken by loading the output PNG into Photoshop.
I am not sure if this is the correct measurement method, but compared to the FFmpeg scale output under the same conditions there is a significant change in the RGB values, which leads me to believe that there is something wrong with the TVAI filter.

As for Proteus, Gaia, etc., where the variation in RGB values is 1 or 2, that is within the margin of error given the nature of AI learning, and we do not consider it a problem. Images like the test bars are a special case.
However, the Iris variation is far too large. It is possible that they are using the wrong color space, data level, etc. as the training source.

2 Likes


Here are 2 screenshots of a video upscaled with Artemis or Proteus (I don’t remember which) - but NOT Iris! And you can clearly see that some of the blacks/dark colors get changed to greys.

Here is a zoom.


Clearly you can see the windows and shadows of the house are much brighter.

It’s NOT just an Iris problem!

1 Like

I literally just bought this software and the first clip I tried to stabilize, Topaz turned it super red! How can I prevent/fix this???

Hi all,
I’ve been noticing this color problem for some time now too, but the color changes didn’t seem too bad. I also have minor issues seeing green/red colors, so maybe I just didn’t notice it as much?

However, lately I decided to remaster some very old anime clips I have collected during my teen years. Some of them have very poor resolution and contain heavy compression artifacts. Video AI, depending on the style of the anime, can be a real miracle worker here.
But animated content also often has big areas of flat, uniform color, and here the color changes become extremely noticeable.
I tested with both Artemis HQ and Proteus, using Video AI 4.2.0.

Here are some examples:
[two comparison screenshots attached]

I noticed it mostly for green and red colors; blues didn’t seem affected as much.
Having them side by side is quite noticeable, but if you use single preview mode and click on the image, the color change is quite extreme.

So here is hoping that this can be fixed somehow. I would like a postprocessing step to make changes to the colors. There is something like that in Photo AI where you can modify color temperature and this worked really well for my personal workflow.
I have also noticed that removing heavy grain/noise makes the videos look darker, which makes sense because you remove a grey-ish layer from the video. So having some kind of manual correction for this would be great as well.

There needs to be some kind of means inside Video AI to correct this behavior.

2 Likes

This needs to be fixed ASAP! The program is unusable in this state…

1 Like

@topazlabs-9702 From your screenshot, your colorshift seems to match the following description… BT.601 vs BT.709

Generally speaking…

  • Red too orange, green too dark? 601 is being decoded as 709.
  • Red too dark, green too yellowish? 709 is being decoded as 601.

Your original SD is likely being converted from untagged or BT.601 (SD) through to RGB48le and then to BT.709 (HD). You may need to add a colorspace conversion filter into the chain, or override the colorspace, color primaries and color trc of the input - and unfortunately this is not exposed in the GUI.
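For reference, the manual fix with plain FFmpeg would look something like this (the file names are placeholders, it assumes the source really is BT.601, and you should pick the 525 or 625 variant to match your material):

$ # Interpret the SD source as BT.601/525, convert to BT.709, and tag the output accordingly
$ ffmpeg -hide_banner -i sd_source.mp4 -vf colorspace=iall='bt601-6-525':all='bt709' -colorspace 'bt709' -color_primaries 'bt709' -color_trc 'bt709' -color_range 'tv' -c:v libx264 converted_709.mp4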

Video Players are not blameless either. Sometimes players will assume one of:

  • A) all content is BT.709
  • B) SD content is BT.601 and HD content is BT.709, making an assumption based on the resolution
  • C) all content is BT.601

Try a color-aware player such as MPV and see if you get a different result. Not all players are equal when it comes to correct colorspace rendering. A player should be using the colorspace and transfer characteristics info to convert back to RGB - and often enough, many players make incorrect decisions, even when fed with correctly-tagged content.

The above-linked article seems to describe your issue…

On the original H264 encode, you’ll see the red is a bit more dim. A bit more orange. This is the telltale sign of 601 vs. 709 gone wild — reds becoming orange. The encoder used the 601 (SD) coefficients, but then the decoder uses the 709 (HD) ones because it auto-detects a video above 720 pixels in resolution.

To add a further complication for anime, you may have an original which uses NTSC-J / System-J colorspace/primaries/transfer characteristics rather than BT.601.


I do agree that TVAI needs better colorspace override and handling to ensure a clean conversion from something like BT.601 YUV to RGB48le and then to BT.709 or 2020.

Much of the confusing color and range handling is inherited from the underlying FFmpeg, and may not be Topaz’s original fault, but since TVAI is a wrapper for FFmpeg there needs to be a way to control the colorspace / primaries and transfer characteristics through the GUI, rather than letting codecs, decoders and players guess the color characteristics. The moment Topaz chose to operate in the RGB48 pixel format within the tvai_up filter, the burden of responsibility for accurate pixel format, colorspace, color primaries, color transfer characteristics and chroma sample location conversion back to YUV fell on Topaz’s shoulders.

I believe that TVAI should:

  • provide documentation that describes end-to-end colorspace workflows from common use cases such as RGB images, YUV SD video, YUV HD video etc. With reference to pixel format, range, space, primaries, transfer characteristics and sub-sample chroma location
  • provide public sample test patterns (perfect source and TVAI sample output) on the documentation site which has been tested in a color-calibrated environment. This allows users to ensure that it isn’t a player or viewing device that is making poor assumptions on playback.
  • Allow the override of the input tags (range, space, primaries, trc, chroma_loc) in the TVAI UI - especially if any of the color characteristics are [2] Unknown. An “Auto” option could still make a best guess based upon resolution/framerate
  • Ensure perfect internal conversion to rgb48le pixel format, even before tvai_up starts messin’.
  • Allow the user to select a target/output color model that matches one of the industry standard templates defined by [ BT.601 | 709 | 2020 | 2100 | sRGB ] standards.
  • Ensure that the colorspace, primaries, transfer and range are actually converted, not just re-tagged.
  • Resampling from RGB48 full range to something like YUV 8-bit limited range is never going to be mathematically perfect, but if the previous steps are taken then the inherent inaccuracy in pixel-format resampling and encoding quantization can be minimized (a down-conversion sketch follows this list).
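For that last step, here is a hedged sketch of a careful down-conversion (it requires an FFmpeg build with the zscale/zimg filter, the file names are placeholders, and it assumes the RGB frames are Rec.709 primaries/transfer):

$ # Down-convert full-range RGB48 to limited-range 8-bit BT.709 YUV with error-diffusion dithering, then tag it
$ ffmpeg -hide_banner -i tvai_rgb48_output.mov -vf zscale=matrix='709':primariesin='709':primaries='709':transferin='709':transfer='709':rangein='full':range='limited':dither='error_diffusion',format=pix_fmts='yuv420p' -colorspace 'bt709' -color_primaries 'bt709' -color_trc 'bt709' -color_range 'tv' -c:v libx264 delivery_709.mp4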
2 Likes