Video stream ITU-R patcher (for Hyperion & Rhea, plus AV1 encoding (WIP))

Not much time to code lately, but the project is moving forward anyway.

  • Fixed bug in main display data
  • Fixed clipping when changing main display / color space
  • Fixed min luminance / max luminance
  • Added detection of 5 interpolation models at five frames
  • Added VP9 codec analysis
  • Added AVC codec analysis
  • Added analysis of 12 codecs, the newest and the oldest (used for color retrieval from metadata and later for encoding)
  • Removed the dynamic-range specifications from the input/output data, since they carry no useful metadata; they are now displayed as quality information at the top and bottom
  • Fixed HDR detection broken by a space character …
  • Invisible but time-consuming work on the decision matrix: missing fields are filled predictively, and a % probability is displayed whenever the match is not 100% certain (the case of old sequences)
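
As an illustration of the predictive-filling idea (a hypothetical sketch, not the actual decision matrix): a missing field can be guessed from values observed in comparable sequences, with the hit rate reported as the displayed probability.

```python
from collections import Counter

def predict_field(observed_values):
    """Return (most likely value, probability in %) from a list of observations."""
    counts = Counter(observed_values)
    value, hits = counts.most_common(1)[0]
    return value, round(100 * hits / len(observed_values))

# e.g. color primaries seen in comparable old SD sequences (illustrative data)
guess, prob = predict_field(["bt601", "bt601", "bt709", "bt601"])
# guess == "bt601", prob == 75 -> display "BT.601 (75%)" instead of a bare value
```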

→ I would be very happy if someone posted old SD sequences, <576p, interlaced, so that I can test my “predictions”. Such videos are becoming very rare and are very often re-encoded.

Screenshots updated accordingly.
EDIT: What a hell of a forum … I can’t edit my 1st post anymore :face_with_spiral_eyes:

One more step forward

  • Automatic identification of the encoding resolution, now displayed
  • Automatic identification of cropping, approximated to the closest standard resolution
  • Built an ITU-R reference plus a history of 49 standardized resolutions (PC + TV + cinema)
  • Added support for 7 containers, mainly those used by TVAI (Matroska, FFV1, QTFF, MP4, HEVC, WebMedia, IVF)
  • Added support for 13 codecs, mainly those used by TVAI (HEVC/H.265, AVC/H.264, VC-1, VP9, AV1, MPEG-2 Video, MPEG-4 Part 10, Theora, ProRes, DNxHD, Cinepak, Digital Video)
  • Added support for 6 Dolby Vision profiles (metadata copy only, kept as evidence; profiles 0, 4, 5, 7, 8.1, 8.2)
  • Nested “Russian doll” display of container and codecs, color-coded for better ergonomics
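
The crop-to-nearest-resolution step can be pictured like this (a minimal sketch; the short resolution list and the distance metric are assumptions, the real tool uses its 49-entry ITU-R reference):

```python
# A few representative standardized resolutions (the tool tracks 49 of them)
STANDARD_RESOLUTIONS = [
    (720, 576),    # PAL SD
    (720, 480),    # NTSC SD
    (1280, 720),   # HD
    (1920, 1080),  # Full HD
    (3840, 2160),  # UHD-1
    (4096, 2160),  # DCI 4K
]

def nearest_standard(width, height):
    """Return the standard resolution closest (Euclidean) to the given size."""
    return min(STANDARD_RESOLUTIONS,
               key=lambda wh: (wh[0] - width) ** 2 + (wh[1] - height) ** 2)

print(nearest_standard(1904, 1072))  # a cropped 1080p stream -> (1920, 1080)
```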

The first goal, now close, is a copy method that patches the file without re-encoding, just filling the holes with recovered data.
Some tests ended flawlessly, turning unusable TVAI files into rock-solid ones playable everywhere. (HDR10+ / smooth frame rate / FPS upgrade)
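
Conceptually the copy method resembles a remux: the compressed stream passes through untouched while container-level metadata is rewritten. A hedged sketch using standard ffmpeg stream-copy flags (the command builder and metadata keys are illustrative, not the tool’s actual code):

```python
def remux_command(src, dst, stream_metadata):
    """Build an ffmpeg stream-copy command: the video bits pass through
    untouched (-c copy); only container-level metadata is (re)written."""
    cmd = ["ffmpeg", "-i", src, "-c", "copy"]
    for key, value in stream_metadata.items():
        # -metadata:s:v:0 targets the first video stream's metadata
        cmd += ["-metadata:s:v:0", f"{key}={value}"]
    return cmd + [dst]

print(remux_command("in.mkv", "out.mkv", {"title": "patched"}))
```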

  • Some changes to the menu frame to make the tab/background more readable
  • Added video duration
  • Source file analysis is coded; the top frame will be used for progress information
    → The first run will benchmark the player for later runs (to get an accurate progress bar)
    → Further scans will report progress based on the detected player capabilities
    This code will (should) give an accurate ETA estimate at launch, not after an hour
    From the start, the ETA won’t yo-yo for minutes or hours, because I hate that.
  • Alternate “Player/File Benchmarking” display will inform the user if the app has learned the player capabilities or not
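
In outline, the benchmark-then-estimate idea could look like this (a sketch under assumptions; the function names are hypothetical): measure throughput once on the first run, persist it, and derive the ETA from frame counts so it is stable from launch.

```python
import time

def benchmark_fps(process_frame, sample_frames):
    """Measure frames/second on a small sample (first run only)."""
    start = time.perf_counter()
    for frame in sample_frames:
        process_frame(frame)
    elapsed = time.perf_counter() - start
    return len(sample_frames) / elapsed if elapsed > 0 else float("inf")

def eta_seconds(total_frames, done_frames, benchmarked_fps):
    """Remaining time computed from the stored benchmark, so the ETA
    does not yo-yo while the job runs."""
    return (total_frames - done_frames) / benchmarked_fps
```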

I really hope to get a working alpha out by the end of the week. This will at least run Hyperion files.

With some anger, the release is postponed: the latest Hyperion generates less metadata than ever. Topaz drives me crazy: I have to recode some new stuff.

Serious problem with the way TVAI uses the ProRes codec.

To top it all off, my DNS server is dead…

Best to wait until Hyperion is finalized before updating and releasing your tool, I think.

Mhh, maybe, maybe not.
Anyway, I am currently porting everything to WinAPI. It works; I did encode some files, but I also noticed a nasty behaviour: if you don’t set the correct color range early, the file becomes unpatchable and needs a full re-encode, and ffmpeg doesn’t handle that case.
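
One way to see why a wrong color range can’t be fixed by flipping a flag: full- and limited-range files store different sample values, so a correction means rescaling every pixel, i.e. a re-encode. A sketch of the 8-bit luma mapping (per the usual BT.601/BT.709 range conventions):

```python
def full_to_limited(y_full):
    """Map an 8-bit full-range luma sample (0-255) to limited range (16-235)."""
    return round(16 + 219 * y_full / 255)

def limited_to_full(y_limited):
    """Inverse mapping; clipping of out-of-range values omitted for brevity."""
    return round(255 * (y_limited - 16) / 219)

print(full_to_limited(255))  # 235: peak white lands at a different code value
print(full_to_limited(0))    # 16: so every sample must be rewritten
```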

  • Fixed “Proteus” model detection
    → They named it “prob” instead of “prot”, for my pleasure …
  • Fixed “Pixel Format” icon not resetting properly
  • 450 kB of graphic theme assets integrated and compressed using Lempel-Ziv
    → Super lightweight .exe of 1.7 MB
  • MPEG-2 and VC1 support added
  • Added ProRes and HEVC codec profile analysis
    → Analysis based on the official ITU-T specification (may differ slightly from MediaInfo)
    The analysis is based on TRUE information extracted from the stream; it does not use internal tables like MediaInfo does
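
The theme-embedding step above (the Lempel-Ziv bullet) can be sketched with zlib standing in for whichever LZ variant the tool actually uses:

```python
import zlib

theme_bmp = b"BM" + bytes(450_000)          # stand-in for ~450 kB of BMP data
packed = zlib.compress(theme_bmp, level=9)  # what would ship inside the .exe
assert zlib.decompress(packed) == theme_bmp # restored losslessly at launch
print(f"{len(theme_bmp)} -> {len(packed)} bytes")
```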

Partial handling of VC1 codec

Partial handling of MPEG-2 codec

Full ProRes profiles handling

  • 4444 XQ
  • 4444
  • 422 HQ
  • 422
  • 422 LT
  • 422 Proxy
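
For reference, the ProRes profile can be read straight from the codec fourcc in the container; a minimal lookup matching the list above (fourccs as registered by Apple):

```python
PRORES_PROFILES = {
    b"ap4x": "ProRes 4444 XQ",
    b"ap4h": "ProRes 4444",
    b"apch": "ProRes 422 HQ",
    b"apcn": "ProRes 422",
    b"apcs": "ProRes 422 LT",
    b"apco": "ProRes 422 Proxy",
}

def prores_profile(fourcc):
    """Map a ProRes codec fourcc to its human-readable profile name."""
    return PRORES_PROFILES.get(fourcc, "unknown ProRes profile")

print(prores_profile(b"apch"))  # ProRes 422 HQ
```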

Full HEVC tier handling

  • Main 10
  • Main 12
  • Main 10 4:4:4
  • Main 12 4:4:4
  • Main 16 4:4:4

Level

  • 1 to 6.2

Sub-tier

  • Main
  • High
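
In the H.265 bitstream, these come from the profile_tier_level() structure: the level is stored as general_level_idc = 30 × level, and the tier as a one-bit flag. A small decoding sketch:

```python
def hevc_level(level_idc):
    """Decode HEVC general_level_idc, e.g. 153 -> '5.1', 120 -> '4'."""
    major, minor = divmod(level_idc // 3, 10)
    return f"{major}.{minor}" if minor else str(major)

def hevc_tier(tier_flag):
    """Decode the one-bit general_tier_flag."""
    return "High" if tier_flag else "Main"

print(hevc_level(153), hevc_tier(1))  # 5.1 High
```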

Full AV1 Tier handling

Level

  • 2 to 7.1
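
AV1 stores its level as seq_level_idx in the sequence header OBU; the spec maps it to “X.Y” via X = 2 + (idx >> 2) and Y = idx & 3. A one-liner sketch:

```python
def av1_level(seq_level_idx):
    """Decode AV1 seq_level_idx into the 'X.Y' level string."""
    return f"{2 + (seq_level_idx >> 2)}.{seq_level_idx & 3}"

print(av1_level(8))   # 4.0
print(av1_level(13))  # 5.1
```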

I did not forget my last statements, but as you may know, all these values are necessary to output a correct file :slight_smile: so I must code it.

Is this tool planned to be released to the public at any point in time?

Yes it is, but there is no schedule; I can’t commit to one because I work on it in my spare time, which is variable.

Do you know that I currently spend almost every night on it? :sweat_smile:
I will deliver what I promised.

I am on the most difficult part, data aggregation. I’ll publish the tables used with the first alpha.
It covers videos from ~NTSC/PAL up to the latest formats, in order to satisfy the maximum number of users.

→ Tomorrow, will focus on the last graphics items, around 150 kB of BMP to embed as binaries
→ Will focus on settings.ini, to keep the ProgressBar realistic
→ Will focus on the first translated Hyperion

Even more of a reason to share it. If you spent time on it, let it have a true purpose.

If you are not planning on releasing it any time soon, then I am struggling to understand the point and benefit of all your updates on the tool, with all the screenshots. I am not sure what value we are getting from those posts at this point in time. It would make more sense to see them closer to the release date, IMO.

Hello.

I’ve been following your progress here and, IMO, you are doing great work for this community; I think this could be a game-changer for Topaz’s Hyperion alpha at release 11/12 (I’m 99.989898% certain that Hyperion will still be in an alpha state [full of bugs that will take several months of patches, well into 2025, to correct them all] on that date :smirk:). However, that doesn’t change all the tedious work… the testing & MORE testing that you’ve DEDICATED to just to get things right.

IMO, this is not just another simple script, and, IMO, you need to monetize it or be compensated by Topaz (if anyone should understand why, it’s definitely Topaz, right?) if all goes well. Great job!

I find it very interesting that an unlicensed Topaz user would start encouraging someone to monetize anything…

Hello.

I can respect your plight, David.

In my college years (side hustle… ha-ha), I’ve been where this author has been… grinding (endless) hours shouldn’t go unappreciated with just a thank-you when it’s only Topaz that benefits from this author’s work, IMO.

…and the elephant in the room is… does ethics/integrity change my statement above?

Hmm…

Dear all, I accept all your encouragement and criticism, whatever it may be.

It is always a positive way to lead a project. I do this in my professional life so do not worry.

So talking about monetizing this tool is not on my mind; I do not really think the few dollars I could get from it would change my life. My goal is to push the improvement of AI a step further, because I have confidence in it.

Topaz delivers a product that we know can be improved, but that at least does “something”, and at the time of writing, this “something” does not exist elsewhere without spending an incredible amount of time configuring a workstation dedicated to ESRGAN or something close to it. I am not even talking about the parameters that can be adjusted here.

So I am only keeping you informed of the progress, as a “project manager” should do, to let you know that the project is not abandoned but is being coded precisely (as much as my skills allow).

Even more reason that you should be highly valued in this community and not rushed, etc., due to impatience. Thank you for all you do!