Source video preparation: adjust contrast/gamma/brightness first and give upscale models something like this: Starlight - Imgsli (left: source / right: the same source with contrast adjusted)
(it’s just a quick and dirty sample, can be done better)
Blurry in gets blurry out. Depending on the source, a contrast adjustment can be super important before upscaling; I import the source into my video editing program first, and I can strongly recommend this.
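For those without an editing suite, a similar pre-adjustment can be sketched with ffmpeg's eq filter. The filter values below are illustrative assumptions, a starting point rather than the poster's actual settings:

```shell
# Hypothetical contrast/gamma/brightness pre-pass before upscaling.
# Tune the values per clip; these are only example numbers.
SRC="source.mp4"
OUT="source_adjusted.mp4"
FILTER="eq=contrast=1.15:gamma=1.05:brightness=0.02"

# Build the command; uncomment the eval line to actually run it.
CMD="ffmpeg -i $SRC -vf $FILTER -c:a copy $OUT"
echo "$CMD"
# eval "$CMD"
```

Audio is stream-copied so only the video is touched; preview the result before committing to a long upscale run.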
Tried Starlight on three of my best-known clips (45-year-old Super-8).
In most cases it is pretty good. It is a “one-stop” solution for what I might otherwise need two or three runs to achieve.
The Achilles heel, as for all other Video AI models, is faces, especially in the background. Until it is possible to turn off any attempt at face restoration for all, specific, and/or background faces, Video AI cannot restore old film.
It would be fine in many cases if I had some unknown video on my hands with people I did not know, but when it is family and dear ones, minute details count, and the poor original quality is still better than Video AI's distortions.
The proper solution unfortunately involves going scene by scene, tracking faces, and selecting/deselecting/modifying settings for each face. I could also attach higher-quality photos of each person's face at different ages and match them to each scene. I do not care how long the render would take.
Many have wished for TVAI to skip face recovery on faces that occupy only a few pixels, and that would help. But I think that alone will not solve everything: some faces in the original are covered by noise, and if the noise is removed, monster faces can become visible there too. TVAI would have to recognize such regions and treat them specially (not recovering, not denoising, blurring, or some mix of these), but this is not implemented today.
I currently get the best results with just Iris, and it is fast enough to live-stream the generated preview locally. That makes it a very hard sell to pay $1000 for online cloud credits to use Starlight instead.
Greetings Kevin! Just wanted to say thanks to you and the AI research team for all the hard work on Starlight! I can only imagine how much time machine learning and trial and error like this takes. Looking forward to its advancement.
FWIW: the environment variable VEAI_MODEL_DIR is not being set on install. It still points to ProgramData, while I'm trying to put the models on a different drive.
You’re welcome, Kevin! I have a question: what format is the output file when using Starlight via the cloud right now? Is it H.264? I ask because I see a lot of banding on plain-colored backgrounds. Also, I noticed it’s adding a bit of saturation and contrast to the image. I usually output to ProRes or 10-bit H.265 to avoid the banding.
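As a sketch, the two banding-resistant exports mentioned above look roughly like this in ffmpeg. The filenames and the CRF value are placeholder assumptions, not anything Topaz documents:

```shell
# Two 10-bit exports that avoid 8-bit banding on flat backgrounds.
SRC="starlight_output.mp4"

# ProRes 422 HQ (profile 3), 10-bit 4:2:2:
PRORES_CMD="ffmpeg -i $SRC -c:v prores_ks -profile:v 3 -pix_fmt yuv422p10le output.mov"

# H.265/HEVC, 10-bit 4:2:0 at a high-quality CRF (18 is an example value):
HEVC_CMD="ffmpeg -i $SRC -c:v libx265 -pix_fmt yuv420p10le -crf 18 output.mp4"

echo "$PRORES_CMD"
echo "$HEVC_CMD"
```

ProRes gives larger intermediate files suited to further editing; 10-bit HEVC is the smaller delivery option.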
Awesome! Btw, I just wanted to show this Sony RX10 IV 960 fps clip. The first is the original, the second is Artemis LQ, and the last is Starlight with a second pass of Proteus 2X with all the sliders at default but with Dehalo at 60. I’m still impressed at how much more effectively Starlight reduces aliasing/moire compared to Artemis LQ, especially if you look at the straight lines near the bottom of the left deck near the squirrel’s feet, the whiskers, and the claws. It’s night and day! One nitpick for me is that Starlight added a touch of saturation, warmth, and contrast, but this may just be a codec thing.

The reason I added a second pass of Proteus with Dehalo is that I thought the detail restoration and sharpening were a bit too aggressive and made the footage too crunchy/contrasty (another nitpick and an issue I hope gets ironed out). I feel like once Starlight’s restoring/sharpening is more subtle, or has different strengths to choose from (maybe with dehaloing built into the model) while still retaining the impressive aliasing/moire reduction, it will be game-changing for this type of footage.

I know it’s still early to think about this and I’m just throwing out ideas. I’m really glad the team is trying to tackle aliasing/moire, as this is something Sony bridge cams are notorious for when shooting at super-fast frame rates. I’m hoping that down the line we can get a Starlight anti-aliasing/moire iteration that focuses solely on that, and then the user can do another upscaling pass of their choice depending on how subtle or aggressive they want the upscaling and sharpening to be. Maybe doing it this way could also facilitate a faster, local Starlight model. Anyway, just thinking out loud to get the ideas out, but I know this may take more time.
I have to congratulate you this time: the processing speed has already improved tremendously from the first attempts, and this shows that, with further optimization, it will be possible to run this model locally as well (which will have to happen to make long-time users like me really happy).
Here is another example: the result this time is very good!
So now I’d like to see Starlight gather information from across the entire shot. I.e., if there’s panning through the thousands of frames it sees, it could expand the frame so that the image could be reframed any way we want, using all the information in the video. A lot of the time there is more information above, below, or to the left or right, and adding it would let us produce a much more fully restored video if we wanted.
There is an option to select the Models folder on install. I can change it to where I want it, but it still installs to where it wants, in C:\ProgramData. So I change it manually after install and copy the new files over. And I noticed this time the TVAI environment variable was wrong as well.
I haven’t tried to fight it in a few versions. Last I checked, there are some things that have to live in C:\ProgramData; nothing that takes up much space. You can change where the models are stored.
As for the environment variable, I had to add it manually. No idea if the newer installers add it, but I’m guessing not, since my script has not been broken by an update.
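A minimal sketch of that manual fix, assuming the variable name from this thread (VEAI_MODEL_DIR) and a placeholder models path; substitute your own drive and folder:

```shell
# Check whether VEAI_MODEL_DIR is set, and point it at a models folder
# on another drive if not. The path here is an example assumption.
MODEL_DIR="D:/TVAI/models"

if [ -z "${VEAI_MODEL_DIR:-}" ]; then
  echo "VEAI_MODEL_DIR is not set; setting it for this session"
  export VEAI_MODEL_DIR="$MODEL_DIR"
fi
echo "VEAI_MODEL_DIR=$VEAI_MODEL_DIR"

# To persist it across sessions on Windows (what the installer
# apparently fails to do), run in cmd:
#   setx VEAI_MODEL_DIR "D:\TVAI\models"
```

Note that setx only affects new processes, so restart the app (or reboot) after setting it.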