My thoughts exactly. 4.0 made previewing such a hassle.
So much so that with all the “improvements” to speed up workflow, this actually sets it way back. I was real bummed that was not included in this update…since it seems to be a universal complaint…
That is what I want to show:
Don’t expect IRIS2 to create realistic faces out of details that are too small.
In the first example the “creation” looks very unrealistic,
but I’m surprised by the details TVAI could carve out in the second example.
Development of IRIS2 is going in the right direction; perhaps it needs more training via active learning. I don’t think it will take another 5 years to get AI models that can regenerate better natural-looking faces out of tiny details.
If you want something that is not there, you can only invent it. Maybe in 5 years you’ll get “perfect” faces, but if the source material never had the real face, you’ll always have perfect faces, but not the real person behind them. It will be some person with a “perfect” face. I want to be able to guess the real face in old videos even if in bad image quality.
I personally use video AI to make good videos even better. I don’t expect any miracles! What I do expect: not so many bugs that even the developers lose track of them!
There should be a setting in Iris telling it to just leave extremely LQ (or maybe just very small) faces alone.
This would IMO cure most of the current problems. Iris should concentrate on the bigger faces where it actually can work well. This desperate processing of small LQ faces in the background really destroys the overall impression.
Subject detection in “Preview Next”, like that in Photo AI, would help distinguish small patterns that should be enhanced by AI from those that should just be enlarged by conventional methods.
To be a perfect AI you need to scan the complete movie first:
-
See each object/person in each scene that is there…build each object in a kind of virtual reality and add each new detail that is used throughout the movie.
-
The AI needs to track each object/person…record each movement, and be sure the data is added to the right object in virtual reality.
-
After each object/person is known in every aspect (a perfect near view, maybe)…you are able to reconstruct it in perfection no matter how bad the visual conditions are.
In the end the complete scenes can be reconstructed from scratch or, what would be easier, just like the AI here…but using the details that could be captured throughout the complete movie.
Also converting from 4:3 to 16:9 or so could be possible with AI…
Sometimes there’s a camera movement to the left…all the content that goes out of the frame on the right side can be used to reconstruct that area when the camera comes back.
If a complete virtual reality could be created out of all the seen scenes, it would be possible to reconstruct nearly everything, including switching between different choosable/loadable actors from other movies, and choosing different voices too. (Deepfake technology)
You could change cam view and movement in postproduction.
Edit actors in a World of Warcraft-like character editor.
Also, just telling the AI what kind of film you would like to see could be possible, or how a film could be better from your personal point of view…
Guessing what should be in unknown areas of a movie could be also an AI feature.
I am thinking that this kind of AI already exists…but it’s not released to us yet…we just get some pieces of the cookie from time to time…
When dreaming our mind just does exactly this already.
So Topaz, you now know how to do it…
Is there an official KB article, like other video AI vendors supply? I got one vendor who told me to make the pagefile 2× your memory, and that “fixed” it. Of course it turned out to be a bug, but the point is that they are guessing.
How much virtual memory should I set for 32GB RAM?
Microsoft recommends that you set virtual memory to be no less than 1.5 times and no more than 3 times the amount of RAM on your computer.
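As a sanity check, that 1.5×–3× guideline applied to a 32 GB machine works out like this (just the arithmetic from the quoted recommendation, not a TVAI-specific figure):

```python
# Microsoft's rule of thumb: pagefile between 1.5x and 3x of installed RAM.
ram_gb = 32
min_pagefile_gb = 1.5 * ram_gb  # lower bound
max_pagefile_gb = 3 * ram_gb    # upper bound
print(f"Pagefile range for {ram_gb} GB RAM: "
      f"{min_pagefile_gb:.0f}-{max_pagefile_gb:.0f} GB")
# → Pagefile range for 32 GB RAM: 48-96 GB
```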
I think it would help, as people have said, for it to stop previewing when it reaches the end of the preview section it has rendered (with maybe an option to loop it).
People are mentioning Iris 2 - mine only shows “Iris - Face LQ/MQ” in the list with the other models (no other Iris in the list). Is Iris 2 one of those (either the LQ or MQ version), or is there supposed to be a separate model listed for Iris 2, and if so, under what conditions does it show?
Iris 1 = LQ, Iris 2 = MQ (it came as an enhanced version of Iris, stronger but sometimes has artifacts)
I mean Iris1 is LQ and Iris2 is MQ
So it was already a good HD movie of 1280
It’s not only previews! The same thing sometimes happens with actual renders. More than once, I’ve set it to an upscale (Iris MQ), rendered that, then loaded the upscaled file, changed to a slow-mo (Chronos or Apollo)… and it’s only gone and upscaled the damned thing all over again!
That’s forced me to do a short render for everything I do, check that it did what I asked it to, then do the full render. I’ve had enough - so back to 3.5.4 for me. Topaz you need to chuck 4.x back into the beta pot (or alpha) and leave it there for a month while you do some proper testing on it.
I want preview 1 to always be Original, but when I switch to another clip to edit while the first is processing, then switch back, it is set to the same as preview 2? Gee!
I have a clip that is 14.xx FPS, but the TVAI UI shows the original as 5376 FPS? Gee!
I found this out when a 120-frame preview showed 14 hours!
This release (4.x) overall is more alpha, not even beta!
There are too many bugs; I reverted back to 3.5.4 on both the M2 Mac Pro and the Ryzen 5900X.
This seems like a bug - even when you put the following in the output settings:
“h264, profile: High, bitrate: constant (instead of choosing ‘dynamic’), target bitrate: 16 Mbps”
when viewing the output video in MediaInfo it says:
Overall bit rate mode : Variable
Overall bit rate : 16.3 Mb/s
…
Bit rate mode : Variable
Bit rate : 16.3 Mb/s
Maximum bit rate : 48.0 Mb/s
So it’s not really outputting it at the constant bitrate when you select “constant” next to “bitrate”, it’s outputting with the values set to variable bitrate.
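For comparison, getting plain ffmpeg (which TVAI wraps under the hood) to hold H.264 at a near-constant bitrate usually takes more than a target: you also have to cap it with `-minrate`/`-maxrate` and give the rate controller a VBV `-bufsize`. A hypothetical sketch of such a command line — the file names and the 32M buffer are my assumptions, not TVAI’s actual internal settings:

```python
# Sketch: flags typically needed to pin libx264 at ~16 Mbps constant bitrate.
# Without -minrate/-maxrate/-bufsize the encoder falls back to variable
# bitrate, which matches what MediaInfo reports above.
target = "16M"
cmd = [
    "ffmpeg", "-i", "input.mp4",   # placeholder input file
    "-c:v", "libx264",
    "-b:v", target,                # average (target) bitrate
    "-minrate", target,            # floor
    "-maxrate", target,            # ceiling
    "-bufsize", "32M",             # VBV buffer (assumed: 2x target)
    "output.mp4",                  # placeholder output file
]
print(" ".join(cmd))
```

If TVAI’s “constant” setting omits the cap and buffer, that would explain the “Variable / Maximum bit rate: 48.0 Mb/s” readout.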
Why do my processing times increase with every patch, with the same settings on the same type and length of episodes?
I went from 2 hours and 30 minutes on a 3080 Ti in April-May, to over 5 hours on a 4090 now for a ~50min 2K to 4K upscale. 4K to 8K took 1 day and 6 hours, now it’s over 2 days.
What is this?
I set 0 virtual memory and I’m very happy, and my SSD is happy
Even if I’m connected to the internet while starting the program, if I drop the internet connection while a queue is processing, all the videos will have a watermark on them.