Why is it wrong? Because you’ve decided it’s wrong.
The 2.x version stayed below the target size and did the remaining upscale with Lanczos. The 3.x version goes beyond the target size and shrinks to fit. This change was made because the old version lost quality by doing the missing upscale the classic way, so now it scales up one step further via AI and then downscales to fit.
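The difference between the two strategies can be sketched roughly like this. Everything here is a guess at the logic described above: the fixed model scale factors, function names, and the idea that the steps come in 1x/2x/4x increments are all assumptions, not Topaz's actual implementation.

```python
# Hypothetical sketch of the two scaling strategies described above.
# The 1x/2x/4x model steps are an assumption, not Topaz's real set.
MODEL_SCALES = [1, 2, 4]

def plan_2x_style(src: int, target: int) -> tuple[int, float]:
    """Old behavior: pick the largest AI step that stays at/below the
    target, then finish with a classic (e.g. Lanczos) upscale."""
    ai = max(s for s in MODEL_SCALES if src * s <= target)
    classic = target / (src * ai)  # >= 1.0: remaining classic upscale
    return ai, classic

def plan_3x_style(src: int, target: int) -> tuple[int, float]:
    """New behavior: pick the smallest AI step that reaches or passes
    the target, then shrink to fit (downscaling tends to lose less
    quality than a classic upscale)."""
    ai = min(s for s in MODEL_SCALES if src * s >= target)
    classic = target / (src * ai)  # <= 1.0: downscale to fit
    return ai, classic

# 720 -> 1080 is a 1.5x job, which falls between the 1x and 2x steps:
print(plan_2x_style(720, 1080))  # (1, 1.5): AI 1x, classic 1.5x up
print(plan_3x_style(720, 1080))  # (2, 0.75): AI 2x, then shrink to 75%
```

The 1.5x case shows why the change matters: the old path leaves half the enlargement to a classic resampler, while the new path lets the AI do all of the enlarging and only ever shrinks afterward.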
You can believe that if you wish. Have a nice day!
I'm receiving this message now:
"Failed unlocking input buffer!: generic error (20):
Error submitting video frame to the encoder"
Is there any way the AI can be improved to handle TEXT better? Even CG/titles that are easily read usually don’t look great. Certainly seems AI enhancement of text would be far easier than a human face. Funky looking text is always a dead giveaway that AI enhancement has been used, and we’d all prefer if that fact wasn’t given away so easily.
Here is an example of text on a logo cap… One quick screen-grab and a Google Lens search brought me to an example of what it’s supposed to look like. I don’t expect it to be perfect, particularly in this instance, but it seems like it could be much better.
This is also an example of how the AI tends to do poorly in low light conditions. A face in bright sunlight comes out perfectly, and a face in a dimly lit room or under an overcast sky looks…muddy like this.
When I look at the original I can't even imagine what those letters were, and I can't see anything really helping that. The original looks to me like two words in a larger font, with two other words above them in a smaller font. It may be the same cap, but it just doesn't look like it from this example, which is why I can't see AI being able to figure it out.
Yeah, I read all the complaints. But I downloaded 3.1 anyway because Topaz excites me. I'm more interested in where they're going than where they are. Topaz is on a trajectory toward perfection. The toughest thing in this game for developers on the cutting edge is trying to make the software work across the universe of their customers' existing platforms. If you can get it to work on older and weaker platforms, and only experience a loss in speed, you're doing good.

When I transitioned from film to video in the late 90s, I asked Media 100 what system was best and they steered me to a high-end Compaq with a Medea RAID 0 setup. A few years later, I went to high-end Boxx systems with fast RAID 0 servers. I called Topaz last year and asked them what platform would work best. I settled on a high-end Ryzen/Nvidia system with a big M.2 card. This thing works like a dream. I have never had a problem with Topaz software. But should I encounter one, I am confident that all I need do is wait for the next update.

The thing is, Topaz, of course, uses the best hardware to develop their stuff. They don't develop it on old tech platforms, which I suspect is where a lot of these complaints come from. When an update doesn't seem to work right, don't just immediately criticize the software. Look at your hardware. When you fall behind the curve, bite the bullet and upgrade.
I agree; all people can do is bitch. I will be upgrading when I get off my hitch (offshore).
I have to stop wasting my time trying to fine-tune the task.
- "Relative to Auto" is ineffective.
- The noise level setting is ignored.
TVAI 3.1.0 appears to crash consistently after rendering and the audio copy complete. The job is finished, but I always return to find a crash report. The text of the report is in the Dropbox. Thx.
After three years of using this software, I still don’t know what the AI algorithm of this software is, but it is certain that the team doesn’t have a clear goal.
I could give you other examples that are human readable, but this was handy since I was curious about the hat and had already done the search. Other text looks just as bad, and it appears to be a matter of size and light intensity in the scene, as stated before. Even large title text with white letters on a black background tends to show internal haloing (lines, brighter than the lettering, running along the inside of each letter) or jaggedness along what should be the smooth edges of the lettering.
The difficulty in reading that you describe, MikeF, also comes from the fact that you're only looking at one frame, whereas the software has a good 500 consecutive frames of video to work with. Multiple frames CAN be compared to one another for change detection and interpolation… feel free to use your imagination as to how this could actually be done even if you can't read the text in this one frame.
This is the problem with presenting a single example in a forum setting. We can only go by what is presented; a more complete example, or several, is needed. This is also why the developers ask for the video, so they can examine the issue more closely.
Someone else, a long while ago, asked if AI could restore the labels on bottles in an old movie. The answer is no, it cannot. Even if it were to create legible text, it would just be words from whatever sources were used to train the AI.
I can already hear the heated complaints of people learning that TVAI needs internet to search for better examples of what it’s trying to upscale…
Others have suggested making a model that accepts a collection of images to use as examples of what something in the movie should look like. It’s an interesting idea, but I think we are many years away from that.
If the data is not available in the source, AI can only guess - just like humans.
So the answer is yes, it can, but it will always be a guess, not the actual truth.
Restore. It cannot restore, unless the original was used to train it. Recreate similar? Yes. Restore? No.
Very happy with the v3.1.0 performance on exports. My RTX 3090 is getting a good workout without multiple instances. With the Topaz redistributables now being updated to support v3.1.0, people can make ffmpeg builds with CPU encoders like x264 and x265 enabled and take max advantage of GPU for the AI upscaling like I have done.
Don't know how it works, but I would love the GUI to be able to select these encoders, so we can still take advantage of the GUI when desired.
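For anyone curious what driving such a custom ffmpeg build from a script looks like, here is a minimal sketch of the CPU-encoder half of that pipeline. The file names are placeholders, and the Topaz AI filter chain is deliberately omitted because its filter options are build-specific; only the standard libx264 options shown are real ffmpeg flags.

```python
def x264_encode_cmd(src: str, dst: str, crf: int = 18) -> list[str]:
    """Assemble an ffmpeg invocation that encodes on the CPU with x264
    (from a build with libx264 enabled), leaving the GPU free for the
    AI upscaling work. Paths are placeholders."""
    return [
        "ffmpeg", "-y",
        "-i", src,
        "-c:v", "libx264",    # CPU encoder from the custom build
        "-preset", "slow",    # slower preset = better compression
        "-crf", str(crf),     # quality-targeted rate control
        "-c:a", "copy",       # pass audio through untouched
        dst,
    ]

cmd = x264_encode_cmd("upscaled.mov", "final.mp4")
print(" ".join(cmd))
```

Once the paths point at real files, the command can be executed with `subprocess.run(cmd, check=True)`.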
Is anyone else dealing with "Out of Memory" when using Themis? I'm using a 16" MacBook Pro M1 Max (10-core CPU, 32-core GPU, 16-core Neural Engine, 64GB RAM), so I know my laptop is capable of running these. Any other model seems to run fine; it's strictly Themis that causes my computer to get extremely loud and hot and then cancel the project roughly 25% through with "Out of Memory." I've fiddled with the settings, changing the Processor from my M1 Max to Auto, to see if this is a glitch I can work around. I've also deleted and reinstalled the software, to no avail. It sucks not having Apollo for the time being, but I'm glad they're just reworking it and not retiring it completely. I wish they would have held off on this update until Apollo was done.
Someone did write that setting the memory % down in the app will be helpful.
The preview during encoding is completely broken, correct? It doesn't matter if it's 4K or 1080p.