Compression factor

Can somebody explain how the compression factor works? What is the formula for the output bitrate?
Example:
h.264 video 1000x1000 12000 bitrate
upscale 200% into 2000x2000 with compression factor 25
What will the final bitrate be?

You can run some tests yourself if you use VLC player.

Play the output h264 video and look for “Codec information” in the Tools menu. You’ll see a breakdown of bitrates for video, audio, and other handy details.
Hope this helps.

I don’t need breakdowns, I need to calculate the final result in advance, because different videos have different bitrates.

1 Like

I keep asking people here the same thing and have had no valid answer so far. I don’t know why we can’t have an option to enter the bitrate we want, and/or the file size we want, and have the program calculate both the size and bitrate of the final video. It’s a standard. A “compression factor” of something is not a standard and does not tell me much of anything.

It’s going to be dependent on the source, I’m afraid: the bitrate resulting from CRF is proportional to content complexity. For example, a source that is noisy, has lots of action, and is difficult to compress might come out 2-3x the size of a 2-pass 3 Mb/s encode at CRF 18-19.

It might not be ideal, but if you want full control over the encoding process, I suggest you initially pick ProRes/ProRes HQ as the output for your upscales, then re-encode them afterwards (using Handbrake/StaxRip/Hybrid, etc.) into 2-pass fixed-bitrate x264 encodes, for example.
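If it helps, that re-encode step can be scripted. A minimal sketch that just builds the two ffmpeg command lines for a 2-pass x264 encode (the filenames and the 3 Mbit/s target are placeholder assumptions, not values from this thread):

```python
def two_pass_x264_cmds(src: str, dst: str, bitrate: str = "3M"):
    """Build the two ffmpeg invocations for a 2-pass fixed-bitrate x264 encode."""
    common = ["ffmpeg", "-y", "-i", src, "-c:v", "libx264", "-b:v", bitrate]
    # Pass 1 only analyzes the video: audio disabled, output discarded.
    pass1 = common + ["-pass", "1", "-an", "-f", "null", "/dev/null"]
    # Pass 2 does the real encode using the stats file pass 1 wrote.
    pass2 = common + ["-pass", "2", dst]
    return pass1, pass2

p1, p2 = two_pass_x264_cmds("upscaled.mov", "final.mp4")
print(" ".join(p1))
print(" ".join(p2))
```

Run the two commands in order (e.g. via `subprocess.run`) and the result hits your chosen average bitrate, so the final file size is predictable up front.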

2 Likes

Since it uses CRF, which is quite good if you don’t need a precise bitrate, you’ll need to output to ProRes and then transcode using some other software, like Avidemux.

But why? Other programs offer the ability to calculate output bitrate and output size right away; it seems Topaz needs to implement that as well. Converting to ProRes is not really a good option, because a) the files are freakishly huge and I can’t tell how large the output will be, since there is no way to know until I get a message that I’ve run out of disk space (and not everyone has endless TB to spare), and b) it’s double the work, and it shouldn’t be. I hope the Topaz team understands the needs of its users and implements the tools they need.

3 Likes

Same problem here:
For my home movie files (self-made videos) I use 50 Mbit/s for MP4 files in all my video editing software. That should be quite enough for really good quality, e.g. when using a projector in a home cinema.
If I load them into VEAI and only improve 1080p to 1080p, e.g. with the GAIA model, the output comes out between 100 and 130 Mbit/s. Much too high.
So I raised the compression factor from the default 14 to 18 (or higher) so that the exported files are not so huge.

Unless you are streaming video, it’s much more efficient to use CRF as the rate control, because it lets you specify a quality level. Imagine you have a 10-minute video and you set a constant 20 Mbit/s: no matter the scene complexity, the bitrate used will be about 20 Mbit/s. For some scenes this might be fine, others could have used more, and some might not have needed that much. With CRF you specify a quality level and the encoder adjusts the bitrate as needed. That way you can say “Hey, I want these archive videos at CRF 18” or “These are just being previewed, so CRF 22 is fine,” and all the videos, regardless of content, come out at a similar compressed quality.

In fact the file size could be the same, but since the bitrate was adjusted as needed, the quality could be better. This article details the rate control methods: https://slhck.info/video/2017/03/01/rate-control.html

Finally, if you are transcoding the video before it gets to VEAI, you shouldn’t use lossy compression like h264 (with the exception of its lossless mode).
Each time you transcode to h264 you reduce the quality permanently. Yes, if you use a ridiculously high bitrate for the resolution (like 100 Mbit/s for 1080p) it might not be noticeable, but the loss is still there.

There is a huge misconception about audio/video compression: it’s not a zip file; the quality is permanently lost. https://en.wikipedia.org/wiki/Generation_loss

1 Like

Efficient for whom? Not for me. I have used bitrate for the last 10 years to calculate everything. I know what to expect; I know what works for me in terms of the size-vs-quality compromise, using resolutions and bitrates I’m familiar with. In fact, the videos I process in Video Enhance show nothing about a compression factor. They show the end result: bitrate, size, pixel dimensions, etc. The compression factor is nowhere to be found. Why? Because if anyone used it, it was only meaningful to the person choosing it; it’s not a standard that gets written into the metadata.

“Finally if you are transcoding the video before it gets to VEAI you shouldn’t use a lossy compression like h264 (with exception to the lossless mode). Each time you transcode to h264 you are reducing the quality permanently and yes if you use some ridiculous high bitrate for the resolution (like 100mbit/sec for 1080p) it might not be noticeable but it’s still there.”

Well, yes, in theory. But in practice you really have to be open to all kinds of use cases. For some it doesn’t matter, for others it does. And if you are going to post something on social media like YouTube, forget it: YouTube’s compression will kill any attempt to preserve quality, so there is no point in paying much attention to quality if that is your final output. Quality on YouTube is pronounced silent.

Thank you for the links; I knew most of it, minus the math, which I’m not interested in. I’m still fuzzy on why there is a segment of the community here at Topaz Labs who are comfortable with coding and ffmpeg terminology, and who think that is what everyone wants or knows how to do.

I’m a creative guy, and I’m also fairly technical, but I’m not a coder, nor do I want to be. Bitrate is a very old STANDARD that is familiar to most people who work with video. Why can’t a STANDARD be implemented that makes sense to the largest number of people? Instead it has to be something unclear and ambiguous like “compression factor,” with no explanation added in the program. Why require math for something that most programs already do with a foolproof method that works?

Just give me bitrate and size and resolution and other basic familiar controls. I’m not asking developers to reinvent the wheel, just to start using one.

1 Like

Ah yes, their ill-advised relabeling of Constant Rate Factor. I’ve got piles and piles of projects in all manner of frame sizes and rates that became do-overs because I forgot not to bitrate-starve the project with the default ‘Compression Factor’ of 20.

I ginned up a graph-paper quick-reference cheat sheet, taped to my desk, that serves well enough for the “ish” factor of using CRF instead of variable or average bitrate, based entirely on my own anecdotal lessons learned and judicious application of the Mk 1 calibrated eyeball. I go rich on the bitrate, because I’m probably just going to motion-interpolate it or stuff it into an h.265 container later anyway. But I agree: they need to relabel this very key tool, and maybe integrate a file size/bitrate estimation tool. Edit: calculators will only give you an “ish” anyway, but it would be nice to know I’m not going to overflow my 2 TB workspace SSD before I render it down to HEVC.

Some example ‘perceptibly lossless’ standards I settled on before final encode:
6400x3200x120fps = 10 CRF
3840x2160x60fps = 12 CRF
3840x2160x30fps = 14 CRF
2560x1440x120fps = 14 CRF
3840x2160x24fps = 16 CRF
2560x1440x60fps = 16 CRF
1920x1080x120fps = 18 CRF
1920x1080x24fps = 20 CRF
1280x720x60fps = 24 CRF
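For what it’s worth, a cheat sheet like that drops straight into a tiny lookup helper. The CRF values below are the anecdotal list above, and the nearest-throughput fallback for sizes not in the table is my own assumption:

```python
# (width, height, fps) -> CRF; values are one user's anecdotal settings.
CRF_CHEAT_SHEET = {
    (6400, 3200, 120): 10,
    (3840, 2160, 60): 12,
    (3840, 2160, 30): 14,
    (2560, 1440, 120): 14,
    (3840, 2160, 24): 16,
    (2560, 1440, 60): 16,
    (1920, 1080, 120): 18,
    (1920, 1080, 24): 20,
    (1280, 720, 60): 24,
}

def suggested_crf(width: int, height: int, fps: int) -> int:
    """Exact match if present, else the entry whose pixels-per-second
    throughput is closest to the requested format."""
    key = (width, height, fps)
    if key in CRF_CHEAT_SHEET:
        return CRF_CHEAT_SHEET[key]
    target = width * height * fps
    nearest = min(CRF_CHEAT_SHEET, key=lambda k: abs(k[0] * k[1] * k[2] - target))
    return CRF_CHEAT_SHEET[nearest]

print(suggested_crf(1920, 1080, 24))  # 20
```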

1 Like

“But I agree, they need to relabel this very key tool. And maybe integrate a file size/bitrate estimation tool. Edit: Calculators will only give you an “ish” anyways, but it would be nice to know I’m not going to overflow my 2tb workspace SSD before I render it down to HEVC.”

Yes, exactly.

I’m still not clear on what Constant Rate Factor or Compression Factor actually means.

I am familiar with Variable Bit Rate and Constant Bit Rate and the measurement units standardized around them.

Constant Rate Factor or Compression Factor means nothing to me. I have no reference for what they do to my video, and “compression factor” is also strange, because what exactly is being factored? What is a factor, a factor of what? And what is compression measured in? It has no unit label in the user interface.

Things like bit rate (kb/s), frame rate (fps), duration (min), etc. all have their measurement units. What does a ‘Compression Factor’ of 20 mean? I still don’t have a clear understanding of it, or of why it’s used instead of standardized measurement units.
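Part of why bitrate feels like the standard is that the size arithmetic is trivial. A minimal sketch of the usual back-of-the-envelope formula (the 50 Mbit/s, one-hour numbers are just the home-movie example from earlier in the thread):

```python
def estimated_size_gb(bitrate_mbps: float, duration_s: float) -> float:
    """Estimated file size in GB:
    Mbit/s * seconds = total Mbit; / 8 -> MB; / 1000 -> GB."""
    return bitrate_mbps * duration_s / 8 / 1000

# A one-hour home movie at 50 Mbit/s:
print(estimated_size_gb(50, 3600))  # 22.5 (GB)
```

With CRF there is no equivalent closed formula, because the bitrate the encoder spends depends on the content itself; that is exactly the predictability being asked for here.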

1 Like

CRF is like a benchmark for quality retention. With Average Bitrate you set a ceiling of sorts, a Do-Not-Exceed number. The encode may never reach that number, or, if you set it too low, it will bitrate-starve your video and make it look like garbage, undoing everything you did with Topaz in the first place.

CRF is a less-well-explained pain in the rear for how tightly you want to squeeze that file when containerizing your video as h.264. And it’s easy to screw up. Less is more (bitrate), and it works in FACTORS, like the Richter scale.
A CRF of 14 (good for 4K 30fps) is many, many times less compressed and squashed down than a CRF of 20 (good for HD 24fps). In practice it’s not a linear 70% ratio; it’s more like 8 to 1, and you’ll observe it yourself in the bitrate when your render is complete. But that’s just my understanding in layman’s terms. I’m not a programmer either; I’ve just been wrestling with it for too bloody long. And it’s really irritating when you underestimate and waste rendering hours because you chose poorly.
Edit: As someone said earlier, if you’re expecting a per-pixel mapping of CRF to frame size x frame rate, you’re not going to get it, because it’s a question of the complexity of the image data, which is then squashed down according to Space Ninja Math Magic. A black screen for 20 minutes will not compress the same as One Punch Man animation for 20 minutes.
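There is at least a commonly cited x264 rule of thumb for the non-linearity (approximate, content-dependent, and not from this thread): adding 6 to the CRF roughly halves the resulting bitrate. A sketch:

```python
def approx_bitrate_ratio(crf_a: int, crf_b: int) -> float:
    """Rough x264 rule of thumb: +6 CRF ~ half the bitrate,
    so bitrate(a) / bitrate(b) ~ 2 ** ((crf_b - crf_a) / 6).
    Real ratios vary a lot with content."""
    return 2 ** ((crf_b - crf_a) / 6)

print(approx_bitrate_ratio(14, 20))  # 2.0: CRF 14 ~ twice the bitrate of CRF 20
```

So by this rule of thumb CRF 14 vs. CRF 20 is roughly a 2:1 bitrate difference; observed ratios like 8:1 on a given clip come down to how that particular content compresses.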

Hmm. That sounds a lot like trying to explain to someone how long to cook an egg: instead of minutes and temperature, you say “you’ll know it when you see it.” Not very helpful. We can judge music or art that way, but this is math; it’s not supposed to be rocket science, as it were.

1 Like

Believe me, I feel your pain here. And so do my hard drives.

You just kinda gotta experiment to get a feel for it yourself. Find a short clip and try running it at a few different compression factor settings using the same AI model. Being a director is still more art than science.

1 Like

I suppose I’ll have to. I see they released the new version 2.3 today; it seems they still haven’t changed the output settings we are talking about, but at least there are other improvements. I’ll have to test it. Cheers!

2 Likes

Yeah, please give an educated guess of the final size or the bitrate it is encoding to.

I can run a few tests and keep a spreadsheet, but that’s wasting your users’ time, and their time is just as valuable as yours.

1 Like

Agreed. Except ProRes is an annoying proprietary Apple codec, not recognized by most demuxers I have.

Constant bitrate isn’t necessarily all that useful (some scenes benefit from a higher bitrate more than others). CRF is a good place to start, but woefully inadequate alone. What VEA needs is an option to pass comparable x264 parameters to the (internal?) command line for H264 processing.
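To make that concrete: everything below is hypothetical, since VEA doesn’t expose this today. The option names (`keyint`, `bframes`, `ref`) are real x264 settings, and the surrounding command uses ffmpeg’s `-x264-params` syntax as an illustration of what “passing x264 parameters” looks like:

```python
# Hypothetical sketch of extra x264 parameters, formatted the way
# ffmpeg's -x264-params flag accepts them (key=value pairs joined by ':').
x264_opts = {"keyint": 120, "bframes": 3, "ref": 4}
params = ":".join(f"{k}={v}" for k, v in x264_opts.items())

cmd = ["ffmpeg", "-i", "upscaled.mov", "-c:v", "libx264",
       "-crf", "18", "-x264-params", params, "final.mp4"]
print(" ".join(cmd))
```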

1 Like

“What VEA needs is to have an option to add comparable x264 parameters to the (internal?) command line for H264 processing.”

For non-coders: can you translate that into something more approachable for the general user group?

1 Like