The idea came to me a while ago.
VEAI lets you set a CRF value to control the image quality of the output video, but I was wondering: why couldn't it use AI to determine a better value automatically?
The idea is that the AI would only be looking for artifacts between pairs of images. Those pairs would come from the AI taking multiple screenshots throughout a given video and running them through encodes at various CRF values.
It would then output a recommended CRF value based on what it thinks is ideal: maximum compression at the best possible image quality, with little to no visible artifacts.
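To illustrate what I mean, here's a rough sketch of the selection step only (not anything VEAI actually does, and the scores are placeholders). In a real version, each sampled frame would be encoded at several CRF values and scored against the original with a perceptual metric like SSIM or VMAF; the tool would then recommend the highest CRF (smallest file) that still clears a quality threshold:

```python
def recommend_crf(scores, threshold=0.95):
    """Pick the highest CRF whose average quality score across the
    sampled frames stays at or above `threshold`.

    `scores` maps a CRF value -> list of per-frame quality scores
    in [0, 1], where 1.0 means no visible artifacts. These would
    come from a real metric (SSIM/VMAF); here they are made up.
    """
    for crf in sorted(scores, reverse=True):  # highest CRF = most compression
        avg = sum(scores[crf]) / len(scores[crf])
        if avg >= threshold:
            return crf  # first acceptable CRF from the top is the smallest file
    return None  # no tested CRF met the quality bar

# Hypothetical scores for three sampled frames at three CRF values:
sample_scores = {
    18: [0.99, 0.98, 0.99],
    23: [0.97, 0.96, 0.97],
    28: [0.93, 0.91, 0.92],
}
print(recommend_crf(sample_scores))  # -> 23 with the default threshold
```

The threshold is doing the work the AI would do in my suggestion: instead of a fixed number, a trained model would judge whether the compressed frame shows artifacts compared to the original.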