JPEG quantization tables and progressive scan scripts
Posted: 2012-02-28T07:52:46-07:00
Would it be possible to add the ability to pass alternative JPEG quantization tables/progressive scan scripts to, say, convert? (I'm pretty sure it's not possible now.)
These features are documented in the wizard.doc file included with the ImageMagick source distro (a copy of the Independent JPEG Group's wizard.txt file).
One possible use (besides my own shenanigans) would be to recompress a medium/low-quality JPEG with exactly the quantization table that was used to create it (figured out using, say, http://www.impulseadventure.com/photo/j ... ation.html) to avoid rounding error, while chopping off the high modes with a progressive scan script to make the file smaller, all without rounding/truncation error messing up the low modes. What this basically does is remove fine detail without affecting the rest: it uses progressive encoding to perform low-pass filtering, reducing file size in the process. (Which sounds like an expert-only thing to do, but is actually pretty straightforward if one is provided with step-by-step instructions.)

A similar dirty trick: you could figure out which quantization table to use so that, when re-encoding at a different overall quality setting, integers mostly land on integers (that is, so that rounding is minimized). This particular trick definitely requires some math. The simplest version is to multiply all entries of the "incoming" effective table (the one resulting from the original quantization/quality combination) by 2 and then re-encode with -quality 100, which reduces both file size and quality by a big notch.
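For anyone wanting to experiment before convert grows such an option, the libjpeg cjpeg tool already exposes these "wizard" switches (-qtables, -qslots, -scans), as documented in wizard.txt. A scan script for the low-pass idea above might look roughly like this (the split at coefficient index 9 is just an illustrative choice, not anything from wizard.txt):

```
# Progressive scan script: DC first, then AC split into a
# low-frequency band and a high-frequency band per component.
0,1,2:  0-0,  0, 0 ;   # DC coefficients for Y, Cb, Cr (interleaved)
0:      1-9,  0, 0 ;   # low-frequency Y AC coefficients
1:      1-9,  0, 0 ;   # low-frequency Cb AC
2:      1-9,  0, 0 ;   # low-frequency Cr AC
0:     10-63, 0, 0 ;   # high-frequency Y AC
1:     10-63, 0, 0 ;   # high-frequency Cb AC
2:     10-63, 0, 0 ;   # high-frequency Cr AC
```

Invoked as something like cjpeg -qtables mytables.txt -scans myscans.txt image.ppm > out.jpg (check your build's cjpeg -h for the exact switch set). Putting the high-frequency scans last means a progressive decoder renders the low-passed image first; if your libjpeg build permits incomplete scripts, deleting those last three lines drops the high modes entirely.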
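The "double the table entries" version of that last trick can be sketched in a few lines of Python. The base table below is the standard JPEG Annex K luminance table, standing in for whatever effective table you extracted from the actual source file; the output format is the plain whitespace-separated layout (with # comments) that cjpeg's -qtables switch reads:

```python
# Standard JPEG Annex K luminance quantization table, in natural order.
# A stand-in for the effective table recovered from the source JPEG.
BASE_LUMA = [
    16, 11, 10, 16, 24, 40, 51, 61,
    12, 12, 14, 19, 26, 58, 60, 55,
    14, 13, 16, 24, 40, 57, 69, 56,
    14, 17, 22, 29, 51, 87, 80, 62,
    18, 22, 37, 56, 68, 109, 103, 77,
    24, 35, 55, 64, 81, 104, 113, 92,
    49, 64, 78, 87, 103, 121, 120, 101,
    72, 92, 95, 98, 112, 100, 103, 99,
]

def doubled_table(table):
    """The simplest version of the trick: double every entry,
    clamping to 255 (the largest value a baseline 8-bit table holds)."""
    return [min(2 * q, 255) for q in table]

def format_qtables(*tables):
    """Render one or more 64-entry tables as whitespace-separated
    values, 8 per line, with '#' comment lines between tables."""
    lines = []
    for i, t in enumerate(tables):
        lines.append("# table %d" % i)
        for row in range(8):
            lines.append(" ".join("%3d" % q for q in t[8 * row:8 * row + 8]))
    return "\n".join(lines) + "\n"

if __name__ == "__main__":
    print(format_qtables(doubled_table(BASE_LUMA)))
```

One caveat if you go through cjpeg rather than convert: as I understand libjpeg's quality scaling, -qtables files are themselves scaled by the -quality setting, and it is -quality 50 (scale factor 100%) that uses the file's values unchanged, so you would bake the doubling into the file as above rather than rely on the quality knob.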