Re: Better JPEG quantization tables?
Posted: 2013-04-23T16:13:21-07:00
by NicolasRobidoux
As usual, things are more complicated than I thought.
-----
Here is one that seems to work really well with 2x2 Chroma subsampling, the same table for Luma and Chroma, and quality 77:
Code: Select all
22 13 12 14 17 21 26 34
13 17 18 20 22 22 25 29
12 18 19 22 26 32 40 51
14 20 22 25 29 36 44 56
17 22 26 29 34 41 50 63
21 22 32 36 41 48 59 72
26 25 40 44 50 59 70 85
34 29 51 56 63 72 85 103
It's nicely "punchy" but clean too.
Re: Better JPEG quantization tables?
Posted: 2013-05-03T19:33:28-07:00
by NicolasRobidoux
I've put a well tested current favorite in the config/ folder of the svn ImageMagick source repo. It's this, which I recommend using for all three channels with 2x2 Chroma subsampling:
Code: Select all
16 16 16 18 25 37 56 85
16 17 20 27 34 40 53 75
16 20 24 31 43 62 91 135
18 27 31 40 53 74 106 156
25 34 43 53 69 94 131 189
37 40 62 74 94 124 169 238
56 53 91 106 131 169 226 311
85 75 135 156 189 238 311 418
Interestingly, even though it is a remix of the Klein et al quantization table given for 1 min/pixel viewing, it is really close to the Luma table I assembled from scratch (and tested by eye) and presented in
viewtopic.php?f=22&t=20333&start=30#p81176. The large change in the top left corner entries comes from better understanding of what's what, when the table is used around quality 75 with the conventions defined by the IJG. It is truly quite amazing to me how close the two tables are, otherwise, given how differently they were derived.
I'm at the point where even when I explore new directions, the final product is more or less the same. This is a good sign. (At least, it's a good sign that I should be doing something else.)
Re: Better JPEG quantization tables?
Posted: 2014-03-30T05:18:04-07:00
by Rumpelstielzchen
Dear Mr. Robidoux,
in my portrait shoots at wide aperture I've always wondered about banding in the soft background under strong JPEG compression, and I've noticed quite often slight hue shifts in skin tones and background.
Trying around with dctune2.0 does not give me an adequate result (sorry for not trying one of your quantization tables, because they're not scalable (?) ).
So I've tried some quantization tables on my own, with priority on the DC component, on both luma and chroma.
This is the perl script to generate the quantization tables (one for luma, one for both chroma channels):
#!/usr/bin/perl
# To be called with a multiplier as parameter; the larger the multiplier, the smaller the files.
# Prints two 8x8 tables: one for luma, one for both chroma channels.
sub dither { my $x = shift; my $f = $x - int($x); $x = int($x); return $x + ($f > rand() ? 1 : 0); }
for ($i = 0; $i < 64*2; $i++) {
    $x = $i % 8;
    $y = int($i/8) % 8;
    printf "%.0f ", dither((1 + $x + $y) * 8 * $ARGV[0]); # simply linear in x+y.
    if ($i % 8 == 7)   { print "\n"; }
    if ($i % 64 == 63) { print "\n"; }
}
I've tested this with one image of my daughter (only), at file sizes equivalent to quality 24, 28 and 73, like this (the parameter for qtable.pl (see above) adapted manually to get a similar file size):
perl qtable.pl 1.6 > q.txt
cjpeg -qtable q.txt -qslots 0,1,1 -sample 2x2,1x1,1x1 -dct float -optimize ori.ppm >x.jpg
At every quality level I find my tables superior, but perhaps this is just my impression.
Anyone out here want to try them?
Kind regards,
Jochen
Edit: The JPEGs generated by Lightroom were mostly worse than those of cjpeg, by far; Lightroom file sizes were compared without metadata (jpegtran -copy none).
Re: Better JPEG quantization tables?
Posted: 2014-03-30T05:37:00-07:00
by NicolasRobidoux
Thank you for your post.
I will test... when I have a minute.
(Don't hold your breath: Sunday afternoon and I'm in the lab...)
P.S. The above are not quite my state of the art, but they are close. My quantization tables were mostly optimized for photographs of buildings, indoor and outdoor, that were already mangled by prior JPEG compression and other reprocessing. So it could be that I did not set them up to do justice to other subject matter. I also paid further attention to banding in my later, unpublished, work.
Re: Better JPEG quantization tables?
Posted: 2014-03-30T05:47:52-07:00
by Rumpelstielzchen
Hello,
Just experimenting with cjpeg -quality 10, "just for fun".
Where cjpeg with its qtables generates something like "mosaic pop-art" (pop-art because of pink hue shifts on the white wall in the background, the brown chocolate cat gets a more reddish brown, the yellow cup gets greenish, etc.),
with my quantization tables at the same file size the colors stay almost constant. EDIT: but still pop-art.
So I expected that my qtables would result in some loss of detail or some other loss, but I can't see any disadvantages with my qtables in this image.
Perhaps just a lucky strike of a blind player?!
Kind regards,
Jochen
Re: Better JPEG quantization tables?
Posted: 2014-03-30T05:50:14-07:00
by NicolasRobidoux
Jochen:
We'll see
Again, thank you for sharing this.
Let us know what you find. I'll read with interest.
Re: Better JPEG quantization tables?
Posted: 2014-03-30T06:11:15-07:00
by Rumpelstielzchen
Well,
with this one:
http://printtest.eirasys.pt/Testes%20Im ... e%2028.jpg
At cjpeg -quality 50 / parameter 0.45 of qtable.pl
With my qtables:
You'll get some loss of detail in the deep red flowers on the right (those with the white tips), compared with cjpeg.
But the skin tones seem to be more preserved.
Re: Better JPEG quantization tables?
Posted: 2014-03-31T05:06:03-07:00
by Rumpelstielzchen
Dear reader,
I've just thought about sensitivity on Y, Pb, Pr, and came to the conclusion that rounding Y results in larger errors in R,G,B (by sum(abs(deltaR,G,B))) than rounding Pb or Pr.
(Perhaps it would be worth analyzing this with CIEDE2000, a (hopefully) more sophisticated distance metric.)
So I came up with this (the Perl script here automatically generates a file just larger than the cjpeg reference, for easier comparison):
#!/usr/bin/perl
use strict;
use File::stat;

sub dither {
    my $x = shift;
    my $frac = $x - int($x);
    $x = int($x);
    return $x + ($frac > rand() ? 1 : 0);
}

# Reference encode at cjpeg's own -quality 50.
system("cjpeg -dct float -optimize -quality 50 ori.ppm > cjpeg.jpg");

# Increase the multiplier until our file is no larger than the reference.
my $f = 0.1;
while ($f < 100) {
    print "$f...\n";
    open my $ff, ">q.txt";
    for (my $i = 0; $i < 64*2; $i++) {
        my $x = $i % 8;
        my $y = int($i/8) % 8;
        #my $v = dither((1 + sqrt($x**2 + $y**2)) * 8 * $f); if ($v < 1) { $v = 1; }
        # sensitivity Y  3.000000000 (sum(abs(...)) instead of euclid. dist.)
        # sensitivity Pb 2.116136000 "
        # sensitivity Pr 2.116136000 "
        my $inverse_sensitivity = 1;
        if ($i >= 64) { $inverse_sensitivity = 3/2.116136; }
        my $v = dither((1 + $x + $y) * 8 * $f * $inverse_sensitivity);
        if ($v < 1) { $v = 1; }
        print $ff "$v ";
        if ($i % 8 == 7)   { print $ff "\n"; }
        if ($i % 64 == 63) { print $ff "\n"; }
    }
    close $ff;
    system("cjpeg -qtable q.txt -qslots 0,1,1 -sample 2x2,1x1,1x1 -dct float -optimize ori.ppm > myq.jpg");
    last if stat("myq.jpg")->size <= stat("cjpeg.jpg")->size;
    $f += 0.01;
}

# Amplified difference images for visual comparison.
system("convert myq.jpg ori.ppm -fx 'v+3*(u-v)' myq-diff.ppm");
system("convert cjpeg.jpg ori.ppm -fx 'v+3*(u-v)' cjpeg-diff.ppm");
system("convert myq.jpg ori.ppm -fx '15*(u-v)+0.5' myq-diff2.ppm");
system("convert cjpeg.jpg ori.ppm -fx '15*(u-v)+0.5' cjpeg-diff2.ppm");
#system("convert ori.ppm cjpeg-diff.ppm cjpeg.jpg myq-diff.ppm myq.jpg vergl.tif");
# Note: ref-diff.ppm and ref-diff2.ppm must already exist from a previous run.
system("convert ori.ppm cjpeg-diff.ppm cjpeg-diff2.ppm ref-diff.ppm ref-diff2.ppm myq-diff.ppm myq-diff2.ppm vergl.tif");
print "f=$f\n";
Re: Better JPEG quantization tables?
Posted: 2014-03-31T05:32:02-07:00
by Rumpelstielzchen
Oh, and well... CIEDE2000 says something different:
If traversing the whole Y/Pb/Pr space and looking for "what's happening when changing one of the three a bit",
LittleCms2.5 tells us the following:
de00 - Y: 0.292999
de00 - Pb: 0.442875
de00 - Pr: 0.407863
I'm no expert, but my experience says that Y (lightness, brightness, whatever) is underrated by CIEDE2000; perhaps that's due to my uncalibrated eyes.
Here is the C program to get this result (extract with base64 -d | bzip2 -d -c ; build with gcc ... -llcms2 -lm):
Code: Select all
QlpoOTFBWSZTWZXe/mAAAKPfgAAwe3///z/3336/59+uUAOb127d1LcZ1axCSSJNqejRpoJ6mQyA
MgBoAGnpNGjah6gaECKeEE/UmTTCaGjRkwCGEMIAyDCREqeJo0nqnlPSeo9IPUaPU0ZDIPUDI0aa
DE00OMmTRiGmhgJoYmjTJiBkYTRpphBkwkSETCp+lP1Q2jypmobU0GgBoNAAADT1GkhoaGMTYDGN
pME1onI3SVR9TNGWuQBOkIhtp7ZTc0uq5wOCYVEUbGWGcUBf4yPEFCgZmfrtkNsbchec+38d374l
c7U1HInwWSk6lAmwEbleyJYE1BIeLcw1eson85SMWa59CA2IGLEwcG17EaIhDvf90kUtRnxFHqHw
Ld2XiKhFGs4V6hP3UrVW2OYalBSHD1VfgRkeJ6hsbbpEycNgkj2om9JX1g5wpA3molgGYMJM3S/k
ADAGBz+hc0kUDnPIjge3riXtOM0SVdOrJVBQ5y26caWhMMOO9EA0QxQGHb5t5birK807RNWtA+MG
cpCXmnsTAZcgajWizYcuwcL4c4EioFvTlJHTvAWgfIN+sRYA6zu0vaa4+JXW6eWyidoBcB2Kg6YG
M1JFk2mnyBLDeIGM1AyYG5AUCIDGydWlt9Hkvz7pDkGx+Ld7Gm6vcer2UtuwkpnX37JwPBMY589l
cKcRpRM4Y7HZIRIA5C5GtzAo3uhcCstlhimKKMgHkZK/NnMoEyaZgQe2EKeYoeTy+LbyxoogJ/Nw
7QZtCnJEJEIm8QnB119rtZircZfdQs9l9yykLDYXnxojTvJoMj6sSRyHMbCrFnFKXAERsDrSlWsU
21ISSoU4+0yL8VRx57FtNbJ1CMojh7KbbbBjGxttvh3spynAbj9FhJYNxChQocRETUCWG+N55kSM
1OQcgffdQGC+nf8gNasrlTJdwaYNpdSIGDOsN/mwG/oXt5J0uE0crbVRjri1NUrjmnNVSVbpTQwN
BUukq/FISka1A+Yt/b4B3h2QyutsfgufwVn1TKzMURD8V5YBqFNehgc1crkrNYWTgBxryKmEek8k
EjaVfi6t4zWOXDKJBrklIa1rEC8zngWgdAMRLGY3tp5jolTLZApmaDgi9bIbi8bQbwbKqKihUGUr
PBqEF2GvW2x3SsMRDGwaiITGmTESVVo7n2DypiqojV4b0xKSGZ8QGYyBWtgwzN+7LRgKTTxbTF/H
/uEdv/F3JFOFCQld7+YA
Re: Better JPEG quantization tables?
Posted: 2014-03-31T11:46:26-07:00
by Rumpelstielzchen
Dear reader,
see below for my latest qtable.pl (cat snippet | base64 -d | bzip2 -d -c ) to compare my qtables.
My experience: taking into account the CIEDE2000 results from above, you'll get, at cjpeg quality 50, very slightly (perhaps) better results, even with 1.5 times larger quantization values for Y than for Pb.
Please remember that the greater spatial resolution of luma relative to chroma is already accounted for by the chroma subsampling.
Kind regards,
Jochen
QlpoOTFBWSZTWUoE0ygAAI1fgAIwf////4EAQCS/9//6UAM6Fbbe7rOgBKaghGnqnqbUTegR6oep
7VPKep5Tyn6QT1MmmBKJU8yT1NNPRJiYI0MAAhgCaAaeooU9T0T9TUeieoek0Gg0AAAyD1DmATTA
JkMAATBMAAAEkgmiekyNJ6aE9qo8gIADQNA09SyEygYBhDAIGzXgbkXxMgGcBOqh0Qt0fPrnjKih
7kJpkckEiY1o8+P+0Ob2gyvWDMnHUCP4m1ewZam8LuYSZ8teY9bNiS9fGfJj4XZ41CjtSSuga0lW
cYNul6W9053OLrac0nKcdzgHbXEwjpipvk989NiAkzT2BEJqkyfTUDm3MCVwG1bx34hHpQzmd3GF
O7TQnSG7nnKCEaVWaIjODpYlITGOnpHoE3WWaaiiE2DGGiVOW8uKr+ENxg2YdUDMbUkzYooVnjKZ
ZI5JzRKTxTLRbjIErlqskXxRop6HccrGKwISwCb9Suy2rb13ySkhMowrCDAVpLWVAgGRwIkiOf63
0jZVvn0iYRkINa2R3eV43uzjqMvgWDvnwcRspGcpnBr3JIl8fAtIQeVUebxsxQMaPiRo5MndqcRK
qDjsvZtnzqgsDaeFxaPUiWp4zDEuqFPCWO6Wip+aDld90gTFLj5JUTqdyTzxzqnOq7bZsurfjYUX
uwmB9dsCOExUCqxciuDNb0TGQOz7Zx+3c/kTNziuTUlsYEJIJ/GE2ZWUdjisVLYnSZ1cq7+8cdx6
gc8NWjA/wK+ADvpOSXkqEWEVxNyrymg/ahgOchNiwoAmkXOH7AkWiGETJTV3OwIgMIs95pKOWQWs
CUHdpna8LMFZUClfVJrCJMicVVYAuIiz7mN+0qpvHWbpAhgPNIlnjv5MeLJ4FO/XPyCc3oHIXEVN
xowFbQ6IAR2tzBWdjhHDIEcARMsMqzvEPx4Xru6FKD9ZVI0SJAbNKH9LCPzCmmVRwDQuYdeXoSvL
fZZnZy+VeQEN0WxZHVUHE7cDkMDnkE/PphVdriCk9UMzdFs4EQNQV1RbPUce6geI4W0vEWpSwCIv
geDlptOsQwPZmTOEXAWBMS3spn/F3JFOFCQSgTTKAA==
Re: Better JPEG quantization tables?
Posted: 2014-03-31T12:13:39-07:00
by Rumpelstielzchen
Dear reader,
modifying the "core formula" for the quantization coefficients from "1+x+y" to "(1+x)*(1+y)" -- as a "stronger penalty" for high frequencies in both x and y -- produces partially _softer_ images and does not seem (to me) to be the right way to go further.
Re: Better JPEG quantization tables?
Posted: 2014-03-31T13:13:03-07:00
by Rumpelstielzchen
Dear Mr. Robidoux,
I've tested your first table of this thread (2012-02-21T21:46:52+00:00).
Your qtable results in somewhat more sharpness at the expense of color reproduction
(spatial resolution vs. color granularity)
(as expected).
Re: Better JPEG quantization tables?
Posted: 2014-04-01T05:11:33-07:00
by Juce
Here is my qtable generator:
Code: Select all
/*
  Written in 2014 by Jukka Ripatti

  To the extent possible under law, the author has dedicated all copyright
  and related and neighboring rights to this software to the public domain
  worldwide. This software is distributed without any warranty.

  See http://creativecommons.org/publicdomain/zero/1.0/
*/
#include <stdio.h>
#include <math.h>

/*
  f(0) == 0    f(1) == 1
  f'(0) == 0   f'(1) == 0
  f''(0) == 0  f''(1) == 0
  f'''(0) == 0 f'''(1) == 0
*/
double f(double x) {
    return 16 * pow(x, 9) - 72 * pow(x, 8) + 108 * pow(x, 7) - 42 * pow(x, 6) - 36 * pow(x, 5) + 27 * pow(x, 4);
    // return .5 * (1. - sin(.5 * M_PI * cos(M_PI * x)));
}

int main(int argc, char **argv) {
    double quality;
    double min, max;
    if (argc != 2
        || sscanf(argv[1], "%lf", &quality) != 1
        || !(quality <= 100. && quality >= -100.)) {
        fprintf(stderr, "Usage: %s <0...100> > qtable.txt\n", argv[0]);
        return 1;
    }
    printf("# %1.3f\n", quality);
    printf("# cjpeg -optimize -dct float -qtables this_file.txt -qslots 0 -sample 2x2 -outfile outputfile.jpg inputfile\n\n");
    quality = sqrt(1. - (quality / 100.));
    min = exp2(quality * 6);
    max = exp2(quality * 10.65); // This value may need fine-tuning.
    int i, j;
    for (j = 0; j < 8; j++) {
        for (i = 0; i < 8; i++) {
            double value;
            value = (f(i / 7.) + f(j / 7.)) / 2.;
            value = value * (max - min) + min;
            if (value > 255.) value = 255.;
            printf(" %3.0f", value);
        }
        printf("\n");
    }
    return 0;
}
It creates tables that are intended to be used as such, without the -quality setting in cjpeg. It works fine with settings between 75 and 100.
For example, a table created with setting 75:
Code: Select all
# 75.000
# cjpeg -optimize -dct float -qtables this_file.txt -qslots 0 -sample 2x2 -outfile outputfile.jpg inputfile
8 8 10 13 19 22 24 24
8 8 10 14 19 23 24 24
10 10 11 15 20 24 26 26
13 14 15 19 24 28 29 30
19 19 20 24 29 33 34 35
22 23 24 28 33 37 38 38
24 24 26 29 34 38 40 40
24 24 26 30 35 38 40 40
Setting 90:
Code: Select all
# 90.000
# cjpeg -optimize -dct float -qtables this_file.txt -qslots 0 -sample 2x2 -outfile outputfile.jpg inputfile
4 4 4 5 6 7 7 7
4 4 4 5 6 7 7 7
4 4 4 5 6 7 7 7
5 5 5 6 7 8 8 8
6 6 6 7 8 9 9 9
7 7 7 8 9 10 10 10
7 7 7 8 9 10 10 10
7 7 7 8 9 10 10 10
Re: Better JPEG quantization tables?
Posted: 2014-04-01T07:28:52-07:00
by Rumpelstielzchen
Dear Juce,
after taking a first look at your f(x) using gnuplot, my first thought was: -cos(pi*x).
But it has a lower f'(x) near 0 and 1.
What do you think about different tables for Y, Pb and Pr ?
Kind regards,
Jochen
Re: Better JPEG quantization tables?
Posted: 2014-04-01T07:59:02-07:00
by Rumpelstielzchen
Same file size as your quality-75 table,
with the sqrt(1+x²+y²) kernel (the (1+x+y) kernel makes little visual difference):
5 7 11 15 20 25 30 34
7 9 12 16 21 26 31 35
11 12 15 18 23 27 31 36
15 16 18 22 25 29 33 37
21 21 22 25 28 31 36 40
25 26 27 29 32 35 38 43
30 31 31 33 35 39 42 46
34 35 36 38 40 43 45 49
4 5 7 11 14 16 20 23
5 6 8 11 13 17 20 23
7 8 9 12 15 18 21 24
11 11 12 14 17 19 22 25
13 14 15 17 19 21 24 26
17 17 18 20 21 23 25 28
20 20 21 22 23 26 27 30
23 24 24 25 27 28 30 32
4 4 8 12 14 18 21 25
5 6 9 12 15 18 21 26
8 8 10 13 16 20 22 26
11 11 13 15 18 21 24 27
15 15 16 18 20 23 26 29
18 19 19 21 23 25 28 30
21 22 23 24 26 28 30 32
25 25 26 27 28 31 32 35
My opinion: your qtable for q=75 produces slightly more color disturbance, and perhaps slightly more sharpness.
Kind regards,
Jochen