HDRI - Statistical bias in conversion of 16bit.ppm to 8bit.ppm
Posted: 2010-05-04T16:19:48-07:00
Hello!
I know this is an exotic topic, and it's not exactly a bug, but I noticed a statistical bias in the HDRI-enabled version.
I was doing statistics on the differences between 8-bit and 16-bit versions of the same image, to find the standard deviation of the sampling error (it should be about 1/1024, and it is), and I found some weird results when using the HDRI version to convert the 16-bit source file to the 8-bit file.
The 16-bit integer version seems to work fine, with minimal statistical bias. The HDRI version introduces a bias of half a sample in the conversion. Both versions are consistent, in that converting 8->16->8 produces the same file whether using only the HDRI version, only the integer version, or mixing them. The inconsistency only occurs when converting original 16-bit files (genuinely containing 16 bits of precision) into 8-bit files.
Here are the statistics for the 16-bit integer version:
channel;minimum;maximum;average;avgdiff;inv_avgdiff
0;-1.95318E-3;+1.95315E-3;-1.34887E-6;+9.46763E-4;+1.05623E3
1;-1.95318E-3;+1.95315E-3;-9.19341E-7;+9.69026E-4;+1.03196E3
2;-1.95318E-3;+1.95315E-3;-8.85847E-7;+9.69657E-4;+1.03129E3
And for the HDRI version:
channel;minimum;maximum;average;avgdiff;inv_avgdiff
0;+0.00000E0;+3.90631E-3;+2.02278E-3;+1.01292E-3;+9.87250E2
1;+0.00000E0;+3.90631E-3;+1.97609E-3;+9.91682E-4;+1.00839E3
2;+0.00000E0;+3.90631E-3;+1.97442E-3;+9.91065E-4;+1.00902E3
These are the statistics for the 16bit.ppm minus the 8bit.ppm, so the bias makes the 8-bit image about 1/500 (half a sample) darker.
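The sign and size of the offset (roughly +1/510) look like what quantizing by truncation instead of rounding would produce. That's only my guess at the cause, not something I've confirmed in the ImageMagick source; here is a toy Java sketch comparing the two strategies over all 16-bit values:

```java
// Toy demonstration (my assumption about the cause, not ImageMagick's actual
// code): quantizing 16-bit samples to 8 bits by truncation biases the result
// low by about half a step on average, while rounding is essentially unbiased.
public class QuantBias {
    // Mean signed error (16-bit value minus re-expanded 8-bit value),
    // averaged over every possible 16-bit sample.
    static double biasOf(boolean round) {
        double sum = 0;
        for (int v = 0; v <= 65535; v++) {
            double x = v / 65535.0;                  // 16-bit sample in [0,1]
            int q = round ? (int) (x * 255.0 + 0.5)  // round to nearest
                          : (int) (x * 255.0);       // truncate toward zero
            sum += x - q / 255.0;                    // signed quantization error
        }
        return sum / 65536.0;                        // mean error = the bias
    }

    public static void main(String[] args) {
        System.out.printf("rounding bias:   %+.6f%n", biasOf(true));
        System.out.printf("truncation bias: %+.6f%n", biasOf(false));
    }
}
```

The truncation bias comes out close to +1/510, about 0.00196, which matches the +0.002 average in the HDRI table above.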
I did the difference and statistics myself, with a small program I wrote in Java. It converts the interval 0-255 to 0.0-1.0 and 0-65535 to 0.0-1.0, both into arrays of floats, then subtracts the arrays and computes the statistics in double precision.
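For reference, the comparison boils down to something like this (my own reconstruction of the procedure just described, not the original program):

```java
// Sketch of the comparison: normalize both images to [0,1] as floats,
// subtract, then accumulate min/max/mean/stddev in double precision.
public class DiffStats {
    // Normalize 8-bit samples (0-255) to [0.0, 1.0].
    static float[] normalize8(int[] samples) {
        float[] out = new float[samples.length];
        for (int i = 0; i < samples.length; i++) out[i] = samples[i] / 255.0f;
        return out;
    }

    // Normalize 16-bit samples (0-65535) to [0.0, 1.0].
    static float[] normalize16(int[] samples) {
        float[] out = new float[samples.length];
        for (int i = 0; i < samples.length; i++) out[i] = samples[i] / 65535.0f;
        return out;
    }

    // Statistics of (a16 - a8): returns {minimum, maximum, average, stddev}.
    static double[] stats(float[] a16, float[] a8) {
        double min = Double.POSITIVE_INFINITY, max = Double.NEGATIVE_INFINITY;
        double sum = 0, sumSq = 0;
        int n = a16.length;
        for (int i = 0; i < n; i++) {
            double d = (double) a16[i] - a8[i];
            if (d < min) min = d;
            if (d > max) max = d;
            sum += d;
            sumSq += d * d;
        }
        double mean = sum / n;
        // clamp guards against tiny negative values from rounding error
        double stddev = Math.sqrt(Math.max(0.0, sumSq / n - mean * mean));
        return new double[] { min, max, mean, stddev };
    }
}
```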
The input images were raw photographs converted into 16-bit ppm at an earlier stage, then converted into 8bit.ppm using the 16-bit integer ImageMagick build and then the HDRI-enabled build.
Just to clarify the meaning of the columns:
channel: the number of the ppm channel, 0=R, 1=G, 2=B
minimum: the value of the minimum in the differences
maximum: the value of the maximum in the differences
average: the average of the differences; ideally this should tend to zero
avgdiff: the standard deviation of the differences
inv_avgdiff: the inverse of the standard deviation of the differences (a.k.a. signal to noise ratio)
I don't really know if this is an error or just something I'm interpreting badly. It happens with all the photos I tried. I'm not even sure I've explained myself well.
If you need anything to help uncover this (the program I used to gather the statistics, for example), just ask. It's trivial code, but it's available.