32-bit mean pixel calculation
Posted: 2016-04-12T05:21:52-07:00
Hi
I used the following command to calculate the mean pixel value from a series of 16-bit input images (ImageMagick 6.9.1-2 Q16):
convert t00.tif t01.tif t02.tif -evaluate-sequence mean t_mean.tif
For the first pixel, for example, the original images have values of 2312, 1276 and 4305, and the computed mean is 2631, which makes sense.
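That is just the straight average of the three 16-bit values:

(2312 + 1276 + 4305) / 3 = 7893 / 3 = 2631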
I then converted these images to 32-bit; the same pixel now shows values of 151521544, 83625212 and 282136785. When I re-computed the mean (ImageMagick 6.9.1-2 Q32, HDRI enabled), the value I got back for that pixel was 000.040.
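Incidentally, the 32-bit numbers appear to be exactly the 16-bit values multiplied by 65537, i.e. (2^32 - 1) / (2^16 - 1):

2312 x 65537 = 151521544
1276 x 65537 = 83625212
4305 x 65537 = 282136785

so I would have expected the 32-bit mean for that pixel to come out as (151521544 + 83625212 + 282136785) / 3 = 172427847.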
The histograms all look the same, so the image values are just being scaled; however, all precision is lost in the final mean calculation. Two queries:
1. Why is the 32-bit mean calculation not an integer?
2. Can I increase the number of significant figures it reports (see the sketch below for the sort of thing I mean)?
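This is the kind of thing I was hoping would work for printing a pixel value with more digits; I'm only guessing that -precision and the pixel: format escape are the right tools here, so please treat it as a sketch rather than something I know to be correct:

convert t_mean.tif -precision 15 -format "%[pixel:p{0,0}]" info: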
thanks
mike