
Grayscale TIFF depth change from 32-bit to 16-bit

Posted: 2013-06-24T10:57:18-07:00
by Ryan Marcus
Hello,

I had a question about how exactly ImageMagick performs a depth conversion from a grayscale 32-bit floating point TIFF to a 16-bit grayscale TIFF:

Code: Select all

convert 32bitGrayscaleTiff.tif -depth 16 16bitGrayscaleTiff.tif
The command works fine, and convert appears to produce the desired output, but I'm confused about how it works.

The range of a 32-bit number is obviously much larger than that of a 16-bit number. How does ImageMagick quantize the 32-bit values into 16-bit values?

Thanks for any help you can provide,

Ryan

EDIT: typos and reading the TIFF documentation

Re: Grayscale TIFF depth change from 32-bit to 16-bit

Posted: 2013-06-24T11:02:32-07:00
by fmw42
Are you talking about a 32-bit grayscale image or a 32-bit RGBA (8 bits per channel) image?

See also the TIFF entry in the formats list at http://www.imagemagick.org/script/formats.php

The IM developers would have to explain if you need more detail.

Re: Grayscale TIFF depth change from 32-bit to 16-bit

Posted: 2013-06-24T11:25:40-07:00
by Ryan Marcus
Oops, I should've been clearer. Both images are grayscale, i.e., I'm converting a 32-bit grayscale TIFF to a 16-bit grayscale TIFF.

I also discovered the SampleFormat tag, which answers the first question: http://www.awaresystems.be/imaging/tiff ... ormat.html

Thanks,

Ryan

Re: Grayscale TIFF depth change from 32-bit to 16-bit

Posted: 2013-06-24T18:33:46-07:00
by glennrp
The range is the same, 0.0 to 1.0; the 32-bit numbers are just a lot more precise. To convert 32-bit to 16-bit, we essentially just drop the lower 16 bits of each sample (actually using a floating-point division that is slightly more accurate).

It looks something like this:
q16 = (unsigned short) (q32/65537.0 + 0.5);
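
Spelled out a bit more, that scaling might look like the sketch below (not the actual ImageMagick source). Since 4294967295/65535 == 65537, dividing by 65537 and rounding maps the full 32-bit range onto the full 16-bit range, whereas simply shifting right by 16 divides by 65536 and loses a tiny bit of accuracy.

Code: Select all

#include <stdint.h>
#include <stdio.h>

/* Sketch of the Q32 -> Q16 scaling described above. */
static uint16_t quantum32_to_16(uint32_t q32)
{
    return (uint16_t) (q32 / 65537.0 + 0.5);
}

int main(void)
{
    uint32_t samples[] = { 0u, 65537u, 2147483648u, 4294967295u };
    for (int i = 0; i < 4; i++)
        printf("%lu -> %u\n", (unsigned long) samples[i],
               (unsigned) quantum32_to_16(samples[i]));
    return 0;
}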

Re: Grayscale TIFF depth change from 32-bit to 16-bit

Posted: 2013-06-26T15:03:28-07:00
by Ryan Marcus
Ah, that makes sense.

In my investigation, I wrote some quick libtiff code that maps the actual range of the floats in the image onto 0...1 and then multiplies that value by USHRT_MAX. I believe the differences I was seeing between my application's output and ImageMagick's are explained by the slight difference in linearization.
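
Boiled down, the mapping I used looks roughly like this (variable names are made up and error handling is omitted; the pixels array stands in for the raster read with libtiff):

Code: Select all

#include <float.h>
#include <limits.h>
#include <math.h>
#include <stddef.h>
#include <stdint.h>

/* Stretch the actual range of the float samples onto 0..1, then scale
   by USHRT_MAX.  This is a slightly different linearization than a
   fixed 0.0..1.0 scaling, which would explain small output differences. */
static void floats_to_shorts(const float *pixels, uint16_t *out, size_t count)
{
    float lo = FLT_MAX, hi = -FLT_MAX;
    for (size_t i = 0; i < count; i++) {
        if (pixels[i] < lo) lo = pixels[i];
        if (pixels[i] > hi) hi = pixels[i];
    }
    float range = (hi > lo) ? (hi - lo) : 1.0f;  /* avoid divide-by-zero */
    for (size_t i = 0; i < count; i++) {
        float n = (pixels[i] - lo) / range;      /* normalize to 0..1 */
        out[i] = (uint16_t) lroundf(n * USHRT_MAX);
    }
}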

I'm confused by the idea of "dropping the lower 16 bits of each sample": floating-point data has an 8-bit exponent that is not value-equivalent to the mantissa, right? Or does the bit math essentially work out to a normalization?

Thanks,

Ryan

Re: Grayscale TIFF depth change from 32-bit to 16-bit

Posted: 2013-06-26T18:22:32-07:00
by glennrp
Ryan Marcus wrote: Ah, that makes sense.

In my investigation, I wrote some quick libtiff code that maps the actual range of the floats in the image onto 0...1 and then multiplies that value by USHRT_MAX. I believe the differences I was seeing between my application's output and ImageMagick's are explained by the slight difference in linearization.

I'm confused by the idea of "dropping the lower 16 bits of each sample": floating-point data has an 8-bit exponent that is not value-equivalent to the mantissa, right? Or does the bit math essentially work out to a normalization?
I don't know what TIFF does. If it uses floating-point samples, which is apparently the case, then what you did is correct. In a regular Q32 ImageMagick build, the samples are 32-bit unsigned integers, so you can use the integer division plus rounding that I mentioned to convert Q32 samples to Q16 samples.
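
If it helps, the SampleFormat tag you found is what distinguishes the two cases. A minimal libtiff check might look something like this (just a sketch, not tested):

Code: Select all

#include <stdint.h>
#include <tiffio.h>

/* Sketch: IEEE floating-point samples get the 0..1 (or min/max) scaling,
   while unsigned integer samples can be divided down as for Q32 -> Q16. */
static int has_float_samples(TIFF *tif)
{
    uint16_t fmt = SAMPLEFORMAT_UINT;   /* the tag defaults to unsigned int */
    TIFFGetFieldDefaulted(tif, TIFFTAG_SAMPLEFORMAT, &fmt);
    return fmt == SAMPLEFORMAT_IEEEFP;
}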

Re: Grayscale TIFF depth change from 32-bit to 16-bit

Posted: 2013-06-27T09:18:41-07:00
by Ryan Marcus
Awesome, thank you for the clarification!

Thanks,

Ryan