Grayscale TIFF depth change from 32-bit to 16-bit


Grayscale TIFF depth change from 32-bit to 16-bit

Post by Ryan Marcus »

Hello,

I had a question about how exactly ImageMagick performs a depth conversion from a grayscale 32-bit floating-point TIFF to a 16-bit grayscale TIFF:

Code:

convert 32bitGrayscaleTiff.tif -depth 16 16bitGrayscaleTiff.tif
The command works fine, and convert appears to produce the desired output, but I'm confused about how it works.

The range of a 32-bit number is obviously much larger than that of a 16-bit number. How does ImageMagick quantize the 32-bit values down to 16-bit values?

Thanks for any help you can provide,

Ryan

EDIT: typos and reading the TIFF documentation

Re: Grayscale TIFF depth change from 32-bit to 16-bit

Post by fmw42 »

Are you talking about a 32-bit grayscale image or a 32-bit RGBA (8 bits per channel) image?

See also the TIFF format notes at http://www.imagemagick.org/script/formats.php

The IM developers would have to explain if you need more detail.

Re: Grayscale TIFF depth change from 32-bit to 16-bit

Post by Ryan Marcus »

Oops, I should've been more clear. Both images are grayscale, i.e., I'm converting a 32-bit grayscale TIFF to a 16-bit grayscale TIFF.

I also discovered the sample format flag, which answers the first question: http://www.awaresystems.be/imaging/tiff ... ormat.html
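For anyone else who lands here, this is roughly how that tag can be checked with libtiff. This is only a sketch: "input.tif" is a placeholder file name and error handling is minimal.

Code:

#include <stdio.h>
#include <stdint.h>
#include <tiffio.h>

int main(void)
{
    /* "input.tif" is just a placeholder file name */
    TIFF *tif = TIFFOpen("input.tif", "r");
    uint16_t format = SAMPLEFORMAT_UINT, bits = 0;

    if (tif == NULL)
        return 1;

    /* SampleFormat defaults to unsigned integer when the tag is absent */
    TIFFGetFieldDefaulted(tif, TIFFTAG_SAMPLEFORMAT, &format);
    TIFFGetField(tif, TIFFTAG_BITSPERSAMPLE, &bits);

    printf("bits per sample: %u, floating point: %s\n",
           (unsigned) bits,
           format == SAMPLEFORMAT_IEEEFP ? "yes" : "no");

    TIFFClose(tif);
    return 0;
}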

Thanks,

Ryan

Re: Grayscale TIFF depth change from 32-bit to 16-bit

Post by glennrp »

The range is the same, 0.0 to 1.0; the 32-bit numbers are just a lot more precise. To convert 32-bit to 16-bit, we in essence just drop the lower 16 bits of each sample (actually using a floating-point division that is slightly more accurate).

It looks something like this:
q16 = (unsigned short) (q32/65537.0 + 0.5);
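Spelled out as a tiny standalone program (not the actual ImageMagick source, just the same idea; 65537.0 is (2^32-1)/(2^16-1), so the Q32 maximum lands exactly on the Q16 maximum):

Code:

#include <stdio.h>

/* Scale a 32-bit unsigned sample down to 16 bits by
 * floating-point division plus rounding. */
static unsigned short quantum32_to_16(unsigned int q32)
{
    return (unsigned short) (q32 / 65537.0 + 0.5);
}

int main(void)
{
    printf("%u -> %hu\n", 0u,          quantum32_to_16(0u));
    printf("%u -> %hu\n", 2147483648u, quantum32_to_16(2147483648u));
    printf("%u -> %hu\n", 4294967295u, quantum32_to_16(4294967295u));
    return 0;
}

The endpoints map to 0 and 65535, and everything in between rounds to the nearest 16-bit value.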

Re: Grayscale TIFF depth change from 32-bit to 16-bit

Post by Ryan Marcus »

Ah, that makes sense.

In my investigation, I wrote some quick libtiff code that maps the actual range of the floats in the image to 0...1 and then multiplies by USHRT_MAX. I believe the differences I was seeing between my application's output and ImageMagick's are explained by that slight difference in linearization.

I'm confused by the idea of "dropping the lower 16 bits of each sample" -- floating-point data has an 8-bit exponent that isn't value-equivalent to the mantissa, right? Or does the bit math essentially work out to a normalization?

Thanks,

Ryan

Re: Grayscale TIFF depth change from 32-bit to 16-bit

Post by glennrp »

Ryan Marcus wrote:
In my investigation, I wrote some quick libtiff code that maps the actual range of the floats in the image to 0...1 and then multiplies by USHRT_MAX. I believe the differences I was seeing between my application's output and ImageMagick's are explained by that slight difference in linearization.

I'm confused by the idea of "dropping the lower 16 bits of each sample" -- floating-point data has an 8-bit exponent that isn't value-equivalent to the mantissa, right? Or does the bit math essentially work out to a normalization?
I don't know what TIFF does internally. If it uses floating-point samples, which is apparently the case, then what you did is correct. In a regular Q32 ImageMagick build the samples are 32-bit unsigned integers, so you can use the division-plus-rounding I showed above to convert Q32 samples to Q16 samples.
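Just to make the difference concrete, here is a rough sketch of the two float-sample paths being discussed -- treating samples as already normalized to 0.0-1.0, versus remapping the file's actual min/max the way you described. The function names are purely illustrative, not ImageMagick or libtiff code:

Code:

#include <stdio.h>
#include <limits.h>   /* USHRT_MAX */

/* Path 1: assume float samples are already 0.0-1.0 (the usual
 * convention for floating-point TIFF); clamp and scale. */
static unsigned short float01_to_16(float s)
{
    if (s <= 0.0f) return 0;
    if (s >= 1.0f) return USHRT_MAX;
    return (unsigned short) (s * 65535.0f + 0.5f);
}

/* Path 2: remap the image's actual [min,max] onto 0.0-1.0 first,
 * then scale -- the linearization described above.  The results
 * differ whenever min/max are not exactly 0.0/1.0. */
static unsigned short float_minmax_to_16(float s, float min, float max)
{
    float t = (max > min) ? (s - min) / (max - min) : 0.0f;
    return (unsigned short) (t * USHRT_MAX + 0.5f);
}

int main(void)
{
    /* a sample of 0.25 in an image whose actual range is 0.1-0.9 */
    printf("assume 0-1:    %hu\n", float01_to_16(0.25f));
    printf("min/max remap: %hu\n", float_minmax_to_16(0.25f, 0.1f, 0.9f));
    return 0;
}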

Re: Grayscale TIFF depth change from 32-bit to 16-bit

Post by Ryan Marcus »

Awesome, thank you for the clarification!

Thanks,

Ryan