I had a question about how exactly ImageMagick performs a depth conversion from a 32-bit floating-point grayscale TIFF to a 16-bit grayscale TIFF:
Code:
convert 32bitGrayscaleTiff.tif -depth 16 16bitGrayscaleTiff.tif
The range (and precision) of a 32-bit floating-point value is obviously much larger than that of a 16-bit integer. How does ImageMagick quantize the 32-bit float values down to 16-bit values?
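For reference, the kind of mapping I was imagining is a simple linear scale-round-clamp, sketched below. This is only my guess at the sort of conversion involved (it assumes the float samples are already normalized to the 0.0–1.0 range), not a claim about what ImageMagick actually does internally:

Code:
#include <math.h>
#include <stdint.h>

/* Hypothetical illustration only: map a float sample (assumed to be
   normalized to 0.0 .. 1.0) onto an unsigned 16-bit sample by linear
   scaling, clamping, and rounding to the nearest integer. */
static uint16_t float_to_uint16(float sample)
{
    double scaled = (double) sample * 65535.0;  /* stretch [0,1] onto [0,65535] */
    if (scaled < 0.0)
        scaled = 0.0;                           /* clamp values below the range */
    if (scaled > 65535.0)
        scaled = 65535.0;                       /* clamp values above the range */
    return (uint16_t) lround(scaled);           /* round to nearest 16-bit value */
}

In particular I'm wondering whether the conversion really is linear like this, and what happens to float values that fall outside the normalized range.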
Thanks for any help you can provide,
Ryan
EDIT: typos and reading the TIFF documentation