32-bit TIFF vs 16-bit default quantum: silently corrupts data
Posted: 2011-03-16T13:13:14-07:00
When looking at a 32-bit greyscale TIFF (for example, to dump its pixels out as integers using 'stream'), ImageMagick silently "corrupts" the data, converting, for example, a TIFF pixel whose real value is 0xffeeddcc into a reported value of 0xffeeffee.
This turns out to be because the default quantum is only 16 bits, but it took me a very long time to discover this, and along the way I had confidently identified (and reported) what is probably an invalid bug against IM.
It seems rather strange to default to handling ints as 16-bit when the CPU's native type is either 32 or 64 bits. But notwithstanding that, there's rather a violation of the principle of least surprise going on: I start with a 32-bit integer pixel value and dump out a value formatted as "integer" (32-bit), so surely there shouldn't be any data loss?
Please may I request that, when IM is processing/converting/displaying/streaming an image with a bit depth greater than the compiled quantum depth, it at least emit a WARNING on stderr to this effect?
Something like: fprintf(stderr, "Warning: %s was compiled with quantum depth %d, but your image uses %d bits per pixel. Less significant bits will be discarded.\n", argv[0], quantum, bitdepth);
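To be concrete, here is a minimal, self-contained sketch of the kind of check I have in mind; the function name and the quantum_depth/image_depth parameters are just placeholders for illustration, not existing IM API:

    #include <stdio.h>

    /* Hypothetical helper: warn when an image's bit depth exceeds the
       quantum depth the binary was compiled with, so the user knows the
       low-order bits are about to be discarded. */
    static void warn_if_depth_exceeds_quantum(const char *prog,
                                              unsigned int quantum_depth,
                                              unsigned int image_depth)
    {
        if (image_depth > quantum_depth)
            fprintf(stderr,
                    "Warning: %s was compiled with quantum depth %u, but your "
                    "image uses %u bits per pixel. Less significant bits will "
                    "be discarded.\n",
                    prog, quantum_depth, image_depth);
    }

    /* Example: a Q16 build reading a 32-bit TIFF would print the warning. */
    int main(int argc, char **argv)
    {
        (void)argc;
        warn_if_depth_exceeds_quantum(argv[0], 16, 32);
        return 0;
    }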
I'm pleased that "display" can in fact handle 32-bit TIFFs mostly-correctly (most other apps fail entirely, or treat them wrongly as RGBA). But it should warn me that the low bits are being ignored. The existence of such a warning would have saved me many hours of work
Aside: there doesn't seem to be any way (except by experiment) to discover what the quantum depth is when faced with a given IM binary. Shouldn't "--version" print this out?
Thanks for your help - Richard
P.S. For future users who chance upon this, the solution is to rebuild IM using: ./configure --with-quantum-depth=32