
32-bit TIFF vs 16-bit default quantum: silently corrupts data

Posted: 2011-03-16T13:13:14-07:00
by RichardNeill
When looking at a 32-bit greyscale TIFF (for example, to dump pixels out as integers using 'stream'), ImageMagick silently "corrupts" the data, converting, for example, a TIFF pixel whose real value is 0xffeeddcc into a reported value of 0xffeeffee.

This turns out to be because the default quantum depth is only 16 bits, but it took me a very long time to discover that, and in the meantime I had confidently identified (and reported) what is probably an invalid bug against IM.
It seems rather strange to default to handling ints as 16-bit when the CPU's native type is either 32 or 64 bits. But notwithstanding that, there's rather a violation of the principle of least surprise going on: I start with a 32-bit integer pixel value and dump out a value formatted as "integer" (32-bit), so surely there shouldn't be any data loss?
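
For what it's worth, the reported value is consistent with the sample being squeezed through a 16-bit quantum and then re-expanded by bit replication. The following standalone arithmetic is just my illustration of that, not IM source code:

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        uint32_t original  = 0xffeeddcc;                      /* real 32-bit sample */
        uint16_t quantum16 = (uint16_t)(original >> 16);      /* 0xffee: low 16 bits discarded */
        uint32_t reported  = (uint32_t)quantum16 * 0x10001u;  /* 0xffeeffee: re-expanded by replication */
        printf("original=0x%08x reported=0x%08x\n",
               (unsigned) original, (unsigned) reported);
        return 0;
    }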

Please may I request that, when IM is processing/converting/displaying/streaming an image with a bit-depth greater than the compiled quantum, it should at least emit a WARNING on stderr to this effect?
Something like: fprintf(stderr, "Warning: %s was compiled with quantum depth %d, but your image uses %d bits per pixel. Less significant bits will be discarded.\n", argv[0], quantum, bitdepth);
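
To make the idea concrete, here is a rough sketch of what such a check might look like, using the image's depth field and the compile-time quantum macro from MagickCore. This is only my illustration of the request (the helper name is made up), not proposed IM code:

    #include <stdio.h>
    #include <magick/MagickCore.h>

    /* Hypothetical helper: warn if the image carries more bits per sample
       than the quantum this binary was compiled with can represent. */
    static void warn_if_depth_exceeds_quantum(const Image *image, const char *argv0)
    {
        if (image->depth > MAGICKCORE_QUANTUM_DEPTH)
            (void) fprintf(stderr,
                "Warning: %s was compiled with quantum depth %d, but your image "
                "uses %lu bits per pixel. Less significant bits will be discarded.\n",
                argv0, MAGICKCORE_QUANTUM_DEPTH, (unsigned long) image->depth);
    }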

I'm pleased that "display" can in fact handle 32-bit TIFFs mostly correctly (most other apps fail entirely, or wrongly treat them as RGBA). But it should warn me that the low bits are being ignored. The existence of such a warning would have saved me many hours of work :-)

Aside: there doesn't seem to be any way (except by experiment) to discover what the quantum depth is, when faced with a given IM binary. Shouldn't "--version" print this out?

Thanks for your help - Richard

P.S. For future users who chance upon this, the solution is to rebuild IM using: ./configure --with-quantum-depth=32

Re: 32-bit TIFF vs 16-bit default quantum: silently corrupts

Posted: 2011-03-16T14:07:34-07:00
by fmw42
see viewtopic.php?f=3&t=17839#p67735, which I believe works with Q16 IM compiles.

Re: 32-bit TIFF vs 16-bit default quantum: silently corrupts

Posted: 2011-03-28T03:27:57-07:00
by RichardNeill
fmw42 wrote:see viewtopic.php?f=3&t=17839#p67735, which I believe works with Q16 IM compiles.
Thanks - but this doesn't work. I actually need the full precision of 32-bit integers.

Anyway, I've resolved this by rebuilding IM with a 32-bit quantum. However, I still believe there are a couple of bugs:

(1). If IM is processing an image whose pixel-depth is greater than the quantum depth, it should emit a warning to that effect.

(2). Invocation of IM with --version should include information on the quantum-depth with which it was compiled (to allow scripts to check it).

Re: 32-bit TIFF vs 16-bit default quantum: silently corrupts

Posted: 2011-03-28T09:50:53-07:00
by fmw42
RichardNeill wrote: (2). Invocation of IM with --version should include information on the quantum-depth with which it was compiled (to allow scripts to check it).
convert -version
Version: ImageMagick 6.6.8-10 2011-03-27 Q16 http://www.imagemagick.org
Copyright: Copyright (C) 1999-2011 ImageMagick Studio LLC
Features:


Features includes, for example, HDRI and OpenMP (when IM was built with those enabled).

I will leave your question (1) to those more knowledgeable than I.

Re: 32-bit TIFF vs 16-bit default quantum: silently corrupts

Posted: 2011-03-28T10:32:14-07:00
by RichardNeill
Version: ImageMagick 6.6.8-10 2011-03-27 Q16 http://www.imagemagick.org

Oops - didn't see that! I was looking for it in the features list. Thanks for the tip.
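
P.S. If anyone needs the quantum depth programmatically rather than by parsing the -version output, MagickCore also seems to expose it via GetMagickQuantumDepth(). A minimal probe, assuming the IM 6 MagickCore API (compile flags via something like MagickCore-config --cflags --libs):

    #include <stdio.h>
    #include <magick/MagickCore.h>

    int main(void)
    {
        size_t depth;
        const char *name = GetMagickQuantumDepth(&depth);   /* e.g. "Q16" */
        printf("%s (%lu bits per quantum)\n", name, (unsigned long) depth);
        return 0;
    }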