32-bit TIFF vs 16-bit defaultQuantum: silently corrupts data

When looking at a 32-bit greyscale TIFF (for example, to dump pixels out as integers using 'stream'), ImageMagick silently "corrupts" the data, converting a TIFF pixel whose real value is 0xffeeddcc into a reported value of 0xffeeffee.
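For concreteness, the sort of dump I mean looks roughly like this (a sketch from memory - input.tif is just a placeholder, and your choice of -map/-storage-type may differ):
  # dump the grey (intensity) channel as raw native integers, then inspect it in hex
  stream -map i -storage-type integer input.tif pixels.raw
  od -An -t x4 pixels.raw | head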
This turns out to be because the default quantum depth is only 16 bits, but it took me a very long time to discover that, and in the meantime I had confidently identified (and reported) what is probably an invalid bug against IM.
It seems rather strange to default to handling integers as 16-bit when the CPU's native type is either 32 or 64 bits. But notwithstanding that, there's a violation of the principle of least surprise going on: I start with a 32-bit integer pixel value and dump out a value formatted as "integer" (32-bit), so surely there shouldn't be any data loss?
Please may I request that, when IM is processing/converting/displaying/streaming an image with a bit-depth greater than the compiled quantum, it should at least emit a WARNING on stderr to this effect?
Something like: fprintf(stderr, "Warning: %s was compiled with quantum depth %d, but your image uses %d bits per pixel. Less significant bits will be discarded.\n", argv[0], quantum, bitdepth);
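Fleshed out a little, the check might look something like this (only a sketch: check_depth_warning and its arguments are made up by me, but MAGICKCORE_QUANTUM_DEPTH and image->depth are existing MagickCore symbols that such a check could use):
  #include <stdio.h>
  #include <magick/MagickCore.h>

  /* Hypothetical helper: warn when the file's bit depth exceeds the quantum
     depth IM was compiled with, i.e. whenever low-order bits get discarded. */
  static void check_depth_warning(const char *program, const Image *image)
  {
    if (image->depth > MAGICKCORE_QUANTUM_DEPTH)
      fprintf(stderr,
        "Warning: %s was compiled with quantum depth %d, but your image uses "
        "%lu bits per pixel. Less significant bits will be discarded.\n",
        program, MAGICKCORE_QUANTUM_DEPTH, (unsigned long) image->depth);
  }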
I'm pleased that "display" can in fact handle 32-bit TIFFs mostly correctly (most other apps fail entirely, or wrongly treat them as RGBA). But it should warn me that the low bits are being ignored; such a warning would have saved me many hours of work.
Aside: there doesn't seem to be any way (except by experiment) to discover what the quantum depth is, when faced with a given IM binary. Shouldn't "--version" print this out?
Thanks for your help - Richard
P.S. For future users who chance upon this, the solution is to rebuild IM using: ./configure --with-quantum-depth=32
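Spelled out, the rebuild plus a quick sanity check is roughly (just the usual build steps - adjust prefix and privileges to your own setup):
  ./configure --with-quantum-depth=32
  make
  sudo make install    # or plain "make install", depending on your --prefix
  convert -version     # the banner should now say Q32 instead of Q16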
fmw42
Re: 32-bit TIFF vs 16-bit defaultQuantum: silently corrupts
see viewtopic.php?f=3&t=17839#p67735, which I believe works with Q16 IM compiles.
Richard
Re: 32-bit TIFF vs 16-bit defaultQuantum: silently corrupts
fmw42 wrote: see viewtopic.php?f=3&t=17839#p67735, which I believe works with Q16 IM compiles.
Thanks - but this doesn't work; I actually need the full precision of 32-bit integers.
Anyway, I've resolved this by rebuilding IM with 32-bit quantum. However I still believe there are a couple of bugs:
(1). If IM is processing an image whose pixel-depth is greater than the quantum depth, it should emit a warning to that effect.
(2). Invocation of IM with --version should include information on the quantum-depth with which it was compiled (to allow scripts to check it).
fmw42
Re: 32-bit TIFF vs 16-bit defaultQuantum: silently corrupts
(2). Invocation of IM with --version should include information on the quantum-depth with which it was compiled (to allow scripts to check it).
convert -version
Version: ImageMagick 6.6.8-10 2011-03-27 Q16 http://www.imagemagick.org
Copyright: Copyright (C) 1999-2011 ImageMagick Studio LLC
Features:
The Features line lists, for example, HDRI and OpenMP when IM is built with those enabled.
I will leave your question 1) to those more knowledgeable than I.
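Since the quantum depth already appears in that Version line (the Q16 above), a script can scrape it out today; a minimal sketch (the sed pattern is my own, not an official interface):
  #!/bin/sh
  # pull the compiled quantum depth (e.g. "Q16" -> 16) out of the version banner
  qdepth=$(convert -version | sed -n 's/.* Q\([0-9][0-9]*\) .*/\1/p')
  if [ "${qdepth:-0}" -lt 32 ]; then
    echo "warning: this ImageMagick is only Q${qdepth}; 32-bit samples will be truncated" >&2
  fi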
Richard
Re: 32-bit TIFF vs 16-bit defaultQuantum: silently corrupts
Version: ImageMagick 6.6.8-10 2011-03-27 Q16 http://www.imagemagick.org
Oops - didn't see that! I was looking for it in the Features list. Thanks for the tip.