Depth change - is this a bug? -- Discussion for future
Posted: 2011-10-29T19:00:13-07:00
I have been going through the command-line interface as part of its redevelopment for IMv7 and came across a 'quirk' which I am not certain should be called a bug (and fixed) or not.
Suppose you create an image with a 16-bit color (quite common), and output it.
convert 'xc:#123412341234' txt:-
# ImageMagick pixel enumeration: 1,1,65535,rgb
0,0: ( 4660, 4660, 4660) #123412341234 rgb(7.1107%,7.1107%,7.1107%)
Perfectly fine.
But now suppose you want to save that image as 8-bit.
convert 'xc:#123412341234' -depth 8 txt:-
# ImageMagick pixel enumeration: 1,1,255,rgb
0,0: ( 18, 18, 18) #121212 grey7
Also perfectly fine.
The problem, however, is that if you switch the image from 16-bit to 8-bit and back to 16-bit, you lose resolution!
convert 'xc:#123412341234' -depth 8 -depth 16 txt:-
# ImageMagick pixel enumeration: 1,1,65535,rgb
0,0: ( 4626, 4626, 4626) #121212121212 grey7
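The numbers above line up with a simple arithmetic model of what the depth change does (this is a sketch of the math only, not ImageMagick's actual source code): reducing 16-bit to 8-bit divides by 257 with rounding, and promoting 8-bit back to 16-bit multiplies by 257, which is the same as replicating the byte.

```python
def depth16_to_8(v16: int) -> int:
    """Reduce a 16-bit channel value to 8 bits (with rounding).
    Note 65535 / 255 == 257 exactly."""
    return round(v16 * 255 / 65535)

def depth8_to_16(v8: int) -> int:
    """Promote an 8-bit channel value to 16 bits by byte replication.
    (v8 << 8) | v8 is the same as v8 * 257."""
    return (v8 << 8) | v8

v = 0x1234                    # 4660, the original 16-bit channel value
v8 = depth16_to_8(v)          # 0x12 (18)
v16 = depth8_to_16(v8)        # 0x1212 (4626) -- not 0x1234: resolution lost
print(hex(v), hex(v8), hex(v16))
```

Running this reproduces the 4660 → 18 → 4626 sequence seen in the txt: output above.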
NOTE: The image data is actually modified when you go from 8-bit to 16-bit! Going from 16-bit to 8-bit is just a setting change; the data remains unchanged in memory.
The question is... do other developers consider this a bug?
Should images in memory always preserve their 16-bit resolution?
Or should -depth actually round off the image data (even though it remains 16-bit in memory), as it currently does?
This is important, as it determines whether -depth is just a setting (for input and output, as I have always maintained) or is actually an operator causing image data loss (when the depth is later increased!).
This may be especially important in HDRI, where depth also seems to do quantum rounding when it may not be expected! The code does seem to ensure that setting -depth to 32 or 64 in HDRI does not actually modify the image data, BUT it still does the -depth 8 to -depth 16 quantum rounding when applied!
ASIDE: When an 8-bit color such as #1133FF is promoted from 8 bits to 16 bits, the bits are replicated to fill in the lower bits; it becomes #11113333FFFF. This is important so that 8-bit white maps to 16-bit white.
Though this generally means that a pure 8-bit gray, #7F7F7F or #808080, does not actually map to the exact 16-bit grays #7FFF7FFF7FFF or #800080008000 respectively (byte replication gives #7F7F7F7F7F7F and #808080808080). This is also why 16-bit gray colors should be declared as 'gray(50%)' and not as 'gray50' or 'gray(128)', which are the respective 8-bit gray colors.
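The gray mismatch can be checked with the same byte-replication model (again, a sketch of the arithmetic, not ImageMagick's code):

```python
def depth8_to_16(v8: int) -> int:
    # Promote 8-bit to 16-bit by replicating the byte: 0xFF -> 0xFFFF.
    return (v8 << 8) | v8

print(hex(depth8_to_16(0x7F)))   # 0x7f7f (32639), not 0x7fff
print(hex(depth8_to_16(0x80)))   # 0x8080 (32896), not 0x8000
print(round(0.50 * 65535))       # 32768 == 0x8000, what gray(50%) gives at 16 bits
```

White survives the promotion exactly (0xFF -> 0xFFFF), but neither 8-bit mid-gray lands on the 16-bit 50% value.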