[SOLVED] HDRI is now default for IM7 but not IM6 (still Q16)
Posted: 2012-07-28T05:51:16-07:00
Cristy and Anthony:
I don't have the time to collect precise examples buttressing the following opinion. If you are not convinced, let me know, and I'll add them to this thread as I run into them.
As usual, this is just an opinion. (But I'm opinionated.)
Very strong opinion: The default ImageMagick distribution should be switched from 16-bit to HDRI.
This, of course, does not mean that I suggest we should get rid of the 8- and 16-bit compiles, just that when an Ubuntu user, say, installs ImageMagick from the package manager, she should get HDRI by default.
Rationale:
Part 0:
Today's consumer hardware has frighteningly fast FPUs. What is somewhat slow is the conversion from floating point to integers. So, although I have not benchmarked it, the HDRI version may well run comparably fast, at least when the images are reasonably small. (Memory usage is king in the speed department.) A rough numerical proxy closes this part.
("Frightening" is the word for someone whose first programming job involved a punch card reader which was physically larger than his office, and whose "internet" connection speed to the very first Canadian VAX---at a now extinct particle physics center---was measured in bauds. Ma Bell bankrupted the center.)
Part 1:
ImageMagick is getting serious about colourspaces.
Now, if you convert, say, an sRGB image imported with a recent ICC profile under the Perceptual rendering intent to XYZ (a favorite colourspace of many people) using libvips (which I know handles colourspaces correctly), the probability that you will get colours outside the "nominal" gamut (below 0 or above 100) is extremely high. They won't overshoot by much (usually no more than about 4), but they will. If the XYZ values are clipped when they are stored into 16-bit, the transformation is no longer reversible.
Now, maybe your brand new XYZ implementation prevents this somehow? (I have not checked.)
But the same issue, I expect, will occur with all esoteric colourspaces (Lab, LCh, ...).
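To make the clipping point concrete, here is a toy sketch using the plain D65 sRGB-to-XYZ matrices (not the ICC/perceptual-intent path I described above, just the simplest possible illustration): even pure white lands outside the nominal range in Z, and clipping Z before the return trip means you no longer get white back.

```python
import numpy as np

# Standard linear-sRGB -> XYZ matrix (D65); its inverse takes us back.
M = np.array([[0.4124, 0.3576, 0.1805],
              [0.2126, 0.7152, 0.0722],
              [0.0193, 0.1192, 0.9505]])
M_inv = np.linalg.inv(M)

rgb_linear = np.array([1.0, 1.0, 1.0])   # pure white, linear RGB
xyz = M @ rgb_linear                      # X=0.9505, Y=1.0, Z=1.089: Z is already "above 100"

# An integer (Q16-style) store clips to the quantum range; a float (HDRI) store does not.
xyz_clipped = np.clip(xyz, 0.0, 1.0)

print("XYZ of white:       ", np.round(xyz, 4))
print("round trip, floats: ", np.round(M_inv @ xyz, 4))          # [1, 1, 1] up to rounding
print("round trip, clipped:", np.round(M_inv @ xyz_clipped, 4))  # no longer pure white
```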
Summary: If your default implementation is not HDRI, you are likely to get bug reports from people who use fancy colourspaces.
Nip those in the bud.
Part 2:
With negative-lobe filters, we need negative transparency to get the best results (documented elsewhere).
Part 3:
The clipping that occurs between the horizontal and vertical passes of orthogonal resize (-resize, the most commonly used) causes filters with negative lobes to produce avoidable artifacts. These could be eliminated by making an exception for the between-pass storage used by such methods (Anthony mentioned that this was a possibility), but then methods without negative lobes would be slowed down with no noticeable gain. In addition, these artifacts are less noticeable when downsampling, which is, I would guess, a major use of ImageMagick. So one can argue that, for many uses, the current implementation is better (speed!).
Making HDRI the default eliminates the issue: the artifacts disappear in the default version. If you want speed, grab an alternate (8- or 16-bit) version, but be aware of the possible consequences.
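To illustrate, here is a toy two-pass sketch (my own simplification, not ImageMagick's resize code): a Catmull-Rom horizontal upsample of a hard edge overshoots [0,1], and clipping those intermediate values before the vertical pass changes the final, in-range result, whereas keeping floats between passes (HDRI) does not.

```python
import numpy as np

# Catmull-Rom midpoint weights for a 2x upsample (a negative-lobe kernel).
W = np.array([-1/16, 9/16, 9/16, -1/16])

def upsample2x_catrom(row):
    """2x upsample of a 1D row: keep the original samples and interpolate the
    midpoints with Catmull-Rom weights; edge indices are simply clamped."""
    n = len(row)
    out = np.empty(2 * n - 1)
    out[0::2] = row
    idx = np.arange(n - 1)
    neigh = np.stack([row[np.clip(idx + k, 0, n - 1)] for k in (-1, 0, 1, 2)], axis=1)
    out[1::2] = neigh @ W
    return out

# Two-row test image: a hard step edge above a mid-grey row.
row_edge = np.array([0., 0., 0., 0., 1., 1., 1., 1.])
row_grey = np.full(8, 0.5)

# Horizontal pass: the edge row overshoots 1 and undershoots 0 next to the step.
h_edge = upsample2x_catrom(row_edge)
h_grey = upsample2x_catrom(row_grey)

# Vertical pass: a simple 2-to-1 box average of the two rows.
final_float   = (h_edge + h_grey) / 2                                  # HDRI-style: keep floats between passes
final_clipped = (np.clip(h_edge, 0, 1) + np.clip(h_grey, 0, 1)) / 2    # Q16-style: clip between passes

print("intermediate edge row:", np.round(h_edge, 4))
print("max final difference :", np.abs(final_float - final_clipped).max())
```

The difference (about 3% of the dynamic range with these weights) lies well inside [0,1], so it survives the final clip: exactly the kind of avoidable artifact I mean.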