Why would not I enable HDRI?
Posted: 2007-10-04T21:02:52-07:00
by mi
It appears HDRI is a popular thing in digital photography.
ImageMagick includes (some) support for it, but it is not turned on by default -- one needs to configure with
--enable-hdri and rebuild first...
What's the drawback to enabling it? Will it cause stability (crashes) or performance problems for those who don't use HDRI? Thanks!
Re: Why would not I enable HDRI?
Posted: 2007-10-04T21:40:43-07:00
by magick
Generally ImageMagick uses 16 unsigned bits per pixel component. HDRI uses 32-bit floats per pixel component, which can be quite resource intensive. HDRI does not clamp pixel values, so some algorithms may give unexpected behavior (e.g. negative pixel values or values that exceed QuantumRange). HDRI is not very useful if you are not using image formats that support it, such as EXR, PFM, or MIFF.
Re: Why would not I enable HDRI?
Posted: 2007-10-04T22:52:09-07:00
by anthony
Or doing some very fancy image processing where the in-memory intermediate images are important. It isn't just clamping that HDRI helps with, it is quantization issues: that is, the round-off to the nearest integer value every time an image processing operation saves pixel data back into the image data channel.
For example, dividing image data by 1000 and then multiplying it by 1000 will produce gaps in the data space, but not with HDRI. With HDRI you should get something very close to what you originally started with. Basically, it automatically preserves the significant bits, where integer storage does not.
Of course, its original intended use is for generating images that are more like what we humans would see.
Re: Why would not I enable HDRI?
Posted: 2007-10-08T05:27:38-07:00
by seanburke1979
Here is a question on the same topic. As noted above, enabling HDRI allows for low-error propagation, but may have unexpected results when used for untested functions. An example:
In ImportImagePixels [pixel.c] (by way of ConstituteImage), double values are still scaled. Here is some code from the "DoublePixel" case (line 2,375):
Code: Select all
if (LocaleCompare(map,"I") == 0)
  {
    for (y=0; y < (long) rows; y++)
    {
      q=GetImagePixels(image,x_offset,y_offset+y,columns,1);
      if (q == (PixelPacket *) NULL)
        break;
      for (x=0; x < (long) columns; x++)
      {
        q->red=RoundToQuantum((MagickRealType) QuantumRange*(*p));
        q->green=q->red;
        q->blue=q->red;
        p++;
        q++;
      }
      if (SyncImagePixels(image) == MagickFalse)
        break;
    }
    break;
  }
For an HDRI image the values are not (or are not necessarily) scaled to 0-1.0. The end result is that your HDRI values, which *could* start out astronomically high (like, say, the DC component of a Fourier transform), get multiplied by 2^32 (at the minimum). RoundToQuantum under a properly configured HDRI setup is an identity function -- it returns the input as the output.
Is this a bug, or should we be scaling our HDRI values? ExportImagePixels does the same, only in the other direction. A few lines of CPP code would straighten it out. I can come up with some if it would expedite a resolution.
Best,
Sean
Re: Why would not I enable HDRI?
Posted: 2007-10-08T06:27:48-07:00
by magick
Typically QuantumRange is 65535.0 for HDRI with a quantum depth of 16. If you are concerned about a loss of precision, you can use a quantum depth of 8, which scales by a factor of 255.0. If the quantum depth is 32, under Unix/Linux we use long doubles, so scaling should still be OK without a loss of precision. Given this information, do you anticipate any other problems related to HDRI in ImageMagick?
Re: Why would not I enable HDRI?
Posted: 2007-10-08T07:09:44-07:00
by seanburke1979
Well, as it is, a Fourier->InvFourier round trip is lossy (since we have a finite sample space). The last step of the Fourier algorithm is to perform a log transform in order to display what people commonly refer to as "the Fourier" (the magnitude component). InvFourier goes the other way, so I am backed up against exponential error propagation (otherwise, long double would be more than acceptable).
I understand the need to fit integer types to the correct quantum range, but it seems superfluous for floats or doubles. Doubly so for HDRI-enabled images, since the intensities may have some absolute calibrated reference value. This would be the case for scientific, Fourier-space (or z-space), or process-control calibrated images.
Incidentally, the IPL format supports float and double values in addition to the above-mentioned formats.
-Sean
Re: Why would not I enable HDRI?
Posted: 2007-10-08T08:32:29-07:00
by magick
The scaling is performed to ensure speed (no if/then special cases required for HDRI support) and to support mapped pixels, which are used not only for colormapped images but also for a number of algorithms that use lookup tables for such things as gamma correction, colorspace conversion, color reduction, etc. We suspect there are probably a hundred algorithms that depend on mapping, so we do not anticipate changing the current scaling method anytime soon. We could add a configure option to enable long double for HDRI, and that should solve the precision problem you reported, but of course there would be an increased resource demand.
Image processing has an unusual number of compromises that many algorithms do not have. See
http://www.imagemagick.org/script/architecture.php for a discussion of a number of compromises we have already made in order to efficiently support the greatest number of image formats, colorspaces, image processing algorithms, and more. One suggestion we get a lot is: why not map image pixels at their native depths? Fax images, in particular, are 1-bit deep, but we store them (typically) at 16 bits per pixel component. Our reasoning is that with a fixed depth we gain speed because each time pixels are moved into and out of the pixel cache, no depth conversion is required and no special algorithms are required (if depth is 1 do this, if depth is 4 do that, if depth is 16 do something else), and we can directly map the pixels in memory for fast processing (in many cases ImageMagick algorithms have direct access to the image pixels).
Re: Why would not I enable HDRI?
Posted: 2007-10-08T08:44:49-07:00
by seanburke1979
Well, that may be the solution. Hold off on it though, I still need to shake this code down. There is no reason to start changing the other code if I can figure a self-contained way around it.
Best,
Sean
Re: Why would not I enable HDRI?
Posted: 2007-10-17T19:02:09-07:00
by anthony
I would have thought that the
Code: Select all
q->red=RoundToQuantum((MagickRealType) QuantumRange*(*p));
would automatically be adjusted to do nothing (other than the scaling) when HDRI is in effect.
Re: Why would not I enable HDRI?
Posted: 2007-10-18T05:16:01-07:00
by seanburke1979
It does, but the scaling still occurs. I guess that's where my confusion comes in. I don't understand the necessity of scaling a 32-bit float by the 32-bit integer quantum range. It makes sense for integer calculations, but seems a little useless for floating-point operations.
Sean
Re: Why would not I enable HDRI?
Posted: 2007-10-18T06:55:00-07:00
by magick
You should set the QuantumDepth to 16 for HDRI. We utilize mapping within ImageMagick but no mapping table exceeds 16-bits. You could also choose a QuantumDepth of 8 but a depth of 32 or 64 does not make sense for HDRI.
Re: Why would not I enable HDRI?
Posted: 2007-10-18T19:20:01-07:00
by anthony
seanburke1979 wrote: It does, but the scaling still occurs. I guess that's where my confusion comes in. I don't understand the necessity of scaling a 32-bit float by the 32-bit integer quantum range. It makes sense for integer calculations, but seems a little useless for floating-point operations.
Okay, yes, typically this scaling does not make sense. But all the operations in IM are designed for use with integers, and as such expect numbers to be in some quantum range, such as 256, 65536, or more. To avoid problems, IM stores HDRI numbers scaled to a 65536 range (the 16-bit integer range).
That way, options which accept a direct value (-fuzz, -evaluate, etc.) will not suddenly work only with values in the 0.0 to 1.0 range (though all such options usually take a 0-100% form of the numbers too).
Basically, it ensures that functions designed for use with integers don't suddenly find themselves only using 0.0 to 1.0 values. It is a precautionary measure.
HDRI itself does not really care.