Right. But IM seems to be decoding it (as linear) and converting it to sRGB in coders/hdr.c, but then setting image->gamma=1.0, which still marks it as linear, so the colorspace and the gamma don't agree.
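To make the inconsistency concrete, here is what the two self-consistent taggings look like in MagickCore terms. This is illustration only, not the actual coders/hdr.c code, and the helper names are made up:

Code:
#include <MagickCore/MagickCore.h>

/* Hypothetical helpers, not ImageMagick source: the two self-consistent
 * ways a reader could tag a decoded Radiance .hdr image. */
void TagAsLinearRGB(Image *image)
{
  image->colorspace = RGBColorspace;   /* linear RGB */
  image->gamma = 1.0;                  /* gamma 1.0 says "linear": consistent */
}

void TagAsSRGB(Image *image)           /* only if the samples were actually encoded */
{
  image->colorspace = sRGBColorspace;
  image->gamma = 1.0/2.2;              /* roughly the value IM uses for sRGB images */
}

Converting the samples to sRGB while leaving gamma at 1.0 mixes the two, which is what the coder appears to be doing.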
Nope... on reading the HDR file it looks to me like it correctly sets the colorspace to linear RGB (not sRGB) initially, in 7.0.6-2 Q16 HDRI at least:
Code:
magick identify rose.hdr
rose.hdr HDR 70x46 70x46+0+0 16-bit RGB 11657B 0.000u 0:00.000
Well, the PNG developers recommend using 16-bit samples with a linear colorspace; in fact, the libpng "simplified API" switches back and forth between 8-bit sRGB and 16-bit linear pixels (sketched below).

Dabrosny wrote: ↑2017-07-28T15:28:11-07:00
In addition, in my opinion, we shouldn't have written a "linear png" to begin with, because (1) this is generally very lossy in terms of decreased dynamic range, and (2) a linear png is incompatible with almost all other software that might read it (including IM 7.0.6, although hopefully not IM 7.0.8 or so).
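For reference, here is a minimal sketch of the simplified-API switch mentioned above (libpng 1.6+). "input.png" is just a placeholder and error handling is trimmed: the caller picks PNG_FORMAT_RGB for 8-bit, sRGB-encoded samples or PNG_FORMAT_LINEAR_RGB for 16-bit linear ones, and libpng converts either way no matter how the file itself is encoded.

Code:
#include <png.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
  png_image image;

  /* Pass 1: request 8-bit, sRGB-encoded RGB. */
  memset(&image, 0, sizeof image);
  image.version = PNG_IMAGE_VERSION;
  if (png_image_begin_read_from_file(&image, "input.png")) {
    image.format = PNG_FORMAT_RGB;                /* 8 bits/sample, sRGB */
    png_bytep srgb = malloc(PNG_IMAGE_SIZE(image));
    if (srgb == NULL ||
        !png_image_finish_read(&image, NULL, srgb, 0, NULL))
      png_image_free(&image);                     /* clean up on failure */
    free(srgb);
  }

  /* Pass 2: request 16-bit, linear-light RGB from the same file. */
  memset(&image, 0, sizeof image);
  image.version = PNG_IMAGE_VERSION;
  if (png_image_begin_read_from_file(&image, "input.png")) {
    image.format = PNG_FORMAT_LINEAR_RGB;         /* 16 bits/sample, linear */
    png_uint_16p linear = malloc(PNG_IMAGE_SIZE(image));
    if (linear == NULL ||
        !png_image_finish_read(&image, NULL, linear, 0, NULL))
      png_image_free(&image);
    free(linear);
  }
  return 0;
}

The write direction is symmetric: png_image_write_to_file() takes whatever format image.format describes and can optionally convert 16-bit linear data down to 8-bit sRGB on output.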
Okay, so, at least if we're writing an 8-bit-per-sample PNG file, would you agree that sRGB should be the default, rather than whatever-colorspace-the-image-just-so-happens-to-already-be-in-for-whatever-reasons?
The libpng developers' recommendation that I mentioned was of course in the context of writing a PNG, where you are limited to integer samples of at most 16 bits/sample. If you look at browsers such as Firefox, you'll find that they strip 16-bit PNG samples down to 8, so yes, we recommend 8-bit sRGB rather than 16-bit linear samples when exporting for the web. From a human-vision viewpoint, 8-bit sRGB is roughly equivalent to 11- or 12-bit linear, so 16-bit PNG is actually more precise (but not after it has been stripped down to 8 bits!).
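The 11- or 12-bit figure is easy to sanity-check. A standalone sketch (standard sRGB decoding curve only, nothing from libpng or ImageMagick) that finds the smallest linear-light step between adjacent sRGB codes:

Code:
#include <math.h>
#include <stdio.h>

/* How many bits of linear precision are needed to resolve the finest
 * step of an N-level sRGB encoding?  The finest step is near black,
 * where the sRGB curve is steepest in the decoding direction. */
static double srgb_to_linear(double s)          /* s in [0,1] */
{
  return (s <= 0.04045) ? s / 12.92 : pow((s + 0.055) / 1.055, 2.4);
}

static void report(int levels)                  /* 255 for 8-bit, 65535 for 16-bit */
{
  double min_step = 1.0;
  for (int code = 1; code <= levels; code++) {
    double step = srgb_to_linear((double) code / levels) -
                  srgb_to_linear((double) (code - 1) / levels);
    if (step < min_step)
      min_step = step;
  }
  printf("%2d-bit sRGB: smallest linear step %.3g (~%.1f bits of linear precision)\n",
         levels == 255 ? 8 : 16, min_step, log2(1.0 / min_step));
}

int main(void)
{
  report(255);      /* 8-bit sRGB  -> about 11.7 equivalent linear bits */
  report(65535);    /* 16-bit sRGB -> about 19.7 equivalent linear bits */
  return 0;
}

Near black, 8-bit sRGB needs about 11.7 bits of linear precision to match its finest step and 16-bit sRGB about 19.7, which puts 16-bit linear between the two in the shadows; in the highlights sRGB codes are coarser than linear ones of the same depth, though at 16 bits both are far finer than the eye needs.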
Sure, but if I want to use 16-bit (or if IM is defaulting to 16-bit and I haven't given it any thought), there is certainly no harm in using 16-bit sRGB rather than 16-bit linear. If it happens to be read directly by a browser, if anything 16-bit sRGB reduced to 8 bits will come out better than 16-bit linear reduced to 8 bits.
16-bit linear is better than 8-bit sRGB, but 16-bit sRGB is even better than that.
Right, so why can't sRGB be the default, even if we currently happen to be working in a linear colorspace (perhaps because we read in an HDRI image)?
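Until the default changes, the conversion can at least be asked for explicitly. A minimal MagickWand sketch, assuming the IM 7 headers and placeholder filenames; it is roughly the same as -colorspace sRGB -depth 16 on the command line:

Code:
#include <MagickWand/MagickWand.h>

int main(void)
{
  MagickWandGenesis();
  MagickWand *wand = NewMagickWand();
  if (MagickReadImage(wand, "rose.hdr") == MagickTrue) {
    /* Convert the pixel data (not just the tag) from the current
       working colorspace to sRGB, then write a 16-bit PNG. */
    MagickTransformImageColorspace(wand, sRGBColorspace);
    MagickSetImageDepth(wand, 16);
    MagickWriteImage(wand, "rose.png");
  }
  wand = DestroyMagickWand(wand);
  MagickWandTerminus();
  return 0;
}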