Re: Colorspace handling and the future of ImageMagick.
Posted: 2012-11-13T01:55:50-07:00
rnbc wrote: How to make colorspace support generic? Well, that's a problem! The generic answer is:
- N channels
- Each channel has a specific spectral power curve and power baseline.
- Each channel has a specific luminance curve Y=f(X).
This makes it possible to represent any input device (raw sensor, or whatever) and any output device (even laser projectors, etc.), and to convert between any such representations (complex, but possible, even if a different number of channels is involved in each phase).
You would also need some representation of the channel spatial coverage if you want some kind of complete description of the data <-> physical behaviour mapping.
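As a rough sketch of the kind of per-channel description this could boil down to, in C (every name and field below is mine and purely illustrative, not a proposal for ImageMagick's actual API):

#include <stddef.h>

/* Hypothetical description of one channel of a generic "colorspace". */
typedef struct {
  double wavelength_nm[64];    /* sample points of the spectral response curve */
  double response[64];         /* relative spectral power/sensitivity at those points */
  double baseline;             /* power baseline (black level / offset) */
  double (*transfer)(double);  /* luminance curve Y = f(X) */
  int site_rows, site_cols;    /* period of the repeating spatial coverage pattern */
  const unsigned char *sites;  /* row-major 0/1 coverage mask, as in the examples below */
} ChannelDescription;

/* A device or pixel format is then just N such channels. */
typedef struct {
  size_t channel_count;
  ChannelDescription *channels;
} GenericColorDescription;

The coverage masks are what the examples below are about.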
-Bayer sensors might be (spatially) represented as
ch0: [1 0; 0 0];//green0
ch1: [0 1; 0 0];//red
ch2: [0 0; 1 0];//blue
ch3: [0 0; 0 1];//green1
-LCD monitors might be represented as
ch0: [1 0 0];//red
ch1: [0 1 0];//green
ch2: [0 0 1];//blue
-Some YCbCr 4:2:2 interleaved format might be represented as:
ch0: [1 0 1 0];//luma
ch1: [0 1 0 0];//Cb
ch2: [0 0 0 1];//Cr
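Just to show that the mask notation above is directly usable, a tiny C snippet (the helper name print_channel_sites is made up) that enumerates which sites of the repeating pattern a channel covers:

#include <stdio.h>

/* Print the (row, column) sites covered by a channel, given its
   row-major 0/1 coverage mask over a rows-by-cols repeating pattern. */
static void print_channel_sites(const unsigned char *mask, int rows, int cols)
{
  for (int y = 0; y < rows; y++)
    for (int x = 0; x < cols; x++)
      if (mask[y * cols + x])
        printf("(%d,%d) ", y, x);
  printf("\n");
}

int main(void)
{
  const unsigned char bayer_red[] = { 0, 1, 0, 0 };  /* the [0 1; 0 0] mask above */
  const unsigned char ycc_luma[]  = { 1, 0, 1, 0 };  /* the 1x4 luma mask above */
  print_channel_sites(bayer_red, 2, 2);  /* (0,1) */
  print_channel_sites(ycc_luma, 1, 4);   /* (0,0) (0,2) */
  return 0;
}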
Even in these examples I am mixing the in-memory representation with the spatial sampling offsets. Those two should (generally) be represented separately, of course.
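One way to keep the two concerns apart, again only as an illustrative sketch:

#include <stddef.h>

/* Where a channel's samples live in memory... */
typedef struct {
  size_t plane;        /* which memory plane/buffer */
  size_t byte_offset;  /* offset of the first sample within the plane */
  size_t byte_stride;  /* distance between consecutive samples, in bytes */
} MemoryLayout;

/* ...versus where they sit in physical/pixel space. */
typedef struct {
  double offset_x, offset_y;  /* sampling-site offset, in pixel units */
  double period_x, period_y;  /* sampling period (e.g. 2.0 horizontally for 4:2:2 chroma) */
} SamplingGeometry;

typedef struct {
  MemoryLayout memory;
  SamplingGeometry geometry;
} ChannelPlacement;

For the Cb channel of the 4:2:2 example, say, you would record a memory stride of four samples but a spatial period of two pixels, with the horizontal offset depending on the siting convention (co-sited with the left luma sample, or centred between two).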
If you want to be picky, even these representations are idealized abstractions that ignore the presence of e.g. anti-alias filtering. For a scientific application you might need per-channel, per-site, per-wavelength point-spread-function estimates.
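Such estimates could be as crude as the sketch below (assuming a Gaussian approximation is acceptable; a real scientific pipeline might store tabulated kernels per site instead; again, every name here is made up):

/* A very reduced point-spread-function description. */
typedef struct {
  double sigma_x, sigma_y;  /* Gaussian spread in pixel units */
  double theta;             /* orientation of the major axis, in radians */
} GaussianPSF;

typedef struct {
  int channel;              /* which channel the estimate applies to */
  double wavelength_nm;     /* wavelength at which it was measured */
  double site_x, site_y;    /* image location of the estimate */
  GaussianPSF psf;          /* the local point-spread estimate */
} PSFEstimate;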
Interlaced image formats (PNG's Adam7, for example) might need some special memory/space/time representation to generalize into your system.
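For Adam7 the extra dimension is just the pass number in which each pixel of the repeating 8x8 tile arrives; the standard layout is easy to tabulate (how to fold the pass index into the channel description above is the open question):

/* Standard Adam7 pass numbers for each pixel of the repeating 8x8 tile. */
static const unsigned char adam7_pass[8][8] = {
  { 1, 6, 4, 6, 2, 6, 4, 6 },
  { 7, 7, 7, 7, 7, 7, 7, 7 },
  { 5, 6, 5, 6, 5, 6, 5, 6 },
  { 7, 7, 7, 7, 7, 7, 7, 7 },
  { 3, 6, 4, 6, 3, 6, 4, 6 },
  { 7, 7, 7, 7, 7, 7, 7, 7 },
  { 5, 6, 5, 6, 5, 6, 5, 6 },
  { 7, 7, 7, 7, 7, 7, 7, 7 },
};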
Some formats might only be defined by the (idealized) physical capture process, while others might be defined by the (idealized) physical rendering process. Some formats will be seriously under-specified (relying instead on conventions and "luck"). I think it is a difficult project, but you might look into what the GStreamer and VirtualDub projects have been doing.
I think it is exciting to view image/video recording/rendering as a general sampling process, with a discrete representation in memory, over time, space, wavelength and power. It is also difficult to map that view onto the narrower case of real image processing with its cost/quality trade-offs.
-h