As Fred says.
You should give "-color-matrix" a square matrix of numbers: 1x1, 2x2, ... up to 6x6. If you give fewer than 6x6 values, IM internally pads the matrix to 6x6, with ones on the diagonal and zeros everywhere else, so the rows and columns you didn't supply leave those channels unchanged.
So you can give any square size you want, and IM will assume sensible values for the entries you haven't supplied. The offsets live in the sixth column, so if you need offsets you must supply the full 6x6 matrix.
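For example, a 3x3 matrix is enough to swap the red and blue channels; IM pads it to a 6x6 identity internally, so alpha and offsets are untouched. A minimal sketch (untested, using the same %IMDEV% Windows BAT convention as the example below):

Code:
rem Swap red and blue: output red = input blue, output blue = input red.
%IMDEV%convert xc:rgb(10%%,20%%,30%%) -color-matrix ^
0,0,1,^
0,1,0,^
1,0,0 ^
txt:

This should report srgb(30%,20%,10%).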
I think about it this way: pixel values are normalised to 0.0 to 1.0 (though HDRI builds can exceed that range). Each output channel is the dot product of one row of the matrix with the input pixel's channel values, with a constant 1 in the last position, so the sixth column acts as a constant offset. An HDRI example (Windows BAT syntax):
Code:
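rem Each row of the 6x6 matrix computes one output channel; column 6 is a constant offset.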
%IMDEV%convert xc:rgb(10%%,-20%%,30%%) -color-matrix ^
1.1,0,0,0,0,0,^
1,0,0,0,0,1.2,^
0,2,0,0,0,0,^
1,0,0,0,0,0,^
1,0,0,0,0,0,^
1,0,0,0,0,0 ^
txt:
0,0: (4.72446e+08,5.58346e+09,1.28849e+09) #1C28F5C2FFFFFFFF4CCCCCCD srgb(11%,130%,-40%)
Output red is input red * 1.1. IM calculates: 0.10 * 1.1 = 0.11, or 11%.
Output green is input red * 1, plus 1.2. IM calculates: 0.10 * 1 + 1.2 = 1.3, or 130%.
Output blue is input green * 2. IM calculates: -0.20 * 2 = -0.40, or -40%.
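Note that only an HDRI build can hold the 130% and -40%; a non-HDRI build clamps each channel to the 0% to 100% range when it is stored. You can also clamp explicitly with "-clamp". A sketch (untested), re-using the command above:

Code:
rem Same matrix, but clamp the HDRI result into 0.0 to 1.0 before writing.
%IMDEV%convert xc:rgb(10%%,-20%%,30%%) -color-matrix ^
1.1,0,0,0,0,0,^
1,0,0,0,0,1.2,^
0,2,0,0,0,0,^
1,0,0,0,0,0,^
1,0,0,0,0,0,^
1,0,0,0,0,0 ^
-clamp txt:

This should report srgb(11%,100%,0%): 1.3 clamps to 1.0 and -0.4 clamps to 0.0.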