Summed Area Tables or Integral Images
- fmw42
- Posts: 25562
- Joined: 2007-07-02T17:14:51-07:00
- Authentication code: 1152
- Location: Sunnyvale, California, USA
Summed Area Tables or Integral Images
I wanted to bring a technique to the IM user groups attention, called summed area tables (sat) or integral images.
http://en.wikipedia.org/wiki/Summed_area_table
This is a rather old technique that was originally developed for antialiasing images as an alternate to multi-resolution images (MIP maps).
http://www.soe.ucsc.edu/classes/cmps160 ... 7-crow.pdf
One basically sums or integrates the (normalized) graylevels over the image and uses differences from this table as local averages for later antialiasing. (Anthony may find this of interest for potential multi-resolution antialiased interpolation for -distort)
But has seen use also for HDR images tone mapping, shadow mapping and feature (face) detection:
http://www.inf.ufrgs.br/~oliveira/pubs_ ... GI2008.pdf
http://ati.amd.com/developer/gdc/GDC200 ... ctions.pdf
http://www.cgl.uwaterloo.ca/poster/andrew_I3D07.pdf
http://en.wikipedia.org/wiki/Robust_rea ... _detection
http://research.microsoft.com/en-us/um/ ... s_IJCV.pdf
My interest at the moment is much more limited (to 1D images) and has to do with my recent attempts to speed up my redist, histmatch and omnistretch scripts. In these scripts, I need to generate a 1D function (such as a gaussian) or generate the image histogram, then integrate the function or histogram (to make the cumulative histogram) and then finally create a 1D (look up table image) lut from the 1D integrated function and the 1D cumulative histogram to use with -clut to transform the image in a way that redistributes the histogram of the image.
In the past, I did all this (except applying -clut) in the shell of my scripts. The histogram of the image was extracted as text using histogram:info, then filtered with the unix commands sed and sort and then added in the empty bins so that it contained every bin from 0 to 255 (whether empty or not). (Note that IM text histograms do not include any empty bins in the listing).
I recently realized I could create the histogram directly as a 1D image as follows (using the zelda image in grayscale, but works similarly in color):
convert zelda3.jpg -colorspace gray histogram:- | convert - -scale 256x1! zelda3g_hist.png
which when graphed looks like:
Then I realized that I could integrate the histogram image or any 1D function image (normalized between 0 and 1), but it is rather slow as it has to iterate multiple IM commands 256 times. For example here is the algorithm for the zelda histogram.
norm=`convert zelda3g_hist.png -format "%[fx:1/(mean*w)]" info:`
i=0
convert -size 1x1 xc:black tmp_cumhist.mpc
while [ $i -lt 256 ]; do
ww=`expr $i + 1`
fact=`convert xc: -format "%[fx:$ww*$norm]" info:`
convert tmp_cumhist.mpc \
\( zelda3g_hist.png[${ww}x1+0+0] -scale 1x1! -evaluate multiply $fact \) \
+append tmp_cumhist.mpc
i=`expr $i + 1`
done
convert tmp_cumhist.mpc[256x1+1+0] zelda3g_cumhist.png
which when graphed looks as follows:
Likewise a gaussian distribution can be created as follows:
sigma=48
den=`convert xc: -format "%[fx:2*($sigma/256)^2]" info:`
convert -size 256x1 xc: -monitor -fx "exp(-(((i-0.5*w)/w)^2)/($den))" gauss48.png
which when graphed looks as follows:
and when integrated as above, looks as follows:
As a confirmation, if one integrates a constant, one should get a function y=x or diagonal line.
convert -size 256x1 xc:"gray(50%)" test_uni_hist.png
which when graphed looks as follows:
and when integrated and graphed looks as follow:
I am wondering if anyone knows a more efficient method of doing the integration of the 1D function/histogram image using IM processing?
http://en.wikipedia.org/wiki/Summed_area_table
This is a rather old technique that was originally developed for antialiasing images as an alternate to multi-resolution images (MIP maps).
http://www.soe.ucsc.edu/classes/cmps160 ... 7-crow.pdf
One basically sums or integrates the (normalized) graylevels over the image and uses differences from this table as local averages for later antialiasing. (Anthony may find this of interest for potential multi-resolution antialiased interpolation for -distort)
But has seen use also for HDR images tone mapping, shadow mapping and feature (face) detection:
http://www.inf.ufrgs.br/~oliveira/pubs_ ... GI2008.pdf
http://ati.amd.com/developer/gdc/GDC200 ... ctions.pdf
http://www.cgl.uwaterloo.ca/poster/andrew_I3D07.pdf
http://en.wikipedia.org/wiki/Robust_rea ... _detection
http://research.microsoft.com/en-us/um/ ... s_IJCV.pdf
My interest at the moment is much more limited (to 1D images) and has to do with my recent attempts to speed up my redist, histmatch and omnistretch scripts. In these scripts, I need to generate a 1D function (such as a gaussian) or generate the image histogram, then integrate the function or histogram (to make the cumulative histogram) and then finally create a 1D (look up table image) lut from the 1D integrated function and the 1D cumulative histogram to use with -clut to transform the image in a way that redistributes the histogram of the image.
In the past, I did all this (except applying -clut) in the shell of my scripts. The histogram of the image was extracted as text using histogram:info, then filtered with the unix commands sed and sort and then added in the empty bins so that it contained every bin from 0 to 255 (whether empty or not). (Note that IM text histograms do not include any empty bins in the listing).
I recently realized I could create the histogram directly as a 1D image as follows (using the zelda image in grayscale, but works similarly in color):
convert zelda3.jpg -colorspace gray histogram:- | convert - -scale 256x1! zelda3g_hist.png
which when graphed looks like:
Then I realized that I could integrate the histogram image or any 1D function image (normalized between 0 and 1), but it is rather slow as it has to iterate multiple IM commands 256 times. For example here is the algorithm for the zelda histogram.
norm=`convert zelda3g_hist.png -format "%[fx:1/(mean*w)]" info:`
i=0
convert -size 1x1 xc:black tmp_cumhist.mpc
while [ $i -lt 256 ]; do
ww=`expr $i + 1`
fact=`convert xc: -format "%[fx:$ww*$norm]" info:`
convert tmp_cumhist.mpc \
\( zelda3g_hist.png[${ww}x1+0+0] -scale 1x1! -evaluate multiply $fact \) \
+append tmp_cumhist.mpc
i=`expr $i + 1`
done
convert tmp_cumhist.mpc[256x1+1+0] zelda3g_cumhist.png
which when graphed looks as follows:
Likewise a gaussian distribution can be created as follows:
sigma=48
den=`convert xc: -format "%[fx:2*($sigma/256)^2]" info:`
convert -size 256x1 xc: -monitor -fx "exp(-(((i-0.5*w)/w)^2)/($den))" gauss48.png
which when graphed looks as follows:
and when integrated as above, looks as follows:
As a confirmation, if one integrates a constant, one should get a function y=x or diagonal line.
convert -size 256x1 xc:"gray(50%)" test_uni_hist.png
which when graphed looks as follows:
and when integrated and graphed looks as follow:
I am wondering if anyone knows a more efficient method of doing the integration of the 1D function/histogram image using IM processing?
Last edited by fmw42 on 2009-06-28T13:57:04-07:00, edited 3 times in total.
Re: Summed Area Tables or Integral Images
We can code up a new method for you if you provide a name for the technique, the expected inputs and outputs, and pseudo code that describes what transformations you require. Even better would be an input and output image so we can verify our code works. Alternatively, if you can post a script that takes an input image and produces the expected output image we can code from the script rather than pseudo code.
- fmw42
- Posts: 25562
- Joined: 2007-07-02T17:14:51-07:00
- Authentication code: 1152
- Location: Sunnyvale, California, USA
Re: Summed Area Tables or Integral Images
Thanks. I would be happy to provide more details.
But lets get input from other people as well as Anthony. Right now I don't know how the summed area table could be used, except for antialiasing and for my current need to integrate a 1D image.
However, if you do decide to code the summed area image, then in order to make my redist and histmatch more efficient, I would also need a function coded that would take two 1D (summed) images and create the appropriate 1D transformation lut that would be used by -clut. This is currently the most time consuming step (again as I do it all in unix shell code).
Thus I don't know if it would be better to create two functions (summed area table and lut processing) or to actually implement the redist and histmatch as IM functions. Doing the former provides more flexibility in what functions can be used (other than gaussian). But then it will still require my scripts. Also one has a very limited function for the lut processing which is likely only used in this situation. Perhaps a third option would be to have a new IM function (redist) that takes 3 inputs: the image, and two 1D images that would be the input image's cumulive historgram and either some other image's cumulative histogram or some integrated function (such as the integrated gaussian) and would simply take the two 1D images and build the lut and apply it with -clut to the input image. This would require the SAT function to create the integrated histograms/functions. Any way, these are just some thoughts/possibilities and comments pro and con are welcome and solicited from anyone.
(P.S. to Magick - at one time I believe that you implemented something in the way of redist. But before I had time to try it, it was pulled out of the beta. What happened to that code?)
The other optional detail might be having IM create a proper (filled out) 1D histogram image (and possibly cumulative histogram image) directly as an option to the histogram:info text and the 2D histogram:image.gif.
Any way, lets get more input before we do anything further. I don't want to burden you unless it is first well thought out as I know you are busy.
_____________
My perhaps simple minded understanding is that the summed area table at pixel (i,j) is just the cumulate graylevel of all pixels above and to the left of (i,j) from the input, but normalized by mean*w*h. Thus the first pixel (at 0,0 in the top left corner) in the result is just the first pixel of the input (divided by the normalization), the next pixel (at 0,1) is the sum of the previous result (at 0,0) with the graylevel of the input (at 0,1) normalized. This process of adding the previous sum to the next normalized input graylevel is continued along the first row. Then the first row of the result is used to help cumulate the values in the next row. However, from the article below, it can be done in one pass. Ultimately you reach the bottom right where the final value at w-1,h-1 in the result is just 1 (or quantumrange). Thus the values in each channel of the result vary from near zero or possibly zero depending upon the value of the pixel in the input at 0,0 to 1 (or quantumrange) at the bottom right at w-1,h-1.
A simple example apart from the normalization is
(One suggestion for a name would just be -sat)
A simple algorithm is given by eqs 1 and 2 in
http://research.microsoft.com/en-us/um/ ... s_IJCV.pdf
except for the normalization that I added so that the resulting image range is 0 to 1 (or 0 to quantumrange) depending upon how you represent the graylevels (as 0 to 1 or 0 to quantumrange).
A perhaps simpler alternate algorithm/equation is presented in
http://en.wikipedia.org/wiki/Summed_area_table
Possibly faster methods are presented (with pseudocode) in http://www.shaderwrangler.com/publicati ... EG2005.pdf
and
http://www.inf.ufrgs.br/~oliveira/pubs_ ... GI2008.pdf
but this may need a parallel architecture.
But lets get input from other people as well as Anthony. Right now I don't know how the summed area table could be used, except for antialiasing and for my current need to integrate a 1D image.
However, if you do decide to code the summed area image, then in order to make my redist and histmatch more efficient, I would also need a function coded that would take two 1D (summed) images and create the appropriate 1D transformation lut that would be used by -clut. This is currently the most time consuming step (again as I do it all in unix shell code).
Thus I don't know if it would be better to create two functions (summed area table and lut processing) or to actually implement the redist and histmatch as IM functions. Doing the former provides more flexibility in what functions can be used (other than gaussian). But then it will still require my scripts. Also one has a very limited function for the lut processing which is likely only used in this situation. Perhaps a third option would be to have a new IM function (redist) that takes 3 inputs: the image, and two 1D images that would be the input image's cumulive historgram and either some other image's cumulative histogram or some integrated function (such as the integrated gaussian) and would simply take the two 1D images and build the lut and apply it with -clut to the input image. This would require the SAT function to create the integrated histograms/functions. Any way, these are just some thoughts/possibilities and comments pro and con are welcome and solicited from anyone.
(P.S. to Magick - at one time I believe that you implemented something in the way of redist. But before I had time to try it, it was pulled out of the beta. What happened to that code?)
The other optional detail might be having IM create a proper (filled out) 1D histogram image (and possibly cumulative histogram image) directly as an option to the histogram:info text and the 2D histogram:image.gif.
Any way, lets get more input before we do anything further. I don't want to burden you unless it is first well thought out as I know you are busy.
_____________
My perhaps simple minded understanding is that the summed area table at pixel (i,j) is just the cumulate graylevel of all pixels above and to the left of (i,j) from the input, but normalized by mean*w*h. Thus the first pixel (at 0,0 in the top left corner) in the result is just the first pixel of the input (divided by the normalization), the next pixel (at 0,1) is the sum of the previous result (at 0,0) with the graylevel of the input (at 0,1) normalized. This process of adding the previous sum to the next normalized input graylevel is continued along the first row. Then the first row of the result is used to help cumulate the values in the next row. However, from the article below, it can be done in one pass. Ultimately you reach the bottom right where the final value at w-1,h-1 in the result is just 1 (or quantumrange). Thus the values in each channel of the result vary from near zero or possibly zero depending upon the value of the pixel in the input at 0,0 to 1 (or quantumrange) at the bottom right at w-1,h-1.
A simple example apart from the normalization is
(One suggestion for a name would just be -sat)
A simple algorithm is given by eqs 1 and 2 in
http://research.microsoft.com/en-us/um/ ... s_IJCV.pdf
except for the normalization that I added so that the resulting image range is 0 to 1 (or 0 to quantumrange) depending upon how you represent the graylevels (as 0 to 1 or 0 to quantumrange).
A perhaps simpler alternate algorithm/equation is presented in
http://en.wikipedia.org/wiki/Summed_area_table
Possibly faster methods are presented (with pseudocode) in http://www.shaderwrangler.com/publicati ... EG2005.pdf
and
http://www.inf.ufrgs.br/~oliveira/pubs_ ... GI2008.pdf
but this may need a parallel architecture.
Last edited by fmw42 on 2009-06-28T15:47:59-07:00, edited 4 times in total.
Re: Summed Area Tables or Integral Images
I started reading your thread and looked at some of the links Fred but I am afraid my eyes glazed over
What is the actual benifit? an example photo before and after would be nice
Some of the links were showing what could be acheved and compairing it to other methods, but they didnt show the input image to see what was actualy happening.
What is the actual benifit? an example photo before and after would be nice
Some of the links were showing what could be acheved and compairing it to other methods, but they didnt show the input image to see what was actualy happening.
- fmw42
- Posts: 25562
- Joined: 2007-07-02T17:14:51-07:00
- Authentication code: 1152
- Location: Sunnyvale, California, USA
Re: Summed Area Tables or Integral Images
Most of my past experience has only been reading that it was an alternate to the use of MIP maps to create multi-resolution resampling for image warping or texture mapping to help with antialiasing. I have no examples of my own, only whatever is in those papers. I knew of this technique many years ago when it first came out as I had done some perspective transformations in my past using multi-resolution imagery and the EWA resampling that Anthony has implemented in some of the -distort functions (but not the summed area tables)
Most of the papers have to do with texture mapping. This is kind of like using -distort. For example my spherize and cylinderize scripts are equivalent to texture mapping the image onto a sphere or cylinder.
The simplest example is to take a checkerboard image and warp it in perspective at a fairly oblique angle. In -distort perspective, (and my 3Drotate script) Anthony allows multiply interpolation techniques from nearest neighbor, bilinear, cubic to the use of elliptically weighted averaging EWA in order to improve the antialising in the distant parts of the scene where more data needs to be averaged per output pixel as it represents a larger part of the input image. One method to help this is to use multi-resolution imagery so that one can average a more consistent amount of data but at different resolutions. This is a performance issue. One technique, called MIP maps, stores the input image at multiple resolutions. The summed area table is an alternate storage method for making multi-resolution data available. The example 6 in http://www.soe.ucsc.edu/classes/cmps160 ... 7-crow.pdf shows this kind of perspective distorted image and aliasing artifacts and how the summed area table helps reduce the antialiasing.
My interest, at the moment, is simply to integrate a 1D function or histogram, which would then be used to create a look up table image that -clut would use to transform the image in the way that I do in my scripts redist and histmatch. I was trying to speed up the scripts (or get a true IM function built) as the scripts are very slow, since they do most of the work in getting the histogram, integrating it and creating the lut in shell code rather than via IM processing.
The summed area table image is just the means to do the integration. It can be done on a 2D image or in my case I just needed it for a 1D image. By itself, a 2D summed area table image may not have any direct application as an image itself. But it is useful in the 1D case when you want to get the cumulative histogram of an image. Cumulative histograms are at the hearth of the IM functions, -contrast-stretch, -normalize and -linear-stretch and perhaps others.
I pointed out this technique as general interest (also motivation for my needs) and to see if anyone else has had any experience or need for it that would make it of more interest to be coded into IM. Anthony might find it useful behind the scenes to help improve antialiasing via multiresolution support in -distort.
I have not explored the tone mapping paper yet, but that might be the next most useful real application.
Most of the papers have to do with texture mapping. This is kind of like using -distort. For example my spherize and cylinderize scripts are equivalent to texture mapping the image onto a sphere or cylinder.
The simplest example is to take a checkerboard image and warp it in perspective at a fairly oblique angle. In -distort perspective, (and my 3Drotate script) Anthony allows multiply interpolation techniques from nearest neighbor, bilinear, cubic to the use of elliptically weighted averaging EWA in order to improve the antialising in the distant parts of the scene where more data needs to be averaged per output pixel as it represents a larger part of the input image. One method to help this is to use multi-resolution imagery so that one can average a more consistent amount of data but at different resolutions. This is a performance issue. One technique, called MIP maps, stores the input image at multiple resolutions. The summed area table is an alternate storage method for making multi-resolution data available. The example 6 in http://www.soe.ucsc.edu/classes/cmps160 ... 7-crow.pdf shows this kind of perspective distorted image and aliasing artifacts and how the summed area table helps reduce the antialiasing.
My interest, at the moment, is simply to integrate a 1D function or histogram, which would then be used to create a look up table image that -clut would use to transform the image in the way that I do in my scripts redist and histmatch. I was trying to speed up the scripts (or get a true IM function built) as the scripts are very slow, since they do most of the work in getting the histogram, integrating it and creating the lut in shell code rather than via IM processing.
The summed area table image is just the means to do the integration. It can be done on a 2D image or in my case I just needed it for a 1D image. By itself, a 2D summed area table image may not have any direct application as an image itself. But it is useful in the 1D case when you want to get the cumulative histogram of an image. Cumulative histograms are at the hearth of the IM functions, -contrast-stretch, -normalize and -linear-stretch and perhaps others.
I pointed out this technique as general interest (also motivation for my needs) and to see if anyone else has had any experience or need for it that would make it of more interest to be coded into IM. Anthony might find it useful behind the scenes to help improve antialiasing via multiresolution support in -distort.
I have not explored the tone mapping paper yet, but that might be the next most useful real application.
- anthony
- Posts: 8883
- Joined: 2004-05-31T19:27:03-07:00
- Authentication code: 8675308
- Location: Brisbane, Australia
Re: Summed Area Tables or Integral Images
Why do it WITH gnuplot. histogram handling is simply a table of numbers, and the UNIX text handling utilities like awk and perl handle these much more efficiently!
So I'll explore this!
You give...
WHY?
Why not get the raw numbers from the histogram?
Though something is strange I get duplicate lines in the output....
At depth 8 the duplicate gray values should probably MERGE, and counts add!
Adding -depth 16 however resolved the problem.
Okay... back to it..
Get the raw histogram, extract the first two numbers from each line, and sort it by the second gray-scale value
to place the histogram in the right order.
Of course as this is a 16 bit histogram, the binning is very very sparse, and rather useless for generating a graph, but it is perfectly accurate!
Here is a awk script to 'bin' it into 8-bit values, though this is only needed if you want to graph it!
If you remove the '/256' from the above it should correct the bin'ing of the broken IM 8-bit output.
It also has the advantage of 'filling-in' the missing 'zero' value bins.
This is probably what IM -depth 8 histogram output should have done!
And graph it
The 'using' part is to make X = gray value and y = pixel count.
instead of 'impulses' try 'boxes' or 'boxes fs solid'
The colors are in black to white order, so it is only a matter of converting the above histogram into a cumulative histogram, by addition, I'll use the more accurate 16 bit version...
NOTE this should also work with either the 8-bit bin'ed output, or even the original 'broken' 8-bit histogram output that IM generated, though in that case it may generate extra numbers that it probably shouldn't.
And graphing this...
Which looks very similar to yours, is perfectly accurate, and pixel counts, and color value have not been normalized.
So I'll explore this!
You give...
Code: Select all
convert zelda3.jpg -colorspace gray histogram:- | convert - -scale 256x1! zelda3g_hist.png
Why not get the raw numbers from the histogram?
Code: Select all
convert zelda3.jpg -colorspace gray -format %c histogram:info:
Cristy can you please look at this as I am positive you should never have any duplicate lines!...
2: (252,252,252) #FCFCFC gray(252,252,252)
2: (252,252,252) #FCFCFC gray(252,252,252)
1: ( 1, 1, 1) #010101 gray(1,1,1)
1: ( 1, 1, 1) #010101 gray(1,1,1)
1: ( 2, 2, 2) #020202 gray(2,2,2)
1: ( 2, 2, 2) #020202 gray(2,2,2)
1: ( 2, 2, 2) #020202 gray(2,2,2)
1: ( 4, 4, 4) #040404 gray(4,4,4)
1: ( 5, 5, 5) #050505 gray(5,5,5)
1: ( 5, 5, 5) #050505 gray(5,5,5)
1: ( 5, 5, 5) #050505 gray(5,5,5)
1: ( 5, 5, 5) #050505 gray(5,5,5)
1: ( 5, 5, 5) #050505 gray(5,5,5)
1: ( 5, 5, 5) #050505 gray(5,5,5)
1: ( 6, 6, 6) #060606 gray(6,6,6)
1: ( 6, 6, 6) #060606 gray(6,6,6)
1: ( 6, 6, 6) #060606 gray(6,6,6)
1: ( 6, 6, 6) #060606 gray(6,6,6)
...
At depth 8 the duplicate gray values should probably MERGE, and counts add!
Adding -depth 16 however resolved the problem.
Okay... back to it..
Get the raw histogram, extract the first two numbers from each line, and sort it by the second gray-scale value
to place the histogram in the right order.
Code: Select all
convert zelda3.jpg -colorspace gray -depth 16 -format %c histogram:info: | tr -cs '0-9\012' '\t' | cut -f2,3 | sort -n -k2 > histogram.txt
First number is the pixel count, the second the grayscale value....
1 59883
1 59885
2 59889
1 59891
2 59897
1 59902
1 59904
1 59905
1 59906
2 59912
1 59913
6 59919
1 59921
1 59922
2 59931
2 59934
1 59937
1 59938
...
Of course as this is a 16 bit histogram, the binning is very very sparse, and rather useless for generating a graph, but it is perfectly accurate!
Here is a awk script to 'bin' it into 8-bit values, though this is only needed if you want to graph it!
Code: Select all
awk '# bin into 8-bit values
{ bin[int($2/256)] += $1 }
END { for (i=0;i<256;i++) print bin[i]+0, i; }
' histogram.txt > histogram-8bit.txt
It also has the advantage of 'filling-in' the missing 'zero' value bins.
This is probably what IM -depth 8 histogram output should have done!
And graph it
Code: Select all
gnuplot
plot [0:255] "histogram-8bit.txt" using 2:1 with impulses
instead of 'impulses' try 'boxes' or 'boxes fs solid'
The colors are in black to white order, so it is only a matter of converting the above histogram into a cumulative histogram, by addition, I'll use the more accurate 16 bit version...
Code: Select all
awk '{cum+=$1; print cum,$2;}' histogram.txt > cumulative.txt
The first value is the cumulative count, the second is the gray value for that count!1 231
2 307
3 453
4 499
5 560
6 1046
7 1227
8 1247
9 1278
10 1295
11 1306
12 1358
13 1468
...
16377 64555
16378 64614
16379 64643
16381 64706
16383 64823
16384 65033
And graphing this...
Code: Select all
gnuplot
plot "cumulative.txt" using 2:1 with lines
Anthony Thyssen -- Webmaster for ImageMagick Example Pages
https://imagemagick.org/Usage/
https://imagemagick.org/Usage/
- fmw42
- Posts: 25562
- Joined: 2007-07-02T17:14:51-07:00
- Authentication code: 1152
- Location: Sunnyvale, California, USA
Re: Summed Area Tables or Integral Images
My redist and histmatch script do pretty much what you are doing in the shell, but I wrote a loop to do the integration (summing) rather than awk (as I know nothing, yet, about awk). My shell script is currently faster than the IM script above, but still not very quick. That was why I was exploring the summed area table as a means of doing it with an IM function or other IM techniques which might be faster.
This was what I did to get the histogram from IM:
value=`convert $img -format %c -depth 8 histogram:info: | sort -k 2 -b | sed -n 's/^ *[0-9]*: [(]\([0-9 ]*\).*$/\1/ p'`
count=`convert $img -format %c -depth 8 histogram:info: | sort -k 2 -b | sed -n 's/^ *\([0-9]*\): [(].*$/\1/ p'`
Then I wrote a loop to fill in the histogram with the empty bins and combine counts for the duplicate bin values (as you noticed)
Then I wrote a script loop to generate the cumulative histogram.
That all works. I was just looking for a faster way by using IM commands rather than shell commands.
GNUPLOT is useful for graphing, but it is of less use elsewise here and I don't want to rely upon some external routine in my scripts, if I can avoid it. I simply used my profile script to demonstrate the results. That was all. It has nothing to do with the histogram process or anything else in my redist script.
So I found that I could generate a 1D image histogram, which in itself is interesting and useful and that then I could cumulate it with a loop using -scale. But it turned out that method was slower. I was hoping that perhaps -fx could do it without looping, but now I don't think that is possible. It does not seem that IM has any integration functions that are available in command line and I thought that such might be useful. Although I only needed to do it 1D, the summed area table is a method that would also do it in 2D. I was not sure if anyone had any other purpose for it, except you might for multi-resolution resampling. So I was not sure if it was worth coding.
The bottom line is I am looking to find a way to speed up my redist and histmatch. At one point, Magick had coded a command line function, but it got pulled back for some reason and I never got to test or play with it. Any chance it might be resurrected to experiment with it. Do you know what happened to it?
This was what I did to get the histogram from IM:
value=`convert $img -format %c -depth 8 histogram:info: | sort -k 2 -b | sed -n 's/^ *[0-9]*: [(]\([0-9 ]*\).*$/\1/ p'`
count=`convert $img -format %c -depth 8 histogram:info: | sort -k 2 -b | sed -n 's/^ *\([0-9]*\): [(].*$/\1/ p'`
Then I wrote a loop to fill in the histogram with the empty bins and combine counts for the duplicate bin values (as you noticed)
Then I wrote a script loop to generate the cumulative histogram.
That all works. I was just looking for a faster way by using IM commands rather than shell commands.
GNUPLOT is useful for graphing, but it is of less use elsewise here and I don't want to rely upon some external routine in my scripts, if I can avoid it. I simply used my profile script to demonstrate the results. That was all. It has nothing to do with the histogram process or anything else in my redist script.
So I found that I could generate a 1D image histogram, which in itself is interesting and useful and that then I could cumulate it with a loop using -scale. But it turned out that method was slower. I was hoping that perhaps -fx could do it without looping, but now I don't think that is possible. It does not seem that IM has any integration functions that are available in command line and I thought that such might be useful. Although I only needed to do it 1D, the summed area table is a method that would also do it in 2D. I was not sure if anyone had any other purpose for it, except you might for multi-resolution resampling. So I was not sure if it was worth coding.
The bottom line is I am looking to find a way to speed up my redist and histmatch. At one point, Magick had coded a command line function, but it got pulled back for some reason and I never got to test or play with it. Any chance it might be resurrected to experiment with it. Do you know what happened to it?
- anthony
- Posts: 8883
- Joined: 2004-05-31T19:27:03-07:00
- Authentication code: 8675308
- Location: Brisbane, Australia
Re: Summed Area Tables or Integral Images
fmw42 wrote:This was what I did to get the histogram from IM:
Code: Select all
value=`convert $img -format %c -depth 8 histogram:info: | sort -k 2 -b | sed -n 's/^ *[0-9]*: [(]\([0-9 ]*\).*$/\1/ p'`
count=`convert $img -format %c -depth 8 histogram:info: | sort -k 2 -b | sed -n 's/^ *\([0-9]*\): [(].*$/\1/ p'`
The awk script above does something similar, but probably faster, as there is only one histogram and sort generated and the values are kept together. And awk for the cumulation would be very much faster than the shell, and it can be made part of the pipeline.
To read from the file in the sh you can read the two values like this
Code: Select all
while read count color; do
do_something_with $count $color
done < histogram.txt
You can also just pipe in the histogram,
Code: Select all
convert .... | ... | ... |
while read count color; do
...
done
fmw42 wrote:The bottom line is I am looking to find a way to speed up my redist and histmatch. At one point, Magick had coded a command line function, but it got pulled back for some reason and I never got to test or play with it. Any chance it might be resurrected to experiment with? Do you know what happened to it?
I understand; just trying to be more helpful. A similar perl script could also replace the awk.
Though please see my point about the default 8-bit histogram output being broken!
I was surprised that was the case!
Anthony Thyssen -- Webmaster for ImageMagick Example Pages
https://imagemagick.org/Usage/
- fmw42
- Posts: 25562
- Joined: 2007-07-02T17:14:51-07:00
- Authentication code: 1152
- Location: Sunnyvale, California, USA
Re: Summed Area Tables or Integral Images
Anthony wrote:I understand; just trying to be more helpful. A similar perl script could also replace the awk.
Though please see my point about the default 8-bit histogram output being broken!
I was surprised that was the case!
Thanks, I appreciate your input.
It has been that way for a long time. I just assumed, so long ago when I originally wrote redist, that that was simply the way of IM histograms, so I never complained. I dealt with it in my script by combining counts whenever there were multiple bins for the same graylevel.
The other thing is that IM histograms do not include the empty bins. So I had to fill those out also. That was the advantage of writing the histogram out as a 1D image. It was all taken care of.
- anthony
- Posts: 8883
- Joined: 2004-05-31T19:27:03-07:00
- Authentication code: 8675308
- Location: Brisbane, Australia
Re: Summed Area Tables or Integral Images
True, but you do lose detail.
By the way, as all the values are fully defined, you can even convert the histogram back into an image!
I did not know the -depth 8 histogram binning was broken until I tried it. Though it does make sense, as it is probably just the output generator that is doing 8-bit, while the histogram procedure feeds it 16-bit values, thus producing duplicate lines.
Hmmm... it looks like it is only broken IF -depth is undefined!
No matter. The awk 8-bit binning script can not only convert the -depth 16 values to 8-bit bins (or whatever you need), but it can also re-bin the -depth 8 output properly (just remove the '/256' from the script). It will also output the empty bins for you.
Finally, you will not need to sort (an expensive process), as that is done by the binning process; technically it is a "bin sort" algorithm. You also don't need the cut. Though you will need 'tr' (or an equivalent, such as 'sed') to clean up the input to plain numbers.
You can also reorder the output so the color is first, then the pixel count.
So going back to your handler...
Code: Select all
img=zelda3.jpg
convert $img -format %c histogram:info:- | tr -cs '0-9\012' ' ' |
awk '{ bin[int($2)] += $1 } END { for (i=0;i<256;i++) print i, bin[i]+0; }' |\
while read color count; do
...do what you need here...
done
For example, using a PGM text format (man pgm), e.g. "P2 width height max values....":
Code: Select all
convert $img -format %c histogram:info:- | tr -cs '0-9\012' ' ' |
awk '{ bin[int($2)] += $1; max = max>bin[int($2)] ? max : bin[int($2)]; }
END { print "P2 256 1 " max; for (i=0;i<256;i++) print bin[i]+0; }' |\
im_profile - histogram.png
A VERY accurate profile, and probably very fast too...
Note that I also specified in the file format that the 'maximum bin value' should be white, so IM normalizes it to its internal bit depth as it reads the image in. A shame you can't specify the exact 'max value' to use when writing out a PGM image.
Also note that the man page lists "P5" instead of "P2", but that is the binary PGM format, not the ASCII text PGM that I am generating and which the man page describes.
ASIDE: Remember, I was the one who released the 1995 patch version of the NetPBM image processing package, so I know this image file format very well! It is in some ways simpler than the IM 'txt:' format, but it is limited to defining pixels in the correct order.
Anthony Thyssen -- Webmaster for ImageMagick Example Pages
https://imagemagick.org/Usage/
- fmw42
- Posts: 25562
- Joined: 2007-07-02T17:14:51-07:00
- Authentication code: 1152
- Location: Sunnyvale, California, USA
Re: Summed Area Tables or Integral Images
Thanks. I will have to study this. It looks promising for speeding up the histogram and cumulative histogram.
I have been using PGM to do similar things already. You had told me about that. A number of my scripts use it. Redist uses it now to convert the lut array (which I generate from the two text cumulative arrays - the heart of the algorithm) into an image for -clut to apply.
I will discuss this further with you by PM to see if you can speed the computation of the lut array via awk.
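For anyone following along, the lut-array-to-image step can be sketched roughly like this (the identity ramp lut, the 256-entry size, and the file names are illustrative assumptions; redist's actual lut comes from matching the two cumulative arrays):

```shell
# Emit a hypothetical 256-entry LUT as a 256x1 ASCII PGM (P2) image
# that -clut can consume. Here the LUT is just the identity ramp.
lut=$(seq 0 255 | tr '\n' ' ')
echo "P2 256 1 255 $lut" > lut.pgm
# convert image.png lut.pgm -clut result.png   # apply the LUT (needs IM)
```

The PGM header plus values can all sit on one line, since the format only requires whitespace-separated tokens.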
- fmw42
- Posts: 25562
- Joined: 2007-07-02T17:14:51-07:00
- Authentication code: 1152
- Location: Sunnyvale, California, USA
Re: Summed Area Tables or Integral Images
With Anthony's help and some study of awk (and changing while loops to for loops), I have been able to make a 10x speed improvement (on a 128x128 image) in redist and histmatch.
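For the record, the heart of that speedup is folding the binning and cumulation into a single awk pass over the IM text histogram; a rough sketch based on Anthony's script above (it assumes ImageMagick is installed and the input image exists; the real redist then normalizes this cumulative histogram into a lut):

```shell
img=zelda3.jpg
# One pass: bin the "count: (r,g,b) ..." histogram lines by graylevel,
# then cumulate all 256 bins (empty ones included) in the END block.
convert $img -format %c histogram:info:- | tr -cs '0-9\012' ' ' |
awk '{ bin[int($2)] += $1 }
     END { for (i=0;i<256;i++) { sum += bin[i]; print i, sum } }' > cumhist.txt
```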