Page 2 of 3
Re: Make an Adaptive Histogram Equalization
Posted: 2013-06-21T03:35:47-07:00
by cedricb
@snibgo: any thoughts on applying this enhancement technique (CLAHE) to an image that has been output by enfuse (exposure fusion)? At the moment I'm using a simple -sigmoidal-contrast 2x50%
Re: Make an Adaptive Histogram Equalization
Posted: 2013-06-21T07:51:05-07:00
by snibgo
I use other tools in the Hugin toolset, but have limited experience with enfuse.
Processing an image that has been built by enfuse will discard information. In your example, it will compress the tones at the extremes and leave gaps in the central portion of the histogram. You will lose less information if you can persuade enfuse to do your required changes as part of its processing.
Alternatives include pre-processing files before sending them to enfuse, or using another program (such as ImageMagick) instead of enfuse to process the images.
Re: Make an Adaptive Histogram Equalization
Posted: 2013-06-28T22:30:58-07:00
by fmw42
snibgo wrote:
My own software. The contrast limiting is very simple: after reading the histogram, it can cap each value before calculating the cumulative. For the cap value, the code calculates the mean and standard deviation of the histogram counts. The user specifies a multiple of the standard deviation. The cap is the mean, plus this multiple times the StdDev.
This comes after other processes such as filling null counts. (A 14-bit camera can only generate 2^14 values, so a 16-bit histogram has counts in only about 1/4 of the values. The histogram is shaped like a comb, which confuses some processes. I fill in null counts by spreading non-zero counts.)
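Not snibgo's actual cHisto code, but the capping step described above can be sketched in a few lines (a NumPy sketch with a hypothetical function name; the null-filling step is omitted for brevity):

```python
import numpy as np

def capped_equalize_lut(hist, n_stddev=2.0):
    """Build an equalization LUT, capping each histogram count at
    mean(counts) + n_stddev * stddev(counts) before the cumulative."""
    hist = np.asarray(hist, dtype=float)
    cap = hist.mean() + n_stddev * hist.std()   # user-chosen multiple of StdDev
    capped = np.minimum(hist, cap)              # contrast limiting
    cdf = np.cumsum(capped)
    cdf /= cdf[-1]                              # normalise cumulative to 0..1
    levels = len(hist) - 1
    return np.rint(cdf * levels).astype(int)    # LUT: old level -> new level

# Tiny demo: a spiky 5-bin histogram with a tight cap.
lut = capped_equalize_lut([1, 1, 50, 1, 1], n_stddev=0.5)
```

A larger n_stddev raises the cap, so the result approaches ordinary equalization; a smaller one flattens the spike's influence and keeps the LUT closer to linear.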
snibgo,
Do you have an example input and output image that you could provide? I would be interested in comparing your results to my gaussian redist script.
Re: Make an Adaptive Histogram Equalization
Posted: 2013-06-29T13:11:03-07:00
by snibgo
Here's an example. Images are at end of post.
A1.png is the source image. It is a resize of an in-camera JPEG, 8-bits/channel, so the quality isn't high.
a1eq.jpg is equalized by IM 6.8.6-0.
Code: Select all
%IM%convert A1.png -equalize a1Eq.jpg
a1a.jpg is equalised with my software.
Code: Select all
cHisto /ia1hist.txt /IA1.png /Pa1.ppm /k /L1
%IM%convert A1.png a1.ppm -clut a1a.jpg
The result is different to IM's equalization, possibly because I construct the histogram from all the values in all the channels. I then fill nulls (as described above). From this, I make an ordinary CLUT file (a1.ppm) and IM uses this to convert the image.
Filling nulls should make no difference to an ordinary equalization but does affect the capping calculation.
Equalisation reveals detail in the dark tree on the left, but conceals detail in the clouds near the lamp post. My equalization doesn't conceal the clouds as much as IM's equalization.
a1c2.jpg is equalised, but capping the counts at the mean count plus twice the standard deviation of the counts. (For this image, MeanCount = 10.9863, StdDevCount = 12.8473, so counts are capped to 37.)
Code: Select all
cHisto /ia1hist.txt /Pa1.ppm /k /L1 /N2
%IM%convert A1.png a1.ppm -clut a1c2.jpg
The result is between the original and equalization, showing detail everywhere but without looking too artificial (to my eyes).
For me, capped equalization is a useful technique for bringing out detail without exaggerating noise in large areas of fairly constant tone. This example image doesn't have noise.
a1c3.jpg is the same, but capped at mean + 3 * StdDev = 50. The result is visually the same as the uncapped equalization, a1a.png.
Code: Select all
cHisto /ia1hist.txt /Pa1.ppm /k /L1 /N3
%IM%convert A1.png a1.ppm -clut a1c3.jpg
A1.png
A1eq.png
A1a.png
A1c2.png
A1c3.png
Re: Make an Adaptive Histogram Equalization
Posted: 2013-06-29T15:40:21-07:00
by fmw42
Thanks. Am I right to assume that all but A1eq are done with CLAHE?
Here is the current fixed -equalize from 6863 (see
viewtopic.php?f=3&t=23643#p100076)
convert A1.png -equalize A1eq6863.png
For comparison, the next two are from my redist using uniform mode, rec709luma grayscale and global (all channels merged) for the cumulative histogram.
redist -s uniform -m rgb -g rec709 A1.png A1_rdist_uni_rec709.png
redist -s uniform -m global A1.png A1_rdist_uni_global.png
Next are several variations using a gaussian shape, rec709luma for grayscale, and varying mean,lowsigma,highsigma (where the sigmas control the contrast). The default is 60,60,60.
redist -s gaussian -m rgb -g rec709 50,80,80 A1.png A1_redist_gauss_709_50x80x80.png
redist -s gaussian -m rgb -g rec709 55,70,70 A1.png A1_redist_gauss_709_55x70x70.png
redist -s gaussian -m rgb -g rec709 60,60,60 A1.png A1_redist_gauss_709_60x60x60.png
The next set uses global (all channels) for grayscale. It shows some color discontinuity in the sky near its boundary with the clouds, but it shows the detail in the clouds well.
redist -s gaussian -m global 40,100,100 A1.png A1_redist_gauss_global_40x100x100.png
redist -s gaussian -m global 45,90,90 A1.png A1_redist_gauss_global_45x90x90.png
redist -s gaussian -m global 50,80,80 A1.png A1_redist_gauss_global_50x80x80.png
Re: Make an Adaptive Histogram Equalization
Posted: 2013-06-29T17:55:28-07:00
by snibgo
fmw42 wrote:Do I assume that all but A1eq are done with CLAHE?
My examples above are Contrast Limited Histogram Equalisation, without the "Adaptive" feature. Adaptive adds another layer of complexity/confusion. It increases local contrast, and hence the visibility of detail, at the expense of some artificiality between areas.
Here's a full Contrast Limited Adaptive Histogram Equalisation. I implement adaptation by weighting the histogram by the alpha channel. Helpfully, "convert infile -depth 16 -format %c histogram:info:- >hist.txt" takes account of alpha. The alpha channel might indicate whether the pixel falls in a highlight area, in which case it creates a CLUT suitable for the highlights. Here, I create just two images, each with suitable alpha values. My software runs once per image, and then IM composes them together.
The script has essentially three variables (for simplicity, shown here as constants):
-- 5% for the resizing blur
-- 60% for the highlights level
-- 60% for the shadows level
A more complex script can segment the image into an arbitrary number of levels.
It could instead segment the image into a grid of rectangles. I haven't tried this.
Code: Select all
set SRC=A1.png
FOR /F "usebackq" %%L IN (`%IM%identify -format "WW=%%w\nHH=%%h" %SRC%`) DO set %%L
%IM%convert %SRC% ^
-resize 5%% -resize "%WW%x%HH%^!" ^
-normalize ^
b.png
rem Highlights
%IM%convert %SRC% ( b.png -level 0%%,60%% ) -compose CopyOpacity -composite a1b1.png
rem Equalised with cap
cHisto /ia1hist.txt /Ia1b1.png /Pa1.ppm /k /L1 /N2
%IM%convert %SRC% a1.ppm -clut a1b1o.png
rem Shadows
%IM%convert %SRC% ( b.png -level 60%%,100%% -negate ) -compose CopyOpacity -composite a1b2.png
rem Equalised with cap
cHisto /ia1hist.txt /Ia1b2.png /Pa1.ppm /k /L1 /N2
%IM%convert %SRC% a1.ppm -clut a1b2o.png
rem Merge
%IM%convert ^
a1b2o.png ^
a1b1o.png ^
b.png ^
-compose Over -composite ^
a1CLAHE2.png
Re: Make an Adaptive Histogram Equalization
Posted: 2013-06-29T18:29:31-07:00
by fmw42
So if I understand, you generate a cumulative histogram for the highlights and shadows image that have been limited by the alpha channel and then equalize by using the cumulative histogram as the clut. Do you do any special clipping of the histogram in your cHisto?
Interesting, I did not know you could get the alpha channel to limit the histogram! Or does it just show the alpha value for each bin and your cHisto does the limiting? Would you clarify further?
You also generate a low resolution version of the image by resizing down to 5% and resizing back up to full resolution (with -normalize added).
Then you composite the 3 images with compose over.
But I am puzzled by the composition. -compose allows 3 images but the last is the mask. You seem to be using the low resolution image as a mask. Would you clarify further? How does this low resolution image work for a mask to blend the shadow and highlight versions? I understand the mask concept and use it all the time, but I don't see why this low resolution image is relevant here as a mask.
Code: Select all
%IM%convert %SRC% ^
-resize 5%% -resize "%WW%x%HH%^!" ^
-normalize ^
b.png
You use this low resolution image for the alpha channels with -level but it is color and not grayscale. Is this correct? I thought the alpha channel needed to be a grayscale image.
Your result is very interesting. But it shows the same color discontinuity as my redist (both uniform and gaussian) when using the combined channels (global mode) for the histogram rather than a conversion to grayscale, which does not show that discontinuity. Have you tried running the same script using -colorspace gray to get the histogram?
Your method shows better color definition than my redist. Here are a few variations.
redist -m global 40,120,120 A1.png A1_rdist_global_gauss_40x120x120.png
redist -m global 40,120,60 A1.png A1_rdist_global_gauss_40x120x60.png
redist -m global 50,120,60 A1.png A1_rdist_global_gauss_50x120x60.png
redist -m global 60,120,60 A1.png A1_rdist_global_gauss_60x120x60.png
Re: Make an Adaptive Histogram Equalization
Posted: 2013-06-29T19:56:40-07:00
by snibgo
It would be useful if IM had a read-mask. Sadly it doesn't (yet).
If I understand correctly, "convert infile -depth 16 -format %c histogram:info:- >hist.txt" treats alpha as just another channel, exactly like R, G and B. It lists all four RGBA values, with a count of how many pixels have those RGBA values. So it gives information about the entire image, and some counts will be for pixels with Alpha==0.
cHisto reads the histogram data: the count, then the 3 or 4 values, and I assume these are R, G, B and optionally alpha. I multiply the found count by alpha (scaling alpha to 0..1). I have an array of values that represent counts. For each colour value, I increment the array value by the multiplied count. In pseudocode:
Code: Select all
read (count, r, g, b, a);
if a is found then count = count * a/65535;
values[r].count += count;
values[g].count += count;
values[b].count += count;
So a pixel with alpha==0 will not participate in the count. Where alpha==1, it participates as normal. Similarly for 0 < alpha < 1. Then I fill the null values, then if required cap the counts, then calculate the cumulatives. There is also weird stuff with gamma, alternative curves (eg finding peaks nearest white and black, creating a curve that increases contrast between these peaks), white balance and other experimental stuff.
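A runnable Python version of that pseudocode might look like this (a sketch; the field order and 0..65535 scaling are assumptions taken from the description above):

```python
def accumulate_weighted(entries, depth_max=65535):
    """Alpha-weighted histogram accumulation.  entries are (count, r, g, b)
    or (count, r, g, b, a) tuples as parsed from the histogram file.
    Alpha scales the count, so fully transparent pixels are excluded and
    partially transparent ones contribute proportionally."""
    counts = [0.0] * (depth_max + 1)
    for entry in entries:
        count, r, g, b = entry[:4]
        if len(entry) == 5:                 # alpha found: weight the count
            count = count * entry[4] / depth_max
        counts[r] += count                  # each channel value bumps its bin
        counts[g] += count
        counts[b] += count
    return counts

# Two entries with the same colour; the fully transparent one adds nothing.
h = accumulate_weighted([(4, 100, 100, 100, 65535), (4, 100, 100, 100, 0)])
```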
Masks in this context (my final merge) don't need to be greyscale. I suppose IM converts it. This script is a simplified version; I would usually create the mask (b.png) as a greyscale.
Taking the histogram of greyscale images (using "-modulate 100,0,100"), leaving everything else the same, tends to burn out highlights. The result is below. This is expected; a pixel may be rgb(80%,90%,100%) but have a lightness of 90%. I generally use all three values in the histogram, so equalizing will stretch it less than if I just use the grey value.
The banding between the trees and sky is noticeably worse, but somewhat fixable by changing the three parameters mentioned above.
Re: Make an Adaptive Histogram Equalization
Posted: 2013-06-29T20:45:25-07:00
by fmw42
I believe I understand about how you process the histogram with the alpha. Very clever.
I am not clear on two points.
1) You use b.png as the alpha channel when using -compose copy_opacity. I thought that required a grayscale image. I am not sure what IM would do with a color image? Do you know? Is that really what you do?
2) But I am puzzled by the composition. -compose allows 3 images but the last is the mask. You seem to be using the low resolution image b.png as a mask. Would you clarify further? How does this low resolution image work for a mask to blend the shadow and highlight versions? I understand the mask concept and use it all the time, but I don't see why this low resolution image is relevant here as a mask.
Thanks
Re: Make an Adaptive Histogram Equalization
Posted: 2013-06-29T22:16:08-07:00
by snibgo
1. An experiment shows that the following two commands give almost identical results. I conclude that IM effectively translates the mask to greyscale, using HCL colorspace (or something very close to HCL).
Code: Select all
%IM%convert ^
a1b2o.png ^
a1b1o.png ^
b.png ^
-compose Over -composite ^
a1CLAHE2.png
%IM%convert ^
a1b2o.png ^
a1b1o.png ^
( b.png -set option:modulate:colorspace hcl -modulate 100,0,100 ) ^
-compose Over -composite ^
a1CLAHE3.png
%IM%compare -metric RMSE a1CLAHE2.png a1CLAHE3.png NULL:
123.58 (0.0018857)
2. I think of b.png not as "low resolution" but a "heavy blur". It identifies highlight and shadow areas, with a graduation between. Put this another way: it assigns a value to every pixel that determines whether the pixel is either within the darkest shadow area or lightest highlight area, or within some area between. If the pixel is within the lightest area, it should have the highlight CLUT applied to it. If in the darkest area, it should get the shadow CLUT. If it is between the two (and almost all pixels are actually between the two), we would ideally blend the two CLUTs in the correct proportion, and apply that special CLUT to the pixel. This is impractical.
(The blurring is essential. I don't care if a pixel is light or dark. I care only if it is in a light or dark area. The blurring has to be heavy to tell me this. I could use "-blur" or "-gaussian-blur", but they are very slow for this degree of blurring.)
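The merge can be thought of per pixel as a linear blend weighted by the blurred mask. A sketch (not the exact arithmetic IM performs internally):

```python
def blend(shadow_px, highlight_px, mask, depth_max=65535):
    """Per-pixel masked blend: mask == depth_max picks the
    highlight-equalized value, mask == 0 picks the shadow-equalized
    value, and in-between masks interpolate linearly, approximating a
    blend of the two CLUTs in the correct proportion."""
    w = mask / depth_max
    return round((1 - w) * shadow_px + w * highlight_px)
```

So a pixel in a mid-tone area (mask near 50%) gets roughly the average of the two equalized versions.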
The Wikipedia article on Adaptive histogram equalization explains one ideal adaptive method: for every pixel, calculate the histogram for a square with that pixel at the centre. (Why a square? I would think a circle would be more ideal. Never mind.) From that histogram, create the cumulative, and this is the CLUT. Calculating a histogram and CLUT for every pixel would be computationally expensive.
The wiki article describes a faster method that partitions the image into tiles, creates a CLUT for each, then interpolates between CLUTs for pixels that don't happen to be at the centre of a tile.
Tiling is all very well, but I prefer to partition the image by content, not geometry. The blurred image (b.png) partitions naturally, by tone, into 4 areas: sky, tree on right, bushes in foreground, and tree on left. If you look at a graph of a histogram of b.png, you will see 4 distinct peaks. I'm working on automatically recognising these peaks, segmenting the image accordingly, calculating histograms and CLUTs, applying the CLUTs and merging the results. As most pixels wouldn't be exactly on one peak, they would be interpolated between the CLUTs appropriate to the adjacent peaks.
Meanwhile, the cheap and cheerful technique -- partition into just two areas (b.png levelled in two different ways), then interpolate with unlevelled b.png -- works quite well.
Thanks for the comments and questions. They help me to think about (and understand, hopefully) what I am doing.
b.png
Re: Make an Adaptive Histogram Equalization
Posted: 2013-06-29T22:45:01-07:00
by fmw42
If IM converts your mask to grayscale, then I understand quite well what you are doing.
With regard to the resize doing blurring, I have used that concept before also.
In my earlier days, I developed some algorithms and had others code them to do the block interpolation concept explained in the Wikipedia article, not for histograms but for mean and standard deviation.
My script, space (spatially adaptive contrast enhancement) is essentially that kind of process. But the block interpolation is done via the resize concept you use above. So I understand the concepts. Unfortunately, it is slow because I have to combine the various terms using -fx.
My shadowhighlight script does something similar to what you have used to enhance shadows and highlights. It is modeled on the Photoshop function. It does a similar concept to isolate the highlights and shadows and then combines them using the alpha channels to merge them.
So I have used similar concepts in my scripts. But my main confusion was your use of color images for alpha channels and masks. I never knew that IM would convert them automatically to some form of grayscale. I always do my own conversion as appropriate.
What is quite unique is your use of the alpha channel to select/weight the pixels you want for your histograms and then of course your cHisto code to filter/limit the large bins before creating the cumulative histogram for the clut. I also like your adaptive concept using shadows and highlights rather than the block geometry concept.
Thanks for your detailed explanations.
Re: Make an Adaptive Histogram Equalization
Posted: 2013-07-01T17:40:33-07:00
by fmw42
snibgo wrote:a1c2.jpg is equalised, but capping the counts at the mean count plus twice the standard deviation of the counts. (For this image, MeanCount = 10.9863, StdDevCount = 12.8473, so counts are capped to 37.)
snibgo,
I was just trying to see if I could duplicate these statistics, and my std is quite a bit larger than yours.
I can use AWK to sort and fill the IM histogram, convert it to a netpbm grayscale image and then use IM to get the mean and std.
histArr=(`convert A1.png -separate +append -depth 16 -format "%c" histogram:info:- \
| tr -cs '0-9\012' ' ' |\
awk '
# AWK to generate a histogram
{ bin[int($2)] += $1; max = 0; }
{ for (i=0;i<65535;i++) max = bin[i]>max?bin[i]:max; }
END { for (i=0;i<65535;i++) {hist = bin[i]+0; print hist; }
} '`)
echo ${histArr[*]}
echo ${#histArr[*]}
echo ""
echo "P2 65535 1 65535\n ${histArr[*]}" | convert - -format "mean=%[mean],std=%[standard-deviation]" info:
Since A1.png is an 8 bit image, there are lots of zero filled bins for your use of a 16-bit histogram.
histogram (part of it -- apparently the whole thing is too much data to process in this BB software):
Code: Select all
313 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 220 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 297 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 394 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 516 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 601 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 746 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 860 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Length of histogram: 65535
Stats:
mean=10.9856,
std=269.983
The mean is the same as yours, but my std is much larger than your value 12.8473.
I am wondering if you have any clue why the difference?
Re: Make an Adaptive Histogram Equalization
Posted: 2013-07-01T20:26:56-07:00
by fmw42
Just to make sure IM was calculating the mean and std correctly, I computed it from AWK
Given the histArr from above:
echo ${histArr[*]} | tr " " "\n" |\
awk '
# AWK to generate mean and std of histogram
{ sum += $1; sumsq += $1*$1 }
END { mean=sum/65535; std=sqrt(sumsq/65535 - mean*mean); print mean, std } '
10.9856 269.983
first number is the mean and the second is the std. Both agree with what I got before.
Re: Make an Adaptive Histogram Equalization
Posted: 2013-07-02T06:36:25-07:00
by snibgo
Filling nulls makes a difference to the standard deviation.
Suppose we have four values: 0, 0, 0 and 100. The mean is 25. The standard deviation is calculated by:
SigSq = 10000
Variance = SigSq / num_values - mean * mean
Variance = 10000/4 - 25*25
Variance = 1875
StdDev = sqrt (variance) = sqrt (1875) = 43.3
After filling nulls, we have four values: 25, 25, 25, 25. The mean is still 25. The std dev should be zero.
Check:
SigSq = 4 * 25^2 = 2500
Variance = 2500/4 - 25*25 = 0
StdDev = 0.
So, in this example, filling nulls has reduced the standard deviation from 43.3 to zero.
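The arithmetic above can be checked in a few lines of Python, using the same sum-of-squares formula as the post:

```python
import math

def pop_std(values):
    """Population standard deviation via sqrt(E[x^2] - mean^2),
    matching the calculation shown above."""
    n = len(values)
    mean = sum(values) / n
    return math.sqrt(sum(v * v for v in values) / n - mean * mean)

before = [0, 0, 0, 100]     # raw counts: three nulls and one spike
after = [25, 25, 25, 25]    # nulls filled with the mean; mean unchanged
```

pop_std(before) is sqrt(1875), about 43.3, while pop_std(after) is exactly zero.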
Re: Make an Adaptive Histogram Equalization
Posted: 2013-07-02T10:01:25-07:00
by fmw42
OK. Thanks. I did not realize you filled with non-zero values. My std was computed by filling with zeros.
How do you determine what value to use to fill the missing bins from the IM histogram? Do you use the min value or perhaps the mean value from the IM histogram? Or perhaps mean - factor*sigma?
In your experiments, did it make a significant difference in the result when filling with non-zero values? That may be important when using 16-bit histogram, esp. with 8-bit images, but may not be much of an issue with 8-bit histogram, as it may have very few missing bins.
I wonder if you can really see a difference between using a 16-bit histogram and clut versus an 8-bit one? Did you test that? I have been using only an 8-bit histogram and clut in my redist script.