agss wrote: I've constructed an example to try and figure out what the argument to -contrast-stretch actually is: an artificial image with three shades of gray in 100x100 pixel squares.
Code:
# three 100x100 squares of gray, left to right: rgb(64), rgb(128), rgb(192)
convert -size 100x100 'xc:rgb(192,192,192)' \
-bordercolor 'rgb(128,128,128)' -border 100 \
-bordercolor 'rgb(64,64,64)' -border 100 \
-crop 300x100+0+200 \
test.ppm
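(Possibly there's a better way to construct this? One candidate, untested here, appends three solid tiles side by side with +append:)
Code:
convert -size 100x100 'xc:rgb(64,64,64)' 'xc:rgb(128,128,128)' \
'xc:rgb(192,192,192)' +append test.ppm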
Now do -contrast-stretch 0:
Code:
# display the result in an X window
convert test.ppm -contrast-stretch 0 x:
The darkest gray has now gone black and the lightest one white. That's what I expected. Now, say I replace 0 with an integer black-point in percent form. The results fall into ranges:
- 0% through 33%: the same result; the darkest gray goes black, the lightest goes white, the middle gray is unchanged.
- 34% through 66%: the image is completely unchanged; nothing goes black or white.
- 67% through 99%: the two darkest grays both go black and the lightest goes white.
- 100%: the entire image goes black.
Since one third of my image is in each shade of gray, this speaks for the black-point actually being a pixel count. However, I can't say I understand the 34% through 66% behaviour: I would have expected the same result as for either 33% or 67%, depending on how aggressive -contrast-stretch is. (That said, doing nothing in every case arguably satisfies the documented requirements on its behaviour. :^)
I get the same results with the non-percent form, given that my image has 10000 pixels in each shade of gray:
- 9999: black, gray, white.
- 10000 and 19999: an unchanged image.
- 20000 and 29999: black, black, white.
- 30000: all black.
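For anyone reproducing this without an X display, a loop like the following (a sketch; histogram:info:- just dumps a compact histogram to stdout) prints the resulting shade counts for each setting:
Code:
for v in 0% 33% 34% 66% 67% 99% 100% 9999 10000 19999 20000 29999 30000; do
    echo "== -contrast-stretch $v =="
    convert test.ppm -contrast-stretch "$v" -format %c histogram:info:-
done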
So black-point and white-point do seem to represent pixel counts, as documented, but I'm still a little confused by the behaviour I see, and my initial observation that a non-percentage white-point doesn't behave in the documented manner still seems to hold.
Hopefully someone can clarify matters.
Anders
I did some tests, and it appears that -contrast-stretch does use counts or percent counts.
I made a 1x1000 pixel gradient and rotated it into a 1000x1 row:
convert -size 1x1000 gradient: -rotate 90 grad1000.png
identify -verbose grad1000.png
shows 1 pixel at each of 1000 gray levels between 0 (black) and 65535 (white).
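A quicker check than scanning the verbose output is the %k property, which reports the number of unique colors; for this gradient it should print 1000:
identify -format "%k\n" grad1000.png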
Then I issued the following to clip by percent count of 10% on each end:
convert grad1000.png -contrast-stretch 10x90% grad1000_cs10x90pct.png
identify -verbose grad1000_cs10x90pct.png
...
Histogram:
101: ( 0, 0, 0) #000000000000 black
101: (65535,65535,65535) #FFFFFFFFFFFF white
Thus 10%, or 100 pixels, were clipped to black at one end and to white at the other, in addition to the 1 pixel already at each extreme, giving a count of 101 on each end.
Then I issued the following to clip by count of 10 on each end:
convert grad1000.png -contrast-stretch 10x990 grad1000_cs10x990.png
identify -verbose grad1000_cs10x990.png
...
Histogram:
11: ( 0, 0, 0) #000000000000 black
11: (65535,65535,65535) #FFFFFFFFFFFF white
Thus 10 pixels were clipped to black at one end and to white at the other, in addition to the 1 pixel already at each extreme, giving a count of 11 on both ends.
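(The total pixel count used in the arithmetic below can be read with an fx expression; for this gradient it should print 1000:)
identify -format "%[fx:w*h]\n" grad1000.png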
Thus the correct use of -contrast-stretch is: in non-percent form, a pixel count measured from 0 at the black/low end and (totalpixels - count) at the white/high end; in percent form, percentcount from the black/low end and (100 - percentcount) from the white/high end.
Thus the documentation is confusing, if not wrong. The values actually mean the following (a small shell sketch of this rule comes after the definitions):
black-point value = either the number of pixels to black out or the percent of pixels to black out [at the low end of the histogram].
white-point value = either (total pixels minus the number of pixels to white out) or (100 minus the percent of pixels to white out) [at the high end of the histogram].
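Put as shell code, the rule amounts to something like this (a sketch only; clip_pct and clip_count are made-up helper names, not ImageMagick features):

# clip lo percent of pixels to black and hi percent to white
clip_pct() {
    in=$1; lo=$2; hi=$3; out=$4
    convert "$in" -contrast-stretch "${lo}x$((100 - hi))%" "$out"
}

# clip nlo pixels to black and nhi pixels to white
clip_count() {
    in=$1; nlo=$2; nhi=$3; out=$4
    total=$(identify -format "%[fx:w*h]" "$in")
    convert "$in" -contrast-stretch "${nlo}x$((total - nhi))" "$out"
}

# these should reproduce the two gradient tests above:
clip_pct   grad1000.png 10 10 grad1000_cs10x90pct.png
clip_count grad1000.png 10 10 grad1000_cs10x990.png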