autotone - expected results
Hi there.
I am surprised by the results of running autotone on an image. Could anyone confirm this is what's expected?
Thanks
Original:
Autotone:
- fmw42
- Posts: 25562
- Joined: 2007-07-02T17:14:51-07:00
- Authentication code: 1152
- Location: Sunnyvale, California, USA
Re: autotone - expected results
Note you did not specify your exact command nor IM version or platform.
Yes that is what I get from a default set of arguments.
Your image is very flat in color and not the kind of natural image the script expects. Try the images in my examples, or other more natural-looking images.
Try a simpler approach:

Code: Select all
convert original.png -contrast-stretch 1% test1.png

or

Code: Select all
convert original.png -channel rgb -contrast-stretch 1% test2.png
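As a rough idea of what a percentile-type contrast stretch does numerically, here is a small Python sketch (my own illustration; ImageMagick's actual `-contrast-stretch` works on the histogram and its percentage semantics differ in detail):

```python
def contrast_stretch(values, clip_frac=0.01):
    """A rough sketch of the idea behind `-contrast-stretch 1%`:
    treat the darkest and brightest 1% of pixel values as out of
    range, then rescale the remainder to span the full 0..1 range.
    (Illustrative only; not ImageMagick's implementation.)"""
    s = sorted(values)
    n = len(s)
    lo = s[round(clip_frac * (n - 1))]          # 1st percentile
    hi = s[round((1.0 - clip_frac) * (n - 1))]  # 99th percentile
    if hi <= lo:                                # flat image: nothing to stretch
        return [0.0 for _ in values]
    return [min(1.0, max(0.0, (v - lo) / (hi - lo))) for v in values]
```

The clipping is why it helps a flat image: a small outlier no longer prevents the bulk of the histogram from being spread over the full range.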
Re: autotone - expected results
Ok, makes sense.
Having several of these images with different brightness, histograms, etc., how would you try to make them all more similar to each other? I was hoping one of your auto* scripts could do this, since they analyze the histograms and make adjustments from there.
- snibgo
- Posts: 12159
- Joined: 2010-01-23T23:01:33-07:00
- Authentication code: 1151
- Location: England, UK
Re: autotone - expected results
You might look at my "Gain and bias" page, which also mentions some alternative methods.
snibgo's IM pages: im.snibgo.com
- fmw42
Re: autotone - expected results
My script histmatch may work, but if your histogram is too sparse it will not work well. The best suggestion is snibgo's gain and bias, which basically matches the brightness and contrast of images: if you know the desired mean and standard deviation, it applies a global correction. My script space does something similar in an adaptive way. I would guess the global method is what you need for images similar to the one you posted.
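For anyone curious about the arithmetic behind the global matching: the gain and bias follow directly from the two pairs of channel statistics. A minimal Python sketch (my own illustration of the idea, not snibgo's script):

```python
def gain_bias(src_mean, src_sd, ref_mean, ref_sd):
    """Find a, b so that v_out = a * v_in + b gives the source
    channel the reference channel's mean and standard deviation."""
    if src_sd == 0:
        raise ValueError("flat source channel: gain is undefined")
    a = ref_sd / src_sd            # match the spread (contrast)
    b = ref_mean - a * src_mean    # then shift onto the mean (brightness)
    return a, b

# e.g. a channel with mean 0.32, SD 0.23 matched toward mean 0.50, SD 0.19
# (numbers loosely echoing the red channel statistics later in this thread):
a, b = gain_bias(0.32, 0.23, 0.50, 0.19)
```

Applying `a * v + b` to the source pixels then yields exactly the reference mean and SD, which is why a zero source SD (a perfectly flat channel) makes the division blow up.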
Re: autotone - expected results
@snibgo I'm sorry to be here again with errors... maybe it's me, but I get this error at calcGainBias:
identify.exe: unable to parse expression `igb2_mn_R' @ error/fx.c/FxGetSymbol/1848.
You can see the meanSdTr runs fine. It's the calcGainBias that errors out...
Using ImageMagick 6.9.3-10 Q16 x64 2016-05-04, on win10 x64.
I am using these images:
1-input: https://edia1-my.sharepoint.com/persona ... c241d67e3a
2-reference: https://edia1-my.sharepoint.com/persona ... ce23339c31
There is a divide by zero error, but I don't see any SD=0...
This got me thinking it may be related to using international settings where comma is the decimal place?
The output is:
Code: Select all
D:\temp\ortos\gainbias>imgGainBias.bat Product97.tif Product99_paintnet.tif Product97_gainbias.tif
D:\temp\ortos\gainbias>rem From image Product97.tif and reference image Product99_paintnet.tif
D:\temp\ortos\gainbias>rem applies gain and bias to match means and SD of reference.
D:\temp\ortos\gainbias>rem Output to Product97_gainbias.tif.
D:\temp\ortos\gainbias>call D:\temp\ortos\gainbias\meanSdTr Product97.tif igb1_
D:\temp\ortos\gainbias>rem From image Product97.tif with transparency,
D:\temp\ortos\gainbias>rem calculates mean and standard deviation.
D:\temp\ortos\gainbias>rem Prefixes output variable names with igb1_.
D:\temp\ortos\gainbias>for /F "usebackq" %L in (`convert Product97.tif -precision 19 ( -clone 0 -evaluate Pow 2 -scale "1x1^!" ) ( -clone 0 -scale "1x1^!" -format "igb1_mn_R=%[fx:mean.r]\nigb1_mn_G=%[fx:mean.g]\nigb1_mn_B=%[fx:mean.b]\n" +write info: -evaluate Pow 2 ) -delete 0 -alpha off -compose MinusSrc -composite -evaluate Pow 0.5 -format "igb1_sd_R=%[fx:mean.r]\nigb1_sd_G=%[fx:mean.g]\nigb1_sd_B=%[fx:mean.b]\n" info:`) do set %L
convert.exe: Unknown field with tag 33550 (0x830e) encountered. `TIFFReadDirectory' @ warning/tiff.c/TIFFWarnings/891.
convert.exe: Unknown field with tag 33918 (0x847e) encountered. `TIFFReadDirectory' @ warning/tiff.c/TIFFWarnings/891.
convert.exe: Unknown field with tag 33922 (0x8482) encountered. `TIFFReadDirectory' @ warning/tiff.c/TIFFWarnings/891.
convert.exe: Unknown field with tag 34735 (0x87af) encountered. `TIFFReadDirectory' @ warning/tiff.c/TIFFWarnings/891.
convert.exe: Unknown field with tag 34736 (0x87b0) encountered. `TIFFReadDirectory' @ warning/tiff.c/TIFFWarnings/891.
convert.exe: Unknown field with tag 34737 (0x87b1) encountered. `TIFFReadDirectory' @ warning/tiff.c/TIFFWarnings/891.
D:\temp\ortos\gainbias>set igb1_mn_R=0.3216144045166705
D:\temp\ortos\gainbias>set igb1_mn_G=0.40338750286106662
D:\temp\ortos\gainbias>set igb1_mn_B=0.38040741588464178
D:\temp\ortos\gainbias>set igb1_sd_R=0.23083848325322348
D:\temp\ortos\gainbias>set igb1_sd_G=0.11200122072175174
D:\temp\ortos\gainbias>set igb1_sd_B=0.089158464942397195
D:\temp\ortos\gainbias>call D:\temp\ortos\gainbias\meanSdTr Product99_paintnet.tif igb2_
D:\temp\ortos\gainbias>rem From image Product99_paintnet.tif with transparency,
D:\temp\ortos\gainbias>rem calculates mean and standard deviation.
D:\temp\ortos\gainbias>rem Prefixes output variable names with igb2_.
D:\temp\ortos\gainbias>for /F "usebackq" %L in (`convert Product99_paintnet.tif -precision 19 ( -clone 0 -evaluate Pow 2 -scale "1x1^!" ) ( -clone 0 -scale "1x1^!" -format "igb2_mn_R=%[fx:mean.r]\nigb2_mn_G=%[fx:mean.g]\nigb2_mn_B=%[fx:mean.b]\n" +write info: -evaluate Pow 2 ) -delete 0 -alpha off -compose MinusSrc -composite -evaluate Pow 0.5 -format "igb2_sd_R=%[fx:mean.r]\nigb2_sd_G=%[fx:mean.g]\nigb2_sd_B=%[fx:mean.b]\n" info:`) do set %L
convert.exe: Unknown field with tag 347 (0x15b) encountered. `TIFFReadDirectory' @ warning/tiff.c/TIFFWarnings/891.
convert.exe: Unknown field with tag 34735 (0x87af) encountered. `TIFFReadDirectory' @ warning/tiff.c/TIFFWarnings/891.
convert.exe: Unknown field with tag 34737 (0x87b1) encountered. `TIFFReadDirectory' @ warning/tiff.c/TIFFWarnings/891.
D:\temp\ortos\gainbias>set igb2_mn_R=0.50083161669336995
D:\temp\ortos\gainbias>set igb2_mn_G=0.49130998702983136
D:\temp\ortos\gainbias>set igb2_mn_B=0.48006408789196614
D:\temp\ortos\gainbias>set igb2_sd_R=0.19180590524147403
D:\temp\ortos\gainbias>set igb2_sd_G=0.16433966582742046
D:\temp\ortos\gainbias>set igb2_sd_B=0.15707637140459296
D:\temp\ortos\gainbias>call D:\temp\ortos\gainbias\calcGainBias igb1_ igb2_ igb_gb_
D:\temp\ortos\gainbias>rem From prefixes igb1_ and igb2_, calculates gain and bias to transform image 1 to be like igb2_.
D:\temp\ortos\gainbias>rem Prefixes output variable names with igb_gb_.
D:\temp\ortos\gainbias>rem If an SD==0, the script will attempt to divide by zero.
D:\temp\ortos\gainbias>for /F "usebackq" %L in (`identify -precision 19 -format "igb_gb_gain_R=%[fx:!igb2_sd_R!/!igb1_sd_R!]\nigb_gb_gain_G=%[fx:!igb2_sd_G!/!igb1_sd_G!]\nigb_gb_gain_B=%[fx:!igb2_sd_B!/!igb1_sd_B!]\n" xc:`) do set %L
identify.exe: unable to parse expression `igb2_sd_R' @ error/fx.c/FxGetSymbol/1848.
identify.exe: divide by zero `!igb2_sd_R!/!igb1_sd_R!' @ error/fx.c/FxEvaluateSubexpression/2176.
identify.exe: unknown image property "%[fx:!igb2_sd_R!/!igb1_sd_R!]" @ warning/property.c/InterpretImageProperties/3678.
identify.exe: unknown image property "%[fx:!igb2_sd_G!/!igb1_sd_G!]" @ warning/property.c/InterpretImageProperties/3678.
identify.exe: unknown image property "%[fx:!igb2_sd_B!/!igb1_sd_B!]" @ warning/property.c/InterpretImageProperties/3678.
D:\temp\ortos\gainbias>set igb_gb_gain_R=
D:\temp\ortos\gainbias>set igb_gb_gain_G=
D:\temp\ortos\gainbias>set igb_gb_gain_B=
D:\temp\ortos\gainbias>for /F "usebackq" %L in (`identify -precision 19 -format "igb_gb_bias_R=%[fx:!igb2_mn_R!-!igb1_mn_R!*!igb_gb_gain_R!]\nigb_gb_bias_G=%[fx:!igb2_mn_G!-!igb1_mn_G!*!igb_gb_gain_G!]\nigb_gb_bias_B=%[fx:!igb2_mn_B!-!igb1_mn_B!*!igb_gb_gain_B!]\n" xc:`) do set %L
identify.exe: unable to parse expression `igb2_mn_R' @ error/fx.c/FxGetSymbol/1848.
D:\temp\ortos\gainbias>set igb_gb_bias_R=
D:\temp\ortos\gainbias>set igb_gb_bias_G=
D:\temp\ortos\gainbias>set igb_gb_bias_B=
D:\temp\ortos\gainbias>convert Product97.tif -channel R -function Polynomial !igb_gb_gain_R!,!igb_gb_bias_R! -channel G -function Polynomial !igb_gb_gain_G!,!igb_gb_bias_G! -channel B -function Polynomial !igb_gb_gain_B!,!igb_gb_bias_B! +channel Product97_gainbias.tif
convert.exe: Unknown field with tag 33550 (0x830e) encountered. `TIFFReadDirectory' @ warning/tiff.c/TIFFWarnings/891.
convert.exe: Unknown field with tag 33918 (0x847e) encountered. `TIFFReadDirectory' @ warning/tiff.c/TIFFWarnings/891.
convert.exe: Unknown field with tag 33922 (0x8482) encountered. `TIFFReadDirectory' @ warning/tiff.c/TIFFWarnings/891.
convert.exe: Unknown field with tag 34735 (0x87af) encountered. `TIFFReadDirectory' @ warning/tiff.c/TIFFWarnings/891.
convert.exe: Unknown field with tag 34736 (0x87b0) encountered. `TIFFReadDirectory' @ warning/tiff.c/TIFFWarnings/891.
convert.exe: Unknown field with tag 34737 (0x87b1) encountered. `TIFFReadDirectory' @ warning/tiff.c/TIFFWarnings/891.
- snibgo
Re: autotone - expected results
Thanks for including the text output. I think I see the problem.
Most of my scripts have a line near the start:

Code: Select all
setlocal enabledelayedexpansion

My computer doesn't need that line, so I sometimes forget to include it. Without that line, Windows doesn't expand variables with exclamation marks, like !var!.

So, try inserting that line at the top of your script.
snibgo's IM pages: im.snibgo.com
Re: autotone - expected results
Fantastic!! That did it. What a relief... starting to think I was jinxed...
Re: autotone - expected results
Hi there again.
Looking at the results of gainbias and histmatch, they are sometimes good, sometimes not so good, as would be expected, since we are trying to get a set of images to look as much as possible like another reference image.
I think this is a different kind of problem: how to transform a set of images toward a homogeneous "color surface" that has to be computed by looking at all the images before transforming any of them.
Something I found similar to this is Color dodging balancing:
http://desktop.arcgis.com/en/arcmap/10. ... atalog.htm
The algorithm is, I think, described here (Xiuguang Zhou, ISRSE-36, Berlin, 2015):
http://www.int-arch-photogramm-remote-s ... 5-2015.pdf
Looking at the abstract in the PDF, it seems pretty simple (attempted humour):
"(...)To obtain color seamless mosaic dataset, local color is adjusted adaptively towards the target color. Local statistics of the source images are computed based on the so-called adaptive dodging window. The adaptive target colors are statistically computed according to multiple target models. The gamma function is derived from the adaptive target and the adaptive source local stats. It is applied to the source images to obtain the color balanced output images. Five target color surface models are proposed. They are color point (or single color), color grid, 1st, 2nd and 3rd 2D polynomials. Least Square Fitting is used to obtain the polynomial target color surfaces. Target color surfaces are automatically computed based on all source images or based on an external target image. (...)"
Some of Fred's scripts already compute target transformations from the original image or from a target image. The point seems to be to get target transformations from a set of images.
The results published in the article are superb.
There is high demand for this kind of processing in the (my) GIS area, since many non-experts get aerial or satellite imagery that often lacks quality color processing.
So I was wondering what your thoughts would be on this, and whether there is any existing script that could be adapted. Maybe gainbias could calculate these "global" transformations instead of looking at just one reference image?
- snibgo
Re: autotone - expected results
Thanks for the links. The Zhou paper looks very interesting.
The problem of seamless joining of mosaic images is similar to making a movie out of time-lapse photos from a fixed camera of a building site (see recent thread), but more difficult.
Changing all images to a common standard sometimes works well, but in the general case we want an evolving standard. A set of aerial photos might include areas of towns, vegetation, mountain and desert. The standard needs to evolve between areas -- a moving window. For a seamless mosaic, we need to blend between standards.
The Zhou paper uses "vOut = vIn ^ gamma", which is one-dimensional. It affects brightness and contrast, but not independently.
The gain-and-bias method is based on "vOut = vIn * a + b". It is two-dimensional, changing lightness and contrast independently.
Of course, either technique can be applied independently to RGB channels or HSL or Lab or whatever.
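For the gamma model, the single parameter can even be solved in closed form. A quick sketch (illustrative values of my own, not code from either script):

```python
import math

def gamma_for(v_in, v_out):
    """Solve v_out = v_in ** gamma for gamma (values in 0..1).
    One free parameter means one constraint: once a chosen mid-tone
    is mapped to its target, brightness and contrast are both fixed.
    The two-parameter model v_out = a*v_in + b can instead hit a
    target mean AND a target standard deviation at the same time."""
    return math.log(v_out) / math.log(v_in)

# e.g. to brighten mid-grey 0.25 up to 0.5:
gamma = gamma_for(0.25, 0.5)   # 0.5
```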
snibgo's IM pages: im.snibgo.com
Re: autotone - expected results
Yes, that's true, aerial imagery captures everything. That's covered best by some of the methods, for instance the 'color grid' or the 2nd-order surface color target, though I prefer the polynomials.
gain-and-bias could be better, like you say. I may be misunderstanding, but can it be applied like this? I mean, I thought it could only take a single reference image.
- snibgo
Re: autotone - expected results
My gain and bias page http://im.snibgo.com/gainbias.htm, uploaded 5 minutes ago, now shows how to make a set of images follow a common standard (in just the L channel of Lab).
It doesn't show how to find the common standard, or an evolving standard.
snibgo's IM pages: im.snibgo.com
- fmw42
Re: autotone - expected results
I do not know if this is relevant, but I have a script, space, that does spatially adaptive contrast enhancement. That is, it adjusts the brightness and contrast differently for each part of the image. Snibgo has a similar script that modifies the histogram adaptively.
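The adaptive idea can be sketched as per-window gain and bias (a toy illustration only, not Fred's space script):

```python
def adaptive_gain_bias(tile_stats, target_mean, target_sd):
    """tile_stats: list of (mean, sd) pairs, one per local window
    of the image.  Each window gets its own gain and bias toward
    the same target; a real implementation would blend corrections
    smoothly between windows to avoid visible seams."""
    corrections = []
    for mean, sd in tile_stats:
        a = target_sd / sd if sd > 0 else 1.0  # flat window: leave contrast alone
        b = target_mean - a * mean
        corrections.append((a, b))
    return corrections
```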
Re: autotone - expected results
Fred, that's relevant, and useful for correcting in-image differences. I also see a good use case for these scripts, because the people who produce the aerials sometimes join images from different dates, resulting in two very different halves inside one image.
For the case explained above, the trick is determining the transformation function for one image from a whole set of images.
So instead of having, for instance, gain-bias look at one reference image, it would look at, say, 10 images, compute a "statistical" reference, and apply that to our image.
I'm sorry... this is the most I can understand of the algorithm... just thought you guys might have something that would apply to this.
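One simple reading of that idea: pool the per-channel statistics over the whole set to form the "statistical" reference, then gain-bias each image toward it. A hypothetical sketch (not an existing script; the pooling rule is my own naive choice):

```python
def pooled_target(image_stats):
    """image_stats: list of (mean, sd) for one channel, one entry
    per image in the set.  A crude statistical reference: the
    average of the means and of the SDs over all images."""
    n = len(image_stats)
    mean = sum(m for m, _ in image_stats) / n
    sd = sum(s for _, s in image_stats) / n
    return mean, sd

def toward_target(src_mean, src_sd, target):
    """Gain and bias moving one image's channel onto the pooled target."""
    t_mean, t_sd = target
    a = t_sd / src_sd
    b = t_mean - a * src_mean
    return a, b
```

Each image then gets its own correction, but all corrections aim at the same set-wide target instead of a single reference image.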