Over Under Exposure
Hello All, I've been wondering if there's a quick (automated) way to determine whether a set of images is over- or under-exposed, and if so by how much. I am looking to get results like this. I'm thinking that this would probably involve (crudely):
Code: Select all
dcraw -v -H 0 -6 -w -W -g 1 0 -o 0 -T -O out_raw.tiff in.nef
- A per-channel threshold above or below a certain level.
- Calculate how many pixels fit the criteria.
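For what it's worth, a rough ImageMagick sketch of those two steps (not from the original post; the 99% and 1% cut-offs are arbitrary placeholders): -separate splits the channels, -threshold turns every pixel beyond the cut-off white, so the mean of each thresholded channel is the fraction of pixels beyond it.
Code: Select all
convert out_raw.tiff -separate -threshold 99% -format "fraction over (R,G,B in turn): %[fx:mean]\n" info:
convert out_raw.tiff -separate -threshold 1% -format "fraction under (R,G,B in turn): %[fx:1-mean]\n" info: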
ImageMagick 7.0.7-25 Q16 x64 2018-03-04 · Cipher DPC HDRI Modules OpenMP · Windows 7
- snibgo
- Posts: 12159
- Joined: 2010-01-23T23:01:33-07:00
- Authentication code: 1151
- Location: England, UK
Re: Over Under Exposure
We can easily get any statistics we want about the distribution of pixel values: the percentage of pixels over or under certain values, histogram peaks, and so on. But what do we do with that data?
When I worked in film, I thought in terms of stops, with lighting, camera, film processing and printing. Underexposure was always the enemy.
With digital, stops are still important, but the major enemy is photographic overexposure, the dreaded clipping of highlights. If the camera is good and the lighting isn't diabolical, the shadows will take care of themselves.
In post, we need to prevent clipping at either end, but anything else goes. I don't worry about stops in post. I care about tones and saturation and hues and contrast levels and sharpness at different parts of the image, and stops as measurements are too crude to be of much help.
From that link:
If Shift-A is pressed, or Menu – View – Auto Exposure Correction is enabled, and default settings are used, FastRawViewer calculates and applies automatic exposure correction in such a way that 1% of the total amount of pixels in the image are pushed to saturation (receive the value of 255 on the 8-bit scale).
In a photograph, 1% of the image is an awful lot. An area one-tenth of the total width and one-tenth of the total height is 1% of the image, and very noticeable. Sure, that software lets the user change the percentage, but what is the point of saturating any pixels? The thought of software automatically creating saturation (aka clipping) fills me with horrors.
Digitally, automatically, we can spread the tones to use the full range available without clipping ("-auto-level"). Nothing wrong with that. And then we have a wealth of tools available to automatically or manually adjust the image, either for technical accuracy or aesthetic quality or anything else.
If an automated process is to simply correct "over exposure" or "under exposure", it needs to know what the "correct" exposure is. It may be different in different parts of the image. But changing the exposure will change all the tones together, whereas we are more concerned with relative tones, and hence contrast.
Sorry, this turned into a ramble. What was the question again?
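(For reference, the "-auto-level" stretch mentioned above is a one-liner; the file name is a placeholder. Comparing minima and maxima before and after shows the tones being spread to the full range without clipping.)
Code: Select all
convert in.tiff -format "min %[fx:minima] max %[fx:maxima]\n" info:
convert in.tiff -auto-level -format "min %[fx:minima] max %[fx:maxima]\n" info: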
snibgo's IM pages: im.snibgo.com
Re: Over Under Exposure
Thanks for your time, snibgo.
I might just be terrible at photography: at taking photos (in manual mode) and at post-processing in an attempt to salvage the raws. It's all so time-consuming and nerve-racking. (Maybe I'm looking at it all wrong.) Anyway, I usually end up with a multitude of photos I think are worth keeping. My problem, simply stated, is how to prioritize them by recoverability, so that I don't spend time on images that probably won't work out without an intense expert session that isn't worth my mental health or time.
You've hit a lot of the issues I've been facing.
- What's the intended (i.e. correct) exposure?
- How to minimize the side-effects of exposure adjustments?
- I would often want a certain region of an image to have a certain amount of brightness. So perhaps a quick and dirty way of setting priority is determining the distance (exposure adjustment) it takes to raise or lower the raw to the desired result. This is based on the (maybe flawed) assumption that a larger "distance" indicates more side-effects, making that particular image more "difficult" to recover (a rough sketch of this follows the list).
- Clipping would be another factor, which I may have addressed carelessly in the previous post. I don't know about FastRawViewer's method, but maybe an image with lots of near-black or near-white pixels has more clipping than one without.
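A crude sketch of that "distance" idea (purely illustrative: the region geometry, the 0.18 mid-grey target, and the assumption of linear-light input are all made up). The result is the exposure shift, in stops, needed to bring the region's mean to the target.
Code: Select all
convert lin.tiff -crop 200x200+100+100 +repage -format "stops to target: %[fx:ln(0.18/mean)/ln(2)]\n" info: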
ImageMagick 7.0.7-25 Q16 x64 2018-03-04 · Cipher DPC HDRI Modules OpenMP · Windows 7
Re: Over Under Exposure
Just going off topic for a bit.
afre wrote: I might just be terrible at photography: at taking photos (in manual mode) and at post-processing in an attempt to salvage the raws.
I started shooting in manual this year and had a day out with an experienced landscape photographer. He told me to find the brightest point in the photo (spot metering) and set that to plus 2 stops; as Fred said, the shadows should be OK.
This works well for landscapes and other photos where I have time; otherwise I just go back to Av mode. I have also started using partial metering with Av mode.
- fmw42
- Posts: 25562
- Joined: 2007-07-02T17:14:51-07:00
- Authentication code: 1152
- Location: Sunnyvale, California, USA
Re: Over Under Exposure
I am not a photographer. But I would think it should be easy to compute how many pixels are saturated by counting how many near-white and how many near-black pixels there are in each channel (unless they correspond to true black and white objects in the scene). But what can/would you do about that? You cannot recover any information from regions that are pure black or pure white. It would be easy to colour-code those regions, as in some of the images further down the page.
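Something along these lines would do the colour-coding (a sketch only; the 99% cut-offs and file names are placeholders): near-white pixels are painted red and near-black pixels blue on a copy of the image.
Code: Select all
convert in.tiff ^
  ( +clone -colorspace Gray -threshold 99% -fill red -opaque white -transparent black ) ^
  -composite ^
  ( +clone -colorspace Gray -negate -threshold 99% -fill blue -opaque white -transparent black ) ^
  -composite marked.tiff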
- fmw42
- Posts: 25562
- Joined: 2007-07-02T17:14:51-07:00
- Authentication code: 1152
- Location: Sunnyvale, California, USA
Re: Over Under Exposure
Re "as Fred said, the shadows should be OK": that was not me. You probably mean snibgo.
- snibgo
- Posts: 12159
- Joined: 2010-01-23T23:01:33-07:00
- Authentication code: 1151
- Location: England, UK
Re: Over Under Exposure
Yes, that was me.
If the camera sensors have clipped, so the raw files contain pixels at 0 or 100%, we have lost data and can't get it back however hard we try. We could regard these clipped pixels as "holes" and fill them (see one of my pages), perhaps also reducing contrast in the rest of the image to maintain relative tones. I haven't investigated that.
The OP example dcraw command had "-w" for automatic white balancing using metadata created by the camera, and "-H 0" to use these factors directly, even if they cause clipping. I think this is unwise. If the camera has caught the image with no clipping, we don't want to throw any pixels away.
With "-W", you turn off auto-brighten, which is good, as auto-brighten can also cause clipping.
I don't seem to have documented my white balancing method, which involves finding how far each channel is from clipping, and using factors that don't quite clip.
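The idea, as described, might look roughly like this (an illustrative sketch, not snibgo's actual method; the file names, the unit multipliers and the 1.25 factor are made up): develop once with fixed unit multipliers, measure how much headroom each channel has, then apply per-channel factors that stay just inside clipping.
Code: Select all
dcraw -v -6 -W -g 1 0 -o 0 -T -r 1 1 1 1 -O lin.tiff in.nef
convert lin.tiff -separate -format "max (R,G,B in turn): %[fx:maxima]\n" info:
rem if, say, the red maximum is 0.8, a red factor up to 1/0.8 = 1.25 will not clip:
convert lin.tiff -channel R -evaluate Multiply 1.25 +channel wb.tiff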
ImageMagick can readily find what proportion of pixels have clipped, and one of my scripts (unpublished, I think) gives the numbers and creates marked-up copies of the image, showing where clipping occurs.
After that, my next question is, "What have I got in the photographs?" In the old days I would make a plain print and see what it looked like; 5-10 minutes per image in a darkroom. These days, I have a good meal and get back to the computer, which has by then made a variety of versions of each image:
- the in-camera JPEG;
- an sRGB from dcraw;
- an equalized (more or less) version that looks horrible but often shows detail not visible in other versions;
- a contrast-limited equalisation (eqLimit.bat);
- eqlQtr.bat, which makes a contrast-limited histogram-equalisation (with iterative redistribution) as appropriate for four tiles, blending between the four quarters.
(I also have "matchGaus.bat" which makes the histogram into (roughly) a Gaussian curve. This generally looks horrible but can reveal detail hidden in other versions, so I may add it to my standard list.)
I have documented these processes.
Between these five versions, I can certainly see what I've got, and which images to take further.
My during-dinner script also makes a web page with a resized version of each eqLimit result, each image clickable to show the full-size image to check for focusing or whatever. This makes an easy after-dinner slide-show.
I don't claim this method is perfect. But it works for me.
snibgo's IM pages: im.snibgo.com
- fmw42
- Posts: 25562
- Joined: 2007-07-02T17:14:51-07:00
- Authentication code: 1152
- Location: Sunnyvale, California, USA
Re: Over Under Exposure
I do not know how useful this would be, but I have a bash shell script that lets one change an image's exposure by stops. See "xposure" at the link below. One could make bracketed images for visual review.
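Not that script, but the underlying arithmetic is simple: one stop is a factor of two in linear light. Something like this (file names are placeholders) makes a quick plus/minus one-stop bracket from an sRGB image; highlights pushed past white will clip, which is exactly what a visual review would catch.
Code: Select all
convert in.jpg -colorspace RGB -evaluate Multiply 2 -colorspace sRGB plus1.jpg
convert in.jpg -colorspace RGB -evaluate Multiply 0.5 -colorspace sRGB minus1.jpg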
- snibgo
- Posts: 12159
- Joined: 2010-01-23T23:01:33-07:00
- Authentication code: 1151
- Location: England, UK
Re: Over Under Exposure
Sorry, I rambled again.
afre wrote: My problem, simply stated, is how to prioritize them by recoverability, so that I don't spend time on images that probably won't work out without an intense expert session that isn't worth my mental health or time.
My simple answer would be: eqLimit.bat. It is fairly quick, and will show any major problems that would take too much fiddling to solve.
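(As an aside, and not the same thing as eqLimit.bat: ImageMagick 7 releases newer than the 7.0.7 used in this thread include a built-in contrast-limited equalisation operator, "-clahe". The tile geometry, bin count and clip limit here are only example values.)
Code: Select all
magick in.tiff -clahe 25x25%+128+3 clahe.tiff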
snibgo's IM pages: im.snibgo.com
Re: Over Under Exposure
Thanks for your insight, everyone. I feel loved - haha.
I tagged on stats.bat just for fun. I don't know what to make of it yet; in particular, the "sat" stats and "propWhite".
snibgo wrote: [Comments on "-w" and "-H 0"]
I copy-pasted that from your site. I often use "-w -H 2".
snibgo wrote: I don't seem to have documented my white balancing method, which involves finding how far each channel is from clipping, and using factors that don't quite clip. ImageMagick can readily find what proportion of pixels have clipped, and one of my scripts (unpublished, I think) gives the numbers and creates marked-up copies of the image, showing where clipping occurs.
I would be interested in your approach. I tend to use in-camera/auto WB because I have yet to find a method that is simple yet robust.
snibgo wrote: ...version that looks horrible but often shows detail not visible in other versions...
Great when done with purpose. I often go file-crazy and end up wasting time and HDD space on random files with limited usefulness.
snibgo wrote: My during-dinner script also makes a web page with a resized version of each eqLimit result, each image clickable to show the full-size image to check for focusing or whatever. This makes an easy after-dinner slide-show.
After-dinner entertainment.
I'm not convinced by the results. I'm interested in this, but my system is still having trouble with some of those scripts. I recently hacked together something, and it seems to bring my test images to viewable results:
Code: Select all
break > stats.txt ^
&& FOR %i in (*.tif) DO %pictbat%sigSetSd %i mean ^
&& %imdev%convert %i %sssOPTION% %~ni_s.tif ^
&& %pictbat%eqLimit %~ni_s.tif . . . %~ni_e.tif ^
&& ECHO %i >> stats.txt ^
&& %pictbat%stats %~ni_e.tif >> stats.txt ^
&& ECHO. >> stats.txt
ImageMagick 7.0.7-25 Q16 x64 2018-03-04 · Cipher DPC HDRI Modules OpenMP · Windows 7
- snibgo
- Posts: 12159
- Joined: 2010-01-23T23:01:33-07:00
- Authentication code: 1151
- Location: England, UK
Re: Over Under Exposure
Ha! Yes, but the first line of that page is: "What are the best dcraw gamma and auto-brighten settings for various purposes?"
The page considers just those two settings. Testing them with an image that is already clipped is useful, because it easily shows any more clipping.
I don't have a page that describes my actual workflow, partly because it keeps evolving, but mostly because I'm more interested in discovering (and explaining) new methods and techniques. How people use them is their own affair.
snibgo's IM pages: im.snibgo.com