Scanned Negatives - are they black and white?
Posted: 2013-05-26T11:30:20-07:00
I've got a bunch of negatives I'm scanning in as TIFFs so that I can manipulate them at 16 bits/channel before converting to an Internet-friendly, 8-bit format. The automated dust-removal processing in the scanner has to scan in color, even for B+W negatives.
After looping through several colorspaces that make a Luminance-like channel and two color difference channels (YUV, Lab, Luv, etc), I chose YIQ, as it gave the greatest difference in values when an image was color to my eyes, versus when it wasn't.
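(For intuition, the standard library's colorsys module exposes the same RGB-to-YIQ transform, so the quantity being compared can be sketched in pure Python without ImageMagick; iq_spread here is just an illustrative helper, not part of my pipeline:)

```python
# Illustrative only: colorsys implements the RGB -> YIQ transform,
# so we can compute the same per-channel max-min spread that the
# convert command reports, without ImageMagick.
import colorsys

def iq_spread(pixels):
    """Max-min spread of the I and Q channels over (r, g, b)
    tuples in the 0..1 range."""
    iq = [colorsys.rgb_to_yiq(r, g, b)[1:] for (r, g, b) in pixels]
    i_spread = max(i for i, q in iq) - min(i for i, q in iq)
    q_spread = max(q for i, q in iq) - min(q for i, q in iq)
    return i_spread, q_spread

# A neutral gray ramp has zero spread in both I and Q,
# while saturated colors spread far past the 0.1 threshold.
grays = [(v, v, v) for v in (0.1, 0.5, 0.9)]
colors = [(0.9, 0.2, 0.2), (0.2, 0.9, 0.2), (0.2, 0.2, 0.9)]
```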
I ended up using this on the G and B (really I and Q) channels:
Code:
/opt/local/bin/convert ~/Pictures/film_scan/1970s-00040.tif -filter Cubic -resize 96x96 -colorspace YIQ -channel G -separate +channel -format "%[fx:minima],%[fx:maxima],%[fx:standard_deviation]" info:
...then calculating the difference between maxima and minima. Note that I scale down to a fixed-size bounding box with Cubic to blur the image a bit and ignore any spikes in the image data. I originally thought 256x256 was a good size, but at that size I had to set my threshold to 0.13 to match all my B+W negatives (some have slight color spikes where the negative is breaking down). Going to 96x96 helped numerically distinguish color from B+W: my B+W images now tend to have a max-min of 0.09 or less, rather than 0.13 or less, while some dull color images that were hovering around 0.13 stayed there despite the additional scaling.
So, for my images, 0.1 or less difference in both I and Q means B+W. This even "did the right thing" on a color photo my dad took of a black-and-white TV image, as the image was tinted bluish.
Fine-tuning may require checking against the standard deviation to ensure that some weird outlier isn't biasing the max/min difference, but I haven't had to do that with my existing images. It may be that simply choosing an appropriate scaling factor is sufficient, as that will blur away any spikes that bias the data.
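(If I ever do need that, one alternative to the standard-deviation check would be to clip the extreme pixels before taking the spread. A sketch in plain Python, with robust_spread as a hypothetical helper name:)

```python
def robust_spread(values, clip=0.01):
    """Spread between the clip and (1 - clip) quantiles, so a
    handful of outlier pixels can't inflate the max-min difference."""
    s = sorted(values)
    lo = s[int(clip * (len(s) - 1))]
    hi = s[int((1 - clip) * (len(s) - 1))]
    return hi - lo

# A nominally gray I channel with a single color spike: max-min
# jumps past the threshold, but the clipped spread ignores it.
channel = [0.5] * 99 + [0.9]
```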
Rather than Bash, I used Python's sh module (installed via MacPorts) to do this:
Code:
#!/opt/local/bin/python2.7
import os,sh,sys

def detectBW(srcfile,threshold=0.1,resize='96x96'):
    """Compare the difference between a (converted) image's
    I/Q channels in the YIQ colorspace. Return False if,
    at any time, the difference exceeds our threshold."""
    # Convert to YIQ and look at the G (I) and B (Q) channels for
    # how much their minima/maxima differ. Capture the stddev
    # while we're at it, as it may be useful in fine-tuning our decision.
    for channel in ['G','B']:
        try:
            v=sh.convert(srcfile,
                         '-filter', 'Cubic',
                         '-resize', resize,
                         '-colorspace','YIQ',
                         '-channel', channel,
                         '-separate',
                         '+channel',
                         '-format',
                         '%[fx:minima],%[fx:maxima],%[fx:standard_deviation]',
                         'info:')
        except sh.ErrorReturnCode:
            # Bail out if convert fails rather than guessing.
            sys.stderr.write("convert failed on %s\n" % srcfile)
            sys.exit(1)
        minima,maxima,stddev = [float(i) for i in
                                v.strip().split(',')]
        print channel,maxima-minima,stddev
        if maxima-minima > threshold:
            return False
    return True
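(If you don't want the sh dependency, the same call can be made with the standard library's subprocess module. This is just a sketch: iq_stats_cmd and iq_stats are hypothetical names, and it assumes a plain convert on the PATH instead of the MacPorts path. With no shell involved, the format string needs no extra quoting:)

```python
# Standard-library alternative to sh.convert for one channel's stats.
import subprocess

def iq_stats_cmd(srcfile, channel, resize='96x96'):
    """Build the argv for the ImageMagick call; 'G' is the I
    channel and 'B' is the Q channel after -colorspace YIQ."""
    return ['convert', srcfile,
            '-filter', 'Cubic',
            '-resize', resize,
            '-colorspace', 'YIQ',
            '-channel', channel,
            '-separate', '+channel',
            '-format', '%[fx:minima],%[fx:maxima],%[fx:standard_deviation]',
            'info:']

def iq_stats(srcfile, channel):
    """Run convert and parse its min,max,stddev output."""
    out = subprocess.check_output(iq_stats_cmd(srcfile, channel))
    return [float(x) for x in out.decode().strip().split(',')]
```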
I then use this to decide whether or not to colorspace-convert my images:
Code:
def convertImg(srcimg,dstimg,contrast='3x33%',color='#828279'):
    """Take a 16-bit TIFF, make a JPEG. Convert to sepia-like
    colors if the image seems to be Black and White."""
    # color default is about what middle-gray is when sepia-izing
    # in Photoshop. Also very similar to ImageMagick's built-in
    # goldenrod color at ~7% saturation.
    sys.stdout.write("converting %s -> %s ... " % (srcimg,dstimg))
    convertargs=[srcimg,'-sigmoidal-contrast',contrast]
    if detectBW(srcimg):
        # Notify when we are converting the colorspace
        sys.stdout.write("(B+W conversion)... ")
        convertargs += ['-colorspace','gray','-fill',color,'-tint','100']
    convertargs.append(dstimg)
    try:
        sh.convert(convertargs)
    except sh.ErrorReturnCode:
        sys.stderr.write("convert failed on %s\n" % srcimg)
        sys.exit(2)
    sys.stdout.write("Done.\n")
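(As a sanity check on that default: the ~7% saturation I mention in the comment is easy to confirm with the standard library; hex_saturation is just a throwaway helper:)

```python
# Verify that '#828279' really sits near 7% HSV saturation.
import colorsys

def hex_saturation(hexcolor):
    """HSV saturation (0..1) of a '#rrggbb' color."""
    r, g, b = (int(hexcolor[i:i + 2], 16) / 255.0 for i in (1, 3, 5))
    return colorsys.rgb_to_hsv(r, g, b)[1]
```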
And then glob my source images to render my JPEGs:
Code:
def main():
    srcdir=os.path.expanduser('~/Pictures/film_scan')
    dstdir=os.path.expanduser('~/Pictures/film_scan/jpg')
    srcext='tif'
    dstext='jpg'
    for srcfile in sh.glob(srcdir + '/*.' + srcext):
        dstfile="%s/%s.%s" % (dstdir,
                              os.path.splitext(os.path.basename(srcfile))[0],
                              dstext)
        if os.path.isfile(dstfile):
            # Make new JPG if our source was modified since.
            if os.path.getmtime(srcfile) > os.path.getmtime(dstfile):
                convertImg(srcfile,dstfile)
        else:
            # Make new JPG if it doesn't exist.
            convertImg(srcfile,dstfile)

if __name__ == "__main__":
    main()