Hello all!
Jason, your method, which I saw a few hours after I started experimenting myself, basically eliminates color fringing. I'm not so sure it increases perceived resolution; in fact it may decrease it a little. Still an improvement over non-LCD-aware rendering, imho.
Meanwhile I found a way to use only decent resize methods while keeping good sharpness and still removing the color fringing. It's not as sharp as Point resampling in the end, but it's much more foolproof:
Code:
convert SDIMa_24396.jpg -gamma 0.5 -filter Mitchell -resize 2700x1800\! \
  -channel Red   -morphology Convolve '3x1: 0, 0, 1' \
  -channel Green -morphology Convolve '3x1: 0, 1, 0' \
  -channel Blue  -morphology Convolve '3x1: 1, 0, 0' \
  +channel -filter Lanczos -resize 900x600\! \
  -gamma 2.0 -depth 8 -quality 100 -sampling-factor 1x1 SDIMa_24396_rgb.jpg
Here is a result:
Rationale for the changes:
You need a 3x larger image to use small and simple shift matrices, but Mitchell, which is theoretically a good window for sinc with little loss, does not deliver visually sharp output. So rather than fight it, let's use a 3x larger image on both axes instead of horizontally only; too many pixels never hurt, and CPU and memory are cheap. This allows us to use Lanczos for the final downsampling, which keeps things far sharper and has been robust for every image I've thrown at it.
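One caveat: the kernel assignment above assumes the common RGB subpixel order. On a BGR panel the red and blue shifts would presumably need to be swapped; I haven't tested that, but the sketch would be (file names are placeholders):

Code:
convert input.jpg -gamma 0.5 -filter Mitchell -resize 2700x1800\! \
  -channel Red   -morphology Convolve '3x1: 1, 0, 0' \
  -channel Green -morphology Convolve '3x1: 0, 1, 0' \
  -channel Blue  -morphology Convolve '3x1: 0, 0, 1' \
  +channel -filter Lanczos -resize 900x600\! \
  -gamma 2.0 -depth 8 -quality 100 -sampling-factor 1x1 input_bgr.jpg

Same pipeline, with the red and blue kernels exchanged so the direction of the 1/3-pixel shift each of those channels receives is reversed.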
Concerning the use of JPEG:
Anthony, while JPEG basically sucks, it's the standard. In the past I did a study to evaluate the adequacy of JPEG for high-quality image transmission, so I know a bit about this, and I can share some useful facts here.
In the study I used a few JPEG encoders, including ImageMagick and cjpeg. At 100% quality with no color subsampling you get rounding errors only, since all frequency components are transmitted; this translates into some added noise.
This noise preserves signal averages and basically does not introduce any statistical bias, and this holds even when you lower the quality. That's because of a specific property of the DCT I can't recall now (linearity?...).
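You can check the no-bias claim yourself with ImageMagick's statistics; a minimal sketch (file names are placeholders, means are reported normalized to 0..1):

Code:
convert original.png -quality 100 -sampling-factor 1x1 roundtrip.jpg
convert original.png  -format "%[fx:mean]\n" info:
convert roundtrip.jpg -format "%[fx:mean]\n" info:

The two means should agree to several decimal places.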
At 100% quality those errors generated noise at a level similar to the errors already introduced by 8-bit quantization anyway. Summing both errors, it's as if you had precision similar to 7-bit sampling, but without the posterizing effects.
This noise (8 bit + jpeg_100) also happens to be similar in magnitude to the shot noise of a sensor with a well depth of ~32k electrons, if you encode with gamma 2.0. The analysis was simplified by the fact that the square-root transform applied to Poisson-distributed counts (shot noise...) has the nice property of making the noise constant, since in those counts noise = sqrt(signal).
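For completeness, the underlying argument is just the textbook delta method, nothing specific to my study:

Code:
N ~ Poisson(lambda)            =>  sigma(N) = sqrt(lambda)
sigma(sqrt(N)) ≈ |d sqrt(lambda) / d lambda| * sigma(N)
              = (1 / (2*sqrt(lambda))) * sqrt(lambda)
              = 1/2             (constant, independent of signal level)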
The noise introduced is also of a magnitude similar to the dithering needed to properly encode in 8 bits anyway.
To keep things simple: by using JPEG at 100% quality with no color subsampling you introduce a very subtle noise, at a magnitude similar to the noise already present in any properly encoded 8-bit image, and lower than the noise already present in most photographic images. So it's adequate even for very high-quality transmission. Don't bother, you won't be able to see the difference from a TIFF or a PNG. Well... except maybe on a high-brightness, DICOM-calibrated medical display, but even then I have sincere doubts.
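To quantify the magnitude (as opposed to the bias checked above), run compare on the same round-trip pair:

Code:
compare -metric RMSE original.png roundtrip.jpg null:

compare prints the raw RMSE with the normalized value in parentheses; expect a small fraction of one 8-bit level.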
When not to use JPEG: in the intermediate steps of image editing, when you don't want to introduce errors at each save of your work. You should probably also use 16-bit sampling there, because otherwise you get similar errors just from using 8 bits; something like the sketch below.
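A sketch of that workflow (file names are placeholders; TIFF is just one lossless choice, PNG works too):

Code:
convert original.jpg -depth 16 work.tiff
# ... repeated edits/saves on work.tiff cost nothing in quality ...
convert work.tiff -depth 8 -quality 100 -sampling-factor 1x1 final.jpg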
As a footnote I may add that JPEG supports 12 bits per channel, though I've never seen it used that way. A 12-bit-per-channel JPEG with gamma 2.0 encoding would beat any existing image sensor in terms of noise, being equivalent to a well depth of about 8 million electrons, while even very large sensors with very large pixels top out around 1 million electrons or so.