Re: image size convention in resize.c
Posted: 2010-11-12T08:32:53-07:00
I'll get going on this slowly.
There are, actually, three desirable behaviors:
corners (the current behavior, and the default; it extrapolates when upsampling),
centers (it extrapolates when downsampling),
and
switch (mix and match, depending on whether one is enlarging or downsampling in each direction; it never extrapolates).
For a quick discussion of why extrapolation should be avoided if one wants accuracy, see http://en.wikipedia.org/wiki/Extrapolation. Basically, if you want accuracy, you should minimize the impact of the abyss. (Note that the orthogonal resize methods indirectly "feel" the abyss by virtue of the "chopping off" of the piece of the filter kernel which sticks out past the positions of the centers of the boundary pixels; this chopping destroys the left/right and up/down symmetry of the kernels. Since this symmetry contributes greatly to the good properties of the filters, accuracy being at the top of the list, this justifies wanting to minimize the number of pixels for which kernel truncation occurs.)
One may ask: "Do we really want accuracy?" and this is a valid question. For example, if you are blending into transparency, accuracy is clearly a secondary consideration.
(Enough pontificating. I'll write an explanatory blurb for IM Examples once the -define is programmed.)
nicolas
Resize
Distort
Lanczos resize (tensor Sinc-Sinc 3-lobe)
LanczosSharp distort (Clamped EWA Jinc-Jinc 3-lobe with blur=0.9812505644269356)
A sharper Lanczos distort (Clamped EWA Jinc-Jinc 3-lobe) with blur=0.8956036897402794
resize_lanczos
distort_lanczos
NIP2 upsharp (Nohalo+LBB, that is, halo-free sharpening subdivision + bounded interpolation) result
NIP2 upsmooth (VSQBS = diagonal preserving smoothing subdivision + quadratic B-Spline smoothing) result
NIP2 upsize (LBB = Locally Bounded Bicubic) result