Re: image size convention in resize.c
Posted: 2010-11-12T08:32:53-07:00
I'll get going on this slowly.
There are, actually, three desirable behaviors:
corners (current behavior, and the default; it extrapolates when upsampling),
centers (it extrapolates when downsampling), and
switch (mix and match depending on whether one is upsampling or downsampling in each direction; it never extrapolates; see the sketch just below).
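To make the conventions concrete, here is a quick C sketch of the destination-to-source coordinate mappings I have in mind along one axis (my own toy code, not what will go into resize.c; the function names are made up). Pixel centers sit at integer positions 0 through n-1, and the image extent, corner to corner, is [-0.5, n-0.5]:

#include <stdio.h>

/* corners: align the image extents [-0.5, n-0.5] exactly. */
static double map_corners(double dst, double src_n, double dst_n)
{
  return (dst+0.5)*(src_n/dst_n)-0.5;
}

/* centers: align the first and last pixel centers exactly. */
static double map_centers(double dst, double src_n, double dst_n)
{
  return dst*((src_n-1.0)/(dst_n-1.0));
}

/* switch: centers when upsampling, corners when downsampling,
   chosen independently along each axis. */
static double map_switch(double dst, double src_n, double dst_n)
{
  return (dst_n > src_n) ? map_centers(dst,src_n,dst_n)
                         : map_corners(dst,src_n,dst_n);
}

int main(void)
{
  /* Upsampling 4 -> 9 pixels: corners puts the first sampling position
     at a negative source coordinate, outside the convex hull [0,3] of
     the source pixel centers, hence extrapolation. */
  printf("corners: dst 0 -> src %+.4f\n",map_corners(0.0,4.0,9.0));
  printf("centers: dst 0 -> src %+.4f\n",map_centers(0.0,4.0,9.0));
  printf("switch:  dst 0 -> src %+.4f\n",map_switch(0.0,4.0,9.0));
  return 0;
}

Conversely, under centers, downsampling maps the destination corner -0.5 back to -0.5*(src_n-1)/(dst_n-1), which lies past the source corner -0.5: the output's extent covers ground the input never did, which is the other flavor of extrapolation. switch dodges both.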
For a quick discussion of why extrapolation should be avoided if one wants accuracy, see http://en.wikipedia.org/wiki/Extrapolation. Basically, if you want accuracy, you should minimize the impact of the abyss. (Note that the orthogonal resize methods indirectly "feel" the abyss: the piece of the filter kernel which sticks out past the positions of the centers of the boundary pixels gets chopped off, which destroys the left/right and up/down symmetry of the kernels. Since this symmetry contributes greatly to the good properties of the filters, accuracy at the top of the list, this justifies minimizing the number of pixels for which kernel truncation occurs.)
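To illustrate the truncation, here is a toy 1-D pass (tent filter, made-up names, no connection with the actual resize.c internals): the taps that would fall outside the range of source pixel centers are dropped and the surviving weights renormalized, so near a boundary the effective kernel is one-sided rather than symmetric.

#include <math.h>
#include <stdio.h>

static double filter(double x) /* stand-in: tent (triangle) kernel */
{
  const double ax = fabs(x);
  return (ax < 1.0) ? 1.0-ax : 0.0;
}

static double sample_1d(const double *src, long n, double center)
{
  const double support = 1.0; /* tent kernel support radius */
  long lo = (long) ceil(center-support);
  long hi = (long) floor(center+support);
  double sum = 0.0, density = 0.0;
  long i;

  if (lo < 0) lo = 0;     /* chop off the piece of the kernel that */
  if (hi > n-1) hi = n-1; /* sticks out past the boundary centers  */
  for (i = lo; i <= hi; i++)
    {
      const double weight = filter((double) i-center);
      sum += weight*src[i];
      density += weight;
    }
  return (density > 0.0) ? sum/density : 0.0; /* renormalize */
}

int main(void)
{
  const double src[4] = { 10.0, 20.0, 30.0, 40.0 };
  /* Interior: full symmetric kernel. Near the boundary the left tap
     is gone and the remainder renormalized: the kernel is one-sided. */
  printf("interior: %.4f\n",sample_1d(src,4,1.5));
  printf("boundary: %.4f\n",sample_1d(src,4,-0.25));
  return 0;
}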
One may ask: "Do we really want accuracy?" and this is a valid question. For example, if you are blending into transparency, accuracy is clearly a secondary consideration.
(Enough pontificating. I'll write an explanatory blurb for IM Examples once the -define is programmed.)
nicolas