desired precision for Jinc approximation
Posted: 2010-12-19T14:02:57-07:00
Anthony:
I'm going to break the approximation of Jinc into two pieces: |x| <= r3 (a minimax approximation custom-built by Chantal Racette and me), and |x| > r3, for which I'll use the standard (Hart's) method. (Here, r3 is the true location of the third zero of BesselJ1 before pi scaling, which is just a bit larger than 10.)
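For concreteness, here's a minimal sketch in C of the piecewise dispatch I have in mind (not the actual code). The real minimax rational function and the Hart-style form aren't shown; both branches just call the C library's j1() as a stand-in, and the exact r3 constant and the Jinc normalization (Jinc(0) = 1) are assumptions on my part:

#include <math.h>   /* j1(), fabs(); link with -lm */
#include <stdio.h>

/* Assumed split point: third positive zero of BesselJ1, a bit larger than 10. */
#define R3 10.173468135062722

/* Stand-in for the custom minimax rational approximation on |t| <= R3
   (the real thing would be a Horner-evaluated p(t*t)/q(t*t)). */
static double BesselJ1Small(const double t)
{
  return j1(t);
}

/* Stand-in for the standard (Hart-style) method on |t| > R3. */
static double BesselJ1Large(const double t)
{
  return j1(t);
}

static double BesselJ1Approx(const double t)
{
  return fabs(t) <= R3 ? BesselJ1Small(t) : BesselJ1Large(t);
}

/* Jinc with one common normalization, Jinc(0) = 1; IM's own scaling may differ. */
static double Jinc(const double x)
{
  if (x == 0.0)
    return 1.0;
  return 2.0*BesselJ1Approx(M_PI*x)/(M_PI*x);
}

int main(void)
{
  printf("Jinc(0.5) = %.15g\n", Jinc(0.5));
  return 0;
}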
Do you really care about getting close to full double precision for Jinc evaluation?
For example, my preliminary results give a max absolute error on [-r3, r3] of less than 8e-9 when the rational function is evaluated in double precision (in single precision it's less than 9e-7, which is not of interest to IM anyway, but is interesting to me because of SSE/Altivec/MMX/... and GPUs), at a cost of 23 flops (including one division, but excluding the squaring back of the square-rooted distance, which adds one flop to the total).
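To show the kind of brute-force check behind an error figure like that, here's a small harness (again a sketch, not the actual test code): it scans a dense grid over [-r3, r3] and records the max absolute deviation of a candidate approximation from the double-precision j1() reference. Since the minimax coefficients aren't in this post, the candidate below is just a throwaway truncated Taylor series; re-running the same scan with the candidate evaluated in float is how the single-precision figure would be obtained.

#include <math.h>   /* j1(), fabs(); link with -lm */
#include <stdio.h>

#define R3 10.173468135062722   /* assumed endpoint of the interval of interest */

/* Throwaway candidate: truncated Taylor series of BesselJ1,
   J1(x) = sum_{k>=0} (-1)^k (x/2)^(2k+1) / (k! (k+1)!).
   Only here so the harness has something to measure. */
static double CandidateJ1(const double x)
{
  const double u = 0.25*x*x;
  double term = 0.5*x;   /* k = 0 term */
  double sum = term;
  for (int k = 1; k < 30; k++)
    {
      term *= -u/((double) k*(k+1));
      sum += term;
    }
  return sum;
}

int main(void)
{
  const long n = 2000001;   /* uniform samples of [-R3, R3] */
  double max_error = 0.0, worst_x = 0.0;

  for (long i = 0; i < n; i++)
    {
      const double x = -R3 + (2.0*R3*i)/(double) (n - 1);
      const double error = fabs(CandidateJ1(x) - j1(x));
      if (error > max_error)
        {
          max_error = error;
          worst_x = x;
        }
    }
  printf("max |error| on [-R3,R3] = %.3g at x = %.6f\n", max_error, worst_x);
  return 0;
}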
I would guess that this is good enough.
Agreed?
(I can, of course, produce more precise approximations, but I think it's a waste of flops for IM.)