In actual fact, forward bilinear is the exact opposite. The current bilinear is implemented as a reverse mapping, as I have not been able to get back to implementing a proper forward mapped version. I am just now getting back to finishing off perspective 'infinity handling' and timing improvements, as well as proper resize filter control, within the distortion function.
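To make the distinction concrete, here is a minimal sketch of the reverse-mapping idea: for each destination pixel you evaluate a bilinear expression (fitted from four control-point pairs) that tells you where in the source image to sample. The function names and the fitting-by-linear-solve approach are my own illustration, not IM's internal code.

```python
import numpy as np

def fit_bilinear(dst_pts, src_pts):
    """Fit coefficients of  u = a0 + a1*x + a2*y + a3*x*y  (and likewise v)
    from four destination->source control-point pairs."""
    A = np.array([[1.0, x, y, x * y] for x, y in dst_pts])
    cu = np.linalg.solve(A, [u for u, v in src_pts])
    cv = np.linalg.solve(A, [v for u, v in src_pts])
    return cu, cv

def reverse_map(cu, cv, x, y):
    """Reverse mapping: given a destination pixel (x, y), return the
    source location (u, v) to sample from."""
    basis = np.array([1.0, x, y, x * y])
    return float(basis @ cu), float(basis @ cv)
```

The forward-mapped version is the hard direction: there the same expression maps source to destination, and looking up a destination pixel means inverting it (which is no longer a simple bilinear formula).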
Fred Weinhaus has been implementing some 'proof of concept' DIY scripted forms of forward mapped (inverse mathematical function) Bilinear Distortion. You can see the technique and results in his "3Drotate" script:
http://www.fmwconcepts.com/imagemagick/3Drotate/
This script only just predates the implementation of the -distort function, and will do perspective, reversed bilinear, and forward bilinear distortions. However it will not do image tiling, or handle the correct pixel merging needed for viewing distant horizons.
Note I do plan on implementing a more generalized polynomial mapping function, in which the current reversed bilinear would be classed as a 1.5 degree polynomial (between a 1 degree and a full 2 degree polynomial), though of course you will need more control points to map higher-ordered polynomials. However, only the simpler reversed forms will be possible using that technique.
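The "1.5 degree" classification can be made concrete by counting basis terms: the number of terms in the polynomial equals the minimum number of control-point pairs needed to solve for its coefficients. The table below is my own illustration of that counting argument.

```python
# Basis terms for 2-D polynomial mappings of increasing order.  Each term
# carries one unknown coefficient, so the number of terms is also the
# minimum number of control-point pairs needed to fit the mapping.
BASES = {
    "1   (affine)":    ["1", "x", "y"],
    "1.5 (bilinear)":  ["1", "x", "y", "x*y"],
    "2   (quadratic)": ["1", "x", "y", "x*y", "x*x", "y*y"],
}

for degree, terms in BASES.items():
    print(f"degree {degree}: {len(terms)} terms "
          f"-> needs {len(terms)} control points")
```

The bilinear form sits between the two whole degrees: it adds the cross term `x*y` to the affine basis, but not the pure squares of a full quadratic.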
Things are happening... slowly but surely. Lack of core programmers is the main problem.
Localized Warp distortions...
The above mathematical functions of course only perform global distortions: moving a single control point will affect the whole image.
For more localized warps there are two main methods...
- One is a sequence of circular regional warps. That is, given a point and a radius of influence, scale (implode), rotate (swirl), or translate the point locally.
- The other is grid or mesh warping, where the source image has a rectangular grid, or a triangular mesh of points, overlaid on it, and the destination is a distorted form of the same grid/mesh.
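As a sketch of the first method, here is a reverse-mapped local 'swirl': for each destination pixel inside the radius of influence, the source sample point is rotated about the centre, with the rotation falling off to zero at the edge of the region. The falloff curve and function name are my own assumptions, not IM's actual -swirl implementation.

```python
import math

def swirl_source_point(x, y, cx, cy, radius, angle):
    """Reverse mapping for a localized swirl: given destination pixel (x, y),
    return the source location to sample.  Pixels outside `radius` of the
    centre (cx, cy) are left untouched."""
    dx, dy = x - cx, y - cy
    r = math.hypot(dx, dy)
    if r >= radius:
        return (float(x), float(y))       # outside the region: identity
    falloff = (1.0 - r / radius) ** 2     # assumed smooth falloff to the edge
    theta = angle * falloff
    cos_t, sin_t = math.cos(theta), math.sin(theta)
    return (cx + dx * cos_t - dy * sin_t,
            cy + dx * sin_t + dy * cos_t)
```

A local implode or translate works the same way; only the formula applied inside the radius changes.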
This last type of distortion is what is used in movies, though there the low-level distort is generated from a sequence of two (or more) images. The same mesh is used for each image, and a parameter defines where the generated image is to fall between the two sources, both in amount of distortion and in color influence from each source.
This is often regarded as true 'morphing', and is currently only openly available in the X morph package. It is one of the goals I am working toward in IM, but it will probably require a higher-level script to achieve, once 'mesh distortions' have been implemented to do the 'core' work.
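The role of the morph parameter can be sketched in a few lines: the same parameter t interpolates the mesh control points (amount of distortion) and cross-dissolves the pixel colors (color influence). This is only the blending arithmetic, under the assumption of simple linear interpolation; the actual per-pixel mesh warp it would drive is the part still to be implemented.

```python
def morph_mesh(src_mesh, dst_mesh, t):
    """Interpolate matching mesh control points for an in-between frame:
    t=0 reproduces the source mesh, t=1 the destination mesh."""
    return [((1 - t) * x0 + t * x1, (1 - t) * y0 + t * y1)
            for (x0, y0), (x1, y1) in zip(src_mesh, dst_mesh)]

def blend_color(c0, c1, t):
    """Cross-dissolve: color influence follows the same morph parameter."""
    return tuple((1 - t) * a + t * b for a, b in zip(c0, c1))
```

A full morph would warp both source images toward the interpolated mesh, then blend the two warped results with `blend_color`.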