
Some other 3D projections

Posted: 2012-09-26T11:49:07-07:00
by schnurzelpurz1
I just created some flavors of the environment mapping script. To see the differences between them clearly, I applied them to a special testing setup (pictures below):
  • The heightfield is a half sphere that was generated using a canvas of 128x128 pixels and the following fx expression: sqrt(1-(2*i/w-1)^2-(2*j/h-1)^2) (as far as I can see, this is the most accurate method, though not the fastest, to do so).
  • The environment map is specially designed to analyze surface discontinuities (taken from a 3D software).
Image Image
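For reference, the half-sphere expression can be checked outside ImageMagick. Here is a Python sketch of the same formula; clamping to 0 outside the unit circle is my assumption about how fx should treat the negative radicand, not something taken from the script:

```python
import math

def half_sphere(i, j, w=128, h=128):
    """Python equivalent of the fx expression
    sqrt(1-(2*i/w-1)^2-(2*j/h-1)^2) for a half-sphere heightfield.
    Pixels outside the unit circle are clamped to 0 (assumption)."""
    d2 = (2 * i / w - 1) ** 2 + (2 * j / h - 1) ** 2
    return math.sqrt(1 - d2) if d2 <= 1 else 0.0
```

The sphere peaks at height 1.0 in the canvas center and falls to 0 at the rim.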

All of the projection types come in two versions, using 1 or 4 averaged surface normals respectively. Click on a picture to get the fx script used, including comments.

Radial Mapping
This is the simplest reflective projection. It uses the surface normal directly to look up the color (instead of calculating a reflection ray). It should (and does) reproduce the used emap exactly, so the normal calculation has been successfully proven correct. The Sobel algorithm used by Anthony could also be applied to the test pictures and should give the same result. So let's compare! :D
Image | Image
1 normal: 8.528s | 4 normals: 9.162s
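One plausible reading of "uses the surface normal directly" is that the normal's x/y components index the emap without any reflection calculation. A hypothetical sketch (the function name and coordinate convention are mine, not from the posted script):

```python
def radial_lookup(nx, ny, emap_w, emap_h):
    """Map a unit normal's x/y components from [-1, 1] straight
    into environment-map pixel coordinates (no reflection ray)."""
    u = (nx + 1) / 2 * (emap_w - 1)
    v = (ny + 1) / 2 * (emap_h - 1)
    return u, v
```

A flat surface (normal pointing straight up) lands on the emap center; tilted normals move the lookup outward.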

Environment Mapping with distant camera
When using a distant camera, no height variable (OIz) is needed and the incoming vector can be set to (0,0,-1). This simplifies the calculation of the reflected ray significantly, at the price of all horizontal faces being set to the emap's center color, which often breaks the illusion. The calculation of a reflection ray causes a kind of minifying-glass effect (also known as the fisheye effect), due to the fact that the angle between the incoming and outgoing ray is doubled: all pixels below the 45° latitude reflect the bottom half of the environment sphere (which just mirrors the top half).
Image | Image
1 normal: 9.846s | 4 normals: 11.505s
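With d = (0,0,-1) the standard reflection formula r = d - 2(d·n)n collapses nicely, since d·n = -nz. A sketch (my notation, not the script's):

```python
def reflect_distant(nx, ny, nz):
    """Reflect the fixed incoming ray d = (0, 0, -1) about the unit
    normal n: r = d - 2*(d.n)*n, where d.n = -nz."""
    return (2 * nz * nx, 2 * nz * ny, 2 * nz * nz - 1)
```

A horizontal face (n = (0,0,1)) reflects straight back up and hits the emap's center, while a 45° normal already reflects to the horizon, illustrating the angle doubling described above.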

Environment Mapping with perspective camera
This is the original script from the tutorial at the beginning. The rendering time is more than twice that of the distant camera, but this just reflects the added complexity of an adjustable camera height (the scripts above have been reduced as much as possible, of course). Besides the improved realism, we can increase the fisheye effect by moving the camera nearer to the reflecting surface :).
Image | Image
1 normal: 23.969s | 4 normals: 25.249s
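With a perspective camera the incoming ray is no longer constant: it runs from the camera position through each surface point, which is where the height (OIz) comes back in. A hedged sketch of that geometry (the argument layout is an assumption for illustration, not the script's variables):

```python
import math

def reflect_perspective(p, cam, n):
    """Reflect the incoming ray d = normalize(p - cam) about the
    unit surface normal n: r = d - 2*(d.n)*n."""
    d = [p[k] - cam[k] for k in range(3)]
    length = math.sqrt(sum(c * c for c in d))
    d = [c / length for c in d]
    dot = sum(d[k] * n[k] for k in range(3))
    return tuple(d[k] - 2 * dot * n[k] for k in range(3))
```

Directly beneath the camera a flat surface reflects straight back at the viewer; moving the camera closer steepens d toward the image edges, which is why the fisheye effect grows.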

Conclusions
  • Rendering time with 4 averaged normals is only slightly higher than with 1 normal, due to the fact that the addition of the 4 vectors can be highly simplified. As 4 vectors represent the orientation of the surface much better than one, this method should be favored (even if the visual impact with the test setup is minor).
  • There's also a pixel shift to the left and top with 1 normal, caused by the asymmetric calculation.
  • Overall accuracy and antialiasing are on a good level (except in the very outer zone), thanks to the high normal count per angle. Enlarging the heightfield with a good interpolation method would be a reasonable way to improve image quality (as already stated earlier).
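On the "highly simplified" vector addition: for a regular height grid, the sum of the four neighbouring face normals reduces to central differences of the height samples. This reduction is my own sketch (unit pixel spacing assumed), not taken from the posted scripts:

```python
def averaged_normal(h, i, j):
    """Normal at grid point (i, j) from the summed 4-neighbour
    normals: proportional to the central differences of the height
    samples, with a constant z term (2 for unit pixel spacing)."""
    nx = h[i - 1][j] - h[i + 1][j]
    ny = h[i][j - 1] - h[i][j + 1]
    nz = 2.0
    length = (nx * nx + ny * ny + nz * nz) ** 0.5
    return (nx / length, ny / length, nz / length)
```

Only two subtractions per axis remain, which is why the 4-normal version costs barely more than the 1-normal one.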

Re: Environment mapping - or: a better "shade"-operator?

Posted: 2012-09-26T17:34:24-07:00
by anthony
A very nice summary. Clear and concise.

Some area resampling, and background transparency in the color source, may also improve things, along with adding a mask to the original shape.

Note that Absolute Distortion Maps (unlike Relative Displacement Maps) cannot generate a pixel location that falls outside the image (except via scaling, or bias handling). I really should look up the implementation of absolute distortions and document it properly, as I created it some time ago and never generated explanatory examples for it.

The problem with these builtin maps is that they tend to assume the images (map and color source) are the same size. Sometimes it would be good if the mapped positions were scaled to match a different-sized color source, with the map image defining the final image size rather than the source coloring image (a total mapping, rather than a partial lens map in a small area of the source image).

Also, these mapping operators should be separated from Composite into their own operation.

That would, however, need to wait for IMv7 beta testing.