Environment mapping - or: a better "shade"-operator?


Post by schnurzelpurz1 »

Let's do something magic with ImageMagick and execute the following command:

Code: Select all

convert -background none -bordercolor none -virtual-pixel transparent -interpolate Bicubic -alpha Set "heart.png" ( +clone -alpha Extract +level 0,3276 -white-threshold 3275 -morphology Distance Euclidean:7,20! -blur 1.5 -negate -evaluate pow 2 -negate -evaluate pow 0.5 "emaps/gold.jpg" -interpolate Bicubic -fx "Oh=8;ONx=Oh*(u-p[1,0]);ONy=Oh*(u-p[0,1]);ONz=1;OIx=i-w/2;OIy=j-h/2;OIz=-500;ONI=2*(ONx*OIx+ONy*OIy+ONz*OIz);ORx=OIx-ONx*ONI;ORy=OIy-ONy*ONI;ORz=OIz-ONz*ONI;OnR=sqrt(ORx*ORx+ORy*ORy+ORz*ORz);Om=v.w/2;v.p{Om*ORx/OnR+Om,Om*ORy/OnR+Om}" ) -compose In -composite "heart_gold_500_8_blur1.5.png"
Or to let the pictures speak...
Image + Image = Image

OK, that was a little fast! So let's explore the details step by step:

The first part of the script turns the heart shape into a heightfield: a grayscale image whose pixels are interpreted as height coordinates in 3D space (this is well described on the IM Example pages):

Code: Select all

"heart.png" ( +clone                    // we first make a copy so we can restore the transparency later
-alpha Extract                          // turns the alpha channel into a grayscale image
+level 0,3276 -white-threshold 3275     // the semitransparent pixel must be given the right distance value to avoid
                                           aliasing effects along the border profile. The number is calculated as
                                           follows: QuantumRage / profile width (here: 20). The second one is 1 less
-morphology Distance Euclidean:7,20!    // creates the border profile, 20 pixels wide, as a linear gradient
-blur 1.5                               // a little smoothing at this point improves the final result
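For reference, this first stage can also be run on its own to inspect the intermediate heightfield (a minimal sketch with an assumed output filename; the 3276/3275 values assume a Q16 build, as in the full command above):

Code: Select all

convert heart.png -alpha Set -alpha Extract +level 0,3276 -white-threshold 3275 -morphology Distance Euclidean:7,20! -blur 1.5 heightfield.png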
The next step is to turn the straight border profile into a nice rounded edge. This is also discussed on the example pages but I repeat it here for convenience:

Code: Select all

-negate -evaluate pow 2 -negate -evaluate pow 0.5

// Some other formulas that may be useful here:
-evaluate pow 2 -negate -evaluate pow 0.5 -negate  // groove (the inverse of round)
-function ArcSin 1                                 // S-shape lying
-evaluate cos 0.5 -negate                          // S-shape upright
-function Polynomial -4,4,0 -evaluate Pow 0.5      // ridge (half circle)
Image => Image => Image
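To get a feel for these profiles, you can apply them to a plain linear gradient and put the results side by side (a sketch with assumed sizes and output name; the first strip is the untouched ramp, followed by round, groove and ridge):

Code: Select all

convert -size 50x256 gradient: \
  \( -clone 0 -negate -evaluate Pow 2 -negate -evaluate Pow 0.5 \) \
  \( -clone 0 -evaluate Pow 2 -negate -evaluate Pow 0.5 -negate \) \
  \( -clone 0 -function Polynomial -4,4,0 -evaluate Pow 0.5 \) \
  +append profile_comparison.png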

We now have almost all ingredients together and can swing the magic wand:

Code: Select all

"emaps/gold.jpg"                 // put the emap on the stack. It will be referenced by "u" (heightfield = "v")
-fx "..." )                                // render the reflections (details below)
-compose In -composite "goldenheart.png"   // restore the initial transparency and save
The usage is very similar to the -shade operator, and in fact the cryptic -fx script does nearly the same thing: it reflects the surrounding environment on the surface of a 3D shape (defined by the heightfield). -shade uses one directional light (a vector defined by angle and azimuth), whereas the script uses a special circular texture (a "photograph" of a reflecting sphere) to look up the color. This is called "Spherical Environment Mapping" and can be accomplished with some simple vector calculations (detailed explanations can be found on the internet if necessary).
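In compact form (my notation: H is the shape height Oh, d the camera distance |OIz|, w and h the heightfield dimensions, u the heightfield value at pixel (x,y), and r the radius Om of the circular map), the script below computes the reflected ray and looks up the map colour at (s, t):

Code: Select all

\vec N = \bigl( H\,(u - u_{x+1,y}),\; H\,(u - u_{x,y+1}),\; 1 \bigr)
\vec I = \bigl( x - w/2,\; y - h/2,\; -d \bigr)
\vec R = \vec I - 2\,(\vec N \cdot \vec I)\,\vec N
(s,\,t) = \Bigl( r\,\tfrac{R_x}{|\vec R|} + r,\; r\,\tfrac{R_y}{|\vec R|} + r \Bigr)

(The script uses N as-is; the textbook reflection formula would first normalize N to a unit vector.)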

Image

The fully commented script (hint: all user variables start with "O" = own):

Code: Select all

Oh = 8;     // Height of the 3D-shape

// N = normal vector of the triangle built by the current pixel and its neighboring pixels to the right / bottom
ONx = Oh * (u - p[1,0]);
ONy = Oh * (u - p[0,1]);
ONz = 1;

// I = Incoming ray (camera vector)
OIx = i - w / 2;
OIy = j - h / 2;
OIz = -500;     // Height of the camera over ground

ONI = 2 * (ONx * OIx + ONy * OIy + ONz * OIz);    // Dot product of I and N, multiplied by 2

// R = Reflected ray
ORx = OIx - ONx * ONI;
ORy = OIy - ONy * ONI;
ORz = OIz - ONz * ONI;

// Texture lookup
OnR = sqrt(ORx * ORx + ORy * ORy + ORz * ORz);    // Normalizing factor for R
Om = v.w / 2;                                      // Radius of environment map
v.p{Om * ORx / OnR + Om, Om * ORy / OnR + Om}    // Assign color to current pixel (ATTENTION: no ";" after the last command)
There's also a second version here, which works with 4 averaged normals.

The script contains two adjustable variables (see the examples below):
  • The height of the 3D-shape (Oh). The higher the value, the steeper the profile and the more distortion along it.
  • The height of the camera above the ground plane (OIz). The smaller the value, the larger the area of the map that is reflected by the flat top face. Conversely, if the camera is at infinite distance, the top face reflects only the center pixel of the map (which doesn't look good). Good values are 3 to 10 times the larger dimension of the shape.
The influence of the shape height (Oh). Values from left to right: 2, 4, 8
Image Image Image
The influence of the camera height (OIz). Values from left to right: 100, 300, 10000
Image Image Image

Compared to the single distant light of the -shade operator, environment mapping can simulate any lighting situation you can think of. Here are some examples (outline and shadow added for visual impact):

2 spotlights on a black sphere (map):
Image

It's not all gold that glitters (map):
Image

Piano finish (map):
Image

For those who like it kitschy (map):
Image

Kind of glass with sharp highlights and subtle reflections (composited with a colored font using Vivid_Light; map):
Image

Just for comparison: the -shade operator enhanced with sigmoidal-contrast and level coloring:
Image

Unfortunately there are also some drawbacks:
  • The biggest one is the lack of speed of the -fx operator: the golden heart from above takes approx. 70 seconds to render on a 2x1.4 GHz netbook with 4 GB RAM (8.8 ms/pixel). Without coding the script as a standalone operator it remains experimental and can hardly be used in a production environment.
  • The antialiasing is quite poor. This becomes visible when applying emaps with sharp transitions or fine details: broken lines and noise are the result (see below). The reason is the low number of surface normals, which point to disconnected parts of the map. Maybe some subsampling (examining the reflected area with multiple normals) could improve quality.
Image (map)

Re: Environment mapping - or: a better "shade"-operator?

Post by fmw42 »

Very nice. I did something not quite so fancy a long time ago, rendering a sphere in my script, sphere, below. I also had to use -fx. I implemented a faster compiled version using -process and a custom compiled function built with the MagickFilterKit. However, I believe it was a special version of the kit to do that kind of thing, and I have not tried to see if it still works in recent versions of IM.

Re: Environment mapping - or: a better "shade"-operator?

Post by schnurzelpurz1 »

Thanks for your feedback. If I understand right, with -process it's not possible to calculate a single pixel and hand the result back. Instead it would be necessary to save the images to disk first and pass the filenames as arguments, then read the processed image back in for further manipulations in IM (compositing, creating shadows, ...).

That's not exactly the workflow I'm aiming for (disregarding the fact that my programming capabilities are insufficient for that :) ). In my view, the method is so generic and simple that it should be considered for inclusion in IM. I also think the results speak for themselves and are a real step forward compared to the shade operator.

As far as I can see, there is no official wishlist, so I posted it here, hoping that the right people would see it.

Re: Environment mapping - or: a better "shade"-operator?

Post by fmw42 »

No, that is not what I did.

I used

convert image -process "my compiled code" result

All the work is done by putting the fx-like expression into my template and then compiling it per the instructions with the MagickFilterKit (though I used a special version). But it processes only pixel by pixel, without regard to neighbors.
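For readers who have not used -process before, the invocation pattern looks roughly like this (a sketch; it uses the stock analyze filter module, mentioned later in the thread, as a stand-in for the custom compiled filter, and assumes that module is installed):

Code: Select all

# 'analyze' stands in for the name of the compiled filter module
convert input.png -process analyze -verbose info: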

Re: Environment mapping - or: a better "shade"-operator?

Post by schnurzelpurz1 »

Now it's clear how -process works. That would be a suitable approach, probably as fast as a built-in operator. You say you only had to put the fx code into a template and use the MagickFilterKit to compile it. Is there an automatic conversion of fx scripts to C? Could you post, or send me by email, what your template / generated code looks like (just to have another demonstration besides the analyze filter from the MagickFilterKit)?

Re: Environment mapping - or: a better "shade"-operator?

Post by Bonzo »

An interesting example schnurzelpurz1 and I will have to give it a go when I get time.

Re: Environment mapping - or: a better "shade"-operator?

Post by fmw42 »

schnurzelpurz1 wrote: Now it's clear how -process works. That would be a suitable approach, probably as fast as a built-in operator. You say you only had to put the fx code into a template and use the MagickFilterKit to compile it. Is there an automatic conversion of fx scripts to C? Could you post, or send me by email, what your template / generated code looks like (just to have another demonstration besides the analyze filter from the MagickFilterKit)?

It is a little more complicated than that. Also, I do not know if the template still works in the current IM. I have had to get the IM developers to upgrade it from time to time, and it has been a long time since I used or compiled my scripts. But if you contact me offline I can send you one of my simpler examples. fmw at alink dot net

There is no current automatic conversion.

Re: Environment mapping - or: a better "shade"-operator?

Post by schnurzelpurz1 »

Bonzo wrote:An interesting example schnurzelpurz1 and I will have to give it a go when I get time.
Great! I'm looking forward to it...

Re: Environment mapping - or: a better "shade"-operator?

Post by schnurzelpurz1 »

fmw42 wrote: It is a little more complicated than that. Also, I do not know if the template still works in the current IM. I have had to get the IM developers to upgrade it from time to time, and it has been a long time since I used or compiled my scripts. But if you contact me offline I can send you one of my simpler examples. fmw at alink dot net
Thanks a lot for the help, but it seems to involve a lot of work, and I must admit that it is beyond my capabilities (or would take too much time). I took a look at the source code of the ShadeImage function (effect.c, line 4130). Although the code is not very complicated, a lot of knowledge of the MagickCore API is needed. So I have to leave it up to the developers whether and how they will include it.

Re: Environment mapping - or: a better "shade"-operator?

Post by fmw42 »

Looking quickly at your code, you seem to need neighborhood pixels. The MagickFilterKit (special version) that I have will only process pixel-by-pixel from input to output. So I am not sure that your fx functions will work in the MagickFilterKit template. The purpose of the template was to have it do all the internal IM stuff and allow the user to provide some simple C code to process each pixel. Thus it is not that hard to use. But it does have the processing limitation I mentioned above.

It was my hope and suggestion for IM 7 that new templates be developed to do inverse lookup with one or more input images.

On the other hand, in some cases I have been able to move the neighborhood lookup out of -fx and into simple IM processing by rolling the image by 1 pixel, 8 times, once in each of the neighbor directions. Then loop over the 9 images, which now have all the neighbors at the same pixel location, and process them to combine all 9 images into one final result, as in the sketch below. The -function mathematics may also help. But if your process has negative values for some of the terms, then that will not work without an HDRI compile of IM to combine all the separate images.
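To make the roll idea concrete, here is a minimal sketch for a single neighbour difference (the u-p[1,0] term of the script), with a 50% bias added so the result survives even without an HDRI build (filenames are assumed):

Code: Select all

# shift a copy left by one pixel so p[1,0] lines up with the current pixel, then
# compute 0.5 + (original - shifted) via -compose Mathematics (a*Sc*Dc + b*Sc + c*Dc + d);
# note that -roll wraps around at the edges, unlike the -virtual-pixel setting used with -fx
convert heightfield.png \( +clone -roll -1+0 \) \
        -compose Mathematics -define compose:args=0,-1,1,0.5 -composite x_diff_biased.png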

If you are still interested, you can contact me offline and I can send you one or more of my -process function codes. You may also want to see my script, sphere, for something like what you are doing, though certainly much simpler than yours. I have coded that into a MagickFilterKit -process function.

Another suggestion would be to make a request on the Developers forum to see if the IM developers would be interested and willing to code your script as a proper IM function. I cannot say if they have the time as they are busy with IM 7 development.

Do you have a text file of your full script with documentation of the mathematics involved or well commented code? If so, I might like to look at it in more detail.

One thing that I often do when explaining a complex script is to put line breaks (using \ at the end of the line) and break each important step onto a separate line. Then I can list the lines and explain what each step is doing. That might help me understand your script better and perhaps suggest some ways to speed it up. I guess your separate code sections already do that.

I guess another question would be how you create the spherical (circular) environment map itself.

P.S. My script, chrome, does a simple but fake job of simulating some of your text examples. Not exactly the same, but somewhat similar in results.

Scripts are available at the link below.

Re: Environment mapping - or: a better "shade"-operator?

Post by anthony »

This seems to be a funny sort of distortion map. I did something similar to generate a lens effect using height fields.
Actually it looks almost exactly the same as what I worked out (but have not put into IM Examples) as a distortion-map lens effect.
(The FX is used to convert the gradient into a 2D displacement map, which is then used to sample the input image.)

To see what I do have look at Distorting Images with Maps.
http://www.imagemagick.org/Usage/mapping/#distort

What is needed is some general grayscale 'gradient modifier' operators to convert gradient (heightfield) maps into lens/reflection displacements (much like shade does). ASIDE: I also want to add a similar operator to convert color difference vectors (from image comparisons) to error maps (vector to magnitude), as a generalization of the functions used in "compare".

At the moment distortion maps (distort, displacement, variable blur) are being handled as part of Composite. But they really are very different from Composite, and should probably be re-grouped into a separate set of functions. I'd love to do this, but I have to get CLI handling in IMv7 finished first. Now that I finally have some time on the horizon to do that, I can't afford to get side-tracked again.

If you feel comfortable with coding, perhaps you would like to have a go. Look at the distort/displace/blur code in the Composite function core.

As for the aliasing... Super-sampling will solve it, but only to an extent, as it merges a fixed number of 'samples' rather than doing a true area re-sample. See Area Sampling vs Super Sampling
http://www.imagemagick.org/Usage/distor ... a_vs_super

What is needed is some proper EWA resampling using displacement/distortion maps. That is, rather than mapping a single point, map 4 vectors (the points between pixel centers) and work out the slope vectors to determine the elliptical area that should be sampled. That would remove the current aliasing problems both with what you are doing above and with distortion/displacement maps. The FX (or other 'lens/reflect' gradient conversion function) then just generates the displacement map and lets it do the EWA resampling lookup of pixels.

I already do something similar in Variable Blurring (also in composite), where the vectors are determined from the map image (slope rather than position). See Variable Blurring
http://www.imagemagick.org/Usage/mapping/#blur
Anthony Thyssen -- Webmaster for ImageMagick Example Pages
https://imagemagick.org/Usage/

Re: Environment mapping - or: a better "shade"-operator?

Post by schnurzelpurz1 »

@fmw42: I hoped that I gave enough comments with the script and the idea behind it. But just to make sure we're talking about the same thing: Spherical Environment Mapping (SEM), also known as Sphere Mapping, is a very well known method to simulate reflections on a 3D object. Every 3D application has a display mode called SEM (at least I know of none that doesn't). I quickly asked Google for good information on the topic. To get environment maps there are basically 3 methods (a quick IM-only stand-in is sketched after the list):
  • There are already plenty of maps on the internet :)
  • Use a 3D-application and make a little scene with a reflecting sphere
  • Make a real-life photograph of a reflecting sphere (e.g. a Christmas ball)
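For quick experiments, a crude map can also be faked directly in IM (purely a sketch of my own, not one of the maps used above): a soft off-center "spotlight" on a dark background already gives a usable spherical map.

Code: Select all

convert -size 400x400 xc:black \
        \( -size 200x200 radial-gradient:white-black \) \
        -geometry +60+60 -compose Screen -composite \
        simple_spot_map.png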
@anthony: I wouldn't speak about distortion, because what we are doing here is a true 3D projection: we have a camera, a 3D object and a half-sphere of infinite size that surrounds everything. We render the scene using vectors, angles and coordinates. Although the distortion of Lena looks like a kind of 3D wrapping of an image around a half-sphere, it just works by moving the pixels towards the vertical center line (the nearer they are to the upper/lower edge, the more they move), right? These are two completely different methods that can hardly be transformed into each other (at least not in a general way).

As I already said: trying it myself would be a labor of Hercules for a relatively simple program:
  • I'd have to learn C syntax (I have some knowledge of web programming, but beyond PHP and JavaScript it gets dark very quickly :) ).
  • I'd have to set up a development environment.
  • I'd have to learn the MagickCore API.
So I would really welcome it if someone experienced in IM programming could do it.

It's questionable whether the EWA method you are referring to is also suitable for e-mapping. Keep in mind that there is a fundamental difference between the distortions and e-mapping: the former uses a fixed relation between input and output, whereas the heightfield defines a discrete freeform object which normally cannot be described with formulas or any kind of algorithm. But the basic approach of area sampling is certainly right: finding out which area on the e-map is covered by the normals of adjacent heightfield pixels (see image below). This is a projection of a great circle of the environment sphere. We can furthermore say:
  • If the normals differ too much, we have a crease, which produces a sharp transition either way. In this case we can omit antialiasing (AA). This could even be exposed in the interface as a "creasing angle" parameter.
  • If the normals differ by less than the angle covered by the interpolation method (Bilinear), AA can be skipped too.
Image

Of course, scaling up the heightfield and scaling down the resulting image afterwards would be a possible approach too. As the heightfield normally uses only a few distinct grayscale values (roughly one per pixel of profile width), there are enough in-between values (even for Q8) to obtain smoothing when it is scaled up.
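A sketch of that idea (untested; it simply repeats the command from the first post with the rounded heightfield enlarged 3x before rendering and the result reduced again afterwards; Oh and OIz are tripled as well, because the per-pixel height differences shrink and the pixel coordinates grow by the same factor):

Code: Select all

convert -background none -bordercolor none -virtual-pixel transparent -interpolate Bicubic -alpha Set "heart.png" ( +clone -alpha Extract +level 0,3276 -white-threshold 3275 -morphology Distance Euclidean:7,20! -blur 1.5 -negate -evaluate pow 2 -negate -evaluate pow 0.5 -resize 300% "emaps/gold.jpg" -interpolate Bicubic -fx "Oh=24;ONx=Oh*(u-p[1,0]);ONy=Oh*(u-p[0,1]);ONz=1;OIx=i-w/2;OIy=j-h/2;OIz=-1500;ONI=2*(ONx*OIx+ONy*OIy+ONz*OIz);ORx=OIx-ONx*ONI;ORy=OIy-ONy*ONI;ORz=OIz-ONz*ONI;OnR=sqrt(ORx*ORx+ORy*ORy+ORz*ORz);Om=v.w/2;v.p{Om*ORx/OnR+Om,Om*ORy/OnR+Om}" -resize 33.3333% ) -compose In -composite "heart_gold_supersampled.png"

The render time of course grows roughly with the oversampled pixel count (about 9x here).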

Re: Environment mapping - or: a better "shade"-operator?

Post by anthony »

Note that in many ways this is related to distortions, specifically distortion mapping. Distorts work not by moving pixels, but by reverse-mapping the destination pixel location to the source pixel location.

A distortion map is the same thing, but with the map declaring the position of the source lookup, either relatively or absolutely. This may be 1D or 2D.

You are right in that the Lena sphere example is a map that just wraps Lena onto a sphere. But that isn't reflection mapping.

What you are doing is converting the height field into a distortion map of 2D slope (angle) vectors. So it is basically a mapping from height fields to spherical reflection vectors. That is then just an absolute distortion map into the given 'coloring image'.

So the steps involved are:
  • shape -> height field (regardless of whether this is a blur, distance, or spherical cross section)
  • height field -> X and Y reflection angles (essentially an X and Y slope map)
    This is essentially the same as a slope derivative of the height field, or something like a convolution using a Sobel or Roberts kernel.
    See Edge Detection Convolution Kernels.
    http://www.imagemagick.org/Usage/convolve/#sobel
    As negatives are involved, a bias is probably needed, typically at gray50%
  • X,Y reflection angles -> color lookup in the source image, essentially an absolute distortion map
Hmmm... starting from the height field provided..
Image
convert to vector images... (the scale adjusts the limits of the slope, and is probably wrong)

Code: Select all

convert heart_rounded.png -bias 50%  -morphology Convolve Sobel -negate heart_x-slope.png
convert heart_rounded.png  -bias 50%  -morphology Convolve Sobel:90 -negate heart_y-slope.png
And use them as a two-image absolute distortion map (the color source is resized to the same size for simplicity).
The compose argument scales the absolute coordinates to match the coloring image.

Code: Select all

convert gold.jpg -resize 93x85\! heart_color.jpg
convert heart_color.jpg heart_x-slope.png heart_y-slope.png -compose distort -define compose:args=50% -composite heart_gold.png
Image Image Image Image Image

All that is left is appropriate masking (which could have been done as part of the lookup), especially with some area resampling, to generate nice semi-transparent edging.
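For completeness, one way to do that masking, reusing the -compose In trick from the first post (assuming heart.png and heart_gold.png have the same dimensions):

Code: Select all

convert heart.png heart_gold.png -compose In -composite heart_gold_masked.png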

NOTE: this is just interpolated or 'point' lookup; the distortion mapping does not do area resampling, at least not yet. And I agree that a limit on the EWA should be imposed, to prevent large areas being sampled into a 'crease' or strong ridge.

I also do not think I have the reflected slope handling quite right, as I have not checked the math. It does not look quite right, but that is just a matter of getting the slope determination right.
Anthony Thyssen -- Webmaster for ImageMagick Example Pages
https://imagemagick.org/Usage/

Re: Environment mapping - or: a better "shade"-operator?

Post by schnurzelpurz1 »

@anthony: Your heart looks like it is seen from an infinite distance (parallel camera rays). Do I understand right: Sobel travels the image from left to right and calculates the difference of the colors from pixel to pixel? This would produce difference vectors, compared to the angular vectors of my script, which is geometrically not exactly the same but of course very similar (the resemblance of the results proves it).

Just a word about your distortion maps: this could also be seen as a kind of "texture baking" (= precalculation, to be applied quickly to many different images). Of course, I could save my normal calculations into an image too (e.g. storing the x-vector in the green and the y-vector in the red channel), instead of assigning the color directly. This way it would be much faster to swap in another emap. But is this the task we are usually faced with? I guess it's more frequent that we need different shapes of the same style (thinking of website titles or similar), so precalculation will not help much in this case.

Re: Environment mapping - or: a better "shade"-operator?

Post by anthony »

schnurzelpurz1 wrote: @anthony: Your heart looks like it is seen from an infinite distance (parallel camera rays). Do I understand right: Sobel travels the image from left to right and calculates the difference of the colors from pixel to pixel? This would produce difference vectors, compared to the angular vectors of my script, which is geometrically not exactly the same but of course very similar (the resemblance of the results proves it).
It produces X and Y derivative vectors, averaging across three pixels. It should be equivalent to generating the normals of the height field. The Roberts kernel will do it with just two pixels instead of three, but with a 1/2-pixel shift in each vector (though that may be more directly useful when generating area-resampling vectors, where you want area-limit vectors rather than the center-of-pixel vector).

And yes, it is equivalent to viewing with parallel rays. However, it is not the reflection but the normal vectors that are being used; otherwise any part of the height field at a 45-degree angle or more would only see the background of the color source image, rather than the circular color field.

Anti-aliasing at the edge crease will also need some work.

schnurzelpurz1 wrote: Just a word about your distortion maps: this could also be seen as a kind of "texture baking" (= precalculation, to be applied quickly to many different images). Of course, I could save my normal calculations into an image too (e.g. storing the x-vector in the green and the y-vector in the red channel), instead of assigning the color directly. This way it would be much faster to swap in another emap. But is this the task we are usually faced with? I guess it's more frequent that we need different shapes of the same style (thinking of website titles or similar), so precalculation will not help much in this case.
All very true. The only case where I can see it helping is in pre-generating lots of image styles to give different users a selection of different UI colourings.
Anthony Thyssen -- Webmaster for ImageMagick Example Pages
https://imagemagick.org/Usage/