A Recipe for Creating Environment Maps for Image Based Lighting

This is the recipe I use for creating environment maps for use in image based lighting. While the example I’m going to use specifically involves a chrome ball, a lot of this also applies to environment maps captured by taking panoramic photos.

Goals and Flow

The two main goals of this technique are to…

  1. Maintain consistent and high-quality results.
  2. Make things as easy and automated as possible.

The first goal requires that we use image formats which allow floating-point colours and image processing techniques that degrade the image as little as possible.

In terms of the balance between consistency and quality, I’d prefer to sacrifice quality in order to maintain consistency – this mainly becomes a problem when dealing with colour-spaces.

The second goal is to make things as simple as possible. It’d also be nice to automate as much of this as possible so that large batches of images can be processed with minimal fuss.

If I was a bit more sorted, my workflow would look something like this: the raw image gets converted into a working image, which then gets converted into whatever output format I’m aiming for.

Ideal workflow.

However I’m not entirely keen on bringing raw images directly into Nuke at the moment, primarily because I’m not entirely happy with the results, so I’ve added an additional step to the process. This involves converting the raw image to an intermediate image, which at this stage means exporting the image out as a 16-bit TIF with a gamma-encoded colour-space.

Current workflow.

So that means we’re aiming to use formats like OpenEXR or, if push comes to shove, 16-bit TIF. We’re also going to try to keep any colourspace conversions or resampling of the images to a bare minimum.

The Ingredients

  • Adobe Lightroom – This is my personal preference, but you’re probably able to get similar (or perhaps even better) results using other raw converters.
  • The Foundry’s Nuke – This works well with processing large batches of images and has good colour support. It also has a handy little node for converting mirror ball images into lat-long images.
  • J_Ops for Nuke – Primarily for the J_MergeHDR node, but it also contains J_rawReader, which allows you to read camera raw images within Nuke.

Preparing in Lightroom

The first step after importing your images is to zero out any default tonal adjustments made by Lightroom. For this I apply the General – Zeroed preset in the Develop module.

Zeroed preset applied in Lightroom.

From here I export with the following settings…

  • Format: TIF
  • Compression: None
  • Colourspace: sRGB
  • Bit-Depth: 16bits per component
  • Image resizing: None

With regards to the colourspace, I’ve chosen sRGB because it’s the easiest colourspace to deal with. Ideally I’d like to use ProPhoto as it has a larger colour gamut, but I’m still working on the finer details of using ProPhoto within Nuke.

Hopefully the ACES colour-space will become more common in the future as it has a much larger colour gamut and is linear, but at this stage software support for it is limited.

In Nuke

Once you’ve brought in all the images you exported from Lightroom, the first thing you want to do is crop the image to the boundaries of the chrome ball. It’s best to get the crop as tight as possible.

Cropping in Nuke. The radial node is used to visualize the crop by overlaying a semi-transparent circle on top.

I use a radial node in order to visualise the crop and make sure things are lining up. You can also copy the settings from the radial node onto the crop node.

You can copy values by clicking and dragging from one curve icon to another.

A couple of little tips here. The first is to use whole pixel values (ie… 2350) for your crop rather than sub-pixel values (ie… 2350.4). The reason for this is that Nuke will resample the image if you use sub-pixel values – if you’re not careful when resampling an image you can lose quality and introduce either softening or sharpening.

The second tip is for maintaining a perfect square when cropping. To do so, click in the area.y attribute on the radial node and press the = key. In the expression editor that pops up enter…

area.t - (area.r - area.x)

Now when you adjust the top and side edges, the bottom edge will adjust itself automatically so that it maintains a square 1:1 ratio.

Merging into an HDR image

Once I’ve set up the crop on one image, it’s just a matter of copying the same crop node onto all the other images and plugging all of those into a J_MergeHDR node.

Cropped chrome ball images plugged into a MergeHDR node.
MergeHDR node settings.

The first thing to do is click on the Get Source Metadata button to read the EXIF information off the images. The second is to set the target EV: you can either set the target ISO, Aperture and Shutter settings, or click on the EV Input checkbox and manually set a target EV value (I’ve set it to 12 in the above image).

Using the EV values we can also match exposures between images shot with different ISO, Aperture and Shutter settings.

The EV values can be used to match exposures on two images shot with different ISO, Aperture and Shutter settings.

In the example above we can use the difference between the two EV values (5.614 and 10.614) in order to match the exposure on one to the other. The difference between the two is approximately 5 stops (10.614 – 5.614 = 5), so if we apply an exposure node to the brighter image and set it to -5 stops, we can get a pretty good exposure match between two images. Although the example below is perhaps a bit extreme – as there are plenty of clipped values – in certain areas the exposures match up pretty well.
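To play with the numbers, here’s a small Python sketch of one common EV convention and the stop-matching arithmetic above. The helper names are my own and have nothing to do with J_MergeHDR’s internals:

```python
import math

def exposure_value(aperture, shutter, iso):
    """EV using a common convention: EV = log2(N^2 / t) + log2(ISO / 100),
    where N is the f-number and t the shutter time in seconds."""
    return math.log2(aperture ** 2 / shutter) + math.log2(iso / 100.0)

def match_stops(ev_image, ev_target):
    """Stops to apply to an image so its exposure matches the target EV.
    A brighter image (lower EV) gets a negative adjustment."""
    return ev_image - ev_target

# The example above: the brighter image (EV 5.614) needs -5 stops
# to match the darker one (EV 10.614).
stops = match_stops(5.614, 10.614)   # -5 stops
gain = 2.0 ** stops                  # what an exposure node multiplies by
```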

Where this potentially comes in useful is matching reference photography where automatic settings were used. If you don’t want to figure out the differences yourself, you can plug a MergeHDR node into each image and then set the target EV on all the MergeHDR nodes to the same value.

Applying an exposure node to the over exposed image and setting it to -5 stops.

From Chrome Ball to Lat-Long

The penultimate step in the puzzle is to convert the chrome ball into a lat-long image. This is easy using the SphericalTransform node in Nuke.

SphericalTransform node plugged into a MergeHDR node.

The settings to use are…

  • Input Type: Mirror Ball
  • Output Type: Lat-Long Map
  • Output Format: Any 2:1 image format (ie… 4096×2048, 2048×1024, 1024×512, 512×256)
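For the curious, the direction-to-mirror-ball mapping behind this conversion can be sketched in Python. This is my own illustration of the standard reflection maths (assuming the camera sits on the +Z axis looking at the ball), not Nuke’s actual implementation:

```python
import math

def mirrorball_uv(dx, dy, dz):
    """Map a unit world direction to 0-1 UV coordinates on a mirror ball
    photographed by a camera on the +Z axis. The ball's surface normal is
    halfway between the view direction (0, 0, 1) and the reflected ray."""
    nx, ny, nz = dx, dy, dz + 1.0
    length = math.sqrt(nx * nx + ny * ny + nz * nz)
    if length < 1e-9:
        # The direction pointing directly away from camera maps to the rim.
        raise ValueError("direction (0, 0, -1) maps to the ball's rim")
    nx, ny = nx / length, ny / length
    return nx * 0.5 + 0.5, ny * 0.5 + 0.5

# A ray bouncing straight back at the camera hits the centre of the ball.
centre = mirrorball_uv(0.0, 0.0, 1.0)  # → (0.5, 0.5)
```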

Exporting from Nuke

The very last step is to write it out as an EXR and make sure the colourspace is linear.

 

Colour Temperature in Maya

For a while I’ve wanted to implement colour temperature control into my lighting workflow but I’ve never been able to figure out how it’s calculated. Then I came across this site, which has already mapped out blackbody temperatures to normalised sRGB values.

Using this as a starting point I mapped out the values into a SL function…

color blackbodyfast( float temperature;)
{
	uniform color c[16] = 
		{
		(1,0.0401,0),(1,0.1718,0),(1,0.293,0.0257),(1,0.4195,0.1119),(1,0.5336,0.2301),
		(1,0.6354,0.3684),(1,0.7253,0.517),(1,0.8044,0.6685),(1,0.874,0.8179),(1,0.9254,0.9384),(0.929,0.9107,1),
		(0.8289,0.8527,1),(0.7531,0.8069,1),(0.6941,0.77,1),(0.6402,0.7352,1),(0.6033,0.7106,1)
		};
	float amount = smoothstep ( 1000, 10000, temperature );
	color blackbody = spline ( "catmull-rom", amount, c[0],
		c[0],c[1],c[2],c[3],c[4],c[5],c[6],c[7],c[8],c[9],
		c[10],c[11],c[12],c[13],c[14],c[15],
		c[15]);
	return blackbody;
}

Rather than map every temperature value from 1000K to 40000K, I decided just to deal with 1000K to 10000K using the CIE 1964 10 degree Colour Matching Functions – I chose the 1964 functions only because of their later date, as I couldn’t see (nor greatly understand) the difference between the colour matching functions. The original function I wrote, called blackbody, used every value of the kelvin scale from 1000K to 10000K, which resulted in an array of 90 values. The modified one above uses every 6th value, which brings the array size down to 16 values. In my tests I didn’t notice a speed difference using 90 values, but comparing the two functions I couldn’t see enough visual difference to bother with the full 90 steps.
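As a sanity check, the 16-value spline can be ported to Python. This is my own sketch of the same Catmull-Rom evaluation; the duplicated end knots mirror the SL spline call above:

```python
def smoothstep(a, b, x):
    """Hermite smoothstep, clamped to the 0-1 range like the SL builtin."""
    t = min(max((x - a) / (b - a), 0.0), 1.0)
    return t * t * (3.0 - 2.0 * t)

# The same 16 normalised sRGB values as the SL array above.
COLOURS = [
    (1, 0.0401, 0), (1, 0.1718, 0), (1, 0.293, 0.0257), (1, 0.4195, 0.1119),
    (1, 0.5336, 0.2301), (1, 0.6354, 0.3684), (1, 0.7253, 0.517),
    (1, 0.8044, 0.6685), (1, 0.874, 0.8179), (1, 0.9254, 0.9384),
    (0.929, 0.9107, 1), (0.8289, 0.8527, 1), (0.7531, 0.8069, 1),
    (0.6941, 0.77, 1), (0.6402, 0.7352, 1), (0.6033, 0.7106, 1),
]

def blackbody_fast(temperature):
    """Catmull-Rom spline through COLOURS, with the first and last entries
    duplicated as phantom end knots (matching the SL spline call)."""
    knots = [COLOURS[0]] + COLOURS + [COLOURS[-1]]
    u = smoothstep(1000.0, 10000.0, temperature)
    segments = len(knots) - 3
    seg = min(int(u * segments), segments - 1)
    t = u * segments - seg
    p0, p1, p2, p3 = knots[seg:seg + 4]
    return tuple(
        0.5 * (2 * b + (-a + c) * t + (2 * a - 5 * b + 4 * c - d) * t * t
               + (-a + 3 * b - 3 * c + d) * t ** 3)
        for a, b, c, d in zip(p0, p1, p2, p3)
    )
```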

Blackbody temperature comparison in sRGB. Temperature is mapped to T coordinate.

There is a slight peak where the warm and cool colours meet in the 90 step version. It’s a bit more obvious looking at the image in linear light.

Blackbody temperature comparison in Linear. Temperature is mapped to T coordinate.

Because the values are in sRGB, they need to be converted to Linear before getting used in the shader. The SL used in the main body of my test surface looks something like this…

#include "colour.h"

surface blackbody_srf(
	uniform float temperature = 5600;
#pragma annotation temperature "gadgettype=intslider;min=1000;max=10000;step=100;label=Temperature;"
)
{
	color blackbody = blackbodyfast (temperature);
	blackbody = sRGB_decode(blackbody);
	Oi = Os;
	Ci = blackbody * Oi;
}
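The sRGB_decode call above comes from my own colour.h include; the conversion it performs is the standard sRGB-to-linear formula, sketched here in Python:

```python
def srgb_decode(c):
    """Convert a normalised sRGB component to linear light using the
    official sRGB piecewise formula."""
    if c <= 0.04045:
        return c / 12.92
    return ((c + 0.055) / 1.055) ** 2.4

# Mid-grey in sRGB is roughly 21.4% in linear light.
linear_grey = srgb_decode(0.5)
```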

Used in a light shader the output looks something like this…

Blackbody temperature. Light intensity is the same throughout. sRGB.

The only problem now is that 3Delight doesn’t show a preview of the light shader or, more importantly, the colour temperature in the AE settings for my light.

To get around this I decided to implement an expression which changes the colour of the Maya light that my 3Delight shader is attached to. Because MEL doesn’t have a spline function like SL does, I had to improvise using animation curves. First up, the MEL to create the three curves needed to build the RGB colour temperature.

$red = `createNode animCurveTU`;
$green = `createNode animCurveTU`;
$blue = `createNode animCurveTU`;

setKeyframe -itt "spline" -ott "spline" -t 1 -v 1 $red ;
setKeyframe -itt "spline" -ott "spline" -t 10 -v 1 $red ;
setKeyframe -itt "spline" -ott "spline" -t 11 -v 0.929 $red ;
setKeyframe -itt "spline" -ott "spline" -t 12 -v 0.8289 $red ;
setKeyframe -itt "spline" -ott "spline" -t 13 -v 0.7531 $red ;
setKeyframe -itt "spline" -ott "spline" -t 14 -v 0.6941 $red ;
setKeyframe -itt "spline" -ott "spline" -t 15 -v 0.6402 $red ;
setKeyframe -itt "spline" -ott "spline" -t 16 -v 0.6033 $red ;

setKeyframe -itt "spline" -ott "spline" -t 1 -v 0.0401 $green;
setKeyframe -itt "spline" -ott "spline" -t 2 -v 0.172 $green;
setKeyframe -itt "spline" -ott "spline" -t 3 -v 0.293 $green;
setKeyframe -itt "spline" -ott "spline" -t 4 -v 0.4195 $green;
setKeyframe -itt "spline" -ott "spline" -t 5 -v 0.5336 $green;
setKeyframe -itt "spline" -ott "spline" -t 6 -v 0.6354 $green;
setKeyframe -itt "spline" -ott "spline" -t 7 -v 0.7253 $green;
setKeyframe -itt "spline" -ott "spline" -t 8 -v 0.8044 $green;
setKeyframe -itt "spline" -ott "spline" -t 9 -v 0.874 $green;
setKeyframe -itt "spline" -ott "spline" -t 10 -v 0.9254 $green;
setKeyframe -itt "spline" -ott "spline" -t 11 -v 0.9107 $green;
setKeyframe -itt "spline" -ott "spline" -t 12 -v 0.8527 $green;
setKeyframe -itt "spline" -ott "spline" -t 13 -v 0.8069 $green;
setKeyframe -itt "spline" -ott "spline" -t 14 -v 0.77 $green;
setKeyframe -itt "spline" -ott "spline" -t 15 -v 0.7352 $green;
setKeyframe -itt "spline" -ott "spline" -t 16 -v 0.7106 $green;

setKeyframe -itt "spline" -ott "spline" -t 2 -v 0 $blue;
setKeyframe -itt "spline" -ott "spline" -t 3 -v 0.0257 $blue;
setKeyframe -itt "spline" -ott "spline" -t 4 -v 0.1119 $blue;
setKeyframe -itt "spline" -ott "spline" -t 5 -v 0.2301 $blue;
setKeyframe -itt "spline" -ott "spline" -t 6 -v 0.3684 $blue;
setKeyframe -itt "spline" -ott "spline" -t 7 -v 0.517 $blue;
setKeyframe -itt "spline" -ott "spline" -t 8 -v 0.6685 $blue;
setKeyframe -itt "spline" -ott "spline" -t 9 -v 0.8179 $blue;
setKeyframe -itt "spline" -ott "spline" -t 11 -v 1 $blue;

rename $red "colourTemperatureRed";
rename $green "colourTemperatureGreen";
rename $blue "colourTemperatureBlue";
The resulting animation curves.

Then the next stage was to create an expression which linked the outputted colour temperature to the light colour.

float $r, $g, $b;
if (will_point_lgt1.colourType > 0)
{
	$temp = will_point_lgt1.temperature;
	$amount = `smoothstep 1000 10000 $temp`;
	$c = 1 + 15 * $amount; // remap the 0-1 amount onto the curve times 1-16
	$r = `getAttr -t $c colourTemperatureRed.output`;
	$g = `getAttr -t $c colourTemperatureGreen.output`;
	$b = `getAttr -t $c colourTemperatureBlue.output`;
}else{
	$r = will_point_lgt1.lightColourR;
	$g = will_point_lgt1.lightColourG;
	$b = will_point_lgt1.lightColourB;
}
point_lgtShape.colorR = $r;
point_lgtShape.colorG = $g;
point_lgtShape.colorB = $b;
Previewing the light inside Maya. The Maya-specific settings of this light are ignored in the final render.

Rendering Overscan in Maya

There are a few attributes in Maya you can change in order to render the image with overscan. The first is the resolution, while the second is one of camera scale, focal length, field of view, camera aperture, camera pre-scale, camera post-scale or camera shake-overscan. I use camera scale, as the numbers you need to enter are more intuitive and it doesn’t mess with the camera aperture, focal length or field of view.

In order to render and work with overscan correctly, it needs to be done relative to the format you’re working with – this is typically your final output resolution inside Nuke, but it could also be the resolution of a matte-painting or a live-action plate. Figuring out the amount of overscan to use is simple, and we can use one of two methods: either based on a multiplier or based on the number of extra pixels we want.

The simplest method to me is based on a multiplier. If our format size is 480×360 (as above) and we wanted to render the image with an extra 10%, we multiply the resolution by 1.1 and set the camera scale to 1.1. Like so…

Then in Nuke all we need to do is apply a Reformat node and set it to our original render format of 480×360, with resize type=none and preserve bounding box=on – this has the effect of cropping the render to our output size while keeping the image data outside of the format. Alternatively you can set the Reformat like so… type=scale; scale=0.90909091; resize type=none; preserve bounding box=on. Instead of typing in 0.90909091, you can also set the scale by just typing in 1/1.1 …
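The multiplier method is simple enough to sketch in Python. This is just my own helper mirroring the arithmetic above:

```python
def overscan_by_multiplier(width, height, multiplier):
    """Return the render resolution, the Maya camera scale and the Nuke
    Reformat scale needed to crop back to the original format."""
    render_res = (round(width * multiplier), round(height * multiplier))
    camera_scale = multiplier
    reformat_scale = 1.0 / multiplier  # e.g. 1/1.1 = 0.90909091
    return render_res, camera_scale, reformat_scale

# The 10% example above: 480x360 becomes a 528x396 render.
res, cam_scale, ref_scale = overscan_by_multiplier(480, 360, 1.1)
```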

If we instead wanted to render an extra 32 pixels on the top, bottom, left and right of our image – making the image 64 pixels wider and higher – we have to do things a little differently and change the camera aperture instead. The reason is that adding the same number of pixels to both the width and height results in a very slight change to the aspect ratio of the image.

new width = original width + extra pixels
new height = original height + extra pixels
overscan width = new width / original width
overscan height = new height / original height
new aperture width = original aperture width * overscan width
new aperture height = original aperture height * overscan height

So using our 480×360 example from above. If we wish to add an extra 64 pixels to the width and height we would calculate it like so…

480 + 64 = 544
360 + 64 = 424
544 / 480 = 1.13333333
424 / 360 = 1.17777777
1.417 * 1.13333333 = 1.606
0.945 * 1.17777777 = 1.113
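The same calculation can be wrapped up in a small Python helper (my own sketch of the formulas above):

```python
def overscan_by_pixels(width, height, aperture_w, aperture_h, extra):
    """Add `extra` pixels to both dimensions and scale the camera
    aperture per-axis to match, as per the formulas above."""
    new_w, new_h = width + extra, height + extra
    overscan_w = new_w / width
    overscan_h = new_h / height
    return (new_w, new_h), (aperture_w * overscan_w, aperture_h * overscan_h)

# The 480x360 example above, with a 1.417in x 0.945in aperture
# and 64 extra pixels: renders at 544x424.
res, aperture = overscan_by_pixels(480, 360, 1.417, 0.945, 64)
```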

Same as before, in Nuke we then apply a Reformat node with the following settings: type=to box; width/height=480, 360; force this shape=on; resize type=none; preserve bounding box=on.

More VRay Scene Access… or some more random tidbits

Following on from the last post, here is another example of how you can mess around with VRay scenes using Python.

figure 1: transform += random() * 2, random() * 2, random() *2

This collection of cubes was created using only one cube – it’s been instanced 2500 times and moved about randomly. To do this I’ve used the random module in Python, which is handy for doing random number things.

# figure 1
from vray.utils import *

import random as r
r.seed(1)

l=findByType("Node") # Get all Node plugins
v=Vector(0.0, 0.0, 0.0)
for x in range(2500):
	dupl = l[0].duplicate('dup' + str(x))
	t=dupl.get('transform')
	v = Vector(r.random()*2, r.random()*2, r.random()*2)
	t.offs += v
	dupl.set("transform", t)

The r.seed(1) is used to create a seed point for any future calls to the random module. This means the random numbers chosen are going to be the same each time we render the image – if we’re making changes to the render we don’t want the position of the cubes to change each time we render.

The v variable is used to store the random offset for the transform. At the moment this just uses random.random(), which produces random values between 0 and 1 – in the above example this has the effect of moving the cubes only along the positive xyz axes. There is also random.uniform(min, max), which produces random values between the min and max numbers we give it.

figure 2: transform += uniform(-1,1) * 2, random() * 2, uniform(-1,1) * 2

Here the effect moves the cubes along positive and negative XZ. I’ve kept the Y axis in positive space so that the cubes don’t go through the ground plane.

# figure 2
from vray.utils import *

import random as r
r.seed(1)

l=findByType("Node") # Get all Node plugins
v=Vector(0.0, 0.0, 0.0)
for x in range(20):
	dupl = l[0].duplicate('dup' + str(x))
	t=dupl.get('transform')
	v = Vector(r.uniform(-1,1)*2, r.random()*2, r.uniform(-1,1)*2)
	t.offs += v
	dupl.set("transform", t)

VRay Scene Access… or modifying your scene after you’ve hit render

Introduction

One of the lesser known features of VRay is its ability to access information about the scene and modify it after the render button has been pressed and before the render begins. This allows you to create custom solutions to problems which might not be solvable inside the 3d application itself. It can also be used to work around bugs in VRay – but only as a temporary measure when a deadline is fast approaching.

Note: I’m using VRay for Maya. I’m not sure how much of this is possible in tools such as Max or Softimage; hopefully this knowledge is easily transferable between 3d applications.

Examples…

Some simple examples of what you can do with this include changing shader properties such as colour and texture information, duplicating and moving geometry around, or even loading in extra geometry at render time.

All of these things you can do inside your 3d application, but they might present problems if you’re dealing with lots of objects. For example, you may have thousands of objects that you wish to create texture variants for; rather than create a shader for each object, you could set things up so that one shader is used on all the objects, with an attribute on each object specifying which texture to use when you hit render.

To get a better idea of what is going on behind the scenes, the diagram below shows what happens when you hit render in your favourite 3d application. The Post Translate Python script is run during the translation process (the nodes in red).

Manipulating the scene data requires an understanding of the vrscene file format. The best way to learn it is to turn on the Export to a .vrscene file setting in the Render Globals and have a read of the file it outputs.

The VRay Scene Structure and Nodes

The vrscene file describes the 3d scene in a human-readable ASCII file. If you open it up in your favourite text editor you should be able to figure out what is going on quite easily. The section below determines the image width, height, pixel aspect ratio and its filename…

SettingsOutput vraySettingsOutput {
  img_width=450;
  img_height=337;
  img_pixelAspect=1;
  img_file="tmp/untitled.png";

Each section represents a node (plugin) that VRay recognises. The basic structure of each node is simply…

[Type] [Name] {
    [Attribute]=[Value];
}

So using the image settings example from above…

[Type] = SettingsOutput
[Name] = vraySettingsOutput
[Attribute] = img_width
[Value] = 450
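This attribute/value structure is regular enough to pick apart with a few lines of Python. The sketch below is purely my own illustration of the format, not part of vray.utils:

```python
import re

def parse_vrscene_nodes(text):
    """Very naive parser for simple `Type Name { attr=value; }` blocks.
    It won't handle nested lists or hex blobs, but it's enough to poke
    at settings nodes like SettingsOutput."""
    nodes = {}
    for match in re.finditer(r"(\w+)\s+(\S+)\s*\{([^}]*)\}", text):
        node_type, name, body = match.groups()
        attrs = dict(re.findall(r"(\w+)\s*=\s*([^;]+);", body))
        nodes[name] = {"type": node_type, "attrs": attrs}
    return nodes

sample = """
SettingsOutput vraySettingsOutput {
  img_width=450;
  img_height=337;
}
"""
settings = parse_vrscene_nodes(sample)["vraySettingsOutput"]
# settings["attrs"]["img_width"] is the string "450"
```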

As you move down through the vrscene you’ll move past all your image settings, render settings and global illumination settings, and down towards all your material, brdf, texture, transform and geometry nodes. For example you might see a few nodes which look like this…

BRDFDiffuse lambert1@diffuse_brdf {
  color=Color(0, 0, 0);
  color_tex=lambert1@diffuse_brdf_color_tex@tex_with_amount;
  transparency=Color(0, 0, 0);
}

TexAColorOp lambert1@diffuse_brdf_color_tex@tex_with_amount {
  color_a=AColor(0.5, 0.5, 0.5, 1);
  mult_a=0.8;
}

MtlSingleBRDF lambert1@material {
  brdf=lambert1@diffuse_brdf;
  allow_negative_colors=1;
}

This is the default Lambert shader in Maya, which is made up of three nodes. Starting from the bottom we have the MtlSingleBRDF node; this is the top-level material which gets applied to our object. You’ll notice that its brdf attribute refers to the node at the top, a BRDFDiffuse node, which determines what type of shading model to use (diffuse, blinn, mirror, phong, etc). Finally there is a TexAColorOp, which stores a colour value along with an alpha value – this value is used in the BRDFDiffuse node to give us our colour. This node is perhaps redundant, as we could specify the colour directly in the BRDFDiffuse node. To visualize how these are all connected, think of them in terms of nodes inside Nuke or Houdini…

Finally we come to the object and geo nodes which look something like this…

Node pSphereShape1@node {
  transform=TransformHex("0000803F0000000000000000000000000000803F0000000000000000000000000000803FE8B64401FDEE7EA96AB5F3BF00000000000000000000000000000000");
  geometry=pSphereShape1@mesh1;
  material=lambert1@material;
  nsamples=1;
  visible=1;
  user_attributes="";
  primary_visibility=1;
}

GeomStaticMesh pCubeShape1@mesh2 {
  vertices=ListVectorHex("ZIPB600000001C000000e7X81OBd6CFA3Xb712GUVKO4a886dYKD2YAA1ODEU9");
  faces=ListIntHex("ZIPB9000000036000000e7XKOVAHC79E523BHGNecBbRYaFa5SUJLaNJQacU6BC2UaSI67LYXAYOQJb0N6MZL28O79N1GJAUV7eNT");
  normals=ListVectorHex("ZIPB2001000021000000e7X81OY3TG4S11YMEZWOUH5bdV54FLVaU4DFGXQZbALLSKXD2FA");
  faceNormals=ListIntHex("ZIPB9000000041000000e7XQH4YFC8PHFTACH6UCF0ZcdXZKE4VAEK4FJGKIST0AUZREYbbB0TCbCEbbVcTEFEUbbAZ3MQV8d7CKR6L49d785aCaLYIA4DA");
  map_channels=List(
      List(
          0,
          ListVectorHex("ZIPBB40000003C000000e7X81O60O9EAZVBdOBaeK1U3QBFUDPGV6aSF5d1b4UKbE4CJIaUGbEb53TUBWAS3aEJZ5468IcAFdHDT3AAA2b7ZaS"),
          ListIntHex("ZIPB900000003A000000e7XUUVDECQRH923NF053bdb10CKCZA2PWbKGdUKQDaOKaFQ7HV7J719DREK7DTQ23Od3Xd73I2L5UIMMAQTQZT2")
      )
  );
  map_channels_names=ListString(
    "map1"
  );
  edge_visibility=ListIntHex("ZIPB0800000010000000e7XTYQd97OLXZ3TBAAIU92PP");
  primary_visibility=1;
  dynamic_geometry=0;
}

The first node (Node) is our object node and includes information about the transformation, geometry and material on the object. The second node (GeomStaticMesh) stores information about the mesh – its vertices, faces, UVs and normals. You’ll notice that the transform and mesh data attributes are stored as hex values; this is to save space in the file – you can write out ASCII data if you want to. With transform data it’s not so bad and looks something like this…

transform=Transform(Matrix(Vector(1, 0, 0), Vector(0, 1, 0), Vector(0, 0, 1)), Vector(-1.231791174023726, 0, 0));

But with mesh data you probably only want to write out ASCII information for debugging purposes; otherwise it makes the vrscene long and difficult to read.

Getting Started

The easiest way to see this all in action is to take the first example from the VRay documentation and run it by copying it into the Post Translate Python script field, which can be found in the Common tab within the Render Globals…

Editing the Post Translate Python in Maya 2009

Note: This brief section only applies to Maya 2009; you can ignore it if you’re using Maya 2011, 2012 or 2013.

If you’re like me and using Maya 2009, you’ll notice that the text entry field here can only take one line. This is because Maya 2009’s Python interpreter can’t handle escape characters properly (in particular carriage returns). This script works around the problem by removing the problem escape characters before setting the attribute correctly.

DOWNLOAD willVR_ptpEditor.mel HERE

Download the file, copy it to one of your Maya script folders and in the script editor run…

source "willVR_ptpEditor.mel";

A window will pop up that will allow you to edit the python code.

Users of Maya 2011+ can continue reading

The following python code…

from vray.utils import *

l=findByType("Node") # Get all Node plugins
p=l[0].get("material") # Get the material of the first node
brdf=p.get("brdf") # Get the BRDF for the material
brdf.set("color_tex", Color(1.0, 0.0, 0.0)) # Set the BRDF color to red

t=l[0].get("transform") # Get the transformation for the first node
t.offs+=Vector(0.0, 1.0, 0.0) # Add one unit up
l[0].set("transform", t) # Set the new transformation

All it does is change the colour to red and move one of the objects up one unit – not particularly inspiring or useful, but it’s a good introduction to what you can do.

The ‘before’ render shows what the scene looks like when rendered without the modification, while the ‘after’ render shows what happens when I paste the above Python code into the Post Translate Python field. There isn’t any performance hit with an example like this, but I can imagine that once you start getting into some fairly complicated Python code, and when you’re dealing with lots of nodes, it could create a performance hit.

Camera Projection in Nuke

Camera projection is pretty straight forward when it comes to mapping one projection onto one object. It becomes less straight forward when you want to map multiple projections onto one object.

That’s where the MergeMat node comes in handy. It allows you to composite your Project3D nodes together before you apply them to an object. It acts exactly like the regular Merge node so you’ll need to have an alpha channel in the projection going into the Foreground Input (A) on the MergeMat.

If you do want to do multiple projections on multiple objects which share a similar 3d space, you may find that Nuke has trouble figuring out which object is supposed to be in front and which is at the back. This happens due to a lack of precision in the Z-Buffer. Nuke creates the depth pass in the ScanlineRender node by remapping the near and far clipping plane values on the rendering camera to zero and one (respectively). If the near and far clipping planes are too far apart then this can result in a “chattering” effect where the two objects intersect.

This lack of precision is very similar to the colour-banding you see in 8-bit images. Even though Nuke stores the depth as a floating point value, which has lots of precision to begin with, it can still result in banding if the near and far clipping planes are too far apart.

The way to fix this inside Nuke is to adjust the clipping planes so that they bound the 3D scene as tightly as possible – taking into account any animation on the cameras or geometry. You can animate the clipping planes, but it’s best to leave them static; animating them results in an animated depth pass, which can cause trouble if you’re using the depth pass for depth effects such as defocusing or atmospherics.

Another option which can also help is to adjust the Z-Blend Mode and Z-Blend Range within the ScanlineRender node. This works at a pixel level by taking the depth values of each object in the scene; if the depth values of any objects are within the Z-Blend Range of each other, those objects are rendered blended together.
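To see why wide clipping planes hurt, here’s a little Python illustration (my own, not Nuke’s code) of how the normalized depth separation between two nearby objects shrinks as the planes move apart:

```python
def normalized_depth(z, near, far):
    """Remap a camera-space depth to the 0-1 range, the way a depth
    pass is typically built from the clipping planes."""
    return (z - near) / (far - near)

# Two objects 0.01 units apart, seen with tight vs very wide planes.
tight = (normalized_depth(5000.01, 4000, 6000)
         - normalized_depth(5000.0, 4000, 6000))
wide = (normalized_depth(5000.01, 0.1, 1e7)
        - normalized_depth(5000.0, 0.1, 1e7))
# The wide-plane separation is thousands of times smaller, so once the
# values are squeezed into limited float precision the sort can flicker.
```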

 

Screenspace Texture Mapping in Maya/Mental Ray

Screenspace mapping – or to be more geeky, Normalized Device Coordinates (NDC) mapping – allows you to map a texture according to screenspace coordinates rather than traditional UV coordinates.

The example below shows traditional UV mapping on the left and screenspace mapping on the right applied to a flat plane inside Maya (see middle for what the camera is seeing).

This technique was used in ye olden’ days (it started getting phased out around 2006-2008) inside Renderman shaders to composite occlusion renders with beauty renders. The occlusion would be rendered out in a prepass and then composited during the beauty render.

You could also use this technique to do 2d compositing or even just general purpose image processing inside Maya.

The method is slightly different between Maya Software and Mental Ray. To do this in Mental Ray you need to use a mib_texture_vector and a mib_texture_filter_lookup; the shading network looks like this…

The settings in the mib_texture_vector need to look like this…

With Maya Software the shading network looks like this…

The projection settings should look like this…

Note that the camera should be the one you’re rendering from if you want the mapping in screenspace; otherwise this will act like a camera projection (it is a projection node). One final caveat with Maya Software is that you’ll need to delete the UVs on the geometry in order for this to work. If you want to switch between UVs and no UVs, apply a Delete UVs node and set its node behaviour to HasNoEffect when you want UVs and to Normal when you don’t.

OpenEXR and 3Delight

The OpenEXR format has a number of features which are super handy for CG animation and VFX, such as saving the image data in either half or full floating point, setting data-windows and adding additional metadata to the file. 3Delight for Maya allows you to use all these features, but doesn’t cover how to use them in the documentation (at least I couldn’t find mention of it).

In order to gain access to these features you need to add an extra attribute to the render pass called “exrDisplayParameters”. In the example below, my render pass is called “beauty”.

addAttr -dt "string" -ln exrDisplayParameters beauty;
setAttr "beauty.exrDisplayParameters" -type "string"
"-p \"compression\" \"string\" \"zip\" -p \"autocrop\" \"integer\" \"1\" ";

The string attribute should end up looking like so in the attribute editor…

-p "compression" "string" "zip" -p "autocrop" "integer" "1"

The above sets the compression type to zip and also tells 3Delight to autocrop the image when it’s rendered. Auto-crop adjusts the bounding box of the data-window (or ROI, region-of-interest) to only contain non-black pixels (I believe it does this based on the alpha channel). This allows Nuke to process the image quicker, as it only calculates information within that data-window. See this tutorial on Nuke Bounding Boxes and how to speed up your compositing operations.
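Conceptually, the auto-crop is just a bounding box around the non-zero alpha pixels. A rough Python sketch (my own illustration, not 3Delight’s implementation):

```python
def autocrop_bbox(alpha):
    """Return (x_min, y_min, x_max, y_max) of the non-zero pixels in a
    2D alpha raster (list of rows), or None if the image is empty."""
    coords = [(x, y) for y, row in enumerate(alpha)
              for x, value in enumerate(row) if value > 0]
    if not coords:
        return None
    xs = [x for x, _ in coords]
    ys = [y for _, y in coords]
    return min(xs), min(ys), max(xs), max(ys)

alpha = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 0],
]
# Data window covers columns 1-2, rows 1-2.
box = autocrop_bbox(alpha)  # → (1, 1, 2, 2)
```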

The basic syntax of the parameter string is easy enough to understand; the three arguments passed to the -p flag are name, type and value.

-p "[name]" "[type]" "[value]"

You can also add additional metadata to the header of the EXR render. For example you may wish to include things such as

  • Project, scene and shot information.
  • Characters or creatures in the shot.
  • Model, texture, animation versions used.
  • Maya scene used to render the shot.
  • Focal length, Fstop, Shutter Angle, Filmback size.

3Delight already includes some metadata with the EXR, so you don’t need to add information for the following…

  • Near and far clipping planes.
  • WorldToCamera and WorldToNDC matrices. The Nuke Python documentation has info on how you can use this data to create cameras in Nuke.

You can add this metadata using the “exrheader_” prefix and then the name of your attribute. The following will add three metadata attributes called “shutter”, “haperture” and “vaperture”.

-p "exrheader_shutter" "float" "180" -p "exrheader_haperture" "float" "36" -p "exrheader_vaperture" "float" "24"

While the following will add the project name “ussp” and the maya scene name that was used to render the shot…

-p "exrheader_project" "string" "ussp" -p "exrheader_renderscene" "string" "h:/ussp/bes_0001/scenes/lighting_v01.ma"

The easiest way to get information from your scene into this parameter string is to set up a Pre-Render MEL script in your render pass along the lines of…

string $sceneName = `file -q -sn`; //Grab the name of the current scene.
string $projectName = `getenv "PROJECT"`; //This assumes you have an environment variable called "PROJECT" with the project name setup already.
string $parameters = "";
$parameters += (" -p \"exrheader_renderScene\" \"string\" \"" +  $sceneName + "\" ");
$parameters += (" -p \"exrheader_projectName\" \"string\" \"" + $projectName + "\" ");
setAttr ($pass + ".exrDisplayParameters") -type "string" $parameters;

The 3Delight documentation has more information on what type of metadata you can add to the EXR.

Custom Shader UI in 3Delight

When you create a custom SL shader in 3Delight, it’ll automatically create a shader UI which looks like this in Maya. The following UI doesn’t look very polished – the names we’ve given our variables vary in how descriptive they are – which isn’t very helpful if others are going to be using this shader.

This is based on a shader with the following SL code.

surface ui_example_srf
(
	string texmap = "";
	float blur = 0;
	float usebake = 1;
	float numsamples = 16;
	float doRefl = 0;
	color diffuseColour = color (0.5);
)
{
	// SHADER DOESN'T DO ANYTHING //
}

3Delight does however provide a method of creating nice looking shader UIs. You can use #pragma annotations in your shader source code to make things nicer.

#pragma annotation texmap "gadgettype=inputfile;label=Texture Map;hint=Texture Map"
#pragma annotation blur "gadgettype=floatslider;label=Blur;min=0;max=1;hint=Blur the Texture Map"
#pragma annotation usebake "gadgettype=checkbox;label=Use Bake;hint=Use Bake"
#pragma annotation numsamples "gadgettype=intslider;min=1;max=256;label=Samples;hint=Number of samples to use."
#pragma annotation doRefl "gadgettype=optionmenu:gather-env:occlusion-env:ptc-env;label=Reflection Method;hint=Reflection Method"
#pragma annotation diffuseColour "gadgettype=colorslider;label=Diffuse Colour;hint=Diffuse Colour."

This will create a shader that looks like this. The hint will be displayed either in the Maya status line or as a tool-tip if you hover the cursor over the UI element.

You can place the #pragma lines anywhere in your SL file. To see them you will need to re-compile the shader and then reload it inside Maya by right-clicking on the shader, selecting “reload shader”, and then selecting the shader in either the Assignment Panel or the Outliner.