Camera Projection in Nuke

Camera projection is straightforward when it comes to mapping one projection onto one object. It becomes less straightforward when you want to map multiple projections onto one object.

That’s where the MergeMat node comes in handy. It allows you to composite your Project3D nodes together before you apply them to an object. It acts exactly like the regular Merge node, so you’ll need an alpha channel in the projection going into the foreground input (A) of the MergeMat.
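
If you prefer to build this in Python, here’s a minimal sketch of a two-projection setup using Nuke’s Python API. The node classes are the stock ones (Camera2 may be Camera3 or Camera4 depending on your Nuke version), the Read file paths are placeholders, and I’m assuming the MergeMat takes its inputs in the same B/A order as a regular Merge…

import nuke

cam = nuke.nodes.Camera2(name='projectionCam')
plateA = nuke.nodes.Read(file='plateA.exr')  # foreground plate, needs an alpha
plateB = nuke.nodes.Read(file='plateB.exr')  # background plate

# Project each plate through the camera (input 0 is the image, input 1 the cam).
projA = nuke.nodes.Project3D(inputs=[plateA, cam])
projB = nuke.nodes.Project3D(inputs=[plateB, cam])

# Composite the projections; as with Merge, input 0 is B and input 1 is A.
mat = nuke.nodes.MergeMat(inputs=[projB, projA])

# Apply the merged material to the geometry and render as usual.
card = nuke.nodes.Card2()
nuke.nodes.ApplyMaterial(inputs=[card, mat])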

If you do want to do multiple projections on multiple objects which share a similar 3D space, you may find that Nuke has trouble figuring out which object is supposed to be in front and which behind. This happens due to a lack of precision in the Z-buffer. Nuke creates the depth pass in the ScanlineRender by remapping the near and far clipping plane values on the rendering camera to zero and one respectively (effectively depth = (z - near) / (far - near)). If the near and far clipping planes are too far apart, this can result in a “chattering” effect where the two objects intersect.

This lack of precision is very similar to the colour-banding you see in 8-bit images. Even though Nuke stores the depth as a floating point value, which has lots of precision to begin with, it can still result in banding if the near and far clipping planes are too far apart.
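
As a toy illustration (this is the banding principle, not Nuke’s actual internals), normalise two depths one unit apart against tight and loose clipping planes, then store the result at reduced half-float precision to make the quantisation obvious…

import numpy as np

def normalised_depth(z, near, far):
    # The linear near/far remap, quantised to half-float precision.
    return np.float16((z - near) / (far - near))

z_front, z_back = 50000.0, 50001.0  # two surfaces one unit apart

# Tight clipping planes keep the two depths distinct...
print(normalised_depth(z_front, 49900, 50100), normalised_depth(z_back, 49900, 50100))

# ...while loose ones quantise them onto the same value, hence the chattering.
print(normalised_depth(z_front, 0.1, 1e7), normalised_depth(z_back, 0.1, 1e7))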

The way to fix this inside Nuke is to adjust the clipping planes so that they bound the 3D scene as tightly as possible, taking into account any animation on the cameras or geometry. You can animate the clipping planes, but it’s best to leave them static: animating them results in an animated depth pass, which can cause trouble if you’re using the depth pass for depth effects such as defocusing or atmospherics.

Another option which can also help is to adjust the Z-Blend Mode and Z-Blend Range within the ScanlineRender. This works at a pixel level: it takes the depth values of each object in the scene, and if the depth values of any objects fall within the Z-Blend Range of each other, it renders those objects blended together.
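
Both fixes can be scripted. Here’s a hedged sketch, assuming a camera named projectionCam and that the ScanlineRender knobs are called zblend_mode and zblend_range (check the tooltips in your version)…

import nuke

cam = nuke.toNode('projectionCam')
cam['near'].setValue(900)    # just in front of the nearest geometry
cam['far'].setValue(1200)    # just behind the furthest geometry

sr = nuke.toNode('ScanlineRender1')
sr['zblend_mode'].setValue('linear')  # blend objects whose depths are close
sr['zblend_range'].setValue(0.1)      # how close counts as "close"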


Screenspace Texture Mapping in Maya/Mental Ray

Screenspace mapping, or to be more geeky, Normalized Device Coordinates (NDC) mapping, allows you to map a texture according to screenspace coordinates rather than traditional UV coordinates.
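
The mapping itself is trivial: the raster position normalised by the resolution, so every point lands in the zero-to-one range regardless of image size…

def to_ndc(x_pixel, y_pixel, width, height):
    # Raster position divided by resolution gives screenspace (NDC) coordinates.
    return (x_pixel / width, y_pixel / height)

print(to_ndc(960, 540, 1920, 1080))  # (0.5, 0.5), the centre of frame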

The example below shows traditional UV mapping on the left and screenspace mapping on the right, applied to a flat plane inside Maya (the middle shows what the camera is seeing).

This technique was used in ye olden’ days (it started getting phased out around 2006-2008) inside Renderman shaders to composite occlusion renders with beauty renders. The occlusion would be rendered out in a prepass and then composited during the beauty render.

You could also use this technique to do 2D compositing or even just general-purpose image processing inside Maya.

The method of doing this is slightly different between Maya Software and Mental Ray. To do this in Mental Ray you need to use a mib_texture_vector and a mib_texture_filter_lookup; the shading network looks like this…

The settings in the mib_texture_vector need to look like this…
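
If you’d rather build it with maya.cmds than by hand, a sketch along these lines should work (assuming Mental Ray is loaded; the texture path is a placeholder, and you’d assign the surface shader to your geometry as usual)…

import maya.cmds as cmds

vec = cmds.createNode('mib_texture_vector')      # supplies the NDC coordinate
lookup = cmds.createNode('mib_texture_filter_lookup')
tex = cmds.createNode('mentalrayTexture')
cmds.setAttr(tex + '.fileTextureName', '/path/to/texture.exr', type='string')
shader = cmds.shadingNode('surfaceShader', asShader=True)

# Wire the coordinate into the lookup, the texture into the lookup,
# and the lookup into the shader.
cmds.connectAttr(vec + '.outValue', lookup + '.coord')
cmds.connectAttr(tex + '.message', lookup + '.tex')
cmds.connectAttr(lookup + '.outValue', shader + '.outColor')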

With Maya Software the shading network looks like this…

The projection settings should look like this…
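
The equivalent maya.cmds sketch for Maya Software, assuming a camera shape named renderCamShape and a placeholder texture path…

import maya.cmds as cmds

fileTex = cmds.shadingNode('file', asTexture=True)
cmds.setAttr(fileTex + '.fileTextureName', '/path/to/texture.exr', type='string')

proj = cmds.shadingNode('projection', asTexture=True)
cmds.setAttr(proj + '.projType', 8)  # 8 = perspective projection
cmds.connectAttr(fileTex + '.outColor', proj + '.image')
cmds.connectAttr('renderCamShape.message', proj + '.linkedCamera')

shader = cmds.shadingNode('surfaceShader', asShader=True)
cmds.connectAttr(proj + '.outColor', shader + '.outColor')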

Note that the camera should be the one you’re rendering from if you want the mapping in screenspace; otherwise this will act like a camera projection (it is a projection node, after all). One final caveat with Maya Software is that you’ll need to delete the UVs on the geometry in order for this to work. If you want to switch between UVs and no UVs, apply a Delete UVs node and set its node behaviour to HasNoEffect when you want UVs and back to Normal when you don’t (see the snippet below).
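
The toggle is just the node’s nodeState attribute; assuming the Delete UVs history node is called polyMapDel1…

import maya.cmds as cmds

cmds.setAttr('polyMapDel1.nodeState', 1)  # 1 = HasNoEffect, UVs kept
cmds.setAttr('polyMapDel1.nodeState', 0)  # 0 = Normal, UVs deleted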

OpenEXR and 3Delight

The OpenEXR format has a number of features which are super handy for CG animation and VFX, such as saving the image data in either half or full floating point, setting data-windows and adding additional metadata to the file. 3Delight for Maya allows you to use all these features, but doesn’t cover how to use them in the documentation (at least I couldn’t find mention of it).

To gain access to these features you need to add an extra attribute to the render pass called “exrDisplayParameters”. In the example below, my render pass is called “beauty”.

addAttr -dt "string" -ln exrDisplayParameters beauty;
setAttr "beauty.exrDisplayParameters" -type "string"
"-p \"compression\" \"string\" \"zip\" -p \"autocrop\" \"integer\" \"1\" ";

The string attribute should end up looking like so in the attribute editor…

-p "compression" "string" "zip" -p "autocrop" "integer" "1"

The above sets the compression type to zip and also tells 3Delight to auto-crop the image when it’s rendered. Auto-crop adjusts the bounding box of the data-window (or ROI, region-of-interest) to only contain non-black pixels (I believe it does this based on the alpha channel). This allows Nuke to process the image quicker, as it only calculates information within that data-window. See this tutorial on Nuke bounding boxes and how to speed up your compositing operations.

The basic syntax of the parameter string is easy enough to understand: the three arguments passed to the -p flag are name, type and value.

-p "[name]" "[type]" "[value]"

You can also add additional metadata to the header of the EXR render. For example, you may wish to include things such as…

  • Project, scene and shot information.
  • Characters or creatures in the shot.
  • Model, texture, animation versions used.
  • Maya scene used to render the shot.
  • Focal length, Fstop, Shutter Angle, Filmback size.

3Delight already includes some metadata with the EXR, so you don’t need to add information for the following…

  • Near and far clipping planes.
  • WorldToCamera and WorldToNDC matrices. The Nuke Python documentation has info on how you can use this data to create cameras in Nuke (see the sketch after this list).
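
For instance, here’s a hedged sketch of rebuilding a camera from the worldToCamera metadata on a selected Read node (whether you need the transpose depends on the row/column order your renderer wrote the matrix in)…

import nuke
import nuke.math

read = nuke.selectedNode()
vals = read.metadata()['exr/worldToCamera']

# worldToCamera is the inverse of the camera's own transform,
# so build a Matrix4 from the 16 values and invert it.
m = nuke.math.Matrix4()
for i in range(16):
    m[i] = vals[i]
m.transpose()  # may or may not be needed, depending on the renderer
m = m.inverse()

cam = nuke.nodes.Camera2()
cam['useMatrix'].setValue(True)
for i in range(16):
    cam['matrix'].setValue(m[i], i)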

You can add this metadata using the “exrheader_” prefix and then the name of your attribute. The following will add three metadata attributes called “shutter”, “haperture” and “vaperture”.

-p "exrheader_shutter" "float" "180" -p "exrheader_haperture" "float" "36" -p "exrheader_vaperture" "float" "24"

While the following will add the project name “ussp” and the Maya scene name that was used to render the shot…

-p "exrheader_project" "string" "ussp" -p "exrheader_renderscene" "string" "h:/ussp/bes_0001/scenes/lighting_v01.ma"

The easiest way to get information from your scene into this parameter string is to set up a Pre-Render MEL script on your render pass, along the lines of…

string $sceneName = `file -q -sn`; // Grab the name of the current scene.
string $projectName = `getenv "PROJECT"`; // This assumes you have an environment variable called "PROJECT" with the project name set up already.
string $parameters = "";
$parameters += (" -p \"exrheader_renderScene\" \"string\" \"" + $sceneName + "\" ");
$parameters += (" -p \"exrheader_projectName\" \"string\" \"" + $projectName + "\" ");
// $pass holds the name of the render pass ("beauty" in the earlier example).
setAttr ($pass + ".exrDisplayParameters") -type "string" $parameters;

See the 3Delight documentation for more information on what types of metadata you can add to the EXR.