A Recipe for Creating Environment Maps for Image Based Lighting

This is the recipe I use for creating environment maps for use in image based lighting. While the example I’m going to use specifically involves a chrome ball, a lot of this also applies to environment maps captured by taking panoramic photos.

Goals and Flow

The two main goals of this technique are to…

  1. Maintain consistent and high-quality results.
  2. Make things as easy and automated as possible.

The first goal requires that we use image formats which allow floating-point colours and image processing techniques that degrade the image as little as possible.

In terms of the balance between consistency and quality, I’d prefer to sacrifice quality in order to maintain consistency – this mainly becomes a problem when dealing with colour-spaces.

The second goal is to keep things as simple and uncomplicated as possible. It’d also be nice to automate as much of the process as possible so that large batches of images can be handled with minimal fuss.

If I was a bit more sorted, my workflow would look something like this: the raw image gets converted into a working image, and that then gets converted into whatever output format I’m aiming for.

Ideal workflow.

However I’m not entirely keen on bringing raw images directly into Nuke at the moment, primarily because I’m not happy with the results, so I’ve added an additional step to the process. This involves converting the raw image to an intermediate image, which at this stage means exporting it as a 16-bit TIF with a gamma-encoded colour-space.

Current workflow.

So that means we’re aiming to use formats like OpenEXR or, if push comes to shove, 16-bit TIF. We’re also going to try to keep any colourspace conversions or resampling of the images to a bare minimum.

The Ingredients

  • Adobe Lightroom – This is my personal preference, but you’re probably able to get similar (or perhaps even better) results using other raw converters.
  • The Foundry’s Nuke – This works well with processing large batches of images and has good colour support. It also has a handy little node for converting mirror ball images into lat-long images.
  • J_Ops for Nuke – Primarily for the J_MergeHDR node, but it also contains J_rawReader, which allows you to read camera raw images within Nuke.

Preparing in Lightroom

The first task after importing your images is to zero out any default tonal adjustments made by Lightroom. For this I apply the General – Zeroed preset in the Develop module.

Zeroed preset applied in Lightroom.

From here I export with the following settings…

  • Format: TIF
  • Compression: None
  • Colourspace: sRGB
  • Bit-Depth: 16 bits per component
  • Image resizing: None

With regards to the colourspace, I’ve chosen sRGB because it’s the easiest colourspace to deal with. Ideally I’d like to use ProPhoto as it has a larger colour gamut, but I’m still working on the finer details of using ProPhoto within Nuke.

Hopefully the ACES colour-space will become more common in the future as it has a much larger colour gamut and is linear, but at this stage software support for it is limited.

In Nuke

Once you’ve brought in all the images you exported from Lightroom, the first thing to do is crop the image to the boundaries of the chrome ball. It’s best to get the crop as tight as possible.

Cropping in Nuke: the radial node is used to visualise the crop by overlaying a semi-transparent circle on top.

I use a radial node in order to visualise the crop and make sure things are lining up. You can also copy the settings from the radial node onto the crop node.

You can copy values by clicking and dragging from one curve icon to another.
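If you’d rather not copy values by hand, you can also expression-link the crop to the radial so the two always match. A small Python sketch, assuming hypothetical node names of Crop1 and Radial1:

import nuke

# Drive each channel of the crop's box knob from the radial's area knob.
# The channels of both knobs are ordered x, y, r, t.
crop = nuke.toNode('Crop1')
for i, ch in enumerate(('x', 'y', 'r', 't')):
    crop['box'].setExpression('Radial1.area.%s' % ch, i)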

A couple of little tips here. The first is to use whole-pixel values (e.g. 2350) for your crop rather than sub-pixel values (e.g. 2350.4). The reason is that Nuke will resample the image if you use sub-pixel values – if you’re not careful when resampling an image you can lose quality and introduce either softening or sharpening.

The second tip is for maintaining a perfect square when cropping. To do so, click in the area.y attribute on the radial node and press the = key. In the expression editor that pops up enter…

area.t - (area.r - area.x)

Now when you adjust the top and side edges, the bottom edge will adjust itself automatically so that it maintains a square 1:1 ratio.
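The same expression can be applied through Python if you’re setting this up in a script. A quick sketch, again assuming a hypothetical node name of Radial1 (channel 1 of the area knob is y, the bottom edge):

import nuke

# Keep the crop square by deriving the bottom edge from the other three.
nuke.toNode('Radial1')['area'].setExpression('area.t - (area.r - area.x)', 1)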

Merging into an HDR image

Once I’ve set up the crop on one image, it’s just a matter of copying the same crop node onto all the other images and plugging all of those into a J_MergeHDR node.

Cropped chrome ball images plugged into a MergeHDR node.
MergeHDR node settings.

The first thing to do is click on the Get Source Metadata button to read the EXIF information off the images. The second thing to do is to set the target EV. You can either do this by setting the target ISO, Aperture and Shutter settings or by clicking on the EV Input checkbox and then manually setting a target EV value (I’ve set it to 12 in the above image).
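Since the resulting node graph is nothing more than a row of Read and Crop nodes feeding a single J_MergeHDR, it’s also straightforward to build in Python for larger batches. A rough sketch – the file names and crop values are placeholders, and J_Ops needs to be installed for the J_MergeHDR node class to exist:

import nuke

# Hypothetical bracketed exposures exported from Lightroom.
paths = ['ball_%02d.tif' % i for i in range(1, 8)]

merge = nuke.nodes.J_MergeHDR()
for i, path in enumerate(paths):
    read = nuke.nodes.Read(file=path)
    crop = nuke.nodes.Crop()
    crop['box'].setValue([2350, 900, 3600, 2150])  # x, y, r, t - placeholder ball bounds
    crop.setInput(0, read)
    merge.setInput(i, crop)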

Using the EV values we can also match exposures between images shot with different ISO, Aperture and Shutter settings.

The EV values can be used to match exposures on two images shot with different ISO, Aperture and Shutter settings.

In the example above we can use the difference between the two EV values (5.614 and 10.614) to match the exposure of one image to the other. The difference between the two is exactly 5 stops (10.614 – 5.614 = 5), so if we apply an exposure node to the brighter image and set it to -5 stops, we get a pretty good exposure match between the two images. Although the example below is perhaps a bit extreme – there are plenty of clipped values – in certain areas the exposures match up pretty well.

Where this potentially comes in useful is matching reference photography where automatic settings were used. If you don’t want to figure out the differences yourself, you can plug each image into its own MergeHDR node and then set the target EV on all the MergeHDR nodes to the same value.

Applying an exposure node to the over exposed image and setting it to -5 stops.
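If you want to sanity-check the arithmetic yourself, the EV formula is simple enough to work through in a few lines of Python. The camera settings below are made up for illustration:

import math

def ev(aperture, shutter, iso):
    # ISO-100-normalised exposure value: a higher EV means a darker exposure.
    return math.log2(aperture ** 2 / shutter) - math.log2(iso / 100.0)

bright = ev(4.0, 1 / 15.0, 100)   # ~7.9
dark   = ev(4.0, 1 / 500.0, 100)  # ~13.0
offset = bright - dark            # ~-5.1, the stops to apply to the brighter image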

From Chrome Ball to Lat-Long

The penultimate step in the puzzle is to convert the chrome ball into a lat-long image. This is easy using the SphericalTransform node in Nuke.

SphericalTransform node plugged into a MergeHDR node.

The settings to use are…

  • Input Type: Mirror Ball
  • Output Type: Lat-Long Map
  • Output Format: Any 2:1 image format (e.g. 4096×2048, 2048×1024, 1024×512, 512×256)
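If you’re scripting this step, a sketch along these lines should work – note the knob names on SphericalTransform here are assumptions on my part, so verify them against node.knobs() in your version of Nuke:

import nuke

# Register a 2:1 output format, then set up the transform.
nuke.addFormat('2048 1024 LatLong2k')
st = nuke.nodes.SphericalTransform()
st['input'].setValue('Mirror Ball')    # knob names assumed - verify with st.knobs()
st['output'].setValue('Lat Long map')
st['format'].setValue('LatLong2k')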

Exporting from Nuke

The very last step is to write the image out as an EXR, making sure the colourspace is set to linear.
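In Python this amounts to a Write node with a handful of knobs set; a minimal sketch (the file path is a placeholder):

import nuke

write = nuke.nodes.Write(file='latlong_ball.exr')
write['file_type'].setValue('exr')
write['colorspace'].setValue('linear')
write['datatype'].setValue('16 bit half')  # half float is usually plenty for IBL maps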


OpenEXR and 3Delight

The OpenEXR format has a number of features which are super handy for CG animation and VFX, such as saving the image data in either half or full floating point, setting data-windows and adding additional metadata to the file. 3Delight for Maya allows you to use all of these features, but doesn’t cover how to use them in the documentation (at least I couldn’t find any mention of it).

In order to gain access to these features you need to add an extra attribute called “exrDisplayParameters” to the render pass. In the example below, my render pass is called “beauty”.

addAttr -dt "string" -ln exrDisplayParameters beauty;
setAttr "beauty.exrDisplayParameters" -type "string"
"-p \"compression\" \"string\" \"zip\" -p \"autocrop\" \"integer\" \"1\" ";

The string attribute should end up looking like so in the attribute editor…

-p "compression" "string" "zip" -p "autocrop" "integer" "1"

The above sets the compression type to zip and also tells 3Delight to auto-crop the image when it’s rendered. Auto-crop adjusts the bounding box of the data-window (or ROI, region-of-interest) so that it only contains non-black pixels (I believe it does this based on the alpha channel). This allows Nuke to process the image quicker, as it only calculates information within that data-window. See this tutorial on Nuke bounding boxes and how to speed up your compositing operations.

The basic syntax of the parameter string is easy enough to understand: the three arguments passed to the -p flag are name, type and value.

-p "[name]" "[type]" "[value]"

You can also add additional metadata to the header of the EXR render. For example you may wish to include things such as

  • Project, scene and shot information.
  • Characters or creatures in the shot.
  • Model, texture, animation versions used.
  • Maya scene used to render the shot.
  • Focal length, Fstop, Shutter Angle, Filmback size.

3Delight already includes some metadata with the EXR, so you don’t need to add information for the following…

  • Near and far clipping planes.
  • WorldToCamera and WorldToNDC matrices. The Nuke Python documentation has info on how you can use this data to create cameras in Nuke, along the lines of the sketch below.
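As a rough illustration of the latter – the Read node name is hypothetical, the metadata key assumes the matrix lands in the header as “worldToCamera”, and depending on row/column ordering you may need a transpose before inverting:

import nuke

read = nuke.toNode('Read1')
vals = read.metadata('exr/worldToCamera')  # 16 floats if the renderer wrote the matrix

m = nuke.math.Matrix4()
for i in range(16):
    m[i] = vals[i]
m = m.inverse()  # worldToCamera inverted gives the camera's world matrix

cam = nuke.nodes.Camera2()
cam['useMatrix'].setValue(True)
for i in range(16):
    cam['matrix'].setValue(m[i], i)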

You can add this metadata using the “exrheader_” prefix and then the name of your attribute. The following will add three metadata attributes called “shutter”, “haperture” and “vaperture”.

-p "exrheader_shutter" "float" "180" -p "exrheader_haperture" "float" "36" -p "exrheader_vaperture" "float" "24"

While the following will add the project name “ussp” and the maya scene name that was used to render the shot…

-p "exrheader_project" "string" "ussp" -p "exrheader_renderscene" "string" "h:/ussp/bes_0001/scenes/lighting_v01.ma"
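Once the render is done you can confirm the attributes arrived by inspecting the header in Nuke, where EXR attributes show up under an “exr/” prefix (the Read node name here is hypothetical):

import nuke

read = nuke.toNode('Read1')
print(read.metadata('exr/project'))      # 'ussp'
print(read.metadata('exr/renderscene'))  # 'h:/ussp/bes_0001/scenes/lighting_v01.ma'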

The easiest way to get information from your scene into this parameter string is to set up a Pre-Render MEL script on your render pass, along the lines of…

string $sceneName = `file -q -sn`; // Grab the name of the current scene.
string $projectName = `getenv "PROJECT"`; // This assumes you already have an environment variable called "PROJECT" holding the project name.
string $parameters = "";
$parameters += (" -p \"exrheader_renderScene\" \"string\" \"" + $sceneName + "\" ");
$parameters += (" -p \"exrheader_projectName\" \"string\" \"" + $projectName + "\" ");
setAttr ($pass + ".exrDisplayParameters") -type "string" $parameters;

The 3Delight documentation has more information on what types of metadata you can add to the EXR.