It's far more common to render 3D images without Depth-Of-Field (DOF), as it renders quicker and offers some flexibility in compositing. That isn't always the case though – large ZBlurs within Nuke can take a very long time to render. In fact, depending on your scene and your renderer, it's often quicker to render DOF in 3D than it is to apply it as a post process in 2D.
Post DOF (2D):
Flexibility in compositing – the effect can be adjusted as needed.
Quicker to render.
Can be inaccurate – no physically based parameters (although this is largely dependent on the plugin used); the effect is driven by the artist.
Large blurs are slow to render.
Prone to artifacts; can't handle certain situations at all without lots of hackery.
Rendered DOF (3D):
No flexibility in compositing.
Slower to render.
Accurate – physically based parameters.
Requires more pixel samples in order to avoid noisy renders.
The following render was done at 2048×1556 without any DOF; the total render took 83 seconds. The Depth AOV was rendered using a Zmin filter with a filter width of 1×1 in order to avoid anti-aliasing.
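For reference, here's a rough Nuke Python sketch of the post-DOF setup that this depth AOV feeds. The file path is a placeholder and the ZBlur knob name is from memory, so double-check it against your version of Nuke…

import nuke

# Read the non-DOF beauty render with its depth AOV (path is a placeholder).
beauty = nuke.nodes.Read(file="renders/beauty_no_dof.exr")

# ZBlur drives a per-pixel defocus from the depth channel.
# The knob name below is an assumption -- verify it in the node's properties panel.
zblur = nuke.nodes.ZBlur()
zblur.setInput(0, beauty)
zblur["size"].setValue(30)  # large blur sizes are exactly what makes post DOF slow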
I also rendered the same image with DOF on.
Unfortunately my plan to show the difference between the two full resolution images was put on hold by Nuke taking far too long to render the full resolution ZBlur. I gave up after 10 minutes and decided to concentrate on a particular region instead.
The following 1:1 crop demonstrates the difference between Post DOF and Rendered DOF.
Keep in mind that the Nuke time for the Post DOF was only for the crop area you're seeing above – it was taking too long to render the full image with Post DOF. As you can see, the Post DOF breaks down quite heavily in some places. While the rendered DOF image did take longer to render, it's much more accurate and it reads into Nuke in less than a second.
The rendered DOF spent less time ray-tracing and more time sampling the image. This was due to the increase in pixel samples needed to get less noisy results, and the higher focus factor.
With pixel samples at 3×3 the DOF render took 57 seconds – faster than the 83 seconds it took to render the non-DOF version – although the result was unacceptably noisy. For less extreme blurs, pixel samples could be set as low as 6×6.
The Focus Factor parameter in 3Delight helps speed up DOF renders by reducing the shading rate in areas of high blur, with very little perceivable difference.
Despite some noise, the end result is much more visually pleasing than the ZBlur result in Nuke.
This is the recipe I use for creating environment maps for use in image based lighting. While the example I’m going to use specifically involves a chrome ball, a lot of this also applies to environment maps captured by taking panoramic photos.
Goals and Flow
The two main goals of this technique are to…
Maintain consistent and high-quality results.
Make things as easy and automated as possible.
The first goal requires that we use image formats which allow floating-point colours and image processing techniques that degrade the image as little as possible.
In terms of the balance between consistency and quality, I'd prefer to sacrifice quality in order to maintain consistency – this mainly becomes a problem when dealing with colour-spaces.
The second goal is to make things as uncomplicated and simple as possible. It'd also be nice to automate as much of this as possible so that large batches of images can be processed with minimal fuss.
If I was a bit more sorted my workflow would look something like this: the raw image gets converted into a working image, which then gets converted into whatever output format I'm aiming for.
However, I'm not entirely keen on bringing raw images directly into Nuke at the moment, primarily because I'm not entirely happy with the results, so I've added an additional step to the process. This involves converting the raw image to an intermediate image, which at this stage means exporting it as a 16-bit TIF in a gamma-encoded colour-space.
So that means we're aiming to use formats like OpenEXR, or if push comes to shove, 16-bit TIF. We're also going to try to keep any colourspace conversions or resampling of the images to a bare minimum.
Adobe Lightroom – This is my personal preference, but you're probably able to get similar (or perhaps even better) results using other raw converters.
The Foundry’s Nuke – This works well with processing large batches of images and has good colour support. It also has a handy little node for converting mirror ball images into lat-long images.
J_Ops for Nuke – Primarily for the J_MergeHDR node, but it also contains J_rawReader which allows you to read camera raw images within Nuke.
Preparing in Lightroom
The first step after importing your images is to zero out any default tonal adjustments made by Lightroom; for this I apply the General – Zeroed preset in the Develop module.
From here I export with the following settings…
Bit-Depth: 16bits per component
Image resizing: None
With regards to the colourspace, I’ve chosen sRGB because it’s the easiest colourspace to deal with. Ideally I’d like to use ProPhoto as it has a larger colour gamut, but I’m still working on the finer details of using ProPhoto within Nuke.
Hopefully the ACES colour-space will become more common in the future as it has a much larger colour gamut and is linear, but at this stage software support for it is limited.
Once you've brought in all the images you exported from Lightroom, the first thing you want to do is crop the image to the boundaries of the chrome ball. It's best to get the crop as tight as possible.
I use a Radial node in order to visualise the crop and make sure things are lining up. You can then copy the settings from the Radial node onto the Crop node.
A couple of little tips here. The first is to use whole pixel values (ie… 2350) for your crop values rather than sub-pixel values (ie… 2350.4). The reason for this is that Nuke will resample the image if you use sub-pixel values – if you're not careful when resampling an image you can lose quality and introduce either softening or sharpening.
The second tip is for maintaining a perfect square when cropping. Click in the area.y attribute on the Radial node and press the = key, then in the expression editor that pops up enter…
area.t - (area.r - area.x)
Now when you adjust the top and side edges, the bottom edge will adjust itself automatically so that it maintains a square 1:1 ratio.
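If you're setting this up from a script, here's a rough Nuke Python equivalent. The Radial and Crop knob names are standard, but treat the channel index used for area.y as an assumption worth double-checking…

import nuke

# A Radial node used purely to visualise the chrome-ball boundary.
radial = nuke.nodes.Radial()

# The "area" knob is a box (x, y, r, t); channel 1 is y.
# This expression keeps the box a 1:1 square as the other edges move.
radial["area"].setExpression("area.t - (area.r - area.x)", 1)

# Once it lines up with the ball, copy the same box onto a Crop node.
crop = nuke.nodes.Crop()
crop["box"].setValue(radial["area"].value())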
Merging into an HDR image
Once I’ve set up the crop on one image, it’s just a matter of copying the same crop node onto all the other images and plugging all of those into a J_MergeHDR node.
The first thing to do is click on the Get Source Metadata button to read the EXIF information off the images. The second thing to do is to set the target EV. You can either do this by setting the target ISO, Aperture and Shutter settings or by clicking on the EV Input checkbox and then manually setting a target EV value (I’ve set it to 12 in the above image).
Using the EV values we can also match exposures between images shot with different ISO, Aperture and Shutter settings.
In the example above we can use the difference between the two EV values (5.614 and 10.614) in order to match the exposure on one to the other. The difference between the two is approximately 5 stops (10.614 – 5.614 = 5), so if we apply an exposure node to the brighter image and set it to -5 stops, we can get a pretty good exposure match between two images. Although the example below is perhaps a bit extreme – as there are plenty of clipped values – in certain areas the exposures match up pretty well.
Where this potentially comes in useful is matching reference photography where automatic settings were used. If you don’t want to figure out the differences yourself, you can plug a MergeHDR node into each image and then set the target EV on all the MergeHDR nodes to the same value.
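If you'd rather work out the offset by hand, the arithmetic is simple. Here's a small Python sketch assuming the common convention EV = log2(N²/t) + log2(ISO/100) – the absolute EV depends on the convention used, but the difference in stops between two sets of settings doesn't, and that's all we need for matching…

import math

def exposure_value(iso, aperture, shutter):
    # EV for a given ISO, f-number and shutter time (in seconds), using
    # the convention EV = log2(N^2 / t) + log2(ISO / 100).
    return math.log2(aperture ** 2 / shutter) + math.log2(iso / 100.0)

# Hypothetical settings roughly five stops apart.
ev_brighter = exposure_value(iso=100, aperture=4.0, shutter=1 / 8.0)    # 7.0
ev_darker = exposure_value(iso=100, aperture=4.0, shutter=1 / 250.0)    # ~11.97

# Set this (it comes out negative) as the stops value on an Exposure node
# applied to the brighter image to bring it down to match the darker one.
stops_offset = ev_brighter - ev_darker
print(round(stops_offset, 2))  # approximately -5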
From Chrome Ball to Lat-Long
The penultimate step in the puzzle is to convert the chrome ball into a lat-long image. This is easy using the SphericalTransform node in Nuke.
The settings to use are…
Input Type: Mirror Ball
Output Type: Lat-Long Map
Output Format: Any 2:1 image format (ie… 4096×2048, 2048×1024, 1024×512, 512×256)
Exporting from Nuke
The very last step is to write it out as an EXR and make sure the colourspace is linear.
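Scripted, that final Write looks something like this – the knob values are what I'd expect on a stock Write node, but double-check the colourspace list against your configuration…

import nuke

# Write the lat-long map out as a linear EXR (the path is a placeholder).
write = nuke.nodes.Write()
write["file"].setValue("envmaps/studio_latlong.exr")
write["file_type"].setValue("exr")
write["colorspace"].setValue("linear")
write["datatype"].setValue("16 bit half")  # half float is plenty for an IBL map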
There are a few attributes in Maya you can change in order to render the image with overscan. The first is resolution, while the second is either camera scale, focal length, field of view, camera aperture, camera pre-scale, camera post-scale or camera shake-overscan. I use camera scale as the numbers you need to enter are more intuitive and it doesn't mess with the camera aperture, focal length or field of view.
In order to render and work with overscan correctly, it needs to be done relative to the format you're working with – this is typically your final output resolution inside Nuke, but it could also be the resolution of a matte-painting or a live-action plate. Figuring out the amount of overscan to use is simple, and we can use one of two methods: either based on a multiplier, or based on the number of extra pixels we want.
The simplest method to me is based on a multiplier. If our format size is 480×360 (as above) and we want to render the image with an extra 10%, we multiply the resolution by 1.1 and set the camera scale to 1.1. Like so…
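In Maya that translates to something like the following Python – the camera shape name is a placeholder for whatever camera you're actually rendering through…

from maya import cmds

overscan = 1.1  # render an extra 10% around the frame
width, height = 480, 360

# Scale the render resolution up by the overscan factor...
cmds.setAttr("defaultResolution.width", int(round(width * overscan)))    # 528
cmds.setAttr("defaultResolution.height", int(round(height * overscan)))  # 396

# ...and widen the camera's view by the same factor.
cmds.setAttr("renderCamShape.cameraScale", overscan)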
Then in Nuke all we need to do is apply a Reformat node set to our original render format of 480×360, with resize type=none and preserve bounding box=on – this has the effect of cropping the render to our output size while keeping the image data outside the format. Alternatively you can set the Reformat like so… type=scale; scale=0.90909091; resize type=none; preserve bounding box=on. Instead of typing in 0.90909091, you can also set the scale by just typing in 1/1.1…
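Scripted, that first Reformat option would be along these lines – I believe the preserve-bounding-box knob is called pbb internally, but verify that against your version of Nuke…

import nuke

# Register the 480x360 working format if it doesn't already exist
# ("overscan_test" is just an illustrative name).
nuke.addFormat("480 360 overscan_test")

# Crop the overscan render back to the working format while keeping
# the extra pixels alive in the bounding box.
reformat = nuke.nodes.Reformat()
reformat["type"].setValue("to format")
reformat["format"].setValue("overscan_test")
reformat["resize"].setValue("none")
reformat["pbb"].setValue(True)  # preserve bounding box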
If we instead wanted to render an extra 32 pixels on the top, bottom, left and right of our image – making the image 64 pixels wider and higher – we need to do things a little differently, as we need to change the camera aperture instead. The reason is that adding the same number of pixels to both the width and height results in a very slight change to the aspect ratio of the image, so a single uniform camera scale won't match it.
new width = original width + extra pixels
new height = original height + extra pixels
overscan width = new width / original width
overscan height = new height / original height
new aperture width = original aperture width * overscan width
new aperture height = original aperture height * overscan height
So, using our 480×360 example from above, if we wish to add an extra 64 pixels to the width and height we would calculate it like so…
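Here's that calculation as a small Python sketch. The film aperture values are just Maya's default 35mm settings (in inches) and should be swapped for whatever your camera actually uses…

# Overscan by a fixed number of pixels, following the formulas above.
original_width, original_height = 480, 360
extra_pixels = 64

new_width = original_width + extra_pixels      # 544
new_height = original_height + extra_pixels    # 424

overscan_width = new_width / float(original_width)      # ~1.1333
overscan_height = new_height / float(original_height)   # ~1.1778

# Maya's default 35mm film aperture (inches); use your camera's own values here.
aperture_width, aperture_height = 1.417, 0.945

new_aperture_width = aperture_width * overscan_width     # ~1.606
new_aperture_height = aperture_height * overscan_height  # ~1.113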
Camera projection is pretty straightforward when it comes to mapping one projection onto one object. It becomes less straightforward when you want to map multiple projections onto one object.
That’s where the MergeMat node comes in handy. It allows you to composite your Project3D nodes together before you apply them to an object. It acts exactly like the regular Merge node so you’ll need to have an alpha channel in the projection going into the Foreground Input (A) on the MergeMat.
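As a rough Nuke Python sketch – MergeMat follows the regular Merge convention, so I'm assuming input 0 is the background (B) and input 1 is the foreground (A), which is worth verifying in your own script…

import nuke

# Two projection setups (the cameras and source Reads aren't shown here).
proj_bg = nuke.nodes.Project3D()  # background projection -> B input
proj_fg = nuke.nodes.Project3D()  # foreground projection (needs an alpha) -> A input

# Composite the projections together before applying them to the geometry.
merge_mat = nuke.nodes.MergeMat()
merge_mat.setInput(0, proj_bg)  # B
merge_mat.setInput(1, proj_fg)  # A

# The merged material then feeds the geometry's img input, for example:
card = nuke.nodes.Card()
card.setInput(0, merge_mat)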
If you do want to do multiple projections on multiple objects which share a similar 3D space, you may find that Nuke has trouble figuring out which object is supposed to be in front and which is at the back. This happens due to a lack of precision in the Z-Buffer. Nuke creates the depth pass in the ScanlineRenderer by remapping the near and far clipping plane values on the rendering camera to zero and one (respectively). If the near and far clipping planes are too far apart then this can result in a "chattering" effect where the two objects intersect.
This lack of precision is very similar to the colour-banding you see in 8-bit images. Even though Nuke stores the depth as a floating point value, which has lots of precision to begin with, you can still get banding if the near and far clipping planes are too far apart.
The way to fix this inside Nuke is to adjust the clipping planes so that they bound the 3D scene as tightly as possible – taking into account any animation on the cameras or geometry. You can animate the clipping planes, but it's best to leave them static – animating them results in an animated depth pass, which can cause trouble if you're using the depth pass for depth effects such as defocusing or atmospherics.
Another option which can also help is to adjust the Z-Blend Mode and Z-Blend Range within the ScanlineRenderer. This works at a pixel level by comparing the depth values of each object in the scene; if the depth values of any objects are within the Z-Blend Range of each other, it renders those objects blended together.
The OpenEXR format has a number of useful features which are super handy for CG animation and VFX, such as saving the image data in either half or full floating point, setting data-windows and adding additional metadata to the file. 3Delight for Maya allows you to use all these features, but doesn't cover how to use them in the documentation (at least I couldn't find mention of it).
In order to gain access to these features you need to add an extra attribute to the render pass called "exrDisplayParameters". In the example below, my render pass is called "beauty".
The above sets the compression type to zip and also tells 3Delight to autocrop the image when it's rendered. Auto-crop adjusts the bounding box of the data-window (or ROI, region-of-interest) to only contain non-black pixels (I believe it does this based on the alpha channel); this allows Nuke to process the image quicker as it only calculates information within that data-window. See this tutorial on Nuke Bounding Boxes and how to speed up your compositing operations.
The basic syntax of the parameter string is easy enough to understand: the three arguments passed to the -p flag are name, type and value.
-p "[name]" "[type]" "[value]"
You can also add additional metadata to the header of the EXR render – for example camera settings or other scene information you'd like to carry through to compositing.
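As a rough idea of how the attribute could be wired up from Python – the addAttr/setAttr calls are standard Maya, but the parameter names inside the string ("compression" and "autocrop") are illustrative rather than lifted from the 3Delight documentation, so check them against your install. Additional metadata entries would be appended in the same name/type/value form…

from maya import cmds

# The render pass node is called "beauty" in this example.
render_pass = "beauty"

# Add the string attribute 3Delight looks for, if it isn't already there.
if not cmds.attributeQuery("exrDisplayParameters", node=render_pass, exists=True):
    cmds.addAttr(render_pass, longName="exrDisplayParameters", dataType="string")

# Each -p takes a name, a type and a value; the parameter names below are
# illustrative, so verify the exact names 3Delight expects.
params = '-p "compression" "string" "zip" -p "autocrop" "integer" "1"'
cmds.setAttr(render_pass + ".exrDisplayParameters", params, type="string")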