Renderman + Masks

New video up on generating masks (mattes) in Renderman for Maya.

Here’s a quick script to connect a single PxrMatteID node up to many material nodes – PxrSurface, PxrLayerSurface, and PxrMarschnerHair.

import pymel.core as pym

# Grab the current selection (flattened, no DAG paths, no intermediate objects).
mynodes = pym.ls(sl=True, fl=True, dag=False, ni=True)

mybxdfs = []
myhairs = []
mymatteid = []

# Sort the selection into surface Bxdfs, hair Bxdfs, and PxrMatteID nodes.
for x in mynodes:
    if isinstance(x, (pym.nodetypes.PxrSurface, pym.nodetypes.PxrLayerSurface)):
        mybxdfs.append(x)
    elif isinstance(x, pym.nodetypes.PxrMarschnerHair):
        myhairs.append(x)
    elif isinstance(x, pym.nodetypes.PxrMatteID):
        mymatteid.append(x)

if len(mymatteid) == 1:
    # Surface materials take the matte through their utilityPattern array;
    # hair materials take it through inputAOV.
    for b in mybxdfs:
        mymatteid[0].resultAOV.connect(b.utilityPattern[0], force=True)
    for h in myhairs:
        mymatteid[0].resultAOV.connect(h.inputAOV, force=True)
else:
    print('select exactly one PxrMatteID node')

Enjoy

Heat Map

Useful if you ever want to better visualize data in your renders.

Basic idea is to put the data in the red channel, normalize it between zero and one, make sure the green and blue channels are both set to one, and then convert the colourspace from HSV to Linear.

Blackpoint and Whitepoint are set to the minimum and maximum samples from the render. Gain is set to 0.666667 so that the range is mapped from Red to Blue, rather than Red through to Red.
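
For reference, here’s a minimal Python sketch of the same remap. colorsys stands in for the HSV-to-linear Colorspace conversion (it ignores the linear/sRGB handling Nuke would do), and the heat_map function and its arguments are just for illustration.

import colorsys

def heat_map(sample, blackpoint, whitepoint):
    # Normalise the data between the blackpoint and whitepoint
    # (the minimum and maximum samples from the render).
    t = (sample - blackpoint) / (whitepoint - blackpoint)
    t = min(max(t, 0.0), 1.0)
    # The 0.666667 gain maps the range Red -> Blue rather than
    # letting the hue wrap all the way around from Red back to Red.
    hue = t * 0.666667
    # The data sits in the "red" (hue) channel; "green" (saturation)
    # and "blue" (value) are both one, then HSV is converted to RGB.
    return colorsys.hsv_to_rgb(hue, 1.0, 1.0)

print(heat_map(0.25, 0.0, 1.0))  # roughly yellow, a quarter of the way along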

Text Values in Nuke for Wedge Renders

This is more a reminder for myself, as I can never remember the syntax for outputting values to Text nodes in Nuke.

Strength [format %0.2f [value strength]]

I often use this when doing wedge renders during lookdev. A wedge render is a sequence of frames of the same setup where only a few specific values change between frames, so you can see what effect each change has.

[Image: nukewedge]

A few things to note…

  • In this case the “strength” attribute to look up comes from a custom user knob on the same node.
  • The format %0.2f statement is used to format the value. See the printf reference for more details.

[Image: nukewedge1]
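
If it helps, here’s roughly how that setup looks in Nuke’s Python API. This is just a sketch – the knob is created on the Text node itself to match the setup above, and the frame * 0.1 expression is an arbitrary example of wedging a value over the frame range.

import nuke

# A Text node with a custom user knob called "strength" on the same node.
text = nuke.nodes.Text()
text.addKnob(nuke.Double_Knob('strength', 'strength'))

# Wedge the value across the frame range, e.g. 0.0, 0.1, 0.2...
text['strength'].setExpression('frame * 0.1')

# Burn the current value into the render using the TCL syntax above,
# formatted to two decimal places.
text['message'].setValue('Strength [format %0.2f [value strength]]')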

Post DOF vs Rendered DOF

It’s far more common to render 3D images without Depth-Of-Field (DOF), as it renders quicker and offers some flexibility in compositing, but that isn’t always the best choice: large ZBlurs within Nuke can take a heck of a long time to render. In fact, depending on your scene and your renderer, it’s often quicker to render DOF in 3d than it is to apply it as a post process in 2d.
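
For context, the post approach is essentially a ZBlur driven by the rendered depth channel. A rough Nuke Python sketch – the file path is hypothetical and the knob names are from memory, so check them against the ZBlur node in your version:

import nuke

# Post DOF: blur the beauty render using its depth channel.
read = nuke.nodes.Read(file='render.exr')  # hypothetical path
zblur = nuke.nodes.ZBlur(inputs=[read])
zblur['channels'].setValue('rgba')
zblur['depth_channel'].setValue('depth.Z')  # the rendered Depth AOV
zblur['focus_plane'].setValue(5.0)          # distance that stays in focus
zblur['size'].setValue(20.0)                # large sizes are what get slow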

Post DOF

  • Flexibility in compositing. Can adjust effect as needed.
  • Quicker to render.
  • Can be inaccurate – no physically based parameters (although this is largely dependent on the plugin used); the effect is driven by the artist.
  • Large blurs are slow to render.
  • Prone to artifacts. Can’t handle certain situations at all without lots of hackery.

Rendered DOF

  • No Flexibility in compositing.
  • Slower to render.
  • Accurate. Physically based parameters.
  • Requires more pixel samples in order to avoid noisy renders.

The following render was done at 2048×1556, without any DOF; the total render took 83 seconds. The Depth AOV was rendered using a Zmin filter with a filterwidth of 1×1 in order to avoid anti-aliased depth values.


I also rendered the same image with DOF on.


Unfortunately my plan to show the difference between the two full resolution images was put on hold by Nuke taking far too long to render the full resolution ZBlur. I gave up after 10 minutes and decided to concentrate on a particular region instead.

Crop region used.

The following 1:1 crop demonstrates the difference between Post DOF and Rendered DOF.

Keep in mind that the Nuke time for the Post DOF was only for the crop area you’re seeing above – it was taking too long to render the full image with Post DOF. As you can see, the Post DOF breaks down quite heavily in some places. The rendered DOF image did take longer to render, but it’s much more accurate, and the read time of the image in Nuke is less than a second.

Observations…

  • The rendered DOF spent less time ray-tracing and more time sampling the image, due to the increase in pixel samples needed for less noisy results and the higher Focus Factor.
  • With pixel samples at 3×3 the DOF render took 57 seconds – faster than the 83 seconds it took to render the non-DOF version – although the final result was unacceptable. For less extreme blurs, pixel samples could be set as low as 6×6.
  • The Focus Factor parameter in 3Delight helps speed up DOF renders by reducing the shading rate in areas of high blur, with very little perceivable difference.
  • Despite some noise, the end result is much more visually pleasing than the ZBlur result in Nuke.