The following information relates to the accompanying video.
Renderman has access to the following data on a Paint Effects stroke. The most useful one is Cs (colour).
data – how the data attaches to the geometry: varying and vertex both attach to vertices, uniform attaches to each face, and constant attaches to the whole mesh
type – what type of data it is – color, float, point, vector, normal
name – name of the primvar
description – a brief description of what the data contains
[data] [type] [name] (description)
vertex normal N (surface normal)
varying float width (width of curve)
vertex float t (length along curve)
varying color Cs (color on curve)
vertex point P (position along curve)
New video up on generating masks (mattes) in Renderman for Maya.
Here’s a quick script to connect up a single PxrMatteID node to many PxrMaterial nodes.
import pymel.core as pym

# Grab the current selection (flattened, no DAG nodes, no intermediates).
mynodes = pym.ls(sl=True, fl=True, dag=False, ni=True)
mybxdfs = []
myhairs = []
mymatteid = []

# Sort the selection into bxdf, hair and matte-ID nodes.
for x in mynodes:
    if isinstance(x, (pym.nodetypes.PxrSurface, pym.nodetypes.PxrLayerSurface)):
        mybxdfs.append(x)
    if isinstance(x, pym.nodetypes.PxrMarschnerHair):
        myhairs.append(x)
    if isinstance(x, pym.nodetypes.PxrMatteID):
        mymatteid.append(x)

if len(mymatteid) == 1:
    # Wire the single PxrMatteID into each material's utilityPattern input.
    for b in mybxdfs:
        mymatteid[0].resultAOV >> b.utilityPattern[0]
    for h in myhairs:
        mymatteid[0].resultAOV >> h.utilityPattern[0]
else:
    print('select exactly one PxrMatteID node, just try one')
This got started from a question on the Renderman forums about the difference between ST and UV. Renderman uses UV and ST to differentiate between “implicit” and “explicit” texture coordinates. Implicit coordinates are automatically defined by the geometric shape; explicit coordinates are manually defined by the user.
With polygons and subdivision geometry, Renderman uses ST coordinates for texture mapping (it converts Maya UVs to Renderman STs behind the scenes). But it also automatically assigns UV coordinates to each face of the geometry. You might ask why you would need this?
For a while I’ve wanted to implement colour temperature control into my lighting workflow but I’ve never been able to figure out how it’s calculated. Then I came across this site, which has already mapped out blackbody temperatures to normalised sRGB values.
Using this as a starting point I mapped out the values into an SL function…
color blackbodyfast( float temperature; )
{
    /* 16 normalised sRGB knot values, sampled every 6th step
       from 1000K to 10000K (values elided) */
    uniform color c[16] = { ... };
    float amount = smoothstep( 1000, 10000, temperature );
    color blackbody = spline( "catmull-rom", amount, c );
    return blackbody;
}
Rather than map every temperature value from 1000K to 40000K, I decided just to deal with 1000K to 10000K, using the CIE 1964 10-degree Colour Matching Functions – chosen only because of the later date of 1964, as I couldn’t see (nor greatly understand) the difference between the colour matching functions. The original function I wrote, called blackbody, used every value of the kelvin scale from 1000K to 10000K, which resulted in an array of 90 values. The modified one above uses every 6th value, which brings the array size down to 16. In my tests I didn’t notice a speed difference using 90 values, but looking at a comparison of the two functions I couldn’t see enough visual difference to bother using the full 90 steps.
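For reference, RSL’s spline() with the “catmull-rom” basis uses the first and last knots only to shape the end tangents, so a 16-knot table gives 13 curve segments. A minimal Python sketch of that evaluation (the helper name is mine; a colour spline is just this applied per channel):

```python
def spline_catmullrom(t, knots):
    """Evaluate a 1D Catmull-Rom spline the way RSL's spline() does:
    the first and last knots only shape the end tangents, and t in
    [0, 1] maps across knots[1] .. knots[-2]."""
    nseg = len(knots) - 3             # number of curve segments
    t = min(max(t, 0.0), 1.0)         # clamp, as spline() does
    x = t * nseg
    seg = min(int(x), nseg - 1)       # which segment we are in
    u = x - seg                       # parameter within that segment
    p0, p1, p2, p3 = knots[seg:seg + 4]
    return 0.5 * ((2.0 * p1)
                  + (-p0 + p2) * u
                  + (2.0 * p0 - 5.0 * p1 + 4.0 * p2 - p3) * u * u
                  + (-p0 + 3.0 * p1 - 3.0 * p2 + p3) * u * u * u)
```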
There is a slight peak where the warm and cool colours meet in the 90 step version. It’s a bit more obvious looking at the image in linear light.
Because the values are in sRGB, they need to be converted to linear before being used in the shader. The SL used in the main body of my test surface looks something like this…
uniform float temperature = 5600;
#pragma annotation temperature "gadgettype=intslider;min=1000;max=10000;step=100;label=Temperature;"
color blackbody = blackbodyfast (temperature);
blackbody = sRGB_decode(blackbody);
Oi = Os;
Ci = blackbody * Oi;
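The sRGB_decode() call above is a custom helper; assuming it implements the standard piecewise sRGB-to-linear transfer function (IEC 61966-2-1), the per-component maths looks like this in Python:

```python
def srgb_decode(c):
    """sRGB-to-linear transfer function (IEC 61966-2-1),
    per component, for normalised values in [0, 1]."""
    if c <= 0.04045:
        return c / 12.92
    return ((c + 0.055) / 1.055) ** 2.4

# Applied per channel to decode a colour:
# linear = tuple(srgb_decode(ch) for ch in srgb_colour)
```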
Used in a light shader the output looks something like this…
The only problem now is that 3Delight doesn’t show a preview of the light shader – or, more importantly, the colour temperature – in the Attribute Editor settings for my light.
To get around this I decided to implement an expression which changes the colour of the Maya light that my 3Delight shader is attached to. Because MEL doesn’t have a spline function like SL does, I had to improvise using animation curves. First up, the MEL to create the three curves I need to build the RGB colour temperature.
There are a few attributes in Maya you can change in order to render the image with overscan. The first is resolution, while the second is either camera scale, focal length, field of view, camera aperture, camera pre-scale, camera post-scale or camera shake-overscan. I use camera scale, as the numbers you need to enter are more intuitive and it doesn’t mess with the camera aperture, focal length or field of view.
In order to render and work with overscan correctly, it needs to be done relative to the format you’re working with – this is typically your final output resolution inside Nuke, but it could also be the resolution of a matte-painting or a live-action plate. Figuring out the amount of overscan to use is simple, and we can use one of two methods: either based on a multiplier or based on the number of extra pixels we want.
The simplest method to me is based on a multiplier. If our format size is 480×360 (as above) and we wanted to render the image with an extra 10%, we multiply the resolution by 1.1 and set the camera scale to 1.1. Like so…
Then in Nuke all we need to do is apply a Reformat node and set it to our original render format of 480×360, with resize type=none and preserve bounding box=on – this has the effect of cropping the render to our output size while keeping the image data outside of the format. Alternatively you can set the Reformat like so… type=scale; scale=0.90909091; resize type=none; preserve bounding box=on. Instead of typing in 0.90909091, you can also set the scale by just typing in 1/1.1 …
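The arithmetic of the multiplier method can be sketched in Python (variable names are mine; the values match the 480×360 example above):

```python
# Multiplier method: 10% overscan on a 480 x 360 format.
orig_w, orig_h = 480, 360
scale = 1.1                        # also the Maya camera scale value

render_w = round(orig_w * scale)   # resolution to render at: 528
render_h = round(orig_h * scale)   # 396
nuke_scale = 1.0 / scale           # Reformat scale in Nuke, ~0.90909091
```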
If we instead wanted to render an extra 32 pixels to the top, bottom, left and right of our image – making the image 64 pixels wider and higher – we need to do things a little bit differently as we need to change the camera aperture. The reason for doing this is that adding the same number of pixels to both the width and height results in a very slight change to the aspect ratio of the image.
new width = original width + extra pixels
new height = original height + extra pixels
overscan width = new width / original width
overscan height = new height / original height
new aperture width = original aperture width * overscan width
new aperture height = original aperture height * overscan height
So, using our 480×360 example from above, if we wish to add an extra 64 pixels to the width and height we would calculate it like so…
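Following the formulas above, the pixel-based method can be sketched in Python (variable names are mine; the 1.417 × 0.945 in film back is Maya’s default 35mm aperture, assumed here purely for illustration):

```python
# Pixel method: an extra 32 px on every side of a 480 x 360 format.
orig_w, orig_h = 480, 360
extra = 64                            # 32 px left+right, 32 px top+bottom

new_w = orig_w + extra                # 544
new_h = orig_h + extra                # 424
overscan_w = new_w / float(orig_w)    # per-axis overscan factors
overscan_h = new_h / float(orig_h)    # note: not equal -> aspect changes

# Maya's default 35mm film back (inches), assumed for illustration.
ap_w, ap_h = 1.417, 0.945
new_ap_w = ap_w * overscan_w          # new camera aperture width
new_ap_h = ap_h * overscan_h          # new camera aperture height
```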