Screenspace Texture Mapping in Maya/Mental Ray

Screenspace mapping, or to be more geeky, Normalized Device Coordinate (NDC) mapping, allows you to map a texture according to screenspace coordinates rather than traditional UV coordinates.

The example below shows traditional UV mapping on the left and screenspace mapping on the right, applied to a flat plane inside Maya (the middle image shows what the camera is seeing).
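To get a feel for what NDC coordinates are, here's a rough Python sketch of the projection math (illustration only, not shader code; the focal length, aperture values and the square-film assumption are mine, not taken from any Maya node):

```python
# A camera-space point is perspective-projected onto the film plane, then
# remapped from the [-1, 1] screen window into [0, 1] NDC so it can be
# used directly as a texture coordinate.
def camera_to_ndc(x, y, z, focal=35.0, aperture=36.0):
    # fov scale from focal length and film aperture: tan(fov/2) = (aperture/2)/focal
    scale = (2.0 * focal) / aperture
    sx = scale * x / -z  # the camera looks down -Z, so z is negative in front of it
    sy = scale * y / -z
    # remap screen space [-1, 1] -> NDC [0, 1]
    return ((sx + 1.0) * 0.5, (sy + 1.0) * 0.5)
```

A point on the optical axis maps to (0.5, 0.5), the centre of the image, regardless of its distance from the camera.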

This technique was used in ye olden’ days (it started getting phased out around 2006-2008) inside RenderMan shaders to composite occlusion renders with beauty renders. The occlusion would be rendered out in a prepass and then composited during the beauty render.

You could also use this technique to do 2d compositing or even just general purpose image processing inside Maya.

The method differs slightly between Maya Software and Mental Ray. To do this in Mental Ray you need to use a mib_texture_vector and a mib_texture_filter_lookup; the shading network looks like this…

The settings in the mib_texture_vector need to look like this…

With Maya Software the shading network looks like this…

The projection settings should look like this…

Note that the camera should be the one you’re rendering from if you want the mapping in screenspace; otherwise this will act like a camera projection (it is a projection node). One final caveat with Maya Software is that you’ll need to delete the UVs on the geometry for this to work. If you want to switch between UVs and no UVs, apply a Delete UVs node and set its node behaviour to HasNoEffect when you want UVs and to Normal when you don’t.

OpenEXR and 3Delight

The OpenEXR format has a number of features which are super handy for CG animation and VFX, such as saving the image data in either half or full floating point, setting data-windows and adding additional metadata to the file. 3Delight for Maya allows you to use all these features, but doesn’t cover how to use them in the documentation (at least I couldn’t find mention of it).

In order to gain access to these features you need to add an extra attribute called “exrDisplayParameters” to the render pass. In the example below, my render pass is called “beauty”.

addAttr -dt "string" -ln exrDisplayParameters beauty;
setAttr "beauty.exrDisplayParameters" -type "string"
"-p \"compression\" \"string\" \"zip\" -p \"autocrop\" \"integer\" \"1\" ";

The string attribute should end up looking like so in the attribute editor…

-p "compression" "string" "zip" -p "autocrop" "integer" "1"

The above sets the compression type to zip and also tells 3Delight to auto-crop the image when it’s rendered. Auto-crop shrinks the bounding box of the data-window (or ROI, region-of-interest) so that it only contains non-black pixels (I believe it does this based on the alpha channel). This allows Nuke to process the image quicker, as it only calculates information within that data-window. See this tutorial on Nuke Bounding Boxes and how to speed up your compositing operations.
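If you’re curious what auto-crop is doing, it’s roughly this (a Python sketch of the idea only; the exact 3Delight implementation may differ, and the alpha-based detection is my assumption, as above):

```python
# Shrink the data window to the bounding box of non-zero alpha pixels.
def autocrop_window(alpha):
    """alpha: list of rows of alpha values.
    Returns (xmin, ymin, xmax, ymax), inclusive, or None if fully black."""
    coords = [(x, y) for y, row in enumerate(alpha)
                     for x, a in enumerate(row) if a > 0.0]
    if not coords:
        return None
    xs = [x for x, _ in coords]
    ys = [y for _, y in coords]
    return (min(xs), min(ys), max(xs), max(ys))
```

Everything outside that box is left out of the EXR’s data-window, which is why downstream compositing operations get cheaper.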

The basic syntax of the parameter string is easy enough to understand: the three arguments passed to the -p flag are name, type and value.

-p "[name]" "[type]" "[value]"
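If you’re generating these strings from a pipeline script, a tiny helper saves some quote-escaping. This one is hypothetical (my own, not part of 3Delight):

```python
# Build one "-p name type value" chunk of the exrDisplayParameters string.
def exr_param(name, ptype, value):
    return '-p "{0}" "{1}" "{2}"'.format(name, ptype, value)

# Assemble the full parameter string from individual settings.
params = " ".join([
    exr_param("compression", "string", "zip"),
    exr_param("autocrop", "integer", "1"),
])
```

The resulting string matches the attribute-editor example above and can be pushed onto the pass attribute from Python or MEL.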

You can also add additional metadata to the header of the EXR render. For example you may wish to include things such as

  • Project, scene and shot information.
  • Characters or creatures in the shot.
  • Model, texture, animation versions used.
  • Maya scene used to render the shot.
  • Focal length, Fstop, Shutter Angle, Filmback size.

3Delight already includes some metadata with the EXR, so you don’t need to add information for the following…

  • Near and far clipping planes.
  • WorldToCamera and WorldToNDC matrices. The Nuke Python documentation has info on how you can create cameras in Nuke based on this data.

You can add this metadata using the “exrheader_” prefix and then the name of your attribute. The following will add three metadata attributes called “shutter”, “haperture” and “vaperture”.

-p "exrheader_shutter" "float" "180" -p "exrheader_haperture" "float" "36" -p "exrheader_vaperture" "float" "24"

While the following will add the project name “ussp” and the Maya scene name that was used to render the shot…

-p "exrheader_project" "string" "ussp" -p "exrheader_renderscene" "string" "h:/ussp/bes_0001/scenes/"

The easiest way to get information from your scene into this parameter string is to set up a Pre-Render MEL script on your render-pass along the lines of…

string $sceneName = `file -q -sn`; // Grab the name of the current scene.
string $projectName = `getenv "PROJECT"`; // Assumes an environment variable called "PROJECT" with the project name is already set up.
string $parameters = "";
$parameters += (" -p \"exrheader_renderScene\" \"string\" \"" + $sceneName + "\" ");
$parameters += (" -p \"exrheader_projectName\" \"string\" \"" + $projectName + "\" ");
string $pass = "beauty"; // The name of your render pass node.
setAttr ($pass + ".exrDisplayParameters") -type "string" $parameters;

The 3Delight documentation has more information on what types of metadata you can add to the EXR.

Custom Shader UI in 3Delight

When you create a custom SL shader in 3Delight, it’ll automatically create a shader which looks like this in Maya. The following UI doesn’t look very useful – the names we’ve given our variables vary in how descriptive they are – which isn’t much help if others are going to be using this shader.

This is based off a shader with the following SL code.

surface ui_example_srf (
	string texmap = "";
	float blur = 0;
	float usebake = 1;
	float numsamples = 16;
	float doRefl = 0;
	color diffuseColour = color (0.5);)
{
	// Shader body omitted – the UI only depends on the parameters.
	Ci = diffuseColour;
	Oi = Os;
}

3Delight does however provide a method of creating nice looking shader UIs. You can use #pragma annotations in your shader source code to make things nicer.

#pragma annotation texmap "gadgettype=inputfile;label=Texture Map;hint=Texture Map"
#pragma annotation blur "gadgettype=floatslider;label=Blur;min=0;max=1;hint=Blur the Texture Map"
#pragma annotation usebake "gadgettype=checkbox;label=Use Bake;hint=Use Bake"
#pragma annotation numsamples "gadgettype=intslider;min=1;max=256;label=Samples;hint=Number of samples to use."
#pragma annotation doRefl "gadgettype=optionmenu:gather-env:occlusion-env:ptc-env;label=Reflection Method;hint=Reflection Method"
#pragma annotation diffuseColour "gadgettype=colorslider;label=Diffuse Colour;hint=Diffuse Colour."

This will create a shader that looks like this. The hint will be displayed either in the Maya status line or as a tool-tip if you hover the cursor over the UI element.

You can place the #pragma lines anywhere in your SL file. To see them you will need to re-compile the shader and then reload it inside Maya by right-clicking on the shader, selecting “reload shader” and then selecting the shader in either the Assignment Panel or the Outliner.

Procedural Weathering

Based on seeing the VRay dirt shader, I decided to try to replicate it in RenderMan.

The above image was created using the following function with a couple of additional layers of noise to create the streak effect.

void dirtOcclusion (
	normal NN; vector down;
	float invmaxdistance, downmaxdistance, invsamples, downsamples;
	output float invocc, downocc;)
{
	extern point P;
	downocc = 1;
	invocc = 1;
	downocc -= occlusion(P, down, downsamples, "coneangle", PI/6, "bias", 0.1, "maxvariation", 0, "hitsides", "front", "maxdist", downmaxdistance, "falloff", 0.001);
	invocc -= occlusion(P, -NN, invsamples, "coneangle", PI/2, "bias", 0.1, "maxvariation", 0, "maxdist", invmaxdistance, "hitsides", "back", "falloff", 0);
}

sRGB to Linear

From “What the RiSpec never told you” we get a little nugget of useful information: how to convert an sRGB colour to linear light. You can use this to convert textures, but best practice is to pre-convert your textures to linear before creating any mipmapped textures in order to preserve energy in the texture – see the Texture Painting section in Sony Pictures Imageworks’ colour pipeline for details. A much more common use is to set the colour of lights.

// decode from sRGB luma to linear light
float sRGB_decode_f(float F)
{
	float lin;
	if (F <= 0.03928)
		lin = F/12.92;
	else
		lin = pow((F+0.055)/1.055, 2.4);
	return lin;
}

color sRGB_decode(color C)
{
	color D;
	setcomp(D, 0, sRGB_decode_f(comp(C,0)));
	setcomp(D, 1, sRGB_decode_f(comp(C,1)));
	setcomp(D, 2, sRGB_decode_f(comp(C,2)));
	return D;
}

The first function, sRGB_decode_f, does the majority of the work; the second, sRGB_decode, uses it to operate on an input colour. To use this in SL we would write something along these lines: the first line creates a colour variable with a mid-grey value in sRGB space, and the second converts that colour into a linear colour.

color myColour = color (0.5);
myColour = sRGB_decode(myColour);
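If you want to sanity-check the constants outside of RSL, the same piecewise curve in Python (this mirrors the shader code above; it is not 3Delight code):

```python
# Decode a single sRGB component to linear light, using the same
# piecewise constants as the RSL sRGB_decode_f above.
def srgb_decode(f):
    if f <= 0.03928:
        return f / 12.92
    return ((f + 0.055) / 1.055) ** 2.4
```

Mid-grey 0.5 in sRGB comes out at roughly 0.214 in linear light, which is why naively painting textures in sRGB and treating them as linear darkens everything.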


Custom Passes in 3Delight

Quite often in CG when working with multiple passes, you want to do different things depending on what type of pass you’re rendering. The two most common ways to do this are to either create two different shaders and change the assignment for each pass, or to write a shader which is able to figure out what type of pass it is in and act appropriately.

For example, in a bake pass you typically want to bake out attributes which aren’t dependent on the position or direction of the camera – this includes things like diffuse/ambient shading, subsurface scattering and ambient occlusion data – which can all be baked out and reused on objects. Only when the position or direction of the objects or lights changes does the scene need to be rebaked.

In order to control this in 3Delight you can use the RiMel commands to set up custom RIB commands which can be read by your shaders. The following MEL commands are placed inside a PreWorldMEL attribute on the render pass itself.

RiOption -n "user" -p "pass" "string" "bake";

RiAttribute -n "cull" -p "hidden" "integer" "0";
RiAttribute -n "cull" -p "backfacing" "integer" "0";
RiAttribute -n "dice" -p "rasterorient" "integer" "0";

This will output the following commands into the RIB file.

Option "user" "string pass" [ "bake" ]
Attribute "cull" "integer hidden" [ 0 ]
Attribute "cull" "integer backfacing" [ 0 ]
Attribute "dice" "integer rasterorient" [ 0 ]

Then within the shader I can query what pass I’m currently rendering and do something appropriate for that type of pass.

uniform string passtype = "";
option( "user:pass", passtype );
if (passtype == "bake") {
    // Do something here
}

By default 3Delight also exports the name of the render pass to an attribute called delight_renderpass_name – this is the name of your render pass inside Maya. You can query that name using the following RSL. Obviously this method depends heavily on what you call your passes – the following shading code wouldn’t work if the render pass was called anything other than “bake” – for example, if you wanted to do multiple bake passes within the same scene, like separating out animated bakes (which require multiple frames) from static bakes (things which don’t move and can be stored in one frame).

string passtype = "";
option ("user:delight_renderpass_name", passtype);
if (passtype == "bake") {
	// Do something here
}

HSV Adjustments

The following function can be used to do Hue, Saturation and Value adjustments on a colour input.

color hsvAdjust (color input; float hue, saturation, value;)
{
    color toHSV = ctransform("RGB", "HSV", input);
    toHSV *= color (hue, saturation, value);
    return ctransform("HSV", "RGB", toHSV);
}

For example, to de-saturate a colour input you would multiply the saturation by zero while leaving the hue and value untouched…

color myColour = color (0.5, 0.2, 0.4);
myColour = hsvAdjust (myColour, 1, 0, 1);
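For comparison, the same multiply-in-HSV approach can be reproduced in Python with the standard colorsys module (this mirrors the RSL function above, not any Maya node):

```python
import colorsys

# Multiply each HSV channel of an RGB colour, as in the RSL hsvAdjust.
# colorsys expects RGB components in the [0, 1] range.
def hsv_adjust(rgb, hue, saturation, value):
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    return colorsys.hsv_to_rgb(h * hue, s * saturation, v * value)
```

Zeroing the saturation collapses the colour to a grey whose brightness is the original HSV value.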

Gamma Correction

The following function shows how to gamma correct a float using RSL. g is the value you wish to correct and f is the gamma value.

float gammacorrect(float g; float f;)
{
	return pow(g, f);
}

To use this inside a shader you would use something like this…

float myValue = 0.5;
float gamma = 2.2;
float myFloat = gammacorrect(myValue, gamma);

The result of which would be 0.5^2.2 = 0.217637641. To correct a colour input you would need a function like the following, where c is our colour input and f is the gamma value. The major difference here is the use of setcomp to operate on each colour channel individually.

color gammacorrect(color c; float f;)
{
	color d;
	setcomp(d, 0, pow(comp(c,0), f));
	setcomp(d, 1, pow(comp(c,1), f));
	setcomp(d, 2, pow(comp(c,2), f));
	return d;
}

Note that RenderMan is smart enough to recognise that although we have two functions with the same name, “gammacorrect”, they return two different types of data – one a floating point number, the other a colour. As long as the variable you are passing the result into – in this case “myColour” – is of the correct type, it’ll know which to use.

color myValue = color (0.5);
float gamma = 2.2;
color myColour = gammacorrect(myValue, gamma);