Write a GLSL Material

From Derivative


This document explains the finer points of writing a GLSL Material in TouchDesigner. It is assumed the reader already has an understanding of the GLSL language. The official GLSL documentation can be found at this address.

TouchDesigner099's main supported version of GLSL is 3.30.

Search also for Shader Toy in the forum.

Major changes since TouchDesigner088

Many changes were made to TouchDesigner's GLSL API in 099, most of them to better facilitate Multi-Camera Rendering. A summary of the most important changes:

  • Lighting and other work is now done in World space instead of Camera space. This makes code cleaner, since otherwise shaders would need to do their work in multiple different camera spaces when rendering with multiple cameras. Legacy GLSL shaders are supported via the GLSL TOP's 'Lighting Space' parameter, which will be set to Camera Space for older shaders.
  • TDInstanceID() should be used instead of gl_InstanceID/uTDInstanceIDOffset.
  • uTDMat has been removed when lighting in World Space, use the array uTDMats[] instead.
  • Some values from the uTDGeneral structure have been moved to uTDCamInfos[], since that info is camera specific.
  • A notion of camera index (obtained in the vertex shader using TDCameraIndex()), is needed for some functions such as TDFog().
  • TDAlphaTest(float) must be called to apply the alpha test. It can be safely called when the alpha test is disabled on the MAT, it'll do nothing in that case.
  • Before writing any color to an output color buffer, it should be passed through vec4 TDOutputSwizzle(vec4). This ensures the channels are in the correct place depending on how the channels are stored in the output texture. For example, Alpha-only textures may be stored in a 'Red-only' texture internally, so the alpha value will need to be swizzled over to the red channel before output.

Changes from GLSL 1.20

Shaders written for 1.20 will not compile as 3.30 shaders. The language received a large overhaul, changing the name of many key functions and replacing a lot of functionality. All of the changes can be seen in the official GLSL documentation linked to earlier. We have taken this opportunity to improve TouchDesigner specific functionality also. Some of the more important changes are:

  • Removed texture1D(sampler1D, float), texture2D(sampler2D, vec2), etc. All texture sampling is done with identical function names, regardless of the dimensionality of the texture. e.g. texture(sampler1D, float), or texture(sampler2D, vec2).
  • Removed the keyword varying. Instead use in and out (depending on whether the value is being output from the shader or input from a previous shader stage). Examples appear later in the article.
  • Removed the keyword attribute. Instead just use in in your vertex shader.
  • Removed built-in varyings gl_TexCoord[]. You'll need to always declare your own variables that get output/input between shader stages.
  • Removed gl_FragColor and gl_FragData[]. Instead you name your own color outputs using the syntax layout(location = 0) out vec4 nameYouWant. To output to multiple color buffers, declare outputs at the other locations: layout(location = 1) out vec4 secondFragColor.
  • Removed all built-in attributes such as gl_Vertex, gl_MultiTexCoord0, gl_Normal. In TouchDesigner these attributes will be accessible through automatically declared attributes such as in vec3 P; in vec3 N; in vec4 Cd. More details on this later.
  • Removed almost all built-in uniforms such as matrices (gl_ModelViewMatrix, gl_ProjectionMatrix), light information (gl_LightSource[]), fog information (gl_Fog). All of this data will be available through new means provided by TouchDesigner, detailed later.
  • Arrays of samplers are now supported, and are used extensively in TouchDesigner when appropriate. There are limitations on how these samplers are indexed though, detailed in the GLSL spec for the particular version you are using (3.30 has different rules from 4.10, for example).
  • All TouchDesigner provided functions and uniforms have been renamed to follow a more concise and consistent naming convention.
  • All TouchDesigner provided functions and uniforms will be declared for the user automatically. There is no need to manually declare the uniforms you want to use.
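As a small illustration of these changes, here is a sketch (not from the official docs) of how a typical GLSL 1.20 pattern maps to 3.30 syntax. The sampler name sColorMap is a hypothetical user-declared sampler, used only for illustration:

```glsl
// GLSL 1.20 (old):
//   varying vec2 texCoord;
//   gl_FragColor = texture2D(sColorMap, texCoord);

// GLSL 3.30 (new): declare the interpolated input and a named color output yourself.
in vec2 texCoord;                        // was 'varying'
layout(location = 0) out vec4 fragColor; // replaces gl_FragColor
uniform sampler2D sColorMap;             // hypothetical user sampler name

void main()
{
	fragColor = texture(sColorMap, texCoord); // texture() replaces texture2D()
}
```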

The concept of GLSL Shaders

A GLSL Shader is a program that is applied to geometry as it is being rendered. A GLSL shader is split into two main components: the vertex shader and the pixel shader.

Vertex Shader - A vertex shader is a program that is applied to every vertex of the geometry the Material is applied to.

Pixel Shader - A pixel shader is a program that is applied to every pixel that is drawn.

There is also the Geometry Shader, which is a stage between the vertex and pixel stages, but it isn't used very often. For more information on Geometry Shaders have a look here.

TouchDesigner GLSL conventions

All functions and uniforms provided by TouchDesigner as augmentations of the GLSL language will follow these conventions.

  • Function names will always start with the letters TD. e.g. TDLighting().
  • Uniforms will start with the letters uTD.
  • Samplers will start with the letters sTD.
  • Vertex input attributes will be named the same as they are in the TouchDesigner SOP interface (P, N, Cd, uv).

Most uniforms provided by TouchDesigner will be contained in Uniform Blocks. This means instead of accessing a single matrix by uTDMatrixName, the matrices will be stored in a single block with many matrices, such as uTDMats, which has members such as uTDMats[0].worldCam and uTDMats[0].projInverse.

Shader Stages

Vertex Shader

Inside a vertex shader you only have access to one vertex: you don't know what the positions of other vertices are, or what the output of the vertex shader for other vertices will be.

The input to a vertex shader is all the data about the particular vertex that the program is running on. Data like the vertex position in SOP space, texture coordinate, color, normal are available. These values are called attributes. Attributes are declared in the vertex shader using the in keyword.

The vertex shader can output many things. The primary things it will output are the vertex position (after being transformed by the world, camera and projection matrices), color, and texture coordinates. It can output any other values as well, using output variables declared with out. Outputs from a vertex shader are linearly interpolated across the surface of the primitive the vertex is a part of. For example, if you output a value of 0.2 at the 1st vertex and a value of 0.4 at the 2nd vertex of a line, a pixel drawn half-way between these two points will receive a value of 0.3.
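For example, a vertex shader can pass a value to the pixel shader through a matched out/in pair. The name vertColor here is arbitrary, chosen only for illustration:

```glsl
// Vertex shader
out vec4 vertColor;        // interpolated across the primitive
void main()
{
	vertColor = Cd;        // pass the SOP color attribute along
	gl_Position = TDWorldToProj(TDDeform(P));
}

// Pixel shader
in vec4 vertColor;         // receives the linearly interpolated value
out vec4 fragColor;
void main()
{
	fragColor = TDOutputSwizzle(vertColor);
}
```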

Pixel Shader

The inputs to a pixel shader are all of the outputs from the vertex shader, and any uniforms that are defined. The outputs from the vertex shader will have been linearly interpolated across the polygon that the vertices created. A pixel shader can output two things: Color and Depth. Color is output through the variable declared as layout(location = 0) out vec4 whateverName. Depth is output through a variable declared as out float depthName. You can name these variables whatever you want. You are required to write out a color value, but you do not need to write out depth (in fact it's best if you don't unless absolutely needed). GLSL will automatically output the correct depth value for you if you don't write out a depth value. If you are outputting to multiple color buffers, you declare more color outputs with the location value set to 1, 2, 3, etc., instead of 0.
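For instance, a pixel shader writing to two color buffers might declare its outputs like this (the output names and the constant colors are purely illustrative):

```glsl
layout(location = 0) out vec4 fragColor;       // first color buffer
layout(location = 1) out vec4 secondFragColor; // second color buffer

void main()
{
	fragColor = TDOutputSwizzle(vec4(1.0, 0.0, 0.0, 1.0));
	secondFragColor = TDOutputSwizzle(vec4(0.0, 1.0, 0.0, 1.0));
}
```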

A Basic Shader

Vertex Shader

This vertex shader will simply transform each vertex correctly and leave it entirely up to the pixel shader to color the pixels:

void main()
{
	// P is the position of the current vertex
	// TDDeform() will return the deformed P position, in world space.
	// Transform it from world space to projection space so it can be rasterized
	gl_Position = TDWorldToProj(TDDeform(P));
}

Pixel Shader

This pixel shader simply sets every pixel to red:

// We need to declare the name of the output fragment color (this is different from GLSL 1.2 where it was automatically declared as gl_FragColor)
out vec4 fragColor;
void main()
{
	fragColor = vec4(1, 0, 0, 1);
}
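Following the recommendation given earlier, the same shader can route its color through TDOutputSwizzle() before output, so the channels land in the correct place regardless of the output texture's internal format:

```glsl
out vec4 fragColor;
void main()
{
	fragColor = TDOutputSwizzle(vec4(1, 0, 0, 1));
}
```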

Working with Geometry Attributes

The following vertex shader attributes (inputs) will always be declared for you to use in your vertex shader. You do not need to declare them yourself.

layout(location = 0) in vec3 P; // Vertex position
layout(location = 1) in vec3 N; // normal
layout(location = 2) in vec4 Cd; // color
layout(location = 3) in vec3 uv[8]; // texture coordinate layers
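A vertex shader that forwards the first texture coordinate layer to the pixel shader might look like this (the varying name texCoord0 is just an illustration):

```glsl
out vec3 texCoord0;
void main()
{
	texCoord0 = uv[0];  // first texture coordinate layer
	gl_Position = TDWorldToProj(TDDeform(P));
}
```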

Other Attributes

All other attributes you want to use are passed as custom attributes, and need to be declared in your shader code. For attributes with 4 or fewer values, simply declare an attribute in the GLSL shader with the same name it has in the geometry. For example, if the geometry has an attribute speed[3], declare an attribute like this:

 in vec3 speed;

Attributes with more than 4 values will be put into an array of vec4s. If the size isn't a multiple of 4, the extra values are undefined. For example, for the attribute pCapt[6], you would declare it in your shader like this:

in vec4 pCapt[2];

The values pCapt[0].x, pCapt[0].y, pCapt[0].z, pCapt[0].w, pCapt[1].x and pCapt[1].y will have the values from the geometry and pCapt[1].z and pCapt[1].w will have undefined values.

The attribute pCapt[8] would also be declared as:

in vec4 pCapt[2];

This is because it can fit all of its data in 2 vec4s.

If your attribute is an integer type, you can use the integer types (ivec2, ivec4) instead.
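Putting this together, a vertex shader could offset each point along a custom speed[3] attribute like this. The uTime uniform is a hypothetical user uniform you would declare yourself on the GLSL MAT; it is not provided by TouchDesigner:

```glsl
in vec3 speed;        // custom attribute from the SOP
uniform float uTime;  // hypothetical user-declared uniform

void main()
{
	vec3 newP = P + speed * uTime;        // offset in SOP space
	gl_Position = TDWorldToProj(TDDeform(newP));
}
```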

TouchDesigner specific defines

Shaders in TouchDesigner are dynamically recompiled based on a few things, such as the number and types of lights in the scene or the number of cameras used in the pass (in the case of Multi-Camera Rendering). This is done for optimization purposes, since loops with known iteration counts are far faster in GLSL. Some defines are provided which allow code written by end-users to be recompiled correctly for different numbers of lights/cameras.

// This define will be defined at compile time, and your shader will be recompiled for each combination
// of lights in use (if it is used in multiple light configurations). You can use it for things like a loop
// counter to loop over all lights in the scene, as you will see if you output example code from the Phong MAT.
// Environment COMPs are counted separately from regular Light COMPs.
#define TD_NUM_LIGHTS <defined at compile time>
#define TD_NUM_ENV_LIGHTS <defined at compile time>
// On newer hardware multiple cameras can be rendered at the same time. This define will be set to the
// number of cameras done on this render pass. This may be 1 or more, depending on many factors.
// Code should be written in a way that should work regardless of what this value is.
#define TD_NUM_CAMERAS <defined at compile time>

TouchDesigner specific Uniforms

These are uniforms that TouchDesigner will automatically set for you. You do not need to declare any of these; you can just use them in your shaders.

// General rendering state info
struct TDGeneral {
	vec4 ambientColor;  // Ambient light color (sum of all Ambient Light COMPs used)
	vec4 viewport;      // viewport info, contains (left, bottom, 1.0 / (right - left), 1.0 / (top - bottom))
};
uniform TDGeneral uTDGeneral;
// So for example you'd get the ambient color by doing 
// vec4 ambientColor = uTDGeneral.ambientColor;
// Matrices
struct TDMatrix {
	mat4 world;			// world transform matrix, combination of hierarchy of Object COMPs containing the SOP.
						// transforms from SOP space into World Space
	mat4 worldInverse;	// inverse of the world matrix
	mat4 worldCam;		// multiplication of the world and the camera matrix. (Cam * World)
	mat4 worldCamInverse;
	mat4 cam;			// camera transform matrix, obtained from the Camera COMP used to render
	mat4 camInverse;
	mat4 camProj;		// camera transform and the projection matrix from the Camera COMP, (Proj * Cam)
	mat4 camProjInverse;
	mat4 proj;			// projection matrix from the Camera COMP
	mat4 projInverse;
	mat4 worldCamProj;	// multiplication of the world, camera and projection matrices. (Proj * Cam * World)
						// takes a vertex in SOP space and puts it into projection space
	mat4 worldCamProjInverse;
	mat3 worldForNormals;	// Inverse transpose of the world matrix, use this to transform normals
							// from SOP space into world space
	mat3 camForNormals;		// Inverse transpose of the camera matrix, use this to transform normals
							// from world space into camera space
	mat3 worldCamForNormals;	// Inverse transpose of the worldCam matrix, use this to transform normals
								// from SOP space into camera space: inverse(transpose(Cam * World))
};
uniform TDMatrix uTDMats[TD_NUM_CAMERAS];
// For example you'd transform the vertex in SOP space into camera space with this line of code
// vec4 camSpaceVert = uTDMats[TDCameraIndex()].worldCam * vec4(P, 1.0);
struct TDCameraInfo {
	vec4 nearFar;			// near/far plane info, contains (near, far, far - near, 1.0 / (far - near))
	vec4 fog;				// fog info, as defined in the Camera COMP
							// contains (fogStart, fogEnd, 1.0 / (fogEnd - fogStart), fogDensity)
	vec4 fogColor;			// Fog Color as defined in the Camera COMP
	int renderTOPCameraIndex;	// Says which camera from the Render TOP's 'Cameras' parameter this particular camera is.
};
uniform TDCameraInfo uTDCamInfos[TD_NUM_CAMERAS];
// A 1D texture that gives a half-sine ramp from (0, 0, 0, 1) to (1, 1, 1, 1)
// Sampling it with U coordinate outside the (0, 1) range will return 0 for anything below 0 and 1 for anything above 1.
// It's possibly faster than using the GLSL sin() function, in situations where you want behavior like this
uniform sampler1D sTDSineLookup;

In general you don't need to do anything with any of these light uniforms/samplers, as it's all done for you in the TDLighting(), TDLightingPBR(), and TDEnvLightingPBR() functions. Only if you are doing custom lighting will you need to worry about these.

struct TDLight {
	vec4 position;		// the light's position in world space
	vec3 direction;		// the light's direction in world space
	vec3 diffuse;		// the light's diffuse color
	vec4 nearFar;		// the near and far settings from the light's View page
						// contains (near, far, far - near, 1.0 / (far - near))
	vec4 lightSize;		// values from the light's Light Size parameter
						// contains (light width, light height, light width / light projection width, light height / light projection height)
	vec4 misc;			// misc parameter values, right now it contains
						// (Max shadow softness, 0, 0, 0)
	vec4 coneLookupScaleBias;	// applied to a cone light's contribution to create the spot lit area
								// contains (cone lookup scale, cone lookup bias, 1.0, 0.0)
	vec4 attenScaleBiasRoll;	// applied to the light's distance from the point to get an attenuation value
								// contains (attenuation scale, attenuation bias, attenuation rolloff, 0)
	mat4 shadowMapMatrix;		// transforms a point in world space into the projection space of the shadow mapped light
								// also rescales projection space from [-1, 1] to [0, 1], so the value can be used
								// to look up into a shadow map
	mat4 shadowMapCamMatrix;	// transforms a point in world space into the camera space of the shadow mapped light
	vec4 shadowMapRes;			// the resolution of the shadow map associated with this light
								// contains (1.0 / width, 1.0 / height, width, height)
								// filled with 0s if the light isn't shadow mapped
	mat4 projMapMatrix;			// transforms a point in world space into the projection map space of the light
								// used when using the textureProj() function for projection mapping
};
uniform TDLight uTDLights[TD_NUM_LIGHTS];
struct TDEnvLight {
	vec3 color;					// Color of the env light (not counting its environment map)
	mat3 rotate;				// Rotation to be applied to the env light.
	vec3 shCoeffs[9];			// Spherical harmonic coefficients calculated from the environment map. Used for diffuse
};
uniform TDEnvLight uTDEnvLights[TD_NUM_ENV_LIGHTS];
// The environment maps for the env lights. Will be black for lights that don't have a map
// of that particular dimensionality.
uniform samplerCube sTDEnvLightCubeMaps[TD_NUM_ENV_LIGHTS];
uniform sampler2D sTDEnvLight2DMaps[TD_NUM_ENV_LIGHTS];
// shadow maps, these are both the same maps, but are set up in compare or sampling mode
// for lights that aren't shadow mapped, these will be filled with 0s
// The first one is used in the function texture(sampler2DShadow, vec3) or textureProj(sampler2DShadow, vec4)
// for automatic depth comparison and hardware percentage closer filtering
uniform sampler2DShadow sTDCompareShadowMaps[TD_NUM_LIGHTS];
// This one is used for directly getting the depth from the shadow map using
// texture(sampler2D, vec2) or textureProj(sampler2D, vec3)
// If using hard shadows the values are in [0, 1] post-projection depth units
// If using soft shadows the values are in camera space of the light (not the camera being rendered from)
uniform sampler2D sTDShadowMaps[TD_NUM_LIGHTS];
// The projection maps defined in the Projection Map parameter of the Light COMP
// the map will always return black if the light doesn't have a projection map defined
uniform sampler2D sTDProjMaps[TD_NUM_LIGHTS];
// The falloff ramp from when the cone light starts to fade out until it reaches black
// used in combination with uTDLights[].coneLookupScaleBias
uniform sampler1D sTDConeLookups[TD_NUM_LIGHTS];

TouchDesigner specific Functions

Further details about each of these functions are given in the sections following this.

Vertex Shader Only Functions

// Transforms a point
// from world space to projection space. 
// These functions should always be used to output to gl_Position, allowing TouchDesigner to do custom manipulations
// of the values as needed for special operations and projections.
// Both the vec4 and vec3 version of TDWorldToProj() treat the xyz as a point, not a vector.
vec4 TDWorldToProj(vec4 v);
vec4 TDWorldToProj(vec3 v);
// Returns the instance ID for the current instance. This should always be used, not gl_InstanceID directly.
// For more information look at the Instancing section of this document.
int TDInstanceID();
// Returns the index of the camera for this particular vertex. Needed to support Multi-Camera Rendering.
// This is always 0-based; it does not reflect which camera is currently being used from the Render TOP.
// For that, use uTDCamInfos[TDCameraIndex()].renderTOPCameraIndex
int TDCameraIndex();
// Creates a rotation matrix to rotate from vector 'from' to vector 'to'.
mat3 TDCreateRotMatrix(vec3 from, vec3 to);
// Deforms a point or vector using instancing and skinned deforms.
// Be sure to use the *Vec version in the case of vectors to get correct results.
// Also use the *Norm version when deforming a normal, to make sure it still matches the surface correctly.
vec4 TDDeform(vec4 pos);
vec4 TDDeform(vec3 pos);
vec3 TDDeformVec(vec3 v);
vec3 TDDeformNorm(vec3 n);
// ** In general you don't need to call any of the below functions, just calling TDDeform or TDDeformVec will
// do all the work for you
// Just the skinning or instancing portion of the deforms
// Returned position/vector is in SOP space for the *Skinned* version, and world space for the *Instance* version.
vec4 TDSkinnedDeform(vec4 pos);
vec3 TDSkinnedDeformVec(vec3 vec);
vec4 TDInstanceDeform(vec4 pos);
vec3 TDInstanceDeformVec(vec3 vec);
// You also don't need to call these usually, but are available for special cases
// For instancing functions, if you don't provide an index, 
// it will use (gl_InstanceID) or (gl_InstanceID + uTDInstanceIDOffset), depending on the situation
mat4 TDBoneMat(int index);
mat4 TDInstanceMat(int index);
mat4 TDInstanceMat();
// Returns a 3x3 matrix only. Useful if you are only working with vectors, not positions.
mat3 TDInstanceMat3(int index);
mat3 TDInstanceMat3();
vec3 TDInstanceTranslate(int index);
vec3 TDInstanceTranslate();
vec3 TDInstanceScale(int index);
vec3 TDInstanceScale();
// To calculate the texture coordinates for your instance (if used in the Geometry COMP's parameters), use these functions
// For texture coordinates the passed in variable 't' is the current texture coordinate to be modified/replaced
vec3 TDInstanceTexCoord(int index, vec3 t);
vec3 TDInstanceTexCoord(vec3 t);
// Available in build 099.2017.15400 or later
// Takes a surface normal, the tangent to the surface, and a handedness value (either -1 or 1)
// Returns a matrix that will convert vectors from tangent space, to the space the normal and tangent are in
// Both the normal and the tangent must be normalized before this function is called.
// The w coordinate of the T attribute created by the Attribute Create SOP contains the handedness
// that should be passed in as-is.
mat3 TDCreateTBNMatrix(vec3 normal, vec3 tangent, float handedness);
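As a sketch of how TDCreateTBNMatrix() can be used for normal mapping in a vertex shader: the T attribute is assumed to come from an Attribute Create SOP as described above, and the resulting matrix is passed down to the pixel shader. The varying name tangentToWorld is illustrative:

```glsl
in vec4 T;  // tangent attribute; T.w holds the handedness

out mat3 tangentToWorld;
void main()
{
	// Both vectors must be normalized before calling TDCreateTBNMatrix()
	vec3 worldNorm = normalize(TDDeformNorm(N));
	vec3 worldTangent = normalize(TDDeformVec(T.xyz));
	// Converts vectors from tangent space into world space
	tangentToWorld = TDCreateTBNMatrix(worldNorm, worldTangent, T.w);
	gl_Position = TDWorldToProj(TDDeform(P));
}
```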

Pixel Shader Only Functions

// Call this function to give TouchDesigner a chance to discard some pixels if appropriate.
// This is used in things such as order-independent transparency and dual-paraboloid rendering.
// For best performance call it at the start of the pixel shader.
// It will do nothing if no features that require it are active, so it's safe to always call it.
void TDCheckDiscard();
// Call this function to apply the alpha test to the current pixel. This function will do nothing
// if the alpha test is disabled, so it can safely always be called.
void TDAlphaTest(float alphaValue);
// Call this to apply the Camera COMP's fog to the passed color. Requires the world space vertex position also.
// This function will do nothing if fog is disabled, so it's safe to always call it at the end of your shader;
// there would be no performance impact from calling it if fog is disabled.
// the cameraIndex should be passed through from the vertex shader using a varying, sourced from TDCameraIndex()
vec4 TDFog(vec4 curColor, vec3 worldSpacePos, int cameraIndex);
// Call this at the end of your shader to apply a dither to your final color. This function does nothing
// if dithering is disabled in the Render TOP's parameters.
vec4 TDDither(vec4 curColor);
// Pass any color value through this function before writing it out to a color buffer.
// This is needed to ensure that color channels are output to the correct channels
// in the color buffer, based on hardware limitation that may store alpha-only
// textures as red-only internally, for example
vec4 TDOutputSwizzle(vec4 curColor);
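Putting these functions together in the order their comments suggest, a pixel shader skeleton might look like this. The cameraIndex varying is assumed to be forwarded from the vertex shader (sourced from TDCameraIndex(), as the TDFog() comment describes); the other names and the constant color are illustrative:

```glsl
in vec3 worldSpacePos;        // passed from the vertex shader
flat in int cameraIndex;      // TDCameraIndex() forwarded from the vertex shader
out vec4 fragColor;

void main()
{
	TDCheckDiscard();                 // first, for best performance
	vec4 color = vec4(1.0, 0.0, 0.0, 1.0);
	TDAlphaTest(color.a);             // no-op if the alpha test is disabled
	color = TDFog(color, worldSpacePos, cameraIndex); // no-op if fog is disabled
	color = TDDither(color);          // no-op if dithering is disabled
	fragColor = TDOutputSwizzle(color);
}
```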

The TDLighting() functions are called per light to determine that light's diffuse and specular contributions. Shadowing and projection mapping are handled for you automatically inside these functions.

  • The diffuseContrib, specularContrib and specularContrib2 parameters will be filled with the results.
  • lightIndex is the light index to calculate
  • worldSpacePos is the world space vertex position
  • shadowStrength is a scalar on the shadow to increase or decrease its effect. For example, a value of 0.5 would give a maximum 50% shadow.
  • worldSpaceNorm is the normalized world space normal
  • vertToCamVec is the normalized vector from the vertex position to the camera position.
  • shininess is the specular shininess exponent.

Physically Based (PBR) Lighting

// For all regular lights. Should be called in a loop from 0 to TD_NUM_LIGHTS
void TDLightingPBR
	(inout vec3 diffuseContrib,
	inout vec3 specularContrib,
	int index,
	vec3 diffuseColor,
	vec3 specularColor,
	vec3 worldSpacePos,
	vec3 worldSpaceNormal,
	float shadowStrength,
	vec3 shadowColor,
	vec3 vertToCamVec,
	float roughness);
// For environment lights. Should be called in a loop from 0 to TD_NUM_ENV_LIGHTS
void TDEnvLightingPBR
	(inout vec3 diffuseContrib,
	inout vec3 specularContrib,
	int index,
	vec3 diffuseColor,
	vec3 specularColor,
	vec3 worldSpaceNormal,
	vec3 vertToCamVec,
	float roughness,
	float ambientOcclusion);
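A typical use of these functions, based on the loop ranges described in the comments above, accumulates the contributions of every light. The material inputs (diffuseColor, specularColor, worldSpacePos, worldSpaceNorm, vertToCamVec, roughness) are assumed to be computed earlier in your shader; the shadow strength and ambient occlusion values here are illustrative:

```glsl
vec3 diffuseSum = vec3(0.0);
vec3 specularSum = vec3(0.0);
// Regular lights
for (int i = 0; i < TD_NUM_LIGHTS; i++)
{
	TDLightingPBR(diffuseSum, specularSum, i,
		diffuseColor, specularColor,
		worldSpacePos, worldSpaceNorm,
		1.0,              // shadowStrength: full-strength shadows
		vec3(0.0),        // shadowColor: black
		vertToCamVec, roughness);
}
// Environment lights
for (int i = 0; i < TD_NUM_ENV_LIGHTS; i++)
{
	TDEnvLightingPBR(diffuseSum, specularSum, i,
		diffuseColor, specularColor,
		worldSpaceNorm, vertToCamVec,
		roughness, 1.0);  // ambientOcclusion: none
}
```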

Phong Lighting

You can disable features by calling a different version of TDLighting().

// All features
void TDLighting(
	out vec3 diffuseContrib,
	out vec3 specularContrib,
	out vec3 specularContrib2,
	int lightIndex,
	vec3 worldSpacePos,
	vec3 worldSpaceNorm,
	float shadowStrength,
	vec3 shadowColor,
	vec3 vertToCamVec,
	float shininess,
	float shininess2);
// Diffuse and specular, but no specular2, with shadows
void TDLighting(
	out vec3 diffuseContrib,
	out vec3 specularContrib,
	int lightIndex,
	vec3 worldSpacePos,
	vec3 worldSpaceNorm,
	float shadowStrength,
	vec3 shadowColor,
	vec3 vertToCamVec,
	float shininess);
// Diffuse and both speculars, no shadows
void TDLighting(
	out vec3 diffuseContrib,
	out vec3 specularContrib,
	out vec3 specularContrib2,
	int lightIndex,
	vec3 worldSpacePos,
	vec3 worldSpaceNorm,
	float shininess,
	float shininess2,
	vec3 vertToCamVec);
// Diffuse only, with shadows
void TDLighting(
	out vec3 diffuseContrib,
	int lightIndex,
	vec3 worldSpacePos,
	vec3 worldSpaceNorm,
	float shadowStrength,
	vec3 shadowColor);
// Diffuse only, no shadows
void TDLighting(
	out vec3 diffuseContrib,
	int lightIndex,
	vec3 worldSpacePos,
	vec3 worldSpaceNorm);
// Diffuse and specular, no specular2 or shadows
void TDLighting(
	out vec3 diffuseContrib,
	out vec3 specularContrib,
	int lightIndex,
	vec3 worldSpacePos,
	vec3 worldSpaceNorm,
	vec3 vertToCamVec,
	float shininess);
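For example, using the diffuse-only, no-shadows overload in a loop over all lights (worldSpacePos and worldSpaceNorm are assumed to be computed earlier in your shader):

```glsl
vec3 diffuseSum = vec3(0.0);
for (int i = 0; i < TD_NUM_LIGHTS; i++)
{
	vec3 diffuseContrib;
	TDLighting(diffuseContrib, i, worldSpacePos, worldSpaceNorm);
	diffuseSum += diffuseContrib;
}
```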

Common Lighting Functions

// In general you don't need to use these functions, they are called for you in the TDLighting() functions.
// These functions return the shadow strength at the current pixel for light at the given index.
// Also requires the world space vertex position to do its calculations
// returns undefined results if the shadow isn't mapped using the chosen shadow type
// The returned value is 0 for no shadow, 1 for 100% shadowed
// Due to percentage closer filtering, hard shadows can still have values between 0 and 1 at the edges of the shadow
float TDHardShadow(int lightIndex, vec3 worldSpacePos);
// This one will apply soft shadows, with both 25 search steps and 25 filter samples.
float TDSoftShadow(int lightIndex, vec3 worldSpacePos);
// Allows for control of search steps and filter samples.
float TDSoftShadow(int lightIndex, vec3 worldSpacePos, int samples, int searchSteps);
// Gets the projection map color for the given world space vertex position.
// No other lighting calculations are applied to the returned color
// If the given light index is not using a projection map, then 'defaultColor' is returned.
vec4 TDProjMap(int lightIndex, vec3 worldSpacePosition, vec4 defaultColor);
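For instance, a custom lighting path could darken its diffuse term with TDSoftShadow() like this (the 50% maximum shadow strength and the surrounding variable names are illustrative):

```glsl
float shadow = TDSoftShadow(i, worldSpacePos);   // 0 = unshadowed, 1 = fully shadowed
diffuseContrib *= 1.0 - (shadow * 0.5);          // apply at most a 50% shadow
```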

Common Functions

Available in all shader stages.

Matrix functions

// Note: Became available in all stages starting in 099.2017.35800. Older versions only have these in the vertex shader.
// Creates a rotation matrix that rotates from looking down +Z to the 'forward' vector direction
mat3 TDRotateToVector(vec3 forward, vec3 up);
// Creates a rotation matrix that rotates around the 'axis' by the given number of 'radians'
mat3 TDRotateOnAxis(float radians, vec3 axis);

Perlin and Simplex noise functions

// Noise functions
// These will return the same result for the same input
// Results are between -1 and 1
// Can be slow so just be aware when using them. 
// Different dimensionality selected by passing vec2, vec3 or vec4. 
float TDPerlinNoise(vec2 v);
float TDPerlinNoise(vec3 v);
float TDPerlinNoise(vec4 v);
float TDSimplexNoise(vec2 v);
float TDSimplexNoise(vec3 v);
float TDSimplexNoise(vec4 v);
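As an example, a vertex shader could displace each point along its normal by a simplex noise value (the 0.1 amplitude is arbitrary):

```glsl
void main()
{
	// Noise in [-1, 1], driven by the vertex position
	float n = TDSimplexNoise(P);
	vec3 newP = P + N * (n * 0.1);  // displace along the normal
	gl_Position = TDWorldToProj(TDDeform(newP));
}
```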

HSV Conversion

// Converts between RGB and HSV color space
vec3 TDHSVToRGB(vec3 c);
vec3 TDRGBToHSV(vec3 c);
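For example, shifting the hue of a color by a quarter turn. This sketch assumes the hue channel is stored in the .x component in the 0-1 range:

```glsl
vec3 hsv = TDRGBToHSV(color.rgb);
hsv.x = fract(hsv.x + 0.25);   // assumes hue in .x, 0-1 range
color.rgb = TDHSVToRGB(hsv);
```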

Space Conversions

// Converts a 0-1 equirectangular texture coordinate into cubemap coordinates.
// A 0 for the U coordinate corresponds to the middle of the +X face. So along the vec3(1, Y, 0) plane.
// As U rises, equirectangular coordinates rotate from +X, to +Z, then -X and -Z.
vec3 TDEquirectangularToCubeMap(vec2 equiCoord);
// Converts from cubemap coordinates to equirectangular
// cubemapCoord MUST be normalized before calling this function.
vec2 TDCubeMapToEquirectangular(vec3 cubemapCoord);

Working with Lights

To help shaders be as fast as possible, a lot of the logic to calculate lights is hard-coded into the shader depending on what features are enabled and what the light type is. Shaders written for the GLSL MAT will be recompiled with different implementations of TDLightingPBR(), TDLighting(), etc., depending on the number and types of lights in the scene. This allows the same GLSL MAT to be used in multiple scenes without needing to be changed based on the number of lights. These compilations are cached, so each permutation of lighting settings will only cause one compilation to occur each time TouchDesigner is run.

TIP: Geometry viewers have built-in lighting separate from your scene's lighting objects. For information on how to duplicate that lighting, see the Geometry Viewer article.

Custom work with lights

If you decide to do custom lighting work, this section describes how a lot of the light values are used in our shader.

Knowing which variables correspond to which Light COMPs

The variables will be indexed to differentiate the lights, starting at 0. Light 0 will be the first light listed in the Render TOP, Light 1 will be the 2nd light listed and so on. In the event that lights are selected using a wildcard such as light*, the lights gathered from this wildcard will be sorted alpha-numerically.

For example, say the Render TOP has "/light3 /container1/light* /alight1" listed in its Light parameter, and /container1/ has two light COMPs, named light1 and light2. In this case the lights would correspond to the following indices:
/light3 would be index 0
/container1/light1 would be index 1
/container1/light2 would be index 2
/alight1 would be index 3

Light Parameters

All of the parameters for the lights are defined in the uTDLights structure, defined here.

Cone Lighting Technique

TouchDesigner's built-in shaders use a custom cone-lighting technique that you can mimic in your shader. The intensity of the cone light is pre-computed into a 1D texture (a lookup table) to reduce the workload in the shader. The start of the 1D texture (texture coordinate 0.0) is the intensity of the light at the edge of the cone angle (the first pixel is always 0). The end of the 1D texture (texture coordinate 1.0) is the intensity of the light at the center of the cone. This lookup table is declared as:

uniform sampler1D sTDConeLookups[TD_NUM_LIGHTS];

A second helper value is also given to the shader to make looking up into the 1D texture easier: the coneLookupScaleBias member of the TDLight structure described above.

To correctly look up into this lookup table, the following algorithm should be used:

// 'spot' is the spot vector
// 'lightV' is the vector coming from the light position, pointing towards
// the point on the geometry we are shading.
// It doesn't matter which space these vectors are in (camera space, object space),
// as long as they are both in the same space.
// Determine the cosine of the angle between the two vectors, will be between [-1, 1]
float spotEffect = dot(spot, lightV);
// Now rescale the value using the special helper uniform so that the value is between [0, 1]
// A value of 0 means the angle between the two vectors is the same as the total
// cone angle + cone delta of the light
spotEffect = (spotEffect * uTDLights[i].coneLookupScaleBias.x) + uTDLights[i].coneLookupScaleBias.y;
// Finally, look up into the lookup table
float dimmer = texture(sTDConeLookups[i], spotEffect).r;
// You can now multiply the strength of the light by 'dimmer' to create the correct
// light intensity based on this pixel's position in or outside the cone light's area of
// influence


Attenuation is handled for you in the TDLighting() function, but if you want to add it yourself this section describes how. TouchDesigner's built-in shaders use a custom attenuation technique. Like the cone lighting, pre-calculated scale, bias and rolloff values, uTDLights[i].attenScaleBiasRoll, are provided for you to compute the correct attenuated intensity of the light; they are used in the code below.

The code for calculating attenuation is as follows ('lightDist' is the distance from the vertex to the light):

float lightAtten = lightDist * uTDLights[i].attenScaleBiasRoll.x;
lightAtten += uTDLights[i].attenScaleBiasRoll.y;
lightAtten = clamp(lightAtten, 0.0, 1.0) * 1.5707963; // pi / 2
lightAtten = sin(lightAtten);
lightAtten = pow(lightAtten, uTDLights[i].attenScaleBiasRoll.z);
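Putting the two techniques together, the cone dimmer and the attenuation can both scale a light's contribution. A minimal sketch, assuming 'dimmer' from the cone lookup above, 'lightAtten' from this section, and hypothetical 'norm', 'lightV' and 'lightColor' variables from your own shader:

```glsl
// Standard diffuse term, scaled by the cone and attenuation factors.
// 'norm' and 'lightV' are assumed normalized and in the same space.
float NdotL = max(dot(norm, -lightV), 0.0);
vec3 diffuseContrib = lightColor * NdotL * dimmer * lightAtten;
```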

Projection and Shadow Mapping

Projection mapping and shadowing mapping are handled for you in the TDLighting() functions, but you can do it yourself if you want using the below information.

Projection and Shadow mapping are very similar operations. The only difference is that a projection map is used to color the surface, while a shadow map is used to decide whether that surface receives lighting from a certain light. The math, however, is essentially the same. First you need to calculate the correct texture coordinates to look up into the projection or shadow map with. For both operations you need to know the position of the vertex in world space.

Projection Mapping

For projection mapping, you then use this math to find the texture coordinates. The 3rd (z) component isn't necessary, since a projection map is just a 2D image, but the 4th (w) component is needed for the perspective divide.

// Notice how we remove the z coordinate here
vec3 projCoord = (uTDLights[i].projMapMatrix * worldSpaceVert).xyw;

Now in the pixel shader, sample the projection map with this code:

vec4 projColor = textureProj(sTDProjMaps[i], projCoord);

Now simply use projColor however you see fit in your shader.

Shadow Mapping

For shadow mapping it's very similar, except all 4 components are needed, since you need the z for the depth comparison.

vec4 shadowCoord = uTDLights[i].shadowMapMatrix * worldSpaceVert;

Sample the shadow map with this code:

float shadowDepth = textureProj(sTDShadowMaps[i], shadowCoord).r;

Compare shadowDepth with (shadowCoord.z / shadowCoord.w) to determine if the pixel is in a shadow or not.

if ((shadowCoord.z / shadowCoord.w) >= shadowDepth)
{
	// The pixel is in shadow
}
else
{
	// The pixel is not in shadow
}

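The same comparison can also be folded directly into your lighting as a multiplier. A minimal sketch, assuming a hypothetical 'diffuseContrib' value from your own lighting code:

```glsl
// 1.0 when the pixel is lit, 0.0 when it's in shadow
float shadowFactor = ((shadowCoord.z / shadowCoord.w) < shadowDepth) ? 1.0 : 0.0;
diffuseContrib *= shadowFactor;
```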
Multiple Render Targets

Using the '# Of Color Buffers' parameter in the Render TOP along with the Render Select TOP, you can write GLSL shaders that output multiple color values per pixel. This is done by declaring and writing to pixel shader outputs declared like this:

layout(location = 0) out vec4 fragColor[TD_NUM_COLOR_BUFFERS];

The constant TD_NUM_COLOR_BUFFERS will automatically be set for you based on the render settings.
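For example, a pixel shader that writes a color to buffer 0 and a packed world-space normal to buffer 1 might look like this (a sketch; 'ioWorldSpaceNorm' is an assumed input from your vertex shader):

```glsl
layout(location = 0) out vec4 fragColor[TD_NUM_COLOR_BUFFERS];
in vec3 ioWorldSpaceNorm;

void main()
{
	vec4 color = vec4(1.0, 0.5, 0.25, 1.0);
	fragColor[0] = TDOutputSwizzle(color);
	// Pack the normal from [-1, 1] into [0, 1] so it fits an 8-bit buffer
	vec3 n = normalize(ioWorldSpaceNorm) * 0.5 + 0.5;
	fragColor[1] = TDOutputSwizzle(vec4(n, 1.0));
}
```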

Multi-Camera Rendering

Multi-Camera Rendering is rendering multiple cameras in a single rendering pass, all looking at the same scene. This means the scene-graph is only traversed once, which avoids many calls to the graphics driver. Lights, textures, materials and draw calls only need to be done once for the entire set of cameras being rendered. This feature is supported by Nvidia Pascal (Geforce 1000, Quadro P-Series) or AMD Polaris (Radeon R9, Radeon Pro WX) and newer GPUs. This feature is important for VR rendering, as well as things such as rendering a Cube Map in a single pass (instead of one pass per side).

Multi-Camera Rendering will not function if the Cameras have different light masks. The cameras will be rendered one pass at a time in that case.

This feature is used by the Render TOP when multiple cameras are listed in the 'Cameras' parameter. The 'Multi-Camera Hint' parameter can help control how this feature is used for that particular Render TOP.

Nvidia calls this feature 'Simultaneous Multi-Projection'.

The multi-camera functionality on these GPUs is not general and requires some tricks to function properly. Because of this it's important all of your shaders make use of the TD* functions such as TDWorldToProj(), TDInstanceID() instead of doing those things manually and using built-in GLSL functionality. Functions such as TDFog() also require a camera index to be passed to it to apply fog for the correct camera.

Outputting gl_Position

Although in general you can transform your points/vectors using the built-in model/view and projection matrices at will, when outputting to gl_Position you should use the built-in functions. These functions allow TouchDesigner to do some manipulation of the values for final output, depending on the rendering setup. For example for doing optimized Multi-Camera Rendering, the position will need to be multiplied by the correct camera for this execution of the vertex shader. To give TouchDesigner a chance to do this manipulation, you should call the built-in functions to transform your vertex position:

vec4 TDWorldToProj(vec4 v);
vec4 TDWorldToProj(vec3 v);

So for example at the end of your shader you would do:

gl_Position = TDWorldToProj(worldSpacePosition);

Working with Deforms

Currently there are two different types of deformations that can be applied to geometry: skinned deforms and instanced transforms.

TouchDesigner automatically encapsulates all of the work for both of these deforms in the GLSL functions below. Use the *Vec versions when deforming vectors.
These functions always return the point/vector in world space, not model/SOP space.

vec4 TDDeform(vec4 p);
vec3 TDDeform(vec3 p);
vec3 TDDeformVec(vec3 v);
vec3 TDDeformNorm(vec3 v);

As the shader writer, it's your job to manipulate the vertex attributes such as the position and normal (since there's no place for TouchDesigner to do it if you're the one writing the shader), so it's up to you to call the TDDeform() function. In general you will simply call it like this:

vec4 worldSpaceVert = TDDeform(vec4(P, 1.0));
vec3 worldSpaceNorm = TDDeformNorm(N);

However, you can also call the functions declared below directly.
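Putting these pieces together, a minimal deform-aware vertex shader might look like this (a sketch; the 'out' variable name is illustrative):

```glsl
out vec3 ioWorldSpaceNorm;

void main()
{
	// Deform to world space, then let TouchDesigner do the final projection
	vec4 worldSpaceVert = TDDeform(vec4(P, 1.0));
	ioWorldSpaceNorm = TDDeformNorm(N);
	gl_Position = TDWorldToProj(worldSpaceVert);
}
```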

Skinning Deforms (Bone Deforms)

When you enable the Deform feature in the GLSL MAT, TouchDesigner will automatically declare some attributes, varyings, uniforms and functions for you to use to deform your geometry in the same way that other MATs deform geometry. It's important you don't re-use any of these reserved words when using deforms, to avoid name conflicts when compiling the shader. Even when not using deforms, the below listed functions will be declared anyway, so shader code will run correctly both when deforms are turned on and off. The functions do nothing when deforms are off (and have no cost to the shader speed). The bone matrices for deforms are built using the pCaptPath and pCaptData detail attributes along with the bone's current position based on the skeleton at that frame. The indices in pCapt match up with the array index of the matrices for the bones. More information on how Skinning Deforms work can be found here: Deforming Geometry (Skinning)


You generally will not need to call these directly, they are called by the TDDeform() function.

In the vertex shader:

vec4 TDSkinnedDeform(vec4 pos);
vec3 TDSkinnedDeformVec(vec3 vec);

You can get the bone matrix for a given bone index with this function:

mat4 TDBoneMat(int index);


Instance Transforms

When you enable instancing on the Instance page of the Geometry COMP, the TDDeform() functions will automatically call the correct lower-level function that will transform the instance, based on the channels given in the XForm CHOP parameter. If you don't specify a CHOP, then all of the instances will be drawn at the same spot, unless you transform them yourself.

Instance Index/ID

To get the instance ID, use the provided TDInstanceID() function. Do not use gl_InstanceID directly, because the number of instances being rendered may be larger than requested due to Multi-Camera Rendering. Shader writers familiar with TouchDesigner088 may also remember the uTDInstanceIDOffset that had to be used; it is not needed with TDInstanceID().

int TDInstanceID();

Since this function is only available in the vertex shader, you will need to pass it onwards to the pixel shader through an out/in, if you require it in the pixel shader.

// In the vertex shader, declare something like this, and assign vInstanceID = TDInstanceID() in main()
flat out int vInstanceID;

void main()
{
	vInstanceID = TDInstanceID();
	// other main vertex function stuff
	// ....
}

And in the pixel shader you can read this value if it's declared like this:

// Pixel shader
flat in int vInstanceID;

This is declared as flat since int variable types can not be interpolated across a primitive.
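As a sketch of using the value, a pixel shader could vary its output per instance, for example:

```glsl
flat in int vInstanceID;
out vec4 fragColor;

void main()
{
	// Illustrative only: derive a grey shade from the instance ID
	float shade = float(vInstanceID % 8) / 7.0;
	fragColor = TDOutputSwizzle(vec4(shade, shade, shade, 1.0));
}
```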

Deform Functions

You generally will not need to call any of these directly; they are called by the TDDeform() function. These functions are only available in the vertex shader.

In the vertex shader:

vec4 TDInstanceDeform(vec4 pos);
vec3 TDInstanceDeformVec(vec3 vec);

For the transform, there are two different types of uniforms that could be declared. If you are using any of the rotations (RX, RY, RZ), then matrices will be sent to the GPU. Access these matrices using the functions:

mat4 TDInstanceMat(int instanceIndex);
mat4 TDInstanceMat();

These matrices will contain the entire transform, including TX, TY, TZ, and SX, SY, SZ.

If you aren't using rotation, then individual arrays for each of TX, TY, TZ, SX, SY and SZ will be sent to the GPU (if they are used). The fewer channels you use, the less space these will take up.

vec3 TDInstanceTranslate(int instanceIndex);
vec3 TDInstanceTranslate();
vec3 TDInstanceScale(int instanceIndex);
vec3 TDInstanceScale();
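If you want to apply the instance transform manually rather than through TDDeform(), a minimal sketch for the no-rotation case is below. Note this skips the object's own world transform, which TDDeform() also applies:

```glsl
// Scale first, then translate, matching the order a transform matrix would apply.
// 'P' is the SOP-space position attribute.
vec3 instancePos = (P * TDInstanceScale()) + TDInstanceTranslate();
```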

Attribute Functions

When modifying the texture coordinates, these functions do the texture coordinate modifications per instance. 't' is the texture coordinate to modify. The version without instanceIndex will use the current instance automatically.

vec3 TDInstanceTexCoord(int instanceIndex, vec3 t);
vec3 TDInstanceTexCoord(vec3 t);

To modify diffuse color, these functions will replace/add/subtract from the original diffuse color. In general you'll want to pass in the attribute Cd into these functions to have them modify it. If instance color is not in use, this function will just return the passed in color, unmodified.

vec4 TDInstanceColor(vec4 curColor);
vec4 TDInstanceColor(int instanceIndex, vec4 curColor);
Instance Texturing

These are only available on Nvidia Kepler GPUs or newer.

Instance texturing allows mapping any number of individual textures onto instances. The number of textures available in a single render is extremely high, and lookups remain fast. It avoids needing to use a 2D Texture Array to map multiple images onto instances. Only one type of instance texture is supported at a time, so if the wrong function is called (e.g. the *Cube() version when the instance textures are 2D textures), you will receive a white texture instead of one of the instance textures. Use these functions to fetch the instance texture for the current instance.

sampler2D TDInstanceTexture2D();
sampler3D TDInstanceTexture3D();
sampler2DArray TDInstanceTexture2DArray();
samplerCube TDInstanceTextureCube();

If you require more custom control of which instance texture you wish to use, you can use these functions instead:

// Returns the texture index for the given instanceIndex, or for the current instance.
uint TDInstanceTextureIndex(int instanceIndex);
uint TDInstanceTextureIndex();

// Returns the texture for the given textureIndex.
sampler2D TDInstanceTexture2D(int textureIndex);
sampler3D TDInstanceTexture3D(int textureIndex);
sampler2DArray TDInstanceTexture2DArray(int textureIndex);
samplerCube TDInstanceTextureCube(int textureIndex);

There are two pieces of information used. The first is the texture index, which comes from the Texture Index parameter in the Geometry COMP. You can get this texture index with either TDInstanceTextureIndex() or TDInstanceTextureIndex(int instanceIndex) (if you wish to get the index of an instance different from the current one). Using this texture index, or some other texture index you've calculated on your own, you can then call the appropriate TDInstanceTexture*() function to get the sampler for the texture you are looking for.

Once you have the sampler, you use it like any other sampler, using the texture() function to sample a color from it. To use it in the pixel shader you will need to pass the sampler through an out/in variable to the pixel shader. Note: older Nvidia drivers had a bug where passing a sampler* through the in/out variables wouldn't work; you had to use a uint64_t as the variable type for the in/out variable, and cast the sampler to and from this when assigning and retrieving it.
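A sketch of this flow, assuming 2D instance textures, illustrative variable names, and a driver without the bug mentioned above:

```glsl
// Vertex shader
flat out sampler2D ioInstanceTex;

void main()
{
	ioInstanceTex = TDInstanceTexture2D();
	// ... the rest of your vertex work
}
```

```glsl
// Pixel shader
flat in sampler2D ioInstanceTex;
in vec3 ioTexCoord0; // assumed texture coordinate passed from the vertex shader
out vec4 fragColor;

void main()
{
	fragColor = TDOutputSwizzle(texture(ioInstanceTex, ioTexCoord0.st));
}
```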

The best way to see this code being used for real is to output a shader from the Phong MAT that is doing instance texturing.

Point Sprites

When rendering point sprites primitives you are required to write to the vertex shader output gl_PointSize. This output variable determines how large the point sprite is (in pixels) when it is rendered. If you don't write to the output then your point sizes are undefined.

Each point sprite will be rendered as a square of pixels, gl_PointSize pixels wide. The square receives texture coordinates from 0-1 over the entire square in the pixel shader input variable gl_PointCoord.
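A minimal pair of point sprite shaders might look like this (a sketch; 'sSpriteMap' is an assumed sampler you declare and assign yourself):

```glsl
// Vertex shader
void main()
{
	vec4 worldSpaceVert = TDDeform(vec4(P, 1.0));
	gl_Position = TDWorldToProj(worldSpaceVert);
	gl_PointSize = 10.0; // each sprite will be 10 pixels wide
}
```

```glsl
// Pixel shader
uniform sampler2D sSpriteMap;
out vec4 fragColor;

void main()
{
	// gl_PointCoord runs 0-1 across the sprite's square of pixels
	fragColor = TDOutputSwizzle(texture(sSpriteMap, gl_PointCoord));
}
```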

Order Independent Transparency

You can make your shader support Order Independent Transparency by simply adding this line at the start of your pixel shader's main() function. If Order Independent Transparency isn't enabled in the Render TOP, then this function will do nothing.



Dithering

If dithering is enabled in the Render TOP, you can have this dithering applied to your color by simply calling:

finalColor = TDDither(finalColor);

You generally want to do this right at the end of the shader, just before you write the value to your output color. If dithering is disabled in the Render TOP this function will still be available (to avoid compiler errors), but it will leave the color unchanged.


Picking

The Render Pick DAT and Render Pick CHOP do their work with a render operation, so they need to interact with the shader. If you export a Phong MAT shader you will see the following lines in it:

	#ifndef TD_PICKING_ACTIVE

	// All the typical shader code

	#else // TD_PICKING_ACTIVE

	TDWritePickingValues();

	#endif // TD_PICKING_ACTIVE
The key thing occurring here is that when picking is active, the define TD_PICKING_ACTIVE is set and only the code inside the #else block is executed. The function:

void TDWritePickingValues();

will write default values for picking, which the Render Pick DAT/CHOP will read. If you have a custom shader that changes vertex positions in a non-standard way, or if you want to output different kinds of information (like a color other than Cd), you can replace the values written by this function afterwards. The values available to you are:

TDPickVertex {
	vec3 sopSpacePosition;
	vec3 worldSpacePosition;
	vec3 camSpacePosition;
	vec3 sopSpaceNormal;
	vec3 worldSpaceNormal;
	vec3 camSpaceNormal;
	vec3 uv[1];
	flat int instanceId;
	vec4 color;
} vTDPickVert;

So for example, if you are modifying the vertex position in a way different from the standard TDDeform() way, you could write the newly calculated values like this. Be sure to do this AFTER the call to TDWritePickingValues(), otherwise that call will overwrite your values.

vTDPickVert.sopSpacePosition = newPosition;
vTDPickVert.worldSpacePosition = (uTDMats[TDCameraIndex()].world * vec4(newPosition, 1.0)).xyz;
vTDPickVert.camSpacePosition = (uTDMats[TDCameraIndex()].worldCam * vec4(newPosition, 1.0)).xyz;

You do not have to write to all the entries in this structure, but you can for completeness. Only the values that are being read by the Render Pick CHOP/DAT (selected in their parameters) must be filled in.


VR Shaders

Shaders that come from Shadertoy that support VR rendering will have a mainVR function defined. Re-creating the fragRayOri and fragRayDir variables that function uses inside of TD is simple. In the vertex shader:

vec4 worldSpaceVert = TDDeform(vec4(P, 1.0));
vec4 worldSpaceCamPos = uTDMats[TDCameraIndex()].camInverse[3]; // The last column of the camera transform is its position

vec3 fragRayOri = worldSpaceCamPos.xyz;
vec3 fragRayDir = worldSpaceVert.xyz - worldSpaceCamPos.xyz;
// Pass these variables to the pixel shader using 'out' variables named of your choosing

And in the pixel shader you just need to normalize whatever variable fragRayDir was sent through. The variable that came from fragRayOri can be used as-is.

To support these shaders, which are usually raymarching shaders, you'll want to render geometry that covers the entire viewport, such as putting a sphere around your camera.
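In the pixel shader, the corresponding sketch (assuming the 'out' variables were named fragRayOri and fragRayDir):

```glsl
in vec3 fragRayOri;
in vec3 fragRayDir;
out vec4 fragColor;

void main()
{
	vec3 rayOri = fragRayOri;            // can be used as-is
	vec3 rayDir = normalize(fragRayDir); // must be re-normalized per pixel
	// ... raymarch the scene using rayOri/rayDir, e.g. via the Shadertoy mainVR() code
	fragColor = TDOutputSwizzle(vec4(rayDir * 0.5 + 0.5, 1.0));
}
```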

Other Notes

#version statement

TouchDesigner will automatically put a #version statement at the start of the shaders when compiling them, so you should make sure your shaders don't have a #version statement. You will get an error if they do.

#include statements

#include statements are a recent addition to GLSL. Almost all drivers support them, except for AMD drivers on Windows, which should be adding support soon. You can use an #include statement in one DAT to include code from another DAT. The path can be absolute or relative:

 #include </project1/text1>
 #include <../../geo1/text2>
