Write a GLSL Material

From TouchDesigner Documentation
 
== Overview ==

This document explains the finer points of writing a GLSL Material in TouchDesigner. It is assumed the reader already has an understanding of the GLSL language. The official GLSL documentation can be found at [https://www.khronos.org/opengl/wiki/Core_Language_(GLSL) this address.]

===GLSL Version===

TouchDesigner uses GLSL 3.30 and newer versions as its language. Many online examples, as well as WebGL shaders, are written against GLSL 1.20. There are some significant language differences between GLSL 1.20 and GLSL 3.30+. For information about some of these differences, please refer to [[#Changes_from_GLSL_1.20|Changes from GLSL 1.20]].
 
 
=== Major changes since TouchDesigner088 ===
 
 
 
Many changes have been made to TouchDesigner's GLSL API in 099. Most of these changes were done to better facilitate [[#Multi-Camera_Rendering|Multi-Camera Rendering]]. A summary of most of these changes is:
 
* Lighting and other work is now done in World space instead of Camera space. This makes code cleaner, since with multiple cameras the shaders would otherwise need to do their work in multiple different camera spaces. Legacy GLSL shaders are supported with the [[GLSL TOP]]'s 'Lighting Space' parameter, which will be set to Camera Space for older shaders.
 
* <code>TDInstanceID()</code> should be used instead of <code>gl_InstanceID/uTDInstanceIDOffset</code>.
 
* <code>uTDMat</code> has been removed when lighting in World Space, use the array <code>uTDMats[]</code> instead.
 
* Some values from the <code>uTDGeneral</code> structure have been moved to <code>uTDCamInfos[]</code>, since that info is camera specific.
 
* A notion of camera index, obtained in the vertex shader using <code>TDCameraIndex()</code>, is needed for some functions such as <code>TDFog()</code>.
 
* <code>TDAlphaTest(float)</code> must be called to apply the alpha test. It can be safely called when the alpha test is disabled on the MAT; it will do nothing in that case.
 
* Before writing any color to an output color buffer, it should be passed through <code>vec4 TDOutputSwizzle(vec4)</code>. This ensures the channels are in the correct place depending on how the channels are stored in the output texture. For example, Alpha-only textures may be stored in a 'Red-only' texture internally, so the alpha value will need to be swizzled over to the red channel before output.
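For example, the end of a typical pixel shader using these two functions might look like the following sketch (the output and variable names are illustrative, not required by TouchDesigner):

<syntaxhighlight lang=glsl>
layout(location = 0) out vec4 fragColor;

void main()
{
	// Whatever color this shader computed
	vec4 color = vec4(1.0, 0.0, 0.0, 1.0);
	// Safe to call even if the alpha test is disabled on the MAT
	TDAlphaTest(color.a);
	// Swizzle channels to match how the output texture stores them
	fragColor = TDOutputSwizzle(color);
}
</syntaxhighlight>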
 
 
 
=== Changes from GLSL 1.20 ===
 
 
 
Shaders written for 1.20 will not compile as 3.30 shaders. The language received a large overhaul, changing the name of many key functions and replacing a lot of functionality. All of the changes can be seen in the official GLSL documentation linked to earlier. We have also taken this opportunity to improve TouchDesigner-specific functionality. Some of the more important changes are:
 
 
 
* Removed <code>texture1D(sampler1D, float), texture2D(sampler2D, vec2), etc.</code> All texture sampling is done with identical function names, regardless of the dimensionality of the texture. e.g. <code>texture(sampler1D, float), or texture(sampler2D, vec2)</code>.
 
 
 
* Removed the keyword <code>varying</code>. Instead use <code>in</code> and <code>out</code> (depending on whether the value is output from the shader or input from a previous shader stage). Examples appear later in this article.
 
 
 
* Removed the keyword <code>attribute</code>. Instead just use <code>in</code> in your vertex shader.
 
 
 
* Removed built-in varyings <code>gl_TexCoord[]</code>. You'll need to always declare your own variables that get output/input between shader stages.
 
 
 
* Removed <code>gl_FragColor</code> and <code>gl_FragData[]</code>. Instead you name your own color outputs using the syntax <code>layout(location = 0) out vec4 nameYouWant</code>. To output to multiple color buffers, declare outputs at the other locations: <code>layout(location = 1) out vec4 secondFragColor</code>.
 
 
 
* Removed all built-in attributes such as <code>gl_Vertex, gl_MultiTexCoord0, gl_Normal</code>. In TouchDesigner these attributes will be accessible through automatically declared attributes such as <code>in vec3 P; in vec3 N; in vec4 Cd</code>. More details on this [[#Working with Geometry Attributes|later]].
 
 
 
* Removed almost all built-in uniforms such as matrices (<code>gl_ModelViewMatrix, gl_ProjectionMatrix</code>), light information (<code>gl_LightSource[]</code>), fog information (<code>gl_Fog</code>). All of this data will be available through new means provided by TouchDesigner, detailed [[#TouchDesigner specific Uniforms|later]].
 
 
 
* Arrays of samplers are now supported, and are used extensively in TouchDesigner when appropriate. There are limitations on how these samplers are indexed though, detailed in the GLSL spec for the particular version you are using (3.30 has different rules from 4.10, for example).
 
 
 
* All TouchDesigner provided functions and uniforms have been renamed to follow a more concise and consistent naming convention.
 
 
 
* All TouchDesigner provided functions and uniforms will be declared for the user automatically. There is no need to manually declare the uniforms you want to use.
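To illustrate a few of these changes side by side, here is a minimal GLSL 1.20 style pixel shader and a 3.30 equivalent (the sampler and variable names are illustrative):

<syntaxhighlight lang=glsl>
// GLSL 1.20 style (no longer compiles):
// varying vec2 texCoord;
// uniform sampler2D sInput;
// void main() { gl_FragColor = texture2D(sInput, texCoord); }

// GLSL 3.30 style:
in vec2 texCoord;                        // 'varying' becomes 'in' in the pixel shader
uniform sampler2D sInput;
layout(location = 0) out vec4 fragColor; // replaces gl_FragColor
void main()
{
	fragColor = texture(sInput, texCoord); // texture2D() becomes texture()
}
</syntaxhighlight>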
 
  
 
===The concept of GLSL Shaders===

'''Vertex Shader''' - A vertex shader is a program that is applied to every vertex of the geometry the Material is applied to.

'''Pixel Shader''' - A pixel shader is a program that is applied to every pixel that is drawn. This is often also referred to as a Fragment Shader.

There is also the '''Geometry Shader''', which is a stage between the vertex and pixel stages, but it isn't used very often. For more information on Geometry Shaders have a look [https://open.gl/geometry here].

All functions and uniforms provided by TouchDesigner as augmentations of the GLSL language will follow these conventions.
  
* Function names will always start with the letters <syntaxhighlight lang=glsl inline>TD</syntaxhighlight>. e.g. <syntaxhighlight lang=glsl inline>TDLighting()</syntaxhighlight>.
* Uniforms will start with the letters <syntaxhighlight lang=glsl inline>uTD</syntaxhighlight>.
* Samplers will start with the letters <syntaxhighlight lang=glsl inline>sTD</syntaxhighlight>.
* Images will start with the letters <syntaxhighlight lang=glsl inline>mTD</syntaxhighlight>.
* Vertex input attributes will be named the same as they are in the TouchDesigner SOP interface (P, N, Cd, uv).

Most uniforms provided by TouchDesigner will be contained in Uniform Blocks. This means instead of accessing a single matrix by <syntaxhighlight lang=glsl inline>uTDMatrixName</syntaxhighlight>, the matrices will be stored in a single block with many matrices such as <syntaxhighlight lang=glsl inline>uTDMats</syntaxhighlight>, which has members such as <syntaxhighlight lang=glsl inline>uTDMats[0].worldCam</syntaxhighlight> and <syntaxhighlight lang=glsl inline>uTDMats[0].projInverse</syntaxhighlight>.
  
 
== Shader Stages ==

===Vertex Shader===

Inside a vertex shader you only have access to one vertex; you don't know what the positions of other vertices are, or what the output of the vertex shader for other vertices will be.

The input to a vertex shader is all the data about the particular vertex that the program is running on. Data like the vertex position in SOP space, texture coordinate, color, and normal are available. These values are called attributes. Attributes are declared in the vertex shader using the <syntaxhighlight lang=glsl inline>in</syntaxhighlight> keyword.

The vertex shader can output many things. The primary things it will output are the vertex position (after being transformed by the world, camera and projection matrices), color, and texture coordinates. It can output any other values as well using output variables declared using <syntaxhighlight lang=glsl inline>out</syntaxhighlight>. Outputs from a vertex shader are linearly interpolated across the surface of the primitive the vertex is a part of. For example, if you output a value of 0.2 at the 1st vertex and a value of 0.4 at the 2nd vertex on a line, a pixel drawn half-way between these two points will receive a value of 0.3.
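As a sketch, a vertex shader could pass a texture coordinate to the pixel shader like this (the output variable name is arbitrary; the matching pixel shader would declare <syntaxhighlight lang=glsl inline>in vec2 texCoord0;</syntaxhighlight>):

<syntaxhighlight lang=glsl>
out vec2 texCoord0; // interpolated across the primitive, read by the pixel shader

void main()
{
	texCoord0 = uv[0].st; // uv[0] is the first texture coordinate layer
	gl_Position = TDWorldToProj(TDDeform(P));
}
</syntaxhighlight>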
  
 
===Pixel Shader===

The inputs to a pixel shader are all of the outputs from the vertex shader, and any uniforms that are defined. The outputs from the vertex shader will have been linearly interpolated across the polygon that the vertices created. A pixel shader can output two things: Color and Depth. Color is output through the variable declared as <syntaxhighlight lang=glsl inline>layout(location = 0) out vec4 whateverName</syntaxhighlight>. Depth is output through a variable declared as <syntaxhighlight lang=glsl inline>out float depthName</syntaxhighlight>. You can name these variables whatever you want. You are required to write out a color value, but you do not need to write out depth (in fact it's best if you don't unless absolutely needed). GLSL will automatically output the correct depth value for you if you don't write out a depth value. If you are outputting to multiple color buffers, you declare more color outputs with the <syntaxhighlight lang=glsl inline>location</syntaxhighlight> value set to 1, 2, 3, etc., instead of 0.
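For example, a pixel shader writing to two color buffers might declare its outputs like this (the names are arbitrary, and the constant colors are just placeholders):

<syntaxhighlight lang=glsl>
layout(location = 0) out vec4 fragColor;       // first color buffer
layout(location = 1) out vec4 secondFragColor; // second color buffer

void main()
{
	fragColor = TDOutputSwizzle(vec4(1.0, 0.0, 0.0, 1.0));
	secondFragColor = TDOutputSwizzle(vec4(0.0, 1.0, 0.0, 1.0));
}
</syntaxhighlight>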
===Geometry Shader===

A geometry shader takes a single input primitive (points, lines, or triangles) and outputs a set of points, line strips, or triangle strips. Currently in TouchDesigner the input primitive type you declare must match what is being rendered, and must be one of:

  layout(points) in;
  layout(lines_adjacency) in;
  layout(triangles) in;

Other input types such as <code>lines</code> or <code>triangles_adjacency</code> are not currently supported.

'''Note''' - In the 2018.20000 series of builds <code>lines</code> was supported and <code>lines_adjacency</code> was not, so the change in the 2019.10000 series that switched the support to <code>lines_adjacency</code> unfortunately breaks existing geometry shaders.
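As a minimal sketch, a pass-through geometry shader for triangles simply re-emits the incoming triangle unchanged:

<syntaxhighlight lang=glsl>
layout(triangles) in;
layout(triangle_strip, max_vertices = 3) out;

void main()
{
	// Copy each of the three incoming vertices to the output strip
	for (int i = 0; i < 3; i++)
	{
		gl_Position = gl_in[i].gl_Position;
		EmitVertex();
	}
	EndPrimitive();
}
</syntaxhighlight>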
  
 
== A Basic Shader ==

=== Vertex Shader ===

This vertex shader will simply transform each vertex correctly and leave it entirely up to the pixel shader to color the pixels:

<syntaxhighlight lang=glsl>
void main()
{
	// P is the position of the current vertex
	// TDDeform() will return the deformed P position, in world space.
	// Transform it from world space to projection space so it can be rasterized
	gl_Position = TDWorldToProj(TDDeform(P));
}
</syntaxhighlight>
  
 
=== Pixel Shader ===

This pixel shader simply sets every pixel to red:

<syntaxhighlight lang=glsl>
// We need to declare the name of the output fragment color (this is different from GLSL 1.2 where it was automatically declared as gl_FragColor)
out vec4 fragColor;

void main()
{
	fragColor = vec4(1, 0, 0, 1);
}
</syntaxhighlight>
 
== Working with Geometry Attributes ==

The following vertex shader attributes (inputs) will always be declared for you to use in your vertex shader. You do not need to declare them yourself.

<syntaxhighlight lang=glsl>
layout(location = 0) in vec3 P;     // Vertex position
layout(location = 1) in vec3 N;     // normal
layout(location = 2) in vec4 Cd;    // color
layout(location = 3) in vec3 uv[8]; // texture coordinate layers
</syntaxhighlight>
 
'''Other Attributes'''

All other attributes you want to use need to be declared in your shader code; they are passed as custom attributes. For attributes with 4 or fewer values, simply declare an attribute with the same name in the GLSL shader as it has in the geometry. For example, if the geometry has an attribute <syntaxhighlight lang=glsl inline>speed[3]</syntaxhighlight>, declare an attribute like this:

  in vec3 speed;

For attributes that are larger than 4 values, they will get put into an array of vec4s. If the size isn't a multiple of 4, then the extra values are undefined. For example, for the attribute <syntaxhighlight lang=glsl inline>pCapt[6]</syntaxhighlight>, you would declare it in your shader like this:
<syntaxhighlight lang=glsl>
in vec4 pCapt[2];
</syntaxhighlight>

The values <syntaxhighlight lang=glsl inline>pCapt[0].x, pCapt[0].y, pCapt[0].z, pCapt[0].w, pCapt[1].x</syntaxhighlight> and <syntaxhighlight lang=glsl inline>pCapt[1].y</syntaxhighlight> will have the values from the geometry, and <syntaxhighlight lang=glsl inline>pCapt[1].z</syntaxhighlight> and <syntaxhighlight lang=glsl inline>pCapt[1].w</syntaxhighlight> will have undefined values.

The attribute <syntaxhighlight lang=glsl inline>pCapt[8]</syntaxhighlight> would also be declared as:
<syntaxhighlight lang=glsl>
in vec4 pCapt[2];
</syntaxhighlight>

Since it can fit all of its data in 2 vec4s.

If your attribute is an integer type, you can use the integer types (<syntaxhighlight lang=glsl inline>ivec2, ivec4</syntaxhighlight>) instead.
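As an illustration, the six valid values of the <syntaxhighlight lang=glsl inline>pCapt[6]</syntaxhighlight> attribute above could be copied into a plain float array inside the vertex shader like this:

<syntaxhighlight lang=glsl>
in vec4 pCapt[2];

void main()
{
	float captValues[6];
	for (int i = 0; i < 6; i++)
	{
		// Component (i % 4) of vec4 number (i / 4)
		captValues[i] = pCapt[i / 4][i % 4];
	}
	gl_Position = TDWorldToProj(TDDeform(P));
}
</syntaxhighlight>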
  
 
== TouchDesigner specific defines ==

Shaders in TouchDesigner are dynamically recompiled based on a few things, such as the number and types of lights in the scene or the number of cameras used in this pass (in the case of [[#Multi-Camera_Rendering|Multi-Camera Rendering]]). This is done for optimization purposes, since loops with known numbers of iterations are far faster in GLSL. Some defines are provided which allow code written by end-users to be recompiled correctly with different numbers of lights/cameras.

<syntaxhighlight lang=glsl>
// This define will be defined at compile time, and your shader will be recompiled for each combination
// of lights in use (if it is used in multiple light configurations). You can use it for things like a loop
// counter to loop over all lights in the scene, as you will see if you output example code from the Phong MAT.
// Environment COMPs are counted separately from regular Light COMPs.
#define TD_NUM_LIGHTS <defined at compile time>
#define TD_NUM_ENV_LIGHTS <defined at compile time>
</syntaxhighlight>
<syntaxhighlight lang=glsl>
// On newer hardware multiple cameras can be rendered at the same time. This define will be set to the
// number of cameras done on this render pass. This may be 1 or more, depending on many factors.
// Code should be written in a way that should work regardless of what this value is.
#define TD_NUM_CAMERAS <defined at compile time>
</syntaxhighlight>
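For example, TD_NUM_LIGHTS can be used as a loop bound; this sketch just sums each light's diffuse color from the uTDLights[] array (described in the Uniforms section), rather than doing full shading:

<syntaxhighlight lang=glsl>
vec3 diffuseSum = vec3(0.0);
for (int i = 0; i < TD_NUM_LIGHTS; i++)
{
	// The loop count is known at compile time, which lets the compiler unroll it
	diffuseSum += uTDLights[i].diffuse;
}
</syntaxhighlight>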
  
<syntaxhighlight lang=glsl>
// One of these defines will be set depending on which shader stage is being compiled.
// This allows shaders to use #ifdef <stageName> and #ifndef <stageName> to either include or
// omit code based on the current shader stage being compiled. This also allows for single
// large DATs to contain all the code for all stages, with each portion #ifdef-ed in for
// each stage.
#define TD_VERTEX_SHADER
#define TD_GEOMETRY_SHADER
#define TD_PIXEL_SHADER
#define TD_COMPUTE_SHADER
</syntaxhighlight>
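For example, a single DAT could hold the code for both a vertex and a pixel stage, with each portion guarded by the stage define:

<syntaxhighlight lang=glsl>
#ifdef TD_VERTEX_SHADER
// Only compiled for the vertex stage
void main()
{
	gl_Position = TDWorldToProj(TDDeform(P));
}
#endif

#ifdef TD_PIXEL_SHADER
// Only compiled for the pixel stage
out vec4 fragColor;
void main()
{
	fragColor = TDOutputSwizzle(vec4(1.0));
}
#endif
</syntaxhighlight>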
  
 
== TouchDesigner specific Uniforms ==

These are uniforms that TouchDesigner will automatically set for you. You do not need to declare any of these, you can just use them in your shaders.

<syntaxhighlight lang=glsl>
// General rendering state info
struct TDGeneral {
	vec4 ambientColor; // Ambient light color (sum of all Ambient Light COMPs used)
	vec4 viewport;     // viewport info, contains (left, bottom, 1.0 / (right - left), 1.0 / (top - bottom))
};
uniform TDGeneral uTDGeneral;
// So for example you'd get the ambient color by doing
// vec4 ambientColor = uTDGeneral.ambientColor;
</syntaxhighlight>
<syntaxhighlight lang=glsl>
// Matrices
struct TDMatrix
{
	mat4 world;              // world transform matrix, combination of hierarchy of Object COMPs containing the SOP.
	                         // transforms from SOP space into World Space
	mat4 worldInverse;       // inverse of the world matrix
	mat4 worldCam;           // multiplication of the world and the camera matrix. (Cam * World)
	mat4 worldCamInverse;
	mat4 cam;                // camera transform matrix, obtained from the Camera COMP used to render
	mat4 camInverse;
	mat4 camProj;            // camera transform and the projection matrix from the camera COMP, (Proj * Cam)
	mat4 camProjInverse;
	mat4 proj;               // projection matrix from the Camera COMP
	mat4 projInverse;
	mat4 worldCamProj;       // multiplication of the world, camera and projection matrices. (Proj * Cam * World)
	                         // takes a vertex in SOP space and puts it into projection space
	mat4 worldCamProjInverse;
	mat3 worldForNormals;    // Inverse transpose of the world matrix, use this to transform normals
	                         // from SOP space into world space
	mat3 camForNormals;      // Inverse transpose of the camera matrix, use this to transform normals
	                         // from world space into camera space
	mat3 worldCamForNormals; // Inverse transpose of the worldCam matrix, use this to transform normals
	                         // from SOP space into camera space. inverse(transpose(Cam * World))
};
uniform TDMatrix uTDMats[TD_NUM_CAMERAS];
// For example you'd transform the vertex in SOP space into camera space with this line of code
// vec4 camSpaceVert = uTDMats[TDCameraIndex()].worldCam * vec4(P, 1.0);
</syntaxhighlight>
<syntaxhighlight lang=glsl>
struct TDCameraInfo
{
	vec4 nearFar;             // near/far plane info, contains (near, far, far - near, 1.0 / (far - near))
	vec4 fog;                 // fog info, as defined in the Camera COMP
	                          // contains (fogStart, fogEnd, 1.0 / (fogEnd - fogStart), fogDensity)
	vec4 fogColor;            // Fog Color as defined in the Camera COMP
	int renderTOPCameraIndex; // Says which camera from the Render TOP's 'Cameras' parameter this particular camera is.
};
uniform TDCameraInfo uTDCamInfos[TD_NUM_CAMERAS];
</syntaxhighlight>
<syntaxhighlight lang=glsl>
// A 1D texture that gives a half-sine ramp from (0, 0, 0, 1) to (1, 1, 1, 1)
// Sampling it with a U coordinate outside the (0, 1) range will return 0 for anything below 0 and 1 for anything above 1.
// It's possibly faster than using the GLSL sin() function, in situations where you want behavior like this
uniform sampler1D sTDSineLookup;
</syntaxhighlight>

In general you don't need to do anything with any of these light uniforms/samplers, as it's all done for you in the <syntaxhighlight lang=glsl inline>TDLighting(), TDLightingPBR()</syntaxhighlight> and <syntaxhighlight lang=glsl inline>TDEnvLightingPBR()</syntaxhighlight> functions. Only if you are doing custom lighting will you need to worry about these.
      vec3 diffuse;     // the light's diffuse color<br>
+
<syntaxhighlight lang=glsl>
      vec4 nearFar;     // the near and far settings from the light's View page
+
struct TDLight
                        // contains (near, far, far - near, 1.0 / (far - near)<br>
+
{
      vec4 lightSize;   // values from the light's Light Size parameter
+
vec4 position; // the light's position in world space
	vec3 direction;           // the light's direction in world space
	vec3 diffuse;             // the light's diffuse color
	vec4 nearFar;             // the near and far settings from the light's View page
	                          // contains (near, far, far - near, 1.0 / (far - near))
	vec4 lightSize;           // values from the light's Light Size parameter
	                          // contains (light width, light height, light width / light projection width, light height / light projection height)
	vec4 misc;                // misc parameter values, right now it contains
	                          // (max shadow softness, 0, 0, 0)
	vec4 coneLookupScaleBias; // applied to a cone light's contribution to create the spot-lit area
	                          // contains (coneLookupScale, coneLookupBias, 1.0, 0.0).
	                          // If the light is not a cone light, this will contain (0.0, 0.0, 0.0, 0.0).
	vec4 attenScaleBiasRoll;  // applied to the light's distance from the point to get an attenuation value
	                          // contains (attenuation scale, attenuation bias, attenuation rolloff, 0)
	mat4 shadowMapMatrix;     // transforms a point in world space into the projection space of the shadow-mapped light
	                          // also rescales projection space from [-1, 1] to [0, 1], so the value can be used
	                          // to look up into a shadow map
	mat4 shadowMapCamMatrix;  // transforms a point in world space into the camera space of the shadow-mapped light
	vec4 shadowMapRes;        // the resolution of the shadow map associated with this light
	                          // contains (1.0 / width, 1.0 / height, width, height)
	                          // filled with 0s if the light isn't shadow mapped
	mat4 projMapMatrix;       // transforms a point in world space into the projection map space of the light
	                          // used with the textureProj() function for projection mapping
};
uniform TDLight uTDLights[TD_NUM_LIGHTS];
</syntaxhighlight>
 
<syntaxhighlight lang=glsl>
struct TDEnvLight
{
	vec3 color;       // color of the env light (not counting its environment map)
	mat3 rotate;      // rotation to be applied to the env light
	vec3 shCoeffs[9]; // spherical harmonic coefficients calculated from the environment map, used for diffuse lighting
};
uniform TDEnvLight uTDEnvLights[TD_NUM_ENV_LIGHTS];
</syntaxhighlight>
<syntaxhighlight lang=glsl>
// The environment maps for the env lights. These will be black for lights that don't have a map
// of that particular dimensionality.
uniform samplerCube sTDEnvLightCubeMaps[TD_NUM_ENV_LIGHTS];
uniform sampler2D sTDEnvLight2DMaps[TD_NUM_ENV_LIGHTS];
</syntaxhighlight>
<syntaxhighlight lang=glsl>
// Shadow maps. These are both the same maps, but set up in compare or sampling mode.
// For lights that aren't shadow mapped, these will be filled with 0s.
// The first one is used with texture(sampler2DShadow, vec3) or textureProj(sampler2DShadow, vec4)
// for automatic depth comparison and hardware percentage-closer filtering.
uniform sampler2DShadow sTDCompareShadowMaps[TD_NUM_LIGHTS];
// This one is used for directly getting the depth from the shadow map using
// texture(sampler2D, vec2) or textureProj(sampler2D, vec3).
// If using hard shadows, the values are in [0, 1] post-projection depth units.
// If using soft shadows, the values are in the camera space of the light (not the camera being rendered from).
uniform sampler2D sTDShadowMaps[TD_NUM_LIGHTS];
</syntaxhighlight>
<syntaxhighlight lang=glsl>
// The projection maps defined in the Projection Map parameter of the Light COMP.
// The map will always return black if the light doesn't have a projection map defined.
uniform sampler2D sTDProjMaps[TD_NUM_LIGHTS];
</syntaxhighlight>
<syntaxhighlight lang=glsl>
// The falloff ramp from where the cone light starts to fade out until it reaches black.
// Used in combination with uTDLights[].coneLookupScaleBias.
uniform sampler1D sTDConeLookups[TD_NUM_LIGHTS];
</syntaxhighlight>
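As a rough illustration of how these uniforms fit together, the sketch below applies distance attenuation and a projection map for light 0 in a pixel shader. It assumes a light position member declared earlier in the TDLight struct (above this excerpt); the varying and function names here are placeholders, not TD built-ins, and the exact scale/bias convention should be verified against your render.

<syntaxhighlight lang=glsl>
// Sketch only: 'ioWorldSpacePos' is a placeholder varying from the vertex shader.
in vec3 ioWorldSpacePos;

vec3 applyLight0(vec3 baseColor)
{
	// Distance attenuation: scale/bias the distance to the light,
	// as described by attenScaleBiasRoll (x = scale, y = bias).
	float dist = distance(ioWorldSpacePos, uTDLights[0].position.xyz);
	float atten = clamp(dist * uTDLights[0].attenScaleBiasRoll.x +
	                    uTDLights[0].attenScaleBiasRoll.y, 0.0, 1.0);

	// Projection map lookup using the light's projMapMatrix,
	// which already rescales into [0, 1] texture space.
	vec4 projCoord = uTDLights[0].projMapMatrix * vec4(ioWorldSpacePos, 1.0);
	vec3 projColor = textureProj(sTDProjMaps[0], projCoord).rgb;

	return baseColor * uTDLights[0].diffuse * projColor * atten;
}
</syntaxhighlight>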
 
== TouchDesigner specific Functions ==
  
 
=== Vertex Shader Only Functions ===
<syntaxhighlight lang=glsl>
// Transforms a point from world space to projection space.
// These functions should always be used to output to gl_Position, allowing TouchDesigner to do custom manipulations
// of the values as needed for special operations and projections.
// The extra manipulations that are currently done are:
// 1. Conversions to other projections such as FishEye
// 2. Conversions to Quad Reprojection using TDQuadReproject()
// 3. Adjustments needed for picking using TDPickAdjust()
// When using this function you should not call the above functions yourself, since it does those for you.
// Both the vec4 and vec3 versions of TDWorldToProj() treat the xyz as a point, not a vector.
vec4 TDWorldToProj(vec4 v);
vec4 TDWorldToProj(vec3 v);

// These ones take an extra 'uv' coordinate, which is used when UV Unwrapping is done in the Render TOP.
// If the version without the uv is used, then TDInstanceTexCoord(TDUVUnwrapCoord()) will be used to get the texture coordinate.
vec4 TDWorldToProj(vec4 v, vec3 uv);
vec4 TDWorldToProj(vec3 v, vec3 uv);
</syntaxhighlight>
<syntaxhighlight lang=glsl>
// Returns the instance ID for the current instance. This should always be used, not gl_InstanceID directly.
// For more information look at the Instancing section of this document.
int TDInstanceID();
</syntaxhighlight>
<syntaxhighlight lang=glsl>
// Returns the index of the camera for this particular vertex. Needed to support Multi-Camera Rendering.
// This is always 0-based, and it does not reflect which camera is currently being used from the Render TOP.
// For that, use uTDCamInfos[TDCameraIndex()].renderTOPCameraIndex.
int TDCameraIndex();
</syntaxhighlight>
  
<syntaxhighlight lang=glsl>
// Deforms a point or vector using instancing and skinned deforms. The returned result is in world space.
// Be sure to use the *Vec version in the case of vectors to get correct results.
// Also use the *Norm version when deforming a normal, to make sure it still matches the surface correctly.
// These functions will internally call TDSkinnedDeform() and TDInstanceDeform().
vec4 TDDeform(vec4 pos);
vec4 TDDeform(vec3 pos);
vec3 TDDeformVec(vec3 v);
vec3 TDDeformNorm(vec3 n);
</syntaxhighlight>
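Putting the two core vertex functions together, a minimal vertex shader main() typically looks like the sketch below. It assumes the standard built-in vertex attributes (such as P) that TouchDesigner declares for GLSL MATs.

<syntaxhighlight lang=glsl>
// A minimal vertex shader main(), as a sketch.
void main()
{
	// Deform the SOP-space position into world space (skinning + instancing).
	vec4 worldSpacePos = TDDeform(P);
	// Always go through TDWorldToProj() rather than multiplying projection matrices yourself.
	gl_Position = TDWorldToProj(worldSpacePos);
}
</syntaxhighlight>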
<syntaxhighlight lang=glsl>
// These versions allow you to control the instance ID used for the deform.
vec4 TDDeform(int instanceID, vec3 pos);
vec3 TDDeformVec(int instanceID, vec3 v);
vec3 TDDeformNorm(int instanceID, vec3 n);
</syntaxhighlight>
<syntaxhighlight lang=glsl>
// ** In general you don't need to call any of the below functions; just calling TDDeform() or TDDeformVec() will
// do all the work for you.
// Just the skinning or instancing portion of the deforms.
// The returned position/vector is in SOP space for the *Skinned* versions, and world space for the *Instance* versions.
vec4 TDSkinnedDeform(vec4 pos);
vec3 TDSkinnedDeformVec(vec3 vec);
vec4 TDInstanceDeform(vec4 pos);
vec3 TDInstanceDeformVec(vec3 vec);
</syntaxhighlight>
<syntaxhighlight lang=glsl>
// You also don't usually need to call these, but they are available for special cases.
// For the instancing functions, if you don't provide an index, TDInstanceID() will be used.
mat4 TDBoneMat(int boneIndex);
mat4 TDInstanceMat(int instanceID);
mat4 TDInstanceMat();
// Returns a 3x3 matrix only. Useful if you are only working with vectors, not positions.
// If you are using both, it is faster to just call TDInstanceMat() and cast the result to a mat3
// when required.
mat3 TDInstanceMat3(int instanceID);
mat3 TDInstanceMat3();
</syntaxhighlight>
<syntaxhighlight lang=glsl>
// To calculate the texture coordinates for your instance (if used in the Geometry COMP's parameters), use these functions.
// For texture coordinates the passed-in variable 't' is the current texture coordinate to be modified/replaced.
vec3 TDInstanceTexCoord(int instanceID, vec3 t);
vec3 TDInstanceTexCoord(vec3 t);
</syntaxhighlight>
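For instance, per-instance texture coordinates can be fetched in the vertex shader and passed down to the pixel shader. This sketch assumes the built-in uv attribute array and uses a placeholder varying name:

<syntaxhighlight lang=glsl>
// Sketch: apply per-instance texture coordinate overrides from the Geometry COMP.
out vec3 texCoord0; // placeholder varying name

void main()
{
	vec4 worldSpacePos = TDDeform(P);
	// uv[0] is the first texture layer of the current vertex.
	texCoord0 = TDInstanceTexCoord(TDInstanceID(), uv[0]);
	gl_Position = TDWorldToProj(worldSpacePos);
}
</syntaxhighlight>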
  
<syntaxhighlight lang=glsl>
// TDWorldToProj() will already apply the Quad Reproject feature if used by the [[Camera COMP]].
// However in some cases you may be doing custom operations that require it to be applied manually.
// This function just returns the point unchanged if Quad Reproject isn't being used.
vec4 TDQuadReproject(vec4 v, int camIndex);
</syntaxhighlight>
  
<syntaxhighlight lang=glsl>
// Available in builds 2019.32020 or later.
// TDWorldToProj() will already apply the picking adjustment if required (when picking is occurring).
// However in some cases you may be doing custom operations that require it to be applied manually.
// This function just returns the point unchanged if picking isn't active for this render.
vec4 TDPickAdjust(vec4 v, int camIndex);
</syntaxhighlight>
  
<syntaxhighlight lang=glsl>
// Returns the uv coordinate that was selected for UV unwrapping in the Render TOP.
vec3 TDUVUnwrapCoord();
</syntaxhighlight>
 
  
=== Geometry Shader Only Functions ===
<syntaxhighlight lang=glsl>
// Similar to the ones in the vertex shader, but these require a camera index, since it
// needs to be passed through to the geometry shader via an input variable.
vec4 TDWorldToProj(vec4 v, vec3 uv, int cameraIndex);
vec4 TDWorldToProj(vec3 v, vec3 uv, int cameraIndex);
vec4 TDWorldToProj(vec4 v, int cameraIndex);
vec4 TDWorldToProj(vec3 v, int cameraIndex);
vec4 TDQuadReproject(vec4 v, int camIndex);
</syntaxhighlight>
  
 
=== Pixel Shader Only Functions ===
 
<syntaxhighlight lang=glsl>
// This function is provided as a wrapper for gl_FrontFacing.
// It is required since some GPUs (mainly Intel on macOS) have broken
// functionality for gl_FrontFacing.
// On most GPUs this just returns gl_FrontFacing. On GPUs where the behavior
// is broken, an alternative method using the position and normal is used to
// determine if the pixel is front or back facing.
// The position and normal should be in the same space, and the normal must be normalized.
bool TDFrontFacing(vec3 position, vec3 normal);
</syntaxhighlight>
 
<syntaxhighlight lang=glsl>
// Call this function to give TouchDesigner a chance to discard some pixels if appropriate.
// This is used in things such as order-independent transparency and dual-paraboloid rendering.
// For best performance call it at the start of the pixel shader.
// It will do nothing if no features that require it are active, so it's safe to always call it.
void TDCheckDiscard();
</syntaxhighlight>
<syntaxhighlight lang=glsl>
// Call this function to apply the alpha test to the current pixel. This function will do nothing
// if the alpha test is disabled, so it can safely always be called.
void TDAlphaTest(float alphaValue);
</syntaxhighlight>
<syntaxhighlight lang=glsl>
// Call this to apply the Camera COMP's fog to the passed color. It also requires the world space vertex position.
// This function will do nothing if fog is disabled, so it's safe to always call it at the end of your shader;
// there is no performance impact from calling it when fog is disabled.
// The cameraIndex should be passed through from the vertex shader using a varying, sourced from TDCameraIndex().
vec4 TDFog(vec4 curColor, vec3 worldSpacePos, int cameraIndex);
</syntaxhighlight>
<syntaxhighlight lang=glsl>
// Call this at the end of your shader to apply a dither to your final color. This function does nothing
// if dithering is disabled in the Render TOP's parameters.
vec4 TDDither(vec4 curColor);
</syntaxhighlight>
<syntaxhighlight lang=glsl>
// Pass any color value through this function before writing it out to a color buffer.
// This is needed to ensure that color channels are output to the correct channels
// in the color buffer, based on hardware limitations that may store alpha-only
// textures as red-only internally, for example.
vec4 TDOutputSwizzle(vec4 curColor);
</syntaxhighlight>
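A pixel shader main() that uses all of these helpers in the recommended order might look like the following sketch; the varying names and output name are placeholders, not TD built-ins.

<syntaxhighlight lang=glsl>
// Sketch of the recommended call order in a pixel shader.
in vec3 ioWorldSpacePos;    // placeholder varyings from the vertex shader
flat in int ioCameraIndex;  // sourced from TDCameraIndex()

out vec4 fragColor;

void main()
{
	TDCheckDiscard();              // first, so discarded pixels exit early
	vec4 color = vec4(1.0);        // ... lighting/texturing would go here ...
	TDAlphaTest(color.a);          // may discard based on the Alpha Test parameters
	color = TDFog(color, ioWorldSpacePos, ioCameraIndex);
	color = TDDither(color);
	fragColor = TDOutputSwizzle(color);
}
</syntaxhighlight>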
  
 
The TDLighting() functions are called per light to determine that light's diffuse and specular contributions.
Shadowing and projection mapping are all handled automatically for you inside these functions.
* The <syntaxhighlight lang=glsl inline>diffuseContrib, specularContrib and specularContrib2</syntaxhighlight> parameters will be filled with the results.
* <syntaxhighlight lang=glsl inline>lightIndex</syntaxhighlight> is the index of the light to calculate.
* <syntaxhighlight lang=glsl inline>worldSpacePos</syntaxhighlight> is the world space vertex position.
* <syntaxhighlight lang=glsl inline>shadowStrength</syntaxhighlight> is a scalar on the shadow to increase or decrease its effect; for example, a value of 0.5 would give a maximum 50% shadow.
* <syntaxhighlight lang=glsl inline>worldSpaceNorm</syntaxhighlight> is the normalized world space normal.
* <syntaxhighlight lang=glsl inline>vertToCamVec</syntaxhighlight> is the normalized vector from the vertex position to the camera position.
* <syntaxhighlight lang=glsl inline>shininess</syntaxhighlight> is the specular shininess exponent.
  
 
==== Physically Based (PBR) Lighting ====
 
<syntaxhighlight lang=glsl>
// For all regular lights. Should be called in a loop from 0 to TD_NUM_LIGHTS.
void TDLightingPBR(
	inout vec3 diffuseContrib,
	inout vec3 specularContrib,
	int lightIndex,
	vec3 diffuseColor,
	vec3 specularColor,
	vec3 worldSpacePos,
	vec3 worldSpaceNormal,
	float shadowStrength,
	vec3 shadowColor,
	vec3 vertToCamVec,
	float roughness);
</syntaxhighlight>
<syntaxhighlight lang=glsl>
// For environment lights. Should be called in a loop from 0 to TD_NUM_ENV_LIGHTS.
void TDEnvLightingPBR(
	inout vec3 diffuseContrib,
	inout vec3 specularContrib,
	int lightIndex,
	vec3 diffuseColor,
	vec3 specularColor,
	vec3 worldSpaceNormal,
	vec3 vertToCamVec,
	float roughness,
	float ambientOcclusion);
</syntaxhighlight>
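The two loops can be sketched as below. This assumes the inout contributions accumulate across calls (they are declared inout for that purpose); the function name and parameter values such as the shadow strength of 1.0 are placeholders for illustration.

<syntaxhighlight lang=glsl>
// Sketch: accumulate PBR lighting over all regular and environment lights.
vec3 shadePBR(vec3 worldSpacePos, vec3 normal, vec3 vertToCamVec,
              vec3 baseColor, vec3 specularColor, float roughness)
{
	vec3 diffuseSum = vec3(0.0);
	vec3 specularSum = vec3(0.0);
	for (int i = 0; i < TD_NUM_LIGHTS; i++)
	{
		TDLightingPBR(diffuseSum, specularSum, i, baseColor, specularColor,
		              worldSpacePos, normal, 1.0, vec3(0.0), vertToCamVec, roughness);
	}
	for (int i = 0; i < TD_NUM_ENV_LIGHTS; i++)
	{
		TDEnvLightingPBR(diffuseSum, specularSum, i, baseColor, specularColor,
		                 normal, vertToCamVec, roughness, 1.0);
	}
	return diffuseSum + specularSum;
}
</syntaxhighlight>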
  
 
==== Phong Lighting ====

You can disable features by calling a different version of TDLighting().
 
<syntaxhighlight lang=glsl>
// All features
void TDLighting(
	out vec3 diffuseContrib,
	out vec3 specularContrib,
	out vec3 specularContrib2,
	int lightIndex,
	vec3 worldSpacePos,
	vec3 worldSpaceNorm,
	float shadowStrength,
	vec3 shadowColor,
	vec3 vertToCamVec,
	float shininess,
	float shininess2
	);
</syntaxhighlight>
<syntaxhighlight lang=glsl>
// Diffuse and specular, but no specular2, with shadows
void TDLighting(
	out vec3 diffuseContrib,
	out vec3 specularContrib,
	int lightIndex,
	vec3 worldSpacePos,
	vec3 worldSpaceNorm,
	float shadowStrength,
	vec3 shadowColor,
	vec3 vertToCamVec,
	float shininess
	);
</syntaxhighlight>
<syntaxhighlight lang=glsl>
// Diffuse and both speculars, no shadows
void TDLighting(
	out vec3 diffuseContrib,
	out vec3 specularContrib,
	out vec3 specularContrib2,
	int lightIndex,
	vec3 worldSpacePos,
	vec3 worldSpaceNorm,
	float shininess,
	float shininess2,
	vec3 vertToCamVec
	);
</syntaxhighlight>
<syntaxhighlight lang=glsl>
// Diffuse only, with shadows
void TDLighting(
	out vec3 diffuseContrib,
	int lightIndex,
	vec3 worldSpacePos,
	vec3 worldSpaceNorm,
	float shadowStrength,
	vec3 shadowColor
	);
</syntaxhighlight>
<syntaxhighlight lang=glsl>
// Diffuse only, no shadows
void TDLighting(
	out vec3 diffuseContrib,
	int lightIndex,
	vec3 worldSpacePos,
	vec3 worldSpaceNorm
	);
</syntaxhighlight>
<syntaxhighlight lang=glsl>
// Diffuse and specular, no specular2 or shadows
void TDLighting(
	out vec3 diffuseContrib,
	out vec3 specularContrib,
	int lightIndex,
	vec3 worldSpacePos,
	vec3 worldSpaceNorm,
	vec3 vertToCamVec,
	float shininess
	);
</syntaxhighlight>
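A typical Phong lighting loop using the diffuse-plus-specular-with-shadows variant is sketched below. Since the contributions are out parameters here, each light's result is summed manually; the function name and the shininess value of 40.0 are placeholders.

<syntaxhighlight lang=glsl>
// Sketch: sum Phong lighting contributions from every light.
vec4 shadePhong(vec3 worldSpacePos, vec3 worldSpaceNorm, vec3 vertToCamVec)
{
	vec3 diffuseSum = vec3(0.0);
	vec3 specularSum = vec3(0.0);
	for (int i = 0; i < TD_NUM_LIGHTS; i++)
	{
		vec3 diffuseContrib, specularContrib;
		TDLighting(diffuseContrib, specularContrib, i, worldSpacePos,
		           worldSpaceNorm, 1.0, vec3(0.0), vertToCamVec, 40.0);
		diffuseSum += diffuseContrib;
		specularSum += specularContrib;
	}
	return vec4(diffuseSum + specularSum, 1.0);
}
</syntaxhighlight>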
 
==== Common Lighting Functions ====
 
<syntaxhighlight lang=glsl>
// In general you don't need to use these functions; they are called for you in the TDLighting() functions.
// These functions return the shadow strength at the current pixel for the light at the given index.
// They also require the world space vertex position to do their calculations.
// They return undefined results if the shadow isn't mapped using the chosen shadow type.
// The returned value is 0 for no shadow, 1 for 100% shadowed.
// Due to percentage-closer filtering, hard shadows can still have values between 0 and 1 at the edges of the shadow.
float TDHardShadow(int lightIndex, vec3 worldSpacePos);
// This one will apply soft shadows with both 25 search steps and 25 filter samples.
float TDSoftShadow(int lightIndex, vec3 worldSpacePos);
// Allows for control of search steps and filter samples.
float TDSoftShadow(int lightIndex, vec3 worldSpacePos, int samples, int searchSteps);
</syntaxhighlight>
<syntaxhighlight lang=glsl>
// Gets the projection map color for the given world space vertex position.
// No other lighting calculations are applied to the returned color.
// If the light at the given index is not using a projection map, then 'defaultColor' is returned.
vec4 TDProjMap(int lightIndex, vec3 worldSpacePosition, vec4 defaultColor);
</syntaxhighlight>
  
 
=== Common Functions ===
  
 
==== Matrix functions ====
<syntaxhighlight lang=glsl>
// Note: Became available in all stages starting in 099.2017.35800. Older versions only have these in the vertex shader.
// Creates a rotation matrix that rotates starting from looking down +Z, to the 'forward' vector direction
mat3 TDRotateToVector(vec3 forward, vec3 up);
// Creates a rotation matrix that rotates around the 'axis' by the given number of 'radians'
mat3 TDRotateOnAxis(float radians, vec3 axis);
// Creates a rotation matrix to rotate from vector 'from' to vector 'to'. The solution isn't particularly stable, but useful in some cases.
mat3 TDCreateRotMatrix(vec3 from, vec3 to);
</syntaxhighlight>
<syntaxhighlight lang=glsl>
// Takes a surface normal, the tangent to the surface, and a handedness value (either -1 or 1).
// Returns a matrix that will convert vectors from tangent space to the space the normal and tangent are in.
// Both the normal and the tangent must be normalized before this function is called.
// The w coordinate of the T attribute created by the [[Attribute Create SOP]] contains the handedness
// that should be passed in as-is.
mat3 TDCreateTBNMatrix(vec3 normal, vec3 tangent, float handedness);
</syntaxhighlight>
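A common use of TDCreateTBNMatrix() is normal mapping in the pixel shader. In this sketch the sampler, varying, and function names are placeholders, not TD built-ins; the tangent is assumed to arrive as a vec4 T attribute with handedness in w.

<syntaxhighlight lang=glsl>
// Sketch: normal mapping with TDCreateTBNMatrix().
uniform sampler2D sNormalMap; // placeholder custom sampler
in vec3 ioWorldSpaceNorm;
in vec4 ioTangent;            // T attribute from an Attribute Create SOP (w = handedness)
in vec3 ioTexCoord;

vec3 worldSpaceNormalFromMap()
{
	mat3 tbn = TDCreateTBNMatrix(normalize(ioWorldSpaceNorm),
	                             normalize(ioTangent.xyz), ioTangent.w);
	// Unpack the tangent-space normal from [0, 1] to [-1, 1], then move it to world space.
	vec3 tangentNormal = texture(sNormalMap, ioTexCoord.st).xyz * 2.0 - 1.0;
	return normalize(tbn * tangentNormal);
}
</syntaxhighlight>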
 
==== Perlin and Simplex noise functions ====
 
<syntaxhighlight lang=glsl>
// Noise functions
// These will return the same result for the same input.
// Results are between -1 and 1.
// They can be slow, so just be aware of that when using them.
// Different dimensionality is selected by passing a vec2, vec3 or vec4.
float TDPerlinNoise(vec2 v);
float TDPerlinNoise(vec3 v);
float TDPerlinNoise(vec4 v);
float TDSimplexNoise(vec2 v);
float TDSimplexNoise(vec3 v);
float TDSimplexNoise(vec4 v);
</syntaxhighlight>
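For example, the noise functions can displace vertices along their normals in a vertex shader. This sketch assumes the built-in P and N attributes; the uniform name is a placeholder custom parameter.

<syntaxhighlight lang=glsl>
// Sketch: displace vertices along their normal with simplex noise.
uniform float uNoiseAmount; // placeholder custom uniform

void main()
{
	vec4 worldSpacePos = TDDeform(P);
	vec3 worldSpaceNorm = TDDeformNorm(N);
	// Same input always gives the same result, so animate by offsetting the input.
	worldSpacePos.xyz += worldSpaceNorm * TDSimplexNoise(worldSpacePos.xyz) * uNoiseAmount;
	gl_Position = TDWorldToProj(worldSpacePos);
}
</syntaxhighlight>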
 
==== HSV Conversion ====
 
<syntaxhighlight lang=glsl>
// Converts between RGB and HSV color space
vec3 TDHSVToRGB(vec3 c);
vec3 TDRGBToHSV(vec3 c);
</syntaxhighlight>
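A typical use is shifting the hue of a color in the pixel shader; the uniform name here is a placeholder custom parameter in 0-1 hue units.

<syntaxhighlight lang=glsl>
// Sketch: shift the hue of a color using the HSV helpers.
uniform float uHueOffset; // placeholder custom uniform

vec3 hueShift(vec3 rgb)
{
	vec3 hsv = TDRGBToHSV(rgb);
	hsv.x = fract(hsv.x + uHueOffset); // wrap hue around
	return TDHSVToRGB(hsv);
}
</syntaxhighlight>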
 
==== Space Conversions ====
<syntaxhighlight lang=glsl>
// Converts a 0-1 equirectangular texture coordinate into cubemap coordinates.
// A 0 for the U coordinate corresponds to the middle of the +X face. So along the vec3(1, Y, 0) plane.
// As U rises, equirectangular coordinates rotate from +X, to +Z, then -X and -Z.
vec3 TDEquirectangularToCubeMap(vec2 equiCoord);
</syntaxhighlight>

<syntaxhighlight lang=glsl>
// Converts from cubemap coordinates to equirectangular
// cubemapCoord MUST be normalized before calling this function.
vec2 TDCubeMapToEquirectangular(vec3 cubemapCoord);
</syntaxhighlight>

<syntaxhighlight lang=glsl>
// Available in builds 2019.18140 and later.
// This version will also output a mipmap bias float. This float should be passed in the 'bias'
// parameter of texture(), to help select the mipmap level. This helps avoid seams at the edge
// of the equirectangular map.
vec2 TDCubeMapToEquirectangular(vec3 cubemapCoord, out float mipMapBias);
</syntaxhighlight>
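For example, to sample an equirectangular texture with a direction vector; sEquirectMap is an assumed sampler name.
<syntaxhighlight lang=glsl>
uniform sampler2D sEquirectMap; // assumed name for the equirectangular texture

vec4 sampleEquirect(vec3 dir)
{
	float mipMapBias;
	// The direction must be normalized before the conversion
	vec2 uv = TDCubeMapToEquirectangular(normalize(dir), mipMapBias);
	return texture(sEquirectMap, uv, mipMapBias);
}
</syntaxhighlight>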
 
 
 
 
== Working with Lights ==
  
 
====Cone Lighting Technique====
  
TouchDesigner's built-in shaders use a custom cone-lighting technique that you can mimic in your shader. The intensity of the cone light is pre-computed into a 1D texture (a lookup table) to reduce the workload in the shader. The start of the 1D texture (texture coordinate 0.0) is the intensity of the light at the edge of the cone angle (the first pixel is always 0). The end of the 1D texture (texture coordinate 1.0) is the intensity of the light at the center of the cone. This lookup table is declared as:
<syntaxhighlight lang=glsl>
uniform sampler1D sTDConeLookups[TD_NUM_LIGHTS];
</syntaxhighlight>
 
A second helper uniform is also given to the shader to make looking up in the 1D texture easier:
<syntaxhighlight lang=glsl>
uTDLights[i].coneLookupScaleBias;
</syntaxhighlight>
 
To correctly index into this lookup table, the following algorithm should be used:
<syntaxhighlight lang=glsl>
// 'spot' is the spot vector
// 'lightV' is the vector coming from the light position, pointing towards
// the point on the geometry we are shading.
// It doesn't matter which space these vectors are in (camera space, object space),
// as long as they are both in the same space.
// Determine the cosine of the angle between the two vectors, will be between [-1, 1]
float spotEffect = dot(spot, lightV);
// Now rescale the value using the special helper uniform so that value is between [0, 1]
// A value of 0 will mean the angle between the two vectors is the same as the total
// cone angle + cone delta of the light
spotEffect = (spotEffect * uTDLights[i].coneLookupScaleBias.x) + uTDLights[i].coneLookupScaleBias.y;
// Finally look up into the lookup table
float dimmer = texture(sTDConeLookups[i], spotEffect).r;
// You can now multiply the strength of the light by 'dimmer' to create the correct
// light intensity based on this pixel's position in or outside the cone light's area of
// influence
</syntaxhighlight>
====Attenuation====

Attenuation is handled for you in the TDLighting() function, but if you want to add it yourself this section describes how.

To determine the attenuation from the light to a point in space, use this function. <code>lightDist</code> is the distance from the vertex to the light.
<syntaxhighlight lang=glsl>
// Will return 1 if there is no attenuation, 0 if the light is fully attenuated,
// and something in between if it's in the fade-off region.
float TDAttenuateLight(int lightIndex, float lightDist);
</syntaxhighlight>

The math behind TDAttenuateLight is as follows. TouchDesigner's built-in shaders use a custom attenuation technique. Like the cone lighting, a pre-calculated scale and bias is provided for you that will allow you to get the correct attenuated intensity of the light. The uniform is:
<syntaxhighlight lang=glsl>
uTDLights[i].attenScaleBiasRoll // Contains (1 / -(attenEnd - attenStart), attenEnd / (attenEnd - attenStart), attenRolloff)
</syntaxhighlight>

TDAttenuateLight is defined as:
<syntaxhighlight lang=glsl>
float TDAttenuateLight(int lightIndex, float lightDist)
{
    float lightAtten = lightDist * uTDLights[lightIndex].attenScaleBiasRoll.x;
    lightAtten += uTDLights[lightIndex].attenScaleBiasRoll.y;
    lightAtten = clamp(lightAtten, 0.0, 1.0) * 1.57073963;
    lightAtten = sin(lightAtten);
    lightAtten = pow(lightAtten, uTDLights[lightIndex].attenScaleBiasRoll.z);
    return lightAtten;
}
</syntaxhighlight>
 
==== Projection and Shadow Mapping ====
  
 
=====Projection Mapping=====
 
For projection mapping you then use this math to find its texture coordinates. The 3rd component isn't necessary since a projection map is just a 2D image (the 4th is, though, as part of the math).
 
<syntaxhighlight lang=glsl>
// Notice how we remove the z coordinate here
vec3 projCoord = (uTDLights[i].projMapMatrix * worldSpaceVert).xyw;
</syntaxhighlight>
 
Now in the pixel shader, sample the projection map with this code:
 
<syntaxhighlight lang=glsl>
vec4 projColor = textureProj(sTDProjMaps[i], projCoord);
</syntaxhighlight>
 
Now simply use projColor however you see fit in your shader.
  
  
 
For shadow mapping it's very similar, only all 4 components are needed since you need the z for depth comparison.
 
<syntaxhighlight lang=glsl>
vec4 shadowCoord = uTDLights[i].shadowMapMatrix * camSpaceVert;
</syntaxhighlight>
 
Sample the shadow map with this code:
 
<syntaxhighlight lang=glsl>
float shadowDepth = textureProj(sTDShadowMaps[i], shadowCoord).r;
</syntaxhighlight>
 
Compare shadowDepth with (shadowCoord.z / shadowCoord.w) to determine if the pixel is in a shadow or not.
 
<syntaxhighlight lang=glsl>
if ((shadowCoord.z / shadowCoord.w) >= shadowDepth)
{
    // then it's in the shadow
}
else
{
    // It's not in the shadow
}
</syntaxhighlight>
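Putting the comparison to work, the result can be folded into the lighting as a simple factor; shadowStrength is an illustrative name.
<syntaxhighlight lang=glsl>
float shadowDepth = textureProj(sTDShadowMaps[i], shadowCoord).r;
// 0.0 when the pixel is shadowed, 1.0 when it is lit
float shadowStrength = ((shadowCoord.z / shadowCoord.w) >= shadowDepth) ? 0.0 : 1.0;
// Multiply this light's contribution by shadowStrength
</syntaxhighlight>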
 
== Multiple Render Targets ==
  
 
Using the '# Of Color Buffers' parameter in the [[Render TOP]] along with the [[Render Select TOP]], you can write GLSL shaders that output multiple color values per pixel. This is done by declaring and writing to pixel shader outputs declared like this:
<syntaxhighlight lang=glsl>
layout(location = 0) out vec4 fragColor[TD_NUM_COLOR_BUFFERS];
</syntaxhighlight>
 
The constant TD_NUM_COLOR_BUFFERS will automatically be set for you based on the render settings.
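For example, a pixel shader could write the shaded color to the first buffer and a second value, such as a world-space normal, to the next. A sketch only; vWorldNorm is an illustrative input name, and the preprocessor guard keeps the shader compiling when only one buffer is attached.
<syntaxhighlight lang=glsl>
layout(location = 0) out vec4 fragColor[TD_NUM_COLOR_BUFFERS];
in vec3 vWorldNorm; // illustrative: passed from the vertex shader

void main()
{
	fragColor[0] = vec4(1.0); // the normal shaded color goes here
#if TD_NUM_COLOR_BUFFERS > 1
	fragColor[1] = vec4(normalize(vWorldNorm), 1.0);
#endif
}
</syntaxhighlight>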
  
  
 
Although in general you can transform your points/vectors using the built-in model/view and projection matrices at will, when outputting to gl_Position you should use the built-in functions. These functions allow TouchDesigner to do some manipulation of the values for final output, depending on the rendering setup. For example for doing optimized [[#Multi-Camera_Rendering|Multi-Camera Rendering]], the position will need to be multiplied by the correct camera for this execution of the vertex shader. To give TouchDesigner a chance to do this manipulation, you should call the built-in functions to transform your vertex position:
 
<syntaxhighlight lang=glsl>
vec4 TDWorldToProj(vec4 v);
vec4 TDWorldToProj(vec3 v);
</syntaxhighlight>
 
So for example at the end of your shader you would do:
 
<syntaxhighlight lang=glsl>
gl_Position = TDWorldToProj(worldSpacePosition);
</syntaxhighlight>
 
== Working with Deforms ==
  
 
Use the *Vec version when deforming vectors.<br>
 
'''These functions always return the point/vector in World space, not model/SOP.'''
 
<syntaxhighlight lang=glsl>
vec4 TDDeform(vec4 p);
vec3 TDDeform(vec3 p);
vec3 TDDeformVec(vec3 v);
vec3 TDDeformNorm(vec3 v);
</syntaxhighlight>
  
  
 
As the shader writer, it's your job to manipulate the vertex attributes such as the position and normal (since there's no place for TouchDesigner to do it if you're the one writing the shader), so it's up to you to call the TDDeform() function. In general you will simply call it like this:
<syntaxhighlight lang=glsl>
vec4 worldSpaceVert = TDDeform(vec4(P, 1.0));
vec3 worldSpaceNorm = TDDeformNorm(N);
</syntaxhighlight>
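Putting the deform calls together with the projection functions described above, a minimal vertex shader main() might look like this; vWorldNorm is an illustrative output name.
<syntaxhighlight lang=glsl>
out vec3 vWorldNorm; // illustrative: world-space normal for the pixel shader

void main()
{
	// Deform the SOP-space position and normal into world space
	vec4 worldSpaceVert = TDDeform(vec4(P, 1.0));
	vWorldNorm = TDDeformNorm(N);
	// Let TouchDesigner produce the final projected position
	gl_Position = TDWorldToProj(worldSpaceVert);
}
</syntaxhighlight>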
 
However you can use the below declared functions directly.
  
 
==== Functions ====
  
You generally will not need to call these directly; they are called by the <syntaxhighlight lang=glsl inline>TDDeform()</syntaxhighlight> function.
  
 
In the vertex shader:
<syntaxhighlight lang=glsl>
vec4 TDSkinnedDeform(vec4 pos);
vec3 TDSkinnedDeformVec(vec3 vec);
</syntaxhighlight>
 
You can get the bone matrix for the given matrix index with this function:
<syntaxhighlight lang=glsl>
mat4 TDBoneMat(int boneIndex);
</syntaxhighlight>
 
=== Instancing ===
  
 
==== Instance Index/ID ====
  
To calculate the instance ID, use the provided TDInstanceID() function. Do not use gl_InstanceID directly because the number of instances being rendered may be larger than requested due to [[#Multi-Camera_Rendering|Multi-Camera Rendering]]. Shader writers familiar with TouchDesigner088 may also remember the <syntaxhighlight lang=glsl inline>uTDInstanceIDOffset</syntaxhighlight> that had to be used; this does '''not''' need to be used with TDInstanceID().
 
<syntaxhighlight lang=glsl>
int TDInstanceID();
</syntaxhighlight>
 
Since this function is only available in the vertex shader, you will need to pass it onwards to the pixel shader through an out/in, if you require it in the pixel shader.
 
<syntaxhighlight lang=glsl>
// In the vertex shader, declare something like this, and assign vInstanceID = TDInstanceID() in the main()
flat out int vInstanceID;

void main()
{
    vInstanceID = TDInstanceID();
    // other main vertex function stuff
    // ....
}
</syntaxhighlight>
 
And in the pixel shader you can read this value if it's declared like this:
 
<syntaxhighlight lang=glsl>
// Pixel shader
flat in int vInstanceID;
</syntaxhighlight>
This is declared as <syntaxhighlight lang=glsl inline>flat</syntaxhighlight> since <syntaxhighlight lang=glsl inline>int</syntaxhighlight> variable types cannot be interpolated across a primitive.
  
 
==== Deform Functions ====
  
You generally will not need to call any of these directly; they are called by the TDDeform() function. These functions are only available in the vertex shader.
  
 
In the vertex shader:
<syntaxhighlight lang=glsl>
vec4 TDInstanceDeform(vec4 pos);
vec3 TDInstanceDeformVec(vec3 vec);
</syntaxhighlight>

For the transform, access these matrices using the functions:
<syntaxhighlight lang=glsl>
mat4 TDInstanceMat(int instanceIndex);
mat4 TDInstanceMat();
</syntaxhighlight>

These matrices will contain the entire transform, including TX, TY, TZ, SX, SY, SZ as well as Rotate To.
 
==== Attribute Functions ====
  
 
When modifying the texture coordinates, these functions do the texture coordinate modifications per instance. t is the texture coordinate to modify. The version without instanceIndex will use the current instance's index automatically.
<syntaxhighlight lang=glsl>
vec3 TDInstanceTexCoord(int instanceIndex, vec3 t);
vec3 TDInstanceTexCoord(vec3 t);
</syntaxhighlight>

To modify diffuse color, these functions will replace/add/subtract from the original diffuse color. In general you'll want to pass the attribute Cd into these functions to have them modify it. If instance color is not in use, this function will just return the passed in color, unmodified.
<syntaxhighlight lang=glsl>
vec4 TDInstanceColor(vec4 curColor);
vec4 TDInstanceColor(int instanceIndex, vec4 curColor);
</syntaxhighlight>

Custom instance attributes can be retrieved using these functions:
<syntaxhighlight lang=glsl>
vec4 TDInstanceCustomAttrib0();
vec4 TDInstanceCustomAttrib0(int instanceIndex);
vec4 TDInstanceCustomAttrib1();
vec4 TDInstanceCustomAttrib1(int instanceIndex);
vec4 TDInstanceCustomAttrib2();
vec4 TDInstanceCustomAttrib2(int instanceIndex);
vec4 TDInstanceCustomAttrib3();
vec4 TDInstanceCustomAttrib3(int instanceIndex);
</syntaxhighlight>
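As a hypothetical example, the first custom attribute could drive a per-instance tint on top of the instance color; this assumes the geometry provides a Cd attribute.
<syntaxhighlight lang=glsl>
vec4 tint = TDInstanceCustomAttrib0(); // illustrative use: RGBA tint per instance
vec4 instanceColor = TDInstanceColor(Cd) * tint;
</syntaxhighlight>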
  
 
===== Instance Texturing =====
''' These are only available on Nvidia Kepler GPUs or newer, on Windows only.'''
  
 
Instance texturing allows mapping any number of individual textures onto instances. The number of textures available in a single render is extremely high, and access is fast. It avoids needing to use a 2D Texture Array to map multiple images onto instances. Only one type of instance texture is supported at a time, so if the wrong function is called (e.g. the *Cube() version when the instance textures are 2D textures), you will receive a white texture instead of one of the instance textures. Use these functions to fetch the current instance texture for the current instance.
 
<syntaxhighlight lang=glsl>
// AVAILABLE IN THE VERTEX SHADER ONLY
sampler2D TDInstanceTexture2D();
sampler3D TDInstanceTexture3D();
sampler2DArray TDInstanceTexture2DArray();
samplerCube TDInstanceTextureCube();
</syntaxhighlight>
 
If you require more custom control of which instance texture you wish to use, you can use these functions instead:
<syntaxhighlight lang=glsl>
// AVAILABLE IN THE VERTEX SHADER ONLY
// Gives you the textureIndex for the given instanceIndex, or the current instance.
uint TDInstanceTextureIndex(int instanceIndex);
uint TDInstanceTextureIndex();

// Returns the texture for the given textureIndex.
sampler2D TDInstanceTexture2D(int textureIndex);
sampler3D TDInstanceTexture3D(int textureIndex);
sampler2DArray TDInstanceTexture2DArray(int textureIndex);
samplerCube TDInstanceTextureCube(int textureIndex);
</syntaxhighlight>
 
 
  
There are two pieces of information used: the first is the texture index. This comes from the Texture Index parameter in the [[Geometry COMP]]. You can get this texture index with either <syntaxhighlight lang=glsl inline>TDInstanceTextureIndex()</syntaxhighlight> or <syntaxhighlight lang=glsl inline>TDInstanceTextureIndex(int instanceIndex)</syntaxhighlight> (if you wish to get the index of an instance different than the current one).
 
Using this texture index, or some other texture index you've calculated on your own, you can then call the appropriate other TDInstanceTexture*() function to get the sampler for the texture you are looking for.
  
Once you have the sampler, you use it like any other sampler, using the <syntaxhighlight lang=glsl inline>texture()</syntaxhighlight> function to sample a color from it. To use it in the pixel shader you will need to pass this sampler through an in/out variable to the pixel shader for it to gain access to it. '''Note''' Older Nvidia drivers had a bug where passing a sampler* through the in/out parameters wouldn't work. You had to use a <syntaxhighlight lang=glsl inline>uint64_t</syntaxhighlight> as the variable type for the in/out variable, and cast the sampler to and from this when assigning and retrieving it from the in/out variable.
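A sketch of that hand-off with illustrative variable names; on drivers with the bug noted above you would pass a uint64_t instead of the sampler directly.
<syntaxhighlight lang=glsl>
// Vertex shader
flat out sampler2D vInstanceTex; // sampler handed to the pixel shader

void main()
{
	vInstanceTex = TDInstanceTexture2D();
	// ... rest of the vertex work
}
</syntaxhighlight>
<syntaxhighlight lang=glsl>
// Pixel shader
flat in sampler2D vInstanceTex;
in vec2 vUV; // illustrative: texture coordinate from the vertex shader
layout(location = 0) out vec4 fragColor;

void main()
{
	fragColor = texture(vInstanceTex, vUV);
}
</syntaxhighlight>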
  
 
The best way to see this code being used for real is to output a shader from the Phong MAT that is doing instance texturing.
 
== Point Sprites ==
  
When rendering point sprite primitives you are required to write to the vertex shader output <syntaxhighlight lang=glsl inline>gl_PointSize</syntaxhighlight>. This output variable determines how large the point sprite is (in pixels) when it is rendered. If you don't write to the output then your point sizes are undefined.
  
Each point sprite will be rendered as a square of pixels <syntaxhighlight lang=glsl inline>gl_PointSize</syntaxhighlight> pixels wide. The square of pixels will receive texture coordinates from 0-1 over the entire square in the pixel shader input variable <syntaxhighlight lang=glsl inline>gl_PointCoord</syntaxhighlight>.
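A minimal sketch of a point-sprite shader pair; sColorMap is an assumed sampler name.
<syntaxhighlight lang=glsl>
// Vertex shader
void main()
{
	gl_PointSize = 10.0; // each sprite covers a 10x10 pixel square
	gl_Position = TDWorldToProj(TDDeform(vec4(P, 1.0)));
}
</syntaxhighlight>
<syntaxhighlight lang=glsl>
// Pixel shader
uniform sampler2D sColorMap; // assumed name for a sprite texture
layout(location = 0) out vec4 fragColor;

void main()
{
	// gl_PointCoord runs 0-1 across the sprite's square of pixels
	fragColor = texture(sColorMap, gl_PointCoord.st);
}
</syntaxhighlight>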
  
 
== Order Independent Transparency ==
  
 
You can make your shader support Order Independent Transparency by simply adding this line at the start of your pixel shader's main() function. If Order Independent Transparency isn't enabled in the Render TOP, then this function will do nothing.
<syntaxhighlight lang=glsl>
TDCheckOrderIndTrans();
</syntaxhighlight>
 
== Dithering ==
  
 
If dithering is enabled in the [[Render TOP]], you can have this dithering applied to your color by simply calling:
 
<syntaxhighlight lang=glsl>
finalColor = TDDither(finalColor);
</syntaxhighlight>
 
You generally want to do this right at the end of the shader, just before you write the value to your output color. If dithering is disabled in the Render TOP this function will still be available (to avoid compiler errors), but it will leave the color unchanged.
  
  
 
The Render Pick DAT and CHOP do their work with a render operation, so they need to interact with the shader to do their work. If you export a Phong MAT shader you will see the following lines in it:
 
<syntaxhighlight lang=glsl>
#ifndef TD_PICKING_ACTIVE
    // All the typical shader code
#else
    TDWritePickingValues();
#endif
</syntaxhighlight>
 
The key thing that is occurring here is that when picking is active, the define TD_PICKING_ACTIVE is set and only the code inside the #else block is executed. The function:
 
<syntaxhighlight lang=glsl>
void TDWritePickingValues();
</syntaxhighlight>
 
Will write default values for picking, which the Render Pick DAT/CHOP will read. If you have a custom shader that changes vertex positions in a non-standard way, or if you want to output different kinds of information (like a color other than Cd), you can replace the values that have been written by this function afterwards. The values available to you are:
 
<syntaxhighlight lang=glsl>
TDPickVertex {
    vec3 sopSpacePosition;
    vec3 worldSpacePosition;
    vec3 camSpacePosition;
    vec3 sopSpaceNormal;
    vec3 worldSpaceNormal;
    vec3 camSpaceNormal;
    vec3 uv[1];
    flat int instanceId;
    vec4 color;
} vTDPickVert;
</syntaxhighlight>
 
So for example, if you are modifying the vertex position in a way different from the standard TDDeform() way, you could write the newly calculated values like this:

''' Be sure to do this AFTER the call to TDWritePickingValues(), otherwise that call will overwrite your values '''.

<syntaxhighlight lang=glsl>
TDWritePickingValues();
vTDPickVert.sopSpacePosition = newPosition;
vTDPickVert.worldSpacePosition = (uTDMats[TDCameraIndex()].world * vec4(newPosition, 1.0)).xyz;
vTDPickVert.camSpacePosition = (uTDMats[TDCameraIndex()].worldCam * vec4(newPosition, 1.0)).xyz;
</syntaxhighlight>

You do not have to write to all the entries in this structure, but you can for completeness. Only the values that are being read by the Render Pick CHOP/DAT (selected in their parameters) must be filled in.

For custom attributes that you set for picking in the [[Render Pick CHOP]] or [[Render Pick DAT]], the attributes are available in <code>vTDCustomPickVert</code> with the name and size as defined in the Render Pick node.
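Putting the pieces together, a vertex shader that applies a custom position change and keeps picking consistent might be sketched like this (<code>myOffset</code> is a hypothetical custom attribute, not a TouchDesigner-provided name):

<syntaxhighlight lang=glsl>
in vec3 myOffset; // hypothetical custom attribute

void main()
{
	vec3 newPosition = P + myOffset; // the non-standard position change
#ifndef TD_PICKING_ACTIVE
	gl_Position = TDWorldToProj(TDDeform(newPosition));
#else
	TDWritePickingValues();
	// Override the defaults, which were computed from the unmodified position
	vTDPickVert.sopSpacePosition = newPosition;
	vTDPickVert.worldSpacePosition = (uTDMats[TDCameraIndex()].world * vec4(newPosition, 1.0)).xyz;
#endif
}
</syntaxhighlight>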
== Shadertoy ==
 
=== VR Shaders ===
  
Shaders that come from [http://www.shadertoy.com Shadertoy] that support VR rendering will have a <syntaxhighlight lang=glsl inline>mainVR</syntaxhighlight> function defined. Re-creating the <syntaxhighlight lang=glsl inline>fragRayOri</syntaxhighlight> and <syntaxhighlight lang=glsl inline>fragRayDir</syntaxhighlight> variables that function uses inside of TD is simple. In the vertex shader:

<syntaxhighlight lang=glsl>
vec4 worldSpaceVert = TDDeform(P);
vec4 worldSpaceCamPos = uTDMats[TDCameraIndex()].camInverse[3]; // The last column of the camera transform is its position

vec3 fragRayOri = worldSpaceCamPos.xyz;
vec3 fragRayDir = worldSpaceVert.xyz - worldSpaceCamPos.xyz;
// Pass these variables to the pixel shader using 'out' variables with names of your choosing
</syntaxhighlight>

And in the pixel shader you just need to normalize whatever variable <syntaxhighlight lang=glsl inline>fragRayDir</syntaxhighlight> was passed through. The variable that came from <syntaxhighlight lang=glsl inline>fragRayOri</syntaxhighlight> can be used as-is.
 
To support these shaders, which are usually raymarching shaders, you'll want to render geometry that covers the entire viewport, such as putting a sphere around your camera.

=== #version statement ===
  
The #version statement will be added to the code automatically for you. Your code should not have a #version statement, otherwise compile errors may occur.

=== #include statements ===

#include statements are a new feature added to GLSL recently. Almost all modern drivers support them. You can use an #include statement in one DAT to include code from another DAT. The path can be absolute or relative.

 #include </project1/text1>
 #include <../../geo1/text2>
=== Changes from GLSL 1.20 ===

Shaders written for 1.20 will not compile as 3.30 shaders. The language received a large overhaul, changing the name of many key functions and replacing a lot of functionality. All of the changes can be seen in the official GLSL documentation linked to earlier. Some of the more important changes are:

* Removed <syntaxhighlight lang=glsl inline>texture1D(sampler1D, float), texture2D(sampler2D, vec2), etc.</syntaxhighlight> All texture sampling is done with identical function names, regardless of the dimensionality of the texture. e.g. <syntaxhighlight lang=glsl inline>texture(sampler1D, float), or texture(sampler2D, vec2)</syntaxhighlight>.
* Removed the keyword <syntaxhighlight lang=glsl inline>varying</syntaxhighlight>. Instead use <syntaxhighlight lang=glsl inline>in</syntaxhighlight> and <syntaxhighlight lang=glsl inline>out</syntaxhighlight> (depending on whether the value is being output from the shader or input from a previous shader stage). Examples later on in the article.
* Removed the keyword <syntaxhighlight lang=glsl inline>attribute</syntaxhighlight>. Instead just use <syntaxhighlight lang=glsl inline>in</syntaxhighlight> in your vertex shader.
* Removed built-in varyings <syntaxhighlight lang=glsl inline>gl_TexCoord[]</syntaxhighlight>. You'll need to always declare your own variables that get output/input between shader stages.
* Removed <syntaxhighlight lang=glsl inline>gl_FragColor</syntaxhighlight> and <syntaxhighlight lang=glsl inline>gl_FragData[]</syntaxhighlight>. Instead you name your own color outputs using the syntax <syntaxhighlight lang=glsl inline>layout(location = 0) out vec4 nameYouWant</syntaxhighlight>. To output to multiple color buffers declare outputs at the other locations <syntaxhighlight lang=glsl inline>layout(location = 1) out vec4 secondFragColor</syntaxhighlight>.
* Removed all built-in attributes such as <syntaxhighlight lang=glsl inline>gl_Vertex, gl_MultiTexCoord0, gl_Normal</syntaxhighlight>. In TouchDesigner these attributes will be accessible through automatically declared attributes such as <syntaxhighlight lang=glsl inline>in vec3 P; in vec3 N; in vec4 Cd</syntaxhighlight>. More details on this [[#Working with Geometry Attributes|later]].
* Removed almost all built-in uniforms such as matrices (<syntaxhighlight lang=glsl inline>gl_ModelViewMatrix, gl_ProjectionMatrix</syntaxhighlight>), light information (<syntaxhighlight lang=glsl inline>gl_LightSource[]</syntaxhighlight>), and fog information (<syntaxhighlight lang=glsl inline>gl_Fog</syntaxhighlight>). All of this data is available through new means provided by TouchDesigner, detailed [[#TouchDesigner specific Uniforms|later]].
* Arrays of samplers are now supported, and are used extensively in TouchDesigner when appropriate. There are limitations on how these samplers are indexed though, detailed in the GLSL spec for the particular version you are using (3.30 has different rules from 4.10, for example).
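As a quick before/after illustration of these changes in a pixel shader (<code>myMap</code> and <code>texCoord0</code> are arbitrary user-declared names, not TouchDesigner-provided ones):

<syntaxhighlight lang=glsl>
// GLSL 1.20 style
varying vec2 texCoord0;
uniform sampler2D myMap;
void main()
{
	gl_FragColor = texture2D(myMap, texCoord0);
}

// GLSL 3.30 equivalent
in vec2 texCoord0;                       // 'varying' becomes 'in' in the pixel shader
uniform sampler2D myMap;
layout(location = 0) out vec4 fragColor; // replaces gl_FragColor
void main()
{
	fragColor = texture(myMap, texCoord0); // texture2D() becomes texture()
}
</syntaxhighlight>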
=== Major changes since TouchDesigner088 ===

A lot of changes have been made to TouchDesigner's GLSL API in 099. Most of these changes were done to better facilitate [[#Multi-Camera_Rendering|Multi-Camera Rendering]]. A summary of most of these changes is:

* Lighting and other work is now done in World space instead of Camera space. This makes code cleaner, since shaders would otherwise need to do their work in multiple different camera spaces for multiple cameras. Legacy GLSL shaders are supported with the [[GLSL TOP]]'s 'Lighting Space' parameter, which will be set to Camera Space for older shaders.
* <syntaxhighlight lang=glsl inline>TDInstanceID()</syntaxhighlight> should be used instead of <syntaxhighlight lang=glsl inline>gl_InstanceID/uTDInstanceIDOffset</syntaxhighlight>.
* <syntaxhighlight lang=glsl inline>uTDMat</syntaxhighlight> has been removed when lighting in World Space; use the array <syntaxhighlight lang=glsl inline>uTDMats[]</syntaxhighlight> instead.
* Some values from the <syntaxhighlight lang=glsl inline>uTDGeneral</syntaxhighlight> structure have been moved to <syntaxhighlight lang=glsl inline>uTDCamInfos[]</syntaxhighlight>, since that info is camera specific.
* A notion of camera index (obtained in the vertex shader using <syntaxhighlight lang=glsl inline>TDCameraIndex()</syntaxhighlight>) is needed for some functions such as <syntaxhighlight lang=glsl inline>TDFog()</syntaxhighlight>.
* <syntaxhighlight lang=glsl inline>TDAlphaTest(float)</syntaxhighlight> must be called to apply the alpha test. It can be safely called when the alpha test is disabled on the MAT; it'll do nothing in that case.
* Before writing any color to an output color buffer, it should be passed through <syntaxhighlight lang=glsl inline>vec4 TDOutputSwizzle(vec4)</syntaxhighlight>. This ensures the channels are in the correct place depending on how the channels are stored in the output texture. For example, Alpha-only textures may be stored in a 'Red-only' texture internally, so the alpha value will need to be swizzled over to the red channel before output.
== Related Articles ==

* [[Import a GLSL Material Shader]]

Latest revision as of 18:54, 24 August 2020

Overview

This document explains the finer points of writing a GLSL Material in TouchDesigner. It is assumed the reader already has an understanding of the GLSL language. The official GLSL documentation can be found at this address.

GLSL Version

TouchDesigner uses GLSL 3.30 and newer versions as its language. Many online examples, as well as WebGL shaders, are written against GLSL 1.20. There are some significant language differences between GLSL 1.20 and GLSL 3.30+. For information about some of these differences, please refer to Changes from GLSL 1.20.

The concept of GLSL Shaders

A GLSL Shader is a program that is applied to geometry as it is being rendered. A GLSL shader is split into two main components: the vertex shader and the pixel shader.

Vertex Shader - A vertex shader is a program that is applied to every vertex of the geometry the Material is applied to.

Pixel Shader - A pixel shader is a program that is applied to every pixel that is drawn. This is often also referred to as a Fragment Shader.

There is also the Geometry Shader, which is a stage between the vertex and pixel stages, but it isn't used very often. For more information on Geometry Shaders have a look here.

TouchDesigner GLSL conventions

All functions and uniforms provided by TouchDesigner as augmentations of the GLSL language will follow these conventions.

  • Function names will always start with the letters TD. e.g. TDLighting().
  • Uniforms will start with the letters uTD.
  • Samplers will start with the letters sTD.
  • Images will start with the letters mTD.
  • Vertex input attributes will be named the same as they are in the TouchDesigner SOP interface (P, N, Cd, uv).

Most uniforms provided by TouchDesigner will be contained in Uniform Blocks. This means instead of accessing a single matrix by uTDMatrixName, the matrices will be stored in a single block with many matrices such as uTDMats, which has members such as uTDMats[0].worldCam and uTDMats[0].projInverse.

Shader Stages

Vertex Shader

Inside a vertex shader you only have access to one vertex; you don't know what the positions of other vertices are, or what the output of the vertex shader for other vertices will be.

The input to a vertex shader is all the data about the particular vertex that the program is running on. Data like the vertex position in SOP space, texture coordinates, color, and normal are available. These values are called attributes. Attributes are declared in the vertex shader using the in keyword.

The vertex shader can output many things. The primary things it will output are the vertex position (after being transformed by the world, camera and projection matrix), color, and texture coordinates. It can output any other values as well using output variables declared using out. Outputs from a vertex shader are linearly interpolated across the surface of the primitive the vertex is a part of. For example, if you output a value of 0.2 at 1st vertex and a value of 0.4 at the 2nd vertex on a line, a pixel drawn half-way between these two points will receive a value of 0.3.
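As a minimal sketch of passing a custom value between stages (the variable name brightness is arbitrary, not a TouchDesigner-provided name), a vertex shader could look like this:

out float brightness; // linearly interpolated across the primitive
void main()
{
	// Any per-vertex value works; here we derive one from the SOP space position
	brightness = P.y * 0.5 + 0.5;
	gl_Position = TDWorldToProj(TDDeform(P));
}

A pixel shader can then declare in float brightness; to receive the interpolated value.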

Pixel Shader

The inputs to a pixel shader are all of the outputs from the vertex shader, and any uniforms that are defined. The outputs from the vertex shader will have been linearly interpolated across the polygon that the vertices created. A pixel shader can output two things: Color and Depth. Color is output through the variable declared as layout(location = 0) out vec4 whateverName. Depth is output through a variable declared as out float depthName. You can name these variables whatever you want. You are required to write out a color value, but you do not need to write out depth (in fact it's best if you don't unless absolutely needed). GLSL will automatically output the correct depth value for you if you don't write out a depth value. If you are outputting to multiple color buffers, you declare more color outputs with the location value set to 1, 2, 3, etc., instead of 0.
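For example, a pixel shader writing to two color buffers might be sketched like this (the output names are arbitrary):

layout(location = 0) out vec4 fragColor;       // first color buffer
layout(location = 1) out vec4 secondFragColor; // second color buffer
void main()
{
	fragColor = TDOutputSwizzle(vec4(1.0, 0.5, 0.0, 1.0));
	secondFragColor = TDOutputSwizzle(vec4(0.0, 0.0, 1.0, 1.0));
}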

Geometry Shader

A geometry shader takes a single primitive (points, a line, or a triangle) and outputs a set of points, line strips, or triangle strips. Currently in TouchDesigner the input primitive type you use must match what is being rendered, and must be one of:

 layout(points) in;
 layout(lines_adjacency) in;
 layout(triangles) in;

Other input types such as lines or triangles_adjacency are not currently supported.

Note - In the 2018.20000 series of builds lines were supported and lines_adjacency was not, so the change in the 2019.10000 series that switched the support to lines_adjacency unfortunately breaks existing geometry shaders.
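As a sketch, a pass-through geometry shader for triangles could look like this. It assumes the vertex shader declares out vec4 worldSpacePos; and flat out int camIndex; (arbitrary names) and fills them with the deformed world space position and TDCameraIndex():

layout(triangles) in;
layout(triangle_strip, max_vertices = 3) out;

in vec4 worldSpacePos[]; // from the vertex shader, one entry per input vertex
flat in int camIndex[];  // TDCameraIndex() passed through from the vertex shader

void main()
{
	for (int i = 0; i < 3; i++)
	{
		// The geometry shader version of TDWorldToProj() needs the camera index
		gl_Position = TDWorldToProj(worldSpacePos[i], camIndex[i]);
		EmitVertex();
	}
	EndPrimitive();
}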

A Basic Shader

Vertex Shader

This vertex shader will simply transform each vertex correctly and leave it entirely up to the pixel shader to color the pixels:

void main()
{
	// P is the position of the current vertex
	// TDDeform() will return the deformed P position, in world space.
	// Transform it from world space to projection space so it can be rasterized
	gl_Position = TDWorldToProj(TDDeform(P));
}

Pixel Shader

This pixel shader simply sets every pixel to red:

// We need to declare the name of the output fragment color (this is different from GLSL 1.2 where it was automatically declared as gl_FragColor)
out vec4 fragColor;
void main()
{
	fragColor = vec4(1, 0, 0, 1);
}

Working with Geometry Attributes

The following vertex shader attributes (inputs) will always be declared for you to use in your vertex shader. You do not need to declare them yourself.

layout(location = 0) in vec3 P; // Vertex position
layout(location = 1) in vec3 N; // normal
layout(location = 2) in vec4 Cd; // color
layout(location = 3) in vec3 uv[8]; // texture coordinate layers

Other Attributes

All other attributes you want to use need to be declared in your shader code; they are passed as custom attributes. For attributes with 4 or fewer values, simply declare an attribute with the same name in the GLSL shader as it has in the geometry. For example, if the geometry has an attribute speed[3], declare an attribute like this:

 in vec3 speed;

For attributes that are larger than 4 values, they will get put into an array of vec4s. If the size isn't a multiple of 4, then the extra values are undefined. For example for the attribute pCapt[6], you would declare it in your shader like this:

in vec4 pCapt[2];

The values pCapt[0].x, pCapt[0].y, pCapt[0].z, pCapt[0].w, pCapt[1].x and pCapt[1].y will have the values from the geometry and pCapt[1].z and pCapt[1].w will have undefined values.

The attribute pCapt[8] would also be declared as:

in vec4 pCapt[2];

Since it can fit all of its data in 2 vec4s.

If your attribute is an integer type, you can use the integer types (ivec2, ivec4) instead.
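For example, a vertex shader unpacking the pCapt[6] attribute described above might look like this sketch (treating the first three values as a position offset is purely for illustration):

in vec4 pCapt[2]; // 6 values packed into two vec4s; pCapt[1].zw are undefined

void main()
{
	vec3 offset = vec3(pCapt[0].x, pCapt[0].y, pCapt[0].z);
	gl_Position = TDWorldToProj(TDDeform(P + offset));
}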

TouchDesigner specific defines

Shaders in TouchDesigner are dynamically recompiled based on a few things, such as the number and types of lights in the scene or the number of cameras used in this pass (in the case of Multi-Camera Rendering). This is done for optimization purposes, since loops with known amounts of iterations are far faster in GLSL. Some defines are provided which allow code written by end-users to be recompiled correctly for different numbers of lights/cameras.

// This define will be defined at compile time, and your shader will be recompiled for each combination
// of lights in use (if it is used in multiple light configurations). You can use it for things like a loop
// counter to loop over all lights in the scene, as you will see if you output example code from the Phong MAT.
// Environment COMPs are counted separately from regular Light COMPs.
#define TD_NUM_LIGHTS <defined at compile time>
#define TD_NUM_ENV_LIGHTS <defined at compile time>
// On newer hardware multiple cameras can be rendered at the same time. This define will be set to the
// number of cameras done on this render pass. This may be 1 or more, depending on many factors.
// Code should be written in a way that should work regardless of what this value is.
#define TD_NUM_CAMERAS <defined at compile time>
// One of these defines will be set depending on which shader stage is being compiled.
// This allows shaders to use #ifdef <stageName> and #ifndef <stageName> to either include or
// omit code based on the current shader stage being included. This also allows for single
// large DATs to contain all the code for all stages, with each portion #ifdef-ed in for
// each stage.
#define TD_VERTEX_SHADER
#define TD_GEOMETRY_SHADER
#define TD_PIXEL_SHADER
#define TD_COMPUTE_SHADER

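As a sketch, a single DAT can hold both shader stages and use these defines like this (the light loop body is left as a placeholder):

#ifdef TD_VERTEX_SHADER
void main()
{
	gl_Position = TDWorldToProj(TDDeform(P));
}
#endif

#ifdef TD_PIXEL_SHADER
out vec4 fragColor;
void main()
{
	vec3 diffuseSum = vec3(0.0);
	// TD_NUM_LIGHTS is known at compile time, so this loop can be unrolled
	for (int i = 0; i < TD_NUM_LIGHTS; i++)
	{
		// accumulate each light's contribution here
	}
	fragColor = TDOutputSwizzle(vec4(diffuseSum, 1.0));
}
#endif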
TouchDesigner specific Uniforms

These are uniforms that TouchDesigner will automatically set for you. You do not need to declare any of these; you can just use them in your shaders.

// General rendering state info
struct TDGeneral {
  vec4 ambientColor;  // Ambient light color (sum of all Ambient Light COMPs used)
  vec4 viewport;      // viewport info, contains (left, bottom, 1.0 / (right - left), 1.0 / (top - bottom))     
};
uniform TDGeneral uTDGeneral;
// So for example you'd get the ambient color by doing 
// vec4 ambientColor = uTDGeneral.ambientColor;
// Matrices
struct TDMatrix
{
	mat4 world;			// world transform matrix, combination of hierarchy of Object COMPs containing the SOP.
						// transforms from SOP space into World Space
	mat4 worldInverse;	// inverse of the world matrix
	mat4 worldCam;		// multiplication of the world and the camera matrix. (Cam * World)
	mat4 worldCamInverse;
	mat4 cam;			// camera transform matrix, obtained from the Camera COMP used to render
	mat4 camInverse;
	mat4 camProj;		// camera transform and the projection matrix from the camera COMP, (Proj * Cam)
	mat4 camProjInverse;
	mat4 proj;			// projection matrix from the Camera COMP
	mat4 projInverse;
	mat4 worldCamProj;	// multiplication of the world, camera and projection matrices. (Proj * Cam * World)
						// takes a vertex in SOP space and puts it into projection space
	mat4 worldCamProjInverse;
	mat3 worldForNormals;	// Inverse transpose of the world matrix, use this to transform normals
							// from SOP space into world space
	mat3 camForNormals;		// Inverse transpose of the camera matrix, use this to transform normals
							// from world space into camera space
	mat3 worldCamForNormals;	// Inverse transpose of the worldCam matrix, use this to transform normals
								// from SOP space into camera space inverse(transpose(Cam * World))
	   
};
uniform TDMatrix uTDMats[TD_NUM_CAMERAS];
// For example you'd transform the vertex in SOP space into camera space with this line of code
// vec4 camSpaceVert = uTDMats[TDCameraIndex()].worldCam * vec4(P, 1.0);
struct TDCameraInfo
{
	vec4 nearFar;			// near/far plane info, contains (near, far, far - near, 1.0 / (far - near))
	vec4 fog;				// fog info, as defined in the Camera COMP
							// contains (fogStart, fogEnd, 1.0 / (fogEnd - fogStart), fogDensity)
	vec4 fogColor;			// Fog Color as defined in the Camera COMP
	int renderTOPCameraIndex;	// Says which camera from the Render TOP's 'Cameras' parameter this particular camera is.
};
uniform TDCameraInfo uTDCamInfos[TD_NUM_CAMERAS];
// A 1D texture that gives a half-sine ramp from (0, 0, 0, 1) to (1, 1, 1, 1)
// Sampling it with U coordinate outside the (0, 1) range will return 0 for anything below 0 and 1 for anything above 1.
// It's possibly faster than using the GLSL sin() function, in situations where you want behavior like this
uniform sampler1D sTDSineLookup;

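As an illustration only (normally you would just call TDFog() in the pixel shader), the camera info could be used in a vertex shader to compute a linear fog factor by hand:

vec4 camSpaceVert = uTDMats[TDCameraIndex()].worldCam * vec4(P, 1.0);
float dist = length(camSpaceVert.xyz);
// fog contains (fogStart, fogEnd, 1.0 / (fogEnd - fogStart), fogDensity)
vec4 fog = uTDCamInfos[TDCameraIndex()].fog;
float fogFactor = clamp((fog.y - dist) * fog.z, 0.0, 1.0);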
In general you don't need to do anything with any of these light uniforms/samplers, as it's all done for you in the TDLighting(), TDLightingPBR(), and TDEnvLightingPBR() functions. Only if you are doing custom lighting will you need to worry about these.

struct TDLight
{
	vec4 position;		// the light's position in world space
	vec3 direction;		// the light's direction in world space
	vec3 diffuse;		// the light's diffuse color
	vec4 nearFar;		// the near and far settings from the light's View page
						// contains (near, far, far - near, 1.0 / (far - near))
	vec4 lightSize;		// values from the light's Light Size parameter
						// contains (light width, light height, light width / light projection width, light height / light projection height)
	vec4 misc;			// misc parameters values, right now it contains
						// (Max shadow softness, 0, 0, 0)
	vec4 coneLookupScaleBias;	// applied to a cone light's contribution to create the spot lit area
								// contains (coneLookupScale, coneLookupBias, 1.0, 0.0).
                                // If the light is not a cone light, this will contain (0.0, 0.0, 0.0, 0.0).
	vec4 attenScaleBiasRoll;	// applied to the light's distance from the point to get an attenuation value
								// contains (attenuation scale, attenuation bias, attenuation rolloff, 0)
	mat4 shadowMapMatrix;		// transforms a point in world space into the projection space of the shadow mapped light
								// also rescales projection space from [-1, 1] to [0, 1], so the value can be used
								// to lookup into a shadow map
	mat4 shadowMapCamMatrix;	// transforms a point in world space into the camera space of the shadow mapped light
	vec4 shadowMapRes;			// the resolution of the shadow map associated with this light
								// contains (1.0 / width, 1.0 / height, width, height)
								// filled with 0s if the light isn't shadow mapped
	mat4 projMapMatrix;			// transforms a point in world space into the projection map space of the light
								// used when using textureProj() function for projection mapping
};
uniform TDLight uTDLights[TD_NUM_LIGHTS];
struct TDEnvLight
{
	vec3 color;					// Color of the env light (not counting its environment map)
	mat3 rotate;				// Rotation to be applied to the env light.
	vec3 shCoeffs[9];			// Spherical harmonic coefficients calculated from the environment map. Used for diffuse
};
uniform TDEnvLight uTDEnvLights[TD_NUM_ENV_LIGHTS];
// The environment maps for the env lights. Will be black for lights that don't have a map
// of that particular dimensionality. 
uniform samplerCube sTDEnvLightCubeMaps[TD_NUM_ENV_LIGHTS];
uniform sampler2D sTDEnvLight2DMaps[TD_NUM_ENV_LIGHTS];
// shadow maps, these are both the same maps, but are setup in compare or sampling mode
// for lights that aren't shadow mapped, these will be filled with 0s
// The first one is used in the function texture(sampler2DShadow, vec3) or textureProj(sampler2DShadow, vec4)
// for automatic depth comparison and hardware percentage closer filtering 
uniform sampler2DShadow sTDCompareShadowMaps[TD_NUM_LIGHTS];
// This one is used for directly getting the depth from the shadow map using
// texture(sampler2D, vec2) or textureProj(sampler2D, vec3)
// If using hard shadows the values are in [0, 1] post-projection depth units
// If using soft shadows the values are in camera space of the light (not the camera being rendered from)
uniform sampler2D sTDShadowMaps[TD_NUM_LIGHTS];
// The projection maps defined in the Projection Map parameter of the Light COMP
// the map will always return black if the light doesn't have a projection map defined
uniform sampler2D sTDProjMaps[TD_NUM_LIGHTS];
// The falloff ramp from when the cone light starts to fade out until it reaches black
// used in combination with uTDLights[].coneLookupScaleBias
uniform sampler1D sTDConeLookups[TD_NUM_LIGHTS];

TouchDesigner specific Functions

Further details about each of these functions are given in the sections following this.

Vertex Shader Only Functions

// Transforms a point from world space to projection space. 
// These functions should always be used to output to gl_Position, allowing TouchDesigner to do custom manipulations
// of the values as needed for special operations and projections.
// The extra manipulations that are currently done are:
// 1. Conversions to other projections such as FishEye
// 2. Conversions to Quad Reprojection using TDQuadReproject()
// 3. Adjustments needed for picking using TDPickAdjust().
// When using this function you should not call the above functions, since it will do those for you.
// Both the vec4 and vec3 version of TDWorldToProj() treat the xyz as a point, not a vector.
vec4 TDWorldToProj(vec4 v);
vec4 TDWorldToProj(vec3 v);


// These ones take an extra 'uv' coordinate, which is used when UV Unwrapping is done in the Render TOP.
// If the version without the uv is used, then TDInstanceTexCoord(TDUVUnwrapCoord()) will be used to get the texture coord.
vec4 TDWorldToProj(vec4 v, vec3 uv);
vec4 TDWorldToProj(vec3 v, vec3 uv);
// Returns the instance ID for the current instance. This should always be used, not gl_InstanceID directly.
// For more information look at the Instancing section of this document.
int TDInstanceID();
// Returns the index of the camera for this particular vertex. Needed to support Multi-Camera Rendering.
// This is always 0-based, and it does not reflect which camera is currently being used from the Render TOP. 
// For that, use the uTDCamInfos[TDCameraIndex()].renderTOPCameraIndex
int TDCameraIndex();
// Deforms a point or vector using instancing and skinned deforms. The returned result is in World Space.
// Be sure to use the *Vec version in the case of vectors to get correct results.
// Also use the *Norm version when deforming a normal, to make sure it still matches the surface correctly.
// These functions will internally call TDSkinnedDeform() and TDInstanceDeform().
vec4 TDDeform(vec4 pos);
vec4 TDDeform(vec3 pos);
vec3 TDDeformVec(vec3 v);
vec3 TDDeformNorm(vec3 n);
// These versions allow you to control the instanceID used for the deform.
vec4 TDDeform(int instanceID, vec3 pos);
vec3 TDDeformVec(int instanceID, vec3 v);
vec3 TDDeformNorm(int instanceID, vec3 n);
// ** In general you don't need to call any of the below functions, just calling TDDeform or TDDeformVec will
// do all the work for you
// Just the skinning or instancing portion of the deforms
// Returned position/vector is in SOP space for the *Skinned* version, and world space for the *Instance* version.
vec4 TDSkinnedDeform(vec4 pos);
vec3 TDSkinnedDeformVec(vec3 vec);
vec4 TDInstanceDeform(vec4 pos);
vec3 TDInstanceDeformVec(vec3 vec);
// You also don't need to call these usually, but are available for special cases
// For instancing functions, if you don't provide an index, it will use TDInstanceID().
mat4 TDBoneMat(int boneIndex);
mat4 TDInstanceMat(int instanceID);
mat4 TDInstanceMat();
// Returns a 3x3 matrix only. Useful if you are only working with vectors, not positions.
// If you are using both, it is faster to just call TDInstanceMat(), and cast the result to a mat3
// when required.
mat3 TDInstanceMat3(int instanceID);
mat3 TDInstanceMat3();
// To calculate the texture coordinates for your instance (if used in the Geometry COMP's parameters), use these functions
// For texture coordinates the passed in variable 't' is the current texture coordinate to be modified/replaced
vec3 TDInstanceTexCoord(int instanceID, vec3 t);
vec3 TDInstanceTexCoord(vec3 t);
// TDWorldToProj() will already apply the Quad Reproject feature if used by the [[Camera COMP]].
// However in some cases you may be doing custom operations that require it to be applied manually.
// This function just returns the point unchanged if Quad Reproject isn't being used.
vec4 TDQuadReproject(vec4 v, int camIndex);
// Available in builds 2019.32020 or later.
// TDWorldToProj() will already apply the Picking adjustment if required (when picking is occurring).
// However in some cases you may be doing custom operations that require it to be applied manually.
// This function just returns the point unchanged if picking isn't active for this render.
vec4 TDPickAdjust(vec4 v, int camIndex);


// Returns the uv coordinate that was selected for UV unwrapping in the Render TOP
vec3 TDUVUnwrapCoord();
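Putting some of these together, a typical vertex shader main() using the deform helpers might be sketched as (the out variable name is arbitrary):

out vec3 worldSpaceNorm;

void main()
{
	vec4 worldSpacePos = TDDeform(P);
	// Use the *Norm variant so the normal still matches the deformed surface
	worldSpaceNorm = normalize(TDDeformNorm(N));
	gl_Position = TDWorldToProj(worldSpacePos);
}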

Geometry Shader Only Functions

// Similar to the ones in the vertex shader, but these require a camera index, since it
// needs to be passed through to the geometry shader via an input variable.
vec4 TDWorldToProj(vec4 v, vec3 uv, int cameraIndex);
vec4 TDWorldToProj(vec3 v, vec3 uv, int cameraIndex);
vec4 TDWorldToProj(vec4 v, int cameraIndex);
vec4 TDWorldToProj(vec3 v, int cameraIndex);
vec4 TDQuadReproject(vec4 v, int camIndex);

Pixel Shader Only Functions

// This function is provided as a wrapper for gl_FrontFacing.
// It is required since some GPUs (Intel on macOS mainly) have broken
// functionality for gl_FrontFacing.
// On most GPUs this just returns gl_FrontFacing. On GPUs where the behavior
// is broken, an alternative method using position and normal is used to 
// determine if the pixel is front or back facing.
// Position and normal should be in the same space, and normal must be normalized.
bool TDFrontFacing(vec3 position, vec3 normal);
// Call this function to give TouchDesigner a chance to discard some pixels if appropriate.
// This is used in things such as order-independent transparency and dual-paraboloid rendering.
// For best performance call it at the start of the pixel shader.
// It will do nothing if no features that require it are active, so it's safe to always call it.
void TDCheckDiscard();
// Call this function to apply the alpha test to the current pixel. This function will do nothing
// if the alpha test is disabled, so it can be safely always called
void TDAlphaTest(float alphaValue);
// Call this to apply the Camera COMP's fog to the passed color. It also requires the world space vertex position.
// This function will do nothing if fog is disabled, so it's safe to always call it at the end of your shader;
// there is no performance impact from calling it when fog is disabled.
// The cameraIndex should be passed through from the vertex shader using a varying, sourced from TDCameraIndex().
vec4 TDFog(vec4 curColor, vec3 worldSpacePos, int cameraIndex);
// Call this at the end of your shader to apply a dither to your final color. This function does nothing
// if dithering is disabled in the Render TOP's parameters.
vec4 TDDither(vec4 curColor);
// Pass any color value through this function before writing it out to a color buffer.
// This is needed to ensure that color channels are output to the correct channels
// in the color buffer, based on hardware limitations that may, for example, store
// alpha-only textures as red-only internally.
vec4 TDOutputSwizzle(vec4 curColor);
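A hedged sketch of how these calls typically fit together at the top and bottom of a pixel shader (vWorldSpacePos and vCameraIndex are assumed varyings you declare yourself and pass from the vertex shader):

layout(location = 0) out vec4 fragColor;

in vec3 vWorldSpacePos;
flat in int vCameraIndex;	// assigned from TDCameraIndex() in the vertex shader

void main()
{
	TDCheckDiscard();	// no-op unless a feature such as OIT needs it

	vec4 color = vec4(1.0);	// ...your shading work goes here...
	TDAlphaTest(color.a);	// no-op if the alpha test is disabled

	color = TDFog(color, vWorldSpacePos, vCameraIndex);
	color = TDDither(color);
	fragColor = TDOutputSwizzle(color);	// always swizzle before output
}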

The TDLighting() functions are called per light to determine that light's diffuse and specular contributions. Shadowing and projection mapping are automatically handled for you inside these functions.

  • The diffuseContrib, specularContrib and specularContrib2 parameters will be filled with the results.
  • lightIndex is the light index to calculate
  • worldSpacePos is the world space vertex position
  • shadowStrength is a scalar on the shadow to increase or decrease its effect. For example, a value of 0.5 would give a maximum 50% shadow.
  • worldSpaceNorm is the normalized world space normal
  • vertToCamVec is the normalized vector from the vertex position to the camera position.
  • shininess is the specular shininess exponent.

Physically Based (PBR) Lighting

// For all regular lights. Should be called in a loop from 0 to TD_NUM_LIGHTS
void TDLightingPBR
	(inout vec3 diffuseContrib,
	inout vec3 specularContrib,
	int lightIndex,
	vec3 diffuseColor,
	vec3 specularColor,
	vec3 worldSpacePos,
	vec3 worldSpaceNormal,
	float shadowStrength,
	vec3 shadowColor,
	vec3 vertToCamVec,
	float roughness);
// For environment lights. Should be called in a loop from 0 to TD_NUM_ENV_LIGHTS
void TDEnvLightingPBR
	(inout vec3 diffuseContrib,
	inout vec3 specularContrib,
	int lightIndex,
	vec3 diffuseColor,
	vec3 specularColor,
	vec3 worldSpaceNormal,
	vec3 vertToCamVec,
	float roughness,
	float ambientOcclusion);
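Putting these together, a typical pixel shader accumulates contributions from all regular and environment lights. This is a hedged sketch; baseColor, specColor, roughness, worldSpacePos, worldSpaceNorm and vertToCamVec are assumed values you compute or declare yourself. Note the contribution parameters are inout, so they accumulate across the loops.

vec3 diffuseSum = vec3(0.0);
vec3 specularSum = vec3(0.0);
for (int i = 0; i < TD_NUM_LIGHTS; i++)
{
	TDLightingPBR(diffuseSum, specularSum, i,
		baseColor, specColor,
		worldSpacePos, worldSpaceNorm,
		1.0, vec3(0.0),	// shadowStrength, shadowColor
		vertToCamVec, roughness);
}
for (int i = 0; i < TD_NUM_ENV_LIGHTS; i++)
{
	TDEnvLightingPBR(diffuseSum, specularSum, i,
		baseColor, specColor,
		worldSpaceNorm, vertToCamVec,
		roughness,
		1.0);	// ambientOcclusion: 1.0 = unoccluded
}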

Phong Lighting

You can disable features by calling a different version of TDLighting().

// All features
void TDLighting(
	out vec3 diffuseContrib,
	out vec3 specularContrib,
	out vec3 specularContrib2,
	int lightIndex,
	vec3 worldSpacePos,
	vec3 worldSpaceNorm,
	float shadowStrength,
	vec3 shadowColor,
	vec3 vertToCamVec,
	float shininess,
	float shininess2
);
// Diffuse and specular, but no specular2, with shadows
void TDLighting(
	out vec3 diffuseContrib,
	out vec3 specularContrib,
	int lightIndex,
	vec3 worldSpacePos,
	vec3 worldSpaceNorm,
	float shadowStrength,
	vec3 shadowColor,
	vec3 vertToCamVec,
	float shininess
);
// Diffuse and both speculars, no shadows
void TDLighting(
	out vec3 diffuseContrib,
	out vec3 specularContrib,
	out vec3 specularContrib2,
	int lightIndex,
	vec3 worldSpacePos,
	vec3 worldSpaceNorm,
	float shininess,
	float shininess2,
	vec3 vertToCamVec
);
// Diffuse only, with shadows
void TDLighting(
	out vec3 diffuseContrib,
	int lightIndex,
	vec3 worldSpacePos,
	vec3 worldSpaceNorm,
	float shadowStrength,
	vec3 shadowColor
);
// Diffuse Only, no shadows
void TDLighting(
	out vec3 diffuseContrib,
	int lightIndex,
	vec3 worldSpacePos,
	vec3 worldSpaceNorm
);
// Diffuse and specular, no specular2 or shadows.
void TDLighting(
	out vec3 diffuseContrib,
	out vec3 specularContrib,
	int lightIndex,
	vec3 worldSpacePos,
	vec3 worldSpaceNorm,
	vec3 vertToCamVec,
	float shininess
);
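For example, a pixel shader might accumulate the diffuse and specular contributions from every light using the shadowed diffuse-and-specular overload. This is a hedged sketch; worldSpacePos, worldSpaceNorm and vertToCamVec are assumed values you compute yourself, and the shininess value is arbitrary.

vec3 diffuseSum = vec3(0.0);
vec3 specularSum = vec3(0.0);
for (int i = 0; i < TD_NUM_LIGHTS; i++)
{
	vec3 diffuseContrib = vec3(0.0);
	vec3 specularContrib = vec3(0.0);
	TDLighting(diffuseContrib, specularContrib,
		i, worldSpacePos, worldSpaceNorm,
		1.0,		// shadowStrength: full-strength shadows
		vec3(0.0),	// shadowColor: shadows darken to black
		vertToCamVec,
		40.0);		// shininess exponent (arbitrary value)
	diffuseSum += diffuseContrib;
	specularSum += specularContrib;
}
// Combine with your material colors, e.g.:
// vec3 finalRGB = diffuseSum * diffuseColor + specularSum * specularColor;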

Common Lighting Functions

// In general you don't need to use these functions; they are called for you in the TDLighting() functions.
// These functions return the shadow strength at the current pixel for the light at the given index.
// They also require the world space vertex position to do their calculations.
// The results are undefined if the shadow isn't mapped using the chosen shadow type.
// The returned value is 0 for no shadow, 1 for 100% shadowed.
// Due to percentage-closer filtering, hard shadows can still have values between 0 and 1 at the edges of the shadow.
float TDHardShadow(int lightIndex, vec3 worldSpacePos);
// This one will apply soft shadows with both 25 search steps done, and 25 filter samples.
float TDSoftShadow(int lightIndex, vec3 worldSpacePos);
// Allows for control of search steps and filter samples.
float TDSoftShadow(int lightIndex, vec3 worldSpacePos, int samples, int searchSteps);
// Gets the projection map color for the given world space vertex position.
// No other lighting calculations are applied to the returned color
// If the given light index is not using a projection map, then 'defaultColor' is returned.
vec4 TDProjMap(int lightIndex, vec3 worldSpacePosition, vec4 defaultColor);

Common Functions

Available in all shader stages.

Matrix functions

// Creates a rotation matrix that rotates starting from looking down +Z, to the 'forward' vector direction
mat3 TDRotateToVector(vec3 forward, vec3 up);
// Creates a rotation matrix that rotates around the 'axis', the given number of 'radians'
mat3 TDRotateOnAxis(float radians, vec3 axis);
// Creates a rotation matrix to rotate from vector 'from' to vector 'to'. The solution isn't particularly stable, but useful in some cases.
mat3 TDCreateRotMatrix(vec3 from, vec3 to);
// Takes a surface normal, the tangent to the surface, and a handedness value (either -1 or 1)
// Returns a matrix that will convert vectors from tangent space, to the space the normal and tangent are in
// Both the normal and the tangent must be normalized before this function is called.
// The w coordinate of the T attribute created by the [[Attribute Create SOP]] contains the handedness
// that should be passed in as-is.
mat3 TDCreateTBNMatrix(vec3 normal, vec3 tangent, float handedness);
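A hedged sketch of how this might be used for tangent-space normal mapping. sNormalMap is an assumed user sampler uniform, and N, T and uv[0] are assumed to be the standard normal, tangent and texture coordinate attributes:

// Build a matrix that takes tangent-space vectors into the space of N and T.
// T.w carries the handedness from the Attribute Create SOP.
mat3 tbn = TDCreateTBNMatrix(normalize(N), normalize(T.xyz), T.w);
// Unpack a normal map sample from [0,1] to [-1,1] and move it out of tangent space.
vec3 mapNorm = texture(sNormalMap, uv[0].st).xyz * 2.0 - 1.0;
vec3 worldSpaceNorm = normalize(TDDeformNorm(tbn * mapNorm));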

Perlin and Simplex noise functions

// Noise functions
// These will return the same result for the same input
// Results are between -1 and 1
// Can be slow so just be aware when using them. 
// Different dimensionality selected by passing vec2, vec3 or vec4. 
float TDPerlinNoise(vec2 v);
float TDPerlinNoise(vec3 v);
float TDPerlinNoise(vec4 v);
float TDSimplexNoise(vec2 v);
float TDSimplexNoise(vec3 v);
float TDSimplexNoise(vec4 v);
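For example, a vertex shader could displace points along their normals with animated noise. This is a hedged sketch; uTime is an assumed user uniform, and the amplitude and frequency values are arbitrary.

uniform float uTime;	// assumed user uniform, e.g. driven by absTime.seconds

void main()
{
	// Displace each vertex along its normal; noise output is in [-1, 1].
	float n = TDSimplexNoise(vec3(P * 4.0) + vec3(0.0, 0.0, uTime));
	vec3 displaced = P + N * n * 0.1;
	vec4 worldSpacePos = TDDeform(vec4(displaced, 1.0));
	gl_Position = TDWorldToProj(worldSpacePos);
}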

HSV Conversion

// Converts between RGB and HSV color space
vec3 TDHSVToRGB(vec3 c);
vec3 TDRGBToHSV(vec3 c);

Space Conversions

// Converts a 0-1 equirectangular texture coordinate into cubemap coordinates.
// A 0 for the U coordinate corresponds to the middle of the +X face. So along the vec3(1, Y, 0) plane.
// As U rises, equirectangular coordinates rotate from +X, to +Z, then -X and -Z.
vec3 TDEquirectangularToCubeMap(vec2 equiCoord);
// Converts from cubemap coordinates to equirectangular
// cubemapCoord MUST be normalized before calling this function.
vec2 TDCubeMapToEquirectangular(vec3 cubemapCoord);

// Available in builds 2019.18140 and later.
// This version also outputs a mipmap bias float. This float should be passed in the 'bias'
// parameter of texture() to help select the mipmap level. This helps avoid seams at the edge
// of the equirectangular map.
vec2 TDCubeMapToEquirectangular(vec3 cubemapCoord, out float mipMapBias);
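A hedged sketch of using the bias-returning version to sample an equirectangular environment image. sEnvMap is an assumed user sampler2D, and using the reflection of the view vector is just one possible lookup direction:

// Reflect the view direction off the surface to get a lookup direction.
vec3 dir = normalize(reflect(-vertToCamVec, worldSpaceNorm));
float mipMapBias;
vec2 equiUV = TDCubeMapToEquirectangular(dir, mipMapBias);
// Pass the bias to texture() to reduce seams at the map's edge.
vec4 envColor = texture(sEnvMap, equiUV, mipMapBias);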

Working with Lights

To help shaders be as fast as possible, a lot of the logic to calculate lights is hard-coded into the shader depending on what features are enabled and what the light type is. Shaders written for the GLSL MAT will be recompiled with different implementations of TDLightingPBR(), TDLighting(), etc. depending on the number and types of lights in the scene. This allows the same GLSL MAT to be used in multiple different scenes without needing to be changed based on the number of lights in the scene. These compilations are cached, so each permutation of lighting settings will only cause one compilation to occur each time TD is run.

TIP: Geometry viewers have built-in lighting separate from your scene's lighting objects. For information on how to duplicate that lighting, see the Geometry Viewer article.

Custom work with lights

If you decide to do custom lighting work, this section describes how a lot of the light values are used in our shader.

Knowing which variables correspond to which Light COMPs

The variables will be indexed to differentiate the lights, starting at 0. Light 0 will be the first light listed in the Render TOP, Light 1 will be the 2nd light listed and so on. In the event that lights are selected using a wildcard such as light*, the lights gathered from this wildcard will be sorted alpha-numerically.

For example, say the Render TOP has "/light3 /container1/light* /alight1" listed in its Light parameter, and /container1/ has two light COMPs, named light1 and light2. In this case the lights would correspond to the following indices:
/light3 would be index 0
/container1/light1 would be index 1
/container1/light2 would be index 2
/alight1 would be index 3

Light Parameters

All of the parameters for the lights are defined in the uTDLights structure.

Cone Lighting Technique

TouchDesigner's built-in shaders use a custom cone-lighting technique that you can mimic in your shader. The intensity of the cone light is pre-computed into a 1D texture (a lookup table) to reduce the workload in the shader. The start of the 1D texture (texture coordinate 0.0) is the intensity of the light at the edge of the cone angle (the first pixel is always 0). The end of the 1D texture (texture coordinate 1.0) is the intensity of the light at the center of the cone. This lookup table is declared as:

uniform sampler1D sTDConeLookups[TD_NUM_LIGHTS];

A second helper uniform is also given to the shader to make looking up in the 1D texture easier:

uTDLights[i].coneLookupScaleBias;

To correctly look into this lookup table the following algorithm should be used:

// 'spot' is the spot vector
// 'lightV' is the vector coming from the light position, pointing towards
// the point on the geometry we are shading.
// It doesn't matter which space these vectors are in (camera space, object space), 
// as long as they are both in the same space.
// Determine the cosine of the angle between the two vectors, will be between [-1, 1]
float spotEffect = dot(spot, lightV);
// Now rescale the value using the special helper uniform so that value is between [0,1]
// A value of 0 will mean the angle between the two vectors is the same as the total 
// cone angle + cone delta of the light
spotEffect = (spotEffect * uTDLights[i].coneLookupScaleBias.x) + uTDLights[i].coneLookupScaleBias.y;
// Finally lookup into the lookup table
float dimmer = texture(sTDConeLookups[i], spotEffect).r;
// You can now multiply the strength of the light by 'dimmer' to create the correct
// light intensity based on this pixel's position in or outside the cone light's area of
// influence.

Attenuation

Attenuation is handled for you in the TDLighting() function, but if you want to add it yourself this section describes how.

To determine the attenuation from the light to a point in space, use this function. lightDist is the distance from the vertex to the light.

// Will return 1 if there is no attenuation, 0 if the light is fully attenuated, and something in between if it's in the fade-off region.
float TDAttenuateLight(int lightIndex, float lightDist);

The math behind TDAttenuateLight is as follows:

TouchDesigner's built-in shaders use a custom attenuation technique. Like the cone lighting, a pre-calculated scale and bias is provided for you that will allow you to get the correct attenuated intensity of the light. The uniform is:

uTDLights[i].attenScaleBiasRoll // Contains (1 / -(attenEnd - attenStart), attenEnd / (attenEnd - attenStart), attenRolloff)

TDAttenuateLight is defined as:

float TDAttenuateLight(int lightIndex, float lightDist)
{
    float lightAtten = lightDist * uTDLights[lightIndex].attenScaleBiasRoll.x;
    lightAtten += uTDLights[lightIndex].attenScaleBiasRoll.y;
    lightAtten = clamp(lightAtten, 0.0, 1.0) * 1.57073963;
    lightAtten = sin(lightAtten);
    lightAtten = pow(lightAtten, uTDLights[lightIndex].attenScaleBiasRoll.z);
    return lightAtten;
}

Projection and Shadow Mapping

Projection mapping and shadowing mapping are handled for you in the TDLighting() functions, but you can do it yourself if you want using the below information.

Projection and shadow mapping are very similar operations. The only difference is that a projection map is used to color the surface, while a shadow map is used to decide whether that surface receives lighting from a certain light. The math, however, is essentially the same. First you need to calculate the correct texture coordinates to look up into the projection or shadow map. For both operations you need to know the position of the vertex in world space.

Projection Mapping

For projection mapping, use this math to find the texture coordinates. The 3rd component isn't necessary since a projection map is just a 2D image (the 4th one is, though, as part of the projective math).

// Notice how we remove the z coordinate here
vec3 projCoord = (uTDLights[i].projMapMatrix * worldSpaceVert).xyw;

Now in the pixel shader, sample the projection map with this code:

vec4 projColor = textureProj(sTDProjMaps[i], projCoord);

Now simply use projColor however you see fit in your shader.

Shadow Mapping

For shadow mapping it's very similar, only all 4 components are needed since you need the z for depth comparison.

vec4 shadowCoord = uTDLights[i].shadowMapMatrix * worldSpaceVert;

Sample the shadow map with this code:

float shadowDepth = textureProj(sTDShadowMaps[i], shadowCoord).r;

Compare shadowDepth with (shadowCoord.z / shadowCoord.w) to determine if the pixel is in a shadow or not.

if ((shadowCoord.z / shadowCoord.w) >= shadowDepth)
// then it's in the shadow
else
// It's not in the shadow

Multiple Render Targets

Using the '# Of Color Buffers' parameter in the Render TOP along with the Render Select TOP, you can write GLSL shaders that output multiple color values per pixel. This is done by declaring and writing to pixel shader outputs declared like this:

layout(location = 0) out vec4 fragColor[TD_NUM_COLOR_BUFFERS];

The constant TD_NUM_COLOR_BUFFERS will automatically be set for you based on the render settings.
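A hedged sketch of a pixel shader writing to two color buffers. It assumes at least two color buffers are enabled on the Render TOP, and vWorldSpaceNorm is an assumed varying from the vertex shader:

layout(location = 0) out vec4 fragColor[TD_NUM_COLOR_BUFFERS];

in vec3 vWorldSpaceNorm;

void main()
{
	// Buffer 0: the shaded color (a flat red here for brevity).
	fragColor[0] = TDOutputSwizzle(vec4(1.0, 0.0, 0.0, 1.0));
	// Buffer 1: the normal, packed into 0-1 so it survives 8-bit buffers.
	fragColor[1] = TDOutputSwizzle(vec4(vWorldSpaceNorm * 0.5 + 0.5, 1.0));
}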

Multi-Camera Rendering

Multi-Camera Rendering renders multiple cameras in a single rendering pass, all looking at the same scene. This means the scene-graph is only traversed once, which avoids many calls to the graphics driver. Lights, textures, materials and draw calls only need to be set up once for the entire set of cameras being rendered. This feature is supported by Nvidia Pascal (Geforce 1000, Quadro P-Series) or AMD Polaris (Radeon R9, Radeon Pro WX) and newer GPUs. This feature is important for VR rendering, as well as things such as rendering a Cube Map in a single pass (instead of one pass per side).

Multi-Camera Rendering will not function if the Cameras have different light masks. The cameras will be rendered one pass at a time in that case.

This feature is used by the Render TOP when multiple cameras are listed in the 'Cameras' parameter. The 'Multi-Camera Hint' parameter can help control how this feature is used for that particular Render TOP.

Nvidia calls this feature 'Simultaneous Multi-Projection'.

The multi-camera functionality on these GPUs is not general and requires some tricks to function properly. Because of this it's important that all of your shaders make use of the TD* functions such as TDWorldToProj() and TDInstanceID() instead of doing those things manually with built-in GLSL functionality. Functions such as TDFog() also require a camera index to be passed to them to apply fog for the correct camera.

Outputting gl_Position

Although in general you can transform your points/vectors using the built-in model/view and projection matrices at will, when outputting to gl_Position you should use the built-in functions. These functions allow TouchDesigner to do some manipulation of the values for final output, depending on the rendering setup. For example, when doing optimized Multi-Camera Rendering, the position needs to be multiplied by the correct camera matrix for this execution of the vertex shader. To give TouchDesigner a chance to do this manipulation, you should call the built-in functions to transform your vertex position:

vec4 TDWorldToProj(vec4 v);
vec4 TDWorldToProj(vec3 v);

So for example at the end of your shader you would do:

gl_Position = TDWorldToProj(worldSpacePosition);

Working with Deforms

Currently there are two different types of deformations that can be applied to geometry: skinned deforms and instanced transforms.

TouchDesigner automatically encapsulates all of the work for both of these deforms in the GLSL functions below. Use the *Vec version when deforming vectors.
These functions always return the point/vector in World space, not model/SOP space.

vec4 TDDeform(vec4 p);
vec3 TDDeform(vec3 p);
vec3 TDDeformVec(vec3 v);
vec3 TDDeformNorm(vec3 v);


As the shader writer, it's your job to manipulate the vertex attributes such as the position and normal (since there's no place for TouchDesigner to do it if you're the one writing the shader), so it's up to you to call the TDDeform() function. In general you will simply call it like this:

vec4 worldSpaceVert = TDDeform(vec4(P, 1.0)); 
vec3 worldSpaceNorm = TDDeformNorm(N);

However, you can also use the functions declared below directly.
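A minimal complete vertex shader built around these calls might look like this. This is a sketch; the out variable names are your own choice:

out vec3 vWorldSpacePos;
out vec3 vWorldSpaceNorm;

void main()
{
	vec4 worldSpacePos = TDDeform(vec4(P, 1.0));
	vWorldSpacePos = worldSpacePos.xyz;
	vWorldSpaceNorm = TDDeformNorm(N);
	gl_Position = TDWorldToProj(worldSpacePos);
}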

Skinning Deforms (Bone Deforms)

When you enable the Deform feature in the GLSL MAT, TouchDesigner will automatically declare some attributes, varyings, uniforms and functions for you to use to deform your geometry in the same way that other MATs deform geometry. It's important you don't re-use any of these reserved words when using deforms, to avoid name conflicts when compiling the shader. Even when not using deforms, the below listed functions will be declared anyway, so shader code will run correctly both when deforms are turned on and off. The functions do nothing when deforms are off (and have no cost to the shader speed). The bone matrices for deforms are built by using the pCaptPath and pCaptData detail attributes, along with the bone's current position based on the skeleton at that frame. The indices in pCapt match up with the array index of the matrices for the bones. More information on how skinning deforms work can be found here: Deforming Geometry (Skinning)

Functions

You generally will not need to call these directly, they are called by the TDDeform() function.

In the vertex shader:

vec4 TDSkinnedDeform(vec4 pos);
vec3 TDSkinnedDeformVec(vec3 vec);

You can get the bone matrix for the given bone index with this function:

mat4 TDBoneMat(int boneIndex);

Instancing

When you enable instancing on the Instance page of the Geometry COMP the TDDeform() functions will automatically call the correct lower level function that will transform the instance, based on the channels given in the XForm CHOP parameter. If you don't specify a CHOP, then all of the instances will be drawn at the same spot, unless you transform them yourself.

Instance Index/ID

To calculate the instance ID, use the provided TDInstanceID() function. Do not use gl_InstanceID directly because the number of instances being rendered may be larger than requested due to Multi-Camera Rendering. Shader writers familiar with TouchDesigner088 may also remember the uTDInstanceIDOffset that had to be used; this is not needed with TDInstanceID().

int TDInstanceID();

Since this function is only available in the vertex shader, you will need to pass it onwards to the pixel shader through an out/in, if you require it in the pixel shader.

// In the vertex shader, declare something like this, and assign vInstanceID = TDInstanceID() in the main()
flat out int vInstanceID;
void main()
{
	vInstanceID = TDInstanceID();
	// other main vertex function stuff
	// ....
}

And in the pixel shader you can read this value if it's declared like this:

// Pixel shader
flat in int vInstanceID;

This is declared as flat since int variable types can not be interpolated across a primitive.

Deform Functions

You generally will not need to call any of these directly, they are called by the TDDeform() function. These functions are only available in the vertex shader.

In the vertex shader:

vec4 TDInstanceDeform(vec4 pos);
vec3 TDInstanceDeformVec(vec3 vec);

For the transform, access these matrices using the functions:

mat4 TDInstanceMat(int instanceIndex);
mat4 TDInstanceMat();

These matrices will contain the entire transform, including TX, TY, TZ, SX, SY, SZ as well as Rotate To.

Attribute Functions

When modifying the texture coordinates, these functions do the texture coordinate modifications per instance. t is the texture coordinate to modify. The version without instanceIndex will use the current instance ID automatically.

vec3 TDInstanceTexCoord(int instanceIndex, vec3 t);
vec3 TDInstanceTexCoord(vec3 t);

To modify the diffuse color, these functions will replace/add/subtract from the original diffuse color. In general you'll want to pass the attribute Cd into these functions and have them modify it. If instance color is not in use, these functions will just return the passed-in color, unmodified.

vec4 TDInstanceColor(vec4 curColor);
vec4 TDInstanceColor(int instanceIndex, vec4 curColor);
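A hedged sketch of applying both the per-instance texture coordinate and color in a vertex shader. uv[0] and Cd are assumed to be the standard texture coordinate and color attributes, and the out variable names are your own choice:

out vec3 vUV;
out vec4 vColor;

void main()
{
	vUV = TDInstanceTexCoord(uv[0]);
	vColor = TDInstanceColor(Cd);
	gl_Position = TDWorldToProj(TDDeform(vec4(P, 1.0)));
}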

Custom instance attributes can be retrieved using these functions:

vec4 TDInstanceCustomAttrib0();
vec4 TDInstanceCustomAttrib0(int instanceIndex);
vec4 TDInstanceCustomAttrib1();
vec4 TDInstanceCustomAttrib1(int instanceIndex);
vec4 TDInstanceCustomAttrib2();
vec4 TDInstanceCustomAttrib2(int instanceIndex);
vec4 TDInstanceCustomAttrib3();
vec4 TDInstanceCustomAttrib3(int instanceIndex);

Instance Texturing

These are available on Nvidia Kepler GPUs or newer, on Windows only.

Instance texturing allows mapping any number of individual textures onto instances. The number of textures available in a single render is extremely high, and access is fast. It avoids needing to use a 2D Texture Array to map multiple images onto instances. Only one type of instance texture is supported at a time, so if the wrong function is called (e.g. the *Cube() version when the instance textures are 2D textures), you will receive a white texture instead of one of the instance textures. Use these functions to fetch the instance texture for the current instance.

// AVAILABLE IN THE VERTEX SHADER ONLY
sampler2D TDInstanceTexture2D();
sampler3D TDInstanceTexture3D();
sampler2DArray TDInstanceTexture2DArray();
samplerCube TDInstanceTextureCube();

If you require more custom control of which instance texture you wish to use, you can use these functions instead:

// AVAILABLE IN THE VERTEX SHADER ONLY
// Gives you the textureIndex for the given instanceIndex, or the current instance.
uint TDInstanceTextureIndex(int instanceIndex);
uint TDInstanceTextureIndex();

// Returns the texture for the given textureIndex.
sampler2D TDInstanceTexture2D(int textureIndex);
sampler3D TDInstanceTexture3D(int textureIndex);
sampler2DArray TDInstanceTexture2DArray(int textureIndex);
samplerCube TDInstanceTextureCube(int textureIndex);

There are two pieces of information used; the first is the texture index. This comes from the Texture Index parameter in the Geometry COMP. You can get this texture index with either TDInstanceTextureIndex() or TDInstanceTextureIndex(int instanceIndex) (if you wish to get the index of an instance different than the current one). Using this texture index, or some other texture index you've calculated on your own, you can then call the appropriate TDInstanceTexture*() function to get the sampler for the texture you are looking for.

Once you have the sampler, you use it like any other sampler, using the texture() function to sample a color from it. To use it in the pixel shader you will need to pass the sampler through an in/out variable to the pixel shader for it to gain access to it. Note: older Nvidia drivers had a bug where passing a sampler* through the in/out parameters wouldn't work; you had to use a uint64_t as the variable type for the in/out variable, and cast the sampler to and from it when assigning and retrieving it.

The best way to see this code being used for real is to output a shader from the Phong MAT that is doing instance texturing.

Point Sprites

When rendering point sprite primitives you are required to write to the vertex shader output gl_PointSize. This output variable determines how large the point sprite is (in pixels) when it is rendered. If you don't write to the output then your point sizes are undefined.

Each point sprite will be rendered as a square gl_PointSize pixels wide. The square receives texture coordinates from 0-1 over its entire area in the pixel shader input variable gl_PointCoord.
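A hedged sketch; the size is arbitrary and sSpriteMap is an assumed user sampler uniform:

// Vertex shader: every sprite is drawn 10 pixels wide.
gl_PointSize = 10.0;

// Pixel shader: gl_PointCoord runs 0-1 across the sprite's square.
vec4 spriteColor = texture(sSpriteMap, gl_PointCoord);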

Order Independent Transparency

You can make your shader support Order Independent Transparency by simply adding this line at the start of your pixel shader's main() function. If Order Independent Transparency isn't enabled in the Render TOP, then this function will do nothing.

TDCheckOrderIndTrans();

Dithering

If dithering is enabled in the Render TOP, you can have this dithering applied to your color by simply calling:

finalColor = TDDither(finalColor);

You generally want to do this right at the end of the shader, just before you write the value to your output color. If dithering is disabled in the Render TOP this function will still be available (to avoid compiler errors), but it will leave the color unchanged.

Picking

The Render Pick DAT and CHOP do their work with a render operation, so they need to interact with the shader to do their work. If you export a Phong MAT shader you will see the following lines in it:

#ifndef TD_PICKING_ACTIVE
	// All the typical shader code
#else
	TDWritePickingValues();
#endif

The key thing occurring here is that when picking is active, the define TD_PICKING_ACTIVE is set and only the code inside the #else block is executed. The function:

void TDWritePickingValues();

will write default values for picking, which the Render Pick DAT/CHOP will read. If you have a custom shader that changes vertex positions in a non-standard way, or if you want to output different kinds of information (such as a color other than Cd), you can replace the values that have been written by this function afterwards. The values available to you are:

TDPickVertex {
	vec3 sopSpacePosition;
	vec3 worldSpacePosition;
	vec3 camSpacePosition;
	vec3 sopSpaceNormal;
	vec3 worldSpaceNormal;
	vec3 camSpaceNormal;
	vec3 uv[1];
	flat int instanceId;
	vec4 color;
} vTDPickVert;

So for example, if you are modifying the vertex position in a way different from the standard TDDeform() way, you could write the newly calculated values like this. Be sure to do this AFTER the call to TDWritePickingValues(), otherwise that call will overwrite your values.

TDWritePickingValues();
vTDPickVert.sopSpacePosition = newPosition;
vTDPickVert.worldSpacePosition = (uTDMats[TDCameraIndex()].world * vec4(newPosition, 1.0)).xyz;
vTDPickVert.camSpacePosition = (uTDMats[TDCameraIndex()].worldCam * vec4(newPosition, 1.0)).xyz;

You do not have to write to all the entries in this structure, but you can for completeness. Only the values that are being read by the Render Pick CHOP/DAT (selected in their parameters) must be filled in.

For custom attributes that you set for picking in the Render Pick CHOP or Render Pick DAT, the attributes are available in vTDCustomPickVert with the name and size as defined in the Render Pick node.

Shadertoy

VR Shaders

Shaders that come from Shadertoy that support VR rendering will have a mainVR function defined. Re-creating the fragRayOri and fragRayDir variables that function uses inside of TD is simple. In the vertex shader:

vec4 worldSpaceVert = TDDeform(vec4(P, 1.0));
vec4 worldSpaceCamPos = uTDMats[TDCameraIndex()].camInverse[3]; // The last column of the camera transform is its position

vec3 fragRayOri = worldSpaceCamPos.xyz;
vec3 fragRayDir = worldSpaceVert.xyz - worldSpaceCamPos.xyz;
// Pass these variables to the pixel shader using 'out' variables with names of your choosing

And in the pixel shader you just need to normalize whatever variable fragRayDir was sent through. The variable that came from fragRayOri can be used as-is.

To support these shaders, which are usually raymarching shaders, you'll want to render geometry that covers the entire viewport, such as putting a sphere around your camera.

Other Notes

#version statement

The #version statement will be added to the code automatically for you. Your code should not have a #version statement, otherwise compile errors may occur.

#include statements

#include statements are a relatively new feature in GLSL. Almost all modern drivers support them. You can use an #include statement in one DAT to include code from another DAT. The path can be absolute or relative.
 #include </project1/text1>
 #include <../../geo1/text2>
 

Changes from GLSL 1.20

Shaders written for 1.20 will not compile as 3.30 shaders. The language received a large overhaul, changing the name of many key functions and replacing a lot of functionality. All of the changes can be seen in the official GLSL documentation linked to earlier. Some of the more important changes are:

  • Removed texture1D(sampler1D, float), texture2D(sampler2D, vec2), etc. All texture sampling is done with identical function names, regardless of the dimensionality of the texture. e.g. texture(sampler1D, float), or texture(sampler2D, vec2).
  • Removed the keyword varying. Instead use in and out (depending on if the value is getting outputted from the shader or inputted from a previous shader stage). Examples later on in the article.
  • Removed the keyword attribute. Instead just use in in your vertex shader.
  • Removed built-in varyings gl_TexCoord[]. You'll need to always declare your own variables that get output/input between shader stages.
  • Removed gl_FragColor and gl_FragData[]. Instead you name your own color outputs using the syntax layout(location = 0) out vec4 nameYouWant. To output to multiple color buffers, declare outputs at the other locations: layout(location = 1) out vec4 secondFragColor.
  • Removed all built-in attributes such as gl_Vertex, gl_MultiTexCoord0, gl_Normal. In TouchDesigner these attributes will be accessible through automatically declared attributes such as in vec3 P; in vec3 N; in vec4 Cd. More details on this later.
  • Removed almost all built-in uniforms such as matrices (gl_ModelViewMatrix, gl_ProjectionMatrix), light information (gl_LightSource[]), fog information (gl_Fog). All of this data will be available through new means provided by TouchDesigner, detailed later.
  • Arrays of samplers are now supported, and are used extensively in TouchDesigner when appropriate. There are limitations on how these samplers are indexed though, detailed in the GLSL spec for the particular version you are using (3.30 has different rules from 4.10, for example).

Major changes since TouchDesigner088

Many changes were made to TouchDesigner's GLSL API in 099. Most of these changes were done to better facilitate Multi-Camera Rendering. A summary of the changes:

  • Lighting and other work is now done in World space instead of Camera space. This makes code cleaner since shaders would otherwise need to do their work in multiple different camera spaces for multiple cameras. Legacy GLSL shaders are supported with the GLSL TOP's 'Lighting Space' parameter, which will be set to Camera Space for older shaders.
  • TDInstanceID() should be used instead of gl_InstanceID/uTDInstanceIDOffset.
  • uTDMat has been removed when lighting in World Space; use the array uTDMats[] instead.
  • Some values from the uTDGeneral structure have been moved to uTDCamInfos[], since that information is camera specific.
  • A notion of camera index (obtained in the vertex shader using TDCameraIndex()) is needed for some functions, such as TDFog().
  • TDAlphaTest(float) must be called to apply the alpha test. It can be safely called when the alpha test is disabled on the MAT; it does nothing in that case.
  • Before writing any color to an output color buffer, it should be passed through vec4 TDOutputSwizzle(vec4). This ensures the channels end up in the correct place depending on how the channels are stored in the output texture. For example, alpha-only textures may be stored internally as 'red-only' textures, so the alpha value must be swizzled into the red channel before output.
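The last two points above might look like the following in a pixel shader. TDAlphaTest() and TDOutputSwizzle() are the TouchDesigner functions named above; the incoming vertColor varying and the output name are illustrative placeholders.

```glsl
// Pixel shader output stage following the 099 conventions described above.
in vec4 vertColor;                      // color passed in from the vertex shader

layout(location = 0) out vec4 fragColor;

void main()
{
    vec4 color = vertColor;

    // Safe to call even when the alpha test is disabled on the MAT;
    // in that case it does nothing.
    TDAlphaTest(color.a);

    // Always swizzle before writing, so channels land correctly for the
    // output texture's internal format (e.g. alpha-only stored as red-only).
    fragColor = TDOutputSwizzle(color);
}
```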
