Experimental:Kinect Azure TOP

From TouchDesigner Documentation

Summary

The Kinect Azure TOP can be used to configure and capture data from a Microsoft Kinect Azure camera.

NOTE: This TOP requires Microsoft Windows 10 April 2018 Update or newer. An NVIDIA GeForce GTX 1070 or better graphics card is required to obtain efficient body tracking data from the camera, although a CPU-only mode can be enabled using the parameters.

The TOP can be used to configure the settings of the camera (resolution, frame rate, synchronization, etc.) as well as to retrieve captured images from either its color or depth cameras. Image data from one camera can be remapped (aligned) to the other camera in order to match color and depth information. Only one Kinect Azure TOP can be connected to a single Kinect camera. To retrieve additional images from the same camera, use a Kinect Azure Select TOP.

The Kinect Azure can also extract body tracking information and skeleton positions using the depth camera image. To access this data, use a Kinect Azure CHOP and set its Kinect TOP parameter to the primary Kinect Azure TOP that is connected to the device.

Python class: kinectazureTOP_Class


Parameters - Kinect Azure Page

Active active - Enable or disable the camera. Note: Disabling this TOP will also disable any other operators (Kinect Azure Select TOP or Kinect Azure CHOP) that rely on it.  

Sensor sensor - - The serial number of the connected Kinect Azure camera. The TOP will automatically fill the list with all available cameras. Note: Only one Kinect Azure TOP should be connected to a single camera.

  • 000416392312 000416392312 -

Color Resolution colorres - - The resolution of images captured by the color camera. Different resolutions may have different aspect ratios. Note: 4096 x 3072 is not supported at 30 FPS.

  • 1280 x 720 (16:9) 1280x720 -
  • 1920 x 1080 (16:9) 1920x1080 -
  • 2560 x 1440 (16:9) 2560x1440 -
  • 2048 x 1536 (4:3) 2048x1536 -
  • 3840 x 2160 (16:9) 3840x2160 -
  • 4096 x 3072 (4:3) 4096x3072 - Note: This resolution is not supported when operating at 30 FPS. Reduce the framerate to 15 or 5 FPS to use this setting.

Depth Mode depthmode - - The depth mode controls which of the Kinect's two depth cameras (Wide or Narrow FOV) are used to produce the depth image and whether any 'binning' is used to process the data. In 'binned' modes, 2x2 blocks of pixels are combined to produce a filtered, lower-resolution image. Note: Body tracking is not supported when using the Passive IR depth mode.

  • Narrow FOV - Unbinned (640x576) narrowunbinned -
  • Wide FOV - 2x2 Binned (512x512) widebinned -
  • Narrow FOV - 2x2 Binned (320x288) narrowbinned - Note: This mode is not recommended for use with body tracking.
  • Wide FOV - Unbinned (1024x1024) wideunbinned - Note: This mode is not supported at 30FPS and is not recommended for use with body tracking.
  • Passive IR (1024x1024) passiveir - Note: This mode does not support body tracking.

Camera FPS fps - - Controls the frame rate of both the color and depth cameras. Some higher camera resolutions are not supported when running at 30FPS. Lower framerates can produce brighter color images in low light conditions.

  • 5 FPS fps5 -
  • 15 FPS fps15 -
  • 30 FPS fps30 -

CPU Body Tracking cpu - When enabled, body tracking calculations will be done on the CPU rather than on the graphics card. This method is much slower, but does not require a high-powered graphics card to function.  

Image image - - A list of available image types to capture from the device and display in this TOP. All image types have a second version that is mapped (aligned) to the image space of the other camera so that color and depth image data can be matched. The resolution of the image is controlled by the Color Resolution or Depth Mode parameters depending on the type of image selected. Use a Kinect Azure Select TOP to access additional image types from the same camera.

  • Color color - An 8-bit RGBA image from the color camera.
  • Color aligned to Depth colorremap - The color camera image remapped to align with the current depth camera image. The resolution is determined by the Depth Mode.
  • Depth depth - A single channel 32bit floating point depth image where pixel values measure the distance in meters from the camera. Resolution and field of view are determined by the Depth Mode parameter.
  • Depth aligned to Color depthremap - The depth image remapped to align with the current color camera image. The resolution is determined by the Color Resolution parameter.
  • IR ir - A 16-bit integer infrared image captured by the depth camera. Resolution and field of view are determined by the Depth Mode.
  • IR aligned to Color irremap - The infrared image remapped to align with the current color camera image. The resolution is determined by the Color Resolution parameter.
  • Player Index playerindex - An 8-bit single channel image that maps pixels to players that have been identified by the body tracking system. Pixel values represent the body id multiplied by 25 to improve contrast. Note: The player index map may have additional hardware requirements (CUDA capable graphics card) and may not synchronize with the color or depth images depending on current settings. The resolution is determined by the current Depth Mode.
  • Player Index aligned to Color playerindexremap - The player index map remapped to align with the current color image. The resolution is determined by the Color Resolution parameter.
  • Point Cloud pointcloud - A 32bit floating point RGBA image where the depth information has been converted into XYZ positions that are stored in the RGB channels. Position data is represented in meters and the resolution and field of view are determined by the Depth Mode. The Point Cloud image can be viewed in 3D by activating the TOP and selecting 'View as Points' in the right-click menu, or by using the TOP as an instance source in the Geometry COMP.
  • Point Cloud aligned to Color pointcloudremap - The point cloud image remapped to align with the current color image. Note: Remapping the image can create gaps and artifacts when viewing the points in 3D.
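The Player Index scaling described above can be reversed to recover the original body id from a pixel value. A minimal sketch in plain Python (the helper name is illustrative, not part of the TouchDesigner API):

```python
def decode_player_index(pixel_value):
    """Recover the body id from an 8-bit player index pixel.

    The TOP stores body id * 25 to improve contrast, so a pixel
    value of 75 corresponds to body id 3. A value of 0 means the
    pixel belongs to no tracked body.
    """
    return pixel_value // 25

print(decode_player_index(75))  # → 3 (third tracked body)
print(decode_player_index(0))   # → 0 (background)
```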

Align Image to Other Camera remapimage - When enabled, the current image will be remapped to align with images from the other camera. For example, use this feature to create a color camera image that maps to the pixels of the depth camera. The current depth mode and color resolution will be used to do the remapping. Note: Remapping the point cloud image can create artifacts in the distribution of points due to gaps in the remapping algorithm.  

Sync Image to Body Tracking bodyimage - When enabled, the image produced will be delayed so that it corresponds to the most recent data in the body tracking system. The amount of delay may fluctuate based on the power of the processor doing the body tracking and the complexity of the scene. A Kinect Azure Select TOP can be used at the same time to retrieve the real-time image stream.  

Mirror Image mirrorimage - Flip the image in the horizontal axis.  


Parameters - Color Page

Use the controls on the Color page to adjust the output of the color camera.

Manual Exposure manualexposure - Enable to allow setting the exposure time manually. When disabled, the camera will automatically choose an exposure based on the light levels and frame rate. Note: This feature may not work correctly due to issues in the current Kinect SDK.  

Exposure Time exposure - Adjust the exposure time of the color image measured in microseconds. The time must be less than one frame. Note: This feature may not work correctly due to issues in the current Kinect SDK.  
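Since the exposure must fit within a single frame, its upper bound follows directly from the Camera FPS setting. A quick sanity check in plain Python (the helper name is illustrative, not part of the TouchDesigner API):

```python
def max_exposure_us(fps):
    """Longest exposure time (microseconds) that fits in one frame
    at the given camera frame rate."""
    if fps <= 0:
        raise ValueError("fps must be positive")
    return 1_000_000 // fps

# Frame periods for the three supported Camera FPS settings:
print(max_exposure_us(30))  # → 33333
print(max_exposure_us(15))  # → 66666
print(max_exposure_us(5))   # → 200000
```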

Manual White Balance manualwhitebalance - Enable to allow setting the camera white balance manually.  

White Balance whitebalance - Select the temperature in degrees Kelvin used to set the white balance of the image. The value is rounded to the nearest 10 degrees.  
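The rounding behavior described above can be mirrored in code when driving this parameter procedurally, so that the value you store matches what the camera will actually use (the helper name is illustrative, not part of the TouchDesigner API):

```python
def snap_white_balance(kelvin):
    """Round a white balance value to the nearest 10 K, matching
    the rounding the White Balance parameter applies."""
    return round(kelvin / 10) * 10

print(snap_white_balance(4567))  # → 4570
print(snap_white_balance(6504))  # → 6500
```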

Brightness brightness - Used to adjust the brightness of the image from 0 to 255. 128 is the default.  

Contrast contrast - Controls the contrast of the color image.  

Saturation saturation - Controls the saturation of the color image.  

Sharpness sharpness - Adjusts the sharpness of the color image.  

Gain gain - The gain of the color image.  

Backlight Compensation backlight - Enables compensation for bright back lighting in a scene.  

Powerline Frequency powerfreq - - Select the frequency of the power supply for use in the camera's noise-cancellation system.

  • 50Hz 50hz -
  • 60Hz 60hz -


Parameters - Timing Page

Depth Image Delay depthdelay - A delay in microseconds between when the depth and color images are captured. The delay must be less than one frame in length based on the current framerate.  

Wired Sync Mode syncmode - - When using more than one Kinect Azure camera, this setting can be used to determine which unit is the master and which are subordinates.

  • Standalone standalone -
  • Master master -
  • Subordinate subordinate -

Subordinate Delay subdelay - A delay in microseconds between when the master unit captures an image and when this device captures an image. (Only applicable for subordinate devices).  
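When several cameras observe the same scene, Microsoft's Azure Kinect documentation recommends staggering each subordinate's capture by multiples of roughly 160 microseconds so the depth lasers do not interfere with each other (treat the 160 µs figure as an assumption to verify against your SDK version). A sketch of computing Subordinate Delay values, with a conservative check that every delay stays below one frame period:

```python
DEPTH_OFFSET_US = 160  # assumed minimum spacing between depth laser firings

def subordinate_delays(num_cameras, fps=30):
    """Suggest a Subordinate Delay (microseconds) for each subordinate.

    Camera 0 is the master (the delay does not apply to it); each
    subordinate is staggered by a multiple of DEPTH_OFFSET_US. All
    delays must fit within one frame period at the chosen frame rate.
    """
    frame_us = 1_000_000 // fps
    delays = [i * DEPTH_OFFSET_US for i in range(1, num_cameras)]
    if delays and delays[-1] >= frame_us:
        raise ValueError("too many cameras for this frame rate")
    return delays

print(subordinate_delays(4))  # → [160, 320, 480]
```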


Parameters - Common Page

Output Resolution outputresolution - - Quickly change the resolution of the TOP's data.

  • Use Input useinput - Uses the input's resolution.
  • Eighth eighth - Multiply the input's resolution by that amount.
  • Quarter quarter - Multiply the input's resolution by that amount.
  • Half half - Multiply the input's resolution by that amount.
  • 2X 2x - Multiply the input's resolution by that amount.
  • 4X 4x - Multiply the input's resolution by that amount.
  • 8X 8x - Multiply the input's resolution by that amount.
  • Fit Resolution fit - Grow or shrink the input resolution to fit this resolution, while keeping the aspect ratio the same.
  • Limit Resolution limit - Limit the input resolution to be not larger than this resolution, while keeping the aspect ratio the same.
  • Custom Resolution custom - Directly control the width and height.
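The fixed multiplier entries above scale the input resolution by a constant factor. A small sketch of the arithmetic in plain Python (the helper and the dictionary are illustrative, not part of the TouchDesigner API):

```python
# Scale factors for the fixed multiplier menu entries
SCALE = {'eighth': 1/8, 'quarter': 1/4, 'half': 1/2,
         '2x': 2, '4x': 4, '8x': 8}

def output_resolution(mode, in_w, in_h):
    """Resolution produced by one of the fixed multiplier entries."""
    s = SCALE[mode]
    return int(in_w * s), int(in_h * s)

print(output_resolution('half', 1920, 1080))  # → (960, 540)
print(output_resolution('2x', 640, 576))      # → (1280, 1152)
```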

Resolution resolution - - Enabled only when the Output Resolution parameter is set to Custom Resolution. Some Generators like Constant and Ramp do not use inputs and only use this field to determine their size. The drop down menu on the right provides some commonly used resolutions.

  • W resolutionw -
  • H resolutionh -

Resolution Menu resmenu -  

Use Global Res Multiplier resmult - Uses the Global Resolution Multiplier found in Edit>Preferences>TOPs. This multiplies all the TOPs resolutions by the set amount. This is handy when working on computers with different hardware specifications. If a project is designed on a desktop workstation with lots of graphics memory, a user on a laptop with only 64MB VRAM can set the Global Resolution Multiplier to a value of half or quarter so it runs at an acceptable speed. By checking this checkbox on, this TOP is affected by the global multiplier.  

Output Aspect outputaspect - - Sets the image aspect ratio allowing any textures to be viewed in any size. Watch for unexpected results when compositing TOPs with different aspect ratios. (You can define images with non-square pixels using xres, yres, aspectx, aspecty where xres/yres != aspectx/aspecty.)

  • Use Input useinput - Uses the input's aspect ratio.
  • Resolution resolution - Uses the aspect of the image's defined resolution (i.e. 512x256 would be 2:1), whereby each pixel is square.
  • Custom Aspect custom - Lets you explicitly define a custom aspect ratio in the Aspect parameter below.

Aspect aspect - - Use when Output Aspect parameter is set to Custom Aspect.

  • Aspect1 aspect1 -
  • Aspect2 aspect2 -

Aspect Menu armenu -  

Input Smoothness inputfiltertype - - This controls pixel filtering on the input image of the TOP.

  • Nearest Pixel nearest - Uses nearest pixel or accurate image representation. Images will look jaggy when viewing at any zoom level other than Native Resolution.
  • Interpolate Pixels linear - Uses linear filtering between pixels. This is how you get TOP images in viewers to look good at various zoom levels, especially useful when using any Fill Viewer setting other than Native Resolution.
  • Mipmap Pixels mipmap - Uses mipmap filtering when scaling images. This can be used to reduce artifacts and sparkling in moving/scaling images that have lots of detail.

Fill Viewer fillmode - - Determine how the TOP image is displayed in the viewer. NOTE: To get an understanding of how TOPs work with images, you will want to set this to Native Resolution as you lay down TOPs when starting out. This will let you see what is actually happening without any automatic viewer resizing.

  • Use Input useinput - Uses the same Fill Viewer settings as its input.
  • Fill fill - Stretches the image to fit the edges of the viewer.
  • Fit Horizontal width - Stretches image to fit viewer horizontally.
  • Fit Vertical height - Stretches image to fit viewer vertically.
  • Fit Best best - Stretches or squashes image so no part of image is cropped.
  • Fit Outside outside - Stretches or squashes image so image fills viewer while constraining its proportions. This often leads to part of image getting cropped by viewer.
  • Native Resolution nativeres - Displays the native resolution of the image in the viewer.

Viewer Smoothness filtertype - - This controls pixel filtering in the viewers.

  • Nearest Pixel nearest - Uses nearest pixel or accurate image representation. Images will look jaggy when viewing at any zoom level other than Native Resolution.
  • Interpolate Pixels linear - Uses linear filtering between pixels. Use this to get TOP images in viewers to look good at various zoom levels, especially useful when using any Fill Viewer setting other than Native Resolution.
  • Mipmap Pixels mipmap - Uses mipmap filtering when scaling images. This can be used to reduce artifacts and sparkling in moving/scaling images that have lots of detail. When the input is 32-bit float format, only nearest filtering will be used (regardless of what is selected).

Passes npasses - Duplicates the operation of the TOP the specified number of times. For every pass after the first it takes the result of the previous pass and replaces the node's first input with the result of the previous pass. One exception to this is the GLSL TOP when using compute shaders, where the input will continue to be the connected TOP's image.  

Channel Mask chanmask - Allows you to choose which channels (R, G, B, or A) the TOP will operate on. All channels are selected by default.  

Pixel Format format - - Format used to store data for each channel in the image (i.e. R, G, B, and A). Refer to Pixel Formats for more information.

  • Use Input useinput - Uses the input's pixel format.
  • 8-bit fixed (RGBA) rgba8fixed - Uses 8-bit integer values for each channel.
  • sRGB 8-bit fixed (RGBA) srgba8fixed - Uses 8-bit integer values for each channel and stores color in sRGB colorspace. Note that this does not apply an sRGB curve to the pixel values, it only stores them using an sRGB curve. This means more data is used for the darker values and less for the brighter values. When the values are read downstream they will be converted back to linear. For more information refer to sRGB.
  • 16-bit float (RGBA) rgba16float - Uses 16-bits per color channel, 64-bits per pixel.
  • 32-bit float (RGBA) rgba32float - Uses 32-bits per color channel, 128-bits per pixel.
  • 10-bit RGB, 2-bit Alpha, fixed (RGBA) rgb10a2fixed - Uses 10-bits per color channel and 2-bits for alpha, 32-bits total per pixel.
  • 16-bit fixed (RGBA) rgba16fixed - Uses 16-bits per color channel, 64-bits total per pixel.
  • 11-bit float (RGB), Positive Values Only rgba11float - An RGB floating point format that has 11 bits for the Red and Green channels, and 10-bits for the Blue Channel, 32-bits total per pixel (therefore the same memory usage as 8-bit RGBA). The Alpha channel in this format will always be 1. Values can go above one, but can't be negative, i.e. the range is [0, infinity).
  • 16-bit float (RGB) rgb16float -
  • 32-bit float (RGB) rgb32float -
  • 8-bit fixed (Mono) mono8fixed - Single channel, where RGB will all have the same value, and Alpha will be 1.0. 8-bits per pixel.
  • 16-bit fixed (Mono) mono16fixed - Single channel, where RGB will all have the same value, and Alpha will be 1.0. 16-bits per pixel.
  • 16-bit float (Mono) mono16float - Single channel, where RGB will all have the same value, and Alpha will be 1.0. 16-bits per pixel.
  • 32-bit float (Mono) mono32float - Single channel, where RGB will all have the same value, and Alpha will be 1.0. 32-bits per pixel.
  • 8-bit fixed (RG) rg8fixed - A 2 channel format, R and G have values, while B is 0 always and Alpha is 1.0. 8-bits per channel, 16-bits total per pixel.
  • 16-bit fixed (RG) rg16fixed - A 2 channel format, R and G have values, while B is 0 always and Alpha is 1.0. 16-bits per channel, 32-bits total per pixel.
  • 16-bit float (RG) rg16float - A 2 channel format, R and G have values, while B is 0 always and Alpha is 1.0. 16-bits per channel, 32-bits total per pixel.
  • 32-bit float (RG) rg32float - A 2 channel format, R and G have values, while B is 0 always and Alpha is 1.0. 32-bits per channel, 64-bits total per pixel.
  • 8-bit fixed (A) a8fixed - An Alpha only format that has 8-bits per channel, 8-bits per pixel.
  • 16-bit fixed (A) a16fixed - An Alpha only format that has 16-bits per channel, 16-bits per pixel.
  • 16-bit float (A) a16float - An Alpha only format that has 16-bits per channel, 16-bits per pixel.
  • 32-bit float (A) a32float - An Alpha only format that has 32-bits per channel, 32-bits per pixel.
  • 8-bit fixed (Mono+Alpha) monoalpha8fixed - A 2 channel format, one value for RGB and one value for Alpha. 8-bits per channel, 16-bits per pixel.
  • 16-bit fixed (Mono+Alpha) monoalpha16fixed - A 2 channel format, one value for RGB and one value for Alpha. 16-bits per channel, 32-bits per pixel.
  • 16-bit float (Mono+Alpha) monoalpha16float - A 2 channel format, one value for RGB and one value for Alpha. 16-bits per channel, 32-bits per pixel.
  • 32-bit float (Mono+Alpha) monoalpha32float - A 2 channel format, one value for RGB and one value for Alpha. 32-bits per channel, 64-bits per pixel.
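The bits-per-pixel figures above translate directly into per-frame memory cost, which matters when choosing a format for the high-resolution Kinect images. A quick estimate in plain Python (the helper name is illustrative, not part of the TouchDesigner API):

```python
def frame_bytes(bits_per_pixel, width, height):
    """Uncompressed size in bytes of one frame at the given format."""
    return bits_per_pixel // 8 * width * height

# A 32-bit float RGBA point cloud (128 bits/pixel) at the
# Narrow FOV - Unbinned depth resolution of 640x576:
print(frame_bytes(128, 640, 576))  # → 5898240 bytes (~5.6 MB per frame)
```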

