
Programming with the Kinect for Windows Software Development Kit: Displaying Kinect Data

In this chapter from Programming with the Kinect for Windows Software Development Kit you will learn how to display the different Kinect streams. You will also write a tool to display skeletons and to locate audio sources.

Because there is no physical interaction between the user and the Kinect sensor, you must be sure that the sensor is set up correctly. The most efficient way to accomplish this is to provide visual feedback of what the sensor receives. Do not forget to add an option in your applications that lets users see this feedback, because many will not yet be familiar with the Kinect interface. To let users monitor the audio as well, you must provide a visual control that shows the audio source direction and the audio level.


All the code you produce will target Windows Presentation Foundation (WPF) 4.0 as the default development environment. The tools will rely on the drawing features of the framework so that the code can concentrate on Kinect-related functionality.

The color display manager

As you saw in Chapter 2, the Kinect sensor produces a 32-bit RGB color stream. You will now develop a small class (ColorStreamManager) that is in charge of returning a WriteableBitmap filled with the data of each frame.

This WriteableBitmap will be displayed by a standard WPF image control called kinectDisplay:

<Image x:Name="kinectDisplay" Source="{Binding Bitmap}"></Image>

This control is bound to a property called Bitmap that will be exposed by your class.

Before writing this class, you must introduce the Notifier class that helps handle the INotifyPropertyChanged interface (used to signal updates to the user interface [UI]):

using System;
using System.ComponentModel;
using System.Linq.Expressions;

namespace Kinect.Toolbox
{
    public abstract class Notifier : INotifyPropertyChanged
    {
        public event PropertyChangedEventHandler PropertyChanged;

        protected void RaisePropertyChanged<T>(Expression<Func<T>> propertyExpression)
        {
            var memberExpression = propertyExpression.Body as MemberExpression;
            if (memberExpression == null)
                return;

            string propertyName = memberExpression.Member.Name;
            if (PropertyChanged != null)
                PropertyChanged(this, new PropertyChangedEventArgs(propertyName));
        }
    }
}

As you can see, this class uses an expression to detect the name of the property to signal. This is quite useful because, with this technique, you don’t have to pass a string (which is hard to keep in sync with your code when, for example, you rename your properties) to identify the property.
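For example, a small view model derived from Notifier could signal a change like this (the ScoreViewModel class and its Score property are hypothetical names used only for illustration):

```csharp
public class ScoreViewModel : Notifier
{
    int score;

    public int Score
    {
        get { return score; }
        set
        {
            score = value;
            // The lambda is checked by the compiler, so renaming Score
            // with a refactoring tool keeps this call in sync.
            RaisePropertyChanged(() => Score);
        }
    }
}
```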

You are now ready to write the ColorStreamManager class:

using System.Windows.Media.Imaging;
using System.Windows.Media;
using Microsoft.Kinect;
using System.Windows;
public class ColorStreamManager : Notifier
{
    public WriteableBitmap Bitmap { get; private set; }

    public void Update(ColorImageFrame frame)
    {
        var pixelData = new byte[frame.PixelDataLength];

        frame.CopyPixelDataTo(pixelData);

        if (Bitmap == null)
        {
            Bitmap = new WriteableBitmap(frame.Width, frame.Height,
                                         96, 96, PixelFormats.Bgr32, null);
        }

        int stride = Bitmap.PixelWidth * Bitmap.Format.BitsPerPixel / 8;
        Int32Rect dirtyRect = new Int32Rect(0, 0, Bitmap.PixelWidth, Bitmap.PixelHeight);
        Bitmap.WritePixels(dirtyRect, pixelData, stride, 0);

        RaisePropertyChanged(() => Bitmap);
    }
}

Using the frame object, you can get the length in bytes of the frame data with PixelDataLength and use it to create a byte array to receive that content. The content of the frame can then be copied into the buffer using CopyPixelDataTo.

The class creates a WriteableBitmap on the first call of Update. This bitmap is returned by the Bitmap property (used as the binding source for the image control). Notice that the bitmap must use the Bgr32 pixel format (Windows stores pixels in blue/green/red order) with 96 dots per inch (DPI) on the x and y axes.

The Update method simply copies the buffer to the WriteableBitmap on each frame using the WritePixels method of WriteableBitmap.

Finally, Update calls RaisePropertyChanged (from the Notifier class) on the Bitmap property to signal that the bitmap has been updated.

So after initializing the sensor, you can add this code in your application to use the ColorStreamManager class:

var colorManager = new ColorStreamManager();
void kinectSensor_ColorFrameReady(object sender, ColorImageFrameReadyEventArgs e)
{
    using (var frame = e.OpenColorImageFrame())
    {
        if (frame == null)
            return;

        colorManager.Update(frame);
    }
}
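For the handler to be called, the color stream must be enabled and the event subscribed before the sensor starts. A minimal sketch (assuming kinectSensor is an already initialized KinectSensor instance):

```csharp
kinectSensor.ColorStream.Enable(ColorImageFormat.RgbResolution640x480Fps30);
kinectSensor.ColorFrameReady += kinectSensor_ColorFrameReady;
kinectSensor.Start();
```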

The final step is to bind the DataContext of the picture to the colorManager object (for instance, inside the load event of your MainWindow page):

kinectDisplay.DataContext = colorManager;

Now every time a frame is available, the ColorStreamManager bound to the image will raise the PropertyChanged event for its Bitmap property, and in response the image will be updated, as shown in Figure 3-1.

Figure 3-1 Displaying the Kinect color stream with WPF.

If you are planning to use the YUV format, two options are available: you can use the ColorImageFormat.YuvResolution640x480Fps15 format, which is already converted to RGB32 for you, or you can use the raw YUV format (ColorImageFormat.RawYuvResolution640x480Fps15), which packs each pair of pixels into 32 bits (16 bits per pixel) and is therefore more compact, but must be converted manually.
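To receive raw YUV frames instead of RGB ones, enable the color stream with the raw format before starting the sensor (again assuming kinectSensor is an initialized KinectSensor instance):

```csharp
kinectSensor.ColorStream.Enable(ColorImageFormat.RawYuvResolution640x480Fps15);
```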

To display this format, you must update your ColorStreamManager:

using System;
using System.Windows;
using System.Windows.Media;
using System.Windows.Media.Imaging;
using Microsoft.Kinect;

public class ColorStreamManager : Notifier
{
    public WriteableBitmap Bitmap { get; private set; }
    int[] yuvTemp;

    static double Clamp(double value)
    {
        return Math.Max(0, Math.Min(value, 255));
    }

    static int ConvertFromYUV(byte y, byte u, byte v)
    {
        byte b = (byte)Clamp(1.164 * (y - 16) + 2.018 * (u - 128));
        byte g = (byte)Clamp(1.164 * (y - 16) - 0.813 * (v - 128) - 0.391 * (u - 128));
        byte r = (byte)Clamp(1.164 * (y - 16) + 1.596 * (v - 128));

        return (r << 16) + (g << 8) + b;
    }

    public void Update(ColorImageFrame frame)
    {
        var pixelData = new byte[frame.PixelDataLength];

        frame.CopyPixelDataTo(pixelData);

        if (Bitmap == null)
        {
            Bitmap = new WriteableBitmap(frame.Width, frame.Height,
                                         96, 96, PixelFormats.Bgr32, null);
        }

        int stride = Bitmap.PixelWidth * Bitmap.Format.BitsPerPixel / 8;
        Int32Rect dirtyRect = new Int32Rect(0, 0, Bitmap.PixelWidth, Bitmap.PixelHeight);

        if (frame.Format == ColorImageFormat.RawYuvResolution640x480Fps15)
        {
            if (yuvTemp == null)
                yuvTemp = new int[frame.Width * frame.Height];

            int current = 0;
            for (int uyvyIndex = 0; uyvyIndex < pixelData.Length; uyvyIndex += 4)
            {
                byte u = pixelData[uyvyIndex];
                byte y1 = pixelData[uyvyIndex + 1];
                byte v = pixelData[uyvyIndex + 2];
                byte y2 = pixelData[uyvyIndex + 3];

                yuvTemp[current++] = ConvertFromYUV(y1, u, v);
                yuvTemp[current++] = ConvertFromYUV(y2, u, v);
            }

            Bitmap.WritePixels(dirtyRect, yuvTemp, stride, 0);
        }
        else
            Bitmap.WritePixels(dirtyRect, pixelData, stride, 0);

        RaisePropertyChanged(() => Bitmap);
    }
}

The ConvertFromYUV method is used to convert a (y, u, v) vector to an RGB integer. Because this operation can produce out-of-range component values, you must use the Clamp method to keep each component within [0, 255]. For example, (y, u, v) = (16, 128, 128) yields r = g = b = 0, that is, black.

The important point to understand here is how YUV values are stored in the stream. The raw YUV stream packs each pair of pixels into 32 bits, using the following layout: 8 bits for U, 8 bits for Y1, 8 bits for V, and 8 bits for Y2. The first pixel is composed from (Y1, U, V) and the second pixel from (Y2, U, V); the two pixels share their chroma values (U and V).

Therefore, you need to run through all incoming YUV data to extract pixels:

for (int uyvyIndex = 0; uyvyIndex < pixelData.Length; uyvyIndex += 4)
{
    byte u = pixelData[uyvyIndex];
    byte y1 = pixelData[uyvyIndex + 1];
    byte v = pixelData[uyvyIndex + 2];
    byte y2 = pixelData[uyvyIndex + 3];

    yuvTemp[current++] = ConvertFromYUV(y1, u, v);
    yuvTemp[current++] = ConvertFromYUV(y2, u, v);
}

Now the ColorStreamManager is able to process both the RGB and the raw YUV color stream formats.