Category Archives: KinectiCopter

The Beginning of the Cocoa Kinect Example

Edit: I now have a new slightly polished cocoa-freenect wrapper for use in your Kinect-Cocoa projects! Check out the post here.

Today I pushed the final application of my libfreenect-on-the-Mac beginners' guide to GitHub for everyone to see and download. Hopefully this will help a lot of Kinect beginners get started with their projects and produce some cool things :)

I did forget to mention in the Readme, but you will need to be running Mac OS X 10.7 (Lion) to build this program. I'm not sure if the included app will work on older versions, but you can give it a go :) If you would like to build it on an older version, you will need to change some of the code in the “Kinect Processing.m” file. The code causing problems is the “@autoreleasepool {}” block, which is not supported by earlier versions of the SDK, so you will have to swap it for the older equivalent (create an NSAutoreleasePool at the top of the scope and drain it at the bottom). It's not too hard, but I won't be making the change myself I'm afraid, because the code looks nicer this way :)

Also, for all of you out there without a Kinect, I have included a sample point cloud file which you can import into the app (in all the usual Mac ways, i.e. dragging the file onto the app icon, double clicking the file, using the File->Open… menu and of course the “Import” button in the app).

I hope you enjoy :) and here is the Readme that is on the GitHub page:

OpenKinect Cocoa Example

This uses the libfreenect library produced by the good people of the OpenKinect community. This code gives an example of how to use the libfreenect library in your Cocoa applications on Mac OS X.

It took me ages to learn how to begin programming with the Kinect on my Mac and there wasn't a great deal of help on the internet that I could find :( so I spent a long time figuring it all out (especially with OpenGL, that thing is a bastard) and then I finally created this app, which will form the final application of a guide I will make in the summer.

The guide will take a semi-beginner programmer (someone who is already experienced with Objective-C; I'm not going to teach that, but I will give a link to a guy on YouTube who taught me), show them how to install all the libraries they need and then take them through all the steps necessary to produce this code.
To be honest I wish I had found something like this on the internet myself, ha ha. Oh well :) I like working things out.

To use this code you will first need to install libfreenect:
– There's the OpenKinect website, which will be more up to date –
– Or there is my website where I have outlined a method –

And then you will need to download the code from this GitHub page. Your best bet is probably the download-as-.zip button, or you can go into your Terminal app and paste in:
git clone git://

You can then open up the “OpenKinect Cocoa Example.xcodeproj” file, build & run it and have a play. Make sure you have a Kinect though ;)

A feature you might like is that you can export and import point cloud files (.pcf). I've included one in there for you to play with if you don't have a Kinect yet.



A quick update :)

Full screen and efficient Kinect viewing.

So, here is a bit of news about the project so far:

  • I have made quite a bit of progress with my project using Processing (link). It was very easy to use, but it was not very efficient at drawing to the screen and I didn't have full control over the data.
  • So then I made some more progress on the Objective-C and Cocoa side of the project (as you can see above) and things are going well :) Now I just need to get the object detection working that I managed in Processing. I will show this off soon.

What I’m going to do soon:

  • A beginners' guide to using the Kinect in Cocoa :) Seeing as there isn't much help out there for beginners (like I was..) I will be creating this guide to help out all those Mac programmers. Also, once it's done, users of other operating systems may be able to follow some of the methods when building their own apps.
  • Get some more pictures of object detection up!
  • Then some helicopter tracking..



Lots of Kinect Point Cloud Views!

Well, today, wow, this is a big one. Today I managed to improve a few things in the Kinect cloud view in the KinectiCopter app.

The view is able to display in both colour (RGB data from the Kinect) and plain white. I have also created a new KCKinectController class which collects the Kinect data and controls the display of 4 separate NSOpenGLViews: isometric (just rotatable at the moment, might stay this way..), front view, top view and side (right) view.


Also, error handling! Well, error handling for there being no Kinect connected. If the program detects no Kinects connected, it will display a message in the console and quit 2 seconds later. Obviously in later versions this will be changed so that the user receives a better error message that they can actually see without opening the Console app.

I’ll put up a post sometime soon showing how I managed to get so many instances of NSOpenGLView all showing one point cloud with just one set of calls for the Kinect point cloud. Then everyone will be able to do it :) except I won’t be putting up my own code, because that’s for my uni project!…

Lovely stuff.


How can I do some processing of Kinect data?

The next task in my project, find out the location and size of all the objects in the scene! Sounds a bit tricky…

After a quick bit of looking around I found PCL (the Point Cloud Library), which contains a lot of functions for manipulating point clouds, calculating their features and other point-cloud jobs, i.e. the stuff the Kinect gives us :) Good stuff.

Let's give it an install and have a play with the tutorials (they look wicked, definitely the best tutorials for a library I've seen so far :) ).

So. After a week of attempted installs, I finally got it to install properly. That was soooo hard! I documented the easy way of doing it in the end, so I can share it if this next bit works out… After it was all installed, everything seemed awesome until I tried to compile some sample C++ code in Xcode. This did not work out so nicely :(

I think the problem is that Xcode does not see the correct header files, or it is looking in the wrong place for them. When trying to #include the PCL headers it makes you put the folder prefix “pcl-1.4/pcl/…” at the front, which messes up all the other #include statements within the header files, as they are only looking for “pcl/…”.

So after lots of hunting on the internet for any way I can fix this I tried compiling the sample code with CMake using the defined method and it worked fine! This led me to believe that the libraries were all working fine and that it was just something to do with how Xcode works.
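The “defined method” here is PCL's CMake workflow. For reference, the skeleton the PCL tutorials use looks roughly like this (the project name, component list and source file name below are placeholders for whatever sample you're building):

```cmake
cmake_minimum_required(VERSION 2.8)
project(pcl_test)

# find_package pulls in PCL's include dirs, library dirs and compiler
# definitions -- exactly the plumbing Xcode was missing above.
find_package(PCL 1.4 REQUIRED COMPONENTS common io)
include_directories(${PCL_INCLUDE_DIRS})
link_directories(${PCL_LIBRARY_DIRS})
add_definitions(${PCL_DEFINITIONS})

add_executable(pcl_test pcl_test.cpp)
target_link_libraries(pcl_test ${PCL_LIBRARIES})
```

Because include_directories gets PCL's real header root, the sources can just say #include <pcl/...> with no awkward “pcl-1.4/” prefix.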

So, I did lots more searching around and found a post (look near the bottom) which talks about pretty much the problem I'm having, I think. So what I need to do at some point now is uninstall Homebrew and the libraries it installed, then install MacPorts and install PCL that way. This will involve A LOT more painful installing and compiling (MacPorts takes loads longer for some reason) and I find it harder to use (I love how simple the commands for Homebrew are :) )

So, when I eventually get round to installing this, which may be sometime soon, I'll come back and tell everyone how it went and how to fix/avoid all the problems that I ran into.

Please, please, please let PCL work. It would make things sooooo much easier!!



Installing OpenKinect and OpenCV the easy way on a Mac

Edit: I now have a new slightly polished cocoa-freenect wrapper for use in your Kinect-Cocoa projects! Check out the post here. On the repo there are also new and improved installation instructions!

For ages I tried to find an easy way to install both libfreenect (OpenKinect) and OpenCV on my mac without having to follow loads of lines of instructions and getting lots of errors in the process.

So, after quite a while of searching around for an easy install, which didn't involve me installing loads of different package managers, I came across these methods which work really nicely :)

First, Install Homebrew, if you don’t have it already:

  • It's super easy… just follow the instructions… GO!
  • That was crazy fast. Now do whichever of the following you need…


  • If you still have your Terminal open, type in (or copy and paste) these commands one at a time, pressing Enter after each.
cd /usr/local/Library/Formula


curl --insecure -O ""
curl --insecure -O ""
brew install libfreenect
  • Then, once all that's finished you're done with OpenKinect :) You can give it a test by typing this into the Terminal (as long as you're still in the install directory..)

OpenCV (2.3.1a):

  • Now, OpenCV is even easier! Just type in (or copy and paste, again) this command:
brew install opencv
  • This one takes a bit longer than OpenKinect…
  • And then you're all done!
  • You can close Terminal now :)

Now the last task. If you want to use these libraries in Xcode just follow these steps:

  • These instructions are for Xcode 4, by the way. Sorry Xcode 3 people, I started using Xcode from version 4.. But don't worry, if you know what you're doing this is pretty much the same process as using the built-in Mac libraries.
  • If you're creating a Cocoa application it's a little simpler:
    • Open your YourApp.xcodeproj file
    • Select the target you want to add it to, there will usually only be one
    • Then click “Summary”
    • And under “Linked Frameworks and Libraries” press the plus button
    • In the search field that shows up type in “libfreenect”
    • You will see all the libraries that begin with libfreenect, so now you can choose one :) I usually use libfreenect_sync because, from what I have seen so far, it is easier to use in Objective-C programs. Select the latest version of it that shows up.
    • That’s it for that. Now to include it in your application code just type this #import <libfreenect/libfreenect_sync.h>
  • Now, if it isn’t a Cocoa application but a Console application do these things:
    • Open your YourApp.xcodeproj file
    • Select the target you want to add it to
    • Click on the “Build Phases” tab
    • Then under “Link Binary With Libraries” press the plus button
    • In the search field that shows up type in “libfreenect”
    • Now you can select which library you would like to use as I discussed in the second to last point of the Cocoa app method.
    • All done!

So now you should be all sorted with OpenKinect and OpenCV. (When searching, the OpenCV libraries are called libopencv… There are quite a few though; I don't really want to go into what does what and which ones you need… I don't really know that myself just yet..)

Anyways, bye bye everyone


Making little bits of progress :)

So, at the moment I'm in the middle of my January exams :( so work on the project is taking a very far back seat. But I decided to have a little go at making a few things tidier and to do a little experimenting.

Firstly, I came up with a name! I decided on “KinectiCopter”. I thought Helicopter and Kinect and stuck the two together (if you didn’t realise).

I then thought up a few ideas for a logo/icon for my project/application and finally came up with (after a lot of time staring at the screen) the following logo :)

KinectiCopter Icon Version 3


After sorting out a new icon for the KinectiCopter application I hadn't even made yet, I was itching to do something, so I put off revision a little more and began a new project which may well be the final one. I copied over all the work I had previously done on the Kinect: my KCKinectController class, which takes care of the interface between the libfreenect_sync library and my Objective-C program, and my previous work getting the video and depth data to display using OpenGL/OpenCV.

I then couldn't resist making another OpenGL view class which, instead of showing the user 2D images of video and depth, shows a point cloud view of the scene. There was an example of this in the OpenKinect examples, so I had a long hard look at it to work out what it did and how, and then recreated it in my own program :)

When building it, it seemed to work fine except that the view was really zoomed out compared to the example, and when I did zoom in things looked like they were overlapping… I had a quick think and then decided that instead of using my own KCKinectController class to get the depth and video data, I would go straight to libfreenect_sync and get the data from that. This seemed to fix everything! But why? No idea. I do have a feeling it has something to do with me converting the depth/video data to an IplImage for OpenCV and then converting it back to raw data.

So, I'm now thinking that instead of getting the controller to convert to IplImage and then back to data, I should have the controller give the user the data it collected in the first place, and then have some other methods for converting when I want to use OpenCV.

That's what I'll try next :)


I am the Kinect Master!!

So, it took me about a week of very frustrating research, fixes, failures of fixes, and libraries installing (and some not installing) to finally get some proper synchronous data back from the Kinect into my program.

There were so many things going wrong it's hard to actually remember it all.. but here is what I can remember:

Firstly I set up my Objective-C program to use the libfreenect_sync.h library along with OpenGL to display the RGB and depth images in my program's view. This worked fine for the RGB feed, but the depth feed was just a mess. The images it displayed were messed up; it was like there was a load of overlapping and static, like you used to see on TVs. So I went out to try and fix this.

One fix that I found was to change the OpenGL glTexImage2D setting to GL_RGB for depth and change the input resolution to 320 by 480 pixels. This seemed to fix things a bit, but the image looked low res compared to the glview example.

I then tried loads of other ways to fix it, thinking that it was something wrong with my OpenGL settings, but nothing worked.. I eventually attempted to install OpenCV again on my Mac, and it turned out that the easiest and most straightforward way to do it was to just type this into Terminal (using Homebrew):

sudo brew update
sudo brew install opencv

I think if you then restart Xcode the libraries will show up when you try to add linked frameworks and libraries under “libopencv…”.

So, I got OpenCV installed and I was able to test the c_sync example that was available here (switching it from displaying video to displaying depth). I then found out that it wasn't my programming that was causing the problems; it was something else.

After a lot more playing and searching I found an OpenCV Kinect example that uses the libfreenect_cv.h wrapper. This seemed to use the libfreenect_sync.h header and still display proper depth information. So I tinkered with it and eventually was able to recreate it in my own program using the same libfreenect_cv.h wrapper :)

The next step was to recreate it again, but in my Objective-C program. This also worked. And then finally I created my own Objective-C class which allows me to easily control the Kinect and get data from it :)

(I think the real reason I couldn’t get it working was to do with a certain function I couldn’t get working myself… the function that seems to have a lot of switch-case statements in.)

So, there we are, I finally got it working :D only took a week… grr… so now I just have to work out how I can use OpenCV to help me pick out the objects to avoid in the room, then find the helicopter and track it.


Just after I got my Objective-C Kinect class working :)


Working with depth

On and off for the last two days I have been looking into getting the depth data displayed to the user as well as the RGB. This has proven to be a bit tricky. Firstly, when I got OpenGL to display the depth data the image was really weird: black and white, overlapping and duplicated. There was definitely something going wrong somewhere, although I was using the exact same commands as for the RGB view and as the glview example supplied with libfreenect.
So I did a lot of playing and changed the individual OpenGL parameters one at a time to see if that made any difference. I found that to get some actual colour in the view, and to get things looking a bit more fixed, I should switch the format parameter of the glTexImage2D function to GL_RGBA.

I then did more playing to see why the image was duplicated. It seemed like it was showing one frame, then to the right of it the previous frame, and then underneath them about 2 copies of the same frame, with occasional static messing it all up. It was very weird… It seemed like the image I was receiving from the library was not actually 640×480 pixels but smaller. So I made a few changes to my program's interface so I could change these values while the program was running, to find the actual resolution of the input data.
It turns out the height of the image I was receiving was 320 pixels and the width was 480. This didn't make any sense at all… And it also meant I was missing out on vital resolution that will be very important in my calculations later on.

So after a bit of thinking I wondered whether there is something wrong with the library I am using. At the moment my OpenCV install doesn't work, so I cannot test the example program I have :( I'll have to try and get this working tomorrow.

Also, another thing: if it is the library that is screwing up, then I'll have a look into making my own Objective-C wrapper :) This does mean wasting a load more time though…

And something else I'm wondering about: how do they get the different colours working in the glview example when the variable in mine is only 8 bits wide? I think maybe the variable I'm putting the depth textures into is a bit too small, smaller than the depth data coming in.

Let’s check what the actual resolution of the Kinect IR camera is, or at least the depth image that we get.


Let’s try and get some RGB info :)

Now that I’ve successfully managed to make a program that connects to the Kinect in Objective-C and makes it move up and down and all that, the next challenge is to get it to display some RGB data using OpenGL.

So, firstly I will see how they’ve done it in the libfreenect example…

Wow, it really seems like getting the RGB data is not going to be as simple as tilting the Kinect. The libfreenect API uses callbacks to give the program new data as it is received from the cameras. Also, as I am trying to mix C and Objective-C, it is proving a lot more difficult to get working properly. I have spent a whole day trying to work out how to duplicate the process in Objective-C, how to get the OpenGL view working properly and how to get the callbacks working properly, but I still haven't got it working.

I had a chill and stuff and then thought that the best way to do it is probably to use C-based threads, as used in the libfreenect example, as well as Objective-C threads to monitor the data within the C threads.

Hopefully this will all work out for me, because I have got very used to using Objective-C, Xcode and the Interface Builder. Errgh, I kinda sound emo there. Best stop that now…!

So, the mission is: get some pthreads working in my code as done in the libfreenect example, power it out and get it working just as I wanted ;) and FOR SCIENCE!

.. Christmas :) …

I was doing a bit of research into OpenGL and libfreenect and then discovered (again) that there are wrappers I could maybe use in my code. There is a C sync wrapper that could enable me to read the incoming RGB and depth data when I am ready for it, instead of using callbacks. This could make the program less efficient, as some frames may be lost, or the processor could be trying to read frames when there are no new ones. But I'll give it a look and see what happens. Hopefully it will help me out a lot :)

So, I have successfully managed to get it to let me control the tilt of the Kinect device and also retrieve the state of the tilt every second using another thread. Looking good so far :) now for video.

Now that I have implemented a way for the RGB data to be collected using another thread, it is able to call the drawRect: method when there is data available. And so, it works! I have my program displaying the RGB data at intervals of 0.2 seconds. This gives a video image that is a little jumpy, as I would expect, so now I will decrease the sleep time between getting new RGB data and see what happens.

I have set the sleep time between RGB data calls to 1 millisecond and it works very well :) The image is very smooth and my program doesn't seem to report that there are no new frames (this may be due to the way libfreenect_sync.h works, as it states that data is kept in place until a new frame is available… I think. I should probably check this).

So now the next step is to try and get some depth data displayed too, and at the same time work out how I can determine the depth of individual pixels as I want.


Working out how to use libfreenect in Objective-C

I’ve finally got round to starting it… working out how to use this libfreenect thing in my project. So, I’ve downloaded the libfreenect files and I’ve started looking at the .c example. It creates a window using OpenGL and displays basic Kinect data in it (RGB view and IR view).

My task now is to work out how to use it in my project. Firstly, work out how to use it in C and then see if I can also use it in Objective-C.

… lots of reverse engineering of code…

… a while later…

DONE! -ish. I have managed to reverse engineer the example program to a point where I can get my own program to find the number of Kinect devices connected, connect to the device, change the tilt and LED colour of the device and then disconnect from the device.

I have managed to recreate it in C and then I realised that it would be even easier to make it work in Objective-C! Well kinda easier for me because I’ve been using Objective-C so much recently..

So, here is my C version of the program (the main.c file) which carries out basic movement every time it connects successfully.

#include "libfreenect.h"
#include <stdio.h>

freenect_context *freenectContext;
freenect_device *freenectDevice;
int noDevicesConnected;
int error;

int main(int argc, char **argv) {
    // freenect_init initialises a freenect context. The second parameter can be NULL if not using multiple contexts.
    // freenect_set_log_level sets the log level for the specified freenect context.
    // freenect_select_subdevices selects which subdevices to open when connecting to a new kinect device.
    freenect_init(&freenectContext, NULL);
    freenect_set_log_level(freenectContext, FREENECT_LOG_DEBUG);
    freenect_select_subdevices(freenectContext, (freenect_device_flags)(FREENECT_DEVICE_MOTOR | FREENECT_DEVICE_CAMERA));

    noDevicesConnected = freenect_num_devices(freenectContext);
    printf("Number of devices connected: %d\n", noDevicesConnected);
    // Exit the app if there are no devices connected.
    if (noDevicesConnected < 1) return 1;

    // freenect_open_device opens a Kinect device.
    error = freenect_open_device(freenectContext, &freenectDevice, 0);
    if (error < 0) {
        // Then exit the app if there was an error while connecting.
        printf("Could not open the Kinect device.\n");
        return 1;
    }

    freenect_set_tilt_degs(freenectDevice, 30);
    freenect_set_led(freenectDevice, LED_BLINK_RED_YELLOW);
    printf("Done Functions\n");

    // Tidy up before exiting.
    freenect_close_device(freenectDevice);
    freenect_shutdown(freenectContext);
    return 0;
}
And now there's my Objective-C program, which gives you a GUI and allows you to manually control the tilt of the Kinect using a slider. Here is the interface file:

#import <Foundation/Foundation.h>
#import "libfreenect.h"

@interface KinectMotorController : NSObject {
    freenect_context *kinectContext;
    freenect_device *kinectDevice;

    NSNumber *noDevicesConnected;
    NSInteger error;
}

@property (assign) IBOutlet NSTextField *numberOfDevices;
@property (assign) IBOutlet NSTextField *connectedBool;
@property (assign) IBOutlet NSSlider *tiltControl;

- (IBAction)findDevices:(id)sender;
- (IBAction)connectDevices:(id)sender;
- (IBAction)changeTilt:(id)sender;
- (IBAction)disconnectDevices:(id)sender;

@end


And now the implementation file:

#import "KinectMotorController.h"

@implementation KinectMotorController
@synthesize numberOfDevices;
@synthesize connectedBool;
@synthesize tiltControl;

- (IBAction)findDevices:(id)sender {
	// Initialise the freenect library.
	freenect_init(&kinectContext, NULL);
	freenect_set_log_level(kinectContext, FREENECT_LOG_DEBUG);
	freenect_select_subdevices(kinectContext, (freenect_device_flags)(FREENECT_DEVICE_MOTOR | FREENECT_DEVICE_CAMERA));

	// Find the devices connected and show the user the number.
	noDevicesConnected = [NSNumber numberWithInt:freenect_num_devices(kinectContext)];
	[numberOfDevices setStringValue:[noDevicesConnected stringValue]];
}

- (IBAction)connectDevices:(id)sender {
	error = freenect_open_device(kinectContext, &kinectDevice, 0);
	if (error < 0) {
		[connectedBool setStringValue:@"Failed to connect!"];
	} else {
		freenect_set_led(kinectDevice, LED_GREEN);
		[connectedBool setStringValue:@"Connected!"];
	}
}

- (IBAction)changeTilt:(id)sender {
	freenect_set_tilt_degs(kinectDevice, [tiltControl intValue]);
}

- (IBAction)disconnectDevices:(id)sender {
	freenect_set_led(kinectDevice, LED_RED);
	[connectedBool setStringValue:@"Disconnected."];
	// Actually release the device and context, not just change the LED.
	freenect_close_device(kinectDevice);
	freenect_shutdown(kinectContext);
}

@end
So there we are :) That really wasn't as bad as I thought it would be. When I first saw the libfreenect code it looked like a bit of a nightmare, but at the moment it's fine. We will see what happens when I try to start getting the camera data though… It might turn out to be a bit mental.
