Category Archives: Projects

Cocoa Kinect Wrapper

I have wanted to do this for a long time, but I have finally released a (hopefully) easy-to-use Objective-C wrapper for the libfreenect library!

You can find the wrapper in my GitHub repo, where you can download and use it fairly freely.

Also, if you want to make edits to it or fix problems you find, then please fork the repo and send me a pull request 🙂

If you do have problems, though, and cannot fix them yourself, then please submit a new issue and I will do what I can.

If you haven’t looked at it already, check it out now at

It’s website time

So as you can probably see, it has been a very long time since I last posted! That would be because of the workload from my 3rd Year Project working with the Kinect. Well, that’s completed now (and I won an award for it!) and I’m now using all that expertise in my 4th Year Project!

But here’s some news: I have been building a website. It’s hosted by GitHub and I have a nice URL to go with it. Check it out. It’s not much to look at at the moment, but as time progresses it will contain a lot more information about my projects and, in particular, processing with the Kinect. I can foresee that most of the code will be developed on a Mac for Macs 🙂 apart from, of course, all the code that’s developed for Arduino and other platforms like that… Raspberry Pi.

I will also be working on a way to get some sort of blog set up on that site too, so this blog may actually become redundant… but I think it’s probably for the best, because I’ll be able to share the code, and how I did it, with the internet.

In the meantime…


The Beginning of the Cocoa Kinect Example

Edit: I now have a new, slightly polished cocoa-freenect wrapper for use in your Kinect-Cocoa projects! Check out the post here.

Today I pushed the final application of my libfreenect-on-the-Mac beginners’ guide to GitHub for everyone to see and download. Hopefully this will help a lot of Kinect beginners get started with their projects and produce some cool things 🙂

I did forget to mention in the Readme, but you will need to be running Mac OS X 10.7 (Lion) to build this program. I’m not sure if the included app will work on older versions, but you can give it a go 🙂 If, however, you would like to build it for an older version, you will need to change some of the code in the “Kinect Processing.m” file. The code causing problems is the “@autoreleasepool {}” block, which is not supported by earlier versions of the SDK, so you will have to swap it for the older NSAutoreleasePool approach. It’s not too hard, but I won’t be making the change myself I’m afraid, because the code looks nicer this way :).

Also, for all of you out there without a Kinect, I have included a sample point cloud file which you can import into the app in all the usual Mac ways (dragging the file onto the app icon, double-clicking the file, using the File->Open… menu and, of course, using the “Import” button in the app).

I hope you enjoy 🙂 and here is the Readme that is on the GitHub page:

OpenKinect Cocoa Example

This uses the libfreenect library produced by the good people of the OpenKinect community. This code gives an example of how to use the libfreenect library in your Cocoa applications on Mac OS X.

It took me ages to learn how to begin programming with the Kinect on my Mac, and there wasn’t a great deal of help on the internet that I could find 😦 so I spent a long time figuring it all out (especially with OpenGL, that thing is a bastard) and then I finally created this app, which will form the final application of a guide I will make in the summer.

The guide will take a semi-beginner programmer (someone who is already experienced with Objective-C; I’m not going to teach that, but I will give a link to a guy on YouTube who taught me), show them how to install all the libraries they need, and then take them through all the steps necessary to produce this code.
To be honest, I wish I had found this on the internet myself, ha ha. Oh well 🙂 I like working things out.

To use this code you will first need to install libfreenect:
- There’s the OpenKinect website, which will be more up to date -
- Or there is my website, where I have outlined a method -

And then you will need to download the code from this GitHub page; your best bet is probably using the download as .zip button, or going into your Terminal app and pasting in:
git clone git://

You can then open up the “OpenKinect Cocoa Example.xcodeproj” file, build & run it, and have a play. Make sure you have a Kinect though 😉

A feature you might like, though, is that you can export and import point cloud files (.pcf); I’ll include one in there for you to play with if you don’t have a Kinect yet.



A quick update 🙂

Full screen and efficient Kinect viewing.

So, here is a bit of news about the project so far:

  • I have made quite a bit of progress with my project using Processing (link), but the problems were that it was not very efficient at drawing to the screen and I didn’t have full rein over the data; it was very easy to use, though.
  • So then I made some more progress on the Objective-C and Cocoa side of the project (as you can see above) and things are going well 🙂 Now I just need to get some object detection working, which I managed when using Processing. I will show this off soon.

What I’m going to do soon:

  • A beginners’ guide to using the Kinect in Cocoa 🙂 Seeing as there isn’t much help out there for beginners (like I was…), I will be creating this guide to help out all those Mac programmers. Also, once it’s done, users of other operating systems may be able to follow some of the methods when building their own apps.
  • Get some more pictures of object detection up!
  • Then some helicopter tracking..



Lots of Kinect Point Cloud Views!

Well, today, wow, oh my god, this is a big one. Today I managed to get a few things working better with the Kinect cloud view in the KinectiCopter app.

The view is able to display in both colour (RGB data from the Kinect) and plain white. I have also created a new KCKinectController class which collects the Kinect data and controls the display of 4 separate NSOpenGLViews: isometric (just rotatable at the moment, might stay this way…), front view, top view and side (right) view.


Also, error handling! Well, error handling for there being no Kinect connected. If the program detects no Kinects connected, it will display a message in the console and quit 2 seconds later. Obviously in later versions this will be changed so that the user receives a better error message, one they can actually see without opening the Console app.

I’ll put up a post sometime soon showing how I managed to get so many instances of NSOpenGLView all showing one point cloud with just one set of calls for the Kinect point cloud. Then everyone will be able to do it 🙂 except I won’t be putting up my own code, because that’s for my uni project!…

Lovely stuff.


How can I do some processing of Kinect data?

The next task in my project: find out the location and size of all the objects in the scene! Sounds a bit tricky…

After a quick bit of looking around I found PCL (the Point Cloud Library), which contains a lot of functions for manipulating point clouds, calculating their features and all sorts of other point cloud things; point clouds being the stuff the Kinect gives us 🙂 good stuff.

Let’s give it an install and have a play with the tutorials (they look wicked; definitely the best tutorials for a library I’ve seen so far 🙂 ).

So. After a week of many install attempts, I finally got it to install properly. That was soooo hard! I documented the easy way of doing it in the end, so I can share it if this next bit works out… Once it was all installed properly everything seemed awesome, until I tried to compile some sample C++ code in Xcode. This did not work out so nicely 😦

I think the problem is that Xcode does not see the correct header files, or it is looking in the wrong place for them. When trying to #include the PCL libraries it makes you put the folder “pcl-1.4/pcl/…” at the front, which messes up all the other #include statements within the header files, as they are only looking for “pcl/…”.

So, after lots of hunting on the internet for a way to fix this, I tried compiling the sample code with CMake using the defined method, and it worked fine! This led me to believe that the libraries were all working fine and that it was just something to do with how Xcode works.

So, I did lots more searching around and found a post (look near the bottom) which talks pretty much about the problem I’m having, I think. So what I need to do at some point is uninstall Homebrew and the libraries it installed, then install MacPorts and install everything that way. That will involve A LOT more painful installing and compiling (because MacPorts takes loads longer for some reason), and I find it harder to use (I love how simple the Homebrew commands are 🙂 ).
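In the meantime, the “defined method” that did work is the CMake route from the PCL tutorials. A minimal CMakeLists.txt for it looks roughly like this (a sketch; the project name and source file are placeholders, and the 1.4 version matches the PCL release mentioned above):

```cmake
cmake_minimum_required(VERSION 2.8)
project(pcl_test)

# Let CMake locate the PCL install, however it was installed
find_package(PCL 1.4 REQUIRED)

# These variables are filled in by find_package(PCL ...)
include_directories(${PCL_INCLUDE_DIRS})
link_directories(${PCL_LIBRARY_DIRS})
add_definitions(${PCL_DEFINITIONS})

add_executable(pcl_test pcl_test.cpp)
target_link_libraries(pcl_test ${PCL_LIBRARIES})
```

The nice part is that find_package sorts out the pcl-1.4 include prefix for you, which is exactly the bit Xcode was getting wrong.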

So, when I eventually get round to installing this, which may be sometime soon, I’ll come back and tell everyone how it went and how to fix/avoid all the problems that I ran into.

Please, please, please let PCL work. It would make things sooooo much easier!!



Installing OpenKinect and OpenCV the easy way on a Mac

Edit: I now have a new, slightly polished cocoa-freenect wrapper for use in your Kinect-Cocoa projects! Check out the post here. On the repo there are also new and improved installation instructions!

For ages I tried to find an easy way to install both libfreenect (OpenKinect) and OpenCV on my Mac without having to follow loads of lines of instructions and getting lots of errors in the process.

So, after quite a while of searching around for an easy install, one which didn’t involve installing loads of different package managers, I came across these methods, which work really nicely 🙂

First, install Homebrew, if you don’t have it already:

  • It’s super easy… just follow the instructions… GO!
  • That was crazy fast, now do one of these, or both, or none…


  • If you still have your terminal open, type in (or copy and paste) these commands one at a time and press enter:
cd /usr/local/Library/Formula
curl --insecure -O ""
curl --insecure -O ""
brew install libfreenect
  • Then, once all that’s finished, you’re done with OpenKinect 🙂 You can give it a test by typing this into the terminal (as long as you’re still in the install directory…)

OpenCV (2.3.1a):

  • Now, OpenCV is even easier! Just type in (or copy and paste again) this command:
brew install opencv
  • This one takes a bit longer than OpenKinect…
  • And then, you’re all done!
  • You can close terminal now ๐Ÿ™‚

Now for the last task. If you want to use these libraries in Xcode, just follow these steps:

  • These instructions are for Xcode 4, by the way. Sorry, Xcode 3 people, I started using Xcode from version 4… But don’t worry: if you know what you’re doing, this is pretty much the same process as using the built-in Mac libraries.
  • If you’re creating a Cocoa application it’s a little simpler:
    • Open your YourApp.xcodeproj file
    • Select the target you want to add it to, there will usually only be one
    • Then click “Summary”
    • And under “Linked Frameworks and Libraries” press the plus button
    • In the search field that shows up type in “libfreenect”
    • You will see all the libraries that begin with libfreenect, so now you can choose one 🙂 I usually use libfreenect_sync because, from what I have seen so far, it is easier to use in Objective-C programs. So select the latest version of it that shows up.
    • That’s it for that. Now, to include it in your application code, just add this line: #import <libfreenect/libfreenect_sync.h>
  • Now, if it isn’t a Cocoa application but a Console application do these things:
    • Open your YourApp.xcodeproj file
    • Select the target you want to add it to
    • Click on the “Build Phases” tab
    • Then under “Link Binary With Libraries” press the plus button
    • In the search field that shows up type in “libfreenect”
    • Now you can select which library you would like to use, as discussed in the second-to-last point of the Cocoa app method.
    • All done!

So now you should be all sorted with OpenKinect and OpenCV. (When searching, the OpenCV libraries are called libopencv… there are quite a few though; I don’t really want to go into what does what and which ones you need, because I don’t really know that myself just yet…)

Anyways, bye bye everyone


Making little bits of progress :)

So, at the moment I’m in the middle of my January exams 😦 so work on the project is taking a very far back seat. But I decided to have a little go at making a few things tidier and do a little experimenting.

Firstly, I came up with a name! I decided on “KinectiCopter”. I thought of Helicopter and Kinect and stuck the two together (in case you didn’t realise).

I then thought up a few ideas for a logo/icon for my project/application and finally came up with (after a lot of time staring at the screen) the following logo 🙂

KinectiCopter Icon Version 3


After sorting out a new icon for the KinectiCopter application I hadn’t even made yet, I was itching to do something, so I decided to put off revision a little more and began a new project which may well be the final one. I copied over all the work I had previously done on the Kinect into this project. So, I had my KCKinectController class, which took care of the interface between the libfreenect_sync library and my Objective-C program, and my previous work getting the video and depth data to display using OpenGL/OpenCV.

I then couldn’t resist making another OpenGLView class which, instead of showing the user 2D images of video and depth, showed a point cloud view of the scene. There was an example of this in the OpenKinect examples, so I had a long hard look at it to work out what it did and how, and then I recreated it in my own program 🙂

When building it, it seemed to work fine, except that the view was really zoomed out compared to the example, and when I did zoom in things looked a bit like they were overlapping… I had a quick think and decided that instead of using my own KCKinectController class to get the depth and video data, I would go straight to libfreenect_sync and get the data from that. This seemed to fix everything! But why? No idea. I do have a feeling it has something to do with me converting the depth/video data to an IplImage for OpenCV and then converting it back to raw data.

So, I’m now thinking that instead of getting the controller to convert to IplImage and then back to data, I should just get the controller to give the user the data it collected in the first place, with some other methods for converting for when I want to use OpenCV.

That’s what I’ll try next 🙂


I am the Kinect Master!!

So, it took me about a week of very frustrating research, fixes, failed fixes, and getting some libraries installed (and others refusing to) to finally get some proper synchronous data back from the Kinect into my program.

There were so many things going wrong it’s hard to actually remember them all… but here is what I can remember:

Firstly, I set up my Objective-C program to use the libfreenect_sync.h library along with OpenGL to display the RGB and depth images in the view in my program. This worked fine for the RGB feed, but the depth feed was just a mess. The images it displayed were all messed up; it was like there was a load of overlapping and static, like you used to see on TVs. So I set out to try and fix this.

One fix that I found was to change the OpenGL glTexImage2D setting to GL_RGB for depth and change the input resolution to 320 by 420 pixels. This seemed to fix things a bit, but the image looked a bit low-res compared to the glview example.

I then tried LOADS of other ways to fix it, thinking that something was wrong with my OpenGL settings, but nothing worked… I eventually attempted to install OpenCV again on my Mac, and it turned out that the easiest and most straightforward way to do it was to just type this into Terminal (using Homebrew):

sudo brew update
sudo brew install opencv

I think if you then restart Xcode, the libraries will show up under “libopencv…” when you try to add linked frameworks and libraries.

So, I got OpenCV installed and was able to test the c_sync example that was available here (switching it from displaying video to displaying depth). I then found out that it wasn’t my programming causing the problems; it was something else.

After a lot more playing and searching I found an OpenCV Kinect example that uses the libfreenect_cv.h wrapper. This seemed to use the libfreenect_sync.h header and still be able to display proper depth information. So I tinkered with it and eventually was able to recreate it in my own program using the same libfreenect_cv.h wrapper 🙂

The next step was to recreate it again, but in my Objective-C program. That also worked. And then, finally, I created my own Objective-C class which allows me to easily control the Kinect and get data from it 🙂

(I think the real reason I couldn’t get it working was down to a certain function I couldn’t get working myself… the function that seems to have a lot of switch-case statements in it.)

So, there we are, I finally got it working 😀 it only took a week… grr… So now I just have to work out how I can use OpenCV to help me pick out the objects to avoid in the room, then find the helicopter and track it.


Just after I got my Objective-C Kinect class working 🙂


Working with depth

On and off for the last two days I have been looking into getting the depth data displayed to the user as well as the RGB. This has proven to be a bit tricky. Firstly, when I got OpenGL to display the depth data, the image was really weird: black and white, overlapping and duplicated. There was definitely something going wrong somewhere, although I was using the exact same commands as for the RGB view and as the glview example supplied with libfreenect.
So I did a lot of playing and changed some of the individual OpenGL parameters one at a time to see if that made any difference. I found that to get some actual colour in the view, and to get things looking a bit more fixed, I should switch the format parameter of the glTexImage2D function to GL_RGBA.

I then did more playing to see why the image was being duplicated. It seemed like it was showing one frame, then to the right of it the previous frame, and then underneath them about two more copies of the same frame, with occasional static messing it all up. It was very weird… It seemed like the image I was receiving from the library was not actually 640×480 pixels but smaller. So I made a few changes to the interface of my program so I could change these values while the program was running, to see what the actual resolution of the input data was.
It turns out the height of the image I was receiving was 320 pixels and the width was 480. This didn’t make any sense at all… It also meant I was missing out on vital resolution that will be very important in my calculations later on.

So, after a bit of thinking, I thought that maybe there is something wrong with the library I am using, but at the moment my OpenCV install doesn’t work so I cannot test the example program I have 😦 I’ll have to try and get that working tomorrow though.

Another thing: if it is the library that’s screwing up, then I’ll have a look into making my own Objective-C wrapper 🙂 that does mean wasting a load more time though…

And something else I’m wondering about: how do they get the different colours working in the glview example when the variable in mine is only 8 bits wide?? I think maybe the variable I’m putting the depth textures in is a bit too small, smaller than what I’m getting in for the depth.

Let’s check what the actual resolution of the Kinect IR camera is, or at least of the depth image that we get.
