
The Beginning of the Cocoa Kinect Example

Edit: I now have a new slightly polished cocoa-freenect wrapper for use in your Kinect-Cocoa projects! Check out the post here.

Today I pushed the final application of my libfreenect-on-the-Mac beginners' guide to GitHub for everyone to see and download. Hopefully this will help a lot of Kinect beginners get started with their projects and produce some cool things 🙂

I did forget to mention in the Readme, but you will need to be running Mac OS X 10.7 (Lion) to build this program. I'm not sure if the included app will work on older versions, but you can give it a go 🙂 If however you would like to build it on an older version, you will need to change some of the code in the “Kinect Processing.m” file. The code causing problems is the “@autoreleasepool {}” block, which is not supported by the compilers in earlier SDKs, so you will have to change it back to the older NSAutoreleasePool style. It’s not too hard though, and I won’t be making the change myself I’m afraid because the code looks nicer this way :).
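Concretely, the change is something like this (my sketch, not the actual project code; the NSLog line just stands in for the per-frame Kinect processing):

```objc
#import <Foundation/Foundation.h>

int main(int argc, char *argv[]) {
    // New style (Lion-era compilers and later):
    @autoreleasepool {
        NSLog(@"processing a Kinect frame...");
    }

    // Equivalent older style for earlier SDKs:
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
    NSLog(@"processing a Kinect frame...");
    [pool drain];
    return 0;
}
```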

Also, for all of you out there without a Kinect, I have included a sample point cloud file which you can import into the app (in all the usual Mac ways, i.e. dragging the file onto the app icon, double-clicking the file, using the File->Open… menu, and of course using the “Import” button in the app).

I hope you enjoy 🙂 and here is the Readme that is on the GitHub page:

OpenKinect Cocoa Example

This uses the libfreenect library produced by the good people of the OpenKinect community. This code gives an example of how to use the libfreenect library in your Cocoa applications on Mac OS X.

It took me ages to learn how to begin programming with the Kinect on my Mac and there wasn't a great deal of help on the internet that I could find 😦 so I spent a long time figuring it all out (especially with OpenGL, that thing is a bastard) and then I finally created this app, which will form the final application of a guide I will make in the summer.

The guide will take a semi-beginner programmer (someone who is already experienced with Objective-C; I'm not going to teach that, but I will give a link to a guy on YouTube who taught me), show them how to install all the libraries they need and then take them through all the steps necessary to produce this code.
To be honest I wish I had found this on the internet myself, ha ha. Oh well 🙂 I like working things out.

To use this code you will first need to install libfreenect:
– There's the OpenKinect website, which will be more up to date – http://openkinect.org/wiki/Getting_Started
– Or there is my website where I have outlined a method – https://jamesreuss.wordpress.com/2012/01/28/installing-openkinect-and-opencv-the-easy-way-on-a-mac/

And then you will need to download the code from this GitHub page. Your best bet is probably the “Download as .zip” button, or going into your Terminal app and pasting in:
git clone git://github.com/jimjibone/OpenKinect-Cocoa-Example.git

You can then open up the “OpenKinect Cocoa Example.xcodeproj” file, build & run it, and have a play. Make sure you have a Kinect though 😉

A feature you might like is that you can export and import point cloud files (.pcf); I'll include one in there for you to play with if you don't have a Kinect yet.



A quick update 🙂

Full screen and efficient Kinect viewing.

So, here is a bit of news about the project so far:

  • I have made quite a bit of progress with my project using Processing (link). It was very easy to use, but it was not very efficient at drawing to the screen and I didn't have full control over the data.
  • So then I made some more progress on the Objective-C and Cocoa side of the project (as you can see above) and things are going well 🙂 Now I just need to get some object detection working, which I managed when using Processing. I will show this off soon.

What I’m going to do soon:

  • A beginners' guide to using the Kinect in Cocoa 🙂 Seeing as there isn't much help out there for beginners (like I was..), I will be creating this guide to help out all those Mac programmers. Also, once it's done, users of other operating systems may be able to follow some of the methods used when building their own apps.
  • Get some more pictures of object detection up!
  • Then some helicopter tracking..



I am the Kinect Master!!

So, it took me about a week of very frustrating research, fixes, failures of fixes, and getting libraries installed (and some not installing) to finally get some proper synchronous data back from the Kinect into my program.

There were so many things going wrong it's hard to actually remember it all.. but here is what I can remember:

Firstly, I set up my Objective-C program to use the libfreenect_sync.h wrapper along with OpenGL to display the RGB and depth images in the view in my program. This worked fine for the RGB feed, but the depth feed was just a mess. The images it displayed were messed up: it was like there was a load of overlapping and static, like you used to see on TVs. So I went out to try and fix this.

One fix that I found was to change the OpenGL glTexImage2D format setting to GL_RGB for depth and change the input resolution to 320 by 420 pixels. This seemed to fix things a bit, but the image looked a bit low-res compared to the glview example.

I then tried loads of other ways to fix it, thinking that it was something wrong with my OpenGL settings, but nothing worked. I eventually attempted to install OpenCV again on my Mac, and it turned out that the easiest and most straightforward way to do it was to just type this into Terminal (using Homebrew):

brew update
brew install opencv

I think if you then restart Xcode, the libraries will show up under “libopencv…” when you try to add linked frameworks and libraries.

So, I got OpenCV installed and I was able to test the c_sync example that was available here (switching it from displaying video to displaying depth). I then found out that it wasn't my programming that was causing the problems; it was something else.

After a lot more playing and searching I found an OpenCV Kinect example that uses the libfreenect_cv.h wrapper. This wrapper uses the libfreenect_sync.h header underneath and is still able to display proper depth information. So I tinkered with it and eventually was able to recreate it in my own program using the same libfreenect_cv.h wrapper 🙂

The next step was to recreate it again but in my Objective-C program. Which also worked. And then finally I created my own Objective-C class which allowed me to easily control the Kinect and get data from it 🙂

(I think the real reason I couldn't get it working was down to a certain function I couldn't get working myself… the function that has a lot of switch-case statements in it.)
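For the curious, I believe the function in question is the depth colouring routine from the glview example: it gamma-maps the 11-bit depth value into a 16-bit number, then switches on the high byte to pick a colour band while the low byte fades within it. A rough sketch in plain C (the curve and the exact colour bands are from memory, so treat this as an approximation rather than the real glview code):

```c
#include <stdint.h>

// Gamma lookup table: maps an 11-bit depth sample (0-2047) to a 16-bit value.
static uint16_t t_gamma[2048];

void init_gamma(void) {
    for (int i = 0; i < 2048; i++) {
        float v = i / 2048.0f;
        v = v * v * v * 6.0f;                   // emphasise near depths
        t_gamma[i] = (uint16_t)(v * 6.0f * 256.0f);
    }
}

// Convert one depth sample to an RGB colour: the high byte of the
// gamma-mapped value selects a band, the low byte fades within it.
void depth_to_rgb(uint16_t depth, uint8_t *r, uint8_t *g, uint8_t *b) {
    uint16_t pval = t_gamma[depth & 0x7FF];
    uint8_t lb = (uint8_t)(pval & 0xFF);
    switch (pval >> 8) {
        case 0:  *r = 255;      *g = 255 - lb; *b = 255 - lb; break; // white -> red
        case 1:  *r = 255;      *g = lb;       *b = 0;        break; // red -> yellow
        case 2:  *r = 255 - lb; *g = 255;      *b = 0;        break; // yellow -> green
        case 3:  *r = 0;        *g = 255;      *b = lb;       break; // green -> cyan
        case 4:  *r = 0;        *g = 255 - lb; *b = 255;      break; // cyan -> blue
        case 5:  *r = 0;        *g = 0;        *b = 255 - lb; break; // blue -> black
        default: *r = 0;        *g = 0;        *b = 0;        break; // out of range
    }
}
```

The switch is what makes the rainbow-coloured depth view; if the texture format or buffer width fed into it is wrong, the bands smear into exactly the kind of static I was seeing.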

So, there we are, I finally got it working 😀 only took a week… grr… So now I just have to work out how I can use OpenCV to help me pick out the objects to avoid in the room, and then find the helicopter and track it.


Just after I got my Objective-C Kinect class working 🙂


Working with depth

On and off for the last two days I have been looking into getting the depth data displayed to the user as well as the RGB. This has proven to be a bit tricky. Firstly, when I got OpenGL to display the depth data, the image was really weird: black and white, overlapping and duplicated. There was definitely something going wrong somewhere, although I was using the exact same commands as for the RGB view and as the glview example supplied with libfreenect.
So I did a lot of playing and changed some of the individual OpenGL parameters one at a time to see if that made any difference. Then I found that to get some actual colour in the view, and to get things looking a bit more fixed, I should switch the format parameter of the glTexImage2D function to GL_RGBA.

I then did more playing to see why the image was duplicated. It seemed like it was showing one frame, then to the right of it the previous frame, and then underneath them about two of the same frame, with occasional static messing it all up. It was very weird… It seemed like the image I was receiving from the library was not actually 640×480 pixels but smaller. So I made a few changes to the interface of my program so I could change these values as the program was running, to see what the actual resolution of the input data was.
It turns out that the height of the image I was receiving was 320 pixels and the width was 480. This didn't make any sense at all… It also meant I was missing out on vital resolution that will be very important in my calculations later on.

So after a bit of thinking, I thought that maybe there is something wrong with the library I am using, and at the moment my OpenCV install doesn't work, so I cannot test the example program I have 😦 I'll have to try and get this working tomorrow.

Also, another thing: if it is the library which is screwing up, then I'll have a look into making my own Objective-C wrapper 🙂 This does mean wasting a load more time though…

And something else I'm wondering about: how do they get the different colours working in the glview example when the variable in mine is only 8 bits wide? I think maybe the variable I'm putting the depth textures in is a bit too small, smaller than what I'm getting in for the depth.

Let’s check what the actual resolution of the Kinect IR camera is, or at least the depth image that we get.
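For reference (and this is me guessing ahead, not something I had worked out at the time): in the 11-bit depth mode, each pixel arrives as a 16-bit value holding 0–2047, so an 8-bit variable genuinely is too small. A minimal C sketch of scaling a frame down for an 8-bit greyscale display:

```c
#include <stdint.h>
#include <stddef.h>

// An 11-bit depth sample (0-2047) does not fit in 8 bits, so drop the
// bottom 3 bits to squeeze 2048 levels into 256 for display.
uint8_t depth_to_gray(uint16_t raw) {
    return (uint8_t)(raw >> 3);
}

// Convert a whole depth frame (e.g. 640*480 uint16_t samples) to greyscale.
void depth_frame_to_gray(const uint16_t *depth, uint8_t *gray, size_t npixels) {
    for (size_t i = 0; i < npixels; i++)
        gray[i] = depth_to_gray(depth[i]);
}
```

Reading the depth buffer as 8-bit when it is really 16-bit would also explain the halved apparent resolution: each row would contain twice as many "pixels" as expected.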


Working out how to use libfreenect in Objective-C

I’ve finally got round to starting it… working out how to use this libfreenect thing in my project. So, I’ve downloaded the libfreenect files and I’ve started looking at the .c example. It creates a window using OpenGL and displays basic Kinect data in it (RGB view and IR view).

My task now is to work out how to use it in my project. Firstly, work out how to use it in C and then see if I can also use it in Objective-C.

… lots of reverse engineering of code…

… a while later…

DONE! -ish. I have managed to reverse engineer the example program to a point where I can get my own program to find the number of Kinect devices connected, connect to the device, change the tilt and LED colour of the device and then disconnect from the device.

I have managed to recreate it in C and then I realised that it would be even easier to make it work in Objective-C! Well kinda easier for me because I’ve been using Objective-C so much recently..

So, here is my C version of the program (the main.c file) which carries out basic movement every time it connects successfully.

#include "libfreenect.h"
#include <stdio.h>

freenect_context *freenectContext;
freenect_device *freenectDevice;
int noDevicesConnected;
int error;

int main(int argc, char **argv) {
    // freenect_init initialises a freenect context. The second parameter can be NULL if not using multiple contexts.
    // freenect_set_log_level sets the log level for the specified freenect context.
    // freenect_select_subdevices selects which subdevices to open when connecting to a new Kinect device.
    freenect_init(&freenectContext, NULL);
    freenect_set_log_level(freenectContext, FREENECT_LOG_DEBUG);
    freenect_select_subdevices(freenectContext, (freenect_device_flags)(FREENECT_DEVICE_MOTOR | FREENECT_DEVICE_CAMERA));

    noDevicesConnected = freenect_num_devices(freenectContext);
    printf("Number of devices connected: %d\n", noDevicesConnected);
    // Exit the app if there are no devices connected.
    if (noDevicesConnected < 1) return 1;

    // freenect_open_device opens a Kinect device.
    error = freenect_open_device(freenectContext, &freenectDevice, 0);
    if (error < 0) {
        // Exit the app if there was an error while connecting.
        printf("Could not open the Kinect device.\n");
        return 1;
    }

    freenect_set_tilt_degs(freenectDevice, 30);
    freenect_set_led(freenectDevice, LED_BLINK_RED_YELLOW);
    printf("Done Functions\n");

    // Tidy up before exiting.
    freenect_close_device(freenectDevice);
    freenect_shutdown(freenectContext);
    return 0;
}
And now here's my Objective-C program, which gives you a GUI and allows you to manually control the tilt of the Kinect using a slider control. Here is the interface file:

#import <Foundation/Foundation.h>
#import "libfreenect.h"

@interface KinectMotorController : NSObject {
    freenect_context *kinectContext;
    freenect_device *kinectDevice;

    NSNumber *noDevicesConnected;
    NSInteger error;
}

@property (assign) IBOutlet NSTextField *numberOfDevices;
@property (assign) IBOutlet NSTextField *connectedBool;
@property (assign) IBOutlet NSSlider *tiltControl;

- (IBAction)findDevices:(id)sender;
- (IBAction)connectDevices:(id)sender;
- (IBAction)changeTilt:(id)sender;
- (IBAction)disconnectDevices:(id)sender;

@end


And now the implementation file:

#import "KinectMotorController.h"

@implementation KinectMotorController
@synthesize numberOfDevices;
@synthesize connectedBool;
@synthesize tiltControl;

- (IBAction)findDevices:(id)sender {
	// Initialise the freenect library.
	freenect_init(&kinectContext, NULL);
	freenect_set_log_level(kinectContext, FREENECT_LOG_DEBUG);
	freenect_select_subdevices(kinectContext, (freenect_device_flags)(FREENECT_DEVICE_MOTOR | FREENECT_DEVICE_CAMERA));

	// Find the devices connected and show the user the number.
	noDevicesConnected = [NSNumber numberWithInt:freenect_num_devices(kinectContext)];
	[numberOfDevices setStringValue:[noDevicesConnected stringValue]];
}

- (IBAction)connectDevices:(id)sender {
	error = freenect_open_device(kinectContext, &kinectDevice, 0);
	if (error < 0) {
		[connectedBool setStringValue:@"Failed to connect!"];
	} else {
		freenect_set_led(kinectDevice, LED_GREEN);
		[connectedBool setStringValue:@"Connected!"];
	}
}

- (IBAction)changeTilt:(id)sender {
	freenect_set_tilt_degs(kinectDevice, [tiltControl intValue]);
}

- (IBAction)disconnectDevices:(id)sender {
	freenect_set_led(kinectDevice, LED_RED);
	// Actually close the device so it can be opened again later.
	freenect_close_device(kinectDevice);
	[connectedBool setStringValue:@"Disconnected."];
}

@end
So there we are 🙂 That really wasn't as bad as I thought it would be. When I first saw the libfreenect code it looked like a bit of a nightmare, but at the moment it's fine. We will see what happens when I try to start getting the camera data though… It might turn out to be a bit mental.


Using the TVDB API

The other day, when I was doing some more work on the Media Sorter, I decided to implement the way the program will find out the episode name for the current file.

After looking for ages on the internet, I found out that there wasn't any help anywhere on how to interpret the TVDB API XML files in Objective-C. I found a framework, but I wanted to try and make it myself so I could see how it works and suit it to my needs.

After a lot more searching and figuring, I found out that it should be possible to do it all using NSXMLParser. This was a bit awkward to figure out too; it wasn't straightforward to see how to use it, but anyway, I worked it out.

So here is a bit of code I put together which does a basic search of TVDB shows when you give it a show name to search for. It returns an array of the show information, held in a new object I created (TVDBShow).

Here are the interface and implementation files for TVDBShow:

#import <Foundation/Foundation.h>

@interface TVDBShow : NSObject
@property (copy) NSString *seriesID; // copy, so a mutable string passed in can't change under us
@property (copy) NSString *language;
@property (copy) NSString *seriesName;
@property (copy) NSString *overview;

// Instance Methods
- (void)setShowWithSeriesID:(NSString*)newSeriesID Language:(NSString*)newLanguage SeriesName:(NSString*)newSeries Overview:(NSString*)newOverview;

// Class Methods
+ (TVDBShow*)showWithSeriesID:(NSString*)newSeriesID Language:(NSString*)newLanguage SeriesName:(NSString*)newSeries Overview:(NSString*)newOverview;

@end


@implementation TVDBShow
@synthesize seriesID, language, seriesName, overview;

// Instance Methods
- (void)setShowWithSeriesID:(NSString*)newSeriesID Language:(NSString*)newLanguage SeriesName:(NSString*)newSeries Overview:(NSString*)newOverview {
	[self setSeriesID:newSeriesID];
	[self setLanguage:newLanguage];
	[self setSeriesName:newSeries];
	[self setOverview:newOverview];
}

// Class Methods
+ (TVDBShow*)showWithSeriesID:(NSString*)newSeriesID Language:(NSString*)newLanguage SeriesName:(NSString*)newSeries Overview:(NSString*)newOverview {
    TVDBShow *returnableShow = [[TVDBShow alloc] init];
    [returnableShow setShowWithSeriesID:newSeriesID Language:newLanguage SeriesName:newSeries Overview:newOverview];
    return [returnableShow autorelease];
}

@end


And then the interface and implementation files for TVDBApi:

#import <Foundation/Foundation.h>
#import "TVDBShow.h"

typedef enum searchType {
    seriesSearch = 0,
    episodeSearch = 1, // not implemented yet, but referenced below
} searchType;

@interface TVDBApi : NSObject <NSXMLParserDelegate> {
    // API Objects
    NSString *APIKey;
    NSString *mirrorPath;
    NSString *getSeriesPath;
    NSUInteger currentSearchType;

    // Show Objects
    TVDBShow *foundShow; // Initialise and release when needed instead of in init.
    NSMutableString *collectedData;
    NSMutableArray *foundShowArray;
}

// Search Methods
- (void)searchForTVDBShowsWithName:(NSString*)showName;

// NSXMLParser Delegate Methods
- (void)parser:(NSXMLParser*)parser didStartElement:(NSString *)elementName namespaceURI:(NSString *)namespaceURI qualifiedName:(NSString *)qName attributes:(NSDictionary *)attributeDict;
- (void)parser:(NSXMLParser*)parser foundCharacters:(NSString *)string;
- (void)parser:(NSXMLParser*)parser didEndElement:(NSString *)elementName namespaceURI:(NSString *)namespaceURI qualifiedName:(NSString *)qName;

// Getters
- (NSArray*)getFoundShowArray;

@end


@implementation TVDBApi

- (id)init {
    self = [super init];
    if (self) {
        // Initialisations
        APIKey = [[NSString alloc] initWithString:@"APIKEY"];
        mirrorPath = [[NSString alloc] initWithString:@"http://www.thetvdb.com/api/"];
        getSeriesPath = [[NSString alloc] initWithString:@"GetSeries.php?seriesname="];
        currentSearchType = 0;
        foundShowArray = [[NSMutableArray alloc] init];
    }
    return self;
}

- (void)dealloc {
    [APIKey release];
    [mirrorPath release];
    [getSeriesPath release];
    [collectedData release];
    [foundShowArray release];
    [super dealloc];
}

// Search Methods
- (void)searchForTVDBShowsWithName:(NSString*)showName {
	// Set up the URL to search for the show 'showName'.
	showName = [showName stringByAddingPercentEscapesUsingEncoding:NSASCIIStringEncoding];
	NSURL *showSearchURL = [[NSURL alloc] initWithString:
							[NSString stringWithFormat:@"%@%@%@", mirrorPath, getSeriesPath, showName]];

	// Set up the search type and show array.
	currentSearchType = (searchType)seriesSearch;
	[foundShowArray removeAllObjects];

	// Create an XML parser to search through the returned results for us.
	NSXMLParser *XMLParser = [[NSXMLParser alloc] initWithContentsOfURL:showSearchURL];
	[XMLParser setDelegate:self];
	[XMLParser parse];

	// Release all the local objects.
	[showSearchURL release];
	[XMLParser release];
}

// NSXML Parser Methods
- (void)parser:(NSXMLParser*)parser didStartElement:(NSString *)elementName namespaceURI:(NSString *)namespaceURI qualifiedName:(NSString *)qName attributes:(NSDictionary *)attributeDict {
	// Run different searches for different search types.
	if (currentSearchType == (searchType)seriesSearch) {
		// If we are doing a Series search then we want to create a new TVDBShow object to work with for each show we encounter in the search.
		if ([elementName isEqualToString:@"seriesid"]) {
			foundShow = [[TVDBShow alloc] init];
		}
	} else if (currentSearchType == (searchType)episodeSearch) {
		// If we are doing an Episode search then......
		NSLog(@"Implement Episode Search");
	}
	// Clear out the collected data ready for this element's characters.
	[collectedData setString:@""];
}

- (void)parser:(NSXMLParser*)parser foundCharacters:(NSString *)string {
	// This will be the same for all search types as it just collects the data we want.
	if (!collectedData) {
		collectedData = [[NSMutableString alloc] init];
	}
	// Append rather than overwrite; the parser can deliver one element's text in several chunks.
	[collectedData appendString:string];
}
- (void)parser:(NSXMLParser*)parser didEndElement:(NSString *)elementName namespaceURI:(NSString *)namespaceURI qualifiedName:(NSString *)qName {
	// Also run different searches for different search types.
	if (currentSearchType == (searchType)seriesSearch) {
		// So we're doing a Series search. We must pick out all the information we want and put it into the relevant field of foundShow.
		// Snapshot the mutable buffer so later parsing can't change the string we hand over.
		NSString *value = collectedData ? [NSString stringWithString:collectedData] : @"";
		if		  ([elementName isEqualToString:@"seriesid"]) {
			[foundShow setSeriesID:value];
		} else if ([elementName isEqualToString:@"language"]) {
			[foundShow setLanguage:value];
		} else if ([elementName isEqualToString:@"SeriesName"]) {
			[foundShow setSeriesName:value];
		} else if ([elementName isEqualToString:@"Overview"]) {
			[foundShow setOverview:value];
		} else if ([elementName isEqualToString:@"id"]) {
			// This is the final element in this show's XML tree, and because we're not interested in it we can use it to close off the assignment of this show's details and add it to the foundShowArray.
			[foundShowArray addObject:foundShow];
			[foundShow release];
			foundShow = nil;
		}
	} else if (currentSearchType == (searchType)episodeSearch) {
		NSLog(@"Implement Episode Search");
	}
}

// Getters
- (NSArray*)getFoundShowArray {
	return foundShowArray;
}

@end
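To show how the two classes fit together, here is a rough usage sketch (the show name is just an example; the search is synchronous because NSXMLParser's parse blocks until the whole document has been handled):

```objc
TVDBApi *api = [[TVDBApi alloc] init];
[api searchForTVDBShowsWithName:@"Doctor Who"];

// Each element of the array is a TVDBShow filled in by the parser callbacks.
for (TVDBShow *show in [api getFoundShowArray]) {
    NSLog(@"%@ (%@): %@", [show seriesName], [show seriesID], [show overview]);
}
[api release];
```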


If anyone sees anything wrong with this code or how it could be improved, please let me know. I'm still relatively new to object-oriented programming. I should also be releasing some proper files for download sometime soon 🙂
