Monthly Archives: December 2011

I am the Kinect Master!!

So, it took me about a week of very frustrating research, fixes, failed fixes, and wrestling with libraries (some installed, some wouldn't) to finally get some proper synchronous data back from the Kinect into my program.

There were so many things going wrong that it's hard to remember it all, but here is what I can remember:

Firstly, I set up my Objective-C program to use the libfreenect_sync.h library along with OpenGL to display the RGB and depth images in my program's view. This worked fine for the RGB feed, but the depth feed was just a mess: the images were all overlapping and full of static, like you used to see on old TVs. So I set out to try and fix this.

One fix I found was to change the OpenGL glTexImage2D format to GL_RGB for depth and change the input resolution to 320 by 420 pixels. This seemed to fix things a bit, but the image looked rather low-res compared to the glview example.

I then tried LOADS of other ways to fix it, thinking something was wrong with my OpenGL settings, but nothing worked. I eventually attempted to install OpenCV again on my Mac, and it turned out that the easiest and most straightforward way to do it is with Homebrew, by typing this into Terminal:

brew update
brew install opencv

I think if you then restart Xcode, the libraries will show up under “libopencv…” when you try to add linked frameworks and libraries.

So, I got OpenCV installed and was able to test the c_sync example that was available here (switching it from displaying video to displaying depth). I then found out that it wasn't my programming causing the problems; it was something else.

After a lot more playing and searching I found an OpenCV Kinect example that uses the libfreenect_cv.h wrapper. This seemed to use the libfreenect_sync.h header and still display proper depth information. So I tinkered with it and eventually recreated it in my own program using the same libfreenect_cv.h wrapper 🙂

The next step was to recreate it again, but in my Objective-C program. That also worked. And then finally I created my own Objective-C class which lets me easily control the Kinect and get data from it 🙂

(I think the real reason I couldn't get it working was down to a certain function I couldn't get working myself… the one with all the switch-case statements in it.)

So, there we are, I finally got it working 😀 It only took a week… grr… Now I just have to work out how to use OpenCV to pick out the objects to avoid in the room, then find the helicopter and track it.


Just after I got my Objective-C Kinect class working 🙂


Working with depth

On and off for the last two days I have been looking into displaying the depth data to the user as well as the RGB. This has proven to be a bit tricky. Firstly, when I got OpenGL to display the depth data the image was really weird: black and white, overlapping and duplicated. Something was definitely going wrong somewhere, even though I was using the exact same commands as for the RGB view and as the glview example supplied with libfreenect.
So I did a lot of playing and changed individual OpenGL parameters one at a time to see if any made a difference. I found that to get some actual colour in the view, and to get things looking a bit more fixed, I should switch the format parameter of the glTexImage2D function to GL_RGBA.

I then did more playing to see why the image was duplicated. It seemed to show one frame, then the previous frame to the right of it, and then underneath them about two more copies of the same frame, with occasional static messing it all up. It was very weird… It seemed like the image I was receiving from the library was not actually 640×480 pixels but something smaller. So I made a few changes to my program's interface so I could adjust these values while it was running and work out the actual resolution of the input data.
It turns out that the height of the image I was receiving was 320 pixels and the width was 480. This didn't make any sense at all… It also meant I was missing out on vital resolution that will be very important in my calculations later on.

So after a bit of thinking I reckon there may be something wrong with the library I am using, but at the moment my OpenCV install doesn't work so I cannot test the example program I have 😦 I'll have to try and get that working tomorrow.

Another thing: if it is the library that's screwing up, then I'll have a look into making my own Objective-C wrapper 🙂 though that does mean wasting a load more time…

And something else I'm wondering about: how do they get the different colours working in the glview example when the variable in mine is only 8 bits wide? I think the variable I'm putting the depth textures in may be too small, smaller than the depth data coming in.

Let’s check what the actual resolution of the Kinect IR camera is, or at least the depth image that we get.


Let’s try and get some RGB info :)

Now that I’ve successfully managed to make a program that connects to the Kinect in Objective-C and makes it move up and down and all that, the next challenge is to get it to display some RGB data using OpenGL.

So, firstly I will see how they’ve done it in the libfreenect example…

Wow, getting RGB data really isn't going to be as simple as tilting the Kinect. The libfreenect API uses callbacks to hand the program new data as it is received from the cameras. And as I am trying to mix C and Objective-C, it is proving a lot harder to get working properly. I have spent a whole day trying to work out how to duplicate the process in Objective-C, how to get the OpenGL view working properly and how to get the callbacks working properly, but I still haven't got it working.

I had a chill and stuff, and then thought that the best way to do it is probably to use C-based threads, as in the libfreenect example, along with Objective-C threads to monitor the data within the C threads.

Hopefully this will all work out for me, because I have got very used to using Objective-C, Xcode and Interface Builder. Errgh, I kinda sound emo there. Best stop that now…!

So, the mission is: get some pthreads working in my code as done in the libfreenect example, power through and get it working just as I want 😉 and FOR SCIENCE!

.. Christmas 🙂 …

I was doing a bit of research into OpenGL and libfreenect and discovered (again) that there are wrappers I could maybe use in my code. There is a C sync wrapper that could let me read the RGB and depth data when I am ready for it, instead of using callbacks. This could make the program less efficient, as some frames may be lost, or the processor could be trying to read frames when there are no new ones. But I'll give it a look and see what happens. Hopefully it will help me out a lot 🙂

So, I have successfully managed to get it to let me control the tilt of the Kinect device and also retrieve the state of the tilt every second using another thread. Looking good so far 🙂 now for video.

Now that I have implemented a way for the RGB data to be collected on another thread, it can call the drawRect: method whenever data is available. And so, it works! My program displays the RGB data at intervals of 0.2 seconds. This gives a slightly jumpy video image, as I would expect, so now I will decrease the sleep time between getting new RGB data and see what happens.

I have set the sleep time between RGB data calls to 1 millisecond and it works very well 🙂 The image is very smooth and my program doesn't seem to report that there are no new frames (this may be down to the way libfreenect_sync.h works, as it states that data is kept in place until a new frame is available… I think. I should probably check this).

So the next step is to try to also display some depth data, and at the same time work out how I can determine the depth of individual pixels as I want.


Working out how to use libfreenect in Objective-C

I've finally got round to starting it: working out how to use this libfreenect thing in my project. I've downloaded the libfreenect files and started looking at the .c example. It creates a window using OpenGL and displays basic Kinect data in it (RGB view and IR view).

My task now is to work out how to use it in my project. Firstly, work out how to use it in C and then see if I can also use it in Objective-C.

… lots of reverse engineering of code…

… a while later…

DONE! -ish. I have managed to reverse-engineer the example program to the point where my own program can find the number of Kinect devices connected, connect to a device, change its tilt and LED colour, and then disconnect from it.

I managed to recreate it in C, and then realised it would be even easier to make it work in Objective-C! Well, kinda easier for me, because I've been using Objective-C so much recently…

So, here is my C version of the program (the main.c file), which carries out some basic movement every time it connects successfully.

#include "libfreenect.h"
#include <stdio.h>

freenect_context *freenectContext;
freenect_device *freenectDevice;
int noDevicesConnected;
int error;

int main(int argc, char **argv) {
    // freenect_init initialises a freenect context. The second parameter can be NULL if not using multiple contexts.
    // freenect_set_log_level sets the log level for the specified freenect context.
    // freenect_select_subdevices selects which subdevices to open when connecting to a new Kinect device.
    freenect_init(&freenectContext, NULL);
    freenect_set_log_level(freenectContext, FREENECT_LOG_DEBUG);
    freenect_select_subdevices(freenectContext, (freenect_device_flags)(FREENECT_DEVICE_MOTOR | FREENECT_DEVICE_CAMERA));

    noDevicesConnected = freenect_num_devices(freenectContext);
    printf("Number of devices connected: %d\n", noDevicesConnected);
    // Exit the app if there are no devices connected.
    if (noDevicesConnected < 1) return 1;

    // freenect_open_device opens a Kinect device.
    error = freenect_open_device(freenectContext, &freenectDevice, 0);
    if (error < 0) {
        // Exit the app if there was an error while connecting.
        printf("Could not open the Kinect device.\n");
        return 1;
    }

    freenect_set_tilt_degs(freenectDevice, 30);
    freenect_set_led(freenectDevice, LED_BLINK_RED_YELLOW);
    printf("Done Functions\n");

    return 0;
}
And now here's my Objective-C program, which gives you a GUI and lets you manually control the tilt of the Kinect with a slider. Here is the interface file:

#import <Foundation/Foundation.h>
#import "libfreenect.h"

@interface KinectMotorController : NSObject {
    freenect_context *kinectContext;
    freenect_device *kinectDevice;

    NSNumber *noDevicesConnected;
    NSInteger error;
}

@property (assign) IBOutlet NSTextField *numberOfDevices;
@property (assign) IBOutlet NSTextField *connectedBool;
@property (assign) IBOutlet NSSlider *tiltControl;

- (IBAction)findDevices:(id)sender;
- (IBAction)connectDevices:(id)sender;
- (IBAction)changeTilt:(id)sender;
- (IBAction)disconnectDevices:(id)sender;

@end


And now the implementation file:

#import "KinectMotorController.h"

@implementation KinectMotorController
@synthesize numberOfDevices;
@synthesize connectedBool;
@synthesize tiltControl;

- (IBAction)findDevices:(id)sender {
	// Initialise the freenect library.
	freenect_init(&kinectContext, NULL);
	freenect_set_log_level(kinectContext, FREENECT_LOG_DEBUG);
	freenect_select_subdevices(kinectContext, (freenect_device_flags)(FREENECT_DEVICE_MOTOR | FREENECT_DEVICE_CAMERA));

	// Find the devices connected and show the user the number.
	noDevicesConnected = [NSNumber numberWithInt:freenect_num_devices(kinectContext)];
	[numberOfDevices setStringValue:[noDevicesConnected stringValue]];
}

- (IBAction)connectDevices:(id)sender {
	error = freenect_open_device(kinectContext, &kinectDevice, 0);
	if (error < 0) {
		[connectedBool setStringValue:@"Failed to connect!"];
	} else {
		freenect_set_led(kinectDevice, LED_GREEN);
		[connectedBool setStringValue:@"Connected!"];
	}
}

- (IBAction)changeTilt:(id)sender {
	freenect_set_tilt_degs(kinectDevice, [tiltControl intValue]);
}

- (IBAction)disconnectDevices:(id)sender {
	freenect_set_led(kinectDevice, LED_RED);
	[connectedBool setStringValue:@"Disconnected."];
	// Actually release the device and context so we really disconnect.
	freenect_close_device(kinectDevice);
	freenect_shutdown(kinectContext);
}

@end
So there we are 🙂 That really wasn't as bad as I thought it would be. When I first saw the libfreenect code it looked like a bit of a nightmare, but at the moment it's fine. We will see what happens when I try to start getting the camera data though… It might turn out to be a bit mental.


My First Github Repo

I have just set up GitHub and uploaded my first repo, for the Media Sorter project, so everyone can see what I'm doing.

As of now, the Media Sorter runs in the status bar and lets you perform a manual update of the video files found in the Downloads folder. It searches through this folder to find your videos, then extracts the show name, series number and episode number. It then uses the TVDB API, via the library that I created, to retrieve the relevant data for that show. It doesn't yet let you apply the changes to the files, or move them to your preferred location, but I will add this soon 🙂

Also, the code is probably pretty messy at the moment, as I've run into a few hiccups with my coding recently. It is all sorted now (hopefully no bugs :)) but the code is probably still in the mess I left it in.

I also have a feeling that I will end up rewriting quite a bit of the JRMedia files to make them better and more efficient, but that will become apparent at the time.

Anyways, peace out and all that…

Oh yeah, I totally forgot…durrr… if you want to see my github site go to:

Using the TVDB API

The other day, when I was doing some more work on the Media Sorter, I decided to implement the way the program will find out the episode name for the current file.

After looking for ages on the internet, I found that there wasn't any help anywhere on how to interpret the TVDB API XML files in Objective-C. I found a framework, but I wanted to try and make it myself so I could see how it works and suit it to my needs.

After a lot more searching and figuring, I found that it should be possible to do it all using NSXMLParser. This was a bit awkward to figure out too; it wasn't straightforward to see how to use it, but anyway, I worked it out.

So here is a bit of code I put together which does a basic search of TVDB shows when you give it a show name to search for. It returns an array of show information held in a new object I created (TVDBShow).

Here are the interface and implementation for TVDBShow:

#import <Foundation/Foundation.h>

@interface TVDBShow : NSObject
// Use copy so the parser's reusable mutable buffer can't change these later.
@property (copy) NSString *seriesID;
@property (copy) NSString *language;
@property (copy) NSString *seriesName;
@property (copy) NSString *overview;

// Instance Methods
- (void)setShowWithSeriesID:(NSString*)newSeriesID Language:(NSString*)newLanguage SeriesName:(NSString*)newSeries Overview:(NSString*)newOverview;

// Class Methods
+ (TVDBShow*)showWithSeriesID:(NSString*)newSeriesID Language:(NSString*)newLanguage SeriesName:(NSString*)newSeries Overview:(NSString*)newOverview;

@end


@implementation TVDBShow
@synthesize seriesID, language, seriesName, overview;

// Instance Methods
- (void)setShowWithSeriesID:(NSString*)newSeriesID Language:(NSString*)newLanguage SeriesName:(NSString*)newSeries Overview:(NSString*)newOverview {
	[self setSeriesID:newSeriesID];
	[self setLanguage:newLanguage];
	[self setSeriesName:newSeries];
	[self setOverview:newOverview];
}

// Class Methods
+ (TVDBShow*)showWithSeriesID:(NSString*)newSeriesID Language:(NSString*)newLanguage SeriesName:(NSString*)newSeries Overview:(NSString*)newOverview {
    TVDBShow *returnableShow = [[TVDBShow alloc] init];
    [returnableShow setShowWithSeriesID:newSeriesID Language:newLanguage SeriesName:newSeries Overview:newOverview];
    return [returnableShow autorelease];
}

@end

And then the interface and implementation for TVDBApi:

#import <Foundation/Foundation.h>
#import "TVDBShow.h"

typedef enum searchType {
    seriesSearch = 0,
    episodeSearch = 1,
} searchType;

@interface TVDBApi : NSObject <NSXMLParserDelegate> {
    // API Objects
    NSString *APIKey;
    NSString *mirrorPath;
    NSString *getSeriesPath;
    NSUInteger currentSearchType;

    // Show Objects
    TVDBShow *foundShow; // Initialise and release when needed instead of in init.
    NSMutableString *collectedData;
    NSMutableArray *foundShowArray;
}

// Search Methods
- (void)searchForTVDBShowsWithName:(NSString*)showName;

// NSXML Parser Methods
- (void)parser:(NSXMLParser*)parser didStartElement:(NSString *)elementName namespaceURI:(NSString *)namespaceURI qualifiedName:(NSString *)qName attributes:(NSDictionary *)attributeDict;
- (void)parser:(NSXMLParser*)parser foundCharacters:(NSString *)string;
- (void)parser:(NSXMLParser*)parser didEndElement:(NSString *)elementName namespaceURI:(NSString *)namespaceURI qualifiedName:(NSString *)qName;

// Getters
- (NSArray*)getFoundShowArray;

@end

@implementation TVDBApi

- (id)init {
    self = [super init];
    if (self) {
        // Initialisations
		APIKey = [[NSString alloc] initWithString:@"APIKEY"];
		mirrorPath = [[NSString alloc] initWithString:@""];
		getSeriesPath = [[NSString alloc] initWithString:@"GetSeries.php?seriesname="];
		currentSearchType = 0;
		foundShowArray = [[NSMutableArray alloc] init];
    }
    return self;
}

- (void)dealloc {
    [APIKey release];
	[mirrorPath release];
	[getSeriesPath release];
	[foundShowArray release];
	[collectedData release];
    [super dealloc];
}

// Search Methods
- (void)searchForTVDBShowsWithName:(NSString*)showName {
	// Set up the URL to search for the show 'showName'.
	showName = [showName stringByAddingPercentEscapesUsingEncoding:NSASCIIStringEncoding];
	NSURL *showSearchURL = [[NSURL alloc] initWithString:
							[NSString stringWithFormat:@"%@%@%@", mirrorPath, getSeriesPath, showName]];

	// Set up the search type and show array.
	currentSearchType = (searchType)seriesSearch;
	[foundShowArray removeAllObjects];

	// Create an XML parser to search through the returned results for us.
	NSXMLParser *XMLParser = [[NSXMLParser alloc] initWithContentsOfURL:showSearchURL];
	[XMLParser setDelegate:self];
	[XMLParser parse];

	// Release all the local objects.
	[showSearchURL release];
	[XMLParser release];
}

// NSXML Parser Methods
- (void)parser:(NSXMLParser*)parser didStartElement:(NSString *)elementName namespaceURI:(NSString *)namespaceURI qualifiedName:(NSString *)qName attributes:(NSDictionary *)attributeDict {
	// Clear the collected data ready for this element's text.
	[collectedData setString:@""];

	// Run different searches for different search types.
	if (currentSearchType == (searchType)seriesSearch) {
		// If we are doing a Series search then we want to create a new TVDBShow object to work with for each show we encounter in the search.
		if ([elementName isEqualToString:@"seriesid"]) {
			foundShow = [[TVDBShow alloc] init];
		}
	} else if (currentSearchType == (searchType)episodeSearch) {
		// If we are doing an Episode search then......
		NSLog(@"Implement Episode Search");
	}
}

- (void)parser:(NSXMLParser*)parser foundCharacters:(NSString *)string {
	// This will be the same for all search types as it just collects the data we want.
	if (!collectedData) {
		collectedData = [[NSMutableString alloc] init];
	}
	// Append rather than replace, as the parser may deliver an element's text in several chunks.
	[collectedData appendString:string];
}
- (void)parser:(NSXMLParser*)parser didEndElement:(NSString *)elementName namespaceURI:(NSString *)namespaceURI qualifiedName:(NSString *)qName {
	// Also run different searches for different search types.
	if (currentSearchType == (searchType)seriesSearch) {
		// So we're doing a Series search. We must pick out all the information we want and put it into the relevant field of foundShow.
		if		  ([elementName isEqualToString:@"seriesid"]) {
			[foundShow setSeriesID:collectedData];
		} else if ([elementName isEqualToString:@"language"]) {
			[foundShow setLanguage:collectedData];
		} else if ([elementName isEqualToString:@"SeriesName"]) {
			[foundShow setSeriesName:collectedData];
		} else if ([elementName isEqualToString:@"Overview"]) {
			[foundShow setOverview:collectedData];
		} else if ([elementName isEqualToString:@"id"]) {
			// This is the final element in this show's XML tree, and because we're not interested in it we can use it to close off the assignment of this show's details and add it to foundShowArray.
			[foundShowArray addObject:foundShow];
			[foundShow release];
		}
	} else if (currentSearchType == (searchType)episodeSearch) {
		NSLog(@"Implement Episode Search");
	}
}

// Getters
- (NSArray*)getFoundShowArray {
	return foundShowArray;
}

@end

If anyone sees anything wrong with this code, or how it could be improved, please let me know; I'm still relatively new to object-oriented programming. I should also be releasing some proper files for download sometime soon 🙂


The Beginning of The Media Sorter

The other day I wasted a lot of time and decided to make an application which searches a directory of your choice for media files, picks out the relevant information from the file names and then renames them to follow a certain naming convention (such as “Show Name S01E02.avi”).

So, I managed to get it to do all that. Now, using the information it has already gathered, I want it to:

  • Find information on the internet somewhere and get the episode name.
  • If the file is an .avi, don't convert it; just give it an orange file tag along with the new filename.
  • Otherwise, if it is an .m4v or .mp4, just change the file name without converting.
  • Then, if it is an .mkv, find a way of repackaging it as an .mp4, then rename it.
  • Finally, move the file from its current location to another location specified by the user, in my case an external hard drive.

So, yeah, the next step is to find a way of finding the information I want on the internet and extracting it. That might be a bit tricky.


The Kinect & Helicopter Guider

For my 3rd Year Engineering project I chose to design and build a system which allows a computer (a Mac) to autonomously control an RC helicopter by seeing in 3D using an Xbox Kinect.

The idea is that the Kinect sensor sees a room from a certain viewpoint and builds a 3D image, picking out objects so that the computer can plan a route for the helicopter.

The helicopter will be controlled by the computer via an Arduino board, which will send IR messages to the helicopter. The messages themselves will be sent from the computer to the Arduino over a USB serial connection.

So then, once a route has been made for the helicopter, the computer will transmit the relevant IR messages to it. While the helicopter is flying, the computer will also be able to track it in the 3D scene using the Kinect.

So, the main steps in the project I have decided on so far are:

  • Produce an application which interprets IR messages over Arduino from the remote control that comes with the Helicopter. This application takes the raw IR messages and decodes them to give a reproducible code that later applications can use to send messages to the Helicopter.
  • Produce another application which uses the decoded message to build a simple controller on the computer that takes user input to fly the Helicopter.
  • Then begin to produce an application that uses the Kinect to build a 3D scene of a room which is able to pick out the objects in it.
  • Next, modify the previous application so that it can find and track the Helicopter in the scene.
  • Then modify the same application so that it can produce a route for the helicopter. Maybe a rectangle-shaped route, to keep things simple for the time being.
  • Then finally, combine all the applications into one which carries out all the tasks listed above, while also using control algorithms (such as PID) to properly fly and guide the helicopter around the route.

Let’s Begin

So here's my first post 🙂 This site will contain lots of the things I get up to with my projects and ideas.

One project I should be talking a lot about at the moment is my 3rd Year Project, which involves using the 3D imaging of the Kinect to track a small RC helicopter around a room and control it. It will involve using an Arduino to help me send IR commands to the helicopter from my computer. And finally, all this will be done on my Mac 🙂

Looking around on the internet, there really doesn't seem to be much use of Macs in this sort of field of engineering. So the projects on this site will mostly involve me using Xcode on my Mac, along with the Arduino IDE, and maybe occasionally a little bit of Windows.

Anyways, enough of this talking, let’s get on with some science!
