So, it took me about a week of very frustrating research, fixes, failed fixes, and libraries that would and wouldn't install to finally get some proper synchronous data back from the Kinect into my program.
There were so many things going wrong that it's hard to remember it all… but here is what I can remember:
Firstly I set up my Objective-C program to use the libfreenect_sync.h library along with OpenGL to display the RGB and depth images in a view in my program. This worked fine for the RGB feed, but the depth feed was just a mess. The images it displayed were all wrong, with lots of overlapping and static like you used to see on old TVs. So I set out to try and fix this.
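For reference, the synchronous API in question boils down to two blocking calls. Here's a minimal sketch of how frames get grabbed with libfreenect's libfreenect_sync.h (error handling trimmed, and it obviously needs a Kinect plugged in to actually run):

```c
#include <stdio.h>
#include <stdint.h>
#include "libfreenect_sync.h"

int main(void) {
    void *rgb, *depth;
    uint32_t ts;

    /* Grab one RGB frame (24-bit RGB) and one depth frame (11-bit samples
     * stored in uint16_t words) from Kinect device 0. Each call blocks
     * until a fresh frame is available. */
    if (freenect_sync_get_video(&rgb, &ts, 0, FREENECT_VIDEO_RGB) < 0) {
        fprintf(stderr, "No Kinect found\n");
        return 1;
    }
    if (freenect_sync_get_depth(&depth, &ts, 0, FREENECT_DEPTH_11BIT) < 0) {
        fprintf(stderr, "Depth grab failed\n");
        return 1;
    }

    /* rgb now points at 640*480*3 bytes, depth at 640*480 uint16_t values. */
    printf("first depth sample: %u\n", ((uint16_t *)depth)[0]);

    freenect_sync_stop(); /* shut the streams down cleanly */
    return 0;
}
```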
One fix that I found was to change the OpenGL glTexImage2D setting to GL_RGB for depth and change the input resolution to 320 by 240 pixels. This seemed to fix things a bit, but the image looked quite low-res compared to the glview example.
I then tried LOADS of other ways to fix it, thinking that something was wrong with my OpenGL settings, but nothing worked… I eventually attempted to install OpenCV again on my Mac, and it turned out that the easiest and most straightforward way to do it was just to type this into Terminal (using Homebrew):
sudo brew update
sudo brew install opencv
I think if you then restart Xcode, the libraries will show up under “libopencv…” when you try to add Linked Frameworks and Libraries.
So, I got OpenCV installed and was able to test the c_sync example that was available here (switching it from displaying video to displaying depth). That's how I found out that it wasn't my programming causing the problems; it was something else.
After a lot more playing and searching I found an OpenCV Kinect example that uses the libfreenect_cv.h wrapper. This wrapper uses the libfreenect_sync.h header underneath and was still able to display proper depth information. So I tinkered with it and eventually recreated it in my own program using the same libfreenect_cv.h wrapper 🙂
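The wrapper boils down to two convenience functions that hand each frame back as an OpenCV IplImage. A minimal sketch of the kind of display loop that example uses (the function names are from libfreenect's libfreenect_cv.h and OpenCV's old C highgui API; it needs a Kinect attached, and without a colour conversion the RGB channels appear swapped and the raw 16-bit depth looks dim):

```c
#include <stdio.h>
#include "libfreenect_cv.h"
#include <opencv/highgui.h>

int main(void) {
    while (cvWaitKey(10) < 0) {
        /* Each call grabs the latest frame from device 0 and wraps the
         * raw buffer in an IplImage (no copy, so don't release them). */
        IplImage *depth = freenect_sync_get_depth_cv(0);
        IplImage *rgb   = freenect_sync_get_rgb_cv(0);
        if (!depth || !rgb) {
            fprintf(stderr, "Kinect not found\n");
            return 1;
        }
        cvShowImage("RGB", rgb);     /* note: channels are RGB, not BGR */
        cvShowImage("Depth", depth); /* 16-bit single channel */
    }
    return 0;
}
```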
The next step was to recreate it again, but in my Objective-C program, which also worked. And then finally I created my own Objective-C class which lets me easily control the Kinect and get data from it 🙂
(I think the real reason I couldn't get it working was to do with a certain function I couldn't get working myself… the function that has a lot of switch-case statements in it.)
So, there we are, I finally got it working 😀 It only took a week… grr… So now I just have to work out how I can use OpenCV to help me pick out the objects to avoid in the room, then find the Helicopter and track it.