Having set up the Pandaboard with the Kinect as described previously, I have been experimenting with the OpenNI library. I started by simply extracting the depth data from the Kinect, using a technique adapted from the OpenNI samples: a DepthGenerator produces the depth stream, and a DepthMetaData container exposes the raw per-pixel depth values. So far, however, my attempts at getting meaningful data out of this have fallen somewhat flat.
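For reference, the extraction flow looks roughly like this. This is a minimal sketch against the OpenNI 1.x C++ wrapper, not my exact code; error handling is reduced to simple status checks, and the context is initialised directly rather than from an XML config file as some of the samples do.

```cpp
#include <XnCppWrapper.h>
#include <cstdio>

int main() {
    xn::Context context;
    if (context.Init() != XN_STATUS_OK) return 1;

    // Create a depth node and start the stream.
    xn::DepthGenerator depth;
    if (depth.Create(context) != XN_STATUS_OK) return 1;
    context.StartGeneratingAll();

    // Block until a new depth frame arrives, then grab its metadata.
    context.WaitOneUpdateAll(depth);
    xn::DepthMetaData dmd;
    depth.GetMetaData(dmd);

    // dmd.Data() is a pointer to XRes()*YRes() 16-bit depth samples
    // (millimetres; zero means "no reading").
    const XnDepthPixel* pixels = dmd.Data();
    std::printf("frame %ux%u, centre depth %umm\n",
                dmd.XRes(), dmd.YRes(),
                pixels[dmd.YRes() / 2 * dmd.XRes() + dmd.XRes() / 2]);

    context.Release();
    return 0;
}
```

This requires the Kinect to be connected and the OpenNI drivers installed, so it is hardware-dependent and only illustrative of the call sequence.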
The best depth image I have generated so far (out of many less successful attempts) is shown below. It is plotted by scaling every depth value by the frame's maximum, so that the depths span the full greyscale range from black to white.
While this is clearly not ideal, it is definite progress; I am now doing more research and experimenting to work out what I am doing wrong.