For now we are planning to use Emgu CV, a .NET wrapper for OpenCV, to do some basic image processing on the incoming RGB-D image and then feed the result to an ANN. The path is as below:
original image from here
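The path above can be sketched roughly in code. This is a hypothetical pure-Python toy, not our actual implementation: plain lists stand in for the Emgu/OpenCV images, and the function names are our own invention. The idea is to segment the nearest object by depth, reduce the blob to a small feature vector, and hand that vector to the ANN.

```python
# Hypothetical sketch of the planned pipeline: segment an object in a
# depth frame, extract simple shape features, and hand them to an ANN.
# The real version will use Emgu CV on the Kinect's RGB-D stream.

def segment_by_depth(depth, near, far):
    """Binary mask of pixels whose depth (mm) falls inside [near, far]."""
    return [[1 if near <= d <= far else 0 for d in row] for row in depth]

def shape_features(mask):
    """Tiny feature vector: blob area plus bounding-box width and height."""
    coords = [(r, c) for r, row in enumerate(mask)
                     for c, v in enumerate(row) if v]
    area = len(coords)
    rows = [r for r, _ in coords]
    cols = [c for _, c in coords]
    width = max(cols) - min(cols) + 1
    height = max(rows) - min(rows) + 1
    return [area, width, height]

# Fake 4x6 depth frame (mm): a small object at ~900 mm on a ~2000 mm background.
frame = [
    [2000, 2000, 2000, 2000, 2000, 2000],
    [2000,  900,  910,  905, 2000, 2000],
    [2000,  915,  895,  900, 2000, 2000],
    [2000, 2000, 2000, 2000, 2000, 2000],
]
mask = segment_by_depth(frame, 800, 1000)
features = shape_features(mask)   # these would become the ANN's inputs
print(features)                   # [6, 3, 2]
```

The real pipeline would do the same steps on Emgu CV image types, and the feature vector (possibly much richer) would be wired to the network's input layer.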
Disappointingly, we found that the depth resolution of the Kinect is rated at around 1.3 mm. This will not let us extract an acceptable shape from small objects (on the order of less than 5 cm).
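A quick back-of-the-envelope check of why this hurts (our own arithmetic, not a Kinect spec): the number of distinct depth levels available to describe an object's relief is roughly its depth span divided by the 1.3 mm step, so a centimetre of surface relief only gets a handful of levels.

```python
# Rough depth-quantization estimate: an object whose surface relief
# spans `span_mm` of depth is described by only span_mm / step_mm
# distinct depth levels at the sensor's rated resolution.
STEP_MM = 1.3   # rated Kinect depth resolution

def depth_levels(span_mm, step_mm=STEP_MM):
    return int(span_mm / step_mm)

print(depth_levels(50))   # 5 cm of depth span -> 38 levels
print(depth_levels(10))   # 1 cm of surface relief -> only 7 levels
```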
We are exploring different methods to solve this:
We have heard that lenses can shift the Kinect's viewpoint from its original 0.8–6 m range to a smaller, more compact area. This might increase the depth resolution.
The Nyko Zoom can make the Kinect see objects that are closer than its original minimum of 0.8 m: “I found that the Zoom moved up the minimum distance by 14 inches.” This might also increase the depth resolution! We don't know yet.
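If that quoted figure holds, the change is simple to put in metric terms. This is just our arithmetic, assuming “moved up” means the minimum working distance got 14 inches closer:

```python
# Convert the quoted 14-inch reduction of the minimum range to metres.
INCH_M = 0.0254
reduction_m = 14 * INCH_M            # ~0.36 m
new_minimum_m = 0.8 - reduction_m    # the Kinect's stock minimum is 0.8 m
print(round(reduction_m, 3), round(new_minimum_m, 3))   # 0.356 0.444
```

So the minimum distance would come down to roughly 0.44 m, which is still something we would have to verify on the bench.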
We are off to some experimentation and rethinking…
We got the Kinect to work with Microsoft's guide and Abhijit's guide:
Now we have to get rid of this ghost that appears in the depth image and makes our hand look like it has 10 fingers!
We just found a Kinect competitor: Asus has released the Xtionpro (link).
Apparently they are using OpenNI, which is “an industry-led, not-for-profit organization formed to certify and promote the compatibility and interoperability of Natural Interaction devices, applications and middleware.”
Asus has already started a competition for its Xtionpro.
Here is a project developed using OpenCV and OpenNI on the Xtionpro:
Seems interesting :).
FANN seems to be a very eligible choice for the neural-network part: it supports 15 different programming languages, has an easy-to-read introduction article, and offers a bunch of other useful features.
Here is a cool use of it with Kinect:
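To make concrete what such a feedforward network computes, here is a minimal pure-Python sketch. It is not FANN itself (FANN's C API goes through calls like fann_create_standard and fann_run); it is a hand-weighted 2-2-1 network with step activations that realizes XOR, the classic toy problem used in FANN's introduction. A real network would use sigmoid units and learn its weights by training.

```python
# Minimal feedforward network sketch: 2 inputs, 2 hidden units, 1 output,
# with hand-picked weights that compute XOR. A trained FANN network would
# use sigmoid activations and find weights like these by backpropagation.

def step(x):
    return 1 if x > 0 else 0

def forward(x1, x2):
    h_or  = step(x1 + x2 - 0.5)        # fires if at least one input is 1
    h_and = step(x1 + x2 - 1.5)        # fires only if both inputs are 1
    return step(h_or - h_and - 0.5)    # OR but not AND -> XOR

for a in (0, 1):
    for b in (0, 1):
        print(a, b, forward(a, b))
```

Our case is the same shape, only bigger: the inputs would be the image features extracted from the Kinect frames, and the output units would encode the battery classes.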
AForge.NET is a useful .NET library for computer vision and machine learning.
Here, a few methods are proposed for pattern recognition:
We have always been fascinated by delta robots, e.g. the ABB FlexPicker.
photo courtesy of endgadget.com
We are thinking of using one to pick out batteries after we have classified them with the Kinect…
Here is a similar project:
We spent most of yesterday getting the Kinect up and running.
We considered programming on Mac OS but skipped it since the Microsoft SDK is Windows-only. Nothing against openkinect.org or freenect.com; we just thought it might be easier to go with the SDK, especially since, according to this comparison, it does not require calibration.
There is a quick-start guide (video) available from MSDN which we are following. Moreover, Jon Carlos says a few words on how to use OpenCV along with the Kinect!
Abhijit's world of .Net also has a 5-step guide which is very interesting and complete. There is also this post about interesting, ready-to-run sample programs.
Here is a never-ending list of awesome Kinect projects.
So far the RGB preview works in C#, but for some reason it is super slow, and we have yet to get the depth view working.
This blog will host the steps and results of our research as we work on our master's thesis at Chalmers University of Technology, Department of IT, in the Intelligent Systems Design master's programme.
The aim is to use sensors other than 2D imaging to identify and sort batteries. Batteries come with different chemistries, and after collection from households they have to be sorted into the proper chemical fraction in order to be sent to the corresponding recycling plant.
This work is being carried out by:
Farshid Jafari Harandi
Amir Sabbagh Pour