Student Post: Xtion Pro Live Stereo Camera and OpenNI

Greetings,

This week I have been setting up the ASUS Xtion Pro Live Stereo RGB camera to be used along with an open source SDK known as OpenNI for quick prototyping and 3D motion/depth sensing.


This camera could be beneficial for us in prototyping and proof-of-concept work going forward. The RGB sensor on the camera will allow us to do better depth imaging, while the stereo setup itself should hopefully allow for better image recognition and detection of changes in the field of view.

The software package that I plan to use to communicate with the camera is OpenNI together with NITE; however, documentation is few and far between, and troubleshooting is a little more difficult because, depending on which package and which OS version you have, the camera may not be recognized.

The OpenNI and NITE packages should allow us to receive real-time stereo video and quickly turn it into depth images and 3D point clouds.
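To give an idea of what that first step could look like, here is a minimal sketch of reading a single depth frame, assuming the primesense OpenNI2 Python bindings (pip install primesense) and an installed OpenNI2 redistributable; the exact binding API can vary between versions, so treat this as an outline rather than a finished tool.

```python
import numpy as np
from primesense import openni2  # pip install primesense; needs the OpenNI2 redistributable

# Hedged sketch: grab one depth frame from the Xtion once OpenNI recognizes it.
openni2.initialize()                       # optionally pass the path to the OpenNI2 Redist directory
device = openni2.Device.open_any()
depth_stream = device.create_depth_stream()
depth_stream.start()

frame = depth_stream.read_frame()
depth = np.frombuffer(frame.get_buffer_as_uint16(), dtype=np.uint16)
depth = depth.reshape((frame.height, frame.width))       # depth values in millimetres
print("center pixel depth (mm):", depth[frame.height // 2, frame.width // 2])

depth_stream.stop()
openni2.unload()
```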

OpenNI examples (registration point cloud)

Starting out with OpenNI

Some small challenges have surfaced with getting the camera recognized once it is plugged into my machine via USB, so I am going to work on sorting this out over the next week.

Thanks,

Adam S.


We’ve acquired a 3D printer!

We’ve acquired a 3D printer! Already I’m learning that this is far from an automated process. Here’s the printer, a FlashForge Creator Pro, with a print of a twisted pen-holder I coded in OpenSCAD:


A good third print.

Here’s a successful print of a 3D graph filling the interior of a Pikachu model. The code that computes this graph uses our circle and sphere packing heuristics to generate a volume-filling tetrahedral graph that is a sub-graph of the cannonball lattice:


Pikachu, I choose you!
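For the curious, here is a minimal sketch of what the cannonball (face-centered cubic) lattice sites look like. This is not our packing heuristic, just the lattice that the printed graph is a sub-graph of; the bounds and spacing in the example are arbitrary.

```python
import numpy as np

def fcc_points(bounds_min, bounds_max, spacing):
    """Cannonball (face-centered cubic) lattice sites inside an axis-aligned box.

    FCC sites are the integer points (i, j, k) with i + j + k even, scaled so
    that nearest neighbours end up `spacing` apart.
    """
    s = spacing / np.sqrt(2.0)             # unscaled nearest-neighbour distance is sqrt(2)
    lo = np.floor(np.asarray(bounds_min) / s).astype(int)
    hi = np.ceil(np.asarray(bounds_max) / s).astype(int)
    pts = []
    for i in range(lo[0], hi[0] + 1):
        for j in range(lo[1], hi[1] + 1):
            for k in range(lo[2], hi[2] + 1):
                if (i + j + k) % 2 == 0:   # the even-sum condition picks out FCC sites
                    pts.append((i * s, j * s, k * s))
    return np.array(pts)

# Example: sample a 10 mm cube at roughly 2 mm spacing.
print(fcc_points([0, 0, 0], [10, 10, 10], 2.0).shape)
```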

I tried to print a much thinner version of Pikachu, but it wasn’t working that well:


When prints go awry.

The lab is a lot more hands-on these days than theoreticians normally get to be =D. Here’s one final print to start things off right:


Student Post: BoofCV, Image Segmentation, and 3D Scanner

Greetings,

This past week Dr. Bowers and I met to come up with some ideas for how to move forward with our problem of comparing meshes and/or images for error detection. Essentially, we came up with some new ideas that address problems we’ve been having:

  • Image segmentation
  • Real Time/Video Structure from Motion
  • BoofCV/3D Stereo Clouds
  • 3D scanner in cooperation with Dr. Dias of the Physics Department here at JMU

Image Segmentation

To start, the goal regarding image segmentation is to cluster pixels into prominent image regions and then use this within a stereo system for object recognition. We would like to take a picture of, say, a desk without a specific object in it, then take another picture with that object placed in the scene and detect it through image segmentation. This would essentially allow us to remove the background while keeping the object and its boundaries. (Pictures from Source 1 below)
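As a rough illustration of the desk example (not our final segmentation approach), the sketch below differences a background shot against a shot with the object added and keeps the largest changed region; the filenames are hypothetical.

```python
import cv2
import numpy as np

# Difference a background shot of the desk against a shot with the object added,
# then keep the largest changed region as the object's segment.
background = cv2.imread("desk_empty.jpg", cv2.IMREAD_GRAYSCALE)       # hypothetical filenames
scene = cv2.imread("desk_with_object.jpg", cv2.IMREAD_GRAYSCALE)

diff = cv2.absdiff(scene, background)
diff = cv2.GaussianBlur(diff, (5, 5), 0)
_, mask = cv2.threshold(diff, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)

# Keep the largest connected region as the object's mask.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
if contours:
    largest = max(contours, key=cv2.contourArea)
    object_mask = np.zeros_like(mask)
    cv2.drawContours(object_mask, [largest], -1, 255, thickness=cv2.FILLED)
    cv2.imwrite("object_mask.png", object_mask)
```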

Real Time Structure from Motion (Simultaneous Localisation and Mapping, “SLAM”)

Another idea that I plan on exploring further is real-time structure from motion, where you get a live stream from a camera and mesh it in real time to do the comparisons. This would be beneficial for what we are trying to do here, since the comparison would happen live rather than after a separate capture-and-mesh step. (Interested? Sources 2 & 3 at the bottom have more.)

BoofCV (Real Time CV and Robotics)

BoofCV is an open source computer vision library written entirely in Java (a standalone library, independent of OpenCV). It is organized into four packages: 1) image processing, 2) features, 3) geometric vision, and 4) recognition. One area I am interested in is dense optical flow, which compares two images to estimate the apparent motion between pixels; this is useful for object detection, tracking, and image segmentation. Additionally, I’m interested in its geometric 3D stereo cloud package and stereo calibration, which will come in handy when using stereo cameras. Lastly, it has a Java module written to interact with the Xbox 360 Kinect, which we could possibly use as a stereo setup in the meantime to get some quick results. (Interested? Follow the link from Source 4.)
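BoofCV itself is Java, so as a quick illustration of what dense optical flow provides, here is a hedged Python sketch using OpenCV’s Farneback method as an analogue; the frame filenames are hypothetical.

```python
import cv2
import numpy as np

# Estimate per-pixel apparent motion between two consecutive frames.
prev_frame = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)    # hypothetical filenames
next_frame = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

# Parameters: pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags
flow = cv2.calcOpticalFlowFarneback(prev_frame, next_frame, None,
                                    0.5, 3, 15, 3, 5, 1.2, 0)

magnitude, _ = cv2.cartToPolar(flow[..., 0], flow[..., 1])
moving = magnitude > 2.0                   # pixels that moved by more than ~2 px
print("moving pixels:", int(np.count_nonzero(moving)))
```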
Shining 3D Scanner (In part with JMU Physics)

In addition to these other methods for reconstruction and object detection, in the next couple of weeks I will be working on setting up the Shining 3D scanner from Dr. Dias in the Physics Department to see if there is a way to get effective meshes and scans from it for a proof of concept!
Sources:

Student Post: OpenSfM Revisited, Epipolar Geometry, and Possibility of Rendering

Greetings,

I’ve continued to delve into the OpenSfM configuration files to try to get denser meshes, without much success so far. The meshes seem similar; however, with the yellow spiral I did get a less distorted result that could be on the road to where we want to be. The figure is shown below:

So within the next week or two, I hope to meet with Professor Bowers to discuss some optimizations that we can make to receive even better meshes.
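For reference, here is a hedged sketch of the kind of config.yaml overrides one could try in order to push the reconstruction denser. The key names follow the OpenSfM sample configuration as best I recall and should be verified against the installed version’s opensfm/config.py; the project path and values are hypothetical.

```python
# Hedged example of OpenSfM config.yaml overrides aimed at denser reconstructions.
config_overrides = """\
feature_process_size: 2048         # resize the long image edge before extracting features
feature_min_frames: 8000           # ask for more features per image
depthmap_resolution: 1280          # compute depth maps at a higher resolution
depthmap_min_consistent_views: 2   # accept depths confirmed by fewer views (denser but noisier)
"""

with open("data/spiral/config.yaml", "w") as f:    # hypothetical project directory
    f.write(config_overrides)
```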

Epipolar geometry is basically “two-view geometry,” in which you match points between two separate images using an epipolar constraint, turning a 2D search problem into a 1D search problem. This can be done with either a stereo setup (two separate cameras) where the images are acquired simultaneously, or a single moving camera where the images are acquired sequentially. Given two images of a scene, the goal is to triangulate corresponding image points: the back-projected ‘rays’ through a pair of corresponding points intersect at the 3D point. This is shown in the figure below:

From this triangulation, we use the fact that a point in one view “generates” an epipolar line in the other view, as shown below:

From this, the corresponding points x and x′, together with the two camera centers, define a plane known as the epipolar plane, shown as the shaded area below:

Ultimately, we see that in each image plane there is an epipolar line along which we can search for the corresponding image point:

From this we can see the true advantage of using epipolar geometry when reconstructing 3D models as shown below:

All in all, I’d like to explore the possibility of using epipolar geometry to create these mappings of 2D points to reconstruct a 3D model.
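As a concrete sketch of the 2D-to-1D search idea (not our pipeline yet), the snippet below matches features between two views with OpenCV, estimates the fundamental matrix F, and computes, for each point x in the first image, the epipolar line l′ = Fx in the second image along which the corresponding point must lie. The image filenames are hypothetical.

```python
import cv2
import numpy as np

# Match features between two views, estimate the fundamental matrix F, and compute
# the epipolar line in the second image for each matched point in the first.
img1 = cv2.imread("view1.jpg", cv2.IMREAD_GRAYSCALE)      # hypothetical filenames
img2 = cv2.imread("view2.jpg", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)

pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

F, inlier_mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC)

# A point x in image 1 generates the line l' = F x in image 2; the corresponding
# point x' must lie on l', which is what collapses the 2D search to a 1D search.
lines2 = cv2.computeCorrespondEpilines(pts1.reshape(-1, 1, 2), 1, F).reshape(-1, 3)
print("epipolar line for the first match (a, b, c):", lines2[0])
```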

In addition to using epipolar geometry for reconstruction, I’d also like to explore the idea of using rendering instead of reconstruction: take the 3D model, render it, and receive point mappings to compare, instead of the 2D-to-3D pipeline that we’ve been attempting so far.

Thank you,

Adam S.

*Pictures: Dr. Didier Stricker, University of Kaiserslautern, http://ags.cs.uni-kl.de/fileadmin/inf_ags/3dcv-ws11-12/3DCV_WS11-12_lec05.pdf

Student Post: Progress with OpenSfM (Structure from Motion)

Hello,

I’ve made some progress in the past week with getting an implementation of OpenSfM running on my own machine, and I finally have it working. I ran a couple of test 2D images through the reconstruction pipeline with moderate results. So, I thought it was time to run some actual 3D printed material through it to see what reconstructions I could get!

The first attempt, reconstructing a dodecahedron, was to no avail. Here is a picture of what the 3D print looks like and then the mesh after going through the OpenSfM pipeline.


Obviously, this was a bit disappointing; however, some of the other reconstructions had better point formations and densities that resembled the object more closely.

Here is a spiral:

So that mesh is getting better, especially compared to the first one. Here is one last one, a cone with two spheres in it.

Overall, the meshes got better and better, which is promising.

Goals for next week:

  • Rework the configuration files to attempt to get denser meshes
  • Research other techniques for 3D reconstruction that have open source software packages
  • More research on parameters for pixels and matching points within meshes.

Thank you!

Adam S.

Student Post: Automated Detection of 3D Printer Failures Using Structure from Motion and Computer Vision

The goal of this project is to provide more effective mechanisms for detecting 3D printer failures through the use of ‘structure from motion’, computer vision, and computational geometry. The first step toward this goal is to determine what actually constitutes an error and what types of errors the application will focus on (if not all).

So what next? We need to make some “error free” prints in addition to “intentional error” prints in order to establish a baseline for tolerance and to compare the two. We also need to create a workflow for reconstructing 3D models from 2D images, and this is where structure from motion comes in handy.
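As a first cut at the tolerance baseline (a sketch under assumed inputs, not a finished method), one could sample points from the intended model and from a reconstruction of the print, then flag reconstructed points that sit farther from the model than some tolerance; the .npy files and the threshold below are hypothetical placeholders to be calibrated from the baseline prints.

```python
import numpy as np
from scipy.spatial import cKDTree

# Compare points sampled from the intended model against points reconstructed from
# a print, flagging reconstructed points farther from the model than a tolerance.
model_pts = np.load("model_points.npy")     # hypothetical N x 3 arrays
print_pts = np.load("print_points.npy")

tree = cKDTree(model_pts)
dists, _ = tree.query(print_pts)            # nearest-model-point distance for every print point

tolerance = 0.5                             # mm; placeholder threshold
error_fraction = float(np.mean(dists > tolerance))
print(f"{error_fraction:.1%} of reconstructed points deviate by more than {tolerance} mm")
```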

Of course, we will continue to explore the literature on the subject.