Student Post: Xtion Pro Live RGB-D Camera and OpenNI

Greetings,

This week I have been setting up the ASUS Xtion Pro Live RGB-D camera (an RGB sensor paired with an infrared depth sensor) to be used with OpenNI, an open-source SDK, for quick prototyping and 3D motion/depth sensing.


This camera could be beneficial for prototyping and proof-of-concept work going forward. The camera's depth sensor will allow us to do better depth imaging, while the registered RGB stream should allow for better image recognition and detection of changes in the field of view.

The software packages that I plan to use to communicate with the camera are OpenNI and NITE; however, documentation is sparse, and troubleshooting is difficult because, depending on which package and which OS version you have, the camera may not be recognized.

The OpenNI and NITE packages should allow us to receive real-time RGB and depth streams and quickly turn them into depth images and 3D point clouds.
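
As a rough sketch of what grabbing a depth frame might look like, assuming the unofficial primesense Python bindings for OpenNI2 (everything here, from the package name to the exact calls, should be checked against our actual install):

    # Minimal sketch, assuming the primesense pip package (OpenNI2 bindings)
    # and that the OpenNI2 redistributable libraries are on the library path.
    import numpy as np
    from primesense import openni2

    openni2.initialize()                 # finds the OpenNI2 libs on the system
    dev = openni2.Device.open_any()      # should pick up the Xtion if recognized
    depth_stream = dev.create_depth_stream()
    depth_stream.start()

    frame = depth_stream.read_frame()    # one depth frame
    buf = frame.get_buffer_as_uint16()   # raw 16-bit depth values, in mm
    depth = np.frombuffer(buf, dtype=np.uint16).reshape(frame.height, frame.width)
    print("center-pixel depth (mm):", depth[frame.height // 2, frame.width // 2])

    depth_stream.stop()
    openni2.unload()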

[Image: OpenNI Examples]

[Image: Starting out with OpenNI]

Some small challenges have surfaced in getting the camera recognized once plugged into my machine via USB, so I am going to work on sorting this out over the next week.

Thanks,

Adam S.

We’ve acquired a 3D printer!

We’ve acquired a 3D printer! Already I’m learning that this is far from an automated process. Here’s the printer, a FlashForge Creator Pro, with a print of a twisted pen-holder I coded in OpenSCAD:

[Photo: A good third print.]

Here’s a successful print of a 3D graph filling the interior of a Pikachu model. The code for computing this graph uses our circle- and sphere-packing heuristics to generate a volume-filling tetrahedral graph that is a subgraph of the cannonball lattice:

[Photo: Pikachu, I choose you!]

I tried to print a much thinner version of Pikachu, but it wasn’t working that well:

[Photo: When prints go awry.]

The lab is a lot more hands-on these days than theoreticians normally get to be =D. Here’s one final print to start things off right:

[Photo]

Student Post: BoofCV, Image Segmentation, and 3D Scanner

Greetings,

This past week Dr. Bowers and I met to come up with ideas for moving forward on our problem of comparing meshes and/or images for error detection. Essentially, we found several new directions that address problems we have been having:

  • Image segmentation
  • Real-Time/Video Structure from Motion
  • BoofCV/3D Stereo Clouds
  • 3D scanner in cooperation with Dr. Dias of the Physics Department here at JMU

Image Segmentation

To start, the goal of image segmentation is to cluster pixels into prominent image regions and then use this within a stereo system for object recognition. We would like to take a picture of, say, a desk without a specific object in it, then take another picture with the object inserted, and detect the object through image segmentation. This would essentially allow us to remove the background while keeping the object and its boundaries. (Pictures from Source 1 below)
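
As a minimal sketch of that two-shot idea (using OpenCV from Python for illustration rather than a full segmentation pipeline; the file names and the threshold of 30 are made-up starting points):

    # Difference a "desk only" photo against a "desk plus object" photo,
    # then keep the changed region as the candidate object.
    import cv2

    before = cv2.imread("desk_empty.jpg", cv2.IMREAD_GRAYSCALE)
    after = cv2.imread("desk_with_object.jpg", cv2.IMREAD_GRAYSCALE)

    diff = cv2.absdiff(before, after)                      # per-pixel change
    _, mask = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)  # drop speckle noise

    # The largest connected contour is our candidate object boundary.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    obj = max(contours, key=cv2.contourArea)
    print("object bounding box:", cv2.boundingRect(obj))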

Real-Time Structure from Motion (Simultaneous Localization and Mapping, “SLAM”)

Another idea that I plan to explore further is real-time structure from motion, where you take a live stream from a camera and mesh it in real time to do the comparisons. This would be beneficial for what we are trying to do, since the comparison would happen live rather than after a separate meshing-and-comparing step. (Interested? Sources 2 & 3 at the bottom have more.)

BoofCV (Real-Time CV and Robotics)

BoofCV is an open-source computer vision library written entirely in Java, comparable in scope to OpenCV (the Open Source Computer Vision library). BoofCV is organized into four packages: 1) image processing, 2) features, 3) geometric vision, and 4) recognition. The area I am most interested in is dense optical flow, which compares two images to estimate the apparent motion of the pixels between them; this is useful for object detection, tracking, and image segmentation. Additionally, I’m interested in their geometric 3D stereo cloud package and stereo calibration, which will come in handy when using stereo cameras. Lastly, they have a Java module for interacting with the Xbox 360 Kinect, which we could possibly use as a stereo setup in the meantime to get some quick results. (Interested? Follow the link from Source 4.)
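
BoofCV itself is Java, but as a quick illustration of what dense optical flow produces, here is the analogous call in OpenCV’s Python bindings (standing in for BoofCV; the parameter values are the stock ones from the OpenCV documentation, not tuned):

    # Dense optical flow (Farneback): estimates a per-pixel motion vector
    # between two frames. frame0.png/frame1.png are placeholder file names.
    import cv2
    import numpy as np

    prev = cv2.imread("frame0.png", cv2.IMREAD_GRAYSCALE)
    curr = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)

    # flow[y, x] = (dx, dy), the apparent motion of each pixel
    flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)

    moving = np.linalg.norm(flow, axis=2) > 1.0   # arbitrary 1-pixel cutoff
    print(f"{moving.mean():.1%} of pixels moved more than one pixel")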
Shining 3D Scanner (In Partnership with JMU Physics)

In addition to these other methods for reconstruction and object detection, over the next couple of weeks I will be setting up the Shining 3D scanner from Dr. Dias in the Physics Department to see whether we can get effective meshes and scans from it for a proof of concept!
Sources:

Student Post: OpenSfM Revisited, Epipolar Geometry, and Possibility of Rendering

Greetings,

I’ve continued to delve into the OpenSfM configuration files to attempt to get denser meshes, without much success. The meshes seem similar; however, with the yellow spiral I did get a less distorted reconstruction that could be on the road to where we want to be. The figure is shown below:

So within the next week or two, I hope to meet with Professor Bowers to discuss some optimizations that we can make to receive even better meshes.
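
For reference, the knobs I have been adjusting live in each project’s config.yaml. A few that plausibly affect density (key names follow the OpenSfM documentation; the values shown are untuned guesses, not a working configuration):

    # data/<project>/config.yaml
    feature_type: HAHOG              # detector/descriptor used for matching
    feature_min_frames: 8000         # ask for more features per image
    feature_process_size: 2048       # process images at a higher resolution
    depthmap_resolution: 1280        # denser depthmaps (slower)
    depthmap_min_consistent_views: 3 # how many views must agree on a depth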

Epipolar geometry is essentially “two-view geometry”: you match points between two separate images using the epipolar constraint, which reduces the correspondence search from a 2D problem to a 1D problem. This works with either a stereo pair (two separate cameras) where the images are acquired simultaneously, or a single moving camera where the images are acquired sequentially. Given two images of a scene, the goal is to triangulate corresponding image points by back-projecting their rays, which intersect at the 3D point. This is shown in the figure below:

From this triangulation, we use the fact that a point in one view “generates” an epipolar line in the other, as shown below:

From this, the corresponding points x and x', together with the two camera centers, define a plane known as the epipolar plane, shown below as the shaded area:

Ultimately, we see that in each image plane there is an epipolar line along which we can search for the corresponding image point:
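
Algebraically, the constraint is that corresponding points satisfy x'^T F x = 0, where F is the fundamental matrix, which maps a point in one image to its epipolar line in the other. Here is a self-contained sketch of the whole loop with OpenCV in Python (the synthetic cameras and points are stand-ins so the example runs on its own):

    # Two synthetic views: an identity camera and one translated along x.
    import cv2
    import numpy as np

    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])             # 3x4 projection
    P2 = np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])

    X = np.random.uniform([-1, -1, 4], [1, 1, 8], (50, 3))    # 3D points in view

    def project(P, X):
        Xh = np.hstack([X, np.ones((len(X), 1))])
        x = (P @ Xh.T).T
        return x[:, :2] / x[:, 2:]

    pts1, pts2 = project(P1, X), project(P2, X)

    # Fundamental matrix from matches; corresponding points obey x2^T F x1 = 0.
    F, _ = cv2.findFundamentalMat(pts1, pts2, cv2.FM_8POINT)

    # A point in image 1 generates a line (a, b, c) in image 2: the 1D search.
    lines2 = cv2.computeCorrespondEpilines(pts1.reshape(-1, 1, 2), 1, F)

    # Triangulate the matches back to 3D and check we recover the originals.
    Xh = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
    X_rec = (Xh[:3] / Xh[3]).T
    print("max reconstruction error:", np.abs(X_rec - X).max())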

From this we can see the true advantage of using epipolar geometry when reconstructing 3D models as shown below:

All in all, I’d like to explore the possibility of using epipolar geometry to create these mappings of 2D points to reconstruct a 3D model.

In addition to using epipolar geometry for reconstruction, I’d also like to explore the idea of using rendering instead of reconstruction: take a 3D model and render it to obtain point mappings to compare, instead of the 2D-to-3D pipeline that we’ve been attempting so far.

Thank you,

Adam S.

*Pictures: Dr. Didier Stricker, Kaiserslautern University, http://ags.cs.uni-kl.de/fileadmin/inf_ags/3dcv-ws11-12/3DCV_WS11-12_lec05.pdf

Student Post: Progress with OpenSfM (Structure from Motion)

Hello,

I’ve made some progress in the past week with getting an implementation of OpenSfM running on my own machine, and I have finally done so. I ran a couple of test 2D images through the reconstruction pipeline with moderate results. So, I thought it was time to run some actual 3D-printed material through it to see what reconstructions I could get!
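
For anyone trying to reproduce this: OpenSfM’s README drives the whole pipeline with a single wrapper script. Put the photos in data/<project>/images and run (the project name here is just an example):

    bin/opensfm_run_all data/my_project

That runs feature extraction, matching, reconstruction, and the depthmap/mesh steps in sequence.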

The first attempt, a dodecahedron, was to no avail. Here is a picture of what the 3D print looks like and then the mesh after going through the OpenSfM pipeline:

[Photos: dodecahedron print and its OpenSfM mesh]

Obviously, this was a bit disappointing; however, some other models produced better point formations and densities that resembled the object more closely.

Here is a spiral:

So that mesh is getting better, especially compared with the first one. Here is one last one, a cone with two spheres in it:

Overall, the meshes got better and better which is promising.

Goals for next week:

  • Rework the configuration files to attempt to get denser meshes
  • Research other techniques for 3D reconstruction that have open source software packages
  • More research on parameters for pixels and matching points within meshes.

Thank you!

Adam S.

Student Post: Automated Detection of 3D Printer Failures Using Structure from Motion and Computer Vision

The goal of this project is to provide more effective mechanisms for detecting 3D printer failures through the use of structure from motion, computer vision, and computational geometry. The first step toward this goal is to determine what actually constitutes an error and which types of errors the application will focus on (if not all).

So what’s next? We need to make some “error-free” prints in addition to “intentional-error” prints in order to establish a baseline for tolerance and to compare the two. We also need a workflow for reconstructing 3D models from 2D images, which is where structure from motion comes in handy.
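
Once we have scans of both kinds of prints, one simple baseline for “how different is this print from the error-free one?” is nearest-neighbor distance between their point clouds. A minimal sketch (the random arrays stand in for real scans, and the 0.5 mm tolerance is a placeholder to calibrate later):

    # Flag points of a test scan that sit farther than a tolerance from the
    # error-free reference scan. Data and tolerance are placeholders.
    import numpy as np
    from scipy.spatial import cKDTree

    reference = np.random.rand(10000, 3) * 50        # stand-in "good" scan
    test = reference + np.random.normal(0, 0.05, reference.shape)

    dist, _ = cKDTree(reference).query(test)         # nearest-neighbor distances
    outliers = dist > 0.5                            # mm; tune this threshold
    print(f"{outliers.mean():.2%} of scanned points exceed tolerance")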

Of course, further exploration of literature on the subject will be continuing.

Student Post: Conversion of Graph to Solid Complete

Dear Reader,

This past week I have read over the official OpenSCAD documentation and achieved a limited but sufficient understanding of the OpenSCAD scripting language.

I have added two new file outputs to the Sphere Packing software: one for the graph edges and one for the mesh triangles.

Unfortunately, since OpenSCAD does not seem to have a direct way of pulling input from general files aside from standard 3D object formats such as STL, OFF, and AMF, I have to manually copy and paste the file outputs from Sphere Packing into an OpenSCAD file. On the bright side, I have made the file outputs vectors of points that are already in .scad format, so copying and pasting is all that really needs to be done. I have written a simple .scad script that takes the graph data and mesh data and generates the corresponding solids.
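
To give a feel for the format (an illustrative sketch, not the actual lab script): each graph edge can become a solid strut in OpenSCAD by hulling two spheres at its endpoints, so a generator only needs to emit one module call per edge.

    # Illustrative sketch: emit an OpenSCAD file that renders each graph edge
    # as a solid strut. The 0.8 strut radius is an assumption.
    EDGE_RADIUS = 0.8

    def write_scad(edges, path="graph_solid.scad"):
        # edges: list of ((x1, y1, z1), (x2, y2, z2)) endpoint pairs
        with open(path, "w") as f:
            f.write("module strut(p1, p2, r) {\n"
                    "    hull() { translate(p1) sphere(r);"
                    " translate(p2) sphere(r); }\n}\n\nunion() {\n")
            for p1, p2 in edges:
                f.write("    strut([%g,%g,%g], [%g,%g,%g], %g);\n"
                        % (*p1, *p2, EDGE_RADIUS))
            f.write("}\n")

    write_scad([((0, 0, 0), (10, 0, 0)), ((10, 0, 0), (5, 8, 0))])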

Here is a picture of an example solid for a graph from Sphere Packing:

The solid above took several minutes to render since the level of detail is relatively high. Next, I constructed a hollow shell from the triangles of the original mesh to fuse with the internal graph structure as a cover. The results are demonstrated below:

These solids are all unioned together to create one solid, which is then exported as an STL file. The object can be sliced with any 3D slicer program and then printed on existing 3D printers.

Hopefully, I’ll get to print a physical model this week.

Thanks for reading,

Xiang

Student Post: First Steps Toward 3D Printing Meshes Generated By Sphere Packing

Dear Reader,

A major goal of my Sphere Packing research is to actualize the mathematics and computer models into novel techniques for the mesh generation portion of the 3D printing process. To this end, we need a way to complete the full process starting from an STL model of a 3D object and ending with a 3D printed object. With the current software, we have already established a way to generate the meshes; however, we do not yet have a way to print the meshes. As partially mentioned in the previous post, we came up with two possible approaches for printing the meshes. The first method that we will try involves the conversion of a mesh into a solid using OpenSCAD.

A simple illustration of this idea is shown below:

[Illustration: a mesh converted into a hollow solid in OpenSCAD]

Having the solid, we could then use traditional slicer software to generate the G-code for printing the object.

Also, we can use OpenSCAD’s shape union and intersection functionalities to combine the shell of the original object with the internal infill mesh to create a print object that would have the shape of the original object but with a semi-hollow internal support structure consisting of the mesh from sphere packing.

[Illustration: OpenSCAD intersection of two shapes]
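
As a hedged sketch of what that combination could look like (the file names, the shrink-based shell, and the scale factor are all assumptions; a proper shell offset would use minkowski() or the slicer’s own shell settings):

    # Write an OpenSCAD file that unions a thin shell of the model with the
    # sphere-packing strut graph clipped to the model's interior.
    combined = """
    union() {
        // thin shell: the model minus a slightly shrunken copy of itself
        // (crude; assumes the model is roughly centered at the origin)
        difference() {
            import("model.stl");
            scale(0.98) import("model.stl");
        }
        // infill: clip the strut graph to the model's interior
        intersection() {
            import("model.stl");
            import("graph_solid.stl");
        }
    }
    """
    with open("combined.scad", "w") as f:
        f.write(combined)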

The general idea is to produce a lightweight solid with a strong internal structure that can withstand great stress. A natural structure with this quality is bird bone, which is necessarily lightweight for flight yet withstands the constant stresses of flight motions.


The specific implementation of this procedure is in progress, so stay tuned. ~

Thanks for reading!

-Xiang

Student Post: New Lattice Structure and Other Ideas Going Forward

Dear Reader,

Since the last time I posted, I have added the ability to generate a different lattice structure to the Sphere Packing software. This new lattice structure is called the hexagonal close-packed (hcp) lattice. It is one of the tightest ways to pack equally sized spheres, the other being the face-centered cubic (fcc) lattice, which differs only in how consecutive layers are aligned.

[Image: hcp (left) and fcc (right) close packing]

In the above image, a bird’s-eye view of hcp packing (left) and fcc packing (right) is shown. The underlying hexagonal arrangement of white spheres, representing layer 1, is the tightest way of packing equally sized circles in a plane, also known as a penny packing. Every layer of the hcp and fcc lattices has this same hexagonal arrangement. The black spheres, representing layer 2, are placed over the gaps between three underlying spheres. Note that layers 2 and 3 are not fully drawn so that the lower layers are not completely obscured. Also note that layer 2 is identical for both lattices; the difference comes at layer 3, which lines up with layer 1 in the hcp lattice but is shifted in the fcc lattice.
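
To make the layering concrete, here is a small sketch (not the lab’s actual code) that generates hcp sphere centers for touching spheres of diameter d, following exactly the ABAB description above:

    # Centers of an ABAB (hcp) stacking of spheres of diameter d.
    import numpy as np

    def hcp_centers(nx, ny, nz, d=1.0):
        pts = []
        for k in range(nz):                      # layers: A, B, A, B, ...
            for j in range(ny):
                for i in range(nx):
                    x = (i + 0.5 * (j % 2)) * d  # hexagonal "penny" layer
                    y = j * (np.sqrt(3) / 2) * d
                    z = k * np.sqrt(2.0 / 3.0) * d
                    if k % 2 == 1:               # B layers sit over the gaps
                        x += 0.5 * d
                        y += d / (2 * np.sqrt(3))
                    pts.append((x, y, z))
        return np.array(pts)

    # Sanity check: the closest pair of centers should be exactly d apart.
    from scipy.spatial import cKDTree
    pts = hcp_centers(4, 4, 3)
    dist, _ = cKDTree(pts).query(pts, k=2)
    print("min center distance:", dist[:, 1].min())   # ~1.0 for d = 1.0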

Anyways, back to the main point. Hexagonal close-packed lattices are now available in the sphere packing software. Earlier, there were only body-centered cubic lattices:

Now here are the hcp lattices:

With this new type of lattice we hope to gain insights into how a different set of combinatorics affects sphere packing.

Moving forward, I would like to 3D print the meshes generated by the sphere packing. The difficulty lies in the fact that the meshes are not solids, so traditional 3D slicer programs cannot work with the mesh output directly. Thus, one approach would be to convert the mesh into a solid; as has been suggested to me, this could possibly be done using OpenSCAD. Another approach would be to skip the slicer step completely and directly generate G-code from the mesh using a toolpath-planning algorithm. I will first try converting the mesh into a solid, since it seems easier than developing a new algorithm for toolpath planning in 3D space. I will keep you posted on the results.

Thanks for reading!

-Xiang