Student Post: Automated Detection of 3D Print Failures Thesis Proposal

Greetings,

For the past two weeks, I have been drafting and submitting my honors thesis proposal in order to formalize what I have been blogging about this semester! The proposal is titled “Automated Detection of 3D Printer Failures” and can be found at the link provided below. Please peruse the proposal for details and to see what is coming for the next year.

Thank you,

Adam S.

adam-slattum-proposal

Student Post: Success with ROS and OpenNI, Python Scripting with OpenCV

Greetings,

This week I have finally found success with our implementation of ROS and OpenNI on our Linux Mint machine in the Forensics Lab. I ended up resolving our global frame issue and received images from the Asus Xtion Pro Live in addition to point clouds!

When I ran rviz with openni.launch engaged, using the following command:

[Screenshot: the roslaunch command used]

rviz prompted me for the global fixed frame, which I set to ‘camera_link’ so that data from the camera could be visualized.

Here is an example of the global frame prior to the configuration:

[Screenshot: rviz global frame before configuration]

Here is an example of the global frame after the configuration with ‘camera_link’:

[Screenshot: rviz global frame after setting ‘camera_link’]

After this, I added a PointCloud2 display to the Displays tab in rviz and pointed it at the ‘/camera/depth/points’ topic. Here is an example:

[Screenshot: adding a PointCloud2 display in rviz]

Selecting the PointCloud2 option added it to the main Displays tab in rviz, which finally allowed me to view point clouds!
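For reference, once openni.launch is publishing, the same point cloud topic can also be consumed from a script. Here is a minimal sketch of my own (not code from the post), assuming the default ‘/camera/depth/points’ topic name:

```python
# Minimal sketch: listen to the openni_launch point cloud topic with rospy.
# Assumes the default 'camera' namespace used by openni.launch.
import rospy
from sensor_msgs.msg import PointCloud2

def callback(cloud):
    # PointCloud2 messages carry width/height plus a packed data buffer.
    rospy.loginfo("Got cloud: %d x %d points", cloud.width, cloud.height)

rospy.init_node("xtion_cloud_listener")
rospy.Subscriber("/camera/depth/points", PointCloud2, callback)
rospy.spin()
```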

Here is a screenshot of the first point cloud that I was able to get. Keep in mind the camera is not calibrated to a static location.

[Screenshot: first point cloud captured in rviz]

You can make out the desktop computers in the background and maybe office chairs. Here is the configuration with the visualization in rviz:

[Screenshot: rviz configuration panel alongside the point cloud visualization]

If you look closely at the image, you can see me in the point cloud, along with something that looks like my shadow. It is actually 3-dimensional: my body blocks the camera's view of the wall behind me, so there is a region with no depth data, which creates the shadow-like figure.

In addition to this, I was able to change the configuration to display what looks like a heat-mapped image with the points rendered as squares. Here is the picture:

[Screenshot: heat-mapped point cloud with square points]

As you can see, that is me again in the point cloud with my hands raised over my head. An interesting thing to note is that the style of the point cloud can be changed so that points are rendered as squares, circles, etc.

Finally, I’ve been writing some Python code this week with OpenCV compiled with OpenNI support in order to receive raw camera data. However, I’ve hit a roadblock: OpenCV 3 moved and renamed some of the OpenNI functions and constants, so code written against the older API no longer works as-is.
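For what it's worth, here is roughly what that capture loop looks like under OpenCV 3 once the renamed constants are sorted out. This is a hedged sketch of my own (assuming OpenCV is built with OpenNI2 support), not the exact code from this week:

```python
# Sketch: grab depth and BGR frames from the Xtion through OpenCV's OpenNI
# backend. The OpenCV 2 names (cv2.cv.CV_CAP_OPENNI, CV_CAP_OPENNI_DEPTH_MAP,
# ...) became cv2.CAP_OPENNI2 / cv2.CAP_OPENNI_DEPTH_MAP in OpenCV 3.
import cv2

cap = cv2.VideoCapture(cv2.CAP_OPENNI2)   # cv2.CAP_OPENNI2_ASUS is the Xtion-specific variant
if not cap.isOpened():
    raise RuntimeError("OpenNI device not found (is OpenCV built with OpenNI2 support?)")

while True:
    if not cap.grab():
        break
    _, depth = cap.retrieve(flag=cv2.CAP_OPENNI_DEPTH_MAP)   # 16-bit depth in millimeters
    _, bgr = cap.retrieve(flag=cv2.CAP_OPENNI_BGR_IMAGE)     # regular color frame
    # Roughly scale 0-4 m of depth into 0-255 just for display purposes.
    cv2.imshow("depth", cv2.convertScaleAbs(depth, alpha=255.0 / 4000.0))
    cv2.imshow("rgb", bgr)
    if cv2.waitKey(30) == 27:  # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```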

So for next week, be on the lookout for some code to interact with the Asus Xtion Pro Live.

Thanks for reading,

Adam

 

Student Post: ROS and OpenNI Update

Greetings,

This past week I worked on installing ROS (Robot Operating System) on our desktop in the Ars Geometrica Lab in ISAT. With the ROS installation complete and usable, I began focusing on getting the openni_launch and openni_camera nodes working together to produce visuals from the Asus Xtion Pro Live.

This was indeed more difficult than I anticipated due to all of the dependency and compatibility issues involved with the drivers for stereo cameras like the Xtion Pro Live. At the beginning of the week, I was not able to get the openni_launch node to recognize the Xtion Pro Live over the USB connection to the desktop; however, I was able to fix this with some dependency management, and now the launch node recognizes that the Xtion is plugged in and can retrieve data from it.

The only catch is that two strange warnings are produced, which I looked into, about two calibration YAML files that are not located in the spot where the driver expects to find the camera calibration before opening the video stream.

These are the warnings that I was receiving:

[Screenshot: camera calibration YAML warnings]

I felt like these might have something to do with why the camera stream was not visible in ROS, so I kept digging and found that I needed to intrinsically calibrate the camera to see its stream.

[Screenshot: the command used to open the image viewer]

After running this command, an image viewer popped up and I was able to see a colorful image from the Xtion Pro Live. This led me to believe that I needed to calibrate the camera in order to visualize it through ROS’s rviz, which is its 3D visualizer.
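As a side note, the same RGB stream can also be pulled into a small script instead of the image viewer tool. Here is a minimal sketch of my own (not the command from the screenshot), assuming the default /camera/rgb/image_color topic published by openni.launch:

```python
# Sketch: view the Xtion's RGB stream with rospy and cv_bridge.
import rospy
import cv2
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

bridge = CvBridge()

def show(msg):
    # Convert the ROS Image message to an OpenCV BGR array and display it.
    frame = bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
    cv2.imshow("Xtion RGB", frame)
    cv2.waitKey(1)

rospy.init_node("xtion_rgb_viewer")
rospy.Subscriber("/camera/rgb/image_color", Image, show)
rospy.spin()
```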

A quick reminder: the goal right now is to be able to visualize point clouds in rviz using the Xtion Pro Live. This should be simple once the camera is recognized within the rviz environment.

Currently, within rviz, I am running into an issue with the global fixed frame. It has an error associated with it that I was not able to figure out in the time spent this week; however, I do have a hunch as to what could be causing it. It seems that when you run:

[Screenshot: the roslaunch command used]

In ROS, this initializes rviz as well, so something in the configuration that openni.launch hands to rviz could be triggering the error. With this in mind, the goal for next week is to get rviz and openni_launch working together in order to begin visualizing point clouds within rviz.

Thanks,

Adam Slattum

1) ROS RVIZ

2) ROS Image View

Student Post: ROS and Asus Xtion Pro Live

Greetings,

This week I will be exploring the use of the Asus Xtion Pro Live within the ROS environment. We have acquired a desktop machine for the Ars Geometrica Lab in ISAT on which we have installed Linux Mint to use with ROS.

A little bit about ROS: the Robot Operating System (ROS) is a collection of software frameworks for robot software development, and within this collection there is an OpenNI camera driver for depth and RGB cameras, including the Microsoft Kinect and the ASUS Xtion Pro and Pro Live. The driver publishes raw depth, RGB, and IR image streams within ROS. Used in combination with the openni_launch package, this allows ROS to convert those streams into depth images, disparity images, and registered point clouds, which is exactly what we are looking for.

The next steps are to visualize our 3D printed objects through the OpenNI drivers in ROS and to think about a pipeline that we could construct. Ideally, the first step in that pipeline would be taking point clouds from ROS and running a comparison test between ideal and error prints.
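To make the comparison idea concrete, here is a rough illustrative sketch (nothing like this exists in the project yet; the file names are placeholders) of scoring a scanned point cloud against an ideal one with a nearest-neighbor test:

```python
# Illustrative sketch: a crude "error" score between an ideal print's point
# cloud and a scanned one, using nearest-neighbor distances. Assumes both
# clouds are Nx3 numpy arrays already in the same coordinate frame and units.
import numpy as np
from scipy.spatial import cKDTree

def cloud_deviation(ideal_points, scanned_points):
    """Return the distance from each scanned point to the nearest ideal point."""
    tree = cKDTree(ideal_points)
    distances, _ = tree.query(scanned_points)
    return distances

# Hypothetical usage: flag a print as suspect if many points deviate by >2 mm.
# ideal = np.load("ideal_cloud.npy"); scanned = np.load("scanned_cloud.npy")
# d = cloud_deviation(ideal, scanned)
# print("fraction of points off by more than 2 mm:", np.mean(d > 2.0))
```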

Thanks,

Adam Slattum

1) http://wiki.ros.org/openni_launch

2) http://wiki.ros.org/openni_camera

3) http://www.ros.org/

Student Post: Xtion Pro Live Stereo Camera and OpenNI

Greetings,

This week I have been setting up the ASUS Xtion Pro Live Stereo RGB camera to be used along with an open source SDK known as OpenNI for quick prototyping and 3D motion/depth sensing.

[Photo: ASUS Xtion Pro Live]

This camera could be beneficial for us in prototyping and proof of concept going forward. The RGB sensor on the stereo camera will allow us to do better depth imaging while the stereo camera setup itself will allow for hopefully better image recognition and detection of changes in the field of view.

The software packages that I plan to use to communicate with the camera are OpenNI and NITE; however, the documentation is sparse, and troubleshooting is more difficult because, depending on which package and what version of an OS you have, the camera may not be recognized.

The OpenNI and NITE packages should allow us to receive real time stereo video and be able to quickly turn that into depth images and 3D point clouds.
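One possible route for pulling that raw data into code (an assumption on my part; the posts so far don't commit to a specific binding) is the primesense Python bindings for OpenNI2:

```python
# Sketch: read one raw depth frame from the Xtion via the 'primesense'
# OpenNI2 Python bindings. Assumes the default 640x480 depth mode.
import numpy as np
from primesense import openni2

openni2.initialize()                 # loads the OpenNI2 runtime
dev = openni2.Device.open_any()      # first OpenNI device found over USB
depth_stream = dev.create_depth_stream()
depth_stream.start()

frame = depth_stream.read_frame()    # a single depth frame
depth = np.frombuffer(frame.get_buffer_as_uint16(), dtype=np.uint16)
depth = depth.reshape(480, 640)      # depth values are in millimeters
print("center pixel depth (mm):", depth[240, 320])

depth_stream.stop()
openni2.unload()
```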

[Images: OpenNI example output and a registered point cloud]

OpenNI Examples

Starting out with OpenNI

Some small challenges have surfaced with getting the camera recognized once it is plugged into my machine via USB, so I am going to work on sorting this out over the next week.

Thanks,

Adam S.

 

 

Student Post: BoofCV, Image Segmentation, and 3D Scanner

Greetings,

This past week Dr. Bowers and I met to come up with ideas on how to move forward with our problem of comparing meshes and/or images for error detection. Essentially, we came up with some new ideas to address problems we’ve been having:

  • Image segmentation
  • Real-time/video structure from motion
  • BoofCV/3D Stereo Clouds
  • 3D scanner in cooperation with Dr. Dias of the Physics Department here at JMU

Image Segmentation

To start, the goal of image segmentation is to cluster pixels into prominent image regions and then use this within a stereo system for object recognition. We would like to be able to take a picture of, say, a desk without a specific object in it, then take another picture with the object inserted, and detect it through image segmentation. This would allow us to essentially remove the background while keeping the object and its boundaries. (Pictures from Source 1 below)
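As a rough sketch of the "empty scene vs. scene with object" idea (my own illustration, not project code; the file names are placeholders), a simple difference-and-threshold pass in OpenCV looks like this:

```python
# Sketch: detect an inserted object by differencing against a background shot.
# A real pipeline would also need both shots from the same viewpoint/lighting.
import cv2

background = cv2.imread("desk_empty.jpg", cv2.IMREAD_GRAYSCALE)
scene = cv2.imread("desk_with_object.jpg", cv2.IMREAD_GRAYSCALE)

diff = cv2.absdiff(scene, background)                      # pixel-wise difference
_, mask = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)  # keep strong changes
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN,              # drop small noise blobs
                        cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5)))

# Contours of the mask approximate the inserted object's boundary.
contours = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                            cv2.CHAIN_APPROX_SIMPLE)[-2]  # works in OpenCV 3 and 4
```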

Real-Time Structure from Motion (Simultaneous Localisation and Mapping, “SLAM”)

Another idea that I plan on exploring further is real-time structure from motion, where you take a live stream from a camera and mesh it in real time to do the comparisons. This would be beneficial for what we are trying to do here, since the comparison would happen in real time rather than after a separate meshing step. (Interested? Sources 2 & 3 at the bottom have more.)

BoofCV (Real Time CV and Robotics)

BoofCV is an open source, pure Java library for real-time computer vision and robotics (similar in scope to OpenCV, the Open Source Computer Vision library, but independent of it). BoofCV is organized into four packages: 1) image processing, 2) features, 3) geometric vision, and 4) recognition. One area I am interested in is dense optical flow, which compares two images to estimate the apparent motion between their pixels; this is useful for object detection, tracking, and image segmentation. Additionally, I’m interested in the geometric 3D stereo cloud package and stereo calibration, which will come in handy when using stereo cameras. Lastly, they have a Java module written to interact with the Xbox 360 Kinect, which we could possibly use as a stereo setup in the meantime to get some quick results. (Interested? Follow the link from Source 4)
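BoofCV itself is Java; as a quick stand-in sketch of the dense optical flow idea in the language I've been using so far, here is the equivalent operation with OpenCV's Farneback implementation (an illustration only; the frame file names are placeholders):

```python
# Sketch: dense optical flow between two consecutive frames.
import cv2

prev = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

# Arguments after the images: pyramid scale, levels, window size, iterations,
# polynomial neighborhood size, polynomial sigma, flags.
flow = cv2.calcOpticalFlowFarneback(prev, curr, None, 0.5, 3, 15, 3, 5, 1.2, 0)

# 'flow' is an HxWx2 array of per-pixel (dx, dy) motion estimates.
magnitude, angle = cv2.cartToPolar(flow[..., 0], flow[..., 1])
```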
Shining 3D Scanner (In part with JMU Physics)

In addition to these other methods for reconstruction and object detection, in the next couple of weeks I will be working on setting up the Shining 3D scanner from Dr. Dias in the Physics Department to see if there is a way to get effective meshes and scans from it for a proof of concept!

Sources:

Student Post: OpenSfM Revisited, Epipolar Geometry, and Possibility of Rendering

Greetings,

I’ve continued to delve into the OpenSfM configuration files to attempt to get denser meshes, without much success so far. The meshes seem similar; however, with the yellow spiral I did receive a less distorted reconstruction that could be on the road to where we want to be. The figure is shown below:

So within the next week or two, I hope to meet with Professor Bowers to discuss some optimizations that we can make to receive even better meshes.

Epipolar geometry is basically “two-view geometry,” in which you match points from two separate images using an epipolar constraint, reducing the search from a 2D problem to a 1D problem. This can be done either with a stereo camera setup (two separate cameras) where the images are acquired simultaneously, or with a single moving camera where the images are acquired sequentially. Given two images of a scene, the goal is to triangulate the corresponding image points and then back-project the ‘rays’, which intersect at a given 3D point. This is shown in the figure below:

From this triangulation, we use the information that a point in one view “generates” an epipolar line in the other one as shown below:

From this, the two camera centers and the 3D point, together with its corresponding image points x and x′, define a plane known as the epipolar plane, shown below as the shaded area:

Ultimately, we see that each image plane contains an epipolar line along which we can search for the corresponding image point:

From this we can see the true advantage of using epipolar geometry when reconstructing 3D models as shown below:

All in all, I’d like to explore the possibility of using epipolar geometry to create these mappings of 2D points to reconstruct a 3D image.
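To make the two-view setup concrete, here is a small sketch of my own (not part of the project yet) of estimating the fundamental matrix and the epipolar lines with OpenCV, assuming pts1 and pts2 are already-matched pixel coordinates from a feature matcher:

```python
# Sketch: estimate the epipolar geometry between two views.
# pts1 and pts2 are assumed to be matched Nx2 float arrays of pixel coordinates.
import numpy as np
import cv2

def epipolar_lines(pts1, pts2):
    # The fundamental matrix F encodes the constraint x2^T * F * x1 = 0.
    F, inlier_mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC)

    # For each point in image 1, its match in image 2 must lie on the
    # corresponding epipolar line (a*x + b*y + c = 0), which is what turns
    # the 2D correspondence search into a 1D search along that line.
    lines_in_img2 = cv2.computeCorrespondEpilines(pts1.reshape(-1, 1, 2), 1, F)
    return F, lines_in_img2.reshape(-1, 3), inlier_mask
```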

In addition to using epipolar geometry for reconstruction, I’d also like to explore the idea of using rendering instead of reconstruction: take a 3D model, render it, and receive point mappings to compare, instead of the 2D-to-3D pipeline that we’ve been attempting so far.

Thank you,

Adam S.

*Pictures: Dr. Didier Stricker, Kaiserlautern University, http://ags.cs.uni-kl.de/fileadmin/inf_ags/3dcv-ws11-12/3DCV_WS11-12_lec05.pdf

Student Post: Progress with OpenSfM (Structure from Motion)

Hello,

I’ve made some progress in the past week with getting an implementation of OpenSfM running on my own machine, and have finally done so. I ran a couple of test 2D images through the reconstruction pipeline with moderate results, so I thought it was time to run some actual 3D printed material through it to see what reconstructions I could get!

The first attempt was to no avail, as I tried to reconstruct a dodecahedron. Here is a picture of what the 3D print looks like, and then the mesh after going through the OpenSfM pipeline.

[Images: the printed dodecahedron and its OpenSfM mesh]

Obviously, this was a bit disappointing; however, some of the other reconstructions had better point formations and densities that resembled the object more closely.

Here is a spiral:

That mesh is getting better, especially compared to the first one. Here is one last one, a cone with two spheres in it.

Overall, the meshes got better and better which is promising.

Goals for next week:

  • Rework the configuration files to attempt to get denser meshes
  • Research other techniques for 3D reconstruction that have open source software packages
  • More research on parameters for pixels and matching points within meshes.

Thank you!

Adam S.

Student Post: Automated Detection of 3D Printer Failures Using Structure from Motion and Computer Vision

The goal of this project is to provide more effective mechanisms for error detection of 3D printer failures through the use of ‘structure from motion’, computer vision, and computational geometry. Furthermore, the first step toward achieving this goal is to determine what actually constitutes an error and what types of errors the application will focus on (if not all).

So what’s next? We need to make some “error free” prints in addition to “intentional error” prints in order to establish a baseline for tolerance and to compare the two. Also, a workflow for reconstructing 3D models from 2D images is coming, and this is where structure from motion comes in handy.

Of course, further exploration of the literature on the subject will continue.