Student Post: FreeCAD FEM Module Results

Dear Reader,

In accordance with my timeline, I have explored the capabilities of FreeCAD's FEM module, along with other auxiliary modules, in order to find reliable software that makes it easy to apply stresses to a 3D object and analyze the results.

I explored the basic functionality on a trivial yet essential structure as shown in the following image:

Here were the displacement and stress results on individual nodes of the mesh:

In this test, I used auto-generated nodes for the mesh. FreeCAD also appears to support more customized methods for defining structural nodes.
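For what it's worth, the FEM workbench can also be driven from FreeCAD's Python console, so mesh generation does not have to go through the GUI. Below is a very rough sketch of that idea; the module and property names (ObjectsFem, makeMeshGmsh, CharacteristicLengthMax, GmshTools) are my assumptions from recent FreeCAD documentation, not something I have verified on our install:

```python
# Rough sketch: driving the FEM workbench from FreeCAD's Python console.
# The names used here (ObjectsFem, makeMeshGmsh, CharacteristicLengthMax,
# GmshTools) are assumptions based on recent FreeCAD releases and may
# differ between versions.
import FreeCAD
import ObjectsFem

doc = FreeCAD.open("test_structure.FCStd")   # hypothetical document
part = doc.getObject("Box")                  # hypothetical solid to mesh

analysis = ObjectsFem.makeAnalysis(doc, "Analysis")

# Create a Gmsh-based FEM mesh tied to the solid and control how fine
# the generated nodes are via the characteristic length.
mesh = ObjectsFem.makeMeshGmsh(doc, "FEMMesh")
mesh.Part = part
mesh.CharacteristicLengthMax = 5.0           # mm; smaller means a denser mesh
analysis.addObject(mesh)

# Actually run the mesher (module path is also version dependent).
from femmesh.gmshtools import GmshTools
GmshTools(mesh).create_mesh()

doc.recompute()
```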

However, I'm still in the process of figuring this feature out. I'm also working on designing a 3D object that will be simple to stress-test with the equipment we have available.

Stay tuned for updates.

Thanks for reading,

-Xiang

Student Post: Success with ROS and OpenNI, Python Scripting with OpenCV

Greetings,

This week I have finally found success with our implementation of ROS and OpenNI on our Linux Mint machine in the Forensics Lab. I ended up resolving our global frame issue and have received images from the Asus Xtion Pro Live in addition to point clouds!

When I ran roslaunch rviz rviz with the OpenNI launch file engaged (using the following command):

[Screenshot of the launch command]

I was prompted for the global fixed frame within rviz, which I proceeded to set to 'camera/link' so that data visualizations from the camera could be displayed.
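As a side note, one way to double-check which TF frames the driver is actually publishing (and therefore what the fixed frame can be set to) is to list them from a small Python node. This is just a sketch using the standard tf listener API; the frame names it prints depend entirely on the launch configuration:

```python
#!/usr/bin/env python
# Sketch: list the TF frames currently being published so I can see
# which frame name the driver actually provides for the fixed frame.
# Assumes a running roscore with the OpenNI launch file active.
import rospy
import tf

rospy.init_node("frame_lister")
listener = tf.TransformListener()
rospy.sleep(2.0)          # give the listener time to buffer transforms

for frame in listener.getFrameStrings():
    print(frame)
```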

Here is an example of the global frame prior to the configuration:

[Screenshot: global fixed frame before configuration]

Here is an example of the global frame after the configuration with ‘camera/link’:

[Screenshot: global fixed frame set to 'camera/link']

After this, I added a 'camera/depth/points' display to the Displays tab in rviz, which is visualized using the PointCloud2 display type. Here is an example:

[Screenshot: adding the PointCloud2 display for 'camera/depth/points']

Selecting the PointCloud2 option added it to the main Displays tab in rviz, which finally allowed me to view point clouds!
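Beyond viewing the cloud in rviz, the same data can be read programmatically. Here is a minimal sketch of a rospy subscriber for the point cloud topic; the topic name is my assumption from the default openni.launch setup and should be confirmed with rostopic list:

```python
#!/usr/bin/env python
# Minimal sketch: subscribe to the depth point cloud from the OpenNI
# driver and report how many valid points each message contains.
# The topic name '/camera/depth/points' is assumed from the default
# openni.launch setup.
import rospy
from sensor_msgs.msg import PointCloud2
from sensor_msgs import point_cloud2

def on_cloud(msg):
    # read_points yields (x, y, z) tuples; skip NaNs from unmeasured pixels
    points = list(point_cloud2.read_points(
        msg, field_names=("x", "y", "z"), skip_nans=True))
    rospy.loginfo("received cloud with %d valid points", len(points))

rospy.init_node("cloud_listener")
rospy.Subscriber("/camera/depth/points", PointCloud2, on_cloud)
rospy.spin()
```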

Here is a screenshot of the first point cloud that I was able to get. Keep in mind that the camera is not calibrated to a static location.

[Screenshot: first point cloud]

You can make out the desktop computers in the background and maybe office chairs. Here is the configuration with the visualization in rviz:

[Screenshot: rviz configuration with the point cloud visualization]

If you look closely at the image, you can see me in the point cloud, along with something that looks like my shadow. The cloud is actually three-dimensional: the sensor recognizes that my body is in front of the wall behind me, and because it cannot see behind me, it leaves a shadow-like gap in the data.

In addition to this, I was able to change the configuration to produce what looks like a heat-mapped image with the points rendered as squares. Here is the picture:

[Screenshot: heat-mapped point cloud rendered with square points]

As you can see, that is me again in the point cloud with my hands raised over my head. An interesting thing to note is that the style of the point cloud can be changed so that the points are rendered as squares, circles, etc.

Finally, I've been writing some Python code this week with OpenCV compiled with OpenNI support in order to receive raw camera data. However, I've hit a roadblock, because OpenCV 3 renamed some of the OpenNI-related functions and constants.
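For context, the renaming is mostly that OpenCV 2.x constants like cv2.CV_CAP_OPENNI became cv2.CAP_OPENNI / cv2.CAP_OPENNI2 in OpenCV 3. To show where I am headed, here is a rough sketch of what the capture loop might look like with the OpenCV 3 names; this is not working project code yet, and it assumes the build really does have OpenNI2 support enabled:

```python
# Rough sketch of the intended capture loop, using the OpenCV 3 constant
# names (cv2.CAP_OPENNI2 and friends, which replaced the old
# cv2.CV_CAP_OPENNI* names). Assumes OpenCV was compiled with OpenNI2
# support; not working project code yet.
import cv2

cap = cv2.VideoCapture(cv2.CAP_OPENNI2)
if not cap.isOpened():
    raise RuntimeError("could not open the OpenNI2 device")

while True:
    if not cap.grab():
        continue
    ok_d, depth = cap.retrieve(flag=cv2.CAP_OPENNI_DEPTH_MAP)   # 16-bit depth (mm)
    ok_c, color = cap.retrieve(flag=cv2.CAP_OPENNI_BGR_IMAGE)   # 8-bit BGR frame
    if ok_d:
        cv2.imshow("depth", depth)
    if ok_c:
        cv2.imshow("color", color)
    if cv2.waitKey(30) & 0xFF == 27:   # press Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```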

So for next week, be on the lookout for some code to interact with the Asus Xtion Pro Live.

Thanks for reading,

Adam

 

Student Post: Project Timeline for Structural Refinement to Counter Application of Forces

Dear Reader,

The sphere packing software currently generates 3D internal structural meshes for hollow 3D objects. However, our current usage of sphere packing does not provide a major improvement over simple manual generation of uniform truss structures. The true advantage to using sphere packing is actually the ability to selectively manipulate local regions of the internal structure. Of course, a human engineer could manually design a complex structure with desirable properties in specific parts of a 3D object, but this is time consuming. My current goal is to create software that can automate or aid a human in this process.

In order to take advantage of sphere packing's capability, I am moving the project to the next phase. I've been doing research on stress in mechanical engineering to gain a better understanding of how structures are designed to counter or use stress. Unfortunately, as far as my novice eyes can see, these structures generally do not involve complex 3D graphs. Also, there is a large amount of information on how to calculate stresses on objects, but much less on how to augment structures to counter stress beyond varying the materials.

Regardless, I am going to perform an experiment to determine if I can structurally augment 3D meshes to resist stress forces, specifically compression, tension, and shear.

[Image: stress in mechanics]

In the first phase, the inputs were the STL file of a 3D object and the sphere packing parameters, and the output was a 3D graph that served as the internal mesh of the object. In the second phase, the input will be the 3D graph together with data describing the forces applied to each vertex of the graph, and the output will be an altered graph that resists those forces better.

To obtain the forces applied to each vertex of the graph, I will convert the graph to solid form as an STL file and use a finite element analysis tool that lets a user selectively apply forces to the entire object. I am thinking of using the FEM module for FreeCAD, although I have yet to explore its capabilities.

[Image: finite element analysis in FreeCAD]
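To illustrate the graph-to-STL step mentioned above, here is one possible sketch using the third-party trimesh library: each edge of the graph becomes a thin cylindrical strut, and the struts are exported together as an STL. The strut radius and the (vertices, edges) format are placeholder assumptions, not the sphere packing software's actual interface:

```python
# Hypothetical sketch of the graph-to-solid step: place a thin cylinder
# along every edge of the internal graph and export the result as an STL
# that an FEA tool can load. Uses the third-party trimesh library; the
# strut radius and the graph format are placeholder assumptions.
import numpy as np
import trimesh

def graph_to_stl(vertices, edges, path="internal_structure.stl", radius=0.5):
    struts = []
    for i, j in edges:
        p1, p2 = np.asarray(vertices[i]), np.asarray(vertices[j])
        struts.append(trimesh.creation.cylinder(radius=radius, segment=(p1, p2)))
    solid = trimesh.util.concatenate(struts)
    solid.export(path)
    return solid
```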

Next, I will add functionality to the sphere packing software so that the vertex force data can be used to refine the graph. The specific refinement scheme still needs to be worked out, but I do have an idea that I am going to implement. I will explain in greater detail once the implementation is complete, but the general idea is based on cell growth in biology, where in my case the cells are spheres. This rests on the assumption that a dense graph can withstand more stress than a sparse graph. Think of osteoporosis, for example (a rough sketch of the idea follows the image below):

[Image: osteoporosis bone structure]
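To make the cell-growth analogy slightly more concrete, here is a very loose sketch of the kind of refinement rule I have in mind. Everything in it (the graph representation, the stress threshold, the way new spheres are placed) is hypothetical and is not the final scheme:

```python
# Purely hypothetical sketch of a density-based refinement rule: wherever
# the FEA reports high stress at a vertex, drop an extra sphere centre
# nearby and connect it, so highly loaded regions become denser. None of
# these names or thresholds come from the actual sphere packing code.
import numpy as np

def refine_graph(vertices, edges, vertex_stress, stress_threshold, offset=1.0):
    """vertices: (N, 3) array; edges: list of (i, j) pairs;
    vertex_stress: (N,) array of scalar stress magnitudes."""
    new_vertices = [tuple(v) for v in vertices]
    new_edges = list(edges)

    for i, stress in enumerate(vertex_stress):
        if stress < stress_threshold:
            continue
        # Place a new sphere centre a small offset away from the stressed
        # vertex and connect it back to the graph.
        direction = np.random.randn(3)
        direction /= np.linalg.norm(direction)
        new_point = tuple(np.asarray(vertices[i]) + offset * direction)
        new_edges.append((i, len(new_vertices)))
        new_vertices.append(new_point)

    return np.array(new_vertices), new_edges
```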

Once the refined graph for the original object has been generated, I will conduct physical tests to determine its structural limits. I will conduct the same tests on the unrefined graph, the full solid object, and a hollow shell of the object as controls.

To keep myself on track I present my Timeline:

March 27th, 2017 – Find and utilize suitable software for simulation of forces on 3D objects to calculate forces on specific nodes of a graph.

April 7th, 2017 – Finish implementation of refinement functionality in sphere packing.

April 14th, 2017 – Finish designing experimental structures and print as objects for testing.

April 21st, 2017 – Complete physical stress tests on objects.

April 24th, 2017 – Write up report on experiment.

I shall keep you posted on progress.

Thanks for reading,

-Xiang

Student Post: ROS and OpenNI Update

Greetings,

This past week I have worked on installing ROS (Robot Operating System) on our desktop in the Ars Geometrica Lab in ISAT. With the ROS installation complete and usable, I began focusing on getting the openni_launch and openni_camera nodes working together to produce visuals from the Asus Xtion Pro Live.

This was indeed more difficult than I anticipated, due to all of the dependency and compatibility issues involved with the drivers for stereo cameras like the Xtion Pro Live. At the beginning of the week, I was not able to get the openni_launch node to recognize the Xtion Pro Live through the USB connection to the desktop; however, I was able to fix this issue with dependency management, and now the launch node recognizes that the Xtion is plugged in and can retrieve data from it.

The only catch is that two strange warnings are produced, which I looked into; they concern two camera-calibration YAML files that are not located in the correct spot before the video stream is opened.

These are the errors that I was receiving:

[Screenshot: calibration file warnings]

I felt like these might have something to do with why the camera stream was not visible in ROS, so I kept digging and found that I needed to intrinsically calibrate the camera to see its stream.

[Screenshot of the command]

After running this command, an image viewer popped up and I was able to see a colorful image through the Xtion Pro Live. This led me to believe that I needed to calibrate the camera in order to visualize it through ROS's rviz, which is its 3D visualizer.
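As a programmatic sanity check alongside the image viewer, the raw RGB topic can also be read from a small Python node. This is only a sketch; the topic name is my assumption from the default openni.launch namespace, and cv_bridge is used to convert the ROS image into an OpenCV array:

```python
#!/usr/bin/env python
# Sketch: confirm the Xtion's RGB stream is publishing by subscribing to
# the raw image topic and logging the frame size. The topic name assumes
# the default openni.launch namespace and should be checked with
# rostopic list.
import rospy
from sensor_msgs.msg import Image
from cv_bridge import CvBridge

bridge = CvBridge()

def on_image(msg):
    frame = bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
    rospy.loginfo("got a %dx%d frame", frame.shape[1], frame.shape[0])

rospy.init_node("rgb_check")
rospy.Subscriber("/camera/rgb/image_raw", Image, on_image)
rospy.spin()
```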

A quick reminder, the goal right now is to be able to visualize point clouds in rviz using the Xtion Pro Live. This should be simple once the camera is recognized within the rviz environment.

Currently, within rviz, I am running into an issue with the global fixed frame. It has an error associated with it that I was not able to resolve in the time spent this week; however, I do have an idea of what could be causing the error. It seems that when you run:

[Screenshot of the launch command]

in ROS, this initializes rviz as well, so something in the configuration that openni.launch passes to rviz could be triggering the error. With this in mind, the goal for next week is to get rviz and openni_launch working together in order to begin visualizing point clouds within rviz.

Thanks,

Adam Slattum

1) ROS RVIZ

2) ROS Image View

Student Post: ROS and Asus Xtion Pro Live

Greetings,

This week I will be exploring the use of the Asus Xtion Pro Live within the ROS environment. We have acquired a desktop machine for the Ars Geometrica Lab in ISAT and installed Linux Mint on it to use with ROS.

A little bit about ROS: the Robot Operating System (ROS) is a collection of software frameworks for robot software development, and within this collection there is an OpenNI camera driver for depth and RGB cameras, including the Microsoft Kinect and the ASUS Xtion Pro and Pro Live. The driver publishes raw depth, RGB, and IR image streams within ROS. Used in combination with the OpenNI launch package, this allows ROS to convert these streams into depth images, disparity images, and registered point clouds, which is exactly what we are looking for.

The next steps are to visualize our 3D printed objects through the OpenNI drivers in ROS and to think about a pipeline that we could construct. Ideally, the first step in that process would be taking these point clouds from ROS and running a comparison test between the ideal scan and a scan containing errors.
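As a very rough sketch of what that first comparison step might look like, suppose both the ideal model and the scanned object are available as point arrays; nothing here is implemented yet, and the nearest-neighbour approach and tolerance are just placeholders:

```python
# Hypothetical sketch of that comparison: measure, for every scanned
# point, the distance to its nearest neighbour in the ideal cloud, and
# flag points beyond a tolerance as potential print errors. Assumes both
# clouds are already aligned and stored as (N, 3) numpy arrays.
import numpy as np
from scipy.spatial import cKDTree

def flag_errors(ideal_points, scanned_points, tolerance=2.0):
    tree = cKDTree(ideal_points)
    distances, _ = tree.query(scanned_points)   # nearest-neighbour distances
    error_mask = distances > tolerance          # True where the scan deviates
    return error_mask, distances

# Usage with stand-in random data:
if __name__ == "__main__":
    ideal = np.random.rand(1000, 3) * 100.0
    scan = ideal + np.random.normal(scale=0.5, size=ideal.shape)
    mask, dists = flag_errors(ideal, scan, tolerance=2.0)
    print("flagged %d of %d points" % (mask.sum(), len(scan)))
```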

Thanks,

Adam Slattum

1) http://wiki.ros.org/openni_launch

2) http://wiki.ros.org/openni_camera

3) http://www.ros.org/