Image: non-physically-based light transport through bubble cluster

This is an image of non-physically-based light transport through a bubble cluster, generated as part of REU student Sarah Ciresi’s project. We show a physically based one below as well, but this one just looked too cool not to post. This image was produced after we added circular-arc rendering to a 2D raytracer, but before we added the actual simulation of light through a thin soapy film. We hope you’ll allow us some artistic license even though we’re mucking around with the physics:

[Image: the non-physically-based rendering of light through the bubble cluster]

The bubble cluster was generated using our Koebe-Lib library for inversive geometry that we have developed over the past two months. The light transport was simulated with a modified version of Tantalum, a nice GPU-based 2D raytracer designed to simulate physically modeled light transport through a scene.

Here’s a rendering that more accurately models the physics of light passing through a thin film. Not quite as visually stunning, but still pretty cool:

[Image: the physically based thin-film rendering]

The light source is modeled as an incandescent light. We used 1.0 for the index of refraction of air and 1.33 for the index of refraction of soapy water. We are not currently simulating any color shift on reflected rays, but may add this in the future. Our walls are also modeled as infinitesimally thin, since the thickness of a bubble wall at this resolution is much smaller than a pixel. The scene was rendered using 12 million simulated photons.
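For the curious, the refraction step at each air/film interface is just Snell's law with the two indices above. Here is a minimal 2D sketch of that step in Python (illustrative only, and not Tantalum's actual shader code; the function and constant names are ours):

    import math

    N_AIR, N_SOAP = 1.0, 1.33

    def refract(direction, normal, n_from, n_to):
        """Refract a unit 2D direction vector across an interface.
        `normal` is the unit interface normal pointing back toward the incoming ray.
        Returns the refracted unit direction, or None on total internal reflection."""
        dx, dy = direction
        nx, ny = normal
        cos_i = -(dx * nx + dy * ny)          # cosine of the incidence angle
        eta = n_from / n_to
        sin2_t = eta * eta * (1.0 - cos_i * cos_i)
        if sin2_t > 1.0:
            return None                        # total internal reflection
        cos_t = math.sqrt(1.0 - sin2_t)
        k = eta * cos_i - cos_t
        return (eta * dx + k * nx, eta * dy + k * ny)

    # A ray hitting the film at 45 degrees bends toward the normal as it enters the soap:
    print(refract((math.sqrt(0.5), -math.sqrt(0.5)), (0.0, 1.0), N_AIR, N_SOAP))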

Stay tuned for an initial release of Koebe-Lib’s alpha version, which we expect will occur within the next month.

On Representing Circles and Disks

JMU is hosting an REU program this summer in computer science. Sarah Ciresi, a rising senior at Georgetown, is working on a project to develop a low-cost 3D scanner for capturing soap bubble configurations. You can follow her progress here.

To aid her work, we've been developing a library for inversive geometry on S^2 which is (heavily) based on the structure formulated in Sherif Ghali's excellent book Introduction to Geometric Computing. The goal is to make our library public at the end of the summer.

Ghali covers a lot in his book, including the development of some spherical geometry (though his “circles” are really only the “great circles”), and an implementation of the oriented projective space T^3 as formulated in Jorge Stolfi’s PhD thesis and subsequent book. This oriented projective space is really the space of rays in \mathbb{R}^4.

In this post we assume some familiarity with homogeneous coordinates of points in \mathbb{R}^3 and their representation in \mathbb{R}^4. Readers unfamiliar with these should check out “Introduction to Geometric Computing” for a very well thought out introduction. (Perhaps we’ll post a video on this sometime.)

Representing disks on the sphere

For our purposes, we needed more spherical geometry than Ghali's introduction provides. Specifically, we work with circles on S^2 which are not necessarily great circles, and we also need our circles to be oriented, meaning that there is a given direction along the circle that is treated as counter-clockwise. Another way to picture this is that instead of oriented circles, we are dealing with disks: an oriented circle together with the region lying to its left (where "left" is defined with respect to the circle's counter-clockwise orientation).

Now, notice that any circle C on the sphere S^2 can be represented by the plane ax + by + cz + d = 0 whose intersection with S^2 is C. This, in turn, is the intersection of the hyperplane ax + by + cz + dw = 0 in \mathbb{R}^4 with the 3D unit sphere in the w=1 level subspace. (If it seems like we just jumped into 4 dimensions out of nowhere, don't worry, it will make sense in a minute.) Thus, we give coordinates to a circle C on S^2 by specifying the 4-tuple of coefficients of the hyperplane (a, b, c, d). Notice that multiplying all the coefficients by any fixed \lambda \neq 0 does not change the circle represented, since ax + by + cz + dw = 0 = \lambda a x + \lambda b y + \lambda c z + \lambda d w.

To obtain disks from such a representation, we adopt the convention of Stolfi and treat (a, b, c, d) and (\lambda a, \lambda b, \lambda c, \lambda d) as the same disk if and only if \lambda > 0. Our convention is to identify the disk incident to C with area less than 2\pi with those 4-tuples where d > 0 and those with area greater than 2\pi with those 4-tuples where d < 0. (d = 0 is a great circle, and then the direction of the 3-vector (a, b, c) is used to identify the disk.)
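In code, the convention above just says that two coefficient 4-tuples name the same disk exactly when one is a positive scalar multiple of the other. A minimal sketch (a hypothetical class, not Koebe-Lib's actual API):

    import math

    class DiskS2:
        """A disk on S^2 given by hyperplane coefficients (a, b, c, d),
        with (a, b, c) assumed nonzero."""

        def __init__(self, a, b, c, d):
            self.a, self.b, self.c, self.d = float(a), float(b), float(c), float(d)

        def normalized(self):
            # Rescale by the positive factor 1/L so that (a, b, c) is a unit vector;
            # positive scaling preserves the disk (and its orientation).
            L = math.sqrt(self.a ** 2 + self.b ** 2 + self.c ** 2)
            return DiskS2(self.a / L, self.b / L, self.c / L, self.d / L)

        def same_disk_as(self, other, tol=1e-9):
            s, o = self.normalized(), other.normalized()
            return (abs(s.a - o.a) < tol and abs(s.b - o.b) < tol and
                    abs(s.c - o.c) < tol and abs(s.d - o.d) < tol)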

One of the nice things about this representation is that several computations become trivial. In the following, let (a, b, c, d) represent a disk D with boundary circle C (a code sketch follows the list). For instance:

  • The Euclidean center of C (i.e. the center in Euclidean 3-space, not on the sphere S^2) is given by the homogeneous coordinates (-a, -b, -c, (a^2 + b^2 + c^2) / d), (the 3D point (\frac{-ad}{L^2}, \frac{-bd}{L^2}, \frac{-cd}{L^2}) where L=\sqrt{a^2 + b^2 + c^2}).
  • The spherical center of D is (a / L, b / L, c / L) where L is defined as above.
  • The conical cap of C, which is the apex of the cone tangent to S^2 at C, is given by the homogeneous coordinates (-a, -b, -c, d). If d \neq 0, then this corresponds to the 3D point (-a / d, -b / d, -c / d), and if d = 0, it corresponds to the point at infinity in the direction of the vector (-a, -b, -c). (For the uninitiated, this is the power of homogeneous coordinates: we can represent points at infinity as if they were any other point. But that is a matter for another post.)
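Here is a small sketch of these formulas in code (hypothetical helper functions, not Koebe-Lib's API; we assume the coefficients have been rescaled so that a^2 + b^2 + c^2 = 1, and that d \neq 0 where we divide by it):

    def euclidean_center(a, b, c, d):
        # Center of C in Euclidean 3-space (coefficients normalized so L = 1).
        return (-a * d, -b * d, -c * d)

    def spherical_center(a, b, c, d):
        # Spherical center of the disk D, per the formula above (again with L = 1).
        return (a, b, c)

    def conical_cap(a, b, c, d):
        # Apex of the cone tangent to S^2 at C; d = 0 would instead give a point at infinity.
        return (-a / d, -b / d, -c / d)

    # Example: the circle cut out of S^2 by the plane z = 1/2, i.e. (a, b, c, d) = (0, 0, 1, -0.5):
    print(euclidean_center(0.0, 0.0, 1.0, -0.5))  # center of C at height z = 1/2
    print(conical_cap(0.0, 0.0, 1.0, -0.5))       # cone apex at height z = 2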

The Inversive Distance

One of the main functions we work with in inversive geometry is the inversive distance between two circles. The usual way of defining this for circles on the sphere is:

\langle C_{1}, C_{2} \rangle = \frac{-\cos \sphericalangle ( p_{1},p_{2} ) + \cos(r_{1}) \cos(r_{2})}{\sin(r_{1}) \sin(r_{2})}

where p_{i} is the spherical center of C_{i} and r_{i} is its spherical radius (for i = 1, 2), and \sphericalangle ( p_{1},p_{2} ) is the spherical angle (i.e. the angular distance) between p_1 and p_2.

Given our representation of circles as the 4-tuples (a_1, b_1, c_1, d_1) and (a_2, b_2, c_2, d_2), the inversive distance becomes

\langle C_{1}, C_{2} \rangle = \frac{-(a_1 a_2 + b_1 b_2 + c_1 c_2 - d_1 d_2)}{\sqrt{a_1^2 + b_1^2 + c_1^2 - d_1^2} \sqrt{a_2^2 + b_2^2 + c_2^2 - d_2^2}}

which is simply the cosine of the angle between the vectors (a_1, b_1, c_1, d_1) and (a_2, b_2, c_2, d_2) under the (3,1) Minkowski inner product.

In other words, circles on the sphere endowed with the inversive distance are a geometric picture of vectors in space-time endowed with the usual Minkowski inner product!
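To make the translation concrete, here is a small helper (again a hypothetical sketch, not Koebe-Lib's actual API) that evaluates the inversive distance directly from two coefficient 4-tuples:

    import math

    def inversive_distance(disk1, disk2):
        # Each argument is a coefficient 4-tuple (a, b, c, d) as described above.
        a1, b1, c1, d1 = disk1
        a2, b2, c2, d2 = disk2
        # The (3,1) Minkowski inner product: the spatial parts add, the d-parts subtract.
        prod = a1 * a2 + b1 * b2 + c1 * c2 - d1 * d2
        norm1 = math.sqrt(a1 * a1 + b1 * b1 + c1 * c1 - d1 * d1)
        norm2 = math.sqrt(a2 * a2 + b2 * b2 + c2 * c2 - d2 * d2)
        return -prod / (norm1 * norm2)

    # Two orthogonal great circles (the planes x = 0 and y = 0) have inversive distance 0:
    print(inversive_distance((1, 0, 0, 0), (0, 1, 0, 0)))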

Student Post: Sphere Packing Honors Thesis Proposal

Dear Reader,

I’ve decided to do a senior capstone project involving the material I’ve presented in the previous posts. Therefore, over the previous two weeks I’ve been writing up my honors thesis proposal on sphere packing.

My proposal can be found at the following link if you wish to see it:

Xiang Chen’s Honors Proposal

This week I will be exploring a more advanced tool called COMSOL, to which I have been provided access courtesy of Dr. Marcelo Dias. Among many other features, this tool provides a very comprehensive set of capabilities for analyzing the stresses internal to an object subjected to forces. I will be figuring out how to load my sphere packing graphs into COMSOL and how to parse the stress output it produces.

Thanks for reading,

Xiang

Student Post: Automated Detection of 3D Print Failures Thesis Proposal

Greetings,

For the past two weeks, I have been drafting and submitting my honors thesis proposal in order to formalize what I have been blogging about this semester! The proposal is titled “Automated Detection of 3D Printer Failures” and can be found at the link provided below. Please peruse the proposal for details and to see what is coming for the next year.

Thank you,

Adam S.

adam-slattum-proposal

New Paper: Rigidity of Circle Polyhedra in the 2-Sphere and of Hyperideal Polyhedra in Hyperbolic 3-Space

We just posted a new paper to the arXiv, "Rigidity of Circle Polyhedra in the 2-Sphere and of Hyperideal Polyhedra in Hyperbolic 3-Space". This is joint work with Philip L. Bowers at FSU and Kevin Pratt, who worked on the project over the summer as a visiting undergraduate research assistant here at JMU.

This paper grew out of the curious and surprising result of Jiming Ma and Jean-Marc Schlenker, in which they construct an inversive distance circle packing on the sphere that is not globally rigid (meaning that the same inversive distance data is realized by patterns of circles that are not Möbius equivalent to one another).

Suppose you are given a triangulation K of a topological sphere and a real number weight w(ij) on each edge ij of K. The inversive distance circle packing problem asks, is there a set of circles C on the sphere and a bijection P : V(K) \rightarrow C such that for every edge ij of K, the inversive distance between circles P(i) and P(j) is equal to the weight w(ij)? We call C an inversive distance circle packing realizing (K, w). There are really two important questions: (1) given such a K and edge labeling w, does there exist a packing? and (2) if one does exist, is it unique?
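As a purely illustrative sketch (the helper names are ours), checking that a candidate pattern of circles realizes the data (K, w) is just one inversive-distance evaluation per edge, using an inversive_distance function like the one sketched in the circles-and-disks post above:

    def realizes(edges, w, P, tol=1e-9):
        """edges: the edges ij of K as vertex pairs (i, j);
        w: dict mapping each edge (i, j) to its weight w(ij);
        P: dict mapping each vertex of K to a circle given as a 4-tuple (a, b, c, d)."""
        return all(abs(inversive_distance(P[i], P[j]) - w[(i, j)]) <= tol
                   for (i, j) in edges)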

The Ma-Schlenker result was especially surprising because the answer to question (2) has been yes, it's unique, in virtually every setting in which circle packings have been studied. (As of this post's writing, question (1) remains very much open for general inversive distance circle packings.) Ma and Schlenker start with a Euclidean twisted octahedron and use some powerful mathematical tools (Pogorelov maps and infinite de Sitter space) to obtain their result. Recently, we provided some constructions of Ma-Schlenker-style octahedra in the intrinsic inversive geometry of the sphere in another paper that may interest the reader. We can now construct many examples of families of circle-polyhedra for which uniqueness fails.

The Ma-Schlenker construction raises the question, “When is global rigidity of an inversive distance circle pattern on the sphere guaranteed?” This is the subject of our paper.

Enter Cauchy

The questions being asked of circle patterns have an analog in the study of Euclidean polyhedra dating back to the ancient Greeks. One might ask: if we know the shapes of all the faces of a polyhedron and which faces are attached together along which edges (though not at what dihedral angles), does that data determine the polyhedron uniquely? In general, the answer is no: take a cube and replace its top face with a pyramid made of four equilateral triangles to obtain a house-like structure; now, invert the pyramid. However, in 1813, Cauchy proved his celebrated Rigidity Theorem, which states that if we further require the polyhedron to be convex, then the specified faces and their combinatorics determine it uniquely. This theorem (and some of its later proofs; Cauchy's original argument had several serious bugs) is certainly one of Erdős's "proofs from the Book" (and in fact one proof of the theorem appears in Aigner and Ziegler's book Proofs from THE BOOK).

To Our Paper

Inspired by Ma and Schlenker's use of Euclidean polyhedra to prove their result, we set out to recreate Cauchy's proof, except in the case of inversive distance circle packings. To do this, we first generalized packings to circle-polyhedra, which are really the natural way of talking about gluing circle-polygons together along "edges" to form patterns of circles on the sphere. In order to do this, we began to work with circle space, a space that is a partial dual to real projective 3-space, in which circles are points, coaxial families of circles are lines, and bundles of circles are planes. Along with this space comes a notion of convexity for circle-polyhedra, and our main result is an analog of Cauchy's: if you specify (up to Möbius transformations) a collection of circle-polygons (to serve as the faces of a polyhedron) and how those polygons should be combined (by identifying edges), then if there exists a convex circle-polyhedron satisfying your specifications, it is the unique convex circle-polyhedron that does so.

Our proof of this theorem follows the same general outline as Cauchy's original proof, though with some really lovely forays into hyperbolic geometry. (A quick preview: if a bunch of circles intersect some other circle orthogonally, we can take the parts of each of the intersecting circles lying in the interior of that circle to be hyperbolic lines in the Poincaré disk model of hyperbolic 2-space. From there we build some nice hyperbolic polygons with some interesting properties, and derive a lemma about certain hyperbolic robot arms constructed out of revolute joints with the occasional piston thrown in. It's all very fun.)

Check out the full details here. 

Student Post: FreeCAD FEM Module Results

Dear Reader,

In accordance with my timeline, I have explored the capabilities of FreeCAD's FEM module, along with other auxiliary modules, in order to find a reliable software package that makes it easy to apply forces to a 3D object and analyze the resulting stresses.

I explored the basic functionality on a trivial yet essential structure as shown in the following image:

Here were the displacement and stress results on individual nodes of the mesh:

In this test, I used auto-generated nodes for the mesh. FreeCAD also appears to support more customized methods for defining structural nodes:

However, I'm still in the process of figuring this feature out. I'm also working on designing a specific 3D object that would be simple to stress-test given the specific set of equipment available to us.

Stay tuned for updates.

Thanks for reading,

-Xiang

Student Post: Success with ROS and OpenNI, Python Scripting with OpenCV

Greetings,

This week I have finally found success with our implementation of ROS and OpenNI on our Linux Mint machine in the Forensics Lab. I ended up resolving our global frame issue and have received images from the Asus Xtion Pro Live in addition to point clouds!

When running roslaunch rviz rviz with openni.launch engaged, using the following command:

[Screenshot: the launch command]

I was prompted for the global frame within rviz, and I set the frame to 'camera_link' so that data visualizations could be captured from the camera.

Here is an example of the global frame prior to the configuration:

[Screenshot: the global frame before configuration]

Here is an example of the global frame after the configuration with 'camera_link':

[Screenshot: the global frame after configuration]

After this, I added a 'camera/depth/points' display to the Displays tab in rviz, which is visualized using PointCloud2. Here is an example:

[Screenshot: adding the 'camera/depth/points' PointCloud2 display in rviz]

Selecting the PointCloud2 option added it to the main Displays tab in rviz, which finally allowed me to view point clouds!
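As a side note, the same 'camera/depth/points' stream can be consumed from a script rather than rviz. Here is a minimal rospy subscriber sketch (it assumes the default openni.launch topic names, and I have not wired it into our pipeline yet):

    #!/usr/bin/env python
    import rospy
    from sensor_msgs.msg import PointCloud2
    import sensor_msgs.point_cloud2 as pc2

    def callback(cloud):
        # Pull out the (x, y, z) fields of the valid points as a quick sanity check.
        points = list(pc2.read_points(cloud, field_names=("x", "y", "z"), skip_nans=True))
        rospy.loginfo("Received a cloud with %d valid points", len(points))

    if __name__ == "__main__":
        rospy.init_node("xtion_cloud_listener")
        rospy.Subscriber("/camera/depth/points", PointCloud2, callback)
        rospy.spin()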

Here is a screenshot of the first point cloud that I was able to get. Keep in mind that the camera is not calibrated to a static location.

[Screenshot: the first captured point cloud]

You can make out the desktop computers in the background and maybe office chairs. Here is the configuration with the visualization in rviz:

[Screenshot: the rviz configuration alongside the point cloud visualization]

If you look closely at the image, you can see me in the point cloud, along with something that looks like my shadow. The cloud is actually 3-dimensional, though: my body blocks the camera's view of the wall behind me, and that occluded region creates the shadow-like figure.

In addition to this, I was able to change the configuration around to produce what looks like a heat-mapped image, with the points drawn as squares. Here is the picture:

[Screenshot: the heat-mapped point cloud]

As you can see, that is me again in the point cloud with my hands raised over my head. An interesting thing to note is that the style of the point cloud can be changed so that the points are rendered as squares, circles, etc.

Finally, I've been writing some Python code this week with OpenCV compiled with OpenNI support in order to receive raw camera data. However, I've hit a roadblock, as OpenCV 3 has changed some of the OpenNI functions and constants around.
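Here is the rough shape of the script I am aiming for (just a sketch for now; it assumes OpenCV was built with OpenNI2 support, and the exact capture constants are part of what I am still sorting out between OpenCV versions):

    import cv2

    # Open the Xtion through OpenCV's OpenNI2 backend (requires OpenCV built with OpenNI2).
    capture = cv2.VideoCapture(cv2.CAP_OPENNI2)
    if not capture.isOpened():
        raise RuntimeError("Could not open the Xtion via OpenNI2")

    while True:
        if not capture.grab():
            continue
        # Retrieve the depth map and the color image from the same grab.
        ok_depth, depth = capture.retrieve(flag=cv2.CAP_OPENNI_DEPTH_MAP)
        ok_color, color = capture.retrieve(flag=cv2.CAP_OPENNI_BGR_IMAGE)
        if ok_depth:
            cv2.imshow("depth", depth)    # 16-bit depth, in millimetres
        if ok_color:
            cv2.imshow("color", color)
        if cv2.waitKey(30) & 0xFF == 27:  # press Esc to quit
            break

    capture.release()
    cv2.destroyAllWindows()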

So for next week, be on the lookout for some code to interact with the Asus Xtion Pro Live.

Thanks for reading,

Adam

 

Student Post: Project Timeline for Structural Refinement to Counter Application of Forces

Dear Reader,

The sphere packing software currently generates 3D internal structural meshes for hollow 3D objects. However, our current usage of sphere packing does not provide a major improvement over simple manual generation of uniform truss structures. The true advantage of sphere packing is the ability to selectively manipulate local regions of the internal structure. Of course, a human engineer could manually design a complex structure with desirable properties in specific parts of a 3D object, but this is time-consuming. My current goal is to create software that can automate or aid a human in this process.

In order to take advantage of sphere packing's capability, I am moving the project to the next phase. I've been doing research on stress in mechanical engineering to gain a better understanding of how structures are designed to counter or make use of stress. Unfortunately, as far as my novice eyes can see, such structures generally do not involve complex 3D graphs. There is also a large amount of information on how to calculate the stresses on an object, but much less on how to augment a structure to counter stress, aside from varying the materials used.

Regardless, I am going to perform an experiment to determine if I can structurally augment 3D meshes to resist stress forces, specifically compression, tension, and shear.

[Image: compression, tension, and shear stresses]

In the first phase, the inputs were the STL file of a 3D object and the sphere packing parameters, and the output was a 3D graph that served as the internal mesh of the object. In the second phase, the input will be the 3D graph together with data describing the forces applied to each vertex of the graph, and the output will be an altered graph that resists those forces better.

To obtain the forces applied to each vertex of the graph, I will convert the graph to solid form as an STL file and use a finite element analysis tool that lets a user selectively apply forces to the entire object. I am thinking of using an FEM module for FreeCAD, although I have yet to explore the capabilities of the module.

[Image: finite element analysis in FreeCAD]

Next, I will add functionality to the sphere packing software so that the per-vertex force data can be used to refine the graph. The specific refinement scheme still needs to be worked out, but I do have an idea that I am going to implement. I will explain in greater detail once the implementation is complete, but the general idea is based on cell growth in biology, where in my case the cells are spheres (a rough sketch follows the image below). This rests on the assumption that a dense graph can withstand more stress than a sparse graph. Think of osteoporosis, for example:

[Image: bone structure weakened by osteoporosis]
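To make the idea a bit more concrete, here is a rough sketch of the kind of densification pass I have in mind (purely illustrative; the real scheme will live in the sphere packing software and will almost certainly differ):

    def refine(vertices, edges, stress, threshold):
        """One illustrative densification pass: split every edge whose endpoints both
        exceed the stress threshold, so that highly stressed regions become denser.

        vertices: list of (x, y, z) points; edges: list of (i, j) index pairs;
        stress: per-vertex scalar values taken from the FEM tool."""
        new_vertices = list(vertices)
        new_edges = []
        for i, j in edges:
            if stress[i] > threshold and stress[j] > threshold:
                # Insert a midpoint vertex and replace the edge with two shorter ones.
                mid = tuple((p + q) / 2.0 for p, q in zip(vertices[i], vertices[j]))
                k = len(new_vertices)
                new_vertices.append(mid)
                new_edges += [(i, k), (k, j)]
            else:
                new_edges.append((i, j))
        return new_vertices, new_edges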

Once the refined graph for the original object has been generated, I will conduct physical tests to determine its structural limits. I will conduct the same tests on the unrefined graph, the full solid object, and a hollow shell of the object as controls.

To keep myself on track, I present my timeline:

March 27th, 2017 – Find and utilize suitable software for simulation of forces on 3D objects to calculate forces on specific nodes of a graph.

April 7th, 2017 – Finish implementation of refinement functionality in sphere packing.

April 14th, 2017 – Finish designing experimental structures and print as objects for testing.

April 21st, 2017 – Complete physical stress tests on objects.

April 24th, 2017 – Write up report on experiment.

I shall keep you posted on progress.

Thanks for reading,

-Xiang

Student Post: ROS and OpenNI Update

Greetings,

This past week I have worked on installing ROS (Robot Operating System) on our desktop in the Ars Geometrica Lab in ISAT. With the ROS installation complete and usable, I began focusing on getting the OpenNI_launch and OpenNI_camera nodes working together to produce visuals from the Asus Xtion Pro Live.

This was indeed more difficult than I anticipated, due to all of the dependency and compatibility issues involved with the drivers for stereo cameras like the Xtion Pro Live. At the beginning of the week, I was not able to get the OpenNI_launch node to recognize the Xtion Pro Live through the USB connection to the desktop; however, I was able to fix this issue with dependency management, and now the launch node recognizes that the Xtion is plugged in and can retrieve data from it.

The only catch is that two strange warnings are produced, which I looked into; they concern two camera-calibration YAML configuration files that are not located in the correct spot prior to opening the video stream.

These are the errors that I was receiving:

[Screenshot: the calibration warnings]

I felt like these might have something to do with why the camera stream was not visible in ROS, so I kept digging and found that I needed to intrinsically calibrate the camera to see its stream.

[Screenshot: the command used to open the image stream]

After running this command, an image viewer popped up and I was able to see a colorful image through the Xtion Pro Live. This led me to believe that I needed to calibrate the camera in order to visualize it through ROS's rviz, which is its 3D visualizer.

A quick reminder: the goal right now is to be able to visualize point clouds in rviz using the Xtion Pro Live. This should be simple once the camera is recognized within the rviz environment.

Currently, within rviz, I am running into an issue with the global fixed frame. It has an error associated with it that I was not able to figure out in the time spent this week; however, I do have an idea of what could be causing it. It seems that when you run the following command:

[Screenshot: the launch command]

rviz is initialized in ROS as well, so something in the configuration that openni.launch passes to rviz could be triggering the error. With this in mind, the goal for next week is to get rviz and OpenNI_launch working together in order to begin visualizing point clouds within rviz.

Thanks,

Adam Slattum

1) ROS RVIZ

2) ROS Image View

Student Post: ROS and Asus Xtion Pro Live

Greetings,

This week I will be exploring the use of the Asus Xtion Pro Live within the ROS environment. We have acquired a desktop machine for the Ars Geometrica Lab in ISAT on which we have installed Linux Mint to use with ROS.

A little bit about ROS: the Robot Operating System (ROS) is a collection of software frameworks for robot software development. Within this collection is an OpenNI camera driver for depth and RGB cameras, including the Microsoft Kinect and the ASUS Xtion Pro and Pro Live. The driver publishes raw depth, RGB, and IR image streams within ROS. Used in combination with the OpenNI launch package, this allows ROS to convert these streams into depth images, disparity images, and registered point clouds, which is exactly what we are looking for.

The next steps are to visualize our 3D-printed objects through the OpenNI drivers in ROS and to think about a pipeline that we could construct. Ideally, the first step in that pipeline would be to take these point clouds from ROS and run a comparison test between ideal and error images.
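As a very rough sketch of what that comparison step might look like (hypothetical helper, placeholder tolerance, and it assumes we can render an "ideal" depth image of the print to compare against):

    import numpy as np

    def depth_mismatch(ideal, observed, tolerance_mm=3.0):
        """Fraction of pixels whose captured depth deviates from the ideal render by
        more than the tolerance; a large fraction would flag a likely print failure.

        Both inputs are depth images (2D arrays, in millimetres) of the same size;
        zero-depth pixels (no reading) are ignored."""
        ideal = np.asarray(ideal, dtype=np.float32)
        observed = np.asarray(observed, dtype=np.float32)
        valid = (ideal > 0) & (observed > 0)
        diff = np.abs(ideal[valid] - observed[valid]) > tolerance_mm
        return float(diff.mean()) if diff.size else 0.0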

Thanks,

Adam Slattum

1) http://wiki.ros.org/openni_launch

2) http://wiki.ros.org/openni_camera

3) http://www.ros.org/