Student Post: The Beginning of the Petersen Family of Graphs’ Journey into Space … 3-Space

Graphs can be a funny thing. When you say “graph”, some people will think of a line or some other function in 2-D. Other people will think of marvelous planes and surfaces in 3-D. Fewer people will think of “standard” graph theory type graphs represented in 2-D. And even fewer people will think of those graph theory type graphs represented in 3-D. My goal is to change that.

Throughout the semester, I want to highlight the Y-Δ and Δ-Y transforms that allow you to go between graphs within the Petersen family of graphs. The names Y-Δ and Δ-Y are quite visual. In Y-Δ you are replacing 4 vertices that make a Y shape with 3 that make a triangle shape, and in Δ-Y you are replacing 3 vertices that make a triangle shape with 4 that make a Y shape. Now, when the Petersen family of graphs is represented in 2-D, these Y-Δ and Δ-Y transforms are not quite apparent. I am hoping that representing these graphs in 3-D will make these transforms a bit clearer.

Since this is my first exposure to OpenSCAD (and really my first exposure to modeling something in 3-D all my own), I’m going to start off the semester by getting the Petersen family of graphs in 3-space and then go from there. This week we have the famous Petersen graph and K6 being featured.

The Petersen Graph

Bringing the Petersen graph into 3 dimensions was quite exciting. The Petersen graph is classically portrayed as a big pentagon with a small star in the middle, like this:

Petersen model perspective top

As you might be able to guess, the Petersen graph is nonplanar. We can’t draw it on a plane without its edges overlapping. However, if we allow ourselves an extra dimension, we can represent the Petersen graph without any edges overlapping.

Just like in 2-D, there are different ways to represent the Petersen graph in 3-D. Above is the one I chose to start my OpenSCAD intro with. I created it using an edge code my mentor, Dr. Taalman, sent me. Here’s a picture of my code!

K6

Now, even though K6 doesn’t look as complicated as the Petersen graph (it only has 6 vertices!), it is much more complicated to graph in 3-D while avoiding edge crossings. The view from the top down doesn’t look like K6, but when you move ever so slightly, you can see edges that weren’t visible before.

With the Petersen graph, I only had to adjust the height of a few points. However, K6 is a completely different beast in that regard. Here is a spin around the graph! As you’ll be able to see, K6 in 3-D (or at least how I’ve represented it; there are probably cleaner ways) gets pretty intricate compared to the Petersen graph above!

 

Hopefully next time I will have the other 5 graphs from the Petersen family to show you all, now that I am much more familiar with OpenSCAD. I can tell that showing off the Y-Δ and Δ-Y transforms will be a challenge — but I’m excited!

Until next time,

Hannah Critchfield

Student Post: Simple Introduction To Rigidity

 

convexpolyhedron

Figure 1. Convex v. Non-Convex

The figure on the left is a convex polyhedron, while the figure on the right is non-convex. What makes the cube on the left convex is that any two points in it can be connected by a line segment that stays within the interior of the polyhedron. Looking at the figure on the right, we can find two points whose connecting segment passes outside the solid, making the polyhedron non-convex.

Let’s quickly define polyhedron. A polyhedron is a three-dimensional figure made up of flat polygonal faces joined by straight edges. Polyhedra can be classified by the number of faces they have. For instance, a 4-sided polyhedron is called a tetrahedron, 5-sided is a pentahedron, 8-sided is an octahedron, and so on.

What makes a polyhedron rigid, you ask?

A polyhedron is rigid if it cannot be bent into a different configuration while keeping all of its faces intact. Bending a polyhedron into a congruent configuration means continuously moving the figure so that the faces stay flat and the edges preserve their lengths. With a rigid polyhedron, any attempt to bend it would result in a tear or break. Take a cube, for example.

Figure 2. A Rigid Polyhedron (Cube)

If you had this cube in your hands, then you could easily find that bending or deforming it could not be done.

This brings us to the Cauchy Rigidity Theorem (1813), in which Cauchy states that any convex polyhedron is rigid. No convex polyhedron can be bent or deformed.

So, what about non-convex polyhedra? It turns out that there exist non-convex polyhedra that can in fact be bent into another configuration while every face keeps its shape. This is called flexibility. You have a polyhedron (which must be non-convex), you bend it, and you have a polyhedron that is congruent to the one you started with but arranged in a different way. A simple definition of flexibility is continuously movable: a flexible polyhedron can be moved in a continuous motion.

Raoul Bricard, a French mathematician, constructed the first flexible polyhedron in 1897, although it had self-intersections, meaning that some of its faces pass through each other. This construction is called the Bricard octahedron and consists of six vertices, twelve edges, and eight triangular faces (hence the name).

Figure 3. Bricard’s Flexible Octahedron With Self-Intersections

Fast forward about eight decades: in 1977 an American mathematician by the name of Robert Connelly constructed the first flexible polyhedron without self-intersections. Connelly thereby disproved Leonhard Euler, who had argued in what is known as the Rigidity Conjecture (1766) that all polyhedra are rigid.

The simplest known construction of a flexible polyhedron was built by the German mathematician Klaus Steffen. This construction consists of 9 vertices, 21 edges, and 14 triangular faces.
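These vertex, edge, and face counts can be sanity-checked against Euler’s formula V − E + F = 2, which holds for polyhedra that are topologically spheres (the examples in this post qualify). Here is a minimal check in Python, using exactly the counts quoted above:

```python
# Euler's formula V - E + F = 2, checked against the counts quoted above.
polyhedra = {
    "cube": (8, 12, 6),
    "Bricard octahedron": (6, 12, 8),
    "Steffen polyhedron": (9, 21, 14),
}

for name, (v, e, f) in polyhedra.items():
    print(f"{name}: V - E + F = {v} - {e} + {f} = {v - e + f}")
```

Each line prints 2 at the end, as Euler’s formula predicts.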

Figure 4. Steffen’s Flexible Polyhedron Without Self-Intersections

I have made Steffen’s polyhedron and you can as well! Follow this template along with the following instructions so that you may physically feel the flexibility. This model is meant to be made out of paper. Note that even if you made Steffen’s flexible polyhedron out of metal plates with the edges as hinges, it would remain flexible.

Now that we have covered rigid and flexible polyhedra, we can discuss the bellows conjecture. Does the volume inside a flexible polyhedron vary while we bend it? Idjad Sabitov proved in 1995 that no flexible polyhedron has variable volume as you move it: a flexible polyhedron has the same volume regardless of its state of flex. In other words, a flexible polyhedron would make a terrible bellows.

I have been told that “the bellows conjecture” is quite a misleading name.

 

Written By: Brittany Braswell

References:

Figure 1. liam.flookes.com/cs/geo/

Figure 2. By Tomruen at en.wikipedia [Public domain], via Wikimedia Commons

Figure 3. en.wikipedia.org/wiki/Bricard_octahedron

Figure 4. By Unknown at en.wikipedia [Public domain], via Wikimedia Commons

 

Student Post: The Quantum Fourier Transform

The Discrete Fourier Transform

The Fourier transform is a mathematical operation that uses integrals over a continuous wave to compute the components that make up that wave. The Discrete Fourier Transform (DFT), by contrast, is not continuous: it uses sums over real numbers representing points in time sampled from the wave. This makes it an extremely powerful tool for computers, which are discrete by nature.

This transformation is accomplished by computing the value of each signal component using every point in time chosen. Each signal value is a summation over all of the chosen time points, weighted by the signal’s value there. This is done in the form of circle rotation. Euler’s formula gives us a basis of imaginary polar coordinates, e^{ix} = \cos(x) + i\sin(x), describing a point on the unit circle in \mathbb{R}^2. This basis allows us to apply an exponential rotation to each of our points x_n, where k is the signal component being evaluated and n is the point in time. After the transformation is complete we are left with a “list” containing all of the signal components of the original wave.

Written as:

X_k = \LARGE {\sum_{n=0}^{N-1} x_n e^{-i2 \pi k \frac{n}{N}}}

When the DFT is implemented as an algorithm, it quickly becomes apparent that it runs in N^2 time, belonging to O(n^2): to compute each signal component we must add together all of the point-by-point products. Because of the discrete computation, this algorithm is implemented using arrays, where every index k of the output array is filled with a summation over the points stored in the input array. In the end, the output array contains the value of every signal component spanned by the chosen wave points. While O(n^2) isn’t unusable, the Fast Fourier Transform speeds this up by a full layer of time complexity, to O(n \log n).
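To make the O(n^2) structure concrete, here is a minimal Python sketch of the DFT exactly as the summation above describes it: one loop over output components k, with an inner sum over the time points n. The test signal (a pure cosine sampled at 8 points) is my own illustrative choice, not from the post.

```python
import cmath
import math

def dft(x):
    """Naive discrete Fourier transform: an O(N^2) double loop,
    X_k = sum_n x_n * exp(-i 2 pi k n / N)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

# A pure cosine at frequency 1, sampled at 8 points, shows up in
# components k = 1 and k = N - 1 (each with magnitude N/2 = 4).
signal = [math.cos(2 * math.pi * n / 8) for n in range(8)]
spectrum = dft(signal)
print([round(abs(X), 6) for X in spectrum])  # → [0.0, 4.0, 0.0, 0.0, 0.0, 0.0, 0.0, 4.0]
```

The nested loop structure is exactly where the N^2 operation count comes from.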

The Fast Fourier Transform

The Fast Fourier Transform, or FFT, is a proven way to transform a large amount of data, due to its time complexity of O(n \log n). This speed-up is possible because of the amount of redundant calculation performed by the Discrete Fourier Transform. The redundancy comes from the periodicity of the complex exponential: rotating halfway around the circle flips the sign, so e^{-i2\pi (k + N/2)/N} = -e^{-i2\pi k/N}. Due to this property, by splitting the data up into even and odd indices, we are able to reach the logarithmic factor in the time complexity. Using our original equation

X_k = \LARGE {\sum_{n=0}^{N-1} x_n e^{-i2 \pi k \frac{n}{N}}},

and denoting e^{-i2 \pi k \frac{n}{N}} as W^{nk}_{N}, we have

X_k = \LARGE {\sum_{n=0}^{N-1} x_n W^{nk}_{N}},

Now that we’ve condensed the formula, it becomes possible to split the input between even indices 2m and odd indices 2m+1, the two halves of the array, and perform the Fourier transform as:

X_k = \LARGE {\sum_{m=0}^{\frac{N}{2}-1} x_{2m} W^{2mk}_{N}} + \LARGE {\sum_{m=0}^{\frac{N}{2}-1} x_{2m+1} W^{(2m+1)k}_{N}}

Now that the formula has been established, we can use the sign-flip property of W, taking N = 8:

W^{4}_{8} = -W^{0}_{8}

W^{5}_{8} = -W^{1}_{8}

W^{6}_{8} = -W^{2}_{8}

and, in general, W^{m}_{8} = -W^{(m-4)}_{8} for m \geq 4.

We define the even summation of the FFT as G[m] and the odd summation as H[m]. Therefore we can see that:

X_0 = G[0] + W^{0}_{8}H[0]

X_1 = G[1] + W^{1}_{8}H[1]

X_2 = G[2] + W^{2}_{8}H[2]

X_3 = G[3] + W^{3}_{8}H[3]

X_4 = G[0] + W^{4}_{8}H[0] = G[0] - W^{0}_{8}H[0]

X_5 = G[1] + W^{5}_{8}H[1] = G[1] - W^{1}_{8}H[1]

and so on through X_7.

Given that the input length is a power of 2, we can perform the operation known as the decimation-in-time algorithm: recursively breaking every transform down into even- and odd-indexed inputs until only two-point DFTs remain. That is, we take \frac{N}{2}-point transforms, recursing through sizes \frac{N}{2}, \frac{N}{4}, \dots, 2 until hitting arrays of length 1. At each stage of the algorithm, \frac{N}{2} complex multiplications are needed to build the previous level. Because there are \log N stages, we only compute N summations of values at each stage, while recycling the complex computations.
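The decimation-in-time recursion can be written compactly. Below is a generic radix-2 sketch (my own, not code from the post): split into even and odd halves, recurse, then combine each pair of half-size outputs with one twiddle factor, reusing the sign-flip property W_N^{k+N/2} = -W_N^k.

```python
import cmath

def fft(x):
    """Radix-2 decimation-in-time FFT; len(x) must be a power of 2."""
    N = len(x)
    if N == 1:
        return list(x)
    G = fft(x[0::2])  # transform of the even-indexed inputs
    H = fft(x[1::2])  # transform of the odd-indexed inputs
    out = [0j] * N
    for k in range(N // 2):
        w = cmath.exp(-2j * cmath.pi * k / N)  # twiddle factor W_N^k
        out[k] = G[k] + w * H[k]
        out[k + N // 2] = G[k] - w * H[k]  # reuses w, since W_N^{k+N/2} = -W_N^k
    return out

# An impulse transforms to a flat spectrum of ones.
print(fft([1.0, 0.0, 0.0, 0.0]))
```

Each level does N/2 multiplications and there are log N levels, which is where O(n log n) comes from.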

Quantum Fourier Transform

Now that this basis has been firmly established and we have noted that the fastest time complexity of the Fourier transform using classical computing is O(n \log n), we can use quantum gate transformations to achieve a time complexity of O(\log n). Needless to say, this is an exponential speed-up. To define the transform we first note that X_k can also be written as a set of column matrices denoted X^{(s)}, where s is the index of the “array”. We then form the formula as

X^{(s)}_{j} = \LARGE {\sum_{k=0}^{N-1} P^{(s)}_{jk} X^{(s+1)}_{k}}

While this may seem complex, it is important to realize this is a re-representation of the previously defined FFT formula, written so as to define P^{(s)} as a set of column matrices. As we know from the previous post, qubits are defined as column vectors representing the probabilities of the system’s state. If we can rewrite our input array in that form, then we can apply our gate matrices in order to introduce a state of superposition.

Following this, the form of the basis P is proven under a few propositions; however, for the sake of this article we will assume the propositions are true while giving a brief explanation of each.

1. Each row of the matrices P^{(s)} contains only two nonzero entries

In the matrices P^{(s)} we see that the nonzero entries are located on the main diagonal and on a sub-diagonal, depending on the column index j. If we look a little closer, it becomes clear why. Putting this in terms of a double array: when i = j we traverse all indices on the main diagonal of each matrix, when i = j - 2 we receive all entries on the lower sub-diagonal, and when i = j + 2 we receive all entries on the upper sub-diagonal. Because we assumed proposition one, we can now say each matrix is defined by a piecewise function of the indices i and j.

2. The matrices P^{(s)} are unitary

Recall from the last post that a unitary matrix is a square matrix whose conjugate transpose is also its inverse. We need this to be true in order to evaluate output transformations the same way we do input transformations, in the time steps denoted by t. This is also what allows these quantum gates to be implemented on a quantum computer. However, we must first determine what these gates are. We do this by what is called QR decomposition. The decomposition states that any matrix belonging to P^{(s)} can be broken up into the product of a unitary matrix M^{(s)} and an upper-triangular matrix N^{(s)}. This brings us to our next proposition.

3. Any matrix in P^{(s)} can be built up from the matrices M^{(s)} and N^{(s)}

Because the full depth of the proofs is beyond the scope of this post, I am going to use proposition three to explain the mathematical composition of both M^{(s)} and N^{(s)}. First, by multiplying the matrices M and N, we can see that these sets of matrices build P^{(s)}, as

\LARGE {(M^{(s)} N^{(s)})_{jk} = M^{(s)}_{jk} N^{(s)}_{kk} = P^{(s)}_{jk}}

We then note that the matrix M^{(s)} can be written inductively using the identity I and the Hadamard matrix H, starting from the base case \LARGE {M^{(0)}_{2^n \times 2^n} = I^{\otimes (n-1)} \otimes H}

for every matrix of matrices M^{(s)} we can see this induction compose to

\LARGE {M^{(s)}_{2n \times 2n} = M^{(s-1)}_{2n-1 \times 2n-1} \otimes I \otimes H}

Thus we build every matrix belonging to M^{(s)}, completing the unitarity requirement of P^{(s)}.

Lastly, we form our matrices N^{(s)} as triangular matrices. Each is simply the product, in reverse order, of the rotation matrices R at time u = t - s + 1, denoted by

N^{(s)} = \LARGE {\prod_{t=s+1}^{n-1} R^{(s,t,u)}}

Finally, we are able to build the Quantum Fourier Transform from the decomposition defined above. We first prepare our quantum state as \left|\varphi \right> and apply every complex matrix in P^{(s)}, so that

\left|\varphi_0 \right> = P^0, P^1......P^{(n-1)} \left|\varphi_n \right>

Where \left|\varphi_n \right> contains every value of the input X^{(n)} in the form of ket complex matrices.

Thus we can say  \left|\varphi_0 \right> = \sum_{c_{n-1}, \dots, c_{0} \in \{0,1\}} X^0_{c_0 \dots c_{n-1}} \left|c_{n-1} \dots c_0 \right>

The algorithm takes a complex array X as input and uses our quantum gates in P^{(s)} to output a quantum state corresponding to the probabilities for the real values and amplitudes given by the original DFT. Thus, we prepare our quantum register of input as

\left|\varphi_n \right> = \sum_{k = 0}^{N-1} X_k \left |k \right >

We then move in reverse, with an outer loop from s = n-1 \rightarrow 0 and an inner loop from t = n-1 \rightarrow s+1.

At every step in t, we apply our unitary gate operation of N^{(s)} as

R^{(s,t, t -s +1)}

After applying the quantum gate operation to every matrix at step t, we apply the Hadamard matrix to the value of X^{(n)} at time step s.

After completing every matrix in the reverse time steps of s, we have completed the algorithm, yielding a quantum state \left|\varphi_0 \right> corresponding to the elements of the complex output matrix Y.

If we evaluate the time complexity of this transformation, we see that the loops take n(n+1)/2 steps, or equivalently O(n^2). Classically this matches the original DFT; however, every qubit operation acts on a superposition spanning 2^n bits. For that reason we can say the operation becomes O(\log n) in the size of the data. This incredible exponential speed-up is the driving factor of the algorithm and the reason it must be defined in terms of qubits and quantum space.
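As a concrete (classical) illustration of the transform this section builds, the QFT on n qubits is just the unitary DFT matrix of size 2^n. The NumPy sketch below (my own, not from the post) constructs that dense matrix and checks the unitarity property from proposition 2; it simulates the transform rather than decomposing it into gates, so it shows the matrix, not the quantum speed-up.

```python
import numpy as np

def qft_matrix(n_qubits):
    """Dense QFT matrix on n qubits: F[j, k] = w^(j*k) / sqrt(N), w = exp(2*pi*i/N)."""
    N = 2 ** n_qubits
    j, k = np.meshgrid(np.arange(N), np.arange(N))
    return np.exp(2j * np.pi * j * k / N) / np.sqrt(N)

F = qft_matrix(3)
# Unitarity: the conjugate transpose is the inverse, so F† F = I.
print(np.allclose(F.conj().T @ F, np.eye(8)))  # → True
```

Storing the full 2^n × 2^n matrix classically is exactly what the gate decomposition avoids.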

Written By: Mitchell Mesecher

 

Student Post: Adding Buttons!

This week and the previous week I was tasked with adding buttons to our user interface. Previously, pressing the keys A, P, or C selected a certain “tool” that you could use. This week I attempted to create buttons so that the user could click a button instead of pressing a keyboard key. I was able to create buttons; however, I unintentionally overwrote the frame for the Python interpreter, so the buttons are on the wrong screen. I believe this will be a quick fix: all I need to do is create a new content pane and add it to the frame with the arcball, because currently the buttons are being added to the content pane of the Python interpreter. A picture of the issue is included below. The reason we are making this change is that as we add more features, the use of keyboard keys would become too extensive. That is all for this week.

BlogPost2

Until next week,

Maddie Brower

Student Post: The Rabbit Hole of Graph Theory Terminology (Plus some Petersen Graphs)

My first ever exposure to any sort of graph theory was through this paper by Flapan, Mattman, Mellor, Naimi, and Nukkuni. If you’ve already clicked on the link, you might see how someone could get overwhelmed with terminology early on in the paper. Since I had no prior knowledge of graph theory, that person was me. The goal of this post is to provide some casual definitions and some examples showing off certain properties. (Note: this will not cover all definitions in the paper, and certainly not all definitions in graph theory, just some ones that I wish I knew before reading the paper/some ones I think need extra attention.)

[Note: all images are either from the paper linked above, or are from the wikipedia article linked in the definition title preceding the image.]

Definitions and Examples

Planar Graph: A graph is planar if it can be drawn on the plane without any of its edges crossing. (Note: only one way of drawing the graph without the edges crossing has to exist for it to be planar.)

K_n: This is the complete graph on n vertices; every pair of vertices is joined by an edge, so it has n(n-1)/2 edges. Below is a picture showing off K_n for n=1…12 along with the number of edges each graph has.
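The edge count n(n-1)/2 is easy to verify by listing the edges of K_n directly; here is a small Python sketch (my own, not from the paper):

```python
from itertools import combinations

def complete_graph_edges(n):
    """All edges of K_n on vertices 0..n-1: one edge per unordered pair."""
    return list(combinations(range(n), 2))

for n in range(1, 7):
    edges = complete_graph_edges(n)
    print(n, len(edges), n * (n - 1) // 2)  # the two counts always agree
```

For n = 6 this gives the 15 edges of K6, matching the picture.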

Minor: Let G be a graph. A minor of G is created by deleting edges and vertices, and contracting zero or more edges. (Note: when you contract an edge, the two vertices at the ends of that edge merge.) Below is a diagram of a graph G, a minor H of G, and how the minor was formed. The dotted line represents the deletion of an edge, while the grey line represents the contraction of an edge.

Y-Δ Transform/Move: Let G be a graph with a vertex v that has exactly three neighboring vertices. A Y-Δ transform of G at v is achieved by removing v from G and adding edges between each pair of the neighboring vertices.

Δ-Y Transform/Move: Let H be a graph that has a triangle of vertices, uvw. A Δ-Y transform of H at uvw is achieved by removing the edges uv, vw, and uw from H and adding a new vertex, z, and new edges uz, vz, and wz.

So, in Y-Δ transforms you are deleting a vertex and in Δ-Y transforms you are adding a vertex. These transforms will be useful for understanding an upcoming definition.
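The two transforms can be sketched in a few lines of Python, representing a graph as a set of frozenset edges (the function names and representation are my own, not from the paper). Note that this simple-graph representation silently merges any multi-edges a Y-Δ move would create.

```python
def y_delta(edges, v):
    """Y-Δ: remove degree-3 vertex v and join its three neighbors pairwise."""
    nbrs = sorted({w for e in edges if v in e for w in e if w != v})
    assert len(nbrs) == 3, "v must have exactly three neighbors"
    a, b, c = nbrs
    rest = {e for e in edges if v not in e}
    return rest | {frozenset((a, b)), frozenset((b, c)), frozenset((a, c))}

def delta_y(edges, u, v, w, z):
    """Δ-Y: remove triangle uvw and join a new vertex z to u, v, and w."""
    tri = {frozenset((u, v)), frozenset((v, w)), frozenset((u, w))}
    assert tri <= edges, "uvw must form a triangle"
    return (edges - tri) | {frozenset((u, z)), frozenset((v, z)), frozenset((w, z))}

# K4 has a Y at vertex 3 (neighbors 0, 1, 2); the Y-Δ move at 3 leaves
# the triangle on {0, 1, 2}, and a Δ-Y move there brings the Y back.
K4 = {frozenset(e) for e in [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]}
triangle = y_delta(K4, 3)
print(sorted(tuple(sorted(e)) for e in triangle))  # → [(0, 1), (0, 2), (1, 2)]
```

The K4 example shows the vertex-count change described above: Y-Δ deletes a vertex, Δ-Y adds one.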

Petersen Graph: This is a graph with 10 vertices and 15 edges. Click on the link and scroll to read about some neat properties the Petersen graph has. If I listed them all here, this definition would be the length of the blog post.

Petersen Family of Graphs: This is a set of seven undirected graphs (graphs in which the edges don’t have direction) that includes the Petersen graph, the complete graph K6, and 5 other graphs that are formed by Y-Δ and Δ-Y transforms. Below are two images: one from the paper mentioned at the beginning of this post and one from the Wikipedia page linked in this definition. Both show the Petersen family of graphs, but the Wikipedia picture shows which graphs can be formed from one another by performing Y-Δ and Δ-Y transforms.

Petersen family connections

Upcoming

If you clicked on the paper mentioned at the beginning of the blog post, you may have noticed that the definitions featured here only cover the first three pages. These definitions and examples were chosen because they are going to be heavily used in what I want to do next. Past posts by Ben Flint have shown off Petersen graphs represented in 3 dimensions. My goal for the semester is to represent K6, K3,3,1, the Petersen graph, and the other 4 graphs in the Petersen family in 3 dimensions. A stretch goal/side project is to create some sort of program that will perform those Y-Δ and Δ-Y transforms on the Petersen family of graphs (and potentially others as well).

 

Until next time,

Hannah Critchfield

A Student Post: An Interactive Application for (Inversive) Geometry Of Circles

Hello,

For the past several months I have been working on an interactive interface for constructions in the inversive geometry of circles. Currently the working features are adding points, dragging points, and selecting three points to draw a disk. For the past few weeks I was tasked with altering the code in order to implement a construction graph for the objects being drawn. Once completed, this alteration will update any object whenever something it is composed of changes. For example, a disk is composed of 3 points; with the construction graph, when one of those points is moved the disk updates accordingly.

 

 

Currently, in order to use features such as adding a point or drawing a disk, I am using keyboard commands (P: point editor mode, C: draw a disk, A: move the arcball). As the interface acquires more features, it will become inconvenient to remember all of the keyboard keys, so I am currently attempting to add buttons to the screen that can be clicked instead of a keyboard key being pressed.

Until next week,

Madelaine Brower

A Student Post: The Basic Mathematics of Quantum Computing

Outline

This blog post is intended to lay out the basic principles of the mathematics behind quantum computing. This write-up is directed towards upper-level undergraduates, as well as graduate students, with backgrounds in mathematics or computer science.

This article will cover the simulation and representation of quantum physics using specialized computer bits and linear algebra. These specialized bits are called qubits. As the nature of these bits was covered in the last post, here we will describe the basic operations that can be performed on them. These operations are the beginning of understanding how quantum exponential speed-up is possible.

Along with the qubit, we will cover the mathematics behind quantum gate manipulation. Using quantum gates we allow our system to enter into a state of superposition. By applying one of these gates to a qubit we hence alter the probabilistic complexities of that qubit, allowing our qubit system to be derived with varying probabilities for its position.

An Overview

As discussed in the last post, the nature of superposition can be modeled with qubits. A qubit can be interpreted as a matrix. Each qubit represents a probabilistic quantum state, where each entry of the matrix can be viewed as the probability of the system being in the corresponding state. As a reminder, it is important to create a basis in which a qubit can occupy multiple states at once, as we are trying to emulate the quantum theory of superposition (refer to the first post). From a qubit’s static state, we can perform operations to transform it. Qubit transformations are completed by applying what is called a quantum gate. These quantum gates take the form of square matrices, and they must also be what is called unitary. In order to transform a qubit into a subsequent state we apply the quantum gate with matrix multiplication. The entries of the new matrix after performing the multiplication represent the probabilities of the system residing in each position of the new state. We will see examples of this further on.

Before reaching the mathematical theory, there are a few notations that must be defined. One of the most important is the ket, written generically as \left|\varphi\right>. This is the shorthand notation for representing the probabilistic state of a qubit. Using this notation, it becomes easier to describe a quantum gate multiplication on a particular qubit without drawing out the matrices themselves, as these can get quite large. At this point it is important to note that all qubits take the form of column vectors. In computer science terms, this can be seen as a one-dimensional array rotated to be viewed vertically. Let us now recognize the simplest of qubits,  \left|0\right> and  \left|1\right>, where  \left|0\right> = \left(\begin{array}{c} 1\\ 0\end{array}\right)  and  \left|1\right> = \left(\begin{array}{c} 0\\ 1\end{array}\right).

Let us recognize that qubit  \left|0\right> holds a 100% chance of being in state 0 and qubit  \left|1\right>  holds a 100% chance to be in state 1.

We can denote any qubit as \left|\varphi\right> = [C_0,C_1,\dots,C_{2^n-1}]^T, containing 2^n rows. Notice we use C to denote each entry of the qubit. This is because qubit state probabilities are represented as complex numbers. However, we will learn the ropes first using real numbers.

Throughout this article we use M for a matrix whose entries denote the probability of a movement between positions (more simply, a quantum gate) and X for a qubit matrix representing the static probabilities of the system’s current position. When dealing with quantum computations these matrix entries will be complex numbers, a_{ij} \in \mathbb{C}. Lastly, we want to recognize the Hadamard matrix, which is the quantum gate used to transform a ket into superposition. The Hadamard matrix is denoted by H, where H = \left(\begin{array}{cc} \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}}\\ \frac{1}{\sqrt{2}} & \frac{-1}{\sqrt{2}} \end{array}\right).
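As a quick numerical illustration of the notation above (a NumPy sketch of my own): applying the Hadamard gate H to the basis qubit \left|0\right> yields an equal superposition.

```python
import numpy as np

ket0 = np.array([1.0, 0.0])            # |0>: certain to be in state 0
H = np.array([[1.0, 1.0],
              [1.0, -1.0]]) / np.sqrt(2)  # the Hadamard gate

psi = H @ ket0            # one gate application: an equal superposition
print(np.abs(psi) ** 2)   # measurement probabilities → [0.5 0.5]
```

Each amplitude is 1/\sqrt{2}, so measuring gives 0 or 1 with probability 1/2 each.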

Principle Operations

First, it is important to note the sizes of the matrices M and X. M is always a square matrix of size N \times N, where N is the number of possible positions. Qubit X is always a column vector denoting the state of the system. Next, we must cover the matrix multiplication M \times X, where the product is equal to the state of the system after one movement. This product can be denoted Y, which is itself a qubit. The number of movements, k, is equal to the number of times M is multiplied by itself. Therefore the column vector, or qubit, Y, whose entries contain the probability of the system being in each position after k movements, is equal to M^k X = Y.

Lets look at an example.

Movement between position t and t + 1 described by matrices M \times X = Y

\left(\Large \begin{array}{ccc} \frac{1}{8} & \frac{3}{8} & \frac{1}{2}\\ 0 & \frac{9}{16} & \frac{7}{16}\\ \frac{7}{8} & \frac{1}{16} & \frac{1}{16} \end{array}\right) \times \left(\Large \begin{array}{ccc} \frac{1}{2} \\ \frac{1}{4} \\ \frac{1}{4} \end{array}\right) = \left(\Large \begin{array}{ccc} \frac{9}{32} \\ \frac{1}{4} \\ \frac{30}{64} \end{array}\right) .

M describes the probability of movement between the three positions; for example, the probability of moving from position 1 \rightarrow 3 is \frac{7}{8}. Qubit X describes the static probabilities of the positions the system currently occupies, while qubit Y describes those same static probabilities after a movement. Notice that every row and every column of M sums to 1, as do the entries of X and Y.
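The worked example above can be checked with exact rational arithmetic (a small Python sketch; the numbers are exactly those in the example):

```python
from fractions import Fraction

M = [[Fraction(1, 8), Fraction(3, 8), Fraction(1, 2)],
     [Fraction(0), Fraction(9, 16), Fraction(7, 16)],
     [Fraction(7, 8), Fraction(1, 16), Fraction(1, 16)]]
X = [Fraction(1, 2), Fraction(1, 4), Fraction(1, 4)]

# One movement: Y = M X, computed entry by entry.
Y = [sum(M[i][j] * X[j] for j in range(3)) for i in range(3)]
print(Y)       # 9/32, 1/4, and 30/64 = 15/32, matching the worked example
print(sum(Y))  # total probability is still 1
```

Using Fraction avoids any floating-point rounding, so the agreement with the example is exact.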

Let us quickly touch upon the notation for a complex quantum gate such as H, which is the form all quantum gates must take. When working over the set \mathbb{C} we will denote M by U, for unitary, where in each column the sum of the squared moduli of the entries is equal to one. This allows us to fill our matrix with complex numbers, correctly representing the qubit transformation into superposition when applying the matrix H.

By definition, the conjugate transpose of a unitary matrix is also its inverse; we write the conjugate transpose, or adjoint, of U as U^{\dagger}. The importance of this is that by applying the adjoint of U to our state at position t + k, we effectively recover the qubit at the state of t - k. This lets us undo transformations and analyze results in terms of the input, substantially cutting down on the level of system decoherence.

Let us now practice an example using a qubit that only contains complex probabilities. The importance of this example is to notice how we evaluate a qubit that contains indices of complex numbers.

The crucial step is finding the norm of the qubit: take the inner product of the conjugate transpose of the vector with the vector itself, and then the square root. By dividing the original qubit by this value and taking the squared modulus of each entry, we receive the probability of being in each position at time t \pm k.

First we again notice \left| 1 \right> = \left(\begin{array}{cc} 0 \\ 1 \end{array}\right), where there is a zero probability that the qubit is in state 0. Now lets consider a qubit that has been moved through a few quantum states and has different probabilities correlating to the possibilities of being in states 0 and 1. We can define such a qubit as, \left|\varphi\right> = \left(\begin{array}{cc} 2+3i \\ 4+2i\end{array}\right) = S

Here we evaluate the norm and then re-apply this to our original qubit.

\| S \| = \sqrt{\left(\begin{array}{cc} 2-3i & 4-2i \end{array}\right) \times \left(\begin{array}{c} 2+3i \\ 4+2i\end{array}\right)} = \sqrt{13 + 20} = \sqrt{33}

Now we divide S by \sqrt{33} and take the squared modulus of each entry:

\frac{\Large S}{\Large \sqrt{33}} = \left(\Large \begin{array}{c} \frac{2+3i}{\sqrt{33}} \\ \frac{4+2i}{\sqrt{33}}\end{array}\right), \qquad \left| \frac{2+3i}{\sqrt{33}} \right|^2 = \frac{13}{33}, \quad \left| \frac{4+2i}{\sqrt{33}} \right|^2 = \frac{20}{33}

From the transformed matrix we can see the probability of qubit S being in state \left|0 \right> is \frac{13}{33} and the probability of state \left|1 \right> is \frac{20}{33}. This is consistent, since the two probabilities add to one.
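The same normalization can be reproduced with NumPy (a sketch of my own; note that np.vdot conjugates its first argument, which supplies the conjugate transpose needed above):

```python
import numpy as np

S = np.array([2 + 3j, 4 + 2j])
norm = np.sqrt(np.vdot(S, S).real)  # vdot conjugates its first argument
# |2+3i|^2 + |4+2i|^2 = 13 + 20 = 33, so norm is sqrt(33)

probs = np.abs(S / norm) ** 2       # squared moduli of the normalized entries
print(probs, probs.sum())           # 13/33 and 20/33; total probability 1
```

Without the conjugation, the inner product would come out complex, which is exactly the pitfall the conjugate transpose avoids.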

The Tensor Product

Finally, we must address the operation used for quantum entanglement. Quantum entanglement is the process of intertwining two separable states, with their individual complex probabilities, into one state, or qubit, with a new set of static probabilities. This quantum operation may be the most important principle, as it underlies all the matrix manipulation previously described. In addition, we can even combine qubits that have different numbers of rows, something ordinary matrix multiplication does not allow.

The principal notation for joining two qubits is the tensor product. It multiplies every entry of one matrix by the whole of the other, in order to combine the probabilities of both states at once. The tensor product is denoted by \otimes and is the main operation applied to qubits in order to combine states.

Let us once again look at the two most basic qubits, \left|0\right> and \left|1\right>. As you recall, these qubits are denoted as two-entry column matrices with probabilities 1 and 0. However, let us suppose we wanted to combine our two quantum states into one.

Let's evaluate:

\left|0\right> \otimes \left|1\right> = \left(\begin{array}{c} 1\\ 0\end{array}\right) \otimes  \left(\begin{array}{c} 0\\ 1\end{array}\right) = \left|01\right>  = \left(\begin{array}{c} 0\\ 1\\0\\0\end{array}\right)

Here the system has a 100% probability of being in state \left|01\right>.

While this may seem counterintuitive given the complexity involved, it allows for quick combination of qubits, especially when applying quantum gates. We can build a column-vector qubit of any size this way. It is important to note that the column vector representing a system of qubits grows exponentially with the number of bits allocated: m qubits give a 2^m \times 1 matrix.
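The tensor product above can be sketched with NumPy's Kronecker product, `np.kron` (my choice of tool, not the post's); it reproduces both the \left|01\right> example and the 2^m growth just described.

```python
# |0> tensor |1> via the Kronecker product, and the 2**m growth in size.
import numpy as np

ket0 = np.array([1, 0])
ket1 = np.array([0, 1])

ket01 = np.kron(ket0, ket1)
print(ket01)          # -> [0 1 0 0], i.e. the state |01>

# Three qubits: the state vector already has 2**3 = 8 entries.
ket010 = np.kron(np.kron(ket0, ket1), ket0)
print(len(ket010))    # -> 8
```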

For instance, take the state: \frac{1}{\sqrt{3}} \left(\begin{array}{c} 1\\ 0\\1\\1\end{array}\right)

Normally this would just be an entry-by-entry scalar multiplication, but we can also express the state in ket notation.

Firstly, notice that the entries corresponding to \left|00\right>, \left|10\right>, and \left|11\right> are all nonzero, so the system can be found in any of those three states.

Written in ket notation, this is \LARGE{\frac{\left|00\right> + \left|10\right> + \left|11\right>}{\sqrt{3}}}

Now we can see that all three of these basis states have an equal amplitude of \frac{1}{\sqrt{3}}, and therefore an equal probability of \frac{1}{3}.
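As a quick numerical sketch (again using NumPy as an illustrative tool), the state \frac{1}{\sqrt{3}}(1, 0, 1, 1)^T really does assign probability \frac{1}{3} to each of its three nonzero basis states:

```python
# The state (|00> + |10> + |11>)/sqrt(3), checked numerically.
import numpy as np

psi = np.array([1, 0, 1, 1]) / np.sqrt(3)

probs = np.abs(psi)**2
print(probs)         # -> [1/3, 0, 1/3, 1/3]
print(probs.sum())   # -> 1.0
```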

From here we apply our gate operations in the same way: using the tensor product we can apply our unitary matrices of state-movement probabilities, such as H, to the column vector of a qubit. U\left|\varphi\right> represents the system at time t + 1; if U = H, then the qubit system at time t + 1 is in superposition.
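The gate application U\left|\varphi\right> is just a matrix-vector product. A minimal sketch of the U = H case mentioned above, written with NumPy (my assumption; the post names the Hadamard gate H but not a library):

```python
# Applying the Hadamard gate H to |0> puts the qubit into superposition.
import numpy as np

H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)   # the Hadamard gate
ket0 = np.array([1.0, 0.0])

psi = H @ ket0                # the system at time t + 1
print(np.abs(psi)**2)         # -> [0.5 0.5]: equal probability of 0 and 1
```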

This final concept of the tensor product is relatively hard to grasp without a practical example. For that reason, the next post will center on implementing what is known as Deutsch's Algorithm and on combining quantum inputs.

Written By: Mitchell Mesecher

References: 

An Introduction to Quantum Computing, Noson S. Yanofsky

Student Post: The Basic Principles of Quantum Computing

What is Quantum Computing?

The nature behind the implementation of quantum computers comes from the attempt to simulate and use principles of quantum mechanics to enhance computational run-time exponentially. Theoretically, certain tasks given to a classical Turing machine can be completed with exponentially fewer computations on a quantum computer. In order to accomplish this, quantum systems adopt the principle of superposition, or more simply, being in every position at once. In quantum mechanics this is usually thought of in the realm of photons or waves, but with computers we can apply this theory to the most basic of measurements, a bit. A bit has only two positions, a one or a zero, but by taking advantage of the concept of superposition we can create a bit in both positions at the same time. We define this as a qubit. These qubits, along with high-level mathematics and statistics, allow us to use a small finite space to “hold” many theoretical data values at once, thus maximizing the number of computations that can be performed.

The Constraints on Implementation

Having addressed a few of the high-level abstractions involved in the implementation of a quantum system, we will now look at some of the physical and theoretical constraints involved. One of the fundamental concepts of quantum physics is that of observation. You may recall hearing of a thought experiment called Schrödinger's cat. Using the principle of superposition, Schrödinger argued that an unobserved cat may be in both states, alive and dead, at once, until observed by an individual. Once observation occurs, the quantum state of superposition collapses to the classical state in which it was perceived. This principle is denoted as decoherence and also applies to our qubits. Once the value of a qubit is observed, it collapses to the classical state in which it was accessed. Consequently, there is a limit to a quantum system's ability to remain in a set of mixed states. It has been shown that pure-state symmetry breaking imposes a time constraint on how long a system as a whole can stay quantum coherent, and miniaturization of a system compounds this time constraint.

After covering the theoretical aspects involved, we can now define the concrete system requirements needed: a well-prepared, dynamic qubit register for stability when dealing with decoherence; a set of gate matrices to perform the necessary operations on the state of a qubit; the longest decoherence time possible given the size of the system; and the ability to take individual measurements of a state defined by superposition.

Applications of Quantum Computers

One of the most famous applications of quantum computers involves cryptography and encryption, namely the ability of a computer to factor incredibly large numbers into their primes. The difficulty of this problem is the basis of the world's asymmetric encryption protocols, such as RSA. If suitable quantum systems become commercially available, new encryption protocols would have to be developed and integrated almost overnight. Currently, the best known factorization algorithms for a classical binary computer run in super-polynomial time. A polynomial time complexity means an algorithm's run-time is a predictable polynomial in the size of its input N; a super-polynomial complexity means the run-time grows faster than any polynomial in the size of the input and may be exponential. The associated complexity classes are denoted NP (non-deterministic polynomial time) and P (polynomial time). The most famous algorithm for factoring numbers on a quantum computer is Shor's algorithm. It reduces factoring to finding the period of a function built from modular arithmetic; finding that period classically still takes super-polynomial time, but when the period-finding step is implemented on a quantum computer, the whole algorithm runs in polynomial time.
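The classical skeleton of Shor's algorithm can be sketched in a few lines. In the sketch below the period is found by brute force, which is exactly the step a quantum computer replaces; the values N = 15 and a = 7, and the helper name `find_period`, are my illustrative choices, not from the post.

```python
# Classical skeleton of Shor's algorithm: given the period r of a**x mod N,
# modular arithmetic and gcd recover the factors of N.
from math import gcd

def find_period(a, N):
    """Smallest r > 0 with a**r = 1 (mod N), found by brute force
    (this is the slow step that the quantum computer speeds up)."""
    x, r = a % N, 1
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

N, a = 15, 7
r = find_period(a, N)          # -> 4, since 7**4 = 2401 = 1 (mod 15)
p = gcd(a**(r // 2) - 1, N)    # -> 3
q = gcd(a**(r // 2) + 1, N)    # -> 5
print(N, "=", p, "*", q)       # prints "15 = 3 * 5"
```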

A more relatable example of an application of quantum computing is searching through an unordered list of items. Searching takes up a large percentage of all computer computation, so being able to reduce its run time for any sized input would be innovative in reducing the total amount of time spent on computational operations. The time it takes to find a particular item in an unordered list is evaluated in the worst-case scenario: you must search the entire list to find the item you're looking for, so the time complexity is the size of the input, N. Using a quantum system, we can do better. Grover's algorithm takes advantage of the probabilistic principles of quantum computing to reduce this running time to O(\sqrt{N}). This is not an exponential reduction in searching time; Grover's algorithm offers a quadratic speedup, and in the worst case it still takes time proportional to \sqrt{N}. However, this quadratic speedup is not insignificant. Using the aspect of superposition, we can allow every item to be both the item we're looking for and an item we're not. From there we can use our qubit equations and quantum gate manipulations, such as the tensor product of the input with the function being used, to infer the answer from the output. Each step also increases the probability of the item we want relative to the items we know are not our specific value, which is what lowers the running time. It is important to note that Grover's algorithm is, in part, a functional expansion of Deutsch's algorithm.
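A minimal simulation makes the amplitude boosting concrete. The sketch below simulates Grover's algorithm with NumPy for N = 4 items (2 qubits), where a single iteration happens to find the marked item with probability 1; in general roughly \sqrt{N} iterations are needed, which is the quadratic speedup described above. The index `marked` is an arbitrary choice for illustration.

```python
# Grover's algorithm for N = 4: one oracle + diffusion step suffices.
import numpy as np

N = 4
marked = 2                          # index of the item we are searching for

psi = np.ones(N) / np.sqrt(N)       # uniform superposition over all items

# Oracle: flip the sign of the marked item's amplitude.
oracle = np.eye(N)
oracle[marked, marked] = -1

# Diffusion operator: reflect all amplitudes about their mean.
s = np.ones((N, 1)) / np.sqrt(N)
diffusion = 2 * (s @ s.T) - np.eye(N)

psi = diffusion @ (oracle @ psi)    # one Grover iteration
print(np.abs(psi)**2)               # probability ~1 at index `marked`
```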

Written By: Mitchell Mesecher

 

References:

Difficulties in the Implementations of Quantum Computers, Abhilash Ponnath


Koebe-Lib Initial Release

The initial release of Koebe-Lib, our library for representing constructions in the inversive geometry of circles on the 2-sphere, has just been uploaded to GitHub. Currently, the library contains one application, SphericalSketch, which allows a user to define and interact with geometrical objects on the 2-sphere using the Python language.

A screenshot of the interface of SphericalSketch is shown above.

Student Post: Replicating Real Life

Another awesome thing about 3D printing is that you can replicate things that already exist without having to rebuild them from scratch. Using 3D scanning technology it is possible to digitize existing objects and then bring them back into the physical world with a 3D printer. This is one of the things I am attempting to do this semester with a Duke Dog statuette. For the JMU Centennial 10 years ago, a number of statuettes were created (see here), one of which now goes out to various events throughout the year. This statuette is getting quite old and beat up, and would be very expensive to recreate with traditional means since the original molds are gone. So using 3D scanning and printing, we are hoping to create something that may be able to replace this Duke Dog.

At the beginning of the semester we did a few 3D scans with a Structure Sensor scanner on an iPad. These scans turned out surprisingly well and I have been slowly trying to clean them up and make the model look alright (but I’m not an artist so…). The original statuette is quite large and certainly too big to print in one piece on any printer we have, and even Shapeways cannot print it at full size. So the goal is to come up with a good way to cut the model into parts so that each part can be printed at larger scale and then assembled into a full-scale model that is larger than a single print volume.

My initial cut method did not use flat planes, and the pieces did not join very well along the seams, so there are quite large gaps. However, it did work for creating a model larger than a single print volume. It also allows the parts to be printed in different orientations that need less support material and don't mar the surface of the model as much as a single-piece print would. The issues of supports and poor surface quality would not be a big deal if we were using a service like Shapeways, but for prototyping purposes they are more important to consider.

Here are pictures of the first pieced-together print that I did from my initial cut method. I am working on finding a better cutting method that will not create such bad seams and will allow the parts to fit together better. Having flat interface surfaces between parts will also, I hope, allow me to create alignment pins so that orientation during assembly is not an issue. Being able to cleanly hollow the parts is another goal, so that we can reduce the cost of printing with a service like Shapeways.

Until next time… — Quincy Mast