Student Post: Visualizing Bipartite and Complete Graphs in 3D Space

Bipartite Graphs

When thinking about how to represent bipartite graphs in 3D space, the challenge arose: how do we take full advantage of this extra dimension? The models we started with worked fine as they were, but we weren’t fully satisfied. As seen below, our original model for the graph K44 appeared in space as two copies of K22 stacked on top of one another and rotated 45 degrees. We knew we could do better.

Original graph for K44

What followed was a simple yet elegant solution to the problem. For a graph KNM, the N points are plotted along the Z axis while the M points are plotted along a circle around the origin. The result can be seen below in our new graph for K33.

K33 using the Line and Circle model

3D printing the graph for K33 only solidifies the simplicity of the design; however, the true beauty of the design is how it can be adapted to a general solution for all values of KNM.

The Line and Circle Model

K5-5 using the Line and Circle model

The line and circle model is perfect for relatively small values of KNM, as it clearly displays the relationship every point has with every other. However, for increasingly large graphs, it becomes clear that points plotted along the line are allocated significantly less space than those along the circle, so the points may begin to appear unevenly distributed as the size increases. This asymmetry can actually be useful for displaying graphs KNM where N and M are different values, such as K6-10.

The Semicircle Model

After developing and printing graphs for the Line and Circle model, we wanted to iterate on its design. Our original idea involved plotting the points of K33 on the vertices of an octahedron, but the model soon generalized to plotting the points of the graph along opposing, perpendicular semicircles, which distributes the vertices evenly across the model. The semicircle model is more suitable for visualizing larger graphs, but can feel rather sparse for smaller values such as K33. Graphs of KNM where N and M are not equal can also appear lopsided, as the semicircle model’s visual symmetry relies on the two sets of points being equal in size.

K55 Using the semicircle model
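The geometry of the semicircle model is easy to state in code. Here is a rough Python sketch of one possible vertex placement (our actual models were built in OpenSCAD, and the exact radius and arc orientation here are illustrative assumptions, not the real code):

```python
import math

def semicircle_vertices(n, m, radius=1.0):
    # Place the two vertex sets of K_{n,m} evenly along two opposing,
    # perpendicular semicircles: one arcing up in the xz-plane, the
    # other arcing down in the yz-plane.
    top = [(radius * math.cos(math.pi * (i + 1) / (n + 1)), 0.0,
            radius * math.sin(math.pi * (i + 1) / (n + 1)))
           for i in range(n)]
    bottom = [(0.0, radius * math.cos(math.pi * (j + 1) / (m + 1)),
               -radius * math.sin(math.pi * (j + 1) / (m + 1)))
              for j in range(m)]
    return top, bottom
```

Because both sets are spaced by the same angular step, neither side is allocated less room than the other, which is exactly the property the line and circle model lacked.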

Streamlining the Code

After developing our two models for bipartite graphs, our next objective was to make creating graphs in OpenSCAD easier and less time consuming. Up to this point, we had been manually plotting points in the code. We already had code to connect lines between points, so most of the hard work was done, but manually plotting the points for each (increasingly large) new graph we wanted to print was still very time consuming. It would be entirely impractical to plot something as large as K20-20, for instance, without more advanced code.

K20-20 Using the semicircle model

I developed the code to the point where, for any two values of KNM, the program automatically creates the corresponding graph in either model and scales it appropriately.

Below is the code for plotting a graph using the line and circle model. This was easily the most complicated part of the process, as it required splitting the points into three separate variables: the variable k plots all of the line vertices, while v and z each plot half of the circle vertices.
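The OpenSCAD itself appears as an image in the original post; as a rough Python restatement of the same coordinate scheme (the spacing and radius parameters are made-up for illustration, and the k/v/z split is collapsed into two lists):

```python
import math

def line_and_circle(n, m, spacing=1.0, radius=1.0):
    # K_{n,m}: n vertices spaced along the z-axis, centered on the origin,
    # and m vertices evenly spaced on a circle in the xy-plane.
    line = [(0.0, 0.0, spacing * (i - (n - 1) / 2)) for i in range(n)]
    circle = [(radius * math.cos(2 * math.pi * j / m),
               radius * math.sin(2 * math.pi * j / m), 0.0)
              for j in range(m)]
    # Complete bipartite edge set: every line vertex to every circle vertex.
    edges = [(p, q) for p in line for q in circle]
    return line, circle, edges
```

Because the construction is parameterized by n and m, a graph of any size can be generated without plotting a single point by hand.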

Because these two models for bipartite graphs could be generalized so easily, we were able to write code that could plot a graph of any size in seconds. Having reached a good stopping place for bipartite graphs, we began looking towards modeling complete graphs.

Complete Graphs

It immediately became apparent that generalizing a model for complete graphs would be quite unfeasible, as applying any real symmetry to them results in intersections. We still wished to generalize them in some way, and while the first model we arrived at had some merits, it was not quite satisfactory. We called this model the Dome Projection of a complete graph.

Dome Projection

https://en.wikipedia.org/wiki/Book_embedding

The idea behind the Dome Projection model is to plot the vertices of the graph on a circle, then connect them by projecting their edges on different ‘dome levels’. The code works by starting from a single vertex and connecting it to every other point. It then increases the dome level, moves to the next vertex, and connects it to every vertex it is not yet connected to, proceeding like this until every vertex is connected to every other vertex.

Dome Projection for K5
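Read literally, the level bookkeeping might look like the following Python sketch (hypothetical; the real code is OpenSCAD, and in the printed models some levels can coincide, for instance when edges lie along the base circle):

```python
def dome_edges(n):
    # Visit vertices in order around the circle. Vertex i is connected to
    # every later vertex at the current dome level, then the level rises
    # before moving on to vertex i + 1.
    return [(i, j, level)
            for level, i in enumerate(range(n - 1))
            for j in range(i + 1, n)]
```

Each triple (i, j, level) records one edge of K_n together with the dome level on which it is drawn.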

One interesting thing about the Dome Projection is its relation to another way of representing graphs, the Book Embedding method.

A Book Embedding of K5

Book embedding works by connecting the vertices of a graph on different planes, or ‘sheets’: all possible points are connected to each other without intersections on one sheet, then the process moves to the next sheet, proceeding until every vertex is connected. We quickly discovered that the number of sheets in a book embedding corresponds to the number of levels in a dome projection. As seen above, the dome projection for K5 has 3 levels while the book embedding has 3 sheets.

The dome projection serves as an interesting new way to represent graphs, and it certainly looks nicer on a desk than a book embedding; however, it was still not quite what we were looking for from a 3D model of complete graphs.

A Better Representation

Even though it’s unlikely a generalized form for the complete graphs exists in 3D space, we sought to find something close, at least for relatively small graphs. Starting with K6, the first model we constructed involved plotting pairs of points at different Z values and rotating them around the Z axis. The result appeared as a tetrahedron with two points inside of it. This tetrahedral visualization for K6 attempts to preserve as much symmetry as possible, in the hope that doing so could provide a method to visualize larger KN graphs, such as K8, with as much order as possible. The visualization works well for K6 but begins falling apart at K8. Though no graphs larger than K8 have been modeled this way yet, it’s safe to say the trend would continue.

Plotting K8 with the tetrahedral visualization

Our second visualization for K6 takes things in a different direction. Instead of generalizing a model for the graph, it aims to generalize an algorithm for determining the model for each individual graph. The code plots the points of the graph randomly on a sphere, then calculates the distances between every pair of edges, repeating this for a given number of trials. The placement with the largest distances between edges is kept as the new best model for the graph. It takes millions of trials to get a reasonable result; however, that result is striking.

Two Visualizations for K6
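A minimal Python sketch of the random search described above (the real version ran millions of trials; here the gap between two edges is the true segment-to-segment distance, computed only for edges that share no vertex, and the function names are my own):

```python
import itertools, math, random

def rand_sphere_point():
    # Uniform random point on the unit sphere (normalized Gaussian vector).
    v = [random.gauss(0, 1) for _ in range(3)]
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def seg_dist(p1, q1, p2, q2):
    # Minimum distance between segments p1-q1 and p2-q2 (clamped closest points).
    def sub(a, b): return [a[i] - b[i] for i in range(3)]
    def dot(a, b): return sum(a[i] * b[i] for i in range(3))
    d1, d2, r = sub(q1, p1), sub(q2, p2), sub(p1, p2)
    a, e = dot(d1, d1), dot(d2, d2)
    b, c, f = dot(d1, d2), dot(d1, r), dot(d2, r)
    denom = a * e - b * b
    s = max(0.0, min(1.0, (b * f - c * e) / denom)) if denom > 1e-12 else 0.0
    t = max(0.0, min(1.0, (b * s + f) / e))
    s = max(0.0, min(1.0, (b * t - c) / a))
    c1 = [p1[i] + s * d1[i] for i in range(3)]
    c2 = [p2[i] + t * d2[i] for i in range(3)]
    return math.dist(c1, c2)

def score(points):
    # Smallest gap between any two edges of K_n that share no vertex.
    edges = list(itertools.combinations(range(len(points)), 2))
    return min(seg_dist(points[a], points[b], points[c], points[d])
               for (a, b), (c, d) in itertools.combinations(edges, 2)
               if len({a, b, c, d}) == 4)

def best_embedding(n=6, trials=2000, seed=0):
    # Keep the random placement whose closest pair of disjoint edges
    # is as far apart as possible.
    random.seed(seed)
    best_score, best_pts = -1.0, None
    for _ in range(trials):
        pts = [rand_sphere_point() for _ in range(n)]
        s = score(pts)
        if s > best_score:
            best_score, best_pts = s, pts
    return best_score, best_pts
```

Maximizing the minimum edge-to-edge gap is what pushes the random models toward the evenly spread, symmetric-looking shapes in the photo above.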

Order and Disorder in 3D Graphs

On the left is the graph for K6 we started with when this project began. It was created by stacking two graphs for K3 on top of each other and rotating them so that their edges don’t intersect. On the right is the graph for K6 that our code produced. After a semester and a half, we ended up exactly where we started. There’s some beauty in this, however: somehow, a model created through order and a model created through chaos ended up looking strikingly similar. This could just be coincidence, or it could be a result of the absurd poetry that is math.

Looking to the Future

As we move forward in this project, we plan to continue finding new models for larger complete graphs. We’re excited to see what these graphs end up looking like, as everything after K6 is new territory at this point.

Rocket Flame Trench 3D Printing Project


Last semester and this semester, I have been working with Dr. Taalman to produce a 3D printed model of the rocket flame trench located at the launchpad of NASA’s Wallops Flight Facility on the Eastern Shore of Virginia. When a rocket is launched it generates a great deal of heat, and the trench directs the exhaust and heat away from the rocket.

Dr. Lubert, who is currently researching how to minimize the vibration caused by the rocket launch noise, gave us several blueprints and pictures of the trench. This gave us an idea of the shape of the trench.

Our first step was to see whether Fusion 360 had the tools we needed to design the model. We tested our idea for how to build the model in Fusion 360 and printed an object with a shape similar to the trench’s.

After confirming that we could design the model in Fusion 360, we scanned the blueprints and uploaded them to the Fusion 360 cloud. Since the blueprints contain proprietary information, we removed all the measurements from them in case Fusion 360 was hacked and the blueprints stolen. (The blueprints will be available at the presentation.)

Once the blueprints were in Fusion 360, we sketched the shape of the trench by tracing over them. Since we do not have blueprints for the cross sections of the trench or for the top, we will need to use available pictures to help determine the shape of those sections.

At this point, the side section of the trench has been sketched; the top and front sections still need to be sketched. Once the model has been sketched and the different planes connected to make a 3D model, it will be ready to print. The printed model will give Dr. Lubert a better understanding of the shape of the trench and of how a rocket’s exhaust flows through and exits it.

Source: https://geekhaus.com/mathmakerlab/portfolio/214/

Photo Credits: Dr. Lubert and Lyndon Swarey

Student Post: Rolling Trefoil Knots

Introduction:

Dr. Laura Taalman and Dr. Stephen Lucas have been researching rolling trefoil knots since before we (Abby Eget and Harley Meade) became involved with the project. When we joined the research team, we were fortunate to inherit a lot of existing OpenSCAD code, including the general shape of the Morton trefoil, which is tritangentless (the property that allows it to roll properly). We then helped the team optimize the knot using a parameter a, in accordance with the formula in the section below, so that there was minimal variation in the height of the center of mass.

Optimizing the z value using the a value:

Originally the plan was to optimize the a value with the scaling fixed at 1 in the z direction. This went surprisingly quickly, so we shifted directions: instead of optimizing the a value for a fixed z scaling of 1, we decided to optimize the scaling in the z direction for various a values. We used a values of 0.3, 0.5, 0.7, and 0.9.

Coded knot with a = .5

We had the four trefoil knots with the four different a values printed by Shapeways, and printed four corresponding convex hulls in our own Maker Lab. These are pictured below.

a = .3 scaling = .4629
a = .5 scaling = .8210
a = .7 scaling = 1.3162
a = .9 scaling = 2.4954
Varying heights of the center of mass for knots of varying a values with z-scale 1.

Knot paths:

After considering how the knots rolled in relation to their convex hulls, we decided to use the triangles from the convex hulls’ code to see what kind of paths the knots make when rolling on the table. Two of these are pictured below, with the black line showing the horizontal shift in center of mass.

Path of Morton trefoil knots with a = 0.3 and a = 0.5

Two other parameters that we changed are the p and q values of the knots. These come from the fact that the trefoil is a torus knot, where p represents how many times the knot wraps around a torus meridionally and q how many times it wraps around the torus longitudinally. We changed these values, keeping p and q relatively prime so as not to turn the knot into a link, to see whether the resulting knots would at least be externally tritangentless, and how their paths on a table would be affected.

p = 7, q = 2

After exploring some of the variables and parameters surrounding rolling trefoil knots, we are excited to continue optimizing these knots by looking at other factors. In the future, we plan to optimize the materials, infill, and consistency in sizing of the knots to minimize differences not currently accounted for.

Welcome Back

Hello all,

It has been a long time. I took a break from KoebeLib to research another problem, and now I am back! I am happy to report that a lot has been done since my last post.

We are well past selecting circles, and the selection of circles has allowed us to create more sophisticated tools. Current functionality includes calculating the intersection of two circles; computing, given two disks and a point, the member of the coaxial family through that point; and deleting objects. Below is a picture of the current button panel as it appears when the program is first run.

The tool currently in progress: given two disks and a point, calculate the disk of the orthogonal family. We initially thought we could use the conical caps for this calculation, but that did not work. Our next idea is to calculate the plane orthogonal to the two given disks, which should be relatively simple because we had to do something similar for the member of the coaxial family. We also have code that, given three disks, returns a c-plane, but there is no button for it yet because we aren’t quite sure how to represent c-planes. On the flip side, another tool in progress takes three c-planes and returns a disk; for this to be completed I just have to write the calculation to select a plane.

These two tools will not require too much work, so I have a couple more (simple) things in progress as well. Our button panel is currently not responsive, so that is first on my to-do list. We also want a side panel that displays the currently drawn objects, plus the ability to select multiple objects. I will see you all with more tools completed for KoebeLib.
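As an illustration of the flavor of the circle-intersection tool mentioned above, here is a planar sketch in Python (KoebeLib itself is written in Kotlin and works with disks and c-planes rather than flat circles, so this is only the flat analogue, with hypothetical names):

```python
import math

def circle_intersection(c1, r1, c2, r2):
    # Intersection points of two circles in the plane, or None if the
    # circles are disjoint, nested, or concentric.
    (x1, y1), (x2, y2) = c1, c2
    d = math.hypot(x2 - x1, y2 - y1)
    if d == 0 or d > r1 + r2 or d < abs(r1 - r2):
        return None
    a = (r1 ** 2 - r2 ** 2 + d ** 2) / (2 * d)   # center-to-chord distance
    h = math.sqrt(max(r1 ** 2 - a ** 2, 0.0))    # half the chord length
    mx, my = x1 + a * (x2 - x1) / d, y1 + a * (y2 - y1) / d
    ox, oy = -(y2 - y1) * h / d, (x2 - x1) * h / d
    return (mx + ox, my + oy), (mx - ox, my - oy)
```

The same midpoint-plus-perpendicular-offset idea carries over to the spherical setting, where the chord becomes the radical plane of the two disks.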


Until then,

Madelaine Brower

Student Post: Programming Tilings with Escher

Introduction

This post assumes that you have downloaded the KoebeLib repository and are using IntelliJ IDEA.

Escher is a domain-specific language created to allow users to efficiently define and describe combinatorial subdivision rules and perform the subdivision. Today I will show you how to write a subdivision tiling program in Escher. Our tiling of choice for this tutorial is the well-known Chair Tile (Figure 1).

chair_sub

Figure 1: The subdivision iterations of the geometric Chair Tile. [1]

Writing the Program

escher_example

Figure 2: Sample code for a Chair Tile subdivision program in Escher

The first step in writing our subdivision program is to define all of the tiles that will be used (prototiles). Define a prototile using the keyword TILETYPE followed by an identifier for the tile and the number of vertices enclosed in {}. Note that the vertices created for a tile are indexed 0 to N-1 and connected sequentially into a cycle, with vertex N-1 connected back to vertex 0. For our example we only have one prototile, defined on line 1.
TILETYPE chair {8};

Next, each prototile that is defined needs to have an accompanying set of subdivision rules. To define the subdivision rules for a specific prototile use the keyword SUBDIVISION followed by the name of the prototile. All of the subdivision rules need to be enclosed in a block designated by the {}. For our example we only need subdivision rules for the chair tile and they are defined in lines 2-21.
SUBDIVISION chair { INSERT RULES HERE };

Within a subdivision rule definition we can perform three main actions: splitting preexisting edges to create new vertices, creating new unconnected vertices, and specifying the child tiles. Creating a new vertex with the split function requires the keyword VERTEX followed by an identifier for the vertex, an equal sign, and the split function with the appropriate parameters. The three parameters of the split function are the origin and destination vertices of the desired edge and the number of edges you want to split that edge into. Note that for splits with edge counts larger than 2 you need to specify an identifier for each vertex that will be created when splitting the edge. For our example we split all of the edges of our chair in lines 4-11. New unconnected vertices are created in lines 12-16.
VERTEX a = split(chair.vertex[0], chair.vertex[1], 2);
VERTEX b = split(chair.vertex[1], chair.vertex[2], 2);
VERTEX c = split(chair.vertex[2], chair.vertex[3], 2);
VERTEX d = split(chair.vertex[3], chair.vertex[4], 2);
VERTEX e = split(chair.vertex[4], chair.vertex[5], 2);
VERTEX f = split(chair.vertex[5], chair.vertex[6], 2);
VERTEX g = split(chair.vertex[6], chair.vertex[7], 2);
VERTEX h = split(chair.vertex[7], chair.vertex[0], 2);
VERTEX i;
VERTEX j;
VERTEX k;
VERTEX l;
VERTEX m;

The children in a set of subdivision rules are the resulting prototiles created by connecting newly created and existing vertices within the parent tile. To create a child rule, use the keyword CHILD followed by an identifier for the child, an equal sign, and information about the resulting child tile: the tiletype followed by all of its vertices in counter-clockwise order, enclosed in ([]). For our example we create four children of the chair tiletype in lines 17-20.
CHILD c1 = chair([chair.vertex[0], a, chair.vertex[1], j, i, m, chair.vertex[7], h]);
CHILD c2 = chair([chair.vertex[2], c, chair.vertex[3], d, k, j, chair.vertex[1], b]);
CHILD c3 = chair([i, j, k, d, chair.vertex[4], e, l, m]);
CHILD c4 = chair([chair.vertex[6], g, chair.vertex[7], m, l, e, chair.vertex[5], f]);

Finally we want to tell the program to execute the tiling. To do that we use the tile function and provide it with two parameters: the initial prototile and the number of subdivision iterations. For our example we start with a chair tile and subdivide three times on line 22.
tile( chair, 3 );

Congratulations, you have successfully written your first Escher program. Save the file as chair.es and be sure to remember where you saved it.

Running the Program

In order to run the program locate the EscherDriver.kt file at KoebeLib/src/tiling/language/algorithms/EscherDriver.kt in the IntelliJ IDE and edit the run configurations. We need to add the full path of the Escher program file that we just saved into the Program arguments box. Now we can run the EscherDriver on our program by clicking the green triangle in the top right of the IDE. Running the driver will produce three files in the main KoebeLib directory: config, verts, and edges. These files will be used to visualize the tiling created by our program.

Visualizing the Results

In order to visualize the tiling we have created, we will use a Python script included in KoebeLib. Open a terminal and navigate to KoebeLib/src/tiling/language/algorithms/. Run the provided script by typing python planarDraw.py; this should display the tiling (Figure 3). The figure can be saved through the options at the bottom of the graph window.

chair_3

Figure 3: A graph of the Chair tiling after three iterations of subdivisions.

Bibliography

[1] Luis Blackaller, Piles of Tiles, http://black.mitplw.com/tiles/aperiodic.html

Student Post: Transforming Graphs

Today’s mission was to transform K6 into another member of the Petersen family of graphs using a Δ-Y transform: identify 3 vertices and the 3 edges connecting them pairwise, delete those edges, and add a new vertex connected to all three vertices. My starting point for this is my OpenSCAD code from making K6. What I wanted to do was delete the edges shown in red, add a new vertex in the center, and add the three new edges shown in blue connecting the new vertex to the three vertices whose edges I pairwise deleted. The picture below nicely shows the Δ-Y transform of K6 that would take place.

K6MarkedUp

My current issue is this: the way my code is set up, all my edges are created by a for loop.

for loop K6 transform

I got stuck trying to figure out how to easily delete three edges after they had already been made. I’m going to talk with Dr. Taalman to see if there is a quick fix, or how I should efficiently write the code to make this transform happen.
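For what it’s worth, the transform itself is easy to state on an edge list. Here is a hypothetical Python sketch of the bookkeeping (my actual code is OpenSCAD, where the same effect would need a conditional inside the for loop):

```python
import itertools

def delta_y(edges, triangle, new_vertex):
    # Δ-Y transform: remove the three edges of `triangle` (a set of three
    # vertices) and connect each of those vertices to `new_vertex`.
    tri_edges = {frozenset(p) for p in itertools.combinations(triangle, 2)}
    kept = [e for e in edges if frozenset(e) not in tri_edges]
    return kept + [(v, new_vertex) for v in triangle]

# K6 as an edge list on vertices 0..5; transform the triangle on 0, 1, 2.
k6 = list(itertools.combinations(range(6), 2))
result = delta_y(k6, triangle=(0, 1, 2), new_vertex=6)
```

The edge count stays at 15: three edges are removed and three new ones are added.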

Until next time,

Hannah Critchfield

Deutsch’s Algorithm

Disclaimer

This post is a simple example application of the theorems and mathematics discussed in the last post, The Basic Mathematics of Quantum Computing. The information here will be hard to grasp unless you have covered the operations specified previously, so this post, and most posts following it, should be regarded as a continuation.

Background

Deutsch’s Algorithm is one of the most representative quantum algorithms. It was developed to illustrate the processing of qubit input and the difference in performance compared to a classical Turing machine. The algorithm concerns the testing of black box functions: modules for which the operations performed on the input at execution time are not known.

Ordinarily, to determine how the black box behaves we would have to test multiple inputs and examine patterns in the outputs. For this example, however, we limit the possibilities to only four Boolean functions, two balanced and two constant.

To clarify, a constant function has the same Boolean output for every input, while a balanced function yields as many 0s as it does 1s. We define our functions as follows:

f(0) = f(1) -> constant -> inputs 0 and 1 both map to the same output, either 0 or 1.

f(0) ≠ f(1) -> balanced -> inputs 0 and 1 map to opposite outputs.

In the illustration below, functions one and three are constant, while two and four are balanced.

Normally, with a classical implementation, we would need to test at least two inputs to determine whether the function inside the black box is constant or balanced. Using the principle of superposition on qubits, we can instead evaluate both inputs at once, effectively cutting the number of required function evaluations in half.

The operations saved by this one algorithm may seem negligible, but the ability to evaluate many inputs in a single query can yield substantial savings when the idea is extended to functions on many bits.

Complexities

Before walking through the algorithm, it’s important to review the mathematics needed to complete this exercise. Recall from the last post the Hadamard matrix, the quantum gate used to transform a qubit into a state of superposition.

Recall: H = \left(\begin{array}{cc} \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}}\\ \frac{1}{\sqrt{2}} & \frac{-1}{\sqrt{2}} \end{array}\right).

Again this is done through matrix tensor multiplication.

It is important to remember that a qubit register in any state can be evaluated as a column vector of 2^n probability amplitudes over its possible states. These amplitudes are complex numbers, so \left|\varphi\right> = [C_1, C_2, \ldots, C_N] where each entry belongs to the complex numbers, C_i \in \mathbb{C}.

Let’s review the tensor multiplication of two qubits with real amplitudes, such that \left|0\right> has a 100% chance to occupy state 0 and \left|1\right> has a 100% chance to occupy state 1. Next we apply the tensor product:

\left|0\right> \otimes \left|1\right> = \left(\begin{array}{c} 1\\ 0\end{array}\right) \otimes  \left(\begin{array}{c} 0\\ 1\end{array}\right) = \left|01\right>  = \left(\begin{array}{c} 0\\ 1\\0\\0\end{array}\right)

Once evaluated, the resulting two-qubit state has four possible basis states but always has a 100% probability of being in the state 01.
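This computation is small enough to check in a few lines of Python; a quick sketch, with plain lists standing in for column vectors:

```python
def kron(u, v):
    # Tensor (Kronecker) product of two state vectors:
    # the entry for basis state |ij> is u_i * v_j.
    return [a * b for a in u for b in v]

ket0, ket1 = [1, 0], [0, 1]
print(kron(ket0, ket1))  # → [0, 1, 0, 0], i.e. |01>
```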

Following this procedure, we can apply the quantum gate H to \left|0\right>, such that

H\left|0\right> =  \left(\begin{array}{cc} \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}}\\ \frac{1}{\sqrt{2}} & \frac{-1}{\sqrt{2}} \end{array}\right) \left(\begin{array}{c} 1\\ 0\end{array}\right) = \frac{\left|0\right> + \left|1\right>}{\sqrt{2}} = \left(\begin{array}{c} \frac{1}{\sqrt{2}}\\ \frac{1}{\sqrt{2}}\end{array}\right)

Hence the resulting qubit has amplitude \frac{1}{\sqrt{2}} for each basis state: a 50% probability of being measured in state 0 and 50% in state 1. As shown previously, to recover real probabilities we take the squared magnitudes of the complex amplitudes.

Now that we have a way to move qubits into superposition, we are ready to evaluate Deutsch’s Algorithm. Recall that the algorithm is possible because of a qubit’s ability to occupy both testable inputs at once.

Deutsch’s Algorithm

As noted above, the black box holds one of four functions taking one bit of input to one bit of output. Again, if f(0) \neq f(1) then f is balanced; otherwise, if f(0) = f(1), it is constant.

Now consider the diagram below. Notice that this graphic is a quantum circuit in which the two qubit inputs on the left are manipulated through the quantum gates denoted by H. They are then passed to U_f, which is our black box test.

From the diagram we can see that by applying two qubits as the inputs and putting them both into superposition, the individual values of the qubits become negligible, as each has equal probability of being in both states.

We now explain the qubit state \left|\varphi_n\right> at each stage of the algorithm.

At \left|\varphi_0\right> we start with the initial separated qubit inputs \left|0\right> and \left|1\right>,

where \left|\varphi_0\right> = \left|0,1\right>.

Next we examine state \left|\varphi_1\right>

First we must apply the matrix H to both our qubits, where

H\left|0\right> = \frac{\left|0\right> + \left|1\right>}{\sqrt{2}}  and  H\left|1\right> = \frac{\left|0\right> - \left|1\right>}{\sqrt{2}}.

Finally we tensor together our two superposition qubits to achieve,

\left|\varphi_1\right> = H\left|0\right>  \otimes H\left|1\right> = \frac{\left|0\right> + \left|1\right>}{ \sqrt{2}} \frac{\left|0\right> - \left|1\right>}{ \sqrt{2}}.

We use the tensor product to combine these two states to obtain the result \left|\varphi_1\right> = \frac{\left|00\right> - \left|01\right> + \left|10\right> - \left|11\right>}{2} .

Therefore the state vector of the qubit pair is \large{\left|\varphi_1\right> =  \left(\begin{array}{c} \frac{1}{2}\\ \frac{-1}{2}\\ \frac{1}{2}\\ \frac{-1}{2} \end{array}\right)}.
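We can verify this state vector numerically; a small Python sketch:

```python
import math

s = 1 / math.sqrt(2)
H = [[s, s], [s, -s]]                      # Hadamard gate

def apply(M, v):
    # Apply a gate (matrix) to a state vector.
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def kron(u, v):
    # Tensor product of two state vectors.
    return [a * b for a in u for b in v]

phi1 = kron(apply(H, [1, 0]), apply(H, [0, 1]))
# phi1 is approximately [0.5, -0.5, 0.5, -0.5]
```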

This combined state can then be applied to our function as one input. We then evaluate the state of qubit \left|\varphi_2\right> as the output after the black box function has been applied.

Here notice that the sign of this output changes dependent on which function was applied during execution, such that

\Large{\frac{(-1)^{f(0)} \left|0\right> + (-1)^{f(1)} \left|1\right>}{ \sqrt{2}} \frac{\left|0\right> - \left|1\right>}{ \sqrt{2}}}.

From this expression we can see that, depending on the values of f(0) and f(1), we can determine whether the function is balanced or constant: up to sign, a constant function always yields \left|0\right> + \left|1\right> in the first factor, while a balanced function always yields \left|0\right> - \left|1\right> due to the change in sign.

Finally we can put this in terms of a piecewise function, where the first qubit is the one that shows the attributes of the function and the second is the static transformation of the bottom qubit.

\left|\varphi_2\right> = \begin{cases}  (\pm 1) \frac{\left|0\right> + \left|1\right>}{ \sqrt{2}}  \frac{\left|0\right> - \left|1\right>}{ \sqrt{2}} & f = constant \\  (\pm 1) \frac{\left|0\right> - \left|1\right>}{ \sqrt{2}}  \frac{\left|0\right> - \left|1\right>}{ \sqrt{2}} & f = balanced \end{cases}.

Recall that because H is unitary and its own inverse, we can apply one last H to the top qubit to move it out of superposition for observation at \left|\varphi_3\right>, revealing whether the function is balanced or constant.

We see this where \left|\varphi_3\right> = \begin{cases}  (\pm 1) \left|0\right> \frac{\left|0\right> - \left|1\right>}{ \sqrt{2}}  & f = constant \\  (\pm 1) \left|1\right> \frac{\left|0\right> - \left|1\right>}{ \sqrt{2}} & f = balanced \end{cases}.

For clarification: the top qubit is the one that was manipulated by the function and brought out of superposition, while the bottom qubit stayed in superposition in order to preserve the system’s coherence.
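The whole circuit is small enough to simulate directly. A Python sketch under the conventions above (state index 2x + y with the top qubit first; the oracle construction and helper names are my own, not from the post):

```python
import math

s = 1 / math.sqrt(2)
H = [[s, s], [s, -s]]
I2 = [[1, 0], [0, 1]]

def kron_m(A, B):
    # Kronecker product of two matrices.
    return [[A[i][j] * B[k][l] for j in range(len(A[0])) for l in range(len(B[0]))]
            for i in range(len(A)) for k in range(len(B))]

def matvec(M, v):
    # Apply a matrix to a state vector.
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def U_f(f):
    # Oracle: |x, y> -> |x, y XOR f(x)> as a 4x4 permutation matrix.
    U = [[0] * 4 for _ in range(4)]
    for x in range(2):
        for y in range(2):
            U[2 * x + (y ^ f(x))][2 * x + y] = 1
    return U

def deutsch(f):
    # |0,1> -> (H ⊗ H) -> U_f -> (H ⊗ I); measuring 0 on the top
    # qubit means f is constant, 1 means f is balanced.
    state = [0, 1, 0, 0]                       # |01>
    state = matvec(kron_m(H, H), state)
    state = matvec(U_f(f), state)
    state = matvec(kron_m(H, I2), state)
    p_top_is_1 = state[2] ** 2 + state[3] ** 2
    return "balanced" if p_top_is_1 > 0.5 else "constant"
```

A single call to the oracle suffices: deutsch classifies each of the four possible one-bit functions correctly.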

While this algorithm is simple, it gives a tangible example of the applications of qubit mathematics and quantum computation. The procedure was later shown to be useful for determining the contents of many black box functions and can be implemented in a variety of ways. Unfortunately, I cannot delve into all of the specific applications, as there are many other topics to cover. However, if you are interested in learning more, an extension of Deutsch’s Algorithm called the Deutsch-Jozsa Algorithm can be applied to a much larger domain of input. In fact, Deutsch’s Algorithm serves as a basis for many quantum algorithms that expand on evaluating multiple inputs at once.

Written By: Mitchell Mesecher

Citations: 

Noson S. Yanofsky and Mirco A. Mannucci, Quantum Computing for Computer Scientists, Cambridge University Press, 2008.