What is Quantum Computing?
Quantum computers arise from the attempt to harness principles of quantum mechanics to dramatically reduce computational run-time. A quantum computer can carry out any task given to a classical Turing machine, and for certain problems it can, in theory, do so with exponentially fewer computations. To accomplish this, quantum systems adopt the principle of superposition, or, more simply, being in multiple states at once. In quantum mechanics this is usually discussed in the realm of photons or waves, but in computing we can apply the idea to the most basic unit of information, the bit. A classical bit has only two positions, a one or a zero, but by taking advantage of superposition we can create a bit that occupies both positions at the same time. We define this as a qubit. These qubits, along with the mathematics of linear algebra and probability, allow us to use a small finite register to “hold” many theoretical data values at once, thus maximizing the possible performable computations.
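As a rough illustration of the idea above, here is a minimal toy model in Python (a sketch, not real quantum hardware): a qubit is represented by two amplitudes whose squared magnitudes give the probabilities of measuring a zero or a one.

```python
import math

# Toy model of a qubit: a pair of amplitudes (a, b) with
# |a|^2 + |b|^2 = 1. |a|^2 is the probability of measuring 0,
# |b|^2 the probability of measuring 1.
def qubit(a, b):
    norm = math.sqrt(abs(a) ** 2 + abs(b) ** 2)
    return (a / norm, b / norm)

def probabilities(state):
    a, b = state
    return (abs(a) ** 2, abs(b) ** 2)

# An equal superposition: the qubit is "in both positions at once",
# with a 50/50 chance of being seen as 0 or 1 when measured.
plus = qubit(1, 1)
p0, p1 = probabilities(plus)
```

A single classical bit stores one of two values; the pair of continuous amplitudes is what lets a qubit register represent many values at once before measurement.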
The Constraints on Implementation
Having addressed a few of the high-level abstractions involved in the implementation of a quantum system, we will now look at some of the physical and theoretical constraints involved. One of the fundamental concepts of quantum physics is that of observation. You may recall hearing of a thought experiment called Schrödinger’s cat. Using the principle of superposition, Schrödinger argued that an unobserved cat could be in both states, alive and dead, at once, until observed by an individual. Once observation occurs, the quantum superposition collapses to the single classical state that was perceived. The same applies to our qubits: once the value of a qubit is observed, it collapses to the classical state in which it was accessed. A closely related phenomenon, decoherence, occurs when a quantum system interacts with its environment and loses its ability to remain in a set of mixed states. It was shown that pure-state symmetry breaking imposes a time constraint within which a system as a whole can stay quantum coherent. Additionally, miniaturization of a system compounds this time constraint.
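The collapse described above can be sketched in the same toy Python model: measuring a superposed qubit yields a random classical outcome, and afterwards the qubit's state is simply that outcome, so any repeated measurement agrees with the first.

```python
import random

# Toy model of measurement-induced collapse. Observing the qubit
# returns 0 or 1 according to the amplitude probabilities, and the
# state collapses to the classical state that was perceived.
def measure(state, rng=random.random):
    a, b = state
    if rng() < abs(a) ** 2:
        return 0, (1.0, 0.0)   # collapsed to the classical state 0
    return 1, (0.0, 1.0)       # collapsed to the classical state 1

superposed = (2 ** -0.5, 2 ** -0.5)   # 50/50 superposition
outcome, collapsed = measure(superposed)
# Measuring the collapsed state again always repeats the outcome:
# the superposition is gone.
outcome2, _ = measure(collapsed)
```

Real decoherence is the environment performing something like this "measurement" continuously, which is why coherence times are such a strict engineering constraint.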
After covering the theoretical aspects involved, we can now define the concrete system requirements needed: a well-prepared and dynamic qubit register for stability when dealing with decoherence; a set of gate matrices to perform the necessary operations on the state of a qubit; the longest decoherence time possible given the size of the system; and the ability to take single measurements of a state defined by superposition.
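The "gate matrices" in the requirements above are unitary matrices applied to a qubit's amplitude vector. A minimal sketch, using the standard Hadamard gate as the example: it sends the classical state 0 into an equal superposition, and applying it twice returns the original state.

```python
import math

# The Hadamard gate, the standard 2x2 gate matrix that creates an
# equal superposition from a classical basis state.
H = [[1 / math.sqrt(2),  1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]

# Apply a 2x2 gate matrix to a qubit's amplitude vector.
def apply_gate(gate, state):
    a, b = state
    return (gate[0][0] * a + gate[0][1] * b,
            gate[1][0] * a + gate[1][1] * b)

zero = (1.0, 0.0)                     # the classical state 0
superposed = apply_gate(H, zero)      # equal superposition
back = apply_gate(H, superposed)      # H is its own inverse
```

Because gates are reversible (unitary), a quantum program is a sequence of such matrix applications, with measurement deferred to the end.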
Applications of Quantum Computers
One of the most famous applications behind the implementation of quantum computers involves the study of cryptography and encryption, namely the ability of a computer to factor incredibly large numbers and find their primes. The difficulty of this problem is the basis behind the world’s asymmetric encryption protocols, such as RSA. If suitable quantum systems became commercially available, new encryption protocols would have to be developed and integrated almost overnight. Currently, the best known classical factoring algorithms run in super-polynomial time. A polynomial time complexity means an algorithm’s run-time grows as a fixed power of the size of an input N, making it predictable; a super-polynomial complexity means the run-time grows faster than any fixed power of the input size. These ideas connect to the well-known complexity classes P (problems solvable in polynomial time) and NP (problems whose solutions can be verified in polynomial time); factoring lies in NP, but no polynomial-time classical algorithm for it is known. The most famous quantum algorithm for the factorization of numbers is known as Shor’s algorithm. It uses modular arithmetic to reduce factoring to finding the period, or order, of a function, a step that remains infeasible classically for large inputs, but which a quantum computer can perform in polynomial time.
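The classical skeleton of Shor's approach can be sketched for a toy input, N = 15 with base a = 7. The order-finding step below is done by brute force, which is exactly the part a quantum computer would replace with an efficient period-finding routine; the modular-arithmetic reduction around it is classical.

```python
from math import gcd

# Find the order r of a modulo n: the smallest r with a^r = 1 (mod n).
# This brute-force loop is the step Shor's algorithm performs on
# quantum hardware; everything else here is classical post-processing.
def order(a, n):
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

N, a = 15, 7
r = order(a, N)                       # the order of 7 mod 15
# When r is even, gcd(a^(r/2) - 1, N) and gcd(a^(r/2) + 1, N)
# yield nontrivial factors of N.
f1 = gcd(pow(a, r // 2) - 1, N)
f2 = gcd(pow(a, r // 2) + 1, N)
```

For this toy case the recovered factors are 3 and 5, the prime factorization of 15; for RSA-sized N, only the quantum period-finding step makes the same recipe tractable.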
A more relatable example of an application in quantum computing is that of searching through an unordered list of items. Searching takes up a large percentage of all computer computation, so reducing its run-time for any sized input would cut the total time spent on computational operations considerably. Classically, the time it takes to find a particular object in an unordered list is evaluated in the worst-case scenario: you must search the entire list to find the item you’re looking for, so the time complexity is the size of the input, N. Using a quantum system, we can do better. Grover’s algorithm takes advantage of the probabilistic principles of quantum computing in order to reduce this running time to O(√N). This is not an exponential speedup, in which searching time would be reduced exponentially; Grover’s algorithm offers a quadratic speedup. However, this quadratic speedup is not insignificant. Using the aspect of superposition, we place the register in a state in which every item is simultaneously the item we’re looking for and the item we’re not. From there, quantum gate manipulation lets us infer the answer from the theoretical output, such as the tensor product of the input and the function being used. Each step increases the probability of measuring the item we want at the expense of the items we do not, which is what lowers the overall running time. It is important to note that Grover’s algorithm is, in part, a functional expansion on Deutsch’s algorithm.
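The amplitude-boosting described above can be simulated directly for a tiny list. The sketch below (plain Python, no quantum libraries) runs one Grover iteration over N = 4 items: the oracle flips the sign of the marked item's amplitude, and the diffusion step reflects every amplitude about the mean, which concentrates probability on the marked item.

```python
# Toy statevector simulation of Grover's algorithm for a list of
# N = 4 items (two qubits), searching for the marked index `target`.
N, target = 4, 2

# Start in an equal superposition over all four items.
amps = [1 / N ** 0.5] * N

# One Grover iteration:
# 1. Oracle: flip the sign of the marked item's amplitude.
amps[target] = -amps[target]
# 2. Diffusion: reflect every amplitude about the mean amplitude.
mean = sum(amps) / N
amps = [2 * mean - a for a in amps]

# Measurement probabilities after one iteration.
probs = [a * a for a in amps]
```

For N = 4 a single iteration happens to concentrate all probability on the marked index; in general about (π/4)·√N iterations are needed, which is where the O(√N) bound comes from.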
Written By: Mitchell Mesecher
References:
Abhilash Ponnath, “Difficulties in the Implementations of Quantum Computers.”