
A Deeper Dive into Photonic Quantum Computing Systems

by Blesso Abraham


In the race toward fault-tolerant universal quantum computing, a new contender is the photonic quantum computer, which takes advantage of well-developed optical fiber technology and uses photons as qubits. With the invention of the quantum computing (QC) concept, the development of suitable optical quantum technology became both an intriguing approach to the problem and a necessity. On the one hand, the advantages of using photons as information carriers are clear: photons are clean, nearly decoherence-free quantum systems on which single-qubit operations can be performed with very high fidelity. On the other hand, quantum information processing with photons as "flying qubits" is needed for communication-based quantum information science tasks, such as networking quantum computers, enabling distributed processing, and quantum key distribution.

In terms of the traditional DiVincenzo criteria for a quantum computer, five out of seven are inherently satisfied by choosing photons. The remaining criteria are harder to satisfy because photons do not readily interact with one another, making deterministic two-qubit gates a challenge. Among the additional specialized considerations is photon loss, which arises from imperfect detection and photon-generation methods, and from scattering and absorption in the optical components that make up the computing circuits. Although photons are always flying, computing and networking tasks may require them to be delayed or stored, so an additional device, an optical quantum memory, may sometimes be needed.

Addressing each of these considerations requires more resources, creating a nominally large overhead for optical QC that has at times led to negative perceptions of the photonic approach.

Of course, intensive research is underway on deterministic optical quantum gates, which could take photonic quantum computing in a new direction. Meanwhile, the idea of linear optical quantum computing (LOQC), which relies on simple but probabilistic quantum operations, has gained promise through its continued development over the last 20 years. On the technology side, we look at photon detection and generation tools and integrated waveguide technology, and at some intermediate-scale quantum computing demonstrations that these enable. On the theoretical side, we discuss several promising routes toward a realistic universal linear optical quantum computer. We concentrate on photonic quantum computing that relies on qubits encoded in discrete variables, while noting that quantum computing with continuous variables has also become an important part of LOQC.


Photonic quantum computers are, in essence, a series of photon experiments using photon generators, photon detectors, beam splitters, phase shifters, and waveguides on a miniaturized chip. The outcome of each experiment affects the final output, which is generalized to solve problems using universal gates and the KLM or boson-sampling procedure.


A range of optical waveguide systems has been developed for IQP, including laser-written silica, silica-on-silicon (SiO2), silicon-on-insulator (Si), silicon nitride (Si3N4), lithium niobate (LN), gallium arsenide (GaAs), indium phosphide (InP), and others. We refer to detailed discussions of the IQP platforms in other reviews, while here we highlight a few of them. For example, silicon waveguides can tightly confine light, allowing single-photon generation in waveguides and high-density integration.

Si3N4 can generate single photons over a broadband window. Laser-written 3D circuits provide possibilities for studying complex physical Hamiltonians. While the above examples are all passive materials, LN, GaAs, and InP possess large electro-optic effects that allow fast manipulation of single-photon states.

Qubit encoding and manipulation. An on-chip toolbox has been developed to encode, transmit, and process quantum information in various degrees of freedom (DoFs) of single photons (Box 1). Flexible control of the photon's DoFs greatly enriches functionality.

For example, encoding states in time and polarization enables robust chip-to-chip quantum interconnection via fiber or free space. Encoding qubits in the path allows extremely low-error single-qubit and two-qubit operations (Box 2), key for on-chip quantum information processing. Si-based MZIs with a 65 dB on-off ratio have been achieved with near-perfect beam splitters, equivalent to a Pauli-Z error rate of < 10−6. With ultra-precise operations and measurements, the resource overhead for MBQC can be significantly reduced. The high level of controllability of photonic states has also been confirmed by demonstrations of two-photon quantum interference with near-unity visibility of 100.1±0.4% on SiO2 and 100.0±0.4% on Si platforms.
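As a rough sanity check on these numbers, an on-off extinction ratio quoted in dB can be converted to the fraction of power leaking through the nominally dark port; a minimal sketch, where the function name is ours and reading the leakage directly as an error rate is a simplification:

```python
def extinction_ratio_to_leakage(er_db: float) -> float:
    """Fraction of optical power leaking through the 'off' port of an
    interferometer with the given on-off extinction ratio in dB."""
    return 10 ** (-er_db / 10)

# A 65 dB on-off ratio corresponds to ~3.2e-7 leakage,
# consistent with a Pauli-Z error rate below 1e-6.
print(f"{extinction_ratio_to_leakage(65):.1e}")  # 3.2e-07
```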

Integrated single-photon sources. Harnessing the power of IQP requires the on-chip generation of a large number of identical single photons. We discuss two types of integrated SPSs: parametric photon-pair sources and quantum dot (QD) single-photon sources.

Pumping nonlinear optical waveguides or cavities allows the generation of photon pairs via the spontaneous four-wave mixing (SFWM) or spontaneous parametric down-conversion (SPDC) process. These SPSs have been reported in χ3 waveguides (e.g., Si, SiO2 and Si3N4) and in χ2 waveguides (e.g., GaAs and LN). A notable feature of integrated parametric SPSs is that they can be engineered into arrays of highly identical sources, each one individually controllable. We give two state-of-the-art examples: an array of 18 SFWM-SPSs in SiO2 waveguides, and 4 SFWM-SPSs in Si microresonators that can produce heralded single photons with a 52% (50%) heralding efficiency and 95% photon indistinguishability between separate SPSs. An issue for parametric SPSs, however, is that photons are produced non-deterministically, with a 5%-10% probability. This can be improved by exploiting multiplexing techniques, such as in the time and spatial domains. A state-of-the-art multiplexing source has been demonstrated in bulk optics, achieving a 66.7% probability of single photons collected into a fiber and a high indistinguishability of ∼90%.

QD-SPSs promise deterministic generation of single photons; in particular, self-assembled InGaAs/GaAs QD-SPSs currently show the best performance.

A breakthrough in 2013 demonstrated near-optimal single-photon emission from a single QD using a resonant excitation technique. Photons with a 99.1% single-photon purity, 66% extraction efficiency, and 98.5% indistinguishability have been produced from single QDs in several QD samples. QD-SPSs of this type in micropillars emit photons out-of-plane, presenting ease of fibre coupling but difficulty in waveguide integration. Instead, QD-SPSs in photonic-crystal waveguides allow near-unity preferential emission into the waveguide. A major challenge, however, is to create multiple identical QD-SPSs, due to the difficulty in reproducing the samples. A solution is to actively de-multiplex the single photons from a single dot into different spatial modes. This scheme produces multiple photons at the cost of overall rate, and has shown great success in recent Boson sampling implementations.

Integrated single-photon detectors. On-chip detection of single photons allows the ultimate readout of quantum information. Several SPD technologies are now available, e.g., avalanche photodiodes, superconducting nanowire SPDs (SNSPDs), and transition edge sensors (TESs). Fully integrated SNSPDs have been patterned atop GaAs, Si, and Si3N4 waveguides. A breakthrough in 2012 showed evanescently-coupled Si-waveguide SNSPDs with a 91% detection efficiency, 18 ps jitter, and 50 Hz dark count rate. Instead of direct deposition, SNSPDs fabricated on a Si3N4 membrane can be flexibly transferred to other substrates. Moreover, photon-number-resolving (PNR) detection is required in many quantum protocols. PNR TESs have been evanescently integrated on SiO2 and LN waveguides, and can resolve up to 5 photons. A series of integrated SNSPDs atop a GaAs waveguide also allows on-chip PNR detection of up to 4 photons.


Photon-pair sources based on SPDC and related processes, such as spontaneous four-wave mixing (SFWM), are not only non-deterministic but generally operate at low generation probabilities. To keep the single-photon state quality high, pump powers must be kept low; otherwise, multiple photon pairs will be generated at the same time. This limits the practical photon-pair generation probability p, for SPDC and similar processes, to p ≈ 1%. Directly combining an array of n such sources, which all generate pairs simultaneously only with probability p^n, to build a bigger quantum state is not a viable option for a scalable photonic quantum computer. A more feasible alternative is to use a deterministic photon source. In recent years, photon-on-demand sources based on quantum dots, both free-space and integrated into optical waveguides, have demonstrated a large increase in brightness (number of photons), enabling new quantum computation experimental demonstrations. (It is worth noting that although quantum dots are usually assumed to produce single photons on demand, they may also generate entangled photon pairs and superpositions of photon number states.)
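To see why pump power must stay low, note that a single-mode parametric source emits pairs with approximately thermal statistics, so the multi-pair probability grows roughly quadratically with the single-pair probability. A minimal sketch under that standard model (function name ours):

```python
def pair_number_probs(mu: float, nmax: int = 5) -> list:
    """Pair-number distribution P(n) = (1 - lam) * lam**n for a single-mode
    parametric source with mean pair number mu, where lam = mu / (1 + mu)."""
    lam = mu / (1 + mu)
    return [(1 - lam) * lam ** n for n in range(nmax + 1)]

for mu in (0.01, 0.1):
    p = pair_number_probs(mu)
    multi = 1 - p[0] - p[1]  # exact probability of 2 or more pairs
    print(f"mu={mu}: P(1 pair)={p[1]:.4f}, P(>1 pair)={multi:.1e}")
```

Raising the pump so that P(1 pair) grows tenfold raises the multi-pair contamination by roughly a hundredfold, which is the trade-off described above.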

Although quantum dots can couple to optical cavities with very high efficiency, a currently outstanding problem is coupling light efficiently into single-mode optical fibers, with present coupling efficiencies around 33%. Moreover, each quantum dot is usually spectrally different from others because of structural and environmental inhomogeneities, so the photons emitted by two dots are distinguishable from one another. PQC relies on nonclassical interference, and the lack of indistinguishability therefore makes it complicated to increase the number of photons used simultaneously in an experiment. One way to fix this is to tune the spectra of different quantum dots to make them indistinguishable. Alternatively, a single quantum dot can be used to generate all the required photons. For this, a pulsed output stream of photons from the dot is demultiplexed into different spatial channels via a free-space or integrated active optical network. The demultiplexed photons are then each delayed by appropriate amounts so that they exit the source setup simultaneously.

Similar active optical circuits may also, in theory, turn probabilistic sources like SPDC into deterministic ones. To achieve this, an array of sources is employed. Detecting the heralding signals from such an array identifies which source has successfully generated a photon pair (two lower-energy photons created from one higher-energy pump photon). Then, the corresponding heralded photon is actively rerouted through an optical network toward the output, while any other photons, if generated, are discarded by the same network.

Using n sources, this scheme theoretically boosts the generation efficiency without increasing the pump power that impinges on any one nonlinear crystal, and thus without increasing the amount of high-photon-number noise from multiple-pair generation events. (In principle, the network may also filter out multiple-pair generation events if photon-number-resolving detectors are used.) This idea, experimentally demonstrated in 2011, has moved significantly toward practicality since then, partly owing to the use of fibre- and waveguide-based integrated platforms that help with scaling. Another method, which doesn't require multiple separate sources, is time multiplexing of a single source. In the time-multiplexing approach, a heralded photon pair is generated in a random time bin, but the timing is recorded through the detection of the heralding signal. The heralded photons are sent into an active temporal delay network and switched to exit the network at a fixed, although lower, repetition rate. The number n of time bins used to output one single photon plays the role of the n sources in a spatial multiplexing scheme. Thus, the improvement in generation probability scales with the size of the delay network but is negatively affected by the loss of the optical components in it. This multiplexing idea has recently been implemented in several experiments, demonstrating multiplexing with large-scale, or large-scale and low-loss, networks, or with devices that produce indistinguishable output photons: single photons were produced in the output fibre with a probability of 0.6, and these photons displayed a nonclassical interference visibility of 0.9, a closer step toward near-deterministic sources. Interesting preliminary work has also been done toward combining these kinds of techniques to simultaneously generate more than one single photon at a time.
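The gain from these schemes can be estimated with a toy model. Below, spatial multiplexing succeeds if any of n sources heralds a pair, while in time multiplexing a photon heralded in bin k must survive n − k passes through a lossy delay loop with transmission eta. This is a sketch under our own simplified loss model, not a description of any specific experiment; with eta = 1 the two schemes coincide.

```python
def spatial_mux(p: float, n: int) -> float:
    """Probability that at least one of n heralded sources, each firing
    with probability p, produces a photon pair."""
    return 1 - (1 - p) ** n

def time_mux(p: float, n: int, eta: float) -> float:
    """Single source, n time bins, output at bin n. The latest heralded
    bin k is used; its photon makes n - k delay-loop passes, each with
    transmission eta."""
    return sum(p * (1 - p) ** (n - k) * eta ** (n - k) for k in range(1, n + 1))

print(round(spatial_mux(0.01, 100), 3))    # 0.634
print(round(time_mux(0.01, 100, 1.0), 3))  # 0.634: lossless loop matches spatial
print(round(time_mux(0.01, 100, 0.95), 3)) # loop loss eats part of the gain
```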
The multiplexing approach can also be applied to more than one probabilistic source to generate states with one photon in each of N > 1 modes. Another method is to use an optical quantum memory to synchronize several probabilistic sources. Although a quantum memory may be as simple as a switchable optical delay (in a free-space, fibre, or waveguide loop, for example), there is also extensive theoretical and experimental development of memories based on matter systems, with recent achievements including, but not limited to, broadband, high-speed multimode, telecom-compatible, or configurable memories, capable of storing vector, vortex, or entangled qubits, and storage with long coherence times and high storage efficiency and fidelity. Over the span of slightly more than a decade, photon detection and the probabilistic generation of high-quality photons have undergone transformational advances, and the development of deterministic sources is well underway, with no in-principle barriers to their realization.


The building blocks for the gate-based scheme have been demonstrated on integrated quantum photonic chips. For example, the CNOT gate was first demonstrated in SiO2, then in laser-written silica, Si, and others. These initial demonstrations implemented an unheralded CNOT scheme that doesn't include the ancillary photons; nevertheless, it was possible to demonstrate a rudimentary version of Shor's factoring algorithm using two CNOT gates, allowing factorization with a fidelity of 99±1%. The first integrated heralded CNOT gate was performed using a fully programmable SiO2 chip able to implement universal linear-optic operations.

The type of teleportation required to successfully shuttle quantum information through heralded gates was also demonstrated on-chip, with three photonic qubits and one CNOT gate in a laser-written SiO2 chip. It achieved the single-chip teleportation of single-qubit states with an average fidelity of 89±3%.

Re-programming photonic chips allows the processing of multiple quantum tasks and algorithms in a single chip. The first fully programmable two-qubit quantum photonic processor was demonstrated in a silica chip in 2011. The device includes single-qubit preparation and measurement, in addition to a two-qubit entangling operation. It was reconfigured to perform thousands of experiments while maintaining high fidelity, to probe wave-particle duality in a quantum delayed-choice experiment, and to implement a quantum algorithm that can compute the ground-state energy of molecules. The first reconfigurable laser-written device was also demonstrated recently. A Si-photonic quantum processor able to initialize, operate, and analyse arbitrary two-qubit states and processes was demonstrated by adopting a linear-combination scheme. The device was programmed to implement 98 different logic gates (e.g., CNOT, CZ, CH, and SWAP), and achieved an average quantum process fidelity of ∼93%. Multiple algorithms have been implemented in the device, such as a quantum approximate optimization algorithm for an example constraint satisfaction problem. Universal linear-optic circuits, able to implement all possible quantum protocols, have been proposed and experimentally realized in programmable optical circuits. The first realization was in a single SiO2 chip, consisting of a six-mode triangularly arranged MZI network.

The versatility of this universal circuit was demonstrated for several key applications, including the implementation of a heralded CNOT gate, Boson sampling circuits with verification tests, 6-dimensional complex Hadamard operations, and quantum simulation of the vibrational dynamics of molecules. Recently, an eight-mode universal linear-optic circuit was also demonstrated in a Si3N4 chip.
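The principle behind such universal circuits is that any lossless m-mode linear-optic transformation can be decomposed into a mesh of two-mode MZIs, each set by two phases. A minimal numerical sketch, using our own coupler convention and a 3-mode triangle standing in for the 6-mode layout:

```python
import numpy as np

def mzi(theta: float, phi: float) -> np.ndarray:
    """2x2 transfer matrix of one MZI: input phase phi, 50:50 coupler,
    internal phase theta, second 50:50 coupler."""
    bs = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)
    return bs @ np.diag([np.exp(1j * theta), 1]) @ bs @ np.diag([np.exp(1j * phi), 1])

def embed(u2: np.ndarray, m: int, k: int) -> np.ndarray:
    """Act with a 2x2 block on neighbouring modes (k, k+1) of m modes."""
    u = np.eye(m, dtype=complex)
    u[k:k + 2, k:k + 2] = u2
    return u

rng = np.random.default_rng(7)
mesh = np.eye(3, dtype=complex)
for k in (0, 1, 0):  # triangular ordering of the MZI positions
    mesh = embed(mzi(*rng.uniform(0, 2 * np.pi, size=2)), 3, k) @ mesh

print(np.allclose(mesh @ mesh.conj().T, np.eye(3)))  # True: the mesh is unitary
```

Programming the chip amounts to choosing the phase pairs (theta, phi) of each MZI; the triangular ordering here mirrors the triangularly arranged network mentioned above.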


The photonic CNOT gate performs a bit-flip operation on photon 2 (the target qubit) when photon 1 (the control qubit) is in the right-circular polarization; otherwise, nothing is done on the target qubit. The two photons are prepared in the polarization state.
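In matrix form, with |L⟩ and |R⟩ as the logical 0 and 1 of the polarization encoding, this gate is the familiar controlled-X. A small numerical sketch of the logic (not of the optical implementation):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]])   # bit flip on the target qubit
P_L = np.diag([1, 0])            # projector onto |L> (logical 0)
P_R = np.diag([0, 1])            # projector onto |R> (logical 1)
CNOT = np.kron(P_L, np.eye(2)) + np.kron(P_R, X)

# Control |R>, target |L>: the target flips to |R>
state = np.kron([0, 1], [1, 0])
print(CNOT @ state)  # [0. 0. 0. 1.]  i.e. |R>|R>
```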


The photonic Toffoli gate performs a bit-flip operation on photon 3 (the target qubit) when both photon 1 and photon 2 (the control qubits) are in the right-circular polarization; otherwise, nothing is done on the target qubit. The quantum circuit for implementing a robust photonic Toffoli gate.
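Numerically, this is the controlled-controlled-X: on the eight three-photon basis states it acts as the identity except for swapping |RRL⟩ and |RRR⟩. A quick check of the logic:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]])
P_R = np.diag([0, 1])                    # projector onto |R> (logical 1)
both_R = np.kron(P_R, P_R)               # both control photons in |R>
TOFFOLI = (np.kron(np.eye(4) - both_R, np.eye(2))  # controls not both |R>: identity
           + np.kron(both_R, X))                   # both |R>: flip the target

# Identity with basis states 6 (|RRL>) and 7 (|RRR>) swapped
expected = np.eye(8)
expected[[6, 7]] = expected[[7, 6]]
print(np.allclose(TOFFOLI, expected))  # True
```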


The photonic Fredkin gate performs a polarization-swap operation between photon 2 and photon 3 (the target qubits) if photon 1 (the control qubit) is in the right-circular polarization; if the control photon is in the left-circular polarization, nothing is done on the target qubits. The quantum circuit for implementing a robust photonic Fredkin gate: the photonic Fredkin gate can be achieved by performing a CNOT operation, with photon 3 as the control qubit and photon 2 as the target qubit, before and after the photons are injected into the input of the Toffoli gate. Photon 1, photon 3, and photon 2 are injected into the system sequentially.
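That decomposition can be verified directly on the eight-dimensional state space: conjugating a Toffoli (photons 1 and 2 controlling photon 3) by a CNOT with photon 3 controlling photon 2 yields exactly the controlled swap. A sketch, where the helper cx is ours:

```python
import numpy as np

def cx(controls, target, n=3):
    """n-qubit permutation matrix flipping `target` when all `controls`
    are 1; qubit 0 is the leftmost bit (photon 1)."""
    u = np.zeros((2 ** n, 2 ** n))
    for i in range(2 ** n):
        bits = [(i >> (n - 1 - q)) & 1 for q in range(n)]
        if all(bits[c] for c in controls):
            bits[target] ^= 1
        u[sum(b << (n - 1 - q) for q, b in enumerate(bits)), i] = 1
    return u

cnot_32 = cx(controls=[2], target=1)     # photon 3 controls, photon 2 flips
toffoli = cx(controls=[0, 1], target=2)  # photons 1,2 control, photon 3 flips
fredkin = cnot_32 @ toffoli @ cnot_32

# Expected: swap photons 2 and 3 exactly when photon 1 is |R> (bit 1)
expected = np.zeros((8, 8))
for i in range(8):
    a, b, c = (i >> 2) & 1, (i >> 1) & 1, i & 1
    expected[(a << 2) | (c << 1) | b if a else i, i] = 1
print(np.allclose(fredkin, expected))  # True
```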


Dense subgraph identification

Graphs can be used to model a wide variety of systems, including social networks, websites, financial markets, and biological networks. A common problem of interest is to identify dense subgraphs, i.e., subgraphs that contain a large number of connections between their nodes. Dense subgraphs represent subsets of nodes that are highly connected, which may correspond, for example, to communities in social networks or to mutually influential proteins in a biological network. In a nutshell, dense subgraphs are the highly correlated regions of graphs, and finding dense subgraphs is relevant whenever identifying such correlations is important.
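Density here is simply the number of edges present divided by the number possible. A brute-force sketch for small graphs (function names ours), exhaustively searching for the densest k-node subgraph:

```python
import itertools

def density(nodes, edges):
    """Edge density of the subgraph induced by `nodes`:
    edges present / edges possible."""
    nodes = set(nodes)
    present = sum(1 for u, v in edges if u in nodes and v in nodes)
    possible = len(nodes) * (len(nodes) - 1) / 2
    return present / possible if possible else 0.0

def densest_k_subgraph(n_nodes, edges, k):
    """Exhaustively search all k-node subsets; exponential, toy graphs only."""
    return max(itertools.combinations(range(n_nodes), k),
               key=lambda s: density(s, edges))

# Triangle 0-1-2 plus a tail 2-3-4
edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4)]
best = densest_k_subgraph(5, edges, 3)
print(best, density(best, edges))  # (0, 1, 2) 1.0
```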


A clique is a subgraph that contains all possible edges between its nodes; it is a complete graph with unit density. The maximum clique problem, or max clique for short, is the task of finding the largest clique in a graph. Max clique is NP-hard and its decision version is one of the most widely studied NP-complete problems. Applications of max clique have been known for decades, and new applications continue to be discovered in a wide range of disciplines such as bioinformatics, social network analysis, finance, flight scheduling, and telecommunications.
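For intuition, max clique can be solved exactly by exhaustive search on toy instances, which also makes its exponential cost obvious (a sketch, names ours):

```python
import itertools

def max_clique(adj):
    """Largest clique in an undirected graph given as {node: set(neighbours)}.
    Checks subsets from largest to smallest; exponential time (NP-hard)."""
    nodes = list(adj)
    for k in range(len(nodes), 0, -1):
        for subset in itertools.combinations(nodes, k):
            if all(v in adj[u] for u, v in itertools.combinations(subset, 2)):
                return set(subset)
    return set()

# Complete graph on {0,1,2,3} with a pendant node 4
adj = {0: {1, 2, 3, 4}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {0, 1, 2}, 4: {0}}
print(max_clique(adj))  # {0, 1, 2, 3}
```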


Graph isomorphism is the problem of determining whether two graphs G and G′ are isomorphic, i.e., whether they are identical up to a permutation of their nodes. Mathematically, a graph isomorphism is a bijective mapping f from the node set of G to the node set of G′ such that an edge (i, j) exists in G if and only if the edge (f(i), f(j)) exists in G′. Two graphs are isomorphic if such an isomorphism exists.
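This definition translates directly into a brute-force check over all node permutations, which is factorial-time but illustrates the problem (a sketch, function name ours):

```python
import itertools

def are_isomorphic(n, edges_g, edges_h):
    """True if some bijection f on {0..n-1} maps the edge set of G onto
    that of G': (i, j) in G  <=>  (f(i), f(j)) in G'. Factorial time."""
    g = {frozenset(e) for e in edges_g}
    h = {frozenset(e) for e in edges_h}
    if len(g) != len(h):
        return False
    return any({frozenset((f[i], f[j])) for i, j in g} == h
               for f in itertools.permutations(range(n)))

c4 = [(0, 1), (1, 2), (2, 3), (3, 0)]  # a 4-cycle
print(are_isomorphic(4, c4, [(2, 0), (0, 3), (3, 1), (1, 2)]))  # True: relabelled cycle
print(are_isomorphic(4, c4, [(0, 1), (1, 2), (2, 3), (0, 2)]))  # False: triangle + tail
```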


Molecules absorb light at frequencies that depend on the allowed energy transitions between different states. These transitions can be determined by both electronic and vibrational degrees of freedom, in which case the absorption lines are referred to as the vibronic spectrum of the molecule. The absorption spectra of molecules are relevant, for example, in determining their usage in photovoltaics or as dyes in industrial processes.




