diff --git "a/stack_exchange/QC/QCQ&A 2018.csv" "b/stack_exchange/QC/QCQ&A 2018.csv" new file mode 100644--- /dev/null +++ "b/stack_exchange/QC/QCQ&A 2018.csv" @@ -0,0 +1,38035 @@ +Id,PostTypeId,AcceptedAnswerId,ParentId,CreationDate,DeletionDate,Score,ViewCount,Body,OwnerUserId,OwnerDisplayName,LastEditorUserId,LastEditorDisplayName,LastEditDate,LastActivityDate,Title,Tags,AnswerCount,CommentCount,FavoriteCount,ClosedDate,CommunityOwnedDate,ContentLicense +2036,1,2038,,05-12-2018 21:57,,20,2700,"

Given that the global phases of states cannot be physically discerned, why is it that quantum circuits are phrased in terms of unitaries and not special unitaries? One answer I got was that it is just for convenience but I'm still unsure.

+ +

A related question is this: are there any differences in the physical implementation of a unitary $U$ (mathematical matrix) and $V := e^{i\alpha}U$, say in terms of some elementary gates? Suppose there isn't (which is my understanding). Then the physical implementation of $c\text{-}U$ and $c\text{-}V$ should be the same (just add controls to the elementary gates). But then I get into the contradiction that $c\text{-}U$ and $c\text{-}V$ of these two unitaries may not be equivalent up to phase (as mathematical matrices), so it seems plausible they correspond to different physical implementations.

+ +

What have I done wrong in my reasoning here, because it suggests now that $U$ and $V$ must be implemented differently even though they are equivalent up to phase?

+ +

Another related question (in fact the origin of my confusion, I'd be extra grateful for an answer to this one): it seems that one can use a quantum circuit to estimate both the modulus and phase of the complex overlap $\langle\psi|U|\psi\rangle$ (see https://arxiv.org/abs/quant-ph/0203016). But doesn't this imply again that $U$ and $e^{i\alpha}U$ are measurably different?

+",2375,,26,,5/13/2018 8:41,5/14/2018 15:24,Why are quantum gates unitary and not special unitary?,,4,3,,,,CC BY-SA 4.0 +2037,2,,2035,5/13/2018 5:23,,1,,"

The best I have is this generic answer, which I put here for clarity, hoping for improvements/corrections or even to be superseded by something better:

+ +
+

If the limiting factor for fidelity in a given architecture+algorithm is the single-qubit gates, the two-qubit gates, or the measurement, and if this limiting factor is not optimized at a ZEFOZ point, then the overall fidelity will not be optimal at the ZEFOZ point.

+
+",1847,,,,,5/13/2018 5:23,,,,0,,,,CC BY-SA 4.0 +2038,2,,2036,5/13/2018 5:23,,16,,"

Even if you only limit yourself to special-unitary operations, states will still accumulate global phase. For example, $Z = \begin{bmatrix} i & 0 \\ 0 & -i \end{bmatrix}$ is special-unitary but $Z \cdot |0\rangle = i |0\rangle \neq |0\rangle$.

+ +
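To see this concretely, here is a small NumPy check (an added illustration, not part of the original answer):

```python
import numpy as np

# Special-unitary stand-in for the Pauli Z gate: det = (i)(-i) = 1
Z_su = np.array([[1j, 0], [0, -1j]])

assert np.isclose(np.linalg.det(Z_su), 1)   # special unitary
ket0 = np.array([1, 0], dtype=complex)
# |0> still picks up a global phase of i
assert np.allclose(Z_su @ ket0, 1j * ket0)
```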

If states are going to accumulate unobservable global phase anyways, what benefit do we get out of limiting ourselves to special unitary operations?

+ +
+

are there any differences in the physical implementation of a unitary $U$ (mathematical matrix) and $V :=e^{i\alpha}U$, say in terms of some elementary gates?

+
+ +

As long you're not doing anything that could make the global phases relevant, they can have the same implementation. But if you're going to do something like, uh-

+ +
+

add controls to the elementary gates

+
+ +

Yeah, like that. If you do stuff like that, then you can't ignore global phases. Controls turn global phases into relative phases. If you want to completely ignore global phase, you can't have a black box ""add a control"" operation modifier.
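A small NumPy demonstration of this point (an added sketch, with $U = X$ chosen arbitrarily): while $U$ and $e^{i\alpha}U$ act identically up to global phase, their controlled versions are not related by any global phase.

```python
import numpy as np

alpha = 0.7
U = np.array([[0, 1], [1, 0]], dtype=complex)  # X gate, an arbitrary choice
V = np.exp(1j * alpha) * U                     # same gate up to global phase

def controlled(G):
    """|0><0| (x) I + |1><1| (x) G on (control, target)."""
    cG = np.eye(4, dtype=complex)
    cG[2:, 2:] = G
    return cG

# On any state, U and V differ only by the unobservable phase e^{i alpha}:
psi = np.array([0.6, 0.8], dtype=complex)
assert np.isclose(np.vdot(U @ psi, V @ psi), np.exp(1j * alpha))

# But c-U and c-V are not equal up to global phase: if they were,
# |Tr(cU^dag cV)| would equal the dimension, 4.
cU, cV = controlled(U), controlled(V)
assert abs(np.trace(cU.conj().T @ cV)) < 4 - 1e-6
```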

+",119,,,,,5/13/2018 5:23,,,,2,,,,CC BY-SA 4.0 +2039,2,,2030,5/13/2018 9:44,,17,,"

The first thing to do is to think topologically: make sure you understand why a coffee cup is the same thing topologically as a donut.

+ +

Now, imagine we swap two identical particles, and do it again, so that we are back where we started. Apply this topological thinking to the paths taken by the particles: it is the same as doing nothing.

+ +

Here I show a picture of this, where one particle is dragged around another particle. Topologically, the path taken can be deformed back to the ""do nothing"" path.

+ +

+ +

The square root of this operation is a swap:

+ +

+ +

Since the square root of 1 is either +1 or -1, a swap affects the state by multiplying by either +1 (for bosons) or -1 (for fermions.)

+ +

To understand anyons, we are going to do the same analysis, but with one less dimension. So now a particle winding around another particle is not topologically the same as the ""do nothing"" operation:

+ +

+ +

We need the extra third dimension to untangle the path of the anyon, and since we can't do this topologically, the state of the system could be modified by such a process.

+ +

Things get more interesting as we add particles. With three anyons, the paths taken can get tangled, or braided in arbitrary ways. To see how this works it helps to use three dimensions: two space dimensions and one time dimension. Here is an example of three anyons wandering around and then returning back where they started:

+ +

+ +

Long before physicists started to think about anyons, the mathematicians already worked out how these braiding processes combine to form new braids or undo braids. These are known as ""braid groups"" in work that dates back to Emil Artin in 1947.

+ +

Like the distinction between bosons and fermions above, different anyon systems will behave differently when you do these braid operations. One example, known as the Fibonacci anyon, is able to approximate any quantum operation just by doing these kinds of braids. And so theoretically we could use these to build a quantum computer.

+ +

I wrote an introductory paper on anyons, which is where I got these pictures from: https://arxiv.org/abs/1610.05384. There's more mathematics there, as well as a description of a close cousin of anyon theory known as a ""modular functor"".

+ +

Here is another good reference, with more Fibonacci anyon goodness: Introduction to topological quantum computation with non-Abelian anyons +

+ +

EDIT: I see that I didn't say anything about the observables. +The observables of the system measure the total anyon content within a region. In terms of anyon paths we can think of this as bringing all the anyons in some region together and ""fusing"" them into one anyon, which may be the ""no anyon"" aka vacuum state. For a system supporting Fibonacci anyons there will only ever be two outcomes for such a measurement: Fibonacci anyon or vacuum. Another example is the toric code where there are four anyon outcomes.

+",263,,263,,5/15/2018 10:11,5/15/2018 10:11,,,,2,,,,CC BY-SA 4.0 +2040,2,,2036,5/13/2018 13:33,,13,,"

The fact that quantum gates are unitary is rooted in the fact that the evolution of (closed) quantum systems is governed by the Schrödinger equation. For a time interval in which we are trying to realise a particular unitary transformation at a constant rate, we use the time-independent Schrödinger equation:

+ +

$$ \tfrac{\mathrm d}{\mathrm dt} \lvert \psi(t) \rangle = \tfrac {1}{i\hbar}H \lvert \psi(t) \rangle, $$

+ +

where $H$ is the Hamiltonian of the system: a Hermitian matrix, whose eigenvalues describe energy eigenvalues. In particular, the eigenvalues of $H $ are real. The solution to this equation is

+ +

$$ \lvert \psi(t) \rangle = \exp\bigl(-i H t/\hbar\bigr) \lvert \psi(0) \rangle $$ +where $U = \exp(-iHt/\hbar)$ is the matrix which you obtain by taking the eigenvectors of $H$, and replacing their eigenvalues $E$ with $\mathrm{e}^{-iEt/\hbar}$. Thus, from a matrix with real eigenvalues, we get a matrix whose eigenvalues are complex numbers with unit norm.

+ +

What would it take for this evolution to specifically be a special unitary matrix? A special unitary matrix is one whose determinant is precisely $1$; that is, whose eigenvalues all multiply to $1$. This corresponds to the restriction that the eigenvalues of $H$ all sum to zero. Furthermore, because the eigenvalues of $H$ are energy levels, whether the sum of its eigenvalues is equal to zero depends on how you have decided to fix what your zero energy point is — which is in effect a subjective choice of reference frame. (In particular, if you decide to adopt the convention that all of your energy levels are non-negative, this implies that no interesting system will ever have the property of the energy eigenvalues summing to zero.)

+ +
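Both points — unitarity coming from Hermiticity, and the determinant tracking the (conventional) zero of energy — can be checked numerically. A NumPy sketch with an arbitrary toy Hamiltonian and $\hbar = 1$ (an added example):

```python
import numpy as np

H = np.array([[2.0, 0.5], [0.5, 1.0]])   # toy Hermitian Hamiltonian, hbar = 1
t = 2.0

# Build U = exp(-iHt) from the eigenvectors of H, replacing each
# eigenvalue E by e^{-iEt}
E, W = np.linalg.eigh(H)
U = W @ np.diag(np.exp(-1j * E * t)) @ W.conj().T

assert np.allclose(U.conj().T @ U, np.eye(2))            # unitary
# det(U) = e^{-it sum(E)}: U is special unitary only if the energies sum
# to zero, which depends on where the zero of energy is placed
assert np.isclose(np.linalg.det(U), np.exp(-1j * t * E.sum()))
assert not np.isclose(np.linalg.det(U), 1)               # here sum(E) = 3
```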

In short, gates are unitary rather than special unitary, because the determinant of a gate does not correspond to physically meaningful properties — in the explicit sense that the gate arises from the physics, and the conditions which correspond to the determinant of the gate being 1 is a condition of one's own reference frame and not the physical dynamics.

+",124,,124,,5/13/2018 14:38,5/13/2018 14:38,,,,0,,,,CC BY-SA 4.0 +2041,1,2042,,5/13/2018 14:09,,10,639,"

Question: Given a unitary matrix acting on $n$ qubits, can we find the shortest sequence of Clifford + T gates that correspond to that unitary?

+ +

For background on the question, two important references:

+ +
    +
  1. Fast and efficient exact synthesis of single qubit unitaries generated by Clifford and T gates by Kliuchnikov, Maslov, and Mosca
  2. +
  3. Exact synthesis of multiqubit Clifford+T circuits by Giles and Selinger.
  4. +
+",1860,,1860,,5/13/2018 15:07,08-07-2018 21:07,Shortest sequence of universal quantum gates that correspond to a given unitary,,2,1,0,,,CC BY-SA 4.0 +2042,2,,2041,5/13/2018 15:12,,10,,"

Getting an optimal decomposition is definitely an open problem. (And, of course, the decomposition is intractable, $\exp(n)$ gates for large $n$.) A ""simpler"" question you might ask first is what is the shortest sequence of CNOTs and single-qubit rotations by any angle (what IBM, Rigetti, and soon Google currently offer; this universal basis of gates can be expressed in terms of your basis of Cliffords and T gates). This ""simpler"" question is also open and has a non-unique answer. A related question is what is an exact optimal decomposition of gates from a universal basis to go from the ground state to a given final state.

+ +

I am assuming you are referring to exact decompositions. If you want approximate decompositions, there are different methods for that, such as the Trotter-Suzuki decomposition, or approximating an exact decomposition.

+ +

The ""quantum csd compiler"" in Qubiter does a non-optimized decomposition of any n qubit unitary into cnots and single qubit rots using the famous csd (Cosine-Sine Decomposition) subroutine from LAPACK. Some enterprising person could try to find optimizations for Qubiter's quantum compiler. You can use Qubiter's compiler, for example (I wrote a paper on this), to let your classical computer re-discover Coppersmith's quantum Fourier Transform decomposition!

+ +
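For the curious, SciPy (≥ 1.5) exposes the same LAPACK cosine-sine decomposition, so the core step can be tried directly. This is an added sketch, unrelated to Qubiter's actual code:

```python
import numpy as np
from scipy.linalg import cossin

# Random 4x4 (two-qubit) unitary via QR of a complex Gaussian matrix
rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
Q, _ = np.linalg.qr(A)

# CSD with a 2+2 block split: Q = U @ CS @ Vh, where U and Vh are
# block-diagonal unitaries and CS holds the cosine/sine blocks
U, CS, Vh = cossin(Q, p=2, q=2)
assert np.allclose(U @ CS @ Vh, Q)
```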

Qubiter is open source and available at github (full disclosure - I wrote it).

+",1974,,91,,08-07-2018 21:07,08-07-2018 21:07,,,,1,,,,CC BY-SA 4.0 +2043,1,2044,,5/13/2018 15:17,,6,335,"
    +
  1. Consider the state $|X\rangle = \sqrt{0.9} |00\rangle + \sqrt{0.1} |11\rangle$, shared between Alice and Bob, who are located far apart.

  2. +
  3. Alice brings in an ancilla qubit at her location (left-most qubit in the kets): $|X\rangle = \sqrt{0.9} |000\rangle + \sqrt{0.1} |011\rangle$.

  4. +
  5. Now Alice performs a CNOT gate with the control being her entangled qubit, and the target being the ancilla: $|X\rangle = \sqrt{0.9} |000\rangle + \sqrt{0.1} |111\rangle$.

  6. +
  7. Then Alice measures the ancilla in the basis $\{\sqrt{0.1} |0\rangle + \sqrt{0.9} |1\rangle , \sqrt{0.9} |0\rangle - \sqrt{0.1} |1\rangle\}$. Supposing the measurement outcome is $+1$, i.e., the ancilla collapsed to the state $\sqrt{0.1} |0\rangle + \sqrt{0.9} |1\rangle$ , the remaining state of the initial $2$ qubits will be $|X\rangle = \sqrt{0.1 \times 0.9} |00\rangle + \sqrt{0.9 \times 0.1} |11\rangle$, which is the maximally entangled state up to a normalization factor.

  8. +
  9. We started from a state that was not maximally entangled, and we were able to boost the entanglement by doing a local measurement and post-selecting on the outcome.

  10. +
+ +
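The steps above can be verified numerically. A NumPy sketch (added; basis ordered $|a, A, B\rangle$ with the ancilla $a$ leftmost, as in the question):

```python
import numpy as np

# Step 2: |X> = sqrt(0.9)|000> + sqrt(0.1)|011>, ancilla in the leftmost slot
psi = np.zeros(8)
psi[0b000] = np.sqrt(0.9)
psi[0b011] = np.sqrt(0.1)

# Step 3: CNOT with Alice's entangled qubit (middle) as control, ancilla target
cnot = np.zeros((8, 8))
for i in range(8):
    a, A, B = (i >> 2) & 1, (i >> 1) & 1, i & 1
    cnot[((a ^ A) << 2) | (A << 1) | B, i] = 1
psi = cnot @ psi      # sqrt(0.9)|000> + sqrt(0.1)|111>

# Step 4: project the ancilla onto sqrt(0.1)|0> + sqrt(0.9)|1>
m = np.array([np.sqrt(0.1), np.sqrt(0.9)])
chi = m @ psi.reshape(2, 4)            # unnormalized post-measurement AB state
p_success = np.vdot(chi, chi).real
chi /= np.sqrt(p_success)

# The surviving state is maximally entangled, and this outcome occurs
# with probability 2 * 0.9 * 0.1 = 0.18
assert np.allclose(chi, np.array([1, 0, 0, 1]) / np.sqrt(2))
assert np.isclose(p_success, 0.18)
```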

Is entanglement distillation using post-selection as I have described above feasible?

+",1860,,26,,5/13/2018 16:16,5/13/2018 17:27,Entanglement distillation by local operations and post-selection using one entanglement pair,,1,4,,,,CC BY-SA 4.0 +2044,2,,2043,5/13/2018 17:27,,3,,"

There are two different contexts where the term ""entanglement distillation"" is used, and are largely incomparable, even if they are conceptually extremely close (and I'm sure you'll be able to find papers that blur these boundaries).

+ +

In the first, Alice and Bob share a known quantum state which is (usually) a pure state. They use this to make a maximally entangled state with some probability. You can see this, for example, in section 12.5.1 of Nielsen & Chuang (""Transforming bi-partite pure state entanglement""). The protocol described in the question is the standard protocol in this context, except that the measurements are often expressed as POVMs instead of projective measurements in a larger Hilbert space.

+ +

In the second, one has many copies of a mixed state that one wishes to make more entangled. This situation may be referred to as ""purification"" because one of the aims is, essentially, to make the state as pure as possible. The use of the mixed state, as usual, may describe part of an entangled system, or may describe some lack of knowledge/stochastic preparation procedure etc. In essence, there is some lack of knowledge about what you have, and variation between the copies, and it is this extra complication that necessitates multiple copies. See, for example, section 12.5.3 of Nielsen & Chuang (""Entanglement distillation and quantum error-correction"").

+",1837,,,,,5/13/2018 17:27,,,,0,,,,CC BY-SA 4.0 +2045,2,,2036,5/14/2018 6:03,,5,,"

When writing gates for, for example, a quantum circuit diagram, you could always write them using the convention of having determinant one (from the special unitary group), but it's just a convention. It makes no physical difference to the circuit that you implement. As said elsewhere, whether what you naturally produce corresponds directly to the special unitary is really a choice of convention, and where you define your 0 energy to be.

+ +

As for the issue when you start implementing controlled-$U$, there is an interesting comparison to be made. Let's say we define $V=e^{i\alpha}U$. How can we implement controlled-$V$ in terms of controlled-$U$? You apply controlled-$U$ and then, on the control qubit, you apply the phase gate $\left(\begin{array}{cc} 1 & 0 \\ 0 & e^{i\alpha} \end{array}\right)$. There are two things to observe here. First, the difference is on the control qubit rather than the target qubit. The target qubit, where you're implementing the $U$, doesn't really care about the difference in phase. It's the control qubit that's hit by the phase gate. The second is that I didn't write the phase gate as a special unitary. Of course, I could have written it as $\left(\begin{array}{cc} e^{-i\alpha/2} & 0 \\ 0 & e^{i\alpha/2}\end{array}\right)$ but I didn't because the way that I chose to write it was notationally more convenient - less writing for me, and hopefully more immediately obvious to you why it works.
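This identity is quick to verify numerically (an added NumPy sketch, with $U = Y$ and $\alpha = 1.1$ chosen arbitrarily):

```python
import numpy as np

alpha = 1.1
U = np.array([[0, -1j], [1j, 0]])        # Pauli Y, an arbitrary example
V = np.exp(1j * alpha) * U

def controlled(G):
    """|0><0| (x) I + |1><1| (x) G on (control, target)."""
    cG = np.eye(4, dtype=complex)
    cG[2:, 2:] = G
    return cG

# Phase gate diag(1, e^{i alpha}) applied to the control qubit only
P = np.kron(np.diag([1, np.exp(1j * alpha)]), np.eye(2))

assert np.allclose(P @ controlled(U), controlled(V))
```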

+",1837,,,,,5/14/2018 6:03,,,,0,,,,CC BY-SA 4.0 +2046,2,,2041,5/14/2018 7:46,,4,,"

Suppose that an exact synthesis was possible for your provided unitary (i.e., the number-theoretic restrictions on its entries are satisfied) and so the algorithms described in the question gave you a sequence of Clifford+T gates that implemented that unitary. As stated in the Giles-Selinger paper, you get a sequence that is very far from optimal. So at this point you have reduced to the word problem in the group generated by the Clifford+T gate set. Some groups have algorithms that shorten a given word, while still representing the same element of the group, into a normal form that is the shortest within its class. Others do not.

+ +

More details to illustrate the principle: Let us say there are $2$ qubits. Write $S_1$ for the generator that applies the phase gate to qubit $1$, $CNOT_{12}$ for the CNOT with qubit $1$ as the control, and so on. Each one of these is treated as a letter. The algorithm will spit out some word in these generators. The group is the one with these generators and relations like $S_i^4=1$ and $X_i Y_j = Y_j X_i$ when $i \neq j$, among many others. So this defines some finitely generated group. Because the word from the provided algorithms has not been optimized, the task is to find a convenient, shortest-possible normal form in the word problem for this group. So if given the word $S_1 S_1 S_2 S_1 S_1$ one could use the relation $S_1 S_2 = S_2 S_1$ twice and the $S_1^4=1$ relation once to get $S_2$ as a shorter word that represents the same group element. For a given group presentation, one would like an algorithm that takes an arbitrary word and reduces it. In general this is not possible.

+ +
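As a toy illustration of this kind of rewriting (added; this handles only the abelian fragment generated by $S_1$ and $S_2$, nothing like a full solution of the word problem):

```python
from collections import Counter

def reduce_word(word):
    """Shorten a word in S1, S2 using only S1 S2 = S2 S1 and S_i^4 = 1."""
    counts = Counter(word)                       # commute all letters together
    out = []
    for letter in ('S1', 'S2'):
        out += [letter] * (counts[letter] % 4)   # apply S_i^4 = 1
    return out

# The example from the text: S1 S1 S2 S1 S1 reduces to S2
assert reduce_word(['S1', 'S1', 'S2', 'S1', 'S1']) == ['S2']
```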

Disclaimer for below: Forthcoming project/Haskell implementation joint w/ Jon Aytac.

+ +

I don't know about the solvability of the word problem for the Clifford+T gate set, but one can do something simpler with only the involutions (call them $r_i$) in that set and only the relations of the form $(r_i r_j)^{m_{ij}}=1$. That is a Coxeter group related to the Clifford+T gate set, but with an efficiently solvable word problem. So one may take the result of the Giles-Selinger algorithm and potentially shorten it using only these very simple relations (after looking at segments with only those involution letters). In fact any algorithm that takes a given unitary and approximates or exactly synthesizes it into Clifford+T can be fed into this procedure to potentially shorten it slightly.

+",434,,434,,5/15/2018 4:40,5/15/2018 4:40,,,,0,,,,CC BY-SA 4.0 +2047,1,2048,,5/14/2018 8:18,,9,919,"

I understand that there are two ways to think about 'general quantum operators'.

+ +

Way 1

+ +

We can think of them as trace-preserving completely positive operators. These can be written in the form +$$\rho'=\sum_k A_k \rho A_k^\dagger \tag{1}$$ +where $A_k$ are called Kraus operators.

+ +

Way 2

+ +

As given in (An Introduction to Quantum Computing by Kaye, Laflamme and Mosca, 2010; pg59) we have +$$\rho'=\mathrm{Tr}_B\left\{ U(\rho \otimes \left| 00\ldots 0\right>\left<00\ldots 0 \right|) U^\dagger \right\} \tag{2}$$ +where $U$ is a unitary matrix and the ancilla $\left|00 \ldots 0\right>$ has size at most $N^2$.

+ +

Question

+ +

Exercise 3.5.7 (in Kaye, Laflamme and Mosca, 2010; pg60) gets you to prove that operators defined in (2) are completely positive and trace preserving (i.e. can be written as (1)). My question is the natural inverse of this; can we show that any completely positive, trace preserving map can be written as (2)? I.e. are (1) and (2) equivalent definitions of a 'general quantum operator'?

+",2015,,55,,5/31/2021 15:03,5/13/2022 8:49,Is the Kraus representation of a quantum channel equivalent to a unitary evolution in an enlarged space?,,2,0,,,,CC BY-SA 4.0 +2048,2,,2047,5/14/2018 8:59,,11,,"

This question is posed, and answered positively, in Nielsen & Chuang in a subsection of chapter 8 entitled ""System-environment models for any operator-sum representation"". In my version, it can be found on page 365.

+ +

Imagine $|\psi\rangle$ is an arbitrary pure state on the space upon which you wish to enact the operators. Let $|e_0\rangle$ be some fixed state on another quantum system (with dimension equal to at least the number of Kraus operators, and labelled 'B'). Then you can define a unitary by its action on the space of states spanned by $|\psi\rangle$: +$$ +U|\psi\rangle|e_0\rangle=\sum_k(A_k|\psi\rangle)|e_k\rangle, +$$ +where the $|e_k\rangle$ are an orthonormal basis. To check that this corresponds to a valid unitary, we just have to test it for different input states and ensure that the initial overlap is preserved: +$$ +\langle\psi|\phi\rangle\langle e_0|e_0\rangle=\langle\psi|\langle e_0|U^\dagger U|\phi\rangle|e_0\rangle=\langle\psi|\sum_kA_k^\dagger A_k|\phi\rangle, +$$ +which is true thanks to the completeness relation of the Kraus operators.

+ +

Finally, one just has to check that this unitary does indeed implement the claimed map: +$$ +\text{Tr}_B\left(U|\psi\rangle\langle \psi|\otimes|e_0\rangle\langle e_0|U^\dagger\right)=\sum_kA_k|\psi\rangle\langle\psi|A_k^\dagger. +$$
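This construction can be checked numerically for a concrete channel. A NumPy sketch (added; phase damping with $p = 0.3$ chosen as the example channel, environment ordered second):

```python
import numpy as np

# Kraus operators of a phase-damping channel (example choice)
p = 0.3
A_ops = [np.sqrt(1 - p) * np.eye(2), np.sqrt(p) * np.diag([1.0, -1.0])]

# Isometry |psi>|e_0> -> sum_k (A_k|psi>)|e_k>, with the environment second:
# V = sum_k A_k (x) |e_k>
V = sum(np.kron(A_k, np.eye(2)[:, [k]]) for k, A_k in enumerate(A_ops))

# V preserves overlaps because sum_k A_k^dag A_k = I
assert np.allclose(V.conj().T @ V, np.eye(2))

# Tracing out the environment reproduces the operator-sum map
rho = np.array([[0.6, 0.2], [0.2, 0.4]], dtype=complex)
big = (V @ rho @ V.conj().T).reshape(2, 2, 2, 2)   # indices (s, e, s', e')
channel_out = np.einsum('aibi->ab', big)           # partial trace over env
expected = sum(A_k @ rho @ A_k.conj().T for A_k in A_ops)
assert np.allclose(channel_out, expected)
```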

+",1837,,,,,5/14/2018 8:59,,,,0,,,,CC BY-SA 4.0 +2049,1,,,5/14/2018 10:19,,13,927,"

The feature of quantum error correcting codes called degeneracy is that they can sometimes be used to correct more errors than they can uniquely identify. It seems that codes exhibiting this characteristic are able to outperform quantum error correction codes that are nondegenerate.

+ +

I am wondering if there exists some kind of measure or classification method to determine how degenerate a quantum code is, and whether there has been any study trying to determine the error correction abilities of quantum codes depending on their degeneracy.

+ +

Apart from that, it would be interesting to have references or some intuition about how to construct good degenerate codes, or just references on the current state of the art on these issues.

+",2371,,55,,11/30/2021 22:27,02-08-2022 21:49,Degeneracy of Quantum Error Correction Codes,,2,0,,,,CC BY-SA 4.0 +2050,2,,2049,5/14/2018 13:10,,6,,"

I don't have a complete answer, but perhaps others can improve on this starting point.

+ +

There are probably 3 things to ask about the code:

+ +
    +
  1. How degenerate is it?

  2. +
  3. How hard is it to perform the classical post-processing of the error syndrome in order to determine which corrections to make?

  4. +
  5. What are its error correcting/fault-tolerant thresholds?

  6. +
+ +

I suppose a simple enough measure of degeneracy is the extent to which the Quantum Hamming Bound is surpassed. A non-degenerate $[[N,k,d]]$ code must satisfy:

+ +

$$2^{N-k}\geq\sum_{n=0}^{\lfloor d/2\rfloor}3^n\binom{N}{n}$$

+ +
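For concreteness, the bound is easy to evaluate (a small added check, using the sum to $\lfloor d/2\rfloor$ as written above):

```python
from math import comb

def hamming_bound(N, k, d):
    """Return (2^(N-k), sum_{n=0}^{floor(d/2)} 3^n C(N, n))."""
    rhs = sum(3**n * comb(N, n) for n in range(d // 2 + 1))
    return 2**(N - k), rhs

# The non-degenerate [[5,1,3]] code saturates the bound exactly: 16 = 16
assert hamming_bound(5, 1, 3) == (16, 16)

# The [[9,1,3]] Shor code (which is degenerate) still satisfies it: 256 >= 28,
# so satisfying the bound alone does not rule out degeneracy
assert hamming_bound(9, 1, 3) == (256, 28)
```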

So the amount by which that bound is violated suggests something interesting about how densely the information is packed. Of course, that's no use if your mega-degenerate code cannot actually correct for any errors. Similarly, if your code is in principle awesome, but it takes too long to actually work out what the correction is, your code isn't really any use in practice because errors will continue to accumulate as you try to work out what corrections to do.

+",1837,,26,,5/14/2018 16:13,5/14/2018 16:13,,,,4,,,,CC BY-SA 4.0 +2051,1,2052,,5/14/2018 13:19,,7,147,"

I submitted a job in the 0.5.0 version of QISKit using

+ +
job = execute(qc, 'ibmqx5', shots=shots)
+
+ +

This just submits the job, and does not wait for a result.

+ +

I then immediately tested whether the job was still running using

+ +
print(job.running)
+
+ +

This gave the result False. However, when I requested the result using

+ +
job.result()
+
+ +

This still took a while to get the result, suggesting that the job actually was still running after all. What is going on here?

+",409,,,,,5/14/2018 13:25,"Why does job.running in QISKit output False, even if the job is still running?",,1,0,,,,CC BY-SA 4.0 +2052,2,,2051,5/14/2018 13:19,,4,,"

There are three stages that the job goes through, as you'll see if you also print the status using print(job.status).

+ +

The first is an initialization stage. This returns False for job.running, because it hasn't started running yet.

+ +

Then your job actually will run, and so give True for job.running. Finally it will have finished running, and so job.running goes back to False.

+ +

So don't use job.running to test whether a result is ready.
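In other words, `running` is `False` in two very different situations. A tiny illustration of the pitfall (added; the state names here are illustrative, not the exact strings returned by the API):

```python
# A job passes through three stages; `running` is False both before it has
# started and after it has finished, so it cannot tell you a result is ready.
states = ['INITIALIZING', 'RUNNING', 'DONE']
running = [s == 'RUNNING' for s in states]
assert running == [False, True, False]

# Poll for the terminal stage instead of polling `not running`
done = [s == 'DONE' for s in states]
assert done == [False, False, True]
```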

+",409,,409,,5/14/2018 13:25,5/14/2018 13:25,,,,0,,,,CC BY-SA 4.0 +2053,2,,2036,5/14/2018 15:24,,1,,"

Plain and simple answer: In the absence of decoherence, state vectors evolve according to $|\psi(t)\rangle = e^{-iHt}|\psi(0)\rangle$ for a Hamiltonian $H$. This is what a ""gate"" is doing. Hamiltonians have to be Hermitian, so this transformation is unitary. Hamiltonians do not have to have eigenvalues that sum to 0, so the transformation does not have to be special unitary.

+",2293,,,,,5/14/2018 15:24,,,,0,,,,CC BY-SA 4.0 +2054,1,,,5/14/2018 18:37,,25,585,"

In a comment on my answer to the question: What exactly are anyons and how are they relevant to topological quantum computing? I was asked to give specific examples of occurrence of anyons in nature. I've spent 3 days searching, but every article refers to either "proposed experiments" or "nearly definitive evidence".

+

Abelian anyons:

+

Fractional charges have been measured directly since 1995, but in my search, all articles pointing to evidence of fractional statistics or an exchange factor $e^{i\theta}\ne\pm1$, point to this nearly 7-year old pre-print, where they say in the abstract that they "confirm" detecting the theoretically predicted phase of $\theta =2\pi/3$ in the $\nu=7/3$ state of a quantum Hall system. However, the paper seems to have never passed a journal's peer review. There is no link to a journal DOI on arXiv. On Google Scholar I clicked "see all 5 versions", but all 5 were arXiv versions. I then suspected the article's name might have changed at the time of publication so went hunting for it on the authors' websites. The last author has Princeton University's Department of Electrical Engineering listed as affiliation, but does not show up on that department's list of people (after clicking on "People", I clicked on "Faculty", "Technical", "Graduate Students", "Administrative", and "Research Staff" but nothing showed up). The same happened for the second-last author! The third-last author does have a lab website with a publication list, but nothing like this paper appears in the "Selected Publications out of more than 800" page. The fourth-last author is at a different university, but his website's publication list is given as a link to his arXiv page (still no published version visible). The 5th last, 6th last, and 7th last authors have an affiliation of James Franck Institute and Department of Physics at the University of Chicago, but none of their three names shows up on either website's People pages. One of the authors also has affiliation at a university in Taiwan, and her website there lists publications co-authored with some of the people from the pre-print in question, but never anything with a similar title or with a similar enough author list. 
Interestingly, even her automatically generated but manually adjustable Google Scholar page does not have even the arXiv version but does have earlier papers (with completely different titles and no mention of anyons) with some of the co-authors. That covers all authors. No correspondence emails were made available.

+

1. Is this pre-print the only claim of confirming an exchange factor $\ne\pm1$ ?
+2. If yes, what is wrong with their claimed confirmation of this? (It appears to have not passed any journal's peer review, and it also appears that an author has even taken down the arXiv version from her Google Scholar page).

+

Non-abelian anyons:

+

I found here this quote: "Experimental evidence of non-abelian anyons, although not yet conclusive and currently contested [12] was presented in October 2013 [13]." The abstract of [12] says that the experiment in [13] is inconsistent with a plausible model and that the authors of [13] may have measured "Coulomb effects" rather than non-Abelian braiding. Interestingly the author list of [13] overlaps with the pre-print mentioned in the Abelian section of this question, though that pre-print was from 2 years earlier and said in the abstract "Our results provide compelling support for the existence of non-Abelian anyons" which is a much weaker statement than what they say in the same abstract for the Abelian case: "We confirm the Abelian anyonic braiding statistics in the $\nu=7/3$ FQH state through detection of the predicted statistical phase angle of $2\pi/3$, consistent with a change of the anyonic particle number by one."

+",2293,,2293,,7/20/2020 22:27,3/23/2022 16:03,What is the status of confirming the existence of anyons?,,2,2,,,,CC BY-SA 4.0 +2055,2,,2054,5/14/2018 20:50,,8,,"

It depends what you mean by the 'existence' of anyons.

+ +

One way is to engineer a Hamiltonian which leads to quasiparticles (or other defects) that have anyonic statistics. This will require the Hamiltonian to be implemented, the system to be cooled to sufficiently near the ground state, the anyons to be manipulated, etc. So there's a lot to be done, and I don’t think that the development of the systems required has a lot of other applications. So it suffers from being both hard to do, and quite a niche.

+ +

Hopefully, someone else will give you the answers you want on this kind of approach. However, I thought it is important to note that there is another way to get anyons. This is to not bother with the Hamiltonian. Instead, the eigenstates can be prepared and manipulated directly.

+ +

In this case, you aren’t getting any topological protection from the Hamiltonian. Instead, measurements are constantly made of what eigenstate you are in, in order to detect and help you mitigate the unwanted effects of errors.

+ +

The most realistic examples of this approach will be ones for which these operations can be easily performed on a quantum computer. All the development and progress towards building qubits and their gates can then be directly used in the search for anyons.

+ +

Anyon systems that can be easily implemented with qubits are typically a specific form of quantum error correcting code. Specifically, they are stabilizer codes for which the states of the stabilizer space are topologically ordered, and syndrome measurements correspond to measuring whether anyons are present at each point throughout the system.

+ +

The simplest example is the surface code. The basic quasiparticles of this are Abelian anyons. There have been experiments that create and manipulate these anyons to demonstrate their braiding behaviour. The first example was done over a decade ago in photonic systems.

+ +

The surface code can also host defects which behave as Majorana modes, and therefore non-Abelian anyons. I implemented a very minimal example of their braiding in this paper.

+ +

As quantum processors get larger, cleaner and more sophisticated, there will be a lot more of this kind of study. I would think that the majority of the anyons that we will see and use will be realized in this manner, rather than with an implementation of the Hamiltonian.

+",409,,26,,5/15/2018 6:02,5/15/2018 6:02,,,,8,,,,CC BY-SA 4.0 +2056,1,11472,,5/15/2018 2:06,,11,320,"

The recent McClean et al. paper Barren plateaus in quantum neural network training landscapes shows that for a wide class of reasonable parameterized quantum circuits, the probability that the gradient along any reasonable direction is non-zero to some fixed precision is exponentially small as a function of the number of qubits.

+ +

This seems to affect Noisy Intermediate-Scale Quantum (NISQ) programs (as proposed by e.g. John Preskill) since they involve hybrid quantum-classical algorithms, i.e. training a parameterized quantum circuit with a classical optimization loop. +

+ +

My question: How do you avoid getting stranded on those barren plateaus? Concretely, how would one go about building one's Ansatz Haar states to avoid getting stuck in those plateaus? The paper proposes but does not elaborate:

+ +
+

One approach to avoid these landscapes in the quantum setting is to + use structured initial guesses, such as those adopted in quantum + simulation.

+
+",2387,,26,,01-01-2019 10:15,04-09-2020 10:36,"Devising ""structured initial guesses"" for random parametrized quantum circuits to avoid getting stuck in a flat plateau",,1,2,,,,CC BY-SA 4.0 +2057,1,,,5/15/2018 4:25,,8,412,"

I am currently trying to implement a boosting algorithm akin to XGBoost with a quantum device. The reason is that I want to make use of a quantum device to train weak classifiers. However, as far as I know, the current quantum device can only be used for binary variables including both input variables and outputs.

+ +

Is it possible to use all binary variables to implement the additive training as is done in XGBoost?

+ +

XGBoost GitHub Project

+",2354,,26,,5/15/2018 6:26,10/25/2022 2:02,Gradient boosting akin to XGBoost using a quantum device,,1,4,,,,CC BY-SA 4.0 +2058,1,,,5/15/2018 7:33,,13,739,"

In the comments to a question I asked recently, there is a discussion between user1271772 and myself on positive operators.

+ +

I know that if a positive trace-preserving map $\Lambda$ (e.g. the partial transpose) acts on a mixed state $\rho$, then although $\Lambda(\rho)$ is a valid density matrix, it mucks up the density matrix of any system that $\rho$ is entangled with - hence this is not a valid operation.
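As a concrete illustration of why the partial transpose fails on entangled inputs, here is a minimal numpy sketch (the code and names are my own, not from any referenced source): the transpose map yields a valid state on a single qubit, but applied to one half of a Bell pair it produces a matrix with a negative eigenvalue.

```python
import numpy as np

# Bell state (|00> + |11>)/sqrt(2) and its density matrix
phi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
rho = np.outer(phi, phi.conj())

# Partial transpose on the second qubit: view rho with indices
# (a, b, a', b') and swap b <-> b'
rho_pt = rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)

eigs = np.linalg.eigvalsh(rho_pt)
print(np.round(eigs, 3))  # one eigenvalue is -0.5: not a valid density matrix
```

On a product state, by contrast, the same map simply transposes the second factor and returns a perfectly valid density matrix.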

+ +

This and user1271772's comments, however, got me thinking. $\Lambda$ acting on a state which is not part of a larger system does indeed give a valid density matrix and there is no associated entangled system to muck it up.

+ +

My question is, therefore: Is such an operation allowed (i.e. the action of a positive map on a state which is not part of a larger system). If not, why not? And if so, is it true that any positive map can be extended to a completely positive map (perhaps nontrivially)?

+",2015,,10480,,3/26/2021 4:43,3/26/2021 4:43,Is acting with a positive map on a state not part of a larger system allowed?,,3,9,,,,CC BY-SA 4.0 +2059,1,,,5/15/2018 8:13,,10,519,"

Long-range entanglement is characterized by topological order (some kinds of global entanglement properties), and the ""modern"" definition of topological order is that the ground state of the system cannot be prepared by a constant-depth circuit from a product state, instead of the traditional definition in terms of ground state degeneracy and boundary excitations. Essentially, a quantum state which can be prepared by a constant-depth circuit is called a trivial state.

+ +

On the other hand, quantum states with long-range entanglement are ""robust"". One of the most famous corollaries of the quantum PCP conjecture, proposed by Matt Hastings, is the No Low-energy Trivial States (NLTS) conjecture, a weaker version of which was proved by Eldar and Harrow two years ago (i.e. the NLETS theorem: https://arxiv.org/abs/1510.02082). Intuitively, the probability that a series of random errors implements exactly some log-depth quantum circuit is very small, so it makes sense that the entanglement here is ""robust"".

+ +

It seems that this phenomenon is somewhat similar to topological quantum computation. Topological quantum computation is robust against any local error since the quantum gates here are implemented by braiding operators, which are connected to some global topological properties. However, it needs to be pointed out that ""robust entanglement"" in the NLTS conjecture setting only involves the amount of entanglement, so the quantum state itself may be changed -- it does not automatically yield a quantum error-correction code from non-trivial states.

+ +

Definitely, long-range entanglement is related to homological quantum error-correction codes, such as the Toric code (it seems that it is related to Abelian anyons). However, my question is: are there some connections between long-range entanglement (or ""robust entanglement"" in the NLTS conjecture setting) and topological quantum computation? Perhaps there exist some conditions regarding when the corresponding Hamiltonian yields a quantum error-correction code.

+",1777,,1777,,5/22/2018 21:36,5/22/2018 21:36,Are there connections between long-range entanglement and topological quantum computation?,,1,2,,,,CC BY-SA 4.0 +2060,1,,,5/15/2018 8:16,,14,542,"

The quantum Hamming bound for a non-degenerate $[[N,k,d]]$ quantum error correction code is defined as:

+ +

\begin{equation} +2^{N-k}\geq\sum_{n=0}^{\lfloor d/2\rfloor}3^n\begin{pmatrix}N \\ n\end{pmatrix}. +\end{equation} +However, there is no proof stating that degenerate codes should obey such bound. I wonder if there exists any example of a degenerate code violating the quantum Hamming bound, or if there have been some advances in proving similar bounds for degenerate codes.
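For intuition, the bound as written above can be checked numerically; for instance the non-degenerate $[[5,1,3]]$ code saturates it exactly. A small Python sketch (helper names are mine):

```python
from math import comb

def hamming_rhs(N, d):
    """Right-hand side of the quantum Hamming bound as written above."""
    return sum(3**n * comb(N, n) for n in range(d // 2 + 1))

def satisfies_bound(N, k, d):
    """True if an [[N,k,d]] code obeys 2^(N-k) >= sum_n 3^n C(N,n)."""
    return 2**(N - k) >= hamming_rhs(N, d)

# The [[5,1,3]] code saturates the bound exactly: 2^4 = 16 = 1 + 3*5
print(2**(5 - 1), hamming_rhs(5, 3))   # 16 16
# The [[7,1,3]] Steane code satisfies it with room to spare: 64 >= 22
print(satisfies_bound(7, 1, 3))        # True
```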

+",2371,,55,,2/22/2021 15:59,2/22/2021 15:59,Violation of the Quantum Hamming bound,,1,0,,,,CC BY-SA 4.0 +2061,2,,2059,5/15/2018 9:29,,7,,"

There were two simultaneous PRLs published by Kitaev & Preskill and Levin & Wen that I think answer your question.

+ +

These use the area law of entanglement seen by states that can be expressed as ground states of a Hamiltonian with only local interactions.

+ +

Specifically, suppose you have a 2D system of interacting particles in a pure state. You then single out some region, and calculate the von Neumann entropy of the reduced density matrix for that region. This will essentially be a measure of how entangled the region is with its complement. The area law tells us that this entropy, $S$, should obey

+ +

$S = \alpha L - \gamma + \ldots$

+ +

Here $L$ is the length of the perimeter of the region. The first term accounts for the fact that correlations in these systems are typically short range, and so the entanglement is mostly composed of correlations between particles on each side of the boundary.

+ +

The $\gamma$ term is unaffected by the size or shape of the region, and so represents a contribution of global and topological effects. Whether this is non-zero, and what the value is, tells you about the topologically ordered nature of your entangled system.

+ +

The $\ldots$ term just represents contributions that decay as the region increases, and so can be ignored as $L\rightarrow \infty$.
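For small systems, the von Neumann entropy $S$ of a reduced density matrix can be computed directly from the Schmidt coefficients. A minimal numpy sketch (my own toy example, using a Bell pair rather than a 2D region):

```python
import numpy as np

def entanglement_entropy(psi, dimA, dimB):
    """Von Neumann entropy of the reduced state of subsystem A,
    for a pure state psi on A (x) B."""
    m = psi.reshape(dimA, dimB)             # amplitudes as a dimA x dimB matrix
    s = np.linalg.svd(m, compute_uv=False)  # Schmidt coefficients
    p = s**2
    p = p[p > 1e-12]                        # drop zero Schmidt weights
    return float(-np.sum(p * np.log(p)) + 0.0)  # +0.0 normalises -0.0

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)   # (|00> + |11>)/sqrt(2)
product = np.array([1.0, 0, 0, 0])           # |00>
print(entanglement_entropy(bell, 2, 2))      # ln(2), roughly 0.693
print(entanglement_entropy(product, 2, 2))   # 0.0
```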

+ +

The two papers, and ones based upon them, then find ways to isolate and calculate $\gamma$ for different entangled states. The value is shown to depend on the anyon model for which these entangled states represent the vacuum.

+",409,,,,,5/15/2018 9:29,,,,0,,,,CC BY-SA 4.0 +2062,1,2063,,5/15/2018 10:12,,5,1157,"

To use certain things in QISKIt, such as acessing the real quantum processors, it seems that there is a file 'Qconfig.py'. That needs to be set up.

+ +

How do I do this?

+",409,,,,,2/15/2019 14:58,What is Qconfig in QISKit and how do I set it up?,,1,2,,,,CC BY-SA 4.0 +2063,2,,2062,5/15/2018 10:12,,5,,"

To get access to the real devices and remote simulators, you need to have an account with the IBM Q Experience. Once you do, you can get your API key from the account page.

+ +

Once you have the API key, you can set up a Qconfig file. Let's assume that you want to run things from the QISKit tutorial. The Qconfig file can then be set up by editing the file located here.

+ +

You just need to replace the line

+ +
APItoken = None
+
+ +

with

+ +
APItoken = ""randomstringoflettersandnumbersyoucopiedfromthewebsite""
+
+ +

Where the stuff within the quotation marks should be your actual API token.

+ +

You also need to rename the file from 'Qconfig.py.template' to just Qconfig.py.

+ +

Now you need to import the information from this file in your programs. This will depend on where you program sits in your computer relative to the Qconfig file.

+ +

If the .py or .ipynb file containing your program is sitting in the same directory as 'Qconfig.py', you can import with just

+ +
try:
+    import Qconfig
+    qx_config = {
+        ""APItoken"": Qconfig.APItoken,
+        ""url"": Qconfig.config['url']}
+except Exception as e:
+    print(e)
+    qx_config = {
+        ""APItoken"": ""YOUR_TOKEN_HERE"",
+        ""url"": ""https://quantumexperience.ng.bluemix.net/api""}
+

+ +

and then register your connection to the API with

+ +

register(qx_config['APItoken'], qx_config['url'])

+ +

If your .py or .ipynb is one directory level down, such as for this one, you'll need to have the lines

+ +
import sys
+sys.path.append(""../"")
+
+ +

before you try to import. This tells the program to look for the Qconfig file in the directory above instead.

+",409,,,,,5/15/2018 10:12,,,,0,,,,CC BY-SA 4.0 +2064,2,,2058,5/15/2018 10:49,,8,,"

Any map which is not Completely Positive, Trace Preserving (CPTP), is not possible as an ""allowed operation"" (a more-or-less complete account of how some system transforms) in quantum mechanics, regardless of what states it is meant to act upon.

+ +

The constraint of maps being CPTP comes from the physics itself. Physical transformations on closed systems are unitary, as a result of the Schrödinger equation. If we allow for the possibility to introduce auxiliary systems, or to ignore/lose auxiliary systems, we obtain a more general CPTP map, expressed in terms of a Stinespring dilation. Beyond this, we must consider maps which may occur only with a significant probability of failure (as with postselection). This is perhaps one way of describing an ""extension"" for non-CPTP maps to CPTP maps — engineering it so that it can be described as a provocative thing with some probability, and something uninteresting with possibly greater probability; or at least a mixture of a non-CPTP map with something else to yield a total evolution which is CPTP — but whether it is useful to do so in general is not clear to me.

+ +

On a higher level — while we may consider entanglement a strange phenomenon, and in some way special to quantum mechanics, the laws of quantum mechanics themselves make no distinctions between entangled states and product states. There is no sense in which quantum mechanics is delicate or sensitive to the mere presence of nonlocal correlations (which are correlations in things which we are concerned with), which would render impossible some transformation on entangled states merely because it might produce an embarrassing result. Either a process is impossible — and in particular not possible on product states — or it is possible, and any embarrassment about the outcome for entangled states is our own, on account of the difficulty in understanding what has happened. What is special about entanglement is the way it challenges our classically-motivated preconceptions, not how entangled states themselves evolve in time.

+",124,,124,,5/17/2018 9:20,5/17/2018 9:20,,,,26,,,,CC BY-SA 4.0 +2065,2,,2060,5/15/2018 15:14,,7,,"

You may be interested in the answers to this question. One example of a degenerate code beating the quantum Hamming bound is here. I also have a numerical example of a small violation in my own work, here. In Figure two, you will see a zoomed in section. Essentially, the black line is the quantum Hamming bound (that may not be entirely obvious from what is written!), and the grey line is an approximation of what can be achieved with something related to the Toric code. There will be other examples as well!

+ +

There appear to be a number of results about classes of degenerate codes that do not violate the quantum Hamming bound (e.g. here and here). I haven't read them, so don't know how useful they are, but the abstracts suggest that they provide a nice counter-point, conveying the rarity of good degenerate codes.

+",1837,,1837,,5/15/2018 16:30,5/15/2018 16:30,,,,1,,,,CC BY-SA 4.0 +2066,2,,2058,5/15/2018 19:01,,-3,,"

No law of physics states that we must be able to evolve a sub-system of the universe on its own.

+ +

There would be no way to definitively test such a law.

+ +
+ +

The density matrix of the universe must have a trace of 1 and be positive semi-definite, by the mathematical definition of probabilities¹. Any change in the universe must¹ preserve this, for mathematical reasons and due to definitions. If $\rm{Tr}(\rho_{\rm{universe}})\lt1$, you just haven't included the whole universe in $\rho_{\rm{universe}}$. If it's more than 1, or if $\rho_{\rm{universe}}<0$, what you have is not actually a density matrix, by the definition of probability¹.

+ +

So the map: $\rho_{\rm{universe}}(0)\rightarrow\rho_{\rm{universe}}(t)$ must¹ be positive and trace-preserving.

+ +

For convenience, we like to model sub-regions of the universe, and introduce complete positivity for that. But one day an experiment might come along that we find impossible to explain², perhaps because we have chosen to model the universe in a way that's not compatible with how the universe actually works.

+ +

If we assume gravity doesn't exist, and we can magically compute anything we want, we believe that evolving $\rho_{\rm{universe}}$ using the right positive trace-preserving map, then doing a partial trace over all parts of the universe not of concern, will give accurate predictions. +Introducing the notion of modeling only a sub-system of $\rho_{\rm{universe}}$, using a CPT map, is also something we believe will work, but we might bet slightly less on this, because we've added the assumption that sub-systems evolve this way, not just the universe as a whole.
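The procedure described here, unitarily evolving a "universe" and then taking a partial trace over the parts not of concern, always yields a valid density matrix for the remaining subsystem, which is easy to check numerically. A sketch (the dimensions and names are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(0)

# Random pure "universe" state on system (2 levels) x environment (4 levels)
psi = rng.normal(size=8) + 1j * rng.normal(size=8)
psi /= np.linalg.norm(psi)
rho_universe = np.outer(psi, psi.conj())

# Random global unitary via QR decomposition of a random complex matrix
q, _ = np.linalg.qr(rng.normal(size=(8, 8)) + 1j * rng.normal(size=(8, 8)))
rho_after = q @ rho_universe @ q.conj().T

# Partial trace over the environment -> reduced state of the system
rho_sys = np.trace(rho_after.reshape(2, 4, 2, 4), axis1=1, axis2=3)

print(np.round(np.trace(rho_sys).real, 6))           # 1.0
print(bool(np.all(np.linalg.eigvalsh(rho_sys) > -1e-12)))  # True
```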

+ +


+1: Even this is debatable because the relationship between a wavefunction or density matrix and probabilities comes from a postulate of quantum mechanics called the Born rule, which until fewer than 10 years ago was never tested at all, and still has only been confirmed to be true within an $\epsilon$, and for a particular system: If Born's rule isn't true, Eq. 6 of this would not be zero. To test if Born's rule is true for a particular system (in this case, photons coming from some particular source), you would have to do an infinite number of instances, of all 7 of these experiments, or come up with a different way to test Born's rule (and I don't know of any). In 2009 we published this saying that Born's rule was true (for this system) to within an $\epsilon$ that was smaller than the experimental uncertainty, so we only know Born's rule is true for this system, and to within a precision limited by the experiment.

+ +

2: This is actually already the case, but let's pretend that gravity does not exist and that quantum mechanics (QED+QFD+QCD) is correct, and we still find it impossible to explain something, despite having (somehow) magical computer power to compute anything we want instantly.

+",2293,,2293,,5/15/2018 23:57,5/15/2018 23:57,,,,6,,,,CC BY-SA 4.0 +2068,1,2069,,5/16/2018 1:45,,8,343,"

I am aware of the difference between Adiabatic Quantum Computing (AQC) and Quantum Annealing (QA), as explained here. However, another term which came up in some papers was Adiabatic Quantum Optimization (AQO). Where does AQO fit in among AQC and QA?

+ +

It seems that it is another name for QA and is restricted to solving the Ising model, just like D-Wave's QA? I would think AQO is more general than QA: QA would be a form of AQO whose Hamiltonian is restricted to the Ising form, whereas AQO's Hamiltonian can be any algebraic function of spin variables without restrictions on the interactions.

+ +

Anyone mind clarifying this? Thank you!

+",2398,,26,,12/13/2018 19:50,12/13/2018 19:50,Adiabatic Quantum Computing vs Adiabatic Quantum Optimization vs Quantum Annealing,,1,0,,,,CC BY-SA 4.0 +2069,2,,2068,5/16/2018 4:29,,6,,"

I'm very happy my answer from 3 years ago to that question is still helping people!

+ +

The answer to your new question is found here:

+ +

+ +

Notice that there is another term here which is ""Quantum Adiabatic Algorithm"" or QAA. In fact those QAA papers from 2000 and 2001 call it ""Quantum Adiabatic Evolution Algorithm"" or QAEA, and ""Quantum Computation by Adiabatic Evolution"" or QCAE.

+ +
+ +

I think we should agree on just using the terms AQC and Quantum Annealing to describe the two things in my answer to the question you provided the link to.

+",2293,,2293,,5/17/2018 1:03,5/17/2018 1:03,,,,1,,,,CC BY-SA 4.0 +2070,1,,,5/16/2018 17:42,,4,1008,"

A lot of the tutorials on the BB84 protocol talk about these two measurement bases, 'Rectilinear' or 'Vertical-Horizontal' and 'Diagonal'. I understand that it is possible to create a physical device that would be able to measure a qubit in both the vertical and horizontal directions, or in other words, in the 'Rectilinear' basis, but what would be its matrix representation?

+ +

For example, we can use $\lvert 0 \rangle \langle 0 \rvert$ to measure a qubit in the $\lvert 0 \rangle$ basis and $\lvert 1 \rangle \langle 1 \rvert$ to measure in the $\lvert 1 \rangle$ basis. But what would be the combined measurement basis which we could call 'rectilinear' or 'vertical-horizontal'?

+",2403,,26,,12/23/2018 12:28,12/23/2018 12:28,'Rectilinear' and 'Diagonal' Basis in BB84 Protocol,,2,2,,,,CC BY-SA 4.0 +2071,2,,2070,5/16/2018 20:38,,3,,"

For the rectilinear basis, the measurement operators are the $|0\rangle\langle 0|$ and $|1\rangle\langle 1|$, as stated in the question. For the other basis, any mutually unbiased basis will do, but people usually go for the two operators $(|0\rangle+|1\rangle)(\langle 0|+\langle 1|)/2$ and $(|0\rangle-|1\rangle)(\langle 0|-\langle 1|)/2$.

+ +

The labels of which basis you call what are fairly arbitrary, but I think that the rectilinear basis is usually the one that corresponds with horizontal/vertical polarisation and is labelled 0/1. The diagonal basis is then the other one.

+",1837,,1837,,5/16/2018 20:57,5/16/2018 20:57,,,,0,,,,CC BY-SA 4.0 +2072,2,,2070,5/16/2018 21:34,,6,,"

Talking about bases such as $\left|0\rangle\langle0\right|$ and $\left|1\rangle\langle1\right|$ (or the equivalent vector notation $\left|0\right>$ and $\left|1\right>$, which I'll use in this answer) at the same time as 'horizontal' and 'vertical' are, to a fair extent (pardon the pun) orthogonal concepts.

+ +

On a Bloch sphere, there are 3 different orthonormal bases - we generally consider $\left|0\right\rangle$ and $\left|1\right\rangle$; $\frac{1}{\sqrt 2}\left(\left|0\right\rangle+\left|1\right\rangle\right)$ and $\frac{1}{\sqrt 2}\left(\left|0\right\rangle-\left|1\right\rangle\right)$; $\frac{1}{\sqrt 2}\left(\left|0\right\rangle+i\left|1\right\rangle\right)$ and $\frac{1}{\sqrt 2}\left(\left|0\right\rangle-i\left|1\right\rangle\right)$. I'll refer to these as the 'quantum information bases' as this is the notation generally used in quantum information.

+ +

That looks a bit of a mess, so we can also write this as $\left|\uparrow_z\right\rangle$, $\left|\downarrow_z\right\rangle$; $\left|\uparrow_x\right\rangle$, $\left|\downarrow_x\right\rangle$; $\left|\uparrow_y\right\rangle$, $\left|\downarrow_y\right\rangle$, where the different bases are now labelled as $x, y$ and $z$. In terms of spin-half particles, this has a natural definition of up/down spin in each of those directions. However, there is freedom in choosing which direction (in the lab) these axes are in (unless otherwise constrained).

+ +

Photons (used in the BB84 protocol) aren't spin-half particles (they have a spin of one), but nevertheless have similarities to this - the 'axes' are the possible directions of the polarisation of a photon, only instead of labelling these as $x, y$ and $z$, they're labelled as horizontal/vertical, diagonal/antidiagonal and left-/right-circular, or in vector notation, this is shortened to $\left|H\right>$, $\left|V\right>$; $\left|D\right>$, $\left|A\right>$; $\left|L\right>$ and $\left|R\right>$. These can then be mapped on to the 'quantum information' bases above, although which basis gets labelled as $\left|0\right>$ and $\left|1\right>$ is somewhat arbitrary.

+ +

For the BB84 protocol (and indeed, frequently used in other applications), the rectilinear (vertical/horizontal) basis is the one labelled using $\left|0\right>$ and $\left|1\right>$.

+ +

That is: $$\left|H\right>=\left|0\right>$$ +$$\left|V\right>=\left|1\right>$$ +$$\left|D\right>=\frac{1}{\sqrt 2}\left(\left|H\right\rangle+\left|V\right\rangle\right)=\frac{1}{\sqrt 2}\left(\left|0\right\rangle+\left|1\right\rangle\right)$$ +$$\left|A\right>=\frac{1}{\sqrt 2}\left(\left|H\right\rangle-\left|V\right\rangle\right)=\frac{1}{\sqrt 2}\left(\left|0\right\rangle-\left|1\right\rangle\right)$$ +$$\left|R\right>=\frac{1}{\sqrt 2}\left(\left|H\right\rangle+i\left|V\right\rangle\right)=\frac{1}{\sqrt 2}\left(\left|0\right\rangle+i\left|1\right\rangle\right)$$ +$$\left|L\right>=\frac{1}{\sqrt 2}\left(\left|H\right\rangle-i\left|V\right\rangle\right)=\frac{1}{\sqrt 2}\left(\left|0\right\rangle-i\left|1\right\rangle\right)$$

+ +

If you want to measure in any of these bases, use the 'projectors' of that basis. That is, if you want to measure in the rectilinear basis, the projectors are $\left|H\rangle\langle H\right|$ and $\left|V\rangle\langle V\right|$. Similarly, in the diagonal basis, $\left|D\rangle\langle D\right|$ and $\left|A\rangle\langle A\right|$; and in the circularly polarised basis, $\left|L\rangle\langle L\right|$ and $\left|R\rangle\langle R\right|$
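Putting those projectors to work, here is a short numpy sketch (my own illustration) of why the basis choice matters in BB84: a diagonally polarised photon measured in the rectilinear basis gives a uniformly random outcome, while the matching diagonal basis gives a deterministic one.

```python
import numpy as np

H = np.array([1.0, 0.0])        # |H> = |0>
V = np.array([0.0, 1.0])        # |V> = |1>
D = (H + V) / np.sqrt(2)        # |D>
A = (H - V) / np.sqrt(2)        # |A>

def proj(v):
    """Rank-one projector |v><v|."""
    return np.outer(v, v.conj())

# Measuring |D> in the rectilinear basis: 50/50 outcomes
pH = float(D.conj() @ proj(H) @ D)
pV = float(D.conj() @ proj(V) @ D)
print(pH, pV)        # 0.5 0.5 (up to floating-point rounding)

# Measuring |D> in the diagonal basis: deterministic
pD = float(D.conj() @ proj(D) @ D)
print(round(pD, 6))  # 1.0
```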

+",23,,,,,5/16/2018 21:34,,,,1,,,,CC BY-SA 4.0 +2073,1,,,5/17/2018 0:23,,5,303,"

If we only assume that the wavefunction of the universe evolves according to $e^{-iHt}$, is there any proof that all subsystems of the universe (partial traces over parts of the universe) must evolve according to a completely positive, trace-preserving (CPTP) map?

+ +

An example of a perfectly valid quantum map that is not completely positive is given in the paragraph containing Eq. 6 of the paper: Who's afraid of not completely positive maps?. This was possible because they made the system and its environment entangled at the time $t=0$. So my question is whether such a proof exists for the case where there is no initial entanglement.

+",2293,,26,,12/31/2018 23:33,12/31/2018 23:33,"Only assuming the universe evolves according to a positive trace-preserving map, is there a proof that all subsystem evolution must be CPTP?",,2,6,,,,CC BY-SA 4.0 +2074,1,,,5/17/2018 3:52,,22,1122,"

Many papers assert that Hamiltonian simulation is BQP-complete +(e.g., +Hamiltonian simulation with nearly optimal dependence on all parameters and Hamiltonian Simulation by Qubitization).

+ +

It is easy to see that Hamiltonian simulation is BQP-hard because any quantum algorithm can be reduced to Hamiltonian simulation, but how is Hamiltonian simulation in BQP?

+ +

i.e., what precisely is the Hamiltonian simulation decision problem in BQP and under what conditions on the Hamiltonian?

+",1885,,55,,09-04-2020 11:48,1/22/2023 15:55,What are examples of Hamiltonian simulation problems that are BQP-complete?,,1,1,,,,CC BY-SA 4.0 +2081,2,,2074,5/17/2018 6:43,,18,,"

There are plenty of different variants, particularly with regards to the conditions on the Hamiltonian. It's a bit of a game, for example, to try and find the simplest possible class of Hamiltonians for which simulation is still BQP-complete.

+

The statement will roughly be along the lines of: let $|\psi\rangle$ be a (normalised) product state, $H$ be a Hamiltonian from some particular class (e.g. consisting only of nearest-neighbour couplings on a one-dimensional lattice), $\hat O$ an observable comprising a tensor product of one-body operators such that $\|\hat O\|\leq 1$, and $t$ be a time. Given the promise that $\langle\psi|e^{iHt}\hat O e^{-iHt}|\psi\rangle$ is either greater than $\frac12+a$ or less than $\frac12-a$ for some $a$ (e.g. $a=\frac16$), decide which is the case.

+
+

Further Details

+

Hamiltonian Simulation is BQP-hard

+

The basic construction (originally due to Feynman, here tweaked a bit) basically shows how you can design a Hamiltonian that implements any quantum computation, including any BQP-complete computation. The observable you would measure is just $Z$ on a particular output qubit, the two measurement outcomes corresponding to 'yes' and 'no'.

+

The simplest sort of Hamiltonian you might think of is to consider a computation of $N-1$ sequential unitaries $U_n$ acting on $M$ qubits, starting from a state $|0\rangle^{\otimes M}$. Then you can introduce an extra $N$ qubits, and specify the Hamiltonian +$$ +H=\frac{2}{N}\sum_{n=1}^{N-1}\sqrt{n(N-n)}\left(|10\rangle\langle 01|_{n,n+1}\otimes U_n+|01\rangle\langle 10|_{n,n+1}\otimes U_n^\dagger\right). +$$ +If you prepare your initial state as $|1\rangle|0\rangle^{\otimes(N-1)}|0\rangle^{\otimes M}$ then after a time $N\pi/4$, it will be in a state $|0\rangle^{\otimes (N-1)}|1\rangle|\Phi\rangle$ where $|\Phi\rangle$ is the output of the desired computation. The funny coupling strengths that I've used here, the $\sqrt{n(N-n)}$, are chosen specifically to give deterministic evolution, and are related to the concept of perfect state transfer. Usually you'll see results stated with equal couplings, but probabilistic evolution.

+

To see how this works, you define a set of states +$$ +|\psi_n\rangle=|0\rangle^{\otimes(n-1)}|1\rangle|0\rangle^{\otimes{N-n}}\otimes\left(U_{n-1}U_{n-2}\ldots U_1|0\rangle^{\otimes M}\right). +$$ +The action of the Hamiltonian is then +$$ +H|\psi_n\rangle=\frac2N\sqrt{(n-1)(N+1-n)}|\psi_{n-1}\rangle+\frac2N\sqrt{n(N-n)}|\psi_{n+1}\rangle, +$$ +which proves that the evolution is restricted to an $N\times N$ subspace which is represented by a tridiagonal matrix (which is the specific thing studied in perfect state transfer).
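The restriction to a tridiagonal matrix makes the deterministic-evolution claim easy to check numerically. A numpy sketch (my own verification, not from the original answer): build the $N\times N$ clock-subspace matrix with off-diagonal entries $\frac{2}{N}\sqrt{n(N-n)}$ and confirm that at $t=N\pi/4$ all amplitude has moved from $|\psi_1\rangle$ to $|\psi_N\rangle$.

```python
import numpy as np

N = 8
n = np.arange(1, N)
b = (2.0 / N) * np.sqrt(n * (N - n))   # couplings between |psi_n> and |psi_{n+1}>
Hc = np.diag(b, 1) + np.diag(b, -1)    # tridiagonal matrix in the clock subspace

t = N * np.pi / 4
evals, evecs = np.linalg.eigh(Hc)
U = evecs @ np.diag(np.exp(-1j * evals * t)) @ evecs.conj().T  # exp(-i Hc t)

start = np.zeros(N)
start[0] = 1.0                         # clock initially in |psi_1>
final = U @ start
print(round(abs(final[-1]), 6))        # 1.0 -> all amplitude on |psi_N>
```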

+

Of course, this Hamiltonian doesn't have any particularly nice properties - it is highly non-local, for example. There are many tricks that can be played to simplify the Hamiltonian to being, for example, one-dimensional. It can even be translationally invariant if you want, at the cost of having to prepare a more complex initial product state (at that point, the computation is no longer encoded in the Hamiltonian, which is universal, but is encoded in the input state). See here, for example.

+

(Local) Hamiltonian simulation is in BQP

+

The evolution of any Hamiltonian which is local on some lattice, acting on an initial product state, for a time that is no more than polynomial in the system size, can be simulated by a quantum computer, and any efficiently implementable measurement can be applied to estimate an observable. In this sense, you can see that Hamiltonian simulation is no harder than a quantum computation, the counter-point to the previous statement that quantum computation is no harder than Hamiltonian simulation.

+

There are many ways to do this (and there have been some recent papers that show significant improvements in error scaling for certain classes of Hamiltonian). Here's quite a simple one. Take the Hamiltonian $H$ that you want to simulate. Split it up into different parts, $H_i$, each of which is composed of mutually commuting terms. For example, for a nearest-neighbour Hamiltonian on some graph, you don't need more pieces than the maximum degree of the graph. You then Trotterize the evolution, writing the approximation +$$ +e^{-iHt}\approx \left(e^{-iH_1\delta t}e^{-iH_2\delta t}\ldots e^{-iH_n\delta t}\right)^{t/\delta t} +$$ +So, you just have to construct a circuit that implements terms like $e^{-iH_1\delta t}$, which is composed of commuting terms $H_1=\sum_nh_n$, each of which acts only on a small number of qubits. +$$ +e^{-iH_1\delta t}=\prod_{n}e^{-ih_n\delta t} +$$ +Since this is just a unitary on a small number of qubits, a universal quantum computer can implement it.
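A quick numerical illustration of the Trotter idea (the toy Hamiltonian and all names are my own choices, assuming nothing beyond numpy): split $H = X\otimes 1 + Z\otimes Z$ into its two non-commuting pieces and watch the error of the product formula shrink as the number of steps grows.

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

H1 = np.kron(X, I2)   # two non-commuting pieces of a toy Hamiltonian
H2 = np.kron(Z, Z)
H = H1 + H2

def expm_h(A, t):
    """exp(-i A t) for Hermitian A, via eigendecomposition."""
    w, v = np.linalg.eigh(A)
    return v @ np.diag(np.exp(-1j * w * t)) @ v.conj().T

def trotter(t, steps):
    """First-order Trotter approximation to exp(-i H t)."""
    dt = t / steps
    step = expm_h(H1, dt) @ expm_h(H2, dt)
    return np.linalg.matrix_power(step, steps)

t = 1.0
exact = expm_h(H, t)
err = [np.linalg.norm(trotter(t, s) - exact) for s in (1, 10, 100)]
print(err)   # decreasing roughly like 1/steps
```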

+",1837,,2927,,1/22/2023 15:55,1/22/2023 15:55,,,,0,,,,CC BY-SA 4.0 +2082,1,2084,,5/17/2018 11:22,,30,3203,"

I am a computer science student and am currently searching for resources from where I can learn about quantum computers, quantum computing models, their working principles, their gates and some simple quantum algorithms.

+",2405,,26,,12/13/2018 19:50,1/30/2022 17:43,Are there any organised resources available from where I can begin my quantum computing studies?,,5,2,,,,CC BY-SA 4.0 +2083,2,,2082,5/17/2018 11:43,,8,,"

The book Quantum computation and quantum information by Nielsen and Chuang is a good read in order to introduce yourself to the world of quantum computation. The book assumes minimal prior experience with quantum mechanics and with computer science, aiming instead to be a self-contained introduction to the relevant features of both, so it is really a nice starting point for anyone who wishes to introduce themselves to the world of quantum information science.

+",2371,,15,,5/17/2018 21:00,5/17/2018 21:00,,,,0,,,,CC BY-SA 4.0 +2084,2,,2082,5/17/2018 12:37,,14,,"

A curated list of resources can be found here.

+ +

In case the link above one day goes dead, I should pick out some highlights, though this will be entirely subjective.

+ + +",409,,,,,5/17/2018 12:37,,,,1,,,,CC BY-SA 4.0 +2091,2,,2082,5/17/2018 14:36,,3,,"

It really depends on where your brain is at. In particular, how much mathematics you have under your belt. Much of what you will need to understand is contained within linear algebra (over the complex numbers.) Zooming in more: it's all in the tensor product. Most explanations I see of how tensoring works are brutally difficult to understand as a novice. In fact, the case can be made that the whole field of quantum computing has been held back by our understanding of tensor products and ability to work with them (calculate.) In this vein, I would highly recommend the recent book by Coecke and Kissinger ""Picturing Quantum Processes."" Although perhaps you would like to struggle with a more traditional text first, in order to more appreciate the diagrammatic approach.

+",263,,,,,5/17/2018 14:36,,,,0,,,,CC BY-SA 4.0 +2093,2,,1679,5/17/2018 19:10,,2,,"

That Wikipedia article you mention says ""Blockchain security methods include the use of public-key cryptography."" The most widely used public-key cryptography methods are RSA and some elliptic curve methods. Quantum computers are a threat to both RSA and elliptic curve methods because they rely on it being difficult to factor large numbers or to calculate discrete logarithms, and Peter Shor showed in 1994 that a quantum computer can perform both these tasks with exponentially fewer arithmetic operations than a classical computer.

+ +

If it is possible to build a big enough quantum computer, most if not all blockchain implementations will be at threat because of relying on public-key cryptography implementations which are not safe against quantum computing.

+",2293,,,,,5/17/2018 19:10,,,,1,,,,CC BY-SA 4.0 +2094,2,,1679,5/17/2018 20:38,,7,,"
+

Are the current implementations of blockchain resistant to attacks using quantum computation?

+
+

Quick answers:

+
    +
  1. Resistant against near-term technology? Sure.

    +
  2. +
  3. Reliably secure in the long term? Probably not.

    +
  4. +
  5. Will this pose a major problem? Very likely not.

    +
  6. +
  7. Is this risk unique to blockchains? Nope.

    +
  8. +
+

Because even if quantum computers would become a major threat to current implementations, the community could just elect to do a hard fork to post-quantum cryptography.

+

Not to say that blockchain technology developers and researchers don't need to worry about working on this issue, though I'd imagine that the average user needn't be concerned with this particular threat.

+

Also worth noting that other financial institutions, including banks, would be prone to a similar risk in some weird hypothetical world in which people inexplicably elected against upgrading their crypto. For example, hackers could use quantum computers to crack a financial institution's TLS/SSL certificate, allowing them to man-in-the-middle attack (random 2015 paper).

+
+

Long answer

+

Here's a 2017 paper that projects that Bitcoin could potentially become vulnerable by 2027, using generous assumptions:

+
+

The key cryptographic protocols used to secure the internet and financial transactions of today are all susceptible to attack by the development of a sufficiently large quantum computer. One particular area at risk are cryptocurrencies, a market currently worth over 150 billion USD. We investigate the risk of Bitcoin, and other cryptocurrencies, to attacks by quantum computers. We find that the proof-of-work used by Bitcoin is relatively resistant to substantial speedup by quantum computers in the next 10 years, mainly because specialized ASIC miners are extremely fast compared to the estimated clock speed of near-term quantum computers. On the other hand, the elliptic curve signature scheme used by Bitcoin is much more at risk, and could be completely broken by a quantum computer as early as 2027, by the most optimistic estimates. We analyze an alternative proof-of-work called Momentum, based on finding collisions in a hash function, that is even more resistant to speedup by a quantum computer. We also review the available post-quantum signature schemes to see which one would best meet the security and efficiency requirements of blockchain applications.

+

"Quantum attacks on Bitcoin, and how to protect against them" (2017-10-28)

+
+

That said, I'm not too sure how relevant a concern this might be in practice, as it seems likely that the situation will change before that point. Even if Bitcoin's still around and going strong by the time it could be attacked, various mitigation techniques might go into effect.

+

The "Weakness" article on Bitcoin's wiki doesn't even mention quantum stuff, though their article on "Myths" does:

+
+

Quantum computers would break Bitcoin's security

+
+

While ECDSA is indeed not secure under quantum computing, quantum computers don't yet exist and probably won't for a while. The DWAVE system often written about in the press is, even if all their claims are true, not a quantum computer of a kind that could be used for cryptography. Bitcoin's security, when used properly with a new address on each transaction, depends on more than just ECDSA: Cryptographic hashes are much stronger than ECDSA under QC.

+

Bitcoin's security was designed to be upgraded in a forward compatible way and could be upgraded if this were considered an imminent threat (cf. Aggarwal et al. 2017, "Quantum attacks on Bitcoin, and how to protect against them").

+

See the implications of quantum computers on public key cryptography.

+

The risk of quantum computers is also there for financial institutions, like banks, because they heavily rely on cryptography when doing transactions.

+

"Myths", bitcoinwiki

+
+

Regarding the point about updating mentioned above: while Bitcoin and other blockchains do tend to rely on standard algorithms that may foreseeably be attacked by quantum computers, before that becomes an issue they can simply do a hard fork, i.e. an update that everyone in the network migrates to, enabling changes such as replacing the vulnerable algorithms.

+
+

What is 'Hard Fork'
A hard fork (or sometimes hardfork), as it relates to blockchain technology, is a radical change to the protocol that makes previously invalid blocks/transactions valid (or vice-versa). This requires all nodes or users to upgrade to the latest version of the protocol software. Put differently, a hard fork is a permanent divergence from the previous version of the blockchain, and nodes running previous versions will no longer be accepted by the newest version. This essentially creates a fork in the blockchain: one path follows the new, upgraded blockchain, and the other path continues along the old path. Generally, after a short period of time, those on the old chain will realize that their version of the blockchain is outdated or irrelevant and quickly upgrade to the latest version.

+

"Hard Fork", Investopedia

+
+

Of course, pushing a hard fork requires getting much of the community to accept it, though since pretty much all members of a cryptocurrency network wouldn't want to get hacked/scammed/etc., a hard fork pushed to avert a foreseeable risk of attack by quantum computers would almost certainly be uncontroversial.

+",15,,-1,,6/18/2020 8:31,5/17/2018 20:53,,,,10,,,,CC BY-SA 4.0 +2095,1,,,5/18/2018 5:15,,8,172,"

In a simple form, Bell's theorem states that:

+ +
+

No physical theory of local hidden variables can ever reproduce all of the predictions of quantum mechanics.

+
+ +

Bell developed a series of inequalities to provide specific experimental examples to distinguish between the predictions of any theory relying on local hidden variables and those of quantum mechanics. As such, Bell test inequality experiments are of fundamental interest in quantum mechanics. However, if one wants to do things properly, one realizes that there are a number of loopholes that affect, to differing degrees, all experiments trying to perform Bell tests.[1] Experiments trying to close these loopholes tend to be unique rather than routine. One of the results of having general-purpose quantum computers, or networks thereof, would be the ability to routinely perform sophisticated quantum experiments.

+ +

Question: What requirements would a general-purpose quantum computer (network) have to fulfill to be able to implement Bell tests that are at least as loophole-free as the best realization that has been done so far?

+ +

For clarity: ideally the best answer will take a quantum computing approach and contain close-to-engineering details, or at least close-to-architecture. For example, writing the experiment as a simple quantum circuit, one of the current architectures can be chosen and from that one would make some realistic order-of-magnitude estimates to the required times of the different quantum gates / measurements and of the required physical distance between the different qubits.

+ +

[1] As commented by @kludg, it has been argued that ""..no experiment, as ideal as it is, can be said to be totally loophole-free."", see Viewpoint: Closing the Door on Einstein and Bohr’s Quantum Debate

+",1847,,55,,04-10-2021 08:29,04-10-2021 08:29,How would a Quantum Computer (network) perform loophole-free Bell tests?,,1,2,,,,CC BY-SA 4.0 +2096,2,,2095,5/18/2018 6:58,,6,,"

When people talk about a loophole free Bell test, what they really mean is that the two loopholes that most concern the majority of people are closed simultaneously: the measurement loophole and the locality loophole.

+ +

Let us briefly review the protocol:

+ +
    +
  • A Bell state $(|00\rangle+|11\rangle)/\sqrt{2}$ is produced, and two parties, Alice and Bob, each take one qubit.

  • +
  • Alice and Bob must be separated by a distance $d$.

  • +
  • Alice and Bob each pick a random bit value.

  • +
  • If Alice's random bit value, $x$, is 0, she measures her qubit in the $Z$ basis. If it's 1, she measures in the $X$ basis. Her measurement outcome is a bit, recorded in the variable $A_x$.

  • +
  • If Bob's random bit value is $y\in\{0,1\}$, he measures in the basis $(Z+(-1)^yX)/\sqrt{2}$. His measurement outcome is a bit, recorded in the variable $B_y$.

  • +
  • Alice and Bob repeat this many times, and evaluate the expected value of $$S=A_0(B_0+B_1)+A_1(B_0-B_1).$$

  • +
+ +
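To see what the quantum strategy achieves in this protocol, it may help to compute the expected value of $S$ for the Bell state directly. The following NumPy sketch (added for illustration; not part of the original answer) evaluates the four correlators and recovers the Tsirelson value $2\sqrt{2}\approx 2.83$:

```python
import numpy as np

# Pauli matrices and the Bell state (|00> + |11>)/sqrt(2)
Z = np.array([[1.0, 0.0], [0.0, -1.0]])
X = np.array([[0.0, 1.0], [1.0, 0.0]])
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)

# Alice measures Z (x = 0) or X (x = 1); Bob measures (Z + (-1)^y X)/sqrt(2)
A = [Z, X]
B = [(Z + X) / np.sqrt(2), (Z - X) / np.sqrt(2)]

def correlator(a, b):
    # <bell| a (x) b |bell>
    return bell @ np.kron(a, b) @ bell

S = (correlator(A[0], B[0]) + correlator(A[0], B[1])
     + correlator(A[1], B[0]) - correlator(A[1], B[1]))

print(S)  # 2*sqrt(2) ~ 2.83, above the classical bound of 2
```

Any local hidden variable model is limited to $|\langle S\rangle|\le 2$, and that gap is exactly what a loophole-free experiment has to certify.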

Closing the locality loophole requires that the two parties that are taking part are separated by a distance $d$ such that the time between Alice's measurement basis being chosen, and Bob's answer being given is less than $d/c$, where $c$ is the speed of light (so that there is no way an adversary choosing Bob's answer can know Alice's measurement basis). It also requires, symmetrically, that Alice's answer is given no later than a time $d/c$ after Bob's measurement choice is made.

+ +

Closing the measurement loophole requires the use of detectors that have a sufficiently high accuracy (otherwise, an adversary obeying a Local Hidden Variable model could replace your detectors with better detectors, and use the margin of error to throw away results that would betray the presence of eavesdropping/manipulation). The precise value of this threshold depends on your precise formulation of the Bell test. The commonly quoted value is a detector efficiency of about $83\%$ for the CHSH test.

+ +

Recently, there have been experiments that have closed both these loopholes simultaneously. See here, for example. Their results are good enough that they can quantify the likelihood of there being a local hidden variable model that describes their results ($P=0.039$). Ultimately, if you want to do better, you either need devices that perform better than theirs, or to perform more runs of the experiment. That is, perhaps, now the main experimental challenge; to improve the speed of such devices so that it doesn't take 18 days to generate 245 trials! These experiments also claim to remove the freedom of choice loophole, wherein one worries that the random number generators that are used for choosing the measurement bases of Alice and Bob are also governed by the same local hidden variable model, instead of generating perfect randomness that is uncorrelated with the rest of the experiment.

+ +

In terms of a quantum computing architecture for implementing this, that is not a particularly natural issue: for a quantum computer, one wants to be able to create as much connection and interaction between the qubits as possible, which is rather the opposite of needing to separate them over great distances. I suppose the sort of context which is starting to generate the right scenario are the designs for scalable ion trap quantum computers, where there are multiple separated traps, each of which only interacts occasionally. If each of these were far enough apart, you could think of a loophole-free Bell test. I believe the measurement efficiencies in these scenarios are high enough. The question then is, how far apart do these different locations have to be to close the locality loophole? I haven't done any sort of calculation based on real data, but I think the answer is of the order of kilometers, i.e. completely unreasonable for a single computer. For me, those would be separate computers, working very hard to cooperatively compute using the minimum of shared resources (i.e. entanglement).
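To put a rough number on "order of kilometers", here is a back-of-envelope sketch (the per-trial times below are my own assumed values for illustration, not data from any real device) relating the time budget for basis choice, measurement and readout to the minimum separation $d > ct$:

```python
# Minimum separation needed to close the locality loophole, assuming
# (hypothetically) that basis choice + measurement + readout take a
# total time t on each side.
c = 299_792_458  # speed of light, m/s

for t in (1e-6, 3e-6, 1e-5):  # assumed per-trial times, in seconds
    print(f"t = {t:.0e} s  ->  d > {c * t / 1000:.2f} km")
```

Even microsecond-scale trials already force kilometre-scale separations, consistent with the estimate above.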

+",1837,,1837,,5/18/2018 7:30,5/18/2018 7:30,,,,7,,,,CC BY-SA 4.0 +2097,1,2099,,5/18/2018 19:46,,12,780,"

Schrödinger wrote a letter to Einstein after the 1935 EPR paper, and in that letter Schrödinger used the German word ""Verschränkung"" which translates into ""entanglement"", but when was the word first used in English?

+ +

Schrödinger's 1935 paper written in English, called Discussion of Probability Relations between Separated Systems, says (according to Wikipedia) ""I would not call [entanglement] one but rather the characteristic trait of quantum mechanics, the one that enforces its entire departure from classical lines of thought"", which means the concept was there, but whatever word he used for it was not ""entanglement"" (hence the square brackets). Unfortunately I do not have access to the full paper.

+",2293,,2293,,5/18/2018 20:24,1/24/2021 22:58,When was the first use of the word Entanglement?,,3,1,,,,CC BY-SA 4.0 +2098,1,,,5/18/2018 20:16,,9,406,"

The Toric code Hamiltonian is:

+ +

$-\sum_{x,y}\left( \prod_{i\in p(x,y)} Z_{ixy} + \prod_{i\in v(x,y)} X_{ixy} \right),$

+ +

where the $v$ and $p$ are defined according to this picture (courtesy of James Wooton's contribution to Wikipedia):

+ +

+ +

At the moment we have an infinite 2D lattice:

+ +

$x\rightarrow \pm \infty$
+$y\rightarrow \pm \infty$.

+ +

But if we set periodic boundary conditions such that (and feel free to edit the question if I am incorrect about this):

+ +

$p(x+10,y)=p(x,y)=p(x,y+10)$
+$v(x+10,y)=v(x,y)=v(x,y+10)$,

+ +

We get the following torus (image courtesy of James Wooton's contribution to Wikipedia):

+ +

+ +

Now in my periodic boundary conditions, I chose to add $+10$ but could have added some other number instead. How does this ""size of the torus"" affect the function of the toric code?

+",2293,,10480,,02-10-2021 07:07,02-10-2021 21:34,How does the size of a toric code torus affect its ability to protect qubits?,,2,0,,,,CC BY-SA 4.0 +2099,2,,2097,5/18/2018 23:28,,9,,"

I managed to get access to the paper mentioned in the question. Schrödinger in 1935 (the same year the original EPR paper was published) wrote in English: ""By the interaction the two representatives (or $\psi$-functions) have become entangled."" This was in the abstract.

+ +

He also wrote later in the paper: ""What constitutes the entanglement is that $\psi$ is not a product of a function of x and a function of y.""

+ +

He also used the term ""disentanglement""

+ +

However, the use of the term, as found by searching for quantum entanglement
+in Google Scholar, indicates that usage was merely doubling every 10 years until roughly 1990, when it went up by a factor of 5 in a 10-year period, followed by a further factor of 6 in the next 10-year period:

+ +

+ +

Data was collected just now:

+ +

1900-1940: 63 results
+1900-1950: 93 results
+1900-1960: 146 results
+1900-1970: 313 results
+1900-1980: 718 results
+1900-1990: 1700 results
+1900-2000: 9380 results
+1900-2010: 61700 results
+1900-2020: 151000 results

+",2293,,,,,5/18/2018 23:28,,,,4,,,,CC BY-SA 4.0 +2100,1,,,5/19/2018 2:02,,9,344,"

For example, has anyone seen something like:
+""quqrit"" for a 4-level system[1], or
+""qupit"" for a 5-level system[2] ?

+ +

1 From ""quad"" or ""quart"" since ""tetra"" would be qutrit, which is already a 3-level system.
+2 From ""penta"" since ""quint"" would interfere with quqrit for 4-level system.

+ +

I understand that we could call a quqrit a ""spin-3/2 particle"" which would be overloading the meaning of spin, but I wonder if these terms have ever been used. I have searched ""quqrit"" and ""qupit"" on Google but what if it's not called ""quqrit"" but something else? Also, perhaps no one had the bravery to publish something with these words written down but have been used orally at conferences. Surprisingly, after asking this question I found one paper using the term ""qupit"" but the word only appears in the title and nowhere else, so it's not clear what their definition of qupit is!

+",2293,,2293,,5/19/2018 2:08,06-07-2018 07:32,Do any specific types of qudits other than qubits and qutrits have a name?,,2,3,,,,CC BY-SA 4.0 +2101,2,,2100,5/19/2018 4:39,,4,,"

After a lot of searching, it appears that the word ""quqrit"" has indeed been used in one (but I found only one!) paper from 2011, and indeed it was used to describe a 4-level system. But the word ""quqit"" is used to describe 4-level systems in two papers [1][2] dating back to 2004. This time there are four different authors in total, but two of them appear on both papers, so there are no independent groups using this term (as far as I have found).

+ +

As for higher-order qudits beyond $d=5$, it appears to be an open question.

+",2293,,2293,,5/19/2018 4:47,5/19/2018 4:47,,,,0,,,,CC BY-SA 4.0 +2102,2,,2100,5/19/2018 5:46,,8,,"

There is no standard name for a qudit for $d>3$. The community has mostly settled on the term qudit (but you will still find qunit or quNit, for example, using $n$ or $N$ instead of $d$ in some older papers).

+ +

You will find the odd paper where an individual author will pick a name for the $d=4$ case. I’ve certainly seen ququad and ququart. But I think mostly people just use qudit, and specify $d$.

+ +

The other term that you'll see (thanks to Niel de Beaudrap for pointing it out) is qupit, a quantum system where the Hilbert space dimension is an odd prime, $p$.

+",1837,,23,,06-07-2018 07:32,06-07-2018 07:32,,,,2,,,,CC BY-SA 4.0 +2103,2,,2098,5/19/2018 6:23,,6,,"

The Toric code is an error correcting code. The distance of the code (i.e. the number of local operations required to convert one logical state into an orthogonal one) is equal to $N$, where the Toric code is defined on an $N\times N$ grid.

+ +

One of the places that the performance of the Toric code really wins out is that although it is only distance $N$, the vast majority of sets of $N$ single qubit errors can be corrected, and it is only once you get $O(N^2)$ errors that you get killed. That means that as $N\rightarrow\infty$, those $O(N)$ terms vanish, and you get a finite per-qubit error rate as a threshold for error correction. For finite $N$, the error correcting threshold will be lower.
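One way to see how little the torus size changes the logical content (as opposed to the distance) is to enumerate the stabilizer supports on an $L\times L$ periodic lattice. The sketch below uses my own edge-labelling conventions and is added purely for illustration; it checks that all stabilizers commute and that there are exactly two global constraints, so the code stores 2 logical qubits for any $L$:

```python
from functools import reduce

L = 4  # linear torus size; the "+10" in the question corresponds to L = 10

# Edges of an L x L periodic lattice: ('h', x, y) is the horizontal edge
# leaving vertex (x, y); ('v', x, y) is the vertical edge leaving it.
def plaquette(x, y):
    # support of the Z-type stabilizer on the face with lower-left corner (x, y)
    return {('h', x, y), ('h', x, (y + 1) % L),
            ('v', x, y), ('v', (x + 1) % L, y)}

def vertex(x, y):
    # support of the X-type stabilizer on the vertex (x, y)
    return {('h', x, y), ('h', (x - 1) % L, y),
            ('v', x, y), ('v', x, (y - 1) % L)}

plaquettes = [plaquette(x, y) for x in range(L) for y in range(L)]
vertices = [vertex(x, y) for x in range(L) for y in range(L)]

# Each plaquette overlaps each vertex on an even number of edges,
# so Z-type and X-type stabilizers commute:
assert all(len(p & v) % 2 == 0 for p in plaquettes for v in vertices)

# The product of all plaquettes (symmetric difference of supports) is the
# identity, and likewise for all vertices: two global constraints.  Hence
# 2*L**2 qubits minus 2*L**2 - 2 independent stabilizers leaves exactly
# 2 logical qubits for *any* L; the size only sets the code distance L.
assert reduce(lambda a, b: a ^ b, plaquettes) == set()
assert reduce(lambda a, b: a ^ b, vertices) == set()
print(len(plaquettes) + len(vertices), "stabilizers,", 2 * L**2, "qubits")
```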

+",1837,,,,,5/19/2018 6:23,,,,0,,,,CC BY-SA 4.0 +2104,2,,2012,5/19/2018 7:49,,2,,"

I don't know the translation into physics, but the circuit you want for the most basic demonstration is the following: [circuit diagram] Here, $|+\rangle=(|0\rangle+|1\rangle)/\sqrt{2}$, and the gates are controlled-not gates and controlled-phase gates. The state $|\psi\rangle$ can be any input state initially. The first run of the circuit prepares the $|\psi\rangle$ qubits in a 4-qubit Toric code (up to some corrections depending on what measurement results you got). Repeat the measurements again, and you get a round of error correction. In effect, the first qubit measures the expectation value of the observable $XXXX$ on the Toric code qubits, while the second qubit measures the $ZZZZ$ observable.

+ +
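A quick numerical sanity check (added for illustration; not part of the original answer) that the two measured observables commute, and that their joint $+1$ eigenspace is 4-dimensional, i.e. the four physical qubits support 2 logical qubits:

```python
import numpy as np
from functools import reduce

X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])

XXXX = reduce(np.kron, [X] * 4)
ZZZZ = reduce(np.kron, [Z] * 4)

# The observables commute, so measuring one does not disturb the other:
assert np.allclose(XXXX @ ZZZZ, ZZZZ @ XXXX)

# Projector onto the joint +1 eigenspace (the code space):
P = ((np.eye(16) + XXXX) / 2) @ ((np.eye(16) + ZZZZ) / 2)
print(int(round(np.trace(P))))  # 4: four qubits encode 2 logical qubits
```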

I seem to remember that Jiannis Pachos (and coauthors) explicitly described the smallest Toric code implementation possible (which I guess was this version), but I can't seem to find the paper. I had assumed that JamesWooton would have jumped in by now to tell you where that paper is. It must be commented, however, that such a small size of Toric Code is completely hopeless for error correcting properties; you cannot even correct for single-qubit errors!

+",1837,,1837,,5/19/2018 9:12,5/19/2018 9:12,,,,2,,,,CC BY-SA 4.0 +2105,1,2124,,5/19/2018 10:26,,7,75,"

When applying each of the six degree-of-freedom rotations in SO(4) (or certain combinations of them) using quantum gates, the results I expected are produced. For example, the following circuit in Craig Gidney's Quirk tool demonstrates rotations in three degrees of freedom, along with some displays of the resulting matrices expressed as percentages:

+ +

+ +

However, when applying some combinations of rotations, such as the following, results I didn't expect are produced in the final matrix:

+ +

+ +

In contrast, the results I am expecting are the following: +$$ +\begin{bmatrix} + .73 & .07 & .13 & .07 \\ + .00 & .73 & .15 & .13 \\ + .13 & .07 & .73 & .07 \\ + .15 & .13 & .00 & .73 +\end{bmatrix} +$$

+ +

For convenience, here is a link to the Quirk circuit with all six degree of freedom rotations, albeit with an unexpected final result. The results I expect are the following:

+ +

$$ +\begin{bmatrix} + .62 & .01 & .08 & .29 \\ + .11 & .80 & .01 & .08 \\ + .13 & .07 & .80 & .01 \\ + .15 & .13 & .11 & .62 +\end{bmatrix} +$$

+ +

I don't know enough about using ancilla bits and uncomputation techniques to apply them to this, but I suspect they might explain part of the unexpected results. Any advice would be greatly appreciated.

+",2421,,26,,12/23/2018 13:52,12/23/2018 13:52,How to avoid error when applying certain combinations of degree of freedom rotations using a quantum circuit?,,1,0,,,,CC BY-SA 4.0 +2106,1,2107,,5/19/2018 14:16,,40,14119,"

I am studying Quantum Computing and Information, and have come across the term "surface code", but I can't find a brief explanation of what it is and how it works. Hopefully you guys can help me with this.

+",2422,,55,,6/13/2022 23:18,6/13/2022 23:18,"What is the ""surface code"" in the context of quantum error correction?",,2,5,,,,CC BY-SA 4.0 +2107,2,,2106,5/19/2018 18:13,,27,,"

The surface codes are a family of quantum error correcting codes defined on a 2D lattice of qubits. Each code within this family has stabilizers that are defined equivalently in the bulk, but differ from one another in their boundary conditions.

+ +

The members of the surface code family are sometimes also described by more specific names: The toric code is a surface code with periodic boundary conditions, the planar code is one defined on a plane, etc. The term ‘surface code’ is sometimes also used interchangeably with ‘planar code’, since this is the most realistic example of the surface code family.

+ +

The surface codes are currently a large research area, so I’ll just point you towards some good entry points (in addition to the Wikipedia article linked to above).

+ + + +

The surface codes can also be generalized to qudits. For more on that, see here.

+",409,,-1,,8/28/2019 8:59,8/28/2019 8:59,,,,2,,,,CC BY-SA 4.0 +2108,1,2122,,5/19/2018 18:56,,8,997,"

I am trying to run some code using qiskit, but I get the error message, that I have run out of necessary Experiment Units. I tried to remove pending jobs using the API with the following code

+ +
for job in api.get_jobs():
    if job[""status""] == ""RUNNING"":
        api.cancel_job(id_job=job[""id""], hub=None, group=None, project=None,
                       access_token=None, user_id=None)
+
+ +

but it didn't work.

+ +

Am I even going in the right direction, or is there some other way to retrieve these used Experiment Units? I have read that they are normally returned either just after the execution of the program finishes or after 24 hours (whichever ends earlier), but I have now been waiting for over two days and nothing happens.

+",2098,,409,,5/19/2018 22:47,5/21/2018 7:34,How to delete pending jobs on IBM Quantum Computer to retrieve units?,,1,6,,,,CC BY-SA 4.0 +2109,1,,,5/20/2018 7:48,,6,123,"

I have some practical difficulties with projective measurements, so I'd welcome inspiration from others. This is beyond the question ""Are true Projective Measurements possible experimentally?"" in that I'm not aiming for perfection but for something practical. In particular, I care about the case where we want to keep computing after a measurement.

+ +

Let us say we try to effect an upwards transition between two energy levels, by illuminating the sample with the appropriate wavelength. The transition is only possible if the initial state is occupied, since the final state is outside of our computational basis. For this to be a projective measurement rather than a unitary operation in a larger basis, we need to irreversibly detect this, say by a radiative spontaneous relaxation of this ""final"" state of the transition to a third energy level. If we were subsequently able to go back to the original level (coherently and rapidly), then I assume we'd have a messy work-around for an ideal projective measurement.

+ +

The question is: can this be done, or is this scheme fundamentally flawed? If it can be done, please illustrate with examples where this works.

+",1847,,1837,,9/21/2018 7:42,9/21/2018 7:42,Projective measurements: aftermath and restoration,,0,0,,,,CC BY-SA 4.0 +2110,1,2113,,5/20/2018 11:41,,29,3126,"

Grover's algorithm is used, among other things, to search for an item $\mathbf{y}$ in an unordered list of items $[\mathbf{x}_0, \mathbf{x}_1, ..., \mathbf{x}_{n-1}]$ of length $n$. Even though there are plenty of questions here regarding this topic, I am still missing the point.

+ +

Searching in a list, the classical way

+ +

Normally, I would design a search function this way +$$ \mathrm{search}([\mathbf{x}_0, \mathbf{x}_1, ..., \mathbf{x}_{n-1}], \mathbf{y}) = i \in \mathbb{N} \quad \text{such that } \mathbf{x}_i = \mathbf{y} $$ +So I give the list and the wanted item as inputs, and I receive the position of the item in the list as output. I think I have understood that the information about $\mathbf{y}$ is embedded in the algorithm through the oracle gate $O$, so our function becomes +$$ \mathrm{search}_\mathbf{y}([\mathbf{x}_0, \mathbf{x}_1, ..., \mathbf{x}_{n-1}] ) = i \in \mathbb{N} \quad \text{such that } \mathbf{x}_i = \mathbf{y} $$ +Let's work through a practical example. Consider searching for the ace of spades $1\spadesuit$ in a sequence of 8 cards from a standard 52-card deck:

+ +

+ +

The list of length $8$ is $[ + \mathbf{x}_0 = J\clubsuit,$ $ + \mathbf{x}_1 = 10\diamondsuit,$ $ + \mathbf{x}_2 = 4\heartsuit,$ $ + \mathbf{x}_3 = Q\clubsuit,$ $ + \mathbf{x}_4 = 3\spadesuit,$ $ + \mathbf{x}_5 = 1\spadesuit,$ $ + \mathbf{x}_6 = 6\spadesuit, $ $ + \mathbf{x}_7 = 6\clubsuit]$.

+ +

The wanted element is $\mathbf{x}_5$. I should obtain $\mathrm{search}_{\spadesuit}(cards) = 5$. Each card can be encoded with $\lceil{\log_2 52}\rceil = 6$bits, the list has $8$ elements so we need $6\times 8 = 48$ bits to encode the list. In this case, the oracle $O$ will implement the function: +$$f(\mathbf{x}) = \begin{cases} 1, & \mathbf{x} = 1\spadesuit \\ 0, & \text{otherwise} \end{cases}$$

+ +

However, the input of Grover's algorithm is not a state of $48$ qubits.

+ +

(NB: Image of shuffled deck is taken from here)

+ +

Grover and its oracle

+ +

Several sources (e.g. here - graphically explained) say that the input of the algorithm is different: the input is a state taken from the search space $S = \{ 0, 1, 2, ..., N-1 \} = \{0, 1, 2, ..., 7 \}$, where $N$ is the number of elements of the list. Each number corresponds to the position of an element in the list.

+ +

The input of $\mathrm{search}_{\spadesuit}(\cdot)$ is now a $\lceil \log_2 8 \rceil = 3$-qubit vector $|\psi\rangle$, which must be a superposition of all the items in the search space $S$.

+ +

We know

+ +
    +
  • $|0_{3\text{qubits}}\rangle = |000\rangle$ corresponds to $J\clubsuit$;
  • +
  • $|1_{3\text{qubits}}\rangle = |001\rangle$ corresponds to $10\diamondsuit$;
  • +
  • $|2_{3\text{qubits}}\rangle = |010\rangle$ corresponds to $4\heartsuit$;
  • +
  • $|5_{3\text{qubits}}\rangle = |101\rangle$ corresponds to $1\spadesuit$ which is the wanted element;
  • +
  • and so on...
  • +
+ +

In this case we have +$$\mathrm{search}_{\spadesuit}(|\psi\rangle) = |5_{3\text{qubits}}\rangle$$ +But in this case, our oracle would have to implement the function +$$f(|\psi\rangle) = \begin{cases} 1, & |\psi\rangle = |5_{3\text{qubits}}\rangle \\ 0, & \text{otherwise} \end{cases}$$

+ +

Building the oracle requires us to know that $\spadesuit$ is at position 5. What's the point of executing the algorithm if we have already had to find the element in order to build the oracle?

+",1874,,55,,05-03-2021 11:28,05-03-2021 11:57,What's the point of Grover's algorithm if we have to search the list of elements to build the oracle?,,5,3,,,,CC BY-SA 4.0 +2111,1,2112,,5/20/2018 13:51,,21,4532,"

Shor's algorithm is expected to enable us to factor integers far larger than could be feasibly done on modern classical computers.

+ +

Currently, only small integers have been factored. For example, this paper discusses factorizing $15=5{\times}3$.

+ +

What is, in this sense, the state of the art in research? Are there any recent papers reporting that larger numbers have been factorized?

+",2426,,26,,12/13/2018 19:51,12/17/2018 10:23,What integers have been factored with Shor's algorithm?,,3,1,,,,CC BY-SA 4.0 +2112,2,,2111,5/20/2018 14:19,,15,,"

The prime factorization of 21 ($7\times3$) seems to be the largest done to date with Shor's algorithm; it was done in 2012 as detailed in this paper. It should be noted, however, that much larger numbers, such as 56,153 in 2014, have been factored using a minimization algorithm, as detailed here. For a convenient reference, see Table 5 of this paper:

+ + + +

$$ +\begin{array}{c} + \textbf{Table 5:}~\text{Quantum factorization records} \\ \hline + \small{ + \begin{array}{cccccc} + \text{Number} & \text{# of factors} & \begin{array}{c}\text{# of qubits} \\ \text{needed} \end{array} & \text{Algorithm} & \begin{array}{c}\text{Year} \\ \text{implemented} \end{array} & \begin{array}{c}\text{Implemented} \\ \text{without prior} \\ \text{knowledge of} \\ \text{solution} \end{array} \\ \hline + 15 & 2 & 8 & \text{Shor} & 2001~\left[2\right] & \chi \\ + & 2 & 8 & \text{Shor} & 2007~\left[3\right] & \chi \\ + & 2 & 8 & \text{Shor} & 2007~\left[3\right] & \chi \\ + & 2 & 8 & \text{Shor} & 2009~\left[5\right] & \chi \\ + & 2 & 8 & \text{Shor} & 2012~\left[6\right] & \chi \\ + 21 & 2 & 10 & \text{Shor} & 2012~\left[7\right] & \chi \\ + 143 & 2 & 4 & \text{minimization} & 2012~\left[1\right] & \checkmark \\ + 56153 & 2 & 4 & \text{minimization} & 2012~\left[1\right] & \checkmark \\ \hline + 291311 & 2 & 6 & \text{minimization} & \text{not yet} & \checkmark \\ + 175 & 3 & 3 & \text{minimization} & \text{not yet} & \checkmark + \end{array}} +\end{array}_{\Large{.}} +$$

+",91,,2293,,5/20/2018 22:07,5/20/2018 22:07,,,,7,,,,CC BY-SA 4.0 +2113,2,,2110,5/20/2018 15:35,,15,,"

If you have 8 items in the list (like in your cards example), then the input of the oracle is 3 (qu)bits. The number of cards in the deck (52) is irrelevant: you need only 3 bits to encode 8 cards.

+ +

You can think of the 3 bits as encoding the position in the list of the card you are searching for; you don't know the position, but the oracle does. So if you are searching for the ace of spades, then the oracle knows that the ace of spades is the 6th card (or 5th counting from zero) and implements the function +$$ +f(\mathbf{x}) = \begin{cases} 1, & \text{if x = 5, or binary '101'} \\ 0, & \text{otherwise} \end{cases}$$

+ +

PS: It is better to think about Grover's algorithm differently: you have an oracle implementing a boolean function which outputs $1$ for a single combination of input bits and zero otherwise, and your task is to find that combination. The problem has the same complexity as searching in an unsorted list or database, which is why Grover's algorithm is usually described as searching in an unsorted database. But applying the algorithm to a real-world database search indeed raises questions that are beyond the algorithm itself. Grover's algorithm is just searching for what the oracle knows.
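To make "the oracle knows, Grover finds" concrete, here is a minimal NumPy simulation (added for illustration) for $N=8$ with the marked position $5$. Note that the algorithm only ever queries the oracle as a black box (a phase flip); it never inspects a list:

```python
import numpy as np

N = 8       # size of the search space (3 qubits)
marked = 5  # the position only the oracle "knows"

# Start in the uniform superposition over all N basis states
state = np.full(N, 1 / np.sqrt(N))

# Grover iteration: oracle phase flip, then inversion about the mean
for _ in range(int(round(np.pi / 4 * np.sqrt(N)))):  # 2 iterations for N = 8
    state[marked] *= -1                # oracle flips the marked amplitude's sign
    state = 2 * state.mean() - state   # diffusion operator

print(int(np.argmax(state**2)))  # 5, found with probability ~0.945
```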

+",2105,,2105,,5/20/2018 16:34,5/20/2018 16:34,,,,4,,,,CC BY-SA 4.0 +2114,1,2116,,5/20/2018 17:02,,8,432,"

$\newcommand{\Ket}[1]{\left|#1\right>}$

+ +

I understand that in general quantum black box algorithms (such as the ones which play a part in Simon's & Deutsch's algorithm) implement a quantum circuit to compute some function $f\left(x\right)$ in such a way that the input is fed with trailing zero qubits, and the result is the input followed by the output, e.g:

+ +

$$\Ket{x}\Ket{0} \rightarrow \Ket{x}\Ket{f(x)}\,.$$

+ +

My question is, since basically one can write the above more explicitly as:$$ +\Ket{x}\otimes\Ket{0} \rightarrow \Ket{x}\otimes\Ket{f(x)} +\,,$$whether it is possible, in case $\Ket{x}$ is not a single basis state but a superposition, to get an output which ""mixes"" inputs with the wrong outputs.

+ +

To clarify what I mean I'll give an example: +Suppose our input is the one qubit superposition:

+ +

$$\Ket{x} = \frac{\Ket{0}+\Ket{1}}{\sqrt{2}}$$

+ +

Will the result of the black-box circuit be the following tensor product:

+ +

$$ +\left\lbrack\frac{\Ket{0}+\Ket{1}}{\sqrt{2}}\right\rbrack +\otimes +\left\lbrack\frac{\Ket{f(0)}+\Ket{f(1)}}{\sqrt{2}}\right\rbrack +$$

+ +

(Which I find confusing and unlikely) +Or, the other option which seems to be more natural:

+ +

$$\frac{\Ket{0}\Ket{f(0)}+\Ket{1}\Ket{f(1)}}{\sqrt{2}}$$

+ +

(Or perhaps both are wrong? :))

+",2428,,15,,5/20/2018 20:17,5/20/2018 20:17,"Clarification needed regarding quantum ""black-box"" circuits",,2,0,,,,CC BY-SA 4.0 +2115,2,,2114,5/20/2018 17:50,,4,,"

Nice question. +Your second example is correct. I will show this by using Equation 2 from here:

+ +

$(A + B)\otimes C = A\otimes C + B\otimes C$.

+ +
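This identity is easy to verify numerically; a small sketch (added for illustration) checks it for random matrices:

```python
import numpy as np

# Numerical check of the distributive property (A + B) (x) C = A (x) C + B (x) C
rng = np.random.default_rng(0)
A, B, C = (rng.standard_normal((2, 2)) for _ in range(3))

assert np.allclose(np.kron(A + B, C), np.kron(A, C) + np.kron(B, C))
print("distributivity holds")
```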

For your example:

+ +

$U_f\left(\left(\frac{|0\rangle+|1\rangle}{\sqrt{2}}\right)\otimes|0\rangle\right) = \frac{1}{\sqrt{2}}\,U_f\left(|0\rangle\otimes|0\rangle\right) + \frac{1}{\sqrt{2}}\,U_f\left(|1\rangle\otimes|0\rangle\right) = \frac{|0\rangle\otimes|f(0)\rangle}{\sqrt{2}}+\frac{|1\rangle\otimes|f(1)\rangle}{\sqrt{2}},$ where $U_f$ denotes the black box, and the first step uses the distributivity above together with the linearity of $U_f$.

+ +

You can see this being done, for example, in the first line of Page 6 in these lecture notes of Prof. John Watrous.

+",2293,,2293,,5/20/2018 17:58,5/20/2018 17:58,,,,0,,,,CC BY-SA 4.0 +2116,2,,2114,5/20/2018 18:30,,4,,"

It is always good to start from considering an example. Suppose you have CNOT gate; then +\begin{align} +\Ket{0}\Ket{0} \rightarrow \Ket{0}\Ket{0}\\ +\Ket{1}\Ket{0} \rightarrow \Ket{1}\Ket{1} +\end{align} +By linearity +\begin{align} +\frac{1}{\sqrt{2}}(\Ket{0}\Ket{0} + \Ket{1}\Ket{0}) \rightarrow \frac{1}{\sqrt{2}}(\Ket{0}\Ket{0}+ \Ket{1}\Ket{1}) +\end{align} +or +\begin{align} +\frac{1}{\sqrt{2}}(\Ket{0} + \Ket{1})\Ket{0} \rightarrow \frac{1}{\sqrt{2}}(\Ket{0}\Ket{0}+ \Ket{1}\Ket{1}) +\end{align} +So your first guess is wrong, but the second guess seems to be true, and it is not hard to convince yourself that it is indeed true.
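The same calculation can be checked numerically. The following sketch (added for illustration) applies the CNOT matrix to $\frac{1}{\sqrt{2}}(\Ket{0}+\Ket{1})\otimes\Ket{0}$ and confirms that the output is the entangled Bell state rather than a product state, ruling out the first guess:

```python
import numpy as np

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

plus = np.array([1.0, 1.0]) / np.sqrt(2)  # (|0> + |1>)/sqrt(2)
zero = np.array([1.0, 0.0])

out = CNOT @ np.kron(plus, zero)
print(out)  # amplitudes of |00>, |01>, |10>, |11>

# The output equals (|00> + |11>)/sqrt(2); reshaped as a 2x2 matrix it has
# (Schmidt) rank 2, so it cannot be written as a tensor product of two
# single-qubit states.
assert np.allclose(out, np.array([1, 0, 0, 1]) / np.sqrt(2))
assert np.linalg.matrix_rank(out.reshape(2, 2)) == 2
```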

+",2105,,,,,5/20/2018 18:30,,,,1,,,,CC BY-SA 4.0 +2118,2,,1796,5/20/2018 20:23,,1,,"

For Shor's algorithm: +Every experiment has been designed for the specific number being factored. The largest number factored without cheating was 15, which is the smallest non-trivial semi-prime on which to apply Shor's algorithm. Major changes would be needed in the experiment (including in the number of qubits) in order to factor 21, for example. IBM's 50-qubit machine can implement Shor's algorithm on larger numbers, but the noise is so bad that you will only get the correct factors if you're very lucky, and that's why it hasn't been done yet.

+ +

For the annealing algorithm: 376289 has been factored with D-Wave's 2048-qubit annealer, and this is not a specific experiment but a general algorithm on an easily programmable machine, but we do not know how this will scale. A very crude upper limit to the number of qubits needed to factor RSA-230 is 5.5 billion qubits (but this can be brought down significantly by better compilers), while Shor's algorithm can do it with 381 qubits.

+",2293,,,,,5/20/2018 20:23,,,,0,,,,CC BY-SA 4.0 +2119,2,,2111,5/20/2018 21:43,,6,,"

For Shor's algorithm: +State of the art is still 15. In order to ""factor"" 21 in the paper Heather mentions, they had to use the fact that $21=7\times 3$ to choose their base $a$. This was explained in 2013 in the paper Pretending to factor numbers on a quantum computer, later published by Nature with a slightly friendlier title. The quantum computer did not factor 21, but it verified that the factors 7 and 3 are indeed correct.

+ +

For the annealing algorithm: State of the art is 376289. But we do not know how this will scale. A very crude upper limit to the number of qubits needed to factor RSA-230 is 5.5 billion qubits (but this can be brought down significantly by better compilers), while Shor's algorithm can do it with 381 qubits.

+",2293,,,,,5/20/2018 21:43,,,,1,,,,CC BY-SA 4.0 +2120,2,,2111,5/20/2018 23:59,,5,,"

The size of the number factored is not a good measure for the complexity of the factorization problem, and correspondingly the power of a quantum algorithm. The relevant measure should rather be the periodicity of the resulting function which appears in the algorithm.

+ +

This is discussed in J. Smolin, G. Smith, A. Vargo: Pretending to factor large numbers on a quantum computer, Nature 499, 163-165 (2013). In particular, the authors also give an example of a number with 20000 binary digits which can be factored with a two-qubit quantum computer, with exactly the same implementation that had been used previously to factor other numbers.

+ +

It should be noted that the ""manual simplifications"" which the authors perform to arrive at this quantum algorithm is something which has also been done e.g. for the original experiment factoring 15.

+",491,,,,,5/20/2018 23:59,,,,0,,,,CC BY-SA 4.0 +2121,2,,2110,5/21/2018 7:09,,7,,"

While it is perhaps easiest for us to think about the function of the oracle as already having computed all these values, that's not what it's doing. In the case you described, the oracle has 8 possible inputs (i.e. encoded in 3 (qu)bits), and the oracle does all the computation that you need on the fly. So, the moment you try to evaluate the oracle for some value $x$, the oracle looks up (in this instance) the card that the value of $x$ corresponds to, and then checks if that card is the marked card. The idea being that each time you call the oracle, it goes through that process once. Overall, you evaluate the function a number of times that's equal to the number of times you call the oracle. The aim of any search algorithm is to call that oracle as few times as possible.

+ +

In case this sounds a little circular (given an input $x$, find which card that corresponds to), remember that your look-up table for what $x$ corresponds to what card can be ordered, which is a different, simpler, much faster search problem.

+ +

The key differences in your example compared to a more realistic usage scenario are:

+ +
    +
  • The search space is usually massive. There's no realistic prospect of precomputing all values. Indeed, that is exactly what we're trying to avoid.

  • +
  • Usually, we don't actually say 'find the ace of spades'. Instead, there's an $f(x)$ that is non-trivial to evaluate to test if $x$ is the 'marked' item or not. The fact that the oracle can take quite a long time to evaluate, even for a single entry, is what makes the oracle the costly part to implement (and all other gates are given for free) and why you need to minimise the number of calls.

  • +
+ +

So, really, the way a classical search would work on your problem is: pick an $x$ at random. Evaluate $y=f(x)$. If $y=1$, return $x$, otherwise repeat. While the net effect of $f(x)$ is 'is the input $x_0$, the marked entry?', that is not the actual calculation that it does.
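That classical loop is short enough to write down. Below is a Python sketch of the brute-force strategy just described — the 52-card setup, the marked position and the call counter are my own illustrative additions, not part of the original answer:

```python
import random

N_CARDS = 52
MARKED = 37                      # hypothetical position of the ace of spades

calls = 0
def f(x):
    """The oracle: all it does, per call, is test one candidate."""
    global calls
    calls += 1
    return 1 if x == MARKED else 0

# classical search: pick x at random, evaluate f(x), repeat until f(x) = 1
candidates = list(range(N_CARDS))
random.shuffle(candidates)       # random order, never re-testing a card
for x in candidates:
    if f(x):
        break

print(x, calls)                  # finds MARKED after ~N/2 oracle calls on average
```

The cost of the search is exactly the number of times `f` is called, which is the quantity Grover's algorithm reduces from $O(N)$ to $O(\sqrt{N})$.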

+",1837,,,,,5/21/2018 7:09,,,,0,,,,CC BY-SA 4.0 +2122,2,,2108,5/21/2018 7:25,,6,,"

Cancel Job is only available for the IBM Q Network, not for IBM Q Experience: https://github.com/QISKit/qiskit-api-py/blob/master/IBMQuantumExperience/IBMQuantumExperience.py#L795

+ +

In the next weeks, we hope that it is available for IBM Q Experience too.

+ +

Regarding the credits... we are analyzing the problem. We have refilled your credits.

+ +

If you have any other issue, please post in qiskit (https://qiskit.org/) slack public channel :).

+",2436,,2436,,5/21/2018 7:34,5/21/2018 7:34,,,,0,,,,CC BY-SA 4.0 +2123,2,,2106,5/21/2018 8:26,,27,,"

The terminology of 'surface code' is a little bit variable. It might refer to a whole class of things, variants of the Toric code on different lattices, or it might refer to the Planar code, the specific variant on a square lattice with open boundary conditions.

+

The Toric Code

+

I'll summarise some of the basic properties of the Toric code. Imagine a square lattice with periodic boundary conditions, i.e. the top edge is joined to the bottom edge, and the left edge is joined to the right edge. If you try this with a sheet of paper, you'll find you get a doughnut shape, or torus. On this lattice, we place a qubit on each edge of a square.

+

+

Stabilizers

+

Next, we define a whole bunch of operators. For every square on the lattice (comprising 4 qubits in the middle of each edge), we write +$$ +B_p=XXXX, +$$ +applying a Pauli-$X$ rotation to each of the 4 qubits. The label $p$ refers to 'plaquette', and is just an index so we can later count over the whole set of plaquettes. On every vertex of the lattice (surrounded by 4 qubits), we define +$$ +A_s=ZZZZ. +$$ +$s$ refers to the star shape and, again, will let us sum over all such terms.

+

We observe that all of these terms mutually commute. It's trivial for $[A_s,A_{s'}]=[B_p,B_{p'}]=0$ because Pauli operators commute with themselves and $\mathbb{I}$. More care is required with $[A_s,B_p]=0$, but note that these two terms either have 0 or 2 sites in common, and pairs of different Pauli operators commute, $[XX,ZZ]=0$.

+

Codespace

+

Since all these operators commute, we can define a simultaneous eigenstate of them all, a state $|\psi\rangle$ such that +$$ +\forall s:A_s|\psi\rangle=|\psi\rangle\qquad\forall p:B_p|\psi\rangle=|\psi\rangle. +$$ +This defines the codespace of the code. We should determine how large it is.

+

For an $N\times N$ lattice, there are $2N^2$ qubits (one per edge), so the Hilbert space dimension is $2^{2N^2}$. There are $N^2$ terms $A_s$ and $N^2$ terms $B_p$, which we collectively refer to as stabilizers. Each has eigenvalues $\pm 1$ (to see this, just note that $A_s^2=B_p^2=\mathbb{I}$) in equal number, and when we combine them, each halves the dimension of the Hilbert space, i.e. we would think that this uniquely defines a state.

+

Now, however, observe that $\prod_sA_s=\prod_pB_p=\mathbb{I}$: each qubit is included in two stars and two plaquettes. This means that one of the $A_s$ and one of the $B_p$ is linearly dependent on all the others, and does not further reduce the size of the Hilbert space. In other words, the stabilizer relations define a Hilbert space of dimension 4; the code can encode two qubits.
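This counting is easy to verify by computer. The pure-Python sketch below (my own illustration — the edge-indexing convention is an assumption, not from the answer) builds the stabilizers of an $N\times N$ Toric code as GF(2) bit vectors, confirms that the product of all $A_s$ (and of all $B_p$) is the identity, and counts $2N^2 - (2N^2-2) = 2$ logical qubits:

```python
from functools import reduce

N = 3                                              # 3x3 torus: 2*N*N = 18 qubits
h = lambda i, j: (i % N) * N + (j % N)             # qubit on horizontal edge at vertex (i, j)
v = lambda i, j: N * N + (i % N) * N + (j % N)     # qubit on vertical edge at vertex (i, j)

def row(edges):                                    # support set -> GF(2) vector, stored as an int
    r = 0
    for e in edges:
        r |= 1 << e
    return r

# A_s: the 4 edges meeting at vertex (i, j); B_p: the 4 edges around a square
stars = [row({h(i, j), h(i, j - 1), v(i, j), v(i - 1, j)})
         for i in range(N) for j in range(N)]
plaqs = [row({h(i, j), h(i + 1, j), v(i, j), v(i, j + 1)})
         for i in range(N) for j in range(N)]

def gf2_rank(rows):
    basis = {}                                     # leading bit -> basis vector
    for r in rows:
        while r:
            lead = r.bit_length() - 1
            if lead in basis:
                r ^= basis[lead]
            else:
                basis[lead] = r
                break
    return len(basis)

assert reduce(lambda a, b: a ^ b, stars) == 0      # product of all A_s is the identity
assert reduce(lambda a, b: a ^ b, plaqs) == 0      # likewise for the B_p
k = 2 * N * N - gf2_rank(stars) - gf2_rank(plaqs)
print(k)                                           # 2 logical qubits
```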

+

Logical Operators

+

How do we encode a quantum state in the Toric code? We need to know the logical operators: $X_{1,L}$, $Z_{1,L}$, $X_{2,L}$ and $Z_{2,L}$. All four must commute with all the stabilizers, and be linearly independent from them, and must generate the algebra of two qubits. Commutation of operators on the two different logical qubits: +$$ +[X_{1,L},X_{2,L}]=0\quad [X_{1,L},Z_{2,L}]=0 \quad [Z_{1,L},Z_{2,L}]=0\quad [Z_{1,L},X_{2,L}]=0 +$$ +and anti-commutation of the two on each qubit: +$$ +\{X_{1,L},Z_{1,L}\}=0\qquad\{X_{2,L},Z_{2,L}\}=0 +$$

+

There's a couple of different conventions for how to label the different operators. I'll go with my favourite (which is probably the less popular):

+
    +
  • Take a horizontal line on the lattice. On every qubit, apply $Z$. This is $Z_{1,L}$. In fact, any horizontal line is just as good.

    +
  • +
  • Take a vertical line on the lattice. On every qubit, apply $Z$. This is $X_{2,L}$ (the other convention would label it as $Z_{2,L}$)

    +
  • +
  • Take a horizontal strip of qubits, each of which is in the middle of a vertical edge. On every qubit, apply $X$. This is $Z_{2,L}$.

    +
  • +
  • Take a vertical strip of qubits, each of which is in the middle of a horizontal edge. On every qubit, apply $X$. This is $X_{1,L}$.

    +
  • +
+

You'll see that the operators that are supposed to anti-commute meet at exactly one site, with an $X$ and a $Z$.

+

Ultimately, we define the logical basis states of the code by +$$ +|\psi_{x,y}\rangle: Z_{1,L}|\psi_{x,y}\rangle=(-1)^x|\psi_{x,y}\rangle,\qquad Z_{2,L}|\psi_{x,y}\rangle=(-1)^y|\psi_{x,y}\rangle +$$

+

The distance of the code is $N$ because the shortest sequence of single-qubit operators that converts between two logical states constitutes $N$ Pauli operators on a loop around the torus.

+

Error Detection and Correction

+

Once you have a code, with some qubits stored in the codespace, you want to keep it there. To achieve this, we need error correction. Each round of error correction comprises measuring the value of every stabilizer. Each $A_s$ and $B_p$ gives an answer $\pm 1$. This is your error syndrome. It is then up to you, depending on what error model you think applies to your system, to determine where you think the errors have occurred, and try to fix them. There's a lot of ongoing work on fast decoders that can perform this classical computation as efficiently as possible.

+

One crucial feature of the Toric code is that you do not have to identify exactly where an error has occurred to perfectly correct it; the code is degenerate. The only relevant thing is that you get rid of the errors without implementing a logical gate. For example, the green line in the figure is one of the basic errors in the system, called an anyon pair. If the sequence of $X$ rotations depicted had been enacted, then the stabilizers on the two squares with the green blobs in would have given a $-1$ answer, while all others give $+1$. To correct this, we could apply $X$ along exactly the path where the errors happened, although our error syndrome certainly doesn't give us the path information. There are many other paths of $X$ errors that would give the same syndrome. We can implement any of these, and there are two options. Either the overall sequence of $X$ rotations forms a trivial path, or one that loops around the torus in at least one direction. If it's a trivial path (i.e. one that forms a closed path that does not loop around the torus), then we have successfully corrected the error. This is at the heart of the topological nature of the code; many paths are equivalent, and it all comes down to whether or not these loops around the torus have been completed.
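The "only the endpoints light up" behaviour is easy to see in a toy simulation (a pure-Python sketch I'm adding for illustration; the edge-indexing convention is my own assumption). Since each $A_s$ anticommutes with an $X$ error on exactly one of its edges, the syndrome at a vertex is just the parity of the number of error edges touching it:

```python
N = 5
h = lambda i, j: (i % N) * N + (j % N)            # qubit on horizontal edge at vertex (i, j)
v = lambda i, j: N * N + (i % N) * N + (j % N)    # qubit on vertical edge at vertex (i, j)

def star(i, j):                                   # the 4 edges meeting at vertex (i, j)
    return {h(i, j), h(i, j - 1), v(i, j), v(i - 1, j)}

def syndrome(errors):                             # vertices where A_s measures -1
    return {(i, j) for i in range(N) for j in range(N)
            if len(star(i, j) & errors) % 2 == 1}

# an open chain of X errors along three horizontal edges...
chain = {h(1, 0), h(1, 1), h(1, 2)}
print(syndrome(chain))        # {(1, 0), (1, 3)}: anyons at the two endpoints only

# ...whereas a closed loop (here: around one plaquette) triggers no syndrome at all
loop = {h(0, 0), h(1, 0), v(0, 0), v(0, 1)}
print(syndrome(loop))         # set(): trivial paths are invisible, hence degeneracy
```

Interior vertices of the chain touch two error edges, so their parity is even; only the two endpoints see odd parity, which is exactly the anyon pair described above.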

+

Error Correcting Threshold

+

While the distance of the code is $N$, it is not the case that every combination of $N$ errors causes a logical error. Indeed, the vast majority of $N$ errors can be corrected. It is only once the errors become of much higher density that error correction fails. There are interesting proofs that make connections to phase transitions or the random bond Ising model, which are very good at pinning down when that is. For example, if you take an error model where $X$ and $Z$ errors occur independently at random on each qubit with probability $p$, the threshold is about $p=0.11$, i.e. $11\%$. It also has a finite fault-tolerant threshold (where you allow for faulty measurements and corrections with some per-qubit error rate).

+

The Planar Code

+

Details are largely identical to the Toric code, except that the boundary conditions of the lattice are open instead of periodic. This means that, at the edges, the stabilizers get defined slightly differently. In this case, there is only one logical qubit in the code instead of two.

+",1837,,4757,,6/13/2022 21:09,6/13/2022 21:09,,,,5,,,,CC BY-SA 4.0 +2124,2,,2105,5/21/2018 9:40,,3,,"

Since you haven't told us how you've tried to do the calculation, I don't know where you're making the mistake. (I'm also unfamiliar with Quirk, which seems to be using an unusual ordering of basis elements in the output matrix. If anything looks inconsistent in the following answer, try swapping the middle two rows/columns, and adding a transpose!)

+ +

The first important thing is to not use the percentage values in the transition matrices. These correspond to probabilities, but to do any further work, we need to know about probability amplitudes. So, the unitary output of your first sequence of gates is +$$ +\left( +\begin{array}{cccc} + \frac{\sqrt{2+\sqrt{2}}}{2} & \frac{1}{4} \left(-2+\sqrt{2}\right) & 0 & -\frac{i}{2 \sqrt{2}} \\ + 0 & \frac{1}{4} \left(2+\sqrt{2}\right) & -\frac{1}{2} i \sqrt{2-\sqrt{2}} & -\frac{i}{2 \sqrt{2}} \\ + 0 & -\frac{i}{2 \sqrt{2}} & \frac{\sqrt{2+\sqrt{2}}}{2} & \frac{1}{4} \left(-2+\sqrt{2}\right) \\ + -\frac{1}{2} i \sqrt{2-\sqrt{2}} & -\frac{i}{2 \sqrt{2}} & 0 & \frac{1}{4} \left(2+\sqrt{2}\right) \\ +\end{array} +\right) +$$ +Now we can apply the final sequence of gates; an $X$ on qubit 1, a controlled-$Y^{1/4}$ and another $X$ on qubit 1. You get the output unitary +$$ +\left( +\begin{array}{cccc} + \frac{1}{4} \left(2+\sqrt{2}\right) & -\frac{1}{2} \sqrt{1-\frac{1}{\sqrt{2}}} & -\frac{i}{2 \sqrt{2}} & + -\frac{1}{2} i \sqrt{\frac{1}{2} \left(2-\sqrt{2}\right)} \\ + 0 & \frac{1}{4} \left(2+\sqrt{2}\right) & -\frac{1}{2} i \sqrt{2-\sqrt{2}} & -\frac{i}{2 \sqrt{2}} \\ + -\frac{i}{2 \sqrt{2}} & -\frac{1}{2} i \sqrt{\frac{1}{2} \left(2-\sqrt{2}\right)} & \frac{1}{4} \left(2+\sqrt{2}\right) + & -\frac{1}{2} \sqrt{1-\frac{1}{\sqrt{2}}} \\ + -\frac{1}{2} i \sqrt{2-\sqrt{2}} & -\frac{i}{2 \sqrt{2}} & 0 & \frac{1}{4} \left(2+\sqrt{2}\right) \\ +\end{array} +\right) +$$ +The mod-square of each element is then +$$ +\left( +\begin{array}{cccc} + \frac{1}{16} \left(2+\sqrt{2}\right)^2 & \frac{1}{8} \left(2-\sqrt{2}\right) & \frac{1}{8} & \frac{1}{8} + \left(2-\sqrt{2}\right) \\ + 0 & \frac{1}{16} \left(2+\sqrt{2}\right)^2 & \frac{1}{4} \left(2-\sqrt{2}\right) & \frac{1}{8} \\ + \frac{1}{8} & \frac{1}{8} \left(2-\sqrt{2}\right) & \frac{1}{16} \left(2+\sqrt{2}\right)^2 & \frac{1}{8} + \left(2-\sqrt{2}\right) \\ + \frac{1}{4} \left(2-\sqrt{2}\right) & \frac{1}{8} & 0 & \frac{1}{16} 
\left(2+\sqrt{2}\right)^2 \\ +\end{array} +\right). +$$ +Numerically, these are the same as given in the question: +$$ +\left( +\begin{array}{cccc} + 0.729 & 0.0732 & 0.125 & 0.0732 \\ + 0 & 0.729 & 0.146 & 0.125 \\ + 0.125 & 0.0732 & 0.729 & 0.0732 \\ + 0.146 & 0.125 & 0 & 0.729 \\ +\end{array} +\right) +$$

+",1837,,,,,5/21/2018 9:40,,,,0,,,,CC BY-SA 4.0 +2125,2,,1999,5/21/2018 17:39,,3,,"

One of many possible constructions that gives some insight into this question, at least to me, is as follows. Using the CSD (cosine-sine decomposition), you can expand any unitary operator into a product of efficient gates V that fit nicely into a binary tree pattern. In the case of the QFT, that binary tree collapses to a single branch of the tree, all the V not in the branch are 1.

+ +

Ref: +Quantum Fast Fourier Transform Viewed as a Special Case of Recursive Application of Cosine-Sine Decomposition, by myself.

+",1974,,91,,08-07-2018 21:32,08-07-2018 21:32,,,,2,,,,CC BY-SA 4.0 +2126,1,,,5/21/2018 23:05,,12,545,"

One deals with the notion of superposition when studying Shor's algorithm, but how about entanglement? Where exactly does it appear in this particular circuit?

+

I assume it is not yet present in the initial state $\left|0\right>\left|0\right>$, but what about further along in the process, after applying the Hadamard gates, the controlled-U gates and the inverse Fourier transform?

+

I understand that the first and second registers have to be entangled, otherwise, the final measurement on one of them wouldn't collapse the other one, which gives us the period (well, kind of, we need to use continued fractions to infer it).

+",1889,,55,,09-03-2021 11:37,09-03-2021 11:37,Where exactly does entanglement appear in Shor's algorithm?,,1,3,,,,CC BY-SA 4.0 +2127,2,,2126,5/21/2018 23:56,,7,,"

Your question contains the answer, as you mention the controlled-U gate which is an entangling gate. You will see in the page I linked, that the action of c-U on $|+\rangle|0\rangle$ for example can turn the state into one which cannot be written as a product:

+ +

$|+\rangle|0\rangle = \left( \frac{|0\rangle+|1\rangle}{\sqrt{2}} \right)\otimes |0\rangle = \left( \frac{|00\rangle+|10\rangle}{\sqrt{2}} \right)= \left( \frac{|00\rangle+|1\rangle U|0\rangle}{\sqrt{2}} \right) = \left( \frac{|00\rangle+|1\rangle \left(u_{00}|0\rangle + u_{10}|1\rangle\right)}{\sqrt{2}} \right)$

+ +

In the last step, I used the definition of $U$ from the linked controlled-U description:

+ +

+ +

An example where this gate is entangling is where $u_{00}$ = 0 and $u_{10}=1$, which is just the $\rm{CNOT}$ gate. In that case we get $\frac{1}{\sqrt{2}}(|00\rangle + |11\rangle)$ which is the Bell state and is maximally entangled.

+ +

You may also be interested in this article on: ""Entanglement and its role in Shor's algorithm"".

+",2293,,2293,,5/22/2018 0:04,5/22/2018 0:04,,,,2,,,,CC BY-SA 4.0 +2128,1,,,5/22/2018 10:14,,8,734,"

The so-called depolarizing channel is the channel model that is mostly used when constructing quantum error correction codes. The action of such channel over a quantum state $\rho$ is

+

$$\rho\rightarrow(1-p_x-p_y-p_z)\rho+p_xX\rho X+p_yY\rho Y+p_zZ\rho Z$$

+

I was wondering which other channel models are considered in quantum communications, and how the construction of error correction codes is affected by considering such other channels.

+",2371,,55,,8/19/2020 10:15,8/19/2020 10:15,"What quantum channels are considered in quantum communication, and how does this choice affect the construction of error correction codes?",,1,1,,,,CC BY-SA 4.0 +2129,1,2131,,5/22/2018 16:22,,23,1431,"

Is entanglement transitive, in a mathematical sense?

+ +
+ +

More concretely, my question is this:

+ +

Consider 3 qubits $q_1, q_2$ and $q_3$. Assume that

+ +
    +
  • $q_1$ and $q_2$ are entangled, and that
  • +
  • $q_2$ and $q_3$ are entangled
  • +
+ +

Then, are $q_1$ and $q_3$ entangled? If so, why? If not, is there a concrete counterexample?

+ +
+ +

On my notion of entanglement:

+ +
    +
  • qubits $q_1$ and $q_2$ are entangled, if after tracing out $q_3$, the qubits $q_1$ and $q_2$ are entangled (tracing out $q_3$ corresponds to measuring $q_3$ and discarding the result).
  • +
  • qubits $q_2$ and $q_3$ are entangled, if after tracing out $q_1$, the qubits $q_2$ and $q_3$ are entangled.
  • +
  • qubits $q_1$ and $q_3$ are entangled, if after tracing out $q_2$, the qubits $q_1$ and $q_3$ are entangled.
  • +
+ +

Feel free to use any other reasonable notion of entanglement (not necessarily the one above), as long as you clearly state that notion.

+",2444,,26,,12/23/2018 12:28,12/23/2018 12:28,Is entanglement transitive?,,3,5,,,,CC BY-SA 4.0 +2130,2,,2129,5/22/2018 18:00,,6,,"

I read the following in Freudenthal triple classication of three-qubit entanglement:

+ +

""Dür et al. (Three qubits can be entangled in two inequivalent ways) used simple arguments concerning the conservation of ranks of reduced density matrices to show that there are only six three-qubit equivalence classes:

+ +
    +
  • Null (The trivial zero entanglement orbit corresponding to vanishing states)
  • +
  • Separable (Another zero entanglement orbit for completely factorisable product states)
  • +
  • Biseparable (Three classes of bipartite entanglement: A-BC, B-AC, C-AB)
  • +
  • W (Three-way entangled states that do not maximally +violate Bell-type inequalities) and
  • +
  • GHZ (maximally violate Bell-type inequalities)""
  • +
+ +

which, as I understand it, means the answer to your question is yes: if A and B are entangled and B and C are entangled, you are necessarily in one of the three-way entangled classes, so A and C are also entangled.

+",1847,,,,,5/22/2018 18:00,,,,0,,,,CC BY-SA 4.0 +2131,2,,2129,5/22/2018 18:02,,12,,"

TL;DR: It depends on how you choose to measure entanglement on a pair of qubits. If you trace out the extra qubits, then ""No"". If you measure the qubits (with the freedom to choose the optimal measurement basis), then ""Yes"".

+ +
+ +

Let $|\Psi\rangle$ be a pure quantum state of 3 qubits, labelled A, B and C. We will say that A and B are entangled if $\rho_{AB}=\text{Tr}_C(|\Psi\rangle\langle\Psi|)$ is not positive under the action of the partial transpose map. This is a necessary and sufficient condition for detecting entanglement in a two-qubit system. The partial trace formalism is equivalent to measuring qubit C in an arbitrary basis and discarding the result.

+ +

There's a class of counter-examples that show that entanglement is not transitive, of the form +$$ +|\Psi\rangle=\frac{1}{\sqrt{2}}(|000\rangle+|1\phi\phi\rangle), +$$ +provided $|\phi\rangle\neq |0\rangle,|1\rangle$. If you trace out qubit $B$ or qubit $C$, you'll get the same density matrix both times: +$$ +\rho_{AC}=\rho_{AB}=\frac12\left(|00\rangle\langle 00|+|1\phi\rangle\langle 1\phi|+|00\rangle\langle 1\phi|\langle\phi|0\rangle+|1\phi\rangle\langle 00|\langle0|\phi\rangle\right) +$$ +You can take the partial transpose of this (taking it on the first system is the cleanest): +$$ +\rho^{PT}=\frac12\left(|00\rangle\langle 00|+|1\phi\rangle\langle 1\phi|+|10\rangle\langle 0\phi|\langle\phi|0\rangle+|0\phi\rangle\langle 10|\langle0|\phi\rangle\right) +$$ +Now take the determinant (which is equal to the product of the eigenvalues). You get +$$ +\text{det}(\rho^{PT})=-\frac{1}{16}|\langle 0|\phi\rangle|^2(1-|\langle 0|\phi\rangle|^2)^2, +$$ +which is negative, so there must be a negative eigenvalue. Thus, $(AB)$ and $(AC)$ are entangled pairs. Meanwhile +$$ +\rho_{BC}=\frac12(|00\rangle\langle 00|+|\phi\phi\rangle\langle\phi\phi |). +$$ +Since this is a valid density matrix, it is non-negative. However, the partial transpose is just equal to itself. So, there are no negative eigenvalues and $(BC)$ is not entangled.
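A numerical check of this counterexample (a numpy sketch I'm adding for illustration; I pick $|\phi\rangle=|+\rangle$, but any $|\phi\rangle\neq|0\rangle,|1\rangle$ works):

```python
import numpy as np

ket0, ket1 = np.eye(2)
phi = (ket0 + ket1) / np.sqrt(2)                  # |phi> = |+>

psi = (np.kron(ket0, np.kron(ket0, ket0)) +
       np.kron(ket1, np.kron(phi, phi))) / np.sqrt(2)
rho = np.outer(psi, psi)
t = rho.reshape([2] * 6)                          # indices (a, b, c, a', b', c')

rho_AB = np.einsum('abkdek->abde', t).reshape(4, 4)   # trace out C
rho_AC = np.einsum('akbdke->abde', t).reshape(4, 4)   # trace out B
rho_BC = np.einsum('kabkde->abde', t).reshape(4, 4)   # trace out A
assert np.allclose(rho_AB, rho_AC)                    # same matrix both times

def pt_first(r):                                  # partial transpose on the first qubit
    return r.reshape(2, 2, 2, 2).transpose(2, 1, 0, 3).reshape(4, 4)

print(np.linalg.eigvalsh(pt_first(rho_AB)).min())  # negative -> (AB) entangled
print(np.linalg.eigvalsh(pt_first(rho_BC)).min())  # >= 0 -> (BC) not entangled
```

For $|\phi\rangle=|+\rangle$ the determinant formula above gives $-1/64$, so the partial transpose of $\rho_{AB}$ is guaranteed a negative eigenvalue, while $\rho_{BC}$ stays positive.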

+ +

Localizable Entanglement

+ +

One might, instead, talk about the localizable entanglement. Before further clarification, this is what I thought the OP was referring to. In this case, instead of tracing out a qubit, one can measure it in a basis of your choice, and calculate the results separately for each measurement outcome. (There is later some averaging process, but that will be irrelevant to us here.) In this case, my response is specifically about pure states, not mixed states.

+ +

The key here is that there are different classes of entangled state. For 3 qubits, there are 6 different types of pure state:

+ +
    +
  • a fully separable state
  • +
  • 3 types where there is an entangled state between two parties, and a separable state on the third
  • +
  • a W-state
  • +
  • a GHZ state
  • +
+ +

Any type of quantum state can be converted into one of the standard representatives of each class just by local measurements and classical communication between the parties. Note that the conditions of $(q_1,q_2)$ and $(q_2,q_3)$ being entangled remove the first 4 cases, so we only have to consider the last 2 cases, the W-state and the GHZ-state. Both representatives are symmetric under exchange of the particles: +$$ +|W\rangle=\frac{1}{\sqrt{3}}(|001\rangle+|010\rangle+|100\rangle)\qquad |GHZ\rangle=\frac{1}{\sqrt{2}}(|000\rangle+|111\rangle) +$$ +(i.e. if I swap qubits A and B, I still have the same state). +So, these representatives must have the required transitivity properties: if A and B are entangled, then B and C are entangled, as are A and C. In particular, both of these representatives can be measured in the X basis in order to localize the entanglement. Thus, for any pure state that you're given, you can absorb the measurement that converts it into the standard representative into the measurement that localizes the entanglement, and you're done!
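For concreteness, here is the GHZ case in numpy (a sketch added for illustration): measuring qubit A in the $X$ basis leaves B and C in a maximally entangled Bell pair, whereas measuring A in the $Z$ basis would leave them in a product state.

```python
import numpy as np

ket0, ket1 = np.eye(2)
ghz = (np.kron(ket0, np.kron(ket0, ket0)) +
       np.kron(ket1, np.kron(ket1, ket1))) / np.sqrt(2)

M = ghz.reshape(2, 4)                      # rows: amplitudes paired with A = |0>, |1>

def schmidt_rank(bc_state):                # Schmidt rank of a 2-qubit pure state
    s = np.linalg.svd(bc_state.reshape(2, 2), compute_uv=False)
    return int(np.sum(s > 1e-9))

# measure A in the X basis, outcome |+>: BC collapses to a Bell state
plus = np.array([1.0, 1.0]) / np.sqrt(2)
bc = plus @ M
bc /= np.linalg.norm(bc)
bell = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)
assert np.allclose(bc, bell)
print(schmidt_rank(bc))                    # 2: entanglement localized onto B, C

# measure A in the Z basis, outcome |0>: BC collapses to the product state |00>
bc0 = M[0] / np.linalg.norm(M[0])
print(schmidt_rank(bc0))                   # 1: the entanglement is destroyed
```

This is why the choice of measurement basis matters for localizable entanglement: the $X$ basis keeps the entanglement alive on the remaining pair, the $Z$ basis kills it.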

+",1837,,1837,,5/24/2018 4:00,5/24/2018 4:00,,,,5,,,,CC BY-SA 4.0 +2132,2,,2129,5/22/2018 18:07,,7,,"

This isn't an answer, but instead just some background facts that are important to know about in order to avoid ""not even wrong"" territory on these types of questions.

+ +

""Entanglement"" is not all-or-nothing. Just saying ""q1 is entangled with q2 and q2 is entangled with q3"" is not enough information to determine the answer to questions like ""if I measure q3, will q1 still be entangled with q2?"". Entanglement gets complicated when dealing with larger systems. You really need to know the specific state, and the measurement, and whether you are permitted to condition on the result of the measurement.

+ +

It may be the case that q1,q2,q3 are entangled as a group but if you trace out any one of the qubits then the density matrix of the remaining two describes a mere classically correlated state. (E.g. this happens with GHZ states.)

+ +

You should be aware of the monogamy of entanglement. Past a certain threshold, increasing the strength of the entanglement between q1 and q2 must decrease the strength of entanglement between q1 and q3 (and equivalently q2 and q3).

+",119,,119,,5/22/2018 18:09,5/22/2018 18:09,,,,2,,,,CC BY-SA 4.0 +2135,1,2136,,5/23/2018 15:42,,9,707,"

Consider the measurement of the syndrome for the standard 3-qubit code to correct bit flips: +$$ +\def\place#1#2#3{\smash{\rlap{\hskip{#1px}\raise{#2px}{#3}}}} +\def\hline#1#2#3{\place{#1}{#2}{\rule{#3px}{1px}}} +\def\vline#1#2#3{\place{#1}{#2}{\rule{1px}{#3px}}} +% +\hline{30}{30}{210} +\hline{30}{60}{210} +\hline{30}{150}{210} +\hline{30}{180}{210} +\hline{30}{210}{210} +% +\vline{60}{60}{150} +\vline{90}{60}{120} +\vline{120}{30}{150} +\vline{150}{30}{120} +% +\place{46}{51}{\huge{\oplus}} +\place{76}{51}{\huge{\oplus}} +\place{106}{21}{\huge{\oplus}} +\place{136}{21}{\huge{\oplus}} +% +\place{30}{205}{\llap{Z_1}} +\place{30}{175}{\llap{Z_2}} +\place{30}{145}{\llap{Z_3}} +% +\place{241}{41}{\left. \rule{0px}{22.5px} \right\} M} +% +\phantom{\rule{280px}{225px}}_{\Large{.}} +$$

+ +

Here $M$ is a measurement in the computational basis. This circuit measures $Z_1Z_2$ and $Z_2Z_3$ of the encoded block (i.e. the top three). My question is why measure these using ancilla qubits - why not just measure the 3 encoded qubits directly? Such a setup would mean you would not have to use CNOT gates, which, from what I have heard, are hard to implement.

+ +

(Note: I have only given this 3-qubit code as an example; I am interested in general syndrome measurements on general codes.)

+",2015,,1847,,6/15/2018 8:00,6/15/2018 8:00,Why do we use ancilla qubits for error syndrome measurements?,,2,0,,,,CC BY-SA 4.0 +2136,2,,2135,5/23/2018 15:52,,7,,"

The key point of quantum error correction is precisely to correct the errors without collapsing the qubits, right? If we measure the encoded qubits we project the qubits to $\left|0\right>$ or $\left|1\right>$ and lose all the information in the coefficients $\alpha \left|0\right> + \beta \left|1\right>$. By measuring ancilla qubits we can know what has happened to the qubits without actually knowing the values of the qubits: this enables us to correct errors in a non-destructive way, and carry on with our quantum operation.

+",1847,,15,,5/24/2018 8:43,5/24/2018 8:43,,,,0,,,,CC BY-SA 4.0 +2137,2,,2097,5/23/2018 17:59,,3,,"

This is a soft answer, offered b/c I'm interested in the etymologies of terms we use in math and science. For what it's worth:

+ +
+

entanglement (n.) + 1630s, ""that which entangles,"" from entangle + -ment. From 1680s as ""act of entangling."" Foreign entanglements does not appear as such in Washington's Farewell Address (1796), though he warns against them. The phrase is found in William Coxe's 1798 memoirs of Sir Robert Walpole. +
SOURCE: Online Etymological Dictionary

+
+ +

from:

+ +
+

tangle (n.) + 1610s, ""a tangled condition, a snarl of threads,"" from tangle (v.).
SOURCE: Online Etymological Dictionary

+
+ +

I make this connection because the idea of determinacy and strings/threads goes back to at least the ancient Greek Moirai (the Fates).

+",2447,,2293,,5/24/2018 4:38,5/24/2018 4:38,,,,3,,,,CC BY-SA 4.0 +2138,2,,2135,5/24/2018 4:09,,6,,"

When you say ""why not just measure the 3 encoded qubits directly"", are you thinking that you could measure $Z_1$, $Z_2$ and $Z_3$, and that, from there, you can calculate the values $Z_1Z_2$ and $Z_2Z_3$?

+ +

This is sort of true: if your only goal is to obtain the observables $Z_1Z_2$ and $Z_2Z_3$, you could do this.

+ +

But that is not your end goal, which is, instead, to preserve the information encoded in the logical state. The only way you can do this is to learn nothing about the state that is encoded. Effectively, measuring in this way gives you too much information: it gives you 3 bits of information (1 bit from each measurement that you perform) when you only need 2 bits. Where does this extra bit come from? It is one bit of information about the state that you have encoded. In other words, you have measured the encoded state, destroying any superposition that you are specifically trying to use the error correcting code to protect.
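A quick numpy sketch (my own illustration, not from the original answer) makes the distinction concrete: for an encoded state $\alpha|000\rangle+\beta|111\rangle$, the parity measurement $Z_1Z_2$ has a deterministic outcome and leaves the state untouched, whereas measuring $Z_1$ directly collapses the superposition.

```python
import numpy as np

Z, I = np.diag([1.0, -1.0]), np.eye(2)
a, b = 0.6, 0.8                               # arbitrary logical amplitudes
psi = np.zeros(8)
psi[0b000], psi[0b111] = a, b                 # encoded state a|000> + b|111>

# syndrome measurement Z1 Z2: project onto the +1 eigenspace
P_even = (np.eye(8) + np.kron(np.kron(Z, Z), I)) / 2
assert np.allclose(P_even @ psi, psi)         # outcome +1 is certain; state unchanged

# direct measurement of Z1 (outcome |0>): superposition destroyed
P0 = np.kron(np.diag([1.0, 0.0]), np.eye(4))
collapsed = P0 @ psi
collapsed /= np.linalg.norm(collapsed)
print(collapsed)                              # |000>: all memory of a, b is gone
```

Both $|000\rangle$ and $|111\rangle$ have even parity on qubits 1 and 2, so the parity bit carries no information about $\alpha$ and $\beta$ — that is exactly the "2 bits, not 3" point above.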

+",1837,,,,,5/24/2018 4:09,,,,0,,,,CC BY-SA 4.0 +2139,2,,2097,5/24/2018 4:22,,6,,"

According to Matthias Christandl (who did some research on this to resolve a bet with Artur Ekert), while the term ""entanglement"" was first used in 1935, as already relayed in other answers, the concept was discussed by Schrodinger in 1932. This set of slides (slides 3-8 in particular) from a talk reproduce part of a document that details this. The full image is also on the front of his thesis.

+",1837,,,,,5/24/2018 4:22,,,,0,,,,CC BY-SA 4.0 +2140,1,2141,,5/24/2018 7:57,,5,274,"

In this pdf for the Simon's algorithm we need $n-1$ independent $\mathbf y$ such that: +$$ \mathbf y \cdot \mathbf s=0$$ +to find $\mathbf s$. On page 6 of the pdf the author writes that the probability of getting $n-1$ independent values of $\mathbf y$ in $n-1$ attempts is: +$$P_{\text{ind}}=(1-1/N(\mathbf s))(1-2/N(\mathbf s))\cdots (1-2^{n-1}/N(\mathbf s))\tag{1}$$ +where $N(\mathbf s)=2^{n-1}$ if $\mathbf s \ne 0$ and $2^n$ if $\mathbf s=0$. Clearly then $P_{\text{ind}}=0$ for $\mathbf{s}\ne 0$ - which I believe to be wrong.

+ +

My question is, therefore: Is formula (1) wrong and if so what is the correct version. If it is not wrong how do we interpret +$P_{\text{ind}}=0$ .

+",2015,,26,,10/24/2019 13:23,10/24/2019 13:23,Simon's Algorithm Probability of Independence,,1,0,,,,CC BY-SA 4.0 +2141,2,,2140,5/24/2018 8:26,,4,,"

At first glance, the formula looks slightly wrong: the last term in the product should only be $(1-2^{n-2}/N(s))$, giving +$$ +P_{ind}=\prod_{k=1}^{n-1}\left(1-\frac{2^{k-1}}{N(s)}\right) +$$ +overall. Thus every term is a half, or larger.

+ +

My reasoning is as follows: you perform a measurement and get a random outcome. The first time you do this, it can be any outcome except the all 0 string. This happens with probability $1-1/N(s)$.

+ +

The second time, you want any string except the all zeros, or the answer you got last time, $y_1$. Thus, the term $1-2/N(s)$.

+ +

The third time, you want any string except the all zeros, $y_1$, $y_2$ or $y_1\oplus y_2$. Thus, the term $1-4/N(s)$.

+ +

Once you have $k-1$ linearly independent strings $y_1$ to $y_{k-1}$ and you're trying to find the $k^{th}$, there are $2^{k-1}$ answers you don't want to get: the $2^{k-1}$ answers that are linearly dependent on the strings you already have (note that this counting includes the all zeros string). You keep going until the last term, $k=n-1$, because you're trying to find $n-1$ linearly independent cases.
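This product formula is easy to check against a direct simulation (a Python sketch added for illustration; the hidden string $s$ below is an arbitrary choice). For $n=4$ and $s\neq 0$ the formula gives $(1-\tfrac18)(1-\tfrac28)(1-\tfrac48)=\tfrac{21}{64}\approx 0.328$:

```python
import random

n, s = 4, 0b1011                  # arbitrary hidden string, s != 0
N = 2 ** (n - 1)                  # N(s) for s != 0

# the strings Simon measurements can return: all y with y . s = 0 (mod 2)
orth = [y for y in range(2 ** n) if bin(y & s).count('1') % 2 == 0]
assert len(orth) == N

def gf2_rank(vecs):               # rank over GF(2), vectors stored as ints
    basis = {}
    for r in vecs:
        while r:
            lead = r.bit_length() - 1
            if lead in basis:
                r ^= basis[lead]
            else:
                basis[lead] = r
                break
    return len(basis)

formula = 1.0
for k in range(1, n):
    formula *= 1 - 2 ** (k - 1) / N

random.seed(1)
trials = 20000
hits = sum(gf2_rank(random.choices(orth, k=n - 1)) == n - 1
           for _ in range(trials))
print(formula, hits / trials)     # both close to 21/64 ≈ 0.328
```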

+ +

Incidentally, this is not the way that I would ever make the argument. Who cares about the probability of needing exactly $n-1$ calls? You can just keep repeating as many times as you need to in order to find $n-1$ linearly independent strings. Since we've already argued that the worst-case probability of finding a new linearly independent string is 1/2, this means that, on average, no more than $2(n-1)$ trials would be required (and actually somewhat less, because early on you're far more likely to get a hit). You could also apply a Chernoff bound to prove that the probability of needing significantly more runs than that is vanishingly small. OK, that's essentially where the solution gets to, it just feels a little excessive (to me).
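For what it's worth, the corrected product is easy to check numerically; here's a short Python sketch (my own illustration, not from the pdf) confirming that, for $s\ne 0$, every factor is at least a half and the product stays bounded away from zero:

```python
from functools import reduce

def p_ind(n):
    # Probability that n-1 samples of y (with y.s = 0, s != 0) are linearly
    # independent: the k-th new sample must avoid the 2^(k-1) strings
    # spanned by those already found.
    N = 2 ** (n - 1)          # size of the solution space when s != 0
    terms = [1 - 2 ** (k - 1) / N for k in range(1, n)]
    assert all(t >= 0.5 for t in terms)   # every factor is at least 1/2
    return reduce(lambda a, b: a * b, terms, 1.0)

p3 = p_ind(3)    # (1 - 1/4)(1 - 2/4) = 0.375
p10 = p_ind(10)  # stays above ~0.289 as n grows
```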

+",1837,,1837,,5/25/2018 7:00,5/25/2018 7:00,,,,0,,,,CC BY-SA 4.0 +2142,2,,1356,5/24/2018 15:37,,7,,"

Classical Version

+ +

Think about a simple strategy of classical error correction. You've got a single bit that you want to encode, +$$ +0\mapsto 00000\qquad 1\mapsto 11111 +$$ +I've chosen to encode it into 5 bits, but any odd number would do (the more the better). Now, let's assume some bit-flip errors have occurred, so what we have is +$$ +01010. +$$ +Was this originally the encoded 0 or 1? If we assume that the probability of error per bit, $p$, is less than a half, then we expect that fewer than half the bits have errors. So, we look at the number of 0s and the number of 1s. Whichever there's more of is the one that we assume is the one we started with. This is called a majority vote. There's some probability that we're wrong, but the more bits we encoded into, the smaller this probability.
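A minimal classical sketch of this majority-vote decoding (my own illustration):

```python
from collections import Counter

def encode(bit, n=5):
    # repetition code: copy the logical bit into n physical bits
    return [bit] * n

def majority_decode(bits):
    # majority vote: assumes the per-bit error probability is below 1/2
    return Counter(bits).most_common(1)[0][0]

decoded = majority_decode([0, 1, 0, 1, 0])   # the noisy word 01010
```

With fewer than half the bits flipped, the vote recovers the encoded 0.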

+ +

On the other hand, if we know that $p>\frac12$, we can still do the correction. You'd just be implementing a minority vote! The point, however, is that you have to do completely the opposite operation. There's a sharp threshold here that shows, at the very least, that you need to know which regime you're working in.

+ +

For fault-tolerance, things get messier: the $01010$ string that you got might not be what the state actually is. It might be something different, still with some errors that you have to correct, but the measurements you've made in reading the bits are also slightly faulty. Crudely, you might imagine this turns the sharp transition into an ambiguous region where you don't really know what to do. Still, if error probabilities are low enough, or high enough, you can correct, you just need to know which is the case.

+ +

Quantum Version

+ +

In general, things get worse in the quantum regime because you have to deal with two types of errors: bit flip errors ($X$) and phase flip errors ($Z$), and that tends to make the ambiguous region bigger. I won't go further into details here. However, there's a cute argument in the quantum regime that may be illuminating.

+ +

Imagine you have the state of a single logical qubit stored in a quantum error correcting code $|\psi\rangle$ across $N$ physical qubits. It doesn't matter what that code is, this is a completely general argument. Now imagine there's so much noise that it destroys the quantum state on $\lceil N/2\rceil$ qubits (""so much noise"" actually means that errors happen with 50:50 probability, not close to 100% which, as we've already said, can be corrected). It is impossible to correct for that error. How do I know that? Imagine I had a completely noiseless version, and I keep $\lfloor N/2\rfloor$ qubits and give the remaining qubits to you. We each introduce enough blank qubits so that we've got $N$ qubits in total, and we run error correction on them.

If it were possible to perform that error correction, the outcome would be that both of us would have the original state $|\psi\rangle$. We would have cloned the logical qubit! But cloning is impossible, so the error correction must have been impossible.

+",1837,,1837,,5/26/2018 4:26,5/26/2018 4:26,,,,0,,,,CC BY-SA 4.0 +2143,1,2144,,5/25/2018 10:42,,11,3150,"

I want to be able to apply controlled versions of the $R_y$ gate (rotation around the Y axis) for real devices on the IBM Q Experience. Can this be done? If so, how?

+",409,,26,,12/23/2018 13:51,12/23/2018 13:51,How can a controlled-Ry be made from CNOTs and rotations?,,1,0,,,,CC BY-SA 4.0 +2144,2,,2143,5/25/2018 10:42,,7,,"

You can make controlled $R_y$ gates from cnots and $R_y$ rotations, so they can be done on any pair of qubits that allows a cnot.

+ +

Two examples of controlled-Ys are shown in the image below. They are on the same circuit, one after the other.

+ +

+ +

The first has qubit 1 as control and qubit 0 as target, which is easy because the cnots can be directly implemented in the right direction.

+ +

In the second example, qubit 0 is control and qubit 1 is target. This is achieved by using four H gates for each cnot to effectively turn it around.

+ +

This second example can also be optimized further. There are two adjacent H gates on the top line that can be canceled. And since H anticommutes with Y, $H\,u3(\theta,0,0)\,H$ can always be replaced with $u3(-\theta,0,0)$. (Thanks to @DaftWullie for pointing these out).

+ +

+ +

The single qubit gates used are $u3(\theta,0,0)$, which are $R_y(\theta)$ rotations. The angles used are pi/2 and -pi/2 in this case. These cancel when the control is $|0\rangle$. This gives the expected effect of the controlled-Y acting trivially in this case.

+ +

When the control is $|1\rangle$, the cnots perform an X either side of the $u3(-\pi/2,0,0)$, which has the effect

+ +

$X \, u3(\theta,0,0) \, X = u3(-\theta,0,0)$

+ +

This means that the $u3(-\pi/2,0,0)$ flips to $u3(\pi/2,0,0)$. The end effect on the control is then

+ +

$u3(\pi/2,0,0) \, u3(\pi/2,0,0) = u3(\pi,0,0) = Y$

+ +

which is a $Y$.

+ +

A more general controlled $R_y$ rotation means that you want to do a fraction of a $Y$. So just reduce both angles by the corresponding fraction.
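As a numerical sanity check (my own sketch, taking the control to be the first qubit and using $u3(\theta,0,0)=R_y(\theta)$), the $\pm\theta/2$ construction indeed reproduces a controlled-$R_y(\theta)$:

```python
import numpy as np

def ry(t):
    # Ry(t) rotation, i.e. u3(t, 0, 0)
    return np.array([[np.cos(t / 2), -np.sin(t / 2)],
                     [np.sin(t / 2),  np.cos(t / 2)]])

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z22 = np.zeros((2, 2))
CX = np.block([[I2, Z22], [Z22, X]])   # cnot, control = first qubit

theta = 1.23   # arbitrary rotation angle
# circuit read left to right: Ry(theta/2), cnot, Ry(-theta/2), cnot
circ = CX @ np.kron(I2, ry(-theta / 2)) @ CX @ np.kron(I2, ry(theta / 2))

c_ry = np.block([[I2, Z22], [Z22, ry(theta)]])   # target controlled-Ry
ok = np.allclose(circ, c_ry)
```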

+",409,,409,,5/25/2018 11:40,5/25/2018 11:40,,,,5,,,,CC BY-SA 4.0 +2145,1,2146,,5/25/2018 14:20,,11,719,"

I am a bit confused about the necessity of an oracle qubit in Grover's algorithm.

+ +

My question is, does it depend on how you implement your oracle whether you need an oracle qubit or not? Or, it there any reason for an oracle qubit? (such as, there exist some problems that cannot be solved without an oracle qubit, or it's easier to think about the problem with an oracle qubit, or it's a convention, etc)

+ +

Many resources introduce Grover's algorithm with an oracle qubit, but I found there are some cases that you do not need an oracle qubit.

+ +

For example, here are two implementations of Grover's algorithm in IBM Q simulator. One is using an oracle qubit, and the other is not. In both cases, I would like to find |11> from a space of |00>, |01>, |10>, and |11>. In both cases, oracle successfully flips |11> to -|11>.

+ +

・With an oracle qubit (Link to IBM Q simulator) +

+ +

・Without an oracle qubit (Link to IBM Q simulator) +

+",2100,,55,,3/28/2019 19:43,3/28/2019 19:43,Why is an oracle qubit necessary in Grover's algorithm?,,1,0,,,,CC BY-SA 4.0 +2146,2,,2145,5/25/2018 14:46,,6,,"

From the perspective of defining the quantum circuit, the oracle qubit is not strictly necessary. For example, in Grover's search, you might normally define the action of the oracle as $$ U|x\rangle|y\rangle=|x\rangle|y\oplus f(x)\rangle, $$ where $f(x)$ returns 1 if $x$ is the marked item. However, we always use this in a particular way, inputting $(|0\rangle-|1\rangle)/\sqrt{2}$ on the oracle qubit. This has the net effect of just implementing a phase on the marked item. In other words, it is entirely equivalent to the implementation of a new unitary $$ \tilde U|x\rangle=(-1)^{f(x)}|x\rangle $$

+ +

However, where it makes a difference is the practical reality. Searching for an item, we will actually need some sort of circuit that recognises the marked item, based on the input of $x$. At that point, it's far easier to think about outputting the answer onto the oracle bit, rather than somehow directly building the unitary that gives the phase without using the oracle qubit. Indeed, I suspect if I asked you to design a generic version $\tilde U$, you'd come up with $U$ with the extra qubit as the solution.
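To illustrate the equivalence concretely, here is a small numpy sketch (mine; the choice of $n=2$ and marked item $x=3$ is arbitrary) checking that $U$ acting on $|x\rangle(|0\rangle-|1\rangle)/\sqrt{2}$ gives exactly the phase $(-1)^{f(x)}$:

```python
import numpy as np

n, marked = 2, 3                 # hypothetical oracle marking x = 3
N = 2 ** n
f = lambda x: 1 if x == marked else 0

# U|x>|y> = |x>|y xor f(x)>, built as a permutation matrix on n+1 qubits
U = np.zeros((2 * N, 2 * N))
for x in range(N):
    for y in range(2):
        U[2 * x + (y ^ f(x)), 2 * x + y] = 1

minus = np.array([1, -1]) / np.sqrt(2)     # (|0> - |1>)/sqrt(2)
ok = True
for x in range(N):
    e = np.zeros(N); e[x] = 1
    out = U @ np.kron(e, minus)
    # same state back, with the phase (-1)^f(x): the phase-oracle action
    ok = ok and np.allclose(out, (-1) ** f(x) * np.kron(e, minus))
```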

+",1837,,,,,5/25/2018 14:46,,,,3,,,,CC BY-SA 4.0 +2147,1,2148,,5/25/2018 17:48,,10,307,"

Suppose I have a classical-classical-quantum channel $W : \mathcal{X}\times\mathcal{Y} \rightarrow \mathcal{D}(\mathcal{H})$, where $\mathcal{X},\mathcal{Y}$ are finite sets and $\mathcal{D}(\mathcal{H})$ is the set of density matrices on finite dimensional, complex Hilbert space $\mathcal{H}$.

+ +

Suppose $p_x$ is the uniform distribution on $\mathcal{X}$ and $p_y$ is the uniform distribution on $\mathcal{Y}$. Further, define for distributions $p_1$ on $\mathcal{X}$ and $p_2$ on $\mathcal{Y}$, the Holevo information $$\chi(p_1, p_2, W) := H\left(\sum_{x,y}p_1(x)p_2(y)W(x,y)\right) - \sum_{x,y}p_1(x)p_2(y)H(W(x,y))$$

+ +

where $H$ is the von Neumann entropy.

+ +

I would like to show, for $$ p_1 := \sup_{p}\left\{ \chi(p, p_y, W)\right\}, p_2 := \sup_{p}\left\{ \chi(p_x, p, W)\right\}$$ that, $$\chi(p_1, p_2, W) \geq \chi(p_1, p_y, W) \text{ and } \chi(p_1, p_2, W)\geq \chi(p_x, p_2, W).$$

+ +

So far, I'm not yet convinced that the statement is true in the first place. I haven't made much progress in proving this, but it seems like some sort of triangle inequality could verify the claim.

+ +

Thanks for any suggestions regarding if the statement should hold and tips on how to prove it.

+",509,,55,,07-10-2021 23:33,07-10-2021 23:33,Proof of an Holevo information inequality for a classical-classical-quantum channel,,1,1,,,,CC BY-SA 4.0 +2148,2,,2147,5/25/2018 19:00,,11,,"

It appears that the statement is not true in general. Suppose $X = Y = \{0,1\}$, $\mathcal{H}$ is the Hilbert space corresponding to a single qubit, and $W$ is defined as \begin{align} W(0,0) & = | 0 \rangle \langle 0 |,\\ W(0,1) & = | 1 \rangle \langle 1 |,\\ W(1,0) & = | 1 \rangle \langle 1 |,\\ W(1,1) & = \frac{1}{2} | 0 \rangle \langle 0 | + \frac{1}{2} | 1 \rangle \langle 1 |. \end{align} If $p_y$ is the uniform distribution, the optimal choice for $p_1$ is $p_1(0) = 1$ and $p_1(1) = 0$, which gives $\chi(p_1,p_y,W) = 1$, which is the maximum possible value. (I assume you mean to define $p_1$ and $p_2$ as the argmax of those expressions, not the supremum.) Likewise, if $p_x$ is uniform, $p_2(0) = 1$ and $p_2(1) = 0$ is optimal, and the value is the same. However, $\chi(p_1,p_2,W) = 0$, so the inequality does not hold.
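Since every $W(x,y)$ here is diagonal in the computational basis, the von Neumann entropies reduce to Shannon entropies of the diagonal probabilities, so the counterexample is easy to verify numerically (my own sketch):

```python
import numpy as np

def H(p):
    # entropy of a diagonal density matrix, given as its probabilities
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# the counterexample channel: all outputs are diagonal states
W = {(0, 0): [1, 0], (0, 1): [0, 1], (1, 0): [0, 1], (1, 1): [0.5, 0.5]}

def chi(p1, p2):
    avg = sum(p1[x] * p2[y] * np.asarray(W[x, y])
              for x in (0, 1) for y in (0, 1))
    return H(avg) - sum(p1[x] * p2[y] * H(W[x, y])
                        for x in (0, 1) for y in (0, 1))

uni = [0.5, 0.5]
p1 = [1, 0]              # optimal against uniform p_y
p2 = [1, 0]              # optimal against uniform p_x (by symmetry)
chi_1y = chi(p1, uni)    # equals 1, the maximum possible value
chi_12 = chi(p1, p2)     # equals 0, so the claimed inequality fails
```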

+",1764,,,,,5/25/2018 19:00,,,,0,,,,CC BY-SA 4.0 +2149,1,,,5/25/2018 20:59,,11,807,"

I am confused about what to input to Oracle in Grover's algorithm.

+ +

Don't we need to input what we are looking for and where to find what we are looking for to Oracle, in addition to the superpositioned quantum states?

+ +

For example, assume we have a list of people's names {""Alice"", ""Bob"", ""Corey"", ""Dio""}, and we want to find if ""Dio"" is on the list. Then, Oracle should take $1/2(|00\rangle + |01\rangle + |10\rangle + |11\rangle)$ as an input and output $1/2(|00\rangle + |01\rangle + |10\rangle - |11\rangle)$. I kind of understand that.

+ +

But don't we also need to input the word ""Dio"" and the list {""Alice"", ""Bob"", ""Corey"", ""Dio""} to Oracle? Otherwise, how can Oracle return output? Is it not explicitly mentioned since Oracle is a black box and we do not have to think about how to implement it?

+ +

My understanding about Oracle is,

+ +
    +
  • Oracle has the ability to recognize if the word ""Dio"" is in the list.
  • +
  • To do so, Oracle takes the superpositioned quantum states as an input, where each quantum state represents the index of the list.
  • +
  • So, input $|00\rangle$ to Oracle means, check if the word ""Dio"" is in the index 0 of the list and return $-|00\rangle$ if yes and return $|00\rangle$ otherwise.
  • +
  • In our case, Oracle returns $1/2(|00\rangle + |01\rangle + |10\rangle - |11\rangle)$.
  • +
  • But what about the list and the word?
  • +
+",2100,,55,,8/16/2019 18:57,02-09-2021 19:19,Grover's algorithm: what to input to Oracle?,,2,1,,,,CC BY-SA 4.0 +2150,2,,2149,5/25/2018 22:42,,5,,"

Although popular explanations of Grover's algorithm talk about searching over a list, in actuality you use it to search over possible inputs 0..N-1 to a function. The cost of the algorithm is $O(\sqrt{N} \cdot F)$ where $N$ is the number of inputs you want to search over and $F$ is the cost of evaluating the function. If you want that function to search over a list, you must hardcode the list into the function.

+ +

Hard coding the function to use a list of $N$ items is usually a very bad idea, because it tends to cause $F$ to equal $O(N)$. Which would make the total cost of Grover's algorithm $O(\sqrt{N} \cdot F) = O(\sqrt{N} \cdot N) = O(N^{1.5})$. Which sort of defeats the whole purpose, since $N^{1.5} > N$.
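To put illustrative (made-up) numbers on this: with $N=10^6$ entries, a hardcoded-list oracle makes Grover's total cost worse than the plain classical scan:

```python
import math

N = 10 ** 6
classical = N                       # just scan the list
grover_calls = math.isqrt(N)        # ~sqrt(N) oracle calls
grover_hardcoded = int(N ** 1.5)    # sqrt(N) calls, each costing O(N)
worse = grover_hardcoded > classical
```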

+",119,,,,,5/25/2018 22:42,,,,4,,,,CC BY-SA 4.0 +2151,1,2152,,5/26/2018 23:24,,35,3876,"

In the past few days, I have been trying to collect material (mostly research papers) related to Quantum machine learning and its applications, for a summer project. Here are a few which I found interesting (from a superficial reading):

+ + + +

However, coming from the more physics-y end of the spectrum, I don't have much background knowledge in this area and am finding most of the specialized materials impenetrable. Ciliberto et al.'s paper: Quantum machine learning: a classical perspective somewhat helped me to grasp some of the basic concepts. I'm looking for similar but more elaborate introductory material. It would be very helpful if you could recommend textbooks, video lectures, etc. which provide a good introduction to the field of quantum machine learning.

+ +

For instance, Nielsen and Chuang's textbook is a great introduction to quantum computing and quantum algorithms in general and goes quite far in terms of introductory material (although it begins at a very basic level and covers all the necessary portions of quantum mechanics and linear algebra and even the basics of computational complexity!). Is there anything similar for quantum machine learning?

+ +

P.S: I do realize that quantum machine learning is a vast area. In case there is any confusion, I would like to point out that I'm mainly looking for textbooks/introductory papers/lectures which cover the details of the quantum analogues of classical machine learning algorithms.

+",26,,55,,09-01-2020 22:27,08-09-2021 16:12,Introductory material for quantum machine learning,,5,0,,,,CC BY-SA 4.0 +2152,2,,2151,5/26/2018 23:48,,23,,"

The Nielsen and Chuang of Quantum Machine Learning is this extensive review called ""Quantum Machine Learning"" published in Nature in 2017. The arXiv version is here and has been updated as recently as 10 May 2018.

+",2293,,1847,,5/27/2018 5:32,5/27/2018 5:32,,,,5,,,,CC BY-SA 4.0 +2154,1,,,5/27/2018 5:18,,7,134,"

Could anyone point to some references examining Bell inequality violations at large distances please?

+

I see many times, in pop science articles and research literature alike, that the quantum information of the entangled state is transmitted instantaneously to all components of the state. Strictly speaking, we must say that this is a theoretical prediction and provide a lower bound on the speed of information transfer from an experiment, right?

+",1867,,55,,2/22/2021 15:59,2/22/2021 15:59,References examining Bell inequality violations at large distances,,3,1,,,,CC BY-SA 4.0 +2155,1,,,5/27/2018 5:41,,7,504,"

If I have the $X$ gate acting on a qubit and the $\lambda_6$ gate acting on a qutrit, where $\lambda_6$ is a Gell-Mann matrix, the system is subjected to the Hamiltonian:

+

$\lambda_6X= \begin{pmatrix}0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 1\\ 0 & 0 & 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 0 & 0 & 0 \end{pmatrix} $

+

In case anyone doubts this matrix, it can be generated with the following script (MATLAB/octave):

+
lambda6=[0 0 0; 0 0 1; 0 1 0];
+X=      [0 1; 1 0 ];
+kron(lambda6,X)
+
+

However consider the alternative Hamiltonian:

+

$-\frac{1}{2}Z\lambda_1 + \frac{1}{2}\lambda_1 - \frac{1}{\sqrt{3}}X\lambda_8+\frac{1}{3}X$.

+

This is the exact same Hamiltonian!

+

The following script proves it:

+
lambda1=[0 1 0;1 0 0;0 0 0];
+lambda8=[1 0 0;0 1 0;0 0 -2]/sqrt(3);
+Z=      [1 0; 0 -1 ];
+round(-0.5*kron(Z,lambda1)+0.5*kron(eye(2),lambda1)-(1/sqrt(3))*kron(X,lambda8)+(1/3)*kron(X,eye(3)))
+
+

The "round" in the last line of code can be removed, but the format will be uglier because some of the 0's end up being around $10^{-16}$.

+

I thought the Pauli decomposition for two qubits is unique. Why, then, is the Pauli-Gell-Mann decomposition of a qubit-qutrit Hamiltonian non-unique, and how would the decomposition $\lambda_6X$ of the above 6x6 matrix be obtained?

+",2293,,2293,,04-01-2021 22:34,04-01-2021 22:34,Why is the decomposition of a qubit-qutrit Hamiltonian in terms of Pauli and Gell-Mann matrices not unique?,,2,0,,,,CC BY-SA 4.0 +2156,1,,,5/27/2018 6:20,,5,62,"

Far from my expertise, but sheer curiosity. I've read that PostBQP (""a complexity class consisting of all of the computational problems solvable in polynomial time on a quantum Turing machine with postselection and bounded error"") is very powerful. Still, I don't understand the practical sense of assuming you can decide the value an output qubit takes.

+ +

My question: Have post-selection quantum computing experiments been implemented (or is it possible that they will be implemented)? (And, if the answer is yes: how does post-selection take place in a way that practically enhances your computing power?)

+",1847,,,,,5/27/2018 6:20,Is PostBQP experimentally relevant?,,0,2,,5/27/2018 8:57,,CC BY-SA 4.0 +2157,2,,2155,5/27/2018 9:00,,4,,"

This looks essentially similar to the property of non-commutativity of the Kronecker product: $X\otimes \lambda_6\neq \lambda_6\otimes X$:

+ +

$$X\otimes\lambda_6 = \begin{pmatrix}0&1 \\1&0\end{pmatrix}\otimes \begin{pmatrix}0&0&0 \\0&0&1 \\0&1&0\end{pmatrix} = \begin{pmatrix}0&0&0&0&0&0 \\ 0&0&0&0&0&1\\ 0&0&0&0&1&0\\ 0&0&0&0&0&0\\ 0&0&1&0&0&0\\ 0&1&0&0&0&0 \end{pmatrix}$$

+ +

Unsurprisingly, you can't decompose $-\frac{1}{2}Z\lambda_1 + \frac{1}{2}I_2\lambda_1 - \frac{1}{\sqrt{3}}X\lambda_8+\frac{1}{3}XI_3 = \lambda_6X$ into $X\lambda_6$.

+ +

However, as both matrices are square, they are 'permutation similar', so that $X\otimes \lambda_6=P^T\left(\lambda_6\otimes X\right)P$ for some permutation matrix $P$

+ +

In other words, to answer part 1, for a given permutation/ordering, the decomposition is unique, but when the ordering is changed, the matrix/Hamiltonian undergoes a rotation $\left(P^T = P^{-1}\right)$, which also changes the decomposition.

+ +

It becomes clear what can be used to decompose a matrix of this form by splitting it into sub-matrices: by writing $$X\lambda_6 = \begin{pmatrix}A&B\\C&D\end{pmatrix},$$ where each sub-matrix $A, B, C$ and $D$ is a $3\times 3$ matrix, it becomes clear that $A=D=0$ and $B=C=\lambda_6$, which verifies $$X\lambda_6 = \begin{pmatrix}0&\lambda_6\\\lambda_6&0\end{pmatrix} = X\otimes \lambda_6$$

+ +

Performing the rotation/permuting and applying the same idea gives $$M=\begin{pmatrix}0&0&0&0&0&0 \\ 0&0&0&0&0&0\\ 0&0&0&0&0&1\\ 0&0&0&0&1&0\\ 0&0&0&1&0&0\\ 0&0&1&0&0&0 \end{pmatrix} = \begin{pmatrix}A&B\\C&D\end{pmatrix},$$ which gives that $$A=0,\quad B=C=\begin{pmatrix}0&0&0\\0&0&0\\0&0&1\end{pmatrix},\quad D=\begin{pmatrix}0&1&0\\1&0&0\\0&0&0\end{pmatrix}=\lambda_1$$

+ +

It follows that $B=C=\frac{1}{3}I_3-\frac{1}{\sqrt{3}}\lambda_8$, giving $$M=\begin{pmatrix}0&\frac{1}{3}I_3-\frac{1}{\sqrt{3}}\lambda_8\\\frac{1}{3}I_3-\frac{1}{\sqrt{3}}\lambda_8&\lambda_1\end{pmatrix}=\frac{1}{2}\left(I-Z\right)\otimes\lambda_1 + X\otimes\left(\frac{1}{3}I_3-\frac{1}{\sqrt{3}}\lambda_8\right).$$

+ +

Changing the order of the decomposition: $$M=\begin{pmatrix}A&&B&&C\\D&&E&&F\\G&&H&&J\end{pmatrix},$$ which gives $A=B=C=D=E=G=J=0$ and $F=H=X$, in turn giving $$M=\begin{pmatrix}0&&0&&0\\0&&0&&X\\0&&X&&0\end{pmatrix}=\lambda_6\otimes X$$
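For completeness, all of the above can be verified numerically; the swap matrix $S$ below implements the permutation $|i\rangle|j\rangle\mapsto|j\rangle|i\rangle$ for the $2\times 3$ system (a numpy sketch of mine):

```python
import numpy as np

X = np.array([[0., 1.], [1., 0.]])
Z = np.diag([1., -1.])
l1 = np.array([[0., 1., 0.], [1., 0., 0.], [0., 0., 0.]])
l6 = np.array([[0., 0., 0.], [0., 0., 1.], [0., 1., 0.]])
l8 = np.diag([1., 1., -2.]) / np.sqrt(3)

A, B = np.kron(X, l6), np.kron(l6, X)
differ = not np.allclose(A, B)      # the Kronecker product doesn't commute

# swap matrix S: |i>|j> -> |j>|i> for a 2x3 system, so B = S A S^T
S = np.zeros((6, 6))
for i in range(2):
    for j in range(3):
        S[j * 2 + i, i * 3 + j] = 1
similar = np.allclose(B, S @ A @ S.T)   # permutation similarity

# the decomposition of lambda_6 (x) X derived above
M = (0.5 * np.kron(np.eye(2) - Z, l1)
     + np.kron(X, np.eye(3) / 3 - l8 / np.sqrt(3)))
decomp = np.allclose(M, B)
```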

+",23,,23,,5/28/2018 23:16,5/28/2018 23:16,,,,3,,,,CC BY-SA 4.0 +2158,2,,2154,5/27/2018 11:11,,3,,"

The largest scale Bell test done thus far is the ""Cosmic Bell Test"" of 2017. It ruled out hidden variables within a distance of 600 light years from Earth.

+ +

The 16 significant Bell test experiments performed between 1972 and 2018 are listed here with references to the original papers.

+",2293,,,,,5/27/2018 11:11,,,,2,,,,CC BY-SA 4.0 +2159,1,2160,,5/27/2018 11:21,,8,304,"

The Pauli group, $P_n$, is given by $$P_n=\{ \pm 1, \pm i\}\otimes \{ I,\sigma_x,\sigma_y,\sigma_z\}^{\otimes n}$$ Abelian subgroups of this which do not contain the element $(-1)*I$ correspond to a stabilizer group. If there are $r$ generators of one such subgroup, $\mathcal{G}$, then the joint $+1$ eigenspace has dimension $2^{n-r}$.

+ +

This then leads to the natural question of whether we have that $r\le n$ and how can it be proved (either way)?

+ +

I guess a (valid?) proof would be along the lines of: if $r \gt n$ we would have a stabilized subspace of fractional dimension - this is not allowed, so $r\le n$. But if one exists I would prefer a proof considering only the group properties and not the space on which it acts.

+",2015,,,,,5/27/2018 14:25,Maximum number of Stabilizer Generators?,,1,0,,,,CC BY-SA 4.0 +2160,2,,2159,5/27/2018 14:25,,5,,"

Consider a subgroup $G $ of the Pauli group with at least one operator that acts non-trivially on some qubit.

+ +
    +
  • Given any qubit $j $, for which the group contains an operator $S_j $ which acts on $j $ non-trivially, there is a Clifford group operator $C_j $ such that $C_j S_j C_j^\dagger =Z_j $, acting on qubit $j $ alone. (Why?)

  • +
  • If $G_j = \{ C_j S C_j^\dagger \,\vert\, S \in G \}$ and $G $ is abelian, then $G_j = \langle Z_j \rangle \oplus G'_j$, where $G'_j $ does not act on qubit $j $. (Why?)

  • +
  • By induction, we can transform any abelian subgroup on $n $ qubits to a group with at most $n+1$ generators, where up to $n $ of them act on a single qubit with a $Z $ operator. (And what then would the remaining one be?)

  • +
+ +

From this, we can prove that a stabiliser group on $n $ qubits has at most $n $ generators; and with only a little more work, we can show that a stabiliser group with $r $ generators stabilises a subspace of dimension $2^{n-r} $.
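As a concrete (brute-force) illustration of the $n=2$ case, one can use the binary-symplectic representation, in which Pauli strings (mod phase) become vectors in $\mathbb{F}_2^{2n}$, commutation becomes vanishing of the symplectic inner product, and independence becomes linear independence over $\mathbb{F}_2$; the maximal number of independent, mutually commuting strings then comes out as $n$ (this sketch is mine, not part of the hints above):

```python
from itertools import combinations, product

n = 2

def sp(u, v):
    # symplectic inner product on F_2^(2n), writing u = (x|z)
    x1, z1 = u[:n], u[n:]
    x2, z2 = v[:n], v[n:]
    return (sum(a * b for a, b in zip(x1, z2)) +
            sum(a * b for a, b in zip(x2, z1))) % 2

def gf2_rank(vecs):
    # rank over GF(2) via Gaussian elimination on bit-packed rows
    rows = [int(''.join(map(str, v)), 2) for v in vecs]
    r = 0
    for bit in range(2 * n - 1, -1, -1):
        piv = next((i for i, v in enumerate(rows) if v >> bit & 1), None)
        if piv is None:
            continue
        p = rows.pop(piv)
        rows = [v ^ p if v >> bit & 1 else v for v in rows]
        r += 1
    return r

vecs = [v for v in product((0, 1), repeat=2 * n) if any(v)]
best = 0   # largest independent, pairwise-commuting set found
for size in (2, 3):
    for combo in combinations(vecs, size):
        if all(sp(u, v) == 0 for u, v in combinations(combo, 2)):
            best = max(best, gf2_rank(list(combo)))
```

The search confirms that no 3 independent, mutually commuting Pauli strings exist on 2 qubits: the maximum is $n=2$.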

+",124,,,,,5/27/2018 14:25,,,,0,,,,CC BY-SA 4.0 +2161,1,2183,,5/27/2018 14:57,,12,780,"

According to An introduction to quantum machine learning (Schuld, Sinayskiy & Petruccione, 2014), Seth Lloyd et al. say in their paper Quantum algorithms for supervised and unsupervised machine learning that classical information can be encoded into the norm of a quantum state as $|x\rangle = |\vec{x}|^{-1}\vec{x}$. I'm not sure I understand their notation.

+ +

Let's take a simple example. Say I want to store this array: $V = \{3,2,1,2,3,3,5,4\}$ of size $2^{3}$ in the state of a $3$-qubit quantum system.

+ +

I can represent the state of an $3$-qubit system as:

+ +

$|\psi\rangle = a_1|000\rangle + a_2|001\rangle + a_3|010\rangle + a_4|011\rangle + a_5|100\rangle + a_6|101\rangle + a_7|110\rangle + a_8|111\rangle$ (using standard basis) where $a_i\in \Bbb C \ \forall \ 1 \leq i\leq 8$.

+ +

I could represent $V$ as a vector $\vec{V} = 3 \hat{x}_1 + 2 \hat{x}_2 +... + 4 \hat{x}_8$ where $\{\hat{x}_1,\hat{x}_2,...,\hat{x}_8\}$ forms an orthonormal basis in $\Bbb R^{8}$, and write the standard Euclidean norm for it as $|\vec{V}|=\sqrt{3^2+2^2+...+4^2}$.
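For concreteness, here is the normalisation I have in mind (my guess: assign $a_i = v_i/|\vec{V}|$ so the amplitudes square-sum to one):

```python
import numpy as np

V = np.array([3, 2, 1, 2, 3, 3, 5, 4], dtype=float)
amps = V / np.linalg.norm(V)        # a_i = v_i / |V|, with |V| = sqrt(77)
unit = np.isclose(np.sum(amps ** 2), 1.0)   # valid 3-qubit state
p_110 = amps[6] ** 2                # probability of measuring |110>, 25/77
```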

+ +

After this, I'm confused as to how I'd get the coefficients $a_1,a_2,..,a_8$. Should I just assign $3$ to $a_1$, $2$ to $a_2$ and so on?

+ +

But, then again:

+ +
+

Consider the $N=2^{n}$ dimensional complex vector $\vec{v}$ with components $\{v_i=|v_i|e^{i\phi_i}\}$. Assume that $\{|v_i|,\phi_i\}$ are stored as floating point numbers in quantum random access memory. Constructing the $\log_2 N$ qubit quantum state $|v\rangle = |\vec{v}|^{-1/2}\vec{v}$ then takes $\mathcal{O}(\log_2 N)$ steps as long as the sub-norms are also given in the qRAM, in which case any state can be constructed in $\mathcal{O}(\log N)$ steps.

+
+ +

Firstly, I don't understand their notion of a $2^n$ dimensional complex vector. If each of the components of their classical data array has two floating point numbers, wouldn't encoding that into an $n$-qubit quantum state be equivalent to storing a $2\times 2^{n}$ size classical array in an $n$-qubit system? Yes, I do know that $a_1,a_2,..,a_{2^n}$ are complex numbers having both magnitude and phase, and hence can store a $2\times 2^{n}$ amount of classical information. But they don't mention anywhere how they will convert classical data (say in form of a $2\times 2^{n}$ array) into that form. Moreover, there seems to be a restriction that the phase of a complex number $a_i$ can only range from $-\pi$ to $+\pi$.

+ +

Secondly, let us assume that the initial data array we wanted to store in our quantum system was actually $V=\{\{3,\phi_1\},\{2,\phi_2\},...,\{4,\phi_8\}\}$.

+ +

If they define $|v\rangle$ as $|\vec{v}|^{-1/2}\vec{v}$ then $|V\rangle$ in our example would look something like $(\sqrt{3^2+2^2+...+4^2})^{-1/2}(|3e^{i\phi_1}||000\rangle + |2e^{i\phi_2}||001\rangle + ... + |4e^{i\phi_8}||111\rangle)$. But then we're losing all the information about the phases $\phi_i$, aren't we? So what was the use of starting with a complex vector (having both a phase and magnitude) in the first place, when we're losing that information when converting to $|V\rangle$ anyway? Or are we supposed to consider $|V\rangle$ as $(\sqrt{3^2+2^2+...+4^2})^{-1/2}(3e^{i\phi_1}|000\rangle + 2e^{i\phi_2}|001\rangle + ... + 4e^{i\phi_8}|111\rangle)$?

+ +

It would be really helpful if someone could explain where I am going wrong using some concrete examples regarding storage of classical data in an $n$-qubit system.

+",26,,55,,2/20/2021 16:25,2/20/2021 16:25,Embedding classical information into norm of a quantum state,,1,2,,,,CC BY-SA 4.0 +2166,1,2219,,5/27/2018 15:35,,13,1502,"

I am getting confused about Grover's algorithm and it's connection to complexity classes.

+ +

Grover's algorithm finds an element $k$ (such that $f(k)=1$) in a database of $N=2^n$ elements with $$\sim \sqrt{N}=2^{n/2}$$ calls to the oracle.

+ +

So we have the following problem:

+ +
+

Problem: Find a $k$ in the database such that $f(k)=1$

+
+ +

Now I am aware that this is not a decision problem and thus our normal definitions of complexity classes $\text{P}$, $\text{NP}$ etc. don't really apply. But I am curious to know how we would define the complexity class in such a case - and whether it is done with respect to $N$ or $n$?

+ +

Furthermore, Grover's algorithm can be used as a subroutine. I have read in several places that Grover's algorithm does not change the complexity class of a problem - is there a heuristic way to see this?

+",2015,,1837,,5/28/2018 14:07,06-02-2018 06:07,Grover's Algorithm and its relation to complexity classes?,,4,2,,,,CC BY-SA 4.0 +2167,2,,2166,5/27/2018 16:20,,-1,,"

Forget about database. Grover's algorithm solves Boolean Satisfiability Problem, namely:

+ +

You have a boolean circuit with $n$ inputs and a single output. The circuit outputs $1$ for a single configuration of input bits, otherwise it outputs $0$. Find the configuration of input bits.

+ +

The problem is known to be NP-complete.

+",2105,,,,,5/27/2018 16:20,,,,3,,,,CC BY-SA 4.0 +2170,2,,2154,5/28/2018 8:25,,5,,"

I think there is a conceptual thing going on here that needs clarifying (I'll leave the experimental links to others). I presume the question is predicated on the idea that, well, measurements are made within a certain time of each other, which is compared to the distance between the places where the measurements are being made. The concern is that this only gives a bound: if information is transmitted, it happens faster than some velocity which we have now bounded.

+ +

However, what one ought to do is consider what special relativity tells you: if two events are space-like separated, there is no notion of temporal ordering. Different observers, travelling at different velocities, can see the events happening in different orders (or simultaneously). So, all you need to know is that the measurement events are space-like separated (i.e. the distance between the events is larger than speed of light $\times$ time between events), and that is enough.

+ +

Also, there's a terminology issue. Bell tests do not talk about the transmission of information, but the presence of correlation. The term information would suggest that one party can choose some information to communicate to another party. This cannot happen faster than the speed of light. But the ""random decision"" made when a measurement is made on an entangled state is somehow resolved everywhere simultaneously, yet does not communicate any information.

+",1837,,,,,5/28/2018 8:25,,,,2,,,,CC BY-SA 4.0 +2171,2,,1648,5/28/2018 13:39,,3,,"

There has been a great deal of scientific debate over evidence of quantum effects in biology due to the difficulties of reproducing scientific evidence. Some have found evidence of quantum coherence while others have argued this is not the case. (Ball, 2018).

+ +

The most recent research study (in Nature Chemistry, May 2018) found evidence of a specific oscillating signal indicating superpositioning. The scientists found quantum effects that lasted precisely as expected based on theory and proved that these belong to energy superimposed on two molecules simultaneously. This resulted in the conclusion that biological systems exhibit the same quantum effects as non-biological systems.

+ +

These effects have been observed in the Fenna-Matthews-Olsen reaction centre of the bacteria - Chlorobium Tepidum (Borroso-Flores, 2017).

+ +

Research evidences that the dimensions and time scales of the photosynthetic energy transfer processes put them close to the quantum/classical border. There are various explanations for this, but they seem to indicate that an energetically noisy quantum/classical limit is ideal for excitation energy transfer control (Keren 2018).

+ +

Quantum Biology as Biological Semiconductors

+ +
+

Such dynamics in biology rely on spin chemistry (radical pairs), and it has been recognised that “Certain organic semiconductors (OLEDs) exhibit magnetoelectroluminescence or magnetoconductance, the mechanism of which shares essentially identical physics with radical pairs in biology”

+
+ +

 PJ Hore (2016).

+ +

The terms 'spin singlets' and 'triplets' are used in spintronics (in investigating semiconductors), and the term 'radical pairs' (comprising spin singlets or triplets) is used to discuss spin chemistry in biology. But all the terms describe the same phenomena (just in different disciplinary realms). Recently there have been interdisciplinary calls for the integration of spin chemistry and spintronics in recognition of this (J Matysik, 2017).

+ +

Biological semiconductors that have already been identified by scientists include melanin and peptides, and peptides are now being explored as scaffolds for quantum computing.

+ +

Ultrafast Electron Transfer, and Storing Electronic Spin Information in a Nuclear Spin

+ +
+

During photosynthesis, plants use electronic coherence for ultrafast energy and electron transfer and have selected specific vibrations to sustain those coherences. In this way photosynthetic energy transfer and charge separation have achieved their amazing efficiency. At the same time these same interactions are used to photoprotect the system against unwanted byproducts of light harvesting and charge separation at high light intensities.

+
+ +

Rienk van Grondelle.

+ +

In charge separation in photosynthetic reaction centres, triplet states can react with molecular oxygen generating destructive singlet oxygen. The triplet product yield in bacteria and plants is observed to be reduced by weak magnetic fields. It has been suggested that this effect is due to solid-state photochemically induced dynamic nuclear polarization (photo-CIDNP), which is an efficient method of creating non-equilibrium polarization of nuclear spins by using chemical reactions, which have radical pairs as intermediates (Adriana Marais 2015). Within biology such a mechanism could increase resistance to oxidative stress.

+ +

It has been noted that there seems to be a link between the conditions of occurrence of photo-CIDNP in reaction centres and the conditions of the unsurpassed efficient light-induced electron transfer in reaction centres (J Matysik 2009; I F Cespedes-Camacho and J Matysik 2014).

+ +

A CIDNP effect has been observed in the Fenna-Matthews-Olson reaction centre (Roy et al 2006).

+ +

A CIDNP effect has also been observed in flavin adenine dinucleotide (FAD) (Stob 1989).

+ +

FAD is implicated in quantum effects theorised in cryptochrome and other biological redox reactions. The widely accepted theory is that during response to magnetic fields, the photo-excitation of the non-covalently bound flavin adenine dinucleotide (FAD) cofactor in cryptochrome leads to the formation of radical pairs via sequential electron transfers along the “tryptophan-triad”, a chain of three conserved tryptophan residues within the protein. This process reduces the photo-excited singlet state of the FAD to the anion radical. In the same way that photo-CIDNP MAS NMR has provided detailed insights into photosynthetic electron transport in reaction centres, it is anticipated to find a variety of applications in mechanistic studies of other photoactive proteins. It may be possible to characterize the photoinduced electron transfer process in cryptochrome (Xiao-Jie, 2016).

+ +
+

'Until now, no CIDNP phenomenon has been observed in spintronics, although the possibility of obtaining such effects has been mentioned: “If nuclear spin resonance is found to have an impact on the spin-dependent electron transport due to the hyperfine interaction, ultimately the opposite process may become possible: storing electronic spin information in the nuclear spin.”'

+
+ +

 J Matysik (2017).

+",2498,,,,,5/28/2018 13:39,,,,1,,,,CC BY-SA 4.0 +2172,1,,,5/28/2018 16:16,,6,449,"

The image is taken from this link.

+ +

+Here Alice is using random bases to encode 0 or 1. After the process is completed, Bob has similarly polarized photons as Alice. These polarizations can be any of $\lvert 0 \rangle , \lvert 1 \rangle, \lvert + \rangle$ or $\lvert - \rangle$. However, how would Bob know what Alice meant by each of these bases? Meaning, Alice might choose $\{\lvert 0 \rangle, \lvert + \rangle\}$ to encode a 0 and $\{\lvert 1 \rangle, \lvert - \rangle\}$ to encode a 1, or vice versa. How do they determine which polarization encodes which bit?

+",2403,,26,,12/23/2018 13:47,12/23/2018 13:47,BB84 Protocol Alice Choice to Bob,,1,0,,,,CC BY-SA 4.0 +2173,2,,2172,5/28/2018 18:18,,4,,"

That’s the public discussion stage: Alice and Bob can both announce which basis they chose for each round. If they happened to pick the same basis on a given round, they know that (in a perfect world) their answers were the same, so they can translate them into a 0/1 value that nobody else knows. That translation is arbitrary, and they’ve probably agreed it in advance.

+ +

The natural way to do this is to associate an operator with each measurement basis, e.g. X or Z (the Pauli matrices). The measurement answers are then e.g. $(\mathbb{I}+(-1)^xX)/2$ where x is a bit value which we use as the translation.
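The whole sifting procedure can be sketched classically in a few lines of Python (a toy simulation of my own, with no eavesdropper and no noise; the 0/1 basis encoding and function name are arbitrary choices):

```python
import random

def bb84_sift(n_rounds, seed=0):
    # Toy BB84 sifting: basis 0 = {|0>, |1>} (Z), basis 1 = {|+>, |->} (X);
    # the bit value picks the state within the chosen basis.
    rng = random.Random(seed)
    alice_bits  = [rng.randint(0, 1) for _ in range(n_rounds)]
    alice_bases = [rng.randint(0, 1) for _ in range(n_rounds)]
    bob_bases   = [rng.randint(0, 1) for _ in range(n_rounds)]
    # Bob's raw result: equals Alice's bit when the bases match, otherwise random
    bob_bits = [b if ab == bb else rng.randint(0, 1)
                for b, ab, bb in zip(alice_bits, alice_bases, bob_bases)]
    # public discussion: both announce their bases and keep only matching rounds
    keep = [i for i in range(n_rounds) if alice_bases[i] == bob_bases[i]]
    return [alice_bits[i] for i in keep], [bob_bits[i] for i in keep]

key_a, key_b = bb84_sift(1000)
assert key_a == key_b  # ideal channel: the sifted keys agree
```

On average about half the rounds survive sifting, since the bases agree with probability 1/2.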

+",1837,,1837,,5/28/2018 18:24,5/28/2018 18:24,,,,2,,,,CC BY-SA 4.0 +2174,1,,,5/28/2018 18:47,,4,552,"

I recently asked this question on Grover's algorithm, and I am still fairly confused about the whole thing. Consider the following snippet from this post (written by DIDIx13) which for convenience I will reproduce here:

+ +
+

If you throw away the problem structure, and just consider the space of $2^n$ possible solutions, then even a quantum computer needs about $\sqrt{2^n}$ steps to find the correct one (using Grover's algorithm). If a quantum polynomial time algorithm for an $\text{NP}$-complete problem is ever found, it must exploit the problem structure in some way.

+
+ +

The first line emphasises one place where I am confused: Grover's algorithm finds a solution amongst $2^n$ candidates to a problem - this is not a decision problem alone and, as mentioned in my question linked above, means we cannot assign it a complexity class.

+ +

That said, Grover's algorithm can be used to solve decision problems (there seems to be a lot of talk on related questions about ""SAT"") - but I have yet to see a simple example of such an application.

+ +

Thus my question is: Does there exist a simple example of Grover's algorithm solving a decision problem? (even better if you can provide one where the classical search is in $NP$ and another in $P$)

+",2015,,,,,06-02-2018 06:53,Example of Grover's Algorithm applied to a decision problem?,,1,0,,,,CC BY-SA 4.0 +2175,2,,2174,5/28/2018 19:48,,3,,"

Take the problem of 3-SAT. There is some $f(x)$ which gives outputs 0 or 1. We generally think of the case where the outputs 1 are rare, and hard to find.

+ +

PROBLEM: Determine if there is an $x$ that satisfies $f(x)=1$.

+ +

(3-SAT has a certain structure to the way the variables are evaluated based on conjunctive normal form, but that's not so important right now).

+ +

This problem is known to be NP-complete: as much as we believe NP and P are distinct, we believe this problem cannot be solved efficiently. But Grover's does help us because it gives us a square root speed-up; it searches for the items where the answer is 1.
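To make the search concrete, here is a toy Python sketch (the clause set is invented for illustration): $f(x)$ evaluates a small CNF formula, and the decision problem asks whether any of the $2^n$ assignments satisfies it; this is exactly the unstructured search that Grover's algorithm speeds up quadratically.

```python
from itertools import product

# toy 3-SAT instance:
# (x0 OR NOT x1 OR x2) AND (NOT x0 OR x1 OR x2) AND (NOT x0 OR NOT x1 OR NOT x2)
# each literal is (variable index, required value)
clauses = [[(0, 1), (1, 0), (2, 1)],
           [(0, 0), (1, 1), (2, 1)],
           [(0, 0), (1, 0), (2, 0)]]

def f(x):
    # f(x) = 1 iff every clause contains at least one satisfied literal
    return int(all(any(x[i] == want for i, want in cl) for cl in clauses))

# the decision problem: is there an x with f(x) = 1?
# classically: brute force over all 2^n assignments
satisfiable = any(f(x) for x in product((0, 1), repeat=3))
```

Grover's algorithm would use $f$ as the oracle marking the satisfying assignments, and find one (if it exists) in roughly $\sqrt{2^n}$ oracle calls instead of $2^n$.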

+ +

I'm not sure if you're asking for a specific example of an $f(x)$, but one of the issues is that this complexity classification is about scaling: you need a family of functions for different sized inputs. 3-SAT is one such class. Perhaps my answer here supplies the sorts of examples you were after?

+",1837,,1837,,06-02-2018 06:53,06-02-2018 06:53,,,,0,,,,CC BY-SA 4.0 +2176,1,,,5/28/2018 20:45,,5,194,"

In the paper ""Demonstration of two-qubit algorithms with a superconducting quantum processor"" (L. DiCarlo et al., Nature 460, 240 (2009), arXiv) they demonstrate how to realize conditional phase gates with superconducting qubits.

+ +

Specifically, they use the $|1,1\rangle \leftrightarrow |0, 2\rangle$ avoided crossing to create a conditional phase gate. I quote: ""This method of realizing a C-Phase gate by adiabatically using the avoided crossing between computational and non-computational states is generally applicable to qubit implementations with finite anharmonicity, such as transmons or phase qubits"".

+ +

My question is how this technique works, especially why it is a controlled gate.

+",1853,,1847,,5/29/2018 2:33,5/29/2018 10:52,Conditional Phase Gate Superconducting Qubits,,1,0,,,,CC BY-SA 4.0 +2177,1,,,5/28/2018 20:50,,27,9545,"

I want to create a Toffoli gate controlled by n qubits, and implement it in QISKit. Can this be done? If so, how?

+",2503,,26,,03-12-2019 09:30,01-06-2022 11:04,How can I implement an n-bit Toffoli gate?,,3,1,,,,CC BY-SA 4.0 +2178,2,,2177,5/28/2018 21:00,,25,,"

A simple way to do this is illustrated in Figure 4.10 of Nielsen & Chuang. +

+ +

Where U can be any single-qubit rotation (in this case, an X gate).

+ +

This circuit works like this: We want to apply U to the target qubit only if the AND of all control qubits is 1. A normal Toffoli gives us the AND of 2 qubits. So by chaining a few Toffolis, we can get c1.c2.c3.c4.c5, with the catch that some ""work"" (or ancilla) qubits have been introduced to store intermediate results. After applying the final CU, we get the final result in target. Now we can clean up the intermediate work qubits by undoing their computations, returning them to the |0> state. This model of reversible computation is known as the ""compute-copy-uncompute"" method, and was first proposed by Charlie Bennett in 1973.
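Since the circuit only ever needs classical logic on computational basis states, the compute-copy-uncompute idea can first be checked with plain classical bits (a sketch of my own, modelling each Toffoli as an AND-controlled XOR):

```python
from itertools import product

def mcx_classical(ctrl, tgt):
    # anc[i] accumulates the AND of the first i+2 controls;
    # a final CNOT copies it onto the target, then the ancillas are uncomputed to 0
    n = len(ctrl)
    anc = [0] * (n - 1)
    anc[0] ^= ctrl[0] & ctrl[1]           # Toffoli(ctrl0, ctrl1 -> anc0)
    for i in range(2, n):
        anc[i - 1] ^= ctrl[i] & anc[i - 2]  # chain of Toffolis (compute)
    tgt ^= anc[n - 2]                     # CNOT: copy the full AND to target
    for i in range(n - 1, 1, -1):         # uncompute in reverse order
        anc[i - 1] ^= ctrl[i] & anc[i - 2]
    anc[0] ^= ctrl[0] & ctrl[1]
    return tgt, anc

# the target flips exactly when all controls are 1, and the ancillas end clean
for bits in product((0, 1), repeat=5):
    out, anc = mcx_classical(list(bits), 0)
    assert out == int(all(bits)) and anc == [0, 0, 0, 0]
```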

+ +

Here is the QISKit code to construct the circuit and visualize it:

+ + + +
from qiskit import QuantumRegister, QuantumCircuit
+
+n = 5  # must be >= 2
+
+ctrl = QuantumRegister(n, 'ctrl')
+anc = QuantumRegister(n-1, 'anc')
+tgt = QuantumRegister(1, 'tgt')
+
+circ = QuantumCircuit(ctrl, anc, tgt)
+
+# compute
+circ.ccx(ctrl[0], ctrl[1], anc[0])
+for i in range(2, n):
+    circ.ccx(ctrl[i], anc[i-2], anc[i-1])
+
+# copy
+circ.cx(anc[n-2], tgt[0])
+
+# uncompute
+for i in range(n-1, 1, -1):
+    circ.ccx(ctrl[i], anc[i-2], anc[i-1])
+circ.ccx(ctrl[0], ctrl[1], anc[0])    
+
+from qiskit.tools.visualization import circuit_drawer
+circuit_drawer(circ)
+
+ +

Yields:

+ +

+",2503,,,,,5/28/2018 21:00,,,,0,,,,CC BY-SA 4.0 +2181,2,,2176,5/29/2018 10:52,,3,,"

Each of the two spins, $q\in\{L,R\}$, has a bunch of energy levels $\{|n\rangle_q\}$, each at energy $\omega_{n}^q$. In other words, the basic Hamiltonian of the spins is: +$$ +H=\sum_{n=0}^{N}\omega_{n}^L|n\rangle\langle n|_L+\omega_{n}^R|n\rangle\langle n|_R +$$ +Written like this, the two spins are not interacting, so we won't get a two-qubit gate without doing something extra.

+ +

When we're talking about a qubit, we specifically focus on populating just the $|0\rangle$ and $|1\rangle$ levels of each spin. Nothing else is ever populated (hopefully). Under the Hamiltonian $H$, a basis element $|x\rangle$ for $x\in\{0,1\}^2$ acquires a phase +$$ +e^{-i(\omega^L_{x_L}+\omega^R_{x_R})t}. +$$

+ +

In addition to this basic Hamiltonian of the spins, there is a cavity, containing photons that interact with both spins. This is what will mediate the two-qubit interaction. In effect, by manipulating the interaction parameters, we can change the energy level of the $|1\rangle_L|1\rangle_R$ state independently of the $|10\rangle$ and $|01\rangle$ states. Thus, in principle, one creates a different phase on all 4 basis states, and these can be combined to give a controlled-phase gate.

+ +

In practice, how this works is that most of the time you want to be sat in a region of parameter space where there is no two-qubit interaction going on. At particular moments of a computation, you need to turn this interaction on. This is achieved by adiabatically varying the cavity parameters. By doing this, the populations of the qubits in the different basis states don't change, but you move to a regime where the energy levels are different, generating the phases you need.

+ +

You'll notice I haven't mentioned the $|0,2\rangle$ level yet. In some ways this is irrelevant; everything I've said is (a sufficiently good approximation to) true. The issue is that usually, when you change the parameters, you don't get the independent control of the different energies. The place to go looking, if you want to find such independent control, is in the region of an 'avoided crossing', where the usual linear variation of energy with parameters would suggest that two energy levels should be the same (e.g. the $|11\rangle$ and $|02\rangle$ levels). The avoided crossing means that the energy takes on a quadratic form near the (not) crossing point, and it's that non-linearity that you're making use of. It also defines important constraints on the adiabatic evolution: since you do not want to populate the $|02\rangle$ level, you have to move slowly with respect to the energy gap between $|11\rangle$ and $|02\rangle$, which is comparatively small, and therefore the evolution time is quite slow.
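The quadratic (rather than linear) behaviour near the avoided crossing can be seen in a two-level toy model (illustrative numbers of my own, not the device parameters): with bare energies $\pm x$ in the tuning parameter and coupling $g$, the eigenvalues are $\pm\sqrt{x^2+g^2}$, so the gap at the crossing point is $2g$ and sets the adiabatic speed limit.

```python
import numpy as np

g = 0.1  # |11> <-> |02> coupling strength (arbitrary units)

def levels(x):
    # bare energies +x and -x would cross at x = 0; the coupling avoids it
    H = np.array([[x, g], [g, -x]])
    return np.linalg.eigvalsh(H)  # eigenvalues in ascending order

lo, hi = levels(0.0)
gap_at_crossing = hi - lo            # = 2g, the scale the sweep must respect
far_gap = np.diff(levels(1.0))[0]    # ~ 2*sqrt(1 + g^2): nearly linear regime
```

Far from the crossing the levels are essentially the bare ones; only near $x=0$ does the $|11\rangle$-like energy bend quadratically, which is the non-linearity exploited by the gate.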

+",1837,,,,,5/29/2018 10:52,,,,0,,,,CC BY-SA 4.0 +2182,1,2184,,5/29/2018 14:59,,6,183,"

I know there are some ""quantum versions"" of hand-writing recognition algorithms which have been proposed using quantum neural networks. Example: ""Recognition of handwritten numerals by Quantum Neural Network with fuzzy features"" (J Zhou, 1999). Also, a recent paper by Rebentrost et al., ""A Quantum Hopfield Neural Network"", presents an application of their method as a genetic sequence recognizer.

+ +

What are some other proposed applications of quantum neural networks whose given solutions provide considerable improvement over the corresponding classical version of the neural network in terms of accuracy? Also, have any of those proposed solutions been programmed/simulated?

+ +

Please note that I'm looking for research papers which specifically demonstrate some applications of quantum neural networks and which provide a significant improvement over the corresponding classical neural networks.

+",26,,26,,5/31/2018 23:22,5/31/2018 23:22,What are some of the interesting problems whose solutions have been proposed using quantum neural networks?,,1,10,,,,CC BY-SA 4.0 +2183,2,,2161,5/29/2018 15:19,,5,,"
+

I don't understand their notion of a $2^n$ dimensional complex vector. If each of the components of their classical data array has two floating point numbers, wouldn't encoding that into an $n$-qubit quantum state be equivalent to storing a $2\times 2^{n}$ size classical array in an $n$-qubit system?

+
+ +

You are absolutely correct that a $2\times 2^n$ classical array of numbers is stored in an $n$-qubit system.

+ +

But they are absolutely right that the vector's dimension is $2^n$. This is because the vector has $2^n$ rows, where each entry has 2 classical numbers.
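In NumPy terms (a sketch of my own), the same data can be viewed either way:

```python
import numpy as np

n = 3
rng = np.random.default_rng(0)

# one complex vector of dimension 2^n (normalized, like a quantum state)
v = rng.standard_normal(2**n) + 1j * rng.standard_normal(2**n)
v /= np.linalg.norm(v)

# the same classical data laid out as a 2 x 2^n real array:
# first row = real parts, second row = imaginary parts
as_real = np.vstack([v.real, v.imag])
```

The complex view has shape `(2**n,)` and the real view has shape `(2, 2**n)`; they hold identical information, but only the complex vector evolves under the Schrödinger equation.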

+ +

You can also store the same vector in a $2\times 2^n$ array: $2^n$ rows are filled in with the real parts and $2^n$ rows by the imaginary parts, but this vector would not evolve according to the Schrödinger equation.

+ +

I hope this helps resolve this part of the question.

+ +
+

But they don't mention anywhere how they will convert classical data (say in form of a $2\times 2^{n}$ array) into that form.

+
+ +

You are right. Just as Peter Shor never mentioned anywhere how his qubits for factoring will be prepared.

+ +

This is up to the experimentalists, and it is implementation-dependent. This means that for NMR qubits you'd convert the classical data into qubits differently from superconducting qubits, or ion-trap qubits, or quantum dot qubits, etc. Therefore I do not blame Shor, or any of the 6 authors of the 2 papers you mentioned (who are all theorists by the way), for not explaining how the qubits will be prepared.

+ +
+

let us assume that the initial data array we wanted to store in our quantum system was actually $V=\{\{3,\phi_1\},\{2,\phi_2\},...,\{4,\phi_8\}\}$. If they define $|v\rangle$ as $|\vec{v}|^{-1/2}\vec{v}$ then $|V\rangle$ in our example would look something like $(\sqrt{3^2+2^2+...+4^2})^{-1/2}(|3e^{i\phi_1}||000\rangle + |2e^{i\phi_2}||001\rangle + ... + |4e^{i\phi_8}||111\rangle)$. But then we're losing all the information about the phases $\phi_i$, isn't it? So what was the use of starting with a complex vector (having both a phase and magnitude) in the first place, when we're losing that information when converting to $|V\rangle$ anyway?

+
+ +

You had it earlier in your question! ""Consider the vector $N=2^{n}$ dimensional complex vector $\vec{v}$ with components $\{v_i=|v_i|e^{i\phi_i}\}$."" Therefore the vector is:

+ +

\begin{equation} +|\vec{v}|^{-1/2} +\begin{pmatrix}|v_{1}|e^{i\phi_{1}}\\ +|v_{2}|e^{i\phi_{2}}\\ +\vdots\\ +|v_{2^{n}}|e^{i\phi_{2^{n}}} +\end{pmatrix} +\end{equation}

+ +

Notice:
+1) There's $2^n$ entries, not $2 \times 2^n$
+2) There is NO norm around the phases, so this is why you have lost all information about the phases, because you put extra norm symbols where they shouldn't be :)

+ +
+

Or are we supposed to consider $|V\rangle$ as $(\sqrt{3^2+2^2+...+4^2})^{-1/2}(3e^{i\phi_1}|000\rangle + 2e^{i\phi_2}|001\rangle + ... + 4e^{i\phi_8}|111\rangle)$?

+
+ +

Closer! The correct answer is the vector I wrote above, which can be written like this:

+ +

$|\vec{v}|^{-1/2}\left( |v_1| e^{i \phi_1} |00 \cdots 00\rangle + |v_2| e^{i \phi_2} |00 \cdots 01\rangle + \cdots + |v_{N}| e^{i \phi_{N}} |1\cdots 1\rangle\right)$.

+ +

For your specific example:

+ +

\begin{equation} + \frac{3e^{i\phi_1}|000\rangle + 2e^{i\phi_2}|001\rangle + \cdots + 4e^{i\phi_8}|111\rangle}{\sqrt{77}} +\end{equation}

+ +

The purpose of all of this is so that the sum of the squares of the coefficients is 1, which in my equation is true because the numerator is:

+ +

\begin{equation} +\sqrt{3^2 + 2^2 + 1^2 + 2^2 + 3^2 + 3^2 + 5^2 + 4^2} = \sqrt{77} +\end{equation}
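A quick numerical sanity check (a sketch of my own): dividing by $\sqrt{77}$ normalizes the state whatever the phases $\phi_i$ are, and the phases survive intact in the amplitudes:

```python
import numpy as np

mags = np.array([3, 2, 1, 2, 3, 3, 5, 4])           # the magnitudes |v_i|
phases = np.random.default_rng(1).uniform(0, 2 * np.pi, 8)  # arbitrary phases
amps = mags * np.exp(1j * phases) / np.sqrt((mags ** 2).sum())

# the squared magnitudes sum to 77, so the amplitudes square-sum to 1,
# and each amplitude still points in the direction e^{i phi_i}
```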

+ +

I hope that clears it up!

+",2293,,26,,5/29/2018 18:02,5/29/2018 18:02,,,,6,,,,CC BY-SA 4.0 +2184,2,,2182,5/29/2018 17:19,,2,,"
+

What are some other proposed applications of quantum neural networks?

+
+ +

Absolutely any application of classical neural networks can be an application of quantum neural networks. There's a lot of examples beyond the two you listed.

+ +
+

Also, have any of those proposed solutions been programmed/simulated?

+
+ +

Yes, for example Ed Farhi of MIT and Hartmut Neven of Google teamed up on a paper where an application was distinguishing digits using QNNs.

+",2293,,,,,5/29/2018 17:19,,,,3,,,,CC BY-SA 4.0 +2185,2,,2166,5/29/2018 18:34,,2,,"

All counting is done in terms of $n$, the number of bits required to describe the input.

+ +

We define the class of problems $\text{NP}$ in the following way (or, this is one way to do it):

+ +

Let $f(x)$ be a function that accepts an input $x\in\{0,1\}^n$ and returns a single bit value 0 or 1. The task is that you have to find whether a given value of $x$ returns a 1. However, there is further structure to the problem: if $f(x)=1$, you are guaranteed that there exists a proof $p_x$ (of size $m\sim\text{poly}(n)$) such that a function $g(x,p_x)=1$ only if $f(x)=1$, and the function $g(x,p_x)$ is efficiently computable (i.e. it has a running time of $\text{poly}(n)$).

+ +

Let me give a few examples (perhaps these are what you were asking for here?):

+ +
    +
  • Parity: $f(x)$ answers the question 'is $x$ odd?'. This is so trivial (just take the least significant bit of $x$) that $f(x)$ is efficiently computed directly, and therefore a proof is unnecessary, $g(x,p_x)=f(x)$.

  • +
  • Composite numbers: $f(x)$ answers the question 'is the decimal representation of $x$ a composite number?'. One possible proof in the yes direction (you only have to prove that direction) is to give a pair of factors. e.g. $x=72$, $p_x=(8,9)$. Then $g(x,p)$ simply involves multiplying together the factors and checking they are equal to $x$.

  • +
  • Graph isomorphism: Given two graphs $G_1$ and $G_2$ (here $x$ contains the description of both graphs), $f(x)$ answers the question 'are the two graphs isomorphic?'. The proof $p_x$ is a permutation: a statement of how the vertices in $G_1$ map to those of $G_2$. The function $g(x,p_x)$ verifies that $p_x$ is a valid permutation, permutes the vertices of $G_1$ using the specified permutation, and verifies that the adjacency matrix is the same as that of $G_2$.

  • +
  • Minesweeper: The old favourite game built into windows (and others) can be expressed like this. Imagine a minesweeper board that is partially uncovered, so some cells are unknown, and some cells have been uncovered to reveal how many mines are in the neighbouring cells. This is all built into the variable $x$. $f(x)$ asks the question 'is there a valid assignment of mines on the uncovered region?'. The proof, $p_x$ is simply one such assignment of mines. This is easily verified using $g(x,p_x)$ which simply ensures consistency with every known constraint.

  • +
+ +

All of these problems are in $\text{NP}$ because they fit the definition of an efficiently verifiable solution. Some of them are known to be in $\text{P}$ as well: we've already stated that odd testing is in $\text{P}$. Composite numbers also is, because it is efficient to check if a number is prime using AKS primality testing.
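For the composite-number example, the efficient verifier $g(x,p_x)$ is just a multiplication check (a minimal sketch of my own):

```python
def g(x, proof):
    # verifier for 'x is composite': proof is a claimed pair of nontrivial factors
    a, b = proof
    return int(a > 1 and b > 1 and a * b == x)

# a valid certificate that 72 is composite:
assert g(72, (8, 9)) == 1
# an invalid proof is rejected in polynomial time (but says nothing about f):
assert g(72, (7, 9)) == 0
```

Finding such a proof may be hard; checking it is a single multiplication, which is the asymmetry that defines $\text{NP}$.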

+ +

Graph isomorphism and minesweeper are not known to be in $\text{P}$. Indeed, minesweeper is known to be $\text{NP}$-complete, i.e. if it can be solved efficiently, every problem in $\text{NP}$ is in $\text{P}$. Many people suspect that $\text{P}\neq\text{NP}$, and hence Minesweeper would have instances which take longer than polynomial time to solve.

+ +

One possible way to solve an $\text{NP}$ problem is, for a fixed $x$, simply to test all possible proofs $p_x$ up to a maximum length $m=\text{poly}(n)$, and see if there's a satisfying solution, i.e. to search for a solution $g(x,p_x)=1$. Obviously, that takes time $O(2^m\text{poly}(m))$, as there are exponentially many items to search, each requiring a polynomial time to compute. This can be improved by implementing Grover's search: we just search for a solution $g(x,p_x)=1$ (i.e. the valid $p_x$ becomes the marked item), and this takes a time $O(2^{m/2}\text{poly}(m))$. This is massively faster, but does not change the assessment of whether the running time is polynomial or something worse; it has not become a polynomial time algorithm. For example, graph isomorphism would have to search over all possible permutations. Minesweeper would have to search over all possible assignments of mines on uncovered squares.

+ +

Of course, some of the time, additional structure in the problem permits different solutions that do not require the searching of all possible proofs. There, Grover's search is of less, or even no, use to us, but it might be that we can come up with a polynomial time algorithm in another way. For example, the case of composite testing: classically, finding the factors of a number appears to be hard: we can't do much better than testing all possible factors, so making use of that form of proof doesn't help much, but, as already mentioned, the question can be resolved efficiently via another route, AKS primality testing.

+",1837,,1837,,5/30/2018 8:14,5/30/2018 8:14,,,,3,,,,CC BY-SA 4.0 +2186,2,,2155,5/29/2018 19:12,,5,,"

You get two decompositions for your matrix (let's call it $A$) because you are using two different operatorial bases.

+ +

In the first case you are considering the matrix as acting in a space of dimension $3\times 2$, that is, using the operatorial basis $\{\lambda_i\sigma_j\}_{ij}\equiv\{\lambda_i\otimes\sigma_j\}_{ij}$.

+ +

In other words, you are computing the coefficients $c_{ij}=\operatorname{tr}((\lambda_i\otimes \sigma_j) A)$, finding $c_{61}$ to be the only non-vanishing term. This decomposition will be unique, because $\operatorname{tr}\big[ (\lambda_i\sigma_j)(\lambda_k\sigma_l) \big]=N_{ij} \delta_{ik}\delta_{jl}$.

+ +

On the other hand, the second decomposition is obtained thinking of $A$ as a matrix in a space of dimensions $2\times 3$, that is, by decomposing it using the operatorial basis $\{\sigma_i\lambda_j\}_{ij}\equiv\{\sigma_i\otimes\lambda_j\}_{ij}$. +This gives you new coefficients +$d_{ij}\equiv\operatorname{tr}((\sigma_i \otimes\lambda_j) A)$, which do not have to be (and indeed are not) the same as the $c_{ij}$.

+ +

There is no paradox because $\{\sigma_i\otimes\lambda_j\}_{ij}$ and $\{\lambda_i\otimes\sigma_j\}_{ij}$ are two entirely different operatorial bases for a space of dimension $6$.

+",55,,55,,08-09-2018 08:51,08-09-2018 08:51,,,,1,,,,CC BY-SA 4.0 +2187,2,,2166,5/29/2018 22:08,,2,,"

Complexity classes are generally defined with regard to the size of the input. The relevant sizes here are $n$ (the number of qubits on which you let Grover's algorithm operate) and a number you haven't mentioned yet, call it $m$, of bits needed to describe the subroutine generally referred to as the oracle. Typically, the oracle will be efficiently implemented in a way that scales polynomially in $n$, which is the case, for example, if you encode a typical boolean satisfiability problem in the oracle.

+ +

In any case, you do not get a gain in complexity class using Grover's algorithm: It takes exponentially many quantum operations, typically $m*2^{n/2}$, to solve a problem we could brute-force in exponentially many steps, typically $m*2^{n-1}$, on a classical computer anyways. This means that problems known (e.g. EXPTIME) or suspected (e.g. NP) to take exponential runtime will still require exponential runtime.
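For concreteness, here is the arithmetic for a hypothetical $n=40$ instance (my own illustrative numbers, with the per-call cost $m$ dropped):

```python
n = 40
classical_calls = 2 ** (n - 1)   # ~ expected brute-force oracle calls, m * 2^(n-1)
grover_calls = 2 ** (n // 2)     # ~ Grover oracle calls, m * 2^(n/2)
speedup = classical_calls // grover_calls
# a factor-2^19 saving, yet grover_calls is still exponential in n,
# so the complexity class of the problem is unchanged
```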

+ +

However, physicists like to appeal to the notion that this is still an exponential speed-up with no known (or indeed readily conceivable) classical equivalent. This is most apparent in the database example where the oracle function is a database lookup and Grover's algorithm can cause one to need many fewer lookups than there are data in the database. In this sense, there is still a significant advantage, although it is completely lost in the complexity class picture.

+",,user1039,,,,5/29/2018 22:08,,,,2,,,,CC BY-SA 4.0 +2188,1,2189,,5/30/2018 10:03,,6,112,"

I was working on Grover's algorithm and the most common example is for a uniform distribution in a quantum database, for example:

+ +

$|\psi\rangle = \frac{1}{2}|00\rangle + \frac{1}{2}|01\rangle + \frac{1}{2}|10\rangle + \frac{1}{2}|11\rangle.$

+ +

Is there a way to obtain an arbitrary distribution (the above one is achieved by applying $H^{\otimes n}$ gates), e.g.

+ +

$|\psi\rangle = \frac{1}{3}|00\rangle + \frac{1}{4}|01\rangle + \sqrt{\frac{83}{144}}|10\rangle + \frac{1}{2}|11\rangle$? Does the structure of Grover's algorithm differ in such a case?

+",2098,,,,,5/30/2018 10:52,How to obtain arbitrary distribution in quantum database,,1,0,,,,CC BY-SA 4.0 +2189,2,,2188,5/30/2018 10:52,,5,,"

According to this paper,

+ +
+

A significant conclusion from this solution is that generically the generalized algorithm also has $O(\sqrt{N/r})$ running time

+
+ +

Where $r$ is the number of marked states. By 'generalized', the authors mean a distribution with arbitrary complex amplitudes. So it seems to answer your question: the modified initialization would still perform in the same way as the original one.
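As a toy numerical check of this (a sketch of my own, not from the paper), one generalized Grover iteration built from the non-uniform state in the question still amplifies a low-amplitude marked item:

```python
import numpy as np

# one generalized Grover iteration G = D @ O, where the oracle O flips the
# sign of the marked item and D = 2|psi><psi| - I reflects about the
# (non-uniform) start state |psi> instead of the uniform superposition
psi = np.array([1/3, 1/4, np.sqrt(83/144), 1/2])  # the state in the question
marked = 1                                        # the low-amplitude item |01>
oracle = np.eye(4)
oracle[marked, marked] = -1.0
diffuser = 2.0 * np.outer(psi, psi) - np.eye(4)

state = diffuser @ oracle @ psi
p_before = psi[marked] ** 2    # 1/16 ~ 0.06
p_after = state[marked] ** 2   # ~ 0.47 after a single iteration
```

The iteration is still unitary (the norm of `state` stays 1), so the algorithm runs exactly as in the uniform case; only the rotation angle per iteration changes.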

+",2403,,,,,5/30/2018 10:52,,,,0,,,,CC BY-SA 4.0 +2190,1,2191,,5/30/2018 14:58,,9,1581,"

I am trying to do some tests on the IBM Q 5 computer of the IBM Quantum Experience for some simple error correction protocols, but as far as I can see, some operations between the qubits are not allowed.

+ +

For example, it is not possible to perform a CNOT operation with the fourth qubit; or, when selecting one as the target qubit for the operation, it does not allow using any of the other qubits as control qubits.

+ +

I have been thinking about the fact that maybe it is because of the physical implementation of such computer, but as I do not know much about the construction of quantum computers I do not know if that might be the cause. So I am wondering if that is actually the issue, or otherwise why those operations are not allowed.

+",2371,,26,,12/14/2018 5:40,12/14/2018 5:40,Allowed CNOT gates for IBM Q 5 quantum computer,,2,0,,,,CC BY-SA 4.0 +2191,2,,2190,5/30/2018 15:12,,8,,"

Yes, the physical implementation is the constraint. If you look at the image of the processor you'll notice the connections between qubits. This gives you an idea of how you can perform two qubit gates between particular qubits.

+ +

Here's the documentation on the Tenerife backend. In the section titled Two Qubit gates at the bottom you can read the details. Also the directions of the gates are also detailed in a log file there.

+ +

https://github.com/QISKit/qiskit-backend-information/tree/master/backends/tenerife/V1

+ +

+",54,,54,,5/30/2018 15:17,5/30/2018 15:17,,,,0,,,,CC BY-SA 4.0 +2192,1,2193,,5/30/2018 15:29,,6,233,"

I am thinking about the following question:

+ +
+

Assuming that we have some given state $\rho$ and we perform a measurement with $k$ outcomes on this state. Then we can describe the measurement outcomes as eigenvalues of the observable, i.e., the Hermitian operator that I denote by $D$, with probabilities $\mathrm{Tr}[D_i\rho]$, where $D_i$ are the projectors onto the $i^{th}$ eigenspace of $D$, i.e. for the eigendecomposition $D = \sum_i \lambda_i s_i s_i^T = \sum_i \lambda_i D_i$.

+
+ +

I was wondering if my assumption is true: if the number of (distinguishable?) outcomes for any Hermitian operator is given by $k$, then we have only $k$ non-zero eigenvalues and hence $D$ must be of rank $\leq k$?

+",2054,,55,,10/28/2022 6:07,10/28/2022 6:07,How is the number of measurement outcomes linked to the rank of the observable?,,1,2,,,,CC BY-SA 4.0 +2193,2,,2192,5/30/2018 15:46,,3,,"

You are implicitly making a specific assumption here: that the $\{D_i\}$ are rank 1 projectors.

+ +

If your $\{D_i\}$ are rank-1 projectors, i.e. taking the form $D_i=s_is_i^T$, then because there is a completeness relation for measurement operators, +$$ +\sum_iD_i=\mathbb{I}, +$$ +then you must have a number of outcomes equal to the dimension of the Hilbert space you're measuring. Call that $k$. Now, if you define $D=\sum_i\lambda_iD_i$ where the $\lambda_i$ are distinct, then $D$ must have rank either $k$ or $k-1$: if one of the $\lambda_i$ is 0, then the number of non-zero values (which is equivalent to the rank) is $k-1$.

+ +

Now, strictly, the $D_i$ could be projectors, but not have rank 1 (in fact, they don't even have to be projectors, but we won't go there...), but instead a rank $r_i=\text{Tr}(D_i)$. In this case, either $D$ is full rank (which we'll still call $k$) or, if a particular $\lambda_j=0$, then it has rank $k-r_j$, because that's the number of non-zero eigenvalues $D$ has. Here, the number of distinguishable outcomes is potentially much smaller than the rank of $D$. All you really know is that $\text{rk}(D)\geq |\{D_i\}|-1$ (i.e. the number of measurement operators minus 1, in case one of the eigenvalues is 0). But that could be a very loose bound in some circumstances (and the bound is the opposite way round to what you were asking).

+ +

Overall, the answer is that the number of distinguishable measurement outcomes is not equal to the rank of the measurement operator.
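To illustrate the last point numerically (a small sketch of my own): with two rank-2 projectors on a 4-dimensional space and one zero eigenvalue, there are two distinguishable outcomes but $\operatorname{rk}(D)=k-r_j=2$.

```python
import numpy as np

D1 = np.diag([1.0, 1.0, 0.0, 0.0])  # projector for outcome lambda_1 = 3 (rank 2)
D2 = np.diag([0.0, 0.0, 1.0, 1.0])  # projector for outcome lambda_2 = 0 (rank 2)
D = 3.0 * D1 + 0.0 * D2

# the projectors are complete: D1 + D2 = I on the k = 4 dimensional space,
# yet rank(D) = k - r_j = 4 - 2 = 2, with only 2 measurement outcomes
```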

+",1837,,,,,5/30/2018 15:46,,,,0,,,,CC BY-SA 4.0 +2194,2,,2190,5/30/2018 16:55,,6,,"

The five qubit IBM devices have a ‘bow tie’ architecture, which means that it is only possible to interact certain pairs of qubits. These are shown in the answer of Andrew O.

+ +

The interaction that can be performed between these pairs of qubits is a CNOT with a particular direction. However, it is possible to implement others indirectly.

+ +

For example, to perform a CNOT with q0 as control and q1 as target, use

+ +
h q[0];
+h q[1];
+cx q[1], q[0];
+h q[1];
+h q[0];
+
+ +

The above can be added in the QASM editor. Or you could do the same with the GUI: it is a CNOT with Hadamards before and after on both qubits. The Hadamards effectively reverse the CNOT direction.
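That the Hadamard sandwich reverses the CNOT can be verified numerically (a sketch of my own; basis ordered $|q_1 q_0\rangle$ with rows/columns $|00\rangle,|01\rangle,|10\rangle,|11\rangle$):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

# CNOT with q1 as control and q0 as target
CX_1to0 = np.array([[1, 0, 0, 0],
                    [0, 1, 0, 0],
                    [0, 0, 0, 1],
                    [0, 0, 1, 0]])

# Hadamards on both qubits before and after swap control and target roles
HH = np.kron(H, H)
CX_0to1 = HH @ CX_1to0 @ HH

# CNOT with q0 as control and q1 as target, for comparison
expected = np.array([[1, 0, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0],
                     [0, 1, 0, 0]])
```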

+",409,,,,,5/30/2018 16:55,,,,4,,,,CC BY-SA 4.0 +2195,1,,,5/31/2018 2:58,,8,1992,"

I have followed the installation steps regarding the QISKit working environment. For circuit visualization, I've installed LaTeX in addition to poppler to convert from PDF to images. Afterwards, I followed the example given here.

+ +

I wrote the code and, after running, the program ran but I didn't get the circuit visualization. I don't know what the problem is; I have not even received any error messages.

+ +

So any ideas?

+",2519,,26,,03-12-2019 09:29,03-12-2019 09:29,Visualization of Quantum Circuits when using IBM QISKit,,1,4,0,,,CC BY-SA 4.0 +2196,1,2197,,5/31/2018 14:36,,4,1238,"

I am currently going through Nielsen's QC bible and having still some foundational / conceptual problems with the matter.

+ +

+ +

I have tried to retrieve this $8 {\times} 8$ matrix describing the QFT of 3 qubits via Kronecker product in various attempts.

+ +

The Hadamard transform can be decomposed into $H \otimes 1 \otimes 1$, and the others are fundamentally Kronecker products of the $4\times 4$ matrices of $S$ resp. $T$ with the $2\times 2$ identity.

+ +

What's wrong with my approach?

+ +

EDIT:

+ +

$T\text{=}\left( +\begin{array}{cccccccc} + 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ + 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ + 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ + 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ + 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ + 0 & 0 & 0 & 0 & 0 & e^{\frac{\pi i}{4}} & 0 & 0 \\ + 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\ + 0 & 0 & 0 & 0 & 0 & 0 & 0 & e^{\frac{\pi i}{4}} \\ +\end{array} +\right)$

+ +

which is derived from $R_k = \left( +\begin{array}{cc} + 1 & 0 \\ + 0 & e^{2 i \pi /2^k} \\ +\end{array} +\right)$, being $S$ for $k=2$ and $T$ for $k=3$.

+ +

EDIT 2:

+ +

The controlled T-operation can be represented in computational basis as

+ +

$\left( +\begin{array}{cccc} + 1 & 0 & 0 & 0 \\ + 0 & 1 & 0 & 0 \\ + 0 & 0 & 1 & 0 \\ + 0 & 0 & 0 & e^{2 \pi i / 2^k } \\ +\end{array} +\right)$.

+ +

EDIT 3:

+ +

In mathematica, one faulty calculation of mine is:

+ +

$\text{SWAP}\text{=}\left( +\begin{array}{cccccccc} + 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ + 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ + 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ + 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\ + 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ + 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ + 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ + 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ +\end{array} +\right)$

+ +

SWAP*KroneckerProduct[IdentityMatrix[2], IdentityMatrix[2], H]KroneckerProduct[IdentityMatrix[2], S] + KroneckerProduct[ IdentityMatrix[2], H, IdentityMatrix[2]] * T KroneckerProduct[S, IdentityMatrix[2]] + KroneckerProduct[H, IdentityMatrix[2], + IdentityMatrix[2]] // MatrixForm

+ +

which gives:

+ +

$\left( +\begin{array}{cccccccc} + \frac{1}{2 \sqrt{2}} & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ + 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ + 0 & 0 & -\frac{1}{2 \sqrt{2}} & 0 & 0 & 0 & 0 & 0 \\ + 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ + 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ + 0 & 0 & 0 & 0 & 0 & \frac{e^{\frac{i \pi }{2}}}{2 \sqrt{2}} & 0 & 0 \\ + 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ + 0 & 0 & 0 & 0 & 0 & 0 & 0 & -\frac{e^{\frac{i \pi }{2}} i^2}{2 \sqrt{2}} \\ +\end{array} +\right)$

+",2522,,2522,,5/31/2018 15:22,5/31/2018 15:39,Example of Quantum Fourier Computation for three qubits,,1,7,,,,CC BY-SA 4.0 +2197,2,,2196,5/31/2018 15:39,,3,,"

The tensor products for the individual gates are all being calculated correctly, and the matrices are being multiplied in the correct order. In this case, it turns out that it is a Mathematica programming error, using the * operator for element-by-element multiplication of matrices rather than matrix multiplication. The first clue was that the overall output was not even unitary, even though the individual gates seemed to be correct.
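To double-check, here is a small NumPy sketch (my own construction, taking qubit 0 as the most significant bit) that multiplies the gates of the standard three-qubit QFT circuit together using proper matrix multiplication, and compares the result to the DFT matrix:

```python
import numpy as np

def dft(n):
    """The N x N quantum Fourier transform matrix, N = 2**n."""
    N = 2**n
    w = np.exp(2j * np.pi / N)
    return np.array([[w**(j*k) for k in range(N)] for j in range(N)]) / np.sqrt(N)

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I = np.eye(2)

def kron(*ops):
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

def cphase(c, t, theta, n=3):
    """Controlled phase gate (symmetric in control/target); qubit 0 = MSB."""
    N = 2**n
    U = np.eye(N, dtype=complex)
    for j in range(N):
        if (j >> (n-1-c)) & 1 and (j >> (n-1-t)) & 1:
            U[j, j] = np.exp(1j * theta)
    return U

def swap(a, b, n=3):
    """SWAP of qubits a and b; qubit 0 = MSB."""
    N = 2**n
    U = np.zeros((N, N))
    for j in range(N):
        bits = [(j >> (n-1-q)) & 1 for q in range(n)]
        bits[a], bits[b] = bits[b], bits[a]
        U[sum(bit << (n-1-q) for q, bit in enumerate(bits)), j] = 1
    return U

# gates in time order (S = R_2, T = R_3); the matrix product runs in reverse
gates = [
    kron(H, I, I),             # H on qubit 0
    cphase(1, 0, np.pi / 2),   # controlled-S
    cphase(2, 0, np.pi / 4),   # controlled-T
    kron(I, H, I),             # H on qubit 1
    cphase(2, 1, np.pi / 2),   # controlled-S
    kron(I, I, H),             # H on qubit 2
    swap(0, 2),                # final swap
]
U = np.eye(8, dtype=complex)
for g in gates:
    U = g @ U

assert np.allclose(U, dft(3))
```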

+",1837,,,,,5/31/2018 15:39,,,,0,,,,CC BY-SA 4.0 +2198,1,2204,,5/31/2018 16:18,,9,179,"

I'm relatively new to quantum computing and my goal is to learn how to implement algorithms that I read in papers. While I have found many circuit snippets, I have yet to find a repository of examples on GitHub or other places where I would go to find machine learning code. Does an analogous quantum computing repository exist?

+",418,,,,,06-11-2018 07:59,Where can I find example circuits to learn from?,,1,2,,,,CC BY-SA 4.0 +2199,2,,2128,5/31/2018 16:33,,7,,"

First let me mention a minor point concerning terminology. The type of channel you are suggesting is often called a Pauli channel; the term depolarizing channel usually refers to the case where $p_x = p_y = p_z$.

+ +

Anyway, it is not really correct to say that Pauli channels are the channel model considered for quantum error correction. Standard quantum error correcting codes can protect against arbitrary errors (represented by any quantum channel you might choose) so long as the errors do not affect too many qubits.

+ +

As an example, let us consider an arbitrary single-qubit error, represented by a channel $\Phi$ mapping one qubit to one qubit. Such a channel can be expressed in Kraus form as +$$ +\Phi(\rho) = A_1 \rho A_1^{\dagger} + \cdots + A_m \rho A_m^{\dagger} +$$ +for some choice of Kraus operators $A_1,\ldots,A_m$. (For a qubit channel we can always take $m = 4$ if we want.) You could, for instance, choose these operators so that $\Phi(\rho) = |0\rangle \langle 0|$ for every qubit state $\rho$, you could make the error unitary, or whatever else you choose. The choice can even be adversarial, selected after you know how the code works.

+ +

Each of the Kraus operators $A_k$ can be expressed as a linear combination of Pauli operators, because the Pauli operators form a basis for the space of 2 by 2 complex matrices: +$$ +A_k = a_k I + b_k X + c_k Y + d_k Z. +$$ +If you now expand out the Kraus representation of $\Phi$ above, you will obtain a messy expression where $\Phi(\rho)$ looks like a linear combination of operators of the form $P_i \rho P_j$ where $i,j\in\{1,2,3,4\}$ and $P_1 = I$, $P_2 = X$, $P_3 = Y$, and $P_4 = Z$.
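As an illustration of this expansion, here is a small NumPy sketch (the amplitude-damping Kraus operator is just an arbitrary example of mine) that computes the coefficients via $a_k = \mathrm{tr}(P_k A)/2$ and reconstructs the operator:

```python
import numpy as np

I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1, -1])

def pauli_coeffs(A):
    # a_k = tr(P_k A) / 2, since tr(P_j P_k) = 2 * delta_jk
    return [np.trace(P @ A) / 2 for P in (I, X, Y, Z)]

# arbitrary example: an amplitude-damping Kraus operator
gamma = 0.3
A1 = np.sqrt(gamma) * np.array([[0, 1], [0, 0]], dtype=complex)

a, b, c, d = pauli_coeffs(A1)
recon = a * I + b * X + c * Y + d * Z
assert np.allclose(recon, A1)
```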

+ +

Now imagine that you have a quantum error correcting code that protects against an $X$, $Y$, or $Z$ error on one qubit. The usual way this works is that some extra qubits in the 0 state are tacked on to the encoded data and a unitary operation is performed that reversibly computes into these extra qubits a syndrome describing which error occurred, if any, and which qubit was affected.

+ +

Supposing that the arbitrary error $\Phi$ happened on the first qubit for simplicity, after the syndrome computation you will end up with a state that looks like a linear combination of terms like this: +$$ +P_i |\psi\rangle \langle \psi| P_j \otimes |P_i\: \text{syndrome}\rangle\langle P_j\:\text{syndrome}|. +$$ +The assumption here is that $|\psi\rangle$ represents the encoded data without any noise, $P_i$ and $P_j$ act on the first qubit, and that ""$P_i$ syndrome"" and ""$P_j$ syndrome"" refer to the standard basis states that indicate that these errors have occurred on the first qubit. (The situation is similar for the error affecting any other qubit; I'm just trying to keep the notation simple by assuming the error happened to the first qubit.)

+ +

Now the key is that you measure the syndrome to see what error occurred, and all of the cross terms disappear because of the measurement. You are left with a probabilistic mixture of states that look like +$$ +P_i |\psi\rangle \langle \psi| P_i \otimes |P_i\: \text{syndrome}\rangle\langle P_i\:\text{syndrome}|. +$$ +The error is corrected and the original state is recovered. In effect, by measuring the syndrome, you ""project"" or ""collapse"" the error to something that looks like a Pauli channel.

+ +

This is all described (somewhat briefly) in Section 10.2 of Nielsen and Chuang.

+",1764,,,,,5/31/2018 16:33,,,,4,,,,CC BY-SA 4.0 +2200,1,,,5/31/2018 22:35,,11,2047,"

I am trying to get used to IBM Q by implementing the three-qubit Grover's algorithm, but am having difficulty implementing the oracle.

+ +

Could you show how to do that or suggest some good resources to get used to IBM Q circuit programming?

+ +

What I want to do is to mark one arbitrary state by flipping its sign, as the oracle is supposed to do.

+ +

For example, I have

+ +

$1/\sqrt8(|000\rangle+|001\rangle+|010\rangle+|011\rangle+|100\rangle+|101\rangle+|110\rangle+|111\rangle)$.

+ +

and I want to mark $|111\rangle$ by flipping its sign to $-|111\rangle$. I somehow understand that a CCZ gate would solve the problem, but we do not have a CCZ gate in IBM Q. Some combination of gates will act the same as CCZ, but I am not sure how to construct it yet. And I am also struggling with the other cases, not only $|111\rangle$.

+ +

The two-qubit case is simple enough for me to implement, but the three-qubit case is still confusing to me.

+",2100,,55,,06-03-2018 16:27,06-09-2021 05:39,Implementation of the oracle of Grover's algorithm on IBM Q using three qubits,,3,2,,,,CC BY-SA 4.0 +2201,1,2202,,5/31/2018 23:23,,7,129,"

In the paper A quantum-implementable neural network model (Chen, Wang & Charbon, 2017), on page 18 they mention that ""There are 784 qurons in the input layer, where each quron is comprised of ten qubits.""

+ +

That seems like a misprint to me. After reading the first few pages I was under the impression that they were trying to use $10$ qubits to replicate the $784$ classical neurons in the input layer, since $2^{10}=1024>784$, such that each sub-state's coefficient's square is proportional to the activity of a neuron. Say the square of the coefficient of $|0000000010\rangle$ could be proportional to the activation of the $2$-nd classical neuron (considering all the $784$ neurons were labelled from $0$ to $783$).

+ +

But if what they wrote is true: ""There are 784 qurons in the input layer"" it would mean there are $7840$ qubits in the input layer, then I'm not sure how they managed to implement their model experimentally. As of now we can properly simulate only ~$50$ qubits.

+ +

However, they managed to give an error rate for $>7840$ qubits (see Page 21: ""Proposed two-layer QPNN, ten hidden qurons, five select qurons - 2.38""). I have no idea how they managed to get that value. Could someone please explain?

+",26,,26,,12/23/2018 12:28,12/23/2018 12:28,How did the authors manage to simulate and get the error estimate for a neural network with greater than 7840 qubits?,,1,0,,,,CC BY-SA 4.0 +2202,2,,2201,06-01-2018 01:32,,2,,"
+

As of now we can properly simulate only ~50 qubits.

+
+ +

You are talking about a full quantum simulation of a vector containing $2^{50}$ elements.

+ +

In quantum neural networks and quantum annealing, we usually only need something close to the ground state (optimal value) rather than the absolute global minimum.

+ +

Here is another example from 2017 where 1000 qubits are simulated:

+ +

+ +

Here's an example from 2015 where 1000 qubits are simulated (it says bits rather than qubits, but they are the qubits of the D-Wave device):

+ +

+",2293,,,,,06-01-2018 01:32,,,,3,,,,CC BY-SA 4.0 +2203,1,2206,,06-01-2018 08:01,,8,469,"

Recently I found out about the Applications of Quantum Computing Professional Certificate Program that MITxPRO is offering for people interested in quantum computing. I saw that it consists of four courses that can be done independently or as a whole program. This is the link for the course.

+ +

I am especially interested in just the last of the four courses, but I do not know whether it would be necessary to take the other ones in order to do it.

+ +

That's why I was wondering if someone here has started this course and, if so, could give some insight about the level required for the courses, the time needed to complete the homework, and their opinion of the course in general. It would also be interesting to hear whether you think taking all the courses is necessary (although I am aware that just one of the courses has been given so far, so this would be a subjective opinion).

+",2371,,26,,12/13/2018 19:51,01-03-2020 04:07,Reference on MITxPRO Applications of Quantum Computing Professional Certificate Program,,2,0,,,,CC BY-SA 4.0 +2204,2,,2198,06-01-2018 14:10,,9,,"

I know this is not what you are asking but this paper: +Quantum Algorithm Implementations for Beginners explains the implementation of some machine learning algorithms. Hope this helps!

+",2100,,,,,06-01-2018 14:10,,,,1,,,,CC BY-SA 4.0 +2205,2,,2200,06-01-2018 15:27,,4,,"

I am answering my own question. After some google search, I found this image showing a CCZ gate built from CNOT, T dagger, and T gates. I tried this on IBM Q and it worked. I want to explore why it works, but that's another story.

+ +
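For what it's worth, one can check numerically that such a circuit really implements CCZ. Below is a NumPy sketch of the standard decomposition (the Toffoli decomposition from Nielsen & Chuang, Fig. 4.9, with the Hadamards on the target removed); the helper functions and qubit ordering are my own:

```python
import numpy as np

I2 = np.eye(2)
T = np.diag([1, np.exp(1j * np.pi / 4)])
Tdg = T.conj().T

def single(gate, q, n=3):
    """Embed a single-qubit gate on qubit q (qubit 0 = MSB)."""
    out = np.array([[1.0 + 0j]])
    for k in range(n):
        out = np.kron(out, gate if k == q else I2)
    return out

def cnot(c, t, n=3):
    """CNOT with control c, target t (qubit 0 = MSB)."""
    N = 2**n
    U = np.zeros((N, N))
    for j in range(N):
        bits = [(j >> (n-1-q)) & 1 for q in range(n)]
        if bits[c]:
            bits[t] ^= 1
        U[sum(b << (n-1-q) for q, b in enumerate(bits)), j] = 1
    return U

# Toffoli decomposition minus the Hadamards on the target qubit
seq = [cnot(1, 2), single(Tdg, 2), cnot(0, 2), single(T, 2),
       cnot(1, 2), single(Tdg, 2), cnot(0, 2), single(T, 1), single(T, 2),
       cnot(0, 1), single(T, 0), single(Tdg, 1), cnot(0, 1)]

U = np.eye(8, dtype=complex)
for g in seq:            # gates in time order; product accumulates in reverse
    U = g @ U

ccz = np.diag([1, 1, 1, 1, 1, 1, 1, -1]).astype(complex)
assert np.allclose(U, ccz)   # only |111> picks up a minus sign
```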

For someone who is interested, here is my quantum circuit of Grover's algorithm finding |111> with one iteration.

+ +

+",2100,,,,,06-01-2018 15:27,,,,1,,,,CC BY-SA 4.0 +2206,2,,2203,06-01-2018 15:39,,8,,"

I signed up for this series because I was interested in the 2nd and 3rd courses.

+ +

There are a lot of students from different backgrounds so I think that limits the depth of what the instructors can cover. The introductory course was too easy in terms of content, however useful in the form of industry perspectives and getting to know 'who is doing what' in hardware. My fear is that the remaining courses will be a bit too simple/general.

+ +

The bulk of the time is spent watching videos. I set the speed to 1.25x or else it's just a bit too slow for me. You could complete the entire course in a weekend.

+ +

Taking all the courses is absolutely not necessary but you do get a nice certificate at the end.

+ +

Oct 31, 2018 Update

+ +

I've finished all 4 courses and have to say the 2nd, 3rd, and 4th courses were great. They went into a reasonable amount of depth in the topics. I'd recommend the series to anyone starting out. If you're already familiar with the basics then maybe skip the first course.

+ +

Jan 2, 2020 Update

+ +

Since I received a few upvotes on this answer recently, I thought I would add a bit more information. The 4 course certificate program has since been split into two two-course programs. Quantum Computing Fundamentals and Quantum Computing Realities. My comments above still stand. Skip the fundamentals course if you're already familiar with the basics.

+",54,,54,,01-02-2020 17:39,01-02-2020 17:39,,,,0,,,,CC BY-SA 4.0 +2207,1,2208,,06-01-2018 17:06,,4,192,"

I am going through Nielsen and Chuang and am finding the chapter on error-correction particularly confusing. At the moment I am stuck on exercise 10.12 which states

+ +

Show that the fidelity between the state $|0 \rangle$ and $\varepsilon(|0\rangle\langle0|)$ is $\sqrt{1-2p/3}$, and use this to argue that the minimum fidelity for the depolarizing channel is $\sqrt{1-2p/3}$.

+ +

As I understand it, $\varepsilon$ is a quantum operation and could be whatever we want as long as it fits the definition. Do I assume $\varepsilon$ is the depolarizing channel, or is there some general operation I don't know about?

+ +

Thanks!

+",2528,,55,,7/26/2020 18:11,7/26/2020 18:11,How to find the fidelity between two state when one is an operator?,,1,0,,,,CC BY-SA 4.0 +2208,2,,2207,06-01-2018 17:45,,2,,"

The channel $\mathcal{E}$ is explicitly defined in the preceding paragraph as being the depolarising channel. Thus, all you need to calculate is +$$ +F=\sqrt{\langle 0|\mathcal{E}(|0\rangle\langle 0|)|0\rangle}. +$$
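A quick numerical sanity check (illustrative NumPy code of mine) confirms this: applying the depolarizing channel $\mathcal{E}(\rho) = (1-p)\rho + \frac{p}{3}(X\rho X + Y\rho Y + Z\rho Z)$ to $|0\rangle\langle 0|$ gives $F = \sqrt{1 - 2p/3}$ for every $p$:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1, -1])

def depolarize(rho, p):
    return (1 - p) * rho + (p / 3) * (X @ rho @ X + Y @ rho @ Y + Z @ rho @ Z)

ket0 = np.array([[1], [0]], dtype=complex)
rho0 = ket0 @ ket0.conj().T

for p in np.linspace(0, 1, 11):
    F = np.sqrt((ket0.conj().T @ depolarize(rho0, p) @ ket0).real.item())
    assert np.isclose(F, np.sqrt(1 - 2 * p / 3))
```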

+",1837,,,,,06-01-2018 17:45,,,,0,,,,CC BY-SA 4.0 +2209,1,2214,,06-01-2018 20:25,,7,494,"

I am interested in a quantum algorithm that has the following characteristics:

+ +
    +
  1. output = 2n bits OR 2 sets of n bits (e.g. 2 x 3 bits)
  2. +
  3. the number of 1-bits in the first set of n-bits must be equal to the number of 1-bits in the second set. E.g. correct output = 0,0,0, 0,0,0 (both 3-bit sets have zero 1-bits); 1,0,0, 0,1,0 (both 3-bit sets have one 1-bit); 1,1,0, 0,1,1 (both 3-bit sets have two 1-bit)
  4. +
  5. Each time the quantum algorithm runs it must randomly return one of the possible solutions.
  6. +
+ +

Any idea how I can best implement such an algorithm on a quantum computer ?

+ +

FYI, I have tried the following algorithm (where n = 2), but it missed the 2 answers 0110 and 1001: +

+",2529,,,,,06-04-2018 09:15,How to create a quantum algorithm that produces 2 n-bit sequences with equal number of 1-bits?,,2,0,0,,,CC BY-SA 4.0 +2210,1,2221,,06-01-2018 20:40,,4,467,"

I work on a Quantum Information Science II: Quantum states, noise and error correction MOOC by Prof. Aram Harrow, and I do not understand which property of tensor products is used in one of the transitions in the videos.

+ +

Let's consider an isometry $V: A \to B \otimes E$ ($E$ is a subspace to be thrown away at the end).

+ +

Let's fix an orthonormal basis $\{ |e\rangle \}$ in $E$ and partially expand the isometry $V$ as $V = \sum_e V_e \otimes |e\rangle$, where each $V_e$ is a linear operator from $A$ to $B$.

+ +

The Stinespring form of a quantum operation is a partial trace applied after an isometry: $N(\rho) = \mathrm{tr}_E [V \rho V^\dagger]$.

+ +

Now, if we expand that with our representation of $V$, we get +$$ +N(\rho) = \mathrm{tr}_E \left[ +\sum_{e_1} \sum_{e_2} +\left( V_{e_1} \otimes |e_1\rangle \right) +\rho +\left( V_{e_2}^\dagger \otimes \langle e_2| \right) +\right]. +$$

+ +

My question is how to get from here to the next step +$$ +N(\rho) = \mathrm{tr}_E \left[ +\sum_{e_1} \sum_{e_2} +(V_{e_1} \rho V_{e_2}^\dagger) \otimes |e_1 \rangle \langle e_2| +\right]? +$$

+ +

(BTW, eventually, we end up with the Kraus operator decomposition of a channel: $N(\rho) = \sum_e V_e \rho V_e^\dagger$.)

+",528,,55,,2/19/2021 18:33,2/19/2021 18:33,Tensor product properties used to obtain Kraus operator decomposition of a channel,,1,1,,,,CC BY-SA 4.0 +2211,1,2213,,06-01-2018 20:54,,14,886,"

I am confused about how to understand the $Z$ gate in a Bloch sphere.

+ +

Considering the matrix $Z = \begin{pmatrix} +1 & 0 \\ +0 & -1 +\end{pmatrix}$ it is understandable that $Z|0\rangle = |0\rangle$ and $Z|1\rangle = -|1\rangle$.

+ +

It is explained here that $Z$ gate is $\pi$ rotation around the $Z$ axis. Then, how should I understand $Z|1\rangle = -|1\rangle$? Since $|1\rangle$ is the south pole, I feel it is natural to think that $\pi$ rotation around the $Z$ axis does not do anything.

+",2100,,26,,06-02-2018 17:56,06-03-2018 06:03,How to think about the Z gate in a Bloch sphere?,,3,0,,,,CC BY-SA 4.0 +2212,1,,,06-01-2018 21:53,,3,126,"

Puzzle

+ +

I have the following puzzle for which I would like to create a quantum algorithm.

+ +
    +
  1. There are 2 players that need to complete 3 tasks as fast as possible.
  2. +
  3. There are 3 different types of tasks ( A, B, C )
  4. +
  5. There are 3 different types of tools (a, b, c)
  6. +
  7. Each player will only get 2 tools (they might get the same tool twice)
  8. +
  9. Tool a allows the player to finish task A 10 minutes faster than without it.
  10. +
  11. Tool b allows the player to finish task B 10 minutes faster than without it.
  12. +
  13. Tool c allows the player to finish task C 10 minutes faster than without it.
  14. +
  15. The output of the quantum algorithm must be (a) a random combination of the 3 tasks (the same task might appear multiple times) and (b) a random assignment of 2 tools to each of the 2 players, while keeping the competition fair (in other words, neither player should gain more time in total thanks to the set of tools he got).
  16. +
+ +

So a possible outcome of the quantum algorithm:

+ +
    +
  • Tasks A, A, C (note that same task can appear multiple times) and
  • +
  • Player one gets tools a,b
  • +
  • Player two gets tools a,a (although he gets 2 a tools - he can only use one tool at a time - so the 2nd a tool would not give any benefits)
  • +
+ +

So, in this case, both players will equally benefit (= 20 minutes) thanks to tool a and the 2 tasks A, A.

+ +

So how would you implement such a problem in a quantum algorithm?

+ +
+ +

Generalized Puzzle

+ +

Of course this puzzle can be further generalized as:

+ +
    +
  • each player has to complete n tasks (and not 3)
  • +
  • instead of 3 different types of tasks (A, B, C) and 3 corresponding different types of tools (a, b, c), there are t different types of tasks with corresponding tools giving them a 10 minute performance benefit.
  • +
  • each player gets k tools instead of 2.
  • +
+ +

I don't need an answer on this generalized puzzle ! I am more than happy to get an answer on the simple puzzle.

+",2529,,2529,,06-04-2018 07:31,06-04-2018 07:31,Algorithm to allocate tasks and tools fairly to 2 players,,0,10,,,,CC BY-SA 4.0 +2213,2,,2211,06-01-2018 22:31,,7,,"

The way to think about the Bloch sphere is in terms of the density matrix for the state. $Z$ acting on either $|0\rangle\langle 0|$ or $|1\rangle\langle 1|$ does nothing, as is true for any diagonal density matrix. To see the effect of the rotation, you need to look at how any non-diagonal density matrix is changed by $Z$, such as $|+\rangle\langle +|$.
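A quick NumPy illustration of this point (the variable names are mine):

```python
import numpy as np

Z = np.diag([1, -1])
one = np.array([[0], [1]])
plus = np.array([[1], [1]]) / np.sqrt(2)

rho_one = one @ one.T
rho_plus = plus @ plus.T

assert np.allclose(Z @ rho_one @ Z, rho_one)        # diagonal: Z does nothing
assert not np.allclose(Z @ rho_plus @ Z, rho_plus)  # |+><+| is mapped to |-><-|
```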

+",1837,,,,,06-01-2018 22:31,,,,0,,,,CC BY-SA 4.0 +2214,2,,2209,06-01-2018 22:49,,2,,"

There are probably better ways than this, but here’s one you could try:

+ +

Start as you have done, with Hadamards on every qubit of the first register, then controlled-NOTs between matching pairs of qubits across the two registers. This creates a uniform superposition of terms $|x\rangle|x\rangle$.

+ +

Now you need to somehow perform a random permutation on the second register. Introduce $\binom{n}2$ ancillary qubits. Apply Hadamard on each, and use each qubit to control the application of a swap between a different pair of qubits on the second register. Then forget about the ancillary qubits, and just measure the first two registers. (I’m guessing this gives you a sufficiently random permutation.)
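Since each ancilla just acts like a fair coin controlling one transposition, you can get a feel for the resulting distribution with a purely classical sketch (my own code; note the distribution is not exactly uniform, but every permutation does occur):

```python
import random
from itertools import combinations
from collections import Counter

def sample_perm(n, rng):
    # each 'ancilla' is a fair coin controlling one transposition
    perm = list(range(n))
    for (i, j) in combinations(range(n), 2):
        if rng.random() < 0.5:
            perm[i], perm[j] = perm[j], perm[i]
    return tuple(perm)

rng = random.Random(0)
counts = Counter(sample_perm(3, rng) for _ in range(60000))
assert len(counts) == 6   # every permutation of 3 elements occurs
```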

+",1837,,1837,,06-02-2018 09:52,06-02-2018 09:52,,,,4,,,,CC BY-SA 4.0 +2215,2,,2211,06-01-2018 22:52,,2,,"

As per Wikipedia, we can write any pure state as $$|\psi\rangle = \cos\left( \frac{\theta}{2} \right) |0 \rangle + e^{i \phi} \sin\left( \frac{\theta}{2} \right) |1 \rangle$$

+ +

Where $\theta$ and $\phi$ are the angles on the Bloch sphere:

+ +

+ +

Almost any point on the surface (i.e. pure state) has a unique representation in terms of the angles, except for the poles. Just like on the Earth the South Pole has no well-defined longitude (any longitude works the same), for the $|1 \rangle$ state any phase $\phi$ means the same thing. The “latitude” $\theta$ is here $\pi$, let's plug that into the equation:

+ +

$$|1\rangle = \cos\left( \frac{\pi}{2} \right) |0 \rangle + e^{i \phi} \sin\left( \frac{\pi}{2} \right) |1 \rangle = $$ +$$ = 0 + e^{i \phi} |1 \rangle$$

+ +

If you are familiar with Euler's identity, you will probably recognise $e^{i \phi}$ as a rotation in the complex plane. In particular, since $Z$ is a rotation for $\phi = \pi$, we get the famous $e^{i \pi} = -1$, finally arriving at $|1 \rangle = - |1 \rangle$.
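A tiny NumPy check of the south-pole statement (illustrative code of mine): for $\theta = \pi$, every choice of $\phi$ yields the same density matrix, i.e. the same physical state:

```python
import numpy as np

def bloch_state(theta, phi):
    return np.array([np.cos(theta / 2), np.exp(1j * phi) * np.sin(theta / 2)])

# at the south pole (theta = pi), every longitude phi gives the same density matrix
rhos = [np.outer(psi, psi.conj())
        for psi in (bloch_state(np.pi, phi) for phi in np.linspace(0, 2 * np.pi, 7))]
assert all(np.allclose(r, rhos[0]) for r in rhos)
```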

+",580,,,,,06-01-2018 22:52,,,,3,,,,CC BY-SA 4.0 +2217,2,,2211,06-02-2018 01:03,,11,,"

$|1\rangle$ and $-|1\rangle$ are assigned to the same point on the Bloch sphere because they are equal up to global phase. Algebraically: $|1\rangle \equiv -|1\rangle$ where $\equiv$ means ""equal up to global phase"". Meaning there is some $\theta$ such that $-|1\rangle = e^{i \theta} |1\rangle$.

+ +

The thing that is confusing you is that, despite the fact that $|0\rangle \equiv Z |0\rangle$ and $|1\rangle \equiv Z |1\rangle$, this is not true for linear combinations of the two. For example, $Z |+\rangle \not\equiv |+\rangle$ even though $|+\rangle = \frac{1}{\sqrt{2}}|0\rangle + \frac{1}{\sqrt{2}}|1\rangle$.
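One convenient numerical test for "equal up to global phase" is $|\langle\psi|\phi\rangle| = 1$ for unit vectors. A small NumPy sketch (my own helper names):

```python
import numpy as np

Z = np.diag([1, -1])
one = np.array([0, 1])
plus = np.array([1, 1]) / np.sqrt(2)

def equal_up_to_phase(a, b):
    # unit vectors are equal up to global phase iff |<a|b>| = 1
    return np.isclose(abs(np.vdot(a, b)), 1.0)

assert equal_up_to_phase(Z @ one, one)        # Z|1> = -|1>, same physical state
assert not equal_up_to_phase(Z @ plus, plus)  # Z|+> = |->, a different state
```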

+",119,,119,,06-03-2018 06:03,06-03-2018 06:03,,,,0,,,,CC BY-SA 4.0 +2219,2,,2166,06-02-2018 06:07,,7,,"

Summary

+ +
    +
  • There is a theory of complexity of search problems (also known as relation problems). This theory includes classes called FP, FNP, and FBQP which are effectively about solving search problems with different sorts of resources.
  • +
  • From search problems, you can also define decision problems, which allows you to relate search problems to the usual classes P, NP, and BQP.
  • +
Whether you consider the search version or the decision version of the problem, the way that you consider the input to the Unstructured Search problem will determine what upper bounds you can put on its complexity.
  • +
+ +

The complexity of relation problems

+ +

As you note, Grover's problem solves a search problem, which in the complexity literature is sometimes also known as a relation problem. That is, it is a problem of the following sort:

+ +
+

The structure of a general search problem.
Given an input $x$ and a binary relation $R$, find a $y$ such that $R(x,y)$ holds.

+
+ +

The complexity classes FP and FNP are defined in terms of such problems, where in particular one is interested in the case where $y$ has length at most a polynomial function of the length of $x$, and where the relation $R(x,y)$ can itself be computed in an amount of time bounded by some polynomial in the length of $(x,y)$.

+ +

In particular: the example of the 'database search' problem for which Grover's Search is usually applied can be described as follows.

+ +
+

Unstructured Search.
+ Given an input oracle $\mathcal O: \mathcal H_2^{\otimes m+1} \!\to \mathcal H_2^{\otimes m+1}$ such that $\mathcal O \lvert a \rangle \lvert b \rangle = \lvert a \rangle \lvert b \oplus f(a) \rangle$ for some function $f: \{0,1\}^m \to \{0,1\}$, find a $y$ such that $\mathcal O + \lvert y \rangle \lvert 0 \rangle = \lvert y \rangle \lvert 1 + \rangle$.

+
+ +

Here, the oracle itself is the input to the problem: it plays the role of $x$, and the relation which we are computing is +$$ R(\mathcal O,y) \;\;\equiv\;\; \Bigl[\mathcal O + \lvert y \rangle \lvert 0 \rangle = \lvert y \rangle \lvert 1 + \rangle\Bigr] \;\;\equiv\;\; \Bigl[ f(y) = 1 \Bigr].$$

+ +

Suppose that, instead of an oracle, we are provided with a specific input $x$ which describes how the function $f$ is to be computed, we can then consider which complexity class this problem belongs to. As pyramids indicates, the appropriate complexity class we obtain depends on how the input is provided.

+ +
    +
  • Suppose that the input function is provided as an database (as the problem is sometimes described), where each entry to the database has some length $\ell$. If $n$ is the length of the string $x$ used to describe the entire database, then the database has $N = n\big/\ell$ entries. It is then possible to exhaustively search the entire database by querying each of the $N$ entries in sequence, and stop if we find an entry $y$ such that $f(y) = 1$. Supposing that each query to the database takes something like $O(\log N) \subseteq O(\log n)$ time, this procedure halts in time $O(N \log N) \subseteq O(n \log n)$, so that the problem is in FP.

    + +

Assuming that the database-lookup can be done in coherent superposition, Grover's algorithm shows that this problem is in FBQP. However, as FP ⊆ FBQP, the classical exhaustive search also proves that this problem is in FBQP. All that we obtain (up to log factors) is a quadratic speed-up due to a savings in the number of database queries.

  • +
  • Suppose that the input function is described succinctly, by a polynomial-time algorithm that takes a specification $x \in \{0,1\}^n$ and an argument $y \in \{0,1\}^m$ and computes $\mathcal O : \mathcal H_2^{m+1} \!\to \mathcal H_2^{m+1}\!$ on a standard basis state $\lvert y \rangle\lvert b \rangle$, where $m$ may be much larger than $\Omega(\log n)$. An example would be where $x$ specifies the CNF form of some boolean function $f: \{0,1\}^m \to \{0,1\}$ for $m \in O(n)$, in which case we may efficiently evaluate $f(y)$ on an input $y \in \{0,1\}^m$ and thereby efficiently evaluate $\mathcal O$ on standard basis states. This puts the problem in FNP.

    + +

    Given a procedure to evaluate $f(y)$ from $(x,y)$ in time $O(p(n))$ for $n = \lvert x \rvert$, Grover's algorithm solves the problem of Unstructured Search for $\mathcal O$ in time $O(p(n) \sqrt{2^m})$ $\subseteq O(p(n) \sqrt{2^n})$. This is not polynomial in $n$, and so does not suffice to put this problem in FBQP: we only obtain a quadratic speedup — though this is still a potentially huge savings of computation time, assuming that the advantage provided by Grover's algorithm is not lost to the overhead required for fault-tolerant quantum computation.

  • +
+ +

In both cases, the complexity is determined in terms of the length $n$ of the string $x$ which specifies how to compute the oracle $\mathcal O$. In the case that $x$ represents a look-up table, we have $N = n\big/\ell$, in which case the performance as a function of $N$ is similar to the performance as a function of $n$; but in the case that $x$ succinctly specifies $\mathcal O$, and $N \in O(2^{n/2})$, the big-picture message that Grover's solves the problem in $O(\sqrt N)$ queries obscures the finer-grained message that this algorithm is still exponential-time for a quantum computer.

+ +

Decision complexity from relation problems

+ +

There is a straightforward way to get decision problems from relation problems, which is well-known from the theory of NP-complete problems: to turn the search problem to a question of the existence of a valid solution.

+ +
+

The decision version of a general search problem.
Given an input $x$ and a binary relation $R$, determine whether $\exists y: R(x,y)$ holds.

+
+ +

The complexity class NP can essentially be defined in terms of such problems, when the relationship $R$ is efficiently computable: the most famous NP-complete problems (CNF-SAT, HAMCYCLE, 3-COLOURING) are about the mere existence of a valid solution to an efficiently verifiable relationship problem. This switch from producing solutions to simply asserting the existence of solutions is also what allows us to describe versions of integer factorisation which are in BQP (by asking whether there exist non-trivial factors, rather than asking for the values of non-trivial factors).

+ +

In the case of Unstructured Search, again which complexity class best describes the problem depends on how the input is structured. Determining whether there exists a solution to a relationship problem may be reduced to finding and verifying a solution to that problem. Thus in the case that the input is a string $x$ specifying the oracle as a look-up table, the problem of unstructured search is in P; and more generally if $x$ specifies an efficient means of evaluating the oracle, the problem is in NP. It is also possible that there is a way of determining whether there exists a solution to Unstructured Search which does so without actually finding a solution, though it is not clear in general how to do so in a way which would provide an advantage over actually finding a solution.

+ +

Oracle complexity

+ +

I have conspicuously been shifting from talking about the oracle $\mathcal O$, to ways that an input $x$ can be used to specify (and evaluate) the oracle $\mathcal O$. But of course, the main way in which we consider Grover's algorithm is as an oracle result in which evaluating the oracle takes a single time-step and requires no specification. How do we consider the complexity of the problem in this case?

+ +

In this case, we are dealing with a relativised model of computation, in which evaluating this one specific oracle $\mathcal O$ (which, remember, is the input to the problem) is a primitive operation. This oracle is defined on all input sizes: to consider the problem for searching on strings of length $n$, you must specify that you are considering how the oracle $\mathcal O$ acts on inputs of length $n$, which again would be done by considering the length of a boolean string $x$ taken as input. In this case, the way in which we would present the problem might be as follows.

+ +
+

Unstructured Search relative to Oracle $\mathcal O$.
Given an input $x = 11\cdots 1$ of length $n$,

+ +
    +
  • find a $y \in \{0,1\}^n$ (relation problem) or

  • +
  • determine whether there exists a $y \in \{0,1\}^n$ (decision problem)

  • +
+ +

such that $\mathcal O \lvert y \rangle \lvert 0 \rangle = \lvert y \rangle \lvert 1 \rangle$.

+
+ +

This problem is in $\mathsf{NP}^{\mathcal O}$ (for the decision problem) or $\mathsf{FNP}^{\mathcal O}$ (for the relation problem), depending on which version of the problem you wish to consider. Because Grover's algorithm is not a polynomial-time algorithm, this problem is not known to be in $\mathsf{BQP}^{\mathcal O}$ or $\mathsf{FBQP}^{\mathcal O}$. In fact, we can say something stronger, as we will soon see.

+ +

The reason why I brushed over the actual, oracle-based description of Unstructured Search was in order to touch on your point of complexity, and in particular to touch on the question of input size. The complexity of problems are largely governed by how the inputs are specified: as a succinct specification (in the case of how a function is specified in CNF-SAT), as an explicit specification (in the case of a look-up table for a function), or even as an integer specified in unary, i.e. as the length of a string of 1s as above (as in ""Unstructured Search relative to Oracle $\mathcal O$"" above).

+ +

As we can see from the latter case, if we treat the input only as an oracle, the situation looks a bit un-intuitive, and it certainly makes it impossible to talk about the ways that the ""database"" can be realised. But one virtue of considering the relativised version of the problem, with an actual oracle, is that we can prove things which otherwise we have no idea how to prove. If we could prove that the decision version of the succinct unstructured search problem was in BQP, then we would stand to realise an enormous breakthrough in practical computation; and if we could prove that the decision problem was not actually in BQP, then we would have shown that P ≠ PSPACE, which would be an enormous breakthrough in computational complexity. We don't know how to do either. But for the relativised problem, we can show that there are oracles $\mathcal O$ for which the decision version of ""Unstructured Search relative to $\mathcal O$"" is in $\mathsf{NP}^{\mathcal O}$ but not in $\mathsf{BQP}^{\mathcal O}$. This allows us to show that while quantum computing is potentially powerful, there are reasons to expect that BQP probably doesn't contain NP, and that the relation version of Unstructured Search in particular is unlikely to be contained in FBQP without imposing strong constraints on how the input is represented.

+",124,,,,,06-02-2018 06:07,,,,0,,,,CC BY-SA 4.0 +2221,2,,2210,06-02-2018 17:36,,3,,"

As pointed out in a comment, what you wrote as $\rho$ should more precisely be written as $\rho\otimes\mathbb 1$ (although the Kraus operator decomposition can be obtained similarly with any initial ancilla state, in which case you have $\rho\otimes|\phi\rangle\!\langle\phi|$).

+ +

The standard algebraic properties of tensor product spaces then tell you that +$$(A\otimes B)(C\otimes D)=(AC)\otimes(BD),$$ +from which you immediately get your result.

+",55,,528,,06-10-2018 21:14,06-10-2018 21:14,,,,2,,,,CC BY-SA 4.0 +2222,1,,,06-03-2018 12:42,,15,665,"

I am interested in a quantum algorithm that gets as input an n-bit sequence and that produces as output a reshuffled (permuted) version of this n-bit sequence.

+ +

E.g. if the input is 0,0,1,1 (so n=4 in this case) then the possible answers are:

+ +
    +
  • 0,0,1,1
  • +
  • 0,1,0,1
  • +
  • 0,1,1,0
  • +
  • 1,0,0,1
  • +
  • 1,0,1,0
  • +
  • 1,1,0,0
  • +
+ +

Note that only one output should be generated which is randomly chosen among all possible valid outputs.

+ +

How can this best be implemented in a quantum algorithm ?

+ +

A solution for this is already proposed as part of one of the answers for How to create a quantum algorithm that produces 2 n-bit sequences with equal number of 1-bits?. But the problem with this solution is that this requires about $\binom{n}2$ help qubits which becomes rapidly huge if n is big.

+ +

Note:

+ +
    +
  • Please, do not provide a classical algorithm without any explanation of how the steps of the classical algorithm can be mapped to a universal quantum computer.
  • +
  • for me there are 2 good ways to interpret ""randomly chosen among all possible good outputs"": (1) each possible good output has equal chance of being chosen. (2) every possible good output has a chance > 0 of being chosen.
  • +
+",2529,,2529,,06-05-2018 11:20,06-06-2018 21:28,How to permute (reshuffle) an n-bit input?,,3,9,,,,CC BY-SA 4.0 +2223,1,2224,,06-03-2018 14:47,,7,301,"

During the classical pre-processing stage of Simon's algorithm, we repeat the quantum operations $n-1$ times to get

+ +

$$ +\begin{alignat}{7} +y_{1} \cdot s & \phantom{\vdots} =~ && 0 \\ y_{2} \cdot s & \phantom{\vdots}=~ && 0 \\ & \phantom{=} \vdots \\ y_{n-1} \cdot s & \phantom{\vdots}=~ && 0 ~, +\end{alignat} +$$

+ +

where $s$ is the period while $y_{i}$ are the linearly independent measurement outcomes of the quantum process. But, shouldn't we be requiring $n$ equations to get the value of $s$ as it is an unknown with $n$ variables? I wonder if this is because a system of $n$ equations will admit only the trivial solution of all $0$s. Is there a mathematical reasoning to elucidate this? How exactly would we uniquely solve for $n$ variables with $n-1$ equations?

+",1351,,15,,06-04-2018 00:04,06-04-2018 00:04,Simon's algorithm: Number of equations,,1,0,,,,CC BY-SA 4.0 +2224,2,,2223,06-03-2018 18:18,,5,,"

Imagine for a moment that $y_1,\ldots,y_{n-1}$ are linearly independent vectors in $\mathbb{R}^n$. There would then be a one-dimensional subspace of vectors $s$ satisfying the $n-1$ equations you listed.

+ +

The situation is the same here, except that the field of real numbers is replaced by the finite field $\mathbb{F}_2$ with elements 0 and 1. The same linear algebra as in the real number case works in this case, and we again obtain a one-dimensional subspace of possible solutions $s$. This time, however, because there are only two elements of $\mathbb{F}_2$, this one-dimensional subspace includes only two elements: the all-zero vector and some nonzero vector $s$. Naturally we disregard the all-zero vector and take the nonzero vector $s$ as our solution.

+ +

Note also that if you did add another equation, $y_n\cdot s = 0$ for some $y_n$, then $y_n$ would have to be a linear combination of $y_1,\ldots,y_{n-1}$, for otherwise the only solution would be $s = (0,\ldots,0)$. So, you would not obtain any additional information about $s$ by adding an additional equation.
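To make this concrete, here is a small classical Python sketch of the post-processing for a toy case with $n=3$ (the function names are illustrative, not from any library): it brute-forces the unique nonzero $s$ with $y_i \cdot s = 0 \pmod 2$ for all measured $y_i$.

```python
from itertools import product

def dot2(a, b):
    # inner product of two bit-vectors, modulo 2
    return sum(x * y for x, y in zip(a, b)) % 2

def solve_secret(ys, n):
    # brute-force the one-dimensional nullspace over F_2: return the
    # unique nonzero s with y . s = 0 for every measured outcome y
    for s in product([0, 1], repeat=n):
        if any(s) and all(dot2(y, s) == 0 for y in ys):
            return s
    return None

# n = 3: two linearly independent outcomes suffice
ys = [(1, 1, 0), (0, 1, 1)]
print(solve_secret(ys, 3))  # -> (1, 1, 1)
```

For realistic $n$ one would of course use Gaussian elimination over $\mathbb{F}_2$ rather than this $2^n$ brute-force search, but the underlying linear algebra is the same.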

+",1764,,,,,06-03-2018 18:18,,,,0,,,,CC BY-SA 4.0 +2225,1,,,06-04-2018 01:53,,8,564,"

This is probably just a misunderstanding on my part, but everything I've seen on what quantum computers do thus far seems to suggest that the actual process of reading the entangled qubits would be equivalent to reading the value of a plate opposing a subdivided plate in a plate capacitor while the setting of initial qubits would be the equivalent of assigning a voltage to each subdivided plate. E.g. in this image:

+ +

+ +

You would be able to read the voltage on the red plate after setting independent voltages from a known range representing 0 at the low and 1 at the high on the 4 separate subdivisions of the opposing plate, then rounding off at some particular voltage to get a zero or one out of it for those 4 bits.

+ +

Is this wrong? If so, how does it differ from actual quantum computing?

+",2538,,26,,12/23/2018 12:29,12/23/2018 12:29,What's the difference between a set of qubits and a capacitor with a subdivided plate?,,2,1,,,,CC BY-SA 4.0 +2226,2,,2225,06-04-2018 03:40,,5,,"

Your capacitors cannot be in the state $\frac{1}{\sqrt{2}}\left(|00\rangle + |11\rangle\right)$, but qubits can.

+ +

Let's say $|0\rangle$ is 0$\,$V and $|1\rangle$ is 1$\,$V.
+If you have 2 bits we can have $|00\rangle$,$|01\rangle$,$|10\rangle$,$|11\rangle$.

+ +

But the state: $\frac{1}{\sqrt{2}}\left(|00\rangle + |11\rangle\right)$, is in a superposition of two of these cases. The bit values can be (0,0) or (1,1). Either case is equally possible, until a measurement is made (think Schrödinger's cat: you don't know if it's alive or dead until you open the box).
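To see the difference concretely, here is a small numpy sketch (illustrative only) that samples measurement outcomes of the entangled state via the Born rule:

```python
import numpy as np

rng = np.random.default_rng(42)
psi = np.array([1, 0, 0, 1]) / np.sqrt(2)  # (|00> + |11>) / sqrt(2)
probs = np.abs(psi) ** 2                   # Born rule over basis 00,01,10,11
samples = rng.choice(["00", "01", "10", "11"], p=probs, size=1000)
# only 00 and 11 ever occur: each bit is random on its own, yet the two
# bits are perfectly correlated -- no fixed voltage assignment does this
print(set(samples))
```

Each individual bit looks like a fair coin flip, but the joint statistics are what a set of independently-set capacitor plates cannot reproduce.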

+",2293,,2293,,07-08-2018 01:31,07-08-2018 01:31,,,,6,,,,CC BY-SA 4.0 +2227,1,,,06-04-2018 05:46,,7,182,"

I have been reading the paper A quantum-implementable neural network model (Chen et al., 2017) for a few days now, but failed to understand how exactly their algorithm offers a speedup over the classical neural network models.

+

In particular, I'm confused about what they mean by quantum multiplexers and what they are trying to do with them. They haven't even defined the term properly.

+

Here's the relevant paragraph:

+
+

One of the most interesting points of QPNN is that we can use quantum parallelism to combine several basic networks at the same time. To achieve this purpose, only $n$ qubits are needed in a control layer to perform the following quantum multiplexer as shown in Fig. $5$.

+

$$\left( \begin{array}{ccc} U_1 \\ & \ddots \\ && U_{2^n} \end{array} \right) \tag{13}$$ +

+
+

+
+

where $U_i$ represents the dynamics of the $i$th basic network. Moreover $2^{n}$ different quantum gates $\left\{P^{(i)}|i=1, \dots ,2^{n}\right\}$ can also be applied on the output layer of each basic network respectively.

+
+

Questions:

+
    +
  1. What does $i^{\text{th}}$ basic network mean in this context?
  2. +
  3. What is a quantum multiplexer and how exactly does it help in this context? What is meant by the matrix shown in $(13)$? (I read a few papers on quantum multiplexers which say that they are basically used to transfer information contained in several qubits as information in a qudit. But I have no idea how that is relevant here.)
  4. +
  5. What do they mean by "we can use quantum parallelism to combine several basic networks at the same time"?
  6. +
+",26,,-1,,6/18/2020 8:31,7/13/2018 14:44,"What do ""$i$-th basic network"", ""quantum multiplexers"" and ""quantum parallelism"" mean in this context? How are they beneficial?",,0,2,,,,CC BY-SA 4.0 +2228,1,2235,,06-04-2018 06:48,,9,3159,"

I would like to simulate a quantum algorithm where one of the steps is a ""Square root of Swap"" gate between 2 qubits.

+ +

How can I implement this step using the IBM composer?

+",2529,,26,,12/23/2018 13:45,02-06-2020 22:02,"How to implement the ""Square root of Swap gate"" on the IBM Q (composer)?",,3,3,,,,CC BY-SA 4.0 +2229,2,,2209,06-04-2018 06:50,,0,,"

I know we already have an answer here, but going back to the problem specification, there's a much simpler way to achieve this if the only thing that's important is the output, comprising a binary string. All the protocol actually has to do is:

+ +
    +
  • Select an $n$-bit string, $x\in\{0,1\}^n$ at random. If you don't want to trust classical randomness, use $n$ qubits, starting in $|0\rangle$, with Hadamard acting on them, and measure in the computational basis. In fact, just use 1 qubit, and repeat the same thing $n$ times with it.

  • +
  • Let $w_x$ be the Hamming weight of $x$ (i.e. the number of 1s). Select a random $y\in\{0,1\}^n$ such that $w_y=w_x$. There are several ways you might do this, but, for example, use $m=\lceil\log_2\binom{n}{w_x}\rceil$ bits. Generate an answer $z\in\{0,1\}^m$ uniformly at random (i.e. $p_z=1/2^m$). $\binom{n}{w_x}$ of these strings $z$ can be mapped onto a suitable $y$. If a given $z$ cannot, just keep choosing a new random $z$ until one can. On average, you won't need more than 2 attempts.

  • +
  • Give the answer $x,y$.

  • +
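A classical Python sketch of this procedure (illustrative only: ordinary pseudorandomness stands in for the measured qubits, and the combinatorial-unranking helper is just one of the ""several ways"" of mapping $z$ to a weight-preserving $y$):

```python
import random
from math import comb

def random_equal_weight_pair(n):
    # step 1: a uniformly random n-bit string x
    x = [random.randrange(2) for _ in range(n)]
    w = sum(x)
    # step 2: rejection-sample an m-bit z until it indexes one of the
    # C(n, w) strings of Hamming weight w (expected < 2 attempts)
    N = comb(n, w)
    m = max(1, (N - 1).bit_length())
    while True:
        z = random.getrandbits(m)
        if z < N:
            break
    # unrank z into the z-th weight-w string, in lexicographic order
    y, k = [], w
    for i in range(n):
        c0 = comb(n - i - 1, k)  # strings with a 0 in position i
        if z < c0:
            y.append(0)
        else:
            y.append(1)
            z -= c0
            k -= 1
    return x, y

print(random_equal_weight_pair(6))
```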
+",1837,,1837,,06-04-2018 09:15,06-04-2018 09:15,,,,1,,,,CC BY-SA 4.0 +2230,1,5123,,06-04-2018 09:13,,11,1796,"

In order to rotate about an axis of the Bloch sphere we usually use pulses, e.g. in trapped ion quantum computing or superconducting qubits. Let's say we have rotation around the x-axis. What do I have to change in order to be able to rotate around the y-axis or the z-axis? I assume it has something to do with the phase but I could not find a good reference for how this works.

+",1853,,26,,12/23/2018 13:44,01-06-2019 14:27,Rotating about the y- or z-axis of the Bloch sphere,,2,5,,,,CC BY-SA 4.0 +2232,2,,2222,06-04-2018 12:45,,0,,"

A quantum computer can do classical computations. +The optimal algorithm would be to:

+ +
    +
  1. Pick any bit (the fastest one you can get access to).
  2. +
  3. Find a bit that has the opposite value (if in step 1 you got a 0, find a 1)
  4. +
  5. Switch them (0 becomes 1 and 1 becomes 0).
  6. +
+ +

Step 2 involves searching through an $N$-bit string, which using classical operations takes $\mathcal{O}(N)$ operations, but if you can get the $n^{\textrm{th}}$ bit value by evaluating a function, you may be able to use Grover's quantum algorithm to find the opposite bit with $\mathcal{O}\left(\sqrt{N}\right)$ operations.

+",2293,,,,,06-04-2018 12:45,,,,1,,,,CC BY-SA 4.0 +2233,2,,2222,06-04-2018 14:54,,5,,"

It could be done with $\lceil\log n\rceil$ additional qubits along these lines:

+ +
+
    +
  1. Transform the additional qubits so that they encode a number + $k\in\{0,\ldots,n-1\}$ chosen uniformly at random.

  2. +
  3. Cyclically shift the input qubits $k$ times.

  4. +
  5. Let the last of the original input qubits be fixed as an output and recurse on the remaining $n-1$ of them.

  6. +
+
+ +

This is a classical algorithm, but you can run it on a quantum computer of course, as Norbert has suggested in a comment. (The aspect of the question that is adamant about the algorithm being quantum is still not clear to me, so if running a classical algorithm such as the one I have suggested on a quantum computer is not sufficient, it would be helpful for the question to be clarified.)

+ +

Note that because the question asks for a random output, the algorithm is going to have to generate entropy at some point, presumably through measurements or performing other non-unitary operations on qubits (such as initializing them). In the algorithm above, it is the first step that generates entropy: regardless of the state of the additional qubits before the operation in step 1 is performed, they should have the state +$$ +\frac{1}{n} \sum_{k = 0}^{n-1} \lvert k \rangle \langle k \rvert +$$ +after step 1 is performed (with $k$ encoded in binary, let's say).
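For concreteness, the three steps above can be sketched classically in Python (an illustration only; Python's random stands in for the entropy that a quantum device would generate by measurement):

```python
import random

def random_permutation_by_shifts(bits):
    # Repeatedly: (1) pick k uniformly at random, (2) cyclically shift the
    # remaining prefix k times, (3) fix the last element and recurse.
    bits = list(bits)
    for end in range(len(bits), 1, -1):
        k = random.randrange(end)            # step 1
        bits[:end] = bits[k:end] + bits[:k]  # step 2: cyclic shift
        # step 3: bits[end-1] is now fixed; continue on bits[:end-1]
    return bits

print(random_permutation_by_shifts([0, 0, 1, 1]))
```

Each pass places a uniformly random element of the remaining prefix in the last position, so the output is a uniformly random permutation of the input.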

+",1764,,,,,06-04-2018 14:54,,,,5,,,,CC BY-SA 4.0 +2234,2,,2228,06-04-2018 16:49,,0,,"

Every 2-qubit gate has a ""Paulinomial decomposition"" which means it can be written as a polynomial of Pauli matrices.

+ +

For the gate you want:

+ +

$ +\sqrt{ \mbox{SWAP} } = +\begin{bmatrix} +1 & 0 & 0 & 0 \\ +0 & \frac{1}{{2}} (1+i) & \frac{1}{{2}} (1-i) & 0 \\ +0 & \frac{1}{{2}} (1-i) & \frac{1}{{2}} (1+i) & 0 \\ +0 & 0 & 0 & 1 \\ +\end{bmatrix} = \frac{1-i}{4}\left(X_1X_2+Y_1Y_2+Z_1Z_2\right) +\frac{3+i}{4}I, +$

+ +

where $X_i$ is an $X$ gate applied to the $i^\textrm{th}$ qubit.
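As a sanity check, here is a small numpy sketch verifying both that the matrix squares to SWAP and that it equals the Pauli combination $\frac{1-i}{4}\left(X_1X_2+Y_1Y_2+Z_1Z_2\right)+\frac{3+i}{4}I$ (note the identity coefficient $(3+i)/4$):

```python
import numpy as np

M = np.array([[1, 0, 0, 0],
              [0, (1 + 1j) / 2, (1 - 1j) / 2, 0],
              [0, (1 - 1j) / 2, (1 + 1j) / 2, 0],
              [0, 0, 0, 1]])
SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]])
X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1, -1])
I = np.eye(2)

# M really is a square root of SWAP ...
assert np.allclose(M @ M, SWAP)
# ... and matches the Pauli decomposition with identity coefficient (3+i)/4
decomp = (1 - 1j) / 4 * (np.kron(X, X) + np.kron(Y, Y) + np.kron(Z, Z)) \
         + (3 + 1j) / 4 * np.kron(I, I)
assert np.allclose(M, decomp)
```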

+",2293,,2293,,06-04-2018 16:59,06-04-2018 16:59,,,,10,,,,CC BY-SA 4.0 +2235,2,,2228,06-04-2018 22:42,,11,,"

Here is a SQRT(SWAP) construction which only requires CNOTs in one direction, Hadamards, S gates ($Z^{\frac{1}{2}}$), S dagger gates ($Z^{-\frac{1}{2}}$), T gates ($Z^{\frac{1}{4}}$) and T dagger gates ($Z^{-\frac{1}{4}}$):

+ +

+ +

You should be able to encode it directly into the composer.

+",119,,-1,,02-06-2020 22:02,02-06-2020 22:02,,,,16,,,,CC BY-SA 4.0 +2236,2,,2228,06-04-2018 23:22,,5,,"

What you want to do is a rotation on the subspace spanned by $|01\rangle$ and $|10\rangle$ which rotates it by $\sqrt{X}$. To this end, you can first do a CNOT, which maps this subspace to $\{|01\rangle,|11\rangle\}$. Now you need to do the $\sqrt{X}$ rotation on the first qubit, conditioned on the second qubit being one. Implementing controlled-$U$ gates using CNOTs is a standard construction, which can be found in a range of places, see e.g. https://arxiv.org/abs/quant-ph/9503016. Depending how you do this step, you might have to fix the ""global"" phase of the 1st qubit (given the 2nd is $|1\rangle$). Finally, you need to undo the CNOT.

+",491,,,,,06-04-2018 23:22,,,,5,,,,CC BY-SA 4.0 +2237,2,,2225,06-05-2018 09:46,,2,,"

It looks like you are asking about the possibility of encoding the mathematics of quantum states & measurement into some kind of analog device. And yes, I think this is possible. This reminds me of how people study analog models of gravity. The one problem I can see with this is that it will not scale very well as you increase the number of entangled quantum systems. E.g. for every extra qubit added to a system you double the number of dimensions. So for 10 qubits you would need a capacitor plate with 1024 subdivisions, and so on.

+ +

In summary, what you are proposing is to simulate a quantum system with an analog computer. We already do this with digital computers, but it just doesn't scale.

+",263,,,,,06-05-2018 09:46,,,,0,,,,CC BY-SA 4.0 +2238,1,2241,,06-05-2018 09:59,,8,544,"

I am learning how to program the IBM Q Experience quantum computers in order to learn more about how they work and to perform some experiments on them. By doing so I was wondering what are the most advanced things that have been done on such computers that actually represent an advance for quantum technologies.

+ +

More specifically, I am interested in what has been done in quantum error correction code implementation and testing on these devices, and whether papers about those implementations and the techniques used are available.

+",2371,,26,,12/13/2018 19:51,12/13/2018 19:51,Practical Implementations of QECCs in IBM Q Experience,,2,0,,,,CC BY-SA 4.0 +2239,2,,2222,06-05-2018 11:57,,3,,"

Note: this answer assumes you want the permutation to be coherent, i.e. you want $\frac{1}{\sqrt{3}} ( |001\rangle + |010\rangle + |100\rangle)$ instead of a 1/3 chance of $001$, a 1/3 chance of $010$, and a 1/3 chance of $100$.

+ +

Be careful how you specify this task, because it could very easily be impossible due to reversibility constraints. For example, for the input $|001\rangle$ you want to output the GHZ state $\left| {3 \atop 1} \right\rangle = \frac{1}{\sqrt{3}} (|001\rangle + |010\rangle + |100\rangle)$. But if you also want to output the GHZ state for the input $|010\rangle$ and $|100\rangle$, that won't work. You can't send multiple input states to the same output state (without decoherence). As long as you say ""I only care about sorted-ascending inputs like 0000111 but not 1110000 or 0010110; you can do whatever you want with those"", this will be fine.

+ +

One trick to producing a quantum permutation of a sorted input is to first prepare a ""permutation state"" by applying a sorting network to a list of seed values each in a uniform superposition. The sorting network will output qubits holding the sorted seeds, but also qubits holding the sorting network comparisons. The permutation state is just the comparison qubits. To apply it to your input, you simply run the input through the sorting network in reverse. Note that there are some tricky details here; see the paper ""Improved Techniques for Preparing Eigenstates of Fermionic Hamiltonians"". You would have to generalize this technique to work on inputs with repeated values, instead of only unique values.

+ +

You may also want to look into ""quantum compression"", which is very closely tied to the $\left| {n \atop k} \right\rangle$ states (uniform superpositions of all $n$-bit states with $k$ bits set) that you want to produce. The main difference is that you would run the quantum compression circuit in reverse, and it expects a number encoding ""how many ones are there?"" instead of ""give me a state with the correct number of ones"".

+ +

I guess what I'm saying is that producing these kinds of states is more complicated than you might have expected. I think the reason it is complicated is because the magnitude of the amplitudes in your outputs depend on the computational basis state of your input. For example, for $|0001\rangle$ you want an output which is a superposition of four classical states, so you have a prefactor of $\frac{1}{\sqrt{4}}$ hidden inside $\left| {4 \atop 1} \right\rangle$. But for $|0011\rangle$ the desired output has six classical states and so $\left| {4 \atop 2} \right\rangle$ hides a prefactor of $\frac{1}{\sqrt{6}}$.

+",119,,119,,06-05-2018 12:17,06-05-2018 12:17,,,,2,,,,CC BY-SA 4.0 +2240,2,,2238,06-05-2018 12:12,,2,,"

You might find the paper ""Automated Error Correction in IBM Quantum Computer and Explicit Generalization"" (2017) by Panigrahi et al. relevant. As for ""what are the most advanced things that have been done in such computers that actually are an advance for quantum technologies"", if you search a bit on arXiv you'll find quite a few relevant papers published by them. One such recent example which I personally had found quite interesting is: ""Application of quantum scrambling in Rydberg atom on IBM quantum computer"" (2018).

+ +

Disclosure: I'm currently an undergraduate member of Prof. Panigrahi's group.

+",26,,26,,06-05-2018 12:29,06-05-2018 12:29,,,,0,,,,CC BY-SA 4.0 +2241,2,,2238,06-05-2018 13:13,,4,,"

The publicly available IBM devices don't yet have the connectivity to realize quantum error correcting codes that both detect and correct a full set of quantum errors. But we can certainly do proof-of-principle experiments on the tools and techniques required. Here are the experiments I know of

+ +

Error correction experiments done (or doable) on a 5 qubit device:

+ + + +

Error correction experiments on the 16 qubit device.

+ + + +

Disclosure: The last one is mine.

+",409,,,,,06-05-2018 13:13,,,,1,,,,CC BY-SA 4.0 +2242,1,,,06-05-2018 14:22,,11,2833,"

I was wondering if there is a way to compose a program with multiple quantum circuits without having the register reinitialized to $0$ for each circuit.

+ +

Specifically, I would like to run a second quantum circuit after running the first one, as in this example:

+ +
from pprint import pprint
+from qiskit import QuantumProgram  # legacy (pre-Terra) qiskit API
+
+qp = QuantumProgram()
+qr = qp.create_quantum_register('qr',2)
+cr = qp.create_classical_register('cr',2)
+
+qc1 = qp.create_circuit('B1',[qr],[cr])
+qc1.x(qr)
+
+qc1.measure(qr[0], cr[0])
+qc1.measure(qr[1], cr[1])
+
+qc2 = qp.create_circuit('B2', [qr], [cr])
+qc2.x(qr)
+qc2.measure(qr[0], cr[0])
+qc2.measure(qr[1], cr[1])
+
+#qp.add_circuit('B1', qc1)
+#qp.add_circuit('B2', qc2)
+
+pprint(qp.get_qasms())
+
+result = qp.execute()
+
+print(result.get_counts('B1'))
+print(result.get_counts('B2'))
+
+ +

Unfortunately, what I get is the same result for the two runs (i.e. a count of 11 for both B1 and B2, instead of 11 for the first and 00 for the second), as if B2 is run on a completely new state initialized to 00 after B1.

+",1644,,16606,,5/25/2022 16:51,5/25/2022 16:51,Composing multiple quantum circuits in single quantum program in Qiskit,,2,3,,,,CC BY-SA 4.0 +2243,2,,2242,06-05-2018 17:49,,0,,"

Once you do a measurement, the wavefunction of the quantum state/register collapses and it loses its quantum nature. It doesn't make sense to apply another circuit to it.

+",2527,,,,,06-05-2018 17:49,,,,3,,,,CC BY-SA 4.0 +2244,1,,,06-05-2018 21:39,,20,881,"

In boson sampling, if we start with 1 photon in each of the first $M$ modes of an interferometer, the probability of detecting 1 photon in each output mode is: $|\textrm{Perm}(A)|^2$, where the columns and rows of $A$ are the first $M$ columns of the interferometer's unitary matrix $U$, and all of its rows.

+ +

This makes it look like for any unitary $U$, we can construct the appropriate interferometer, construct the matrix $A$, and calculate the absolute value of the permanent of $A$ by taking the square root of the probability of detecting one photon in each mode (which we get from the boson sampling experiment). Is this true, or is there some catch? People have told me that you can't actually get information about a permanent from boson sampling.

+ +

Also, what happens to the rest of the columns of $U$: How exactly is it that the experimental outcome only depends on the first $M$ columns of $U$ and all of its rows, but not at all on the other columns of $U$? Those columns of $U$ do not affect the outcome of the experiment in the first $M$ modes at all?

+",2293,,55,,09-10-2020 11:47,09-10-2020 11:47,"Is it possible to ""calculate"" the absolute value of a permanent using Boson Sampling?",,2,0,,,,CC BY-SA 4.0 +2245,2,,2244,06-06-2018 05:18,,9,,"

It appears to be true, up to a point. As I read Scott Aaronson's paper, it says that if you start with 1 photon in each of the first $M$ modes of an interferometer, the probability $P_s$ that $s_i$ photons are output in each mode $i\in\{1,\ldots, N\}$, where $\sum_is_i=M$, is +$$ +P_s=\frac{|\text{Per}(A)|^2}{s_1!s_2!\ldots s_N!}. +$$ +So, indeed, if you take a particular instance where $s_i=0$ or $1$ for every possible output, then, yes, the probability is equal to $|\text{Per}(A)|^2$, where $A$ is the first $M$ columns of $U$ and a specific subset of $M$ rows specified by the locations $s_i=1$. So, this is not quite as specified in the question: it is not all rows, but only some subset, so that $A$ is a square matrix, corresponding to the bits that the experiment ""sees"", i.e. the input columns and output rows. The photons never populate anything else, and so are blind to the other elements of the unitary matrix $U$.

+ +

This should be fairly obvious. Let's say I have some $3\times 3$ matrix $V$. If I start in some basis state $|0\rangle$ and find its product, $V|0\rangle$, then knowing that tells me very little about the outputs $V|1\rangle$ and $V|2\rangle$, aside from what can be said from the knowledge that $V$ is unitary, and hence columns and rows are orthonormal.

+ +
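For a concrete illustration of the formula, here is a numpy sketch (illustrative only; a naive $O(n!)$ permanent is fine at this size) that builds a random $4\times 4$ unitary and computes the collision-free probability for photons entering modes 1,2 and exiting modes 3,4:

```python
import numpy as np
from itertools import permutations

def permanent(A):
    # naive O(n!) permanent -- fine for tiny matrices
    n = A.shape[0]
    return sum(np.prod([A[i, p[i]] for i in range(n)])
               for p in permutations(range(n)))

rng = np.random.default_rng(0)
G = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
U, _ = np.linalg.qr(G)  # a random 4x4 unitary

# photons enter modes 0,1; probability that one exits in each of modes 2,3:
A = U[np.ix_([2, 3], [0, 1])]  # rows = output modes, columns = input modes
p = abs(permanent(A)) ** 2
print(p)
```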

The issue that one must be careful of is the accuracy: you run this once and all you get is a single sample according to the probability distribution $P_s$. You run this a few times, and you start to build up information about the different probabilities. You run this enough times, and you can get an arbitrarily accurate answer, but how many is enough? There are two different ways that you can measure the error in an estimate of a value $p$. You can demand either an additive error $p\pm\epsilon$ or a multiplicative error, $p(1\pm\epsilon)$. Since we expect that a typical probability will be exponentially small in $n+m$, the multiplicative error demands far greater accuracy, which cannot be achieved efficiently via sampling. On the other hand, the additive error approximation can be achieved.

+ +

While a multiplicative error is what people usually want to calculate, the additive error can also be an interesting entity. For example, in the evaluation of the Jones polynomial.

+ +

Aaronson points us back further in time for where this connection between Boson sampling and the Permanent was first made:

+ +
+

It has been known since work by Caianiello in 1953 (if not earlier) that the amplitudes for $n$-boson processes can be written as the permanents of $n\times n$ matrices.

+
+ +

Instead, their main contribution

+ +
+

is to prove a connection between the ability of classical computers to solve the approximate + BosonSampling problem and their ability to approximate the permanent

+
+ +

i.e. to understand the approximation problem associated with, e.g. finite sampling, and to describe the computational complexity consequences associated: that we believe such a thing is hard to evaluate classically.

+",1837,,1837,,06-07-2018 06:54,06-07-2018 06:54,,,,6,,,,CC BY-SA 4.0 +2246,2,,2244,06-06-2018 10:56,,7,,"

You cannot efficiently recover the absolute values of the amplitudes, but if you allow for arbitrarily many samples, then you can estimate them to whatever degree of accuracy you like.

+ +

More specifically, if the input state is a single photon in each of the first $n$ modes, and one is willing to draw an arbitrary number of samples from the output, then it is in principle possible to estimate the absolute value of the permanent of $A$ to whatever degree of accuracy one likes, by counting the fraction of the times the $n$ input photons come out in the first $n$ different output ports. +It is to be noted though that this does not really have much to do with BosonSampling, as the hardness result holds in the regime of the number of modes much larger than the number of photons, and it's about the efficiency of the sampling.

+ +

BosonSampling

+ +

I'll try a very brief introduction to what boson sampling is, but it should be noted that I cannot possibly do a better job at this than Aaronson himself, so it's probably a good idea to have a look at the related blog posts of his (e.g. blog/?p=473 and blog/?p=1177), and links therein.

+ +

BosonSampling is a sampling problem. This can be a little bit confusing in that people are generally more used to think of problems having definite answers. +A sampling problem is different in that the solution to the problem is a set of samples drawn from some probability distribution.

+ +

Indeed, the problem a boson sampler solves is that of sampling from a specific probability distribution. More specifically, sampling from the probability distribution of the possible outcome (many-boson) states.

+ +

Consider as a simple example a case with 2 photons in 4 modes, and let's say we fix the input state to be $(1,1,0,0)\equiv|1,1,0,0\rangle$ (that is, a single photon in each of the first two input modes). +Ignoring the output states with more than one photon in a mode, there are $\binom{4}{2}=6$ possible output two-photon states: +$(1,1,0,0), (1,0,1,0), (1,0,0,1), (0,1,1,0), (0,1,0,1)$ and $(0,0,1,1)$. +Let us denote for convenience with $o_i,\ i=1,\ldots,6$ the $i$-th one (so, for example, $o_2=(1,0,1,0)$). +Then, a possible solution to BosonSampling could be the series of outcomes: +$$o_1, o_4, o_2, o_2, o_5.$$

+ +

To make an analogy to a maybe more familiar case, it's like saying that we want to sample from a Gaussian probability distribution. +This means that we want to find a sequence of numbers which, if we draw enough of them and put them into a histogram, will produce something close to a Gaussian.

+ +

Computing permanents

+ +

It turns out that the probability amplitude of a given input state $|\boldsymbol r\rangle$ to a given output state $|\boldsymbol s\rangle$ is (proportional to) the permanent of a suitable matrix built out of the unitary matrix characterizing the (single-boson) evolution.

+ +

More specifically, if $\boldsymbol R$ denotes the mode assignment list${}^{(1)}$ associated to $|\boldsymbol r\rangle$, $\boldsymbol S$ that of $|\boldsymbol s\rangle$, and $U$ is the unitary matrix describing the evolution, then the probability amplitude $\mathcal A(\boldsymbol r\to\boldsymbol s)$ of going from $|\boldsymbol r\rangle$ to $|\boldsymbol s\rangle$ is given by +$$\mathcal A(\boldsymbol r\to\boldsymbol s) = +\frac{1}{\sqrt{\boldsymbol r!\boldsymbol s!}} \operatorname{perm} U[\boldsymbol R|\boldsymbol S], +$$ +with $U[\boldsymbol R|\boldsymbol S]$ denoting the matrix built by taking from $U$ the rows specified by $\boldsymbol R$ and the columns specified by $\boldsymbol S$.

+ +

Thus, considering the fixed input state $|\boldsymbol r_0\rangle$, the probability distribution of the possible outcomes is given by the probabilities +$$p_{\boldsymbol s} = \frac{1}{\boldsymbol r_0! \boldsymbol s!} \lvert +\operatorname{perm}U[\boldsymbol R|\boldsymbol S] +\rvert^2.$$

+ +
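As an illustration (an assumption-laden sketch: collision-free input $\boldsymbol r_0=(1,1,0,0)$, so $\boldsymbol r_0!=1$, and a naive permanent, which is of course only feasible classically at toy sizes), one can tabulate this whole distribution for 2 photons in 4 modes and check that, once collision outcomes are included, it sums to 1:

```python
import numpy as np
from itertools import permutations, combinations_with_replacement
from math import factorial
from collections import Counter

def permanent(A):
    # naive O(n!) permanent -- only usable for tiny matrices
    n = A.shape[0]
    return sum(np.prod([A[i, p[i]] for i in range(n)])
               for p in permutations(range(n)))

rng = np.random.default_rng(1)
U, _ = np.linalg.qr(rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4)))

R = (0, 1)  # mode assignment list of the input: a photon in each of modes 0,1
probs = {}
for S in combinations_with_replacement(range(4), 2):  # all 2-photon outputs
    s_fact = np.prod([factorial(c) for c in Counter(S).values()])  # s!
    probs[S] = abs(permanent(U[np.ix_(R, S)])) ** 2 / s_fact
print(sum(probs.values()))
```

The printed total is 1 up to floating-point error, reflecting the unitarity of the induced two-photon evolution $\varphi_2(U)$.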

BosonSampling is the problem of drawing ""points"" according to this distribution.

+ +

This is not the same as computing the probabilities $p_s$, or even computing the permanents themselves. +Indeed, computing the permanents of complex matrices is hard, and it is not expected even for quantum computers to be able to do it efficiently.

+ +

The gist of the matter is that sampling from a probability distribution is in general easier than computing the distribution itself. +While a naive way to sample from a distribution is to compute the probabilities (if not already known) and use those to draw the points, there might be smarter ways to do it. +A boson sampler is something that is able to draw points according to a specific probability distribution, even though the probabilities making up the distribution itself are not known (or better said, not efficiently computable).

+ +

Furthermore, while it may look like the ability to efficiently sample from a distribution should translate into the ability of efficiently estimating the underlying probabilities, this is not the case as soon as there are exponentially many possible outcomes. +This is indeed the case of boson sampling with uniformly random unitaries (that is, the original setting of BosonSampling), in which there are $\binom{m}{n}$ possible $n$-boson in $m$-modes output states (again, neglecting states with more than one boson in some mode). For $m\gg n$, this number increases exponentially with $n$. +This means that, in practice, you would need to draw an exponential number of samples to even have a decent chance of seeing a single outcome more than once, let alone estimate with any decent accuracy the probabilities themselves (it is important to note that this is not the core reason for the hardness though, as the exponential number of possible outcomes could be overcome with smarter methods).

+ +

In some particular cases, it is possible to efficiently estimate the permanent of matrices using a boson sampling set-up. This will only be feasible if one of the submatrices has a large (i.e. not exponentially small) permanent associated with it, so that the input-output pair associated with it will happen frequently enough for an estimate to be feasible in polynomial time. This is a very atypical situation, and will not arise if you draw unitaries at random. For a trivial example, consider matrices that are very close to identity - the event in which all photons come out in the same modes they came in will correspond to a permanent which can be estimated experimentally. Besides only being feasible for some particular matrices, a careful analysis of the statistical error incurred in evaluating permanents in this way shows that this is not more efficient than known classical algorithms for approximating permanents (technically, within a small additive error) ${}^{(2)}$.

+ +

Columns involved

+ +

Let $U$ be the unitary describing the one-boson evolution. +Then, basically by definition, the output amplitudes describing the evolution of a single photon entering in the $k$-th mode are in the $k$-th column of $U$.

+ +

The unitary describing the evolution of the many-boson states, however, is not actually $U$, but a bigger unitary, often denoted by $\varphi_n(U)$, whose elements are computed from permanents of matrices built out of $U$.

+ +

Informally speaking though, if the input state has photons in, say, the first $n$ modes, then naturally only the first $n$ columns of $U$ must be necessary (and sufficient) to describe the evolution, as the other columns will describe the evolution of photons entering in modes that we are not actually using.

+ +
+ +

(1) This is just another way to describe a many-boson state. Instead of characterizing the state as the list of occupation numbers for each mode (that is, number of bosons in first mode, number in second, etc.), we characterize the states by naming the mode occupied by each boson. +So, for example, the state $(1, 0, 1, 0)$ can be equivalently written as $(1, 3)$, and these are two equivalent ways to say that there is one boson in the first and one boson in the third mode.

+ +

(2): S. Aaronson and T. Hance. ""Generalizing and Derandomizing Gurvits's Approximation Algorithm for the Permanent"". https://eccc.weizmann.ac.il/report/2012/170/

+",55,,55,,8/16/2018 9:03,8/16/2018 9:03,,,,8,,,,CC BY-SA 4.0 +2247,2,,2004,06-06-2018 12:50,,4,,"

Grover's algorithm is used extensively in quantum cryptography as well. It can be used to solve problems such as the Transcendental Logarithm Problem, Polynomial Root Finding Problem etc.

+",2556,,,,,06-06-2018 12:50,,,,2,,,,CC BY-SA 4.0 +2248,1,,,06-06-2018 13:08,,10,1118,"

D-Wave makes use of a $(n,k=4)$-Chimera structured graph in their computers. Meaning a $n\times n$ grid of unit cells, with each unit cell consisting of a complete bipartite graph on $2k=8$ nodes ($4$ for each side), also called $K_{4,4}$.

+ +

Why did D-Wave choose $k=4$? An argument given is that this non-planar structure allows for an embedding of many interesting problems. However, $K_{3,3}$ is also a non-planar graph. So why not choose $k=3$? Additionally, increasing $k$ seems to me one of the easiest ways to increase the number of qubits available to your problem. So why not use $k=5,6,\dots$?

+",2005,,2293,,06-06-2018 23:41,10/21/2018 3:51,Why did D-Wave choose the Chimera graph the way they did?,,2,0,,,,CC BY-SA 4.0 +2250,2,,2248,06-06-2018 17:43,,4,,"

You are right that $K_{3,3}$ is non-planar, but as you said yourself, a larger $k$ is much better. If they could do $K_{1000,1000}$ that would be nice, because each qubit could be coupled to 1002 qubits (1000 within the $K_{1000,1000}$ and two to the neighboring cells). Instead D-Wave is limited to problems which can be embedded such that each qubit couples to at most 6 other qubits.

+ +

The reason they don't have larger $k$ is for physical reasons. It is harder to couple a qubit to 1002 qubits than it is to couple it to 6 qubits. It is also harder to couple a qubit to 6 qubits vs 5 qubits, but they found that it was easy enough to go to $k=4$, so they were not limited to $K_{3,3}$.
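
To make the "couples to at most 6 other qubits" point concrete, here is a short Python sketch (my own labeling convention, not D-Wave's) that builds the edge list of an $(n,k)$ Chimera graph and checks that the maximum qubit degree is $k+2=6$ for $k=4$:

```python
from collections import Counter

def chimera_edges(n, k=4):
    # Qubits labeled (row, col, side, index); side-0 qubits couple to the
    # cells above/below, side-1 qubits to the cells left/right.
    edges = []
    for r in range(n):
        for c in range(n):
            for a in range(k):
                for b in range(k):
                    edges.append(((r, c, 0, a), (r, c, 1, b)))  # K_{k,k} in-cell
            for a in range(k):
                if r + 1 < n:  # vertical inter-cell coupler
                    edges.append(((r, c, 0, a), (r + 1, c, 0, a)))
                if c + 1 < n:  # horizontal inter-cell coupler
                    edges.append(((r, c, 1, a), (r, c + 1, 1, a)))
    return edges

deg = Counter()
for u, v in chimera_edges(3, k=4):
    deg[u] += 1
    deg[v] += 1

assert max(deg.values()) == 6        # interior qubit: 4 in-cell + 2 inter-cell
assert len(deg) == 2 * 4 * 3 * 3     # 2k qubits per cell, n*n cells
```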

+",2293,,2293,,06-06-2018 23:42,06-06-2018 23:42,,,,5,,,,CC BY-SA 4.0 +2251,1,2253,,06-06-2018 19:10,,5,3046,"

How would you check if 2 qubits are orthogonal with respect to each other?

+ +

I need to know this to solve this problem:

+ +
+

You are given $2$ quantum bits:$$ +\begin{align} +|u_1\rangle &= \cos\left(\frac{x}{2}\right) |0\rangle + \sin\left(\frac{x}{2}\right)e^{in} |1\rangle \tag{1} \\[2.5px] +|u_2\rangle &= \cos\left(\frac{y}{2}\right) |0\rangle + \sin\left(\frac{y}{2}\right)e^{im} |1\rangle \tag{2} +\end{align} +$$where $m-n = \pi$ and $x+y=\pi$.

+
+",2559,,26,,12/23/2018 12:29,12/23/2018 12:29,How to check if 2 quantum bits are orthogonal?,,4,4,,,,CC BY-SA 4.0 +2252,2,,2251,06-06-2018 19:32,,3,,"

In order to check if two qubits are orthogonal, you should check that the inner product between them equals zero. This can be written like +$\langle u_1|u_2\rangle=0$.

+ +

Knowing that $\langle 0|0\rangle=1$, $\langle 0|1\rangle=0$, $\langle 1|0\rangle=0$ and $\langle 1|1\rangle=1$ it should be easy to solve the problem by yourself.

+",2371,,,,,06-06-2018 19:32,,,,2,,,,CC BY-SA 4.0 +2253,2,,2251,06-07-2018 06:14,,4,,"

Keep in mind that $|0\rangle$ and $|1\rangle$ are orthonormal basis vectors of a two-dimensional complex vector space (over the field of complex numbers). To check whether $|u_1\rangle$ and $|u_2\rangle$ are orthogonal you'll have to check whether the standard inner product $\langle u_1|u_2\rangle$ is $0$. Here $\langle u_1|$ refers to the bra vector corresponding to the ket vector $|u_1\rangle$. In matrix notation that would simply mean that $\langle u_1|$ is the complex conjugate transpose a.k.a Hermitian conjugate of $|u_1\rangle$.

+ +

In your case:

+ +

$|u_1\rangle = \cos(\frac{x}{2}) \begin{bmatrix} 1 \\ 0 \end{bmatrix} + (\sin(\frac{x}{2}))e^{in} \begin{bmatrix} 0 \\ 1 \end{bmatrix}$

+ +

and

+ +

$|u_2\rangle = \cos(\frac{y}{2}) \begin{bmatrix} 1 \\ 0 \end{bmatrix} + (\sin(\frac{y}{2}))e^{im} \begin{bmatrix} 0 \\ 1 \end{bmatrix}$

+ +

Now, you'll find $\langle u_1|$ to be:

+ +

$\cos(\frac{x}{2}) \begin{bmatrix} 1 & 0 \end{bmatrix} + (\sin(\frac{x}{2}))e^{-in} \begin{bmatrix} 0 & 1 \end{bmatrix}$

+ +

Now, simply carry out the multiplication of the matrices $\langle u_1|$ and $ |u_2\rangle$. If it turns out to be $0$, they're orthogonal. Or else they're not orthogonal.
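
If you want to verify the arithmetic numerically, here is a small NumPy sketch (function name mine) that evaluates $\langle u_1|u_2\rangle$ and confirms it vanishes for every $x$ once the constraints $m-n=\pi$ and $x+y=\pi$ from the question are imposed:

```python
import numpy as np

def inner_product(x, n, y, m):
    # <u1|u2> for the two parametrized single-qubit states in the question
    u1 = np.array([np.cos(x / 2), np.exp(1j * n) * np.sin(x / 2)])
    u2 = np.array([np.cos(y / 2), np.exp(1j * m) * np.sin(y / 2)])
    return np.vdot(u1, u2)  # vdot conjugates its first argument

# With m - n = pi and x + y = pi, the overlap vanishes for every x:
for x in np.linspace(0, np.pi, 7):
    assert abs(inner_product(x, 0.3, np.pi - x, 0.3 + np.pi)) < 1e-12
```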

+",26,,,,,06-07-2018 06:14,,,,0,,,,CC BY-SA 4.0 +2254,2,,2251,06-07-2018 07:08,,4,,"

Instead of doing matrix calculation you can also map those qubits to the Bloch sphere. A Bloch sphere is a unit 2-sphere, with antipodal points corresponding to a pair of mutually orthogonal state vectors.

+ +

So if you can show that the 2 points are antipodal on a Bloch sphere, then you have proven that they are orthogonal.

+ +

The nice thing is that your qubits are already expressed in a way that can easily be mapped on the Bloch sphere ($x$, $y$, $n$ and $m$ correspond to the angles on the Bloch sphere).

+ +

So given the angles it is easy to determine whether the two points are antipodal: the azimuthal angles must differ by $\pi$ (up to an integer multiple of $2\pi$), and the polar angles must sum to $\pi$, i.e. $\theta_1 + \theta_2 = \pi$ (the poles $\theta = 0$ and $\theta = \pi$ are a special case, since there the azimuthal angle is irrelevant).

+ +

In the question it is already stated that $m - n = \pi$, which is exactly the azimuthal condition, and that $x + y = \pi$, which is exactly the polar condition. Both conditions therefore hold for every allowed value of $x$ and $y$, so the two points are always antipodal and the two qubits are always orthogonal. For example, $y=0$ and $x=\pi$ gives the orthogonal pair $|0\rangle$ and $e^{in}|1\rangle$, and $x=y=\pi/2$ gives two equatorial states whose phases differ by $\pi$, which are again orthogonal.
+",2529,,2529,,06-07-2018 08:10,06-07-2018 08:10,,,,0,,,,CC BY-SA 4.0 +2255,1,2256,,06-07-2018 11:22,,19,1889,"

I'm currently reading Nielsen and Chuang's Quantum Computation and Quantum Information and I'm not sure if I correctly understand this exercise (on page 57) :

+
+

Exercise 1.2: Explain how a device which, upon input of one of two non-orthogonal quantum states $\left|\psi\right>$ or $\left|\phi\right>$ correctly identified the state, could be used to build a device which cloned the states $\left|\psi\right>$ and $\left|\phi\right>$, in violation of the no-cloning theorem. Conversely, explain how a device for cloning could be used to distinguish non-orthogonal quantum states.

+
+

The first part seems fairly straightforward to me : once the state has been identified as $|\psi\rangle$ or $|\phi\rangle$, just prepare an identical state through whatever means we have available, effectively cloning the original state.

+

For the converse, I've not been able to achieve better than this :

+
    +
  1. Clone the state to be identified $n$ times

    +
  2. +
  3. Perform a measurement on each of the copies in the basis $(|\psi\rangle, |\psi'\rangle)$, where $|\psi'\rangle$ is a state orthogonal to $|\psi\rangle$

    +
  4. +
  5. If one of the measurements yields $|\psi'\rangle$, then we know for certain that the original state is $|\phi\rangle$

    +
  6. +
  7. If all of the measurements yield $|\psi\rangle$, we can claim that the original state is $|\psi\rangle$ with a probability of error equal to : $|\langle\psi|\phi\rangle|^{2n}$, which can be made arbitrarily small by increasing $n$

    +
  8. +
+

However, the way the exercise is worded makes me think that there must be some deterministic way of distinguishing between $|\psi\rangle$ and $|\phi\rangle$ given a cloning machine. Is this indeed the case?

+",2563,,2563,,02-11-2021 18:03,02-11-2021 18:03,No-cloning theorem and distinguishing between two non-orthogonal quantum states,,1,0,,,,CC BY-SA 4.0 +2256,2,,2255,06-07-2018 11:42,,8,,"

That's the way that I would initially go about answering the question. There are, however, a few tweaks you could make.

+ +

Definitive Answer

+ +

As you point out, the annoying feature is that you can never be definitive about having the state $|\psi\rangle$.

+ +

There are a couple of ways that you might avoid that pitfall.

+ +

The first option is to have two different measurement bases that you pick between. The first is as you specified. The second is the complementary view where you use $(|\phi\rangle,|\phi'\rangle)$.

+ +

The second option is to introduce a POVM. (I don't have my copy of Nielsen and Chuang to hand, and don't remember if they've been introduced at this point.) POVMs can have more than 2 measurement operators, and are often quite good at saying ""the state was definitely not $|x\rangle$"", so you could make one operator that says ""the state definitely was not $|\psi\rangle$"", another that says ""definitely not $|\phi\rangle$"" and a third just for the sake of completeness.

+ +

Both options are variants on a theme, and technically you might still have to run both forever before you get a definitive answer, but the expected number of trials is finite.

+ +

Better Outcome Probabilities

+ +

You can actually pick a better measurement basis than the one you described so that you can come to a conclusion faster (but certainly does not give you a definitive answer). Try to think about the two states $|\psi\rangle$ and $|\phi\rangle$ on the Bloch sphere. You can always find a plane passing through both points and the centre of the sphere. On this plane, there is a circle, with two points corresponding to the two states. Draw lines joining these points to the centre. Next, construct a diameter of the circle which makes equal angles with the two lines you've just drawn. This would define the measurement basis that tells you absolutely nothing about which of the two states you have. But, if you construct the diameter that is perpendicular to that line, this is the one that, at least in a single shot, has the maximum probability of distinguishing between the two states.

+ +

Here's a picture. $|\Psi\rangle$ is one of the two basis states that you want for the measurement, and remember that the angle $\theta$ can be related to $|\langle\psi|\phi\rangle|$. +

+ +

It might be worth then calculating if you have $k$ copies of each state, whether a joint measurement constructed in this way can perform better than the individual measurement repeated $k$ times.

+",1837,,1837,,06-07-2018 11:53,06-07-2018 11:53,,,,0,,,,CC BY-SA 4.0 +2257,2,,2251,06-07-2018 13:51,,7,,"

The answers given so far assume that classical descriptions of the two qubits' states are given (e.g. x and y are provided in the problem's statement). Interestingly, even if the two states are unknown to you, the overlap of the two qubits' states can be evaluated with a simple quantum circuit, provided you have access to pairs of qubits in these states. The circuit is known as a SWAP test, and can be found in Fig. 1 of this reference 1, where apparently it was first proposed.
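
For illustration (this is my own sketch, not from the reference): the SWAP test can be simulated directly on statevectors. The probability of measuring the ancilla in $|0\rangle$ is $(1+|\langle\psi|\phi\rangle|^2)/2$, from which the overlap of the two unknown states can be estimated.

```python
import numpy as np

def swap_test_p0(psi, phi):
    # Statevector simulation of the SWAP test on two single-qubit states:
    # ancilla H -> controlled-SWAP -> ancilla H -> measure ancilla.
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    SWAP = np.array([[1, 0, 0, 0],
                     [0, 0, 1, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1]])
    # Ancilla is the most significant qubit; SWAP fires when it is |1>.
    CSWAP = np.block([[np.eye(4), np.zeros((4, 4))],
                      [np.zeros((4, 4)), SWAP]])
    H_anc = np.kron(H, np.eye(4))
    state = np.kron([1, 0], np.kron(psi, phi))   # |0> (x) |psi> (x) |phi>
    state = H_anc @ CSWAP @ H_anc @ state
    return np.linalg.norm(state[:4]) ** 2        # P(ancilla = 0)

psi = np.array([1.0, 0.0])
phi = np.array([np.cos(0.3), np.sin(0.3)])
overlap_sq = abs(np.vdot(psi, phi)) ** 2
assert np.isclose(swap_test_p0(psi, phi), (1 + overlap_sq) / 2)
```

Note that identical states give $P(0)=1$ and orthogonal states give $P(0)=1/2$, matching the formula.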

+ +

1 Buhrman et al., ""Quantum fingerprinting"". https://arxiv.org/abs/quant-ph/0102001

+",2558,,,,,06-07-2018 13:51,,,,0,,,,CC BY-SA 4.0 +2258,1,2259,,06-07-2018 15:03,,17,7919,"

I would like to simulate the 4 ""Bell States"" on the IBM composer.

+ +

How can I best implement those 4 Bell states using the existing set of gates?

+ +

Here below you see the definition of the 4 Bell states.

+ +

+ +

The first bell state can be easily implemented on the composer through a Hadamard gate and a CNOT gate as follows:

+ +

+ +

but how can I implement the other 3 bell states ?

+",2529,,26,,12/13/2018 19:52,7/25/2021 0:14,How to implement the 4 Bell states on the IBM Q (composer)?,,2,0,,,,CC BY-SA 4.0 +2259,2,,2258,06-07-2018 15:32,,10,,"

Remember that the Hadamard (H) gate maps $|0\rangle$ to $\frac{|0\rangle+|1\rangle}{\sqrt{2}}$ and $|1\rangle$ to $\frac{|0\rangle-|1\rangle}{\sqrt{2}}$, while the CNOT gate has the following conversion table:

+ +

+ +

So, you can use the same circuit:

+ +

+ +

but begin in the states $|01\rangle$, $|10\rangle$ and $|11\rangle$ to get the other three Bell states (you can easily convert to these states from $|00\rangle$ using the Pauli-X gate).

+ +

+ +

Source: Todd Brun's lecture notes, Part-11

+",26,,26,,06-07-2018 16:06,06-07-2018 16:06,,,,0,,,,CC BY-SA 4.0 +2260,2,,2258,06-07-2018 15:34,,11,,"

One of the possible solutions:

+

$|\Phi^+\rangle = \textrm{CNOT} \cdot H_1 |00 \rangle$
+$|\Phi^-\rangle = Z_1 |\Phi^+\rangle = Z_1 \cdot \textrm{CNOT} \cdot H_1 |00 \rangle$
+$|\Psi^+\rangle = X_2 |\Phi^+\rangle = X_2 \cdot \textrm{CNOT} \cdot H_1 |00 \rangle$
+$|\Psi^-\rangle = Z_1 |\Psi^+\rangle = Z_1 \cdot X_2 \cdot \textrm{CNOT} \cdot H_1 |00 \rangle$

+

Where $Z_i$ and $X_i$ act on the $i^\textrm{th}$ qubit.

+
+

So here is $|\Phi^-\rangle$:

+

+

Here is $|\Psi^+\rangle$:

+

+

And here is $|\Psi^-\rangle$:

+

+
+

Another solution is:

+

$|\Phi^+\rangle = \textrm{CNOT} \cdot H_1 |00 \rangle$
+$|\Phi^-\rangle = \textrm{CNOT} \cdot H_1 \cdot X_1 |00 \rangle$
+$|\Psi^+\rangle = X_2 \cdot \textrm{CNOT} \cdot H_1 |00 \rangle$
+$|\Psi^-\rangle = Z_1 X_2\cdot \textrm{CNOT} \cdot H_1 |00 \rangle$
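
Here is a quick NumPy check of the first set of identities (taking qubit 1 as the most significant, i.e. the control, qubit):

```python
import numpy as np

I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

ket00 = np.array([1, 0, 0, 0])
H1 = np.kron(H, I)   # H on qubit 1 (the control)
X2 = np.kron(I, X)   # X on qubit 2
Z1 = np.kron(Z, I)   # Z on qubit 1

s = 1 / np.sqrt(2)
phi_plus  = s * np.array([1, 0, 0, 1])
phi_minus = s * np.array([1, 0, 0, -1])
psi_plus  = s * np.array([0, 1, 1, 0])
psi_minus = s * np.array([0, 1, -1, 0])

assert np.allclose(CNOT @ H1 @ ket00, phi_plus)
assert np.allclose(Z1 @ CNOT @ H1 @ ket00, phi_minus)
assert np.allclose(X2 @ CNOT @ H1 @ ket00, psi_plus)
assert np.allclose(Z1 @ X2 @ CNOT @ H1 @ ket00, psi_minus)
```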

+",2293,,2293,,5/28/2021 19:03,5/28/2021 19:03,,,,3,,,,CC BY-SA 4.0 +2261,1,2262,,06-07-2018 18:45,,12,1549,"

I have just started to learn about quantum computing, and I know a little bit about qubits. What is a resource where I can learn a basic quantum algorithm and the concepts behind how it works?

+",2566,,2293,,06-12-2018 15:34,1/20/2022 11:24,Resources for quantum algorithm basics,,6,0,,,,CC BY-SA 4.0 +2262,2,,2261,06-07-2018 19:13,,7,,"

Most textbooks and lecture courses start with solving the Deutsch problem using quantum computing.

+

Parts 1 to 4 of John Watrous's lecture notes will describe the concepts, starting from basics. By the end of lecture 4, you will have learned how a quantum computer can solve the Deutsch problem with fewer operations than a classical computer would need.

+

All 22 lecture notes can be found here.

+",2293,,14495,,1/15/2022 18:38,1/15/2022 18:38,,,,0,,,,CC BY-SA 4.0 +2263,1,,,06-07-2018 23:14,,42,23485,"

The Bell state $|\Phi^{+}\rangle = \frac{1}{\sqrt{2}}(|00\rangle + |11\rangle )$ is an entangled state. But why is that the case? How do I mathematically prove that?

+",,user72,,,,11-10-2022 01:08,How do I show that a two-qubit state is an entangled state?,,4,0,,,,CC BY-SA 4.0 +2265,2,,2248,06-08-2018 01:07,,3,,"

user1271772's answer is entirely correct. I was going to comment with additional information to help answer nippon's question, but I just created this account and apparently there's a reputation requirement before adding comments.

+ +

D-Wave's superconducting flux qubits are niobium metal loops that form a ""hash symbol"" made of two flat layers that have been stretched out and laid parallel. One layer is 90-degrees rotated from the other. When you move charge (current) in a loop it produces a magnetic field perpendicular to the plane of the loop. When you move a magnetic field through a charge-carrying loop it induces motion in the charge (current). But the amount of induction is partly determined by the size of the overlapping area (not linearly, since perfect overlap doesn't mean perfect induction, and non-overlapping adjacent wires still do it) so you can't currently usefully overlap 1000x1000 because the influence on each neighbor would be small. Stacking more layers is hard for the same reason wireless charging only just started to not suck.

+ +

The D-Wave uses Niobium loops interspersed with these amazing little quantum-permeable membrane slices called Josephson Junctions (that won their discoverer a Nobel before he went a little wacky) cooled to just above 0 kelvin, so they can hold a charge with zero resistance. Basic quantum computing hardware generally has to be robust to decoherence, which means it can't interact much with the outside environment (should be its own Hamiltonian). There's already a ton of control hardware and stuff that has to go into keeping it all stable. Every time they move the machine they have to recalibrate it (at least with the DW2) and a new random arrangement of like 90% of the qubits will work until it's calibrated again. So it's actually a harder problem than just fitting to a chimera graph. Needs to be a readily radiation-hardenable system of some kind, e.g. a neural network.

+",2567,,2567,,06-09-2018 17:18,06-09-2018 17:18,,,,3,,,,CC BY-SA 4.0 +2266,2,,2263,06-08-2018 05:18,,30,,"

A two qudit pure state is separable if and only if it can be written in the form $$|\Psi\rangle=|\psi\rangle|\phi\rangle$$ for arbitrary single qudit states $|\psi\rangle$ and $|\phi\rangle$. Otherwise, it is entangled.

+ +

To determine if the pure state is entangled, one could try a brute force method of attempting to find satisfying states $|\psi\rangle$ and $|\phi\rangle$, as in this answer. This is inelegant, and hard work in the general case. A more straightforward way to prove whether this pure state is entangled is to calculate the reduced density matrix $\rho$ for one of the qudits, i.e. by tracing out the other. The state is separable if and only if $\rho$ has rank 1. Otherwise it is entangled. Mathematically, you can test the rank condition simply by evaluating $\text{Tr}(\rho^2)$. The original state is separable if and only if this value is 1. Otherwise the state is entangled.

+ +

For example, imagine one has a pure separable state $|\Psi\rangle=|\psi\rangle|\phi\rangle$. The reduced density matrix on $A$ is +$$ +\rho_A=\text{Tr}_B(|\Psi\rangle\langle\Psi|)=|\psi\rangle\langle\psi|, +$$ +and +$$ +\text{Tr}(\rho_A^2)=\text{Tr}(|\psi\rangle\langle\psi|\cdot |\psi\rangle\langle\psi|)=\text{Tr}(|\psi\rangle\langle\psi|)=1. +$$ +Thus, we have a separable state.

+ +

Meanwhile, if we take $|\Psi\rangle=\frac{1}{\sqrt{2}}(|00\rangle+|11\rangle)$, then +$$ +\rho_A=\text{Tr}_B(|\Psi\rangle\langle\Psi|)=\frac12\left(|0\rangle\langle 0|+|1\rangle\langle 1|\right)=\frac12\mathbb{I} +$$ +and +$$ +\text{Tr}(\rho_A^2)=\frac14\text{Tr}(\mathbb{I}\cdot\mathbb{I})=\frac12 +$$ +Since this value is not 1, we have an entangled state.
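
Both examples can be checked numerically. For a two-qubit pure state $|\Psi\rangle=\sum_{ij}M_{ij}|i\rangle_A|j\rangle_B$, tracing out qubit $B$ amounts to $\rho_A = MM^\dagger$, with $M$ the $2\times2$ matrix of amplitudes (a minimal NumPy sketch; the function name is mine):

```python
import numpy as np

def purity_of_reduced_state(psi):
    # psi: length-4 vector of a two-qubit pure state; reshape into the 2x2
    # amplitude matrix M, so that rho_A = M M^dagger is the partial trace over B.
    M = np.asarray(psi, dtype=complex).reshape(2, 2)
    rho_A = M @ M.conj().T
    return np.trace(rho_A @ rho_A).real  # Tr(rho_A^2)

s = 1 / np.sqrt(2)
product_state = np.kron([1, 0], [s, s])   # |0> (x) |+>
bell_state = s * np.array([1, 0, 0, 1])   # (|00> + |11>)/sqrt(2)

assert abs(purity_of_reduced_state(product_state) - 1.0) < 1e-12  # separable
assert abs(purity_of_reduced_state(bell_state) - 0.5) < 1e-12     # entangled
```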

+ +

If you wish to know about detecting entanglement in mixed states (not pure states), this is less straightforward, but for two qubits there is a necessary and sufficient condition for separability: positivity under the partial transpose operation.

+",1837,,1867,,12/23/2018 14:54,12/23/2018 14:54,,,,3,,,,CC BY-SA 4.0 +2267,1,2274,,06-08-2018 09:03,,6,391,"

I am looking for an implementation using the quantum gates provided by the IBM composer of the following quantum function:

+ +
    +
  • input $2n$ qubits
  • +
  • output $2n$ qubits where in $50\%$ of the cases the states of the $2$ sets of $n$ qubits are swapped and in the other $50\%$ of the cases the states of the $2n$ qubits remain unchanged. By swapped I mean that qubit $q[i]$ gets the state of qubit $q[n+i]$ and qubit $q[n+i]$ gets the state of $q[i]$. Note also that either all the qubits are swapped or none of them are.
  • +
+ +

E.g. $n=3$: If input $|000111\rangle$ then output in 50% of the cases is $|000111\rangle$ and in the other 50% of the cases is $|111000\rangle$

+ +

We have already a solution when $n=1$ in the following StackOverflow question:

+ + + +

but how can we do that when $n>1$?

+",2529,,26,,12/13/2018 19:52,12/13/2018 19:52,How to implement a Square root of Swap gate that swaps 2n-qubits on the IBM Q (composer)?,,1,1,,,,CC BY-SA 4.0 +2268,1,2269,,06-08-2018 10:15,,6,943,"

Let's say two qubits are both in the $|+\rangle$ state. We need to find $a_1$, $a_2$, $a_3$, and $a_4$ in $|\phi\rangle = a_1|00\rangle + a_2|01\rangle + a_3|10\rangle + a_4|11\rangle$. How do we find these amplitudes? And how do we do it in the general case, when each of the qubits is not necessarily in the $|+\rangle$ state, but in some arbitrary state $|?\rangle$?

+",2559,,26,,12/23/2018 12:30,12/23/2018 12:30,How to calculate the state given by two qubits?,,1,0,,,,CC BY-SA 4.0 +2269,2,,2268,06-08-2018 10:21,,3,,"

Start by writing out what the $|+\rangle$ state actually is: +$$ +|+\rangle=\frac{1}{\sqrt{2}}(|0\rangle+|1\rangle) +$$ +So, two qubits in the state $|+\rangle$ are in the state +$$ +|+\rangle|+\rangle=\frac{1}{\sqrt{2}}(|0\rangle+|1\rangle)\otimes \frac{1}{\sqrt{2}}(|0\rangle+|1\rangle) +$$ +You then need to expand this out, making use of the distributivity of the tensor product, and match up with the specified form of $|\phi\rangle$. Since this sounds a bit like a homework problem, I'm not going to do that for you explicitly. (If you've tried something and got stuck, show us what you tried!)

+ +

The general case is absolutely equivalent, you just replace $|+\rangle|+\rangle$ with something like +$$ +(b_0|0\rangle+b_1|1\rangle)\otimes (b_2|0\rangle+b_3|1\rangle) +$$

+",1837,,,,,06-08-2018 10:21,,,,3,,,,CC BY-SA 4.0 +2270,1,2271,,06-08-2018 12:29,,15,9687,"

Let's say we have a circuit with $2$ Hadamard gates:

+ +

+ +

Let's take the $|00\rangle$ state as input. The vector representation of $|00\rangle$ state is $[1 \ 0 \ 0 \ 0]$, but this is the representation of $2$ qubits and H accepts just $1$ qubit, so should we apply the first H gate to $[1 \ 0]$ and the second H gate to $[0 \ 0]$? Or should we input $[1 \ 0]$ in each H gate, because we are applying H gates to just one qubit of state $|0\rangle$ each time?

+",2559,,26,,12/23/2018 12:30,12/23/2018 12:30,How to input 2 qubits in 2 Hadamard gates?,,1,0,,,,CC BY-SA 4.0 +2271,2,,2270,06-08-2018 12:40,,19,,"
+

Or should we input $[1 \ 0]$ in each H gate, because we are applying H + gates to just qubit of state $|0\rangle$ each time?

+
+ +

Yes, when you have a two-qubit state (say you label the two qubits as $A$ and $B$ respectively), you need to apply the two Hadamard gates separately on each qubit's state. The final state will be the tensor product of the two ""transformed"" single-qubit states.

+ +

If your input is $|0\rangle_A\otimes|0\rangle_B$, the output will simply be $$\left(\frac{|0\rangle+|1\rangle}{\sqrt{2}}\right)_A\otimes\left(\frac{|0\rangle+|1\rangle}{\sqrt{2}}\right)_B$$

+ +
+ +

Alternative:

+ +

If the two input qubits are entangled, the above method won't work since you won't be able to represent the input state as a tensor product of the states of the two qubits. So, I'm outlining a more general method here.

+ +

When two gates are in parallel, like in your case, you can consider the tensor product of the two gates and apply that on the 2-qubit state vector. You'll end up with the same result.

+ +

$\frac{1}{\sqrt{2}}\begin{bmatrix}1&1\\1&-1\\ \end{bmatrix} \otimes \frac{1}{\sqrt{2}}\begin{bmatrix}1&1\\1&-1\\ \end{bmatrix} = \frac{1}{2}\begin{bmatrix}1&1&1&1\\1&-1&1&-1\\1&1&-1&-1\\1&-1&-1&1 \end{bmatrix}$

+ +

Now, on applying this matrix on the 2-qubit state $\begin{bmatrix}1\\0\\0\\0\end{bmatrix}$ you get:

+ +

$$\frac{1}{2}\begin{bmatrix}1&1&1&1\\1&-1&1&-1\\1&1&-1&-1\\1&-1&-1&1 \end{bmatrix} \begin{bmatrix}1\\0\\0\\0\end{bmatrix}=\begin{bmatrix}1/2\\1/2\\1/2\\1/2\end{bmatrix}$$

+ +

which is equivalent to $$\left(\frac{|0\rangle+|1\rangle}{\sqrt{2}}\right)_A\otimes\left(\frac{|0\rangle+|1\rangle}{\sqrt{2}}\right)_B$$

+ +

Justification

+ +

Tensor product of linear maps:

+ +
+

The tensor product also operates on linear maps between vector spaces. + Specifically, given two linear maps $S : V \to X$ and $T : W \to Y$ + between vector spaces, the tensor product of the two linear maps $S$ + and $T$ is a linear map $(S\otimes T)(v\otimes w) = S(v) \otimes T(w)$ + defined by $(S\otimes T)(v\otimes w) = S(v) \otimes T(w)$.

+
+ +

Thus, $$(\mathbf H|0\rangle_A) \otimes (\mathbf H|0\rangle_B) = (\mathbf H\otimes \mathbf H)(|0\rangle_A \otimes |0\rangle_B)$$
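
A quick numerical check of both routes (a NumPy sketch) confirms they agree:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
HH = np.kron(H, H)               # two Hadamards acting in parallel

ket00 = np.array([1, 0, 0, 0])
out = HH @ ket00
assert np.allclose(out, [0.5, 0.5, 0.5, 0.5])

# Same result as transforming each qubit separately and tensoring:
plus = H @ np.array([1, 0])
assert np.allclose(out, np.kron(plus, plus))
```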

+",26,,26,,06-08-2018 13:51,06-08-2018 13:51,,,,0,,,,CC BY-SA 4.0 +2274,2,,2267,06-08-2018 17:16,,3,,"

A general trick for smoothing a big discrete operation into a continuous operation is to apply the phase estimation algorithm, then apply a phase gradient to the phase register, then uncompute the phase estimation. For example, see this blog post on computing the fractional fourier transform.

+ +

Because the swap operation has exactly two eigenvalues (+1 and -1), the phase estimation is extremely simple. It only requires a single phase estimation qubit. The resulting circuit looks like this:

+ +

+ +

Note that the top qubit should start and end in the 0 state. You can continuously vary how much swappery there is by varying the angle of the Z rotation in the middle.

+",119,,119,,06-08-2018 23:00,06-08-2018 23:00,,,,2,,,,CC BY-SA 4.0 +2275,1,,,06-09-2018 10:00,,7,410,"

Suppose we have a qutrit with the state vector $|\psi\rangle = a_0|0\rangle + a_1|1\rangle + a_2|2\rangle$, and we want to project its state onto the subspace having the basis $\{|0\rangle,|2\rangle\}$, I know the projection operator would be written like: $1|0\rangle \langle0| + 0|1\rangle\langle 1| + 1|2\rangle\langle2|$.

+ +

I'm having a few confusions here. Does $|0\rangle \langle 0|$ represent a tensor product between $[1 \ 0 \ 0]^{T}$ and $[1 \ 0 \ 0]$ ? Or is it just matrix multiplication? Also, I thought that we must always be able to write a projection operator in the form $|\phi\rangle \langle \phi|$ where $|\phi\rangle$ is a possible state of a qutrit. But how to represent $1|0\rangle \langle0| + 0|1\rangle\langle 1| + 1|2\rangle\langle2|$ in the form $|\phi\rangle \langle \phi|$?

+",2582,,26,,06-09-2018 11:27,06-12-2018 06:52,Confusion regarding projection operator,,2,0,,,,CC BY-SA 4.0 +2276,1,2277,,06-09-2018 11:31,,8,1100,"

Let's say we have the following quantum circuit:

+ +

+ +

Let's say we input the state $|00\rangle$ . Both of the $H$ gates produce the output $1/\sqrt{2}$, but which one of the following $2$ vectors is the input of $\operatorname{CNOT}$ gate: +$1: [1/\sqrt{2}, \ 1/\sqrt{2}, \ 1/\sqrt{2}, \ 1/\sqrt{2}]$ or +$2: [1/2, \ 1/2, \ 1/2, \ 1/2]$? +Also, after applying a $2$ qubit gate (in this case $\operatorname{CNOT}$) how do we find out what to input in the following single qubit gates (in this case two $H$ gates)? +Note that I gave this example for simplicity, a general answer will also be accepted.

+",2559,,26,,12/23/2018 12:31,12/23/2018 12:31,How to apply single and two qubit gates to 2 qubits multiple times?,,1,0,,,,CC BY-SA 4.0 +2277,2,,2276,06-09-2018 13:17,,5,,"

Step 1 (application of two hadamard gates):

+ +

$$|0\rangle_A \otimes |0\rangle_B \to \left(\frac{|0\rangle + |1\rangle}{\sqrt{2}}\right)_A\otimes\left(\frac{|0\rangle + |1\rangle}{\sqrt{2}}\right)_B$$

+ +

This is equivalent to the state vector: $\begin{bmatrix}1/2\\1/2\\1/2\\1/2\end{bmatrix}$, which will act as the input for your CNOT gate.

+ +

Step 2: (application of CNOT)

+ +

Let's remind ourselves what the CNOT gate does:

+ +

+ +

So clearly, +$$\left(\frac{|0\rangle + |1\rangle}{\sqrt{2}}\right)_A\otimes\left(\frac{|0\rangle + |1\rangle}{\sqrt{2}}\right)_B = \frac{1}{2}(|00\rangle + |01\rangle + |10\rangle + |11\rangle)$$ $$\to \frac{1}{2}(|00\rangle + |01\rangle + |11\rangle + |10\rangle) = \left(\frac{|0\rangle + |1\rangle}{\sqrt{2}}\right)_A\otimes\left(\frac{|0\rangle + |1\rangle}{\sqrt{2}}\right)_B$$

+ +

Step 3: (re-application of two Hadamard gates)

+ +

$$\left(\frac{|0\rangle + |1\rangle}{\sqrt{2}}\right)_A\otimes\left(\frac{|0\rangle + |1\rangle}{\sqrt{2}}\right)_B \to |0\rangle_A \otimes |0\rangle_B$$

+ +

Here, we used that fact that the Hadamard gate maps the state $\dfrac{|0\rangle + |1\rangle}{\sqrt{2}}$ to the state $|0\rangle$, for each qubit. You basically need to input the vector $\begin{bmatrix}1/\sqrt{2}\\1/\sqrt{2}\end{bmatrix}$ into the two Hadamard gates at the end. Find the state transformation on each qubit. From there you can construct the state vector for the final 2-qubit state. The final answer will be $\begin{bmatrix}1 \\ 0 \\ 0 \\ 0\end{bmatrix}$. Clearly, we end up with the same 2-qubit state which we started with.

+ +

Exercise: Carry out steps $2$ and $3$ using the matrix notation. Also, check this related answer.
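
If you want to check your matrix calculation, here is a minimal NumPy version of all three steps (assuming the first qubit is the control of the CNOT):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
HH = np.kron(H, H)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

state = np.array([1, 0, 0, 0])   # |00>
state = HH @ state               # step 1: [1/2, 1/2, 1/2, 1/2]
state = CNOT @ state             # step 2: unchanged, |++> is a CNOT eigenstate
state = HH @ state               # step 3

assert np.allclose(state, [1, 0, 0, 0])   # back to |00>
```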

+",26,,26,,06-09-2018 13:25,06-09-2018 13:25,,,,0,,,,CC BY-SA 4.0 +2278,1,2279,,06-09-2018 13:27,,6,475,"

My idea was to apply the $Z$ operator twice, which brings us back to the point where we started, and also to show that after applying the $Z$ operator just once we are not at the same point we started from (this is for showing that we are not rotating by a multiple of $360^{\circ}$). Is this a correct proof? What about the general case, where we want to find out through how many degrees a given operator rotates the points?

+",2559,,26,,06-09-2018 13:54,06-09-2018 14:52,How to prove that Z operator rotates points on Bloch sphere about Z axis through 180°?,,2,0,,,,CC BY-SA 4.0 +2279,2,,2278,06-09-2018 13:45,,6,,"

The Pauli-$Z$ gate maps $|0\rangle$ to $|0\rangle$ and $|1\rangle$ to $-|1\rangle$.

+

For Bloch sphere representation, state of a qubit is written like (look at my previous answer for a detailed explanation)

+

$$|\psi\rangle = \cos(\theta/2)|0\rangle + e^{i\phi}\sin(\theta/2)|1\rangle$$

+

Apply the Pauli-$Z$ gate on this and you get:

+

$$|\psi'\rangle = \cos(\theta/2)|0\rangle + (-1)e^{i\phi}\sin(\theta/2)|1\rangle$$

+

$$=\cos(\theta/2)|0\rangle + e^{i(\phi+\pi)}\sin(\theta/2)|1\rangle$$

+

Thus, the angle $\phi$ changes by $\pi$. We can call $\phi + \pi$ as $\phi'$ now.

+

Remind yourself what the Bloch sphere looks like:

+

+

Clearly, from the diagram, if the angle $\phi$ is changed by $\pi$, it will imply that the state of the qubit has been rotated about the Z-axis by $180$ degrees.
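
You can also confirm this numerically: applying $Z$ sends the Bloch vector $(x,y,z)$ to $(-x,-y,z)$, which is exactly a $180^{\circ}$ rotation about the $z$-axis (a minimal NumPy sketch):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def bloch(psi):
    # Bloch-vector components are the Pauli expectation values.
    return np.array([np.vdot(psi, P @ psi).real for P in (X, Y, Z)])

theta, phi = 1.1, 0.7
psi = np.array([np.cos(theta / 2), np.exp(1j * phi) * np.sin(theta / 2)])

x, y, z = bloch(psi)
x2, y2, z2 = bloch(Z @ psi)
# A 180-degree rotation about the z-axis flips x and y and keeps z:
assert np.allclose([x2, y2, z2], [-x, -y, z])
```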

+",26,,-1,,6/18/2020 8:31,06-09-2018 13:45,,,,0,,,,CC BY-SA 4.0 +2280,2,,2275,06-09-2018 14:21,,8,,"
+

Does $|0\rangle\langle0|$ represent a tensor product or is it just matrix multiplication?

+
+ +

You can think of $|0\rangle\langle0|$ as the tensor product of $|0\rangle$ and $\langle0|$, or equivalently as the matrix product of the column vector representing $|0\rangle$ with the row vector representing $\langle0|$ (their outer product).

+ +
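As a small illustration (a plain-Python sketch using nested lists for matrices), building $|0\rangle\langle0|$ as the product of a column vector with a row vector gives the expected $2\times2$ matrix, which is idempotent as a projector should be:

```python
ket0 = [1, 0]  # column vector for |0>
bra0 = [1, 0]  # row vector for <0| (conjugate transpose of |0>)

# product of a 2x1 column with a 1x2 row: a 2x2 matrix
P = [[ket0[i] * bra0[j] for j in range(2)] for i in range(2)]
assert P == [[1, 0], [0, 0]]

# P^2 = P, as required of a projector
P2 = [[sum(P[i][k] * P[k][j] for k in range(2)) for j in range(2)] for i in range(2)]
assert P2 == P
```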
+

Also, I thought that we must always be able to write a projection operator in the form |ϕ⟩⟨ϕ|

+
+ +

Not necessarily. +A projector will have that form if and only if it projects onto a one-dimensional space (that is, it projects onto a pure state). +More general projections, like the one you mention, do not have this feature, and that is totally fine.

+ +

Indeed, the identity matrix is also a (trivial) projection, and it certainly cannot be written as $|\psi\rangle\langle\psi|$ for any pure state $|\psi\rangle$.

+",55,,,,,06-09-2018 14:21,,,,1,,,,CC BY-SA 4.0 +2281,2,,2278,06-09-2018 14:52,,2,,"

One can more generally show that $R_z(\theta)=e^{-i \theta Z/2}=\cos(\theta/2)-iZ\sin(\theta/2)$ rotates points on the Bloch sphere by an angle $\theta$ around the $z$-axis, +and note that $Z=i R_z(\pi)$.

+ +

Let $|\psi\rangle$ be an arbitrary pure state. The coordinates of the point representing $|\psi(\theta)\rangle\equiv R_z(\theta)|\psi\rangle$ on the Bloch sphere are +$$x(\theta)=\langle\psi(\theta)|X|\psi(\theta)\rangle, \\ y(\theta)=\langle\psi(\theta)|Y|\psi(\theta)\rangle, \\ +z(\theta)=\langle\psi(\theta)|Z|\psi(\theta)\rangle.$$ +That this point follows a circular trajectory around the $z$-axis when $\theta$ goes from $0$ to $2\pi$ +can be seen by direct calculation as follows:

+ +

\begin{align} +x(\theta) &=\langle\psi\rvert R_z(-\theta)\,X\,R_z(\theta)\lvert\psi\rangle = +\cos\theta\,x(0) - \sin\theta\,y(0), \\ +y(\theta) &=\langle\psi\rvert R_z(-\theta)\,Y\,R_z(\theta)\lvert\psi\rangle = +\sin\theta\,x(0) + \cos\theta\,y(0), \\ +z(\theta) &= z(0). +\end{align}

+ +
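These relations can be verified numerically. The sketch below (plain Python; the test state and angle are arbitrary) checks that $z$ is unchanged and that $(x,y)$ is rotated through $\theta$ about the $z$-axis, with the signs fixed by the convention $R_z(\theta)=e^{-i\theta Z/2}$:

```python
import cmath
import math

X = [[0, 1], [1, 0]]
Y = [[0, -1j], [1j, 0]]
Z = [[1, 0], [0, -1]]

def rz(theta):
    # R_z(theta) = e^{-i theta Z/2} = diag(e^{-i theta/2}, e^{+i theta/2})
    return [[cmath.exp(-1j * theta / 2), 0], [0, cmath.exp(1j * theta / 2)]]

def apply(m, v):
    return [m[0][0] * v[0] + m[0][1] * v[1],
            m[1][0] * v[0] + m[1][1] * v[1]]

def expval(op, v):
    # <v| op |v> for a normalised state v (real for Hermitian op)
    w = apply(op, v)
    return (v[0].conjugate() * w[0] + v[1].conjugate() * w[1]).real

psi = [math.cos(0.4), cmath.exp(0.9j) * math.sin(0.4)]  # arbitrary pure state
x0, y0, z0 = expval(X, psi), expval(Y, psi), expval(Z, psi)

t = 1.3
rot = apply(rz(t), psi)
# with this sign convention, (x, y) rotates counterclockwise about z:
assert abs(expval(X, rot) - (math.cos(t) * x0 - math.sin(t) * y0)) < 1e-12
assert abs(expval(Y, rot) - (math.sin(t) * x0 + math.cos(t) * y0)) < 1e-12
assert abs(expval(Z, rot) - z0) < 1e-12
```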

Note that the same is true more generally for mixed states: the point representing $\rho(\theta)\equiv R_z(\theta)\rho R_z(\theta)^\dagger$ in the Bloch sphere is +$$\newcommand{\Tr}{\operatorname{Tr}} +x(\theta)=\Tr(X\rho(\theta)), \\ +y(\theta)=\Tr(Y\rho(\theta)), \\ +z(\theta)=\Tr(Z\rho(\theta)),$$ +and one can show by direct calculation that this point evolves similarly to the pure case.

+",55,,,,,06-09-2018 14:52,,,,0,,,,CC BY-SA 4.0 +2282,1,2285,,06-10-2018 12:34,,43,7223,"

Is it because we don't know exactly how to create quantum computers (and how they must work), or do we know how to create it in theory, but don't have the tools to execute it in practice? Is it a mix of the above two? Any other reasons?

+",2559,,-1,,10/17/2022 0:09,10/17/2022 0:09,Why is it harder to build quantum computers than classical computers?,,7,6,,,,CC BY-SA 4.0 +2283,1,,,06-10-2018 15:10,,7,115,"

During a description of zero-dimensional self-dual $\text{GF}(4)$ quantum codes in ""On self-dual quantum codes, graphs, and Boolean functions"" by L.E. Danielsen, it states:

+ +
+

A zero-dimensional stabilizer code with high distance represents a single quantum state which is robust to error, sometimes called a stabilizer state. Codes of higher dimension can be constructed from zero-dimensional quantum codes...

+
+ +

My question has two parts:

+ +

Firstly, I am confused by what is meant here by a ""single quantum state"". To my understanding, the passage seems to confuse the encoded 0-dimensional qubit state (which is robust to Pauli error), i.e. a single ""qubit"" state, with the single ""stabilizer"" state which is represented by the generators of the code (which is not robust to error, as a single Hadamard on any qubit would be sufficient to take it to a completely different stabilizer state). Is this the case, or am I misunderstanding something here?

+ +

Secondly, what form do these constructions take in practice? Is this simply achieved by not enforcing one of the stabilizer's generators, or are there other more general methods? Furthermore, if there are multiple construction methods, what are their advantages or disadvantages (both in the codes they create and/or the complexity of the construction)?

+",391,,391,,06-11-2018 06:47,06-11-2018 08:45,Zero-distance self-dual GF(4) quantum codes and constructing k > 0 codes from them,,1,0,,,,CC BY-SA 4.0 +2284,1,2287,,06-10-2018 16:39,,7,6825,"

I know that 2 qubits are entangled if it is impossible to represent their joint state as a tensor product. But when we are given a joint state, how can we tell if it is possible to represent it as a tensor product? +For example, I am asked to tell if the qubits are entangled for each of the following situations:

+ +

$$\begin{align} +\left| 01 \right>\\ +\frac 12(\left| 00 \right> + i\left| 01 \right> - i\left| 10 \right> + i\left| 11 \right> )\\ +\frac 12(\left| 00 \right> - \left| 11 \right>)\\ +\frac 12(\left| 00 \right> + \left| 01 \right> +i\left| 10 \right> + \left| 11 \right> ) \end{align}$$

+",2559,,26,,12/23/2018 12:31,12/23/2018 12:31,How to check if 2 qubits are entangled?,,2,3,,06-10-2018 19:23,,CC BY-SA 4.0 +2285,2,,2282,06-10-2018 17:11,,38,,"

We know exactly, in theory, how to construct a quantum computer. But that is intrinsically more difficult than to construct a classical computer.

+ +

In a classical computer, you do not have to use a single particle to encode bits. Instead, you might say that anything less than a billion electrons is a 0 and anything more than that is a 1, and aim for, say, two billion electrons to encode a 1 normally. That makes you inherently fault-tolerant: Even if there are hundreds of millions of electrons more or less than expected, you will still get the correct classification as a digital 0 or a 1.

+ +

In a quantum computer, this trick is not possible due to the no-cloning theorem: You cannot trivially employ more than one particle to encode a qubit (quantum bit). Instead, you must make all your gates operate so well that they are accurate not just at the single-particle level but even to a tiny fraction of how much they act on a single particle (down to the so-called quantum error-correction threshold). This is much more challenging than getting gates accurate merely to within hundreds of millions of electrons.

+ +

Meanwhile, we do have the tools to, just barely, make quantum computers with the required level of accuracy. But nobody has, as of yet, managed to make a big one, meaning one that can accurately operate on the perhaps hundreds of thousands of physical qubits needed to implement a hundred or so logical qubits, and thereby be undeniably in the realm where a quantum computer beats classical computers at select problems (quantum supremacy).

+",,user1039,,user1039,06-10-2018 19:00,06-10-2018 19:00,,,,4,,,,CC BY-SA 4.0 +2286,2,,2284,06-10-2018 18:12,,3,,"

It is done for a specific state (a Bell state) here, and the same procedure can be used for any other two-qubit state.

+",2293,,2293,,7/13/2018 14:47,7/13/2018 14:47,,,,0,,,,CC BY-SA 4.0 +2287,2,,2284,06-10-2018 18:13,,4,,"

If you are given a general 2-qubit state $a \mid 00 \rangle + b \mid 01 \rangle + c \mid 10 \rangle + d \mid 11 \rangle$

+ +

If it is unentangled, then the coefficients are those of $(\alpha \mid 0 \rangle + \beta \mid 1 \rangle)(\gamma\mid 0 \rangle + \delta \mid 1 \rangle)$ for some $\alpha, \ldots, \delta$.

+ +

$$ +\alpha \gamma = a\\ +\alpha \delta = b\\ +\beta \gamma = c\\ +\beta \delta = d +$$

+ +

You want to know whether those 4 equations are solvable for a given $a,b,c,d$. This condition becomes

+ +

$$ +ad - bc = 0 +$$

+ +

so if $ad-bc=0$, then you can solve for $\alpha, \ldots, \delta$. You don't need to solve for them, you just need to know that it is possible.

+ +
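This check is easy to script. A minimal sketch (plain Python; the amplitudes are those of the question's states up to overall normalisation, which does not affect whether $ad-bc$ vanishes):

```python
def is_product_state(a, b, c, d):
    # a|00> + b|01> + c|10> + d|11> factorises iff ad - bc = 0
    return abs(a * d - b * c) < 1e-12

assert is_product_state(0, 1, 0, 0)          # |01>: unentangled
assert not is_product_state(1, 1j, -1j, 1j)  # second state: entangled
assert not is_product_state(1, 0, 0, -1)     # |00> - |11>: entangled
assert not is_product_state(1, 1, 1j, 1)     # fourth state: entangled
```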

The generalization to qudits, with potentially different dimensions $d_1$ and $d_2$, is given by the quadratic polynomials that cut out the Segre embedding as a zero locus.

+",434,,,,,06-10-2018 18:13,,,,2,,,,CC BY-SA 4.0 +2288,2,,2282,06-10-2018 22:04,,11,,"

There's many reasons, both in theory and implementation, that make quantum computers much harder to build.

+ +

The simplest might be this: while it is easy to build machines that exhibit classical behaviour, demonstrations of quantum behaviour require really cold and really precisely controlled machines. The thermodynamic conditions of the quantum regime are just hard to access. When we finally do achieve a quantum system, it's hard to keep it isolated from the environment which seeks to decohere it and make it classical again.

+ +

Scalability is a big issue. The bigger our computer, the harder it is to keep it quantum. The phenomena that promise to make quantum computers really powerful, like entanglement, require that the qubits interact with each other in a controlled way. Architectures that allow this control are hard to engineer, and hard to scale. Nobody's agreed on a design!

+ +

As @pyramids points out, the strategies we use to correct errors in classical machines usually involve cloning information, which is forbidden by quantum information theory. While we have some strategies to mitigate errors in clever quantum ways, they require that our qubits are already pretty noise-free and that we have lots of them. If we can't improve our engineering past some threshold, we can't employ these strategies - they make things worse!

+",2591,,,,,06-10-2018 22:04,,,,1,,,,CC BY-SA 4.0 +2289,1,,,06-10-2018 23:24,,5,99,"

I've accidentally written a procedure which appears to compute both outputs of a long-running function $f: \{0,1\} \to \{1...n\}$ using one run of $f$ plus $\mathcal{O}(n)$ time. I thought this couldn't be done. Where's the bug in my ""perpetuum mobile""? If I'm wrong, what's the name of this technique?

+ +
    +
  • Initialize with $\sum_ic_i|i\rangle_x$, with $c_0$ initially, say, $\sqrt½$, and the invariant $\sum_ic_i^2 = 1$.

  • +
  • Compute $f(x)$, producing: $\sum_ic_i|i\rangle_x|f_i\rangle$ (1). We never evaluate $f$ again, and our goal is to extract these $f_i$.

  • +
  • Produce: +$$ +\sqrt½\sum_ic_i|i\rangle_x|f_i\rangle(|i^{|a|}\rangle_a|f_i\rangle_b+2^{-|a+b|/2}\sum_{rs}|r\rangle_a|s\rangle_b) +$$

  • +
  • Measure and discard the $a$ and $b$ registers, getting (1) with $c_0^2 \in \{1/(2+2^{|a|+|b|}), ½, 1-1/(2+2^{|a|+|b|})\}$, and in all probability we know which.

  • +
  • Use amplitude amplification, where the good and bad subspaces are distinguished using the $x$ qubit, to get $c_0^2$ between ⅛ and ⅞.

  • +
  • Repeat the last 3 *s a bunch of times.

  • +
  • In all probability, we've measured $f_0$ and $f_1$.

  • +
+",2592,,2592,,06-11-2018 12:51,06-11-2018 13:51,How does this parallel computation scheme fail?,,2,4,,,,CC BY-SA 4.0 +2290,2,,2282,06-11-2018 03:59,,1,,"

I have to disagree with the idea that the no-cloning theorem makes error correction with repetition codes difficult. Given that your inputs are provided in the computational basis (i.e. your inputs are not arbitrary superpositions, which is almost always the case, especially when you're solving a classical problem, e.g. Shor's algorithm), you can clone them with controlled-NOT gates, run your computation in parallel on all the copies, and then correct errors. The only trick is to make sure you don't do a measurement during error correction (except possibly of the syndrome), and to do this all you have to do is continue to use quantum gates.

+

Error correction for quantum computers is not much more difficult than for classical computers. Linearity takes care of most of the perceived difficulties.

+

I'd also like to mention that there are much more efficient schemes for quantum error correction than repetition codes, and that you need two Pauli matrices to generate the rest, so you need two types of repetition codes if you're going to go the inefficient, but conceptually simple, repetition-code route (one for bit flips and one for phase flips).

+

Quantum error correction shows that a linear increase in the number of physical qubits per logical qubit improves the error rate exponentially, just as it does in the classical case.

+

Still, we're nowhere near 100 physical qubits. This is the real problem. We need to be able to glue a lot more semi-accurate qubits together before any of this starts to matter.

+",2595,,2595,,09-08-2020 20:00,09-08-2020 20:00,,,,2,,,,CC BY-SA 4.0 +2291,2,,2289,06-11-2018 06:56,,1,,"

There's quite a lot that doesn't make sense in your described protocol, at least to me! It would be a lot clearer in terms of bras and kets. Let me try to translate, and you can comment to try and get us on the same page...

+ +
    +
  • Let $|x\rangle=(|0\rangle+|1\rangle)/\sqrt{2}$

  • +
  • Compute $f(x)$, so you have $(|0\rangle|f(0)\rangle+|1\rangle|f(1)\rangle)/\sqrt{2}$

  • +
  • Let $|m\rangle=(|0\rangle+|1\rangle)/\sqrt{2}$, so that we have an overall state +$$ +(|0\rangle_m|000\ldots 0\rangle_a|f(0)\rangle_b+|1\rangle_m|111\ldots 1\rangle_a|f(1)\rangle_b)/\sqrt{2} +$$

  • +
  • Measure the $a$ and $b$ registers.

  • +
+ +

What I don't understand: you've defined $|x\rangle$ but never used it. You've initialised the registers $a$ and $b$, but overwritten them immediately (so why did you initialise them?). What amplitude is it that you think you want to amplify?

+ +

With 50:50 probability, you either get $|0\rangle_m|000\ldots 0\rangle_a|f(0)\rangle_b$ or $|1\rangle_m|111\ldots 1\rangle_a|f(1)\rangle_b$. So, to get both values, you repeat until you've got both values. That requires a few runs. At best it takes 2, but you'd be better off just evaluating $f(0)$ and $f(1)$ by using two runs.

+ +
+ +

Update after question revision:

+ +

There are still issues with the description of the protocol: what is $d$, and how do you produce such a state with $d\neq0$? Surely you're better off just leaving $d=0$?

+ +

However, I think we're now getting to the main point of misunderstanding, which is counting the circuit complexity. Every time you produce the state +$$ +|\Psi_f\rangle=\sqrt½\sum_ic_i|i\rangle_x|f(i)\rangle(|i^{|a|}\rangle_a|f(i)\rangle_b+d\sum_{rs}|r\rangle_a|s\rangle_b) +$$ +you have to evaluate the function $f(x)$. If that is the costly function to evaluate, you have to count the number of repetitions. You say ""Repeat the last 3 *s a bunch of times."", so that costs you a bunch of function evaluations, while just evaluating $f(0)$ and $f(1)$ costs you exactly 2 function evaluations. Moreover, to get 2 answers out, you must repeat your procedure at least twice, so you can never even get lucky and beat the naive classical case sometimes.

+ +

Note that if you want to be able to copy the state $|\Psi_f\rangle$, then for all different functions $f(x)$, those states must be orthogonal. However, consider two different functions: $f(0)=f(1)=0$ and $g(0)=0,g(1)=1$. We have that $\langle\Psi_f|\Psi_g\rangle\neq 0$.

+",1837,,1837,,06-11-2018 09:28,06-11-2018 09:28,,,,7,,,,CC BY-SA 4.0 +2292,1,2374,,06-11-2018 07:07,,14,660,"

There are many fairly standard quantum algorithms that can all be understood within a very similar framework, from Deutsch's algorithm and Simon's problem to Grover's search, Shor's algorithm and so on.

+ +

One algorithm that seems to be completely different is the algorithm for evaluating the Jones Polynomial. Moreover, it seems like this is a crucial algorithm to understand in the sense that it is a BQP-complete problem: it exhibits the full power of a quantum computer. Also, for a variant of the problem, it's DQC-1 complete, i.e. it exhibits the full power of one clean qubit.

+ +

The Jones Polynomial algorithm paper presents the algorithm in a very different way to the other quantum algorithms. Is there a more similar/familiar way that I can understand the algorithm (specifically, the unitary $U$ in the DQC-1 variant, or just the whole circuit in the BQP-complete variant)?

+",1837,,1837,,6/15/2018 13:26,6/17/2018 23:52,Jones Polynomial,,2,0,,,,CC BY-SA 4.0 +2293,2,,2283,06-11-2018 08:27,,1,,"

For the first part of the question, I think that by ""single quantum state"" the author is referring to the encoded quantum state, which will be a state formed by $n$ qubits. Such a state is called a stabilizer state because the encoding operation takes the input state, in this case $|0\rangle^{\otimes n}$, to a state in the codespace defined by the stabilizers. This codespace is defined to be the $+1$ simultaneous eigenspace of the stabilizer operators, and in general has dimension $2^k$; so, as in this case $k=0$, the dimension of this codespace will be $1$. Consequently, the author is stating that such zero-dimensional stabilizer codes refer to the basis of such a $1$-dimensional subspace, which is just a single quantum state.

+ +

The author also says that such a state is robust to errors because a stabilizer error-correction code is being applied to the state, and so errors will be correctable. Obviously a Hadamard gate would change the encoded state, but I think that by robust the author means that the state can be corrected after the appearance of Pauli errors, that is, $\{X,Y,Z\}^{\otimes n}$ operators applied to the state. This robustness obviously comes from the fact that an error-correction code is being applied to the $n$-ancilla-qubit input state.

+ +

These codes would then be obtained by finding the encoding operator that takes the input $n$-qubit state to the $+1$ simultaneous eigenspace that the $n$ stabilizer generators define (note that in general $n-k$ generators are needed, and that here $k=0$). I am not sure what you mean exactly by the phrase ""enforcing one of the stabilizer's generators"". To find the encoder unitary, combinations of so-called Clifford gates are used most of the time; there are different methods and algorithms in the literature that are useful for finding the exact combination.

+",2371,,2371,,06-11-2018 08:45,06-11-2018 08:45,,,,2,,,,CC BY-SA 4.0 +2294,2,,2282,06-11-2018 08:33,,3,,"

One important point is that quantum computers contain classical computers. So it must be at least as hard to build a quantum computer as it is a classical computer.

+ +

For a concrete illustration, it's worth thinking about universal gate sets. In classical computation, you can create any circuit you want via the combination of just a single type of gate. Often people talk about the NAND gate, but for the sake of this argument, it's easier to talk about the Toffoli gate (also known as the controlled-controlled-not gate). Every classical (reversible) circuit can be written in terms of a whole bunch of Toffolis. An arbitrary quantum computation can be written as a combination of two different types of gate: the Toffoli and the Hadamard.

+ +

This has immediate consequences. Obviously, if you're asking for two different things, one of which does not exist in classical physics, that must be harder than just making the one thing that does exist in classical physics. Moreover, making use of the Hadamard means that the sets of possible states you have to consider are no longer orthogonal, so you cannot simply look at the state and determine how to proceed. This is particularly relevant to the Toffoli, because it becomes harder to implement as a result: before, you could safely measure the different inputs and, dependent upon their values, do something to the output. But if the inputs are not orthogonal (or even if they are, but in an unknown basis!) you cannot risk measuring them because you will destroy the states, specifically, you destroy the superpositions that are the whole thing that's making quantum computation different from classical computation.

+",1837,,,,,06-11-2018 08:33,,,,2,,,,CC BY-SA 4.0 +2295,1,2325,,06-11-2018 08:34,,8,127,"

In On the classification of all self-dual additive codes over $\textrm{GF}(4)$ of length up to 12 by Danielsen and Parker, they state:

+ +
+

Two self-dual additive codes over $\textrm{GF}(4)$, $C$ and $C^\prime$, are equivalent if and only if the codewords of $C$ can be mapped onto the codewords of $C^\prime$ by a map that preserves self-duality. Such a map must consist of a permutation of coordinates (columns of the generator matrix), followed by multiplication of coordinates by nonzero elements from $\textrm{GF}(4)$, followed by possible conjugation of coordinates.

+
+ +

with the previous definition

+ +
+

Conjugation of $x \in \textrm{GF}(4)$ is defined by $\bar{x} = x^2$.

+
+ +

I am confused about what ""conjugation of coordinates"" means in this context. To me, ""coordinates"" would normally refer to the code matrix's columns, or equivalently the quantum code's qubits. However, here it seems to be referring to the alphabet of the code, or equivalently the Pauli operators of the code's stabilizer generators. If this is the case, what operation does ""conjugation of coordinates"" represent with respect to the code's stabilizer generators?

+",391,,,,,6/13/2018 9:19,"What does ""conjugation of coordinates"" mean with respect to GF(4) (quantum) codes",,1,0,,,,CC BY-SA 4.0 +2296,1,2297,,06-11-2018 10:28,,5,127,"

I came across a problem that involves $2$ quantum trits in the state $\left| 22 \right>$. What is its tensor product interpretation and its matrix interpretation?

+",2559,,26,,12/23/2018 12:31,12/23/2018 12:31,What is the $\left| 22\right>$ state?,,1,3,,,,CC BY-SA 4.0 +2297,2,,2296,06-11-2018 10:49,,5,,"

It is worth emphasising that the stuff that you write inside a ket is completely arbitrary. It's just a label you're attaching to something, so it should have a proper definition somewhere. Now, usually, we're talking about quantum spins with a certain size of Hilbert space, say $d$. Probably here you're talking $d\geq 3$, and perhaps specifically $d=3$. Then, one set of basis states is often written as $|i\rangle$ for $i=0,1,\ldots d-1$. You can choose to represent these as vectors $(0,0,\ldots,0,1,0,0,\ldots 0)$ where the 1 is the $i+1$th entry, and there are $d$ entries. So, I guess you're talking about +$$ +|2\rangle\equiv\left(\begin{array}{c} 0 \\ 0 \\ 1 \end{array}\right) +$$ +The tensor product $|2\rangle\otimes|2\rangle$ then has the standard meaning.
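The tensor product $|2\rangle\otimes|2\rangle$ can be spelled out numerically (a plain-Python sketch for $d=3$, where $|2\rangle$ is the third standard basis vector):

```python
def kron_vec(u, v):
    # Kronecker (tensor) product of two vectors
    return [a * b for a in u for b in v]

ket2 = [0, 0, 1]               # |2> for d = 3
ket22 = kron_vec(ket2, ket2)   # |2> (x) |2> lives in a 9-dimensional space
assert ket22 == [0, 0, 0, 0, 0, 0, 0, 0, 1]
```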

+",1837,,,,,06-11-2018 10:49,,,,0,,,,CC BY-SA 4.0 +2298,1,,,06-11-2018 11:18,,6,179,"

One of the main ideas introduced in Giovannetti et al. 2007 +(0708.1879) is the so-called bucket-brigade (q)RAM architecture.

+ +

The authors state (first paragraph, second column, first page, in v2) that this new (at the time) (q)RAM architecture reduces the number of switches that must be thrown during a RAM call, quantum or classical, from $O(N^{1/d})$ to $O(\log N)$, where $N=2^n$ is the number of memory slots in the RAM and $d$ is the dimension of the lattice that, according to the authors, conventional RAM architectures use for memory retrieval.

+ +

The conventional architecture they have in mind essentially consists in retrieving the information using a tree structure, like the one they present in their Fig. 1 (here reproduced):

+ +

+ +

They say that this scheme requires throwing $O(N)$ switches for each memory call, but I don't understand why this is the case. +From the above, it would seem that one just needs to throw $O(\log_2(N))$ +switches, one per bifurcation, to get from the top to the bottom.

+ +

I understand that in the quantum case, with this protocol, we would end up with a state correlated with all of the $N$ switches, but they seem to be stating that even in the classical case one needs to activate them all.

+ +

In other words, is the advantage of the bucket-brigade approach only in the higher error resilience in the quantum case, or would it also be classically advantageous, compared with the conventional approaches?

+",55,,55,,06-11-2018 11:35,6/18/2018 13:26,Are bucket-brigade (q)RAM architectures also advantageous in the classical case?,,2,0,,,,CC BY-SA 4.0

If a circuit takes more than one qubit as its input and has quantum gates which take different numbers of qubits as their input, how would we interpret this circuit as a matrix?

+ +

Here is a toy example:

+ +

+",2559,,26,,5/13/2019 19:50,5/13/2019 19:50,How to interpret a quantum circuit as a matrix?,,1,0,,,,CC BY-SA 4.0 +2300,2,,2299,06-11-2018 13:37,,23,,"

Specific Circuit

+

The first gate is a Hadamard gate which is normally represented by $$\frac{1}{\sqrt{2}}\begin{bmatrix}1&1\\1&-1\end{bmatrix}$$

+

Now, since we're only applying it to the first qubit, we use a Kronecker product on it (this confused me so much when I was starting out - I had no idea how to scale gates; as you can imagine, it's rather important), so we do $H\otimes I$, where $I$ is the 2x2 identity matrix. This produces

+

$$\frac{1}{\sqrt{2}}\begin{bmatrix}1 & 0 & 1 & 0\\0 & 1 & 0 & 1\\ 1 & 0 & -1 & 0 \\ 0 & 1 & 0 & -1\end{bmatrix}$$

+

Next we have a CNOT gate. This is normally represented by

+

$$\begin{bmatrix}1&0&0&0\\0&1&0&0\\0&0&0&1\\0&0&1&0\end{bmatrix}$$

+

This is the right size for two qubits, so we don't need to scale using Kronecker products. We then have another Hadamard gate, which scales the same way as the first. To find the overall matrix for the circuit, then, we multiply them all together:

+

$$\frac{1}{\sqrt{2}}\begin{bmatrix}1 & 0 & 1 & 0\\0 & 1 & 0 & 1\\ 1 & 0 & -1 & 0 \\ 0 & 1 & 0 & -1\end{bmatrix}\begin{bmatrix}1&0&0&0\\0&1&0&0\\0&0&0&1\\0&0&1&0\end{bmatrix}\frac{1}{\sqrt{2}}\begin{bmatrix}1 & 0 & 1 & 0\\0 & 1 & 0 & 1\\ 1 & 0 & -1 & 0 \\ 0 & 1 & 0 & -1\end{bmatrix}$$

+

and get

+

$$\frac{1}{2}\begin{bmatrix}1&1&1&-1\\1&1&-1&1\\1&-1&1&1\\-1&1&1&1\end{bmatrix}$$

+

(if Python multiplied correctly =) We would then multiply this by our original qubit state, and get our result.

+
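Here is that check in Python (a self-contained sketch with nested lists, so the Kronecker product and matrix multiplication are spelled out; note the rightmost factor acts first):

```python
def kron(A, B):
    # Kronecker product of two matrices
    return [[A[i][j] * B[k][l]
             for j in range(len(A[0])) for l in range(len(B[0]))]
            for i in range(len(A)) for k in range(len(B))]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

r = 2 ** -0.5
H = [[r, r], [r, -r]]
I = [[1, 0], [0, 1]]
CNOT = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]]

HI = kron(H, I)                   # Hadamard on the first qubit only
M = matmul(HI, matmul(CNOT, HI))  # rightmost factor acts first

expected = [[1, 1, 1, -1], [1, 1, -1, 1], [1, -1, 1, 1], [-1, 1, 1, 1]]
assert all(abs(M[i][j] - expected[i][j] / 2) < 1e-12
           for i in range(4) for j in range(4))
```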

Generalization

+

So basically, you go through each gate one by one, take the base representation, and scale them appropriately using Kronecker products with identity matrices. Then you multiply all the matrices together in the order they are applied. Be sure to do this such that if you wrote out the multiplication, the very first gate is on the far right; as arriopolis points out, this is a common mistake. Matrices are not commutative! If you don't know the base representation of a gate, first check Wikipedia's article on quantum gates, which lists a lot of them.

+",91,,-1,,6/18/2020 8:31,06-12-2018 13:22,,,,4,,,,CC BY-SA 4.0 +2301,2,,2289,06-11-2018 13:51,,0,,"

Ampltiude amplification seems to require both that it is easy to judge between good and bad (which is easy enough if one goes by the $x$ qubit), and that it is easy to get to one's starting point, which in this case requires evaluating $f$.

+",2592,,,,,06-11-2018 13:51,,,,0,,,,CC BY-SA 4.0 +2302,1,2304,,06-11-2018 15:16,,8,3278,"

What does it mean to measure a qubit (or multiple qubits) in the standard basis?

+",2559,,26,,12/23/2018 12:32,12/23/2018 12:32,Measuring in standard basis meaning,,2,0,,,,CC BY-SA 4.0 +2303,2,,2302,06-11-2018 15:19,,6,,"

You define the projectors +$$ +P_0=|0\rangle\langle 0|=\left(\begin{array}{cc} 1 & 0 \\ 0 & 0 \end{array}\right)\qquad P_1=|1\rangle\langle 1|=\left(\begin{array}{cc} 0 & 0 \\ 0 & 1 \end{array}\right). +$$ +For any state $|\psi\rangle$, the probability of getting answer $x$ is $p_x=\langle\psi|P_x|\psi\rangle$ and, after the measurement, the qubit is in the state $|x\rangle$.

+ +
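For concreteness, a small sketch (plain Python; the state $(|0\rangle+|1\rangle)/\sqrt2$ is an arbitrary example, not from the question) computing $p_x=\langle\psi|P_x|\psi\rangle$:

```python
def apply(P, v):
    return [sum(P[i][j] * v[j] for j in range(len(v))) for i in range(len(v))]

def prob(P, v):
    # p = <psi| P |psi> for a normalised state v
    w = apply(P, v)
    return sum(v[i].conjugate() * w[i] for i in range(len(v))).real

P0 = [[1, 0], [0, 0]]  # |0><0|
P1 = [[0, 0], [0, 1]]  # |1><1|
s = 2 ** -0.5
psi = [s, s]           # (|0> + |1>)/sqrt(2)

assert abs(prob(P0, psi) - 0.5) < 1e-12
assert abs(prob(P1, psi) - 0.5) < 1e-12
```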

If you want to measure multiple (say $n$) qubits in the standard basis, you can take arbitrary tensor products of the single-qubit terms, +$$ +P_x=|x\rangle\langle x|=\bigotimes_{i=1}^nP_{x_i} +$$ +for $x\in\{0,1\}^n$.

+ +

Where you have to be a little more careful is if you're measuring only a subset of qubits. Then, the probability is still $p_x=\langle\psi|P_x|\psi\rangle$, but the output state is $P_x|\psi\rangle/\sqrt{p_x}$, which could still be a superposition of multiple basis states (all those for which the measured qubits correspond to $x$).

+",1837,,,,,06-11-2018 15:19,,,,0,,,,CC BY-SA 4.0 +2304,2,,2302,06-11-2018 15:45,,9,,"

A $1$-qubit system, in general, can be in a state $a|0\rangle+b|1\rangle$ where $|0\rangle$ and $|1\rangle$ are basis vectors of a two-dimensional complex vector space. The standard basis for measurement here is $\{|0\rangle,|1\rangle\}$. When you are measuring in this basis, with $\frac{|a|^2}{|a|^2+|b|^2}\times 100\%$ probability you will find that the state after measurement is $|0\rangle$ and with $\frac{|b|^2}{|a|^2+|b|^2}\times 100\%$ probability you'll find that the state after measurement is $|1\rangle$.

+ +

But you could carry out the measurement in some other basis too, say $\{\frac{|0\rangle+|1\rangle}{\sqrt{2}},\frac{|0\rangle-|1\rangle}{\sqrt{2}}\}$, but that wouldn't be the standard basis.

+ +
+

Exercise: Express $a|0\rangle+b|1\rangle$ in the form $c(\frac{|0\rangle+|1\rangle}{\sqrt{2}})+d(\frac{|0\rangle-|1\rangle}{\sqrt{2}})$ + where $a,b,c,d\in\Bbb C$.

+
+ +

If you measure in this basis, the probability of ending in the state $\frac{|0\rangle+|1\rangle}{\sqrt{2}}$ after a measurement is $\frac{|c|^2}{|c|^2+|d|^2}\times 100\%$ and the probability of ending in the state $\frac{|0\rangle-|1\rangle}{\sqrt{2}}$ is $\frac{|d|^2}{|c|^2+|d|^2}\times 100\%$.

+ +

Similarly, for a $2$-qubit system the standard basis would be $\{|00\rangle,|01\rangle,|10\rangle,|11\rangle\}$ and its general state can be expressed as $\alpha|00\rangle + \beta|01\rangle + \gamma|10\rangle + \delta|11\rangle$. When you measure this in the standard basis you can easily see that the probability of ending up in the state (say) $|00\rangle$ will be $\frac{|\alpha|^2}{|\alpha|^2+|\beta|^2+|\gamma|^2+|\delta|^2}\times 100\%$. Similarly, you can deduce the probabilities for the other states.

+ +
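The same logic in code (a plain-Python sketch; the example amplitudes are arbitrary and deliberately unnormalised to show the division by the norm):

```python
def probabilities(amplitudes):
    # Born rule: p_i = |a_i|^2 / sum_j |a_j|^2
    norms = [abs(a) ** 2 for a in amplitudes]
    total = sum(norms)
    return [n / total for n in norms]

# amplitudes for |00>, |01>, |10>, |11> of (|00> + 2i|01> - |10>)/sqrt(6)
p = probabilities([1, 2j, -1, 0])
assert abs(p[0] - 1 / 6) < 1e-12
assert abs(p[1] - 4 / 6) < 1e-12
assert abs(sum(p) - 1) < 1e-12
```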

You should be able to extrapolate this same logic to general $n$-qubit states, now. Feel free to ask questions in the comments.

+",26,,26,,06-11-2018 16:06,06-11-2018 16:06,,,,0,,,,CC BY-SA 4.0 +2305,2,,2292,06-11-2018 17:09,,4,,"

You have mentioned five papers in the question, but one paper that remains unmentioned is the experimental implementation in 2009. Here you will find the actual circuit that was used to evaluate a Jones polynomial:

+ +

+ +

This might be the closest you will get to a ""more familiar"" presentation of the algorithm, as interest in the Jones polynomial and in DQC-1 have decayed a bit since 2009.

+ +

More details on this experiment can be found in Gina Passante's thesis.

+",2293,,2293,,6/14/2018 20:00,6/14/2018 20:00,,,,2,,,,CC BY-SA 4.0 +2306,1,2307,,06-11-2018 20:18,,10,611,"

Here the authors argue that efforts to create a scalable quantum neural network using a set of parameterized gates are doomed to fail for a large number of qubits. This is because, due to Levy's Lemma, the gradient of a function in high-dimensional spaces is almost zero everywhere.

+ +

I was wondering if this argument can be also applied to other hybrid quantum-classical optimization methods, like VQE (Variational Quantum Eigensolver) or QAOA (Quantum Approximate Optimization Algorithm).

+ +

What do you think?

+",1644,,55,,08-01-2020 11:37,08-01-2020 11:37,Barren plateaus in quantum neural network training landscapes,,1,5,,,,CC BY-SA 4.0 +2307,2,,2306,06-12-2018 00:47,,4,,"

First: The paper references [37] for Levy's Lemma, but you will find no mention of ""Levy's Lemma"" in [37]. You will find it called ""Levy's Inequality"", which is called Levy's Lemma in this, which is not cited in the paper you mention.

+ +

Second: There is an easy proof that this claim is false for VQE. In quantum chemistry we optimize the parameters of a wavefunction ansatz $|\Psi(\vec{p})\rangle$ in order to get the lowest (i.e. most accurate) energy. The energy is evaluated by:

+ +

$$ +E_{\vec{p}} = \frac{\left\langle \Psi(\vec{p})\right|H\left|\Psi(\vec{p})\right\rangle}{\left\langle\Psi(\vec{p}) \right|\left.\Psi(\vec{p}) \right\rangle}. +$$

+ +

VQE just means we use a quantum computer to evaluate this energy, and a classical computer to choose how to improve the parameters in $\vec{p}$ so that the energy will be lower in the next quantum iteration.

+ +

So whether or not the ""gradient will be 0 almost everywhere when the number of parameters in $\vec{p}$ is large"" does not depend at all on whether we are using VQE (on a quantum computer) or just running a standard quantum chemistry program (like Gaussian) on a classical computer. Quantum chemists typically variationally optimize the above energy with up to $10^{10}$ parameters in $\vec{p}$, and the only reason we don't go beyond that is because we run out of RAM, not because the energy landscape starts to become flat. In this paper you can see at the end of the abstract that they calculated the energy for a wavefunction with about $10^{12}$ parameters, where the parameters are coefficients of Slater determinants. It is generally known that the energy landscape is not so flat (like it would be if the gradient were 0 almost everywhere) even when there are a trillion parameters or even more.
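To emphasise that this energy is just a Rayleigh quotient that can be evaluated and minimised classically, here is a toy sketch (the $2\times 2$ Hamiltonian and the single-parameter ansatz are arbitrary illustrative choices of mine, not from any paper):

```python
import numpy as np

# Arbitrary toy Hermitian "Hamiltonian" and a one-parameter real ansatz
H = np.array([[1.0, 0.5], [0.5, -1.0]])

def energy(p):
    # Rayleigh quotient <psi|H|psi> / <psi|psi> for psi = cos(p)|0> + sin(p)|1>
    psi = np.array([np.cos(p), np.sin(p)])
    return psi @ H @ psi / (psi @ psi)

# Scanning the parameter recovers the lowest eigenvalue of H
p_grid = np.linspace(0, np.pi, 1000)
e_min = min(energy(p) for p in p_grid)
assert np.isclose(e_min, np.linalg.eigvalsh(H)[0], atol=1e-4)
```

The variational landscape of this toy problem is smooth, not flat, which is the point being made above.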

+ +

Conclusion: The application of Levy's Lemma is going to depend on the particular energy landscape that you have, which will depend on both $H$ and your ansatz $|\Psi(\vec{p})\rangle$. In the case of their particular implementation of QNN's, they have found an application of Levy's Lemma to be appropriate. In the case of VQE, we have a counter-example to the claim that Levy's Lemma ""always"" applies. The counter example where Levy's Lemma does not apply is when $H$ is a molecular Hamiltonian and $|\Psi\rangle$ is a CI wavefunction.

+",2293,,,,,06-12-2018 00:47,,,,4,,,,CC BY-SA 4.0 +2308,2,,2275,06-12-2018 06:52,,3,,"

A projection operator $P$ has two key properties: +$$ +P^\dagger=P\qquad P^2=P +$$ +A particularly simple instance of a projection operator is a rank 1 projector, $P=|\phi\rangle\langle\phi|$, which you can easily see satisfies the two properties given that $|\phi\rangle$ is a normalised state, so $\langle\phi|\phi\rangle=1$.

+ +

To see what rank the projector is, simply evaluate $\text{rank}(P)=\text{Tr}(P)$. In your state example of $P=|0\rangle\langle 0|+|2\rangle\langle 2|$, you can see that the rank is 2, so it cannot be written as a rank 1 projector $|\phi\rangle\langle\phi|$.
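As a small numerical illustration (a NumPy sketch I'm adding, modelling the system as a three-level qutrit so that $|2\rangle$ exists):

```python
import numpy as np

# P = |0><0| + |2><2| on a three-level (qutrit) space
P = np.zeros((3, 3))
P[0, 0] = 1
P[2, 2] = 1

# Both defining properties of a projector hold:
assert np.allclose(P, P.conj().T)  # Hermitian
assert np.allclose(P @ P, P)       # idempotent

# rank(P) = Tr(P) = 2, so P cannot be a rank-1 projector |phi><phi|
rank = int(round(np.trace(P)))
print(rank)  # 2
```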

+",1837,,,,,06-12-2018 06:52,,,,0,,,,CC BY-SA 4.0 +2309,2,,2282,06-12-2018 07:57,,0,,"

Ultimate Black Box

+ +

A quantum computer is by definition the ultimate black box. You feed in an input, a process happens, and an output is produced.

+ +

Any attempt to open up the black box will result in the process not happening.

+ +

Any engineer would tell you that this would hinder any design process. Even the smallest design flaw would take months of trial and error to trace down.

+",2622,,,,,06-12-2018 07:57,,,,0,,,,CC BY-SA 4.0 +2310,1,2314,,06-12-2018 09:48,,22,5500,"

Given a $2$ qubit-system and thus $4$ possible measurements results in the basis $\{|00\rangle$, $|01\rangle$, $|10\rangle$, $|11\rangle\}$, how can I prepare the state, where:

+ +
    +
  1. only $3$ of these $4$ measurement results are possible (say, $|00\rangle$, $|01\rangle$, $|10\rangle$)?

  2. +
  3. these measurements are equally likely? (like Bell state but for $3$ outcomes)

  4. +
+",2624,,55,,06-12-2018 14:01,3/27/2020 15:24,How can I build a circuit to generate an equal superposition of 3 outcomes for 2 qubits?,,5,5,,,,CC BY-SA 4.0 +2312,2,,2310,06-12-2018 12:28,,11,,"

I'll tell you how to create any two qubit pure state you might ever be interested in. Hopefully you can use it to generate the state you want.

+ +

Using a single qubit rotation followed by a cnot, it is possible to create states of the form

+ +

$$ \alpha \, |0\rangle \otimes |0\rangle + \beta \, |1\rangle \otimes |1\rangle .$$

+ +

Then you can apply an arbitrary unitary, $U$, to the first qubit. This rotates the $|0\rangle$ and $|1\rangle$ states to new states that we'll call $|a_0\rangle$ and $|a_1\rangle$,

+ +

$$U |0\rangle = |a_0\rangle, \,\,\, U |1\rangle = |a_1\rangle$$

+ +

Our entangled state is then

+ +

$$\alpha \, |a_0\rangle \otimes |0\rangle + \beta \, |a_1\rangle \otimes |1\rangle .$$

+ +

We can similarly apply a unitary to the second qubit.

+ +

$$V |0\rangle = |b_0\rangle, \,\,\, V |1\rangle = |b_1\rangle$$

+ +

which gives us the state

+ +

$$\alpha \, |a_0\rangle \otimes |b_0\rangle + \beta \, |a_1\rangle \otimes |b_1\rangle .$$

+ +

Due to the Schmidt decomposition, it is possible to express any pure state of two qubits in the form above. This means that any pure state of two qubits, including the one you want, can be created by this procedure. You just need to find the right rotation around the x axis, and the right unitaries $U$ and $V$.

+ +

To find these, you first need to get the reduced density matrix for each of your two qubits. The eigenstates of the density matrix of your first qubit will be your $|a_0\rangle$ and $|a_1\rangle$. The eigenstates for the second qubit will be $|b_0\rangle$ and $|b_1\rangle$. You'll also find that $|a_0\rangle$ and $|b_0\rangle$ share the same eigenvalue, which is $|\alpha|^2$. The coefficient $\beta$ can be similarly derived from the eigenvalue shared by $|a_1\rangle$ and $|b_1\rangle$.
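The whole procedure is essentially a singular value decomposition of the amplitude matrix. A minimal NumPy sketch (the example state below is an arbitrary normalised choice of mine, and the basis is ordered $|00\rangle,|01\rangle,|10\rangle,|11\rangle$):

```python
import numpy as np

# An arbitrary normalised two-qubit example state (not from the text above)
psi = np.array([0.5, 0.5j, 0.5, -0.5])

M = psi.reshape(2, 2)            # amplitudes as a 2x2 matrix
U, s, Vh = np.linalg.svd(M)      # singular values = Schmidt coefficients

alpha, beta = s                  # alpha^2, beta^2 are the reduced-state eigenvalues
a0, a1 = U[:, 0], U[:, 1]        # eigenstates of qubit 1's reduced density matrix
b0, b1 = Vh[0, :], Vh[1, :]      # eigenstates of qubit 2's reduced density matrix

# Reconstruct psi = alpha |a0>|b0> + beta |a1>|b1>
psi_rebuilt = alpha * np.kron(a0, b0) + beta * np.kron(a1, b1)
assert np.allclose(psi_rebuilt, psi)
```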

+",409,,,,,06-12-2018 12:28,,,,2,,,,CC BY-SA 4.0 +2313,2,,2310,06-12-2018 12:30,,12,,"

Here is how you might go about designing such a circuit.$\def\ket#1{\lvert#1\rangle}$ +Suppose that you would like to produce the state $\ket{\psi} = \tfrac{1}{\sqrt 3} \bigl( \ket{00} + \ket{01} + \ket{10} \bigr)$. Note the normalisation of ${\small 1}/\small \sqrt 3$, which is necessary for $\ket{\psi}$ to be a unit vector.

+ +

If we want to consider a straightforward way to realise this state, we might want to think in terms of the first qubit being a control, which determines whether the second qubit should be in the state $\ket{+} = \tfrac{1}{\sqrt 2}\bigl(\ket{0}+\ket{1}\bigr)$, or in the state $\ket{0}$, by using some conditional operations. This motivates considering the decomposition +$$ \ket{\psi} \;=\; \tfrac{\sqrt 2}{\sqrt 3}\ket{0}\ket{+} \;+\; \tfrac{1}{\sqrt 3}\ket{1}\ket{0}.$$ +Taking this view it makes sense to consider preparing $\ket{\psi}$ as follows:

+ +
    +
  1. Prepare two qubits in the state $\ket{00}$.
  2. +
  3. Rotate the first qubit so that it is in the state $\tfrac{\sqrt 2}{\sqrt 3}\ket{0} + \tfrac{1}{\sqrt 3}\ket{1}$.
  4. +
  5. Apply a coherently controlled operation on the two qubits which, when the first qubit is in the state $\ket{0}$, performs a Hadamard on the second qubit.
  6. +
+ +

Which specific operations you would apply to realise these transformations — i.e. which single-qubit transformation would be most suitable for step 2, and how you might decompose the two-qubit unitary in step 3 into CNOTs and Pauli rotations — is a simple exercise. (Hint: use the fact that both $X$ and the Hadamard are self-inverse to find as simple a decomposition as possible in step 3.)
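The three steps above can be checked numerically with a state-vector sketch (the particular rotation matrix and the basis ordering $|00\rangle,|01\rangle,|10\rangle,|11\rangle$, first qubit first, are my own choices for illustration):

```python
import numpy as np

# Step 2: rotate qubit 1 to sqrt(2/3)|0> + sqrt(1/3)|1> (one valid choice)
R = np.array([[np.sqrt(2 / 3), -np.sqrt(1 / 3)],
              [np.sqrt(1 / 3),  np.sqrt(2 / 3)]])
I = np.eye(2)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

# Step 3: Hadamard on qubit 2, controlled on qubit 1 being |0>
zero_ctrl_H = np.block([[H, np.zeros((2, 2))],
                        [np.zeros((2, 2)), I]])

psi = zero_ctrl_H @ np.kron(R, I) @ np.array([1, 0, 0, 0])
# Each nonzero amplitude is 1/sqrt(3) ~= 0.577
assert np.allclose(psi, np.array([1, 1, 1, 0]) / np.sqrt(3))
```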

+",124,,,,,06-12-2018 12:30,,,,0,,,,CC BY-SA 4.0 +2314,2,,2310,06-12-2018 12:46,,11,,"

Break the problem in parts.

+ +

Say we have already sent $\mid 00 \rangle$ to $\frac{1}{\sqrt{3}} \mid 00 \rangle + \frac{\sqrt{2}}{\sqrt{3}}\mid 01 \rangle$. We can send that to $\frac{1}{\sqrt{3}} \mid 00 \rangle + (\frac{1}{2} (1+i))\frac{\sqrt{2}}{\sqrt{3}}\mid 01 \rangle + (\frac{1}{2} (1-i))\frac{\sqrt{2}}{\sqrt{3}}\mid 10 \rangle$ by a $\sqrt{SWAP}$. That satisfies your requirements, with all probabilities $\frac{1}{3}$ but with different phases. If you want, use phase shift gates on each qubit to adjust the phases, for example to make them all equal.

+ +

Now how do we get from $\mid 00 \rangle$ to $\frac{1}{\sqrt{3}} \mid 00 \rangle + \frac{\sqrt{2}}{\sqrt{3}}\mid 01 \rangle$? If it were $\frac{1}{\sqrt{2}} \mid 00 \rangle + \frac{1}{\sqrt{2}}\mid 01 \rangle$, we could do a Hadamard on the second qubit. It is not as easy here, but we can still use a unitary acting only on the second qubit. That is done by a rotation operator purely on the second qubit, by factoring as

+ +

$$Id \otimes U : \; \mid 0 \rangle \otimes (\mid 0 \rangle) \to +\mid 0 \rangle \otimes (\frac{1}{\sqrt{3}} \mid 0 \rangle + \frac{\sqrt{2}}{\sqrt{3}} \mid 1 \rangle) +$$

+ +

$$ +U = \begin{pmatrix} +\frac{1}{\sqrt{3}} & \frac{\sqrt{2}}{\sqrt{3}} & \\ +\frac{\sqrt{2}}{\sqrt{3}} & -\frac{1}{\sqrt{3}} & \\ +\end{pmatrix} +$$ works. Decompose this into more basic gates if you need to.

+ +

In total we have:

+ +

$$ +\mid 00 \rangle \to \frac{1}{\sqrt{3}} \mid 00 \rangle + \frac{\sqrt{2}}{\sqrt{3}}\mid 01 \rangle\\ +\to \frac{1}{\sqrt{3}} \mid 00 \rangle + (\frac{1}{2} (1+i))\frac{\sqrt{2}}{\sqrt{3}}\mid 01 \rangle + (\frac{1}{2} (1-i))\frac{\sqrt{2}}{\sqrt{3}}\mid 10 \rangle\\ +\to \frac{1}{\sqrt{3}} \mid 00 \rangle + \frac{e^{i \theta_1}}{\sqrt{3}}\mid 01 \rangle + \frac{e^{i \theta_2}}{\sqrt{3}}\mid 10 \rangle +$$
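A quick numerical check of this sequence (a sketch I'm adding; the basis ordering $|00\rangle,|01\rangle,|10\rangle,|11\rangle$ and the standard $\sqrt{SWAP}$ matrix used below are assumptions):

```python
import numpy as np

# Step 1: U on the second qubit, taking |00> to 1/sqrt(3)|00> + sqrt(2/3)|01>
U = np.array([[1 / np.sqrt(3),  np.sqrt(2 / 3)],
              [np.sqrt(2 / 3), -1 / np.sqrt(3)]])
psi = np.kron(np.eye(2), U) @ np.array([1, 0, 0, 0])

# Step 2: sqrt(SWAP), basis ordered |00>, |01>, |10>, |11>
sqrt_swap = np.array([[1, 0, 0, 0],
                      [0, (1 + 1j) / 2, (1 - 1j) / 2, 0],
                      [0, (1 - 1j) / 2, (1 + 1j) / 2, 0],
                      [0, 0, 0, 1]])
psi = sqrt_swap @ psi

# The three nonzero outcomes are equally likely, up to phases
assert np.allclose(np.abs(psi) ** 2, np.array([1, 1, 1, 0]) / 3)
```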

+",434,,,,,06-12-2018 12:46,,,,5,,,,CC BY-SA 4.0 +2317,1,,,06-12-2018 20:23,,11,170,"

In [1], the problem of simulating a Hamiltonian using repeated applications of a different set of Hamiltonians is discussed.

+ +

In particular, let $A$ and $B$ be a pair of Hermitian operators, and let $\mathcal L$ be the algebra generated from $A, B$ through repeated commutation $^{\mathbf{(\dagger)}}$.

+ +

The author then asks (first paragraph of third page) what is $\mathcal L$ for an arbitrary pair of observables $A$ and $B$, and argues that $\mathcal L$ is the space of all Hermitian matrices, unless (quoting from the paper) both $e^{iA t}$ and $e^{iB t}$ lie in an $n$-dimensional unitary representation of some Lie group other than $U(n)$.

+ +

I'm not too familiar with the theory of Lie algebras, so this statement is quite cryptic for me. +How can this be shown more explicitly? +Equivalently, is there a more direct way to show this fact?

+ +
+ +

$(\dagger)$: More explicitly, this is the vector space spanned by $A, B, i[A,B], [A,[A,B]], ...$

+ +

[1] Lloyd 1995, Almost Any Quantum Logic Gate is Universal, Link to PRL.

+",55,,55,,06-12-2018 21:07,12/21/2021 12:06,"Why does (almost) every pair of Hamiltonians generate, through repeated commutation, the whole space of Hermitian matrices?",,2,4,,,,CC BY-SA 4.0 +2318,2,,2282,06-12-2018 20:30,,5,,"

Simpler answer: All quantum computers are classical computers too, if you limit their gate set to only classical gates such as $X$, which is the NOT gate. Every time you build a quantum computer, you're also building a classical computer, so you can prove mathematically that building a quantum computer must be at least as hard as building a classical computer.

+",2293,,,,,06-12-2018 20:30,,,,0,,,,CC BY-SA 4.0 +2319,2,,2317,06-12-2018 20:58,,2,,"
+

I'm not too familiar with the theory of Lie algebras, so this + statement is quite cryptic for me. How can this be shown more + explicitly? Equivalently, is there a more direct way to show this + fact?

+
+ +

At around the same time, David Deutsch et al. proved the same thing in this paper: Universality in Quantum Computation (1995), but without ever using the word ""algebra"" or ""Lie"" in the whole paper. The proof starts on page 3 and the main point is at Eq. 9, which is the same equation that appears in Seth Lloyd's paper, but here it is explained without reference to ""Lie algebras"". Eq. 9 is an application of what in physics we often just call the ""Trotter splitting"". It was written down almost 100 years earlier by Sophus Lie, but you do not need to know anything about Lie Algebras or even vector spaces in order to apply the formula as done in Eq. 9.
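The splitting itself is easy to check numerically. A small sketch (the choice of Pauli matrices as $A$ and $B$ is an arbitrary example; the matrix exponential is computed by eigendecomposition since the matrices are Hermitian):

```python
import numpy as np

# exp(iHt) for Hermitian H, via eigendecomposition
def expi(H, t):
    w, V = np.linalg.eigh(H)
    return V @ np.diag(np.exp(1j * w * t)) @ V.conj().T

A = np.array([[0, 1], [1, 0]], dtype=complex)   # Pauli X
B = np.array([[1, 0], [0, -1]], dtype=complex)  # Pauli Z
t, n = 1.0, 1000

# First-order Trotter/Lie splitting: (e^{iAt/n} e^{iBt/n})^n -> e^{i(A+B)t}
approx = np.linalg.matrix_power(expi(A, t / n) @ expi(B, t / n), n)
exact = expi(A + B, t)

# The error shrinks as O(1/n); here it is of order 1e-3
assert np.max(np.abs(approx - exact)) < 5e-3
```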

+",2293,,26,,6/13/2018 5:17,6/13/2018 5:17,,,,1,,,,CC BY-SA 4.0 +2320,1,2322,,06-12-2018 21:10,,8,118,"

If this isn't known, would they theoretically be faster? I'm particularly interested in knowing whether a QC would be faster at evaluating the fitness function of the possible solutions than a classical machine.

+",2637,,26,,6/13/2018 5:39,6/13/2018 15:31,Are Genetic Programming runtimes faster on QCs than on classical computers?,,1,1,,,,CC BY-SA 4.0 +2321,1,,,06-12-2018 23:55,,9,207,"

As the title suggests, I want to know what the applicability of quantum network coding is, besides the EPR pair construction between distant pairs of 'Users-Targets'.

+ +

Can quantum network coding be used for computation?

+",2422,,26,,12/13/2018 19:53,12/13/2018 19:53,What is the applicability of quantum network coding?,,1,5,,,,CC BY-SA 4.0 +2322,2,,2320,6/13/2018 0:22,,3,,"

There are quantum algorithms for genetic programming which would theoretically have advantages over the corresponding classical genetic programming algorithms but you would need a full-fledged quantum computer with more qubits than any quantum computer we currently have, in order to observe such an advantage.

+",2293,,2293,,6/13/2018 15:31,6/13/2018 15:31,,,,0,,,,CC BY-SA 4.0 +2323,2,,2321,6/13/2018 6:46,,5,,"

Network coding — both classical network coding, and quantum network coding — is an approach to distributing information by performing simple operations at nodes in a network, acting on input signals and transmitting the outputs to other nodes. To put it another way, network coding is an approach to distributing information using a communications network if we treat it as a logical circuit, albeit where the 'gates' performed at each node may be a bit more powerful than just AND, OR, CNOT, or the like.

+ +

In principle, we can use the setting of network coding to perform non-trivial computations by an appropriate choice of operations (gates) at the nodes. Network coding does not usually allow the freedom to also choose the structure of the network itself (i.e. the circuit topology), as this is usually given as an input parameter to a given network coding problem. But there will still be some range of computations which a given network can admit, not all of which will serve merely to distribute information.

+ +

In the particular case of quantum network coding, the detail that things are to be done in a distributed (and presumably coherent) manner does add wrinkles to how you can manage to accomplish things. However, if we allow classical communication between nodes in the network as well — either allowing classical messages to move both forward and backward within the coding network or in an all-to-all manner — then you can perform coherent quantum network coding for the k-pairs problem +[1] or an arbitrary network coding problem [2] respectively, provided that a classical network protocol exists for the same problem in the same network: and furthermore, the way this is done can be seen to essentially be Measurement Based Quantum Computation (MBQC), as Martin Roeteller and I showed [3]. Conversely, it is fairly clear that for any MBQC procedure, there is a corresponding coding network topology which allows that procedure to be realised.

+ +

It follows that, while the details are a bit pickier than in the classical case, quantum network coding can be viewed as a setting in which to do universal computation, specifically via MBQC, at least so long as auxiliary classical communication is allowed (with somewhat fewer constraints than on quantum communication).

+ +
+ +

[1] Constructing Quantum Network Coding Schemes from Classical Nonlinear Protocols. Kobayashi et al. (2010). [arXiv:1012.4583]

+ +

[2] General Scheme for Perfect Quantum Network Coding with Free Classical Communication. Kobayashi et al. (2009). [arXiv:0908.1457]

+ +

[3] Quantum linear network coding as one-way quantum computation. de Beaudrap & Roetteler (2014). [arXiv:1403.3533]

+",124,,124,,6/13/2018 16:53,6/13/2018 16:53,,,,0,,,,CC BY-SA 4.0 +2324,1,2328,,6/13/2018 8:57,,11,1526,"

I have some perplexity concerning the concept of phase estimation: by definition, given a unitary operator $U$ and an eigenvector $|u\rangle$ with associated eigenvalue $\text{exp}(2\pi i \phi)$, phase estimation allows one to find the value of $\phi$. This would mean that I can determine an eigenvalue of a certain matrix only if I already know one of its eigenvectors? But doesn't the need for an eigenvector beforehand considerably reduce the usefulness of phase estimation itself?

+",2648,,26,,6/13/2018 9:59,4/27/2022 21:23,What is the actual power of Quantum Phase Estimation?,,3,2,,,,CC BY-SA 4.0 +2325,2,,2295,6/13/2018 9:19,,4,,"
+

I am confused what ""conjugation of coordinates"" means in this context.

+
+ +

Conjugating coordinates of $\mathcal C$ is equivalent to setting some diagonal elements of Γ to 1.

+ +

Read ""Theorem 12, on page 8 and 9"" for an understanding of the usage, this is further explained on page 15 (last paragraph):

+ +
+

""As mentioned before, the set of self-dual linear codes over GF(4) is a subset of the self-dual additive codes of Type II. Note that conjugation of single coordinates does not preserve the linearity of a code. It was shown by Van den Nest $^{[25]}$ that the code $\mathcal C$ generated by a matrix of the form Γ + $ωI$ can not be linear. However, if there is a linear code equivalent to $\mathcal C$, it can be found by conjugating some coordinates. Conjugating coordinates of $\mathcal C$ is equivalent to setting some diagonal elements of Γ to 1. Let $A$ be a binary diagonal matrix such that Γ + $A$ + $ωI$ generates a linear code. Van den Nest $^{[25]}$ proved that $\mathcal C$ is equivalent to a linear code if and only if there exists such a matrix $A$ that satisfies Γ$^2$ + $A$Γ + Γ$A$ + Γ + $I$ = $0$. A similar result was found by Glynn et al. $^{[12]}$. Using this method, it is easy to check whether the LC orbit of a given graph corresponds to a linear code. However, self-dual linear codes over GF(4) have already been classified up to length 16, and we have not found a way to extend this result using the graph approach."".

+
+ +

References:
[12] D. G. Glynn, T. A. Gulliver, J. G. Maks, M. K. Gupta, The geometry of additive quantum codes, submitted to Springer-Verlag, 2004 (book).
[25] M. Van den Nest, Local Equivalence of Stabilizer States and Codes, Ph.D. thesis, K. U. Leuven, Leuven, Belgium, May 2005 (PDF; English text starts on page 22).

+",278,,,,,6/13/2018 9:19,,,,1,,,,CC BY-SA 4.0 +2326,2,,1952,6/13/2018 9:42,,2,,"

A couple papers are out there on algorithms which can be constructed using reverse annealing, http://iopscience.iop.org/article/10.1088/1367-2630/aa59c4/meta and https://arxiv.org/abs/1609.05875 (it is worth pointing out previous somewhat related closed system work: https://link.springer.com/article/10.1007/s11128-010-0168-z). As far as experimental results, I think the only ones publicly visible at the time of writing are the white paper given in the previous post. However, there will be some new work presented at AQC 2018 (https://ti.arc.nasa.gov/events/aqc-18/) in late June and these talks are usually put online a few months after the conference.

+",2649,,,,,6/13/2018 9:42,,,,0,,,,CC BY-SA 4.0 +2327,2,,2324,6/13/2018 10:04,,7,,"

Sometimes, you might know the eigenvector, and the computational question that you want to answer is what the eigenvalue is. For example, any function evaluation $f(x)$ defined by the action of a $U$ +$$ +U:|x\rangle|y\rangle\mapsto|x\rangle|y\oplus f(x)\rangle +$$for $x\in\{0,1\}^n$, $y\in\{0,1\}$ has well defined eigenvectors, +$$ +|x\rangle(|0\rangle\pm|1\rangle)/\sqrt{2}, +$$ +but whether the eigenvalue is $\pm 1$ is absolutely vital: that is essentially the question being asked in things like Deutsch's algorithm, Deutsch-Jozsa, Simon's algorithm, Bernstein-Vazirani etc. It's also the way that the oracle for quantum search is often constructed.
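For a 1-bit input this is easy to verify numerically (a sketch I'm adding; the particular $f$ is an arbitrary illustrative choice):

```python
import numpy as np

# Arbitrary illustrative f for n = 1
f = {0: 1, 1: 0}

# U: |x>|y> -> |x>|y XOR f(x)>, basis ordered |00>, |01>, |10>, |11>
U = np.zeros((4, 4))
for x in (0, 1):
    for y in (0, 1):
        U[2 * x + (y ^ f[x]), 2 * x + y] = 1

minus = np.array([1, -1]) / np.sqrt(2)
for x in (0, 1):
    v = np.kron(np.eye(2)[x], minus)
    # |x>|-> is an eigenvector with eigenvalue (-1)^f(x)
    assert np.allclose(U @ v, (-1) ** f[x] * v)
```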

+ +

In a slightly more generalised setting (that applies, for example, to Shor's algorithm), you might not need to find a specific eigenvalue, but a random choice from some subset will do. So it might be that there's a standard state (e.g. $|00\ldots 01\rangle$) that has support on all of the eigenvectors from which you want to pick an eigenvalue randomly, but you have no idea what the individual eigenvectors are, but you can run phase estimation with that input, and you'll be just fine.

+",1837,,,,,6/13/2018 10:04,,,,0,,,,CC BY-SA 4.0 +2328,2,,2324,6/13/2018 10:25,,9,,"

If you don't supply a $|u\rangle$ as an input, there are two possible things you might want to get out:

+ +
    +
  1. The $\varphi$ for a randomly chosen (but unknown) eigenstate $|u\rangle$;

  2. +
  3. Both $\varphi$ and $|u\rangle$ for one or more eigenstates.

  4. +
+ +

Let's first look at 1. Since eigenstates form a complete basis, any input state you use can be interpreted as a superposition of the eigenstates of $U$. Due to the linearity of quantum mechanics, the algorithm would then run for all these states at once. Then at the end, when you measure the output, it will randomly collapse to a given instance. This means that you'll be given a $\varphi$ for a randomly chosen eigenstate, but you won't know which it is. The existing phase estimation algorithm therefore can supply us with the first possible application.

+ +

The second application is something we can't do with standard phase estimation, but let's consider it hypothetically. Any algorithm that could do this would be giving us an eigenstate $|u\rangle$ as part of the output. So if you want to actually know what $|u\rangle$ is, you'd have to do tomography on the output. Since tomography is inefficient, there would probably be better ways to go about doing this.

+",409,,409,,6/13/2018 10:43,6/13/2018 10:43,,,,0,,,,CC BY-SA 4.0 +2329,1,2331,,6/13/2018 10:37,,8,213,"

If we start with a Hamiltonian $H(t_i)$, and with our qubits prepared in the ground state of this, and then slowly change this to a Hamiltonian $H(t_f)$, the final state of our qubits should be the ground state of the new Hamiltonian. This is due to the adiabatic theorem, and is the basis of quantum annealing.

+ +

But what if it's not the ground state we want? Suppose we start with the first excited state of $H(t_i)$. Does the process give us the first excited state of $H(t_f)$? What about for other states?

+",409,,,,,6/13/2018 15:18,Can quantum annealing find excited states?,,1,0,,,,CC BY-SA 4.0 +2331,2,,2329,6/13/2018 15:18,,4,,"

In Practice:

+ +

Quantum annealing almost always gives excited states in practice. To get the exact ground state at the end, you need the adiabatic passage to be perfect.

+ +

The closest thing to a perfect adiabatic passage is probably this recent paper where they get the ground state with 0.975 fidelity, but this is for 3 qubits with a very simple Hamiltonian (see Eq. 5).

+ +

However in the D-Wave machine with 2000 qubits there's $2^{2000}-1$ excited states and much higher likelihood that many of them will be near the ground state. Almost every problem D-Wave has worked on recently seeks an approximate solution, not the absolute global minimum.

+ +

In Theory:

+ +

What if my annealer is perfect and we can stay in the true ground initial state the whole time? Yes it should be possible to prove the adiabatic theorem for any initial state, not just the ground state, but how are you going to initialize the system in some particular excited state?

+ +

There is indeed a technique for doing ""constrained"" annealing described here. The idea is to use a driver Hamiltonian that commutes with the constraints (see the paragraph above Eq. 2)!

+ +

More generally you can adiabatically evolve along a particular symmetry sector. For example if the ground state of a molecular Hamiltonian is a singlet state, but you are only interested in a triplet (excited) state, as long as you start with a state that has triplet symmetry and your driver Hamiltonian conserves spin, you can prove a generalization of the ""basic"" adiabatic theorem that states that you will remain in the triplet state if you evolve slowly enough.

+",2293,,,,,6/13/2018 15:18,,,,0,,,,CC BY-SA 4.0 +2332,2,,2298,6/13/2018 18:29,,4,,"
+

They say that this scheme requires to throw $O(N)$ switches for each memory call, but I don't understand why is this the case. + From the above, it would seem that one just needs to throw $O(\log_2(N))$ + switches, one per bifurcation, to get from the top to the bottom.

+
+ +

It seems to me that $n= \log_2 N$ is the length of the address register: the number of times that you have to say left or right to reach a given node.

+ +

The number of 'switches' is the amount of hardware required to realize the full graph. For the classical case, this means transistors. From the paper:

+ +
+

An electronic implementation requires placing one transistor in each + of the two paths following each node in the graph. Each address bit + controls all the transistors in one of the graph levels: it activates + all the transistors in the left paths if it has value 0, or all the + transistors in the right paths if it has value 1

+
+ +

The number of transistors required for the bottom level of the graph is $O(N)$. This then decreases exponentially as you go up the graph. So the total number is also $O(N)$. All are active, since each address bit activates all the transistors in a given level.

+ +
+

In other words, is the advantage of the bucket-brigade approach only in the higher error resilience in the quantum case, or would it also be classically advantageous, compared with the conventional approaches?

+
+ +

I think that it would indeed work in classical RAM, but the hardware constraints didn't supply the 'evolutionary pressure' required for it to be developed and implemented. The quantum need to hide from errors is the main motivation.

+",409,,,,,6/13/2018 18:29,,,,1,,,,CC BY-SA 4.0 +2333,1,2335,,6/14/2018 1:59,,13,671,"

Tried asking here first, since a similar question had been asked on that site. Seems more relevant for this site however.

+ +

It is my current understanding that a quantum XOR gate is the CNOT gate. Is the quantum XNOR gate a CCNOT gate?

+",2645,,2645,,07-10-2018 23:34,07-10-2018 23:34,Quantum XNOR Gate Construction,,2,1,,,,CC BY-SA 4.0 +2334,2,,2333,6/14/2018 3:25,,4,,"

The quantum XNOR is not a CCNOT. CCNOT would take 3 bits as input, whereas XOR, XNOR, and CNOT take in only 2 bits or qubits as input.

+ +

The reason why we say the XOR can be thought of as a CNOT is explained here, and the same reasoning can be used to construct the (2 qubit) XNOR.

+",2293,,2293,,6/14/2018 13:50,6/14/2018 13:50,,,,2,,,,CC BY-SA 4.0 +2335,2,,2333,6/14/2018 7:30,,8,,"

Any classical one-bit function $f:x\mapsto y$ where $x\in\{0,1\}^n$ is an $n$-bit input and $y\in\{0,1\}$ is a one-bit output can be written as a reversible computation, +$$ +f_r:(x,y)\mapsto (x,y\oplus f(x)) +$$ +(Note that any function of $m$ outputs can be written as just $m$ separate 1-bit functions.)

+ +

A quantum gate implementing this is basically just the quantum gate corresponding to the reversible function evaluation. If you simply write out the truth table of the function, each line corresponds to a row of the unitary matrix, and the output tells you which column entry contains a 1 (all other entries contain 0).

+ +

In the case of XNOR, we have the standard truth table, and the reversible function truth table +$$ +\begin{array}{c|c} +x & f(x) \\ +\hline +00 & 1 \\ +01 & 0 \\ +10 & 0 \\ +11 & 1 +\end{array} +\qquad +\begin{array}{c|c} +(x,y) & (x,y\oplus f(x)) \\ +\hline +000 & 001 \\ +001 & 000 \\ +010 & 010 \\ +011 & 011 \\ +100 & 100 \\ +101 & 101 \\ +110 & 111 \\ +111 & 110 +\end{array} +$$ +Thus, the unitary matrix is +$$ +U=\left(\begin{array}{cccccccc} +0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ +1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ +0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ +0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ +0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ +0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ +0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ +0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\ +\end{array}\right). +$$ +This can easily be decomposed in terms of a couple of controlled-not gates and a bit flip or two.
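The truth-table-to-unitary recipe can also be carried out mechanically. A NumPy sketch (the bit ordering, most significant bit first, is my own convention):

```python
import numpy as np

def xnor(a, b):
    return 1 - (a ^ b)

# Permutation matrix for (x, y) -> (x, y XOR f(x)) with f = XNOR,
# basis ordered |a b y> with a the most significant bit
U = np.zeros((8, 8))
for inp in range(8):
    a, b, y = (inp >> 2) & 1, (inp >> 1) & 1, inp & 1
    out = 4 * a + 2 * b + (y ^ xnor(a, b))
    U[out, inp] = 1

# Matches the truth table above: 000 -> 001, 110 -> 111, etc.
assert U[1, 0] == 1 and U[7, 6] == 1
# A permutation matrix is automatically unitary
assert np.allclose(U @ U.T, np.eye(8))
```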

+ +

The method that I just outlined gives you a very safe way of making the construction that works for any $f(x)$, but it does not perfectly reconstruct the correspondence between XOR and controlled-not. For that, we need to assume a little bit more about the properties of the function $f(x)$.

+ +

Assume that we can decompose the input $x$ into $a,b$ such that $a\in\{0,1\}^{n-1}$ and $b\in \{0,1\}$ such that for all values of $a$, the values of $f(a,b)$ are distinct for each $b$. In this case, we can define the reversible function evaluation as $$f:(a,b)\mapsto(a,f(a,b)).$$ This means that we're using one fewer bit than the previous construction, but from here on the technique can be repeated.

+ +

So, let's go back to the truth table for XNOR. +$$ +\begin{array}{c|c} +ab & f(a,b) \\ +\hline +00 & 1 \\ +01 & 0 \\ +10 & 0 \\ +11 & 1 +\end{array} +$$ +We can see that, for example, when we fix $a=0$, the two outputs are $1,0$, hence distinct. Similarly for fixing $a=1$. Thus, we can proceed with the reversible function construction +$$ +\begin{array}{c|c} +ab & af(a,b) \\ +\hline +00 & 01 \\ +01 & 00 \\ +10 & 10 \\ +11 & 11 +\end{array} +$$ +and this gives us a unitary +$$ +U=\left(\begin{array}{cccc} +0 & 1 & 0 & 0 \\ +1 & 0 & 0 & 0 \\ +0 & 0 & 1 & 0 \\ +0 & 0 & 0 & 1 +\end{array}\right) +$$ +You can easily check that this is the same as $\text{cNOT}\cdot(\mathbb{1}\otimes X)$.
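This last identity is easy to verify numerically (a small check I'm adding, assuming the basis ordering $|00\rangle,|01\rangle,|10\rangle,|11\rangle$ with the first qubit as the CNOT control):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]])
I = np.eye(2)
# CNOT with the first qubit as control, basis |00>, |01>, |10>, |11>
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])
U = np.array([[0, 1, 0, 0],
              [1, 0, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1]])

# U = CNOT . (I tensor X)
assert np.allclose(CNOT @ np.kron(I, X), U)
```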

+",1837,,,,,6/14/2018 7:30,,,,1,,,,CC BY-SA 4.0 +2336,1,2337,,6/14/2018 9:00,,10,4584,"

What conditions must a matrix hold to be considered a valid density matrix?

+",2559,,,,,6/14/2018 11:09,How to check if a matrix is a valid density matrix?,,2,0,,,,CC BY-SA 4.0 +2337,2,,2336,6/14/2018 9:02,,13,,"

If a matrix has unit trace and if it is positive semi-definite (and Hermitian) then it is a valid density matrix. More specifically: check that the matrix is Hermitian, find its eigenvalues, and check that they are non-negative and add up to $1$.
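A minimal sketch of such a check (my own helper function, not a standard library routine):

```python
import numpy as np

def is_density_matrix(rho, tol=1e-9):
    rho = np.asarray(rho, dtype=complex)
    hermitian = np.allclose(rho, rho.conj().T, atol=tol)
    unit_trace = np.isclose(np.trace(rho).real, 1, atol=tol)
    # For a Hermitian matrix, positive semi-definite <=> all eigenvalues >= 0
    psd = hermitian and np.all(np.linalg.eigvalsh(rho) >= -tol)
    return bool(hermitian and unit_trace and psd)

print(is_density_matrix(np.eye(2) / 2))               # True  (maximally mixed qubit)
print(is_density_matrix(np.array([[1, 1], [0, 0]])))  # False (not Hermitian)
```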

+",2663,,2663,,6/14/2018 9:32,6/14/2018 9:32,,,,6,,,,CC BY-SA 4.0 +2338,1,2339,,6/14/2018 9:13,,8,2874,"

If we are given a state of a qubit, how do we construct its density matrix?

+",2559,,26,,12/23/2018 12:32,12/23/2018 12:32,How to find a density matrix of a qubit?,,1,1,,,,CC BY-SA 4.0 +2339,2,,2338,6/14/2018 9:14,,9,,"

If you're given $|\psi\rangle$, just calculate $\rho=|\psi\rangle\langle\psi|$.

+ +

For example, $|\psi\rangle=\cos\theta|0\rangle+\sin\theta e^{i\phi}|1\rangle$, then $\langle\psi|=\cos\theta\langle 0|+\sin\theta e^{-i\phi}\langle 1|$. This means that +$$ +\rho=\left(\begin{array}{c} \cos\theta\\ \sin\theta e^{i\phi}\end{array}\right)\cdot \left(\begin{array}{cc} \cos\theta && \sin\theta e^{-i\phi}\end{array}\right)=\left(\begin{array}{cc} +\cos^2\theta & \cos\theta\sin\theta e^{-i\phi} \\ \cos\theta\sin\theta e^{i\phi} & \sin^2\theta +\end{array}\right). +$$
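Numerically, for example angles (a small sketch of the same outer-product construction; the angle values are arbitrary):

```python
import numpy as np

theta, phi = 0.3, 1.1  # arbitrary example angles
psi = np.array([np.cos(theta), np.sin(theta) * np.exp(1j * phi)])

# rho = |psi><psi|
rho = np.outer(psi, psi.conj())

# Entries match the matrix above
assert np.isclose(rho[0, 0], np.cos(theta) ** 2)
assert np.isclose(rho[1, 0], np.cos(theta) * np.sin(theta) * np.exp(1j * phi))
assert np.isclose(np.trace(rho).real, 1)
```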

+",1837,,,,,6/14/2018 9:14,,,,0,,,,CC BY-SA 4.0 +2340,2,,2073,6/14/2018 9:57,,0,,"
+

So my question is whether such a proof exists for the case where there is no initial entanglement.

+
+ +

The answer to this is no. As mentioned in the paper, if the subsystem is separable from the rest of the universe then a global unitary evolution will act as a CPTP map on the subsystem. You can verify this explicitly by writing down an unentangled pure state, evolving it by a unitary and then tracing out one part. This calculation is done explicitly in Chapter 8 of Nielsen and Chuang.

+",2663,,,,,6/14/2018 9:57,,,,0,,,,CC BY-SA 4.0 +2341,2,,2336,6/14/2018 11:09,,6,,"

Suppose someone has prepared your quantum system in one of an orthogonal set of states $\{|\psi_j\rangle\}$. You don't know which of these states they've prepared it in, but you do know that they prepared state $|\psi_j\rangle$ with probability $p_j$. Your system is then described by the density matrix,

+ +

$\rho = \sum_j \, p_j \, |\psi_j\rangle \langle\psi_j|$.

+ +

There are some properties that will apply to any density matrix of this form.

+ +
    +
  • Clearly it is diagonalizable, since it is explicitly written in terms of its eigenvalues $p_j$ and eigenstates $|\psi_j\rangle$.

  • +
  • Since the $|\psi_j\rangle \langle\psi_j|$ are Hermitian, and since probabilities are real numbers, the density matrix is Hermitian.

  • +
  • Since probabilities are all either zero or positive, the density matrix is positive semidefinite.

  • +
  • Since all probabilities must sum to 1, and the trace is a sum of eigenvalues, the density matrix must have a trace of 1.

  • +
+ +

These are exactly the properties required of all density matrices. Hopefully this derivation of them gives a bit of understanding of why they are required.
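
These properties are easy to confirm numerically. Below is a sketch (the orthonormal states are generated from a QR decomposition, and the probabilities are an arbitrary choice summing to 1):

```python
import numpy as np

rng = np.random.default_rng(0)

# columns of q form a random orthonormal set of states |psi_j>
q, _ = np.linalg.qr(rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4)))
p = np.array([0.4, 0.3, 0.2, 0.1])  # probabilities summing to 1

rho = sum(p[j] * np.outer(q[:, j], q[:, j].conj()) for j in range(4))

hermitian = np.allclose(rho, rho.conj().T)
eigvals = np.linalg.eigvalsh(rho)  # ascending; should match sorted p
unit_trace = np.isclose(np.trace(rho).real, 1.0)
```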

+",409,,,,,6/14/2018 11:09,,,,4,,,,CC BY-SA 4.0 +2342,1,,,6/14/2018 12:54,,11,261,"

As stated in the question, I have found in several papers (e.g. 1, 2) that in order to perform a quantum walk on a given tree it is necessary to add some nodes to the root $r$, say $r^{'}$ and $r^{""}$. Why are they needed?

+ +
+ +

References:

+ +
    +
  1. Farhi E., ""Quantum computation and decision trees"". +https://arxiv.org/abs/quant-ph/9706062

  2. +
  3. Ambainis A., ""Any AND-OR formula of size N can be evaluated in time $N^{1/2+o(1)}$ on a quantum computer"". http://www.ucw.cz/~robert/papers/andor-siamjc.pdf

  4. +
+",2648,,55,,07-05-2018 09:40,3/29/2021 23:57,"Quantum Walk: Why the need of adding ""tail"" nodes to the root?",,1,5,,,,CC BY-SA 4.0 +2343,1,2345,,6/14/2018 16:48,,24,2974,"

Quantum computing allows us to encrypt information in a different way compared to what we use today, but quantum computers are much more powerful than today's computers. So if we manage to build quantum computers (hence use quantum cryptography), will the so-called ""hackers"" have more or fewer chances of ""hacking"" into the systems? Or is it impossible to determine it?

+",2559,,26,,12/13/2018 19:53,6/30/2020 19:24,Is quantum cryptography safer than classical cryptography?,,3,1,,,,CC BY-SA 4.0 +2345,2,,2343,6/14/2018 18:05,,18,,"

If you are talking specifically about quantum key distribution (quantum cryptography being an umbrella term that could apply to lots of stuff), then once we have a quantum key distribution scheme, this is theoretically perfectly secure. Rather than the computational security that much of current cryptography is based on, quantum key distribution offers perfect (information-theoretic) security.

+ +

That said, it is only perfectly secure subject to certain assumptions relating mainly to lab security. These same assumptions are essentially present in the classical case as well, just that because the quantum experiments are a lot more fiddly, it might be harder to be completely on top of all the possible attacks. Realistically, these are already the directions in which cryptography is attacked, rather than trying to brute force a crack. For example, exploits relying on a bad implementation of a protocol (rather than the protocol itself being flawed).

+ +

What quantum crypto, or post quantum crypto, is aiming to do is sidestep the loss of computational security implied by a quantum computer. It will never avoid these implementation issues.

+ +

In a completely shameless plug, you might be interested in my introduction to quantum cryptography video. It talks a little about this computational vs perfect security question (although doesn't really talk about possible hacks of QKD).

+",1837,,1837,,6/15/2018 5:14,6/15/2018 5:14,,,,2,,,,CC BY-SA 4.0 +2346,1,2350,,6/14/2018 18:17,,6,555,"

There is an excellent answer to How do I add 1+1 using a quantum computer? that shows constructions of the half and full adders. In the answer, there is a source for the QRCA. I have also looked at this presentation.

+ +

I am still left with these questions:

+ +
    +
  1. What does a truth table for a QRCA look like?

  2. +
  3. What would a unitary matrix for a QRCA be?

  4. +
  5. What would a circuit diagram for a QRCA look like? (There is an example of a 6-bit circuit in the arXiv paper)

  6. +
+",2645,,2645,,6/16/2018 21:18,6/16/2018 21:18,Quantum Ripple Carry Adder Construction,,1,4,,,,CC BY-SA 4.0 +2347,1,2354,,6/14/2018 18:55,,20,15549,"

What is the motivation behind density matrices? And, what is the difference between the density matrices of pure states and density matrices of mixed states?

+ +

This is a self-answered sequel to What's the difference between a pure and mixed quantum state? & How to find a density matrix of a qubit? You're welcome to write alternate answers.

+",26,,,,,08-07-2021 08:30,Density matrices for pure states and mixed states,,2,0,,,,CC BY-SA 4.0 +2348,2,,2347,6/14/2018 18:55,,11,,"

The motivation behind density matrices[1]:

+

In quantum mechanics, the state of a quantum system is represented by a state vector, denoted $|\psi\rangle$ (and pronounced ket). A quantum system with a state vector $|\psi\rangle$ is called a pure state. However, it is also possible for a system to be in a statistical ensemble of different state vectors. For example, there may be a $50\%$ probability that the state vector is $|\psi_1\rangle$ and a $50\%$ chance that the state vector is $|\psi_2\rangle$. This system would be in a mixed state. The density matrix is especially useful for mixed states, because any state, pure or mixed, can be characterized by a single density matrix. +A mixed state is different from a quantum superposition. The probabilities in a mixed state are classical probabilities (as in the probabilities one learns in classical probability theory/statistics), unlike the quantum probabilities in a quantum superposition. In fact, a quantum superposition of pure states is another pure state, for example, $\frac{|0\rangle + |1\rangle}{\sqrt{2}}$. In this case, the coefficients $\frac{1}{\sqrt {2}}$ are not probabilities, but rather probability amplitudes.

+

Example: light polarization

+

An example of pure and mixed states is light polarization. Photons can have two helicities, corresponding to two orthogonal quantum states, $|R\rangle$ (right circular polarization) and $|L\rangle$ (left circular polarization). A photon can also be in a superposition state, such as $\frac{|R\rangle + |L\rangle}{\sqrt{2}}$(vertical polarization) or $\frac{|R\rangle - |L\rangle}{\sqrt{2}}$(horizontal polarization). More generally, it can be in any state $\alpha|R\rangle + \beta |L\rangle$ (with $|\alpha|^2+|\beta|^2=1$) corresponding to linear, circular or elliptical polarization. If we pass $\frac{|R\rangle + |L\rangle}{\sqrt{2}}$ polarized light through a circular polarizer which allows either only $|R\rangle$ polarized light, or only $|L\rangle$ polarized light, the intensity would be reduced by half in both cases. This may make it seem like half of the photons are in state $|R\rangle$ and the other in state $|L\rangle$. But this is not correct: Both $|R\rangle$ and $|L\rangle$ are partly absorbed by a vertical linear polarizer, but the $\frac{|R\rangle+|L\rangle}{\sqrt 2}$ light will pass through that polarizer with no absorption whatsoever.

+

However, unpolarized light such as the light from an incandescent light bulb is different from any state like $\alpha|R\rangle + \beta|L\rangle$ (linear, circular or elliptical polarization). Unlike linearly or elliptically polarized light, it passes through the polarizer with $50\%$ intensity loss whatever the orientation of the polarizer; and unlike circularly polarized light, it cannot be made linearly polarized with any wave plate because randomly oriented polarization will emerge from a wave plate with random orientation. Indeed, unpolarized light cannot be described as any state of the form $\alpha |R\rangle + \beta |L\rangle$ in a definite sense. However, unpolarized light can be described with ensemble averages, e.g. that each photon is either $|R\rangle$ with $50\%$ probability or $|L\rangle$ with $50\%$ probability. The same behaviour would occur if each photon was either vertically polarized with $50\%$ probability or horizontally polarized with $50\%$ probability.

+

Therefore, unpolarized light cannot be described by any pure state but can be described as a statistical ensemble of pure states in at least two ways (the ensemble of half left and half right circularly polarized, or the ensemble of half vertically and half horizontally linearly polarized). These two ensembles are completely indistinguishable experimentally, and therefore they are considered the same mixed state. One of the advantages of the density matrix is that there is just one density matrix for each mixed state, whereas there are many statistical ensembles of pure states for each mixed state. Nevertheless, the density matrix contains all the information necessary to calculate any measurable property of the mixed state.

+

Where do mixed states come from? To answer that, consider how to generate unpolarized light. One way is to use a system in thermal equilibrium, a statistical mixture of enormous numbers of microstates, each with a certain probability (the Boltzmann factor), switching rapidly from one to the next due to thermal fluctuations. Thermal randomness explains why an incandescent light bulb, for example, emits unpolarized light. A second way to generate unpolarized light is to introduce uncertainty in the preparation of the system, for example, passing it through a birefringent crystal with a rough surface, so that slightly different parts of the beam acquire different polarizations. A third way to generate unpolarized light uses an EPR setup: A radioactive decay can emit two photons travelling in opposite directions, in the quantum state $\frac{|R,L\rangle + |L,R\rangle}{\sqrt{2}}$. The two photons together are in a pure state, but if you only look at one of the photons and ignore the other, the photon behaves just like unpolarized light.

+

More generally, mixed states commonly arise from a statistical mixture of the starting state (such as in thermal equilibrium), from uncertainty in the preparation procedure (such as slightly different paths that a photon can travel), or from looking at a subsystem entangled with something else.

+

Obtaining the density matrix[2]:

+

As mentioned before, a system can be in a statistical ensemble of different state vectors. Say there is a probability $p_1$ that the state vector is $|\psi_1\rangle$ and a probability $p_2$ that it is $|\psi_2\rangle$, where $p_1$ and $p_2$ are the classical probabilities of each state being prepared.

+

Say, now we want to find the expectation value of an operator $\hat{O}$. Using also the cyclic invariance and linearity properties of the trace, we compute it as:

+

$$\langle \hat{O} \rangle = p_1\langle \psi_1 \lvert \hat{O} \lvert \psi_1 \rangle + p_2\langle \psi_2 \lvert \hat{O} \lvert \psi_2 \rangle = p_1Tr(\hat{O} \lvert \psi_1 \rangle \langle \psi_1 \lvert) + p_2Tr(\hat{O} \lvert \psi_2 \rangle \langle \psi_2 \lvert)$$

+

$$= Tr(\hat{O} (p_1 \lvert \psi_1 \rangle \langle \psi_1 \lvert) + p_2 \lvert \psi_2 \rangle \langle \psi_2 \lvert)) = Tr(\hat{O} \rho)$$

+

where $\rho$ is what we call the density matrix. The density operator contains all the information needed to calculate an expectation value for the experiment.

+

Thus, basically the density matrix $\rho$ is

+

$$p_1 \lvert \psi_1 \rangle \langle \psi_1 \lvert + p_2 \lvert \psi_2 \rangle \langle \psi_2 \lvert$$ in this case.

+

You can obviously extrapolate this logic for when more than just two state vectors are possible for a system, with different probabilities.
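
The equality $\langle \hat{O} \rangle = Tr(\hat{O}\rho)$ can be verified directly for a small example (a sketch; the observable and the two states below are arbitrary choices):

```python
import numpy as np

psi1 = np.array([1, 0], dtype=complex)
psi2 = np.array([1, 1], dtype=complex) / np.sqrt(2)
p1, p2 = 0.3, 0.7

O = np.array([[1, 2 - 1j], [2 + 1j, -1]])  # an arbitrary Hermitian observable

rho = p1 * np.outer(psi1, psi1.conj()) + p2 * np.outer(psi2, psi2.conj())

# ensemble average of expectation values vs. Tr(O rho)
lhs = p1 * (psi1.conj() @ O @ psi1) + p2 * (psi2.conj() @ O @ psi2)
rhs = np.trace(O @ rho)
```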

+

Calculating the density matrix:

+

Let's take an example, as follows.

+

+

In the above image, the incandescent light bulb $1$ emits completely randomly polarized photons $2$, described by a mixed-state density matrix.

+

As mentioned before, unpolarized light can be explained with an ensemble average, i.e. say each photon is either $|R\rangle$ or $|L\rangle$ with $50\%$ probability for each. Another possible ensemble average is: each photon is either $\frac{|R\rangle+|L\rangle}{\sqrt 2}$ or $\frac{|R\rangle - |L\rangle}{\sqrt 2}$ with $50\%$ probability for each. There are lots of other possibilities too. Try to come up with some yourself. The point to note is that the density matrix for all these possible ensembles will be exactly the same. And this is exactly the reason why density matrix decomposition into pure states is not unique. Let's check:

+

Case 1: $50\%$ $|R\rangle$ & $50\%$ $|L\rangle$

+

$$\rho_{\text{mixed}} = 0.5 |R\rangle \langle R| + 0.5 |L\rangle \langle L|$$

+

Now, in the basis $\{|R\rangle, |L\rangle\}$, $|R\rangle$ can be denoted as $\begin{bmatrix} 1 \\ 0 \end{bmatrix}$ and $|L\rangle$ can be denoted as $\begin{bmatrix} 0 \\ 1 \end{bmatrix}$

+

$$\therefore 0.5 \left(\begin{bmatrix} 1 \\ 0 \end{bmatrix} \otimes \begin{bmatrix} 1 & 0 \end{bmatrix}\right) + 0.5 \left(\begin{bmatrix} 0 \\ 1 \end{bmatrix} \otimes \begin{bmatrix} 0 & 1 \end{bmatrix}\right)$$

+

$$= 0.5 \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} + 0.5\begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix}$$

+

$$= \begin{bmatrix} 0.5 & 0 \\ 0 & 0.5 \end{bmatrix}$$

+

Case 2: $50\%$ $\frac{|R\rangle + |L\rangle}{\sqrt 2}$ & $50\%$ $\frac{|R\rangle - |L\rangle}{\sqrt 2}$

+

$$\rho_{\text{mixed}} = 0.5 \left(\frac{|R\rangle + |L\rangle}{\sqrt 2}\right)\otimes \left(\frac{\langle R| + \langle L|}{\sqrt 2}\right) + 0.5 \left(\frac{|R\rangle - |L\rangle}{\sqrt 2}\right)\otimes \left(\frac{\langle R| - \langle L|}{\sqrt 2}\right)$$

+

In the basis $\{\frac{|R\rangle + |L\rangle}{\sqrt 2}, \frac{|R\rangle - |L\rangle}{\sqrt 2}\}$, $\frac{|R\rangle + |L\rangle}{\sqrt 2}$ can be denoted as $\begin{bmatrix} 1 \\ 0 \end{bmatrix}$ and $\frac{|R\rangle - |L\rangle}{\sqrt 2}$ can be denoted as $\begin{bmatrix} 0 \\ 1 \end{bmatrix}$

+

$$\therefore 0.5 \left(\begin{bmatrix} 1 \\ 0 \end{bmatrix} \otimes \begin{bmatrix} 1 & 0 \end{bmatrix}\right) + 0.5 \left(\begin{bmatrix} 0 \\ 1 \end{bmatrix} \otimes \begin{bmatrix} 0 & 1 \end{bmatrix}\right)$$

+

$$= 0.5 \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} + 0.5\begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix}$$

+

$$= \begin{bmatrix} 0.5 & 0 \\ 0 & 0.5 \end{bmatrix}$$

+

Thus, we can clearly see that we get the same density matrices in both case 1 and case 2.
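
A short numpy check of this claim (a sketch; $|R\rangle$ and $|L\rangle$ are represented as the two computational basis vectors):

```python
import numpy as np

R, L = np.array([1.0, 0.0]), np.array([0.0, 1.0])

def proj(v):
    # projector |v><v|
    return np.outer(v, v.conj())

# Case 1: 50% |R>, 50% |L>
rho1 = 0.5 * proj(R) + 0.5 * proj(L)

# Case 2: 50% (|R>+|L>)/sqrt(2), 50% (|R>-|L>)/sqrt(2)
plus = (R + L) / np.sqrt(2)
minus = (R - L) / np.sqrt(2)
rho2 = 0.5 * proj(plus) + 0.5 * proj(minus)
```

Both ensembles give the same matrix, namely the maximally mixed state $\frac12\mathbb{I}$.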

+

However, after passing through the vertical plane polarizer (3), the remaining photons are all vertically polarized (4) and have pure state density matrix:

+

$$\rho_{\text{pure}} = 1 \left(\frac{|R\rangle + |L\rangle}{\sqrt 2}\right)\otimes \left(\frac{\langle R| + \langle L|}{\sqrt 2}\right) + 0 \left(\frac{|R\rangle - |L\rangle}{\sqrt 2}\right)\otimes \left(\frac{\langle R| - \langle L|}{\sqrt 2}\right) $$

+

In the basis $\{\frac{|R\rangle + |L\rangle}{\sqrt 2}, \frac{|R\rangle - |L\rangle}{\sqrt 2}\}$, $\frac{|R\rangle + |L\rangle}{\sqrt 2}$ can be denoted as $\begin{bmatrix} 1 \\ 0 \end{bmatrix}$ and $\frac{|R\rangle - |L\rangle}{\sqrt 2}$ can be denoted as $\begin{bmatrix} 0 \\ 1 \end{bmatrix}$

+

$$\therefore 1 \left(\begin{bmatrix} 1 \\ 0 \end{bmatrix} \otimes \begin{bmatrix} 1 & 0 \end{bmatrix}\right) + 0 \left(\begin{bmatrix} 0 \\ 1 \end{bmatrix} \otimes \begin{bmatrix} 0 & 1 \end{bmatrix}\right)$$

+

$$= 1 \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} + 0\begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix}$$

+

$$= \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}$$

+

The single qubit case:

+

If your system contains just a single qubit and you know that its state is $|\psi\rangle = \alpha|0\rangle + \beta|1\rangle$ (where $|\alpha|^2+|\beta|^2=1$), then you are already sure that the 1-qubit system has the state $|\psi\rangle$ with probability $1$!

+

In this case, the density matrix will simply be:

+

$$\rho_{\text{pure}} = 1|\psi\rangle \langle \psi|$$

+

If you're using the orthonormal basis $\{\alpha|0\rangle + \beta|1\rangle,\beta^*|0\rangle - \alpha^*|1\rangle\}$,

+

the density matrix will simply be:

+

$$\begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}$$

+

This is very similar to 'case 2' above, so I didn't show the calculations. You can ask questions in the comments if this portion seems unclear.

+

However, you could also use the $\{|0\rangle,|1\rangle\}$ basis as @DaftWullie did in their answer.

+

In the general case for a 1-qubit state, the density matrix, in the $\{|0\rangle,|1\rangle\}$ basis would be:

+

$$\rho = 1(\alpha |0\rangle + \beta |1\rangle) \otimes (\alpha^* \langle 0| + \beta^* \langle 1|)$$

+

$$= \begin{bmatrix} \alpha \\ \beta \end{bmatrix} \otimes \begin{bmatrix} \alpha^* & \beta^* \end{bmatrix}$$

+

$$= \begin{bmatrix} \alpha\alpha^* & \alpha\beta^* \\ \beta\alpha^* & \beta\beta^* \end{bmatrix}$$

+

Notice that this matrix $\rho$ is idempotent i.e. $\rho = \rho^2$. This is an important property of the density matrices of a pure state and helps us to distinguish them from density matrices of mixed states.
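
This is easy to check for a concrete pure state (a sketch; the amplitudes $\alpha=0.6$, $\beta=0.8i$ are an arbitrary normalized choice):

```python
import numpy as np

alpha, beta = 0.6, 0.8j  # |alpha|^2 + |beta|^2 = 1
psi = np.array([alpha, beta])
rho = np.outer(psi, psi.conj())

idempotent = np.allclose(rho @ rho, rho)  # True for a pure state
```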

+

Obligatory exercises:

+
+

1. Show that density matrices of pure states can be diagonalized to the form $\text{diag}(1,0,0,...)$.
+2. Prove that density matrices of pure states are idempotent.

+
+
+

Sources & References:

+

[1]: https://en.wikipedia.org/wiki/Density_matrix

+

[2]: https://physics.stackexchange.com/a/158290

+

Image Credits:

+

User Kaidor +on Wikimedia

+",26,,4271,,08-07-2021 08:30,08-07-2021 08:30,,,,4,,,,CC BY-SA 4.0 +2349,2,,2073,6/14/2018 19:43,,1,,"

There are two possible answers.

+ +
    +
  1. Let's say the universe evolves from $t=0$ to $t_f$; then the unitary evolution $U$ from $0$ to $t_f$ induces a CP evolution on the subsystem. To see this, note that the composition of CP maps is CP. Now, the reduced (system) evolution is $Tr_E (U\rho_s\otimes\rho_E U^\dagger)$, which is a composition of the map $\rho_s\rightarrow \rho_s\otimes\rho_E$ (which is CP), unitary evolution, and partial trace (again CP). So overall it is CP.

    + +

    See theorem 5.4 in John Watrous's lecture notes

    + +

    Note that this extends to multiple systems since each would just involve a different partial trace.

  2. +
  3. Take the dynamics between $t_i>t_0$ and $t_f$; this might not be CP (or even linear!), which is why NCP maps (and similarly non-linear maps) can be physical. However, this is a slightly tricky subject since it is unclear how you would go about constructing the map. For example, different ways of doing process tomography could lead to different results.

  4. +
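
The first point can be illustrated numerically: embed the system with an environment, apply a joint unitary, trace out the environment, and confirm the output is still a valid density matrix (a sketch; the particular $\rho_s$ and the random unitary are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)

def ptrace_env(rho, d_s=2, d_e=2):
    # partial trace over the environment, for the ordering system (x) env
    r = rho.reshape(d_s, d_e, d_s, d_e)
    return np.einsum('iaja->ij', r)

rho_s = np.array([[0.7, 0.2], [0.2, 0.3]])  # a valid system state
rho_e = np.array([[1.0, 0.0], [0.0, 0.0]])  # environment in |0><0|

# a random unitary from a QR decomposition
U, _ = np.linalg.qr(rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4)))

out = ptrace_env(U @ np.kron(rho_s, rho_e) @ U.conj().T)
```

The output `out` is Hermitian, trace-1 and positive semi-definite, as a CP(TP) evolution requires.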
+",1864,,,,,6/14/2018 19:43,,,,0,,,,CC BY-SA 4.0 +2350,2,,2346,6/14/2018 22:27,,6,,"
+

What does a truth table for a QRCA look like?

+
+ +

You don't want to know. It will be a gigantic complicated table that provides no insight whatsoever. At the very least you need to use boolean algebra instead of a table, but even that will be cumbersome and will require many intermediate values that ultimately are just a less-visual way of describing an addition circuit.

+ +

If it helps, here is the set of equations for a simpler operation, an increment operation. The equations define how each output bit can be computed from the input bits:

+ +

$o_0 = i_0 \oplus 1$

+ +

$o_1 = i_1 \oplus i_0$

+ +

$o_2 = i_2 \oplus (i_0 \land i_1)$

+ +

$\vdots$

+ +

$o_n = i_n \oplus {\Large{\land}}_{k=0}^{n-1} i_k$
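
Transcribed directly into code, these equations give a classical ripple-style increment (a sketch; bits are listed least-significant first):

```python
def increment_bits(bits):
    # bits[k] = i_k, least significant bit first
    out = []
    prefix_and = 1  # the empty AND is 1, so o_0 = i_0 XOR 1
    for b in bits:
        out.append(b ^ prefix_and)
        prefix_and &= b
    return out

# example: 5 = [1, 0, 1] (LSB first) increments to 6 = [0, 1, 1]
```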

+ +
+

What would a unitary matrix for a QRCA be?

+
+ +

It's a permutation matrix.

+ +

As a starting point, here is the permutation matrix corresponding to a 2-bit increment:

+ +

$$\text{Inc}_2 = \begin{bmatrix} +&&&1\\ +1&&&\\ +&1&&\\ +&&1&\\ +\end{bmatrix}$$

+ +

and a 3-bit increment:

+ +

$$\text{Inc}_3 = \begin{bmatrix} +&&&&&&&1\\ +1&&&&&&&\\ +&1&&&&&&\\ +&&1&&&&&\\ +&&&1&&&&\\ +&&&&1&&&\\ +&&&&&1&&\\ +&&&&&&1&\\ +\end{bmatrix}$$

+ +

I suspect you see the pattern. Just start with an identity matrix and shift it down by 1 (with the bottom row wrapping around to the top). To add 2, instead of adding 1 (i.e. incrementing) you would just shift down by 2 instead of by 1.
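
The pattern can be generated programmatically (a sketch using numpy; `inc_matrix` builds the permutation matrix for $x \mapsto x+1 \bmod 2^n$):

```python
import numpy as np

def inc_matrix(n_bits):
    # permutation matrix sending basis state |x> to |x+1 mod 2**n_bits>
    d = 2 ** n_bits
    m = np.zeros((d, d), dtype=int)
    for x in range(d):
        m[(x + 1) % d, x] = 1
    return m

# shifting down by 2 (i.e. adding 2) is just the square of the increment
add_two = np.linalg.matrix_power(inc_matrix(2), 2)
```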

+ +

In an addition circuit, the amount of shifting depends on the other input. So you end up with a series of sub-matrices with increasingly-shifted diagonals:

+ +

$$\text{Add}_2 = \begin{bmatrix} +\begin{bmatrix} +1&&&\\ +&1&&\\ +&&1&\\ +&&&1\\ +\end{bmatrix} +\\& +\begin{bmatrix} +&&&1\\ +1&&&\\ +&1&&\\ +&&1&\\ +\end{bmatrix} +\\&& +\begin{bmatrix} +&&1&\\ +&&&1\\ +1&&&\\ +&1&&\\ +\end{bmatrix} +\\&&& +\begin{bmatrix} +&1&&\\ +&&1&\\ +&&&1\\ +1&&&\\ +\end{bmatrix} +\end{bmatrix}$$

+ +
+

What would a circuit diagram for a QRCA look like?

+
+ +

There are many possible constructions. Here is one that works entirely inline:

+ +

+ +

You can play with this construction in Quirk.

+",119,,,,,6/14/2018 22:27,,,,0,,,,CC BY-SA 4.0 +2351,2,,2343,6/14/2018 22:48,,11,,"

Most attacks now on classical computers don't actually break the encryption, they trick the systems / communication protocols into using it in a weak way, or into exposing information via side channels or directly (via exploits like buffer overflows).

+ +

Or they trick humans into doing something (social engineering).

+ +

I.e. currently you don't attack the crypto itself (because things like AES or RSA are very well tested), you attack the system built around it and the people using it.

+ +
+ +

All of these avenues of attack will still be present when computers communicate via quantum encryption. However, with quantum encryption theoretically giving perfect security instead of just computational security, tricking a system into weakening its encryption (by using weak keys or wrong keys or keys you already know) shouldn't be a problem.

+ +

Possibly there will be weaknesses that systems need to avoid in quantum crypto, especially practical implementations that work over imperfect channels.

+ +
+ +

TL:DR: When quantum computers can practically attack RSA and the world switches over to quantum crypto for communication without a pre-shared secret, we'll be back in the same situation we are now: the crypto itself is not the weakest link.

+",2433,,2433,,6/14/2018 23:22,6/14/2018 23:22,,,,2,,,,CC BY-SA 4.0 +2352,1,2573,,6/15/2018 1:30,,7,519,"

After getting help here with XNOR & RCA gates I decided to dive into XOR Swaps & XOR linked lists. I was able to find this explanation for quantum XOR Swapping which seems sufficient for the time being. I am not able to find any information on quantum XOR linked lists however (Google returns ""no results"").

+ +

How would a quantum XOR linked list be expressed?

+",2645,,2645,,8/31/2018 2:24,8/31/2018 2:24,Quantum XOR Linked List Construction,,1,0,,,,CC BY-SA 4.0 +2353,1,2356,,6/15/2018 6:10,,5,204,"

I would like to implement a quantum program on the IBM Composer with following characteristics:

+ +
    +
  1. The output is the observed value of 3 qubits
  2. +
  3. Only one of the 3 qubits should be observed as $1$, the 2 others should be $0$
  4. +
  5. The probability that a qubit is observed as $1$ is $1/3$
  6. +
+ +

So the 3 possible outputs of this circuit are: $|100\rangle$,$|010\rangle$,$|001\rangle$ (all 3 having equal probability of being output)

+ +

So how can we best implement such a circuit on the IBM Composer ?

+",2529,,26,,12/13/2018 19:53,12/13/2018 19:53,How to implement a random selection of one of the 3 qubits on the IBM Q (composer)?,,1,0,,,,CC BY-SA 4.0 +2354,2,,2347,6/15/2018 7:13,,14,,"

Motivation

+ +

The motivation behind density matrices is to represent a lack of knowledge about the state of a given quantum system, encapsulating within a single description of this system all the possible outcomes of measurement results, given what we know about the system. The density matrix representation has the added advantage of getting rid of any issues associated with global phases because +$$ +|\phi\rangle\langle\phi|=(e^{i\varphi}|\phi\rangle)(e^{-i\varphi}\langle\phi|). +$$ +The lack of knowledge might arise in a variety of ways:

+ +
    +
  • A subjective lack of knowledge - a referee prepares for you one of a set of states $\{|\phi_i\rangle\}$ with probability $p_i$, but you don't know which. Even if they know which $|\phi_j\rangle$ they prepared, since you do not, you have to describe it based upon what you know of the possible set of states and their corresponding probabilities, $\rho=\sum_ip_i|\phi_i\rangle\langle \phi_i|$.

  • +
  • An objective lack of knowledge - if the quantum system is part of a larger entangled state, it is impossible to describe the system as a pure state, but all possible outcomes of measurements are described by the density matrix obtained by $\rho=\text{Tr}_B(\rho_{AB})$.

  • +
+ +

It is interesting, however, that the objective lack of knowledge can become subjective - a second party can perform operations on the rest of the entangled state. They can know the measurement results etc. but if they don't pass those on, the person holding the original quantum system has no new knowledge, and so describes their system using the same density matrix as before, but it is now a subjective description.

+ +

It is also important to note that choosing a particular way of representing the density matrix, for example, $\rho=\sum_ip_i|\phi_i\rangle\langle \phi_i|$, is a very subjective choice. It may be motivated by a particular preparation procedure, but mathematically, any description that gives the same matrix is equivalent. For example, on a single qubit, $\rho=\frac12\mathbb{I}$ is known as the maximally mixed state. Due to the completeness relation of a basis, this can be represented as a 50:50 mixture or two orthogonal states using any 1-qubit basis. +$$ +\frac12\mathbb{I}=\frac12|0\rangle\langle 0|+\frac12|1\rangle\langle 1|=\frac12|+\rangle\langle +|+\frac12|-\rangle\langle -|. +$$

+ +

Pure and Mixed States

+ +

The difference between the density matrix of a pure state and a mixed state is straightforward - the pure state is a special case which can be written in the form $\rho=|\psi\rangle\langle\psi |$, while a mixed state cannot be written in this form. Mathematically, this means that the density matrix of a pure state has rank 1, while a mixed state has rank greater than 1. The best way of calculating this is via $\text{Tr}(\rho^2)$: $\text{Tr}(\rho^2)=1$ implies a pure state, otherwise it's mixed. To see this, recall that $\text{Tr}(\rho)=1$, meaning that all the eigenvalues sum to 1. Also, $\rho$ is positive semi-definite, so all the eigenvalues are real and non-negative. So, if $\rho$ is rank 1, the eigenvalues are $(1,0,0,\ldots ,0)$, and their sum-square is clearly 1. The sum-square of any other set of non-negative numbers that sum to 1 must be less than 1.
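
The purity test $\text{Tr}(\rho^2)$ in code (a minimal sketch):

```python
import numpy as np

def purity(rho):
    return np.trace(rho @ rho).real

pure = np.outer([1, 1], [1, 1]) / 2   # |+><+|
mixed = np.eye(2) / 2                 # maximally mixed state

# purity(pure) gives 1.0, purity(mixed) gives 0.5
```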

+ +

The pure state corresponds to perfect knowledge of the system, although the fun bit about quantum mechanics is that this does not imply full knowledge of the possible measurement outcomes. Mixed states represent some imperfect knowledge, whether that's knowledge of the preparation, or knowledge of a larger Hilbert space.

+ +

That the mixed state description is much richer can be seen from the Bloch sphere picture on a single qubit: the pure states are all those on the surface of the sphere, while the mixed states are all those contained within the volume. In terms of parameter counting, instead of two parameters, you need three, the extra one corresponding to the length of the Bloch vector. +$$ +\rho=\frac{\mathbb{I}+r\underline{n}\cdot\underline{\sigma}}{2}, +$$ +where $\underline{n}$ is a 3-element unit vector, $\underline{\sigma}$ is a vector of the Pauli matrices, and $r=1$ for a pure state, and $0\leq r<1$ for a mixed state.
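
The Bloch-vector parametrization translates directly to code (a sketch; $r=1$ reproduces a pure state, $r<1$ a mixed one):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]])

def bloch_state(r, n):
    # rho = (I + r n.sigma) / 2 for a unit vector n and 0 <= r <= 1
    n = np.asarray(n, dtype=float)
    n = n / np.linalg.norm(n)
    return (np.eye(2) + r * (n[0] * X + n[1] * Y + n[2] * Z)) / 2

rho_pure = bloch_state(1.0, [0, 0, 1])   # |0><0|
rho_mixed = bloch_state(0.5, [1, 0, 0])  # purity (1 + r^2)/2 = 0.625
```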

+",1837,,,,,6/15/2018 7:13,,,,4,,,,CC BY-SA 4.0 +2355,1,2360,,6/15/2018 11:23,,8,780,"

I was wondering if something like this is possible in QISKit: let's say we have two registers containing target and ancilla qubits:

+ +

$a_0$ -------------------

+ +

$a_1$--------------------

+ +

$\vdots$

+ +

$a_4$ ------------------

+ +

$t_0$ ------------------

+ +

$t_1$ ------------------

+ +

$\vdots$

+ +

$t_4$ ------------------

+ +

These two registers are stored in one quantum register qr. So to access $a_0$ we would type qr[0], to access $a_1$ - qr[1], ..., for $t_4$ - qr[9]. We can pass this quantum register as an argument to some function:

+ +
foo(qr, ...)
+
+ +

What I want to do is to interleave the ancilla and target qubits:

+ +

$a_0$ -------------------

+ +

$t_0$--------------------

+ +

$\vdots$

+ +

$a_i$ ------------------

+ +

$t_i$ ------------------

+ +

$\vdots$

+ +

$a_4$ ------------------

+ +

$t_4$ ------------------

+ +

so to access $a_0$ I would type qr[0], for $t_0$ - qr[1] and so on. Finally, I would like to pass such a changed quantum register qr' again as an argument to some function

+ +
foo(qr', ...)
+
+ +

and in this function I would like to use these changed indices. Is this possible? Another solution I figured out was to pass arrays of indices for ancilla and target qubits, but I would like to avoid that. Another option would be to use swap gates on these qubits.

+",2098,,26,,12/31/2018 22:02,10-12-2022 18:49,Changing indices of qubits in QISKit,,2,2,,,,CC BY-SA 4.0 +2356,2,,2353,6/15/2018 12:17,,4,,"

The pure quantum state that satisfies your conditions is the W state in three qubits, +$$ \frac{1}{\sqrt{3}} \left(|001\rangle + |010\rangle + |100\rangle \right) $$

+ +

You can look at this answer for a high level circuit to construct this. The first gate in that circuit is a single qubit gate that effects the transformation, +$$ |0 \rangle \rightarrow \frac{1}{\sqrt{3}} |1 \rangle + \sqrt{\frac{2}{3}} | 0\rangle .$$

+ +

This you can implement in the composer as a $U_3$ gate with an appropriate value of theta.
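
For reference, the required angle works out as follows (a numpy sketch; this assumes the usual convention in which $U_3(\theta,0,0)|0\rangle = \cos(\theta/2)|0\rangle + \sin(\theta/2)|1\rangle$):

```python
import numpy as np

theta = 2 * np.arccos(np.sqrt(2 / 3))  # so that cos(theta/2) = sqrt(2/3)

amp0 = np.cos(theta / 2)  # amplitude of |0>: sqrt(2/3)
amp1 = np.sin(theta / 2)  # amplitude of |1>: 1/sqrt(3)
```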

+ +

Next you will need a controlled H gate between the first and second qubits, and a Toffoli gate. To implement them in the composer you can use the circuits given here +. The control gates in the answer have the control qubits flipped (the controls are triggered by $0$ and not $1$ ). So you will need to sandwich your control qubits in the composer between $X$ gates to get the desired circuit. As you can see, constructing this from scratch in the composer is rather tedious.

+",2663,,23,,6/15/2018 18:36,6/15/2018 18:36,,,,2,,,,CC BY-SA 4.0 +2357,2,,2058,6/15/2018 12:39,,0,,"

The situation of non-completely positive maps (or more generally non-linear maps) is controversial partly due to the precise definition of how you should construct the map. But it is easy to come up with an example of something that would seem to be NCP or even not linear.

+ +
    +
  1. Non-linear map.
  2. +
+ +

Consider a preparation device that can create a qubit in an arbitrary state $\rho$ (this device has 3 dials). Now let this device be constructed so that it also prepares a second state $\rho$ in the environment. I.e, you think you prepared a one qubit state $\rho$ but actually you prepared a two qubit state $\rho\otimes\rho$. The second qubit is the environment (which you cannot access), so if you perform tomography on your qubit, everything seems ok.

+ +

Now imagine that you also have the following black box - it has (as far as you can tell) one input and two outputs. In reality (unknown to you) it has two inputs and two outputs and it simply spits out both the system qubit and the environment qubit. As far as you can tell, this black box is a cloning machine, violating linearity.

+ +
    +
  1. NCP
  2. +
+ +

Similar to the idea above, but the preparation device prepares $\rho\otimes\rho^T$ (clearly this could be done in the lab). The black box will now be a one-rail box (one qubit input, one qubit output as far as the user is concerned), which swaps the system and environment. To you, it seems like a transposition map.

+ +

Note that both preparation devices are physical, but the way you construct the map might depend on how you use them. In the example above I assumed that a mixed state $\rho$ would only be constructed by using the three dials in the machine. In principle, I could try to construct a mixed state by flipping coins and preparing pure states with the right probability. Tomography would show that the processes are equivalent, but the environment would be different, and the map you would construct for the black boxes would be different.

+",1864,,,,,6/15/2018 12:39,,,,0,,,,CC BY-SA 4.0 +2358,1,2361,,6/15/2018 18:02,,20,692,"

It has been proven that adiabatic quantum computing is equivalent to ""standard"", or gate-model quantum computing. Adiabatic computing, however, shows promises for optimisation problems, where the objective is to minimise (or maximise) a function which is in some way related to the problem – that is, finding the instance that minimises (or maximises) this function immediately solves the problem.

+ +

Now, it seems to me that Grover's algorithm can essentially do the same: by searching over the solution space, it will find one solution (possibly out of many solutions) that satisfies the oracle criterion, which in this case equates to the optimality condition, in time $O(\sqrt N)$, where $N$ is the size of the solution space.

+ +

This algorithm has been shown to be optimal: as Bennett et al. (1997) put it, ""the class $\rm NP$ cannot be solved on a quantum Turing machine in time $o(2^{n/2})$"". In my understanding, this means there is no way to construct any quantum algorithm that finds a solution by searching through the space faster than $O(\sqrt N)$, where $N$ scales with the problem size.

+ +
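As a quick numerical sanity check of this $O(\sqrt N)$ behaviour (my own statevector sketch, not part of the question; the marked index and register size are arbitrary), the success probability peaks after roughly $\frac{\pi}{4}\sqrt{N}$ Grover iterations:

```python
import numpy as np

def grover_success_prob(n_bits, marked, iterations):
    """Simulate Grover iterations on a 2**n_bits unstructured search space."""
    N = 2 ** n_bits
    state = np.full(N, 1 / np.sqrt(N))     # uniform superposition
    for _ in range(iterations):
        state[marked] *= -1                # oracle: phase flip on the marked item
        state = 2 * state.mean() - state   # diffusion: inversion about the mean
    return state[marked] ** 2

N = 2 ** 10
optimal = int(np.floor(np.pi / 4 * np.sqrt(N)))   # ~ (pi/4) sqrt(N) iterations
print(optimal, grover_success_prob(10, marked=3, iterations=optimal))
# prints 25 and a success probability above 0.99
```

Fewer or more iterations both lower the success probability, which is the usual way the $\sqrt N$ optimality shows up in simulation.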

So my question is: while adiabatic quantum computing is often presented as being superior when it comes to optimisation problems, can it really be faster than $O(\sqrt N)$? If yes, this seems to contradict the optimality of Grover's algorithm, since any adiabatic algorithm can be simulated by a quantum circuit. If not, what is the point of developing adiabatic algorithms, if they are never going to be faster than something we can systematically construct with circuits? Or is there something wrong with my understanding?

+",2687,,26,,12/13/2018 19:54,05-01-2020 20:28,Can adiabatic quantum computing be faster than Grover's algorithm?,,2,0,,,,CC BY-SA 4.0 +2360,2,,2355,6/16/2018 0:33,,5,,"

The relationship between your indices can be captured by a map:

+ +

$$\{0: 0, 1: 2, 2: 4, 3: 6, 4: 8, 5: 1, 6: 3, 7: 5, 8: 7, 9: 9\}$$

+ +

You can then use this to specify where operations get applied to in a register.

+ +

Here is a simple code in QISKit (generalizes to arbitrary register length):

+ +
from qiskit import * 
+from qiskit.tools.visualization import *
+
+# build a register with k targets and k ancillas 
+k = 5
+qr = QuantumRegister(2*k)
+circ = QuantumCircuit(qr)
+
+# apply cx between ancillas and targets
+for i in range(k):
+    circ.cx(qr[i], qr[i+k])
+
+circuit_drawer(circ)
+
+ +

+ +
# specify the desired interleaving
+# {0: 0, 1: 2, 2: 4, 3: 6, 4: 8, 5: 1, 6: 3, 7: 5, 8: 7, 9: 9}
+new_qubit_map = {i: 2*i if i < k else 2*(i-k)+1 for i in range(len(qr))}
+
+# create the same circuit, but with the new interleaving
+circ_2 = QuantumCircuit(qr)
+for i in range(k):
+    circ_2.cx(qr[new_qubit_map[i]], qr[new_qubit_map[i+k]])
+
+circuit_drawer(circ_2)
+
+ +

+",2503,,26,,6/17/2018 6:23,6/17/2018 6:23,,,,0,,,,CC BY-SA 4.0 +2361,2,,2358,6/16/2018 0:52,,9,,"

Good question. For unstructured search, adiabatic quantum computation indeed gives the exact same $\sqrt{N}$ speedup that the standard gate-based Grover's algorithm does, as proven in this important paper by Roland and Cerf. This agrees with the equivalence between adiabatic and gate-based quantum computation that you mentioned.

+ +
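One way to see where the $\sqrt N$ in Roland and Cerf's analysis comes from is the minimum spectral gap of the interpolating Hamiltonian $H(s) = (1-s)\left(I - |\psi_0\rangle\langle\psi_0|\right) + s\left(I - |m\rangle\langle m|\right)$, which closes as $1/\sqrt N$ at $s = 1/2$. A small exact-diagonalization sketch for toy sizes (my own illustration, not from the paper):

```python
import numpy as np

def min_gap(n_bits, marked=0, steps=201):
    """Minimum gap of the Grover-search adiabatic Hamiltonian H(s)."""
    N = 2 ** n_bits
    psi0 = np.full((N, 1), 1 / np.sqrt(N))   # uniform superposition |psi_0>
    H0 = np.eye(N) - psi0 @ psi0.T           # beginning Hamiltonian
    H1 = np.eye(N)
    H1[marked, marked] = 0                   # final Hamiltonian I - |m><m|
    gaps = []
    for s in np.linspace(0, 1, steps):
        evals = np.linalg.eigvalsh((1 - s) * H0 + s * H1)  # ascending order
        gaps.append(evals[1] - evals[0])
    return min(gaps)

for n in (2, 4, 6):
    print(n, min_gap(n) * np.sqrt(2 ** n))   # each product is close to 1.0
```

The product $g_\text{min}\sqrt N \approx 1$ for every size, i.e. $g_\text{min}\sim 1/\sqrt N$; with a constant-speed schedule this would cost $O(N)$, and Roland and Cerf's locally adapted schedule recovers the $O(\sqrt N)$ runtime.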

(One minor correction to your question: you're correct that in the setup for the oracle-search problem, you need to frame your search query as a yes/no question that the oracle can answer. But the question isn't actually taken to be ""does $x$ extremize the function $f(x)$?"", as you proposed. Instead, it's ""is $f(x)$ less than or equal to $M$?"" See slides 9 and 10 here. That's because an oracle for the latter question is considered a more realistic model for a physical setup, where it's conceivable that one could directly compute or measure $f(x)$ for a given $x$, but not $f(x) - f_\text{min}$.)

+ +

Nevertheless, there are two potential advantages to adiabatic QC, both of which are difficult to study theoretically. The first is practical: actually building large coherent quantum circuits is a whole lot harder than just drawing them in a journal article. Even though adiabatic QC doesn't have any fundamental advantage over the traditional setup, it might be much easier to implement experimentally.

+ +

Secondly, the same huge caveat applies to AQC as to the standard Grover's algorithm: it only applies to unstructured or ""black-box"" search, where we completely ignore any correlations between the answers that the oracle gives when fed ""similar"" or ""related"" queries. Any actual search problem that we care about will by definition have some structure to it, although this structure may be far too complicated for us to analyze. For example, if we think of the function to be extremized as an energy landscape, it seems reasonable that the system can more easily tunnel between ""nearby"" local minima than between ""faraway"" ones.

+ +

So to really rigorously compare the relative benefits of the adiabatic vs. gate-based setups in a real experiment, you'd need to ""overcome the relativization barrier"" and consider the structure of the specific function that you're trying to extremize, which is usually really hard to do. This makes it very difficult to draw general conclusions about the two approaches' relative advantages in the real world. It's also why it's so hard to prove unconditional complexity separations theoretically. For all we know, for real-world rather than oracle problems, quantum computers might be able to give exponential speedups - possibly even for NP-complete problems, which would imply that NP $\subset$ BQP, although this is considered very unlikely.

+",551,,551,,05-01-2020 20:28,05-01-2020 20:28,,,,4,,,,CC BY-SA 4.0 +2363,1,2364,,6/16/2018 9:45,,6,194,"

Moore's law states that computer power doubles every 18 months (more formally: ""the number of transistors in a dense integrated circuit doubles about every two years.""). Historical data suggest that the observation has held so far, but aren't quantum computers much more powerful than just double-powered classical computers? A more interesting question is: is there even a way to improve the power of quantum computers?

+",2559,,26,,12/13/2018 19:54,03-03-2021 02:52,Will Moore's Law be no longer effective once quantum computers are created?,,3,0,,6/17/2018 17:05,,CC BY-SA 4.0 +2364,2,,2363,6/16/2018 17:31,,3,,"
+

but aren't quantum computers much more powerful than just + double-powered classical computers?

+
+ +

Yes. A universal quantum computer with only 100 qubits (12.5 quantum bytes) can find the ground state of a matrix with $2^{200} \approx 10^{60}$ elements. Assuming Moore's Law could continue forever (which is not true, due to physical limitations), it would take longer than the age of the universe (about 13.8 billion years) for the ""doubling of transistors every 18 months"" to bring classical computers to what a quantum computer with one quantum gigabyte can do, for certain problems.

+ +
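To put rough numbers on this (my own back-of-the-envelope arithmetic, not from the answer above): each 18-month doubling gains one factor of 2, so matching a $2^n$-fold gap via Moore-style scaling takes about $1.5\,n$ years:

```python
# Years of "doubling every 18 months" needed to gain a factor of 2**n
# in classical resources (back-of-the-envelope; assumes the law never stops).
def years_to_double_n_times(n, doubling_period_years=1.5):
    return n * doubling_period_years

print(years_to_double_n_times(200))          # 300.0 years for the 2**200 factor above
print(years_to_double_n_times(8 * 10 ** 9))  # ~1.2e10 years for one quantum gigabyte
```

For one quantum gigabyte ($8\times10^9$ qubits) the naive figure is on the order of $10^{10}$ years, i.e. comparable to the age of the universe.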
+

More interesting question is, is there even a way to improve the power of + quantum computers?

+
+ +

There have been proposals for exploiting certain types of phenomena that would lead to devices even more powerful than quantum computers, but in all cases quantum computers would be a special case of such devices (just like classical computers are a special case of quantum computers: they are quantum computers that only use classical gates and inputs that are not in any superposition). It is hard enough to build a quantum computer, so building the more generalized devices would be even harder.

+",2293,,26,,7/14/2018 18:31,7/14/2018 18:31,,,,0,,,,CC BY-SA 4.0 +2365,2,,2358,6/16/2018 17:51,,3,,"

Adiabatic quantum computation cannot do anything faster than circuit-based quantum computation from a computational complexity perspective. This is because there is a mathematical proof that circuit-based quantum computation can efficiently simulate adiabatic quantum computation [see section 5 of this paper].

+ +
+

can it really be faster than $\mathcal{O}(\sqrt{N})$?

+
+ +

The answer is no. This is because if AQC could do it in, say, $\mathcal{O}(\log{N})$, then circuit-based QC could also do it in $\mathcal{O}(\log{N})$ by the algorithm in section 5 of the paper I linked above. This would violate the optimality of $\mathcal{O}(\sqrt{N})$ for unstructured search.

+",2293,,,,,6/16/2018 17:51,,,,10,,,,CC BY-SA 4.0 +2366,1,,,6/16/2018 21:50,,10,431,"

This answer cites a paper[$\dagger$] which proposes a quantum blockchain using entanglement in time.

+ +

""The weakness is that the research only presents a conceptual design."" - QComp2018

+ +

How could a quantum blockchain which leverages time entanglement be realized?

+ +

Resources:

+ +
    +
  1. Quantum Secured Blockchain

  2. Quantum Bitcoin: An Anonymous and Distributed Currency Secured by the No-Cloning Theorem of Quantum Mechanics
+ +

[$\dagger$]: Quantum Blockchain using entanglement in time Rajan & Visser (2018)

+",2645,,2645,,07-03-2018 06:50,11/17/2018 22:41,Time Entangled Quantum Blockchain,,1,5,,,,CC BY-SA 4.0 +2367,2,,2363,6/17/2018 5:18,,1,,"

The quantum equivalent of Moore's Law is Rose's Law which states that ""the number of qubits in a scalable quantum computing architecture should double every year.""

+ +

The prediction was made by Geordie Rose of D-Wave circa 2003. See D-Wave's Future of Hardware, this article or this amazing answer for more info.

+ +

My understanding is that an $n$-qubit quantum computer operates on a superposition of $2^n$ basis states at once, so its state space doubles with every added qubit. With the number of qubits doubling every year, this means the computational power of quantum computers will accelerate very quickly.
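A concrete way to see the doubling (my own illustration, not part of Rose's original claim): storing the full state of $n$ qubits classically takes $2^n$ complex amplitudes, so every extra qubit doubles the memory required:

```python
# Memory to hold an n-qubit state vector classically, at 16 bytes per
# complex128 amplitude: 2**n amplitudes in total.
def statevector_bytes(n):
    return (2 ** n) * 16

print(statevector_bytes(30) / 2 ** 30)   # 16.0 -> 16 GiB for 30 qubits
print(statevector_bytes(31) / 2 ** 30)   # 32.0 -> one more qubit doubles it
```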

+",2645,,2645,,6/17/2018 6:05,6/17/2018 6:05,,,,0,,,,CC BY-SA 4.0 +2368,2,,2363,6/17/2018 5:52,,1,,"

Moore's law is not a fundamental law of nature. It is just a heuristic that Moore used to show the growing importance of computer technology. You should never take it for granted, and there is nothing wrong if the actual trend doesn't follow Moore's law.

+

Secondly, quantum computers give speed-ups only for certain kinds of computations. You cannot expect them to give an exponential speed-up for every classical algorithm, so it is inappropriate to compare a QC with a classical computer directly. Classical computers are going to stay even after QCs become commercially viable.

+",2384,,2293,,03-03-2021 02:52,03-03-2021 02:52,,,,1,,,,CC BY-SA 4.0 +2370,1,2379,,6/17/2018 16:47,,13,275,"

Background

+ +

The Toffoli gate is a 3-input, 3-output classical logic gate. It sends $(x, y, a)$ to $(x, y, a \oplus (x \cdot y))$. It is significant in that it is universal for reversible (classical) computation.

+ +

The Popescu-Rohrlich box is the simplest example of a non-signaling correlation. It takes a pair of inputs $(x, y)$ and outputs $(a, b)$ satisfying $x \cdot y = a \oplus b$ such that $a$ and $b$ are both uniform random variables. It is universal for a certain class of (but not all) non-signaling correlations.

+ +

To my eye, these two objects look extremely similar, especially if we augment the PR box by having it output $(x, y, a, b) = (x, y, a, a \oplus (x \cdot y))$. This 2-input, 4-output PR box ""is"" the 3-input, 3-output Toffoli gate but with the third input replaced by a random output. But I've been unable to locate any references that relate them.

+ +
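The identification sketched above is easy to check exhaustively (just the definitions restated in code; `augmented_pr_box` is my own name for the 2-input, 4-output box):

```python
from itertools import product

def toffoli(x, y, a):
    """Classical Toffoli: (x, y, a) -> (x, y, a XOR (x AND y))."""
    return x, y, a ^ (x & y)

def augmented_pr_box(x, y, a):
    """PR box with the random bit a made explicit: outputs (x, y, a, b)."""
    b = a ^ (x & y)
    return x, y, a, b

for x, y in product((0, 1), repeat=2):
    # b is uniform whenever a is uniform
    assert {augmented_pr_box(x, y, a)[3] for a in (0, 1)} == {0, 1}
    for a in (0, 1):
        _, _, aa, b = augmented_pr_box(x, y, a)
        assert (x & y) == (aa ^ b)            # PR-box correlation: x.y = a XOR b
        assert toffoli(x, y, a) == (x, y, b)  # matches Toffoli on the third wire
print('all 8 input combinations check out')
```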

Question

+ +

What is the relationship between the Toffoli gate and the Popescu-Rohrlich box? Is there something like a correspondence between reversible classical circuits and (a certain class of?) non-signaling correlations that maps one to the other?

+ +

Observations

+ +
    +
  1. Specifying a non-signaling correlation requires not just a function but also an assignment of each input and output to a party that controls it. A PR box is no longer non-signaling if we allow Alice to enter both inputs and Bob to read both outputs. Or in our ""augmented"" PR-box, if Alice inputs $x$, she must also be the one who reads the copy of $x$. So it seems nontrivial to determine, for a general circuit (with some inputs possibly replaced by random outputs), all the ways inputs and outputs can be assigned to parties such that communication is not possible.

  2. We can apply the above procedure to any logic gate, including irreversible ones. For instance, we can take AND and replace one of the inputs by a random output, and get a function from one input $x$ to a pair $(a, x \cdot a)$ where $a$ is a uniform random variable. However, $x \cdot a$ is $0$ conditioned on $x = 0$, so the only way this can be non-signaling is if Alice, who inputs $x$, receives $x \cdot a$. But this procedure can already be reproduced classically with a shared source of randomness. So I would expect that including irreversible gates does not expand the class of non-signaling correlations one can construct.
+",2547,,55,,2/14/2021 18:59,2/14/2021 18:59,What is the relationship between the Toffoli gate and the Popescu-Rohrlich box?,,1,0,,,,CC BY-SA 4.0 +2371,1,2535,,6/17/2018 17:08,,8,263,"

The C10k Problem is a classical computing problem whose name (C10k) is a numeronym for concurrently handling ten thousand connections.

+ +

How could a quantum network be constructed to handle 10,000 clients concurrently?

+",2645,,2645,,6/25/2018 9:20,6/29/2018 20:22,"How could a quantum network be constructed to handle 10,000 clients concurrently?",,2,6,,,,CC BY-SA 4.0 +2372,1,,,6/17/2018 17:55,,15,3412,"

I'm fairly confused about how Grover's algorithm could be used in practice and I'd like to ask help on clarification through an example.

+ +

Let's assume an $N=8$ element database that contains colors Red, Orange, Yellow, Green, Cyan, Blue, Indigo and Violet, and not necessarily in this order. My goal is to find Red in the database.

+ +

The input for Grover's algorithm is $n = \log_2(N=8) = 3$ qubits, where the 3 qubits encode the indices of the dataset. My confusion starts here (or perhaps already with the premises): as I understand it, the oracle actually searches for one of the indices of the dataset (represented by the superposition of the 3 qubits), and furthermore, the oracle is ""hardcoded"" for which index it should look for.

+ +

My questions are:

+ +
    +
  • What do I get wrong here?
  • If the oracle is really looking for one of the indices of the database, that would mean we already know which index we are looking for, so why search at all?
  • Given the above conditions with the colors, could someone point out whether it is possible with Grover's algorithm to look for Red in an unstructured dataset?
+ +

There are implementations of Grover's algorithm with an oracle for $n=3$ searching for $|111\rangle$, e.g. https://quantumcomputing.stackexchange.com/a/2205 (or see an R implementation of the same oracle below).

+ +

Again, my confusion is this: given that I do not know the positions of the $N$ elements in the dataset, the algorithm requires me to search for a string that encodes the position of an element. How do I know which position I should look for when the dataset is unstructured?

+ +

R code:

+ +
 #START
+ a = TensorProd(TensorProd(Hadamard(I2),Hadamard(I2)),Hadamard(I2))
+ # 1st CNOT
+ a1= CNOT3_12(a)
+ # 2nd composite
+ # I x I x T1Gate
+ b = TensorProd(TensorProd(I2,I2),T1Gate(I2)) 
+ b1 = DotProduct(b,a1)
+ c = CNOT3_02(b1)
+ # 3rd composite
+ # I x I x TGate
+ d = TensorProd(TensorProd(I2,I2),TGate(I2))
+ d1 = DotProduct(d,c)
+ e = CNOT3_12(d1)
+ # 4th composite
+ # I x I x T1Gate
+ f = TensorProd(TensorProd(I2,I2),T1Gate(I2))
+ f1 = DotProduct(f,e)
+ g = CNOT3_02(f1)
+ #5th composite
+ # I x T x T
+ h = TensorProd(TensorProd(I2,TGate(I2)),TGate(I2))
+ h1 = DotProduct(h,g)
+ i = CNOT3_01(h1)
+ #6th composite
+ j = TensorProd(TensorProd(I2,T1Gate(I2)),I2)
+ j1 = DotProduct(j,i)
+ k = CNOT3_01(j1)
+ #7th composite
+ l = TensorProd(TensorProd(TGate(I2),I2),I2)
+ l1 = DotProduct(l,k)
+ #8th composite
+ n = TensorProd(TensorProd(Hadamard(I2),Hadamard(I2)),Hadamard(I2))
+ n1 = DotProduct(n,l1)
+ n2 = TensorProd(TensorProd(PauliX(I2),PauliX(I2)),PauliX(I2))
+ a = DotProduct(n2,n1)
+ # repeat the same from the 1st CNOT gate
+ a1= CNOT3_12(a)
+ # 2nd composite
+ # I x I x T1Gate
+ b = TensorProd(TensorProd(I2,I2),T1Gate(I2))
+ b1 = DotProduct(b,a1)
+ c = CNOT3_02(b1)
+ # 3rd composite
+ # I x I x TGate
+ d = TensorProd(TensorProd(I2,I2),TGate(I2))
+ d1 = DotProduct(d,c)
+ e = CNOT3_12(d1)
+ # 4th composite
+ # I x I x T1Gate
+ f = TensorProd(TensorProd(I2,I2),T1Gate(I2))
+ f1 = DotProduct(f,e)
+ g = CNOT3_02(f1)
+ #5th composite
+ # I x T x T
+ h = TensorProd(TensorProd(I2,TGate(I2)),TGate(I2))
+ h1 = DotProduct(h,g)
+ i = CNOT3_01(h1)
+ #6th composite
+ j = TensorProd(TensorProd(I2,T1Gate(I2)),I2)
+ j1 = DotProduct(j,i)
+ k = CNOT3_01(j1)
+ #7th composite
+ l = TensorProd(TensorProd(TGate(I2),I2),I2)
+ l1 = DotProduct(l,k)
+ #8th composite
+ n = TensorProd(TensorProd(PauliX(I2),PauliX(I2)),PauliX(I2))
+ n1 = DotProduct(n,l1)
+ n2 = TensorProd(TensorProd(Hadamard(I2),Hadamard(I2)),Hadamard(I2))
+ n3 = DotProduct(n2,n1)
+ result=measurement(n3)
+ plotMeasurement(result)
+
+ +

+",2698,,,,,11-03-2020 13:43,Grover's algorithm: a real life example?,,3,2,,,,CC BY-SA 4.0 +2373,2,,2372,6/17/2018 20:06,,6,,"

This is already partially discussed in this related question, but I'll try here to address more specifically some of the issues you rise.

+ +

Generally speaking, Grover's algorithm rests upon the assumption that one is able to perform a querying operation of the form $$|i\rangle\mapsto(-1)^{f(x_i)}|i\rangle,$$ where $i$ is the index in the database, and $x_i$ whatever information the database attaches to $i$.

+ +

You can think of $f(x_i)$ as ""asking a question about $x_i$"". For example, ""is $x_i$ a prime number?"", or ""does $x_i$ have property $P$?"", where $P$ could mean ""being red"".

+ +

It is important to note that $f$ could be asking a question which does not fully characterize $x_i$. This means that after I run the algorithm and retrieve $i$, and thus $x_i$ with it, I also gain knowledge which was not used to build the oracle.

+ +

However, in many proof-of-principle implementations of Grover's algorithm, like the one you show, this is not the case. Indeed, in these demonstrations the question that is being asked is ""trivial"", in the sense that $x_i=i$, and the question is of the form ""is $x_i$ equal to 3?"".

+ +

In such a case, the algorithm is indeed not particularly useful, in that the answer has to be hardcoded into the oracle, but this need not be the case in general.
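To make the colours example from the question concrete (my own statevector sketch; the database ordering is arbitrary, and the oracle asks the property question ""is $x_i$ Red?"" rather than hard-coding an index):

```python
import numpy as np

colors = ['Green', 'Cyan', 'Red', 'Violet', 'Orange', 'Yellow', 'Blue', 'Indigo']
N = len(colors)

# The oracle flips the phase of |i> whenever f(x_i) is true, i.e. whenever
# the colour stored at index i is Red. It asks a question about x_i;
# it does not need to know the index in advance.
def oracle(state):
    return np.array([-amp if colors[i] == 'Red' else amp
                     for i, amp in enumerate(state)])

state = np.full(N, 1 / np.sqrt(N))                        # uniform superposition
for _ in range(int(np.floor(np.pi / 4 * np.sqrt(N)))):    # 2 iterations for N = 8
    state = oracle(state)
    state = 2 * state.mean() - state                      # inversion about the mean

found = int(np.argmax(state ** 2))
print(found, colors[found])   # 2 Red
```

Measuring now returns the index of Red with high probability, even though that index was never written into the oracle.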

+",55,,55,,7/24/2018 10:41,7/24/2018 10:41,,,,3,,,,CC BY-SA 4.0 +2374,2,,2292,6/17/2018 23:52,,9,,"

This answer is more or less a summary of the Aharonov-Jones-Landau paper you linked to, but with everything not directly related to defining the algorithm removed. Hopefully this is useful.

+ +

The Aharonov-Jones-Landau algorithm approximates the Jones polynomial of the plat closure of a braid $\sigma$ at a $k$th root of unity by realizing it as (some rescaling of) a matrix element of a certain unitary matrix $U_\sigma$, the image of $\sigma$ under a certain unitary representation of the braid group $B_{2n}$. Given an implementation of $U_\sigma$ as a quantum circuit, approximating its matrix elements is straightforward using the Hadamard test. The nontrivial part is approximating $U_\sigma$ as a quantum circuit.

+ +

If $\sigma$ is a braid on $2n$ strands with $m$ crossings, we can write $\sigma = \sigma_{a_1}^{\epsilon_1} \sigma_{a_2}^{\epsilon_2} \cdots \sigma_{a_m}^{\epsilon_m}$, where $a_1, a_2, \ldots, a_m \in \{1, 2, \ldots, 2n - 1\}$, $\epsilon_1, \epsilon_2, \ldots, \epsilon_m \in \{\pm 1\}$, and $\sigma_i$ is the generator of $B_{2n}$ that corresponds to crossing the $i$th strand over the $(i + 1)$st. It suffices to describe $U_{\sigma_i}$, since $U_\sigma = U_{\sigma_{a_1}}^{\epsilon_1} \cdots U_{\sigma_{a_m}}^{\epsilon_m}$.

+ +

To define $U_{\sigma_i}$, we first give a certain subset of the standard basis of $\mathbb{C}^{2^{2n}}$ on which $U_{\sigma_i}$ acts nontrivially. For $\psi = \lvert b_1 b_2 \cdots b_{2n} \rangle$, let $\ell_{i'}(\psi) = 1 + \sum_{j = 1}^{i'} (-1)^{1-b_j}$. Let's call $\psi$ admissible if $1 \leq \ell_{i'}(\psi) \leq k - 1$ for all $i' \in \{1, 2, \ldots, 2n\}$. (This corresponds to $\psi$ describing a path of length $2n$ on the graph $G_k$ defined in the AJL paper.) Let $$\lambda_r = \begin{cases}\sin(\pi r / k) & \textrm{if $1 \leq r \leq k - 1$},\\ 0 & \textrm{otherwise.}\end{cases}$$ Let $A = ie^{-\pi i/2k}$ (this is mistyped in the AJL paper; also note that here and only here, $i = \sqrt{-1}$ is not the index $i$). Write $\psi = \lvert \psi_i b_i b_{i+1} \cdots\rangle$, where $\psi_i$ is the first $i - 1$ bits of $\psi$, and let $z_i = \ell_{i-1}(\psi_i)$. Then
$$
\begin{align}
U_{\sigma_i}(\lvert\psi_i 00 \cdots\rangle) & = A^{-1}\lvert\psi_i 00 \cdots\rangle\\
U_{\sigma_i}(\lvert\psi_i 01 \cdots \rangle) & = \left( A\frac{\lambda_{z_i-1}}{\lambda_{z_i}} + A^{-1}\right)\lvert\psi_i 01 \cdots\rangle + A\frac{\sqrt{\lambda_{z_i+1}\lambda_{z_i-1}}}{\lambda_{z_i}}\lvert\psi_i 10 \cdots\rangle\\
U_{\sigma_i}(\lvert\psi_i 10 \cdots \rangle) & = A\frac{\sqrt{\lambda_{z_i+1}\lambda_{z_i-1}}}{\lambda_{z_i}}\lvert\psi_i 01 \cdots\rangle + \left(A\frac{\lambda_{z_i+1}}{\lambda_{z_i}} + A^{-1}\right)\lvert\psi_i 10 \cdots\rangle\\
U_{\sigma_i}(\lvert\psi_i 11 \cdots\rangle) & = A^{-1}\lvert\psi_i 11 \cdots\rangle
\end{align}
$$
We define $U_{\sigma_i}(\psi) = \psi$ for non-admissible basis elements $\psi$.

+ +

We would now like to describe $U_{\sigma_i}$ as a quantum circuit with polynomially many (in $n$ and $k$) gates. Notice that while $U_{\sigma_i}$ only changes two qubits, it also depends on the first $i - 1$ qubits through the dependence on $z_i$ (and indeed, it depends on all qubits for the admissibility requirement). However, we can run a counter to calculate and store $z_i$ (and also determine admissibility of the input) in logarithmically many (in $k$) ancilla qubits, and therefore we can apply the Solovay-Kitaev algorithm to get a good approximation to $U_{\sigma_i}$ using only polynomially many gates. (The paper appeals to Solovay-Kitaev twice: once for incrementing the counter at each step, and once for applying $U_{\sigma_i}$; I'm not sure if there is a more direct way to describe either of these as quantum circuits with standard gates. The paper also doesn't mention the need to check for admissibility here; I'm not sure if this is important, but certainly we at least need $1 \leq z_i \leq k - 1$.)

+ +

So to recap:

+ +
    +
  1. Start with a braid $\sigma \in B_{2n}$ with $m$ crossings.

  2. Write $\sigma = \sigma_{a_1}^{\epsilon_1} \sigma_{a_2}^{\epsilon_2} \cdots \sigma_{a_m}^{\epsilon_m}$.

  3. For each $i \in \{1, 2, \ldots, m\}$, apply the Solovay-Kitaev algorithm to get an approximation of the unitary matrix $U_{\sigma_{a_i}}$ (or its inverse if $\epsilon_i = -1$).

  4. Compose all of the approximations from step 3 to get a quantum circuit with polynomially many gates that approximates $U_{\sigma}$.

  5. Apply the real and imaginary Hadamard tests polynomially many times with the circuit from step 4 and the state $\lvert 1010 \cdots 10\rangle$.

  6. Average the results of step 5 and multiply by some scaling factor to get an approximation to the real and imaginary parts of the Jones polynomial of the plat closure of $\sigma$ evaluated at $e^{2\pi i/k}$.
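For step 5, recall how the Hadamard test works: with an ancilla in $|+\rangle$ controlling $U$, the ancilla's outcome statistics satisfy $P(0)-P(1) = \mathrm{Re}\langle\psi|U|\psi\rangle$ (an extra $S^\dagger$ on the ancilla gives the imaginary part). A small numerical check of that identity (my own sketch, with a random unitary):

```python
import numpy as np

def hadamard_test_re(U, psi):
    """Simulate ancilla + system through H, controlled-U, H; return P(0)-P(1)."""
    d = len(psi)
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    cU = np.block([[np.eye(d), np.zeros((d, d))],
                   [np.zeros((d, d)), U]])          # controlled-U, ancilla first
    state = np.kron(np.array([1.0, 0.0]), psi)      # |0>_ancilla |psi>
    state = np.kron(H, np.eye(d)) @ state
    state = cU @ state
    state = np.kron(H, np.eye(d)) @ state
    p0 = np.sum(np.abs(state[:d]) ** 2)
    p1 = np.sum(np.abs(state[d:]) ** 2)
    return p0 - p1

rng = np.random.default_rng(7)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
U, _ = np.linalg.qr(A)                              # random 4x4 unitary
psi = np.zeros(4, dtype=complex); psi[0] = 1.0

print(np.isclose(hadamard_test_re(U, psi), np.vdot(psi, U @ psi).real))  # True
```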
+",2547,,,,,6/17/2018 23:52,,,,0,,,,CC BY-SA 4.0 +2375,2,,2371,6/18/2018 0:53,,3,,"

Enabling network sockets to handle 10k clients at the same time over 1-gigabit-per-second Ethernet (the C10k problem) is different from making a quantum computer that can handle 10k qubits concurrently. Remember, 10k bits is only 1.25 kB, which is not even enough to store a typical operating system.

+ +

If you want to consider each qubit as a ""client"" in some generalization of the C10k problem, then the answer to your question depends on whether or not you need a universal gate set to be applicable between each of the 10,000 qubit connections. If so, the largest quantum computers with a universal gate set are the 50-qubit machine by IBM and the 72-qubit machine by Google (which has been announced but not shown to the public yet).

+ +

You mention D-Wave, which makes non-universal quantum annealers. If each qubit is considered a ""client"", it is true that the D-Wave 2000Q has 2048 qubits, but not all of them can be connected to any other qubit. In the connectivity graph of a typical D-Wave machine (pictured in the original post), each qubit can only be connected to at most 6 other qubits. To get 10,000 qubits in this arrangement, you just need to create more of these ""unit cells"" of 8 qubits each. The D-Wave One, for example, has 16 unit cells of 8 qubits each (8 x 16 = 128 total qubits). The D-Wave Two had 64 unit cells (8 x 64 = 512 qubits), the D-Wave 2X had 144 unit cells (8 x 144 = 1152 total qubits), and the D-Wave 2000Q has 256 unit cells (8 x 256 = 2048 total qubits).

+ +

For 10,000 qubits you would just need 1250 unit cells (8 x 1250 = 10,000). Beyond that point, D-Wave says a re-design would be required, perhaps in the size of the unit cells, in going from 2D to 3D, or in the physics itself.

+",2293,,2293,,6/18/2018 1:48,6/18/2018 1:48,,,,8,,,,CC BY-SA 4.0 +2376,1,2377,,6/18/2018 6:17,,9,571,"

In this[1] paper, on page 2, they mention that they are generating the weighting matrix as follows:

+ +

$$W = \frac{1}{Md}[\sum_{m=1}^{m=M} \mathbf{x}^{(m)}\left(\mathbf{x}^{(m)}\right)^{T}] - \frac{\Bbb I_d}{d}$$

+ +

where $\mathbf{x}^{(m)}$'s are the $d$-dimensional training samples (i.e. $\mathbf{x} := \{x_1,x_2,...,x_d\}^{T}$ where $x_i \in \{1,-1\} \ \forall \ i\in \{1,2,...,d\}$) and there are $M$ training samples in total. This generation of the weighting matrix using outer products followed by a sum over $M$ terms seems to be a costly operation in terms of time complexity, i.e. I guess around $O(Md^2)$ (?).

+ +
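For reference, the classical construction itself is a couple of lines of NumPy (the random $\pm1$ samples and sizes here are my own placeholders, just to illustrate the formula):

```python
import numpy as np

rng = np.random.default_rng(0)
d, M = 8, 5
X = rng.choice([-1, 1], size=(d, M))   # columns are the training samples x^(m)

# W = (1/(M d)) * sum_m x^(m) x^(m)^T  -  I_d / d
W = (X @ X.T) / (M * d) - np.eye(d) / d

print(np.allclose(np.diag(W), 0))   # True: the Hebbian rule has no self-connections
print(np.allclose(W, W.T))          # True: W is symmetric
```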

Does there exist any quantum algorithm which can offer a substantial speed-up for generation of the weighting matrix? I think in the paper their main speedup comes from the quantum matrix inversion algorithm (which is mentioned later on the paper), but they don't seem to have taken into account this aspect of the weighting matrix generation.

+ +

[1]: A Quantum Hopfield Neural Network Lloyd et al. (2018)

+",26,,,,,6/18/2018 10:02,Is it possible to speed up the generation of the weighting matrix using a quantum algorithm?,,1,0,,,,CC BY-SA 4.0 +2377,2,,2376,6/18/2018 9:07,,5,,"

Taking the density matrix $$\rho=W+\frac{I_d}{d}=\frac 1M \sum_{m=1}^M\left|x^{\left(m\right)}\rangle\langle x^{\left(m\right)}\right|,$$ many of the details are contained in the following paragraph on page 2:

+ +
+

Crucial for quantum adaptations of neural networks is the classical-to-quantum read-in of activation patterns. In our setting, reading in an activation pattern $x$ amounts to preparing the quantum state $|x〉$. This could + in principle be achieved using the developing techniques of quantum random access memory (qRAM) [33] or efficient quantum state preparation, for which restricted, oracle based, results exist [34]. In both cases, the computational overhead is logarithmic in terms of $d$. One can alternatively adapt a fully quantum perspective and take the activation patterns $|x〉$ directly from a quantum device or as the output of a quantum channel. For the former, our preparation run time is efficient whenever the quantum device is composed of a number of gates scaling at most polynomially with the number of qubits. Instead, for the latter, we typically view the channel as some form of fixed system-environment interaction that does not require a computational overhead to implement.

+
+ +

The references in the above are:

+ +

[33]: V. Giovannetti, S. Lloyd, L. Maccone, Quantum random access memory, Physical Review Letters 100, 160501 (2008) [PRL link, arXiv link]

+ +

[34]: A. N. Soklakov, R. Schack, Efficient state preparation for a register of quantum bits, Physical Review A 73, 012307 (2006). [PRA link, arXiv link]

+ +
+ +

Without going into details of how, both of the above are indeed schemes for, respectively, implementing an efficient qRAM and performing efficient state preparation, each recreating the state $\left|x\right>$ in time $\mathcal O\left(\log_2 d\right)$.

+ +

However, this only gets us so far: this can be used to create the state $\rho^{\left(m\right)} = \left|x^{\left(m\right)}\rangle\langle x^{\left(m\right)}\right|$, while we want a sum over all the possible $m$'s.

+ +

Crucially, $\rho = \sum_m\rho^{\left(m\right)}/M$ is mixed, so cannot be represented by a single pure state, so the second of the above two references on recreating pure states doesn't apply and the first requires the state to already be in qRAM.

+ +

As such, the authors make one of three possible assumptions:

+ +
    +
  1. They have a device that just so happens to give them the correct input state

  2. They have the states $\rho^{\left(m\right)}$ in qRAM, or

  3. They are able to create those states at will, using the second of the above references. The mixed state is then created using a quantum channel (i.e. a completely positive, trace preserving (CPTP) map).
+ +

Forgetting about the first two of the above options for the moment (the first magically solves the problem anyway), the channel could either be:

+ +
    +
  • an engineered system, in that it would be created for a specific instance in something akin to an analogue simulation. In other words you've got a physical channel that takes a physical length of time $t$ (as opposed to some time complexity). This is the ""fixed system-environment interaction that does not require a computational overhead to implement.""

  • The channel is itself simulated. There are a few papers on this, such as Bény and Oreshkov's Approximate simulation of quantum channels (arXiv link - this looks like a thorough paper, but I couldn't find any time complexity statements), Lu et. al.'s Experimental quantum channel simulation (no arXiv version seems to exist) and Wei, Xin and Long's arXiv preprint Efficient universal quantum channel simulation in IBM's cloud quantum computer, which (for number of qubits $n=\lceil\log_2 d\rceil$) gives a time complexity of $\mathcal O\left(\left(8n^3+n+1\right)4^{2n}\right)$. Stinespring dilation can also be used, with a complexity of $\mathcal O\left(27n^34^{3n}\right)$.

+ +
+ +

Now looking at option 2 [1], one possible more efficient method would be to transfer the states from the address register to the data register in the usual way: for addresses in register $a$, $\sum_j\psi_j\left|j\right>_a$, transferring this to the data register $d$ gives $\sum_j\psi_j\left|j\right>_a\left|D_j\right>_d$. It should be possible to simply decohere the address and data registers to turn this into the mixed state, giving a small time overhead although no extra computational complexity overhead. This is also the complexity of creating the states $\left|x^{\left(m\right)}\right>$ in the first place, giving a potential (much improved) complexity of producing $\rho$, given a qRAM containing the states $\left|x^{\left(m\right)}\right>$, of $\mathcal O\left(n\right)$.

+ +

[1] With thanks to @glS for pointing this possibility out in chat

+ +
+ +

This density matrix is then fed into 'qHop' (quantum Hopfield), where it is used to simulate $e^{-iAt}$ for $$A=\begin{pmatrix}W-\gamma I_d && P\\ P&& 0\end{pmatrix}$$ as per the ""Efficient Hamiltonian Simulation of A"" subsection on page 8.

+",23,,23,,6/18/2018 10:02,6/18/2018 10:02,,,,1,,,,CC BY-SA 4.0 +2378,2,,2372,6/18/2018 12:43,,6,,"

One main assumption needed for efficient use of a database is that you can load data from a RAM with a superposition of addresses, also called QRAM (see https://arxiv.org/abs/0708.1879). Then assume you have one register for the address, one register for the value, and a load operation, which loads the value at the corresponding address into the value register. The load operation performs the step
$$|x\rangle_{\text{address}}|0\rangle_{\text{value}} \rightarrow |x\rangle_{\text{address}}|\textrm{load}(x)\oplus 0\rangle_{\text{value}} = |x\rangle_{\text{address}}|\textrm{load}(x)\rangle_{\text{value}}.$$

+ +

In the first step you apply Hadamard gates to the address register and then apply the load operation on both registers. You will then have a superposition of all values in the database in the value register: +$$H^{\otimes n}_{\text{address}} |0\rangle_{\text{address}}|0\rangle_{\text{value}}=\frac1{2^{n/2}}\sum_{x=0}^{2^n-1} |x\rangle_{\text{address}}|0\rangle_{\text{value}} $$ +$$\text{apply load}\rightarrow +\frac1{2^{n/2}}\sum_{x=0}^{2^n-1} |x\rangle_{\text{address}}|\textrm{load}(x)\rangle_{\text{value}}$$ +Then you apply the Grover algorithm on the value register with any oracle you want, for example one looking for a prime or for a specific value. We know that after $O(\sqrt{N})$ iterations the correct answer will be measured with high probability. Thus, the correct solution, together with the address $x^*$ of the correct solution, will very probably be measured: +$$|x^*\rangle_{\text{address}}|\textrm{load}(x^*)\rangle_{\text{value}}.$$

+ +
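For concreteness, here is a small classical (numpy) simulation of these two steps on a toy example. The 8-entry `values` array and the searched-for value 5 are made-up illustrations, and the oracle and diffusion steps are applied directly to the amplitude vector rather than built from gates:

```python
import numpy as np

# Toy "database": load(x) for 8 addresses; we search for the value 5.
values = np.array([3, 1, 4, 1, 5, 9, 2, 6])
N = len(values)

# Hadamards on the address register: uniform superposition of amplitudes.
state = np.ones(N) / np.sqrt(N)

# Oracle: flip the sign of amplitudes whose loaded value is the target.
oracle_sign = np.where(values == 5, -1.0, 1.0)

for _ in range(int(round(np.pi / 4 * np.sqrt(N)))):  # O(sqrt(N)) iterations
    state = oracle_sign * state           # oracle call
    state = 2 * state.mean() - state      # diffusion: inversion about the mean

probs = state ** 2
print(probs.argmax(), round(probs.max(), 3))  # address 4 with probability ~0.945
```

Measuring now returns the address $x^*=4$ (whose loaded value is 5) with probability about 95%, after only 2 oracle calls instead of the roughly $N/2$ expected classically.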

Maybe the main problem you have is with understanding the database, not the Grover algorithm. You can find a more detailed explanation in chapter 6.5 of Nielsen & Chuang.

+ +

I also think that the most useful application of the Grover algorithm is not the database application, but rather its generalization, amplitude amplification (see https://arxiv.org/abs/quant-ph/0005055), which applies to any quantum algorithm.

+ +

EDIT: +I have thought a bit more about the problem glS already answered: if we can build an oracle, isn't the problem already solved? To construct the oracle, don't we need to know what the correct solution looks like? Without a background in computer science, this question would be hard to answer by yourself. However, under assumptions most scientists believe (NP$\neq$P), this is exactly the case for a subset of NP-complete problems (the ones which do not have good approximation methods). We can construct an oracle which checks in polynomial time whether a solution is correct, but an oracle which finds the correct solution does not seem to be efficiently computable.

+",2691,,2691,,7/24/2018 8:29,7/24/2018 8:29,,,,2,,,,CC BY-SA 4.0 +2379,2,,2370,6/18/2018 13:00,,8,,"

A natural way to relate Toffoli gates and PR boxes is to see them both as representations of the AND function of two binary inputs, but in different ways. The connection with the AND function is evident and clearly acknowledged by the question, but I would express it in a slightly different way:

+ +
    +
  1. The Toffoli gate is of course the natural way of representing AND as a reversible function. It follows the usual pattern of representing an arbitrary function $f:\{0,1\}^n\rightarrow\{0,1\}$ reversibly as $|x,a\rangle \mapsto |x,a\oplus f(x)\rangle$.

  2. The PR box can be seen as a distributed form of the AND function. The output of a PR box on input $(x,y)$ can be expressed as $(\text{AND}(x,y)\oplus a, a)$, or equivalently as $(a,\text{AND}(x,y)\oplus a)$, where $a\in\{0,1\}$ is a uniformly generated random bit. The output of the PR box is therefore either a perfectly correlated or perfectly anti-correlated pair of random bits, depending on whether the AND of the inputs is 0 or 1 respectively. This is interesting because Alice and Bob collectively know the output of the AND function (which they can obtain by computing the XOR of their output bits), while individually they have no information at all about this value.
+ +
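As a quick illustration of point 2, here is a tiny, purely classical simulation of a box with this input/output behaviour (a sketch, not part of van Dam's construction):

```python
import random

def pr_box(x, y):
    """One use of a PR box on inputs (x, y): outputs (a, b) with
    a XOR b = AND(x, y), each output bit uniformly random on its own."""
    a = random.randint(0, 1)
    return a, (x & y) ^ a

# The XOR of the two outputs always equals AND(x, y) ...
for x in (0, 1):
    for y in (0, 1):
        a, b = pr_box(x, y)
        assert a ^ b == (x & y)

# ... while Alice's bit alone carries no information about AND(x, y).
samples = [pr_box(1, 1)[0] for _ in range(10000)]
print(sum(samples) / len(samples))  # close to 0.5
```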

The idea that the PR box effectively computes the AND function in this distributed way is a key idea in Wim van Dam's proof that communication complexity becomes trivial in the presence of PR boxes:

+ +
+

Wim van Dam. Implausible consequences of superstrong nonlocality. + Natural Computing 12(1): 9-12, 2013.

+
+",1764,,,,,6/18/2018 13:00,,,,0,,,,CC BY-SA 4.0 +2380,2,,2298,6/18/2018 13:26,,2,,"

An answer to this question seems to have been given by the same authors, in a different follow-up paper which I hadn't seen before.

+ +

In [1], the authors write (emphasis mine):

+ +
+

A classical RAM that uses the bucket-brigade addressing schemes need + only activate $O(n)$ transistors in the course of a memory call, in + contrast with a conventional RAM that activates $O(2^n)$ transistors. As + a result, a RAM that uses our design might operate with less + dissipation and power consumption than a conventional RAM. Note, + however, that energy costs in the memory addressing are not + sufficiently high in current RAM chips to justify an immediate + adoption of the bucket-brigade. Other sources of inefficiencies and + dissipations are currently predominant (mostly in the memory cells + themselves). However, new promising memory cell technologies are being + developed (e.g. the “memristor” cells [15]), which would drastically + cut back cell dissipation, so that cutting back dissipation in the + addressing may become important in the future.

+
+ +

It would therefore seem that, at least according to the authors, the bucket-brigade scheme would be also advantageous for classical architectures, in terms of energy consumption.

+ +

[1]: Giovannetti, Lloyd, and Maccone. ""Architectures for a quantum random access memory."" Physical Review A 78.5 (2008): 052310.

+",55,,,,,6/18/2018 13:26,,,,0,,,,CC BY-SA 4.0 +2381,1,,,6/18/2018 15:14,,19,2106,"

What is a ""qubit""? Google tells me that it's another term for a ""quantum bit"". What is a ""quantum bit"" physically? How is it ""quantum""? What purpose does it serve in quantum computing?

+ +

Note: I'd prefer an explanation that is easily understood by laypeople; terms specific to quantum computing should preferably be explained, in relatively simple terms.

+",90,,10480,,01-02-2022 22:05,01-02-2022 22:05,What is a qubit?,,6,8,,,,CC BY-SA 4.0 +2382,2,,2381,6/18/2018 17:12,,4,,"
+

What is a ""quantum bit"" physically? How is it ""quantum""?

+
+ +

First let me give examples of classical bits:

+ +
    +
  • In a CPU: low voltage = 0, high voltage = 1
  • In a hard drive: North magnet = 0, South magnet = 1
  • In a barcode on your library card: Thin bar = 0, Thick bar = 1
  • In a DVD: Absence of a deep microscopic pit on the disk = 0, Presence = 1
+ +

In every case you can have something in between:

+ +
    +
  • If ""low voltage"" is 0 mV, and ""high voltage"" is 1 mV, you can have a medium voltage of 0.5 mV
  • You can have a magnet polarized in any direction, such as North-West
  • You can have lines in a barcode that are of any width
  • You can have pits of various depths on the surface of a DVD
+ +

In quantum mechanics things can only exist in ""packages"" called ""quanta"". The singular of ""quanta"" is ""quantum"". This means for the barcode example, if the thin line was one ""quantum"", the thick line can be two times the size of the thin line (two quanta), but it cannot be 1.5 times the thickness of the thin line. If you look at your library card you will notice that you can draw lines that are of thickness 1.5 times the size of the thin lines if you want to, which is one reason why barcode bits are not qubits.

+ +

There do exist some things in which the laws of quantum mechanics do not permit anything between the 0 and the 1, some examples are below:

+ +
    +
  • spin of an electron: It's either up (0) or down (1), but cannot be in between.
  • energy level of an electron: 1st level is 0, 2nd level is 1, there is no such thing as 1.5th level
+ +

I have given you two examples of what a qubit can be physically: spin of an electron, or energy level of an electron.

+ +
+

What purpose does it serve in quantum computing?

+
+ +

The reason why the qubit examples I gave come in quanta are because they exist as solutions to something called the Schrödinger Equation. Two solutions to the Schrödinger equation (the 0 solution, and the 1 solution) can exist at the same time. So we can have 0 and 1 at the same time. If we have two qubits, each can be in 0 and 1 at the same time, so collectively we can have 00, 01, 10, and 11 (4 states) at the same time. If we have 3 qubits, each of them can be in 0 and 1 at the same time, so we can have 000, 001, 010, 011, 100, 101, 110, 111 (8 states) at the same time. Notice that for $n$ qubits we can have $2^n$ states at the same time. That is one of the reasons why quantum computers are more powerful than classical computers.
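
A tiny numpy sketch of this growth, using the usual $|+\rangle = (|0\rangle+|1\rangle)/\sqrt2$ state as the "0 and 1 at the same time" qubit:

```python
import numpy as np

plus = np.array([1.0, 1.0]) / np.sqrt(2)   # one qubit in 0 and 1 at once

state = np.array([1.0])
for n in range(1, 4):
    state = np.kron(state, plus)           # add one more such qubit
    print(n, "qubits ->", len(state), "simultaneous basis states")
```

The amplitude vector doubles with every qubit added, so $n$ qubits carry $2^n$ amplitudes at once.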

+",2293,,2293,,7/15/2018 0:37,7/15/2018 0:37,,,,0,,,,CC BY-SA 4.0 +2383,2,,2381,6/18/2018 18:03,,3,,"

A qubit (quantum bit) is a quantum system that can be fully described by (""lives in"") a 2-dimensional complex vector space.

+ +

However, much more than that is required to do computations. There need to exist two orthogonal basis vectors in that vector space, call them $|0\rangle$ and $|1\rangle$, that are stable in the sense that you can set the system very precisely to $|0\rangle$ or to $|1\rangle$, and it will stay there for a long time. This is easier said than done because, unless noise is reduced somehow, it will cause the state to drift gradually so that it contains a component along both the $|0\rangle$ and $|1\rangle$ dimensions.

+ +

To do computations, you must also be able to induce a ""complete"" set of operations acting on one or two qubits. When you are not inducing an operation, qubits should not interact with each other. Unless interaction with the environment is suppressed, qubits will interact with each other.

+ +

A classical bit, by the way, is much simpler than a qubit. It's a system that can be described by a boolean variable.

+",1974,,26,,7/13/2018 14:56,7/13/2018 14:56,,,,0,,,,CC BY-SA 4.0 +2385,2,,2381,6/19/2018 1:16,,10,,"

This is a good question and in my view gets at the heart of a qubit. As the comment by @Blue points out, what matters is not that a qubit can be in an equal superposition, as this is the same as a classical probability distribution. It is that it can have negative signs.

+ +

Take this example. Imagine you have a bit in the $0$ state and represent it as vector $\begin{bmatrix}1 \\0 \end{bmatrix}$ and then you apply a coin flipping operation which can be represented by a stochastic matrix $\begin{bmatrix}0.5 & 0.5 \\0.5 & 0.5 \end{bmatrix}$ this will make a classical mixture $\begin{bmatrix}0.5 \\0.5 \end{bmatrix}$. If you apply this twice it will still be a classical mixture $\begin{bmatrix}0.5 \\0.5 \end{bmatrix}$.

+ +

Now let's go to the quantum case and start with a qubit in the $0$ state, which again is represented by $\begin{bmatrix}1 \\0 \end{bmatrix}$. In quantum mechanics, operations are represented by a unitary matrix, which has the property $U^\dagger U = I$. The simplest unitary to represent the action of a quantum coin flip is the Hadamard matrix $\begin{bmatrix}\sqrt{0.5} & \sqrt{0.5} \\\sqrt{0.5} & -\sqrt{0.5} \end{bmatrix}$, where the first column is defined so that after one operation it makes the state $|+\rangle =\begin{bmatrix}\sqrt{0.5} \\\sqrt{0.5} \end{bmatrix}$; the matrix must then have the form $\begin{bmatrix}\sqrt{0.5} & a \\\sqrt{0.5} & b \end{bmatrix}$ where $|a|^2 = 1/2$, $|b|^2 = 1/2$ and $ab^* = -1/2$. A solution to this is $a=\sqrt{0.5}$ and $b=-a$.

+ +

Now let's do the same experiment. Applying it once gives +$\begin{bmatrix}\sqrt{0.5} \\\sqrt{0.5} \end{bmatrix}$ and if we measured (in the standard basis) we would get 0 half the time and 1 the other half (recall that in quantum mechanics the Born rule is $P(i) = |\langle i|\psi\rangle|^2$, which is why we need all the square roots). So it is like the above and has a random outcome.

+ +

Let's apply it twice. Now we would get $\begin{bmatrix} 0.5+0.5 \\0.5-0.5\end{bmatrix}$. The negative sign cancels the probability of observing the 1 outcome; as physicists we refer to this as interference. It is these negative numbers that we get in quantum states which cannot be explained by probability theory, where the vectors must remain positive and real.

+ +
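The two experiments above can be reproduced in a few lines of numpy (a sketch of the same arithmetic, with the same matrices):

```python
import numpy as np

ket0 = np.array([1.0, 0.0])                       # the 0 state

# Classical coin flip: a stochastic matrix acting on probabilities.
coin = np.array([[0.5, 0.5],
                 [0.5, 0.5]])
print(coin @ coin @ ket0)                         # [0.5 0.5]: still random

# Quantum coin flip: the Hadamard, acting on amplitudes.
H = np.sqrt(0.5) * np.array([[1.0,  1.0],
                             [1.0, -1.0]])
once = H @ ket0
twice = H @ once
print(np.abs(once) ** 2)                          # [0.5 0.5]
print(np.abs(twice) ** 2)                         # [1. 0.]: interference
```

Two classical flips leave the bit random, while two Hadamards return the qubit to $|0\rangle$ with certainty, because the minus sign cancels the $|1\rangle$ amplitude.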

Extending this to $n$ qubits gives you a theory with an exponentially large state space that we can't find efficient classical ways to simulate.

+ +

This is not just my view. I have seen it shown in talks by Scott Aaronson and I think it's best to say quantum is like “Probability theory with Minus Signs” (this is a quote by Scott).

+ +

I am attaching the slides I like to give for explaining quantum (if it is not standard to have slides in an answer I am happy to write the math out to get across the concepts) +

+",302,,302,,6/24/2018 19:16,6/24/2018 19:16,,,,8,,,,CC BY-SA 4.0 +2386,1,,,6/19/2018 3:28,,5,1983,"

I am looking to collaborate on open source simulation efforts.

+",1697,,,,,6/19/2018 8:25,What are some open sources projects on quantum computing?,,2,6,,6/19/2018 13:59,,CC BY-SA 4.0 +2387,2,,2386,6/19/2018 3:43,,3,,"

There is in fact a quantum computing topic tag on github which comes up with a list of a whole bunch of projects.

+ +

QISKit is pretty well known; it's connected with IBM's quantum experience as well as the QASM language. It's a pure simulator.

+ +

In another direction, there's projects like OpenFermion which is for finding algorithms to simulate different problems in quantum chemistry.

+ +

A fabulous little simulator that I prefer over IBM's is Quirk which runs in your browser and is open source.

+ +

You can also start your own project like I did, though I would not recommend becoming an open-source contributor to mine because I haven't updated my repository in a while and I'm pretty sure everything there is rather decisively out of date.

+ +

If you can provide some more criteria, I can recommend some more specific projects.

+",91,,26,,6/19/2018 4:17,6/19/2018 4:17,,,,0,,,,CC BY-SA 4.0 +2388,1,2395,,6/19/2018 8:00,,13,1862,"

I have been trying to get my head around the famous(?) paper Quantum algorithm for linear systems of equations (Harrow, Hassidim & Lloyd, 2009) (more popularly known as the HHL09 algorithm paper) for some time, now.

+ +

On the very first page, they say:

+ +
+

We sketch here the basic idea of our algorithm and then discuss it in + more detail in the next section. Given a Hermitian $N\times N$ matrix + $A$, and a unit vector $\vec{b}$, suppose we would like to find + $\vec{x}$ satisfying $A\vec{x} = \vec{b}$. (We discuss later questions + of efficiency as well as how the assumptions we have made about $A$ + and $\vec{b}$ can be relaxed.) First, the algorithm represents + $\vec{b}$ as a quantum state $|b\rangle = \sum_{i=1}^{N}b_i|i\rangle$. + Next, we use techniques of Hamiltonian simulation [3, 4] to apply + $e^{iAt}$ to $|b_i\rangle$ for a superposition of different times $t$. + This ability to exponentiate $A$ translates, via the well-known + technique of phase-estimation [5–7], into the ability to decompose $|b\rangle$ + in the eigenbasis of $A$ and to find the corresponding eigenvalues + $\lambda_j$. Informally, the state of the system after + this stage is close to $\sum_{j=1}^{j=N} \beta_j + |u_j\rangle|\lambda_j\rangle$, where $u_j$ is the eigenvector basis of + $A$ and $|b\rangle = \sum_{j=1}^{j=N} \beta_j|u_j\rangle$.

+
+ +

So far so good. As described in Nielsen & Chuang in the chapter ""The quantum Fourier transform and its applications"", the phase estimation algorithm is used to estimate the phase $\varphi$ of the eigenvalue $e^{2\pi i \varphi}$ corresponding to an eigenvector $|u\rangle$ of the unitary operator $U$.

+ +

Here's the relevant portion from Nielsen & Chuang:

+ +
+

The phase estimation algorithm uses two registers. The first register + contains $t$ qubits initially in the state $|0\rangle$. How we choose + $t$ depends on two things: the number of digits of accuracy we wish to + have in our estimate for $\varphi$, and with what probability we wish the + phase estimation procedure to be successful. The dependence of $t$ on + these quantities emerges naturally from the following analysis.

+ +

The second register begins in the state $|u\rangle$ and contains as + many qubits as is necessary to store $|u\rangle$. Phase estimation is + performed in two stages. First, we apply the circuit shown in Figure + 5.2. The circuit begins by applying a Hadamard transform to the first register, followed by application of controlled - $U$ operations on + the second register, with $U$ raised to successive powers of two. The + final state of the first register is easily seen to be:

+ +

$$\frac{1}{2^{t/2}}\left(|0\rangle+\text{exp}(2\pi i + 2^{t-1}\varphi)|1\rangle)(|0\rangle+\text{exp}(2\pi i + 2^{t-2}\varphi)|1\rangle)...(|0\rangle+\text{exp}(2\pi i + 2^{0}\varphi)|1\rangle\right)= + \frac{1}{2^{t/2}}\sum_{k=0}^{2^{t}-1}\text{exp}(2\pi i \varphi + k)|k\rangle$$

+ +

+ +

The second stage of phase estimation is to apply the inverse quantum + Fourier transform on the first register. This is obtained by reversing + the circuit for the quantum Fourier transform in the previous section + (Exercise 5.5) and can be done in $\Theta (t^2)$ steps. The third and + final stage of phase estimation is to read out the state of the first + register by doing a measurement in the computational basis. We will + show that this provides a pretty good estimate of $\varphi$. An + overall schematic of the algorithm is shown in Figure 5.3.

+ +

To sharpen our intuition as to why phase estimation works, suppose $\varphi$ + may be expressed exactly in $t$ bits, as $\varphi = 0.\varphi_1 ... + \varphi_t$. Then the state (5.20) resulting from the first stage of + phase estimation may be rewritten

+ +

$$\frac{1}{2^{t/2}}(|0\rangle + \exp(2\pi i\, 0.\varphi_t)|1\rangle)(|0\rangle + \exp(2\pi i\, 0.\varphi_{t-1}\varphi_t)|1\rangle)...(|0\rangle + \exp(2\pi i\, 0.\varphi_1...\varphi_t)|1\rangle)$$

+ +

The second stage of phase estimation is to apply the inverse quantum + Fourier transform. But comparing the previous equation with the + product form for the Fourier transform, Equation (5.4), we see that + the output state from the second stage is the product state + $|\varphi_1 ...\varphi_t\rangle$. A measurement in the computational + basis, therefore, gives us $\varphi$ exactly!

+ +

+ +

Summarizing, the phase estimation algorithm allows one to estimate the + phase $\varphi$ of an eigenvalue of a unitary operator $U$, given the + corresponding eigenvector $|u\rangle$. An essential feature at the + heart of this procedure is the ability of the inverse Fourier + transform to perform the transformation

+ +

$$\frac{1}{2^{t/2}}\sum_{j = 0}^{2^t-1}\exp(2\pi i \varphi j)|j\rangle |u\rangle \to |\tilde \varphi \rangle |u\rangle$$

+
+ +
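The quoted claim, that when $\varphi$ has exactly $t$ bits the inverse Fourier transform maps the stage-one state to $|\varphi_1\ldots\varphi_t\rangle$ with certainty, can be checked numerically. The sketch below uses numpy's FFT as a stand-in for the inverse QFT; the choice $\varphi = 0.101_2 = 5/8$ with $t=3$ is an arbitrary example, not from the paper:

```python
import numpy as np

t = 3
phi = 5 / 8                     # phi = 0.101 in binary: exactly t bits

# First register after stage one: amplitudes exp(2*pi*i*phi*k) / 2^{t/2}.
k = np.arange(2 ** t)
state = np.exp(2j * np.pi * phi * k) / np.sqrt(2 ** t)

# The inverse QFT uses the kernel exp(-2*pi*i*j*k/N), i.e. numpy's fft,
# up to the 1/sqrt(N) normalization.
after = np.fft.fft(state) / np.sqrt(2 ** t)

probs = np.abs(after) ** 2
print(probs.argmax())           # 5, i.e. the bit string 101, read out exactly
```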

Let's proceed from here. I found a nice circuit diagram for the HHL09 algorithm here[$\dagger$]:

+ +

+ +

Step 1 (Phase Estimation):

+ +

In the first step of the HHL09 algorithm the same concept (of the standard Quantum Phase Estimation algorithm as described in Nielsen and Chuang) is used. However, we must keep in mind that $A$ by itself isn't a unitary operator. But if we assume that $A$ is Hermitian then the exponential $e^{iAt}$ is unitary (no worries, there exists a workaround in case $A$ isn't Hermitian!).

+ +

Here, we can write $U=e^{iAt}$. There's another subtle point involved here. We do not know the eigenvectors $|u_j\rangle$ of $U$ beforehand (but we do know that for any unitary matrix of size $N\times N$ there exist $N$ orthonormal eigenvectors). Moreover, we need to remind ourselves that if the eigenvalues of $A$ are $\lambda_j$ then the eigenvalues of $e^{iAt}$ will be $e^{i \lambda_j t}$. If we compare this with the form of eigenvalues given in Nielsen and Chuang for $U$ i.e. if $e^{2\pi i \varphi} \equiv e^{ i \lambda_j t}$, we'd find $\varphi = \frac{\lambda_j t}{2\pi}$. In this case, we begin in the state $|b\rangle$ (which can be written as a superposition of the eigenvectors of $U$ i.e. $\sum_{j=1}^{j=N}\beta_j|u_j\rangle$) rather than any particular eigenvector $|u_j\rangle$ of $U$, as far as the second register of qubits is concerned. If we had begun in the state $|u\rangle \otimes (|0\rangle)^{\otimes t}$ we would have ended up with $|u\rangle \otimes |\tilde\varphi\rangle$ i.e. $|u_j\rangle \otimes |\tilde{\frac{\lambda_j t}{2\pi}}\rangle$ (considering that $\lambda_j$ is the eigenvalue associated with the eigenvector $|u_j\rangle$ of $A$). Now, instead if we begin in the superposition of eigenvectors $\sum_{j=1}^{j=N}\beta_j|u_j\rangle$ we should end up with $\sum_{j=1}^{j=N}\beta_j|u_j\rangle\otimes |\tilde{\frac{\lambda_j t}{2\pi}}\rangle$.

+ +
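As a numerical sanity check of this $\varphi = \frac{\lambda_j t}{2\pi}$ relation (a sketch on a random Hermitian matrix, with $t$ chosen small enough that every $\lambda_j t$ fits in $(-\pi,\pi]$):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
A = (M + M.conj().T) / 2                  # a random 4x4 Hermitian matrix

t = 0.1
lam, V = np.linalg.eigh(A)
U = V @ np.diag(np.exp(1j * lam * t)) @ V.conj().T   # U = e^{iAt}

# The phases that phase estimation reads out are lambda*t/(2*pi), not lambda:
phases = np.angle(np.linalg.eigvals(U)) / (2 * np.pi)
print(np.allclose(np.sort(2 * np.pi * phases / t), np.sort(lam)))  # True
```

Recovering $\lambda_j$ from the measured $\varphi$ therefore requires dividing out the known factor $\frac{t}{2\pi}$.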

Question:

+ +

Part 1: In the HHL09 paper, they write that the state of the system after this phase estimation step is $\sum_{j=1}^{j=N}\beta_j|u_j\rangle\otimes |\tilde\lambda_j\rangle$. However, from what I wrote above it seems to me that the state of the system should rather be $\sum_{j=1}^{j=N}\beta_j|u_j\rangle\otimes |\tilde{\frac{\lambda_j t}{2\pi}}\rangle$.

+ +

What am I missing here? Where did the factor of $\frac{t}{2\pi}$ vanish in their algorithm?

+ +

Edit: Part 2 has been asked here to make the individual questions more focused.

+ +
+ +

I also have several confusions regarding Step 2 and Step 3 of the HHL09 algorithm too, but I decided to post them as separate question threads, as this one is becoming too long. I'll add the links to those question threads, on this post, once they are created.

+ +

[$\dagger$]: Homomorphic Encryption Experiments on IBM's Cloud Quantum Computing Platform Huang et al. (2016)

+",26,,26,,12/23/2018 12:33,12/23/2018 12:33,Quantum algorithm for linear systems of equations (HHL09): Step 1 - Confusion regarding the usage of phase estimation algorithm,,2,1,,,,CC BY-SA 4.0 +2389,2,,2386,6/19/2018 8:25,,1,,"

You can find some lists of open source quantum projects here.

+ +

I’d also advise you to check out the QISKit and Forest Slack channels to find some potential ideas and collaborators.

+",409,,,,,6/19/2018 8:25,,,,0,,,,CC BY-SA 4.0 +2390,1,,,6/19/2018 8:40,,8,715,"

This is a continuation of Quantum algorithm for linear systems of equations (HHL09): Step 1 - Confusion regarding the usage of phase estimation algorithm

+ +

Questions (contd.):

+ +

Part 2: I'm not exactly sure how many qubits will be needed for Step 1 of the HHL09 algorithm.

+ +

In Nielsen and Chuang (section 5.2.1, 10th anniversary edition) they say:

+ +
+

Thus to successfully obtain $\varphi$ accurate to $n$-bits with + probability of success at least $1-\epsilon$ we choose

+ +

$$t=n+\lceil { \log_2(2+\frac{1}{2\epsilon})\rceil}$$

+
+ +

So, say we want an accuracy of $90\%$ i.e. $1-\epsilon = 0.9 \implies \epsilon = 0.1$ and a precision of $3$-bits for $\frac{\lambda_j t}{2\pi}$ or $\lambda_j$ we'd need

+ +

$$t = 3 + \lceil { \log_2(2+\frac{1}{2 (0.1)})\rceil} = 3 + 3 = 6$$

+ +

Apart from that, since $|b\rangle$ can be represented as a sum of $N$ linearly independent eigenvectors of an $N\times N$ matrix $A$, we'd need at least $\lceil{\log_2(N)\rceil}$ qubits to produce a vector space having at least $N$ dimensions. So, we need $\lceil{\log_2(N)\rceil}$ qubits for the second register.

+ +

Now, for the first register, $\lceil{\log_2(N)\rceil}$ qubits won't be sufficient to represent the $N$ eigenvalues $|\lambda_j\rangle$, because we'll need more bits to represent each $|\lambda_j\rangle$ precisely, up to $n$ bits.

+ +

I guess we should again use the formula $$n+\lceil { \log_2(2+\frac{1}{2\epsilon})\rceil}$$ in this case. If we want each eigenvalue $|\lambda_i\rangle$ to be represented with $3$-bit precision and $90\%$ accuracy then we'd need $6\times \lceil{\log_2(N)\rceil}$ qubits for the first register, plus one more qubit which is needed for the ancilla.

+ +

So, we should need a total of $(6+1)\lceil{\log_2(N)\rceil}+1$ qubits for Step 1 of the HHL09 algorithm. That's quite a lot!

+ +

Say we want to solve a $2\times 2$ system of linear equations where $A$ is Hermitian: that by itself would require $7\lceil{\log_2(2)\rceil}+1 = 8$ qubits! In case $A$ is not Hermitian, we'd need even more qubits. Am I right?

+ +

However, in this[$\dagger\dagger$] paper, on page 6, they claim that they used the HHL09 algorithm to estimate the pseudoinverse of $A$, which is of size ~$200\times 200$. In that paper, $A$ is defined as:

+ +

$$A := \begin{pmatrix} W - \gamma \Bbb I_d & P \\ P & 0 \end{pmatrix}$$

+ +

where $P$,$W$ and $\Bbb I_d$ are all $d\times d$ matrices.

+ +

+ +

In the H1N1-related simulation Lloyd et al. claim to have made, $d = 100$. They further claim that they used the HHL09 algorithm to estimate the pseudo-inverse of $A$ (which is of size $200\times 200$). That would need a minimum of $7\lceil{\log_2(200)\rceil}+1 = 7(8)+1 = 57$ qubits to simulate. I have no idea how they could possibly do that using current quantum computers or quantum computer simulations. As far as I know, IBM Q Experience at present supports ~$15$ qubits (and even that isn't as versatile as their $5$-qubit version).

+ +
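The register-size arithmetic above can be checked in a couple of lines (this only reproduces the counts as computed in this question, under its assumptions about how the registers scale):

```python
from math import ceil, log2

def phase_register_size(n, eps):
    """Nielsen & Chuang's t = n + ceil(log2(2 + 1/(2*eps)))."""
    return n + ceil(log2(2 + 1 / (2 * eps)))

print(phase_register_size(3, 0.1))   # 6: 3-bit precision, 90% success
print(7 * ceil(log2(2)) + 1)         # 8: the 2x2 system
print(7 * ceil(log2(200)) + 1)       # 57: the 200x200 system
```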

Am I missing something here? Does this Step 1 actually require fewer qubits than I have estimated?

+ +

[$\dagger\dagger$]: A Quantum Hopfield Neural Network Lloyd et al. (2018)

+",26,,26,,12/23/2018 12:34,12/23/2018 12:34,Quantum algorithm for linear systems of equations (HHL09): Step 1 - Number of qubits needed,,1,2,,,,CC BY-SA 4.0 +2391,2,,2381,6/19/2018 9:00,,5,,"

A qubit is a two-dimensional quantum system, and the quantum generalization of a bit. Like bits, qubits can be in the states 0 and 1. In quantum notation, we write these as $|0\rangle$ and $|1\rangle$. They can also be in superposition states such as

+ +

$$ |\psi_0 \rangle = \alpha |0\rangle + \beta |1\rangle$$

+ +

Here $\alpha$ and $\beta$ are complex numbers in general. But for this answer, I'll just assume they are normal real numbers. The name I've given this state, $|\psi_0 \rangle$, is just for convenience. It has no deeper meaning.

+ +

Extracting an output from a qubit is done by a process known as measurement. The most common measurement is what we call the $Z$ measurement. This means just asking the qubit whether it is 0 or 1. If it is in a superposition state, such as the one above, the output will be random. You'll get 0 with probability $\alpha^2$ and 1 with probability $\beta^2$ (so clearly these numbers need to satisfy $\alpha^2+\beta^2=1$).

+ +
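In code, such a measurement is just sampling from the Born-rule distribution. A small numpy sketch, with the made-up amplitudes $\alpha=0.6$, $\beta=0.8$:

```python
import numpy as np

alpha, beta = 0.6, 0.8                    # alpha^2 + beta^2 = 1
rng = np.random.default_rng(1)

# Z measurement: outcome 0 with probability alpha^2, 1 with probability beta^2.
outcomes = rng.choice([0, 1], size=100_000, p=[alpha**2, beta**2])
print(outcomes.mean())                    # close to beta^2 = 0.64
```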

This might make it seem that superpositions are just random number generators, but that isn't the case. For every $ \alpha |0\rangle + \beta |1\rangle$ , we can construct the following state

+ +

$$ |\psi_1 \rangle = \beta |0\rangle - \alpha |1\rangle$$

+ +

This is as different to $|\psi_0\rangle$ as $|0\rangle$ is to $|1\rangle$. We call it a state that is orthogonal to $|\psi_0\rangle$.

+ +

With this we can define an alternative measurement that looks at whether our qubit is $|\psi_0\rangle$ or $|\psi_1\rangle$. For this measurement, it is the $|\psi_0\rangle$ and $|\psi_1\rangle$ states that give us definite answers. For other states, such as $|0\rangle$ and $|1\rangle$, we'd get random outputs. This is because they can be thought of as superpositions of $|\psi_0\rangle$ and $|\psi_1\rangle$.

+ +

So, trying to summarize a little, qubits are objects that we can use to store a bit. We usually do this in the states $|0\rangle$ and $|1 \rangle$, but in fact, we could choose to do it in any of the infinite possible pairs of orthogonal states. If we want to get the bit out again with certainty, we have to measure according to the encoding we used. Otherwise, there will always be a degree of randomness. For more detail on all this, you can check out a blog post I once wrote.

+ +

To start getting interesting things happening, we need more than one qubit. Since $n$ bits can be made into $2^n$ different bit strings, there is an exponentially large number of orthogonal states that can be included in our superpositions of $n$ qubits. This is the space in which we can do all the tricks of quantum computation.

+ +

But as for how that works, I'll have to refer you to the rest of the questions and answers in this Stack Exchange.

+",409,,26,,7/13/2018 14:55,7/13/2018 14:55,,,,0,,,,CC BY-SA 4.0 +2392,1,2394,,6/19/2018 10:58,,5,95,"

In a paper I am reading, it states:

+ +
+

For open-loop coherent controllability a quantum system with Hamiltonian $H$ is open-loop controllable by a coherent controller if and only if the algebra $\mathcal{A}$ generated from $\{ H, H_i \}$ by commutation is the full algebra of Hermitian operators for the system.

+
+ +

How would you produce an algebra from the set $\{ H, H_i \}$ using commutation? What is the basic idea in this regard?

+",2032,,55,,4/23/2021 12:50,4/23/2021 12:50,"How do you produce an algebra from a set $\{H, H_i\}$ via commutation?",,1,0,,,,CC BY-SA 4.0 +2393,1,2396,,6/19/2018 11:03,,9,867,"

This is a sequel to Quantum algorithm for linear systems of equations (HHL09): Step 1 - Confusion regarding the usage of phase estimation algorithm and Quantum algorithm for linear systems of equations (HHL09): Step 1 - Number of qubits needed.

+ +
+ +

In the paper: Quantum algorithm for linear systems of equations (Harrow, Hassidim & Lloyd, 2009), what's written up to the portion

+ +
+

The next step is to decompose $|b\rangle$ in the eigenvector basis, + using phase estimation [5–7]. Denote by $|u_j\rangle$ the eigenvectors + of $A$ (or equivalently, of $e^{iAt}$), and by $\lambda_j$ the + corresponding eigenvalues.

+
+ +

on page $2$ makes some sense to me (the confusions up till there have been addressed in the previous posts linked above). However, the next portion i.e. the $R(\lambda^{-1})$ rotation seems a bit cryptic.

+ +
+

Let $$|\Psi_0\rangle := \sqrt{\frac{2}{T}}\sum_{\tau =0}^{T-1} \sin + \frac{\pi(\tau+\frac{1}{2})}{T}|\tau\rangle$$

+ +

for some large $T$. The coefficients of $|\Psi_0\rangle$ are chosen + (following [5-7]) to minimize a certain quadratic loss function which + appears in our error analysis (see [13] for details).

+ +

Next, we apply the conditional Hamiltonian evolution $\sum_{\tau = + 0}^{T-1}|\tau\rangle \langle \tau|^{C}\otimes e^{iA\tau t_0/T}$ on + $|\Psi_0\rangle^{C}\otimes |b\rangle$, where $t_0 = + \mathcal{O}(\kappa/\epsilon)$.

+
+ +

Questions:

+ +

1. What exactly is $|\Psi_0\rangle$? What do $T$ and $\tau$ stand for? I've no idea where this gigantic expression $$\sqrt{\frac{2}{T}}\sum_{\tau =0}^{T-1} \sin +\frac{\pi(\tau+\frac{1}{2})}{T}|\tau\rangle$$ suddenly comes from and what its use is.

+ +

2. After the phase estimation step, the state of our system is apparently:

+ +

$$\left(\sum_{j=1}^{j=N}\beta_j|u_j\rangle\otimes |\tilde\lambda_j\rangle\right)\otimes |0\rangle_{\text{ancilla}}$$

+ +

This surely cannot be written as $$\left(\sum_{j=1}^{j=N}\beta_j|u_j\rangle\right)\otimes \left(\sum_{j=1}^{j=N}|\tilde\lambda_j\rangle\right)\otimes |0\rangle_{\text{ancilla}}$$ i.e.

+ +

$$|b\rangle\otimes \left(\sum_{j=1}^{j=N}|\tilde\lambda_j\rangle\right)\otimes |0\rangle_{\text{ancilla}}$$

+ +

So, it is clear that $|b\rangle$ is not available separately in the second register. So I've no idea how they're preparing a state like $|\Psi_0\rangle^{C}\otimes |b\rangle$ in the first place! Also, what does that $C$ in the superscript of $|\Psi_0\rangle^{C}$ denote?

+ +

3. Where does this expression $\sum_{\tau = + 0}^{T-1}|\tau\rangle \langle \tau|^{C}\otimes e^{iA\tau t_0/T}$ suddenly come from? What's the use of simulating it? And what is $\kappa$ in $\mathcal{O}(\kappa/\epsilon)$?

+",26,,26,,07-05-2018 19:00,07-05-2018 19:00,Quantum algorithm for linear systems of equations (HHL09): Step 2 - What is $|\Psi_0\rangle$?,,2,0,,,,CC BY-SA 4.0 +2394,2,,2392,6/19/2018 11:13,,3,,"

In general, an algebra $\mathcal{A}$ generated from a set $\{H_1, H_2,..., H_n\}$ by commutation refers to the algebra whose generators are $H_1,H_2,...,H_n$, together with all their first-order commutators $C_{ij} = [H_i,H_j]$, all their second-order commutators $C_{ijk} = [[H_i, H_j],H_k]$, and so on.
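
As a concrete numpy sketch of this procedure, take the qubit generators $X$ and $Z$ as an illustrative example: repeatedly adjoin commutators and stop when the real linear span stops growing. Here the closure has dimension 3, spanned by $X$, $Z$ and $[X,Z]\propto Y$:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def real_span_dim(mats):
    """Dimension of the real linear span of a list of matrices."""
    rows = [np.concatenate([m.real.ravel(), m.imag.ravel()]) for m in mats]
    return np.linalg.matrix_rank(np.array(rows))

ops = [X, Z]
while True:
    enlarged = ops + [a @ b - b @ a for a in ops for b in ops]
    if real_span_dim(enlarged) == real_span_dim(ops):
        break                      # closed under commutation
    ops = enlarged

print(real_span_dim(ops))          # 3: X, Z and [X, Z] (proportional to Y)
```

Whether the closure built this way reaches the full algebra of Hermitian operators for the system is then exactly the controllability criterion quoted in the question.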

+",26,,,,,6/19/2018 11:13,,,,3,,,,CC BY-SA 4.0 +2395,2,,2388,6/19/2018 11:33,,5,,"

It depends on the paper, but I have seen two approaches:

+ +
    +
  1. In most of the papers I read about the HHL algorithm and its implementation, the Hamiltonian evolution time $t$ is taken such that this factor disappears, i.e. $t = t_0 = 2\pi$.

  2. +
  3. The approximate eigenvalue is often written $\tilde \lambda$. In some papers this notation really means ""the approximation of the true eigenvalue $\lambda$"" and in other papers they seem to include $\frac{t}{2\pi}$ in this definition, i.e. ""$\tilde \lambda$ is the approximation of the value of $\frac{\lambda t}{2\pi}$"".

  4. +
+ +

Here are some links:

+ +
    +
  1. Quantum linear systems algorithms: a primer (Dervovic, Herbster, Mountney, Severini, Usher & Wossnig, 2018): a complete and very good article on the HHL algorithm and some improvements that have been discovered. The paper is from the 22nd of February, 2018. The value of $t$ you are interested in is first addressed in page 30, in the legend of Figure 5, and is fixed at $2\pi$.

  2. +
  3. Quantum Circuit Design for Solving Linear Systems of Equations (Cao, Daskin, Frankel & Kais, 2013) (take the v2 and not the v3): a detailed implementation of the HHL algorithm for a fixed 4x4 matrix. If you plan to use the article, let me warn you that it contains some mistakes. I can provide you with the ones I found if you are interested. The value for $t$ (which is denoted as $t_0$ in this paper) is fixed to $2\pi$ in the second page (at the start of the right column).

  4. +
  5. Experimental Quantum Computing to Solve Systems of Linear Equations (Cai, Weedbrook, Su, Chen, Gu, Zhu, Li, Liu, Lu & Pan, 2013): an implementation of HHL algorithm for a 2x2 matrix on an experimental setup. They fix $t = 2\pi$ in the legend of Figure 1.

  6. +
  7. Experimental realization of quantum algorithm for solving linear systems of equations (Pan, Cao, Yao, Li, Ju, Peng, Kais & Du, 2013): implementation of HHL for a 2x2 matrix. The implementation is similar to the one given in the second point above, with the 4x4 matrix. They fix $t_0 = 2\pi$ in page 3, bullet point n°2.

  8. +
+",1386,,1386,,6/19/2018 14:25,6/19/2018 14:25,,,,0,,,,CC BY-SA 4.0 +2396,2,,2393,6/19/2018 13:38,,6,,"

1. Definitions

+ +

Names and symbols used in this answer follow the ones defined in Quantum linear systems algorithms: a primer (Dervovic, Herbster, Mountney, Severini, Usher & Wossnig, 2018). A recall is done below.

+ +

1.1 Register names

+ +

Register names are defined in Figure 5. of Quantum linear systems algorithms: a primer (Dervovic, Herbster, Mountney, Severini, Usher & Wossnig, 2018) (reproduced below):

+ +
    +
  • $S$ (1 qubit) is the ancilla register used to check if the output is valid or not.
  • +
  • $C$ ($n$ qubits) is the clock register, i.e. the register used to estimate the eigenvalues of the hamiltonian with quantum phase estimation (QPE).
  • +
  • $I$ ($m$ qubits) is the register storing the right-hand side of the equation $Ax = b$. It stores $x$, the result of the equation, when $S$ is measured to be $\left|1\right>$ at the end of the algorithm.
  • +
+ +

+ +

2. About $\left|\Psi_0\right>$:

+ +
    +
  1. What exactly is $\left|\Psi_0\right>$?

    + +

    $\left|\Psi_0\right>$ is one possible initial state of the clock register $C$.

  2. +
  3. What do $T$ and $\tau$ stand for?

    + +

    $T$ stands for a big positive integer. This $T$ should be as large as possible, because the expression of $\left|\Psi_0\right>$ asymptotically minimises a given error as $T$ grows to infinity. In the expression of $\left|\Psi_0\right>$, $T$ will be $2^n$, the number of possible states for the quantum clock $C$.

    + +

    $\tau$ is just the summation index.

  4. +
  5. Why such a gigantic expression for $\left|\Psi_0\right>$?

    + +

    See DaftWullie's post for a detailed explanation.

    + +

    Following the citations in Quantum algorithm for linear systems of equations (Harrow, Hassidim & Lloyd, 2009 v3) we end up with:

    + +
      +
    1. The previous version of the same paper Quantum algorithm for linear systems of equations (Harrow, Hassidim & Lloyd, 2009 v2). The authors revised the paper 2 times (there are 3 versions of the original HHL paper) and version n°3 does not include all the information provided in the previous versions. In the V2 (section A.3. starting at page 17), the authors provide a detailed analysis of the error with this special initial state.
    2. +
    3. Optimal Quantum Clocks (Buzek, Derka, Massar, 1998) where the expression of $\left|\Psi_0\right>$ is given as $\left|\Psi_{opt}\right>$ in Equation 10. I don't have the knowledge to fully understand this part, but it seems like this expression is ""optimal"" in some sense.
    4. +
  6. +
+ +
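As a quick numerical sanity check on the expression above (a sketch of my own, not from the papers), the amplitudes $\sqrt{2/T}\sin\frac{\pi(\tau+1/2)}{T}$ do define a properly normalised state for every $T \ge 2$:

```python
import numpy as np

def psi0(T):
    # amplitudes of |Psi_0> = sqrt(2/T) * sum_tau sin(pi(tau + 1/2)/T)|tau>
    tau = np.arange(T)
    return np.sqrt(2.0 / T) * np.sin(np.pi * (tau + 0.5) / T)

# The state is exactly normalised for every T >= 2:
for T in (2, 8, 64, 1024):
    print(T, np.isclose(np.sum(psi0(T) ** 2), 1.0))
```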

3. Preparation of $\left|\Psi_0\right>$:

+ +

As said in the previous part, $\left|\Psi_0\right>$ is an initial state. They do not prepare $\left|\Psi_0\right>$ after the phase estimation procedure; the sentence ordering is not really optimal in the paper. The phase estimation procedure they use in the paper is a little bit different from the ""classic"" phase estimation algorithm represented in the quantum circuit linked in part 1, and that is why they explain it in detail.

+ +

Their phase estimation algorithm is:

+ +
    +
  1. Prepare the $\left|\Psi_0\right>$ state in the register $C$.
  2. +
  3. Apply the conditional Hamiltonian evolution to the registers $C$ and $I$ (which are in the state $\left|\Psi_0\right>\otimes \left|b\right>$).
  4. +
  5. Apply the quantum Fourier transform to the resulting state.
  6. +
+ +

Finally, the $C$ in $\left| \Psi_0 \right>^C$ means that the state $\left| \Psi_0 \right>$ is stored in the register $C$. This is a short and convenient notation to keep track of the registers used.

+ +

4. Hamiltonian simulation:

+ +

First of all, $\kappa$ is the condition number (Wikipedia page on ""condition number"") of the matrix $A$.

+ +

$\sum_{\tau = 0}^{T-1}|\tau\rangle \langle \tau|^{C}\otimes e^{iA\tau t_0/T}$ is the mathematical representation of a quantum gate.

+ +

The first part in the sum $|\tau\rangle \langle \tau|^{C}$ is a control part. It means that the operation will be controlled by the state of the first quantum register (the register $C$ as the exponent tells us).

+ +

The second part is the ""Hamiltonian simulation"" gate, i.e. a quantum gate that will apply the unitary matrix given by $e^{iA\tau t_0/T}$ to the second register (the register $I$ that is in the initial state $\left|b\right>$).

+ +

The whole sum is the mathematical representation of the controlled-U operation in the quantum circuit of ""1. Definitions"", with $U = e^{iA\tau t_0/T}$.
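To make this concrete, here is a small numerical sketch (my own illustration, not from the paper) that builds $\sum_{\tau=0}^{T-1}|\tau\rangle\langle\tau|^{C}\otimes e^{iA\tau t_0/T}$ for a toy $2\times 2$ Hermitian $A$ and a 2-qubit clock register, and checks that the result is unitary, as any quantum gate must be:

```python
import numpy as np

# Toy illustration: A is a 2x2 Hermitian matrix, the clock register C
# has n = 2 qubits, so T = 2**n = 4. The values here are made up.
A = np.array([[1.0, 0.5], [0.5, -1.0]])
n, t0 = 2, 2 * np.pi
T = 2 ** n

def exp_iH(H, s):
    # exp(i * s * H) for Hermitian H, via eigendecomposition
    w, V = np.linalg.eigh(H)
    return (V * np.exp(1j * s * w)) @ V.conj().T

# U = sum_tau |tau><tau|^C (x) exp(i A tau t0 / T)
U = np.zeros((2 * T, 2 * T), dtype=complex)
for tau in range(T):
    proj = np.zeros((T, T))
    proj[tau, tau] = 1.0              # |tau><tau| on the clock register
    U += np.kron(proj, exp_iH(A, tau * t0 / T))

# The whole sum is unitary, as a quantum gate must be:
print(np.allclose(U.conj().T @ U, np.eye(2 * T)))  # True
```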

+",1386,,1386,,6/27/2018 6:21,6/27/2018 6:21,,,,0,,,,CC BY-SA 4.0 +2397,1,2398,,6/19/2018 17:44,,11,2938,"

A CCCNOT gate is a four-bit reversible gate that flips its fourth bit if and only if the first three bits are all in the state $1$.

+ +

How would I implement a CCCNOT gate using Toffoli gates? Assume that you may use workspace bits that start with a particular value, either 0 or 1, provided you return them to that value.

+",2713,,26,,12/13/2018 19:55,11/25/2019 19:37,Implementing a CCCNOT gate using only Toffoli gates,,1,4,,,,CC BY-SA 4.0 +2398,2,,2397,6/19/2018 19:20,,10,,"

I guess what you're looking for is the following circuit. Here, $b_1,b_2,b_3,b_4 \in \{0,1\}$, and $\oplus$ is addition modulo $2$.

+ +

+ +

Here, the fifth qubit is used as an auxiliary, or ancilla qubit. It starts at $|0\rangle$ and ends in $|0\rangle$ when the circuit is applied.

+ +

Let me elaborate on how this circuit works. The idea is to first of all check whether the first two qubits are in state $|1\rangle$. This can be done using a single Toffoli gate, and the result is stored in the auxiliary qubit. Now, the problem reduces to flipping qubit $4$, whenever qubits $3$ and the auxiliary qubit are in $|1\rangle$. This can also be achieved using one application of a Toffoli gate, namely the middle one in the circuit shown above. Finally, the last Toffoli gate serves to uncompute the temporary result that we stored in the auxiliary qubit, such that the state of this qubit returns to $|0\rangle$ after the circuit is applied.

+ +
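Since the Toffoli gate simply permutes computational basis states, it is enough (by linearity) to check the three-Toffoli construction above on all 16 classical inputs, with the ancilla starting in 0. Here is a small sketch of mine in plain Python; the helper function and qubit indexing are just for illustration:

```python
def toffoli(bits, c1, c2, t):
    # Apply a Toffoli gate to a tuple of classical bits (basis state):
    # flip bit t iff bits c1 and c2 are both 1.
    b = list(bits)
    b[t] ^= b[c1] & b[c2]
    return tuple(b)

# Check the circuit on all 16 computational basis states, ancilla = 0.
for x in range(16):
    b1, b2, b3, b4 = (x >> 3) & 1, (x >> 2) & 1, (x >> 1) & 1, x & 1
    state = (b1, b2, b3, b4, 0)      # qubits 0..3 plus ancilla (index 4)
    state = toffoli(state, 0, 1, 4)  # store b1 AND b2 in the ancilla
    state = toffoli(state, 2, 4, 3)  # flip b4 iff b3 AND (b1 AND b2)
    state = toffoli(state, 0, 1, 4)  # uncompute the ancilla
    assert state[3] == b4 ^ (b1 & b2 & b3)  # CCCNOT action on qubit 4
    assert state[4] == 0                    # ancilla restored to 0
print('circuit matches CCCNOT on all basis states')
```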
+ +

In the comment section, the question arose whether it is possible to implement such a circuit using only Toffoli gates, without using auxiliary qubits. This question can be answered in the negative, as I will show here.

+ +

We want to implement the $\mathrm{CCCNOT}$-gate, which acts on four qubits. We can define the following matrix (the matrix representation of the Pauli-$X$-gate): +$$X = \begin{bmatrix} +0 & 1 \\ 1 & 0 +\end{bmatrix}$$ +Furthermore, we denote the $N$-dimensional identity matrix by $I_N$. Now, we observe that the matrix representation of the $\mathrm{CCCNOT}$-gate, acting on four qubits, is given by the following $16 \times 16$ matrix: +$$\mathrm{CCCNOT} = \begin{bmatrix} +I_{14} & 0 \\ 0 & X +\end{bmatrix}$$ +Hence, we can determine its determinant: +$$\det(\mathrm{CCCNOT}) = -1$$ +Now consider the matrix representation of the Toffoli gate, acting on the first three qubits of a $4$-qubit system. Its matrix representation is written as (where we used the Kronecker product of matrices): +$$\mathrm{Toffoli} \otimes I_2 = \begin{bmatrix} +I_6 & 0 \\ 0 & X +\end{bmatrix} \otimes I_2 = \begin{bmatrix} +I_{12} & 0 \\ 0 & X \otimes I_2 +\end{bmatrix} = \begin{bmatrix} +I_{12} & 0 & 0 \\ 0 & 0 & I_2 \\ 0 & I_2 & 0 +\end{bmatrix}$$ +Calculating its determinant yields: +$$\det(\mathrm{Toffoli} \otimes I_2) = 1$$ +The Toffoli gates can also act on different qubits of course. Suppose we let the Toffoli gate act on the first, second and fourth qubit, where the fourth qubit is the target qubit. Then we obtain the new matrix representation from the one displayed above by swapping the columns corresponding to the states that differ only in the third and fourth qubit, i.e., $|0001\rangle$ with $|0010\rangle$, $|0101\rangle$ with $|0110\rangle$, etc. The important thing to note here, is that the number of swaps of columns is even, and hence that the determinant remains unchanged. As we can write every permutation of qubits as a sequence of consecutive permutations of just $2$ qubits (that is, $S_4$ is generated by the transpositions in $S_4$), we find that for all Toffoli gates, applied to any combination of control and target qubits, its matrix representation has determinant $1$.

+ +

The final thing to note is that the determinant commutes with matrix multiplication, i.e., $\det(AB) = \det(A)\det(B)$, for any two matrices $A$ and $B$ compatible with matrix multiplication. Hence, it now becomes apparent that applying multiple Toffoli gates in sequence never creates a circuit whose matrix representation has a determinant different from $1$, which in particular implies that the $\mathrm{CCCNOT}$-gate cannot be implemented using only Toffoli gates on $4$ qubits.

+ +

The obvious question, now, is what changes when we do allow an auxiliary qubit. We find the answer when we write out the action of the $\mathrm{CCCNOT}$-gate on a $5$-qubit system: +$$\mathrm{CCCNOT} \otimes I_2 = \begin{bmatrix} +I_{14} & 0 \\ 0 & X +\end{bmatrix} \otimes I_2 = \begin{bmatrix} +I_{28} & 0 & 0 \\ 0 & 0 & I_2 \\ 0 & I_2 & 0 +\end{bmatrix}$$ +If we calculate this determinant, we find: +$$\det(\mathrm{CCCNOT} \otimes I_2) = 1$$ +Hence, the determinant of the $\mathrm{CCCNOT}$-gate acting on $5$ qubits is $1$, instead of $-1$. This is why the previous argument is not valid for $5$ qubits, as we already knew because of the explicitly constructed circuit the OP asked for.
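For the sceptical reader, the three determinants used in this argument can also be checked numerically. This is just a sanity-check sketch of mine with numpy, not part of the proof:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]])

# CCCNOT on 4 qubits: identity except an X block on |1110>, |1111>
CCCNOT = np.eye(16)
CCCNOT[14:, 14:] = X

# Toffoli on 3 qubits, embedded as the first three qubits of four
Toffoli = np.eye(8)
Toffoli[6:, 6:] = X
Toffoli4 = np.kron(Toffoli, np.eye(2))

print(int(round(np.linalg.det(CCCNOT))))                      # -1
print(int(round(np.linalg.det(Toffoli4))))                    # 1
print(int(round(np.linalg.det(np.kron(CCCNOT, np.eye(2))))))  # 1
```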

+",24,,26,,11/25/2019 19:37,11/25/2019 19:37,,,,8,,,,CC BY-SA 4.0 +2399,1,2401,,6/19/2018 21:14,,14,746,"

This question is very similar to Is there any general statement about what kinds of problems can be solved more efficiently using a quantum computer?

+ +

But the answers provided to that questions mainly looked at it from a theoretical/mathematical point of view.

+ +

For this question, I am more interested in the practical/engineering point of view. So I would like to understand what kind of problems can be more efficiently solved by a quantum algorithm than you would currently be able to do with a classical algorithm. So I am really assuming that you do not have all knowledge about all possible classical algorithms that could optimally solve the same problem!

+ +

I am aware that the Quantum Algorithm Zoo lists a whole collection of problems for which there exists a quantum algorithm that runs more efficiently than any known classical algorithm, but I fail to link these algorithms to real-world problems.

+ +

I understand that Shor's factoring algorithm is very important in the world of cryptography but I have deliberately excluded cryptography from the scope of this question as the world of cryptography is a very specific world which deserves his own questions.

+ +

By efficient quantum algorithms, I mean that there must at least be one step in the algorithm that must be translated to a quantum circuit on an n-qubit quantum computer. So basically this quantum circuit is creating a $2^n \times 2^n$ matrix and its execution will give one of the $2^n$ possibilities with a certain probability (so different runs might give different results - where the likelihood of each of the $2^n$ possibilities is determined by the constructed $2^n \times 2^n$ Hermitian matrix).

+ +

So I think, to answer my question, there must be some aspect/characteristic of the real-world problem that can be mapped to a $2^n \times 2^n$ Hermitian matrix. So what kind of aspects/characteristics of a real-world problem can be mapped to such a matrix?

+ +

By real-world problem I mean an actual problem that might be solved by a quantum algorithm; I don't mean a domain where there might be a potential use of a quantum algorithm.

+",2529,,26,,1/27/2019 8:09,3/16/2022 7:13,What kind of real-world problems (excluding cryptography) can be solved efficiently by a quantum algorithm?,,2,0,,,,CC BY-SA 4.0 +2400,2,,2381,6/19/2018 22:53,,7,,"

I'll probably be expanding this more (!) and adding pictures and links as I have time, but here's my first shot at this.

+
+

Mostly math-free explanation

+

A special coin

+

Let's begin by thinking about normal bits. Imagine this normal bit is a coin, that we can flip to be heads or tails. We'll call heads equivalent to "1" and tails "0". Now imagine instead of just flipping this coin, we can rotate it - 45${}^\circ$ above horizontal, 50$^\circ$ above horizontal, 10$^\circ$ below horizontal, whatever - these are all states. This opens up a huge new possibility of states - I could encode the whole works of Shakespeare into this one coin this way.

+

But what's the catch? No such thing as a free lunch, as the saying goes. When I actually look at the coin, to see what state it's in, it becomes either heads or tails, based on probability - a good way to look at it is if it's closer to heads, it's more likely to become heads when looked at, and vice versa, though there's a chance the close-to-heads coin could become tails when looked at.

+

Further, once I look at this special coin, any information that was in it before can't be accessed again. If I look at my Shakespeare coin, I just get heads or tails, and when I look away, it still is whatever I saw when I looked at it - it doesn't magically revert to Shakespeare coin. I should note here that you might think, as Blue points out in the comments, that

+
+

Given the huge advancement in modern day technology there's nothing stopping me from monitoring the exact orientation of a coin tossed in air as it falls. I don't necessarily need to "look into it" i.e. stop it and check whether it has fallen as "heads" or "tails".

+
+

This "monitoring" counts as measurement. There is no way to see the inbetween state of this coin. None, nada, zilch. This is a bit different from a normal coin, isn't it?

+

So encoding all the works of Shakespeare in our coin is theoretically possible but we can never truly access that information, so not very useful.

+

Nice little mathematical curiosity we've got here, but how could we actually do anything with this?

+

The problem with classical mechanics

+

Well, let's take a step back a minute here and switch to another tack. If I throw a ball to you and you catch it, we can basically model that ball's motion exactly (given all parameters). We can analyze its trajectory with Newton's laws, figure out its movement through the air using fluid mechanics (unless there's turbulence), and so forth.

+

So let's set us up a little experiment. I've got a wall with two slits in it and another wall behind that wall. I set up one of those tennis-ball-thrower things in the front and let it start throwing tennis balls. In the meantime, I'm at the back wall marking where all our tennis balls end up. When I mark this, there are clear "humps" in the data right behind the two slits, as you might expect.

+

Now, I switch our tennis-ball-thrower to something that shoots out really tiny particles. Maybe I've got a laser and we're looking at where the photons end up. Maybe I've got an electron gun. Whatever, we're looking at where these sub-atomic particles end up again. This time, we don't get the two humps, we get an interference pattern.

+

+

Does that look familiar to you at all? Imagine you drop two pebbles in a pond right next to each other. Look familiar now? The ripples in a pond interfere with each other. There are spots where they cancel out and spots where they swell bigger, making beautiful patterns. Now, we're seeing an interference pattern shooting particles. These particles must have wave-like behavior. (This is called the double slit experiment.) So maybe we were wrong all along. Sorry, electrons are waves, not particles.

+

Except...they're particles too. When you look at cathode rays (streams of electrons in vacuum tubes), the behavior there clearly shows electrons are a particle. To quote wikipedia:

+
+

Like a wave, cathode rays travel in straight lines, and produce a shadow when obstructed by objects. Ernest Rutherford demonstrated that rays could pass through thin metal foils, behavior expected of a particle. These conflicting properties caused disruptions when trying to classify it as a wave or particle [...] The debate was resolved when an electric field was used to deflect the rays by J. J. Thomson. This was evidence that the beams were composed of particles because scientists knew it was impossible to deflect electromagnetic waves with an electric field.

+
+

So...they're both. Or rather, they're something completely different. That's one of several puzzles physicists saw at the beginning of the twentieth century. If you want to look at some of the others, look at blackbody radiation or the photoelectric effect.

+

What fixed the problem - quantum mechanics

+

These problems lead us to realize that the laws that allow us to calculate the motion of that ball we're tossing back and forth just don't work on a really small scale. So a new set of laws were developed. These laws were called quantum mechanics after one of the major ideas behind them - the existence of fundamental packets of energy, called quanta.

+

The idea is that I can't just give you .00000000000000000000000000 plus a bunch more zeroes 1 Joules of energy - there is a minimum possible amount of energy I can give you. It's like, in currency systems, I can give you a dollar or a penny, but (in American money, anyway) I can't give you a "half-penny". Doesn't exist. Energy (and other values) can be like that in certain situations. (Not all situations, and this can occur in classical mechanics sometimes - see also this; thanks to Blue for pointing this out.)

+

So anyway, we got this new set of laws, quantum mechanics. The development of those laws is essentially complete, though they are not completely correct (see quantum field theories, quantum gravity), and the history of their development is kind of interesting. There was this guy, Schrodinger, of cat-killing (maybe?) fame, who came up with the wave equation formulation of quantum mechanics. And a lot of physicists preferred this, because it was sort of similar to the classical way of calculating things - integrals and hamiltonians and so forth.

+

Another guy, Heisenberg, came up with another totally different way of calculating the state of a particle quantum-mechanically, which is called matrix mechanics. Yet another guy, Dirac, proved that the matrix mechanical and wave equation formulations were equal.

+

So now, we must switch tacks again - what are matrices, and their friend vectors?

+

Vectors and matrices - or, some hopefully painless linear algebra

+

Vectors are, at their simplest, arrows. I mean, they're on a coordinate plane, and they're math-y, but they're arrows. (Or you could take the programmer view and call them lists of numbers.) They're quantities that have a magnitude and a direction. So once we have this idea of vectors...what might we use them for? Well, maybe I have an acceleration. I'm accelerating to the right at 1 m/s$^2$, for example. That could be represented by a vector. How long that arrow is represents how quickly I am accelerating, the arrow would be pointing right along the x-axis, and by convention, the arrow's tail would be situated at the origin. We notate a vector by writing something like $\begin{bmatrix}2\\3\end{bmatrix}$ which would notate a vector with its tail at the origin and its point at $(2, 3)$.

+

So we have these vectors. What sorts of math can I do with them? How can I manipulate a vector? I can multiply vectors by a normal number, like $3$ or $2$ (these are called scalars), to stretch it, shrink it (if a fraction), or flip it (if negative). I can add or subtract vectors pretty easily - if I have a vector $\begin{bmatrix}2\\3\end{bmatrix} + \begin{bmatrix}4\\2\end{bmatrix}$ that equals $\begin{bmatrix}6\\5\end{bmatrix}$. There's also stuff called dot products and cross products that we won't get into here - if interested in any of this, look up 3blue1brown's linear algebra series, which is very accessible, actually teaches you how to do it, and is a fabulous way to learn about this stuff.

+

Now let's say I have one coordinate system, that my vector is in, and then I want to move that vector to a new coordinate system. I can use something called a matrix to do that. Basically we can define in our system two vectors, called $\hat{i}$ and $\hat{j}$, read i-hat and j-hat (we're doing all this in two dimensions in the real plane; you can have higher dimension vectors with complex numbers ($\sqrt{-1} = i$) as well but we're ignoring them for simplicity), which are vectors that are one unit in the $x$ direction and one unit in the $y$ direction - that is, $\hat{i}=\begin{bmatrix}1\\0\end{bmatrix}$ and $\hat{j}=\begin{bmatrix}0\\1\end{bmatrix}$.

+

Then we see where $\hat{i}$ and $\hat{j}$ end up in our new coordinate system. In the first column of our matrix, we write the new coordinates of $\hat{i}$ and in the second column the new coordinates of $\hat{j}$. We can now multiply this matrix by any vector and get that vector in the new coordinate system (again, please watch the 3blue1brown videos on this - the visualization he provides is better than anything I could write here). The reason this works is because you can rewrite vectors as what are called linear combinations. This means that we can rewrite say, $\begin{bmatrix}2\\3\end{bmatrix}$ as $2\begin{bmatrix}1\\0\end{bmatrix} + 3\begin{bmatrix}0\\ 1\end{bmatrix}$ - that is, $2\hat{i} + 3\hat{j}$. When we use a matrix, we're effectively re-multiplying those scalars by the "new" $\hat{i}$ and $\hat{j}$ - the unit vectors in the new coordinate system. These matrices, and linear algebra in general, are used a lot in many fields (computer graphics, solving systems of equations, and so much more), but this is where the name matrix mechanics comes from.
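Here is a tiny numpy sketch of the idea (my own illustration): the columns of a matrix record where $\hat{i}$ and $\hat{j}$ land, and multiplying the matrix by a vector re-applies the same linear combination to the new basis vectors.

```python
import numpy as np

# The vector [2, 3] as a linear combination 2*i_hat + 3*j_hat
i_hat = np.array([1, 0])
j_hat = np.array([0, 1])
v = 2 * i_hat + 3 * j_hat
print(v)  # [2 3]

# A matrix whose columns are the images of i_hat and j_hat: here, a
# 90-degree counter-clockwise rotation sends i_hat -> [0, 1] and
# j_hat -> [-1, 0].
M = np.array([[0, -1],
              [1,  0]])
# The same linear combination, taken of the rotated basis vectors:
print(M @ v)  # [-3  2]
```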

+

Tying it all together

+

Now matrices can represent rotations of the coordinate plane, or stretching or shrinking the coordinate plane, or a bunch of other things. But some of this behavior...sounds kind of familiar, doesn't it? Our little special coin sounds kind of like it. We have this rotation idea. What if we represent the horizontal state by $\hat{i}$, and the vertical by $\hat{j}$, and describe what the rotation of our coin is using linear combinations? That works, and makes our system much easier to describe. So our little coin can be described using linear algebra.

+

What else can be described linear algebra and has weird probabilities and measurement? Quantum mechanics. (In particular, this idea of linear combinations becomes the idea called a superposition, which is where the whole idea, oversimplified to the point it's not really correct, of "two states at the same time" comes from.) So these special coins can be quantum mechanical objects. What sorts of things are quantum mechanical objects?

+
    +
  • photons
  • +
  • superconductors
  • +
  • electron energy states in an atom
  • +
+

Anything, in other words, that has the discrete energy (quanta) behavior, but also can act like a wave - they can interfere with one another and so forth.

+

So we have these special quantum mechanical coins. What should we call them? They store an information state like bits...but they're quantum. They're qubits. And now what do we do? We manipulate the information stored in them with matrices (ahem, gates). We measure to get results. In short, we compute.

+

Now, we know that we cannot encode infinite amounts of information in a qubit and still access it (see the notes on our "Shakespeare coin"), so what then is the advantage of a qubit? It comes in the fact that those extra bits of information can affect all the other qubits (it's that superposition/linear combination idea again), which affects the probability, which then affects your answer - but it's very difficult to use, which is why there are so few quantum algorithms.

+

The special coin versus the normal coin - or, what makes a qubit different?

+

So...we have this qubit. But Blue brings up a great point.

+
+

how is a quantum state like $\frac{1}{\sqrt{2}}|0\rangle + \frac{1}{\sqrt{2}}|1\rangle$ different from a coin which when tossed in the air has a 50−50 chance of turning out to be heads or tails. Why can't we say that a classical coin is a "qubit" or call a set of classical coins a system of qubits?

+
+

There are several differences - the way that measurement works (see the fourth paragraph), this whole superposition idea - but the defining difference (Mithrandir24601 pointed this out in chat, and I agree) is the violation of the Bell inequalities.

+

Let's take another tack. Back when quantum mechanics was being developed, there was a big debate. It started between Einstein and Bohr. When Schrodinger's wave theory was developed, it was clear that quantum mechanics would be a probabilistic theory. Max Born published a paper about this probabilistic worldview, which he concluded saying

+
+

Here the whole problem of determinism comes up. From the standpoint of our quantum mechanics there is no quantity which in any individual case causally fixes the consequence of the collision; but also experimentally we have so far no reason to believe that there are some inner properties of the atom which conditions a definite outcome for the collision. Ought we to hope later to discover such properties ... and determine them in individual cases? Or ought we to believe that the agreement of theory and experiment—as to the impossibility of prescribing conditions for a causal evolution—is a pre-established harmony founded on the nonexistence of such conditions? I myself am inclined to give up determinism in the world of atoms. But that is a philosophical question for which physical arguments alone are not decisive.

+
+

The idea of determinism has been around for a while. Perhaps one of the more famous quotes on the subject is from Laplace, who said

+
+

An intellect which at a certain moment would know all forces that set nature in motion, and all positions of all items of which nature is composed, if this intellect were also vast enough to submit these data to analysis, it would embrace in a single formula the movements of the greatest bodies of the universe and those of the tiniest atom; for such an intellect nothing would be uncertain and the future just like the past would be present before its eyes.

+
+

The idea of determinism is that if you know all there is to know about a current state, and apply the physical laws we have, you can figure out (effectively) the future. However, quantum mechanics decimates this idea with probability. (Notably, this has also been somewhat ruined classically by chaos theory, of "if a butterfly flaps its wings in Tokyo" fame - I encourage you to look into this, as chaotic systems can be incredibly interesting.) "I myself am inclined to give up determinism in the world of atoms." This is a huge deal!

+

Albert Einstein's famous response:

+
+

Quantum mechanics is very worthy of regard. But an inner voice tells me that this is not yet the right track. The theory yields much, but it hardly brings us closer to the Old One's secrets. I, in any case, am convinced that He does not play dice.

+
+

(Bohr's response was apparently "Stop telling God what to do", but anyway.)

+

For a while, there was debate. Hidden variable theories came up, where it wasn't just probability - there was a way the particle "knew" what it was going to be when measured; it wasn't all up to chance. And then, there was the Bell inequality. To quote Wikipedia,

+
+

In its simplest form, Bell's theorem states

+
+

No physical theory of local hidden variables can ever reproduce all of the predictions of quantum mechanics.

+
+
+

And it provided a way to experimentally check this. It's true - it is pure probability. This is no classical behavior. It is all chance, chance that affects other chances through superposition, and then "collapses" to a single state upon measurement (if you follow the Copenhagen interpretation). So to summarize: firstly, measurement is fundamentally different in quantum mechanics, and secondly, that quantum mechanics is not deterministic. Both of these points mean that any quantum system, including a qubit, is going to be fundamentally different from any classical system.

+
+

A small disclaimer

+

+

As xkcd wisely points out, any analogy is an approximation. This answer isn't formal at all, and there's a heck of a lot more to this stuff. I'm hoping to add to this answer with a slightly more formal (though still not completely formal) description, but please keep this in mind.

+
+

Resources

+
    +
  • Nielsen and Chuang, Quantum Computation and Quantum Information. The bible of quantum computing.

    +
  • +
  • 3blue1brown's linear algebra and calculus courses are great for the math.

    +
  • +
  • Michael Nielsen (yeah, the guy who coauthored the textbook above) has a video series called Quantum Computing for the Determined. 10/10 would recommend.

    +
  • +
  • quirk is a great little simulator of a quantum computer that you can play around with.

    +
  • +
  • I wrote some blog posts on this subject a while back (if you don't mind reading my writing, which isn't very good) that can be found here which attempts to start from the basics and work on up.

    +
  • +
+",91,,91,,09-11-2021 18:26,09-11-2021 18:26,,,,0,,,,CC BY-SA 4.0 +2401,2,,2399,6/20/2018 5:05,,9,,"

I won't be giving any precise statements about which problems can be solved more efficiently using quantum algorithms (compared to existing classical algorithms) but rather some examples:

+ +
    +
  • Discrete Fourier transform (DFT) is used in pretty much all modern day music systems, for example in iPods. That algorithm single-handedly changed the world of digital music. See this for a summary. However, the quantum Fourier transform can further improve upon the complexity of the classical fast Fourier transform, i.e. from $\mathcal{O}(N\log N)$ to $\mathcal{O}(\log^2 N)$. I've written an answer regarding this here.

  • +
  • The Quantum algorithm for linear systems of equations provides an exponential speedup over the classical methods like Gaussian elimination.

  • +
+ +
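As a quick sanity check on the QFT/DFT relationship mentioned above (my own illustration, not from the linked answer): the QFT is just the unitarily normalized DFT matrix, so its action on a state vector can be verified classically with NumPy — the quantum advantage is in implementing this matrix with $\mathcal{O}(\log^2 N)$ gates, not in computing it faster classically.

```python
import numpy as np

def qft_matrix(n_qubits):
    """The 2**n x 2**n quantum Fourier transform as an explicit unitary."""
    N = 2 ** n_qubits
    j, k = np.meshgrid(np.arange(N), np.arange(N))
    return np.exp(2j * np.pi * j * k / N) / np.sqrt(N)

F = qft_matrix(3)
print(np.allclose(F.conj().T @ F, np.eye(8)))  # True: F is unitary
psi = np.zeros(8); psi[1] = 1.0                # the basis state |001>
# Same amplitudes as the classical (inverse-sign, ortho-normalized) DFT
print(np.allclose(F @ psi, np.fft.ifft(psi, norm="ortho")))  # True
```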
+

The quantum algorithm for linear systems of equations, designed by + Aram Harrow, Avinatan Hassidim, and Seth Lloyd is a quantum algorithm + formulated in 2009 for solving linear systems. The algorithm estimates + the result of a scalar measurement on the solution vector to a given + linear system of equations.

+ +

The algorithm is one of the main fundamental algorithms expected to + provide a speedup over their classical counterparts, along with Shor's + factoring algorithm, Grover's search algorithm and quantum simulation. + Provided the linear system is sparse and has a low condition number + ${\displaystyle \kappa }$, and that the user is interested in + the result of a scalar measurement on the solution vector, instead of + the values of the solution vector itself, then the algorithm has a + runtime of $O(\log(N)\kappa ^{2})$, where ${\displaystyle N}$ is the + number of variables in the linear system. This offers an exponential + speedup over the fastest classical algorithm, which runs in + ${\displaystyle O(N\kappa )}$ (or $O(N{\sqrt {\kappa }})$ for positive + semidefinite matrices).

+
+ + + +
+

One of the earliest – and most important – applications of a quantum + computer is likely to be the simulation of quantum mechanical systems. + There are quantum systems for which no efficient classical simulation + is known, but which we can simulate on a universal quantum computer. + What does it mean to “simulate” a physical system? According to the + OED, simulation is “the technique of imitating the behaviour of some + situation or process (whether economic, military, mechanical, etc.) by + means of a suitably analogous situation or apparatus”. What we will + take simulation to mean here is approximating the dynamics of a + physical system. Rather than tailoring our simulator to simulate only + one type of physical system (which is sometimes called analogue + simulation), we seek a general simulation algorithm which can simulate + many different types of system (sometimes called digital simulation)

+
+ +

For the details, check chapter 7 of the lecture notes by Ashley Montanaro.

+ + + +
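To make the idea of digital simulation concrete, here is a minimal classical illustration of my own (not the algorithm from the lecture notes): first-order Trotterization approximates $e^{-i(A+B)t}$ by alternating many short evolutions under $A$ and $B$ separately.

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def evolve(H, t):
    """exp(-iHt) for a Hermitian H, via eigendecomposition."""
    w, V = np.linalg.eigh(H)
    return V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T

t, n = 1.0, 1000
exact = evolve(X + Z, t)
# First-order Trotter: (exp(-iXt/n) exp(-iZt/n))^n
trotter = np.linalg.matrix_power(evolve(X, t / n) @ evolve(Z, t / n), n)
print(np.max(np.abs(trotter - exact)))  # small; shrinks as n grows
```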
+

Hybrid Quantum/Classical Algorithms combine quantum state preparation + and measurement with classical optimization. These algorithms + generally aim to determine the ground state eigenvector and eigenvalue + of a Hermitian Operator.

+ +

QAOA:

+ +

The quantum approximate optimization + algorithm[1] is a toy model of quantum + annealing which can be used to solve problems in graph theory. The + algorithm makes use of classical optimization of quantum operations to + maximize an objective function.

+ +

Variational Quantum Eigensolver

+ +

The VQE algorithm applies classical optimization to minimize the + energy expectation of an ansatz state to find the ground state energy + of a molecule [2]. This can also be extended to find + excited energies of molecules.[3].

+
+ +
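As a toy illustration of the hybrid idea (a deliberately simplified sketch of my own, not the circuits from the cited papers): a classical optimizer tunes a parameter $\theta$ of an ansatz state so as to minimize the measured energy expectation.

```python
import numpy as np

# Toy VQE: minimize <psi(theta)|H|psi(theta)> for H = Z, with the
# one-parameter ansatz |psi(theta)> = cos(theta/2)|0> + sin(theta/2)|1>.
Z = np.array([[1.0, 0.0], [0.0, -1.0]])

def energy(theta):
    psi = np.array([np.cos(theta / 2), np.sin(theta / 2)])
    return psi @ Z @ psi

# Stand-in for the classical optimizer: a simple grid search over theta
thetas = np.linspace(0, 2 * np.pi, 1001)
best = min(thetas, key=energy)
print(energy(best))  # close to -1, the ground-state energy of Z
```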

You can find many more such examples on Wikipedia itself. Apart from those, there are lots of recent algorithms which can be used in machine learning and data science. This answer will get a bit too long if I add the details of all those. However, see this and this and the references therein.

+ +

[1]: A Quantum Approximate Optimization Algorithm Farhi et al. (2014)

+ +

[2]: A variational eigenvalue solver on a quantum processor Peruzzo et al. (2013)

+ +

[3]: Variational Quantum Computation of Excited States Brierley et al. (2018)

+",26,,26,,6/20/2018 5:38,6/20/2018 5:38,,,,8,,,,CC BY-SA 4.0 +2402,1,2404,,6/20/2018 9:32,,11,5660,"

I want to know what time complexity is considered efficient/inefficient for quantum computers. For this, I need to know how many operations a quantum computer can perform per second. Can anyone tell me how to calculate it and what factors it depends on (implementation details or number of qubits etc.)?

+",2559,,26,,07-03-2018 18:23,6/27/2019 7:16,How many operations can a quantum computer perform per second?,,2,0,,,,CC BY-SA 4.0 +2403,1,,,6/20/2018 9:50,,7,778,"

Governments, big companies (list of quantum processors) and smaller ones are competing to build bigger and bigger quantum computers.

+ +

Not unexpectedly, the number of qubits in those quantum computers seems to double every year, but those qubits are noisy. A more meaningful metric is the number of error-corrected qubits, and some sources say that we need 100 noisy qubits to simulate one error-corrected qubit.

+ +

Besides the error-corrected qubits, there are other substantial hurdles that need to be overcome to arrive at a universal quantum computer that can do something useful (see Why Will Quantum Computers be Slow?)

+ +

I read quotes like this (source: GOOGLE, ALIBABA SPAR OVER TIMELINE FOR 'QUANTUM SUPREMACY'):

+ +
+

Intel CTO Mike Mayberry told WIRED this week that he sees broad + commercialization of the technology as a 10-year project. IBM has said + it can be “mainstream” in five.

+
+ +

So when might we actually expect the first universal quantum computer that can do something useful outside the academic world?

+ +

By useful I mean something that brings real value (e.g. a commercial application) and that cannot be done with the same efficiency using existing classical computing algorithms/models.

+ +

I have deliberately said outside the academic world because, purely by their existence, current quantum computers are obviously already useful to the theoretical computer scientist, quantum physicist, quantum specialist, ...

+ +

What currently interests me more is the usefulness of the outcome of these quantum algorithms for real-world problems (e.g. cracking an actual key, designing a new molecule with specific characteristics, finding the optimal solution of an actual problem, recognizing real images, ...)

+",2529,,2529,,6/20/2018 11:31,6/21/2018 22:09,When can we expect the first (universal) quantum computer being able to do something useful outside the academic world?,,3,0,,6/22/2018 11:01,,CC BY-SA 4.0 +2404,2,,2402,6/20/2018 10:00,,18,,"

Giving an estimate for a generic quantum chip is impossible as there is no standard implementation at the moment.

+ +

Nevertheless, it is possible to estimate this number for a specific quantum chip, with the information provided online. I found information on the IBM Q chips, so here is the answer for the IBM Q 5 Tenerife chip. In the link you will find information on the chip, but nothing about timings. You need to access the version log of the chip (via a link given on the IBM Q 5 Tenerife chip's page). In this version log, go to the ""Gate Specification"" section, where you will find the following information (more explanation below):

+ +
    +
  1. A time for ""GD"", which is 60ns in the link above.
  2. +
  3. Multiple times for ""GF"" (let's take 200ns for the computations below).
  4. +
  5. A ""buffer time"", which is 10ns in the link above.
  6. +
+ +

But what do ""GD"", ""GF"" or ""buffer time"" represent? They are base physical operations, i.e. the operations that will be performed on the physical qubit. These physical operation are then used to implement some base quantum gates. You can find the decomposition of the 4 base quantum gates of the IBM Q backends in terms of these physical operations on the IBM Q 5 Tenerife chips page. I copied the illustration below.

+ +

+ +

Along with ""GD"" and ""GF"", there is a physical ""FC"" operation that does not appear in the timings. This is because this ""FC"" operation just ""changes the frame of the following pulses"" (citing Jay Gambetta from a conversation on the QISKit Slack), and so the ""FC"" operation has a cost (time of application) of 0.

+ +

The ""buffer time"" is just a pause time between each physical operation application.

+ +

So finally we can compute the time needed to apply each base gate on this specific backend:

+ +
    +
  1. U1: 0ns
  2. +
  3. U2: 70ns = 0ns + 60ns + 10ns (buffer) + 0ns
  4. +
  5. U3: 140ns = 0ns + 60ns + 10ns (buffer) + 0ns + 60ns + 10ns (buffer) + 0ns
  6. +
  7. CX: 560ns = 0ns + 60ns + 10ns (buffer) + 200ns + 10ns (buffer) + 60ns + 10ns (buffer) + 200ns + 10ns (buffer)
  8. +
+ +

From these timings, you can deduce the number of operations per second that the ibmqx4 backend can perform.

+ +

Taking 200ns per operation as a crude approximation of the mean timing for an operation, you end up with 5 000 000 operations per second.

+ +
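The per-gate figures above follow mechanically from the three physical timings; a small script (timings hard-coded from the version log quoted above) makes the arithmetic explicit:

```python
# Gate times for the IBM Q 5 Tenerife backend, recomputed from the
# physical pulse durations quoted above (GD = 60 ns, GF = 200 ns,
# buffer = 10 ns; FC costs 0 ns).
GD, GF, BUF = 60, 200, 10  # nanoseconds

gate_times = {
    "U1": 0,                                # frame changes (FC) only
    "U2": GD + BUF,                         # one GD pulse plus a buffer
    "U3": 2 * (GD + BUF),                   # two GD pulses, each buffered
    "CX": 2 * (GD + BUF) + 2 * (GF + BUF),  # GD and GF pulses, each buffered
}
print(gate_times)  # {'U1': 0, 'U2': 70, 'U3': 140, 'CX': 560}

# Crude throughput estimate at ~200 ns per operation:
print(10**9 // 200, "operations per second")  # 5000000 operations per second
```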

You can find the data for other backends on the qiskit-backend-information GitHub repository.

+",1386,,1386,,6/27/2019 7:16,6/27/2019 7:16,,,,0,,,,CC BY-SA 4.0 +2405,2,,2402,6/20/2018 10:02,,12,,"

There is an important difference between physical operations and logical operations.

+ +

Physical operations will be slightly imperfect, and are performed on qubits that are also imperfect. The rate at which these can be performed depends on what physical system is being used to realize the qubits. For example, superconducting qubits can perform two qubit gates (the slowest ones) in a time on the order of 100 ns (see Nelimee's answer).

+ +

By combining many physical qubits, and doing a process with lots of physical operations, we can build logical qubits. By doing error correction, these qubits and the operations done upon them can be made arbitrarily accurate. These are the kind of operations that are required to implement quantum algorithms.

+ +

There are currently too many unknowns to give you a clock rate of logical operations. Especially since even proof-of-principle logical qubits have not yet been built (not with quantum error correction codes, at least). It depends on how imperfect the physical qubits and operations are, and so how much we need to do to clean everything up. It depends on what kind of error correcting code we use, which in turn depends on the instruction set of our quantum processors (i.e., which pairs of qubits can have a two qubit gate applied on them directly). And this depends on how much noise we are willing to have, because better architectures often come at the cost of noise. So there are a lot of interdependencies, and much to be resolved.

+",409,,409,,6/20/2018 10:08,6/20/2018 10:08,,,,0,,,,CC BY-SA 4.0 +2406,1,2421,,6/20/2018 11:30,,10,560,"

For a Hilbert space $\mathcal{H}_A$, I have seen the phrase

+ +
+

density matrices acting on $\mathcal{H}_A$

+
+ +

multiple times, e.g. here.

+ +

It is clear to me that if $\mathcal{H}_A$ has finite Hilbert dimension $n$, then this makes sense mathematically, because a density matrix $\rho$ can be written as $\rho \in \mathbb{C}^{n \times n}$ and elements $\phi$ of $\mathcal{H}_A$ can be written as $\phi \in \mathbb{C}^n$, so I can write down $\rho \phi \in \mathbb{C}^n$.

+ +

However, it is unclear to me what this means. The density matrix $\rho$ describes a (possibly mixed) state of a quantum system. But I can also interpret $\phi$ as a single state vector, describing a quantum system in a pure state.

+ +

So, what does $\rho\phi$ refer to (where $\rho \in \mathbb{C}^{n \times n}$ is a density matrix, and $\phi \in \mathbb{C}^n$ is an element of $\mathcal{H}_A$)? Can I interpret it? How can a density matrix (i.e., the representation of a state) act on states (on single state vectors)? Why do we interpret density matrices (which represent states) as operators?

+",2444,,55,,10/31/2019 19:27,10/31/2019 19:27,"What does it mean for a density matrix to ""act on a Hilbert space $\mathcal{H}""$?",,2,0,,,,CC BY-SA 4.0 +2407,1,2426,,6/20/2018 15:14,,11,514,"

Theorem 2 of [1] states:

+ +
+

Suppose $C$ is an additive self-orthogonal sub-code of $\textrm{GF}(4)^n$, containing $2^{n-k}$ vectors, such that there are no vectors of weight $<d$ in $C^\perp/C$. Then any eigenspace of $\phi^{-1}(C)$ is an additive quantum-error-correcting code with parameters $[[n, k, d]]$.

+
+ +

where here $\phi: \mathbb{Z}_2^{2n} \rightarrow \textrm{GF}(4)^n$ is the map between the binary representation of $n$-fold Pauli operators and their associated codeword, and $C$ is self-orthogonal if $C \subseteq C^\perp$ where $C^\perp$ is the dual of $C$.

+ +

This tells us that each additive self-orthogonal $\textrm{GF}(4)^n$ classical code represents an $[[n, k, d]]$ quantum code.

+ +

My question is whether the reverse is also true, that is: is every $[[n, k, d]]$ quantum code represented by an additive self-orthogonal $\textrm{GF}(4)^n$ classical code?

+ +

Or equivalently: Are there any $[[n, k, d]]$ quantum codes that are not represented by an additive self-orthogonal $\textrm{GF}(4)^n$ classical code?

+ +

[1]: Calderbank, A. Robert, et al. ""Quantum error correction via codes over GF (4)."" IEEE Transactions on Information Theory 44.4 (1998): 1369-1387.

+",391,,391,,6/21/2018 12:21,9/21/2019 18:06,"Are all $[[n, k, d]]$ quantum codes equivalent to additive self-orthogonal $GF(4)^n$ classical codes?",,2,4,,,,CC BY-SA 4.0 +2408,1,2409,,6/20/2018 23:08,,5,682,"

Reproduced from Exercise 2.1 of Nielsen & Chuang's Quantum Computation and Quantum Information (10th Anniversary Edition):

+ +
+

Show that $(1, −1)$, $(1, 2)$ and $(2, 1)$ are linearly dependent.

+
+ +

Note: This question is part of a series attempting to provide worked solutions to the exercises provided in the above book.

+",391,,26,,3/30/2019 7:58,3/30/2019 7:58,"Nielsen & Chuang Exercise 2.1 - ""Linear dependence: example""",,1,2,,01-07-2019 15:14,,CC BY-SA 4.0 +2409,2,,2408,6/20/2018 23:08,,5,,"

A set of $n$ vectors $V = \{\vec{v}_1, \ldots, \vec{v}_n\}$ is linearly dependent if there exists a set of scalars $a_1, \ldots, a_n$ (which are not all zero) such that +$$ +\sum_{i=1}^n a_i\vec{v_i} = \vec{0} +$$ +where $\vec{0}$ is the all-zero vector.

+ +

Writing $V$ as a matrix with vectors as columns, this is equivalent to finding a solution to the matrix equation +$$ +V\vec{a} = \vec{0}, \quad\textrm{where}\quad \vec{a} = \begin{pmatrix} a_1\\ \vdots \\ a_n \end{pmatrix}. +$$

+ +

In our case, we have +$$ +V = \begin{pmatrix} +1 & 1 & 2 \\ +-1 & 2 & 1 +\end{pmatrix}, +$$ +with a solution to $V\vec{a} = \vec{0}$ provided by +$$ +\vec{a} = \begin{pmatrix} 1\\ 1 \\ -1 \end{pmatrix}, +$$ +hence showing linear dependence.
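The coefficients found above can also be checked numerically, e.g. with NumPy (the rank test gives an independent confirmation):

```python
import numpy as np

V = np.array([[1, 1, 2],
              [-1, 2, 1]])
a = np.array([1, 1, -1])

print(V @ a)                     # [0 0]: the linear combination vanishes
print(np.linalg.matrix_rank(V))  # 2, which is < 3 columns, so dependent
```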

+",391,,,,,6/20/2018 23:08,,,,0,,,,CC BY-SA 4.0 +2410,1,2412,,6/21/2018 2:59,,4,102,"

I have seen qubits, qutrits & entangled bits (e-bits) a decent amount. I have also seen qunits/qudits for n-th level qubits. What I am trying to wrap my head around is the differences between n-th level e-bits vs n-th level qunits. What are the similarities? Differences?

+ +

What generalizations exist about n-th level e-bits / qunits?

+",2645,,55,,12/13/2019 13:24,12/13/2019 13:24,"What are the differences between ""$n$-th level e-bits"" and ""$n$-th level qunits""?",,1,0,,,,CC BY-SA 4.0 +2411,2,,2406,6/21/2018 3:18,,0,,"

By ""acting of $\mathcal{H}_A$"" I believe you mean ""acting on $\mathcal{H}_A$"", which is what is written in the section you provided a link to.

+ +

What it means is ""acting on a state in $\mathcal{H}_A$"".

+ +
+

""I do not know what it means for a state to act on $\phi$.""

+
+ +

You defined $\phi$ as a state. States do not act on states. Operators (such as $\rho$) act on states. The section you provided a link to, does not mention ""acting on a state $\phi$"" at any point.

+",2293,,,,,6/21/2018 3:18,,,,2,,,,CC BY-SA 4.0 +2412,2,,2410,6/21/2018 7:00,,2,,"

Disclaimer: Since you haven't stated the source from where you got those terms, I will mention the most obvious definitions for those, which occur to me - simply looking at the names.

+ +

""Qunit"" refers to any quantum system whose state lies in a complex vector space whose dimension is any natural number $n$. Examples of qunits are qubits, for which $n=2$, and qutrits, for which $n=3$. Also check this for other less commonly used terms: Do any specific types of qudits other than qubits and qutrits have a name?

+ +

Next, remember that a system of two qubits will lie in a $2\times 2$-dimensional vector space i.e. $\Bbb C^2\times \Bbb C^2$. A system of three qubits will lie in a $2\times 2 \times 2$-dimensional vector space $\Bbb C^2\times \Bbb C^2\times \Bbb C^2$ and so on. By ""$n$-dimensional"" e-bits they're referring to the dimension of the vector space again. As for the ""e-bit"" part, that's simple. That is, the state of the system of qubits is basically not separable into individual qubit states lying in $\Bbb C^2$ each (i.e. the qubits are entangled). Check out the answers in: How do I show that a two-qubit state is an entangled state?

+",26,,,,,6/21/2018 7:00,,,,0,,,,CC BY-SA 4.0 +2413,1,2418,,6/21/2018 8:03,,10,961,"

""How do I show that a two-qubit state is an entangled state?"" includes an answer which references the Peres–Horodecki criterion. This works for $2\times 2$ and $2\times3$ dimensional cases; however, in higher dimensions, it is ""inconclusive."" It is suggested to supplement with more advanced tests, such as those based on entanglement witness. How would this be done? Are there alternative ways to go about this?

+",2645,,15,,6/21/2018 12:02,12/22/2022 7:51,How to show that an n-level system is entangled?,,2,0,,,,CC BY-SA 4.0 +2414,1,,,6/21/2018 8:03,,20,565,"

It seems to be a widely held belief within the scientific community that it is possible to do ""universal, fault-tolerant"" quantum computation using optical means by following what is called ""linear optical quantum computing (LOQC)"" pioneered by KLM (Knill, Laflamme, Milburn). However, LOQC uses only modes of light that contain either zero or one photon, not more.

+ +

Continuous modes of light contain, by definition, much more than one photon. The paper Probabilistic Fault-Tolerant Universal Quantum Computation and Sampling Problems in Continuous Variables Douce et al. (2018) [quant-ph arXiv:1806.06618v1] claims ""probabilistic universal fault-tolerant"" quantum computation can also be done using continuous modes of squeezed light. The paper goes even further and claims it is possible to demonstrate quantum supremacy using continuous modes. In fact, the paper's abstract says:

+ +
+

Furthermore, we show that this model can be adapted to yield sampling + problems that cannot be simulated efficiently with a classical + computer, unless the polynomial hierarchy collapses.

+
+ +

A quantum computing startup called Xanadu, which has some credibility because it has written several papers with Seth Lloyd, seems to be claiming that they too will ultimately be able to do quantum computation with continuous modes of light, and perform some tasks better than a classical computer.

+ +

And yet, what they are doing seems to me to be analog computing (is fault-tolerant error correction possible for analog computing?). Also, they use squeezing and displacement operations. Such operations do not conserve energy (squeezing or displacing a mode can change its energy), so such operations seem to require exchanges of macroscopic amounts (not quantized amounts) of energy with an external environment, which can probably introduce a lot of noise into the quantum computer. Furthermore, squeezing has only been achieved in the lab for limited, small values, and a claim of universality might require arbitrarily large squeezing as a resource.

+ +

So, my question is, are these people being too optimistic or not? What kind of computing can be done realistically in the lab with continuous modes of light?

+",1974,,419,,08-06-2018 08:04,08-06-2018 08:04,"Is ""probabilitistic,universal, fault tolerant quantum computation"" possible with continuous values?",,1,0,,,,CC BY-SA 4.0 +2415,1,2417,,6/21/2018 8:15,,3,1318,"

What is the difference between 3 qubits, 2 qutrits and a 6th level qunit? Are they equivalent? Why / why not?

+ +

Can 6 classical bits be super-densely coded into each?

+",2645,,2645,,07-02-2018 15:22,07-02-2018 15:22,"Difference between 3 qubits, 2 qutrits & 1 six level qunit",,2,2,,,,CC BY-SA 4.0 +2416,2,,2415,6/21/2018 8:38,,2,,"

They are not equivalent. This can be seen from the fact that the system of $3$ qubits acts on an $8$-dimensional Hilbert space, the 2-qutrit system acts on a $9$-dimensional Hilbert space, and the 6-level qunit acts on a $6$-dimensional Hilbert space. Consequently, the nature of the states defined by each of the quantum systems is different.

+ +

This dimension argument comes from the fact that a k-level n-qunit system acts on a state space of dimension $k^n$.

+ +
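A quick numerical illustration of the $k^n$ counting (my own example, using NumPy's Kronecker product to build the composite space):

```python
import numpy as np

# State-space dimensions of the three systems compared above
print(2**3, 3**2, 6**1)  # 8 9 6

# A composite space is the tensor (Kronecker) product of its parts
q0 = np.array([1.0, 0.0])             # single-qubit state |0>
state = np.kron(np.kron(q0, q0), q0)  # |000>, a vector in C^8
print(state.shape)  # (8,)
```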

As for superdense coding, I am aware that Bell pairs are used in order to obtain the desired coding, and as you do not consider such entangled qubits, I am not sure how to answer that part of the question.

+",2371,,26,,6/21/2018 9:08,6/21/2018 9:08,,,,4,,,,CC BY-SA 4.0 +2417,2,,2415,6/21/2018 9:15,,7,,"

The Hilbert space dimension of $n$ qudits is $d^n$, where $d$ is the dimension of the qudit ($d=2$ for qubit, $d=3$ for qutrit, etc). So three qubits have an $8$ dimensional space, two qutrits have a $9$ dimensional space, and one $d=6$ qudit has a six dimensional space. As such, we cannot regard them as equivalent.

+ +

I guess you meant to compare situations with equal total Hilbert space dimension, such as comparing a pair of qubits with a $d=4$ system. In this case, there is mathematically no distinction. You could choose to relabel the basis states $|00\rangle$, $|01\rangle$, $|10\rangle$ and $|11\rangle$ as the qudit basis states $|0\rangle$, $|1\rangle$, $|2\rangle$ and $|3\rangle$. Then any qudit operations defined with the qudit basis could be equivalently defined with the qubits, and vice-versa. You could also use other mappings between basis states; this was just an example.

+ +

We could also use a subspace of a larger space to simulate a smaller one. For example, suppose you want to simulate a spin-$1$ particle, which is a 3 level system. You could do this using a pair of qubits (a four level system) and identifying three basis states of the former with three of the latter (such as $|-1\rangle$, $|0\rangle$ and $|1\rangle$ with $|00\rangle$, $|01\rangle$ and $|10\rangle$, for example). As long as you implement your spin-$1$ operations correctly, you'll always avoid the $|11\rangle$ state, and your two qubits effectively become a qutrit.

+ +

You might also be interested in my answer to the question What is the most economical and preferred basis for the qudit?

+",409,,409,,6/21/2018 10:01,6/21/2018 10:01,,,,1,,,,CC BY-SA 4.0 +2418,2,,2413,6/21/2018 9:33,,8,,"

Determining whether a given state is entangled or not is NP-hard. So if you include all possible types of entanglement, including mixed states and multipartite entanglement, there is never going to be an elegant solution. Techniques are therefore defined for specific cases, where the structure of the problem can be used to create an efficient solution.

+ +

For example, if a state is bipartite and pure, you can simply take the reduced density matrix of one party and see if it is mixed. This could be done by computing the Von Neumann entropy to see if it is non-zero (this quantity provides a measure of entanglement in this case).

+ +
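That test is easy to carry out numerically for a pure bipartite state; here is a minimal sketch (my own conventions: subsystem dimensions dA and dB, entropy in bits):

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -sum p log2 p over the nonzero eigenvalues of rho."""
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log2(p)))

def reduced_density_matrix(psi, dA, dB):
    """Trace out subsystem B of a pure state psi on C^dA (x) C^dB."""
    M = psi.reshape(dA, dB)
    return M @ M.conj().T

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)  # (|00> + |11>)/sqrt(2), entangled
product = np.kron([1.0, 0.0], [0.0, 1.0])   # |01>, separable

print(von_neumann_entropy(reduced_density_matrix(bell, 2, 2)))     # close to 1
print(von_neumann_entropy(reduced_density_matrix(product, 2, 2)))  # close to 0
```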

This approach would work for any pure state of two particles, whatever their dimension. It can also be used to calculate entanglement for any bipartition. For example, if you had $n$ particles, you could take the first $m$ to be one party, and the remaining $n-m$ to be another, and use this technique to see if any entanglement exists between these groups.

+ +

For other cases, the approach you take will depend on the kind of entanglement you are looking for.

+",409,,,,,6/21/2018 9:33,,,,0,,,,CC BY-SA 4.0 +2419,2,,2413,6/21/2018 9:37,,6,,"

As suggested in your Wiki link, the way to detect an entangled state is to find a hyperplane that separates it from the convex set of separable states. This hyperplane represents what is called an entanglement witness. The PPT criterion that you mentioned is one such witness. Constructing entanglement witnesses for higher-dimensional systems is not easy, but it can be done algorithmically by solving a hierarchy of semi-definite programs (SDPs)[1]. This hierarchy is complete, as every entangled state will eventually be detected. But it is computationally inefficient if the entangled state is very close to the convex set of separable states. It is in fact known that detecting entanglement is NP-hard[2].

+
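For reference, the PPT witness mentioned above is straightforward to evaluate numerically (a minimal sketch of my own; detecting a state just means finding a negative eigenvalue of the partial transpose):

```python
import numpy as np

def partial_transpose(rho, dA, dB):
    """Partial transpose over subsystem B of a density matrix on C^dA (x) C^dB."""
    R = rho.reshape(dA, dB, dA, dB)
    return R.transpose(0, 3, 2, 1).reshape(dA * dB, dA * dB)

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)  # (|00> + |11>)/sqrt(2)
rho = np.outer(bell, bell.conj())
eigs = np.linalg.eigvalsh(partial_transpose(rho, 2, 2))
print(eigs.min())  # close to -0.5: a negative eigenvalue, so entangled
```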

[1] Doherty, Andrew C., Pablo A. Parrilo, and Federico M. Spedalieri. "Complete family of separability criteria." Physical Review A69.2 (2004): 022308

+

[2] Gharibian, Sevag. "Strong NP-hardness of the quantum separability problem." arXiv preprint arXiv:0810.4507 (2008).

+",2663,,13968,,12/22/2022 7:51,12/22/2022 7:51,,,,0,,,,CC BY-SA 4.0 +2420,1,2433,,6/21/2018 9:58,,6,180,"

This article from 2017 predicts the quantum internet by 2030. What are the biggest bottlenecks in the realization of a global quantum network (ie quantum internet)?

+",2645,,55,,12-06-2021 11:31,12-06-2021 11:31,How could a global quantum network be realized?,,1,0,,,,CC BY-SA 4.0 +2421,2,,2406,6/21/2018 12:37,,14,,"

It is common that one refers to a density matrix (or, equivalently, a density operator) $\rho$ as acting on a particular space $\mathcal{H}$. This serves to establish the ""type"" of $\rho$ in computer science parlance. In particular, when there are multiple spaces under consideration, it may be helpful for a reader to know that $\rho$ corresponds specifically to whatever abstract physical system is described by the space $\mathcal{H}$.

+ +

Referring to $\rho$ as acting on a space $\mathcal{H}$ also makes perfect sense, as the question points out, because $\rho$ can be viewed as a linear map from $\mathcal{H}$ to itself. The properties of $\rho$ that relate to its action as a linear mapping of this form are important and say a lot about the state. For example, the eigenvalues of $\rho$ describe the randomness or uncertainty inherent to that state.

+ +

However, this does not mean that if $\phi\in\mathcal{H}$ is a unit vector that describes a pure state, then $\rho\phi$ should have an interpretation. Such a vector might show up in a proof or calculation -- for example, the quantity $\langle \phi | \rho | \phi\rangle$, which is the inner product between $\phi$ and $\rho\phi$, is a commonly encountered quantity known as the fidelity (or squared fidelity) between the states represented by $\rho$ and $\phi$ -- but in my view the vector $\rho\phi$ is just a vector and does not have a natural or fundamental physical interpretation.
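For instance, that fidelity quantity is just a quadratic form, computed directly from the matrix–vector product discussed above (toy numbers of my own choosing):

```python
import numpy as np

phi = np.array([1.0, 0.0])                   # the pure state |0>
rho = np.array([[0.75, 0.0], [0.0, 0.25]])   # some mixed single-qubit state
print(phi.conj() @ rho @ phi)  # 0.75, the (squared) fidelity <phi|rho|phi>
```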

+",1764,,,,,6/21/2018 12:37,,,,0,,,,CC BY-SA 4.0 +2422,2,,2381,6/21/2018 12:59,,1,,"

All we observe in quantum technologies (photons, atoms, etc) are bits (either a 0 or a 1).

+ +

In essence, no one really knows what a quantum bit is. Some people say it's an object that is ""both"" 0 and 1; others say it has something to do with parallel universes; but physicists don't know what it is, and have come up with interpretations that are not proven.

+ +

The reason for this ""confusion"" is due to two factors:

+ +

(1) One can get remarkable tasks accomplished which cannot be explained by thinking of the quantum technology in terms of normal bits. So there must be some extra element involved which we label ""quantum"" bit. But here's the critical piece: this extra ""quantum"" element cannot be directly detected; all we observe are normal bits when we ""look"" at the system.

+ +

(2) One way to ""see"" this extra ""quantum"" stuff is through maths. Hence a valid description of a qubit is mathematical, and every translation of that is an interpretation that has not yet been proven.

+ +

In summary, no one knows what quantum bits are. We know there's something more than bits in quantum technologies, which we label as ""quantum"" bit. And so far, the only valid (yet unsatisfying) description is mathematical.

+ +

Hope that helps.

+",2084,,,,,6/21/2018 12:59,,,,0,,,,CC BY-SA 4.0 +2423,1,2424,,6/21/2018 13:16,,3,1669,"

Reproduced from Exercise 2.2 of Nielsen & Chuang's Quantum Computation and Quantum Information (10th Anniversary Edition):

+ +
+

Suppose $V$ is a vector space with basis vectors $|0\rangle$ and $|1\rangle$, and $A$ is a linear operator from $V$ to $V$ such that $A|0\rangle = |1\rangle$ and $A|1\rangle = |0\rangle$. Give a matrix representation for $A$, with respect to the input basis $|0\rangle, |1\rangle$, and the output basis $|0\rangle, |1\rangle$. Find input and output bases which give rise to a different matrix representation of $A$.

+
+ +

Note: This question is part of a series attempting to provide worked solutions to the exercises provided in the above book.

+",391,,26,,3/30/2019 7:59,3/30/2019 7:59,Nielsen & Chuang Exercise 2.2 - “Matrix representations: example”,,1,0,,01-07-2019 15:14,,CC BY-SA 4.0 +2424,2,,2423,6/21/2018 13:16,,5,,"

Immediately, we can see that +$$ +A = |1\rangle\langle0| + |0\rangle\langle1|. +$$ +If the input and output bases are $\{|0\rangle, |1\rangle\}$, then +$$ +|0\rangle = \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \quad |1\rangle = \begin{pmatrix} 0 \\ 1 \end{pmatrix} \quad\textrm{and}\quad \langle0| = \begin{pmatrix} 1 & 0 \end{pmatrix}, \quad \langle1| = \begin{pmatrix} 0 & 1 \end{pmatrix}, +$$ +so we can write the first equation as +$$ +\begin{align} +A &= \begin{pmatrix} 1 \\ 0 \end{pmatrix} \otimes \begin{pmatrix} 0 & 1 \end{pmatrix} + \begin{pmatrix} 0 \\ 1 \end{pmatrix} \otimes \begin{pmatrix} 1 & 0 \end{pmatrix} \\ +&= \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} + \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}\\ +&= \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} +\end{align} +$$ +to solve the first question. Note that in this case, $A$ is equal to the bitflip or Pauli $X$ operation.

+ +

Secondly, for fun, let us choose to write $A$ in input basis $\{|+\rangle, |-\rangle\}$, where +$$ +|+\rangle = \frac{1}{\sqrt{2}}(|0\rangle + |1\rangle) \quad\textrm{and}\quad |-\rangle = \frac{1}{\sqrt{2}}(|0\rangle - |1\rangle) +$$ +and output basis $\{|L\rangle, |R\rangle\}$, where +$$ +|L\rangle = \frac{1}{\sqrt{2}}(|0\rangle + i|1\rangle) \quad\textrm{and}\quad |R\rangle = \frac{1}{\sqrt{2}}(|0\rangle - i|1\rangle). +$$ +Rewriting our original $\{|0\rangle, |1\rangle\}$ bases vectors in terms of the above bases, we find +$$ +\begin{align} +|0\rangle &= \frac{1}{\sqrt{2}}(|+\rangle + |-\rangle) = \frac{1}{\sqrt{2}}(|L\rangle + |R\rangle) \quad \textrm{and}\\ +|1\rangle &= \frac{1}{\sqrt{2}}(|+\rangle - |-\rangle) = \frac{i}{\sqrt{2}}(|R\rangle - |L\rangle), +\end{align} +$$ +with $\langle0| = (|0\rangle)^\dagger$ and $\langle1| = (|1\rangle)^\dagger$.

+ +

From this we can rewrite $A$ as +$$ +\begin{align} +A &= \frac{i}{2}(|R\rangle - |L\rangle)(\langle+| + \langle-|) + \frac{1}{2}(|L\rangle + |R\rangle)(\langle+| - \langle-|) \\ +&= \frac{1}{2}\big[ (1+i)|R\rangle\langle+| + (1-i)|L\rangle\langle+| + (-1+i)|R\rangle\langle-| + (-1-i)|L\rangle\langle-|\big] \\ +&= \frac{1+i}{2}(|R\rangle\langle+| - i|L\rangle\langle+| + i|R\rangle\langle-| - |L\rangle\langle-|) +\end{align} +$$ +To get $A$ in the desired matrix form, we then set +$$ +|L\rangle = \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \quad |R\rangle = \begin{pmatrix} 0 \\ 1 \end{pmatrix} \quad\textrm{and}\quad \langle+| = \begin{pmatrix} 1 & 0 \end{pmatrix}, \quad \langle-| = \begin{pmatrix} 0 & 1 \end{pmatrix}, +$$ +such that +$$ +A = \frac{1+i}{2} \begin{pmatrix} -i & -1 \\ 1 & i \end{pmatrix}. +$$

+",391,,,,,6/21/2018 13:16,,,,4,,,,CC BY-SA 4.0 +2425,1,,,6/21/2018 13:46,,7,293,"

In Ref. [1] absolutely maximally entangled (AME) states are defined as:

+ +
+

An $\textrm{AME}(n,d)$ state (absolutely maximally entangled state) of $n$ qudits of dimension $d$, $|\psi\rangle \in \mathbb{C}^{\otimes n}_d$, is a pure state for which every bipartition of the system into the sets $B$ and $A$, + with $m = |B| \leq |A| = n − m$, is strictly maximally entangled such that + $$ +S(\rho_B) = m \log_2 d. +$$

+
+ +

As the name would have you believe, does this mean that an $\textrm{AME}(n,d)$ state is maximally entangled across all entanglement monotones (for fixed $n$ and $d$)?
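As a concrete instance of the definition (my own illustrative check, not from the paper), the two-qubit Bell state $(|00\rangle + |11\rangle)/\sqrt{2}$ is an $\textrm{AME}(2,2)$ state, and its single-qubit reduced state indeed satisfies $S(\rho_B) = m\log_2 d = 1$:

```python
import numpy as np

# Two-qubit Bell state (|00> + |11>)/sqrt(2): the simplest AME(n=2, d=2) state.
psi = np.zeros(4)
psi[0] = psi[3] = 1 / np.sqrt(2)

# Reduced state of subsystem B (one qubit, m = 1): partial trace over A.
psi_mat = psi.reshape(2, 2)            # index order (B, A)
rho_B = psi_mat @ psi_mat.conj().T

eigs = np.linalg.eigvalsh(rho_B)
S = -sum(p * np.log2(p) for p in eigs if p > 1e-12)

# S(rho_B) = m * log2(d) = 1 * log2(2) = 1
assert abs(S - 1.0) < 1e-9
```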

+ +

[1]: Helwig, Wolfram, et al. ""Absolute maximal entanglement and quantum secret sharing."" Physical Review A 86.5 (2012): 052335.

+",391,,,,,6/25/2018 15:38,Are Absolutely Maximally Entangled states maximally entangled under all entanglement monotones?,,1,1,,,,CC BY-SA 4.0 +2426,2,,2407,6/21/2018 13:51,,2,,"

The additive self-orthogonality constraint on the classical codes used to construct stabilizer quantum codes is needed because the stabilizer generators must commute with each other in order to define a valid code space. When creating quantum codes from classical codes, the commutation requirement on the stabilizers is equivalent to the classical code being self-orthogonal.

+ +

However, quantum codes can be constructed from non-self-orthogonal classical codes over $GF(4)^n$ by means of entanglement assistance. In this construction, an arbitrary classical code is selected, and by adding some Bell pairs to the qubit system, commutation between the stabilizers is obtained.

+ +

This entanglement-assisted paradigm for constructing QECCs from any classical code is presented in arXiv:1610.04013, which is based on the paper ""Correcting Quantum Errors with Entanglement"" published in Science by Brun, Devetak and Hsieh.

+",2371,,26,,5/13/2019 21:21,5/13/2019 21:21,,,,0,,,,CC BY-SA 4.0 +2427,1,,,6/21/2018 14:31,,5,139,"

Quantum networks or the quantum internet are terms that often appear when reading about quantum computation and information nowadays, but they are still pretty vague concepts under development.

+ +

I was wondering whether these networks or this internet would be pretty limited in communication between several computers, as sending redundant information to several of them would be prohibited by the no-cloning theorem. That would mean that only point-to-point communication could be done in a quantum network. Moreover, once the information is sent, it cannot be sent again, as no copies of it remain; so the communication could be done just once, or the state would have to be created again by repeating the same computations each time we want to send it. This sounds pretty inefficient compared to classical networks and the internet.

+ +

Am I right about this thought or am I missing something?

+",2371,,26,,12/13/2018 19:55,12/13/2018 19:55,Does the no-cloning theorem impose limits on the capabilities of quantum networks?,,2,0,,,,CC BY-SA 4.0 +2428,2,,2427,6/21/2018 15:11,,4,,"

Setting aside the practical problems in actually building such things, quantum computers/networks can do everything their classical counterparts do without any fundamental overhead.

+ +

Your reasoning seems to stem from a misunderstanding of the no-cloning theorem. +The no-cloning theorem says that you cannot reversibly clone unknown states with a protocol that does not depend on the state being cloned (see e.g. this question).

+ +

This has nothing to do with the communication context that you mention. If I want to send you some classical information, I would do it in basically the same way as it is done in a ""classical"" network. If I want to send you a quantum state, and I know what that state is, again there are no problems in ""cloning"" that state (I can just generate the same state many times).

+ +

If I have an unknown state and I want to send you many copies of it, then yes, the no-cloning theorem prevents me from doing so. However, it is not clear why I should want to do this for communication purposes.
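The standard illustration of why cloning unknown states fails (a sketch of my own, using the usual CNOT example): a fixed gate can copy the computational basis states, but by linearity that very same gate then fails on their superpositions:

```python
import numpy as np

# CNOT 'copies' classical basis states: CNOT |x>|0> = |x>|x> for x in {0,1}.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
plus = (ket0 + ket1) / np.sqrt(2)

def try_clone(psi):
    # Attempt: |psi>|0>  ->  '|psi>|psi>' ?
    return CNOT @ np.kron(psi, ket0)

# Works for the known basis states...
assert np.allclose(try_clone(ket0), np.kron(ket0, ket0))
assert np.allclose(try_clone(ket1), np.kron(ket1, ket1))

# ...but an unknown superposition is NOT cloned: we get the entangled
# Bell state (|00> + |11>)/sqrt(2), not the product state |+>|+>.
out = try_clone(plus)
assert not np.allclose(out, np.kron(plus, plus))
assert np.allclose(out, np.array([1, 0, 0, 1]) / np.sqrt(2))
```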

+",55,,55,,6/21/2018 17:18,6/21/2018 17:18,,,,3,,,,CC BY-SA 4.0 +2429,2,,2403,6/21/2018 17:24,,0,,"

2023

+ +

[1] Source IBM Research https://www.research.ibm.com/5-in-5/quantum-computing/

+ +
+

... Within five years, the industry will have discovered the first + applications where a quantum computer (used alongside a classical + computer) will offer a benefit to solving specific problems. ...

+
+ +

[2] Point [3] in answer 2028 also supports this period to some extent.

+",2529,,2529,,6/21/2018 22:09,6/21/2018 22:09,,,,0,,,,CC BY-SA 4.0 +2430,2,,2403,6/21/2018 17:43,,0,,"

2028

+ +

[1] Source Scientific American : How Close Are We—Really—to Building a Quantum Computer?(Intel’s head of quantum computing talks about the challenges of developing algorithms, software programs and other necessities for a technology that doesn’t yet exist)

+ +
+

The race is on to build the world’s first meaningful quantum + computer—one that can deliver the technology’s long-promised ability + to help scientists do things like develop miraculous new materials, + encrypt data with near-perfect security and accurately predict how + Earth’s climate will change. Such a machine is likely more than a + decade away, but IBM, Microsoft, Google, Intel and other tech + heavyweights breathlessly tout each tiny, incremental step along the + way.

+
+ +

and a bit further down the same article:

+ +
+

People think quantum computers are just around the corner, but history + shows these advances take time. If 10 years from now we have a quantum + computer that has a few thousand qubits, that would certainly change + the world in the same way the first microprocessor did. We and others + have been saying it’s 10 years away. Some are saying it’s just three + years away, and I would argue that they don’t have an understanding of + how complex the technology is.

+
+ +

[2] Source : When Will Quantum Computers Be Consumer Products?

+ +
+

Andrew Dzurak, Professor in Nanoelectronics at University of New South + Wales, ... “I think that within ten years, there will be + demonstrations of modelling of certain chemicals and drugs that + couldn’t be done today but I don’t think there will be a convenient, + routine [system] that [people] can use,” Dzurak said in the interview. + “To move to that stage will take another decade further beyond that.”

+
+ +

[3] Why Quantum Computing Should Be on Your Radar Now

+ +

This report could also be interpreted as suggesting that the first useful thing could happen earlier than 2028 (e.g. in 2023):

+ +
+

What to expect, when

+ +

There's a lot of confusion about the current state of quantum + computing which industry research firms Boston Consulting Group (BCG) + and Forrester are attempting to clarify.

+ +

In the Forrester report, Hopkins estimates that quantum computing is + in the early stages of commercialization, a stage that will persist + through 2025 to 2030. The growth stage will begin at the end of that + period and continue through the end of the forecast period which is + 2050.

+
+ +

...

+ +
+

BCG also reasons that the quantum computing market will advance in + three distinct phases:

+ +
    +
  1. The first generation will be specific to applications that are quantum in nature, similar to what D-Wave is doing.

  2. The second generation will unlock what report co-author and BCG senior partner Massimo Russo calls ""more interesting use cases.""

  3. In the third generation, quantum computers will have achieved the number of logical qubits required to achieve Quantum Supremacy. (Note: Quantum Supremacy and logical qubits versus physical qubits are important concepts addressed below.)
+
+",2529,,2529,,6/21/2018 22:00,6/21/2018 22:00,,,,0,,,,CC BY-SA 4.0 +2431,2,,2403,6/21/2018 21:35,,0,,"

2033 or later

+ +

[1] source Why will quantum computers be slow?

+ +

This site mentions the following bottlenecks:

+ +
    +
  1. Bottleneck #0: Qubit count
  2. Bottleneck #1: T count
  3. Bottleneck #2: Measurement depth
  4. Bottleneck #3: Spacetime volume
+ +

Regarding the first bottleneck it says the following:

+ +
+

If the qubit count doubles every year, it'll be 17 years before we get to 100%. + That 17 year gap is daunting. 50 to 100 noisy qubits should be enough + to do something hard, and 100 to 200 logical qubits should be enough + to do something useful, but at the moment it's not known if there's + much in between. If there really is nothing on the decade-long road + from 100 noisy qubits to 100 error-corrected qubits,

+
+ +

The summary of the blog:

+ +
+

Summary + To run a quantum computation really fast, you need:

+ +

1. Enough qubits to fit the computation in the first place.
  2. Then enough T factories to perform T gates as fast as the computation needs them.
  3. Then wider circuits that do measurements in parallel instead of serially.
  4. Then efficient strategies for routing qubits and packing braiding operations.

+ +

I like to imagine that each bottleneck will be + the ""most relevant"" for about a decade. We'll naturally transition + from one to the next because, funnily enough, every single one of + these bottlenecks falls to adding more qubits....

+
+ +

[2] The points mentioned in answer 2028 mainly give an indication of when they expect it might happen. They do not exclude that it could be several years later.

+",2529,,2529,,6/21/2018 22:08,6/21/2018 22:08,,,,0,,,,CC BY-SA 4.0 +2432,1,2451,,6/22/2018 15:34,,14,4842,"

$T_2$ generally refers to the measurement of the coherence of the qubit with respect to its dephasing (that's a rotation about the $|0\rangle$ - $|1\rangle$ axis of the Bloch sphere for those of us visualizing). But sometimes in the literature it's called $T_2$ and other times it's referred to as $T_2^*$. The fact that it is never explained leads me to believe the distinction is very simple. What's the distinction between these two concepts?

+ +

I assert that this nomenclature is very common in the literature (at least regarding solid-state QC). Here is one example: Ultralong spin coherence time in isotopically engineered diamond.

+ +

My internet search for an explanation has come up dry. Please help.

+",1867,,26,,6/23/2018 13:43,6/24/2018 20:04,What's the Difference between T2 and T2*?,,2,0,,,,CC BY-SA 4.0 +2433,2,,2420,6/22/2018 20:13,,1,,"
+

What are the biggest bottlenecks in the realization of a global quantum network (ie quantum internet)?

+
+

It would seem the main bottleneck is simply complexity: getting everything working together, with high reliability, for a long enough period of time. The OSI description of network layers describes 7 layers; quantum networks will likely end up with more, but we can already implement the conventional OSI model, thus proving the viability and existence of quantum networking.

+

The article mentions some concerns, which are being successfully worked on.

+ +

Not mentioned in the article is the first level, the symbol layer, converting the qubit's state to something that can be transmitted and restored on the other end: the light-matter interface.

+

Complexity and cost won't be a problem for the first users; the delay (2030) would be due to commercialization and adoption by a sufficiently large group. A local network is already underway (scheduled opening 2020).

+",278,,-1,,6/18/2020 8:31,6/22/2018 20:13,,,,2,,,,CC BY-SA 4.0 +2434,2,,2432,6/23/2018 9:29,,3,,"

From Chapter 15 of NII's quantum information lecture series on ""Fundamentals of Noise processes"" (link here):

+ +
+

An applied DC field $H_0$ is not completely uniform in all space points. If many spin qubits are placed in such an inhomogeneous DC field, they have different Larmor frequencies. This leads to the dephasing effect if we compare the phase difference between different qubits. A time constant for this dephasing process is determined by the spatial (not temporal) inhomogeneous broadening of the DC field and distinguished from $T_2$ process. A new time constant is often referred to as $T_2^*$.

+
+ +

So, if I am understanding this correctly (and I am not an expert in this), then $T_2^*$ is the combined dephasing from the standard dephasing mechanism (described by $T_2$) and any inhomogeneities in the DC field. Presumably, the reason it is often not distinguished from $T_2$ in papers is that it is assumed that such inhomogeneities always exist and so $T^*_2$ is almost always the value being measured and the most relevant in practice.

+",391,,,,,6/23/2018 9:29,,,,0,,,,CC BY-SA 4.0 +2438,2,,2390,6/24/2018 5:25,,3,,"

Calculation of the inverse of an $N\times N$ matrix can be done by applying HHL with $N$ different $\vec{b}_i$ (specifically, HHL is applied $N$ times, once for each computational basis vector used as the $\vec{b}_i$).

+

In each case, phase estimation has to be done for an $N \times N$ matrix.

+

The number of qubits required for phase estimation is written on page 249 of the 10th anniversary edition of N&C:

+
+

"The quantum phase estimation procedure uses two registers. The first +register contains $t$ qubits."

+

"The second register [...] contains as many qubits as is necessary +to store $|u\rangle$", where $|u\rangle$ is an $N$-dimensional vector.

+
+

So you are correct that we would need $6$ qubits for the first register, and $\log N=8$ qubits for the second register.

+

This is 14 qubits in total to do the phase estimation part of each HHL iteration involved in calculating the inverse of a matrix. 14 qubits is well within the simulation capabilities of a laptop.
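The counting can be reproduced with the phase-estimation register-size formula from N&C, $t = n + \lceil \log_2(2 + 1/(2\epsilon))\rceil$, taking $n = 3$ bits of precision, failure probability $\epsilon = 0.1$, and (assumed from the question) $N = 256$ so that $\log_2 N = 8$:

```python
import math

n_bits = 3   # desired bits of precision for the eigenvalues
eps = 0.1    # allowed failure probability (90% success)

# Register-size formula for phase estimation (N&C, eq. 5.35):
t = n_bits + math.ceil(math.log2(2 + 1 / (2 * eps)))
assert t == 6

N = 256      # dimension of A, so |u> is N-dimensional
second_register = math.ceil(math.log2(N))
assert second_register == 8

assert t + second_register == 14
```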

+",2293,,-1,,6/18/2020 8:31,6/24/2018 5:25,,,,2,,,,CC BY-SA 4.0 +2439,1,2464,,6/24/2018 7:08,,9,1645,"

This is a continuation of Quantum algorithm for linear systems of equations (HHL09): Step 2 - What is $|\Psi_0\rangle$?

+ +
+ +

In the paper: Quantum algorithm for linear systems of equations (Harrow, Hassidim & Lloyd, 2009), the details of the actual implementation of the algorithm are not given. How exactly the states $|\Psi_0\rangle$ and $|b\rangle$ are created is sort of a ""black-box"" (see pages 2-3).

+ +

$$|\Psi_0\rangle = \sqrt{\frac{2}{T}}\sum_{\tau = 0}^{T-1}\sin \frac{\pi (\tau+\frac{1}{2})}{T}|\tau\rangle$$

+ +

and $$|b\rangle = \sum_{i=1}^{N}b_i|i\rangle$$

+ +

where $|\Psi_0\rangle$ is the initial state of the clock register and $|b\rangle$ is the initial state of the Input register.

+ +

(Say) I want to carry out their algorithm on the IBM $16$-qubit quantum computer. And I want to solve a certain equation $\mathbf{Ax=b}$ where $\mathbf{A}$ is a $4\times 4$ Hermitian matrix with real entries and $\mathbf{b}$ is a $4\times 1$ column vector with real entries.

+ +

Let's take an example:

+ +

$$\mathbf{A} = \begin{bmatrix} 1 & 2 & 3 & 4 \\ 2 & 1 & 5 & 6 \\ 3 & 5 & 1 & 7 \\ 4 & 6 & 7 & 1 \end{bmatrix}$$

+ +

and

+ +

$$\mathbf{b}=\begin{bmatrix} 1 \\ 2 \\ 3 \\ 4 \end{bmatrix}$$

+ +

Given the dimensions of $\mathbf{A}$ and $\mathbf{b}$, we should need $\lceil{\log_2 4\rceil}=2$ qubits for the input register and another $6$ qubits for the clock register assuming we want the eigenvalues to be represented with $90\%$ accuracy and up to $3$-bit precision for the eigenvalues (this has been discussed here previously). So total $2+6+1=9$ qubits will be needed for this purpose (the extra $1$ qubit is the ancilla).
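As a sanity check on these register sizes, the clock-register state $|\Psi_0\rangle$ for $T = 2^6 = 64$ can be generated classically and verified to be normalized (note that the amplitudes of $|b\rangle$ must likewise be normalized before encoding into the 2 input qubits):

```python
import numpy as np

T = 2**6   # clock register of 6 qubits
tau = np.arange(T)
psi0 = np.sqrt(2 / T) * np.sin(np.pi * (tau + 0.5) / T)

# |Psi_0> must be a valid (normalized) quantum state.
assert abs(np.linalg.norm(psi0) - 1) < 1e-12

# Amplitudes of |b> from the example above, normalized before state preparation.
b = np.array([1.0, 2.0, 3.0, 4.0])
b = b / np.linalg.norm(b)
assert abs(np.linalg.norm(b) - 1) < 1e-12
```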

+ +

Questions:

+ +
    +
  1. Using this information, is it possible to create the initial states $|\Psi_0\rangle$ and $|b\rangle$ on the IBM $16$-qubit version?

  2. If you think $4\times 4$ is too large to be implemented on the IBM quantum computers, you could even show an example of initial state preparation for a $2\times 2$ Hermitian matrix $\mathbf{A}$ (or just give a reference to such an example).
+ +

I simply want to get a general idea about whether this can be done (i.e. whether it is possible) on the IBM 16-qubit quantum computer, and, for that, which gates will be necessary. If not on the IBM 16-qubit quantum computer, can the QISKit simulator be used to recreate the initial state preparation of $|\Psi_0\rangle$ and $|b\rangle$ in the HHL algorithm? Is there any better alternative to go about this?

+",26,,26,,07-05-2018 19:01,07-06-2018 09:30,Quantum algorithm for linear systems of equations (HHL09): Step 2 - Preparation of the initial states $|\Psi_0\rangle$ and $|b\rangle$,,2,8,,,,CC BY-SA 4.0 +2440,1,2442,,6/24/2018 8:49,,6,510,"

As explained in this answer, when we have different (relative) phases between two states, those two states will yield the same probabilities when measured in the same basis but different probabilities when measured in different bases.

+ +

My question is why would we need to do measurement in different bases?
+I'm asking from a quantum programming point of view. Why would I need to change the (relative) phase of two states when writing a program?

+ +

One constraint though: I don't understand how to read a Bloch sphere so the matrix and circuit formalisms are preferred.

+",2417,,26,,12/23/2018 13:41,12/23/2018 13:41,Use of change of phase gates,,3,0,,,,CC BY-SA 4.0 +2441,2,,2440,6/24/2018 9:34,,1,,"

The phases mentioned in that question and answer are of the form
+$$
+P =
+\begin{bmatrix}
+e^{i\theta} & 0 \\
+0 & e^{i\phi}
+\end{bmatrix}
+$$
+so they are not global phases. However, two particular states, namely the eigenstates of $P$, are unaffected by this relative-phase operator. Because this operator is diagonal, the basis we are currently using consists of its eigenstates.
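A small numerical illustration of the point (assuming, for concreteness, $\theta = 0$ and $\phi = \pi$, i.e. $P = \mathrm{diag}(1, -1)$):

```python
import numpy as np

theta, phi = 0.0, np.pi
P = np.diag([np.exp(1j * theta), np.exp(1j * phi)])   # relative-phase gate

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
psi  = np.array([1, 1]) / np.sqrt(2)                  # (|0> + |1>)/sqrt(2)
psi2 = P @ psi                                        # (|0> - |1>)/sqrt(2)

# Identical statistics in the computational basis...
assert np.allclose(np.abs(psi)**2, np.abs(psi2)**2)

# ...but perfectly distinguishable after a change of basis (apply H,
# i.e. measure in the {|+>, |->} basis):
assert np.allclose(np.abs(H @ psi)**2,  [1, 0])
assert np.allclose(np.abs(H @ psi2)**2, [0, 1])
```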

+",2023,,,,,6/24/2018 9:34,,,,0,,,,CC BY-SA 4.0 +2442,2,,2440,6/24/2018 9:49,,1,,"
+

My question is why would we need to do measurement in different bases? + I'm asking from a quantum programming point of view.

+
+ +

Quantum programs are usually written to implement some quantum algorithm. Depending on the algorithm you might need to change the basis of measurement at some point.

+ +

For instance, Bell state measurement is useful in the quantum teleportation protocol.

+",26,,,,,6/24/2018 9:49,,,,0,,,,CC BY-SA 4.0 +2443,1,,,6/24/2018 10:55,,6,973,"

I had asked this question earlier in the comment section of the post: What is a qubit? but none of the answers there seem to address it at a satisfactory level.

+ +

The question basically is:

+ +
+

How is a single qubit in a Bell state + $\frac{1}{\sqrt{2}}(|0\rangle + |1\rangle)$ any different from a + classical coin spinning in the air (on being tossed)?

+
+ +
+ +

The one-word answer for the difference between a system of 2 qubits and a system of 2 classical coins is ""entanglement"". For instance, you cannot have a system of two coins in the state $\frac{1}{\sqrt 2}|00\rangle+\frac{1}{\sqrt 2}|11\rangle$. The reason is simple: when two ""fair"" coins are spinning in the air, there is always some finite probability that the first coin lands heads-up while the second coin lands tails-up, and vice versa. In the combined Bell state $\frac{1}{\sqrt 2}|00\rangle+\frac{1}{\sqrt 2}|11\rangle$ that is not possible. If the first qubit turns out to be $|0\rangle$, the second qubit will necessarily be $|0\rangle$. Similarly, if the first qubit turns out to be $|1\rangle$, the second qubit will necessarily turn out to be $|1\rangle$. At this point someone might point out that if we use $2$ ""biased"" coins then it might be possible to recreate the combined Bell state. The answer is still no (it's possible to mathematically prove it...try it yourself!). That's because the Bell state cannot be decomposed into a tensor product of two individual qubit states, i.e. the two qubits are entangled.
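The claim that no pair of biased coins reproduces the Bell-state statistics can also be checked numerically (a brute-force sketch of my own): any two independent coins give a product distribution $P(xy) = p_x q_y$, which necessarily satisfies $P(00)P(11) = P(01)P(10)$, while the Bell distribution violates this.

```python
import numpy as np
from itertools import product

# Bell-state outcome probabilities in the computational basis:
bell = {'00': 0.5, '01': 0.0, '10': 0.0, '11': 0.5}

# Product distributions always satisfy P(00)*P(11) == P(01)*P(10);
# the Bell distribution does not.
assert bell['00'] * bell['11'] != bell['01'] * bell['10']

# Brute-force check: no biased pair of coins gets anywhere near the
# Bell statistics (smallest achievable worst-case deviation over a grid).
best = min(
    max(abs(p * q - bell['00']),
        abs(p * (1 - q) - bell['01']),
        abs((1 - p) * q - bell['10']),
        abs((1 - p) * (1 - q) - bell['11']))
    for p, q in product(np.linspace(0, 1, 101), repeat=2))
assert best > 0.1
```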

+ +
+ +

While the reasoning for the 2-qubit case is understandable from there, I'm not sure what fundamental reason distinguishes a single qubit from a single ""fair"" coin spinning in the air.

+ +

This answer by @Jay Gambetta somewhat gets at it (but is still not satisfactory):

+ +
+

This is a good question and in my view gets at the heart of a qubit. + Like the comment by + @blue, + it's not that it can be an equal superposition as this is the same as + a classical probability distribution. It is that it can have negative + signs.

+ +

Take this example. Imagine you have a bit in the $0$ state and then + you apply a coin flipping operation by some stochastic matrix + $\begin{bmatrix}0.5 & 0.5 \\0.5 & 0.5 \end{bmatrix}$ this will make a + classical mixture. If you apply this twice it will still be a + classical mixture.

+ +

Now lets got to the quantum case and start with a qubit in the $0$ + state and apply a coin flipping operation by some unitary matrix + $\begin{bmatrix}\sqrt{0.5} & \sqrt{0.5} \\\sqrt{0.5} & -\sqrt{0.5} + \end{bmatrix}$. This makes an equal superposition and you get random + outcomes like above. Now applying this twice you get back the state + you started with. The negative sign cancels due to interference which + cannot be explained by probability theory.

+ +

Extending this to n qubits gives you a theory that has an exponential + that we can't find efficient ways to simulate.

+ +

This is not just my view. I have seen it shown in talks of Scott + Aaronson and I think its best to say quantum is like “Probability + theory with Minus Signs” (this is a quote I seen Scott make).

+
+ +

I'm not exactly sure how they're getting the unitary matrix $\begin{bmatrix}\sqrt{0.5} & \sqrt{0.5} \\\sqrt{0.5} & -\sqrt{0.5} \end{bmatrix}$ and what the motivation behind that is. Also, they say: ""The negative sign cancels due to interference which cannot be explained by probability theory."" The way they've used the word interference seems very vague to me. It would be useful if someone could elaborate on the logic used in that answer and explain what they actually mean by interference and why exactly it cannot be explained by classical probability. Is it some extension of Bell's inequality for 1-qubit systems (doesn't seem so based on my conversations with the folks in the main chat though)?
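For concreteness, the contrast described in the quoted answer can be checked directly (the unitary there is just the Hadamard gate):

```python
import numpy as np

# Classical 'coin flip': a stochastic matrix. Applying it twice
# still leaves a 50/50 mixture.
S = np.array([[0.5, 0.5],
              [0.5, 0.5]])
p0 = np.array([1.0, 0.0])          # classical bit in state 0
assert np.allclose(S @ p0, [0.5, 0.5])
assert np.allclose(S @ S @ p0, [0.5, 0.5])

# Quantum 'coin flip': the unitary in the quoted answer (a Hadamard).
H = np.sqrt(0.5) * np.array([[1,  1],
                             [1, -1]])
psi0 = np.array([1.0, 0.0])        # qubit in |0>
# One application: 50/50 measurement statistics, just like the coin.
assert np.allclose(np.abs(H @ psi0)**2, [0.5, 0.5])
# Two applications: the minus sign cancels the |1> amplitude and we are
# back to |0> with certainty. No stochastic matrix that first produces
# a 50/50 mixture can undo it on the second application.
assert np.allclose(np.abs(H @ H @ psi0)**2, [1.0, 0.0])
```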

+",26,,26,,07-08-2018 14:00,01-02-2022 22:09,How is a single qubit fundamentally different from a classical coin spinning in the air?,,4,0,,,,CC BY-SA 4.0 +2444,1,,,6/24/2018 14:31,,7,371,"

I mean, are we certain that they will be able to provide us huge improvements (in some tasks) compared to classical computers?

+",2754,,2293,,9/24/2020 14:07,9/24/2020 14:07,Are we certain that quantum computers are more efficient than classical computers can be built?,,2,1,,,,CC BY-SA 4.0 +2445,2,,2443,6/24/2018 15:25,,3,,"

The analogy between qubits and coin flips is popular but can be misleading. (See, for example, this video: https://www.youtube.com/watch?v=lypnkNm0B4A) A coin spinning in the air and landing on the ground is not truly random, though we may describe it as such. The key point is how you measure it.

+ +

At any point in time the coin has a definite orientation, though it may be unknown to us. Likewise, qubits have a definite state at any time, which we can describe by a point on the surface of a sphere (the so-called Bloch sphere). Mathematically, a coin's orientation and a qubit's state are equivalent. While in the air, the coin may undergo deterministic and reversible motion (e.g., spinning and falling). Likewise, prior to measurement a qubit may undergo deterministic and reversible transformations (e.g., unitary gate operations on a quantum computer).

+ +

Measurement represents an irreversible process. For a coin, it is a series of inelastic collisions with the ground, bouncing and spinning until it comes to rest. If we are completely ignorant of the initial conditions of the coin, the two final orientations (heads or tails) will appear equally likely, but this is not always the case. If I drop it oriented ""heads up"" from a short height, it will land flat with ""heads up"" with near certainty. But suppose I was standing next to a large magnetic wall and did this. The coin would hit edge-on and would likely land with either heads or tails showing, with equal probability. One could imagine doing this experiment with various initial orientations of the coin and orientations of the magnetic wall (upright, flat, slanted, etc.). You can imagine that the probability of getting heads or tails will be different, depending on the relative orientations of the coin and wall. (In theory it's all completely deterministic, but in practice we never know the initial conditions that precisely.)

+ +

Measurements of qubits are quite similar. I can prepare a qubit in the state $\frac{1}{\sqrt{2}}[|0\rangle + |1\rangle]$, measure it in the 0/1 basis $\{|0\rangle, |1\rangle\}$, and get either $|0\rangle$ or $|1\rangle$ with equal probability. If, however, I measure in the +/- basis $\{|+\rangle, |-\rangle\}$ (analogous to using a magnetic wall), I get $|+\rangle$ with near certainty. (I say ""near certainty"" because, well, nothing in the real world is perfect.) Here, $|\pm\rangle = \frac{1}{\sqrt{2}}[|0\rangle\pm|1\rangle]$ are the +/- basis states. For polarized photons, for example, this could be done using polarization filters rotated $45^\circ$.

+ +

The difference between preparing the state $\frac{1}{\sqrt{2}}[|0\rangle + |1\rangle]$ and the state $\frac{1}{\sqrt{2}}[|0\rangle - |1\rangle]$ is the difference between preparing a vertically oriented coin with either heads or tails facing away from the wall. (A good picture would really help here.) We can tell which of the two states is prepared based on the outcome of a suitably chosen measurement, which in this case would be a +/- basis (or magnetic wall) measurement.

+ +

Jay Gambetta mentions a unitary matrix that is used to represent a Hadamard gate. It corresponds to rotating a coin by $90^\circ$, so a coin that's initially heads up becomes vertically oriented with, say, heads facing away from the wall. If the wall is magnetic and you release the coin, it will stick to it with heads up. If, instead, you started with a coin that's tails up and applied the same rotation, it would be vertical with tails facing away from the wall. If you release it (and the wall is still magnetic), you get tails. On the other hand, if the wall is not magnetic and you drop it, it lands heads or tails with equal probability. Using a ""floor"" measurement doesn't distinguish between the two vertical orientations, but using a ""wall"" measurement does. It's not so much whether things are predictable or not, it's the type of measurement you do that distinguishes one quantum state (or coin orientation) from another.

+ +

This is the whole of it. The only remaining mystery is that the outcome of the coin measurement is considered to be, in theory, completely deterministic, while that of the qubit is considered to be, except in special cases, ""intrinsically random."" But that's another discussion...

+",356,,356,,6/24/2018 16:56,6/24/2018 16:56,,,,3,,,,CC BY-SA 4.0 +2446,1,2479,,6/24/2018 16:50,,3,115,"

I recently found out about shadowgraphy and was wondering if a technique like this could be used to:

+ +
    +
  • Visually show entanglement
  • +
• Suffice as a measure (e.g. continuous partial trace)
  • +
  • Most useful applications
  • +
+ +

Resources

+ + +",2645,,2645,,6/25/2018 22:15,6/26/2018 19:45,Realization of Quantum Shadowgraphy,,1,0,,,,CC BY-SA 4.0 +2447,2,,2443,6/24/2018 17:55,,-1,,"

You have already mentioned the practical differences, such as qubit entanglement, and the negative signs (or more general ""phases"").

+ +

The fundamental reason for this is that allowed quantum states are solutions of the Schrödinger equation, which is a linear differential equation. The sum of solutions to a linear differential equation is always also a solution to that differential equation [1],[2],[3]. Since a ""solution"" to the differential equation is synonymous with an ""allowed quantum state"" or ""allowed wavefunction"", any sum of allowed states is also allowed (i.e. superpositions like Bell states are allowed).

+ +

That is the fundamental reason why quantum mechanical bits (qubits) can exist in superpositions. In fact, not just any sum, but any linear combination of states is an allowed state because the differential equation is linear. This means we can even multiply states by constants (phases of -1, +1, or $e^{i\theta}$) and still have allowed states.

+ +

Bits that follow the rules of quantum physics, for example, the Schrödinger equation, can physically exist in superpositions and with phases, due to linearity (review vector spaces if this is not clear). Classical physics does not give any mechanism for a system to be in more than one state at the same time.

+",2293,,26,,7/13/2018 15:03,7/13/2018 15:03,,,,3,,,,CC BY-SA 4.0 +2448,2,,2444,6/24/2018 18:19,,4,,"

The answer is no. We cannot be 100% certain.
+Just like we don't have a proof that P $\ne$ NP, there is no proof that NP $\ne$ QMA, though we believe both these inequalities to be true even without proof.

+ +

Furthermore, we do not know how the ""engineering complexity"" scales, so even though Shor's algorithm has exponentially fewer operations to perform than the best known classical algorithm, it might be doubly exponentially more difficult to implement it physically. See my answer to this question: Are there any estimates on how complexity of quantum engineering scales with size?.

+ +

It is also possible that there exists a proof that NP $\ne$ QMA and that the engineering complexity scales linearly, meaning that quantum computers could ""provably"" have some advantage, but we just do not know of any such proof yet. Until we see a quantum computer give these ""huge improvements"" for a problem where it is provably better than the best classical algorithm, we have no way of being 100% certain that quantum computers will provide what you ask.

+ +

Quantum communication though (not necessarily quantum computing), does have some provable benefits over present day classical communication devices, and one example is the BB84 protocol.

+",2293,,,,,6/24/2018 18:19,,,,9,,,,CC BY-SA 4.0 +2449,1,2450,,6/24/2018 18:24,,6,293,"

When considering Shor's algorithm, we use ancilla qubits to effectively obtain the state +$$\sum_x \left|x,f(x)\right>$$ +for the function $f(x) = a^x \mod N$.

+ +

As I have learned it, we then measure the ancilla qubits, to obtain, say $f(x) = b$ and get the state +$$\sum_{x\mid f(x) = b} \left|x,f(x)\right>.$$

+ +

Then applying a QFT will give the period. However, I think that the measurement of the ancilla qubits is not necessary in order to be able to apply the QFT (or its inverse for that matter) and do a measurement to obtain the period.

+ +

Is that correct? Is it necessary to measure the ancilla qubits in Shor's algorithm?

+",2005,,26,,12/14/2018 6:09,12/14/2018 6:09,Measuring ancillas in Shor's algorithm,,1,0,,,,CC BY-SA 4.0 +2450,2,,2449,6/24/2018 19:12,,7,,"
+

Is that correct? Is it [not] necessary to measure the ancilla qubits in Shor's algorithm?

+
+ +

Correct, it is not necessary to measure the ancillae.

+ +

This is easily seen by appealing to the no-communication theorem. If measuring the ancillae could affect the success of the algorithm, you could communicate faster-than-light by starting the algorithm many times, giving the ancillae to Alice, sending her far away, have her encode a bit by measuring or not measuring the ancillae, then having Bob measure how often the factoring was succeeding to read out the bit.

+ +

Another way to see that this works is by simulating it. From the results you can see that, as soon as the modular-exponentiation into the ancillae has happened, the density matrix of the input has been decohered in a very particular periodic way. And that the inverse QFT of this decohered density matrix has magnitude spikes at the correct places:

+ +

+ +

Notice that this is all in place even though the ancillae were not measured. Measuring them changes none of the density matrices.
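This decoherence picture can be reproduced in a short numpy sketch (my own illustration, not the simulation linked above) for $N = 15$, $a = 7$, where the period is $r = 4$ and $M = 2^6$ input values are used. Without ever measuring the ancillae, summing the QFT output probabilities branch-by-branch over the ancilla values gives spikes at multiples of $M/r = 16$:

```python
import numpy as np

N, a, m = 15, 7, 6                    # factor N=15 with base a=7, M=2^m inputs
M = 2**m
xs = np.arange(M)
fx = np.array([pow(a, int(x), N) for x in xs])   # f(x) = a^x mod N, period r=4

# Without measuring the ancillae: the final distribution over QFT outcomes
# is the incoherent sum over the branches labelled by each value of f(x).
probs = np.zeros(M)
for b in set(fx):
    branch = (fx == b) / np.sqrt(M)              # amplitudes of this branch
    probs += np.abs(np.fft.fft(branch) / np.sqrt(M))**2

assert abs(probs.sum() - 1) < 1e-9
# Spikes at multiples of M/r = 64/4 = 16, as in the plotted simulation.
spikes = np.flatnonzero(probs > 1e-9)
assert set(spikes) == {0, 16, 32, 48}
assert np.allclose(probs[spikes], 0.25)
```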

+",119,,119,,6/24/2018 19:58,6/24/2018 19:58,,,,0,,,,CC BY-SA 4.0 +2451,2,,2432,6/24/2018 19:51,,12,,"

The naming started in NMR and it has become the difference between the following two experiments.

+ +

Experiment one: Prepare the qubit in a superposition state (apply an H gate), vary the wait time, and then measure in the superposition basis (apply another H gate). The decay time of this experiment is $T_2^*$. We commonly call this a Ramsey experiment.

+ +

Experiment two: Prepare the qubit in a superposition state, wait half the time, apply a pi-pulse (X operator), wait the remaining half, and then measure in the superposition basis. The decay time of this experiment is $T_2$. We commonly call this a Hahn echo.

+ +

In the second experiment, the pi-pulse refocuses slow noise which, depending on the system, can arise for many reasons. There are higher-order experiments that refocus the noise better, and this is an active research area.

+ +

To see this imagine the simplest case where the noise can be explained by a Hamiltonian $H = \Delta |1\rangle\langle 1|$ that is constant and unknown.

+ +

In the first experiment:

+ +

Step 1. Apply H gives $(|0\rangle+|1\rangle)/\sqrt{2}$

+ +

Step 2. Wait gives $(|0\rangle+\exp(-i\Delta t)|1\rangle)/\sqrt{2}$

+ +

Step 3. Apply H gives $[(1+e^{-i\Delta t})|0\rangle+(1-e^{-i\Delta t})|1\rangle]/2$.

+ +

The probability of getting outcome 0 is $1/2+\cos(\Delta t)/2$.

+ +

In the second experiment:

+ +

Step 1. Apply H gives $(|0\rangle+|1\rangle)/\sqrt{2}$

+ +

Step 2. Half Wait gives $(|0\rangle+\exp(-i\Delta t/2)|1\rangle)/\sqrt{2}$

+ +

Step 3. Apply X gives $(|1\rangle+\exp(-i\Delta t/2)|0\rangle)/\sqrt{2}$.

+ +

Step 4. Half Wait gives $\exp(-i\Delta t/2)(|1\rangle+|0\rangle)/\sqrt{2}$.

+ +

Step 5. Apply H gives $\exp(-i\Delta t/2)|0\rangle$.

+ +

The probability of getting outcome $0$ is always $1$ (there is no decay). So the $T_2$ of this experiment would be infinite.

+ +

In the more general case $H = \Delta(t) |1\rangle\langle 1|$ the first experiment gives $\mathrm{Pr}(0) = 1/2+\langle\cos(\int_0^t \Delta(t))\rangle/2$ where we have averaged over different shots (runs of the experiment). A similar but different expression can be derived for the second experiment, which I will leave as an exercise; depending on the assumptions you are willing to make about the noise correlations, you can simplify this expression in terms of the noise spectrum.
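To illustrate, here is a small numpy simulation of both experiments under the simplest model above, where each shot sees a random but constant detuning $\Delta$:

```python
import numpy as np

rng = np.random.default_rng(0)

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])

def wait(delta, tau):
    # Free evolution under H_noise = delta |1><1| for a time tau.
    return np.diag([1.0, np.exp(-1j * delta * tau)])

def ramsey_p0(delta, t):
    psi = H @ wait(delta, t) @ H @ np.array([1.0, 0.0])
    return abs(psi[0]) ** 2

def echo_p0(delta, t):
    psi = H @ wait(delta, t / 2) @ X @ wait(delta, t / 2) @ H @ np.array([1.0, 0.0])
    return abs(psi[0]) ** 2

t = 5.0
deltas = rng.normal(0.0, 1.0, size=5000)  # shot-to-shot static detuning
ramsey_mean = np.mean([ramsey_p0(d, t) for d in deltas])
echo_mean = np.mean([echo_p0(d, t) for d in deltas])
print(round(echo_mean, 6))      # 1.0: the pi-pulse refocuses the static noise exactly
print(0.4 < ramsey_mean < 0.6)  # True: the Ramsey signal has decayed towards 1/2
```

With correlated, time-dependent noise the echo would decay too, just more slowly, which is exactly the $T_2 > T_2^*$ separation.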

+ +

If you want to go check it out on a real experiment you can try the notebook https://github.com/QISKit/qiskit-tutorial/blob/master/reference/qcvv/relaxation_and_decoherence.ipynb but you need an account on the IBM Q experience.

+",302,,26,,6/24/2018 20:04,6/24/2018 20:04,,,,2,,,,CC BY-SA 4.0 +2452,2,,1808,6/24/2018 20:20,,2,,"

If $a^{r/2} \equiv -1$, then $a^{r/2}$ is a trivial square root of $1$ instead of an interesting square root. We already knew that $-1$ is a square root of $1$. We need a square root we didn't already know.

+ +

Suppose I give you a number $x$ such that $x^2 = 1 \pmod{N}$. You can rewrite this equation as:

+ +

$$\begin{align} +x^2 &= 1 + k \cdot N +\\ +x^2 - 1 &= k \cdot N +\\ +(x+1)(x-1) &= k \cdot N +\end{align} +$$

+ +

The key thing to realize is that this equation is trivial when $x$ is $\pm 1 \bmod N$. If $x\equiv -1$, then the left hand side is $0 \bmod N$ because the factor $(x+1)\equiv 0$. The same thing happens if $x \equiv +1$, but with the other factor.

+ +

In order for both $(x+1)$ and $(x-1)$ to be interesting (i.e. non-zero mod $N$), we need $x$ to be an extra square root of $1$. A square root besides the obvious $+1$ and $-1$ answers. When that happens, it is impossible for the prime factors of $N$ to all go into $(x+1)$ or all go into $(x-1)$, and so $\gcd(x+1, N)$ is guaranteed to give you a factor of $N$ instead of a multiple of $N$.

+ +

For example, if $N=221$ then $x=103$ is an extra square root of 1. And indeed, both $\gcd(x+1, N) = \gcd(104, 221) = 13$ and $\gcd(x-1, N) = \gcd(102, 221) = 17$ are factors of $221$. Whereas if we had picked the boring square root $x=-1\equiv 220$, then neither $\gcd(x+1,N) = \gcd(221, 221) = 221$ nor $\gcd(x-1,N) = \gcd(219,221) = 1$ are factors of $221$.
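For a number this small, you can find the extra square roots and the resulting factors by brute force:

```python
from math import gcd

N = 221  # = 13 * 17

# All "extra" square roots of 1 mod N, i.e. x with x^2 = 1 (mod N) and x != +-1.
extra = [x for x in range(2, N - 1) if (x * x) % N == 1]
print(extra)  # [103, 118]

x = extra[0]
print(gcd(x + 1, N), gcd(x - 1, N))  # 13 17 -- both nontrivial factors of N

# The boring root x = N - 1 (= -1 mod N) gives nothing useful:
print(gcd(N - 1 + 1, N), gcd(N - 1 - 1, N))  # 221 1
```

Shor's algorithm is, of course, just a fast way of finding such an $x$ (as $a^{r/2}$) when $N$ is far too large for this kind of search.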

+",119,,26,,03-04-2019 21:30,03-04-2019 21:30,,,,0,,,,CC BY-SA 4.0 +2453,2,,2195,6/24/2018 20:43,,2,,"

You can find detailed instructions about what you need to install for your particular OS here. This visualization uses LaTeX, so you need to have a LaTeX compiler installed.

+ +

Also make sure you have the latest version of Qiskit, as a Windows visualization bug was recently fixed. You can upgrade by doing: +pip install -U qiskit

+ +

Within the next couple weeks, Qiskit will have another visualization method that runs purely in Python, hence no other software installations required.

+",2503,,,,,6/24/2018 20:43,,,,0,,,,CC BY-SA 4.0 +2454,1,2455,,6/24/2018 21:23,,7,470,"

I found it odd that the result of the action of identity gate (namely a $2\times2$ identity matrix) on a pure state $|0\rangle$ (namely the vector corresponding to the $2\times1$ matrix $\begin{bmatrix} 1\\0 \end{bmatrix}$) becomes a $2\times2$ matrix $\begin{bmatrix} 1+0\cdot i&0+0\cdot i\\0+0\cdot i&1+0\cdot i \end{bmatrix}=\begin{bmatrix} 1&0\\0&1 \end{bmatrix}$ as I found it HERE (QISKit tutorial page): +

+ +

Also, when one asks for more precision, the result looks odd:

+ +

+ +

Why is this? The same thing happens for other gates listed on the page referenced above.

+",2757,,2293,,8/17/2020 2:56,8/17/2020 2:56,One-qubit gate results in QISKit,,1,0,,,,CC BY-SA 4.0 +2455,2,,2454,6/25/2018 1:28,,3,,"

In the first case you are not asking for the state but for the unitary matrix representing the circuit. The result is correct; the deviations are just rounding error. It looks like you are not using the latest version, so I would update.

+ +

In the second case, are you sure that you don't have a previous circuit loaded in memory? In that notebook all the examples have the same circuit name, and it looks like this is a different gate. If I had to guess, it is the u2 example. If this is not the case, you have found a bug; please submit an issue and we will try to debug it.

+ +

I would also change the title, as this is not a general quantum computing question, to something like 'one-qubit gate errors in Qiskit'.

+",302,,302,,6/25/2018 2:22,6/25/2018 2:22,,,,7,,,,CC BY-SA 4.0 +2456,1,2459,,6/25/2018 6:53,,3,268,"

I was not able to locate any visuals online. The visual I have in my head is a cube with Bloch spheres as the eight vertices.

+ +

I am also curious about a matrix representation, although I am not sure how feasible this is as a qubyte has $2^8$ (256) states.

+ +

What is the best way to represent a qubyte?

+",2645,,26,,7/16/2018 15:03,7/16/2018 15:03,How to represent a qubyte?,,2,1,,,,CC BY-SA 4.0 +2457,2,,2444,6/25/2018 7:12,,3,,"

There is no absolute certainty. People introduce complexity classes: BPP for the set of problems that a classical computer (with access to a source of randomness) can solve in polynomial time, and BQP for the set of problems that a quantum computer can efficiently solve. We know that BQP contains BPP, but we do not know for certain that there are problems in BQP but not BPP. It is generally believed that there are some. Shor's algorithm for factoring large composite numbers is most typically hailed as a likely candidate, although the best candidate is the algorithm for the Jones polynomial because this problem is known to be the hardest that is efficiently solvable by a quantum computer. But we don't know. It may be that there is an efficient classical algorithm for this problem, and we just haven't been smart enough to find it yet.

+ +

What we do know is that with respect to certain oracles (i.e. black boxes that function in particular ways), quantum algorithms do outperform classical ones. You can think of this as ""if we program an algorithm in a particular way, how fast can it run?"". The advantage of these oracle-based algorithms is that lower bounds can be proven, showing the minimum number of operations required. The classic example of this is Simon's problem. Again, it doesn't mean that there can't be a better way of doing it classically via another route.

+ +

The other aspect that is implied by your question is whether a scalable universal quantum computer can actually be built. Until this is actually done, I don't think it can be proven, although it is generally believed that it will happen. There are, however, those who believe there are fundamental limitations that will prevent the construction of a suitable device. Gil Kalai, for example.

+",1837,,,,,6/25/2018 7:12,,,,0,,,,CC BY-SA 4.0 +2458,1,2652,,6/25/2018 7:13,,3,197,"

What quantum technologies and/or techniques (if any) could be used to evaluate the value of large numbers in the fast growing hierarchy such as Tree(3) or SCG(13)?

+",2645,,2645,,12/18/2018 20:24,12/18/2018 20:24,Quantum algorithm to evaluate numbers in fast growing hierarchy,,1,2,,,,CC BY-SA 4.0 +2459,2,,2456,6/25/2018 7:19,,6,,"

I don't think you'll find a good visual representation. The Bloch sphere for a qubit is a rather special coincidence: the number of parameters needed to represent an arbitrary mixed state is only 1 more than the number needed to represent an arbitrary pure state, and so the pure states can be thought of as the surface of a mixed state's volume.

+ +

A cube with Bloch spheres at the vertices is a fair representation if all 8 qubits remain separable at all times. However, you will get in a mess trying to represent entanglement in that picture (you'll need superpositions of several different separable states). As you've said, an arbitrary pure state of a qubyte has $2^8$ basis states, requiring $2^9-2$ real parameters to describe. 8 (pure) Bloch states give you access to only 16 parameters. You're probably best sticking to a complex vector of $2^8$ elements (minus 2 real parameters for global phase and normalisation).
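As a quick sketch of that 'complex vector' representation: $2\times 2^8$ raw real parameters, minus one for normalisation and one for the global phase, leaving 510.

```python
import numpy as np

rng = np.random.default_rng(1)

n = 8  # a qubyte
dim = 2 ** n

# A generic pure state: dim complex amplitudes...
psi = rng.normal(size=dim) + 1j * rng.normal(size=dim)
psi /= np.linalg.norm(psi)             # ...minus 1 real parameter (normalisation)
psi *= np.exp(-1j * np.angle(psi[0]))  # ...minus 1 more (global phase)

print(2 * dim - 2)  # 510 real parameters for a pure qubyte state
```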

+",1837,,,,,6/25/2018 7:19,,,,3,,,,CC BY-SA 4.0 +2460,2,,2443,6/25/2018 7:47,,6,,"
+

How is a single qubit in a Bell state $\frac{1}{\sqrt{2}}(|0\rangle+|1\rangle)$ any different from a classical coin spinning in the air (on being tossed)?

+ +

For both of them, the probability of getting heads is 1/2 + and getting tails is also 1/2 (we can assume that heads$\equiv|1\rangle$ and tails$\equiv|0\rangle$ and that we are ""measuring"" in the heads-tails basis).

+
+ +

For any 1-qubit state $|\psi\rangle$, if all you do is measure it in the computational basis, you will always be able to explain it in terms of a probability distribution p(heads)$=|\langle 0|\psi\rangle|^2$ and p(tails)$=|\langle 1|\psi\rangle|^2$. The key differences are in using different bases and/or performing unitary evolutions.

+ +

The classic example is the Mach-Zehnder interferometer. Think of it this way: any 1-bit probabilistic operation is described by a $2\times 2$ stochastic matrix (i.e. all columns sum to 1). Call it $P$. It is easy enough to show that there is no $P$ such that $P^2=X$, where $X$ is the Pauli matrix (in other words, a NOT gate). Thus, there is no probabilistic gate that can be considered the square-root of NOT. On the other hand, we can build such a device. A half-silvered mirror performs the square-root of not action.

+ +

A half-silvered mirror has two inputs (labelled 0 and 1) and two outputs (also labelled 0 and 1). Each input is a photon coming in a different direction, and it is either reflected or transmitted. If you just look at one half-silvered mirror, then whatever input you give, the output is 50:50 reflected or transmitted. It seems just like the coin you're talking about. However, if you put two of them together, if you input 0, you always get the output 1, and vice versa. The only way to explain this is with probability amplitudes, and a transition matrix that looks like +$$ +U=\frac{1}{\sqrt{2}}\left(\begin{array}{cc} +1 & i \\ i & 1 +\end{array}\right). +$$ +In quantum mechanics, the square-root of not gate exists.
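You can check both claims numerically: the beam-splitter matrix squares to $iX$ (i.e. NOT, up to a global phase), while a brute-force search finds no $2\times 2$ stochastic matrix whose square is $X$:

```python
import numpy as np

# Probability-amplitude description of a half-silvered mirror.
U = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])

print(np.allclose(U @ U, 1j * X))  # True: two mirrors act as NOT up to a global phase

# Brute force: no 2x2 stochastic matrix P (columns summing to 1) satisfies P @ P = X.
best = min(
    np.abs(np.array([[p, q], [1 - p, 1 - q]]) @ np.array([[p, q], [1 - p, 1 - q]]) - X).max()
    for p in np.linspace(0, 1, 101)
    for q in np.linspace(0, 1, 101)
)
print(best > 0.2)  # True: the residual stays bounded away from zero
```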

+",1837,,,,,6/25/2018 7:47,,,,4,,,,CC BY-SA 4.0 +2461,2,,2388,6/25/2018 9:04,,3,,"
+

What am I missing here? Where did the factor of $\frac{t}{2\pi}$ vanish in their algorithm?

+
+ +

Remember that in Dirac notation, whatever you write inside the ket is an arbitrary label referring to something more abstract. So, it is true that you are finding the (approximate) eigenvector to $U$, which has eigenvalue $e^{-i\lambda t}$ and therefore what you're extracting is $\lambda t/(2\pi)$, but that is the same as the eigenvector of $A$ with eigenvalue $\lambda$, and it is that which is being referred to in the notation. But if you wanted to be really clear, you could write it as

+ +

|approximate eigenvector of $U$ for which eigenvalue is $e^{-i\lambda t}$ and of $A$ for which eigenvalue is $\lambda\rangle$,

+ +

but perhaps instead of writing that out every time, we might just write $|\tilde\lambda\rangle$ for brevity!

+",1837,,,,,6/25/2018 9:04,,,,0,,,,CC BY-SA 4.0 +2462,2,,2393,6/25/2018 10:12,,4,,"

$\newcommand{\bra}[1]{\left\langle#1\right|}\newcommand{\ket}[1]{\left|#1\right\rangle}\newcommand{\proj}[1]{|#1\rangle\langle#1|}\newcommand{\half}{\frac12}$In answer to your first question, I wrote myself some notes some time ago about my understanding of how it worked. The notation is probably a bit different (I've tried to bring it more into line, but it's easy to miss bits), but attempts to explain that choice of the state $|\Psi_0\rangle$. There also seem to be some factors of $\frac12$ floating around in places.

+ +

When we first study phase estimation, we're usually thinking about it in respect to use in some particular algorithm, such as Shor's algorithm. This has a specific goal: getting the best $t$-bit approximation to the eigenvalue. You either do, or you don't, and the description of phase estimation is specifically tuned to give as high a success probability as possible.

+ +

In HHL, we are trying to produce some state +$$ +\ket{\phi}=\sum_j\frac{\beta_j}{\lambda_j}\ket{\lambda_j}, +$$ +where $\ket{b}=\sum_j\beta_j\ket{\lambda_j}$, making use of phase estimation. The accuracy of the approximation of this will depend far more critically on an accurate estimation of the eigenvalues that are close to 0 rather than those that are far from 0. An obvious step therefore, is to attempt to modify the phase estimation protocol so that rather than using `bins' of fixed width $2\pi/T$ for approximating the phases of $e^{-iAt}$ ($T=2^t$ and $t$ is number of qubits in phase estimation register), we might rather specify a set of $\phi_y$ for $y\in\{0,1\}^t$ to act as the centre of each bin so that we can have vastly increased accuracy close to 0 phase. More generally, you might specify a trade-off function for how tolerant you might be of errors as a function of the phase $\phi$. The precise nature of this function can then be tuned to a given application, and the particular figure of merit which you will use to determine success. In the case of Shor's algorithm, our figure of merit was simply this binning protocol -- we were successful if the answer was in the correct bin, and unsuccessful outside it. This is not going to be the case in HHL, whose success is more reasonably captured by a continuous measure such as the fidelity. So, for the general case, we shall designate a cost function $C(\phi,\phi')$ which specifies a penalty for answers $\phi'$ if the true phase is $\phi$.

+ +

Recall that the standard phase estimation protocol worked by producing an input state that was the uniform superposition of all basis states $\ket{x}$ for $x\in\{0,1\}^t$. This state was used to control the sequential application of multiple controlled-$U$ gates, which are followed up by an inverse Fourier transform. Imagine we could replace the input state with some other state +$$ +\ket{\Psi_0}=\sum_{x\in\{0,1\}^t}\alpha_x\ket{x}, +$$ +and then the rest of the protocol could work as before. For now, we will ignore the question of how hard it is to produce the new state $\ket{\Psi_0}$, as we are just trying to convey the basic concept. Starting from this state, the use of the controlled-$U$ gates (targeting an eigenvector of $U$ of eigenvalue $\phi$), produces the state +$$ +\sum_{x\in\{0,1\}^t}\alpha_xe^{i\phi x}\ket{x}. +$$ +Applying the inverse Fourier transform yields +$$ +\frac{1}{\sqrt{T}}\sum_{x,y\in\{0,1\}^t}e^{ix\left(\phi-\frac{2\pi y}{T}\right)}\alpha_x\ket{y}. +$$ +The probability of getting an answer $y$ (i.e. $\phi'=2\pi y/T$) is +$$ +\frac{1}{T}\left|\sum_{x\in\{0,1\}^t}e^{ix\left(\phi-\frac{2\pi y}{T}\right)}\alpha_x\right|^2 +$$ +so the expected value of the cost function, assuming a random distribution of the $\phi$, is +$$ +\bar C=\frac{1}{2\pi T}\int_0^{2\pi}d\phi\sum_{y\in\{0,1\}^t}\left|\sum_{x\in\{0,1\}^t}e^{ix\left(\phi-\frac{2\pi y}{T}\right)}\alpha_x\right|^2C(\phi,2\pi y/T), +$$ +and our task is to select the amplitudes $\alpha_x$ that minimise this for any specific realisation of $C(\phi,\phi')$. If we make the simplifying assumption that $C(\phi,\phi')$ is only a function of $\phi-\phi'$, then we can make a change of variable in the integration to give +$$ +\bar C=\frac{1}{2\pi}\int_0^{2\pi}d\phi\left|\sum_{x\in\{0,1\}^t}e^{ix\phi}\alpha_x\right|^2C(\phi), +$$ +As we noted, the most useful measure is likely to be a fidelity measure. 
Consider we have a state $\ket{+}$ and we wish to implement the unitary $U_\phi=\proj{0}+e^{i\phi}\proj{1}$, but instead we implement $U_{\phi'}=\proj{0}+e^{i\phi'}\proj{1}$. The fidelity measures how well this achieves the desired task, +$$ +F=\left|\bra{+}U_{\phi'}^\dagger U\ket{+}\right|^2=\cos^2\left(\frac{\phi-\phi'}{2}\right), +$$ +so we take +$$ +C(\phi-\phi')=\sin^2\left(\frac{\phi-\phi'}{2}\right), +$$ +since in the ideal case $F=1$, so the error, which is what we want to minimise, can be taken as $1-F$. +This will certainly be the correct function for evaluating any $U^t$, but for the more general task of modifying the amplitudes, not just the phases, the effects of inaccuracies propagate through the protocol in a less trivial manner, so it is difficult to prove optimality, although the function $C(\phi-\phi')$ will already provide some improvement over the uniform superposition of states. Proceeding with this form, we have +$$ +\bar C=\frac{1}{2\pi}\int_0^{2\pi}d\phi\left|\sum_{x\in\{0,1\}^t}e^{ix\phi}\alpha_x\right|^2\sin^2\left(\half\phi\right), +$$ +The integral over $\phi$ can now be performed, so we want to minimise the function +$$ +\half\sum_{x,y=0}^{T-1}\alpha_x\alpha_y^\star(\delta_{x,y}-\half\delta_{x,y-1}-\half\delta_{x,y+1}). +$$ +This can be succinctly expressed as +$$ +\min\bra{\Psi_0}H\ket{\Psi_0} +$$ +where +$$ +H=\half\sum_{x,y=0}^{T-1}(\delta_{x,y}-\half\delta_{x,y-1}-\half\delta_{x,y+1})\ket{x}\bra{y}. +$$ +The optimal choice of $\ket{\Psi_0}$ is the minimum eigenvector of the matrix $H$, +$$ +\alpha_x=\sqrt{\frac{2}{T+1}}\sin\left(\frac{(x+1)\pi}{T+1}\right), +$$ +and $\bar C$ is the minimum eigenvalue +$$ +\bar C=\half-\half\cos\left(\frac{\pi}{T+1}\right). +$$ +Crucially, for large $T$, $\bar C$ scales as $1/T^2$ rather than the $1/T$ that we would have got from the uniform coupling choice $\alpha_x=1/\sqrt{T}$. This yields a significant benefit for the error analysis.

+ +

If you want to get the same $|\Psi_0\rangle$ as reported in the HHL paper, I believe you have to add the terms $-\frac14\left(\ket{0}\bra{T-1}+\ket{T-1}\bra{0}\right)$ to the Hamiltonian. I have no justification for doing so, however; this is probably my failing.

+",1837,,1837,,6/27/2018 6:24,6/27/2018 6:24,,,,0,,,,CC BY-SA 4.0 +2463,2,,2440,6/25/2018 14:13,,3,,"

Strictly, it's never necessary to measure in different bases, because any projective measurement can be decomposed as a unitary followed by a projective measurement in the computational basis. Conversely, there are times when algorithms are described as unitaries followed by measurements in the computational basis, and you might choose to move some of those unitaries into a description of the basis.

+ +

The point I'm trying to make here is that the division between measurement and the rest of an algorithm is fairly arbitrary, so to get to the heart of your question, we really have to ask about the importance of relative phases inside a quantum algorithm. These are absolutely crucial, as these are what provide for the constructive and destructive interference that permit outcomes different from classical computation. Take a look at Deutsch's algorithm for the simplest, 1 qubit, example. This is usually described as -prepare $|0\rangle$ state, Hadamard, function evaluation, Hadamard, measure in computational basis- but you could describe it as -prepare $|+\rangle$ state, function evaluation,measure in $|\pm\rangle$ basis-. The function evaluation, the thing you really want to get at, is entirely encoded in the relative phase between the two states.

+ +

To give one extreme example, consider measurement-based quantum computation. Here, we produce a standard quantum state, and the computation to be performed is entirely defined by the choice of single-qubit measurements that are performed.

+",1837,,,,,6/25/2018 14:13,,,,2,,,,CC BY-SA 4.0 +2464,2,,2439,6/25/2018 15:09,,3,,"

It's not possible to create the initial states $\left| \Psi_0\right>$ and $\left|b\right>$ on the IBM 16 qubits version. On the other hand, it is possible to approximate them with an arbitrarily low error1 as the gates implemented by the IBM chips offer this possibility.

+ +

Here you ask for 2 different quantum states:

+ +
    +
  1. $\left| b \right>$ is not restricted at all. The state $\left| b \right>$ is represented by a vector of $N$ complex numbers that can be anything (as long as the vector has unitary norm).
  2. $\left| \Psi_0 \right>$ can be seen as a special case of $\left| b \right>$, where the coefficients $b_i$ are more constrained.
+ +

With this analysis, any method that can be used for creating $\left|b\right>$ can also be used to create $\left| \Psi_0 \right>$. On the other hand, as $\left| \Psi_0 \right>$ is more constrained, we can hope that there exists more efficient algorithms to produce $\left| \Psi_0 \right>$.

+ +

Useful for $\left|b\right>$ and $\left|\Psi_0\right>$: Based on Synthesis of Quantum Logic Circuits (Shende, Bullock & Markov, 2006), the QISKit Python SDK implements a generic method to initialize an arbitrary quantum state.

+ +

Useful for $\left|\Psi_0\right>$: Creating superpositions that correspond to efficiently integrable probability distributions (Grover & Rudolph, 2002) briefly presents an algorithm to initialise a state whose amplitudes represent a probability distribution respecting some constraints. These constraints are respected for $\left|\Psi_0\right>$ according to Quantum algorithm for solving linear systems of equations (Harrow, Hassidim & Lloyd, 2009), last line of page 5.

+ +

For the implementation on QISKit, here is a sample to initialise a given quantum state:

+ +
import qiskit
+
+statevector_backend = qiskit.get_backend('local_statevector_simulator')
+
+###############################################################
+# Make a quantum program for state initialization.
+###############################################################
+qubit_number = 5
+Q_SPECS = {
+    ""name"": ""StatePreparation"",
+    ""circuits"": [
+        {
+            ""name"": ""initializerCirc"",
+            ""quantum_registers"": [{
+                ""name"": ""qr"",
+                ""size"": qubit_number
+            }],
+            ""classical_registers"": [{
+                ""name"": ""cr"",
+                ""size"": qubit_number
+            }]},
+    ],
+}
+Q_program = qiskit.QuantumProgram(specs=Q_SPECS)
+
+## State preparation
+import numpy as np
+from qiskit.extensions.quantum_initializer import _initializer
+
+def psi_0_coefficients(qubit_number: int):
+    T = 2**qubit_number
+    tau = np.arange(T)
+    return np.sqrt(2 / T) * np.sin(np.pi * (tau + 1/2) / T)
+
+def get_coeffs(qubit_number: int):
+    # Can be changed to anything, the initialize function will take
+    # care of the initialisation.
+    return np.ones((2**qubit_number,)) / np.sqrt(2**qubit_number)
+    #return psi_0_coefficients(qubit_number)
+
+circuit_prep = Q_program.get_circuit(""initializerCirc"")
+qr = Q_program.get_quantum_register(""qr"")
+cr = Q_program.get_classical_register('cr')
+coeffs = get_coeffs(qubit_number)
+_initializer.initialize(circuit_prep, coeffs, [qr[i] for i in range(len(qr))])
+
+res = qiskit.execute(circuit_prep, statevector_backend).result()
+statevector = res.get_statevector(""initializerCirc"")
+print(statevector)
+
+ +
+ +

1Here ""error"" refers to the error between the ideal state and the approximation when dealing with a perfect quantum computer (i.e. no decoherence, no gate error).

+",1386,,1386,,6/26/2018 8:17,6/26/2018 8:17,,,,0,,,,CC BY-SA 4.0 +2465,2,,2425,6/25/2018 15:38,,3,,"

There is no a priori reason why an entanglement monotone should have this property. I think the particular issue is that, restricting to pure states, when you start to look at multiparticle states, you get inequivalent states. For example, for 3 qubits, the W state and GHZ state are inequivalent, and both are maximally entangled representatives for their SLOCC equivalence classes (stochastic local operations and classical communication). There's no particular reason that an entanglement monotone should always favour the GHZ state (the AME(3,2)) over the W state. Indeed, not all do. Looking through the list of common multipartite entanglement monotones on Wikipedia, the one that stands out to me is the Schmidt Measure. It has value $\log_2(3)$ for a W state and a value of 1 for the GHZ state, so the AME is not maximally entangled by this measure.

+",1837,,,,,6/25/2018 15:38,,,,0,,,,CC BY-SA 4.0 +2466,1,2468,,6/26/2018 1:54,,9,3781,"

How could a $\sqrt{SWAP}$ circuit be expressed in terms of CNOT gates & single qubit rotations?

+ +

+ +
    +
  • CNOT & $\sqrt{SWAP}$ Gates
  • +
+ +
+

Any quantum circuit can be simulated to an arbitrary degree of accuracy using a combination of CNOT gates and single qubit rotations.

+
+ + + +
+

Both CNOT and $\sqrt{SWAP}$ are universal two-qubit gates and can be transformed into each other.

+
+ + + +

Edit

+ +
+

The questions are not identical, and are likely to attract different audiences. -DaftWullie

+
+ +

Can the difference be regarded as applied vs. theoretical?

+",2645,,26,,12/23/2018 13:39,4/28/2019 9:06,"Expressing ""Square root of Swap"" gate in terms of CNOT",,2,5,,,,CC BY-SA 4.0 +2468,2,,2466,6/26/2018 6:59,,6,,"

As pointed out by @Nelimee, this question is essentially answered in this question, even if that question seems more specific. However, for the sake of completeness... (Note that I make no claims about minimality of construction with respect to, for example, number of controlled-not gates.)

+ +

Let's start with a unitary matrix for the square root of SWAP: +$$ +\left( +\begin{array}{cccc} + 1 & 0 & 0 & 0 \\ + 0 & \frac{1}{2}+\frac{i}{2} & \frac{1}{2}-\frac{i}{2} & 0 \\ + 0 & \frac{1}{2}-\frac{i}{2} & \frac{1}{2}+\frac{i}{2} & 0 \\ + 0 & 0 & 0 & 1 \\ +\end{array} +\right) +$$ +Note that if we pre- and post-multiply by a controlled NOT, controlled off qubit 2, this transforms into the form of a controlled-$U$ gate: +$$ +\left( +\begin{array}{cccc} + 1 & 0 & 0 & 0 \\ + 0 & 1 & 0 & 0 \\ + 0 & 0 & \frac{1}{2}+\frac{i}{2} & \frac{1}{2}-\frac{i}{2} \\ + 0 & 0 & \frac{1}{2}-\frac{i}{2} & \frac{1}{2}+\frac{i}{2} \\ +\end{array} +\right) +$$ +There are standard constructions for this controlled-$U$ gate. So, overall, we have a circuit that looks like + +where $ABC=\mathbb{I}$ and $AXBXC=e^{i\pi/4}\sqrt{X}$. Again, there are standard routes towards finding $A$, $B$ and $C$. For instance, we can define $R_Y(\theta)$ to be a rotation about the $Y$ axis by an angle $\theta$, i.e. $R_Y(\theta)=e^{i\theta Y}$ (there may be a factor of $\frac12$ here compared to some definitions). Then, if +$$ +A=R_Y(\theta)R_Z(\phi)\qquad B=R_Z(-2\phi)\qquad C=R_Z(\phi)R_Y(-\theta), +$$ +then it's easy to verify that $ABC=\mathbb{I}$. Furthermore, $AXBXC=R_Y(\theta)R_Z(4\phi)R_Y(-\theta)$ since $XR_Z(\phi)X=R_Z(-\phi)$. You can think of this as $R_Z(4\phi)$ specifying the eigenvalues that we want, and the $R_Y(\theta)$ is changing the basis from the computational basis into something else. In the present case, we have $\theta=\pi/2$ and $4\phi=\pi/4$.

+ +

The one little thing that we haven't got right yet is a phase factor. Our $AXBXC$ is creating the correct rotation up to an overall phase of $e^{i\pi/4}$: +$$ +\left( +\begin{array}{cccc} + 1 & 0 & 0 & 0 \\ + 0 & 1 & 0 & 0 \\ + 0 & 0 & \frac{1}{\sqrt{2}} & -\frac{i}{\sqrt{2}} \\ + 0 & 0 & -\frac{i}{\sqrt{2}} & \frac{1}{\sqrt{2}} \\ +\end{array} +\right) +$$ +The trick to get this right is to apply an $R_Z(\pi/8)$ on the first qubit and remove a global phase of $e^{i\pi/8}$. Thus, we have +
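As a numerical sanity check of the $A$, $B$, $C$ construction, here is a short numpy snippet. Note that it assumes one particular (and not the only possible) pair of conventions, half-angle for $R_Y$ and full-angle for $R_Z$, chosen so that $\theta=\pi/2$ and $4\phi=\pi/4$ reproduce the stated matrices; with other conventions the angles and phase signs shift accordingly:

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0 + 0j, -1.0])

# Assumed conventions: half-angle Y rotations, full-angle Z rotations.
def Ry(theta):
    return np.cos(theta / 2) * I2 + 1j * np.sin(theta / 2) * Y

def Rz(phi):
    return np.cos(phi) * I2 + 1j * np.sin(phi) * Z

theta, phi = np.pi / 2, np.pi / 16  # theta = pi/2 and 4*phi = pi/4
A = Ry(theta) @ Rz(phi)
B = Rz(-2 * phi)
C = Rz(phi) @ Ry(-theta)

sqrtX = np.array([[1 + 1j, 1 - 1j], [1 - 1j, 1 + 1j]]) / 2

print(np.allclose(A @ B @ C, I2))  # True: doing nothing when the control is 0
# sqrt(X) up to the overall phase of pi/4 discussed above:
print(np.allclose(A @ X @ B @ X @ C, np.exp(-1j * np.pi / 4) * sqrtX))  # True
```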

+",1837,,,,,6/26/2018 6:59,,,,0,,,,CC BY-SA 4.0 +2469,1,,,6/26/2018 7:34,,8,434,"

Suppose there is a known scheme (quantum circuit) for Controlled-G, where the unitary gate G has an adjoint G$^†$ such that G≠G$^†$ and GG$^†$=I (for example S and S$^†$, T and T$^†$, V and V$^†$, but not the Pauli and H gates).

+ +

My question for the experts is:

+ +

Is it correct that a new scheme for the Controlled-G$^†$ gate may be constructed from this known scheme of the Controlled-G gate by reversing the order of the used gates (U) and changing each U in the scheme to the corresponding U$^†$ +(if U≠U$^†$, of course)?

+ +

For example, see my OPENQASM program below (note that the suffix 'dg' in a gate name is used in OPENQASM instead of '$^†$'), which uses the well-known scheme of the Controlled-S gate and my scheme of the Controlled-S$^†$ gate, constructed from that well-known scheme by the above method.
+So far I have only received successful results from applying this method and have not found any obvious contradictions with the known theory [*], but perhaps I have failed to take something into account.

+ +

My program in OPENQASM for example:

+ + + +
//Name of Experiment: Amy Matthew controlled-s and my controlled-sdg gates v7
+OPENQASM 2.0; 
+include ""qelib1.inc"";
+qreg q[3];
+creg c[3];
+gate cs a,b {
+// a is control, b is target
// see https://uwspace.uwaterloo.ca/bitstream/handle/10012/7818/AmyMatthew.pdf
+// fig.4.6b
+cx b,a;
+tdg a;
+cx b,a;
+t a;
+t b;
+}
+gate csdg a,b {
+// a is control, b is target
+// my controlled-sdg (I hope that is reverse of controlled-s)
+tdg b;
+tdg a;
+cx b,a;
+t a;
+cx b,a;
+}
+
+h q[0];
cx q[0],q[1];
+x q[2];
+h q[2];
+barrier q;
+cs q[0],q[2];
+cs q[1],q[2];
+barrier q;
+csdg q[0],q[2];
csdg q[1],q[2];
+
+barrier q;
+h q[2];
+measure q -> c; 
+
+ + + +

+ +

[*]: Elementary gates for quantum computation Barenco et al. (1995)

+",2764,,26,,12/23/2018 13:38,12/23/2018 13:38,Construct Controlled-$G^{\dagger}$ from known Controlled-$G$,,2,0,,,,CC BY-SA 4.0 +2470,2,,2469,6/26/2018 7:46,,3,,"
+

How is correct that new scheme of Controlled-$G^\dagger$ gate may be constructed from this known scheme of Controlled-G gate by reversing the order of used gates ($U$) and each $U$ in this scheme changes to the corresponding $U^\dagger$ (if $U≠U^\dagger$ of course)?

+
+ +

It's 100% correct:

+ +

Inverting a composed quantum gate is done with the algorithm you gave. You can check for example the implementation of the inverse() method in qiskit.CompositeGate:

+ +
def inverse(self):
+    """"""Invert this gate.""""""
+    # self.data is a list of the quantum gates composing the CompositeGate.
+    self.data = [gate.inverse() for gate in reversed(self.data)]
+    self.inverse_flag = not self.inverse_flag
+    return self
+
+ +

Inverting a controlled gate boils down to applying the controlled version of the inverse of that gate.

+",1386,,,,,6/26/2018 7:46,,,,0,,,,CC BY-SA 4.0 +2471,2,,2469,6/26/2018 8:27,,3,,"

If I understand the question correctly, you're assuming that you have some gate $V$ that you've decomposed as $\prod_{i=1}^NU_i$ and you want to show that $V^\dagger$ is $\prod_{i=1}^NU_{N+1-i}^\dagger$ where the product is taken in the opposite order?

+ +

In that case, you just need to show that $VV^\dagger=\mathbb{I}$ given that $U_iU_i^\dagger=\mathbb{I}$. You can do this directly using the stated decompositions above: +$$ +VV^\dagger=(U_1U_2\ldots U_N)(U_N^\dagger\ldots U_2^\dagger U_1^\dagger)=U_1U_2\ldots U_{N-1}(U_NU_N^\dagger)U_{N-1}^\dagger\ldots U_2^\dagger U_1^\dagger +$$ +and so on.

+ +

This solution is completely general, and does not need to make any assumptions about the form of the original unitary $V$.
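As a quick sanity check (my own numerical aside, not from the original answer), the telescoping argument can be verified with NumPy using random unitaries built from QR decompositions:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_unitary(d):
    # QR decomposition of a random complex matrix gives a unitary Q
    q, r = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))
    return q

us = [random_unitary(4) for _ in range(5)]

# V = U_1 U_2 ... U_N  (matrix product, left to right)
V = np.linalg.multi_dot(us)

# Claimed inverse: U_N^dagger ... U_1^dagger (reversed order, each gate daggered)
V_dag = np.linalg.multi_dot([u.conj().T for u in reversed(us)])

assert np.allclose(V @ V_dag, np.eye(4))   # V V^dagger = I
assert np.allclose(V_dag, V.conj().T)      # and it equals V^dagger
```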

+",1837,,1837,,6/26/2018 9:00,6/26/2018 9:00,,,,4,,,,CC BY-SA 4.0 +2472,2,,2466,6/26/2018 9:08,,6,,"

I think that you want answers to explicitly use full cnots, rather than partial versions. But since you already have an answer for that, I'll contribute a different perspective.

+ +

A $\mathrm{SWAP}$ can be thought of as a cnot that has been conjugated by oppositely oriented cnots.

+ +

$$\mathrm{SWAP} = {\rm cx}(k,j) \,\, {\rm cx}(j,k) \,\, {\rm cx}(k,j) $$

+ +

To make a $\sqrt{\mathrm{SWAP}}$, we can instead use a square root of cnot conjugated by oppositely oriented cnots.

+ +

$$ \sqrt{\mathrm{SWAP}} = {\rm cx}(k,j) \,\, \sqrt{{\rm cx}(j,k)} \,\, {\rm cx}(k,j) $$

+ +

To verify that this is indeed a $\sqrt{\mathrm{SWAP}}$, you can simply square it and verify that it ends up in the way we expect (using the fact that cnots square to identity).
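To make that check concrete (my addition, with qubit $j$ taken as the most significant bit in the explicit $4\times 4$ matrices):

```python
import numpy as np

I2 = np.eye(2)
# CNOT with control j (first qubit), target k, and the oppositely oriented one
cx_jk = np.array([[1,0,0,0],[0,1,0,0],[0,0,0,1],[0,0,1,0]], dtype=complex)
cx_kj = np.array([[1,0,0,0],[0,0,0,1],[0,0,1,0],[0,1,0,0]], dtype=complex)
SWAP  = np.array([[1,0,0,0],[0,0,1,0],[0,1,0,0],[0,0,0,1]], dtype=complex)

# square root of X, giving a square root of CNOT on the target block
sqrt_x = 0.5 * np.array([[1+1j, 1-1j],[1-1j, 1+1j]])
sqrt_cx_jk = np.block([[I2, np.zeros((2,2))],[np.zeros((2,2)), sqrt_x]])

assert np.allclose(cx_kj @ cx_jk @ cx_kj, SWAP)   # the SWAP identity above

sqrt_swap = cx_kj @ sqrt_cx_jk @ cx_kj
assert np.allclose(sqrt_swap @ sqrt_swap, SWAP)   # it squares to SWAP
```

The second assertion follows for any square root of cnot, since the outer cnots square to the identity and cancel.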

+",409,,26,,4/28/2019 9:06,4/28/2019 9:06,,,,1,,,,CC BY-SA 4.0 +2473,1,2474,,6/26/2018 9:16,,3,777,"

Reproduced from Exercise 2.3 of Nielsen & Chuang's Quantum Computation and Quantum Information (10th Anniversary Edition):

+ +
+

Suppose $A$ is a linear operator from vector space $V$ to vector space $W$, and $B$ is a linear operator from vector space $W$ to vector space $X$. Let $|v_i⟩$, $|w_j⟩$, and $|x_k⟩$ be bases for the vector spaces $V$, $W$, and $X$, respectively. Show that the matrix representation for the linear transformation $BA$ is the matrix product of the matrix representations for $B$ and $A$, with respect to the appropriate bases.

+
+ +

Note: This question is part of a series attempting to provide worked solutions to the exercises provided in the above book.

+",391,,26,,3/30/2019 8:00,3/30/2019 8:00,Nielsen & Chuang Exercise 2.3 - “Matrix representation for operator products”,,1,0,,01-07-2019 15:12,,CC BY-SA 4.0 +2474,2,,2473,6/26/2018 10:28,,4,,"

Consider the linear maps $A: V\to W$ and $B: W\to X$. The composition $BA$ is a linear map from $V$ to $X$. Now, how can $\mathcal{M}(BA)$ be computed from $\mathcal{M}(B)$ and $\mathcal{M}(A)$? $\mathcal{M}(A)$ is the $n\times p$ matrix representation of the linear map $A$ w.r.t. the bases $\{v_1,...,v_p\}$ and $\{w_1,...,w_n\}$. $\mathcal{M}(B)$ is the $m\times n$ matrix representation of the linear map $B$ w.r.t. the bases $\{w_1,...,w_n\}$ and $\{x_1,...,x_m\}$.

+ +

Say, $\mathcal{M}(A) = \begin{bmatrix}a_{11} & ... & a_{1p}\\... & ... & ...\\a_{n1} & ... &a_{np}\end{bmatrix}$ & $\mathcal{M}(B) = \begin{bmatrix}b_{11} & ... & b_{1n}\\... & ... & ...\\b_{m1} & ... &b_{mn}\end{bmatrix}$.

+ +

Now, let's see the action of the linear map $BA$ on a certain basis vector of $V$, say $v_k$ (as it is sufficient to just know the action of any linear transformation on the basis vectors, in order to determine the transformation):

+ +

$$\therefore BA(v_k)=B(\sum_{r=1}^{n} a_{r,k}w_r)$$ +$$= \sum_{r=1}^{n}a_{r,k}Bw_r$$ +$$= \sum_{r=1}^{n}a_{r,k}\sum_{j=1}^{m}b_{j,r}x_j$$ +$$= \sum_{j=1}^{m}(\sum_{r=1}^{n}b_{j,r}a_{r,k})x_j$$

+ +

Thus, $\mathcal{M}(BA)$ is the $m\times p$ matrix whose entry in row $j$, column $k$ equals $\sum_{r=1}^{n}b_{j,r}a_{r,k}$. This is exactly equal to the $j,k$-th element of the matrix we get after multiplying $\mathcal{M}(B)$ and $\mathcal{M}(A)$. Hence,

+ +
+

$$\mathcal{M}(BA)=\mathcal{M}(B)\mathcal{M}(A)$$

+
+ +

Note: I've omitted the ket notation given in the original question, for ease of typing. By default, $|v_i\rangle = v_i,|w_j\rangle = w_j$ and $|x_k\rangle = x_k$ for indices $i,j,k$.
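A small numerical illustration of the derived entry-wise formula (my addition, using arbitrary example dimensions $m=3$, $n=4$, $p=2$):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, p = 3, 4, 2
B = rng.normal(size=(m, n))   # matrix of B w.r.t. {w_r}, {x_j}
A = rng.normal(size=(n, p))   # matrix of A w.r.t. {v_k}, {w_r}

# entry (j, k) of M(BA), computed from the sum derived above
BA = np.array([[sum(B[j, r] * A[r, k] for r in range(n))
                for k in range(p)] for j in range(m)])

assert np.allclose(BA, B @ A)   # matches the matrix product M(B) M(A)
```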

+",26,,26,,6/26/2018 11:18,6/26/2018 11:18,,,,0,,,,CC BY-SA 4.0 +2475,1,,,6/26/2018 16:10,,5,1210,"

Suppose we have any matrix in $\mathrm U(2)$ and we need to construct a Controlled-$U$ operation with the $H,T,S$ gates. Then I am aware of the construction given in N&C of decomposing $U$ into $e^{ia}AXBXC$. My question is how to implement gates $A,B,C$ using the gates from library set $\left\lbrace H,T,S,X,Y,Z\right\rbrace$.

+ +

Also, any sequence of the gates induces some phase to the matrix (while $A,B,C \in \mathrm{SU}(2)$), so how is that phase removed from the circuit?

+",2770,,23,,6/26/2018 16:59,08-05-2020 13:45,Controlled-U operation in IBMQ,,2,0,,,,CC BY-SA 4.0 +2476,2,,2456,6/26/2018 16:43,,3,,"

For a pure state of 8 qubits, the Hilbert space is $2^8$ dimensional. Dropping the normalization and phase information means you are left with the space $\mathbb{CP}^{2^8-1}$.

+ +

Unlike a single qubit, which gives the Bloch sphere $\mathbb{CP}^{2^1-1}$, this is too big to draw directly. Instead one usually draws simpler spaces that capture the essential features. This is thanks to the fact that these spaces are toric varieties. The example of how to draw $\mathbb{CP}^2$ as a triangle with tori above each point is given there. What this amounts to is remembering only the amplitudes first and then realizing that you forgot the phases and fixing that later.

+ +

So instead of drawing a line segment (a 1-cube) as you would for a single qubit, you would have to draw a $2^8-1$ simplex. Of course you can't do that, but you can project onto planes that you can draw several 2 dimensional pictures for. So this draws only certain linear combinations of probabilities for the $2^8$ basis states. Do several of these.

+ +

A lot of information is lost, because you couldn't draw the full $2^8-1$ complex dimensional ($2^9-2$ real dimensional) thing, but by throwing away information about phases and only taking certain linear combinations of probabilities, you get something you can visualize. Also you know what sort of structure you forgot along the way. Like when going from the point on the simplex back to the full complex projective space, you lost all the phases. You can draw those as points on the circle as usual, so you can recover the full information from those two diagrams.

+ +

If you want to say that some of the qubits are separable from others, then you get $\mathbb{CP}^{2^n-1} \times \mathbb{CP}^{2^m-1}$ where $n+m=8$. This is also toric and so also has a polytope that replaces the $2^8-1$ simplex. If you want it fully separable this would be $\mathbb{CP}^{2^1-1} \times \cdots \mathbb{CP}^{2^1-1}$ 8 times. The polytope that replaces the $2^8-1$ simplex there is a product of 8 1-cubes as already mentioned. Again above this polytope are some phases you lost along the way, but those are easy to draw in an accompanying diagram.

+",434,,,,,6/26/2018 16:43,,,,0,,,,CC BY-SA 4.0 +2477,1,2478,,6/26/2018 16:52,,9,2623,"

How to implement the phase shift gate in qiskit or ibmq? +Phase Shift Gate : $$\begin{pmatrix}e^{ia} && 0 \\ 0 && e^{ia}\end{pmatrix} = e^{ia}I$$

+",2771,,26,,03-12-2019 09:29,07-02-2019 12:22,Phase-Shift Gate in Qiskit,,1,2,,,,CC BY-SA 4.0 +2478,2,,2477,6/26/2018 17:05,,17,,"

You can implement the phase shift gate +$$P_h(\theta) = \begin{pmatrix}e^{i\theta} & 0\\0 & e^{i\theta}\end{pmatrix}$$ +with the X and u1 gate from the IBM Q chips: +$$ \begin{align} +P_h(\theta) &= U_1(\theta)\ X\ U_1(\theta)\ X \\ +&= \begin{pmatrix}1 & 0\\0 & e^{i\theta}\end{pmatrix} \begin{pmatrix}0 & 1\\1 & 0\end{pmatrix} \begin{pmatrix}1 & 0\\0 & e^{i\theta}\end{pmatrix} \begin{pmatrix}0 & 1\\1 & 0\end{pmatrix} \\ +&= \begin{pmatrix}0 & 1\\e^{i\theta} & 0\end{pmatrix}\begin{pmatrix}0 & 1\\e^{i\theta} & 0\end{pmatrix} \\ +&= \begin{pmatrix}e^{i\theta} & 0\\0 & e^{i\theta}\end{pmatrix} +\end{align}$$

+ +

So:

+ +
def Ph(quantum_circuit, theta, qubit):
+    quantum_circuit.u1(theta, qubit)
+    quantum_circuit.x(qubit)
+    quantum_circuit.u1(theta, qubit)
+    quantum_circuit.x(qubit)
+
+ +

implements the $P_h$ gate on Qiskit.
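As a quick matrix-level check of the identity derived above (my addition), with NumPy:

```python
import numpy as np

theta = 0.7
U1 = np.array([[1, 0], [0, np.exp(1j * theta)]])
X  = np.array([[0, 1], [1, 0]])

# U1(theta) X U1(theta) X, as in the derivation
Ph = U1 @ X @ U1 @ X
assert np.allclose(Ph, np.exp(1j * theta) * np.eye(2))
```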

+",1386,,1386,,07-02-2019 12:22,07-02-2019 12:22,,,,4,,,,CC BY-SA 4.0 +2479,2,,2446,6/26/2018 19:45,,4,,"

There is an analogue to shadowgraphy which shows up in quantum information, which is the phase-space representation of quantum states via the Wigner function.

+ +

The Wigner function W(q,p) is a phase-space representation of quantum states of a single particle, which gives the quantum mechanical probability distributions for position measurements, momentum measurements, and in general for any quadrature observable (i.e. linear combinations of position and momentum). For a general introduction, check this link:

+ +

https://en.wikipedia.org/wiki/Wigner_quasiprobability_distribution

+ +

These marginal distributions can be interpreted as a shadow of the quantum state along a direction specified by which quadrature is measured. Interestingly, there are quantum states for which the Wigner function may be negative in some phase-space regions! This is an indication of non-classicality, as the multiple ""shadows"" fail to make sense together: each shadow is a sensible probability distribution, but the ""object"" that casts the shadows is a phase-space distribution without direct interpretation in terms of probabilities (as it can be negative in some regions).

+ +

At least for some definitions of discrete Wigner functions (useful for describing states in finite-dimensional Hilbert spaces), this negativity has been linked to contextuality and advantage in quantum computation, as pointed out for example in these papers:

+ +

https://arxiv.org/abs/1201.1256

+ +

https://arxiv.org/abs/0710.5549

+",2558,,,,,6/26/2018 19:45,,,,0,,,,CC BY-SA 4.0 +2480,2,,2475,6/26/2018 21:24,,2,,"

In general, it is not possible to implement every single-qubit operation exactly. However, it is possible to construct a quantum circuit whose action is approximately equal to this single-qubit operation. This can be done in many different ways, and I'm not sure if it is known which one is most efficient. I do not know of any efficient way to construct such a circuit; however, there is a well-known, if rather inefficient, method. I'll leave it to the OP to decide if it is efficient enough for his/her purposes.

+ +

Nielsen and Chuang explain one construction using just the $H$ and $T$ gates. This is done in chapter 4, especially in section 4.5.3. This section builds on earlier results of chapter 4, one of them being exercise 4.11. It is important to note that this exercise is wrong in some editions (have a look at the errata if this is true for your edition). Another source that provides this construction is this, section 4.3, where the $\pi/8$-gate is just a different notation for the $T$-gate.

+ +

The construction is difficult to present concisely in this answer. The main idea is to consider the following two gate sequences $HTHT$ and $THTH$. These two gate sequences can be visualized as rotations in the Bloch sphere, where the angle of rotation is an irrational multiple of $\pi$ for both of these gate sequences. Consecutive applications of these two rotations are enough to implement any rotation, and hence the gate sequences $HTHT$ and $THTH$ can be used to construct any single-qubit gate.
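To make the rotation-angle claim concrete (my own numerical aside; the stated relation $\cos(\theta/2)=\cos^2(\pi/8)$ follows N&C section 4.5.3), one can read the angle off the trace, since for a single-qubit unitary $e^{i\phi}R_{\hat n}(\theta)$ we have $|\mathrm{tr}| = 2|\cos(\theta/2)|$:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
T = np.array([[1, 0], [0, np.exp(1j * np.pi / 4)]])

M = T @ H @ T @ H                       # the THTH sequence
cos_half_theta = abs(np.trace(M)) / 2
theta = 2 * np.arccos(cos_half_theta)   # the rotation angle on the Bloch sphere

assert np.isclose(cos_half_theta, np.cos(np.pi / 8) ** 2)
print(theta / np.pi)   # not a simple rational number
```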

+",24,,,,,6/26/2018 21:24,,,,0,,,,CC BY-SA 4.0 +2481,2,,2242,6/27/2018 2:33,,2,,"

In Qiskit you can compose two circuits to make a bigger circuit. You can do this simply by using the + operator on the circuits.

+ +

Here is your program rewritten to illustrate this +(note: you need the latest version of Qiskit for this, upgrade with pip install -U qiskit).

+ +
from qiskit import *
+qr = QuantumRegister(2)
+cr = ClassicalRegister(2)
+qc1 = QuantumCircuit(qr, cr)
+qc1.x(qr)
+
+qc2 = QuantumCircuit(qr, cr)
+qc2.x(qr)
+
+qc3 = qc1 + qc2
+
+ +

You can see that qc3 is a concatenation of qc1 and qc2.

+ +
print(qc3.qasm())
+
+ +

Yields:

+ +
OPENQASM 2.0;
+include ""qelib1.inc"";
+qreg q0[2];
+creg c0[2];
+x q0[0];
+x q0[1];
+x q0[0];
+x q0[1];
+
+ +

Now, you seem to want to probe the state twice: once where qc1 ends, and once when qc2 ends. You can do this in a simulator by inserting snapshot commands. This will save the statevector at a given point in the circuit. It does not collapse the state.

+ +
from qiskit.extensions.simulator import *
+qc1.snapshot('0')    # save the snapshot in slot ""0""
+qc2.snapshot('1')    # save the snapshot in slot ""1""
+qc2.measure(qr, cr)  # measure to get final counts
+
+qc3 = qc1 + qc2
+
+ +

You can now execute qc3 on a simulator.

+ +
job = execute(qc3, 'local_qasm_simulator')
+result = job.result()
+print(result.get_snapshot('0'))
+print(result.get_snapshot('1'))
+print(result.get_counts())
+
+ +

Yields:

 [0.+0.j 0.+0.j 0.+0.j 1.+0.j]
+ [1.+0.j 0.+0.j 0.+0.j 0.+0.j]
+ {'00': 1024}

+ +

So the state goes back to |00> as expected.

+",2503,,,,,6/27/2018 2:33,,,,1,,,,CC BY-SA 4.0 +2483,1,2550,,6/27/2018 3:13,,7,1845,"

In Realization of a photonic CNOT gate sufficient for quantum computation FIG. 1 there is a "scheme to obtain a photonic realization of a CNOT gate with two independent qubits."

+

What components are needed to realize a photonic CNOT gate?

+",2645,,55,,10/24/2020 14:44,10/24/2020 14:44,What components are needed to realize a photonic CNOT gate?,,2,0,,,,CC BY-SA 4.0 +2484,2,,2483,6/27/2018 6:23,,3,,"

The original question was asking with respect to a particular Quantum Game. This quantum game only contains the elements of linear optics, not all optics, so it was basically asking how CNot can be realised with linear optical elements.

+ +

This cannot be done with 100% efficiency. However, there are schemes that allow it to work in a heralded way, i.e. it works sometimes, and you know if it's worked. One of the best known schemes is the KLM scheme, which has since been described in more detail in the other answer. However, the efficiency can be very low. The other problem is that when the scheme fails, what happens to the information you tried to compute on? Depending on the scheme, it could be destroyed. A good way around this is with measurement-based quantum computation, where you can ""grow"" a resource state first, and only once you've been successful do you actually compute with it.

+ +

One recent experiment is described here, and their main optical circuit is reproduced below:

+ +

+ +

However, the linear optical elements are supplemented by the presence of an entangled pair of photons to enhance the success rate. I don't think the game currently has the capability of producing those: the typical way to get them is to introduce a non-linear crystal that splits a single photon into an entangled pair.

+ +

Another option is that instead of using two photons with qubits encoded in the respective polarisations (as in the previous case), you can encode two qubits using a single photon by combining path information (two possible paths) and polarisation. Then a polarising beam splitter implements what you're after: there are two inputs, 0 and 1. If the input is one polarisation (say H), the input is transmitted, meaning that the output is the same as the input (so $\mathbb{I}$ is enacted on the path information). For the orthogonal polarisation (V), the input is reflected, so $X$ is enacted on the path information.

+",1837,,1837,,07-02-2018 06:39,07-02-2018 06:39,,,,4,,,,CC BY-SA 4.0 +2485,2,,2475,6/27/2018 8:41,,2,,"

You asked for a decomposition using the $\left\lbrace H,T,S,X,Y,Z\right\rbrace$. I assume that's because you think that's all that IBMQ offers. However, it is also possible to do rotations around the $x$, $y$ and $z$ axes, which make things a lot easier.

+

Any single qubit unitary matrix may be decomposed as a sequence of rotations around two, non-parallel axes. For example

+

$$U = e^{i\alpha} +\,\, +R_z(\beta) +\,\, +R_y(\gamma) +\,\, +R_z(\delta) +$$

+

Here the global phase $\alpha$ is undetectable and usually can be ignored, but not in cases like adding control to a unitary gate. The $\beta$ and $\delta$ are angles of rotation around the $z$ axis, and $\gamma$ is the angle for the $y$ axis.
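For completeness (my own sketch, not from the original answer), the angles can be extracted numerically from a given unitary $U$; here I assume the conventions $R_z(\theta) = \mathrm{diag}(e^{-i\theta/2}, e^{i\theta/2})$ and $R_y(\theta)$ a real rotation by $\theta/2$:

```python
import numpy as np

def zyz_angles(U):
    # Return (alpha, beta, gamma, delta) with U = e^{i alpha} Rz(beta) Ry(gamma) Rz(delta)
    alpha = np.angle(np.linalg.det(U)) / 2
    V = np.exp(-1j * alpha) * U                 # special-unitary part of U
    gamma = 2 * np.arctan2(abs(V[1, 0]), abs(V[0, 0]))
    beta  = np.angle(V[1, 0]) - np.angle(V[0, 0])
    delta = -np.angle(V[1, 0]) - np.angle(V[0, 0])
    return alpha, beta, gamma, delta

def Rz(t): return np.diag([np.exp(-1j*t/2), np.exp(1j*t/2)])
def Ry(t): return np.array([[np.cos(t/2), -np.sin(t/2)], [np.sin(t/2), np.cos(t/2)]])

# check on a random unitary (QR of a random complex matrix)
rng = np.random.default_rng(2)
U, _ = np.linalg.qr(rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2)))
a, b, g, d = zyz_angles(U)
assert np.allclose(np.exp(1j*a) * Rz(b) @ Ry(g) @ Rz(d), U)
```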

+

The $z$ axis rotation is called RZ and is implemented by U1 (they differ by a global phase only) on the IBM Q Experience and in QISKit. It takes a single parameter as an argument, which is the angle of rotation expressed in radians.

+

The $y$ axis rotation can be done using U3. This takes three arguments, the first of which is the angle in radians for the $y$ rotation; the other two should be set to zero.

+

So if you want to do a rotation with $\beta=0.1$, $\gamma=0.2$ and $\delta=0.3$, for example, this could be done using the QASM editor of the IBM Q Experience with

+
u1(0.3) q[0];
+u3(0.2,0,0) q[0];
+u1(0.1) q[0];
+
+

It can also be done using the composer. You just need to tick the 'advanced' checkbox to see these gates.

+

If the global phase is important for your purposes, then you can use RZ gates instead of U1 gates (on the assumption that when a Controlled-U is constructed, the correct version of the cRZ gate will be used in IBM Q):

+
rz(0.3) q[0];
+u3(0.2,0,0) q[0];
+rz(0.1) q[0];
+
+

Let's pretend that the global phase is 0.4; then we can append to the circuit, e.g. like this:

+
x q[0];
+u3(pi,0.4,pi+0.4) q[0];  
+
+",409,,12416,,08-05-2020 13:45,08-05-2020 13:45,,,,1,,,,CC BY-SA 4.0 +2486,1,2497,,6/27/2018 12:32,,12,1907,"

Maybe it is a naive question, but I cannot figure out how to actually exponentiate a matrix in a quantum circuit. +Assuming we have a generic square matrix $A$, if I want to obtain its exponential, $e^{A}$, I can use the series

+ +

$$e^{A} \simeq I+ A+\frac{A^2}{2!}+\frac{A^3}{3!}+...$$

+ +

to obtain its approximation. I do not see how to do the same using quantum gates and then apply it, for instance, to perform a Hamiltonian simulation. Some help?

+",2648,,26,,12/23/2018 13:35,12/23/2018 13:35,How to implement a matrix exponential in a quantum circuit?,,1,2,,,,CC BY-SA 4.0 +2487,1,2488,,6/27/2018 12:40,,2,518,"

Reproduced from Exercise 2.4 of Nielsen & Chuang's Quantum Computation and Quantum Information (10th Anniversary Edition):

+ +
+

Show that the identity operator on a vector space $V$ has a matrix representation which is one along the diagonal and zero everywhere else, if the matrix representation is taken with respect to the same input and output bases. This matrix is known as the identity matrix.

+
+ +

Note: This question is part of a series attempting to provide worked solutions to the exercises provided in the above book.

+",391,,26,,3/30/2019 7:58,3/30/2019 7:58,Nielsen & Chuang Exercise 2.4 - “Matrix representation for identity”,,1,0,,01-07-2019 15:10,,CC BY-SA 4.0 +2488,2,,2487,6/27/2018 15:04,,2,,"

On a vector space $V = \textrm{span}(\{|v_1⟩, \ldots, |v_n⟩\})$, the identity operator $\mathcal{I}$ maps each vector to itself, such that +$$ +\mathcal{I} = \sum_i |v_i⟩⟨v_i|. +$$

+ +

If $\{|v_1⟩, \ldots, |v_n⟩\}$ is chosen as the basis for the matrix representation, then we set

+ +

$$ +|v_1⟩ = \begin{pmatrix} 1 \\ 0 \\ \vdots \\ 0 \end{pmatrix}, \quad |v_2⟩ = \begin{pmatrix} 0 \\ 1 \\ \vdots \\ 0 \end{pmatrix}, \quad \cdots \quad |v_n⟩ = \begin{pmatrix} 0 \\ 0 \\ \vdots \\ 1 \end{pmatrix}, +$$
+and $⟨v_1| = (|v_1⟩)^\dagger$ such that +$$ +|v_1⟩⟨v_1| = \begin{pmatrix} 1 \\ 0 \\ \vdots \\ 0 \end{pmatrix} \otimes \begin{pmatrix} 1 & 0 & \cdots & 0 \end{pmatrix} = \textrm{diag}(1,0,\ldots,0) , \quad\textrm{etc}. +$$

+ +

Summing over all terms therefore produces the desired matrix +$$ +\mathcal{I} = +\begin{pmatrix} +1 & 0 & \cdots & 0 \\ +0 & 1 & \cdots & 0 \\ +\vdots & \vdots & \ddots & \vdots \\ +0 & 0 & \cdots & 1 +\end{pmatrix} +$$

+",391,,,,,6/27/2018 15:04,,,,0,,,,CC BY-SA 4.0 +2489,1,2495,,6/27/2018 20:17,,7,459,"

Using a simple puzzle game to benchmark quantum computers is the most clever approach I have seen so far.

+ +

The author of the aforementioned article, James, makes a nice analogy to buying a laptop (""more than just a single number when comparing"") in an answer to How should different quantum computing devices be compared?.

+ +

Another answer by whurley to the same question mentions the IEEE Standard for Quantum Computing Performance Metrics & Performance Benchmarking.

+ +

In When will we know that quantum supremacy has been reached? there is an answer by Niel which includes this snippet:

+ +
+

But the bottom line is that, yes, ""quantum supremacy"" is precisely about ""not being able to simulate quantum computers of certain sizes"", or at least not being able to simulate certain specific processes that you can have them perform, and this benchmark depends not only on quantum technology but on the best available classical technology and the best available classical techniques. It is a blurry boundary which, if we are being serious about things, we will only be confident that we have passed a year or two after the fact. But it is an important boundary to cross.

+
+ +

My question is bipartite:

+ +
    +
  • What will be the primary metrics for quantum computers? (Classical examples include processor speed, RAM & hard drive space)

  • +
  • Best tools / strategies for benchmarking above metrics? (Existing or proposed)

  • +
+",2645,,26,,6/27/2018 20:47,6/28/2018 7:36,How to benchmark a quantum computer?,,1,0,,,,CC BY-SA 4.0 +2490,1,2494,,6/27/2018 22:13,,4,145,"

On page 2 of the paper Quantum Circuit Design for Solving Linear Systems of Equations (Cao et al.,2012) there's this circuit:

+ +

+ +

It further says:

+ +
+

After the inverse Fourier transform is executed on register $C$, we + use its $|\lambda_j\rangle$ states stored in register $C$ as a + control for a Hamiltonian simulation $\exp(iH_0t_0)$ that is applied + on register $M$.

+
+ +

But then again:

+ +
+

We further establish the control relation between Register $L$ and + $\exp[-ip\left(\dfrac{\lambda_j}{2^m}\dfrac{1}{2^{l-k_l}}\right)t_0]$ + simulation that acts on register $M$. The values of the binary numbers + stored in register $L$ are then able to determine the time parameter + $t$ in overall Hamiltonian simulation $\exp(-iH_0t)$.

+
+ +

From here, I'm a bit confused which register of qubits is being used to control the Hamiltonian simulation $\exp(-iH_0t)$. Are both registers: $L$ and $C$ being used as ""control""? How does control by multiple registers work? (I only know how control by a single register of qubits works, as of now)

+",26,,26,,12/23/2018 13:34,12/23/2018 13:34,"How are two different registers being used as ""control""?",,1,0,,,,CC BY-SA 4.0 +2491,1,29371,,6/27/2018 23:10,,16,4205,"

I'm at the AQC conference at NASA and everybody seems to suddenly be talking about the Bacon-Shor code but there is no Wikipedia page and the pdf that I gave a link to does not really explain what it is and how it works.

+ +

How does it compare to the Shor code ?

+",2293,,,,,12/14/2022 23:50,What is a Bacon-Shor code and what is its significance?,,5,4,,,,CC BY-SA 4.0 +2492,2,,2491,6/27/2018 23:50,,0,,"

Shor Code

+ +
+

Can detect and correct arbitrary single qubit errors, but if there are 2 or more single qubit errors before a correction round, the correction will fail. -Intuition for Shor code failure probability

+
+ +

Bacon-Shor Code

+ +
+

Bacon-Shor codes, quantum subsystem codes which are well suited for applications to fault-tolerant quantum memory because the error syndrome can be extracted by performing two-qubit measurements. Optimal Bacon-Shor codes

+
+ +
+ +
+

Contrarily to Shor's code, these stabilizers cannot identify the precise qubit on which a bit-flip occurs, they can only identify the column in which it occurs. - Quantum Error Correction

+
+ +
+ +
+

For the $[[n^2, 1, n]]$ Bacon-Shor code the qubits are laid out in + a 2D n × n square array. It is also possible to work with asymmetric Bacon-Shor codes with qubits in a n × m array. - Quantum Error Correction for Quantum Memories pg. 34

+
+ +
+ +
+

We have shown that for every generalized Shor code + there is an subsystem code with the same parameters but + which requires significantly fewer stabilizer measurements + in order to perform quantum error correction. - Quantum Error Correcting Subsystem Codes From Two Classical Linear Codes

+
+ +
+ +

Also, here is a video from Microsoft Universal Fault-Tolerant Computing with Bacon-Shor Codes.

+",2645,,2645,,6/28/2018 1:28,6/28/2018 1:28,,,,4,,,,CC BY-SA 4.0 +2493,2,,2491,6/28/2018 5:03,,2,,"

Disclaimer: This answer is based on what I deduced from a brief Googling session. I might make further additions/improvements as and when I will understand the details more thoroughly. Feel free to make suggestions in the comments.

+ +

The $9$-qubit Shor code $[[9,1,3]]$ (qubits laid in a $3\times 3$ lattice) is the smallest member in the family of $m^2$-qubit Bacon-Shor code(s) $[[m^2,1,m]]$ (qubits laid in a $m\times m$ lattice). Shor's code, as you know, can correct both Pauli sign flip and Pauli bit-flip errors in a single qubit. Moreover, to correct any single-qubit error (with a high probability), it is sufficient to be able to correct against any single qubit Pauli error.[1]

+ +

+ +

Now, Bacon-Shor code(s) is(/are) a generalization of this concept to noise models where the qubits in a code block are subject to both bit-flip errors with probability $p_X$ and dephasing errors with probability $p_Z$. The noise is assumed to act independently on each qubit and the $X$ and $Z$ errors are uncorrelated.

+ +

+ +

In (Napp and Preskill, 2013) the authors find that the optimally-sized Bacon-Shor code for equal $X$ and $Z$ error rates $p$ is given by $m=\frac{\log 2}{4p}$ and for that optimal choice they can bound the logical $X$ (or $Z$) error rate as $\tilde p(p) \lesssim \exp(\frac{-0.06}{p})$[2].
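Plugging in a sample error rate (my own back-of-the-envelope evaluation of the two formulas quoted above), e.g. $p = 0.01$:

```python
import math

p = 0.01
m_opt = math.log(2) / (4 * p)   # optimal lattice size, roughly 17
bound = math.exp(-0.06 / p)     # bound on the logical error rate, ~2.5e-3

print(round(m_opt), bound)
```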

+ +

It is also possible to work with asymmetric Bacon-Shor codes with qubits in an $n \times m$ array. Asymmetric codes can have better performance when, say, $Z$ errors are more likely than $X$ errors[3].

+ +

References:

+ +
    +
  1. Quantum Error Correction for Quantum Memories (Barbara M. Terhal, 2015)
  2. +
  3. Optimal Bacon-Shor codes (Napp & Preskill, 2012)
  4. +
  5. Fault-tolerant quantum computation with asymmetric Bacon-Shor codes (Brooks & Preskill, 2013)
  6. +
+",26,,26,,6/28/2018 5:25,6/28/2018 5:25,,,,2,,,,CC BY-SA 4.0 +2494,2,,2490,6/28/2018 7:22,,2,,"

You can see from the circuit diagram that in the third-last slice, both registers $L$ and $C$ are being used as controls. There's no problem with two registers being controls, after all, that's exactly what a Toffoli (controlled-controlled-NOT) gate does. It probably helps to explicitly write down what transformation they're talking about. I'll call it $U$. (I'm mostly extrapolating this from what you've written, rather than delving extensively into the paper or references.)

+ +

Before I do that, I just want to change notation slightly, because what I see there, I find slightly misleading. If the eigenvalues of $At_0$ are $\lambda_j$, then the states of the register $C$ might better be written as $k\in\{0,1\}^t$ (assuming $t$ qubits are being used) with $\lambda_jt_0=2\pi k/2^t$. You could say that $|\lambda_j\rangle$ is an appropriate label for the ket, but it makes me think of the eigenvector, rather than the binary representation of the (rescaled) eigenvalue.

+ +

Having done that, we have +$$ +U=\sum_{k\in\{0,1\}^t}\sum_{s\in\{0,1\}^l}\sum_{p\in\{0,1\}^m}|k\rangle\langle k|_C\otimes|s\rangle\langle s|_L\otimes|p\rangle\langle p|_Me^{-i2\pi p k \frac{f(s)}{2^{t+m+l}}} +$$ +where $f(s)$ is some function of $s$ that I haven't entirely understood yet - I'm finding the paper quite opaque on that point, although figure 6 probably helps a bit. I think it's simply +$$ +f(s)=\sum_{q=1}^l\frac{s_q}{2^{-q}}=s +$$ +(where the final answer is $s$ represented as a decimal, rather than binary). To explain: we go through each bit value and controlled off its value, add a phase. So the value $s_q$ indicates whether the $q^{th}$ bit is controlling, and the contributed phase is $1/2^{-q}=2^q$. A product of phases means we look at the sum of the arguments. Where I'm using $q$, the paper uses $k_l$, I think, which is, I think, not supposed to be the $l^{th}$ bit of the $k$ index we're using on register $C$. $\sum_qs_q2^q$ is just the decimal representation of a binary string.

+",1837,,26,,6/28/2018 8:04,6/28/2018 8:04,,,,4,,,,CC BY-SA 4.0 +2495,2,,2489,6/28/2018 7:36,,1,,"

This may not exactly answer your question (which I suspect is still very much an open question, and what you're likely to get as answers are opinions), but have you looked at blind quantum computation? See here for another perspective.

+ +

One way that we can describe that premise is to imagine some company claims to have developed a fabulous universal quantum computer. But it's so expensive, difficult to control etc. that only they can be trusted to run stuff on it, and it's not directly open for use by other people. How do you know it's really running as a quantum computer, and not just that, for example, they have some new classical algorithm that can simulate quantum computations better than we were expecting?

+ +

Blind quantum computation achieves this by making the quantum computer perform a computation without knowing what that computation actually is! The company who owns the computer is left blind to what it's supposed to be doing, so their cheating strategies are severely curtailed.

+ +

However, I assume that this method would not be applicable to the quantum supremacy scenario.

+",1837,,,,,6/28/2018 7:36,,,,1,,,,CC BY-SA 4.0 +2496,1,2498,,6/28/2018 9:08,,7,813,"

I would like to represent the state of a qubit on a Bloch sphere from the measurements made with Q#.

+ +

According the documentation, it is possible to measure a qubit in the different Pauli bases (PauliX, PauliY, PauliZ). This returns Zero if the +1 eigenvalue is observed, and One if the −1 eigenvalue is observed.

+ +

I can repeat this several times to find the probabilities for each basis. Unfortunately, from there, I don't know how to calculate the density matrix or the X, Y, Z coordinates needed to plot the Bloch sphere.

+ +

Is it possible to find the density matrix or the X, Y, Z coordinates from these measurements? If yes, how?

+",2782,,2782,,4/15/2020 12:37,4/15/2020 12:37,From Q# measurements to Bloch sphere,,2,0,,,,CC BY-SA 4.0 +2497,2,,2486,6/28/2018 9:24,,8,,"

Reformulating your question:

+ +
+

How to perform Hamiltonian Simulation for a generic square matrix $A$?

+
+ +

Quick answer: it is not possible.

+ +

The goal of Hamiltonian Simulation (HS) is to find a quantum circuit (i.e. a succession of gates) that acts like $U(t) = e^{-iAt}$ on a quantum state. Here $U(t)$ needs to be unitary (because of the properties of quantum gates) and so $e^{-iAt}$ needs also to be unitary.

+ +

So the HS algorithm is only applicable to matrices $A$ such that $e^{-iAt}$ is unitary. Every hermitian matrix satisfies this property, but not every generic square matrix does. Depending on your problem, this limitation may or may not be an issue, but you can't use HS if $e^{-iAt}$ is not unitary.

+ +

For example for the HHL algorithm (that use HS of $A$ as a subroutine) with a system $Ax=b$, if $e^{-iAt}$ is not unitary you can instead consider the problem +$$Cy = \begin{pmatrix} 0 & A \\ A^\dagger & 0 \end{pmatrix} \begin{pmatrix} 0 \\ x\end{pmatrix} = \begin{pmatrix}b \\ 0\end{pmatrix},$$ +solve it with HHL (which is now possible because the new matrix $C$ is hermitian) and recover $x$.
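To illustrate this Hermitian dilation trick numerically (my own example, not from the answer):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 3
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))   # generic, non-Hermitian
b = rng.normal(size=n)

# C = [[0, A], [A^dagger, 0]] is Hermitian by construction
C = np.block([[np.zeros((n, n)), A],
              [A.conj().T, np.zeros((n, n))]])
assert np.allclose(C, C.conj().T)

# solving C y = (b, 0) gives y = (0, x) with A x = b
y = np.linalg.solve(C, np.concatenate([b, np.zeros(n)]))
x = y[n:]
assert np.allclose(A @ x, b)   # recovered x solves the original system
assert np.allclose(y[:n], 0)   # the first block of y vanishes
```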

+ +

So the interesting question is now:

+ +
+

How to perform Hamiltonian Simulation for a given hermitian matrix $A$?

+
+ +

And the answer will depend on the properties of $A$.

+ +

This is a huge research topic and there are plenty of things to say on it. I will not present every method here, as they are quite complicated and I have not understood all of them. Here is a list of papers/presentations that are related to HS and that may be an interesting starting point:

+ +
    +
  1. Simulating Hamiltonian dynamics on a small quantum computer: slides about HS. Even if it is a presentation, this is the most complete source I found on Hamiltonian Simulation. It presents quickly 3 different methods and cites interesting papers for each method.
  2. +
  3. Lecture Notes on Quantum Algorithms (Andrew M. Childs, 2017): recent and rather complete. HS is discussed in chapter 25 (page 123).
  4. +
  5. Exponential improvement in precision for simulating sparse Hamiltonians: presents in details one of the 3 methods presented in 1.
  6. +
  7. Efficient quantum algorithms for simulating sparse Hamiltonians: presents in details another of the 3 methods presented in 1.
  8. +
+",1386,,,,,6/28/2018 9:24,,,,3,,,,CC BY-SA 4.0 +2498,2,,2496,6/28/2018 11:40,,5,,"

The problem you are describing (i.e. finding an approximation of some state given some number of identical copies of it and some set of measurements) is known as quantum state tomography or state tomography for short.

+ +

In practice, the most efficient schemes for state tomography will depend on a specific experiment's setup and limitations, for which different protocols exist (see the wikipedia page for an overview). Because the efficiency scaling of state tomography is notoriously bad in general, finding optimal tomography schemes for particular scenarios is an active area of research.

+ +

That being said, for the rest of this answer I will assume that efficiency scalings are not important for what you want to do and describe the general theory behind state tomography on a single qubit.

+ +

It can be shown that any density matrix can be written out as a linear combination of Pauli matrices, such that +$$ +\rho = \frac{1}{2}(I + x\sigma_x + y\sigma_y + z\sigma_z) = \frac{1}{2} (I + \vec{r} \cdot \vec{\sigma}) +$$ +where $\vec{\sigma} = (\sigma_x, \sigma_y, \sigma_z)$ and $\vec{r} = (x, y, z)$ is the so-called Bloch vector representing the coordinates on the Bloch sphere you reference above. To find these, we observe that +$$ +x = \textrm{Tr}(\rho \sigma_x) = \langle \sigma_x \rangle_\rho, \quad y = \textrm{Tr}(\rho \sigma_y) = \langle \sigma_y \rangle_\rho, \quad z = \textrm{Tr}(\rho \sigma_z) = \langle \sigma_z \rangle_\rho. +$$ +and so each component of the Bloch vector is given by the expectation value of its associated Pauli operator on $\rho$. Recall that the expectation value $\langle A \rangle_\rho$ is given by the average over all eigenvalues returned by measurement of $A$ on $\rho$.

+ +

So, to find the state's Bloch vector, just perform each Pauli measurement many times to estimate the respective expectation value.

+ +

Finally, to retrieve the matrix representation of $\rho$, simply recall that +$$ +I = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \quad +\sigma_x = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \quad +\sigma_y = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix} \quad +\sigma_z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}, +$$

+ +

and so

+ +

$$ +\rho = \frac{1}{2} \begin{pmatrix} 1 + z & x - iy \\ x + iy & 1 - z \end{pmatrix}. +$$
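As a sanity check of the reconstruction formulas above, here is a small NumPy sketch that computes the Bloch vector of an example single-qubit state from the Pauli expectation values and rebuilds $\rho$ from it (the example state $|+\rangle$ is an arbitrary choice):

```python
import numpy as np

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# Example state: |+> = (|0> + |1>)/sqrt(2), i.e. rho = |+><+|.
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
rho = np.outer(plus, plus.conj())

# Bloch components are the Pauli expectation values Tr(rho sigma).
x = np.trace(rho @ sx).real
y = np.trace(rho @ sy).real
z = np.trace(rho @ sz).real

# Rebuild rho = (I + x sx + y sy + z sz)/2 and compare.
rho_rebuilt = (I2 + x * sx + y * sy + z * sz) / 2
assert np.allclose(rho, rho_rebuilt)
assert np.allclose([x, y, z], [1, 0, 0])  # |+> sits on the +x axis
```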

+ +

Useful links and further reading:

+ + + +

Afterword:

+ +

By extending this protocol to two qubits we can get a flavour of why state tomography is not scalable in general. For two qubits in states $\rho_1$ and $\rho_2$ the combined product state $\rho_{1,2} = \rho_1 \otimes \rho_2$ is given by +\begin{align} +\rho_{1,2} = \frac{1}{2} (I + \vec{r}_1 \cdot \vec{\sigma}_1) \otimes \frac{1}{2} (I + \vec{r}_2 \cdot \vec{\sigma}_2) +\end{align} +which is going to contain $4^2 = 16$ components, one for each $2$-fold Pauli operator. Clearly a strategy that requires $4^n$ different expectation values to be estimated to recover an $n$-qubit state is infeasible for even small systems of qubits. So this is where the aforementioned research comes in.

+",391,,391,,6/28/2018 14:24,6/28/2018 14:24,,,,1,,,,CC BY-SA 4.0 +2499,1,2552,,6/28/2018 14:11,,176,39908,"

I have a computer science degree. I work in IT, and have done so for many years. In that period ""classical"" computers have advanced by leaps and bounds. I now have a terabyte disk drive in my bedroom drawer amongst my socks, my phone has phenomenal processing power, and computers have revolutionized our lives.

+ +

But as far as I know, quantum computing hasn't done anything. Moreover it looks like it's going to stay that way. Quantum computing has been around now for the thick end of forty years, and real computing has left it in the dust. See the timeline on Wikipedia, and ask yourself where's the parallel adder? Where's the equivalent of Atlas, or the MU5? I went to Manchester University, see the history on the Manchester Computers article on Wikipedia. Quantum computers don't show similar progress. Au contraire, it looks like they haven't even got off the ground. You won't be buying one in PC World any time soon.

+ +

Will you ever be able to? Is it all hype and hot air? Is quantum computing just pie in the sky? Is it all just jam-tomorrow woo peddled by quantum quacks to a gullible public? If not, why not?

+",1905,,26,,12/14/2018 5:47,09-02-2022 23:33,Is quantum computing just pie in the sky?,,13,2,,,,CC BY-SA 4.0 +2500,2,,2499,6/28/2018 14:57,,37,,"

Classical computing has been around longer than quantum computing. The early days of classical computing are similar to what we are experiencing now with quantum computing. The Z3 (the first Turing-complete computer, an electromechanical machine) built in the 1940s was the size of a room and less powerful than your phone. This speaks to the phenomenal progress we have experienced in classical computing.

+ +

The dawn of quantum computing, on the other hand, did not come until the 1980s. Shor's factoring algorithm, the discovery that jump-started the field, was published in the 1990s. This was followed a few years later by the first experimental demonstration of a quantum algorithm.

+ +

There is evidence that quantum computers can work. There is a tremendous amount of progress on the experimental and theoretical aspects of this field every year and there is no reason to believe that it's going to stop. The Quantum threshold theorem states that large scale quantum computing is possible if the error rates for physical gates are below a certain threshold. We are approaching (some argue that we are already there) this threshold for small systems.

+ +

It's good to be skeptical about the usefulness of quantum computation. In fact, it's encouraged! It's also natural to compare the progress of quantum computation with classical computation; forgetting that quantum computers are more difficult to build than classical computers.

+",362,,362,,07-04-2018 02:43,07-04-2018 02:43,,,,1,,,,CC BY-SA 4.0 +2501,2,,2499,6/28/2018 14:59,,23,,"

Early classical computers were built with existing technology. For example, vacuum tubes were invented around four decades before they were used to make Colossus.

+ +

For quantum computers, we need to invent the technology before we make the computer. And the technology is so far beyond what had previously existed that just this step has taken a few decades.

+ +

Now we pretty much have our quantum versions of vacuum tubes. So expect a Colossus in a decade or so.

+",409,,,,,6/28/2018 14:59,,,,0,,,,CC BY-SA 4.0 +2502,2,,2496,6/28/2018 15:27,,2,,"

SLesslyTall's answer is correct and very well explained. Let me add a little explanation on the interpretation of the return value of the Q# measurement function.

+ +

When you repeatedly measure a qubit in the state $|1\rangle$, here are the results you get:

+ +
    +
  • Measurements in the Pauli $X$ basis : 50% of Zero and 50% of One
  • +
  • Measurements in the Pauli $Y$ basis : 50% of Zero and 50% of One
  • +
  • Measurements in the Pauli $Z$ basis : 0% of Zero and 100% of One
  • +
+ +

As explained in the documentation, the function returns Zero for the $+1$ eigenvalue and One for the $-1$ eigenvalue. So to find $x$, $y$, $z$ you have to perform the following calculations:

+ +
    +
  • $x = 0.5 \cdot 1 + 0.5 \cdot (-1) = 0.5 - 0.5 = 0$
  • +
  • $y = 0.5 \cdot 1 + 0.5 \cdot (-1) = 0.5 - 0.5 = 0$
  • +
  • $z = 0 \cdot 1 + 1 \cdot (-1) = 0 - 1 = -1$
  • +
+ +

Then to get the density matrix, you can use the formula given by SLesslyTall:

+ +

$\rho = \frac{1}{2}\begin{pmatrix}1+z&x-iy\\x+iy&1-z\end{pmatrix} = \frac{1}{2}\begin{pmatrix}1+(-1)&0-i0\\0+i0&1-(-1)\end{pmatrix} = \frac{1}{2}\begin{pmatrix}0&0\\0&2\end{pmatrix} = \begin{pmatrix}0&0\\0&1\end{pmatrix}$
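The same arithmetic can be written as a short Python sketch that maps the measurement statistics (the frequency of the Zero, i.e. $+1$, outcome in each basis) to the Bloch components and then to the density matrix; the frequencies below are the ideal ones for $|1\rangle$:

```python
import numpy as np

def expectation(p_zero):
    """Pauli expectation value from the frequency of the Zero (+1) outcome."""
    return p_zero * 1 + (1 - p_zero) * (-1)

# Ideal outcome frequencies for the state |1>.
x = expectation(0.5)  # X basis: 50% Zero
y = expectation(0.5)  # Y basis: 50% Zero
z = expectation(0.0)  # Z basis: 0% Zero

# rho = (1/2) [[1+z, x-iy], [x+iy, 1-z]]
rho = 0.5 * np.array([[1 + z, x - 1j * y],
                      [x + 1j * y, 1 - z]])
assert np.allclose(rho, np.array([[0, 0], [0, 1]]))  # rho = |1><1|
```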

+",2782,,,,,6/28/2018 15:27,,,,0,,,,CC BY-SA 4.0 +2503,2,,2499,6/28/2018 16:46,,9,,"

Why would you expect two different technologies to advance at the same rate?

+ +

Simply put, quantum computers can be immensely more powerful but are immensely harder to build than classical computers. The theory of their operation is more complicated and based on recent physics, there are greater theoretical pitfalls and obstacles that inhibit their scaling up in size, and their design requires much more sophisticated hardware which is harder to engineer.

+ +

Nearly every stage of development of a quantum computer is unlike that of a classical computer. So a question for you: why compare them?

+",2591,,2591,,6/28/2018 20:36,6/28/2018 20:36,,,,0,,,,CC BY-SA 4.0 +2504,2,,2499,6/28/2018 16:52,,107,,"

I'll be trying to approach this from a neutral point of view. Your question is sort of "opinion-based", yet there are a few important points to be made. Theoretically, there's no convincing argument (yet) as to why quantum computers aren't practically realizable. But, do check out: How Quantum Computers Fail: Quantum Codes, Correlations in Physical Systems, and Noise Accumulation - Gil Kalai, and the related blog post by Scott Aaronson where he provides some convincing arguments against Kalai's claims. Also, read James Wotton's answer to the related QCSE post: Is Gil Kalai's argument against topological quantum computers sound?

+

Math Overflow has a great summary: On Mathematical Arguments Against Quantum Computing.

+

However, yes, of course, there are engineering problems.

+

Problems (adapted from arXiv:cs/0602096):

+
    +
  • Sensitivity to interaction with the environment: Quantum computers are extremely sensitive to interaction with the surroundings since +any interaction (or measurement) leads to a collapse of the state function. This +phenomenon is called decoherence. It is extremely difficult to isolate a quantum system, especially an engineered one for a computation, without it getting entangled with the environment. The larger the number of qubits the harder is it to maintain the coherence.

    +

    [Further reading: Wikipedia: Quantum decoherence]

    +
  • +
  • Unreliable quantum gate actions: Quantum computation on qubits is accomplished by operating upon them with an array of transformations that are implemented in principle using small gates. It is imperative that no phase errors be introduced in these transformations. But practical schemes are likely to introduce such errors. It is also possible that the quantum register is already entangled with the environment even before the beginning of the computation. Furthermore, uncertainty in initial phase +makes calibration by rotation operation inadequate. In addition, one must consider the relative lack of precision in the classical control that +implements the matrix transformations. This lack of precision cannot be completely compensated for by the quantum algorithm.

    +
  • +
  • Errors and their correction: Classical error correction employs redundancy. The simplest way is to store the information multiple times, and — if these copies are later found to disagree — just take a majority vote; e.g. Suppose we copy a bit three times. Suppose further that a noisy error corrupts the three-bit state so that one bit is equal to zero but the other two are equal to one. If we assume that noisy errors are independent and occur with some probability $p$, it is most likely that the error is a single-bit error and the transmitted message is three ones. It is possible that a double-bit error occurs and the transmitted message is equal to three zeros, but this outcome is less likely than the above outcome. Copying quantum information is not possible due to the no-cloning theorem. This theorem seems to present an obstacle to formulating a theory of quantum error correction. But it is possible to spread the information of one qubit onto a highly entangled state of several (physical) qubits. Peter Shor first discovered this method of formulating a quantum error correcting code by storing the information of one qubit onto a highly entangled state of nine qubits. However, quantum error correcting code(s) protect quantum information against errors of only some limited forms. Also, they are efficient only for errors in a small number of qubits. Moreover, the number of qubits needed to correct errors doesn't normally scale well with the number of qubits in which error actually occurs.

    +

    [Further reading: Wikipedia: Quantum error correction]

    +
  • +
  • Constraints on state preparation: State preparation is the essential first step to be considered before the beginning of any quantum computation. In most schemes, the qubits need to be in a particular superposition state for the quantum computation to proceed correctly. But creating arbitrary states precisely can be exponentially hard (in both time and resource (gate) complexity).

    +
  • +
  • Quantum information, uncertainty, and entropy of quantum gates: +Classical information is easy to obtain by means of interaction with the system. On the other hand, the impossibility of cloning means that any specific unknown state cannot be determined. This means that unless the system has specifically been prepared, our ability to control it remains limited. The average information of a system is given by its entropy. The determination of entropy would depend on the statistics obeyed by the object.

    +
  • +
  • A requirement for low temperatures: Several quantum computing architectures +like superconducting quantum computing require extremely low temperatures (close to absolute zero) for functioning.

    +
  • +
+

Progress:

+ +
+

There have been several experimental realizations of CSS-based codes. +The first demonstration was with NMR qubits. Subsequently, +demonstrations have been made with linear optics, trapped ions, and +superconducting (transmon) qubits. Other error-correcting codes have +also been implemented, such as one aimed at correcting for photon +loss, the dominant error source in photonic qubit schemes.

+
+ +

Conclusion:

+

Whether we will ever have efficient quantum computers which can visibly outperform classical computers in certain areas is something only time will tell. However, looking at the considerable progress we have been making, it probably wouldn't be too wrong to say that in a couple of decades we should have sufficiently powerful quantum computers. On the theoretical side though, we don't yet know if classical algorithms (can) exist which will match quantum algorithms in terms of time complexity. See my previous answer about this issue. From a completely theoretical perspective, it would also be extremely interesting if someone could prove that all BQP problems lie in BPP or P!

+

I personally believe that in the coming decades we will be using a combination of quantum computing techniques and classical computing techniques (i.e. either your PC will have both classical hardware components and quantum hardware, or quantum computing will be totally cloud-based and you'll access it online from classical computers). Remember that quantum computers are efficient only for a very narrow range of problems. It would be pretty resource-intensive and unwise to do an addition like 2+3 using a quantum computer (see How does a quantum computer do basic math at the hardware level?).

+

Now, coming to your point of whether national funds are unnecessarily being wasted on trying to build quantum computers. My answer is NO! Even if we fail to build legitimate and efficient quantum computers, we will still have gained a lot in terms of engineering progress and scientific progress. Already research in photonics and superconductors has increased manyfold and we are beginning to understand a lot of physical phenomena better than ever before. Moreover, quantum information theory and quantum cryptography have led to the discovery of a few neat mathematical results and techniques which may be useful in a lot of other areas too (cf. Physics SE: Mathematically challenging areas in Quantum information theory and quantum cryptography). We will also have understood a lot more about some of the hardest problems in theoretical computer science by that time (even if we fail to build a "quantum computer").

+

Sources and References:

+
    +
  1. Difficulties in the Implementation of Quantum Computers (Ponnath, 2006)

    +
  2. +
  3. Wikipedia: Quantum computing

    +
  4. +
  5. Wikipedia: Quantum error correction

    +
  6. +
+
+

Addendum:

+

After a bit of searching, I found a very nice article which outlines almost all of Scott Aaronson's counter-arguments against the quantum computing skepticism. I very highly recommend going through all the points given in there. It's actually part 14 of the lecture notes put up by Aaronson on his website. They were used for the course PHYS771 at the University of Waterloo. The lectures notes are based on his popular textbook Quantum Computing Since Democritus.

+",26,,26,,09-02-2022 23:33,09-02-2022 23:33,,,,0,,,,CC BY-SA 4.0 +2505,1,2520,,6/28/2018 18:20,,8,622,"

I'm just starting of on quantum computing, specifically following the IBM Q Experience documentation [1]. In here, they are explaining the following experiment:

+ +

$T|+\rangle$

+ +

The expected outcomes according to the document:

+ +
    +
  • Phase angle: $\pi/4$
  • +
  • Gates: $T$
  • +
  • Prob 0: 0.8535533
  • +
  • Prob 1: 0.1464466
  • +
  • X-length: 0.7071067
  • +
+ +

I'm trying to deduce this with math.

+ +

$T |+\rangle = \begin{bmatrix}1 & 0 \\ 0 & e^{i\pi/4}\end{bmatrix} {1\over\sqrt 2} \begin{bmatrix} 1 \\ 1 \end{bmatrix} = {1\over\sqrt 2} \begin{bmatrix}1\\e^{i\pi/4}\end{bmatrix}$

+ +

I think I now need to split this out in $|0\rangle$ and $|1\rangle$ so that I get the quantum amplitudes:

+ +

$ = {1\over\sqrt 2} \begin{bmatrix}1\\0\end{bmatrix} + {1\over\sqrt 2} e^{i\pi/4} \begin{bmatrix}0 \\1 \end{bmatrix}$

+ +

Here things are falling apart, as

+ +

$ P(0) = |{1\over\sqrt 2}|^2 = 0.5 $
+$ P(1) = |{1\over\sqrt 2} e^{i\pi/4}|^2 = 0.5 $

+ +

So my question: How do I correctly calculate the probabilities and the X-length?

+ +

[1]: IBM Q: User Guide / The Weird and Wonderful World of the Qubit / Introducing Qubit Phase

+",2794,,26,,12/23/2018 13:33,12/23/2018 13:33,What are the P(0) and P(1) probabilities for the T transformation in quantum computing?,,2,1,,,,CC BY-SA 4.0 +2506,2,,2499,6/28/2018 18:48,,9,,"

The sad truth for most of the people here is that John Duffield (the asker) is right.

+ +

There is no proof that a quantum computer will ever be of any value.

+ +

However, for the companies that have invested in quantum computing (IBM, Google, Intel, Microsoft, etc.), it is entirely worth it to try to build one, because if they are successful they will be able to solve some problems exponentially faster than classical computers, and if they are not successful no dent has been put in the billions of dollars they have available.

+ +

The attempt to build useful quantum computers, which you can call a failure so far, has at least led to advances in understanding superconductors, photonics, and even quantum theory itself. A lot of the mathematics used for analyzing quantum mechanics was developed in the context of quantum information theory.

+ +

And finally, quantum computers might never be marketable, but quantum communication devices by Toshiba, HP, IBM, Mitsubishi, NEC and NTT are already on the market.

+ +

In conclusion: I agree with John Duffield that quantum computing may never be of any value. But quantum communication is already marketable, and a lot of new science, mathematics, and engineering (e.g. for superconductors) was developed in our (so far unsuccessful) attempts at making quantum computing a reality.

+",2293,,2293,,07-02-2018 06:45,07-02-2018 06:45,,,,0,,,,CC BY-SA 4.0 +2508,1,,,6/28/2018 20:43,,3,242,"

Reproduced from Exercise 2.5 of Nielsen & Chuang’s Quantum Computation and Quantum Information (10th Anniversary Edition):

+ +
+

A function $(\cdot, \cdot)$ from $V × V$ to $C$ is an inner product if it satisfies the requirements:

+ +

(1) $(\cdot, \cdot)$ is linear in the second argument, + $$ +\left(|v⟩, \sum_iλ_i|w_i⟩\right) = \sum_i λ_i (|v⟩, |w_i⟩). +$$

+ +

(2) $(|v⟩,|w⟩)=(|w⟩,|v⟩)^∗$.

+ +

(3) $(|v⟩, |v⟩) ≥ 0$ with equality if and only if $|v⟩ = 0$.

+ +

For example, $C^n$ has an inner product defined by + $$ +((y_1,...,y_n),(z_1,...,z_n)) ≡ \sum_i y_i^* z_i = \begin{bmatrix} y_1 & \ldots & y_n\end{bmatrix}\begin{bmatrix} z_1 \\ \vdots \\ z_n\end{bmatrix}. +$$ + Verify that $(·, ·)$ just defined is an inner product on $C^n$.

+
+ +

Note: This question is part of a series attempting to provide worked solutions to the exercises provided in the above book.

+",391,,26,,3/30/2019 7:59,3/30/2019 7:59,Nielsen & Chuang Exercise 2.5 - Inner products of complex vectors,,0,1,,07-01-2018 16:44,,CC BY-SA 4.0 +2509,2,,2499,6/28/2018 22:32,,7,,"

There are many technical challenges to developing a universal quantum computer consisting of many qubits, as pointed out in the other answers. See also this review article. However, there may be workaround ways to get certain nontrivial quantum computing results before we get to the first truly universal quantum computer.

+ +

Note that classical computing devices existed a long time before the first universal computer was made. E.g. to numerically solve differential equations, you can construct an electric circuit consisting of capacitors, coils and resistors, such that the voltage between certain points will satisfy the same differential equation as the one you want to solve. This method was popular in astrophysics before the advent of digital computers.

+ +

In case of quantum computing, note that when Feynman came up with the idea of quantum computing, he argued on the basis of the difficulty of simulating quantum mechanical properties of certain physical systems using ordinary computers. He turned the argument around by noting that the system itself solves the mathematical problem that is hard to solve using ordinary computers. The quantum mechanical nature of the system makes that so, therefore one can consider if one can construct quantum mechanical devices that are able to tackle problems that are hard to solve using ordinary computers.

+",2803,,,,,6/28/2018 22:32,,,,0,,,,CC BY-SA 4.0 +2510,2,,2499,6/28/2018 22:37,,9,,"
+

See the timeline on Wikipedia, and ask yourself where's the parallel adder?

+
+ +

It seems to me that your answer lies in your question. Looking at the timeline on Wikipedia shows very slow progress from 1959 until about 2009. It was mainly theoretical work until we went from zero to one.

+ +

In the mere 9 years since then, the pace of progress has been tremendous, going from 2 qubits to 72, and, if you include D-Wave, up to 2000 qubits. And there's one working in the cloud right now that we have access to. Graph the progress of the last 60 years and I'm sure you'll see quite the knee in the curve you seem to desire and a rebuttal to your statement But as far as I know, quantum computing hasn't done anything.

+ +
+

Where's the equivalent of Atlas, or the MU5?

+
+ +

Is that the measure against which your question is based?

+ +
+

Will you ever be able to? Is it all hype and hot air? Is quantum computing just pie in the sky? Is it all just jam-tomorrow woo peddled by quantum quacks to a gullible public?

+
+ +

Yes. No. No. No.

+ +
+

If not, why not?

+
+ +

Because, as your referenced timeline shows, people are making significant progress in the number and stability of qubits as well as in quantum algorithms.

+ +

Asking people to predict the future has always been fraught with failure which is why most of these sites don't allow 'opinion based' questions.

+ +

Perhaps more specific (non-opinion based) questions would better serve to answer your questions.

+",2806,,,,,6/28/2018 22:37,,,,0,,,,CC BY-SA 4.0 +2511,2,,2491,6/28/2018 22:37,,16,,"

The key difference is that the Bacon-Shor code is a subsystem code, while the Shor code is a stabilizer code. They have the same stabilizer operators, but the error correction procedure is different. The canonical reference for this construction is [Poulin].

+ +

Stabilizer codes rely on measuring eigenvalues of commuting operators (the stabilizers). Because these operators commute, we can label subspaces of the state space by these eigenvalues. In particular, the joint +1 eigenspace is the codespace. If any of our measurements result in a -1 eigenvalue, we know that the state has wandered out of the codespace and can (hopefully) do something to rectify this.

+ +

With subsystem codes, we also measure eigenvalues of some operators, but this time they do not form a commuting set of operators. These operators are called gauge operators. They generate a group called the gauge group. The trick to this construction is that the center of the gauge group is the stabilizer group. This is the group of operators generated by the gauge operators that commute with every element of the gauge group.

+ +

How this works in practice: suppose you have a stabilizer operator $s$ written as a product of gauge operators $\{g_i\}$:

+ +

$$ s = \prod_i g_i. $$

+ +

Now we go ahead and measure each of the $g_i$. Each measurement gives a random eigenvalue $\lambda_i = \pm 1$ but the product of these $\lambda = \prod \lambda_i$ labels the eigenspace of $s$ that the state belongs to. Once we have all the eigenvalues of the stabilizers in this way we can (hopefully) do something to rectify the state.

+ +

An example: I find it helpful to think of the ""4-qubit Bacon-Shor code"". This is an error detecting subsystem code. The gauge operators are

+ +

$$\{XXII, IIXX, ZIZI, IZIZ\}.$$

+ +

Think of these as operating on a $2\times 2$ lattice of qubits. These operators generate the stabilizers $XXXX$ and $ZZZZ.$ Once we measure $XXII$ and $IIXX$ we multiply the two eigenvalue measurements to find the eigenvalue of $XXXX$. These gauge operators are ""easier"" to measure, because they only involve two qubits, but the cost is that we mess up the state in other ways. These ""other ways"" are the gauge qubits, and we don't care about these. The encoded qubits, or logical qubits are the ones we are trying to preserve. The operators that act on the encoded qubits are the logical operators. For this example these are $ZZII$ and $XIXI$. As an exercise I would recommend working out the corresponding eigenvectors (and eigenspaces) for all these operators.
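The gauge-operator algebra described here is easy to verify numerically. The sketch below builds the 4-qubit operators as Kronecker products of Paulis and checks that the two $X$-type dominoes multiply to the stabilizer $XXXX$, that an $X$-type domino fails to commute with an overlapping $Z$-type domino, and that the stabilizer nevertheless commutes with it:

```python
import numpy as np
from functools import reduce

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

def op(paulis):
    """Kronecker product of single-qubit Paulis, e.g. op([X, X, I2, I2]) = XXII."""
    return reduce(np.kron, paulis)

XXII = op([X, X, I2, I2])
IIXX = op([I2, I2, X, X])
ZIZI = op([Z, I2, Z, I2])

XXXX = XXII @ IIXX  # product of the two X-type gauge dominoes
assert np.allclose(XXXX, op([X, X, X, X]))

# The gauge operators do not all commute with each other...
assert not np.allclose(XXII @ ZIZI, ZIZI @ XXII)
# ...but the stabilizer they generate commutes with every gauge operator.
assert np.allclose(XXXX @ ZIZI, ZIZI @ XXXX)
```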

+ +

Larger Bacon-Shor codes work similarly. For an $n\times n$ lattice of qubits, there are a bunch of 2-qubit gauge operators, arranged like ""dominoes"" on the lattice. The $X$ type gauge operators are horizontal dominoes, and the $Z$ type gauge operators are vertical dominoes. A vertical stack of $n$ of the $X$ type dominoes generates an $X$ type stabilzer on $n\times 2$ qubits. And so on.

+ +

The relevance to adiabatic quantum computing is that we can form a Hamiltonian from these operators, as the negative sum of the gauge operators. The groundspace of the Hamiltonian corresponds to the logical qubits of the gauge code, and excitations of the state correspond to errors. For the Bacon-Shor code, the gap of this Hamiltonian goes to zero as the size of the system grows. Therefore this Hamiltonian does not work to protect the encoded state (energetically.) This Hamiltonian is also known as the quantum compass model.

+ +

I also wrote a paper about subsystem codes and Hamiltonians.

+",263,,263,,6/28/2018 22:43,6/28/2018 22:43,,,,0,,,,CC BY-SA 4.0 +2512,2,,2499,6/28/2018 23:02,,12,,"

To answer part of the question, ""will I ever buy a quantum computer"", etc. I think there is a fundamental misunderstanding.

+ +

Quantum computing isn't just classical computing but faster. A quantum computer solves certain kinds of problems in a short time that would take a classical super computer a thousand years. This isn't an exaggeration. But regular kinds of computing, adding numbers, moving bits for graphics, etc. Those will still just be classical computing things.

+ +

If the technology could ever be miniaturized (I don't know), it might be something more like an MMU or a graphics card. An additional feature to your classical computer, not a replacement. In the same way a high end graphics card lets your computer do things that it would not be able to (in reasonable time) with the main CPU, a quantum computer would allow other sorts of operations that can't be done currently.

+ +

I recommend you at least scan the first paragraph of the ""Principles of Operation"" section of the Wikipedia page on quantum computing.

+",2810,,,,,6/28/2018 23:02,,,,0,,,,CC BY-SA 4.0 +2513,1,2519,,6/29/2018 0:37,,10,2380,"

I want to understand the relation between the following two ways of deriving a (unitary) matrix that corresponds to the action of a gate on a single qubit:

+ +
+

1) HERE, in IBM's tutorial, they represent the general unitary matrix acting on a qubit as: + $$ +U = \begin{pmatrix} +\cos(\theta/2) & -e^{i\lambda}\sin(\theta/2) \\ +e^{i\phi}\sin(\theta/2) & e^{i\lambda+i\phi}\cos(\theta/2) +\end{pmatrix}, +$$ + where $0\leq\theta\leq\pi$, $0\leq \phi<2\pi$, and $0\leq \lambda<2\pi$.

+
+ +

This is derived algebraically using the definition of a unitary operator $U$ to be: $UU^{\dagger}=I$.

+ +
+

2) HERE (pdf), similar to Kaye's book An Introduction to Quantum Computing, the same operator is calculated to be: + $$U=e^{i\gamma}\,R_{\hat n}(\alpha).$$ + Here, $R_{\hat n}(\alpha)$ is the rotation matrix around an arbitrary unit vector $\hat n = (\sin\theta\cos\phi,\, \sin\theta\sin\phi,\, \cos\theta)$ (a vector on the Bloch sphere) as the axis of rotation for an angle $\alpha$. Also, $e^{i\gamma}$ gives the global phase factor to the formula (which is not observable after all). The matrix corresponding to this way of deriving $U$ is: $$e^{i\gamma}\cdot\begin{pmatrix} \cos\frac{\alpha}{2}-i\sin\frac{\alpha}{2}\cos\theta & -i\sin\frac{\alpha}{2}\sin\theta\, e^{-i\phi}\\ -i\sin\frac{\alpha}{2}\sin\theta\, e^{i\phi} & \cos\frac{\alpha}{2}+i\sin\frac{\alpha}{2}\cos\theta\end{pmatrix}.$$

+
+ +

This derivation is clearer to me since it gives a picture of these gates in terms of rotating the qubits on the Bloch sphere, rather than just algebraic calculations as in 1.

+ +

Question: How do these angles correlate in 1 and 2? I was expecting these two matrix to be equal to each other up to a global phase factor.

+ +

P.S.: This correspondence seems instrumental to me for understanding the U-gates defined in the tutorial (IBM).
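A small NumPy sketch for experimenting with the rotation form numerically, using $R_{\hat n}(\alpha) = \cos(\alpha/2)\,I - i\sin(\alpha/2)\,(\hat n \cdot \vec\sigma)$ and assuming the axis parametrisation $\hat n = (\sin\theta\cos\phi, \sin\theta\sin\phi, \cos\theta)$; the angles below are arbitrary:

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def rot(alpha, theta, phi):
    """R_n(alpha) for the axis n = (sin t cos p, sin t sin p, cos t)."""
    n = np.array([np.sin(theta) * np.cos(phi),
                  np.sin(theta) * np.sin(phi),
                  np.cos(theta)])
    n_sigma = n[0] * X + n[1] * Y + n[2] * Z
    return np.cos(alpha / 2) * I2 - 1j * np.sin(alpha / 2) * n_sigma

U = rot(0.7, 1.1, 2.3)  # arbitrary angles
assert np.allclose(U @ U.conj().T, I2)  # unitary, as required of a gate
```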

+",2757,,55,,5/18/2021 8:00,5/18/2021 8:00,What is the relation between these two forms of a single-qubit unitary operation?,,2,0,,,,CC BY-SA 4.0 +2514,1,2517,,6/29/2018 1:55,,9,345,"

Is there some definition or theorem about what a quantum computer can achieve from which post-quantum cryptographic schemes (eg lattice cryptography, but not quantum cryptography) can justify their security? I know the period finding function is capable of breaking RSA and discrete logs, but is it the only algorithm relevant to breaking encryption schemes? Can I say that if a scheme is not susceptible to the period finding function it is not susceptible to quantum computing? If not, is there some similar, alternative statement of the form ""if an encryption scheme cannot be broken by algorithm X, it cannot be broken by quantum computing""?

+ +

For example, is it sufficient to prove that an encryption scheme can only be broken by trying all possible keys, and the best that quantum computing can do in this regard is square root search time with Grover's algorithm?

+",2816,,26,,07-01-2018 08:20,07-01-2018 08:20,How to justify post quantum encryption security?,,2,2,,,,CC BY-SA 4.0 +2515,2,,2514,6/29/2018 4:32,,4,,"
+

Is there some definition or theorem about what a quantum computer can + achieve from which post quantum cryptographic schemes (eg lattice + cryptography, but not quantum cryptography) can justify their + security?

+
+ +

No. Just because your post-quantum cryptographic scheme works today, doesn't mean Peter Shor won't find a quantum algorithm to break it tomorrow.

+ +
+

I know the period finding function is capable of breaking RSA and discrete logs, but is it the only algorithm relevant to breaking encryption schemes?

+
+ +

No. An example of another algorithm is Grover's algorithm, which is relevant to breaking cryptosystems based on the Transcendental Logarithm Problem.

+ +
+

Can I say that if a scheme is not susceptible to the period finding function it is not susceptible to quantum computing?

+
+ +

No. Schemes based on the Transcendental Logarithm Problem are not susceptible to period finding, but are susceptible to quantum enhanced speed-ups.

+ +
+

If not, is there some similar, alternative statement of the form ""if an encryption scheme cannot be broken by algorithm X, it cannot be broken by quantum computing""?

+
+ +

No. We do not know every single quantum algorithm in possible existence. Even if a scheme is resilient to period finding and Grover's algorithm, it might be possible to use quantum computers to break it more efficiently than classical computers. We might just need to make Peter Shor interested enough to come up with a quantum enhanced decryption scheme for it.

+ +
+

Is it sufficient to prove that an encryption scheme can only be broken by trying all possible keys, and the best that quantum computing can do in this regard is square root search time with Grover's algorithm?

+
+ +

No. Just because a classical computer cannot break your scheme except by trying all possible keys, doesn't mean a quantum computer cannot.

+ +

Here is a question that does have a yes answer:

+ +

What can we do to prove that an encryption scheme is safe against quantum computers?

+ +

Answer: Prove that decrypting the code is a QMA complete or QMA hard problem. QMA hard problems are problems that are hard for quantum computers in the way that NP hard problems are hard for classical computers.

+ +

This has inspired me to ask this question, which I do not know the answer to!

+",2293,,2293,,6/29/2018 5:01,6/29/2018 5:01,,,,1,,,,CC BY-SA 4.0 +2516,1,,,6/29/2018 4:59,,6,189,"

Such a public key cryptosystem would be ""quantum safe"" in the sense that quantum computers cannot efficiently solve QMA hard problems.

+",2293,,2293,,6/29/2018 8:32,11-04-2018 17:17,What classical public key cryptography protocols exist for which hacking is QMA complete or QMA hard?,,1,2,,,,CC BY-SA 4.0 +2517,2,,2514,6/29/2018 6:48,,6,,"

This is essentially the realm of computational complexity classes. For example, the class BQP may be crudely described as the set of all problems that can be efficiently solved on a quantum computer. The difficulty with complexity classes is that it's hard to prove separations between many classes, i.e. the existence of problems which are in one class but not another.

+ +

In a sense, it is sufficient to be able to say ""if this quantum algorithm cannot break it, it is safe"", you just have to use the right algorithm. You need a BQP-complete algorithm such as approximating the Jones polynomial at roots of unity - any quantum algorithm can be cast as an instance of a BQP-complete problem. However, how that algorithm might be used for the cracking is completely unclear and non-trivial. It's not enough to see that you can't directly brute force things. So, that approach is probably not so helpful.

+ +

What do we want from a post-quantum crypto scenario? We need:

+ +
    +
  • a function $y=f(x)$ that we can easily compute for the purposes of encryption.
  • +
  • for which the inverse, $f^{-1}(y)$ cannot easily be computed on a quantum computer, i.e. the problem class is outside BQP.
  • +
  • given some secret, $z$, there is a classically efficiently computable function $g(y,z)=x$, i.e. with the supplementary information, the function $f(x)$ can be inverted. This is so that the right person (who has the private key, $z$) can decrypt the message.
  • +
+ +

This last bullet is (essentially) the definition of the complexity class NP: the problems for which it may be hard to find a solution, but for which a solution is easily verified when given a proof (corresponding to the private key in our case).

+ +

So, what we're after are problems in NP but not in BQP. Since we don't know if NP=BQP, we don't know that such things exist. However, there's a good route for looking at solutions: we consider NP-complete problems. These are the hardest instances of problems in NP, so if BQP$\neq$ NP (which is widely believed to be the case), NP-complete problems are certainly not in BQP. (If a problem is complete for a complexity class, it means that if you can solve it efficiently, you can solve all instances of the class efficiently.) So, this is kind of the guidance for where one might look for post-quantum algorithms.

+ +

The additional subtlety that complicates matters, however, is roughly (I'm not an expert) that complexity classes talk about worst case complexity, i.e. for a given problem size, it's about how hard the hardest possible instance of the problem is. But there could be only one such problem instance, which would mean that if we fix the problem size (as is standard, e.g. you might talk about 1024 bit RSA; the 1024 bits is the problem size), there would only be one private key. If we know that, an eavesdropper can just use that private key to decrypt messages. So, we actually need that this computational complexity reasoning applies for a large proportion of possible inputs. This gets you into the world of average-case complexity where, as I understand it, it becomes much harder to make such statements.

+ +

It may help to make a comparison to RSA, a public key crypto-system, and ignoring the existence of quantum computers. It is based on the difficulty of factoring large composite numbers. This problem is not (believed to be) in P, so it is believed to be difficult for a hacker with a classical computer to get at the answer. Meanwhile, it is in NP because the solution is readily verified (if you're given one factor, you can easily check it's a factor). That means it can be decrypted using a classical computer by the rightful recipient.

+",1837,,1837,,6/29/2018 8:58,6/29/2018 8:58,,,,0,,,,CC BY-SA 4.0 +2518,2,,2516,6/29/2018 6:54,,4,,"

Please start by reading my answer here. I believe you've mistaken the requirements for post-quantum crypto. If you use a scheme which is QMA-hard, then that means either your problem is QMA-complete (in which case, you can decrypt the message using a quantum computer, but not with a classical computer unless NP=QMA), or not (in which case you cannot decrypt efficiently even on a quantum computer). What you typically want for post-quantum crypto is something for which the decryption (by the holder of the private key) can be performed efficiently on a classical computer.

+ +

There may be schemes out there which are designed to be run with quantum computers performing the (allowed) decryption, but I'm not aware of any, and they are not the main focus of research at the moment.

+ +

As I also described in the other answer, what you're more likely to be interested in is some form of typical case complexity. I suppose it's conceivable that cases with an NP-complete typical case complexity could have worst-case complexity that's QMA-hard.

+",1837,,1837,,6/29/2018 7:24,6/29/2018 7:24,,,,5,,,,CC BY-SA 4.0 +2519,2,,2513,6/29/2018 7:21,,3,,"

Your second unitary isn't quite right, it's not even unitary! I think it should be: +$$e^{i\gamma}\cdot\begin{pmatrix} \cos\frac{\alpha}{2}-i\,\sin\frac{\alpha}{2}\,\cos\frac{\theta}{2}&-i\,\sin\frac{\alpha}{2}\sin\frac{\theta}{2}\,e^{-i\phi}\\ -i\,\sin \frac{\alpha}{2}\sin\frac{\theta}{2}e^{i\phi}&\cos\frac{\alpha}{2}+i\,\sin\frac{\alpha}{2}\,\cos\frac{\theta}{2}\end{pmatrix}.$$
+This may make it easier to find the correspondence. Let me put $\tilde\ $ over the entities from the first unitary in order to distinguish them.
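A quick numerical check makes the point concrete (the angles below are arbitrary test values, nothing special): the matrix as posted in the question fails unitarity, while the version above passes.

```python
import numpy as np

a, th, phi = 0.7, 1.1, 0.4  # arbitrary test angles

def as_posted(a, th, phi):
    # the question's matrix: off-diagonal entries missing the sin(theta/2) factor
    return np.array([
        [np.cos(a/2) - 1j*np.sin(a/2)*np.cos(th/2), -1j*np.sin(a/2)*np.exp(-1j*phi)],
        [-1j*np.sin(a/2)*np.exp(1j*phi), np.cos(a/2) + 1j*np.sin(a/2)*np.cos(th/2)]])

def corrected(a, th, phi):
    s = np.sin(a/2) * np.sin(th/2)
    return np.array([
        [np.cos(a/2) - 1j*np.sin(a/2)*np.cos(th/2), -1j*s*np.exp(-1j*phi)],
        [-1j*s*np.exp(1j*phi), np.cos(a/2) + 1j*np.sin(a/2)*np.cos(th/2)]])

def is_unitary(M):
    return np.allclose(M.conj().T @ M, np.eye(2))

print(is_unitary(as_posted(a, th, phi)), is_unitary(corrected(a, th, phi)))  # False True
```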

+ +

Let's define $\tan(\beta)=\tan\frac{\alpha}{2}\cos\frac{\theta}{2}$. This is the phase of the first matrix element, so +$$ +\cos\frac{\alpha}{2}-i\,\sin\frac{\alpha}{2}\,\cos\frac{\theta}{2}=e^{i\beta}\cos\frac{\tilde\theta}{2}, +$$ +where we're allowing equality between the two unitaries to be up to a global phase $e^{i(\gamma+\beta)}$. In other words, +$$ +\cos^2\frac{\tilde\theta}{2}=\cos^2\frac{\alpha}{2}+\sin^2\frac{\alpha}{2}\cos^2\frac{\theta}{2}=\cos^2\frac{\alpha}{2}\sec^2\beta. +$$

+ +

For the off-diagonal entries, recall that a unitary matrix must have columns whose sum-mod-square is 1. Thus, the off-diagonal entries must be $\sin\frac{\tilde\theta}{2}$ up to some phase which we have to fix. We need +$$ +-\beta+\phi-\frac{\pi}{2}=\tilde\phi\qquad -\beta-\phi-\frac{\pi}{2}=\tilde\lambda+\pi, +$$ +where I've incorporated the $i$ and $-1$ factors using phases $\pi/2$ and $\pi$. That perfectly fixes the relations between those two.

+ +

Now we only have to get the bottom-right matrix element correct. Again, we've already got the weight correct by unitarity, it's just the phase that we need. This is $-2\beta$, which from adding together the above two relations gives $\tilde\phi+\tilde\lambda+2\pi\equiv\tilde\phi+\tilde\lambda$, exactly as required.

+",1837,,,,,6/29/2018 7:21,,,,2,,,,CC BY-SA 4.0 +2520,2,,2505,6/29/2018 7:35,,5,,"

You are correct with your calculation that +$$ +T\left(\begin{array}{c} 1 \\ 1 \end{array}\right)/\sqrt{2}=\left(\begin{array}{c} 1 \\ e^{i\pi/4} \end{array}\right)/\sqrt{2}, +$$ +and you are correct that if you want to calculate the probability of getting a ""0""$\equiv\left(\begin{array}{c} 1 \\ 0 \end{array}\right)$ measurement result, you evaluate +$$ +P(0)=\left|\left(\begin{array}{cc} 1 & 0 \end{array}\right)\cdot\left(\begin{array}{c} 1 \\ e^{i\pi/4} \end{array}\right)/\sqrt{2}\right|^2=\frac{1}{2}, +$$ +so you get both answers with probability 1/2. However, this is not what the referenced page is trying to calculate. It says

+ +
+

If we start with a system initially in the |+⟩ (which is done using the Hadamard), then apply multiples of the T gate and measure in the x-basis...

+
+ +

The X-basis is not the question of 0 or 1 that you have already calculated. Instead, it's the probability of being in $|\pm\rangle=\frac{1}{\sqrt{2}}\left(\begin{array}{c} 1 \\ \pm 1\end{array}\right)$. I think the confusion has arisen because while people often refer to 0 and 1 as being the computational basis (as I have, and you have), when you're talking about measurements where there are two possible results, you can always label the outcomes as 0 and 1, no matter what basis was used. This is what they've done.

+ +

So, +$$ +P(+)=\left|\frac{1}{\sqrt{2}}\left(\begin{array}{cc} 1 & 1 \end{array}\right)\cdot\left(\begin{array}{c} 1 \\ e^{i\pi/4} \end{array}\right)/\sqrt{2}\right|^2=\frac{1}{4}|1+e^{i\pi/4}|^2 +$$ +Expanding this gives +$$ +P(+)=\frac{1}{4}\left((1+\cos\frac{\pi}{4})^2+\sin^2\frac{\pi}{4}\right)=\frac{1}{4}\left((1+\frac{1}{\sqrt{2}})^2+\frac{1}{2}\right)=\frac{2+\sqrt{2}}{4} +$$ +If you numerically evaluate this, you'll get the required result, 0.8535533. You could repeat the calculation for $P(-)$, or just use the fact that $P(+)+P(-)=1$.

+ +

The x-length, as they call it, is +$$ +P(+)-P(-)=2P(+)-1=\frac{2+\sqrt{2}}{2}-1=\frac{1}{\sqrt{2}}. +$$ +Again, that's exactly what's claimed.
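These numbers are easy to check numerically with plain NumPy:

```python
import numpy as np

plus = np.array([1, 1]) / np.sqrt(2)
T = np.diag([1, np.exp(1j * np.pi / 4)])
psi = T @ plus

p0 = abs(np.array([1, 0]) @ psi)**2   # computational basis: 0.5
p_plus = abs(plus @ psi)**2           # X basis: (2 + sqrt(2))/4 ≈ 0.8535533
x_length = 2 * p_plus - 1             # 1/sqrt(2) ≈ 0.7071068
print(p0, p_plus, x_length)
```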

+",1837,,,,,6/29/2018 7:35,,,,5,,,,CC BY-SA 4.0 +2521,1,,,6/29/2018 8:09,,8,514,"

I have been reading about the Quantum Channel Capacity and it seems to be an open problem to find such capacity in general. Quantum capacity is the highest rate at which quantum information can be communicated over many independent uses of a noisy quantum channel from a sender to a receiver.

+ +

Known results on the field are the Hashing bound, which is a lower bound on such quantum capacity and which is given by the LSD (Lloyd-Shor-Devetak) theorem; or the HSW (Holevo-Schumacher-Westmoreland) theorem for classical capacity over quantum channels.

+ +

I was wondering if there have been any advances towards a general expression for the quantum capacity since the release of those theorems. A glimpse of the progress in this field is enough for me, or references to papers where such a task is developed.

+",2371,,55,,03-08-2021 11:13,03-08-2021 11:13,Advances in Quantum Channel Capacity,,1,0,,,,CC BY-SA 4.0 +2522,1,,,6/29/2018 8:50,,4,95,"

I know there are measures that can be taken to mitigate the effects of dephasing (I'm referring here to Dynamic Decoupling and the other ideas it led to). I find it surprising that there is no equivalent procedure to mitigate the effects of energy leakage. In principle, it seems like if you add energy to the system at the rate it leaves (due to interactions with the measurement equipment, etc) you could keep a qubit at the same energy level indefinitely. That is, $T_1 \to \infty$.

+ +

I know that just having the qubit at the correct energy probably isn't enough to ensure that it is in a pure state, so I want to try to formulate this idea using the density matrix formalism and see if there is any clever way to add the energy in such a way as to keep the qubit in its initial state.

+ +

I can't get started, so I'm asking for help. How would you model a system at constant energy, with energy leaving and entering at the same rate?

+",1867,,26,,11/28/2018 6:07,11/28/2018 6:07,Modeling energy relaxation effects with density matrix formalism,,1,0,,,,CC BY-SA 4.0 +2523,2,,2522,6/29/2018 9:10,,1,,"

I think the real question is how are you putting the energy back in? Here's a natural suggestion:

+ +

Take one qubit for simplicity. It has two energy levels, call them 0 and 1. 1 is at an energy $\omega$ higher than 0. So, the natural Hamiltonian is +$$ +H_0=\omega|1\rangle\langle 1| +$$ +If you want to talk about energy dissipation, you probably have to go to the Lindblad equation, with an amplitude damping term +$$ +L=\gamma|0\rangle\langle 1|. +$$ +However, you're probably going to add energy back in using a unitary process, +$$ +H_1=\Omega(|0\rangle\langle 1|+|1\rangle\langle 0|). +$$ +Thus, the overall evolution is +$$ +\frac{d\rho}{dt}=-i[H_0+H_1,\rho]+L\rho L^\dagger-\frac12(L^\dagger L\rho+\rho L^\dagger L). +$$ +Now, if you want to make sure there's a balance of energy in and energy out, you want to evaluate the expected energy of the state, $\text{Tr}(\rho H_0)$, and make sure that's a constant, i.e. that the time derivative is 0. I haven't tried to solve it (you were asking for a starting point) but I presume that to make it work, $\Omega$ will be a function of time and of the initial state. The other interesting thing to calculate will presumably be $\text{Tr}(\rho^2)$ (or perhaps its first derivative). This will map the increasing mixedness of the state $\rho$, which can never be compensated for by the unitary action of $H_1$ (which preserves mixedness).
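As a concrete starting point, here is a minimal (forward-Euler) integration of exactly that master equation. The parameter values are arbitrary choices for illustration, and $\Omega$ is held constant rather than being the time-dependent function the balancing argument would require.

```python
import numpy as np

w, Om, g = 1.0, 0.3, 0.2                              # arbitrary omega, Omega, gamma
H0 = np.diag([0, w]).astype(complex)                  # omega |1><1|
H1 = Om * np.array([[0, 1], [1, 0]], dtype=complex)   # the re-pumping drive
L = g * np.array([[0, 1], [0, 0]], dtype=complex)     # amplitude damping

def drho(rho):
    H = H0 + H1
    unitary = -1j * (H @ rho - rho @ H)
    damping = L @ rho @ L.conj().T - 0.5 * (L.conj().T @ L @ rho + rho @ L.conj().T @ L)
    return unitary + damping

rho = np.array([[0, 0], [0, 1]], dtype=complex)       # start in |1>
dt = 1e-3
for _ in range(5000):
    rho = rho + dt * drho(rho)

energy = np.trace(rho @ H0).real    # Tr(rho H0): the quantity one would hold constant
purity = np.trace(rho @ rho).real   # Tr(rho^2): never increased by the unitary part
print(energy, purity)
```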

+",1837,,,,,6/29/2018 9:10,,,,4,,,,CC BY-SA 4.0 +2524,1,,,6/29/2018 10:04,,5,174,"

I don't know much about physics so maybe the question is stupid, but I wonder how engineers detect that the state function did not collapse due to the environment while a calculation is performed? Theoretically, a measurement will break the system. Related is also the next question, how to detect that a calculation is finished? A measurement should also break the state function and thus break an ongoing calculation.

+ +

If one thinks further, how is it even possible to check whether the experimenter produced a certain quantum state? Maybe all the bits have a totally different state right from the start and you just know about that afterwards.

+ +

I seem to miss something because if this would be true, quantum computers would be in-practical. Is it like this: The experimenter takes a measurement periodically and hopes that the operation was successful?

+",,user2827,26,,6/29/2018 11:01,07-02-2018 11:12,How to check states?,,2,4,,,,CC BY-SA 4.0 +2525,1,2546,,6/29/2018 11:17,,10,224,"

Qudit graph states are $d$-dimensional generalisations of qubit graph states such that each state is represented by a weighted graph $G$ (with no self-loops) such that each edge $(i, j)$ is assigned a weight $A_{i, j} = 0,\ldots,d-1$. +The graph state associated with $G$ is then given by +$$ +|G⟩ = \prod_{i>j} \textrm{CZ}_{i,j}^{A_{i,j}} |+⟩^{\otimes n}, +$$ +where $|+⟩ = F^\dagger|0⟩$ and $F$ is the Fourier gate +$$ +F = \frac{1}{\sqrt{d}}\sum_{k,l=0}^{d-1} \omega^{kl}|k⟩⟨l|. +$$
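Since CZ is diagonal in the computational basis, $|G⟩$ can be written out directly: the basis state $|k_1\ldots k_n⟩$ simply acquires the phase $\omega^{\sum_{i>j}A_{i,j}k_ik_j}$. A small sketch (nothing here requires $d$ to be prime; the construction goes through as a definition for any $d$):

```python
import numpy as np
from itertools import combinations

def graph_state(d, A):
    """|G> for n qudits of dimension d with weight matrix A (entries in 0..d-1)."""
    n = len(A)
    w = np.exp(2j * np.pi / d)
    state = np.full(d**n, 1 / np.sqrt(d**n), dtype=complex)  # |+>^n is the uniform state
    for idx in range(d**n):
        ks = [(idx // d**(n - 1 - i)) % d for i in range(n)]  # base-d digits of idx
        phase = sum(A[i][j] * ks[i] * ks[j] for i, j in combinations(range(n), 2))
        state[idx] *= w**phase
    return state

# d = 2 with a single edge reproduces the usual two-qubit graph state
print(np.round(graph_state(2, [[0, 1], [1, 0]]).real, 3))  # [0.5  0.5  0.5 -0.5]
```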

+ +

In the literature on qudit graph states, there does not seem to be consistency as to whether such states are defined only for $d$ prime or not. +For example, some sources only give the above definition for $d$ prime, such as

+ + + +

whereas some do not specify any such restriction, such as

+ + + +

So which is correct? +Are qudit graph states (well-)defined when the dimension is non-prime?

+ +

Also, if so, are they uniquely defined?

+",391,,,,,6/30/2018 17:38,Are qudit graph states well-defined for non-prime dimension?,,1,0,,,,CC BY-SA 4.0 +2526,1,,,6/29/2018 11:43,,5,325,"

I'm looking at this paper and try to implement the Quantum adders they define myself.

+ +

Suppose we have a number $b=b_{n-1}\dots b_1b_0$ and they want to add a constant number $a=a_{n-1}\dots a_1a_0$.

+ +

They define +$$A_j = \prod_{k=1}^{j+1} R_k^{a_{j+1-k}}, \quad R_k = \begin{pmatrix}1&0\\0&e^{i2\pi/2^k}\end{pmatrix}$$

+ +

The result can be obtained by first applying a QFT on all qubits, then applying $A_j$ on qubit $j$, and then applying an inverse QFT.

+ +

However, if I try to work this out for the simple case where $b=0$ and $a=1$, I end up with a quantum state +$$0.5\left|01\right> + (0.5+0.5i)\left|10\right> - 0.5i \left|11\right>.$$ +Note in this case, $A_0 = Z$ and $A_1 = S$.

+ +

Is there an error in my calculation, or is the definition in the article not correct?

+",2005,,26,,5/13/2019 8:57,5/13/2019 8:57,Implementation of quantum adder,,1,6,,,,CC BY-SA 4.0 +2527,2,,2524,6/29/2018 13:44,,2,,"
+

How engineers detect that the state function did not collapse due to the environment while a calculation is performed?

+
+

You can detect and correct errors during the calculation by using some Quantum Error Correction code. The idea behind these error correction codes is:

+
    +
  1. You entangle the qubit you want to "protect" against errors with $n$ other qubits.
  2. +
  3. You perform the operation that may introduce an error.
  4. +
  5. You measure a general property of the system.
  6. +
+

I recommend reading Quantum Error Correction for Beginners (Devitt, Nemoto & Munro, 2013) if you want more details on quantum error correction (and detection!).
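As a toy illustration of that entangle-operate-measure loop, here is the standard 3-qubit bit-flip code (chosen for simplicity; it is not specific to the paper above). The two parity measurements locate a single $X$ error without revealing the encoded amplitudes:

```python
import numpy as np

def encode(a, b):
    """a|0> + b|1>  ->  a|000> + b|111>"""
    state = np.zeros(8, dtype=complex)
    state[0b000], state[0b111] = a, b
    return state

def flip(state, q):
    """Apply an X gate on qubit q (0 = leftmost)."""
    out = np.empty_like(state)
    for i in range(8):
        out[i ^ (1 << (2 - q))] = state[i]
    return out

def correct(state):
    """Read the two parity checks (deterministic for a single X error)."""
    i = int(np.argmax(np.abs(state)))    # any basis state in the support works
    s1 = ((i >> 2) ^ (i >> 1)) & 1       # parity of qubits 0 and 1
    s2 = ((i >> 1) ^ i) & 1              # parity of qubits 1 and 2
    q = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}[(s1, s2)]
    return state if q is None else flip(state, q)

psi = encode(0.6, 0.8)
for q in range(3):
    assert np.allclose(correct(flip(psi, q)), psi)  # any single flip is undone
```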

+

But you are right, a dumb measurement will break the state function and thus break an ongoing calculation.

+
+

How to detect that a calculation is finished?

+
+

From the comments you wrote, your question is based on Quantum theory, the Church-Turing principle and the universal quantum computer (page 7). In this paper, David Deutsch tries to formalise the Quantum Turing Machine (QTM).

+

In the model of the QTM, a user does not have any means of knowing whether the computation is over, but the machine does (because there are no more instructions, because the quantum state ended up in a trap state, etc.). In order to let the user know that he can retrieve the result of his calculations, the machine just flips a (classical) bit to 1. This bit and the quantum bits used for the computation are obviously not entangled (classical bits cannot be entangled) and so measuring the bit will not change the quantum system.

+

In real-world models/implementations, this may be done by other means. For example with IBM's quantum chips, you know that your computation is finished when the results become available and IBM knows that the computation is over because they executed on the quantum chip all the quantum gates you asked them to execute.

+
+

How is it even possible to check whether the experimenter produced a certain quantum state?

+
+

You can perform 2 different checks:

+
    +
Quantifying the reliability of the initial state: if the experimenter tells you that the initial state is $\left|\phi\right>$ you can run multiple experiments consisting only of a measurement of the initial state and count how many experiments gave wrong outcomes.

    +

    This check consists in measurements and so may break the initial state, so you cannot check just before your experiment that the initial state is good. You can only obtain numbers like "the initial state will be good in 90% of the experiments".

    +
  2. +
  3. Checking just before your calculations that the initial state is good. As said in the previous point, this may break the initial state and I don't know any algorithm capable of doing that for an arbitrary initial state.

    +
  4. +
+

The 2 points above are not making any assumptions on the initial state. If you are using the quantum circuit model then you expect $\left|0\right>^n$ as initial state and this initial state can be checked and corrected before your calculations:

+
    +
  1. Perform a destructive measurement on the qubits. Depending on the outcome of the measurement, each qubit is either in the state $\left|0\right>$ or $\left|1\right>$.
  2. +
  3. For all the qubits measured in the state $\left|1\right>$, apply an $X$ gate to flip the qubit state to $\left|0\right>$.
  4. +
+

I think this is how IBM resets its qubits to the $\left|0\right>$ state (but I don't have any links confirming it).
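The two steps above, sketched for a single qubit (a pure-state toy simulation):

```python
import numpy as np

rng = np.random.default_rng(0)

def reset(psi):
    """Destructively measure in {|0>,|1>}, then apply X if the outcome was 1."""
    outcome = rng.random() < abs(psi[1])**2   # Born rule for outcome 1
    post = np.array([0.0, 1.0]) if outcome else np.array([1.0, 0.0])
    if outcome:
        post = post[::-1]                     # the X gate flips |1> back to |0>
    return post

for _ in range(10):
    assert np.allclose(reset(np.array([0.6, 0.8])), [1, 0])
```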

+",1386,,-1,,6/18/2020 8:31,07-02-2018 11:12,,,,0,,,,CC BY-SA 4.0 +2528,1,2533,,6/29/2018 14:15,,5,181,"

In $\mathbb{C^2}$, we generally take $+1$ and $-1$ as the standard eigenvalues, that's what Pauli-X, Pauli-Z measurements, etc will give us. +Is there a similar standard while measuring in the Bell basis and the computational basis in $\mathbb{C^2}\otimes\mathbb{C^2}$?

+ +

Of course, the actual eigenvalues don't matter, as long as we are talking about the same resolution of identity, but I was just wondering if there was a convention.

+",2832,,26,,12/23/2018 12:35,12/23/2018 12:35,What are the standard eigenvalues in $\mathbb{C^2}\otimes\mathbb{C^2}$?,,1,3,,,,CC BY-SA 4.0 +2529,2,,2427,6/29/2018 15:19,,1,,"

I have found this paper, in which an insight into the question can be seen: the author states the difficulty of constructing the so-called quantum repeaters for quantum networks due to the no-cloning theorem. However, the constructions are indeed possible. The aforementioned paper can be found here. The same author has a book called Quantum Networking where I expect he goes further in explaining such difficulty and how to overcome it, but I do not have access to that text, so I am not sure whether such an explanation is given there.

+ +

I am giving this information as an answer so that other users interested in quantum networking can find information about the question that is being asked here.

+",2371,,,,,6/29/2018 15:19,,,,0,,,,CC BY-SA 4.0 +2530,1,2532,,6/29/2018 15:24,,6,1693,"

After taking some measurement, how can a qunit be ""unmeasured""? Is unmeasurement (i.e. reverse quantum computing) possible?

+",2645,,,,,07-01-2018 02:03,Reverse Quantum Computing: How to unmeasure a qunit,,3,1,,,,CC BY-SA 4.0 +2531,2,,2530,6/29/2018 15:30,,3,,"

You can compute by measuring - see cluster-based quantum computation - but the whole thing that makes measurement different in quantum mechanics is that it destroys the superposition. It can't be undone. Once you measure, the qudit isn't in a state $|\psi\rangle = \alpha|0\rangle + \beta|1\rangle + ... +\gamma|n\rangle$ but in a state $|\psi\rangle = |0\rangle$ or $|\psi\rangle = |1\rangle$ or what have you based upon probability. When you measure the qubit again soon after, it stays as either $|0\rangle$ or $|1\rangle$. The superposition is gone. We can't get it back (except by doing the same operations that led our qubit to that point, in which case it'll be very similar) because we can't clone a qubit, so we can't figure out what $\alpha$ and $\beta$ are.

+

Tl;dr: No.

+",91,,-1,,6/18/2020 8:31,6/29/2018 15:30,,,,7,,,,CC BY-SA 4.0 +2532,2,,2530,6/29/2018 15:40,,7,,"

I am not really sure what you mean by ""unmeasuring"" a qubit, but if you mean to recover the qubit that was measured by manipulating the post-measurement state, then I am afraid that the answer is no. When a quantum state is measured, its superposition collapses to one of the possible outcomes of the measurement, and so the original qubit is lost.

+ +

The third postulate of quantum mechanics explains measurements in the quantum world, and it says the following:

+ +
+

Quantum measurements are described by a collection $\{M_m\}$ of measurement operators. These are operators acting on the state space of the system being measured. The index $m$ refers to the measurement outcomes that may occur in the experiment. If the state of the quantum system is $|\psi\rangle$ immediately before the measurement, then the probability that result $m$ occurs is given by \begin{equation} +p(m)=\langle\psi|M_m^\dagger M_m|\psi\rangle, +\end{equation} + and the state of the system after the measurement is + \begin{equation} +\frac{M_m|\psi\rangle}{\sqrt{\langle\psi|M_m^\dagger M_m|\psi\rangle}}. +\end{equation}

+
+ +

So the post-measurement state collapses into another state defined by postulate 3, and the previous quantum state is lost irreversibly. See also this wikipedia entry for wave function collapse, where the collapse of quantum states after measurement is explained.
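For a single qubit and a projective measurement, postulate 3 is easy to see in action (arbitrary amplitudes chosen for illustration):

```python
import numpy as np

psi = np.array([0.6, 0.8])              # |psi> = 0.6|0> + 0.8|1>
M0 = np.array([[1, 0], [0, 0]])         # measurement operator for outcome 0

p0 = np.real(psi.conj() @ M0.conj().T @ M0 @ psi)  # probability of outcome 0: 0.36
post = (M0 @ psi) / np.sqrt(p0)                    # post-measurement state: |0>
print(p0, post)
```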

+ +

Consequently, if the same measurement is to be performed again, the quantum state must be prepared anew beforehand so that the experiment can be repeated.

+",2371,,,,,6/29/2018 15:40,,,,8,,,,CC BY-SA 4.0 +2533,2,,2528,6/29/2018 17:44,,3,,"

There is no notion of ""standard eigenvalues"" for general matrices.

+ +

Some meaningful eigenvalues for 4x4 matrices are: +

+ +
    +
  • {-3/2, -1/2, 1/2, 3/2} which are the possible z-projections of a spin-3/2 particle
  • +
  • Instead of eigenvalues of X and Z, use the eigenvalues of the Dirac matrices, which are 4x4 matrices that are related to Pauli matrices
  • +
  • Instead of eigenvalues of X and Z, use the eigenvalues of the 4x4 generalization of the Gell-Mann matrices (which themselves are 3x3 generalizations of the 2x2 Pauli matrices).
  • +
  • Finally, as Neil de Beaudrap has noted in the comment, {-1,1} can also be eigenvalues for 4x4 matrices, such as the SWAP gate.
  • +
+",2293,,2293,,07-02-2018 18:01,07-02-2018 18:01,,,,2,,,,CC BY-SA 4.0 +2535,2,,2371,6/29/2018 20:16,,0,,"

In the comments to my answer the OP has written:

+ +
+

In the universal gate case you stated the largest systems are <100. + How could it reach 10k?

+
+ +

Well I have good news for you. Four days ago D-Wave announced at the AQC conference that they can now do YY coupling:

+ +

+ +

Here you can see the superconducting circuit that gives you ZZ and YY coupling at the same time:

+ +

+ +

I cannot show you more of their ""preview"" presentation, but expect for them to publish something very soon.

+ +

Why is YY coupling significant? It is because in 2007, Jacob Biamonte and Peter Love from D-Wave proved that XX + ZZ is enough for universal quantum computation. XX and YY are equivalent up to a rotation, so they could easily have instead said that YY + ZZ is universal.

+ +

Now that D-Wave has engineered a universal set of couplers, it should be possible to have a 10,000 qubit universal quantum computer when they extend to 1250 units cells (since 8 x 1250 = 10,000, see my first answer).

+ +

I'm sorry that there's no literature references for this yet, but the picture tells the whole story, and I'm afraid that until D-Wave publishes something, this is the ""source"" for the information. This is how you can cite this answer.

+",2293,,2293,,6/29/2018 20:22,6/29/2018 20:22,,,,0,,,,CC BY-SA 4.0 +2536,1,,,6/29/2018 20:33,,9,1717,"

How is D-Wave's Pegasus architecture different from the Chimera architecture?

+",2293,,26,,01-07-2019 14:55,8/28/2019 5:31,"What is D-Wave's ""Pegasus"" architecture?",,3,0,,,,CC BY-SA 4.0 +2537,2,,2536,6/29/2018 20:34,,5,,"

Pegasus is the first fundamental change in D-Wave's architecture since the D-Wave One.

+ +

The D-Wave Two, 2X, and 2000Q all used the ""Chimera"" architecture, which consisted of unit cells of $K_{4,4}$ graphs. The four generations of D-Wave machines just added more qubits by adding more and more unit cells that were the same.

+ +

In Pegasus, the actual structure of the unit cells has fundamentally changed for the first time. Instead of the Chimera graph where each qubit can have at most 6 qubits, the Pegasus graph allows each qubit to couple to 15 other qubits.

+ +

A machine has been made already with 680 Pegasus qubits (compare this to 2048 Chimera qubits in the D-Wave 2000Q).

+ +

The work was presented by Trevor Lanting of D-Wave, four days ago:

+ +

+

+",2293,,,,,6/29/2018 20:34,,,,3,,,,CC BY-SA 4.0 +2538,2,,2526,6/29/2018 22:03,,2,,"

I'm assuming an initial state of the form $|a\rangle|b\rangle = |1\rangle|0\rangle$ for your simple case. You first perform a QFT on the right qubit, obtaining $|1\rangle(\frac{|0\rangle+|1\rangle}{\sqrt{2}})$. Next, you apply $A_0 = R_1$ to the right qubit to obtain $|1\rangle(\frac{|0\rangle-|1\rangle}{\sqrt{2}})$. Finally, you apply an IQFT to the right qubit and obtain $|1\rangle|1\rangle$, thereby demonstrating that $1+0=1$. As @DaftWullie noted, all the action happens on the ""$b$"" qubit; the $a$ qubit (or cbit in this case) acts only as a control.
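The same logic can be checked for larger registers numerically: in the Fourier basis, the whole bank of $A_j$ gates acts as the diagonal phase $|k\rangle \mapsto e^{2\pi i\,ak/2^n}|k\rangle$, so a full-register simulation (applying that phase at once rather than gate by gate) is enough to verify the construction:

```python
import numpy as np

def qft(n):
    N = 2**n
    w = np.exp(2j * np.pi / N)
    j, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    return w**(j * k) / np.sqrt(N)

def add_const(a, b, n):
    """Compute (a + b) mod 2^n via QFT, phase rotations, inverse QFT."""
    N = 2**n
    state = np.zeros(N, dtype=complex)
    state[b] = 1.0
    F = qft(n)
    state = F @ state
    state *= np.exp(2j * np.pi * a * np.arange(N) / N)  # combined effect of the A_j
    state = F.conj().T @ state
    return int(np.argmax(np.abs(state)))

assert add_const(1, 0, 2) == 1
assert add_const(3, 2, 2) == 1  # wraps around mod 4
```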

+",356,,,,,6/29/2018 22:03,,,,1,,,,CC BY-SA 4.0 +2539,2,,2530,6/29/2018 23:02,,2,,"

Unitary operation is revesible, but measurement is a projection operation, which is not reveaible. Think about matrix inverse, projection matrix has lower rank and does not have inverse

+",2854,,,,,6/29/2018 23:02,,,,1,,,,CC BY-SA 4.0 +2540,1,2542,,6/30/2018 1:00,,14,2893,"

A universal set of gates are able to mimic the operation of any other gate type, given enough gates. For example, a universal set of quantum gates are the Hadamard ( $H$ ), the $\pi/8$ phase shift ( $T$ ), and the $\mathrm{CNOT}$ gate. How would one disprove or prove universality of a set of gates such as $\{H,T\}$, $\{\mathrm{CNOT},T\}$, or $\{\mathrm{CNOT}, H\}$?

+",2713,,124,,7/17/2019 12:27,7/17/2019 12:27,How to prove/disprove universality for a set of gates?,,2,0,,,,CC BY-SA 4.0 +2541,2,,2540,6/30/2018 1:19,,3,,"

Nielsen and Chuang, pg 191 of the 10th anniversary edition:

+ +
+

We have just shown that an arbitrary unitary matrix on a $d$-dimensional Hilbert space may be written as a product of two-level unitary matrices. Now we show that single qubit and CNOT gates together can be used to implement an arbitrary two-level unitary operation on the state space of $n$ qubits. Combining these results we see that single qubit and CNOT gates can be used to implement an arbitrary unitary operation on $n$ qubits and therefore are universal for quantum computation.

+
+ +

The first sentence there is an accepted result, so you must simply show that the combination of your gate set can implement ""an arbitrary two-level unitary operation"". To quote Wikipedia:

+ +
+

Technically, this is impossible since the number of possible quantum gates is uncountable, whereas the number of finite sequences from a finite set is countable. To solve this problem, we only require that any quantum operation can be approximated by a sequence of gates from this finite set.

+
+ +

See also this paper.

+",91,,,,,6/30/2018 1:19,,,,0,,,,CC BY-SA 4.0 +2542,2,,2540,6/30/2018 5:41,,10,,"

Universality can be a very subtle thing which is quite tricky to prove. There are usually two options for proving it:

+ +
    +
  • show directly, using your chosen gates, how to construct any arbitrary unitary of arbitrary size (there’s no constraint on the size of the construction, just that it can be done) to arbitrary accuracy (on some non-trivial subspace of the full Hilbert space).

  • +
  • show how your chosen set of gates can be used to recreate (to arbitrary accuracy) an existing universal set.

  • +
+ +

Conversely, if you wish to disprove it, you show that the effect of your set of gates can always be simulated by an (assumed) lesser model of computation, usually classical computation.

+ +

There are a few heuristics that you can use for guidance:

+ +
    +
  • you must have a multi-qubit gate in your set. If all you have are single-qubit gates, you can simulate each qubit independently on a classical computer. So, if we believe that quantum computers are more powerful than classical, single qubit gates alone are not universal for quantum computation. This rules out {H,T}.

  • +
  • you must have a gate that creates superpositions. This rules out {CNOT, T}. Again, this is a classical computation with the addition of an irrelevant global phase.

  • +
+ +

Of course, these are not sufficient conditions: the set {H,S,CNOT} can be efficiently simulated as well (see the Gottesman-Knill theorem). This must also be true of {H,CNOT}, as it is a subset, and so the operations that it can create are no more than those of the original set.

+ +

One of the universal sets that I find most interesting is {Toffoli,H}. It always feels surprising to me that this is enough (especially when you compare to the previous set). Note that it does not involve any complex numbers.

+ +

It is also possible to get universality from a single two-qubit gate such as +$$ +\left(\begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & \frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}} \\ 0 & 0 & \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \end{array}\right) +$$
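
A quick numpy check (my own addition) that this matrix is indeed unitary, and hence a valid quantum gate:

```python
import numpy as np

s = 1 / np.sqrt(2)
G = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0],
              [0, 0, s, -s],
              [0, 0, s,  s]])

# G-dagger G should be the identity for a unitary
print(np.allclose(G.conj().T @ G, np.eye(4)))
```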

+",1837,,1837,,6/30/2018 6:28,6/30/2018 6:28,,,,1,,,,CC BY-SA 4.0 +2543,2,,2499,6/30/2018 10:58,,12,,"

TL;DR: I've been working on the theory of quantum computers for about 15 years. I've seen nothing convincing to say that they won't work. Of course, the only real proof that they can work is to make one. It's happening now. However, what a quantum computer will do and why we want it does not match up with the public perception.

+ +
+

Is quantum computing just pie in the sky? Is it all just jam-tomorrow woo peddled by quantum quacks to a gullible public?

+
+ +

As a ""quantum quack"" (thanks for that), of course I'm going to tell you it's all realistic. But the theory is sound. So long as quantum mechanics is correct, the theory of quantum computation is correct, and there are efficient algorithms for quantum computers for which we don't know how to efficiently compute the solution on a classical computer. But I don't think anything that I write here can convince a skeptic. Either, you have to sit down and learn all the details yourself, or wait and see.

+ +

Of course, quantum mechanics is only a theory which could be superseded at any time, but its predictions have already been applied to explain the world around us. Quantum computers are not pushing the theory into an untested regime where we might hope there are unexpected results (which is what physicists really hope for, because that's where you start to see hints at new physics). For example, quantum mechanics is already applied to condensed matter systems consisting of far more constituents than the number of qubits we're talking about for a near-term quantum computer. It's just that we need an unprecedented level of control over them. A few people think they have arguments for why a quantum computer won't work, but I've not found anything particularly convincing in the arguments that I've read.

+ +
+

Is it all hype and hot air?

+
+ +

There is a lot of hype surrounding quantum computers. I would say that this comes from two main sources:

+ +
    +
  • the popular representation of quantum computing in the mainstream media and popular culture (e.g. science fiction books). Ask anybody actively working on quantum computation, I think they will all agree it's poorly represented, giving the impression that it's a universal solution that will make everything run quicker, which is, at least for now, not the case. There has been some jam-tomorrow woo peddled to a gullible public, but that's more through a ""lost in translation"" attempt to over-simplify what's going on, mostly by non-specialist intermediaries.

  • +
  • researchers themselves. For the past 20(ish) years, people have been promising that quantum computing is just over the horizon, and it's never quite materialized. It's quite reasonable that observers get sick of it at that point. However, my perspective from being within the field is that many people claiming to be working towards quantum computers haven't been. As funding bodies have got progressively more demanding with the ""why"" for research, and ensuring ""impact"", quantum computing has become the go-to for many experimentalists, even if they aren't really interested in doing anything for a quantum computer. If there's been some way that they can twist what they're doing so that it sounds relevant to quantum computing, they've tended to do it. It doesn't mean that quantum computing can't be done, it just hasn't been as much of a focus as has been implied. Take, at a slightly different level, the explosion of quantum information theory. So few theorists within that have actively worked on the theory of quantum computers and how to make them work (that's not to say they've not been doing interesting things).

  • +
+ +

However, we are now hitting a critical mass where there's suddenly a lot of research investment in making quantum computers, and associated technology, a reality, and things are starting to move. We seem to be just hitting the point, with devices of about 50 qubits, that we might be capable of achieving ""quantum supremacy"" - performing computations whose results we can't really verify on a classical computer. Part of the problem with achieving this has actually been the aforementioned rapid progress of classical computing. Given Moore's Law type of progress, yielding exponentially improving classical computational power, it's been a constantly shifting bar of what we need to achieve to be convincing.

+ +
+

Quantum computers don't show similar progress. Au contraire, it looks like they haven't even got off the ground.

+
+ +

The point is, it's hard to do, and it's taken a long time to get the basic technology right. This is a slightly imperfect comparison, but it's not too bad: think about the lithography processes that are used for making processors. Their development has been progressive, making smaller and smaller transistors, but progress has been slowing as it's got harder and harder to deal with, for one, the quantum effects that are getting in the way. Quantum computers, on the other hand, are essentially trying to step over that whole progressive improvement thing and jump straight to the ultimate, final, result: single atom transistors (kind of). Perhaps that gives some level of insight into what the experimentalists are trying to deal with?

+ +
+

You won't be buying one in PC World any time soon. Will you ever be able to?

+
+ +

It's not clear that you'd even want to. At the moment, we expect quantum computers to be useful for certain, very specific tasks. In that case, we perhaps envisage a few powerful centralized quantum computers that do those specific jobs, and most people will keep going with classical computers. But, since you want to draw analogies with the development of classical computers, then (according to Wikipedia) it is in 1946 that Sir Charles Darwin (grandson of the famous naturalist), head of Britain's National Physical Laboratory, wrote:

+ +
+

it is very possible that ... one machine would suffice to solve all the problems that are demanded of it from the whole country

+
+ +

(variants of this are attributed to people like Watson). This very clearly is not the case. The reality was that once computers became widely available, further uses were found for them. It might be the same for quantum computers, I don't know. One of the other reasons that you wouldn't buy a quantum computer in a shop is its size. Well, the actual devices are usually tiny, but it's all the interfacing equipment and, especially, cooling that takes up all the space. As the technology improves, it'll be able to operate at progressively higher temperatures (look, for example, at the progress of high temperature superconductivity compared to the original temperatures that had to be achieved) which will reduce cooling requirements.

+",1837,,,,,6/30/2018 10:58,,,,0,,,,CC BY-SA 4.0 +2544,2,,2499,6/30/2018 11:27,,19,,"

TL,DR: Engineering and physics arguments have already been made. I add a historical perspective: I argue that the field of quantum computation is really only a bit more than two decades old and that it took us more than three decades to build something like the MU5.

+ +
+ +

Since you mention the timeline, let's have a closer look:

+ +

The beginnings

+ +

First of all, the mere possibility of something like a quantum computer was voiced by Richard Feynman in the west (1959 or 1981 if you wish) and Yuri Manin in the east (1980). But that's just having an idea; no implementation started.

+ +

When did similar things happen with classical computing? Well, a very long time ago. Charles Babbage, for instance, already wanted to build computing machines in the early 19th century and he already had ideas. Pascal, Leibniz, they all had ideas. Babbage's analytical machine of 1837, which was never built due to funding and engineering challenges (by the way, the precursor of the analytical machine was built with Lego), is definitely the most recent first idea that is already way ahead of what Feynman and Manin proposed for quantum computing, because it proposes a concrete implementation.

+ +

The '70s don't see anything related to a quantum computer. Some codes are invented, some theoretical groundwork is done (how much information can be stored?), which is necessary for qc, but it's not really pursuing the idea of the quantum computer.

+ +

Codes and communication-related ideas are to quantum computation what telephones and telegraph wires are to classical computing: an important precursor, but not a computer. As you know, Morse codes and telegraphs are technologies of the 19th century and more difficult codes for noisy channels were also studied. The mathematical groundwork (in terms of no-go-theorems and the like) was done in 1948 by Shannon.

+ +

Anyway, it can be argued that punch card computing was developed in 1804 for weaving, but I don't want to claim that this was really the beginning of the classical computation.

+ +

Universal (quantum) computers

+ +

So when did computation start? I'm going to argue that you need a number of things to get research for universal computing off the ground; before that, the number of people and money invested there will be limited.

+ +
    +
  1. You need the notion of a universal computer and a theoretical model of what to achieve.
  2. +
  3. You need an architecture of how to implement a universal computer - on a theoretical level.
  4. +
  5. You need a real-life system where you could implement it.
  6. +
+ +

When do we get those in quantum computation?

+ +
    +
  • Deutsch describes the universal quantum computer in 1985 (33 years ago).
  • +
  • Circuit models and gates are developed around the same time.
  • +
  • The first complete model of how to put everything together was proposed by Cirac and Zoller in 1994 (merely 24 years ago).
  • +
+ +

All the other advances in quantum computation before or during that time were limited to cryptography, quantum systems in general or other general theory.

+ +

What about classical computation?

+ +
    +
  • We have Turing's work on Turing machines (1936) or Church's work (same time frame).
  • +
  • Modern architectures rely on von Neumann's model (1945); other architectures exist.
  • +
  • As a model, the digital circuit model was designed in 1937 by Shannon.
  • +
+ +

So, in 1994 we are in a comparable state to 1937:

+ +
    +
  • There are a few people doing the theoretical groundwork, and the groundwork has now been done.
  • +
  • There are a fair number of people doing engineering work on foundational issues not directly related but very helpful for building a (quantum) computer.
  • +
  • And the field is generally not that big and well-funded.
  • +
  • But: from that date, funding and people start pouring into the field.
  • +
+ +

The field is taking off

+ +

For classical computing, this is illustrated by the amount of different ""first computer systems"" in the Wikipedia timeline. There were several research groups at least in Germany, England, and the United States in several locations (e.g. Manchester and Bletchley Park in the UK, to name just a few). War-time money was diverted to computing because it was necessary for e.g. the development of the nuclear bomb (see accounts at Los Alamos).

+ +

For quantum computation, see e.g. this comment:

+ +
+

The field of QIS began an explosive growth in the early to mid-1990s as a consequence of several simultaneous stimuli: Peter Shor demonstrated that a quantum computer could factor very large numbers super-efficiently. The semiconductor industry realized that the improvement of computers according to Moore’s law would all too soon reach the quantum limit, requiring radical changes in technology. Developments in the physical sciences produced trapped atomic ions, advanced optical cavities, quantum dots, and many other advances that made it possible to contemplate the construction of workable quantum logic devices. Furthermore, the need for secure communications drove the investigations of quantum communication schemes that would be tamper proof.

+
+ +

All in all, from the time that the theoretical groundwork of modern computers had been laid to the time that the first computers were available (Zuse 1941, Manchester 1948, to name just two), it took about a decade. Similarly, it took about a decade for the first systems doing some sort of universally programmable calculation with quantum systems. Granted, their capabilities are lower than those of the first Manchester computers, but still.

+ +

Twenty years later, we are seeing explosive growth in technology, and a lot of firms are getting involved. We also see the advent of new technologies like the transistor (first discovered in 1947).

+ +

Similarly, 20 years after the beginning of quantum computation we see the serious entrance of private companies into the field, with Google, IBM, Intel, and many others. When I was at my first conference in 2012, their involvement was still academic, today, it is strategic. Similarly, we saw a proposition of a wealth of different quantum computing systems during the 2000s such as superconducting qubits, which form the basis of the most advanced chips from the three companies mentioned above. In 2012, nobody could claim to have a somewhat reliable system with more than a couple of physical qubits. Today, only six years later, IBM lets you play with their very reliable 16 qubits (5 if you really only want to play around) and Google claims to test a 72 qubit system as we speak.

+ +

Yes, we still have some way to go to have a reliable large-scale quantum computer with error-correction capabilities, and the computers we currently have are weaker than the classical computers we had in the '60s, but I (as others explain in other answers) believe this is due to the unique engineering challenges. +There is a small chance that it's due to physical limitations we have no idea about, but if it is, given current progress, we should know in a couple of years at the latest.

+ +

What's my point here?

+ +
    +
  • I argued that the reason that we don't see an MU5 quantum computer yet is also due to the fact that the field is just not that old, yet, and hasn't really achieved that much attention until recently.
  • +
  • I argue that from a present-day perspective, it seemed that classical computers became very good very quickly, but that this neglects decades of prior work where development and growth didn't seem as fast.
  • +
  • I argue that if you believe (as almost everybody in the field does) that the initial engineering problems faced by quantum computers are harder than those faced by classical computers, then you see a research and innovation trajectory very much comparable to that of classical computers. Of course, they are somewhat different, but the basic ideas of how it goes are similar.
  • +
+",1854,,7429,,6/21/2019 8:23,6/21/2019 8:23,,,,0,,,,CC BY-SA 4.0 +2546,2,,2525,6/30/2018 13:43,,8,,"

The definition you give for a graph state, and in particular the quantum Fourier transform $F$ and the controlled-$Z$ operator — where we take $ Z $ to be the unitary generalisation of the Pauli $Z$ operator, satisfying $Z = F XF^\dagger$ for $X$ a shift-by-one permutation operation — are all well-defined even in composite dimension. The Fourier transform is certainly an operation of interest for arbitrary definition; the controlled-$Z$ operation is still diagonal and unitary, and still has the relevant connections to $F$ as a tensor; there is nothing about the mathematical objects themselves which become troublesome in composite dimension.

+ +
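
These objects are easy to write down explicitly in any dimension. A small numpy sketch (my own illustration, assuming the convention $F_{jk} = \omega^{jk}/\sqrt{d}$ with $\omega = e^{2\pi i/d}$) verifying that $Z = FXF^\dagger$ still holds in a composite dimension such as $d = 6$:

```python
import numpy as np

d = 6                                    # a composite dimension
w = np.exp(2j * np.pi / d)

F = np.array([[w**(j * k) for k in range(d)] for j in range(d)]) / np.sqrt(d)
X = np.roll(np.eye(d), 1, axis=0)        # shift-by-one permutation: X|k> = |k+1 mod d>
Z = np.diag([w**j for j in range(d)])    # generalised Pauli Z (clock operator)

print(np.allclose(Z, F @ X @ F.conj().T))  # the relation holds for any d
```

+ +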

The reason why you see so much emphasis on prime dimension is essentially that composite dimension qudits are inconvenient to analyse. The reasons for this arise from number theory: particularly in the fact that in composite dimension one must worry about zero divisors. Frankly, there aren't many in the field who think of themselves as number theorists, and very few researchers (either among the authors or the readers of articles) have much patience for number systems which are not fields such as the well-loved examples of $\mathbb C$, $\mathbb R$, $\mathbb Q$, and of course the integers modulo a prime $p$, $\mathbb Z_p$. For this reason, you will rarely see references to qudits of composite dimension anywhere in the field. Even when you do, the major concern of mathematical convenience will usually motivate some other restriction.

+ +

Quantum information theory may occasionally make use of number theory, and pure mathematics in general, but make no mistake: this field does not have much overlap with the priorities of pure mathematics. If a definition has been presented in a way that looks strangely restricted, it's reasonably likely that it's because it allows a result to be shown which would be much more challenging, or even just a little bit more awkward, to prove without that restriction — and it is considered more important to publicise examples of striking-sounding results than to present reasonably complete mathematical theories.

+",124,,124,,6/30/2018 17:38,6/30/2018 17:38,,,,0,,,,CC BY-SA 4.0 +2549,1,2563,,6/30/2018 16:27,,2,127,"

I have had a few ideas for circuits that I would like to get feedback on (do such things already exist, what utility they serve, etc).

+ +

Any suggestions on how I might graphically render these?

+ +

Example:

+ +

+ +

Here i is the input & o is the output. The slash directly to the right of the input is a 50/50 beam splitter. The remaining slashes are mirrors.

+ +

If a pulse were directed in, the output would first receive 50% of the beam. The other half of the beam would pass through into the reflective chamber. Upon completion of the loop, the beam would again reach the splitter. Again, half the beam (25%) would pass to the output while the other half (25%) looped through the circuit.

+ +

This process could be run through $n$ times until some desired outcome is reached.

+ +

Does this already exist? What is the utility (if any)?

+",2645,,2645,,07-01-2018 16:03,07-02-2018 05:47,Photonic Circuit Idea (Does this already exist?),,1,3,,,,CC BY-SA 4.0 +2550,2,,2483,6/30/2018 18:25,,7,,"

Contrary to DaftWullie's answer, it is possible to implement a CNOT gate in a photonic system with 100% efficiency.

+ +

However, there are caveats to this - it depends on what's used as the qubits (or, as this is a photonic system, potentially qudits) in the system.

+ +
+ +

KLM: A photon as a qubit

+ +

The first thing that most people think of in terms of photonic qubits is polarisation. In this case, postselection (/heralding) is generally required. This was theoretically shown to be possible by Knill, Laflamme and Milburn (KLM) in 2001. Within a couple of years, the first probabilistic photonic CNOT gate was shown by O'Brien et. al. (arXiv version) in an equivalent scheme, as shown in figure 1.

+ +

+ +

Figure 1: Circuit diagram of probabilistic 2-photon CNOT gate. Each photon (control and target) is encoded from polarisation to spatial modes. After postselecting on a single photon in $C_{\text{out}}$ and a single photon in $T_{\text{out}}$, when the control photon, $C$, is in spatial mode $C_0$, the identity operation is performed on the target photon, $T$, while when $C$ is in $C_1$, the NOT (X) operation is performed on $T$ which has a probability of success of 1/9. Uses beamsplitters. Image taken from Figure 1a of O'Brien et. al.

+ +

One variation of this is to use a nonlinear phase shift to make a deterministic version of this.

+ +

While the above may not sound overly great for the prospects of optical quantum computing, encoding a qubit as polarisation/2 spatial modes of a photon is far from the only way to perform optical quantum computing.

+ +
+ +

Reck: Many modes makes... Many dimensions

+ +

One other such method was proposed before KLM by Reck et. al. (shown in figure 2) and has since been improved upon by Clements et. al. In this a single photon is encoded in some number, $d$, of spatial modes. This is equivalent to a $\log_2 d$ qubit system and can be used to implement any unitary. For a 2-qubit system, this is equivalent to having 4 spatial modes labelled $\left|00\right>, \left|01\right>, \left|10\right>$ and $\left|11\right>$ and a CNOT operation is equivalent to swapping the bottom 2 $\left(\left|10\right> \text{ and } \left|11\right>\right)$ modes.

+ +

+ +

Figure 2: Image of a 6-mode Reck scheme chip, which can be used to implement a deterministic 'CNOT' gate. Uses phase shifters and beam splitters to build up a unitary evolution over the modes of the system. Image taken from Figure 1 of Carolan et. al.

+ +
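
In this encoding the 'CNOT' really is just a relabelling of modes; a tiny numpy check (my own addition) that swapping the bottom two of four modes reproduces the standard CNOT matrix:

```python
import numpy as np

# standard 2-qubit CNOT in the basis |00>, |01>, |10>, |11>
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

# permutation of four spatial modes that swaps the bottom two
swap_modes = np.eye(4)[[0, 1, 3, 2]]

print(np.allclose(CNOT, swap_modes))
```

+ +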

Of course, it's not quite that simple and, due to requiring an exponential number of modes, the Reck scheme isn't generally considered to be overly scalable.

+ +
+ +

That leaves us with the (final) two options1: nonlinear optics (continuous variable) and measurement based quantum computing

+ +

Continuous variable: Just keep squeezing

+ +

As detailed in my answer here, continuous variable QC also offers a universal gateset which can be used to make arbitrary unitaries, in theory at least. Unfortunately, as more squeezing is still required, an experimental realisation of this is yet to occur.

+ +

And now for something completely different: Measurement based

+ +

Another scheme that hasn't been experimentally achieved, yet shows potential, is measurement based QC. Instead of performing CNOT gates during unitary evolution that defines a circuit, the entangling operations occur as part of the state preparation of the system. As per Ewert and Loock (arXiv version) the current idea of doing this involves generating small clusters of entangled photons, then entangling these into larger clusters using fusion gates, as shown in figure 3.

+ +

+ +

Figure 3: Diagram of a 75% efficient fusion gate. Inputting the state $\left|\Upsilon_1\right> = \frac{1}{\sqrt 2}\left(\left|20\right> + \left|02\right>\right)$ allows for the detection of higher dimensional states. These can then be cascaded to detect larger and larger cluster states. The probabilistic measurement is equivalent to an entangling operation, similar to a CNOT gate. Image taken from Figure 1 of Ewert and Loock.

+ +
+ +

1 Although there are a number of variations of the different schemes used and work is constantly being done to improve upon them

+",23,,,,,6/30/2018 18:25,,,,2,,,,CC BY-SA 4.0 +2552,2,,2499,6/30/2018 19:11,,49,,"
+

Is quantum computing just pie in the sky?

+
+

So far it is looking this way. We have been reaching for this pie aggressively over the last three decades but with not much success. We do have quantum computers now, but they are not the pie we wanted, which is a quantum computer that can actually solve a useful problem faster or with better energetic efficiency than a classical computer.

+
+

You won't be buying one in PC World any time soon.

+

Will you ever be able to?

+
+

We cannot predict the future, but if I had to guess right now, I would say "no". There is not yet any application for which quantum computing would be valuable enough. Instead we might have quantum computers at a small number of special institutes where very special calculations are done (like the supercomputer called Titan at Oak Ridge National Lab, or like a cyclotron particle accelerator where special experiments are done).

+
+

Is it all hype and hot air?

+
+

Most of it is hype, unfortunately.
+But applications in quantum chemistry can indeed be game changing. Instead of doing physically laborious experiments on thousands of candidate molecules for a medicine or fertilizer, we can search for the best molecules on a computer. Molecules behave quantum mechanically, and simulating quantum mechanics is not efficient on classical computers, but is on quantum computers. Much of Google's investment in QC is for chemistry applications [1].

+
+

Is it all just jam-tomorrow woo peddled by quantum quacks to a gullible public? If not, why not?

+
+

Much of it is, unfortunately.

+

You were probably one of the more talented students in your class at Manchester University. You might have noticed that there were only a few of you and a larger number of mediocre and sub-mediocre students. There is a similar phenomenon at the professor level. Many professors don't find it easy or "natural" to write well-received grant proposals, but they need funding to keep their job, and to make sure their PhD students aren't starved of experiencing scientific conferences and having access to the software they need.

+

When a professor becomes:

+
    +
  • desperate for funding, or
  • +
  • caught up with other problems in life, such as having to take care of a child with cancer, or
  • +
  • aware that they won't make huge scientific discoveries like some scientists did 100s of years ago,
  • +
+

life becomes more about surviving, keeping a happy family, and doing what they enjoy rather than making a better world for their grandchildren's grandchildren. As a professor, I can tell you that many of my colleagues are not as "noble" as the public often perceives scientists to be.

+

I know around 1000 people with funding to work in quantum computing, and not a single one seems to have ill intentions to fool a "gullible public" in some sinister way. Most of us just apply for grants available through our universities or through our governments, and we don't intend to exaggerate the importance of our work any more than other scientists competing for the same money (we have to compete with molecular physicists pretending their work is important for fixing climate change just because the molecule they're working on is in our atmosphere, or biophysicists pretending their work might cure cancer just because they're working on a molecule that's prominent in the body).

+

A lot of the "hype" around quantum computing comes from the media. Journalists have twisted the contents of my papers to make eye-catching headlines which will get more clicks on their ads, and their bosses give them pressure to do this or they'll lose their job to the other intern that doesn't care as much about being honest.

+

Some of the hype does come from scientists themselves, many who truly believe quantum computing will be revolutionary because their PhD supervisor didn't have a great education (remember that Manchester University is one of the best in the world, and most universities are not even close), or perhaps in rare cases there is hype from people desperate for funding, but not much for reasons other than these.

+

I do believe the public should invest a bit in quantum computing, as they do for lots of other areas of research which have no guaranteed positive outcome. The hype is often exaggerated by journalists, ignorant scientists, or non-ignorant scientists who think they need it for survival. There is also unfairly harsh criticism from journalists and funding agencies.

+

Nothing you said in your question is wrong.
+I have just given some reasons why your statements are correct.

+",2293,,-1,,03-10-2022 03:30,03-10-2022 03:30,,,,0,,,,CC BY-SA 4.0 +2553,1,2575,,6/30/2018 19:36,,7,1000,"

I have the following operation in my .qs files:

+ +
operation myOp(qubits: Qubit[]) : () {
+     // uses elements from the qubit array        
+ }
+
+ +

How do I send an array of qubits to this in the driver file? +The following did not work:

+ +
Qubit[] qubits = new Qubit[2];
+myOp.Run(sim, qubits);
+
+ +

I got the following error:

+ +
Driver.cs(13,32): error CS1503: Argument 2: cannot convert from 'Microsoft.Quantum.Simulation.Core.Qubit[]' to 'Microsoft.Quantum.Simulation.Core.QArray<Microsoft.Quantum.Simulation.Core.Qubit>' [/home/tinkidinki/Quantum/Warmup_Contest/BellState/BellState.csproj]
+
+The build failed. Please fix the build errors and run again.
+
+ +

Also, as an aside: Would such a question be more suitable for this site, or for stack overflow?

+",2832,,26,,03-12-2019 09:04,03-12-2019 09:04,How do you send an array of qubits to an operation in Q#?,,3,2,,,,CC BY-SA 4.0 +2554,1,2571,,6/30/2018 20:30,,6,267,"

If quantum computers advance to the point where they can defeat RSA, DSA, SHA (and really all existing classical public key encryption and authentication), then it appears that it would be impossible to make secure transactions on the internet.

+ +

It would be impossible to maintain the security of user accounts for social media, amazon, eBay, online banking, etc. It seems that the economic repercussions of this would be catastrophic on a global scale.

+ +

What measures can be taken against attacks on cryptosystems by quantum computers?

+ +

At least for now, I see a big problem with giving an answer that involves saying we could just use quantum encryption algorithms. The main reason is that in order for the encryption to be effective, the end users would have to be in possession of a quantum encrypt/decrypt device. Not a problem for a bank or Amazon on their end, but a big problem for a guy trying to order a pizza on his smart phone.

+ +

If end users were not actually in possession of a small quantum computer, and instead used a cloud based service to access a quantum device an attacker could just target the last segment of the transaction (between a cloud service and their device).

+ +

For end users to possess quantum crypto devices one would need to bring the cost down to a few hundred dollars max or the average person would not be able to afford it. Right now most quantum systems are priced in the hundreds of thousands or millions of dollars range.

+ +

Also, all of the viable quantum systems I have seen run near absolute zero. I don't know of anyone who makes a dilution refrigerator the size of an AA battery. So you couldn't perform transactions on portable devices.

+ +

Is the only option then to classify all quantum crypto research until these problems can be solved?

+",2866,,2866,,07-01-2018 13:58,07-02-2018 09:33,What measures can be taken against attacks on cryptosystems by quantum computers other than just classifying research?,,2,0,,,,CC BY-SA 4.0 +2555,2,,2521,6/30/2018 20:44,,5,,"

Let's recap a bit:

+ +

In classical information theory, the analogous formula is the Shannon noisy channel coding theorem. It's charming, because it is basically just a very simple optimization of the mutual information.

+ +

The quantum channel capacity is given by

+ +

$$ \lim\limits_{n\to\infty} \frac{1}{n}Q(T^{\otimes n}) $$

+ +

where $T$ is the quantum channel in question and $Q$ is the coherent information.

+ +

Now let's try to answer your question:

+ +

Obviously, we'd want a formula that doesn't depend on $n$ just like in the classical case. The problem is: It's known that such an expression cannot exist (see https://arxiv.org/abs/quant-ph/9706061). It gets worse: You could hope that there is a maximal $n$ after which you at least know that the capacity is zero. But that's false (published recently: Unbounded number of channel uses may be required to detect quantum capacity (Cubitt et al., 2015)).

+ +

In addition, if you use two different channels, their capacities can both be zero while the capacity when they are used together is larger than zero (see arXiv:0807.4935), which makes it even more difficult to imagine simple formulas for channels.

+ +

This implies that the best we can hope for is formulas for particular channels or particular subclasses of channels. There are a few results scattered throughout the literature. For instance, the capacities of certain Gaussian bosonic channels are known (arXiv:quant-ph/0606132).

+ +
+ +

However, please note that the quantum capacity is only one of many capacities defined and it's not necessarily the most interesting one. A different capacity which is often discussed in the literature is the classical capacity of a quantum channel (i.e. how much classical information can I send over a quantum channel?).

+ +

Let me point you to a recent review providing a lot of pointers to the literature: arXiv:1801.02019.

+",1854,,26,,5/13/2019 15:26,5/13/2019 15:26,,,,1,,,,CC BY-SA 4.0 +2557,2,,2554,6/30/2018 21:01,,-2,,"

We can always make larger and larger keys in RSA. If a quantum computer can factor numbers up to RSA-4096, then use RSA-131072, or better yet, some elliptic curve key big enough to be safe against the next 10 years of quantum computing hardware.

+ +

Or don't use public key cryptography but instead use standard passwords, where the cost for a classical computer to break them is $M^n$ for $M$ characters and a password of length $n$, and the quantum computer's cost is at best $\mathcal{O}(M^{0.5n})$, which is still not much better.
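To make the scaling concrete, here is a back-of-the-envelope sketch (my own illustration; it ignores constant factors and hardware realities) comparing the two search costs:

```python
import math

# Brute-force cost (number of guesses) for a password over an alphabet
# of M characters and length n.  Illustrative only.

def classical_cost(M, n):
    # Exhaustive search tries up to M**n candidates.
    return M ** n

def grover_cost(M, n):
    # Grover's algorithm needs on the order of sqrt(M**n) oracle queries.
    return math.isqrt(M ** n)

# Example: a 12-character password over a 64-character alphabet.
print(classical_cost(64, 12))  # 4722366482869645213696
print(grover_cost(64, 12))     # 68719476736 (= 64**6)
```

Doubling the password length restores the original security margin against a Grover-type attack.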

+",2293,,2293,,07-02-2018 05:42,07-02-2018 05:42,,,,3,,,,CC BY-SA 4.0 +2558,1,2560,,6/30/2018 22:51,,4,551,"

From a 9×9 Hamiltonian lying in a 9D space, I choose a certain 4D subspace for designing a two-qubit gate. Now the original unitary time evolution operator also lies in the 9D space and is a 9×9 matrix. For the unitary time evolution operator to act on the two-qubit gate built from the 4D subspace, it is required to project the unitary time evolution operator onto the 4D subspace. After reviewing the literature, I came across an article doing the same thing with the use of a projection operator.

+ +

My question- How to find the projection operator on the subspace?

+ +

Also, I guess the projection operator will be a 4×4 matrix, so how will it act on the unitary time evolution operator, which is a 9×9 matrix?

+ +

P.S.- I took the definition of projection operator from ""Quantum Computation and Quantum Information, Isaac Chuang and Michael Nielsen"".

+ +

+",2817,,26,,12/23/2018 13:31,12/23/2018 13:31,Projection operator on Time evolution Operator,,1,5,,,,CC BY-SA 4.0 +2560,2,,2558,07-01-2018 08:46,,2,,"

A $9\times 9$ matrix $H$ can act on a $9$ dimensional state vector, say something like:

+ +

$$|\Psi\rangle = a_0|0\rangle + a_1|1\rangle + .... + a_8|8\rangle$$

+ +

Now, say you want to find the matrix which only acts on the subspace spanned by the basis $\{|0\rangle,|1\rangle,|2\rangle,|3\rangle\}$, but has the same effect as the original $H$ matrix.

+ +

Find $H|0\rangle$, $H|1\rangle$, $H|2\rangle$ and $H|3\rangle$ (where $|0\rangle = [1 \ 0 \ 0 \ 0 \ 0 \ 0 \ 0\ 0 \ 0 ]^{T}$, $|1\rangle = [0 \ 1 \ 0 \ 0 \ 0 \ 0 \ 0\ 0 \ 0 ]^{T}$, $|2\rangle = [0 \ 0 \ 1 \ 0 \ 0 \ 0 \ 0\ 0 \ 0 ]^{T}$ and $|3\rangle = [0 \ 0 \ 0 \ 1 \ 0 \ 0 \ 0\ 0 \ 0 ]^{T}$)

+ +

Then find the unique scalars $\alpha_{j,i}$ such that

+ +

$H|0\rangle = \alpha_{0,0}|0\rangle + \alpha_{1,0}|1\rangle + \alpha_{2,0}|2\rangle+ \alpha_{3,0}|3\rangle$,

+ +

$H|1\rangle = \alpha_{0,1}|0\rangle + \alpha_{1,1}|1\rangle + \alpha_{2,1}|2\rangle + \alpha_{3,1}|3\rangle$,

+ +

$H|2\rangle = \alpha_{0,2}|0\rangle + \alpha_{1,2}|1\rangle + \alpha_{2,2}|2\rangle + \alpha_{3,2}|3\rangle$ and

+ +

$H|3\rangle = \alpha_{0,3}|0\rangle + \alpha_{1,3}|1\rangle + \alpha_{2,3}|2\rangle + \alpha_{3,3}|3\rangle$

+ +

Now, your required $4\times 4$ matrix is $\begin{bmatrix}\alpha_{0,0} & \alpha_{0,1} & \alpha_{0,2} & \alpha_{0,3}\\\alpha_{1,0} & \alpha_{1,1} & \alpha_{1,2} & \alpha_{1,3} \\ \alpha_{2,0} & \alpha_{2,1} & \alpha_{2,2} & \alpha_{2,3} \\ \alpha_{3,0} & \alpha_{3,1} & \alpha_{3,2} & \alpha_{3,3}\end{bmatrix}$
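This construction can be checked numerically. The sketch below (my own illustration; the random Hermitian matrix stands in for your actual Hamiltonian) builds the $9\times 4$ matrix $P$ whose columns are $|0\rangle,|1\rangle,|2\rangle,|3\rangle$, so the restricted matrix of $\alpha_{j,i}$ entries is just $P^{\dagger}HP$:

```python
import numpy as np

# Restrict a 9x9 Hamiltonian H to the 4D subspace spanned by the first
# four computational basis vectors.  The columns of the 9x4 matrix P are
# |0>, |1>, |2>, |3>; the 4x4 restriction is P^dagger H P.
rng = np.random.default_rng(0)
A = rng.standard_normal((9, 9)) + 1j * rng.standard_normal((9, 9))
H = (A + A.conj().T) / 2        # random Hermitian stand-in for H

P = np.eye(9)[:, :4]            # columns are |0>, |1>, |2>, |3>
H_sub = P.conj().T @ H @ P      # the 4x4 matrix of alpha_{j,i} entries

print(H_sub.shape)                         # (4, 4)
print(np.allclose(H_sub, H[:4, :4]))       # True: top-left 4x4 block of H
print(np.allclose(H_sub, H_sub.conj().T))  # True: restriction stays Hermitian
```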

+ +

Let me know if you're confused about any particular step.

+",26,,26,,07-01-2018 09:56,07-01-2018 09:56,,,,1,,,,CC BY-SA 4.0 +2561,1,2583,,07-01-2018 13:05,,5,483,"

A beginner question after watching few videos.

+ +

Say, var=a; var can be either of two values, a or b. Check what is the value of var, using Q#, QISKit or similar.

+ +

Any help/idea?

+",2884,,26,,07-01-2018 13:10,6/22/2020 13:38,Checking value of variable using quantum approach,,2,0,,,,CC BY-SA 4.0 +2562,1,3798,,07-01-2018 13:34,,9,269,"

This question is a follow-up to the previous QCSE question: ""Are qudit graph states well-defined for non-prime dimension?"". From the question's answer, it appears that there is nothing wrong in defining graph states using $d$-dimensional qudits, however, it seems that other definitional aspects of graph-states do not similarly extend to non-prime dimension.

+ +

Specifically, for qubit graph states, one key aspect to their prevalence and use is the fact that: any two graph states are local Clifford equivalent if and only if there is some sequence of local complementations that takes one graph to the other (for simple, undirected graphs). Needless to say, this is an incredibly useful tool in analyses of quantum error correction, entanglement and network architectures.

+ +

When considering $n$-qudit graph states, the equivalent graph is now weighted with adjacency matrix $A \in \mathbb{Z}_d^{n \times n}$, where $A_{ij}$ is the weight of edge $(i,j)$ (with $A_{ij}=0$ indicating no edge exists). +In the qudit case, it was shown that LC equivalence can similarly be extended by the generalisation of local complementation ($\ast_a v$) and inclusion of an edge multiplication operation ($\circ_b v$), where: +\begin{align} +\ast_a v &: A_{ij} \mapsto A_{ij} + aA_{vi}A_{vj} \quad \forall\;\; i,j \in N_G(v), \;i \neq j \\ +\circ_b v &: A_{vi} \mapsto bA_{vi} \quad\forall\;\; i \in N_G(v), +\end{align} +where $a, b = 1, \ldots, d-1$ and all arithmetic is performed modulo $d$.

+ +

Graphically, this is represented by the following operations (reproduced from Ref. 2): +

+ +

However, if the graph state is defined on qudits of non-prime dimension, then we can see these operations (seem to) fail to represent LC-equivalence.

+ +

For example, take the qudit state $|G\rangle$ depicted by the graph $G$ in Fig. 1, defined for qudit dimension $d=4$, and let $x=y=z=2$, such that $A_{12}=A_{13}=A_{14}=2$. +In this case, performing $\circ_2 1$ gives $A_{1i} \mapsto 2 \times 2 = 4 \equiv 0 \bmod 4 \;\forall\; i$, and hence qudit $1$ is disentangled from all other qudits using only local operations. +Clearly this is wrong and occurs because of the problem of zero divisors mentioned in the previous question's answer.
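To see this failure concretely, the $\circ_b v$ operation can be written directly on the weighted adjacency matrix (the helper below is my own sketch, not from the cited papers; vertex $1$ of Fig. 1 is index $0$ here):

```python
import numpy as np

# Edge multiplication o_b v on a weighted adjacency matrix, mod d.
def edge_multiply(A, v, b, d):
    A = A.copy()
    A[v, :] = (b * A[v, :]) % d   # scale all edges incident to v ...
    A[:, v] = (b * A[:, v]) % d   # ... keeping the matrix symmetric
    return A

d = 4
# Star graph on 4 qudits: vertex 0 joined to vertices 1, 2, 3
# with weights x = y = z = 2, as in the d = 4 example above.
A = np.zeros((4, 4), dtype=int)
A[0, 1:] = A[1:, 0] = 2

A2 = edge_multiply(A, 0, 2, d)
print(np.all(A2 == 0))  # True: 2*2 = 4 = 0 (mod 4), vertex 0 disconnects
```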

+ +

My question is: is there any set of graph operations that properly represent local Clifford equivalence for qudit graph states of non-prime dimension?

+ +

Note: I am primarily interested in operations that directly apply to a state's representation as a single weighted graph, rather than possible decompositions into multiple prime-dimensional graph states, as suggested in Sec. 4.3 of ""Absolutely Maximally Entangled Qudit Graph States"".

+",391,,391,,07-01-2018 13:40,7/19/2018 11:21,Does local Clifford equivalence have a direct graphical representation for qudit graph states of non-prime dimension?,,1,1,,,,CC BY-SA 4.0 +2563,2,,2549,07-01-2018 16:39,,2,,"

To answer your first, general question: Optical circuits are usually drawn with a selection of conventional symbols, a directory of which can be found here.

+ +

With respect to that specific circuit, if a single photon was input, the output would produce a state in a decaying superposition of subsequent time bins. If you choose a temporal basis for your output photons $|t_k\rangle$, taking $|t_0\rangle$ as the time bin of the first pass through the beamsplitter and $|t_1\rangle$ as the time bin of the next pass, etc, then the output state will be given by +$$ +\sum_{k=0}^\infty \left(\frac{e^{-i\phi}}{\sqrt{2}}\right)^{k+1}|t_k\rangle, +$$ +where $\phi$ is the phase shift imparted on each photon after a single pass of the loop.

+ +

Alternatively, if you only care about the input and output photons, this can more generally be represented by the Bogoliubov transformation on the optical mode operators +$$ +a^\dagger_t \mapsto \sum_{k=0}^\infty \left(\frac{e^{-i\phi}}{\sqrt{2}}\right)^{k+1} b^\dagger_{t+k} +$$

+ +

where $a_t^\dagger$ and $b^\dagger_t$ are the creation operators for photons in time bin $t$ in the optical input and output modes respectively.
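As a quick sanity check (my own sketch), the output amplitudes above are properly normalised: $\left|\left(e^{-i\phi}/\sqrt{2}\right)^{k+1}\right|^2 = 2^{-(k+1)}$, and this geometric series sums to $1$. Exact arithmetic confirms the partial sums:

```python
from fractions import Fraction

# Partial sums of the output probabilities: sum_{k=0}^{N-1} 2^{-(k+1)}
# equals 1 - 2^{-N}, which tends to 1 as N grows.
N = 60
total = sum(Fraction(1, 2) ** (k + 1) for k in range(N))
print(total == 1 - Fraction(1, 2) ** N)  # True
```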

+ +

Personally, I haven't seen this sort of state before and don't know any particular use for it. However, there may be some use for it in some sort of strange loop-based architecture for linear optical quantum computing, although I would doubt it.

+",391,,391,,07-02-2018 05:47,07-02-2018 05:47,,,,3,,,,CC BY-SA 4.0 +2564,1,,,07-01-2018 17:07,,6,140,"

This is a sequel to How are two different registers being used as "control"?

+ +

I found the following quantum circuit given in Fig 5 (page 6) of the same paper i.e. Quantum Circuit Design for Solving Linear Systems of Equations (Cao et al.,2012).

+ +

+ +

In the above circuit $R_{zz}(\theta)$ is $\left(\begin{matrix}e^{i\theta} & 0 \\ 0 & e^{i\theta}\end{matrix}\right)$

+ +

As @DaftWullie mentions here:

+ +
+

Frankly, I've no chance of getting there because there's an earlier + step that I don't understand: the output on registers $L, M$ after Figure + 5. The circuit diagram and the claimed output don't match up (the claimed output being separable between the $L$ and $M$ registers, when + qubit $l−1$ of register $L$ should be entangled with those of register M.

+
+ +

I understand that after the Walsh-Hadamard transforms the state of register $L$ is $$\frac{1}{\sqrt{2^l}}\sum_{s=0}^{2^l-1}|s\rangle$$

+ +

and that of register $M$ is

+ +

$$\frac{1}{\sqrt{2^m}}\sum_{p=0}^{2^m-1}|p\rangle$$

+ +

But after that, I'm not exactly sure how they're applying the $R_{zz}$ rotation gates to get to $$\sum_s\sum_p|p\rangle \exp(i p/2^m t_0)|s\rangle$$

+ +

Firstly, are all the $R_{zz}$ gates acting on a single qubit i.e. the $l-1$th qubit in the register $L$? (Seems so from the diagram, but I'm not sure).

+ +

Secondly, it would be very helpful if some can write down the steps for how're they're getting to $\sum_s\sum_p|p\rangle \exp(i p/2^m t_0)|s\rangle$ using the controlled rotation gates.

+",26,,26,,12/23/2018 13:30,12/23/2018 13:30,How exactly is the stated composite state of the two registers being produced using the $R_{zz}$ controlled rotations?,,0,10,,,,CC BY-SA 4.0 +2565,1,2568,,07-01-2018 19:47,,22,5198,"

I've probably read the chapter The quantum Fourier transform and its applications from Nielsen and Chuang (10th anniversary edition) a couple of times before and took this for granted, but today, when I looked at it again, it doesn't seem obvious to me at all!

+ +

Here's the circuit diagram for the Phase estimation algorithm:

+ +

+ +

The first register having $t$ qubits is supposedly the ""control register"". If any qubit in the first register is in the state $|1\rangle$, the corresponding controlled unitary gate gets applied to the second register. If it is in the state $|0\rangle$, then it doesn't get applied to the second register. If it is in a superposition of the two states $|0\rangle$ and $|1\rangle$, the action of the corresponding unitary on the second register can be determined by ""linearity"". Notice that all the gates act only on the second register and none on the first register. The first register is supposed to be only a control.

+ +

However, they show that the final state of the first register as:

+ +

$$\frac{1}{2^{t/2}}\left(|0\rangle+\text{exp}(2\pi i + 2^{t-1}\varphi)|1\rangle)(|0\rangle+\text{exp}(2\pi i + 2^{t-2}\varphi)|1\rangle)...(|0\rangle+\text{exp}(2\pi i + 2^{0}\varphi)|1\rangle\right)$$

+ +

I'm surprised as to why we consider there to be a change in the state of the first register of qubits at all, after the action of the Hadamard gates. The final state of the first register should just have been

+ +

$$\left(\frac{|0\rangle+|1\rangle}{\sqrt 2}\right)^{\otimes t}$$

+ +

isn't it? I say this because the first register is supposed to be a control only. I don't understand how or why the state of the first register should change when acting as a control.

+ +

I initially thought that considering the exponential factors to be part of the first register qubit states was only a mathematical convenience, but then it didn't make sense. State of a qubit or a system of qubits shouldn't depend upon what is mathematically convenient to us!

+ +

So, could someone please explain why exactly the state of the first register of qubits changes, even when it simply acts as a ""control"" for the second register? Is it just a mathematical convenience or is there something deeper?

+",26,,26,,02-07-2019 08:07,5/29/2022 0:15,"Why does the ""Phase Kickback"" mechanism work in the Quantum phase estimation algorithm?",,3,6,,,,CC BY-SA 4.0 +2566,2,,2565,07-02-2018 05:19,,-1,,"

Great question.
+I once asked this too, but it is not just a matter of mathematical convenience.
+The controlled-U is an ""entangling"" gate.
+Once there's entanglement, you cannot separate the state into ""first register"" and ""second register"".
+Only think of these registers separately at the beginning, or when there's no entanglement. After there's entanglement, your best bet is to work through the mathematics (matrix multiplications) thoroughly, and you will indeed get the state given by Nielsen and Chuang.

+",,user2898,,,,07-02-2018 05:19,,,,4,,,,CC BY-SA 4.0 +2568,2,,2565,07-02-2018 06:31,,16,,"

Imagine you have an eigenvector $|u\rangle$ of $U$. If you have a state such as $|1\rangle|u\rangle$ and you apply controlled-$U$ to it, you get out $e^{i\phi}|1\rangle|u\rangle$. The phase isn't attached to a specific register, it's just an overall multiplicative factor.

+ +

Now let's use a superposition on the first register: +$$ +(|0\rangle+|1\rangle)|u\rangle\mapsto |0\rangle|u\rangle+e^{i\phi}|1\rangle|u\rangle +$$ +You can rewrite this as +$$ +(|0\rangle+e^{i\phi}|1\rangle)|u\rangle +$$ +so it appears on the first register, even though it was sort-of created on the second register. (Of course that interpretation isn't entirely true because it was created by a two-qubit gate acting on both qubits).

+ +

This step is at the heart of many quantum algorithms.

+ +

Why don't we write $|\Psi\rangle=|0\rangle|u\rangle+|1\rangle(e^{i\phi}|u\rangle)$ and just claim that it is not separable?

+ +

One can't just claim it, but must show it mathematically. For example, we can take the partial trace over the second qubit, +$$ +\text{Tr}_B(|\Psi\rangle\langle\Psi|_{AB})=\text{Tr}_B(|0\rangle\langle 0|\otimes |u\rangle\langle u|+|1\rangle\langle 0|\otimes e^{i\phi}|u\rangle\langle u|+|0\rangle\langle 1|\otimes |u\rangle\langle u|e^{-i\phi}+|1\rangle\langle 1|\otimes e^{i\phi}|u\rangle\langle u|e^{-i\phi}) +$$ +To take the partial trace, we pick a basis to sum over. For simplicity, let's pick $\{|u\rangle,|u^\perp\rangle\}$ where $\langle u|u^\perp\rangle=0$ and $\langle u|(e^{i\phi}|u\rangle=e^{i\phi}$. Then you get +$$ +\text{Tr}_B(|\Psi\rangle\langle\Psi|_{AB})=|0\rangle\langle 0|+e^{i\phi}|1\rangle\langle 1|+e^{-i\phi}|0\rangle\langle 1|+|1\rangle\langle 1| +$$ +This is rank 1 (and you can see the phase has appeared on the first register), so the state is not entangled. It is separable.
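The same conclusion can be reached numerically. In this sketch (the particular $\phi$ and the choice $|u\rangle = |0\rangle$ are arbitrary, for illustration only), the reduced state of the first qubit after the controlled-$U$ comes out rank 1, i.e. pure, with the phase sitting on the first register:

```python
import numpy as np

phi = 0.7
U = np.diag([np.exp(1j * phi), np.exp(-1j * phi)])  # |u> = |0> is an eigenvector
u = np.array([1, 0], dtype=complex)

# Controlled-U with the first qubit as control.
cU = np.block([[np.eye(2), np.zeros((2, 2))],
               [np.zeros((2, 2)), U]])

plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
psi = cU @ np.kron(plus, u)          # cU acting on (|0>+|1>)|u>/sqrt(2)

rho = np.outer(psi, psi.conj())
rho_A = np.trace(rho.reshape(2, 2, 2, 2), axis1=1, axis2=3)  # trace out qubit 2

print(np.linalg.matrix_rank(rho_A))   # 1, so the state is separable
expected = np.array([1, np.exp(1j * phi)]) / np.sqrt(2)
print(np.allclose(rho_A, np.outer(expected, expected.conj())))  # True
```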

+",1837,,1837,,07-02-2018 07:36,07-02-2018 07:36,,,,6,,,,CC BY-SA 4.0 +2569,2,,2524,07-02-2018 06:52,,1,,"
+

how (do) engineers detect that the state function did not collapse due to the environment while a calculation is performed?

+
+ +

In a sense, they don't. There's the possibility that you encode in an error correcting or error detecting code which permits the monitoring of errors. But really, before experimentalists want to run an algorithm, they've already done lots of tests on their system so they know how it works using a procedure called process tomography. They've checked in great detail that each of their gates performs as they expect them to, so they already have a very accurate prediction of how much error there will be.

+ +
+

how to detect that a calculation is finished?

+
+ +

You don't need to detect that it has finished. The computation is made up of a definite set of quantum gates. All you do is apply each gate sequentially. You know when you've finished implementing them, so you know when to look at the answer.

+ +
+

how is it even possible to check whether the experimenter produced a certain quantum state? Maybe all the bits have a totally different state right from the start and you just know about that afterwards.

+
+ +

Well, you can make sure you've got the correct starting state because you can measure it (you always start in a product state that you can easily measure). The final result is just about making sure all the individual steps do exactly what they're supposed to do. Again, if you can't do them perfectly, error correction (and more generally, fault tolerance) can be used to keep you on track. It is true that you can't use measurements to determine the state and copy it many times, as you might think to do with classical computation, but there are still good error correcting techniques. But that's another whole question...

+",1837,,,,,07-02-2018 06:52,,,,0,,,,CC BY-SA 4.0 +2570,2,,1937,07-02-2018 07:13,,3,,"

Let's start with a simple example where $H_i$ and $H_f$ commute because they are both diagonal:

+ +

$H_i= +\begin{pmatrix}1 & 0\\ +0 & -1 +\end{pmatrix} +$

+ +

$H_p= +\begin{pmatrix}-1 & 0\\ +0 & -0.1 +\end{pmatrix} +$

+ +

The eigenvector with lowest eigenvalue (i.e. the ground state) of $H_i$ is $|1\rangle $ so we start in this state. +The ground state of $H_f$ is $|0\rangle$ so this is what we're looking for.

+ +

Remember the minimum runtime for the AQC to give the correct answer to within an error $\epsilon$:
+$\tau\ge \max_t\left(\frac{||H_i - H_f||^2}{\epsilon E_{\rm{gap}}(t)^3}\right)$.

+ +

This is given and explained in Eq. 2 of Tanburn et al. (2015).

+ +
    +
  • Let's say we want $\epsilon = 0.1$.
  • +
  • Notice that $||H_i - H_f||^2 = 0.1 $ according Eq. 4 of the same paper.
  • +
  • Notice that $\frac{||H_i - H_f||^2}{\epsilon}=1$ (I've chosen $\epsilon$ so that this would happen, but it doesn't matter).
  • +
  • We now have $\tau \ge \max_t\left(\frac{1}{E_{\rm{gap}}(t)^3}\right)$
  • +
+ +

So what is the minimum gap between ground and first excited state (which gives the $\max_t$) ?
+When $t=20\tau/29$, the Hamiltonian is:

+ +

$H=\frac{9}{29}H_i + \frac{20}{29}H_p$

+ +

$H=\frac{9}{29}\begin{pmatrix}1 & 0\\ +0 & -1 +\end{pmatrix} + \frac{20}{29}\begin{pmatrix}-1 & 0\\ +0 & -0.1 +\end{pmatrix}$

+ +

$ +H=\begin{pmatrix}\frac{9}{29} & 0\\ +0 & -\frac{9}{29} +\end{pmatrix}+\begin{pmatrix}-\frac{20}{29} & 0\\ +0 & -\frac{2}{29} +\end{pmatrix} +$

+ +

$ +H=\begin{pmatrix}\frac{-11}{29} & 0\\ +0 & -\frac{11}{29} +\end{pmatrix} +$

+ +

So when $t=\frac{20}{29}\tau$, we have $E_{\rm{gap}}=0$ and the lower bound on $\tau$ is essentially $\infty$.

+ +

So the adiabatic theorem still applies, but when it states that the Hamiltonian needs to change ""slowly enough"", it turns out it needs to change ""infinitely slowly"", which means you will likely never get the answer using AQC.
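The level crossing is easy to check numerically (a quick sketch; $s = t/\tau$ is the interpolation parameter, so $s = 20/29$ corresponds to $t = 20\tau/29$):

```python
import numpy as np

H_i = np.diag([1.0, -1.0])
H_p = np.diag([-1.0, -0.1])

def gap(s):
    # Energy gap of H(s) = (1 - s) H_i + s H_p.
    evals = np.sort(np.linalg.eigvalsh((1 - s) * H_i + s * H_p))
    return evals[1] - evals[0]

print(gap(0.0))              # 2.0, the gap of H_i alone
print(gap(20 / 29) < 1e-12)  # True: the gap closes at s = 20/29
```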

+",2293,,,,,07-02-2018 07:13,,,,2,,,,CC BY-SA 4.0 +2571,2,,2554,07-02-2018 09:33,,2,,"
+

At least for now, I see a big problem with giving an answer that + involves saying we could just use quantum encryption algorithms. The + main reason is that in order for the encryption to be effective the + end users would have to be in possession of a quantum encrypt/decrypt + device. Not a problem for a bank or Amazon on their end, but a big + problem for a guy trying to order a pizza on his smartphone.

+
+ +

Not really. Current day cryptosystems are mostly based on integer factorization, discrete logarithm & elliptic curve cryptography. I'd like to point out to you: Post-quantum cryptography. There are already a few cryptography algorithms which are resistant to quantum computer attacks. And you don't need necessarily ""quantum computers"" on the sender's or receiver's end for using such cryptography techniques.

+ +

The cryptosystems which are quantum-resistant normally use problems which lie outside BQP rather than being QMA-hard. That implies that the owners of the private key (in this case the sender and receiver) can easily decrypt the message using a classical computer, whereas, since the problem lies outside BQP without the private key, even using ""brute force"" it would be difficult for a quantum computer to hack (see @DaftWullie's excellent answer) and his comment:

+ +
+

For example, in the normal classical case of RSA, the central function + is factoring. The problem is (assumed to be) outside P making it hard + for a classical computer to hack, but inside NP (NOT NP-hard) so that + the rightful receiver can decrypt it on a classical computer.

+
+ +

To quote Wikipedia:

+ +
+

In contrast to the threat quantum computing poses to current + public-key algorithms, most current symmetric cryptographic + algorithms, and hash functions are considered to be relatively secure + against attacks by quantum computers. While the quantum Grover's + algorithm does speed up attacks against symmetric ciphers, doubling + the key size can effectively block these attacks. Thus post-quantum + symmetric cryptography does not need to differ significantly from + current symmetric cryptography.

+
+ +

By the way, I should point out that we're still quite a long way away from having actual quantum computers which can break current-day cryptosystems. The quantum computers of today aren't capable of even factorizing very large numbers. The largest number factorized by a quantum computer to date is $291311$[1] (as far as I know). That's something even your PC can do in milliseconds, and that is nowhere close to breaking a cryptosystem. Presumably, by the time we have such quantum computers, cryptography will have already progressed by leaps and bounds (and we wouldn't have to worry about ""quantum attacks"").

+ +

Moral of the story: Even in the future, your guy can still order pizzas using his ""classical"" smartphone without having to worry about hungry hackers stealing his pizzas using a quantum computer! ;)

+ +
+ +

[1]: High-fidelity adiabatic quantum computation using the intrinsic Hamiltonian of a spin system: Application to the experimental factorization of 291311 Li et al. (2017)

+",26,,,,,07-02-2018 09:33,,,,0,,,,CC BY-SA 4.0 +2572,2,,2565,07-02-2018 12:10,,21,,"

A first remark

+ +

This same phenomenon of 'control' qubits changing states in some circumstances also occurs with controlled-NOT gates; in fact, this is the entire basis of eigenvalue estimation. So not only is it possible, it is an important fact about quantum computation that it is possible. It even has a name: a ""phase kick"", in which the control qubits (or more generally, a control register) incurs relative phases as a result of acting through some operation on some target register.$\def\ket#1{\lvert#1\rangle}$

+ +

The reason why this happens

+ +

Why should this be the case? Basically it comes down to the fact that the standard basis is not actually as important as we sometimes describe it as being.

+ +

Short version. Only the standard basis states on the control qubits are unaffected. If the control qubit is in a state which is not a standard basis state, it can in principle be changed.

+ +

Longer version —

+ +

Consider the Bloch sphere. It is, in the end, a sphere — perfectly symmetric, with no one point being more special than any other, and no one axis more special than any other. In particular, the standard basis is not particularly special.

+ +

The CNOT operation is in principle a physical operation. To describe it, we often express it in terms of how it affects the standard basis, using the vector representations +$$ \ket{00} \to {\scriptstyle \begin{bmatrix} 1 \\ 0 \\ 0 \\ 0 \end{bmatrix}}\,, +\quad +\ket{01} \to {\scriptstyle \begin{bmatrix} 0 \\ 1 \\ 0 \\ 0 \end{bmatrix}}\,, +\quad +\ket{10} \to {\scriptstyle \begin{bmatrix} 0 \\ 0 \\ 1 \\ 0 \end{bmatrix}}\,, +\quad +\ket{11} \to {\scriptstyle \begin{bmatrix} 0 \\ 0 \\ 0 \\ 1 \end{bmatrix}}$$ +— but this is just a representation. This leads to a specific representation of the CNOT transformation: +$$ +\mathrm{CNOT} +\to +{\scriptstyle \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{bmatrix}}\,.$$ +and for the sake of brevity we say that those column vectors are the standard basis states on two qubits, and that this matrix is a CNOT matrix.

+ +

Did you ever do an early university mathematics class, or read a textbook, where it started to emphasise the difference between a linear transformation and matrices — where it was said, for example, that a matrix could represent a linear transformation, but was not the same as a linear transformation? The situation with CNOT in quantum computation is one example of how this distinction is meaningful. The CNOT is a transformation of a physical system, not of column vectors; the standard basis states are just one basis of a physical system, which we conventionally represent by $\{0,1\}$ column vectors.

+ +

What if we were to choose to represent a different basis — say, the X eigenbasis — by $\{0,1\}$ column vectors, instead? Suppose that we wish to represent $$ +\begin{aligned} +\ket{++} \to{}& [\, 1 \;\; 0 \;\; 0 \;\; 0 \,]^\dagger\,, +\\ +\ket{+-} \to{}& [\, 0 \;\; 1 \;\; 0 \;\; 0 \,]^\dagger\,, +\\ +\ket{-+} \to{}& [\, 0 \;\; 0 \;\; 1 \;\; 0 \,]^\dagger\,, +\\ +\ket{--} \to{}& [\, 0 \;\; 0 \;\; 0 \;\; 1 \,]^\dagger \,. +\end{aligned}$$ +This is a perfectly legitimate choice mathematically, and because it is only a notational choice, it doesn't affect the physics — it only affects the way that we would write the physics. It is not uncommon in the literature to do analysis in a way equivalent to this (though it is rare to explicitly write a different convention for column vectors as I have done here). We would have to represent the standard basis vectors by: +$$ \ket{00} \to \tfrac{1}{2}\,{\scriptstyle \begin{bmatrix} 1 \\ 1 \\ 1 \\ 1 \end{bmatrix}}\,, +\quad +\ket{01} \to \tfrac{1}{2}\,{\scriptstyle \begin{bmatrix} 1 \\ -1 \\ 1 \\ -1 \end{bmatrix}}\,, +\quad +\ket{10} \to \tfrac{1}{2}\,{\scriptstyle \begin{bmatrix} 1 \\ 1 \\ -1 \\ -1 \end{bmatrix}}\,, +\quad +\ket{11} \to \tfrac{1}{2}\,{\scriptstyle \begin{bmatrix} 1 \\ -1 \\ -1 \\ 1 \end{bmatrix}}\,.$$ +Again, we're using the column vectors on the right only to represent the states on the left. But this change in representation will affect how we want to represent the CNOT gate.

+ +

A sharp-eyed reader may notice that the vectors which I have written on the right just above are the columns of the usual matrix representation of $H \otimes H$. There is a good reason for this: what this change of representation amounts to is a change of reference frame in which to describe the states of the two qubits. In order to describe $\ket{++} = [\, 1 \;\; 0 \;\; 0 \;\; 0 \,]^\dagger$, $\ket{+-} = [\, 0 \;\; 1 \;\; 0 \;\; 0 \,]^\dagger$, and so forth, we have changed our frame of reference for each qubit by a rotation which is the same as the usual matrix representation of the Hadamard operator — because that same operator interchanges the $X$ and $Z$ observables, by conjugation.

+ +

This same frame of reference will apply to how we represent the CNOT operation, so in this shifted representation, we would have +$$ +\begin{aligned} +\mathrm{CNOT} \to \tfrac{1}{4}{}\,{\scriptstyle +\begin{bmatrix} 1 & 1 & 1 & 1 \\ 1 & -1 & 1 & -1 \\ 1 & 1 & -1 & -1 \\ 1 & -1 & -1 & 1 \end{bmatrix} +\,\begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{bmatrix}\, +\begin{bmatrix} 1 & 1 & 1 & 1 \\ 1 & -1 & 1 & -1 \\ 1 & 1 & -1 & -1 \\ 1 & -1 & -1 & 1 \end{bmatrix} +}\, += +\,{\scriptstyle +\begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \end{bmatrix}} +\end{aligned}$$ +which — remembering that the columns now represent $X$ eigenstates — means that the CNOT performs the transformation +$$ \begin{aligned} +\mathrm{CNOT}\,\ket{++} &= \ket{++} , \\ +\mathrm{CNOT}\,\ket{+-} &= \ket{--}, \\ +\mathrm{CNOT}\,\ket{-+} &= \ket{-+} , \\ +\mathrm{CNOT}\,\ket{--} &= \ket{+-} . +\end{aligned} $$ +Notice here that it is only the first, 'control' qubits whose state changes; the target is left unchanged.
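This change-of-frame calculation is easy to verify numerically; the following sketch checks that conjugating the CNOT matrix by $H \otimes H$ yields exactly the matrix above, i.e. a CNOT with the roles of the two qubits exchanged:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

HH = np.kron(H, H)            # self-inverse change of frame
conjugated = HH @ CNOT @ HH

# CNOT with control and target exchanged:
CNOT_reversed = np.array([[1, 0, 0, 0],
                          [0, 0, 0, 1],
                          [0, 0, 1, 0],
                          [0, 1, 0, 0]], dtype=float)

print(np.allclose(conjugated, CNOT_reversed))  # True
```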

+ +

Now, I could have shown this same fact a lot more quickly without all of this talk about changes in reference frame. In introductory courses in quantum computation in computer science, a similar phenomenon might be described without ever mentioning the words 'reference frame'. But I wanted to give you more than a mere calculation. I wanted to draw attention to the fact that a CNOT is in principle not just a matrix; that the standard basis is not a special basis; and that when you strip these things away, it becomes clear that the operation realised by the CNOT clearly has the potential to affect the state of the control qubit, even if the CNOT is the only thing you are doing to your qubits.

+ +

The very idea that there is a 'control' qubit is one centered on the standard basis, and embeds a prejudice about the states of the qubits that invites us to think of the operation as one-sided. But as a physicist, you should be deeply suspicious of one-sided operations. For every action there is an equal and opposite reaction; and here the apparent one-sidedness of the CNOT on standard basis states is belied by the fact that, for X eigenbasis states, it is the 'target' which unilaterally determines a possible change of state of the 'control'.

+ +

You wondered whether there was something at play which was only a mathematical convenience, involving a choice of notation. In fact, there is: the way in which we write our states with an emphasis on the standard basis, which may lead you to develop a non-mathematical intuition of the operation only in terms of the standard basis. But change the representation, and that non-mathematical intuition goes away.

+ +

The same thing which I have sketched for the effect of CNOT on X-eigenbasis states, is also going on in phase estimation, only with a different transformation than CNOT. The 'phase' stored in the 'target' qubit is kicked up to the 'control' qubit, because the target is in an eigenstate of an operation which is being coherently controlled by the first qubit. On the computer science side of quantum computation, it is one of the most celebrated phenomena in the field. It forces us to confront the fact that the standard basis is only special in that it is the one we prefer to describe our data with — but not in how the physics itself behaves.

+",124,,124,,07-02-2018 12:30,07-02-2018 12:30,,,,1,,,,CC BY-SA 4.0 +2573,2,,2352,07-02-2018 13:53,,8,,"

Caveat. I can't be absolutely certain that no-one has contemplated a quantum XOR list before — but I can be pretty confident. On the theory side, the idea of data structures as granular as linked lists (of any description) is pretty low-level, and to my knowledge is not really the subject of research; and people working on architectures only dream of the day in which they might worry about how to store data structures in their machines. So it's likely that there is no prior art on the subject.

+ +

Before we consider ""quantum XOR linked lists"", the main thing to consider is what a linked list does. It is just a data structure which is used to maintain a list of data items, where the 'links' encode memory addresses for the next or previous item in the list. This idea is pretty generic (that's a compliment, not an insult): it can be applied to any sort of data without modification. On the other hand, it is possible to contemplate ways to extend the addressing itself to the quantum regime: this is something which, at least in principle, you may want to do, for example if you are interested in superpositions of possible list values for some algorithm which relies on such a concept to work.

+ +

This motivates two concepts:

+ +
    +
  • Classically linked lists of quantum data values, in which the data is quantum but the memory addresses are classical pointers to definite qubits or other quantum registers. This is the application of the notion of ""linked list of [X]"" to the case where 'X' happens to be quantum data. The data structure doesn't care that the data itself is quantum: the data structure is there to act as a box in which to order your data, and is completely agnostic as to what is to be done with that data.

    + +

    For this sort of linked list, the way to get an XOR linked list is simple: you do the same thing as for XOR Linked lists of classical data. The fact that the data is quantum doesn't make any difference to the data structure in this case, so you use the very same techniques.

  • +
  • Quantum-linked lists, in which the data is quantum, and so are the encodings of the memory addresses. This is what you would do if you wanted to consider superposition of possible lists, including arbitrary variations in list-length. If you're wondering how you could possibly retrieve data from a superposition of memory addresses, it seems to me that the answer is the same as you would do with any +algorithm which queries a ""quantum database"" in superposition: you +require access to a qRAM, as a means to perform coherent queries of a data bank. +If the data stored at the address is also quantum, you cannot clone it of course; but perhaps you can move that data (in a particular branch of the superposition) from the qRAM to your +active data qubits — your 'cache', if you will — in order to operate on it more +quickly. After you have performed the operations you intend to perform on that cached data, you might then swap it back to the qRAM. +In the meantime, the state of your working memory will be entangled with one or more registers of the qRAM, but there is nothing theoretically wrong with that, so long as you maintain the appropriate hygiene to keep from disrupting your data in ways that you do not intend. I have no specific suggestions for when you would actually want to do this, but as it is not an obviously ridiculous thing to want to do, I see no reason not to take this concept seriously.

    + +

    In such a list, one should generally expect the pointer values of elements later in the list, to be entangled with pointer values earlier in the list. +You will have to be careful about how you store copies of memory addresses, as you compute them and (just as importantly!) uncompute them, to traverse the quantum linked list. +The data structure might be represented almost as a decision tree, though there is likely some application of this data structure which would make the relative phases of the pointer values important, and which would distinguish it from a decision tree. (These are features which one should generally expect of a data structure built from quantum registers for memory addresses.)

    + +

    If there is any reason at all to consider superpositions of memory addresses, then as soon as it becomes reasonable at all to realise data structures such as linked lists on quantum computers, an XOR-linked list seems a very sensible idea for a data structure. The savings in resources for the links would be a significant benefit for the foreseeable future, with only modest overhead in the additional cached data required to compute the addresses for the forward and backward links. The way one would realise these is almost the same as for classical XOR-linked lists: the memory addresses are encoded as standard basis vectors, for which you may compute the XOR, and more generally you may take the superposition of the parities computed by XOR. The principle is essentially the same as for a more general quantum linked list; you just use a more elaborate encoding for the pointers. This may require a little bit more care to be taken to uncompute the link addresses as you traverse the list, but it seems likely that only a modest amount of additional work is required.

    + +

    The major constraint that I see for quantum-linked lists — whether singly-, doubly-, or XOR-linked — is the uncomputation of memory addresses as you traverse the list. Whether this ever poses a significant problem would probably depend on how and why you would want to traverse a quantum-linked list, which is to say, what you want to accomplish with it. However, it seems clear in principle that one can define such quantum-linked lists, and the way they would be implemented would be similar to a reversible implementation of classical linked lists, assuming that there is the appropriate architecture (such as qRAM in this case) to support the question of fetching data from addresses stored in superposition.

  • +
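For reference, the classical XOR-pointer trick that all of the above builds on can be sketched in a few lines of Python (the addresses and the memory dictionary here are made up purely for illustration; a real implementation would use raw machine addresses):

```python
class Node:
    def __init__(self, value):
        self.value = value
        self.link = 0            # XOR of the previous and next addresses

memory = {}                      # address -> Node; our stand-in for RAM

def alloc(addr, value):
    memory[addr] = Node(value)
    return addr

# A three-element XOR-linked list at (made-up) addresses 10, 20, 30.
a, b, c = alloc(10, 'A'), alloc(20, 'B'), alloc(30, 'C')
memory[a].link = 0 ^ b           # head: previous address is null (0)
memory[b].link = a ^ c
memory[c].link = b ^ 0           # tail: next address is null (0)

def traverse(head):
    """Walk the list in one direction: next = link XOR previous."""
    prev, curr, values = 0, head, []
    while curr != 0:
        values.append(memory[curr].value)
        prev, curr = curr, memory[curr].link ^ prev
    return values

print(traverse(a))               # ['A', 'B', 'C']
print(traverse(c))               # ['C', 'B', 'A']
```

The quantum analogue discussed above would encode such addresses as standard basis vectors and compute the XOR reversibly (e.g. with CNOTs), so that superpositions of these parities are also allowed.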
+",124,,,,,07-02-2018 13:53,,,,0,,,,CC BY-SA 4.0 +2574,2,,2499,07-02-2018 15:50,,20,,"

When you ask whether it is pie in the sky, that rather depends on what promises you think quantum technologies are trying to fulfil. And that depends on who the people are making those promises.

+ +

Consider why you are even aware of quantum computation, given that it hasn't yet managed to produce any devices (or to be more fair, not very many devices) which resemble muscular computer hardware. Where are you hearing about it from, whence the excitement? I'm willing to bet that even if you attend every academic talk about quantum computing that you can personally manage to, not very much of what you hear about quantum computing is coming from academics. Chances are you hear a lot about quantum computing from sources which are more interested in excitement than fact.

+ +

There are some corporate sources who are making more or less grandiose claims about what their quantum hardware can do, or will be able to do; and there have been for well over a decade. Meanwhile, there is a large community of people who have simply been trying to make careful progress and not spend too much of their energy making promises they can't deliver. Whom will you have heard more from?

+ +

But even granting those, the parties most responsible for excitement about quantum computation are certain kinds of magazines and special-interest websites, which as sources of information are like market-square waffle vendors: they trade very much on sweet vaporous aromas rather than something with substance and bite. +The attention-seeking advertising industry, rather than academia, are the main reason why there are such puffed-up expectations of quantum computation. They don't even care about quantum computation in principle: it is one of several magical incantations with which to amaze the crowd, to evoke dreams of pie in the sky, and in the meantime make money from some other company for the mere possibility that an ad was seen for half a second. That industry is very much in the business of selling airborne pastry, both to their clients and to their audience. But does that mean that the world is owed flying fig rolls by those who are actually working on quantum technologies? It's hard enough to accomplish the things which we think might be possible to accomplish — which are more modest, but still worthwhile.

+ +

Among my academic peers (theoretical computer scientists and theoretical physicists), the blatant misinformation about quantum computation among the public is a source of significant frustration. Most of us believe that it will be possible to build a quantum computer, and most of those who do also believe that it will have significant economic impacts. But none of us expect that it would turn the world upside-down in five to ten years, nor have we been expecting that for any of the past fifteen years that it started to become fashionable to say that we would have massive quantum computers ""in five to ten years"". I have always made it a point to say that I hope to see the impacts in my lifetime, and recent activity has made me hope to see it within twenty — but even then you won't be going to the store to buy one, any more than you go down to the store to buy a Cray supercomputer.

+ +

Nor do any of us expect that it will allow you to easily solve the Travelling Salesman Problem, or the like. Being able to analyse problems in quantum chemistry and quantum materials is the original, and in the short term still the best, prospective application of quantum computation, and it may be revolutionary there; and perhaps in the longer term we can provide robust and significant improvements in practise for optimisation problems. (D-Wave claims they can already do this in practise with their machines: the jury is still out among academics whether this claim is justified.)

+ +

The devil of it is, to explain what you can actually expect out of the theory and development of quantum computation, you have to somehow explain a little quantum mechanics. This Is Not An Easy Thing To Do, and as with anything complicated, there is little patience in the larger world for nuanced understanding, especially when ""alternative facts"" in the form of candy-flavoured 'yakawow' hype is striding mightily around in seven league boots.

+ +

The truth — about what quantum computation can do, and that it likely won't allow you to teleport across the world, nor solve world hunger or airline chaos at a stroke — is boring. But making significant advances in chemistry and materials science is not. To say nothing of applications not yet developed: how easily can you extrapolate from gear-based computers to help reliably compute taxes or compute logarithm-tables to designing aircraft?

+ +

The timeline of classical computing technology extends well before even the 19th century. We have some idea of how to try to re-tread this path with quantum technologies, and we have an idea of the sorts of dividends which may be possible if we do so. For that reason, we hope to reproduce the development to useful computing technology in a much, much faster amount of time than the 370-plus years from Pascal's adders to the modern day. But it's not going to be quite as fast as some people have been promising, particularly those people who are not actually responsible for delivering on those 'promises'.

+ +

Some remarks.

+ +

""Where's the parallel adder?""

+ +
    +
  • We don't have large devices which carry out addition by quantum computers, but we do have some people working on fast addition circuits in quantum computers — some of what quantum computers will have to do would involve more conventional operations on data in superposition.
  • +
+ +

""Where's the equivalent of Atlas, or the MU5?""

+ +
    +
  • To be frank, we're still working on the first reliable quantum analogue of Pascal's adder. I'm hopeful that the approach of the NQIT project (disclosure: I'm involved in it, but not as an experimentalist) of making small, high-quality modules which can exchange entanglement will be a route to rapid scaling via mass-production of the modules, in which case we might go from Pascal's adder, to the Colossus, to the Atlas, and beyond in a matter of a few years. But only time will tell.
  • +
+ +

""It looks like they haven't even got off the ground. You won't be buying one in PC World any time soon.""

+ +
    +
  • That is perfectly true. However, if you were ever told to expect otherwise, this is more likely to be the fault of PC World (or to be fair, PC World's competitors in the market for your subscription money as a tech enthusiast) than it is ours. Any responsible researcher would tell you that we're striving hard to make the first serious prototype devices.
  • +
+ +

""Will you ever be able to [buy a quantum computer in PC World]?""

+ +
    +
  • Will you ever be able to buy a Cray in PC World? Would you want to? Maybe not. But your university may want to, and serious businesses may want to. Beyond that is wild speculation — I don't see how a quantum computer would improve word-processing. But then again, I doubt that Babbage ever imagined that anything akin to his Difference Engine would be used to compose letters.
  • +
+",124,,124,,07-02-2018 16:07,07-02-2018 16:07,,,,1,,,,CC BY-SA 4.0 +2575,2,,2553,07-02-2018 19:17,,5,,"

In general, there are exactly two ways to allocate qubits in Q#: the using statement, and the borrowing statement. +Both can only be used from within Q#, and can't be directly used from within C#. +Thus, you'd likely want to make a new Q# operation to serve as the ""entry point"" from C#; this new operation would then be responsible for allocating qubits and passing them down.

+ +

For instance:

+ +
// MyOp.qs
+operation EntryPoint() : () {
+    body {
+        using (register = Qubit[2]) {
+            myOp(register);
+        }
+    }
+}
+
+
+// Driver.cs
+EntryPoint.Run().Wait();
+
+",1978,,,,,07-02-2018 19:17,,,,0,,,,CC BY-SA 4.0 +2576,2,,2553,07-02-2018 19:18,,3,,"

All qubits must be allocated by the Simulator, so you can't create an instance and pass it down to your Operation.

+ +

Why do you want to create the Qubits on the driver? If anything, you should create an ""entry"" method on Q# that just allocates your qubits and then call your operation, and call that from the Driver.

+",2918,,,,,07-02-2018 19:18,,,,0,,,,CC BY-SA 4.0 +2577,1,2579,,07-02-2018 19:26,,5,395,"

What exactly is a logical (non-physical? error corrected?) qunit?

+ +

Can quantum systems be built exclusively w/ logical qunits?

+",2645,,26,,12/13/2018 19:56,12/13/2018 19:56,Computing with Logical Qunits,,2,2,,,,CC BY-SA 4.0 +2578,1,,,07-02-2018 19:30,,-8,300,"

Does something like Mohs' scale exist for quantum computing? (eg. classical = 0, hybrid = 5, pure quantum = 10)

+ +
+

Mohs' scale: a scale of hardness used in classifying minerals. It runs from 1 to 10 using a series of reference minerals, and a position on the scale depends on the ability to scratch minerals rated lower.

+
+ +

The idea came from seeing this answer which mentions ""Mohs' scale of Sci-fi hardness.""

+",2645,,26,,1/13/2019 15:57,1/13/2019 15:57,Scale for Quantum Computing,,2,2,,7/27/2018 18:40,,CC BY-SA 4.0 +2579,2,,2577,07-02-2018 19:45,,5,,"

A logical qubit is made out of many physical qubits (or qudits), simply selecting a particular two-dimensional subspace. So you can’t make it “exclusively” out of logical qubits because they sit on top of real physical qubits. In fact, if you're thinking about a terminology of ""virtual qubits"", that is actually best thought of as a synonym for ""logical qubits"".

+ +

Remember what should almost be the mantra of quantum computing: ""information is physical"". Information doesn't exist unless it is recorded somewhere, so it must be recorded on something physical and the physical operations that can be performed on that information carrier determine the nature of the information theory. So if you want a logical qubit you'd better be using quantum information carriers at the physical level. It doesn't matter what quantum information carrier you use as your physical qubit, whether that's a spin or a photon, or any type of two-level system.

+ +

There is no particular relation between the number of physical qubits and the number of logical qubits (so long as the size of the space for logical qubits $\leq$ size of space for physical qubits). You might, however, use the relation as some sort of measure of efficiency. For example, we often talk about error correcting codes as defining a logical qubit. They’re defined on multiple physical qubits and give you a smaller number of “useful” qubits. Most critically, when you start talking about fault-tolerance, you have two parameters: $p_c$, the critical error rate above which the fault tolerant scheme cannot achieve arbitrary accuracy, and $p_a<p_c$, the actual error rate that you can achieve. The variation of the number of physical qubits you require to achieve a single error corrected logical qubit as $p_a$ approaches $p_c$ tells you a lot about the feasibility of fault-tolerance and the accuracy we need to aim for to get good quality quantum computing on a modestly sized quantum computer.

+",1837,,1837,,07-03-2018 07:21,07-03-2018 07:21,,,,5,,,,CC BY-SA 4.0 +2580,2,,2561,07-02-2018 20:01,,1,,"

In Q# you use either the M or the Measure operations to measure a qubit.

+ +

Once you have the measurement of the Qubit, you can use things like if, for or other control flow statements to control your program execution based on the variable values.

+ +

Does that answer your question?

+",2918,,,,,07-02-2018 20:01,,,,0,,,,CC BY-SA 4.0 +2581,2,,2499,07-02-2018 20:22,,10,,"

Like all good questions, the point is what you mean. As the CTO of a startup developing a quantum computer, I have to emphatically disagree with the proposition that quantum computing is just pie in the sky.

+ +

But then you assert ""You won't be buying one in PC World any time soon."" This I not only agree with but would suggest that in the foreseeable future, you won't be able to, which is as close to ""never"" as you'll get me to assert.

+ +

Why is that? To the first point, it is valid because there are no engineering reasons to prevent us from building a quantum computer and in fact there are no reasons that will continue to prevent us from building one for much longer. To the second point, it is because it is harder to build a quantum computer than it is to build a classical computer (you need special conditions such as extremely cold temperatures or a very good vacuum, and they are slower) yet there are only certain problems that quantum computers excel at. You don't need any laptops to do drug discovery by computation or breaking outdated crypto or to accelerate inverting some function (especially not if they come with wardrobe sized support equipment), but you need one or a few supercomputers to do it.

+ +

Why can I say there are no engineering issues that prevent (large, universal) quantum computers? Note that a single example would suffice, hence I choose the technology I know best, the one I am pursuing professionally. In ion trap based quantum computing, all the ingredients one needs have been demonstrated: There are high-fidelity, universal quantum gates. There are successful attempts to move ions (separate and recombine them from strings of ions, move them along paths, and through intersections of paths), with suitable performance. Plus initializing, measuring, etc. is possible at a fidelity comparable to gate operations. The only things that prevent large, universal ion trap based quantum computers from being built are getting the scientists that made the individual contributions together with the right engineers, doing some serious engineering indeed, and securing finance.

+ +

I'm itching to even tell you just how one might go about getting the feat done soon, technically, but I fear I'd make our patent attorney (and my CEO and everyone else in the company) a bit mad. What it boils down to is this:

+ +

If quantum computing is indeed a pie in the sky, then looking back, people in the future will perceive it as just such a low hanging fruit as the first microcomputers.

+",,user1039,,,,07-02-2018 20:22,,,,0,,,,CC BY-SA 4.0 +2582,2,,2578,07-02-2018 20:30,,2,,"

We used Mohs' scale in Earth Science class to measure the hardness of rocks. If we could scratch it with our finger nail it meant the rock had a hardness of 2 or less. If not it had a hardness of 3 or more. Then if that rock could be scratched by another rock we would assign something greater and if it could scratch softer rocks we'd give it something less. Eventually we were able to come up with a self-consistent order of hardness for all rocks in the data set.

+ +

I do not see why you are comparing this to quantum computers. +Why Mohs' scale and not the Richter scale or the Kinsey scale or the pH scale?

+ +

To answer your question: There is no such scale I know of for quantum, classical, hybrid computers. The reason why is probably the fact that those three (quantum, classical, hybrid) are the only things on the scale worth mentioning, so it is a ternary scale (1,2, or 3) not something more sophisticated like a 1-10. We therefore don't have to use numbers and can just use the names, which are more descriptive, self-explanatory, and therefore clear.

+",2293,,2293,,07-02-2018 22:06,07-02-2018 22:06,,,,4,,,,CC BY-SA 4.0 +2583,2,,2561,07-02-2018 20:37,,3,,"

You can check for state equality with the SWAP test.

+

Quantum fingerprinting (Buhrman, Cleve, Watrous & de Wolf, 2001) seems to be the first paper to introduce the SWAP test.

+

The idea behind this test is:

+
    +
  1. Encode the 2 quantum states using quantum error correction codes to "increase the difference between them".
  2. +
  3. Test the 2 code words by using an ancilla register and the procedure below.
  4. +
  5. Read the ancilla register. If it is $\left\vert 0 \right\rangle$ then the 2 states are probably equal. Else, they are probably different.
  6. +
+

+

You can repeat the procedure multiple times to ensure that the 2 states are equal up to a given probability.

+

You can implement by yourself the test on QISKit:

+
from qiskit import ClassicalRegister, QuantumRegister, QuantumCircuit, execute
+from qiskit import BasicAer
+
+q_simulator = BasicAer.get_backend('qasm_simulator')
+
+register_size = 2
+
+qr_psi = QuantumRegister(register_size, 'psi')  #For state PSI
+qr_phi = QuantumRegister(register_size, 'phi')  #For state PHI
+qr_ancilla = QuantumRegister(1, 'ancilla')
+cequal = ClassicalRegister(1, 'equal')
+
+circuit = QuantumCircuit()
+
+circuit.add_register(qr_psi)
+circuit.add_register(qr_phi)
+circuit.add_register(qr_ancilla)
+circuit.add_register(cequal)
+
+
+def cswap(circuit, ctrl, q1, q2) -> None:
+    assert len(q1) == len(q2), "The swapped register sizes should match"
+    for i in range(len(q1)):
+        # Controlled swap
+        circuit.ccx(ctrl, q1[i], q2[i])
+        circuit.ccx(ctrl, q2[i], q1[i])
+        circuit.ccx(ctrl, q1[i], q2[i])
+
+def equality_test(circuit, ancilla, q1, q2, classical_register) -> None:
+    assert len(q1) == len(q2), "The swapped register sizes should match"
+    circuit.h(ancilla[0])
+    cswap(circuit, ancilla[0], q1, q2)
+    circuit.h(ancilla[0])
+    circuit.measure(ancilla[0], classical_register[0])
+
+## Initialisation
+# We add Hadamard to all the registers - to create PSI
+circuit.h(qr_psi)
+# We add Hadamard to all the registers - to create PHI
+# Modify the initialisation of either PHI or PSI and check the results.
+circuit.x(qr_phi)
+circuit.h(qr_phi)
+
+## SWAP test
+equality_test(circuit, qr_ancilla, qr_psi, qr_phi, cequal)
+
+res_qasm = execute([circuit], q_simulator, shots=1024).result()
+counts = res_qasm.get_counts()
+
+print(counts)
+
+

You can convince yourself that the method works with high probability by changing the initialisation step of the two registers and check the results when the 2 register match or don't match.
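As a cross-check on the counts (a small numpy sketch of my own, separate from the circuit above): the ancilla is measured as $\left\vert 0 \right\rangle$ with probability $(1 + |\langle\psi|\phi\rangle|^2)/2$, so identical states give probability 1 and orthogonal states give probability 1/2:

```python
import numpy as np

def swap_test_p0(psi, phi):
    """Probability that the SWAP test ancilla is measured as |0>."""
    overlap_sq = abs(np.vdot(psi, phi)) ** 2
    return (1 + overlap_sq) / 2

plus  = np.array([1,  1]) / np.sqrt(2)
minus = np.array([1, -1]) / np.sqrt(2)

print(swap_test_p0(plus, plus))    # 1.0  (equal states)
print(swap_test_p0(plus, minus))   # 0.5  (orthogonal states)
```

This is why, as mentioned above, the test must be repeated: a single outcome of $\left\vert 0 \right\rangle$ is also obtained half of the time for completely orthogonal states.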

+",1386,,9201,,6/22/2020 13:38,6/22/2020 13:38,,,,0,,,,CC BY-SA 4.0 +2584,2,,2577,07-02-2018 21:10,,2,,"

Short answer:

+ +

Logical qubits are just an abstraction above physical qubits. A logical qubit is something (see after for examples) that acts like a qubit.

+ +

Some examples

+ +

A logical qubit can be:

+ +
    +
  1. A single physical qubit. This is the case for most of (all?) the quantum chips currently available. In this case, the logical qubit has no advantage over the physical one (they are the same qubits).
  2. +
  3. Multiple qubits used for quantum error correction code.

    + +

    In this case, the user sees the logical qubit as a single qubit and does not need to know the underlying error correction scheme because the logical qubit behaves like a physical one. This case is quite interesting for quantum computing because we could ""hide"" the complexity of the quantum error correction algorithm behind logical qubits. The user will only see one qubit with low error rates, but the physical setup will be composed of multiple qubits.

  4. +
  5. As of today, the term ""logical qubit"" is mostly used in quantum error correction but may be used in other fields in the future.

  6. +
+ +
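As a toy illustration of point 2 (a classical sketch of my own, not a full quantum code): the classical skeleton of the 3-qubit bit-flip code stores one logical bit in three physical bits and recovers it from any single error by majority vote:

```python
def encode(bit):
    """One logical bit -> three physical bits (repetition code)."""
    return [bit] * 3

def decode(bits):
    """Majority vote: tolerates any single bit-flip error."""
    return int(sum(bits) >= 2)

codeword = encode(1)
codeword[0] ^= 1                 # a single bit-flip error on one physical bit
print(decode(codeword))          # 1: the logical bit is recovered
```

A genuine quantum code must additionally correct errors without directly measuring the encoded data, but the bookkeeping is the same: many physical carriers, one logical unit.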

Side note: to answer your question in comment:

+ +
+

Is there any sort of generalization as to how physical & logical qubits correlate? (eg. n logical qubits requires $n^2$ physical qubits) Although they tend to be made of physical qubits, can they be made of other things? (eg. how could they be made photonicly or could they be purely virtual)

+
+ +

There is no generalisation on the number of physical qubits needed to encode one logical qubit. This is entirely dependent on the error correction algorithm used.

+ +

A purely virtual qubit does not exist. Logical qubits are necessarily composed of physical qubits. Note that a ""qubit"" is already an abstraction: it abstracts the physical representation used. As a logical qubit is defined in terms of ""qubits"", logical qubits are independent of the physical implementation of the underlying qubits.

+",1386,,,,,07-02-2018 21:10,,,,2,,,,CC BY-SA 4.0 +2585,2,,2578,07-02-2018 22:10,,3,,"

This is not exactly like Mohs' hardness scale, but it's a series of 5 different definitions of quantum computers by Michele Mosca:

+ +
+

Definition 1: Since the world is quantum, any computer is a quantum computer. Conventional computers are just weak quantum + computers, since they don’t exploit intrinsically quantum effects, + such as superposition and entanglement.

+ +

Definition 2: A quantum computer is a computer that uses intrinsically quantum effects that cannot naturally be modeled by + classical physics. Classical computers may be able to mathematically + simulate instances of such computers, but they are not implementing + the same kinds of quantum operations.

+ +

Definition 2’: Definition 2, where there are strong tests or proofs of the quantum effects at play (e.g. by doing Bell tests).

+ +

Definition 3: A quantum computer is a computer that uses intrinsically quantum effects to gain some advantage over the best + known classical algorithms for some problem.

+ +

Definition 4: A quantum computer is a computer that uses intrinsically quantum effects to gain an asymptotic speed-up over the + best known classical algorithms for some problem. (The difference with + definition 3 is that the advantage is a fundamental algorithmic one + that grows for larger instances of the problem; versus advantages more + closely tied to hardware or restricted to instances of some bounded + size.)

+ +

Definition 5: A quantum computer is a computer that is able to capture the full computational power of quantum mechanics, just as + conventional computers are believed to capture the full computational + power of classical physics. This means, e.g. that it could implement + any quantum algorithm specified in any of the standard quantum + computation models. It also means that the device is in principle + scalable to large sizes so that larger instances of computational + problems may be tackled.

+
+ +

Source: https://qz.com/194738/why-nobody-can-tell-whether-the-worlds-biggest-quantum-computer-is-a-quantum-computer/#footnote

+",2293,,,,,07-02-2018 22:10,,,,1,,,,CC BY-SA 4.0 +2586,1,,,07-03-2018 00:41,,7,170,"

The traditional definition of a nonlocal game is restricted to having two players and one round (e.g., here), but it is natural to consider a more general class of games that may have more than two players and more than one round of questions. While there has been a lot of work dealing with games with more than two players, I have found very little on multi-round games. For instance, there is a recent preprint of Crépeau and Yang that gives a definition of a multi-party, multi-round non-signaling distribution and seems to describe a multi-round game (although the paper is written in the language of commitment schemes rather than games, so I'm not entirely sure my interpretation is correct).

+ +

Has there been any other work dealing with multi-round games? And is there a reason so much of the literature has focused on single-round games? Are multi-round games inherently ""no more interesting"" than single-round games from the perspective of understanding nonlocality?

+",2547,,55,,10/27/2021 17:07,10/27/2021 17:07,Has anyone analyzed multi-round nonlocal games?,,1,2,,,,CC BY-SA 4.0 +2587,1,,,07-03-2018 01:54,,8,316,"

One of the novel features of Bitcoin and other cryptocurrencies is that coins can be irrefutably "burned" or destroyed, by creating a transaction to send the money to a junk burn address.

+

Thinking similarly about quantum money - from knots, or hidden subspaces, Wiesner's currency, BBBW, etc. - has an "obvious" way to be destroyed. For example, given a legitimate Wiesner coin, by measuring in the "wrong" basis, the currency would be destroyed.

+

But it's not clear how to irrefutably prove that the coin was destroyed.

+
+

For example, if I have a Wiesner coin, and I tell the world that I've burned it, is there a way that I can do it so that others will believe me? Even if I'm the bank?

+
+

Edit

+

I think I first heard of this question in a fascinating and stimulating lecture by Or Sattath.

+

As an answer has suggested, clearly measuring a coin in a "burn basis" will burn it. That burn basis can even be orthogonal to the valid basis, maximally destroying the coin, I think.

+

Another option that comes to mind is for the bank to publish and broadcast, on a public classical channel, the correct basis for a serial number $s$. Thus, the secret is released, enabling anyone to clone the coin at will, and effectively destroying the unclonability, and hence uniqueness and value, of the coin.

+

Such an option (on a classical channel) is viable for Wiesner's currency, and the Hidden Subspaces currency, but I don't think for the Quantum Knots currency - there's no secret to be released on the Knots coin.

+",2927,,2927,,01-11-2023 17:05,01-11-2023 17:05,"Can quantum money be reliably ""burned?""",,1,0,,,,CC BY-SA 4.0 +2588,1,2608,,07-03-2018 02:47,,2,283,"

Google returns ""About 1 results"" for Quantum Computing with Sound.

+ +

The sole result links to an article entitled ""Physicists have designed the building blocks of quantum computer that works using sound""

+ +

From the abstract:

+ +
+

Sound can be just as quantum as light. But our toolbox for single quanta of sound, i.e. phonons, is currently insufficient.

+
+ +

Has anyone seen something like this before? Sounds interesting to me & am curious to find out more about ""phononic quantum computing"" & what advantages / disadvantages it has to offer.

+",2645,,26,,12/13/2018 19:56,12/13/2018 19:56,Phononic Quantum Computing,,1,0,,,,CC BY-SA 4.0 +2589,1,2602,,07-03-2018 06:35,,4,960,"
+

""Negative energies and probabilities should not be considered as nonsense. They are well-defined concepts mathematically, like a negative of money."" -Paul Dirac

+
+ +

The abstract from Photon-phonon-photon transfer in optomechanics states:

+ +
+

the Wigner function of the recovered state can have negative values at the origin, which is a manifest of nonclassicality of the quantum state

+
+ +

I recently learned about the Wigner function from this answer to a question I had asked about quantum shadowgraphy in which Ernesto says:

+ +
+

there are quantum states for which the Wigner function may be negative in some phase-space regions! This is an indication of non-classicality + ... this negativity has been linked to contextuality

+
+ +

From an answer by Rob:

+ +
+

experimental schemes to reconstruct the generalized Wigner representation of a given density operator (representing mixed or pure quantum states)

+
+ +

The abstract from Negativity and contextuality are equivalent notions of nonclassicality states

+ +
+

We also demonstrate the impossibility of noncontextuality or nonnegativity in quantum theory

+
+ +
+ +

What is negative probability? Non-classicality? (Non)contextuality? Quasiprobability? Arbitrary multispin quantum states?

+",2645,,55,,7/16/2020 9:22,7/16/2020 9:22,What does negative probability represent?,,1,0,,,,CC BY-SA 4.0 +2590,2,,2587,07-03-2018 06:48,,2,,"

If each coin is entangled with the ledger, burning a coin via measuring it in the 'burn' (or 'wrong') basis would create an update in the ledger which could then be verified by anyone who had access to the ledger.

+ +

See this paper: Quantum Blockchain using entanglement in time + for more info on one approach.

+ +

I also posted a question about time entangled blockchains which includes a couple links:

+ +
    +
  1. Quantum Secured Blockchain

  2. +
  3. Quantum Bitcoin: An Anonymous and Distributed Currency Secured by the No-Cloning Theorem of Quantum Mechanics

  4. +
+",2645,,2645,,07-03-2018 07:17,07-03-2018 07:17,,,,5,,,,CC BY-SA 4.0 +2591,2,,2586,07-03-2018 13:07,,4,,"

It seems that using more rounds will not be so helpful for getting something more powerful from a complexity perspective. There are a few comments about the number of rounds and the number of players for $\mathsf{MIP}^*$ in Thomas Vidick's lecture note on quantum multi-prover interactive proofs. Note that non-local games are $\mathsf{MIP^*}$ protocols.

+ +
    +
  • If we allow an extra prover and quantum messages (i.e. a $\mathsf{QMIP^*}$ protocol), then non-local games with polynomially many rounds can be parallelized to one round, which was proved by Kempe-Kobayashi-Matsumoto-Vidick. But as @John Watrous mentioned, we don't know how to do that for non-local games with classical messages, i.e. $\mathsf{MIP^*}$ protocols.
  • +
+ +

For more details, see section 6.3.2 in Watrous-Vidick's survey about quantum proofs, or the KKMV paper mentioned above.

+ +
    +
  • But we do not know anything similar for the number of provers; even the following could be true: $\mathsf{MIP^*}(2,\mathrm{poly}) \subsetneq \mathsf{MIP^*}(3,\mathrm{poly}) \subsetneq \cdots$, where $\mathsf{MIP^*}(r,k)$ denotes $r$-round $k$-prover non-local games. Namely, we may get something more powerful by adding new provers.
  • +
+",1777,,1777,,07-03-2018 19:42,07-03-2018 19:42,,,,3,,,,CC BY-SA 4.0 +2594,1,2596,,07-03-2018 21:50,,8,4604,"

Mostly I'm confused over whether the common convention is to use +$i$ or -$i$ along the anti-diagonal of the middle $2\times 2$ block.

+",119,,26,,12/23/2018 13:29,12/23/2018 13:29,What is the matrix of the iSwap gate?,,2,0,,,,CC BY-SA 4.0 +2595,1,,,07-03-2018 22:06,,35,1570,"

The intuition I have for why quantum computing can perform better than classical computing is that the wavelike nature of wavefunctions allow you to interfere multiple states of information with a single operation, which theoretically could allow for exponential speedup.

+ +

But if it really is just constructive interference of complicated states, why not just perform this interference with classical waves?

+ +

And on that matter, if the figure-of-merit is simply how few steps something can be calculated in, why not start with a complicated dynamical system that has the desired computation embedded in it. (ie, why not just create ""analog simulators"" for specific problems?)

+",2660,,26,,12/13/2018 19:56,02-05-2019 21:50,"If quantum speed-up is due to the wave-like nature of quantum mechanics, why not just use regular waves?",,6,4,,,,CC BY-SA 4.0 +2596,2,,2594,07-03-2018 22:09,,7,,"
+

Mostly I'm confused over whether the common convention is to use +i or -i along the anti-diagonal of the middle 2x2 block.

+
+ +

The former. There are two $+i$'s along the anti-diagonal of the middle $2\times 2$ block of the iSWAP gate. See page 95 here[$\dagger$].
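For concreteness, here is the matrix written out and checked numerically (a quick NumPy sketch of my own, not taken from the book):

```python
import numpy as np

# iSWAP in the computational basis |00>, |01>, |10>, |11>:
# it swaps |01> and |10> and multiplies each by +i.
iswap = np.array([
    [1, 0,  0,  0],
    [0, 0,  1j, 0],
    [0, 1j, 0,  0],
    [0, 0,  0,  1],
], dtype=complex)

assert np.allclose(iswap.conj().T @ iswap, np.eye(4))  # unitary

state_01 = np.array([0, 1, 0, 0], dtype=complex)
print(iswap @ state_01)  # the |10> component picks up a factor of i
```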

+ +

+ +

[$\dagger$]: Explorations in Computer Science (Quantum Gates) - Colin P. Williams

+",26,,26,,07-03-2018 22:17,07-03-2018 22:17,,,,0,,,,CC BY-SA 4.0 +2597,1,,,07-04-2018 01:31,,11,437,"

I’m trying to grok quantum walks, and would like to create an example that walks a perfect binary tree to find the one and only marked leaf node. Is this possible? If so, suppose the depth of the tree is five. Would that require a circuit with five wires? Would it best be realized with a Discrete Time Quantum Walk, flipping a Hadamard Coin five times? Regardless of whether these questions are on the right track, and although I’ve read a lot of papers on the subject, I’m currently at a loss for how to implement what I’ve described. Any concrete pointers?

+",2421,,55,,07-04-2018 22:30,12-07-2019 10:43,Quantum walk with binary tree,,1,1,,,,CC BY-SA 4.0 +2598,2,,2595,07-04-2018 01:37,,1,,"

""why not just perform this interference with classical waves?""

+ +

Yes this is one way we can simulate quantum computers on regular digital computers. We simulate the ""waves"" using floating point arithmetic. The problem is that it does not scale. Every qubit doubles the number of dimensions. For 30 qubits you already need about 8 gigabytes of ram just to store the ""wave"" aka state vector. At around 40 qubits we run out of computers big enough to do this.
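To make the scaling concrete, a small sketch (the 8 gigabyte figure above corresponds to single-precision complex amplitudes at 8 bytes each, which is the assumption here):

```python
# Memory needed to store an n-qubit state vector with complex64 amplitudes.
def statevector_bytes(n_qubits):
    return (2 ** n_qubits) * 8  # 8 bytes per complex64 amplitude

for n in (10, 20, 30, 40):
    print(n, "qubits:", statevector_bytes(n) / 2**30, "GiB")
# 30 qubits -> 8 GiB; 40 qubits -> 8192 GiB (8 TiB)
```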

+ +

A similar question was asked here: What's the difference between a set of qubits and a capacitor with a subdivided plate?

+",263,,26,,7/13/2018 15:09,7/13/2018 15:09,,,,2,,,,CC BY-SA 4.0 +2601,2,,2595,07-04-2018 05:29,,2,,"

Regular waves can interfere, but cannot be entangled.
An example of an entangled pair of qubits that cannot happen with classical waves is given in the first sentence of my answer to this question: What's the difference between a set of qubits and a capacitor with a subdivided plate?

+ +

Entanglement is considered to be the crucial thing that gives quantum computers advantage over classical ones, since superposition alone can be simulated by a probabilistic classical computer (i.e. a classical computer plus a coin flipper).

+",2293,,2293,,7/13/2018 15:10,7/13/2018 15:10,,,,2,,,,CC BY-SA 4.0 +2602,2,,2589,07-04-2018 07:00,,5,,"
+

What is non-classicality?

+
+ +

I'm not sure if there's a universally accepted definition, but the way that I'd define it is: if all possible outcomes of experiments on a particular quantum system can be described by a probability distribution, then the system is classical. Otherwise, it is non-classical. In alternative terminology, for a classical system, people say that there's a (local) hidden variable model that explains the experimental outcomes.

+ +

A trivial example is a diagonal density matrix when measured in the computational basis. The diagonal elements just give the probabilities of the different outcomes, so the state is classical.

+ +
+

What is negative probability?

+
+ +

This is rather loose terminology. A true probability distribution (in the discrete setting, a set $\{p_i\}$ such that $p_i\geq 0$ and $\sum_ip_i=1$) never contains negative probabilities, by definition.

+ +

You only get ""negative probability"" in some quasi-probability distributions, and so it should probably be called ""negative quasi-probability"" to avoid misunderstandings. As stated in the question, this is one way of detecting non-classicality. That leads us to...

+ +
+

What is quasi-probability?

+
+ +

(which may be what you're meaning by pseudoprobability). These are distributions that behave a lot like probabilities in many ways, but relax at least one of the constraints in the definition, usually the non-negativity of the elements. According to Wikipedia, any density matrix can be written as a diagonal matrix using an over-complete basis. Those diagonal elements then form a quasi-probability distribution - some of the elements can be negative.

+ +
+

What is (non)-contextuality?

+
+ +

Contextuality is another test that can be used to prove the non-classicality of a quantum system. This is a substantial topic that I'm not inclined to address in answer to small part of a question. You probably want to start finding out about the Kochen-Specker Theorem.

+ +

It is worth noting that Bell tests, such as the CHSH test, can be considered as contextuality tests, they're just made a little simpler because they're supplemented with some extra information about non-locality between certain measurement operators, ensuring their commutation. So, with CHSH, you evaluate some expectation value $S$. If $|S|\leq 2$, the state is classical, while if $|S|>2$, it cannot be explained by a local hidden variable model; the state is non-classical.
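To make the CHSH number concrete, here is a small illustrative sketch evaluating $S$ for the singlet state, whose correlator at measurement angles $a,b$ is $E(a,b)=-\cos(a-b)$:

```python
import numpy as np

def E(a, b):
    # Singlet-state correlator for measurement angles a, b.
    return -np.cos(a - b)

a0, a1 = 0.0, np.pi / 2
b0, b1 = np.pi / 4, 3 * np.pi / 4

S = E(a0, b0) - E(a0, b1) + E(a1, b0) + E(a1, b1)
print(abs(S))  # 2*sqrt(2) ≈ 2.83 > 2: no local hidden variable model
```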

+",1837,,,,,07-04-2018 07:00,,,,0,,,,CC BY-SA 4.0 +2603,1,,,07-04-2018 07:08,,15,1078,"

Most of us on this site believe that quantum computing will work. However, let's play devil's advocate. Imagine that we suddenly hit some fundamental stumbling block that prevented further development towards a universal quantum computer. Perhaps we're limited to a NISQ device (Noisy, Intermediate Scale Quantum) of 50-200 qubits, for the sake of argument. The study of (experimental) quantum computing suddenly stops and no further progress is made.

+ +

What good has already come out of the study of quantum computers?

+ +

By this, I mean realisable quantum technologies, the most obvious candidate being Quantum Key Distribution, but also technical results that feed into other fields. Rather than simply a list of items, a brief description of each would be appreciated.

+",1837,,26,,01-02-2019 17:52,10/19/2019 12:50,What use has quantum computing been?,,4,3,,,,CC BY-SA 4.0 +2604,1,2605,,07-04-2018 09:01,,17,1874,"

The quantum phase estimation algorithm (QPE) computes an approximation of the eigenvalue associated to a given eigenvector of a quantum gate $U$.

+ +

Formally, let $\left|\psi\right>$ be an eigenvector of $U$; QPE allows us to find $\vert\tilde\theta\rangle$, the best $m$-bit approximation of $\lfloor2^m\theta\rfloor$, where $\theta \in [0,1)$ and +$$U\vert\psi\rangle = e^{2\pi i \theta} \vert\psi\rangle.$$

+ +

The HHL algorithm (original paper) takes as input a matrix $A$ that satisfies $$e^{iAt} \text{ is unitary } $$ and a quantum state $\vert b \rangle$, and computes $\vert x \rangle$, which encodes the solution of the linear system $Ax = b$.

+ +

Remark: Every Hermitian matrix satisfies the condition on $A$.

+ +

To do so, HHL algorithm uses the QPE on the quantum gate represented by $U = e^{iAt}$. Thanks to linear algebra results, we know that if $\left\{\lambda_j\right\}_j$ are the eigenvalues of $A$ then $\left\{e^{i\lambda_j t}\right\}_j$ are the eigenvalues of $U$. This result is also stated in Quantum linear systems algorithms: a primer (Dervovic, Herbster, Mountney, Severini, Usher & Wossnig, 2018) (page 29, between equations 68 and 69).

+ +

With the help of QPE, the first step of the HHL algorithm will try to estimate $\theta \in [0,1)$ such that $e^{i2\pi \theta} = e^{i\lambda_j t}$. This leads us to the equation +$$2\pi \theta = \lambda_j t + 2k\pi, \qquad k\in \mathbb{Z}, \ \theta \in [0,1)$$ +i.e. +$$\theta = \frac{\lambda_j t}{2\pi} + k, \qquad k\in \mathbb{Z}, \ \theta \in [0,1)$$ +By analysing the implications of the conditions $k\in \mathbb{Z}$ and $\theta \in [0,1)$ a little, I ended up with the conclusion that if $\frac{\lambda_j t}{2\pi} \notin [0,1)$ (i.e. $k \neq 0$), the phase estimation algorithm fails to predict the right eigenvalue.
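A small numeric illustration of this wrap-around, with made-up values of $\lambda_j$ and $t$:

```python
import numpy as np

# QPE can only recover theta = (lam * t / (2*pi)) mod 1, since
# exp(i * lam * t) is unchanged by adding multiples of 2*pi to lam * t.
lam, t = 5.0, 2.0                      # lam * t / (2*pi) ≈ 1.59 > 1
theta = (lam * t / (2 * np.pi)) % 1.0  # what QPE would report
lam_est = 2 * np.pi * theta / t
print(lam_est)  # ≈ 1.858 = 5.0 - pi, not 5.0: the estimate aliased
```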

+ +

But as $A$ can be any Hermitian matrix, we can choose its eigenvalues freely; in particular, we could choose arbitrarily large eigenvalues for $A$ such that the QPE will fail ($\frac{\lambda_j t}{2\pi} \notin [0,1)$).

+ +

In Quantum Circuit Design for Solving Linear Systems of Equations (Cao, Daskin, Frankel & Kais, 2012) they solve this problem by simulating $e^{\frac{iAt}{16}}$, knowing that the eigenvalues of $A$ are $\left\{ 1, 2, 4, 8 \right\}$. They normalised the matrix (and its eigenvalues) to avoid the case where $\frac{\lambda_j t}{2\pi} \notin [0,1)$.

+ +

On the other hand, it seems like the parameter $t$ could be used to do this normalisation.

+ +

Question: Do we need to know an upper bound on the eigenvalues of $A$ to normalise the matrix and be sure that the QPE part of the HHL algorithm will succeed? If not, how can we ensure that the QPE will succeed (i.e. $\frac{\lambda_j t}{2\pi} \in [0,1)$)?

+",1386,,26,,8/24/2018 18:22,8/24/2018 18:24,Quantum phase estimation and HHL algorithm - knowledge of eigenvalues required?,,1,2,,,,CC BY-SA 4.0 +2605,2,,2604,07-04-2018 09:36,,9,,"

You should know a bound on the eigenvalues (both upper and lower). As you say, you can then normalise $A$ by rescaling $t$. Indeed, you should do this to get the most accurate estimate possible, spreading the values $\lambda t$ over the full $2\pi$ range. Bounding the eigenvalues is not typically a problem. For example, you're probably requiring your matrix $A$ to be sparse, so that there aren't too many non-zero matrix elements on each row. Indeed, the problem specification probably gives you a bound on the number $N$ of non-zero entries per row, and the maximum value of any entry $Q$.

+ +

Then you could apply something like Gershgorin's circle theorem. This states that the maximum eigenvalue is upper bounded by +$$ +\max_i a_{ii}+\sum_{j\neq i}|a_{ij}|\leq NQ, +$$ +and the minimum is lower bounded by +$$ +\min_ia_{ii}-\sum_{j\neq i}|a_{ij}|\geq -NQ. +$$ +The $a_{ij}$ are the matrix elements of $A$.
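As a quick sanity check of the theorem (an illustrative NumPy sketch, not part of the complexity argument):

```python
import numpy as np

def gershgorin_bounds(A):
    """Eigenvalue bounds for a Hermitian A from Gershgorin's circle theorem."""
    centers = np.real(np.diag(A))
    radii = np.sum(np.abs(A), axis=1) - np.abs(np.diag(A))
    return np.min(centers - radii), np.max(centers + radii)

A = np.array([[2.0, 0.5, 0.0],
              [0.5, 1.0, 0.3],
              [0.0, 0.3, 3.0]])

lo, hi = gershgorin_bounds(A)
evals = np.linalg.eigvalsh(A)
assert lo <= evals.min() and evals.max() <= hi  # spectrum inside the bounds
```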

+ +

Within the values of $N$, $Q$, if you're worrying that for a large matrix (say $n$ qubits), while the row sum might be easy to calculate (because there's not many entries), the max over all rows might take a long time (because there's $2^n$ rows), there will be a variety of ways to get good approximations to it (e.g. sampling, or using knowledge of the problem structure). Worst case, you can probably use Grover's search to speed it up a bit.

+",1837,,26,,8/24/2018 18:24,8/24/2018 18:24,,,,3,,,,CC BY-SA 4.0 +2606,2,,2594,07-04-2018 10:45,,6,,"

Whether you use $+i$ or $-i$ is entirely up to you. After all, your definition of $\pm i$ is merely a convention. On the other hand, I think I've only ever seen it with $+i$.

+ +

On a more general footing, you can consider that iSWAP is the gate obtained by time-evolving with an XX interaction ($H=-\sigma_x\otimes\sigma_x - \sigma_y\otimes\sigma_y$), in which case it depends on which sign of $i$ in the Schrödinger equation and for the Hamiltonian you prefer. (You get $+i$ if you evolve with $\exp[-iHt]$, $t=\pi/4$, and chose the minus sign in the Hamiltonian as above, i.e. a ferromagnet).

+",491,,,,,07-04-2018 10:45,,,,3,,,,CC BY-SA 4.0 +2607,1,2673,,07-04-2018 14:02,,9,1705,"

As a non-mathematician/software programmer I'm trying to grasp how the QFT (Quantum Fourier Transform) works.

+ +

Following this YouTube video: https://www.youtube.com/watch?v=wUwZZaI5u0c

+ +

And this blogpost: https://www.scottaaronson.com/blog/?p=208

+ +

I've got a basic understanding on how you can calculate/construct the period using interference. But while trying to explain this to a colleague I ran into a problem. I used the following examples, N=15, a=7, so the period I'll need to find is r=4.

+ +

The pattern is: +7, 4, 13, 1, 7, 4, 13, 1, 7, 4, 13, 1 (etc)
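In code, this pattern comes from repeated modular multiplication by a, starting from 1:

```python
# Repeated modular multiplication by a = 7 modulo N = 15,
# i.e. the successive powers 7^k mod 15.
N, a = 15, 7
x, seq = 1, []
for _ in range(8):
    x = (x * a) % N
    seq.append(x)
print(seq)  # [7, 4, 13, 1, 7, 4, 13, 1] -> period r = 4
```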

+ +

If I imagine the wheel (like in the YouTube video) or a clock (like the blogpost) I can see that the circle with 4 dots/clock with 4 hours creates a constructive pattern and the others do not.

+ +

But what happens with a circle with 2 dots, or a clock with 2 hours? Won't those get the same magnitude/constructive pattern as 4? It loops twice as fast, but other than that, same result?

+ +

How does the QFT cope with this?

+ +

(Bonus: Can you explain in layman's terms without too much complicated math?)

+",2972,,,,,07-08-2018 23:54,Simplified explanation of Shor/QFT transformation as thumbtack,,2,0,,,,CC BY-SA 4.0 +2608,2,,2588,07-04-2018 15:42,,1,,"

Yes, other papers have studied phononic qubits after that one. I could just list them here, but I think it would be better for you to learn how to find such papers yourself, so here is how I found the papers:

+ +

Find the paper on Google, then click the link I have highlighted in yellow:

+ +

+ +

Out of the 39 papers that cited this one, I have listed some (but not all) of the relevant ones below:

+ + + +

As for advantages and disadvantages, perhaps the only advantage would be robustness to decoherence, which is claimed in the paper you mentioned, but not yet backed up very much.

+ +

There are lots of disadvantages though:

+ +
    +
  • Phonons are quasi-particles, not fundamental particles. This gives phonons a similar disadvantage to anyons. We all know how hard it is to build a quantum computer with anyons.
  • +
  • You almost never find one isolated phonon, you get a huge spectrum of phonons at the same time.
  • +
  • The technology required for phononic qubits seems to have had its beginnings in 2012, whereas for NMR qubits it goes back to at least the 1940s and for superconducting qubits it goes back to at least the 1960s. Phononic qubits are far behind any other candidate for quantum computers, so there is lot of catching up to do.
  • +
+",2293,,,,,07-04-2018 15:42,,,,0,,,,CC BY-SA 4.0 +2609,1,2610,,07-04-2018 17:35,,4,378,"

I want to express the square root of NOT as a time-dependent unitary matrix such that every $n$ units of time, the square root of NOT is produced.

+ +

More precisely, I want to find a $U(t_0,t_1)$ such that $U(t_0,t_1) = \sqrt{\text{NOT}}$, if $t_1-t_0=n$ for some $n$.

+ +

One possible solution is to express $\sqrt{\text{NOT}}$ as a product of rotation matrices, and then, parametrize the angles in a clever way to depend on the time. But I do not know how to express $\sqrt{\text{NOT}}$ as a product of rotation matrices.

+ +

Any help?

+",2978,,26,,12/23/2018 13:28,12/23/2018 13:28,Square root of NOT as a time-dependent unitary matrix,,1,1,,,,CC BY-SA 4.0 +2610,2,,2609,07-04-2018 18:20,,4,,"

$$ +\sqrt{NOT} = e^{(\frac{i \pi}{4} I_2 - \frac{i \pi}{4} \sigma_x)}\\ +U(t) = e^{\frac{t-t_0}{t_1 - t_0} (\frac{i \pi}{4} I_2 - \frac{i \pi}{4} \sigma_x)} +$$
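A quick numerical check of this (a NumPy sketch using the closed form of the exponential, which holds since $I_2$ and $\sigma_x$ commute):

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)  # sigma_x (NOT)

def U(t, t0=0.0, t1=1.0):
    # exp(s*(i*pi/4)*(I - X)) = exp(i*s*pi/4) * (cos(s*pi/4) I - i sin(s*pi/4) X)
    s = (t - t0) / (t1 - t0)
    a = s * np.pi / 4
    return np.exp(1j * a) * (np.cos(a) * I2 - 1j * np.sin(a) * X)

assert np.allclose(U(0.0), I2)          # identity at t = t0
assert np.allclose(U(1.0) @ U(1.0), X)  # U at t = t1 squares to NOT
```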

+",434,,,,,07-04-2018 18:20,,,,5,,,,CC BY-SA 4.0 +2613,2,,2595,07-05-2018 08:31,,5,,"

I don't claim to have a full answer (yet! I hope to update this, as it's an interesting issue to try and explain well). But let me start with a few clarifying comments...

+ +
+

But if it really is just constructive interference of complicated states, why not just perform this interference with classical waves?

+
+ +

The glib answer is that it's not just interference. I think what it really comes down to is that quantum mechanics uses different axioms of probability (probability amplitudes) to classical physics, and these are not reproduced in the wave scenario.

+ +

When someone writes about ""waves"", I naturally think about water waves, but that may not be the most helpful picture to have. Let's think instead about an ideal guitar string. On a string of length $L$ (pinned at both ends, so the modes have nodes there), this has wavefunctions +$$ +y_n(x,t)=A_n\sin\left(\omega_nt\right)\sin\left(\frac{n\pi x}{L}\right). +$$ +Let's define the concept of a w-bit (""wave bit""). We can limit ourselves to, say, 4 modes on the string, so you can associate +$$ +|00\rangle\equiv y_1 \qquad |01\rangle\equiv y_2\qquad |10\rangle\equiv y_3 \qquad |11\rangle\equiv y_4 +$$ +Now since we can prepare the initial shape of the string to be anything we want (subject to the boundary conditions), we can create any arbitrary superposition of those 4 states. So, the theory certainly includes things that look like superposition and entanglement.

+ +

However, they are not superposition and entanglement as we understand them in quantum theory. A key feature of quantum theory is that it contains indeterminism - that the results of some outcomes are inherently unpredictable. We don't start or end our computation from these points, but we must go through them somewhere during the computation$^*$. For example, experimental tests of Bell's Theorem have proven that the world is not deterministic (and, so far, conforms to what quantum theory predicts). The wave-bit theory is entirely deterministic: I can look at the string of my guitar, whatever weird shape it might be in, and my looking at it does not change its shape. Moreover, I can even determine the values of the $\{A_n\}$ in a single shot, and therefore know what shape it will be in at all later times. This is very different to quantum theory, where there are different bases that can give me different information, but I can never access all of it (indeterminism).

+ +

$^*$ I don't have a complete proof of this. We know that entanglement is necessary for quantum computation, and that entanglement can demonstrate indeterminism, but that's not quite enough for a precise statement. Contextuality is a similar measure of indeterminism but for single qubits, and results along those lines have started to become available recently, see here, for broad classes of computations.

+ +
+ +

Another way to think about this might be to ask what computational operations we can perform with these waves? Presumably, even if you allow some non-linear interactions, the operations can be simulated by a classical computer (after all, classical gates include non-linearity). I assume that the $\{A_n\}$ function like classical probabilities, not probability amplitudes.

+ +

This might be one way of seeing the difference (or at least heading in the right direction). There's a way of performing quantum computation called measurement-based quantum computation. You prepare your system in some particular state (which, we've already agreed, we could do with our w-bits), and then you measure the different qubits. Your choice of measurement basis determines the computation. But we can't do that here because we don't have that choice of basis.

+ +
+

And on that matter, if the figure-of-merit is simply how few steps something can be calculated in, why not start with a complicated dynamical system that has the desired computation embedded in it. (ie, why not just create ""analog simulators"" for specific problems?)

+
+ +

This is not the figure of merit. The figure of merit is really ""How long does it take to perform the computation"" and ""how does that time scale as the problem size changes?"". If we choose to break everything down in terms of elementary gates, then the first question is essentially how many gates are there, and the second is how does the number of gates scale. But we don't have to break it down like that. There are plenty of ""analog quantum simulators"". Feynman's original specification of a quantum computer was one such analogue simulator. It's just that the time feature manifests in a different way. There, you're talking about implementing a Hamiltonian evolution $H$ for a particular time $t_0$, $e^{iHt_0}$. Now, sure, you could implement $2H$, and replace $t_0$ with $t_0/2$, but practically, the coupling strengths in $H$ are limited, so there's a finite time that things take, and we can still demand how that scales with the problem size. Similarly, there's adiabatic quantum computation. There, the time required is determined by the energy gap between the ground and the first excited state. The smaller the gap, the longer your computation takes. We know that all 3 models are equivalent in the time they take (up to polynomial conversion factors, which are essentially irrelevant if you're talking about an exponential speed-up).

+ +

So, analog quantum simulators are certainly a thing, and there are those of us who think they're a very sensible thing at least in the short-term. My research, for example, is very much about ""how do we design Hamiltonians $H$ so that their time evolution $e^{-iHt_0}$ creates the operations that we want?"", aiming to do everything we can in a language that is ""natural"" for a given quantum system, rather than having to coerce it into performing a whole weird sequence of quantum gates.

+",1837,,1837,,07-09-2018 07:02,07-09-2018 07:02,,,,9,,,,CC BY-SA 4.0 +2622,1,4551,,07-05-2018 09:26,,9,911,"

This question is a continuation of Quantum phase estimation and HHL algorithm - knowledge on eigenvalues required?.

+ +
+ +

In the question linked above, I asked about the necessity for HHL to have information on the eigenspectrum of the matrix $A$ considered. +It turned out that the HHL algorithm needs a matrix with eigenvalues $\lambda_j \in [0,1)$ to work correctly.

+ +

Following this condition, given a matrix $A$, in order to apply the HHL algorithm we need to check one of the condition below:

+ +
    +
  1. The eigenvalues of the matrix are all within $[0,1)$.
  2. +
  3. A pair $(L,M) \in \mathbb{R}^2$ that bounds (from below for $L$ and from above for $M$) the eigenvalues $\lambda_j$ of the matrix $A$. These bounds can then be used to rescale the matrix $A$ such that condition 1. is satisfied.
  4. +
+ +
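To illustrate how condition 2 reduces to condition 1 (a toy NumPy sketch; note that shifting $A$ changes the linear system, so in HHL itself the rescaling would typically be absorbed into the evolution time $t$ instead):

```python
import numpy as np

def rescale(A, L, M, eps=1e-9):
    """Map a Hermitian A with spectrum in [L, M] to one with spectrum in [0, 1)."""
    n = A.shape[0]
    return (A - L * np.eye(n)) / (M - L + eps)

A = np.diag([-3.0, 2.0, 7.0])   # toy matrix with known eigenvalues
B = rescale(A, L=-3.0, M=7.0)
evals = np.linalg.eigvalsh(B)
assert evals.min() >= 0 and evals.max() < 1
```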

First group of questions: I read plenty of papers on HHL and none of them even mentioned this restriction. Why? Is this restriction known but considered weak (i.e. it's easy to have this kind of information)? Or the restriction was not known? Is there any research paper that mention this restriction?

+ +
+ +

Let's talk now about the complexity analysis of HHL. From Quantum linear systems algorithms: a primer (Dervovic, Herbster, Mountney, Severini, Usher & Wossnig, 2018), the complexity of HHL (and several improvements) is written in the image below.

+ +

+ +

The complexity analysis does not take into account (at least I did not find it) the necessary knowledge on the eigenspectrum.

+ +

The case where the considered matrix has sufficiently good properties to estimate its eigenvalues analytically is uncommon (at least for real-world matrices) and is ignored.

+ +

In this answer, @DaftWullie uses Gershgorin's circle theorem to estimate the upper and lower bounds of the eigenspectrum. The problem with this approach is that it takes $\mathcal{O}(N)$ operations ($\mathcal{O}(\sqrt{N})$ if amplitude amplification is applicable). This number of operations destroys the logarithmic complexity of HHL (and with it HHL's only advantage over classical algorithms).

+ +

Second group of questions: Is there a better algorithm in terms of complexity? If not, then why is the HHL algorithm still presented as an exponential improvement over classical algorithms?

+",1386,,1386,,07-05-2018 09:46,10/27/2018 6:04,HHL algorithm -- why isn't the required knowledge on eigenspectrum a major drawback?,,1,4,,,,CC BY-SA 4.0 +2623,2,,1500,07-05-2018 09:33,,5,,"

The device works at cryogenic temperatures, which are so cold that all the gases would freeze onto the experimental device. Moreover, before they do so, they would conduct heat from the walls of the chamber to the device and make it hard to cool the device down.

+ +

Thus, you need a vacuum to be able to cool things down to a very low temperature, and once the device is cold, the presence of the cold elements even improves the vacuum, as the little residual gas you were unable to pump out before will freeze - an effect known as cryopumping.

+ +

See also +Why must quantum computers be kept near absolute zero?

+",1989,,1989,,11/19/2018 13:03,11/19/2018 13:03,,,,0,,,,CC BY-SA 4.0 +2624,2,,117,07-05-2018 09:39,,2,,"

DanielSank is correct, but I think the answer is actually even more subtle. If there were no loss, there would also be no way for the background radiation to leak into your quantum device. Even if the device were initially thermally excited, one could actively reset the state of the qubits. Thus, in addition to thermal excitations of microwave qubits, the fundamental reason for cooling them down to such low temperatures is really the dielectric loss of the materials the quantum state lives in.

+ +

Air imposes almost no loss on optical photons, but electric circuits do attenuate the microwave-frequency plasmons carrying the quantum information. So far the only way to get rid of these losses is to use superconductors and, in addition, to go to cryogenic temperatures much lower than the critical temperature of the superconductors, but there is no fundamental reason for not being able to use higher temperatures in the future, once materials with lower loss become available.

+",1989,,1989,,11/19/2018 13:08,11/19/2018 13:08,,,,0,,,,CC BY-SA 4.0 +2625,1,2626,,07-05-2018 10:27,,9,466,"

I read about the Solovay-Kitaev algorithm for approximating arbitrary single-qubit unitaries. However, while implementing the algorithm, I got stuck on the basic approximation at depth 0 of the recursion.

+ +

Can someone help me with how to implement the basic approximation such that, given any $2 {\times} 2$ matrix in $\operatorname{SU}\left(2\right)$, it returns a sequence of gates from the set $\left\{H,T,S\right\}$ that approximates the matrix to within about 0.00001 in trace-norm distance?

+ +

Also, if I am using brute-force or kd trees, up to what gate length $l_0$ should I consider to obtain an initial approximation of $0.00001$ for any arbitrary matrix in $\operatorname{SU}\left(2\right)$?

+",2771,,26,,07-07-2018 16:38,07-09-2018 09:19,Basic approximation in Solovay-Kitaev algorithm,,1,0,,,,CC BY-SA 4.0 +2626,2,,2625,07-05-2018 11:32,,5,,"

I don't pretend that this is optimal in the sense of minimal number of applications, but here's one method that comes from the universality proof...

+ +
    +
  • The unitary that you want to implement can be parametrised by $U=\cos\gamma\mathbb{I}-i\sin\gamma\ \underline{m}\cdot\underline{\sigma}$ where $\underline\sigma$ is the vector of Pauli matrices $X$, $Y$, $Z$. If you don't know the values you can get them from e.g. $\cos\gamma=\text{Tr}(U)/2$, $\sin\gamma\ m_X=\text{Tr}(XU)/2$ and so on.
  • +
  • You can implement two unitaries $HTHT=\cos\theta\mathbb{I}-i\sin\theta \underline{n}_1\cdot\underline{\sigma}=R_1(\theta)$ and $THTH=\cos\theta\mathbb{I}-i\sin\theta \underline{n}_2\cdot\underline{\sigma}=R_2(\theta)$. Make sure you know what $\theta$, $\underline{n}_1$ and $\underline{n}_2$ are.
  • +
  • Your first goal is to work out how to express $U$, your target unitary, in the form $e^{i\alpha}R_1(\phi_1)R_2(\phi_2)R_1(\phi_3)$. Again, evaluate things like $\text{Tr}(U),\ \text{Tr}(\underline{n_1}\cdot\underline{\sigma} U)$, but using the new decomposition, and you'll have a set of 3 simultaneous equations to solve for 3 parameters. For example (you'll need to check these!), +$$ +e^{i\alpha}\cos\gamma=\cos\phi_2\cos(\phi_1+\phi_3)-\sin\phi_2\sin(\phi_1+\phi_3)\underline{n}_1\cdot\underline{n}_2\\ +e^{i\alpha}\sin\gamma \underline{m}\cdot\underline{n}_1=\cos\phi_2\sin(\phi_1+\phi_3)-\sin\phi_2\cos(\phi_1+\phi_3)\underline{n}_1\cdot\underline{n}_2 +$$
  • +
  • Now, you want to create a good approximation to the angle $\phi_i$, but you can only repeat sequences such as $HTHT$ an integer number of times, $q_i$. Thus we can create angles $q_i\theta$, but angles are only important modulo $2\pi$. Thus, for each $\phi_i$, find the smallest positive integer $q_i$ such that $|q_i\theta \text{ mod }2\pi-\phi_i|<\epsilon$ for some small parameter $\epsilon$. This means that by repeating $HTHT$ $q_3$ times, then $THTH$ $q_2$ times, then $HTHT$ $q_1$ times, you create each of the 3 rotations about the correct axis, to an angle that is within $\epsilon$ accuracy for each.
  • +
  • Your final task is to work out how the accuracy $\epsilon$ on each angle corresponds to an overall accuracy on the unitary. If you think about a perturbative expansion of each term, the error is probably about $3\epsilon$. So, now you can work backwards to find $\epsilon$ and know what you need.
  • +
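As a numerical sanity check of the parametrisation used above, the following NumPy sketch recovers $\theta$ and $\underline{n}$ for the product $HTHT$ from the trace formulas (the global-phase stripping and sign conventions are choices, so check them against your own working):

```python
import numpy as np

# Build HTHT and strip the global phase so the result lies in SU(2).
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
T = np.diag([1, np.exp(1j * np.pi / 4)])
X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]])

U = H @ T @ H @ T
U = U / np.sqrt(np.linalg.det(U))  # now det(U) = 1, so Tr(U) is real

# U = cos(theta) I - i sin(theta) n.sigma  =>  Tr(U) = 2 cos(theta),
# Tr(sigma_k U) = -2i sin(theta) n_k.
theta = np.arccos(np.real(np.trace(U)) / 2)
n = np.array([-np.imag(np.trace(P @ U)) for P in (X, Y, Z)]) / (2 * np.sin(theta))

# Self-check: rebuild U from (theta, n).
assert np.allclose(U, np.cos(theta) * np.eye(2)
                      - 1j * np.sin(theta) * (n[0] * X + n[1] * Y + n[2] * Z))
print(theta, n)
```

The same recipe applied to your target $U$ gives $\gamma$ and $\underline{m}$.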
+",1837,,1837,,07-09-2018 09:19,07-09-2018 09:19,,,,7,,,,CC BY-SA 4.0 +2627,1,,,07-05-2018 15:35,,4,948,"

I want to run an experiment like this:

+ +
    +
  1. Generate a bunch of random 12-character passwords like $``\texttt{<Bb\{Q,r2Qp8`}"".$

  2. +
  3. Write an algorithm to randomly generate & compare values on a quantum computer.

  4. +
  5. If the value is found, return the number of tries it took to find it, let's say $6102111820800.$

  6. +
+ +

The only available quantum computer I know of is IBM's quantum computing cloud service.

+ +

Questions:

+ +
    +
  1. Is it possible to run this program on existing quantum computers?

  2. +
  3. If so, how fast would it be?

  4. +
+",2994,,15,,07-05-2018 17:05,07-08-2018 03:23,Is running a large random brute force on quantum computer possible at the moment?,,2,6,,,,CC BY-SA 4.0 +2632,2,,2607,07-05-2018 20:51,,2,,"

In your example, the pattern is made by a modular multiplication function or circuit f(x) = ax (mod N). This quantum circuit and pattern are also given in the IBM Q manual of the IBM Q Experience.

+ +

+ +

So, in a loop with starting input x = 1:

+ +

x=1 f(x) = 7 * 1 (mod 15) = 7

+ +

x=7 f(x) = 7 * 7 (mod 15) = 4

+ +

x=4 f(x) = 7 * 4 (mod 15) = 13

+ +

x=13 f(x) = 7 * 13 (mod 15) = 1

+ +

The pattern 1 7 4 13 1 is repeated every 4 iterations. +So the circuit is fixed for a given a and mod 15 and always returns r = 4. If you want r = 2, you need another multiplier function.
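A quick classical check of this period (a plain Python sketch, not part of the quantum circuit):

```python
# Iterate f(x) = a*x mod n starting from x = 1 and count the steps
# until x returns to 1; for a = 7, n = 15 this reproduces 1 7 4 13 1.
def multiplicative_order(a, n):
    x, r = a % n, 1
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

print(multiplicative_order(7, 15))  # -> 4
```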

+",1773,,1773,,07-05-2018 20:57,07-05-2018 20:57,,,,0,,,,CC BY-SA 4.0 +2635,2,,2603,07-06-2018 04:57,,7,,"

There are a lot of interesting applications that use similar technology. A lot of labs that work towards quantum computing also publish papers with these applications.

+ +

Here are some:

+ +

All-optical computation. Personally, I think this has more potential than quantum computing, as it has already been shown to be useful for quickly processing neural networks (and other algorithms involving matrix multiplication and nonlinear functions). These on-chip systems are made in the same labs (and by the same people) as measurement-based linear quantum computing. Designing systems capable of operating faster than semiconductor clock speeds, lowering the minimum power per operation using light, and increasing parallelization will probably get us very far without needing to change algorithmic architectures.

+ +

Quantum simulation. Richard Feynman's original dream of ""quantum computers"" is now what is referred to as ""quantum analog simulators."" Nature acts like nature. It can be hard to compute analytically or digitally how a Hydrogen atom behaves, but using a system with a similar Hamiltonian can ""do the math for you."" Optical lattices (which are sometimes used for quantum computing of ions) can be used for these quantum simulators. It is very difficult to do calculations of molecules using fundamental physics, and chemistry is full of heuristics to deal with these difficulties.

+ +

Quantum state reconstruction. A usually unmentioned open problem in quantum information and computing is how to reconstruct highly entangled many-qubit states. Even if quantum computing doesn't work out, advances made on these open questions might be helpful in the future (for instance, for key distribution protocols and information theory).

+ +

Quantum Communication. Quantum Key distribution is probably the only working practical application created so far from quantum information. It allows information to be transferred safely without the possibility of eavesdroppers. High-fidelity photon gate operations (created for quantum computers) could allow for efficient quantum repeaters, which could extend the maximum distance that can be traveled.

+ +

Extra Fun Things. Personally, I think the most interesting thing is answering whether the brain is a quantum computer. The possibility of the brain being a quantum computer has been eye-rolled at by many physicists for the last decade, dismissed on the grounds that the brain's high temperatures would destroy coherence, but highly reputable (and commendable) physicists have recently challenged this notion. One discusses how nuclear spins could be the mediator of quantum information, another how experiments could be carried out to investigate whether axons are operating as waveguides.

+",2660,,55,,7/15/2018 16:26,7/15/2018 16:26,,,,0,,,,CC BY-SA 4.0 +2636,2,,2627,07-06-2018 07:52,,3,,"
+

I want to run a experiment like this:

+
    +
  1. Generate a bunch of random 12-character passwords like ";Bb{Q,r2Qp8`" (changed the first character because it interfered with the citation style).
  2. +
+
+

Let's say your characters are encoded in extended ASCII, i.e. they have a value between 0 and 255. You need 8 classical bits to represent one character. One could hope to encode this value on as few as 3 qubits (whose state has $2^3 = 8$ amplitudes), but you need to make a compromise here:

+
    +
  1. If you have access to an external source of randomness, then you can use it to generate a random quantum state (by applying random gates to the initial quantum state for example). In this case, the amplitudes of the obtained quantum state may represent your character (you still need to find how to represent a random character from complex non-integer numbers) and, depending on the encoding you use, you may need fewer than 8 qubits.

    +
  2. +
  3. If you don't have access to an external source of randomness or if you want your random numbers to be "perfect", you can use quantum superposition and measurement to generate perfect random integers. This can be done by taking 8 qubits, applying the H gate to each of them and measuring them in the computational basis. With this algorithm, you will have 8 qubits, each in the state $\vert0\rangle$ or $\vert1\rangle$ (i.e. a random 8-bit number). With this method, 1 character = 8 qubits.

    +
  4. +
+
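As an illustration of method 2, here is a purely classical sketch in which an ordinary pseudo-random generator stands in for the quantum measurements (a real device would supply the measured bits instead):

```python
import numpy as np

rng = np.random.default_rng()  # stand-in for measuring 8 qubits in |+>

def random_character_value():
    bits = rng.integers(0, 2, size=8)        # 8 unbiased coin flips
    return int("".join(map(str, bits)), 2)   # a value in 0..255

password = [random_character_value() for _ in range(12)]
print(password)
```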
+
    +
  1. Write an algorithm to randomly generate & compare value on quantum computer.
  2. +
+
+

The generation can be done in the same way as above. For the comparison, it depends on the method you used for the generation and how you represent your characters:

+
    +
  1. As the characters are encoded in the amplitudes, you can use the SWAP test to check for closeness.

    +
  2. +
  3. Here, the qubits are just classical bits so you could just measure them and check classically for equality.

    +
  4. +
+
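For intuition about what the SWAP test reports: measuring its ancilla gives outcome $0$ with probability $(1+|\langle\psi\vert\phi\rangle|^2)/2$, so repeated runs estimate closeness. A classical sketch of that probability (computed directly from the overlap, not by simulating the circuit):

```python
import numpy as np

def swap_test_p0(psi, phi):
    # Probability of measuring 0 on the SWAP-test ancilla.
    return (1 + abs(np.vdot(psi, phi)) ** 2) / 2

zero = np.array([1.0, 0.0])
one = np.array([0.0, 1.0])
plus = np.array([1.0, 1.0]) / np.sqrt(2)

print(swap_test_p0(zero, zero))  # identical states   -> 1.0
print(swap_test_p0(zero, plus))  # overlap 1/sqrt(2)  -> 0.75
print(swap_test_p0(zero, one))   # orthogonal states  -> 0.5
```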
+
    +
  1. If the value was found, return number of time it take to generate, let's say 6102111820800.
  2. +
+
+

Again, depending on what you want (but the bullet points here are not related to the methods above):

+
    +
  1. You could count the try-fails in a classical register, just by incrementing a classical counter at each fail. If you used the SWAP test, you can measure the ancillary qubits and update the counter.

    +
  2. +
  3. If you want to encode your counter in a quantum state, you need a circuit that will increment the value of a register. You can find a way to construct such a circuit here for example. Then, if you used the SWAP test you can either read the qubit and apply the increment operation or directly apply the increment operation controlled by the state of the ancillary qubit.

    +
  4. +
+
+

The only available quantum computer I know of is IBM's quantum computing cloud service.

+

Questions:

+
    +
  1. Is it possible to run this program on existing quantum computers?
  2. +
+
+

It depends on the method you take: the first method may be able to run on existing chips, but the second one would need at least $96 = 12 \times 8$ qubits to store the 12 characters, which is above the maximum number of qubits currently available.

+
+
    +
  1. If so, how fast would it be?
  2. +
+
+

It will be sloooooooooooow. The first method may be able to use quantum superposition to speed up the computations, but the second method uses quantum superposition only to generate random numbers, and then treats them classically.

+",1386,,-1,,6/18/2020 8:31,07-06-2018 07:52,,,,5,,,,CC BY-SA 4.0 +2643,2,,2439,07-06-2018 09:30,,0,,"

The HHL algorithm with a 4 x 4 matrix A might be too large for the IBM computer. I tried a smaller toy version of the algorithm following arXiv:1302.1210 link Solving systems of linear equations

+ +

I explained a little bit about this circuit here at stackexchange: +https://cs.stackexchange.com/questions/76525/could-a-quantum-computer-perform-linear-algebra-faster-than-a-classical-computer/77036#77036

+ +

Unfortunately it is only a 1-qubit input with a 2 x 2 matrix A; in the answer, a link to the IBM circuit is given.

+",1773,,,,,07-06-2018 09:30,,,,2,,,,CC BY-SA 4.0 +2650,2,,2603,07-06-2018 11:54,,4,,"

Performing and checking basic quantum-mechanics experiments +Before the IBM and Alibaba quantum cloud computers, you would need an expensive lab to do simple CHSH or GHZ experiments. Of course the qubits in the IBM computer are not loophole-free, but many institutes and colleges could not afford better experimental facilities within their physics budget. So basic quantum-mechanics experiments can now be done very easily.

+ +

Quantum programming tools and experiments +Furthermore, basic research on quantum-computer programming tools, like compilers and mapping algorithms, can now be tested on real machines.

+ +

This has led to 113 papers with real and tested quantum algorithms for the IBM computer alone, and many more in general. +qc papers

+",1773,,,,,07-06-2018 11:54,,,,0,,,,CC BY-SA 4.0 +2651,1,,,07-06-2018 14:31,,13,730,"

What are the prominent visualizations used to depict large, entangled states and in what context are they most commonly applied?

+

What are their advantages and disadvantages?

+",391,,13968,,7/20/2021 13:02,7/20/2021 13:02,"What are the possible ways to visualise large, entangled states?",,3,0,,,,CC BY-SA 4.0 +2652,2,,2458,07-06-2018 15:20,,5,,"

The numbers you are describing are very, very large. To the point, it appears, that they are numbers whose representation in decimal (or binary) is large enough that there is little to no prospect of there being enough matter in the entire universe to store those numbers in a place-value representation such as those. This being the case, no technology — quantum or otherwise — will be able to produce a representation (or anything which can be described as a conventional 'estimate') of those numbers.

+ +

Furthermore, it appears that these functions are faster growing than any provably total function (e.g., faster than any function which we know to be computable even in exponential time) relative to some more-or-less sensible model of set theory. If you are interested in things which you can compute in a reasonable time-bound with quantum computers — e.g. in polynomial time, which can be simulated in at worst exponential time on a conventional computer — it follows that on mathematical grounds as well as physical grounds, you should expect these functions not to be practically computable even on an idealised quantum computer.

+",124,,,,,07-06-2018 15:20,,,,7,,,,CC BY-SA 4.0 +2653,2,,2651,07-06-2018 16:52,,2,,"

My personal view:

+ +

Yes, large entangled states can be visualized using quantum bayesian networks. See

+ + + +

Other people will probably advise using Tensor Networks instead of quantum Bayesian nets. This begs the question: How do Quantum Bayesian Networks and Tensor Networks compare? I have thought about that and gathered my thoughts in this blog post.

+ +

First lines of blog post:

+ +
+

A question I am often asked is what is the difference between tensor + networks and quantum Bayesian networks, and is there any advantage to + using one over the other.

+ +

When dealing with probabilities, I prefer quantum Bayesian networks + because b nets are a more natural way of expressing probabilities (and + probability amplitudes) whereas tensor nets can be used to denote many + physical quantities other than probabilities so they are not tailor + made for the job as b nets are. Let me explain in more detail for the + technically inclined.

+
+ +

One can consider bipartite entanglement for the two sides of a partition, of a quantum bayesian network. One can write nice inequalities for such bipartite entanglements. See, for example, Entanglement Polygon Inequality in Qubit Systems, +Xiao-Feng Qian, Miguel A. Alonso, Joseph H. Eberly.

+ +

One can also try to define a measure of n-partite entanglement for n>2, where n is the number of nodes of a quantum Bayesian net. +See, for example, Verifying Genuine High-Order Entanglement, Che-Ming Li, Kai Chen, Andreas Reingruber, Yueh-Nan Chen, Jian-Wei Pan.

+",1974,,91,,08-07-2018 21:18,08-07-2018 21:18,,,,0,,,,CC BY-SA 4.0 +2654,1,,,07-07-2018 09:43,,27,5496,"

In the recent Question ""Is Quantum Computing just Pie in the Sky"" there are many responses regarding the improvements in quantum capabilities, however all are focussed on the current 'digital' computing view of the world.

+ +

Analog computers of old could simulate and compute many complex problems that fitted their operating modes but were not suitable for digital computing for many, many years (and some are still 'difficult'). Before the wars (~I & II) everything was considered to be 'clockwork' with mechanical Turk brains. Have we fallen into the same 'everything digital' bandwagon trap that keeps recurring (there are no tags related to 'analog')?

+ +

What work has been done on the mapping of quantum phenomena to analog computing, and learning from that analogy? Or is it all a problem of folk having no real idea how to program the beasts?

+",3021,,26,,7/15/2018 13:56,12-06-2021 13:35,Are quantum computers just a variant on Analog computers of the 50's & 60's that many have never seen nor used?,,4,2,,,,CC BY-SA 4.0 +2655,1,,,07-07-2018 17:29,,6,128,"

Existing superconducting quantum computers need to be cooled near absolute zero. For example, some of D-Wave's machines are cooled to about $20 \ \mathrm {mK}$. Their design uses a dilution refrigerator.

+ +

Are there any other cooling methods for superconducting quantum computers besides dilution refrigerators which are capable of achieving such low temperatures?

+ +

Are there any specific commercial quantum computer designs or research projects using these other cooling methods?

+",2866,,26,,07-08-2018 10:12,11/15/2018 19:23,What methods exist for cooling superconducting quantum computers?,,0,3,,01-04-2019 20:11,,CC BY-SA 4.0 +2656,2,,2654,07-08-2018 01:44,,9,,"
+

What work has been done on the mapping of quantum phenomena to analog computing, and learning from that analogy?

+
+ +

A starting place (with a lot of good references) to learn about analog quantum computing (also known as ""quantum analogue computing"" and ""continuous variable quantum computing"") is here. Note that analog classical computing is not as powerful as analog quantum computing, for a reason similar to what I explained in my answer to this question: quantum computers (whether digital or analog) can take advantage of quantum entanglement.

+ +
+

Have we fallen into the same 'everything digital' bandwagon trap that keeps recurring (there are no tags related to 'analog')?

+
+ +

A lot of people unfortunately have, and this might be part of the reason why ""adiabatic quantum computing"" struggled to get the respect it deserved in its early years (and even now). Adiabatic quantum computing is a specific type of analog quantum computing which certainly does have a tag on this Stack Exchange and a fair number of questions (but not enough, in my opinion). It has been proven that ""adiabatic quantum computing"", which is completely analog and does not involve any gates, can do anything that a digital quantum computer can do with the same computational efficiency, so while it is true that many people in quantum computing have fallen into the 'everything digital' bandwagon trap, there are some people that appreciate analog quantum computing (for example adiabatic quantum computing).

+",2293,,2293,,7/13/2018 15:12,7/13/2018 15:12,,,,1,,,,CC BY-SA 4.0 +2657,2,,2627,07-08-2018 03:23,,1,,"

From the question title, it sounds like you're interested in brute-force password cracking.

+ +

There is a quantum algorithm for this that outperforms brute force, in principle. It's called Grover's algorithm and it was one of the earliest quantum algorithms to be discovered.

+ +

However, to crack a password, you need as many qubits in your quantum computer as you'd need bits in a traditional computer to hash the passwords (actually more, since you have to hash them reversibly, and that involves some overhead). This is orders of magnitude more qubits than any present-day quantum computer has, even for simple password hashing techniques. Also, the computation lasts for far longer than any present-day quantum computer can maintain the integrity of its qubits. And even if it worked, it's not very fast: testing $n$ passwords with Grover's algorithm takes about as long as testing $\sqrt{n}$ passwords by brute force.
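To get a feel for the scale, assume 12-character passwords drawn from the 94 printable ASCII characters (the character set is an assumption, and the numbers are order-of-magnitude only):

```python
import math

search_space = 94 ** 12                       # all candidate passwords
classical_trials = search_space // 2          # expected brute-force guesses
grover_iterations = math.isqrt(search_space)  # ~ sqrt(N) Grover iterations
print(f"{search_space:e} keys, ~{classical_trials:e} classical, ~{grover_iterations:e} Grover")
```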

+",3030,,,,,07-08-2018 03:23,,,,1,,,,CC BY-SA 4.0 +2658,2,,2595,07-08-2018 03:50,,2,,"

What makes quantum wave mechanics different from classical is that the wave is defined over a configuration space with a huge number of dimensions. In nonrelativistic undergraduate quantum mechanics (which is good enough for a theoretical discussion of quantum computing), a system of $n$ spinless point particles in 3D space is described by a wave in $\mathbb{R}^{3n}$, which for $n=2$ already has no analogue in classical mechanics. All quantum algorithms exploit this. It may be possible to exploit classical wave mechanics to improve certain calculations (analog computing), but not using quantum algorithms.

+ +

The usual model of quantum computing uses qubits that can only be in two states ($\{0,1\}$), not a continuum of states ($\mathbb{R}^3$). The closest classical analogue to that is coupled pendulums, not continuous waves. But there's still an exponential difference between the classical and quantum cases: the classical system of n pendulums is described by $n$ positions and momenta (or $n$ complex numbers), while the quantum system is described by $2^n$ complex numbers (or $2^n$ abstract ""positions"" and ""momenta"", but quantum physicists never talk that way).
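To make the exponential gap concrete (n = 50 is just an illustrative size):

```python
n = 50
classical_parameters = 2 * n  # positions and momenta of 50 coupled pendulums
quantum_amplitudes = 2 ** n   # complex amplitudes describing 50 qubits
print(classical_parameters)   # -> 100
print(quantum_amplitudes)     # -> 1125899906842624
```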

+",3030,,,,,07-08-2018 03:50,,,,0,,,,CC BY-SA 4.0 +2659,1,,,07-08-2018 06:08,,4,315,"

I'm trying to implement Majorana's ""stellar representation"" of a spin-$j$ system as $2j$ points on the $2$-sphere in python. Consulting papers including Extremal quantum states and their Majorana constellations (Bjork et al., 2015), I convert a complex state vector (nominally indexed from -$j$ to $j$) to its corresponding polynomial with:

+ +
import math
import numpy

def vector_to_polynomial(vector):
+    components = vector.tolist()
+    j = (len(components)-1.)/2.
+    coeffs = []
+    i = 0
+    for m in numpy.arange(-1*j, j+1, 1):
+        coeff = math.sqrt(math.factorial(int(2*j))/(math.factorial(int(j-m))*math.factorial(int(j+m))))*components[i]  # int() casts: math.factorial rejects floats in recent Python
+        coeffs.append(coeff)
+        i += 1
+    return coeffs[::-1]
+
+ +

I use a polynomial solver to determine the roots, and stereographically project them to the $2$-sphere, taking into account when the degree of the polynomial is less than $2j$ by adjoining some poles (latter code not included).

+ +
def root_to_xyz(root):
+    if root == float('inf'):
+        return [0,0,1]
+    x = root.real
+    y = root.imag
+    return [(2*x)/(1.+(x**2)+(y**2)),\
+            (2*y)/(1.+(x**2)+(y**2)),\
+            (-1.+(x**2)+(y**2))/(1.+(x**2)+(y**2))]
+
+ +

See Wikipedia. Now QuTiP has an implementation of the Husimi Q function aka qutip.spin_q_function(state, theta, phi), evaluated at points on the sphere. The zeros of Husimi Q coincide with the Majorana stars. Comparing the results of the above with the QuTiP implementation, however, I find that they only match for integer spins, but not half-integer spins, aka for odd-dimensional systems, but not even dimensional systems. I've tried to code up a few other versions of the Majorana polynomial given in other papers, but the same problem seems to recur. Am I missing something more fundamental? Any advice is welcome!

+",456,,26,,07-08-2018 09:39,07-08-2018 09:39,"Computing Majorana ""Stars""",,0,2,,,,CC BY-SA 4.0 +2662,2,,2654,07-08-2018 12:56,,4,,"
+

Are quantum computers just a variant on Analog computers of the 50's & 60's that many have never seen nor used?

+
+ +

No, they are not.

+ +

The digital vs analog factor is not the point here, +the difference between quantum and classical devices lies at a more fundamental level.

+ +

A quantum device cannot, in general, be simulated efficiently by a classical device, be it ""analog"" or ""digital"" (or at least, this is strongly believed to be the case). In this sense, quantum computers are really radically different from any variation of classical analog computers, or other forms of classical computing for that matter.

+ +

Indeed, the most popularized architectures for quantum computing, those operating on sets of ""qubits"", are the quantum counterparts of digital classical computers. Analog devices also have their quantum counterparts (see for example continuous-variable quantum information).

+",55,,,,,07-08-2018 12:56,,,,4,,,,CC BY-SA 4.0 +2663,1,2666,,07-08-2018 14:55,,6,1256,"

Many texts (especially meant for public consumption) discussing quantum mechanics tend to skim over exactly how entanglement is achieved. Even the Wikipedia article on quantum entanglement describes the phenomenon as follows:

+ +

""Quantum entanglement is a physical phenomenon which occurs when pairs or groups of particles are generated, interact, or share spatial proximity in ways such that the quantum state of each particle cannot be described independently of the state of the other(s)...""

+ +

This doesn't explain how the process actually comes into being. How are these particles ""generated,"" and how do they ""interact"" or ""share spatial proximity,"" such that we can claim that two particles are entangled? What is the process?

+",3035,,26,,12/14/2018 5:27,4/29/2019 8:29,How is entanglement achieved between two particles in quantum computing?,,2,0,,,,CC BY-SA 4.0 +2664,1,2668,,07-08-2018 15:10,,3,292,"

Can someone show to me the steps to derive the joint state at the bottom of this image, please?

+ +

I tried to follow his explanation but I didn't get the same results… +This is taken from the lecture notes of Ronald de Wolf, in case it may help.

+",3036,,26,,10/28/2019 9:07,10/28/2019 9:07,How is the joint state of these qubits derived?,,1,2,,,,CC BY-SA 4.0 +2665,2,,2663,07-08-2018 15:18,,2,,"

You can easily create an EPR pair by means of a circuit consisting of a Hadamard gate on the first qubit, followed by a CNOT gate, having two $|0\rangle$ states as an input. The calculations would be:

+ +
    +
  • $(H\otimes I)(|0\rangle\otimes|0\rangle)=H|0\rangle\otimes|0\rangle=\frac{1}{\sqrt{2}}(|0\rangle+|1\rangle)\otimes|0\rangle$

  • +
  • $\operatorname{CNOT}\frac{1}{\sqrt{2}}(|0\rangle+|1\rangle)\otimes|0\rangle=\frac{1}{\sqrt{2}}(|0\rangle\otimes|0\rangle+|1\rangle\otimes|1\rangle)=\frac{1}{\sqrt{2}}(|00\rangle+|11\rangle)=|\Phi^+\rangle$

  • +
+ +

The resulting state is consequently the EPR pair conventionally denoted $|\Phi^+\rangle$, which is a standard example of an entangled pair.
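The two steps can be checked numerically; a NumPy sketch in the computational basis (ordered $|00\rangle, |01\rangle, |10\rangle, |11\rangle$):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])
zero = np.array([1, 0])

# (H on qubit 1) then CNOT, applied to |00>
state = CNOT @ np.kron(H, I) @ np.kron(zero, zero)
print(state)  # -> [0.7071 0. 0. 0.7071], i.e. (|00> + |11>)/sqrt(2)
```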

+",2371,,26,,4/29/2019 8:29,4/29/2019 8:29,,,,0,,,,CC BY-SA 4.0 +2666,2,,2663,07-08-2018 15:27,,5,,"

Josu has given an example of how using quantum gates you can get an entangled state. However, quantum gates are sort of ""black-boxes"".

+ +

The physical methods for creating entangled states and testing for entangled states are covered well on the relevant Wikipedia page:

+ +

Methods of creating entanglement:

+ +
+

Entanglement is usually created by direct interactions between + subatomic particles. These interactions can take numerous forms. One + of the most commonly used methods is spontaneous parametric + down-conversion to generate a pair of photons entangled in + polarisation. Other methods include the use of a fiber coupler to + confine and mix photons, photons emitted from decay cascade of the + bi-exciton in a quantum dot, the use of the Hong–Ou–Mandel effect, + etc., In the earliest tests of Bell's theorem, the entangled particles + were generated using atomic cascades.

+ +

It is also possible to create entanglement between quantum systems + that never directly interacted, through the use of entanglement + swapping. Two independently-prepared, identical particles may also be + entangled if their wave functions merely spatially overlap, at least + partially.

+
+ +

Testing a system for entanglement:

+ +
+

Systems which contain no entanglement are said to be separable. For $2$-Qubit and Qubit-Qutrit systems ($2 × 2$ and $2 × 3$ respectively) the simple Peres–Horodecki criterion provides both a necessary and a sufficient criterion for separability, and thus for detecting entanglement. However, for the general case, the criterion is merely a sufficient one for separability, as the problem becomes NP-hard. A numerical approach to the problem is suggested by Jon Magne Leinaas, Jan Myrheim and Eirik Ovrum in their paper ""Geometrical aspects of entanglement"". Leinaas et al. offer a numerical approach, iteratively refining an estimated separable state towards the target state to be tested, and checking if the target state can indeed be reached. An implementation of the algorithm (including a built-in Peres-Horodecki criterion testing) is brought in the ""StateSeparator"" web-app.

+ +

In 2016 China launched the world’s first quantum communications satellite. The $100m Quantum Experiments at Space Scale (QUESS) mission was launched on Aug 16, 2016, from the Jiuquan Satellite Launch Center in northern China at 01:40 local time.

+ +

For the next two years, the craft – nicknamed ""Micius"" after the ancient Chinese philosopher – will demonstrate the feasibility of quantum communication between Earth and space, and test quantum entanglement over unprecedented distances.

+ +

In the June 16, 2017, issue of Science, Yin et al. report setting a new quantum entanglement distance record of $1203$ km, demonstrating the survival of a $2$-photon pair and a violation of a Bell inequality, reaching a CHSH valuation of $2.37 ± 0.09$, under strict Einstein locality conditions, from the Micius satellite to bases in Lijian, Yunnan and Delingha, Quinhai, increasing the efficiency of transmission over prior fiberoptic experiments by an order of magnitude.

+
+ +

You might also want to see:

+ +

How do I show that a two-qubit state is an entangled state?

+ +

How to show that an n-level system is entangled?

+",26,,,,,07-08-2018 15:27,,,,0,,,,CC BY-SA 4.0 +2667,1,17767,,07-08-2018 15:53,,35,10933,"

As we make inroads into Machine Learning, there seem to be plenty of respectable courses available online via Coursera, edX, etc. on the topic. As quantum computing is still in its infancy, not to mention incredibly daunting, it is vital that easy-to-understand introductory courses are made available.

+ +

I was able to find these courses:

+ +

Quantum Information Science I, Part 1

+ +

Quantum Optics 1 : Single Photons

+ +

However, I am not certain how relevant the second course might be.

+ +

Obviously, this question may benefit from receiving answers in the near future as the topic steadily becomes mainstream.

+",3035,,3035,,07-08-2018 16:49,11/25/2021 19:23,"Currently, what are the best structured courses available online on quantum computing?",,12,2,,,,CC BY-SA 4.0 +2668,2,,2664,07-08-2018 16:04,,2,,"

Since this is a homework-type question, I'll just outline the method:

+ +

You begin in the state $(\alpha_0|0\rangle + \alpha_1|1\rangle) \otimes \frac{1}{\sqrt{2}}(|00\rangle + |11\rangle)$.

+ +

It can be written as $(\alpha_0|0\rangle_{A1}) \otimes \frac{1}{\sqrt{2}}(|0\rangle_{A2} |0\rangle_B + |1\rangle_{A2} |1\rangle_{B}) + (\alpha_1|1\rangle_{A1}) \otimes \frac{1}{\sqrt{2}}(|0\rangle_{A2} |0\rangle_B + |1\rangle_{A2} |1\rangle_{B})$

+ +

Apply the CNOT to qubits $\text{A1}$ and $\text{A2}$. If $\text{A1}$ is in $|0\rangle$, $\text{A2}$ remains unchanged or else it flips.

+ +

You get to the state:

+ +

$(\alpha_0|0\rangle_{A1}) \otimes \frac{1}{\sqrt{2}}(|0\rangle_{A2} |0\rangle_B + |1\rangle_{A2} |1\rangle_{B}) + (\alpha_1|1\rangle_{A1}) \otimes \frac{1}{\sqrt{2}}(|1\rangle_{A2} |0\rangle_B + |0\rangle_{A2} |1\rangle_{B})$

+ +

Then apply the Hadamard gate on $\text{A1}$. Remember that the Hadamard gate maps $|0\rangle_{A1}$ to $\frac{|0\rangle + |1\rangle}{\sqrt 2}$ and $|1\rangle_{A1}$ to $\frac{|0\rangle - |1\rangle}{\sqrt 2}$.

+ +

You finally get to the state shown in the diagram.

+ +

Note: $\text{A1}$ refers to Alice's first qubit. $\text{A2}$ refers to Alice's second qubit. $\text{B}$ refers to Bob's qubit.
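These steps can be verified numerically. A NumPy sketch with the illustrative amplitudes $\alpha_0 = 0.6$, $\alpha_1 = 0.8$ (any normalised pair works; qubit order is A1, A2, B):

```python
import numpy as np

a0, a1 = 0.6, 0.8
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)  # (|00> + |11>)/sqrt(2) on A2, B
state = np.kron(np.array([a0, a1]), bell)   # full 3-qubit state

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

state = np.kron(CNOT, np.eye(2)) @ state    # CNOT on A1 (control), A2 (target)
state = np.kron(H, np.eye(4)) @ state       # H on A1
for idx, amp in enumerate(state):
    print(f"|{idx:03b}> : {amp:+.2f}")      # amplitudes are +/- a0/2 and a1/2
```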

+",26,,,,,07-08-2018 16:04,,,,2,,,,CC BY-SA 4.0 +2669,2,,2667,07-08-2018 16:11,,22,,"

I personally took the course Quantum Mechanics and Quantum Computation on EdX (UC Berkeley) by Professor Vazirani. The course is now archived; however, you can still access the lectures on YouTube. It covers the basics of quantum mechanics and gives a nice overview of some of the most popular quantum algorithms.

+ +

About this course (from the course page):

+ +
+

Quantum computation is a remarkable subject building on the great + computational discovery that computers based on quantum mechanics are + exponentially powerful. This course aims to make this cutting-edge + material broadly accessible to undergraduate students, including + computer science majors who do not have any prior exposure to quantum + mechanics. The course starts with a simple introduction to the + fundamental principles of quantum mechanics using the concepts of + qubits (or quantum bits) and quantum gates. This treatment emphasizes + the paradoxical nature of the subject, including entanglement, + non-local correlations, the no-cloning theorem and quantum + teleportation. The course covers the fundamentals of quantum + algorithms, including the quantum fourier transform, period finding, + Shor's quantum algorithm for factoring integers, as well as the + prospects for quantum algorithms for NP-complete problems. It also + discusses the basic ideas behind the experimental realization of + quantum computers, including the prospects for adiabatic quantum + optimization and the D-Wave controversy.

+
+",26,,26,,07-08-2018 16:33,07-08-2018 16:33,,,,4,,,,CC BY-SA 4.0 +2670,2,,2654,07-08-2018 17:47,,13,,"

Here is a quick list of notable differences between analog and quantum computers:

+ +
    +
  1. Analog computers can't pass Bell tests.

  2. +
  3. The state space of an analog computer with N sliders is N dimensional. The state space of a quantum computer with N qubits is $2^N$ dimensional.

  4. +
  5. Error correct an analog computer and what you've got is a digital computer (i.e. not fundamentally analog anymore). Quantum computers are still quantum after being error corrected.

  6. +
  7. Analog computers aren't sensitive to decoherence errors. They don't break if you make accidental copies of the data. Quantum computations do break if that happens.

  8. +
  9. Analog computers can't (efficiently) run Shor's algorithm. Or Grover's algorithm. Or basically any other quantum algorithm.

  10. +
+",119,,,,,07-08-2018 17:47,,,,2,,,,CC BY-SA 4.0 +2671,1,2672,,07-08-2018 18:14,,7,927,"

I am learning to manipulate Qbits and recently I saw the teleportation algorithm. I read about it in two places: Wikipedia and Lecture Notes from Ronald de Wolf (Page 7, 1.5 Example: Quantum Teleportation).

+ +

I'd like to understand how to operate with Qbits using linear algebra when entanglement is present. In this case, we have 3 Qbits.

+ +
    +
  • Qbit 1: in possession of Alice and is going to be teleported to Bob
  • +
  • Qbit 2: in possession of Alice
  • +
  • Qbit 3: in possession of Bob
  • +
+ +

Qbit 2 and 3 are in Bell state: $\frac{1}{\sqrt{2}}(|00\rangle + |11\rangle)$. +The full state of three Qbits 1, 2, 3 is $(\alpha_0 |0\rangle+\alpha_1|1\rangle) \otimes \frac{1}{\sqrt2}(|00\rangle+|11\rangle)$. In an extended (more painful) notation it would be:

+ +

$$\frac{\alpha_0}{\sqrt2}|000\rangle + 0 \cdot |001\rangle + 0 \cdot|010\rangle+ \frac{\alpha_0}{\sqrt2}|011\rangle + $$

+ +

$$\frac{\alpha_1}{\sqrt2}|100\rangle + 0 \cdot|101\rangle + 0 \cdot|110\rangle+ \frac{\alpha_1}{\sqrt2}|111\rangle + $$

+ +

Now I'd like to apply a CNOT gate (Controlled Not) to Qbits 1 and 2, and finally an H gate (Hadamard transform) to Qbit 1. I know how the CNOT operation affects Qbits 1 and 2, but it's not completely clear how it affects Qbit 3. I'm wondering what the $8 \times 8$ matrix is that is applied to the state (in extended notation) when CNOT is applied to Qbits 1 and 2.

+",3037,,26,,12/13/2018 21:00,12/13/2018 21:00,"When you act on a multi-qubit system with a 2-qubit gate, what happens to the third qubit?",,1,0,,,,CC BY-SA 4.0 +2672,2,,2671,07-08-2018 19:14,,7,,"

Whenever you have a quantum gate (like a CNOT) acting on some qubits but not others, it is assumed that the other qubits are acted on with the identity operator. This is done using the ""Left Kronecker product"" or the ""tensor product"".

+ +

So the 8x8 matrix is made by applying CNOT to qubits 1 & 2 and the identity matrix to qubit 3:

+ +

$$ +\begin{bmatrix} + 1 & 0 & 0 & 0 \\ + 0 & 1 & 0 & 0 \\ + 0 & 0 & 0 & 1 \\ + 0 & 0 & 1 & 0 +\end{bmatrix}\otimes +\begin{bmatrix} +1 & 0\\ +0 & 1 +\end{bmatrix} +$$

+ +

If you do:

+ +
kron([1 0 0 0 ; 0 1 0 0 ; 0 0 0 1; 0 0 1 0],eye(2))
+
+ +

in MATLAB or Octave, you get the following 8x8 matrix:

+ +

+ +

Explanation of the code:

+ +
    +
  • ""kron"" means ""left Kronecker product""
  • +
  • The first argument to the ""kron"" function is the CNOT gate in matrix form
  • +
  • The second argument is ""eye(2)"" which means 2x2 identity
  • +
+ +

Here is an example of how to do a ""left Kronecker product"" without MATLAB:

+ +

+",2293,,2293,,07-08-2018 19:22,07-08-2018 19:22,,,,1,,,,CC BY-SA 4.0 +2673,2,,2607,07-08-2018 23:54,,7,,"

Let me attempt to give a rather unconventional answer to this question:

+ +
As a non-mathematician/software programmer I'm trying to grasp
+how QFT (Quantum Fourier Transformation) works.
+
+ +

Suppose that we have a quantum computer which is able to manipulate $n$ qubits. The quantum state of such a quantum computer precisely describes the current state of this quantum computer. It is pretty well-known that we can express this quantum state as a vector of $2^n$ complex numbers. Let's try to visualize these complex numbers in a compact way.

+ +

To that end, consider a horizontal line, on which $2^n$ points are depicted. They are labeled corresponding to their respective position on the line, i.e., the first point is labeled with $|0\rangle$, and the last point is labeled by $|2^n-1\rangle$. We can see this in the picture below.

+ +

+ +

Now, try to picture that at every point, depicted above, this line punctures a circle of radius $1$ right through the middle. That is, there are $2^n$ circles placed exactly at the points depicted above, and the line connects the middles of all these circles. I have tried to depict this in the image below, but my 3D drawing skills are not exactly top-notch, so you will have to excuse me for that.

+ +

+ +

The nice thing of this picture above is that it can uniquely represent the state of any $n$-qubit quantum computer by marking exactly one point in all of the circles. More explicitly, any quantum state of a $n$-qubit quantum computer can be depicted in the above picture by drawing one cross ($\times$) in all of the circles. Conversely, any such drawing represents a quantum state, as long as the squares of the distances of the crosses to the center point sum to $1$. In other words, if we calculate all the distances of the marked points to the center points, then square these distances, and then add them all up, we require that the result equals $1$. An example state is shown below:

+ +

+ +

Throughout the execution of a program, the state of the quantum computer is constantly changing, and as such, so is the visual representation. Hence, throughout the execution of a quantum program, the marked points (the $\times$s) are constantly moving around, within the boundaries of their respective circles.

+ +

Within this framework, the Quantum Fourier Transform is just a very specific way to move the marked points around. We will make this explicit in the case of a $3$-qubit quantum computer, on which a $3$-qubit Quantum Fourier Transform is executed.

+ +

To that end, suppose that we have the $3$-qubit system in the following state, i.e., where the cross is all the way at the edge in the $|0\rangle$ circle, and all the others are exactly at the center. It is an exercise for the OP to check that indeed the square of the distances of the marked points to the center points sum to $1$. We refer to this state as the $|0\rangle$ state.

+ +

+ +

The question is now what happens to the quantum state when we apply the Quantum Fourier Transform. It turns out that, when the Quantum Fourier Transform is applied to the state shown above, the resulting state of the quantum system becomes:

+ +

+ +

Here I added the red dashed line passing through all the marked points just for extra convenience. All the marked points are on the exact same location in the circles, namely precisely above the center point at a height of $1/\sqrt{8}$.

+ +

Similarly, we can have a look at the action of the Quantum Fourier Transform on other states. Consider for example the $|1\rangle$ state:

+ +

+ +

Now, if we apply the Quantum Fourier Transform, the resulting state becomes:

+ +

+ +

We can see that the resulting state becomes some kind of helix shape. Moreover, observe that if we were to add one extra circle to the right of the rightmost state, then the helix would complete exactly one revolution.

+ +

It turns out that the other states behave similarly under the Quantum Fourier Transform, but that the period of the helix changes. More precisely, if we start out with the state $|j\rangle$, then the number of revolutions that the resulting helix makes is $j$. I.e., if we consider the initial state $|3\rangle$, then we obtain the following image under the Fourier transform:

+ +

+ +

Here, we can easily see that the helix completes $3$ full revolutions if we were to add one more circle on the right hand side.

+ +

It is important to note that we can also reverse the order of things. That is to say, if we have a quantum computer, whose state can be pictured as a helix, similar to the ones shown above, then we can obtain the period of this helix by implementing the inverse Quantum Fourier Transform. In doing so, we essentially map the helix with $j$ revolutions to the state $|j\rangle$, which is the exact opposite direction from the mapping we considered before.
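The helix picture can be sanity-checked numerically — a sketch in Python/NumPy; the matrix with entries $e^{2\pi i jk/8}/\sqrt{8}$ is the standard $3$-qubit QFT:

```python
import numpy as np

N = 8                                                       # 2**3 amplitudes for 3 qubits
k = np.arange(N)
F = np.exp(2j * np.pi * np.outer(k, k) / N) / np.sqrt(N)    # QFT matrix

state = np.zeros(N); state[3] = 1                           # the basis state |3>
out = F @ state

mags = np.abs(out)              # every circle's cross sits 1/sqrt(8) from its centre
step = out[1] / out[0]          # phase advance per circle: 3/8 of a revolution

# the inverse QFT recovers the revolution count of the helix
recovered = np.argmax(np.abs(F.conj().T @ out))
print(mags, recovered)
```

The phase advancing by $3/8$ of a revolution per circle means the helix completes exactly $3$ revolutions across the $8$ circles, matching the drawing.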

+ +

It is this idea that is the crucial component in Shor's algorithm. The central idea is to take the sequence of numbers you describe:

+ +
7, 4, 13, 1, 7, 4, 13, 1, 7, 4, 13, 1 (etc)
+
+ +

and use these to create a helix whose period is equal to the period in this sequence. Next, we apply the inverse Quantum Fourier Transform to obtain the state $|4\rangle$, i.e., the period of this sequence.

+ +

NOTE 1: There are a lot of details that I skipped over in the final paragraph. This answer already contains a lot of information, though, which I think needs to sink in before one can attempt to add these details to the picture. If anyone wants me to add these details, I might do so at a later stage.

+ +

NOTE 2: The OP mentioned that he is not a mathematician. To the mathematicians out there, this visual representation is just an array of $2^n$ unit circles in the complex plane, where the marked points are the representation of the amplitudes as vectors in the complex plane.

+",24,,,,,07-08-2018 23:54,,,,1,,,,CC BY-SA 4.0 +2674,1,2675,,07-09-2018 06:45,,15,938,"

Entanglement is often discussed as being one of the essential components that makes quantum different from classical. But is entanglement really necessary to achieve a speed-up in quantum computation?

+",1837,,26,,12/13/2018 20:09,12/13/2018 20:26,Is entanglement necessary for quantum computation?,,1,8,,,,CC BY-SA 4.0 +2675,2,,2674,07-09-2018 06:45,,11,,"

Short answer: yes

+ +

One has to be a little bit more careful setting up the question. Thinking about a circuit as being composed of state preparation, unitaries, and measurements, it is always in principle possible to ""hide"" anything we want, such as entangling operations, inside the measurement. So, let us be precise. We want to start from a separable state of many qubits, and the final measurements should consist of single-qubit measurements. Does the computation have to transition through an entangled state at some point in the computation?

+ +

Pure states

+ +

Let's make one further assumption: that the initial state is a pure (product) state. In that case, the system must go through an entangled state. If it didn't, it would be easy to simulate the computation on a classical computer, because all you have to do is hold $n$ single-qubit pure states in memory, and update them one at a time as the computation proceeds.
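To see concretely why such a computation is classically cheap, here is a sketch (Python/NumPy, assuming only single-qubit gates are ever applied): the simulator stores and updates $2n$ amplitudes instead of $2^n$.

```python
import numpy as np

n = 50                                    # the full state vector would need 2**50 amplitudes
qubits = [np.array([1.0, 0.0], dtype=complex) for _ in range(n)]   # all |0>

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])

qubits[0] = H @ qubits[0]                 # each gate updates a single 2-vector
qubits[7] = X @ qubits[7]

memory = sum(q.size for q in qubits)      # 2n amplitudes, not 2**n
print(memory)
```

As soon as a two-qubit entangling gate is allowed, this per-qubit bookkeeping breaks down, which is exactly the point of the argument above.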

+ +

One can even ask how much entanglement is necessary. Again, there are many different ways that entanglement can be moved around at different times. A good model that provides a reasonably fair measure of the entanglement present is measurement-based quantum computation. Here, we prepare some initial resource state, and it is single-qubit measurements that define the computation that happens. This lets us ask about the entanglement of the resource state. There has to be entanglement and, in some sense, it has to be at least ""two-dimensional""; it cannot just be the entanglement generated between nearest neighbours of a system on a line [ref]. Moreover, one can show that most states of $n$ qubits are too entangled to permit computation in this way.

+ +

Mixed states

+ +

The caveat in all that I've said so far is that we're talking about pure states. For example, we can easily simulate a non-entangling computation on pure product states. But what about mixed states? A mixed state is separable if it can be written in the form +$$ +\rho=\sum_{i=1}^Np_i\rho^{(1)}_i\otimes\rho^{(2)}_i\otimes\ldots\otimes\rho^{(n)}_i. +$$ +Importantly, there is no limit on the value $N$, the number of terms in the sum. If the number of terms in the sum is small, then by the previous argument, we can simulate the effects of a non-entangling circuit. But if the number of terms is large, then (to my knowledge) it remains an open question as to whether it can be classically simulated, or whether it can give enhanced computation.

+",1837,,,,,07-09-2018 06:45,,,,8,,,,CC BY-SA 4.0 +2676,2,,2595,07-09-2018 06:46,,20,,"

Your primary assertion that the mathematics of waves mimics that of quantum mechanics is the right one. In fact, many of the pioneers of QM used to refer to it as wave mechanics for this precise reason. It is then natural to ask, ""Why can't we do quantum computing with waves?""

+ +

The short answer is that quantum mechanics allows us to work with an exponentially large Hilbert space while spending only polynomial resources. That is, the state space of $n$ qubits is a $2^n$ dimensional Hilbert space.

+ +

One cannot construct an exponentially large Hilbert space from polynomially many classical resources. To see why this is the case let us look at two different kinds of wave mechanics based computers.

+ +

The first way to build such a computer would be to take $n$ two-level classical systems. Each system by itself could then be represented by a 2D Hilbert space. For example, one could imagine $n$ guitar strings with only the first two harmonics excited.

+ +

This setup will not be able to mimic quantum computing because there is no entanglement. So any state of the system will be a product state and the combined system of $n$ guitar strings cannot be used to make a $2^n$ dimensional Hilbert space.

+ +

The second way one could attempt to construct an exponentially large Hilbert space is to use a single guitar string and to identify its first $2^n$ harmonics with the basis vectors of the Hilbert space. This is done in @DaftWullie's answer. The problem with this approach is that the frequency of the highest harmonic one needs to excite to make this happen will scale as $O(2^n)$. And since the energy of a vibrating string scales quadratically with its frequency, we will need an exponential amount of energy to excite the string. So in the worst case, the energy cost of the computation can scale exponentially with the problem size.
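A rough back-of-the-envelope calculation of that scaling (plain Python; the numbers are purely illustrative):

```python
# one string, with its first 2**n harmonics used as basis states
n = 50
highest_harmonic = 2**n               # frequency factor over the fundamental
energy_factor = highest_harmonic**2   # string energy scales quadratically in frequency
print(highest_harmonic, energy_factor)
```

Even at a modest $n = 50$, the highest harmonic sits some $10^{15}$ times above the fundamental, and the energy roughly $10^{30}$ times above it — hence the exponential cost.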

+ +

So the key point here is that classical systems lack entanglement between physically separable parts. And without entanglement, we cannot construct exponentially large Hilbert spaces with polynomial overhead.

+",2663,,26,,08-10-2018 20:06,08-10-2018 20:06,,,,2,,,,CC BY-SA 4.0 +2677,2,,2595,07-09-2018 10:57,,5,,"

I myself often describe the source of the power of quantum mechanics as being due to 'destructive interference', which is to say the wave-like nature of quantum mechanics. From a standpoint of computational complexity, it is clear that this is one of the most important and interesting features of quantum computation, as Scott Aaronson (for example) notes. But when we describe it in this very brief way — that ""the power of quantum computation is in destructive interference / the wave-like nature of quantum mechanics"" — it is important to notice that this sort of statement is a short-hand, and necessarily incomplete.

+ +

Whenever you make a statement about ""the power"" or ""the advantage"" of something, it is important to bear in mind: compared to what? +In this case, what we are comparing to is specifically probabilistic computing: and what we have in mind is not just that 'something' is acting like a wave, but specifically that something which is otherwise like a probability is acting like a wave.

+ +

It must be said that probability itself, in the classical world, already does act a bit like a wave: specifically, it obeys a sort of Huygen's Principle (that you can understand the propagation of probabilities of things by summing over the contributions from individual initial conditions — or in other words, by a superposition principle). The difference, of course, is that probability is non-negative, and so can only accumulate, and its evolution will be essentially a form of diffusion. +Quantum computation manages to exhibit wave-like behaviour with probability-like amplitudes, which can be non-positive; and so it is possible to see destructive interference of these amplitudes.

+ +

In particular, because the things which are acting as waves are things like probabilities, the 'frequency space' in which the system evolves can be exponential in the number of particles you involve in the computation. This general sort of phenomenon is necessary if you want to get an advantage over conventional computation: if the frequency space scaled polynomially with the number of systems, and the evolution itself obeyed a wave equation, the obstacles to simulation with classical computers would be easier to overcome. If you wanted to consider how to achieve similar computational advantages with other kinds of waves, you have to ask yourself how you intend to squeeze an exponential number of distinguishable 'frequencies' or 'modes' into a bounded energy space.

+ +

Finally, on a practical note, there is a question of fault-tolerance. Another side-effect of the wave-like behaviour being exhibited by probability-like phenomena is that you can perform error correction by testing parities, or more generally, coarse-grainings of marginal distributions. Without this facility, quantum computation would essentially be limited to a form of analog computation, which is useful for some purposes but is subject to the problem of sensitivity to noise. We do not yet have fault-tolerant quantum computation in built computer systems, but we know that it is possible in principle and we are aiming for it; whereas it is unclear how any similar thing could be achieved with water waves, for instance.

+ +

Some of the other answers touch on this same feature of quantum mechanics: 'wave-particle duality' is a way of expressing the fact that we have something probabilistic about the behaviour of individual particles which are acting like waves, and remarks about scalability / the exponentiality of the configuration space follow from this. But underlying these slightly higher-level descriptions is the fact that we have quantum amplitudes, behaving like elements of a multi-variate probability distribution, evolving linearly with time and accumulating, but which can be negative as well as positive.

+",124,,124,,02-05-2019 21:50,02-05-2019 21:50,,,,0,,,,CC BY-SA 4.0 +2682,1,2684,,07-09-2018 14:56,,4,785,"

All the references in this question refer to Quantum algorithm for solving linear systems of equations (Harrow, Hassidim & Lloyd, 2009).

+ +

The question I have is about the step where they apply controlled-rotations to transfer the eigenvalue encoded in a quantum register to the amplitudes of a state:

+ +

After the quantum phase estimation, the state of the quantum registers is (see page 3): +$$ +\sum_{j=1}^{N} \sum_{k=0}^{T-1} \alpha_{k|j}\beta_j \vert \tilde\lambda_k\rangle \vert u_j \rangle +$$ +Then, the HHL algorithm consists in applying rotations controlled by $\vert\tilde\lambda_k\rangle$ to produce the state +$$ +\sum_{j=1}^{N} \sum_{k=0}^{T-1} \alpha_{k|j}\beta_j \vert \tilde\lambda_k\rangle \vert u_j \rangle \left( \sqrt{1 - \frac{C^2}{\tilde\lambda_k^2}} \vert 0 \rangle + \frac{C}{\tilde\lambda_k}\vert 1 \rangle \right) +$$ +where ""$C = O(1/\kappa)$"" (still page 3).

+ +

My question: why do they introduce $C$? Isn't $C=1$ valid?

+",1386,,55,,08-01-2020 11:35,08-01-2020 11:35,HHL algorithm -- controlled-by-eigenvalues rotations,,1,0,,,,CC BY-SA 4.0 +2683,1,2685,,07-09-2018 15:15,,6,523,"

See edit at the end of the question

+ +
+ +

All the references in this question refer to Quantum algorithm for solving linear systems of equations (Harrow, Hassidim & Lloyd, 2009).

+ +

HHL algorithm consists in an application of the quantum phase estimation algorithm (QPE), followed by rotations on an ancilla qubit controlled by the eigenvalues obtained as output of the QPE. The state of the quantum registers after the rotations is +$$ +\sum_{j=1}^{N} \sum_{k=0}^{T-1} \alpha_{k|j}\beta_j \vert \tilde\lambda_k\rangle \vert u_j \rangle \left( \sqrt{1 - \frac{C^2}{\tilde\lambda_k^2}} \vert 0 \rangle + \frac{C}{\tilde\lambda_k}\vert 1 \rangle \right). +$$

+ +

Then, the algorithm just uncomputes the first register containing the eigenvalues ($\vert \tilde\lambda_k \rangle$) to give the state +$$ +\sum_{j=1}^{N}\beta_j \vert u_j \rangle \left( \sqrt{1 - \frac{C^2}{\lambda_j^2}} \vert 0 \rangle + \frac{C}{\lambda_j}\vert 1 \rangle \right). +$$

+ +

Here, the notation used assumes that the QPE was perfect, i.e. the approximations were the exact values.

+ +

The next step of the algorithm is to measure the ancilla qubit (the right-most one in the sum above) and to select the output only when the ancilla qubit is measured to be $\vert 1 \rangle$. This process is also called ""post-selection"".

+ +

The state of the system after post-selecting (i.e. after ensuring that the measurement returned $\vert 1 \rangle$) is written

+ +

$$ +\frac{1}{D}\sum_{j=1}^{N}\beta_j \frac{C}{\lambda_j} \vert u_j \rangle +$$ +where $D$ is a normalisation constant (the exact expression can be found in the HHL paper, page 3).

+ +

My question: Why is the $\frac{C}{\lambda_j}$ coefficient still in the expression? From what I understood, measuring +$$ +\left( \sqrt{1 - \frac{C^2}{\lambda_j^2}} \vert 0 \rangle + \frac{C}{\lambda_j}\vert 1 \rangle \right) +$$ +should output $\vert 0 \rangle$ or $\vert 1 \rangle$ and destroy the amplitudes in front of those states.

+ +
+ +

EDIT: Specifying the question.

+ +

Following @glS' answer, here is the updated question:

+ +

Why does the post-selection work as described in @glS' answer and not as above?

+",1386,,1386,,07-09-2018 20:27,07-10-2018 01:18,HHL algorithm -- problem with the outcome of postselection,,2,0,,,,CC BY-SA 4.0 +2684,2,,2682,07-09-2018 15:21,,5,,"

If $\tilde{\lambda_{k}} < C$, the controlled rotation becomes non-physical, since you would have a coefficient greater than 1 on your $|1\rangle$ state.

+ +

As a result $C < \lambda_{min}$ is a safer choice, and that is $O(1/\kappa)$ according to the 4th paragraph in the intro.
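A quick numerical sanity check (Python/NumPy, with made-up eigenvalues): with $C = 1$, any eigenvalue below $1$ would give an amplitude greater than $1$, while $C \le \lambda_{min}$ keeps every ancilla state normalised.

```python
import numpy as np

eigvals = np.array([0.25, 0.5, 1.0])   # made-up eigenvalues for illustration
C = eigvals.min()                       # safe choice: C <= lambda_min

amp1 = C / eigvals                      # amplitude on |1> for each eigenvalue
amp0 = np.sqrt(1 - amp1**2)             # amplitude on |0>

print(amp1)                             # all entries <= 1: a valid state
```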

+",3056,,3056,,07-09-2018 15:28,07-09-2018 15:28,,,,0,,,,CC BY-SA 4.0 +2685,2,,2683,07-09-2018 15:37,,4,,"

Your intuition is correct for a single qubit, in that if I measure $$\alpha\vert 0 \rangle + \beta\vert 1 \rangle$$ I would get either $\vert 0 \rangle$ or $\vert 1 \rangle$. But since the qubits are in a large entangled state, the relevant information stored in the ratios of different probabilities is still held fixed, and the $\frac{C}{\lambda_j}$ factors are part of that overall coefficient on each $\vert u_j \rangle$.

+ +

In essence, you can't think of the process as if you are measuring just +$$ + \sqrt{1 - \frac{C^2}{\lambda_j^2}} \vert 0 \rangle + \frac{C}{\lambda_j}\vert 1 \rangle +$$

+ +

because then you are ignoring the entanglement of the qubits.

+ +

EDIT: To go into more detail. The distinction is in how one renormalizes after a measurement. So, in the beginning, if you have a normalized state

+ +

$$\alpha\vert a\rangle + \beta\vert b\rangle$$

+ +

and you measure it and get $\vert b \rangle$, you are essentially saying that ""My measurement returned b, so only states which are consistent with my measurement can exist"" and then normalizing the superposition of all of those states. But only $\vert b\rangle$ is consistent and as such you end up with it, and a prefactor of 1 out front.

+ +

However, imagine an entangled state of the following form:

+ +

$$\alpha_{00}\vert 00\rangle + \alpha_{01}\vert 01\rangle + \alpha_{10}\vert 10\rangle + \alpha_{11}\vert 11\rangle$$

+ +

If I measure the first qubit and get $\vert 0\rangle$, there are two states, $\vert 01\rangle$ and $\vert 00\rangle$ which are consistent, and so your state must be a superposition of those two. In this example that leads to a final state of +$$A\left(\alpha_{00}\vert 00\rangle + \alpha_{01}\vert 01\rangle\right)$$

+ +

where A is the overall factor to bring the normalization back to 1. The important thing to see is that the ratio between the two remaining probabilities remains constant at $\alpha_{00}/\alpha_{01}$. This ratio would, in the HHL example, be the factors containing different $\frac{C}{\lambda_j}$ values, and as such they must remain. Physically this corresponds to the states which are ""still viable"" being unaltered relative to each other, while those which have now been ruled out have their probability set to 0.
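This renormalisation can be checked directly — a sketch in Python/NumPy with made-up amplitudes:

```python
import numpy as np

# made-up normalised amplitudes for |00>, |01>, |10>, |11>
psi = np.array([0.5, 0.5j, -0.5, 0.5])

# measuring the first qubit and getting |0> rules out |10> and |11>
proj = psi.copy()
proj[2:] = 0
post = proj / np.linalg.norm(proj)          # renormalise the surviving part

print(psi[0] / psi[1], post[0] / post[1])   # the ratio is unchanged
```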

+ +

In terms of more natural language, imagine I give you 4 scenarios and their likelihoods. If I were to specify their relative probabilities and then rule out 2 of them, it should be natural that the two which have not been ruled out would still have the same relative probability, as whatever extra information has been added was already assumed in order for those scenarios to have been true.

+",3056,,3056,,07-10-2018 01:18,07-10-2018 01:18,,,,4,,,,CC BY-SA 4.0 +2686,2,,2683,07-09-2018 15:39,,3,,"

You are half right, in that the $C$ factor is only kept there for (what I assume to be) explanatory purposes.

+ +

However, the $1/\lambda_j$ factors definitely stay there after postselection. One way to see this is to think of those factors as attached to the other registers, so that the state is equivalently written as

+ +

$$ +\left(\sum_j\beta_j\sqrt{1-\frac{C^2}{\lambda_j^2}} |u_j\rangle\right)|0\rangle + +\left(\sum_j\beta_j\frac{C}{\lambda_j} |u_j\rangle\right)|1\rangle.\tag1 +$$ +Keeping only the term on the right we get a normalised version of +$$\sum_j\beta_j\frac{C}{\lambda_j} |u_j\rangle|1\rangle.$$

+ +

As an analogy, it's like having the state +$$(a|0\rangle+b|1\rangle)|0\rangle.$$ +By your reasoning, postselecting on $|0\rangle$ (which here trivially happens with 100% probability) should lead to something like $|0\rangle+|1\rangle$, which is clearly not correct.

+ +

Stated in yet another way, you are basically applying the measurement postulate wrong. +The state remaining after postselection is the sum of all states with ancilla in the state $|1\rangle$, which results in (1). The factors disappear only if you consider postselection over the ancilla alone, which neglects the fact that the ancilla is entangled with the other registers. +Mathematically, you can see it in the fact that the postselected state is the one obtained by applying the projector $\mathbb 1\otimes |1\rangle\langle1|$, and renormalizing the result.
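That projector picture can be made concrete in a small example — a sketch in Python/NumPy, with made-up eigenvalues and coefficients standing in for the $|u_j\rangle$ expansion:

```python
import numpy as np

lam = np.array([0.5, 1.0])              # made-up eigenvalues
beta = np.array([0.6, 0.8])             # made-up coefficients (normalised)
C = 0.5

# state on (system tensor ancilla), basis ordering |u_j>|a>
psi = np.zeros(4, dtype=complex)
for j in range(2):
    psi[2*j] = beta[j] * np.sqrt(1 - C**2 / lam[j]**2)   # ancilla |0>
    psi[2*j + 1] = beta[j] * C / lam[j]                  # ancilla |1>

P = np.kron(np.eye(2), np.diag([0.0, 1.0]))  # identity on system, |1><1| on ancilla
post = P @ psi
post = post / np.linalg.norm(post)

print(post)   # nonzero entries proportional to beta_j * C / lam_j
```

The surviving amplitudes keep their $\beta_j C/\lambda_j$ ratios; only the overall normalisation changes.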

+",55,,55,,07-09-2018 20:28,07-09-2018 20:28,,,,3,,,,CC BY-SA 4.0 +2687,2,,2667,07-09-2018 15:45,,9,,"

If you are looking for reading material instead of videos, I read John Preskill's Lecture Notes in undergrad to learn more about the subject, and thought they were really informative and well developed. They were initially written in 1997, but have modern updates from 2015.

+ +

From the website:

+ +
+

Course Description

+ +
+

The theory of quantum information and quantum computation. Overview of classical information theory, compression of quantum information, transmission of quantum information through noisy channels, quantum entanglement, quantum cryptography. Overview of classical complexity theory, quantum complexity, efficient quantum algorithms, quantum error-correcting codes, fault-tolerant quantum computation, physical implementations of quantum computation.

+
+ +

Prerequisites

+ +
+

The course material should be of interest to physicists, mathematicians, computer scientists, and engineers, so we hope to make the course accessible to people with a variety of backgrounds.

+ +

Certainly it would be useful to have had a previous course on quantum mechanics, though this may not be essential. It would also be useful to know something about (classical) information theory, (classical) coding theory, and (classical) complexity theory, since a central goal of the course will be generalize these topics to apply to quantum information. But we will review this material when we get to it, so you don't need to worry if you haven't seen it before. In the discussion of quantum coding, we will use some rudimentary group theory.

+
+
+",3056,,,,,07-09-2018 15:45,,,,0,,,,CC BY-SA 4.0 +2688,1,,,07-09-2018 22:03,,7,979,"

I am relatively new to quantum computing and I feel like I don't fully understand the power of quantum computing due to a lack of understanding of how amplitude amplification works.

+ +

What is confusing me is that the amplitude of a qubit is the square root of its probability, so in amplitude amplification, how is the amplitude of the ""correct answer"" changed to negative (before then being reflected across the average)?

+ +

I understand that it is a gate and the gate is a series of complex operations, but at the same time I fail to understand physically how the gates change the amplitude of the qubit. I am referencing this in response to the webpage: IBM Q Experience Documentation - Grover's algorithm. Any help is greatly appreciated. Thank you!

+",3061,,26,,12/14/2018 6:12,9/25/2019 10:09,"In amplitude amplification, how are the amplitudes of qubits changed?",,2,1,,,,CC BY-SA 4.0 +2689,2,,2688,07-10-2018 04:32,,2,,"

EDIT: I completely misunderstood your question and thought that you were confused about what a negative amplitude means, and not about physical mechanisms. I'm leaving this up in case that actually was what you meant. whoops. For the implementation question, how a reflection is implemented physically depends on the qubit implementation you are using.

+ +

I think the main confusion you're having is from thinking about the relationship between amplitudes and probabilities in the wrong direction.

+ +

You say that the amplitudes are the square root of the probability, but a safer way of thinking, which might help in building intuition, is to say that the amplitude norm squared is the probability.

+ +

$$ \vert A\vert^2 = P$$

+ +

If this is inverted in its most general form, you get

+ +

$$A = \sqrt{P} e^{i\theta}$$ for some $\theta$. This additional phase is where your confusion is coming from, as it allows for both negative amplitudes as you are encountering in your example, as well as in many other very important states like +$$H\vert 1 \rangle = \vert -\rangle = \frac{1}{\sqrt{2}}\vert 0\rangle - \frac{1}{\sqrt{2}}\vert 1\rangle$$

+ +

Without this possibility for negative states, quantum mechanical phenomena like constructive and destructive interference of wave functions would be impossible.
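A minimal numerical illustration of that phase (Python/NumPy): the negative amplitude in $H\vert 1\rangle$ leaves the outcome probabilities untouched, but is exactly what enables interference.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
one = np.array([0.0, 1.0])

minus = H @ one                  # amplitudes [1/sqrt(2), -1/sqrt(2)]
probs = np.abs(minus)**2         # both outcomes equally likely: [0.5, 0.5]

back = H @ minus                 # the sign matters: interference returns |1>
print(minus, probs, back)
```

Had the second amplitude been positive instead, the final interference step would have returned $\vert 0\rangle$ — the probabilities alone cannot distinguish the two cases.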

+ +

Hope that helps!

+",3056,,3056,,07-10-2018 15:14,07-10-2018 15:14,,,,1,,,,CC BY-SA 4.0 +2690,1,2702,,07-10-2018 05:05,,4,286,"

In a recent question about quantum speed-up @DaftWullie says:

+
+

My research, for example, is very much about "how do we design Hamiltonians $H$ so that their time evolution $e^{-iHt_0}$ creates the operations that we want?", aiming to do everything we can in a language that is "natural" for a given quantum system, rather than having to coerce it into performing a whole weird sequence of quantum gates.

+
+

This makes me think of chronons, which are a proposed quantum of time.

+
+

"There are physical limits that prevent the distinction of arbitrarily close successive states in the time evolution of a quantum system.

+

If a discretization is to be introduced in the description of a quantum system, it cannot possess a universal value, since those limitations depend on the characteristics of the particular system under consideration. In other words, the value of the fundamental interval of time has to change a priori from system to system."

+

Introduction of a Quantum of Time ("chronon"), and its Consequences for Quantum Mechanics

+
+

Is universal chrononic computing possible?

+",2645,,-1,,6/18/2020 8:31,12/13/2018 19:57,Chrononic Computing (Time Evolution Systems),,1,0,,,,CC BY-SA 4.0 +2691,1,2709,,07-10-2018 07:05,,7,194,"

It seems that quantum computers can be classified by the type of quantum they operate on. I'm not entirely sure what category the most common current systems fall into (e.g. D-Wave, Google, IBM, Microsoft). Photonic computing seems to be one of the more 'popular' alternative methods. I'm curious about other forms of unconventional quantum computing.

+ +

Quasi interested in a few different cases:

+ +
    +
  • Phonon - sound based

  • +
  • Roton - vortex based

  • +
  • Dropleton - quantum droplet*

  • +
  • Fracton - fractal analog of phonons*

  • +
  • Plasmon - plasma based

  • +
+ +

Also curious about chronons & virtual particles.

+ +

Have significant breakthroughs in quantum computing been made using non-standard quanta?

+",2645,,55,,11/30/2021 22:26,11/30/2021 22:26,Breakthroughs in quantum computing using non-standard quanta,,2,9,,7/16/2018 18:41,,CC BY-SA 4.0 +2692,1,,,07-10-2018 08:19,,5,273,"

As in the title, I have a doubt regarding the implementation of a boolean formula used as an oracle for a quantum algorithm. +The problem is that, so far, I could reproduce the formula as a quantum circuit relatively easily and then solve the related SAT problem using Grover search. However, if for instance I have 8 variables, I would need roughly twice that number of qubits to reproduce the formula (8 for introducing the variables, the others as workspace to perform the logic operations). Even though Grover search allows a quadratic speed-up, I feel that implementing the oracle in this classical way is still a big hindrance in terms of qubits needed, given that available qubit counts are still small.

+ +

I am trying to see if it is possible to encapsulate the 8 variables in 3 qubits so that using a Walsh-Hadamard transform I would have a superposition with $2^3=8$ possible states. 2 problems arise:

+ +

1) How could I perform the logical operations needed to represent the boolean formula?

+ +

2) Given a single qubit, I can look at it as a variable and assign true and false values to the states $\left|1\right>$ and $\left|0\right>$ respectively; however, if I use multiple qubits and put all of them in a superposition, when assigning each state to a specific variable I can no longer see if they are true or false by looking at the output states! How could I decide a true/false output in this case?

+ +

Do you guys think this is something feasible? I feel I am missing a key point in the process.

+ +

For instance $\left(x_1\cdot x_2\right)+\left(x_3\cdot x_1\cdot x_4\right)+\left(x_5\cdot x_6\right)$, where every variable $x_i \in \left\lbrace 0,1\right\rbrace$ and "" $\cdot$ "" stands for the logic AND operation and ""$+$"" for the logic OR.

+",2648,,23,,07-11-2018 09:00,07-11-2018 09:00,Compact encoding of Boolean formula as oracle,,1,0,,,,CC BY-SA 4.0 +2693,2,,2651,07-10-2018 09:59,,4,,"

In Verifying Genuine High-Order Entanglement the following graphs represent entangled qudits

+ +

+

+ +
+ +

In an answer to 'Alternative to Bloch sphere to represent a single qubit' @Rob references Majorana representation, qutrit Hilbert space and NMR implementation of qutrit gates which states

+ +
+

The Majorana representation for spin−$s$ systems has found widespread applications such as determining geometric phase of spins, representing $N$ spinors by $N$ points, geometrical representation of multi-qubit entangled states, statistics of chaotic quantum dynamical systems and characterizing polarized light.

+
+ +

The paper also includes this style of representation for qudits

+ +

+ +
+ +

I recently asked about how to visually represent a qubyte. In the comments of @DaftWullie's answer I proposed an 8-cube (hypercube graph):

+ +

+ +
+

An n-cube can be projected inside a regular 2n-gonal polygon by a skew orthogonal projection

+
+ +

This method seems to allow for the complexity of entanglement to be visualized in a scalable fashion.

+",2645,,,,,07-10-2018 09:59,,,,3,,,,CC BY-SA 4.0 +2694,2,,2654,07-10-2018 10:30,,3,,"
+

Have we fallen into the same 'everything digital' bandwagon trap that keeps recurring?

+
+
+

What I have noticed is more the 'everything binary' bandwagon trap, which reminds me of Grandma's cooking secret:

+
+

Once upon a time, a mother was teaching her daughter the family recipe for making a whole baked ham. It was the very best ham anybody had ever had so they always followed that recipe carefully.

+

They prepared the marinade, scored the skin, put in the cloves, and then came a step the daughter didn't understand.

+

"Why do we cut off the ends of the ham?" she said. "Doesn't that make it dry out?"

+

"You know, I don't know," said the mother. "That's just the way grandma taught me. We should call grandma and ask."

+

So they called grandma and asked, "why do we cut off the ends of the ham? Is it to let the marinade in, or what?"

+

"No," said Grandma. "To be honest, I cut the ends off because that's how my mother taught me. I added the marinade step later, because I was worried about the ham drying out. Let's call great grandma and ask her."

+

So they called the assisted living facility where great grandma was living, and the old woman listened to their questions, and then said.

+

"Oh, for land sakes! I cut off the ends because I didn't have a pan big enough for a whole ham!"

+
+
+

I was recently thinking about qubytes & wondering if they really needed to be defined as 8 qubits. An 8-level quantum system (qunit) would have an 8 dimensional space & could in theory encode a byte (8 bits). Is this a better definition of a qubyte (quantum byte)?

+
+

Or is it all a problem of folk having no real idea how to program the beasts.

+
+",2645,,-1,,6/18/2020 8:31,07-10-2018 10:30,,,,3,,,,CC BY-SA 4.0 +2696,2,,2691,07-10-2018 15:20,,1,,"

I'm not sure if you count adiabatic quantum computing as fringe, but there was a paper using 4 NMR qubits to implement an adiabatic analogue to HHL, which allowed them to invert an 8x8 operator with 98.4% fidelity and was put on arXiv a couple weeks ago. I thought that was pretty neat.

+",3056,,3056,,07-10-2018 15:44,07-10-2018 15:44,,,,5,,,,CC BY-SA 4.0 +2697,1,,,07-10-2018 15:42,,27,2596,"

Note on the vocabulary: the word ""hamiltonian"" is used in this question to refer to Hermitian matrices.

+ +
+ +

The HHL algorithm seems to be an active subject of research in the field of quantum computing, mostly because it solve a very important problem which is finding the solution of a linear system of equations.

+ +

According to the original paper Quantum algorithm for solving linear systems of equations (Harrow, Hassidim & Lloyd, 2009) and some questions asked on this site

+ + + +

the HHL algorithm is limited to some specific cases. Here is a summary (that may be incomplete!) of the characteristics of the HHL algorithm:

+ +
+ +

HHL algorithm

+ +

The HHL algorithm solves a linear system of equation +$$A \vert x \rangle = \vert b \rangle$$ +with the following limitations:

+ +

Limitations on $A$:

+ + + +

Limitations on $\vert b \rangle$:

+ + + +

Limitations on $\vert x \rangle$ (output):

+ +
    +
  • $\vert x \rangle$ cannot be recovered fully by measurement. The only information we can recover from $\vert x \rangle$ is a ""general information"" (""expectation value"" is the term employed in the original HHL paper) such as $$\langle x\vert M\vert x \rangle$$
  • +
+ +
+ +

Question: +Taking into account all of these limitations and imagining we are in 2050 (or maybe in 2025, who knows?) with fault-tolerant large-scale quantum chips (i.e. we are not limited by the hardware), what real-world problems could HHL algorithm solve (including problems where HHL is only used as a subroutine)?

+ +

I am aware of the paper Concrete resource analysis of the quantum linear system algorithm used to compute the electromagnetic scattering cross section of a 2D target (Scherer, Valiron, Mau, Alexander, van den Berg & Chapuran, 2016) and of the corresponding implementation in the Quipper programming language and I am searching for other real-world examples where HHL would be applicable in practice. I do not require a published paper, not even an unpublished paper, I just want to have some examples of real-world use-cases.

+ +
+ +

EDIT:

+ +

Even if I am interested in every use-case, I would prefer some examples where HHL is directly used, i.e. not used as a subroutine of an other algorithm.

+ +

I am even more interested in examples of linear systems resulting of the discretisation of a differential operator that could be solved with HHL.

+ +

But let me emphasise one more time I'm interested by every use-case (subroutines or not) you know about.

+",1386,,1386,,7/13/2018 13:38,05-08-2019 16:32,What could be the possible future applications for HHL algorithm?,,2,2,,,,CC BY-SA 4.0 +2698,1,,,07-10-2018 16:37,,5,178,"

In (Lloyd et al. 2013), the authors write (beginning of page 3) that the quantum matrix inversion techniques presented by some of the same authors in (Harrow et al. 2008) allow one to efficiently implement $e^{-ig(\rho)}$ for any simply computable function $g(x)$, using multiple copies of the density matrix $\rho$ (see (1) below for a bit more context).

+ +

Given that (Harrow et al. 2008) presents a quantum algorithm to obtain a state $|x\rangle$ proportional to $A^{-1}|b\rangle$ for a given $A$ and $|b\rangle$, it doesn't seem obvious to me how this can be used to compute $e^{-ig(\rho)}$.

+ +

Which techniques exactly are the authors referring to? And how are they applied to get the stated result?

+ +
+ +

(1) +More precisely, in the (Lloyd et al. 2013) paper, before making the statement that is the object of this question, the authors describe a method to construct $e^{-i\rho t}$, to accuracy $\epsilon$, using $n$ copies of $\rho$ with $n=O(t^2 \epsilon^{-1})$.

+ +

Moreover, what is meant by simply computable function does not seem to be explained in the paper.

+",55,,,,,07-11-2018 07:17,Why does the quantum linear inversion algorithm allow to implement $e^{-ig(\rho)}$ efficiently using multiple copies of $\rho$?,,1,0,,,,CC BY-SA 4.0 +2699,2,,2553,07-10-2018 18:17,,0,,"

Of course you can allocate qubits from C#! After all, Q# compiles to C#, so it's not possible for something to be doable only from Q#.

+ +

Here's a small script I wrote that will show you the C# generated from a Q# file.

+ +

The relevant bit is Allocate.Apply, which allocates the qubits, and you can read the rest of the produced code to see where Allocate comes from.

+ +

With that in mind, should you be allocating qubits from C#? Well, no, probably not. It's annoyingly difficult, and it's best to leave the quantum stuff for a quantum language.

+",70,,70,,07-10-2018 18:29,07-10-2018 18:29,,,,0,,,,CC BY-SA 4.0 +2701,2,,2698,07-11-2018 07:12,,2,,"

One way to go about this is using the Linear Combination of Unitaries (LCU) algorithm. The LCU algorithm simulates the action of any operator that can be written as a linear combination of simulatable unitary operators. A full treatment of this can be found in Kothari's thesis. Using LCU algorithm, given the ability to apply $e^{i \rho t}$ to the state, the action of $f(\rho)$ on a state can be simulated. You do this by first writing $f(\rho)$ in Fourier space,

+ +

$$ f(\rho) = \int^{+\infty}_{-\infty} dt~ \hat{f}(t) ~e^{i\rho t}.$$

+ +

You can already see that this represents $f(\rho)$ as a sum of simulatable unitaries. But this is a continuous and infinite sum. For many functions this integral can be truncated and discretized to approximate $f(\rho)$ well. Kothari's thesis has some examples of such functions, including $A^{-1}$. See also this work, that uses this technique to simulate the action of $e^{-\beta H}$ on a state.
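As a toy classical illustration of this truncation/discretization idea (my own NumPy sketch, not taken from the thesis), one can approximate $f(\rho)=e^{-\rho^2/2}$, whose Fourier weight $\hat f(t)$ is again a Gaussian, by a finite sum of matrix exponentials $e^{i\rho t}$:

```python
import numpy as np

rho = np.array([[0.7, 0.2], [0.2, 0.3]])  # a Hermitian "density-like" matrix

# Target: f(rho) = exp(-rho^2/2), using the Fourier representation
# e^{-x^2/2} = (1/sqrt(2*pi)) * Integral dt e^{-t^2/2} e^{i x t},
# truncated to |t| <= 10 and discretized on a grid.
ts = np.linspace(-10, 10, 2001)
dt = ts[1] - ts[0]

evals, evecs = np.linalg.eigh(rho)
approx = np.zeros((2, 2), dtype=complex)
for t in ts:
    # e^{i rho t} via the spectral decomposition (rho is real symmetric)
    U = evecs @ np.diag(np.exp(1j * evals * t)) @ evecs.T
    approx += (1 / np.sqrt(2 * np.pi)) * np.exp(-t**2 / 2) * U * dt

exact = evecs @ np.diag(np.exp(-evals**2 / 2)) @ evecs.T
assert np.allclose(approx.real, exact, atol=1e-6)
assert np.allclose(approx.imag, 0, atol=1e-6)
```

The same recipe works for any $f$ whose Fourier integral can be truncated and discretized with controlled error, which is exactly the condition discussed above.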

+ +

I don't think this is the technique Lloyd et al. had in mind, but the LCU algorithm is a more recently developed technique that solves the problem.

+",2663,,2663,,07-11-2018 07:17,07-11-2018 07:17,,,,0,,,,CC BY-SA 4.0 +2702,2,,2690,07-11-2018 07:36,,3,,"
+

Is universal chrononic computing possible?

+
+ +

There is no consensus that chronons even exist.
+See the first line of this, for example.

+ +

However, time (and space) is quantized in one of the most popular generalizations of quantum mechanics, called loop quantum gravity.

+ +

If loop quantum gravity is an accurate description of the universe (which is not something we will be able to test for a very long time, until we can observe for example, Hawking radiation), then universal quantum computation with chronons would be possible as long as we can find a way to implement a universal set of gates such as {H,CNOT,R($\pi$/4)}.

+ +

It is hard enough to implement a useful number of {H,CNOT,R($\pi$/4)} gates with ordinary quanta that we've been working with for a century (such as spin quanta or atomic energy level quanta or photon quanta), so don't be disappointed if you don't see universal chrononic quantum computers on the market during your lifetime. But it is possible, provided that quanta of time actually do exist, which would be true if loop quantum gravity were to be true.

+",2293,,26,,7/14/2018 18:21,7/14/2018 18:21,,,,5,,,,CC BY-SA 4.0 +2703,1,2704,,07-11-2018 08:12,,17,2934,"

The $n$-fold Pauli operator set is defined as $G_n=\{I,X,Y,Z \}^{\otimes n}$, that is, as the set containing all possible tensor products of $n$ Pauli matrices. It is clear that the Pauli matrices form a basis for the vector space of $2\times 2$ complex matrices, that is, $\mathbb{C}^{2\times 2}$. Moreover, from the definition of the tensor product, it is known that the $n$-qubit Pauli group will form a basis for the tensor product space $(\mathbb{C}^{2\times 2})^{\otimes n}$.

+

I am wondering whether such a set forms a basis for the complex vector space on which the elements of this tensor product space act, that is $\mathbb{C}^{2^n\times 2^n}$. Summarizing, the question would be: is $(\mathbb{C}^{2\times 2})^{\otimes n}=\mathbb{C}^{2^n\times 2^n}$ true?

+

I have been trying to prove it using arguments about the dimensions of both spaces, but I have not been able to get anything yet.

+",2371,,2371,,5/13/2022 21:08,5/13/2022 21:08,Is the Pauli group for $n$-qubits a basis for $\mathbb{C}^{2^n\times 2^n}$?,,1,5,,,,CC BY-SA 4.0 +2704,2,,2703,07-11-2018 08:26,,12,,"

Yes, the set of tensor products of all possible $n$ Pauli operators (including $I$) forms an orthogonal basis for the vector space of $2^n \times 2^n$ complex matrices. To see this, first notice that the space has dimension $4^n$ and we also have $4^n$ vectors (the vectors are operators in this case). So we only need to show that they are linearly independent.

+

We can actually show something stronger. It can be easily seen that the members of the Pauli group are orthogonal under the Hilbert-Schmidt inner product. The H-S inner product of two matrices is defined as $Tr(AB^\dagger)$. We can easily verify from the definition that the Pauli group is a mutually orthogonal set under this inner product. We simply have to use the elementary property $Tr(C \otimes D) = Tr(C)Tr(D)$.

+",2663,,2663,,09-04-2021 14:45,09-04-2021 14:45,,,,3,,,,CC BY-SA 4.0 +2705,2,,2692,07-11-2018 08:27,,1,,"

Boolean variables can be represented by the matrix, $b=\frac{z+1}{2}$, where:
+$$ +z=\begin{pmatrix}1 & 0 \\ 0 & -1 +\end{pmatrix}. +$$

+ +

Notice that the eigenvalues of $b$ are {0,1}, which is exactly what you want. This is explained on Pg. 1 of this book on quantum mechanics and Boolean functions (see Eq. 3).

+ +

Pg. 2 of the same book refers to this paper which explains the connection between Boolean variables and quantum operators in even more detail in Eqs 8-9. In the paper, Eq. 13 is an example of exactly the type of function you have.

+ +

So for your example: +$x_1x_2+ x_3x_1x_4+x_5x_6$

+ +

We have: $\frac{(z_1+1)(z_2+1)}{4} + \frac{(z_3+1)(z_1+1)(z_4+1)}{8} + \frac{(z_5+1)(z_6+1)}{4}$.

+",2293,,,,,07-11-2018 08:27,,,,3,,,,CC BY-SA 4.0 +2706,2,,2697,07-11-2018 09:15,,3,,"

Rebentrost et al. recently used the HHL09 algorithm in their A Quantum Hopfield Neural Network (2018) paper, for optimization of the Hopfield network's energy function.

+ +

Basically, if the Lagrangian (which is used to optimize the network energy $E = -\frac{1}{2}x^{T}Wx + \theta^Tx$ given the constraint $Px - x^{\text{(inc)}} = 0$) is:

+ +

$$\mathcal{L} = -\frac{1}{2}x^{T}Wx + \theta^Tx - \lambda^T (Px - x^{\text{(inc)}}) + \frac{\gamma}{2}x^T x$$ then the optimization equations $\frac{\partial \mathcal{L}}{\partial x} = 0$ and $\frac{\partial \mathcal{L}}{\partial \lambda} = 0$ can be written in the form $A \mathbf{v} = \mathbf{w}$. Note that the $\gamma$ in the expression is the regularization parameter. We need to find $\mathbf{v}$ which extremizes network energy subject to the constraint $Px = x^{(\text{inc})}$ and thus, we need a matrix inversion technique. In the paper they've done exactly that and for the matrix inversion they utilized the HHL09 algorithm. See page 4 of the paper.

+ +
+ +

In short, I believe that once we have quantum computers with a sufficiently large number of qubits and decoherence time, the HHL algorithm is going to be one of the most useful subroutines for any quantum machine learning algorithm (since almost all machine learning and neural network algorithms involve some form of ""gradient descent"" or ""optimization"").

+",26,,26,,05-08-2019 16:32,05-08-2019 16:32,,,,0,,,,CC BY-SA 4.0 +2707,1,2708,,07-11-2018 11:04,,8,1996,"

I was reading the documentation for qiskit.QuantumCircuit and came across the functions cu1(theta, ctl, tgt) and cu3(theta, phi, lam, ctl, tgt). Looking at the names they seem to be controlled rotations. ctrl represents the controlled qubit and tgt represents the target qubit. However, what are theta, lambda and phi? They're rotations about which axes? Also, which rotation matrices are being used for cu1 and cu3?

+",26,,26,,12/23/2018 13:26,01-04-2023 02:34,"What are theta, phi and lambda in cu1(theta, ctl, tgt) and cu3(theta, phi, lam, ctl, tgt)? What are the rotation matrices being used?",,2,0,,,,CC BY-SA 4.0 +2708,2,,2707,07-11-2018 11:36,,10,,"

From IBM Q Documentation (the link is hard to find) here is the definition of the generic gate: +$$ +U(\theta, \phi, \lambda) = +\begin{pmatrix} +\cos\left(\frac{\theta}{2}\right) & -e^{i\lambda} \sin\left(\frac{\theta}{2}\right) \\ +e^{i\phi} \sin\left(\frac{\theta}{2}\right) & e^{i(\lambda + \phi)} \cos\left(\frac{\theta}{2}\right) +\end{pmatrix} +$$

+ +

With this gate, they define the following gates: +$$ +\begin{split} +U_1(\lambda) &= U(0, 0, \lambda) = \begin{pmatrix} +1 & 0 \\ 0 & e^{i\lambda} +\end{pmatrix} \\ +U_2(\phi, \lambda) &= U\left(\frac{\pi}{2}, \phi, \lambda\right) = \frac{1}{\sqrt{2}}\begin{pmatrix} +1 & -e^{i\lambda} \\ e^{i\phi} & e^{i(\lambda+\phi)} +\end{pmatrix} \\ +U_3(\theta, \phi, \lambda) &= U(\theta, \phi, \lambda) = \text{see above} +\end{split} +$$

+ +

These gates are the basis (with CX) of the IBM Q online backends (i.e. the real chips).

+ +

The cu1 and cu3 are the controlled operations associated with the matrices above.
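A small NumPy sketch (my own, based only on the definitions above) that builds $U(\theta,\phi,\lambda)$ and checks the $U_1$ and $U_2$ special cases as well as unitarity:

```python
import numpy as np

def u3(theta, phi, lam):
    """Generic single-qubit gate U(theta, phi, lambda) as defined above."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([
        [c,                    -np.exp(1j * lam) * s],
        [np.exp(1j * phi) * s,  np.exp(1j * (lam + phi)) * c],
    ])

def u1(lam):
    return u3(0, 0, lam)

theta, phi, lam = 0.3, 1.1, -0.7

# U1(lambda) = diag(1, e^{i lambda})
assert np.allclose(u1(lam), np.diag([1, np.exp(1j * lam)]))

# U2(phi, lambda) = U(pi/2, phi, lambda)
expected = (1 / np.sqrt(2)) * np.array([
    [1,                -np.exp(1j * lam)],
    [np.exp(1j * phi),  np.exp(1j * (lam + phi))],
])
assert np.allclose(u3(np.pi / 2, phi, lam), expected)

# Every U(theta, phi, lambda) is unitary
U = u3(theta, phi, lam)
assert np.allclose(U @ U.conj().T, np.eye(2))
```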

+",1386,,,,,07-11-2018 11:36,,,,0,,,,CC BY-SA 4.0 +2709,2,,2691,07-11-2018 12:41,,4,,"

The only two quasi-particle quanta for which I know there to be active research in quantum computing are phonons and anyons.

+ + +",2293,,26,,7/14/2018 18:24,7/14/2018 18:24,,,,1,,,,CC BY-SA 4.0 +2711,1,,,07-11-2018 17:52,,5,492,"

I am wondering if there exists a common library or a set of modules for user-defining-quantum-gate for QISKit.

+ +

If it does, could you tell me?

+ +

By referring to Define Quantum Gates and How to define user's quantum gates?, I have created 6 modules for IBM Q simulator such as CZ gate and CCZ gate (uploaded here) since they were not built in QISKit standard gates.

+ +

However, since gates such as CZ and CCZ seem pretty common to use, I am wondering whether anybody has already made their gates publicly available.

+",2100,,26,,03-12-2019 09:32,04-11-2022 02:20,Is there a common set of modules for user-defining-quantum-gate for QISKit,,1,0,,,,CC BY-SA 4.0 +2712,1,2721,,07-11-2018 23:03,,6,207,"

Referring to Farhi, Gosset, Hassidim, Lutomirski, and Shor's ""Quantum Money from Knots,"" a mint $\mathcal{M}$ generates a run of coins, including, say, $(s,|\$\rangle)$, using a quantum computer to mint $|\$\rangle$ while publishing the public serial number $s$. A merchant $\mathcal{V}$ is able to verify that $|\$\rangle$ corresponds to $s$, and is a valid money state.

+ +

The merchant $\mathcal{V}$ is required to perform quantum operations on $|\$\rangle$ to make sure that $|\$\rangle$ corresponds to $s$ and is a valid money state.

+ +
+

But if the merchant $\mathcal{V}$ is capable of performing the verification, then would she not have all quantum capability to mint her own coin in the first place?

+
+ +

The barrier to entry to minting quantum coins does not seem that much different to verifying quantum coins. Thus, we have a situation wherein anyone with a quantum computer capable of verifying such quantum coins can mint their own coins.

+ +

If so, a question becomes, how would the market determine the value of a quantum coin, potentially from different merchants or minters? Would the ""oldest"" quantum coin be valued more than newer coins? Or would a coin with an interesting serial number $s$ be valued more? Or the coin used in some famous transaction?

+ +

I would imagine a number of publicly available lists of serial numbers, one for each mint/verifier. If I have a quantum computer, I would be motivated to mint my own coin and publish the serial number. The market can decide that ""this coin from this mint is worth more than that coin from that mint,"" but how would the market decide? It seems interesting.

+",2927,,2927,,12-10-2018 00:06,12-10-2018 00:06,Can a merchant who accepts a knot-based quantum coin mint her own knot-based coin?,,2,1,,,,CC BY-SA 4.0 +2713,2,,2711,07-12-2018 08:22,,2,,"

For the moment, there is no publicly available repository of custom gates to my knowledge.

+ +

If you want I have defined some for my own use. You can find the implementation here but:

+ +
    +
  • The ""gates"" implemented are more algorithms than gates.
  • +
  • If you plan to re-use some of the gates, take a look at the license.
  • +
+ +

Also be aware that the gate hierarchy will probably change in the near future (see #316, #476, #591). Even though the changes should be backward compatible, we don't know at the moment how the CompositeGate class will change.

+",1386,,,,,07-12-2018 08:22,,,,1,,,,CC BY-SA 4.0 +2714,1,2715,,07-12-2018 12:22,,7,128,"

Let's say we have a fully working quantum computer. I give it a problem to solve and measure how long it takes to solve it. I repeat the process. Would it take the exact same time to solve the same problem?

+ +

I have read that a quantum computer's solution process can be seen as if all the potential answers were already calculated, but that to extract one from the entangled state without changing the result, you would use Grover's algorithm, which reduces the answer set. You then repeat Grover's algorithm to keep reducing the answer set. You could keep repeating it until a single answer remains, or switch to classical computing to test answers once the answer set is small enough.

+ +

This could possibly mean that the same problem has a different answer set after the first iteration of Grover's algorithm, which could cascade into how many times the algorithm needs to run to reach an answer set small enough to test with classical computing.

+ +

Did I misunderstand something, or is it reasonable that the same problem could have a varying calculation time on the same hardware?

+",3011,,3011,,07-12-2018 12:42,7/13/2018 7:43,Does a fully working quantum computer solve a specific problem at varying speeds every time?,,1,3,,,,CC BY-SA 4.0 +2715,2,,2714,07-12-2018 14:14,,5,,"

It depends upon the algorithm that you're running as to whether it will always take the same length of time to run or not. Many of the well-known (often oracle-based) algorithms, such as Deutsch-Jozsa have fixed running times, i.e. they will always take the same time to run because it is exactly the same steps that have to be run every time.

+ +

Other algorithms (Simon's, Shor, HHL, Grover...) have some probability of failure, but a way of easily recognising a correct answer. So, if it fails, you just repeat. This means that there's a discrete set of running times, and an expected running time.

+ +

However, I must emphasise that your understanding of how quantum algorithms work is not very accurate.

+ +
+

I have read that a quantum computer's solution process could be seen as all the potential answers are already calculated but to get it out of an entangled state without changing the result, you would use Grover's algorithm that reduces the answer set.

+
+ +

Usually, you do not use Grover's algorithm, unless the specific thing you're doing is a quantum search (or a related set of applications). Even Grover's is almost deterministic (if the number of solutions is known). As the size of the set you're searching over increases, the probability of failure tends to 0 so, to all intents and purposes, you run the algorithm exactly once and effectively have a deterministic running time.
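To make the "almost deterministic" point concrete, here is a short sketch (standard Grover analysis, my own code) computing the success probability after the optimal number of iterations, for a single marked item among $N$:

```python
import math

def grover_success(N, M=1):
    """Success probability after the optimal number of Grover iterations.

    Standard analysis: the state rotates by 2*theta per iteration,
    with theta = arcsin(sqrt(M/N)); success probability after k
    iterations is sin^2((2k+1)*theta).
    """
    theta = math.asin(math.sqrt(M / N))
    k = round((math.pi / (4 * theta)) - 0.5)  # optimal iteration count
    return math.sin((2 * k + 1) * theta) ** 2

# Failure probability tends to 0 as the search space grows
for N in [4, 64, 1024, 2**20]:
    assert grover_success(N) > 0.99
```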

+ +

While the first step of a quantum function can generally be thought of as calculating all possible answers in parallel, the secret sauce of quantum algorithms is very much the special tricks that can be leveraged to get a sensible answer out (for the very limited set of problems for which we know such tricks). But these depend extensively on the specific structure of the problem being solved. Grover's search is kind of the brute force option: if all you know about the problem structure is that you'll be able to recognise an answer when you get it, Grover's the one you want, but the point of quantum algorithms generally is to use further properties of the problem structure and get much greater speed-ups.

+",1837,,1837,,7/13/2018 7:43,7/13/2018 7:43,,,,2,,,,CC BY-SA 4.0 +2716,1,2720,,07-12-2018 19:37,,7,1115,"

For Shor's error correcting code, what is the intuition behind saying that the following circuit corrects the phase flip error?

+ +

I realize that the circuit is trying to compare phases of the three 3-qubit blocks, two at a time. But I don't understand how the Hadamards and CNOTs help in that task. It seems different from the general method followed for phase error correction with three qubits encoded in the Hadamard basis. It also seems to entangle the two ancillas at the end with the encoded 9 qubit code.

+ +

+",1351,,1351,,07-12-2018 20:03,7/13/2018 1:18,Shor code: phase flip error,,1,2,,,,CC BY-SA 4.0 +2717,1,3917,,07-12-2018 21:15,,8,846,"

In the introduction to continuous-variable quantum computing by Strawberry Fields (Xanadu), it lists the primary CV gates (rotation, displacement, squeezing, beamsplitter, cubic phase) along with their unitary:

+ +

+ +

What are the matrix representations of these gates?

+",2645,,,,,08-01-2018 22:27,Matrix representation of continuous-variable gates,,2,3,,,,CC BY-SA 4.0 +2718,1,2725,,07-12-2018 21:32,,7,570,"

Context:

+ +

On the 5th page of the paper Quantum circuit design for solving linear systems of equations (Cao et al, 2012) there's this circuit:

+ +

+ +

+ +

+ +
+ +

Schematic:

+ +

A brief schematic of what's actually happening in the circuit is:

+ +

+ +
+ +

Question:

+ +

Cao et al.'s circuit (in Figure 4) is specifically made for the matrix:

+ +

$$A = \frac{1}{4} \left(\begin{matrix} 15 & 9 & 5 & -3 \\ 9 & 15 & 3 & -5 \\ 5 & 3 & 15 & -9 \\ -3 & -5 & -9 & 15 \end{matrix}\right)$$

+ +

whose eigenvalues are $\lambda_1 = 2^{1-1}=1,\lambda_2 = 2^{2-1}=2,\lambda_3 = 2^{3-1}=4$ and $\lambda_4 = 2^{4-1} = 8$ and corresponding eigenvectors are $|u_i\rangle = \frac{1}{2}\sum_{j=1}^{4}(-1)^{\delta_{ij}}|j\rangle^C$. In this case since there are $4$ qubits in the clock register, the $4$ eigenvalues can be represented as states of the clock register itself (no approximation involved) i.e. as $|0001\rangle$ (binary representation of $1$), $|0010\rangle$ (binary representation of $2$), $|0100\rangle$ (binary representation of $4$) and $|1000\rangle$ (binary representation of $8$).
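These spectral claims are easy to verify numerically (a NumPy sketch of my own, using the matrix and eigenvectors as given above):

```python
import numpy as np

A = 0.25 * np.array([
    [15,  9,  5, -3],
    [ 9, 15,  3, -5],
    [ 5,  3, 15, -9],
    [-3, -5, -9, 15],
], dtype=float)

# Eigenvalues are exactly 1, 2, 4, 8
assert np.allclose(np.sort(np.linalg.eigvalsh(A)), [1, 2, 4, 8])

# Eigenvectors |u_i> = (1/2) * sum_j (-1)^{delta_ij} |j>:
# all entries +1/2 except a -1/2 at position i, paired with eigenvalue 2^i
for i, lam in enumerate([1, 2, 4, 8]):
    u = 0.5 * np.array([-1.0 if j == i else 1.0 for j in range(4)])
    assert np.allclose(A @ u, lam * u)
```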

+ +

After the first Quantum phase estimation step, the circuit's (in Fig. 4) state is

+ +

$$|0\rangle_{\text{ancilla}} \otimes \sum_{j=1}^{j=4} \beta_j |\lambda_j\rangle |u_j\rangle$$

+ +

Everything is fine till here. However, after this, to produce the state

+ +

$$\sum_{j=1}^{j=4} \beta_j |u_j\rangle_I |\lambda_j\rangle^C ((1-C^2/\lambda_j^2)^{1/2}|0\rangle + C/\lambda_j|1\rangle)$$

+ +

it seems necessary to get to the state $$\sum_{j=1}^{j=4} \beta_j |u_j\rangle_I |\frac{2^{3}}{\lambda_j}\rangle^C\otimes |0\rangle_{\text{ancilla}}$$

+ +

That is, the following mappings seem necessary in the $R(\lambda^{-1})$ rotation step:

+ +

$$|0001\rangle \mapsto |1000\rangle, |0010\rangle \mapsto |0100\rangle, |0100\rangle \mapsto |0010\rangle \ \& \ |1000\rangle \mapsto |0001\rangle$$

+ +

which implies that the middle two qubits in the clock register need to be swapped as well as the two end qubits.
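As a quick check of this claim (my own sketch, not from the paper): on these basis labels, which are the 4-bit binary representations of $\lambda \in \{1,2,4,8\}$, the required map $\lambda \mapsto 2^3/\lambda$ is exactly a full bit reversal, i.e. swapping the two end qubits and the two middle qubits:

```python
# The clock register stores lambda in {1, 2, 4, 8} as the 4-bit binary
# string |lambda>. Reversing that bit string maps |lambda> to |2^3/lambda>,
# i.e. it swaps the two end qubits AND the two middle qubits.
for lam in [1, 2, 4, 8]:
    bits = format(lam, "04b")
    assert int(bits[::-1], 2) == 2 ** 3 // lam
```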

+ +

But, in the circuit diagram they seem to be swapping the first and third qubit in the clock register! That doesn't seem reasonable to me. In the paper (Cao et al.) claim that the transformation they're doing using their SWAP gate is

+ +

$$\sum_{j=1}^{j=4} \beta_j |u_j\rangle_I |\lambda_j\rangle^C\otimes |0\rangle_{\text{ancilla}} \mapsto \sum_{j=1}^{j=4} \beta_j |u_j\rangle_I |\frac{2^{4}}{\lambda_j}\rangle^C\otimes |0\rangle_{\text{ancilla}}$$

+ +

According to their scheme, $|1000\rangle \to |0010\rangle$ (see the third page). This scheme doesn't make sense to me because the state $|0001\rangle$ (representing the eigenvalue $1$) would have to be transformed to $|2^4/1\rangle$. But the binary representation of $16$ is $10000$, so $|10000\rangle$ is a 5-qubit state! However, our clock register has only $4$ qubits in total.

+ +

So, basically I think that their SWAP gates are wrong. The SWAPs should actually have been applied between the middle qubits and the two end qubits. Could someone verify?

+ +
+ +

Supplementary question:

+ +

This is not a compulsory part of the question. Answers addressing only the previous question are also welcome.

+ +

@Nelimee wrote a program (4x4_system.py in HHL) in QISKit to simulate the circuit (Figure 4) in Cao et al.'s paper. Strangely, his program works only if the SWAP gate is applied between the middle two qubits, but not between the two end qubits.

+ +

The output of his program is as follows:

+ +
<class 'numpy.ndarray'>
+Exact solution: [-1  7 11 13]
+Experimental solution: [-0.84245754+0.j  6.96035067+0.j 10.99804383+0.j 13.03406367+0.j]
+Error in found solution: 0.16599956439346453
+
+ +

That is, in his program only the mapping $|0100\rangle \mapsto |0010\rangle$ takes place in the clock register in the $R(\lambda^{-1})$ step. There's no mapping $|1000\rangle \mapsto |0001\rangle$ taking place.

+ +

Just in case anyone figures out why this is happening please let me know in the comments (or in an answer).

+",26,,26,,7/15/2018 16:34,7/15/2018 16:34,SWAP gate(s) in the $R(\lambda^{-1})$ step of the HHL circuit for $4\times 4$ systems,,1,0,,,,CC BY-SA 4.0 +2719,2,,2717,7/13/2018 0:11,,1,,"

The link you gave says:

+ +
+

The CV model is a natural fit for simulating bosonic systems (electromagnetic fields, harmonic oscillators, phonons, Bose-Einstein condensates, or optomechanical resonators) and for settings where continuous quantum operators – such as position & momentum – are present.

+
+ +

Which means you can have many different different matrix representations for the CV gates. They then point out:

+ +
+

The most elementary CV system is the bosonic harmonic oscillator.

+
+ +

This means that for any values of the scalar (non-matrix) parameters $\alpha, \gamma, \phi, z, \theta, \gamma$, you can just calculate the formula they gave you, using the following matrix representations for the creation and annihilation operators for a bosonic harmonic oscillator:

+ +

+ +

The number operator $\hat{n}$ is just $a^\dagger a$.
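A quick numerical illustration of these matrices (a NumPy sketch of my own, assuming a finite $d$-dimensional Fock-space truncation):

```python
import numpy as np

def lowering(d):
    """Annihilation operator a truncated to a d-dimensional Fock space."""
    a = np.zeros((d, d))
    for n in range(1, d):
        a[n - 1, n] = np.sqrt(n)  # a|n> = sqrt(n)|n-1>
    return a

d = 6
a = lowering(d)
adag = a.T  # creation operator (real matrix, so dagger = transpose)
n_op = adag @ a

# Number operator is diagonal with entries 0, 1, ..., d-1
assert np.allclose(n_op, np.diag(np.arange(d)))

# Commutator [a, a^dagger] = I, up to the truncation artifact in the
# last diagonal entry (unavoidable in any finite-dimensional cutoff)
comm = a @ adag - adag @ a
assert np.allclose(comm[:-1, :-1], np.eye(d - 1))
```

With these building blocks, any of the gate formulas above can be exponentiated numerically in the truncated space (e.g. via `scipy.linalg.expm`), bearing in mind that the cutoff introduces errors for states with support on the highest Fock levels.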

+ +

Keep in mind that any matrix representation is basis-dependent, meaning that you can take these matrix representations and (for example) diagonalize them, and they would be a perfectly valid matrix representation in a new basis. However the matrices I gave you here are quite ""standard"" for quantum harmonic oscillators.

+",2293,,2293,,7/16/2018 1:13,7/16/2018 1:13,,,,16,,,,CC BY-SA 4.0 +2720,2,,2716,7/13/2018 1:18,,4,,"

The circuit detects an error by producing a measurement outcome representing a syndrome that indicates which of three blocks of three qubits was affected by a phase error (or that indicates that no phase error occurred). Once you know this, you still have to correct the phase error, if there was one, by applying a phase flip to any one of the three qubits in the affected block.

+ +

The context here is that you've already corrected for a possible bit flip error, so that the input to the circuit is a state that resulted from at most one phase error being applied to a vector in the code space of Shor's 9 qubit code. Vectors in this code space look like +$$ +\alpha |\phi_0\rangle |\phi_0\rangle |\phi_0\rangle + +\beta |\phi_1\rangle |\phi_1\rangle |\phi_1\rangle, +$$ +where +$$ +|\phi_0\rangle = \frac{|000\rangle + |111\rangle}{\sqrt{2}} \quad \text{and} \quad |\phi_1\rangle = \frac{|000\rangle - |111\rangle}{\sqrt{2}}. +$$ +A phase error on the first block (i.e., qubit 1, 2, or 3), for instance, would result in the state +$$ +\alpha |\phi_1\rangle |\phi_0\rangle |\phi_0\rangle + +\beta |\phi_0\rangle |\phi_1\rangle |\phi_1\rangle. +$$

+ +

Now, the reasoning behind the circuit is that applying Hadamard gates to each qubit of $|\phi_0\rangle$ gives a uniform superposition over even-parity strings, while applying Hadamard gates to each qubit of $|\phi_1\rangle$ gives a uniform superposition over odd-parity strings. Each set of three controlled-NOT gates will therefore induce a bit flip on one of the syndrome qubits when the three corresponding qubits are in the $|\phi_1\rangle$ state but not in the $|\phi_0\rangle$ state.
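(A numerical sketch of my own, not in the original answer.) The parity claim is easy to check directly with numpy:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
H3 = np.kron(np.kron(H, H), H)

ket = lambda bits: np.eye(8)[int(bits, 2)]
phi0 = (ket('000') + ket('111')) / np.sqrt(2)
phi1 = (ket('000') - ket('111')) / np.sqrt(2)

even = [x for x in range(8) if bin(x).count('1') % 2 == 0]
odd = [x for x in range(8) if bin(x).count('1') % 2 == 1]

out0 = H3 @ phi0   # supported only on even-parity strings
out1 = H3 @ phi1   # supported only on odd-parity strings
print(np.nonzero(np.round(out0, 10))[0])  # -> [0 3 5 6]
print(np.nonzero(np.round(out1, 10))[0])  # -> [1 2 4 7]
```

The indices 0, 3, 5, 6 are exactly the even-parity strings 000, 011, 101, 110, and 1, 2, 4, 7 the odd-parity ones, which is what the controlled-NOT parity computation onto the syndrome qubits relies on.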

+ +

So, in the example suggested above where a phase error touched qubit 1, 2, or 3, the first six controlled-NOT gates have the combined effect of flipping the second syndrome qubit, while the remaining controlled-NOT gates collectively do nothing. This results in the syndrome 01, which indicates a phase flip in block number 1 (i.e., the first three qubits). You can check that a phase error in a different block gives an appropriate outcome, and that if no phase errors occurred, the syndrome 00 is obtained, indicating no phase errors.

+ +

Note that this does not entangle the syndrome qubits with the other qubits, so when you do all of the Hadamard gates again, you recover the original input state. Of course this assumes an input of the special form described above; if you put in an arbitrary state, you might very well end up with something entangled with the syndrome.

+",1764,,,,,7/13/2018 1:18,,,,0,,,,CC BY-SA 4.0 +2721,2,,2712,7/13/2018 1:46,,4,,"

In complexity theory (quantum and classical) the distinction between construction and verification is very important, and the ability to verify certainly does not imply the ability to construct. For example, it is easy to verify that a satisfying assignment to a Boolean formula really is a satisfying assignment, but finding such an assignment given only the formula is a computationally hard problem (assuming $\text{P}\not=\text{NP}$).
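(A toy sketch of my own, not part of the original answer.) The verify/construct gap can be made concrete: checking an assignment against a CNF formula takes time linear in the formula size, while the naive search tries up to $2^n$ assignments. The particular formula here is an arbitrary example:

```python
from itertools import product

# CNF formula over 3 variables, as a list of clauses;
# a literal +i means x_i, -i means NOT x_i (variables numbered from 1)
formula = [[1, -2], [-1, 3], [2, 3]]

def verify(assignment, cnf):
    """Linear-time check that every clause contains a true literal."""
    return all(any((lit > 0) == assignment[abs(lit) - 1] for lit in clause)
               for clause in cnf)

def brute_force(cnf, n):
    """Exponential-time search over all 2^n assignments."""
    for bits in product([False, True], repeat=n):
        if verify(bits, cnf):
            return bits
    return None

sol = brute_force(formula, 3)
print(sol, verify(sol, formula))   # -> (False, False, True) True
```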

+ +

Obviously, the situation with quantum money is a very different one, but there is a similar principle at work, and perhaps the Boolean formula example helps to invalidate any generic sort of intuition stating that having the capacity to verify something (in this case that a quantum money state corresponds to a particular serial number) automatically provides the capacity to construct the same thing (which in this case means constructing a money state with the given serial number).

+ +

In the case of quantum money (of the sort that the paper you referred to considers), there are some important additional constraints. One is that it should actually not be difficult for the mint to produce money. The key here is that the production of money will result in a random choice of $s$; neither the mint nor a would-be counterfeiter would be capable of first choosing $s$ and then creating the corresponding money state. The requirement is actually much stronger: even given one copy of the money state corresponding to serial number $s$, a counterfeiter should not be able to produce two or more states that are likely to pass independent verifications for that serial number.

+ +

So, anyone with a quantum computer would indeed be able to produce as much money as they want, but presumably nobody would care; people would only assign value to those money states whose serial numbers are authenticated in some way by the mint.

+ +

Addendum:

+ +

Concerning issues such as the market value of money states, over-minting, and so on, the only answer I can offer is that quantum money, as a concept in quantum information science, does not address these issues -- it simply provides a protocol whereby states can be efficiently produced and verified but not copied. Individuals and governments could choose to use a quantum money protocol as they choose, and in a situation in which this is done it is up to each participant to decide what value to assign to each state. In this regard, the issues do not seem to me to be specific to quantum money, but are shared by all forms of currency.

+",1764,,1764,,7/13/2018 11:41,7/13/2018 11:41,,,,1,,,,CC BY-SA 4.0 +2722,1,9279,,7/13/2018 2:59,,8,358,"
+

What level of trust in the bank is needed in ""Quantum Money from Hidden Subspaces"" of Aaronson and Christiano?

+
+ +

The bank's mint works by first generating a uniformly random classical secret string $r$, and then generating a banknote $\$_r=(S_r,\rho_r)$. The authors state that the bank can generate many identical banknotes by simply reusing the secret $r$.

+ +
    +
  • But after the currency is distributed, is $r$ needed, either by the bank or by the users, ever again? If so, does the bank need to keep it safe and secure? If not, should the bank ""forget"" or destroy the secret $r$ used in the mini-scheme, lest it fall into a forger's hand?

  • +
  • Can the mint use $r$ to produce many coins with a specific serial number $S_r$, potentially targeting a specific holder of currency for devaluation?

  • +
  • Can the users of the currency know how many coins are actively in circulation, without having to trust the mint?

  • +
+ +

The authors of Hidden Subspaces note that in ""Quantum Money from Knots"" of Farhi, Gosset, Hassidim, Lutomirski, and Shor, not even the mint is likely able to generate two identical banknotes.

+ +

But I think that the inability of banks to copy their own currency is a feature, not a bug, of ""Quantum Money from Knots"", because the actions of the mint are public and known. The total amount of currency is known; no secret $r$ needs to be kept safe or destroyed; the mint can ""destroy"" a coin by removing it from the public list of serial numbers (Alexander polynomials), but cannot target a coin for devaluation by minting many copies.

+",2927,,2927,,11-10-2019 14:47,12/22/2019 13:44,"Do we have to trust the bank in ""Quantum Money from Hidden Subspaces?""",,1,4,,,,CC BY-SA 4.0 +2723,2,,2712,7/13/2018 3:39,,3,,"
+

How would the market determine the value of a quantum coin, potentially from different merchants or minters?

+
+ +

tl;dr: by trading!

+ +
+ +

Disclaimer: I am working on a startup that is addressing this problem

+ +
+ +

I curated some thoughts on blockchain in an article entitled Tokenize Everything (& the Decentralized P2P Global Market) at the end of last year. A few relevant snippets include:

+ +

Decentralization

+ +
+

The technological quantum leap here is not necessarily a new type of money. Rather, it is the ability to now achieve global consensus over a decentralized and distributed network without a trusted third party. This has many potential applications for various fields of endeavour.

+
+ +
+ +
+

A ‘decentralised funfair’ would actually be one where anyone can participate as a ‘fun provider’ (earning Funcoins for that service) and/or ‘fun beneficiary’ (giving out Funcoins to receive it), with the economics flowing directly between the two parties. The decentralised funfair would have no employees nor middlemen, only utilitarian participants; there would actually be no end to it, as long as the economic incentives on both sides remain attractive enough.

+
+ +

Liquidity

+ +
+

Because such economies are decentralised, i.e. they have no central authority taking a toll, the most liquid tokens in any economy should in theory ultimately prevail and that would constitute the best outcome for its participants.

+
+ +
+ +
+

Imagine there are multiple funfairs where a fun provider or a fun beneficiary can participate: inevitably they would choose the one that provides the most attractive incentives (i.e. more fun, higher rewards). So ultimately there will only be one decentralised funfair left and its “fun token”, the most liquid one that delivers the most value to its participants

+
+ +

Qlout

+ +
+

The barrier to entry to minting quantum coins does not seem that much different to verifying quantum coins.

+
+ +

Precisely! Given a quantum computer (network), anyone could easily access it via 'the cloud' to mint &/or trade coins. Initially, only a few devices will be able to serve as nodes in the network. However, once advancements are made & quantum devices (QPUs) are commercially available, network size will increase accordingly (& allow for individuals to self host).

+ +
+

Would the ""oldest"" quantum coin be valued more than newer coins? Or would a coin with an interesting serial number s be valued more? Or the coin used in some famous transaction?

+
+ +

These are all really great ideas! Clearly (as is the case in all markets) individuals would assign value to different coins in different ways (utility, collectability, etc).

+ +

The concept of Qlout is that each coin would be given a score based on an algorithm (secret sauce). This would only serve as a 'market score'; individuals would still be free to trade at any agreeable rate.

+",2645,,,,,7/13/2018 3:39,,,,0,,,,CC BY-SA 4.0 +2724,1,3833,,7/13/2018 3:55,,7,454,"

I am new to Stack Exchange and am working on a quantum learning platform for minority youth groups (LGBTQ, low-income, at risk, etc).

+ +

In the question below they are looking for courses on the subject, which I am also interested in, and do plan on checking those links out for ideas.

+ +
+

Introductory material for quantum machine learning

+
+ +

What I am looking for are simple videos, articles, or even games, that cover basic quantum theory at an introductory level.

+ +

There are some games I have looked into and played. +Hello Quantum! was fun and informative, though on my end there was still a lack of comprehension on how the quantum computer (or anything else ""quantum"") would actually function and play out.

+ +

My focus for the educational platform is more directed towards the software side of quantum computing. Is there anything on software that gives a good introduction to the functions and uses a quantum computer will have? As well as what language would be best to program one? Also, would there be a way to program a quantum computer through a classical computer? And, is there a simple introduction to any of this already existing?

+",3088,,26,,12-03-2019 16:08,12-03-2019 16:08,What would be an informative introduction to quantum computing software?,,6,0,,,,CC BY-SA 4.0 +2725,2,,2718,7/13/2018 7:15,,3,,"

I don't see the need for the swap gate either. Although I also don't think you need the set of swap gates that you're wanting. Remember that the standard implementation of the Fourier transform outputs the binary string $j\in\{0,1\}^4$ where the eigenvalues are of the form $2\pi j/16$ but in reverse order, so the least significant bit is at the top, and the most significant bit is at the bottom. Thus, $|1000\rangle$ already corresponds to $j=1$, and so the rotation that you want is with an angle $8\pi/2^m$, and $|0100\rangle$ corresponds to $j=2$, requiring angle $4\pi/2^m$, and so on.

+ +

However, I have to emphasise that what I see here is something of a fudge. It seems to be that the authors have very much built in the knowledge of the eigenvalues - not just that they're of the form $2\pi j/16$, but specifically that they are 1,2,4,8. It looks to me like, if one had a matrix that also contained a $j=4$ term, for example, that the inverse would not be correctly calculated because the angle of rotation actually created (LHS) +$$ +\frac{8\pi}{2^m}+\frac{4\pi}{2^m}=\frac{12\pi}{2^m}\neq\frac{8\pi}{3\cdot 2^m} +$$ +does not match the required angle for the inverse (RHS). Perhaps that was already clear to you, but it was not clear to me in trying to understand the circuit in order to answer your question!
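(My own addition, not from the original answer.) That mismatch is easy to check numerically. Here I take the rotation angle for eigenvalue-label $j$ to be $8\pi/(j\,2^m)$, which reproduces the four angles listed for $j=1,2,4,8$; this inferred formula is an assumption for illustration:

```python
import numpy as np

m = 7                                       # plays the role of r in the paper; exact value irrelevant here
angle = lambda j: 8 * np.pi / (j * 2**m)    # rotation used for eigenvalue-label j

# controlled rotations for j=1 and j=2 would both fire if j=3 were encoded in binary,
# so the circuit would produce angle(1) + angle(2) ...
produced = angle(1) + angle(2)
# ... but inverting an eigenvalue with label j=3 needs an angle proportional to 1/3:
needed = 8 * np.pi / (3 * 2**m)

print(np.isclose(produced, needed))   # -> False
```

In other words, the rotation angles add across the binary digits of $j$, but $1/j$ is not additive, which is why the circuit only works for the eigenvalue set 1, 2, 4, 8.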

+ +

In terms of testing with QISKit, it's not something that I use myself. However, can I suggest trying a different $|b\rangle$ as input? It is possible (perhaps not likely, however) that some of what's going on is masked slightly because the $|b\rangle$ chosen in the paper is an equally weighted superposition of the 4 eigenvectors. I'd want to choose something with different amplitudes for each eigenvector, just to be sure.

+",1837,,,,,7/13/2018 7:15,,,,0,,,,CC BY-SA 4.0 +2726,2,,2724,7/13/2018 7:32,,9,,"

Introductory Quantum Computation, which seems to be what you're asking for (rather than Quantum Theory more generically) is a bit of a tough ask. As you say, there are a few games around, such as this one, but they don't really go far enough to explain quantum algorithms and the functioning of a quantum computer, because that really has to be backed up by quite a lot of maths and notation, which means that one is rapidly getting away from introductory material.

+ +

I was actually trying to think at some point about how to make a good, accessible, video introduction to quantum computation, and have to admit to having stalled somewhat. What I did manage to produce was something reasonably introductory but about quantum cryptography, which I think is much more accessible than the computation side of things.

+ +

You linked to a question about software. Things like QISKit have an extensive set of documentation that give a good introduction. The tutorials on github may also be of interest to you. Moreover, things like QISKit are precisely a way of programming a QC through a CC. You give a generic quantum circuit, and it is 'compiled' for a specific hardware implementation.

+",1837,,,,,7/13/2018 7:32,,,,1,,,,CC BY-SA 4.0 +2727,1,,,7/13/2018 8:04,,3,303,"
+

The question whether surreal or hyperreal numbers (that both contain the reals, even if they have the same cardinality) could be useful to provide a more satisfactory theory of QM is maybe more interesting. -yuggib

+
+ +

Background

+ +

I indirectly ended up on this stack, much to my surprise. Little by little I have been working to piece the puzzle together.

+ +

I have been pondering this question for a while now, but was not able to formulate it so succinctly until I saw the above quote.

+ +

I still don't feel able to properly convey my intuition around why this is a good approach.

+ +

Abstract

+ +

Star

+ +
+

is not zero, but neither positive nor negative, and is therefore said to be fuzzy and confused with (a fourth alternative that means neither ""less than"", ""equal to"", nor ""greater than"") 0

+
+ +

Fuzzy logic

+ +
+

is a form of many-valued logic in which the truth values of variables may be any real number between 0 and 1. It is employed to handle the concept of partial truth, where the truth value may range between completely true and completely false. By contrast, in Boolean logic, the truth values of variables may only be the integer values 0 or 1

+
+ +

Qubit

+ +
+

—usually taken to have the value “0” and “1”, like a bit. However, whereas the state of a bit can only be either 0 or 1, the general state of a qubit according to quantum mechanics can be a superposition of both.

+
+ +

Motivation

+ +
+

""It is likely to be a fresh research question, and you the person most interested in the entire world in finding an answer --- in which case it is most probably up to you (and your opportunity!) to obtain the answer first."" -Niel

+
+ +

While the motivation to wanting to make this work may not be apparent at first, a big piece of what I'm wanting to accomplish is creating a quantum algorithm based on surreal constructions for a quantum intelligence that can use game theory for computation.

+ +

I have placed a couple different bounties in an attempt to push along this research.

+ +

Question

+ +

How can surreal maths be used in quantum computing?

+",2645,,26,,12/14/2018 6:29,12/14/2018 6:29,How can surreal maths be used in quantum computing?,,1,21,,7/16/2018 18:38,,CC BY-SA 4.0 +2728,2,,1385,7/13/2018 8:53,,8,,"

I find a graphical approach quite good for giving some insight without getting too technical. We need some inputs:

+ +
    +
  • we can produce a state $|\psi\rangle$ with non-zero overlap with the 'marked' state $|x\rangle$: $\langle x|\psi\rangle\neq 0$.
  • +
  • we can implement an operation $U_1=-(\mathbb{I}-2|\psi\rangle\langle\psi|)$
  • +
  • we can implement an operation $U_2=\mathbb{I}-2|x\rangle\langle x|$.
  • +
+ +

This last operation is the one that can mark our marked item with a -1 phase. We can also define a state $|\psi^\perp\rangle$ to be orthonormal to $|x\rangle$ such that $\{|x\rangle,|\psi^\perp\rangle\}$ forms an orthonormal basis for the span of $\{|x\rangle,|\psi\rangle\}$. Both the operations that we have defined preserve this space: you start with some state in the span of $\{|x\rangle,|\psi^\perp\rangle\}$, and they return a state within the span. Moreover, both are unitary, so the length of the input vector is preserved.

+ +

A vector of fixed length within a two-dimensional space can be visualised as the circumference of a circle. So, let's set up a circle with two orthogonal directions corresponding to $|\psi^\perp\rangle$ and $|x\rangle$. +

+ +

Our initial state $|\psi\rangle$ will have small overlap with $|x\rangle$ and large overlap with $|\psi^\perp\rangle$. If it were the other way around, search would be easy: we'd just prepare $|\psi\rangle$, measure, and test the output using the marking unitary, repeating until we got the marked item. It wouldn't take long. Let's call the angle between $|\psi\rangle$ and $|\psi^\perp\rangle$ the angle $\theta$. +

+ +

Now let's take a moment to think about what our two unitary actions do. Both have a -1 eigenvalue, and all other eigenvalues +1. In our two-dimensional subspace, that reduces to a +1 eigenvalue and a -1 eigenvalue. Such an operation is a reflection in the axis defined by the +1 eigenvector. So, $U_1$ is a reflection in the $|\psi\rangle$ axis, while $U_2$ is a reflection in the $|\psi^\perp\rangle$ axis. +

+ +

Now, take an arbitrary vector in this space, and apply $U_2$ followed by $U_1$. The net effect is that the vector is rotated by an angle $2\theta$ towards the $|x\rangle$ axis. +

+ +

So, if you start from $|\psi\rangle$, you can repeat this sufficiently many times, and get to within an angle $\theta$ of $|x\rangle$. Thus, when we measure that state, we get the value $x$ with high probability.

+ +

Now we need a little care to find the speed-up. Assume that the probability of finding $|x\rangle$ in $|\psi\rangle$ is $p\ll 1$. So, classically, we'd need $O(1/p)$ attempts to find it. In our quantum scenario, we have that $\sqrt{p}=\sin\theta\approx\theta$ (since $\theta$ is small), and we want a number of runs $r$ such that $\sin((2r+1)\theta)\approx 1$. So, $r\approx \frac{\pi}{2\theta}\approx \frac{\pi}{2\sqrt{p}}$. You can see the square-root speed-up right there.
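(A small simulation of my own, not part of the original answer.) The rotation picture and the $\sim\frac{\pi}{4}\sqrt{N}$ iteration count can be checked directly; $N$ and the marked index are arbitrary choices:

```python
import numpy as np

N = 256                     # search space size (arbitrary)
marked = 3                  # index of the marked item (arbitrary)

psi = np.ones(N) / np.sqrt(N)               # uniform initial state
U1 = 2 * np.outer(psi, psi) - np.eye(N)     # -(I - 2|psi><psi|): reflection about |psi>
U2 = np.eye(N)
U2[marked, marked] = -1                     # I - 2|x><x|: phase flip on the marked item

theta = np.arcsin(1 / np.sqrt(N))
r = int(np.pi / (4 * theta))                # ~ (pi/4) sqrt(N) iterations
state = psi.copy()
for _ in range(r):
    state = U1 @ (U2 @ state)

print(r, state[marked]**2)   # ~12 iterations, success probability near 1
```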

+",1837,,,,,7/13/2018 8:53,,,,0,,,,CC BY-SA 4.0 +2729,1,2732,,7/13/2018 11:43,,20,4762,"

The most general definition of a quantum state I found is (rephrasing the definition from Wikipedia)

+ +
+

Quantum states are represented by a ray in a finite- or infinite-dimensional Hilbert space over the complex numbers.

+
+ +

Moreover, we know that in order to have a useful representation we need to ensure that the vector representing the quantum state is a unit vector.

+ +

But in the definition above, they don't specify the norm (or the scalar product) associated with the Hilbert space considered. At first glance I thought that the norm was not really important, but I realised yesterday that the norm was everywhere chosen to be the Euclidean norm (2-norm). +Even the bra-ket notation seems to be made specifically for the Euclidean norm.

+ +

My question: Why is the Euclidean norm used everywhere? Why not use another norm? Does the Euclidean norm have useful properties that can be used in quantum mechanics that others don't?

+",1386,,26,,7/15/2018 13:14,7/15/2018 13:14,Quantum states are unit vectors... with respect to which norm?,,6,1,,,,CC BY-SA 4.0 +2730,2,,2729,7/13/2018 12:07,,8,,"

Some terminology seems a little bit jumbled here. Quantum states are represented (within a finite dimensional Hilbert space) by complex vectors of length 1, where length is measured by the Euclidean norm. They are not unitary, because unitary is a classification of a matrix, not a vector.

+ +

Quantum states are changed/evolved according to some matrix. Given that quantum states have length 1, it turns out to be necessary and sufficient that the maps of pure states to pure states are described by unitary matrices. These are the only matrices that preserve the (Euclidean) norm.

+ +

It is certainly a valid question ""could we use a different ($p$) norm for our quantum states?"" If you then classify the operations that map normalised states to normalised states, they are incredibly limited. If $p\neq 2$, the only valid operations are permutation matrices (with different phases on each element). Physics would be a whole lot more boring.
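(An illustration of my own, not in the original answer.) A generic rotation preserves the 2-norm but not the 1-norm, whereas a phase-decorated permutation preserves both:

```python
import numpy as np

v = np.array([0.6, 0.8])                       # unit vector in the 2-norm
theta = 0.3                                    # arbitrary rotation angle
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
P = np.array([[0, 1j], [1, 0]])                # permutation with a phase on one entry

for name, M in [('rotation', R), ('perm+phase', P)]:
    w = M @ v
    print(name,
          round(np.linalg.norm(w, 2), 6),      # 2-norm: preserved by both
          round(np.linalg.norm(w, 1), 6))      # 1-norm: preserved only by the permutation
```

The original 1-norm of `v` is 1.4; only the permutation-with-phase map keeps it there.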

+ +

A good way to get a feel for this is to try drawing a 2D set of axes. Draw on it the shapes corresponding to the set of points of length 1 under different $p$-norms. $p=2$ gives you the circle, $p=1$ gives you a diamond, and $p\rightarrow\infty$ gives a square. What operations can you do that map the shape onto itself? For the circle, it's any rotation. For anything else, it's just rotations by multiples of $\pi/2$. The following comes from Wikipedia:

+ +

+ +

If you want more details, you might want to look here.

+",1837,,1837,,7/13/2018 14:42,7/13/2018 14:42,,,,7,,,,CC BY-SA 4.0 +2731,2,,2729,7/13/2018 12:20,,-2,,"

The Euclidean norm on an $n$-dimensional space, as defined here, is not the only norm used for quantum states.

+ +

A quantum state doesn't have to be defined on an n-dimensional Hilbert space, for example the quantum states for a 1D harmonic oscillator are functions $\psi_i(x)$ whose orthonormality is defined by:

+ +

$$ +\int \psi_i(x)\psi_j^*(x)dx. +$$

+ +

If $i=j$ we get:

+ +

$$ +\int |\psi(x)|^2dx = \int P(x)dx = 1, +$$

+ +

because the total probability must be 1.
+If $i\ne j$, we get 0, meaning that the functions are orthogonal.

+ +

The Euclidean norm, as defined in the link I gave, is more for quantum states on discrete variables where $n$ is some countable number. In the above case, $n$ (which is the number of possible values that $x$ can be) is uncountable, so the norm doesn't fit into the definition given for a Euclidean norm on an $n$-dimensional space.

+ +

We could also apply a square root operator to the above norm, and still we'd have the required property that $\int P(x)dx=1$; the Euclidean norm can then be thought of as a special case of this norm, for the case where $x$ can only take some countable number of values. The reason why we use the above norm in quantum mechanics is that it guarantees that the probability function $P(x)$ integrates to 1, which is a mathematical law based on the definition of probability. If you had some other norm which could guarantee that all laws of probability theory are satisfied, you would be able to use that norm too.

+",2293,,2293,,7/13/2018 12:54,7/13/2018 12:54,,,,18,,,,CC BY-SA 4.0 +2732,2,,2729,7/13/2018 13:10,,7,,"

Born's rule states that $|\psi(x)|^2 = P(x)$ which is the probability of finding the quantum system in the state $|x\rangle$ after a measurement. We need the sum (or integral!) over all $x$ to be 1:

+ +

\begin{align} +\sum_x P_x &= \sum_x |\psi_x|^2 = 1,\\ +\int P(x)dx &= \int |\psi(x)|^2 dx= 1. +\end{align}

+ +

Neither of these is a valid norm because they are not homogeneous. +You can make them homogeneous simply by taking the square root:

+ +

\begin{align} +\sqrt{\sum_x |\psi_x|^2} = 1,\\ +\sqrt{\int |\psi(x)|^2dx} = 1. +\end{align}

+ +

and you may recognize this as the Euclidean norm and a generalization of the Euclidean norm to a non-discrete domain. We could also use a different norm:

+ +

\begin{align} +\sqrt{\sum_x \psi_x A \psi_x^*} = 1,\\ +\sqrt{\int \psi(x)A\psi^*(x)} = 1, +\end{align}

+ +

for some positive definite matrix/function A.

+ +
+ +

However, a $p$-norm with $p>2$ would not be as useful because, for example:

+ +

\begin{align} +\sqrt[5]{\sum_x |\psi_x|^5} \\ +\end{align}

+ +

does not have to be 1.

+ +

In this way the Euclidean norm is special because 2 is the power in Born's rule, which is one of the postulates of quantum mechanics.
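(A concrete check of my own, not in the original answer.) A normalized two-amplitude state has 2-norm 1, but its 5-norm generally differs from 1:

```python
import numpy as np

psi = np.array([3, 4], dtype=complex) / 5      # |psi_x|^2 sums to 1

p2 = np.sum(np.abs(psi)**2) ** (1 / 2)
p5 = np.sum(np.abs(psi)**5) ** (1 / 5)

print(round(p2, 6), round(p5, 6))   # the 2-norm is 1; the 5-norm is not
```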

+",2293,,2293,,7/14/2018 7:38,7/14/2018 7:38,,,,18,,,,CC BY-SA 4.0 +2733,2,,2727,7/13/2018 14:39,,2,,"

In quantum field theory there are Feynman path integrals that diverge and for this there is a concept of ""renormalization"". At least one approach to this uses surreal numbers but it is not very mainstream.

+",2293,,2293,,7/13/2018 14:50,7/13/2018 14:50,,,,2,,,,CC BY-SA 4.0 +2736,2,,2729,7/13/2018 17:01,,7,,"

More mathematically, because $\mathbb{R}^n$ with an $L^p$ norm is a Hilbert space only for $p=2$.

+",3097,,,,,7/13/2018 17:01,,,,6,,,,CC BY-SA 4.0 +2737,2,,2729,7/13/2018 18:53,,2,,"

The other answers addressed why $p=2$ in terms of which $L^p$ space to use, but not the weighting.

+ +

You could put in a Hermitian positive definite matrix $M_{ij}$ so that the inner product is $\sum x_i^* M_{ij} y_j$. But that doesn't gain you much. This is because you might as well change variables. For ease, consider the case when $M$ is diagonal. In the diagonal case, that would mean interpreting $M_{ii} \mid x_i \mid^2$ as a probability instead of $\mid x_i \mid^2$. Since $M_{ii}>0$, why not just change variables to $\tilde{x}_i = \sqrt{M_{ii}} x_i$? You can think of this as $L^2$ functions on the space of $n$ points where each point is weighted by $M_{ii}$.
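(My own sketch, not part of the original answer.) The diagonal change of variables is a one-liner to verify numerically; the dimension and random seed are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
x = rng.normal(size=n) + 1j * rng.normal(size=n)
M = np.diag(rng.uniform(0.5, 2.0, size=n))     # positive definite, diagonal

weighted = np.real(x.conj() @ M @ x)           # <x, x> in the M-weighted inner product
x_tilde = np.sqrt(np.diag(M)) * x              # rescaled variables
euclidean = np.real(x_tilde.conj() @ x_tilde)  # standard 2-norm squared

print(np.isclose(weighted, euclidean))   # -> True
```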

+ +

For the continuous 1 variable case, yes you could use $L^2 (\mathbb{R} , w(x) dx)$ as well. $w(x)$ just reweights the lengths. That's still a perfectly good Hilbert space. But the problem is that translation $x \to x+a$ was supposed to be a symmetry and $w(x)$ breaks that. So might as well not use $w(x)$. For some purposes, that symmetry is not present, so you do have a $w(x) \neq 1$.

+ +

In some cases it is useful not to move to standard form. It shuffles around how you do some calculations. For example, if you're doing some numerics, then you can reduce your errors by this sort of reshuffling to avoid really small or large numbers that your machine finds difficult.

+ +

A tricky thing is to make sure you keep track of when you changed your variables and when you didn't. You don't want to get confused between changing to the standard inner product doing some unitary and changing variables back vs trying to do that in one step. You are likely to drop factors of $\sqrt{M_{ii}}$ etc by mistake, so be careful.

+",434,,,,,7/13/2018 18:53,,,,0,,,,CC BY-SA 4.0 +2738,2,,2667,7/13/2018 20:01,,8,,"

Relatedly, there is also an EdX course on quantum cryptography. The main instructors are Stephanie Wehner and Thomas Vidick, with guest lectures by Ronald Hanson, Nicolas Gisin and David Elkouss. Its description is the following:

+
+

How can you tell a secret when everyone is able to listen in? In this course, you will learn how to use quantum effects, such as quantum entanglement and uncertainty, to implement cryptographic tasks with levels of security that are impossible to achieve classically.

+

This interdisciplinary course is an introduction to the exciting field of quantum cryptography, developed in collaboration between QuTech at Delft University of Technology and the California Institute of Technology. By the end of the course you will:

+
    +
  • Be armed with a fundamental toolbox for understanding, designing and analyzing quantum protocols.

    +
  • +
  • Understand quantum key distribution protocols.

    +
  • +
  • Understand how untrusted quantum devices can be tested.

    +
  • +
  • Be familiar with modern quantum cryptography – beyond quantum key distribution.

    +
  • +
+
+",1825,,-1,,6/18/2020 8:31,7/13/2018 23:31,,,,0,,,,CC BY-SA 4.0 +2740,1,,,7/13/2018 22:30,,11,174,"

For two-qubit states, represented by a $4\times 4$ density matrix, the generic state is described by 15 real parameters. For ease of calculation, it can help to consider restricted families of states, such as the ""$X$""-states, where any matrix elements not on either the main diagonal or anti-diagonal are 0 (requiring 7 real parameters), or rebits, where the matrix elements are all real (requiring 9 real parameters).

+ +

For any given density matrix of two qubits, it is easy to tell if it's entangled: we just apply the partial transpose criterion, checking for the presence of negative eigenvalues. One might like to measure the fraction of the space that is entangled, and for that, one must pick a particular measure.
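(A worked example of my own, not from the question.) The partial transpose test applied to a Bell state:

```python
import numpy as np

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)     # (|00> + |11>)/sqrt(2)
rho = np.outer(bell, bell.conj())

# partial transpose on the second qubit: with indices rho[(i,j),(k,l)],
# swap the second-qubit indices j and l
rho_pt = rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)

eigs = np.linalg.eigvalsh(rho_pt)
print(np.round(eigs, 3))   # -> [-0.5  0.5  0.5  0.5]: a negative eigenvalue certifies entanglement
```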

+ +

The probability with respect to Hilbert-Schmidt measure that generic two-qubit $X$-states are separable has been shown to be $\frac{2}{5}$ (arXiv:1408.3666v2, arXiv:1501.02289v2). Additionally, Lovas and Andai have demonstrated that the corresponding probability for the two-rebit density matrices is $\frac{29}{64}$ (https://arxiv.org/abs/1610.01410). Further, a strong body of evidence (though as yet no formal proof) has been developed that the probabilities for the arbitrary two-qubit and (27-dimensional) two-""quater[nionic]bit"" density matrices are $\frac{8}{33}$ and $\frac{26}{323}$, respectively (arXiv:1701.01973).

+ +

However, analogous results with respect to the important Bures (minimal monotone) measure are presently unknown.

+ +

Now, in what manner, if any, might these known Hilbert-Schmidt results be employed to assist in the further estimation/determination of their Bures counterparts?

+ +

Perhaps useful in such an undertaking would be the procedures for the generation of random density matrices with respect to +Bures and Hilbert-Schmidt measure outlined in arXiv:0909.5094v1. Further, Chapter 14 of ""Geometry of Quantum States"" of Bengtsson and Zyczkowski presents formulas for the two measures, among a wide literature of related analyses.

+ +

It seems a particularly compelling conjecture that the two-qubit Bures separability probability assumes some as yet unknown simple, elegant form ($\approx 0.073321$), as has been demonstrated of its counterparts, also based on fundamental quantum information-theoretic measures. A value of $\frac{11}{150} = \frac{11}{2 \cdot 3 \cdot 5^2} \approx 0.07333\ldots$ is an interesting candidate in this matter.

+",3089,,10480,,4/29/2021 20:49,4/29/2021 20:49,Estimate/determine Bures separability probabilities making use of corresponding Hilbert-Schmidt probabilities,,0,5,,,,CC BY-SA 4.0 +2742,2,,2729,7/14/2018 8:32,,4,,"

An elegant argument can be derived by asking which theories can we build which are described by vectors $\vec v = (v_1,\dots,v_N)$, where the allowed transformations are linear maps $\vec v\to L\vec v$, probabilities are given by some norm, and probabilities must be preserved by those maps.

+ +

It turns out that there are basically only three options:

+ +
    +
  1. Deterministic theories. Then we don't need those vectors, since we are always in one specific state, i.e. the vectors are $(0,1,0,0,0)$ and the like, and the $L$'s are only permutations.

  2. +
  3. Classical probabilistic theories. Here, we use the $1$-norm and stochastic maps. The $v_i$ are probabilities.

  4. +
  5. Quantum mechanics. Here, we use the $2$-norm and unitary transformations. The $v_i$ are amplitudes.

  6. +
+ +

These are the only possibilities. For other norms no interesting transformations exist.
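As a quick numerical illustration of this pairing of norms and maps (my own sketch, assuming NumPy; the particular matrices `S` and `H` are arbitrary examples): a column-stochastic map preserves the 1-norm of a probability vector, while a unitary preserves the 2-norm of an amplitude vector.

```python
import numpy as np

# Classical probabilistic theory: a column-stochastic map S preserves
# the 1-norm (total probability) of a probability vector p.
p = np.array([0.25, 0.75])
S = np.array([[0.9, 0.2],
              [0.1, 0.8]])          # columns sum to 1
print(np.sum(S @ p))                # total probability stays 1

# Quantum mechanics: a unitary (here the Hadamard) preserves the
# 2-norm of an amplitude vector v.
v = np.array([3 / 5, 4j / 5])
H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)
print(np.linalg.norm(H @ v))        # stays 1
```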

+ +

If you want a more detailed and nice explanation of this, Scott Aaronson's ""Quantum Computing since Democritus"" has a Lecture on this, as well as a paper.

+",491,,491,,7/14/2018 9:04,7/14/2018 9:04,,,,0,,,,CC BY-SA 4.0 +3743,1,,,7/14/2018 23:03,,5,502,"

Following @DaftWullie's answer I tried to simulate the circuit given in Fig. 4 of the paper (arXiv pre-print): Quantum circuit design for solving linear systems of equations (Cao et al, 2012), on Quirk.

+ +
+ +

The relevant circuit in the arXiv pre-print by Cao et al is:

+ +

+ +

Please note that (I think) the $e^{-iAt_0/2^s}$ (s.t. $1\leq s \leq 4$) gates in the circuit should actually be $e^{+iAt_0/2^s}$ gates instead. That's probably a misprint in the paper.

+ +
+ +

You will find the equivalent Quirk simulated circuit here. The labellings of the gates (along with the matrix entries) can be seen from the dashboard by hovering over them. The matrix entries may also be obtained from the JSON code in the URL. You might want to use a JSON formatter for that purpose.

+ +

The rotation gates used in the paper are $R(8\pi/2^{r-1}),R(4\pi/2^{r-1}),R(2\pi/2^{r-1}),R(\pi/2^{r-1})$. On page 4 they mentioned that the higher the value of $r$, the greater the accuracy. So I took $r=7$.

+ +

I created the custom gates $R_y(8\pi/2^{r-1})$, $R_y(4\pi/2^{r-1})$, $R_y(2\pi/2^{r-1})$ & $R_y(\pi/2^{r-1})$, using the definition of the $R_y$ matrices as:

+ +

$$R_y(\theta) = \left(\begin{matrix}\cos(\theta/2) & \sin(\theta/2) \\ -\sin(\theta/2) & \cos(\theta/2) \end{matrix}\right)$$

+ +

Now, the output state of the ""input register"" should have been

+ +

$$\frac{-|00\rangle + 7|01\rangle + 11|10\rangle + 13|11\rangle}{\sqrt{340}}$$ +i.e. $$-0.0542326|00\rangle + 0.379628|01\rangle + 0.596559|10\rangle + 0.705024|11\rangle$$

+ +

However, I'm getting the output state of the input register as

+ +

$$-(-0.05220|00\rangle+0.37913|01\rangle+0.59635|10\rangle+0.70562|11\rangle)$$

+ +

That is, there's an extraneous global phase of $-1$ in the final output. I'm not sure whether I have made a mistake in the implementation of the circuit OR whether the output of the circuit is actually supposed to be accompanied by the global phase.

+ +

@DaftWullie mentions:

+ +
+

If it's a global phase, what does it matter? Everything is always ""up + to a global phase""

+
+ +

That sure is logical! However, I want to be sure that I'm not making any silly error in the implementation itself. I wonder, if there is actually supposed to be a global phase of $-1$, why they didn't explicitly mention it in the paper? I find that a bit surprising. (Indeed, yes, I should perhaps directly contact the authors, but maybe someone here might be able to spot the silly mistake (on my part) quicker! :)

+ +

In case you have any questions about my simulation, please feel free to ask in the comments. I'll be happy to clarify.

+",26,,26,,12/23/2018 13:26,12/23/2018 13:26,"Error: Simulation of ""Quantum algorithm for linear systems of equations"" for $4\times 4$ systems on Quirk (without SWAP) - Global phase",,0,11,,,,CC BY-SA 4.0 +3745,1,3747,,7/15/2018 6:15,,6,683,"

From what I have seen so far, there is a lot of linear algebra. I am curious what other kinds of maths are used in QC, and the specific fields in which they are most predominantly invoked.

+",2645,,26,,12/14/2018 6:29,12/14/2018 6:29,What kind of mathematics is common in quantum computing?,,1,5,,7/16/2018 18:17,,CC BY-SA 4.0 +3746,1,,,7/15/2018 7:06,,6,239,"

For quantum error correction, the necessary and sufficient condition is given in standard texts as:

+ +

$\langle \phi| E^{\dagger}_{a} E_{b} |\psi \rangle = C_{ab} \langle \phi|\psi \rangle $

+ +

$|\psi\rangle$ and $|\phi\rangle$ are codewords. $E_{a}$ and $E_b$ are arbitrary errors that can occur.

+ +

My queries would be:

+ +
    +
  1. Why is $C_{ab}$ independent of the codewords and only dependent on the errors? Intuitively, I realize this is because otherwise we gain some information about the codeword by detecting the error and run the risk of disturbing the superposition of the codeword. Is there a more formal mathematical explanation for the same?
  2. +
  3. I realize that $C$ is a Hermitian matrix which can be diagonalized. What does it mean in terms of the codewords and the errors to diagonalize the matrix $C$?
  4. +
  5. Why is it the case that the state $|\psi\rangle$ is always an eigenstate of $ E^{\dagger}_{a} E_{b}$ with $C_{ab}$ as an eigenvalue? It doesn't seem intuitive to me.
  6. +
+",1351,,1351,,7/15/2018 9:22,7/15/2018 9:54,Quantum error correction: necessary and sufficient condition,,1,0,,,,CC BY-SA 4.0 +3747,2,,3745,7/15/2018 8:03,,7,," +",2293,,2293,,7/15/2018 8:33,7/15/2018 8:33,,,,3,,,,CC BY-SA 4.0 +3748,2,,3746,7/15/2018 9:54,,1,,"

General Comment

+ +

For errors to be correctable, they must not cause us to mistake one codeword for an orthogonal one. So if two codewords are orthogonal, and they are acted upon by correctable errors, the result will remain orthogonal

+ +

$$ \langle \phi|\psi \rangle =0 \,\,\implies\,\, \langle \phi| E^{\dagger}_{a} E_{b} |\psi \rangle = 0 $$

+ +

Answer to 1

+ +

By setting $|\phi \rangle = |\psi \rangle$, your condition gives us

+ +

$$C_{ab} = \langle \psi | E^{\dagger}_{a} E_{b} |\psi \rangle.$$

+ +

The only case for which this would have no $ |\psi \rangle$ dependence is if $E^{\dagger}_{a} E_{b}=I$, which is clearly not an interesting case!

+ +

Perhaps your source assumes that a choice of an orthonormal basis for the codewords has been made, and then all else is basis dependent. Or perhaps I misunderstood the question.

+ +

Answer to 3

+ +

Even if the errors are not unitary, the operator $E^{\dagger}_{a} E_{b}$ can be decomposed as a linear superposition of unitary operators. Since the condition in your question is linear, it therefore must be as true for non-unitary errors as for unitary errors.

+",409,,,,,7/15/2018 9:54,,,,1,,,,CC BY-SA 4.0 +3765,1,,,7/15/2018 19:51,,5,535,"

As a beginner, for exercise purposes, I’ve studied these two quantum circuits. They are equivalent, and for 2 qubits it’s easy to write the unitary transformation matrix. +

+ +

Looking for another method, I wrote what follows, but I’m not sure about the notation and, particularly, the last step. +So, I’m asking here whether what I’ve written is admissible (maybe with some correction?).

+ +

+ +

Are there other methods?

+",2886,,124,,7/15/2018 22:33,7/17/2018 15:02,"Showing the equivalence of two simple {NOT, CNOT} circuits",,3,0,,,,CC BY-SA 4.0 +3766,2,,3765,7/15/2018 21:55,,2,,"

You seem to have the basic idea. However, for a more formal way to approach the analysis, you might be interested in the following.$\def\ket#1{\lvert #1 \rangle}$

+ +
    +
  • The effect of the 'NOT' gate $X$ on standard basis states can be presented in terms of an explicit change to the bit value inside the Dirac notation, e.g.: +$$ X \,\ket t = \ket{t \oplus 1}$$ +where $a \oplus b$ is the parity (i.e. the sum modulo 2) of a pair of bits $a,b \in \{0,1\}$.

  • +
  • Using the fact that $a \oplus b$ is the sum mod 2 of a pair of bits $a,b \in \{0,1\}$, we know that $\oplus$ is commutative and associative, so that in particular +$$ (a \oplus b) \oplus c = (a \oplus c) \oplus b.$$

  • +
+ +

Using this, we may then describe your left-hand circuit as follows:

+ +

$$ +\begin{align} +\ket{\psi_1} &= \ket{c} \otimes \ket{t} ; +\\[2ex] +\ket{\psi_2} &= X\ket{c} \otimes \ket{t} +\\ +&= \ket{c {\,\oplus\;\!} 1} \otimes \ket{t} ; +\\[2ex] +\ket{\psi_3} &= \ket{c {\,\oplus\;\!} 1} \otimes \ket{(c {\,\oplus\;\!} 1) {\,\oplus\;\!} t} +\\ +&= \ket{c {\,\oplus\;\!} 1} \otimes \ket{(c {\,\oplus\;\!} t) {\,\oplus\;\!} 1} +\\ +&= X\ket{c} \otimes X\ket{c {\,\oplus\;\!} t} . +\end{align}$$
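The calculation above amounts to the operator identity $\mathrm{CNOT}\,(X\otimes I) = (X\otimes X)\,\mathrm{CNOT}$, which can also be checked directly on the matrices; a small NumPy sketch (my own, with the first tensor factor as the control qubit):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]])
I = np.eye(2, dtype=int)
# CNOT with the first tensor factor as control, second as target
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

left = CNOT @ np.kron(X, I)         # X on the control, then CNOT
right = np.kron(X, X) @ CNOT        # CNOT, then X on both qubits
print(np.array_equal(left, right))  # -> True
```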

+",124,,124,,7/15/2018 22:31,7/15/2018 22:31,,,,4,,,,CC BY-SA 4.0 +3767,1,3785,,7/16/2018 0:39,,15,757,"

So far I have read a little bit about zx-calculus & y-calculus.

+

From Reversible Computation:

+
+

The zx-calculus is a graphical language for describing quantum systems.

+
+
+
+

The zx-calculus is an equational theory, based on rewriting the diagrams +which comprise its syntax. Re-writing can be automated by means of the quantomatic software.

+
+

This method seems very interesting; however, I am not able to find much introductory information on the subject. Any insight into the subject or additional resources would be greatly appreciated.

+

Current Resources:

+ +",2645,,9947,,04-12-2021 14:36,04-12-2021 14:36,Graphical Calculus for Quantum Circuits,,2,2,,,,CC BY-SA 4.0 +3768,2,,3767,7/16/2018 1:41,,4,,"

You already put Selinger's survey, so here are a couple more links.

+ +

Baez and Stay: Baez and Stay is a survey article. It covers monoidal, braided, symmetric and dagger categories. For the example related to quantum computation, focus on either Hilb or cobordism. The appropriate string diagrams for these are included along with the sections for those types of categories. It points out the connections between logic and type theory as well, but you don't need those sections. However, it would be helpful. You could see Baez's other blog posts as well.

+ +

Qiaochu Yuan's blog: Qiaochu's blog post is more introductory and brief. It focuses solely on the vector space example so as to avoid prerequisites besides linear algebra. The later posts in that series cover other adjectives to add on, such as braided, symmetric or dagger.

+",434,,26,,5/20/2019 20:03,5/20/2019 20:03,,,,0,,,,CC BY-SA 4.0 +3769,2,,3765,7/16/2018 4:39,,0,,"

You can also use the definition of XOR in the following way: +$c\oplus t = c'.t + c.t'$ (where the primes mean not)

+ +

$\implies (c\oplus t)' = (c'.t + c.t')'$

+ +

Now using De Morgan's rule, +$(c'.t + c.t')' = (c+t').(c'+t) = c't'+ct$

+ +

Which can be seen as being the same as $c'\oplus t$
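Since there are only four input pairs, the claimed identity $(c\oplus t)' = c'\oplus t$ can also be checked exhaustively; a tiny Python sketch (my own, using `^` for XOR and `1 - x` for NOT on bits):

```python
# Exhaustive truth-table check of (c XOR t)' == c' XOR t for bits c, t
for c in (0, 1):
    for t in (0, 1):
        assert 1 - (c ^ t) == (1 - c) ^ t
print("identity holds for all four inputs")
```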

+",4107,,26,,7/17/2018 15:02,7/17/2018 15:02,,,,2,,,,CC BY-SA 4.0 +3770,1,3775,,7/16/2018 4:55,,22,2718,"

I was searching for examples of quantum circuits to exercise with Q# programming and I stumbled on this circuit: +

+ +

From: Examples of Quantum Circuit Diagrams +- Michal Charemza

+ +

During my introductory courses in quantum computation, we were taught that the cloning of a state is forbidden by the laws of QM, while in this case the first control qubit is copied onto the third (target) qubit.

+ +

I quickly tried to simulate the circuit on Quirk, something like this, which seems to confirm the cloning of the state of the first qubit in the output. Measuring the qubit before the Toffoli gate shows that there is in fact no real cloning, but instead a change to the first control qubit, and an equal output on the first and third qubits.

+ +

By doing some simple math, it can be shown that the ""cloning"" happens only if the third qubit is in the initial state 0, and only if no ""spinning operation"" (as indicated on Quirk) around Y or X is performed on the first qubit.

+ +

I tried writing a program in Q# that only confirmed the above.

+ +

I struggle to understand how the first qubit is changed by this operation, and how something similar to cloning is possible.

+ +

Thank you in advance!

+",2601,,26,,03-12-2019 09:43,4/16/2021 22:02,Toffoli gate as FANOUT,,3,2,,,,CC BY-SA 4.0 +3771,1,3780,,7/16/2018 5:23,,3,126,"

In Quantum Computation with the simplest maths possible there is a section titled ""Doing maths with a controlled-half NOT"" which covers a reversible-(N)AND circuit with controlled-half NOTs.

+ +

+ +
    +
  • What would the unitary matrix for a controlled-half NOT be?

  • +
  • How could a reversible-XNOR gate be constructed with controlled-half NOTs?

  • +
  • How would a half-adders, full adders & ripple carry adders be constructed from controlled-half NOTs?

  • +
+",2645,,26,,12/23/2018 13:24,12/23/2018 13:24,Doing maths with controlled-half NOTs,,1,0,,,,CC BY-SA 4.0 +3772,1,3781,,7/16/2018 5:40,,6,238,"
+

A topological quantum computer is a theoretical quantum computer that employs two-dimensional quasiparticles called anyons. -Wikipedia

+
+ +

Are there other instances of topological quantum computing models that do not use anyons?

+ +

Are there alternative forms of anyons besides Fibonacci anyons?

+",2645,,26,,7/16/2018 15:38,7/16/2018 15:38,Anyon alternatives in topological quantum computing,,2,0,,,,CC BY-SA 4.0 +3773,1,4620,,7/16/2018 6:00,,8,250,"

Does something like Quirk exist for topological (eg. braided) circuits?

+ +

Alternatively, any ideas on how @CraigGidney is getting these circuits (or something similar)?

+ +

+",2645,,26,,7/16/2018 15:02,11-05-2018 00:20,Topological Circuit Simulator,,2,4,,,,CC BY-SA 4.0 +3774,2,,3770,7/16/2018 6:05,,7,,"

The answer is that the no-cloning theorem states that you cannot clone an arbitrary unknown state.

+

This circuit does not violate the no-cloning theorem. To see this, let's look at what it does when the input is $\frac{1}{\sqrt{2}}(|0\rangle + |1\rangle)$. The output at the third register still has to be a $|0\rangle$ or a $|1\rangle$.

+

Therefore it's impossible for this circuit to clone an arbitrary state $|\psi\rangle$, and one example of a state that it cannot clone is: $\frac{1}{\sqrt{2}}(|0\rangle + |1\rangle)$.

+",2293,,2293,,4/16/2021 22:02,4/16/2021 22:02,,,,1,,,,CC BY-SA 4.0 +3775,2,,3770,7/16/2018 6:07,,16,,"

To simplify the question consider CNOT gate instead of Toffoli gate; CNOT is also fanout because

+ +

\begin{align} +|0\rangle|0\rangle \rightarrow |0\rangle|0\rangle\\ +|1\rangle|0\rangle \rightarrow |1\rangle|1\rangle +\end{align}

+ +

and it looks like cloning for any basis state $x\in\{0,1\}$ +\begin{align} +|x\rangle|0\rangle \rightarrow |x\rangle|x\rangle +\end{align}

+ +

but if you take a superposition $|\psi\rangle=\alpha|0\rangle + \beta|1\rangle$ then

+ +

\begin{align} +(\alpha|0\rangle+\beta|1\rangle)|0\rangle \rightarrow \alpha|0\rangle|0\rangle+ \beta|1\rangle|1\rangle +\end{align}

+ +

so generally

+ +

\begin{align} +|\psi\rangle|0\rangle\not\rightarrow|\psi\rangle|\psi\rangle +\end{align}

+ +

and fanout is not cloning.
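The difference between fanout and cloning is easy to see numerically; in this NumPy sketch (my own illustration) the CNOT output $\alpha|00\rangle+\beta|11\rangle$ is compared with the would-be clone $|\psi\rangle|\psi\rangle$ for the equal superposition:

```python
import numpy as np

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])   # first qubit controls the second

a = b = 1 / np.sqrt(2)            # |psi> = (|0> + |1>)/sqrt(2)
psi = np.array([a, b])
zero = np.array([1.0, 0.0])

fanout = CNOT @ np.kron(psi, zero)   # a|00> + b|11>
clone = np.kron(psi, psi)            # what cloning would produce
print(np.allclose(fanout, clone))    # -> False: fanout is not cloning
```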

+ +

As for the question of how the first qubit is changed - it is now entangled with the second qubit.

+",2105,,26,,7/17/2018 15:01,7/17/2018 15:01,,,,1,,,,CC BY-SA 4.0 +3776,2,,3772,7/16/2018 6:15,,0,,"
+

Are there other instances of topological QC that do not use anyons?

+
+ +

The use of anyons is part of the definition of topological QC.

+ +
+

Are there alternative forms of anyons besides Fibonacci anyons?

+
+ +

There are Fibonacci anyons and Ising anyons. An excellent reference is Non-Abelian anyons: when Ising meets Fibonacci.

+",2293,,,,,7/16/2018 6:15,,,,3,,,,CC BY-SA 4.0 +3779,2,,3765,7/16/2018 8:01,,1,,"

The way that I like to do the maths is by using linearity to break things down, and be a bit more explicit. We don't have to keep general functional forms so long as we consider the action of the circuits on a basis of states. When working with a controlled-NOT, the most natural basis to use is one that puts the computational basis on the control qubit. For example, I can track what happens on the first circuit if I input either $|0\rangle$ or $|1\rangle$ on the first qubit. +$$ +|0\rangle|t\rangle\rightarrow |1\rangle|t\rangle\rightarrow|1\rangle(X|t\rangle)\qquad |1\rangle|t\rangle\rightarrow |0\rangle|t\rangle\rightarrow|0\rangle|t\rangle +$$ +Meanwhile, for the second circuit, +$$ +|0\rangle|t\rangle\rightarrow |0\rangle|t\rangle\rightarrow|1\rangle(X|t\rangle)\qquad |1\rangle|t\rangle\rightarrow |1\rangle(X|t\rangle)\rightarrow|0\rangle|t\rangle +$$ +Both circuits give the same outputs for a complete basis of states, so they must be the same unitaries.

+",1837,,1837,,7/16/2018 10:08,7/16/2018 10:08,,,,1,,,,CC BY-SA 4.0 +3780,2,,3771,7/16/2018 8:17,,4,,"

This is the gate that I would call controlled-square-root-of-not. Bit more of a mouthful, I know, but perhaps conveys more accurately what it's doing. The point is that it's a unitary $U$ such that $U^2$ is the controlled-not. There are probably a few ways of writing down such a thing, but, for example +$$ +U=\left(\begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & \frac{e^{i\pi/4}}{\sqrt{2}} & \frac{e^{-i\pi/4}}{\sqrt{2}} \\ 0 & 0 & \frac{e^{-i\pi/4}}{\sqrt{2}} & \frac{e^{i\pi/4}}{\sqrt{2}} +\end{array}\right) +$$
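It is straightforward to confirm that this matrix is unitary and squares to the controlled-NOT; a short NumPy check (my own sketch, not part of the original answer):

```python
import numpy as np

a = np.exp(1j * np.pi / 4) / np.sqrt(2)
b = np.exp(-1j * np.pi / 4) / np.sqrt(2)
U = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0],
              [0, 0, a, b],
              [0, 0, b, a]])
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

print(np.allclose(U @ U, CNOT))                # U^2 = CNOT -> True
print(np.allclose(U @ U.conj().T, np.eye(4)))  # U is unitary -> True
```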

+ +

There's a trivial answer to your other questions. Take the normal circuits for each of these things built out of (n)and gates, and substitute the reversible (n)and circuits in their place. Of course, there may be optimisations to improve things slightly...

+",1837,,26,,7/17/2018 15:00,7/17/2018 15:00,,,,12,,,,CC BY-SA 4.0 +3781,2,,3772,7/16/2018 8:43,,7,,"
+

Are there other instances of topological QC that do not use anyons?

+
+ +

No, that's basically by definition. That said, there are different ways that one could use topological systems in order to achieve quantum computation. In the version you're talking about, you use these anyon pairs to define qubits, and braid them around each other to create quantum gates. Another option is more related to quantum memories: topological systems have a degenerate ground state that can be used to encode a qubit. But that qubit should be very robust against noise because it takes many many single-qubit errors (spanning the bulk of the system) to be misinterpreted as a logical operation. If you have many of these, you could use them as qubits in a quantum computer, but the challenge is actually getting them to do gates.

+ +
+

Are there alternative forms of anyons besides Fibonacci anyons?

+
+ +

There's a huge variety, it's just that Fibonacci anyons are a universal non-Abelian type of anyon that's comparatively easy to explain, but one does not have to be restricted to them if you can prove the right properties in some other system. The other type that are commonly discussed are Ising anyons.

+ +

Basically, every quantum system with localisable excitations has anyons, but their properties vary wildly. For example, if you look at Kitaev's Toric code, there are two types of anyon present. The particle-anti-particle pairs are basically joined together either by a string of $X$ operators, or a string of $Z$ operators. You move them around by applying $X$s or $Z$s as appropriate. When you braid them past each other, there's a single site where an $X$ and a $Z$ coincide. Since these two operators anti-commute, that corresponds to acquiring a -1 phase. Thus, the braiding effectively implements a logical $Z$. But that's all it implements; it can't change the type of excitation that the system is in (hence, we say it has Abelian anyons). That makes the computations you can do with them extremely limited. Even if you can find non-Abelian anyons, they may be of a limited form which is not universal for quantum computation (you might get an equivalent of the Clifford gates, for example).

+",1837,,,,,7/16/2018 8:43,,,,1,,,,CC BY-SA 4.0 +3782,1,3784,,7/16/2018 16:17,,10,2931,"

I was trying to generate Greenberger-Horne-Zeilinger (GHZ) state for $N$ states using quantum computing, starting with $|000...000\rangle$ (N times)

+ +

The proposed solution is to first apply Hadamard Transformation on the first qubit, and then start a loop of CNOT gates with the first qubit of all the others.

+ +

I am unable to understand how I can perform CNOT($q_1,q_2$) if $q_1$ is a part of an entangled pair, like the Bell state $B_0$ which forms here after the Hadamard transformation.

+ +

I know how to write the code for it, but algebraically why is this method correct and how is it done? Thanks.

+",2951,,26,,7/16/2018 17:06,7/16/2018 19:13,CNOT Gate on Entangled Qubits,,2,0,,,,CC BY-SA 4.0 +3783,2,,3782,7/16/2018 17:17,,7,,"

$$ +\psi_1 = |0 0 0 \rangle\\ +\psi_2 = (H \otimes I \otimes I) \psi_1 = \frac{1}{\sqrt{2}} (|0 \rangle + |1 \rangle) \otimes |0 0 \rangle\\ += \frac{1}{\sqrt{2}} ( |0 0 0 \rangle + |1 0 0 \rangle)\\ +\psi_3 = (\operatorname{CNOT}_{12} \otimes I) \psi_2 = \frac{1}{\sqrt{2}} (|0 0 0 \rangle + |1 1 0 \rangle)\\ +\psi_4 = (\operatorname{CNOT}_{13} \otimes I_{2}) \psi_3 = \frac{1}{\sqrt{2}} (|0 0 0 \rangle + |1 1 1 \rangle)\\ +$$

+ +

$\operatorname{CNOT}_{ij}$ is itself an operator on $2$ qubits giving a $4\times 4$ unitary matrix. You can apply it to any state in $\mathbb{C}^2 \otimes \mathbb{C}^2$ not just those of the form $q_i \otimes q_j$. Just write the coefficients in the computational basis where you know what to do in terms of the $\operatorname{CNOT}_{ij}$ of classical reversible computing. Then just follow your linearity nose.
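Following this through numerically reproduces the derivation above; a NumPy sketch (my own illustration, using big-endian ordering with qubit 1 as the first tensor factor, and building $\operatorname{CNOT}_{13}$ by conjugating $\operatorname{CNOT}_{12}$ with a swap of qubits 2 and 3):

```python
import numpy as np

def kron_all(*ops):
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

I = np.eye(2)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1., 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])   # first factor controls the second
SWAP = np.array([[1., 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]])

psi = np.zeros(8)
psi[0] = 1.0                             # |000>
psi = kron_all(H, I, I) @ psi            # Hadamard on qubit 1
psi = kron_all(CNOT, I) @ psi            # CNOT(1,2)
SWAP23 = kron_all(I, SWAP)
psi = SWAP23 @ kron_all(CNOT, I) @ SWAP23 @ psi   # CNOT(1,3)

print(np.round(psi, 3))   # amplitude 1/sqrt(2) on |000> and |111>
```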

+",434,,26,,7/16/2018 17:25,7/16/2018 17:25,,,,0,,,,CC BY-SA 4.0 +3784,2,,3782,7/16/2018 17:18,,3,,"
+

I am unable to understand how I can perform CNOT($q_1,q_2$) if $q_1$ + is a part of an entangled pair, like the Bell state $B_0$ which forms + here after the Hadamard transformation.

+
+ +

The key is to notice what happens to the computational basis states (or, for that matter, any other complete set of basis states) upon applying the relevant quantum gate(s). Doesn't matter whether the state is entangled or separable. This method always works.

+ +

Let's consider the $2$-qubit Bell state (of two qubits $A$ and $B$):

+ +

$$|\Psi\rangle = \frac{1}{\sqrt{2}}(|00\rangle + |11\rangle)$$

+ +

$|\Psi\rangle$ is formed by an equal linear superposition of the computational basis states $|00\rangle$ & $|11\rangle$ (which can be expressed as $|0\rangle_A\otimes|0\rangle_B$ and $|1\rangle_A\otimes|1\rangle_B$ respectively). We need not worry about the other two computational basis states, $|01\rangle$ and $|10\rangle$, as they are not part of the Bell state superposition $|\Psi\rangle$. A CNOT gate basically flips (i.e. does either one of the two mappings $|0\rangle \mapsto |1\rangle$ or $|1\rangle\mapsto |0\rangle$) the state of the qubit $B$ in case the qubit $A$ is in the state $|1\rangle$, or else it does nothing at all.

+ +

So basically CNOT will keep the computational basis state $|00\rangle$ as it is. However, it will convert the computational basis state $|11\rangle$ to $|10\rangle$. From the action of CNOT on $|00\rangle$ and $|11\rangle$, you can deduce the action of CNOT on the superposition state $|\Psi\rangle$ now:

+ +

$$\operatorname{CNOT}|\Psi\rangle = \frac{1}{\sqrt{2}}(|00\rangle + |10\rangle)$$

+ +
+ +

Edit:

+ +

You mention in the comments that you want one of the two qubits of the entangled state $|\Psi\rangle$ to act as control (and the NOT operation will be applied on a different qubit, say $C$, depending upon the control).

+ +

In that case too, you can proceed in a similar way as above.

+ +

Write down the $3$-qubit combined state:

+ +

$$|\Psi\rangle\otimes |0\rangle_C = \frac{1}{\sqrt{2}}(|0\rangle_A\otimes |0\rangle_B + |1\rangle_A\otimes|1\rangle_B)\otimes |0\rangle_C$$ $$= \frac{1}{\sqrt{2}}(|0\rangle_A\otimes |0\rangle_B\otimes |0\rangle_C+ |1\rangle_A\otimes|1\rangle_B\otimes|0\rangle_C)$$

+ +

Let's say $B$ is your control qubit.

+ +

Once again we will simply check the action of the CNOT on the computational basis states (for a 3-qubit system) i.e. $|000\rangle$ & $|110\rangle$. In computational basis state $|000\rangle = |0\rangle_A\otimes|0\rangle_B\otimes|0\rangle_C$ notice that the state of the qubit $B$ is $|0\rangle$ and that of qubit $C$ is $|0\rangle$. Since qubit $B$ is in state $|0\rangle$, the state of qubit $C$ will not be flipped. However, notice that in the computational basis state $|110\rangle = |1\rangle_A\otimes|1\rangle_B\otimes|0\rangle_C$ the qubit $B$ is in state $|1\rangle$ while qubit $C$ is in state $|0\rangle$. Since the qubit $B$ is in state $|1\rangle$, the state of the qubit $C$ will be flipped to $|1\rangle$.

+ +

Thus, you end up with the state:

+ +

$$\frac{1}{\sqrt{2}}(|0\rangle_A\otimes|0\rangle_B\otimes|0\rangle_C + |1\rangle_A\otimes|1\rangle_B\otimes|1\rangle_C)$$

+ +

This is the Greenberger–Horne–Zeilinger state for your $3$ qubits!

+",26,,26,,7/16/2018 19:13,7/16/2018 19:13,,,,8,,,,CC BY-SA 4.0 +3785,2,,3767,7/16/2018 19:57,,14,,"

The best possible textbook reference at the moment is

+ + + +

It is written by one of the two inventors of the ZX calculus (Bob Coecke), and one of the people who has contributed the most to the development of Quantomatic (Aleks Kissinger), and so would be the definitive introductory reference.

+ +

There is now also a website, zxcalculus.com, with tutorials, links to resources, and exhibiting a Python-based tool called PyZX, which you may find helpful. This website is a coordinated effort by the main proponents and developers of the theory and applications of the ZX calculus, of which I am one.

+",124,,26,,05-02-2019 10:17,05-02-2019 10:17,,,,0,,,,CC BY-SA 4.0 +3786,1,,,7/17/2018 4:45,,5,329,"

I am experimenting with some Qiskit ACQUA AI algorithms which require the following import statement:

+ +
from datasets import *
+
+ +

However, the import statement is throwing an error:

+ +

ModuleNotFoundError        Traceback (most recent call last) +    from datasets import * +ModuleNotFoundError: No module named 'datasets'

+ +

I am unable to determine the package that this import is from. Clearly, Qiskit ACQUA installation doesn't have to all required packages for the algorithm code to run. I asked the question at the IBM QE forum but the traffic on it is very low. I have not had the answer from anyone yet.

+ +

Any help would be most appreciated!

+",4120,,26,,3/18/2019 19:48,5/29/2019 18:34,Quantum SVM Algorithm Error on import,,1,2,,,,CC BY-SA 4.0 +3787,2,,2724,7/17/2018 5:03,,3,,"

Just sharing my own experience here: I honestly felt much better after reading an introductory book on quantum physics, Idiot's Guide. The book is written by three wonderful professors. They have a whole unit dedicated to Quantum Computing where they explain the fundamentals so nicely. Once you feel comfortable with that, poke around with Microsoft Code and subscribe to the IBM Quantum Experience. Installing Qiskit and playing with the code examples was essential for me. If you're coming from a functional programming background, you can also try Quipper: a quantum programming language written in Haskell. The field is so young, and those who claim to know very much basically know very little. So, don't be afraid of asking questions. There are no wrong ones.

+",4120,,,,,7/17/2018 5:03,,,,0,,,,CC BY-SA 4.0 +3788,2,,2724,7/17/2018 11:26,,4,,"

I find Quirk a very good tool for practising everything from basic concepts to more advanced subjects in the matter. It's pretty fast and intuitive, allowing you to see in real time the effect of each modification applied to the circuit. The documentation I found is not extensive, but it's easy to understand. You can find an introduction to the tool here. +On YouTube you can find a lot of educational videos. I personally like the TED talk by Leo Kouwenhoven (here). +A similar video by Veritasium is a (qu)bit more detailed, while still very understandable.

+ +

On the game side, James Wootton programmed a handful of quantum computer games, in the sense that they are games designed for quantum computers.

+",2601,,,,,7/17/2018 11:26,,,,0,,,,CC BY-SA 4.0 +3789,1,,,7/17/2018 14:46,,3,47,"

So, I wanted to learn about quantum computing.

+ +

What should I learn and where do I start?

+",2316,,26,,7/17/2018 14:57,7/17/2018 14:57,Are there sites that allow to learn about quantum computing?,,0,2,,7/17/2018 15:03,,CC BY-SA 4.0 +3790,1,,,7/17/2018 21:55,,8,527,"

CNOT gates have been realized for states living in 2-dimensional spaces (qubits).

+ +

What about higher-dimensional (qudit) states? Can CNOT gates be defined in such a case? In particular, is this possible for three-dimensional states, for example, using orbital angular momentum?

+",4131,,26,,12/23/2018 13:23,12/23/2018 13:23,Is it possible to realize CNOT gate in 3 dimension?,,2,9,,,,CC BY-SA 4.0 +3791,2,,3790,7/18/2018 6:56,,2,,"

A generalization of the $cX$ (called ""controlled $X$"" or "" controlled $\textrm{NOT}$"") gate is given as $c\tilde{X}$ in this paper. When the dimension of the Hilbert space is $d=2$, then $c\tilde{X}=cX=\textrm{CNOT}$, but $c\tilde{X}$ is also valid for $d>2$.

+ +

Then, in this paper, $d>2$ gates are interpreted in the context of OAMs.

+",2293,,2293,,7/18/2018 9:20,7/18/2018 9:20,,,,4,,,,CC BY-SA 4.0 +3792,2,,3790,7/18/2018 7:15,,4,,"

There are multiple questions implicit in this question.

+ +
+

How do you define an equivalent of the controlled-not for qutrits?

+
+ +

There are probably multiple ways that the gate can be generalised, but this paper defines it as +$$ +|x\rangle|y\rangle\mapsto|x\rangle|-x-y\text{ mod }3\rangle +$$ +I'm not sure why they use the - sign, and am instead going to take the definition +$$ +|x\rangle|y\rangle\mapsto|x\rangle|x+y\text{ mod }3\rangle +$$ +That means that we can write the unitary matrix (with basis ordering $|00\rangle,|01\rangle,\dots,|22\rangle$ and columns as inputs) as +$$ +\left(\begin{array}{ccccccccc} +1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ +0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ +0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ +0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ +0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ +0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ +0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\ +0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ +0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 +\end{array}\right). +$$
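Because the map $|x\rangle|y\rangle\mapsto|x\rangle|x+y\text{ mod }3\rangle$ simply permutes the nine basis states, its matrix can be generated programmatically; a small NumPy sketch (my own, with basis index $3x+y$ and columns as inputs):

```python
import numpy as np

d = 3
# |x>|y> -> |x>|x + y mod d>, with basis index d*x + y
CX3 = np.zeros((d * d, d * d), dtype=int)
for x in range(d):
    for y in range(d):
        CX3[d * x + (x + y) % d, d * x + y] = 1

# A permutation matrix is automatically unitary
print(np.array_equal(CX3 @ CX3.T, np.eye(d * d, dtype=int)))  # -> True
# Example: |1>|0> (column 3) is sent to |1>|1> (row 4)
print(CX3[:, 3].argmax())  # -> 4
```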

+ +
+

Is it possible to realise this gate in 3 dimensions?

+
+ +

Sure, why not? This paper defines things slightly differently, but one could construct the gate I've specified using their formalism, and they also discuss some ideas for physical implementation. This paper may also be interesting.

+ +
+

Has this gate been realised?

+
+ +

Not to my knowledge, but I can't pretend to know everything that has ever been achieved experimentally. I would point out, however, that this paper is only doing single-qudit gates, not two-qudit gates. Judging by the fact that that paper was only last year, I'd guess the two-qudit generalisation hasn't been done yet in that particular physical realisation.

+",1837,,,,,7/18/2018 7:15,,,,1,,,,CC BY-SA 4.0 +3793,1,,,7/18/2018 8:13,,7,553,"

The square-root of not and square-root of swap gates are often singled out for discussion of gates displaying important properties relating to quantum computers.

+ +
    +
  • How do I define arbitrary (non-integer) powers of the square-root of NOT or square-root of SWAP?

  • +
  • How do I find their unitary matrices?

  • +
  • How might I implement these gates?

  • +
+",1837,,26,,12/23/2018 13:23,10/30/2019 18:51,Arbitrary powers of NOT and SWAP,,2,0,,,,CC BY-SA 4.0 +3794,2,,3793,7/18/2018 8:13,,10,,"

Let's start with some general theory. If you have a normal matrix $A$ (of which unitaries are a subset), you can define any function of that matrix $f(A)$. For example, $A^{1/2}$ or $A^{\pi}$. The most natural way to do this is via the spectral decomposition: if $\{\lambda_i\}$ are the eigenvalues of $A$ and $U$ is the matrix that diagonalises $A$: +$$ +UAU^\dagger=\sum_i\lambda_i|i\rangle\langle i|:=D, +$$ +i.e. $D$ is a diagonal matrix with entries corresponding to the eigenvalues. Then, +$$ +f(A)=U^\dagger\sum_if(\lambda_i)|i\rangle\langle i| U. +$$ +You can see why this works if you think about $\sqrt{A}=U^\dagger\sqrt{D} U$ (where $\sqrt{D}$ is just the same as $D$, but taking the square root on each of the diagonal entries), and we multiply it together: +$$ +\sqrt{A}\cdot \sqrt{A}=U^\dagger\sqrt{D} UU^\dagger\sqrt{D} U=U^\dagger D U=A. +$$
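As a concrete illustration of this recipe (my own sketch, assuming NumPy and a Hermitian $A$ so that `eigh` applies; note that `eigh` returns $V$ with $A=VDV^\dagger$, i.e. $V=U^\dagger$ in the notation above):

```python
import numpy as np

def matrix_function(A, f):
    """Apply f to a Hermitian matrix A via its spectral decomposition."""
    vals, vecs = np.linalg.eigh(A)   # A = vecs @ diag(vals) @ vecs^dagger
    return vecs @ np.diag(f(vals.astype(complex))) @ vecs.conj().T

X = np.array([[0.0, 1.0], [1.0, 0.0]])
sqrtX = matrix_function(X, np.sqrt)                    # square root of NOT
print(np.allclose(sqrtX @ sqrtX, X))                   # -> True
print(np.allclose(sqrtX @ sqrtX.conj().T, np.eye(2)))  # still unitary -> True
```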

+ +

In this way, we can define arbitrary power of $X$. Effectively, we have +$$ +X^{q}=|+\rangle\langle +|+e^{i\pi q}|-\rangle\langle -|=\frac{1}{2}\left(\begin{array}{cc} +1+e^{i\pi q} & 1-e^{i\pi q} \\ 1-e^{i\pi q} & 1+ e^{i\pi q} +\end{array}\right)=e^{i\pi q/2}\left(\begin{array}{cc} +\cos\frac{\pi q}{2} & -i\sin\frac{\pi q}{2} \\ -i\sin\frac{\pi q}{2} & \cos \frac{\pi q}{2} +\end{array}\right) +$$
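As a quick numerical sanity check (a sketch of my own, not part of any library; `mat_pow` is a helper name I've made up), the closed form above agrees with the spectral-decomposition definition:

```python
import numpy as np

# Sketch: fractional powers via spectral decomposition (mat_pow is a
# hypothetical helper, defined here for illustration; principal branch).
def mat_pow(A, q):
    vals, vecs = np.linalg.eigh(A)  # A is Hermitian
    return vecs @ np.diag(np.exp(q * np.log(vals.astype(complex)))) @ vecs.conj().T

X = np.array([[0, 1], [1, 0]], dtype=complex)

q = 0.5
closed_form = np.exp(1j * np.pi * q / 2) * np.array(
    [[np.cos(np.pi * q / 2), -1j * np.sin(np.pi * q / 2)],
     [-1j * np.sin(np.pi * q / 2), np.cos(np.pi * q / 2)]])

assert np.allclose(mat_pow(X, q), closed_form)
assert np.allclose(mat_pow(X, 0.5) @ mat_pow(X, 0.5), X)  # sqrt(X)·sqrt(X) = X
```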

+ +

There are several ways in which you might implement this gate. For example, as already mentioned here, you can think of it as a continuous-time operator +$$ +e^{i(\mathbb{I}-X)t} +$$ +for any $t$ that you want ($t=\pi q/2$). Another way, if you have arbitrary phase gates available, is just the sequence Hadamard - phase - Hadamard, as the Hadamards convert a $z$-rotation into an $x$-rotation.

+ +
+ +

As yet, I haven't mentioned the power of swap. Think about the swap gate. Clearly, two eigenvectors (of eigenvalue 1) are $|00\rangle$ and $|11\rangle$. These are unchanged by taking arbitrary powers. The bit that's affected is a $2\times 2$ subspace spanned by $\{|01\rangle,|10\rangle\}$. But when you look at just this subspace, the SWAP matrix is exactly the same as $X$. +So, knowing about $f(X)$ immediately tells us about $f(\mathrm{SWAP})$. +$$ +\mathrm{SWAP}^q=\left(\begin{array}{cccc} +1 & 0 & 0 & 0 \\ +0 & e^{i\pi q/2}\cos\frac{\pi q}{2} & -ie^{i\pi q/2}\sin\frac{\pi q}{2} & 0 \\ +0 & -ie^{i\pi q/2}\sin\frac{\pi q}{2} & e^{i\pi q/2}\cos \frac{\pi q}{2} & 0 \\ +0 & 0 & 0 & 1 +\end{array}\right) +$$
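The same kind of numerical check works for SWAP (again just a sketch; `mat_pow` is a small spectral-decomposition helper of my own, not a library function):

```python
import numpy as np

# Sketch: SWAP^q via spectral decomposition (mat_pow is a hypothetical helper).
def mat_pow(A, q):
    vals, vecs = np.linalg.eigh(A)  # A is Hermitian; principal branch of the power
    return vecs @ np.diag(np.exp(q * np.log(vals.astype(complex)))) @ vecs.conj().T

SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=complex)

root = mat_pow(SWAP, 0.5)
assert np.allclose(root @ root, SWAP)  # sqrt(SWAP) squares to SWAP
# |00> and |11> are untouched by any power:
assert np.allclose(root[0, 0], 1) and np.allclose(root[3, 3], 1)
```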

+ +

If you want a continuous-time implementation, then one can use +$$ +e^{it(\mathbb{I}-Z\otimes Z -X\otimes X-Y\otimes Y)/2}, +$$ +because this Hamiltonian, when restricted to the $2\times 2$ subspace is just $\mathbb{I}-X$, as we required before. As for a quantum circuit, that has been partially addressed here. That answer effectively conveys how any $f(\mathrm{SWAP})$ can be converted into a controlled-$f(X)$ gate. One simply has to get the angle of rotation correct, and sufficient information is conveyed in that answer.

+",1837,,26,,10/30/2019 18:51,10/30/2019 18:51,,,,4,,,,CC BY-SA 4.0 +3796,2,,3770,7/18/2018 16:14,,8,,"

The no cloning theorem says that there is no circuit which creates independent copies of all quantum states. Mathematically, no cloning states that:

+ +

$$\forall C: \exists a,b: C \cdot \Big( (a|0\rangle + b|1\rangle)\otimes|0\rangle \Big) \neq (a|0\rangle + b|1\rangle) \otimes (a|0\rangle + b|1\rangle)$$

+ +

Fanout circuits don't violate this theorem. They don't make independent copies. They make entangled copies. Mathematically, they do:

+ +

$$\text{FANOUT} \cdot \Big( (a|0\rangle + b|1\rangle) \otimes |0\rangle \Big) = a|00\rangle + b|11\rangle$$

+ +

So everything is fine because $a|00\rangle + b|11\rangle$ is not the same thing as $(a|0\rangle + b|1\rangle) \otimes (a|0\rangle + b|1\rangle)$.
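To make this concrete, here is a small numerical check (a sketch of mine, using CNOT as the fanout circuit):

```python
import numpy as np

# CNOT acting as "fanout": it entangles, it does not clone.
a, b = 0.6, 0.8                     # arbitrary normalised amplitudes
psi = np.array([a, b])
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

fanout = CNOT @ np.kron(psi, [1, 0])   # (a|0> + b|1>) ⊗ |0>  ->  a|00> + b|11>
clone = np.kron(psi, psi)              # the independent-copy state

assert np.allclose(fanout, [a, 0, 0, b])
assert not np.allclose(fanout, clone)  # entangled copy ≠ independent copy
```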

+",119,,,,,7/18/2018 16:14,,,,0,,,,CC BY-SA 4.0 +3798,2,,2562,7/19/2018 9:42,,2,,"

It is incorrect to use modulo arithmetic in this context; instead, finite field arithmetic should be applied. In $\textrm{GF}(4) = \{0, 1, x, x^2\}$ we have $x^2 = x + 1$, and conjugation of $a$ is defined as $\bar{a} = a^2$.

+ +

Addition, multiplication and conjugation tables are then as follows:

+ +

+ +

In this picture we have $0 \equiv 0$, $1 \equiv 1$, $2 \equiv x$, and $3 \equiv x^2$ such that $2 \times 2 = 3$ and so the apparent inconsistency does not occur.
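The arithmetic can be reproduced in a few lines of code (an illustrative sketch of mine, in the $\{0,1,2,3\}$ labelling with $2 \equiv x$ and $3 \equiv x^2$; the function names are my own):

```python
# GF(4) arithmetic in the {0, 1, 2, 3} labelling (2 ≡ x, 3 ≡ x²).
def gf4_add(a, b):
    return a ^ b  # adding polynomials over GF(2) is bitwise XOR

def gf4_mul(a, b):
    if a == 0 or b == 0:
        return 0
    # the multiplicative group {1, 2, 3} is cyclic, generated by x (= 2)
    exp = [1, 2, 3]              # x^0 = 1, x^1 = 2, x^2 = 3
    log = {1: 0, 2: 1, 3: 2}
    return exp[(log[a] + log[b]) % 3]

def gf4_conj(a):
    return gf4_mul(a, a)         # conjugation is a -> a²

assert gf4_mul(2, 2) == 3        # x·x = x², i.e. "2 × 2 = 3", not 0 or 1
assert gf4_add(2, 3) == 1        # x + x² = 1
assert gf4_conj(2) == 3 and gf4_conj(3) == 2
```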

+",391,,391,,7/19/2018 11:21,7/19/2018 11:21,,,,2,,,,CC BY-SA 4.0 +3799,2,,2697,7/19/2018 11:46,,10,,"

A couple years ago it was shown in Quantum algorithms and the finite element method by Montanaro and Pallister that the HHL algorithm could be applied to the Finite Element Method (FEM) which is a ""technique for efficiently finding numerical approximations to the solutions of boundary value problems (BVPs) for partial differential equations, based on discretizing the parameter space via a finite mesh"".

+ +

They showed that within this context HHL could be used to achieve (perhaps at most) a polynomial speedup over the standard classical algorithm (the ""conjugate gradient method"").

+ +

With respect to real-world use-cases, they state that

+ +
+

""One example application is any dynamical problem involving $n$ bodies, which implies solving a PDE defined over a configuration space of dimension 2n. Also, there may be a significant advantage for problems in mathematical finance; + for example, pricing multiasset options requires solving the Black-Scholes equation over a domain with dimension given by the number of assets""

+
+ +

This opens up a whole area of potential use-cases for HHL (assuming conditions on the sparsity of $A$ can be satisfied).

+",391,,,,,7/19/2018 11:46,,,,4,,,,CC BY-SA 4.0 +3800,1,,,7/19/2018 14:34,,13,1769,"

For each IBM quantum chip, one can write a dictionary mapping each control qubit j to a list of its physically allowed targets, assuming j is the control of a CNOT. For example,

+ +
ibmqx4_c_to_tars = {
+    0: [],
+    1: [0],
+    2: [0, 1, 4],
+    3: [2, 4],
+    4: []}  # 6 edges
+
+ +

for their ibmqx4 chip.

+ +

What would that dict be for Google's 72-qubit Bristlecone chip? You can write the dict as a comprehension. Same question for Rigetti's 19-qubit chip.

+",1974,,1974,,7/20/2018 14:13,7/20/2018 21:22,What are physically allowed CNOTs for Rigetti's 19 qubit chip and Google's 72 qubit BristleCone chip?,,3,2,,,,CC BY-SA 4.0 +3802,2,,3800,7/19/2018 14:54,,9,,"

From the original blog post presenting the Bristlecone quantum chip, here is the connectivity map of the chip:

+ +

+ +

Each cross represents a qubit, with nearest-neighbour connectivity. If you number the qubits from left to right, top to bottom (just like how you read English), starting at $0$, then the connectivity map would be given by:

+ +
connectivity_map = {
+    i : [i + offset
+         for offset in (-6, -5, 5, 6) # values deduced by taking a qubit in the middle of
+                                      # the chip and computing the offsets between the chosen
+                                      # qubit and its 4 neighbours
+         if ((0 <= i+offset < 72)             # the neighbour should be a valid qubit
+             and ((i+offset) // 6 != i // 6)) # the neighbour should not be on the same line
+    ]
+    for i in range(72)
+}
+
+ +

Warning: the expression above is completely unverified. It seems to work for the first qubits, it seems logical to me, but it's up to you to check that the map is 100% correct.

+ +

Warning 2: Google's blog post does not talk about the orientation of the connections between qubits. The connectivity map given above assumes that the connections are bilateral.
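One quick sanity check you can run on such a map (my own sketch; it only tests internal consistency, not whether the guessed layout matches the real device): under the bilateral assumption the map must be symmetric.

```python
# Rebuild the comprehension (without the comments) and check symmetry:
# if j is listed as a neighbour of i, then i must be listed as a neighbour of j.
connectivity_map = {
    i: [i + offset for offset in (-6, -5, 5, 6)
        if 0 <= i + offset < 72 and (i + offset) // 6 != i // 6]
    for i in range(72)
}

for i, neighbours in connectivity_map.items():
    for j in neighbours:
        assert i in connectivity_map[j]

assert connectivity_map[0] == [6]  # corner qubit: a single neighbour on the next row
```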

+",1386,,1386,,7/19/2018 15:14,7/19/2018 15:14,,,,0,,,,CC BY-SA 4.0 +3804,2,,3800,7/19/2018 17:55,,12,,"

Bristlecone's native operation is the CZ, not the CNOT. However, you can transform between the two with Hadamard gates so this is sort of a trivial difference.

+ +

Bristlecone can perform a CZ between any adjacent pair of qubits on a grid. You can see the grid by installing cirq and printing out the Bristlecone device:

+ +
$ pip install cirq
+$ python
+>>> import cirq
+>>> print(cirq.google.Bristlecone)
+                                             (0, 5)────(0, 6)
+                                             │         │
+                                             │         │
+                                    (1, 4)───(1, 5)────(1, 6)────(1, 7)
+                                    │        │         │         │
+                                    │        │         │         │
+                           (2, 3)───(2, 4)───(2, 5)────(2, 6)────(2, 7)───(2, 8)
+                           │        │        │         │         │        │
+                           │        │        │         │         │        │
+                  (3, 2)───(3, 3)───(3, 4)───(3, 5)────(3, 6)────(3, 7)───(3, 8)───(3, 9)
+                  │        │        │        │         │         │        │        │
+                  │        │        │        │         │         │        │        │
+         (4, 1)───(4, 2)───(4, 3)───(4, 4)───(4, 5)────(4, 6)────(4, 7)───(4, 8)───(4, 9)───(4, 10)
+         │        │        │        │        │         │         │        │        │        │
+         │        │        │        │        │         │         │        │        │        │
+(5, 0)───(5, 1)───(5, 2)───(5, 3)───(5, 4)───(5, 5)────(5, 6)────(5, 7)───(5, 8)───(5, 9)───(5, 10)───(5, 11)
+         │        │        │        │        │         │         │        │        │        │
+         │        │        │        │        │         │         │        │        │        │
+         (6, 1)───(6, 2)───(6, 3)───(6, 4)───(6, 5)────(6, 6)────(6, 7)───(6, 8)───(6, 9)───(6, 10)
+                  │        │        │        │         │         │        │        │
+                  │        │        │        │         │         │        │        │
+                  (7, 2)───(7, 3)───(7, 4)───(7, 5)────(7, 6)────(7, 7)───(7, 8)───(7, 9)
+                           │        │        │         │         │        │
+                           │        │        │         │         │        │
+                           (8, 3)───(8, 4)───(8, 5)────(8, 6)────(8, 7)───(8, 8)
+                                    │        │         │         │
+                                    │        │         │         │
+                                    (9, 4)───(9, 5)────(9, 6)────(9, 7)
+                                             │         │
+                                             │         │
+                                             (10, 5)───(10, 6)
+
+ +

Here is how you can get a set containing the allowed CZ operations:

+ +
qubits = cirq.google.Bristlecone.qubits
+allowed = {cirq.CZ(a, b)
+           for a in qubits
+           for b in qubits
+           if a.is_adjacent(b)}
+
+ +

The set has 121 elements in it, and it's somewhat random whether you get CZ(x, y) or CZ(y, x) in the set, so I won't include a printout of the set here.

+ +

An additional constraint to keep in mind is that you cannot perform two CZs next to each other at the same time. Cirq takes this into account when creating circuits targeted at Bristlecone. For example:

+ +
import cirq
+device = cirq.google.Bristlecone
+a, b, c, d, e = device.col(6)[:5]
+circuit = cirq.Circuit.from_ops(
+    cirq.CZ(a, b),
+    cirq.CZ(c, d),
+    cirq.CZ(a, b),
+    cirq.CZ(d, e),
+    device=device)
+print(circuit)
+# (0, 6): ───@───────@───
+#            │       │
+# (1, 6): ───@───────@───
+# 
+# (2, 6): ───────@───────
+#                │
+# (3, 6): ───────@───@───
+#                    │
+# (4, 6): ───────────@───
+
+ +

The first two operations were staggered because they are adjacent CZs, but the second two weren't because they aren't.

+",119,,119,,7/19/2018 18:14,7/19/2018 18:14,,,,5,,,,CC BY-SA 4.0 +3808,1,,,7/19/2018 18:38,,2,87,"

Since quantum computing is such a new area of technology, I want to explore it by building a quantum computer, or at least a prototype based on one. I would like help with getting set up and with the initial steps I should keep in mind. Please tell me about each implementation step.

+",2566,,26,,12/14/2018 5:26,12/14/2018 5:26,"Is it possible to make our own quantum computer? If yes, what will it take?",,0,7,,7/19/2018 21:31,,CC BY-SA 4.0 +3809,1,,,7/19/2018 19:03,,7,1018,"

The QISKit documentation doesn't explain what a TDG gate does, and I can't find it anywhere else online.

+",4150,,26,,03-12-2019 09:33,03-12-2019 09:33,What is the purpose of the TDG gate in QISKit?,,1,0,,,,CC BY-SA 4.0 +3810,2,,3809,7/19/2018 19:16,,8,,"

According to the QISKit documentation, tdg(q) applies the Tdg gate to a qubit.

+ +

$T$ is basically the $\pi/8$ phase shift gate, whose matrix representation in the standard (computational) basis is:

+ +

$$\left(\begin{matrix}1 & 0 \\ 0 & e^{i\pi/4}\end{matrix}\right)$$

+ +

Tdg is simply the conjugate transpose of the matrix $T$ i.e. $T^{\dagger}$, which is:

+ +

$$\left(\begin{matrix}1 & 0 \\ 0 & e^{-i\pi/4}\end{matrix}\right)$$

+ +

Thus, the $T$ gate would map the basis vectors (of a qubit) $|0\rangle$ to $|0\rangle$ itself and $|1\rangle$ to $e^{i\pi/4}|1\rangle$, whereas $T^{\dagger}$ would map $|0\rangle$ to $|0\rangle$ itself and $|1\rangle$ to $e^{-i\pi/4}|1\rangle$.
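A quick numerical check (illustrative, not taken from the QISKit docs) that the two matrices are each other's conjugate transpose and compose to the identity:

```python
import numpy as np

# T and its conjugate transpose Tdg, written out explicitly.
T = np.array([[1, 0], [0, np.exp(1j * np.pi / 4)]])
Tdg = np.array([[1, 0], [0, np.exp(-1j * np.pi / 4)]])

assert np.allclose(Tdg, T.conj().T)      # Tdg is T-dagger
assert np.allclose(T @ Tdg, np.eye(2))   # applying T then Tdg undoes the phase
```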

+ +

P.S: In case you're wondering what dg means in tdg, it is simply an abbreviation for ""dagger"" i.e. in the sense of $T^{\dagger}$ (pronounced as $T$ - dagger).

+",26,,26,,7/19/2018 21:16,7/19/2018 21:16,,,,2,,,,CC BY-SA 4.0 +3813,1,3819,,7/19/2018 21:21,,6,158,"

$\newcommand{\Q}{\mathbf{Q}}\newcommand{\S}{\mathbf{S}}\newcommand{\A}{{\mathcal A}}\newcommand{\H}{\mathcal H}$In the quantum amplitude amplification algorithm, as explained in Brassard et al. 2000 (quant-ph/0005055), the unitary performing the amplification is defined as (using the notation found on page 5 of the above paper): +$$\Q=-\A\S_0\A^{-1}\S_\chi,$$ +where $\A$ is a unitary, +$\chi:\mathbb Z\to\{0,1\}$ is a Boolean function, and $\S_\chi$ and $\S_0$ are unitaries defined as +$$\S_\chi\equiv I-2\Pi_1,\quad \S_0\equiv I-2|0\rangle\langle0|,$$ +where $\Pi_i$ is the projector over the states $|x\rangle$ for which $\chi(x)=i$: +$$\Pi_i\equiv\Pi_{\chi(x)=i}\equiv\sum_{x:\,\chi(x)=i}|x\rangle\langle x|.$$

+ +

Given the state $|\Psi\rangle\equiv\A|0\rangle$, the authors define the states $|\Psi_i\rangle$, for $i=0,1$, as +$$|\Psi_i\rangle\equiv\Pi_i|\Psi\rangle=\Pi_i\A|0\rangle.$$

+ +

The first lemma in the paper, at the end of page 5, states that +\begin{align} +\Q|\Psi_1\rangle&=(1-2a)|\Psi_1\rangle-2a|\Psi_0\rangle, \\ +\Q|\Psi_0\rangle&=2(1-a)|\Psi_1\rangle+(1-2a)|\Psi_0\rangle, +\end{align} +where $a=\langle\Psi_1|\Psi_1\rangle$.

+ +

The action of $\Q$ over $|\Psi_i\rangle$ does not seem obvious. For example, +$$\Q|\Psi_0\rangle=-\A\S_0\A^{-1}\S_\chi|\Psi_0\rangle +=-\A\S_0\A^{-1}|\Psi_0\rangle,$$ +but then $\A^{-1}$ already acts nontrivially on $|\Psi_0\rangle$.

+ +

How is $\Q|\Psi_i\rangle$ computed?

+",55,,55,,7/24/2018 16:41,7/24/2018 16:41,Computing of the action of the amplification operator $\mathbf Q$ over $|\Psi_i\rangle$ in the quantum amplitude amplification algorithm,,1,0,,,,CC BY-SA 4.0 +3814,2,,2724,7/19/2018 22:32,,4,,"
+

Is there anything on software that gives a good introduction to the functions and uses a quantum computer will have?

+
+ +

I gave a talk on quantum computing from a computer science perspective here which has been well-received. I cover the function of basic quantum logic gates, and go over the simplest problem where a quantum computer outperforms classical methods (the Deutsch Oracle problem). High school students have reportedly found it understandable.

+ +
+

As well as what language would be best to program one? Also, would there be a way to program a quantum computer through a classical computer?

+
+ +

Microsoft's Quantum Development Kit contains both a quantum language (Q#) and quantum computer simulator to run your quantum programs on your own classical computer.

+",4153,,,,,7/19/2018 22:32,,,,0,,,,CC BY-SA 4.0 +3815,1,3817,,7/19/2018 23:07,,9,695,"

I've created a simple circuit in Q-Kit to understand conditional gates and outputted states on each step: +

+ +
    +
  1. In the beginning there is clear 00 state, which is the input
  2. +
  3. The first qubit is passed through the Hadamard gate, it gets into superposition, 00 and 10 become equally possible
  4. +
  5. The first qubit CNOTs the second one, probability of 00 is unchanged, but 10 and 11 are swapped
  6. +
  7. The first qubit passes Hadamard again, and the probability of 00 is split between 00 and 10, and that of 11 between 01 and 11, as if the first qubit stepped into superposition from a fixed state
  8. +
+ +

Shouldn't the result be equally distributed between 00 and 01? The first qubit passes Hadamard twice, which should put it into superposition and back to the initial 0. The CNOT gate does not affect the control qubit, so its existence shouldn't affect the first qubit at all, but in fact it makes it act as if it weren't in superposition any more. Does usage of a qubit as a controller collapse its superposition?

+",3068,,26,,12/23/2018 13:22,12/23/2018 13:22,Does conditional gate collapse controller's superposition?,,3,0,,,,CC BY-SA 4.0 +3816,2,,1679,7/20/2018 0:15,,4,,"

In addition to the security of the digital signatures used in cryptocurrencies, which, as mentioned, is susceptible to an attack with a quantum computer capable of executing Shor's algorithm, cryptocurrencies use other cryptographic primitives in the ""proof-of-work."" Or Sattath describes a weakness of Bitcoin's currently implemented proof-of-work. Sattath proposes an easily-implementable countermeasure for this security flaw, but the current implementation of Bitcoin has Sattath's weakness.

+ +
+ +

In more detail, a cryptocurrency with a blockchain employing Nakamoto-style consensus requires miners who perform a proof-of-work, in order to determine the consensus ledger. In Bitcoin, the proof-of-work entails finding a partial preimage of a particular hash function - that is, at height $n$, miner $i$ generates her Merkle root $R_i$ representing the ledger, and searches for a nonce $c$ such that the cryptographic hash $H(B_{n-1}\Vert c\Vert R_i)=B_n\le d$ for target $d$.
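For concreteness, here is a toy classical version of such a proof-of-work (entirely illustrative, and my own: real Bitcoin mining uses double SHA-256 over a specific block-header layout, which this sketch does not reproduce):

```python
import hashlib

def mine(prev_block: bytes, merkle_root: bytes, difficulty_bits: int) -> int:
    """Brute-force a nonce c with H(B_{n-1} || c || R_i) below the target."""
    target = 2 ** (256 - difficulty_bits)
    c = 0
    while True:
        h = hashlib.sha256(prev_block + c.to_bytes(8, "big") + merkle_root).digest()
        if int.from_bytes(h, "big") < target:
            return c
        c += 1

nonce = mine(b"prev", b"root", difficulty_bits=12)  # ~2^12 expected attempts
h = hashlib.sha256(b"prev" + nonce.to_bytes(8, "big") + b"root").digest()
assert int.from_bytes(h, "big") < 2 ** (256 - 12)
```

Grover's algorithm would find such a nonce in roughly the square root of the classical number of attempts, which is the quadratic speedup discussed below.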

+ +

As has been noted, such a proof-of-work is weakened by a quantum computer capable of executing Grover's algorithm - by running amplitude amplification on all states that hash to less than the target, a quadratic speedup may be achieved, and the nonce $c$ may be found more easily. A naive way to improve security, then, is to reduce the target $d$ polynomially - that is, make the difficulty be quadratically harder.

+ +

Further, a key requirement of such proofs-of-work is that they are progress-free, meaning that after a miner has spent $t$ minutes working on finding a nonce $c$, she would be no closer to finding the winning block than if she had spent $t+1$ minutes. The hope is that the race goes not to the fastest, but to the ones with the most hash power. This leads to a lack of correlation between the times at which separate miners find a block.

+ +

However, Grover's algorithm is famously not progress-free. That is, each iteration of Grover's algorithm quadratically improves a miners' chance of finding the block. Or Sattath noted that this will likely lead to miners stopping their work immediately upon receiving a mined block, and hopefully winning a fork.

+ +

Sattath states:

+ +
+

Suppose Alice devoted $2$ minutes of applying Grover’s algorithm, and now receives a new block, mined by Bob. She could discard her computation, and start mining on top of Bob’s block, but that amounts to wasting $2$ minutes of computational resources. Instead, she could immediately stop Grover’s algorithm, and measure her quantum state. If she is lucky and her block is valid, and she also propagates her block to most other miners before Bob does, these other miners will mine on top of her block, and she, rather than Bob, will get the block reward.

+
+ +

Sattath supposes that if enough miners are Grover-capable, then all miners will be motivated to measure for their block whenever someone announces a nonce. This leads to forks that destroy the security of the blockchain.

+",2927,,2927,,7/20/2018 2:01,7/20/2018 2:01,,,,0,,,,CC BY-SA 4.0 +3817,2,,3815,7/20/2018 0:17,,5,,"

$$ +\begin{eqnarray*} +\mid 0 0 \rangle &\to& \frac{1}{\sqrt{2}} \mid 0 0 \rangle + \frac{1}{\sqrt{2}} \mid 1 0 \rangle\\ +&\to& \frac{1}{\sqrt{2}} \mid 0 0 \rangle + \frac{1}{\sqrt{2}} \mid 1 1 \rangle\\ +&\to& \frac{1}{\sqrt{4}} \mid 0 0 \rangle - \frac{1}{\sqrt{4}} \mid 1 1 \rangle + \frac{1}{\sqrt{4}} \mid 1 0 \rangle + \frac{1}{\sqrt{4}} \mid 0 1 \rangle +\end{eqnarray*} +$$

+ +

If the second line was $(\frac{1}{\sqrt{2}} \mid 0 \rangle + \frac{1}{\sqrt{2}} \mid 1 \rangle) \otimes v$, then applying the $H$ again would take it to $\mid 0 \rangle \otimes v$, but it is not. They are entangled.

+ +

It seems like you're thinking the first qubit is unaffected by the CNOT, so the last two should commute.

+ +

$$ +\begin{eqnarray*} +H_1 CNOT_{12} &=& \frac{1}{\sqrt{2}} \begin{pmatrix} +1 & 0 & 0 & 1\\ +0 & 1 & 1 & 0\\ +1 & 0 & 0 & -1\\ +0 & 1 & -1 & 0 +\end{pmatrix}\\ +CNOT_{12} H_1 &=& \frac{1}{\sqrt{2}} \begin{pmatrix} +1 & 0 & 1 & 0\\ +0 & 1 & 0 & 1\\ +0 & 1 & 0 & -1\\ +1 & 0 & -1 & 0 +\end{pmatrix}\\ +\end{eqnarray*} +$$
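These two products can be checked numerically (a quick sketch of mine, with qubit 1 as the first tensor factor):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])
H1 = np.kron(H, np.eye(2))  # H on the first qubit only

assert not np.allclose(H1 @ CNOT, CNOT @ H1)  # the two gates do not commute

# the full circuit H1·CNOT·H1 on |00> puts equal weight on all four outcomes
state = H1 @ CNOT @ H1 @ np.array([1, 0, 0, 0])
assert np.allclose(np.abs(state) ** 2, [0.25, 0.25, 0.25, 0.25])
```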

+ +

It is in a superposition, the entire time. There was no collapse. It's a nonobvious noncommutation. If you had $Id \otimes U$, that would be something literally not affecting the first qubit and it would commute with $H_1$. But CNOT is not of that form.

+ +

You can think of it this way: at the beginning you have 2 qubits. After applying the first $H$ you still have 2 qubits. Then after the CNOT, they are entangled, so you have 1 qudit with $d=4$ because they have been combined. Then the last $H$ leaves it with $d=4$. At each gate, you account for the worst-case scenario of the entanglement structure.

+",434,,,,,7/20/2018 0:17,,,,0,,,,CC BY-SA 4.0 +3818,2,,3815,7/20/2018 0:18,,3,,"

No, using a controlled gate doesn't measure the control.

+ +

In a sense, the idea that controlled gates would be implemented via measurement is exactly backwards. It's measurement that is implemented in terms of controlled gates, not vice versa. A measurement is just an interaction (i.e a controlled gate) between the computer and the environment that's intractable to undo.

+ +

As a simpler analogy, consider the Z gate. The Z gate applies a -1 phase factor to the $|1\rangle$ state of a qubit. It sends $a|0\rangle + b|1\rangle$ to $a|0\rangle - b |1\rangle$. One could describe this effect in a conditional way: if the qubit is in the $|1\rangle$ state, then the Z gate phases the qubit by -1. But the ""if"" in that description does not mean that we had to measure the qubit and then decide whether or not to apply the -1 phase factor, it's just a slightly-misleading description.

+ +

The same idea applies to the CNOT. Yes, you can describe it in an if-then way. But you can also describe it as ""apply a -1 phase factor to the $|1\rangle \otimes |-\rangle$ state"". And the latter description makes it clear that measurement is not necessary.
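This latter description is easy to verify numerically (a small sketch, separate from the answer's argument):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

minus = H @ np.array([0, 1])    # |-> = (|0> - |1>)/sqrt(2)
state = np.kron([0, 1], minus)  # |1> ⊗ |->

assert np.allclose(CNOT @ state, -state)  # CNOT just phases |1>|-> by -1
```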

+",119,,,,,7/20/2018 0:18,,,,0,,,,CC BY-SA 4.0 +3819,2,,3813,7/20/2018 6:20,,3,,"

The trick here is to not calculate $\mathcal{A}^{-1}|\Psi\rangle$ at all, because it's insufficiently defined! Instead, look at +$$ +\mathcal{A}(\mathbb{I}-2|0\rangle\langle 0|)\mathcal{A}^{-1}=\mathbb{I}-2\mathcal{A}|0\rangle\langle 0|\mathcal{A}^{-1} +$$ +by the fact that $\mathcal{A}$ is unitary. Now, by definition, +$$ +\mathcal{A}|0\rangle=|\Psi_0\rangle+|\Psi_1\rangle +$$ +Thus, we have +$$ +\mathcal{A}(\mathbb{I}-2|0\rangle\langle 0|)\mathcal{A}^{-1}=\mathbb{I}-2(|\Psi_0\rangle+|\Psi_1\rangle)(\langle\Psi_0|+\langle\Psi_1|). +$$ +Now you can calculate the effect of this on any input state. Just remember that the states $|\Psi_0\rangle$ and $|\Psi_1\rangle$ are not normalised.

+ +

Hence, +\begin{align*} +Q|\Psi_0\rangle&=-\left(\mathbb{I}-2(|\Psi_0\rangle+|\Psi_1\rangle)(\langle\Psi_0|+\langle\Psi_1|)\right)|\Psi_0\rangle \\ +&=-\left(|\Psi_0\rangle-2(|\Psi_0\rangle+|\Psi_1\rangle)(1-a)\right) \\ +&=2(1-a)|\Psi_1\rangle+(1-2a)|\Psi_0\rangle +\end{align*}
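The lemma can also be verified numerically on a minimal example (a sketch of my own: a single qubit with $\chi(0)=0$, $\chi(1)=1$, and $\mathcal A$ an arbitrary rotation):

```python
import numpy as np

theta = 0.7  # arbitrary choice of A
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

Pi1 = np.diag([0.0, 1.0])                  # projector onto the "good" state |1>
S_chi = np.eye(2) - 2 * Pi1                # I - 2|1><1|
S_0 = np.eye(2) - 2 * np.diag([1.0, 0.0])  # I - 2|0><0|
Q = -A @ S_0 @ A.conj().T @ S_chi

Psi = A @ np.array([1.0, 0.0])
Psi1 = Pi1 @ Psi
Psi0 = Psi - Psi1
a = Psi1 @ Psi1                            # a = <Psi1|Psi1>

assert np.allclose(Q @ Psi1, (1 - 2 * a) * Psi1 - 2 * a * Psi0)
assert np.allclose(Q @ Psi0, 2 * (1 - a) * Psi1 + (1 - 2 * a) * Psi0)
```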

+",1837,,,,,7/20/2018 6:20,,,,0,,,,CC BY-SA 4.0 +3820,2,,3815,7/20/2018 8:47,,0,,"
+
    +
  1. In the beginning there is clear 00 state, which is the input
  2. +
  3. The first qubit is passed through the Hadamard gate, it gets into superposition, 00 and 10 become equally possible
  4. +
+
+ +

Correct.

+ +
+
    +
  1. The first qubit CNOTs the second one, probability of 00 is unchanged, but 10 and 11 are swapped
  2. +
+
+ +

To be precise, 10 becomes 11.

+ +
+
    +
  1. The first qubit passes Hadamard again and probability of 01 is splited between 01 and 11, and 11 between 01 and 11 as if first qubit stepped into superposition from a fixed state
  2. +
+
+ +

Incorrect. There is no 01 here, only 00 and 11, and after applying Hadamard to the first qubit you have superposition of 4 states: 00, 10, 11 and 01, +$$\frac{1}{\sqrt{2}}(|00\rangle+|11\rangle)\rightarrow\frac{1}{2}(|00\rangle+|10\rangle+|01\rangle-|11\rangle)$$

+",2105,,2105,,7/20/2018 9:08,7/20/2018 9:08,,,,0,,,,CC BY-SA 4.0 +3821,1,3829,,7/20/2018 9:52,,6,310,"

I am looking for some good software to simulate quantum computing, visually if possible.

+ +

I know about quirk (http://algassert.com/quirk)

+ +

and IBM Q Experience (https://quantumexperience.ng.bluemix.net)

+ +

I just saw this question ( Does conditional gate collapse controller's superposition? ) and the asker uses something that looks really neat : https://i.stack.imgur.com/dIari.png

+ +

Does someone know what this is? +And does someone know of other good software like these?

+ +

Thank you!!

+",4160,,4160,,7/20/2018 12:22,7/24/2018 6:37,What are some good quantum computing simulator and visualiser?,,2,1,,7/24/2018 13:56,,CC BY-SA 4.0 +3823,1,3824,,7/20/2018 12:05,,8,1080,"

Following on from this question, I tried to look at the cited article in order to simulate and solve that same problem... without success. Mainly, I still fail to understand how the authors managed to simulate the Hamiltonian evolution through the circuit shown at the bottom of Fig. 4. Even exponentiating the matrix classically, I do not get the values of the gates shown in the Quirk circuit that @Blue linked along with his question.

+ +

I tried to look at the paper in which the Group Leader Optimization algorithm is explained, but I still have trouble understanding how they assign the rotation angles to the different gates.

+",2648,,23,,10/22/2018 21:13,10/22/2018 21:13,Practical implementation of Hamiltonian Evolution,,1,4,,,,CC BY-SA 4.0 +3824,2,,3823,7/20/2018 13:19,,6,,"

I don't know why/how the authors of that paper do what they do. However, here's how I'd go about it for this special case (and it is a very special case):

+ +

You can write the Hamiltonian as a Pauli decomposition +$$ +A=15\mathbb{I}\otimes\mathbb{I}+9Z\otimes X+5X\otimes Z-3Y\otimes Y. +$$ +Update: It should be $+3Y\otimes Y$. But I don't want to redraw all my diagrams etc., so I'll leave the negative sign.

+ +

Now, it is interesting to note that every one of these terms commutes. So, that means that +$$ +e^{iA\theta}=e^{15i\theta}e^{9i\theta Z\otimes X}e^{5i\theta X\otimes Z}e^{-3i\theta Y\otimes Y}. +$$ +You could work out how to simulate each of these steps individually, but let me make one further observation first: these commuting terms are the stabilizers of the 2-qubit cluster state. That may or may not mean anything to you, but it tells me that a smart thing to do is apply a controlled-phase gate. +$$ +CP\cdot A\cdot CP=15\mathbb{I}\otimes\mathbb{I}+9\mathbb{I}\otimes X+5X\otimes \mathbb{I}-3X\otimes X. +$$ +(You may want to check the sign of the last term. I didn't compute it carefully.) +So, if we start and end our sequence with controlled-phase gates, then 2 of the terms are easy to get right: we rotate the first qubit about the $x$ axis by an angle $5\theta$, and the second qubit about the $x$ axis by an angle $9\theta$.
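The commutation claim, and the resulting factorisation of the exponential, can be checked numerically (a sketch; `expm_h` is my own eigendecomposition-based helper, and I keep the $-3Y\otimes Y$ sign used in the text):

```python
import numpy as np
from functools import reduce

def expm_h(Herm, t):
    """exp(i·t·Herm) for a Hermitian matrix, via its eigendecomposition."""
    vals, vecs = np.linalg.eigh(Herm)
    return (vecs * np.exp(1j * t * vals)) @ vecs.conj().T

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0])

terms = [15 * np.kron(I2, I2), 9 * np.kron(Z, X),
         5 * np.kron(X, Z), -3 * np.kron(Y, Y)]

# every pair of terms commutes...
for s in terms:
    for u in terms:
        assert np.allclose(s @ u, u @ s)

# ...so exp(iAθ) splits into the product of the four individual exponentials
A = sum(terms)
t = 0.3
assert np.allclose(expm_h(A, t), reduce(np.matmul, [expm_h(s, t) for s in terms]))
```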

+ +

The only thing we are left to get right is the $X\otimes X$ rotation. If you think about the structure of $e^{-3i\theta X\otimes X}$, this is like an $x$ rotation on the basis states $\{|00\rangle,|11\rangle\}$, and another one on $\{|01\rangle,|10\rangle\}$. A controlled-not converts these bases into the single-qubit bases controlled off the target qubit. But since both implement the same controlled-rotation but controlled off opposite values, we can just remove the control. Thus, the overall circuit is: + +This simplifies slightly by combining the two controlled-gates at the end: + +Note that I have not included the global phase term here because that's not the sensible way to do it. When you make controlled-($e^{iA\theta}$), you apply the ""global phase"" as a $z$ rotation on the control qubit.

+",1837,,1837,,10/19/2018 13:16,10/19/2018 13:16,,,,4,,,,CC BY-SA 4.0 +3826,2,,3800,7/20/2018 16:27,,3,,"

The current version of PyQuil provides an ""ISA"" object that houses the information that you want about Rigetti's quantun processors, but it isn't formatted as you request. I'm a poor Python programmer, so you'll have to excuse my non-Pythonic-ness—but here's a snippet that will take a device_name and reformat the pyQuil ISA into one of your dictionaries:

+ +
import pyquil.api as p
+
+device_name = '19Q-Acorn'
+
+isa = p.get_devices(as_dict=True)[device_name].isa
+d = {}
+for qubit in isa.qubits:
+    l = []
+    for edge in isa.edges:
+        if qubit.id is edge.targets[0]:
+            l += [edge.targets[1]]
+        if qubit.id is edge.targets[1]:
+            l += [edge.targets[0]]
+    if not qubit.dead:
+        d[qubit.id] = l
+
+print(d)
+
+ +

As in Google's case, the native two-qubit gate typically available on a Rigetti quantum processor is a CZ, which (1) is bidirectional(†) in the sense that CZ q0 q1 is the same as CZ q1 q0 and (2) is easily converted into either of your preferred CNOTs by sandwiching the target with Hadamard gates.

+ +

† - The physical implementation of a CZ gate in a superconducting architecture is handed, which is why you often see architectural descriptions include CZ q0 q1 but not CZ q1 q0. It's a shorthand for which qubit is participating in which half of the physical interaction, even if the result (ignoring noise effects) is the same with either ordering.

+",1796,,,,,7/20/2018 16:27,,,,0,,,,CC BY-SA 4.0 +3828,1,,,7/20/2018 18:28,,7,103,"

By ""faulty"", I mean that you can have errors on the ancilla qubits, you can have faulty syndrome extraction, etc.

+",108,,26,,7/20/2018 18:35,7/23/2018 15:11,What are reliable references on analytical and/or numerical studies of threshold theorems under faulty quantum error correction?,,2,2,,,,CC BY-SA 4.0 +3829,2,,3821,7/20/2018 19:20,,3,,"

I will tell you what my software Qubiter (GitHub page) does. Others like IBM (QISKit GitHub page, website and documentation), Google (Cirq GitHub page and documentation), Rigetti (PyQuil GitHub page and documentation) and Microsoft (Q# GitHub page and documentation) can describe what their own software does to help visualize the circuit.

+ +

Qubiter automatically creates 2 files for the quantum circuit, a Qubiter qasm file and an ASCII picture file. This makes debugging easier (Qubiter can also draw fancy LaTex picture of circuit but that is slower so only optional) The ascii file and qasm file correspond line by line, so line 5 in each gives 2 representations, ascii and qasm, of the same gate. Note that it is common to draw pictures of quantum circuits with time pointing from left to right, but Qubiter draws them with time pointing downwards.

+ +

For example, for Teleportation, this is gif of the Qubiter qasm file:

+ +

+ +

and this is a gif of the Qubiter ascii file:

+ +

+ +

The PRINT ALL statements print to screen the state vector of the qc at the time at which they appear.

+",1974,,1386,,7/24/2018 6:37,7/24/2018 6:37,,,,0,,,,CC BY-SA 4.0 +3830,1,3840,,7/21/2018 15:57,,4,152,"

Reading about topological quantum error correction, I’ve found some information, but the link with the topological concepts is still not clear to me (trivial and non-trivial paths, and how they are related to errors). +Could I have an explanation, or some link where the conceptual meaning is explained?

+ +

Thanks.

+",2886,,2645,,7/23/2018 10:26,7/23/2018 10:26,topological error correction concepts,,1,0,,,,CC BY-SA 4.0 +3832,1,3850,,7/21/2018 19:20,,7,498,"

I found the following proof of BQP belonging to PP (the original document is here). There is a part of the proof that I have trouble understanding. First, the structure is given below.

+ +
+

We try to simulate a polynomial-time generated quantum circuit + (which encodes our problem) using a PP machine. We start with a + universal set of gates.

+ +

We have a Hadamard gate and a Toffoli gate. We also include a $i$-phase shift gate ($|0 \rangle$ + $\rightarrow |0 \rangle, |1 \rangle \rightarrow i|1 \rangle$). However, + we simulate the action of this gate by considering an additional qubit + and performing the following transformations:

+ +

$$|00 \rangle → |00 \rangle $$ $$|01 \rangle → |01 \rangle $$ + $$|10\rangle→ |11 \rangle $$ $$|11 \rangle → −|10 \rangle $$

+ +

The first qubit is viewed as the original qubit and the additional + qubit acts as the ""real/imaginary part"".

+ +

There is a register $B \leftarrow 0$ to store the phase. A register $Z$ + $\leftarrow 0^n$ stores the initial state of the machine. We simulate + the circuit as follows:

+ +

$\bullet$ If the gate is a Toffoli gate, simply modify $Z$ + accordingly.

+ +

$\bullet$ If the gate is a Hadamard gate, flip a coin to determine the + new state of the corresponding bit of $Z$. If the induced + transformation was $1 \rightarrow 1$, toggle the sign bit.

+ +

$\bullet$ If the gate is the two-qubit gate we used to replace the + $i$-phase shift gate, modify $Z$ appropriately and toggle the sign bit + if the transformation induced was $11 \rightarrow 10$.

+ +

The steps above are repeated a second time, using variables $B'$ and + $Z'$ in place of $B$ and $Z$. Finally, the following operations are + performed:

+ +

$\bullet$ If $Z = Z'$ and the first bit of $Z$ and $Z'$ is a $0$, then + output $B ⊕ B'$.

+ +

$\bullet$ If $Z = Z'$ and the first bit of $Z$ and $Z'$ is a $1$, then + output $\bar B ⊕ B'$.

+ +

$\bullet$ If $Z \neq Z'$, then “give up”. This means: flip a fair coin + and output $0$ or $1$ accordingly.

+
+ +

The claim is that the algorithm above simulates a BQP circuit on a PP machine faithfully as the probability the algorithm outputs $1$ minus the probability that the algorithm outputs $0$ is proportional to $\langle 1 | Q_x |1 \rangle - \langle 0 | Q_x |0 \rangle$ (where $Q_x$ is the quantum circuit for a given encoding $x$ from a polynomial-time generated uniform family of quantum circuits). I do not understand why the terms are proportional.

+",1351,,26,,03-04-2019 10:15,03-04-2019 10:15,Query regarding BQP belonging to PP,,1,0,,,,CC BY-SA 4.0 +3833,2,,2724,7/22/2018 0:08,,4,,"

I have not read it yet; however, I can't imagine there is a better introduction than Quantum Computing for Babies

+ +

+",2645,,,,,7/22/2018 0:08,,,,0,,,,CC BY-SA 4.0 +3834,2,,3828,7/22/2018 5:45,,2,,"

This is the paper I found really convincing once I’d worked through it: +https://arxiv.org/abs/quant-ph/0504218 +This was really the start of rigorous threshold proofs, and is the starting point for a number of subsequent improvements.

+",1837,,,,,7/22/2018 5:45,,,,0,,,,CC BY-SA 4.0 +3835,1,3839,,7/22/2018 9:24,,6,1068,"

Qubits and qumodes are different forms of quantum computation, but most existing quantum computers/chips seem to be based on discrete variables. I heard that a group chose qubits for a quantum optical frequency comb experiment related to quantum computing because the environmental noise in the continuous-variable case is much higher. Is this a reason for preferring qubits to qumodes at present (if people do prefer them)?

+",4178,,2293,,7/22/2018 21:33,7/23/2018 10:29,"Are qubits preferred over qumode, and if so, why?",,1,0,,,,CC BY-SA 4.0 +3836,2,,2505,7/22/2018 16:27,,0,,"

The other answers were (almost?*) correct, and pointed me in the right direction for computing any probability for a measurement (especially the notion that I was measuring in the wrong basis), but I missed the definition.

+ +

Assuming you are in state $|\psi\rangle$, and you want to know the probability of measuring outcome $|\phi\rangle$:
+$P=|\langle\psi|\phi\rangle|^2$ (https://ocw.tudelft.nl/course-lectures/0-3-1-measuring-qubits-standard-basis/ @1:48)
+where $\langle\psi|$ is the complex conjugate transpose of the $|\psi\rangle$

+ +

More concretely, if we are in $T |+\rangle$ ($\equiv{1\over\sqrt 2} \begin{bmatrix}1\\e^{i\pi/4}\end{bmatrix}$), and want to know the probability of measuring $|+\rangle$ ($\equiv{1\over\sqrt 2}\begin{bmatrix} 1 \\ 1 \end{bmatrix})$, we calculate:
+$P=|{1\over\sqrt 2} \begin{bmatrix}1 & e^{-i\pi/4}\end{bmatrix}{1\over\sqrt 2}\begin{bmatrix} 1 \\ 1 \end{bmatrix}|^2$
+$P={1\over4}|1+e^{-i\pi/4}|^2 \approx 0.853553$ (or use Euler's formula for the exact outcome)
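To double-check this arithmetic, here is a short numpy sketch of my own (not part of the original answer); `np.vdot` conjugates its first argument, which takes care of the complex-conjugate transpose:

```python
import numpy as np

psi = np.array([1, np.exp(1j * np.pi / 4)]) / np.sqrt(2)  # T|+>
phi = np.array([1, 1]) / np.sqrt(2)                        # |+>

# np.vdot conjugates its first argument, so this computes <psi|phi>
P = abs(np.vdot(psi, phi)) ** 2
print(round(P, 6))  # 0.853553
```

The exact value is $(2+\sqrt{2})/4$, matching the approximation above.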

+ +

P.S. I hope I took the right complex conjugate.
+* I also have $\psi$ and $\phi$ the other way around compared to DaftWullie's answer.

+",2794,,,,,7/22/2018 16:27,,,,2,,,,CC BY-SA 4.0 +3837,1,3845,,7/22/2018 17:20,,7,455,"

A lot of people claim that quantum computing provides exponential speedup whereas classical computers scale linearly. I have seen examples (such as Shor's algorithm and Simon's) that I believe, but the layman's explanation appears to boil down to ""quantum registers with n qubits are able to hold $2^n$ values."" To me, this sounds a lot like having a SIMD (Single Instruction Multiple Data) CPU where I can load two times $2^n$ variables, and
+a) get the correct outcomes, and only these outcomes
+b) trace back which answer is to which questions

+ +

When trying to do this in quantum computing, I think this is definitely not the case. Let me try to evaluate this with an example:
+Say I have two 2-qubit registers, and I want to add two sets of values (2+1 and 1+2):
+$a|10\rangle + b|01\rangle$
+$a|01\rangle + b|10\rangle$
+where a denotes values in register a, and b denotes values in register b (i.e., a and b are not scalars)

+ +

If we now look at the qubits, we see that each is once 0 and once 1. This implies a superposition of all qubits in our input. If we were now to do an addition on the two registers that both hold 2 qubits in superposition, and repeat this experiment enough times to build a probability distribution, I believe we would end up with a probability distribution over all outcomes as follows:
+$P(0) = 1/16 $
+$P(1) = 2/16 $
+$P(2) = 3/16 $
+$P(3) = 4/16 $
+$P(4) = 3/16 $
+$P(5) = 2/16 $
+$P(6) = 1/16 $
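For what it's worth, this distribution is just the convolution of two uniform distributions over $\{0,1,2,3\}$; a quick numpy check of my own confirms the numbers above:

```python
import numpy as np

p_reg = np.full(4, 0.25)            # a measured uniform superposition gives uniform probabilities over 0..3
p_sum = np.convolve(p_reg, p_reg)   # distribution of the sum of the two measured register values
print(p_sum * 16)                   # [1. 2. 3. 4. 3. 2. 1.]
```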

+ +

If we look at what we wanted to calculate, 2+1 and 1+2, we see that both our answers (two times the answer 3) are indeed present in the set of outcomes. However,
+a) there are a lot of other answers
+b) we cannot trace which answer corresponds to 2+1, and which to 1+2

+ +

So my questions:
+a) Is it correct that for the addition of two sets of randomly chosen variables, we are not guaranteed to see exponential scaling (unless we want to add all values 0 to $2^n$ with themselves)?
+b) Is it correct that, when doing simple classical addition, we lose track of the mapping from input to output?

+ +

And as a bonus, does the following hold:
+When performing the computation as above, the usability would be the same (or worse, as the mapping from input to output is lost) as a lookup table with the same number of input values, as when we have two registers in superposition, we will always receive the same output distribution

+",2794,,26,,7/22/2018 20:27,7/23/2018 14:42,"Does ""quantum registers with $n$ qubits are able to hold $2^n$ values and therefore scale exponentially"" actually hold that straightforwardly?",,2,3,,,,CC BY-SA 4.0 +3838,2,,3837,7/23/2018 2:54,,1,,"

I'm going to answer in two parts:

+ +

Regarding your example with addition, a quantum adder has been discussed in the question How do I add 1+1 using a quantum computer. They're a bit involved and I'm not sure if I exactly follow your line of thinking for your questions following your example, but hopefully that link explains a bit about how a quantum computer would handle addition.

+ +

With regards to your more general question about quantum speedups, the important thing isn't that quantum computers can hold up to $2^n$ values in $n$ qubits, as that is also true in a classical computer. The important thing is that they can hold all these $2^n$ values at once. A lot of quantum algorithms start by taking a register of $n$ qubits all in state $|0\rangle$ and applying a Hadamard transform to all bits. This leads to the register being in the state

+ +

$$|\psi\rangle = \frac{1}{\sqrt{2^n}}\displaystyle\sum_{i=0}^{2^n-1}|i\rangle$$

+ +

Which can be fed into an algorithm and have it work on each piece simultaneously. If you look at this explanation of Grover's Algorithm you can see this at work. By plugging in this uniformly distributed $|\psi\rangle$ into a circuit which picks out a specific element, one can isolate that element without having to cycle through each of the possible $2^n$ values for $|i\rangle$ individually. Although Grover's algorithm isn't an exponential speedup, the same sort of structure holds for more complex algorithms as well.

+ +

Hopefully that helps!

+",3056,,3056,,7/23/2018 3:03,7/23/2018 3:03,,,,0,,,,CC BY-SA 4.0 +3839,2,,3835,7/23/2018 3:14,,4,,"

Both models have their potential advantages and disadvantages. The CV model doesn't require energy-intensive cooling systems, and CV will also work better for continuous-valued problems. Nevertheless, since the model uses photons, it brings various challenges to the table as well. Since both models (especially CV) aren't fully developed, your question may not be the right one to ask right now.

+ +

If you're interested, I know that Xanadu is developing the CV model. They have some papers on CV algorithms and QML out right now.

+",4181,,2293,,7/23/2018 10:29,7/23/2018 10:29,,,,1,,,,CC BY-SA 4.0 +3840,2,,3830,7/23/2018 3:21,,4,,"

This explanation of Surface Codes has a lot of detail and starts from the basics. It should hopefully help out, as well as looking at some of the initial Toric Code papers by Alexei Kitaev.

+ +

My understanding of the term (hopefully someone more knowledgeable can chime in if I'm incorrect!) is that it's because the errors needed to create a logical error have to be non-local. I.e., if you take the Toric code, which exists on a torus, a logical operation is a loop of errors around the torus, which cannot be smoothly modified into a local error since the topology of the space prevents it.

+",3056,,409,,7/23/2018 9:41,7/23/2018 9:41,,,,0,,,,CC BY-SA 4.0 +3841,1,3842,,7/23/2018 4:01,,7,436,"

I'm just learning about quantum computers, but some resources have been made available for people to research & practice, so I'd like to study it myself. The only kinds of quantum computing I found so far are the IBM cloud service and the Q# quantum simulator, but the sources and examples are limited; I only found a bunch of emoji-display demos & a card-guessing mini-game in quantum programming. Are there any main sources like GitHub but for quantum computing programming?

+",4182,,26,,05-10-2019 17:57,8/26/2021 21:30,Is there something like GitHub for quantum programming?,,3,1,,,,CC BY-SA 4.0 +3842,2,,3841,7/23/2018 4:09,,3,,"

Are you looking for algorithms to look through, or programs for an actual quantum computer?

+ +

If the former the IBM Q Experience user guide has good explanations of some of them, and other questions you can find on this Stack Exchange can get you to more algorithms.

+ +

If you are looking for programs to be run on a quantum computer like IBM's cloud offerings, I'm not sure if there is a github specific to is, but looking into the Qiskit github would be a good place to start!

+",3056,,,,,7/23/2018 4:09,,,,2,,,,CC BY-SA 4.0 +3843,2,,3841,7/23/2018 5:46,,6,,"

For Q#, the largest GitHub repositories of algorithms written in Q# are the official libraries and samples repos.

+ +

If you want to start studying Q# by writing small quantum computing programs, there is Quantum Katas repo, it has less code and the code is simpler but it aims specifically to teach the basics.

+",2879,,2879,,05-10-2019 07:20,05-10-2019 07:20,,,,2,,,,CC BY-SA 4.0 +3844,1,,,7/23/2018 7:13,,4,147,"

Say I have a quantum register containing 1 qubit. A qubit can hold either 0, 1, or both 0 and 1. In Dirac notation, one would write
+0: $|0\rangle=\begin{bmatrix}1\\0\end{bmatrix}$

+1: $|1\rangle=\begin{bmatrix}0\\1\end{bmatrix}$

+0 and 1: $|+\rangle={1\over\sqrt 2}\begin{bmatrix}1\\1\end{bmatrix}$

+ +

If we were to extend our register to 2 qubits:
+0: $|00\rangle = |0\rangle \otimes |0\rangle = \begin{bmatrix}1\\0\end{bmatrix} \otimes \begin{bmatrix}1\\0\end{bmatrix} = \begin{bmatrix}1\\0\\0\\0\end{bmatrix}$

+1: $|01\rangle = |0\rangle \otimes |1\rangle = \begin{bmatrix}1\\0\end{bmatrix} \otimes \begin{bmatrix}0\\1\end{bmatrix} = \begin{bmatrix}0\\1\\0\\0\end{bmatrix}$

+0 and 1: $|0+\rangle = |0\rangle \otimes |+\rangle = \begin{bmatrix}1\\0\end{bmatrix} \otimes {1\over\sqrt 2}\begin{bmatrix}1\\1\end{bmatrix} = {1\over\sqrt 2}\begin{bmatrix}1\\1\\0\\0\end{bmatrix}$

+2: $|10\rangle = |1\rangle \otimes |0\rangle = \begin{bmatrix}0\\1\end{bmatrix} \otimes \begin{bmatrix}1\\0\end{bmatrix} = \begin{bmatrix}0\\0\\1\\0\end{bmatrix}$

+3: $|11\rangle = |1\rangle \otimes |1\rangle = \begin{bmatrix}0\\1\end{bmatrix} \otimes \begin{bmatrix}0\\1\end{bmatrix} = \begin{bmatrix}0\\0\\0\\1\end{bmatrix}$

+ +

Now if I would want to load the decimal values 1 and 2 in the quantum register, I have two ways of reasoning, of which one appears to be flawed:
+a)
+This is in line with the reasoning above for 1 qubit, section ""0 and 1"":
+I want to add values $|01\rangle$ and $|10\rangle$. My first qubit is a 0 for the decimal value 1, and my first qubit is a 1 for the decimal value 2. This means my first qubit is both 0 and 1, and therefore $|+\rangle$. My second qubit is a 1 for decimal value 1, and a zero for decimal value 2, which also implies $|+\rangle$. Hence, I need to load
+$|++\rangle = +|+\rangle \otimes |+\rangle = +{1\over\sqrt 2}\begin{bmatrix}1\\1\end{bmatrix} \otimes {1\over\sqrt 2}\begin{bmatrix}1\\1\end{bmatrix} = +{1\over\sqrt 4}\begin{bmatrix}1\\1\\1\\1\end{bmatrix}$
+Which provides me with 1 and 2, but also 0 and 3.

+ +

b)
+$c ( |01\rangle + |10\rangle) = c( \begin{bmatrix}0\\1\\0\\0\end{bmatrix} + \begin{bmatrix}0\\0\\1\\0\end{bmatrix}) = c \begin{bmatrix}0\\1\\1\\0\end{bmatrix}$, where $c$ is likely ${1\over\sqrt 2}$
+which provides me with 1 and 2 and only 1 and 2; this works out mathematically, but defies my current intuition.
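As a quick numerical check of my own (a hedged sketch, not part of the original reasoning), the two constructions really do differ: the tensor product of (a) puts amplitude on all four basis states, while the direct superposition of (b) only involves $|01\rangle$ and $|10\rangle$:

```python
import numpy as np

plus = np.array([1, 1]) / np.sqrt(2)

# reasoning (a): |+> tensor |+>
state_a = np.kron(plus, plus)

# reasoning (b): (|01> + |10>) / sqrt(2)
state_b = (np.array([0, 1, 0, 0]) + np.array([0, 0, 1, 0])) / np.sqrt(2)

print(np.round(state_a, 3))  # non-zero amplitudes on all four basis states
print(np.round(state_b, 3))  # non-zero amplitudes only on |01> and |10>
```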

+ +

My questions: even though I see from the math that (b) holds and (a) fails, can someone explain why (a) fails? Specifically, why is it that both individual qubits appear to be both 0 and 1, hence in superposition, yet the register as a whole can encode that only the decimal values 1 and 2 are present, and not 0, 1, 2 and 3 (which you would expect from 2 qubits in superposition)?

+",2794,,,,,7/23/2018 9:56,"Can a qubit register hold any subset of values, or only specific subsets?",,1,1,,,,CC BY-SA 4.0 +3845,2,,3837,7/23/2018 7:40,,5,,"
+

Does “quantum registers with n + qubits are able to hold $2^n$ + values and therefore scale exponentially” actually hold that straightforwardly?

+
+ +

No, it doesn't. That's the popularised explanation of where quantum computers get their speed-up, but it's far more nuanced than that.

+ +

To illustrate this, imagine you have a function $f:x\in\{0,1\}^n\mapsto y\in\{0,1\}^m$. Sure, in quantum you can produce the state +$$ +|\Psi\rangle=\frac{1}{\sqrt{2^n}}\sum_{x\in\{0,1\}^n}|x\rangle|f(x)\rangle +$$ +so, in some sense, the function $f$ has been evaluated at every $x$. But what answers can you get? What you certainly cannot read out is every different value of $f(x)$: when you measure (assuming projective measurements) you get a maximum of 1 bit of information for every qubit measured, i.e. at most $n+m$ bits in total. But there are $2^n$ inputs, each with $m$ bits to learn, so you would need to be able to determine $m2^n$ bits. This is not really any better than doing things classically (in a sense, it's a bit worse because a single measurement will pick an $x$ at random, but classically you'd normally only get the $m$ bits of information from the final outcome).

+ +

The real power of quantum computation comes from doing something clever with that state $|\Psi\rangle$. This usually relies on some special properties that you know about $f(x)$, such as ""it has a global property which is parametrised by $k$. Find $k$."". For example, in the Deutsch-Jozsa algorithm, the structure is that either all $f(x)$ are the same, or there's a perfect 50:50 split between answers (Here, $k$ is one bit of information). That sort of comparison between different function evaluations in a classical domain obviously needs lots of different function evaluations, whereas in the quantum domain, sometimes one can perform the right measurement that compares different parts of the superposition and gives you the right information out.
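As an illustration (a minimal sketch of my own, using the phase-oracle formulation of Deutsch-Jozsa rather than an explicit ancilla qubit), this structure can be simulated with plain numpy: prepare the uniform superposition with Hadamards, apply $(-1)^{f(x)}$, apply Hadamards again, and check the probability of measuring all zeros. That probability is 1 for a constant $f$ and 0 for a balanced $f$, after a single evaluation of $f$:

```python
import numpy as np

def hadamard_n(n):
    # n-fold tensor product of the single-qubit Hadamard
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    out = np.array([[1.0]])
    for _ in range(n):
        out = np.kron(out, H)
    return out

def deutsch_jozsa_zero_prob(f_values):
    # phase-oracle version of Deutsch-Jozsa: the oracle maps |x> -> (-1)^f(x) |x>
    n = int(np.log2(len(f_values)))
    state = np.zeros(2 ** n)
    state[0] = 1.0                                 # start in |0...0>
    Hn = hadamard_n(n)
    state = Hn @ state                             # uniform superposition
    state = state * (-1.0) ** np.array(f_values)   # one oracle call
    state = Hn @ state
    return abs(state[0]) ** 2                      # probability of measuring all zeros

print(deutsch_jozsa_zero_prob([0, 0, 0, 0]))  # constant f -> 1 (up to rounding)
print(deutsch_jozsa_zero_prob([0, 1, 0, 1]))  # balanced f -> 0 (up to rounding)
```

The interference in the final Hadamard layer is doing the "comparison between different parts of the superposition" described above.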

+",1837,,1837,,7/23/2018 14:42,7/23/2018 14:42,,,,0,,,,CC BY-SA 4.0 +3846,1,3849,,7/23/2018 9:39,,5,502,"

As far as I understand, qubits in cirq are labelled by their positions on the chip. For example

+ +
print( cirq.google.Foxtail.qubits )
+
+ +

yields

+ +
frozenset({GridQubit(0, 1), GridQubit(1, 9), GridQubit(0, 2), ...
+
+ +

I would like to get a simpler version of the above, namely a simple array of tuples for the positions of all qubits

+ +
[ (0,1), (0,2), (0,3), ..., (1,1), (1,2), (1,3), ... ]
+
+ +

What is the easiest way to obtain this for a given known device in cirq?

+",409,,,,,7/23/2018 12:16,List of qubit locations with cirq,,1,0,,,,CC BY-SA 4.0 +3847,2,,3844,7/23/2018 9:56,,2,,"

What you're discovering here is the phenomenon of entanglement: (pure) states that cannot be constructed as a tensor product of states of smaller systems. Entangled states typically encode some sort of collective property that cannot be accessed by acting on just one qubit (here, measurement outcomes are always anti-correlated).

+ +

Mathematically, a pure single-qubit system can be described by any normalised vector in $\mathbb{C}^2$, i.e. $\left(\begin{array}{c} \alpha \\ \beta \end{array}\right)$ provided $|\alpha|^2+|\beta|^2=1$. Talking about ""0 and 1"" is already problematic. Clearly you're thinking about ""an equal superposition of 0 and 1"". Now, for two qubits, we can have any normalised state in $\mathbb{C}^4$, which clearly has many more possibilities than the tensor products of two single-qubit states (just count the number of parameters). So, sure, you can ask for ""an equally weighted superposition of 01 and 10"", and you'll get exactly what you had in (b): $(|01\rangle+|10\rangle)/\sqrt{2}$. If you try and create a normalised state with equal amplitudes for those two components using a state $|\psi\rangle\otimes|\phi\rangle$, you will necessarily have $|00\rangle$ and $|11\rangle$ components a well, so the state you've specified will be a superposition of all 4 possible basis states, not just the 2 that you want.
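To make this concrete, here is a small numpy check of my own (not part of the original answer): whether a two-qubit pure state factorises as a tensor product can be tested via its Schmidt rank, i.e. the number of non-zero singular values of the state vector reshaped into a 2x2 matrix. Rank 1 means a product state; rank 2 means entanglement.

```python
import numpy as np

def schmidt_rank(two_qubit_state):
    # Reshape the 4-component vector into a 2x2 matrix whose singular
    # values are the Schmidt coefficients; their count is the Schmidt rank.
    M = np.asarray(two_qubit_state, dtype=complex).reshape(2, 2)
    singular_values = np.linalg.svd(M, compute_uv=False)
    return int(np.sum(singular_values > 1e-10))

plus_plus = np.array([1, 1, 1, 1]) / 2            # |+>|+>, a product state
bell_like = np.array([0, 1, 1, 0]) / np.sqrt(2)   # (|01> + |10>)/sqrt(2)

print(schmidt_rank(plus_plus))  # 1 -> product state
print(schmidt_rank(bell_like))  # 2 -> entangled
```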

+ +

Let's also take it from the other perspective: let's say I've made $|+\rangle\otimes|+\rangle$. Now let's say that I measure in the computational basis and get the 0 answer. What's my state now? $|0\rangle\otimes|+\rangle$. If I measure the second qubit, I have a 50% chance of getting the 0 answer. With the string 00 as my outcome, that certainly does not fulfil your requirement of being either 01 or 10.

+ +

Beyond that, I'm not quite sure what your question is asking. ""Why"" is probably either something incredibly deep (that probably nobody understands) or it's ""because the maths says so"". You should probably bear in mind that quantum mechanics is a theory. It's a theory that starts from some postulates, which automatically imply a whole bunch of mathematics. Why we pick those postulates is either an issue of philosophy ""why is the world the way it is?"" or a very practical ""we did some experiments, and this seems to fit"".

+",1837,,,,,7/23/2018 9:56,,,,0,,,,CC BY-SA 4.0 +3848,2,,3841,7/23/2018 11:41,,5,,"

There is no GitHub like service dedicated to quantum programming. It is all on standard GitHub.

+ +

Most useful examples are on the repositories for the quantum SDKs (IBM's QISKit, Rigetti's Forest, Microsoft's Q#, Google's Cirq, and ProjectQ).

+ +

For QISKit, the tutorial has quite a large number of examples. Beyond the simple 'Hello World' example, most resources are found in the reference folder.

+ +

You'll also find things scattered around on the pages of single GitHub users. To find these, you could try looking at who has forked the SDKs or participated in quantum hackathons.

+",409,,119,,05-10-2019 17:31,05-10-2019 17:31,,,,0,,,,CC BY-SA 4.0 +3849,2,,3846,7/23/2018 12:08,,7,,"

GridQubit has comparison methods defined, so sorted will give you a list of the qubits in row-major order:

+ +
>>> sorted(cirq.google.Foxtail.qubits)
+[GridQubit(0, 0), GridQubit(0, 1), [...] GridQubit(1, 9), GridQubit(1, 10)]
+
+ +

Once you have that, you're one list comprehension away:

+ +
>>> [(q.row, q.col) for q in sorted(cirq.google.Foxtail.qubits)]
+[(0, 0), (0, 1), [...] (1, 9), (1, 10)]
+
+ +

Because tuples also have a default ordering, it doesn't matter whether you sort before or after the conversion:

+ +
>>> sorted((q.row, q.col) for q in cirq.google.Foxtail.qubits)
+[(0, 0), (0, 1), [...] (1, 9), (1, 10)]
+
+",119,,119,,7/23/2018 12:16,7/23/2018 12:16,,,,0,,,,CC BY-SA 4.0 +3850,2,,3832,7/23/2018 12:51,,7,,"

Two quick comments before explaining this:

+ +
    +
  1. The notes don't actually contain a proof of the claim made about the simulation; the intention was only to give a basic idea of how the simulation works. It is therefore not at all surprising that the mathematical justification is not clear, because the notes didn't even try to explain it. (It was the last lecture of the course and there just wasn't time to go over it in detail.)

  2. +
  3. The notation is a little bit unusual here: we're interpreting $Q_x$ as the one-qubit mixed state output by the circuit corresponding to $x$, so $\langle 1 | Q_x | 1 \rangle$ is the probability that $Q_x$ outputs 1 and $\langle 0 | Q_x | 0 \rangle$ is the probability that $Q_x$ outputs 0. The point is that we want the probability that our algorithm outputs 1 minus the probability it outputs 0 to be proportional to the probability the circuit outputs 1 minus the probability it outputs zero.

  4. +
+ +

Moving on to the actual question, let us suppose that $x$ is fixed so that we can just talk about a single quantum circuit $Q$ that acts on $n$ qubits and has $t$ gates of the type described in the question: Hadamard gates, Toffoli gates, and these two-qubit gates that mimic phase shift gates. (You can ignore these if you want, so long as you accept that Toffoli and Hadamard gates are universal for quantum computation. The point is that we do not want to deal with non-real numbers.)

+ +

Now, consider a sequence of binary strings $(y_0,\ldots,y_t)$, where each of these strings has length $n$ and there are $t+1$ strings in total. We can think of such a sequence as a possible computation path through the circuit, where $y_0$ represents a classical state of $n$ qubits at time 0, $y_1$ represents a classical state of $n$ qubits at time 1, and so on. Let us call such a path valid if $y_0 = 0^n$ and, for every choice of $k\in\{1,\ldots,t\}$, the $k$-th gate of $Q$ takes $|y_{k-1}\rangle$ to $|y_k\rangle$ with a nonzero amplitude. (Given our limited choice of gates, this nonzero amplitude will always be $\pm 1$ or $\pm 1/\sqrt{2}$.)

+ +

Associated with any such path $P = (y_0,\ldots,y_t)$ is an amplitude $\alpha(P)$. Given the gates we are considering, the amplitude $\alpha(P)$ for each valid path $P$ will always take the form +$$ +\alpha(P) = \frac{s(P)}{2^{m/2}} +$$ +where $s(P)\in\{-1,+1\}$ and $m$ is the total number of Hadamard gates in the circuit. The probability that the circuit outputs a string $z\in\{0,1\}^n$ is equal to +$$ +\left(\sum_P \alpha(P)\right)^2 += \frac{1}{2^m} \sum_P \sum_Q s(P) s(Q), +$$ +where in these sums $P$ and $Q$ range only over the valid paths that end at $z$.
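To see the path-sum bookkeeping in action, here is a hedged Python sketch of my own for the smallest interesting case: two Hadamard gates applied to a single qubit starting in $|0\rangle$, so $m = 2$. Summing $s(P)/2^{m/2}$ over the valid paths reproduces the interference: amplitude 1 for output 0 and amplitude 0 for output 1.

```python
# Sign of a single Hadamard transition b_in -> b_out:
# H maps |0> -> (|0> + |1>)/sqrt(2) and |1> -> (|0> - |1>)/sqrt(2),
# so every transition has sign +1 except 1 -> 1, which has sign -1.
def h_sign(b_in, b_out):
    return -1 if (b_in == 1 and b_out == 1) else 1

def amplitude(z, m=2):
    # sum s(P) over the valid paths (0, y1, z), then divide by 2^(m/2)
    total_sign = sum(h_sign(0, y1) * h_sign(y1, z) for y1 in (0, 1))
    return total_sign / 2 ** (m / 2)

print(amplitude(0), amplitude(1))  # 1.0 0.0
```

The two paths ending at 1 carry opposite signs and cancel, which is exactly the cancellation the $s(P)s(Q)$ sums keep track of.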

+ +

Now consider the probabilistic algorithm. We can again consider paths of the same form as before: $P = (y_0,\ldots,y_t)$. This time, think of these strings as the sequences of states of the register $Z$. The probability associated with each valid path $P$ will be $2^{-m}$, and the bit $B$ will always track the sign of the corresponding amplitude, so that after running through all of the gates you will have $B = 0$ if $s(P) = +1$ and $B = 1$ if $s(P) = -1$. The same reasoning is applied, independently, to $Z'$ and $B'$ for the second run through the process.

+ +

Now, for a particular choice of a final string $z\in\{0,1\}^n$, consider the situation in which both paths $P$ and $P'$ end in $z$, and look at the binary value $B\oplus B'$. This value is 0 if $B$ and $B'$ are equal, which is the same as saying $s(P)s(Q) = +1$, and 1 if $B$ and $B'$ are different, which is the same as saying $s(P)s(Q) = -1$. Therefore, if the final string contained in $Z$ and $Z'$ is $z$, then the probability that the algorithm outputs 0 minus the probability that it outputs 1 is +$$ +\frac{1}{2^{2m}} \sum_P \sum_Q s(P) s(Q), +$$ +where again we're restricting the sums to valid paths $P$ and $Q$ that end at $z$.

+ +

If the final strings of $Z$ and $Z'$ are different, the algorithm just outputs a random bit, so the probability of outputting 0 minus the probability it outputs 1 is zero, so there's effectively no contribution to the bias of the output in this case.

+ +

Finally, consider the probabilities as we range over all possible final strings $z$. For those strings beginning with 0, which represent the quantum circuit outputting 0, the expression above is the probability that the algorithm outputs 0 minus the probability it outputs 1. For choices of $z$ beginning with 1, which corresponds to the quantum circuit outputting 1, the algorithm flips the output ($1\oplus B \oplus B'$ would have been a better way to write it than $\overline{B}\oplus B'$). Summing everything up, the probability that the algorithm outputs 1 minus the probability it outputs 0 is exactly $2^{-m}$ times the probability that the quantum circuit outputs 1 minus the probability it outputs zero.

+ +

This argument is based on the original proof of $\text{BQP}\subseteq\text{PP}$ found in this paper:

+ +
+

Adleman, DeMarrais, and Huang. Quantum Computability. SIAM Journal on Computing 26(5): + 1524-1540, 1997.

+
+",1764,,,,,7/23/2018 12:51,,,,0,,,,CC BY-SA 4.0 +3851,2,,3828,7/23/2018 15:11,,1,,"

This paper by one of my group mates includes faulty prep on ancilla qubits and other errors in an analysis of Bacon Shor. I'm not sure what you mean by faulty error extraction (measurement errors?) but it has an explanation of the error model.

+",3056,,,,,7/23/2018 15:11,,,,2,,,,CC BY-SA 4.0 +3852,1,,,7/23/2018 16:16,,2,1106,"

Are there schematics and/or diagrams out there to build a very basic quantum system?

+",4164,,26,,12/14/2018 5:26,12/14/2018 5:26,Is it possible to purchase quantum computing hardware?,,3,4,,7/27/2018 18:41,,CC BY-SA 4.0 +3853,2,,1474,7/23/2018 16:54,,3,,"

Unfortunately, the list on Quantiki is pretty old and not well maintained. Even listing all quantum programming languages in a single reply to this question isn't sustainable since the quantum landscape is constantly evolving. For example, Google has just released Cirq, a new quantum programming framework for Noisy Intermediate Scale Quantum (NISQ) computers which isn't featured in any of the above replies since it was announced only a couple of days ago.

+ +

To address this problem and as a response to another question on QC StackExchange I started a curated list of open-source software projects on GitHub which also includes a comprehensive overview of actively developed quantum programming languages and frameworks. The list is actively maintained by the community and we constantly add new projects.

+",1234,,,,,7/23/2018 16:54,,,,2,,,,CC BY-SA 4.0 +3854,2,,3821,7/23/2018 17:02,,1,,"

Additional to the projects you mentioned in your question, the following projects come to my mind:

+ +

The Toronto-based quantum computing startup Xanadu has recently released a great visual & interactive quantum simulator on the web.

+ +

And there is QTop which is a great tool for the simulation and visualization of topological quantum computers.

+",1234,,,,,7/23/2018 17:02,,,,0,,,,CC BY-SA 4.0 +3855,2,,3773,7/23/2018 17:05,,4,,"

There is QTop which is an open-source project that can simulate but also visualize topological quantum codes.

+",1234,,,,,7/23/2018 17:05,,,,0,,,,CC BY-SA 4.0 +3856,2,,3852,7/23/2018 17:07,,0,,"

Considering that quantum computers are currently experiments in different labs, with the record at around 72 qubits, they are still very far from being a product which can be sold.

+ +

However, you can choose from a long list of different emulators/simulators.

+",27,,,,,7/23/2018 17:07,,,,0,,,,CC BY-SA 4.0 +3857,2,,3852,7/23/2018 17:40,,2,,"

The only commercially available quantum machines are from a company called D-Wave; however, there is debate on exactly how ""quantum"" their machines are.

+ +

True quantum computers in the way most of the community looks at them are not available for physical purchase, although IBM's Q Experience allows you to use their quantum computing resources through the cloud.

+ +

In terms of trying to DIY a quantum computer we are a long way from such a goal being viable.

+",3056,,,,,7/23/2018 17:40,,,,2,,,,CC BY-SA 4.0 +3859,1,,,7/24/2018 0:11,,7,82,"

Aaronson and Christiano call public-key or private-key quantum mini-schemes $\mathcal M$ secret-based if a mint works by first uniformly generating a secret random classical string $r$, and then generating a banknote $\$:=(s_r,\rho_r)$, where $s_r$ is a (classical) serial number corresponding to the quantum state $\rho_r$.

+ +

They state:

+ +
+

Intuitively, in a secret-based scheme, the bank can generate many identical banknotes $s$ by simply reusing $r$, while in a non-secret-based scheme, not even the bank might be able to generate two identical banknotes. (emphasis in original).

+
+ +

In characterizing a putative distributed quantum currency based on Aaronson and Christiano's secret-based scheme, Jogenfors describes a ""reuse attack."" For example he colorfully envisions someone, say Alice, who has minted and distributed a coin $\$_r$, and learns that the coin is in possession of a political rival Bob; she uses her secret knowledge of $r$ to mint and distribute a large number of identical coins $\$_r$, thus devaluing the coins in Bob's possession. Jogenfors describes a novel approach to prevent this attack, for example as discussed here.

+ +
+

However, would the above attack even work with a non-secret-based scheme?

+
+ +

If not even Alice can produce copies of her own coins that she's minted, then she has no way of devaluing any others that have been distributed.

+",2927,,2927,,4/28/2019 13:36,4/28/2019 13:36,"Are non-secret-based quantum money mini-schemes susceptable to Jogenfors' ""reuse attack?""",,0,0,,,,CC BY-SA 4.0 +3860,1,3862,,7/24/2018 14:48,,5,1051,"

I want to simulate an arbitrary isolated quantum circuit acting on $n$ qubits (i.e. a pure state of $n$ qubits).

+ +

As far as I know, RAM is the bottleneck for quantum simulators, so you can consider a ""normal"" computer to have between $4$ and $8$ GiB of RAM; all the other components are considered sufficiently powerful not to be the bottleneck.

+ +

With this definition of a ""normal"" computer,

+ +
+

What is the maximum value of $n$ (the number of qubits) for which an arbitrary quantum circuit is simulable in a reasonable time ($<1\text{h}$) with a normal computer and freely accessible simulators?

+
+",1386,,26,,12/23/2018 12:36,12/23/2018 12:36,How many qubits are simulable with a normal computer and freely accessible simulators?,,2,6,,,,CC BY-SA 4.0 +3861,1,,,7/24/2018 15:30,,7,539,"

Given an arbitrary $n$-qudit state vector $|\psi\rangle =\sum_i c_i| i \rangle \in (\mathbb{C}^d)^{\otimes n}$ for some orthonormal basis $\{|i\rangle\}$, what is the most efficient way one can:

+ +
    +
  1. Verify whether the state is a stabilizer state (i.e. can be defined by $n \leq m \leq 2n$ stabilizer generators, with equality for $d=2$), and if so,
  2. +
  3. Find the state's stabilizer generators (in the form of some tensor product of local Pauli matrices).
  4. +
+",391,,55,,5/25/2022 13:59,5/25/2022 13:59,How to verify whether a state is a stabilizer state?,,1,17,,,,CC BY-SA 4.0 +3862,2,,3860,7/24/2018 16:10,,5,,"

This answer doesn't directly answer the question (I have little experience of real simulators with practical overheads etc.), but here's a theoretical upper bound.

+ +

Let's assume that you need to store the whole state vector of $n$ qubits in memory. There are $2^n$ elements that are complex numbers. A complex number requires 2 real numbers, and a real number occupies 24 bytes in python. Let's say we want to cram this into $4\times 10^9$ bytes of RAM (probably leaving a few over for your operating system etc.). Hence, +$$ +48\times 2^n\leq 4\times 10^9 +$$ +Rearrange for $n$ and you have $n\leq26$ qubits.

+ +
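As a quick sanity check, this bound can be computed directly. A minimal Python sketch (the 48-byte figure assumes Python-object floats as above; a packed numpy complex128 array would need only 16 bytes per amplitude):

```python
import math

# Largest n such that bytes_per_amplitude * 2**n <= ram_bytes
def max_qubits(ram_bytes, bytes_per_amplitude):
    return int(math.floor(math.log2(ram_bytes / bytes_per_amplitude)))

print(max_qubits(4e9, 48))  # 26, as derived above
print(max_qubits(4e9, 16))  # 27 with a packed complex128 array
```

So the storage format matters: a packed array buys you one extra qubit over boxed Python floats.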

Note that applying gates in a quantum circuit is relatively inexpensive memory-wise. See the ""Efficiency Improvements"" section in this answer. From that strategy, one should be able to estimate the time it takes to apply a single one- or two-qubit gate to an $n$-qubit system, and hence how many gates you might expect to fit within some time limit (an hour is very modest, but would certainly serve for illustrative purposes).

+",1837,,1837,,7/25/2018 9:20,7/25/2018 9:20,,,,8,,,,CC BY-SA 4.0 +3863,1,,,7/24/2018 17:42,,13,515,"

As part of a variational algorithm, I would like to construct a quantum circuit (ideally with pyQuil) that simulates a Hamiltonian of the form:

+ +

$H = 0.3 \cdot Z_3Z_4 + 0.12\cdot Z_1Z_3 + [...] - 11.03 \cdot Z_3 - 10.92 \cdot Z_4 + \mathbf{0.12i \cdot Z_1 Y_5 X_4}$

+ +

When it comes to the last term, the problem is that pyQuil throws the following error:

+ +

TypeError: PauliTerm coefficient must be real

+ +

I started diving into the literature and it seems like a non-trivial problem. I came across this paper on universal quantum Hamiltonians where complex-to-real encodings as well as local encodings are discussed. However, it is still not clear to me how one would practically implement something like this. Can anyone give me some practical advice on how to solve this problem?

+",1234,,26,,1/31/2019 19:25,1/31/2019 19:25,Hamiltonian simulation with complex coefficients,,2,9,,,,CC BY-SA 4.0 +3865,1,,,7/24/2018 19:21,,5,97,"

According to J. Gough, one of the bottlenecks in the current development of large-scale quantum computing may be the lack of our ability to simulate large-scale quantum systems, which is an NP-hard problem and also requires exponential memory on classical hardware, if I understand correctly.

+ +

I'm currently developing a certain powerful discrete approximate algorithm. In principle, it can, for example, find an approximate solution for a given quantum Ising model efficiently, or optimize the configuration of quantum gates to minimize the error à la U. Las Heras et al. by replacing their GA with mine.

+ +

1) Can we reduce digital/analog quantum simulation to an NP-hard discrete optimization problem (like solving an Ising model), so that it can be approximately solved by a classical algorithm on a classical device?

+ +

2) What are some other crucial bottlenecks? Do they need discrete optimization?

+ +

Edit: I guess one of other bottlenecks is quantum error correction, which was already mentioned above.

+",4193,,4193,,7/25/2018 16:09,7/25/2018 16:09,Application of classical approximate optimization algorithm to bottlenecks of quantum computing,,0,2,,,,CC BY-SA 4.0 +3868,2,,3861,7/25/2018 10:49,,4,,"

Here's a necessary condition that might help recognise potential stabilizer states. I'll state it for qubits as that's what I'm used to thinking about, but I suspect it can be generalised:

+ +
+

all the non-zero amplitudes of a stabilizer state must have the same magnitude.

+
+ +

To see this, let's assume that the state $|\psi\rangle$ is an $n$-qubit stabilizer state with linearly independent stabilizers $\Lambda=\{K_i\}_{i=1}^n$. In other words, $K_i|\psi\rangle=|\psi\rangle$, $K_i=K_i^\dagger$ and $[K_i,K_j]=0$. Let $x,y\in\{0,1\}^n$ be such that $\langle x|\psi\rangle\neq 0$ and $\langle y|\psi\rangle\neq 0$.

+ +

Now consider +$$ +|\psi\rangle\langle \psi|x\rangle=\left(\frac{1}{2^n}\prod_{K\in\Lambda}(\mathbb{I}+K)\right)|x\rangle +$$ +If we multiply out the terms, there are all the different possible products of subsets of stabilizers, each turning $|x\rangle$ into a (possibly different) basis state. Hence, there is at least one subset $S\subseteq\Lambda$ such that $\left(\prod_{K\in S}K\right)|x\rangle=|y\rangle$ up to a global phase.

+ +

Finally, what is the amplitude we're after? +$$ +\langle x|\psi\rangle=\langle x|\left(\prod_{K\in S}K\right)|\psi\rangle=\langle y|\psi\rangle +$$ +up to a global phase.

+ +

Presumably this could put you on a route towards a better-than-brute-force algorithm for determining the stabilizers (I haven't done this myself, hence some vagueness in the statement). If you have a binary string of each of the non-zero basis elements, and the corresponding phase, you know a lot about the group generated by $\Lambda$. A bit of linear algebra should allow you to extract the generators. I would guess that there's even an argument a bit like the one in Simon's algorithm that says you don't need more than $O(n)$ of those basis elements in order to extract the group generators. I'm not sure if this will give you all the information, or just the information about bit-flips. You may also need a Hadamard-rotated version of the state in order to determine the phase flips in the stabilizers.
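For illustration, the necessary condition above is easy to use as a quick numerical filter before attempting to extract generators. A minimal numpy sketch (the function name and tolerance are my own choices, not from any library):

```python
import numpy as np

# Necessary (not sufficient) condition: all non-zero amplitudes
# of a stabilizer state share the same magnitude.
def could_be_stabilizer(psi, tol=1e-9):
    mags = np.abs(np.asarray(psi))
    nonzero = mags[mags > tol]
    return bool(np.allclose(nonzero, nonzero[0], atol=tol))

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)            # (|00> + |11>)/sqrt(2)
other = np.array([np.sqrt(0.9), 0, 0, np.sqrt(0.1)])  # unequal amplitudes
print(could_be_stabilizer(bell), could_be_stabilizer(other))  # True False
```

A state failing this check is certainly not a stabilizer state; one passing it still needs the full generator-extraction step.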

+",1837,,,,,7/25/2018 10:49,,,,3,,,,CC BY-SA 4.0 +3869,1,,,7/26/2018 5:44,,3,90,"

I am very interested in this field right now, due to the quantum advantage, and would like some helpful nudges in the right direction, i.e. where I should start.

+ +

P.S: I have some background in software engineering and subpar knowledge in math and physics.

+",4207,,2484,,2/15/2019 23:02,2/15/2019 23:02,Any tips on where to start learning quantum computing?,,0,2,,7/26/2018 7:34,,CC BY-SA 4.0 +3870,2,,3863,7/26/2018 9:31,,3,,"

This simple MATLAB/Octave code shows that $i0.12Z_1Y_2X_3$ is not Hermitian:

+ +
z=[1 0 ; 0 -1];
+x=[0 1;  1  0];
+y=[0 -1i; 1i 0];
+
+z1 = kron(z,eye(4));
+y2 = kron(kron(eye(2),y),eye(2));
+x3 = kron(eye(4),x);
+
+H=0.12*1i*z1*y2*x3
+
+ +

The output is H:

+ +
    0     0    0 0.12    0    0     0     0
+    0     0 0.12    0    0    0     0     0
+    0 -0.12    0    0    0    0     0     0
+-0.12     0    0    0    0    0     0     0
+    0     0    0    0    0    0     0 -0.12
+    0     0    0    0    0    0 -0.12     0
+    0     0    0    0    0 0.12     0     0
+    0     0    0    0 0.12    0     0     0
+
+ +

Since it's a real matrix, Hermitian means symmetric, but this is not symmetric and therefore not Hermitian. The top-right triangle isn't equal to the bottom-left triangle.

+ +

However, the top-right triangle is the negative of the bottom-left triangle, so it is anti-Hermitian.

+ +

So following AHussain's suggestion of adding the conjugate transpose results in 0. Just run this command:

+ +
H + H'
+
+ +

and you will get an 8x8 matrix of 0's.

+ +

So when you make your Hamiltonian Hermitian by adding the conjugate transpose, you get 0 for this term, and therefore you do not need to have any imaginary coefficients.

+",2293,,,,,7/26/2018 9:31,,,,6,,,,CC BY-SA 4.0 +3871,1,,,7/26/2018 16:12,,7,2532,"

Among available qubit technologies for universal quantum computing, these 3 come up as promising. NV centers and Majorana qubits are also underway but relatively new. I find superconducting qubits and trapped-ion qubits very hard to scale. Also, the $T_1$ (decoherence) and $T_2$ (dephasing) times for superconducting qubits are very short (microseconds).

+ +

Being a non-physicist, I am not able to locate good sources to convince myself why one technology is preferred over the other. I would really appreciate it if you could direct me to relevant literature available on this topic.

+",2391,,55,,09-04-2020 09:31,09-04-2020 09:31,"What are the pros/cons of Trapped Ion Qubits, Superconducting Qubits and Si Spin Qubits?",,2,0,,,,CC BY-SA 4.0 +3872,1,3876,,7/26/2018 16:25,,8,924,"

I understand that right now qubits are physical entities in a Quantum Computer and I am playing around on the IBM Quantum Computer as well as the Q# language and dipping my toes into the Quantum world for the first time.

+ +

I have read a lot of the Alice and Bob style scenarios, where transporting a qubit from Alice to Bob is often mentioned. I infer that as physically transporting it, but I haven't found a discussion of what this looks like in a computing sense; that is, how it could theoretically be achieved to ""package"" a qubit, or even a representation of a qubit (state or values), for transportation via a Classical or Quantum channel. I am assuming the only way this is possible is via entanglement and teleportation. Is it possible for non-entangled, ordinary qubits to be represented in some format and transferred logically between two points, whereby the receiving point can decode and interpret the information contained within? That receiving point could be a computer service in a Classical computer architecture or another Quantum machine.

+ +

I ask this in the sense of Classical Computing, where we can encode bits onto a chip but logically represent a bit (or series of bits) in numerous formats and transfer them for manipulation. As a software engineer that's where my thought process is coming from. This might not be a practical thing to want to do in Quantum but in theory, is it something that could be achieved? Any guidance would be welcome.

+ +

EDIT: Thank you for the really comprehensive answers; they have filled in a lot of gaps, and I did not realise the strong link between photons and fiber, which brings a potential bridge in theory. I'm working my way through the basic hello-world applications and was trying to mentally bridge my software knowledge of Classical into this world at a basic transfer and representation level. I'm trying to build some small apps that bridge both worlds, and my mental block right now is representing the characteristics of a qubit in traditional programming notation. Have you any thoughts on what would need to be modelled to create a logical representation of a qubit? What I am getting at is something similar to a specification that would allow a programmer to represent a type (like a String, e.g. https://en.wikipedia.org/wiki/String_(computer_science)). In the Quantum programming languages the qubit is its own type; drilling down a level, can the characteristics be captured in a very basic manner so that potentially they could be represented in something like a vector array to capture key characteristics, e.g. state (notwithstanding the difficulty of superposition!), spin, etc.

+",4210,,26,,12/23/2018 12:39,12/23/2018 12:39,The process for transferring qubits between locations,,2,0,,,,CC BY-SA 4.0 +3873,1,,,7/26/2018 17:18,,8,477,"

If I understand correctly, there must exist unitary operations that can be approximated to a distance $\epsilon$ only by an exponential number of quantum gates and no less.

+ +

However, by the Solovay-Kitaev theorem, any arbitrary unitary operation on $n$ qubits, with $n$ fixed, can be approximated to a distance of $\epsilon$ using $\text{poly}(\log(1/\epsilon))$ universal gates.

+ +

Don't these two statements appear contradictory? What am I missing?

+",1351,,26,,12/23/2018 13:20,05-02-2019 15:25,Number of gates required to approximate arbitrary unitaries,,2,5,,,,CC BY-SA 4.0 +3874,2,,3872,7/26/2018 17:48,,3,,"

It’s worth stating from the start that “Alice and Bob” scenarios are very different from quantum computation scenarios. In the Alice and Bob scenarios, there are two distantly separated locations between which it is impossible to perform quantum gates directly. Meanwhile, in the quantum computing architectures you’re talking about, two-qubit gates are readily available. Even if you can’t directly interact a pair of qubits, a bunch of swap gates is enough to move the qubits next to each other, and back again.

+ +

You also want to be careful with your classical software engineer interpretation, because classically it’s very easy to move things about, and make multiple copies. In quantum, you can’t make copies of your data.

+ +

So, how do you move a qubit? Entanglement and teleportation aren’t really an answer. They might help give you error-correction enhanced protocols, but the basic question is still how you share the entangled state to achieve the teleportation.

+ +

Probably the best way is to transfer the quantum information from one physical carrier, such as the qubits in a quantum computer, to a different one. We’d usually think about photons in this context - they’re really good at travelling long distances without interacting too much. In the same way that classical data can transfer via an optical fibre, it’s not too wild to imagine sending photons in superpositions through an optical fibre. You ‘just’ have to convert the photon at either end into the different storage/manipulation type of qubit. The technology certainly exists to do this, but I don’t know how reliably it happens.

+",1837,,,,,7/26/2018 17:48,,,,2,,,,CC BY-SA 4.0 +3875,2,,3860,7/26/2018 18:56,,-1,,"

Along with the time constraints, as Craig mentioned, you also need to specify how accurate you want the simulation to be and what gates you want it to have. CHP (CNOT, Phase, Hadamard) simulations can do incredibly large circuits with large numbers of qubits incredibly quickly; however, they only allow a certain restricted gate set, so some gates, such as T gates, must be approximated.

+ +

Other simulations exist (such as quantumsim and others) which store full density matrices, and as a result are significantly more limited in the number of qubits they can work with, seeing as they must store a $2^n \times 2^n$ matrix.

+",3056,,,,,7/26/2018 18:56,,,,0,,,,CC BY-SA 4.0 +3876,2,,3872,7/26/2018 20:35,,12,,"

You are totally right in your assumption that transporting qubits from Alice to Bob implies something physical. Problems/situations that have this setup of a transmission between two parties are usually called quantum communications. These problems/situations are sometimes disambiguated by calling their qubits ""flying qubits"", which are almost always photons. Single photons are also quantum systems that can be prepared in useful qubit states; they can be operated on with gates (but not all kinds of gates, and not as easily as in some other physical implementations of qubits), and they can be measured just like any other qubit system. Alice and Bob would literally share these photons either through an optical fiber connecting them or through free space (which could literally mean to a satellite in space).

+ +

Photons are great for this application because we already use them for a large portion of our classical communication networks. ""Fiber"" internet or photonic networks send classical information in optical fiber with strong pulsed lasers. So if you wanted to have both a classical channel and a quantum communication channel you could do both with the same fiber (hard for some technical reasons but totally possible).

+ +

There are also many other physical systems that you can make qubits out of for quantum computing (superconductors, ion traps, etc.). You are correct that, to connect different groups of these qubits, one would not pick the chip up and move it; instead, one often creates a photon (or photons) that is entangled with the original system, or that carries the information to be shared with the second system, and then sends the photon over.

+",4211,,,,,7/26/2018 20:35,,,,3,,,,CC BY-SA 4.0 +3877,2,,3852,7/26/2018 20:49,,5,,"

I think this kind of depends on what you are looking for. You asked about building a basic quantum system, which you can totally do*. Just grab a good laser, single-photon detectors, beam splitters and wave plates, and some attenuating filters. If you can get the laser down to a trickle of single photons, you can do some fun quantum key distribution demos and operations on single qubits (possibly on pairs if you have the right components). Now, that's obviously no computer, but since computers are built out of qubits it's a start. Also, as @peterh mentioned, there are lots of simulators that you can use if you want to work with more qubits now, an example being Q#, which can actually estimate the number of qubits you would need to run your algorithm on a real quantum computer.

+ + +",4211,,,,,7/26/2018 20:49,,,,0,,,,CC BY-SA 4.0 +3878,2,,3871,7/26/2018 23:24,,6,,"

I think the (very) short answer is that there is no preferred platform yet. This is why there are very active research communities around each of these technologies. Often if someone says otherwise they are probably working on one of the platforms :)

+",4211,,,,,7/26/2018 23:24,,,,0,,,,CC BY-SA 4.0 +3879,1,3911,,7/27/2018 8:51,,11,718,"

In @DaftWullie's answer to this question he showed how to represent, in terms of quantum gates, the matrix used as an example in this article. However, I believe it is unlikely to have such well-structured matrices in real-life examples, therefore I was trying to look at other methods to simulate a Hamiltonian. I have found in several articles a reference to this one by Aharonov and Ta-Shma in which, among other things, they state that it is possible to have some advantage in simulating sparse Hamiltonians. After reading the article, however, I haven't understood how the simulation of sparse Hamiltonians could be performed. The problem is usually presented as one of graph coloring; however, also looking at the presentation that @Nelimee suggested reading to study matrix exponentiation, it all boils down to simulation through the product formula.

+ +

To make an example, let's take a random matrix like:

+ +

$$ +A = \left[\begin{matrix} +2 & 0 & 0 & 0\\ +8 & 5 & 0 & 6\\ +0 & 0 & 7 & 0\\ +0 & 5 & 3 & 4 +\end{matrix}\right]; +$$ + this is not hermitian, but using the suggestion from Harrow,Hassidim and Lloyd we can construct an hermitian matrix starting from it:

+ +

$$ +C = \left[ \begin{matrix} +0 & A\\ +A^{\dagger} & 0 +\end{matrix} \right] + = \left[\begin{matrix} +0 & 0 & 0 & 0 & 2 & 0 & 0 & 0\\ +0 & 0 & 0 & 0 & 8 & 5 & 0 & 6\\ +0 & 0 & 0 & 0 & 0 & 0 & 7 & 0\\ +0 & 0 & 0 & 0 & 0 & 5 & 3 & 4\\ +2 & 8 & 0 & 0 & 0 & 0 & 0 & 0\\ +0 & 5 & 0 & 5 & 0 & 0 & 0 & 0\\ +0 & 0 & 7 & 3 & 0 & 0 & 0 & 0 \\ +0 & 6 & 0 & 4 & 0 & 0 & 0 & 0 \\ +\end{matrix}\right]. +$$

+ +
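As a quick numerical check of this construction, a numpy sketch building $C$ from $A$ and verifying Hermiticity:

```python
import numpy as np

A = np.array([[2, 0, 0, 0],
              [8, 5, 0, 6],
              [0, 0, 7, 0],
              [0, 5, 3, 4]], dtype=float)

# Hermitian dilation C = [[0, A], [A^dagger, 0]]
C = np.block([[np.zeros((4, 4)), A],
              [A.conj().T, np.zeros((4, 4))]])

print(np.allclose(C, C.conj().T))  # True: C is Hermitian by construction
```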

Now that I have an 8x8, 3-sparse Hermitian matrix (at most 3 non-zero entries per row):

+ +
    +
  • Can I simulate its evolution in other ways than the product formula method?
  • +
  • Even if I use the product formula, how do I exploit the fact that it is sparse? Is it just because there are fewer non-zero entries and therefore it should be easier to find the product of basic gates?
  • +
+",2648,,,,,7/31/2018 13:45,Advantage of simulating sparse Hamiltonians,,1,0,,,,CC BY-SA 4.0 +3880,1,3881,,7/27/2018 9:09,,7,817,"

As far as I have seen, when it comes to solving linear systems of equations, one assumes a matrix with a number of rows and columns equal to a power of two, but what if that is not the case?

+ +

If for instance I have the equation $Ax=b$ where $A$ is a 4x4 matrix and $x$ and $b$ are 4x1 vectors, I expect to find the solution in terms of amplitudes of the 4 basis states considered for the problem. What if instead of 4 it is 5? My idea would be to choose a Hilbert space of the smallest dimension that includes 5, i.e. 8, and then make it so that 3 basis states have amplitude 0. Is it correct to reason in this way, or am I making problems out of nothing?

+",2648,,,,,7/27/2018 9:37,Solving linear systems represented by NxN matrices with N not power of 2,,1,1,,,,CC BY-SA 4.0 +3881,2,,3880,7/27/2018 9:37,,6,,"

This is indeed a correct way to solve linear systems with dimension not equal to a power of 2. Solve the smallest possible system of dimension $2^n$ that contains the system you want to solve, and pad the matrices and vectors with zeros to make it the right size. This is because the vector $|b\rangle$ in the HHL algorithm is a quantum state, which means that if we have $n$ qubits, its dimension is naturally $2^n$.

+ +
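As a rough numpy sketch of this padding (my own convention: ones on the padded diagonal keep the enlarged matrix invertible, and zero-padding $b$ forces the extra solution components to zero):

```python
import numpy as np

def pad_system(A, b):
    # Embed an N x N system into the smallest 2^n x 2^n system.
    N = A.shape[0]
    dim = 1 << (N - 1).bit_length()   # next power of two >= N
    Ap = np.eye(dim, dtype=float)     # identity on the padded diagonal
    Ap[:N, :N] = A
    bp = np.zeros(dim)
    bp[:N] = b
    return Ap, bp

A = np.diag([1.0, 2.0, 3.0, 4.0, 5.0])  # a toy 5x5 system
b = np.ones(5)
Ap, bp = pad_system(A, b)
x = np.linalg.solve(Ap, bp)
print(Ap.shape)   # (8, 8)
print(x[:5])      # solves the original Ax = b; x[5:] is zero
```

Classically this is trivial; the point is that the same zero-padded $|b\rangle$ is what you would prepare on 3 qubits.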

However, this procedure is not the only way to solve systems with dimension not equal to a power of 2.

+ +

Remember you do not have to work with qubits but can also work with other qudits, such as qutrits (3-level systems). Then your $|b\rangle$ will have dimension $3^n$, so you can, for example, solve a 9x9 system without resorting to solving a 16x16 problem involving 4 qubits. The question then becomes whether your hardware can more easily perform the algorithm for the case where $|b\rangle$ is represented by 4 qubits, or for the case where it is 2 qutrits.

+ +

For a 5x5 matrix, you can use for $|b\rangle$ a qupit with $p=5$. Or you can use 3 qubits and work with an 8x8 matrix. Since there's not likely to be a lot of quantum computing hardware around with qupits, it may be easier to do what you suggest, which is to just use more qubits.

+",2293,,,,,7/27/2018 9:37,,,,2,,,,CC BY-SA 4.0 +3883,2,,1880,7/27/2018 16:59,,5,,"

I have worked with NVs in nanodiamonds a little bit, and you are totally right: surface characteristics have a huge influence on how far we can push them. There are definitely multiple groups working on the chemistry/materials science side to clean up the surfaces as much as possible. I had a colleague, Carlo Bradac, who worked with our chemistry department to passivate/manipulate the surface properties of nanodiamonds for us, and they were way better than what we could buy commercially from anywhere. I think he has a startup now where you can order samples of nanodiamonds (and other structures) with certain properties. If I recall correctly, they did things like hot acid baths and centrifuging, but as there is a startup I am not sure it is published. A place to start may be:

+ +

Influence of surface composition on the colloidal stability of ultra-small detonation nanodiamonds in biological media

+",4211,,26,,7/27/2018 17:24,7/27/2018 17:24,,,,0,,,,CC BY-SA 4.0 +3885,1,3886,,7/28/2018 5:57,,7,388,"

Also, why is Microsoft placing such an emphasis on topological qubits when most other companies seem to be focusing on other qubit technologies?

+ +

I know topological qubits could handle noise far better than other systems, so they are appealing, but they are also new and seemingly only theoretical so far.

+",1287,,,,,2/21/2022 20:57,Are there any other companies besides Microsoft pursuing topological QC?,,1,1,,,,CC BY-SA 4.0 +3886,2,,3885,7/28/2018 6:36,,4,,"

Microsoft is the only company that is trying to build a topological quantum computer. You mention that topological qubits handle noise far better than other systems, but they are also theoretical. That's the reason Microsoft is applying a topological approach. It's high-risk, high-reward. If Microsoft manages to realize a topological qubit, scaling up a computer made of topological qubits will be easier than competing approaches because a topological quantum computer would use less resources to perform quantum error correction compared to other implementations.

+",362,,,,,7/28/2018 6:36,,,,2,,,,CC BY-SA 4.0 +3887,1,3889,,7/28/2018 9:05,,4,425,"

I need to use the following matrix gate in a quantum circuit:

+ +

$$\text{Sign Flip}=\left[\begin{matrix}0 & -1 \\ -1 & 0\end{matrix}\right]$$

+ +

$\text{Sign Flip}$ can be decomposed as (in terms of Pauli-$X$,$Y$,$Z$):

+ +

$$\begin{bmatrix} 0 & -1\\ -1 & 0\end{bmatrix} = \begin{bmatrix} 0 & 1\\ 1 & 0\end{bmatrix} \begin{bmatrix} 0 & -i\\ i & 0\end{bmatrix} \begin{bmatrix} 1 & 0\\ 0 & -1\end{bmatrix} \begin{bmatrix} 0 & -i\\ i & 0\end{bmatrix} \begin{bmatrix} 1 & 0\\ 0 & -1\end{bmatrix}$$

+ +
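For reference, the decomposition above can be checked numerically, e.g. with a short numpy sketch:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

sign_flip = np.array([[0, -1], [-1, 0]], dtype=complex)
print(np.allclose(X @ Y @ Z @ Y @ Z, sign_flip))  # True
```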

Is there any standard shorthand notation for the $\text{Sign Flip}$ gate? I don't really want to replace one simple custom gate by $5$ quantum gates in my circuit.

+",26,,26,,12/23/2018 13:19,12/23/2018 13:19,Shorthand notation for the sign flip gate,,1,3,,,,CC BY-SA 4.0 +3889,2,,3887,7/28/2018 10:46,,6,,"

A unitary $U$ and $e^{i\phi}U$, which differs from it by a phase, act identically on any quantum state. Thus, they should really be considered the ""same"" unitary in terms of their action.

+ +

You can therefore use $X$ instead of your unitary, which is $-X$. This will have exactly the same action in any circuit.

+ +

(Why is this? There are different ways to see this: Either since $|\psi\rangle$ and $e^{i\phi}|\psi\rangle$ describe the same quantum state, or by working with density operators on which $U$ acts as $\rho\mapsto U\rho U^\dagger$, such that phases cancel. Also, note that this is not true for controlled-unitaries -- but this is an entirely different question.)
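A quick numerical illustration of this point, as a numpy sketch (the state is an arbitrary example of mine):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
sign_flip = -X                   # differs from X only by a global phase

psi = np.array([0.6, 0.8j])      # an arbitrary normalised state
rho = np.outer(psi, psi.conj())  # its density matrix

# U rho U^dagger is the same for X and -X: the phase cancels.
print(np.allclose(X @ rho @ X.conj().T,
                  sign_flip @ rho @ sign_flip.conj().T))  # True
```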

+",491,,491,,7/28/2018 14:59,7/28/2018 14:59,,,,0,,,,CC BY-SA 4.0 +3890,1,,,7/28/2018 12:37,,7,173,"

I am going through this video of Quantum Computing for Computer Scientists, and I am not able to understand the entangled state of qubits.

+ +

Is entanglement just an operation, or is it a state which can be stored? If it's a state which is stored, then how can we send two qubits far apart? But if it is just an operation, then hasn't the operation somehow changed the probability from 50% to 100% for the superposed states of the qubits?

+ +

Also, what if the operation has a 50% chance of assigning the probability? Then, when measuring multiple times, we would tend to get a 50% probability of the qubits collapsing to one of the states.

+",4226,,2293,,7/29/2018 17:07,8/24/2018 13:13,Is entanglement an operation or a stored state for qubits?,,3,0,,,,CC BY-SA 4.0 +3891,2,,3890,7/28/2018 16:52,,6,,"

It’s a property of the state that is stored. But it’s not a physical change that relates to a force: it is not that entangling qubits means they are physically joined. The two physical qubits remain independent physical entities that can move around separately, just with a joint state. (At least, not usually. In things like covalent bonds, that’s exactly what’s happening!)

+ +

Incidentally, try not to talk too much about “probability” as it can be very misleading in a quantum scenario, especially when you start talking about entanglement. Talk about probability amplitudes where possible, and keep probabilities for being associated with the outcomes of measurements. Otherwise the differences between entanglement and classical correlation will be impossible to distinguish.

+",1837,,1837,,8/24/2018 13:13,8/24/2018 13:13,,,,4,,,,CC BY-SA 4.0 +3893,1,3895,,7/29/2018 0:53,,7,604,"

I'm stuck trying to understand the Hadamard gate from a more linear-algebraic point of view (I understand the algebraic way). This is because I want to program a simulation of a quantum computer. To apply a gate, you multiply the state vector by the unitary matrix.

+ +

So, the Hadamard gate maps the state $\alpha |0\rangle + \beta|1\rangle$ to $\frac{\alpha}{\sqrt{2}}\begin{bmatrix}1\\1\\\end{bmatrix}+\frac{\beta}{\sqrt{2}}\begin{bmatrix}1\\-1\\\end{bmatrix}$, right? But the outputs are not written in terms of the basis vectors. How do I get this back into basis-vector form so I can write it in ket form? I know you can do it algebraically, but how in linear algebra?

+ +

So put it in the form: +$$\alpha\begin{bmatrix}1\\0\\\end{bmatrix}+\beta\begin{bmatrix}0\\1\\\end{bmatrix}$$

+ +

If you understand what I mean, can you explain it in a general matrix form?

+",4232,,55,,06-04-2020 15:04,06-04-2020 15:04,How do you represent the output of a quantum gate in terms of its basis vectors?,,2,0,,,,CC BY-SA 4.0 +3895,2,,3893,7/29/2018 4:02,,7,,"

In linear algebra representation:

+ +

The state of the qubit is $\alpha|0\rangle + \beta|1\rangle = \begin{bmatrix}\alpha \\ \beta \end{bmatrix}$.

+ +

The Hadamard matrix is $\frac{1}{\sqrt{2}}\begin{bmatrix}1 & 1 \\ 1 & -1 \end{bmatrix}$.

+ +

The state after applying a Hadamard gate can be calculated by multiplying the column vector representing the state by the matrix representing the operator:

+ +

$\frac{1}{\sqrt{2}}\begin{bmatrix}1 & 1 \\ 1 & -1 \end{bmatrix} \cdot \begin{bmatrix}\alpha \\ \beta \end{bmatrix} = \frac{1}{\sqrt{2}} \begin{bmatrix}\alpha + \beta \\ \alpha - \beta \end{bmatrix}$ = $\frac{1}{\sqrt{2}}(\alpha + \beta)|0\rangle + \frac{1}{\sqrt{2}}(\alpha - \beta)|1\rangle$
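For the simulator, this is exactly a matrix-vector product. A minimal numpy sketch (amplitudes chosen arbitrarily):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard matrix

alpha, beta = 0.6, 0.8                        # example amplitudes
state = np.array([alpha, beta])               # alpha|0> + beta|1>

out = H @ state
# Equals (alpha+beta)/sqrt(2) |0> + (alpha-beta)/sqrt(2) |1>
print(np.allclose(out, [(alpha + beta) / np.sqrt(2),
                        (alpha - beta) / np.sqrt(2)]))  # True
```

The components of `out` are directly the coefficients of $|0\rangle$ and $|1\rangle$ in the new state.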

+",2879,,3056,,7/29/2018 6:29,7/29/2018 6:29,,,,1,,,,CC BY-SA 4.0 +3896,2,,3890,7/29/2018 4:47,,0,,"

To add on to what @DaftWullie said, in my mind it might make more sense to think of entanglement as a property of a multi-qubit state, instead of as a state itself. There is no special ""Entangled"" state, it's just that some states are entangled and some are not.

+",3056,,,,,7/29/2018 4:47,,,,0,,,,CC BY-SA 4.0 +3897,2,,3871,7/29/2018 5:04,,5,,"

Here's a paper from the group at UMD comparing trapped-ion and superconducting systems (the main competitors right now): they compare their trapped-ion system with IBM's transmon (superconducting) system, if you want a more algorithm-focused line of thought.

+ +

If you are looking for a more general summary of the strengths and weaknesses, this paper seems to discuss all the major options, and this one focuses on superconducting qubits in particular.

+ +

Hope that helps!

+",3056,,,,,7/29/2018 5:04,,,,1,,,,CC BY-SA 4.0 +3898,1,3966,,7/29/2018 8:21,,8,306,"

I recently came to know about this interesting topic of ""communication complexity"". In simple words, Wikipedia defines it as:

+ +
+

In theoretical computer science, communication complexity studies the + amount of communication required to solve a problem when the input to + the problem is distributed among two or more parties. It was + introduced by Andrew Yao in 1979, who investigated the following + problem involving two separated parties, traditionally called Alice + and Bob. Alice receives an n-bit string $x$ and Bob another $n$-bit + string $y$, and the goal is for one of them (say Bob) to compute a + certain function $f(x,y)$ with the least amount of communication + between them. Of course, they can always succeed by having Alice send + her whole n-bit string to Bob, who then computes the function $f$, but + the idea here is to find clever ways of calculating $f$ with fewer than + n bits of communication. Note that in communication complexity, we are + not concerned with the amount of computation performed by Alice or + Bob, or the size of the memory used.

+
+ +

Now, I was going through the first couple of pages of this paper: ""Quantum Communication Complexity (A Survey) - Brassard"". Apparently, it seems that if the non-communicating parties are previously entangled then more bits of information may be communicated than in the classical case. The paper looks nice, and so I'll read further. However, are there any other important papers or texts which are ""must-reads"" when learning about ""quantum communication complexity""? (I'm mostly interested in the theoretical/algorithmic side)

+",26,,,,,08-07-2018 14:55,Resources for Quantum Communication Complexity,,2,0,,,,CC BY-SA 4.0 +3899,1,3900,,7/29/2018 21:25,,6,474,"

Under the influence of a time-independent Hamiltonian $H$, a state $|\psi\rangle$ will evolve after a time $t$ to the final state $|\psi(t)\rangle=e^{-iH t}|\psi\rangle$, while in the most general case of a time-dependent Hamiltonian $H(t)$, the final state can be formally written as +$$|\psi(t)\rangle=T\exp\left(-i\int_{t_0}^{t}dt'\,H(t')\right) |\psi\rangle.$$ +What are the preferred numerical methods to compute $|\psi(t)\rangle$, given $H$, $t$ and $|\psi\rangle$?

+",55,,55,,7/30/2018 15:50,7/30/2018 15:50,What are the preferred numerical methods to simulate the evolution of a state through a time-dependent Hamiltonian?,,1,12,,,,CC BY-SA 4.0 +3900,2,,3899,7/30/2018 8:42,,7,,"

It depends on the Hamiltonian. There are three particular questions whose answers might influence your choice of strategy:

+ +
    +
  • Does the Hamiltonian have any particular structure or symmetry?
  • +
  • How quickly does the Hamiltonian change in time?
  • +
  • What do you know about the initial state in relation to the initial Hamiltonian?
  • +
+ +

Obviously, if the Hamiltonian has any particular structure or symmetry, you should start by taking advantage of it. For example, if your Hamiltonian $H$ satisfies $$\left[H,\sum_{i=1}^NZ_i\right]=0,$$ +then you can split $H$ into a series of subspaces $H=\bigoplus_{i=0}^NH_i$, whose evolutions you can handle separately. This is particularly good if your initial state turns out to be supported on a small number of those subspaces. Another particularly trivial case is of $[H(t),H(s)]=0$ for all $s$ and $t$. In that case, you would start by decomposing the initial state in terms of the eigenvectors.
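The block split above can be sketched in a few lines (my own sketch, assuming dense NumPy matrices and labelling computational basis states by Hamming weight):

```python
import numpy as np

def magnetization_blocks(H):
    # Split an n-qubit Hamiltonian that commutes with sum_i Z_i into
    # blocks indexed by the Hamming weight of the computational basis states.
    n = int(np.log2(H.shape[0]))
    sectors = {}
    for state in range(2 ** n):
        sectors.setdefault(bin(state).count('1'), []).append(state)
    return {w: H[np.ix_(idx, idx)] for w, idx in sectors.items()}
```

For the two-qubit hopping term $(X\otimes X+Y\otimes Y)/2$, for instance, the only nontrivial block is the weight-1 sector, which is just a $2\times2$ bit flip.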

+ +

If your Hamiltonian is changing slowly in time, and your initial state only has support on a small number of eigenstates, it might be worthwhile investigating whether an adiabatic evolution is occurring. In that case, it may be ""just"" a case of finding the final Hamiltonian and some of its eigenstates.

+ +

If it's changing at a reasonable speed, but your initial state still has support only on a small number of the low-energy eigenvectors, then you might use matrix-product states (especially if your Hamiltonian has a one-dimensional structure).

+ +

If your Hamiltonian is changing very quickly in time, then the default option of Trotterising the Hamiltonian can work quite badly, with errors building quickly. There are improved techniques that work in these sorts of situations, but I've never seen them applied in the discrete setting of qubits (that said, I've never looked). Things like the explicit Arnoldi method, or explicit Fatunla method. You may find some useful details here.

+ +

The default method, which will always work (if you take a small enough $\delta t$) is to split the Hamiltonian evolution into lots of little time steps $\delta t$ and just evaluate +$$ +\ldots e^{-iH(5\delta t/2)\delta t}e^{-iH(3\delta t/2)\delta t}e^{-iH(\delta t/2)\delta t}|\psi\rangle. +$$ +This can be improved by using Runge-Kutta type techniques. You can also vary the size of the time step depending on how quickly the Hamiltonian is changing at a particular instant. Some of the quantum techniques for simulation may also be interesting. There have been recent advancements that massively improve the accuracy of those simulations by moving away from the standard Trotterisation.
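The midpoint time-stepping above can be sketched directly with dense matrices (`scipy` assumed available; units with $\hbar=1$):

```python
import numpy as np
from scipy.linalg import expm

def evolve(H_of_t, psi, t, steps):
    # Apply exp(-i H((k+1/2) dt) dt) for each small step, matching the
    # product formula above with H evaluated at the interval midpoint.
    dt = t / steps
    for k in range(steps):
        psi = expm(-1j * H_of_t((k + 0.5) * dt) * dt) @ psi
    return psi
```

For a time-independent $H$ this reproduces $e^{-iHt}|\psi\rangle$ exactly; for a time-dependent $H$ the error shrinks as `steps` grows.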

+ +

There's probably plenty more that could be said, but I guess that's enough to get you started.

+",1837,,1837,,7/30/2018 11:07,7/30/2018 11:07,,,,0,,,,CC BY-SA 4.0 +3901,1,,,7/30/2018 15:07,,4,2003,"

I have run this program -

+ +
# Import the QISKit SDK
+from qiskit import QuantumCircuit, ClassicalRegister, QuantumRegister
+from qiskit import execute, register
+
+# Set your API Token.
+# You can get it from https://quantumexperience.ng.bluemix.net/qx/account,
+# looking for ""Personal Access Token"" section.
+QX_TOKEN = ""....""
+QX_URL = ""https://quantumexperience.ng.bluemix.net/api""
+
+# Authenticate with the IBM Q API in order to use online devices.
+# You need the API Token and the QX URL.
+register(QX_TOKEN, QX_URL)
+
+# Create a Quantum Register with 2 qubits.
+q = QuantumRegister(2)
+# Create a Classical Register with 2 bits.
+c = ClassicalRegister(2)
+# Create a Quantum Circuit
+qc = QuantumCircuit(q, c)
+
+# Add a H gate on qubit 0, putting this qubit in superposition.
+qc.h(q[0])
+# Add a CX (CNOT) gate on control qubit 0 and target qubit 1, putting
+# the qubits in a Bell state.
+qc.cx(q[0], q[1])
+# Add a Measure gate to see the state.
+qc.measure(q, c)
+
+# Compile and run the Quantum Program on a real device backend
+job_exp = execute(qc, 'ibmqx4', shots=1024, max_credits=10)
+result = job_exp.result()
+
+# Show the results
+print(result)
+print(result.get_data())
+
+ +

Output -

+ +
+

COMPLETED {'time': 19.799431085586548, 'counts': {'00': 445, '01': 62, + '10': 67, '11': 450}, 'date': '2018-07-30T14:56:23.330Z'}

+
+ +

But when I ran this, it was not very fast. Is this due to queuing on the machine?

+",4216,,26,,03-12-2019 09:33,04-07-2021 10:49,Comparing run times on IBM Quantum Experience,,3,0,,,,CC BY-SA 4.0 +3902,1,3905,,7/31/2018 1:55,,4,297,"

A bit is a binary digit, typically 0 or 1.

+ +

Until a value is assigned (or a measurement is made) a bit is in a superposition of the entangled binary pair, is it not?

+",2645,,26,,12/23/2018 12:40,12/23/2018 12:40,Are classical bits quantum?,,4,1,,,,CC BY-SA 4.0 +3903,2,,3902,7/31/2018 2:00,,1,,"

A bit is effectively

+ +

$$|\psi\rangle = \alpha|0\rangle + \beta|1\rangle$$

+ +

where either $\alpha = 1, \beta = 0$ (the bit = 0) or $\alpha = 0, \beta = 1$ (the bit = 1).

+ +

So, yes, in the sense I think you mean, a classical bit can be represented with the quantum notation.

+ +

Remember that for quantum mechanics to be a legitimate theory, it must describe all the phenomena and behaviors that classical mechanics explains, which it does. The same is true in this case.

+ +

Note that if you ever refer to a classical bit as quantum, you will probably confuse the heck out of everybody, which is why the distinction is made. Also different gates can apply, etc.

+",91,,,,,7/31/2018 2:00,,,,6,,,,CC BY-SA 4.0 +3904,2,,3902,7/31/2018 3:44,,1,,"

You are correct that if you set $\alpha=0$ or $\alpha=1$, then the information gain upon measurement will be zero. There is no way to be surprised. You need some nontrivial $p$ in the prior distribution to get information gain. If you say the state is $\sqrt{p} \mid 0 \rangle + \sqrt{1-p} \mid 1 \rangle$ instead, then when you measure it in the computational basis you can have some surprise/information. Presumably let $p=\frac{1}{2}$ because you want a full bit, not less. One might play fast and loose between ""less than or equal to 1 bit"" and ""1 bit"" because of implicit assumptions about the probability distribution.
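The information gain can be quantified as the Shannon entropy of the computational-basis outcomes; a small sketch:

```python
import numpy as np

def measurement_entropy(p):
    # Entropy (in bits) of measuring sqrt(p)|0> + sqrt(1-p)|1>
    # in the computational basis.
    probs = np.array([p, 1 - p])
    probs = probs[probs > 0]          # drop zero-probability outcomes
    return float(-(probs * np.log2(probs)).sum())
```

At $p=1/2$ this is a full bit; at $p\in\{0,1\}$ it is zero, matching the no-surprise case.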

+",434,,,,,7/31/2018 3:44,,,,0,,,,CC BY-SA 4.0 +3905,2,,3902,7/31/2018 5:26,,5,,"

A bit, either 0 or 1, can certainly be thought of as a special case of being a qubit. However, that is not to say that anything capable of computing with classical bits is capable of computing with quantum bits.

+ +

If you have a bit, and don’t know if it’s a 0 or a 1, then how do you describe its state? You have to use Bayesian priors. If you have no idea which it is, you assign the two options equal probabilities. And it really is probabilities here. So it’s 50:50 being in 0 or 1. You can’t describe this as a pure quantum state, but you can use a density matrix: +$$\rho=(|0\rangle\langle 0|+|1\rangle\langle 1|)/2.$$ +This formalism also means that if you later learn something about what the bit value might be (perhaps as a result of reading other bits in a computation), you can update those probabilities using conditional probabilities and Bayes’ rule. Note that this also means it’s a subjective description of the state: it reflects your personal knowledge of the state of the bit, while somebody else, who perhaps prepared the bit, already knows what value it has. This is perhaps one way to see why it should be different from the pure state that you proposed, which should be an objective description that everyone would agree on (in that there is no uncertainty in the state; the only uncertainty is induced by the action of the measurement).

+",1837,,55,,7/31/2018 10:24,7/31/2018 10:24,,,,0,,,,CC BY-SA 4.0 +3906,2,,3901,7/31/2018 6:37,,1,,"

The time given in the results is the execution time on the backend. In your example, your quantum circuit took nearly 20 seconds to execute 1024 times (which seems huge for such a short circuit).

+ +

Before these nearly 20 seconds of execution, it is likely that your job had to wait for a few seconds (maybe up to several hours) in the backend queue, where all the jobs submitted but not executed are kept.

+ +

What you can do to estimate the waiting time is to ask the backend how many jobs there are in its queue, but you can't estimate it precisely, as you don't know the content of the jobs queued ahead of yours.

+",1386,,,,,7/31/2018 6:37,,,,0,,,,CC BY-SA 4.0 +3907,2,,3902,7/31/2018 10:38,,4,,"

No, superposition is not the same as uncertainty about an outcome.

+ +

If you throw a coin, the state of the coin is not $|0\rangle+|1\rangle$ before landing. +If you really want to use the formalism of quantum mechanics to describe the uncertainty about the outcome, you have to describe it as a mixture of the two possible outcomes, that is, something like $|0\rangle\!\langle0|+|1\rangle\!\langle1|$. +As also discussed in this other answer, coherent superpositions and mixtures are two entirely different beasts.

+ +

In particular, saying that the state of a coin (classical bit) is $|0\rangle+|1\rangle$ before the measurement implies the possibility of performing coherent operations on it, which is wrong: it is not possible, even in principle, to do something like measuring the state of the coin in the $|+\rangle$ basis.
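The operational difference shows up directly in the statistics of an $X$-basis measurement; a minimal density-matrix sketch:

```python
import numpy as np

plus = np.array([1, 1]) / np.sqrt(2)
rho_superposition = np.outer(plus, plus)   # |+><+| : coherent superposition
rho_mixture = np.eye(2) / 2                # (|0><0| + |1><1|)/2 : classical coin

def prob_plus(rho):
    # Born-rule probability of outcome |+> in an X-basis measurement.
    return float(np.real(plus @ rho @ plus))
```

The superposition gives outcome $|+\rangle$ with certainty, while the mixture stays 50/50 in every basis, which is exactly the behaviour a classical coin cannot escape.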

+ +

In other words, the crucial difference is that even if you do not know in what state a bit is, it is still the case that the bit is in some definite state (which means no interference), you just don't know which one. +On the other hand, the ""uncertainty"" associated with a qubit does not mean that there is something that we do not know about the state: the qubit can be fully characterised, and yet lead to uncertainty in (some) measurement outcomes.

+",55,,55,,7/31/2018 11:21,7/31/2018 11:21,,,,0,,,,CC BY-SA 4.0 +3908,1,3909,,7/31/2018 12:05,,8,1855,"

I wish to have a ""reset"" gate. This gate would have the effect of bringing a qubit to the $\mid0\rangle$ state.
+Clearly, such a gate is not unitary (and so I'm unable to find any reliable implementation in terms of universal gates).

+ +

Now for my particular needs, I need this ability to reset a qubit or a quantum register to that state so users can always start from $\mid0\rangle$. I'm making a small programming language that transpiles to QASM, and when a function is exited, I want all local (quantum) variables (qubits) reset to $\mid0\rangle$ so they can be reused. The QASM reset instruction does not work on the real processor.

+ +

I think that something to this effect may be achieved with quantum phase estimation but I'm wondering if there is another way.

+",2417,,26,,08-01-2018 15:19,08-01-2018 15:19,"Possibility of a ""reset"" quantum gate",,2,4,,,,CC BY-SA 4.0 +3909,2,,3908,7/31/2018 12:21,,6,,"

One way is simply to measure the qubit in the standard, $Z$, basis. If you get the answer 0, then you've got the state you want. Otherwise, you apply a bit-flip to it.
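The measure-and-flip recipe, sketched for a single qubit in a state-vector simulation (the random draw stands in for the measurement outcome):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]])

def reset_qubit(psi, rng):
    # Measure in the Z basis; on outcome 1 apply a bit flip,
    # so the post-measurement state is always |0>.
    if rng.random() < abs(psi[0]) ** 2:
        return np.array([1.0, 0.0])            # outcome 0: already |0>
    return X @ np.array([0.0, 1.0])            # outcome 1: flip |1> to |0>
```

Either branch lands on $|0\rangle$, which is the point of the construction.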

+ +

Indeed, if you want to implement a non-unitary operation, you need some sort of measurement operation somewhere, whether that's a direct measurement, or the implementation of a CP map or POVM (but for these options, you need to introduce ancillas of a fixed state, which rather negates the point). Or you could use noise in the system, but you are unlikely to have sufficient control of it - it's noise after all! Of course, none of these options just reset a single qubit; anything that qubit is entangled with is also affected, but that's kind of in the definition of ""reset"" in the quantum context.

+ +

The only other option is to uncompute, but this is not a generic option because, generically you have to uncompute the entire computation to reset even a single qubit, and that resets everything. Except it doesn't work perfectly because of errors. You would be better starting a new computation. There are specific scenarios where an ancilla qubit is used and it can be uncomputed, but this is typically built into the algorithm because the uncomputation step is important for getting rid of some unwanted entanglement that would otherwise appear.

+",1837,,1837,,08-01-2018 07:11,08-01-2018 07:11,,,,3,,,,CC BY-SA 4.0 +3910,2,,3908,7/31/2018 12:23,,3,,"

I do not think that you can achieve this with a single gate, but the cool thing about quantum gates and unitary transformations is that they are reversible; therefore, when implementing a function in your quantum circuit, all you need to do is 'uncompute' it by reversing the gates that you used.

+ +

Here I made a random circuit and reversed it; you can see that you are back to the state $\vert 0\rangle$.

+ +

This would mean that you have to ""reset"" in a specific way for each function, though.

+",2648,,,,,7/31/2018 12:23,,,,4,,,,CC BY-SA 4.0 +3911,2,,3879,7/31/2018 13:19,,9,,"

The insight that suggests that sparse matrices are useful goes along the lines of: for any $H$, we can decompose it in terms of a set of $H_i$ whose individual components all commute (making diagonalisation straightforward),
+$$
+H=\sum_{i=1}^mH_i.
+$$
+If the matrix is sparse, then you shouldn't need too many distinct $H_i$. Then you can simulate the Hamiltonian evolution
+$$
+e^{-iHt}\approx\prod_{j=1}^Ne^{-iH_m\delta t}e^{-iH_{m-1}\delta t}\ldots e^{-iH_{1}\delta t},
+$$
+where $t=N\delta t$. For example, in your case, you can have
+$$
+H_1=\frac14 X\otimes(18\mathbb{I}-6Z\otimes Z-4Z\otimes\mathbb{I}) \\
+H_2=\frac14(X\otimes(11\mathbb{I}+5Z)\otimes X+Y\otimes(11\mathbb{I}+5Z)\otimes Y)\\
+H_3=\frac14(11X\otimes X-Y\otimes Y)\otimes(\mathbb{I}-Z)
+$$
+(the 3 terms corresponding to the fact that it's a 3-sparse Hamiltonian). I believe there's a strategy here: you go through all the non-zero matrix elements of your Hamiltonian and group them so that if I write their coordinates as $(i,j)$ (and I always include their complex conjugate pair), I continue adding other elements to my set $(k,l)$ provided neither $k$ nor $l$ equals $i$ or $j$. This would mean for an $m$-sparse Hamiltonian, you have $m$ different $H_i$.

+ +

The problem is this doesn't necessarily work this straightforwardly in practice. For one thing, there's still exponentially many matrix elements that you have to go through, but that's always going to be the case with the way you're setting it up.

+ +

The way that people get around this is they set up an oracle. One possible oracle is essentially a function $f(j,l)$ which returns the position and value of the $l^{th}$ non-zero entry on the $j^{th}$ row. This can be built into a full on quantum algorithm. There are a few papers on this topic (none of which I've completely understood yet). For example, here and here. Let me try to give a crude description of the way they work.

+ +

The first step is to decompose the Hamiltonian as a set of unitaries, multiplied by positive scale factors $\alpha_i$: +$$ +H=\sum_i\alpha_iU_i +$$ +For simplicity, let's assume $H=U_1+\alpha U_2$. It might be assumed that you're given this decomposition. One then defines an operation (constructed out of controlled-$U_1$ and controlled-$U_2$) that implements $V=|0\rangle\langle 0|\otimes U_1+|1\rangle\langle 1|\otimes U_2$. If we input a particular state $|0\rangle+\sqrt{\alpha}|1\rangle$ (up to normalisation) on the control qubit, apply $V$, then measure the control qubit, post-selecting on it being in the state $|0\rangle+\sqrt{\alpha}|1\rangle$, then if the post-selection succeeds, we have implemented $U_1+\alpha U_2$, which happens with a probability at least $(1-\alpha)^2/(1+\alpha)^2$. You can do exactly the same with multiple terms, and indeed with exponentials of Hamiltonians (think about the series expansion), although in practice some better series expansions are used based on Bessel functions.
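The post-selection step described above can be verified with a toy dense-matrix sketch (my own, ignoring how $V$ would be compiled from controlled gates):

```python
import numpy as np

def lcu_apply(U1, U2, alpha, psi):
    # Post-selected application of (U1 + alpha*U2) to psi, up to the
    # normalisation 1/(1 + alpha), using one ancilla qubit.
    a = np.array([1.0, np.sqrt(alpha)]) / np.sqrt(1 + alpha)
    V = np.kron(np.diag([1.0, 0.0]), U1) + np.kron(np.diag([0.0, 1.0]), U2)
    out = V @ np.kron(a, psi)
    d = len(psi)
    return a[0] * out[:d] + a[1] * out[d:]   # project the ancilla back onto |a>
```

With $U_1=\mathbb{I}$, $U_2=Z$, $\alpha=1$ this builds the projector $(\mathbb{I}+Z)/2$ onto $|0\rangle$; on input $|1\rangle$ the output has zero norm, i.e. the post-selection fails with certainty.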

+",1837,,1837,,7/31/2018 13:45,7/31/2018 13:45,,,,3,,,,CC BY-SA 4.0 +3912,2,,3901,08-01-2018 00:13,,7,,"

The time that you see in the result data structure is recorded by the device itself, so it is the running time of your experiment. It does not include the time spent processing your circuit in Qiskit, or the time spent by your job in the queue.

+ +

That being said, here is a rough breakdown of this time (ballpark durations):

+ +
    +
  • 1) Loading the experiment into the instruments that create the pulses (~ 15s)
  • +
  • 2) 1024 repetitions (shots) of running calibration pulses & your circuit (~ 5s) + +
      +
    • a) Reset qubits (relaxation) + calibration: ~ 4ms
    • +
    • b) Reset qubits (relaxation) + your circuit: ~ 1ms
    • +
  • +
+ +

Which adds up to the total experiment time you are seeing.
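Spelled out as arithmetic (using the ballpark numbers from the list above):

```python
load_time = 15.0                    # loading the experiment into the instruments (s)
shots = 1024
per_shot = 0.004 + 0.001            # reset+calibration and reset+circuit (s)
total = load_time + shots * per_shot
# total is roughly 20 s, consistent with the ~19.8 s the device reported
```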

+",2503,,,,,08-01-2018 00:13,,,,1,,,,CC BY-SA 4.0 +3913,1,,,08-01-2018 06:10,,0,243,"

We are interested in solving an optimization problem, and specifically, the design of efficient yacht hulls.

+ +

In designing an efficient yacht hull, one must consider water-hull resistance, wave generation, near-field and far-field wake, side pressure, velocity prediction, and stability. The design is typically done with simple computational fluid dynamics. Sail design complicates the problem.

+ +

Thus, there are hundreds of equations to optimize to find a good yacht.

+ +

Is this a question which today's DWave can develop answers?

+ +

I read that D-Wave gives too many answers for equation systems. My boat designs will be optimized for one speed and free curves and they will be very beautiful.

+ +

I want to get a good computation in short time.

+ +

What do you think, what would be the time taken to do this using a DWave machine?

+",4250,,9482,,4/30/2020 21:00,4/30/2020 21:02,How would D-Wave be used for complex optimization problems?,,1,1,,08-04-2018 09:04,,CC BY-SA 4.0 +3914,2,,1869,08-01-2018 10:09,,1,,"

The problem here is that $U_f$ as you've defined it is not unitary. To see this, note that the overlaps between states are preserved under the action of a unitary: $\langle\psi|\phi\rangle=\langle\psi|U^\dagger U|\phi\rangle$, while for a constant function $f$ you have $U_f|x\rangle=U_f|x'\rangle$ even if $x\neq x'$ (thereby changing the overlap from 0 to 1).

+ +

The way that you set this up as a unitary is with some additional qubits. If $f:x\mapsto y$ where $x\in\{0,1\}^n$ and $y\in\{0,1\}^m$, then you define $U_f|x\rangle|y\rangle=|x\rangle|y\oplus f(x)\rangle$. This is definitely unitary because +$$ +U_fU_f|x\rangle|y\rangle=|x\rangle|y\oplus f(x)\oplus f(x)\rangle=|x\rangle|y\rangle, +$$ +so $U_f$ is its own inverse.
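This can be checked numerically by building $U_f$ as a permutation matrix; a sketch:

```python
import numpy as np

def oracle_matrix(f, n, m):
    # Permutation matrix for U_f |x>|y> = |x>|y XOR f(x)>,
    # with x on n qubits and y on m qubits.
    dim = 2 ** (n + m)
    U = np.zeros((dim, dim))
    for x in range(2 ** n):
        for y in range(2 ** m):
            U[(x << m) | (y ^ f(x)), (x << m) | y] = 1
    return U
```

Even for a constant $f$ this is unitary, because the extra register keeps distinct inputs distinct.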

+ +

Of course, you could define a quantum operation that always sets everything to 0. It's basically a measurement (in the $Z$ basis) followed by a compensating action depending on the measurement outcome. But then you can't use linearity as you did in your analysis, because you have to follow through the different measurement outcomes separately.

+",1837,,,,,08-01-2018 10:09,,,,0,,,,CC BY-SA 4.0 +3915,2,,3913,08-01-2018 13:07,,7,,"

So I do not know much about yacht design, but having played a little bit with a D-Wave machine, I would suggest seeing if you can model your problem as a Quadratic unconstrained binary optimization. +See on Wikipedia.

+ +

That is, your variables must be binary, and a D-Wave machine will try to find a minimum of your QUBO. It will return many answers, in the sense that it tries multiple times, but the goal is to find one minimum, or to sample near it, under this minimization objective. If you ask for 1000 tries, it may give you 100 different solutions, some with higher frequencies than others.

+ +

Now about the time to do it, it will depend on how the problem can fit in the hardware. You have to take into account the number of variables and their connectivity (which variables share a coefficient in the QUBO) and finally see if it fits with the number of qubits and their connectivity on D-Wave.

+ +

Assuming it fits, the machine will give you answers quickly. Depending on the number of runs you request and the timing parameters you pass for each run, it will be a matter of seconds.

+ +

As an example of a QUBO, here is the simplest one: +$x_0 + x_1 -2x_1x_0$, where

+ +

$x_1$ and $x_0$ are my binary variables (0 or 1 values). +Here there are two minimizing solutions: (0,0) and (1,1).

+ +

To solve this on the machine, you generally pass a file/dictionary/table that looks like this and represents your QUBO problem:

+ +
0 0 1 (coefficient of x0 with itself)
+1 1 1 (coefficient of x1 with itself) 
+0 1 -2 (coefficient between x0 and x1) 
+
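For tiny instances like this one, the minima can be confirmed by classical brute force (a sketch of checking the QUBO above):

```python
from itertools import product

def qubo_value(Q, x):
    # Evaluate sum over pairs (i, j) of Q[i, j] * x_i * x_j.
    return sum(c * x[i] * x[j] for (i, j), c in Q.items())

Q = {(0, 0): 1, (1, 1): 1, (0, 1): -2}      # x0 + x1 - 2*x0*x1
best = min(qubo_value(Q, y) for y in product([0, 1], repeat=2))
minima = [x for x in product([0, 1], repeat=2) if qubo_value(Q, x) == best]
```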
+",4127,,9006,,4/30/2020 21:02,4/30/2020 21:02,,,,14,,,,CC BY-SA 4.0 +3917,2,,2717,08-01-2018 22:27,,4,,"

Background

+ +

Often, in quantum optics, the Heisenberg picture is used, where instead of considering equations of motion of states, equations of motions of operators are looked at instead. When considering creation/annihilation operators, this is often considerably easier as the matrices that determine the evolution (assuming it can be written in terms of matrices) are, for one, finite.

+ +

The Heisenberg equations of motions are calculated using $$\frac{dA}{dt} = \frac i\hbar \left[H, A\right] + \frac{\partial A}{\partial t},$$ for an operator $A$ evolving under a Hamiltonian $H$.

+ +

Here, the operator $a_j \left(a_j^\dagger\right)$ is the annihilation (creation) operator for spatial mode $j$. For a single mode, this allows for an effective Hamiltonian (acting on the operators) to be written as $$i\frac{d}{dt}\begin{pmatrix}a \\ a^\dagger\end{pmatrix} = H_{\text{eff}}\begin{pmatrix}a \\ a^\dagger\end{pmatrix}.$$ This naturally extends to writing their transformation as $$\begin{pmatrix}b \\ b^\dagger\end{pmatrix} = M\begin{pmatrix}a \\ a^\dagger\end{pmatrix}$$ for input modes $a \left(a^\dagger\right)$ and output modes $b \left(b^\dagger\right)$.

+ +

When it exists, this transformation matrix $M$ can be calculated using $U^\dagger A_jU = \sum_{k}M_{jk}A_k$ for unitary evolution $U$. For the Unitaries in the question, this gives:

+ +

Transformations

+ +

Displacement +$$D_j\left(\alpha\right):\begin{pmatrix}b_j\\b_j^\dagger\\I\end{pmatrix} = \begin{pmatrix}1 && 0 && \alpha\\0&&1&&\alpha^*\\0&&0&&1\end{pmatrix} \begin{pmatrix}a_j\\a_j^\dagger\\I\end{pmatrix}$$

+ +

Rotation +$$R_j\left(\phi\right):\begin{pmatrix}b_j\\b_j^\dagger\end{pmatrix} = \begin{pmatrix}e^{-i\phi} && 0\\0&&e^{i\phi}\end{pmatrix} \begin{pmatrix}a_j\\a_j^\dagger\end{pmatrix}$$

+ +

Squeezing +$$S_j\left(\xi=re^{i\theta}\right):\begin{pmatrix}b_j\\b_j^\dagger\end{pmatrix} = \begin{pmatrix}\cosh r && -e^{i\theta}\sinh r\\-e^{-i\theta}\sinh r&&\cosh r\end{pmatrix} \begin{pmatrix}a_j\\a_j^\dagger\end{pmatrix}$$

+ +

Beamsplitter +$$B_{jk}\left(\zeta = te^{i\varphi}\right):\begin{pmatrix}b_j\\b_k\\b_j^\dagger\\b_k^\dagger\end{pmatrix} = \begin{pmatrix}t && re^{-i\varphi}&&0&& 0\\re^{i\varphi}&&t&&0&&0\\0&&0&&t&&re^{i\varphi}\\0&&0&&re^{-i\varphi}&&t\end{pmatrix} \begin{pmatrix}a_j\\a_k\\a_j^\dagger\\a_k^\dagger\end{pmatrix},$$ where $r=\cos\left|\zeta\right|$.
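These Bogoliubov matrices preserve the commutator $[b, b^\dagger]=1$; for the squeezing transformation this shows up as unit determinant, $\cosh^2 r - \sinh^2 r = 1$. A quick numerical check (my own sketch):

```python
import numpy as np

def squeeze_matrix(r, theta):
    # Bogoliubov matrix acting on (a, a_dagger) for S(xi = r e^{i theta}).
    sh = np.sinh(r)
    return np.array([[np.cosh(r), -np.exp(1j * theta) * sh],
                     [-np.exp(-1j * theta) * sh, np.cosh(r)]])
```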

+ +

Cubic Phase

+ +

Unfortunately, this is too nonlinear to write in the above way in matrix form. As $x = \frac{1}{\sqrt 2}\left(a+a^\dagger\right)$, even to first order, $V^\dagger a^\dagger V$ will include terms such as $\left[a^3, a^\dagger\right] = 3a^2$, which cannot be written in terms of $\alpha a^\dagger+\beta a+\gamma$.

+",23,,,,,08-01-2018 22:27,,,,8,,,,CC BY-SA 4.0 +3918,1,3924,,08-01-2018 22:36,,5,472,"

It seems that a coin flip game is a decent metaphor for a 2-level system. Until 1 of the 2 players picks heads or tails, even if the coin has already been flipped, the win/loss wave form has not yet collapsed.

+ +

Would rock paper scissors be a good metaphor for qutrits? Where the number of players corresponds w/ the number of qutrits (eg. $3^n$ possible states).

+ +

Would a standard pack of 52 cards be a good metaphor for a 52-level quantum system (the game being guessing correctly a card selected from the deck at random)?

+ +

Particularly interested in game-based metaphors because of the easy correlation to combinatorial game theory & computational complexity.

+",2645,,2645,,08-01-2018 23:24,08-07-2018 15:22,Good metaphors for n-level quantum systems,,6,0,,,,CC BY-SA 4.0 +3919,2,,3918,08-01-2018 23:15,,2,,"

Rock paper scissors seems like a good one. +As for the pack of cards, I do not see the collapse metaphor here: while a player has not shown their cards, the cards are already set even if you do not see them. But I guess it is a point of view.

+ +

Maybe just pick an MMORPG with 52 monsters appearing randomly.

+",4127,,,,,08-01-2018 23:15,,,,2,,,,CC BY-SA 4.0 +3920,1,3921,,08-02-2018 02:11,,8,226,"

I'm a computer science major who's really keen on physics and quantum mechanics. I have started learning about Q# and D-Wave, but I just wanted to know if it's possible to test quantum mechanical theories using quantum computers.

+ +

If so, then what different things should I look into? For example, Q# allows us about 30 qubits for free development. What kind of simulations can I do with that many qubits?

+",4259,,26,,03-12-2019 09:11,03-12-2019 09:11,Can we perform quantum mechanical simulations using a quantum computer?,,2,0,,,,CC BY-SA 4.0 +3921,2,,3920,08-02-2018 02:20,,4,,"

What do you mean by ""Quantum Mechanical Simulations""?

+ +

One of the primary motivations in the early history of quantum computing was a statement from Richard Feynman that a quantum computer would be able to effectively simulate quantum systems. To that end, a lot of the near-term quantum programs people are trying to run (and have run) are simulations of ground states of atoms and molecules. These are very resource-intensive classically, but IBM has done this with good success on small, highly symmetric molecules using their current quantum computers.

+ +

On the other hand, if you are wondering whether we can test quantum mechanics as a theory using a quantum computer, things like Bell's inequalities can be tested. Violating such an inequality is a proof that a system is quantum mechanical, as the inequality can only be broken by a system exploiting entanglement. The article linked has a good explanation, and goes into some of the experimental verifications which have already been done, but such a test is one of the ways to ensure that a given black-box computer is quantum in nature.

+",3056,,,,,08-02-2018 02:20,,,,5,,,,CC BY-SA 4.0 +3922,2,,3918,08-02-2018 02:22,,2,,"

A rolling n-sided die would be a good analogy that follows the coin example very closely. Until the die settles you can think of it as having not ""collapsed"", and the die can have whatever number of sides is desired (at least in your mind; if you're trying to make a physical example, that might be difficult).

+",3056,,,,,08-02-2018 02:22,,,,1,,,,CC BY-SA 4.0 +3923,2,,3918,08-02-2018 04:31,,8,,"

There is always a difference between a quantum system and a classical metaphor. If a system is a qubit in a pure state, then there always exists a measurement basis (or alternatively a proper unitary gate for the standard measurement basis) such that the measurement outcome is 100% predictable, and a measurement basis with measurement outcome 50%-50%. You can't demonstrate this feature using a classical metaphor - in classical physics random is random and deterministic is deterministic.

+",2105,,,,,08-02-2018 04:31,,,,0,,,,CC BY-SA 4.0 +3924,2,,3918,08-02-2018 15:57,,4,,"

Your metaphor can be chosen as $N-1$ identical coins, such that the outcome state vector corresponds to the sum of the heads. Thus we wind up in the state $|k\rangle$ when we have $k$ heads and $N-1-k$ tails.

+ +

In this approach you can actually associate your metaphor with a classical phase space, as a generalization of the association of a qubit with the Bloch sphere whose generator can be thought in the coin case as the direction of the normal to the coin's face.

+ +

In the multiple coin case, we have $N-1$ such directions. But since in our Hilbert space we don't mind the order of the coins, only their summed result, in the classical phase space we should not distinguish states in which the directions of the normals to the coins' faces are exchanged.

+ +

What I just described to you in words is actually the Majorana representation (some call it the Majorana star representation), which is based on the isomorphism of the symmetrized tensor product of $N-1$ spheres to the complex projective space $CP^{N-1}$.

+ +

$$\otimes_{\mathrm{sym}} ^{N-1} S^2 \cong CP^{N-1}$$

+ +

The geometric quantization of this complex projective space is the $N-$ level system. There is some renewed interest in this representation, partially motivated by holonomic quantum computation, please see for example Liu and Fu.

+ +

Now, the outcome of the coin flipping in the case of a single qubit can be described as a measurement of the angular momentum component of the coin in the $z$ direction $j_z$. It is not hard to see that the corresponding operator in the multiple coin flip is the sum of the individual angular momenta:

+ +

$$J_z = \sum_1^{N-1} j^{(i)}_z$$

+ +

(The above equation is just standard shorthand used by physicists since the operators act on different components in the tensor product Hilbert space).

+ +

(In the $N\times N$ matrix representation, this operator can be chosen as: $J_z = \mathrm{diag}[N-1, N-3, .,.,., -N+1]$).
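The operator $J_z$ can be built explicitly as a sum of Kronecker products. Over the full $2^{N-1}$-dimensional space its eigenvalues are exactly $N-1, N-3, \ldots, -N+1$ (with binomial multiplicities), and the $N$ distinct values reproduce the diagonal matrix above on the symmetric subspace. A sketch, in units where each coin carries spin $\pm 1$:

```python
import numpy as np
from functools import reduce

Z, I2 = np.diag([1.0, -1.0]), np.eye(2)

def total_Jz(n):
    # Sum over coins j of I x ... x Z (at slot j) x ... x I.
    return sum(reduce(np.kron, [Z if k == j else I2 for k in range(n)])
               for j in range(n))
```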

+ +

This representation of the $N-$ qubit system has a further analogy. As in the case of a single coin or qubit, the distribution function of the operator $j_z$ is Bernoullian for any state (density matrix) in which the system is prepared; the distribution function of $J_z$ in the multiple coin case is Binomial, for any choice of the density matrix of the system. Thus in both cases this observable returns the classical distribution function.

+",4263,,,,,08-02-2018 15:57,,,,0,,,,CC BY-SA 4.0 +3925,2,,3920,08-02-2018 18:48,,6,,"

A separate note on using simulators for this (as opposed to using an actual quantum computer).

+ +

Simulators, like the one that ships with Q#, are built to simulate quantum mechanical theories as we understand them now. This means that any experiment you run on a simulator will behave exactly as the theory says (well, unless the simulator has a bug in the code), but it doesn't mean that this experiment confirms the theory - it only means that it's a good simulation/illustration of the theory.

+",2879,,,,,08-02-2018 18:48,,,,2,,,,CC BY-SA 4.0 +3927,2,,3918,08-02-2018 20:40,,3,,"

A coin is an extremely bad and highly misleading analogy for a qubit. You shouldn't use it by any means.

+ +

Yet, if you want an equally bad and misleading analogy for a qu-$d$-it, you should use something which is random and has $d$ possible outcomes. For $d=3$, this might be rock-paper-scissors. On the other hand, a deck of cards with $52$ cards can have $52!$ possibilities, so it is an equally bad analogy for a qu-$(52!)$-it.

+ +
+ +

It has been suggested to add an explanation why I think this is a horrible analogy. The key point is that it pretends that quantum mechanics is merely classical randomness -- something which can be easily understood with our classical intuition -- thereby implying that all the talk about quantum mechanics being special is just talk. Indeed, the coin analogy already fails for a single qubit, when one performs measurements in more than one basis. This is for instance explained in DaftWullie's answer. Now one could argue that his answer also provides a way to model this with coins, but (1) these are coins which are rigged in a very weird way, and (2) it is still incomplete -- I just need to toss in measurement in yet another basis to make the whole description break down, and to yield an even more complex pattern of rigged coins (even worse, those coins would not get tossed after looking at other coins as in DW's answer, but they would get tossed in a biased way depending on which measurement I did).

+ +

Of course, since quantum theory is a theory about measurements -- which have classical inputs, i.e. measurement settings, and classical outputs, i.e. measurement outcomes -- we can always model this with classical objects which follow some odd distribution. However, the point is exactly that this distribution cannot be modeled by coins any more in any even remotely reasonable way (even more so once we consider two spatially separated qubits).

+",491,,491,,08-07-2018 15:22,08-07-2018 15:22,,,,6,,,,CC BY-SA 4.0 +3928,2,,3863,08-03-2018 06:42,,12,,"

A conventional Hamiltonian is Hermitian. Hence, if it contains a non-Hermitian term, it must either also contain its Hermitian conjugate as another term, or have 0 weight. In this particular case, since $Z\otimes X\otimes Y$ is Hermitian itself, the coefficient would have to be 0. So, if you're talking about conventional Hamiltonians, you've probably made a mistake in your calculation. Note that if the Hermitian conjugate of the term is not present, you cannot simply fix things by adding it in; it will give you a completely different result.

+ +

On the other hand, you might be wanting to implement a non-Hermitian Hamiltonian. These things do exist, often for the description of noise processes, but are not nearly so widespread. You need to explicitly include the ""non-Hermitian"" terminology, otherwise everyone will just think that what you're doing is wrong because it's not Hermitian, and a Hamiltonian should be Hermitian. I'm not overly familiar with what capabilities the various simulators provide, but I'd be surprised if they have non-Hermiticity built in.

+ +

However, you can simulate it, at the cost of non-deterministic implementation. There will be more sophisticated methods than this (see the links in this answer), but let me describe a particularly simple one: I'm going to assume there's only one non-Hermitian component, which is $i\times$(a tensor product of Paulis). I'll call this tensor product of Paulis $K$. The rest of the Hamiltonian is $H$. You want to create the evolution +$$ +e^{-iHt+Kt} +$$ +We start by Trotterising the evolution, +$$ +e^{-iHt+Kt}= \prod_{i=1}^Ne^{-iH\delta t+K\delta t} +$$ +where $N\delta t=t$. Now we work on simulating an individual term $e^{-iH\delta t+K\delta t}\approx e^{-iH\delta t}e^{K\delta t}$ (which becomes more accurate at large $N$). You already know how to deal with the Hermitian part, so focus on $$e^{K\delta t}=\cosh(\delta t)\mathbb{I}+\sinh(\delta t)K.$$

+ +
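Since $K$ is a tensor product of Paulis, $K^2=\mathbb{I}$, and the identity above is just the even/odd split of the exponential series. A quick numerical sanity check (my own NumPy/SciPy illustration, not part of the original answer):

```python
import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]])
Z = np.diag([1.0, -1.0])
K = np.kron(X, Z)          # a tensor product of Paulis, so K @ K = I
dt = 0.1

lhs = expm(dt * K)                                   # direct matrix exponential
rhs = np.cosh(dt) * np.eye(4) + np.sinh(dt) * K      # cosh/sinh expansion
print(np.allclose(lhs, rhs))  # True
```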

We introduce an ancilla qubit in the state $|\psi\rangle=\alpha|0\rangle+\beta|1\rangle$, and we use this as the control qubit in a controlled-$K$ gate. Then we measure the ancilla in the $\{|\psi\rangle,|\psi^\perp\rangle\}$ basis (where $\langle\psi|\psi^\perp\rangle=0$). If the outcome is $|\psi\rangle$, then on the target qubits we have implemented the operation $|\alpha|^2\mathbb{I}+|\beta|^2K$, up to normalisation. So, if you fix $(1-|\alpha|^2)/|\alpha|^2=\tanh(\delta t)$, you have perfectly implemented that operation. If the measurement fails, then it's up to you whether you want to try to recover (this may well not be possible) or start again.

+",1837,,1837,,08-03-2018 06:47,08-03-2018 06:47,,,,0,,,,CC BY-SA 4.0 +3929,1,3931,,08-03-2018 15:06,,6,611,"

I’m trying to calculate the probability amplitudes for this circuit: +

+ +

My Octave code is:

+ +
sys = kron([1; 0], [1;0], [1;0])
+h = 1/sqrt(2) * [1 1; 1 -1];
+c = [1 0 0 0; 0 1 0 0; 0 0 0 1; 0 0 1 0];
+op1 = kron(h, eye(2), eye(2));
+op2 = kron(c, eye(2));
+op3 = kron(eye(2), c);
+op4 = kron(h, eye(2), eye(2));
+op4*op3*op2*op1 * sys
+
+ +

The output is:

+ +
ans =
+0.50000
+0.00000
+0.00000
+0.50000
+0.50000
+0.00000
+0.00000
+-0.50000
+
+ +

This differs from the results the quantum circuit simulator gives me, where have I gone wrong?

+",4275,,26,,12/23/2018 13:19,12/23/2018 13:19,Incorrectly Calculating Probability Amplitudes for 3-qbit Circuit,,2,0,,,,CC BY-SA 4.0 +3930,2,,3929,08-03-2018 15:41,,3,,"

The output you've stated there appears to be correct. The Hadamard produces +$$ +|000\rangle\mapsto\frac{1}{\sqrt{2}}(|000\rangle+|100\rangle). +$$ +Then, the two controlled-nots give +$$ +\mapsto\frac{1}{\sqrt{2}}(|000\rangle+|111\rangle). +$$ +The final Hadamard then yields +$$ +\mapsto\frac{1}{2}((|0\rangle+|1\rangle)|00\rangle+(|0\rangle-|1\rangle)|11\rangle). +$$ +This is the same as $\frac12(|000\rangle+|011\rangle+|100\rangle-|111\rangle)$.

+ +

The problem is presumably with the quantum simulator, or your interpretation of it, but since you don’t specify what simulator you’re using, it’s difficult to know...

+",1837,,1837,,08-04-2018 05:09,08-04-2018 05:09,,,,0,,,,CC BY-SA 4.0 +3931,2,,3929,08-03-2018 16:31,,5,,"

You're getting the same output as Quirk, just with a different bit ordering convention for the kets.

+ +

Quirk considers the top qubit to be the ""least significant"" qubit (i.e. if you count 000, 001, 010, ... then it refers to the rightmost bit). So if you apply a Hadamard gate to the top qubit of a three-qubit circuit in Quirk you get the state |000> + |001>.

+ +

In your code you are using the opposite convention, and putting the H as the first argument to kron instead of the last argument, so you would get |000> + |100> instead.

+ +
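The two orderings are easy to see side by side in NumPy (an illustration of mine, not part of the original answer):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I = np.eye(2)
psi = np.zeros(8)
psi[0] = 1.0  # |000>

top_last = np.kron(np.kron(I, I), H) @ psi    # Quirk-style: top qubit is least significant
top_first = np.kron(np.kron(H, I), I) @ psi   # the question's Octave convention

print(np.nonzero(top_last)[0])   # [0 1] -> |000> + |001>
print(np.nonzero(top_first)[0])  # [0 4] -> |000> + |100>
```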

To get comparable results you just need to vertically mirror the Quirk circuit, e.g. by throwing in a swap gate:

+ +

+",119,,,,,08-03-2018 16:31,,,,2,,,,CC BY-SA 4.0 +3932,1,3950,,08-03-2018 22:03,,17,986,"

Officials in Rubik's cube tournaments have used two different ways of scrambling a cube. Presently, they break a cube apart and reassemble the cubies in a random order $\pi\in G$ of the Rubik's cube group $G$. Previously, they would apply a random sequence $g$ of Singmaster moves $\langle U,D, F, B, L, R\rangle$.

+ +

However, the length $t$ of the word $g$ - the number of random moves needed in order to fully scramble the cube such that each of the $\Vert G\Vert=43,252,003,274,489,856,000$ permutations is roughly equally likely to occur - is presently unknown, but must be at least $20$. This length $t$ can be called the mixing time of a random walk on the Cayley graph of the Rubik's cube group generated by the Singmaster moves $\langle U,D, F, B, L, R\rangle$.

+ +
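To make the notion of a mixing time concrete, here is a purely classical toy computation (my own illustration, not part of the question): the same kind of random walk, but on the Cayley graph of $S_3$ with a transposition and a 3-cycle as generators, tracking the total-variation distance to the uniform distribution:

```python
import itertools
import numpy as np

perms = list(itertools.permutations(range(3)))  # the 6 elements of S_3
index = {p: i for i, p in enumerate(perms)}

def compose(p, q):
    # apply q first, then p
    return tuple(p[q[i]] for i in range(3))

gens = [(1, 0, 2), (1, 2, 0)]  # a transposition and a 3-cycle
n = len(perms)

# column-stochastic transition matrix of the uniform random walk
P = np.zeros((n, n))
for i, p in enumerate(perms):
    for g in gens:
        P[index[compose(g, p)], i] += 1.0 / len(gens)

dist = np.zeros(n)
dist[index[(0, 1, 2)]] = 1.0   # start at the identity ("solved") state
for _ in range(20):
    dist = P @ dist
tv = 0.5 * np.abs(dist - 1.0 / n).sum()
print(tv)  # a tiny number: the walk has essentially mixed after 20 steps
```

The distance decays geometrically, and the step count at which it drops below a chosen threshold is the (classical) mixing time; for the Rubik's cube group the same computation is hopeless classically because the transition matrix has $\Vert G\Vert\approx 4.3\times10^{19}$ rows.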
+

Would a quantum computer have any advantages to determining the mixing time $t$ of the Rubik's cube group?

+
+ +

I think we can have some clever sequence of Hadamard gates to create a register $\vert A \rangle$ as a uniform superposition over all $\Vert G\Vert$ such configurations; thus applying any sequence of Singmaster moves to $\vert A \rangle$ doesn't change $\vert A \rangle$.

+ +

If we have a guess $t'$ as to what the mixing time $t$ is, we can also create another register $\vert B \rangle$ as a uniform superposition of all Singmaster words of length $t'$, and conditionally apply each such word to a solved state $\vert A'\rangle$, to hopefully get a state $\vert B\rangle \vert A\rangle$ such that, if we measure $\vert A \rangle$, each of the $\Vert G \Vert$ configurations are equally likely to be measured. If $t'\lt t$, then we won't have walked along the Cayley graph of $G$ for long enough, and if we were to measure $\vert A \rangle$, configurations that are ""closer"" to the solved state would be more likely. Some clever Fourier-like transform on $\vert B \rangle$ might be able to measure how uniformly distributed $\vert A \rangle$ is.

+ +

To me this feels like something a quantum computer may be good at. For example, if $\vert A \rangle$ hasn't been uniformly mixed by all of the words in $\vert B\rangle$, then some configurations are more likely than others, e.g. $\vert A \rangle$ is more ""constant""; whereas if $\vert A \rangle$ has been fully mixed by all of the walks, then $\vert A \rangle$ is more ""balanced"". But my intuition about both quantum algorithms and Markov chains is not strong enough to get very far.

+ +
+ +

EDIT

+ +

Contrast this question with the quantum knot verification problem.

+ +

In the quantum knot verification, a merchant is given a quantum coin as a state $\vert K \rangle$ of all knots that have a particular invariant. In order to verify the quantum coin, she applies a Markov chain $M$ to transition $\vert K \rangle$ to itself (if it's a valid coin). She must apply this Markov chain and measure the result at least $t$ times, but otherwise she has no way to construct $\vert K \rangle$ on her own (otherwise she could forge the coin). So if she's given a valid coin, she's given a state that she can't produce on her own, along with a Markov chain as a matrix $M$, and she presumably knows the mixing time $t$; she's required to test that $\vert K \rangle$ is valid.

+ +

In the present question, it's probably pretty easy to generate $\vert RC \rangle$ of all Rubik's cube permutations. The quantum circuit corresponding to the Markov chain of Singmaster moves, call it $S$, is also probably pretty easy to build. However, the mixing time $t$ is unknown, and is the one thing to be determined.

+",2927,,2927,,08-06-2018 22:50,06-03-2021 03:30,Can a quantum computer easily determine the mixing time of the Rubik's cube group?,,3,4,,,,CC BY-SA 4.0 +3933,1,,,08-03-2018 23:01,,13,426,"

The quantum effects of the FMO complex (photosynthetic light harvesting complex found in green sulfur bacteria) have been well studied, as well as the quantum effects in other photosynthetic systems. One of the most common hypotheses for explaining these phenomena (focusing on the FMO complex) is Environment-Assisted Quantum Transport (ENAQT), originally described by Rebentrost et al. This mechanism describes how certain quantum networks can ""use"" decoherence and environment effects to improve the efficiency of quantum transport. Note that the quantum effects arise from the transport of excitons from one pigment (chlorophyll) in the complex to another. (There is a question that discusses the quantum effects of the FMO complex in a little more detail).

+ +

Given that this mechanism allows for quantum effects to take place at room temperature without the negative effects of decoherence, are there any applications for quantum computing? There are some examples of artificial systems that utilize ENAQT and related quantum effects. However, they present biomimetic solar cells as a potential application and do not focus on the applications in quantum computing.

+ +

Originally, it was hypothesized that the FMO complex performs a Grover's search algorithm; however, from what I understand, it has since been shown that this is not true.

+ +

There have been a couple studies that use chromophores and substrates not found in biology (will add references later). However, I would like to focus on systems that use a biological substrate.

+ +

Even for biological substrates there are a couple examples of engineered systems that use ENAQT. For example, a virus-based system was developed using genetic engineering. A DNA-based excitonic circuit was also developed. However, most of these examples present photovoltaics as a main example and not quantum computing.

+ +

Vattay and Kauffman were (AFAIK) the first to study the quantum effects as quantum biological computing, and proposed a method of engineering a system similar to the FMO complex for quantum computing.

+ +
+

How could we use this mechanism to build new types of computers? In + the light harvesting case the task of the system is to transport the + exciton the fastest possible way to the reaction center whose position + is known. In a computational task we usually would like to find the + minimum of some complex function $f_n$. For the simplicity let this + function have only discrete values from 0 to K. If we are able to map + the values of this function to the electrostatic site energies of the + chromophores $H_{nn} = \epsilon_0 f_n$ and we deploy reaction centers + near to them trapping the excitons with some rate $κ$ and can access + the current at each reaction center it will be proportional with the + probability to find the exciton on the chromophore $j_n ∼ κ\rho_{nn}$.

+
+ +
+ +

How can the quantum effects of the FMO complex be used on a biological substrate for quantum computing? Given that the quantum effects occur due to the transport of excitons on network structures, could ENAQT provide more efficient implementations of network-based algorithms (ex: shortest path, traveling salesman, etc.)?

+ +
+ +

P.S. I will add more relevant references if needed. Also, feel free to add relevant references as well.

+",141,,26,,12/13/2018 20:00,12/13/2018 20:00,Does the quantum coherence in the FMO complex have any significance to quantum computing (on a biological substrate)?,,1,10,,,,CC BY-SA 4.0 +3934,2,,3918,08-04-2018 05:42,,3,,"

A coin is not a great analogy for a quantum system. A (slightly) better one is a box that contains 3 coins. There are 3 windows, labelled x, y and z. The box is rigged so that you can only open one window at a time. When you open a window, you can see the heads/tails state of the corresponding coin, but the other two coins get flipped, and you can’t see what happens to them (unless you open another window, but then you know nothing anymore about the window you just had open because that coin has been flipped again).

+ +

I’m not sure that this attempt at an analogy scales very well to larger dimensional systems, because you probably can’t, in general, describe the algebra of the system using a set of mutually anti-commuting observables, so your rules for how the different coins flip would have to be more complicated. The qudit analogy has $d^2-1$ coins. For example, 2 qubits have 15 coins, each corresponds to a tensor product of 2 Paulis. You can open sets of windows that correspond to commuting observables. The other coins get flipped upon opening, but there are some consistency conditions that some outcomes of coin flips are entirely determined by the outcomes of other coin flips. It becomes messy very quickly...

+ +
+ +

Further explanation (expanding some of the Mathematics)

+ +

Any density matrix of a qudit is described by a $d\times d$ matrix. You can pick any basis of matrices that you like to decompose that (Hermitian) matrix. You'll need $d^2$ of them, but one of those terms is always $\mathbb{I}/d$, which I don't need to count. For example, a basis for the qubit is given by the Pauli matrices $X$, $Y$ and $Z$ (and $\mathbb{I}$). If you associate each of these basis elements with a measurement operator, then because each has 2 distinct eigenvalues, you get two measurement outcomes (like heads/tails on a coin). If two operators commute, you can simultaneously know the two measurement outcomes. If the two observables anti-commute, this corresponds to maximal uncertainty between the two observables. In other words, you measure one observable (say $Z$), and the other observables ($X$ and $Y$, because both anticommute with $Z$) are completely uncertain, i.e. the coins are flipped.

+ +

For qubits, all 3 observables pair-wise anticommute: $\{X,Y\}=\{Y,Z\}=\{X,Z\}=0$, so whichever measurement you choose, the other two observables reset.

+ +

However, for two qubits, the relationships are not nearly so simple. You have all possible terms +$$ +\mathbb{I}\otimes X \qquad \mathbb{I}\otimes Y \qquad \mathbb{I}\otimes Z \\ +X\otimes \mathbb{I} \qquad Y\otimes \mathbb{I} \qquad Z\otimes \mathbb{I} \\ +X\otimes X \qquad X\otimes Y \qquad X\otimes Z \\ +Y\otimes X \qquad Y\otimes Y \qquad Y\otimes Z \\ +Z\otimes X \qquad Z\otimes Y \qquad Z\otimes Z +$$ +You can see that not all of these pair-wise anticommute, because $\mathbb{I}\otimes Z$ and $Z\otimes \mathbb{I}$ commute, for example. So, we could simultaneously measure the set of observables $\mathbb{I}\otimes Z$, $Z\otimes \mathbb{I}$ and $Z\otimes Z$, but all other observables are completely uncertain. Overall, you'd have 15 coins and there are sets of 3 windows that you can open simultaneously, and all other coins are flipped at that instant.

+ +
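To back up that claim numerically (an addition of mine, not part of the original answer): each of the 12 remaining two-qubit Pauli observables anticommutes with at least one member of the measured set $\{\mathbb{I}\otimes Z, Z\otimes\mathbb{I}, Z\otimes Z\}$, which is why all of their coins get flipped:

```python
import numpy as np
from itertools import product

paulis = {'I': np.eye(2),
          'X': np.array([[0, 1], [1, 0]]),
          'Y': np.array([[0, -1j], [1j, 0]]),
          'Z': np.diag([1.0, -1.0])}

def op(label):
    return np.kron(paulis[label[0]], paulis[label[1]])

def anticommute(A, B):
    return np.allclose(A @ B + B @ A, 0)

measured = ['IZ', 'ZI', 'ZZ']     # a commuting set of observables
others = [a + b for a, b in product('IXYZ', repeat=2)
          if a + b not in measured + ['II']]

# those of the 12 remaining observables that anticommute with
# at least one member of the measured set
flipped = [l for l in others
           if any(anticommute(op(l), op(m)) for m in measured)]
print(len(others), len(flipped))  # 12 12: every remaining coin is flipped
```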

If you want to describe the same thing for qutrits, it gets more messy because you can use a basis where everything gives 2 answers, but there's not a perfect division into whether operators commute or anticommute, so you get partial connections between coins which are messier to give a classical equivalent of.

+",1837,,1837,,08-05-2018 08:01,08-05-2018 08:01,,,,1,,,,CC BY-SA 4.0 +3935,1,3941,,08-04-2018 11:05,,6,595,"

Having $n$ qubits, I want to construct the unitary describing a controlled operation. +Say, for example, you get as input a unitary, an index for the control qubit and another for the target.

+ +

How would you code this unitary operation?

+",4127,,26,,12/23/2018 13:15,12/23/2018 13:15,"How do we code the matrix for a controlled operation knowing the control qubit, the target qubit and the $2\times 2$ unitary?",,2,4,,,,CC BY-SA 4.0 +3936,1,3938,,08-04-2018 11:46,,6,388,"

Say I have a string representing the operations of a quantum circuit. +I want to obtain the unitary operator representing it. +Is there a tool for doing so, in Python or another language?

+",4127,,26,,12/23/2018 13:14,12/23/2018 13:14,Is there a tool that can give you the unitary representing a quantum circuit from just a string?,,2,0,,,,CC BY-SA 4.0 +3937,2,,3936,08-04-2018 13:48,,2,,"

Normally, quantum simulators ask you to specify a starting state, for example, all qubits initially in state $|0\rangle$, and then they give you the final state after that initial state evolves through all of the SEO (sequence of elementary operations). This is performed in my software Qubiter by the class SEO_simulator.

+ +

However, a well designed simulator package should also multiply the SEO and give you the unitary matrix it represents. This is done in Qubiter by the class SEO_MatrixProduct.

+ +

I have described the features of Qubiter in some of my other answers; it is my own code.

+",1974,,91,,08-07-2018 21:19,08-07-2018 21:19,,,,0,,,,CC BY-SA 4.0 +3938,2,,3936,08-04-2018 13:52,,6,,"

You can use Python with Qiskit. Say your string representation is written using OpenQASM syntax.

+ +
qasm = """"""
+OPENQASM 2.0;
+include ""qelib1.inc"";
+qreg q[2];
+h q[0];
+t q[1];
+cx q[0], q[1];
+""""""
+
+ +

You can build a circuit out of this and simulate it on a unitary simulator:

+ +
import qiskit as qk
+import numpy as np
+circuit = qk.load_qasm_string(qasm)
+result = qk.execute(circuit, 'local_unitary_simulator').result()
+print(np.round(result.get_unitary(), 1))
+
+ +

Yields:

+ +
[[ 0.7+0.j   0.7-0.j   0. +0.j   0. +0.j ]
+[ 0. +0.j   0. +0.j   0.5+0.5j -0.5-0.5j]
+[ 0. +0.j   0. +0.j   0.5+0.5j  0.5+0.5j]
+[ 0.7+0.j  -0.7+0.j   0. +0.j   0. +0.j ]]
+
+",2503,,,,,08-04-2018 13:52,,,,0,,,,CC BY-SA 4.0 +3939,2,,3935,08-04-2018 14:59,,6,,"

I would solve it like that: suppose you have a CNOT gate; its unitary matrix can be written as a sum of tensor products +$$ +\begin{pmatrix}1 & 0 \\ +0 & 0 +\end{pmatrix}\otimes +\begin{pmatrix}1 & 0 \\ +0 & 1 +\end{pmatrix}+ +\begin{pmatrix}0 & 0 \\ +0 & 1 +\end{pmatrix}\otimes +\begin{pmatrix}0 & 1 \\ +1 & 0 +\end{pmatrix} +$$ +(this is another way to say: if the first qubit is 0, the second qubit is unchanged; if the first qubit is 1, the second qubit is flipped).

+ +

Now if we have more qubits, we need to insert additional identity matrices +$$ +\begin{pmatrix}1 & 0 \\ +0 & 1 +\end{pmatrix} +$$ +into the tensor products from the left, from the right, or into the middle, depending on the indexes of control and target CNOT qubits.

+ +

PS: I also assumed that the index of control qubit is less than the index of target qubit; if this is not the case swap the terms in the tensor products in the first equation.
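In NumPy (a sketch I've added, not part of the original answer), the projector form makes the identity-insertion mechanical:

```python
import numpy as np

I = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
P0 = np.diag([1.0, 0.0])   # |0><0|
P1 = np.diag([0.0, 1.0])   # |1><1|

# two-qubit CNOT as the sum of tensor products above
CNOT = np.kron(P0, I) + np.kron(P1, X)

# three qubits, control = qubit 0, target = qubit 2:
# an identity is inserted into the middle of each term
CNOT_0_2 = np.kron(np.kron(P0, I), I) + np.kron(np.kron(P1, I), X)

print(np.allclose(CNOT, [[1, 0, 0, 0],
                         [0, 1, 0, 0],
                         [0, 0, 0, 1],
                         [0, 0, 1, 0]]))  # True
```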

+",2105,,2105,,08-05-2018 03:31,08-05-2018 03:31,,,,0,,,,CC BY-SA 4.0 +3940,2,,3893,08-04-2018 17:02,,3,,"

I think you are misunderstanding what a basis vector is. $|0\rangle$ and $|1\rangle$ are basis vectors and these are referred to a rectilinear basis vectors and $\frac{|0\rangle+|1\rangle}{\sqrt{2}} = |+\rangle$ and $\frac{|0\rangle-|1\rangle}{\sqrt{2}} = |-\rangle$ are basis vectors too and these are referred to as diagonal basis vectors. The hadamard transform takes a vector from one basis representation to another. If you take a dot product between the diagonal basis vectors i.e $\langle+|-\rangle$ you will realize that they are orthonormal. It is entirely possible for a Hilbert space and more generally a vector space to have multiple sets of basis vectors.

+",422,,,,,08-04-2018 17:02,,,,0,,,,CC BY-SA 4.0 +3941,2,,3935,08-05-2018 05:14,,5,,"

Here’s some pseudo code, where id(n) creates a $2^n\times 2^n$ identity matrix, and tensor(A,B,...) returns $A\otimes B\otimes\ldots$.

+ +
+import numpy as np
+from functools import reduce
+
+Z = np.diag([1.0, -1.0])
+
+def id(n):
+    '''2^n x 2^n identity matrix'''
+    return np.eye(2**n)
+
+def tensor(*ops):
+    '''Kronecker product of all arguments'''
+    return reduce(np.kron, ops)
+
+def cU(ctrl, targ, U, size):
+    '''implement controlled-U with:
+          control qubit ctrl,
+          target qubit targ,
+          within a set of size qubits'''
+    # check input ranges
+    assert 1 <= ctrl <= size and 1 <= targ <= size and ctrl != targ
+    # ensure U is a 2x2 unitary
+    U = np.asarray(U)
+    assert U.shape == (2, 2) and np.allclose(U.conj().T @ U, id(1))
+    # the actual code
+    if ctrl < targ:
+        return id(size) + tensor(id(ctrl-1), id(1)-Z, id(targ-1-ctrl), U-id(1), id(size-targ))/2
+    else:
+        return id(size) + tensor(id(targ-1), U-id(1), id(ctrl-1-targ), id(1)-Z, id(size-ctrl))/2
+ +

However, remember that usually you're trying to calculate the action of a unitary on some state vector. It will be far more memory efficient to calculate that application directly, rather than first calculating the unitary matrix and applying it to the state vector.

+ +

To understand where this formula came from, think about the two-qubit version, where the first qubit is the control qubit. You'd normally write the unitary as +$$ +|0\rangle\langle 0|\otimes\mathbb{I}+|1\rangle\langle 1|\otimes U. +$$ +Let's rewrite this as +$$ +(\mathbb{I}-|1\rangle\langle 1|)\otimes\mathbb{I}+|1\rangle\langle 1|\otimes U=\mathbb{I}\otimes\mathbb{I}+|1\rangle\langle 1|\otimes (U-\mathbb{I}). +$$ +It can be easier to write things in terms of Pauli matrices, so +$$ +|1\rangle\langle 1|=(\mathbb{I}-Z)/2. +$$ +To get the same unitary on a different number of qubits, you just need to pad with identity matrices everywhere.

+",1837,,1837,,08-06-2018 07:21,08-06-2018 07:21,,,,5,,,,CC BY-SA 4.0 +3942,2,,2414,08-05-2018 10:40,,6,,"

To start off, I would really suggest you read this review on ""Quantum information with continuous variables (cv)"". It covers most of your questions about the cv architecture. Since it is a very big review, I will try to address your questions with what I can remember from reading that paper and glancing over it again now.

+ +

For discrete variables (dv), as you have mentioned, Knill, Laflamme and Milburn pioneered LOQC. But this approach was translated to cvs shortly after the proposal for the realization of cv teleportation by Braunstein et al. They showed that cv quantum error correction codes can be implemented using only linear optics and resources of squeezed light.

+ +

Now coming to the universality of this type of quantum computer, they have also shown in the paper that a universal quantum computer for the amplitudes of the electromagnetic field might be constructed using linear optics, squeezers and at least one further non-linear optical element such as the Kerr effect(pg.48~50).

+ +

I will try to summarize their proof verbally as simply as I can.

+ +

1) It is true that, for universal qcs, logical operations need only affect a few variables at a time, in the form of qubit logic gates, and that by stacking those gates one can effect any unitary transformation over a finite number of those variables to any desired degree of precision.

+ +

2) The argument is that since an arbitrary unitary transformation over even a single cv requires an infinite number of parameters to define, it typically cannot be approximated by any finite number of quantum operations.

+ +

3) This problem is tackled by introducing a notion of universal quantum computation over cvs for various subclasses of transformations, such as those generated by Hamiltonians which are polynomial functions of the operators corresponding to the cvs. A set of continuous quantum operations is termed universal for a particular set of transformations if one can, by a finite number of applications of the operations, approach arbitrarily closely any transformation in the set.

+ +

4) The result is a very lengthy mathematical proof of constructing quadratic Hamiltonians for EM fields.

+ +

So to answer your question: even though, as you mentioned, the squeezing of light adds external noise to the qc, I believe that it can be used for error correcting that same noise. Along with that, the claim of quantum speedup arises from the fact that to generate all unitary transformations given by an arbitrary polynomial Hermitian Hamiltonian (as is necessary to perform universal cv quantum computation), one must include a gate described by a Hamiltonian other than an inhomogeneous quadratic in the canonical operators.

+ +

These nonlinear transformations can be used in cv algorithms and may provide a significant speedup over any classical process.

+ +

So to conclude: yes, cv quantum computation looks promising, although most of it is theoretical at this point. There are only a few experimental confirmations of the cv architecture, like ""squeezed-state EPR entanglement"", ""coherent state quantum teleportation"" etc. But the recent experiments in ""quantum key distribution"" and ""quantum memory effect"" show that continuous variable quantum computers have the potential to be as effective as their discrete counterparts, if not more, for some tasks.

+",419,,,,,08-05-2018 10:40,,,,5,,,,CC BY-SA 4.0 +3943,1,3944,,08-05-2018 17:41,,17,10498,"

I've been reading through ""Quantum Computing: A Gentle Introduction"", and I've been struggling with this particular problem. How would you create the circuit diagram, and what kind of reasoning would lead you to it?

+",4287,,26,,12/23/2018 13:14,12/23/2018 13:14,How do you implement the Toffoli gate using only single-qubit and CNOT gates?,,2,1,,,,CC BY-SA 4.0 +3944,2,,3943,08-05-2018 18:10,,17,,"

+ +

is the decomposition (I took this from google images, originally on this website.)

+ +

In order to understand how to decompose it, we can look at its base structure. The idea is that we combine gates that cancel out, but put CNOT gates in between such that if the specific NOT is executed, the gates don't cancel. This is how generic controlled-U gates are implemented for arbitrary U, as explained in Quantum Computation and Quantum Information by Nielsen and Chuang.

+ +

For a simpler example, imagine you have a gate U and you want to make a controlled-U from it, with just one control. To do so you find single qubit gates A, B, C such that CBA = I, but CXBXA = U. By putting a CNOT where every X gate must be applied, you have created a control-U gate. Similar logic applies in the CC-U case, except you need EDCBA = EXDCXBA = EDXCBXA = I up to a phase, and EXDXCXBXA = U. Where the first and third X corresponds to a CNOT from one control, and the second and fourth are from the other.

+ +
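As a concrete instance of the CBA = I, CXBXA = U idea (my own NumPy check, not from the original answer): for U = Z one can take A = I, B = R_z(-π/2), C = R_z(π/2); as in the general construction, a phase gate S on the control is also needed to absorb the global phase of CXBXA:

```python
import numpy as np

def Rz(theta):
    return np.diag([np.exp(-1j * theta / 2), np.exp(1j * theta / 2)])

I = np.eye(2)
A = I                    # applied first
B = Rz(-np.pi / 2)
C = Rz(np.pi / 2)
S = np.diag([1, 1j])     # phase correction on the control wire
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

# control qubit is the first tensor factor; matrix product reads right-to-left
circ = (np.kron(S, I) @ np.kron(I, C) @ CNOT
        @ np.kron(I, B) @ CNOT @ np.kron(I, A))

print(np.allclose(C @ B @ A, I))                  # CBA = I
print(np.allclose(circ, np.diag([1, 1, 1, -1])))  # the circuit is controlled-Z
```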

Essentially the intuition for this structure is that you want to have the bottom line cancel out if either control is zero, and to be your desired unitary if they both apply the NOT gates.

+ +

For additional reading check out pages 181-183 in Nielsen and Chuang. [EDIT: The link @Norbert Schuch posted also contains the same info, and you don't need to track down a textbook.]

+",3056,,3056,,08-05-2018 21:32,08-05-2018 21:32,,,,0,,,,CC BY-SA 4.0 +3946,2,,3898,08-06-2018 03:28,,5,,"

One recent breakthrough which is not covered in that survey is cheat sheets. See,

+ +
+

Separations in communication complexity using cheat sheets and information complexity; + Anurag Anshu, Aleksandrs Belovs, Shalev Ben-David, Mika Göös, Rahul Jain, Robin Kothari, Troy Lee, and Miklos Santha; + arXiv:1605.01142 [quant-ph]

+
+ +

It might be a good idea to first familiarize yourself with the cheat sheet framework.

+ +
+

Separations in query complexity using cheat sheets; + Scott Aaronson, Shalev Ben-David, Robin Kothari; + arXiv:1511.01937 [quant-ph]

+
+ +

Personally, I like thinking about query complexity as a whole instead of specializing in communication complexity. A lot of theorems in traditional query complexity can be lifted into communication complexity, and indeed that is what they do in the above paper.

+",,user1813,,,,08-06-2018 03:28,,,,0,,,,CC BY-SA 4.0 +3947,1,3949,,08-06-2018 06:15,,8,1416,"

When I look at most circuits (admittedly small sample as I'm a beginner), the Hadamard gate is used a lot to prepare a superposition from say the $\mid0\rangle$ state.
+But upon a little reflection, we can prepare a superposition using a $\dfrac{\pi}{2}$ rotation about the X axis.

+ +

I do know that a successive application of the Hadamard gate yields the initial state back (for any state).
+If we have one of $\mid0\rangle$ or $\mid1\rangle$, we can recover them using a succession of said rotation followed by a NOT gate (Pauli-X).

+ +

So why is the Hadamard gate preferred to create superpositions when it uses more gates (rotation about Z then rotation about X then rotation about Z again)?
+If it is because the Hadamard gate allows recovery of any initial state, why is that property so important? (Even when not actually used when I look at the examples I see.)

+",2417,,16606,,06-02-2022 12:48,06-02-2022 12:48,Advantage of Hadamard gate over rotation about the X axis for creating superpositions,,3,3,,,,CC BY-SA 4.0 +3948,2,,3947,08-06-2018 07:00,,4,,"

Any Hermitian quantum gate $U$ is ""self-recovering"". This is because $U$ is unitary, and +$$UU^{\dagger}=U^{\dagger}U=I$$ +If $U$ is also Hermitian, then $U=U^{\dagger}$ and +$$UU=I$$

+ +

The Hadamard gate prepares the $\frac{1}{\sqrt{2}}(|0\rangle + |1\rangle)$ superposition from the $|0\rangle$ state. If you need this superposition, you use Hadamard. If you need a different superposition, $\alpha|0\rangle + \beta|1\rangle$ with some $\alpha$ and $\beta$, you need a different gate or a sequence of gates; the Hadamard gate has no advantage here.
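Both facts are one line each to check numerically (a NumPy illustration I've added):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
ket0 = np.array([1.0, 0.0])

plus = H @ ket0
print(plus)   # [0.7071 0.7071] -- the (|0> + |1>)/sqrt(2) superposition
print(H @ H)  # the 2x2 identity: H is both unitary and Hermitian
```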

+",2105,,2105,,08-06-2018 09:50,08-06-2018 09:50,,,,6,,,,CC BY-SA 4.0 +3949,2,,3947,08-06-2018 07:09,,8,,"

It's mostly about simplicity and adopted convention. In the end, this is basically the same question as ""why should I pick a universal set of gates A rather than a universal set B?"" (see here). Experimentalists would pick the universal set they have available. Theorists just pick something that they like to work with, and eventually a convention is adopted. But it doesn't matter which convention they adopt because any universal set is easily converted into any other universal set, and it is (or should be) understood that the quantum circuits describing algorithms are not what you actually want to run on a quantum computer: you need to recompile them for the available gate set and optimise based on the available architecture (and this process is unique to each architecture).

+ +

You could use operations such as $\sqrt{X}$, but they are a little bit more fiddly because of all the imaginary numbers that appear. Or there's $\sqrt{Y}$ which gives an even more direct comparison to $H$, avoiding imaginary numbers.

+ +

One of the main purposes of $H$ in a quantum circuit is to prepare uniform superpositions: $H|0\rangle=(|0\rangle+|1\rangle)/\sqrt{2}$. But $\sqrt{Y}$ also does this: $\sqrt{Y}|1\rangle=(|0\rangle+|1\rangle)/\sqrt{2}$. When you start combining multiple Hadamards on unknown input states (i.e. the Hadamard transform), it has a particularly convenient structure +$$ +H^{\otimes n}=\frac{1}{\sqrt{2^n}}\sum_{x,y\in\{0,1\}^n}(-1)^{x\cdot y}|x\rangle\langle y|. +$$

+ +

The Hadamard gives you some very nice inter-relations (reflecting basis changes between pairs of mutually unbiased bases), +$$ +HZH=X\qquad HXH=Z \qquad HYH=-Y. +$$ +It also enables relations between controlled-not and controlled phase, and between controlled-not in two different directions (swapping control and target). There are similar relations for $\sqrt{Y}$: +$$ +\sqrt{Y}Z\sqrt{Y}^\dagger=YZ=iX \qquad \sqrt{Y}X\sqrt{Y}^\dagger=YX=-iZ\qquad \sqrt{Y}Y\sqrt{Y}^\dagger=Y +$$ +Part of this looking (slightly) nicer is because, as stated in the question, $H^2=\mathbb{I}$.

+ +
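These conjugation relations are easy to verify numerically (an addition of mine; only the Hadamard identities are checked here):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0])

print(np.allclose(H @ Z @ H, X))   # True: HZH = X
print(np.allclose(H @ X @ H, Z))   # True: HXH = Z
print(np.allclose(H @ Y @ H, -Y))  # True: HYH = -Y
```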

One way that many courses introduce the basic idea of quantum computation, and interference, is to use the Mach-Zehnder interferometer. This consists of two beam splitters which, mathematically, should be described by $\sqrt{X}$ (or $\sqrt{Y}$ would do). Indeed, this is important for a first demonstration because of course these operations are ""square root of not"", which you can prove is logically impossible classically. However, once that initial introduction is over, theorists will often substitute the beam splitter operation for Hadamard, just because it makes everything slightly easier.

+",1837,,1837,,08-07-2018 08:00,08-07-2018 08:00,,,,15,,,,CC BY-SA 4.0 +3950,2,,3932,08-06-2018 07:40,,8,,"

It's an interesting question which is better than most ""is there a quantum algorithm for x?"" questions. I don't know of an existing quantum algorithm. Let me describe what I think would be a typical first attempt, and why that fails. At the end I'll describe a couple of things that might lead to some improvements.

+ +

First Attempt at an Algorithm

+ +

Let's say I want to test a particular mixing time $t$. I'm going to create one register, $RC$, that contains sufficient workspace to hold any of the possible configurations of the Rubik's cube. The initial state of this is a product state that corresponds to the starting state of the cube.

+ +

Then I'm going to make $t$ ancilla registers, $A_1$ to $A_t$. Each of these is the same size as the number of possible Singmaster moves, and is prepared as a uniform superposition across all possible basis elements. Then for each $i=1,\ldots t$, we apply a controlled-unitary from $A_i$ to $RC$ where the register $A_i$ specifies which Singmaster move is applied on $RC$.

+ +

After all this, if we just look at $RC$, it should be in the maximally mixed state if the mixing has happened as desired. The problem is how to test whether or not this output is the maximally mixed state. There are useful techniques such as this one, but what accuracy do we require (i.e. how many repetitions?). We'll need about $|A|^t$ to be sure, I think.

+ +

In fact, this way of doing things is just as bad as doing it classically: you could replace the initial state of each of the $A_i$ with $\mathbb{I}/2^{|A_i|}$ and it wouldn't change the outcome. But this really is just like making a random choice each time and running many times, checking for the correct output distribution.

+ +

Possible Improvements

+ +
    +
  • Running as I described, the output density matrix $\rho$ (on $RC$) must be diagonal. That means that the uniform superposition $|u\rangle$ over all basis states is an eigenstate if and only if the system is maximally mixed. I wonder if one could combine this observation with some sort of amplitude amplification to get a mild speed-up. Note that $\rho^k|u\rangle$ builds up a difference very rapidly from $|u\rangle$ if the state is not an eigenvector.

  • +
  • Aside from that, you probably need to do something smarter with the ancilla registers. There is some hope that this might be possible because there's quite a lot of group structure built into the Rubik's cube. One thing that you might try is to see whether you can replace all $t$ ancilla registers with a single register, applying Hadamard gates on every qubit of the register in between each round of controlled-unitaries. It might be that all this does is give you an efficiency saving in terms of the number of qubits compared to my original suggestion. It might not even do that.

  • +
+ +

Whether either of those work directly, I don't know. Still, I think the key principles are to find some useful group structure, and find a way that amplitude amplification can be applied.

+ +

You might find it useful to read up about unitary designs. This is certainly a distinct problem from what we're talking about here, but some of the technical tools might be useful. Roughly speaking, the idea is that a set of unitaries $\{U\}$ is a $t$-design if random application of these unitaries lets one simulate a truly random unitary (drawn from the Haar measure) on output functions $f$ which, when expanded using a Taylor series, are accurate up to degree $t$. The approximate connection here is that if you take the unitaries representing a sequence of $t$ Singmaster moves as $\{U\}$, it would be sufficient if this set were a 2-design (if you get $\text{Tr}(\rho^2)$ correct, you're done).

+",1837,,1837,,08-06-2018 08:17,08-06-2018 08:17,,,,4,,,,CC BY-SA 4.0 +3951,2,,3943,08-06-2018 15:14,,10,,"

You also have this one with V the square root of NOT gate:

+ +

+ +

If you have as control qubits :

+ +

(0,0) : do nothing; +(0,1) : apply V and then its conjugate, which gives the identity; +(1,0) : the same, but in reverse order; +(1,1) : apply V twice, which corresponds to your NOT gate.
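The matrix identities this construction relies on — $V^2$ is the NOT gate and $VV^\dagger$ is the identity — can be checked directly. The explicit matrix below is the principal square root of NOT; which root you call $V$ is a convention:

```python
# Principal square root of X (the NOT gate).
V = [[0.5 + 0.5j, 0.5 - 0.5j],
     [0.5 - 0.5j, 0.5 + 0.5j]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Conjugate transpose of V.
Vdag = [[V[j][i].conjugate() for j in range(2)] for i in range(2)]

def close(A, B):
    return all(abs(A[i][j] - B[i][j]) < 1e-12
               for i in range(2) for j in range(2))

assert close(matmul(V, V), [[0, 1], [1, 0]])      # V.V    = NOT
assert close(matmul(V, Vdag), [[1, 0], [0, 1]])   # V.Vdag = identity
print("V*V == X and V*Vdag == I")
```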

+",4127,,,,,08-06-2018 15:14,,,,0,,,,CC BY-SA 4.0 +3952,2,,3947,08-06-2018 17:09,,6,,"

I think the major advantages of the Hadamard gate are ""usability"" stuff, as opposed to fundamental mathematical stuff. It's just easier to remember and simpler to apply.

+ +
    +
  1. The Hadamard gate's matrix is real and symmetric. Makes it easy to remember.
  2. +
  3. Hadamard is its own inverse. Makes it easy to optimize in circuits. Any two Hs that meet cancel out; whereas $\sqrt{X}$ tends to meet $\sqrt{X}$ as often as $\sqrt{X}^{-1}$ leaving behind $X$ operations.
  4. +
  5. Hadamard's effect on operators is easy to remember: swap X for Z. Whereas for $\sqrt{X}$ style operations you need to remember a right hand rule. If you pass a Hadamard over a CZ, it turns into a CNOT. If you pass a $\sqrt{Y}$ over a CZ, whether you get a CNOT or a CNOT+Z depends on whether you went left-to-right or right-to-left.
  6. +
  7. In the surface code you need twist defects or distilled states to do $\sqrt{X}$ gates. Hadamard operations need neither (though the twists are more efficient...).
  8. +
  9. The Hadamard is unique. There are multiple unitaries $M$ such that $M^2 = X$, and so you need an agreed upon convention for which one $\sqrt{X}$ is.
  10. +
+ +

PS: it would be better to compare a Hadamard to a 90 degree rotation about the Y axis, not the X axis, because the Hadamard operation is equivalent to $\sqrt{Y}$ up to Pauli operations ($H \propto Z \cdot \sqrt{Y}$).

+",119,,119,,08-06-2018 19:26,08-06-2018 19:26,,,,0,,,,CC BY-SA 4.0 +3953,1,3958,,08-06-2018 19:13,,3,269,"

What Hilbert space of dimension greater than 4.3e19 would be most convenient for working with the Rubik's Cube versus one qudit?

+ +

The cardinality of the Rubik's Cube group is given by:

+ +

+ +
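The cardinality quoted (~4.3e19) comes from the standard counting argument: $8!$ corner permutations, $3^7$ corner orientations, $12!$ edge permutations and $2^{11}$ edge orientations, divided by 2 for the parity constraint. A quick check:

```python
from math import factorial

# Order of the Rubik's Cube group.
order = factorial(8) * 3**7 * factorial(12) * 2**11 // 2
print(order)  # 43252003274489856000, i.e. ~4.3e19
```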

Examples

+ +

66 Qubits yields ~7.378697629484e19 states (almost double the number of states needed)

+ +

42 Qutrits yields ~1.094189891315e20 states (more than double the needed states)
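The counts in the examples above can be reproduced by searching for the smallest $n$ with $d^n \geq |G|$ (the helper names below are ours):

```python
from math import factorial

# Order of the Rubik's Cube group, ~4.3e19.
order = factorial(8) * 3**7 * factorial(12) * 2**11 // 2

def levels_needed(d, N):
    """Smallest n such that d**n >= N."""
    n = 0
    while d**n < N:
        n += 1
    return n

print(levels_needed(2, order))  # 66 qubits
print(levels_needed(3, order))  # 42 qutrits
```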

+",2645,,2645,,12/18/2018 23:14,08-07-2021 16:43,Hilbert space to accurately represent 3x3 Rubik's Cube,,2,3,,,,CC BY-SA 4.0 +3954,1,,,08-06-2018 21:00,,8,169,"

Are there any books, courses, tutorials, etc. for studying quantum biology? +Preferably they provide some introduction/primer on the relevant quantum aspects of the quantum biological systems being described.

+",141,,55,,10/24/2022 20:14,10/24/2022 20:14,Resources for quantum biology,,1,5,,,,CC BY-SA 4.0 +3955,1,,,08-07-2018 01:59,,10,765,"

What's the simplest algorithm (like Deutsch's algorithm and Grover's Algorithm) for intuitively demonstrating quantum speed-up? And can this algorithm be explained intuitively?

+ +

Ideally, this would also clearly illustrate how quantum interference is being utilized, and why it is not possible or useful using just interference of classical waves.

+",2660,,55,,5/31/2021 15:01,5/31/2021 15:01,What is the simplest algorithm to demonstrate intuitively quantum speed-up?,,3,3,,,,CC BY-SA 4.0 +3956,2,,3955,08-07-2018 03:02,,2,,"

There is a nice example in the Microsoft lecture. Suppose you have a classical black box with 1 input and 1 output. How many queries do you need to determine whether the output is constant or variable? Evidently you need 2 queries: first you input 0, then you input 1; if both outputs are identical the box is constant, otherwise it is variable. It turns out that after you convert the classical black box into a quantum black box, you can build a circuit that needs only a single query (the lecture explains how to do it).
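The single-query circuit being described is Deutsch's algorithm. A minimal state-vector sketch, assuming the standard construction (pure Python; all helper names are ours, not from the lecture):

```python
import math

s = 1 / math.sqrt(2)

def apply_h(state, qubit):
    """Hadamard on one qubit of a 2-qubit state vector (qubit 0 = MSB)."""
    mask = 1 << (1 - qubit)
    out = [0j] * 4
    for i, amp in enumerate(state):
        out[i & ~mask] += s * amp                        # |0> component
        out[i | mask] += (-s if i & mask else s) * amp   # |1> component
    return out

def apply_oracle(state, f):
    """|x>|y> -> |x>|y XOR f(x)>."""
    out = [0j] * 4
    for i, amp in enumerate(state):
        x, y = i >> 1, i & 1
        out[(x << 1) | (y ^ f(x))] += amp
    return out

def deutsch(f):
    state = [0j] * 4
    state[0b01] = 1 + 0j            # start in |x=0, y=1>
    state = apply_h(state, 0)
    state = apply_h(state, 1)
    state = apply_oracle(state, f)  # the single query
    state = apply_h(state, 0)
    prob_one = abs(state[0b10]) ** 2 + abs(state[0b11]) ** 2
    return "variable" if prob_one > 0.5 else "constant"

print(deutsch(lambda x: 0))  # constant
print(deutsch(lambda x: x))  # variable
```

After the one oracle call, measuring the first qubit deterministically distinguishes the two constant functions from the two balanced ones — classically this needs both queries.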

+",2105,,,,,08-07-2018 03:02,,,,3,,,,CC BY-SA 4.0 +3957,1,3964,,08-07-2018 05:35,,11,801,"

I want to decompose a Toffoli gate into CNOTs and arbitrary single-qubit gates. I want to minimize the number of CNOTs. I have a locality constraint: because the Toffoli is occurring in a linear array, the two controls are not adjacent to each other (so no CNOTs directly between the controls).

+ +

What is the minimum number of CNOTs required to perform this task? What is an example of a circuit which achieves this minimum?

+ +

To be specific, this is the layout I have in mind:

+ +
1 ---@---
+     |
+2 ---X---
+     |
+3 ---@---
+
+ +

Each control is adjacent to the target, but the controls are not adjacent to each other.

+",119,,26,,12/23/2018 7:40,10/28/2022 9:59,Minimum number of CNOTs for Toffoli with non-adjacent controls,,4,1,,,,CC BY-SA 4.0 +3958,2,,3953,08-07-2018 06:36,,4,,"

This question does not need to be phrased as a quantum question. One can equally ask what classical register can be used to store a string that uniquely identifies each different configuration of the Rubik's Cube. This is already implicitly answered in the question: you need 66 bits, 42 trits....

+ +

However, this is labouring under the assumption that you can easily pick and choose your information carrier, and combine together arbitrary sets with different dimensions, and easily talk about the logic operations on them and how they all interact. This is clearly not how it works in classical: your computer only processes bits (and if you want different dimensional systems, the software can handle the conversion: see, for example, the MixedRadix function in Mathematica), and the same will be true of quantum. A particular experiment will focus on one type of information carrier, and will be able to manipulate a certain number of levels. You might be able to repeat that many times, but you won’t be combining qubits, qutrits etc. So you’ll probably be using qubits, and you just need to find the smallest Hilbert space that’s larger than the required dimension. Again, as indicated by the question, that’s 66 qubits.

+ +

Perhaps the concern is that 66 seems like a lot of qubits and we need to make that number as small as possible. Remember that computations will require error correction, which means increasing the number of qubits by several orders of magnitude. One extra qubit doesn’t matter so much on the scale of things.

+",1837,,,,,08-07-2018 06:36,,,,7,,,,CC BY-SA 4.0 +3959,1,4000,,08-07-2018 06:46,,11,797,"

God's number is the worst case of God's algorithm which is

+ +
+

a notion originating in discussions of ways to solve the Rubik's Cube puzzle, but which can also be applied to other combinatorial puzzles and mathematical games. It refers to any algorithm which produces a solution having the fewest possible moves, the idea being that an omniscient being would know an optimal step from any given configuration.

+
+ +

Calculating God's number to be 20 took ""35 CPU-years of idle (classical) computer time.""

+ +

What kind of speed up could be achieved with a quantum approach?

+",2645,,2645,,12/18/2018 20:16,12/18/2018 20:16,Quantum Algorithm for God's Number,,1,5,,,,CC BY-SA 4.0 +3960,1,3961,,08-07-2018 07:03,,6,448,"

I see somewhere that this happens:

+ +

+ +

But I wonder if this is just the identity.

+",806,,26,,08-12-2018 06:13,08-12-2018 07:10,Just want to confirm: Do two CNOT gates cancel each other?,,2,0,,,,CC BY-SA 4.0 +3961,2,,3960,08-07-2018 07:08,,5,,"

Yes, it is. If the bottom qubit is 0, neither gate does anything to the top qubit. If the bottom qubit is 1, both gates apply $X$. But since $X^2=\mathbb{I}$, the net effect is that nothing happens. Hence, overall, nothing happens.

+ +

Another way to see this is to look at the unitary matrix of controlled-not. +$$ +\left(\begin{array}{cccc} +1 & 0 & 0 & 0 \\ +0 & 1 & 0 & 0 \\ +0 & 0 & 0 & 1 \\ +0 & 0 & 1 & 0 +\end{array}\right) +$$ It’s reasonably easy to see that the eigenvalues are 1,1,1,-1 (evidently, $|00\rangle$ and $|01\rangle$ are +1 eigenvectors, leaving behind a $2\times 2$ matrix like Pauli $X$, which we know has $\pm 1$ eigenvalues), so the square obviously has eigenvalues 1,1,1,1 and the only 4x4 unitary matrix with all ones eigenvalues is the identity matrix.

+ +

Equally, direct calculation: +$$ +\left(\begin{array}{cccc} +1 & 0 & 0 & 0 \\ +0 & 1 & 0 & 0 \\ +0 & 0 & 0 & 1 \\ +0 & 0 & 1 & 0 +\end{array}\right)\cdot \left(\begin{array}{cccc} +1 & 0 & 0 & 0 \\ +0 & 1 & 0 & 0 \\ +0 & 0 & 0 & 1 \\ +0 & 0 & 1 & 0 +\end{array}\right)=\left(\begin{array}{cccc} +1 & 0 & 0 & 0 \\ +0 & 1 & 0 & 0 \\ +0 & 0 & 1 & 0 \\ +0 & 0 & 0 & 1 +\end{array}\right) +$$
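The same direct calculation, done numerically (a trivial pure-Python check):

```python
CNOT = [[1, 0, 0, 0],
        [0, 1, 0, 0],
        [0, 0, 0, 1],
        [0, 0, 1, 0]]

# Square the CNOT matrix.
sq = [[sum(CNOT[i][k] * CNOT[k][j] for k in range(4)) for j in range(4)]
      for i in range(4)]

I4 = [[1 if i == j else 0 for j in range(4)] for i in range(4)]
print(sq == I4)  # True
```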

+",1837,,1837,,08-07-2018 07:34,08-07-2018 07:34,,,,0,,,,CC BY-SA 4.0 +3963,2,,3957,08-07-2018 10:04,,3,,"

I believe I've got it down to 9 controlled-not gates: +

+ +

What I did was use a set of three cNots in place of a SWAP to move the two controls next to each other, to achieve the last part of the standard Toffoli circuit (see here). This used 12 cNots.

+ +

However, the final $T$ and $H$ gates on the target qubit I propagated through one of these swaps. This let me cancel two controlled-Nots.

+ +

Then, in the final SWAP, I chose the first of the controlled-nots to be controlled from the middle qubit. I replaced it with a controlled-phase and two Hadamards. The leading Hadamard cancelled. The controlled-phase gate commutes with the preceding controlled gates controlled off the middle qubit, and phase gates on the middle qubit and bottom qubits. These operations bring that controlled phase up to a controlled-not from the first inserted swap. Hence, we can combine these two gates as a controlled-$iY$, controlled off the bottom qubit. But this can be written as a single cNot with some $S$ gates.

+ +

I've made no attempt at an optimality proof, but I'm already pretty pleased to have got it this small.

+",1837,,,,,08-07-2018 10:04,,,,0,,,,CC BY-SA 4.0 +1,1,18,,03-12-2018 17:00,,75,6416,"

I know that a Turing machine1 can theoretically simulate ""anything"", but I don't know whether it could simulate something as fundamentally different as a quantum-based computer. Are there any attempts to do this, or has anybody proved it possible/not possible?

+ +

I've googled around, but I'm not an expert on this topic, so I'm not sure where to look. I've found the Wikipedia article on quantum Turing machine, but I'm not certain how exactly it differs from a classical TM. I also found the paper Deutsch's Universal Quantum Turing Machine, by +W. Fouché et al., but it is rather difficult to understand for me.

+ +
+ +

+1. In case it is not clear, by Turing machine I mean the theoretical concept, not a physical machine (i.e. an implementation of the theoretical concept).

+",,user7,55,,9/23/2020 12:18,12/13/2022 18:44,Can a Turing machine simulate a quantum computer?,,4,0,,,,CC BY-SA 3.0 +3,1,86,,03-12-2018 17:12,,46,3168,"

Quantum computers are known to be able to crack in polynomial time a broad range of cryptographic algorithms which were previously thought to be solvable only by resources increasing exponentially with the bit size of the key. An example for that is Shor's algorithm.

+ +

But, as far I know, not all problems fall into this category. On Making Hard Problems for Quantum Computers, we can read

+ +
+

Researchers have developed a computer algorithm that doesn’t solve problems but instead creates them for the purpose of evaluating quantum computers.

+
+ +

Can we still expect a new cryptographic algorithm which will be hard to crack using even a quantum computer? +For clarity: the question refers specifically to the design of new algorithms.

+",27,,1847,,04-10-2018 05:24,2/13/2019 15:40,"Is it possible for an encryption method to exist which is impossible to crack, even using quantum computers?",,4,0,,,,CC BY-SA 3.0 +4,1,84,,03-12-2018 17:16,,17,2081,"

Quantum gates are represented by matrices, which represent the transformations applied to qubits (states).

+ +

Suppose we have some quantum gate which operates on $2$ qubits.

+ +

How does the quantum gate affect (not necessarily change it) the result of measuring the state of the qubits (as the measurement result is affected greatly by the probabilities of each possible state)? More specifically, is it possible to know, in advance, how the probabilities of each state change due to the quantum gate?

+",13,,26,,12/23/2018 12:05,12/23/2018 12:05,How do the probabilities of each state change after a transformation of a quantum gate?,,4,0,,,,CC BY-SA 3.0 +5,2,,1,03-12-2018 17:18,,13,,"

To simulate the collapse of the wave function you'd need a source of randomness. So you'd need a probabilistic Turing machine.

+",29,,,,,03-12-2018 17:18,,,,3,,,,CC BY-SA 3.0 +6,1,48,,03-12-2018 17:21,,9,242,"
+

The minimum size of a computer that could simulate the universe would + be the universe itself.

+
+ +

This is quite a big claim in classical computing and physics, because to contain the information of the whole universe, you require a minimum information storage space that is the size of the universe itself.

+ +

But quantum computing computes and stores data in parallel with other data, and thus, while being efficient, is also more compact. We are talking about ideal systems, so cooling mechanisms don't count as part of the computer.

+ +

Then, could such a system simulate the whole universe?

+ +

(I thought of a solution that I don't know how to actually prove. My logic is mostly based on the many worlds interpretation of quantum mechanics and that a quantum computer actually uses different universes to compute in parallel, thus increasing memory space and speed).

+ +

Any inputs will be gladly received and are highly appreciated.

+",14,,55,,11-07-2019 11:10,11-07-2019 11:10,Simulating a system inside a system,,1,0,,,,CC BY-SA 4.0 +7,1,89,,03-12-2018 17:21,,20,602,"

Recent research indicates that quantum algorithms are able to solve typical cryptology problems much faster than classical algorithms.

+ +

Have any quantum algorithms for encryption been developed?

+ +

I'm aware of BB84, but it only seems to be a partial solution, addressing the networking side.

+",31,,31,,03-12-2018 17:36,12/29/2019 16:36,How is quantum cryptography different from cryptography used nowadays?,,3,0,,,,CC BY-SA 3.0 +8,1,10,,03-12-2018 17:22,,11,225,"
    +
  1. The enthusiast-level, inaccurate knowledge about quantum computers is that they can solve many exponentially solvable problems in polynomial time.
  2. +
  3. The enthusiast-level, inaccurate knowledge about chaotic systems is that being highly sensitive to initial conditions, their prediction and control is very hard above a - typically, not enough - accuracy.
  4. +
+ +

Today, one of the most famous practical usage of chaotic systems is the problem of modeling the weather of the Earth.

+ +

Putting (1) and (2) together, I think using quantum computers, we may have a significant (polynomial to exponential) step to handle them. Is it correct?

+ +

Do we have any essential advantage to handle chaos even more than this?

+",27,,26,,12/13/2018 19:35,12/13/2018 19:35,Does quantum computing have an essential advantage in analyzing/controlling chaotic systems?,,2,1,,,,CC BY-SA 3.0 +10,2,,8,03-12-2018 17:30,,4,,"

Not always. Some problems have non-deterministic solutions. Apart from that, some problems are, as you say, so sensitive to changes in initial conditions that most solutions are too localized.

+ +

But there are cases where quantum computers can provide insightful results, that might shed light on different approaches to solutions.

+ +

Another point to consider is the use of Numerical methods in chaotic systems. Some methods are more optimal than others, at the cost of accuracy. With quantum computers, computation time decreases by a lot (acc. to theory), which may allow more accurate calculations, leading to a better understanding of the more difficult chaotic systems.

+ +

To clarify: Quantum computers might not be able to give an analytical solution (even to problems that might have such solutions), but a more accurate approximation can often lead to a new understanding of the problem, which is a way to handle problems.

+",13,,13,,03-12-2018 17:36,03-12-2018 17:36,,,,0,,,,CC BY-SA 3.0 +11,2,,8,03-12-2018 17:31,,5,,"

No.

+ +

Chaos (as described in chaotic systems) is deterministic, and the evolution of such a system can be calculated using classical deterministic equations. The problem is the strong divergence of trajectories: even small differences in initial values can lead to large differences in the final values.
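A standard illustration of this divergence (the logistic map — a textbook example, not something from the answer itself): the update rule is fully deterministic, yet a $10^{-10}$ difference in the initial value is amplified to a macroscopic gap within a hundred iterations.

```python
x, y = 0.2, 0.2 + 1e-10  # two nearby initial conditions
max_gap = 0.0
for _ in range(100):
    x, y = 4 * x * (1 - x), 4 * y * (1 - y)  # logistic map, r = 4
    max_gap = max(max_gap, abs(x - y))
print(max_gap)  # many orders of magnitude larger than the initial 1e-10
```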

+ +

Quantum computing does not help in this situation.

+",18,,,,,03-12-2018 17:31,,,,0,,,,CC BY-SA 3.0 +12,1,24,,03-12-2018 17:36,,77,11122,"

Is there any way to emulate a quantum computer in my normal computer, so that I will be able to test and try quantum programming languages (such as Q#)? I mean something that I can really test my hypothesis and gets the most accurate results.

+ +

Update: I'm not really looking for simulating a quantum computer, but I'm not sure if its possible to efficiently emulate one on a normal non-quantum based PC.

+",40,,26,,03-12-2019 09:12,11-07-2019 19:07,Are there emulators for quantum computers?,,7,3,,,,CC BY-SA 3.0 +13,2,,3,03-12-2018 17:40,,19,,"

I suppose there is a type of encryption that is not crackable using quantum computers: a one-time pad, for example a Vigenère cipher whose key is at least as long as the encoded string and is used only once. This cipher is impossible to crack even with a quantum computer.

+ +

I will explain why:

+ +

Let's assume our plaintext is ABCD. The corresponding key could be 1234. If you encode it, then you get XYZW. Now you can use 1234 to get ABCD, or 4678 to get EFGH, which might be a valid sentence too.

+ +

So the problem is that nobody can decide whether you meant ABCD or EFGH without knowing your key.
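This ambiguity is the perfect-secrecy property of the one-time pad. A sketch using XOR instead of the letter arithmetic above (the byte values are arbitrary): for any ciphertext, every equal-length plaintext is reachable under some key.

```python
def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

ciphertext = bytes([0x13, 0x37, 0xCA, 0xFE])  # an arbitrary ciphertext

# For any candidate plaintext there is a key that "explains" it:
key1 = xor_bytes(ciphertext, b"ABCD")
key2 = xor_bytes(ciphertext, b"EFGH")

print(xor_bytes(ciphertext, key1))  # b'ABCD'
print(xor_bytes(ciphertext, key2))  # b'EFGH'
```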

+ +

The only reason this kind of encryption can be cracked is that the users are lazy and use a key twice; then you can try to crack it. Another problem is, as @peterh stated, that one-time pads require a secret channel over which to share the key.

+",11,,253,,04-02-2018 08:42,04-02-2018 08:42,,,,2,,,,CC BY-SA 3.0 +14,2,,3,03-12-2018 17:45,,17,,"

Yes, there are a lot of proposals for post-quantum cryptographic algorithms that provide the cryptographic primitives that we are used to (including asymmetric encryption with private and public keys).

+",18,,18,,03-12-2018 18:11,03-12-2018 18:11,,,,0,,,,CC BY-SA 3.0 +17,2,,7,03-12-2018 17:50,,3,,"

There is a cryptographic primitive that is only realisable with quantum computation: a revocable timelock. The basic idea is to set up a problem that needs a certain time to be solved on a quantum computer, but where the quantum computation can be cancelled in a provable way.

+",18,,,,,03-12-2018 17:50,,,,0,,,,CC BY-SA 3.0 +18,2,,1,03-12-2018 17:59,,50,,"

Yes, a quantum computer could be simulated by a Turing machine, though this shouldn't be taken to imply that real-world quantum computers couldn't enjoy quantum advantage, i.e. a significant implementation advantage over real-world classical computers.

+ +

As a rule-of-thumb, if a human could manually describe or imagine how something ought to operate, that imagining can be implemented on a Turing machine. Quantum computers fall into this category.

+ +

At current, a big motivation for quantum computing is that qubits can exist in superpositions,$$ +\left| \psi \right> = \alpha \left| 0 \right> + \beta \left| 1 \right>, \tag{1} +$$essentially allowing for massively parallel computation. Then there's quantum annealing and other little tricks that are basically analog computing tactics.

+ +

But, those benefits are about efficiency. In some cases, that efficiency is beyond astronomical, enabling stuff that wouldn't have been practical on classical hardware. This causes quantum computing to have major applications in cryptography and such.

+ +

However, quantum computing isn't currently motivated by a desire for things that we fundamentally couldn't do before. If a quantum computer can perform an operation, then a classical Turing machine could perform a simulation of a quantum computer performing that operation.

+ +

Randomness isn't a problem. I guess two big reasons:

+ +
    +
  1. Randomness can be more precisely captured by using distribution math anyway.

  2. +
  3. Randomness isn't a real ""thing"" to begin with; it's merely ignorance. And we can always produce ignorance.

  4. +
+",15,,15,,4/29/2018 21:10,4/29/2018 21:10,,,,4,,,,CC BY-SA 3.0 +20,1,1449,,03-12-2018 18:02,,8,589,"

Quantum computing can be used to efficiently simulate quantum many-body systems. +Solving such a problem is classically hard because its complexity grows exponentially with the problem size (roughly with the degrees of freedom), which is an inherent consequence of the Schroedinger equation.

+ +

My intuitive understanding of this fact is that using quantum computers, we can essentially simulate the quantum many-body system, thus making the theoretical calculation essentially an experiment.

+ +

What about the reverse problem?

+ +

More specifically, consider the situation in which

+ +
    +
  • we have a description of a quantum many-body system, i.e. we know a formalized set of requirements whose behavior it should follow,
  • +
  • and we are trying to find the actual system for this description?
  • +
+ +

In a practical example, we have the required properties of a chemical compound. The goal is to find a chemical formula which fulfills the requirements.

+ +

This looks to me like a harder task than calculating the physical properties of a known compound (which is, in essence, ""only"" a solution of the Schroedinger equation).

+ +

For example, such a description of a practical problem in human language would be this:

+ +
+

I want a room temperature superconductor.

+
+ +

And the output would be:

+ +
+

The formula is: ...

+
+ +
+ +

Extension:

+ +

We have the dynamics, or some type of behavior of the system. In the case of chemical compounds, it could be, for example, the excitation spectrum or the superconducting transition temperature. The important thing is that the direction is different: we do not want to calculate the behavior from a given QM system (= Schroedinger equation); instead, we have a desired behavior (here: superconducting transition temperature), we have a nearly-infinite set of possible compounds, and we are looking for the compound which fulfills the selection criterion ($T_c \geq 300K$).

+",27,,26,,5/14/2019 14:18,5/14/2019 14:18,Can we synthesize quantum many body systems with quantum computers quickly in the general case?,,3,0,,,,CC BY-SA 4.0 +21,1,178,,03-12-2018 18:04,,-3,659,"

Quantum entanglement is 2 atoms that are paired together and when you stop one from spinning the other also stops with the same spin. Can you use these pairs to have faster-than-light (FTL) communication between 2 computers?

+",61,,26,,5/14/2019 14:58,5/14/2019 14:58,Quantum entanglement for faster-than-light (FTL) network communication?,,1,2,,,,CC BY-SA 4.0 +22,2,,4,03-12-2018 18:06,,4,,"

Yes, it is possible. The quantum gates are designed such that given input states are transformed to well defined output states with computable probabilities. The transformation does not constitute a measurement in the sense of quantum mechanics, this means that we can have entangled states in the output of a quantum gate and use these states for further computation.

+ +

Note also that the input states are no longer accessible after being transformed by a quantum gate. If you want to get them back, you have to apply an inverse gate.

+",18,,18,,03-12-2018 18:09,03-12-2018 18:09,,,,2,,,,CC BY-SA 3.0 +23,1,25,,03-12-2018 18:12,,36,2860,"

Similar to the question Could a Turing Machine simulate a quantum computer?: given a 'classical' algorithm, is it always possible to formulate an equivalent algorithm which can be performed on a quantum computer? If yes, is there some kind of procedure we can follow for this? The resulting algorithm will probably not take full advantage of the possibilities of quantum computing; it's more of a theoretical question.

+",45,,55,,9/23/2020 12:19,2/15/2022 9:52,Can a quantum computer simulate a normal computer?,,3,0,,,,CC BY-SA 3.0 +24,2,,12,03-12-2018 18:12,,43,,"

Yes, it's possible (but slow). There are a couple of existing (this is only a partial list) emulators:

+ +
    +
  • QDD: A Quantum Computer Emulation Library + +
    +

    QDD is a C++ library which provides a relatively intuitive set of quantum computing constructs within the context of the C++ programming environment. QDD is unique in that its emulation of quantum computing is based upon a Binary Decision Diagram (BDD) representation of the quantum state.

    +
  • +
  • jQuantum + +
    +

    jQuantum is a program which simulates a quantum computer. You can design quantum circuits with it and let them run. The current state of the quantum register is illustrated.

    +
  • +
  • QCE + +
    +

    QCE is a software tool that emulates various hardware designs of Quantum Computers. QCE simulates the physical processes that govern the operation of a hardware quantum processor, strictly according to the laws of quantum mechanics. QCE also provides an environment to debug and execute quantum algorithms under realistic experimental conditions.

    +
  • +
+ +

(In addition, Q# only works with MS's QDK, thanks @Pavel)

+ +

The downside to all of these is simple: they still run on binary (non-quantum) circuits. To the best of my knowledge, there's no easily accessible quantum computer to use for running these things. And since it takes multiple binary bits to express a single qubit, the amount of computational power needed to simulate a quantum program gets large very quickly.

+ +

I'll quote a paper on the subject (J. Allcock, 2010):

+ +
+

Our evaluation shows that our implementations are very accurate, but at the same time we use a significant amount of additional memory in order to achieve this. Reducing our aims for accuracy would allow us to decrease representation size, and therefore emulate more qubits with the same amount of memory.

+
+ +

p 89, section 5.1

+ +

As our implementations get more accurate, they also get slower.

+ +

TL;DR: it's possible, and some emulators exist, but none are very efficient for large numbers of qubits.

+",,user7,,user7,03-12-2018 19:27,03-12-2018 19:27,,,,3,,,,CC BY-SA 3.0 +25,2,,23,03-12-2018 18:21,,29,,"

Yes, it can do so in a rather trivial way: Use only reversible classical logical gates to simulate computations using boolean logic (for instance, using Toffoli gates to simulate NAND gates), use only the standard basis states $\lvert 0\rangle$ and $\lvert 1\rangle$ as input, and only perform standard basis state measurements at the output. In this way you can simulate exactly the same calculations as the classical computer does, on a gate-by-gate basis.
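The Toffoli-simulates-NAND step can be shown with a truth-table check: with the target wire initialised to 1, the Toffoli's output wire carries the NAND of the two controls, and the computation stays reversible.

```python
def toffoli(a, b, c):
    """Toffoli (CCNOT) on classical bits: flips c iff a AND b."""
    return a, b, c ^ (a & b)

for a in (0, 1):
    for b in (0, 1):
        _, _, out = toffoli(a, b, 1)           # target initialised to 1
        assert out == (0 if (a and b) else 1)  # NAND(a, b)
print("Toffoli with target=1 computes NAND")
```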

+",18,,18,,2/15/2022 9:52,2/15/2022 9:52,,,,2,,,,CC BY-SA 4.0 +27,2,,12,03-12-2018 18:25,,14,,"

If you're specifically looking at Q#, then it's super easy to use with an emulator -- in fact, it's not possible to have Q# but not have the emulator, they're bundled together.

+ +

To get started, first you need to download .NET Core from Microsoft's website.

+ +

When you download Microsoft's Quantum Development Kit through dotnet new -i ""Microsoft.Quantum.ProjectTemplates::0.2-*"" or Microsoft's website, it downloads both the language and Microsoft's own emulator together.

+ +

Creating a new Q# project (dotnet new console -lang Q#) will automatically configure it to use the emulator, so when you type in some Q# and run the project it ""just works"".

+",70,,,,,03-12-2018 18:25,,,,0,,,,CC BY-SA 3.0 +30,2,,12,03-12-2018 18:56,,26,,"

Yes, it is possible to simulate a quantum computer on a normal one – but you most likely have to sacrifice efficiency.

+

The dimension of the state space rises exponentially with the number of qubits ($2^n$, where $n$ is the number of qubits), so the linear algebra you will be dealing with won't be too light – you'll encounter very large matrices, and the algorithm you use (regardless of how efficient it is) will likely scale exponentially pretty fast. However, emulating a QC on a normal machine is definitely possible.
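To put rough numbers on this (a back-of-the-envelope sketch: one double-precision complex amplitude per basis state, 16 bytes each):

```python
# Memory footprint of a dense state vector for n qubits.
rows = [(n, 2**n, 2**n * 16 / 2**30) for n in (10, 20, 30, 40)]
for n, amps, gib in rows:
    print(f"{n} qubits: {amps} amplitudes, {gib:g} GiB")
```

A dense 30-qubit state already needs 16 GiB, and 40 qubits needs 16 TiB, which is why dense state-vector emulation tops out at a few dozen qubits.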

+
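To make the exponential cost concrete, here is a toy statevector update (a sketch of my own, not one of the emulators listed below): it stores all $2^n$ amplitudes explicitly and applies a single-qubit gate by tensor contraction.

```python
import numpy as np

# Toy statevector emulation: an n-qubit state is a vector of 2**n
# complex amplitudes, so storage doubles with every added qubit.
def apply_single_qubit_gate(state, gate, target, n):
    # reshape to one axis per qubit, contract the 2x2 gate with the
    # target axis, then restore the flat 2**n vector
    psi = state.reshape([2] * n)
    psi = np.tensordot(gate, psi, axes=([1], [target]))
    psi = np.moveaxis(psi, 0, target)
    return psi.reshape(2 ** n)

n = 3
state = np.zeros(2 ** n, dtype=complex)
state[0] = 1.0  # |000>
hadamard = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
state = apply_single_qubit_gate(state, hadamard, 0, n)
# state is now (|000> + |100>) / sqrt(2)
```

Each extra qubit doubles both the vector length and the work, which is exactly the scaling described above.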
+

Resources

+

You may be interested in Q# as other answers noted. Some more emulators:

+
    +
  • Quantum Computing Playground

    +
    +

    Quantum Computing Playground is a browser-based WebGL Chrome Experiment. It features a GPU-accelerated quantum computer with a simple IDE interface, and its own scripting language with debugging and 3D quantum state visualization features. Quantum Computing Playground can efficiently simulate quantum registers up to 22 qubits, run Grover's and Shor's algorithms, and has a variety of quantum gates built into the scripting language itself.

    +
    +
  • +
  • QX Simulator

    +
    +

    The QX Simulator is a universal quantum computer simulator developed at QuTech by Nader Khammassi. The QX allows quantum algorithm designers to simulate the execution of their quantum circuits on a quantum computer. The simulator defines a low-level quantum assembly language, namely Quantum Code, which allows the users to describe their circuits in a simple textual source code file. The source code file is then used as the input of the simulator which executes its content.

    +
    +
  • +
  • Quantum++

    +
    +

    Quantum++ is a modern C++11 general purpose quantum computing library, composed solely of template header files. Quantum++ is written in standard C++11 and has very low external dependencies, using only the Eigen 3 linear algebra header-only template library and, if available, the OpenMP multi-processing library.

    +
    +
  • +
  • Quantum Computer Language

    +
    +

    Despite many common concepts with classical computer science, quantum computing is still widely considered as a special discipline within the broad field of theoretical physics. [...] QCL (Quantum Computation Language) tries to fill this gap: QCL is a high level, architecture independent programming language for quantum computers, with a syntax derived from classical procedural languages like C or Pascal. This allows for the complete implementation and simulation of quantum algorithms (including classical components) in one consistent formalism.

    +
    +
  • +
  • More relevant emulators can be found on Quantiki

    +
  • +
+",35,,-1,,6/18/2020 8:31,03-12-2018 21:22,,,,0,,,,CC BY-SA 3.0 +39,1,51,,03-12-2018 19:10,,24,1341,"

At a very basic level, reading or measuring a qubit forces it to be in one state or the other, so the operation of a quantum computer to gain a result collapses the state into one of many possibilities.

+ +

But as the state of each qubit is probabilistic, surely this means the result can actually be any of those possibilities, with varying likelihood. If I re-run the programme - should I expect to see different results?

+ +

How can I be sure I have the ""best"" result? What provides that confidence? I assume it cannot be the interim measurements as described in this question as they do not collapse the output.

+",67,,26,,5/16/2019 6:36,5/16/2019 11:45,"What level of ""confidence"" of the result from a quantum computer is possible?",,3,0,,,,CC BY-SA 4.0 +41,1,62,,03-12-2018 19:53,,20,929,"

Is a dilution refrigerator the only way to cool superconducting qubits down to 10 millikelvin? If not, what other methods are there, and why is dilution refrigeration the primary method?

+",91,,26,,5/14/2019 14:05,5/14/2019 14:05,What cryogenic systems are suitable for superconducting qubits?,,1,0,,,,CC BY-SA 3.0 +48,2,,6,03-12-2018 20:07,,5,,"

tl;dr- Quantum computers can't really help us to simulate the whole universe, as the universe is likely vastly more complex than even quantum mechanics can capture; besides, we can't even begin to guess how big it is or what many of its other basic features are. In short, simulating the whole universe is beyond sci-fi.

+ +
+ +

We can't really simulate the entire universe, in large part because we have no idea what the universe is.

+ +

I mean, at current, we have a vague picture of the observable universe:

+ +

+ +

And we've got the Standard Model that describes our current best guess at describing much of what we can see. But, beyond that, we don't know much.

+ +

Examples of stuff we don't know:

+ +
    +
  1. Is the observable universe a significant part of the larger universe? Or is it unimaginably small compared to the whole?

  2. +
  3. Is there a lot of weakly-interacting stuff out there, e.g. dark matter, that might compose what'd kinda be like a parallel world?

  4. +
  5. Suppose dark matter is real (which many physicists currently believe). Then, what if something weakly interacts with dark matter, but only with normal matter indirectly through affecting dark matter? And what if that's a recursive relationship - do we even know a non-trivial portion of what exists within the domain of the observable universe?

  6. +
  7. What exists at the bottom? We really don't know much beneath the Planck scale; it could be the case that what seem to be fundamental particles to us are actually unimaginably large universes themselves!

  8. +
  9. Extra-dimensions, what's-in-black-holes, string theory, etc., etc..

  10. +
+ +

Basically, we know essentially nothing about the larger universe other than that we're part of it. Given this unimaginable ignorance, merely having quantum computers won't really help us simulate the whole thing.

+ +

That said, what quantum computers can do is help us simulate quantum systems of comparable size as well as a bunch of other interesting problems. Presumably we'll gain a better understanding of the possibilities as time goes on.

+",15,,,,,03-12-2018 20:07,,,,0,,,,CC BY-SA 3.0 +49,1,52,,03-12-2018 20:19,,8,453,"

I don't have any specific task or algorithm in mind, so depending on how they were tested – is there any research which shows just how much faster the D-Wave Two computer was (in terms of computational performance) than its predecessor (D-Wave One)?

+",99,,26,,3/26/2018 16:01,3/26/2018 16:01,How much faster is “D-Wave Two” compared to its predecessor?,,2,0,,,,CC BY-SA 3.0 +50,1,66,,03-12-2018 20:25,,3,1842,"

This CDMTCS Report 514, 2017 entitled ""The Road to Quantum Computational Supremacy"" states (in Section 6) that the amount of memory needed to simulate random quantum circuits on classical computers increases rapidly with the number of qubits of the circuit. A circuit on 48 qubits would require a huge amount of memory to be stored.

+ +

How much is it exactly? And how can you calculate it?

+",99,,1847,,4/16/2018 10:05,4/21/2020 21:14,How much memory is required to simulate a 48-qubit circuit?,,2,1,,,,CC BY-SA 3.0 +51,2,,39,03-12-2018 20:26,,14,,"

The majority of useful/relatively efficient algorithms1 for quantum computers belong to the 'bounded-error quantum polynomial time' (BQP) complexity class. By this definition, you want the 'failure rate' of any quantum algorithm to be $\leq\frac{1}{3}$, or $\mathbb{P}\left(\text{success}\right) \geq \frac{2}{3}$, although the result may still be within some small error. A non-probabilistic algorithm (that can run in polynomial time) will still be in this complexity class, with the only difference being that it always returns the correct result2.

+ +

However, as you can run an algorithm an arbitrary number of times and take a majority vote, this threshold is equivalent to demanding a success probability of at least $\frac{1}{2} + n^{-c}$ for an input of length $n$ and any positive constant $c$.

+ +
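To see why repetition bounds the error (a quick sketch of my own, not from the original answer), one can compute the probability that a strict majority of $k$ independent runs is correct:

```python
from math import comb

# Probability that a strict majority of k independent runs (k odd)
# is correct, when each run is correct with probability p
def majority_correct(p, k):
    return sum(comb(k, j) * p ** j * (1 - p) ** (k - j)
               for j in range(k // 2 + 1, k + 1))

# with p = 2/3, confidence grows quickly with repetition:
# majority_correct(2/3, 1) = 0.667, majority_correct(2/3, 101) > 0.999
```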

So, the 'correct' result is the one that appears at least two thirds of the time, unless you want a 'one-shot' computation such as if you want to generate random numbers, or if you want to do something such as benchmark the quantum chip, where the statistics matter and are part of the 'result'.

+ +

Aside from these (or other algorithms that don't have a single 'correct result'), if you find an algorithm with a success rate below a half, it is no longer 'bounded error' and it may not be possible for the user to know the correct result - there may simply be a wrong answer with a higher probability of occurring than the correct one.

+ +

Yes, you may see a different result each time you run a calculation. The confidence in the result is provided by:

+ +
    +
  1. The quantum algorithm itself ensuring that the correct result happens with high probability and;
  2. +
  3. Repeating the algorithm a number of times in order to find the most probable result.
  4. +
+ +
+ +

1 Here, algorithms that can be computed in polynomial time to give a solution with 'high probability', although for the purposes of this answer, the time complexity is of lesser importance

+ +

2 Well, idealistically, at least

+",23,,23,,3/13/2018 0:19,3/13/2018 0:19,,,,2,,,,CC BY-SA 3.0 +52,2,,49,03-12-2018 20:30,,10,,"

As Troyer and Lidar saw no speed increase with the D-Wave 1 compared to classical computers, the D-Wave 2 benchmark figure reported in 2013 of 3600 times as fast as CPLEX (the best algorithm on a conventional machine) suggests the D-Wave 2 is 3600 times as fast as the D-Wave 1.

+ +

However:

+ +
    +
  • the results are in a pretty restricted set of parameters, so this may not be relevant for other parameters. (as an example, the benchmark figures for the D-Wave 2000Q only take constant factor performance gains into account)
  • +
  • the configuration of the CPLEX may not compare directly to the classical computers used to benchmark the D-Wave 1
  • +
+",67,,,,,03-12-2018 20:30,,,,0,,,,CC BY-SA 3.0 +53,1,111,,03-12-2018 20:32,,17,349,"

Background

+ +

Recently I came upon a research article entitled Experimental Demonstration of Blind Quantum Computing. Within this research article, the scientists claimed that - through the proper choice of a generic structure - a data engineer can hide the information about how the data was calculated.

+ +

Question

+ +

If a scientist were to use a BQC (Blind Quantum Computation) protocol to calculate private measurements, what types of variables would they have to use to formulate a generic structure for the blind quantum state?

+ +

Thoughts

+ +

I would like to understand what types of variables could go into the generic structure in order to help keep the data calculations hidden from the server. If you select certain known generic variables, I do not understand why the selection of other known generic variables would prevent the data calculations from being hidden.

+",82,,26,,5/14/2019 14:02,5/14/2019 14:02,Blind quantum computing — generic structure variable selection,,2,0,,,,CC BY-SA 4.0 +57,2,,4,03-12-2018 21:31,,3,,"

As you said, the probabilities of measurements are obtained from the state. And the gates operate unitarily on the states. Consider the POVM element $\Pi$, a state $\rho$ and a gate $U$. Then the probability for the outcome associated with $\Pi$ is $p=\mathrm{tr} (\Pi \rho)$, and the probability after the gate is $p'=\mathrm{tr}(\Pi U \rho U^\dagger)$.

+ +

I just want to stress that it is impossible to know the probability of the outcome after the gate only from the probability of it before the gate. You need to consider the probability amplitudes (the quantum states)!

+ +
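As a concrete numerical check of these formulas (the choice of $\Pi$, $\rho$ and $U$ below is my own example): take $\Pi = \lvert 0\rangle\langle 0\rvert$, $\rho = \lvert 0\rangle\langle 0\rvert$ and $U$ the Hadamard gate.

```python
import numpy as np

# p = tr(Pi rho) before the gate, p' = tr(Pi U rho U^dagger) after
Pi = np.array([[1, 0], [0, 0]], dtype=complex)   # projector onto |0>
rho = np.array([[1, 0], [0, 0]], dtype=complex)  # state |0><0|
U = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)  # Hadamard

p_before = np.trace(Pi @ rho).real
p_after = np.trace(Pi @ U @ rho @ U.conj().T).real
# p_before = 1.0, p_after = 0.5: the gate changes the outcome statistics
```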

Let me make another remark: You are talking about two qubits, so the state after the gate might be entangled. In this case it will not be possible to have ""individual"" probability distributions for each qubit for all measurements in the sense that the joint probability distribution will not factor into the two marginal distributions.

+",104,,,,,03-12-2018 21:31,,,,0,,,,CC BY-SA 3.0 +59,2,,12,03-12-2018 23:21,,29,,"

Yes, it is possible to simulate quantum computations on a classical computer. But the cost of simulations grows exponentially with qubit count and/or circuit depth and/or particular operation counts.

+ +

For trying ideas quickly, my simulator Quirk is great. It's an open-source drag-and-drop quantum circuit simulator that runs in your web browser. You can access a live version at algassert.com/quirk.

+ +

Here is a screenshot of Quirk's example Grover circuit, which is instrumented with intermediate state displays to track the ""hidden"" state becoming more likely:

+ +

+",119,,119,,3/14/2018 3:48,3/14/2018 3:48,,,,1,,,,CC BY-SA 3.0 +60,2,,39,3/13/2018 0:35,,9,,"

Elaborating somewhat on Mithrandir24601's response —

+ +

The feature you're worried about, that a quantum computer might produce a different answer on the next run of the computation, is also a feature of randomised computation. It is good in some ways to be able to obtain a single answer repeatably, but in the end it is enough to be able to obtain a correct answer with high enough confidence. Just as with a randomised algorithm, what is important is that you can be sure of the chances of getting the correct answer in any given run of the computation.

+ +

For instance, your quantum computer might give you the correct answer to a YES / NO question two times out of every three. This might seem like a poor performance, but what this means is that if you run it many times, you can simply take the majority answer and be very confident that the majority rule gives you the correct answer. (The same is true for normal randomised computation as well.) The way that the confidence increases with the number of runs means that so long as any one run gives an answer which has significantly more than just a 50% chance of being correct, you can make your confidence as high as you like just by doing a modest number of repeated runs (though more runs are required, the closer the chances of a correct answer in any one run are to 50%).

+ +

In theoretical terms, we give the name BQP to the collection of problems which are solvable in $\mathrm{poly}(n)$ computational steps by a quantum computer, for input sizes which can be specified by an $n$-bit string, where the answer is correct with probability at least 2/3; by the argument above, the exact same set of problems is given if you demand that the answer be correct with probability 999/1000, or (1 − 1e-8).

+ +

For problems which have more elaborate answers than YES / NO questions, we can't necessarily assume that the same answer will be produced more than once so that we can take a majority vote. (If you are using a quantum computer to sample from an exponential number of outcomes, there may be a smaller, but still exponentially large, number of answers which are correct and useful!) Suppose that you are trying to solve an optimisation problem: it might not be easy to verify that you have found the optimal solution, or a nearly-optimal solution — or that the answer that you've gotten is even the best that the quantum computer can do (what if the next run gives you a better answer by chance?). In this case, what is important is to determine what you know about the problem, whether there is an independent way to verify a solution (is your problem in NP, meaning that you can in principle efficiently check any answer you're given?), and what quality of solution you would be happy with.

+ +

Again, this is all true for randomised algorithms as well — the difference being that we expect quantum computers to be able to solve problems that a randomised computer alone could not easily solve.

+",124,,124,,3/13/2018 10:00,3/13/2018 10:00,,,,0,,,,CC BY-SA 3.0 +61,1,67,,3/13/2018 0:50,,11,987,"

I'm researching SPDC's efficacy for use in an optical quantum computing model and I've been trying to figure out exactly what state the photons are in when they come out (as represented by a vector, for example), if I'm using type 1 SPDC and I'm looking at the polarization of the photons.

+ +

Please provide any references used =)

+",91,,104,,3/16/2018 15:49,3/16/2018 15:49,State produced by spontaneous parametric down-conversion (SPDC),,2,0,,,,CC BY-SA 3.0 +62,2,,41,3/13/2018 4:09,,20,,"
+

Is a dilution refrigerator the only way to cool superconducting qubits down to 10 millikelvin?

+
+ +

There's another type of refrigerator that can get to 10 mK: the adiabatic demagnetization refrigerator (ADR).$^{[a]}$

+ +
+

why is dilution refrigeration the primary method?

+
+ +

To understand that, let's talk about one of the main limitations of the ADR.

+ +

How an ADR works

+ +

An ADR usually gets to about 3K with a helium compressor. +That compressor can run all the time, so the refrigerator can sit at 3K indefinitely. +To get down to mK temperatures, the ADR works like this:

+ +
    +
  1. Raise the magnetic field surrounding a solid with nuclear spins. This aligns the spins.
  2. +
  3. Slowly turn the field off. This allows the spins to randomize their direction, which absorbs entropy from the surroundings and lowers the temperature.
  4. +
  5. Once the field is back to zero, we've sucked enough heat out of the surroundings to bring them to mK temperatures.
  6. +
+ +

ADR limitations

+ +

This is all great and it really works, but it's a ""one-shot"" process. +Once the field is down to zero, you can't go any lower. +The surroundings, such as the room temperature outer parts of the refrigerator, leak heat into the part you're trying to keep cold, and since we've already lowered the magnetic field to zero, we can't do anything to remove that heat. +Therefore, after cooling the ADR, it starts to warm up (hopefully slowly enough to run your experiment).

+ +

It's typical for an ADR to stay below 100mK for maybe twelve hours, although that number depends a lot on how many wires you have running to the cold part of the ADR. +After the temperature rises above what you want, you have to raise the magnetic field again and slowly lower it to re-cool. +Raising and lowering the field takes a while and heats up the refrigerator, and that big magnetic field is often incompatible with superconducting qubit experiments, so you can't run experiments while you're in that stage of the process.

+ +

ADR vs. dilution refrigerator

+ +

The dilution refrigerator, on the other hand, runs continuously, so you have as long as you need to run your experiment. +That's a pretty big reason that they're in common use. +Note, however, that other refrigerators aside from the ADR are used in many superconducting qubit labs for tasks where the benefits of a dilution refrigerator aren't needed and the shorter cold time of an ADR is ok. +For example, ADR's are common for experiments with superconducting resonators, which are used to test the quality of materials that may later be used for a qubit.

+ +

$[a]$: Apologies for not finding a better link. Edits on that are welcome.

+",32,,,,,3/13/2018 4:09,,,,3,,,,CC BY-SA 3.0 +63,1,,,3/13/2018 6:40,,7,298,"

In this paper and this paper, the ""Noise Stability of Qubits"" (the stability of qubits to external noise) has been discussed. In the first one, Gil Kalai states that it is difficult to create a quantum computer since the noise produced in creating a few qubits is enough to reach the threshold for corrupting the entire system of qubits. What can be done to reduce the noise and increase stability of the qubits?

+ +

EDIT: To clarify the doubts mentioned in the comments, I want to add that IBM, Rigetti and some other companies have already started manufacturing Quantum Computers with 4-6 qubits. How have they overcome or what are they working on to face this problem of noise sensitivity?

+",137,,26,,12/13/2018 19:36,12/13/2018 19:36,How to make qubits more stable towards noise?,,0,1,,3/14/2018 13:27,,CC BY-SA 3.0 +64,1,4848,,3/13/2018 7:06,,16,1605,"

According to this press announcement from March 1st 2018, the Alibaba Cloud offers access to an 11 qubit quantum computer via their cloud services. Quote:

+
+

Alibaba Cloud, [...] and Chinese Academy of Sciences (CAS) [...] have launched the superconducting quantum computing cloud, featuring a quantum processor with 11 quantum bits (qubits) of power. [...]

+

Users can now access the superconducting quantum computing cloud through Alibaba Cloud’s quantum computing cloud platform to efficiently run and test custom built quantum codes and download the results.

+
+

However, I have been unable to find any mention of this service anywhere else on their site than in the press announcement. No documentation. No mention in the "Products" overview. Nothing. Does anyone know, how to get started here?

+",138,,-1,,6/18/2020 8:31,12-03-2018 11:26,How to get started with the Alibaba Cloud Quantum Computing Service?,,2,0,,,,CC BY-SA 3.0 +65,1,68,,3/13/2018 7:12,,30,1119,"

The term ""quantum supremacy"" - to my understanding - means that one can create and run algorithms to solve problems on quantum computers that can't be solved in realistic times on binary computers. However, that is a rather vague definition - what would count as ""realistic time"" in this context? Does it have to be the same algorithm or just the same problem? Not being able to simulate quantum computers of certain sizes surely can't be the best measure.

+",138,,26,,3/27/2018 9:57,3/27/2018 9:57,When will we know that quantum supremacy has been reached?,,2,0,,,,CC BY-SA 3.0 +66,2,,50,3/13/2018 7:28,,3,,"

This doesn't exactly answer your question, but it may aid you in understanding the problem and possibly the solution:

+ +

In their paper ""Breaking the 49-Qubit Barrier in the Simulation of Quantum Circuits"" (arXiv:1710.05867), the authors describe simulating a 49-qubit and a 56-qubit quantum computer. According to the paper, they required 4.5 terabytes of RAM for the 49-qubit simulation; this did however depend on the exact algorithm they ran. That said, also according to the paper, competing simulation methods would have required about 8 petabytes of RAM for that simulation.

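For the 'how can you calculate it' part: a naive full-statevector simulation stores $2^n$ complex amplitudes. Assuming 16 bytes per double-precision complex amplitude (my assumption; the papers above use cleverer, circuit-dependent representations), 48 qubits already need petabytes:

```python
# naive full-statevector memory: 2**n amplitudes at 16 bytes each
# (the 16-byte figure assumes double-precision complex numbers)
def naive_statevector_bytes(n, bytes_per_amplitude=16):
    return 2 ** n * bytes_per_amplitude

petabytes_48 = naive_statevector_bytes(48) / 1e15
# roughly 4.5 petabytes for 48 qubits, doubling with each extra qubit
```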
+",138,,,,,3/13/2018 7:28,,,,0,,,,CC BY-SA 3.0 +67,2,,61,3/13/2018 9:24,,10,,"

Background

+ +

First of all, I'll use $\lvert H\rangle$ as a horizontally polarised state and $\lvert V\rangle$ as a vertically polarised state1. There are three modes of light involved in the system: the pump (p), taken to be a coherent light source (a laser); and the signal and idler (s/i), the two generated photons.

+ +

The Hamiltonian for SPDC is given by $H = \hbar g\left(a^{\dagger}_sa^{\dagger}_ia_p + a^{\dagger}_pa_ia_s\right)$, where g is a coupling constant dependent on the $\chi^{\left(2\right)}$ nonlinearity of the crystal and $a\left(a^{\dagger}\right)$ is the annihilation (creation) operator. That is, there is a possibility of a pump photon getting annihilated and generating two photons2 as well as a possibility of the reverse.

+ +

The phase matching conditions for frequencies, $\omega_p = \omega_s + \omega_i$ and wave vectors, $\mathbf{k}_p = \mathbf{k}_s + \mathbf{k}_i$ must also be satisfied.

+ +

Type 1 SPDC

+ +

This is where the two generated (s and i) photons have parallel polarisations, perpendicular to the polarisation of the pump, which can only be used to perform SPDC when the pump is polarised along the extraordinary axis of the crystal.

+ +

This means that defining the extraordinary axis as the vertical (horizontal) direction and inputting coherent light along that axis will generate pairs of photons in the state $\lvert HH\rangle\, \left(\lvert VV\rangle\right)$. This is not of much use, so to generate an entangled pair of photons, two crystals are placed next to each other, with extraordinary axes in orthogonal directions. The coherent source is then input with a polarisation of $45^\circ$ to this, such that if the first crystal has an extraordinary axis along the vertical (horizontal) direction, there is a probability of generating photons in the state $\lvert HH\rangle\, \left(\lvert VV\rangle\right)$ as before from the first crystal, as well as a probability of generating photons in the state $\lvert VV\rangle\, \left(\lvert HH\rangle\right)$ from the second crystal.

+ +

However, as the light from the pump is travelling through a material, it will also acquire a phase in the first crystal, such that the final state is $$\lvert\psi\rangle = \frac{1}{\sqrt{2}}\left(\lvert HH\rangle + e^{i\phi}\lvert VV\rangle\right).$$

+ +
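To make this state concrete (a small numerical sketch of my own; the basis ordering $\{\lvert HH\rangle, \lvert HV\rangle, \lvert VH\rangle, \lvert VV\rangle\}$ is a choice, not from any reference):

```python
import numpy as np

# (|HH> + e^{i phi} |VV>) / sqrt(2) in the basis {HH, HV, VH, VV}
def spdc_state(phi):
    psi = np.zeros(4, dtype=complex)
    psi[0] = 1 / np.sqrt(2)                  # |HH> amplitude
    psi[3] = np.exp(1j * phi) / np.sqrt(2)   # |VV> amplitude
    return psi

# entanglement check: tracing out the idler photon leaves the signal
# photon maximally mixed, for any relative phase phi
psi = spdc_state(0.3)
rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)
rho_signal = np.trace(rho, axis1=1, axis2=3)  # partial trace over idler
```

The reduced state of either photon is maximally mixed for any $\phi$, confirming the pair is maximally entangled in polarisation.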

Due to the phase matching conditions, the emitted photon pairs will be emitted at opposite points on a cone, as shown below in Figure 1.

+ +

Figure 1: A laser beam is input into two type 1 SPDC crystals, with orthogonal extraordinary axes. This results in a probability of emitting a pair of entangled photons at opposite points on a cone. Image taken from Wikipedia.

+ +
+ +

1 This can be mapped to qubit states using e.g. $\lvert H\rangle = \lvert 0\rangle$ and $\lvert V\rangle = \lvert 1\rangle$

+ +

2 called signal and idler for historical reasons

+ +

References:

+ +

Keiichi Edamatsu 2007 Jpn. J. Appl. Phys. 46 7175

+ +

Kwiat, P.G., Waks, E., White, A.G., Appelbaum, I. and Eberhard, P.H., 1999. Physical Review A, 60(2) - and the arXiv version

+",23,,,,,3/13/2018 9:24,,,,0,,,,CC BY-SA 3.0 +68,2,,65,3/13/2018 9:27,,19,,"

The term quantum supremacy doesn't necessarily mean that one can run algorithms, as such, on a quantum computer that are impractical to run on a classical computer. It just means that a quantum computer can do something that a classical computer will find difficult to simulate.

+ +

You might ask (and rightly so) what I might possibly mean by talking about something done by a quantum computer which is not an algorithm. What I mean by this is that we can have a quantum computer perform a process which

+ +
    +
  • does not necessarily have very well-understood behaviour — in particular, there are very few things we can prove about that process;

  • +
  • in particular, that process does not 'solve' any problem of practical interest — the answer to the computation doesn't necessarily answer a question that you are interested in.

  • +
+ +

When I say that the process doesn't necessarily have well-understood behaviour, this does not mean that we don't know what the computer is doing: we will have a good description of the operations that it does. But we won't necessarily have an acute understanding of the cumulative effect on the state of the system of those operations. (The very promise of quantum computation was originally proposed because quantum mechanical systems are difficult to simulate, which meant that a quantum computer might be able to simulate other systems which are difficult to simulate.)

+ +
+ +

You might ask what the point is of having a quantum computer do something which is difficult to simulate if the only reason is that it is difficult to simulate. The reason for this is: it demonstrates a proof of principle. Suppose that you can build quantum systems with 35 qubits, with 40 qubits, with 45 qubits, 50 qubits, and so forth — each built according to the same engineering principles, each of them simulatable in practice, and each behaving the way that the simulation predicts (up to good tolerances), but where each simulation is much more resource-intensive than the last. Then once you have a system on 55 or 60 qubits that you can't simulate with the world's largest supercomputer, you could argue that you have an architecture that builds reliable quantum computers (based on the sizes you can simulate), and which can be used to build quantum computers large enough that no known simulation technique can predict their behaviour (and where perhaps no such technique is even possible).

+ +

This stage in itself is not necessarily useful for anything, but it is a necessary condition to being able to solve interesting problems on a quantum computer more quickly than you can on a classical computer. The fact that you can't necessarily solve 'interesting' problems at this stage is one reason why people are sometimes dissatisfied with the term 'supremacy'. (There are other reasons to do with political connotations, which are justified in my opinion but off-topic here.) Call it ""quantum ascendancy"", if you prefer — meaning that it marks a point at which quantum technologies are definitely becoming significant in power, while not yet in any danger of replacing the mobile phone in your pocket, desktop computers, or even necessarily industrial supercomputers — but it is a point of interest in the developmental curve of any quantum computational technology.

+ +
+ +

But the bottom line is that, yes, ""quantum supremacy"" is precisely about ""not being able to simulate quantum computers of certain sizes"", or at least not being able to simulate certain specific processes that you can have them perform, and this benchmark depends not only on quantum technology but on the best available classical technology and the best available classical techniques. It is a blurry boundary which, if we are being serious about things, we will only be confident that we have passed a year or two after the fact. But it is an important boundary to cross.

+",124,,58,,3/14/2018 15:51,3/14/2018 15:51,,,,3,,,,CC BY-SA 3.0 +69,1,94,,3/13/2018 10:05,,17,689,"

The strengthened version of the Church-Turing thesis states that:

+ +

Any algorithmic process can be simulated efficiently using a Turing machine.

+ +

Now, on page 5 (chapter 1), the book Quantum Computation and Quantum Information: 10th Anniversary Edition By Michael A. Nielsen, Isaac L. Chuang goes on to say that:

+ +
+

One class of challenge to the strong Church-Turing thesis comes + from the field of analog computation. In the years since Turing, + many different teams of researchers have noticed that certain types of + analog computers can efficiently solve problems believed to have no + efficient solution on a Turing machine. At first glance these + analog computers appear to violate the strong form of the + Church-Turing thesis. Unfortunately for analog computation it turns + out that when realistic assumptions about the presence of noise in + analog computers are made, their power disappears in all known + instances; they cannot efficiently solve problems which are not + efficiently solvable on a Turing machine. This lesson – that the effects of + realistic noise must be taken into account in evaluating the + efficiency of a computational model – was one of the great early + challenges of quantum computation and quantum information, a challenge + successfully met by the development of a theory of quantum + error-correcting codes and fault-tolerant quantum computation. Thus, + unlike analog computation, quantum computation can in principle + tolerate a finite amount of noise and still retain its computational + advantages.

+
+ +

What exactly is meant by noise in this context? Do they mean thermal noise? It's strange that the authors did not define or clarify what they mean by noise in the previous pages of the textbook.

+ +

I was wondering if they were referring to noise in a more generalized setting. Like, even if we get rid of the conventional noise - like industrial noise, vibrational noise, thermal noise (or reduce them to negligible levels), noise could still refer to the uncertainties in amplitude, phase, etc, which arise due to the underlying quantum mechanical nature of the system.

+",26,,26,,3/16/2018 15:35,12/25/2019 6:37,"What exactly is meant by ""noise"" in the following context?",,3,0,,,,CC BY-SA 3.0 +70,1,71,,3/13/2018 10:12,,24,4031,"

A quantum computer can efficiently solve problems lying in the complexity class BQP. I have seen a claim that one can (potentially, because we don't know whether BQP is a proper subset of or equal to PP) increase the efficiency of a quantum computer by applying postselection, and that the class of efficiently solvable problems then becomes postBQP = PP.

+ +

What does postselection mean here?

+",18,,,,,04-11-2018 09:48,What is postselection in quantum computing?,,2,0,,,,CC BY-SA 3.0 +71,2,,70,3/13/2018 10:45,,24,,"

""Postselection"" refers to the process of conditioning on the outcome of a measurement on some other qubit. (This is something that you can think of for classical probability distributions and statistical analysis as well: it is not a concept special to quantum computation.)

+ +

Postselection has featured quite often (up to this point) in quantum mechanics experiments, because — for experiments on very small systems, involving not very many particles — it is a relatively easy way to simulate having good quantum control or feedforward. However, it is not a practical way of realising computation, because you have to condition on an outcome of one or more measurements which may occur with very low probability.

+ +

Actually 'selecting' a measurement outcome is not something you can do directly in quantum mechanics — what one actually does is throw away any run whose outcome does not allow you to do what you want to do. If the outcome which you are trying to select has probability $0 < p < 1$, you will have to try an expected number $1/p$ of times before you manage to obtain the outcome you are trying to select. If $p = 1/2^n$ for some large integer $n$, you may be waiting a very long time.
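The $1/p$ expected cost is just the mean of a geometric distribution, which is easy to check numerically. A minimal sketch in Python (the probability value and sample count are arbitrary choices for illustration):

```python
import random

# Number of attempts until the postselected outcome first occurs: for success
# probability p, this is geometrically distributed with mean 1/p.
def attempts_until_success(p, rng):
    tries = 1
    while rng.random() >= p:
        tries += 1
    return tries

rng = random.Random(42)
p = 1 / 2 ** 6          # e.g. postselecting on one specific 6-bit outcome
samples = [attempts_until_success(p, rng) for _ in range(20000)]
print(sum(samples) / len(samples))   # close to 1/p = 64
```

Replacing the exponent 6 by $n$ makes the expected waiting time $2^n$, which is the impracticality described above.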

+ +

The result that postselection 'increases' (as you say) the power of bounded-error quantum computation from BQP to PP is a well-liked result in the theory of quantum computation, not because it is practical, but because it is a simple and crisp result of a sort which is rare in computational complexity, and is useful for informing intuitions about quantum computation — it has led onward to ideas of ""quantum supremacy"" experiments, for example. But it is not something which you should think of as an operation which is freely available to quantum computers as a practical technique, unless you can show that the outcomes which you are trying to postselect are few enough and of high-enough probability (or, as with measurement-based computation, that you can simulate the 'desirable' outcome by a suitable adaptation of your procedure if you obtain one of the 'undesirable' outcomes).

+",124,,124,,3/13/2018 11:38,3/13/2018 11:38,,,,0,,,,CC BY-SA 3.0 +74,1,,,3/13/2018 11:07,,58,4467,"

It seems that quantum computing is often taken to mean the quantum circuit method of computation, where a register of qubits is acted on by a circuit of quantum gates and measured at the output (and possibly at some intermediate steps). Quantum annealing at least seems to be an altogether different method of computing with quantum resources1, as it does not involve quantum gates.

+ +

What different models of quantum computation are there? What makes them different?

+ +

To clarify, I am not asking what different physical implementations qubits have, I mean the description of different ideas of how to compute outputs from inputs2 using quantum resources.

+ +
+ +

+1. Anything that is inherently non-classical, like entanglement and coherence.
+2. A process which transforms the inputs (such as qubits) to outputs (results of the computation). +

+",144,,26,,5/15/2019 14:56,5/15/2019 14:56,What are the models of quantum computation?,,6,0,,,,CC BY-SA 4.0 +75,1,,,3/13/2018 13:24,,24,790,"

Which quantum error correction code currently holds the record in terms of the highest threshold for fault-tolerance? I know that the surface code is pretty good ($\approx10^{-2}$?), but finding exact numbers is difficult. I also read about some generalizations of the surface code to 3D clusters (topological quantum error correction). I guess the main motivation for this research was to increase the threshold for calculations of arbitrary length.

+ +

My question is: Which quantum error correction code has the highest threshold (as proven at the time of writing this)?

+ +

In order to judge this value it would be nice to know what threshold is theoretically achievable. So if you know of (non-trivial) upper bounds on thresholds for arbitrary quantum error correction codes that would be nice.

+",104,,26,,05-07-2018 13:21,05-07-2018 13:21,Which quantum error correction code has the highest threshold (as proven at the time of writing this)?,,3,0,,,,CC BY-SA 3.0 +80,2,,65,3/13/2018 13:45,,7,,"

The term quantum supremacy, as introduced by Preskill in 2012 (1203.5813), can be defined by the following sentence:

+ +
+

We therefore hope to hasten the onset of the era of quantum supremacy, when we + will be able to perform tasks with controlled quantum systems going beyond what + can be achieved with ordinary digital computers.

+
+ +

Or, as wikipedia rephrases it, quantum supremacy is the potential ability of quantum computing devices to solve problems that classical computers practically cannot.

+ +

It should be noted that this is not a precise definition in the mathematical sense. What you can make precise statements on is how the complexity of a given problem scales with the dimension of the input (say, the number of qubits to be simulated, if one is dealing with a simulation problem). +Then, if it turns out that quantum mechanics allows solving the same problem more efficiently (and, crucially, you are able to prove it), then there is room for a quantum device to demonstrate (or rather, provide evidence towards) quantum supremacy (or quantum advantage, or however you prefer to call it, see for example the discussion in the comments here).

+ +
+ +

So, in light of the above, when exactly can one claim to have reached the quantum supremacy regime? At the end of the day, there is no single magic number that brings you from the ""classically simulatable regime"" to the ""quantum supremacy regime"", and this is more of a continuous transition, in which one gathers more and more evidence towards the statements that quantum mechanics can do better than classical physics (and, in the process, provide evidence against the Extended Church-Turing thesis).

+ +

On the one hand, there are regimes which obviously fall into the ""quantum supremacy regime"". This is when you manage to solve a problem with a quantum device that you just cannot solve with a classical device. For example, if you manage to factorize a huge number that would take the age of the universe to compute with any classical device (and assuming someone managed to prove that Factoring is indeed classical hard, which is far from a given), then it seems hard to refute that quantum mechanics does indeed allow to solve some problems more efficiently than classical devices.

+ +

But the above is not a good way to think of quantum supremacy, mostly because one of the main points of quantum supremacy is as an intermediate step before being able to solve practical problems with quantum computers. Indeed, in the quest for quantum supremacy, one relaxes the requirement of trying to solve useful problems and just tries to attack the principle that at least for some tasks, quantum mechanics does indeed provide advantages.

+ +

When you do this and ask for the simplest possible device that can demonstrate quantum supremacy, things start to get tricky. You want to find the threshold above which quantum devices are better than classical ones, but this amounts to compare two radically different kinds of devices, running radically different kinds of algorithms. +There is no easy (known?) way to do this. +For example, do you take into account how expensive it was to build the two different devices? And what about comparing a general purpose classical device with a special purpose quantum one? Is that fair? +What about validating the output of the quantum device, is that required? Also, how strict do you require your complexity results to be? +A proposed reasonable list of criteria for a quantum supremacy experiment, as given by Harrow and Montanaro (nature23458, paywalled), is$^1$:

+ +
  1. A well-defined computational problem.
  2. A quantum algorithm solving the problem which can run on near-term hardware capable of dealing with noise and imperfections.
  3. An amount of computational resources (time/space) allowed to any classical competitor.
  4. A small number of well-justified complexity-theoretic assumptions.
  5. A verification method that can efficiently distinguish the performance of the quantum algorithm from that of any classical competitor using the allowed resources.
+ +

To better understand the issue one may have a look at the discussions around D-Wave's claims in 2015 of a ""$10^8$ speedup"" with their device (which holds only when using appropriate comparisons). +See for example the discussions on this blog post by Scott Aaronson and references therein (and, of course, the original paper by Denchev et al. (1512.02206)).

+ +

Also regarding the exact thresholds separating the ""classical"" from the ""quantum supremacy"" regime, one may have a look at the discussions around the number of photons required to claim quantum supremacy in a boson sampling experiment. +The reported number was initially around 20 and 30 (Aaronson 2010, Preskill 2012, Bentivegna et al. 2015, among others), then briefly went as low as seven (Latmiral et al. 2016), and then up again as high as ~50 (Neville et al. 2017, and you may have a look at the brief discussion of this result here).

+ +

There are many other similar examples that I didn't mention here. For example there is the whole discussion around quantum advantage via IQP circuits, or the number of qubits that are necessary before one cannot simulate classically a device (Neill et al. 2017, Pednault et al. 2017, and some other discussions on these results). +Another nice review I didn't include above is this Lund et al. 2017 paper.

+ +

(1) I'm using here the rephrasing of the criteria as given in Calude and Calude (1712.01356).

+",55,,58,,3/14/2018 9:51,3/14/2018 9:51,,,,0,,,,CC BY-SA 3.0 +82,2,,69,3/13/2018 14:47,,2,,"

Asking the author to clarify would give you the exact answer you are looking for. However, based upon the context provided I believe this may be related to the problem quantum noise spectroscopy attempts to solve.

+ +

Noise

+ +

According to a team of Dartmouth researchers led by Professor Lorenza Viola,

+ +
+

These quantum properties are essential for quantum computing, but they are easily lost through decoherence, when quantum systems are subject to ""noise"" in an external environment.

+
+ +

The quantum properties she is referring to are quantum system properties such as the ability to be in a superposition of two different states simultaneously, as stated in the same article.

+ +

My Conclusion

+ +

Therefore, based upon both the context provided in the question and the context provided by the team of Dartmouth researchers, I would conclude that the noise the book refers to is environmental noise.

+",82,,,,,3/13/2018 14:47,,,,0,,,,CC BY-SA 3.0 +84,2,,4,3/13/2018 15:29,,7,,"

Case I: The 2 qubits are not entangled.

+ +

You can write the states of the two qubits (say $\mathrm{A}$ and $\mathrm{B}$) as $|\psi_\mathrm{A}\rangle=a|0\rangle+b|1\rangle$ and $|\psi_\mathrm{B}\rangle = c|0\rangle+d|1\rangle$ where $a,b,c,d\in\Bbb{C}$.

+ +

The individual qubits reside in two-dimensional complex vector spaces $\Bbb{C}^2$ (over the field $\Bbb{C}$). But the state of the system is a vector (or point) residing in a four-dimensional complex vector space $\Bbb{C}^4$ (over the field $\Bbb{C}$).

+ +

The state of the system can be written as a tensor product $|\psi_\mathrm{A}\rangle\otimes|\psi_\mathrm{B}\rangle$ i.e. $ac|00\rangle+ad|01\rangle+bc|10\rangle+bd|11\rangle$.

+ +

Naturally, $|ac|^2+|ad|^2+|bc|^2+|bd|^2=1$ since the state vector has to be normalized. The reason why the square of the amplitude of a basis state gives the probability of that basis state occurring when measured in the corresponding basis lies in Born's rule of quantum mechanics (some physicists consider it to be a basic postulate of quantum mechanics). Now, the probability of $|0\rangle$ occurring when the first qubit is measured is $|ac|^2+|ad|^2$. Similarly, the probability of $|1\rangle$ occurring when the first qubit is measured is $|bc|^2+|bd|^2$.

+ +

Now, what happens if we apply a quantum gate without performing any measurement on the previous state of the system? Quantum gates are unitary gates. Their action can be written as the action of a unitary operator $U$ on the initial state of the system i.e. $ac|00\rangle+ad|01\rangle+bc|10\rangle+bd|11\rangle$ to produce a new state $A|00\rangle+B|01\rangle+C|10\rangle+D|11\rangle$ (where $A,B,C,D\in\Bbb{C}$). The squared norm of this new state vector, $|A|^2+|B|^2+|C|^2+|D|^2$, again equals $1$, since the applied gate was unitary. When the first qubit is measured, the probability of $|0\rangle$ occurring is $|A|^2+|B|^2$, and similarly you can find it for the occurrence of $|1\rangle$.

+ +

But if we did perform a measurement before the action of the unitary gate, the result would be different. For example, if you had measured the first qubit and it turned out to be in the $|0\rangle$ state, the intermediate state of the system would have collapsed to $\frac{ac|00\rangle + ad|01\rangle}{\sqrt{|ac|^2+|ad|^2}}$ (according to the Copenhagen interpretation). So you can understand that applying the same quantum gate on this state would have given a different final result.

+ +

Case II: The 2 qubits are entangled.

+ +

In case the state of the system is something like $\frac{1}{\sqrt{2}}|00\rangle + \frac{1}{\sqrt{2}}|11\rangle$, you cannot represent it as a tensor product of the states of two individual qubits (try!). There are plenty more such examples. The qubits are said to be entangled in such a case.

+ +

Anyway, the basic logic still remains the same. The probability of $|0\rangle$ occurring when the first qubit is measured is $|1/\sqrt{2}|^2=\frac{1}{2}$, and that of $|1\rangle$ occurring is $\frac{1}{2}$ too. Similarly you can find out the probabilities for measurement of the second qubit.
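Both cases reduce to the same bookkeeping on the four amplitudes, which can be checked with a short plain-numpy sketch (an illustration of the formulas above, not any particular quantum library's API):

```python
import numpy as np

# A 2-qubit state is a length-4 complex vector over (|00>, |01>, |10>, |11>).
# P(first qubit measured as 0) is the total weight on |00> and |01>.
def prob_first_qubit_zero(state):
    state = np.asarray(state, dtype=complex)
    state = state / np.linalg.norm(state)    # normalize, as in the text
    return abs(state[0]) ** 2 + abs(state[1]) ** 2

# Product state (|0>+|1>)/sqrt(2) tensored with |0>:
print(prob_first_qubit_zero([1, 0, 1, 0]))   # 0.5
# Entangled Bell state (|00>+|11>)/sqrt(2):
print(prob_first_qubit_zero([1, 0, 0, 1]))   # 0.5
```

The same rule applied to the amplitudes of $|00\rangle$ and $|10\rangle$ gives the probabilities for the second qubit.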

+ +

Again if you apply a unitary quantum gate on this state, you'd end up with something like $A|00\rangle+B|01\rangle+C|10\rangle+D|11\rangle$, as before. I hope you can now yourself find out the probabilities of the different possibilities when the first and second qubits are measured.

+ +
+ +

Note: Normally the basis states of the 2-qubit system $|00\rangle,|01\rangle,|10\rangle,|11\rangle$ are considered as the four $4\times 1$ column vectors like $\begin{bmatrix} 1 \\ 0 \\ 0 \\ 0 \end{bmatrix}$, $\begin{bmatrix} 0 \\ 1 \\ 0 \\ 0 \end{bmatrix}$, etc., by mapping the four basis vectors to the standard basis of $\Bbb{C}^4$. And the unitary transformations $U$ can be written as $4\times 4$ matrices which satisfy the property $UU^{\dagger}=U^{\dagger}U=I$.

+",26,,26,,3/13/2018 16:51,3/13/2018 16:51,,,,0,,,,CC BY-SA 3.0 +85,2,,69,3/13/2018 15:57,,11,,"
+

Unfortunately for analog computation it turns out that when realistic assumptions about the presence of noise in analog computers are made, their power disappears in all known instances; they cannot efficiently solve problems which are not efficiently solvable on a Turing machine.

+
+ +

""Noise"" appears to be used in the general sense of non-idealities in a signal:

+ +
+

In signal processing, noise is a general term for unwanted (and, in general, unknown) modifications that a signal may suffer during capture, storage, transmission, processing, or conversion.[1]

+ +

Sometimes the word is also used to mean signals that are random (unpredictable) and carry no useful information; even if they are not interfering with other signals or may have been introduced intentionally, as in comfort noise.

+ +

-""Noise (signal processing)"", Wikipedia

+
+ +

For an example of what they're talking about, let's consider a simple circuit:

+ +

$$ +\require{enclose} +\def\place#1#2#3{\smash{\rlap{\hskip{#1pt}\raise{#2pt}{#3}}}} +% +\bbox[10pt]{\enclose{box}{\phantom{\Rule{250pt}{75pt}{0pt}}}} +% +\place{-275}{70}{\enclose{box}{\bbox[5pt,lightblue]{ +\begin{array}{c} +\text{resistor} \\ +\text{set resistance:}~R +\end{array} +}}} +% +\place{-270}{0}{ +\enclose{box}{\bbox[5pt,lightblue]{ +\begin{array}{c} +\text{power source} \\ +\text{set voltage:}~V +\end{array} +}}} +% +\place{-55}{30}{ +\enclose{box}{\bbox[5pt,lightblue]{ +\begin{array}{c} +\text{current meter} \\ +\text{measured current:}~I +\end{array} +}}} +$$

+ +

Since we can select both $V$ and $R$ and we know Ohm's law, $I=\frac{V}{R}$, we can use this circuit to divide numbers for us:

+ +
  1. Select some division problem to perform, $\frac{a}{b}=?$.
  2. Set the voltage source to $V=a~\mathrm{V}$.
  3. Set the resistor to $R=b~\mathrm{\Omega}$.
  4. Measure $I=?~\mathrm{A}$ to get the result!
+ +

This is a simple analog computer that can divide numbers without need for us to perform the math in some other manner, e.g. digital logic.
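The steps above can be sketched in a few lines; the Gaussian read-out noise is an assumption added here purely to illustrate the non-idealities the answer turns to next:

```python
import random

# Ohm's-law divider: set V = a volts, R = b ohms, read I = V / R amps.
# noise_sigma models an imperfect current meter (an assumption for illustration).
def analog_divide(a, b, noise_sigma=0.0, rng=random):
    return a / b + rng.gauss(0.0, noise_sigma)

print(analog_divide(10, 4))                    # ideal meter reads exactly 2.5
print(analog_divide(10, 4, noise_sigma=0.01))  # noisy meter: only roughly 2.5
```

With zero noise the read-out is the exact real-valued quotient; with any noise at all, only finitely many digits of the answer are trustworthy.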

+ +

But what's really cool about this? If we're naive, we might believe that it can do real computation:

+ +
+

In computability theory, the theory of real computation deals with hypothetical computing machines using infinite-precision real numbers. They are given this name because they operate on the set of real numbers. Within this theory, it is possible to prove interesting statements such as ""The complement of the Mandelbrot set is only partially decidable.""

+ +

These hypothetical computing machines can be viewed as idealised analog computers which operate on real numbers, whereas digital computers are limited to computable numbers.

+ +

-""Real computation"", Wikipedia

+
+ +

The thing is that Ohm's law uses real-number values, $V,I,R\in\mathbb{R}$. If we believe that these values actually have infinite precision, then we can perform multiplication or division with infinite precision in finite time; this is a feat that a Turing machine can't perform with finite-time operations.

+ +

Anyway, back to the original quote:

+ +
+

Unfortunately for analog computation it turns out that when realistic assumptions about the presence of noise in analog computers are made, their power disappears in all known instances; they cannot efficiently solve problems which are not efficiently solvable on a Turing machine.

+
+ +

They're basically saying that, whenever someone's come up with a scheme like this, the non-idealities of the situation (noise in the signals, design, etc.) tend to derail the idealistic expectations.

+ +

The quoted excerpt seems to use this as a jumping-off point to discuss how quantum computers aren't as limited by this problem as classical analog computers often seem to have been.

+",15,,15,,3/15/2018 10:39,3/15/2018 10:39,,,,0,,,,CC BY-SA 3.0 +86,2,,3,3/13/2018 16:17,,27,,"

The title of your question asks for techniques that are impossible to break, to which the One Time Pad (OTP) is the correct answer, as pointed out in the other answers. The OTP is information-theoretically secure, which means that an adversary's computational abilities are inapplicable when it comes to finding the message.

+ +

However, despite being perfectly secure in theory, the OTP is of limited use in modern cryptography. It is extremely difficult to use successfully in practice.
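For concreteness, here is a sketch of what the OTP itself amounts to (XOR with a random, never-reused key as long as the message); the hard part it glosses over is exactly the key distribution discussed below:

```python
import secrets

# One-time pad: XOR the message with a uniformly random key of equal length.
# Security is information-theoretic, but only if the key is truly random,
# kept secret, and never reused.
def otp(data: bytes, key: bytes) -> bytes:
    assert len(key) == len(data)
    return bytes(d ^ k for d, k in zip(data, key))

msg = b"attack at dawn"
key = secrets.token_bytes(len(msg))
ct = otp(msg, key)
print(otp(ct, key))   # XOR-ing again with the same key recovers the message
```

Both parties need the full-length key in advance over some already-secure channel, which is why the OTP rarely solves a problem in practice.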

+ +

The important question really is:

+ +
+

Can we still expect a new cryptographic algorithm which will be hard to crack using even a quantum computer?

+
+ +

Asymmetric Cryptography

+ +

Asymmetric cryptography includes Public-Key Encryption (PKE), Digital Signatures, and Key Agreement schemes. These techniques are vital to solve the problems of key distribution and key management. Key distribution and key management are non-negligible problems, they are largely what prevent the OTP from being usable in practice. The internet as we know it today would not function without the ability to create a secured communications channel from an insecure communications channel, which is one of the features that asymmetric algorithms offer.

+ +

Shor's algorithm

+ +

Shor's algorithm is useful for solving the problems of integer factorization and discrete logarithms. These two problems are what provide the basis for security of widely used schemes such as RSA and Diffie-Hellman.

+ +

NIST is currently evaluating submissions for Post-Quantum algorithms - algorithms that are based on problems that are believed to be resistant to quantum computers. These problems include:

+ + + +

It should be noted that classical algorithms for solving the above problems may exist, it's just that the runtime/accuracy of these algorithms is prohibitive for solving large instances in practice. These problems don't appear to be solvable when given the ability to solve the problem of order finding, which is what the quantum part of Shor's algorithm does.
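To illustrate the order-finding connection, here is a purely classical sketch with toy numbers: the order $r$ is found by brute force below, which is exactly the step Shor's quantum subroutine performs efficiently.

```python
from math import gcd

# Given the order r of a modulo N (with r even and a^(r/2) != -1 mod N),
# gcd(a^(r/2) - 1, N) is a nontrivial factor of N.
def factor_from_order(a, r, N):
    if r % 2:
        return None
    x = pow(a, r // 2, N)
    if x == N - 1:
        return None
    f = gcd(x - 1, N)
    return f if 1 < f < N else None

N, a = 15, 7
r = next(k for k in range(1, N) if pow(a, k, N) == 1)  # brute-force order finding
print(r, factor_from_order(a, r, N))   # order 4 yields the factor 3
```

The classical reduction is cheap; it is only the order-finding step whose best known classical cost is superpolynomial, and that is the part the quantum computer replaces.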

+ +

Symmetric Cryptography

+ +

Grover's algorithm provides a quadratic speedup when searching through an unsorted list. This is effectively the problem of brute-forcing a symmetric encryption key.

+ +

Working around Grover's algorithm is relatively easy compared to working around Shor's algorithm: Simply double the size of your symmetric key. A 256-bit key offers 128-bits of resistance against brute force to an adversary that uses Grover's algorithm.
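The arithmetic behind that statement is simply that searching $N=2^k$ keys costs on the order of $\sqrt{N}=2^{k/2}$ Grover iterations, so the effective security level is halved. A back-of-the-envelope sketch (ignoring all constant factors and circuit costs):

```python
import math

# Effective brute-force security of a k-bit key against Grover search:
# sqrt(2^k) = 2^(k/2) queries, i.e. k/2 bits of security.
def grover_security_bits(key_bits):
    return math.log2(math.isqrt(2 ** key_bits))

print(grover_security_bits(128))   # 64.0
print(grover_security_bits(256))   # 128.0 - hence the advice to double key sizes
```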

+ +

Grover's algorithm is also usable against hash functions. The solution again is simple: Double the size of your hash output (and capacity if you are using a hash based on a sponge construction).

+",162,,162,,6/19/2018 16:48,6/19/2018 16:48,,,,3,,,,CC BY-SA 4.0 +89,2,,7,3/13/2018 16:57,,12,,"

Quantum cryptography relies on elaborate physical machinery to execute cryptographic protocols whose security rests upon axioms of quantum mechanics (theoretically, anyways).

+ +

To quote the wikipedia entry on the BB84 protocol:

+ +
+

The security of the protocol comes from encoding the information in non-orthogonal states. Quantum indeterminacy means that these states cannot in general be measured without disturbing the original state (see No cloning theorem).

+
+ +

There is a good question and answers about ""What makes Quantum Cryptography secure?"" on crypto.stackexchange. They are verbose, so I will refrain from copying the content here.

+ +

Differences between Quantum Cryptography and Modern Cryptography

+ +

Quantum cryptography requires specialized machinery in order to execute a run of the protocol. This is a non-negligible disadvantage compared to modern cryptography. If you want to use Quantum Cryptography, you'll need to pay one of the commercial entities that offers the service.

+ +

Modern cryptography uses mathematical algorithms implemented in software, which can be performed by any old computer with sufficient resources (which are almost all computers in this day and age). The outputs of the algorithms can be transmitted via an arbitrary communications medium.

+ +

If you see a green padlock next to the URL in your web browser, it means your connection to this very site is being secured by modern cryptography - which is effectively being done for free, as far as you were concerned.

+ +

Note

+ +

+ +

Quantum cryptography is often thought to be unconditionally unbreakable due to the laws of the universe. This sounds too good to be true, and it unfortunately is. There is nothing to stop someone from waiting for you to receive your message, then threatening you until you reveal what the message was. There is also the issue of an adversary's ability to tamper with the hardware. For a rather scathing but in-depth review of these points, see the blog post at cr.yp.to.

+ +

Basically, as with all provably secure cryptographic techniques, these guarantees are only provided within the framework of assumptions that the proofs rest upon. An adversary who finds a hole in these assumptions can circumvent the theoretical guarantees that the algorithms offer. That's not to say that QC is totally worthless and overtly non-functional, but that ""provable security"", as always, needs to be understood to rest on certain sets of assumptions that could be violated in practice.

+",162,,162,,3/15/2018 19:15,3/15/2018 19:15,,,,0,,,,CC BY-SA 3.0 +91,1,104,,3/13/2018 17:21,,45,31484,"

Quantum algorithms frequently use bra-ket notation in their description. What do all of these brackets and vertical lines mean? For example: $|ψ⟩=α|0⟩+β|1⟩$

+ +

While this is arguably a question about mathematics, this type of notation appears to be used frequently when dealing with quantum computation specifically. I'm not sure I have ever seen it used in any other contexts.

+ +
+ +

Edit

+ +

By the last part, I mean that it is possible to denote vectors and inner products using standard notation for linear algebra, and some other fields that use these objects and operators do so without the use of bra-ket notation.

+ +

This leads me to conclude that there is some difference/reason why bra-ket is especially handy for denoting quantum algorithms. It is not an assertion of fact, I meant it as an observation. ""I'm not sure I have seen it used elsewhere"" is not the same statement as ""It is not used in any other contexts"".

+",162,,58,,3/14/2018 14:48,06-01-2020 12:26,How does bra-ket notation work?,,5,1,,,,CC BY-SA 3.0 +92,2,,91,3/13/2018 17:38,,23,,"

You could think of $|0\rangle$ and $|1\rangle$ as two orthonormal basis states (represented by ""ket""s) of a quantum bit which resides in a two-dimensional complex vector space. The lines and brackets you see are basically the bra-ket notation, a.k.a. Dirac notation, which is commonly used in quantum mechanics.

+ +

As an example $|0\rangle$ could represent the spin-down state of an electron while $|1\rangle$ could represent the spin-up state. But actually the electron can be in a linear superposition of those two states i.e. $|\psi\rangle_{\text{electron}} = a|0\rangle + b|1\rangle$ (this is usually normalized like $\frac{a|0\rangle + b|1\rangle}{\sqrt{|a|^2+|b|^2}}$) where $a,b\in \Bbb{C}$.
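Concretely, the kets above are nothing more than basis vectors of $\Bbb{C}^2$; a short numpy sketch of the superposition and its normalization:

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)   # |0>, e.g. spin-down
ket1 = np.array([0, 1], dtype=complex)   # |1>, e.g. spin-up

a, b = 1 + 2j, 3 - 1j                    # arbitrary complex amplitudes
psi = (a * ket0 + b * ket1) / np.sqrt(abs(a) ** 2 + abs(b) ** 2)

print(np.vdot(psi, psi).real)   # <psi|psi> = 1 after normalization
print(np.vdot(ket0, ket1))      # <0|1> = 0: the basis is orthonormal
```

`np.vdot` conjugates its first argument, which is exactly what turning a ket into a bra does.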

+",26,,26,,3/14/2018 8:17,3/14/2018 8:17,,,,0,,,,CC BY-SA 3.0 +93,2,,61,3/13/2018 19:07,,9,,"

The existing answer does a good job at describing the state that comes from a SPDC configuration at low conversion efficiency, but it's also worth noting that the single-photon behaviour is not all there is to the process. Thus, in particular, if your conversion efficiency (or you detection time / efficiency / SNR) is good enough that you can detect (and discriminate) the emission of multiple photons in the same mode, then those two-photon events also share quantum correlations between the two modes, as do all higher orders of the photon-statistics distribution.

+ +

To be more specific (and ignoring all the polarization, momentum, and phase-matching problems already alluded to by Mithrandir), the light that comes out of a Type II SPDC source in the signal and idler ports is in a two-mode squeezed state

+ +

\begin{align} + |{\text{TMSV}}\rangle & =S_{2}(\zeta )|0\rangle =\exp(\zeta ^{*}{\hat {a}}{\hat {b}}-\zeta {\hat {a}}^{\dagger }{\hat {b}}^{\dagger })|0\rangle +\\& ={\frac {1}{\cosh r}}\sum _{n=0}^{\infty }(\tanh r)^{n}|nn\rangle +\\ & \approx \operatorname{sech}(r)\left[|00\rangle + \tanh(r)|11\rangle + \tanh^2(r)|22\rangle + \tanh^3(r) |33\rangle + \cdots \right], +\end{align} +i.e. just as a detection of a single photon on the signal port is completely (and coherently) correlated with a single photon on the idler port, so too does the observation of a two-photon signal state imply that the idler's mode has been collapsed onto a two-photon state.
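The photon-number statistics implied by the expansion above are easy to check numerically: $P(n)=\tanh^{2n}(r)/\cosh^{2}(r)$, a geometric distribution (the squeezing value below is an arbitrary choice for illustration):

```python
import math

# P(n photon pairs) for a two-mode squeezed vacuum with squeezing parameter r.
def tmsv_prob(n, r):
    return math.tanh(r) ** (2 * n) / math.cosh(r) ** 2

r = 0.2                      # small squeezing: single-photon-source regime
probs = [tmsv_prob(n, r) for n in range(50)]
print(sum(probs))            # the geometric series sums to ~1
print(probs[1] / probs[2])   # each extra photon pair is suppressed by tanh^2(r)
```

For small $r$ almost all the weight sits on $|00\rangle$ and the two-photon term is suppressed by a further factor $\tanh^2(r)$, which is the trade-off discussed below.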

+ +

Generally, the folks running SPDC setups as single-photon sources run them in a regime where $\tanh(r)$ is small (so most of the time you get vacuum, except when you do get a click on the signal that guarantees a single photon's presence in the idler mode) in order to eliminate the contribution of the higher-order photon-statistics channels, but they're still there, they can be important, and if you don't control appropriately for them then they might overwhelm the single-photon component of your signal.

+ +

I should also say that the details change from configuration to configuration (so e.g. Type I SPDC only produces single-mode squeezed vacua, if I understand correctly) but the higher-order terms will generally always occur.

+",176,,,,,3/13/2018 19:07,,,,0,,,,CC BY-SA 3.0 +94,2,,69,3/14/2018 0:21,,13,,"

As an addition to Nat's answer, it's worth mentioning that 'noise' is a specific concept1 in quantum computing. This answer will use Preskill's lecture notes as a basis.

+ +

In essence, noise is indeed considered to be something that could be described as 'thermal noise', although it should be noted that it is an interaction with a thermal environment causing noise, as opposed to noise in and of itself. Approximations are made that means this noise can be described using quantum channels, which is what Nielsen & Chuang seem to be referring to, as they discuss this in chapter 8.3 of that very textbook. The most common types of noise described in this manner are: depolarising, dephasing and amplitude damping, which will be very briefly explained below.

+ +

In a bit more detail2

+ +

Start with a system with Hilbert space $\mathcal{H}_S$, coupled to a (thermal) bath with Hilbert space $\mathcal{H}_B$.

+ +

Take the density matrix of the system and 'course grain' it into chunks of $\rho\left(t + n\,\delta t\right)$. Make the assumption that the interaction is Markovian, that is, the environment 'forgets' much quicker than the coarse graining time and that whatever you're trying to observe occurs over a time much longer than the coarse graining time.

+ +

Express the density matrix at $t+\delta t$ as a channel acting on the density matrix at time $t$: $\rho\left(t + \delta t\right) = \varepsilon_{\delta t}\left(\rho\left(t\right)\right)$.

+ +

Expand this to first order in $\delta t$ to get $\varepsilon_{\delta t} = \mathrm{I} + \delta t\,\mathcal{L}$. As a channel, it must be completely positive and trace preserving, so $\varepsilon_{\delta t}\left(\rho\left(t\right)\right) = \sum_aM_a\rho\left(t\right)M_a^\dagger$ and satisfies $\sum_aM_a^\dagger M_a = \mathrm{I}$.

+ +

This gives a non-unitary quantum channel described by the Lindblad Master equation $$\dot\rho = -i\left[H, \rho\right] + \sum_{a>0} \gamma_a\left(L_a\rho L_a^\dagger - \frac{1}{2}\lbrace L^\dagger_aL_a, \rho\rbrace\right),$$ where $\gamma_a$'s are always positive for Markovian evolution.

+ +

This can also be written as $H_{\mathrm{eff}} = H - \frac{i}{2}\sum_a\gamma_aL_a^{\dagger}L_a$, with an additional term, such that the evolution can be written as $$\dot\rho = -i\left[H_{\text{eff}}, \rho\right] + \sum_{a>0} \gamma_aL_a\rho L_a^\dagger.$$

+ +

This now looks equivalent to the Kraus operator representation of a channel, with Kraus operators $K_a \propto L_a$ (as well as an additional Kraus operator to satisfy $\left[H_{\text{eff}}, \rho\right]$). Any non-trivial Lindbladian can then be described as noise, although in reality, it is an approximation of evolution of an open system.

+ +

Some common types of noise3

+ +

Trying out various different forms of $L_a$ gives different behaviours +of the system, which give different possible noises, of which there are a few common ones (in the single qubit case, anyway):

+ +
  1. Dephasing: Causes the system to decohere - this destroys or reduces the coherence of the system, necessarily making it more mixed, unless already maximally mixed +$$\varepsilon\left(\rho\right) = \left(1-\frac{p}{2}\right)\rho + \frac{p}{2}\sigma_z\rho\sigma_z$$
  2. Depolarising: Upon measuring, either a bit flip ($\sigma_x$), phase flip ($\sigma_z$), or both bit and phase flip ($\sigma_y$) will have occurred with some probability +$$\varepsilon\left(\rho\right) = \left(1-p\right)\rho + \frac{p}{3}\left(\sigma_x\rho\sigma_x + \sigma_y\rho\sigma_y + \sigma_z\rho\sigma_z\right)$$
  3. Amplitude Damping: Represents the system decaying from $\lvert 1\rangle$ to $\lvert 0\rangle$, such as when an atom emits a photon. Leads to a simple version of the coherence times $T_1$ (decay of $\lvert 1\rangle$ to $\lvert 0\rangle$) and $T_2$ (decay of the off-diagonal terms). Described by the Kraus operators $$M_0 = \begin{pmatrix}1 & 0 \\ 0 & \sqrt{1-p}\end{pmatrix} \text{ and } M_1 = \begin{pmatrix}0 & \sqrt{p} \\ 0 & 0\end{pmatrix},$$ giving $$\varepsilon\left(\rho\right) = M_0\rho M_0^\dagger + M_1\rho M_1^\dagger$$
+ +
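As a quick numerical check of the amplitude-damping Kraus operators above (a minimal numpy sketch, not tied to any particular library):

```python
import numpy as np

# Apply the amplitude-damping channel with decay probability p.
def amplitude_damping(rho, p):
    M0 = np.array([[1, 0], [0, np.sqrt(1 - p)]], dtype=complex)
    M1 = np.array([[0, np.sqrt(p)], [0, 0]], dtype=complex)
    return M0 @ rho @ M0.conj().T + M1 @ rho @ M1.conj().T

rho_excited = np.array([[0, 0], [0, 1]], dtype=complex)  # |1><1|
out = amplitude_damping(rho_excited, p=0.3)
print(out.real)            # a population p has decayed from |1><1| to |0><0|
print(np.trace(out).real)  # the channel is trace preserving: 1.0
```

One can verify in the same way that $M_0^\dagger M_0 + M_1^\dagger M_1 = \mathrm{I}$, i.e. the map is a valid quantum channel.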
+ +

1 Or rather, several very broad concepts resulting from the same fundamental idea

+ +

2 I wouldn't go around calling this rigorous or anything

+ +

3 Within this context, naturally

+",23,,26,,12/25/2019 6:37,12/25/2019 6:37,,,,0,,,,CC BY-SA 4.0 +97,1,,,3/14/2018 1:32,,26,1405,"

Why do you need error correction? My understanding is that error correction removes errors from noise, but noise should average itself out. To make clear what I'm asking, why can't you, instead of involving error correction, simply run the operations, say, a hundred times, and pick the average/most common answer?

+",91,,26,,05-07-2018 13:21,01-08-2019 18:22,Is error correction necessary?,,4,0,,,,CC BY-SA 3.0 +98,2,,97,3/14/2018 2:27,,22,,"

That doesn't scale well. After a moderately long calculation you're basically left with the maximally mixed state or whatever fixed point your noise has. To scale to arbitrary long calculations you need to correct errors before they become too big.

+ +

Here's a short calculation for the intuition given above. Consider the simple white noise model (depolarizing noise), +$$\rho'(\varepsilon)= (1-\varepsilon)\rho + \varepsilon \frac{\mathbb{I}}{\operatorname{tr} \mathbb{I}},$$ +where $\rho$ is the ideal state (standard notation applies). If you concatenate $n$ such noisy processes, the new noise parameter is $\varepsilon'=1-(1-\varepsilon)^n$, which approaches $1$ exponentially fast in the number of gates (or other error sources). If you repeat the experiment $m$ times and assume that the standard error scales as $\frac{1}{\sqrt{m}}$, you see that the number of runs $m$ would have to grow exponentially with the length of your calculation!
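
A small numerical illustration of this scaling (my own sketch; the per-gate noise parameter $\varepsilon = 0.01$ is an assumed value):

```python
import numpy as np

eps = 0.01  # assumed per-gate depolarizing parameter (illustrative)
n = np.array([10, 100, 1000])

# Noise parameter after concatenating n noisy processes: rapidly approaches 1,
# i.e. the state drifts toward the maximally mixed state
eps_total = 1 - (1 - eps) ** n
print(eps_total)
```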

+",104,,104,,3/15/2018 12:15,3/15/2018 12:15,,,,0,,,,CC BY-SA 3.0 +99,2,,91,3/14/2018 4:47,,19,,"
+

What do all of these brackets and vertical lines mean?

+
+ +

The notation $\left \lvert v \right \rangle$ means exactly the same thing as $\vec{v}$ or $\textbf{v}$, i.e. it denotes a vector whose name is ""v"". +That's it. +There is no further mystery or magic, at all. +The symbol $\left \lvert \psi \right \rangle$ denotes a vector called ""psi"".

+ +

The symbol $\left \lvert \cdot \right \rangle$ is called a ""ket"", but it could just as well (and in my opinion should) be called a ""vector"" with absolutely no loss of meaning at all.

+ +
+

While this is arguably a question about mathematics, this type of notation appears to be used frequently when dealing with quantum computation specifically. + I'm not sure I have ever seen it used in any other contexts.

+
+ +

The notation was invented by a physicist (Paul Dirac) and is called ""Dirac notation"" or ""bra-ket notation"". +As far as I know, Dirac probably invented it while studying quantum mechanics, and so historically the notation has mostly been used to denote the vectors that show up in quantum mechanics, i.e. quantum states. +Bra-ket notation is the standard in any quantum mechanics context, not just quantum computation. +For example, the Schrodinger equation, which has to do with dynamics in quantum systems and predates quantum computation by decades, is written using bra-ket notation.

+ +

Furthermore, the notation is pretty convenient in other linear algebra contexts and is used outside of quantum mechanics.

+",32,,,,,3/14/2018 4:47,,,,0,,,,CC BY-SA 3.0 +101,2,,97,3/14/2018 10:22,,14,,"

If the error rate were low enough, you could run a computation a hundred times and take the most common answer. For instance, this would work if the error rate were low enough that the expected number of errors per computation was something very small — which means that how well this strategy works would depend on how long and complicated a computation you would like to do.

+ +

Once the error rate or the length of your computation become sufficiently high, you can no longer have any confidence that the most likely outcome is that there were zero errors: at a certain point it becomes more likely that you have one, or two, or more errors, than that you have zero. In this case, there is nothing to prevent the majority of the cases from giving you an incorrect answer. What then?

+ +

These issues are not special to quantum computation: they also apply to classical computation — it just happens that almost all of our technology is at a sufficiently advanced state of maturity that these issues do not concern us in practise; that there may be a greater chance of your computer being struck by a meteorite mid-computation (or it running out of battery power, or you deciding to switch it off) than of there being a hardware error. What is (temporarily) special about quantum computation is that the technology is not yet mature enough for us to be so relaxed about the possibility of error.

+ +

In those times when classical computation has been at a stage when error correction was both practical and necessary, we were able to make use of certain mathematical techniques — error correction — which made it possible to suppress the effective error rate, and in principle to make it as low as we liked. The same techniques surprisingly can be used for quantum error correction — with a little bit of extension, to accommodate the difference between quantum and classical information. At first, before the mid-1990s, it was thought that quantum error correction was impossible because of the continuity of the space of quantum states. But as it turns out, by applying classical error correction techniques in the right way to the different ways a qubit could be measured (usually described as ""bit"" and ""phase""), you can in principle suppress many kinds of noise on quantum systems as well. These techniques are not special to qubits, either: the same idea can be used for quantum systems of any finite dimension (though for models such as adiabatic computation, it may then get in the way of actually performing the computation you wish to perform).

+ +

At the time I'm writing this, individual qubits are so difficult to build and to marshall that people are hoping to get away with doing proof-of-principle computations without any error correction at all. That's fine, but it will limit how long their computations can be until the number of accumulated errors is large enough that the computation stops being meaningful. There are two solutions: to get better at suppressing noise, or to apply error correction. Both are good ideas, but it is possible that error correction is easier to perform in the medium- and long-term than suppressing sources of noise.

+",124,,124,,3/14/2018 10:27,3/14/2018 10:27,,,,4,,,,CC BY-SA 3.0 +104,2,,91,3/14/2018 11:01,,28,,"

As already explained by others, a ket $\left|\psi\right>$ is just a vector. A bra $\left<\psi\right|$ is the Hermitian conjugate of the vector. You can multiply a vector with a number in the usual way.

+ +

Now comes the fun part: You can write the scalar product of two vectors $\left|\psi\right>$ and $\left|\phi\right>$ as $\left<\phi\middle|\psi\right>$.

+ +

You can apply an operator to the vector (in finite dimensions this is just a matrix multiplication) $X\left|\psi\right>$.

+ +

All in all, the notation is very handy and intuitive. For more information, see the Wikipedia article or a textbook on Quantum Mechanics.

+",18,,15,,3/14/2018 13:37,3/14/2018 13:37,,,,9,,,,CC BY-SA 3.0 +105,1,110,,3/14/2018 14:58,,17,1857,"

Online descriptions of quantum computers often discuss how they must be kept near absolute zero $\left(0~\mathrm{K}~\text{or}~-273.15~{\left. {}^{\circ}\mathrm{C} \right.}\right)$.

+ +

Questions:

+ +
    +
  1. Why must quantum computers operate under such extreme temperature conditions?

  2. +
  3. Is the need for extremely low temperatures the same for all quantum computers, or does it vary by architecture?

  4. +
  5. What happens if they overheat?

  6. +
+ +
+ +

Sources : Youtube, D-Wave

+",58,,253,,3/26/2018 15:44,7/24/2022 10:36,Why must quantum computers be kept near absolute zero?,,2,0,,,,CC BY-SA 3.0 +110,2,,105,3/14/2018 17:03,,14,,"

Well, first, not all systems must be kept near absolute zero. It depends on the realization of your quantum computer. For example, optical quantum computers do not need to be kept near absolute zero, but superconducting quantum computers do. So, that answers your second question.

+ +

To answer your first question, superconducting quantum computers (for example) must be kept at low temperatures so that the thermal environment cannot induce fluctuations in the qubits' energies. Such fluctuations would be noise/errors in the qubits.

+ +

(See Blue's question Why do optical quantum computers not have to be kept near absolute zero while superconducting quantum computers do? and Daniel Sank's answer for some follow up information.)

+",91,,91,,3/15/2018 6:06,3/15/2018 6:06,,,,0,,,,CC BY-SA 3.0 +111,2,,53,3/14/2018 19:50,,7,,"

It looks like you're asking about this part of the paper:

+
+

Therefore, a quantum computation is hidden as long as these measurements are successfully hidden. In order to achieve this, the BQC protocol exploits special resources called blind cluster states that must be chosen carefully to be a generic structure that reveals nothing about the underlying computation (see Figure 1).

+

-"Experimental Demonstration of Blind Quantum Computing" (2011)

+
+

That last part, about how they want a "generic structure that reveals nothing about the underlying computation" might make a reader wonder about how a computer's structure could leak information about its computations.

+

As a simple example of structure leaking information about a crypto scheme, suppose that Bob asks Sally a question to which we assume that Sally'll respond yes or no. Sally directly encrypts her response using their shared one-time pad (OTP), resulting in the ciphertext rk4. Despite the OTP scheme having perfect secrecy in general, it's clear that Sally responded yes: a three-character ciphertext can only have come from yes, not no.
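
The length leak is easy to see in code (my own sketch; `otp_encrypt` is a hypothetical helper written for this example, not something from the paper):

```python
import os

def otp_encrypt(msg: bytes, pad: bytes) -> bytes:
    # XOR each message byte with a pad byte; output length equals input length
    return bytes(m ^ p for m, p in zip(msg, pad))

pad = os.urandom(8)           # shared one-time pad
ct = otp_encrypt(b"yes", pad)
print(len(ct))                # 3: the length alone reveals "yes" rather than "no"
```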

+

In this case, the computer was structured to leak information about the length of a message given that message, which was especially disastrous in this contrived example. In general, structure can leak information about the computation. Avoiding such leaks would be necessary for a blind-computation server like the one the paper intends to discuss.

+

Generally speaking, attacks that operate like this are called side-channel attacks.

+

In the case of this paper (disclaiming that I just skimmed it quickly), it looks like they're basically talking about creating a generic computational structure that doesn't leak information through its structural traits. For example, if the structure behaved differently in any way based on a secret aspect of the message, then it may leak secret information to the server when the server observes its own computational behavior.

+

The paper appears to be trying to point out that the computational unit needs to be designed to avoid such information leaks.

+

Later in the paper, they discuss stuff about blinding:

+
+

In cryptography, blinding is a technique by which an agent can provide a service to (i.e., compute a function for) a client in an encoded form without knowing either the real input or the real output. Blinding techniques also have applications to preventing side-channel attacks on encryption devices.

+

-"Blinding (cryptography)", Wikipedia

+
+

And, really, blinding's what this paper's all about: figuring out a way to have a server do work for clients without the clients revealing their secrets to the server.

+

One way to enable blind computation is for the client to use homomorphic encryption on its job request before sending it to the server:

+
+

Homomorphic encryption is a form of encryption that allows computation on ciphertexts, generating an encrypted result which, when decrypted, matches the result of the operations as if they had been performed on the plaintext. The purpose of homomorphic encryption is to allow computation on encrypted data.

+

-"Homomorphic encryption", Wikipedia

+
+",15,,-1,,6/18/2020 8:31,3/16/2018 13:42,,,,0,,,,CC BY-SA 3.0 +112,1,116,,3/14/2018 19:59,,21,11335,"

As I understand it, the main difference between quantum and non-quantum computers is that quantum computers use qubits while non-quantum computers use (classical) bits.

+ +

What is the difference between qubits and classical bits?

+",11,,26,,3/16/2018 15:33,8/31/2018 15:59,What is the difference between a qubit and classical bit?,,1,0,,,,CC BY-SA 3.0 +115,1,1502,,3/14/2018 20:48,,21,1077,"

The crucial role of random access memories (RAMs) in the context of classical computation makes it natural to wonder how one can generalise such a concept to the quantum domain.

+ +

Arguably the most notable (and first?) work proposing an efficient QRAM architecture is Giovannetti et al. 2007. +In this work it was shown that their ""bucket brigade"" approach allows access to the content of the memory with $\mathcal O(\log N)$ operations, where $N$ is the number of memory slots. This is an exponential improvement over alternative approaches, which require $\mathcal O(N^{\alpha})$ operations. +Implementing this architecture is however highly nontrivial from an experimental point of view.

+ +

Is the above the only known way to implement a QRAM? Or have there been other theoretical works in this direction? If so, how do they compare (pros and cons) with the Giovannetti et al. proposal?

+",55,,26,,3/16/2018 15:34,5/15/2019 14:58,What protocols have been proposed to implement quantum RAMs?,,1,0,,,,CC BY-SA 3.0 +116,2,,112,3/14/2018 21:25,,17,,"

A bit is a binary unit of information used in classical computation. It can take two possible values, typically taken to be $0$ or $1$. Bits can be implemented with devices or physical systems that can be in two possible states.

+ +

To compare and contrast bits with qubits, let's introduce a vector notation for bits as follows: a bit is represented by a column vector of two elements $(\alpha,\beta)^T$, where the component $\alpha$ is associated with the value $0$ and $\beta$ with the value $1$. Now the bit $0$ is represented by the vector $(1,0)^T$ and the bit $1$ by $(0,1)^T$. Just like before, there are only two possible values.

+ +

While this kind of representation is redundant for classical bits, it is now easy to introduce qubits: a qubit is simply any $(\alpha,\beta)^T$ where the complex number elements satisfy the normalization condition $|\alpha|^2+|\beta|^2=1$. The normalization condition is necessary to interpret $|\alpha|^2$ and $|\beta|^2$ as probabilities for measurement outcomes, as will be seen. Some call the qubit the unit of quantum information. Qubits can be implemented as the (pure) states of quantum devices or quantum systems that can be in two possible states, which form the so-called computational basis, and additionally in a coherent superposition of these. Here the quantumness is necessary to have qubits other than the classical $(1,0)^T$ and $(0,1)^T$.

+ +

The usual operations that are carried out on qubits during a quantum computation are quantum gates and measurements. A (single qubit) quantum gate takes as input a qubit and gives as output a qubit that is a linear transformation of the input qubit. When using the above vector notation for qubits, gates should then be represented by matrices that preserve the normalization condition; such matrices are called unitary matrices. Classical gates may be represented by matrices that keep bits as bits, but notice that matrices representing quantum gates do not in general satisfy this requirement.

+ +

A measurement on a bit is understood to be a classical one. By this I mean that an a priori unknown value of bit can in principle be correctly found out with certainty. This is not the case for qubits: measuring a generic qubit $(\alpha,\beta)^T$ in the computational basis $[ (1,0)^T,(0,1)^T]$ will result in $(1,0)^T$ with probability $|\alpha|^2$ and in $(0,1)^T$ with probability $|\beta|^2$. In other words, while qubits can be in states other than computational basis states before measurement, measuring can still have only two possible outcomes.
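
A minimal numerical sketch of this measurement rule (my own addition, not part of the original answer; the state $(\alpha,\beta)^T = (1/\sqrt 2,\, i/\sqrt 2)^T$ is an arbitrary example):

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, beta = 1 / np.sqrt(2), 1j / np.sqrt(2)       # an example qubit state
assert np.isclose(abs(alpha) ** 2 + abs(beta) ** 2, 1)  # normalization condition

# Each measurement in the computational basis yields outcome 0 with
# probability |alpha|^2 and outcome 1 with probability |beta|^2
outcomes = rng.choice([0, 1], size=100_000, p=[abs(alpha) ** 2, abs(beta) ** 2])
print(outcomes.mean())  # close to |beta|^2 = 0.5
```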

+ +

There is not much one can do with a single bit or qubit. The full computational power of either comes from using many, which leads to the final difference between them that will be covered here: multiple qubits can be entangled. Informally speaking, entanglement is a form of correlation much stronger than classical systems can have. Together, superposition and entanglement allow one to design algorithms realized with qubits that cannot be done with bits. Of greatest interest are algorithms that allow the completion of a task with reduced computational complexity when compared to best known classical algorithms.

+ +

Before concluding, it should be mentioned that a qubit can be simulated with bits (and vice versa), but the number of bits required grows rapidly with the number of qubits. Consequently, without reliable quantum computers quantum algorithms are of theoretical interest only.

+",144,,144,,04-01-2018 10:11,04-01-2018 10:11,,,,0,,,,CC BY-SA 3.0 +117,1,119,,3/14/2018 21:54,,25,2938,"

This is a follow-up question to @heather's answer to the question : Why must quantum computers be kept near absolute zero?

+ +

What I know:

+ +
    +
  • Superconducting quantum computing: It is an implementation of a quantum computer in a superconducting electronic circuit.

  • +
  • Optical quantum computing: It uses photons as information carriers, and linear optical elements to process quantum information, and uses photon detectors and quantum memories to detect and store quantum information.

  • +
+ +

Next, this is what Wikipedia goes on to say about superconducting quantum computing:

+ +
+

Classical computation models rely on physical implementations + consistent with the laws of classical mechanics. It is known, however, + that the classical description is only accurate for specific cases, + while the more general description of nature is given by the quantum + mechanics. Quantum computation studies the application of quantum + phenomena, that are beyond the scope of classical approximation, for + information processing and communication. Various models of quantum + computation exist, however the most popular models incorporate the + concepts of qubits and quantum gates. A qubit is a generalization of a + bit - a system with two possible states, that can be in a quantum + superposition of both. A quantum gate is a generalization of a logic + gate: it describes the transformation that one or more qubits will + experience after the gate is applied on them, given their initial + state. The physical implementation of qubits and gates is difficult, + for the same reasons that quantum phenomena are hard to observe in + everyday life. One approach is to implement the quantum computers in + superconductors, where the quantum effects become macroscopic, though + at a price of extremely low operation temperatures.

+
+ +

This does make some sense! However, I was looking for why optical quantum computers don't need ""extremely low temperatures"" unlike superconducting quantum computers. Don't they suffer from the same problem i.e. aren't the quantum phenomena in optical quantum computers difficult to observe just as for superconducting quantum computers? Are the quantum effects already macroscopic at room temperatures, in such computers? Why so?

+ +

I was going through the description of Linear optical quantum computing on Wikipedia, but found no reference to ""temperature"" as such.

+",26,,,,,7/28/2022 15:20,Why do optical quantum computers not have to be kept near absolute zero while superconducting quantum computers do?,,3,0,,,,CC BY-SA 3.0 +118,2,,117,3/14/2018 22:11,,7,,"

Because light, at the right frequencies, interacts weakly with matter. +In the quantum regime, this translates to single photons being largely free of the noise and decoherence that is the main obstacle with other QC architectures. +The surrounding temperature doesn't disturb the quantum state of a photon as much as it does when the quantum information is carried by matter (atoms, ions, electrons, superconducting circuits etc.). +For example, reliable transmission of photonic qubits (more precisely, a QKD protocol) between China and Austria, using a low-orbit satellite as link, was recently demonstrated (see e.g. here).

+ +

Unfortunately, light also interacts extremely weakly (as in, it basically doesn't) with other light. +Different photons not interacting with each other is what makes optical quantum computation somewhat tricky. +For example, basic elements like two-qubit gates, when the qubits are carried by different photons, require some form of nonlinearity, which is generally harder to implement experimentally.

+",55,,55,,11/19/2018 15:40,11/19/2018 15:40,,,,0,,,,CC BY-SA 4.0 +119,2,,117,3/14/2018 22:42,,33,,"
+

I was looking for why optical quantum computers don't need ""extremely low temperatures"" unlike superconducting quantum computers.

+
+ +

Superconducting qubits usually work in the frequency range 4 GHz to 10 GHz. +The energy associated with a transition frequency $f_{10}$ in quantum mechanics is $E_{10} = h f_{10}$ where $h$ is Planck's constant. +Comparing the qubit transition energy to the thermal energy $E_\text{thermal} = k_b T$ (where $k_b$ is Boltzmann's constant), we see that the qubit energy is above the thermal energy when +$$f_{10} > k_b T / h \, .$$

+ +

Looking up Boltzmann's and Planck's constants, we find +$$h/k_b = 0.048 \, \text{K / GHz} \, .$$

+ +

Therefore, we can write +$$f_{10} > 1 \, \text{GHz} \, \,\frac{T}{0.048 \, \text{K}}$$

+ +

So, for the highest frequency superconducting qubit at 10 GHz, we need $T < 0.48 \, \text{K}$ in order for there to be a low probability that the qubit is randomly excited or de-excited due to thermal interactions. +This is why superconducting qubits are usually operated in dilution refrigerators at ~15 milliKelvin. +Of course, we also need the temperature to be low enough to get the metals superconducting, but for aluminum that happens at 1 K so actually the constraint we already talked about is more important.
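
The numbers above can be reproduced directly from the SI-defined values of the constants (a sketch of my own, not from the original answer):

```python
h = 6.62607015e-34   # Planck's constant, J s (exact SI value)
kb = 1.380649e-23    # Boltzmann's constant, J / K (exact SI value)

print(h / kb * 1e9)  # K per GHz: about 0.048
f10 = 10e9           # a 10 GHz qubit
print(h * f10 / kb)  # about 0.48 K; the fridge must sit well below this
```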

+ +

On the other hand, suppose the two states of the optical qubit $\left \lvert 0 \right \rangle$ and $\left \lvert 1 \right \rangle$ the presence and absence of an optical photon. +An optical photon has a frequency of around $10^{14}$ Hz, which corresponds to a temperature of 14,309 Kelvin. +Therefore, there's an extremely low probability of the thermal environment changing the qubit state by creating or removing a photon. +This is why optical light is sort of intrinsically quantum mechanical in nature.

+ +
+

Don't they suffer from the same problem i.e. aren't the quantum phenomena in optical quantum computers difficult to observe just as for superconducting quantum computers?

+
+ +

Well, the difficulties between superconducting quantum computers and optical quantum computers are different. +Optical photons essentially don't interact with each other. +To get an effective interaction between two photons, you have to either put them through a nonlinear crystal, or do some kind of photodetection measurement. +The challenge with nonlinear crystals is that they're very inefficient; only a very small fraction of photons that go in actually undergo the nonlinear process that causes interaction. +The challenge with photodetection is that it's hard to build a photodetector that has high detection efficiency and low dark counts$^{[a]}$. +In fact, the best photo-detectors actually need to be operated in cryogenic environments anyway, so some optical quantum computing architectures need cryogenic refrigeration despite the fact that the qubits themselves have very high frequency.

+ +

P.S. This answer could be expanded quite a bit. If someone has a particular aspect they'd like to know more about, please leave a comment.

+ +

$[a]$: Dark counts are the times a photodetector thinks it saw a photon even though there really wasn't one. In other words, it's the rate at which the detector counts photons while it's in the dark.

+",32,,32,,6/21/2019 17:18,6/21/2019 17:18,,,,6,,,,CC BY-SA 4.0 +120,1,18313,,3/14/2018 23:47,,9,259,"

I work as a theorist with my current research interests in ""quantum contextuality"". For those perusing the question, this is essentially a generalization of non-locality where we can show a quantum system does not admit a hidden-variable model. So in some sense, it's impossible to look at the problem classically.

+ +

For further background, contextuality can often be formulated as a resource for quantum computation-- which is the reason why we're interested in it. Our talking points to people are that contextuality is the right way of characterizing what's essentially quantum about a system (many would debate us on that, but that's perhaps for another thread).

+ +

What I've found so far is a proposed experiment for testing contextuality: +Cabello-2016-Experiment and a description of the implications of a successful contextuality experiment: +Winter-2014-Implications.

+ +

Most experimentalists tend to focus on entanglement (understandably; it's a lot easier to measure and more concrete to think about) - but are there any experimental groups currently working on contextuality experiments?

+",236,,26,,02-01-2019 22:25,07-08-2021 18:04,Are there any experimental groups currently measuring quantum contextuality?,,2,3,,,,CC BY-SA 4.0 +121,1,123,,3/15/2018 0:02,,25,1913,"

This is a question I was inspired to ask based on this question, which notes that quantum annealing is an entirely different model for computation than the usual circuit model. I've heard this before, and it's my understanding that the gate-model does not apply to quantum-annealing, but I've never quite understood why that is, or how to parse the computations that an annealer can do. As I understand from several talks (some by D-wave themselves!) the fact that the annealers are confined to a specific Hamiltonian plays into it.

+",236,,26,,12/23/2018 11:43,12/23/2018 11:43,Why can't quantum annealing be described by a gate model?,,2,0,,,,CC BY-SA 3.0 +122,2,,121,3/15/2018 0:18,,17,,"

Annealing's more of an analog tactic.

+ +

The gist is that you have some weird function that you want to optimize. So, you bounce around it. At first, the ""temperature"" is very high, such that the selected point can bounce around a lot. Then as the algorithm ""cools"", the temperature goes down, and the bouncing becomes less aggressive.

+ +

Ultimately, it settles down to a local optimum which, ideally, is close to the global optimum.

+ +

Here's an animation for simulated annealing (non-quantum): +

+ +

But, it's pretty much the same concept for quantum annealing:

+ +

+ +

By contrast, gate-logic is far more digital than analog. It's concerned with qubits and logical operations rather than merely finding a result after chaotic bouncing-around.

+",15,,15,,3/15/2018 0:24,3/15/2018 0:24,,,,6,,,,CC BY-SA 3.0 +123,2,,121,3/15/2018 1:10,,19,,"

A Quantum Annealer, such as a D-Wave machine, is a physical representation of the Ising model and as such has a 'problem' Hamiltonian of the form $$H_P = \sum_{j=1}^n h_j\sigma_j^z + \sum_{i, j}J_{ij}\sigma_i^z\sigma_j^z.$$

+ +

Essentially, the problem to be solved is mapped to the above Hamiltonian. The system starts with the Hamiltonian $H_I = \sum_{j=1}^n h'_j\sigma_j^x$ and the annealing parameter, $s$, is used to interpolate from the initial Hamiltonian $H_I$ to the problem Hamiltonian $H_P$ using $H\left(s\right) = \left(1-s\right)H_I + sH_P$.
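
The interpolation $H(s) = (1-s)H_I + sH_P$ is easy to write down explicitly for a small instance (my own sketch; the two-qubit fields $h_1, h_2$, coupling $J_{12}$ and transverse field are made-up illustrative values):

```python
import numpy as np

sz = np.diag([1.0, -1.0])
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
I2 = np.eye(2)

# Two-qubit example with assumed (illustrative) fields and coupling
h1, h2, J12, hp = 0.5, -0.3, 1.0, 1.0
HP = h1 * np.kron(sz, I2) + h2 * np.kron(I2, sz) + J12 * np.kron(sz, sz)
HI = hp * (np.kron(sx, I2) + np.kron(I2, sx))

def H(s):
    """Interpolated annealing Hamiltonian H(s) = (1 - s) H_I + s H_P."""
    return (1 - s) * HI + s * HP

# At s = 0 the ground state is the uniform superposition (easy to prepare);
# at s = 1 the ground state encodes the solution of the Ising problem
assert np.allclose(H(0.0), HI) and np.allclose(H(1.0), HP)
```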

+ +

As this is an anneal, the process is done slowly enough to stay near the ground state of the system while the Hamiltonian is varied to that of the problem, using tunnelling to stay near the ground state as described in Nat's answer.

+ +

Now, why can't this be used to describe a gate model QC? The above is a Quadratic unconstrained binary optimization (QUBO) problem, which is NP-hard... Indeed, here's an article mapping a number of NP problems to the Ising model. Any problem in NP can be mapped to any NP-hard problem in polynomial time and integer factorisation is indeed an NP problem.

+ +

Well, the temperature is non-zero, so it's not going to be in the ground state throughout the anneal and as a result, the solution is still only an approximate one. Or, in different terms, the probability of failure is greater than a half (it's nowhere near having a decent probability of success compared with what a universal QC considers 'decent' - judging from graphs I've seen, the probability of success for the current machine is around $0.2\%$ and this will only get worse with increasing size), and the anneal algorithm is not bounded error. At all. As such, there's no way of knowing whether or not you've got the correct solution with something such as integer factorisation.

+ +

What it (in principle) does is get very close to the exact result, very quickly, but this doesn't help for anything where the exact result is required as going from 'nearly correct' to 'correct' is still an extremely difficult (i.e. presumably still NP in general, when the original problem is in NP) problem in this case, as the parameters that are/give a 'nearly correct' solution aren't necessarily going to be distributed anywhere near the parameters that are/give the correct solution.

+ +

Edit for clarification: what this means is that a quantum annealer (QA) still takes exponential time (albeit potentially a faster exponential time) to solve NP problems such as integer factorisation, where a universal QC gives an exponential speed up and can solve the same problem in poly time. This is what implies a QA cannot simulate a universal QC in poly time (otherwise it could solve problems in poly time that it can't). As pointed out in the comments, this is not the same as saying that a QA cannot give the same speedup in other problems, such as database search.

+",23,,23,,3/15/2018 13:43,3/15/2018 13:43,,,,6,,,,CC BY-SA 3.0 +124,2,,91,3/15/2018 2:20,,14,,"
+

This leads me to conclude that there is some difference/reason why bra-ket is especially handy for denoting quantum algorithms.

+
+ +

There's already an accepted answer and an answer that explains 'ket', 'bra' and the scalar product notation.

+ +

I'll try add a bit more to the highlighted entry. What makes it a useful/handy notation?

+ +

The first thing that bra-ket notation is really used a lot for is to denote very simply the eigenvectors of a (usually Hermitian) operator associated with an eigenvalue. Suppose we have an eigenvalue equation $A(v)=\lambda v$, this can be denoted as $A\left|\lambda\right\rangle=\lambda \left|\lambda\right\rangle$, and probably some extra label $k$ if there is some degeneracy $A\left|\lambda,k\right\rangle=\lambda \left|\lambda,k\right\rangle$.

+ +

You see this employed all over quantum mechanics, momentum eigenstates tend to be labelled as $\left|\vec{k}\right\rangle$ or $\left|\vec{p}\right\rangle$ depending on units, or with multiple particle states $\left|\vec{p}_1,\vec{p}_2,\vec{p}_3\ldots\right\rangle$; occupation number representation for bose and fermi system many body systems $\left|n_1,n_2,\ldots\right\rangle$; a spin half particle taking the eigenstates usually of the $S_z$ operator, written sometimes as $\left|+\right\rangle$ and $\left|-\right\rangle$ or $\left|\uparrow\,\right\rangle$ and $\left|\downarrow\,\right\rangle$, etc as shorthand for $\left|\pm \hbar/2\right\rangle$; spherical harmonics as eigenfunctions of the $L^2$ and $L_z$ functions are conveniently written as $\left|l,m\right\rangle$ with $l=0,1,2,\ldots$ and $m=-l,-l+1,\ldots,l-1,l.$

+ +

So convenience of notation is one thing, but there's also a kind of 'lego' feeling to algebraic manipulations with dirac notation, take for instance the $S_x$ spin half operator in dirac notation as +$S_x=\frac{\hbar}{2}(\left|\uparrow\right\rangle\left\langle\downarrow\right|+\left|\downarrow\right\rangle\left\langle\uparrow\right|)$, acting on a state like $\left|\uparrow\right\rangle$ one simply does

+ +

$$S_x\left|\uparrow\right\rangle=\frac{\hbar}{2}\left(\left|\uparrow\rangle\langle\downarrow\right|+\left|\downarrow\rangle\langle\uparrow\right|\right)\left|\uparrow\right\rangle=\frac{\hbar}{2}\left|\uparrow\rangle\langle\downarrow\mid\uparrow\right\rangle+\frac{\hbar}{2}\left|\downarrow\rangle\langle\uparrow\mid\uparrow\right\rangle=\frac{\hbar}{2}\left|\downarrow\right\rangle$$

+ +

since $\left\langle\uparrow\mid\uparrow\right\rangle=1$ and $\left\langle\downarrow\mid\uparrow\right\rangle=0$.
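
Just to illustrate (a small sketch in Python/NumPy, with $\hbar$ set to $1$ for brevity — not part of the original answer), the same 'lego' manipulation can be reproduced with explicit column vectors and outer products:

```python
import numpy as np

# Basis kets |up> and |down> as column vectors (hbar set to 1 for brevity)
up = np.array([[1.0], [0.0]])
down = np.array([[0.0], [1.0]])

# S_x = (1/2)(|up><down| + |down><up|); a ket @ bra product is an outer product
Sx = 0.5 * (up @ down.conj().T + down @ up.conj().T)

# S_x |up> = (1/2)|down>, exactly as in the bra-ket manipulation above
assert np.allclose(Sx @ up, 0.5 * down)
```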

+ +

What makes it handy for quantum algorithms?

+ +

Say we have a suitable two-level system for a qubit; this forms a two-dimensional complex vector space, $V$ say, whose basis is denoted as $\left|0\right\rangle$ and $\left|1\right\rangle$. When we consider, say, $n$ qubits of this form, the states of the system live in a bigger space, the tensor product space $V^{\otimes n}$. Dirac notation can be rather handy here: the basis states will be labelled by strings of ones and zeros, and one usually denotes a state e.g. $\left|1\right\rangle\otimes\left|0\right\rangle\otimes\left|0\right\rangle\otimes\left|1\right\rangle\equiv\left|1001\right\rangle$. If we have a bit-flip operator $X_i$ which interchanges $1\leftrightarrow 0$ on the $i$'th bit, it acts rather simply on the above strings, e.g. $X_3\left|1001\right\rangle=\left|1011\right\rangle$, and taking a sum of operators or acting on a superposition of states works just as simply.
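
As an illustration (a NumPy sketch; the helper `ket` is invented just for this example), the computation $X_3\left|1001\right\rangle=\left|1011\right\rangle$ can be done with Kronecker products:

```python
import numpy as np
from functools import reduce

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])  # bit-flip gate
I = np.eye(2)

def ket(bits):
    """|b1 b2 ... bn> as a Kronecker (tensor) product of single-qubit kets."""
    return reduce(np.kron, [ket1 if b == '1' else ket0 for b in bits])

# X_3 acts as X on the third qubit and as the identity elsewhere
X3 = reduce(np.kron, [I, I, X, I])

assert np.allclose(X3 @ ket('1001'), ket('1011'))
```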

+ +

Slight caution: a state written as $\left|a,b\right\rangle$ doesn't always mean $\left|a\right\rangle\otimes\left|b\right\rangle$; for instance, when you have two identical fermions with wave functions say $\phi_{k_1}(\vec{r}_1)$ and $\phi_{k_2}(\vec{r}_2)$, with labels indexing some basis set, then one might write the Slater determinant state of the fermions $$\frac{1}{\sqrt{2}}\left(\phi_{k_1}(\vec{r}_1)\phi_{k_2}(\vec{r}_2)-\phi_{k_1}(\vec{r}_2)\phi_{k_2}(\vec{r}_1)\right)$$ in a shorthand as $\left|\phi_{k_1},\phi_{k_2}\right\rangle$ or even $\left|k_1,k_2\right\rangle\neq \left|k_1\right\rangle\otimes \left|k_2\right\rangle$.

+",197,,23,,3/15/2018 11:09,3/15/2018 11:09,,,,0,,,,CC BY-SA 3.0 +125,2,,91,3/15/2018 2:21,,16,,"

The ket notation $|\psi\rangle$ means a vector in whatever vector space we're working in, such as the space of all complex linear combinations of the eight 3-bit strings $000$, $001$, $010$, etc., as we might use to represent the states of a quantum computer. Unadorned $\psi$ means exactly the same thing—the $|\psi\rangle$ ket notation is useful partly to emphasize that, for example, $|010\rangle$ is an element of the vector space of interest, and partly for its cuteness in combination with the bra notation.

+ +

The bra notation $\langle\psi|$ means the dual vector or covector—a linear functional, or linear map from vectors to scalars, whose value at a vector $|\phi\rangle$ is the inner product of $\psi$ with $\phi$, cutely written $\langle\psi|\phi\rangle$. Here we assume the existence of an inner product, which is not a given in arbitrary vector spaces, but in quantum physics we usually work in Hilbert spaces which by definition have an inner product. The dual of a vector is sometimes also called its (Hermitian) transpose, because in matrix representation, a vector corresponds to a column and a covector corresponds to a row, and when you multiply $\mathrm{row} \times \mathrm{column}$ you get a scalar. (The Hermitian part means in addition to transposing the matrix, we take the complex conjugate of its entries—which is really just further transposing the matrix representation $\scriptstyle\begin{bmatrix}a&b\\-b&a\end{bmatrix}$ of the complex number $a + b i$.)

+ +

When written the other way, $|\psi\rangle\langle\phi|$, you get the outer product of $\psi$ with $\phi$, defined to be the linear transformation of the vector space to itself given by $|\theta\rangle \mapsto (\langle\phi|\theta\rangle) |\psi\rangle$. That is, given a vector $\theta$, it scales the vector $\psi$ by the scalar given by the inner product $\langle\phi|\theta\rangle$. Since the operations in question are associative, we can remove the parentheses and unambiguously write $$(|\psi\rangle\langle\phi|)|\theta\rangle = |\psi\rangle\langle\phi|\theta\rangle = \langle\phi|\theta\rangle|\psi\rangle = (\langle\phi|\theta\rangle)|\psi\rangle.$$ The operations involved are not commutative in general, however: reversing the order of an inner product yields the complex conjugate, $\langle\psi|\phi\rangle = \langle\phi|\psi\rangle^*$, replacing $a + b i$ by $a - b i$. There may be other transformations of the spaces involved thrown in the mix too, like $\langle\psi|A|\phi\rangle$, which can be read equivalently as the precomposition of the linear functional $\langle\psi|$ by the linear transformation $A$, applied to the vector $|\phi\rangle$, or as the evaluation of the linear functional $\langle\psi|$ at the vector obtained by transforming $|\phi\rangle$ by the linear transformation $A$.
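
For the concretely minded, these identities are easy to check numerically. Here is a small NumPy sketch (not part of the original answer) in which kets are column vectors, bras are conjugate-transposed rows, and $\langle\psi|A|\phi\rangle$ is just a matrix product:

```python
import numpy as np

rng = np.random.default_rng(0)
psi = rng.normal(size=(2, 1)) + 1j * rng.normal(size=(2, 1))
phi = rng.normal(size=(2, 1)) + 1j * rng.normal(size=(2, 1))
A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))

bra_psi = psi.conj().T          # row vector: the covector <psi|
inner = (bra_psi @ phi).item()  # <psi|phi>, a scalar
outer = psi @ phi.conj().T      # |psi><phi|, a 2x2 linear map

# Conjugate symmetry: <psi|phi> = <phi|psi>*
assert np.isclose(inner, (phi.conj().T @ psi).item().conjugate())

# Associativity: (<psi|A)|phi> = <psi|(A|phi>)
assert np.isclose((bra_psi @ A @ phi).item(), (bra_psi @ (A @ phi)).item())
```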

+ +

The notation is used mainly in quantum physics; mathematicians tend to just write $\psi$ where physicists might write $|\psi\rangle$; $\psi^*$ for the covector $\langle\psi|$; either $\langle\psi,\phi\rangle$ or $\psi^*\phi$ for the inner product; and $\psi^*A\phi$ for what physicists would notate by $\langle\psi|A|\phi\rangle$.

+",238,,,,,3/15/2018 2:21,,,,0,,,,CC BY-SA 3.0 +126,1,,,3/15/2018 6:17,,19,318,"

My understanding is that there seems to be some confidence that quantum annealing will provide a speedup for problems like the traveling salesman problem, due to the efficiency provided by, e.g., quantum tunneling. Do we know, however, roughly how much of a speedup is provided?

+",91,,26,,3/15/2018 14:02,3/15/2018 14:02,Level of advantage provided by annealing for traveling salesman,,1,0,,,,CC BY-SA 3.0 +129,2,,126,3/15/2018 10:23,,15,,"

First, let me note that quantum annealing, or more precisely the adiabatic quantum computation model, is polynomially equivalent to the conventional gate-based quantum computation model. Second, the general traveling salesman problem is NP-complete. Third, it is generally believed that with gate-based quantum computation one cannot solve NP-complete problems in polynomial time. All this means that it is regarded as highly unlikely that with quantum annealing one could solve the general traveling salesman problem in polynomial time.

+ +

Even though it is believed that the general problem can only be solved in exponential time with quantum annealing as well, there could still be some speed-up, for instance a polynomial one. Not too much is known about this for the general case. However, there is a very nice recent work that shows that there are bounded-error quantum algorithms which provide a quadratic quantum speedup when the degree of each vertex (in the traveling salesman problem) is at most 3.

+",86,,86,,3/15/2018 10:44,3/15/2018 10:44,,,,0,,,,CC BY-SA 3.0 +130,2,,64,3/15/2018 10:49,,2,,"

The website where you could access this 11-qubit quantum computer via a cloud service is not public yet. When it becomes public, then, of course, it will be posted here.

+",86,,86,,3/15/2018 10:55,3/15/2018 10:55,,,,1,,,,CC BY-SA 3.0 +131,1,132,,3/15/2018 10:58,,18,2425,"

Quantum gates are said to be unitary and reversible. However, classical gates can be irreversible, like the logical AND and logical OR gates. Then, how is it possible to model irreversible classical AND and OR gates using quantum gates?

+",26,,26,,3/15/2018 15:06,3/15/2018 15:06,If quantum gates are reversible how can they possibly perform irreversible classical AND and OR operations?,,1,0,,,,CC BY-SA 3.0 +132,2,,131,3/15/2018 10:58,,17,,"

Let's say we have a function $f$ which maps $n$ bits to $m$ bits (where $m<n$).

+ +

$$f: \{0,1\}^{n} \to \{0,1\}^{m}$$

+ +

We could of course design a classical circuit to perform this operation. Let's call it $C_f$. It takes an $n$-bit string $X$ as input and outputs $f(X)$.

+ +

Now, we would like to do the same thing using a quantum circuit. Let's call it $U_f$, which takes as input $|X\rangle$ and outputs $|f(X)\rangle$. Now remember that since quantum mechanics is linear the input qubits could of course be in a superposition of all the $n$-bit strings. So the input could be in some state $\sum_{X\in\{0,1\}^{n}}\alpha_X|X\rangle$. By linearity the output is going to be $\sum_{X\in\{0,1\}^{n}}\alpha_X|f(X)\rangle$.

+ +

Evolution in quantum mechanics is unitary. And because it is unitary, it is reversible. This essentially means that if you apply a quantum gate $U$ on an input state $|x\rangle$ and get an ouput state $U|x\rangle$, you can always apply an inverse gate $U^{\dagger}$ to get back to the state $|x\rangle$.

+ +

+ +

Notice, carefully in the above picture, that the number of input lines (i.e. six) is exactly the same as the number of output lines at each step. This is because of the unitarity of the operations. Compare this to classical operations like the logical AND, where $0\wedge1$ gives a single-bit output $0$. You can't reconstruct the initial bits $0$ and $1$ from the output, since even $0\wedge 0$ and $1\wedge0$ would have mapped to the same output $0$. But, consider the classical NOT gate. If the input is $0$ it outputs $1$, while if the input is $1$ it outputs $0$. Since this mapping is one-one, it can be easily implemented as a reversible unitary gate, namely, the Pauli-X gate. However, for implementing a classical AND or a classical OR gate we need to think a bit more.

+ +

Consider the CSWAP gate. Here's a rough diagram showing the scheme:

+ +

+ +

In the CSWAP gate, depending on the control bit, the other two bits may or may not get swapped. Notice that there are three input lines and three output lines. So, it can be modeled as a unitary quantum gate. Now, if $z=0$: if $x=0$, the output is $0$, while if $x=1$, the output is $y$.

+ +

+ +

If you look closely, one output wire carries $\bar{x}\wedge y$ while another carries $x\wedge y$. So we could successfully generate the desired output $x\wedge y$, although we ended up with some ""junk"" outputs, $\bar{x}\wedge y$ and $x$. An interesting fact is that the inverse of the CSWAP gate is the CSWAP gate itself (check!).
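
Both claims — that CSWAP is its own inverse and that it computes $x\wedge y$ with the junk outputs described above — are easy to verify numerically. Here's a NumPy sketch (the `ket` helper is invented only for this example):

```python
import numpy as np
from functools import reduce

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

def ket(bits):
    """Basis state |b1 b2 b3> as a Kronecker product of single-qubit kets."""
    return reduce(np.kron, [ket1 if b else ket0 for b in bits])

# CSWAP (Fredkin) as a permutation matrix on basis states |c, a, b>:
# if c == 1, swap a and b; otherwise do nothing.
CSWAP = np.zeros((8, 8))
for c in (0, 1):
    for a in (0, 1):
        for b in (0, 1):
            out = (c, b, a) if c == 1 else (c, a, b)
            CSWAP += np.outer(ket(out), ket((c, a, b)))  # |out><in|

# Reversible: CSWAP is its own inverse
assert np.allclose(CSWAP @ CSWAP, np.eye(8))

# AND via CSWAP: input |x, y, 0>; the third wire comes out as |x AND y>,
# the second wire as the junk |(NOT x) AND y|
for x in (0, 1):
    for y in (0, 1):
        assert np.allclose(CSWAP @ ket((x, y, 0)), ket((x, (1 - x) * y, x * y)))
```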

+ +

That's all! Remember that all classical gates can be constructed with the NAND gate, which can of course be constructed from an AND and a NOT gate. We effectively modelled the classical NOT and the classical AND gate using reversible quantum gates. Just to be on the safe side we can also add the quantum CNOT gate to our list, because using CNOT we can copy bits.

+ +

Hence, the basic message is that using the quantum CSWAP, CNOT and the NOT gates we can replicate any classical gate. BTW, there's a clever trick to get rid of the ""junk"" bits which are produced when quantum gates are used, but that's another story.

+ +

P.S: It's very important to get rid of the ""junk"" bits or else they can cause computational errors!

+ +

Reference & Image Credits: Quantum Mechanics and Quantum Computation MOOC offered by UC Berkeley on edX.

+",26,,,,,3/15/2018 10:58,,,,1,,,,CC BY-SA 3.0 +133,1,134,,3/15/2018 13:27,,18,519,"

Quantum annealing is an optimization protocol that, thanks to quantum tunneling, allows one in certain circumstances to maximize/minimize a given function more efficiently than classical optimization algorithms.

+ +

A crucial point of quantum annealing is the adiabaticity of the algorithm, which is required for the state to stay in the ground state of the time-dependent Hamiltonian. This is, however, also a problem, as it means that finding a solution can require very long times.

+ +

How long do these times have to be for a given Hamiltonian? More precisely, given a problem Hamiltonian $\mathcal H$ whose ground state we want to find, are there results saying how long it would take a quantum annealer to reach the solution?

+",55,,55,,04-02-2018 19:20,11-01-2021 23:35,How long does quantum annealing take to find the solution to a given problem?,,2,3,,,,CC BY-SA 3.0 +134,2,,133,3/15/2018 13:38,,13,,"

The time to solution (tts) is highly dependent on the Hamiltonian of the problem one would like to solve. The D-Wave uses a spin-glass-like Hamiltonian, for which finding the ground state is in general NP-hard.

+ +

Due to having to run the annealing process multiple times, tts measures are typically quantified by how long it takes to find the ground state some percent of the time.

+ +

Here's a paper by some colleagues that explains tts (see especially equation 3).

+",54,,91,,3/15/2018 14:46,3/15/2018 14:46,,,,0,,,,CC BY-SA 3.0 +135,1,1218,,3/15/2018 13:58,,34,1738,"

It is generally believed and claimed that quantum computers can outperform classical devices in at least some tasks.

+ +

One of the most commonly cited examples of a problem in which quantum computers would outperform classical devices is $\text{Factoring}$, but then again, it is also not known whether $\text{Factoring}$ is also efficiently solvable with a classical computer (that is, whether $\text{Factoring}\in \text{P}$).

+ +

For other commonly cited problems in which quantum computers are known to provide an advantage, such as database search, the speedup is only polynomial.

+ +

Are there known instances of problems in which it can be shown (either proved or proved under strong computational complexity assumptions) that quantum computers would provide an exponential advantage?

+",55,,26,,5/15/2019 13:58,1/27/2022 5:41,Are there problems in which quantum computers are known to provide an exponential advantage?,,5,7,,,,CC BY-SA 3.0 +136,1,,,3/15/2018 15:02,,36,5215,"

All quantum operations must be unitary to allow reversibility, but what about measurement? Measurement can be represented as a matrix, and that matrix is applied to qubits, so that seems equivalent to the operation of a quantum gate. That's definitively not reversible. Are there any situations where non-unitary gates might be allowed?

+",91,,26,,01-01-2019 09:02,8/20/2021 22:15,"If all quantum gates must be unitary, what about measurement?",,8,0,,,,CC BY-SA 3.0 +137,1,139,,3/15/2018 15:12,,10,287,"

Are there any encryption suites that can be cracked by usual computers or super computers, but not quantum computers?

+ +

If that's possible, what assumptions would it depend on? (Factorizing big numbers; computing $a^{bc}\pmod d$ from $a^b\pmod d$ and $a^c\pmod d$; etc.)

+",248,,253,,04-02-2018 08:23,04-02-2018 08:23,Are there any encryption suites which can be cracked by classical computers but not quantum computers?,,1,2,,,,CC BY-SA 3.0 +138,2,,135,3/15/2018 16:21,,12,,"

Not sure if this is strictly what you're looking for, and I don't know that I'd qualify this as "exponential" (I'm also not a computer scientist, so my ability to do algorithm analysis is more or less nonexistent...), but a recent result by Bravyi et al. presented a class of '2D Hidden Linear Function problems' that provably use fewer resources on a parallel quantum device.

+

Here is a quick summary of the paper Sergey Bravyi, David Gosset and Robert Koenig, Quantum advantage with shallow circuits. The quantum advantage is in the depth of the parallel circuit, i.e. the number of threads one can split the problem into under bounded fan-in. The problem: given an $N\times N$ matrix $A$ and an input vector $b$, one can define a quadratic form $q$ and a special subspace for that form. The goal of the "hidden linear function problem" is to find a linearization of that quadratic form on this special subspace.

+

A classical probabilistic circuit is constrained to ~$\log{N}$ depth if you want your computation to succeed with probability $>7/8$ (you probably want it to succeed with at least this probability). A quantum circuit can do it with constant depth, so that's a big improvement.

+

The proof essentially amounts to a specific graph state being difficult for a classical circuit to simulate; this sub-result was proven slightly earlier by Jonathan Barrett, Carlton M. Caves, Bryan Eastin, Matthew B. Elliott and Stefano Pironio, Modeling Pauli measurements on graph states with nearest-neighbor classical communication. The rest of the paper then shows that the greater class of problems contains this difficult problem.

+",236,,2067,,1/26/2022 23:15,1/26/2022 23:15,,,,0,,,,CC BY-SA 4.0 +139,2,,137,3/15/2018 16:41,,13,,"

This is not a very enlightening concept, because most interesting quantum algorithms, such as Shor's algorithm, involve some classical computations as well. While you can always shoehorn a classical computation into a quantum computer, it would be at unnecessarily exorbitant cost.

+ +

We don't yet know, of course, exactly what problems will be hard to solve even if given a quantum computer—the NIST PQCRYPTO competition is in progress right now to study that question.

+ +

However, even then, it likely won't be answered definitively any more than we can answer definitively what cryptography we can't break with classical computers: nobody has found a realistically efficient classical algorithm for factoring a product $n$ of uniform random 1024-bit primes whose totient $\phi(n)$ is coprime with 3, nor has anyone found a realistically efficient classical algorithm for computing cube roots modulo $n$, nor has anyone even ascertained whether factoring is harder than computing cube roots (though certainly it's not easier).

+ +

At best, we can say that a lot of smart people have been well-funded to think very hard about it, and we can choose parameter sizes that thwart the best attacks they have come up with. The outcome of the NIST PQCRYPTO competition will be the same, with any luck—unless someone clever thinks of ways to break every single one of the dozens of candidates.

+",238,,,,,3/15/2018 16:41,,,,0,,,,CC BY-SA 3.0 +140,2,,136,3/15/2018 17:26,,15,,"

Short Answer

+

Quantum operations do not need to be unitary. In fact, many quantum algorithms and protocols make use of non-unitarity.

+
+

Long Answer

+

Measurements are arguably the most obvious example of non-unitary transitions being a fundamental component of algorithms (in the sense that a "measurement" is equivalent to sampling from the probability distribution obtained after the decoherence operation $\sum_k c_k\lvert k\rangle\mapsto\sum_k |c_k|^2\lvert k\rangle\langle k\rvert$).

+

More generally, any quantum algorithm that involves probabilistic steps requires non-unitary operations. A notable example that comes to mind is HHL09's algorithm to solve linear systems of equations (see 0811.3171). A crucial step in this algorithm is the mapping $|\lambda_j\rangle\mapsto C\lambda_j^{-1}|\lambda_j\rangle$, where $|\lambda_j\rangle$ are eigenvectors of some operator. This mapping is necessarily probabilistic and therefore non-unitary.

+

Any algorithm or protocol that makes use of (classical) feed-forward is also making use of non-unitary operations. This is the basis of one-way quantum computation protocols (which, as the name suggests, require non-reversible operations).

+

The most notable schemes for optical quantum computation with single photons also require measurements, and sometimes post-selection, to entangle the states of different photons. For example, the KLM protocol produces probabilistic gates, which are therefore at least partly non-reversible. A nice review on the topic is quant-ph/0512071.

+

Less intuitive examples are provided by dissipation-induced quantum state engineering (e.g. 1402.0529 or srep10656). In these protocols, one uses a dissipative open-system dynamics and engineers the interaction of the state with the environment in such a way that the long-time stationary state of the system is the desired one.

+",55,,-1,,6/18/2020 8:31,04-11-2018 23:17,,,,0,,,,CC BY-SA 3.0 +141,2,,136,3/15/2018 18:59,,12,,"

At risk of going off-topic from quantum computing and into physics, I'll answer what I think is a relevant subquestion of this topic, and use it to inform the discussion of unitary gates in quantum computing.

+ +

The question here is: Why do we want unitarity in quantum gates?

+ +

The less specific answer is, as above, that it gives us 'reversibility', or as physicists often talk about it, a type of symmetry for the system. I'm taking a course in quantum mechanics right now, and the way unitary gates cropped up in that course was motivated by the desire to have physical transformations $\hat{U}$ that act as symmetries. This imposed two conditions on the transformation $\hat{U}$:

+ +
  1. The transformations should act linearly on the state (this is what gives us a matrix representation).
  2. The transformations should preserve probability, or more specifically the inner product. This means that if we define:
+ +

$$|\psi '\rangle = U |\psi\rangle, |\phi'\rangle = U |\phi\rangle$$

+ +

Preservation of inner product means that $\langle \phi | \psi \rangle= \langle \phi' | \psi'\rangle$. From this second specification, unitarity can be derived (for full details see Dr. van Raamsdonk's notes here).
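
As a small numerical illustration (a NumPy sketch, not from the notes): a unitary such as the Hadamard gate satisfies specification 2, while a projector — the prototype of a measurement — generally does not:

```python
import numpy as np

H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)  # Hadamard: unitary
P = np.array([[1.0, 0.0], [0.0, 0.0]])                # projector onto |0>

rng = np.random.default_rng(7)
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
phi = rng.normal(size=2) + 1j * rng.normal(size=2)

inner = np.vdot(psi, phi)  # np.vdot conjugates its first argument

# The unitary preserves the inner product...
assert np.isclose(np.vdot(H @ psi, H @ phi), inner)

# ...while the projective map generally does not
assert not np.isclose(np.vdot(P @ psi, P @ phi), inner)
```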

+ +

So this answers the question of why operations that keep things ""reversible"" have to be unitary.

+ +

The question of why measurement itself is not unitary is more related to quantum computation. A measurement is a projection onto a basis; in essence, it must ""answer"" with one or more basis states as the state itself. It also leaves the state in a way that is consistent with the ""answer"" to the measurement, and not consistent with the underlying probabilities that the state began with. So the operation satisfies specification 1. of our transformation $U$, but definitively does not satisfy specification 2. Not all matrices are created equal!

+ +

To round things back to quantum computation, the fact that measurements are destructive and projective (i.e. we can only reconstruct the superposition through repeated measurements of identical states, and every measurement only gives us a 0/1 answer) is part of what makes the separation between quantum computing and regular computing subtle (and part of why it's difficult to pin that down). One might assume quantum computing is more powerful because of the mere size of the Hilbert space, with all those state superpositions available to us. But our ability to extract that information is heavily limited.

+ +

As far as I understand it, this shows that for information storage purposes a qubit is only as good as a regular bit, and no better. But we can be clever in quantum computation with the way that information is traded around, because of the underlying linear-algebraic structure.

+",236,,236,,3/15/2018 20:40,3/15/2018 20:40,,,,2,,,,CC BY-SA 3.0 +142,1,145,,3/15/2018 19:10,,21,1533,"

Post-quantum cryptography, like lattice-based cryptography, is designed to be secure even if quantum computers are available. It resembles currently employed encryption schemes, but is based on problems which are most likely not efficiently solvable by a quantum computer.

+ +

Obviously research on quantum key distribution (QKD) continues. But what exactly are advantages of quantum key distribution over post-quantum cryptography?

+ +

The development of a new technology like QKD can have great side effects, and maybe QKD will be more cost-efficient or faster in the very long term, but I doubt that this is the main reason.

+",104,,26,,05-08-2019 10:22,6/26/2022 14:35,Advantage of quantum key distribution over post-quantum cryptography,,3,0,,,,CC BY-SA 3.0 +143,2,,136,3/15/2018 19:28,,24,,"

Unitary operations are only a special case of quantum operations, which are linear, completely positive maps (""channels"") that map density operators to density operators. This becomes obvious in the Kraus-representation of the channel, $$\Phi(\rho)=\sum_{i=1}^n K_i \rho K_i^\dagger,$$ where the so-called Kraus operators $K_i$ fulfill $\sum_{i=1}^n K_i^\dagger K_i\leq \mathbb{I}$ (notation). Often one considers only trace-preserving quantum operations, for which equality in the previous inequality holds. If additionally there is only one Kraus operator (so $n=1$), then we see that the quantum operation is unitary.
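
As a concrete illustration (a NumPy sketch; the amplitude-damping channel is a standard textbook example, not taken from the text above), one can check the completeness relation and see that the channel is trace-preserving yet non-unitary:

```python
import numpy as np

# Amplitude-damping channel with decay probability gamma -- a standard
# example of a trace-preserving but non-unitary quantum operation.
gamma = 0.3
K0 = np.array([[1.0, 0.0], [0.0, np.sqrt(1 - gamma)]])
K1 = np.array([[0.0, np.sqrt(gamma)], [0.0, 0.0]])

# Completeness: sum_i K_i^dagger K_i = I (the trace-preserving case)
assert np.allclose(K0.conj().T @ K0 + K1.conj().T @ K1, np.eye(2))

def channel(rho):
    """Phi(rho) = sum_i K_i rho K_i^dagger."""
    return K0 @ rho @ K0.conj().T + K1 @ rho @ K1.conj().T

rho = np.array([[0.0, 0.0], [0.0, 1.0]])  # the excited state |1><1|
out = channel(rho)

# The trace is preserved, but |1><1| is mapped to a mixture of
# |0><0| and |1><1| -- something no unitary can do to a pure state.
assert np.isclose(np.trace(out), 1.0)
assert np.allclose(out, np.diag([gamma, 1.0 - gamma]))
```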

+ +

However, quantum gates are unitary, because they are implemented via the action of a Hamiltonian for a specific time, which gives a unitary time evolution according to the Schrödinger equation.

+",104,,,,,3/15/2018 19:28,,,,2,,,,CC BY-SA 3.0 +144,2,,142,3/15/2018 19:43,,18,,"

Quantum key distribution requires that you wholesale replace your entire communications infrastructure built out of 5 EUR ethernet cables and 0.50 EUR CPUs by multimillion-euro dedicated fiber links and specialized computers that actually just do classical secret-key cryptography anyway.

+ +

Plus you have to authenticate the shared secret keys you negotiate with quantum key distribution, which you will probably do using classical public-key cryptography unless you're rich enough to afford couriers with suitcases handcuffed to their wrists.

+ +

More details from François Grieu on crypto.se about what makes quantum cryptography secure.

+ +

The crux of the technical difference—costs and deployability and politics and class divisions aside—is that the physical protocol of a QKD system is intended to be designed so that it need not leave a physical trace that future mathematical breakthroughs could exploit to retroactively recover the shared secret negotiated over these dedicated fiber links. In contrast, with classical cryptography, public-key key agreements over the internet, where an eavesdropper records every bit over the wire, could in principle be broken by future mathematical breakthroughs.

+ +

Then, in both cases, the peers use the shared secret they negotiated, whether with quantum key distribution or with classical public-key key agreement, as a secret key for classical secret-key cryptography, which could in principle be broken by future mathematical breakthroughs. (But very smart well-funded people haven't made those breakthroughs after trying for decades.) And this doesn't mean that practical implementations of QKD won't leave physical traces either.

+ +

All that said, QKD is quantum, so it is sexy and makes a good sell to rich governments and banks, who have multimillion-euro discretionary funds for useless toys like QKD. The physics is also pretty cool for nerds to play with.

+ +

M. Stern calls to mind another advantage of QKD: It operates at the link layer, negotiating a secret key shared by the two endpoints of a fiber link—which might be one legitimate user and the MITM who spliced into that fiber link with a rogue QKD device. If, in the era of quantum supremacy, we replaced all the world's classical public-key key agreement by QKD, then where applications currently negotiate secret keys with their peer across the internet for end-to-end authenticated encryption over any routable medium, they would instead have to negotiate secret keys with their ISP, who would negotiate secrets with their upstream ISP, and so on, for hop-by-hop authenticated encryption. This would be a boon for the good guys in major world governments trying to (retroactively) monitor user communications to root out terrorists and activists and journalists and other inconvenient elements of society, because the ISPs would then necessarily have the secret keys ready to turn over to the police.

+",238,,238,,3/17/2018 2:27,3/17/2018 2:27,,,,1,,,,CC BY-SA 3.0 +145,2,,142,3/15/2018 20:41,,15,,"

If it is proven that a given asymmetric encryption protocol relies on a problem which cannot be solved efficiently even by a quantum computer, then quantum cryptography becomes largely irrelevant.

+ +

The point is that, as of today, no one has been able to do this. Indeed, such a result would be a serious breakthrough, as it would prove the existence of $\text{NP}$ problems which are not efficiently solvable on a quantum computer (while this is generally believed to be the case, it is still unknown whether there are problems in $\text{NP}\!\setminus\!\text{BQP}$).

+ +

Generally speaking, all classical asymmetric encryption protocols are safe under the assumption that a given problem is hard to solve, but in no case, to my knowledge, has it been proven (in the computational complexity sense) that the problem is indeed exponentially hard to solve with a quantum computer (and for many, not even that the problem is not efficiently solvable with a classical computer).

+ +

I think this is nicely explained by Bernstein in his review of post-quantum cryptography (Link). Quoting from the first section, where he has just talked about a number of classical encryption protocols:

+ +
+

Is there a better attack on these systems? Perhaps. This is a familiar risk in cryptography. This is why the community invests huge amounts of time and energy in cryptanalysis. Sometimes cryptanalysts find a devastating attack, demonstrating that a system is useless for cryptography; for example, every usable choice of parameters for the Merkle–Hellman knapsack public-key encryption system is easily breakable. Sometimes cryptanalysts find attacks that are not so devastating but that force larger key sizes. Sometimes cryptanalysts study systems for years without finding any improved attacks, and the cryptographic community begins to build confidence that the best possible attack has been found—or at least that real-world attackers will not be able to come up with anything better.

+
+ +

On the other hand, the security of QKD ideally does not rely on conjectures (or, as it is often put, QKD protocols provide in-principle information-theoretic security). If the two parties share a secure key, then the communication channel is unconditionally secure, and QKD provides an unconditionally secure way for them to exchange such a key (of course, still under the assumption of quantum mechanics being right). In Section 4 of the above-mentioned review, the author presents a direct (if possibly somewhat biased) comparison of QKD vs post-quantum cryptography. It is important to note that of course ""unconditional security"" is here meant in the information-theoretic sense, while in the real world there may be more important security aspects to consider. It is also to be noted that the real-world security and practicality of QKD is not believed to be factual by some (see e.g. Bernstein here and the related discussion on QKD on crypto.SE), and that the information-theoretic security of QKD protocols only holds if they are properly followed, which in particular means that the shared key has to be used as a one-time pad.

+ +

Finally, in reality, many QKD protocols can also be broken. The reason is that experimental imperfections of specific implementations can be exploited to break the protocol (see e.g. 1505.05303, and pag. 6 of npjqi201625). It is still possible to ensure security against such attacks using device-independent QKD protocols, whose security relies on Bell inequality violations and can be proven not to depend on the implementation details. The catch is that these protocols are even harder to implement than regular QKD.

+",55,,55,,3/17/2018 0:51,3/17/2018 0:51,,,,1,,,,CC BY-SA 3.0 +162,1,,,3/16/2018 1:10,,18,812,"

Often, when comparing two density matrices, $\rho$ and $\sigma$ (such as when $\rho$ is an experimental implementation of an ideal $\sigma$), the closeness of these two states is given by the quantum state fidelity $$F = tr\left(\sqrt{\sqrt{\rho}\sigma\sqrt{\rho}}\right),$$ with infidelity defined as $1-F$.

+ +

Similarly, when comparing how close an implementation of a gate is with an ideal version, the fidelity becomes $$F\left( U, \tilde U\right) = \int\left[tr\left(\sqrt{\sqrt{U\left|\psi\rangle\langle\psi\right|U^\dagger}\tilde U\left|\psi\rangle\langle\psi\right|\tilde U^\dagger\sqrt{U\left|\psi\rangle\langle\psi\right|U^\dagger}}\right)\right]^2\,d\psi,$$ where $d\psi$ is the Haar measure over pure states. Unsurprisingly, this can get relatively unpleasant to work with.

+ +

Now, let's define a matrix $M = \rho - \sigma$ in the case of density matrices, or $M = U - \tilde U$ when working with gates. Then, the Schatten norms1, such as $\| M\|_1 = tr\left(\sqrt{M^\dagger M}\right)$, $\| M\|_2^2 = tr\left(M^\dagger M\right)$, or other norms, such as the diamond norm can be computed.
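
To make the comparison concrete, here is a NumPy sketch (not part of the original question; the pure states are chosen so the answers can be checked by hand) computing the fidelity and the Schatten 1-norm of the difference:

```python
import numpy as np

def psd_sqrt(M):
    """Matrix square root of a positive semidefinite matrix via eigendecomposition."""
    w, v = np.linalg.eigh(M)
    return v @ np.diag(np.sqrt(np.clip(w, 0, None))) @ v.conj().T

def fidelity(rho, sigma):
    """F = tr sqrt( sqrt(rho) sigma sqrt(rho) )."""
    s = psd_sqrt(rho)
    return np.real(np.trace(psd_sqrt(s @ sigma @ s)))

def trace_norm(M):
    """Schatten 1-norm ||M||_1: the sum of the singular values of M."""
    return np.linalg.svd(M, compute_uv=False).sum()

rho = np.array([[1.0, 0.0], [0.0, 0.0]])    # |0><0|
sigma = np.array([[0.5, 0.5], [0.5, 0.5]])  # |+><+|

F = fidelity(rho, sigma)
assert np.isclose(F, 1 / np.sqrt(2))        # |<0|+>| = 1/sqrt(2)

# For pure states, (1/2)||rho - sigma||_1 = sqrt(1 - F^2)
assert np.isclose(0.5 * trace_norm(rho - sigma), np.sqrt(1 - F**2))
```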

+ +

These norms are often easier to compute2 than the above fidelity. What makes matters worse is that, in randomised benchmarking calculations, infidelity doesn't even appear to be a great measure, yet it is the number used every time I've seen benchmarking values quoted for quantum processors.3

+ +

So, why is (in)fidelity the go-to value for calculating gate errors in quantum processors (using randomised benchmarking), when it doesn't seem to have a helpful meaning and other methods, such as Schatten norms, are easier to calculate on a classical computer?

+ +
+ +

1 The Schatten p-norm of $M$ is $\| M\|_p^p = tr\left(\sqrt{M^\dagger M}^p\right)$

+ +

2 i.e. plug in a noise model on a (classical) computer and simulate

+ +

3 Such as IBM's QX5

+",23,,55,,09-10-2020 11:47,09-10-2020 11:47,Purpose of using Fidelity in Randomised Benchmarking,,1,0,,,,CC BY-SA 3.0 +163,1,169,,3/16/2018 5:43,,16,8537,"

If one wants to start building a quantum computer from scratch inside simulations (like how people get to build a classical computer from scratch in the Nand2Tetris course), is it possible?

+ +

If yes, what would be some possible approaches?

+ +

Also, what would be the limits of such a simulated machine, given a specific amount of classical computing power? For instance, if we were to pick your average desktop/laptop, what would be the limit? And if we take a supercomputer (like Titan), what would the limit be then?

+",261,,26,,12/13/2018 19:36,12/13/2018 19:36,Building a quantum computer in simulation,,3,1,,,,CC BY-SA 3.0 +168,2,,163,3/16/2018 6:22,,5,,"

Well, I'm working on a simulator of a quantum computer currently. The basic idea of quantum computing, of course, is gates represented by matrices applied to qubits represented by vectors. Using Python's numpy package, this isn't that hard to program in the most basic sense.

+ +

From there, one might expand upon, of course, the interface. One might also consider trying to make it a simulator of a nonideal quantum computer, that is, taking into account decoherence times and error correction.

+ +

Then, you get into heavily uncharted territory. How do you construct the instruction set for a quantum computer? Who knows. You'll have to figure it out. You'll also have to figure out your version of assembly, and even your version of higher-level programming languages.

+ +

So, limitations of a classical computer in this? Well, this is a really complicated question (and worth asking separately, imho) but here's a quick summary:

+ +
    +
  • we don't know if quantum computers are actually better than classical computers; the algorithms for classical computers could just not be good enough yet (quantum supremacy)
  • +
  • Let's say, as seems decently likely, that quantum computers are better than classical computers. That improvement will depend heavily on the problem - quantum computers might see, for example, a much higher speed improvement in finding prime factorizations than in checking email. (See also this P.SE q/a.)
  • +
  • To provide some sort of numerical value, if we consider the current fastest algorithm for classical prime factorization, i.e., the general number field sieve, we have an O-time of $O\Big(e^{\sqrt[3]{\frac{64}{9}}(\log N)^{\frac 13}(\log\log N)^{\frac 23}}\Big)$ which is clearly rather gross. Shor's algorithm, on the other hand, works in $O((\log N)^2(\log \log N)(\log \log \log N))$ which is obviously a lot faster.
  • +
  • I can run a bunch of qubits on my computer as long as I keep them in the $|0\rangle$ or $|1\rangle$ states - i.e., effectively classical. So in some senses, your question is, again, kind of ill-defined.
  • +
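To put very rough numbers on the comparison in the list above, here is a quick sketch evaluating both scaling formulas at a 2048-bit $N$ (leading constants and $o(1)$ terms are ignored, so treat the output purely as an order-of-magnitude illustration, not an actual operation count):

```python
import math

def gnfs_cost(n_bits):
    # heuristic GNFS scaling: exp((64/9)^(1/3) (ln N)^(1/3) (ln ln N)^(2/3)), N ~ 2^n_bits
    ln_n = n_bits * math.log(2)
    return math.exp((64 / 9) ** (1 / 3) * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3))

def shor_cost(n_bits):
    # Shor scaling: (log N)^2 (log log N) (log log log N)
    ln_n = n_bits * math.log(2)
    return ln_n ** 2 * math.log(ln_n) * math.log(math.log(ln_n))

print(gnfs_cost(2048))  # ~1e35
print(shor_cost(2048))  # ~3e7
```

The gap of nearly thirty orders of magnitude is the usual motivation for quoting factoring as the headline quantum speedup.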
+",91,,,,,3/16/2018 6:22,,,,0,,,,CC BY-SA 3.0 +169,2,,163,3/16/2018 6:31,,6,,"

The first part of your question seems like a duplicate of an existing QC SE post: Are there emulators for quantum computers?.

+ +

I'm not completely sure what you mean by building a quantum computer from scratch inside simulations. However, yes, you can make software simulations of a quantum computer using your average laptop/desktop. The exact ""limit"" will depend on the computer specifications.

+ +

Since a quantum computer does not violate the Church-Turing thesis, in theory it is definitely possible to simulate a quantum computer using an ideal Turing machine. The obvious approach to simulating such a system requires exponential time on a classical computer, and the space complexity is an exponential function of the number of quantum bits simulated. Say you simulate an $n$-qubit quantum computer; you'd need to store about $2^{n}$ bits of information in your classical computer at every instant. Moreover, implementing quantum gates will again take a huge amount of resources in terms of time and space complexity. An implementation of a quantum gate operating on $n$ qubits would have to store about $4^{n}$ bits of information (because you can represent any such gate operation as a matrix of size $2^{n}\times2^{n}$).

+ +

You can sort of estimate the ""limit"" depending on the specifications of the classical computer. For example, if the (accessible) memory size of your classical computer is around $1$ TB, I'd expect you can simulate a $\log_4(8\times 10^{12})\approx 21$-qubit quantum computer (to be on the safe side, let's say $20$). However, keep in mind that classical computers would take much longer to access all the individual bits of information, compared to an actual quantum computer (depending on the hardware of the quantum computer). So it's going to be slower than an actual quantum computer! Some other limitations are that after each action of an $n$-qubit gate you need to keep track of which output qubits are entangled, which is an NP-hard problem. Also, measurement cannot be accurately simulated on a classical computer, because classically there is no truly random number generator.
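The back-of-the-envelope estimate above can be sketched in a couple of lines of Python (the accounting, $4^n$ entries for a full $2^n\times 2^n$ gate matrix at roughly one bit of information each, simply follows the rough reasoning in this answer and is very much a simplification):

```python
import math

def max_qubits(memory_bits):
    # largest n such that a full 2^n x 2^n gate matrix (4^n entries,
    # counted here at ~1 bit per entry) fits in the given memory
    return int(math.log(memory_bits, 4))

print(max_qubits(8e12))  # 1 TB ~ 8e12 bits -> 21 qubits
```

A more careful count (e.g. 16 bytes per complex amplitude for the state vector alone) shifts the number by a handful of qubits, but the exponential wall stays in the same place.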

+",26,,26,,3/16/2018 7:37,3/16/2018 7:37,,,,1,,,,CC BY-SA 3.0 +170,2,,163,3/16/2018 11:18,,5,,"

I feel like this question mostly rests on an underlying misunderstanding of what it means to ""simulate"" something.

+ +

Generally speaking, to ""simulate"" a complex system means to reproduce certain features of such system with a platform that is easier to control (often, but not always, a classical computer).

+ +

Therefore, the question of whether ""one can simulate a quantum computer in a classical computer"" is somewhat ill-posed. If you mean that you want to replicate every possible aspect of a ""quantum computer"", then that is never going to happen, just like you are never going to be able to simulate every aspect of any classical system (unless you use the same identical system, of course).

+ +

On the other hand, you certainly can simulate many aspects of a complex device like a ""quantum computer"". For example, one may want to simulate the evolution of a state within a quantum circuit. Indeed, this can be exceedingly easy to do! For example, if you have Python on your computer, just run the following

+ +
import numpy as np
+identity_2d = np.diag([1, 1])
+hadamard_gate = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
+
+# CNOT with the first qubit as control and the second as target
+cnot_gate = np.array([[1, 0, 0, 0],
+                      [0, 1, 0, 0],
+                      [0, 0, 0, 1],
+                      [0, 0, 1, 0]])
+H1_gate = np.kron(hadamard_gate, identity_2d)
+
+awesome_entangling_gate = np.dot(cnot_gate, H1_gate)
+
+initial_state = np.array([1, 0, 0, 0])
+final_state = np.dot(awesome_entangling_gate, initial_state)
+print(final_state)  # [0.70710678 0. 0. 0.70710678], a Bell state
+
+ +

Congratulation, you just ""simulated"" the evolution of a separable two-qubit state into a Bell state!

+ +

However, if you try to do the same with, say, 40 qubits and a nontrivial gate, you are not going to be able to pull it off this easily. The naive reason is that even just storing a (non-sparse) $n$-qubit state requires specifying ~$2^n$ complex numbers, and this starts taking up a lot of memory very quickly. I say ""naive"" here because in many cases there may be tricks that allow you to avoid this problem$^{(1)}$. This is why many people work on trying to find clever tricks to simulate quantum circuits (and other types of quantum systems) with classical computers, and why this is far from trivial to do$^{(2)}$.

+ +

Other answers already touched on various aspects of this hardness, and the answers to this other question already mention many available platforms to simulate/emulate various aspects of quantum algorithms, so I will not go there.

+ +
+ +

(1) An interesting example of this is the problem of simulating a boson sampling device (this is not a quantum algorithm in the sense of a state evolving through a series of gates, but it is nonetheless an example of a nontrivial quantum device). BosonSampling is a sampling problem, in which one is tasked with sampling from a specific probability distribution, and this has been shown (under likely assumptions) to be impossible to do efficiently with a classical device. Although it was never shown to be a fundamental aspect of this hardness, a certainly nontrivial issue associated with simulating a boson sampling device was that of having to compute an exponentially large number of probabilities from which to sample. However, it was recently shown that one does not, in fact, need to compute the whole set of probabilities to sample from them (1705.00686 and 1706.01260). This is not far in principle from simulating the evolution of a lot of qubits in a quantum circuit without having to store the whole state of the system at any given point. Regarding quantum circuits more directly, examples of recent breakthroughs in simulation capabilities are 1704.01127 and 1710.05867 (also a super-recent one, not yet published, is 1802.06952).

+ +

(2) In fact, it has been shown (or rather, strong evidence has been provided for the fact) that it is not possible to efficiently simulate most quantum circuits, see 1504.07999.

+",55,,55,,3/30/2018 11:35,3/30/2018 11:35,,,,0,,,,CC BY-SA 3.0 +171,1,172,,3/16/2018 17:35,,47,6916,"

I'm admittedly a novice in this field, but I have read that, while the D-wave (one) is an interesting device, there is some skepticism regarding it being 1) useful and 2) actually a 'quantum computer'.

+ +

For example, Scott Aaronson has expressed multiple times that he is skeptical about whether the 'quantum' parts in the D-wave are actually useful:

+ +
+

It remains true, as I’ve reiterated here for years, that we have no direct evidence that quantum coherence is playing a role in the observed speedup, or indeed that entanglement between qubits is ever present in the system.

+
+ +

Exerpt from this blog.

+ +

Additionally, the relevant Wikipedia section on skepticism against the D-wave is a mess.

+ +

So, I ask:

+ +
    +
  1. I know that D-wave claims to use some sort of quantum annealing. Is there (dis)proof of the D-wave actually using quantum annealing (with effect) in its computations?

  2. +
  3. Has it been conclusively shown that the D-wave is (in)effective? If not, is there a clear overview of the work to attempt this?

  4. +
+",253,,253,,3/26/2018 15:38,5/13/2019 15:59,Is there proof that the D-wave (one) is a quantum computer and is effective?,,2,0,,,,CC BY-SA 3.0 +172,2,,171,3/16/2018 18:54,,29,,"

There is still a search for problems where the D-Wave shows improvement over classical algorithms. One might recall media splashes where the D-Wave solved some instances $10^8$ times faster than classical algorithms, but which forgot to mention that the problem can be solved in polynomial time using minimum weight perfect matching.

+ +

Denchev showing $10^8$ speedup +https://arxiv.org/abs/1512.02206

+ +

Mandra using MWPM +https://arxiv.org/abs/1703.00622

+ +

There is some evidence that there are indeed some quantum effects used by the D-Wave. Notably, a study by Katzgraber et al. compares the D-Wave with simulated annealing and examines the effects of reducing barrier thickness in the energy landscape (to make tunneling more probable). In Fig. 5 of the following paper the barrier thickness is reduced and the D-Wave shows improvement on the class of problems while Simulated Annealing shows no improvement.

+ +

https://arxiv.org/abs/1505.01545

+ +

Full disclosure: Katzgraber was my PhD advisor so I am most familiar with his work.

+ +

On the other hand, there have been a few papers on the topic of the D-Wave being a simple thermal annealer with no quantum effects, notably the papers by Smolin although they are a bit dated now.

+ +

https://arxiv.org/abs/1305.4904

+ +

https://arxiv.org/abs/1401.7087

+ +

More recently Albash et al. discussed the finite temperature as a reason for quantum annealers not functioning competitively.

+ +

https://arxiv.org/abs/1703.03871

+",54,,26,,5/13/2019 15:59,5/13/2019 15:59,,,,0,,,,CC BY-SA 4.0 +175,1,176,,3/17/2018 2:26,,40,6646,"

Grover's search algorithm provides a provable quadratic speed-up for unsorted database search. +The algorithm is usually expressed by the following quantum circuit:

+ +

+ +

In most representations, a crucial part of the protocol is the ""oracle gate"" $U_\omega$, which ""magically"" performs the operation $|x\rangle\mapsto(-1)^{f(x)}|x\rangle$. It is, however, often left unsaid how difficult realizing such a gate would actually be. Indeed, it could seem like this use of an ""oracle"" is just a way to sweep the difficulties under the carpet.

+ +

How do we know whether such an oracular operation is indeed realizable? And if so, what is its complexity (for example in terms of complexity of gate decomposition)?

+",55,,55,,5/30/2021 20:10,5/30/2021 20:10,How is the oracle in Grover's search algorithm implemented?,,2,2,,,,CC BY-SA 4.0 +176,2,,175,3/17/2018 2:45,,29,,"

The function $f$ is simply an arbitrary boolean function of a bit string: $f\colon \{0,1\}^n \to \{0,1\}$. For applications to breaking cryptography, such as [1], [2], or [3], this is not actually a ‘database lookup’, which would necessitate storing the entire database as a quantum circuit somehow, but rather a function such as

+ +

\begin{equation*} + x \mapsto \begin{cases} + 1, & \text{if $\operatorname{SHA-256}(x) = y$;} \\ + 0, & \text{otherwise,} + \end{cases} +\end{equation*}

+ +

for fixed $y$, which has no structure we can exploit for a classical search, unlike, say, the function

+ +

\begin{equation*} + x \mapsto \begin{cases} + 1, & \text{if $2^x \equiv y \pmod{2^{2048} - 1942289}$}, \\ + 0, & \text{otherwise}, + \end{cases} +\end{equation*}

+ +

which has structure that can be exploited to invert it faster even on a classical computer.

+ +

The question of the particular cost can't be answered in general because $f$ can be any circuit—it's just a matter of making a quantum circuit out of a classical circuit. But usually, as in the example above, the function $f$ is very cheap to evaluate on a classical computer, so it shouldn't pose a particularly onerous burden on a quantum computer for which everything else about Grover's algorithm is within your budget.

+ +

The only general cost on top of $f$ is an extra conditional NOT gate $$C\colon \left|a\right> \left|b\right> \to \left|a\right> \left|a \oplus b\right>$$ where $\oplus$ is xor, and an extra ancillary qubit for it. In particular, if we have a circuit $$F\colon \left|x\right> \left|a\right> \lvert\text{junk}\rangle \mapsto \left|x\right> \left|a \oplus f(x)\right> \lvert\text{junk}'\rangle$$ built out of $C$ and the circuit for $f$, then if we apply it to $\left|x\right>$ together with an ancillary qubit initially in the state $\left|-\right> = H\left|1\right> = (1/\sqrt{2})(\left|0\right> - \left|1\right>)$ where $H$ is a Hadamard gate, then we get

+ +

\begin{align*} + F\left|x\right> \left|-\right> \lvert\text{junk}\rangle + &= \frac{1}{\sqrt{2}}\bigl( + F\left|x\right> \left|0\right> \lvert\text{junk}\rangle + - F\left|x\right> \left|1\right> \lvert\text{junk}\rangle + \bigr) \\ + &= \frac{1}{\sqrt{2}}\bigl( + \left|x\right> \left|f(x)\right> \lvert\text{junk}'\rangle + - \left|x\right> \left|1 \oplus f(x)\right> \lvert\text{junk}'\rangle + \bigr). +\end{align*}

+ +

If $f(x) = 0$ then $1 \oplus f(x) = 1$, so by simplifying we obtain $$F\left|x\right> \left|-\right> \lvert\text{junk}\rangle = \left|x\right> \left|-\right> \lvert\text{junk}'\rangle,$$ whereas if $f(x) = 1$ then $1 \oplus f(x) = 0$, so $$F\left|x\right> \left|-\right> \lvert\text{junk}\rangle = -\left|x\right> \left|-\right> \lvert\text{junk}'\rangle,$$ and thus in general $$F\left|x\right> \left|-\right> \lvert\text{junk}\rangle = (-1)^{f(x)} \left|x\right> \left|-\right> \lvert\text{junk}'\rangle.$$
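This phase-kickback identity is easy to verify numerically. Here is a minimal NumPy sketch for a single input qubit (the particular choice of $f$ is arbitrary), checking that feeding the ancilla in $\left|-\right>$ through the bit oracle produces exactly the $(-1)^{f(x)}$ phase:

```python
import numpy as np

# f marks a single item; here f(x) = 1 iff x == 1 (an arbitrary choice)
f = lambda x: int(x == 1)

# Bit oracle U_f |x>|b> = |x>|b XOR f(x)>, as a 4x4 permutation matrix
# with basis ordering index = 2x + b
U_f = np.zeros((4, 4))
for x in (0, 1):
    for b in (0, 1):
        U_f[2 * x + (b ^ f(x)), 2 * x + b] = 1

minus = np.array([1, -1]) / np.sqrt(2)  # the |-> ancilla state
for x in (0, 1):
    ket_x = np.eye(2)[x]
    out = U_f @ np.kron(ket_x, minus)
    # phase kickback: U_f |x>|-> = (-1)^f(x) |x>|->
    assert np.allclose(out, (-1) ** f(x) * np.kron(ket_x, minus))
print("phase kickback verified")
```

The same construction scales to $n$ input qubits; only the size of the permutation matrix grows.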

+",238,,238,,11/18/2019 19:07,11/18/2019 19:07,,,,0,,,,CC BY-SA 4.0 +177,1,179,,3/17/2018 6:30,,12,1026,"

I'm writing with respect to part I and part II of the Fourier sampling video lectures by Professor Umesh Vazirani.

+ +

In part I they start with:

+ +

In the Hadamard Transform:

+ +

+ +

$$|0...0\rangle \to \sum_{x\in\{0,1\}^n}\frac{1}{2^{n/2}}|x\rangle$$ +$$|u\rangle =|u_1...u_n\rangle \to \sum_{x\in\{0,1\}^n}\frac{(-1)^{u.x}}{2^{n/2}}|x\rangle \quad \text{(where $u.x=u_1x_1+u_2x_2+...+u_nx_n$)}$$

+ +

In Fourier Sampling:

+ +

$$|\psi\rangle=\sum_{x\in\{0,1\}^n}\alpha_x|x\rangle \to \sum_{x}\hat{\alpha}_x|x\rangle=|\hat{\psi}\rangle$$

+ +

When $|\hat{\psi}\rangle$ is measured we see $x$ with probability $|\hat{\alpha_x}|^2$.

+ +

In part II:

+ +

The Parity Problem:

+ +
+

We are given a function $f:\{0,1\}^n\to\{0,1\}$ as a black box. We + know that $f(x)=u.x$ (i.e. $u_1x_1+u_2x_2+...+u_nx_n (\text{mod 2})$) for some hidden $u\in\{0,1\}^{n}$. How do we + figure out $u$ with as few queries to $f$ as possible?

+
+ +

+ +

They say that we need to follow a two step procedure for figuring out $u$ in minimum possible number of steps.

+ +
    +
  • Set up a superposition $\frac{1}{2^{n/2}}\sum_{x}(-1)^{f(x)}|x\rangle$

  • +
  • Fourier sample to obtain $u$.

  • +
+ +

This is where I got lost. I don't understand what exactly they mean by ""set up a superposition..."". Why should we do it? And how does Fourier sampling (as described) help to determine $u$ ?

+ +

They further build a quantum gate like this:

+ +

+ +

Even this doesn't make sense to me. They're performing Hadamard transforms on a set of $n$ qubits in the state $|0\rangle$, then a bit flip, and then another Hadamard transform. So we get back to where we were initially. How does an extra $|-\rangle$ state input help by outputting $- \oplus f(0...0)$? I'm not even sure what operation $\oplus$ stands for, here.

+",26,,9482,,4/13/2020 16:32,1/26/2021 15:40,How does Fourier sampling actually work (and solve the parity problem)?,,1,0,,,,CC BY-SA 3.0 +178,2,,21,3/17/2018 9:24,,7,,"

As this is one of the first examples in Nielsen & Chuang, I'll go ahead and type out their explanation here for anyone else that is interested in entanglement for faster than light communication.

+ +

The following is an abridged version of Nielsen & Chuang section 1.3.7 entitled 'Example: quantum teleportation'

+ +
+

Quantum teleportation is a technique for moving quantum states around, + even in the absence of a quantum communications channel linking the + sender of the quantum state to the recipient.

+ +

Here’s how quantum teleportation works. Alice and Bob met long ago but + now live far apart. While together they generated an EPR pair, each + taking one qubit of the EPR pair (also known as Bell states) when they + separated. Many years later, Bob is in hiding, and Alice’s mission, + should she choose to accept it, is to deliver a qubit $\lvert \psi + \rangle$ to Bob. She does not know the state of the qubit, and + moreover can only send classical information to Bob. Should Alice + accept the mission?

+ +

Intuitively, things look pretty bad for Alice. She doesn’t know the + state $\lvert \psi \rangle$ of the qubit she has to send to Bob, and + the laws of quantum mechanics prevent her from determining the state + when she only has a single copy of $\lvert \psi \rangle$ in her + possession. What’s worse, even if she did know the state $\lvert \psi + \rangle$, describing it precisely takes an infinite amount of + classical information since $\lvert \psi \rangle$ takes values in a + continuous space. So even if she did know $\lvert \psi \rangle$, it + would take forever for Alice to describe the state to Bob. It’s not + looking good for Alice. Fortunately for Alice, quantum teleportation + is a way of utilizing the entangled EPR pair in order to send $\lvert + \psi \rangle$ to Bob, with only a small overhead of classical + communication.

+ +

In outline, the steps of the solution are as follows: Alice interacts + the qubit $\lvert + \psi \rangle$ with her half of the EPR pair, and then measures the two + qubits in her possession, obtaining one of four possible classical + results, 00, 01, 10, and 11. She sends this information to Bob. + Depending on Alice’s classical message, Bob performs one of four + operations on his half of the EPR pair. Amazingly, by doing this he + can recover the original state $\lvert + \psi \rangle$!

+
+ +

Skipping some of the details...

+ +
+

First, doesn’t teleportation allow one to transmit quantum states faster than light? This would be rather peculiar because the theory of relativity implies that faster than light information transfer could be used to send information backward in time. Fortunately, quantum teleportation does not enable faster than light communication, because to complete the teleportation Alice must transmit her measurement result to Bob over a classical communications channel. The classical channel is limited by the speed of light, so it follows that quantum teleportation cannot be accomplished faster than the speed of light, resolving the apparent paradox.

+
+",54,,26,,5/14/2019 14:56,5/14/2019 14:56,,,,0,,,,CC BY-SA 4.0 +179,2,,177,3/17/2018 11:15,,7,,"

Starting from the beginning (a very good place to start, after all), the state $\left| 0\right\rangle^{\otimes n}\left| -\right\rangle$ is input into $H^{\otimes n}\otimes I$ (here, called the 'Fourier sample'). This generates the state $$\left(\sum_{x=\{0,1\}^n}\frac{1}{2^{n/2}}|x\rangle\right)\left|-\right\rangle = \frac{1}{2^{n/2}}\left(\left|0\right\rangle + \left|1\right\rangle\right)^{\otimes n}\left|-\right\rangle.$$ Now, we apply the operation $U_f$ (in this case, the bit oracle) to give $$U_f\left(\sum_{x=\{0,1\}^n}\frac{1}{2^{n/2}}|x\rangle\right)\left|-\right\rangle = \sum_{x=\{0,1\}^n}\frac{1}{2^{n/2}}|x\rangle\left|-\oplus f\left(x\right)\right\rangle.$$

+

The first point to note is that $\oplus$ is the classical XOR operation. What this gives is actually the phase oracle, so that we get $$\left(\sum_{x=\{0,1\}^n}\frac{1}{2^{n/2}}\left(-1\right)^{f\left(x\right)}\left|x\right\rangle\right)\left|-\right\rangle.$$ This is because $U_f\left|x\right\rangle\left(\left|0\right\rangle - \left|1\right\rangle\right) = \left|x\right\rangle\left|f\left(x\right)\right\rangle - \left|x\right\rangle\left|1\oplus f\left(x\right)\right\rangle = \left(-1\right)^{f\left(x\right)}\left|x\right\rangle\left(\left|0\right\rangle - \left|1\right\rangle\right)$. This is the 'set up a superposition...' point - all this means is to perform the operations required to set the qubits in the above state, which is a superposition of all possible states (with phase factors, in this case). In this case, this is just Hadamard, followed by a phase oracle.

+

Now, $x$ is just a classical bit string: $x = \prod_ix_i$, so $$H\left|x_i\right\rangle = \frac{1}{\sqrt{2}}\left(\left|0\right\rangle + \left(-1\right)^{x_i}\left|1\right\rangle\right) = \frac{1}{\sqrt{2}}\sum_{y=\left\lbrace0, 1\right\rbrace}\left(-1\right)^{x_i.y}\left|y\right\rangle.$$

+

This gives the property $$H^{\otimes n}\left| x\right\rangle = \frac{1}{2^{n/2}}\sum_{y\in\left\lbrace0, 1\right\rbrace^n}\left(-1\right)^{x.y}\left|y\right\rangle.$$

+

This gives the final state as $$\frac{1}{2^n}\left(\sum_{x, y=\{0,1\}^n}\left(-1\right)^{f\left(x\right) \oplus x.y}\left|y\right\rangle\right)\left|-\right\rangle.$$

+

We know that $f\left(x\right) = u.x = x.u$, giving $\left(-1\right)^{f\left(x\right) \oplus x.y} = \left(-1\right)^{x.\left(u\oplus y\right)}$. Summing over the $x$ terms gives that $\sum_x\left(-1\right)^{x.\left(u\oplus y\right)} = 0,\, \forall\, u\oplus y \neq 0$. This means that we're left with the term for $u\oplus y = 0$, which means that $u=y$, giving the output as $\left|u\right\rangle\left|-\right\rangle$, which is measured to obtain $u$.

+

As for why we want to set up a superposition: This is where the power of quantum computing comes into play - In less mathematical terms, applying the Hadamard transformation is performing a rotation on the qubit states to get into the state $\left|+\right\rangle^{\otimes n}$. You then rotate each qubit in this superposition state using an operation equivalent to XOR (in this new basis), so that when performing the Hadamard transformation again, you're now just rotating back onto the state $\left|u\right\rangle$. Another way of looking at this is to consider it as a reflection or inversion that achieves the same result.

+

The point is that, using superposition, we can do this to all the qubits at the same time, instead of having to individually check each qubit as in the classical case.
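The whole procedure can also be checked numerically. Below is a small NumPy sketch (the hidden string $u$ is an arbitrary choice of mine) in which the state after the phase oracle, followed by $H^{\otimes n}$, collapses exactly onto $|u\rangle$ after a single query:

```python
import numpy as np
from functools import reduce

n = 3
u = (1, 0, 1)  # hidden string; f(x) = u.x mod 2

bits = lambda x: [(x >> (n - 1 - i)) & 1 for i in range(n)]  # MSB first

# state after the phase oracle: sum_x (-1)^f(x) |x> / 2^(n/2)
state = np.array([(-1) ** (np.dot(u, bits(x)) % 2) for x in range(2 ** n)])
state = state / 2 ** (n / 2)

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
final = reduce(np.kron, [H] * n) @ state

print(int(np.argmax(np.abs(final))))  # 5, i.e. binary 101 = u
```

All the amplitude ends up on the single basis state $|u\rangle$, which is the quantum advantage: one oracle query instead of the $n$ queries needed classically.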

+",23,,13500,,10/14/2020 11:19,10/14/2020 11:19,,,,0,,,,CC BY-SA 4.0 +180,1,,,3/17/2018 11:19,,8,173,"

How much of a role does the type of hardware used to implement the building blocks (like qubits, the circuits, the communication channels, quantum RAM, etc.) have to play when designing the architecture for a full scale quantum computer?

+ +

My own thoughts on the matter: Architecture should not depend on the way the hardware is realised. Because if it did, then every time someone came up with a novel design for the hardware, it would require rethinking the architecture - not a bad idea if you are looking to improve your architecture, but that rethinking should be borne out of a desire to improve the computer in general and not simply to accommodate some new RAM implementation.

+",261,,26,,5/15/2019 14:29,5/15/2019 14:29,Dependency of architecture on hardware,,2,0,,,,CC BY-SA 3.0 +181,2,,162,3/17/2018 13:00,,6,,"

Nielsen and Chuang in their book ""Quantum Computation and Quantum Information"" have section (Chapter 9) on distance measures for quantum information.

+ +

Surprisingly they say in Section 9.3 "" How well does a quantum channel preserve information?"" that when comparing fidelity to the trace norm:

+ +
+

Using the properties of the trace distance established in the last section it is not difficult, for the most part, to give a parallel development based upon the trace distance. However, it turns out that the fidelity is an easier tool to calculate with, and for that reason we restrict ourselves to considerations based upon the fidelity.

+
+ +

I imagine this is in part why fidelity is used. It seems it's fairly useful as a static measure of distance.

+ +

There also seems to be relatively straightforward extensions of fidelity to ensembles of states

+ +

$$F =\sum_j p_jF(\rho_j,\mathcal{E}(\rho_j))^2,$$

+ +

$p_j$ the probability of preparing the system in states $\rho_j$, and $\mathcal{E}$ the particular noisy channel of interest, $0\leq F\leq 1$.

+ +

There's also an extension to entanglement fidelity, to measure how well a channel preserves entanglement. Given a state $Q$ assumed to be entangled to the external world in some way, and a purification of the state (fictitious system $R$), such that $RQ$ is pure. The state is subjected to dynamics in the channel $\mathcal{E}$. The primes indicate the state after the application of quantum operation. $\mathcal{I}_R$ is the identity map on system $R$.

+ +

$$F(\rho,\mathcal{E}) \equiv F(RQ,R'Q')^2 = \langle RQ| \left(\mathcal{I}_R \otimes \mathcal{E}\right)\left(|RQ\rangle\langle RQ|\right) |RQ\rangle$$

+ +

There's some formulas derived to simplify computations of fidelity and entanglement fidelity also given in the chapter.

+ +
+

One of the attractive properties of the entanglement fidelity is that there is a very simple formula which enables it to be calculated exactly.

+
+ +

$$F(\rho,\mathcal{E})=\sum_i\left|\operatorname{tr}(\rho E_i)\right|^2$$

+ +

where the 'operation elements' $E_i$ satisfy a completeness relation. Maybe someone else can comment on more practical implementations, but this is what I've gathered from reading.
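As a quick numerical illustration of that formula, $F(\rho,\mathcal{E})=\sum_i|\operatorname{tr}(\rho E_i)|^2$, here is a sketch using the Kraus operators of a single-qubit depolarizing channel (my choice of example channel; the operators below are the standard ones for depolarizing noise of strength $p$):

```python
import numpy as np

# Kraus operators of a single-qubit depolarizing channel of strength p
p = 0.1
I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0])
kraus = [np.sqrt(1 - 3 * p / 4) * I] + [np.sqrt(p / 4) * P for P in (X, Y, Z)]

rho = I / 2  # maximally mixed input
F = sum(abs(np.trace(rho @ E)) ** 2 for E in kraus)
print(F)  # equals 1 - 3p/4, here 0.925
```

Only the identity Kraus operator has a nonzero trace against $\rho$, so the entanglement fidelity comes out as $1-3p/4$, matching the probability that the channel acts trivially.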

+ +

Update 1: Re M.Stern

+ +

It's the same reference Nielsen and Chuang. They comment on that by saying ""You may wonder why the fidelity appearing on the right hand side of the definition is squared. There are two answers to this question, one simple, and one complex. The simple answer is that including this square term makes the ensemble fidelity more naturally related to the entanglement fidelity, as defined below. The more complex answer is that quantum information is, at present, in a state of infancy and it is not entirely clear what the ‘correct’ definitions for notions such as information preservation are! Nevertheless, as we shall see in Chapter 12, the ensemble average fidelity and the entanglement fidelity give rise to a rich theory of quantum information, which leads us to believe that these measures are on the right track, even though a complete theory of quantum information has not yet been developed.""

+ +

To answer your second question as to why not look at the fidelity of $\bar{\rho}$, there's a nice point mentioned in ""Distinguishability measures between ensembles of quantum states"" which I think is in PhysRevA but there's an arXiv version here.

+ +

The point they mention on page 4 is that if you have two ensembles $\rho$ and $\sigma$ which happen to have the same ensemble average density matrix, $\bar{\rho}=\bar{\sigma}$, then the fidelity $F(\bar{\rho},\bar{\sigma})$ can't distinguish between them.

+ +

Update 2: Re Mithrandir24601. So, one definition for gate fidelity is motivated by asking what the worst-case behaviour of a channel $\mathcal{E}$ is for a given input state.

+ +

$$F_{min}=\min_{|\psi\rangle}F(|\psi\rangle\langle\psi|,\mathcal{E}(|\psi\rangle\langle\psi|))\equiv\min_{|\psi\rangle}F(|\psi\rangle,\mathcal{E}(|\psi\rangle\langle\psi|))$$

+ +

Due to concavity in both arguments, you can restrict to pure states in this minimisation; the equivalence in the second part is just notation.

+ +

In defining how well a gate is implemented one can look as well at a worst case implementation of a unitary gate $U$ by a channel $\mathcal{E}$ by defining

+ +

$$F(U,\mathcal{E})=\min_{|\psi\rangle}F(U|\psi\rangle,\mathcal{E}(|\psi\rangle\langle\psi|))$$

+ +

In the formula you've given and the paper you've linked, they integrate over $\psi$, with an appropriate measure$^*$. This makes me think this should be regarded instead as an average fidelity $\bar{F}(U,\tilde{U})$, which you can imagine might be more useful in practical experiments, especially if you're repeating the experiment. It's probably unlikely to achieve the exact minimum.

+ +

There's an arXiv version of a paper here by Michael Nielsen where he talks about average gate fidelity.

+ +

The only remaining difference between the gate fidelity and average gate fidelity mentioned here and the formula you initially provided is the square of the trace, $[\operatorname{trace}]^2$, that you have. As in Update 1, some people prefer to use $F^2$ as the fidelity rather than $F$, as it can supposedly be connected more readily to entanglement fidelity. I do need to read a bit more about that to comment properly.

+ +

($\,^*$) Aside: I think calling it a 'Haar measure' might be misleading, I've seen in it in papers as well. As far as I know, the space of pure states is usually topologically $\Bbb{CP}^n$, for an $n$-dimensional hilbert space. Apparently the measure they use is inherited from the haar measure on $U(n)$ by a quotient or so I've read here: https://physics.stackexchange.com/a/98869/41998.

+",197,,197,,3/26/2018 11:34,3/26/2018 11:34,,,,5,,,,CC BY-SA 3.0 +182,2,,136,3/17/2018 13:22,,4,,"

I'll add a small bit complementing the other answers, just about the idea of measurement.

+ +

Measurement is usually taken as a postulate of quantum mechanics. There's usually some preceding postulates about hilbert spaces, but following that

+ +
    +
  • Every measurable physical quantity $A$ is described by an operator $\hat{A}$ acting on a Hilbert space $\mathcal{H}$. This operator is called an observable, and its eigenvalues are the possible outcomes of a measurement.
  • +
  • If a measurement is made of the observable $A$, in the state of the system $\psi$, and the outcome is $a_n$, then the state of the system immediately after measurement is $$\frac{\hat{P}_n|\psi\rangle}{\|\hat{P}_n|\psi\rangle\|},$$ where $\hat{P}_n$ is the projector onto the eigen-subspace of the eigenvalue $a_n$.
  • +
+ +

Normally the projection operators themselves should satisfy $\hat{P}^\dagger=\hat{P}$ and $\hat{P}^2=\hat{P}$, which means they themselves are observables by the above postulates, with eigenvalues $1$ or $0$. Supposing we take one of the $\hat{P}_n$ above, we can interpret the $1,0$ eigenvalues as a binary yes/no answer to whether the observable quantity $a_n$ is available as an outcome of a measurement of the state $|\psi\rangle$.
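As a minimal numerical illustration of these postulates (my own sketch, not part of the original answer), take the projector $\hat{P}=|0\rangle\langle 0|$ on a single qubit and apply the update rule above:

```python
import numpy as np

# Projector onto |0> for a single qubit.
P = np.array([[1, 0], [0, 0]], dtype=complex)

# Check the projector properties stated above: P = P^dagger and P^2 = P.
assert np.allclose(P, P.conj().T)
assert np.allclose(P @ P, P)

# State |psi> = sqrt(0.2)|0> + sqrt(0.8)|1>.
psi = np.array([np.sqrt(0.2), np.sqrt(0.8)], dtype=complex)

# Born rule: probability of the outcome associated with P.
prob = np.real(psi.conj() @ P @ psi)
print(prob)  # 0.2

# Post-measurement state: P|psi> / ||P|psi>||, which collapses to |0>.
post = P @ psi / np.linalg.norm(P @ psi)
print(post)  # [1, 0]
```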

+",197,,,,,3/17/2018 13:22,,,,0,,,,CC BY-SA 3.0 +1182,1,1203,,3/17/2018 14:03,,15,773,"

Since access to quantum devices capable of quantum computing is still extremely limited, it is of interest to simulate quantum computations on a classical computer. Representing the state of $n$ qubits as a vector takes $2^n$ elements, which greatly restricts the number of qubits one can consider in such simulations.

+ +

Can one use a representation1 that is more compact, in the sense that it uses less memory and/or computational power than the simple vector representation? How does it work?

+ +

While easy to implement, it is clear that the vector representation is wasteful for states that exhibit sparsity and/or redundancy in their vector representation. For a concrete example, consider the 3-qubit state $(1/\sqrt{3}, 1/\sqrt{3},0,0,0,-1/\sqrt{3}, 0,0)^T$. It has $2^3$ elements but they only assume $3$ possible values, with most of the elements being $0$. Of course, to be useful in simulating a quantum computation we would also need to consider how to represent gates and the action of gates on qubits, and including something about these would be welcome, but I would be happy to hear just about qubits too.

+ +

+1. Notice that I am asking about the representations, not software, libraries or articles that might utilize/present such representations. If you present and explain a representation you are very welcome to mention where it is already used though. +

+",144,,26,,12/23/2018 12:08,12/23/2018 12:08,How to compactly represent multiple qubit states?,,3,0,,,,CC BY-SA 3.0 +1183,2,,1182,3/17/2018 15:06,,9,,"

$\newcommand{\ket}[1]{\left|#1\right>}$I'm not sure using sparsity is a good approach here: even single-qubit gates could easily turn a sparse state into a dense one.

+ +

But you can use the stabilizer formalism if you only use Clifford gates. Here is a short recap (notation):
+The single-qubit Pauli group is $G_1=\langle X, Y, Z\rangle$, i.e. all possible products of Pauli matrices (including $\mathbb{I}$). The Pauli group of several qubits is the tensor product space of $G_1$, $G_n=G_1^{\otimes n}$. The stabilizer of a state $\ket{\psi}$ is the subgroup of the Pauli group of all operators that stabilize $\ket{\psi}$, which means $s \ket{\psi} = \ket{\psi}$. It is important to note that this only works for specific (but important) states. I will give an example below. The restriction to elements of the Pauli group is not necessary but common. The stabilizer is generated by operators $s_1$, $s_2$, ... $s_n$. The stabilizer uniquely defines the state and is an efficient description: instead of $2^n-1$ complex numbers we can use $4n^2$ bits ($G_1$ has 16 elements). When we apply a gate $U$, the stabilizer generators update according to $s_i \to U s_i U^\dagger$. A gate that maps Pauli operators to Pauli operators is called a Clifford gate. So these are the gates that will not ""mess up"" our description of the state.

+ +

Graph states are an important example for the stabilizer formalism described above. Consider an (undirected) mathematical graph, which consists of $n$ vertices $V$ and edges $E\subset V\times V$. Each vertex corresponds to one qubit. Let us denote the graph by $G=(V,E)$. A graph state is produced from the state $\ket{+}^{\otimes n}$, where $\ket{+}=\frac{1}{\sqrt{2}} (\ket{0}+\ket{1})$ by applying a controlled-phase gate $C_Z$ for each pair of vertices which are connected. The stabilizer is generated by $$s_v= X_v \prod_{\substack{w\in V\\ (v,w)\in E}} Z_w.$$

+ +

For example start with the two-qubit state $\ket{\phi}=\ket{+}\otimes \ket{+}$. The stabilizer is $\langle X\otimes \mathbb{I}, \mathbb{I}\otimes X \rangle$. Now apply the $C_Z$ gate to obtain $\langle X \otimes Z, Z \otimes X \rangle$. (The state is $\ket{\phi'}=\frac{1}{2}(1,1,1,-1)^T$, which is local unitary equivalent to a Bell state)
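The two-qubit example is easy to verify numerically. The following sketch (mine, using plain NumPy) checks that $X \otimes Z$ and $Z \otimes X$ stabilize $\ket{\phi'}=\frac{1}{2}(1,1,1,-1)^T$, while the pre-$C_Z$ generator $X\otimes\mathbb{I}$ no longer does:

```python
import numpy as np

I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

# |phi'> = CZ (|+> ⊗ |+>) = (1, 1, 1, -1)/2
phi = np.array([1, 1, 1, -1]) / 2

# s|phi> = |phi> for both generators of the stabilizer <X⊗Z, Z⊗X>.
assert np.allclose(np.kron(X, Z) @ phi, phi)
assert np.allclose(np.kron(Z, X) @ phi, phi)

# The pre-CZ generator X⊗I no longer stabilizes the state.
assert not np.allclose(np.kron(X, I) @ phi, phi)
print("stabilizer checks passed")
```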

+ +

The stabilizer formalism also plays an important role in quantum error correction.

+",104,,104,,3/19/2018 13:34,3/19/2018 13:34,,,,0,,,,CC BY-SA 3.0 +1184,2,,180,3/17/2018 18:31,,5,,"

It's not an ideal world and, in short, the architecture of quantum computers depends A LOT on the ""hardware"" used. There are currently several ""models"" for the physical implementation of quantum computers, and all of them require considerably distinct architectures. For example, superconducting quantum computers have to be kept at close to absolute zero temperature. In trapped ion quantum computers, lasers are applied to induce coupling between the qubit states. For optical quantum computers you need linear optical elements (including beam splitters, phase shifters, and mirrors) to process quantum information, and photon detectors and quantum memories to detect and store quantum information.

+ +

Here's a list of the common architectures, as stated on Wikipedia:

+ +
+
    +
  • Superconducting quantum computing (qubit implemented by the state of small superconducting circuits (Josephson junctions))
  • +
  • Trapped ion quantum computer (qubit implemented by the internal state of trapped ions)
  • +
  • Optical lattices (qubit implemented by internal states of neutral atoms trapped in an optical lattice)
  • +
  • Quantum dot computer, spin-based (e.g. the Loss-DiVincenzo quantum computer) (qubit given by the spin states of trapped electrons)
  • +
  • Quantum dot computer, spatial-based (qubit given by electron position in double quantum dot)
  • +
  • Nuclear magnetic resonance on molecules in solution (liquid-state NMR) (qubit provided by nuclear spins within the dissolved molecule)
  • +
  • Solid-state NMR Kane quantum computers (qubit realized by the nuclear spin state of phosphorus donors in silicon)
  • +
  • Electrons-on-helium quantum computers (qubit is the electron spin)
  • +
  • Cavity quantum electrodynamics (CQED) (qubit provided by the internal state of trapped atoms coupled to high-finesse cavities)
  • +
  • Molecular magnet (qubit given by spin states)
  • +
  • Fullerene-based ESR quantum computer (qubit based on the electronic spin of atoms or molecules encased in fullerenes)
  • +
  • Linear optical quantum computer (qubits realized by processing states of different modes of light through linear elements e.g. + mirrors, beam splitters and phase shifters)
  • +
  • Diamond-based quantum computer (qubit realized by electronic or nuclear spin of nitrogen-vacancy centers in diamond)
  • +
  • Bose–Einstein condensate-based quantum computer
  • +
  • Transistor-based quantum computer – string quantum computers with entrainment of positive holes using an electrostatic trap
  • +
  • Rare-earth-metal-ion-doped inorganic crystal based quantum computers(qubit realized by the internal electronic state of dopants + in optical fibers)
  • +
  • Metallic-like carbon nanospheres based quantum computers.
  • +
+
+",26,,26,,3/17/2018 19:19,3/17/2018 19:19,,,,0,,,,CC BY-SA 3.0 +1185,1,1186,,3/18/2018 2:05,,27,1555,"

Most reversible quantum algorithms use standard gates like the Toffoli gate (CCNOT) or the Fredkin gate (CSWAP). Since some operations require a constant $\left|0\right>$ as input and the number of inputs and outputs is equal, garbage qubits (or junk qubits) appear in the course of the computation.

+ +

So, a circuit that in principle computes $\left|x\right>\mapsto\left|f(x)\right>$ actually becomes $\left|x\right>\left|0\right>\mapsto\left|f(x)\right>\left|g\right>$,
+where $\left|g\right>$ stands for the garbage qubit(s).

+ +

Circuits that preserve the original value end up with $\left|x\right>\left|0\right>\left|0\right>\mapsto\left|x\right>\left|f(x)\right>\left|g\right>$

+ +

I understand that garbage qubits are inevitable if we want the circuit to stay reversible, but many sources${}^1$ claim that it is important to eliminate them. Why is it so?

+ +
+ +

${}^1$ Due to requests for sources, see for example this arXiv paper, pg 8, which says

+ +
+

However, each of these simple operations contains a number of additional, auxiliary qubits, which serve to store the intermediate results, but are not relevant at the end. In order not to waste any unneccesary [sic] space, it is therefore important to reset these qubits to 0 so that we are able to re–use them

+
+ +

or this arXiv paper which says

+ +
+

The removal of garbage qubits and ancilla qubits are essential in designing an efficient quantum circuit.

+
+ +

or the many other sources - a google search produces many hits.

+",31,,26,,12/23/2018 14:20,01-01-2020 19:21,Why is it important to eliminate the garbage qubits?,,2,1,,,,CC BY-SA 4.0 +1186,2,,1185,3/18/2018 5:22,,25,,"

Quantum interference is the heart and soul of quantum computation. Whenever you have junk qubits they're going to prevent interference. This is actually a very simple but very important point. Let's say we have a function $f:\{0,1\}\to\{0,1\}$ which maps a single bit to a single bit. Say $f$ is a very simple function, like $f(x)=x$. Let's say we had a circuit $C_f$ which inputs $x$ and outputs $f(x)$. Now, of course, this was a reversible circuit, and could be implemented using a unitary transformation $|x\rangle\to|x\rangle$. Now, we could feed in $\frac{1}{\sqrt{2}}|0\rangle + \frac{1}{\sqrt{2}}|1\rangle$ and the output would also be $\frac{1}{\sqrt{2}}|0\rangle + \frac{1}{\sqrt{2}}|1\rangle$. Let us now apply the Hadamard gate and measure what we get. If you apply the Hadamard transform to this state $\frac{1}{\sqrt{2}}|0\rangle + \frac{1}{\sqrt{2}}|1\rangle$, you get the $|0\rangle$ state, and you see $0$ with probability $1$. In this case there was no junk created in the intermediate steps, when converting the classical circuit to a quantum circuit.

+ +

But, let's say we created some junk in an intermediate step when using a circuit like this one:

+ +

.

+ +

For this circuit, if we start off in the state $|x\rangle|0\rangle = \left(\frac{1}{\sqrt{2}}|0\rangle + \frac{1}{\sqrt{2}}|1\rangle \right)|0\rangle$, after the first step we get $\frac{1}{\sqrt{2}}|00\rangle + \frac{1}{\sqrt{2}}|11\rangle$. If we apply the Hadamard transform to the first qubit, we end up with:

+ +

$$\frac{1}{2}|00\rangle + \frac{1}{2}|01\rangle + \frac{1}{2}|10\rangle - \frac{1}{2}|11\rangle$$

+ +

If we make a measurement on the first qubit we get $0$ with probability $\frac{1}{2}$, unlike in the previous case where we could see $0$ with probability $1$! The only difference between the two cases was the creation of a junk bit in an intermediate step, which was not gotten rid of, thus leading to a difference in the final result of the computation (since the junk qubit got entangled with the other qubit). We will see a different interference pattern than in the previous case when the Hadamard transform is applied. This is exactly why we don't like to keep junk around when we are doing quantum computation: it prevents interference.
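Both cases are easy to reproduce numerically; here is a small NumPy sketch of my own that applies the Hadamard either to $|+\rangle$ alone or to the first qubit of $\frac{1}{\sqrt{2}}(|00\rangle+|11\rangle)$ and compares the probability of seeing $0$:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I = np.eye(2)

# Case 1: no junk. State is |+>, apply H, measure.
plus = np.array([1, 1]) / np.sqrt(2)
out1 = H @ plus
p0_clean = abs(out1[0])**2
print(p0_clean)  # 1.0  (perfect interference)

# Case 2: a junk qubit got entangled: (|00> + |11>)/sqrt(2).
state = np.array([1, 0, 0, 1]) / np.sqrt(2)
out2 = np.kron(H, I) @ state
# Pr(first qubit = 0) = |amp(00)|^2 + |amp(01)|^2
p0_junk = abs(out2[0])**2 + abs(out2[1])**2
print(p0_junk)   # 0.5  (interference destroyed by the junk qubit)
```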

+ +

Source: Professor Umesh Vazirani's lecture on EdX.

+",26,,26,,01-01-2020 19:21,01-01-2020 19:21,,,,2,,,,CC BY-SA 4.0 +1187,2,,171,3/18/2018 9:22,,9,,"
+
    +
  • Is there proof that the D-wave (one) is a quantum computer and is effective?
  • +
+
+ +

D-Wave Video - Offers an explanation of: ""How do we know ..."": https://youtu.be/kq9VqR0ZGNc

+ +

One analogy you might make with the D-Wave One, an adiabatic ('analog') computer, is to the ""south-pointing chariot"" or the ""Antikythera mechanism"".

+ +

A lengthy explanation is offered in this Ars Technica (Wired) article: ""Going digital may make analog quantum computer scaleable"":

+ +
    +
  • ""... They pretty much all fall into two categories. In most labs, researchers work on what could be called a digital quantum computer, which has the quantum equivalent of logic gates, and qubits are based on well-defined and well-understood quantum states. The other camp works on analog devices called adiabatic quantum computers. In these devices, qubits do not perform discrete operations, but continuously evolve from some easily understood initial state to a final state that provides the answer to some problem"" (end quote), or quantum annealing.

  • +
  • ""Adiabatic quantum computers are inherently analog devices: each qubit is driven by how strongly it is coupled to every other qubit. Computation is performed by continuously adjusting these couplings between some starting and final value. Tiny errors in the coupling—due to environmental effects, for instance—tend to build up and throw off the final value."".

  • +
  • ""Digital quantum computing, which uses logic operations and quantum gates, offers the possibility of error correction. By encoding information in multiple qubits, you can detect and correct errors. Unfortunately, digital qubits are delicate things compared to those used in adiabatic quantum computers, and the ability to ..."". (Go read the article if you don't want a condensed version).

  • +
  • ""What about a hybrid approach? That's the question asked by a international group of researchers in a recently-published paper in Nature. They’ve tested a system where the computation is performed by qubits that were operating as an adiabatic quantum computer, but with connections between the adiabatic qubits is controlled via a digital network of qubits. This allows the benefits of scale and flexibility that you get from adiabatic quantum computing, while also making the connections less susceptible to noise."".

  • +
+ +

So, yes. It is a computer and uses quantum methods.

+ +

Adiabatic quantum computation (AQC) is a form of quantum computing which relies on the adiabatic theorem to do calculations, and is closely related to, and may be regarded as a subclass of, quantum annealing.

+ +

Another analogy, probably as unfair as the last, is that AQC is a one-trick pony. It's limited in what it can do, but it does it quickly and well.

+ +
+
    +
  • So, I ask:

    + +

    I know that D-wave claims to use some sort of quantum annealing. Is there (dis)proof of the D-wave actually using quantum annealing (with effect) in its computations?

    + +

    Has it been conclusively shown that the D-wave is (in)effective? If not, is there a clear overview of the work to attempt this?

  • +
+
+ +

There is proof that it is effective when used correctly for doing what it was designed to do:

+ +

""Blockchain platform with proof-of-work based on analog Hamiltonian optimisers"" by Kirill P. Kalinin, Natalia G. Berloff, 27 Feb 2018.

+ +

University of Cambridge, ""Polariton Graph Simulator (Optimizer): an analog Hamiltonian simulation"", Natalia Berloff.

+ +

""Performance of quantum annealing hardware"" by Damian S. Steiger; Bettina Heim, 22 October 2015.

+ +

There exist important backers and some skeptics of D-Wave.

+ +
+ +

Address concerns expressed in comments - Update: 19 March 2018:

+ +

Here is an article from Nature.com entitled: ""Triode for Magnetic Flux Quanta"" which explains the use of Abrikosov vortices to hold quantized information bits, further clarified (or not) in the article: ""Single Abrikosov vortices as quantized information bits"".

+ +

An oversimplified analogy is that quantum qubits are (not at all) like magnetic core memory; the difference is:

+ +
    +
A single magnetic core holds a binary digit, a bit (like a fraction of a letter in a book; you would use 8 bits to represent not just a letter but all of the ASCII alphabet: letters, digits and control codes). A bit would have to be in one state or the other.

  • +
A qubit, by utilizing quantum mechanics, can be in a superposition of both states at the same time, a property that is fundamental to quantum computing. A qubit can be in one state, the other, or both; think of it as trinary on steroids, because qubits can perform two calculations simultaneously (and that's why they are both comparable and incomparable, a superposition of both states; a new way of thinking).

  • +
+ +

Look at this image of a magnetic memory and a quantum processor - quite different from an x86 processor:

+ +

+ +

A simple explanation of the relevance and degree of proof is offered in this video by D-Wave called: ""D-Wave Lab Tour Part 3 (of 3) - The D-Wave Processor"".

+ +

https://www.youtube.com/watch?v=AGByZoYUlU0

+",278,,278,,04-03-2018 16:02,04-03-2018 16:02,,,,0,,,,CC BY-SA 3.0 +1192,1,1199,,3/18/2018 21:53,,15,876,"

Daniel Sank mentioned in a comment, responding to (my) opinion that the constant speed-up of $10^8$ on a problem admitting a polynomial time algorithm is meager, that

+ +
+

Complexity theory is way too obsessed with infinite size scaling limits. What matters in real life is how fast you get the answer to your problem.

+
+ +

In Computer Science, it is common to ignore constants in algorithms, and all in all, this has turned out to work rather well. (I mean, there are good and practical algorithms. I hope you will grant me that (theoretical) algorithms researchers have had a rather large hand in this!)

+ +

But, I do understand that this is a slightly different situation as now we are:

+ +
    +
  1. Not comparing two algorithms running on the same computer, but two (slightly) different algorithms on two very different computers.
  2. +
We are now working with quantum computers, for which traditional performance measurements may perhaps be insufficient.
  4. +
+ +

In particular, the methods of algorithm analysis are merely methods. I think radically new computing methods call for a critical review of our current performance evaluation methods!

+ +

So, my question is:

+ +

When comparing the performance of algorithms on a quantum computer versus algorithms on a classical computer, is the practice of 'ignoring' constants a good practice?

+",253,,26,,3/29/2018 18:12,1/19/2019 21:57,Is the common Computer Science usage of 'ignoring constants' useful when comparing classical computing with quantum computing?,,4,3,,,,CC BY-SA 3.0 +1195,1,1211,,3/19/2018 2:39,,13,741,"

Presently, how much information can a quantum computer store, in how many qubits? What restrictions are there and how does it vary across realizations (efficiency of data storage, ease of reading and writing, etc)?

+",92,,26,,12/13/2018 19:37,12/13/2018 19:37,State of the art in quantum memory,,1,0,,,,CC BY-SA 3.0 +1196,1,1242,,3/19/2018 5:50,,19,447,"

I'm interested in the model of quantum computation by magic state injection, that is, where we have access to the Clifford gates, a cheap supply of ancilla qubits in the computational basis, and a few expensive-to-distill magic states (usually those that implement the $S$, $T$ gates). I've found that the best scaling is logarithmic in the accuracy $\varepsilon$; specifically, $O(\log^{1.6}(1/\varepsilon))$ is what a 2012 paper offers to get the accuracy we need in the $S,T$ states.

+ +

Is this enough to calculate most of the problems we're interested in? Are there any problems that specifically resist QCSI (Quantum Computation by State Injection) because of high overhead, but are more solvable in other models of computation?

+",236,,26,,12/14/2018 6:04,12/14/2018 6:04,How does magic state distillation overhead scale compare to quantum advantages?,,1,0,,,,CC BY-SA 3.0 +1199,2,,1192,3/19/2018 11:35,,10,,"

The common Computer Science usage of 'ignoring constants' is only useful where the differences in performance of various kinds of hardware architecture or software can be ignored with a little bit of massaging. But even in classical computation, it is important to be aware of the impact of architecture (caching behaviour, hard disk usage) if you want to solve difficult problems, or large problems.

+ +

The practice of ignoring constants isn't a practice which is motivated (in the sense of being continually affirmed) from an implementation point of view. It is driven mostly by an interest in an approach to the study of algorithms which is well-behaved under composition and admits simple characterisations, in a manner close to pure mathematics. The speed-up theorems for Turing Machines meant that any sensible definition couldn't attempt to pin down the complexity of problems too precisely in order to arrive at a sensible theory; and besides, in the struggle to find good algorithms for difficult problems, the constant factors weren't the mathematically interesting part...

+ +

This more abstract approach to the study of algorithms was and is largely fruitful. But now we are confronted with a situation where we have two models of computation, where

+ +
    +
  • One is in an advanced state of technological maturity (classical computation); and
  • +
  • One is in a very immature state, but is attempting to realise a theoretical model which can lead to significant asymptotic improvements (quantum computation).
  • +
+ +

In this case, we can ask whether it even makes sense to consider the asymptotic benefit, with or without careful accounting of the constant factors. Because of the extra effort which may be required to perform scalable quantum computing, not only scalar factors but polynomial ""speedups"" in theoretical performance may be washed out once all of the overhead in realising a quantum algorithm is taken into account.

+ +

In these early days, there may also be significant differences in performance between different approaches to quantum architecture. +This could make the choice of architecture as important to how well an algorithm performs as asymptotic analysis (if not more important), just as it would matter a lot to you whether you do your conventional computation on a von Neumann machine or a highly distributed network with significant latencies.

+ +

The actually important thing for practical computation is — and has always been — not just algorithms, but implementations of algorithms: an algorithm realised in a certain way, on a certain architecture. The common practice of asymptotic analysis which ignores constant factors allows us to pay attention to the systematic, mathematical reasons for differences in the performance of algorithms, and is practically motivated on those occasions when the architectural differences are not so large as to dominate the practical performance.

+ +

With respect to quantum technologies, we are not in that happy situation where we can safely gloss over constant factors in any practical context. But perhaps one day we will be able to do so. This is the long game of quantum information technologies — until now, nearly the only game that academic computer scientists have ever played, as far as quantum information technology is concerned. Anticipating that day when quantum technology finds its footing, it will be good for us to continue pursuing asymptotic analysis, as one line of investigation in the performance of quantum algorithms.

+",124,,124,,3/27/2018 17:10,3/27/2018 17:10,,,,2,,,,CC BY-SA 3.0 +1200,2,,1192,3/19/2018 11:53,,5,,"

First, there are no scaling analyses of quantum devices like we have for algorithms on CMOS hardware. So talking about $O\left(f\left[N\right]\right)$ is flawed. Second, given the lack of theoretical backing, we have to do scaling analyses with experiments. However, the currently accessible problem sizes ($N$) are rather limited. As such, we do not even know if we are in the asymptotic limit (""the true behavior of the device"").

+ +

Therefore, neglecting prefactors (i.e. constant factors) might be premature. Purists will argue that only an improvement in the scaling (""change in slope"") is true speedup. I agree; however, if your quantum device scales as well as the best classical algorithms yet has a $10^{10}$ prefactor advantage, I would call it useful for applications.

+ +

Unfortunately, the current state of the art is D-Wave scaling as good as the best algorithms, but with only a $\approx300$ factor advantage over code run on a single core. Moreover, this is a synthetic benchmark problem and not an application of interest. Far from disruptive... See https://arxiv.org/abs/1711.01368 for more information.

+",,user306,23,,3/19/2018 14:54,3/19/2018 14:54,,,,0,,,,CC BY-SA 3.0 +1202,2,,180,3/19/2018 12:17,,4,,"

At the current state of the art, quite a bit. +As pointed out by the other answer, different architectures implement qubits in different physical substrates, which results in radically different techniques to generate, evolve, interact and measure the qubits. +Moreover, different operations are easier in some architectures than in others.

+ +

To get something more similar to how we usually program classical computers one needs some kind of compilation pipeline, mapping a given computation, expressed in an abstract high-level language, down to the specific hardware details of a given architecture. +This is still a work in progress, but there are people working in this direction. A relevant work that comes to mind is 1604.01401. Here is the pipeline proposed in this paper:

+ +

+ +

In theory, having software suites implementing such a pipeline would allow one to write abstract code and have it automatically compiled down to run on, say, superconducting chips as well as optical or ion-trap quantum computers.

+ +

In practice, there are still so many things to work out (first of all how to actually make scalable quantum computers) that it is hard to say how such a scheme will work.

+",55,,,,,3/19/2018 12:17,,,,0,,,,CC BY-SA 3.0 +1203,2,,1182,3/19/2018 13:08,,8,,"

There are many possible ways to compactly represent a state, the usefulness of which strongly depends on the context.

+ +

First of all, it is important to notice that it is not possible to have a procedure that can map any state into a more efficient representation of the same state (for the same reason why it is obviously not possible to faithfully compress any 2-bit string as a 1-bit string, with a mapping that does not depend on the string).

+ +

However, as soon as you start making some assumptions, you can find more efficient ways to represent a state in a given context. +There is a multitude of possible ways to do this, so I'll just mention a few that come to mind:

+ +
    +
  1. Already the standard vector representation of a ket state can be thought of as a ""compressed representation"", that works under the assumption of the state being pure. Indeed, you need $4^n-1$ real degrees of freedom to represent an arbitrary (generally mixed) $n$-qubit state, but only $2^{n+1}-2$ to represent a pure one.

  2. +
  3. If you assume a state $\rho$ to be almost pure, that is, such that $\rho$ is sparse in some representation (equivalently, $\rho$ is low rank), then again the state can be efficiently characterised. For a $d$-dimensional system (so $d=2^n$ for an $n$-qubit system), instead of using ~$d^2$ parameters, you can have a faithful representation using only $\mathcal O(r d \log^2 d)$, where $r$ is the sparsity of the state (see 0909.3304 and the works that came after that).

  4. +
  5. If you are only interested in a limited number $|S|$ of expectation values, you can find a compressed representation of an $n$-qubit state of size $\mathcal O(n\log(n)\log(|S|))$. Note that this amounts to an exponential reduction. This was shown (I think) in quant-ph/0402095, but the introduction given in 1801.05721 may be more accessible for a physicist (as well as presenting improvements in the optimisation method). +See references in this last paper for a number of similar results.

  6. +
  7. If you know that the entanglement of the state is limited (in a sense that can be precisely defined), then again efficient representations can be found, in terms of tensor networks (an introduction is found e.g. in 1708.00006). More recently, it was also shown that ground states of some notable Hamiltonians can be represented using machine-learning-inspired ansatze (1606.02318 and many following works). This was also recently shown/claimed to be equivalent to a specific Tensor Network representation however (1710.04045), so I'm not sure whether it should go in a category of its own.

  8. +
+ +

Note that in all of the above you can more efficiently represent a given state, but to then simulate the evolution of the system you generally need to go back to the original inefficient representation. +If you want to efficiently represent the dynamics of a state through a given evolution, you again need assumptions on the evolution for this to be possible. +The only result that comes to mind in this regard is the classical (as in established, not as in ""non quantum"") Gottesman-Knill theorem, which allows one to efficiently simulate any Clifford quantum circuit.
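As a toy illustration of point 2 above (my own sketch, not from the cited papers): a rank-$r$ density matrix on a $d$-dimensional space can be stored as $r$ eigenvalues plus $r$ eigenvectors, roughly $r(2d+1)$ real numbers instead of $d^2$ complex entries, and recovered exactly from its eigendecomposition:

```python
import numpy as np

rng = np.random.default_rng(1)
d, r = 16, 2   # a 4-qubit system and a rank-2 (almost pure) state

# Build a random rank-r density matrix rho = sum_k p_k |v_k><v_k|.
raw = rng.normal(size=(d, r)) + 1j * rng.normal(size=(d, r))
V_cols, _ = np.linalg.qr(raw)            # d x r matrix with orthonormal columns
ps = np.array([0.9, 0.1])                # eigenvalues summing to 1
rho = (V_cols * ps) @ V_cols.conj().T

# The compressed form needs r eigenvalues + r*d complex amplitudes,
# instead of the d*d complex entries of the full density matrix.
print(r + 2 * r * d, "real parameters instead of", d * d, "complex entries")

# Round trip: recover a rank-r description from rho itself and rebuild it.
w, U = np.linalg.eigh(rho)
idx = np.argsort(w)[::-1][:r]            # keep the r largest eigenvalues
rho_rec = (U[:, idx] * w[idx]) @ U[:, idx].conj().T
assert np.allclose(rho, rho_rec)
assert np.isclose(np.trace(rho).real, 1.0)
```

The compressed-sensing results cited above do something much stronger (they recover such a state from few measured expectation values), but the parameter counting is the same idea.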

+",55,,55,,04-05-2018 09:15,04-05-2018 09:15,,,,0,,,,CC BY-SA 3.0 +1206,1,,,3/19/2018 16:31,,28,3099,"

To represent a quantum computer's state, all the qubits contribute to one state vector (this is one of the major differences between quantum and classical computing as I understand it). My understanding is that it's possible to measure only one qubit out of a system of multiple qubits. How does measuring that one qubit affect the whole system (specifically, how does it affect the state vector)?

+",91,,,,,09-01-2018 03:38,How does measurement of one qubit affect the others?,,3,0,,,,CC BY-SA 3.0 +1207,2,,1206,3/19/2018 17:02,,17,,"

There are a lot of different ways of looking at qubits, and the state vector formalism is just one of them. In a general linear-algebraic sense a measurement is projection onto a basis. Here I will provide insight with an example from the Pauli observable point of view, that is the usual circuit model of QC.

+ +

Firstly, it's of interest which basis the state vector is being provided in: every measurement operator comes with a set of eigenstates, and whatever measurements you look at (e.g. $X,Y,Z, XX, XZ$, etc.) determine the basis that might be best for you to write the state vector in. The easiest way to answer your question is if you know which basis is of interest to you and, more importantly, whether it commutes with the measurement you just made.

+ +

So for simplicity's sake, let's say you start with two coupled qubits in an arbitrary state written in the $Z$-basis for both qubits:

+ +

$$| \psi \rangle = a | 0_{Z} \rangle \otimes | 0_{Z} \rangle +b | 0_{Z} \rangle \otimes | 1_{Z} \rangle + c | 1_{Z} \rangle \otimes | 0_{Z} \rangle + d | 1_{Z} \rangle \otimes | 1_{Z} \rangle $$

+ +

The simplest possible measurements you could make would be $Z_{1}$, that is the $Z$ operator on the first qubit, followed by $Z_{2}$, the $Z$ operator on the second qubit. What does measurement do? It projects the state into one of the eigenstates. You can think of this as eliminating all possible answers that are inconsistent with the one we just measured. For instance, say we measure $Z_{1}$ and obtain the outcome $1$, then the resulting state we would have would be:

+ +

$$| \psi \rangle = \frac{1}{\sqrt{|c|^{2} +|d|^{2}}} \left(c | 1_{Z} \rangle \otimes | 0_{Z} \rangle + d | 1_{Z} \rangle \otimes | 1_{Z} \rangle \right) $$

+ +

Note that the coefficient out front is just for renormalization. So our probability of measuring $Z_{2}=0$ is $\frac{|c|^{2}}{|c|^{2} +|d|^{2}}$. Note that this is different from the probability we had in the initial state, which was $|a|^{2}+|c|^{2}$.
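As a concrete check, this projection and renormalization can be carried out numerically. The following is a minimal NumPy sketch I am adding for illustration (the amplitudes are arbitrary sample values, stored in the order $|00\rangle, |01\rangle, |10\rangle, |11\rangle$):

```python
import numpy as np

# |psi> = a|00> + b|01> + c|10> + d|11>, with arbitrary sample amplitudes
psi = np.array([1, 1, 1, 2], dtype=complex)
psi /= np.linalg.norm(psi)          # normalize so the probabilities sum to 1
a, b, c, d = psi

# Initially, P(Z_2 = 0) = |a|^2 + |c|^2
p_initial = abs(a) ** 2 + abs(c) ** 2

# Measure Z_1 and suppose the outcome is 1: discard the amplitudes
# inconsistent with |1> on qubit 1, then renormalize what remains.
projected = np.array([0, 0, c, d])
psi_post = projected / np.linalg.norm(projected)

# Now P(Z_2 = 0) = |c|^2 / (|c|^2 + |d|^2), which differs from p_initial
p_post = abs(psi_post[2]) ** 2
print(p_initial, p_post)
```

For these sample amplitudes the initial probability is $2/7$ while the post-measurement probability is $1/5$, showing how the $Z_{1}$ result updates the statistics of the second qubit.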

+ +

Suppose the next measurement you make does not commute with the previous one, however. This is trickier because you have to implement a change of basis on the state vector in order to understand the probabilities. With Pauli measurements, though, it tends to be easy since the eigenbases relate in a nice way, that is:

+ +

$$| 0_{Z} \rangle = \frac{1}{\sqrt{2}} (|0_{X}\rangle + |1_{X} \rangle )$$

+ +

$$| 1_{Z} \rangle = \frac{1}{\sqrt{2}} (|0_{X}\rangle - |1_{X} \rangle )$$

+ +

A good way to check your understanding: What is the probability of measuring $X= +1$ after the $Z_{1}=1$ measurement above? What is the probability if we have not made the $Z_{1}$ measurement? Then a more complicated question is to look at product operators that act on both qubits at once, for instance, how does a measurement of $Z_{1}Z_{2}=+1$ affect the initial state? Here $Z_{1}Z_{2}$ measures the product of the two operators.

+",236,,104,,3/20/2018 15:57,3/20/2018 15:57,,,,1,,,,CC BY-SA 3.0 +1208,1,,,3/19/2018 18:06,,29,1615,"

A lot of people believe that quantum computers can prove to be a pivotal step in creating new machine learning and AI algorithms that can give a huge boost to the field. There have even been studies suggesting that our brain may be a quantum computer, but so far there is no consensus among researchers.

+ +

Given that I am completely new to the field, I wanted to know if there has been some research done in the application of quantum computers in AI that, theoretically speaking, may perform better at some task or converge faster than modern deep learning algorithms.

+",320,,55,,09-01-2020 22:26,09-01-2020 22:26,Is there any potential application of quantum computers in machine learning or AI?,,3,1,,,,CC BY-SA 3.0 +1209,2,,1208,3/19/2018 18:34,,6,,"

Whether our brains are quantum mechanical is a hotly debated topic, with arguments both for and against. Fisher at UCSB has some speculative thinking about how brains might still use quantum effects even if they aren't quantum mechanical in nature. While there's no direct experimental evidence, here are two references you might want to read:

+ + + +

Now, on the subject of using quantum computing and machine learning, Rigetti Computing has demonstrated a clustering algorithm using their prototype quantum chips (19 qubits). They published their findings in a white paper on arXiv.org here:

+ + + +

So there's clearly an opportunity to advance machine learning, and eventually, AI using quantum computing imho.

+",274,,15,,3/20/2018 2:11,3/20/2018 2:11,,,,0,,,,CC BY-SA 3.0 +1210,2,,75,3/19/2018 20:46,,7,,"

I believe that the Centre for Engineered Quantum Systems, School of Physics, The University of Sydney and the Center for Theoretical Physics, Massachusetts Institute of Technology used the tensor network decoder of Bravyi, Suchara and Vargo (BSV) to achieve the highest error-correction threshold to date.

+ +

In their whitepaper from last December, ""Ultrahigh Error Threshold for Surface Codes with Biased Noise"", the use of a tensor network decoder resulted in a threshold for pure $Z$ noise of $p_c=43.7\left(1\right)\%$, which is a fourfold increase over the previous optimal surface code threshold for pure $Z$ noise of $10.9\%$. The $10.9\%$ number comes from S. Bravyi, M. Suchara, and A. Vargo, “Efficient algorithms for maximum likelihood decoding in the surface code”.

+",274,,15,,3/22/2018 0:07,3/22/2018 0:07,,,,2,,,,CC BY-SA 3.0 +1211,2,,1195,3/19/2018 22:10,,6,,"

Unfortunately the state of the technology regarding memories is not as developed as you seem to expect. When we talk about a memory, we think of a device that can store information for an infinite amount of time (for all practical purposes). So before we can think about the size of the memory in a quantum computer, we should look at whether a single quantum memory has been built. There is a lot of progress in this direction, but to my knowledge the best current ""memory"" achieved a coherence time of about 6 hours (which is amazing, but still not what we are used to from classical computers). Although the fidelity of the retrieved state is in the high nineties, the success probability for storage and readout is very low.

+ +

There is also work on using error correction codes to build a memory, but those approaches do not give better results so far.

+",104,,,,,3/19/2018 22:10,,,,0,,,,CC BY-SA 3.0 +1212,2,,1206,3/19/2018 22:41,,9,,"

Suppose that, prior to measurement, your $n$-qubit system is in some state $\lvert \psi \rangle \in \mathcal H_2^{\otimes n}$, where $\mathcal H_2 \cong \mathbb C^2$ is the Hilbert space of a single qubit. Write +$$ \lvert \psi \rangle = \sum_{x \in \{0,1\}^n} u_x \lvert x \rangle $$ +for some coefficients $u_x \in \mathbb C$ such that $\sum_x \lvert u_x \rvert^2 = 1$.

+ +
    +
  • If you are measuring the first qubit in the standard basis, define +$$\begin{aligned} \lvert \varphi_0 \rangle &= \!\!\!\!\!\sum_{x' \in \{0,1\}^{n-1}}\!\!\!\!\!\! u_{0x'} \,\lvert0\rangle \lvert x' \rangle, \\ \lvert \varphi_1 \rangle &= \!\!\!\!\!\sum_{x' \in \{0,1\}^{n-1}}\!\!\!\!\!\! u_{1x'} \,\lvert1\rangle \lvert x' \rangle,\end{aligned}$$ and let $\lvert \psi_0 \rangle = \lvert \varphi_0 \rangle \big/\! \sqrt{\langle \varphi_0 \vert \varphi_0 \rangle}\,$ and $\,\lvert \psi_1 \rangle = \lvert \varphi_1 \rangle \big/\! \sqrt{\langle \varphi_1 \vert \varphi_1 \rangle}\,$. It is not too difficult to show that, if you measure the first qubit and obtain the state $\lvert 0 \rangle$, the state of the entire system ""collapses"" to $\lvert \psi_0 \rangle$, and if you obtain $\lvert 1 \rangle$ what you obtain is $\lvert \psi_1 \rangle$.

    + +

    This is broadly analogous to the idea of conditional probability distributions: you might think of $\lvert \psi_0 \rangle$ as the state of the system conditioned on the first qubit being $\lvert 0 \rangle$, and $\lvert \psi_1 \rangle$ as the state of the system conditioned on the first qubit being $\lvert 1 \rangle$ (except of course that the story is a bit more complicated, on account of the fact that the first qubit is not ""secretly"" in either the state $0$ or $1$).

  • +
  • The above is not strongly dependent on measuring the first qubit: we can define $\lvert \varphi_0 \rangle$ and $\lvert \varphi_1 \rangle$ in terms of fixing any particular bit in the bit string $x$ to either $0$ or $1$, summing over only those components which are consistent with either the choice $0$ or $1$, and proceeding as above.

  • +
  • The above is also not strongly dependent on measuring in the standard basis, as Emily indicates. If we wish to consider measuring the first qubit in the basis $\lvert \alpha \rangle, \lvert \beta \rangle$, where $\lvert \alpha \rangle = \alpha_0 \lvert 0 \rangle + \alpha_1 \lvert 1 \rangle$ and $\lvert \beta \rangle = \beta_0 \lvert 0 \rangle + \beta_1 \lvert 1 \rangle$, we define +$$\begin{aligned} \lvert \varphi_0 \rangle &= \Bigl(\lvert \alpha \rangle\!\langle \alpha \lvert \otimes I^{\otimes n-1}\Bigr)\lvert \psi\rangle = \!\!\!\!\!\sum_{x' \in \{0,1\}^{n-1}}\!\!\!\!\!\! \bigl(\alpha_0^\ast u_{0x'} + \alpha_1^\ast u_{1x'}\bigr) \,\lvert\alpha\rangle \lvert x' \rangle\,, \\ \lvert \varphi_1 \rangle &= \Bigl(\lvert \beta\rangle\!\langle \beta \lvert \otimes I^{\otimes n-1}\Bigr)\lvert \psi\rangle = \!\!\!\!\!\sum_{x' \in \{0,1\}^{n-1}}\!\!\!\!\!\! \bigl(\beta_0^\ast u_{0x'} + \beta_1^\ast u_{1x'}\bigr) \,\lvert\beta\rangle \lvert x' \rangle\,, \end{aligned}$$ +and then proceeding as above.

  • +
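The projections in the items above can be rolled into one small routine. The following is an illustrative sketch of my own (not from the text) for a standard-basis measurement of qubit $k$ of an $n$-qubit register:

```python
import numpy as np

def measure_qubit(psi, k, n):
    # Standard-basis measurement of qubit k (0-indexed) of an n-qubit state.
    # Returns the outcome probabilities and both post-measurement states.
    psi = np.asarray(psi, dtype=complex).reshape([2] * n)
    psi = np.moveaxis(psi, k, 0)     # bring qubit k's axis to the front
    probs, branches = [], []
    for outcome in (0, 1):
        phi = np.zeros_like(psi)
        phi[outcome] = psi[outcome]  # project onto |outcome> on qubit k
        p = float(np.sum(np.abs(phi) ** 2))
        post = np.moveaxis(phi, 0, k).reshape(-1)
        probs.append(p)
        branches.append(post / np.sqrt(p) if p > 0 else post)
    return probs, branches

# Example: the Bell state (|00> + |11>)/sqrt(2); measuring qubit 0
# collapses the whole register to |00> or |11>, each with probability 1/2.
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
probs, (psi0, psi1) = measure_qubit(bell, 0, 2)
print(probs)
```

For a measurement in another basis, one would first rotate the state (e.g. apply the appropriate unitary on qubit $k$) and then reuse the same routine, exactly as the change-of-basis item describes.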
+",124,,124,,3/20/2018 14:23,3/20/2018 14:23,,,,0,,,,CC BY-SA 3.0 +1213,1,1220,,3/19/2018 23:00,,15,362,"

I understand that there is a constructive proof that arbitrary gates can be approximated by a finite universal gate set, which is the Solovay–Kitaev Theorem.
+However, the approximation introduces an error, which would spread and accumulate in a long computation. This would presumably scale badly with the length of the calculation? Possibly one might apply the approximation algorithm to the complete circuit as a whole, not to a single gate. But how does this scale with the length of the computation (i.e. how does the approximation scale with the dimension of the gates)? How does the gate approximation relate to gate synthesis? Because I could imagine that this affects the final length of the computation?
+Even more disturbing to me: What happens if the length of the calculation is not known at the time when the gate sequence is compiled?

+",104,,26,,01-01-2019 08:40,01-01-2019 08:40,How does approximating gates via universal gates scale with the length of the computation?,,1,0,,,,CC BY-SA 3.0 +1218,2,,135,3/20/2018 1:54,,11,,"

Suppose a function $f\colon {\mathbb F_2}^n \to {\mathbb F_2}^n$ has the following curious property: There exists $s \in \{0,1\}^n$ such that $f(x) = f(y)$ if and only if $x + y = s$. If $s = 0$ is the only solution, this means $f$ is 1-to-1; otherwise there is a nonzero $s$ such that $f(x) = f(x + s)$ for all $x$, which, because $2 = 0$, means $f$ is 2-to-1.
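The promise is easy to exhibit concretely. Here is a small sketch of my own construction (with integer labels standing in for the output strings) that builds such a 2-to-1 function for a given mask $s$ and checks the defining property:

```python
import itertools

def make_two_to_one(s, n):
    # Assign a common fresh label to each pair {x, x XOR s}, so that
    # f(x) = f(y) if and only if x XOR y is 0 or s.
    f, label = {}, 0
    for x in range(2 ** n):
        if x not in f:
            f[x] = f[x ^ s] = label
            label += 1
    return f

n, s = 3, 0b101
f = make_two_to_one(s, n)
for x, y in itertools.product(range(2 ** n), repeat=2):
    assert (f[x] == f[y]) == (x ^ y in (0, s))
print('promise holds for s =', bin(s))
```

With $s = 0$ the same construction degenerates to a 1-to-1 labelling, which is the other arm of the promise.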

+ +

What is the cost to any prescribed probability of success, on a classical or quantum computer, of distinguishing a uniform random 1-to-1 function from a uniform random 2-to-1 function satisfying this property, if each option (1-to-1 or 2-to-1) has equal probability?

+ +

I.e., I secretly flip a coin fairly; if I get heads I hand you a black box (classical or quantum, resp.) circuit for a uniform random 1-to-1 function $f$, whereas if I get tails I hand you a black box circuit for a uniform random 2-to-1 function $f$. How much do you have to pay to get a prescribed probability of success $p$ of telling whether I got heads or tails?

+ +

This is the scenario of Simon's algorithm. It has esoteric applications in nonsensical cryptanalysis,* and it was an early instrument in studying the complexity classes BQP and BPP and an early inspiration for Shor's algorithm.

+ +

Simon presented a quantum algorithm (§3.1, p. 7) that costs $O(n + |f|)$ qubits and expected $O(n \cdot T_f(n) + G(n))$ time for probability near 1 of success, where $T_f(n)$ is the time to compute a superposition of values of $f$ on an input of size $n$ and where $G(n)$ is the time to solve an $n \times n$ system of linear equations in $\mathbb F_2$.

+ +

Simon further sketched a proof (Theorem 3.1, p. 9) that a classical algorithm evaluating $f$ at no more than $2^{n/4}$ distinct discrete values cannot guess the coin with advantage better than $2^{-n/2}$ over a uniform random guess.

+ +

In some sense, this answers your question positively: A quantum computation requiring a linear number of evaluations of a random function on a quantum superposition of inputs can attain much better success probability than a classical computation requiring an exponential number of evaluations of a random function on discrete inputs, in the size of the inputs. But in another sense it doesn't answer your question at all, because it could be that for every particular function $f$ there is a faster way to compute the search.

+ +

The Deutsch–Jozsa algorithm serves as a similar illustration for a slightly different artificial problem to study different complexity classes, P and EQP, figuring out the details of which is left as an exercise for the reader.

+ +
+ +

* Simon's is nonsensical for cryptanalysis because only an inconceivably confused idiot would feed their secret key into the adversary's quantum circuit to use on a quantum superposition of inputs, but for some reason it makes a splash every time someone publishes a new paper on using Simon's algorithm to break idiots' keys with imaginary hardware, which is how all these attacks work. Exception: It is possible that this might break white-box cryptography, but the security story for white-box cryptography even against classical adversaries is not promising.

+",238,,238,,04-07-2018 04:54,04-07-2018 04:54,,,,2,,,,CC BY-SA 3.0 +1219,2,,1182,3/20/2018 2:21,,3,,"
+

Can one use a representation that is more compact, in the sense that it uses less memory and/or computational power than the simple vector representation? How does it work?

+
+ +

Source: ""Multiple Qubits"":

+ +

""While a single qubit can be trivially modeled, simulating a fifty-qubit quantum computation would arguably push the limits of existing supercomputers. Increasing the size of the computation by only one additional qubit doubles the memory required to store the state and roughly doubles the computational time. This rapid doubling of computational power is why a quantum computer with a relatively small number of qubits can far surpass the most powerful supercomputers of today, tomorrow and beyond for some computational tasks."".

+ +

So you can't utilize a Ponzi scheme or rob Peter to pay Paul. Compression will save memory at the cost of computational complexity, or representation in a more flexible space (larger) would reduce computational complexity but at a cost of memory. Essentially what is needed is more capable hardware or smarter algorithms.
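The doubling in the quote is easy to make concrete. A quick sketch, assuming 16-byte amplitudes (complex128), of the memory needed for a full state vector:

```python
# A full n-qubit state vector holds 2**n complex amplitudes; at 16 bytes
# each (complex128), every extra qubit doubles the memory footprint.
for n in (10, 20, 30, 40, 50):
    gib = 2 ** n * 16 / 2 ** 30
    print(f'{n} qubits: {gib:,.3f} GiB')
```

At 50 qubits this is already 16 PiB, which is why the methods below trade memory against computation rather than eliminating the cost.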

+ +
+ +

Here are some methods:

+ +
    +
  • Compression of the volume of sets of quantum states of the Qubit's metric:
  • +
+ +

The Fisher information metric can be used to map the volume of the qubit using an information geometry approach as discussed in ""The Volume of Two-Qubit States by Information Geometry"", ""Analysis of Fisher Information and the Cramer-Rao Bound for Nonlinear Parameter Estimation After Compressed Sensing"", and our ""Intuitive explanation of Fisher Information and Cramer-Rao bound"".

+ +
    +
  • Analogous to operand compression:
  • +
+ +

Computing depth-optimal decompositions of logical operations: ""A meet-in-the-middle algorithm for fast synthesis of depth-optimal quantum circuits"" or this Quora discussion on ""Encoding the dimension of the particle"".

+ +
    +
  • Analogous to memory compression:
  • +
+ +

Qutrit factorization using ternary arithmetic: ""Factoring with Qutrits: Shor's Algorithm on Ternary and Metaplectic Quantum Architectures"" and ""Quantum Ternary Circuit Synthesis Using Projection Operations"".

+ +
    +
  • Analogous to traditional optimization
  • +
+ +

""A Quantum Algorithm for Finding Minimum Exclusive-Or Expressions"".

+ +
    +
  • Other:
  • +
+ +

Krull Dimensions or axiomatisation and graph rewriting: ""Completeness of the ZX-calculus for Pure Qubit Clifford+T Quantum Mechanics"".

+ +

By combining those techniques you ought to be able to squeeze the foot into the shoe. That would permit emulation of larger systems on conventional processors, just don't ask me to explain doctoral level work or write the code. :)

+",278,,278,,3/21/2018 6:50,3/21/2018 6:50,,,,0,,,,CC BY-SA 3.0 +1220,2,,1213,3/20/2018 2:26,,10,,"

Throughout this answer, the norm of a matrix $A$, $\left\lVert A\right\rVert$, will be taken to be the spectral norm of $A$ (that is, the largest singular value of $A$). The Solovay–Kitaev theorem states that approximating a gate to within an error $\epsilon$ requires $$\mathcal O\left(\log^c\frac 1\epsilon\right)$$ gates, for $c<4$ in any fixed number of dimensions.

+ +

For the first part:

+ +
+

the approximation introduces an error, which would spread and accumulate in a long computation

+
+ +

Well, it can be shown by induction that errors accumulating through using one matrix to approximate another are subadditive (see e.g. Andrew Child's lecture notes). That is, for unitary matrices $U_i$ and $V_i$, $\left\lVert U_i - V_i\right\rVert < \epsilon\,\forall\, i \in \left\lbrace1, 2, \ldots, t\right \rbrace\implies \left\lVert U_t\ldots U_2U_1 - V_t\ldots V_2V_1\right\rVert \leq t\epsilon$.
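This subadditivity is easy to check numerically. Below is a small sketch of my own, using random $2\times 2$ unitaries each perturbed by a tiny rotation, with the spectral norm computed as the largest singular value:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_unitary(d):
    # Random unitary from the QR decomposition of a complex Gaussian matrix
    q, r = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))
    return q * (np.diagonal(r) / np.abs(np.diagonal(r)))

def spec_norm(m):
    return np.linalg.norm(m, 2)     # largest singular value

theta = 1e-3                        # per-gate perturbation
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])

t, eps = 20, 0.0
U_prod = V_prod = np.eye(2)
for _ in range(t):
    U = random_unitary(2)
    V = rot @ U                     # ||U - V|| = ||I - rot|| = 2 sin(theta/2)
    eps = max(eps, spec_norm(U - V))
    U_prod, V_prod = U @ U_prod, V @ V_prod

accumulated = spec_norm(U_prod - V_prod)
print(accumulated, t * eps)         # accumulated error never exceeds t * eps
```

Running this for various seeds and $t$ always keeps the accumulated error at or below $t\epsilon$, as the induction argument guarantees.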

+ +

What this means in terms of implementation is that, for an overall error no more than $\epsilon$ to be achieved, each gate needs to be approximated to within $\epsilon/t$, or

+ +
+

applying the approximation to the circuit as a whole

+
+ +

is the same as applying the approximation to each individual gate, each with an individual error no more than that of the entire circuit divided by the number of gates that you're approximating.

+ +

In terms of gate synthesis, the algorithm is performed by taking products of the gate set $\Gamma$ to form a new gate set $\Gamma_0$ which forms an $\epsilon^2$ net for $\operatorname{SU}\left(d\right)$ (for any $A \in \operatorname{SU}\left(d\right),\, \exists U\in\Gamma_0\, s.t. \left\lVert A-U\right\rVert\leq\epsilon^2$). Starting from identity, a new unitary is recursively found from the new gate set in order to get a tighter net around the target unitary. Oddly enough, the time for a classical algorithm to perform this operation is also $\mathcal O\left(\mathit{poly} \log 1/\epsilon\right)$, which is sub-polynomial time. However, as per Harrow, Recht, Chuang, in $d$ dimensions, as a ball of radius $\epsilon$ around $\operatorname{SU}\left(d\right)$ has a volume $\propto \epsilon^{d^2-1}$, this scales exponentially in $d^2$ for a non-fixed number of dimensions.

+ +

This does have an effect on the final computation time. However, as the scaling in both number of gates and classical computational complexity is sub-polynomial, this doesn't change the complexity class of any algorithm, at least for the commonly considered classes.

+ +

For $t$ gates, the overall (time and gate) complexity is then $$\mathcal O\left(t\, \mathit{poly} \log \frac t\epsilon\right)$$.

+ +

When using the unitary circuit model without intermediary measurements, the number of gates to be implemented will always be known prior to the computation. However, it is feasible to assume this isn't the case when intermediary measurements are used, so when the number of gates that you want to approximate is unknown, this is saying that $t$ is unknown, and if you don't know what $t$ is, you obviously can't approximate each gate to an error $\epsilon/t$. If you know a bound on the number of gates (say, $t_{\text{max}}$), then you could approximate each gate to within $\epsilon/t_{\text{max}}$ to get an overall error $\leq\epsilon$ and complexity $$\mathcal O\left(t\, \mathit{poly} \log \frac {t_{\text{max}}}{\epsilon}\right),$$ although if no upper bound on the number of gates is known, then each gate would be approximated to some (smaller) $\epsilon'$, giving an overall error $\leq t'\epsilon'$ for the resulting number of implemented gates (which is unknown at the start) $t'$, with an overall complexity of $$\mathcal O\left(t'\, \mathit{poly} \log \frac {1}{\epsilon'}\right).$$

+ +

Of course, the total error of this is still unbounded, so one simple1 way of keeping the error bounded would be to reduce the error each time by a factor of, say, $2$, so that the $n^{th}$ gate would be implemented with error $\epsilon/2^n$. The complexity would then be $$\mathcal O\left(\mathit{poly} \log \frac {2^n}{\epsilon'}\right) = \mathcal O\left(\mathit{poly}\, n\log \frac {1}{\epsilon'}\right),$$ giving an overall (now polynomial) complexity $$\mathcal O\left(\mathit{poly}\, t \log \frac {1}{\epsilon}\right),$$ although this does have the advantage of guaranteeing a bounded error.
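The halving trick keeps the total error below $\epsilon$ by a geometric series, $\sum_{n\geq1}\epsilon/2^n<\epsilon$; a quick sanity check with exact rational arithmetic:

```python
from fractions import Fraction

eps = Fraction(1, 100)
total = Fraction(0)
for n in range(1, 1001):
    total += eps / 2 ** n           # error budget for the n-th gate
assert total < eps                  # bounded, no matter how many gates
print(float(total))
```

The partial sums approach $\epsilon$ but never reach it, so the bound holds for any (unknown) number of gates.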

+ +

This isn't too bad, so I would hope that (when the number of gates is unknown) classical computers would be able to keep coming up with the correct gates at least as fast as a quantum processor would need them. If not currently, then hopefully once quantum processors become good enough that this actually becomes a problem!

+ +
+ +

1 Although, likely not the most efficient

+",23,,23,,3/20/2018 10:25,3/20/2018 10:25,,,,0,,,,CC BY-SA 3.0 +1221,1,1223,,3/20/2018 7:41,,14,591,"

One of the many things that confuse me in the field of QC is what makes the measurement of a qubit in a quantum computer any different from just choosing at random (in a classical computer) (that's not my actual question)

+ +

Suppose I have $n$ qubits, and my state is a vector of their amplitudes $(a_1,a_2,\dots,a_{2^n})^\mathrm{T}$.1

+ +

If I pass that state through some gates and do all sorts of quantum operations (except for measurement), and then I measure the state. I'll only get one of the options (with varying probabilities).

+ +

So where's the difference between doing that, and generating a number randomly from some convoluted/complicated distribution? What makes quantum computations essentially different from randomized classical ones?

+ +
+ +
    +
  1. I hope I didn't misunderstand how states are represented. Confused about that, as well...
  2. +
+",13,,15,,3/20/2018 8:55,3/21/2018 1:54,What makes quantum computations different from randomized classical computations?,,2,0,,,,CC BY-SA 3.0 +1222,1,,,3/20/2018 7:52,,8,858,"

The title says most of it: What are the implications of Bremermann's limit for quantum computing? +The Wikipedia page says that the limit applies to any self-contained system, but in the last few lines they also claim that ""access to quantum memory enables computational algorithms that require arbitrarily small amount of energy/time per one elementary computation step"".

+ +

These statements seem contradictory (unless requiring an arbitrarily small amount of energy/time also requires the amount of mass to go to infinity). So how does Bremermann's limit actually affect quantum computing?

+",346,,26,,12/14/2018 5:57,4/17/2019 23:42,What are the implications of Bremermann's limit for quantum computing?,,1,0,,,,CC BY-SA 3.0 +1223,2,,1221,3/20/2018 14:25,,13,,"

The question is, how did you get to your final state?

+ +

The magic is in the gate operations that transformed your initial state to your final state. If we knew the final state to begin with, we wouldn't need a quantum computer - we'd have the answer already and could, as you suggest, simply sample from the corresponding probability distribution.

+ +

Unlike Monte Carlo methods that take a sample from some probability distribution and change it to a sample from some other distribution, the quantum computer is taking an initial state vector and transforming it to another state vector via gate operations. The key difference is that quantum states undergo coherent interference, which means that the vector amplitudes add as complex numbers. Wrong answers add destructively (and have low probability), while right answers add constructively (and have high probability).
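The standard toy contrast (my own sketch, not from the answer above) is a Hadamard applied twice: mixing probabilities the 'Bayesian' way stays at 50/50, while the amplitudes interfere and return the qubit to $|0\rangle$ with certainty:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)    # Hadamard, acting on amplitudes
ket0 = np.array([1.0, 0.0])

amp = H @ H @ ket0                  # amplitudes add as signed numbers
quantum_probs = np.abs(amp) ** 2    # the |1> paths cancel destructively

M = np.array([[0.5, 0.5], [0.5, 0.5]])          # classical analogue: fair mixing
classical_probs = M @ M @ np.array([1.0, 0.0])  # no interference possible

print(quantum_probs, classical_probs)
```

The two paths into $|1\rangle$ carry amplitudes $+\tfrac12$ and $-\tfrac12$ and cancel, something no nonnegative probability distribution can do.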

+ +

The end result, if all goes well, is a final quantum state that yields the right answer with high probability upon measurement, but it took all those gate operations to get there in the first place.

+",356,,15,,3/21/2018 1:08,3/21/2018 1:08,,,,0,,,,CC BY-SA 3.0 +1224,2,,1192,3/20/2018 15:46,,1,,"

While other answers provide good points, I feel that I still disagree a bit. So, I will share my own thoughts on this point.

+ +

In short, I think featuring the constant 'as is' is a wasted opportunity at best. Perhaps it is the best we are able to get for now, but it is far from ideal.

+ +

But first, I think a brief excursion is necessary.

+ +

When do we have an effective algorithm?

+ +

When Daniel Sank asked me what I would do if there was an algorithm for factoring large numbers with a $10^6$ factor speedup on a test set of serious instances, I first replied that I doubted this would be due to algorithmic improvements, but rather to other factors (either the machine or the implementation). But I think I have a different response now. Let me give you a trivial algorithm that can factor very large numbers within milliseconds and is nevertheless very ineffective:

+ +
    +
  1. Take a set $P$ of (pretty big) primes.
  2. +
  3. Compute $P^2$, the set of all composites with exactly two factors from $P$. For each composite, store which pair of primes is used to construct it.
  4. +
  5. Now, when given an instance from $P^2$, simply look at the factorization in our table and report it. Otherwise, report 'error'
  6. +
+ +

I hope it is obvious that this algorithm is rubbish, as it only works correctly when our input is in $P^2$. However, can we see this when given the algorithm as a black box and ""by coincidence"" only test with inputs from $P^2$? Sure, we can try to test a lot of examples, but it is very easy to make $P$ very big without the algorithm being ineffective on inputs from $P^2$ (perhaps we want to use a hash-map or something).
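The scheme above is only a few lines (a toy sketch with arbitrarily chosen small primes standing in for the 'pretty big' ones):

```python
# Step 1: a fixed set P of primes (tiny here; imagine it huge).
P = [101, 103, 107, 109, 113, 127, 131, 137, 139, 149]

# Step 2: precompute the factorization table for P^2.
table = {}
for p in P:
    for q in P:
        table[p * q] = (min(p, q), max(p, q))

# Step 3: 'factor' by lookup - instant on P^2, useless everywhere else.
def factor(n):
    return table.get(n, 'error')

print(factor(101 * 149))            # a correct factorization, instantly
print(factor(15))                   # anything outside P^2 fails
```

A benchmark drawn only from $P^2$ would report miraculous running times for this rubbish.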

+ +

So, it isn't unreasonable that our rubbish algorithm might coincidentally seem to have 'miraculous' speedups. Now, of course there are many experiment design techniques that can mitigate the risk, but perhaps more clever 'fast' algorithms that still fail on many, but not enough, examples can trick us! (also note that I'm assuming no researcher is malicious; if they were, matters would be even worse!)

+ +

So, I would now reply: ""Wake me up when there is a better performance metric"".

+ +

How can we do better, then?

+ +

If we can afford to test our 'black box' algorithm on all cases, we cannot be fooled by the above. However, this is impossible for practical situations. (This can be done in theoretical models!)

+ +

What we can instead do is to create a statistical hypothesis for some parameterized running time (usually for the input size) to test this, perhaps adapt our hypothesis and test again, until we get a hypothesis we like and rejecting the null seems reasonable. (Note that there are likely other factors involved I'm ignoring. I'm practically a mathematician. Experiment design is not something within my expertise)
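As a sketch of what such a test could look like (entirely my own illustration: the 'timings' are synthetic, generated from a hidden $c\cdot n^3$ law with noise), the hypothesis $t(n) = c\,n^\alpha$ becomes a line on a log-log scale, so a least-squares fit recovers the exponent:

```python
import numpy as np

rng = np.random.default_rng(1)
ns = np.array([2 ** k for k in range(4, 14)])
# Synthetic benchmark timings from a hidden t(n) ~ c * n^3, with noise;
# in practice these would come from actual measurements.
times = 5e-9 * ns.astype(float) ** 3 * rng.lognormal(0.0, 0.05, size=ns.size)

# log t = log c + alpha * log n: fit a line, read off the exponent alpha
alpha, log_c = np.polyfit(np.log(ns), np.log(times), 1)
print(alpha)                        # close to 3 for this data
```

An honest statistical treatment would then attach confidence intervals to $\alpha$ and test the null hypothesis, rather than just eyeballing the fit.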

+ +

The advantage of statistically testing on a parameterization (e.g. is our algorithm $O(n^3)$? ) is that the model is more general and hence it is harder to be 'cheated' like in the previous section. It is not impossible, but at least the statistical claims on whether this is reasonable can be justified.

+ +

So, what to do with the constants?

+ +

I think only stating ""$10^9$ speedup, wow!"" is a bad way of dealing with this case. But I also think completely disregarding this result is bad as well.

+ +

I think it is most useful to regard the curious constant as an anomaly, i.e. it is a claim that in itself warrants further investigation. I think that creating hypotheses based on more general models than simply 'our algorithm takes X time' is a good tool to do this. So, while I don't think we can simply take over CS conventions here, completely disregarding the 'disdain' for constants is a bad idea as well.

+",253,,,,,3/20/2018 15:46,,,,0,,,,CC BY-SA 3.0 +1225,1,1331,,3/20/2018 18:46,,8,224,"

This new algorithm for QC calculation was introduced recently (2017 Q4) by IBM (Pednault et al.) to great fanfare. The paper seems more couched in the language of physics.

+ +

Are there any basic overview/analyses of this by computer scientists about the general ""design pattern"" utilized, vs the prior algorithmic techniques for the problem, or can someone provide one? What about the complexity analysis of the techniques?

+ +

""Breaking the 49-Qubit Barrier in the Simulation of Quantum Circuits""

+",377,,26,,1/31/2019 14:49,1/31/2019 14:49,New algorithm for faster QC simulation by IBM,,1,0,,,,CC BY-SA 3.0 +1226,1,,,3/21/2018 0:20,,13,523,"

Which technological path seems most promising to produce a quantum processor with a greater quantum volume (preferring fewer errors per qubit over more qubits) than Majorana fermions?

+ +

The preferred format for the answer would be similar to:

+ +

""Group ABC's method DEF has demonstrated better QV than using MF; as proven independently in paper G on page x, paper H on page y, and paper I on page z"".

+ +

On Majorana fermions Landry Bretheau says:

+ +
+

These particles could be the elementary brick of topological quantum computers, with very strong protection against errors. Our work is an initial step in this direction.

+
+ +
+ +

Example of an insufficient (but interesting) answer:

+ +

In their paper ""Robust quantum metrological schemes based on protection of quantum Fisher information"", Xiao-Ming Lu, Sixia Yu, and C.H. Oh construct a family of $(2t+1)$-qubit metrological schemes that are immune to $t$-qubit errors after the signal sensing. In comparison, at least five qubits are required for correcting arbitrary 1-qubit errors in standard quantum error correction.

+ +

[Note: This theory of robust metrological schemes preserves the quantum Fisher information instead of the quantum states themselves against noise. That results in a good effective volume if they can construct a device utilizing their techniques and show that it scales.

+ +

While that might seem like one promising answer it's a single link (without multiple concurring sources) and there's no device built to show scalability. A low qubit device that's error free and unscalable or a device with many error-prone qubits has a low volume (and thus is ""Not An Answer"").]

+ +
+ +

Additional references:

+ +

Paper explaining Quantum Volume.

+ +

+ +

After doing some research it looks like graphene sandwiched between superconductors to produce Majorana fermions is the leading edge - is there something better? [""better"" means currently possible, not theoretically possible or ridiculously expensive]. The graphic illustrates that over a hundred qubits with less than a 0.0001 error rate is wonderful; lesser answers are acceptable.

+",278,,26,,5/15/2019 15:08,5/15/2019 15:08,What is the leading edge technology for creating a quantum computer with the fewest errors?,,1,0,,,,CC BY-SA 3.0 +1227,2,,1221,3/21/2018 1:42,,4,,"

You're right - if we had a bunch of linear probabilities and just kept combining them in a big superposition, we may as well just do randomized classical computation, which'd basically be describable in terms of Bayesian mechanics.

+ +

And since classical systems can already operate like this, that'd be disinteresting.

+ +

The trick is that quantum gates act linearly on complex amplitudes rather than on probabilities, i.e. they can combine in a non-Bayesian way. Then we can construct systems in which qubits interfere in ways that favor desirable outcomes over undesirable outcomes.

+ +

A good example might be Shor's algorithm:

+ +
+

Then $\omega ^{ry}$ is a unit vector in the complex plane ($\omega$ is a root of unity and $r$ and $y$ are integers), and the coefficient of $Q^{-1}\left|y,z\right\rangle$ in the final state is $${\sum _{x:\,f(x)=z}\omega ^{xy}=\sum _{b}\omega ^{(x_{0}+rb)y}=\omega ^{x_{0}y}\sum _{b}\omega ^{rby}.}$$ Each term in this sum represents a different path to the same result, and quantum interference occurs – constructive when the unit vectors $\omega ^{rby}$ point in nearly the same direction in the complex plane, which requires that $\omega ^{ry}$ point along the positive real axis.

+ +

-""Shor's algorithm"", Wikipedia

+
+ +

Then, the very next step after that starts with ""Perform a measurement."". That is, they tweaked the odds in favor of the outcome that they wanted, and now they're measuring it to see what that was.
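The constructive/destructive interference in that sum can be seen directly by computing $\left|\sum_b \omega^{rby}\right|$ for each outcome $y$; below is a toy sketch of mine with $Q=256$ and a period $r=8$ that divides $Q$:

```python
import numpy as np

Q, r = 256, 8
omega = np.exp(2j * np.pi / Q)
bs = np.arange(Q // r)              # the values b can take

# Magnitude of sum_b omega^(r b y) for every possible outcome y
mags = np.array([abs(np.sum(omega ** (r * bs * y))) for y in range(Q)])

peaks = np.nonzero(mags > 1e-6)[0]
print(peaks)                        # only multiples of Q/r = 32 survive
```

Every $y$ that is not a multiple of $Q/r$ has its unit vectors spread evenly around the circle and cancel, which is exactly the odds-tweaking the measurement then exploits.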

+",15,,15,,3/21/2018 1:54,3/21/2018 1:54,,,,1,,,,CC BY-SA 3.0 +1228,1,,,3/21/2018 2:15,,0,999,"

Seth Lloyd, a professor of mechanical engineering and physics at MIT, published a paper and a book in which he shows that the universe can be regarded as a giant quantum computer. According to him, all observed phenomena are consistent with the model in which the universe is indistinguishable from a quantum computer, e.g., a quantum cellular automaton.

+ +

He considers the following two statements to be true:

+ +
    +
  1. The universe allows quantum computation.
  2. +
  3. A quantum computer efficiently simulates the dynamics of the universe.
  4. +
+ +

To conclude with:

+ +
+

Finally, we can quantize question three: (Q3) ‘Is the universe a quantum cellular automaton?’ While we cannot unequivocally answer this question in the affirmative, we note that the proofs that show that a quantum computer can simulate any local quantum system efficiently immediately imply that any homogeneous, local quantum dynamics, such as that given by the standard model and (presumably) by quantum gravity, can be directly reproduced by a quantum cellular automaton. Indeed, lattice gauge theories, in Hamiltonian form, map directly onto quantum cellular automata. Accordingly, all current physical observations are consistent with the theory that the universe is indeed a quantum cellular automaton.

+
+ +

Does this theory hold up?

+",395,,26,,05-07-2019 15:50,05-07-2019 15:50,Is the universe indistinguishable from a giant quantum computer?,,2,3,,3/26/2018 15:26,,CC BY-SA 3.0 +1229,2,,1228,3/21/2018 2:30,,10,,"

I guess that he's right enough for the moment; quantum mechanics is part of our best theory of the universe, which by definition means that we think the universe works like that.

+ +

It's pretty circular though. When we have some model of the universe, what that literally means is that we think that the universe is operating according to that model. Currently that's a quantum model. Still, who cares?

+ +

The paper attempts to address that question:

+ +
+

The immediate question is ‘So what?’ Does the fact that the universe is observationally indistinguishable from a giant quantum computer tell us anything new or interesting about its behavior? The answer to this question is a resounding ‘Yes!’ In particular, the quantum computational model of the universe answers a question that has plagued human beings ever since they first began to wonder about the origins of the universe, namely, Why is the universe so ordered and yet so complex [1]?

+
+ +

So, I guess that he's saying that quantum mechanics helps us to model more about the universe than prior models.

+ +

Seems like a pretty trivial point. It's weird that someone wrote a paper about it.

+",15,,,,,3/21/2018 2:30,,,,1,,,,CC BY-SA 3.0 +1231,2,,1222,3/21/2018 5:20,,3,,"
+

What are the implications of Bremermann's limit for quantum computing?

+
+

From the Wikipedia page you referenced:

+
+

"A computer with the mass of the entire Earth operating at the Bremermann's limit could perform approximately $10^{75}$ mathematical computations per second.".

+
+

Next you say:

+
+

The Wikipedia page says that the limit applies to any self-contained system, but in the last few lines they also claim ... statements seem contradictory.

+
+

The whole paragraph is:

+
+

The limit has been further analysed in later literature as the maximum rate at which a system with energy spread $ \Delta E $ can evolve into an orthogonal and hence distinguishable state to another, $ \Delta t = \pi \hbar / 2 \Delta E$. In particular, Margolus and Levitin has shown that a quantum system with average energy $E$ takes at least time $ \Delta t = \pi \hbar / 2 E $ to evolve into an orthogonal state. However, it has been shown that access to quantum memory enables computational algorithms that require arbitrarily small amount of energy/time per one elementary computation step.

+
+
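To get a sense of scale, the Margolus-Levitin time $\Delta t = \pi \hbar / (2E)$ quoted above can be evaluated directly (a quick sketch of my own; the 1 eV energy is an arbitrary illustrative choice):

```python
import math

hbar = 1.054571817e-34   # reduced Planck constant, J*s
eV = 1.602176634e-19     # one electron-volt in joules

E = 1.0 * eV             # average energy of the system
dt = math.pi * hbar / (2 * E)

print(f'minimum time to reach an orthogonal state: {dt:.3e} s')
# about 1e-15 s, i.e. roughly one femtosecond per state change at 1 eV
```

Scaling $E$ up or down moves this bound proportionally, which is the content of the speed limit.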

The paper referenced, "Computing with a single qubit faster than the computation quantum speed limit", seems to explain it fairly clearly:

+
+

Page 1: "Introduction. The quantum phase space of a qubit is a sphere (Fig. 1). One can discretize this space into any number of states and then apply field pulses to switch between the chosen states in an arbitrary order. In this sense, a qubit comprises the whole universe of choices for computation. For example, a qubit can work as finite automata when different unitary gates act on this qubit depending on arriving digital words. However, different states of a qubit are generally not distinguishable by measurements. So, if the final quantum state encodes the result of computation, we cannot generally extract this information because we cannot distinguish this state by a measurement from other non-orthogonal possibilities reliably. For such reasons, qubits are believed to provide computational advantage over classical memory only when they are used to create purely quantum correlations, i.e., entanglement or quantum discord."

+

...

+

"Quantum mechanically, distinguishable states must be represented by orthogonal vectors that produce definitely different measurement outcomes. However, the switching time between two orthogonal quantum states is restricted from below by a fundamental computation speed limit T = h/(4∆E), where ∆E is characteristic energy of the control field coupling to the memory that is needed to update one bit of information. So, restrictions on strength of control fields automatically restrict the speed of classical computation that saves information in physically distinguishable states. While the existence of this computation quantum speed limit is a mathematically proved fact, I will show an explicit elementary example that demonstrates possibility of solving a computational problem faster than the lowest time bound that is imposed by this limit on classical computation hardware. Access to the quantum memory makes this possible because it allows information processing using nonorthogonal quantum states. So, there is no more direct linear relation between the minimal time and the number of elementary logic operations required to implement an algorithm at given energy constraints.

+
+

+
+

FIG. 1. Up to overall phases that do not influence measurement outcomes, states of a qubit correspond to points on the 2D sphere. This phase space can be discretized to create a register of states (green circles) for computation. However, only opposite points on this sphere, such as the poles marked by $|0\rangle$ and $|1\rangle$, are distinguishable by measurements.

+
+

[Note: You might get by just skimming the bold text, but I suggest you read the whole paper for a better understanding.]

+

Hopefully that is clear; if you have a comment, I can make an edit.

+",278,,-1,,6/18/2020 8:31,4/17/2019 23:42,,,,0,,,,CC BY-SA 4.0 +1232,2,,49,3/21/2018 6:21,,7,,"

As far as I know the closest answer to your question for applications is given in the recent (still unpublished) work presented at the March meeting by Bibek Pokharel, where he compares graph 3-coloring instances on D-Wave Two, D-Wave 2X and D-Wave 2000Q, all other things staying reasonably equal. The short answer is that all the performance increase is essentially due to the possibility of running single anneals at shorter anneal times (e.g. 1$\mu$s instead of 5$\mu$s indeed gives about a 5X performance increase, in terms of the time-to-solution (TTS) metric; with respect to the 20$\mu$s of D-Wave Two the scaling is different).

+ +

I can also spoil that from D-Wave Two to D-Wave 2000Q on Sherrington-Kirkpatrick instances we observed no substantial improvement either. Results will be published soon in collaboration with Stanford.

+",410,,,,,3/21/2018 6:21,,,,0,,,,CC BY-SA 3.0 +1233,2,,75,3/21/2018 6:43,,10,,"

As far as I’m aware, the surface code is still regarded as the best. With an assumption of all elements failing with equal probability (and doing so in a certain way) it has a threshold of around 1%.

+ +

Note that the paper you linked to doesn’t have a 3D surface code. It is the decoding problem that is 3D, due to tracking changes to the 2D lattice over time. As I think you suspected, this is the required procedure when trying to keep the stored information coherent for as long as possible. Check out this paper for an earlier reference on some of these things.

+ +

Exact threshold numbers mean you need a specific error model, as you know. And for that you need a decoder, which ideally adapts to the specifics of the error model while remaining fast enough to keep up. Your definition of what is fast enough for the task at hand will have a big effect on what the threshold is.

+ +

To get upper bounds for a specific code and specific noise model, we can sometimes map the model to one of statistical mechanics. The threshold then corresponds to the point of a phase transition. See this paper for an example of how to do this, and the references therein for others.

+ +

Other than the threshold, another important factor is how easy it is to do quantum computation on the stored information. The surface code is quite bad at this, which is a major reason that people still consider other codes, despite the great advantages of the surface codes.

+ +

The surface code can only do the X, Z and H gates very simply, but they aren’t enough. The Color code can also manage the S gate without too much trouble, but that still just restricts us to the Clifford gates. Expensive techniques like magic state distillation will still be needed for both cases to get additional operations, as required for universality.

+ +

Some codes don’t have this restriction. They can let you do a full universal gate set in a straightforward and fault-tolerant way. Unfortunately, they pay for this by being much less realistic to build. These slides might point you in the right directions for more resources on this matter.

+ +

It’s also worth noting that even within the family of surface codes there are variations to explore. The stabilizers can be changed to an alternating pattern, or a YYYY stabilizer can be used, to better deal with certain noise types. More drastically, we could even make quite big changes to the nature of the stabilizers. There are also the boundary conditions, which are what distinguish a planar code from a toric code, etc. These and other details give us lots to optimize over.

+",409,,409,,3/21/2018 19:04,3/21/2018 19:04,,,,0,,,,CC BY-SA 3.0 +1234,2,,1226,3/21/2018 8:45,,11,,"

That is indeed the most important question at the moment!

+ +

Superconducting qubits currently have the biggest devices. But will they continue to scale? Will short coherence times make it too hard for error correction to keep up?

+ +

Trapped ions are not far behind. But they have their own scalability issues.

+ +

Spin qubits should be great for scaling once they get going. They are still down in the few qubits at the moment, though.

+ +

Majoranas also are suspected to have some nice properties. But I’d have to see a single qubit before I declare them to be the leading edge.

+ +

Photonics is also a viable strategy. In fact, the first cloud-based quantum device was photonic. A few startups are also based around photonic approaches, such as the one described here.

+",409,,409,,3/21/2018 19:00,3/21/2018 19:00,,,,0,,,,CC BY-SA 3.0 +1235,1,1241,,3/21/2018 10:00,,26,6642,"

As I understand it, the field of quantum mechanics was started in the early 20th century when Max Planck solved the black-body radiation problem. But I don't know when the idea of computers using quantum effects spread out.

+ +

What is the earliest source that proposes the idea of quantum computers using qubits?

+",11,,55,,10/24/2019 9:38,10/27/2019 10:24,Who first proposed the idea of quantum computing using qubits?,,2,0,,,,CC BY-SA 4.0 +1236,1,1240,,3/21/2018 10:14,,9,2569,"

Reading into the CNOT gate I understand that, mathematically, such a gate entangles the control qubit and the target (the resulting state is $\frac{1}{\sqrt 2}(|00\rangle+|11\rangle)$).

+ +

However, looking at the ""truth table"" of the gate, it seems as though the result is not entangled: If I measure the target after passing the gate, the state of the control can still be either option.

+ +

Am I missing something, or did I misunderstand the truth table?

+",13,,26,,12/23/2018 12:09,5/15/2019 16:45,How does evolving a two-qubit state through a CNOT gate entangle them?,,2,0,,,,CC BY-SA 3.0 +1237,2,,1235,3/21/2018 10:28,,7,,"

Around 1960-1973 the idea was beginning to form, but the field really started spreading in the 1980s.

+ +

One of the biggest pioneers was Richard P. Feynman. He proposed a model of a quantum computer in his 1981 talk ""Simulating Physics with Computers"". From that talk, many other scientists pushed the field further (Toffoli created one of the first quantum gates; Shor, at Bell Labs, created one of the first quantum algorithms; etc.)

+ +

The field has been changing and evolving rapidly throughout 1980-2000, and keeps evolving. But the initial ""spark"" was made by Richard Feynman. I don't know if he thought of qubits, but his main interest in quantum computing was for simulating quantum physics and systems.

+",13,,,,,3/21/2018 10:28,,,,1,,,,CC BY-SA 3.0 +1240,2,,1236,3/21/2018 11:06,,16,,"

You are correct that none of the states in the truth table are entangled. Not all states become entangled when acted on by the CNOT.

+ +

The entangled state in your question would result if the control qubit was in state $|+\rangle=\frac{1}{\sqrt 2}(|0\rangle+|1\rangle)$), and the target was in state $|0\rangle$. The two qubits would then be in a superposition of $|00\rangle$ and $|10\rangle$. Apply the truth table to each of these independently, and you should see that the CNOT produces the superposition of $|00\rangle$ and $|11\rangle$ that you gave.

+ +
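This can be checked with a few lines of linear algebra (a standalone numpy sketch of my own, using the standard computational-basis matrix for the CNOT):

```python
import numpy as np

ket0 = np.array([1, 0])
plus = np.array([1, 1]) / np.sqrt(2)   # |+> control

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

state_in = np.kron(plus, ket0)   # |+>|0> = (|00> + |10>)/sqrt(2)
state_out = CNOT @ state_in

print(state_out)
# ~ [0.707, 0, 0, 0.707], i.e. the Bell state (|00> + |11>)/sqrt(2)
```

Applying the truth table to $|00\rangle$ and $|10\rangle$ separately gives exactly these two surviving components.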

Note that the truth table on that page is not the only one we could write. It favours a certain interpretation of the CNOT, which is that the CNOT performs an X (the quantum version of a NOT) on the target qubit when the control is in state $|1\rangle$. An equivalent interpretation is to say that the CNOT performs a Z on the control if the target is in state $|-\rangle$ (which shows that the control and target labels are quite arbitrary). We can also interpret the CNOT as an operation that swaps the expectation values of certain Pauli operators.

+ +

The CNOT is all of these things at once. To try and explain this (and other things), I made a tutorial for quantum programming. Perhaps it will be of some use to you.

+",409,,409,,3/21/2018 12:16,3/21/2018 12:16,,,,0,,,,CC BY-SA 3.0 +1241,2,,1235,3/21/2018 11:32,,22,,"

According to Wikipedia of Timeline of quantum computing, here are the main events:

+ +
    +
  • 1960 + +
    +

    Stephen Wiesner invents conjugate coding.

    +
  • +
  • 1968

    + +

    A quantum computer with spins as quantum bits was also formulated for use as a quantum spacetime in 1968.

    + +

    Finkelstein, David (1968). ""Space-Time Structure in High Energy Interactions"". In Gudehus, T.; Kaiser, G. Fundamental Interactions at High Energy. New York: Gordon & Breach.

  • +
  • 1973

    + +
    +

    Alexander Holevo publishes a paper showing that n qubits cannot carry more than n classical bits of information (see: ""Holevo's theorem""/""Holevo's bound"").

    + +

    Charles H. Bennett shows that computation can be done reversibly.

    +
  • +
  • 1976

    + +
    +

    Polish mathematical physicist Roman Stanisław Ingarden publishes a seminal paper entitled ""Quantum Information Theory"" in Reports on Mathematical Physics, vol. 10, 43–72, 1976.

    +
  • +
  • 1980

    + +
    +

    Paul Benioff described quantum mechanical Hamiltonian models of computers

    + +

    Yuri Manin proposed an idea of quantum computing

    +
  • +
  • 1981

    + +
    +

    Richard Feynman in his talk [...], observed that it appeared to be impossible in general to simulate an evolution of a quantum system on a classical computer in an efficient way. He proposed a basic model for a quantum computer that would be capable of such simulations

    +
  • +
  • 1982

    + +
    +

    Paul Benioff proposes the first recognisable theoretical framework for a quantum computer.

    +
  • +
+ +

So in general, the field of quantum computing was initiated by the work of Paul Benioff study and Yuri Manin in 1980, Richard Feynman in 1982 study, and David Deutsch in 1985. Source: Quantum computing at Wikipedia.

+",99,,99,,3/21/2018 11:39,3/21/2018 11:39,,,,1,,,,CC BY-SA 3.0 +1242,2,,1196,3/21/2018 12:02,,8,,"

In the context of scalable quantum computing, the polylog scaling needed for magic state distillation should not be a problem.

+ +

Indeed, it is not the only polylog scaling we need to contend with. Using the $S$ and $T$ gates to approximate a general single-qubit rotation can have a similar cost when using the Solovay-Kitaev algorithm (though this is no longer state-of-the-art). The cost of error correction is also similar to that of MSD. In fact, it has been shown ""that magic state factories have space-time costs that scale as a constant factor of surface code costs"".

+ +

Within a scalable and fault-tolerant quantum computer, I see no reason to think that MSD will have a problematic overhead. We may find other methods that are better, such as ways to implement complex error correcting codes that allow transversal non-Clifford gates. But those will not be so great at error correction, and so have higher overheads for that. This could easily remove any benefits.

+",409,,,,,3/21/2018 12:02,,,,0,,,,CC BY-SA 3.0 +1244,1,,,3/21/2018 12:14,,11,711,"

While there are many interesting questions that a computer can solve with barely any data (such as factorization, which requires ""only"" a single integer), most real-world applications, such as machine learning or AI, will require large amounts of data.

+ +

Can quantum computers handle this massive stream of data, in theory or in practice? Is it a good idea to store the data in a ""quantum memory"", or is it better to store it in a ""classical memory""?

+",253,,26,,12/13/2018 19:38,5/15/2019 14:47,Can quantum computers handle 'big' data?,,1,0,,,,CC BY-SA 3.0 +1247,1,1251,,3/21/2018 13:37,,8,786,"

I've got the following quantum code using QISKit (based on hello_quantum.py):

+ +
import sys, os
+from qiskit import QuantumProgram, QISKitError, RegisterSizeError
+
+# Create a QuantumProgram object instance.
+Q_program = QuantumProgram()
+try:
+    import Qconfig
+    Q_program.set_api(Qconfig.APItoken, Qconfig.config[""url""])
+except:
+    offline = True
+    print(""WARNING: There's no connection with IBMQuantumExperience servers."");
+print(""The backends available for use are: {}\n"".format("","".join(Q_program.available_backends())))
+backend = 'ibmqx5'
+try:
+    # Create a Quantum Register called ""qr"" with 2 qubits.
+    qr = Q_program.create_quantum_register(""qr"", 2)
+    # Create a Classical Register called ""cr"" with 2 bits.
+    cr = Q_program.create_classical_register(""cr"", 2)
+    # Create a Quantum Circuit called ""qc"". involving the Quantum Register ""qr""
+    # and the Classical Register ""cr"".
+    qc = Q_program.create_circuit(""bell"", [qr], [cr])
+
+    # Add the H gate in the Qubit 0, putting this qubit in superposition.
+    qc.h(qr[0])
+    # Add the CX gate on control qubit 0 and target qubit 1, putting 
+    # the qubits in a Bell state
+    qc.cx(qr[0], qr[1])
+
+    # Add a Measure gate to see the state.
+    qc.measure(qr, cr)
+
+    # Compile and execute the Quantum Program.
+    result = Q_program.execute([""bell""], backend=backend, shots=1024, seed=1)
+
+    # Show the results.
+    print(result)
+    print(result.get_data(""bell""))
+
+except QISKitError as ex:
+    print('There was an error in the circuit!. Error = {}'.format(ex))
+except RegisterSizeError as ex:
+    print('Error in the number of registers!. Error = {}'.format(ex))
+
+ +

I set my APItoken in Qconfig.py as:

+ +
APItoken = 'XXX'
+config = {
+    'url': 'https://quantumexperience.ng.bluemix.net/api',
+}
+
+ +

However, the code fails with the following error:

+ +
The backends available for use are: ibmqx2,ibmqx5,ibmqx4,ibmqx_hpc_qasm_simulator,ibmqx_qasm_simulator,local_qasm_simulator,local_clifford_simulator,local_qiskit_simulator,local_unitary_simulator
+
+ERROR
+There was an error in the circuit!. Error = 'QISkit Time Out'
+
+ +

I've tested both ibmqx4 and ibmqx5, the same issue. I can see that they're active at /qx/devices.

+ +

What does it mean? Does it mean the IBM Q server is down, or the program is too big to execute? Or there is something else going on? In other words, what should I do to run a simple Hello Quantum program on IBM quantum server?

+",99,,26,,03-12-2019 09:24,03-12-2019 09:24,There was an error in the circuit!. Error = 'QISkit Time Out',,2,0,,,,CC BY-SA 3.0 +1250,2,,1247,3/21/2018 14:45,,3,,"

As per GitHub post, I had to increase the timeout for Q_program.execute(), for example:

+ +
result = Q_program.execute([""bell""], backend=backend, shots=1024, seed=1, timeout=600)
+
+ +

The reason is probably that the queues are busy, so we need to tell QISKit to wait up to 10 minutes. This instruction basically blocks the rest of the script and waits until the job is executed on the actual backend server and the results are returned.

+ +
+ +

To list details of the jobs which have been submitted, the following code can be used, as proposed by @ajavadia:

+ +
from qiskit import QuantumProgram
+import Qconfig
+
+qp = QuantumProgram()
+qp.set_api(Qconfig.APItoken, Qconfig.config['url'])
+
+# Download details of all the jobs you've ever submitted (the default limit is 50).
+my_jobs = qp.get_api().get_jobs(limit=999)
+
+# Filter down to get a list of completed jobs.
+done_jobs = [j for j in my_jobs if j['status']=='COMPLETED']
+
+# Print the results for all of your completed jobs.
+for j in done_jobs:
+    for q in j['qasms']:
+        print(q['qasm'])
+        print(q['result'])
+
+",99,,99,,3/21/2018 14:52,3/21/2018 14:52,,,,1,,,,CC BY-SA 3.0 +1251,2,,1247,3/21/2018 17:18,,6,,"

Your job timed out, probably because of the queue being too long for the job to complete in the time allowed by default for .execute().

+ +

But you already know that, of course, because you have already written an excellent answer of your own. Nevertheless, I have some insights to add from battle hardened experience.

+ +

I usually use this notebook to check on how busy a device is, and if it is active. Then I typically run jobs in the following way.

+ +
    noResults = True
+    while noResults:
+        try: # try to run, and wait if it fails
+            executedJob = engine.execute([""script""], backend=backend, shots=shots, max_credits = 5, wait=30, timeout=600)
+            resultsVeryRaw = executedJob.get_counts(""script"")
+            if ('status' not in resultsVeryRaw.keys()):
+                noResults = False
+            else:
+                print(resultsVeryRaw)
+                print(""This is not data, so we'll wait and try again"")
+                time.sleep(300)
+        except:
+            print(""Job failed. We'll wait and try again"")
+            time.sleep(600)
+
+ +

This uses try to manage any exceptions that might result. The program will just wait and try again rather than crash.

+ +

If we get to the point of successfully using .get_counts, the program then checks to see if it actually contains results. Or rather, it checks that the 'status' key is not present, for it is the harbinger of doom. If there are no proper results, the program waits and tries again.

+",409,,409,,3/21/2018 18:14,3/21/2018 18:14,,,,0,,,,CC BY-SA 3.0 +1252,2,,1236,3/21/2018 18:02,,5,,"

I'll try to give a slightly different perspective on the same things covered by the other answer.

+ +

A ""table of truth"" characterises a gate by telling you how each basis state evolves through the gate. +Note that this requires choosing an input and and output bases.

+ +

In the prototypical example of the CNOT gate, one chooses the computational basis both for input and output states, and it turns out that all the elements of the computational basis evolve into other elements of the computational basis. +In other words, an element of the computational basis, passing through a CNOT gate, ends up in a specific output basis state (as opposed to a superposition of basis states).

+ +

What might be confusing in this ""table of truth"" way to describe a gate, is that it can be used only with some choices of input and output bases. +For example, you cannot give a ""table of truth"" description of the CNOT gate using the $\{|L\rangle, |R\rangle\}$ basis, because, as you can check, $\text{CNOT}|L, L\rangle = \frac{1}{2}(|L, L\rangle + i |L, R\rangle + |R, L\rangle -i|R, R\rangle)$. +Indeed, the ""table of truth representation"" is only useful in some circumstances, for example when one wants to highlight that a gate might be a ""quantum generalisation"" of a specific classical gate, like it's the case for the CNOT gate.
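This expansion can be verified numerically (a quick numpy check of my own, using $|L\rangle=(|0\rangle+i|1\rangle)/\sqrt 2$ and $|R\rangle=(|0\rangle-i|1\rangle)/\sqrt 2$):

```python
import numpy as np

L = np.array([1, 1j]) / np.sqrt(2)
R = np.array([1, -1j]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

lhs = CNOT @ np.kron(L, L)   # CNOT acting on |L, L>
rhs = 0.5 * (np.kron(L, L) + 1j * np.kron(L, R)
             + np.kron(R, L) - 1j * np.kron(R, R))

print(np.allclose(lhs, rhs))  # True
```

Since the output is a superposition of all four basis states, no truth table exists for the CNOT in this basis.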

+",55,,55,,5/15/2019 16:45,5/15/2019 16:45,,,,0,,,,CC BY-SA 4.0 +1255,1,1260,,3/21/2018 23:09,,14,981,"

In the last years, there has been a spate of demonstrations of devices able to perform proof-of-principle, small-scale, non-fault-tolerant quantum computation (or Noisy Intermediate-Scale Quantum technologies, as they have been referred to).

+ +

With this I'm mostly referring to the superconducting and ion trap devices demonstrated by groups such as Google, Microsoft, Rigetti Computing, Blatt's group (and probably others that I'm forgetting now).

+ +

These devices, as well as the ones that will follow them, are often radically different from each other (in terms of architecture, gates that are easier/harder to implement, number of qubits, connectivity between the qubits, coherence and gate times, generation and readout capabilities, gate fidelities, to name the most obvious factors).

+ +

On the other hand, it is very common in press releases and non-technical news to just say ""the new X device has Y more qubits than the one before, therefore it is so much more powerful"".

+ +

Is the number of qubits really such an important factor to assess these devices? Or should we instead use different metrics? More generally, are there ""simple"" metrics that can be used to qualitatively, but meaningfully, compare different devices?

+",55,,26,,01-01-2019 10:16,01-01-2019 10:16,How should different quantum computing devices be compared?,,4,0,,,,CC BY-SA 3.0 +1256,2,,1255,3/21/2018 23:51,,13,,"

This is a greatly debated topic, and I'm not sure there is an answer to your question at the current time. However, the IEEE (Institute of Electrical and Electronics Engineers) has proposed PAR 7131 - Standard for Quantum Computing Performance Metrics & Performance Benchmarking:

+ +
+

The purpose of this project is to provide a standardized set of performance metrics and a standardized methodology of benchmarking the speed/performance of various types of quantum computing hardware and software, as well as comparing these performance metrics to identical metrics in classical computers, such that users of this document may determine the speed of a quantum computer for a specific application and easily, reliably compare computer performance.

+
+ +

Full disclosure: I am the current Chair of the Quantum Computing Standards Workgroup, and the reason this PAR was originally proposed was a lack of documentation/standards for testing the various quantum computing architectures against classical architectures and each other. The factors you cited above

+ +
+

number of qubits, connectivity between the qubits, coherence and gate times, generation and readout capabilities, gate fidelities

+
+ +

are all included as are several other factors. As importantly we've also been working on a way to standardize solvers; an often overlooked component in benchmarking. Non-optimized solvers all too often benefit a quantum machine when comparing quantum architectures to classical architectures. That is, the solver running on the quantum architecture is always optimized where the solver running on the classical architecture is not. This creates an inherent bias in favor of the quantum architecture.

+ +

If you're interested in participating in the development of this standard please let me know, the more people involved from both the quantum and classical sides of the argument the better imho. In the meantime the PAR will start work shortly, and will be coordinating their efforts with other standards organizations so that a single common standard with no bias can emerge to help address performance and benchmarking in the future.

+",274,,52,,3/22/2018 16:44,3/22/2018 16:44,,,,2,,,,CC BY-SA 3.0 +1257,2,,1255,3/22/2018 0:05,,7,,"

IBM is promoting their quantum volume (see also this) idea to quantify the power of a gate-model machine with a single number. Before IBM, there was an attempt from Rigetti to define a total quantum factor. It is unclear if these capture what we want in terms of the usefulness of devices for applications. Things such as quantum volume seem to be designed with supremacy experiments in mind. I am inclined to think that a metric should really be application specific. For sampling, this work suggested using the qBAS score.

+ +

For quantum annealing and similar analog approaches, it seems the community is agreeing on time-to-solution and variants; once again quite application specific.

+ +

The community is working on defining metrics, and I expect in 2018 to see actual runs of the same problem on different devices (empirical comparison).

+",410,,410,,3/23/2018 1:54,3/23/2018 1:54,,,,0,,,,CC BY-SA 3.0 +1258,2,,1255,3/22/2018 0:39,,8,,"

While number of qubits should be part of such a metric, as you say, it's far from everything.

+ +

However, comparing two completely different devices (e.g. superconducting and linear optics) is not the most straightforward task1.

+ +

Factors

+ +

Asking about coherence and gate times is equivalent to asking about fidelity and gate times1. Gates being harder or easier to implement just affects the fidelity again.

+ +

Initialisation rate, qubit/entanglement generation and readout capabilities (etc.) are going to affect overall fidelities as well as something akin to 'how frequently (on average) can we perform a computation (while getting a high-enough fidelity result, for some idea of 'high-enough fidelity')'.

+ +

In terms of architecture, the more macro-architecture (e.g. qRAM) will have its own standards and benchmarks, such as readout time, 'is readout on demand?' and of course, fidelity.

+ +

The more microarchitecture can be described under the same notions of connectivity.

+ +

Another, often ignored, metric is the power/resources used.

+ +

Overall, this may have narrowed this list down slightly, but it's still a list that involves a fair amount of comparison. Comparing different devices that use the same method isn't even that straightforward as (at current levels of technology), the processors with higher numbers of qubits often have lower fidelities2.

+ +

Quantum volume

+ +

Thankfully, a few people at IBM have taken the above (except for power used and the architecture) and defined something a bit more useful than 'number of qubits' and called it quantum volume. In this, for a random pair of $2$ qubits, they first define an effective error rate, $\epsilon_{eff}$, by considering what gate errors would be required in an otherwise perfect system to give the same error as the device. This may require the use of SWAP for low connectivity and Solovay-Kitaev-esque methods for low numbers of implementable gates. This is countered by using teleportation if the system has ""fast measurements and feedback"" and any other appropriate method.

+ +

For a total number of qubits $n$ and maximising over the number of 'active qubits', $n'$, the quantum volume is $$V_Q = \max_{n'\leq n}\min\left[n', \frac{1}{\epsilon_{eff}\left(n'\right)}\right]^2.$$

+ +
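As a worked example of this formula (the $\epsilon_{eff}$ values below are made up purely for illustration; real devices would supply measured effective error rates):

```python
def quantum_volume(eps_eff):
    # eps_eff: dict mapping n' (number of active qubits) -> effective
    # error rate, following the formula from the text.
    return max(min(n, 1.0 / eps) ** 2 for n, eps in eps_eff.items())

# Hypothetical effective error rates: larger sub-devices typically
# have worse effective two-qubit error rates.
eps_eff = {4: 0.05, 8: 0.08, 16: 0.2}

print(quantum_volume(eps_eff))
# n'=4:  min(4, 20)^2   = 16
# n'=8:  min(8, 12.5)^2 = 64   <- the maximum, so V_Q = 64
# n'=16: min(16, 5)^2   = 25
```

Note how the winning configuration uses only 8 of the 16 qubits: adding noisier qubits past that point would lower the volume, which is the point of the metric.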

Of course, we want to move beyond the point of science and into engineering. For that we need a standard3. This is currently being planned, as detailed in Whurley's answer.

+ +

However, as any comparison between such lists isn't going to be straightforward, there's always the more subjective way, such as Quantum Awesomeness, where the enjoyment of the game depends on how good the processor is4.

+ +
+ +

1 In this particular case, one example is that, as photons don't decohere, this has to be adapted to asking about the length of time or number of gates before the realised state is no longer a good approximation to the ideal state, which is just asking for the fidelity, or fidelity and gate times

+ +

2 I've tried this much at least and even this isn't exactly the most fun task

+ +

3 The first, unlike in XKCD 927

+ +

4 The author's opinion is that, while an awesome idea and helpful for getting an idea of how good a processor is, saying that one processor is better than another at such a game is a bit too subjective to tell if one processor is actually better than another

+",23,,23,,3/22/2018 0:44,3/22/2018 0:44,,,,0,,,,CC BY-SA 3.0 +1259,2,,1244,3/22/2018 8:02,,4,,"

It's not so much a matter of big data, but that of saving data. Quantum storage is still (much like the rest of the field) in its infancy.

+ +

(Take what I write with a grain of salt. It's likely to change rapidly.)

+ +

There are a few theories on how quantum computers might be able to hold ""memory"".

+ +

One of these is using nuclear spin. E.g. using long-lived nuclei in a quantum state. Converting an electron qubit (a qubit represented by an electron) to a nuclear qubit is possible.

+ +

Why nuclear qubit/spin?

+ +

A nucleus's coherence time - the time for which its phase is constant (when considering its wave function) - is longer than that of an electron. The linked article (same one as before) details how one can increase the coherence time of a nuclear spin (to some extent). The matter is being researched, but there is some indication that nuclear qubits can be a form of quantum storage.

+ +

What makes it difficult

+ +

The quantum state needs to remain, well, quantum. Additionally, if you entangle two of your ""storage"" qubits, you are likely to lose data.

+ +

Due to no-cloning, one cannot simply ""copy"" a qubit (whose state is unknown), which is one of the reasons quantum storage is difficult.

+ +

As for ""big"" data, it's just a matter of how much memory you have.

+",13,,26,,5/15/2019 14:47,5/15/2019 14:47,,,,0,,,,CC BY-SA 4.0 +1260,2,,1255,3/22/2018 10:09,,5,,"

I think the answer depends on why you are comparing them. Things like the quantum volume, are perhaps better suited to defining progress in the development of devices rather than fully informing end users.

+ +

For example, if you are buying a new laptop, you probably use more than just a single number when comparing them. The same should be true for quantum processors. There are many different aspects to a device: number of qubits, connectivity, all the different types of noise, time for measurement (and so whether feedback from measurement results is feasible), gate operation times, etc. All these need to be combined to tell you the one thing you actually need to know: can it run the program that you want to run? That is, I think, always going to be the most pertinent comparison. But it is also the trickiest.

+",409,,,,,3/22/2018 10:09,,,,0,,,,CC BY-SA 3.0 +1262,1,,,3/22/2018 11:54,,12,556,"

When expressing computations in terms of a quantum circuit, one makes use of gates, that is, (typically) unitary evolutions.

+ +

In some sense, these are rather mysterious objects, in that they perform ""magic"" discrete operations on the states. +They are essentially black boxes, whose inner workings are not often dealt with while studying quantum algorithms. +However, that is not how quantum mechanics works: states evolve in a continuous fashion following Schrödinger's equation.

+ +

In other words, when talking about quantum gates and operations, one neglects the dynamic (that is, the Hamiltonian) realising said evolution, which is how the gates are actually implemented in experimental architectures.

+ +

One method is to decompose the gate in terms of elementary (in a given experimental architecture) ones. Is this the only way? What about such ""elementary"" gates? How are the dynamics implementing those typically found?

+",55,,26,,12/23/2018 14:17,12/23/2018 14:17,"How are quantum gates realised, in terms of the dynamic?",,1,8,,,,CC BY-SA 3.0 +1263,1,1264,,3/22/2018 12:19,,4,758,"

The most down-voted question at the moment is about using entanglement for faster-than-light communication.

+ +

Much like how the word “laser” replaced “magic” in the vernacular not too long ago, what are some things that people outside the field think quantum computers, qubits, entanglement, tunneling, or superposition do that people might need to be educated about? Or what are some popular myths about these things should be dispelled?

+ +

One question I’m asked frequently is the reason for the difference in the number of qubits between the D Wave and IBM QX. D Wave has more so it must be better. So in this case, people need to be educated on the different implementations of quantum devices.

+",54,,26,,12/23/2018 12:10,12/23/2018 12:10,What are some popular myths or common misconceptions about quantum computing?,,1,0,,3/22/2018 15:04,,CC BY-SA 3.0 +1264,2,,1263,3/22/2018 12:50,,10,,"

1. Quantum computers are powerful because they act in many universes at once

+ +

This is an oversimplification based on the MWI at best. I don't think it has any pedagogical value. It needs to stop being repeated. Every journalist I talk to asks whether it is a good thing to write. I always say no.

+ +

2. Quantum computers/physics is weird and random

+ +

Anyone not swept up in the parallel universes seems to think that quantum computers are just some weird random thing. As before, there is a kernel of truth, but it is not a good explanation. Quantum algorithms are all about managing the certainty in the system, moving it through the state space to turn a certain input into an (ideally) certain output. The randomness is there, but I don't think it should be the focus of an understanding of quantum computing.

+ +

3. Quantum physics does not follow logic

+ +

The popular talk of how strange and weird quantum physics is makes it seem quite illogical. This is pretty bad from the perspective of quantum computing, since computers are built on logic. How can a programmer be expected to make the move to quantum if they think that quantum programming is illogical? If the quantum world seems like a mystical realm understood only through arcane knowledge, it will be hard to engage people outside of the field.

+ +

Though quantum physics doesn't follow the logic of local hidden variables and non-contextuality, it of course has its own logic. The fun of quantum computing is learning how to embrace and use effects that are not possible in classical variables.

+ +

Summary

+ +

I think we should be championing the message that quantum computers are full of logic and certainty, and that these are what we harness to do computation. Too much talk of random and strange effects should be avoided. As should the whole multiple universe thing.

+",409,,,,,3/22/2018 12:50,,,,0,,,,CC BY-SA 3.0 +1265,1,1273,,3/22/2018 14:26,,10,9340,"

Recently, I've read about 'quantum bogosort' on some wiki. The basic idea is that, like bogosort, we just shuffle our array, hope it gets sorted 'by accident', and retry on failure.

+ +
+

The difference is that now, we have 'magic quantum', so we can simply try all permutations at once in 'parallel universes' and 'destroy all bad universes' where the sort is bad.

+
+ +

Now, obviously, this doesn't work. Quantum is physics, not magic. The main problems are

+ +
    +
  1. 'Parallel universes' is merely an interpretation of quantum effects, not something that quantum computing exploits. I mean, we could use hard numbers; interpretation will only confuse matters here, I think.

  2. +
  3. 'Destroying all bad universes' is a bit like qubit error correction, a very hard problem in quantum computing.

  4. +
  5. Bogo sort remains stupid. If we can speed-up sorting via quantum, why not base it on a good sorting algorithm? (But we need randomness, my neighbour protests! Yes, but can't you think of a better classical algorithm that relies on randomness?)

  6. +
+ +

While this algorithm is mostly a joke, it could be an 'educational joke', like the 'classical' bogosort, as the difference between best-case, worst-case and average-case complexity for randomized algorithms is easy to see and very clear here. (For the record, the best case is $\Theta(n)$: we are very lucky, but still must check that our answer is correct by scanning the array. The expected time is simply awful (IIRC, proportional to the number of permutations, so $O(n!)$), and in the worst case we never finish.)
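For comparison, the classical version is easy to write down. A minimal sketch in plain Python (seeded for reproducibility) that makes the $O(n!)$ expected cost tangible:

```python
import random

def bogosort(a, rng):
    # Shuffle until sorted; the expected number of shuffles grows
    # like n!, matching the O(n!) average case mentioned above.
    shuffles = 0
    while any(a[i] > a[i + 1] for i in range(len(a) - 1)):
        rng.shuffle(a)
        shuffles += 1
    return a, shuffles

result, count = bogosort([3, 1, 2], random.Random(0))
print(result)   # [1, 2, 3]
print(count)    # how many shuffles this particular run took
```

Already-sorted input costs zero shuffles (the best case above), while the loop has no upper bound in the worst case.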

+ +

So, what can we learn from 'quantum bogosort'? In particular, are there real quantum algorithms that are similar or is this a theoretical or practical impossibility? +Furthermore, has there been research into 'quantum sorting algorithms'? If not, why?

+",253,,26,,5/15/2019 15:16,01-03-2021 14:51,What can we learn from 'quantum bogosort'?,,2,0,,,,CC BY-SA 4.0 +1268,2,,1208,3/22/2018 21:07,,14,,"

I will only answer the part of the question regarding how quantum mechanics can be useful to analyse classical data with machine-learning-like techniques. +There are also works related to ""quantum AI"", but that is a much more speculative (and less defined) kind of thing, which I do not want to go into.

+ +

So, can quantum computers be used to speed-up data analysis via machine learning algorithms? Quoting Scott Aaronson's Read the fine print paper, that’s a simple question with a complicated answer.

+ +

It should first of all be noted that trying to answer this kind of question is a big part of what the research area of Quantum Machine Learning is about (the terms quantum-enhanced machine learning and quantum assisted machine learning are also often used to refer to this merger of QM and ML, to distinguish it from the use of ML to help solve problems in QM, which is an entirely different subject). +As you can see from the Wikipedia page, there are many things going on in the field, and it would be pointless to try and give a comprehensive list of relevant papers here.

+ +

Quoting from Schuld et al. 2014, the idea behind Quantum-Assisted Machine Learning (QAML) is the following:

+ +
+

Since the volume of globally stored data is growing by around 20% + every year (currently ranging in the order of several hundred exabytes + [1]), the pressure to find innovative approaches to machine learning + is rising. A promising idea that is currently investigated by academia + as well as in the research labs of leading IT companies exploits the + potential of quantum computing in order to optimise classical machine + learning algorithms.

+
+ +

Going back to your question, a first seemingly positive answer was provided by Harrow et al. 2009, which gave an efficient quantum algorithm to invert linear systems of equations (under a number of conditions on the system), working when the data is stored in quantum states. This being a fundamental linear algebra operation, the discovery led to many proposed quantum algorithms to solve machine learning problems by some of the same authors (1307.0401, 1307.0411, 1307.0471), as well as by many others. +There are now many reviews that you can have a look at to get more comprehensive lists of references, like 1409.3097, 1512.02900, 1611.09347, 1707.08561, 1708.09757, Peter Wittek's book, and likely more.

+ +

However, it is far from established how this would work in practice. Some of the reasons are well explained in Aaronson's paper: Read the fine print (see also published version: nphys3272). +Very roughly speaking, the problem is that quantum algorithms generally handle ""data"" as stored in quantum states, often encoding vectors into the amplitudes of the state. +This is, for example, the case for the QFT, and it is still the case for HHL09 and derived works.

+ +

The big problem (or one of the big problems) with this is that it is far from obvious how you can efficiently load the ""big"" classical data into this quantum state for processing. The typical answer to this is ""we just have to use a qRAM"", but that also comes with many caveats, as this process needs to be very fast to maintain the exponential speed-up that can be achieved once the data is in quantum form. +Again, further details can be found in Aaronson's paper.

+",55,,55,,4/27/2020 11:57,4/27/2020 11:57,,,,0,,,,CC BY-SA 4.0 +1269,1,1279,,3/22/2018 21:35,,9,810,"

So, @AndrewO mentioned recently that he has had 'encounters' with people wondering why D-Wave has a lot more qubits than IBM. Of course, this comparison is faulty: while IBM's and D-Wave's machines may both exploit quantum effects to a certain degree, IBM's machine matches what the TCS people call a 'quantum computer' much more closely than D-Wave's alleged quantum annealer does.

+ +

How do you explain to a novice why IBM is still reaching important milestones, even though the D-Wave has a lot more 'qubits'? I understand that an easy answer is, 'well, you're comparing apples and pears', but that is merely relying on your possible authority and simply doesn't explain anything!

+ +

How can you explain that those devices are different, and how can you dispel the myth that the number of qubits is the only metric by which to judge quantum devices? (Preferably to a layman, but assuming basic (undergrad?) physics knowledge is OK, if needed.)

+",253,,26,,1/31/2019 17:53,02-02-2019 21:05,How to explain in layman’s terms the significance of the difference of qubits of the D-Wave and IBM QX?,,2,0,,,,CC BY-SA 4.0 +1270,2,,12,3/23/2018 4:10,,3,,"

Yes. If you build it yourself, find a 3rd party computer with the same specs as the BullSequana M9600 series, or come up with €100K+ and buy a system from Atos.

+ +

Notice the similarity between the BullSequana M9600 series and the Atos QLM. +

+ +

Same box (and probably internal components) with different software (but you wanted to use your own, Q#). Atos claims: ""The highest-performing quantum simulator in the world"". I'm not sure about that but the specs for the 30 qubit version are reachable, just two Intel CPUs and 1TB of memory.

+ +

Atos QLM .PDF Brochure.

+ +
+

Is there any way to emulate a quantum computer in my normal computer, so that I will be able to test and try quantum programming languages (such as Q#)?

+
+ +

If you use only 256GB of memory and 1-24TB of Swap Drive it will be slow but it will work.

+ +
+

I mean something that I can really test my hypothesis on and get the most accurate results.

+
+ +

Quote from the Brochure:

+ +

""The Atos Quantum Learning Machine computes the exact execution of a quantum program, with double digit precision. It simulates the laws of physics, which are at the very heart of quantum computing. This is very different to existing quantum processors, which suffer from quantum noise, quantum decoherence, and manufacturing biases, as well as performance bottlenecks. Simulation on the Atos Quantum Learning Machine enables developers to focus on their applications and algorithms, without having to wait for quantum machines to be available"".

+ +

They claim high accuracy, since it's a simulator it's not subject to noise - nor will it be as fast, or as expensive. In theory you could add some memory, drives, and software to your computer ...

+",278,,,,,3/23/2018 4:10,,,,0,,,,CC BY-SA 3.0 +1271,1,1286,,3/23/2018 4:54,,6,279,"

We've seen people use computers to design computers, AI to write computer programs, robots teach themselves, and even robots build themselves. I understand that conventional computers can be used to emulate quantum computers.

+ +

+ +

Robotic arm builds more robotic arms.

+ +
+ +

My question is: Will quantum computers be inherently able to design better quantum computers (which presumably robots could then build) or is that task better suited to Deep Learning / AI and conventional computers (with human intervention)?

+ +

Perhaps another way to ask the question, though I'm fairly certain that this won't make it clearer: ""Do conventional computers limit us to quantum Darwinism while quantum computers might enable universal Darwinism with respect to the evolution of quantum computer design?"".

+ +

Oversimplified (not the question): Could they understand (design) themselves better than classical computers, Deep Learning / AI, or mankind - IE: they do not suffer from what Zurek calls “pointer states”.

+",278,,278,,3/25/2018 15:00,3/25/2018 15:00,Can quantum computers design quantum computers autonomously better than other methods?,,2,0,,,,CC BY-SA 3.0 +1272,2,,1262,3/23/2018 6:23,,7,,"

Generally speaking, a realization of a quantum gate involves coherent manipulation of a two-level system (but this is nothing new to you, maybe). For example, you can use two long-lived electronic states in a trapped atom (neutral or ionized in vacuo) and use an applied electric field to implement single-qubit operations (see trapped ions or optical lattices, for example).

+ +

Alternatively, there are solid-state solutions like superconducting qubits or silicon-defect qubits which are addressed by radio-frequency electronics. You can use microwave-addressed nuclear spin sublevels, or nitrogen-vacancy centres in diamond. The commonality is that the manipulation and coupling of the qubits is via applied light fields, and there are a range of methods you can use to tune the level spacing in these systems to enable single-spin addressing or manipulate lifetimes.

+ +

The translation from the implementation to a Hamiltonian is obviously dependent on your choice of system, but it all boils back down to Pauli matrices in the end. The light field provides off-diagonal elements in your single-qubit operations, whereas two-qubit operations are trickier and the techniques are very implementation-dependent.

+",484,,,,,3/23/2018 6:23,,,,0,,,,CC BY-SA 3.0 +1273,2,,1265,3/23/2018 9:05,,8,,"

DISCLAIMER: The quantum-bogosort is a joke-algorithm

+

Let me just state the algorithm in brief:

+
    +
  • Step 1: Using a quantum randomization algorithm, randomize the list/array, such that there is no way of knowing what order the list is in until it is observed. This will divide the universe into $O(N!)$ universes; however, the division has no cost, as it happens constantly anyway.

    +
  • +
  • Step 2: Check if the list is sorted. If not, destroy the universe (neglecting the actual physical possibility).

    +
  • +
+

Now, all remaining universes contain lists/arrays which are sorted.

+

Worst Case Complexity: $O(N)$

+

(we only consider those universes which can observe that the list is sorted)

+

Average/Best Case Complexity: $O(1)$

+

One of the major problems with this algorithm is the huge magnification of errors, as Nick Johnson mentions here:

+

This algorithm has a much bigger problem, however. Assume that one in 10 billion times you will mistakenly conclude a list is sorted when it's not. There are 20! ways to sort a 20 element list. After the sort, the remaining universes will be the one in which the list was sorted correctly, and the 2.4 million universes in which the algorithm mistakenly concluded the list was sorted correctly. So what you have here is an algorithm for massively magnifying the error rate of a piece of machinery.

+
+
+

'Parallel universes' is a highly simplified interpretation of quantum +effects, not something that Quantum Computing exploits.

+
+

Not really sure what you mean by "highly simplified interpretation of quantum effects". The sources (this and this) I found on the internet regarding the quantum bogosort do not explicitly mention that they're using the alternative interpretation of QM i.e. the Everett's interpretation which you might be thinking about. In fact I'm not even sure how to glue together the Everett's interpretation and quantum-bogosort (using post-selection, as some people commented). Anyhow, just as a note: in mainstream cosmology, it is widely believed that more than one universe exists and there are even classifications for them, called the Max Tegmark's four levels and Brian Greene's nine types and Cyclic theories. Read the Wiki article on Multiverse for more details.

+
+

'Destroying all bad universes' is a bit like qubit error correction, a +very hard problem in Quantum Computing.

+
+

Sure, it is in fact much harder, and we don't expect to destroy universes literally. The quantum bogosort is just a theoretical concept, with no practical applications (which I know of).

+
+

Bogo sort remains stupid. If we can speed-up sorting via quantum, why +not base it on a good sorting algorithm? (But we need randomness, my +neighbour protests! Yes, but can't you think of a better classical +algorithm that relies on randomness?)

+
+

Yes, it does remain stupid. It does seem to have started out as an "educational joke", as you said. I did try to find the origin of this sort, or relevant academic papers, but couldn't find any. However, even the classical bogosort is stupid in the sense that it is widely held as one of the most inefficient sorting algorithms. Still, it has been researched, purely out of educational interest.

+
+

In particular, are there real quantum algorithms that are similar or +is this a theoretical or practical impossibility?

+
+

None that I know of. Such algorithms are indeed theoretical possibilities, but definitely not practical (at least, not yet).

+
+

Furthermore, has there been research into 'quantum sorting +algorithms'? If not, why?

+
+

There indeed has been research into "quantum sorting". But the problem with such sorting algorithms is that any comparison-based quantum sorting algorithm would take at least $\Omega (N\log N)$ steps, which is already achievable by classical algorithms. Thus, for this task, quantum computers are no better than classical ones. However, in space-bounded sorts, quantum algorithms outperform their classical counterparts. This and this are two relevant papers.

+",26,,-1,,6/18/2020 8:31,3/23/2018 19:18,,,,2,,,,CC BY-SA 3.0 +1274,2,,1271,3/23/2018 9:26,,4,,"

We don't yet know if quantum computers are actually better than classical computers, as @heather mentions here. As for now there are just some theoretical algorithms which we know of, specifically for quantum-computers, which have much better time complexities than equivalent classical algorithms. For example - prime factorization and discrete logarithms.

+ +

Wiki also says:

+ +
+

Besides factorization and discrete logarithms, quantum algorithms + offering a more than polynomial speedup over the best known classical + algorithm have been found for several problems, including the + simulation of quantum physical processes from chemistry and solid + state physics, the approximation of Jones polynomials, and solving + Pell's equation. No mathematical proof has been found that shows that + an equally fast classical algorithm cannot be discovered, although + this is considered unlikely. For some problems, quantum computers + offer a polynomial speedup. The most well-known example of this is + quantum database search, which can be solved by Grover's algorithm + using quadratically fewer queries to the database than are required by + classical algorithms. In this case the advantage is provable. Several + other examples of provable quantum speedups for query problems have + subsequently been discovered, such as for finding collisions in + two-to-one functions and evaluating NAND trees.

+
+ +

Whether you can use these speedups to design better quantum computers depends. I can imagine that you could of course simulate a quantum computer using another quantum computer, which could help in making new designs, or rather testing-before-building. But I don't think polynomial speedups, faster prime factorizations, etc. will directly help in designing a quantum computer, unless you actually make use of them somehow.

+ +
+ +

P.S: This is a very short and perhaps incomplete answer, I know. I just wanted to give the OP a basic idea. I'd request others to write alternate answers to this question.

+",26,,,,,3/23/2018 9:26,,,,0,,,,CC BY-SA 3.0 +1276,1,,,3/23/2018 10:28,,19,4507,"

As a result of an excellent answer to my question on quantum bogosort, I was wondering what is the current state of the art in quantum algorithms for sorting.

+ +

To be precise, sorting is here defined as the following problem:

+ +
+

Given an array $A$ of integers (feel free to choose your representation of $A$, but be clear about this, I think this already is non-trivial!) of size $n$, we wish to transform this array into the array $A_s$ such that the arrays are 'reshufflings' of each other and $A_s$ is sorted, i.e. $A_s[i]\leq A_s[j]$ for all $i\leq j$.

+
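In code, the specification above amounts to a checker. A sketch in plain Python, using a list of ints as the (admittedly naive) representation:

```python
from collections import Counter

def is_valid_sort(A, A_s):
    # A_s must be a reshuffling (permutation) of A...
    same_multiset = Counter(A) == Counter(A_s)
    # ...and satisfy A_s[i] <= A_s[j] for all i <= j:
    ordered = all(A_s[i] <= A_s[i + 1] for i in range(len(A_s) - 1))
    return same_multiset and ordered

print(is_valid_sort([3, 1, 2], [1, 2, 3]))  # True
print(is_valid_sort([3, 1, 2], [1, 2, 2]))  # False: not a reshuffling
```

Using a multiset comparison rather than element-wise equality matters when the input contains duplicates.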
+ +

What is known about this? Are there complexity bounds or conjectures for certain models? Are there practical algorithms? Can we beat classical sorting (even the bucket or radix sort at their own game? (i.e. in the cases where they work well?))

+",253,,26,,5/15/2019 15:14,1/14/2021 11:49,What is the current state of the art in quantum sorting algorithms?,,3,0,,,,CC BY-SA 4.0 +1279,2,,1269,3/23/2018 10:59,,8,,"

In the classical case, there is a pretty big difference between digital computers and analogue ones. The methodology and hardware are very much distinct (in all cases I know of, at least).

+ +

The divide is still there in the quantum case, but it doesn't run quite as deep. The hardware can be similar, but the requirements on how it behaves and how to manipulate it are different. This means that circuit model quantum computers and quantum annealers can both measure device size using the same metric, the number of qubits, but it is measuring very different and completely non-equivalent things.

+ +

Basically, it is like comparing the length of a slide rule to that of a smartphone, and using that to make statements about their computational power.

+",409,,,,,3/23/2018 10:59,,,,0,,,,CC BY-SA 3.0 +1280,1,1281,,3/23/2018 11:12,,33,11118,"

Quantum gates seem to be like black boxes. Although we know what kind of operation they will perform, we don't know if it's actually possible to implement in reality (or, do we?). In classical computers, we use AND, NOT, OR, XOR, NAND, NOR, etc which are mostly implemented using semiconductor devices like diodes and transistors. Are there similar experimental implementations of quantum gates? Is there any ""universal gate"" in quantum computing (like the NAND gate is universal in classical computing)?

+",26,,26,,12/23/2018 14:16,2/27/2021 8:23,How are quantum gates implemented in reality?,,1,0,,,,CC BY-SA 3.0 +1281,2,,1280,3/23/2018 11:12,,27,,"

One can replicate any quantum gate or at least get arbitrarily close using a sufficient number of CNOT, H, X, Z and $\pi/8$ rotation gates. That is because they form a universal set of quantum gates (refer to: M. Nielsen and I. Chuang, Quantum Computation and Quantum Information, Cambridge University Press, 2016, page 189). Be careful here. Clearly, we cannot implement any arbitrary quantum gate $U$ with infinite precision. Instead, given $\epsilon>0$, we implement $U_{\epsilon}$, which is $\epsilon$-close to $U$ (refer to: Quantum Mechanics and Quantum Computation MOOC offered by UC Berkeley on edX). This imperfection of quantum gates is one of the main reasons we need error correction codes.
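To make the set concrete, here is the gate set written out as matrices (a numpy sketch; the checks only confirm unitarity and a couple of standard identities, not the full universality proof, which is in Nielsen & Chuang):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])
T = np.array([[1, 0], [0, np.exp(1j * np.pi / 4)]])  # the pi/8 gate
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

# Every gate in the set is unitary:
for U in (H, X, Z, T, CNOT):
    assert np.allclose(U.conj().T @ U, np.eye(U.shape[0]))

# Standard identities: T^2 = S (the phase gate) and T^4 = Z.
assert np.allclose(T @ T, np.array([[1, 0], [0, 1j]]))
assert np.allclose(T @ T @ T @ T, Z)
```

The density argument for universality hinges on combinations of H and T generating rotations by irrational multiples of $\pi$, which the identities above do not prove, only illustrate the raw ingredients of.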

+ +

There have been attempts to implement those basic gates. I'm adding some of the recent research works related to these attempts:

+ + + +

As Wikipedia mentions, another set of universal quantum gates consists of the Ising gate and the phase-shift gate. These are the set of gates natively available in some trapped-ion quantum computers (Demonstration of a small programmable quantum computer with atomic qubits).

+",26,,26,,12/23/2018 14:16,12/23/2018 14:16,,,,1,,,,CC BY-SA 4.0 +1284,2,,97,3/23/2018 19:53,,2,,"
+

Why do you need error correction? My understanding is that error correction removes errors from noise, but noise should average itself out.

+
+

If you built a house or a road and noise was a variance, a difference, with respect to straightness, to direction, it's not solely / simply: "How would it look", but "How would it be?" - a superposition of both efficiency and correctness.

+

If two people calculated the circumference of a golf ball given a diameter each would get a similar answer, subject to the accuracy of their calculations; if each used several places of decimal it would be 'good enough'.

+

If two people were provided with identical equipment and ingredients, and given the same recipe for a cake, should we expect identical results?

+
+

To make clear what I'm asking, why can't you, instead of involving error correction, simply run the operations, say, a hundred times, and pick the average/most common answer?

+
+

You're spoiling the weighing, tapping your finger on the scale.

+

If you're at a loud concert and try to communicate with the person next to you, do they understand you the first time, every time?

+

If you tell a story or spread a rumor, (and some people communicate verbatim, some add their own spin, and others forget parts), when it gets back to you does it average itself out and become essentially (but not identically) the same thing you said? - unlikely.

+

It's like crinkling up a piece of paper and then flattening it out.

+

All those analogies were intended to offer simplicity over exactness, you can reread them a few times, average it out, and have the exact answer, or not. ;)

+
+

A more technical explanation of why quantum error correction is difficult but necessary is given on Wikipedia's webpage, "Quantum Error Correction":

+
+

"Quantum error correction (QEC) is used in quantum computing to protect quantum information from errors due to decoherence and other quantum noise. Quantum error correction is essential if one is to achieve fault-tolerant quantum computation that can deal not only with noise on stored quantum information, but also with faulty quantum gates, faulty quantum preparation, and faulty measurements.".

+

"Classical error correction employs redundancy. " ...

+

"Copying quantum information is not possible due to the no-cloning theorem. This theorem seems to present an obstacle to formulating a theory of quantum error correction. But it is possible to spread the information of one qubit onto a highly entangled state of several (physical) qubits. Peter Shor first discovered this method of formulating a quantum error correcting code by storing the information of one qubit onto a highly entangled state of nine qubits. A quantum error correcting code protects quantum information against errors of a limited form.".

+
+",278,,-1,,6/18/2020 8:31,01-07-2019 22:35,,,,0,,,,CC BY-SA 4.0 +1285,1,1287,,3/23/2018 19:56,,18,1203,"

In this answer I mentioned that the CNOT, H, X, Z and $\pi/8$ gates form a universal set of gates which, given a sufficient number of gates, can get arbitrarily close to replicating any unitary quantum gate (I came to know about this fact from Professor Umesh Vazirani's EdX lectures). But is there any mathematical justification for this? There should be! I tried searching for relevant papers but couldn't find much.

+",26,,2293,,05-08-2018 19:54,12/17/2021 6:16,"What is the mathematical justification for the ""universality"" of the universal set of quantum gates (CNOT, H, Z, X and π/8)?",,2,0,,,,CC BY-SA 4.0 +1286,2,,1271,3/23/2018 20:05,,7,,"

Sort of, quite possibly, if by degrees

+ +

This is a speculative, but plausible, answer

+ +

First of all, how do qubits interact and states evolve with time?

+ +

The description of how individual qubits evolve (i.e. a single qubit gate operation) is given by some Hamiltonian1. Multiple, non-interacting qubits (that are exactly the same) therefore evolve using multiples of that same Hamiltonian. However, as soon as you include some form of interaction, simulating the exact evolution of a large number of interacting qubits quickly becomes intractable (which is exactly why quantum computers should be useful, in theory).
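As a toy illustration of that continuous evolution (a sketch, not tied to any particular hardware): for a single qubit driven so that $H = (\omega/2)\sigma_x$, Schrödinger evolution gives $U(t) = e^{-iHt}$, which at $t = \pi/\omega$ is an X gate up to global phase.

```python
import numpy as np

sigma_x = np.array([[0.0, 1.0], [1.0, 0.0]])

def U(t, omega):
    # For H = (omega/2) * sigma_x the matrix exponential has a closed form:
    # exp(-i H t) = cos(omega t / 2) I - i sin(omega t / 2) sigma_x
    return np.cos(omega * t / 2) * np.eye(2) - 1j * np.sin(omega * t / 2) * sigma_x

omega = 1.0                 # drive strength, arbitrary units
t_pi = np.pi / omega        # duration of a 'pi pulse'
psi = U(t_pi, omega) @ np.array([1.0, 0.0])  # start in |0>
print(abs(psi[1]) ** 2)     # 1.0: all population transferred to |1>
```

Gate operations in the circuit model are just named snapshots of such evolutions at particular times; varying $t$ sweeps continuously between them.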

+ +

Now, how are quantum processors designed?

+ +

Designing the microarchitecture of a processor (such as the connectivity, layout of the chip etc.) is one thing, but engineering sizes of qubits, waveguides etc. requires going on a computer and running detailed simulations a lot of the time. If there was a way to simulate large Hamiltonians for a long time, it's reasonable to assume that this would help improve knowledge of how the qubits interact with each other and the environment, such as in a more detailed version of this and extensions thereof, as well as generally being able to look at how multiple qubits interact at once. This would in turn allow for improved design of the details of chip, which would lead to effects such as reduction of decoherence.

+ +

Finally, what are quantum computers good at?

+ +

As mentioned above, quantum computers are potentially useful because of what makes them hard to simulate - a quantum computer is a system that quickly becomes intractable to simulate with increasing numbers of qubits. However, something that quantum computers offer a speedup of is... Hamiltonian simulation.

+ +

In other words, by iteratively running a quantum Hamiltonian simulation, using a classical computer to (tell the quantum computer to) vary certain parameters, it's not unreasonable to assume that a quantum computer could help for optimising certain aspects of the chip to e.g. reduce decoherence times, improve fidelities etc. In turn offering better simulations and allowing for yet more qubits, which potentially offers better simulations and the continual improvement would (hopefully) begin.
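To make the Hamiltonian-simulation step concrete, here is a minimal sketch of the first-order Lie-Trotter formula, which is the basic primitive such simulations build on. The two Hamiltonian terms below are chosen purely for illustration (any non-commuting pair works); this is not a model of any particular chip.

```python
import numpy as np

# Two non-commuting Hamiltonian terms on two qubits (illustrative choice)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)
A = np.kron(X, I2)
B = np.kron(Z, Z)

def expm_hermitian(H, t):
    # exp(-i H t) for Hermitian H, via eigendecomposition
    evals, evecs = np.linalg.eigh(H)
    return evecs @ np.diag(np.exp(-1j * evals * t)) @ evecs.conj().T

def trotter(t, n):
    # first-order Lie-Trotter approximation of exp(-i (A+B) t)
    step = expm_hermitian(A, t / n) @ expm_hermitian(B, t / n)
    return np.linalg.matrix_power(step, n)

exact = expm_hermitian(A + B, 1.0)

def err(n):
    return np.linalg.norm(exact - trotter(1.0, n))

# the error shrinks roughly as 1/n for the first-order formula
print(err(10), err(100))
```

Increasing the number of Trotter steps trades circuit depth for accuracy, which is exactly the kind of knob a classical outer loop could tune.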

+ +

As for whether it does, or if classically simulating a few qubits and extrapolating from this would give a just-as-good design is something that only time will tell.

+ +
+ +

1 Having said that, in linear optical quantum computers (at least), unitaries are implemented directly using physical components such as beam splitters and phase shifters directly describing the unitary, as opposed to, say, superconducting, where applying microwaves is described in terms of a Hamiltonian, which is then used to give a unitary. OK, all this is maybe simplifying a bit, but that gets the gist across. What does matter is that, in linear optical quantum computers, generating the photons on say, a ring resonator, is described by a Hamiltonian

+",23,,,,,3/23/2018 20:05,,,,0,,,,CC BY-SA 3.0 +1287,2,,1285,3/23/2018 22:45,,14,,"

The answer you mention references Michael Nielsen and Isaac Chuang's book, Quantum Computation and Quantum Information (Cambridge University Press), which does contain a proof of the universality of these gates. (In my 2000 edition, this can be found on p. 194.) The key insight is that the $T$ gate (or $\pi/8$ gate), together with the $H$ gate, generates two different rotations on the Bloch sphere with angles that are irrational multiples of $2\pi$. This allows combinations of $T$ and $H$ gates to densely fill the surface of the Bloch sphere and thereby approximate any one-qubit unitary operator.

+ +

That this can be done efficiently is shown by the Solovay-Kitaev theorem. Here, ""efficiently"" means polynomial in $\log(1/\epsilon)$, where $\epsilon$ is the desired accuracy. This is also proven in Nielsen and Chuang's book (Appendix 3 in the 2000 edition). An explicit construction can be found in https://arxiv.org/abs/quant-ph/0505030.
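The key angle relation can be checked numerically. Up to global phase, $T$ followed by $HTH$ equals the rotation $R_z(\pi/4)R_x(\pi/4)$, whose rotation angle $\theta$ satisfies $\cos(\theta/2)=\cos^2(\pi/8)$ (so $\theta$ is an irrational multiple of $2\pi$). The sketch below verifies the relation; it is a numerical check, not a proof of irrationality.

```python
import numpy as np

# rotations by pi/4 about z and x; each equals T up to global phase
# and a basis change by H
half = np.pi / 8
Rz = np.array([[np.exp(-1j * half), 0],
               [0, np.exp(1j * half)]])
Rx = np.array([[np.cos(half), -1j * np.sin(half)],
               [-1j * np.sin(half), np.cos(half)]])

U = Rz @ Rx  # one combined step, up to global phase

# for an SU(2) rotation, tr(U)/2 = cos(theta/2)
cos_half_angle = np.trace(U).real / 2
angle = 2 * np.arccos(cos_half_angle)

print(cos_half_angle, np.cos(np.pi / 8) ** 2, angle / np.pi)
```

Since `angle / np.pi` is irrational, repeated applications of this rotation fill the corresponding circle on the Bloch sphere densely, which is the mechanism behind the universality argument.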

+ +

Combining these single-qubit gates with CNOT gates allows one to approximate arbitrary multi-qubit unitaries, as shown by Barenco et al. in Phys. Rev. A 52 3457 (1995). (A preprint of this paper can be found at https://arxiv.org/abs/quant-ph/9503016.) This is also discussed in Nielsen and Chuang (p. 191 in the 2000 edition).

+",356,,124,,3/25/2018 20:16,3/25/2018 20:16,,,,3,,,,CC BY-SA 3.0 +1289,1,1324,,3/24/2018 5:37,,38,8209,"

On reading this Reddit thread I realized that even after a couple months of learning about quantum computing I've absolutely no clue about how a quantum computer actually works.

+ +

To make the question more precise, let's say we have a superconducting qubit based 5-qubit quantum computer (like the 5-qubit IBM Quantum Computer). I type in $2+3$ using a keyboard onto a monitor (say in a basic calculator app the quantum computer might have). After that it should return me $5$. But what is going on at the hardware level? Are some sort of electrical signals corresponding to the inputs $2$, $3$ and $+$ going to the processing unit of the computer? Does that somehow ""initialize"" the Cooper pair electrons? What happens to the Cooper pair electron qubits after that (guess they'd be worked on by some quantum gates, which are in turn again black boxes)? How does it finally return me the output $5$?

+ +

I am surprised as to how little I could come up with about the basic working of a quantum computer by searching on the net.

+",26,,26,,12/23/2018 12:10,03-01-2022 02:34,How does a quantum computer do basic math at the hardware level?,,2,0,,,,CC BY-SA 3.0 +1291,2,,1228,3/24/2018 18:22,,1,,"

If by ""giant quantum computer"" you mean something that can be simulated very efficiently by a tensor product of sufficiently many qubits, then I think the answer is no.

+ +

When we work with finite dimensional systems, it's very clear how to account for the joint description of two subsystems: we simply take the tensor product.

+ +

If you want an infinite-dimensional space with continuously many degrees of freedom, then we have to do something more subtle. We'll call this the ""commuting operator"" model. To every open subset of space we attach an algebra of observables corresponding to the local measurements on that part of space. Instead of requiring that the Hilbert space of the universe is the tensor product of all the local Hilbert spaces, we just ask that if two regions of space don't intersect, then their algebras of observables commute.

+ +

It's known that the commuting operator model gives rise to correlations between spacelike separated parties that do not arise in the tensor product model. See this paper by William Slofstra on the arXiv.

+ +

We don't know whether there is an experiment that could tell us that our universe is capable of generating correlations in the commuting operator model. See this blog post by Scott Aaronson, which describes Slofstra's result and some of its implications.

+",483,,483,,3/24/2018 23:30,3/24/2018 23:30,,,,4,,,,CC BY-SA 3.0 +1292,1,1316,,3/24/2018 20:08,,12,572,"

In my previous question I asked who invented a quantum computer using qubits.

+ +

As a follow-up to this question I want to ask who built the first quantum computer using at least two qubits.

+ +

During my research I have discovered that in 1998, Jonathan A. Jones and Michele Mosca developed a quantum computer using two qubits specifically to solve Deutsch's problem. Have there been other working quantum computers before to solve other problems or general attempts not specifically bound to one problem?

+",11,,26,,3/26/2018 16:00,6/27/2022 1:14,Who built the first quantum computer using at least two qubits?,,3,2,,,,CC BY-SA 3.0 +1301,2,,20,3/24/2018 23:16,,4,,"

It seems that most ways of formalizing your question would lead to a problem that's QMA-hard, and therefore we shouldn't hope for an efficient quantum algorithm to solve it. (The relationship between BQP and QMA is similar to the relationship between P and NP: it would be very surprising if there were efficient quantum algorithms for QMA-complete problems.)

+ +

The canonical QMA-complete problem is the ""local hamiltonian"" problem. Roughly speaking, the input to the problem is a description of the hamiltonian for a physical system acting on qubits, and the problem is to decide whether the ground state of the system has small energy. If your system of specifying constraints includes local hamiltonians as a special case, then your problem is intractable.

+",483,,483,,3/24/2018 23:59,3/24/2018 23:59,,,,0,,,,CC BY-SA 3.0 +1302,1,1306,,3/24/2018 23:57,,15,1829,"

In classical binary computers, real numbers are often represented using the IEEE 754 standard. With quantum computers you can of course do this as well - and for measurements this (or a similar standard) will probably be necessary since the result of any measurement is binary. But could real numbers be modeled more easily and / or more precisely within the qubits using different methods before the measurement happens? If so, are there any use cases where this is actually useful, seeing that (I'm assuming) any additional precision will be lost when measurements are performed?

+ +

To be clear, I'm not (necessarily) looking for existing standards, just for ideas or suggestions on how to represent those numbers. If there's any research into it, that would be useful too of course.

+",138,,26,,12/23/2018 12:10,12/23/2018 12:10,Representation of real numbers in quantum computers,,2,3,,,,CC BY-SA 3.0 +1303,1,1304,,3/25/2018 1:02,,7,110,"

Quantum Computing (QC) pioneer Vazirani has graciously long provided some nice videos on an intro to QC. E.g. in ""2 qubit gates + tensor product"" (2014) he introduces the tensor product w.r.t. QC gates. I was generally able to follow this video but think there was one subtle point glossed over and I would like an expert to expand on it. We were discussing the tensor product in a QC meetup and the question came up of whether it's commutative. As Wikipedia states, the tensor product is not commutative. But when combining two qubits in a gate operation, Vazirani does not mention whether order makes any difference. In his diagrams, there is total visual symmetry across the 2 qubits. Or maybe the tensor product is commutative over unitary matrices, which QC is limited to? Can someone sort out/unpack some of these ideas?

+",377,,2224,,06-11-2019 01:44,06-11-2019 01:44,Symmetry of tensor product w.r.t. Vazirani 2-qubit video,,2,0,,,,CC BY-SA 4.0 +1304,2,,1303,3/25/2018 3:44,,6,,"

The tensor product is not commutative, i.e. in the computational basis, $$X\otimes Z=\left( +\begin{array}{cccc} + 0 & 0 & 1 & 0 \\ + 0 & 0 & 0 & -1 \\ + 1 & 0 & 0 & 0 \\ + 0 & -1 & 0 & 0 \\ +\end{array} +\right)$$ while $$Z\otimes X=\left( +\begin{array}{cccc} + 0 & 1 & 0 & 0 \\ + 1 & 0 & 0 & 0 \\ + 0 & 0 & 0 & -1 \\ + 0 & 0 & -1 & 0 \\ +\end{array} +\right).$$ +The two-qubit CNOT gate depicted in the video is not symmetric w.r.t. the two qubits (one is control, the other is target). However, the ordering of the subsystems (e.g. the qubits) is arbitrary and you can understand $X_A \otimes Z_B = Z_B \otimes X_A$, where the indices denote the subsystem. It happens quite frequently that people change the order of the subsystems to whatever order is convenient at that moment.
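The two matrices above can be checked in a few lines of numpy, along with the fact that the two orderings are related by conjugation with the SWAP gate (i.e. by relabelling the qubits); a small sketch for illustration:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

XZ = np.kron(X, Z)   # X on the first qubit, Z on the second
ZX = np.kron(Z, X)   # Z on the first qubit, X on the second

# the two orderings give different matrices...
print(np.array_equal(XZ, ZX))  # False

# ...but they are related by relabelling the qubits, i.e.
# conjugation by the SWAP gate
SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]])
print(np.array_equal(SWAP @ XZ @ SWAP, ZX))  # True
```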

+",104,,104,,3/25/2018 13:22,3/25/2018 13:22,,,,0,,,,CC BY-SA 3.0 +1305,2,,1303,3/25/2018 5:10,,4,,"

I'll add a bit to the other answer.

+ +

State of a two qubit system is written as $|\psi_1\rangle\otimes|\psi_2\rangle$ where $|\psi_1\rangle$ is the state vector of the first qubit and $|\psi_2\rangle$ is the state vector of the second qubit (it is up to you to decide which one will be called ""first"" and which one will be called ""second""). The order is important, as the first member of the product is always considered to be the state of the first qubit.

+ +

$$|\psi_{\text{state}}\rangle=|\psi_1\rangle\otimes|\psi_2\rangle=|\psi_1\psi_2\rangle\quad (\text{Tensor Product})$$

+ +

However, after choosing the initial convention of the order (i.e. ""first"" and ""second"" qubit), if you change the ordering of the subsystems, as M.Stein says, you have to mention that explicitly. You can't just suddenly write it as $|\psi_2\rangle\otimes|\psi_1\rangle$. Also, another reason why you can't change the order directly is that the tensor product is non-commutative, as M.Stein already mentioned with a nice example.

+ +

A tensor product of that form can be also written as $a|00\rangle + b|01\rangle + c|10\rangle + d|11\rangle$ (where $a,b,c,d\in \Bbb{C}$). Keep in mind that even here, the first entry $\alpha$ in $|\alpha\beta\rangle$ (where $\alpha,\beta\in\{0,1\}$) is the state of the first qubit. For example when you're measuring the first qubit in the $|0\rangle,|1\rangle$ basis you'll end up with either the state $\frac{a|00\rangle+b|01\rangle}{\sqrt{|a|^2+|b|^2}}$ or the state $\frac{c|10\rangle+d|11\rangle}{\sqrt{|c|^2+|d|^2}}$. Notice that, in these two possible results, the first result explicitly implies that the first qubit turns out to be $0$ in that case, while the second possible result implies that the first qubit turns out to be $1$ in that particular case. As you can see, maintaining that order of tensor product is important, to keep track of the actual changes to the qubit system during quantum computing.
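The renormalized post-measurement states described above are easy to verify numerically; a small sketch with an arbitrary (illustrative) two-qubit amplitude vector:

```python
import numpy as np

# amplitudes (a, b, c, d) of a|00> + b|01> + c|10> + d|11>
psi = np.array([1, 2j, -1, 0.5], dtype=complex)
psi = psi / np.linalg.norm(psi)
a, b, c, d = psi

# probability that the FIRST qubit is measured as 0 or 1
p0 = abs(a) ** 2 + abs(b) ** 2
p1 = abs(c) ** 2 + abs(d) ** 2

# renormalized post-measurement states, as in the text
post0 = np.array([a, b, 0, 0]) / np.sqrt(p0)
post1 = np.array([0, 0, c, d]) / np.sqrt(p1)
```

The two probabilities sum to one and both post-measurement states are properly normalized, matching the formulas in the paragraph above.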

+",26,,,,,3/25/2018 5:10,,,,0,,,,CC BY-SA 3.0 +1306,2,,1302,3/25/2018 5:41,,9,,"

There have been efforts to construct ""floating point"" representations using small rotations of qubit states, such as: Floating Point Representations in Quantum Circuit Synthesis. But there doesn't seem to be any international standard like the one you mentioned, i.e. IEEE 754. IEEE P7130 - Standard for Quantum Computing Definitions is an ongoing project. Anyhow, the representation of floating point numbers will automatically depend on the precision you want. If you want to follow the path in the first paper I linked (i.e. using qubit rotations) I can already imagine the possibility of errors during such rotation operations, and you'd have to deal with them accordingly.
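As a toy illustration (my own sketch, not the scheme of the linked paper), one simple but lossy way to encode a real number $x\in[0,1]$ is as the $|1\rangle$-probability of a single-qubit rotation; the number is then only recovered statistically from repeated measurements, which shows exactly the precision loss mentioned in the question:

```python
import numpy as np

def encode(x):
    # encode x in [0, 1] as the |1> probability of a single qubit
    theta = 2 * np.arcsin(np.sqrt(x))
    return np.array([np.cos(theta / 2), np.sin(theta / 2)])

def decode(state, shots=100000, seed=0):
    # estimate x by simulating repeated measurements
    rng = np.random.default_rng(seed)
    p1 = abs(state[1]) ** 2
    samples = rng.random(shots) < p1
    return samples.mean()

x = 0.3
est = decode(encode(x))
```

The exact amplitude carries $x$ to machine precision, but any finite number of shots only gives an estimate with statistical error of order $1/\sqrt{\text{shots}}$.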

+",26,,,,,3/25/2018 5:41,,,,1,,,,CC BY-SA 3.0 +1308,2,,1302,3/25/2018 12:13,,1,,"

I am afraid that while interesting work is being done here, it should be clear that the quantum computer architecture is very much non-standardised and hence this is all subject to change.

+ +

The IEEE 754 standard describes how to implement a feature that decades of engineering and research have shown to be useful and hence machines are to be expected to do this.

+ +

In contrast, scientists and engineers are still figuring out how to best create an 'universal' quantum computer. They have some ideas on how to do this, as Blue mentions. However, there is no 'one true idea' on which engineers can base standards.

+ +

Perhaps it would even turn out complex numbers are easier to represent on a quantum computer and we have a standard for complex number data-types, instead!

+ +

So, while work is being done here, an IEEE standard seems very much in the far future.

+",253,,,,,3/25/2018 12:13,,,,2,,,,CC BY-SA 3.0 +1315,2,,1276,3/25/2018 13:04,,8,,"

For comparison-based sorting (and search), bounds seem to match those of classical computers: $\Omega(N\log N)$ for sorting and $\Omega(\log N)$ for search, as shown by Hoyer et al. A couple of quantum sorting algorithms are listed in the 'Related work' section of ""Quantum sort algorithm based on entanglement qubits {00, 11}"".

+",505,,,,,3/25/2018 13:04,,,,0,,,,CC BY-SA 3.0 +1316,2,,1292,3/25/2018 13:24,,12,,"

What is a qubit? And what is a quantum computer? Any claim about which was first will depend on our definitions.

+ +

One suggestion might be the 1981 experiment by Aspect, Grangier and Roger to demonstrate a violation of Bell’s inequality.

+ +

My arguments for this are:

+ +
    +
  • It uses a physical degree of freedom (photon polarization) which has since been considered for qubits.
  • +
  • It performs a task (Bell’s inequality violation) that has since been used in quantum information theoretic tasks (like cryptography).
  • +
+ +

So though the authors would have had no concept of their setup being a two qubit quantum computer at the time, I’d say that it was.
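For reference, the bound violated in such an experiment is the CHSH form of Bell's inequality; a minimal numerical sketch using the singlet-state correlation $E(a,b)=-\cos(a-b)$ at the standard optimal angles (illustrative values, not the angles of the 1981 experiment):

```python
import numpy as np

def E(a, b):
    # quantum correlation for a singlet state and analyser angles a, b
    return -np.cos(a - b)

# one standard choice of optimal CHSH angles
a1, a2 = 0, np.pi / 2
b1, b2 = np.pi / 4, 3 * np.pi / 4

S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(abs(S))  # 2*sqrt(2) > 2, violating the classical bound
```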

+ +

For some other early two qubit systems, see references 7, 8 and 9 in this paper (which has arguably the first 3 qubit system).

+",409,,,,,3/25/2018 13:24,,,,0,,,,CC BY-SA 3.0 +1323,1,1572,,3/25/2018 16:29,,21,374,"

Background

+ +

Recently I was reading the article ""Quantum Bitcoin: An Anonymous and Distributed Currency Secured by the No-Cloning Theorem of Quantum Mechanics"" which demonstrates how a quantum bitcoin could function. The article's conclusion states that:

+ +
+

quantum bitcoins are atomic and there is currently no way to subdivide quantum bitcoin into smaller denominations, or merge them into larger ones.

+
+ +

As there is currently no way to subdivide or merge quantum bitcoins, you can not make change in a transaction. However, I could not understand why subdivision of a quantum bitcoin is not possible.

+ +

Question

+ +

Why can you not subdivide a quantum bitcoin?

+ +

Definitions

+ +

A quantum bitcoin - like a regular bitcoin - is a currency with no central authority.

+ +

The main idea behind the implementation of a quantum bitcoin is the no-cloning theorem. The no-cloning theorem demonstrates how it is impossible to copy the arbitrary quantum state $ \left| \varphi \right> $.

+",82,,2927,,8/25/2018 5:11,8/25/2018 5:11,Quantum Bitcoin Subdivision,,1,5,,,,CC BY-SA 3.0 +1324,2,,1289,3/25/2018 17:09,,33,,"

Firstly, a classical computer does basic maths at the hardware level in the arithmetic and logic unit (ALU). The logic gates take low and high input voltages and uses CMOS to implement logic gates allowing for individual gates to be performed and built up to perform larger, more complicated operations. In this sense, typing on a keyboard is sending electrical signals, which eventually ends up in a command (in the form of more electrical signals) being sent to the ALU, the correct operations being performed and more signals sent back, which gets converted to display pixels in the shape of a number on your screen.

+ +

What about a quantum computer?

+ +

There are two possible ways that quantum processors get used: by themselves, or in conjunction with a classical processor. However, most (including your example of superconducting) quantum processors don't actually use electrical signals, although this is still how your mouse, keyboard and monitor etc. transmit and receive information. So, there needs to be a way to convert the electric signal to whatever signal the quantum processor uses (which I'll get on to later), as well as some way of telling the processor what you want to do. Both these issues can be solved at once by classical pre- and post- processing, such as in IBM's QISKit. Microsoft is taking a bit more of a top-down approach in Q#, where programs for a quantum processor are written more like 'classical' programs, as opposed to scripts, then compiled and potentially optimised for the hardware. That is, if you've got a function, it can perform classical operations, as well as make calls to the quantum processor to perform any required quantum operations. This leads me to the first point:

+ +

If you're going to ask a computer with access to a quantum processor to calculate something such as $2+3$, one very valid solution would be to just compute it on the classical processor as per usual.

+ +

OK, let's say that you're forcing the classical processor to use the quantum processor, which in this case is one of IBM's superconducting chips, using transmon qubits, let's say, the IBM QX4. This is too small to have error correction, so let's ignore that. There are three parts to using a circuit model processor: initialisation, unitary evolution and measurement, which are explained in more detail below. Before that,

+ +

What is a transmon?

+ +

Take a superconducting loop to allow for Cooper pairs and add one or two Josephson junctions to give a Cooper pair box island in the region between the two Josephson junctions with Josephson coupling energy $E_J = I_c\Phi_0/2\pi$, where the magnetic flux quantum $\Phi_0 = h/2e$ and $I_c$ is the critical current of the junction. Applying a voltage $V_g$ to this box gives a 'gate capacitance' $C_g$ and makes this a charge qubit. For the Coulomb energy of a single Cooper pair $E_C = \left(2e\right)^2/2C$, where $C$ is the sum of the total capacitance of the island. The Hamiltonian of such a system is given by $$H = E_C\left(n - n_g\right)^2 - E_J\cos\phi,$$ where $n$ is the number of Cooper pairs, $\phi$ is the phase change across the junction and $n_g = C_gV_g/2e$. When performing unitary operations, only the two lowest states of the system are considered, $\left|n\right\rangle = \left|0\right\rangle$ and $\left|n\right\rangle = \left|1\right\rangle$ with respective energies $E_0 =\hbar\omega_0$ and $E_1 = \hbar\omega_1$ and qubit frequency $\omega = \omega_1-\omega_0$, describing the computational basis of a qubit. A typical charge qubit could have $E_C = 5E_J$. Adding a large shunting capacitance and increasing the gate capacitance switches this ratio, so that $E_J\gg E_C$ and we have a transmon. This has the advantage of longer coherence times, at a cost of reduced anharmonicity (where energy levels beyond the first two are closer together, potentially causing leakage).
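The $E_J/E_C$ trade-off described above can be illustrated by diagonalizing this Hamiltonian in the charge basis. The sketch below uses the answer's convention $H = E_C(n-n_g)^2$ for the charging term, with the Josephson term written as the standard nearest-neighbour tunneling $-\tfrac{E_J}{2}\sum_n(|n\rangle\langle n{+}1| + \text{h.c.})$; the parameter values are illustrative only.

```python
import numpy as np

def levels(EC, EJ, ng=0.5, ncut=20):
    # lowest eigenvalues of H = EC (n - ng)^2 - (EJ/2) sum |n><n+1| + h.c.
    n = np.arange(-ncut, ncut + 1)
    H = np.diag(EC * (n - ng) ** 2).astype(float)
    H -= EJ / 2 * (np.eye(len(n), k=1) + np.eye(len(n), k=-1))
    return np.sort(np.linalg.eigvalsh(H))[:3]

def rel_anharmonicity(EC, EJ):
    # ((E2 - E1) - (E1 - E0)) / (E1 - E0): how unevenly spaced the levels are
    E0, E1, E2 = levels(EC, EJ)
    return ((E2 - E1) - (E1 - E0)) / (E1 - E0)

charge_qubit = rel_anharmonicity(EC=5.0, EJ=1.0)   # E_C = 5 E_J
transmon = rel_anharmonicity(EC=1.0, EJ=50.0)      # E_J >> E_C

# the transmon's levels are far more evenly spaced (reduced anharmonicity)
print(charge_qubit, transmon)
```

This reproduces the qualitative statement in the text: the transmon regime $E_J\gg E_C$ has much smaller anharmonicity than the charge-qubit regime, which is why pulse shaping is needed to avoid leakage.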

+ +

Finally, we get to the main question:

+ +

How do we initialise, evolve and measure a transmon?

+ +
    +
  • Single qubit unitary evolution: Applying a microwave pulse $\mathcal E\left(t\right) = \mathcal E_x\left(t\right)\cos\left(\omega_dt\right) + \mathcal E_y\left(t\right)\sin\left(\omega_dt\right)$ for $0<t<t_g$ of frequency $\omega_d$ and making the rotating wave approximation gives the Hamiltonian of the qubit states (in the ideal case) as $$H =\hbar \begin{pmatrix}\omega_1-\omega_d && \frac 12\mathcal E_x\left(t\right) - \frac i2\mathcal E_y\left(t\right)\\ \frac 12\mathcal E_x\left(t\right) + \frac i2\mathcal E_y\left(t\right) && \omega_2-2\omega_d\end{pmatrix}$$ However, due to lower anharmonicity, the microwave pulses have to be shaped to reduce leakage to higher energy levels in a process known as Derivative Removal by Adiabatic Gate (DRAG). By varying the pulse, different Hamiltonians can be achieved, which, depending on the time of the pulse can be used to implement different unitary operations on a single qubit.
  • +
  • Measurement/readout: A microwave resonator, with resonance frequency $\omega_r$, can be coupled to the transmon using a capacitor. This interaction causes (decaying) Rabi oscillations to occur in the transmon-resonator system. When the coupling strength of cavity and qubit, $g \ll \omega-\omega_r$, this is known as the dispersive regime. In this regime, the transmittance spectrum of the cavity is shifted by $\pm g^2/\left(\omega-\omega_r\right)$ depending on the state of the qubit, so applying a microwave pulse and analysing the transmittance and reflectance (by computer) can then be used to measure the qubit.
  • +
  • Multiple qubit unitary evolution: This idea of coupling a qubit to a microwave resonator can be extended by coupling the resonator to another qubit. As in the single qubit gate case, timings of the coupling as well as microwave pulses can be used to allow the first qubit to couple to the cavity, which is then coupled to the second qubit, to perform certain 2-qubit gates. Higher energy levels can also be used to make certain gates easier to implement due to interactions between higher levels caused by the cavity. One such example is shown here, where the cavity causes an interaction between the states of $\left|2\right>\left|0\right>$ and $\left|1\right>\left|1\right>$. An avoided crossing between these states means that a 2-qubit phase gate can be implemented, although in general 2-qubit gates are implemented less well (have a lower fidelity) than single qubit ones.
  • +
  • Initialisation: Readout, potentially followed by a single qubit Pauli $X$ gate (on each qubit measured to be in state $\left|1\right\rangle$) to ensure that all qubits start in state $\left|0\right\rangle$.
  • +
+ +

Adding 2 and 3 is now a 'simple' matter of initialising the qubits, performing the gates equivalent to a classical reversible adder and measuring the result, all implemented automatically. The measurement result is then returned by a classical computer as per usual.
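The reversible-adder step can be sketched classically: the sketch below runs the same CNOT/Toffoli network a quantum processor would execute, applied here to definite bit values (a textbook ripple-carry construction for illustration, not the circuit IBM's compiler actually emits).

```python
def cnot(bits, c, t):
    # flip target iff control is 1
    bits[t] ^= bits[c]

def toffoli(bits, c1, c2, t):
    # flip target iff both controls are 1
    bits[t] ^= bits[c1] & bits[c2]

def full_adder(bits, a, b, cin, cout):
    # afterwards, bits[b] holds the sum bit and bits[cout] the carry-out
    toffoli(bits, a, b, cout)
    cnot(bits, a, b)
    toffoli(bits, b, cin, cout)
    cnot(bits, cin, b)

def add(x, y, width=2):
    # ripple-carry addition of two width-bit numbers (little-endian bits)
    bits = {}
    for i in range(width):
        bits[f'a{i}'] = (x >> i) & 1
        bits[f'b{i}'] = (y >> i) & 1
    for i in range(width + 1):
        bits[f'c{i}'] = 0
    for i in range(width):
        full_adder(bits, f'a{i}', f'b{i}', f'c{i}', f'c{i+1}')
    sum_bits = [bits[f'b{i}'] for i in range(width)] + [bits[f'c{width}']]
    return sum(bit << i for i, bit in enumerate(sum_bits))

print(add(2, 3))  # 5
```

Every gate here is its own inverse, so the whole network is reversible - exactly the property needed before it can be lifted to unitary evolution on qubits.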

+ +

As a bonus, it seems a little pointless to go through all that in order to implement gates that could be done on a classical computer anyway, so it turns out that it's possible to approximately implement a quantum adder, which adds two quantum (as opposed to classical) states, with some error, on one of IBM's processors.

+",23,,23,,3/26/2018 16:00,3/26/2018 16:00,,,,0,,,,CC BY-SA 3.0 +1327,1,,,3/25/2018 17:45,,11,214,"

Consider a classical computer, one making, say, a calculation involving a large amount of data. Would quantum memory allow it to store that information (in the short term) more efficiently, or better handle that quantity of data?

+ +

My thought would be it isn't possible, due to the advantage of quantum information storage being in the superpositions, and the data from a classical computer being very much not in a superposition, but I'd like to see if this is correct.

+ +

Either way, citations for further reading would be much appreciated.

+",91,,,,,3/25/2018 18:23,Quantum memory assisting classical memory,,1,5,,,,CC BY-SA 3.0 +1328,1,,,3/25/2018 17:49,,7,343,"

As mentioned in an earlier question of mine, I am interested in using type one spontaneous down conversion (SPDC) in optical quantum computing. However, SPDC is a somewhat low probability occurrence - most of the photons pass straight through the crystals unentangled. What methods, if any, are there to improve the probability of down conversion occurring, and therefore entanglement between photons?

+",91,,26,,3/27/2018 9:59,04-04-2018 18:26,Improving probability of spontaneous parametric down conversion,,2,3,,,,CC BY-SA 3.0 +1329,2,,1328,3/25/2018 17:58,,2,,"

Here's some relevant work: Optimizing type-I polarization-entangled photons by Radhika Rangarajan, Michael Goggin and Paul Kwiat.

+ +

Abstract:

+ +
+

Optical quantum information processing needs ultra-bright sources of + entangled photons, especially from synchronizable femtosecond lasers + and low-cost cw-diode lasers. Decoherence due to timing information + and spatial mode-dependent phase has traditionally limited the + brightness of such sources. We report on a variety of methods to + optimize type-I polarization-entangled sources - the combined use of + different compensation techniques to engineer high-fidelity pulsed and + cw-diode laser-pumped sources, as well as the first production of + polarization-entanglement directly from the highly nonlinear biaxial + crystal BiB3O6 (BiBO). Using spatial compensation, we show more than a + 400-fold improvement in the phase flatness, which otherwise limits + efficient collection of entangled photons from BiBO, and report the + highest fidelity to date (99%) of any ultrafast + polarization-entanglement source. Our numerical code, available on our + website, can design optimal compensation crystals and simulate + entanglement from a variety of type-I phasematched nonlinear crystals.

+
+ +

Apart from that I'd like to mention another interesting development in methods to entangle photons efficiently: An alternative to spontaneous parametric down conversion (SPDC) is +two-photon emission from electrically driven semiconductors. According to Wikipedia:

+ +
+

The newly observed effect of two-photon emission from electrically + driven semiconductors has been proposed as a basis for more efficient + sources of entangled photon pairs. Other than SPDC-generated + photon pairs, the photons of a semiconductor-emitted pair usually are + not identical but have different energies. Until recently, within + the constraints of quantum uncertainty, the pair of emitted photons + were assumed to be co-located: they are born from the same location. + However, a new nonlocalized mechanism for the production of correlated + photon pairs in SPDC has highlighted that occasionally the individual + photons that constitute the pair can be emitted from spatially + separated points.

+
+ +

Relevant paper: Observation of two-photon emission from semiconductors - Alex Hayat, Pavel Ginzburg & Meir Orenstein.

+ +

According to the abstract it is ""three orders of magnitude more efficient than the existing down-conversion schemes"".

+",26,,26,,3/25/2018 18:47,3/25/2018 18:47,,,,0,,,,CC BY-SA 3.0 +1330,2,,1327,3/25/2018 18:23,,10,,"

In summary, no.

+

If you think about it, this makes sense. When measuring a quantum system with $n$ qubits, you get $n$ bits of information. The $2^n$ figure exists only when the system is in superposition, which a classical computer cannot access.

+

The specific theorem in question here is Holevo's theorem. To quote Wikipedia:

+
+

In essence, the Holevo bound proves that given $n$ qubits, although they can "carry" a larger amount of (classical) information (thanks to quantum superposition), the amount of classical information that can be retrieved, i.e. accessed, can be only up to $n$ classical (non-quantum encoded) bits.

+
+

See this physics question and answer(s) as well. (Thanks to glS for linking to this in the comments.)

+",91,,-1,,6/18/2020 8:31,3/25/2018 18:23,,,,1,,,,CC BY-SA 3.0 +1331,2,,1225,3/25/2018 19:12,,5,,"

From a computer science perspective, the calculation of quantum-state amplitudes can be related to group-by aggregation queries in relational database systems, and the techniques we developed to reorganize calculations can be related to algebraic manipulations that are performed by database query optimizers. The general “design pattern” is thus analogous: convert quantum circuits into graph-based algebraic representations that can be readily manipulated and then use those representations to generate optimized execution plans for their simulation. A complexity analysis could then be approached from this perspective.

+ +

I included a simple example in a reply to a question posted on the IBM Q Experience Forum: Equations for Bristle Brush Example
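The analogy can be made concrete: the amplitude of an output basis state is a sum over intermediate basis states (Feynman paths), which is precisely a group-by aggregation over path contributions. Below is a minimal sketch on a hypothetical one-qubit circuit H, T, H, chosen purely for illustration (it is not the Bristle Brush example from the forum reply):

```python
import numpy as np
from itertools import product

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
T = np.array([[1, 0], [0, np.exp(1j * np.pi / 4)]])
circuit = [H, T, H]

# direct matrix product: amplitude <1| H T H |0>
amp_matrix = (circuit[2] @ circuit[1] @ circuit[0])[1, 0]

# path sum: aggregate contributions over intermediate basis states,
# like a group-by over all paths 0 -> i -> j -> 1
amp_paths = sum(circuit[2][1, j] * circuit[1][j, i] * circuit[0][i, 0]
                for i, j in product(range(2), repeat=2))
```

Reorganizing how this sum is factored and evaluated is exactly the kind of algebraic manipulation a query optimizer performs.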

+",522,,,,,3/25/2018 19:12,,,,0,,,,CC BY-SA 3.0 +1336,2,,1292,3/25/2018 23:25,,8,,"

It's difficult to define the point where an experimental setup is a quantum computer. But the crucial feature of a quantum computer is that it's able to perform a quantum computation. The first experimental realization of an algorithm was indeed Jones' and Mosca's implementation of the Deutsch algorithm in 1998 using an NMR setup.
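For readers unfamiliar with it, the Deutsch algorithm decides with a single oracle query whether a function $f:\{0,1\}\to\{0,1\}$ is constant or balanced. A minimal statevector sketch (my own illustration, obviously not the NMR implementation):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def deutsch(f):
    # one-query Deutsch algorithm on a 2-qubit statevector;
    # oracle U_f |x>|y> = |x>|y xor f(x)>
    Uf = np.zeros((4, 4))
    for x in (0, 1):
        for y in (0, 1):
            Uf[2 * x + (y ^ f(x)), 2 * x + y] = 1
    state = np.zeros(4)
    state[1] = 1                           # start in |0>|1>
    state = np.kron(H, H) @ state          # Hadamard both qubits
    state = Uf @ state                     # single oracle query
    state = np.kron(H, np.eye(2)) @ state  # Hadamard the first qubit
    p_first_is_0 = abs(state[0]) ** 2 + abs(state[1]) ** 2
    return 'constant' if p_first_is_0 > 0.5 else 'balanced'
```

Classically two queries are needed; the quantum circuit answers with one, which is why this was a natural first demonstration.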

+ +

Of course previous experiments showed components that could be used in a quantum computer.

+ +

However, it is quite reasonable to demand that a quantum computer is able to perform arbitrary arithmetic, whether programmable or by minor adjustments to the setup. By this definition we don't have a quantum computer yet. This is related to the DiVincenzo Criteria for Quantum Computers.

+",104,,104,,3/25/2018 23:50,3/25/2018 23:50,,,,0,,,,CC BY-SA 3.0 +1337,2,,1276,3/26/2018 1:08,,5,,"

There is a newer result from Robert Beals, Stephen Brierley, Oliver Gray, Aram Harrow, Samuel Kutin, Noah Linden, Dan Shepherd, Mark Stather. They present on Table 2 of Efficient Distributed Quantum Computing the results for bubble sort and insertion sort, it is mainly for ""network sorting"" but they gave more references about sorting.

+ +

A very brief description of the paper: it shows how to solve several problems, such as accessing quantum memory without loss of superposition (and they give the cost for it). The paper also presents the problem of sorting a network quantumly (one of the difficulties being the reversibility of operations). I like the paper because it raises several problems and the authors give solutions for some of them. I think that it is hard to summarize; I really recommend reading it.

+ +

I hope that I have helped.

+",534,,534,,3/26/2018 11:01,3/26/2018 11:01,,,,0,,,,CC BY-SA 3.0 +1338,2,,12,3/26/2018 1:13,,3,,"

I think a nice ""overview"" about the subject can be found at: Quantiki

+ +

They have a list of quantum computer simulators in several languages, some of the simulators have been cited here before. However, they keep a list that they update to inform (or try to inform) of the project's status. There are some ""libraries"" such as:

+ +

Haskell

+ +

qchas (qchas: A library for implementing Quantum Algorithms) - A library useful for implementing Quantum Algorithms. It contains definitions of Quantum Gates, Qubits.

+ +

Python

+ +

qubiter : The Qubiter project aims to provide eventually a full suite of tools, written mostly in Python, for designing and simulating quantum circuits on classical computers.

+ +

Javascript

+ +

jsqis: jsqis, at its core, is a quantum computer simulator written in Javascript. It allows initialization of quantum registers and their manipulation by means of quantum gates.

+",534,,82,,3/26/2018 2:55,3/26/2018 2:55,,,,0,,,,CC BY-SA 3.0 +1339,1,1643,,3/26/2018 2:22,,7,280,"

Q1: I've tried to find out if Barkhausen noise affects the measurement of spin-wave excitations in magnetic particle material based qubits.

+

I prefer implementations such as those described in "Magnetic qubits as hardware for quantum computers", rather than a true hybrid system such as described in "Coherent coupling between a ferromagnetic magnon and a superconducting qubit" or "Resolving quanta of collective spin excitations in a millimeter-sized ferromagnet"; which relies on the coherent coupling between a single-magnon excitation in a millimeter-sized ferromagnetic sphere and a superconducting qubit.

+

I'm asking about the situation where the magnetic particle is the qubit and not simply part of a magnon-qubit coupling scheme.

+

Q2: Is Barkhausen noise a factor that is not considered relevant?

+

After several hours of research, the closest match I could find for a paper covering quantum computing hardware, mesoscale and nanoscale physics, and Barkhausen noise was this one: "The Theory of Spin Noise Spectroscopy: A Review".

+
+

"Barkhausen noise

+

Studies of fluctuations in magnetic systems take roots in the work of Heinrich Barkhausen who proved already in 1919 that the magnetic hysteresis curve is not continuous, but is made up of small random steps caused when the magnetic domains move under an applied magnetic field. This noise can be characterized by placing a coil of a conducting wire near the sample. The motion of ferromagnetic domain walls produces changes in the magnetization that induces noisy electrical signals in the coil. Studies of Barkhausen noise have been used in practice as a nondemolition tool to characterize the distribution of elastic stresses and the microstructure of magnetic samples".

+
+

It would seem that Barkhausen noise can affect even very small magnetic particles subjected to an external magnetizing field, as might be encountered during measurement, but nowhere (it would seem) is there research on its effect on the quantum noise of the system.

+

It appears to be a difficult or unanswered question.

+
+

An answer was offered stating:

+
+

The Barkhausen effect has to do with domain wall motion. The magnetic qubit discussed in the first reference is based on nm sized magnetic particle, which we can assume to be single domain, and therefore would not exhibit Barkhausen noise. This paper by Kittel [Theory of the Structure of Ferromagnetic Domains - 1946] discusses domains in magnetic particles.

+
+

There are different limits stated as to what constitutes a single-domain magnetic particle, I've found upper limits ranging from 30-100 nm, with lower limits somewhat more consistent around 10 nm.

+

While it's not precisely stated what the exact size of the particles is in that paper, and we might assume others using similar methods could utilize particles of a different size, let's assume for the sake of that one answer only that the particles in question are single domain.

+

There are five main mechanisms due to which magnetic Barkhausen emissions occur [Jiles (1988)]:

+
    +
  1. Discontinuous, irreversible domain wall motion
  2. Discontinuous rotation of magnetic moments within a domain
  3. Appearance and disappearance of domain walls (Bloch or Neel). Domain walls are narrow transition regions between magnetic domains. They only differ in the plane of rotation of magnetization. For Bloch walls the magnetization rotates through the plane of the domain wall whereas for Neel walls the magnetization rotates within the plane of the domain wall.
  4. Inversion of magnetization in single-domain particles
  5. Displacement of Bloch or Neel lines in two 180$°$ walls with oppositely directed magnetizations
+

There are a number of papers on the measurement of Barkhausen noise in single-domain magnetic particles:

+ +
+

"The fundamental Barkhausen noise generated by the magnetization reversal of individual particles within a particulate magnetic medium has been observed using the anomalous Hall effect (AHE) as a sensitive magnetization probe. This is the first time the reversal of individual interacting single or nearly single domain particles has been detected. The jumps correspond to magnetic switching volumes of ~3×10$^{-15}$ cm$^3$ with moments around 10$^{-12}$ emu.".

+
+ +
+

"These observations thereby demonstrate that nucleation becomes increasingly more dominant as the particles become smaller, a manifestation of the random distribution of active nucleation sites. Nucleation may therefore account for much of the magnitude and grain size dependence of hysteresis parameters in the PSD range as well as resulting in a gradual transition between multidomain and PSD behavior. Fine particles completely controlled by nucleation during hysteresis behave in a strikingly parallel manner to classical single domains and are therefore quite appropriately described as being pseudo‐single‐domain.".

+
+ +

That paper goes as far as to state:

+
+

"We will show that here Barkhausen Noise has nothing to do with the movement of domain walls nor with Self Organized Criticality nor with fractal domains nor with thermodynamic criteria."

+
+",278,,-1,,6/18/2020 8:31,04-08-2018 21:12,Does Barkhausen noise affect the measurement of magnetic particle based qubits?,,2,0,,,,CC BY-SA 3.0 +1340,2,,1339,3/26/2018 5:59,,1,,"

The Barkhausen effect has to do with domain wall motion. The magnetic qubit discussed in the first reference is based on a nm-sized magnetic particle, which we can assume to be single domain, and which therefore would not exhibit Barkhausen noise.

+ +

This paper by Kittel discusses domains in magnetic particles.

+",127,,127,,3/26/2018 16:43,3/26/2018 16:43,,,,4,,,,CC BY-SA 3.0 +1341,1,,,3/26/2018 10:49,,5,1998,"

What does one mean by saying that classical bits perform operations at the scale of $2n$ and quantum computers perform operations at the scale of $2^n$? In both cases, $n$ = Number of bits/qubits.

+",543,,26,,12/23/2018 12:11,12/23/2018 12:11,Why do classical bits perform calculations at a scale that expands linearly and qubits at exponential scale in the number of (qu)bits?,,3,2,,,,CC BY-SA 3.0 +1342,2,,1341,3/26/2018 11:16,,3,,"

The reason is superposition. It allows you to act on many basis states at once, which is where the speed-up comes from. For example, if you have just one qubit you will have the following, because of superposition:

+ +

$$\alpha_0\lvert0\rangle + \alpha_1\lvert1\rangle$$

+ +

You can see that you already have $2$ basis states for one qubit. If you have two qubits you will have $4$ basis states for your system, and so on. I will write the $2$-qubit basis states as tensor products of the previous basis states, and we will have:

+ +

$$\lvert0\rangle ⊗ \lvert0\rangle, \lvert0\rangle ⊗ \lvert1\rangle, \lvert1\rangle ⊗ \lvert0\rangle, \lvert1\rangle ⊗ \lvert1\rangle$$

+ +

If you evaluate these tensor products, you will end up with a $4$-dimensional basis for $2$ qubits ($2^2$).

+ +
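To make this concrete, here is a small numpy sketch (my own illustration, not from the lecture notes) that builds the two-qubit basis from tensor products of the single-qubit basis states:

```python
import numpy as np

# Single-qubit computational basis states |0> and |1>
ket0 = np.array([1, 0])
ket1 = np.array([0, 1])

# The two-qubit basis states are the tensor (Kronecker) products
# |0>|0>, |0>|1>, |1>|0>, |1>|1>
basis = [np.kron(a, b) for a in (ket0, ket1) for b in (ket0, ket1)]

# n qubits live in a 2**n-dimensional space: here 4 states of dimension 4
print(len(basis), basis[0].size)  # prints: 4 4
```

Each extra qubit doubles the dimension, which is where the $2^n$ scaling comes from.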

If you are looking for introductory material, I think the Lecture Notes from Ronald de Wolf are a good start. They are available for free directly from his website. He gives a better explanation of this in Section 1.3.

+ +

I hope that I have helped you.

+",534,,26,,3/26/2018 14:45,3/26/2018 14:45,,,,1,,,,CC BY-SA 3.0 +1343,2,,1341,3/26/2018 12:33,,7,,"

I'm not sure it really is true to make such a claim, even though it is one that is often seen. Even so, this statement is common because it does point towards a difference between classical computers and quantum ones.

+ +

Classical computation is essentially a process that takes a single input bit string and keeps transforming it until you get a single output bit string. You can think of the whole process as only ever having one bit string in the computer at once.

+ +

The same is true for quantum computers, except that you need to replace 'bit string' with 'state of many qubits'. So how do bit strings compare with multi qubit states? To find out, we can look at how classical computers can simulate quantum ones.

+ +

One way to represent states of $n$ qubits in a classical computer is to think of them as superpositions of all possible $n$-bit strings. Then you can have a big array, which stores the corresponding amplitude for every $n$-bit string. Since there are $2^n$ $n$-bit strings, this will take an exponentially large amount of memory. This is often not the most efficient method of representing quantum states, but there are cases when it is no worse than any other.

+ +
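To get a feel for that overhead, here is a back-of-the-envelope sketch (assuming, as is common but not required, 16 bytes per complex amplitude):

```python
# Memory needed for a dense state-vector representation of n qubits:
# one complex amplitude per n-bit string, i.e. 2**n amplitudes in total.
def state_vector_bytes(n, bytes_per_amplitude=16):
    return (2 ** n) * bytes_per_amplitude

# 30 qubits already need 16 GiB; 40 qubits need 16 TiB
for n in (10, 20, 30, 40):
    print(n, state_vector_bytes(n) / 2**30, "GiB")
```

Every additional qubit doubles the storage, which is the exponential overhead described above.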

So if we need to simulate any possible process for $n$ qubits with bits, we know that it will incur this kind of overhead. If we use this to draw a comparison between qubits and bits, we could say that an exponentially large number of bits is required to match the power of $n$ qubits. But the same would not be true for all possible computational tasks.

+",409,,,,,3/26/2018 12:33,,,,3,,,,CC BY-SA 3.0 +1344,1,1354,,3/26/2018 15:37,,14,3633,"

Suppose we have a circuit decomposition of a unitary $U$ using some universal gate set (for example CNOT-gates and single qubit unitaries). Is there a direct way to write down the circuit of the corresponding controlled unitary $C_U$ using the same universal gate set?

+ +

For example take $U=i Y = H X H X$, as a circuit:
+

+ +

We can replace the $X$ gates by $C_X$ (CNOT) gates to obtain $C_U$:
+

+ +

This works because if the control qubit is in state $|0\rangle$ the action on the target is $H^2=\mathbb{I}$, while for $|1\rangle$ it applies the circuit for $U$. For different $U$, in particular if it acts on several qubits, coming up with such a circuit might be cumbersome. Is there a recipe to obtain the circuit of $C_U$ given that you know how to build $U$?

+",104,,55,,04-10-2019 12:17,04-10-2019 12:17,"Given a decomposition for a unitary $U$, how do you decompose the corresponding controlled unitary gate $C(U)$?",,2,2,,,,CC BY-SA 4.0 +1347,2,,74,3/26/2018 15:50,,18,,"

Measurement-based quantum computation (MBQC)

+ +

This is a way to perform quantum computation, using intermediary measurements as a way of driving the computation rather than just extracting the answers. It is a special case of ""quantum circuits with intermediary measurements"", and so is no more powerful. However, when it was introduced, it up-ended many people's intuitions of the role of unitary transformations in quantum computation. In this model one has constraints such as the following:

+ +
    +
  1. One prepares, or is given, a very large entangled state — one which can be described (or prepared) by having some set of qubits all initially prepared in the state $\lvert + \rangle$, and then some sequence of controlled-Z operations $\mathrm{CZ} = \mathrm{diag}(+1,+1,+1,-1)$, performed on pairs of qubits according to the edge-relations of a graph (commonly, a rectangular grid or hexagonal lattice).
  2. Perform a sequence of measurements on these qubits — some perhaps in the standard basis, but the majority not in the standard basis, but instead measuring observables such as $M_{\mathrm{XY}}(\theta) = \cos(\theta) X - \sin(\theta) Y$ for various angles $\theta$. Each measurement yields an outcome $+1$ or $-1$ (often labelled '0' or '1' respectively), and the choice of angle is allowed to depend in a simple way on the outcomes of previous measurements (in a way computed by a classical control system).
  3. The answer to the computation may be computed from the classical outcomes $\pm 1$ of the measurements.
+

As with the unitary circuit model, there are variations one can consider for this model. However, the core concept is adaptive single-qubit measurements performed on a large entangled state, or a state which has been subjected to a sequence of commuting and possibly entangling operations which are either performed all at once or in stages.

+ +

This model of computation is usually considered as being useful primarily as a way to simulate unitary circuits. Because it is often seen as a means to simulate a better-liked and simpler model of computation, it is not considered theoretically very interesting anymore to most people. However:

+ +
    +
  • It is important among other things as a motivating concept behind the class IQP, which is one means of demonstrating that a quantum computer is difficult to simulate, and Blind Quantum Computing, which is one way to try to solve problems in secure computation using quantum resources.

  • There is no reason why measurement-based computations should be essentially limited to simulating unitary quantum circuits: it seems to me (and a handful of other theorists in the minority) that MQBC could provide a way of describing interesting computational primitives. While MBQC is just a special case of circuits with intermediary measurements, and can therefore be simulated by unitary circuits with only polynomial overhead, this is not to say that unitary circuits would necessarily be a very fruitful way of describing anything that one could do in principle in a measurement-based computation (just as there exists imperative and functional programming languages in classical computation which sit a little ill-at-ease with one another).

+ +

The question remains whether MBQC will suggest any way of thinking about building algorithms which is not as easily presented in terms of unitary circuits — but there can be no question of a computational advantage or disadvantage over unitary circuits, except one of specific resources and suitability for some architecture.

+",124,,253,,3/29/2018 11:01,3/29/2018 11:01,,,,4,,,,CC BY-SA 3.0 +1348,2,,74,3/26/2018 15:51,,12,,"

The Unitary Circuit Model

+ +

This is the best well-known model of quantum computation. In this model one has constraints such as the following:

+ +
    +
  1. a set of qubits initialised to a pure state, which we denote $\lvert 0 \rangle$;
  2. a sequence of unitary transformations which one performs on them, which may depend on a classical bit-string $x\in \{0,1\}^n$;
  3. one or more measurements in the standard basis performed at the very end of the computation, yielding a classical output string $y \in \{0,1\}^k$. (We do not require $k = n$: for instance, for YES / NO problems, one often takes $k = 1$ no matter the size of $n$.)
+

Minor details may change (for instance, the set of unitaries one may perform; whether one allows preparation in other pure states such as $\lvert 1 \rangle$, $\lvert +\rangle$, $\lvert -\rangle$; whether measurements must be in the standard basis or can also be in some other basis), but these do not make any essential difference.

+",124,,253,,3/29/2018 11:03,3/29/2018 11:03,,,,0,,,,CC BY-SA 3.0 +1349,2,,74,3/26/2018 15:54,,27,,"

The adiabatic model

+ +

This model of quantum computation is motivated by ideas in quantum many-body theory, and differs substantially both from the circuit model (in that it is a continuous-time model) and from continuous-time quantum walks (in that it has a time-dependent evolution).

+ +

Adiabatic computation usually takes the following form.

+ +
    +
  1. Start with some set of qubits, all in some simple state such as $\lvert + \rangle$. Call the initial global state $\lvert \psi_0 \rangle$.
  2. Subject these qubits to an interaction Hamiltonian $H_0$ for which $\lvert \psi_0 \rangle$ is the unique ground state (the state with the lowest energy). For instance, given $\lvert \psi_0 \rangle = \lvert + \rangle^{\otimes n}$, we may choose $H_0 = - \sum_{k} \sigma^{(x)}_k$.
  3. Choose a final Hamiltonian $H_1$, which has a unique ground state which encodes the answer to a problem you are interested in. For instance, if you want to solve a constraint satisfaction problem, you could define a Hamiltonian $H_1 = \sum_{c} h_c$, where the sum is taken over the constraints $c$ of the classical problem, and where each $h_c$ is an operator which imposes an energy penalty (a positive energy contribution) to any standard basis state representing a classical assignment which does not satisfy the constraint $c$.
  4. Define a time interval $T \geqslant 0$ and a time-varying Hamiltonian $H(t)$ such that $H(0) = H_0$ and $H(T) = H_1$. A common but not necessary choice is to simply take a linear interpolation $H(t) = \tfrac{t}{T} H_1 + (1 - \tfrac{t}{T})H_0$.
  5. For times $t = 0$ up to $t = T$, allow the system to evolve under the continuously varying Hamiltonian $H(t)$, and measure the qubits at the output to obtain an outcome $y \in \{0,1\}^n$.
+

The basis of the adiabatic model is the adiabatic theorem, of which there are several versions. The version by Ambainis and Regev [ arXiv:quant-ph/0411152 ] (a more rigorous example) implies that if there is always an ""energy gap"" of at least $\lambda > 0$ between the ground state of $H(t)$ and its first excited state for all $0 \leqslant t \leqslant T$, and the operator-norms of the first and second derivatives of $H$ are small enough (that is, $H(t)$ does not vary too quickly or abruptly), then you can make the probability of getting the output you want as large as you like just by running the computation slowly enough. Furthermore, you can reduce the probability of error by any constant factor just by slowing down the whole computation by a polynomially-related factor.

+ +
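As a toy illustration of the role of the gap (a single-qubit sketch of my own, not taken from the references), one can diagonalise the linear interpolation numerically:

```python
import numpy as np

# Toy version of H(s) = s*H1 + (1 - s)*H0 with s = t/T, for one qubit.
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

H0 = -X   # unique ground state |+>
H1 = -Z   # unique ground state |0>, standing in for the "answer"

for s in np.linspace(0, 1, 5):
    H = s * H1 + (1 - s) * H0
    e = np.linalg.eigvalsh(H)                  # eigenvalues in ascending order
    print(round(s, 2), round(e[1] - e[0], 4))  # the energy gap at this s
```

For this trivial interpolation the gap never closes (its minimum is $\sqrt{2}$ at $s = 1/2$); for interesting many-body Hamiltonians it can become exponentially small, which is exactly what makes the required run time hard to bound.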

Despite being very different in presentation from the unitary circuit model, it has been shown that this model is polynomial-time equivalent to the unitary circuit model [ arXiv:quant-ph/0405098 ]. The advantage of the adiabatic algorithm is that it provides a different approach to constructing quantum algorithms which is more amenable to optimisation problems. One disadvantage is that it is not clear how to protect it against noise, or to tell how its performance degrades under imperfect control. Another problem is that, even without any imperfections in the system, determining how slowly to run the algorithm to get a reliable answer is a difficult problem — it depends on the energy gap, and it isn't easy in general to tell what the energy gap is for a static Hamiltonian $H$, let alone a time-varying one $H(t)$.

+ +

Still, this is a model of both theoretical and practical interest, and has the distinction of being the most different from the unitary circuit model of essentially any that exists.

+",124,,253,,3/29/2018 11:01,3/29/2018 11:01,,,,0,,,,CC BY-SA 3.0 +1350,2,,74,3/26/2018 15:56,,11,,"

Discrete-time quantum walk

+ +

A ""discrete-time quantum walk"" is a quantum variation on a random walk, in which there is a 'walker' (or multiple 'walkers') which takes small steps in a graph (e.g. a chain of nodes, or a rectangular grid). The difference is that where a random walker takes a step in a randomly determined direction, a quantum walker takes a step in a direction determined by a quantum ""coin"" register, which at each step is ""flipped"" by a unitary transformation rather than changed by re-sampling a random variable. See [ arXiv:quant-ph/0012090 ] for an early reference.

+ +

For the sake of simplicity, I will describe a quantum walk on a cycle of size $2^n$; though one must change some of the details to consider quantum walks on more general graphs. In this model of computation, one typically does the following.

+ +
    +
  1. Prepare a ""position"" register on $n$ qubits in some state such as $\lvert 00\cdots 0\rangle$, and a ""coin"" register (with standard basis states which we denote by $\lvert +1 \rangle$ and $\lvert -1 \rangle$) in some initial state which may be a superposition of the two standard basis states.
  2. Perform a coherent controlled-unitary transformation, which adds 1 to the value of the position register (modulo $2^n$) if the coin is in the state $\lvert +1 \rangle$, and subtracts 1 from the value of the position register (modulo $2^n$) if the coin is in the state $\lvert -1 \rangle$.
  3. Perform a fixed unitary transformation $C$ to the coin register. This plays the role of a ""coin flip"" to determine the direction of the next step. We then return to step 2.
+
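The steps above can be sketched in a few lines of numpy (a minimal illustration; the Hadamard coin and the balanced initial coin state are common but arbitrary choices of mine):

```python
import numpy as np

n = 5
N = 2 ** n                                        # cycle of 2**n positions
coin = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard "coin flip"

# state[c, x]: amplitude for coin value c (0 -> step +1, 1 -> step -1)
# and position x; the walker starts at position 0 with a balanced coin.
state = np.zeros((2, N), dtype=complex)
state[0, 0] = 1 / np.sqrt(2)
state[1, 0] = 1j / np.sqrt(2)

for _ in range(30):
    state[0] = np.roll(state[0], 1)   # coin-conditional shift (step 2)
    state[1] = np.roll(state[1], -1)
    state = coin @ state              # unitary coin flip (step 3)

prob = (np.abs(state) ** 2).sum(axis=0)   # position distribution
print(np.isclose(prob.sum(), 1.0))        # prints: True
```

Plotting `prob` shows the spread-out, double-peaked distribution characteristic of ballistic motion, rather than the Gaussian of a classical random walk.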

The main difference between this and a random walk is that the different possible ""trajectories"" of the walker are being performed coherently in superposition, so that they can destructively interfere. This leads to a walker behaviour which is more like ballistic motion than diffusion. Indeed, an early presentation of a model such as this was made by Feynman, as a way to simulate the Dirac equation.

+ +

This model also often is described in terms of looking for or locating 'marked' elements in the graph, in which case one performs another step (to compute whether the node the walker is at is marked, and then to measure the outcome of that computation) before returning to Step 2. Other variations of this sort are reasonable.

+ +

To perform a quantum walk on a more general graph, one must replace the ""position"" register with one which can express all of the nodes of the graph, and the ""coin"" register with one which can express the edges incident to a vertex. The ""coin operator"" then must also be replaced with one which allows the walker to perform an interesting superposition of different trajectories. (What counts as 'interesting' depends on what your motivation is: physicists often consider ways in which changing the coin operator changes the evolution of the probability density, not for computational purposes but as a way of probing at basic physics using quantum walks as a reasonable toy model of particle movement.) A good framework for generalising quantum walks to more general graphs is the Szegedy formulation [ arXiv:quant-ph/0401053 ] of discrete-time quantum walks.

+ +

This model of computation is strictly speaking a special case of the unitary circuit model, but is motivated with very specific physical intuitions, which has led to some algorithmic insights (see e.g. [ arXiv:1302.3143 ]) for polynomial-time speedups in bounded-error quantum algorithms. This model is also a close relative of the continuous-time quantum walk as a model of computation.

+",124,,253,,3/29/2018 11:03,3/29/2018 11:03,,,,3,,,,CC BY-SA 3.0 +1351,2,,74,3/26/2018 15:59,,10,,"

Quantum circuits with intermediary measurements

+ +

This is a slight variation on ""unitary circuits"", in which one allows measurements in the middle of the algorithm as well as the end, and where one also allows future operations to depend on the outcomes of those measurements. It represents a realistic picture of a quantum processor which interacts with a classical control device, which among other things is the interface between the quantum processor and a human user.

+ +

Intermediary measurement is practically necessary to perform error correction, and so this is in principle a more realistic picture of quantum computation than the unitary circuit model. But it is not uncommon for theorists of a certain type to strongly prefer measurements to be left until the end (using the principle of deferred measurement to simulate any 'intermediary' measurements). So, this may be a significant distinction to make when talking about quantum algorithms — but it does not lead to a theoretical increase in the computational power of a quantum algorithm.

+",124,,253,,3/29/2018 11:02,3/29/2018 11:02,,,,2,,,,CC BY-SA 3.0 +1352,2,,1344,3/26/2018 16:37,,6,,"

Although this might not answer your question completely, I think it might provide some direction of thinking. Here are two important facts:

+ +
    +
  • Any unitary $2^{n}\times 2^{n}$ matrix $M$ can be realized on a quantum computer with $n$ quantum bits by a finite sequence of controlled-NOT and single-qubit gates1.

  • Suppose $U$ is a unitary $2\times 2$ matrix satisfying $\text{tr } U \neq 0$, $\text{tr} (UX) \neq 0$, and $\text{det } U \neq 1$. Then six elementary gates are necessary and sufficient to implement a controlled $U$-gate2.

+ +

It should be possible to extend the second case to the general $n\times n$ case, given the first point, although I haven't found any paper which does that explicitly.

+ +
+ +

1 Elementary gates for quantum computation - A. Barenco (Oxford), C. H. Bennett (IBM), R. Cleve (Calgary), D. P. DiVincenzo (IBM), N. Margolus (MIT), P. Shor (AT&T), T. Sleator (NYU), J. Smolin (UCLA), H. Weinfurter (Innsbruck)

+ +

2 Optimal Realizations of Controlled Unitary Gates - Guang Song, Andreas Klappenecker (Texas A&M University)

+",26,,,,,3/26/2018 16:37,,,,0,,,,CC BY-SA 3.0 +1353,1,1611,,3/26/2018 16:41,,6,518,"

I'm wondering whether, even if we cannot create a fast quantum computer, simulating quantum algorithms could be a reasonable approach to designing classical algorithms.

+ +

In particular, I'd like to see any results of classical algorithms that have been sped up by using a quantum simulation as a subroutine. Second, the next logical step would be to 'cut out the middleman' and see if we can remove the simulator. Perhaps this can even be done semi-automatically!

+ +

So, is there any result or research on this? Suggestions are welcome.

+ +
+ +

To be clear, I'm asking whether there exists any problem such that running a simulation of a quantum computer, on a classical computer, can offer any improvement (time or memory) over (trying to) solve the same problem on a classical computer without running any sort of simulation of a quantum computer.

+ +

Second, I am wondering how one then would attempt to adapt this algorithm such that all 'useless' parts of the quantum algorithm and the simulation are removed, hopefully improving the method even further.

+",253,,253,,04-05-2018 13:06,04-05-2018 14:00,Can classical algorithms be improved by using quantum simulation as an intermediary step?,,2,19,,,,CC BY-SA 3.0 +1354,2,,1344,3/26/2018 19:20,,16,,"

The question may not be entirely well-defined, in the sense that to ask for a way to compute $C(U)$ from a decomposition of $U$ you need to specify the set of gates that you are willing to use. Indeed, it is a known result that any $n$-qubit gate can be exactly decomposed using $\text{CNOT}$ and single-qubit operations, so that a naive answer to the question would be: just decompose $C(U)$ using single-qubit and $\text{CNOT}$s.

+ +

A different interpretation of the question is the following: given $U$, can I compute $C(U)$ using a set of single-qubit operations and $\text{CNOT}$s not acting on the control qubit, together with $\text{CNOT}$s whose control is the first qubit? This can be done by generalising a result found in chapter four of Nielsen & Chuang.

+ +

Let $U$ be a single-qubit gate. +It can then be proved that $U$ can always be written as $U = e^{i\alpha} AXBXC$, where $X$ is the Pauli X gate, and $A, B$ and $C$ are single-qubit operations such that $ABC=I$ (see N&C for a proof). +It follows that +$$C(U)=\Phi_1(\alpha)A_2C(X)B_2C(X) C_2,$$ +where $\Phi_1(\alpha)\equiv\begin{pmatrix}1&0\\0&e^{i\alpha}\end{pmatrix}\otimes I$ is a phase gate applied to the first qubit, and $A_2, B_2, C_2$ are $A, B, C$ applied to the second qubit. +This is immediate once you realise that, if that first qubit is $|0\rangle$, then $C(X)$ becomes an identity, and on the second qubit you have the operations $ABC$, which give the identity. On the other hand, if the first qubit is $|1\rangle$, then on the second rail you have $AXBXC$, which (together with the phase) equals $U$ by definition.

+ +
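As a quick numerical check (a numpy sketch), take the $U = iY = HXHX$ example from the question: here $A = H$, $B = H$, $C = I$ and $\alpha = 0$, so $ABC = I$ and the formula above reduces to $C(U) = A_2\, C(X)\, B_2\, C(X)\, C_2$:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
I2 = np.eye(2)

# CNOT with the first qubit as control: |0><0| (x) I + |1><1| (x) X
CX = np.kron(np.diag([1.0, 0.0]), I2) + np.kron(np.diag([0.0, 1.0]), X)
A2 = np.kron(I2, H)          # A = B = H applied to the target qubit

U = H @ X @ H @ X            # equals iY
CU = A2 @ CX @ A2 @ CX       # A2 . C(X) . B2 . C(X) . C2, with C = I

assert np.allclose(U, 1j * Y)
assert np.allclose(CU, np.block([[I2, np.zeros((2, 2))],
                                 [np.zeros((2, 2)), 1j * Y]]))
print("controlled-U decomposition verified")
```

The block structure of `CU` makes the argument visible: the control-$|0\rangle$ block is $HH = I$, and the control-$|1\rangle$ block is $HXHX = iY = U$.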

The above decomposition can be used to find a naive way to compute $C(U)$ for a general $n$-qubit unitary gate. +The main observation is that if $U=A_1 A_2\cdots A_m$ for any set of gates $\{A_1,..,A_m\}$, then +$$C(U)=C(A_1)C(A_2)\cdots C(A_m).$$ +But we also know that any $n$-qubit $U$ can be decomposed in terms of CNOTs and single-qubit operations. +It follows that $C(U)$ is a sequence of CCNOT and $C(V)$ operations, where CCNOT is here an $X$ gate applied to some qubit conditioned to two other qubits being $|1\rangle$, and $V$ is a single-qubit operation on some qubit. +But again, any CCNOT operation (also called Toffoli), can be decomposed as shown in Figure 4.9 in N&C, and the $C(V)$ are decomposed as shown in the first part of the answer.

+ +

This method allows decomposing a general $n$-qubit unitary gate $U$ using only $\text{CNOT}$ and single-qubit gates. +You may then go further and generalise this to find a decomposition for the case of multiple control qubits. +For this you only now need a way to decompose the Toffoli gates, which is again found in Figure 4.9 of N&C.

+",55,,55,,3/26/2018 19:34,3/26/2018 19:34,,,,2,,,,CC BY-SA 3.0 +1355,2,,1353,3/27/2018 1:42,,1,,"

The question here seems to be: ""can a classical computer be more efficient by simulating a quantum computer?"" and ""what research has been done on this?""

+ +

I think it's important, first, to point out that no one is 100% sure that a quantum computer is actually better than a classical computer; we don't even know whether we have the fastest possible classical or quantum algorithms for any particular problem, and so forth.

+ +

I found an article from October 2017 that details an experiment IBM did simulating a 56 qubit quantum computer on a supercomputer. Here's what the study author said:

+ +
+

For instance, whereas a perfect 56-qubit quantum computer can perform the experiments ""in 100 microseconds or less, we took two days, so a factor of a billion times slower""

+
+ +

(See their paper on arXiv for more information.) I also found a paper submitted to arXiv in February of 2018 which simulates a 64 qubit quantum computer, building on the work of IBM. They also estimate a 72 qubit circuit could be simulated.

+ +

What seems to be prevalent in all of this, though, is that these simulations are meant to help compare against quantum computing results and runtimes, and none of them claims to show quantum computing to be ""useless"" or ""replicable"". So, my final answer would be no, this is not a thing.

+",91,,,,,3/27/2018 1:42,,,,6,,,,CC BY-SA 3.0 +1356,1,,,3/27/2018 17:09,,14,629,"

Quantum error correction is a fundamental aspect of quantum computation, without which large-scale quantum computations are practically unfeasible.

+ +

One aspect of fault-tolerant quantum computing that is often mentioned is that each error-correction protocol has associated an error rate threshold. +Basically, for a given computation to be protectable against errors via a given protocol, the error rate of the gates must be below a certain threshold.

+ +

In other words, if the error rates of single gates are not low enough, then it is not possible to apply error-correction protocols to make the computation more reliable.

+ +

Why is this? Why is it not possible to reduce error rates that are not already very low to begin with?

+",55,,26,,12/23/2018 14:14,12/23/2018 14:14,Why do error correction protocols only work when the error rates are already significantly low to begin with?,,5,2,,,,CC BY-SA 4.0 +1358,2,,1356,3/27/2018 18:35,,2,,"

To me there seem to be two parts of this question (one more related to the title, one more related to the question itself):

+ +

1) Up to what amount of noise are error correction codes effective?
2) With what amount of imperfection in the gates can we implement fault-tolerant quantum computation?

+ +

Let me first stress the difference: quantum error correction codes can be used in many different scenarios, for example to correct for losses in transmissions. Here the amount of noise mostly depends on the length of the optical fibre and not on the imperfection of the gates. However, if we want to implement fault-tolerant quantum computation, the gates are the main source of noise.

+ +

On 1)

+ +

Error correction works for large error rates (smaller than $1/2$). Take for example the simple 3 qubit repetition code. The logical error rate is just the probability for the majority vote to be wrong (the orange line is $f(p)=p$ for comparison):

+ +

+ +

So whenever the physical error rate $p$ is below $1/2$, the logical error rate is smaller than $p$. Note, however, that it is particularly effective for small $p$, because the code changes the rate from $\mathcal{O}(p)$ to $\mathcal{O}(p^2)$ behaviour.
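As a quick sanity check of the majority-vote failure probability above, here is a short Python sketch (the function name and the sample values of $p$ are my own choices):

```python
# Logical error rate of the 3-qubit repetition code under independent
# bit-flip errors with physical error rate p. The majority vote fails
# exactly when 2 or 3 of the qubits flip.
def logical_error_rate(p):
    return 3 * p**2 * (1 - p) + p**3

# Below p = 1/2 the code helps; the improvement is strongest for small p,
# where the leading behaviour is 3p^2.
for p in (0.01, 0.1, 0.4, 0.5):
    print(p, logical_error_rate(p))
```

At $p = 1/2$ the logical and physical rates coincide, and for smaller $p$ the logical rate drops quadratically, matching the plot described above.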

+ +

On 2)

+ +

We want to perform arbitrarily long quantum computations with a quantum computer. However, the quantum gates are not perfect. In order to cope with the errors introduced by the gates, we use quantum error correction codes. This means that one logical qubit is encoded into many physical qubits. This redundancy allows us to correct for a certain amount of errors on the physical qubits, such that the information stored in the logical qubit remains intact. Bigger codes allow for longer calculations to still be accurate. However, larger codes involve more gates (for example more syndrome measurements) and these gates introduce noise. You see there is some trade-off here, and which code is optimal is not obvious.
If the noise introduced by each gate is below some threshold (the fault-tolerance or accuracy threshold), then it is possible to increase the code size to allow for arbitrarily long calculations. This threshold depends on the code we started with (usually it is iteratively concatenated with itself). There are several ways to estimate this value. Often it is done by numerical simulation: Introduce random errors and check whether the calculation still worked. This method typically gives threshold values which are too high. There are also some analytical proofs in the literature, for example this one by Aliferis and Cross.

+",104,,104,,3/28/2018 23:25,3/28/2018 23:25,,,,5,,,,CC BY-SA 3.0 +1359,1,1361,,3/27/2018 20:34,,32,2111,"

The standard popular-news account of quantum computing is that a quantum computer (QC) would work by splitting into exponentially many noninteracting parallel copies of itself in different universes and having each one attempt to verify a different certificate, then at the end of the calculation, the single copy that found a valid certificate ""announces"" its solution and the other branches magically vanish.

+ +

People who know anything about theoretical quantum computation know that this story is absolute nonsense, and that the rough idea described above more closely corresponds to a nondeterministic Turing machine (NTM) than to a quantum computer. Moreover, the complexity class of problems efficiently solvable by NTMs is NP and by QCs is BQP, and these classes are not believed to be equal.

+ +

People trying to correct the popular presentation rightfully point out that the simplistic ""many-worlds"" narrative greatly overstates the power of QCs, which are not believed to be able to solve (say) NP-complete problems. They focus on the misrepresentation of the measurement process: in quantum mechanics, which outcome you measure is determined by the Born rule, and in most situations the probability of measuring an incorrect answer completely swamps the probability of measuring the right one. (And in some cases, such as black-box search, we can prove that no clever quantum circuit can beat the Born rule and deliver an exponential speedup.) If we could magically ""decide what to measure"", then we would be able to efficiently solve all problems in the complexity class PostBQP, which is believed to be much larger than BQP.

+ +

But I've never seen anyone explicitly point out that there is another way in which the popular characterization is wrong, which goes in the other direction. BQP is believed to be not a strict subset of NP, but instead incomparable to it. There exist problems like Fourier checking which are believed to not only lie outside of NP, but in fact outside of the entire polynomial hierarchy PH. So with respect to problems like these, the popular narrative actually understates rather than overstates the power of QCs.

+ +

My naive intuition is that if we could ""choose what to measure"", then the popular narrative would be more or less correct, which would imply that these super-quantum-computers would be able to efficiently solve exactly the class NP. But we believe that this is wrong; in fact PostBQP=PP, which we believe to be a strict superset of NP.

+ +

Is there any intuition for what's going on behind the scenes that allows a quantum computer to be (in some respects) more powerful than a nondeterministic Turing machine? Presumably this ""inherently quantum"" power, when combined with postselection (which in a sense NTMs already have) is what makes a super-QC so much more powerful than a NTM. (Note that I'm looking for some intuition that directly contrasts NTMs and QCs with postselection, without ""passing through"" the classical complexity class PP.)

+",551,,1847,,4/27/2018 16:51,4/27/2018 16:51,Why is a quantum computer in some ways more powerful than a nondeterministic Turing machine?,,2,0,,,,CC BY-SA 3.0 +1360,1,1382,,3/27/2018 21:49,,5,121,"

There are different theoretical models for quantum computing like the circuit model or the model of adiabatic quantum computers.

+ +

Between which of these models exist polynomial-time reductions?

+ +

Note that this question does not aim to cover physical implementations of quantum computers which are already discussed here.

+",673,,104,,3/28/2018 12:25,3/28/2018 12:25,Which theoretical models for quantum computing are polynomial-time equivalent?,,1,2,,04-02-2018 08:55,,CC BY-SA 3.0 +1361,2,,1359,3/27/2018 21:51,,18,,"

From a pseudo-foundational standpoint, the reason why BQP is a differently powerful (to coin a phrase) class than NP, is that quantum computers can be considered as making use of destructive interference.

+ +

Many different complexity classes can be described in terms of (more or less complicated properties of) the number of accepting branches of an NTM. Given an NTM in 'normal form', meaning that the set of computational branches forms a complete binary tree (or something similar to it) of some polynomial depth, we may consider classes of languages defined by making the following distinctions:

+ +
    +
  • Is the number of accepting branches zero, or non-zero? (A characterisation of NP.)
  • +
  • Is the number of accepting branches less than the maximum, or exactly equal to the maximum? (A characterisation of coNP.)
  • +
  • Is the number of accepting branches at most one-third, or at least two-thirds, of the total? (A characterisation of BPP.)
  • +
  • Is the number of accepting branches less than one-half, or at least one-half, of the total? (A characterisation of PP.)
  • +
  • Is the number of accepting branches different from exactly half, or equal to exactly half, of the total? (A characterisation of a class called C=P.)
  • +
+ +

These are called counting classes, because in effect they are defined in terms of the count of accepting branches.

+ +

Interpreting the branches of an NTM as randomly generated, they are questions about the probability of acceptance (even if these properties are not efficiently testable with any statistical confidence). A different approach to describing complexity classes is to consider instead the gap between the number of accepting branches and the number of rejecting branches of an NTM. If counting the cumulation of NTM computational branches corresponds to probabilities, one could suggest that canceling accepting branches against rejecting branches models the cancellation of computational 'paths' (as in sum-over-paths) in quantum computation — that is, as modeling destructive interference.

+ +

The best known upper bounds for BQP, namely AWPP and PP, are readily definable in terms of 'acceptance gaps' in this way. The class NP, however, does not have such an obvious characterisation. Furthermore, many of the classes which one obtains from definitions in terms of acceptance gaps appear to be more powerful than NP. One could take this to indicate that 'nondeterministic destructive interference' is a potentially more powerful computational resource than mere nondeterminism; so that even if quantum computers do not take full advantage of this computational resource, it may nevertheless resist easy containment in classes such as NP.

+",124,,,,,3/27/2018 21:51,,,,5,,,,CC BY-SA 3.0 +1363,2,,1359,3/28/2018 5:45,,-1,,"

This answer was 'migrated' from when this question was asked on Computer Science (Author remains the same)

+ +
+ +

Well, one main reason is that there aren't any quantum algorithms that solve NP-hard problems in polynomial time.

+ +

Another is that adiabatic quantum annealing (as in the D-Wave) can only barely beat classical simulated annealing.

+ +

Also, most researchers think P$\neq$NP. A lot believe P$=$BQP. However, P$\neq$PostBQP. Is PostBQP$\neq$NP now contradictory? No. We only know that P$=$NP is a weaker statement than (not necessarily implying more!) PostBQP$=$P! So, why all the fuss about a question harder than P vs. NP!

+ +

As for why to believe P$=$BQP, some believe any improvement will not be asymptotic, but merely a constant factor, as with differing implementations.

+ +

So, there are some reasons to believe PostBQP$\neq$NP. But this is all speculation and likely remains speculation for a while. You can believe whatever you want, for now, at least.

+ +
+ +
+

There exist problems like Fourier checking which are believed to not only lie outside of NP, but in fact outside of the entire polynomial hierarchy. So with respect to problems like these, the popular narrative actually understates rather than overstates the power of QCs.

+
+ +

As for this, I haven't seen a result that states a quantum computer can solve this efficiently! Also, the fact that the machine can solve weird problems fast (simulating itself in $O(n)$, for example) isn't more surprising than a waterfall simulating itself in $O(n)$ ($n$ being the number of simulation steps).

+",253,,,,,3/28/2018 5:45,,,,0,,,,CC BY-SA 3.0 +1364,1,,,3/28/2018 5:59,,4,104,"

The main reason to start with Post Quantum Crypto (PQC) right now is that creating strong crypto, good implementations and accepted standards takes a very long time. Right now, most PQC is in the 'crypto' stage or starting to enter the 'implementation' stage.

+ +

I'm wondering whether, given recent advances in constructing quantum computers, the PQC initiative will be 'fast enough'.

+ +

In particular, I'd like to know whether PQC that can replace factoring-based cryptography will be widely deployed in practice before...:

+ +
    +
  1. Government agencies can efficiently factor using Shor's algorithm
  2. +
  3. Serious hackers and medium sized companies can factor using Shor.
  4. +
  5. Script kiddies can run Shor's algorithm
  6. +
+ +
+ +

To clarify, I am looking for literature or analyses based on history about deployment speed of cryptographical defenses and compare this with current analysis on the predicted power of quantum computers.

+ +

For instance, when will a good lattice based cryptosystem have a 'mainstream' implementation?

+",253,,253,,3/28/2018 9:41,3/28/2018 9:41,Will post quantum crypto come soon enough?,,0,0,,3/28/2018 8:49,,CC BY-SA 3.0 +1365,1,1373,,3/28/2018 6:05,,28,31516,"

As per my limited understanding, a pure state is a quantum state where we have exact information about the quantum system, while a mixed state is a probabilistic combination of possible quantum states of the system.

+ +

However, it is mentioned that different distributions of pure states can generate equivalent mixed states. So how can a combination of exact information result in a combination of probabilities?

+",769,,26,,3/29/2018 18:07,1/14/2021 5:29,What's the difference between a pure and mixed quantum state?,,4,1,,,,CC BY-SA 3.0 +1366,1,1370,,3/28/2018 6:19,,10,1873,"

We have been reading about quantum computers being developed and tested in labs.

+ +

We also have quantum simulator programs that use a limited number of virtual qubits (up to 30-40 qubits if cloud-based), and we have started learning new quantum computing languages like Q#.

+ +

But do we really have actual commercial quantum computers ready with physical qubits?

+",769,,2687,,3/13/2019 14:09,3/13/2019 14:09,Do real commercial quantum computers exist?,,4,1,,,,CC BY-SA 4.0 +1367,1,1372,,3/28/2018 6:32,,49,2558,"

I come from a non-physics background and I am very much interested in pursuing Quantum Computing - especially how to program them. Any guidance on how to get started will be very helpful.

+",769,,26,,12/13/2018 19:39,10/29/2019 11:22,Programming quantum computers for non-physics majors,,6,0,,,,CC BY-SA 3.0 +1368,2,,1366,3/28/2018 6:33,,4,,"

Ready for useful large scale applications? No.

+ +

However there do exist machines such as IBM's Quantum Experience with real physical qubits on a chip as well as Google announcing this month a new machine with 72 qubits.

+ +

D-Wave likes to tag itself as the first commercially available quantum computer however determining if it is indeed quantum seems to have been left as an exercise to the user. The D-Wave is available for commercial applications for a hefty price if you'd like to purchase a machine.

+",54,,,,,3/28/2018 6:33,,,,0,,,,CC BY-SA 3.0 +1369,2,,1366,3/28/2018 6:33,,1,,"

From an article I read a while ago, it seems like IBM has 20-qubit quantum computing as a service (QCAAS, as I'd like to call it).

+

They officially call it IBM Q: https://www.research.ibm.com/ibm-q/

+

Here's an excerpt from the linked article (Nov 10, 2017):

+
+

IBM makes 20 qubit quantum computing machine available as a cloud service

+

IBM has been offering quantum computing as a cloud service +since last year when it came out with a 5 qubit version of the +advanced computers.

+

Today, the company announced that it’s releasing +20-qubit quantum computers, quite a leap in just 18 months.

+
+",780,,-1,,6/18/2020 8:31,3/29/2018 1:50,,,,2,,,,CC BY-SA 3.0 +1370,2,,1366,3/28/2018 6:34,,10,,"

That depends on your definitions of ""commercial"" and of ""quantum computer"".

+ +

The company D-Wave Systems has been offering what they call quantum computers commercially since 2011. Many things seem to point towards those being adiabatic quantum computers (though people disagree on this). That doesn't quite fit the kind of quantum computers that are becoming popular right now though. You can check this question and its answers for more information on that discussion.

+ +

Companies such as IBM, on the other hand, are offering access to circuit model quantum computers (with physical qubits). IBM specifically does this in the IBM Q project via their website and a programming interface. They cooperate with commercial companies to explore possibilities in the quantum computing field. (A similar offer is available from Rigetti Computing via their Rigetti Forrest project.) That's not what most people would call ""commercial quantum computers"" though.

+ +

So the answer truly is: It depends.

+",138,,138,,07-05-2018 09:04,07-05-2018 09:04,,,,0,,,,CC BY-SA 4.0 +1372,2,,1367,3/28/2018 6:49,,24,,"

You could start with an introduction to quantum computers such as this one from Voxxed Days Vienna 2018 - it's intended for people with a programming background but little to no prior knowledge in quantum mechanics. After that you can check out the guides in the IBM Quantum Experience or those for the Microsoft Quantum Development Kit.

+ +

In addition to that, there are loads of videos on YouTube, for example, that can help you understand the topic more deeply.

+",138,,,,,3/28/2018 6:49,,,,0,,,,CC BY-SA 3.0 +1373,2,,1365,3/28/2018 6:58,,20,,"
+

""A pure state is the quantum state where we have exact information about the quantum system. And the mixed state is the combination of probabilities of the information about the quantum state ... different distributions of pure states can generate equivalent mixed states. I did not understand how a combination of exact information can result in the combination of probabilities."".

+
+ +

On a Bloch sphere, pure states are represented by a point on the surface of the sphere, whereas mixed states are represented by an interior point. The completely mixed state of a single qubit, $\frac{1}{2}I_{2}$, is represented by the center of the sphere, by symmetry. The purity of a state can be visualized as the degree to which it is close to the surface of the sphere.

+ +

In quantum mechanics, the state of a quantum system is represented by a state vector (or ket) $| \psi \rangle$. A quantum system with a state vector $| \psi \rangle$ is called a pure state. However, it is also possible for a system to be in a statistical ensemble of different state vectors: For example, there may be a 50% probability that the state vector is $| \psi_1 \rangle$ and a 50% chance that the state vector is $| \psi_2 \rangle$.

+ +

This system would be in a mixed state. The density matrix is especially useful for mixed states, because any state, pure or mixed, can be characterized by a single density matrix.

+ +

Mathematical description

+ +

The state vector $|\psi \rangle$ of a pure state completely determines the statistical behavior of a measurement. For concreteness, take an observable quantity, and let A be the associated observable operator that has a representation on the Hilbert space ${\mathcal {H}}$ of the quantum system. For any real-valued, analytical function $F$ defined on the real numbers, suppose that $F(A)$ is the result of applying $F$ to the outcome of a measurement. The expectation value of $F(A)$ is

+ +

$$\langle \psi | F(A) | \psi \rangle\, .$$

+ +

Now consider a mixed state prepared by statistically combining two different pure states $| \psi \rangle$ and $| \phi\rangle$, with the associated probabilities $p$ and $1 − p$, respectively. The associated probabilities mean that the preparation process for the quantum system ends in the state $|\psi \rangle$ with probability $p$ and in the state $|\phi\rangle$ with probability $1 − p$.
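To make the ensemble picture concrete, here is a small NumPy sketch (my own illustrative example, not part of the original answer) showing two different ensembles that produce the same density matrix, and how the purity $\mathrm{Tr}(\rho^2)$ distinguishes pure from mixed states:

```python
import numpy as np

# Two *different* ensembles of pure states giving the *same* density matrix:
# (a) a 50/50 mixture of |0> and |1>;  (b) a 50/50 mixture of |+> and |->.
ket0  = np.array([[1], [0]], dtype=complex)
ket1  = np.array([[0], [1]], dtype=complex)
plus  = (ket0 + ket1) / np.sqrt(2)
minus = (ket0 - ket1) / np.sqrt(2)

def proj(v):                        # the projector |v><v|
    return v @ v.conj().T

rho_a = 0.5 * proj(ket0) + 0.5 * proj(ket1)
rho_b = 0.5 * proj(plus) + 0.5 * proj(minus)

print(np.allclose(rho_a, rho_b))               # True: both equal I/2
print(np.trace(rho_a @ rho_a).real)            # purity 0.5 < 1, so mixed
print(np.trace(proj(ket0) @ proj(ket0)).real)  # purity 1.0, so pure
```

Both preparations are physically indistinguishable, which is exactly why the density matrix, rather than a single state vector, is the right description of a mixed state.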

+",278,,26,,3/28/2018 10:17,3/28/2018 10:17,,,,1,,,,CC BY-SA 3.0 +1374,1,1381,,3/28/2018 7:30,,11,736,"

I've stumbled myself upon this article on Wikipedia, which says:

+
+

Decoherence can be viewed as the loss of information from a system into the environment (often modeled as a heat bath), since every system is loosely coupled with the energetic state of its surroundings.

+

<...>

+

Decoherence represents a challenge for the practical realization of quantum computers, since such machines are expected to rely heavily on the undisturbed evolution of quantum coherences. Simply put, they require that coherent states be preserved and that decoherence is managed, in order to actually perform quantum computation.

+
+

(emphasis mine)

+

So I am wondering how can this loss of information be managed? Does this mean that it should be prevented completely, or is it necessary for quantum computing to actually allow some information loss in order to compute?

+",,user609,55,,5/31/2021 15:02,5/31/2021 15:02,How can quantum decoherence be managed?,,2,0,,,,CC BY-SA 3.0 +1375,2,,1367,3/28/2018 7:33,,16,,"

I think that quantum programmers won’t necessarily need to know about quantum physics and linear algebra. These are certainly things that will help broaden a quantum programmer’s knowledge, but they should not be regarded as prerequisites.

+ +

Even so, most resources to help a budding quantum programmer start with an assumption of linear algebra. The ones that don’t mostly focus on QISKit, the SDK for IBM’s quantum device (and some of them were written by me).

+ +

The simplest program you can come up with is a “Hello World”. How do you do that for quantum computers? My proposal is a superposition of emoticons.
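As a rough sketch of what a quantum ""Hello World"" boils down to mathematically, here is the underlying linear algebra in plain NumPy (QISKit itself is not used here, so this is only an illustration, not the proposal linked above): a Hadamard gate turns $|0\rangle$ into an equal superposition.

```python
import numpy as np

# The Hadamard gate and the |0> state.
H    = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
ket0 = np.array([1, 0])

state = H @ ket0                  # (|0> + |1>) / sqrt(2)
probs = np.abs(state) ** 2        # Born rule: measurement probabilities

print(state)                      # both amplitudes equal 1/sqrt(2)
print(probs)                      # [0.5, 0.5] -- a fair quantum coin
```

Measuring this state gives each outcome with probability one half, which is the simplest genuinely quantum behaviour a program can exhibit.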

+ +

Once you’ve moved beyond “Hello World” in quantum programming, you’ll want to do something more complex. Often people make simple games. So let’s do that with a quantum computer. I made Battleships.

+ +

You will find these and many more examples of quantum programming at the QISKit tutorial. I think that is probably the best place for new quantum programmers to see what can be done, and how to do it.

+",409,,409,,3/28/2018 9:36,3/28/2018 9:36,,,,0,,,,CC BY-SA 3.0 +1376,1,,,3/28/2018 8:15,,7,382,"

Reading this entertaining piece of a QC enthusiast mining bitcoins with a Quantum Computer (although efficiently mining bitcoins with the current state of QCs is far-fetched, it is quite possible to be done in the next few years), I wonder how exactly will technologies already using Blockchain adapt to the change?

+ +

Are they currently being worked upon? How would post-quantum cryptography integrate with existing tech secured using today's cryptography algorithms?

+",747,,26,,3/28/2018 9:39,3/28/2018 15:35,How would Blockchain technologies change to survive a post-quantum world?,,1,1,,3/28/2018 15:31,,CC BY-SA 3.0 +1377,2,,1374,3/28/2018 8:19,,0,,"

Yes, currently the loss of information is being managed by means of quantum error correction protocols.

+ +

Ideally, quantum decoherence and eventual loss of information should be prevented. However, in real-world scenarios, it is hard to completely isolate quantum systems from their environment.

+",812,,26,,5/17/2019 22:29,5/17/2019 22:29,,,,0,,,,CC BY-SA 4.0 +1379,2,,1365,3/28/2018 10:11,,5,,"

Pure state: a system whose state is unequivocally defined by a single state vector. This state vector carries complete information about the system.

+ +

Mixed state: a system whose state cannot be defined unequivocally by a single state vector. We have only limited, or no, knowledge about the state of the system.

+ +

In reality, we often deal with ensembles of systems and repeat the experiment. In such cases, it might be difficult to prepare the system in exactly the same initial state every time. In such scenarios, mixed states come in handy.

+",812,,812,,3/28/2018 16:33,3/28/2018 16:33,,,,1,,,,CC BY-SA 3.0 +1380,2,,1366,3/28/2018 10:46,,0,,"

Commercially, no. But it is something that companies such as Intel have been working on. In-fact, Intel recently announced its new 49-qubit quantum chip & neuromorphic chip.

+",872,,,,,3/28/2018 10:46,,,,0,,,,CC BY-SA 3.0 +1381,2,,1374,3/28/2018 11:19,,5,,"

The quantum circuit model describes a quantum computer as a closed quantum system and assumes that there is a system which executes the circuit but is completely isolated from the rest of the universe. In the real world, however, there are no known mechanisms for truly isolating a quantum system from its environment. Real quantum systems are open quantum systems. Open quantum systems couple to their environment and destroy the quantum information in the system through decoherence. When examining the simple evolution of a single quantum system, this system-environment coupling appears to cause errors in the quantum system’s evolution (which would not be unitary in this case).

+ +

A coin has two states, and makes a good bit but a poor qubit because it cannot remain in a superposition of head and tail for very long, as it is a classical object. A single nuclear spin can be a very good qubit, because a superposition of being aligned with or against an external magnetic field can last for a long time, even days. But it can be difficult to build a quantum computer from nuclear spins because their coupling is so small that it is hard to measure the orientation of a single nucleus. Note that the constraints are opposing in general: a quantum computer has to be well isolated in order to retain its quantum properties, but at the same time its qubits have to be accessible so that they can be manipulated to perform computation and read out the results. A realistic implementation must strike a balance between these constraints.

+ +

The first step towards solving the decoherence problem was taken in 1995 when Shor and Steane independently discovered a quantum analogue of classical error correcting codes. Shor discovered that by encoding quantum information, this information could become more resistant to interaction with its environment. Following this discovery a rigorous theory of quantum error correction was developed. Many different quantum error correcting codes were discovered and this further led to a theory of fault-tolerant quantum computation. Fully fault-tolerant quantum computation describes methods for dealing with system-environment coupling as well as dealing with faulty control of the quantum computer.

+ +

Of particular significance was the discovery of the threshold theorem for fault-tolerant quantum computation. The threshold theorem states that if the decoherence interactions are of a certain form and are weaker than the controlling interactions by a certain ratio, quantum computation to any desired precision can be achieved. The threshold theorem for fault-tolerance thus declares a final solution to the question of whether there are theoretical limits to the construction of robust quantum computers.

+ +

Reference: Decoherence, Control, and Symmetry in Quantum Computers - D. Bacon

+",26,,,,,3/28/2018 11:19,,,,0,,,,CC BY-SA 3.0 +1382,2,,1360,3/28/2018 11:36,,4,,"

A non-exhaustive list of theoretical models of quantum computation are provided as answers to another question: ""What are the methods of quantum computation?"".

+ +

As to which models are polynomial-time equivalent — the following is an incomplete list of models which are provably universal for polynomial-time quantum computation, assuming perfect control:

+ +
    +
  • The unitary circuit model is polynomial-time equivalent to adiabatic quantum computation [arXiv:quant-ph/0405098];
  • +
  • The unitary circuit model is polynomial-time equivalent to quantum circuits with intermediate measurements (by the principle of deferred measurement);
  • +
  • The one-way measurement-based model is polynomial-time equivalent to unitary circuits.
  • +
+",124,,,,,3/28/2018 11:36,,,,0,,,,CC BY-SA 3.0 +1383,1,1406,,3/28/2018 11:49,,23,4639,"

One of the common claims about quantum computers is their ability to ""break"" conventional cryptography. This is because much of conventional cryptography is based on the difficulty of finding the prime factors of large numbers, something which is computationally expensive for conventional computers, but which is a supposedly trivial problem for a quantum computer.

+ +

What property of quantum computers makes them so capable of this task where conventional computers fail and how are qubits applied to the problem of calculating prime factors?

+",842,,26,,12/23/2018 12:12,12/23/2018 12:12,What makes quantum computers so good at computing prime factors?,,3,0,,,,CC BY-SA 3.0 +1385,1,1386,,3/28/2018 12:22,,37,12161,"

This blogpost by Scott Aaronson is a very useful and simple explanation of Shor's algorithm.

+ +

I'm wondering if there is such an explanation for the second most famous quantum algorithm: Grover's algorithm to search an unordered database of size $n$ in $O(\sqrt{n})$ time.

+ +

In particular, I'd like to see some understandable intuition for the initially surprising result of the running time!

+",253,,,user609,3/28/2018 13:05,9/30/2020 16:44,Is there a layman's explanation for why Grover's algorithm works?,,4,1,,,,CC BY-SA 3.0 +1386,2,,1385,3/28/2018 13:22,,30,,"

There is a good explanation by Craig Gidney here (he also has other great content, including a circuit simulator, on his blog).

+ +

Essentially, Grover's algorithm applies when you have a function which returns True for one of its possible inputs, and False for all the others. The job of the algorithm is to find the one that returns True.

+ +

To do this we express the inputs as bit strings, and encode these using the $|0\rangle$ and $|1\rangle$ states of a string of qubits. So the bit string 0011 would be encoded in the four qubit state $|0011\rangle$, for example.

+ +

We also need to be able to implement the function using quantum gates. Specifically, we need to find a sequence of gates that will implement a unitary $U$ such that

+ +

$U | a \rangle = - | a \rangle, \,\,\,\,\,\,\,\,\,\,\,\,\, U | b \rangle = | b \rangle $

+ +

where $a$ is the bit string for which the function would return True and $b$ is any for which it would return False.

+ +

If we start with a superposition of all possible bit strings, which is pretty easy to do by just Hadamarding everything, all inputs start off with the same amplitude of $\frac{1}{\sqrt{2^n}}$ (where $n$ is the length of the bit strings we are searching over, and therefore the number of qubits we are using). But if we then apply the oracle $U$, the amplitude of the state we are looking for will change to $-\frac{1}{\sqrt{2^n}}$.

+ +

This is not any easily observable difference, so we need to amplify it. To do this we use the Grover Diffusion Operator, $D$. The effect of this operator is essentially to look at how each amplitude is different from the mean amplitude, and then invert this difference. So if a certain amplitude was a certain amount larger than the mean amplitude, it will become that same amount less than the mean, and vice-versa.

+ +

Specifically, if you have a superposition of bit strings $b_j$, the diffusion operator has the effect

+ +

$D: \,\,\,\, \sum_j \alpha_j \, | b_j \rangle \,\,\,\,\,\, \mapsto \,\,\,\,\,\, \sum_j (2\mu \, - \, \alpha_j) \, | b_j \rangle$

+ +

where $\mu = \frac{1}{2^n}\sum_j \alpha_j$ is the mean amplitude. So any amplitude $\mu + \delta$ gets turned into $\mu - \delta$. To see why it has this effect, and how to implement it, see these lecture notes.
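The elementwise effect of $D$ can be sketched directly. In this toy example (the number of qubits and the marked index are arbitrary choices of mine), the oracle has already flipped the sign of one amplitude:

```python
import numpy as np

# Inversion about the mean: alpha_j -> 2*mu - alpha_j.
# Start from a uniform superposition over 8 items in which the oracle
# has flipped the sign of the marked item (index 5 here).
n = 3
amps = np.full(2**n, 1 / np.sqrt(2**n))
amps[5] *= -1                     # state after the oracle

mu = amps.mean()
amps = 2 * mu - amps              # one diffusion step

print(amps[5], amps[0])           # marked amplitude is now much larger
```

After this single step the marked amplitude jumps from $-1/\sqrt{8} \approx -0.354$ to about $0.884$, while the others shrink slightly, with the total probability still summing to one.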

+ +

Most of the amplitudes will be a tiny bit larger than the mean (due to the effect of the single $-\frac{1}{\sqrt{2^n}}$), so they will become a tiny bit less than the mean through this operation. Not a big change.

+ +

The state we are looking for will be affected more strongly. Its amplitude is a lot less than the mean, and so will become a lot greater than the mean after the diffusion operator is applied. The end effect of the diffusion operator is therefore to cause an interference effect on the states which skims an amplitude of $\frac{1}{\sqrt{2^n}}$ from all the wrong answers and adds it to the right one. By repeating this process, we can quickly get to the point where our solution stands out from the crowd so much that we can identify it.
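Putting the oracle and diffusion steps together, a minimal NumPy simulation (a toy construction of my own, not taken from the linked notes) shows the marked amplitude being amplified in roughly $\frac{\pi}{4}\sqrt{2^n}$ iterations:

```python
import numpy as np

# Toy Grover search over N = 2**n items with a single marked item.
# Oracle: flip the marked amplitude's sign.
# Diffusion: D = 2|s><s| - I, i.e. inversion about the mean, where
# |s> is the uniform superposition.
n, marked = 6, 42
N = 2**n

state = np.full(N, 1 / np.sqrt(N))           # uniform superposition
D = 2 * np.full((N, N), 1 / N) - np.eye(N)   # diffusion operator

iterations = int(round(np.pi / 4 * np.sqrt(N)))
for _ in range(iterations):
    state[marked] *= -1                       # oracle call
    state = D @ state                         # diffusion step

print(iterations, state[marked] ** 2)         # 6 iterations, prob ~ 0.999
```

Only about $\sqrt{N}$ oracle calls are needed before measuring returns the marked item with near certainty, versus roughly $N/2$ classical queries on average, which is the quadratic speedup discussed above.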

+ +

Of course, this all goes to show that all the work is done by the diffusion operator. Searching is just an application that we can connect to it.

+ +

See the answers to other questions for details on how the functions and diffusion operator are implemented.

+",409,,8158,,08-01-2019 16:59,08-01-2019 16:59,,,,0,,,,CC BY-SA 4.0 +1387,2,,1367,3/28/2018 14:19,,6,,"

It's not necessary to fully understand Quantum Mechanics to understand the theory behind QC. I'm a math BSc/programmer and I read about the topic and also did the old edX QC course (unfortunately it's not available, but there are others). I think I can say that I understand the gist of QC, but I know next to nothing about Quantum Mechanics.

+ +

The key part is that Quantum Computing uses mostly linear algebra, which is based on math that is commonly taught at engineering/computer science undergraduate studies. Contrast this to real Quantum Mechanics that uses infinite-dimensional spaces (or functional analysis, if you'd like).

+ +

If you feel comfortable with these undergraduate math topics you can check out Susskind's Quantum Mechanics: Theoretical Minimum - it isn't actually about 'real' quantum mechanics, it's mostly stuff that is useful for QC. BTW the whole Theoretical Minimum book series is aimed at people who know some math (like computer scientists, or engineering majors), and would like to know more about physics. There are also lots of courses online, for example, there are new courses on edX, but I didn't do any of them, so I can't recommend one.

+",949,,26,,10/15/2019 4:33,10/15/2019 4:33,,,,0,,,,CC BY-SA 4.0 +1388,2,,135,3/28/2018 14:31,,9,,"

The complexity class of decision problems efficiently solvable on a classical computer is called BPP (or P, if you don't allow randomness, but these are suspected to be equal anyway). The class of problems efficiently solvable on a quantum computer is called BQP. If a problem exists for which a quantum computer provides an exponential speedup, then this would imply that BPP $\neq$ BQP. However, the BQP versus BPP question is a major open question in theoretical computer science, so no such problem has been proven to exist (and if you find one, you'll definitely win all kinds of awards).

+ +

On the other hand, as the other answer mentions, there are black-box (""oracle"") problems relative to which we know that $\textbf{BPP}^O \neq \textbf{BQP}^O$, like Simon's algorithm. This provides evidence, though not a proof, that BPP $\neq$ BQP in the real world.

+",551,,551,,12/24/2018 22:51,12/24/2018 22:51,,,,1,,,,CC BY-SA 4.0 +1389,2,,3,3/28/2018 14:40,,6,,"

To follow up on Ella Rose's answer: most practical encryption schemes used today (e.g. Diffie-Hellman, RSA, elliptic curve, lattice-based) are centered around the difficulty of solving the hidden subgroup problem (HSP). However, the first three are centered around the HSP for abelian groups. The HSP for abelian groups can be efficiently solved by the quantum Fourier transform, which is implemented e.g. by Shor's algorithm. They are therefore vulnerable to attack by a quantum computer. Most lattice-based methods, on the other hand, revolve around the HSP for dihedral groups, which are nonabelian. Quantum computers are not believed to be able to efficiently solve the nonabelian HSP, so these algorithms should be able to implement post-quantum cryptography.

+",551,,551,,3/28/2018 22:10,3/28/2018 22:10,,,,0,,,,CC BY-SA 3.0 +1390,1,1395,,3/28/2018 14:50,,15,2628,"

In regular computers, bits may be physically represented using a wide variety of two-state devices, such as polarity of magnetization of a certain area of a ferromagnetic film or two levels of electric charge in a capacitor.

+ +

But qubits have a property that they can be in a superposition of both states at the same time. I've seen this question's answers, which explain how can a qubit be represented, or modeled using a regular computer.

+ +

So I want to know what can be used (and is used by companies like D-Wave) to represent a qubit in a real physical quantum computer?

+",,user609,26,,12/13/2018 19:40,12/13/2018 19:40,What is the physical representation of a qubit?,,1,0,,,,CC BY-SA 3.0 +1391,2,,1376,3/28/2018 15:10,,7,,"

Bitcoin uses elliptic-curve cryptography to sign transactions, which can easily be broken by Shor's algorithm.

+ +

I didn't actually read the article because it looked kind of dumb, but I gathered that the author proposed using Grover's algorithm to speed up the mining process by looking for hashes more efficiently. If you had a functioning quantum computer, then I think it would be more efficient to forget about mining and instead just transfer people's bitcoins directly into your own account. A useful quantum computer would completely destroy the entire Bitcoin fundamental model.

+ +

I believe that other cryptocurrencies, like Cardano, are already designed to be secure against quantum attacks, to prevent this issue.

+",551,,551,,3/28/2018 15:35,3/28/2018 15:35,,,,0,,,,CC BY-SA 3.0 +1392,2,,1383,3/28/2018 15:28,,2,,"

First of all, factoring can be done on a quantum computer (with usage of 'unitary' quantum gates) by means of Shor's algorithm.

+ +

An explanation that doesn't require advanced mathematics nor any advanced knowledge of physics is this blog post by Scott Aaronson, titled ""Shor, I'll do it.""

+ +

A brief summary of his ideas is the following:

+ +

First, we represent our quantum gates/qubits with clocks (using the 'complex numbers as arrows (i.e. elements of $\mathbb{R}^2$ with weird multiplication), representation')

+ +

Then, we note that a CS researcher has very irregular sleeping periods. To find this strange period, we use the clocks. Then, we note that this period finding can be used to factor integers (using a similar construction as in the randomized Pollard -$\rho$ algorithm)

+ +

Hence, our strange quantum clocks can help us factor efficiently!

+",253,,,,,3/28/2018 15:28,,,,0,,,,CC BY-SA 3.0 +1395,2,,1390,3/28/2018 15:38,,15,,"

This section on Wikipedia collects the most important ongoing attempts to physically implement qubits.

+ +
+

For physically implementing a quantum computer, many different candidates are being pursued, among them (distinguished by the physical system used to realize the qubits):

+ +
    +
  • Superconducting quantum computing (qubit implemented by the state of small superconducting circuits (Josephson junctions))

  • +
  • Trapped ion quantum computer (qubit implemented by the internal state of trapped ions)

  • +
  • Optical lattices (qubit implemented by internal states of neutral atoms trapped in an optical lattice)

  • +
  • Quantum dot computer, spin-based (e.g. the Loss-DiVincenzo quantum computer) (qubit given by the spin states of trapped electrons)

  • +
  • Quantum dot computer, spatial-based (qubit given by electron position in double quantum dot)

  • +
  • Nuclear magnetic resonance on molecules in solution (liquid-state NMR) (qubit provided by nuclear spins within the dissolved molecule)

  • +
  • Solid-state NMR Kane quantum computers (qubit realized by the nuclear spin state of phosphorus donors in silicon)

  • +
  • Electrons-on-helium quantum computers (qubit is the electron spin)

  • +
  • Cavity quantum electrodynamics (CQED) (qubit provided by the internal state of trapped atoms coupled to high-finesse cavities)

  • +
  • Molecular magnet (qubit given by spin states)

  • +
  • Fullerene-based ESR quantum computer (qubit based on the electronic spin of atoms or molecules encased in fullerenes)

  • +
  • Linear optical quantum computer (qubits realized by processing states of different modes of light through linear elements e.g. mirrors, beam splitters and phase shifters)

  • +
  • Diamond-based quantum computer (qubit realized by electronic or nuclear spin of nitrogen-vacancy centers in diamond)

  • +
  • Bose–Einstein condensate-based quantum computer

  • +
  • Transistor-based quantum computer – string quantum computers with entrainment of positive holes using an electrostatic trap

  • +
  • Rare-earth-metal-ion-doped inorganic crystal based quantum computers (qubit realized by the internal electronic state of dopants in optical fibers)

  • +
  • Metallic-like carbon nanospheres based quantum computers

  • +
+ +

The large number of candidates demonstrates that the topic, in spite of rapid progress, is still in its infancy. There is also a vast amount of flexibility.

+
+",26,,26,,04-08-2018 14:20,04-08-2018 14:20,,,,0,,,,CC BY-SA 3.0 +1402,2,,1383,3/28/2018 18:41,,4,,"

What makes quantum computers good at factoring large numbers is their ability to solve the period finding problem (and a mathematical fact that relates finding prime factors to period finding). That's basically Shor's algorithm in a nutshell. Yet this raises the question of what makes quantum computers good at period finding.

+ +

At the core of period finding is the ability to calculate a function's value over its entire domain (that is, for every conceivable input). This is called quantum parallelism. This in itself is not good enough, but together with interference (the ability to combine results from quantum parallelism in a certain way), it is.

+ +

I suppose this answer might be a bit of a cliffhanger: How does one use these abilities to actually factor? Find the answer to that in the Wikipedia article on Shor's algorithm.

+",,user1039,,user1039,3/28/2018 19:33,3/28/2018 19:33,,,,0,,,,CC BY-SA 3.0 +1403,2,,1385,3/28/2018 18:45,,3,,"

The simple explanation for how (and hence why) Grover's algorithm works is that a quantum gate can only reshuffle (or otherwise redistribute) probability amplitudes. Using an initial state with equal probability amplitudes for all states of the computational basis, one starts with an amplitude of $1/\sqrt{N}$. Roughly this much can be ""added"" to the desired (solution) state in each iteration, such that after about $\frac{\pi}{4}\sqrt{N}$ iterations one arrives at a probability amplitude close to $1$, meaning the desired state has been distilled.

+",,user1039,,,,3/28/2018 18:45,,,,0,,,,CC BY-SA 3.0 +1404,1,1407,,3/28/2018 19:04,,15,3154,"

Deep Learning (multiple layers of artificial neural networks used in supervised and unsupervised machine learning tasks) is an incredibly powerful tool for many of the most difficult machine learning tasks: image recognition, video recognition, speech recognition, etc. Given that it is currently one of the most powerful machine learning algorithms, and Quantum Computing is generally regarded as a game changer for certain very difficult computation tasks, I'm wondering if there has been any movement on combining the two.

+ +
    +
  • Could a deep learning algorithm run on a quantum computer?
  • +
  • Does it make sense to try?
  • +
  • Are there other quantum algorithms that would make deep learning irrelevant?
  • +
+",1044,,55,,09-01-2020 22:28,09-01-2020 22:28,Will deep learning neural networks run on quantum computers?,,4,1,,,,CC BY-SA 3.0 +1405,2,,1367,3/28/2018 19:10,,5,,"

Quantum computers are programmed via so-called quantum circuits (which evolving programming languages represent). These are a sequence of quantum gates plus the information on which quantum bits (qubits) they act.

+ +

The only thing you really need to know about quantum gates is that they represent rotations (in a higher dimensional space, so-called Hilbert space). Hence they are reversible: Quantum computers are programmed with reversible logic.

+ +

What do quantum gates rotate? It is the hyperspheres on whose surface qubit states live. Each state of the computational basis ($\left|00\right>$, $\left|01\right>$, $\left|10\right>$, $\left|11\right>$ for a 2-qubit system in the usual Dirac notation) gets a complex number as a coefficient or as a so-called probability amplitude. The basis vectors are orthogonal and span the state's Hilbert space, the probability amplitudes can be seen as coordinates in it. This is the picture in which quantum gates effect rotations. You will find that physicists often use a different picture, the Bloch sphere, for single qubit systems, in which quantum gates also cause rotations (but sometimes by a larger angle or by one that is omitted in that picture altogether).

+ +

All conventional logic can be implemented by a quantum computer by first expressing it in reversible logic (which may require ancilla bits). The classical NOT gate corresponds to the X quantum gate, but unlike the classical case where the only 1-bit reversible gates are the identity and the NOT gate, a quantum computer has four corresponding gates (X, Y, Z according to rotations on the Bloch sphere, plus the identity). Further, you can have rotations that rotate only by a fraction of how far these gates rotate; a few particularly interesting ones have special names and abbreviations such as the Hadamard gate or H gate that creates the equal superposition of all states.
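
+ +

As a minimal NumPy sketch of these correspondences (standard gate matrices; nothing hardware-specific):

```python
import numpy as np

# single-qubit gates as 2x2 unitaries acting on the amplitude vector (a0, a1)
X = np.array([[0, 1], [1, 0]])                # quantum analogue of NOT
Z = np.array([[1, 0], [0, -1]])               # phase flip
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard

ket0 = np.array([1.0, 0.0])
superpos = H @ ket0    # equal superposition (|0> + |1>) / sqrt(2)
```

All of these are reversible rotations: applying $H$ (or $X$, $Z$) twice gives back the identity.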

+ +

Unfortunately, early quantum software engineers will probably have to know a bit about the quantum computer hardware they will be using: Due to the arbitrarily and continuously choosable angle of a quantum gate's effective rotation, there is a sort-of analog element to quantum computers that necessarily creates errors (and physical quantum computers have even more error sources than just that). There's a way to deal with it, quantum error correction, which discretizes errors and corrects the most likely discretizations of them to achieve (ideally) arbitrarily complex computations with bounded errors. But optimization will likely mean that one quantum computer with one choice of quantum error correction will be more apt at certain quantum gates or even algorithms than others, in ways that are a bit more subtle than just looking up a speed factor.

+",,user1039,7429,,6/20/2019 7:07,6/20/2019 7:07,,,,1,,,,CC BY-SA 4.0 +1406,2,,1383,3/28/2018 19:11,,16,,"

The short answer

+ +

$\newcommand{\modN}[1]{#1\,\operatorname{mod}\,N}\newcommand{\on}[1]{\operatorname{#1}}$Quantum Computers are able to run subroutines of an algorithm for factoring, exponentially faster than any known classical counterpart. This doesn't mean classical computers CAN'T do it fast too, we just don't know as of today a way for classical algorithms to run as efficient as quantum algorithms

+ +

The long answer

+ +

Quantum Computers are good at Discrete Fourier Transforms. There's a lot at play here that isn't captured by just ""it's parallel"" or ""it's quick"", so let's get into the blood of the beast.

+ +

The factoring problem is the following: Given a number $N = pq$ where $p,q$ are primes, how do you recover $p$ and $q$? One approach is to note the following:

+ +

If I look at a number $\modN{x}$, then either $x$ shares a common factor with $N$, or it doesn't.

+ +

If $x$ shares a common factor, and isn't a multiple of $N$ itself, then we can easily ask for what the common factors of $x$ and $N$ are (through the Euclidean algorithm for greatest common factors).

+ +

Now a not so obvious fact: the set of all $x$ that don't share a common factor with $N$ forms a multiplicative group $\on{mod} N$. What does that mean? You can look at the definition of a group in Wikipedia here. Let the group operation be multiplication to fill in the details, but all we really care about here is the following consequence of that theory which is: the sequence

+ +

$$ \modN{x^0}, \quad\modN{x^1}, \quad\modN{x^2}, ... $$

+ +

is periodic, when $x,N$ don't share common factors (try $x = 2$, $N = 5$) to see it first hand as:

+ +

$$\newcommand{\mod}[1]{#1\,\operatorname{mod}\,5}
\mod1 = 1,\quad
\mod2 = 2,\quad
\mod4 = 4,\quad
\mod8 = 3,\quad
\mod{16} = 1.
$$

+ +

Now how many natural numbers $x$ less than $N$ don't share any common factors with $N$? That is answered by Euler's totient function, it's $(p-1)(q-1)$.

+ +

Lastly, tapping on the subject of group theory, the length of the repeating chains

+ +

$$ \modN{x^0}, \quad\modN{x^1}, \quad\modN{x^2}, ... $$

+ +

divides that number $(p-1)(q-1)$. So if you know the period of sequences of powers of $x \mod N$ then you can start to put together a guess for what $(p-1)(q-1)$ is. Moreover, If you know what $(p-1)(q-1)$ is, and what $pq$ is (that's N don't forget!), then you have 2 equations with 2 unknowns, which can be solved through elementary algebra to separate $p,q$.
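
+ +

A tiny classical sketch of this reduction, with brute-force order finding standing in for the quantum step (and the textbook gcd shortcut used instead of solving the two equations explicitly):

```python
from math import gcd

def order(x, N):
    # smallest r > 0 with x**r = 1 (mod N); brute force stands in
    # for the quantum period-finding subroutine
    r, y = 1, x % N
    while y != 1:
        y = (y * x) % N
        r += 1
    return r

def shor_classical(N, x):
    # assumes gcd(x, N) == 1; if r is even and x**(r//2) != -1 (mod N),
    # then gcd(x**(r//2) +- 1, N) yields nontrivial factors
    r = order(x, N)
    if r % 2:
        return None
    h = pow(x, r // 2, N)
    if h == N - 1:
        return None
    return sorted({gcd(h - 1, N), gcd(h + 1, N)})

print(shor_classical(15, 2))   # → [3, 5]
```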

+ +

Where do quantum computers come in? The period finding. There's an operation called a Fourier transform, which takes a function $g$ written as a sum of periodic functions $a_1 e_1 + a_2 e_2 ... $ where $a_i$ are numbers, $e_i$ are periodic functions with period $p_i$ and maps it to a new function $\hat{g}$ such that $ \hat{g}(p_i) = a_i$.

+ +

Computing the Fourier transform is usually introduced as an integral, but when you want to just apply it to an array of data (the Ith element of the array is $f(I)$) you can use this tool called a Discrete Fourier Transform which amounts to multiplying your ""array"" as if it were a vector, by a very big unitary matrix.

+ +

Emphasis on the word unitary: it's a rather technical property described here. But the key takeaway is the following:

+ +

In the world of physics, all operators obey the same general mathematical principle: unitarity.

+ +

So that means it's not unreasonable to replicate that DFT matrix operation as a quantum operator.

+ +

Now here is where it gets deep: an $n$-qubit array can represent $2^n$ possible array elements (consult anywhere online for an explanation of that or drop a comment).

+ +

And similarly an $n$-qubit quantum operator can act on that entire $2^n$-dimensional quantum space, and produce an answer that we can interpret.

+ +

See this Wikipedia article for more detail.

+ +

If we can do this Fourier transform on an exponentially large data set, using only $n$ Qubits, then we can find the period very quickly.

+ +

If we can find the period very quickly we can rapidly assemble an estimate for $(p-1)(q-1)$

+ +

If we can do that fast then given our knowledge of $N=pq$ we can take a stab at checking $p,q$.

+ +

That's what's going on here, at a very high level.

+",1034,,45,,04-05-2018 14:04,04-05-2018 14:04,,,,0,,,,CC BY-SA 3.0 +1407,2,,1404,3/28/2018 19:21,,8,,"
    +
  1. Yes, all classical algorithms can be run on quantum computers; moreover, any classical algorithm involving searching can get a quadratic boost (roughly $\sqrt{\text{original time}}$) from Grover's algorithm. An example that comes to mind is treating the fine tuning of neural network parameters as a ""search for coefficients"" problem.

  2. For the fact that there are clear computational gains in some processes: yes.

  3. Not that I know of. But someone with more expertise can chime in here if they want. The one thing that comes to mind: often we may use Deep Learning and other forms of Artificial Intelligence to study problems of chemistry and physics, because simulation is expensive or impractical. In this domain, Quantum Computers will likely slaughter their classical ancestors given their ability to natively simulate quantum systems (like those in Nuclear Chemistry) in effectively real time or faster.
+ +

Last I spoke with him, Mario Szegedy was interested in precisely this, there are probably a lot of other researchers too working on it right now.

+",1034,,1034,,3/28/2018 19:30,3/28/2018 19:30,,,,7,,,,CC BY-SA 3.0 +1408,2,,1356,3/28/2018 19:22,,2,,"

You need a surprisingly large number of quantum gates to implement a quantum error correcting code in a fault-tolerant manner. One part of the reason is that there are many errors to detect since a code that can correct all single qubit errors already requires 5 qubits and each error can be of three kinds (corresponding to unintentional X, Y, Z gates). Hence to even just correct any single qubit error, you already need logic to distinguish between these 15 errors plus the no-error situation: $XIIII$, $YIIII$, $ZIIII$, $IXIII$, $IYIII$, $IZIII$, $IIXII$, $IIYII$, $IIZII$, $IIIXI$, $IIIYI$, $IIIZI$, $IIIIX$, $IIIIY$, $IIIIZ$, $IIIII$ where $X$, $Y$, $Z$ are the possible single qubit errors and $I$ (identity) denotes the no-error-for-this-qubit situation.

+ +

The main part of the reason is, however, that you cannot use straight-forward error detection circuitry: Every CNOT (or every other nontrivial 2 or more qubit gate) forwards errors in one qubit to another qubit, which would be disastrous for the most trivial case of a single qubit error correcting code and still very bad for more sophisticated codes. Hence a fault-tolerant (useful) implementation of error correction needs even more effort than one might naively think.

+ +

With many gates per error correcting step, you can only permit a very low error rate per step. Here yet another problem arises: Since you may have coherent errors, you must be ready for the worst case that an error $\epsilon$ propagates not as $N \epsilon$ after $N$ single qubit gates but as $N^2 \epsilon$. This value must remain sufficiently low such that you still gain overall after correcting some (but not all) errors, for example single qubit errors only.

+ +

An example for a coherent error is an implementation of a gate $G$ that does, to first order, not simply $G$ but $G + \sqrt{\epsilon} X$ which you might call an error of $\epsilon$ because that is the probability corresponding to the probability amplitude $\sqrt{\epsilon}$ and hence the probability that a measurement directly after the gate reveals that it acted as the error $X$. After $N$ applications of this gate, again to first order, you have actually applied $G^N + N \sqrt{\epsilon} G^N X$ (if G and X commute, otherwise a more complicated construct that has $N$ distinct terms proportional to $\sqrt{\epsilon}$). Hence you would, if measuring then, find an error probability of $N^2 \epsilon$.
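
+ +

A numerical illustration of this quadratic build-up (a sketch: the slightly-wrong identity is modeled as a small $X$-rotation whose single-gate error probability is $\epsilon$):

```python
import numpy as np

eps = 1e-4                                  # per-gate error probability
theta = 2 * np.arcsin(np.sqrt(eps))         # rotation angle giving that error
G = np.array([[np.cos(theta/2), -1j*np.sin(theta/2)],
              [-1j*np.sin(theta/2), np.cos(theta/2)]])  # "identity" with a coherent error

state = np.array([1, 0], dtype=complex)
N = 20
for _ in range(N):
    state = G @ state

p_err = abs(state[1])**2    # close to N**2 * eps = 0.04, not N * eps = 0.002
```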

+ +

Incoherent errors are more benign. Yet if one must give a single value as an error threshold, then one cannot choose to only assume benign errors!

+",,user1039,,user1039,3/29/2018 9:16,3/29/2018 9:16,,,,2,,,,CC BY-SA 3.0 +1409,2,,1341,3/28/2018 19:28,,0,,"

By saying that a quantum computer using $n$ qubits does (up to) $2^n$ computations in parallel, one tries to explain quantum parallelism: If you represent the state of the $n$ qubits using probability amplitudes for each state in the computational basis, there are $2^n$ such probability amplitudes that a classical computer would have to update per quantum gate it is to simulate, whilst a quantum computer does this automatically (but with potentially less benefit since not all these numbers can be independently measured).
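
+ +

A deliberately naive NumPy sketch of that classical cost (building the full $2^n \times 2^n$ matrix just to apply one single-qubit gate):

```python
import numpy as np

n = 10
state = np.zeros(2**n)
state[0] = 1.0                                 # |00...0>
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

# one gate on one qubit, yet the classical simulator touches all 2**n amplitudes
U = np.kron(H, np.eye(2**(n - 1)))
state = U @ state
```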

+",,user1039,,,,3/28/2018 19:28,,,,0,,,,CC BY-SA 3.0 +1410,1,1418,,3/28/2018 19:32,,34,13767,"

What is meant by the term ""computational basis"" in the context of quantum computing and quantum algorithms?

+",,user1039,55,,11/13/2019 11:24,2/22/2022 18:16,"What is meant by the term ""computational basis""?",,4,0,,,,CC BY-SA 4.0 +1411,2,,1410,3/28/2018 19:32,,3,,"

A quantum state is a vector in a high-dimensional vector space (the Hilbert space). There is one basis that comes naturally to any quantum algorithm (or quantum computer) that is based on qubits: The states that correspond to the binary numbers are special, they are the so-called computational basis states.

+",,user1039,,,,3/28/2018 19:32,,,,0,,,,CC BY-SA 3.0 +1412,2,,136,3/28/2018 19:38,,2,,"

Measurements are unitary operations, too; you just don't see it: A measurement is equivalent to some complicated (quantum) operation that acts not just on the system but also on its environment. If one were to model everything as a quantum system (including the environment), one would have unitary operations all the way.

+ +

However, usually there is little point in this because we usually don't know the exact action on the environment and typically don't care. If we consider only the system, then the result is the well-known collapse of the wave function, which is indeed a non-unitary operation.

+",,user1039,,,,3/28/2018 19:38,,,,0,,,,CC BY-SA 3.0 +1413,1,1431,,3/28/2018 19:39,,8,7268,"

I would like to play with a quantum circuit on the local_qasm_simulator in QISKit, but I do not want to implement a separate quantum circuit that would prepare an initial state.

+ +

The way I do it now is by falling back to NumPy. Specifically, first, I extract matrix u from a quantum program qp:

+ + + +
cname = 'circuit name'
results = qp.execute(cname, backend='local_unitary_simulator', shots=1)
data = results.get_data(cname)
u = data['unitary']
+ +

Then, I explicitly create the state I need (e.g., $|\psi\rangle = \frac{1}{2}(|00\rangle + |01\rangle + |10\rangle - |11\rangle)$):

+ + + +
import numpy as np

num_qubits = 2
psi = np.ones(2**num_qubits) / 2.0
psi[3] = -psi[3]
+ +

Finally, I apply u to psi with NumPy:

+ + + +
u @ psi
+ +

The advantage of this approach is that I can explicitly obtain the state $U |\psi\rangle$. However, I cannot use the local_qasm_simulator and the measure() function.
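
+ +

The closest I get to measure() is sampling shot counts from $|U|\psi\rangle|^2$ with NumPy (a workaround sketch, not the QISKit API; u and psi below are self-contained stand-ins for the ones above):

```python
import numpy as np

# stand-ins for the u and psi obtained above (here u = H tensor H)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
u = np.kron(H, H)
psi = np.array([1, 1, 1, -1]) / 2.0

out = u @ psi                           # state after the circuit
probs = np.abs(out)**2                  # Born-rule probabilities
counts = np.random.default_rng(0).multinomial(1024, probs)  # fake "shots"
```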

+ +

So, how could I prepare an arbitrary state, and supply it to a circuit, and run a local_qasm_simulator?

+",528,,26,,03-12-2019 09:26,7/21/2020 15:55,How to create an arbitrary state in QISKit for a local_qasm_simulator?,,1,0,,,,CC BY-SA 3.0 +1414,2,,1365,3/28/2018 19:46,,9,,"

A pure state is what one would naturally call a state of a system. Now imagine you have a qubit in a certain state, say the equal superposition of both its computational basis states, which is $\frac{\sqrt{2}}{2} \left( \left|0\right> + \left|1\right> \right)$. Then measure it in the computational basis. What state do you get as a result?

+ +

If you read the measurement result, you know which state you have. But if you discard that result, then you don't know which state the system is in (either it is in $\left|0\right>$ or in $\left|1\right>$). This is different from the superposition you had before (which was a pure state): It is a mixed state.
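
+ +

The difference becomes concrete in the density-matrix formalism: both states give the same 50/50 measurement statistics in the computational basis, but the superposition has purity $\mathrm{Tr}(\rho^2) = 1$ while the discard-the-result mixture has purity $\tfrac12$. A small NumPy check:

```python
import numpy as np

plus = np.array([1, 1]) / np.sqrt(2)     # (|0> + |1>) / sqrt(2)
pure = np.outer(plus, plus)              # density matrix of the superposition
mixed = np.diag([0.5, 0.5])              # 50/50 ignorance of |0> vs |1>

def purity(rho):
    return float(np.trace(rho @ rho))
```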

+",,user1039,,,,3/28/2018 19:46,,,,1,,,,CC BY-SA 3.0 +1415,2,,1356,3/28/2018 19:57,,4,,"

We want to compare an output state with some ideal state, so normally the fidelity $F\left(\left|\psi\right>, \rho\right)$ is used, as this is a good way to tell how well the possible measurement outcomes of $\rho$ compare with the possible measurement outcomes of $\left|\psi\right>$, where $\left|\psi\right>$ is the ideal output state and $\rho$ is the achieved (potentially mixed) state after some noise process. As we're comparing states, this is $$F\left(\left|\psi\right>, \rho\right) = \sqrt{\left<\psi\right|\rho\left|\psi\right>}.$$

+ +

Describing both the noise and error correction processes using Kraus operators, where $\mathcal N$ is the noise channel with Kraus operators $N_i$ and $\mathcal E$ is the error correction channel with Kraus operators $E_j$, the state after noise is $$\rho' = \mathcal N\left(\left|\psi\rangle\langle\psi\right|\right) = \sum_iN_i\left|\psi\rangle\langle\psi\right|N_i^\dagger$$ and the state after both noise and error correction is $$\rho = \mathcal E\circ\mathcal N\left(\left|\psi\rangle\langle\psi\right|\right) = \sum_{i, j}E_jN_i\left|\psi\rangle\langle\psi\right|N_i^\dagger E_j^\dagger.$$

+ +

The fidelity of this is given by \begin{align}F\left(\left|\psi\right>, \rho\right) &= \sqrt{\left<\psi\right|\rho\left|\psi\right>} \\ &= \sqrt{\sum_{i, j}\left<\psi\right|E_jN_i\left|\psi\rangle\langle\psi\right|N_i^\dagger E_j^\dagger\left|\psi\right>} \\&= \sqrt{\sum_{i, j}\left<\psi\right|E_jN_i\left|\psi\rangle\langle\psi\right|E_jN_i\left|\psi\right>^*} \\ &= \sqrt{\sum_{i, j}\lvert\left<\psi\right|E_jN_i\left|\psi\right\rangle\rvert^2}.\end{align}

+ +

For the error correction protocol to be of any use, we want the fidelity after error correction to be larger than the fidelity after noise, but before error correction, so that the error corrected state is less distinguishable from the ideal state than the non-corrected state is. That is, we want $$F\left(\left|\psi\right>, \rho\right) > F\left(\left|\psi\right>, \rho'\right).$$ This gives $$\sqrt{\sum_{i, j}\lvert\left<\psi\right|E_jN_i\left|\psi\right\rangle\rvert^2} > \sqrt{\sum_i\lvert\left<\psi\right|N_i\left|\psi\right\rangle\rvert^2}.$$ As fidelity is positive, this can be rewritten as $$\sum_{i, j}\lvert\left<\psi\right|E_jN_i\left|\psi\right\rangle\rvert^2 > \sum_i\lvert\left<\psi\right|N_i\left|\psi\right\rangle\rvert^2.$$

+ +

Splitting $\mathcal N$ into the correctable part, $\mathcal N_c$, for which $\mathcal E\circ\mathcal N_c\left(\left|\psi\rangle\langle\psi\right|\right) = \left|\psi\rangle\langle\psi\right|$, and the non-correctable part, $\mathcal N_{nc}$, for which $\mathcal E\circ\mathcal N_{nc}\left(\left|\psi\rangle\langle\psi\right|\right) = \sigma$. Denoting the probability of the error being correctable as $\mathbb P_c$ and non-correctable (i.e. too many errors have occurred to reconstruct the ideal state) as $\mathbb P_{nc}$ gives $$\sum_{i, j}\lvert\left<\psi\right|E_jN_i\left|\psi\right\rangle\rvert^2 = \mathbb P_c + \mathbb P_{nc}\left<\psi\vert\sigma\vert\psi\right> \geq \mathbb P_c,$$ where equality will be assumed by assuming $\left<\psi\vert\sigma\vert\psi\right> = 0$. That is, a false 'correction' will project onto an outcome orthogonal to the correct one.

+ +

For $n$ qubits, with an (equal) probability of error on each qubit as $p$ (note: this is not the same as the noise parameter, which would have to be used to calculate the probability of an error), the probability of having a correctable error (assuming that the $n$ qubits have been used to encode $k$ qubits, allowing for errors on up to $t$ qubits, determined by the Singleton bound $n-k\geq 4t$) is \begin{align} \mathbb P_c &=\sum_{j=0}^t {n\choose j}p^j\left(1-p\right)^{n-j}\\ &= \left(1-p\right)^n + np\left(1-p\right)^{n-1} + \frac 12n\left(n-1\right)p^2\left(1-p\right)^{n-2} + \dots \\ &= 1 - {n\choose{t+1}}p^{t+1} + \mathcal O\left(p^{t + 2}\right).\end{align}

+ +

Noise channels can also be written as $N_i = \sum_j\alpha_{i, j}P_j$ for a basis $P_j$, which can be used to define a process matrix $\chi_{j, k} = \sum_i\alpha_{i, j}\alpha^*_{i, k}$. This gives $$\sum_i\lvert\left<\psi\right|N_i\left|\psi\right\rangle\rvert^2 = \sum_{j, k}\chi_{j, k}\left<\psi\right|P_j\left|\psi\right\rangle\left<\psi\right|P_k\left|\psi\right\rangle\geq\chi_{0, 0},$$ where $\chi_{0, 0} = \left(1-p\right)^n$ is the probability of no error occurring.

+ +

This gives that the error correction has been successful in mitigating (at least some of) the noise when $$1 - {n\choose{t+1}}p^{t+1} \gtrapprox\left(1-p\right)^n.$$ While this is only valid for $p \ll 1$, and a weaker bound has been used (potentially giving inaccurate estimates of when the error correction succeeds), it displays that error correction is good for small error probabilities, as $p$ grows faster than $p^{t+1}$ when $p$ is small.

+ +

However, as $p$ gets slightly larger, $p^{t+1}$ grows faster than $p$ and, depending on prefactors, which depends on the size of the code and number of qubits to correct, will cause the error correction to incorrectly 'correct' the errors that have occurred and it starts failing as an error correction code. In the case of $n=5$, giving $t=1$, this happens at $p\approx 0.29$, although this is very much just an estimate.
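
+ +

That rough cut-off can be checked numerically by bisecting $1 - \binom{n}{t+1}p^{t+1} = (1-p)^n$ for $n=5$, $t=1$ (a sketch under the same approximations as above):

```python
from math import comb

def margin(p, n=5, t=1):
    # positive while the corrected-fidelity bound beats the uncorrected one
    return 1 - comb(n, t + 1) * p**(t + 1) - (1 - p)**n

lo, hi = 0.01, 0.5          # margin(lo) > 0, margin(hi) < 0
for _ in range(60):
    mid = (lo + hi) / 2
    if margin(mid) > 0:
        lo = mid
    else:
        hi = mid
# lo is now about 0.285, consistent with the p ~ 0.29 estimate
```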

+ +

Edit from comments:

+ +

As $\mathbb P_c + \mathbb P_{nc} = 1$, this gives $$\sum_{i, j}\lvert\left<\psi\right|E_jN_i\left|\psi\right\rangle\rvert^2 = \left<\psi\vert\sigma\vert\psi\right> + \mathbb P_c\left(1-\left<\psi\vert\sigma\vert\psi\right>\right).$$

+ +

Plugging this in as above further gives $$1-\left(1-\left<\psi\vert\sigma\vert\psi\right>\right){n\choose{t+1}}p^{t+1} \gtrapprox\left(1-p\right)^n,$$ which is the same behaviour as before, only with a different constant.

+ +

This also shows that, although error correction can increase the fidelity, it can't increase the fidelity to $1$, especially as there will be errors (e.g. gate errors from not being able to perfectly implement any gate in reality) arising from implementing the error correction. As any reasonably deep circuit requires, by definition, a reasonable number of gates, the fidelity after each gate is going to be less than the fidelity of the previous gate (on average) and the error correction protocol is going to be less effective. There will then be a cut-off number of gates at which point the error correction protocol will decrease the fidelity and the errors will continually compound.

+ +

This shows, to a rough approximation, that error correction, or merely reducing the error rates, is not enough for fault tolerant computation, unless errors are extremely low, depending on the circuit depth.

+",23,,23,,3/31/2018 19:27,3/31/2018 19:27,,,,4,,,,CC BY-SA 3.0 +1416,1,1421,,3/28/2018 20:12,,9,637,"

On the Wikipedia page for Shor's algorithm, it is stated that Shor's algorithm is not currently feasible to use to factor RSA-sized numbers, because a quantum computer has not been built with enough qubits due to things such as quantum noise. How do modern quantum computers prevent interference with computations from this noise? Can they prevent it at all?

+",983,,26,,5/17/2019 22:20,5/17/2019 22:20,"How do quantum computers prevent ""quantum noise""?",,2,1,,,,CC BY-SA 3.0 +1417,2,,1416,3/28/2018 20:50,,8,,"

The answer to noise (and any source of error, really) in quantum computations is quantum error correction: You choose an encoding such that discretized errors not only correspond to invalid encodings but also uniquely determine what kind of error must have occurred. This is not possible for all errors, but with reasonable error models (such as: single-qubit errors are much more likely than two-qubit errors, which are much more likely than three-qubit errors, etc.) it can be shown that, if your noise and other error sources are below a certain threshold, you can perform arbitrarily large and long computations.

+",,user1039,,,,3/28/2018 20:50,,,,0,,,,CC BY-SA 3.0 +1418,2,,1410,3/28/2018 21:12,,8,,"

When we have just one qubit, there's nothing particularly special about the computational basis; it's just nice to have a canonical basis. In practice you could think that first you implement a gate $Z$ with $Z^2 = I$ and $Z\neq I$, and then you say that the computational basis is the eigenbasis of this gate.

+ +

However, when we talk about multi-qubit systems, the computational basis is meaningful. It comes from picking a basis for each qubit, and then taking the basis which is the tensor product of all these bases. Picking the same basis for each qubit is nice just to keep everything uniform, and calling them $0$ and $1$ is a nice notational choice. What's really important is that our basis states are product states across our qubits: the computational basis states can be prepared by initializing our qubits separately and then bringing them together. This isn't true for arbitrary states! For example, the cat state $\frac1{\sqrt2}\left(|0^n\rangle + |1^n\rangle\right)$ requires a log-depth circuit in order to prepare it from a product state.
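A small numpy illustration of both points (a sketch, not tied to any framework): the two-qubit computational basis arises as Kronecker products of single-qubit basis states, while the cat state fails to be such a product:

```python
import numpy as np

zero = np.array([1.0, 0.0])
one = np.array([0.0, 1.0])

# two-qubit computational basis states are tensor (Kronecker) products
basis = [np.kron(a, b) for a in (zero, one) for b in (zero, one)]
assert np.allclose(basis[0], [1, 0, 0, 0])  # |00>
assert np.allclose(basis[3], [0, 0, 0, 1])  # |11>

# the cat state (|00> + |11>)/sqrt(2) is NOT a product state:
cat = (basis[0] + basis[3]) / np.sqrt(2)
# Schmidt rank = rank of the 2x2 reshaping; product states have rank 1
schmidt_rank = int(np.linalg.matrix_rank(cat.reshape(2, 2)))
print(schmidt_rank)  # 2 -> entangled
```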

+",483,,,,,3/28/2018 21:12,,,,0,,,,CC BY-SA 3.0 +1419,1,1430,,3/28/2018 21:19,,10,1100,"

Grover's algorithm is often described as a way to search a database in $O(\sqrt{N})$ time. For using it we need an oracle gate that represents some function $f$ such that $f^{-1}(1)$ is the answer. But how do you actually make such a “database oracle”?

+ +

Suppose I have an array of numbers $a$ that contains $w$ exactly once and I need to find $w$'s index. On a classical computer, I would load the array into memory and iterate through it until I find $w$.

+ +

For example, if $a = [3, 2, 0, 1, 2, 3]$ and $w = 0$, I expect to get 2 as the answer (or 3 in 1-indexing).

+ +

How do I represent this array in a quantum computer and make a gate that returns $a_x$ for some $x$?

+ +

In particular, do you need to have the entirety of the “database” within quantum memory (assuming there are some ways to access classical registers from quantum gates)?

+",580,,55,,3/28/2018 23:19,02-09-2021 19:10,Does the oracle in Grover's algorithm need to contain information about the entirety of the database?,,3,1,,,,CC BY-SA 3.0 +1420,2,,1419,3/28/2018 21:22,,1,,"

You would build a function $f$ such that $f(x)$ first accesses the $x$-th item of your array and then compares it to $w$. An actual implementation might access the array encoded in extra (parameter) input qubits as if they were bits.

+",,user1039,,,,3/28/2018 21:22,,,,2,,,,CC BY-SA 3.0 +1421,2,,1416,3/28/2018 21:26,,7,,"

How do we prevent quantum noise in a quantum computer?

+ +

Well, technically the answer is (at least for most systems): we use ridiculously low temperatures (much colder than space), we shield out everything (or at least as much as possible) that might introduce any noise (radio waves such as phone signals or light, magnetic fields, ...), we do everything we can to remove particles on our chips that might interact with our system, and we are very careful that the connections (i.e. cables, optical fibres and such) to the environment (control and readout lines) carry as little noise as possible.

+ +

But that will not be enough to run Shor's algorithm at a relevant scale. To understand what else we can do, let's understand:

+ +

What is Quantum noise?

+ +

Noise is present in all systems - so also your classical computer. In classical computers, however, this can manifest in only one way: a bit that should be in one state (say 1) turns out to be in the other (say 0) instead. This is pretty easy to correct for: we just run the computation in parallel a few times, check every now and again if one of them is off, and correct the error (assuming the majority to be right)*. So we of course try to prevent noise, but, more importantly, we correct for it!
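The classical majority-vote repair described here can be sketched in a couple of lines (illustrative only):

```python
from collections import Counter

def majority(bits):
    # classical error correction: take the value most of the copies agree on
    return Counter(bits).most_common(1)[0][0]

# one of three redundant copies was flipped by noise; majority recovers it
print(majority([1, 0, 1]))  # 1
```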

+ +

Quantum noise turns out to be much more complicated. How so? Well, generally the state of a quantum bit (qubit) can be described as a point on a sphere (commonly called the Bloch sphere). Noise can now move this point somewhere along the sphere (or in fact make the sphere smaller). But we can still apply the same error correcting we used for the classical computer, right? No! The tricky part about quantum computing is that, when we measure, we only get to choose two opposite points on the sphere and learn which one the state was closer to*. Also, we project the state of the qubit into that value - so the state actually becomes the value we measured, no matter what it was before. Crazy, right? Well, that's quantum mechanics for ya. So we cannot simply compare the computations while running them as before, because that would destroy our computation!
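A minimal numerical sketch of this measurement behaviour (plain numpy; the sampling below is just the textbook Born rule, not tied to any hardware):

```python
import numpy as np

rng = np.random.default_rng(0)

def measure_z(state):
    # Born rule: outcome 0 with probability |a|^2, else 1;
    # the state is then projected onto the corresponding pole of the sphere
    p0 = abs(state[0]) ** 2
    outcome = 0 if rng.random() < p0 else 1
    collapsed = np.array([1.0, 0.0]) if outcome == 0 else np.array([0.0, 1.0])
    return outcome, collapsed

plus = np.array([1.0, 1.0]) / np.sqrt(2)  # a point on the 'equator' of the sphere
counts = sum(measure_z(plus)[0] for _ in range(10000))
print(counts / 10000)  # close to 0.5: each shot yields only 0 or 1
```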

+ +

Quantum error correction to the rescue?

+ +

Well, it turns out quantum error correction is actually possible through a few tricks (which are kind of hard to explain here - so just to get a feeling for it: we measure in a slightly different way instead, which allows us to only learn whether two qubits are the same in some respect or not. Again, if we measure that they are the same, we have projected them into being the same; if not, we can correct. The important phrase being in some respect, so we have to do this for several types of errors that can happen and afterwards try to puzzle out what actually happened to the qubit). For it to work, however, we need a quantum computer that already has very little noise to begin with (see also ""Why do error correction protocols only work when the error rates are already significantly low to begin with?""), whose qubits can talk to (are coupled to) each other, and over which we generally have sufficient control. Right now, nobody is close to fulfilling all these requirements sufficiently at once (separately, on different systems, they have all been achieved).

+ +
+ +

*Well that's not exactly how it works, but roughly.

+",689,,689,,3/29/2018 8:37,3/29/2018 8:37,,,,0,,,,CC BY-SA 3.0 +1424,2,,1356,3/28/2018 21:41,,4,,"

There is a good mathematical answer already, so I'll try and provide an easy-to-understand one.

+ +

Quantum error correction (QEC) is a (group of) rather complex algorithm(s) that requires a lot of actions (gates) on and between qubits. In QEC, you pretty much connect two qubits to a third helper qubit (ancilla) and transfer the information of whether the other two are equal (in some specific regard) into that third qubit. Then you read that information out of the ancilla. If it tells you that they are not equal, you act on that information (apply a correction). So how can that go wrong if our qubits and gates are not perfect?

+ +

QEC can make the information stored in your qubits decay. Each of these gates can degrade the information stored in the qubits if it is not executed perfectly. So if just executing the QEC destroys more information than it recovers on average, it's useless.

+ +

You think you found an error, but you didn't. If the comparison (execution of gates) or the readout of the information (ancilla) is imperfect, you might obtain wrong information and thus apply ""wrong corrections"" (read: introduce errors). And if the information in the ancillas decays (or is changed by noise) before you can read it out, you will likewise get a wrong readout.

+ +

The goal of every QEC is obviously to introduce less errors than it corrects for, so you need to minimize the aforementioned effects. If you do all the math, you find pretty strict requirements on your qubits, gates and readouts (depending on the exact QEC algorithm you chose).
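To put a number on "introduce less errors than it corrects for": for the simple 3-qubit repetition code against bit flips (gate and readout errors ignored), correction helps only below a break-even physical error rate of $p = 1/2$. A quick check:

```python
def logical_error_rate(p):
    # the 3-qubit repetition code fails when 2 or 3 of the qubits flip
    return 3 * p**2 * (1 - p) + p**3

# correction helps only below the break-even physical error rate p = 1/2
print(logical_error_rate(0.01) < 0.01)  # True: correction helps
print(logical_error_rate(0.6) < 0.6)    # False: correction hurts
```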

+",689,,689,,5/25/2018 14:17,5/25/2018 14:17,,,,0,,,,CC BY-SA 4.0 +1425,2,,1410,3/28/2018 21:46,,11,,"

The "computational basis" is just a basis that is "most natural" in a given context, and is conventionally denoted with $|0\rangle$ and $|1\rangle$ in the case of qubits.

+

To give a few examples:

+
    +
  1. If the qubits are encoded into the polarization of single photons, the computational basis is typically the basis formed by the horizontal and vertical polarization states of the photon.
  2. +
  3. If the qubits are encoded into the spins of something like ions, atoms or electrons, then the "computational basis" is typically assumed to be the basis of the eigenstates of $S_z$, that is, the spin angular momentum along the vertical direction (of course, what "vertical" means also depends on the context).
  4. +
  5. If a qubit is encoded into the presence or absence of a photon in a given mode, then the "computational basis" is, well, the occupational state of that mode.
  6. +
+

I could go on. +One also often speaks of "computational basis" for higher-dimensional states (qudits), in which case the same applies: a basis is called "computational" when it's the most "natural" in a given context.

+

From a purely theoretical point of view, the "computational basis" is nothing but some basis that is usually denoted with $\{|0\rangle, |1\rangle,...\}$, to distinguish it from some other basis having some relation with it. +It is fundamental to understand that from a purely theoretical point of view, all bases are equivalent to each other, and they only acquire meaning when one decides that a given basis represents a specific set of states of some physical system.

+",55,,55,,10/24/2021 20:41,10/24/2021 20:41,,,,0,,,,CC BY-SA 4.0 +1426,1,,,3/28/2018 22:32,,15,670,"

My understanding is that the magnetic fields needed to hold the ions in place in ion trap quantum computers are very complex, and for that reason, currently, only 1-D computers are possible, therefore reducing the ease of communication between qubits. There does seem to be a proposition for a 2-d system using a Paul trap in this preprint but I can't seem to find if this has actually been tested.

+ +

Does the scalability of ion trap quantum computers depend upon this alone (whether or not the ions can be arranged in configurations other than a straight line) or are other factors entailed? If the former, what progress has been made? If the latter, what are the other factors?

+",91,,1847,,4/19/2018 6:20,4/19/2018 6:20,Scalability of ion trap quantum computers,,3,0,,,,CC BY-SA 3.0 +1428,1,1432,,3/28/2018 23:25,,14,4333,"

Google recently announced the Bristlecone 72 qubit quantum computer. +However, D-Wave already announced quantum computers featuring more than $2000$ qubits.

+ +

Why is Google's new device newsworthy then? Is it better than D-Wave's machine in some respects? If so, how?

+",1086,,26,,3/29/2018 4:44,04-05-2018 05:47,"Is Google's 72 qubit device better than D-Wave's machines, which feature more than 2000 qubits?",,2,3,,3/31/2018 11:53,,CC BY-SA 3.0 +1429,1,1480,,3/28/2018 23:26,,19,1283,"

I've heard the term Topological Quantum Computer a few times now and know that it is equivalent to quantum computers using circuits with respect to some polynomial-time reduction.

+ +

However, it is totally unclear to me how such a quantum computer differs from others, how it works, and what its strengths are.

+ +

In short: how is a topological quantum computer different from other models, such as gate-based quantum computers, and what are the specific use cases for which it is better suited than other models?

+",673,,26,,05-01-2019 14:52,5/17/2019 22:15,How does topological quantum computing differ from other models of quantum computing?,,2,0,,,,CC BY-SA 4.0 +1430,2,,1419,3/28/2018 23:43,,8,,"

$\newcommand{\xtarget}{\boldsymbol{x}_{\operatorname{target}}}\newcommand{\bs}[1]{{\boldsymbol #1}}\newcommand{\on}[1]{{\operatorname{#1}}}$No, it does not.

+ +

The ""oracle"" in Grover's algorithm is a function that, given any element $\boldsymbol x_k$, checks whether $\boldsymbol x_k$ is the element we are looking for, say $\xtarget$. +To do this, the oracle does not need any knowledge of all the other elements $x_j$ that are in the database.

+ +

It may help to consider a more concrete example. +Say you have a database of $20000$ four-digit phone numbers, with $\boldsymbol x_k$ denoting the $k$-th element in this database. +You are interested in knowing what position in the database corresponds to the element $1234$. +Let us assume that the element 10000 of the database is the only such element, that is, $\bs x_{10000}=1234$ and $\bs x_k\neq 1234$ for all $k\neq10000$.

+ +

In the classical case, the database being unsorted, there is no better way than going through every single element in the database, checking each one against the target $1234$. +To do this, you only need an algorithm that, given $\bs x_k$, returns $\on{yes}$ if $\bs x_k=1234$ and $\on{no}$ otherwise. +An equivalent way to state this problem is to say that we want an algorithm which, given a list of pairs $\{(k,\bs x_k)\}_{k=1}^{20000}$, returns the pair such that $\bs x_k$ is what we want. +Thus, in our case, we want an algorithm which, given $\{(k,\bs x_k)\}_{k=1}^{20000}$, returns $(10000,\bs x_{10000}=1234)$. +Note that this means that the function checking each pair only checks features of a part of the state, namely, the $\bs x_k$ part. +Indeed, if this were not the case, the whole thing would be pointless, because we wouldn't be recovering any information.

+ +

This last framing of the problem is the one that one should keep in mind while thinking about Grover's algorithm.

+ +

In the quantum case, the pairs $(k, \bs x_k)$ become the quantum states $|\psi_k\rangle$ (or just $|k\rangle$, as they are usually denoted), and the oracular function only checks whether that part of the information stored in $|\psi_k\rangle$ matches the target. +The output of the procedure is the state $|\psi_{10000}\rangle$. +Now, part of this state we already know, because it was hardcoded in the oracle: we know that the second part of the information encoded in $|\psi_{10000}\rangle$ is $1234$, because that is what we were looking for in the first place, and is the information that was encoded into the oracle itself. +However, the state $|\psi_{10000}\rangle$ also carries additional information, namely the position in the database: $10000$. +This information was not used to build the oracle, and is the information that we gain by running the algorithm.

+ +

Finally, note that the oracle knows nothing about the content of the full database. It only implements coherently a function that checks a single state $|\psi_k\rangle$ against its target. +However, the fact that this gate works coherently means that one can input to this checker function a superposition of many (possibly all of the) elements of the database, and obtain an output which contains some global information about all the elements in the database.
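To make this concrete, here is a small state-vector sketch in plain numpy (the example array from the question, padded with two made-up entries to reach a power of two). Note that the oracle matrix is built solely from the predicate $a_k = w$; it encodes no precomputed answer about where the target sits:

```python
import numpy as np

a = [3, 2, 0, 1, 2, 3, 7, 4]  # the question's array, padded to 8 entries
w = 0                          # the value we are searching for
n = 3                          # 3 qubits index the 8 entries

state = np.ones(2**n) / np.sqrt(2**n)  # uniform superposition over indices

# oracle: phase-flip any index k whose entry matches the target.
# It only evaluates the predicate a[k] == w; no global knowledge needed.
oracle = np.diag([(-1.0 if a[k] == w else 1.0) for k in range(2**n)])

s = np.ones(2**n) / np.sqrt(2**n)
diffusion = 2 * np.outer(s, s) - np.eye(2**n)  # inversion about the mean

for _ in range(2):  # roughly (pi/4) * sqrt(8) Grover iterations
    state = diffusion @ (oracle @ state)

found = int(np.argmax(np.abs(state)))
print(found)  # 2: the index of the 0
```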

+",55,,55,,4/26/2018 17:53,4/26/2018 17:53,,,,1,,,,CC BY-SA 3.0 +1431,2,,1413,3/29/2018 0:44,,9,,"

I think you can use the initialize function as detailed at the section "Arbitrary Initialization" at this tutorial.

+

As an example, this tutorial explicitly shows how to initialize the three qubit state

+

$$ \frac{i}{\sqrt{16}} | 000 \rangle + \frac{1}{\sqrt{8}} | 001 \rangle + \frac{1+i}{\sqrt{16}} | 010 \rangle + \frac{1+2i}{\sqrt{8}} | 101 \rangle + \frac{1}{\sqrt{16}} | 110 \rangle .$$

+

Which is done using the following lines of QISKit code.

+
import math
+desired_vector = [
+    1 / math.sqrt(16) * complex(0, 1),
+    1 / math.sqrt(8) * complex(1, 0),
+    1 / math.sqrt(16) * complex(1, 1),
+    0,
+    0,
+    1 / math.sqrt(8) * complex(1, 2),
+    1 / math.sqrt(16) * complex(1, 0),
+    0]
+initialize_circuit_3q = Q_program.create_circuit('initialize_circuit_3q', [qr], [cr])
+initialize_circuit_3q.initialize("init", desired_vector, [qr[0],qr[1],qr[2]])
+
+",1092,,10388,,7/21/2020 15:55,7/21/2020 15:55,,,,4,,,,CC BY-SA 4.0 +1432,2,,1428,3/29/2018 0:52,,11,,"

Short explanation:

+ +

D-Wave implements quantum annealing, while Google has digitized adiabatic quantum computation.

+ +
+ +

Lengthy Explanation:

+ +

D-Wave advertises their line of quantum computers as having thousands of qubits, though these systems are designed specifically for quadratic unconstrained binary optimization. More information about D-Wave's manufacturing process.

+ +

It is D-Wave's claim that: ""It is best suited to tackling complex optimization problems that exist across many domains such as"":

+ +
    +
  • Optimization

  • +
  • Machine learning

  • +
  • Sampling / Monte Carlo

  • +
  • Pattern recognition and anomaly detection

  • +
  • Cyber security

  • +
  • Image analysis

  • +
  • Financial analysis

  • +
  • Software / hardware verification and validation

  • +
  • Bioinformatics / cancer research

  • +
+ +

D-Wave's QPU uses quantum annealing (QA), a metaheuristic for finding the global minimum of a given objective function over a given set of candidate solutions (candidate states), by a process using quantum fluctuations. Quantum annealing is used mainly for problems where the search space is discrete (combinatorial optimization problems) with many local minima; such as finding the ground state of a spin glass.

+ +

D-Wave's architecture differs from traditional quantum computers. It is not known to be polynomially equivalent to a universal quantum computer and, in particular, cannot execute Shor's algorithm, because Shor's algorithm is not a hill-climbing process and requires a universal quantum computer. D-Wave claims only to do quantum annealing.[citation needed]

+ +

Papers:

+ +

Experimental quantum annealing: case study involving the graph isomorphism problem

+ +

Defects in Quantum Computers

+ +
+ +

Google's claim is: ""The goal of the Google Quantum AI lab is to build a quantum computer that can be used to solve real-world problems. Our strategy is to explore near-term applications using systems that are forward compatible to a large-scale universal error-corrected quantum computer using linear array technology"".

+ +

Papers:

+ +

""State preservation by repetitive error detection in a superconducting quantum circuit""

+ +

""Digitized adiabatic quantum computing with a superconducting circuit""

+ +
+ +

Inaccurate layperson's explanation:

+ +

A Graphic Card has more Cores than a CPU.

+ +

GPUs are optimized for taking huge batches of data and performing the same operation over and over very quickly, unlike PC microprocessors, which tend to skip all over the place.

+ +

Architecturally, the CPU is composed of just a few cores with lots of cache memory that can handle a few software threads at a time. In contrast, a GPU is composed of hundreds of cores that can handle thousands of threads simultaneously.

+ +
+ +

Technical, but not overly complicated, layperson's explanation:

+ +
+

Why is Google's new device newsworthy then? Is it better than D-Wave's machine in some respects? If so, how?

+
+ +

There are ""Annealing QPUs"" and ""Universal QPUs"" as explained above, an incomplete list is offered on Wikipedia's page: ""List of Quantum Processors"".

+ +

In quantum annealing, the strength of the transverse field determines the quantum-mechanical probability to change the amplitudes of all states in parallel.

+ +

In the case of annealing a purely mathematical objective function, one may consider the variables in the problem to be classical degrees of freedom, and the cost functions to be the potential energy function (classical Hamiltonian). Moreover, it may be able to do this without the tight error controls needed to harness the quantum entanglement used in more traditional quantum algorithms.

+ +

That makes it easier to provide more qubits, but the kinds of problems they are able to solve are more limited than with the qubits provided in a universal QPU.
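As a rough classical analogue of what an annealing QPU is asked to do (illustrative only: the quantum device exploits quantum fluctuations and tunneling rather than thermal ones, and the tiny QUBO instance below is made up):

```python
import math
import random

random.seed(1)

# a tiny made-up QUBO instance (upper-triangular Q): minimize x^T Q x
Q = [[-2, 1, 1],
     [0, -2, 1],
     [0, 0, -1]]

def energy(x):
    return sum(Q[i][j] * x[i] * x[j] for i in range(3) for j in range(3))

x = [random.randint(0, 1) for _ in range(3)]
best, best_e = x[:], energy(x)
T = 2.0
while T > 1e-3:
    i = random.randrange(3)
    cand = x[:]
    cand[i] ^= 1                                # propose flipping one bit
    d = energy(cand) - energy(x)
    if d < 0 or random.random() < math.exp(-d / T):
        x = cand                                # accept downhill, sometimes uphill
        if energy(x) < best_e:
            best, best_e = x[:], energy(x)
    T *= 0.99                                   # cool down ('anneal')
print(best, best_e)
```

Here simulated annealing escapes local minima by occasionally accepting uphill moves; a quantum annealer instead relies on tunneling through the barriers.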

+ +

In general, the ground state of a Hamiltonian can be used to encode a wider variety of problems than NP (known QMA-complete problems), and so the decision to focus on NP optimization problems has led to restrictions which prevent the device from being used for general-purpose quantum computing (even if noise were not an issue).

+ +

There is an interesting subtlety as regards noise: If you add noise to the adiabatic algorithm, it degrades gracefully into one of the best classical algorithms for the same problem.

+ +

The adiabatic model can encode universal quantum computation, however the limitations of DWave's implementation means that specific machine cannot.

+ +

Google's universal QPU can solve a wider range of problems than D-Wave's QPU (in its current implementation) if they can solve their decoherence problem.

+ +

In the case of Google's Bristlecone caution is warranted. Bristlecone is a scaled up version of a 9-qubit Google design that has failed to yield acceptable error rates for a commercially viable quantum system. In real-world settings, quantum processors must have a two-qubit error rate of less than 0.5 percent. According to Google, its best result has been a 0.6 percent error rate using its much smaller 9-qubit hardware.

+ +

The commercial success of quantum computing will require more than high qubit numbers. It will depend on quality qubits with low error rates and long-lasting circuit connectivity in a system with the ability to outperform classical computers in complex problem solving, i.e., “quantum supremacy”.

+ +

Google will use its record number of more useful qubits to correct the error rate of those error-prone qubits.

+ +

More qubits are needed to solve bigger problems, and longer-lived (coherent) qubits are needed to hold the information long enough for the quantum algorithm to run. IBM describes the problem as: ""Quantum Volume: preferring fewer errors per qubit over more qubits"", see also: What is the leading edge technology for creating a quantum computer with the fewest errors?

+ +

Google plans to use Surface Codes to resolve this problem, for more info and a comparison to spin glass models see: ""Quantum Computation with Topological Codes: from qubit to topological fault-tolerance"".

+ +

IBM has a video titled: ""A Beginner’s Guide to Quantum Computing"" which explains quantum computing for laypersons in under 20 minutes.

+ +

Microsoft intends to take the wind from everyone's sails with the integration of Q# (Q sharp) into Visual Studio and some information about their Majorana-fermion-based qubits, along with a great reduction in the error rate, in the months to come. See: ""Majorana-based fermionic quantum computation"". This would enable a system that uses less than 25% as many (better) qubits to accomplish the same amount of work as Google's qubits.

+ +

The website ""The Next Platform"" describes the current situation as: ""Quantum Computing Enters 2018 Like it's 1968"".

+",278,,278,,04-05-2018 05:47,04-05-2018 05:47,,,,7,,,,CC BY-SA 3.0 +1435,2,,1428,3/29/2018 2:46,,11,,"

There are two points I'd make here.

+

D-Wave's computer and Google's computer are fundamentally different.

+

D-Wave's computer is a quantum annealer. Imagine a landscape with some grassy hills. If you put a ball at the top of a hill, it will roll down to a local minimum, or even the global minimum (in this case, a valley). Similarly, a quantum annealer has the qubits as the ball and a polynomial as the landscape. It has the advantage that effects like quantum tunneling help make the process more efficient.

+

On the other hand, Google's computer is gate based. Much like a digital classical computer, it has qubits and gates that are then applied to those qubits. This means that they are optimized, in some senses, for different types of problems. It's like comparing apples and oranges - not totally equivalent.

+

Just because you have a lot of qubits doesn't mean they are good qubits.

+

A lot of the controversy surrounding D-Wave has been because it has been called into question whether or not D-Wave's qubits actually exhibit quantum effects. (It hasn't really helped that D-Wave has...overhyped their successes a bit along the way.) You can have a ton of qubits, but if they aren't actually coherent for a long enough length of time, it's not really useful.

+

Google's qubits are definitely coherent for a reasonable period of time. We don't 100% know about D-Wave's.

+

Tl;dr: both are different in how they work and in their problems.

+

Both are notable in their own ways.

+",91,,-1,,6/18/2020 8:31,3/29/2018 2:46,,,,0,,,,CC BY-SA 3.0 +1436,2,,1410,3/29/2018 3:35,,16,,"

Quantum computing deals (mostly) with finite-dimensional quantum systems called qubits. If you know basic quantum mechanics then you know that the Hilbert space of a qubit is $\mathbb{C}^2$, i.e., the two-dimensional complex Hilbert space over $\mathbb{C}$ (for the more technical people, the state space is $\mathbb{C}P^1$).

+

Therefore, to describe the vectors (or physically, the quantum state of the qubit) in this two-dimensional Hilbert space we need at least two basis elements. If you think of the state of the qubit as a column vector,

+

$${{\bigl [}{\begin{smallmatrix}a\\b\end{smallmatrix}}{\bigr ]}},$$ then you would need to specify what $a,b$ are to specify the state of the qubit. Note that what $a,b$ are depends on what the basis of the system is $-$ there can be two different looking column vectors (in different bases) that represent the same state $|\psi\rangle$ of the qubit. In any case, we need some basis to work with and this is where the "computational basis" comes into play.
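To illustrate the basis dependence (a small numpy sketch; $H$ is the Hadamard matrix, which converts between the $Z$ and $X$ bases):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # change of basis between Z and X

plus_in_z = np.array([1, 1]) / np.sqrt(2)     # |+> written in the Z basis
plus_in_x = H @ plus_in_z                     # the same state in the X basis
print(np.round(plus_in_x, 10))                # [1. 0.]: same state, different column
```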

+

The computational basis is simply the two basis states composed by (any of) the two distinct quantum states that the qubit can be in physically. However, just like in linear algebra, which two (linearly independent) states you choose is kinda arbitrary (I say kinda because in some physical situations there is a natural choice of the basis; see Einselection).

+

For example, if you have an electron in a magnetic field (pointing in the z-axis say), then the states of the spin pointing upwards and downwards in the z-axis are a typical choice for the computational basis $-$ this is clearly not the only choice, since the z-axis can point in any arbitrary direction. These two states, the $|\uparrow\rangle$ and $|\downarrow\rangle$ pointing states of the spin of the electron are the eigenstates of the $\sigma_z$ (Pauli-z) operator and are usually called the "computational basis".

+",1108,,1108,,2/22/2022 18:16,2/22/2022 18:16,,,,3,,,,CC BY-SA 4.0 +1437,1,1443,,3/29/2018 4:49,,9,646,"

I've been browsing The D-Wave 2000Q site when I bumped into this aspect of their quantum computers:

+
+

A Unique Processor Environment

+

Shielded to 50,000× less than Earth’s magnetic field

+
+

Why is that relevant? What would happen if it would be much less than 50.000x?

+",1115,,-1,,6/18/2020 8:31,04-02-2018 15:18,Is it important for a quantum computer to be shielded by the magnetic field?,,3,0,,,,CC BY-SA 3.0 +1438,2,,1437,3/29/2018 5:00,,3,,"

It is relevant to reduce the quantum noise in the system. The stronger the shielding, the better the quantum computing system is isolated from the Earth's magnetic field, and thus the better the quantum noise reduction, at least theoretically.

+ +

EDIT: +Superposition is the heart of quantum computing. A superposition state is susceptible to fluctuating external magnetic fields, thermal fluctuations, radio waves, etc. The quantum processor should be in a space where the magnetic field is uniform and stable, to avoid quantum noise introduced by the above-mentioned factors. Thus, isolating the quantum computing system from its disturbing environment is mandatory.

+ +

Achieving an ideal, quantum-noise-free environment is still a daunting task. However, the progress made thus far has brought us to experimental realizations of quantum computers. Shielding to more than 50,000x below the Earth's magnetic field would further reduce the quantum noise it induces.

+",812,,812,,3/29/2018 5:33,3/29/2018 5:33,,,,1,,,,CC BY-SA 3.0 +1439,1,1456,,3/29/2018 5:32,,6,262,"

Quantum computers have shown a new way to compute old problems. D-Wave has a quantum annealer, and Wikipedia describes the D-Wave quantum computer and its use of quantum annealing properties. Hardware wise, how does D-Wave achieve quantum annealing?

+ +

There may be many ways to achieve quantum annealing. D-Wave, being first on the market with quantum annealing hardware, is therefore open to more scrutiny (is their hardware open to such scrutiny?). Does D-Wave use a specific type of annealing at the atomic level (there could be one of many ways to achieve quantum annealing), or (in other words) are there many ways to achieve quantum annealing?

+",429,,26,,5/17/2019 22:11,5/17/2019 22:11,"Hardware wise, how does D-Wave achieve quantum annealing?",,1,1,,,,CC BY-SA 3.0 +1440,1,1442,,3/29/2018 6:51,,19,1295,"

After reading the ""first programmable quantum photonic chip"". I was wondering just what software for a computer that uses quantum entanglement would be like.

+ +

Is there any example of code for specific quantum programming? Like pseudocode or high-level language? Specifically, what's the shortest program that can be used to create a Bell state $$\left|\psi\right> = \frac{1}{\sqrt 2} \left(\left|00\right> + \left|11\right> \right)$$ starting from a state initialised to $\left|\psi_0\right> = \left|00\right>$ using both a simulation and one of IBM's Quantum Experience processors, such as the ibmqx4?

+ +

Making the conceptual jump from traditional programming to entanglement isn't that easy.

+ +
+ +

I've found C's libquantum too.

+",58,,23,,3/29/2018 9:02,5/17/2019 22:08,What would a very simple quantum program look like?,,3,0,,,,CC BY-SA 3.0 +1441,1,1591,,3/29/2018 7:16,,11,999,"

In his celebrated paper ""Conjugate Coding"" (written around 1970), Stephen Wiesner proposed a scheme for quantum money that is unconditionally impossible to counterfeit, assuming that the issuing bank has access to a giant table of random numbers and that banknotes can be brought back to the bank for verification. In Wiesner's scheme, each banknote consists of a classical ""serial number"" $s$, together with a quantum money state $|\psi_s\rangle$ consisting of $n$ unentangled qubits, each one either

+ +

$$|0\rangle,\ |1\rangle,\ |+\rangle=(|0\rangle+|1\rangle)/\sqrt{2},\ \text{or}\ |-\rangle=(|0\rangle-|1\rangle)/\sqrt{2}.$$

+ +

The bank remembers a classical description of $|\psi_s\rangle$ for every $s$. And therefore, when $|\psi_s\rangle$ is brought back to the bank for verification, the bank can measure each qubit of $|\psi_s\rangle$ in the correct basis (either $\{|0\rangle,|1\rangle\}$ or $\{|+\rangle,|-\rangle\}$), and check that it gets the correct outcomes.

+ +

On the other hand, because of the uncertainty relation (or alternatively, the No-Cloning Theorem), it's ""intuitively obvious"" that, if a counterfeiter who doesn't know the correct bases tries to copy $|\psi_s\rangle$, then the probability that both of the counterfeiter's output states pass the bank's verification test can be at most $c^n$, for some constant $c<1$. Furthermore, this should be true regardless of what strategy the counterfeiter uses, consistent with quantum mechanics (e.g., even if the counterfeiter uses fancy entangled measurements on $|\psi_s\rangle$).

+ +

However, while writing a paper about other quantum money schemes, my coauthor and I realized that we'd never seen a rigorous proof of the above claim anywhere or an explicit upper bound on $c$: neither in Wiesner's original paper nor in any later one.

+ +

So, has such a proof (with an upper bound on $c$) been published? If not, then can one derive such a proof in a more-or-less straightforward way from (say) approximate versions of the No-Cloning Theorem, or results about the security of the BB84 quantum key distribution scheme?

+ +

I should maybe clarify that I'm looking for more than just a reduction from the security of BB84. Rather, I'm looking for an explicit upper bound on the probability of successful counterfeiting (i.e., on $c$)---and ideally, also some understanding of what the optimal counterfeiting strategy looks like. I.e., does the optimal strategy simply measure each qubit of $|\psi_s\rangle$ independently, say on the basis

+ +

$$\{ \cos(\pi/8)|0\rangle+\sin(\pi/8)|1\rangle, \sin(\pi/8)|0\rangle-\cos(\pi/8)|1\rangle \}?$$

+ +

Or is there an entangled counterfeiting strategy that does better?

+ +

Right now, the best counterfeiting strategies that I know are (a) the strategy above, and (b) the strategy that simply measures each qubit in the $\{|0\rangle,|1\rangle\}$ basis and ""hopes for the best."" Interestingly, both of these strategies turn out to achieve a success probability of $(5/8)^n$. So, my conjecture of the moment is that $(5/8)^n$ might be the right answer. In any case, the fact that $5/8$ is a lower bound on $c$ rules out any security argument for Wiesner's scheme that's ""too"" simple (for example, any argument to the effect that there's nothing nontrivial that a counterfeiter can do, and therefore the right answer is $c=1/2$).
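As a quick sanity check on strategy (b), the per-qubit success probability can be computed exactly in a few lines of pure Python (the helper names are mine; this only verifies the $5/8$ figure for this one strategy, not the conjecture):

```python
# Exact success probability of the "measure each qubit in {|0>,|1>} and
# copy the outcome" strategy, averaged over Wiesner's four states.
# States are real 2-vectors in the {|0>,|1>} basis.
from math import sqrt

states = {
    "0": (1.0, 0.0),
    "1": (0.0, 1.0),
    "+": (1 / sqrt(2), 1 / sqrt(2)),
    "-": (1 / sqrt(2), -1 / sqrt(2)),
}

def overlap2(a, b):
    """|<a|b>|^2 for real 2-vectors."""
    return (a[0] * b[0] + a[1] * b[1]) ** 2

def per_qubit_success(psi):
    # The counterfeiter obtains outcome b with probability |<b|psi>|^2
    # and outputs |b>|b>.  The bank measures each copy in the correct
    # basis; both copies must pass independently.
    total = 0.0
    for b in (states["0"], states["1"]):
        p_outcome = overlap2(b, psi)
        p_both_pass = overlap2(psi, b) ** 2
        total += p_outcome * p_both_pass
    return total

avg = sum(per_qubit_success(s) for s in states.values()) / 4
print(avg)   # 5/8 per qubit
```

Averaging over the four equally likely states gives $(1 + 1 + 1/4 + 1/4)/4 = 5/8$, as claimed.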

+",58,,26,,5/17/2019 21:57,6/26/2022 14:33,Rigorous security proof for Wiesner's quantum money,,2,1,,,,CC BY-SA 3.0 +1442,2,,1440,3/29/2018 7:27,,13,,"

Assuming you are considering a gate-based quantum computer, the easiest way to produce an entangled state is to produce one of the Bell states. The following circuit shows the Bell state $\left| \Phi^+ \right>$.

+ +

+ +

By examining $\left| \psi_0 \right>$, $\left| \psi_1 \right>$ and $\left| \psi_2 \right>$ we can determine the entangled state after application of all gates:

+ +

1. $\left| \psi_0 \right>$:

+ +

Not much happens here since no gates were applied at this point. The state of the whole system is therefore just the tensor product of the single states, which we write like this: $$\left| \psi_0 \right> = \left| 00 \right>$$

+ +

2. $\left| \psi_1 \right>$:

+ +

The Hadamard-Gate applies on the first qubit which results in the following:

+ +

$$\left| \psi_1 \right> = (H \otimes I)\left| 00 \right> = H\left| 0 \right> \otimes \left| 0 \right> = \frac{1}{\sqrt 2} \left(\left| 0 \right> + \left| 1 \right>\right) \left| 0 \right> = \frac{1}{\sqrt 2} \left(\left| 00 \right> + \left| 10 \right>\right)$$

+ +

3. $\left| \psi_2 \right>$:

+ +

Now a CNOT gate is applied and flips the second qubit, but only when the first one has the value 1. The result is

+ +

$$\left| \psi_2 \right> = \frac{1}{\sqrt 2} \left(\left| 00 \right> + \left| 11 \right>\right)$$

+ +

This last state $\left| \psi_2 \right>$ is an entangled state, and this is usually the most natural way to arrive at one. Bell states occur in a lot of interesting quantum algorithms such as superdense coding or teleportation.
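The three steps above can be checked with a small pure-Python statevector sketch (the matrix and vector helpers are mine; amplitudes are ordered $|00\rangle, |01\rangle, |10\rangle, |11\rangle$):

```python
# Apply H on the first qubit, then CNOT (first qubit as control), to |00>.
from math import sqrt

def apply(gate, state):
    """Multiply a 4x4 matrix (list of rows) by a 4-vector."""
    return [sum(gate[i][j] * state[j] for j in range(4)) for i in range(4)]

h = 1 / sqrt(2)
# H on the first qubit, identity on the second: H (x) I
H_I = [[h, 0,  h,  0],
       [0, h,  0,  h],
       [h, 0, -h,  0],
       [0, h,  0, -h]]
# CNOT with the first qubit as control: swaps |10> and |11>
CNOT = [[1, 0, 0, 0],
        [0, 1, 0, 0],
        [0, 0, 0, 1],
        [0, 0, 1, 0]]

psi0 = [1, 0, 0, 0]            # |00>
psi1 = apply(H_I, psi0)        # (|00> + |10>)/sqrt(2)
psi2 = apply(CNOT, psi1)       # (|00> + |11>)/sqrt(2)
print(psi2)
```

The final vector has equal amplitudes on $|00\rangle$ and $|11\rangle$ and zero elsewhere, matching $\left| \psi_2 \right>$ above.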

+ +

Although the approach above might not seem like programming to you in the usual sense, applying gates to states is basically how programming a gate-based quantum computer works. There exist abstraction layers that allow you to perform high-level programming and translate the commands into the application of gates. The IBM Quantum Experience interface provides such features.

+ +

In a language like Microsoft's Q# the above example could look similar to this:

+ +
operation BellTest () : ()
+{
+    body
+    {
+        // Use two qubits
+        using (qubits = Qubit[2])
+        {
+            Set (One, qubits[0]);
+            Set (Zero, qubits[1]);
+
+            // Apply Hadamard gate to the first qubit
+            H(qubits[0]);
+
+            // Apply CNOT gate
+            CNOT(qubits[0],qubits[1]);
+         }
+     }
+}
+
+ +

A more detailed version (including measurement) can be found here: Microsoft: Writing a Quantum Program.

+",673,,673,,3/29/2018 7:42,3/29/2018 7:42,,,,0,,,,CC BY-SA 3.0 +1443,2,,1437,3/29/2018 7:36,,7,,"

The DWave machine relies heavily on single-flux-quantum digital control for setting up qubit and coupler operating points, and for carrying out the annealing protocol. Any stray magnetic flux, if present while the chip is cooled through its superconducting transition, will be trapped inside the circuit and can cause it to fail.

+ +

You can calculate how much shielding you need by requiring the magnetic field inside the shield to be smaller than a flux quantum over the area of the chip. $B = \frac{\Phi_0}{A}$, where $\Phi_0 \sim 2 \cdot 10^{-15} ~ \text{Wb}$ is the flux quantum and $A$ is the area. If the area of the DWave chip is $(2 ~ \text{cm})^2$ (guessing) then $B \sim 5 ~ \text{pT}$. Earth’s field is about $25 ~ \mu \text{T}$ so you really want $\times 5 \cdot 10^6$ attenuation of the field. Shielding of 50,000 means that you will have on average about 100 flux quanta that can get trapped in the chip. Typically people add trapping sites on the chip to sequester the remaining flux in safe areas.
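The arithmetic above can be reproduced in a few lines (a sketch under the same guessed 2 cm × 2 cm chip area; the variable names are mine):

```python
# Order-of-magnitude check of the shielding numbers quoted above.
PHI0 = 2.07e-15                 # flux quantum, Wb
area = (2e-2) ** 2              # assumed chip area: (2 cm)^2, in m^2

B_one_quantum = PHI0 / area     # field putting ~1 flux quantum over the chip
B_earth = 25e-6                 # Earth's field (~0.25 G), in T

needed_attenuation = B_earth / B_one_quantum          # ~5e6
trapped_quanta = (B_earth / 5e4) * area / PHI0        # residual with x50,000 shielding

print(B_one_quantum)        # ~5 pT
print(needed_attenuation)   # ~5 million
print(trapped_quanta)       # ~100 flux quanta
```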

+",127,,580,,3/29/2018 20:18,3/29/2018 20:18,,,,0,,,,CC BY-SA 3.0 +1444,2,,1440,3/29/2018 7:48,,16,,"

One way of writing quantum programs is with QISKit. This can be used to run the programs on IBM's devices. The QISKit website suggests the following code snippet to get you going, which is an entangled circuit as you want. It is also the same process as in the answer by datell. I'll comment on it line-by-line.

+ +
# import and initialize the method used to store quantum programs
+from qiskit import QuantumProgram
+qp = QuantumProgram()
+# initialize a quantum register of two qubits
+qr = qp.create_quantum_register('qr',2) 
+# and a classical register of two bits
+cr = qp.create_classical_register('cr',2) 
+# create a circuit with them which we call 'Bell'
+qc = qp.create_circuit('Bell',[qr],[cr]) 
+# apply a Hadamard to the first qubit
+qc.h(qr[0]) 
+# apply a controlled not with the first qubit as control
+qc.cx(qr[0], qr[1]) 
+# measure the first qubit and store its result on the first bit
+qc.measure(qr[0], cr[0]) 
+# the same for the second qubit and bit
+qc.measure(qr[1], cr[1]) 
+# run the circuit
+result = qp.execute('Bell') 
+# extract the results
+print(result.get_counts('Bell')) 
+
+ +

Note that the 'execute' command here only specifies the program to run. All other settings, such as the device you want to use, the number of times you want to repeat it to get statistics, etc are set to their default values. To run on ibmqx4 for 1024 shots, you can instead use

+ +
results = qp.execute(['Bell'], backend='ibmqx4', shots=1024)
+
+",409,,409,,3/29/2018 9:08,3/29/2018 9:08,,,,0,,,,CC BY-SA 3.0 +1445,2,,1426,3/29/2018 8:00,,8,,"

Ion trap quantum computers hold ions in empty space using electric, not magnetic, fields. That is impossible with static fields (Earnshaw's theorem), so an alternating field is used. The effect is that charged particles such as ions seek a field minimum; this type of ion trap is also called a quadrupole trap because the simplest (lowest-order) field having a minimum in space is a quadrupole field. It is simple to arrange fields that confine ions either to a point or to a line, and ion trap quantum computers use the latter. Yet this does not scale, because computations involve motional modes of the ions, which become harder to distinguish when there are more ions.

+ +

There are two approaches to make this approach scalable: Couple strings of ions either using light (photons) or by shuttling ions from one to another such linear ion trap section. Using photons is particularly difficult and far from currently workable for a quantum computer that meets an error correction threshold, so let's focus on shuttling ions.

+ +

Mathematically, true quadrupole traps cannot be built with intersections, but that hasn't stopped physicists from making them anyway. The trick is that, although one cannot arrange to have a quadrupole field at the center of the intersection, one can still have confinement. And by slightly driving ions into the confining (alternating) field using a static field, one can get sufficiently strong confinement. It has even been shown that such shuttling across an intersection is possible without significantly heating the ion (changing its motional state).

+ +

With such intersections, ion traps are scalable.

+",,user1039,,,,3/29/2018 8:00,,,,2,,,,CC BY-SA 3.0 +1446,2,,175,3/29/2018 8:19,,5,,"

Well, Grover's original paper, ""Quantum mechanics helps in searching for a needle in a haystack"", clearly states that it assumes $C(S)$ can be evaluated in constant time. Grover's search is not concerned with implementability, but with the polynomial reduction in what is called query complexity (how many times you consult the oracle, like a classical database).

+ +

In fact, the concept of oracle in computing was proposed by Alan Turing to describe constructs for which a description on a UTM might not be realizable (Wikipedia). It is in some sense magical.

+ +

But of course, coming back to your question: how do we actually make the circuit for Grover's search (or any oracular algorithm)? Do we need to know the answer in advance to search for it? Well, in some sense you do. That is exactly what clever improvements on Grover's search try to address: we need not know the exact answer in advance, only some properties of it. Let me illustrate with an example.

+ +

For the pattern recognition problem using Grover's search, if I have 4 patterns on 2 qubits (00, 01, 10, 11) and I want to mark and amplify 11, the diagonal of my oracle unitary should be (1, 1, 1, -1) to apply the $\pi$ phase shift to the solution. So, for this simple implementation, to construct the unitary you need to know the full answer in advance.
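Concretely, the marking step can be seen numerically (a minimal pure-Python sketch; amplitudes ordered $|00\rangle, |01\rangle, |10\rangle, |11\rangle$):

```python
# The oracle marking |11> is diag(1, 1, 1, -1): applied to the uniform
# superposition, it flips only the sign of the |11> amplitude, which the
# subsequent diffusion step of Grover's algorithm then amplifies.
diag = [1, 1, 1, -1]
uniform = [0.5, 0.5, 0.5, 0.5]          # (|00> + |01> + |10> + |11>)/2
marked = [d * a for d, a in zip(diag, uniform)]
print(marked)
```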

+ +

A clever improvement for pattern completion is given in the paper ""Quantum pattern matching"" by Mateus and Omar. In essence, it constructs as many fixed oracles as there are alphabets in the set. So for our binary string, there will be an oracle which marks out all 1s, and another that marks out all 0s. The oracles are invoked conditionally based on what I want to search. If I want to search 11, I call oracle 1 on the LSqubit, and oracle 1 again on the MSqubit. With the first oracle call I amplify the states (01, 11), i.e. states with LSQ as 1, and with the 2nd call I amplify (10, 11). So as you see, 11 is the only state that gets amplified twice, ending up with a higher measurement probability. Though the compiled quantum circuit would change based on what my input search pattern is, the high-level description of the quantum algorithm remains the same. You can think of the oracles as function calls invoked, based on a switch case over the alphabet set, for each character in the search string.

+",1153,,55,,3/29/2018 11:38,3/29/2018 11:38,,,,0,,,,CC BY-SA 3.0 +1447,2,,1440,3/29/2018 8:59,,8,,"

The simplest quantum program I can think of is a (1-bit) true random number generator. As a quantum circuit, it looks like this:

+ +

+ +

You first prepare a qubit in the state $|0 \rangle$, then apply a Hadamard gate to produce the superposition $\frac{\sqrt{2}}{2} ( \left| 0 \right> + \left| 1 \right> )$ which you then measure in the computational basis. The measurement's result is $|0\rangle$ or $|1\rangle$, each with a probability of 50%.
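A classical simulation of this one-qubit circuit fits in a few lines of pure Python (the helper names are mine; a real device would of course draw its randomness from the measurement itself, not from `random`):

```python
# Hadamard on |0>, then a computational-basis measurement.
# A state is a pair of amplitudes (a, b) for (|0>, |1>).
import random
from math import sqrt

def hadamard(state):
    a, b = state
    h = 1 / sqrt(2)
    return (h * (a + b), h * (a - b))

def measure(state):
    """Return 0 or 1 with the Born-rule probabilities."""
    p0 = abs(state[0]) ** 2
    return 0 if random.random() < p0 else 1

psi = hadamard((1.0, 0.0))     # (|0> + |1>)/sqrt(2)
bit = measure(psi)
print(bit)                      # 0 or 1, each with probability 1/2
```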

+",,user1039,26,,5/17/2019 22:08,5/17/2019 22:08,,,,0,,,,CC BY-SA 4.0 +1448,1,,,3/29/2018 9:27,,2,48,"

Consider an implementation of the Born-Jordan quantization or an arbitrary Cohen class time-frequency analysis algorithm.

+ +

Are there any numerical differences between classical and quantum solutions? Would our visualizations of the data output change in any way? (Or perhaps, would we have to change the way we visualize the data?)

+",1165,,55,,3/29/2018 12:25,3/29/2018 12:25,Are there numerical differences between classical and quantum solutions of problems such as the Born-Jordan quantization?,,0,0,,3/29/2018 19:14,,CC BY-SA 3.0 +1449,2,,20,3/29/2018 9:35,,4,,"

Excellent question! You are asking if we can, given a property we want in a quantum system (for example superconductivity in a chemical compound), efficiently find one example of such a compound.

+ +

First of all, it is perhaps not yet a completely settled question whether we can calculate such properties in the forward direction efficiently: Given a compound, does it superconduct? It is efficiently possible to simulate the temporal evolution from a given initial state and also to calculate its energy. There are several ideas that suggest it is also efficiently possible to calculate the interesting states (either the ground state or thermal states), and I believe this is possible, but I want to emphasize that some details of these are still debated or not yet investigated academically.

+ +

Assuming we can efficiently calculate if a given compound fulfills your desires, one could start an exhaustive search over a number $N$ of compounds. Such a search can be accelerated by Grover's algorithm; even for the less straight-forward case that we have here, that an unknown number of solutions exist, this is possible in a number of compound-property calculations that scales with $\sqrt{N}$.

+ +

Hence you have a quantum speedup, likely exponential in the calculation of the desired properties of one chemical compound, and quadratic (via Grover's algorithm) in the search among compounds.

+",,user1039,,user1039,3/29/2018 10:58,3/29/2018 10:58,,,,0,,,,CC BY-SA 3.0 +1451,1,1483,,3/29/2018 10:43,,37,3665,"

I would like to know how a job for a D-Wave device is written in code and submitted to the device.

+ +

In the answer it would be best to see a specific example of this for a simple problem. I guess that the ""Hello World"" of a D-Wave device would be something like finding the ground states of a simple 2D Ising model, since this is the kind of problem directly realized by the hardware. So perhaps this would be a nice example to see. But if those with expertise think an alternative example would be suitable, I'd be happy to see an alternative.

+",409,,409,,3/29/2018 12:43,5/13/2019 21:18,How do you write a simple program for a D-Wave device?,,3,0,,,,CC BY-SA 3.0 +1452,2,,4,3/29/2018 10:43,,4,,"
+

How does the quantum gate affect (not necessarily change it) the + result of measuring the state of the qubits (as the measurement result + is affected greatly by the probabilities of each possible state)? More + specifically, is it possible to know, in advance, how the + probabilities of each state change due to the quantum gate?

+
+ +

Let's try to approach this with an example and some geometry. Consider a single qubit, whose Hilbert space is $\mathbb{C}^2$, i.e., the two-dimensional complex Hilbert space over $\mathbb{C}$ (for the more technical people, the Hilbert space is actually $\mathbb{C}P^1$). It turns out that $\mathbb{C}P^1 \cong S^2$, the unit sphere, also known as the Bloch sphere. This translates into the fact that all states of a qubit can be represented (uniquely) on the Bloch sphere.

+ +

+Source: Wikipedia

+ +

The state of a qubit can be represented on the Bloch sphere as ${\displaystyle |\psi \rangle =\cos \left({\frac {\theta }{2}}\right)|0\rangle \,+\,e^{i\phi }\sin \left({\frac {\theta }{2}}\right)|1\rangle}$, where $0 \leq \theta \leq \pi$ and $0 \leq \phi < 2 \pi$. Here, $|0\rangle = {{\bigl [}{\begin{smallmatrix}1\\0\end{smallmatrix}}{\bigr ]}}$ and $|1\rangle = {{\bigl [}{\begin{smallmatrix}0\\1\end{smallmatrix}}{\bigr ]}}$ are the two basis states (represented in the figure at the north and south pole respectively). So the states of the qubit are nothing but column vectors, which are identified with (unique) points on the sphere.

+ +

What are quantum gates? These are unitary operators $U$, s.t. $U U^\dagger = U^\dagger U = \mathbb{I}$. Gates on a single qubit are elements of $SU(2)$. Consider a simple gate like $Y$ (which stands for the Pauli matrix $\sigma_y := Y = \begin{bmatrix} 0 & -i \\ i & 0 \end{bmatrix}$).

+ +

How does this gate act on a qubit and affect the measurement outcomes?

+ +

Say you begin with a qubit in the state $|0\rangle$, i.e., at the north pole on the Bloch sphere. You apply a unitary of the form $U = e^{-i \gamma Y}$ where $\gamma \in \mathbb{R}$. Using properties of the Pauli matrix, we get $U = e^{-i \gamma Y} = cos(\gamma) \mathbb{I} -i sin(\gamma) Y$. The action of this operator is to rotate the state by an angle $2 \gamma$ along the y-axis and therefore if we choose $\gamma = \pi/2$, the qubit $|0\rangle \rightarrow U|0\rangle = |1\rangle$. That is to say, given we know what unitary we are applying to our state, we completely know the way in which our initial state will transform and hence we know how the measurement probabilities would change.

+ +

For example, if we were to make a measurement in the $\{|0\rangle, |1\rangle\}$ basis, initially, one would get the state $|0\rangle$ with probability 1; after applying the unitary, one would get the state $|1\rangle$ with probability 1.
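This action can be verified numerically (a pure-Python sketch; the small matrix helpers are mine):

```python
# With gamma = pi/2, U = exp(-i*gamma*Y) = cos(gamma) I - i sin(gamma) Y
# maps |0> to |1>.
from math import cos, sin, pi

def mat_scale(c, A):
    return [[c * x for x in row] for row in A]

def mat_add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def mat_vec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

I2 = [[1, 0], [0, 1]]
Y = [[0, -1j], [1j, 0]]

gamma = pi / 2
U = mat_add(mat_scale(cos(gamma), I2), mat_scale(-1j * sin(gamma), Y))
ket1 = mat_vec(U, [1, 0])       # U|0>
print(ket1)                     # ~[0, 1]: the state |1>
```

So a measurement in the $\{|0\rangle, |1\rangle\}$ basis after the gate indeed yields $|1\rangle$ with probability 1.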

+",1108,,,,,3/29/2018 10:43,,,,0,,,,CC BY-SA 3.0 +1453,2,,135,3/29/2018 12:28,,4,,"

Whilst I cannot supply a formal proof, the simulation of (the temporal evolution) of a quantum system is believed to be such a case: There is no known better way to do this on a classical computer than in exponential time but a quantum computer can trivially do it in polynomial time.

+ +

The idea of such a quantum simulator (see also wikipedia article) is in fact how quantum computers got first proposed.

+",,user1039,,user1039,3/29/2018 12:36,3/29/2018 12:36,,,,6,,,,CC BY-SA 3.0 +1454,2,,1451,3/29/2018 12:33,,9,,"

The title and question body seem to ask two different questions. In the title you ask ""How do you write a simple program for a D-Wave device?"", while in the question body you ask how to find the ground states of a simple 2D Ising model using the underlying hardware of the D-Wave device, and what the corresponding code would be (which is a more specific question).

+ +

I will answer the former, since it is the more general question.

+ +

According to the D-Wave Software page:

+ +
+

The D-Wave 2000Q system provides a standard Internet API (based on + RESTful services), with client libraries available for C/C++, Python, + and MATLAB. This interface allows users to access the system either as + a cloud resource over a network, or integrated into their + high-performance computing (HPC) environments and data centers. Access + is also available through D-Wave’s hosted cloud service. Using D-Wave’s development tools and client libraries, developers can create + algorithms and applications within their existing environments using + industry-standard tools.

+ +

While users can submit problems to the system in a number of different + ways, ultimately a problem represents a set of values that correspond + to the weights of the qubits and the strength of the couplers. The + system takes these values along with other user-specified parameters + and sends a single quantum machine instruction (QMI) to the QPU. + Problem solutions correspond to the optimal configuration of qubits + found; that is, the lowest points in the energy landscape. These + values are returned to the user program over the network.

+ +

Because quantum computers are probabilistic rather than deterministic, + multiple values can be returned, providing not only the best solution + found, but also other very good alternatives from which to choose. + Users can specify the number of solutions they want the system to + return.

+ +

Users can submit problems to the D-Wave quantum computer in several ways:

+ +
    +
  1. Using a program in C, C++, Python, or MATLAB to create and execute QMIs
  2. +
  3. Using a D-Wave tool such as:

    + +
      +
    • QSage, a translator designed for optimization problems

    • +
    • ToQ, a high level language translator used for constraint + satisfaction + problems and designed to let users “speak” in the language of their + problem domain

    • +
    • qbsolv, an open-source, hybrid partitioning optimization solver for + problems that are larger than will fit natively on the QPU. Qbsolv can
      + be downloaded here.

    • +
    • dw, which executes QMIs created via a text editor

    • +
  4. +
  5. By directly programming the system via QMIs

  6. +
+ +

Download this white paper to learn more about the programming model for a D-Wave system

+
+",26,,,,,3/29/2018 12:33,,,,0,,,,CC BY-SA 3.0 +1455,2,,1451,3/29/2018 12:56,,6,,"

The inputs to the D-Wave are a list of interactions and more recently the annealing time of the qubits.

+ +

As you mentioned, the Ising problem with $J_{ij} = 1$ in the problem Hamiltonian is one of the easiest; however, it's not very interesting.

+ +

I recommend the appendices in this paper for a concise description of how the D-Wave hardware operates. (Full disclosure: I'm a coauthor.)

+",54,,26,,5/13/2019 21:18,5/13/2019 21:18,,,,1,,,,CC BY-SA 4.0 +1456,2,,1439,3/29/2018 13:10,,3,,"

Quantum annealing as defined by Chakrabarti 1981 and later implemented by Kadowaki and Nishimori 1998 uses a varying transverse magnetic field to facilitate tunneling through the energy landscape of an optimization problem.

+ +

The system is prepared in the ground state of a Hamiltonian and then the transverse field is applied and slowly reduced (adiabatically) while the problem Hamiltonian is simultaneously 'turned on'. If this is done correctly, the system will remain in the ground state throughout the process and you'll have turned off the initial Hamiltonian and turned on the problem Hamiltonian with the system in the ground state (solution).

+ +

The temperature of the machine does not change (which is a common misconception), only the transverse magnetic field does.

+",54,,,,,3/29/2018 13:10,,,,0,,,,CC BY-SA 3.0 +1457,1,,,3/29/2018 13:53,,11,384,"

Trapped ion quantum computers are among the most promising approaches to achieve large-scale quantum computation. +The general idea is to encode the qubits into the electronic states of each ion, and then control the ions via electromagnetic forces.

+ +

In this context, I often see that experimental realisation of trapped ion systems use ${}^{40}\!\operatorname{Ca}^+$ ions (see e.g. 1803.10238). +Is this always the case? If not, what other kinds of ions are or can be used to build these kinds of trapped ion systems? +What are the main characteristics that the ions must have to be conveniently used to build trapped ion devices?

+",55,,26,,3/29/2018 18:08,3/30/2018 14:22,What kinds of ions do trapped ion quantum computers use?,,1,0,,,,CC BY-SA 3.0 +1458,2,,1457,3/29/2018 14:42,,10,,"

$\require{\mhchem}$

+ +

There are almost too many ion species to list that have been used in ion trap based quantum computing or related experiments. The usual choice is one that is, when singly ionized, hydrogen-like which has convenient consequences for their laser spectroscopy: Then a strong, typically $20$ MHz wide transition lies in the UV or blue end of the laser-accessible spectrum (rather than in the vacuum-UV as it would for ions that need higher than single ionization to become hydrogen-like). Also, the spectrum remains relatively simple (if it is hydrogen-like) meaning there are a limited number of other states that may need their own laser as a repumper laser. It can be advantageous to have one optical meta-stable state that needs a repumper laser because that can be used in measurements and state preparations (or, atypically, to represent one qubit state).

+ +

Finally, you typically (but not always) want an ion that has a hyperfine structure because that allows you to use hyperfine states with only a few $\mathrm{GHz}$ energy spacing as qubit states. These states are advantageous because they have century-long decay times, meaning you have practically no decoherence simply from their spontaneous decay (but you do have decoherence from magnetic fields, to which well-chosen states have, however, no linear and only a quadratic dependence).

+ +

It is also convenient to have a low mass ion because that allows you to build an ion trap with higher motional frequencies (the ion is more strongly confined if its charge-to-mass ratio is high). High motional frequencies imply less (anomalous) heating inside the ion trap and the possibility of faster $2$-qubit gate speeds.

+ +

One of the most popular ion species is $\ce{^{171}Yb^+}$ because you have all required lasers in a spectral region (IR and visible) where you can build them with relative simplicity and there is a convenient meta-stable state of about $1\ \mathrm{Hz}$ width (and one with about $1\ \mathrm{nHz}$ width that is irrelevant), and it has a particularly simple hyperfine structure due to its nuclear spin of $1/2$. $\ce{Ca^+}$ is almost as good: If you can live without having a hyperfine structure, $\ce{^{40}Ca^+}$ has equally simple laser requirements and a relatively low mass whilst by tuning your lasers for $\ce{^{43}Ca^+}$ you gain a hyperfine structure at the expense of it being a fairly complicated one due to the nuclear spin of $7/2$. Some groups pursue $\ce{^9Be^+}$ which is cool for being so light and for only needing lasers at essentially the same wavelength, albeit a difficult one ($313\ \mathrm{nm}$). Many other ions have been used experimentally, including $\ce{Sr^+}$, $\require{\mhchem}\ce{Hg^+}$ and a good depiction of the important properties can be found at Chris Monroe's ""Ion Periodic Table"".

+",,user1039,,user1039,3/30/2018 14:22,3/30/2018 14:22,,,,0,,,,CC BY-SA 3.0 +1459,2,,1185,3/29/2018 14:59,,6,,"

If you want to use a quantum circuit as a subroutine (such as an oracle) to a quantum algorithm that makes use of interference, you must allow interference by a process known as uncomputing your ancillary (or, in your words, garbage) qubits. Uncomputing is always possible: Since your gates are reversible, you can just apply their inverse. That is, after the step you mentioned, $\left|x\right>\left|0\right>\left|0\right>\mapsto\left|x\right>\left|f(x)\right>\left|g\right>$, you perform another computation (or uncomputation) that leads to $\left|x\right>\left|f(x)\right>\left|0\right>$.

+",,user1039,,,,3/29/2018 14:59,,,,0,,,,CC BY-SA 3.0 +1460,2,,136,3/29/2018 16:44,,8,,"

There are several misconceptions here, most of them originate from exposure to only the pure state formalism of quantum mechanics, so let's address them one by one:

+ +
    +
  1. +

    All quantum operations must be unitary to allow reversibility, but + what about measurement?

    +
  2. +
+ +

This is false. In general, the states of a quantum system are not just vectors in a Hilbert space $\mathcal{H}$ but density matrices $-$ unit-trace, positive semidefinite operators acting on the Hilbert space $\mathcal{H}$ i.e., $\rho: \mathcal{H} \rightarrow \mathcal{H}$, $Tr(\rho) = 1$, and $\rho \geq 0$ (Note that the pure state vectors are not vectors in the Hilbert space but rays in a complex projective space; for a qubit this amounts to the Hilbert space being $\mathbb{C}P^1$ and not $\mathbb{C}^2$). Density matrices are used to describe a statistical ensemble of quantum states.

+ +

The density matrix is called pure if $\rho^2 = \rho$ and mixed if $\rho^2 < \rho$. Once we are dealing with a pure state density matrix (that is, there's no statistical uncertainty involved), since $\rho^2 = \rho$, the density matrix is actually a projection operator and one can find a $|\psi\rangle \in \mathcal{H}$ such that $\rho = |\psi\rangle \langle\psi|$.

+ +

The most general quantum operation is a CP-map (completely positive map), i.e., $\Phi: L(\mathcal{H}) \rightarrow L(\mathcal{H})$ such that $$\Phi(\rho) = \sum_i K_i \rho K_i^\dagger; \sum_i K_i^\dagger K_i \leq \mathbb{I}$$ (if $\sum_i K_i^\dagger K_i = \mathbb{I}$ then these are called CPTP (completely positive and trace-preserving) map or a quantum channel) where the $\{K_i\}$ are called Kraus operators.

+ +
+ +

Now, coming to the OP's claim that all quantum operations are unitary to allow reversibility -- this is just not true. The unitarity of time evolution operator ($e^{-iHt/\hbar}$) in quantum mechanics (for closed system quantum evolution) is simply a consequence of the Schrödinger equation.

+ +

However, when we consider density matrices, the most general evolution is a CP-map (or CPTP for a closed system to preserve the trace and hence the probability).

+ +
    +
  1. +

    Are there any situations where non-unitary gates might be allowed?

    +
  2. +
+ +

Yes. An important example that comes to mind is open quantum systems where Kraus operators (which are not unitary) are the ""gates"" with which the system evolves.

+ +

Note that if there is only a single Kraus operator then, $\sum_i K_i^\dagger K_i = \mathbb{I}$. But there's only one $i$, therefore, we have, $K^\dagger K = \mathbb{I}$ or, $K$ is unitary. So the system evolves as $\rho \rightarrow U \rho U^\dagger$ (which is the standard evolution that you may have seen before). However, in general, there are several Kraus operators and therefore the evolution is non-unitary.
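For a concrete example of such a non-unitary evolution, consider a phase-damping channel with Kraus operators $K_0 = \sqrt{1-p}\, \mathbb{I}$ and $K_1 = \sqrt{p}\, Z$ (one standard choice; the parameter $p$ and helper names below are mine):

```python
# Apply the channel rho -> K0 rho K0† + K1 rho K1† to |+><+|.
# The diagonal survives but the coherences decay: no unitary does that.
from math import sqrt

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def dagger(A):
    return [[A[j][i].conjugate() for j in range(2)] for i in range(2)]

def channel(kraus, rho):
    out = [[0, 0], [0, 0]]
    for K in kraus:
        term = matmul(matmul(K, rho), dagger(K))
        out = [[o + t for o, t in zip(ro, rt)] for ro, rt in zip(out, term)]
    return out

p = 0.25
I2 = [[1, 0], [0, 1]]
Z = [[1, 0], [0, -1]]
K0 = [[sqrt(1 - p) * x for x in row] for row in I2]
K1 = [[sqrt(p) * x for x in row] for row in Z]

rho_plus = [[0.5, 0.5], [0.5, 0.5]]    # |+><+|
out = channel([K0, K1], rho_plus)
print(out)   # diagonal stays 0.5; off-diagonals shrink to 0.5*(1-2p)
```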

+ +

Coming to the final point:

+ +
+ +
    +
  1. +

    Measurement can be represented as a matrix, and that matrix is applied to qubits, so that seems equivalent to the operation of a quantum gate. That's definitively not reversible.

    +
  2. +
+ +

In standard quantum mechanics (with wavefunctions etc.), the system's evolution is composed of two parts $-$ a smooth unitary evolution under the system's Hamiltonian and then a sudden quantum jump when a measurement is made $-$ also known as wavefunction collapse. Wavefunction collapse is described by some projection operator, say $|\phi\rangle \langle\phi|$, acting on the quantum state $|\psi\rangle$, and $|\langle\phi|\psi\rangle|^2$ gives us the probability of finding the system in the state $|\phi\rangle$ after the measurement. Since the measurement operator is, after all, a projector (or, as the OP suggests, a matrix), shouldn't it be linear and physically similar to the unitary evolution (which also happens via a matrix)? This is an interesting question and, in my opinion, difficult to answer physically. However, I can shed some light on this mathematically.

+ +

If we are working in the modern formalism, then measurements are given by POVM elements; Hermitian positive semidefinite operators, $\{M_{i}\}$ on a Hilbert space $\mathcal{H}$ that sum to the identity operator (on the Hilbert space) $\sum _{{i=1}}^{n}M_{i}=\mathbb{I}$. Therefore, a measurement takes the form $$ \rho \rightarrow \frac{E_i \rho E_i^\dagger}{\text{Tr}(E_i \rho E_i^\dagger)}, \text{ where } M_i = E_i^\dagger E_i.$$

+ +

The $\text{Tr}(E_i \rho E_i^\dagger) =: p_i$ is the probability of the measurement outcome being $M_i$ and is used to renormalize the state to unit trace. Note that the numerator, $\rho \rightarrow E_i \rho E_i^\dagger$ is a linear operation, but the probabilistic dependence on $p_i$ is what brings in the non-linearity or irreversibility.
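As a sketch of this update rule for a qubit (pure Python; a projective computational-basis measurement, i.e. $E_0 = |0\rangle\langle 0|$ and $E_1 = |1\rangle\langle 1|$, so $M_i = E_i$):

```python
# rho -> E rho E† / Tr(E rho E†) for the pure state rho = |+><+|.
def dagger(A):
    return [[A[j][i].conjugate() for j in range(2)] for i in range(2)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def trace(A):
    return A[0][0] + A[1][1]

E0 = [[1, 0], [0, 0]]
E1 = [[0, 0], [0, 1]]
rho = [[0.5, 0.5], [0.5, 0.5]]   # |+><+|

for E in (E0, E1):
    num = matmul(matmul(E, rho), dagger(E))   # linear part
    p = trace(num)                            # outcome probability
    post = [[x / p for x in row] for row in num]  # renormalization
    print(p, post)   # p = 0.5 each; post states |0><0| and |1><1|
```

The numerator is linear in $\rho$; only the division by the outcome probability $p$ is non-linear.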

+ +

Edit 1: You might also be interested Stinespring dilation theorem which gives you an isomorphism between a CPTP map and a unitary operation on a larger Hilbert space followed by partial tracing the (tensored) Hilbert space (see 1, 2).

+",1108,,1108,,3/30/2018 21:31,3/30/2018 21:31,,,,0,,,,CC BY-SA 3.0 +1461,1,1476,,3/29/2018 16:54,,27,5558,"

My understanding so far is: a pure state is a basic state of a system, and a mixed state represents uncertainty about the system, i.e. the system is in one of a set of states with some (classical) probability. However, superpositions seem to be a kind of mix of states as well, so how do they fit into this picture?

+ +

For example, consider a fair coin flip. You can represent it as a mixed state of “heads” $\left|0\right>$ and “tails” $\left|1\right>$: $$ \rho_1 = \sum_j \frac{1}{2} \left|\psi_j\right> \left<\psi_j\right| = \frac{1}{2} \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} $$

+ +

However, we can also use the superposition of “heads” and “tails”: the specific state $|\psi\rangle = \frac{1}{\sqrt{2}}\left( \left|0\right> + \left|1\right> \right)$ with density matrix

+ +

$$ \rho_2 = \left|\psi\right> \left<\psi\right| = \frac{1}{2} \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix} $$

+ +

If we measure in the computational basis, we will get the same result. What is the difference between a superposed and a mixed state?
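A quick numerical check of this claim (a NumPy sketch): the outcome probabilities for a computational-basis measurement are the diagonal entries of the density matrix, and these coincide for $\rho_1$ and $\rho_2$.

```python
import numpy as np

rho1 = np.array([[0.5, 0.0], [0.0, 0.5]])  # the mixture
rho2 = np.array([[0.5, 0.5], [0.5, 0.5]])  # the superposition |psi><psi|

# Diagonal entries = probabilities of measuring |0> and |1>.
print(np.diag(rho1))  # [0.5 0.5]
print(np.diag(rho2))  # [0.5 0.5]
```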

+",580,,580,,3/29/2018 22:16,06-01-2021 15:07,What is the difference between superpositions and mixed states?,,3,2,,,,CC BY-SA 3.0 +1462,1,1501,,3/29/2018 16:59,,8,573,"

A string of $n$ qutrits has a state-space spanned by the $3^n$ different states $\lvert x \rangle $ for strings $x \in \{0,1,2\}^n$ (or $x \in \{-1,0,+1\}^n$, equivalently), while $n $ qubits can only represent $2^n$ computational basis states.

+ +

According to the Wikipedia article on qutrits, ""qutrits ... are more robust to decoherence under certain environmental interactions"".

+ +

Does the increase in the number of possible simultaneous states represented result in robustness to decoherence?

+",,user609,26,,05-07-2018 13:39,05-07-2018 13:39,Are qutrits more robust to decoherence?,,2,0,,,,CC BY-SA 3.0 +1464,2,,1461,3/29/2018 17:20,,11,,"

The short answer is that there is more to quantum information than ""uncertainty"". This is because there is more than one way to measure a state; and that is because there is more than one basis in which, in principle, you can store and retrieve information. Superpositions allow you to express information in a different basis than the computational basis — but mixtures describe the presence of a probabilistic element, no matter which basis you use to look at the state.

+ +

The longer answer is as follows —

+ +

Measurement as you have described it is specifically measurement in the computational basis. This is often described just as ""measurement"" for the sake of brevity, and large subsets of the community think in terms of this being the primary way to measure things. But in many physical systems, it is possible to choose a measurement basis.

+ +

A vector space over $\mathbb C$ has more than one basis (even more than one orthonormal basis), and on a mathematical level there isn't much that makes one basis more special than another, aside from what is convenient for the mathematician to think about. The same is true in quantum mechanics: unless you specify some specific dynamics, there is no basis which is more special than the others. That means that the computational basis +$$ \lvert 0 \rangle = \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \qquad \lvert 1 \rangle = \begin{bmatrix} 0 \\ 1 \end{bmatrix}$$ +is not fundamentally different physically from another basis such as +$$ \lvert + \rangle = \tfrac{1}{\sqrt 2}\begin{bmatrix} 1 \\ 1 \end{bmatrix}, \qquad \lvert - \rangle = \tfrac{1}{\sqrt 2}\begin{bmatrix} 1 \\ -1 \end{bmatrix},$$ +which is also an orthonormal basis. That means that there should be a way to ""measure"" a state $\lvert \psi \rangle \in \mathbb C^2$ in such a way that the probabilities of the outcomes depend on projections onto these states $\lvert + \rangle$ and $\lvert - \rangle$.

+ +

In some physical systems, the way one performs this measurement is to literally take the same apparatus and tilt it so that it is aligned with the X axis instead of the Z axis. Mathematically, the way we do this is to consider the projectors +$$ \Pi_+ = \lvert + \rangle\!\langle + \rvert = \tfrac{1}{2}\begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix}, \qquad \Pi_- = \lvert - \rangle\!\langle - \rvert = \tfrac{1}{2}\begin{bmatrix} 1 & -1 \\ -1 & 1 \end{bmatrix}$$ +and then to ask what the projections $\lvert \varphi_+ \rangle := \Pi_+ \lvert \psi \rangle$ and $\lvert \varphi_- \rangle := \Pi_- \lvert \psi \rangle$ are. The norm-squared of $\lvert \varphi_\pm \rangle$ determines the probability of ""measuring $\lvert + \rangle$"" and of ""measuring $\lvert - \rangle$""; and normalising $\lvert \varphi_+ \rangle$ or $\lvert \varphi_- \rangle$ to have a norm of 1 yields the post-measurement state. (For a state on a single qubit, this will just be $\lvert + \rangle$ or $\lvert - \rangle$. More interesting post-measurement states may result if we consider multi-qubit states, and consider the projector $\Pi_+$ or $\Pi_-$ acting on one of many qubits.)
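Here is a small sketch of this recipe in NumPy (with $\lvert \psi \rangle = \lvert 0 \rangle$ chosen purely as an example state):

```python
import numpy as np

psi = np.array([1.0, 0.0])                             # example state |0>
Pi_plus  = 0.5 * np.array([[1.0,  1.0], [ 1.0, 1.0]])  # |+><+|
Pi_minus = 0.5 * np.array([[1.0, -1.0], [-1.0, 1.0]])  # |-><-|

for name, Pi in [("+", Pi_plus), ("-", Pi_minus)]:
    phi = Pi @ psi                   # projected (sub-normalised) state
    p = np.linalg.norm(phi) ** 2     # outcome probability
    post = phi / np.linalg.norm(phi) # renormalised post-measurement state
    print(f"outcome |{name}>: p = {p:.2f}")
```

For $\lvert 0 \rangle$ this gives a 50/50 split between the $\lvert + \rangle$ and $\lvert - \rangle$ outcomes, as expected.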

+ +

For density operators, one takes the state $\rho$ on which you want to perform a measurement, and considers $\rho_+ := \Pi_+ \rho \Pi_+$ and $\rho_- := \Pi_- \rho \Pi_-$. These operators may be sub-normalised in the same way that the states $\lvert \varphi_\pm \rangle$ might be, in the sense that they may have trace less than 1. The value of the trace of $\rho_\pm$ is the probability of obtaining the outcome $\lvert + \rangle$ or $\lvert - \rangle$ of the measurement; to renormalise, simply scale the projected operator to have trace 1.

+ +

Consider your state $\rho_2$ above. If you measure it with respect to the $\lvert \pm \rangle$ basis, what you will find is that $\rho_2 = \rho_{2,+} := \Pi_+ \rho_2 \Pi_+$. This means that projecting the operator with $\Pi_+$ does not change the state, and that the probability of obtaining the outcome $\lvert + \rangle$ to the measurement is 1. If you do this instead with $\rho_1$, you will find a 50/50 chance of obtaining either $\lvert + \rangle$ or $\lvert - \rangle$. So the state $\rho_1$ is a mixed state, while $\rho_2 $ is not --- the difference being that $\rho_2$ has a definite outcome in a different measurement basis than the standard basis. You might say that $\rho_2$ stores a definite piece of information, albeit in a different basis than the computational basis.
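This calculation is easy to reproduce numerically (a NumPy sketch): projecting with $\Pi_+$ and taking the trace gives the probability of the $\lvert + \rangle$ outcome for each state.

```python
import numpy as np

rho1 = np.array([[0.5, 0.0], [0.0, 0.5]])           # the mixed state
rho2 = np.array([[0.5, 0.5], [0.5, 0.5]])           # the pure state |+><+|
Pi_plus = 0.5 * np.array([[1.0, 1.0], [1.0, 1.0]])  # projector onto |+>

p1 = np.trace(Pi_plus @ rho1 @ Pi_plus).real  # 0.5 -> 50/50 outcomes
p2 = np.trace(Pi_plus @ rho2 @ Pi_plus).real  # 1.0 -> definite outcome |+>
print(p1, p2)
```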

+ +

More generally, a mixed state is one whose largest eigenvalue is less than 1, meaning that there is no basis in which you can measure it to get a definite outcome. Superpositions allow you to express information in a different basis than the computational basis; mixtures represent a degree of randomness about the state of the system you're considering, regardless of how you measure that system.

+",124,,,,,3/29/2018 17:20,,,,0,,,,CC BY-SA 3.0 +1467,2,,1404,3/29/2018 18:27,,14,,"

This is very much an open question, but yes, there is a considerable amount of work that is being done on this front.

+ +

Some clarifications

+ +

It is, first of all, to be noted that there are two major ways to merge machine learning (and deep learning in particular) with quantum mechanics/quantum computing:

+ +

1) ML $\to$ QM

+ +

Apply classical machine learning techniques to tackle problems arising in the context of quantum mechanics/quantum information/quantum computation. +This area is growing too fast for me to even attempt a decent list of references, so I will just link to a couple of the most recent works in this direction: in 1803.04114 the authors used a machine learning approach to find circuits to compute the overlap between two states (there are a number of other works in this same direction), and in 1803.05193 the authors studied how deep neural networks can be used to find quantum control correction schemes.

+ +

2) QM $\to$ ML

+ +

Study of quantum algorithms to analyze big data, which often amounts to looking for ""quantum generalizations"" of classical machine learning algorithms. You can have a look at this other answer of mine to get some basic references about this topic. +More specifically for the case of deep learning, in 1412.3489 (aptly named Quantum Deep Learning) the authors propose a method (effectively, a quantum algorithm) to generally speed up the training of deep, restricted Boltzmann machines. +Another relevant reference here is 1712.05304, in which the authors develop a low-depth quantum algorithm to train quantum Boltzmann machines. +See 1708.09757, as well as the references in the linked answer, to find many more works on this. Note that the speed-up that is claimed in these works can vary wildly, from exponential speed-ups to polynomial ones.

+ +

Sometimes the speed-up comes from the use of quantum algorithms to solve particular linear algebraic problems (see e.g. Table 1 in 1707.08561), sometimes it comes from what basically amounts to the use of (variations of) Grover's search, and sometimes from other things (but mostly these two). +Quoting from Dunjko and Briegel here:

+ +
+

The ideas for quantum-enhancements for ML can roughly be classified + into two groups: a) approaches which rely on Grover’s search and + amplitude amplification to obtain up-to-quadratic speed-ups, and, b) + approaches which encode relevant information into quantum amplitudes, + and which have a potential for even exponential improvements. The + second group of approaches forms perhaps the most developed research + line in quantum ML, and collects a plethora quantum tools – most + notably quantum linear algebra, utilized in quantum ML proposals.

+
+ +

More direct answer to the three questions

+ +

Having said the above, let me more directly answer the three points you raised:

+ +
    +
  1. Could a deep learning algorithm run on a quantum computer? Most definitely yes: if you can run something on a classical computer you can do it on quantum computers. However, the question one should be asking is rather: can a quantum (deep) machine learning algorithm be more efficient than its classical counterparts? The answer to this question is trickier. Possibly yes, there are many proposals in this direction, but it is too soon to say what will or will not work.

  2. +
  3. Does it make sense to try? Yes!

  4. +
  5. Are there other quantum algorithms that would make deep learning irrelevant? This strongly depends on what you mean by ""irrelevant"". I mean, for what is known at the moment, there may very well be classical algorithms that will make deep learning ""irrelevant"".
  6. +
+",55,,55,,9/19/2018 20:09,9/19/2018 20:09,,,,2,,,,CC BY-SA 4.0 +1468,1,1469,,3/29/2018 18:38,,3,806,"

From Wikipedia:

+
+

A qubit is a two-state quantum system [...]

+

There are two possible +outcomes for the measurement of a qubit — usually $0$ and $1$, like a bit. +The difference is that whereas the state of a bit is either $0$ or $1$, +the state of a qubit can also be a superposition of both.

+
+

Is it true that quantum computing must be based on qubits, limiting it to only a two-state quantum system? Is it physically possible to build a $n$-state quantum-system (where $n>2$)?

+
+

This is a 'repost' of a ("legitimate") question that was deleted by the OP, here. I decided to repost it along with the answer which I started writing.

+",26,,-1,,6/18/2020 8:31,11/17/2019 14:03,Is quantum computing limited to a superposition of only two states?,,2,0,,,,CC BY-SA 3.0 +1469,2,,1468,3/29/2018 18:38,,3,,"
+

Is quantum computing limited to a superposition of only two + states?

+
+ +

In theory, it is not. Keep in mind that a qubit is a quantum analogue of the classical ""bit"", which has only two states, $0$ and $1$. In principle, there is no limit to the dimension of the state space of a quantum system. There could even be an ""infinite""-dimensional separable Hilbert space (in short, separable means the space admits a countable orthonormal basis, i.e. one that can be put in one-one correspondence with the natural numbers). For non-separable Hilbert spaces there are some complications involved. +In the context of quantum information, systems with a state-space dimension greater than $2$ are called ""qudits"".

+ +

And yes, there has been ongoing work to make physical implementations of higher-dimensional quantum systems, like qutrits (with trapped ions), as mentioned by @Andrew O in their currently deleted answer (only users having the privilege to view deleted posts can see it at present).

+ +

Relevant paper: Qutrit quantum computer with trapped ions - A. B. Klimov, R. Guzmán, J. C. Retamal, and C. Saavedra

+ +

Edit:

+ +
    +
  • @glS mentions here that in some cases making higher-dimensional quantum systems can, in fact, be easier, which is an interesting fact I did not know earlier.
  • +
+ +
+

In the context of photonics, for example, it is relatively easy to + generate states in high-dimensional Hilbert spaces, for example + exploiting the orbital angular momentum of single photons. See for + example 1607.05114 and the many + references therein, or Fickler + 2012, in which they experimentally + demonstrate entanglement of states living in 600-dimensional Hilbert + spaces.

+
+ + + +
+

It is also to be noted that the matter of non-separability is + absolutely a non-issue for practical implementation of whatever + protocol, and also that continuous variable quantum computation is a + big subject in quantum computing

+
+",26,,26,,11/17/2019 14:03,11/17/2019 14:03,,,,2,,,,CC BY-SA 4.0 +1471,1,1482,,3/29/2018 19:15,,10,593,"

This recent press release claiming that Improved measurements bring final proof of Majorana particles closer than ever, which summarizes the results of a recent paper in Nature simply entitled ""Quantized Majorana conductance"" claims that

+ +
+

Thanks to their unique physical characteristics, Majorana particles are much more stable than most other qubits.

+
+ +

Why would this be the case (in theory, at least)? Is the approach to qubits with Majorana particles considered to be valid, or are they surrounded by skepticism?

+",253,,26,,12/23/2018 12:14,12/23/2018 12:14,How could Majorana particles be used to improve quantum computers?,,2,2,,,,CC BY-SA 3.0 +1472,1,1475,,3/29/2018 19:22,,10,8799,"

The terms Quantum Computing Race and Global Quantum Computing Race have been used in the press and research communities lately in an effort to describe countries making investments into a ""battle"" to create the first universal quantum computer.

+ +

What countries are leading this ""Global Quantum Computing Race""?

+",1280,,274,,3/29/2018 20:47,7/13/2021 17:21,"What countries are leading this ""Global Quantum Computing Race""?",,3,0,,,,CC BY-SA 3.0 +1473,2,,1471,3/29/2018 19:27,,5,,"

I heard an interesting analogy that shed some light on the situation for me, so I'll share it here. Majorana fermions are topologically based; let's look at what topology sort of ""means"".

+ +

Topology looks at the bigger picture. If you have a balloon, no matter how much you blow it up, take air out, or tie it up in knots (if you're a balloon artist), it still doesn't have holes in it. To have holes would make it fundamentally different. You can stretch and shrink and twist a sphere all you want, but it's never going to turn into a donut. If you take a donut, though, you can twist that into all sorts of things with holes - but you can never make something without holes, like a sphere, or with two or more holes.

+ +

Another example of topology looking at the bigger picture. Take a balloon (again) and zoom in on its surface. Even though the balloon is curved when you zoom out, when you're zoomed in, it looks like a 2-d Euclidean plane. If you zoom in on a circle, it looks like a 1-d Euclidean plane. The little twists and turns don't matter in topology.

+ +

Let's bring this back towards Majorana fermions. Let's picture a system where we're registering if the electron goes all the way around a tree or not. It doesn't matter whether the electron has a really squiggly nutty path or a just a simple circular path - it still goes around.

+ +

The noise introduced in these systems might make the electron's path squiggly or it might not, but it doesn't actually matter. It still goes around. That's where the advantage in Majorana fermions lies - the noise doesn't affect it.

+ +

Obviously this isn't rigorous; I'll try to add more that sheds light on that as I have time.

+",91,,,,,3/29/2018 19:27,,,,1,,,,CC BY-SA 3.0 +1474,1,1524,,3/29/2018 19:39,,83,37818,"

From this question, I gathered that the main quantum computing programming languages are Q# and QISKit.

+ +

What other programming languages are available for programming quantum computers? Are there certain benefits to choosing particular ones?

+ +

EDIT: I am looking for programming languages, not emulators. Emulators simulate things. Programming languages are a method of writing instructions (either for real objects or for emulators). There may be a single language that works for multiple emulators and vice versa.

+",1289,,26,,3/29/2018 21:46,6/16/2020 23:37,What programming languages are available for quantum computers?,,6,4,,,,CC BY-SA 3.0 +1475,2,,1472,3/29/2018 19:56,,15,,"

There are several countries that are actively participating in the ""Quantum Race"", most of which are making significant investments. The estimated annual spending on non-classified quantum-technology research in 2015 broke down like this:

+ +
    +
  • United States (360 €m)
  • +
  • China (220 €m)
  • +
  • Germany (120 €m)
  • +
  • Britain (105 €m)
  • +
  • Canada (100 €m)
  • +
  • Australia (75 €m)
  • +
  • Switzerland (67 €m)
  • +
  • Japan (63€m)
  • +
  • France (52 €m)
  • +
  • Singapore (44 €m)
  • +
  • Italy (36 €m)
  • +
  • Austria (35 €m)
  • +
  • Russia (30 €m)
  • +
  • Netherlands (27 €m)
  • +
  • Spain (25 €m)
  • +
  • Denmark (22 €m)
  • +
  • Sweden (15 €m)
  • +
  • South Korea (13 €m)
  • +
  • Finland (12 €m)
  • +
  • Poland (12 €m)
  • +
+ +

If you chart that out, it looks something like this:

+ +

+ +

As you can see, the European Union invested a combined 550 €m. It's also interesting to see how this investment correlates to patent applications from each of these countries:

+ +

+ +

Since 2015 the interest from countries in quantum computing has grown significantly. The worldwide investment has now broken 2,000 €m. The biggest increase has been seen in China's spending, which encompasses quantum computing but includes quantum information systems as well.

+ +

There are several examples of countries investing in quantum computing in the past year that might be of particular interest. These include:

+ +

China - In 2017 China announced that it was set to open a National Laboratory for Quantum Information Sciences by 2020. This included a 92-Acre, $10 Billion quantum research center.

+ +

Japan - In 2017, Nippon Telegraph and Telephone (NTT) shared a prototype quantum computer for public use over the internet; the machine was produced in a joint, state-sponsored research project with Japan’s National Institute of Informatics and the University of Tokyo.

+ +

Sweden - In 2017 Sweden invested 1 billion Swedish Krona (roughly $118 million or 100m €) into a research initiative with the goal of developing a ""robust quantum computer"".

+ +

Note that the United States has been seen as not investing enough into quantum computing and over the summer academia and industry testified before the U.S. House Subcommittees on Research & Technology and Energy in an effort to spur investment into this space. Most notably Dr. Christopher Monroe was quoted as saying that ""U.S. leadership in quantum technology will be critical to our national security, and will open new doors for private industry and academia while ensuring America’s role as a global technology leader in the 21st century.”

+",274,,274,,3/29/2018 20:33,3/29/2018 20:33,,,,0,,,,CC BY-SA 3.0 +1476,2,,1461,3/29/2018 20:03,,19,,"

No, a superposition of two different states is a completely different beast than a mixture of the same states. +While it may appear from your example that $\rho_1$ and $\rho_2$ produce the same measurement outcomes (and that is indeed the case when measuring in the computational basis), as soon as you measure in a different basis they will give measurably different results.

+ +

A ""superposition"" like $\newcommand{\up}{|\!\!\uparrow\rangle}\newcommand{\down}{|\!\!\downarrow\rangle}|\psi\rangle=\frac{1}{\sqrt2}(\up+\down)$ is a pure state. This means that it is a completely characterised state. +In other words, there is no amount of information that, added to its description, could make it ""less undetermined"". +Note that every pure state can be written as superposition of other pure states. +Writing a given state $|\psi\rangle$ as a superposition of other states is literally the same thing as writing a vector $\boldsymbol v$ in terms of some basis: you can always change the basis and find a different representation of $\boldsymbol v$.

+ +

This is in direct contrast to a mixed state like $\rho_1$ in your question. +In the case of $\rho_1$, the probabilistic nature of the outcomes depends on our ignorance about the state itself. This means that, in principle, it is possible to acquire some additional information that will tell us whether $\rho_1$ is indeed in the state $\up$ or in the state $\down$.

+ +

A mixed state cannot, in general, be written as a pure state. +This should be clear from the above physical intuition: mixed states represent our ignorance about a physical state, while pure states are completely defined states, which just so happen to still give probabilistic outcomes due to the way quantum mechanics work.

+ +

Indeed, there is a simple criterion to tell whether a given (generally mixed) state $\rho$ can be written as $|\psi\rangle\langle\psi|$ for some (pure) state $|\psi\rangle$: computing its purity. +The purity of a state $\rho$ is defined as $\operatorname{Tr} \,(\rho^2)$, and it is a standard result that the purity of a state is $1$ if and only if the state is pure (and less than $1$ otherwise).
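For the two states in the question, this criterion is a one-line computation (a NumPy sketch):

```python
import numpy as np

def purity(rho):
    # Tr(rho^2): equals 1 iff rho is pure, and is < 1 otherwise.
    return np.trace(rho @ rho).real

rho1 = 0.5 * np.eye(2)               # the coin-flip mixture
rho2 = 0.5 * np.array([[1.0, 1.0],
                       [1.0, 1.0]])  # the superposition |psi><psi|

print(purity(rho1))  # 0.5 -> mixed
print(purity(rho2))  # 1.0 -> pure
```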

+",55,,55,,3/29/2018 21:02,3/29/2018 21:02,,,,0,,,,CC BY-SA 3.0 +1477,2,,1462,3/29/2018 20:22,,6,,"

The statement in Wikipedia is very generic, and only cites this paper as a reference. +Quoting from the abstract of the paper:

+ +
+

We demonstrate that decoherence of many-spin systems can drastically + differ from decoherence of single spin systems. The difference + originates at the most basic level, being determined by parity of the + central system, i.e., by whether the system comprises even or odd + number of spin-1/2 entities. Therefore, it is very likely that similar + distinction between the central spin systems of even and odd parity is + important in many other situations. Our consideration clarifies the + physical origin of the unusual two-step decoherence found previously + in the two-spin systems

+
+ +

As can be understood from the above excerpt, the paper is referring to a very specific situation, and it is by no means substantiating a broad claim such as ""qutrits are more robust to decoherence"".

+ +

Moreover, to say ""system X is more robust to decoherence under certain environmental interactions"" is generic enough to be always true, as one can always find a kind of ""environmental interaction"" in which the system X is more stable than the system Y.

+",55,,,,,3/29/2018 20:22,,,,0,,,,CC BY-SA 3.0 +1478,1,1479,,3/29/2018 20:26,,9,145,"

I was wondering if there is a source (online or review article) which tabulates recent algorithms, and their complexities, used in simulating various physical systems. Something along the lines of:

+ +

Physical System 1: Quantum Field Theory (scattering)

+ +

Complexity: Polynomial in number of particles, energy, and precision

+ +

Source: Quantum Algorithms for Quantum Field Theories (Jordan, Lee & Preskill, 2011)

+ +

Physical System 2: Atomic Energy levels

+ +

And so on.

+",1287,,26,,5/17/2019 21:44,5/17/2019 21:44,Is there any source which tabulates quantum computing algorithms for simulating physical systems?,,1,3,,,,CC BY-SA 4.0 +1479,2,,1478,3/29/2018 20:27,,8,,"

I believe what you're after is NIST's Quantum Algorithm Zoo, a comprehensive catalog of quantum algorithms maintained by Stephen Jordan. Its sections include:

+ +
    +
  • Algebraic and Number Theoretic Algorithms (14 items)
  • +
  • Oracular Algorithms (34 items)
  • +
  • Approximation and Simulation Algorithms (12 items)
  • +
+ +

and for each algorithm it includes its speedup, a description and relevant references. The third category would be the answer to the present question.

+",54,,1847,,4/19/2018 6:20,4/19/2018 6:20,,,,1,,,,CC BY-SA 3.0 +1480,2,,1429,3/29/2018 20:36,,16,,"

The idea of topological quantum computing was introduced by Kitaev in this paper. The basic idea is to build a quantum computer using the properties of exotic types of particles, known as anyons.

+ +

There are two main properties of anyons that would make them great for this purpose. One is what happens when you use them to create composite particles, a process we call fusion. Let's take the so-called Ising anyons (also known as Majoranas) as an example. If you bring two of these particles together, it could be that they will annihilate. But it could also be that they become a fermion.

+ +

There are some cases where you will know which will happen. If the Ising anyons were just pair-created from the vacuum, you know that they'll go back to the vacuum when combined. If you just split a fermion into two Ising anyons, they'll go back to being that fermion. But if two Ising anyons meet for the first time, the result of their combination will be completely random.

+ +

All these possibilities must be kept track of somehow. That is done by means of a Hilbert space, known as the fusion space. But the nature of a many-anyon Hilbert space is very different to that of many spin qubits, or superconducting qubits, etc. The fusion space doesn't describe any internal degrees of freedom of the particles themselves. You can prod and poke the anyons all you like, you won't learn anything about the state within this space. It only describes how the anyons relate to each other by fusion. So keep the anyons far apart, and decoherence will find it very hard to break into this Hilbert space and disturb any state you have stored there. This makes it a perfect place to store qubits.

+ +

The other useful property of anyons is braiding. This describes what happens when you move them around each other. Even if they don't come close to each other in any way, these trajectories can affect the results of fusion. For example, if two Ising anyons were destined to annihilate, but another Ising anyon passes between them before they fuse, they will turn into a fermion instead. Even if there was half the universe between them all when it passed by, somehow they still know. This allows us to perform gates on the qubits stored in the fusion space. The effect of these gates only depends on the topology of the paths that the anyons take around each other, rather than any small details. So they too are less prone to errors than gates performed on other types of qubit.

+ +

These properties give topological quantum computing a built in protection that is similar to quantum error correction. Like QEC, information is spread out so that it cannot be easily disturbed by local errors. Like QEC, local errors leave a trace (like moving anyons a little, or creating a new pair of anyons from the vacuum). By detecting this, you can easily clean up. So qubits built from anyons could have much less noise than ones built from other physical systems.

+ +

The big problem is that anyons don't exist. Their properties are mathematically inconsistent in any universe with three or more spatial dimensions, like the one we happen to live in.

+ +

Fortunately, we can try to trick them into existing. Certain materials, for example, have localized excitations that behave as if they were particles. These are known as quasiparticles. With a 2D material in a sufficiently exotic phase of matter, these quasiparticles can behave as anyons. Kitaev's original paper proposed some toy models of such materials.

+ +

Also, quantum error correcting codes based on 2D lattices can also play host to anyons. In the well known surface code, errors cause pairs of anyons to be created from the vacuum. To correct the errors you must find the pairs and reannihilate them. Though these anyons are too simple to have a fusion space, we can create defects in the codes that can also be moved around like particles. These are sufficient to store qubits, and basic gates can be performed by braiding the defects.

+ +

Superconducting nanowires can also be created with so-called Majorana zero modes at the end points. Braiding these is not so easy: wires are inherently 1D objects, which doesn't give a lot of room for movement. But it can nevertheless be done by creating certain junctions. And when it is done, we find that they behave like Ising anyons (or at least, so theory predicts). Because of this, there is a big push at the moment to provide strong experimental evidence that these can indeed be used as qubits, and that they can be braided to perform gates. Here is a paper on the issue that is hot off the press.

+ +
+ +

After that broad intro, I should get on with answering your actual question. Topological quantum computation concerns any implementation of quantum computation that, at a high level, can be interpreted in terms of anyons.

+ +

This includes the use of the surface code, which is currently regarded as the most mainstream method for how a fault-tolerant circuit model based quantum computer can be built. So for this case, the answer to ""How do Topological Quantum Computers differ from others models of quantum computation?"" is that it doesn't differ at all. It is the same thing!

+ +

Topological quantum computation also includes Majoranas, which is the route that Microsoft are betting on. Essentially this will just use pairs of Majoranas as qubits, and braiding for basic gates. The difference between this and superconducting qubits is little more than that between superconducting qubits and trapped-ion qubits: it is just details of the hardware implementation. The hope is that Majorana qubits will be significantly less noisy, but that remains to be seen.

+ +

Topological quantum computation also includes much more abstract models of computing. If we figure out a way to realize Fibonacci anyons, for example, we'll have a fusion space that cannot be so easily carved up into qubits. Finding the best ways to turn our programs into the braiding of anyons becomes a lot harder (see this paper, as an example). This is the kind of topological quantum computer that would be most different to standard methods. But if anyons can really be realized with very low noise, as promised, it would be well worth the small overheads required to use Fibonacci anyons to simulate the standard gate-based approach.

+",409,,409,,3/30/2018 9:44,3/30/2018 9:44,,,,0,,,,CC BY-SA 3.0 +1481,2,,1474,3/29/2018 20:38,,34,,"

Gate model hardware vendors have built out their own low level languages:

+ + + +

These have higher-level Python SDKs available:

+ + + +

Rigetti is also wrapping their language in a higher level library for calling pre-built applications called Grove.

+ +

Microsoft has developed Q# to run against their existing simulator, and eventually their physical hardware.

+ +

Since the languages above are vendor specific the main benefit is that you can run quantum programs on their computers.

+ +

Outside of the vendor specific languages is Scaffold which is being developed by Princeton researchers. This language is interesting as it includes a toolchain for analyzing the programs to determine costs, performance potential, and scalability potential.

+ +

Edit: Project Q is another framework that allows you to develop programs utilizing Python which can run on an included simulator.

+ +

Oak Ridge National Labs has started work on a project called XACC which is intended to abstract vendor specific code to allow users access to the various hardware platforms without duplicating code in each vendor specific language.

+",39,,39,,04-02-2018 20:50,04-02-2018 20:50,,,,2,,,,CC BY-SA 3.0 +1482,2,,1471,3/29/2018 20:54,,9,,"

Majoranas are anyons (a type of quasiparticle which behaves differently from fermions and bosons), and so are related to the idea of topological quantum computation. This means that a good implementation should have properties that help deal with noise built in. Their main problem is that it is difficult to prepare physical systems which behave as Majorana particles.

+ +

One way of building Majoranas is with superconducting nanowires. This is the kind that the press release and paper are referring to. Will these actually work well? We shall see. Will they be better than other qubits? We shall see.

+ +

Another way of building Majoranas is by performing code deformation in surface codes (a well studied family of quantum error correction codes). Examples can be found in this paper (of which I am an author): Poking holes and cutting corners to achieve Clifford gates with the surface code. These will probably work pretty well. They won't have much in the way of advantages over more mainstream methods though, because using defects in surface codes is the most mainstream method (whether they are Majoranas or not).

+ +

There are other ways we could trick Majoranas into existing. But as far as I know, none are being actively pursued.

+",409,,1847,,4/13/2018 9:19,4/13/2018 9:19,,,,0,,,,CC BY-SA 3.0 +1483,2,,1451,3/29/2018 23:09,,27,,"

The 'Hello World' equivalent in the D-Wave world is the 2D checkerboard example. In this example, you are given the following square graph with 4 nodes:

+ +

                                                  +

+ +

Let's define that we colour vertex $\sigma_{i}$ black if $\sigma_{i} = -1$ and white if $\sigma_{i} = +1$. The goal is to create a checkerboard pattern with the four vertices in the graph. There are various ways of defining $h$ and $J$ to achieve this result. First of all, there are two possible solutions to this problem:

+ +

               +

+ +

The D-Wave quantum annealer minimizes the Ising Hamiltonian that we define, and it is important to understand the effect of the different coupler settings. Consider for example the $J_{0,1}$ coupler:

+ +

If we set it to $J_{0,1}=-1$, the Hamiltonian is minimized if both qubits take the same value. We say negative couplers correlate. Whereas if we set it to $J_{0,1}=+1$, the Hamiltonian is minimized if the two qubits take opposite values. Thus, positive couplers anti-correlate.

+ +

In the checkerboard example, we want to anti-correlate each pair of neighbouring qubits which gives rise to the following Hamiltonian:

+ +

$$H = \sigma_{0}\sigma_{1} + \sigma_{0}\sigma_{2} + \sigma_{1}\sigma_{3} + \sigma_{2}\sigma_{3}$$

+ +

For the sake of demonstration, we also add a bias term on the $0$-th qubit such that we only get solution #1. This solution requires $\sigma_{0}=+1$, and since the annealer minimizes $H=\sum_i h_i\sigma_i + \sum_{i<j}J_{ij}\sigma_i\sigma_j$, a negative bias favours $\sigma_{0}=+1$; we therefore set $h_{0}=-1$, matching the code below. The final Hamiltonian is now:

+ +

$$H = -\sigma_{0} + \sigma_{0}\sigma_{1} + \sigma_{0}\sigma_{2} + \sigma_{1}\sigma_{3} + \sigma_{2}\sigma_{3}$$

+ +
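Before sending anything to the QPU, this tiny instance can be checked by brute force. The sketch below (my own, not part of the original tutorial) enumerates all $2^4$ spin configurations using the bias $h_0=-1$ that the code below passes to the solver, and confirms a unique ground state of energy $-5$:

```python
from itertools import product

# Energy of the biased checkerboard instance, with h0 = -1 as in the
# solver code below and J = +1 on each of the four square edges.
def energy(s):
    s0, s1, s2, s3 = s
    return -s0 + s0*s1 + s0*s2 + s1*s3 + s2*s3

states = list(product([-1, 1], repeat=4))
ground = min(energy(s) for s in states)
print(ground)                                      # -5
print([s for s in states if energy(s) == ground])  # [(1, -1, -1, 1)]
```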

So let's code it up!

+ +

NOTE: You DO NEED access to D-Wave's Cloud Service for anything to work.

+ +

First of all, make sure you have the dwave_sapi2 (https://cloud.dwavesys.com/qubist/downloads/) Python package installed. Everything is going to be Python 2.7 since D-Wave currently doesn't support any higher Python version. That being said, let's import the essentials:

+ +
from dwave_sapi2.core import solve_ising
+from dwave_sapi2.embedding import find_embedding, embed_problem, unembed_answer
+from dwave_sapi2.util import get_hardware_adjacency
+from dwave_sapi2.remote import RemoteConnection
+
+ +

In order to connect to the D-Wave Solver API you will need a valid API token for their SAPI solver, the SAPI URL and you need to decide which quantum processor you want to use:

+ +
DWAVE_SAPI_URL = 'https://cloud.dwavesys.com/sapi'
+DWAVE_TOKEN = [your D-Wave API token]
+DWAVE_SOLVER = 'DW_2000Q_VFYC_1'
+
+ +

I recommend using the D-Wave 2000Q Virtual Full Yield Chimera (VFYC) which is a fully functional chip without any dead qubits! Here's the Chimera chip layout:

+ +

+ +

At this point I am splitting the tutorial into two distinct pieces. In the first section, we are manually embedding the problem onto the Chimera hardware graph and in the second section we are using D-Wave's embedding heuristics to find a hardware embedding.

+ +

Manual embedding

+ +
+ +

The unit cell in the top left corner on the D-Wave 2000Q chip layout above looks like this:

+ +

+ +

Note, that not all couplers are visualized in this image. As you can see, there is no coupler between qubit $0$ and qubit $1$ which we would need to directly implement our square graph above. That's why we are now redefining $0\rightarrow0$, $1\rightarrow4$, $2\rightarrow7$ and $3\rightarrow3$. We then go on and define $h$ as a list and $J$ as a dictionary:

+ +
J = {(0,4): 1, (4,3): 1, (3,7): 1, (7,0): 1}
+h = [-1,0,0,0,0,0,0,0]
+
+ +

$h$ has 8 entries since we use qubits 0 to 7. We now establish connection to the Solver API and request the D-Wave 2000Q VFYC solver:

+ +
connection = RemoteConnection(DWAVE_SAPI_URL, DWAVE_TOKEN)
+solver = connection.get_solver(DWAVE_SOLVER)
+
+ +

Now, we can define the number of readouts and choose answer_mode to be ""histogram"" which already sorts the results by the number of occurrences for us. We are now ready to solve the Ising instance with the D-Wave quantum annealer:

+ +
params = {""answer_mode"": 'histogram', ""num_reads"": 10000}
+results = solve_ising(solver, h, J, **params)
+print results
+
+ +

You should get the following result:

+ +
{
+  'timing': {
+    'total_real_time': 1655206,
+    'anneal_time_per_run': 20,
+    'post_processing_overhead_time': 13588,
+    'qpu_sampling_time': 1640000,
+    'readout_time_per_run': 123,
+    'qpu_delay_time_per_sample': 21,
+    'qpu_anneal_time_per_sample': 20,
+    'total_post_processing_time': 97081,
+    'qpu_programming_time': 8748,
+    'run_time_chip': 1640000,
+    'qpu_access_time': 1655206,
+    'qpu_readout_time_per_sample': 123
+  },
+  'energies': [-5.0],
+  'num_occurrences': [10000],
+  'solutions': [
+      [1, 3, 3, 1, -1, 3, 3, -1, {
+          lots of 3 's that I am omitting}]]}
+
+ +

As you can see we got the correct ground state energy (energies) of $-5.0$. The solution string is full of $3$'s which is the default outcome for unused/unmeasured qubits and if we apply the reverse transformations - $0\rightarrow0$, $4\rightarrow1$, $7\rightarrow2$ and $3\rightarrow3$ - we get the correct solution string $[1, -1, -1, 1]$. Done!

+ +

Heuristic embedding

+ +
+ +

If you start creating larger and larger Ising instances you will not be able to perform manual embedding. So let's suppose we can't manually embed our 2D checkerboard example. $J$ and $h$ then remain unchanged from our initial definitions:

+ +
J = {(0,1): 1, (0,2): 1, (1,3): 1, (2,3): 1}
+h = [-1,0,0,0]
+
+ +

We again establish the remote connection and get the D-Wave 2000Q VFYC solver instance:

+ +
connection = RemoteConnection(DWAVE_SAPI_URL, DWAVE_TOKEN)
+solver = connection.get_solver(DWAVE_SOLVER)
+
+ +

In order to find an embedding of our problem, we need to first get the adjacency matrix of the current hardware graph:

+ +
adjacency = get_hardware_adjacency(solver)
+
+ +

Now let's try to find an embedding of our problem:

+ +
embedding = find_embedding(J.keys(), adjacency)
+
+ +

If you are dealing with large Ising instances you might want to search for embeddings in multiple threads (parallelized over multiple CPUs) and then select the embedding with the smallest chain length! A chain is when multiple qubits are forced to act as a single qubit in order to increase the degree of connectivity. However, the longer the chain the more likely that it breaks. And broken chains give bad results!

+ +

We are now ready to embed our problem onto the graph:

+ +
[h, j0, jc, embeddings] = embed_problem(h, J, embedding, adjacency)
+
+ +

j0 contains the original couplings that we defined and jc contains the couplings that enforce the integrity of the chains (they correlate the qubits within the chains). Thus, we need to combine them again into one big $J$ dictionary:

+ +
J = j0.copy()
+J.update(jc)
+
+ +

Now, we're ready to solve the embedded problem:

+ +
params = {""answer_mode"": 'histogram', ""num_reads"": 10000}
+raw_results = solve_ising(solver, h, J, **params)
+
+print 'Lowest energy found: {}'.format(raw_results['energies'])
+print 'Number of occurences: {}'.format(raw_results['num_occurrences'])
+
+ +

The raw_results will not make sense to us unless we unembed the problem. In case some chains broke, we fix them through a majority vote, as defined by the optional argument broken_chains:

+ +
unembedded_results = unembed_answer(raw_results['solutions'],
+                                    embedding, broken_chains='vote')
+
+print 'Solution string: {}'.format(unembedded_results)
+
+ +

If you run this, you should get the correct result in all readouts:

+ +
Lowest energy found: [-5.0]
+Number of occurences: [10000]
+Solution string: [[1, -1, -1, 1]]
+
+ +

I hope this answered your question and I highly recommend checking out all the additional parameters that you can pass to the solve_ising function to improve the quality of your solutions such as num_spin_reversal_transforms or postprocess.

+",1234,,27,,5/19/2018 10:35,5/19/2018 10:35,,,,0,,,,CC BY-SA 4.0 +1484,2,,1474,3/30/2018 3:55,,6,,"

Or even look at Quipper, a functional quantum programming language. With monadic semantics, built-in functionality for reversible quantum computation, hierarchical circuit support, and more! Used in actual practice too, for an algorithm to compute the solution of the Dirac equation by Laflamme.

+",429,,45,,04-04-2018 11:27,04-04-2018 11:27,,,,0,,,,CC BY-SA 3.0 +1485,1,1535,,3/30/2018 4:12,,9,897,"

One of the biggest drawbacks of Bayesian learning compared to deep learning is runtime: applying Bayes' theorem requires knowledge of how the data is distributed, and this usually requires either expensive integrals or some sampling mechanism (with the corresponding drawbacks).

+ +

Since at the end of the day it is all about propagating distributions, and this is (as far as I understand) the nature of quantum computing, is there a way to perform these computations efficiently? If so, what limitations apply?

+ +

Edit (directly related links):

+ + +",1346,,26,,5/17/2019 21:43,5/17/2019 21:43,Can quantum computing speed up Bayesian learning?,,1,2,,,,CC BY-SA 4.0 +1486,1,1499,,3/30/2018 4:18,,13,2362,"

Quantum algorithms scale faster than classical ones (at least for certain problem classes), meaning quantum computers would require a much smaller number of logical operations for inputs above a given size.

+

However, it is not so commonly discussed how quantum computers compare to regular computers (a normal PC today) in terms of power consumption per logical operation. (Has this not been talked about much, because the main focus of quantum computers is how fast they can compute data?)

+

Can someone explain why quantum computing would be more or less power-efficient than classical computing, per logical operation?

+",1348,,-1,,02-07-2021 17:47,02-07-2021 17:47,How power-efficient are quantum computers?,,2,2,,,,CC BY-SA 4.0 +1487,1,1498,,3/30/2018 4:46,,4,235,"

This popular-science article by Prof. Brukner talks about the possibility of creating a situation where

+ +
+

""A causing B"" and ""B causing A"" which we call a quantum switch. Such a setup is similar to some predator–prey relationships, in which predator numbers influence prey numbers, yet prey numbers also influence predator numbers. Following work that Ognyan Oreshkov, Fabio Costa, and I published in 2012, we now know that the quantum switch is just one example of an indefinite causal structure, in which it is not defined whether event A is a cause or an effect of event B, or whether the two are independent.

+
+ +

+ +

The work link leads to a paper that explains with great detail how to achieve that, and it also includes some applications, like to prove that two no-signalling channels that are not perfectly distinguishable in any ordinary quantum circuit can become perfectly distinguishable through the quantum superposition of circuits with different causal structures

+ +

I am a regular programmer without a background in quantum computing and don't quite understand the implications of such articles. What I get, and find very exciting, is the possibility of having such expanded causality models as a tool. My question is: how/where could the quantum switch fit in the classical computing landscape? Be it low-level hardware design or the (surely upcoming) else-then-if software pattern.

+",1346,,1346,,3/30/2018 5:18,3/30/2018 13:08,What impact would have introducing the quantum switch effect in classical computing?,,1,3,,,,CC BY-SA 3.0 +1488,1,1511,,3/30/2018 4:48,,8,339,"

I've heard that quantum computers pose a major threat to 1024-bit and possibly even 2048-bit RSA public-private key cryptography. In the future, however, bigger key sizes will probably become at risk at one point or another as newer, faster quantum computers are created, for many (if not all) algorithms. How can I reliably know whether a key size, or even an algorithm itself, is secure and safe to use at the current time? Is there a reliable resource/website that tracks which key sizes are currently at risk, based on how fast the newest quantum computers are? Or will new algorithms be created which try to prevent quantum computers from cracking them easily? The goal here is to keep the UX positive by not making a product slow due to encryption, but slower apps are worth it to guarantee a safe transfer of data.

+",1348,,,,,4/15/2018 12:45,How can we reliably know if a key size is still safe to use as new quantum computers are created?,,3,2,,,,CC BY-SA 3.0 +1489,1,1506,,3/30/2018 5:14,,8,398,"

In words of Wikipedia,

+ +
+

The Curry–Howard correspondence is the observation that two families of seemingly unrelated formalisms—namely, the proof systems on one hand, and the models of computation on the other—are in fact the same kind of mathematical objects [...] a proof is a program, and the formula it proves is the type for the program.

+
+ +

Further, under General Formulation, it provides the following table and the statement that bears my question:

+ +

+ +
+

In its more general formulation, the Curry–Howard correspondence is a correspondence between formal proof calculi and type systems for models of computation. In particular, it splits into two correspondences. One at the level of formulas and types that is independent of which particular proof system or model of computation is considered, and one at the level of proofs and programs which, this time, is specific to the particular choice of proof system and model of computation considered.

+
+ +

Is quantum computing, in this light, equivalent to classical computing, or is it a different ""model of computation"" (i.e. does it have a different set of elements on the Programming side)? Would it still correspond to the exact same Logic side?

+",1346,,1346,,3/30/2018 15:29,3/30/2018 15:29,How does the Curry-Howard correspondence apply to quantum programs?,,1,0,,,,CC BY-SA 3.0 +1490,1,1497,,3/30/2018 5:36,,11,2560,"

I'm a total beginner, I've been brought here by the featured stackoverflow blog post so I started studying.

+ +

Watching this YouTube video (A Beginner's Guide To Quantum Computing, 3:58), I saw this slide where it talks about superposition:

+ +

+ +

At first I thought that, besides qubits, which can be in a superposition of 0s and 1s, there's also a qsphere, which can be in a superposition of 5 zeros and 5 ones, when in fact it's just 5 qubits.

+ +

So when we say a qsphere, it's known to be 5 qubits?

+",1115,,26,,12/23/2018 12:15,12/23/2018 12:15,Is qsphere an actual term representing 5 qubits?,,2,0,,,,CC BY-SA 3.0 +1496,2,,1490,3/30/2018 6:25,,7,,"
+

Is qsphere an actual term representing 5 qubits?

+
+ +

If it is, it is not widely used.

+ +

I claim this because I looked around in arXiv, a repository of electronic preprints of research articles, and found nothing. There are many other units of quantum information than just qubit though. All of the following appear at least occasionally in the relevant literature.

+ +
    +
  • Qubit has a computational basis of two states and can be implemented by a two-level quantum system.

  • +
  • Qutrit has a computational basis of three states.

  • +
  • Ququart is a (rare) term used by some, and it has a basis of four.
  • +
  • Qudit is a common term. It is the $d$-dimensional generalization of a qubit. Often $d$ is left unspecified and these are used when comparing the effect of using different values of it.
  • +
  • Qumode is again a bit less often used term in continuous-variable quantum computing, where now, informally speaking, the basis has infinitely many elements.
  • +
+",144,,,,,3/30/2018 6:25,,,,1,,,,CC BY-SA 3.0 +1497,2,,1490,3/30/2018 7:27,,12,,"

The qsphere is a way of representing multi-qubit states. So it could be used for 5 qubit states, but it could also be used for any other number.

+ +

It could also be used for just one qubit. But in this case it is important to note that the single qubit qsphere is not the same as the Bloch sphere, which is our standard way of representing single qubit states.

+ +

Instead, the qsphere is essentially a more visually appealing version of a histogram. It was introduced by IBM as a visualization for the Quantum Experience, but doesn't seem to be used so much by them any more.

+ +

To construct the qsphere of a state, you have to think of the histogram you'd get if you measure in the $|0 \rangle,|1 \rangle$ basis. For example, suppose I have a 4 qubit state that would give me the results

+ +
{'0000': 0.5, '0101': 0.25, '0011': 0.125, '0111': 0.125}
+
+ +

Here the bit string 0000 comes out with probability $0.5$, and so on.

+ +

On the qsphere, this would be represented by four points: one for each of these non-zero probabilities. The latitude of the points depends on the number of 0s and 1s in the bit string. Our result that has all 0s would be at the north pole. Our 0111, with mostly 1s, would be near the south pole. The two results 0101 and 0011 that we have in our example would be at the equator.

+ +

The probability is represented by the strength of the line. The two with probability of only $0.125$ would have quite faint lines. The one with probability $0.5$ would have a much thicker line. Those with $0.25$ would be somewhere in between.

+ +

So far, we have represented all aspects of the histogram, but have not included any phase information that the state might also have. This can be encoded using the colour of the points. The sphere then has all the information on the multi-qubit state.

+ +
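The geometric rules above can be sketched in a few lines of Python. This is my own toy construction following the description (not IBM's actual implementation), and it omits the phase-to-colour mapping: the latitude of each point is set by the bit string's number of 1s, and the line strength by its probability.

```python
import numpy as np

# Map each measured bit string to a point on the qsphere: polar angle 0 at
# the north pole (all 0s) and pi at the south pole (all 1s).
def qsphere_points(probs):
    n = len(next(iter(probs)))              # number of qubits
    return {bits: {'polar_angle': np.pi * bits.count('1') / n,
                   'strength': p}
            for bits, p in probs.items()}

points = qsphere_points({'0000': 0.5, '0101': 0.25, '0011': 0.125, '0111': 0.125})
print(points['0000']['polar_angle'])   # 0.0: north pole, as described above
print(points['0111']['polar_angle'])   # 3*pi/4: near the south pole
```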

To see why it is not the nicest visualization there is, imagine performing a Hadamard gate. This transforms latitude and longitude information into colour, and vice-versa. Despite being a simple gate, it would have a very complex effect.

+ +

But then again, what visualization of many qubits doesn't have its weaknesses? If it was easy to visualize them, it would be easy to simulate them. And then we'd have no need to build quantum computers.

+",409,,,user609,3/30/2018 8:53,3/30/2018 8:53,,,,0,,,,CC BY-SA 3.0 +1498,2,,1487,3/30/2018 8:16,,3,,"

The answer depends on what exactly you mean by ""fit"", but, generally speaking, there do not seem to be direct applications of indefinite causal structures for classical computing.

+ +

Indeed, the interest of techniques like the quantum switch$^\dagger$ (aside from the purely fundamental aspects) is that they provide methods to create non-classical correlations, and more generally to encode quantum information (e.g. qubits) into ""causal"" degrees of freedom, as nicely shown in Figure 1 here.

+ +

In other words, one could imagine a quantum computer in which the qubits are encoded into (a)causal structures. However, apart from the notable impracticality of storing and maintaining many qubits encoded in such a way, it is also not known how one could apply operations between them. Interestingly, in at least some limited form, this is possible, as recently demonstrated in 1712.06884, where two different such qubits were entangled, and the associated Bell violations observed. It is not known if or how one could scale such experiments to more qubits.

+ +
+ +

$^\dagger$ It is worth mentioning that there have also been at least two (to my knowledge) experimental demonstrations of indefinite causal orders: 1608.01683 and 1712.06884.

+",55,,55,,3/30/2018 13:08,3/30/2018 13:08,,,,1,,,,CC BY-SA 3.0 +1499,2,,1486,3/30/2018 8:42,,9,,"

As usual, it is too soon to make comparisons like this. The power consumption of a device will depend strongly on the architecture it uses, for one.

+ +

However, in principle, there is no reason to suspect that quantum computers would consume more energy than classical devices performing the same operations. Indeed, one would expect the opposite, the fundamental reason being that quantum computers work (mostly) via unitary operations. A unitary operation is a reversible operation, or, in other words, an operation during which no information is lost to the environment. Such an operation is basically ""perfectly"" energy efficient (for one, it wouldn't produce heat).

+ +
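The reversibility claim is easy to check numerically for any unitary: applying $U^\dagger$ after $U$ recovers the input state exactly, and the norm (total probability) is preserved. A small sketch of my own, not part of the original answer:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
U, _ = np.linalg.qr(A)                   # QR factorisation gives a random unitary

psi = rng.normal(size=4) + 1j * rng.normal(size=4)
psi /= np.linalg.norm(psi)               # normalised 2-qubit state

print(np.allclose(U.conj().T @ U, np.eye(4)))    # True: U is unitary
print(np.allclose(U.conj().T @ (U @ psi), psi))  # True: fully reversible
print(np.isclose(np.linalg.norm(U @ psi), 1.0))  # True: norm preserved
```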

So, in principle, the elementary operations performed in a quantum algorithm which uses unitary operations can be ideally energy efficient. This is in direct contrast with what you have with classical devices, in which the elementary operations are non-reversible, and therefore necessarily ""waste"" some amount of information for every operation.

+ +

Having said this, there are a million caveats to be taken into account. For example, quantum computers in the real world will have to deal with decoherence, so that the operations are not really unitary. This implies that error-correction protocols are necessary, and one should then go and track the added energy consumption of this whole process. Also, while unitary operations are energy efficient, in practice the result of the computation has to be read out, and measurements are non-reversible operations which typically destroy information. After each such measurement, one will need to generate the information carriers again. Also, many quantum computing protocols rely on repeated measurements during the computation. One could go on and on, as this is very much uncharted territory.

+ +

One recent work that discusses in some measure the power consumption problem is 1610.02365, in which the authors present a method for (classical machine learning) information processing using photonic chips. One claim of the authors is that photonic chips allow one to perform operations in an extremely energy efficient way, exploiting the natural evolution of coherent light. They do not demonstrate any form of quantum computation, but their energy-efficiency reasoning would not change much when using the same device for quantum information processing.

+",55,,55,,06-05-2019 20:16,06-05-2019 20:16,,,,0,,,,CC BY-SA 4.0 +1500,1,1503,,3/30/2018 9:01,,3,217,"

On the D-Wave 2000Q website, it is stated that the processor environment is kept ""in a high vacuum: pressure is 10 billion times lower than atmospheric pressure"".

+ +

Why the pressure has to be such low? What would happen to the computer if the pressure increases?

+",,user609,26,,3/30/2018 10:00,11/19/2018 13:03,Why D-Wave 2000Q requires such a low pressure?,,2,0,,,,CC BY-SA 3.0 +1501,2,,1462,3/30/2018 9:13,,6,,"

To simplify things a bit, let's take a single qubit and a single qutrit for comparison.

+ +

First, the amplitude damping channel (giving e.g. emission of a photon) for a qubit is $\mathcal E\left(\rho\right) = E_0\rho E_0^\dagger + E_1\rho E_1^\dagger$, where $$E_0 = \begin{pmatrix}1 && 0 \\ 0 &&\sqrt{1-\gamma}\end{pmatrix}, \quad E_1 = \begin{pmatrix} 0 && \sqrt \gamma \\ 0 &&0\end{pmatrix},$$ with $E_0$ being required for normalisation. This can also be written as $E_0 = \left|0\rangle\langle 0\right| + \sqrt{1-\gamma}\left|1\rangle\langle 1\right|$ and $E_1 = \sqrt{\gamma}\left|0\rangle\langle 1\right|$. When $\gamma = 1$, the amplitude damping channel gives the state $$\rho = \begin{pmatrix} 1 && 0 \\ 0 && 0\end{pmatrix} = \left|0\rangle\langle 0\right|$$ with certainty. However, this only applies at $0$ temperature, so to represent what happens at finite temperature, the generalised amplitude damping channel needs to be used. This has $\mathcal E\left(\rho\right) = \sum_kE_k\rho E_k^\dagger$, where $$E_0 = \sqrt p\begin{pmatrix}1 && 0 \\ 0 &&\sqrt{1-\gamma}\end{pmatrix}, \quad E_1 = \sqrt p\begin{pmatrix} 0 && \sqrt \gamma \\ 0 &&0\end{pmatrix},$$ $$E_2 = \sqrt{1-p}\begin{pmatrix}\sqrt{1-\gamma} && 0 \\ 0 &&1\end{pmatrix}, \quad E_3 = \sqrt{1-p}\begin{pmatrix} 0 && 0 \\ \sqrt \gamma &&0\end{pmatrix}.$$ Each of these can be split into four sections - an element in the upper left (bottom right) corner causes the amplitude of the upper left (bottom right) corner to decrease (here, this is really just a normalisation factor) while having a single element in a matrix, which is in the top right (bottom left) causes loss (gain) from the bottom right (top left) to the top left (bottom right) element of the density matrix. Now when $\gamma = 1$, this gives the state $$\rho = \begin{pmatrix}p && 0\\ 0 && 1-p\end{pmatrix} = p\left|0\rangle\langle 0\right| + \left(1-p\right)\left|1\rangle\langle 1\right|.$$

+ +
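The $\gamma=1$ claim above, that every input state is mapped to $\mathrm{diag}(p, 1-p)$, can be verified numerically with the four Kraus operators exactly as written. A quick sketch (my own check, not part of the original answer):

```python
import numpy as np

p, gamma = 0.3, 1.0
# The four Kraus operators of the generalised amplitude damping channel:
E = [np.sqrt(p) * np.array([[1, 0], [0, np.sqrt(1 - gamma)]]),
     np.sqrt(p) * np.array([[0, np.sqrt(gamma)], [0, 0]]),
     np.sqrt(1 - p) * np.array([[np.sqrt(1 - gamma), 0], [0, 1]]),
     np.sqrt(1 - p) * np.array([[0, 0], [np.sqrt(gamma), 0]])]

rho = np.array([[0.5, 0.5], [0.5, 0.5]])  # |+><+|, an arbitrary test state
out = sum(Ek @ rho @ Ek.conj().T for Ek in E)
print(out)  # diag(p, 1-p) = [[0.3, 0], [0, 0.7]], independent of rho
```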

This shows that after a long time, the loss and gain cancel (i.e. no more loss or gain occurs), although this state is not very useful for computation, so let's try extending this a bit and adding another qubit. Let's go a bit further than that and take a pair of coupled spin-half fermions (such as a pair of electrons). This gives 4 states: the singlet state $S = \frac{1}{\sqrt 2}\left(\left|\uparrow\downarrow\right> - \left|\downarrow\uparrow\right>\right)$. It also gives a subspace of triplet states $T = \left\lbrace \left|\uparrow\uparrow\right>, \frac{1}{\sqrt 2}\left(\left|\uparrow\downarrow\right> + \left|\downarrow\uparrow\right>\right), \left|\downarrow\downarrow\right>\right\rbrace = \left\lbrace T_0, T_1, T_2\right\rbrace$.

+ +

Describing this as a qudit with $d=4$ and using an equivalent 'qudit generalised amplitude damping channel' gives a few possible interaction as in the qubit case:

+ +
    +
  • loss from the triplet subspace to the singlet state
  • +
  • gain from the singlet state to the triplet subspace
  • +
  • amplitude damping within the triplet subspace.
  • +
+ +

By itself, this still doesn't help very much, so let's place this pair of spin-half particles in the centre of a larger system (a 'spin bath', here used as the environment, mediating the interaction) and allow it to interact. As the states in the triplet subspace are symmetric and it is in the centre of a bath, the probability rate of amplitude damping on the first qubit will, on average, equal the rate of amplitude damping on the second qubit. This means that, instead of having a single qudit amplitude damping channel, there are two copies of the same generalised amplitude damping channel, which reduces the number of possible interactions. In the limit of long time and taking $p=1/2$ (this is just setting the system to a certain non-zero temperature), these are, ignoring normalisation:

+ +
    +
  • gain on the $\left|\downarrow\downarrow\right> = T_2$ state: $$T_2\rightarrow T_2' \propto T_2 + \beta T_1 + \beta^2T_0$$
  • +
  • loss on the $\left|\uparrow\uparrow\right> = T_0$ state: $$T_0\rightarrow T_0' \propto T_0 + \beta T_1 + \beta^2T_2$$
  • +
  • gain and loss on the $\frac{1}{\sqrt 2}\left(\left|\uparrow\downarrow\right>\pm \left|\downarrow\uparrow\right>\right)$ states ($S$ and $T_1$ states): $$T_1\rightarrow T_1' \propto \left(1+\beta^2\right)T_1 + \beta \left(T_0+T_2\right)$$ $$S\rightarrow S' \propto\left(1-\beta^2\right) S.$$
  • +
+ +

This shows that oscillations in the triplet subspace occur instead of decay, meaning that the triplet subspace is a decoherence free subsystem and can be used as a qutrit. In reality, interactions are more complicated and there will be other types of noise, so there will still be some decoherence, but the reasoning is still the same in that pairing two spin-half particles allows for the triplet state to be used as a decoherence free subsystem (or at least have less decoherence than a qubit) to mitigate the effects of some types of noise.

+",23,,,,,3/30/2018 9:13,,,,0,,,,CC BY-SA 3.0 +1502,2,,115,3/30/2018 9:42,,12,,"

A good summary on the current state of QRAM (as of 2017) can be found in this paper, and a comparison of it with classical methods can be found in this talk. The Giovannetti type ""bucket brigade"" QRAM still seems to be the best that is known, although modifications exist. There are serious caveats to the use of any such QRAM, and no alternative that avoids these has yet been proposed (other than using massively parallelized classical computers).

+ +

The ""bucket brigade"" QRAM encodes in superposition $N$ $d$-dimensional vectors into $\log(Nd)$ qubits using $\mathcal{O}(\log(Nd))$ time. An alternative scheme with polynomial time reduction was proposed in this paper. In either case, the number of physical resources used is is exponential with the number of qubits. This could cause problems that limit the implementation and/or usefulness of the scheme.

+ +
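The $\log(Nd)$ qubit count versus the $Nd$ underlying memory cells can be made concrete with some back-of-the-envelope arithmetic (my own illustrative numbers, not from the paper):

```python
import math

# Qubits needed to hold N vectors of dimension d in superposition,
# versus the N*d classical memory cells the QRAM has to address.
def qram_qubits(N, d):
    return math.ceil(math.log2(N * d))

for N, d in [(2**20, 2**10), (2**30, 2**10)]:
    print(N, d, qram_qubits(N, d), N * d)
# e.g. a million 1024-dimensional vectors fit in 30 qubits,
# but the QRAM still involves ~10^9 memory cells.
```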

The issue depends on how many components need to be active at once. Ideally, the number of active components needs only be linear with the number of qubits in the memory. However, actual implementations are usually far from ideal.

+ +

This paper, for example, looks at the effects of noise, and concludes that the need for error correction could remove any advantages of the small number of active components. The severity of this potential problem depends on what algorithm is being used by the quantum computer, and so how many times the QRAM must be queried. For a polynomial number of queries, full fault-tolerance could be avoided. But for superpolynomial queries, such as for Grover's search, full-tolerance seems to be needed.

+ +

As far as comparing to other possibilities goes, it has been argued that the exponential number of resources for the QRAM should be compared to a classical parallel architecture with an exponential number of processors. The quantum algorithm does not look so great with this comparison. As explained here, some algorithms for which we expect a quantum speedup are actually slower when this parallelism is taken into account.

+ +

Though not as general in scope, another method for putting classical data into superpositions was also proposed here and so deserves a mention.

+",409,,26,,5/15/2019 14:58,5/15/2019 14:58,,,,0,,,,CC BY-SA 4.0 +1503,2,,1500,3/30/2018 9:50,,7,,"

Pressure implies the presence of stray atoms flying around messing things up. The use of a vacuum is required to prevent this, as one of the ways of keeping the device well isolated from unwanted effects. I think that they are just intending the ""10 billion times lower than atmospheric pressure"" statement to demonstrate how good their vacuum is.

+",409,,409,,3/30/2018 12:10,3/30/2018 12:10,,,,0,,,,CC BY-SA 3.0 +1504,2,,1488,3/30/2018 9:51,,3,,"

Given that you mention large key sizes (1024 bit and up), you are talking about asymmetric cryptography. Other (symmetric) cryptographic schemes are safe by simply doubling their key size (e.g. going from 128 to 256 bits) because that compensates for the theoretical advantage of Grover's algorithm for an exhaustive search.

+ +
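The doubling rule can be made concrete: Grover's algorithm searches $2^k$ keys in roughly $2^{k/2}$ queries, so a $k$-bit key offers only about $k/2$ bits of security against it. A trivial sketch of my own:

```python
# Effective security level of a k-bit symmetric key against Grover's
# quadratic speedup: ~2^(k/2) queries, i.e. k/2 bits of security.
def effective_bits_against_grover(key_bits):
    return key_bits // 2

print(effective_bits_against_grover(128))  # 64: weakened
print(effective_bits_against_grover(256))  # 128: back to the classical level
```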

Asymmetric cryptography can be divided into currently used, practical schemes (essentially RSA and ECC) and postquantum cryptography.

+ +

Since Shor's algorithm scales (in runtime) as $O(n^3)$, once a certain key size of RSA or ECC is unsafe, even doubling its size will only mean an 8-fold increase in the computational difficulty to calculate the new private key with a quantum computer: Once you conclude that RSA and ECC keys are no longer safe due to quantum computers, going to longer key lengths will not gain much. New algorithms (""postquantum cryptography"") are being designed that are believed to be safe against attacks using quantum computers.

+ +
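
To make the scaling argument concrete, here is a small Python sketch. The cost model is purely illustrative (my own, keeping only the leading $n^3$ term of Shor's runtime and dropping all constants):

```python
# Illustrative cost model: keep only the leading n^3 term of Shor's runtime.

def shor_cost(n_bits):
    """Leading-order runtime model for Shor's algorithm, O(n^3)."""
    return n_bits ** 3

for n in (1024, 2048, 4096):
    ratio = shor_cost(2 * n) // shor_cost(n)
    print(f"{n} -> {2 * n} bits: attack cost grows only {ratio}-fold")
```

Doubling the key length multiplies the attacker's work by a constant factor of $(2n)^3/n^3 = 8$ regardless of the starting size, which is why "just use longer keys" is not a viable long-term defence against Shor.

+ +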

Postquantum cryptography already takes quantum computers as attack vectors into account. They typically require huge key sizes, though (such as more than 10 kbits).

+",,user1039,,user1039,4/14/2018 15:58,4/14/2018 15:58,,,,0,,,,CC BY-SA 3.0 +1505,2,,1468,3/30/2018 10:27,,2,,"

Even if you assume that your quantum computer will be based on qubits, it can operate with an arbitrarily large number of states (if it has enough qubits): the combination of two qubits allows it to calculate with a total of four states, that of three qubits with eight states, etc.

+",,user1039,,,,3/30/2018 10:27,,,,0,,,,CC BY-SA 3.0 +1506,2,,1489,3/30/2018 10:35,,4,,"

Quantum computing doesn't change the Curry-Howard isomorphism, for the following reasons:

+ +
    +
  1. Quantum computers can simulate Turing machines and vice versa. These devices can be faster at certain tasks, but the computable languages remain the same.

  2. +
  3. There is, a priori, no reason we cannot apply a model like the Turing machine or the Lambda calculus to quantum computers, even if the two kinds of machine were to compute different, mutually exclusive sets of languages!

  4. +
  5. For practical purposes, I very much doubt quantum computing will be of use in automated theorem proving or proof assistance (by computers).

  6. +
+ +

So, as for your question, yes in the context of Curry-Howard, we can consider a quantum computer and a classical machine equivalent, as it is likely we wish to model both with the Lambda calculus for the purpose of the isomorphism.

+",253,,,,,3/30/2018 10:35,,,,4,,,,CC BY-SA 3.0 +1508,1,1592,,3/30/2018 11:30,,17,669,"

For a Quantum Turing machine (QTM), let the state set be $Q$, and the alphabet of symbols appearing at the tape head be $\Sigma=\{0,1\}$. Then, as per my understanding, at any given time while the QTM is calculating, the qubit that appears at its head will hold an arbitrary vector $V_\Sigma = a|1\rangle+b|0\rangle$. Also, if $|q_0\rangle , |q_1\rangle, \ldots \in Q$, then the state vector at that instant will also be an arbitrary vector $V_q=b_0|q_0\rangle + b_1 |q_1\rangle+ \ldots$.

+ +

Now, after the instruction cycle is complete, the vectors $V_\Sigma$ and $V_q$ will decide whether the QTM will move left or right along the qubit tape. My question is: since the Hilbert space formed by $Q \otimes \Sigma$ is an uncountably infinite set and $\{\text{Left,Right}\}$ is a discrete set, the mapping between them will be difficult to create.

+ +

So how is the decision to move left or right made? Does the QTM move both left and right at the same time, meaning that the set $\{\text{Left,Right}\}$ also spans a different Hilbert space, and hence the motion of the QTM becomes something like $a|\text{Left}\rangle+b|\text{Right}\rangle$?

+ +

Or, just like a Classical Turing machine, the QTM moves either left or right, but not both at the same time?

+",1355,,26,,12/13/2018 19:41,12/13/2018 19:41,"In a Quantum Turing Machine, how is the decision to move along the memory tape made?",,2,2,,,,CC BY-SA 3.0 +1509,1,,,3/30/2018 11:45,,6,289,"

In the context of quantum control theory, it is common to see references to both quantum control and quantum optimal control (e.g. 0910.2350 or the guide on qutip quantum control functions). +Sometimes it seems like the two terms are used interchangeably, while sometimes they are treated as different things.

+ +

For example, from the above link, it seems that quantum optimal control is (unsurprisingly) a special kind of quantum control. +It is, however, not too clear what the exact difference between the two is. +For example, are both approaches used to tackle the same classes of problems? Is the only difference that quantum optimal control looks for optimal solutions, while quantum control techniques have less strict requirements?

+",55,,55,,3/30/2018 11:52,10/21/2022 4:02,What is the difference between quantum control and quantum optimal control?,,1,5,,,,CC BY-SA 3.0 +1511,2,,1488,3/30/2018 12:45,,7,,"

We (i.e. the current state of research) just don't know, but we can guess.

+ +

We guess that there may be a problem if Post Quantum Crypto (PQC) lags behind: Shor's algorithm can solve the factoring problem efficiently (thereby breaking RSA public-key crypto), and Grover's algorithm forces a doubling of the number of bits for all symmetric keys, as it can search a keyspace of $n$ bits in $O(2^{n/2})$ time (proportional to the square root of the size of the space of all possible keys), instead of the expected $O(2^n)$ for the classical brute-force algorithm.

+ +
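
As a numerical illustration of the Grover trade-off (a sketch; the query counts are leading-order only, ignoring constant factors):

```python
import math

# Leading-order query counts (constants ignored; function names are mine):
def classical_queries(n_bits):
    return 2 ** n_bits              # exhaustive key search, worst case

def grover_queries(n_bits):
    return math.isqrt(2 ** n_bits)  # ~ 2^(n/2) oracle calls

# A 128-bit key attacked with Grover costs roughly what a 64-bit key
# costs classically, so doubling to 256 bits restores a 2^128 margin:
print(grover_queries(128) == classical_queries(64))
print(grover_queries(256) == classical_queries(128))
```

This is the reason doubling the key size suffices for symmetric schemes, in contrast with the RSA situation above.

+ +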

So, PQC tries to create cryptography based on methods for which we currently think that quantum computing offers little advantage, such as lattice based or coding based crypto. But we can't know this for certain, just as we don't know whether there are classical algorithms that break current 'commercial grade' crypto.

+ +

Note that for RSA, increasing the key size simply doesn't work, as Shor can factor in low-order polynomial time. In other words, a key big enough for Shor to fail is a key big enough that any normal en/decryption operations become impossible.

+ +

So, we really need replacements. Fortunately, I think that PQC started on time and that we will get a good replacement for RSA (and others!) when the truly powerful machines capable of running Shor and Grover effectively arrive.

+",253,,253,,4/15/2018 11:08,4/15/2018 11:08,,,,0,,,,CC BY-SA 3.0 +1514,1,,,3/30/2018 13:40,,8,280,"

With the integer factorisation problem, Shor's algorithm is known to provide a substantial (exponential?) speedup compared to classical algorithms. Are there similar results regarding more basic maths, such as evaluating transcendental functions?

+ +

Let's say I want to calculate $\sin2$, $\ln{5}$ or $\cosh10$. In the classical world, I might use an expansion like the Taylor series or some iterative algorithm. Are there quantum algorithms that can be faster than what a classical computer can do, be it asymptotically better, fewer iterations to the same precision, or faster by wall clock time?

+",580,,580,,3/30/2018 21:43,3/30/2018 21:43,Does quantum computing provide any speedup in evaluation of transcendental functions?,,1,3,,,,CC BY-SA 3.0 +1515,1,1516,,3/30/2018 14:12,,6,200,"

Do the latest D-Wave machines use compounds of $\require{\mhchem}\ce{^{3}He}$ and $\ce{^{4}He}$ for cooling? If not, how does D-Wave approach cooling its plates low enough to achieve superconductivity? What compounds does D-Wave use for the plates in their fridge, and at what temperature do its plates reach superconductivity?

+",1437,,26,,5/17/2019 21:32,5/17/2019 21:32,Do the latest D-Wave computers use Helium compounds for cooling?,,1,0,,,,CC BY-SA 4.0 +1516,2,,1515,3/30/2018 14:28,,6,,"

Yes, they use $\require{\mhchem}\ce{^3He}$ and $\ce{^4He}$. No, they do not use compounds of these, but instead a solution of these two (at the operating temperature) liquid noble gases. The details can be found in the Wikipedia article on dilution refrigerators.

+",,user1039,,,,3/30/2018 14:28,,,,0,,,,CC BY-SA 3.0 +1517,1,1518,,3/30/2018 14:29,,5,2067,"

This article talks about correlation and causality in quantum mechanics. In the abstract, under Framework for local quantum mechanics, it says (emphasis mine):

+ +
+

The most studied, almost epitomical, quantum correlations are the non-signalling ones, such as those obtained when Alice and Bob perform measurements on two entangled systems. Signalling quantum correlations exist as well, such as those arising when Alice operates on a system that is subsequently sent through a quantum channel to Bob who operates on it after that.

+
+ +

Basically, what is the difference between signalling and non-signalling, and what role does the channel play on that?

+",1346,,10480,,4/16/2021 20:44,4/16/2021 20:44,"What is the difference between signaling and non-signaling quantum correlations, and what is a signaling channel?",,1,0,,,,CC BY-SA 3.0 +1518,2,,1517,3/30/2018 14:55,,6,,"

Basically, it means that the correlations could be used to send a message. Or simply that Bob’s measurement outcomes can reveal some details of Alice’s actions.

+ +

This is impossible when Alice and Bob each hold one qubit of a Bell pair. Despite the entanglement present, as well as contextuality, signaling in this case would result in faster-than-light communication. And that’s not allowed.

+ +

But if Alice does something to a qubit and then gives it to Bob, signaling is certainly allowed. Alice could simply encode bit values as $|0\rangle$ and $|1\rangle$, for example.

+ +

The process of handing over the qubit is referred to as the ‘quantum channel’. This could be done by physically sending it, using a technique such as teleportation to transfer the state, or just Alice shouting ‘Hey Bob, this qubit is yours now’ across the lab. Often channels come with the application of noise, because nothing in life is perfect. But sometimes they are just a concept we invoke in our stories about Alice and Bob.

+ +

Note that non-signaling correlations usually mean that some source correlated a couple of qubits, and then sent one to Alice and the other to Bob. There is then no causal link between what Alice does to hers, and what measurement results Bob gets from his. This is the reason for the lack of signaling.

+ +

Signaling correlations, however, usually mean that Alice was the source of the qubit that was sent to Bob. So there is a causal relation: Alice directly affects what state Bob gets, and can use this to send information.

+ +
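
The no-signaling property of the Bell-pair scenario can also be checked numerically: whatever unitary Alice applies to her qubit alone, Bob's reduced state stays maximally mixed. A small numpy sketch (the particular gates tried here are an arbitrary choice of mine):

```python
import numpy as np

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)           # (|00> + |11>)/sqrt(2)

def bobs_state(alice_unitary):
    """Apply a unitary on Alice's qubit only, then trace her out."""
    psi = np.kron(alice_unitary, np.eye(2)) @ bell
    rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)
    return np.einsum('ijik->jk', rho)                # partial trace over Alice

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
S = np.diag([1, 1j])                                 # a complex phase gate

for U in (I2, X, H, S):
    assert np.allclose(bobs_state(U), np.eye(2) / 2) # always maximally mixed
print("Bob's reduced state is I/2 for every local action of Alice")
```

Since Bob's density matrix carries no trace of Alice's choice, no measurement he performs can reveal it, which is exactly the non-signaling statement above.

+ +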

This is how we would usually want to describe things, but this paper avoids such language. This is because it is probing situations where the causal order can be indefinite. It therefore tries to identify these different kinds of correlations in a more general way, and also to see if there is anything else.

+",409,,409,,3/30/2018 18:53,3/30/2018 18:53,,,,4,,,,CC BY-SA 3.0 +1519,1,,,3/30/2018 15:27,,13,234,"

In Wikipedia we can read that

+ +
+

the Curry–Howard correspondence is a correspondence between formal proof calculi and type systems for models of computation. In particular, it splits into two correspondences. One at the level of formulas and types that is independent of which particular proof system or model of computation is considered, and one at the level of proofs and programs which, this time, is specific to the particular choice of proof system and model of computation considered.

+
+ +

In this other related question I asked about the relation between the C-H correspondence as it was conceived and quantum computing. Here the questions are:

+ +
    +
  • Does this theory have to be updated to embrace quantum-specific phenomena like extended causality?
  • +
  • If yes, what are the changes made?
  • +
+",1346,,2645,,12/18/2018 19:39,12/18/2018 19:39,Does the Curry-Howard correspondence have a quantum-specific type system?,,0,0,,,,CC BY-SA 3.0 +1520,2,,1514,3/30/2018 18:04,,6,,"

The only thing I can think of is the algorithm for finding matrix powers which has superpolynomial speed up. It's from this list of quantum algorithms (it seems to be a bit outdated though).

+",1472,,,,,3/30/2018 18:04,,,,2,,,,CC BY-SA 3.0 +1522,1,1530,,3/30/2018 23:12,,17,1030,"

I've mostly worked with superconducting quantum computers, so I am not really familiar with the experimental details of photonic quantum computers that use photons to create continuous-variable cluster states, such as the one that the Canadian startup Xanadu is building. How are gate operations implemented in these types of quantum computers? And what is the universal quantum gate set in this case?

+",1234,,26,,3/31/2018 4:57,3/31/2018 17:27,How are gates implemented in a continuous-variable quantum computer?,,1,1,,,,CC BY-SA 3.0 +1524,2,,1474,3/31/2018 6:47,,54,,"

Wikipedia list of Quantum Computer programming languages

+ +

(This answer is not a copy of that webpage, it's more updated and with verified links. In some cases the author's paper or website link is added.)

+ + + +

The website Quantum Computing Report has a Tools webpage listing over a dozen links, some new and some repeating the above list.

+ +

See also QuanTiki's webpage: ""List of QC simulators"", for a huge list of simulators and programming languages based on: C/C++, CaML, OCaml, F#, along with GUI based, Java, JavaScript, Julia, Maple, Mathematica, Maxima, Matlab/Octave, .NET, Perl/PHP, Python, Scheme/Haskell/LISP/ML and other online services providing calculators, compilers, simulators, and toolkits, etc.

+ +
+

Are there certain benefits to choosing particular ones?

+
+ +

If you plan on using a particular quantum computer then one would hope that the programming language developed by the manufacturer is both best suited for that particular machine and well supported.

+ +

Choosing a language with a larger following means that there are more Forums available and hopefully more bug fixes and support.

+ +

Unfortunately, that leaves some great niche products struggling to gain a user base. Trying to find one language that is both powerful/expressive and supported across various platforms is the trick; the answer is a matter of opinion at the moment.

+ +

An evaluation of four software platforms: Forest (pyQuil), QISKit, ProjectQ, and the Quantum Developer Kit is offered by Ryan LaRose in ""Overview and Comparison of Gate Level Quantum Software Platforms"" (6 Jul 2018).

+ +
+ +

Updates:

+ +

Google's Cirq and OpenFermion-Cirq: ""Google's AI Blog - Announcing Cirq: An Open Source Framework for NISQ Algorithms"".

+ +

D-Wave's Leap and Ocean SDK allows access to a D-Wave 2000Q™ System in a cloud environment with access to a 2000+ qubit quantum annealing machine to test and run workloads for free, assuming the core algorithms used go into the open source pool. Apply to login at D-Wave's Leap In webpage.

+ +

Rigetti Computing's Quantum Cloud Service (QCS) offers a Quantum Machine Image, a virtualized programming, and execution environment that is pre-configured with Forest 2.0, to access up to 16 qubits of a 128 qubit computer.

+ +

Stay tuned for information on Fujitsu's Digital Annealer, an architecture capable of performing computations some 10,000 times faster than a conventional computer. If they eventually provide a development environment that is cross-compatible with true quantum computers these two paragraphs will remain in this answer, otherwise I will remove them.

+ +

While their silicon chip is not quantum in nature Fujitsu has partnered with 1Qbit to develop what is described as a ""Quantum Inspired AI Cloud Service"", whether their Digital Annealer quacks like a duck (anneals like a D-Wave, and uses compatible code) remains to be seen. Visit here to access the Fujitsu Digital Annealer Technical Service.

+ +

University of Pennsylvania's QWIRE (choir) is a quantum circuit language and formal verification tool, it has a GitHub webpage.

+ +

A review of: Cirq, Cliffords.jl, dimod, dwave-system, FermiLib, Forest (pyQuil & Grove), OpenFermion, ProjectQ, PyZX, QGL.jl, Qbsolv, Qiskit Terra and Aqua, Qiskit Tutorials, and Qiskit.js, Qrack, Quantum Fog, Quantum++, Qubiter, Quirk, reference-qvm, ScaffCC, Strawberry Fields, XACC, and finally XACC VQE is offered in the paper: ""Open source software in quantum computing"" (Dec 21 2018), by Mark Fingerhuth, Tomáš Babej, and Peter Wittek.

+ +

I will return to this answer from time to time to make updates, without excessive bumping.

+",278,,7429,,6/20/2019 0:19,6/20/2019 0:19,,,,7,,,,CC BY-SA 4.0 +1527,1,1528,,3/31/2018 13:47,,9,380,"

The circuit

+ +

+ +

can be translated to the following code:

+ +
operation Teleport(msg, there) {
+    let register = AllocateRegister();
+    let here = register;
+    H(here);
+    CNOT(here, there);
+    CNOT(msg, here);
+    H(msg);
+    // Measure out the entanglement.
+    if (M(msg) == One)  { Z(there); }
+    if (M(here) == One) { X(there); }
+}
+
+ +

How do the if-statements come about? Why are double-lines used after measurements?

+",1589,,26,,3/31/2018 14:26,3/31/2018 14:26,"What do double wires mean in quantum circuits, and how do they relate to if statements?",,1,2,,,,CC BY-SA 3.0 +1528,2,,1527,3/31/2018 13:56,,12,,"

The double lines are one common convention for classical bits in quantum circuit diagrams. In this case, they represent the bits arising from the measurements of the qubits msg and here.

+ +

The controlled operations involving the classical bits are just operations which are performed if those classical bits happen to have the value 1, which is what the if statements are for in the pseudocode.
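
+ +

To see how those classically-controlled operations play out, here is a minimal statevector simulation of the teleportation circuit in plain Python/numpy (a sketch; the qubit ordering `msg, here, there` and all helper names are my own, not part of the question's pseudocode):

```python
import numpy as np

rng = np.random.default_rng(0)

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

def embed(mats):
    """Kronecker product of a list of single-qubit operators."""
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def op(gate, k, n=3):
    return embed([gate if i == k else I2 for i in range(n)])

def cnot(c, t, n=3):
    P0, P1 = np.diag([1, 0j]), np.diag([0j, 1])
    return (embed([P0 if i == c else I2 for i in range(n)])
            + embed([P1 if i == c else (X if i == t else I2) for i in range(n)]))

def measure(state, k, n=3):
    """Projectively measure qubit k; return (outcome, collapsed state)."""
    s = state.reshape([2] * n)
    p1 = np.sum(np.abs(np.take(s, 1, axis=k)) ** 2)
    outcome = int(rng.random() < p1)
    keep = np.zeros_like(s)
    idx = [slice(None)] * n
    idx[k] = outcome
    keep[tuple(idx)] = s[tuple(idx)]
    collapsed = keep.reshape(-1)
    return outcome, collapsed / np.linalg.norm(collapsed)

msg = np.array([0.6, 0.8j])                 # arbitrary normalized message state
state = embed([msg, np.array([1, 0j]), np.array([1, 0j])])

state = op(H, 1) @ state                    # H(here)
state = cnot(1, 2) @ state                  # CNOT(here, there)
state = cnot(0, 1) @ state                  # CNOT(msg, here)
state = op(H, 0) @ state                    # H(msg)
m_msg, state = measure(state, 0)            # first double wire
m_here, state = measure(state, 1)           # second double wire
if m_msg:
    state = op(Z, 2) @ state                # if (M(msg) == One)  { Z(there); }
if m_here:
    state = op(X, 2) @ state                # if (M(here) == One) { X(there); }

bob = state.reshape(2, 2, 2)[m_msg, m_here, :]
print(np.allclose(bob, msg) or np.allclose(bob, -msg))  # up to a global phase
```

The two `if` branches on the classical bits are exactly what the double wires carry: ordinary conditional logic applied after the quantum state has collapsed.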

+",124,,124,,3/31/2018 14:10,3/31/2018 14:10,,,,0,,,,CC BY-SA 3.0 +1529,1,1537,,3/31/2018 16:20,,23,4779,"

Transmon and Xmon qubits are two types of superconducting charge qubits that seem to be often used in superconducting quantum devices. +However, I wasn't able to easily find direct comparisons between them. +The Xmon architecture seems (1304.2322) to have been introduced by Martinis' group, as an alternative to the transmon qubit, so I would expect the former architecture to be better in at least some respects. +On the other hand, it also seems (cond-mat/0703002 and 0712.3581 seem to be the relevant references) that the devices from IBM use transmon qubits.

+ +

What are the main differences between the two, from a practical point of view (in other words, when and why does one prefer one over the other)?

+",55,,55,,3/31/2018 16:43,07-09-2019 07:03,What is the difference between transmon and Xmon qubits?,,2,0,,,,CC BY-SA 3.0 +1530,2,,1522,3/31/2018 17:27,,9,,"

Take an $n$-mode simple harmonic oscillator (SHO) in a (Fock) space $\mathcal F = \bigotimes_k\mathcal H_k$, where $\mathcal H_k$ is the Hilbert space of a SHO on mode $k$.

+ +

This gives the usual annihilation operator $a_k$, which acts on a number state as $a_k\left|n\right> = \sqrt n\left|n-1\right>$ for $n\geq 1$ and $a_k\left|0\right> = 0$, and the creation operator on mode $k$, $a_k^\dagger$, acting on a number state as $a_k^\dagger\left|n\right> = \sqrt{n+1}\left|n+1\right>$.

+ +

The Hamiltonian of the SHO is $H = \omega\left(a_k^\dagger a_k+\frac 12\right)$ (in units where $\hbar = 1$).

+ +

We can then define the quadratures $$X_k = \frac{1}{\sqrt 2}\left(a_k + a_k^\dagger\right)$$ $$P_k = -\frac{i}{\sqrt 2}\left(a_k - a_k^\dagger\right)$$ which are observables. At this point there are various operations (Hamiltonians) that can be performed. The effect of such an operation on the quadratures can be found by using the time evolution of an operator $A$ as $\dot A = i\left[H, A\right]$. Applying these for time $t$ gives: $$X:P\mapsto P-t$$ $$P:X\mapsto X+t$$ $$\frac 12\left(X^2 + P^2\right): X\mapsto \cos t X - \sin t P,\, P\mapsto \cos t P + \sin t X,$$ which is just the Hamiltonian of a SHO with $\omega = 1$ and gives a phase shift. $$\pm S = \pm\frac 12\left(XP+PX\right): X\mapsto e^{\pm t}X,\, P\mapsto e^{\mp t}P,$$ which is known as the squeezing operator, where $+S\,\left(-S\right)$ squeezes $P\,\left(X\right)$.

+ +
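
As a quick numerical check of the quadrature definitions, one can truncate the Fock space at $N$ levels and verify the canonical commutator $[X,P]=i$, which holds exactly on every level below the cutoff (a plain numpy sketch):

```python
import numpy as np

N = 20                                           # Fock-space truncation
a = np.diag(np.sqrt(np.arange(1, N)), k=1)       # a|n> = sqrt(n)|n-1>
adag = a.conj().T                                # creation operator

X = (a + adag) / np.sqrt(2)                      # quadratures as defined above
P = -1j * (a - adag) / np.sqrt(2)

comm = X @ P - P @ X
# [X, P] = i away from the truncation level:
print(np.allclose(comm[:-1, :-1], 1j * np.eye(N - 1)))
```

Only the very last diagonal entry of the commutator deviates, which is the standard artifact of truncating the infinite-dimensional oscillator.

+ +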

Any Hamiltonian of the form $aX+bP+c$ can be built by applying $X$ and $P$. Adding $S$ and $H$ allows for any quadratic Hamiltonian to be built. Further adding the (nonlinear) Kerr Hamiltonian $$\left(X^2 + P^2\right)^2$$ allows for any polynomial Hamiltonian to be created.

+ +

Finally, including the beamsplitter operation (on two modes $j$ and $k$) $$\pm B_{jk} = \pm\left(P_jX_k - X_jP_k\right): A_j\mapsto \cos tA_j + \sin tA_k,\, A_k\mapsto \cos tA_k - \sin tA_j$$ for $A_j = X_j, P_j$ and $A_k = X_k, P_k$, which acts as a beamsplitter on the two modes.

+ +

The above operations form the universal gate-set for continuous variable quantum computing. More details can be found in e.g. here

+ +

To implement these unitaries:

+ +

How each operation is applied is generally hinted at in its name: +Coupling a current acts as the displacement operator $D\left(\alpha\left(t\right)\right)$ where, for an electric field $\varepsilon$ and current $j$, $\alpha\left(t\right) = i\int_{t_0}^t\int j\left(r, t'\right)\cdot\varepsilon e^{-i\left(k\cdot r - \omega_kt'\right)} dr\, dt'$. The displacement operator shifts $X$ by the real part of $\alpha$ and $P$ by the imaginary part of $\alpha$.

+ +

A phase shift can be applied by simply letting the system evolve by itself, as the system is a harmonic oscillator. It can also be performed by using a physical phase shifter.

+ +

Squeezing is the hard part and is something that still needs to be improved experimentally. Such methods can be found e.g. here, and here is one experiment using a limited amount of squeezed light. One possible way of squeezing is to use a Kerr $\left(\chi^{\left(3\right)}\right)$ nonlinearity.

+ +

This same nonlinearity also allows for the Kerr Hamiltonian to be implemented.

+ +

The Beamsplitter operation is, unsurprisingly, performed using a beamsplitter.

+",23,,,,,3/31/2018 17:27,,,,0,,,,CC BY-SA 3.0 +1531,1,1600,,3/31/2018 18:19,,10,520,"

Using quantum control techniques it is possible to control quantum systems in a wide range of different scenarios (e.g. 0910.2350 and 1406.5260).

+ +

In particular, it was shown that using these techniques it is possible to implement gates like the (quantum) Toffoli gate (1501.04676) with good fidelities. +More precisely, they show that given the Toffoli gate $\newcommand{\on}[1]{\operatorname{#1}}\mathcal{U}_{\on{Toff}}$, defined as the C-CNOT gate +$$ +\newcommand{\ketbra}[2]{\lvert#1\rangle\langle#2\rvert} +\newcommand{\ket}[1]{\lvert#1\rangle} +\newcommand{\bra}[1]{\langle#1\rvert} +\mathcal{U}_{\on{Toff}}\equiv \ket{1}_1\!\bra{1}\otimes \on{CNOT} + + \ket{0}_1\!\bra{0}\otimes I, +$$ +and a time-dependent Hamiltonian $H(t)$ containing a specific set of interactions, one can find a set of (time-dependent) parameters of $H(t)$ such that +$$ +\mathcal T \exp\left(-i \int_0^\Theta H(\tau)d\tau\right) \simeq \mathcal U_{\on{Toff}}. +$$

+ +
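
For reference, a projector decomposition of this kind can be built explicitly in a few lines of numpy. The sketch below uses the standard convention in which the inner CNOT fires when the first qubit is $|1\rangle$:

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
P0, P1 = np.diag([1, 0]), np.diag([0, 1])        # |0><0| and |1><1|

CNOT = np.kron(P0, I2) + np.kron(P1, X)
TOFF = np.kron(P1, CNOT) + np.kron(P0, np.eye(4))

# The result is the 8x8 permutation that swaps |110> and |111>:
expected = np.eye(8)
expected[[6, 7]] = expected[[7, 6]]
print(np.array_equal(TOFF, expected))
```

This is the target unitary that the time-ordered exponential of $H(t)$ is asked to approximate.

+ +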

Are there known results on the universality of such an approach? +In other words, do the tools provided by quantum control theory allow one to say whether, given a set of constraints on the allowed Hamiltonian parameters, a given target gate can be realized? (1)

+ +

More precisely, the problem is the following: fix a target gate $\mathcal U$ acting over a set of qubits (or more generally qudits), and a parametrised Hamiltonian of the form $H(t)=\sum_k c_k(t) \sigma_k$, where $\{\sigma_k\}_k$ is a fixed set of (Hermitian) operators, and $c_k(t)$ are time-dependent parameters to be determined. +Is there a way to tell whether there are coefficients $\{c_k(t)\}_k$ such that +$$ +\mathcal T\exp\left(-i\int_0^{\Theta} H(\tau)d\tau\right) \stackrel{?}{=} \mathcal U. +$$

+ +
+ +

(1) Note that I here talk of quantum control only because that is the term used in the paper. If this is not the most suitable term to use to refer to this kind of problems please let me know.

+ +

Moreover, note also that the problem solved in the paper is slightly different than the one I stated here. In particular, the Hamiltonian they consider actually acts in the space of three four-dimensional qudits, and the Toffoli is only implemented as an effective dynamics in the lower levels of each ququart. +I'm also ok with results of this sort of course.

+",55,,55,,04-03-2018 11:08,4/23/2018 11:10,Does quantum control allow to implement any gate?,,2,9,,,,CC BY-SA 3.0 +1532,2,,1208,04-01-2018 03:31,,5,,"

Much of the work done so far with quantum computers has been focused on solving combinatorial optimization problems. Both D-Wave style Quantum Annealers and the more recent Gate Model machines from Rigetti, IBM, and Google have been solving combinatorial optimization problems. One promising approach to connecting machine learning and quantum computing involves finding optimization problems within standard machine learning tasks.

+ +

For example the recent Rigetti paper +Unsupervised Machine Learning on a Hybrid Quantum Computer + essentially recasts the unsupervised machine learning problem of clustering data into two groups, also known as 2-means clustering, into the combinatorial optimization problem of MaxCut. The folks at Rigetti then solve the MaxCut problem with the Quantum Approximate Optimization Algorithm (QAOA).

+ +

I would expect to see more of this style of work in the future, especially given the natural connections between optimization and machine learning.

+",1658,,,,,04-01-2018 03:31,,,,0,,,,CC BY-SA 3.0 +1533,2,,1437,04-01-2018 03:49,,3,,"

Flux Noise can be a major source of dephasing for superconducting qubits. If you look at the history of the field this makes complete sense. The ideas behind Superconducting Qubits can be traced to the SQUID, which itself was designed to be a very accurate magnetometer. So in general superconducting qubits tend to be quite sensitive to magnetic fields.

+ +

One challenge is to balance this sensitivity to magnetic noise with the need to manipulate the qubits. Addressing this challenge is the subject of the Rigetti paper on the Charge- and Flux-Insensitive Tunable Superconducting Qubit +.

+",1658,,1658,,04-02-2018 15:18,04-02-2018 15:18,,,,0,,,,CC BY-SA 3.0 +1534,1,1539,,04-01-2018 03:53,,13,2427,"

Edward Farhi's paper on the Quantum Approximate Optimization Algorithm +introduces a way for gate model quantum computers to solve combinatorial optimization algorithms. However, D-Wave style quantum annealers have focused on combinatorial optimization algorithms for some time now. What is gained by using QAOA on a gate model quantum computer instead of using a Quantum Annealer?

+",1658,,55,,9/28/2019 19:12,9/28/2019 19:12,What is the difference between QAOA and Quantum Annealing?,,1,0,,,,CC BY-SA 3.0 +1535,2,,1485,04-01-2018 03:56,,5,,"

Gaussian Processes are a key component of the model-building procedure at the core of Bayesian Optimization. Therefore speeding up the training of Gaussian processes directly enhances Bayesian Optimization. The recent paper by Zhao et al. on Quantum algorithms for training Gaussian Processes + does exactly this.

+",1658,,,,,04-01-2018 03:56,,,,1,,,,CC BY-SA 3.0 +1536,1,,,04-01-2018 04:26,,4,252,"

Given the theoretically infinite number of quantum states that a qubit can express, is there any practical limit to the processing density in any given quantum processor, as compared to the absolute limits imposed in a binary system such as our current binary processors?

+ +

By this I mean: with current binary processors, each gate can only have one of two states, and yet in quantum theory it appears as though every single gate can have an infinite number of states. What sort of practical limit, if any, would we see in quantum processing? And given this infinite-state capability, wouldn't we then be in the position that largely every single quantum processor could be expressed as a single qubit, with every qubit in effect acting as its own quantum processor with infinite states, standing alone from other parallel qubits?

+",1663,,26,,12/23/2018 14:12,12/23/2018 14:12,Processing density capabilities in a quantum processor,,2,0,,,,CC BY-SA 3.0 +1537,2,,1529,04-01-2018 04:34,,15,,"

The transmon is a Josephson junction and capacitor in parallel. +Originally, transmons were differential circuits, i.e. two transmons on the same chip were not galvanically connected in any way. +In other words, transmons didn't share a ground reference. +Furthermore, in the early days, transmons were almost always embedded into the middle of a harmonic resonator. +The resonator, often referred to as a ""bus resonator"", was used to couple multiple qubits together, i.e. qubits embedded in the same resonator could couple to each other.

+ +

The important differences with the xmon were that

+ +
    +
  1. The xmon was grounded. Each xmon on a chip connects to a common ground plane with a nominally fixed voltage.

  2. +
  3. The xmon was not embedded into a resonator. Instead of coupling through a resonator, each xmon couples through a direct capacitance to each of its neighbors.

  4. +
+ +

Nowadays, several research groups build qubits without the bus resonator and call them ""transmons"".

+ +
+ +

Much more could be written. If someone leaves a comment asking for more details on any particular aspect of the difference between transmon and xmon, I will write more.

+ +

History of the name

+ +

Rob Schoelkopf told me the story of where the name ""transmon"" came from while we were at the Les Houches summer school on ""Quantum Machines"". +The charge qubit suffered from low-frequency noisy charge fluctuations that led to dephasing. +To get around the problem, Professor Schoelkopf thought to shunt the junction with a bit of transmission line. +The line would be a short circuit at dc, allowing low-frequency charge to equalize, but it would be a high impedance at the qubit's resonance frequency, allowing the resonance to remain. +The combination of a transmission line with the junction plasmon mode led to the name ""transmon"".

+ +

In the end, it turned out that a capacitor was simpler than a transmission line and served a purpose equivalent to the transmission line, so the qubit wound up being a capacitor in parallel with the junction. +However, the name ""transmon"" had already stuck (or maybe ""capmon"" just didn't sound as good).

+",32,,32,,09-08-2018 06:21,09-08-2018 06:21,,,,4,,,,CC BY-SA 4.0 +1538,2,,1536,04-01-2018 05:01,,0,,"
+

By this I mean: with current binary processors, each gate can only have + one of two states, and yet in quantum theory it appears as though + every single gate can have an infinite number of states. What sort + of practical limit, if any, would we see in quantum processing? And + given this infinite-state capability, wouldn't we then be in the + position that largely every single quantum processor could be + expressed as a single qubit, with every qubit in effect acting as its own + quantum processor with infinite states, standing alone from other parallel + qubits?

+
+ +

Well, it is not the gates which have an infinite number of states. It is the qubits themselves. I see a lot of fundamental misconceptions in the question. +A qubit is a linear superposition of two basis states, like $a|0\rangle + b |1\rangle$ (where $a,b$ are complex numbers with $|a|^2+|b|^2=1$). Also note that $a|0\rangle+b|1\rangle$ is an element of a 2-dimensional complex vector space.

+ +

Yes, they can exist in an infinite number of superposition states; however, when ""measured"" in the $|0\rangle,|1\rangle$ basis, you get either the state $|0\rangle$ (with probability $|a|^2$) or the state $|1\rangle$ (with probability $|b|^2$). Hence, the information about a qubit's initial superposition state is lost during measurement.

+ +

Now consider a 2-qubit system. Unlike that of a 1-qubit system, its state lies in a 4-dimensional complex vector space (i.e. $\Bbb{C}^2\otimes\Bbb{C}^2$), and the superposition state looks something like $c|00\rangle+d|01\rangle+e|10\rangle+f|11\rangle$ (where $c,d,e,f\in\Bbb{C}$). A 4-dimensional complex vector space is ""larger"" than a 2-dimensional one, so it cannot be replicated by a single qubit. Also, note that there are now 4 possible basis states to which the system's state can collapse upon measuring the first and second qubits in the appropriate basis.

+ +
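
Numerically, the measurement statistics just described look like this (a small numpy sketch; the amplitudes are an arbitrary normalized choice of mine):

```python
import numpy as np

# c, d, e, f chosen so that |c|^2 + |d|^2 + |e|^2 + |f|^2 = 1:
amps = np.array([0.5, 0.5j, -0.5, 0.5])
probs = np.abs(amps) ** 2

for basis, p in zip(("00", "01", "10", "11"), probs):
    print(f"P(collapse to |{basis}>) = {p:.2f}")
print(np.isclose(probs.sum(), 1.0))
```

Each of the four basis states is reached with probability given by the squared modulus of its amplitude, and those probabilities always sum to 1.

+ +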

Hence, the answer is an emphatic no to your question ""given this infinite state capability wouldn't we then be in the position that largely every single quantum processor could be expressed as a single qubit with every qubit in effect acting as its own quantum processor with infinite states stand alone from other parallel qubits?""

+",26,,,,,04-01-2018 05:01,,,,0,,,,CC BY-SA 3.0 +1539,2,,1534,04-01-2018 05:17,,6,,"

One of the advantages, as stated in the paper you linked, is that with QAOA you can increase the precision arbitrarily, whereas QA will only find the solution with probability 1 as $T \to \infty$, which is impractical. In addition, if $T$ is too long you are likely not to find the solution, as the success probability is not monotonic in $T$. I believe an example of this can be found in a fair-sampling paper by Matsuda et al. Figure 4 shows that for large $\tau$, using quantum annealing on a 5-qubit system, you are only likely to find 2 of the 3 possible states.

+ +

[arXiv:0808.0365v3] Ground-state statistics from annealing algorithms: Quantum vs classical approaches - Matsuda et al.

+",54,,54,,04-01-2018 15:32,04-01-2018 15:32,,,,2,,,,CC BY-SA 3.0 +1541,2,,1536,04-01-2018 09:19,,4,,"

The fact that a qubit has infinitely many allowed states can make it seem as though we could fit more than a bit inside it. However, no matter how fancy our proposed encoding, Holevo's bound shows us that we can never get more than one bit out per qubit. This is one effect that bottlenecks how much information we can pack into a quantum processor.

+ +
+ +
+

given this infinite state capability wouldn't we then be in the position that largely every single quantum processor could be expressed as a single qbit with every qbit in effect acting as its own quantum processor

+
+ +

Suppose $B$ is some bit string of arbitrary length. We want to use this as input for some program $P$ to get the corresponding output bit string $P(B)$. It is true that we could associate $B$ with some state within the Bloch sphere, and then associate $P(B)$ with some other state. Then we could construct a unitary $U_P$ which rotates between them to reproduce the effect of the program.

+ +

As a simple example, we could use $| B \rangle = | 0 \rangle$, $| P(B) \rangle = | 1 \rangle$, and $U_P = Y$.

+ +

If we ignore the Holevo bound for a moment, it might seem as if we've reproduced an arbitrarily large computation within a single qubit. But remember that programs aren't designed for only a single input, but for many possible inputs. And they must be able to guide each to their corresponding output.

+ +

So let's take another possible input $B'$ and its corresponding output $P(B')$, and find some states within the Bloch sphere to encode them. Now our $U_P$ must simultaneously have the effect

+ +

$$U_P | B \rangle = | P(B) \rangle$$ +$$U_P | B' \rangle = | P(B') \rangle$$

+ +

We could do this perhaps with $| B' \rangle = | + \rangle$, $| P(B') \rangle = | - \rangle$, and still use $U_P = Y$ as before.
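
+ +

Both constraints can be checked with a quick numpy sketch (my own; the phase-insensitive comparison is the point):

```python
import numpy as np

Y = np.array([[0, -1j], [1j, 0]])
ket0, ket1 = np.array([1, 0]), np.array([0, 1])
plus = (ket0 + ket1) / np.sqrt(2)
minus = (ket0 - ket1) / np.sqrt(2)

def same_up_to_phase(u, v):
    # |<v|u>| = 1 for unit vectors means u = (global phase) * v
    return np.isclose(abs(np.vdot(v, u)), 1.0)

print(same_up_to_phase(Y @ ket0, ket1))   # True: Y|0> = i|1>
print(same_up_to_phase(Y @ plus, minus))  # True: Y|+> = -i|->
```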

+ +

Now what if we add another output? What states will we use? Will $Y$ still be a good choice to implement the program? Should we change the states we've chosen already to fit the new inputs and outputs in?

+ +

The more possible inputs we add, the more of these constraints we get. Hopefully it is clear that it would become increasingly difficult to make it all work. For many programs, it would be impossible.

+ +
+ +

Let's also ignore this for a second, and suppose that a large problem that we want to solve could be squeezed into a single qubit. We are very good at simulating single qubits with standard computers. Even if numerical accuracy starts to cause problems, we can just add some more bits to make everything right again. So if we can do everything efficiently with a single qubit, we could also do it efficiently with a classical computer. Either that, or the 'compilation' process is so hard that all our computational complexity is moved to that task instead. In either case, there is no heavy lifting that we would actually need the qubit for.
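
+ +

To make the point about classical simulability concrete, here is a minimal sketch (my own; the gate sequence is an arbitrary example) of a complete single-qubit simulator:

```python
import numpy as np

# A single qubit is just a length-2 complex vector; gates are 2x2 matrices.
state = np.array([1, 0], dtype=complex)           # |0>

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
T = np.diag([1, np.exp(1j * np.pi / 4)])

for gate in (H, T, H):                            # any single-qubit circuit
    state = gate @ state

print(np.abs(state) ** 2)                         # measurement statistics
```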

+ +
+ +

So though a single qubit contains a lot of possible states, it is nevertheless too simple to outperform a classical computer. We can't get enough information out, its state space is not big enough to guide large computations from input to output, and it can be classically simulated. Many qubits are needed to overcome all of these issues.

+",409,,,,,04-01-2018 09:19,,,,0,,,,CC BY-SA 3.0 +1542,1,,,04-01-2018 09:53,,11,670,"

I want to understand what quantum entanglement is and what role does it play in quantum error correction.

+ +

NOTE: As per the suggestions of @JamesWootton and @NielDeBeaudrap, I have asked a separate question for the classical analogy here.

+",1678,,55,,09-12-2019 11:50,9/29/2019 15:33,"What is quantum entanglement, and what role does it play in quantum error correction?",,5,4,,,,CC BY-SA 4.0 +1543,2,,1542,04-01-2018 10:11,,4,,"

There is no classical equivalent to entanglement. Entanglement is perhaps best understood using Dirac (bra-ket) notation.

+ +

Each qubit can be in the (ket) state $\left|0\right>$ or in the state $\left|1\right>$ or in a superposition $\alpha\left|0\right> + \beta \left|1\right>$ where $\alpha$ and $\beta$ are complex numbers that fulfill $|\alpha|^2 + |\beta|^2 = 1$. If you have two qubits, the basis states of the 2-qubit system are $\left|0\right> \otimes \left|0\right>$, $\left|0\right> \otimes \left|1\right>$, $\left|1\right> \otimes \left|0\right>$, and $\left|1\right> \otimes \left|1\right>$. To simplify the notation, physicists often write these as $\left|00\right>$, $\left|01\right>$, $\left|10\right>$, and $\left|11\right>$. So being in state $\left|01\right>$ means that the first qubit is in state $\left|0\right>$ and the second qubit is in state $\left|1\right>$.

+ +

Now consider a superposition of the kind $\alpha \left|01\right> + \beta \left|10\right>$. This means that the first qubit is in state $\left|0\right>$ with probability $|\alpha|^2$ and in state $\left|1\right>$ otherwise, whilst the second qubit is always in the opposite state to the first: the two particles are entangled.
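
+ +

As a small numpy sketch (my own, with an arbitrary choice of $\alpha, \beta$), sampling this state shows the systematic disagreement:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, beta = 1 / np.sqrt(2), 1j / np.sqrt(2)

# alpha|01> + beta|10> in the ordered basis |00>,|01>,|10>,|11>
psi = np.array([0, alpha, beta, 0])
probs = np.abs(psi) ** 2

samples = [rng.choice(4, p=probs) for _ in range(10)]
pairs = [(s >> 1, s & 1) for s in samples]   # (first qubit, second qubit)
print(pairs)   # only (0, 1) and (1, 0) ever occur
```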

+ +

It is unimportant that in this example, the entangled qubits happen to be in opposite states: They might as well be in the same state and still be entangled. What matters is that their states are not independent of each other. This has caused major headaches for physicists because it means that qubits (or the particles carrying them) cannot simultaneously have strictly local properties and be governed by a concept called realism (i.e., carry their states as intrinsic properties). Einstein famously called the resulting paradox (if you still assume locality and realism) ""spooky action at a distance.""

+ +

Entanglement does not play a special role in quantum error correction: Error correction must work for every state in the computational basis (which does not have entanglement). Then it automatically works also for superpositions of these states (which may be entangled states).

+",,user1039,,,,04-01-2018 10:11,,,,8,,,,CC BY-SA 3.0 +1544,2,,1542,04-01-2018 11:01,,5,,"

Entanglement is a natural part of quantum information and quantum computation. If it isn't present --- if you try to do things in such a way that entanglement does not arise --- then you get no benefit from quantum computation. And if a quantum computer is doing something interesting, it will produce a lot of entanglement, at least as a side-effect.

+ +

However, this does not mean that entanglement is ""what makes quantum computers go"". Entanglement is like the spinning gears of a machine: nothing happens if they aren't turning, but that doesn't mean that having those gears spin quickly is enough to make the machine do what you want. (Entanglement is a primitive resource in this way for communication, but not for computation as far as anyone has seen.)

+ +

This is as true for quantum error correction as it is for computation. Like all forms of error correction, quantum error correction works by distributing information around a larger system, in particular in the correlations of certain measurable pieces of information. Entanglement is just the usual way in which quantum systems become correlated, so it should come as no surprise that a good quantum error correction code then involves a lot of entanglement. But that doesn't mean that trying to ""pump your system full of entanglement"", like some sort of helium balloon, is something which is useful or meaningful to do to protect quantum information.

+ +

While quantum error correction is sometimes described vaguely in terms of entanglement, more important is how it involves parity checks using different 'observables'. The most important tool for describing this is the stabiliser formalism. The stabiliser formalism can be used to describe some states with large amounts of entanglement, but more importantly it allows you to reason about multi-qubit properties (""observables"") fairly easily. From that perspective, one can come to understand that quantum error correction is much more closely related to low-energy many-body physics of spin-Hamiltonians, than just entanglement in general.

+",124,,,,,04-01-2018 11:01,,,,0,,,,CC BY-SA 3.0 +1545,2,,1542,04-01-2018 11:15,,6,,"

Classical correlations between variables occur when the variables appear random, but whose values are found to systematically agree (or disagree) in some way. However, there will always be someone (or something) that 'knows' exactly what the variables are doing in any given case.

+ +

Entanglement between variables is the same, except for the last part. The randomness is truly random. Random outcomes are completely undecided until the time of measurement. But somehow the variables, though they may be separated by galaxies, still know to agree.

+ +
+ +

So what does this mean for error correction? Let's start by thinking about error correction for a simple bit.

+ +

When storing a classical bit, the kinds of errors you need to worry about are things like bit flips and erasures. So something might make your 0 become a 1, or vice-versa. Or your bit might wander off somewhere.

+ +

To protect the information, we can ensure that our logical bits (the actual information we want to store) are not just concentrated on single physical bits. Instead, we spread it out. So we could use a simple repetition encoding, for example, where we copy our information across many physical bits. This lets us still get our information out, even if some of the physical bits have failed.
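
+ +

The repetition idea can be sketched in a few lines of Python (my own; the flip rate 0.2 and the 5 repetitions are arbitrary choices):

```python
import random

def encode(bit, n=5):
    return [bit] * n                        # spread the logical bit out

def noisy(bits, p=0.2):
    return [b ^ (random.random() < p) for b in bits]   # independent bit flips

def decode(bits):
    return int(sum(bits) > len(bits) / 2)   # majority vote

random.seed(1)
trials = 10_000
failures = sum(decode(noisy(encode(1))) != 1 for _ in range(trials))
print(failures / trials)   # well below the raw flip rate of 0.2
```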

+ +

This is the basic job of error correction: we spread our information out, to make it hard for errors to mess it up.

+ +

For qubits, there are more kinds of error to worry about. For example, you may know that qubits can be in superposition states, and that measurements change these. Unwanted measurements are therefore another source of noise, caused by the environment interacting with (and so, in some sense, 'looking at') our qubits. This type of noise is known as decoherence.

+ +

So how does this affect things? Suppose we use the repetition encoding with qubits. So we replace the $|0\rangle$ in our desired logical qubit state with $|000...000\rangle$, repeated across many physical qubits, and replace the $|1\rangle$ with $|111...111\rangle$. This again protects against bit flips and erasures, but it makes it even easier for stray measurements. Now the environment can measure whether we have $|0\rangle$ or $|1\rangle$ by looking at any one of many qubits. This will make the effect of decoherence much stronger, which is not what we wanted at all!
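
+ +

This leak can be seen directly in a short numpy sketch (my own, with arbitrary amplitudes):

```python
import numpy as np

alpha, beta = 0.6, 0.8
psi = np.zeros(8)               # basis |000>, |001>, ..., |111>
psi[0], psi[7] = alpha, beta    # the encoded state alpha|000> + beta|111>

# The environment looks only at the *first* physical qubit (the msb):
p_first_is_0 = sum(abs(psi[i]) ** 2 for i in range(8) if not (i >> 2) & 1)
print(p_first_is_0)             # equals |alpha|^2: one qubit leaks the logical bit

# ...and afterwards the state has collapsed to |000> or |111>,
# so the superposition (and any phase between alpha and beta) is gone.
```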

+ +

To fix this, we need to make it hard for decoherence to disturb our logical qubit information, just as we made it hard for bit flips and erasures. To do this, we have to make it harder to measure our logical qubit. Not so hard that we can't do it whenever we want to, of course, but too hard for the environment to do easily. This means ensuring that measuring a single physical qubit should tell us nothing about the logical qubit. In fact, we must make it so that a whole bunch of qubits need to be measured and their results compared to extract any information about the logical qubit. In some sense, it is a form of encryption. You need enough pieces of the puzzle to have any idea what the picture is.

+ +

We could try to do this classically. Information could be spread out in complex correlations among many bits. By looking at enough of the bits and analyzing the correlations, we can extract some information about the logical bit.

+ +

But this would not be the only way to get this information. As I mentioned before, classically there is always that someone or something that already knows everything. It doesn't matter whether it is a person, or just the patterns in the air caused when the encryption was carried out. Either way, the information exists outside of our encoding, and this is essentially an environment that knows everything. Its very existence means that decoherence has occurred to an irreparable degree.

+ +

So that's why we need entanglement. With it, we can hide the information away using correlations in the true and unknowable random outcomes of quantum variables.

+",409,,,,,04-01-2018 11:15,,,,0,,,,CC BY-SA 3.0 +1546,2,,1508,04-01-2018 11:53,,4,,"

The quantum Turing machine can move into a superposition of moving left and right. This is different from the classical Turing machine which can only move either left or right.

+",,user1039,,,,04-01-2018 11:53,,,,0,,,,CC BY-SA 3.0 +1547,2,,1531,04-01-2018 12:17,,5,,"

Quantum control does not necessarily allow implementing just any gate. Imagine your control of the system is a time-dependent energy. That corresponds to the Hamiltonian $\hat{H}(t) = c(t) \hat{Z}$. Then you can only ever rotate about one axis of the Bloch sphere and your only choice is how fast to rotate when. This is obviously insufficient to even generate arbitrary single-qubit gates because then you would need to be able to effect rotations about any (arbitrary) axis.

+ +

I cannot answer the second part of your question, about known results for universality. Note, however, that I picked a very special case to illustrate that simple quantum control is not enough. Imagine I had picked the Hamiltonian $\hat{H}(t) = c(t) \hat{X} + \frac{E_0}{2} \hat{Z}$. This is a constant rotation about one axis (due to an energy difference $E_0$ between the two states of the single qubit involved) plus a fully controllable rotation about an orthogonal axis. Since you can produce an arbitrary rotation with a suitable combination of such rotations, this is universal (for a single-qubit system). This is my attempt at illustrating that having some control, yet not universal control, may be the special case rather than the rule.
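
+ +

Both claims can be checked numerically; the following numpy sketch (my own; the identity $H = e^{i\pi/2}R_z(\pi/2)R_x(\pi/2)R_z(\pi/2)$ is a standard Euler-angle decomposition) illustrates them:

```python
import numpy as np

def Rz(t):
    return np.diag([np.exp(-1j * t / 2), np.exp(1j * t / 2)])

def Rx(t):
    return np.array([[np.cos(t / 2), -1j * np.sin(t / 2)],
                     [-1j * np.sin(t / 2), np.cos(t / 2)]])

# Products of Rz are always diagonal: they can never flip |0> into |1>.
print(Rz(0.3) @ Rz(1.1))

# But Rz and Rx together generate arbitrary rotations, e.g. the Hadamard
# (up to the physically irrelevant global phase i):
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
U = 1j * Rz(np.pi / 2) @ Rx(np.pi / 2) @ Rz(np.pi / 2)
print(np.allclose(U, H))   # True
```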

+",,user1039,,,,04-01-2018 12:17,,,,1,,,,CC BY-SA 3.0 +1548,1,1555,,04-01-2018 12:22,,9,561,"

Is there an intuitive explanation why the surface code fares so much better than older quantum error correction codes in terms of its high error threshold, with thresholds of up to a few percent rather than some ppm? If so, what is it?

+ +

I am particularly interested in having it clarified if such a comparison is a fair (apples to apples) comparison. I understand that the results for older quantum error correction are usually analytic results whilst those for surface codes tend to be numeric. Could it be that the analytic solutions indeed take into account the worst possible (coherent) errors whilst numerical solutions perhaps do not capture the worst possible errors because they can only explore a subset of all possible errors?

+",,user1039,,user1039,04-01-2018 12:30,04-01-2018 16:14,Why does the surface (quantum error correction) code have such a high threshold for errors?,,2,4,,,,CC BY-SA 3.0 +1549,1,1551,,04-01-2018 12:27,,12,716,"

I know that qubits are represented by quantum particles (for example photons) and that their state is given by one property (for example spin).

+ +

My question is about the quantum memory: how are the qubits stored in a quantum computer. I suppose we require a kind of black box for Heisenberg's uncertainty principle to work. If I understand this correctly this principle is relevant for the superposition of the qubit.

+ +

How is this kind of black box implemented in real quantum computers?

+",11,,1847,,4/24/2018 15:28,4/24/2018 15:28,How to store qubits while preserving Heisenberg's uncertainty principle?,,2,0,,,,CC BY-SA 3.0 +1550,2,,1548,04-01-2018 12:27,,5,,"

The surface error correction code always uses eigenstates of just a few stabilizers. By changing operators rather than states, it never needs to apply any state correction to the quantum states (but only to the classically controlled operators or measurements). Hence it is very simple, only repeatedly measuring syndromes. Since the syndromes do not even need actual computations (one only ever measures $\hat{Z}$ and $\hat{X}$), there is very little opportunity for errors to accumulate. Hence the error threshold is high.

+ +

I cannot answer if published error thresholds are an apple-to-apple comparison to those published for more traditional quantum error correction schemes.

+",,user1039,,,,04-01-2018 12:27,,,,1,,,,CC BY-SA 3.0 +1551,2,,1549,04-01-2018 12:36,,11,,"

What you call a black box is simply isolating the quantum system that stores (or represents) your qubits from the environment. This can be done in several ways depending on your physical realization. For example, in an ion trap based quantum computer, one uses states of a single ion to represent a qubit, and isolates that from the environment by levitating it in empty space (using an ion trap) and by shielding it from the kinds of laser radiation or other light sources that affect the chosen states.

+",,user1039,,,,04-01-2018 12:36,,,,4,,,,CC BY-SA 3.0 +1552,1,,,04-01-2018 12:38,,5,602,"

The Quantum Algorithm Zoo includes a host of algorithms for which Quantum Computing offers speedups (exponential, polynomial, etc). However, those speedups are based on asymptotic computational complexity (Big-O complexity).

+ +

For a realistic implementation on a quantum computer (or even a simulator), the algorithm might require other add-ons to get a satisfiable result, e.g., multiple quantum state tomography trials, or probabilistic cloning. I am not considering an ideal quantum computer; thus, overheads of physical-to-logical qubit mapping for Quantum Error Correction, overheads of nearest-neighbour mapping, or experimental environmental errors are not considered. I am only considering the effect of projective measurement and no-cloning.

+ +

How can I compare a quantum algorithm taking into account such factors? The overheads might be polynomial, linear, or even constant, so asymptotically it will outperform the corresponding classical algorithm. But, asymptotically means, for a large enough problem size. I am interested in determining if that cross-over point (quantum supremacy problem size) is realistic for my case.

+ +

The specific algorithm I am currently working with is an improvement of Grover's search as proposed by Biham et al., where the initial amplitude distribution is not uniform, and there are multiple solutions.

+",1153,,,,,04-01-2018 13:14,How to compare a quantum algorithm with its classical version?,,1,4,,04-02-2018 08:04,,CC BY-SA 3.0 +1553,2,,1552,04-01-2018 13:14,,0,,"
+

I am only considering the effect of projective measurement and no-cloning.

+
+ +

Consider that a (long) realistic quantum computation would have to involve quantum error correction, which only works if the errors are rather limited: your projective measurement will be rather close (to within 1% or so) to ideal, incorporating no errors other than quantum projection noise. This is usually irrelevant because a typical algorithm will not require you to do state tomography but instead distill a binary result into the qubits to be measured in the end. If it did involve state tomography, the (asymptotic) overhead would enter the big-O notation.

+ +

There is indeed some overhead from the no-cloning theorem: the (quantum part of the) result of e.g. Shor's algorithm must be read more than once to form the greatest common divisor. Such an overhead obviously depends on the algorithm; in the case of Shor's algorithm, it is small (typically 2 to 3, depending on whether you are willing to take care of small factors classically, which is simple). A similarly (but not quite as) small overhead occurs if you want to generalize Grover's algorithm to the case where an unknown number of solutions exist.

+",,user1039,,,,04-01-2018 13:14,,,,1,,,,CC BY-SA 3.0 +1554,1,,,04-01-2018 14:34,,4,158,"

I want to know if there is any classical analogy for understanding quantum entanglement.

+",1678,,,,,04-01-2018 14:34,Is there any classical analogy for quantum entanglement?,,0,6,,04-01-2018 16:48,,CC BY-SA 3.0 +1555,2,,1548,04-01-2018 16:08,,6,,"

Thresholds are often calculated by treating $X$ and $Z$ errors as occurring with the same probability. Any code that corrects one more effectively than the other will then be at a disadvantage: whichever it corrects least effectively will be the bottleneck that determines the threshold. The surface code avoids this by treating these errors in an identical way.

+ +

For fault-tolerance thresholds, the fidelity of stabilizer measurements is also taken into account. This includes errors that occur in the required entangling gates. Stabilizers that act on more qubits, and so require more entangling gates, will have less reliable measurements. But the surface code has essentially all stabilizers the same size: they all act on four qubits. And that size is relatively low. They are also quasilocal, so there is no error overhead in shunting qubits around to be measured.

+ +

The combination of these effects means that it competes well in terms of threshold, and should be moderately easy to implement.

+ +

It's true that analytic results are always worse, because they aim to establish lower bounds rather than exact values. They came more from the era when it was important to determine whether fault-tolerant quantum computation was actually possible, even in principle. Just finding a non-zero threshold was a big deal. Now we focus more on things that can be done on real devices, and how they might perform, and seek thresholds as high as we can for practical reasons.

+ +

The surface code nevertheless wins in a typical apples-to-apples comparison of numerically calculated thresholds (I'll update with a reference when I find one). But it should be noted that the choice of noise model can be used to move the goal posts in favour of your favourite code.

+",409,,409,,04-01-2018 16:14,04-01-2018 16:14,,,,0,,,,CC BY-SA 3.0 +1558,1,1559,,04-01-2018 21:25,,6,61,"

You can make a natural correspondence between a quantum state vector and a classical probability vector, and between a quantum unitary operator and a classical stochastic matrix. There is also a correspondence between the quantum annealing algorithm and the classical simulated annealing algorithm. I am wondering whether it is possible to write down simulated annealing in the language of probability vectors and stochastic matrices, and then see what additional power is obtained by changing to the quantum counterparts.
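
+ +

To make the correspondence concrete, here is a small numpy sketch (my own; the matrices are arbitrary examples). Probability vectors evolve under stochastic matrices, which preserve the 1-norm, while their quantum counterparts evolve under unitaries, which preserve the 2-norm:

```python
import numpy as np

p = np.array([0.2, 0.5, 0.3])              # classical probability vector
S = np.array([[0.9, 0.1, 0.0],
              [0.1, 0.8, 0.3],
              [0.0, 0.1, 0.7]])            # columns sum to 1 (stochastic)
print((S @ p).sum())                       # still a probability vector

psi = np.sqrt(p).astype(complex)           # quantum amplitudes, 2-norm 1
theta = 0.4
U = np.array([[np.cos(theta), -np.sin(theta), 0],
              [np.sin(theta),  np.cos(theta), 0],
              [0, 0, 1]])                  # unitary (here real orthogonal)
print(np.linalg.norm(U @ psi))             # still 1
```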

+ +

More generally, I would like to bridge the language gaps between probabilistic algorithms and quantum algorithms, and I am wondering whether recasting probabilistic algorithms in terms of probability vectors and stochastic matrices has been tried before.

+",1658,,,,,04-03-2018 06:25,Assessing speed-up via Quantum-Stochastic correspondence,,1,0,,,,CC BY-SA 3.0 +1559,2,,1558,04-01-2018 23:59,,6,,"

Yes. This has been done by Morita and Nishimori in their 2008 publication, ""Mathematical Foundations of Quantum Annealing.""

+ +

https://arxiv.org/abs/0806.1859

+ +

In Section 5 they derive the convergence conditions from path integral Monte Carlo and Green function Monte Carlo methods. To quote:

+ +
+

In Sec. 5 we have derived the convergence condition of QA implemented by Quantum Monte Carlo simulations of path-integral and Green function methods. These approaches bear important practical significance because only stochastic methods allow us to treat practical large-size problems on the classical computer. A highly non-trivial result in this section is that the convergence condition for the stochastic methods is essentially the same power-law decrease of the transverse-field term as in the Schrödinger dynamics of Sec. 2. This is surprising since the Monte Carlo (stochastic) dynamics is completely different from the Schrödinger dynamics. Something deep may lie behind this coincidence and it should be an interesting target of future studies.

+
+",54,,26,,04-03-2018 06:25,04-03-2018 06:25,,,,0,,,,CC BY-SA 3.0 +1560,1,1564,,04-02-2018 00:44,,9,500,"

It is well-known that by utilizing quantum parallelism we can calculate a function $f(x)$ for many different values of $x$ simultaneously. However, some clever manipulation is needed to extract the information about each value, e.g. as in Deutsch's algorithm.

+ +

Consider the reverse case: can we use quantum parallelism to calculate many functions (say $f(x),g(x),\dots$) simultaneously for a single value $x_0$?

+",1341,,,,,04-02-2018 22:38,Can we use quantum parallelism to calculate many functions at once?,,4,3,,,,CC BY-SA 3.0 +1562,2,,1560,04-02-2018 08:46,,2,,"

Yes, one can. The trick is to define (and implement) a new function $f_\mathrm{all}(y,x)$ that evaluates to $f(x)$ if $y=0$, to $g(x)$ if $y=1$, etc. Then one prepares the qubits representing $y$ in the desired superposition and sets $x$ to $x_0$.
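
+ +

As a toy numpy sketch of this trick (my own; $U_f=X$ and $U_g=Z$ are arbitrary stand-ins for the circuits computing $f$ and $g$):

```python
import numpy as np

Uf = np.array([[0, 1], [1, 0]])            # stands in for computing f
Ug = np.diag([1, -1])                      # stands in for computing g

# f_all: apply Uf when the selector qubit y is 0 and Ug when y is 1
U_all = np.block([[Uf, np.zeros((2, 2))],
                  [np.zeros((2, 2)), Ug]])

x0 = np.array([1, 0])                      # fixed input |x0> = |0>
y = np.array([1, 1]) / np.sqrt(2)          # selector in superposition

out = U_all @ np.kron(y, x0)
print(out)   # (|0>Uf|x0> + |1>Ug|x0>)/sqrt(2): both branches at once
```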

+",,user1039,,,,04-02-2018 08:46,,,,0,,,,CC BY-SA 3.0 +1563,2,,1560,04-02-2018 08:56,,5,,"

The functions $f,g,\ldots $ that you want to evaluate in different computational branches must, in order to be computable at all, be specifiable in some way (e.g. a sequence of classical logic gates). And the set $\{ f_1, f_2 , \ldots \}$ of the functions you wish to compute ought itself to be computable: for a given $t $, you must be able to compute a specification of how $f_t $ is to be computed on its argument. In effect: you must have a means of describing the functions $f_t $ as stored programs. (These are all necessary, even before we consider quantum computation, for the question of ""computing one/all of the functions $f_1, f_2, \ldots $ on an input $x_0$"" to be meaningful.)

+ +

Once you have a way of specifying functions as stored programs, you're basically done: a program is essentially another kind of input, which you can prepare in superposition, and e.g. evaluate on a fixed input, or a superposition of inputs, by computing the functions from their specifications in each branch.

+ +

To gain a computational advantage from doing so is a different matter, and will have to involve some specific structure in the functions $f_t $ that you can take advantage of, but simply to ""evaluate in superposition"" is easily done if you have enough information for the question to be sensible.

+",124,,,,,04-02-2018 08:56,,,,0,,,,CC BY-SA 3.0 +1564,2,,1560,04-02-2018 09:34,,5,,"

The exact answer depends on the exact kind of superposition you want. The answers by pyramids and Niel both give you something like

+ +

$$A\sum_{t=1}^n |\,\,f_t (x)\,\,\rangle \otimes |F_t\rangle$$

+ +

Here I've followed Niel in labelling the different functions $f_1$, $f_2$, etc., with $n$ as the total number of functions you want to superpose. Also I've used $F_t$ to denote some description of the function $f_t$ as a stored program. The $A$ is just whatever number needs to be there for the state to be normalized.

+ +

Note that this is not simply a superposition of the $f_t(x)$. It is entangled with the stored program. If you were to trace out the stored program, you'd just have a mixture of the $f_t(x)$. This means that the stored program could constitute 'garbage', which prevents interference effects that you might be counting on. Or it might not. It depends on how this superposition will be used in your computation.

+ +

If you want rid of the garbage, things get more tricky. For example, suppose what you want is a unitary $U$ that has the effect

+ +

$$U : \,\,\, | x \rangle \otimes |0\rangle^{\otimes N} \rightarrow A \sum_{t=1}^n |\,\,f_t (x)\,\,\rangle$$

+ +

for all possible inputs $x$ (which I am assuming are bit strings written in the computational basis). Note that I've also included some blank qubits on the input side, in case the functions have longer outputs than inputs.

+ +

From this we can very quickly find a condition that the functions must satisfy: since the input states form an orthogonal set, so must the outputs. This will put a significant restriction on the kinds of functions that can be combined in this way.

+",409,,409,,04-02-2018 09:40,04-02-2018 09:40,,,,2,,,,CC BY-SA 3.0 +1565,2,,1560,04-02-2018 09:37,,3,,"

Yes (depending on what ""calculate many functions at once"" means)

+ +

Describing the circuit that gives the function $f$ as $U_f$ and the circuit giving $g$ as $U_g$, there are a few ways to go about doing this:

+ +
1. Starting with the qubit registers in $\left|00x\right>$, prepare a state $\alpha\left|01\right>+\beta\left|10\right>$ on the first two registers. This can be done by applying a unitary1 on the first register to put that register in the state $\alpha\left|0\right> + \beta\left|1\right>$ before applying CNOT, then $I\otimes X$. Then, apply $CU_f$ from the first register to the third and $CU_g$ from the second to the third.

   1.1. This gives that the third register is now in the state $\left(\alpha U_f + \beta U_g\right)\left|x\right>$, when the initial operations (up to $I\otimes X$) on the first two registers are reversed (note that $\alpha U_f + \beta U_g$ is generally not unitary, so this step can only succeed probabilistically, with post-selection on the first two registers). However, owing to the general difficulties of implementing arbitrary controlled-unitary operations (as well as using extra qubits unnecessarily), it would probably be easier to implement this directly by dialling up $\alpha U_f + \beta U_g$. Note that this implements neither $f$ nor $g$, but a new, different function $f+g$.

   1.2. Not reversing the initial operations on the first two registers puts the third in some entangled state of $f$ and $g$, which is discussed in other answers.

2. Starting with the state $\left|xx\right>$ and applying $U_f$ to the first register and $U_g$ to the second. This is the closest to classical parallelism, where both functions are applied independently to copies of the same state. Aside from requiring twice the number of qubits, the issue here is that, due to no-cloning, in order to copy $\left|x\right>$, it either has to be known, or be a classical state (i.e. not involve superpositions in the computational basis). Approximate cloning could also be used.

3. Start with the state $\left|0x\right>$, as well as a classical register. Apply a unitary1 to put the first register in the superposition $\alpha\left|0\right>+\beta\left|1\right>$. Now, measure this register (putting the result in the classical register) and apply the classical operation IF RESULT = 0 U_f ELSE U_g. While this may seem less powerful than either of the above operations, this is, in some sense, equivalent to the quantum channel $\mathcal E\left(\rho\right) = \lvert\alpha\rvert^2U_f\rho U_f^\dagger + \lvert\beta\rvert^2U_g\rho U_g^\dagger$. Such methods can be used to make random unitaries, which have applications in e.g. boson sampling and randomised benchmarking.
+ +

1 given by $$\begin{pmatrix}\alpha &&-\beta^* \\ \beta && \alpha^*\end{pmatrix}$$
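
+ +

A numerical sketch of the idea behind construction 1 (my own, written in the standard linear-combination-of-unitaries pattern with a single selector qubit; note the explicit post-selection, which is unavoidable since a linear combination of unitaries is generally not unitary):

```python
import numpy as np

a, b = 0.3, 0.7                            # weights for a*Uf + b*Ug (both > 0)
Uf = np.array([[0, 1], [1, 0]])            # X, standing in for U_f
Ug = np.diag([1.0, -1.0])                  # Z, standing in for U_g
x = np.array([0.6, 0.8])                   # some input state |x>

# Prepare the selector qubit in (sqrt(a)|0> + sqrt(b)|1>) / sqrt(a+b)
V = np.array([[np.sqrt(a), -np.sqrt(b)],
              [np.sqrt(b),  np.sqrt(a)]]) / np.sqrt(a + b)

select = np.block([[Uf, np.zeros((2, 2))],     # Uf if selector |0>, Ug if |1>
                   [np.zeros((2, 2)), Ug]])

I2 = np.eye(2)
psi = np.kron(V.conj().T, I2) @ select @ np.kron(V, I2) @ np.kron([1, 0], x)

branch = psi[:2]                           # post-select the selector in |0>
target = (a * Uf + b * Ug) @ x
print(np.allclose(branch / np.linalg.norm(branch),
                  target / np.linalg.norm(target)))   # True
```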

+",23,,23,,04-02-2018 10:09,04-02-2018 10:09,,,,1,,,,CC BY-SA 3.0 +1568,1,1569,,04-02-2018 14:39,,15,950,"

Quantum computers are efficiently able to simulate any other quantum system. Hence there must be some sort of equivalent of a (possibly simulated) quantum eraser setup. I would like to see such an equivalent drawn as a quantum circuit, ideally in the variant of a delayed choice quantum eraser.

+ +

One (quantum) experimental realization of a quantum eraser is this: You create a double slit interference experiment where you obtain which-way information by ""doubling"" photons in front of each slit using spontaneous parametric down conversion (the physics of which are not important for my argument, the point being that we have a new photon we can measure to obtain which-way information). The interference pattern naturally disappears, unless we build a quantum eraser: If the two ""doubled"" photons carrying the which-way information are superimposed via a 50-50 beamsplitter in such a manner that the which-way information can no longer be measured, the interference pattern reappears. Curiously, this is experimentally the case even if this superimposing happens after the interference pattern is recorded.

+ +

I seem to be unable to find a convincing equivalence for the interference pattern and for the quantum eraser in simple qubit gates. But I would love to make the thought (and ideally, the real) experiment on a quantum computer, too. What program (quantum circuit) would I need to run on a quantum computer to do that?

+",,user1039,26,,12/14/2018 6:10,11/13/2020 21:36,What is the quantum circuit equivalent of a (delayed choice) quantum eraser?,,3,0,,,,CC BY-SA 3.0 +1569,2,,1568,04-02-2018 19:44,,10,,"

I will try to translate the Kim et. al. experiment from an optics description into a quantum information description. Here is the experimental setup as you find it in the linked wikipedia article:
+

+ +

We associate the blue path with $|0\rangle$ and the red with $|1\rangle$. The double slit can be described by a Hadamard gate. The BBO corresponds to a CNOT-gate. The state after the BBO is $\frac{1}{\sqrt{2}}(|00\rangle + |11\rangle)$. There is a phase $\varphi$ depending on the position $x$ of detector $D_0$, which corresponds to a phase gate $R_\varphi=\operatorname{diag}(1,e^{i\varphi})$. Finally superposing the beams on $D_0$ corresponds to another Hadamard gate and the measurement of $D_0$ can be seen to project onto $|0\rangle$. The complete circuit looks like this:
+

+ +

The state before the measurement is: +$$\frac{1}{2}(|00\rangle+|10\rangle + e^{i\varphi}|01\rangle - e^{i\varphi}|11\rangle)=\frac{1}{2\sqrt{2}}(((1+e^{i\varphi})|0\rangle+(1-e^{i\varphi})|1\rangle)|+\rangle + ((1-e^{i\varphi})|0\rangle +(1+e^{i\varphi})|1\rangle)|-\rangle)$$
+Let's look at the probability to measure the first photon in $D_0$ ($|0\rangle\langle 0|$).
+If we measure the second in z-basis ($D_3$ and $D_4$), the probability for a click in $D_0$ is $\frac{1}{2}$ (the post-measurement state is $|\pm\rangle$). This is independent of the phase: no interference here. For the x-basis ($D_1$ and $D_2$) the probability for a click at $D_0$ is $\frac{1}{2}(1\mp \cos \varphi)$, so here we see the interference. Whether we see interference or not depends on the basis choice on the second system, which can be delayed. Of course we need to know the outcome, so faster than light communication is not possible with this setup.
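As a cross-check of these probabilities, the circuit (Hadamard, CNOT, phase, Hadamard, all single-qubit gates acting on the first qubit) can be simulated directly with a few lines of NumPy; the phase value $\varphi = 0.7$ below is an arbitrary choice:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I2 = np.eye(2)
# CNOT with qubit 1 as control, qubit 2 as target (basis order |q1 q2>)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

def eraser_state(phi):
    """H (double slit) -> CNOT (BBO) -> R_phi -> H, all on qubit 1."""
    R = np.diag([1, np.exp(1j * phi)])
    psi = np.array([1, 0, 0, 0], dtype=complex)   # |00>
    for gate in (np.kron(H, I2), CNOT, np.kron(R, I2), np.kron(H, I2)):
        psi = gate @ psi
    return psi

phi = 0.7
psi = eraser_state(phi)

# Condition on z-basis outcome |0> for qubit 2 (detectors D3/D4):
# the D0 click probability is 1/2, independent of phi -- no interference.
p0_given_z0 = abs(psi[0])**2 / (abs(psi[0])**2 + abs(psi[2])**2)

# Condition on x-basis outcome |+> for qubit 2 (detectors D1/D2):
# the D0 click probability goes as (1 + cos phi)/2 -- interference.
plus = np.array([1, 1]) / np.sqrt(2)
amp0 = np.kron([1, 0], plus) @ psi
amp1 = np.kron([0, 1], plus) @ psi
p0_given_plus = abs(amp0)**2 / (abs(amp0)**2 + abs(amp1)**2)
```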

+",104,,104,,04-02-2018 21:46,04-02-2018 21:46,,,,3,,,,CC BY-SA 3.0 +1570,1,1573,,04-02-2018 21:14,,10,602,"

I am aware that IBM, Rigetti, and Google have built some small-scale devices. Which of them are available for access by an undergraduate student? For how long? With how many qubits?

+",1658,,26,,5/14/2019 6:28,5/14/2019 6:33,What real quantum computers are available for students to use?,,2,0,,,,CC BY-SA 4.0 +1571,2,,1570,04-02-2018 22:27,,7,,"

An undergrad can create an account with the IBM Quantum Experience. Users have some points that they can use to run circuits of their own design on a real quantum computer. You can use five qubits.

+ +

I am unaware of any way for someone to use Google's quantum computer unless you get a job with them. Rigetti has a device, but what people use online is a simulator of a quantum computer, meant to assist with writing quantum algorithms, that runs up to 36 qubits. It uses a language called Quil.

+ +

There are multiple simulators available. IBM has one as a part of its IBM quantum experience, there is Quirk, and there are multiple ""programming"" languages like QCL, most of which are available on GitHub or with the creation of an account, so definitely open to an undergrad.

+ +

To summarize: if you don't have a quantum computer of your own and wish to try out a circuit on a real quantum computer, your options seem to be limited to IBM's quantum experience, which has five qubits.

+",91,,26,,5/14/2019 6:33,5/14/2019 6:33,,,,1,,,,CC BY-SA 4.0 +1572,2,,1323,04-03-2018 04:13,,5,,"
+

Why can you not subdivide a quantum bitcoin?

+
+ +

Anyone can create a Cryptocurrency, how it works is up to them, how well it is received is up to the public, generally it is decided by: Utility, Scarcity, Perceived Value.

+ +

As of today a Bitcoin is worth USD 7,073.54. A Bitcoin is divided into 10$^8$ Satoshis, each worth 0.00000001 Bitcoins, so a Satoshi is worth: 7,073.54 * 0.00000001 = 7.07354 × 10$^{-5}$ USD or 0.00707354 pennies. In total there can be 21 million bitcoin units (2.1 quadrillion Satoshis). The creators of Bitcoin chose that it would be divided into Satoshis.

+ +

In Jogenfors' paper, which you cited, he decided that the Quantum Bitcoin protocol (not to be confused with the Qubitcoin (Q2C) a CPU and GPU based Cryptocurrency) will use the no-cloning theorem of quantum mechanics to construct quantum shards and two blockchains.

+ +

The answer to your question is, according to section 4.4 - Preventing the Reuse Attack:

+ +

""With Bitcoin the blockchain records all transactions and a miner therefore +relinquishes control over the mined bitcoin as soon as it is handed over to a recipient. In Quantum Bitcoin, however, there is no record of who owns what, so there is no way to distinguish between the real and counterfeit quantum bitcoin.

+ +

We prevent the reuse attack by adding a secondary stage to the minting algorithm, where data is also appended to a new ledger $\mathcal{L^′}$.

+ +

...

+ +

Quantum shard miners create and sell the quantum shards on a marketplace; another miner (called a quantum bitcoin miner) purchases $m$ quantum shards $\{(s, \rho_i, \sigma_i)\}_{1\le i\le m}$ on the marketplace that, for all $1 \le i \le m$, fulfill the following conditions:

+ +
    +
  • $\mathsf{Verify}_\mathcal{M}((s, \rho_i, \sigma_i))$ accepts

  • +
  • The timestamp $T$ of the quantum shard in the Quantum Shard ledger $\mathcal{L}$ fulfills $t − T ≤ T_{max}$, where $t$ is the current time"". See section ""A.2 The Reuse Attack"" for further proof.

  • +
+ +

Shards have a lifetime of $T_{max}$.

+ +

While a shard is designed to have a short lifetime the Quantum Bitcoin is designed to have a great longevity (as long as it is intact, undivided), see section ""A.3 Quantum Bitcoin Longevity"":

+ +

""Theorem 4 (Quantum Bitcoin Longevity) The number of times a quantum bitcoin can be verified and reconstructed is exponentially large in $n$.

+ +

Proof Corollary 1 shows that the completeness error $\varepsilon$ of $\mathcal{Q}$ is exponentially small in $n$. When verifying a genuine quantum bitcoin \$, the verifier performs the measurement $\mathsf{Verify}_\mathcal{Q}(\$)$ on the underlying quantum states $\rho$, which yields the outcome “Pass” with probability $1 - \varepsilon$. Then lemma 2 shows that we can recover the underlying quantum states $\widetilde{\rho}_{i}$ of \$ so that $\Vert \widetilde{\rho}_{i} - \rho_i \Vert_{tr} \le \sqrt\varepsilon$. As $\varepsilon$ is exponentially small in $n$, the trace distance becomes exponentially small in $n$ as well.

+ +

Each time such a quantum bitcoin is verified and reconstructed, the trace distance between the “before” and “after” is exponentially small in $n$. Given any threshold after which we consider the quantum bitcoin “worn out”, the number of verifications it survives before passing this threshold is exponential in $n$.

+ +

Theorem 4 shows that a quantum bitcoin \$ can be verified and re-used many times before the quantum state is lost (assuming the absence of noise and decoherence). This is of course analogous to traditional, physical banknotes and coins which are expected to last for a large enough number of transactions before wearing out."".

+ +

So, much like paper money, you need the whole bill (blockchain) undivided. Of course, that doesn't prevent you from exchanging one Quantum Bitcoin for however many Bitcoins and then offering Bitcoins (or fiat) as change.

+",278,,,,,04-03-2018 04:13,,,,0,,,,CC BY-SA 3.0 +1573,2,,1570,04-03-2018 08:56,,13,,"

The first cloud device was made available back in 2013. It is a photonic chip at the University of Bristol. Though it is an example of something we could build quantum computers from, it is quite different from the usual 'circuit model' architecture.

+ +

Then 2016 brought some devices from IBM. There are 5 qubit processors anyone can use with the Quantum Experience GUI or using the QISKit SDK. There's also a 16 qubit device that you can use only with QISKit. Larger devices also exist, but they are not publicly available.

+ +

Then came Rigetti's 19 qubit QPU at the end of last year, which can be used via pyQuil. You have to apply for an appointment to get access for a few hours, but I see no reason why anyone with a serious interest would not have it granted.

+ +

Finally, Alibaba and the Chinese Academy of Sciences have teamed up to put an 11 qubit device on the cloud. The interface is via a GUI, and is quite similar to IBM's Quantum Experience.

+",409,,,,,04-03-2018 08:56,,,,0,,,,CC BY-SA 3.0 +1577,1,1578,,04-03-2018 13:23,,25,3598,"

From what I understood, there seems to be a difference between quantum annealing and adiabatic quantum computation models but the only thing I found on this subject implies some strange results (see below).

+ +

My question is the following: what is exactly the difference/relation between quantum annealing and adiabatic quantum computation?

+ +
+ +

The observations that leads to a ""strange"" result:

+ +
    +
  • On Wikipedia, adiabatic quantum computation is depicted as ""a subclass of quantum annealing"".
  • +
  • On the other hand we know that: + +
      +
    1. Adiabatic quantum computation is equivalent to quantum circuit model (arXiv:quant-ph/0405098v2)
    2. +
    3. DWave computers use quantum annealing.
    4. +
  • +
+ +

So by using the 3 facts above, DWave quantum computers should be universal quantum computers. But from what I know, DWave computers are restricted to a very specific kind of problem so they cannot be universal (DWave's engineers confirm this in this video).

+ +

As a side question, what is the problem with the reasoning above?

+",1386,,1386,,4/17/2019 13:31,6/20/2020 20:42,What is the difference between quantum annealing and adiabatic quantum computation models?,,2,1,,,,CC BY-SA 4.0 +1578,2,,1577,04-03-2018 14:29,,12,,"

Vinci and Lidar have a nice explanation in their introduction of non-stoquastic Hamiltonians in quantum annealing (which a quantum annealing device needs in order to simulate gate-model computation).

+

https://arxiv.org/abs/1701.07494

+
+

It is well known that the solution of computational problems can be encoded into the ground state of a time-dependent quantum Hamiltonian. This approach is known as adiabatic quantum computation (AQC), and is universal for quantum computing (for a review of AQC see arXiv:1611.04471). Quantum annealing (QA) is a framework that incorporates algorithms and hardware designed to solve computational problems via quantum evolution towards the ground states of final Hamiltonians that encode classical optimization problems, without necessarily insisting on universality or adiabaticity.

+

QA thus inhabits a regime that is intermediate between the idealized assumptions of universal AQC and unavoidable experimental compromises. Perhaps the most significant of these compromises has been the design of stoquastic quantum annealers. A Hamiltonian $H$ is stoquastic with respect to a given basis if $H$ has only real nonpositive offdiagonal matrix elements in that basis, which means that its ground state can be expressed as a classical probability distribution. Typically, one chooses the computational basis, i.e., the basis in which the final Hamiltonian is diagonal. The computational power of stoquastic Hamiltonians has been carefully scrutinized, and is suspected to be limited in the ground-state AQC setting. E.g., it is unlikely that ground-state stoquastic AQC is universal. Moreover, under various assumptions ground-state stoquastic AQC can be efficiently simulated by classical algorithms such as quantum Monte Carlo, though certain exceptions are known.

+
+",54,,-1,,6/18/2020 8:31,04-03-2018 14:29,,,,2,,,,CC BY-SA 3.0 +1579,2,,1441,04-03-2018 17:25,,7,,"
+

""I'm looking for an explicit upper bound on the probability of successful counterfeiting ..."".

+
+ +

In ""An adaptive attack on Wiesner's quantum money"", by Aharon Brodutch, Daniel Nagaj, Or Sattath, and Dominique Unruh, last revised on 10 May 2016, the authors claim a success rate of: ""~100%"".

+ +

The paper makes these claims:

+ +
+

Main results. We show that in a strict testing variant of Wiesner's scheme (that is, if only valid money is returned to the owner), given a single valid quantum money state $(s,\left|\$_s\right>)$, a counterfeiter can efficiently create as many copies of $\left|\$_s\right>$ as he wishes (hence, the scheme is insecure). He can rely on the quantum Zeno effect for protection – if he disturbs the quantum money state only slightly, the bill is likely to be projected back to the original state after a test. Interestingly, this allows a counterfeiter to distinguish the four different qubit states with an arbitrarily small probability of being caught.

+ +

...

+ +

In this paper, we have focused on Wiesner's money in a noiseless environment. That is, the bank rejects the money if even a single qubit is measured incorrectly. In a more realistic setting, we have to deal with noise, and the bank would want to tolerate a limited amount of errors in the quantum state [PYJ+12], say 10%.

+
+ +

Also see: ""Quantum Bitcoin: An Anonymous and Distributed Currency Secured by the No-Cloning Theorem of Quantum Mechanics"", by Jonathan Jogenfors, 5 Apr 2016, where he discusses Wiesner's scheme and proposes one of his own.

+",278,,,,,04-03-2018 17:25,,,,0,,,,CC BY-SA 3.0 +1580,1,1582,,04-03-2018 18:40,,8,1292,"

There are many different ways to build quantum computers, such as superconducting qubits, quantum dots, and ion traps.

+ +

What I would like to understand is why some universities and research organizations have chosen to study trapped ion quantum computers. +I understand that it is better in some respects, but what are the fundamental differences?

+ +
+

+ +

Source: Youtube video

+ +

Domain of Science

+ +

Dominic Walliman

+
+",245,,26,,12/14/2018 5:30,12/14/2018 5:30,What are the fundamental differences between trapped ion quantum computers and other architectures?,,1,7,,,,CC BY-SA 3.0 +1581,1,1587,,04-03-2018 22:02,,15,985,"

The typically used gate set for quantum computation is composed of the single-qubit Cliffords (Paulis, H and S) and the controlled-NOT and/or controlled-Z.

+ +

To go beyond Clifford, we would like to have full single-qubit rotations. But if we are being minimal, we just go for T (the fourth root of Z).

+ +

This particular form of the gate set pops up everywhere, such as in IBM’s Quantum Experience, for example.

+ +

Why these gates, exactly? For example, H does the job of mapping between X and Z. S similarly does the job of mapping between Y and X, but a factor of $-1$ also gets introduced. Why don’t we use a Hadamard-like unitary $(X+Y)/\sqrt{2}$ instead of S? Or why don’t we use the square root of Y instead of H? It would be equivalent mathematically, of course, but it would just seem a bit more consistent as a convention.

+ +

And why is our go-to non-Clifford gate the fourth root of Z? Why not the fourth root of X or Y?

+ +

What historical conventions led to this particular choice of gate set?

+",409,,26,,12/23/2018 14:10,12/23/2018 14:10,Why do we use the standard gate set that we do?,,1,2,,,,CC BY-SA 4.0 +1582,2,,1580,04-04-2018 00:53,,8,,"

Disclosure: while I am not an experimental physicist, I am part of the NQIT project, which is aiming to develop quantum hardware which is suitable to realise scalable quantum computers. The architecture that we're investing most heavily in is optically linked ion traps.

+ +

Ions represent some of the physically best understood systems to experimental and theoretical physics, and the idea of performing quantum computation with ion traps is a very old one, as far as proposed implementations go. The main features of ion traps as a potential approach to realising quantum computation are

+ +
    +
  • They can give rise to very stable qubits, with life times on the order of a minute or more; and
  • +
  • For well-chosen ion species, the electron state transitions are very well understood, in principle allowing for very high precision gates.
  • +
  • Furthermore, these features can be realised without the need for heavy cryogenics.
  • +
+ +

See for instance [arXiv:0805.2450, arXiv:1606.08409]. To be sure, there are obstacles to using ion traps as a basis for quantum technologies. Most importantly, you need to solve the problem of cross-talk between the ions due to collective motional excitation, either through clever quantum control or through isolation of groups of ions. If the latter, you have to find a way to regroup collections of ions in a very large trap architecture, or find a way for ions in different traps to communicate.

+ +

The scaling problems for ion traps are not easy problems. However, significant obstacles exist for all quantum architectures; while well-resourced multinational corporations have placed their bets elsewhere, it seems to me far too early to reliably predict which platform will solve their scaling problems first. To my knowledge, none of the approaches have managed to get to the point where the only problem left is engineering.

+ +

If you could solve the scaling problems for ion traps, you would likely realise computers through the unitary circuit model, the measurement-based model, or some closely related model to these, according to how you negotiated the scaling problem. Of course, this is how one might describe computation at the lowest level, i.e. to realise error correction. At higher levels, tradition would exert a significant amount of pressure to use the primitive operations to realise fault-tolerant unitary circuit model operations, unless some other computational model was somehow compelling enough to displace it for the scalable error-corrected system.

+ +

In any case — it is not really meaningful at this stage to call ion traps in general a ""non-universal model"", as your diagram suggests, just because some groups have aimed for quantity before versatility. This would be similar to pigeon-holing transistors as a non-universal computational platform in the 1950s, just because they were used to make radios before they were used to make computers. Even if (as the linked YouTube video suggests) the ion trap groups on the U.S. eastern seaboard are aiming first for many ions and then attempting to master versatile quantum control, other groups such as the one at Oxford are taking the opposite approach of controlling very few qubits very well, then attempting to approach the problem of scaling up. Both are possible trajectories towards solving the difficult problem of designing scalable quantum computers.

+",124,,,,,04-04-2018 00:53,,,,2,,,,CC BY-SA 3.0 +1584,1,1594,,04-04-2018 06:17,,37,5744,"

Is there a general statement about what kinds of problems can be solved more efficiently using quantum computers (quantum gate model only)? Do the problems for which an algorithm is known today have a common property?

+ +

As far as I understand, quantum computing helps with the hidden subgroup problem (Shor), and Grover's algorithm speeds up search problems. I have read that quantum algorithms can provide a speed-up if you look for a 'global property' of a function (Grover/Deutsch).

+ +
    +
  1. Is there a more concise and correct statement about where quantum computing can help?
  2. +
  3. Is it possible to give an explanation why quantum physics can help there (preferably something deeper that 'interference can be exploited')? And why it possibly will not help for other problems (e.g. for NP-complete problems)?
  4. +
+ +

Are there relevant papers that discuss just that?

+ +

I had asked this question before over on cstheory.stackexchange.com but it may be more suitable here.

+",1185,,26,,12/13/2018 19:42,12/13/2018 19:42,Is there any general statement about what kinds of problems can be solved more efficiently using a quantum computer?,,4,0,,,,CC BY-SA 3.0 +1585,2,,1584,04-04-2018 07:29,,14,,"

TL;DR: No, we do not have any precise ""general"" statement about exactly which type of problems quantum computers can solve, in complexity theory terms. However, we do have a rough idea.

+ +

According to Wikipedia's sub-article on Relation to computational complexity theory

+ +
+

The class of problems that can be efficiently solved by quantum + computers is called BQP, for ""bounded error, quantum, polynomial + time"". Quantum computers only run probabilistic algorithms, so BQP on + quantum computers is the counterpart of BPP (""bounded error, + probabilistic, polynomial time"") on classical computers. It is defined + as the set of problems solvable with a polynomial-time algorithm, + whose probability of error is bounded away from one half. A + quantum computer is said to ""solve"" a problem if, for every instance, + its answer will be right with high probability. If that solution runs + in polynomial time, then that problem is in BQP.

+ +

BQP is contained in the complexity class #P (or more precisely in the + associated class of decision problems P#P), which is a subclass of + PSPACE.

+ +

BQP is suspected to be disjoint from NP-complete and a strict superset + of P, but that is not known. Both integer factorization and discrete + log are in BQP. Both of these problems are + NP problems suspected + to be outside BPP, and hence outside P. Both are suspected to not be + NP-complete. There is a common misconception that quantum computers + can solve NP-complete problems in polynomial time. That is not known + to be true, and is generally suspected to be false.

+ +

The capacity of a quantum computer to accelerate classical algorithms + has rigid limits—upper bounds of quantum computation's complexity. The + overwhelming part of classical calculations cannot be accelerated on a + quantum computer. A similar fact takes place for particular + computational tasks, like the search problem, for which Grover's + algorithm is optimal.

+ +

Bohmian Mechanics is a non-local hidden variable interpretation of + quantum mechanics. It has been shown that a non-local hidden variable + quantum computer could implement a search of an N-item database at + most in ${\displaystyle O({\sqrt[{3}]{N}})}$ steps. This is slightly + faster than the $\displaystyle O({\sqrt {N}})$ steps taken by + Grover's algorithm. Neither search method will allow quantum computers + to solve NP-Complete problems in polynomial time.

+ +

Although quantum computers may be faster than classical computers for + some problem types, those described above can't solve any problem that + classical computers can't already solve. A Turing machine can simulate + these quantum computers, so such a quantum computer could never solve + an undecidable problem like the halting problem. The existence of + ""standard"" quantum computers does not disprove the Church–Turing + thesis. It has been speculated that theories of quantum gravity, such + as M-theory or loop quantum gravity, may allow even faster computers + to be built. Currently, defining computation in such theories is an + open problem due to the problem of time, i.e., there currently exists + no obvious way to describe what it means for an observer to submit + input to a computer and later receive output.

+
+ +

As for why quantum computers can efficiently solve BQP problems:

+ +
+
    +
  1. The number of qubits in the computer is allowed to be a polynomial + function of the instance size. For example, algorithms are known for + factoring an $n$-bit integer using just over $2n$ qubits (Shor's + algorithm).

  2. +
  3. Usually, computation on a quantum computer ends with a measurement. + This leads to a collapse of quantum state to one of the basis states. + It can be said that the quantum state is measured to be in the correct + state with high probability.

  4. +
+
+ +

Interestingly, if we theoretically allow post-selection (which doesn't have any scalable practical implementation), we get the complexity class post-BQP:

+ +
+

In computational complexity theory, PostBQP is a complexity class + consisting of all of the computational problems solvable in polynomial + time on a quantum Turing machine with postselection and bounded error + (in the sense that the algorithm is correct at least 2/3 of the time + on all inputs). However, Postselection is not considered to be a + feature that a realistic computer (even a quantum one) would possess, + but nevertheless postselecting machines are interesting from a + theoretical perspective.

+
+ +

I'd like to add what @Discrete lizard mentioned in the comments section. You have not explicitly defined what you mean by ""can help"", however, the rule of thumb in complexity theory is that if a quantum computer ""can help"" in terms of solving in polynomial time (with an error bound) iff the class of problem it can solve lies in BQP but not in P or BPP. The general relation between the complexity classes we discussed above is suspected to be:

+ +

$\mathsf{P} \subseteq \mathsf{BPP} \subseteq \mathsf{BQP} \subseteq \mathsf{PSPACE}$

+ +

+ +

However, whether P = PSPACE is an open problem in computer science. Also, the relationship between P and NP is not yet known.

+",26,,26,,04-04-2018 08:31,04-04-2018 08:31,,,,7,,,,CC BY-SA 3.0 +1586,2,,1584,04-04-2018 07:39,,9,,"

There is no such general statement and it is unlikely there will be one soon. I will explain why this is the case. For a partial answer to your question, looking at the problems in the two complexity classes BQP and PostBQP might help.

+ +
+ +

The complexity classes that come closest to the problems that can be solved efficiently by quantum computers of the quantum gate model are

+ +
    +
  1. BQP; and
  2. +
  3. PostBQP
  4. +
+ +

BQP consists of the problems that can be solved in polynomial time on a quantum circuit. Most important quantum algorithms, such as Shor's algorithm, solve problems in BQP.

+ +

PostBQP consists of the problems that can be solved in polynomial time on a quantum circuit that can additionally perform postselection. This is a lot more powerful, as PostBQP$=$PP, a class that contains NP.

+ +

However, there currently are no methods to practically implement postselection, so PostBQP is more of theoretical interest.

+ +

The relation between P, NP and BQP is currently unknown, and is an open problem on the order of P vs. NP, since any general statement about what kinds of problems can be solved more efficiently using quantum computers must answer the BQP vs. P question (if BQP = P, then apparently quantum computers aren't more efficient, to complexity theorists at least).

+",253,,253,,04-04-2018 07:55,04-04-2018 07:55,,,,8,,,,CC BY-SA 3.0 +1587,2,,1581,04-04-2018 09:23,,14,,"

Anyone who has written a paper, and asked themselves whether they could improve the notation, or present the analysis a bit differently to make it more elegant, is familiar with the fact that choices of notation, description, and analysis can be an accident — chosen without deep motivations. There's nothing wrong with it, it just doesn't have a strong justification to be a particular way. In large communities of people more concerned (possibly with reason) with getting things done rather than presenting the cleanest possible picture, this is going to happen all the time.

+ +

I think that the ultimate answer to this question is going to be along these lines: it is mostly a historical accident. I doubt that there are any deeply considered reasons for the gate-sets being as they are, any more than there are deeply considered reasons why we talk about the Bell state $\lvert \Phi^+ \rangle = ( \lvert 00 \rangle + \lvert 11 \rangle ) \big/ \sqrt 2$ somewhat more often than the state $\lvert \Psi^- \rangle = ( \lvert 01 \rangle - \lvert 10 \rangle ) \big/ \sqrt 2$.

+ +

But we can still consider how the accident came about, and whether there is something we can learn about systematic ways of thinking which might have led us there. I expect that the reasons ultimately come from the cultural priorities of computer scientists, with both deep and superficial biases playing a role in how we describe things.

+ +

A digression on Bell states

+ +

If you'll bear with me, I'd like to dwell on the example of the two Bell states $\lvert \Phi^+ \rangle$ and $\lvert \Psi^- \rangle$ as an indicative example of how an ultimately arbitrary convention can come about by accident, in part because of biases which do not have deep mathematical roots.

+ +

One obvious reason for preferring $\lvert \Phi^+ \rangle$ over $\lvert \Psi^- \rangle$ is that the former is more obviously symmetric. +As we add the two components for $\lvert \Phi^+ \rangle$, there is no clear need to defend why we write it as we do. In contrast, we could just as easily define $\lvert \Psi^- \rangle = (\lvert 10 \rangle - \lvert 01 \rangle) \big/ \sqrt 2$ with the opposite sign, which is no better or worse motivated than the choice $\lvert \Psi^- \rangle = (\lvert 01 \rangle - \lvert 10 \rangle) \big/ \sqrt 2$. This makes it feel as though we are making more arbitrary choices when defining $\lvert \Psi^- \rangle$.

+ +

Even the choice of basis is somewhat flexible in the case of $\lvert \Phi^+ \rangle$: we can write $\lvert \Phi^+ \rangle := (\lvert ++ \rangle + \lvert -- \rangle)\big/\sqrt 2$ and obtain the same state. But things start going a little worse if you start considering the eigenstates $\lvert \pm i \rangle := (\lvert 0 \rangle \pm i \lvert 1 \rangle)\big/\sqrt 2$ of the $Y$ operator: we have $\lvert \Phi^+ \rangle = (\lvert +i \rangle\lvert -i \rangle + \lvert -i \rangle \lvert +i \rangle)\big/\sqrt 2$. This still looks pretty symmetric, but it becomes clear that our choice of basis plays a non-trivial role in how we define $\lvert \Phi^+ \rangle$.

+ +

The joke is on us. The reason why $\lvert \Phi^+ \rangle$ seems ""more symmetric"" than $\lvert \Psi^- \rangle$ is because $\lvert \Psi^- \rangle$ is literally the least symmetric two-qubit state, and this makes it better motivated than $\lvert \Phi^+\rangle$ instead of less motivated. The $\lvert \Psi^- \rangle$ state is the unique antisymmetric state: the unique state which is the $-1$ eigenvector of the SWAP operation, and therefore implicated in the controlled-SWAP test for qubit state distinguishability, among other things.

+ +
    +
  • We can describe $\lvert \Psi^- \rangle$ up to a global phase as $(\lvert \alpha \rangle \lvert \alpha^\perp \rangle - \lvert \alpha^\perp \rangle \lvert \alpha \rangle)\big/\sqrt 2$ for literally any single-qubit state $\lvert \alpha \rangle$ and orthogonal state $\lvert \alpha^\perp \rangle$, meaning that the properties which make it interesting are independent of the choice of basis.
  • +
  • Even the global phase which you use to write the state $\lvert \alpha^\perp \rangle$ doesn't affect the definition of $\lvert \Psi^-\rangle$ up to more than a global phase. The same isn't true of $\vert \Phi^+\rangle$: as an exercise for the reader, if $\lvert 1' \rangle = i \lvert 1 \rangle$, then what is $(\lvert 00 \rangle + \lvert 1' 1' \rangle)\big/\sqrt 2$?
  • +
+ +
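Both bullet points are easy to check numerically. The NumPy sketch below (the angles parametrising the alternative basis are arbitrary) verifies that the singlet is the $-1$ eigenvector of SWAP, and that the same state results, up to a global phase, from any choice of basis:

```python
import numpy as np

SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]])

def singlet(a):
    """(|a>|a_perp> - |a_perp>|a>)/sqrt(2) for a single-qubit state |a>."""
    a_perp = np.array([-a[1].conj(), a[0].conj()])
    return (np.kron(a, a_perp) - np.kron(a_perp, a)) / np.sqrt(2)

psi_minus = singlet(np.array([1.0, 0.0]))      # standard-basis definition

# |Psi^-> is the -1 eigenvector of the SWAP operation
assert np.allclose(SWAP @ psi_minus, -psi_minus)

# The same state (up to global phase) from an arbitrary alternative basis
rng = np.random.default_rng(0)
t, p = rng.uniform(0, np.pi), rng.uniform(0, 2 * np.pi)
a = np.array([np.cos(t), np.exp(1j * p) * np.sin(t)])
other = singlet(a)
overlap = abs(np.vdot(psi_minus, other))
assert np.isclose(overlap, 1.0)
```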

Meanwhile, $\lvert \Phi^+ \rangle$ is just one maximally entangled state in the three-dimensional symmetric subspace on two qubits — the subspace of $+1$ eigenvectors of the SWAP operation — and therefore no more distinguished in principle than, say, $\lvert \Phi^- \rangle \propto \lvert 00 \rangle - \lvert 11 \rangle$.

+ +

After learning a thing or two about the Bell states, it becomes clear that our interest in $\lvert \Phi^+ \rangle$ in particular is motivated only by a superficial symmetry of notation, and not any truly meaningful mathematical properties. It is certainly a more arbitrary choice than $\lvert \Psi^- \rangle$. The only obvious motivation for preferring $\lvert \Phi^+ \rangle$ is sociological, having to do with avoiding minus signs and imaginary units. And the only justifiable reason I can think of for that is cultural: specifically, in order to better accommodate students or computer scientists.

+ +

Who ordered CNOT?

+ +

You ask why we don't talk more about $(X + Y)\big/\sqrt 2$. To me the more interesting question, which you also ask, is: why do we talk so much about $H = (X + Z)\big/\sqrt 2$, when $\sqrt Y$ does many of the same things? I have seen a talk given by an experimental optical physicist to students, who even described performing $\sqrt Y$ on a standard basis state as performing a Hadamard gate: but it was a $\sqrt Y$ gate that was actually more natural for his platform. The operator $\sqrt Y$ is also more directly related to the Pauli operators, obviously. A serious physicist might consider it curious that we dwell so much on the Hadamard instead.

+ +
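
This contrast is easy to check numerically; a small numpy sketch (taking $\sqrt Y \propto (\mathbb 1 - iY)\big/\sqrt 2$, one conventional choice of square root):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
Y = np.array([[0, -1j], [1j, 0]])
sqrtY = (np.eye(2) - 1j * Y) / np.sqrt(2)  # a square root of Y, up to a phase

ket0 = np.array([1, 0], dtype=complex)
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)

# Both gates send |0> to |+> ...
h_image = H @ ket0
y_image = sqrtY @ ket0

# ... but H is self-inverse, while sqrtY squared is Y up to phase, not identity.
h_twice = H @ H
y_twice = sqrtY @ sqrtY
```

So performing $\sqrt Y$ twice really does leave a non-trivial residue, unlike the Hadamard.

+ +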

But there is a bigger elephant in the room — when we talk about CNOT, why are we talking about CNOT, instead of another entangling gate $\mathrm{CZ} = \mathrm{diag}(+1,+1,+1,-1)$ which is symmetric on its tensor factors, or better yet $U = \exp(-i \pi (Z \otimes Z)/4)$ which is more closely related to the natural dynamics of many physical systems? Not to mention a unitary such as $U' = \exp(-i \pi (X \otimes X)/4)$ or other such variants.

+ +
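
These gates are all interchangeable at the cost of single-qubit corrections; a small numpy check (my conventions: the first qubit is the control, $S^\dagger = \mathrm{diag}(1,-i)$, and the Ising-type representative is $\exp(-i \pi (Z \otimes Z)/4)$):

```python
import numpy as np

I2 = np.eye(2)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
S_dag = np.diag([1, -1j])  # inverse of the phase gate S = diag(1, i)

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)
CZ = np.diag([1, 1, 1, -1]).astype(complex)
ising = np.diag(np.exp(-1j * np.pi / 4 * np.array([1, -1, -1, 1])))  # exp(-i pi (ZxZ)/4)

# CZ is CNOT conjugated by a Hadamard on the target qubit:
check1 = np.kron(I2, H) @ CNOT @ np.kron(I2, H)

# ... and the Ising gate is CZ up to local S^dagger corrections and a global phase:
check2 = np.exp(1j * np.pi / 4) * np.kron(S_dag, S_dag) @ ising
```

So the choice between them is a matter of convenience rather than computational power.

+ +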

The reason, of course, is that we are explicitly interested in computation rather than physics per se. We care about CNOT because of how it transforms the standard basis (a basis which is preferred not for mathematical or physical reasons, but for human-centered reasons). The gate $U$ above is slightly mysterious from the point of view of a computer scientist: it is not obvious on the surface of it what it is for, and worse, it is full of icky complex coefficients. And the gate $U'$ is even worse. By contrast, CNOT is a permutation operator, full of 1s and 0s, permuting the standard basis in a way which is obviously relevant to the computer scientist.

+ +

Though I'm making a bit of fun here, in the end this is what we're studying quantum computation for. The physicist can have deeper insights into the ecology of the elementary operations, but what the computer scientist cares about at the end of the day is how primitive things can be composed into comprehensible procedures involving classical data. And that means not caring too much about symmetry on the lower logical levels, so long as they can get what they want out of those lower levels.

+ +

We talk about CNOT because it is the gate that we want to spend time thinking about. From a physical perspective gates such as $U$ and $U'$ as above are in many cases the operations we would think about for realising CNOT, but the CNOT is the thing that we care about.

+ +

Deep, and not so deep, reasons to prefer the Hadamard gate

+ +

I expect that the priorities of computer scientists motivate a lot of our conventions, such as why we talk about $(X + Z)\big/\sqrt 2$, instead of $\sqrt Y \propto (\mathbb 1 - i Y)\big/\sqrt 2$.

+ +

The Hadamard operation is already slightly scary to computer scientists who are not already acquainted with quantum information theory. (The way it is used sounds like non-determinism, and it even uses irrational numbers!) But once a computer scientist gets past the initial revulsion, the Hadamard gate does have properties that they can like: at least it only involves real coefficients, it is self-inverse, and you can even describe the eigenbasis of $H$ with just real coefficients.

+ +

One way in which the Hadamard often arises is in describing toggling between the standard basis $\lvert 0 \rangle, \lvert 1 \rangle$ and 'the' conjugate basis $\lvert + \rangle, \lvert - \rangle$ (that is to say, the eigenbasis of the $X$ operator, as opposed to the $Y$ operator) — the so-called 'bit' and 'phase' bases, which are two conjugate bases that you can express using only real coefficients. Of course, $\sqrt Y$ also transforms between these bases, but it also introduces a non-trivial transformation if you perform it twice. If you want to think of ""toggling between two different bases in which you might store information"", the Hadamard gate is better. But this can only be defensible if you think it is important specifically to have

+ +
    +
  • a gate $H$ transforming between the standard basis and the very specific basis of $\lvert + \rangle, \lvert - \rangle$;
  • +
  • an operator which has order $2$, so that the same gate toggles in both directions.
  • +
+ +

You might protest and say that it is very natural to consider toggling between the 'bit' and 'phase' bases. But where did we get this notion of two specific bases for 'bit' and 'phase', anyway? The only reason why we single out $\lvert + \rangle, \lvert - \rangle$ as 'the' phase basis, as opposed for instance to $\lvert +i \rangle, \lvert -i \rangle$, is because it can be expressed with only real coefficients in the standard basis. As for preferring an operator with order $2$, to mesh with the notion of toggling, this seems to indicate a particular preference for considering things by 'flips' rather than reversible changes of basis. These priorities smack of the interests of computer science.

+ +

Unlike the case between $\lvert \Phi^+ \rangle$ versus $\lvert \Psi^- \rangle$, the computer scientist does have one really good high-level argument for preferring $H$ over $\sqrt Y$: the Hadamard gate is the unitary representation of the boolean Fourier transform (that is, it is the quantum Fourier transform on qubits). This is not very important from a physical perspective, but it is very helpful from a computational perspective, and a very large fraction of theoretical results in quantum computation and communication ultimately rest on this observation. But the boolean Fourier transform already bakes in the asymmetries of computer science, in pre-supposing the importance of the standard basis and in using only real coefficients: an operator such as $(X + Y)\big/\sqrt 2$ would never be considered on these grounds.

+ +
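
Concretely, $H^{\otimes n}\lvert x\rangle = 2^{-n/2}\sum_y (-1)^{x\cdot y}\lvert y\rangle$, the Fourier transform over $\mathbb Z_2^n$; a minimal numpy check for $n = 3$:

```python
import numpy as np
from functools import reduce

n = 3
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
Hn = reduce(np.kron, [H] * n)  # H applied to each of the n qubits

# The boolean Fourier matrix: entry (x, y) is (-1)^(x.y) / 2^(n/2),
# where x.y is the inner product of the bit-strings x and y over Z_2.
def parity_dot(x, y):
    return bin(x & y).count('1') % 2

F = np.array([[(-1) ** parity_dot(x, y) for y in range(2 ** n)]
              for x in range(2 ** n)]) / 2 ** (n / 2)
```

+ +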

Diagonal argument

+ +

If you're a computer scientist, once you have Hadamard and CNOT, all that's left is to get those pesky complex phases sorted as an afterthought. These phases are extremely important, of course. But just the way we talk about relative phases reveals a discomfort with the idea. Even describing the standard basis as the 'bit' basis, for storing information, puts a strong emphasis that whatever 'phase' is, it's not the usual way that you would consider storing information. Phases of all sorts are something to be dealt with after the 'real' business of dealing with magnitudes of amplitudes; after confronting the fact that one can store information in more than one basis. We barely talk at all about even purely imaginary relative phases if we can help it.

+ +

One can cope with relative phases pretty easily using diagonal operators. These have the advantage of being sparse (with respect to the standard basis...) and of only affecting the relative phase, which is after all the detail which we're trying to address at this stage. Hence $T \propto \sqrt[4]Z$. And once you've done that, why do more? Sure, we could as easily consider arbitrary $X$ rotations (and because of Euler decomposition, we do play some lip-service to these operations) and arbitrary $Y$ rotations, which would motivate $\sqrt[4]X$ and $\sqrt[4]Y$. But these don't actually add anything of interest for the computer scientist, who considers the job done already.

+ +
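
For instance, with $T = \mathrm{diag}(1, e^{i\pi/4})$ we have $T^2 = S$ and $T^4 = Z$, so that $T$ really is a fourth root of $Z$; a one-line numpy check (using the standard matrix conventions):

```python
import numpy as np

T = np.diag([1, np.exp(1j * np.pi / 4)])
S = np.diag([1, 1j])
Z = np.diag([1, -1])

T2 = T @ T    # equals the phase gate S
T4 = T2 @ T2  # equals Z: T is a fourth root of Z
```

+ +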

And not a moment too soon — because computer scientists don't really care about precisely what the primitive operations being used are, as soon as they can justify moving on to something higher-level.

+ +

Summary

+ +

I don't think there is likely to be any very interesting physically-motivated reason why we use a particular gate-set. But it is certainly possible to explore the psychologically-motivated reasons why we do. The above is a speculation in this direction, informed by long experience.

+",124,,124,,04-04-2018 17:39,04-04-2018 17:39,,,,3,,,,CC BY-SA 3.0 +1588,1,1589,,04-04-2018 10:49,,17,1584,"

Consider the unitary circuit model of quantum computation. If we need to generate entanglement between the input qubits with the circuit, it must have multi-qubit gates such as CNOT, as entanglement cannot increase under local operations and classical communication. Consequently, we can say that quantum computing with multi-qubit gates is inherently different from quantum computing with just local gates. But what about measurements?

+ +

Does including simultaneous measurements of multiple qubits make a difference in quantum computing or can we perhaps emulate this with local measurements with some overhead? EDIT: by ""emulate with local measurements"", I mean have the same effect with local measurements + any unitary gates.

+ +

Please notice that I am not merely asking how measuring one qubit changes the others, which has already been asked and answered, or if such measurements are possible. I am interested to know whether including such measurements could bring something new to the table.

+",144,,26,,12/23/2018 12:17,12/23/2018 12:17,Do multi-qubit measurements make a difference in quantum circuits?,,2,0,,,,CC BY-SA 3.0 +1589,2,,1588,04-04-2018 11:09,,13,,"

Entangling measurements are powerful. In fact, they are so powerful that universal quantum computation can be performed by sequences of entangling measurements only (i.e., without extra need for unitary gates or special input state preparations):

+ +
    +
  1. Nielsen showed that universal quantum computation is possible given a quantum memory and the ability to perform projective measurements on up to 4-qubits [quant-ph/0310189].

  2. +
  3. The above result was extended to 3-qubit measurements by Fenner and Zhang [quant-ph/0111077].

  4. +
  5. Later on, Leung gave an improved method that requires only 2-qubit measurements, which are also both sufficient and necessary [quant-ph/0111122].

  6. +
+ +

The idea there is to combine sequences of measurements to drive the computation. This is quite similar to Raussendorf-Briegel's model of measurement based quantum computation (MBQC) (aka the one way quantum computer), but in standard MBQC you also restrict your measurements to be non-entangling (i.e., they must act on single qubits) and you start with an entangled resource state as input (canonically, a cluster state [Phys. Rev. Lett. 86, 5188, quant-ph/0301052]). In the afore-mentioned protocols by Nielsen, Fenner-Zhang, Leung you are allowed to do entangling measurements but you do not rely on any other additional resource (i.e., no gates, no special inputs such as cluster states).

+ +

In short, the difference between entangling and local measurements is analogous to the difference between entangling and local gates.

+ +
+ +

PS: As discussed in other answers you can simulate entangling measurements with entangling gates (such as CNOTs) and local measurements. Vice versa, the above results show you can trade entangling gates for entangling measurements. If all of your resources are local you cannot use them to simulate entangling ones, though. In particular, you cannot simulate entangling measurements with local gates and inputs.
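
+ +

For example, a joint measurement in the Bell basis can be simulated by a CNOT, then a Hadamard on the first qubit, then two single-qubit standard-basis measurements; a numpy sketch of the underlying identity (the ordering and signs of the Bell states below are my own choice):

```python
import numpy as np

I2 = np.eye(2)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

# Circuit applied before the two local standard-basis measurements:
U = np.kron(H, I2) @ CNOT

# The four Bell states as columns: Phi+, Psi+, Psi-, Phi-.
bell = np.array([[1, 0, 0, 1],
                 [0, 1, 1, 0],
                 [0, 1, -1, 0],
                 [1, 0, 0, -1]]).T / np.sqrt(2)

mapped = U @ bell  # each column becomes a distinct standard-basis state
```

Each Bell state is mapped to a distinct standard-basis state, so the pair of local measurement outcomes identifies which Bell state was present.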

+",1779,,1779,,04-05-2018 06:24,04-05-2018 06:24,,,,3,,,,CC BY-SA 3.0 +1591,2,,1441,04-04-2018 13:30,,15,,"

Abel Molina, Thomas Vidick, and I proved that the correct answer is $c=3/4$ in this paper:

+ +
+

A. Molina, T. Vidick, and J. Watrous. Optimal counterfeiting attacks + and generalizations for Wiesner's quantum money. Proceedings of the + 7th Conference on Theory of Quantum Computation, Communication, and + Cryptography, volume 7582 of Lecture Notes in Computer Science, pages + 45–64, 2013. (See also arXiv: 1202.4010.)

+
+ +

This assumes the counterfeiter uses what we call a ""simple counterfeiting attack,"" which means a one-shot attempt to transform one copy of a money state into two. (I interpret your question to be about such attacks.)

+ +

The attack of Brodutch, Nagaj, Sattath, and Unruh that @Rob referred to (and which is a fantastic result in my opinion) requires the counterfeiter to interact repeatedly with the bank and assumes the bank will provide the counterfeiter with the same money state after each verification.

+ +

The paper describes the optimal channel, which is not an entanglement breaking (i.e., measure and prepare) channel. It's an example of a cloner, and explicitly it looks like this: +$$ +\Phi(\rho) = A_0 \rho A_0^{\dagger} + A_1 \rho A_1^{\dagger} +$$ +where +$$ +A_0 = \frac{1}{\sqrt{12}} +\begin{pmatrix} +3 & 0\\ +0 & 1\\ +0 & 1\\ +1 & 0 +\end{pmatrix} +\quad\text{and}\quad +A_1 = \frac{1}{\sqrt{12}} +\begin{pmatrix} +0 & 1\\ +1 & 0\\ +1 & 0\\ +0 & 3 +\end{pmatrix}. +$$

+ +
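
As a sanity check (a numpy sketch; the money states here are $|0\rangle, |1\rangle, |\pm\rangle$, and the variable names are my own), one can verify that these Kraus operators form a trace-preserving channel and achieve fidelity $3/4$ on each money state:

```python
import numpy as np

A0 = np.array([[3, 0], [0, 1], [0, 1], [1, 0]]) / np.sqrt(12)
A1 = np.array([[0, 1], [1, 0], [1, 0], [0, 3]]) / np.sqrt(12)

# Trace preservation: the Kraus operators satisfy A0^† A0 + A1^† A1 = I.
completeness = A0.conj().T @ A0 + A1.conj().T @ A1

# Fidelity <psi,psi| Phi(|psi><psi|) |psi,psi> on each money state.
money = [np.array([1.0, 0.0]), np.array([0.0, 1.0]),
         np.array([1.0, 1.0]) / np.sqrt(2), np.array([1.0, -1.0]) / np.sqrt(2)]
fidelities = []
for psi in money:
    target = np.kron(psi, psi)
    out = sum(A @ np.outer(psi, psi.conj()) @ A.conj().T for A in (A0, A1))
    fidelities.append(np.real(target.conj() @ out @ target))
```

+ +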

For different sets of money states and figures of merit, you may end up with different optimal values and cloners. For example, if the money states also include $| 0\rangle \pm i |1\rangle$, then the Bužek-Hillery cloner is optimal and the correct value of $c$ drops to 2/3.

+",1764,,,,,04-04-2018 13:30,,,,0,,,,CC BY-SA 3.0 +1592,2,,1508,04-04-2018 14:43,,7,,"

If we have a QTM with state set $Q$ and a tape alphabet $\Sigma = \{0,1\}$, we cannot say that the qubit being scanned by the tape head ""holds"" a vector $a|0\rangle + b|1\rangle$ or that the (internal) state is a vector with basis states corresponding to $Q$. The qubits on the tape can be correlated with one another and with the internal state, as well as with the tape head position.

+ +

As an analogy, we would not describe a probabilistic Turing machine's global state by independently specifying a distribution for the internal state and for each of the tape squares. Rather, we have to describe everything together so as to properly represent correlations among the different parts of the machine. For example, the bits stored in two distant tape squares might be perfectly correlated, both 0 with probability 1/2 and both 1 with probability 1/2.

+ +

So, in the quantum case, and assuming we're talking about pure states of quantum Turing machines with unitary evolutions (as opposed to a more general model based on mixed states), the global state is represented by a vector whose entries are indexed by configurations (i.e., classical descriptions of the internal state, the location of the tape head, and the contents of every tape square) of the Turing machine. It should be noted that we generally assume that there is a special blank symbol in the tape alphabet (which could be 0 if we want our tape squares to store qubits) and that we start computations with at most finitely many squares being non-blank, so that the set of all reachable configurations is countable. This means that the state will be represented by a unit vector in a separable Hilbert space.

+ +

Finally, and perhaps this is the actual answer to the question interpreted literally, the movement of the tape head is determined by the transition function, which will assign an ""amplitude"" to each possible action (new state, new symbol, and tape head movement) for every classical pair $(q,\sigma)$ representing the current state and currently scanned symbol. Nothing forces the tape head to move deterministically -- a nonzero amplitude could be assigned to two or more actions that include tape head movements to both the left and right -- so it is possible for a QTM tape head to move both left and right in superposition.

+ +

For example, you can imagine a QTM with $Q = \{0,1\}$ and $\Sigma = \{0,1\}$ (and we'll take 0 to be the blank symbol). We start in state 0 scanning a square that stores 1, and all other squares store 0. I won't explicitly write down the transition function, but will just describe the behavior in words. On each move, the contents of the scanned tape square is interpreted as a control bit for a Hadamard operation on the internal state. After the controlled-Hadamard is performed, the head moves left if the (new) state is 0 and moves right if the (new) state is 1. (In this example we never actually change the contents of the tape.) After one step, the QTM will be in an equally weighted superposition between being in state 0 with the tape head scanning square -1, and being in state 1 with the tape head scanning square +1. On all subsequent moves the controlled-Hadamard does nothing because every square aside from square 0 contains the 0 symbol. The tape head will therefore continue to move simultaneously both left and right, like a particle travelling to the left and to the right in superposition.

+ +
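
Since the tape contents never change in this example, the evolution is easy to simulate classically; a sketch tracking amplitudes over (internal state, head position) pairs (the dictionary representation is my own):

```python
import numpy as np

def step(amps):
    # One move: controlled-Hadamard on the internal state, then shift the head
    # left (state 0) or right (state 1). Tape is fixed: 1 at square 0, else blank.
    new = {}
    for (q, pos), a in amps.items():
        if pos == 0:  # scanned symbol is 1: Hadamard the internal state
            branches = [(0, a / np.sqrt(2)), (1, (a if q == 0 else -a) / np.sqrt(2))]
        else:         # scanned symbol is 0: leave the internal state alone
            branches = [(q, a)]
        for q2, a2 in branches:
            pos2 = pos - 1 if q2 == 0 else pos + 1
            new[(q2, pos2)] = new.get((q2, pos2), 0.0) + a2
    return new

state = {(0, 0): 1.0}  # start in internal state 0, head over square 0
for _ in range(5):
    state = step(state)
# The head is now at positions -5 and +5 in equal superposition.
```

+ +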

If you wanted to, you could of course define a variant of the quantum Turing machine model for which the tape head location and movement is deterministic, and this would not ruin the computational universality of the model, but the ""classic"" definition of quantum Turing machines does not impose this restriction.

+",1764,,,,,04-04-2018 14:43,,,,4,,,,CC BY-SA 3.0 +1593,2,,1328,04-04-2018 18:26,,4,,"

Your question asks two questions that are less-related than you might hope.

+ +

First, how do we increase the probability of down-conversion occuring?

+ +

This is fundamentally a question about material properties: the chance per unit length of down-conversion occurring is proportional to $\chi^{(2)}$; if our material of choice doesn't have good phase-matching conditions then we can ""cheat"" and use quasiphasematching at a hit of ~$2/\pi$. As it happens, PPLN is just about as good as you're going to do, at least to within an order of magnitude. A very long device, say with a waveguide, lets you get more unit lengths; things still scale linearly, so to get much higher conversion you'd need a much longer device, which brings its own problems (such as absorption).

+ +

(This is setting aside other options, such as spontaneous four-wave mixing and true single photon sources.)

+ +

Second, you ask ""...and therefore entanglement between photons?"".

+ +

This does not immediately follow from the first problem (increasing conversion from pump to daughter photons). Depending on what kind of entanglement one is interested in, it's not at all the case that making more photons per (pulse/unit time) increases the entanglement of those photons. Generally speaking, SPDC sources for (discrete-variable) optical quantum computing are limited by the multi-fold emission rate, not by the pump power; this literally cannot be avoided by an SPDC scheme. If you're looking at achievable squeezing for CV experiments that's another kettle of fish that I'm not an expert in; my understanding is that the limits in total squeezing at present are not due to limited $\chi^{(2)}$, but other noise. (I may be mistaken on this count.)

+ +

As it turns out, this leads to people trying to make unentangled photon pairs and heralding one of them to try and simulate a ""true"" single photon source; a multiplexing scheme using space, delay lines, or a memory can then be used to put your known-good photon in the mode of interest. Once you have a device that can spit out a photon in a particular mode in a pure state, you're good to go.

+ +

If, on the other hand, you're interested in large amounts of time-frequency entanglement, that too is not set by the conversion efficiency, but by the phase-matching conditions in the crystal: the opposite problem from the papers posted by glS.

+ +

To conclude, while there are reasons to want to increase conversion efficiency (for one thing it seems awfully wasteful to have conversion efficiencies as low as we do), it's not immediately obvious that doing so is a rate-limiting step in most experiments at present. It's unfortunately also not at all clear how to do so in a more than marginal way; leading to people working on related problems such as increasing heralding efficiency and purity of the single-photon states instead.

+",1807,,,,,04-04-2018 18:26,,,,0,,,,CC BY-SA 3.0 +1594,2,,1584,04-04-2018 18:53,,24,,"

On computational helpfulness in general

+ +

Without perhaps realising it, you are asking a version of one of the most difficult questions you can possibly ask about theoretical computer science. You can ask the same question about classical computers, only instead of asking whether adding 'quantumness' is helpful, you can ask:

+ +
    +
  • Is there a concise statement about where randomised algorithms can help?

    + +

    It's possible to say something very vague here — if you think that solutions are plentiful (or that the number of solutions to some sub-problem are plentiful) but that it might be difficult to systematically construct one, then it's helpful to be able to make choices at random in order to get past the problem of systematic construction. But beware, sometimes the reason why you know that there are plentiful solutions to a sub-problem is because there is a proof using the probabilistic method. When this is the case, you know that the number of solutions is plentiful by reduction to what is in effect a helpful randomised algorithm!

    + +

    Unless you have another way of justifying the fact that the number of solutions is plentiful for those cases, there is no simple description of when a randomised algorithm can help. And if you have high enough demands of 'helpfulness' (a super-polynomial advantage), then what you are asking is whether $\mathsf P \ne \mathsf{BPP}$, which is an unsolved problem in complexity theory.

  • +
  • Is there a concise statement about where parallelised algorithms can help?

    + +

    Here things may be slightly better. If a problem looks as though it can be broken down into many independent sub-problems, then it can be parallelised — though this is a vague, ""you'll know it when you see it"" sort of criterion. The main question is, will you know it when you see it? Would you have guessed that testing feasibility of systems of linear equations over the rationals is not only parallelisable, but could be solved using $O(\log^2 n)$-depth circuits [c.f. Comput. Complex. 8 (pp. 99--126), 1999]?

    + +

    One way in which people try to paint a big-picture intuition for this is to approach the question from the opposite direction, and say when it is known that a parallelised algorithm won't help. Specifically, it won't help if the problem has an inherently sequential aspect to it. But this is circular, because 'sequential' just means that the structure that you can see for the problem is one which is not parallelised.

    + +

    Again, there is no simple, comprehensive description of when a parallelised algorithm can help. And if you have high enough demands of 'helpfulness' (a poly-logarithmic upper bound on the amount of time, assuming polynomial parallelisation), then what you are asking is whether $\mathsf P \ne \mathsf{NC}$, which is again an unsolved problem in complexity theory.

  • +
+ +

The prospects for ""concise and correct descriptions of when [X] is helpful"" are not looking too great by this point. Though you might protest that we're being too strict here: on the grounds of demanding more than a polynomial advantage, we couldn't even claim that non-deterministic Turing machines were 'helpful' (which is clearly absurd). We shouldn't demand such a high bar — in the absence of techniques to efficiently solve satisfiability, we should at least accept that if we somehow could obtain a non-deterministic Turing machine, we would indeed find it very very helpful. But this is different from being able to characterise precisely which problems we would find it helpful for.

+ +

On the helpfulness of quantum computers

+ +

Taking a step back, is there anything we can say about where quantum computers are helpful?

+ +

We can say this: a quantum computer can only do something interesting if it is taking advantage of the structure of a problem, which is unavailable to a classical computer. (This is hinted at by the remarks about a ""global property"" of a problem, as you mention). But we can say more than this: problems solved by quantum computers in the unitary circuit model will instantiate some features of that problem as unitary operators. The features of the problem which are unavailable to classical computers will be all those which do not have a (provably) statistically significant relationship to the standard basis.

+ +
    +
  • In the case of Shor's algorithm, this property is the eigenvalues of a permutation operator which is defined in terms of multiplication over a ring.
  • +
  • In the case of Grover's algorithm, this property is whether the reflection about the set of marked states, commutes with the reflection about the uniform superposition — this determines whether the Grover iterator has any eigenvalues which are not $\pm 1$.
  • +
+ +

It is not especially surprising to see that in both cases, the information relates to eigenvalues and eigenvectors. This is an excellent example of a property of an operator which need not have any meaningful relationship to the standard basis. But there is no particular reason why the information has to be an eigenvalue. All that is needed is to be able to describe a unitary operator, encoding some relevant feature of the problem which is not obvious from inspection of the standard basis, but is accessible in some other easily described way.

+ +

In the end, all this says is that a quantum computer is useful when you can find a quantum algorithm to solve a problem. But at least it's a broad outline of a strategy for finding quantum algorithms, which is no worse than the broad outlines of strategies I've described above for randomised or parallelised algorithms.

+ +

Remarks on when a quantum computer is 'helpful'

+ +

As other people have noted here, ""where quantum computing can help"" depends on what you mean by 'help'.

+ +
    +
  • Shor's algorithm is often trotted out in such discussions, and once in a while people will point out that we don't know that factorisation isn't solvable in polynomial-time. So do we actually know that ""quantum computing would be helpful for factorising numbers""?

    + +

    Aside from the difficulty in realising quantum computers, I think here the reasonable answer is 'yes'; not because we know that you can't factorise efficiently using conventional computers, but because we don't know how you would do it using conventional computers. If quantum computers help you to do something that you have no better approach to doing, it seems to me that this is 'helping'.

  • +
  • You mention Grover's algorithm, which yields a well-known square-root speedup over brute-force search. This is only a polynomial speedup, and a speedup over a naive classical algorithm — we have better classical algorithms than brute-force search, even for NP-complete problems. For instance, in the case of 3-SAT instances with a single satisfying assignment, the PPSZ algorithm has a runtime of $O(2^{0.386\,n})$, which outperforms Grover's original algorithm. So is Grover's algorithm 'helpful'?

    + +

    Perhaps Grover's algorithm as such is not especially helpful. However, it may be helpful if you use it to elaborate more clever classical strategies beyond brute-force search: using amplitude amplification, the natural generalisation of Grover's algorithm to more general settings, we can improve on the performance of many non-trivial algorithms for SAT (see e.g. [ACM SIGACT News 36 (pp.103--108), 2005 — free PDF link]; hat tip to Martin Schwarz who pointed me to this reference in the comments).

    + +

    As with Grover's algorithm, amplitude amplification only yields polynomial speed-ups: but speaking practically, even a polynomial speedup may be interesting if it isn't washed out by the overhead associated with protecting quantum information from noise.

  • +
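
+ +

For concreteness, a toy numpy sketch of the bare Grover iteration (not of amplitude amplification layered on a cleverer classical algorithm; the instance size and marked index are arbitrary placeholders):

```python
import numpy as np

n = 10
N = 2 ** n           # 1024 items, exactly one of them marked
marked = 123         # arbitrary marked index (placeholder)

s = np.full(N, 1 / np.sqrt(N))  # uniform superposition
psi = s.copy()

iterations = int(np.floor(np.pi / 4 * np.sqrt(N)))  # ~25, versus ~N/2 classical queries
for _ in range(iterations):
    psi[marked] *= -1                   # oracle: reflect about the marked state
    psi = 2 * np.dot(s, psi) * s - psi  # diffusion: reflect about |s>

success = psi[marked] ** 2  # probability of measuring the marked item
```

After roughly $\frac{\pi}{4}\sqrt N$ iterations the marked item is found with near-certainty.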
+",124,,124,,04-05-2018 08:43,04-05-2018 08:43,,,,6,,,,CC BY-SA 3.0 +1595,1,1596,,04-04-2018 19:18,,28,1774,"

Many people are interested in the subject of quantum annealing, as an application of quantum technologies, not least because of D-WAVE's work on the subject. The Wikipedia article on quantum annealing implies that if one performs the 'annealing' slowly enough, one realises (a specific form of) adiabatic quantum computation. Quantum annealing seems to differ mostly in that it does not seem to presuppose doing evolution in the adiabatic regime — it allows for the possibility of diabatic transitions.

+ +

Still, there seems to be more intuition at play with quantum annealing than just ""adiabatic computation done hastily"". It seems that one specifically chooses an initial Hamiltonian consisting of a transverse field, and that this is specifically meant to allow for tunnelling effects in the energy landscape (as described in the standard basis, one presumes). This is said to be analogous to (possibly even to formally generalise?) the temperature in classical simulated annealing. This raises the question of whether quantum annealing pre-supposes features such as specifically an initial transverse field, linear interpolation between Hamiltonians, and so forth; and whether these conditions may be fixed in order to be able to make precise comparisons with classical annealing.

+ +
    +
  • Is there a more-or-less formal notion of what quantum annealing consists of, which would allow one to point to something and say ""this is quantum annealing"" or ""this is not precisely quantum annealing because [it involves some additional feature or lacks some essential feature]""?
  • +
  • Alternatively: can quantum annealing be described in reference to some canonical +framework — possibly in reference to one of the originating +papers, such as Phys. Rev. E 58 (5355), +1998 +[freely available PDF +here] +— together with some typical variations which are accepted as +also being examples of quantum annealing?

  • +
  • Is there at least a description which is precise enough that we can say that quantum annealing properly generalises classical simulated annealing, not by ""working better in practise"", or ""working better under conditions X, Y, and Z"", but in the specific sense in that any classical simulated annealing procedure can be efficiently simulated or provably surpassed by a noiseless quantum annealing procedure (just as unitary circuits can simulate randomised algorithms)?

  • +
+",124,,124,,04-05-2018 09:35,5/13/2019 21:34,What precisely is quantum annealing?,,1,0,,,,CC BY-SA 3.0 +1596,2,,1595,04-04-2018 20:20,,11,,"

I'll do my best to address your three points.

+

My previous answer to an earlier question about the difference between quantum annealing and adiabatic quantum computation can be found here. I'm in agreement with Lidar that quantum annealing can't be defined without considerations of algorithms and hardware.

+

That being said, the canonical framework for quantum annealing and the inspiration for the D-Wave is the work by Farhi et al. (quant-ph/0001106).

+

Finally, I'm not sure one can generalize classical simulated annealing using quantum annealing, again without discussing hardware. Here's a thorough comparison: 1304.4595.

+
+

Addressing comments:

+
+

(1) I saw your previous answer, but don't get the point you make here. It's fine for QA not to be universal, and not to have a provable performance to solve a problem, and for these to be motivated by hardware constraints; but surely quantum annealing is something independent of specific hardware or instances, or else it doesn't make sense to give it a name.

+

(2) Your linking of the AQC paper, together with the excerpt by Vinci and Lidar, strongly suggests that QA is just adiabatic-ish evolution in the not-necessarily-adiabatic regime. Is that essentially correct? Is this true regardless of what the initial and final Hamiltonians are, or what path you trace through Hamiltonian-space, or the parameterisation with respect to time? If there are any extra constraints beyond "possibly somewhat rushed adiabatic-ish computation", what are those constraints, and why are they considered important to the model?

+
+

(1+2) Similar to AQC, QA reduces the transverse magnetic field of a Hamiltonian; however, the process is no longer adiabatic, and it depends on the qubits and noise levels of the machine. The initial Hamiltonians are called gauges in D-Wave's vernacular and can be simple or complicated as long as you know the ground state. As for the 'parameterization with respect to time', I think you mean the annealing schedule, and as stated above this is restricted by hardware constraints.

+
+

(3) I also don't see why hardware is necessary to describe the comparison with classical simulated annealing. Feel free to assume that you have perfect hardware with arbitrary connectivity: define quantum annealing as you imagine a mathematician might define annealing, free of niggling details; and consider particular realisations of quantum annealing as attempts to approximate the conditions of that pure model, but involving the compromises an engineer is forced to make on account of having to deal with the real world. Is it not possible to make a comparison?

+
+
+

The only relation classical simulated annealing has with quantum annealing is that they both have annealing in the name. The Hamiltonians and the process are fundamentally different.

+

$$H_{\rm{classical}} = \sum_{i,j} J_{ij} s_i s_j$$

+

$$H_{\rm{quantum}} = A(t) \sum_{i,j} J_{ij} \sigma_i^z \sigma_j^z + B(t) \sum_i \sigma_i^x$$

+
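To make the contrast concrete, here is a small numerical sketch of the two Hamiltonians above. The 3-qubit instance, the couplings, and the linear schedule $A(s)=s$, $B(s)=1-s$ are illustrative assumptions, not taken from the discussion; the point is that the diagonal of the problem term reproduces the classical Ising energies, while the transverse field opens a gap at the start of the anneal.

```python
import numpy as np

# Minimal sketch of the two Hamiltonians above. The 3-qubit instance,
# couplings and linear schedule A(s) = s, B(s) = 1 - s are illustrative
# assumptions, not taken from the answer.
I2 = np.eye(2)
Z = np.diag([1.0, -1.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])

def op(single, site, n):
    """Embed a single-qubit operator at `site` in an n-qubit register."""
    out = np.array([[1.0]])
    for k in range(n):
        out = np.kron(out, single if k == site else I2)
    return out

n = 3
J = {(0, 1): 1.0, (1, 2): -1.0, (0, 2): 0.5}      # couplings J_ij
H_P = sum(Jij * op(Z, i, n) @ op(Z, j, n) for (i, j), Jij in J.items())
H_D = sum(op(X, i, n) for i in range(n))

# Spectral gap along the anneal H(s) = s*H_P + (1-s)*H_D.
gaps = [np.diff(np.linalg.eigvalsh(s * H_P + (1 - s) * H_D)[:2])[0]
        for s in np.linspace(0, 1, 21)]

# The diagonal of H_P reproduces the classical Ising energies, so its
# ground-state energy equals the brute-force classical minimum.
def classical_energy(b):
    s = [1 - 2 * ((b >> i) & 1) for i in range(n)]   # bit 0/1 -> spin +1/-1
    return sum(Jij * s[i] * s[j] for (i, j), Jij in J.items())

E_min = min(classical_energy(b) for b in range(2 ** n))
print(E_min, np.linalg.eigvalsh(H_P)[0], gaps[0])
```

Note that the gap closes at $s=1$ for this instance because the ZZ-only Hamiltonian has a global spin-flip degeneracy; the minimum gap along the schedule is what controls how slowly an adiabatic sweep must go.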

However, if you would like to compare simulated quantum annealing with quantum annealing, Troyer's group at ETH are the pros when it comes to simulated quantum annealing. I highly recommend these slides largely based on the Boxio et al. paper I linked above.

+

Performance of simulated annealing, simulated quantum annealing and D-Wave on hard spin glass instances — Troyer (PDF)

+
+

(4) Your remark about the initial Hamiltonian is useful and suggests something very general lurking in the background. Perhaps arbitrary (but efficiently computable, monotone, and first differentiable) schedules are also acceptable in principle, with limitations only arising from architectural constraints, and of course also the aim to obtain a useful outcome?

+
+

I'm not sure what you're asking. Are arbitrary schedules useful? I'm not familiar with work on arbitrary annealing schedules. In principle, the field should go from high to low, slow enough to avoid a Landau-Zener transition and fast enough to maintain the quantum effects of qubits.

+

Related; The latest iteration of the D-Wave can anneal individual qubits at different rates but I'm not aware of any D-Wave unaffiliated studies where this has been implemented.

+

DWave — Boosting integer factoring performance via quantum annealing offsets (PDF)

+
+

(5) Perhaps there is less of a difference between the Hamiltonians in QA and CSA than you suggest. $H_{cl}$ is clearly obtained from $H_{qm}$ for $A(t)=1,B(t)=0$ if you impose a restriction to standard basis states (which may be benign if $H_{qm}$ is non-degenerate and diagonal). There's clearly a difference in 'transitions', where QA seems to rely on suggestive intuitions of tunnelling/quasiadiabaticity, but perhaps this can be (or already has been?) made precise by a theoretical comparison of QA to a quantum walk. Is there no work in this direction?

+
+

$A(t)=1,B(t)=0$ With this schedule you're no longer annealing anything. The machine is just sitting there at a finite temperature so the only transitions you'll get are thermal ones. This can be slightly useful as shown by Nishimura et al. The following publication talks about the uses of a non-vanishing transverse field.

+

arXiv:1605.03303

+

arXiv:1708.00236

+

Regarding the relation of quantum annealing with quantum walks. It's possible to treat quantum annealing in this way as shown by Chancellor.

+

arXiv:1606.06800

+
+

(6) One respect in which I suppose the hardware may play an important role --- but which you have not explicitly mentioned yet --- is the role of dissipation to a bath, which I now vaguely remember being relevant to DWAVE. Quoting from Boixo et al.: "Unlike adiabatic quantum computing [...] quantum annealing is a positive temperature method involving an open quantum system coupled to a thermal bath." Clearly, what bath coupling one expects in a given system is hardware dependent; but is there no notion of what bath couplings are reasonable to consider for hypothetical annealers?

+
+
+

I don't know enough about the hardware aspects to answer this, but if I had to guess, the lower the temperature, the better, to avoid all the noise-related problems.

+
+
+

You say "In principle the field should go from high to low, slow enough to avoid a Landau-Zener transition and fast enough to maintain the quantum effects of qubits." This is the helpful thing to do, but you usually don't know just how slow that can or should be, do you?

+
+

This would be the coherence time of the qubits. The D-Wave annealing schedules are on the order of microseconds with T2 for superconducting qubits being around 100 microseconds. If I had to give a definitive definition of annealing schedule it would be 'an evolution of the transverse field within a length of time less than the decoherence time of the qubit implementation.' This allows for different starting strengths, pauses, and readouts of field strengths. It need not be monotonic.

+
+
+

I thought maybe dissipation to a bath was sometimes considered helpful to how quantum annealers work, when operating in the non-adiabatic regime (as it often will be when working on NP-hard problems, because we're interested in obtaining answers to problems despite the eigenvalue gap possibly being very small). Is dissipation not potentially helpful then?

+
+

I consulted with S. Mandra, and he pointed me to a few papers by P. Love and M. Amin, which show that certain baths can speed up quantum annealing and that thermalization can help find the ground state faster.

+

arXiv:cond-mat/0609332

+
+
+

I think that maybe if we can clear up the confusion about the annealing schedules, and whether or not the transition has to be along a linear interpolation between two Hamiltonians (as opposed to a more complicated trajectory), ...

+
+

$A(t)$ and $B(t)$ don't necessarily have to be linear or even monotonic. In a recent presentation D-Wave showed the advantages of pausing the annealing schedule and backwards anneals.

+

DWave — Future Hardware Directions of Quantum Annealing (PDF)

+
+

Feel free to condense these responses however you'd like. Thanks.

+",54,,-1,,6/18/2020 8:31,5/13/2019 21:34,,,,23,,,,CC BY-SA 4.0 +1598,1,1617,,04-05-2018 01:34,,11,303,"

As the name already suggests, this question is a follow-up to this other one. I was delighted with the quality of the answers, but I felt it would be immensely interesting to add insights regarding optimization and approximation techniques, which might have fallen off-topic there; hence this question.

+ +

From Blue's answer:

+ +
+

the rule of thumb in complexity theory is that if a quantum computer ""can help"" in terms of solving in polynomial time (with an error bound) iff the class of problem it can solve lies in BQP but not in P or BPP

+
+ +

How does this apply to approximation classes? Is there any specific topological, numerical, etc property of quantum computing that can be leveraged?

+ +
+ +

As an example of what I could be asking (but definitely not restricted to that!), take the Christofides algorithm: it exploits specific geometrical properties of the graph that it optimizes on (symmetry, triangle inequality): the salesman travels in a feasible world. But salesmen also have huge mass, and we can know their position and momentum at the same time with great precision. Maybe a quantum model could work as well for other kinds of metrics with more relaxed restrictions, like the K-L divergence? In that case solving it would still be NP-complete, but the optimization would apply to a broader topology. This example is maybe a long shot, but I hope you get what I mean. I don't really know if it makes sense at all, but the answer could also address it in that case :)

+ +
+ +

RELATED:

+ + +",1346,,26,,12/14/2018 6:09,12/14/2018 6:09,Is there any general statement about what kinds of problems can be approximated more efficiently using a quantum computer?,,1,0,,,,CC BY-SA 3.0 +1599,2,,1588,04-05-2018 05:31,,5,,"

While multi-qubit measurements can be incredibly powerful, as already described elsewhere, they do not give you anything new compared to unitary operations and local measurements. Think of a projective measurement, for example, with projectors $P_m$. If you write down the observable $O=\sum_m m\,P_m$, then there will be a unitary $U$ that diagonalises $O$. So, measuring $O$ is equivalent to implementing the unitary $U$ with a normal quantum circuit (including multi-qubit gates), and then performing local measurements in the standard basis.

+ +
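The diagonalisation argument above is easy to check numerically. In this sketch the observable $O = X\otimes X$ and the random test state are arbitrary illustrative choices: applying the diagonalising unitary and then reading out in the computational basis reproduces the statistics of measuring $O$ directly.

```python
import numpy as np

# Sketch of the diagonalisation argument above. The observable O = X (x) X
# and the random test state are arbitrary illustrative choices.
X = np.array([[0.0, 1.0], [1.0, 0.0]])
O = np.kron(X, X)

evals, V = np.linalg.eigh(O)   # O = V @ diag(evals) @ V^dagger
U = V.conj().T                 # rotates each eigenvector onto a basis state

rng = np.random.default_rng(0)
psi = rng.normal(size=4) + 1j * rng.normal(size=4)
psi /= np.linalg.norm(psi)

# <psi|O|psi> computed directly ...
exp_direct = np.vdot(psi, O @ psi).real
# ... equals the average outcome of a computational-basis measurement
# after applying U, weighting basis state k by the eigenvalue evals[k].
exp_rotated = float(evals @ (np.abs(U @ psi) ** 2))

print(exp_direct, exp_rotated)  # the two numbers agree
```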

Alternatively, this gives you some insight about multi-qubit measurements. Any unitary circuit followed by projective measurements could be wrapped up as a single multi-qubit measurement by inverting the above process.

+ +

A similar construction can be applied to more general measurements, but you have to extend the unitary operation to include some ancilla qubits. This is sometimes referred to as “the church of the larger Hilbert space”. There's a proof that unitaries + projective measurements are equivalent to generalised measurements in section 2.2.8 of Nielsen & Chuang.

+",1837,,1837,,04-05-2018 06:23,04-05-2018 06:23,,,,4,,,,CC BY-SA 3.0 +1600,2,,1531,04-05-2018 06:39,,5,,"

There is the concept of controllability of a quantum system, i.e. does the given set of controls permit you to create any state or unitary? Usually this is computed by looking at the Lie Algebra of the system, and can be quite messy; you need to take the individual Hamiltonian terms that you can control, and calculate all their commutators to arbitrary orders. If you can take linear combinations of those and make any arbitrary Hamiltonian, then your full Hilbert space is controllable; you can make any unitary you want, and any quantum state is said to be reachable from any other. See Complete controllability of quantum systems (PRA 2001) for an example.

+ +
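As a toy illustration of that Lie-algebra computation (the single-qubit system with drift $Z$ and control $X$ is an assumed example, not one from the cited paper): keep taking commutators until the generated algebra stops growing; reaching dimension 3 means all of $\mathfrak{su}(2)$, i.e. the qubit is fully controllable.

```python
import numpy as np
from itertools import combinations

# Toy Lie-algebra rank test; the single-qubit system with drift Z and
# control X is an assumed example, not one from the cited paper.
Z = np.diag([1.0 + 0j, -1.0])
X = np.array([[0, 1], [1, 0]], dtype=complex)

def dim_span(ops):
    """Real dimension of the span of a list of matrices."""
    M = np.array([A.flatten() for A in ops])
    return np.linalg.matrix_rank(np.hstack([M.real, M.imag]), tol=1e-10)

algebra = [1j * Z, 1j * X]          # generators, as anti-Hermitian matrices
while True:
    new = [A @ B - B @ A for A, B in combinations(algebra, 2)]
    if dim_span(algebra + new) == dim_span(algebra):
        break                       # algebra is closed under commutators
    algebra += new

print(dim_span(algebra))  # 3: all of su(2), so the qubit is controllable
```

As the answer notes, this rank test certifies reachability but says nothing about how long the required control pulses are.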

One important point to emphasise, however, is that this tells you nothing about efficiency, i.e. how long it takes you to reach a given state (as a function of the system size). There is an explicit construction that you can make based on the above decomposition in terms of commutators, but the time required scales exponentially in the order of the commutator required. Numerical techniques of control theory are methods that attempt to find the required control fields (as a function of time) in a more efficient manner, but (to my knowledge) rarely give you any guarantees. So, if you have fixed $\Theta$ and bounded $c_k(t)$, the controllability concept may not be sufficient.

+",1837,,1847,,4/23/2018 11:10,4/23/2018 11:10,,,,0,,,,CC BY-SA 3.0 +1602,1,1673,,04-05-2018 07:54,,3,482,"

In Classical Simulation of Quantum Error Correction in a Fibonacci Anyon Code, the authors state on page 2 in section I. Background, A. Topological model:

+
+

We consider a system supporting nonabelian Fibonacci anyon excitations, denoted by $\tau$. Two such anyons can have total charge that is either $\tau$ or $ \mathbb I$ (vacuum), or any superposition of these, and so the fusion space in this case is 2-dimensional. We can represent basis states for this space using diagrams of definite total charge for the Wilson loops, and arbitrary states as linear combinations of these diagrams: +

+
+
+

For $n$ anyons of type $\tau$, the dimension of the fusion space grows asymptotically as $\varphi^n$, where $\varphi$ = $\frac{1+\sqrt 5}{2}$ is the golden ratio.

+
+
+

Observables associated with non-intersecting loops commute, and so a basis for the space can be built from a maximal set of disjoint, nested loops.

+
+
+

Finally, in "Coherence Frame, Entanglement Conservation, and Einselection" the authors state on page 3:

+
+

Preferred basis problem. From the method of CF, we discuss the preferred basis problem (PBP), which has been studied via the einselection approach [7, 8]. We will show, yet, the method of einselection is incomplete.

+
+
+

This method is described via the Stern-Gerlach experiment, as shown in the Fig. 1 of Ref. 7. The system is represented by the spin states (up and down) along some directions. One atom is put near one channel to serve as the apparatus to interact with the spin, causing entanglement. In this measurement, the PBP means that there is no physical difference between states

+
+
+

We also demonstrated that the preferred basis problem can be resolved more naturally by the method of coherence frame than the einselection method.

+
+

Question: Are there different bases for each part of the model? Is there a preferred basis for a qudit, or does it depend upon the underlying technology used to implement the qudits? How is the basis integral to the control and measurement?

+

Note: This may be related to, but is not a duplicate of: What is meant by the term "computational basis"?, nor is one of the answers currently offered there an answer to this question.

+
+

Initially, efforts were made to address some comments, but this only succeeded in making the question longer and less clear. I've stripped it down, but the edits can be reviewed by the curious in the edit history.

+",278,,10480,,02-10-2021 06:58,02-10-2021 06:58,What is the most economical and preferred basis for the qudit?,,2,9,,,,CC BY-SA 4.0 +1604,2,,1602,04-05-2018 11:40,,4,,"

You may be confusing two uses of the word ""base"". One definition of ""base"" has to do with how many digits are used to represent a number. For example, base two uses the digits 0 and 1, and the number five is written as 101 in base two. But in quantum mechanics there is another use of the word ""base"" which has to do with basis vectors for a vector space. This is almost entirely unrelated to the ""base"" of a number system.

+ +

I see that once you start talking about qubits versus qudits there is further confusion. Perhaps you could try thinking of a qubit as a two dimensional space, and the basis within that space as giving a preferred ""direction"", or coordinate axes. Similarly, a qutrit is a three dimensional space, etc. (This is a geometric intuition that might help you get started with thinking about quantum states, it needs some more work before it is precise.)

+",263,,,,,04-05-2018 11:40,,,,5,,,,CC BY-SA 3.0 +1609,1,1610,,04-05-2018 13:04,,9,145,"

In the last decades, the field of parameterised algorithms, with fixed-parameter tractability (FPT) as its main tool, has provided new methods to analyse old algorithms and design techniques for new algorithms.

+ +

The basic idea of parameterised algorithms is that we 'pick' another parameter of our input other than the size (such as treewidth) and design algorithms that are efficient under the assumption that the chosen parameter is a (small) constant.

+ +

I wonder if the analysis and design of quantum algorithms can benefit from this approach. Has this been done, or are there good reasons why this is likely ineffective or ignored so far?

+",253,,23,,04-05-2018 16:25,04-05-2018 16:25,Can the analysis or design of quantum algorithms benefit from parameterised algorithmics?,,1,2,,,,CC BY-SA 3.0 +1610,2,,1609,04-05-2018 13:52,,5,,"

Indeed, there are already parameterised quantum algorithms. I describe just two examples, one fairly famous and one fairly recent:

+
    +
  • The HHL subroutine for producing quantum states representing solutions to systems of equations and related algorithms depend on the sparseness of the matrix, but also the condition number $\kappa$ of the matrix, in the system of equations. The condition number in particular plays a prominent role in many analyses of this problem — the results are typically $O(\mathrm{poly}(\kappa) \log(N))$ algorithms for matrices of size at most $N \times N$.

    +

    While it doesn't seem as though there is much scope for interesting development on the dependency on $\kappa$, bear in mind that it is probably more informative to consider the logarithm of the condition number as the more natural feature of the input. (It is easy to efficiently express matrices with exponentially large condition number.) If we write $\lambda = \log(\kappa)$, suddenly that $\mathrm{poly}(\kappa) = 2^{O(\lambda)}$ dependency seems more important. And there are other ways in which $\lambda$ seems the relevant thing to consider from the standpoint of fixed-parameter complexity — for instance, if a quantum algorithm could be found with only $\mathrm{poly}(\lambda)$ dependency, it would imply that $\mathsf{BQP = PSPACE}$. If we consider this to be unlikely in the same way that we consider $\mathsf{P = NP}$ to be unlikely, it seems that fixed-parameter tractability in terms of $\lambda$ fits in the same spirit as fixed-parameter tractability of $\mathsf{NP}$-complete problems.

    +
  • +
  • There are also quantum algorithms for semidefinite programming, with complexity parameterised by the dimension of the input matrix, the row-sparsity of the input matrix, the number of constraints, and bounds on the size of primal and dual solutions.

    +
  • +
+
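To see why $\lambda = \log(\kappa)$ behaves as the natural parameter in the first example, here is a tiny numerical sketch. The matrix family $\mathrm{diag}(1, 2^{-n})$ is an assumed illustration: its description size grows like $n$, its condition number like $2^n$, and $\lambda = \log_2(\kappa)$ only like $n$.

```python
import numpy as np

# Tiny sketch of why lambda = log(kappa) is the natural parameter.
# The family diag(1, 2^-n) is an assumed illustration: its description
# size grows like n, kappa like 2^n, and lambda = log2(kappa) like n.
for n in (4, 8, 16):
    A = np.diag([1.0, 2.0 ** -n])
    kappa = np.linalg.cond(A)   # sigma_max / sigma_min
    lam = np.log2(kappa)
    print(n, kappa, lam)        # lam equals n exactly for this family
```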

As quantum computers will initially be much more expensive and less available than classical computers, and have a slower clock speed to boot, it makes a great deal of sense to look for parameterised quantum algorithms: ones which will prove effective for problems of interest, under conditions of interest, allowing us to pinpoint use-cases where quantum computers can be particularly effective.

+",124,,-1,,6/18/2020 8:31,04-05-2018 13:52,,,,1,,,,CC BY-SA 3.0 +1611,2,,1353,04-05-2018 14:00,,3,,"

I will attempt to address the following question only.

+ +
+

I'm asking whether the method of 'running' quantum algorithms on a 'quantum computer' 'simulated' on a classical computer would be able to outperform normal classical algorithms (preferably for problems that not obviously involve quantum simulation)

+
+ +

The closest thing to this that I am aware of are heuristic methods that employ natural computing, in particular the ones that take inspiration from quantum physics for the development of novel problem-solving techniques. These are known as quantum-inspired algorithms. Please notice that: i) I do not claim that such methods could be rigorously shown to be superior to conventional algorithms, but it seems that they can be at least competitive; ii) the algorithms may or may not actually simulate a quantum computer; the faithfulness to the original source of inspiration varies.

+ +

I will briefly outline the framework of a particular type of a quantum-inspired evolutionary algorithm (QIEA). A more complete treatment may be found in chapter 24 of the book ""Natural computing algorithms"" by Anthony Brabazon et al [1]. Concrete examples can be found for example in arXiv.

+ +

The basic ingredients of a conventional evolutionary algorithm (EA) are a population of individuals $P(t)$, an update rule for the population, and a fitness function $f$. Here, each individual in $P(t)$ represents a possible solution to some problem, and $f$ quantifies how good the solution is. After initialization, for each step $t$ one evaluates $f$ on every individual in $P(t)$, records best ones and updates $P(t)$. This is iterated until a stopping criterion is reached, and the best found individual(s) are returned. In the simplest case, the update rule could be just random variation of individuals, but it can also be more complicated and engineered to introduce selection pressure towards better values of $f$.

+ +

In a QIEA, solutions are represented by bit strings of a fixed length, say, $m$. A quantum population $Q(t)$ is used, where each quantum individual consists of $m$ qubits. At each $t$, classical population $P(t)$ is determined from $Q(t)$ by ""measuring"" the qubits. $P(t)$ is ranked by $f$ and best results are recorded. $Q(t)$ is updated by acting on each qubit with a local gate, and iteration is continued. Often for $Q(0)$, all qubits are set to balanced superposition $(1/\sqrt{2},1/\sqrt{2})^T$, making each particular solution equally likely in the beginning.

+ +
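The loop just described can be sketched in a few lines. This is a minimal illustration only: the OneMax fitness (count of 1-bits), the population size, the rotation step, and the small clipping margin are assumed choices, not taken from the book chapter cited below.

```python
import numpy as np

# Minimal QIEA sketch for OneMax (maximise the number of 1-bits).
# The fitness function, population size, rotation step and the small
# clipping margin (kept to preserve some exploration) are assumed choices.
rng = np.random.default_rng(1)
m, pop, gens, step = 8, 10, 50, 0.05 * np.pi

theta = np.full(m, np.pi / 4)   # qubit i: cos(theta_i)|0> + sin(theta_i)|1>

best_bits, best_fit = None, -1
for _ in range(gens):
    # "Measure" each quantum individual to obtain the classical population.
    samples = (rng.random((pop, m)) < np.sin(theta) ** 2).astype(int)
    for bits in samples:
        f = int(bits.sum())                 # OneMax fitness
        if f > best_fit:
            best_fit, best_bits = f, bits.copy()
    # Rotate each qubit slightly toward the best solution found so far.
    theta += step * np.where(best_bits == 1, 1.0, -1.0)
    theta = np.clip(theta, 0.1, np.pi / 2 - 0.1)

print(best_fit)  # best OneMax fitness found (the optimum is m = 8)
```

Note that a single angle vector `theta` stands in for the whole quantum individual, which is exactly the point made below: the qubits stay in a product state, so the simulation is cheap.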

As the quantum individuals remain essentially in a product state of $m$ qubits, there is no entanglement involved at any point, making QIEA not very quantum. On the other hand, we can effectively simulate the evolution of $Q(t)$ and make as many measurements as we want without needing extra qubits. The claimed advantage is over conventional EAs, based on supposedly needing fewer individuals or being better at maintaining diversity as the population evolves, as even a fixed $Q(t)$ can lead to many $P(t)$. All in all, QIEA by its design is meant to be run only as a simulation.

+ +

As a final remark, suppose that we wish to make QIEA more quantum without making it intractable. Can we? Perhaps. Consider the update rule of QIEA as a quantum circuit. It is rather boring, with a qubit register of size $m$ and a local gate acting once on each qubit. One could try to introduce some tractable quantumness to QIEA by taking the update circuit to be some Clifford quantum circuit, mentioned and outlined for example here and here. I do not know if this could offer any benefits at all, and as far as I know, this hasn't been tried.

+ +

[1] S. M. Anthony Brabazon, Michael O’Neill, Natural Computing Algorithms. Springer-Verlag Berlin +Heidelberg, 2015.

+",144,,,,,04-05-2018 14:00,,,,0,,,,CC BY-SA 3.0 +1612,1,,,04-05-2018 17:11,,7,128,"

In Empirical Algorithmics, researchers aim to understand the performance of algorithms through analyzing their empirical performance. This is quite common in machine learning and optimization. Right now, we would like to know something about the relative performance of quantum algorithms and their classical counterparts based on preliminary data from quantum computer emulators. My concern is that we might see encouraging empirical data that shows quantum computers with better scaling using simulators up to about 35 qubits, but then this advantage will disappear in the future once we have more data. What are the best practices for analyzing relative performance of classical and quantum algorithms in a robust way that gives insight about scaling? Or is this simply not yet possible?

+",1658,,1658,,04-05-2018 23:46,04-05-2018 23:46,Empirical Algorithmics for Near-Term Quantum Computing,,0,5,,,,CC BY-SA 3.0 +1613,1,1619,,04-05-2018 17:36,,24,6154,"

In this answer, Grover's algorithm is explained. The explanation indicates that the algorithm relies heavily on the Grover Diffusion Operator, but does not give details on the inner workings of this operator.

+ +

Briefly, the Grover Diffusion Operator creates an 'inversion about the mean' to iteratively make the tiny differences in earlier steps large enough to be measurable.

+ +

The questions are now:

+ +
    +
  1. How does the Grover diffusion operator achieve this?
  2. +
  3. Why is the resulting $O(\sqrt{n})$ in total time to search an unordered database optimal?
  4. +
+",253,,253,,04-06-2018 15:27,04-06-2018 15:27,How does the Grover diffusion operator work and why is it optimal?,,2,1,,,,CC BY-SA 3.0 +1614,2,,1613,04-05-2018 18:56,,7,,"

One way of defining the diffusion operator is1 $D = -H^{\otimes n}U_0H^{\otimes n}$, where $U_0$ is the phase oracle $$U_0\left|0^{\otimes n}\right> = -\left|0^{\otimes n}\right>,\,U_0\left|x\right> = \left|x\right>\,\text{for} \left|x\right>\neq\left|0^{\otimes n}\right>.$$

+ +

This shows that $U_0$ can also be written as $U_0 = I-2\left|0^{\otimes n}\rangle\langle0^{\otimes n}\right|$, giving $$D= 2\left|+\rangle\langle+\right| - I,$$ where $\left|+\right> = 2^{-n/2}\left(\left|0\right> + \left|1\right>\right)^{\otimes n}$.

+ +

Writing a state $\left|\psi\right> = \alpha\left|+\right> + \beta\left|+^\perp\right>$ where $\left|+^\perp\right>$ is orthogonal to $\left|+\right>$ (i.e. $\left<+^\perp\mid+\right> =0)$ gives that $D\left|\psi\right> = \alpha\left|+\right> - \beta\left|+^\perp\right>$.

+ +

This gives2 that the diffusion operator is a reflection about $\left|+\right>$.

+ +
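This reflection is exactly the 'inversion about the mean' mentioned in the question, which a few lines of numerics make explicit ($n = 3$ is an arbitrary choice here):

```python
import numpy as np

# Numerical check that D = 2|+><+| - I inverts every amplitude about the
# mean amplitude (n = 3 is an arbitrary choice).
n = 3
N = 2 ** n
plus = np.full(N, 1 / np.sqrt(N))
D = 2 * np.outer(plus, plus) - np.eye(N)

v = np.random.default_rng(0).normal(size=N)
print(np.allclose(D @ v, 2 * v.mean() - v))  # True: inversion about the mean
print(np.allclose(D @ plus, plus))           # True: |+> is left fixed
```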

As the other part of Grover's algorithm is also a reflection, these combine to rotate the current state closer to the 'searched-for' value $x_0$. This angle decreases linearly with the number of rotations (until it overshoots the searched-for value), giving that the probability of correctly measuring the correct value increases quadratically.

+ +

Bennett et al. showed that this is optimal. By taking a classical solution to an NP-problem, Grover's algorithm can be used to quadratically speed this up. However, taking a language $\mathcal L_A = \left\lbrace y:\exists x\, A\left(x\right) = y\right\rbrace$ for a length-preserving function $A$ (here, an oracle), any bounded-error oracle-based quantum Turing machine cannot accept this language in a time $T\left(n\right)=\mathcal o\left(2^{n/2}\right)$.

+ +

This is achieved by taking a set of oracles where $\left|1\right>^{\otimes n}$ has no preimage (so is not contained in the language). However, this is contained in some new language $\mathcal L_{A_y}$ by definition. The difference in probabilities of a machine accepting $\mathcal L_A$ and a different machine accepting $\mathcal L_{A_y}$ in time $T\left(n\right)$ is then less than $1/3$ and so neither language is accepted and Grover's algorithm is indeed asymptotically optimal.3

+ +

Zalka later showed that Grover's algorithm is exactly optimal.

+ +
+ +

1 In Grover's algorithm, minus signs can be moved round, so where the minus sign is, is somewhat arbitrary and doesn't necessarily have to be in the definition of the diffusion operator

+ +

2 alternatively, defining the diffusion operator without the minus sign gives a reflection about $\left|+^\perp\right>$

+ +

3 Defining the machine using the oracle $A$ as $M^A$ and the machine using oracle $A_y$ as $M^{A_y}$, this is due to the fact that there is a set $S$ of bit strings, where the states of $M^A$ and $M^{A_y}$ at a time $t$ are $\epsilon$-close4, with a cardinality $<2T^2/\epsilon^2$. Each oracle where $M^A$ correctly decides if $\left|1\right>^{\otimes n}$ is in $\mathcal L_A$ can be mapped to $2^n - \text{Card}\left(S\right)$ oracles where $M^A$ fails to correctly decide if $\left|1\right>^{\otimes n}$ is in that oracle's language. However, it must give one of the other $2^n-1$ potential answers and so if $T\left(n\right)=\mathcal o\left(2^{n/2}\right)$, the machine is unable to determine membership of $\mathcal L_A$.

+ +

4 Using the Euclidean distance, twice the trace distance

+",23,,23,,04-05-2018 22:28,04-05-2018 22:28,,,,0,,,,CC BY-SA 3.0 +1617,2,,1598,04-06-2018 00:10,,3,,"

The Quantum Approximate Optimization Algorithm is a good place to start for analyzing the relative performance of quantum algorithms on approximation problems. One result so far is that at p=1 QAOA can theoretically achieve an approximation ratio of 0.624 for MaxCut on 3-regular graphs. This result was obtained using brute force enumeration of the different possible cases. This is not a technique which is easily generalizable, so relatively little is known about the performance of QAOA on other problems.

+ +
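For a feel of how such expectations are computed, here is a p = 1 QAOA sketch for MaxCut on a triangle. The instance and the naive grid search over angles are illustrative assumptions; they do not reproduce the analytic 3-regular case analysis quoted above, but they do show p = 1 beating the random-assignment expectation of 1.5 cut edges.

```python
import numpy as np
from itertools import product

# p = 1 QAOA for MaxCut on a triangle. The instance and the naive grid
# search over angles are illustrative assumptions; they do not reproduce
# the analytic 3-regular result quoted above.
edges = [(0, 1), (1, 2), (0, 2)]
n = 3

def cut_value(bits):
    z = 1 - 2 * np.array(bits)                      # bit 0/1 -> spin +1/-1
    return sum((1 - z[i] * z[j]) / 2 for i, j in edges)

# Diagonal of the cost operator C over the 2^n computational basis states.
C = np.array([cut_value([(b >> k) & 1 for k in range(n)])
              for b in range(2 ** n)])

X = np.array([[0, 1], [1, 0]], dtype=complex)
def mixer(beta):
    rx = np.cos(beta) * np.eye(2) - 1j * np.sin(beta) * X   # e^{-i beta X}
    U = np.array([[1.0 + 0j]])
    for _ in range(n):
        U = np.kron(U, rx)                          # e^{-i beta sum_i X_i}
    return U

plus = np.full(2 ** n, 2 ** (-n / 2), dtype=complex)
best = 0.0
for gamma, beta in product(np.linspace(0, np.pi, 40), repeat=2):
    psi = mixer(beta) @ (np.exp(-1j * gamma * C) * plus)   # p = 1 QAOA state
    best = max(best, float(np.abs(psi) ** 2 @ C))

# A random assignment cuts 1.5 edges in expectation; the true optimum is 2.
print(best)
```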

As it currently stands QAOA uses very little structure in the combinatorial optimization problem and operates more along the lines of a direct search method. One possible consequence is that QAOA would be best used for problems where there is minimal structure. In this case there is nothing that classical algorithms could use to accelerate the search process.

+",1658,,1658,,04-06-2018 20:24,04-06-2018 20:24,,,,2,,,,CC BY-SA 3.0 +1619,2,,1613,04-06-2018 07:28,,12,,"

$\newcommand{\bra}[1]{\left<#1\right|}\newcommand{\ket}[1]{\left|#1\right>}\newcommand{\braket}[2]{\left<#1\middle|#2\right>}\newcommand{\bke}[3]{\left<#1\middle|#2\middle|#3\right>}\newcommand{\proj}[1]{\left|#1\right>\left<#1\right|}$ +Since the original question was about a layman's description, I offer a slightly different solution which is perhaps easier to understand (background dependent), based on a continuous time evolution. (I make no pretense that it is suitable for a layman, however.)

+ +

We start from an initial state which is a uniform superposition of all states, +$$ +\ket{\psi}=\frac{1}{\sqrt{2^n}}\sum_{y\in\{0,1\}^n}\ket{y} +$$ +and we are aiming to find a state $\ket{x}$ that can be recognised as the correct answer (assuming there is exactly one such state, although this can be generalised). To do this, we evolve in time under the action of a Hamiltonian +$$ +H=\proj{x}+\proj{\psi}. +$$ +The really beautiful feature of Grover's search is that at this point, we can reduce the maths to a subspace of just two states $\{\ket{x},\ket{\psi}\}$, rather than requiring all $2^n$. It's easier to describe if we make an orthonormal basis from these states, $\{\ket{x},\ket{\psi^\perp}\}$ where +$$ +\ket{\psi^{\perp}}=\frac{1}{\sqrt{2^n-1}}\sum_{y\in\{0,1\}^n:y\neq x}\ket{y}. +$$ +Using this basis, the time evolution $e^{-iHt}\ket{\psi}$ can be written as +$$ +e^{-it\left(\mathbb{I}+2^{-n}Z+\frac{\sqrt{2^n-1}}{2^{n}}X\right)}\cdot\left(\begin{array}{c}\frac{1}{\sqrt{2^n}} \\ \sqrt{1-\frac{1}{2^n}} \end{array}\right), +$$ +where $X$ and $Z$ are the standard Pauli matrices. This can be rewritten as +$$ +e^{-it}\left(\mathbb{I}\cos\left(\frac{t}{2^{n/2}}\right)-i\frac{1}{2^{n/2}}\sin\left(\frac{t}{2^{n/2}}\right)\left(Z+X\sqrt{2^n-1}\right)\right)\left(\begin{array}{c}\frac{1}{\sqrt{2^n}} \\ \sqrt{1-\frac{1}{2^n}} \end{array}\right). +$$ +So, if we evolve for a time $t=\frac{\pi}{2}2^{n/2}$, and ignoring global phases, the final state is +$$ +\frac{1}{2^{n/2}}\left(Z+X\sqrt{2^n-1}\right)\left(\begin{array}{c}\frac{1}{\sqrt{2^n}} \\ \sqrt{1-\frac{1}{2^n}} \end{array}\right)=\left(\begin{array}{c}\frac{1}{2^n} \\ -\frac{\sqrt{2^n-1}}{2^n} \end{array}\right)+\left(\begin{array}{c} 1-\frac{1}{2^n} \\ \frac{\sqrt{2^n-1}}{2^n}\end{array}\right)=\left(\begin{array}{c} 1 \\ 0 \end{array}\right). +$$ +In other words, with probability 1, we get the state $\ket{x}$ that we were searching for. 
The usual circuit-based description of Grover's search is really just this continuous time evolution broken into discrete steps, with the slight disadvantage that you usually can't get exactly probability 1 for your outcome, just very close to it.

+ +
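The evolution above is easy to verify numerically; here is a sketch for $n = 3$ (the marked item is an arbitrary choice), building $H = \left|x\rangle\langle x\right| + \left|\psi\rangle\langle\psi\right|$ directly and exponentiating it via its eigendecomposition.

```python
import numpy as np

# Numerical check of the evolution above for n = 3 (N = 8). Evolving the
# uniform state under H = |x><x| + |psi><psi| for t = (pi/2) 2^{n/2}
# should land on the marked state |x> with probability 1.
n = 3
N = 2 ** n
x = 5                                   # marked item, arbitrary choice
psi = np.full(N, 1 / np.sqrt(N), dtype=complex)
ket_x = np.zeros(N, dtype=complex)
ket_x[x] = 1.0

H = np.outer(ket_x, ket_x.conj()) + np.outer(psi, psi.conj())
evals, V = np.linalg.eigh(H)            # H is Hermitian
t = (np.pi / 2) * 2 ** (n / 2)
U = V @ np.diag(np.exp(-1j * evals * t)) @ V.conj().T   # e^{-iHt}

prob = float(np.abs(U @ psi)[x] ** 2)
print(prob)  # 1.0 up to numerical error
```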

One caveat is the following: you could redefine $\tilde H=5H$, and evolve using $\tilde H$ and the evolution time would be 5 times shorter. If you wanted to be really radical, replace the 5 with $2^{n/2}$, and Grover's search runs in constant time! But you're not allowed to do this arbitrarily. Any given experiment would have a fixed maximum coupling strength (i.e. a fixed multiplier). So, different experiments have different running times, but their scaling is the same, $2^{n/2}$. It's just like saying that the gate cost in the circuit model is constant, rather than assuming that if we use a circuit of depth $k$ each gate can be made to run in time $1/k$.

+ +

The optimality proof essentially involves showing that if you made detection of one possible marked state $\ket{x}$ any quicker, it would make detection of a different marked state, $\ket{y}$, slower. Since the algorithm should work equally well whichever state is marked, this solution is the best one.

+",1837,,,,,04-06-2018 07:28,,,,8,,,,CC BY-SA 3.0 +1620,1,7094,,04-06-2018 09:01,,25,1636,"

Lieb-Robinson bounds describe how effects are propagated through a system due to a local Hamiltonian. They are often described in the form $$\left|[A,B(t)]\right|\leq Ce^{vt-l},$$ where $A$ and $B$ are operators that are separated by a distance $l$ on a lattice where the Hamiltonian has local (e.g. nearest neighbour) interactions on that lattice, bounded by some strength $J$. The proofs of the Lieb-Robinson bound typically show the existence of a velocity $v$ (that depends on $J$). This is often really useful for bounding properties in these systems. For example, there were some really nice results here regarding how long it takes to generate a GHZ state using a nearest-neighbour Hamiltonian.

+ +

The problem that I've had is that the proofs are sufficiently generic that it is difficult to get a tight value on what the velocity actually is for any given system.

+ +

To be specific, imagine a one dimensional chain of qubits coupled by a Hamiltonian +$$ +H=\sum_{n=1}^N\frac{B_n}{2}Z_n+\sum_{n=1}^{N-1}\frac{J_n}{2}(X_nX_{n+1}+Y_nY_{n+1}), \tag{1} +$$ +where $J_n\leq J$ for all $n$. Here $X_n$, $Y_n$ and $Z_n$ represent a Pauli operator being applied to a given qubit $n$, and $\mathbb{I}$ everywhere else. Can you give a good (i.e. as tight as possible) upper bound for the Lieb-Robinson velocity $v$ for the system in Eq. (1)?

+ +

This question can be asked under two different assumptions:

+ +
    +
  • The $J_n$ and $B_n$ are all fixed in time
  • +
  • The $J_n$ and $B_n$ are allowed to vary in time.
  • +
+ +

The former is a stronger assumption which may make proofs easier, while the latter is usually included in the statement of Lieb-Robinson bounds.

+ +
+ +

Motivation

+ +

Quantum computation, and more generally quantum information, comes down to making interesting quantum states. Through works such as this, we see that information takes a certain amount of time to propagate from one place to another in a quantum system undergoing evolution due to a Hamiltonian such as in Eq. (1), and that quantum states, such as GHZ states, or states with a topological order, take a certain amount of time to produce. What the result currently shows is a scaling relation, e.g. the time required is $\Omega(N)$.

+ +

So, let's say I come up with a scheme that does information transfer, or produces a GHZ state etc. in a way that scales linearly in $N$. How good is that scheme actually? If I have an explicit velocity, I can see how closely matched the scaling coefficient is in my scheme compared to the lower bound.

+ +

If I think that one day what I want to see is a protocol implemented in the lab, then I very much care about optimising these scaling coefficients, not just the broad scaling functionality, because the faster I can implement a protocol, the less chance there is for noise to come along and mess everything up.

+ +
+ +

Further Information

+ +

$\newcommand{\bra}[1]{\left<#1\right|}\newcommand{\ket}[1]{\left|#1\right>}\newcommand{\bk}[2]{\left<#1\middle|#2\right>}\newcommand{\bke}[3]{\left<#1\middle|#2\middle|#3\right>}\newcommand{\proj}[1]{\left|#1\right>\left<#1\right|}$There are some nice features of this Hamiltonian which I assume make calculation easier. In particular, the Hamiltonian has a subspace structure based on the number of 1s in the standard basis (it is said to be excitation preserving) and, even better, the Jordan-Wigner transformation shows that all the properties of higher excitation subspaces can be derived from the 1-excitation subspace. This essentially means we only have to do the maths on an $N\times N$ matrix $h$ instead of the full $2^N\times 2^N$ matrix $H$, where +$$ +h=\sum_{n=1}^NB_n\proj{n}+\sum_{n=1}^{N-1}J_n(\ket{n}\bra{n+1}+\ket{n+1}\bra{n}). +$$ +There is some evidence that the Lieb-Robinson velocity is $v=2J$, such as here and here, but these all use a close to uniformly coupled chain, which has a group velocity $2J$ (and I assume the group velocity is closely connected to the Lieb-Robinson velocity). It doesn't prove that all possible choices of coupling strength have a velocity that is so bounded.

+ +

I can add a bit further to the motivation. Consider the time evolution of a single excitation starting at one end of the chain, $\ket{1}$, and what its amplitude is for arriving at the other end of the chain $\ket{N}$, a short time $\delta t$ later. To first order in $\delta t$, this is +$$ +\bra{N}e^{-ih\delta t}\ket{1}=\frac{\delta t^{N-1}}{(N-1)!}\prod_{n=1}^{N-1}J_n+O(\delta t^{N}). +$$ +You can see the exponential functionality that you would expect being outside the 'light cone' defined by a Lieb-Robinson system, but more importantly, if you wanted to maximise that amplitude, you'd set all the $J_n=J$. So, at short times, the uniformly coupled system leads to the most rapid transfer. Trying to push this further, you can ask, as a bit of a fudge, when can +$$ +\frac{t^{N-1}}{(N-1)!}\prod_{n=1}^{N-1}J_n\sim 1 +$$ +Taking the large $N$ limit, and using Stirling's formula on the factorial leads to +$$ +\frac{etJ}{N-1}\sim 1, +$$ +which suggests a maximum velocity of about $eJ$. Close, but hardly rigorous (as the higher order terms are non-negligible)!
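The short-time amplitude above is easy to check numerically in the single-excitation sector. A minimal sketch, assuming uniform couplings $J_n=J$ and $B_n=0$ (function names are made up; the matrix exponential is done by eigendecomposition since $h$ is real symmetric):

```python
import numpy as np
from math import factorial

def hopping_matrix(N, J=1.0, B=0.0):
    """Single-excitation Hamiltonian h for Eq. (1), with uniform J_n = J
    and B_n = B (an assumption for this sketch; the question allows
    arbitrary couplings)."""
    return (B * np.eye(N)
            + J * np.diag(np.ones(N - 1), 1)
            + J * np.diag(np.ones(N - 1), -1))

def end_to_end_amplitude(N, t, J=1.0):
    """<N| exp(-i h t) |1>, via eigendecomposition of the real symmetric h."""
    w, V = np.linalg.eigh(hopping_matrix(N, J))
    U = V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T
    return U[N - 1, 0]

# Compare the exact amplitude at short time with the leading-order term
# t^(N-1)/(N-1)! * prod(J_n) quoted above.
N, J, t = 5, 1.0, 0.01
amp = end_to_end_amplitude(N, t, J)
leading_order = (J * t) ** (N - 1) / factorial(N - 1)
```

At this time the relative deviation from the leading-order term is tiny (the next contributing order is suppressed by two extra powers of $Jt$).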

+ +


+",1837,,1837,,11-06-2018 07:58,8/25/2019 5:24,Explicit Lieb-Robinson Velocity Bounds,,1,9,,,,CC BY-SA 4.0 +1621,1,1623,,04-06-2018 13:40,,24,2712,"

I see many papers (e.g. Quantum principal component analysis) in which the existence of qRAM is necessary. What's the actual purpose of qRAM in quantum algorithms?

+",1850,,26,,12/13/2018 19:42,12/13/2018 19:42,What is the purpose of a quantum RAM in quantum algorithms?,,1,3,,,,CC BY-SA 3.0 +1622,1,1627,,04-06-2018 16:35,,26,2308,"

An answer to another question mentions that

+ +
+

There are arguments that suggests that such machines [""quantum Turing machines""] cannot even be built...

+
+ +

I'm not sure I fully understand the problem, so perhaps I'm not asking the right question, but here's what I could gather.

+ +

The slides are presented in a lecture (from 2013) by Professor Gil Kalai (Hebrew University of Jerusalem and Yale University). I watched most of the lecture, and it seems his claim is that there is a barrier to creating fault-tolerant quantum computers (FTCQ), and this barrier probably lies around creating logical qubits from physical components. (timestamp 26:20):

+ +

+ +

It sounds like the reason for such a barrier is due to the problem of noise and error-correction. And even though current research takes into account noise, it doesn't do so in the right manner (this is the part I don't understand).

+ +

I know many people (e.g., Scott Aaronson) are skeptical of this claim of impossibility, but I'm just trying to better understand the argument:

+ +

What is the reason for suggesting that practical quantum computers cannot be built (as presented by Professor Gil Kalai, and has anything changed since 2013)?

+",1869,,,,,5/18/2019 6:25,What is the argument that practical quantum computers cannot be built?,,3,5,,,,CC BY-SA 3.0 +1623,2,,1621,04-06-2018 17:33,,12,,"

This is discussed in chapter 5 of Ciliberto et al..

+ +

The purpose of most quantum(-enhanced) machine learning algorithms is to speed-up the processing of classical data over what is possible with classical machine learning algorithms. +In other words, the context is that you have a set of classical vectors $\{\boldsymbol x_k\}_k$, and you want to compute some function $\boldsymbol f(\boldsymbol x_k)$ of this data (which may then be used as an estimator of some property, or as a function characterising a classifier to be used for new data points, or something else). +Most quantum machine learning algorithms tell you that, provided you are able to efficiently perform the mapping +$$\{\boldsymbol x_k\}_k\mapsto\lvert \{\boldsymbol x_k\}\rangle= N\sum_{kj} x_{kj}\lvert k,j\rangle,$$ +then it is sometimes possible to compute $\boldsymbol f(\{\boldsymbol x_k\})$ more efficiently. +It is, however, highly nontrivial how to perform such mapping efficiently.

+ +

To maintain the potential exponential speed-ups of the quantum algorithms, this conversion needs to be efficient. If this is not the case, then one ends up in a situation in which the quantum algorithm can solve the problem very efficiently, but only after a lengthy preprocessing of the data has been performed, therefore killing the whole point of using the quantum algorithm.

+ +

This is where QRAMs come into play. +A QRAM is a device that can (theoretically) encode $N$ $d$-dimensional classical vectors into (the amplitudes of) a quantum state of $\log(Nd)$ qubits, in time $\mathcal O(\log(Nd))$. +As discussed in Ciliberto et al., as well as in this related answer, the actual feasibility of QRAMs is still not entirely clear, and many caveats remain.
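To make the mapping concrete, here is the classical analogue of the amplitude encoding (purely as an illustration, with made-up names): flattening and normalising the data set gives exactly the amplitude vector of $\lvert \{\boldsymbol x_k\}\rangle$. The whole point of a QRAM is to prepare this state on $\log_2(Nd)$ qubits in $\mathcal O(\log(Nd))$ time, which this classical sketch obviously does not achieve.

```python
import numpy as np

def amplitude_encode(X):
    """Flatten a classical data set {x_k} into a normalised amplitude vector.

    Classical analogue of the map {x_k} -> N * sum_kj x_kj |k, j>.
    """
    v = np.asarray(X, dtype=float).ravel()
    norm = np.linalg.norm(v)
    if norm == 0:
        raise ValueError("cannot encode an all-zero data set")
    return v / norm

data = np.arange(1.0, 17.0).reshape(4, 4)   # toy data: N=4 vectors, d=4
psi = amplitude_encode(data)
n_qubits = int(np.log2(psi.size))           # 16 amplitudes -> 4 qubits
```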

+",55,,55,,06-11-2018 14:06,06-11-2018 14:06,,,,0,,,,CC BY-SA 4.0 +1625,2,,1622,04-06-2018 20:56,,3,,"

I can’t comment on the specifics of his arguments, because I don’t claim to understand them fully. But in general, we have to wonder whether quantum mechanics will continue to be valid for many qubit systems and states that are deep within the Hilbert space.

+ +

Physics is all about observing nature, building theories, confirming the theories, and then finding where they break down. Then the cycle begins again.

+ +

We have never had quantum systems as clean, well-controlled and large as current quantum processors. Devices capable of pulling off ‘supremacy’ are even further beyond our current experimental experience. So it is valid to wonder if this unprobed corner of QM might be where it all breaks down. Perhaps new ‘post-quantum’ effects will appear, that effectively act as uncorrectable forms of noise.

+ +

Of course, most of us don’t think it will. And we hope it won’t, or there will be no quantum computers. Nevertheless, we must be open to the possibility we are wrong.

+ +

And the minority who think quantum computing will fail should be open to the idea that they are wrong, too. Hopefully, they won’t turn out to be the new brand of ‘Bell violation deniers’.

+",409,,26,,7/13/2018 14:42,7/13/2018 14:42,,,,0,,,,CC BY-SA 4.0 +1626,2,,1622,04-06-2018 22:53,,5,,"
+

Q: ""What is the reason for suggesting that practical quantum computers cannot be built (as presented by Professor Gil Kalai, and has anything changed since 2013)?"".

+
+ +

In an interview titled ""Perpetual Motion of The 21st Century?"" Prof. Kalai states:

+ +
+

""For quantum systems there are special obstacles, such as the inability to make exact copies of quantum states in general. Nevertheless, much of the theory of error-correction has been carried over, and the famous threshold theorem shows that fault-tolerant quantum computation (FTQC) is possible if certain conditions are met. The most-emphasized condition sets a threshold for the absolute rate of error, one still orders of magnitude more stringent than what current technology achieves but approachable. One issue raised here, however, is whether the errors have sufficient independence for these schemes to work or correlations limited to what they can handle."".

+
+ +

In an earlier paper of his titled ""Quantum Computers: Noise Propagation and Adversarial Noise Models"" he states:

+ +
+

Page 2: ""The feasibility of computationally superior quantum computers is one of the most fascinating scientific problems of our time. The main concern regarding quantum-computer feasibility is that quantum systems are inherently noisy. The theory of quantum error correction and fault-tolerant quantum computation (FTQC) provides strong support for the possibility of building quantum computers. In this paper we will discuss adversarial noise models that may fail quantum computation. This paper presents a critique of quantum error correction and skepticism on the feasibility of quantum computers."".

+ +

Page 19: ""The main issue is therefore to understand and describe the fresh (or infinitesimal) noise operations. The adversarial models we consider here should be regarded as models for fresh noise. But the behavior of accumulative errors in quantum circuits that allow error propagation is sort of a “role model” for our models of fresh noise.

+ +

The common picture of FTQC asserts:

+ +
    +
  • Fault tolerance will work if we are able to reduce the fresh gate/qubit + errors to below a certain threshold. In this case error propagation will be suppressed.
  • +
+ +

What we propose is:

+ +
    +
  • Fault tolerance will not work because the overall error will behave like accumulated errors for standard error propagation (for circuits that allow error propagation), although not necessarily because of error propagation.
  • +
+ +

Therefore, for an appropriate modeling of noisy quantum computers the fresh errors should behave like accumulated errors for standard error propagation (for circuits that allow error propagation).

+ +

(As a result, in the end we will not be able to avoid error propagation.)"".

+ +

Page 23: ""Conjecture B: In any noisy quantum computer in a highly entangled state there will be a strong effect of error synchronization.

+ +

We should informally explain already at this point why these conjectures, if true, are damaging. We start with Conjecture B. The states of quantum computers that apply error-correcting codes needed for FTQC are highly entangled (by any formal definition of “high entanglement”). Conjecture B + will imply that at every computer cycle there will be a small but substantial probability that the number of faulty qubits will be much larger than the threshold. This is in contrast to standard assumptions that the probability of the number of faulty qubits being much larger than the threshold decreases exponentially with the number of qubits. Having a small but substantial probability of a large number of qubits to be faulty is enough to fail the quantum error correction codes."".

+
+ +

See also his paper: ""How Quantum Computers Fail: Quantum Codes, Correlations in Physical Systems, and Noise Accumulation"".

+ +

Many people disagree, and much has changed; see this Wikipedia page: ""Quantum Threshold Theorem"", or this paper: ""Experimental Quantum Computations on a Topologically Encoded Qubit"". There is even a paper on quantum metrology, ""Quantum metrology with a transmon qutrit"", in which the authors utilize additional dimensions and claim that ""Making use of coherence and entanglement as metrological quantum resources allows to improve the measurement precision from the shot-noise or quantum limit to the Heisenberg limit."".

+",278,,,,,04-06-2018 22:53,,,,0,,,,CC BY-SA 3.0 +1627,2,,1622,04-06-2018 22:59,,9,,"

If your intent is to understand Gil Kalai's arguments, I recommend the following blog post of his: My Argument Against Quantum Computers: An Interview with Katia Moskvitch on Quanta Magazine (and the links therein).

+

For good measure, I'd also throw in Perpetual Motion of The 21st Century? (especially the comments). You can also see the highlights in My Quantum Debate with Aram Harrow: Timeline, Non-technical Highlights, and Flashbacks I and My Quantum Debate with Aram II. Finally, if you haven't already, see Scott Aaronson's Whether or not God plays dice, I do.

+

First, a brief summary of Kalai's view from his Notices article (see also The Quantum Computer Puzzle @ Notices of the AMS):

+
+

Understanding quantum computers in the presence of noise requires consideration of behavior at different scales. In the small scale, standard models of noise from the mid-90s are suitable, and quantum evolutions and states described by them manifest a very low-level computational power. This small-scale behavior has far-reaching consequences for the behavior of noisy quantum systems at larger scales. On the one hand, it does not allow reaching the starting points for quantum fault tolerance and quantum supremacy, making them both impossible at all scales. On the other hand, it leads to novel implicit ways for modeling noise at larger scales and to various predictions on the behavior of noisy quantum systems.

+
+

Second, a recent argument for why he thinks classical error correction is possible but quantum error correction is not.

+
+

Unlike the repetition/majority mechanism which is supported by very primitive computational power, creating a quantum error correcting code and the easier task of demonstrating quantum supremacy are not likely to be achieved by devices which are very low-level in terms of computational complexity.

+
+

(In the above mentioned conversation with Aram Harrow, it is pointed out that if one were to take Kalai's initial arguments directly, then even classical error correction would not possible.)

+

In the post, Kalai goes on to argue that a primitive quantum computer would not be able to do error correction.

+
+

Q: But why can’t you simply create good enough qubits to allow universal quantum circuits with 50 qubits?

+

A: This will allow very primitive devices (in terms of the asymptotic behavior of computational complexity) to perform superior computation.

+
+

Kalai also gave a lecture (YouTube) on why topological quantum computing would not work.

+",,user1813,-1,,6/18/2020 8:31,5/18/2019 6:25,,,,0,,,,CC BY-SA 4.0 +1628,2,,1486,04-07-2018 06:29,,6,,"

The answer to the first question (why is energy efficiency in quantum vs classical not discussed as often as speed?) is: in part because the problem is less univocal and in part because the answer is less flattering.

+ +

The answer to the second question (are quantum computers more or less energetically efficient?) will change with time, since it depends on technological developments of the different architectures.

+ +

At the present time, quantum computing is obviously less energetically efficient. A minimal classical computer can be designed to be extremely cheap, also in terms of energy (e.g. 1.5 W (average when idle) to 6.7 W (maximum under stress) for a Raspberry Pi ). In contrast, today to build and operate a minimal quantum computer is an engineering feat with staggering energy cost, even if the number of qubits is well below 100 and the maximum number of operations is orders of magnitude below what is achieved in a fraction of a second by a minimal classical computer.

+ +

In the future, one can either speculate or take into account the fundamentals. Let us avoid speculation and stick to the fundamentals:

+ +
    +
  • There is no absolute fundamental physical reason for quantum computers to be more or less energy efficient than classical ones.
  • +
  • Energy efficiency will always depend on the architecture, and thus on available technological solutions.
  • +
  • To evaluate energy consumption, it will always be important to distinguish between the idle consumption and the cost of operation.
  • +
+ +

To elaborate on the latter point, present devices, both in commercial and academic settings, are bulky. Not ENIAC-sized, but larger-than-a-large-fridge-sized. Furthermore, to be controlled they require an auxiliary classical computer. The size-per-qubit is expected to get better, the need for an auxiliary classical computer is not.

+ +

But besides direct electrical power, there are often further physical requirements which cost energy, and which are fundamentally needed to keep the device in the desired quantum regime. For example, popular architectures today include different solid-state devices that need to be kept at temperatures of the order of a few Kelvin or lower. These temperatures are achieved with the help of liquid Helium, which is energetically very costly to liquefy (cryogenic gases and electricity are among the main costs in Electron Paramagnetic Resonance laboratories such as the Electron Magnetic Resonance Facility (EMR) at the MagLab, or, closer to my experience, in the pulsed Electron Paramagnetic Resonance section at the ICMol). I have no experience with ion/atom traps, which are also popular architectures; while they require maintaining a high-quality vacuum, for all I know these may be more energy efficient.

+",1847,,1847,,04-07-2018 08:07,04-07-2018 08:07,,,,4,,,,CC BY-SA 3.0 +1629,1,1641,,04-07-2018 09:50,,19,2060,"

In order to represent the single qubit $|\psi\rangle$ we use an unitary vector in a $\mathbb{C}^2$ Hilbert space whose (one of the) orthonormal base is $(|0\rangle, |1\rangle)$.

+ +

We can draw $|\psi\rangle$ using a Bloch ball. However, I found this notation quite confusing, because orthogonal vectors are spatially antiparallel (brief explanation in this Physics Stackexchange question).

+ +

+ +

Do you know any different graphical representation for a single qubit?

+",1874,,,,,08-07-2018 21:11,Alternative to Bloch sphere to represent a single qubit,,4,0,,,,CC BY-SA 3.0 +1630,2,,1629,04-07-2018 10:27,,1,,"

The Bloch sphere historically came about to describe spins where up and down can actually be viewed as being (anti)parallel rather than (mathematically) orthogonal.

+ +

You can naturally (and perhaps more naturally!) depict a qubit's state in a way that orthogonal states are indeed orthogonal. Then a pure 1-qubit state occupies a point on the unit sphere in four-dimensional real space (a 3-sphere).

+",,user1039,,,,04-07-2018 10:27,,,,0,,,,CC BY-SA 3.0 +1631,1,1632,,04-07-2018 10:41,,19,8635,"

I have done some sort of online research on qubits and the factors making them infamous i.e allowing qubits to hold 1 and 0 at the same time and another is that qubits can be entangled somehow so that they can have related data in them no matter how far they are (even at opposite sides of the galaxies).

+ +

While reading about this on Wikipedia I have seen some equation which is still difficult for me to comprehend. Here's the link to Wikipedia.

+ +

Questions:

+ +
    +
  1. How are they entangled in the first place?

  2. +
  3. How do they relate their data?

  4. +
+",1875,,55,,04-07-2018 16:30,11/20/2019 21:51,What does it mean for two qubits to be entangled?,,3,2,,,,CC BY-SA 3.0 +1632,2,,1631,04-07-2018 11:51,,19,,"

For a simple example suppose you have two qubits in definite states $|0\rangle$ and $|0\rangle$. The combined state of the system is $|0\rangle\otimes |0\rangle$ or $|00\rangle$ in shorthand.

+ +

Then if we apply the following operators to the qubits (image is cut from superdense coding wiki page), the resulting state is an entangled state, one of the bell states.

+ +

+ +

First in the image we have the Hadamard gate acting on the first qubit, which written in full is $H\otimes I$, so that the identity operator acts on the second qubit.

+ +

The Hadamard matrix looks like +$$H=\frac{1}{\sqrt{2}} \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}$$ where the basis is ordered $\{|0\rangle,|1\rangle\}$.

+ +

So after the Hadamard operator acts, the state is now

+ +

$$(H\otimes I)(|0\rangle\otimes|0\rangle)=H|0\rangle\otimes I |0\rangle=\frac{1}{\sqrt{2}}(|0\rangle+|1\rangle)\otimes (|0\rangle)=\frac{1}{\sqrt{2}}(|00\rangle+|10\rangle)$$

+ +

The next part of the circuit is a controlled not gate, which only acts on the second qubit if the first qubit is a $1$.

+ +

You can represent $CNOT$ as $|0\rangle\langle0|\otimes I+|1\rangle\langle1|\otimes X$, where $|0\rangle\langle0|$ is a projection operator onto the bit $0$, or in matrix form $\begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}$. Similarly $|1\rangle\langle1|$ is $\begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}$.

+ +

The $X$ operator is the bit flip operator represented as $\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$.

+ +

Overall the $CNOT$ matrix is $\begin{pmatrix} 1 & 0 &0 & 0 \\ 0 & 1 &0 & 0 \\ 0 & 0 &0 & 1 \\0 & 0 &1 & 0 \\\end{pmatrix}$

+ +

When we apply the $CNOT$ we can either use matrix multiplication by writing our state as a vector $\begin{pmatrix}\frac{1}{\sqrt{2}} \\ 0 \\ \frac{1}{\sqrt{2}} \\0 \end{pmatrix}$, or we can just use the tensor product form.

+ +

$$CNOT (\frac{1}{\sqrt{2}}(|00\rangle+|10\rangle))=\frac{1}{\sqrt{2}}(|00\rangle+|11\rangle)$$

+ +

We see that for the first part of the state, $|00\rangle$, the first bit is $0$, so the second bit is left alone; for the second part, $|10\rangle$, the first bit is $1$, so the second bit is flipped from $0$ to $1$.

+ +

Our final state is $$\frac{1}{\sqrt{2}}(|00\rangle+|11\rangle)$$ which is one of the four Bell states which are maximally entangled states.
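The whole two-gate circuit takes only a few lines of numpy to verify (basis ordered $|00\rangle,|01\rangle,|10\rangle,|11\rangle$):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)      # Hadamard
I2 = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

ket00 = np.array([1, 0, 0, 0], dtype=float)       # |00>

# Apply (H ⊗ I), then CNOT: the result is (|00> + |11>)/sqrt(2)
bell = CNOT @ np.kron(H, I2) @ ket00
```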

+ +

To see what it means for them to be entangled, notice that if you measure the first qubit and find that it is $0$, you immediately know the second qubit must also be $0$, because that's the only possibility.

+ +

Compare to this state for instance:

+ +

$$\frac{1}{2}(|00\rangle+|01\rangle+|10\rangle+|11\rangle).$$

+ +

If you measure that the first qubit is a zero, then the state collapses to $\frac{1}{\sqrt{2}}(|00\rangle+|01\rangle)$, where there is still a 50-50 chance the second qubit is a $0$ or a $1$.
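This difference can be simulated directly; a quick numpy sketch (basis ordered $|00\rangle,|01\rangle,|10\rangle,|11\rangle$; the helper name is made up):

```python
import numpy as np

def measure_first_qubit(state, outcome):
    """Born-rule measurement of the first qubit of a 2-qubit state.

    Returns the probability of `outcome` (0 or 1) and the renormalised
    post-measurement state.
    """
    post = np.zeros(4, dtype=complex)
    block = slice(2 * outcome, 2 * outcome + 2)
    post[block] = state[block]          # project onto first qubit = outcome
    p = float(np.vdot(post, post).real)
    if p > 0:
        post /= np.sqrt(p)
    return p, post

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)   # entangled Bell state
flat = np.array([1, 1, 1, 1]) / 2            # unentangled uniform state

p_b, post_b = measure_first_qubit(bell, 0)   # p_b = 0.5, post_b = |00>
p_f, post_f = measure_first_qubit(flat, 0)   # p_f = 0.5, post_f = (|00>+|01>)/sqrt(2)
```

For the Bell state the second qubit is pinned to $0$; for the product state it is still 50-50.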

+ +

Hopefully this gives an idea of how states can be entangled. If you want a particular example, like entangling photons or electrons, you would have to look into how certain gates can be implemented; still, the mathematics is written the same way, even though the $0$ and $1$ may represent different things in different physical situations.

+ +
+ +

Update 1: Mini Guide to QM/QC/Dirac notation

+ +

Usually there's a standard computational (ortho-normal) basis for a single qubit which is $\{|0\rangle,|1\rangle\}$, say $\mathcal{H}=\operatorname{span}\{|0\rangle,|1\rangle\}$ is the vector space.

+ +

In this ordering of the basis we can identify $|0\rangle$ with $\begin{pmatrix} 1\\ 0 \end{pmatrix}$ and $|1\rangle$ with $\begin{pmatrix} 0\\ 1 \end{pmatrix}$. Any single qubit operator then can be written in matrix form using this basis. E.g. a bit flip operator $X$ (after pauli-$\sigma_x$) which should take $|0\rangle\mapsto |1\rangle$ and $|1\rangle \mapsto |0\rangle$, can be written as $\begin{pmatrix} 0 & 1\\ 1 & 0 \end{pmatrix}$, the first column of the matrix is the image of the first basis vector and so on.

+ +

When you have multiple say $n$-qubits they should belong to the space $\mathcal{H}^{\otimes n}:=\overbrace{\mathcal{H}\otimes\mathcal{H}\otimes\cdots\otimes \mathcal{H}}^{n-times}$. A basis for this space is labelled by strings of zeros and ones, e.g. $|0\rangle\otimes|1\rangle\otimes |1\rangle\otimes\ldots \otimes|0\rangle$, which is usually abbreviated for simplicity as $|011\ldots0\rangle$.

+ +

A simple example for two qubits, the basis for $\mathcal{H}^{\otimes 2}=\mathcal{H}\otimes \mathcal{H}$, is $\{|0\rangle\otimes|0\rangle,|0\rangle\otimes|1\rangle,|1\rangle\otimes|0\rangle,|1\rangle\otimes|1\rangle\}$ or in the shorthand $\{|00\rangle,|01\rangle,|10\rangle,|11\rangle\}$.

+ +

There's different ways to order this basis in order to use matrices, but one natural one is to order the strings as if they are numbers in binary so as above. For example for $3$ qubits you could order the basis as $$\{|000\rangle,|001\rangle,|010\rangle,|011\rangle,|100\rangle,|101\rangle,|110\rangle,|111\rangle\}.$$

+ +

The reason why this can be useful is that it corresponds with the Kronecker product for the matrices of the operators. For instance, first looking at the basis vectors: +$$|0\rangle\otimes |0\rangle=\begin{pmatrix} 1\\ 0 \end{pmatrix}\otimes \begin{pmatrix} 1\\ 0 \end{pmatrix}:=\begin{pmatrix} 1\cdot\begin{pmatrix} 1\\ 0 \end{pmatrix} \\ 0\cdot\begin{pmatrix} 1\\ 0 \end{pmatrix} \end{pmatrix}=\begin{pmatrix} 1\\ 0\\0\\0 \end{pmatrix}$$

+ +

and

+ +

$$|0\rangle\otimes |1\rangle=\begin{pmatrix} 1\\ 0 \end{pmatrix}\otimes \begin{pmatrix} 0\\ 1 \end{pmatrix}:=\begin{pmatrix} 1\cdot\begin{pmatrix} 0\\ 1 \end{pmatrix} \\ 0\cdot\begin{pmatrix} 1\\ 0 \end{pmatrix} \end{pmatrix}=\begin{pmatrix} 0\\ 1\\0\\0 \end{pmatrix}$$

+ +

and similarly

+ +

$$|1\rangle\otimes |0\rangle=\begin{pmatrix} 0\\ 0\\1\\0 \end{pmatrix},\quad |1\rangle\otimes |1\rangle=\begin{pmatrix} 0\\ 0\\0\\1 \end{pmatrix}$$

+ +

If you have an operator e.g. $X_1X_2:=X\otimes X$ which acts on two qubits and we order the basis as above we can take the kronecker product of the matrices to find the matrix in this basis:

+ +

$$X_1X_2=X\otimes X=\begin{pmatrix} 0 & 1\\ 1 & 0 \end{pmatrix}\otimes \begin{pmatrix} 0 & 1\\ 1 & 0 \end{pmatrix} = \begin{pmatrix} 0\cdot\begin{pmatrix} 0 & 1\\ 1 & 0 \end{pmatrix} & 1\cdot\begin{pmatrix} 0 & 1\\ 1 & 0 \end{pmatrix}\\ 1\cdot\begin{pmatrix} 0 & 1\\ 1 & 0 \end{pmatrix} & 0\cdot \begin{pmatrix} 0 & 1\\ 1 & 0 \end{pmatrix} \end{pmatrix}=\begin{pmatrix} 0 &0&0&1\\ 0 &0&1&0\\0 &1&0&0\\1 &0&0&0\\ \end{pmatrix}$$

+ +

If we look at the example of $CNOT$ above given as $|0\rangle\langle0|\otimes I+|1\rangle\langle1|\otimes X$.$^*$ This can be computed in matrix form as $\begin{pmatrix} 1 & 0\\ 0 & 0 \end{pmatrix}\otimes \begin{pmatrix} 1 & 0\\ 0 & 1 \end{pmatrix}+\begin{pmatrix} 0 & 0\\ 0 & 1 \end{pmatrix}\otimes\begin{pmatrix} 0 & 1\\ 1 & 0 \end{pmatrix}$, which you can check is the $CNOT$ matrix above.
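Both of these matrix checks take one line each with `np.kron`:

```python
import numpy as np

P0 = np.array([[1, 0], [0, 0]])   # |0><0|
P1 = np.array([[0, 0], [0, 1]])   # |1><1|
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])

# CNOT = |0><0| ⊗ I + |1><1| ⊗ X
CNOT = np.kron(P0, I2) + np.kron(P1, X)

# The X1X2 = X ⊗ X matrix worked out above (ones on the anti-diagonal)
XX = np.kron(X, X)
```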

+ +

It's worthwhile getting used to using the shorthands and the tensor products rather than converting everything to matrix representation, since the computational space grows as $2^n$ for $n$ qubits: for three qubits you have $8\times 8$ matrices, for four qubits $16\times 16$ matrices, and it quickly becomes impractical to convert to matrix form.

+ +

Aside$^*$: There are a few common uses of Dirac notation: to represent vectors like $|0\rangle$; dual vectors, e.g. $\langle 0|$; the inner product $\langle 0|1\rangle$ between the vectors $|0\rangle$ and $|1\rangle$; and operators on the space like $X=|0\rangle\langle1|+|1\rangle\langle0|$.

+ +

An operator like $P_0=|0\rangle\langle0|$ is an (orthogonal) projection operator because it satisfies $P^2=P$ and $P^\dagger=P$.

+",197,,197,,04-07-2018 19:30,04-07-2018 19:30,,,,4,,,,CC BY-SA 3.0 +1633,2,,1629,04-07-2018 13:18,,6,,"

Adding to what @pyramids conveyed in their answer:

+ +

A qubit's state is generally written as $\alpha|0\rangle + \beta|1\rangle$, where $\alpha, \beta \in \Bbb{C}$, and $|\alpha|^2+|\beta|^2=1$.

+ +

$\Bbb{C}^2(\Bbb{R})$ is a four-dimensional vector space, over the field of real numbers. Since any $n$-dimensional real vector space is isomorphic to $\Bbb{R}^n(\Bbb{R})$, you can represent any qubit's state as a point in a $4$-dimensional real space, too, whose basis vectors you can consider to be $(1,0,0,0),(0,1,0,0),(0,0,1,0),(0,0,0,1)$. In such a case a qubit's state would be represented as $a(1,0,0,0)+b(0,1,0,0)+c(0,0,1,0)+d(0,0,0,1)$.

+ +

Say, $\alpha = a + i b$ (where $a,b\in \Bbb{R}$) and $\beta = c + id$ (where $c,d\in \Bbb{R}$). You need the condition $|a+ib|^2+|c+id|^2=1\implies a^2+b^2+c^2+d^2=1$ to be satisfied, which implies the state of the qubit would be a point on a 3-sphere.

+ +

As you know, it is difficult to efficiently represent a $4$-dimensional space on a $2$-dimensional surface like paper or your screen. Hence, you don't see that representation used often. The Bloch sphere is pretty much the most efficient representation out there (for a single qubit), since it removes one degree of freedom (of the complex numbers $\alpha,\beta$, each of which has two degrees of freedom) using the fact that a qubit's state is usually normalized to magnitude $1$, i.e. $|\alpha|^2+|\beta|^2=1$.

+ +
+

Now, using the Hopf + coordinates + let's say:

+ +

$$\alpha = e^{i\psi}\cos(\theta/2)$$

+ +

$$\beta = e^{i(\psi+\phi)}\sin(\theta/2)$$

+ +

Here, $\theta$ can run from $0$ to $\pi$, whereas $\psi$ and $\phi+\psi$ can take values between $0$ and $2\pi$.

+ +

In case you're wondering why $\theta/2$ is being used instead of $\theta$ have a look at the answers on this excellent thread on Physics Stack Exchange.

+
+ +

Okay, even now you notice three degrees of freedom $\psi,\phi,\theta$, whereas on a unit-radius sphere you only have two angles which you can change to get the different states of a qubit.

+ +

Notice that $\phi$ is basically the ""relative phase"" between $\alpha$ and $\beta$. On the other hand $\psi$ does not contribute to the ""relative phase"" of $\alpha,\beta$. Also, neither $\phi$ nor $\psi$ contribute to the magnitude of $\alpha,\beta$ (since $|e^{i\varphi}|=1$ for any angle $\varphi$). Since $\psi$ contributes neither to ""relative phase"" nor to the ""magnitudes"" of $\alpha,\beta$ it is said to have no physically observable consequences and we can arbitrarily choose $\alpha$ to be real by eliminating the factor of $e^{i\psi}$.

+ +
+

Thus we end up with:

+ +

$$\alpha = \cos(\theta/2)$$ and $$\beta=e^{i\phi}\sin(\theta/2)$$ where $\theta$ can run from $0$ to $\pi$, and $\phi$ can run from $0$ to $2\pi$.

+
+ +

This practical simplification allows you to represent a qubit's state using just $2$ degrees of freedom on a $3$-dimensional spherical surface of unit radius, which can again efficiently be ""drawn"" on a $2$-dimensional surface, as shown in the following image.

+ +

+ +

Mathematically, it is not possible to reduce the degrees of freedom any further, and so, I'd say there is no other ""more efficient"" geometrical representation of a single qubit than the Bloch sphere.
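The state-vector → Bloch-angle conversion derived above is easy to check numerically. A minimal sketch (the function name and use of NumPy are my own choices, not from the answer): strip the global phase $e^{i\psi}$ so that $\alpha$ becomes real and non-negative, then read off $\theta$ and $\phi$.

```python
import numpy as np

def bloch_angles(alpha, beta):
    """Map a normalized qubit state alpha|0> + beta|1> to Bloch angles
    (theta, phi), following alpha = cos(theta/2), beta = e^{i phi} sin(theta/2)."""
    # Remove the global phase e^{i psi} so that alpha becomes real and >= 0
    phase = np.exp(-1j * np.angle(alpha))
    a, b = alpha * phase, beta * phase
    theta = 2 * np.arccos(np.clip(a.real, -1.0, 1.0))
    phi = float(np.angle(b)) % (2 * np.pi)
    return theta, phi
```

For example, the state $(|0\rangle+|1\rangle)/\sqrt{2}$ gives $(\theta,\phi)=(\pi/2,0)$, the point on the equator of the sphere.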

+ +

Source: Wikipedia:Bloch_Sphere

+",26,,26,,04-08-2018 05:52,04-08-2018 05:52,,,,5,,,,CC BY-SA 3.0 +1634,1,1647,,04-07-2018 17:51,,10,358,"

This question is based on a scenario that is partly hypothetical and partly based on the experimental features of molecule-based quantum devices, which often present a quantum evolution and have some potential to be scalable, but are generally extremely challenging to characterize in detail (a relevant but not unique example is a series of works related to this electrical control of nuclear spin qubits in single molecules).

+ +

The scenario: Let us say we have a variety of black boxes, each of which is able to process information. We don't control the quantum evolution of the boxes; in the language of the quantum circuit model, we do not control the sequence of quantum gates. We know each black box is hardwired to a different algorithm, or, more realistically, to a different time-dependent Hamiltonian, including some incoherent evolution. We don't know the details of each black box. In particular, we don't know whether their quantum dynamics are coherent enough to produce a useful implementation of a quantum algorithm (let us herein call this ""quantumness""; the lower bound for this would be ""it's distinguishable from a classical map""). To work with our black boxes towards this goal, we only know how to feed them classical inputs and obtain classical outputs. Let us here distinguish between two sub-scenarios:

+ +
    +
  1. We cannot perform entanglement ourselves: we employ product states as +inputs, and single qubit measurements on the outputs. However, we can choose the basis of our input preparation and of our measurements (at minimum, between two orthogonal bases).
  2. +
  3. As above, but we cannot choose the bases and have to work in some fixed, ""natural"" basis.
  4. +
+ +

The goal: to check, for a given black box, the quantumness of its dynamics. At least for 2 or 3 qubits, as a proof-of-concept, and ideally also for larger input sizes.

+ +

The question: in this scenario, is there a series of correlation tests, in the style of Bell's inequalities, which can achieve this goal?

+",1847,,1847,,04-08-2018 13:47,04-09-2018 06:50,Can one interrogate black boxes for quantum coherence?,,3,2,,,,CC BY-SA 3.0 +1635,1,,,04-07-2018 18:07,,7,384,"

This question is a follow-up on this one, with the hope of getting more specific clues, and was motivated by this answer by user Rob.

+ +

Also please note this posts that handle the topic of QA in much more detail:

+ +

https://quantumcomputing.stackexchange.com/a/1596
+https://quantumcomputing.stackexchange.com/a/4228

+ +

On D-Wave Systems' webpage, we can read a description of how an annealing QPU like their 2000Q flagship is able to perform optimization on a given distribution by considering many paths simultaneously, which makes the optimization process more robust against local minima and such.

+ +

After that, they claim that ""it is best suited to tackling complex optimization problems that exist across many domains"", and they cite (among others) optimization, machine learning and sampling / Monte Carlo.

+ +

Furthermore, in Wikipedia, we see that

+ +
+

Quantum annealing (QA) is a metaheuristic for finding the global minimum of a given objective function over a given set of candidate solutions [...]. Quantum annealing starts from a quantum-mechanical superposition of all possible states (candidate states) with equal weights. Then the system evolves [...] The transverse field is finally switched off, and the system is expected to have reached the ground state.

+
+ +
+ +

At this point, we see that both explanations (with different detail levels) address the question of how annealing can help non-convex optimization (among other things). And that isn't even specifically related to quantum computing since simulated annealing stands on its own. But it is D-Wave claiming their hardware can help to perform sampling.

+ +

Maybe the connection is obvious and I'm missing it, but I think sampling is a distinct enough task from optimization to require its own explanation. The specific questions that I have are:

+ +
    +
  • Given some data following a possibly unknown distribution, how can the D-Wave be used to perform sampling on it? What are the constraints?
  • +
  • To what extent is it advantageous against other SoTA classical sampling algorithms like, say, Gibbs?
  • +
  • Are the possible advantages specific to annealing QPUs, to any QPUs or a general property of simulated annealing?
  • +
+",1346,,1346,,04-06-2019 01:48,04-06-2019 01:48,How can a D-Wave style Annealing QPU help sampling?,,1,3,,,,CC BY-SA 4.0 +1636,2,,1629,04-07-2018 22:11,,1,,"

(Firstly, the ""reputation points"" requirement is stupid - this remark should be a comment on the previous post.)

+ +

A single qubit in a pure state has 2 real degrees of freedom, not 3, when you quotient out both magnitude and phase (i.e., complex normalization). So, most reasonable two-dimensional surfaces could be used (e.g., the 2-sphere or anything topologically equivalent).

+ +

Finding a useful representation is another story. The Bloch sphere has a natural extension to mixed states (which have 3 degrees of freedom), whereas this does not appear to be the case otherwise.

+",1885,,1837,,5/22/2018 10:58,5/22/2018 10:58,,,,1,,,,CC BY-SA 4.0 +1637,2,,1635,04-08-2018 00:34,,1,,"

This answer reflects my understanding of what D-wave have to say about this, in the 2013 whitepaper they link to: Programming With D-Wave: Map Coloring Problem

+ +
+ +

To back up the question, we find once again the claim:

+ +
+

""superposition"" states [...] give a quantum computer the ability to quickly solve certain classes of complex problems such as optimization, machine learning and sampling problems.

+
+ +

And the explanation for the optimization problem:

+ +
+

The processor considers all the possibilities simultaneously to determine the lowest energy required to form those relationships. Because a quantum computer is probabilistic rather than deterministic, the computer returns many very good answers in a short amount of time - 10,000 answers in one second. This gives the user not only the optimal solution or a single answer, but also other alternatives to choose from.

+
+ +

Then, the statement that the QPU naturally returns samples:

+ +
+

If the D-Wave quantum computer has no registers or memory locations, a natural question arises: how do we learn anything from having executed a quantum machine instruction? The answer is that we are given samples from a distribution, as a side effect of executing the QMI (quantum machine instruction).

+
+ +
+ +

But how? First, they explain what is the interface for such QMIs:

+ +
    +
  1. ""The D-Wave system has many qubits [...]. The programming model does not allow the programmer to directly set the value of these qubits""

  2. +
  3. Instead, the state of the $q_i$ qbits can be influenced by setting a weight $a$ for each qbit, and a weight $b_{ij}$ (called coupler) for each connection between any 2 qbits. The qbits are linearly combined into an ""objective function that will define the distribution from which our samples will be selected"": $O(a,b;q) = \sum_{i=1}^N{a_i q_i} + \sum_{<i,j>}b_{ij} q_i q_j$.

  4. +
  5. ""Each QMI consists of exactly the $a$ and $b_{ij}$ values that appear in the [...] objective function""

  6. +
+ +
+ +

And then they describe the protocol for such QMIs:

+ +
+

As a programmer, it is our job to encode the various possible solutions to an optimization problem in the qubit variables $q_i$. Then we translate the constraints in our optimization problem into values of the weights $a_i$ and strengths $b_{ij}$ such that when the objective is minimized the qubits will satisfy the constraints.

+
+ +

In this context,

+ +
+
    +
  • ""Each sample is simply the collection of $q_i$ values for the entire set of qubits which enter into our problem"".

  • +
  • ""The distribution is an equal weighting across all the samples that give the minimum (or slightly above in practice) value of our objective function"".

  • +
+
+ +
+ +

So as far as I understood, three things are known to the programmer in beforehand:

+ +
    +
  1. The set of constraints (weights and couplers)
  2. +
  3. The distribution's domain (encoding of the qbits)
  4. +
  5. The fact that the distribution is a linear mixture model
  6. +
+ +

Then, running the program once will return a distribution which is by itself an average of all the possible optimal samples in the domain that satisfy the constraints.
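As a sanity check of the semantics described above, here is a toy classical model (my own sketch — the real QPU samples physically rather than enumerating): brute-force the objective $O(a,b;q)$ over all bit strings and return the set of minimizers, which the quoted documentation says are sampled with equal weight.

```python
import itertools

def sample_minima(a, b):
    """Toy model of a QMI: a = list of qubit weights a_i, b = dict mapping
    coupler pairs (i, j) to strengths b_ij.  Returns the minimum of
    O(a, b; q) = sum_i a_i q_i + sum_{<i,j>} b_ij q_i q_j
    and all bit strings q attaining it (the 'equal-weight distribution')."""
    n = len(a)
    best, minima = None, []
    for q in itertools.product([0, 1], repeat=n):
        val = sum(a[i] * q[i] for i in range(n))
        val += sum(w * q[i] * q[j] for (i, j), w in b.items())
        if best is None or val < best:
            best, minima = val, [q]
        elif val == best:
            minima.append(q)
    return best, minima
```

For example, `sample_minima([1, -1], {(0, 1): 2})` returns `(-1, [(0, 1)])`: the single optimal configuration sets only the second qubit.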

+ +

And now I will try to answer my questions:

+ +
+
    +
  • Given some data following a possibly unknown distribution, how can the D-Wave be used to perform sampling on it? What are the constraints?
  • +
+
+ +

I will start with the constraints:

+ +
    +
  1. The distribution has to be representable as a linear mixture model of qbit distributions.
  2. +
  3. The hardware has to be capable of encoding the whole domain of the distribution
  4. +
  5. The programmer has to be able to express the desired distribution as a combination of sets of constraints within the mixture model. Therefore the distribution has to be known implicitly, but not explicitly: this in fact suits machine learning and data-driven workflows very well.
  6. +
+ +

Given these assumptions, it should be possible to extract the probability density function by running the program once per constraint set, and performing the pertinent combination. Note that the linearity of the mixture model is somewhat of a limitation, but also has its advantages regarding such combinations.

+ +
+
    +
  • To what extent is it advantageous against other SoTA classical sampling algorithms like, say, Gibbs?
  • +
+
+ +

The advantage comes from the speedup that ""superimposed"" states give by enabling a single program's output to collapse multiple samples. But there is one big caveat: the output returns all the minimum states for the given constraint. This means that the speedup depends on the programmer's ability of encoding the constraints in a way that collapse as many outputs as possible. Without getting into big-O, this doesn't seem trivial at all to me.

+ +
+
    +
  • Are the possible advantages specific to annealing QPUs, to any QPUs or a general property of simulated annealing?
  • +
+
+ +

I don't know about other QPUs (help very welcome), but regarding Wikipedia's pseudocode for simulated annealing we see that the output is a single sample. So, given the big caveat discussed before, this is already the worst-case scenario for a D-Wave sampler.

+ +
+ +

POSSIBLE APPLICATION EXAMPLES:

+ +
    +
  • This related paper (linked by user hopefully coherent) presents an algorithm that could be a candidate for this case: At the end of page 2, + +
    +

    ""We address the first term in (1) by describing a quantum procedure to efficiently sample the eigenvalues of an $n\times n$ Hermitian matrix $A$ uniformly at random""

    +
  • +
+ +

It would be very interesting to see if/how the constraints for eigenvalues can be encoded as the above discussed QMIs.

+ +
    +
  • Insert here some example for the encoding of some data-driven setup like ""given this $N$-dimensional dataset, find the projection or embedding that minimizes some non-convex objective"" into such QMIs.
  • +
+",1346,,1346,,04-09-2018 00:12,04-09-2018 00:12,,,,6,,,,CC BY-SA 3.0 +1638,2,,1631,04-08-2018 08:55,,5,,"

Although the linked wikipedia article is trying to use entanglement as a distinguishing feature from classical physics, I think one can start to get some understanding about entanglement by looking at classical stuff, where our intuition works a little better...

+ +

Imagine you have a random number generator that, each time, spits out a number 0,1,2 or 3. Usually you'd make these equally probability, but we can assign any probability to each outcome that we want. For example, let's give 1 and 2 each with probability 1/2, and never give 0 or 3. So, each time the random number generator picks something, it gives 1 or 2, and you don't know in advance what it's going to be. Now, let's write these numbers in binary, 1 as 01 and 2 as 10. Then, we give each bit to a different person, say Alice and Bob. Now, when the random number generator picks a value, either 01 or 10, Alice has one part, and Bob has the other. So, Alice can look at her bit, and whatever value she gets, she knows that Bob has the opposite value. We say these bits are perfectly anti-correlated.
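That classical recipe is trivial to simulate (a sketch of the paragraph above; the names are mine):

```python
import random

def classical_pair(rng=random):
    """Pick 1 (binary 01) or 2 (binary 10) with probability 1/2 each,
    then hand Alice the first bit and Bob the second."""
    n = rng.choice([1, 2])
    alice, bob = (n >> 1) & 1, n & 1
    return alice, bob
```

Whatever Alice sees, Bob holds the opposite bit — but, crucially, a third party who watched the generator could predict both.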

+ +

Entanglement works much the same way. For example, you might have a quantum state $$\left|\psi\right\rangle=\frac{1}{\sqrt{2}}\left(\left|01\right\rangle-\left|10\right\rangle\right)$$ +where Alice holds one qubit of $\left|\psi\right\rangle$, and Bob holds the other. Whatever single-qubit projective measurement Alice chooses to make, she'll get an answer 0 or 1. If Bob makes the same measurement on his qubit, he always gets the opposite answer. This includes measuring in the Z-basis, which reproduces the classical case.

+ +

The difference comes from the fact that this holds true for every possible measurement basis, and for that to be the case, the measurement outcome must be unpredictable, and that's where it differs from the classical case (you may like to read up about Bell tests, specifically the CHSH test). In the classical random number example I described at the start, once the random number generator has picked something, there's no reason why it can't be copied. Somebody else would be able to know what answer both Alice and Bob would get. However, in the quantum version, the answers that Alice and Bob get do not exist in advance, and therefore nobody else can know them. If somebody did know them, the two answers would not be perfectly anti-correlated. This is the basis of Quantum Key Distribution as it basically describes being able to detect the presence of an eavesdropper.
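The all-bases anti-correlation of the singlet is easy to verify with a small state-vector calculation (my own sketch; measurement axes are taken in the X–Z plane for simplicity):

```python
import numpy as np

def singlet_probs(theta_a, theta_b):
    """Joint outcome probabilities when Alice and Bob measure their halves of
    (|01> - |10>)/sqrt(2) along axes tilted by theta_a and theta_b."""
    psi = np.array([0, 1, -1, 0]) / np.sqrt(2)   # basis |00>, |01>, |10>, |11>
    def basis(theta):                            # rows = outcome 0 / outcome 1
        c, s = np.cos(theta / 2), np.sin(theta / 2)
        return np.array([[c, s], [-s, c]])
    Ua, Ub = basis(theta_a), basis(theta_b)
    return {(oa, ob): abs(np.kron(Ua[oa], Ub[ob]) @ psi) ** 2
            for oa in (0, 1) for ob in (0, 1)}
```

For any common angle, `singlet_probs(t, t)` puts all the probability on the anti-correlated outcomes (0, 1) and (1, 0), exactly as described above.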

+ +

Something further that may help in trying to understand entanglement: mathematically, it’s no different to superposition, it’s just that, at some point, you separate the superposed parts over a great distance, and the fact that that is in some sense difficult to do means that making the separation provides you with a resource that you can do interesting things with. Really, entanglement is the resource of what one might call ‘distributed superposition’.

+",1837,,1837,,04-08-2018 10:18,04-08-2018 10:18,,,,0,,,,CC BY-SA 3.0 +1639,2,,1634,04-08-2018 10:50,,4,,"

Why not input one half of a maximally entangled state as the input to the black box (so that half has the same dimension as the input dimension)? Then you could test your favourite measure, such as the purity, of the full output state. If the oracle corresponds to a unitary evolution, the purity is 1. The less coherent the smaller the purity. Incidentally, the output state describes the map that the black box implements, via the Choi-Jamiołkowski isomorphism.
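A small numerical sketch of this test (my own code; the channel is specified by Kraus operators for illustration): build the Choi state of the map and compute its purity $\mathrm{Tr}(\rho^2)$, which equals 1 exactly when the box acts unitarily.

```python
import numpy as np

def choi_purity(kraus_ops):
    """Purity of the Choi state (I ⊗ E)(|Ω><Ω|) of a channel E with the
    given Kraus operators, where |Ω> = (1/sqrt(d)) Σ_i |i>|i>."""
    d = kraus_ops[0].shape[0]
    omega = np.eye(d).reshape(d * d) / np.sqrt(d)   # |Ω> as a flat vector
    rho = np.zeros((d * d, d * d), dtype=complex)
    for K in kraus_ops:
        v = np.kron(np.eye(d), K) @ omega
        rho += np.outer(v, v.conj())
    return float(np.real(np.trace(rho @ rho)))
```

A unitary box gives purity 1; a fully dephasing qubit channel (Kraus operators $|0\rangle\langle0|$ and $|1\rangle\langle1|$) gives 1/2.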

+",1837,,1837,,04-08-2018 12:02,04-08-2018 12:02,,,,2,,,,CC BY-SA 3.0 +1640,2,,1634,04-08-2018 12:19,,2,,"

I'm not exactly sure what you mean by the quantumness of your black box, so maybe there are some more sophisticated approaches (similar to the other answer, you could use an entanglement witness to show that your black box is not entanglement breaking). However, in general you could perform quantum process tomography (see e.g. arXiv:quant-ph/9611013).

+",104,,,,,04-08-2018 12:19,,,,3,,,,CC BY-SA 3.0 +1641,2,,1629,04-08-2018 19:44,,5,,"

In the link included in your question, about another question written by user098876, "Understanding the Bloch sphere", Daniel makes a helpful comment:

+
+

"Drawing points on the sphere to represent the state of a quantum two-level system does not mean that you should think of those points as real vectors in 3D space. – DanielSank Sep 3 '15 at 20:17".

+
+

Oversimplified explanation: It's a two-side plane (or two planes) projected on a sphere.

+
+

"I found this notation quite confusing, because orthogonal vectors are spatially antiparallel (brief explanation in this Physics Stackexchange question). Do you know any different graphical representation for a single qubit?"

+
+

There are a number of efforts underway to provide a more general representation that extends from qubits to qudits. This explanation and representation using a Majorana sphere isn't so different, it's still a sphere, but perhaps it's less confusing:

+

For qubits on a Majorana sphere see: "N-qubit states as points on the Bloch sphere".

+
+

"Abstract. We show how the Majorana representation can be used to express the pure states of an N-qubit system ... In conclusion, the Majorana representation is useful when spin-$S$ particles are studied, while the alternative representation is preferable when the states of an $N$-qubit system are discussed. Besides helping to visualize $N$-qubit states and the way they transform in rotations and other operations, the latter representation may also help to identify some special $N$-qubit states, like the Majorana representation did in the context of spinor Bose-Einstein condensates.".

+
+

See: "Majorana representation, qutrit Hilbert space and NMR implementation of qutrit gates":

+

Page 1:

+
+

"The Bloch sphere provides a representation of the quantum states of a single qubit onto $\mathcal S^2$ (a unit sphere in three real dimensions), with pure states mapped onto the surface and the mixed states lying in the interior. This geometrical representation is useful in providing a visualization of quantum states and their transformations, particularly in the case of NMR-based quantum computation, where the spin-$\frac{1}{2}$ magnetization and its transformation through NMR rf pulses is visualized on the Bloch sphere. There have been several proposals for the geometrical representation for higher-level quantum systems; however, extensions of a Bloch sphere-like picture to higher spins is not straightforward. A geometrical representation was proposed by Majorana in which a pure state of a spin ‘$s$’ is represented by ‘$2s$’ points on the surface of a unit sphere, called the Majorana sphere.

+

The Majorana representation for spin−$s$ systems has found widespread applications such as determining geometric phase of spins, representing $N$ spinors by $N$ points, geometrical representation of multi-qubit entangled states, statistics of chaotic quantum dynamical systems and characterizing polarized light. A single qutrit (three-level quantum system) is of particular importance in qudit-based ($d$-level quantum system) quantum computing schemes. A qutrit is the smallest system that exhibits inherent quantum features such as contextuality, which has been conjectured to be a resource for quantum computing. NMR qudit quantum computing can be performed by using nuclei with spin s > $\frac{1}{2}$ or can be modeled by two or more coupled spin-$\frac{1}{2}$ nuclei. In this work we use the Majorana sphere description of a single qutrit, where states of a qutrit are represented by a pair of points on a unit sphere, to provide insights into the qutrit state space.

+
+

Page 5:

+
+

The magnitude of the magnetization vector $\vert\vec{M}\vert$ in a pure ensemble of a single qutrit can assume values in the range $[0,1]$. On the contrary, the pure ensemble of a qubit always possesses unit magnitude of the magnetization vector associated with it. The geometrical picture of the single qutrit magnetization vector is provided by the Majorana representation. The value $\vert\vec{M}\vert$ depends upon the length of the bisector $OO'$ and lies along the $z$-axis and is rotationally invariant. Thus corresponding to a given value of the length of the bisector, one can assume concentric spheres with continuously varying radii, whose surfaces are the surfaces of constant magnetization. The radii of these spheres are equal to $\vert\vec{M}\vert$, that vary in the range $[0,1]$.

+
+

Page 10:

+
+

CONCLUDING REMARKS

+

A geometrical representation of a qutrit is described in this work, wherein qutrit states are represented by two points on a unit sphere as per the Majorana representation. A parameterization of single-qutrit states was obtained to generate arbitrary states from a one-parameter family of canonical states via the action of $SO(3)$ transformations. The spin-$1$ magnetization vector was represented on the Majorana sphere and states were identified as ‘pointing’ or ‘non-pointing’ depending on the zero or non-zero value of the spin magnetization. The transformations generated by the action of $SU(3)$ generators were also integrated into the Majorana geometrical picture. Unlike qubits, the decomposition of single-qutrit quantum gates in terms of radio-frequency pulses is not straightforward and the Majorana sphere representation provides a way to geometrically describe these gates. Close observations of the dynamics of points representing a qutrit on the Majorana sphere under the action of various quantum gates were used to obtain the rf pulse decompositions and basic single-qutrit gates were experimentally implemented using NMR.

+
+

+

FIG. 1. A qutrit on the Majorana sphere is represented by two points $P_1$ and $P_2$, connected with the center of the sphere by lines shown in red and blue respectively. $\theta_1$, $\phi_1$ are the polar and azimuthal angles corresponding to point $P_1$ ($\theta_2$, $\phi_2$ are the angles for point $P_2$). (a) Roots of the Majorana polynomial are shown in the plane $z = 0$ by points $P'_1$ and $P'_2$, whose stereographic projection give rise to the Majorana representation. Three examples are shown corresponding to the Majorana representation of single-qutrit basis vectors $(b)\;\vert+1\rangle$, $(c)\;\vert0\rangle$ and $(d)\;\vert-1\rangle$. One of the points is shown as a solid (red) circle, while the other point is represented by an empty (blue) circle.
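To make the construction in Fig. 1 concrete, here is a small sketch (my own code, not from the paper; sign and projection conventions vary between references) that computes the $2s$ Majorana points of a spin-$s$ state by finding the roots of the Majorana polynomial and stereographically projecting them onto the unit sphere:

```python
import numpy as np
from math import comb

def majorana_points(c):
    """Majorana points for a spin-s state with amplitudes
    c = (c_s, c_{s-1}, ..., c_{-s}); len(c) = 2s + 1, so a qutrit gives 2 points.
    Convention used here: roots of sum_k (-1)^k sqrt(C(2s, k)) c_k z^(2s-k),
    stereographically projected with z = 0 -> north pole, |z| -> inf -> south."""
    n = len(c) - 1                                     # number of points, 2s
    coeffs = [(-1) ** k * np.sqrt(comb(n, k)) * c[k] for k in range(n + 1)]
    roots = np.roots(coeffs)                           # leading zeros drop roots
    pts = []
    for z in roots:
        x, y, r2 = z.real, z.imag, abs(z) ** 2
        pts.append(((2 * x) / (1 + r2), (2 * y) / (1 + r2), (1 - r2) / (1 + r2)))
    while len(pts) < n:                                # roots "at infinity"
        pts.append((0.0, 0.0, -1.0))
    return pts
```

With this convention the qutrit basis states behave as in Fig. 1: $|{+}1\rangle$ puts both points at one pole, $|{-}1\rangle$ at the other, and $|0\rangle$ gives an antipodal pair.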

+

See: "Majorana Representation of Higher Spin States" (.PDF) by Wheeler (Website) or "Wigner tomography of multispin quantum states":

+
+

What does it look like using Tomography - "In this paper, we theoretically develop a tomography scheme for spherical functions of arbitrary multispin quantum states. We study experimental schemes to reconstruct the generalized Wigner representation of a given density operator (representing mixed or pure quantum states)."

+
+

Compare that to the complexity of the Bloch sphere depicted in: "Bloch-sphere representation of three-vertex geometric phases". The shape is the same it's all how you visualize the projection used.

+

Here's a less busy image:

+

+

Think of the Bloch sphere cut in half by a very large sheet of paper. At the edge of the paper (infinity) any point on the top of the sheet draws a line to (infinity) the top of the ball (the bottom of the ball for the underside of the sheet). Points nearest the center of the paper (mixed states) draw lines to the center of the sphere. That represents the distance up to infinity on a tiny ball, a qubit/qudit is finite so the paper is not so big.

+

Now draw points on the 2D paper, draw lines from the paper to the ball, remove the paper, and look at or through the clear ball to see the other endpoint of the line.

+

A much more accurate and difficult explanation is offered in the links above.

+",278,,-1,,6/18/2020 8:31,04-09-2018 14:12,,,,2,,,,CC BY-SA 3.0 +1642,1,1683,,04-08-2018 20:53,,5,216,"

Chemistry background: In magnetic molecules, it is sometimes the case that one can adjust the time-independent Hamiltonian by chemical design. This means there is freedom to adjust parameters in the time-independent Hamiltonian during the design phase, but when the device is prepared these parameters cannot be adjusted further. An example would be molecules containing magnetic ions with a well-defined spin anisotropy and which communicate with each other via dipolar interactions: if it is chemically possible to position the spins in 3D space and to orient their magnetization axes, one has a certain control over the final form of the time-independent Hamiltonian. (For a state-of-the-art example, see A modular design of molecular qubits to implement universal quantum gates).

+ +

Operational problem: In a given experimental setup aiming to implement quantum computing, there will be a collection of physical operations (described by a time-dependent Hamiltonian) which in principle allow for arbitrary quantum logical operations. In practice, the number of physical operations needed to implement a certain quantum algorithm (or even an elementary logical quantum gate) also depends on the details of the time-independent Hamiltonian. (See for example Spin qubits with electrically gated polyoxometalate molecules, where sqrt(SWAP) is spontaneously ""implemented"" by a simple waiting time).

+ +

Goal + question: Whenever one aims for a certain quantum logical manipulation (such as a Quantum Fourier Transform), it seems obvious that there will be some time-independent Hamiltonians that will require a smaller number of physical operations than others. Our goal would be to find design criteria; to find the parameters that we can adjust by chemical design that make the number of operations small, not in a particular case but for a typical collection of quantum transformations. In the form of a question: is there a reasonable set of quantum logical operations that could be used in this context to benchmark the typical efficiency of time-independent spin Hamiltonians?

+",1847,,26,,12/23/2018 14:06,12/23/2018 14:06,What set of quantum logical operations can one use to benchmark spin Hamiltonians?,,1,1,,,,CC BY-SA 3.0 +1643,2,,1339,04-08-2018 21:12,,1,,"

The first paper you mention, by Tejada et al, does not actually refer to conventional nanoparticles as such, but rather to single molecule magnets. This other paper by Loss, Quantum computing in molecular magnets, is about the same systems but perhaps more clear in quantum computing terms, since it gives more details on the underlying Hamiltonian and gives a suggestion for a quantum algorithm. Magnetic molecules do present decoherence (see for example Decoherence in crystals of quantum molecular magnets), but their mechanisms are different from those of magnetic nanoparticles, at least in the details, meaning Barkhausen noise is not a sufficient description for those. In any case, you may want to check the supporting information of this latter reference by Stamp et al, since it includes methodological details on how to estimate the different decoherent sources.

+",1847,,,,,04-08-2018 21:12,,,,2,,,,CC BY-SA 3.0 +1646,2,,1549,04-09-2018 05:31,,3,,"

Your question revolves implicitly around the concept of quantum decoherence and how to protect real-world implementations of qubits from it for a long time.

+ +

This is an incredibly general problem, and at the same time, the details are wildly dependent on the technology used.

+ +

If you have access to it, you can check chapter 5: ""Noise and decoherence"" of Theory and Design of Quantum Coherent Structures. Also, for illustration on the current state-of-the-art of different approaches, you can check this European project on Engineering electronic quantum coherence and correlations in hybrid nanostructures, or this other European project (disclaimer: this is my own approach) on A Chemical Approach to Molecular Spin Qubits.

+ +
+ +

Since the problem of storage of quantum information is vital, some general strategies have been developed. In a nutshell:

+ +
    +
  • Quantum Error Correction (also, for a slightly outdated pedagogical review see Quantum Error Correction for Beginners) which is a huge field by itself and which is based precisely on admitting the failure in building a sufficient protection to qubits and therefore the necessity for an active intervention to protect quantum information from degrading.

  • +
  • Different approaches to hybrid quantum devices exist, where the information is processed in qubits that interact strongly and quickly with each other and our external stimuli (and also with noise sources) and subsequently stored in qubits that interact very weakly and slowly with every stimulus (desirable or not). Again, this family of approaches is too much dependent on the technological details to make general statements.

  • +
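To illustrate the first strategy, here is a toy state-vector sketch of the simplest quantum error-correcting idea, the 3-qubit bit-flip (repetition) code (my own minimal example, not tied to any particular hardware):

```python
import numpy as np

def encode(alpha, beta):
    """Bit-flip code: a|0> + b|1>  ->  a|000> + b|111>."""
    psi = np.zeros(8, dtype=complex)
    psi[0b000], psi[0b111] = alpha, beta
    return psi

def flip(psi, qubit):
    """Apply an X error to one qubit of a 3-qubit state vector."""
    out = np.zeros_like(psi)
    for i, amp in enumerate(psi):
        out[i ^ (1 << qubit)] = amp
    return out

def correct(psi):
    """Majority-vote decoder: moves each basis state to the nearest codeword,
    which undoes any single bit flip while preserving the superposition."""
    out = np.zeros_like(psi)
    for i, amp in enumerate(psi):
        maj = int(bin(i).count('1') >= 2)
        out[0b111 * maj] += amp
    return out
```

After any single `flip`, `correct` restores the encoded state exactly; two or more flips defeat the code, which is why practical schemes combine larger codes with active syndrome measurement.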
+",1847,,58,,04-11-2018 07:11,04-11-2018 07:11,,,,0,,,,CC BY-SA 3.0 +1647,2,,1634,04-09-2018 06:50,,2,,"

Let's assume that your black box processes classical inputs (i.e. a bit string) to classical outputs in a deterministic way, i.e. it defines a function $f:x\mapsto y$.

+ +

If you can only prepare and measure separable states in that basis, all you can determine is what that function $f$ is. Assuming that all the outputs are different, it could have been computed either by a reversible classical computation or a quantum computation, and you wouldn't be able to tell.

+ +

So, let's assume you can prepare product states and measure in two different bases, $X$ and $Z$ for the sake of argument. One thing that you could do (which may be hopelessly inefficient for all I know, but it's somewhere to start) is first determine the function $f(x)$ using the $Z$ basis. Then, for any pair of bit strings $x_1$ and $x_2$ that differ in only one position, prepare the state $(\left|x_1\right\rangle\pm\left|x_2\right\rangle)/\sqrt{2}$. This is a product state, using the $Z$ basis on all but one site. Let's assume that the outputs $y_1=f(x_1)$ and $y_2=f(x_2)$ differ on $k>0$ sites. (If $k=0$, the evolution wasn't coherent anyway.) For the bits where $y_1$ and $y_2$ are supposed to be equal, just measure them in the $Z$ basis to make sure you get what you expect to get. On the remaining $k$ sites, if the black box is coherent, you receive a GHZ state of $k$ qubits, $$\frac{1}{\sqrt{2}}(\left|y_1\right\rangle\pm\left|y_2\right\rangle).$$ If it were completely incoherent, you'd get a rank two mixed state $$\frac{1}{2}\left(\left|y_1\right\rangle\left\langle y_1\right|+\left|y_2\right\rangle\left\langle y_2\right|\right).$$ If $k=1$, you can distinguish these directly by measuring that qubit in the $X$ basis (repeating a few times to get statistics). For $k>1$ you have a few options. Either you can perform a Bell test ($k=2$) or equivalent for GHZ states (such as all versus nothing proofs), or apply an entanglement witness (there are some based on single-qubit observables). Alternatively, measure every qubit in the $X$ basis and record the outcomes. In the case of the entangled state, the last outcome should be entirely predictable based on the previous outcomes. For the mixed state, the answer will be completely unpredictable. 
If you want to make a more quantitative statement, you could use something like an entropy, $H(X|Y)$ where $X$ is the random variable describing the output of the last measurement, and $Y$ is the random variable describing the outcome of all the previous measurements.
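The last test above — all-$X$ measurements whose outcome product is fixed for the GHZ state but random for the mixture — can be sketched numerically (my own code; it handles pure states, so the mixed case is modelled by averaging the two branches):

```python
import numpy as np
from itertools import product

def x_parity_distribution(state):
    """Distribution of the product of X-basis outcomes (+1/-1) for a k-qubit
    pure state given as a 2^k amplitude vector."""
    k = int(np.log2(len(state)))
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # rows = <+| and <-| in Z basis
    probs = {+1: 0.0, -1: 0.0}
    for bits in product((0, 1), repeat=k):
        proj = np.array([1.0])
        for b in bits:
            proj = np.kron(proj, H[b])
        probs[(-1) ** sum(bits)] += abs(proj @ state) ** 2
    return probs
```

For the coherent output $(|y_1\rangle + |y_2\rangle)/\sqrt{2}$ restricted to the $k$ differing sites (a GHZ state), the parity is deterministic; averaging the distributions of $|y_1\rangle$ and $|y_2\rangle$ separately models the incoherent case and gives a fair coin.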

+ +

One possible issue is that by testing only inputs with a single site prepared in the $X$ basis, there are a lot of options you're not testing, so I don't know whether testing all of these coherences is enough, or whether one ought to start analysing what happens if you prepare pairs of sites in the $X$ basis, and so on.

+ +

Of course, while this tells you something about how coherent the implementation of the black box is, whether or not that coherence contributes to the speed of operation of the black box is a completely different matter (for example, that's the sort of thing people want to know about with transport processes in photosynthetic bacteria, or even something like D-Wave).

+",1837,,,,,04-09-2018 06:50,,,,0,,,,CC BY-SA 3.0 +1648,1,,,04-09-2018 07:15,,21,2037,"

Now that we know of bio/molecular tools that allow living organisms to deal with quantum computations e.g. the fancy proteins that allow birds to handle quantum coherence (e.g. The quantum needle of the avian magnetic compass or Double-Cone Localization and Seasonal Expression Pattern Suggest a Role in Magnetoreception for European Robin Cryptochrome 4) I wonder:

+ +
    +
  • Are these tools already solving problems you (quantum computing researchers) have?
  • +
  • Is there any specific issue these tools 'must' be solving somehow that you are struggling with at your labs?
  • +
  • Could we use them (although this will imply a paradigm shift towards biotechnology)?
  • +
+",1894,,1847,,4/22/2018 16:59,3/16/2021 9:52,Is Quantum Biocomputing ahead of us?,,3,1,,,,CC BY-SA 3.0 +1654,1,1656,,04-09-2018 18:37,,46,11565,"

This can be seen as the software complement to How does a quantum computer do basic math at the hardware level?

+ +

The question was asked by a member of the audience at the 4th network of the Spanish Network on Quantum Information and Quantum Technologies. The context the person gave was: ""I'm a materials scientist. You are introducing advanced sophisticated theoretical concepts, but I have trouble picturing the practical operation of a quantum computer for a simple task. If I was using diodes, transistors etc I could easily figure out myself the classical operations I need to run to add 1+1. How would you do that, in detail, on a quantum computer? "".

+",1847,,26,,12/23/2018 14:04,3/23/2019 13:37,How do I add 1+1 using a quantum computer?,,4,0,,,,CC BY-SA 3.0 +1655,2,,1654,04-09-2018 20:16,,6,,"

A new method for computing sums on a quantum computer is introduced. This technique uses the quantum Fourier transform and reduces the number of qubits necessary for addition by removing the need for temporary carry bits.

+ +

PDF link for 'Addition on a Quantum Computer' by Thomas G. Draper, written September 1, 1998, revised June 15, 2000.

+ +

To summarize the above link, addition is performed according to the following circuit diagram (taken from page 6):

+ +

+ +

To quote the paper (again, page 6):

+ +
+

The quantum addition is performed using a sequence of conditional rotations + which are mutually commutative. The structure is very similar to the quantum + Fourier transform, but the rotations are conditioned on $n$ external bits.

+
+",1875,,91,,4/14/2018 20:16,4/14/2018 20:16,,,,0,,,,CC BY-SA 3.0 +1656,2,,1654,04-09-2018 20:34,,28,,"

As per the linked question, the simplest solution is just to get the classical processor to perform such operations if possible. Of course, that may not be possible, so we want to create an adder.

+ +

There are two types of single bit adder - the half-adder and the full adder. The half-adder takes the inputs $A$ and $B$ and outputs the 'sum' (XOR operation) $S = A\oplus B$ and the 'carry' (AND operation) $C = A\cdot B$. A full adder also has the 'carry in' $C_{in}$ input and the 'carry out' output $C_{out}$, replacing $C$. This returns $S=A\oplus B\oplus C_{in}$ and $C_{out} = C_{in}\cdot\left(A+B\right) + A\cdot B$.
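As a sanity check (my addition, not part of the original answer), the classical truth tables above can be verified directly in Python:

```python
def half_adder(a, b):
    return a ^ b, a & b                          # (S, C)

def full_adder(a, b, cin):
    s = a ^ b ^ cin
    cout = (cin & (a | b)) | (a & b)             # C_out, reading + as OR
    return s, cout

for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        assert 2 * c + s == a + b                # two-bit binary sum of a and b
        for cin in (0, 1):
            s, cout = full_adder(a, b, cin)
            assert 2 * cout + s == a + b + cin   # carry/sum encode the count of set inputs
```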

+ +
+ +

Quantum version of the half-adder

+ +

Looking at the CNOT gate on qubit register $A$ controlling register $B$: \begin{align*}\text{CNOT}_{A\rightarrow B}\left|0\right>_A\left|0\right>_B &= \left|0\right>_A\left|0\right>_B \\ \text{CNOT}_{A\rightarrow B}\left|0\right>_A\left|1\right>_B &= \left|0\right>_A\left|1\right>_B \\\text{CNOT}_{A\rightarrow B}\left|1\right>_A\left|0\right>_B &= \left|1\right>_A\left|1\right>_B \\\text{CNOT}_{A\rightarrow B}\left|1\right>_A\left|1\right>_B &= \left|1\right>_A\left|0\right>_B, \\ +\end{align*} which immediately gives the output of the $B$ register as $A\oplus B = S$. However, we have yet to compute the carry and the state of the $B$ register has changed, so we also need to perform the AND operation. This can be done using the 3-qubit Toffoli (controlled-controlled-NOT/CCNOT) gate, using registers $A$ and $B$ as control registers and initialising the third register $\left(C\right)$ in state $\left|0\right>$, giving the output of the third register as $A\cdot B = C$. Implementing Toffoli on registers $A$ and $B$ controlling register $C$ followed by CNOT with $A$ controlling $B$ gives the output of register $B$ as the sum and the output of register $C$ as the carry. A quantum circuit diagram of the half-adder is shown in figure 1.
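This construction can be sanity-checked numerically (my addition, not part of the original answer): since Toffoli and CNOT only permute computational basis states, we can build them as permutation matrices with numpy and verify that Toffoli followed by CNOT maps $\left|A\right>\left|B\right>\left|0\right>$ to $\left|A\right>\left|S\right>\left|C\right>$.

```python
import numpy as np

def perm_unitary(n, f):
    """n-qubit unitary permuting basis states by the bit map f (qubit 0 = MSB)."""
    U = np.zeros((2**n, 2**n))
    for x in range(2**n):
        bits = [(x >> (n - 1 - i)) & 1 for i in range(n)]
        y = f(bits)
        U[sum(b << (n - 1 - i) for i, b in enumerate(y)), x] = 1.0
    return U

# Toffoli: A and B control C; CNOT: A controls B
toffoli = perm_unitary(3, lambda b: [b[0], b[1], b[2] ^ (b[0] & b[1])])
cnot = perm_unitary(3, lambda b: [b[0], b[0] ^ b[1], b[2]])
half_adder = cnot @ toffoli  # rightmost factor acts first: Toffoli, then CNOT

for A in (0, 1):
    for B in (0, 1):
        state = np.zeros(8)
        state[(A << 2) | (B << 1)] = 1.0           # |A>|B>|0>
        idx = int(np.argmax(half_adder @ state))   # output is still a basis state
        a, s, c = (idx >> 2) & 1, (idx >> 1) & 1, idx & 1
        assert (a, s, c) == (A, A ^ B, A & B)      # |A>|S>|C>
```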

+ +
+ +

+ +

Figure 1: Circuit Diagram of a half-adder, consisting of Toffoli followed by CNOT. Input bits are $A$ and $B$, giving the sum $S$ with carry out $C$.

+ +
+ +

Quantum version of the full adder

+ +

Shown in figure 2, a simple way of doing this for single bits is by using $4$ qubit registers, here labelled $A$, $B$, $C_{in}$ and $1$, where $1$ starts in state $\left|0\right>$, so the initial state is $\left|A\right>\left|B\right>\left|C_{in}\right>\left|0\right>$:

+ +
    +
  1. Apply Toffoli using $A$ and $B$ to control $1$: $\left|A\right>\left|B\right>\left|C_{in}\right>\left|A\cdot B\right>$
  2. +
  3. CNOT with $A$ controlling $B$: $\left|A\right>\left|A\oplus B\right>\left|C_{in}\right>\left|A\cdot B\right>$
  4. +
  5. Toffoli with $B$ and $C_{in}$ controlling $1$: $\left|A\right>\left|A\oplus B\right>\left|C_{in}\right>\left|A\cdot B\oplus\left(A\oplus B\right)\cdot C_{in} = C_{out}\right>$
  6. +
  7. CNOT with $B$ controlling $C_{in}$: $\left|A\right>\left|A\oplus B\right>\left|A\oplus B\oplus C_{in} = S\right>\left|C_{out}\right>$
  8. +
+ +

A final step to get back the inputs $A$ and $B$ is to apply a CNOT with register $A$ controlling register $B$, giving the final output state as $$\left|\psi_{out}\right> = \left|A\right>\left|B\right>\left|S\right>\left|C_{out}\right>$$

+ +

This gives the output of register $C_{in}$ as the sum and the output of register $1$ as the carry out.
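Because every gate in this circuit permutes computational basis states, the whole construction can be checked by tracking classical bits. The following Python sketch (my addition, not part of the original answer) replays steps 1–4 plus the final CNOT:

```python
def full_adder_circuit(A, B, Cin):
    r = [A, B, Cin, 0]           # registers A, B, C_in and 1 (initialised to |0>)
    r[3] ^= r[0] & r[1]          # 1. Toffoli: A, B control register 1
    r[1] ^= r[0]                 # 2. CNOT: A controls B
    r[3] ^= r[1] & r[2]          # 3. Toffoli: B, C_in control register 1
    r[2] ^= r[1]                 # 4. CNOT: B controls C_in
    r[1] ^= r[0]                 # final CNOT restores register B
    return r                     # [A, B, S, C_out]

for A in (0, 1):
    for B in (0, 1):
        for Cin in (0, 1):
            a, b, s, cout = full_adder_circuit(A, B, Cin)
            assert (a, b) == (A, B)             # inputs recovered
            assert 2 * cout + s == A + B + Cin  # correct two-bit sum
```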

+ +

+ +

Figure 2: Circuit diagram of a full adder. Input bits are $A$ and $B$ along with a carry in $C_{in}$, giving the sum $S$ with carry out $C_{out}$.

+ +
+ +

Quantum version of the ripple carry adder

+ +

A simple extension of the full adder is a ripple carry adder, named as it 'ripples' the carry out to become the carry in of the next adder in a series of adders, allowing for arbitrarily-sized (if slow) sums. A quantum version of such an adder can be found e.g. here

+ +
+ +

Actual implementation of a half-adder

+ +

For many systems, implementing a Toffoli gate is far from as simple as implementing a single qubit (or even two qubit) gate. This answer gives a way of decomposing Toffoli into multiple smaller gates. However, in real systems, such as IBMQX, there can also be issues on which qubits can be used as targets. As such, a real life implementation on IBMQX2 looks like this: +

+ +

Figure 3: Implementation of a half-adder on IBMQX2. In addition to decomposing the Toffoli gate into multiple smaller gates, additional gates are required as not all qubit registers can be used as targets. Registers q[0] and q[1] are added to get the sum in q[1] and the carry in q[2]. In this case, the result q[2]q[1] should be 10. Running this on the processor gave the correct result with a probability of 42.8% (although it was still the most likely outcome).

+",23,,23,,4/14/2018 14:37,4/14/2018 14:37,,,,5,,,,CC BY-SA 3.0 +1657,1,1659,,04-10-2018 00:46,,20,2413,"

The term ""Church of the Higher Hilbert Space"" is used in quantum information frequently when analysing quantum channels and quantum states.

+ +

What does this term mean (or, alternatively, what does the term ""Going To the Church of the Higher Hilbert Space"" mean)?

+",429,,26,,01-01-2019 09:05,01-01-2019 09:05,Significance of The Church of the Higher Hilbert space,,2,1,,,,CC BY-SA 3.0 +1658,2,,1657,04-10-2018 01:35,,8,,"

""Church of the higher hilbert space"" is a term coined by John Smolin. According to quantiki it is:

+ +
+

for the dilation constructions of channels and states, which [...] provide a neat characterization of the set of permissible quantum operations

+
+ +

and to quote wikipedia, it:

+ +
+

describe[s] the habit of regarding every mixed state of a quantum system as a pure entangled state of a larger system, and every irreversible evolution as a reversible (unitary) evolution of a larger system.

+
+ +

See also this Physics.SE answer.

+",91,,91,,04-12-2018 00:10,04-12-2018 00:10,,,,1,,,,CC BY-SA 3.0 +1659,2,,1657,04-10-2018 06:02,,24,,"

The church of the larger (or higher, or greater) Hilbert space is just a trick that some people like (myself included) for rewriting some operations.

+ +

The most general operations that you can write down for a system are described by completely positive maps, while we like describing things with unitaries, which you can always do by moving from the original Hilbert space to a larger one (i.e. adding more qubits). Similarly, for measurements, you can turn general measurements into projective measurements by increasing the size of the Hilbert space. Also, mixed states can be described as pure states of a larger system.

+ +
+ +

Example

+ +

Consider the map that takes a qubit and with probability $1-p$ does nothing, and with probability $p$ applies the bit-flip operation $X$: +$$ +|\psi\rangle\langle\psi|\mapsto (1-p)|\psi\rangle\langle\psi|+pX|\psi\rangle\langle\psi|X +$$ +This is not unitary, but you can describe it as a unitary on two qubits (i.e. by moving from a Hilbert space of dimension 2 to a Hilbert space of dimension 4). This works by introducing an extra qubit in the state $\sqrt{1-p}|0\rangle+\sqrt{p}|1\rangle$ and performing a controlled-not controlled by the new qubit and targeting the original one. +$$ +|\psi\rangle(\sqrt{1-p}|0\rangle+\sqrt{p}|1\rangle)\mapsto|\Psi\rangle=\sqrt{1-p}|\psi\rangle|0\rangle+\sqrt{p}\left(X|\psi\rangle\right)|1\rangle. +$$ +To get back the action of the system on just the original qubit, you trace out the new qubit: +$$ +\rho={\rm Tr}_2\left(|\Psi\rangle\langle\Psi|\right)= (1-p)|\psi\rangle\langle\psi|+pX|\psi\rangle\langle\psi|X. +$$ +In other words, you just ignore the existence of the new qubit after you’ve implemented the unitary! Note that as well as demonstrating the church of the larger Hilbert space for operations, this also demonstrates it for states - the mixed state $\rho$ can be made into the pure state $|\Psi\rangle$ by increasing the size of the Hilbert space.
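Here is a small numpy sketch of this dilation (my addition, not part of the original answer): build $|\Psi\rangle$ with a CNOT controlled by the ancilla, trace the ancilla out, and check that the reduced state matches $(1-p)|\psi\rangle\langle\psi|+pX|\psi\rangle\langle\psi|X$.

```python
import numpy as np

p = 0.3
psi = np.array([0.6, 0.8j])                      # any normalised qubit state
X = np.array([[0, 1], [1, 0]], dtype=complex)

anc = np.array([np.sqrt(1 - p), np.sqrt(p)])     # sqrt(1-p)|0> + sqrt(p)|1>
joint = np.kron(psi, anc)                        # ordering: system ⊗ ancilla

# CNOT with the ancilla as control and the system as target,
# written in the (system, ancilla) ordering used above
cx = np.array([[1, 0, 0, 0],
               [0, 0, 0, 1],
               [0, 0, 1, 0],
               [0, 1, 0, 0]], dtype=complex)
Psi = cx @ joint          # sqrt(1-p)|psi>|0> + sqrt(p)(X|psi>)|1>

# trace out the ancilla: reshape so rows index the system, columns the ancilla
M = Psi.reshape(2, 2)
rho = M @ M.conj().T

target = (1 - p) * np.outer(psi, psi.conj()) + p * X @ np.outer(psi, psi.conj()) @ X
assert np.allclose(rho, target)
```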

+",1837,,1837,,04-11-2018 07:24,04-11-2018 07:24,,,,0,,,,CC BY-SA 3.0 +1660,1,,,04-10-2018 13:33,,10,345,"
+

""Quantum magic won't be enough"" (Bennett et al. 1997)

+ +

If you throw away the problem structure, and just consider the space of $2^n$ possible solutions, then even a quantum computer needs about $\sqrt{2^n}$ steps to find the correct one (using Grover's algorithm) + If a quantum polynomial time algorithm for an $\text{NP}$-complete problem is ever found, it must exploit the problem structure in some way.

+ +

(...)

+ +

I've some (basic) questions that no one seems to have asked so far on this site (maybe because they are basic). Suppose someone finds a bounded error quantum polynomial time algorithm for $\text{SAT}$ (or any other $\text{NP}$-complete problem), thus placing $\text{SAT}$ in $\text{BQP}$, and implying $\text{NP} \subseteq \text{BQP}$.

+
+ +
+ +

Questions

+ +

Which would be the theoretical consequences of such a discovery? How would the overall picture of complexity classes be affected? Which classes would become equal to which others?

+ +

Source

+",58,,1847,,4/27/2018 16:50,1/23/2022 18:23,Consequences of SAT ∈ BQP,,2,4,,,,CC BY-SA 3.0 +1661,2,,1654,04-10-2018 13:36,,6,,"

``If I was using diodes, transistors etc I could easily figure out myself the classical operations I need to run to add 1+1. How would you do that, in detail, on a quantum computer?''

+ +

Impressive! I suspect that most people cannot easily figure out for themselves how to combine diodes and transistors to implement a classical two-bit adder (though I do not doubt this materials scientist can do it). ;)

+ +

Theoretically, the way you implement a classical adder is pretty similar in a classical and quantum computer: you can do that in both cases by implementing a Toffoli gate! (See @Mithrandir24601's answer.)

+ +

But the materials scientist probably wants to understand how to implement such a gate (or an equivalent sequence of other quantum gates) on a physical device. There are probably infinitely many ways to do that using different quantum technologies, but here are two direct realizations of this gate using trapped ions and superconducting qubits:

+ +
    +
  1. Realization of the Quantum Toffoli Gate with Trapped Ions, T. Monz, K. Kim, W. Hänsel, M. Riebe, A. S. Villar, P. Schindler, M. Chwalla, M. Hennrich, and R. Blatt, Phys. Rev. Lett. 102, 040501, arXiv:0804.0082.
  2. +
  3. Implementation of a Toffoli gate with superconducting circuits +A. Fedorov, L. Steffen, M. Baur, M. P. da Silva & A. Wallraff +Nature 481, 170–172, arXiv:1108.3966.
  4. +
+ +

You can also decompose the Toffoli gate as a sequence of single-qubit and CNOT gates. +https://media.nature.com/lw926/nature-assets/srep/2016/160802/srep30600/images/srep30600-f5.jpg +You can read about how to implement these with photonics, cavity-QED and trapped ions in Nielsen and Chuang.

+",1779,,,,,04-10-2018 13:36,,,,1,,,,CC BY-SA 3.0 +1663,1,1668,,04-10-2018 16:16,,3,3567,"

Context

+ +

Lately, I have been reading a scholarly paper entitled An Introduction to Quantum Computing for Non-Physicists which discusses the EPR Paradox.

+ +

The paper states that:

+ +
+

Einstein, Podolsky and Rosen proposed a gedanken experiment that uses entangled particles in a manner that seemed to violate fundamental principles of relativity.

+
+ +

It concludes that the paradox is resolved as the symmetry shown by changing observers indicates that they cannot use their EPR pair to communicate faster than the speed of light. However, the paper fails to adequately explain what an EPR pair is used for and does not even define the term. The best definition I could find is referenced in the Wikipedia article, Bell state.

+ +
+

An EPR pair is a pair of qubits (or quantum bits) that are in a Bell state together.

+
+ +

This definition doesn't provide much detail beyond the basics of what an EPR pair is.

+ +
+ +

The Question

+ +

How are EPR Pairs used in quantum computing?

+",82,,1847,,4/18/2018 18:37,4/18/2018 18:37,How are EPR Pairs used in quantum computing?,,2,1,,,,CC BY-SA 3.0 +1664,1,1666,,04-10-2018 16:34,,6,348,"
+

"Quantum magic won't be enough" (Bennett et al. 1997)

+

If you throw away the problem structure, and just consider the space of $2^n$ possible solutions, then even a quantum computer needs about $\sqrt{2^n}$ steps to find the correct one (using Grover's algorithm) +If a quantum polynomial time algorithm for an $\text{NP}$-complete problem is ever found, it must exploit the problem structure in some way. +I've some (basic) questions that no one seems to have asked so far on this site (maybe because they are basic). Suppose someone finds a bounded error quantum polynomial time algorithm for $\text{SAT}$ (or any other $\text{NP}$-complete problem), thus placing $\text{SAT}$ in $\text{BQP}$, and implying $\text{NP} \subseteq \text{BQP}$.

+
+
+

Questions

+

The possibility (or not) to exploit the problem structure in a general enough (i.e. specific-instance independent) manner seems to be the very core of the $\text{P = NP}$ question. Now if a bounded error polynomial-time quantum algorithm for $\text{SAT}$ is found, and it must exploit the problem structure, wouldn't its structure-exploitation-strategy be usable also in the classical scenario? Is there any evidence indicating that such a structure-exploitation may be possible for quantum computers, while remaining impossible for classical ones?

+

Sources

+

Related

+",58,,-1,,6/18/2020 8:31,04-10-2018 18:12,Quantum Algorithm SAT structure,,1,1,,,,CC BY-SA 3.0 +1665,2,,1289,04-10-2018 17:54,,24,,"

Here is my process for doing arithmetic on a quantum computer.

+ +

Step 1: Find a classical circuit that does the thing you're interested in.

+ +

In this example, a full adder.

+ +

+ +

Step 2: Convert each classical gate into a reversible gate.

+ +

Have your output bits present from the start, and initialize them with CNOTs, CCNOTs, etc.

+ +

+ +

Step 3: Use temporary outputs.

+ +

If you were doing this addition to e.g. control whether a Grover oracle phases by -1 or not, now is the time to apply a Z gate to your output qubit.

+ +

Step 4: Get rid of intermediate values by doing exactly the opposite of what you did to compute them.

+ +

This may or may not include getting rid of the output bits, depending on how the circuit fits into your overall algorithm.

+ +

+ +

Step 5: (Sometimes) for each output bit you keep, get rid of an input bit.

+ +

And I don't mean ""drop them on the floor"", I mean apply operations that cause them to become 0 for sure.

+ +

When you compute c+=a, leaving behind a copy of the original value of c tends to be bad. It destroys coherence. So you must look at your adder circuit (or whatever), and think hard about if there's a way to use your output bits to get rid of your input bits. For example, after computing c+a you could do a temporary out-of-place subtraction into a register r, xor r into the register storing the unwanted copy of c, then revert the temporary subtraction.
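A toy model of that uncompute trick with 8-bit classical registers (my sketch, not from the original answer; the helper name is hypothetical):

```python
MASK = 0xFF  # 8-bit registers, so arithmetic wraps like a fixed-width adder

def uncompute_old_c(a, c):
    s = (c + a) & MASK    # out-of-place sum; the old value of c is still around
    stale = c             # the unwanted copy of c that would destroy coherence
    r = (s - a) & MASK    # temporary out-of-place subtraction: r = c
    stale ^= r            # XOR r into the copy, erasing it to 0
    r ^= (s - a) & MASK   # run the subtraction backwards, returning r to 0
    return s, stale, r

s, stale, r = uncompute_old_c(37, 90)
assert s == 127 and stale == 0 and r == 0   # sum kept, input copy and scratch cleared
```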

+ +

(A notable exception to ""if you keep your output, don't keep so much of your input"" is Shor's algorithm. Shor's algorithm decoheres its input on purpose, but in a very particular way that helps with period finding.)

+ +

Step 6: Be efficient

+ +

In step 5 I said you could uncompute the input of an inplace addition by doing an out of place addition followed by a temporary out-of-place subtraction. This is a bit silly. The overall adding process is going to span 4n qubits (n to hold a, n to hold c, n to hold c+a, n to hold (c+a)-a). If you are more clever, you can fit everything into 2n qubits or (slightly easier) into 2n+1 qubits:

+ +

+",119,,119,,04-10-2018 18:03,04-10-2018 18:03,,,,1,,,,CC BY-SA 3.0 +1666,2,,1664,04-10-2018 18:12,,3,,"
+

wouldn't its structure-exploitation-strategy be usable also in the classical scenario?

+
+ +

Not necessarily. For example, Shor's algorithm exploits the structure of factoring in a way that classical computers can't. Specifically, Shor's algorithm looks for the period of multiplication modulo $N$ (the order of an element of the multiplicative group mod $N$) in a way that requires a quantum Fourier transform to be efficient.
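As an illustration of the structure being exploited (my sketch, not from the original answer): factoring reduces to order finding, i.e. finding the period $r$ of $a^x \bmod N$. Shor's algorithm finds $r$ efficiently with the QFT, while classically we can only demonstrate the reduction by brute force:

```python
from math import gcd

def order(a, N):
    """Smallest r > 0 with a**r ≡ 1 (mod N); brute force stands in for the QFT step."""
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

N, a = 15, 7
assert gcd(a, N) == 1
r = order(a, N)                    # r = 4 for this choice
assert r % 2 == 0                  # even order lets us split N
f = gcd(a ** (r // 2) - 1, N)
assert 1 < f < N
print(f, N // f)                   # non-trivial factors of 15: 3 and 5
```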

+ +

Sometimes quantum algorithms do translate back into the classical domain. For example, you can use phase estimation to apply fractional QFTs in an efficient way. Translate this circuit directly into its classical equivalent, and you get an O(N log(N)) algorithm for fractional FFTs. But there's certainly no guarantees that what works well in the quantum context will work well in the classical context.

+",119,,,,,04-10-2018 18:12,,,,0,,,,CC BY-SA 3.0 +1667,1,1672,,04-10-2018 18:24,,6,1344,"
+
    +
  1. Show that the average value of the observable $X_1Z_2$ in a two-qubit system measured in the state $(|00\rangle + |11\rangle)/\sqrt{2}$ is zero.
  2. +
+
+

How would we approach this question? I understand that $X_1$ means $\sigma_1$ acting on the first qubit, and $Z_2$ means $\sigma_3$ acting on the second qubit. I also know that the average value is given by $\left<\psi\vert M\vert \psi\right>$ (the inner product of $\psi$ with $M \psi$). I know how to solve similar problems for a single qubit system, however what confuses me here is what is the vector representation of the states $\left| 00\right>$ and $\left| 11\right>$ and how is this related to the vector representation of $\left| 0\right>$ and $\left| 1\right>$? Also, what is the matrix form of $M$ in this case? Is it the tensor product of $\sigma_1$ applied to the first qubit and $\sigma_3$ applied to the second qubit?

+

Source of the question

+",1919,,-1,,6/18/2020 8:31,09-05-2019 10:54,How to compute the average value $\langle X_1 Z_2\rangle$ for a two-qubit system?,,2,4,,,,CC BY-SA 4.0 +1668,2,,1663,04-10-2018 20:18,,5,,"

EPR pairs are a particular case of entangled pairs of qubits.

+ +

From Wikipedia: ""Quantum entanglement is a physical phenomenon which occurs when pairs or groups of particles are generated or interact in ways such that the quantum state of each particle cannot be described independently of the state of the other."" More to the point regarding your question, entanglement is a crucially important resource for quantum computing, see Bennett's laws on the inequivalences between bits, qubits and ebits (or entanglement bits).

+ +

Elementary cases of use of EPR pairs in quantum computing would be quantum teleportation and superdense coding, and upon those pieces people have built more sophisticated applications. For the gory details of the original proposal for quantum teleportation, please check Bennett's Teleporting an unknown quantum state via dual classical and Einstein-Podolsky-Rosen Channels (Phys.Rev.Lett., 1993, 70, 1895-1899).

+ +

For a more sophisticated application, please check this answer on the intuition behind the Choi-Jamiolkowski isomorphism, which also heavily relies on maximally entangled qubit pairs. In turn, the application of this isomorphism was suggested to me, here, a couple of days ago, as an answer to how to obtain information on what quantum logic a quantum black box is implementing.

+",1847,,1847,,04-10-2018 20:25,04-10-2018 20:25,,,,1,,,,CC BY-SA 3.0 +1669,2,,1648,04-10-2018 22:48,,9,,"
+

"Is Quantum Biocomputing ahead of us?"

+
+

There has been some work done on biocomputing, quantum computing, spin chemistry, and magnetochemical reactions.

+

Correlated radical pairs — pairs of transient radicals created simultaneously, such that the 2 electron spins, one on each radical, are correlated — on photoactive magnetoreceptive proteins such as Cryptochromes does not constitute quantum computation.

+

See: "Light-dependent magnetoreception in birds: analysis of the behaviour under red light after pre-exposure to red light" by W. Wiltschko, Gesson, Noll, and R. Wiltschko in the Journal of Experimental Biology, 2004.

+

See the article "Vision-based animal magnetoreception" at the QuantBioLab website, Quantum Biology and Computational Physics research group, University of Southern Denmark (SDU):

+

+

Figure 6. Shown here is a semi-classical description of the magnetic field effect on the radical pairs between FADH and tryptophan in cryptochrome. The unpaired electron spins (S$_1$ and S$_2$) precess about a local magnetic field produced by the addition of the external magnetic field $B$ with contributions I$_1$ and I$_2$ from the nuclear spins on the two radicals. The spin precession continuously alters the relative spin orientation, causing the singlet (anti-parallel) to triplet (parallel) interconversion which underlies the magnetic field effect. Electron back-transfer from a tryptophan to FADH quenches cryptochrome's active state. However, this back-transfer can only take place when the electron spins are in the singlet state, and this spin-dependence allows the external magnetic field, $B$, to affect cryptochrome activation.

+

+

Figure 7. Schematic illustration of a bird's eye and its important components. The retina (a) converts images from the eye's optical system into electrical signals sent along the ganglion cells forming the optic nerve to the brain. (b) An enlarged retina segment is shown schematically. (c) The retina consists of several cell layers. The primary signals arising in the rod and cone outer segments are passed to the horizontal, the bipolar, the amacrine, and the ganglion cells. (d) The primary phototransduction signal is generated in the receptor protein rhodopsin shown schematically at a much reduced density. The rhodopsin containing membranes form disks with a thickness of ~20 nm, being ~15–20 nm apart from each other. The putatively magnetic-field-sensitive protein cryptochrome may be localized in a specifically oriented fashion between the disks of the outer segment of the photoreceptor cell, as schematically shown in panel d or the cryptochromes (e) may be attached to the oriented, quasi-cylindrical membrane of the inner segment of the photoreceptor cell (f).

+

In mathematical terms, the vision-based compass in birds is characterized by a filter function, which models the magnetic field-mediated visual signal modulation recorded on the bird's retina (see Fig. 8).

+

+

Figure 8. Panoramic view at Frankfurt am Main, Germany. The image shows the landscape perspective recorded from a bird flight altitude of 200 m above ground with the cardinal directions indicated. The visual field is modified through the magnetic filter function; the patterns are shown for a bird looking at eight cardinal directions (N, NE, E, SE, S, SW, W, and NW). The geomagnetic field inclination angle is 66°, being a characteristic value for the region.

+
+

A biomechanical computer has been created. Bio4Comp, an EU-funded research project, have created biomolecular machines each only a few billionths of a meter (nanometers) in size. The actin-myosin and microtubule-kinesin motility systems can solve problems by moving through a nanofabricated network of channels designed to represent a mathematical algorithm; an approach we termed “network-based biocomputation”. Whenever the biomolecules reach a junction in the network, they either add a number to the sum they are calculating or leave it out. That way, each biomolecule acts as a tiny computer with processor and memory. While an individual biomolecule is much slower than a current computer, they are self-assembling so that they can be used in large numbers, quickly adding up their computing power. An example of how this works is shown in the video on their website.

+

+
+
    +
  • Are these tools already solving problems you (quantum computing researchers) have?

    +
  • +
  • Is there any specific issue these tools 'must' be solving somehow that you are struggling with at your labs?

    +
  • +
  • Could we use them (although this will imply a paradigm shift towards biotechnology)?

    +
  • +
+
+

"The first step in solving mathematical problems with network-based biocomputation is to encode the problem into network format so that molecular motors exploring the network can solve the problem. We have already found network encodings for several NP-complete problems, which are particularly hard to solve with electronic computers. For example, we have encoded subset sum, exact cover, boolean satisfiability and travelling salesman.

+

Within the Bio4Comp project, we will focus on optimizing these encodings so that they can be efficiently solved with biological agents and be more readily scaled up. Analogous to optimized computer algorithms, optimized networks can greatly reduce the computing power (and thus the number of motor proteins) required for finding the correct solution." - Source: Bio4Comp Research.

+
+

Another interesting paper which supports my answer that radical pairs don't constitute a quantum computer, but is merely a quantum biochemical reaction demonstrating spin chemistry, is "Quantum probe and design for a chemical compass with magnetic nanostructures" by Jianming Cai (2018).

+
+

Introduction. — Recently, there has been increasing interest in quantum biology namely investigating quantum effects in chemical and biological systems, e.g., light harvesting system, avian compass and olfactory sense. The main motivation is to understand how quantum coherence (entanglement) may be exploited for the accomplishment of biological functions. As a key step towards this goal, it is desirable to find tools that can detect quantum effects under ambient conditions. The ultimate goal of practical interest in studying quantum biology is to learn from nature and design highly efficient devices that can mimic biological systems in order to complete important tasks, e.g. collecting solar energy and detecting weak magnetic field.

+

As an example of quantum biology, the radical pair mechanism is an intriguing hypothesis to explain the ability of some species to respond to weak magnetic fields, e.g. birds, fruit flies, and plants. A magnetochemical compass could find applications in remote magnetometry, in a magnetic mapping of microscopic or topographically complex materials, and in imaging through scattering media. It was demonstrated that a synthetic donor-bridge-acceptor compass composed of a linked carotenoid (C), porphyrin (P), and fullerene (F) can work at low temperature (193 K). It is surprising that such a triad molecule is the only known example that has been experimentally demonstrated to be sensitive to the geomagnetic field (yet not at room temperature). It is currently not known how one might construct a biomimetic or synthetic chemical compass that functions at ambient temperature.

+

...

+

Summary. — We have demonstrated that a gradient field can lead to a significant enhancement of the performance of a chemical compass. The gradient field also provides us with a powerful tool to investigate quantum dynamics of radical pair reactions in spin chemistry. In particular, it can distinguish whether the initial radical pair state is in the entangled singlet state or in the classically correlated state, even in the scenarios where such a goal could not be achieved before. These phenomena persist upon addition of partial orientational averaging and addition of realistic magnetic noise. The effects predicted there may be detectable in a hybrid system compass composed of magnetic nanoparticles and radical pairs in an oriented liquid crystalline host. Our work offers a simple method to design/simulate a biologically inspired weak magnetic field sensor based on the radical pair mechanism with a high sensitivity that may work at room temperature.

+
+",278,,-1,,6/18/2020 8:31,04-11-2018 11:02,,,,0,,,,CC BY-SA 3.0 +1670,2,,1667,04-11-2018 02:37,,1,,"

Notationally, $|00\rangle = |0\rangle \otimes |0\rangle$. A basic property of tensor products is that inner products split like so: $$(\langle 0|_A \otimes \langle 0|_B) O_A\otimes O_B (|0\rangle_A \otimes |0\rangle_B) = \langle0|O_A|0\rangle \cdot \langle0|O_B|0\rangle.$$
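A quick numpy check of this factorisation property (my addition, not part of the original answer), using random states and operators rather than just $|0\rangle$:

```python
import numpy as np

rng = np.random.default_rng(1)

def rand_state():
    v = rng.standard_normal(2) + 1j * rng.standard_normal(2)
    return v / np.linalg.norm(v)

# random (not necessarily Hermitian) single-qubit operators
OA = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
OB = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
u, v = rand_state(), rand_state()

# <u⊗v| OA⊗OB |u⊗v>  versus  <u|OA|u> · <v|OB|v>
lhs = np.kron(u, v).conj() @ np.kron(OA, OB) @ np.kron(u, v)
rhs = (u.conj() @ OA @ u) * (v.conj() @ OB @ v)
assert np.isclose(lhs, rhs)
```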

+",483,,,,,04-11-2018 02:37,,,,0,,,,CC BY-SA 3.0 +1671,2,,1648,04-11-2018 04:18,,3,,"

Much has been written about Quantum Biology. A somewhat old -and yet, solid- take is that of Phillip Ball, The dawn of Quantum Biology (Nature 2011, 474, 271-274). For now, let's not review that and instead focus on your questions.

+ +
+ +

On the first question:(is it solving our problems?)

+ +

A system (or process) described by Quantum Biology is non-trivially quantum-mechanical, and therefore interesting, but to the best of my knowledge it is also not multi-qubit, so not really what quantum computing is about. In particular: currently known quantum biological processes do not exhibit scalability, nor do they implement quantum logic gates (at least not as we usually understand them), much less quantum algorithms. So, as an answer, that's mainly a no: these tools are not solving our problems.

+ +
+ +

On the second question:(is it solving a specific issue we're struggling with?)

+ +

Reliable quantum coherence in the solid state, in complex structured systems and at high temperature is something we all would like to see solved, and, at least to some extent, this is what Quantum Biology is about. So, as far as the current understanding of the field goes, this is indeed a specific issue that people in labs are working on and which seems solved in Biology (since molecules are complex nanostructures). Whenever we are able in our labs to reliably achieve quantum coherence in the solid state, in complex structured systems and at high temperature, we will jump much closer to usefulness and cheapness. So, as an answer, that is a yes.

+ +
+ +

On the third question: (could we use biomolecules as quantum hardware?)

+ +

They are not in the main league yet, to say the least. Even as an optimistic speculation, I'd say that they will not be competing with the big players any time soon, but I do believe that, as research advances past DNA origami (and related strategies) in Molecular Biology and Synthetic Biology, at some point biomolecular qubits will play a role within the subset of molecular spin qubits. In particular, the keys to relevance would be to combine the (seemingly proven) coherence in unusual conditions (warm, wet), with the unmatched ability of biomolecules for extremely complex self-organisation into functional structures. Since (coherent, organized) molecular spin qubits are my field of research, let me link to a couple of relevant papers. First, a first reaction on the first magnetic molecule that was competitive in terms of coherence with regular solid-state candidates, and thus how +magnetic molecules are back in the race toward the quantum computer. And also, this proposal (disclosure: I'm an author) on the arXiv on why and how one could use peptides as versatile scaffolds for quantum computing.

+",1847,,1847,,4/21/2018 9:38,4/21/2018 9:38,,,,0,,,,CC BY-SA 3.0 +1672,2,,1667,04-11-2018 06:43,,11,,"

I suggest two different ways of trying to solve this, which will give you experience of different bits of the formulation of Quantum Information Theory. I'll give examples that are closely related to the question you asked, but are not what you asked so that you still get the value of answering the question yourself.

+ +

Long-hand Method

+ +

Represent the kets as vectors, the Pauli matrices as matrices, explicitly perform the tensor products, and multiply everything out. So, we represent +$$ +|0\rangle\equiv\left(\begin{array}{c} 1 \\ 0 \end{array}\right)\qquad|1\rangle\equiv\left(\begin{array}{c} 0 \\ 1 \end{array}\right) +$$ +To calculate the tensor product, such as $|01\rangle$, we do +$$ +|01\rangle=|0\rangle\otimes|1\rangle\equiv\left(\begin{array}{c}1\times\left(\begin{array}{c} 0 \\ 1 \end{array}\right)\\ 0\times \left(\begin{array}{c} 0 \\ 1 \end{array}\right) \end{array}\right)=\left(\begin{array}{c} 0 \\ 1 \\ 0 \\ 0 \end{array}\right) +$$ +Remember that $\langle 01|$ is just the Hermitian conjugate of this, +$$ +\langle 01|=\left(\begin{array}{cccc} 0 & 1 & 0 & 0 \end{array}\right) +$$ +Then you do something similar for the operators. For example, $$X_1X_2=\sigma_1\otimes\sigma_1=\left(\begin{array}{cc} 0\times \left(\begin{array}{cc} 0 & 1 \\ 1 & 0 \end{array}\right) & 1\times \left(\begin{array}{cc} 0 & 1 \\ 1 & 0 \end{array}\right) \\ 1\times \left(\begin{array}{cc} 0 & 1 \\ 1 & 0 \end{array}\right) & 0\times \left(\begin{array}{cc} 0 & 1 \\ 1 & 0 \end{array}\right) \end{array}\right)=\left(\begin{array}{cccc} 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \end{array}\right). +$$ +Once you have all of this, you simply multiply it out: +$$ +\langle 01|X_1X_2|01\rangle=\left(\begin{array}{cccc} 0 & 1 & 0 & 0 \end{array}\right)\left(\begin{array}{cccc} 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \end{array}\right)\left(\begin{array}{c} 0 \\ 1 \\ 0 \\ 0 \end{array}\right)=0. +$$
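As a sanity check, the whole long-hand calculation above can be reproduced in a few lines of numpy (a sketch; numpy is of course not required for the exercise):

```python
import numpy as np

ket0 = np.array([1, 0])
ket1 = np.array([0, 1])
X = np.array([[0, 1], [1, 0]])

ket01 = np.kron(ket0, ket1)   # the column vector (0, 1, 0, 0)
XX = np.kron(X, X)            # the anti-diagonal 4x4 matrix above

value = ket01 @ XX @ ket01    # <01| X_1 X_2 |01>
print(value)  # 0
```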

+ +

Shorter Method

+ +

(With experience, this method lets you perform this calculation just by looking at it! Of course, when giving an answer, I don't recommend that; you should always justify your answer.)

+ +

Remember to think of each term $|01\rangle$ as $|0\rangle\otimes|1\rangle$. So, when you also write $X_1X_2=\sigma_1\otimes\sigma_1$, you see that +$$ +(\sigma_1\otimes\sigma_1)|0\rangle\otimes|1\rangle=(\sigma_1|0\rangle)\otimes(\sigma_1|1\rangle) +$$ +i.e. when everything is just tensor products, individual terms match up. Now, hopefully you know the action of $\sigma_1$ and $\sigma_3$ on the basis states: +$$ +\sigma_1|0\rangle=|1\rangle\qquad \sigma_1|1\rangle=|0\rangle \qquad \sigma_3|0\rangle=|0\rangle \qquad \sigma_3|1\rangle=-|1\rangle +$$ +Thus, +$$ +(\sigma_1|0\rangle)\otimes(\sigma_1|1\rangle)=|1\rangle\otimes |0\rangle=|10\rangle +$$ +One can then easily observe that a state such as $(|01\rangle+|10\rangle)/\sqrt{2}$ is acted on by $X_1X_2$ to give +$$ +X_1X_2(|01\rangle+|10\rangle)/\sqrt{2}=(|01\rangle+|10\rangle)/\sqrt{2},\tag{1} +$$ +the same state. So it is clear that the inner product of the state with itself is 1: +$$ +(\langle 01|+\langle 10|)X_1X_2(|01\rangle+|10\rangle)/2=1. +$$ +On the other hand, had the outcome in Eq. (1) been a different one of the four Bell states, because we know the Bell states form an orthonormal basis, the expectation value would be 0.
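A quick numerical sketch (again in numpy, just for illustration) confirms the expectation value of 1 for the Bell state:

```python
import numpy as np

ket0 = np.array([1, 0])
ket1 = np.array([0, 1])
X = np.array([[0, 1], [1, 0]])

# The Bell state (|01> + |10>)/sqrt(2)
bell = (np.kron(ket0, ket1) + np.kron(ket1, ket0)) / np.sqrt(2)

# X_1 X_2 leaves this state invariant, so the expectation value is 1
expectation = bell @ np.kron(X, X) @ bell
print(round(expectation, 10))  # 1.0
```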

+",1837,,,,,04-11-2018 06:43,,,,0,,,,CC BY-SA 3.0 +1673,2,,1602,04-11-2018 08:48,,3,,"

The preferred basis problem is essentially something from the many worlds interpretation: If we are to interpret a superposition as representing many universes, what basis should we choose? Since this comes from the foundations of QM, this aspect of your question is perhaps better suited to the physics stack exchange.

+ +
+

Is there a preferred basis for a qudit or does it depend upon the + underlying technology used to implement the qudits.

+
+ +

For qudits (and qubits) the only distinction between bases is in the physical implementation. At the abstract mathematical level, all are equivalent.

+ +

This not only means that you are free to choose your computational basis. You also have some freedom in how to generalize the Pauli matrices. For a $d$ level qudit, for example, you could choose to label your basis states with the elements of a group of order $d$. You can then define generalizations of Pauli $X$ that implement the group multiplication, and generalizations of $Z$ that depend on the representations. See here for some examples.
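To make this concrete, here is a sketch (in numpy, with $d=3$ chosen just for illustration) of the standard generalization for the cyclic group $\mathbb{Z}_d$: $X$ becomes the shift operator and $Z$ the clock operator, and they satisfy the Weyl relation $ZX=\omega XZ$ with $\omega=e^{2\pi i/d}$:

```python
import numpy as np

d = 3                                 # qutrit example
omega = np.exp(2j * np.pi / d)

# Generalized X (cyclic shift) and Z (clock) for the group Z_d
X = np.roll(np.eye(d), 1, axis=0)     # X|j> = |j+1 mod d>
Z = np.diag(omega ** np.arange(d))    # Z|j> = omega^j |j>

# They obey the Weyl commutation relation Z X = omega * X Z
assert np.allclose(Z @ X, omega * X @ Z)
```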

+ +

How you choose to do this might depend on the physics of the qudits (perhaps the interactions naturally implement such operations) or it might depend on what you want to do with the qudits (such as create exotic topological error correcting codes). But other than concerns like these, nothing is forcing you to make any particular decisions.

+ +

Usually we choose the basis that is easiest for us to measure. Superconducting qubits/qudits for examples are made from the lowest $d$ levels of an anharmonic oscillator. These energy eigenstates are what we typically measure, and that is the reason they are used as the computational basis.

+ +

For Fibonacci anyons, we have to deal with a Hilbert space that isn’t really built for being carved into qubits. Typically we take a subspace for which measurements aren’t too convoluted. But then we also need to worry about braiding leaking the state out of the subspace. This gives us a whole bunch of practical concerns to think about when choosing our basis. But nevertheless, it is only these practical concerns that lead to a preferred choice, and different authors may very well choose different conventions.

+",409,,,,,04-11-2018 08:48,,,,0,,,,CC BY-SA 3.0 +1674,1,1682,,04-11-2018 09:30,,10,524,"

The D-Wave system, as I understand it, allows us to program Ising models and to find their ground states. In this form, it is not universal for quantum computation: it cannot simulate a circuit-model quantum computer.

+ +

What would be the simplest thing that could be done to make it universal? What are the reasons why such a thing has not been implemented?

+",409,,1837,,04-11-2018 11:39,9/30/2018 23:42,What would be the simplest addition that would make the D-Wave architecture universal?,,3,0,,,,CC BY-SA 3.0 +1675,2,,70,04-11-2018 09:48,,11,,"

As the other answer conveyed (and to which I am just trying to provide some clarification), post-selection is about just looking at a subset of possible measurement outcomes. To my mind, this falls into two different cases, as below. Yes, they are different aspects of the same thing, but they are used very differently by two different communities.

+

Experimental Post-selection

+

You do some experiments, but you only gather data when certain conditions are fulfilled. Generally it's used to compensate for heralded experimental imperfections (i.e. something is triggered that tells us we've had an undesired result before proceeding with another part of the experiment). For example, you might be using a pair of photons as information or entanglement carriers, but sometimes those photons get lost on the way. If you only do things when both photons are detected, you are post-selecting on their successful arrival.

+

Theoretical Post-selection

+

This is a thought experiment of "how much more powerful could my computer be if I could choose the outcomes of measurements?" and is not a practical proposition.

+

As a simple example, think about quantum teleportation. In the normal scenario, Alice and Bob share a Bell pair, and Alice has a qubit in an unknown state that she wants to teleport to Bob. She performs a Bell measurement on her two qubits, and sends her measurement outcome to Bob. If Bob is far away from Alice, the information on the measurement result takes a finite time to get there, and it's after that time that he can be thought of as having received the qubit (because he has to compensate for effects of the different results on the qubit he holds).

+

However, if Alice can post-select on the measurement result as always being one particular result, and Bob knows that she's going to select that one, then Alice doesn't need to send the measurement result to Bob. He can use the qubit he holds immediately. Even stronger, he can use it before Alice has made the measurement, secure in the knowledge that she will! So, not only are you achieving faster-than-light communication, you're actually communicating backwards in time! You can start to imagine how immensely powerful that could be for a computer (compute for an arbitrarily long time and then send the answer back in time to the moment the question was asked).

+",1837,,-1,,6/18/2020 8:31,04-11-2018 09:48,,,,5,,,,CC BY-SA 3.0 +1676,2,,1663,04-11-2018 09:51,,3,,"

If you were to try and imagine the simplest form of correlation, you might think of two bits that were randomly either both $0$, or both $1$.

+ +

Bell states are just this, but quantum. We have qubits instead of bits, and the randomness is due to superposition.

+ +

Since they are the conceptually simplest form of entanglement, and the easiest to describe using information-theoretic language, they are our first choice for anything entanglement related. If you need to explain non-locality, do it with a Bell pair. If you want to teleport something, it’s simplest both mathematically and practically if you use a Bell pair. If you want to measure how much entanglement you have, why not see how many Bell pairs you could turn it into?

+ +

Typically, we use four canonical forms of the Bell pairs. They represent all the ways that we can choose to have correlations or anticorrelations between the $|0\rangle / |1\rangle$ states of the computational basis and the $|+\rangle / |-\rangle$ states of the $X$ basis.

+ +

These four states form a complete basis for two qubits. So any states, entangled or not, can be expressed as a superposition of Bell states. This can sometimes make our maths easier, which is another reason we like to use them.
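A quick numerical check of this orthonormality (a numpy sketch, using the four canonical Bell states):

```python
import numpy as np

ket0 = np.array([1, 0])
ket1 = np.array([0, 1])

# The four canonical Bell states, as rows
bells = np.array([
    (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2),  # Phi+
    (np.kron(ket0, ket0) - np.kron(ket1, ket1)) / np.sqrt(2),  # Phi-
    (np.kron(ket0, ket1) + np.kron(ket1, ket0)) / np.sqrt(2),  # Psi+
    (np.kron(ket0, ket1) - np.kron(ket1, ket0)) / np.sqrt(2),  # Psi-
])

# The overlap matrix is the identity, so they form an orthonormal basis
assert np.allclose(bells @ bells.T, np.eye(4))
```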

+",409,,,,,04-11-2018 09:51,,,,0,,,,CC BY-SA 3.0 +1677,1,,,04-11-2018 10:12,,8,345,"

We usually talk about the power of a quantum computer by examining the separation between sets of gates that we know we can efficiently simulate on a classical computer (i.e. problems in the class BPP), and universal gate sets which, by definition, can implement any quantum algorithm, including BQP-complete algorithms. So, assuming a separation between BPP and BQP, there is a separation in the power of the algorithms that can be implemented with these gate sets, and the separation between these gate sets can be as simple as the availability of one gate (two classic examples are the Clifford gates + the $\pi/8$ phase gate, and Toffoli+Hadamard). In effect, you need a universal gate set in order to gain a computational speed-up. However, this is specifically about algorithms with polynomial running times.

+ +

What are the requirements that distinguish the power of a quantum computer which is intended solely to provide a polynomial speed-up on a problem outside BPP? For example, a device built solely for the purpose of implementing a Grover's search. Presumably the D-Wave machines fall into this category.

+ +

To be clear, I require a speed-up that changes the scaling relation. If there's a classical algorithm that requires time $O(2^n)$, then obviously there are many different ways of physically implementing it which have different running times, but all will be $O(2^n)$. I'm interested in identifying what it is in a quantum computer that permits a better scaling (but not a reduction to polynomial time running).

+ +

Asked another way: think about the D-wave machine (although I am not aiming to be limited to just talking about this case), which we believe is doing something coherent, and for a given problem size, seems to be quite speedy, but we don't know how it scales. Can we know a priori, from details of its architecture, that it at least has the potential to provide a speed-up over classical? If it were universal for quantum computation, then it certainly would have that potential, but universality probably isn't necessary in this context.

+ +

Part of what I'm struggling to get my head around, even in terms of defining the question properly, is that we don't have to have a universal gate set because it doesn't necessarily matter if the gate set can be efficiently simulated on a classical computer, just so long as the overhead in performing the simulation is similar or worse than the speedup itself.

+",1837,,1847,,4/27/2018 8:32,5/16/2018 0:56,Requirements for Achieving a Quantum Speedup,,0,3,,,,CC BY-SA 3.0 +1678,1,1681,,04-11-2018 10:30,,9,501,"

I'm looking for a quantum algorithm which I can use to demonstrate the syntax of different quantum-languages. My question is similar to this, however, for me, ""good"" means:

+ +
    +
  • What it does could be described in 1-2 paragraphs, and should be easy to understand.
  • +
  • Should use more elements of the ""quantum-programming-world"" (I mean that the algorithm should use some classical constants, measurements, conditions, qregisters, operators etc., as many as possible).
  • +
  • The algorithm should be small (at most 15-25 pseudocode-lines long).
  • +
+ +

Useful algorithms are often too long/hard, but Deutsch's algorithm doesn't use that many elements. Can someone suggest me a good-for-demo algorithm?

+",1930,,1847,,4/27/2018 16:50,08-07-2018 21:13,"A sample quantum algorithm, useful for demonstrating languages",,3,4,,,,CC BY-SA 3.0 +1679,1,,,04-11-2018 10:37,,15,786,"

As per Wikipedia, blockchains are a way to maintain ""a continuously growing list of records, called blocks, which are linked and secured using cryptography [... and] inherently resistant to modification of the data.""

+ +

Blockchains are in current practical use, for example in the cryptocurrency bitcoin. These implementations must make use of some particular approach to cryptography, which will involve assumptions intended to underwrite their security.

+ +

Are the current implementations of blockchain resistant to attacks using quantum computation?

+",1931,,124,,5/17/2018 10:23,1/22/2019 7:15,Does quantum computing threaten blockchain?,,3,6,,,,CC BY-SA 4.0 +1680,1,1697,,04-11-2018 11:02,,10,551,"

I wish to learn more about computational complexity classes in the context of quantum computing.

+ +

The medium is not so important; it could be a book, online lecture notes or the like. What matters the most are the contents.

+ +

The material should cover the basics of quantum computational complexity classes and discuss the similarities, differences and relationships between them and perhaps also with classical computational complexity classes.

+ +

I would prefer a rigorous treatment over an intuitive one. The author's style doesn't matter.

+ +

As for prerequisites, I know next to nothing about the topic, so maybe more self-contained material would be better. That being said, I probably would not read a 1000 page book unless it was phenomenally good, anything in the range of 1-500 pages might work.

+ +

As for availability, I would of course prefer material that is not behind a paywall of some sort and can be found online, but this is not a strict requirement.

+ +

What do you recommend?

+",144,,144,,04-12-2018 10:19,4/13/2018 10:19,Good introductory material on quantum computational complexity classes,,2,5,,,,CC BY-SA 3.0 +1681,2,,1678,04-11-2018 13:03,,3,,"

I suggest looking at eigenvalue/eigenvector estimating protocols. There's a lot of flexibility to make the problem as easy or as hard as you want.

+ +

Start by picking two parameters, $n$ and $k$. You want to design an $n$-qubit unitary, $U$ that has eigenvalues of the form $e^{-2\pi iq/2^k}$ for integers $q$. Make sure that at least one of those eigenvalues is unique, and call it $\omega$. Also make sure that a simple product state, say $|0\rangle^{\otimes n}$, has non-zero overlap with the eigenvector of eigenvalue $\omega$.

+ +

The aim would be to implement a phase estimation algorithm on this, being told the value $k$, and being tasked with outputting a vector $|\psi\rangle$ that is the eigenvector corresponding to eigenvalue $\omega$. In general this will comprise a circuit of $n+k$ qubits (unless you need ancillas to implement controlled-$U$).

+ +

This works as follows:

+ +
+
    +
  • set up two registers, one of $k$ qubits, and the other of $n$ qubits. (use of quantum registers)

  • +
  • every qubit is initialized in the state $|0\rangle$. (initialisation of quantum registers)

  • +
  • apply a Hadamard to each qubit in the first register (single-qubit gates)

  • +
  • from qubit $r$ in the first register, apply controlled-$U^{2^{r}}$, targeting the second register (multi-qubit controlled gates)

  • +
  • apply the inverse Fourier transform on the first register, and measure every qubit of the first register in the standard basis. These can be combined, implementing the semi-classical Fourier transform. (measurement and feed-forward of classical data)

  • +
  • for the correct measurement result, the second register is in the desired state $|\psi\rangle$.

  • +
+
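For the $k=1$ case, these steps can be simulated directly with state vectors. Below is a sketch in numpy, taking $U$ to be the controlled-NOT (i.e. $U_1=U_2=I$), whose unique $-1$ eigenvector is $|1\rangle\otimes(|0\rangle-|1\rangle)/\sqrt{2}$; the first register's qubit is written as the most significant bit:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

U = CNOT                                      # the U_1 = U_2 = I case
psi = np.kron([0, 1], [1, -1]) / np.sqrt(2)   # eigenvector of U, eigenvalue -1

# Controlled-U, with the single qubit of the first register as control
CU = np.block([[np.eye(4), np.zeros((4, 4))],
               [np.zeros((4, 4)), U]])

# k = 1 circuit: Hadamard on the first register, controlled-U, Hadamard again
# (for k = 1 the inverse Fourier transform is just a Hadamard)
state = np.kron(H, np.eye(4)) @ CU @ np.kron(H, np.eye(4)) @ np.kron([1, 0], psi)

# The first register reads 1 with certainty, revealing the eigenvalue -1
p1 = np.sum(np.abs(state[4:]) ** 2)
print(round(p1, 10))  # 1.0
```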
+ +

For simplicity, you could pick $n=2$, $k=1$, so you need a $4\times 4$ unitary matrix with eigenvalues $\pm 1$. I'd use something like $$(U_1\otimes U_2)C(U_1^\dagger\otimes U_2^\dagger),$$ +where $C$ denotes the controlled-NOT. There is just one eigenvector with eigenvalue -1, which is $|\psi\rangle=(U_1\otimes U_2)|1\rangle\otimes(|0\rangle-|1\rangle)/\sqrt{2}$, and you can mess about with the choices of $U_1$ and $U_2$ to explore the implementation of $U$ using decomposition in terms of a universal gate set (I'd probably set this as a preliminary problem). Then, controlled-$U$ is easily implemented just by replacing the controlled-NOT with a controlled-controlled-NOT (Toffoli) gate. Finally, the inverse Fourier transform is just a Hadamard gate.

+ +

For something a little more complex, put $k=3$, and replace $C$ with the square-root of swap gate, +$$ +\left( +\begin{array}{cccc} + 1 & 0 & 0 & 0 \\ + 0 & \frac{1}{\sqrt{2}} & \frac{i}{\sqrt{2}} & 0 \\ + 0 & \frac{i}{\sqrt{2}} & \frac{1}{\sqrt{2}} & 0 \\ + 0 & 0 & 0 & 1 \\ +\end{array} +\right) +$$ +with $\omega=e^{\pm i\pi/4}$ and $|\psi\rangle=(U_1\otimes U_2)(|01\rangle\pm|10\rangle)/\sqrt{2}$.
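One can verify numerically that this square-root-of-swap gate indeed has eigenvalues $1,1,e^{i\pi/4},e^{-i\pi/4}$ (a numpy sketch):

```python
import numpy as np

s = 1 / np.sqrt(2)
sqrt_swap = np.array([[1, 0,      0,      0],
                      [0, s,      1j * s, 0],
                      [0, 1j * s, s,      0],
                      [0, 0,      0,      1]])

eigvals = np.linalg.eigvals(sqrt_swap)
expected = [1, 1, np.exp(1j * np.pi / 4), np.exp(-1j * np.pi / 4)]

# Same multiset of eigenvalues: {1, 1, e^{i pi/4}, e^{-i pi/4}}
assert np.allclose(np.abs(eigvals), 1)
assert np.allclose(np.sort(np.angle(eigvals)), np.sort(np.angle(expected)))
```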

+",1837,,1837,,04-12-2018 07:11,04-12-2018 07:11,,,,0,,,,CC BY-SA 3.0 +1682,2,,1674,04-11-2018 14:49,,5,,"

XX couplers are necessary to make a quantum annealer universal.

+ +

https://arxiv.org/abs/0704.1287

+ +

As for fabricating them, I’m not too familiar with the hardware issues. Perhaps someone else can comment on that.

+",54,,,,,04-11-2018 14:49,,,,0,,,,CC BY-SA 3.0 +1683,2,,1642,04-11-2018 15:21,,2,,"

The answer arguably depends on the problem you wish to solve with your computation. More specifically, are you wanting to optimize near-term applications in the NISQ era, or are you wanting to build a fully scalable, fault-tolerant and universal quantum computer?

+ +

For the latter, you need to think about error correction. Pretty much everything that will happen in a fault-tolerant quantum computer will be part of error correction. So whether we want to run Shor's algorithm or simulate a quantum system, we nevertheless mostly need to optimize for error correction.

+ +

Typically, error correcting codes require Clifford gates. For the surface codes, CNOTs (both ways around), and preparation and measurement in the $Z$ and $X$ basis are usually used. But your exact choice of code, and how to implement that code, will affect the precise details of the gate set. But it will typically be a subset of the multi qubit Clifford group, made using single and two qubit generators.

+ +

So for fault-tolerant QC, the best option is to figure out good ways to implement the Clifford group.

+",409,,1847,,4/23/2018 11:26,4/23/2018 11:26,,,,1,,,,CC BY-SA 3.0 +1684,2,,1529,04-12-2018 06:05,,5,,"

In one sense, the Xmon qubit is a transmon qubit, in that they both operate in the $E_J \gg E_c$ regime of the CPB Hamiltonian and take advantage of the exponentially suppressed charge noise vs. polynomial decrease in anharmonicity effect discussed in (Koch, 2007). You could work out the dynamics of a superconducting qubit-resonator system without ever knowing whether the equations were describing an Xmon or a transmon, so functionally it's hard to differentiate the Xmon.

+ +

On the other hand, there are a lot of important design differences introduced in the Xmon: the qubit is grounded (mentioned above), the qubit is no longer embedded in the resonator, it's conveniently tunable, and the lifetime is enhanced (although the ibmqx3 chip that IBM uses for its quantum experience has qubits with $T_1\approx40 \space \mu s$, which matches the original Xmon lifetime). Also, the Xmon's shape is a great match for a surface code architecture that requires a tight-packed grid of qubits.

+ +

Practically, there are a lot of other transmon designs that offer some of the same benefits of the Xmon. So ""transmon vs. Xmon"" isn't the general question to ask; just go with the design that's got the best lifetimes and maybe tunability.

+",1939,,,,,04-12-2018 06:05,,,,0,,,,CC BY-SA 3.0 +1685,2,,1678,04-12-2018 13:35,,3,,"

Sounds like you want a quantum ""Hello World"". The most straightforward quantum version of this would just be to write a binary encoded version of the text Hello World in a register of qubits. But this would require ~100 qubits, and be longer than your upper limit for code length.

+ +

So let's write a shorter piece of text. Let's write ;), which needs a bit string of length 16. Specifically, using ASCII encoding

+ +
;)  =  00111011 00101001
+
+ +

Using QISKit, you'd do this using the following code.

+ +
from qiskit import QuantumProgram
+import Qconfig
+
+qp = QuantumProgram()
+qp.set_api(Qconfig.APItoken, Qconfig.config[""url""]) # set the APIToken and API url
+
+# set up registers and program
+qr = qp.create_quantum_register('qr', 16)
+cr = qp.create_classical_register('cr', 16)
+qc = qp.create_circuit('smiley_writer', [qr], [cr])
+
+# rightmost eight (qu)bits have ')' = 00101001
+qc.x(qr[0])
+qc.x(qr[3])
+qc.x(qr[5])
+
+# second eight (qu)bits have 00111011
+# these differ only on the rightmost two bits
+qc.x(qr[9])
+qc.x(qr[8])
+qc.x(qr[11])
+qc.x(qr[12])
+qc.x(qr[13])
+
+# measure
+for j in range(16):
+    qc.measure(qr[j], cr[j])
+
+# run and get results
+results = qp.execute([""smiley_writer""], backend='ibmqx5', shots=1024)
+stats = results.get_counts(""smiley_writer"")
+
+ +

Of course, this isn't very quantum. So you could do a superposition of two different emoticons instead. The easiest example is to superpose ;) with 8), since the bit strings for these differ only on qubits 8 and 9.

+ +
;)  =  00111011 00101001
+8)  =  00111000 00101001
+
+ +

So you can simply replace the lines

+ +
qc.x(qr[9])
+qc.x(qr[8])
+
+ +

from the above with

+ +
qc.h(qr[9]) # create superposition on 9
+qc.cx(qr[9],qr[8]) # spread it to 8 with a cnot
+
+ +

The Hadamard creates a superposition of 0 and 1, and the cnot makes it into a superposition of 00 and 11 on two qubits. This is the only required superposition for ;) and 8).
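Independently of any particular backend, the effect of these two gates on $|00\rangle$ can be checked with a small numpy sketch (qubit-ordering conventions aside; here the control qubit is written as the most significant bit):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

# Hadamard on the first qubit, then CNOT from it onto the second
state = CNOT @ np.kron(H, np.eye(2)) @ np.array([1, 0, 0, 0])

# The result is the superposition (|00> + |11>)/sqrt(2)
assert np.allclose(state, np.array([1, 0, 0, 1]) / np.sqrt(2))
```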

+ +

If you want to see an actual implementation of this, it can be found on the QISKit tutorial (full disclosure: it was written by me).

+",409,,409,,4/21/2018 15:29,4/21/2018 15:29,,,,2,,,,CC BY-SA 3.0 +1686,2,,1680,04-12-2018 17:03,,4,,"

I can recommend the Lecture notes of Ronald de Wolf, used for a semester course taught by him on Quantum Computing in the context of the Dutch 'Mastermath' program.

+ +

Chapter 10 ""Quantum Complexity Theory"", covers the 'classical' complexity classes briefly, but gives enough background to talk about the 'quantum' complexity classes and compare them with the classical. It doesn't cover everything, but refers to other material for further reading.

+ +

Chapter 12 ""Quantum Communication Complexity"" is also relevant and is more technical, mainly because the theory of communication complexity has interesting applications within quantum computation.

+",253,,,,,04-12-2018 17:03,,,,0,,,,CC BY-SA 3.0 +1687,1,1688,,04-12-2018 19:23,,14,1109,"

I am pretty intrigued by the record time that a qubit has survived.

+",1931,,26,,12/23/2018 12:19,11/23/2022 6:11,What is the longest time a qubit has survived with 0.9999 fidelity?,,3,4,,,,CC BY-SA 3.0 +1688,2,,1687,04-12-2018 21:11,,12,,"

Well, for the longest coherence time ever, I'm finding this Science paper from 2013 entitled Room-Temperature Quantum Bit Storage Exceeding 39 Minutes Using Ionized Donors in Silicon-28, which indicates qubits that lasted for over 39 minutes; these, however, only had an 81% fidelity rate. (This is for qubits used in computation, not memory storage. For memory storage, see M. Stern's link.)

+ +

But you're looking for qubits with a high fidelity rate. In that case, I found a Nature Nanotechnology paper from 2014 entitled Storing quantum information for 30 seconds +in a nanoelectronic device (alternate link to arXiv), which was coherent for 30 seconds - but had a greater than 99.99% fidelity rate, which is exactly what you're looking for. Most other papers I'm finding with a 99.99% fidelity rate or greater measure their coherence times in nano or microseconds.

+ +

I will keep looking.

+",91,,1847,,4/13/2018 9:19,4/13/2018 9:19,,,,4,,,,CC BY-SA 3.0 +1689,2,,1367,04-12-2018 21:26,,5,,"

pyQuil is an open source quantum programming library in Python. The documentation includes a hands on introduction to quantum computing where you learn by programming. It doesn't assume any physics background.

+ +

Here are some links to the main topics:

+ + +",299,,,,,04-12-2018 21:26,,,,0,,,,CC BY-SA 3.0 +1690,1,1693,,4/13/2018 4:40,,4,60,"

In 1993, Seth Lloyd published in Science a proposal for A Potentially Realizable Quantum Computer. In a nutshell, this consists of a long chain of weakly coupled qubits which are operated with (almost) no need for the operator to differentially address the different memory positions (i.e. no spatial addressing is required), and at the same time it does not require every qubit to present a different energy. Instead, all qubits are equivalent, with the exceptions of

+ +
    +
  • the two extremes of the chain are distinguishable from the bulk (and this feature is used to introduce new information) and
  • +
  • a qubit is sensitive to its immediate neighbours, so at any given time you effect the same simultaneous operation on all qubits with certain surroundings, while leaving those with differing ones unperturbed (this feature allows operating in a manner of cellular automata)
  • +
+ +

My question is: have few-qubit versions of Lloyd's proposal been proposed, or implemented? (If yes: under what architecture(s), and if not, what would be required to do it?)

+",1847,,26,,12/13/2018 19:43,12/13/2018 19:43,"Any few-qubit versions of Lloyd's ""weakly-coupled-array"" quantum computing?",,1,0,,,,CC BY-SA 3.0 +1691,2,,1674,4/13/2018 5:31,,1,,"
+

What would be the simplest thing that could be done to make it universal?

+
+ +

See US Patent US9162881B2 ""Physical realizations of a universal adiabatic quantum computer"" or US Application US20150111754A1 ""Universal adiabatic quantum computing with superconducting qubits"" which is quoted here:

+ +
    +
  • Definition: Basis Throughout this specification and the appended claims, the terms “basis” and “bases” are used to denote a set or sets, respectively, of linearly independent vectors that may be combined to completely describe a given vector space. For example, the basis of standard spatial Cartesian coordinates comprises three vectors, the x-axis, the y-axis, and the z-axis. Those of skill in mathematical physics will appreciate that bases may be defined for operator spaces, such as those used to describe Hamiltonians.

  • +
  • Definition: Effective Qubit Throughout this specification and the appended claims, the terms “effective qubit” and “effective qubits” are used to denote a quantum system that may be represented as a two-level system. Those of skill in the relevant art will appreciate that two specific levels may be isolated from a multi-level quantum system and used as an effective qubit. Furthermore, the terms “effective qubit” and “effective qubits” are used to denote a quantum system comprising any number of devices that may be used to represent a single two-level system. For example, a plurality of individual qubits may be coupled together in such a way that the entire set, or a portion thereof, of coupled qubits represents a single two-level system.

  • +
+ +

[0061] +A Universal Quantum Computer (UQC) is a quantum computer which is capable of efficiently simulating any other quantum computer. In some embodiments, a Universal Adiabatic Quantum Computer (UAQC) would be able to simulate any quantum computer via adiabatic quantum computation and/or via quantum annealing. In some embodiments, a UAQC would be able to simulate a physical quantum system via adiabatic quantum computation and/or via quantum annealing.

+ +

[0062] +It has been established that local lattice spin Hamiltonians can be used for universal adiabatic quantum computation. However, the 2-local model Hamiltonians used are general and hence do not limit the types of interactions required between spins to be known interactions that can be realized in a quantum processor. The 2-local Ising model with 1-local transverse field has been realized using different technologies.

+ +

[0063] +This quantum spin model is thought unlikely to be universal for adiabatic quantum computation. See discussion in S. Bravyi et al., 2006 arXiv:quant-ph/0606140v4 or Quant. Inf. Comp. 8, 0361(2008). However, it has been shown that adiabatic quantum computation can be rendered universal and belongs to the Quantum Merlin Arthur complexity class, a quantum analog of the NP complexity class, by having tunable 2-local diagonal and off-diagonal couplings in addition to tunable 1-local diagonal and off-diagonal biases.

+ +

[0064] +Diagonal and off-diagonal terms can be defined with reference to the computational basis. The state of a qubit can be one of two basis states or a linear superposition of the two basis states. The two states form a computational basis.

+ +

Note: Refer to the Patent for complete details.

+ +
+

What are the reasons why such a thing has not been implemented?

+
+ +
    +
  • Definition: Universal Adiabatic Quantum Computation The concept of “universality” is understood in computer science to describe the scope or range of function of a computing system. A “universal computer” is generally considered to represent a computing system that can emulate any other computing system or, in other terms, a computing system that can be used for the same purposes as any other computing system. For the purposes of the present systems, methods and apparatus, the term “universal adiabatic quantum computer” is intended to describe an adiabatic quantum computing system that can simulate any unitary evolution.
  • +
+ +

From: ""Quantum Information Processing with Superconducting Circuits: a Review"" by G. Wendin (8 Oct 2017), on page 77:

+ +

The D-Wave Systems machines are built top-down - scaling up is based on flux qubits and circuits with short coherence time. The technology is based on classical Nb RSFQ circuits combined with Nb rf-SQUID qubits, and forms the basis of the current D-Wave processors. The architecture is based on a cross-bar network of communication buses allowing (limited) coupling of distant qubits. The qubits are operated by varying the dc-bias, changing the qubit energies and qubit qubit couplings.

+ +

As a result, the coherence and entanglement properties have to be investigated by performing various types of experiments on the machines and their components: Physics experiments on the hardware, and ”benchmarking” of the performance by running a range of QA schemes.

+ +

During the last three years, the topic has rapidly evolved, and by now a certain common understanding and consensus has been reached. Based on the discussion in some recent papers, the situation can be summed up in the following way:

+ +

• The behaviour of the D-Wave machines is consistent with quantum annealing.

+ +

• No scaling advantage (quantum speedup) has so far been seen.

+ +

• QA is efficient in quickly finding good solutions as long as barriers are narrow, but ultimately gets stuck once broad barriers are encountered

+ +

• The Google D-Wave 2X results showing million-times speedup are for native instances that perfectly fit the hardware graph of the device.

+ +

• For generic problems that do not map well onto the hardware of a QA, performance will suffer significantly.

+ +

• Even more efficient classical optimisation algorithms exist for these problems, which outperform the current D-Wave 2X device for most problem instances. However, the race is on.

+ +

• With improved engineering, especially faster annealing and readout, the time to perform a quantum annealing run can be reduced by a factor 100x over the current generation QA devices.

+ +

• However, misspecification of the cost function due to calibration inaccuracies is a challenge that may hamper the performance of analogue QA devices.

+ +

• Another challenge is the embedding of problems into the native hardware +architecture with limited connectivity.

+ +

• There is the open question of quantum speedup in analogue QA.

+ +

• QA error correction has been demonstrated and may pave a path toward large scale noise-protected AQO devices.

+ +

• Typically, classically computationally hard problems also seem to be hard problems for QA devices.

+ +

• Improved machine calibration, noise reduction, optimisation of the QA schedule, larger system sizes and tailored spin-glass problems may be needed for demonstrating quantum speedup. However what is hard may not be easy to judge.

+ +

• It remains to see what the newest D-Wave 2000Q system can do with 2000 qubits.

+ +

Note: Refer to the paper for complete details.

+ +

The Patent is somewhat more cryptic in it's explanation:

+ +

The simulated coupling described in FIG. 9 and FIG. 10 allows multiple types of coupling to be realized by fewer actual coupler types. This can provide greater versatility in a quantum processor where the architecture is best-suited for specific types of couplers. For instance, a superconducting quantum processor that, for whatever reason, is best-suited to implement only ZZ-couplers and XX-couplers may incorporate simulated coupling through mediator qubits to realize the effects of simulated XZ and ZX coupling.

+ +

Those of skill in the art will appreciate that, for the purposes of realizing the qubit-coupling architectures taught in the present systems, methods and apparatus, the various embodiments of XX-, ZZ-, XZ-, and ZX-couplers described herein represent non-limiting examples of coupling devices. All of the coupling devices described in the present systems, methods and apparatus may be modified to accommodate the requirements of the specific system in which they are being implemented, or to provide a specific functionality that is advantageous in a particular application.

+ +

The present systems, methods and apparatus describe the physical realization of universal adiabatic quantum computation by the implementation of at least two different coupling mechanisms in one processor architecture. Each coupling mechanism provides coupling between a first and a second basis (for example, coupling between X and X, X and Z, or Z and Z), thereby defining a “coupled basis” (for example, XX, XZ, or ZZ). In accordance with the present systems, methods and apparatus, qubit-coupling architectures that each include at least two different coupled bases, where at least two different coupled bases do not commute, are used to realize the Hamiltonians for universal adiabatic quantum computation. For example, the various embodiments described herein teach that universal adiabatic quantum computation may be physically realized by the simultaneous application of off-diagonal couplers in a qubit-coupling architectures. Those of skill in the art will appreciate that this concept may extend to couplers that include the Y-basis, such as XY-, YX-, YY-, ZY-, and YZ-couplers.

+ +

This specification and the appended claims describe physical implementations of realizable Hamiltonians for universal adiabatic quantum computers by demonstrating universal qubit-coupling architectures. There is a common element to the embodiments of universal coupling schemes described herein, and that is the implementation of at least two different sets of coupling devices between qubits, where the respective bases coupled by the two different sets of coupling devices do not commute. Those of skill in the art will appreciate that such non-commuting couplers may be realized in a variety of different embodiments and implementations and all such embodiments cannot practically be disclosed in this specification. Thus, only two physical embodiments, the XX-ZZ coupling architecture and the XZ-ZX coupling architecture, are detailed herein with the recognition that anyone of skill in the relevant art will acknowledge the extension to any quantum processor architecture implementing non-commuting couplers. Furthermore, those of skill in the art will appreciate that ertain quantum algorithms or hardware constraints may impose minimum requirements on the number of effective qubits in the quantum processor and/or the number of couplers. The present systems, methods and apparatus describe the use of XX and ZZ couplers to simulate XZ and ZX couplers, as well as the use of XZ and ZX couplers to simulate XX and ZZ couplers, thereby proving that a pair of non-commuting couplers in a quantum processor may be used to simulate other coupler schemes.

+ +

[My comment: Basically, there's only so much room; and improvments are planned.]

+ +

In the Application it's slightly less cryptic:

+ +

[0129] Readout is likely more challenging in AQC than in GMQC. Within the latter paradigm, all qubits are isolated at the end of a computation. Consequently, one can independently read each qubit in a GMQC processor. In contrast, AQC terminates with the target Hamiltonian being asserted. When the Hamiltonian contains off-diagonal elements, read out for AQC can present a challenge. If the readout process requires the qubit register wavefunction to collapse, then that state will no longer be an eigenstate of the target Hamiltonian. Therefore, it is desirable to devise a method to simultaneously project the states of all qubits in an AQC processor in the presence of finite biases and couplings.

+",278,,,,,4/13/2018 5:31,,,,0,,,,CC BY-SA 3.0 +1693,2,,1690,4/13/2018 7:25,,3,,"

This sort of architecture has certainly been studied more, often under the banner of ""Global Control"", significantly reducing some of the requirements (in particular, only requiring an ABABAB... repeating structure instead of ABCABC...). I am not aware of any of these ideas having been implemented. I assume this is partly because there are large overheads in the length of time required to implement something, which makes it more difficult to fit inside the decoherence time of a system.

+ +

I was involved in some studies at some point, detailing how you can make larger versions fault tolerant, but also suggesting implementations in optical lattices (here and here). These were at least partly motivated by this experimental paper. At a similar time, Zoller was also looking at this idea.

+",1837,,,,,4/13/2018 7:25,,,,0,,,,CC BY-SA 3.0 +1694,2,,20,4/13/2018 7:43,,3,,"

Short answer for the superconducting -> formula example: no, we will not be able to do that.

+ +

Longer answer (and more optimistic)

+ +
    +
  • We need a one-to-one correspondence between the Hamiltonian of the system we can control in the actual experiment and the theoretical one, in terms of system size (degrees of freedom that we care about) and in terms of parameters.
  • +
  • As you describe, we are currently in the situation where we would like to know how a theoretical system evolves (the solution to a known set of equations with a known set of parameters). We map the theoretical system on the experimental one, measure and effectively know the solution to the theoretical equations.
  • +
  • The reverse would be: we know the evolution we want to obtain (the theoretical system) and we want to find the experimental system that fits. We would then do an iterative optimization process: controllably change parameters in the experimental system, measure, quantify the fidelity of the final quantum state or of the whole quantum process, and systematically tweak the parameters to optimize this. I do think this is totally doable: it's simply an extension of the forward process It's almost the same experiment, only performed more times.
  • +
  • Why we can't apply this to the superconducting -> formula case? First, because of the size requirement: if we want to relate an emerging property to the details of its composition, we probably would need an all-atomic model. Second, because we cannot continuously control the experimental variables in chemical compounds with quantum accuracy.
  • +
+",1847,,,,,4/13/2018 7:43,,,,0,,,,CC BY-SA 3.0 +1696,1,1698,,4/13/2018 9:14,,6,358,"

Capacitors, Inductors and Resistors are well-known circuit components. Since the proposal of Leon Chua in 1971, the Memristor joined them as a fundamental component. +I am wondering whether these elements would be somehow imitated by the means of quantum technologies and what would be the requirements to achieve them.

+",1955,,26,,12/13/2018 19:43,12/13/2018 19:43,Do the 'fundamental circuit elements' have a correspondence in quantum technologies?,,3,1,,,,CC BY-SA 3.0 +1697,2,,1680,4/13/2018 10:19,,9,,"

I think John Watrous' survey is a great place to start (Professor Watrous recommended it to me a long long time ago and I have been hooked ever since!):

+ +

J. Watrous. Quantum computational complexity. Encyclopedia of Complexity and System Science, Springer, 2009. arXiv:0804.3401 [quant-ph]

+ +

To the best of my knowledge, it has the highest complexity classes to page ratio.

+ +

I also really like Scott Aaronson's 2016 Barbados Lecture Notes:

+ +

S. Aaronson (with A. Bouland and L. Schaeffer). The Complexity of Quantum States and Transformations: From Quantum Money to Black Holes. ECCC TR16-109

+",,user1813,,,,4/13/2018 10:19,,,,0,,,,CC BY-SA 3.0 +1698,2,,1696,4/13/2018 12:41,,5,,"

When you ask,

+ +
+

I am wondering whether these elements would be somehow imitated by the means of quantum technologies

+
+ +

there are different levels on which you can interpret this question. You might mean to ask whether people will realise quantum capacitors, inductors, or resistors, or you might mean to ask whether people will realise components which, in quantum computers, fulfil the same functional roles as capacitors, inductors, or resistors in order to realise digital information processing — as opposed, for instance to analogue computers to model differential systems of equations.

+ +

It must be remembered that quantum technologies are at an early phase, where this is no single way which we can be confident will form the basis of a scalable quantum computer. But we can consider whether there are any cases where there may be interesting analogues.

+ +
    +
  • Many quantum technologies do not represent anything like an electrical circuit, as such. Ion traps store bits of information on individual ions, which are moved in a limited and carefully controlled way. There is no natural notion of electrical conduction, resistors, or capacitors in this setting. Quantum dots are even less like electrical circuits, in that the locations of the physical systems storing the data are fixed.

  • +
  • Flux qubits, on the other hand, explicitly include circuits which carry a current (albeit a very small one). The resistance in such circuits is effectively zero, as they are superconducting; but they do involve Josephson junctions, which are often considered a non-linear type of inductor.

  • +
+ +

This is different from whether or not there is anything in a given platform which is doing the same job as a resistor, capacitor, or inductor: which may be substantially different on the level of physics, but which are somehow performing a similar role in mediating how a system performs information processing. However, there is a big difference between the way that classical semiconductor electronics realises information processing — with physical gates, which transform information-carrying input signals to produce output signals — and the way every current quantum technology realises information processing, which is to perform controlled changes of the dynamics of systems prepared in some input state, to realise an output state.

+ +

(The one possible exception are photonic quantum systems, in which the information is carried in light signals rather than in the states of more-or-less static pieces of matter. Perhaps you might argue that an optical memory is analogous to a capacitor somehow, or that a wave plate is analogous to an inductor, but these don't seem to be meaningful functional analogues for how an optical system might be used to perform quantum information processing.)

+ +

In summary: there is no single answer to your question, because of the different things you might mean by it and because there is no single platform to refer to in order to provide a definitive answer. But most of the platforms don't have anything which represents these basic electrical components, or which play the same role. Quantum technologies are simply expected to operate differently than classical computing technology.

+",124,,,,,4/13/2018 12:41,,,,0,,,,CC BY-SA 3.0 +1699,2,,1367,4/13/2018 14:50,,4,,"

If you want to go beyond learning how to write quantum circuits in the various quantum programming frameworks such as Q#, pyQuil and QISKit, I highly recommend this recent paper with the title Quantum Algorithm Implementations for Beginners from the Los Alamos National Laboratory. It's a great resource for understanding how to compile and implement various quantum algorithms as well as their oracles and specific subroutines as quantum circuits with the IBM Q Experience. I'd recommend you to implement them in any of the aforementioned programming frameworks and learn the nitty-gritty details as you go.

+",1234,,,,,4/13/2018 14:50,,,,0,,,,CC BY-SA 3.0 +1700,1,,,4/13/2018 16:14,,10,175,"

The context: We are in the solid state. After a photon absortion by a system with a singlet ground state, the system undergoes the spin-conserving fission of one spin singlet exciton into two spin triplet excitons (for context, see The entangled triplet pair state in acene and heteroacene materials). These spin triplet pair propagates in the solid, still entangled. The quantum-computing-related goal of all this operation would be to transfer the entanglement of the two flying qubits to two positions that are fixed in space and are also well protected from decoherence (low-energy excitations of nuclear spins in a paramagnetic ion, for example).

+ +

The problem at hand (1), and the question: What would be the requirements to favour said quantum information transfer between the flying qubits and the stationary qubits? +(I know flying vs stationary qubit scenarios have been explored, but I have no experience in that field).

+",1847,,1847,,4/19/2018 14:52,4/19/2018 14:52,Entanglement transfer of spin-entangled triplet-pair states between flying qubits and stationary qubits,,0,2,,,,CC BY-SA 3.0 +1701,1,,,4/13/2018 16:17,,9,227,"

The context: We are in the solid state. After a photon absortion by a system with a singlet ground state, the system undergoes the spin-conserving fission of one spin singlet exciton into two spin triplet excitons (for context, see The entangled triplet pair state in acene and heteroacene materials). These spin triplet pair propagates in the solid, still entangled. The quantum-computing-related goal of all this operation would be to transfer the entanglement of the two flying qubits to two positions that are fixed in space and are also well protected from decoherence (low-energy excitations of nuclear spins in a paramagnetic ion, for example).

+ +

The problem at hand (2), and the question: Eventually, the entanglement between the two triplets is lost, and moreover inevitably the triplets find a way to relax back to the singlet ground state, emitting energy in form of photons. I would like to calculate how these processes are affected by vibrations. I assume the independent relaxation of each of the two triplets can be calculated mostly considering local vibrations, e.g. following a procedure similar to the one we employed here (Determining key local vibrations in the relaxation of molecular spin qubits and single-molecule magnets). Would the calculation of the loss of entanglement be necessarily related to delocalized vibrational modes that simultaneously involve the local environment of both triplets?

+",1847,,1847,,4/20/2018 6:58,4/28/2018 9:02,Decoherence of spin-entangled triplet-pair states in the solid state: local vs delocalized vibrations,,1,4,,,,CC BY-SA 3.0 +1703,2,,1696,4/14/2018 14:08,,2,,"

These elements do not necessarily have a correspondence in quantum computers, just as they do not necessarily occur in classical computers (an electronic computer might use some of them, but a mechanical or photonic computer does not necessarily have any equivalent of them).

+ +

What has an equivalence are the fundamental gates that form a classical computer. For example, there are only two classical single-bit gates, the direct connection and the NOT gate. A quantum computer has single qubit rotation gates (for example $X$, $Y$, $Z$, $H$, $T$) of which $X$ is a direct equivalent.

+",,user1039,,,,4/14/2018 14:08,,,,0,,,,CC BY-SA 3.0 +1704,2,,1678,4/14/2018 14:22,,1,,"

I would propose the (perfect) 1-bit random number generator. It is almost trivially easy:

+ +

You start with a single qubit in the usual initial state $\left|0\right>$. Then you apply the Hadamard gate $H$ which produces the equal superposition of $\left|0\right>$ and $\left|1\right>$. Finally, you measure this qubit to get either 0 or 1, each with 50% probability.

+",,user1039,,,,4/14/2018 14:22,,,,0,,,,CC BY-SA 3.0 +1706,2,,1696,4/14/2018 16:19,,2,,"
+

I am wondering whether these elements would be somehow imitated by the means of quantum technologies and what would be the requirements to achieve them

+
+ +

I don't think we would want to achieve quantum equivalents of resistors, capacitors, inductors etc (at least as of now). There are two parts to any circuit: 1) Logical implementation 2) Physical implementation.

+ +

You need the 'bits' represented as voltages/currents/spins to implement the logic, for which we have the quantum equivalent of qubits.

+ +

And when you physically implement a circuit, the concepts of resistance, capacitance etc comes into picture because of the sheer nature of the materials that we are trying to implement the circuit with. There are resistances, capacitances which act as noise in a circuit due to the wires, etc and also resistors and capacitors which we add to the circuit to vary the above said voltages/currents (for eg, a transistor is a variable resistor).

+ +

This analogy applies to quantum circuitry when you are implementing qubits. It boils down to the fact that after you have achieved the required logical implementation, in which form do you need your output? Based on this you may need to apply resistances and capacitances to the circuitry to change the voltages/currents.

+ +

So in the future if we need to change the qubits in such a way as it somehow mimics the classical resistive action, then we will need a technique to achieve a mechanism which performs the resistive action on qubits (I don't even comprehend what such action would even be for a qubit), then that technique will be called as a quantum resistor. Until then we will resort to classical resistors and capacitors to manipulate the classical signals, after the qubits have done their job.

+",419,,,,,4/14/2018 16:19,,,,0,,,,CC BY-SA 3.0 +1709,2,,1429,4/15/2018 4:12,,2,,"

Another approach to topological quantum computing could be that of topological insulators, and the use of the 1/2 integer quantum Hall effect. These insulators have the potential to be less error-prone. Topological insulators are both insulators, and conductors, at the same time, and being less error-prone, have the potential to provide a robust, quantum computing environment. Such topological insulator devices could be used in a topological quantum computer, by being a connector in between a classical system, and a quantum computer ( IEEE Reference ).

+",429,,26,,5/17/2019 22:15,5/17/2019 22:15,,,,0,,,,CC BY-SA 4.0 +1710,1,1718,,4/15/2018 6:45,,8,1800,"

Problem Statement: We are given a $2-1$ function $f:\{0,1\}^{n}\to\{0,1\}^{n}$ such that: there is a secret string $s\in\{0,1\}^{n}$ such that: $f(x)=f(x\oplus s)$. Challenge: find $s$.

+ +
+

Simon's algorithm says:

+ +
    +
  1. Set up a random superposition $$\frac{1}{\sqrt{2}}|r\rangle + \frac{1}{\sqrt{2}}|r\oplus s\rangle$$

  2. +
  3. Fourier sample to get a random $y$: $$y.s=0\ (\text{mod 2})$$

  4. +
  5. Repeat steps $n-1$ times to generate $n-1$ linear equations in $s$.

  6. +
+ +

Solve for $s$.

+
+ +

I don't understand steps (1) and (2). Why and how exactly do we set up the random superposition? How does it help? Also, in step (2), what does the dot operator (in $y.s$) stand for? Bit-wise multiplication?

+",26,,26,,05-01-2018 07:10,05-01-2018 07:10,How exactly does Simon's algorithm solve the Simon's problem?,,1,2,,,,CC BY-SA 3.0 +1715,1,1720,,4/15/2018 9:37,,6,490,"

In my constant thrill to know more about Quantum Computing I wanna know what is this relation. Additionally: Can one use squeezed light to effect multi-qubit operations on single photon qubits, or are these completely independent approaches?

+",1931,,26,,12/23/2018 14:04,12/23/2018 14:04,What is the relation between single photon qubits and squeezed light qubits?,,1,0,,,,CC BY-SA 3.0 +1716,1,,,4/15/2018 9:47,,7,197,"

In his inaugural lecture, Ronald de Wolf states

+ +
+

People are working with quantum objects, but trying to make them behave as classical as possible. (...) Instead of suppressing them to make systems behave as classically as possible, why not try to + benefit from them?

+
+ +

While he goes on to state that looking at fully exploiting the quantum effects is more interesting, I do wonder if and how the 'naturally occurring' quantum effects on extremely small transistors could be used to get better classical (or perhaps some classical/quantum hybrid) computation.

+ +

Obviously, the design of those transistors needs to use quantum mechanics, but not necessarily quantum information or quantum computation theory.

+ +

Has there been any promising research or results in this direction? Or are the good reasons why this wouldn't work?

+",253,,26,,12/13/2018 19:44,12/13/2018 19:44,Can the theory of quantum computation assist in the miniaturization of transistors?,,3,0,,,,CC BY-SA 3.0 +1717,1,,,4/15/2018 10:50,,6,203,"

Pairs of entangled qubits (or Bell pairs, or EPR pairs) are a fundamental resource for quantum computing, in the sense that any computational platform that cannot generate entanglement will also be unable to provide a computational advantage.[1] In two recent questions, Decoherence of spin-entangled triplet-pair states in the solid state: local vs delocalized vibrations and Entanglement transfer of spin-entangled triplet-pair states between flying qubits and stationary qubits, I asked about a physical scenario with the goal of generating entangled qubits pairs in the solid state. I know of this result of 2013, Heralded entanglement between solid-state qubits separated by 3 meters, which used photons and NV-centers in diamond, so this can be achieved in practice. Hoewever, I am not up-to-date on what is currently the best option.

+ +

My question is: What is the current technological status for the generation of entangled qubit pairs in the solid state? In particular, which options are currently fastest and/or most reliable?

+ +

[1] Thanks to Niel de Beaudrap who pointed that out in a comment.

+",1847,,1847,,4/15/2018 17:51,4/16/2018 14:57,What are the current solutions to generate entangled qubits in the solid state?,,1,2,,,,CC BY-SA 3.0 +1718,2,,1710,4/15/2018 11:03,,8,,"

$$ +\newcommand{\bra}[1]{\left<#1\right|}\newcommand{\ket}[1]{\left|#1\right>}\newcommand{\bk}[2]{\left<#1\middle|#2\right>}\newcommand{\bke}[3]{\left<#1\middle|#2\middle|#3\right>}\newcommand{\proj}[1]{\left|#1\right\rangle\left\langle#1\right|} +$$ +Much of the functionality here is the same as the Bernstien-Vazirani algorithm, if that helps. The following is more or less copy and pasted from some lecture notes I prepared at some point. It explains it in a slightly different way to the direction you're coming at it from, but hopefully gets you going in the right direction.

+ +

The circuit is, in principle, the same as for the Bernstein-Vazirani Algorithm, except that since the output of the function evaluation is $n$ bits, the second register, which is used for the reversible function evaluation, is also $n$ bits, + +This function evaluation is represented by the controlled-controlled...-controlled-U gate, acting as $\ket{x}\ket{y}\mapsto\ket{x}\ket{y\oplus f(x)}$.

+ +

We start with the first set of Hadamards producing an equally weighted superposition of all strings of $n$ bits, +$$ +\ket{0}\ket{0}\rightarrow\frac{1}{\sqrt{2^n}}\sum_{x\in\{0,1\}^n}\ket{x}\ket{0}. +$$ +This is followed by the function evaluation, +$$ +\rightarrow\frac{1}{\sqrt{2^n}}\sum_{x\in\{0,1\}^n}\ket{x}\ket{f(x)}. +$$ +At this point, you could measure the second register. This will return a random value of $f(r)$, so that the overall state of the system is in +$$ +\frac{1}{\sqrt{2}}(\ket{r}+\ket{r\oplus s})\ket{f(r)}. +$$ +So, the point is that you don't have to actively prepare different values of $r$; they will be selected for you by the measurement. We'll have to make a bit of an argument later on about how likely it is that we get new information each time we make such a measurement, but it'll all work out.

+ +

Actually, I don't usually think about performing the measurement at this point; it's unnecessary (but would allow you to avoid the density matrix formalism in the following calculation). Instead, calculate the action of the final set of Hadamards, yielding the final output state +$$ +\frac{1}{2^n}\sum_{x,z}(-1)^{x\cdot z}\ket{z}\ket{f(x)}. +$$ +Here, +$$ +x\cdot z= x_1z_1\oplus x_2z_2\oplus x_3z_3 \oplus\ldots \oplus x_nz_n, +$$ +where $x_k$ is the $k^{th}$ bit of $x$.

+ +

Now we collect unique values of $f(x)$, +$$ +\frac{1}{2^n}\sum_{z,f(x)}\left((-1)^{x\cdot z}+(-1)^{(x\oplus s)\cdot z}\right)\ket{z}\ket{f(x)}. +$$ +One can therefore verify that the output of the algorithm is +$$ +\frac{1}{2^{n-1}}\sum_{z: s\cdot z=0}(-1)^{x\cdot z}\ket{z}\sum_{f(x)}\ket{f(x)}. +$$ +The state of the first register clearly contains the information about $s$, but we need to extract it. Measuring the second register is not necessary (we could do it, but it doesn't help). Instead, let's trace out the second register, so we get +$$ +\rho=\frac{1}{2^{2n-2}}\sum_{z: s\cdot z=0}\sum_{y: s\cdot y=0}\sum_x(-1)^{x\cdot(y\oplus z)}\ket{y}\bra{z}. +$$ +By performing the sum over $x$, we are left with +$$ +\rho=\frac{1}{2^{n-1}}\sum_{s\cdot z=0}\proj{z}. +$$ +Measurement of the first register yields a binary string $z$ where $s\cdot z=0$. If we had $n-1$ such examples which are linearly independent, we would be able to determine $s$. This requires repeated application of the algorithm to find enough strings (this is the 'Fourier Sampling' part).

+ +

We must now justify that a linear number of applications is sufficient to find enough strings with high probability. In the absolute worst case, when we have found $n-2$ vectors, we must find the one remaining bit of information when there are still $2^{n-1}$ strings $s\cdot z=0$ to sample. These must constitute $1-2^{n-2}/2^{n-1}=\frac12$ of the space. Hence, the average number of trials to find one of these vectors is given by +$$ +\frac{1}{2}\sum_{n=0}^\infty\frac{n+1}{2^n}, +$$ +which is readily evaluated using the following identities, +\begin{eqnarray} +\sum_{n=0}^\infty r^n&=&\frac{1}{1-r} \nonumber\\ +\frac{d}{dr}r\sum_{n=0}^\infty r^n&=&\sum_{n=0}^\infty (n+1)r^n. \nonumber +\end{eqnarray} +On average, we always find another linearly independent vector by making $2$ samples, i.e. within $2n$ steps, we have a high probability of determining $s$. It is this final part of the argument, requiring some classical post-processing that differentiates Simon's algorithm from the Bernstein-Vazirani algorithm. Essentially, the difference comes from not knowing the eigenvectors of $U$ in advance. If we did, we could prepare the second register in a fixed state. Instead, it is prepared in a superposition of different eigenstates (which coincides with a nice state to prepare) and we have to rely on a certain amount of randomness to sample the elements we need.

+",1837,,,,,4/15/2018 11:03,,,,2,,,,CC BY-SA 3.0 +1719,2,,1488,4/15/2018 12:45,,3,,"
+

Is there a reliable resource/website that calculates which key sizes are currently at risk, based on how fast the newest quantum computers are?

+
+ +

As other answers have conveyed, if a given algorithm is susceptible to attack by quantum computers, it's not really a question of going to a larger key length; it wouldn't take much technological advancement to bring that larger key length under threat (and you never really know what the current state of the art is). We've seen from the history of classical computers (e.g. Moore's Law) that once you pass some basic threshold, exponential improvements are possible.

+ +

What other answers haven't mentioned is timeliness. Yes, you could ask ""based on our current state of technology, is a particular algorithm & key length combination secure?"", but that is only an instantaneous security. Sometimes that's good enough. If you want to agree a clandestine meeting with someone tomorrow, and so long as nobody finds out about it until after the fact, that's fine, you can use any algorithm that gave a yes answer to the question. However, what if that information is to remain secret for longer? Perhaps you're emailing someone the identity of an under-cover agent they are to meet. It's not good enough that the identity of that individual is protected now, but it must also be protected going into the future. Any data like that, you essentially have to assume that if it has been encrypted with an algorithm that is potentially susceptible to attack by a quantum computer, it will be read at some point, and is therefore compromised. Actually, if you're super-paranoid, you should assume this about all crypto algorithms anyway because even if the theory says they're perfectly secure, their practical implementation may be faulty and susceptible to cracking.

+ +
+

Or possibly, will new algorithms be created which try to prevent quantum computers from being able to crack them easily?

+
+ +

To replace these potentially breakable systems, you need new methods, which generally come under the banner of post-quantum crypto. Some of these exist already, but there are varying levels of confidence about how well they will actually hold up to attack. Much like with factoring numbers on a classical computer, where the difficulty was essentially based on ""lots of people have tried, and nobody's succeeded in doing it efficiently, so we guess it isn't possible"", the argument is similar, but not so many people have tried, and not for so long, as to have a huge weight of confidence yet, although the aim is to back it up with a bit more rigour from CS, making connections to complexity classes, and particularly the assumption P$\neq$NP.

+",1837,,,,,4/15/2018 12:45,,,,0,,,,CC BY-SA 3.0 +1720,2,,1715,4/15/2018 13:09,,5,,"

By photon qubits, I'm assuming that you meant single-photon qubit systems.

+ +
+

Can one use squeezed light to effect multi-qubit operations on photon qubits, or are these completely independent approaches?

+
+ +

There are two classes of protocols in quantum communication, namely discrete-variable (dv) and continuous-variable (cv). Squeezed-light qubits are part of the cv quantum communication protocols, because continuous-variable entanglement can be efficiently produced using squeezed light, whereas single-photon qubits are part of the dv protocols. So, to answer your question, they are indeed different approaches.

+ +

The main difference between these protocols is explained in this review paper ""Quantum information with continuous variables"":

+ +

A valuable feature of quantum optical implementations based upon continuous variables, related to their high efficiency, is their 'unconditionalness'. Quantum resources such as entangled states emerge from the nonlinear optical interaction of a laser with a crystal in an unconditional fashion. This 'unconditionalness' is hard to obtain in dv qubit-based implementations based on single-photon states.

+ +

To expand on the answer: splitting squeezed light on a beam splitter results in two output beams in an entangled state. The finite quality of the entanglement produced leads to imperfect communication, where the degree of imperfection depends on the amount of squeezing of the laser light involved.

+ +

For example, in a realistic quantum key distribution scenario, the cv states accumulate noise and emerge at the receiver as contaminated versions of the sender’s input states. The dv quantum information encoded in single-photon states is reliably conveyed for each photon that is not absorbed during transmission.

+ +

I hope that this clears up your question.

+",419,,,,,4/15/2018 13:09,,,,2,,,,CC BY-SA 3.0 +1788,2,,1426,4/15/2018 18:52,,6,,"

While I’m not an experimentalist, and have not studied these systems in any great depth, my (crude) understanding is the following:

+ +

In ion traps you (more or less) have to trap the ions in lines. However, this isn’t a limitation in terms of the ease of communication, because what you’re probably thinking about is when a linear system has nearest-neighbour interactions, i.e. each qubit can only interact with its immediate neighbours. In ion traps, this isn’t really true because you can access a common vibrational mode of all the ions in order to make arbitrary pairs interact directly. So actually, that’s really good.

+ +

The problem is the number of qubits that you can store. The more atoms you put in the trap, the closer together their energy levels are, and the harder they become to individually address in order to control them and implement gates. This tends to limit the number of qubits you can have in a single trapping area. To get around this (and with the added bonus of parallelism, necessary for error correction), people want to make multiple distinct trapping regions interact, either with flying qubits, or by shuttling the atoms between different trapping regions. This second approach seems to be very much in progress. This is the theory proposal, but I have certainly seen papers that have demonstrated the basic components.

+",1837,,,,,4/15/2018 18:52,,,,0,,,,CC BY-SA 3.0 +1789,1,1791,,4/15/2018 22:16,,7,736,"

I've solved Exercise 7.1.1 (Bernstein–Vazirani problem) of the book ""An introduction to quantum computing"" (Mosca et al.). The problem is the following:

+ +

Show how to find $a \in Z_2^n$ given one application of a black box that maps $|x\rangle|b\rangle \to |x\rangle |b \oplus x · a\rangle$ for some $b\in \{0, 1\}$.

+ +

I'd say we can do it like this:

+ +
    +
  • First I go from $|0\rangle|0\rangle \to \sum_{i \in \{0,1\}^n}|i\rangle| + \rangle$ using QFT and Hadamard
  • +
  • Then I apply the oracle: $$ \sum_{i \in \{0,1\}^n}(-1)^{(i,a)} |i\rangle| + \rangle $$
  • +
  • Then I read the phase with a Hadamard (since we are in $Z_2^n$, our QFT is a Hadamard) $$ |a\rangle |+ \rangle $$
  • +
+ +

I think this is correct. Do you agree?

+",1644,,,,,4/16/2018 12:48,Bernstein–Vazirani problem in book as exercise,,1,1,,,,CC BY-SA 3.0 +1790,2,,1426,4/16/2018 4:23,,7,,"

You may want to check out this Schaetz et al, Reports on Progress in Physics of 2012 ""Experimental quantum simulations of many-body physics with trapped ions"" (alternate link in semanticscholar). In sum: yes, the arrangement of the ions is one key limitation to scalability, but no, configurations are not currently limited to a single line of atoms. On that paper, check Figure 3 for experimental fluorescence images of laser-cooled ions in a common confining potential of a linear RF trap, including a single ion, a single line, a zig-zag chain and a three-dimensional construct.

+ +

From Figure 3 in the paper above by Schaetz et al: ""Structural phase transitions can be induced between one-, two- and three-dimensional crystals, for example by reducing the ratio of radial to axial trapping frequencies."" I am sure more recent review papers should exist, but this is the first one I found that was satisfactory. Admittedly, current results are more about direct simulation rather than universal computation, e.g. from figure 13 in the same paper: ""Changing the experimental parameters non-adiabatically during a structural phase transition from a linear chain of ions to a zigzag structure, the order within the crystal breaks up in domains, framed by topologically protected defects that are suited to simulate solitons.""

+ +

On the same topic, and also from 2012, another paper worth checking out would be Engineered two-dimensional Ising interactions in a trapped-ion quantum simulator with hundreds of spins (arXiv version) (Nature version). You have the experimental picture as Figure 1; it is a Penning trap in this case rather than a Paul trap. Indeed, it is not universal quantum computing but rather the specialized application of quantum simulation, but still it is undeniably experimental progress towards holding ions in place in a 2-D trap and thus advancing towards scalability.

+ +

I am myself no expert in traps, but this is what I got on scalability in a recent (2017) conference:

+ +
    +
  • Experimentalists play around with the potentials and achieve interesting combinations, with central zones that are quasi-crystalline (chains, ladders, ribbons etc) and exotic tips (e.g. ribbons or ladders that finish in a single atom).
  • +
  • The majority of the popular ions have a configuration of the type [noble-gas]$s^1$ (like Ca$^+$), preferably with no nuclear spin, but this is for convenience and simplicity. Accessing hyperfine states and/or a more complex spin level structure (like Yb$^+$=[Xe]f$^{14}$s$^2$) opens the door to a richer Hilbert space per ion.
  • +
  • Collective vibrations are used as the basis of interqubit communication. As in the previous point, the breathing mode is uniquely stable and thus convenient to use, but other vibrations are also accessible and would allow more interesting interqubit communication schemes.
  • +
+",1847,,1847,,4/19/2018 5:37,4/19/2018 5:37,,,,0,,,,CC BY-SA 3.0 +1791,2,,1789,4/16/2018 7:01,,7,,"

This is not correct: you need to use the state $|-\rangle=(|0\rangle-|1\rangle)/\sqrt{2}$ instead of $|+\rangle$.

+ +

The important thing is that you've missed showing how the black box map that you've stated gives the oracle output that you've stated. To see this, apply the map on
$$
|x\rangle|+\rangle\mapsto|x\rangle(|0\oplus x\cdot a\rangle+|1\oplus x\cdot a\rangle)/\sqrt{2}=|x\rangle(|0\rangle+|1\rangle)/\sqrt{2}.
$$
When the $|+\rangle$ state is there, you get no phase. Meanwhile, with the $|-\rangle$ state,
$$
|x\rangle|-\rangle\mapsto|x\rangle(|0\oplus x\cdot a\rangle-|1\oplus x\cdot a\rangle)/\sqrt{2}=\left\{\begin{array}{cc} |x\rangle(|0\rangle-|1\rangle)/\sqrt{2} & x\cdot a=0 \\ |x\rangle(|1\rangle-|0\rangle)/\sqrt{2} & x\cdot a=1\end{array}\right..
$$
This can simply be written as $(-1)^{x\cdot a}|x\rangle|-\rangle$.
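As a sanity check of this phase-kickback argument, here is a small numpy sketch (the function name and the particular $a$ are just illustrative) that applies the phase $(-1)^{x\cdot a}$ to a uniform superposition and recovers $a$ deterministically from a single oracle call:

```python
import numpy as np
from itertools import product

def bernstein_vazirani(a):
    """Simulate BV with the phase-kickback oracle: the |-> target turns the
    bit-flip oracle into the phase (-1)^(x.a) on |x>."""
    n = len(a)
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    Hn = H
    for _ in range(n - 1):
        Hn = np.kron(Hn, H)
    state = np.zeros(2 ** n)
    state[0] = 1.0                              # |0...0>
    state = Hn @ state                          # uniform superposition
    for idx, x in enumerate(product([0, 1], repeat=n)):
        if sum(xi * ai for xi, ai in zip(x, a)) % 2 == 1:
            state[idx] *= -1                    # kickback phase (-1)^(x.a)
    state = Hn @ state                          # final Hadamards give |a>
    outcome = int(np.argmax(np.abs(state)))     # measurement is deterministic
    return [int(b) for b in format(outcome, f"0{n}b")]

a = [1, 0, 1, 1]
recovered = bernstein_vazirani(a)   # equals a
```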

+",1837,,1837,,4/16/2018 12:48,4/16/2018 12:48,,,,0,,,,CC BY-SA 3.0 +1792,1,1827,,4/16/2018 9:25,,13,1284,"

I read that a qubit can be encoded in a Fock state, such as the presence or absence of a photon. How do you perform single qubit rotations on Fock states?

+",1931,,26,,12/23/2018 12:20,12/23/2018 12:20,How do you rotate a Fock state qubit?,,2,1,,,,CC BY-SA 3.0 +1793,2,,1792,4/16/2018 10:28,,7,,"

The short answer is that you can't. There's something called a ""particle number superselection rule"" which postulates that you can't create a superposition of different numbers of particles. So, if you prepare a Fock state, you can perform phase gates, and bit flips, but you cannot perform arbitrary rotations that create superpositions of different particle number.

+ +
+ +

The longer answer is that sometimes you can make superpositions, if you have the right reference frame available. There's a good discussion of this stuff here. This is the reason why states such as the coherent states, which are a superposition of different numbers of photons, can be created (and they get used for quantum computation, but that's an entirely different question). But I believe that this can't work with small photon numbers (e.g. the presence or absence of a single photon). The only thing you can do in that context is create a superposition of a single photon being in one of two places.

+",1837,,,,,4/16/2018 10:28,,,,0,,,,CC BY-SA 3.0 +1794,1,1795,,4/16/2018 10:56,,15,1524,"

In a lecture, recorded on Youtube, Gil Kalai presents a 'deduction' for why topological quantum computers will not work. The interesting part is that he claims this is a stronger argument than the argument against fault tolerant computing in general.

+ +

If I understand his argument correctly, he states that

+ +
    +
  1. A (hypothetical) quantum computer without quantum error correction can simulate the system of anyons representing the qubit in a topological quantum computer.

  2. +
  3. Therefore, any quantum computer based on these anyons must have at least as much noise as a quantum computer without quantum error correction. As we know that our noisy quantum computer is insufficient for universal quantum computation, topological quantum computers based on anyons cannot provide universal quantum computation either.

  4. +
+ +

I think step 2 is sound, but I have some doubts on step 1 and why it implies 2. In particular:

+ +
    +
  • Why can a quantum computer without error correction simulate the system of anyons?
  • +
  • If it can simulate the system of anyons, is it possible that it can only do so with low probability and hence cannot simulate the topological quantum computer with the same fault tolerance as the system of anyons?
  • +
+",253,,26,,12/13/2018 19:45,12/13/2018 19:45,Is Gil Kalai's argument against topological quantum computers sound?,,1,0,,,,CC BY-SA 3.0 +1795,2,,1794,4/16/2018 11:48,,12,,"

A topological quantum computer could be made by using an exotic phase of matter in which anyons arise as localized effects (such as quasiparticles or defects). In this case, errors typically cost energy, and so the probability is suppressed for small temperatures (though it will never be zero).

+ +

A topological quantum computer could also be made (or one could also say simulated) by a standard gate model quantum computer, such as one based on qubits.

+ +

In either case, we are using a noisy medium to engineer a system of anyons. And so we will get a noisy system of anyons. The effects of the noise will cause our anyons to wander around, as well as causing pair creations of additional anyons, etc. If these effects are not accounted for, it will cause errors in any topological quantum computation that we intend to do. So in this sense, his arguments are correct.

+ +

The important point to note, therefore, is that we must not fail to account for the errors. We must look at the system, keep track of where all anyons are, try to identify which ones we are using, and identify how to clear away the ones that have been created in error. This means that we must do error correction within the topological quantum computer.

+ +

The promise of TQC is mainly that there should be ways to engineer topological phases that will have less noise. They should therefore require less error correction. But they will definitely need some.

+ +

For a gate model quantum computer simulating a topological quantum computer, the benefits are that topological error correction is quite straightforward and has high thresholds. The surface codes are examples of this. But we don't usually think of this as a gate model QC simulating a topological QC. We just think of it as a good example of a quantum error correcting code.

+",409,,409,,4/16/2018 12:07,4/16/2018 12:07,,,,4,,,,CC BY-SA 3.0 +1796,1,1814,,4/16/2018 13:18,,8,513,"

Since the original experimental contribution using Shor's factoring algorithm to factorize the integer 15, several experiments have been performed aiming at the largest factorized number. However, most of these experiments are designed specifically for one number ($N$), rather than following a general approach that could be used for any integer $<N$. Example.

+ +

I am wondering which is, at the moment, the largest number that has been experimentally factorized in a general procedure by a quantum algorithm.

+",1955,,26,,11/25/2018 15:23,11/25/2018 15:23,Which is the highest number factorized by QC in a non-specific experiment?,,2,8,,,,CC BY-SA 4.0 +1797,2,,1717,4/16/2018 14:16,,3,,"

Superconducting qubits

+ +

There are several possible approaches. In the most popular scheme, the qubits are always coupled but off-resonant, so energy conservation prevents the exchange of excitations. With an external magnetic flux, the qubits can be tuned temporarily into resonance. The qubits will pick up a phase, which will depend on the state of the other qubit.

+ +

For first works see Strauch2003 or DiCarlo2009.

+ +

These days, cPhase gates are done routinely in < 300 ns with fidelity > 99%. Barends2016, Salathe2015, McKay2016.

+ +

As superconducting qubits work in the microwave regime and are thus operated at cryogenic temperatures, it is not straightforward to do a two-qubit gate between distant qubits. A gate between qubits on different chips in the same cryostat has been demonstrated (e.g. Kurpiers2017a), but in order to conduct a gate between different cryostats, either a microwave-to-optical and optical-to-microwave converter is needed, or alternatively a low-loss cryogenic microwave link needs to be built.

+ +

Quantum dots

+ +

There are different degrees of freedom in multi-quantum-dot systems which can be used as a qubit. Thus there are different exact mechanisms. Typically either direct exchange interaction or tunable perturbation of a non-computational state is used.

+ +

Single-quantum-dot spin qubits: Watson2018

+ +

Double-quantum-dot spin qubits: Nichol2017

+ +

Between a spin and a photon: Mi2018

+",1989,,1989,,4/16/2018 14:57,4/16/2018 14:57,,,,0,,,,CC BY-SA 3.0 +1799,1,1800,,4/16/2018 21:26,,11,1090,"

I understand the notation for classical error correcting codes. E.g., ""Hamming(7,4)"" stands for a Hamming code that uses 7 bits to encode blocks of 4 bits.

+ +

What does the notation for quantum error correcting codes mean? E.g., there is a paper that deals with a [[4,2,2]]-code. What are these three numbers? What do double brackets stand for?

+",528,,26,,05-07-2018 13:19,05-07-2018 13:19,What does quantum error correction code notation stand for?,,2,0,,,,CC BY-SA 3.0 +1800,2,,1799,4/16/2018 23:04,,9,,"

An $[\![n,k,d]\!]$ code is a quantum error correction code which encodes $ k$ qubits in an $ n$-qubit state, in such a way that any operation which maps some encoded state to another encoded state must act on at least $d$ qubits. (So, for example, any encoded state which has been subjected to an error consisting of at most $\lfloor (d-1)/2 \rfloor $ Pauli operations can in principle be recovered perfectly).

+ +

This notation generalises the notation $ [n,k,d]$ for classical error correction codes, in which $ k$-bit ""plaintext"" strings are encoded in $n$-bit ""codeword"" strings, in such a way that at least $d $ bits must be flipped to transform between any two codewords representing different plaintexts. (In this context and in the quantum case, $ d $ is referred to as the code distance.) The double-brackets are used simply to denote that the code being referred to is a quantum error correction code rather than a classical code.
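As a trivial illustration of the $\lfloor (d-1)/2 \rfloor$ rule (the helper name is just for this sketch):

```python
def correctable_errors(d):
    """Number of arbitrary Pauli errors a distance-d code can correct,
    namely floor((d - 1) / 2)."""
    return (d - 1) // 2

# The [[4,2,2]] code mentioned in the question has d = 2: it can detect a
# single error, but corrects (2 - 1)//2 = 0 arbitrary single-qubit errors.
t_422 = correctable_errors(2)   # 0
t_d3 = correctable_errors(3)    # a distance-3 code corrects 1 error
```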

+",124,,124,,4/17/2018 8:21,4/17/2018 8:21,,,,1,,,,CC BY-SA 3.0 +1801,2,,1799,4/16/2018 23:51,,6,,"

Taking an $\left[\left[n, k, d\right]\right]$ code:

+ +

The classical equivalent to this is an $\left[n, k, d\right]$ code, which is a code referring to the number of bits, $n$, encoding $k$ bits. The third number, $d$, is the minimum Hamming distance taken between any two codewords. For linear codes, this is equal to the minimum Hamming weight (i.e. number of non-zero bits) of the non-zero codewords.
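A tiny Python illustration of these classical notions, using the $[3,1,3]$ repetition code (chosen purely as an example):

```python
def hamming(u, v):
    """Hamming distance between two equal-length bit tuples."""
    return sum(a != b for a, b in zip(u, v))

# [3,1,3] repetition code: n = 3 bits encode k = 1 bit, codewords 000 and 111.
codewords = [(0, 0, 0), (1, 1, 1)]

# Minimum distance over all pairs of distinct codewords:
d = min(hamming(u, v) for u in codewords for v in codewords if u != v)   # 3

# For a linear code, this equals the minimum weight of a non-zero codeword:
w = min(sum(c) for c in codewords if any(c))                             # 3
```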

+ +

As in the classical case, the first two numbers refer to the number of qubits, $n$, used to encode $k$ qubits. $d$ is still used to refer to distance, but the definition of distance has to be changed.

+ +

The weight, $t$, of a Pauli operator $E_a$, is the number of qubits on which it acts with a (single-qubit) Pauli operator $\left(X, Y \text{ or } Z\right)$. As an example, arbitrarily taking $E_1 = X\otimes I\otimes I\otimes Z\otimes I$, $E_1$ has weight $t=2$. The distance is then the minimum weight of a Pauli operator $E_a$ (in the space of possible errors) for which the condition $\left<j\vert E_a\vert i\right> = C_a\delta_{ji}$, for some (real) $C_a$ and all codewords $i$ and $j$, fails to hold. That is, the distance is the minimum number of single-qubit errors that can map a codeword to a state with non-zero overlap on a different codeword.
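A quick illustration of the weight counting (the string encoding of a Pauli operator is just a convenience for this sketch):

```python
def pauli_weight(pauli):
    """Weight of a Pauli string: the number of qubits acted on non-trivially.
    The operator is given as a string over {'I', 'X', 'Y', 'Z'}, one letter
    per qubit."""
    return sum(1 for p in pauli if p != 'I')

# The example from the text: E1 = X (x) I (x) I (x) Z (x) I has weight 2.
w1 = pauli_weight("XIIZI")   # 2
```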

+ +

For more details, see e.g. Chapter 7 of Preskill's quantum computation notes.

+",23,,,,,4/16/2018 23:51,,,,1,,,,CC BY-SA 3.0 +1802,1,1811,,4/17/2018 1:17,,12,335,"

I want to simulate large stabilizer circuits (H/S/CNOT/MEASURE/feedforward) with a small number of T gates mixed in. How can I do this in a way that scales exponentially only in the number of T gates? Are there existing implementations?

+",119,,27,,7/18/2019 17:55,7/18/2019 17:55,Simulating Clifford + few-T circuits,,1,2,,,,CC BY-SA 3.0 +1803,1,1805,,4/17/2018 2:37,,29,7645,"

Plain and simple. Does Moore's law apply to quantum computing, or is it similar but with the numbers adjusted (e.g. tripling every 2 years)? Also, if Moore's law doesn't apply, why do qubits change it?

+",1348,,26,,12/13/2018 19:45,05-01-2019 09:30,Does Moore's law apply to quantum computing?,,5,3,,,,CC BY-SA 3.0 +1804,2,,1803,4/17/2018 6:22,,6,,"

The first thing to understand about Moore’s law is that it is not a law in the absolute sense, mathematically provable, or even postulated (like a law of physics). Really, it was just a rule of thumb that said the number of transistors in a processor would double every x years. This can be seen in the way that the value x has changed over time. Originally, it was x=1, then it became x=2, then what it was applied to (processor speed) changed. It has proved to be a useful rule of thumb, partly because it was the rule of thumb that was used to set targets for new generations of processor.

+ +

So, there is absolutely no reason why Moore’s law should apply to quantum computers, but it would not be unreasonable to guess that, past some basic threshold, qubit numbers will double every y years. For most implementations of quantum computation, we don’t yet have enough data points to start extrapolating an estimate for the value y. Some might argue that it’s not even clear yet whether we’re in the “vacuum tube” or “transistor” era of quantum computing (Moore’s law didn’t start until the transistor era).

+ +

We might start to try and extrapolate for some systems. For example, D-wave has a history of doubling its processor sizes. This started as y=1, and currently has about y=2. Of course, this is not a universal quantum computing device. The next best thing we might look at is the IBM quantum processor. In a year, the computer available on the IBM quantum experience went from 5 qubits to 16, although I don’t think it’s reasonable to extrapolate based on this.

+",1837,,,,,4/17/2018 6:22,,,,0,,,,CC BY-SA 3.0 +1805,2,,1803,4/17/2018 6:24,,25,,"

If you take as definition ""the number of transistors in a dense integrated circuit doubles about every two years"", it definitely does not apply: as answered here in Do the 'fundamental circuit elements' have a correspondence in quantum technologies? there exist no transistors-as-fundamental-components (nor does any fundamental parallel to transistors exist) in a quantum computer.

+ +

If you take a more general definition ""chip performance doubles approximately every 18 months"", the question makes more sense, and the answer is still that it does not apply, mainly because Moore's law is not one of fundamental physics. Rather, in the early stages, it was an observation of an established industry. Later, as pointed out in a comment,[1] it has been described as functioning as an ""evolving target"" and as a ""self-fulfilling prophecy"" for that same industry.

+ +

The key is that we do not have an established industry producing quantum computers. We are not in the quantum equivalent of 1965. Arguably we will move faster, but in many aspects we are rather in the XVII-XVIII centuries. For a perspective, check this timeline of computing hardware before 1950.

+ +

For a more productive answer, there are a few fundamental differences and a few possible parallels between classical and quantum hardware, in the context of Moore's law:

+ +
    +
  • For many architectures, in a certain sense we already work with the smallest possible component: we might develop ion traps (of a fixed size) fitting more ions, but we cannot develop smaller ions, since they are of atomic size.
  • +
  • Even when we are able to come up with tricks, such as Three addressable spin qubits in a molecular single-ion magnet, they are still fundamentally limited by quantum mechanics. We need control over 8 energy levels to control 3 qubits ($2^n$), which is doable, but not scalable.
  • +
  • Precisely because the scalability issue is one of the hardest problems we have with quantum computers -not just having a larger number of qubits, but also being able to entangle them- it's dangerous to extrapolate from current progress. See for illustration the history of NMR quantum computers, which stalled after a very early string of successes. In theory, increasing the number of qubits in the device was trivial. In practice, every time you want to be able to control 1 more qubit you need to double the resolution of your machine, which becomes very unfeasible very quickly.
  • +
  • If and when there exists an industry that relies on an evolving technology which is able to produce some kind of integrated quantum chips, then yes, at that point we will be able to draw a real parallel to Moore's law. For a taste of how far we are from that point, see Are there any estimates on how complexity of quantum engineering scales with size?
  • +
+ +

[1] Thanks to Sebastian Mach for that insight and wikipedia link. For more details on that see Getting New Technologies Together: Studies in Making Sociotechnical Order edited by Cornelis Disco, Barend van der Meulen, p. 206 and Gordon Moore says aloha to Moore's Law.
+",1847,,1847,,05-10-2018 18:54,05-10-2018 18:54,,,,14,,,,CC BY-SA 4.0 +1806,1,1810,,4/17/2018 7:44,,37,5757,"

Quantum state teleportation is the quantum information protocol where a qubit is transferred between two parties using an initial shared entangled state, Bell measurement, classical communication and local rotation. Apparently, there is also something called quantum gate teleportation.

+ +

What is quantum gate teleportation and what is it used for?

+ +

I am particularly interested in possible applications in simulating quantum circuits.

+",144,,55,,09-10-2020 11:47,09-10-2020 11:47,What is quantum gate teleportation?,,2,0,,,,CC BY-SA 3.0 +1807,1,1816,,4/17/2018 8:15,,10,500,"

I read that a qubit can be encoded in a polarization state (horizontal or vertical polarization of a photon). How do you perform two-qubit operations on a polarization qubit?

+",1931,,26,,12/13/2018 19:45,12/13/2018 19:45,How do you apply a CNOT on polarization qubits?,,1,0,,,,CC BY-SA 3.0 +1808,1,,,04-04-2018 18:57,,8,356,"

For an integer, $N$, to be factorised, with $a$ (uniformly) chosen at random between $1$ and $N$, with $r$ the order of $a\mod N$ (that is, the smallest $r$ with $a^r\equiv 1\mod N$):

+ +

Why is it that in Shor's algorithm we have to discard the scenario in which $a^{r/2} =-1 \mod N$? Also, why shouldn't we discard the scenario when $a^{r/2} = 1 \mod N$?

+",5410,user2508039,23,,4/17/2018 9:40,03-04-2019 21:30,Shor's algorithm caveats when $a^{r/2} =-1 \mod N$,,3,2,,,,CC BY-SA 3.0 +1809,2,,1806,4/17/2018 8:35,,10,,"

Gate teleportation is in principle a method that allows the creation of different gates from an available set of gates, by teleporting qubits through entangled states. An example of the use of this method is the creation of the T gate from a Clifford set of gates in order to make the set universal. The construction in this particular case is done with the use of special T ancillae. The standard reference can be found in arXiv:quant-ph/9908010.

+ +

For simulating quantum circuits, you can use gate teleportation to move gates around the circuit with the use of ancilla qubits (the number of ancillae depends on the number of gates).

+",2000,,26,,5/13/2019 21:20,5/13/2019 21:20,,,,0,,,,CC BY-SA 4.0 +1810,2,,1806,4/17/2018 8:52,,31,,"

Quantum gate teleportation is the act of being able to apply a quantum gate on the unknown state while it is being teleported. This is one of the ways in which measurement-based computation can be described using graph states.

+ +

Usually, teleportation works by having an unknown quantum state $|\psi\rangle$ held by Alice, and two qubits in the Bell state $|\Psi\rangle=(|00\rangle+|11\rangle)/\sqrt{2}$ shared between Alice and Bob. Alice performs a Bell state measurement, getting one of 4 possible answers and Bob holds on his qubit, depending on the measurement result of Alice, one of the 4 states $|\psi\rangle,X|\psi\rangle,Z|\psi\rangle,ZX|\psi\rangle.$ So, once Bob learns what result Alice got, he can compensate by applying the appropriate Paulis.
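The four outcomes and corrections can be verified with a short numpy sketch (the qubit ordering and variable names are my own choices): for each Bell outcome on Alice's two qubits, Bob's conditional state is extracted and the stated Pauli fix recovers $|\psi\rangle$ up to a phase.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

rng = np.random.default_rng(0)
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)                          # Alice's unknown state

bell_pair = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
state = np.kron(psi, bell_pair)                     # qubit order: A1, A2, B
M = state.reshape(4, 2)                             # rows: A1A2, cols: B

# The four Bell states Alice can measure on A1A2, and Bob's fix for each:
bells = [np.array(v, dtype=complex) / np.sqrt(2)
         for v in ([1, 0, 0, 1], [0, 1, 1, 0], [1, 0, 0, -1], [0, 1, -1, 0])]
corrections = [I2, X, Z, Z @ X]

overlaps = []
for b, C in zip(bells, corrections):
    bob = b.conj() @ M                  # Bob's conditional (unnormalised) state
    bob = C @ (bob / np.linalg.norm(bob))
    overlaps.append(abs(np.vdot(bob, psi)))   # 1 for every outcome
```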

+ +

Let $U$ be a 1-qubit unitary. Assume Alice and Bob share $(\mathbb{I}\otimes U)|\Psi\rangle$ instead of $|\Psi\rangle$. If they repeat the teleportation protocol, Bob now has one of $U|\psi\rangle,UX|\psi\rangle,UZ|\psi\rangle,UZX|\psi\rangle$, which we can rewrite as $U|\psi\rangle,(UXU^\dagger)U|\psi\rangle,(UZU^\dagger)U|\psi\rangle,(UZXU^\dagger)U|\psi\rangle.$ The compensations that Bob has to make for a given measurement result are given by the bracketed terms. Often, these are no worse than the compensations you would have to make for normal teleportation (i.e. just the Pauli rotations). For example, if $U$ is the Hadamard rotation, then the corrections are just $(\mathbb{I},Z,X,XZ)$ respectively. So, you can apply the Hadamard during teleportation just by changing the state that you teleport through (there is a strong connection here to the Choi-Jamiołkowski isomorphism). You can do the same for Pauli gates, and the phase gate $\sqrt{Z}=S$. Moreover, if you repeat this protocol to build up a more complicated computation, it is often sufficient to keep a record of what these corrections are, and to apply them later.
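One can check the claimed corrections $(\mathbb{I},Z,X,XZ)$ for $U=H$ directly with a few lines of numpy (a minimal sketch):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

# Conjugate each teleportation byproduct through U = H:
corr_X = H @ X @ H.conj().T          # H X H^dag = Z
corr_Z = H @ Z @ H.conj().T          # H Z H^dag = X
corr_ZX = H @ (Z @ X) @ H.conj().T   # H (ZX) H^dag = XZ
```

So all the compensations for the Hadamard are again just Pauli operators, as claimed.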

+ +

Even if you don't only need the Pauli gates (as is the case for $T=\sqrt{S}$), the compensations may be easier than implementing the gate directly. This is the basis of the construction of the fault-tolerant T gate.

+ +

In fact, you can do something similar to apply a controlled-NOT between a pair of qubits as well. This time, the state you need is $|\Psi\rangle_{A_1B_1}|\Psi\rangle_{A_2B_2}$, and a controlled-NOT applied between $B_1$ and $B_2$. Now there are 16 possible compensating rotations, but all of them are just about how Pauli operations propagate through the action of a controlled-NOT and, again, that just gives Pauli operations out.

+",1837,,1837,,11-08-2018 06:07,11-08-2018 06:07,,,,4,,,,CC BY-SA 4.0 +1811,2,,1802,4/17/2018 8:53,,9,,"

Taking your comment to Kiro to its logical conclusion, the answer is 'yes'. The basic idea is to decompose the T gate 'magic' state $\tfrac{1}{\sqrt 2}\bigl(\lvert 0 \rangle + \mathrm{e}^{i \pi / 4} \lvert 1 \rangle \bigr)$ as a linear combination of stabiliser states. (If you do this for several magic states, this produces an exponentially large linear combination.) Representing the T-gate states involved as density operators, together with any other stabiliser states introduced as inputs or as auxiliary work-space, we can use this expansion to compute the probability of any particular Pauli measurement outcome, such as a standard basis measurement on a single qubit, after performing a stabiliser circuit and gate teleportations of the T gates.

+ +

The basic idea behind this can be improved by noting that there is more than one way to expand the T-gate state as a linear combination — particularly if you consider decompositions of several T-gate states at once, rather than expanding each T-gate state independently, and if furthermore you are happy with an approximate simulation rather than an exact one (see e.g. [Bravyi+Gosset 2016] and [Campbell+Howard 2017]).
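For a single T state, one concrete (non-optimised, purely illustrative) decomposition uses just the stabilizer states $|0\rangle$ and $|+\rangle$; a short numpy check, with the coefficients worked out by matching amplitudes:

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)
plus = (ket0 + ket1) / np.sqrt(2)

# The T-gate magic state |T> = (|0> + e^{i pi/4} |1>) / sqrt(2)
T_state = (ket0 + np.exp(1j * np.pi / 4) * ket1) / np.sqrt(2)

# Expand over the stabilizer states |0> and |+>:
#   |T> = a |0> + b |+>  with  b = e^{i pi/4},  a = (1 - e^{i pi/4}) / sqrt(2)
b = np.exp(1j * np.pi / 4)
a = (1 - b) / np.sqrt(2)
reconstructed = a * ket0 + b * plus
```

Expanding many T states this way multiplies out into an exponentially large combination of stabilizer terms, which is where the exponential cost in the number of T gates comes from.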

+",124,,,,,4/17/2018 8:53,,,,0,,,,CC BY-SA 3.0 +1812,2,,1808,4/17/2018 9:07,,2,,"

There is no scenario of $a^{r/2}\equiv 1\text{ mod }N$ because you have already assumed that $r$ is the smallest value such that $a^r\equiv1\text{ mod }N$, and $r/2$ is smaller than $r$.

+ +

As to why you have to discount $a^{r/2}\equiv -1\text{ mod }N$: the point is that you've found something that satisfies $(a^r-1)=kN$ for some integer $k$. This factors as $(a^{r/2}-1)(a^{r/2}+1)=kN$ if $r$ is even. Either one of the terms $(a^{r/2}\pm 1)$ is divisible by $N$, or each contains different factors of $N$. We want them to contain different factors so that we can compute $\text{gcd}(a^{r/2}\pm1,N)$ to find a factor. So, we specifically want that $a^{r/2}\pm 1\neq 0 \text{ mod }N$. One case has been eliminated as stated above by requiring $r$ to be as small as possible. The other we have to explicitly discount.
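A minimal Python sketch of this classical part (the helper names are mine, and the quantum order-finding step is replaced by direct computation):

```python
from math import gcd

def order(a, N):
    """Smallest r with a^r = 1 mod N (requires gcd(a, N) = 1)."""
    x, r = a % N, 1
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def factor_from_order(a, r, N):
    """Extract a factor from the order r, discarding the bad cases
    (odd r, or a^(r/2) = -1 mod N)."""
    if r % 2 == 1:
        return None                    # odd order: try another a
    x = pow(a, r // 2, N)
    if x == N - 1:                     # a^(r/2) = -1 mod N: discard
        return None
    f = gcd(x - 1, N)
    return f if 1 < f < N else gcd(x + 1, N)

# Worked example with N = 15, a = 7: the order is r = 4, 7^2 = 4 mod 15,
# and gcd(4 - 1, 15) = 3 gives a factor.
r = order(7, 15)                  # 4
f = factor_from_order(7, r, 15)   # 3
```

Note that $a=14$ illustrates the discarded case: $14\equiv-1\text{ mod }15$ has order $2$, and $14^{r/2}=14\equiv-1\text{ mod }15$, so no factor is extracted.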

+",1837,,1837,,4/17/2018 18:02,4/17/2018 18:02,,,,0,,,,CC BY-SA 3.0 +1813,2,,1808,4/17/2018 9:25,,2,,"

The requirement that $a^r\equiv 1\mod N$ is equivalent to requiring that $a^r - 1\equiv 0\mod N$.

+ +

We want a number, $b$, such that the greatest common divisor of $b$ and $N$ is a proper factor of $N$ (i.e. is a factor $\neq 1, N$).

+ +

We also have that $a^r-1 = \left(a^{r/2}-1\right)\left(a^{r/2}+1\right)$.

+ +

So, we take $b = a^{r/2}-1$. We know that $r$ is the smallest number such that $a^r = 1\mod N$, showing that $a^{r/2}\neq 1\mod N$ and so $\gcd\left(a^{r/2}-1, N\right)\neq N$ (as otherwise, $N$ would divide $b$).

+ +

By Bézout's identity, if $\gcd\left(a^{r/2}-1, N\right)=1, \exists\, x_1, x_2\in\mathbb Z \text{ s.t. } \left(a^{r/2}-1\right)x_1+Nx_2 = 1$, or $\left(a^r-1\right)x_1+N\left(a^{r/2}+1\right)x_2 = a^{r/2}+1$. As $N$ divides $a^r-1$, this gives that $N$ divides $a^{r/2}+1$, or $a^{r/2} = -1\mod N$.

+ +

This gives that the requirement $a^{r/2}\neq -1\mod N$ (alongside the constraint on $r$) is enough to determine that the greatest common divisor of $a^{r/2} - 1$ and $N$ is a proper factor of $N$.
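The Bézout identity invoked above can be checked mechanically with the extended Euclidean algorithm; this is a hedged illustration (the numbers $48 = 7^{4/2}-1$ and $N=15$ are my own toy choice, not from the answer):

```python
def extended_gcd(a, b):
    """Return (g, x, y) with a*x + b*y == g == gcd(a, b), i.e. Bezout's identity."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

# toy check: b = a^(r/2) - 1 = 48 and N = 15
g, x, y = extended_gcd(48, 15)
assert 48 * x + 15 * y == g
print(g)   # -> 3, a proper factor of 15
```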

+",23,,23,,4/17/2018 10:27,4/17/2018 10:27,,,,0,,,,CC BY-SA 3.0 +1814,2,,1796,4/17/2018 9:25,,5,,"

The answer is $N = 200\,099$.

+ +

Shor's algorithm is not the only way to factorize integers. In fact, it is also possible to factorize integers with an optimization approach. This approach even works for integers composed of more than two prime factors.

+ +

See this paper from D-Wave, Prime factorization using quantum annealing and +computational algebraic geometry, in which they explain their approach and show the results of factoring multiple composite numbers, among which $N=200\,099$.

+",2005,,1847,,4/17/2018 17:01,4/17/2018 17:01,,,,3,,,,CC BY-SA 3.0 +1815,1,1821,,4/17/2018 9:27,,13,1721,"

Let us assume that we have quantum and classical computers such that, experimentally, each elementary logical operation of mathematical factorization is equally time-costing in classical and in quantum factorization: +Which is the lowest integer value for which the quantum procedure is faster than the classical one?

+",1955,,26,,12/13/2018 19:46,12/13/2018 19:46,What is the minimum integer value to make quantum factorization to be worthwhile?,,2,5,,,,CC BY-SA 3.0 +1816,2,,1807,4/17/2018 12:08,,8,,"

A standard reference for linear optical quantum computing is Kok et al. 2009 (quant-ph/0512071).

+ +

If one qubit is encoded in the polarization degree of freedom of a single photon, and the second qubit in the path degree of freedom of the same photon, then a CNOT gate is trivially implemented by a polarizing beamsplitter. +This is a kind of beamsplitter that only changes the path of the photon if its polarization is in some polarization state (say, $|V\rangle$), and leaves the photon in its path otherwise. +This is therefore effectively a CNOT gate where the control qubit is the polarization and the target qubit is the path.

+ +

Of course, you cannot use the same idea to implement a gate between more than two qubits. +Generally speaking, as long as you are working on degrees of freedom of a single photon (position, time/frequency, polarization, orbital angular momentum), it is still ""easily"" doable to implement transformations between them, +but this is a limited approach because it does not scale: only so much information can be crammed into a single photon.

+ +

A very different story is the use of polarization qubits of many different photons. +The main problem with this is that photons do not naturally interact with each other, so that two-qubit gates between such qubits are nontrivial. +Indeed, it is easy to show that with linear optics alone it is impossible to implement arbitrary two-qubit gates in a deterministic way. +For example, consider the case where one has two single photons, each one in a different spatial mode, and both in the initial polarization state $|H\rangle$. +Using the standard second quantization notation, the set of transformations that can be implemented between these two photons within linear optics is given by +$$a_H^\dagger b_H^\dagger \to\left(\sum_{k=H,V}\alpha_k c_k^\dagger +\beta_k d_k^\dagger\right)\left(\sum_{k=H,V}\gamma_k c_k^\dagger +\delta_k d_k^\dagger\right),$$ +where $\alpha_k,\beta_k,\gamma_k,\delta_k$ are parameters characterising the linear transformation that is being implemented, $a_H^\dagger,b_H^\dagger$ are the creation operators of the input photons in the spatial modes $a$ and $b$ with polarization $H$, and $c$ and $d$ denote the two output modes of the photons. +It can be seen that, for example, no set of values of $\alpha_k,\beta_k,\gamma_k,\delta_k$ can implement the transformation +$$a_H^\dagger b_H^\dagger\to c_H^\dagger d_V^\dagger + c_V^\dagger d_H^\dagger,$$ +meaning that it is not possible, within linear optics, to deterministically transform $|00\rangle$ into the Bell state $|01\rangle+|10\rangle$.

+ +

What the above tells us is that linear optics quantum computing with single photons requires some kind of nonlinearity. +One, therefore, needs to either use nonlinear elements such as Kerr media, or exploit the nonlinearity induced by the measurement process. Unfortunately, it is very hard to find materials implementing strong enough Kerr interactions (I don't think there is, to date, any viable known way to do this, but I may stand corrected). +On the other hand, linear optical quantum computation using measurements is possible via the Knill, Laflamme, and Milburn (KLM) protocol. +This protocol exploits photon bunching, gate teleportation, and projective measurements to obtain effective interactions between different polarization qubits. +I will not go into the details of how this works here, as this may be worth a question of its own, but the circuit to implement a CZ gate using the KLM protocol can be found in Fig. 10 of Kok et al. 2009.

+",55,,55,,4/17/2018 14:48,4/17/2018 14:48,,,,1,,,,CC BY-SA 3.0 +1817,1,1820,,4/17/2018 13:42,,11,1418,"

When it comes to error correction, we take our stabilizers to be members of the Pauli group. Why is the Pauli group used for this and not, say, the group of all unitary matrices?

+",2015,,26,,05-07-2018 13:19,05-07-2018 13:19,Why is the Pauli group used for stabilizers?,,2,0,,,,CC BY-SA 3.0 +1818,2,,1803,4/17/2018 14:32,,8,,"

tl;dr- Moore's law won't necessarily apply to the quantum computing industry. A deciding factor may be if the manufacturing processes can be iteratively improved to exponentially increase something analogous to transistor count or roughly proportional to performance.

+ +

Background: Moore's law and why it worked

+ +

It's important to note that Moore's law was about the numbers of transistors in high-density integrated circuits, not the performance or speed of electronics despite common approximate restatements of Moore's law.

+ +
+

Moore's law is the observation that the number of transistors in a dense integrated circuit doubles about every two years.

+ +

""Moore's law"", Wikipedia

+
+ +

Underlying Moore's law was the simple fact that, for a given integrated circuit size, the number of transistors we could cram into it was roughly proportional to the volume of an individual transistor,$$ +n_{\text{transistors}}~{\approx}~\frac{V_{\text{integrated circuit}}}{V_{\text{transistor}}}. +$$So, Moore's law was sorta like:

+ +
+

The volume of a transistor halved about every two years.

+
+ +

Then the question becomes, why were transistors able to shrink so rapidly?

+ +

This was largely because transistors are basically made of microscopically fabricated wires in an integrated circuit, and as manufacturing technology progressed, we were able to make smaller-and-smaller wires:

+ +


+ +

The process of making crazy-small wires in an integrated circuit took a lot of research know-how, so folks in industry basically set out to iteratively improve their fabrication processes at such a rate to maintain Moore's law.

+ +

However, Moore's law is now basically over. Our fabrication processes are nearing the atomic scale such that the physics of the situation are changing, so we can't just keep shrinking further.

+ +

Can Moore's law work for quantum components?

+ +

As noted above, Moore's law is basically ending now. Computers will likely pick up speed due to other advances, but we aren't really planning to make sub-atomic transistors at this time. So despite industry's strong desire to maintain it, it seems unlikely.

+ +

If we assume similar behavior in a future quantum-computing industry, then we might assume that something like Moore's law might arise if industry finds itself in a similar position, where it can iteratively improve the components' manufacturing process to exponentially increase their count (or some similar metric).

+ +

At this time, it's unclear what basic industrial metric quantum computer manufacturers might iteratively improve over the course of decades to recreate a trend like Moore's law, largely because it's unclear what sort of quantum computing architectural technologies might find widespread deployment like modern integrated circuits have.

+",15,,,,,4/17/2018 14:32,,,,0,,,,CC BY-SA 3.0 +1819,2,,1817,4/17/2018 15:58,,5,,"

Any operator from the Pauli group has two eigenspaces of equal size. So we know that by adding a stabilizer generator from this group, we reduce the size of the stabilizer space by half. This means that the stabilizer space fits one fewer logical qubit. This makes it easy to know when we have enough stabilizers: to store $k$ logical qubits in $n$ physical qubits, we just need $n-k$ independent stabilizer generators.

+ +
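A minimal sketch of this halving, using the two $Z$-type generators of the 3-qubit repetition code (my choice of example; they are diagonal, so their eigenvalues can be read directly off computational-basis bitstrings):

```python
n = 3
generators = [(0, 1), (1, 2)]   # Z_i Z_j stabilizer generators of the 3-qubit repetition code

def eigenvalue(bits, pair):
    """Eigenvalue of Z_i Z_j on a computational basis state: (-1)^(b_i XOR b_j)."""
    i, j = pair
    return (-1) ** (bits[i] ^ bits[j])

# count computational-basis states lying in the mutual +1 eigenspace
dim = sum(
    all(eigenvalue(bits, g) == 1 for g in generators)
    for bits in ((b >> 2 & 1, b >> 1 & 1, b & 1) for b in range(2 ** n))
)
print(dim)   # -> 2 = 2**(3 - 2): each independent generator halves the space
```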

Also, the Pauli group is made up of Hermitian operators. Since the point of a stabilizer is to be measured, it is useful for them to be Hermitian, since they can be directly interpreted as observables.

+ +

Furthermore, the operators that map between stabilizer states (mutual eigenstates of stabilizer operators) will themselves be elements of the Pauli group. This is related to the point raised in your comment: Pauli group elements form a complete basis to describe multi-qubit operations. So once we measure the stabilizers, and the noise is effectively reduced to a mapping between stabilizer states, it is pretty much as if the noise just applied a bunch of simple Paulis. Correction can then be done by a simple Pauli frame rotation. This doesn't even require us to directly apply any gate to the code. We can just say ""It looks like a $\sigma_x$ hit this qubit, so from now on I'll interpret its $|0\rangle$ as $|1\rangle$, and vice-versa"".

+ +

Paulis aren't required, but they have nice properties. So that's why they are the focus.

+",409,,409,,4/17/2018 18:12,4/17/2018 18:12,,,,0,,,,CC BY-SA 3.0 +1820,2,,1817,4/17/2018 16:13,,10,,"

There are some fairly simple reasons — beyond the merely historical — to use Pauli matrices instead of arbitrary unitary matrices. These reasons may not uniquely single out the Pauli group of operators, but they do significantly limit the scope of what is productive to consider.

+ +
    +
  1. A stabiliser operator $S$, first and foremost, must have a +1 eigenvalue; otherwise there isn't any state $\lvert \psi \rangle$ which it 'stabilises', in the sense that $S \lvert \psi \rangle = \lvert \psi \rangle$. So we must restrict ourselves to sets of operators which have +1 eigenvalues.

  2. +
  3. Secondly, we must consider how the stabiliser operators may be used operationally. If we know that there is a symmetry of the system that should hold, but we don't have any way to determine whether or not that symmetry holds in practise (that is, whether some error has occurred), then we're out of luck. What we would like to be able to do then is to be able to perform phase estimation to test whether or not the eigenvalue of a given state $\lvert \psi \rangle$ with respect to some allegedly-stabilising operator $S$ is in fact +1, to determine whether $\lvert \psi \rangle$ deviates from the properties that hold of it.

    + +

    This motivates considering operators $S$ which, yes, are unitary, but +also where the eigenvalues differ significantly from one another, in +order for phase estimation to easily distinguish a state with significant error from one with insignificant error. This motivates considering a set of $n$-qubit operators which have at most $1/\mathrm{poly}(n)$ eigenvalues.

  4. +
  5. Part of the whole problem is that we would like to detect and correct for operations which may be involved in complicated quantum transformations. If the phase estimation involved in eigenvalue estimation of a stabilising operator $S$ is itself complicated, we're not helping the situation.

    + +

    What would be good is for each of the stabilising operators $S$ we consider to have very simple structure: for instance, we may be especially interested in the case that they are tensor products of 1- or 2-qubit operations. It seems sensible to approach the subject by considering each operator $S$ to be a tensor product of single qubit operations.

  6. +
  7. In order to consider tensor product operations on $k \leqslant n$ qubits, which have at most $1/\mathrm{poly}(k)$ distinct eigenvalues, including +1 — and without imposing awkward constraints on which single-qubit operators act on which qubits — we are more or less forced to consider single-qubit unitary operators whose eigenvalues range within some finite set $E \subseteq \mathbb C$ (independent of $k$ or $n$) which includes +1.

    + +

    We may reduce this to the case $E = \{+1,-1\}$ by observing that estimating the eigenvalues of a tensor product operator $S = S_1 \otimes S_2 \otimes \cdots \otimes S_k$, where each $S_j$ has one +1 eigenvalue and one eigenvalue which is not +1, is the same as doing an artificially shortened version of eigenvalue estimation for an operator $P_j$ which has eigenvalues $\pm 1$. Furthermore, in order to consider several operators $S$ which manage to have a useful common +1 eigenspace, it helps for each operator S to have as large a +1 eigenspace as possible; then it helps for it to be as easy as possible for the eigenvalues of each $S_j$ to multiply to +1. This again motivates the case for the eigenvalues of $S_j$ to be $\pm 1$.

  8. +
  9. Nothing forces us to consider the group of operators generated by such a set, but the products of our stabiliser operators will also be stabiliser operators, and we have enough constraints on our operators that we can at least reasonably contemplate the group generated by our stabiliser operators.

    + +

    We have operators $S = S_1 \otimes \cdots \otimes S_n$ and $S' = + S'_1 \otimes \cdots \otimes S'_n$ whose tensor factors are all +either $\mathbb 1$ or non-trivial reflections on single-qubit +states; their products $S_j S'_j$ will be rotations by an angle +$\theta$ determined by the angles between the eigenbases of $S_j$ +and $S'_j$. If we want to obtain a nice clean theory, we might want +these products of stabiliser operators to themselves be easy to +measure: this motivates having $S_j S'_j$ to be proportional to an operator with eigenvalues $\pm 1$ (actually $S_j S'_j$ will have eigenvalues $\pm i$), in which case $S_j$ and $S'_j$ anticommute.

  10. +
+ +
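The eigenvalue claims above (operators with eigenvalues $\pm 1$, and products of anticommuting operators picking up eigenvalues $\pm i$, as in step 5) are easy to verify numerically; a small sketch in plain Python:

```python
import cmath

# single-qubit Paulis as 2x2 matrices
X = [[0, 1], [1, 0]]
Z = [[1, 0], [0, -1]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def eigenvalues(M):
    # roots of the characteristic polynomial l^2 - tr(M) l + det(M) of a 2x2 matrix
    tr = M[0][0] + M[1][1]
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    disc = cmath.sqrt(tr * tr - 4 * det)
    return {(tr + disc) / 2, (tr - disc) / 2}

assert eigenvalues(X) == {1, -1} and eigenvalues(Z) == {1, -1}
# X and Z anticommute, so their product has eigenvalues +/- i
assert eigenvalues(matmul(X, Z)) == {1j, -1j}
```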

Thus, the above combination of theoretical and practical constraints suffice to yield something which is isomorphic to the Pauli group. Furthermore, as the Pauli operators have a theory which is fairly easily understood, it has led to a fruitful theory of quantum error correction.

+ +

A fair question would be which of the above moves were more arbitrary than the others.

+ +
    +
  • It would not astonish me if there was a productive theory of error correction in which the constraints were tensor product operators, whose tensor factors had eigenvalues $\pm 1$, but where the possible operators did not necessarily anticommute (step 5 above).

  • +
  • More sophisticated (and more difficult) would be a powerful and +useful theory of error correction in which the stabilising operators +which one measures included operators which are not tensor product +operators (step 3 — which would motivate not worrying too +much about having group structure in the group of stabilisers which +you intend to measure).

  • +
+ +

From a purely mathematical perspective, there is nothing obvious to prevent or discourage such a line of investigation — aside, of course from the fact that it is likely to be hard and also likely to be unnecessary — and in this sense, it would be perfectly fine to consider theories of quantum error correction extending well beyond the Pauli group.

+",124,,,,,4/17/2018 16:13,,,,0,,,,CC BY-SA 3.0 +1821,2,,1815,4/17/2018 18:38,,9,,"

The quantum part of Shor's algorithm is, essentially, a single modular exponentiation done under superposition followed by a Fourier transform and then a measurement. The modular exponentiation is by far the most expensive part.

+ +
+

Let us assume that [...] each elementary logical operation of mathematical factorization is equally time-costing in classical and in quantum factorization

+
+ +

If we assume that the modular exponentiation takes exactly as long on a quantum computer as it would on a classical computer, then the transition where the quantum computation becomes better would happen at a very low number. Computing modular exponentiations is very fast, classically, because you can use repeated squaring. I would wildly estimate the crossover to happen before you even get to 30-bit numbers (numbers over a billion).

+ +
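For reference, repeated squaring looks like the following sketch (a standard textbook implementation, equivalent to Python's built-in three-argument `pow`):

```python
def modexp(base, exp, mod):
    """Modular exponentiation by repeated squaring: O(log exp) multiplications."""
    result = 1
    base %= mod
    while exp:
        if exp & 1:                       # current bit of the exponent is set
            result = (result * base) % mod
        base = (base * base) % mod        # square for the next bit
        exp >>= 1
    return result

# even a ~1000-bit modulus is no trouble classically
m = (1 << 1000) - 159
assert modexp(3, m - 1, m) == pow(3, m - 1, m)
```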

But quantum computers are not going to do math nearly as fast as classical computers. For example, on my laptop, I can do a 1000-bit modular exponentiation in python in a fraction of a second. But on foreseeable quantum computers, it would take hours or days. The issue is the massive (massive) difference in the cost of an AND gate.

+ +

On a classical machine, performing an AND is so inconsequential that we don't even really think about it when programming. It's way more likely for you to think in terms of counting 64-bit additions than in terms of counting AND gates, when determining the cost of your algorithm. But on an error corrected quantum computer, performing an AND (usually temporarily, via a Toffoli) tends to be expensive. For example, you can do it by distilling four high-quality $|T\rangle$ states. I won't go into the numbers... suffice it to say that on early error corrected machines you would be very happy to get a million T states per second.

+ +

So suppose we get a million T states per second, and we want to convert this into a rate of 64-bit additions to compare with the classical machine. A 64-bit addition requires 64 AND gates, each requiring 4 T gates. 1 million divided by 4 divided by 64 gives... about 4KHz. For contrast a classical machine will easily do a billion additions per second. Quantum adders are a million times slower than classical adders (again, wildly estimating, and keep in mind this number should improve over time).

+ +
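The arithmetic in the paragraph above, spelled out (all three inputs are the answer's own rough estimates, not measured values):

```python
# back-of-envelope rate: T states/second -> 64-bit additions/second
t_states_per_second = 1_000_000   # optimistic early fault-tolerant rate
t_states_per_and = 4              # one Toffoli/AND distilled from four T states
and_gates_per_add = 64            # AND gates in a 64-bit addition

adds_per_second = t_states_per_second / (t_states_per_and * and_gates_per_add)
print(adds_per_second)   # -> 3906.25, i.e. roughly 4 kHz
```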

Another factor worth considering is the differing costs of quantum and classical computers. If you have a hundred million dollars, and you're choosing between one quantum computer and a thousand classical computers, that factor of 1000 has to be accounted for. In this sense, we could say quantum adders are a billion times less efficient than classical adders (in FLOPS/$).

+ +

A constant factor penalty of a billion is normally an immediate deal breaker. And for quantum algorithms with a mere quadratic advantage (like Grover), I contend that it is in fact a deal breaker. But Shor's algorithm gets exponentially better relative to the classical strategy as you increase the number of bits in the number to factor. How many bits before we eat away that ""measly"" 10^9 constant with our exponential growth in advantage?

+ +

Consider that RSA-640 was factored in 2005 using ~33 CPU years. A quantum computer should be able to do that number in under a day. If you have a thousand classical computers working on the problem, they'd finish in about two weeks. So it seems like quantum is winning at 640 bits, but only by an order of magnitude or three. So maybe the cutoff would occur somewhere around 500 bits?

+ +

Anyways, I know this is not a hard and fast answer. But hopefully I've conveyed some sense of the quantities I would think about when comparing classical and quantum. Really no one knows the constant factors involved yet, so I'd be surprised if anyone could give you a proper estimate better than ""somewhere in the hundreds of bits"".

+",119,,119,,4/17/2018 20:49,4/17/2018 20:49,,,,3,,,,CC BY-SA 3.0 +1822,1,1828,,4/17/2018 18:44,,14,2502,"

I keep reading (e.g. Nielsen and Chuang, 2010; pg. 456 and 465) the following three phases; ""code space"", ""code word"" and ""stabilizer code"" - but am having a difficult time finding definitions of them and more importantly how they differ from one another.

+ +

My question is therefore; how are these three terms defined and how are they related?

+",2015,,26,,01-07-2019 15:30,01-07-2019 15:30,"What is the difference between ""code space"", ""code word"" and ""stabilizer code""?",,3,0,,,,CC BY-SA 4.0 +1823,2,,1822,4/17/2018 19:07,,5,,"

A code word (for a quantum code) is a quantum state that is typically associated with a state in the logical basis. So, you’ll have some state $|\psi_0\rangle$ that corresponds to the 0 state of the qubit to be encoded (you don’t have to use qubits, but you probably are), and you’ll have another that’s $|\psi_1\rangle$ that corresponds to the 1 state of the qubit to be encoded.

+ +

The code space is the space spanned by the code words, i.e. the entire space $\alpha|\psi_0\rangle+\beta|\psi_1\rangle$ for all possible $\alpha$ and $\beta$ (normalised).

+ +

A stabilizer code is one possible formalism for telling you how to work out the code words and therefore the code space. For an [[n,k,d]] code, you are given n-k stabilizer operators $S$ ($S^2=\mathbb{I}$) that mutually commute, and act on n qubits. Any state $|\psi\rangle$ in the code space satisfies $S|\psi\rangle=|\psi\rangle$. You will further have operators $Z_m$ and $X_m$ for $m=1,\ldots k$ that all commute with the stabilizers $S$ but pairwise anticommute, $\{Z_m,X_m\}=0$, for matching subscripts. These define the Logical Pauli operators for the code, and the code words are therefore the states that satisfy $Z_m|\psi\rangle=\pm|\psi\rangle$.

+",1837,,1837,,4/17/2018 19:15,4/17/2018 19:15,,,,0,,,,CC BY-SA 3.0 +1824,2,,1822,4/17/2018 19:10,,5,,"

In a quantum error correcting code, you store a number of logical qubits, $k$, in a state of many physical qubits, $n$.

+ +

A code word is a state of the physical qubits that is associated with a specific logical state. So, for example, however you store the $|0\rangle$ state for one of your logical qubits is a code word.

+ +

The code space is the Hilbert space spanned by all possible code words. For a stabilizer code, this term is synonymous with the stabilizer space. Any state within this code space is a code word

+ +

A stabilizer code is a quantum error correcting code described by the stabilizer formalism. The stabilizer space is defined as the mutual $+1$ eigenspace of $n-k$ mutually commuting and independent tensor products of Pauli operators.

+",409,,409,,4/18/2018 5:36,4/18/2018 5:36,,,,0,,,,CC BY-SA 3.0 +1825,1,1831,,4/17/2018 19:39,,5,494,"

I'm currently busy learning about the basics of quantum information theory. Does anyone know how the measurement described in the wiki link LOCC is a measurement on the product space $\mathbb{C}^2 \otimes \mathbb{C}^n$ as stated?

+",2032,,55,,10/25/2021 16:16,10/25/2021 16:16,How is the measurement described in the LOCC wikipedia a measurement on the product state $\mathbb C^2\otimes\mathbb C^n$?,,1,1,,,,CC BY-SA 4.0 +1826,1,4060,,4/16/2018 11:17,,14,759,"

In Cabello's paper Quantum key distribution without alternative measurements, the author said ""the number of useful random bits shared by Alice and Bob by transmitted qubit, before checking for eavesdropping, is 0.5 bits by transmitted qubit, both in BB84 and B92 (and 0.25 in E91)"" (see here, page 2).

+ +

In the E91 protocol, Alice and Bob each choose independently and randomly from three measurement bases, so there are 9 combinations and only 2 of them yield correlated bits. Does that mean the efficiency of E91 is $\frac 2 9$? Why is the number of useful random bits 0.25 per transmitted qubit in E91?

+",2047,Shireen,26,,05-08-2019 10:21,05-08-2019 10:21,Why is the efficiency of Ekert 91 Protocol 25%?,,2,3,,,,CC BY-SA 4.0 +1827,2,,1792,4/17/2018 20:40,,3,,"

Superpositions in Fock space--and rotations in Fock space--are absolutely ubiquitous.

+ +
    +
  • It is important to note that all classical states of the electromagnetic field are superpositions of many different photon-number eigenstates.

  • +
  • The entire discipline of quantum field theory (approximately) concerns which rotations within certain physically-motivated Fock spaces are allowed, and with what amplitudes they actually occur.

  • +
  • The experimental paradigms of circuit and cavity QED--which validate exquisitely the predictions of that now 70-year-old theory--explicitly deals with operations on photon number states (in particular ""the presence or absence of a single photon"" as DaftWullie put it), and are cornerstones of atomic, molecular and optical physics. Circuit QED is the essential theory underpinning superconducting flux qubits, which devices have been shown to display coherent quantum effects beyond any reasonable or unreasonable doubt. Serge Haroche was awarded the 2012 Nobel Prize in physics for his work on cavity QED, in which he went on happily creating, controlling and measuring superpositions of small numbers of microwave photons. Lots of experimentalists do this every day.

  • +
  • It has long been suggested that a single harmonic mode be used to represent one or more qubits in a practical quantum computer, in which logical states are encoded as superpositions of states of different occupation number. For a few ideas on how to do this as well a few reasons why it might not be the best idea, see Nielsen and Chuang, section 7.2.

  • +
+ +

There's no shortage of literature on ways to perform these kinds of operations. In fact, a nontrivial fraction of modern physics is concerned with exactly that. I can't imagine where or how you would get the opposite idea.

+",2034,,2034,,4/17/2018 20:46,4/17/2018 20:46,,,,1,,,,CC BY-SA 3.0 +1828,2,,1822,4/17/2018 23:12,,11,,"

Code spaces and code-words

+ +

A quantum error correcting code is often identified with the code-space (Nielsen & Chuang certainly seem to do so). The code space $\mathcal C$ of e.g. an $n$-qubit quantum error correction code is a vector subspace $\mathcal C \subseteq \mathcal H_2^{\otimes n}$.

+ +

A code word (terminology which was borrowed from the classical theory of error correction) is a state $\lvert \psi \rangle \in \mathcal C$ for some code-space: that is, it is a state which encodes some data.

+ +

Quantum error correction codes

+ +

In practice, we demand some non-trivial properties to hold of a quantum error correction code, such as:

+ +
    +
  • That $\mathop{\mathrm{dim}} \mathcal C \geqslant 2$, so that there is a non-zero amount of information being encoded;
  • +
  • That there are a set $\mathcal E = \{ E_1, E_2, \ldots \}$ of at least two operators including the operator $E_1 = \mathbb 1$, such +that — if $P$ is the orthogonal projector onto $\mathcal C$ — we have $$P E_j E_k P = \alpha_{j,k} P$$ +for some scalars $\alpha_{j,k}$ (known as the Knill–Laflamme conditions).
  • +
+ +

This determines some set of error operators against which you can in principle protect a state $\lvert \psi \rangle \in \mathcal C$, in that if the Knill–Laflamme conditions hold of a set of operators $\mathcal E$, and some operator $E \in \mathcal E$ acts on your state, it is possible in principle to detect the fact that $E$ has occurred (as opposed to some other operator in $\mathcal E$) and undo the error, without disrupting the data stored in the original state $\lvert \psi \rangle$.

+ +
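As a hedged illustration, the Knill–Laflamme conditions can be checked by brute force for the 3-qubit bit-flip code (my choice of example) with $\mathcal E$ containing the identity and the three single-qubit $X$ errors; since $X$ errors just permute computational basis states, everything reduces to integer bit manipulation:

```python
from itertools import product

codewords = [0b000, 0b111]            # |000> and |111> span the bit-flip code space

def apply_error(basis_state, flips):
    """X on each qubit index in `flips` permutes computational basis states."""
    for q in flips:
        basis_state ^= 1 << q
    return basis_state

errors = [(), (0,), (1,), (2,)]       # E_1 = identity plus the three single-qubit X errors

for Ej, Ek in product(errors, repeat=2):
    # matrix of P Ej Ek P restricted to the code words
    M = [[int(apply_error(apply_error(c2, Ek), Ej) == c1) for c2 in codewords]
         for c1 in codewords]
    alpha = M[0][0]
    # Knill-Laflamme: P Ej Ek P = alpha_{j,k} P (here alpha is 1 if Ej == Ek, else 0)
    assert M == [[alpha, 0], [0, alpha]]
print("Knill-Laflamme conditions hold for the 3-qubit bit-flip code")
```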

A quantum error correction code is a code-space $\mathcal C$, together with a set of error operators $\mathcal E$ which satisfy the Knill–Laflamme conditions — that is, a quantum error correcting code must specify which errors it is meant to protect against.

+ +

Why it is common to identify quantum error correcting codes with their code-spaces

+ +

You cannot determine a unique set $\mathcal E$ of operators which satisfy the Knill–Laflamme conditions from the code-space $\mathcal C$ alone. However, it is most common to consider which low-weight operators (ones which act only on a small number of qubits) can be simultaneously corrected by a code, and to an extent this can be derived from the code-space alone. The code distance of a code space $\mathcal C$ is the smallest number of qubits that you have to act on, to transform one ""code-word"" $\lvert \psi \rangle \in \mathcal C$ into a distinct codeword $\lvert \psi' \rangle \in \mathcal C$. If we then describe a code-space as being a $[\![n,k,d]\!]$ code, this then says that $\mathcal C \subseteq \mathcal H_2^{\otimes n}$ has dimension $2^k$, and that the set $\mathcal E$ that we consider is the set of all Pauli operators with weight at most $\lfloor (d{-}1)/2 \rfloor$.

+ +

In some cases, describing a code as an $[\![n,k,d]\!]$ code is enough. For instance, the 5-qubit code is a $[\![5,1,3]\!]$ code, and it is possible to show that five qubits cannot encode a single qubit in such a way that any other errors can be corrected in addition to all of the single-qubit errors. +However, the same is not true of the Steane $[\![7,1,3]\!]$ code, which can protect against any single-qubit Pauli error as well as some (but not all) two-qubit Pauli errors. Which two-qubit Pauli errors you should protect against depends on what your error model is; and if your noise is symmetric and independently distributed, it won't matter very much what you choose (so that you will likely make the conventional choice of any single $X$ error together with any single $Z$ error). It is however a choice, and one which will guide how you protect your data against noise.

+ +

Stabiliser codes

+ +

A stabiliser code is a quantum error correction code determined by a set $\mathscr S$ of stabiliser generators, which are Pauli operators which commute with one another, and which define a code-space $\mathcal C$ by the intersection of their +1-eigenspaces. (It is often useful to consider the stabiliser group $\mathcal G$ formed by products of $P \in \mathscr S$.)

+ +

Almost all quantum error correction codes that people consider in practise are stabiliser codes. This is one reason why you may have problems distinguishing the two terms. However, we do not require that a quantum error correction code be a stabiliser code — just as in principle we do not require a classical error correction code to be a linear code. Stabiliser codes just happen to be an extremely successful way of describing quantum error correcting codes, just as linear error correcting codes are an extremely successful way of describing classical error correcting codes. And indeed, stabiliser codes can be regarded as a natural generalisation of the theory of classical linear codes to quantum error correction.

+ +

As people are often interested just in low-weight operators which are less than half the code distance, the set of stabilisers is often all people say about an stabiliser correction code. However, to specify the set of errors $\mathcal E$ against which the code can protect, it is necessary also to specify a relationship $\sigma$ between Pauli product operators $E$ and subsets $S \subseteq \mathscr S$, such that

+ +
    +
  • $E$ anticommutes with $P \in \mathscr S$ if and only if $P \in S$ for $\sigma(E,S)$;
  • +
  • If $E, E'$ both satisfy $\sigma(E,S)$ and $\sigma(E',S)$, then $E E' \in \mathcal G = \langle \mathscr S \rangle$.
  • +
+ +

This defines a set $$\mathcal E = \bigl\{ E \mathbin{\big\vert} + \exists S \subseteq \mathscr S: \sigma(E,S) \bigr\}$$ of errors + against which the code can protect. +The subsets $S \subseteq \mathscr S$ are called error syndromes, and the relation which I've called $\sigma$ here (which you don't usually see given an explicit name) associates syndromes to one or more errors which 'cause' that syndrome, and whose effects on the code are equivalent.

+ +

'Syndromes' represent information that can actually be obtained about an error by 'coherent measurement' — that is, by measuring operators $P \in \mathscr S$ as observables (a process which is usually simulated by eigenvalue estimation). An error $E$ 'causes' a syndrome $S \subseteq \mathscr S$ if, for any code-word $\lvert \psi \rangle \in \mathcal C$, the state $E \lvert \psi \rangle$ is in the $-1$ eigenspace of all operators $P \in S$, and in the $+1$-eigenspace of all other operators in $\mathscr S$. (This property is directly related to the anticommutation of $E$ with all of the elements of $S \subseteq \mathscr S$, and only those elements.)
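To make the anticommutation picture concrete, here is a small Python sketch of my own (using the 3-qubit bit-flip code with stabiliser generators $ZZI$ and $IZZ$, rather than a code discussed above). Paulis are written in the binary symplectic representation $(x \mid z)$, and a syndrome bit is 1 exactly when the error anticommutes with the corresponding generator:

```python
import numpy as np

# Sketch of syndrome extraction for the 3-qubit bit-flip code, using the
# binary symplectic representation of Pauli products: P = (x | z), where
# x marks X-components and z marks Z-components on each qubit.  Two Pauli
# products commute iff the symplectic inner product x1.z2 + z1.x2 = 0 (mod 2).

def anticommutes(p, q):
    x1, z1 = p
    x2, z2 = q
    return (np.dot(x1, z2) + np.dot(z1, x2)) % 2 == 1

def syndrome(error, stabilisers):
    """One bit per stabiliser generator: 1 iff the error anticommutes with it."""
    return tuple(int(anticommutes(error, s)) for s in stabilisers)

# Stabiliser generators Z Z I and I Z Z (x-part zero, z-part marks the Zs).
ZZI = (np.array([0, 0, 0]), np.array([1, 1, 0]))
IZZ = (np.array([0, 0, 0]), np.array([0, 1, 1]))

# Single X errors on qubits 0, 1, 2 produce three distinct syndromes:
X0 = (np.array([1, 0, 0]), np.array([0, 0, 0]))
X1 = (np.array([0, 1, 0]), np.array([0, 0, 0]))
X2 = (np.array([0, 0, 1]), np.array([0, 0, 0]))

assert syndrome(X0, [ZZI, IZZ]) == (1, 0)
assert syndrome(X1, [ZZI, IZZ]) == (1, 1)
assert syndrome(X2, [ZZI, IZZ]) == (0, 1)
```

Each single-qubit $X$ error is identified by which generators it anticommutes with, which is exactly the information the coherent measurements above extract.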

+",124,,124,,06-02-2018 06:21,06-02-2018 06:21,,,,8,,,,CC BY-SA 4.0 +1829,2,,1716,4/18/2018 0:37,,4,,"

It may or may not be exactly what you're looking for, but there is research done on coherent charge oscillations in a silicon field-effect transistor (paper by Gonzalez-Zalba et al. at Hitachi labs).

+ +

In the above paper, they coherently control a double quantum dot in a silicon-on-insulator nanowire CMOS transistor, with a $T_2$ coherence time of $\approx 100$ ps. While this is a small coherence time, as already existing CMOS technology was used, putting this (or something similar) on a classical chip would be relatively easy. Although it's probably not going to perform any advanced quantum computations in the near future, it arguably demonstrates some potential for a limited classical/quantum hybrid chip. Regardless, this is a clear example of a field-effect transistor having the potential to be used for quantum computation.

+ +

The same labs also created a Spin Hall effect transistor, where they use spin-orbit coupling to manipulate spin to implement an AND gate. While spin-orbit coupling is a quantum mechanical process (that doesn't particularly involve quantum information or computing), manipulating spin is something that's required in various different types of quantum computer and so, I wouldn't be all that surprised if talking about such processes in terms of quantum computing/information were possible to some extent.

+ +

Overall, while you may or may not count the above as definitive evidence of what you want, they are at least small steps in that direction.

+",23,,,,,4/18/2018 0:37,,,,0,,,,CC BY-SA 3.0 +1830,2,,105,4/18/2018 6:32,,3,,"

To understand this question (and its possible answers) properly, we need to discuss a couple of concepts related to temperature and its relation to quantum states. Since I think the question makes more sense in the solid state, this answer will assume that's what we're talking about.

+

For starters, I find it useful to think about Boltzmann's distribution: a probability distribution that gives the probability $p_i$ that a system will be in a certain state $i$ as a function of that state’s energy ${\varepsilon}_i$ and the temperature $T$ of the system:

+

$p_i={\frac{e^{- {\varepsilon}_i / k T}}{\sum_{j=1}^{M}{e^{- {\varepsilon}_j / k T}}}}$

+

where $k$ is Boltzmann's constant.

+

In a system that is in equilibrium, as defined by statistical mechanics, the population of the different quantum states is governed by this equation (the system will be in a thermal state). If we think of a single quantum system rather than a collection or "ensemble", this distribution of populations would correspond to a series of weights in a mixed state. +In any case, these are not the conditions one needs for quantum computing, where at any given time we want to have good control of the wavefunction. However, note that these probabilities have an exponential dependence on ${\varepsilon}_i$. This will be important further down.

+

Additionally, we need to consider phonons, the collective excitations in periodic, elastic arrangements of atoms or molecules in condensed matter. These are often the carriers of energy to and from our qubits into the part of the solid where we do not have exquisite quantum control and which is therefore thermalized: the so-called thermal bath.

+
+

Why must quantum computers operate under such extreme temperature conditions?

+
+

We can never fully control the quantum state of a solid chunk of matter. At the same time, we do need full control over the quantum state of our quantum computer, meaning the subset of quantum states where our information resides. These will live in pure states (including quantum superpositions), surrounded by a disordered (thermalized) environment.

+

Think about the Boltzmann distribution described above, and about the exponential term. In practice, its equation means that we can assume $p_i=0$ when the relation between temperature and the energy of the states we're interested in (which often means the states corresponding to our qubits) is such that ${\varepsilon}_i \gg kT$.
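To put numbers on this condition (an illustrative sketch; the 5 GHz transition frequency and the two temperatures are my assumptions, chosen as typical of superconducting qubits and of dilution-refrigerator versus room-temperature operation, and are not from the answer itself):

```python
import math

# Thermal excited-state population of a two-level system with splitting eps:
# p1 = exp(-eps/kT) / (1 + exp(-eps/kT)).  Assumed example: a ~5 GHz
# transition at 20 mK (dilution fridge) and at 300 K (room temperature).

k = 1.380649e-23    # Boltzmann constant, J/K
h = 6.62607015e-34  # Planck constant, J s

def excited_population(freq_hz, temp_k):
    eps = h * freq_hz
    w = math.exp(-eps / (k * temp_k))
    return w / (1.0 + w)

p_cold = excited_population(5e9, 0.020)  # eps >> kT regime
p_hot  = excited_population(5e9, 300.0)  # eps << kT regime

assert p_cold < 1e-4   # excited state essentially unpopulated
assert p_hot > 0.49    # both levels almost equally populated
```

At 20 mK the thermal excited-state population is around $10^{-5}$ or below, while at room temperature the two levels are almost equally populated — which is exactly why the $\varepsilon_i \gg kT$ regime matters for qubits.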

+

Kinetics are often hard to model, but you know that inevitably your system will tend to thermalize. So, you need to keep your quantum computer, for as long as possible, in a state such that the only excitations that occur are those corresponding to the quantum states and quantum operations that are part of your computation. If the temperature of the solid where the quantum system is residing is low, you only need to worry about your qubits uncontrollably relaxing to a lower-energy state (which is bad enough). If the temperature is high, you also need to worry about your qubits being uncontrollably excited to higher-energy states. Inevitably, this also includes states that are outside your computational basis, meaning states that, for your qubit state, are neither $|0\rangle$ nor $|1\rangle$, nor any complex combination thereof: harder-to-correct errors.

+

If you now think about the phonons, recall that they are excitations, which cost energy, and thus are more abundant at high temperature. As temperature rises, phonons become both more abundant and more energetic, sometimes allowing interaction with different kinds of excitations (accelerating the kinetics toward thermalization): eventually, including those that are detrimental to our quantum computer.

+
+

Is the need for extremely low temperatures the same for all quantum computers, or does it vary by architecture?

+
+

It does vary, and dramatically so. Within the solid state, it depends on the energies of the states that constitute our qubits. Outside the solid state, as pointed out above and in a follow-up question (Why do optical quantum computers not have to be kept near absolute zero while superconducting quantum computers do?), it's another story entirely.

+
+

What happens if they overheat?

+
+

See above. In a nutshell: you lose your quantum information faster.

+",1847,,1859,,7/24/2022 10:36,7/24/2022 10:36,,,,0,,,,CC BY-SA 4.0 +1831,2,,1825,4/18/2018 7:29,,4,,"

I assume the question refers to how LOCC is used to distinguish the two states +$$ +(|00\rangle+|11\rangle)/\sqrt{2}\qquad(|01\rangle+|10\rangle)/\sqrt{2} +$$ +when Alice and Bob each hold one qubit, and are separated by a great distance.

+ +

There are many different measurement operators that could achieve the same task. If Alice just held both qubits herself, she could simply implement a measurement in the Bell basis, described by measurement operators +$$ +M_0=\frac12(|00\rangle+|11\rangle)(\langle00|+\langle11|)\qquad M_1=\frac12(|00\rangle-|11\rangle)(\langle00|-\langle11|)\qquad M_2=\frac12(|01\rangle+|10\rangle)(\langle01|+\langle10|)\qquad M_3=\frac12(|01\rangle-|10\rangle)(\langle01|-\langle10|) +$$ +However, these measurements don't have a tensor product structure, and so cannot be implemented by LOCC. The results $M_0$ and $M_2$ distinguish the two states given.

+ +

As an alternative, Alice and Bob do as described at the original link; both make $Z$ basis measurements. So, their measurement operators are described by +$$ +M_0'=|0\rangle\langle0|\otimes |0\rangle\langle0|\qquad +M_1'=|0\rangle\langle0|\otimes |1\rangle\langle1|\qquad +M_2'=|1\rangle\langle1|\otimes |0\rangle\langle0|\qquad +M_3'=|1\rangle\langle1|\otimes |1\rangle\langle1| +$$ +If they get either answer $M_0'$ or $M_3'$, they had the first state, while if they get $M_1'$ or $M_2'$ they had the second state. Note that while Alice and Bob each act locally, on their qubit (as you can tell from the tensor product structure), they only know which overall result they got by comparing their measurement results, which requires classical communication.
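A small numerical check of this (a NumPy sketch of mine; the tensor-product structure of the $M_k'$ appears directly as Kronecker products):

```python
import numpy as np

# Local Z-basis measurements on the two Bell states: the projectors
# M0'..M3' are Kronecker (tensor) products of single-qubit projectors.
ket0, ket1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
P0, P1 = np.outer(ket0, ket0), np.outer(ket1, ket1)
projectors = [np.kron(P0, P0), np.kron(P0, P1), np.kron(P1, P0), np.kron(P1, P1)]

phi = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)  # (|00>+|11>)/sqrt(2)
psi = (np.kron(ket0, ket1) + np.kron(ket1, ket0)) / np.sqrt(2)  # (|01>+|10>)/sqrt(2)

probs_phi = [float(phi @ M @ phi) for M in projectors]
probs_psi = [float(psi @ M @ psi) for M in projectors]

# (|00>+|11>)/sqrt(2) only ever gives equal outcomes (M0' or M3');
# (|01>+|10>)/sqrt(2) only ever gives unequal ones (M1' or M2').
assert np.allclose(probs_phi, [0.5, 0.0, 0.0, 0.5])
assert np.allclose(probs_psi, [0.0, 0.5, 0.5, 0.0])
```

Note that each party's marginal outcomes alone are 0 and 1 with probability 1/2 for both states; only the correlation between outcomes, revealed by classical communication, distinguishes them.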

+ +

Now a brief comment about the dimension of the Hilbert space. I have specifically talked about two qubits, $\mathbb{C}^2\otimes\mathbb{C}^2$. The first set of measurement operators, $\{M_k\}$, cannot be described under this tensor product structure; they are operators on $\mathbb{C}^4$, while the $\{M_k'\}$ operators can be described under $\mathbb{C}^2\otimes\mathbb{C}^2$. As for why the original source was talking about $\mathbb{C}^2\otimes\mathbb{C}^n$, for that specific example it seemed to be an unnecessary complication. Yes, you can always restrict to a qubit inside a larger Hilbert space, but there's no reason not to just take $n=2$ as far as I can see.

+",1837,,,,,4/18/2018 7:29,,,,5,,,,CC BY-SA 3.0 +1832,1,1834,,4/18/2018 11:22,,10,776,"

The complexity class BQP (bounded-error quantum polynomial time) seems to be defined only considering the time factor. Is this always meaningful? Do algorithms exist where computational time scales polynomially with the input size but other resources such as memory scale exponentially?

+",1931,,23,,4/18/2018 11:56,4/18/2018 11:58,Is BQP only about time? Is this meaningful?,,2,0,,,,CC BY-SA 3.0 +1833,2,,1832,4/18/2018 11:35,,5,,"

Not for memory, at least: every memory access requires at least a constant amount of 'time', so an algorithm running in polynomial time can only ever touch polynomially many memory locations.

+ +

In the term time complexity, 'time' is a bit misleading, as we actually count the number of elementary operations required to perform an algorithm. Under the additional assumption that these operations can be performed in '$O(1)$ time', we can say that our algorithm has a 'time complexity'. But what we actually mean is that we have an 'operation complexity', which we express in time.

+ +

I think it is clearer that counting elementary operations is a fundamental and important measure of the number of resources required by an algorithm, as we can always decide how many resources each elementary operation requires.

+ +

While in the definition of BQP and for quantum algorithms we consider circuit complexity instead of 'operation complexity', circuit complexity can again be defined in terms of operations on Turing machines, so the same reasoning applies.

+",253,,253,,4/18/2018 11:58,4/18/2018 11:58,,,,0,,,,CC BY-SA 3.0 +1834,2,,1832,4/18/2018 11:41,,10,,"

BQP is defined considering circuit size, which is to say the total number of gates. This means that it incorporates:

+ +
    +
  • Number of qubits — because we can ignore any qubits which are not acted on by a gate. This will be polynomially bounded relative to the input size, and often a modest polynomial (e.g. Shor's algorithm only involves a number of qubits which is a constant factor times the input size).
  • +
  • Circuit depth (or 'time') — because the longest the computation could take is if we perform one gate after another, without performing any operations in parallel.
  • +
  • Communication with control systems — because the gates being performed are taken from some finite gate set, and even if we allow intermediate measurements, the amount of communication required to indicate the result of the measurement and the amount of computation required to determine what is done next is usually a constant for each operation.
  • +
  • Interactions between quantum systems — even if we consider an architecture which does not have all-to-all interactions or anything close to it, we can simulate having that connectivity by performing explicit SWAP operations, which can themselves be decomposed into a constant number of two-qubit operations. This might give us a noticeable polynomial overhead which impacts how practical an algorithm is for a given architecture, but it does not hide an exponential amount of work.
  • +
  • Energy — again because the circuits are decomposed into a finite gate-set, there is no obvious way to obtain an apparent speed-up by ""doing the gates faster"" or by hiding work in an exotic interaction, if our bound is in terms of the number of operations performed from a fairly basic set of operations. This consideration is more important in adiabatic quantum computing: we can't try to avoid small gaps just by amplifying the entire energy landscape as much as we like — meaning that we must take longer to do the computation instead, corresponding in the circuit picture to a circuit with more gates.
  • +
+ +

In effect, counting the number of gates from a constant-sized set captures many things which you might worry about as practical resources: it leaves very little space to hide anything which is secretly very expensive.

+",124,,,,,4/18/2018 11:41,,,,0,,,,CC BY-SA 3.0 +1835,2,,1815,4/18/2018 14:42,,5,,"

As I mentioned in the comments, a very precise answer will likely depend on a lot of technical choices which are somewhat arbitrary. It is likely to be more important to obtain an order-of-magnitude estimate, and to account for as much as possible in making it.

+ +

This answer is intended not as a definitive answer, but as a step in the right direction by reference to the existing literature (though admittedly over a decade old by now), specifically:

+ +
    +
  • Van Meter, Itoh, and Ladd. +Architecture-Dependent Execution Time of Shor's Algorithm. Proc. Mesoscopic Superconductivity + Spintronics 2006; [arXiv:quant-ph/0507023]
  • +
+ +

Van Meter, Itoh, and Ladd attempt to compare the performance of Shor's algorithm with available computing technology performing the Number Field Sieve (the best known classical algorithm for factorisation). I have not had the time to plumb through the details of the paper — a superior answer could likely be obtained by doing so — but Figure 1 of that article allows us to make a reasonable numerical estimation:

+ +

[Figure 1 of arXiv:quant-ph/0507023: projected execution times of Shor's algorithm at various clock rates, compared with classical Number Field Sieve implementations, versus number length]

Here, the steep curves represent the computing time of classical computing networks. The curve labeled 'NFS, 104 PCs, 2003' seems to indicate computations (and the projected computing time) of one hundred and four personal computers circa 2003, as reported by RSA Security Inc. in 2004 [http://www.rsasecurity.com/rsalabs/node.asp?id=2096].

+ +

We will carry out a Fermi calculation. Let us assume that the curve corresponds to a computation on 104 essentially identical computers, and let us presume that $n$ computers carrying out number field sieve can carry out $n \cdot v$ computations per second of Number Field Sieve, where $v$ is the number of operations per second which a single computer can carry out. A quick web search suggests the speed of a good commercially available PC circa 2003 was about 2GHz. Assuming that the computers were performing one logical operation per clock cycle, the classical computation in 2003 was effectively operating at about $2 \times 10^{11}$ operations per second. A hypothetical benchmarking of Shor's algorithm would have to be made against a quantum computer performing at a comparable clock speed.

+ +

Unfortunately the lowest curve for quantum algorithms represents a clock-rate of $10^9$, so the point at which a hypothetical realisation of Shor's algorithm would surpass this performance is not on the graph shown. However, there is some interesting information which is shown.

+ +
    +
  • Despite an operations-per-second advantage of a factor of 200 or more, the plot does indicate when this 200GHz classical NFS implementation is surpassed by a 1GHz quantum computer performing Shor's algorithm (at about 200 digit numbers) and by a 1MHz quantum computer (at about 330 digit numbers).
  • +
  • We also have a curve projecting the performance ""in 2018"", representing 1000 times the classical computation power: the intercepts with the 1GHz and 1MHz quantum computers are at 350 bit numbers and 530 bit numbers.
  • +
+ +

The increase in the crossing points against quantum computations, from the computation in 2003 to the projected one in 2018, representing a clock-speed boost of 1000, is a factor of about 5/3. From this we can estimate that the increase in the size of numbers that can be quickly solved by a classical computer, due to a speed increase of a factor of 200, is roughly a factor of 7/6. Then we can estimate that the crossing point of a single 1GHz classical computer performing NFS, with a 1GHz quantum computer performing Shor's algorithm, is at about 170 bit numbers.
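The arithmetic of this Fermi estimate can be laid out explicitly (a sketch only; the 5/3 and 7/6 ratios are the rough estimates read off the figure in the text, not derived here):

```python
# Back-of-envelope arithmetic from the text, laid out as a calculation.
pcs, clock_hz = 104, 2e9
classical_ops_per_s = pcs * clock_hz          # ~2 x 10^11 ops/s in 2003
assert 2.0e11 < classical_ops_per_s < 2.2e11

# A 1 GHz quantum computer crosses the ~200x-faster classical network at
# roughly 200-bit numbers; removing that factor-200 speed advantage shrinks
# the crossing point by the estimated factor of 7/6.
crossing_bits = 200 / (7 / 6)
assert 165 < crossing_bits < 175              # i.e. about 170-bit numbers
```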

+ +

The bottom line — a precise answer will depend on many technical assumptions which can change the precise result significantly, so it is better to seek a rough estimate. But this question has been researched at least once before, and making some number of assumptions and extrapolations on performance based on classical performance in 2003, it seems that Shor's algorithm will outperform the best known classical algorithm on an operation-by-operation basis for numbers around 170 bits.

+",124,,119,,4/18/2018 19:49,4/18/2018 19:49,,,,2,,,,CC BY-SA 3.0 +1836,1,1837,,4/18/2018 16:22,,7,319,"

The following question is related to this one: Will deep learning neural networks run on quantum computers?. I found it complementary and necessary because the previous answers are not completely related with my concerns.

+ +

Primarily, my question is related to hardware-based neural networks. As proposed by Prezioso et al. in 'Training and operation of an integrated neuromorphic network based on metal-oxide memristors', the use of solid-state or even molecular memristors can be a hardware proposal to implement efficient and non-volatile neural networks.

+ +

Thus, the question is: Is there a quantum logic operation which can be used to improve the algorithms present in neural networks? Can quantum logic add any possible strategy in order to increase the efficiency of such networks?

+",1955,,26,,12/13/2018 19:46,12/13/2018 19:46,Can quantum computing provide advantages related to Hardware-Neural Networks?,,1,0,,,,CC BY-SA 3.0 +1837,2,,1836,4/18/2018 17:18,,6,,"

Again, this is still an open question.

+ +

There are two lines of work that come to mind when you talk of ""hardware-based neural networks"", both of which try/claim to use photonics as a means to speed up processing, and make direct reference to speeding up machine learning tasks. +Shen et al. 2016 (1610.02365) propose a method to implement ""fully-optical neural networks"", which they claim provides advantages in terms of computational speed and power efficiency. +An idea that is similar in principle (but not in method) is the one pursued by the LightOn startup (see the papers referenced on their website). Very roughly speaking, the idea here is to exploit the natural dynamics of complex media, and therefore the natural dynamics of scattered light, to again get better processing speed and power consumption (note that I'm just stating the claims made in the papers here, as the details and validity of the performances and advantages can be hard to judge in this case).

+ +

Note that, in both cases, only classical light is used. In other words, there is arguably nothing ""quantum"" about these works, so you may not consider them totally relevant here. +However, both platforms could naturally be used (and have been used for different experiments) with single photons, and therefore can be used for processing of quantum information. +However, building a ""quantum neural network"" is not just a matter of building a neural network-like evolution for a quantum system, as that would likely provide no advantage at all.

+ +

Talking instead of ""quantum logic operation which can be used to improve the algorithms present in neural networks"", this is the idea behind a good chunk of research being done on quantum (assisted) machine learning, some references for which can be found in the question you linked as well as in this other one. +A notable example is HHL09, which provides a way to speed up the problem of inverting a linear system, which is an important part of many machine-learning algorithms.

+",55,,55,,4/18/2018 21:02,4/18/2018 21:02,,,,0,,,,CC BY-SA 3.0 +1839,2,,1716,4/19/2018 10:18,,3,,"

In order to benefit from the use of quantum objects to perform classical computation, one possibility is proposed by Spintronics.

+ +

Since the discovery of giant magnetoresistance by Fert et al. (in Giant Magnetoresistance of (001)Fe/(001)Cr Magnetic Superlattices), a new field has developed in which the spin is used instead of the classical charge. This new electronics relies on the quantum properties of the spin to encode information, an approach that permits the manipulation (write/read) of information using lower energies than those involved in conventional electronics. Spintronics proposes new materials for the data storage and data processing industries but also aims at offering novel possibilities to quantum technologies. See: Spintronics, A Spin-Based Electronics Vision for the Future.

+ +

Molecular spintronics, developed in the past decade, deals with the possibility of transferring the spintronic properties displayed by purely inorganic compounds to systems made out of discrete molecules. Motivated by the fact that organic molecules are mostly formed by light atoms, which bear weak spin−orbit coupling and present a low contact nuclear hyperfine interaction, molecular spintronics holds promise for enhanced quantum coherence and the preservation of the spin during the operation time.

+ +

Regarding classical/quantum hybrid computation, the group of W. Wernsdorfer has recently reported the first quantum algorithm implemented in a single-molecule device by means of spintronics, in Operating Quantum States in Single Magnetic Molecules: Implementation of Grover’s Quantum Algorithm. I am not aware of any comparable advance by the inorganic (non-molecular) counterpart in spintronics.

+",1955,,1955,,4/19/2018 10:42,4/19/2018 10:42,,,,0,,,,CC BY-SA 3.0 +1840,1,1848,,4/19/2018 11:06,,8,510,"

Considering two entangled flying qubits, as far as I know there is no physical limit on how far they can be separated without losing quantum information.

+

See: Is there any theoretical limit to the distance at which particles can remain entangled?

+
+

Question

+
    +
  1. What is the current record for such a distance using photons? I have found a previous record of 143 km (Physicists break quantum teleportation distance)

    +
  2. +
  3. What about using solid-state based qubits?

    +
  4. +
  5. Could this pose a limitation when constructing a solid-state quantum computer?

    +
  6. +
+",1955,,-1,,6/18/2020 8:31,12/13/2018 21:00,What is the maximum separation between two entangled qubits that has been achieved experimentally?,,2,0,,,,CC BY-SA 3.0 +1841,1,1843,,4/19/2018 13:36,,10,682,"

Suppose I transform a state as follows:

+ +
    +
  1. I start with the state $\lvert 0\rangle \otimes \lvert0\rangle \otimes \lvert0\rangle \otimes \lvert 0 \rangle$.
  2. +
  3. I entangle the 1st and 2nd qubits (with an H gate and C-NOT).
  4. +
  5. I then entangle the 3rd and 4th qubits the same way.
  6. +
+ +

If I try to apply an H gate and C-NOT to the 2nd and 3rd qubits afterwards, will the whole system become entangled? What happens to the 1st and 4th qubits in that case?

+ +

(Cross-posted from Physics.SE)

+",2060,,26,,12/23/2018 12:27,12/23/2018 12:27,What happens if two separately entangled qubits are passed through a C-NOT gate?,,1,5,,,,CC BY-SA 3.0 +1842,1,1849,,4/19/2018 13:53,,5,117,"

I've read the ""trend"" article entitled Is a room-temperature, solid-state quantum computer mere fantasy? from almost 10 years ago, and was wondering if things have really changed: What is the current consensus on the viability of a room-temperature solid-state quantum computer?

+",1931,,26,,12/14/2018 5:29,12/14/2018 5:29,What is the most optimistic perspective of room-temperature solid-state QC?,,1,1,,,,CC BY-SA 3.0 +1843,2,,1841,4/19/2018 14:01,,11,,"

$\newcommand{\bra}[1]{\left<#1\right|}\newcommand{\ket}[1]{\left|#1\right>}\newcommand{\bk}[2]{\left<#1\middle|#2\right>}\newcommand{\bke}[3]{\left<#1\middle|#2\middle|#3\right>} +% +$So, first you're entangling qubits 1 and 2, and qubits 3 and 4, so overall you have the quantum state +$$ +\left(\ket{00}+\ket{11}\right)\otimes\left(\ket{00}+\ket{11}\right)/2 +$$ +Then you apply a Hadamard on qubit 2, +$$ +(\ket{0}(\ket{0}+\ket{1})+\ket{1}(\ket{0}-\ket{1}))\otimes(\ket{00}+\ket{11})/(2\sqrt{2}) +$$ +before applying a controlled-NOT from qubit 2 (control) to qubit 3 (target), right? This gives you +$$ +(\ket{0}\otimes(\ket{0}\otimes(\ket{00}+\ket{11})+\ket{1}\otimes(\ket{10}+\ket{01}))+\ket{1}\otimes(\ket{0}\otimes(\ket{00}+\ket{11})-\ket{1}\otimes(\ket{10}+\ket{01})))/(2\sqrt{2}) +$$ +Let's rearrange this slightly as +$$ +\ket{\Psi}=((\ket{0}-\ket{1})\ket{1}(\ket{10}+\ket{01})+(\ket{0}+\ket{1})\ket{0}(\ket{00}+\ket{11}))/(2\sqrt{2}) +$$ +Note that we need the full state of the whole system. You cannot really talk about the states of qubits 1 and 4 separately due to the entanglement.

+ +

The question of ""is it still entangled"" is straightforwardly ""yes"", but that is a actually a triviality of a much more complex issue. It is entangled in the sense that it is not a product state $\ket{\psi_1}\otimes\ket{\psi_2}\otimes\ket{\psi_3}\otimes\ket{\psi_4}$.

+ +

One simple way to see that this state is entangled is to pick a bipartition, i.e. a split of the qubits into two parties. For instance, let's take qubit 1 as one party (A), and all the others as party B. If we work out the reduced state of party A, a product state (unentangled) would have to give a pure state. Meanwhile, if the reduced state is not pure, i.e. has a rank greater than 1, the state is definitely entangled. For example, in this case +$$ +\rho_A=\text{Tr}(\ket{\Psi}\bra{\Psi})=\frac{\mathbb{I}}{2}, +$$ +has rank 2. Actually, it doesn't matter what you did between qubits 2 and 3, as $\rho_A$ is independent of that unitary; it cannot remove the entanglement created with qubit 1 (just possibly spread it between qubits 2 and 3). The fact that you have to look at different bipartitions to see which qubits are entangled with which already starts to indicate some of the complexity. For pure states, it is sufficient to look at each of the bipartitions of 1 qubit with the rest. If each of these reduced density matrices is rank 1, your whole state is separable.
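This computation is easy to reproduce numerically (a NumPy sketch of mine; qubits are ordered 1 through 4, left to right, in the tensor products):

```python
import numpy as np

# Build the 4-qubit state from the circuit described: Bell pairs on qubits
# 1-2 and 3-4, then H on qubit 2 and CNOT from qubit 2 to qubit 3.  Check
# that the reduced state of qubit 1 is I/2 (rank 2), so the state is
# entangled across the 1 | 234 bipartition.

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)   # (|00> + |11>)/sqrt(2)
state = np.kron(bell, bell)                  # qubits 1..4

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])
I2 = np.eye(2)

# H on qubit 2, then CNOT on qubits 2 (control) and 3 (target).
U = np.kron(np.kron(I2, CNOT), I2) @ np.kron(np.kron(I2, H), np.eye(4))
psi = U @ state

# Partial trace over qubits 2-4: split each index as (qubit 1, rest).
rho = np.outer(psi, psi.conj()).reshape(2, 8, 2, 8)
rho_A = np.trace(rho, axis1=1, axis2=3)

assert np.allclose(rho_A, np.eye(2) / 2)
assert np.linalg.matrix_rank(rho_A) == 2
```

Since the Hadamard and CNOT act only on qubits 2 and 3, any unitary there would give the same $\rho_A$: qubit 1's marginal is fixed by its entanglement with qubit 2, exactly as the answer states.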

+ +

Related to your question, you might like to look up issues of ""monogamy of entanglement"" -- the more entangled qubit 1 is with qubit 2, the less entangled qubit 1 is with qubit 3 (for example), and that can be quantified in a number of different ways. Equally, you can ask questions about ""what sort of entanglement is there?"". One approach is to look at what types of entanglement can be converted into different types (often termed ""SLOCC equivalence classes""). For example, with 3 qubits, people make the distinction between W-state entanglement, which looks like $\ket{001}+\ket{010}+\ket{100}$ and GHZ-entanglement that looks like $\ket{000}+\ket{111}$, as well as bipartite entanglement between different pairs of qubits, and a separable state on the other. It gets really messy for 4 qubits!

+",1837,,1837,,4/19/2018 15:22,4/19/2018 15:22,,,,0,,,,CC BY-SA 3.0 +1844,2,,1840,4/19/2018 15:14,,5,,"

I believe the current record is held by the Jian-Wei Pan group in China, who are able to generate entanglement via a satellite. The journal article is here, while there's plenty of media coverage that is a bit more accessible, e.g. New Scientist. This claims a distance of 1203 km.

+",1837,,,,,4/19/2018 15:14,,,,0,,,,CC BY-SA 3.0 +1845,1,1846,,4/19/2018 16:50,,8,463,"

Consider the 9 qubit Shor code. This can detect and correct arbitrary single qubit errors, but if there are 2 or more single qubit errors before a correction round, the correction will fail. (In the sense that it won't reproduce the original logical state.) Hence the probability of failure of the Shor code is the probability of there being 2 or more single qubit errors. If we assume that single qubit errors occur independently with probability $p$, then we can calculate this probability as

+ +

$$ +P(\mathrm{failure}) = 1 - \left[(1-p)^{9} + 9p(1-p)^{8}\right]. +$$

+ +

We can write this probability as a series in $p$ as

+ +

$$ +P(\mathrm{failure}) = \sum_{m=2}^{9} (-1)^{m} (m-1) \binom{9}{m} p^{m}. +$$

+ +

My question is about the intuition for the coefficients of each term in this sum.

+ +

The $\binom{9}{m} p^{m}$ part seems natural. We can interpret it as the probability that $m$ single qubit errors occur multiplied by the number of groups of $m$ qubits these errors could act on.

+ +

Question: What I am confused by is the $(-1)^{m} (m-1)$ part of the coefficient. Why should some of the higher order terms lead to a suppression of the failure probability, and why should the weighting of these terms beyond the binomial coefficient increase with order? Is there some intuition for these coefficients?

+ +

Note 1: I realise that the $m=2$ case is the most important, since we typically assume $p \ll 1$. Indeed, most treatments truncate the series at $m=2$, where $(-1)^{m} (m-1) = 1$, so even if the higher-order terms do have an effect, that effect should be fairly negligible.

+ +

Note 2: Though I have specified the Shor code for concreteness, this behaviour is not specific to the Shor code. For example, if we restrict ourselves to bit-flip errors, then the failure probability of the bit-flip code will exhibit similar coefficients.

+",2061,,26,,05-07-2018 13:18,05-07-2018 13:18,Intuition for Shor code failure probability,,1,1,,,,CC BY-SA 3.0 +1846,2,,1845,4/19/2018 19:12,,5,,"

When a given set of Pauli errors acts on a stabilizer code, the errors will either be corrected, or the attempt at correction will lead to a logical error. There will be no more complicated cases, such as a superposition of the two.

+ +

This makes calculation of the failure probability rather straightforward (though not necessarily practical for large codes). You just take the probability of every uncorrectable set of errors, and sum them up.

+ +

Even though this is a simple sum of positive terms, there are ways it could be written with the occasional minus sign. For example, you could sum the probabilities for all correctable sets of errors, and then subtract that from $1$. Such subtractions do not imply a suppression of the failure probability. They just show that your previous terms overestimated the failure probability.

+ +

Minus signs most often come because the probability for a lack of error on a qubit can be written as $1-p$. If we expand out products in which this is a factor, we will get negative terms. But that is simply because we chose to write the expression as a polynomial in the single probability $p$, rather than using both $p$ and $1-p$ for their respective events.

+ +

This is what is happening in your case. The factor $p^m$ is an overestimate for the probability of errors on $m$ qubits, because it doesn’t account for what the other qubits are doing, and the corresponding contributions to the total probability. Hence some subtractions also need to be made.
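This can be checked numerically: the closed form from the question and the alternating series agree exactly, confirming that the minus signs are just an artefact of expanding the $(1-p)$ powers (a sketch; `math.comb` is Python's binomial coefficient):

```python
from math import comb

# Failure probability of the 9-qubit code under independent single-qubit
# errors:  P = 1 - [(1-p)^9 + 9 p (1-p)^8],
# versus the alternating series  sum_{m=2}^{9} (-1)^m (m-1) C(9,m) p^m.

def closed_form(p):
    return 1 - ((1 - p)**9 + 9 * p * (1 - p)**8)

def series(p):
    return sum((-1)**m * (m - 1) * comb(9, m) * p**m for m in range(2, 10))

for p in [0.0, 0.001, 0.01, 0.1, 0.3]:
    assert abs(closed_form(p) - series(p)) < 1e-9
```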

+",409,,,,,4/19/2018 19:12,,,,3,,,,CC BY-SA 3.0 +1847,2,,75,4/19/2018 19:44,,2,,"

In the dim and distant past (I.e. I don’t remember the details any more), I tried to calculate an upper bound on a fault tolerant threshold. I suspect the assumptions that I made to get there wouldn’t apply to every possible scenario, but I came up with an answer of 5.3% (non-paywall version).

+ +

The idea was roughly to make use of a well-known connection between error correction codes and distillation of multiple noisy Bell states into a single, less noisy Bell state. In essence, if you have multiple noisy Bell states, one strategy for making a single high quality Bell state is to teleport the codewords of an error correction code through them. It's a two-way relationship; if you come up with a better distillation strategy, that defines a better error correcting code and vice-versa. So, I wondered what would happen if you allowed a concatenated scheme of distillation of noisy Bell pairs, but allowed some errors to occur when applying the various operations. This would map directly to fault tolerance via concatenated error correcting codes. But the different perspective allowed me to estimate a threshold beyond which the noise accumulation would simply be too high, and thus the error correction scheme would fail.

+ +

Different works have made different assumptions. For example, this one restricts to specific gate sets, and derives an upper bound to the fault-tolerant threshold of 15% in a specific case (but then the question arises as to why you wouldn't pick the scheme with the highest upper bound, rather than the lowest!).

+",1837,,1837,,05-04-2018 19:39,05-04-2018 19:39,,,,0,,,,CC BY-SA 4.0 +1848,2,,1840,4/20/2018 4:26,,4,,"

Photons travel fast, and there's often the option to transfer their entanglement to solid state. Of course, the advantage of transferring entanglement to a solid-state qubit is that one is able to operate with it (one- and two-qubit gates, for example) with ease and efficiency, whereas it is very hard to effect two-qubit quantum gates on photons themselves; for more on that, see the answer to How do you apply a CNOT on polarization qubits? So, let us divide the answer into optical-solid-state hybrid approaches, purely optical approaches and purely solid-state approaches:

+ +
    +
  • The optical-solid-state hybrid approach results in records such as this one from 2012: Heralded entanglement between solid-state qubits separated by 3 meters. For the solid-state part they employed Nitrogen-Vacancy centers, which are diamond defects with remarkable quantum coherence, even at high temperature (although this particular experiment is performed at low temperature). In this case, the quantum fidelity of the final entangled state is well above the classical limit of 0.5 but at the same time well below 0.9, meaning it's enough to demonstrate quantum effects, but not great in a practical sense. Apparently, imperfect photon indistinguishability is the main limitation to fidelity in this experiment, followed by errors in the microwave pulses that are used to rotate the readout bases of the two solid-state qubits. As a more recent update on where things could be headed towards with the hybrid approach, there's this Demonstration of Entanglement Purification and Swapping Protocol to Design Quantum Repeater in IBM Quantum Computer. As far as I read it, it's not a complete demonstration, since it does not actually implement the photon-solid transfer but rather ""design a quantum circuit which could in principle equivalently perform the main operations of a quantum repeater"". For a perspective on the whole field of combining quantum communications with quantum computing, see Nature Photonic's Towards a global quantum network(arXiv version).
  • +
  • The purely optical record, as reported in his answer by @DaftWullie, is claimed by the Jian-Wei Pan group in China, who report entanglement over 1203 km via a satellite (Satellite-to-Ground Entanglement-Based Quantum Key Distribution). Because of the nature of photons, this is more useful for purely quantum communication purposes rather than for actual quantum computing.
  • +
  • On the purely solid-state approach, I found this letter to Nature Nanotechnology of 2012, Electrical control of a solid-state flying qubit (arXiv version), in which Yamamoto and coworkers reported the transport and manipulation of qubits over distances of 6 microns within 40 ps, in Aharonov–Bohm rings (based on the Aharonov–Bohm effect), connected to two-channel wires that have a tunable tunnel coupling between channels. They claim to be the first ""demonstrations of scalable ‘flying qubit’ architectures—systems in which it is possible to perform quantum operations on qubits while they are being coherently transferred—in solid-state systems"". According to Yamamoto et al., ""These architectures allow for control over qubit separation and for non-local entanglement, which makes them more amenable to integration and scaling than static qubit approaches.""
  • +
+ +

All that being said, probably the best practical answer to the question, at least for now, comes from currently working quantum computers: since it is claimed that the 16-qubit IBM universal quantum computer can be fully entangled, it seems that the maximum distance of entanglement in solid-state devices will not be a practical limitation for quantum computing (even without employing flying qubits). I suspect that scaling and protecting that entanglement, however, will not be trivial.

+",1847,,,,,4/20/2018 4:26,,,,0,,,,CC BY-SA 3.0 +1849,2,,1842,4/21/2018 2:44,,2,,"

Articles on technology from 10 years ago are often outdated; to some extent, the same can be said of last year's information. Occasionally something will stand for decades, or fall into decline only to be revisited later.

+ +

The most optimistic perspective is: someone is working on it.

+ +

Here are some more recent articles on room temperature QC:

+ +

""Room-temperature single-photon emitters in titanium dioxide optical defects"" by Chung, Leung, To, Djurišić and Tomljenovic-Hanic (04 Apr 2018).

+ +

""Room temperature high-fidelity non-adiabatic holonomic quantum computation on solid-state spins in Nitrogen-Vacancy centers"" by Yan, Chen, and Lu (20 Dec 2017).

+ +

""Scalable quantum computation based on quantum actuated nuclear-spin decoherence-free qubits"", by Rong, Dong, Geng, Shi, Li, Duan, and Du (27 Dec 2016).

+ +

""Coherent control of single spins in silicon carbide at room temperature"" by Widmann, Lee, Rendler, and 15 others (31 Oct 2014).

+ +

Lots of people are working on the idea, how soon it becomes available remains to be seen. Use of solid-state could be on the way out ...

+ +

""Practical spin wave transistor one step closer"" physics.Org article, March 1, 2018, and University of Groningen ""Magnon spintronics in non-collinear magnetic insulator/metal heterostructures"" (Feb 2017).

+ +

+",278,,,,,4/21/2018 2:44,,,,0,,,,CC BY-SA 3.0 +1850,1,,,4/21/2018 6:05,,7,630,"

It is popularly stated that quantum computing could destroy and disrupt blockchain technology completely. How is quantum computing a threat to blockchain technology?

+",2074,,26,,12/13/2018 19:47,12/13/2018 19:47,Quantum computing and blockchain technology,,2,2,,,,CC BY-SA 3.0 +1851,1,1853,,4/21/2018 9:14,,7,200,"

QMA (Quantum Merlin Arthur) is the quantum analog of NP, and QMA(k) is the class with $k$ Merlins. These are important classes when studying quantum complexity theory. QMA(k) is QMA with $k$ unentangled provers ($k$ Merlins) and a BQP verifier. These classes enable us to formally study the complexity of entanglement. For instance, the class QMA(2) could help us to study the separability of quantum states, and the resources required for telling whether states are separable or far from separable.

+

A natural question arises: what are the upper bounds for these classes (QMA, QMA(k))? Do these classes have nontrivial upper bounds (could they, for example, have upper bounds such as PSPACE)?

+",429,,1777,,8/28/2021 16:19,3/16/2022 13:11,"Upper Bounds for QMA Quantum Merlin Arthur, and QMA(k)",,2,0,,,,CC BY-SA 4.0 +1852,2,,1850,4/21/2018 9:31,,4,,"

My crude understanding of blockchain (derived mainly from the Wikipedia article) is that it gets its security from two sources:

+ +
    +
  • Individual communications are performed using a public key cryptography scheme

  • +
  • Information is stored in a decentralised manner across many different computers, meaning that there are many different copies of the same information.

  • +
+ +

The level of security provided by these two items differs, I suspect. Public key cryptography has an exponential form of security against classical attacks: it's based on a mathematical problem, and you add one bit to the problem size, and the difficulty (roughly) doubles. It's really easy to add a few bits, and put the problem completely out of anybody's reach. Meanwhile, for the decentralised part, I imagine that adding one extra computer to the network doesn't significantly increase the resources required to monitor all the communications; for a network of $N$ nodes, there are only $\binom{N}{2}$ communication links to monitor (a polynomial in $N$, not exponential). So, while adding a few more computers to the network might make monitoring a daunting task for an individual, state-level interference is unlikely to be eliminated. Thus, the security is heavily dependent upon the security of the public key cryptosystem being used.
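As a rough numerical illustration of this asymmetry (a sketch, not a security analysis): the number of links to monitor grows only quadratically with the network size, while the number of keys to try grows exponentially with the key length.

```python
from math import comb

# Links to monitor in an N-node network: polynomial growth.
print([comb(n, 2) for n in (10, 100, 1000)])      # -> [45, 4950, 499500]

# Keys to try for a b-bit secret: exponential growth.
print([2**b for b in (10, 20, 30)])               # -> [1024, 1048576, 1073741824]
```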

+ +

The point is that quantum computers will be good at breaking existing public key cryptography systems such as RSA. RSA, for example, is secured by the assumption that it is difficult to find the prime factors of a large number (the person who is allowed to decrypt a message proves that they can by giving the factors of a particular number). To the best of our knowledge, this is true for classical computers, but Shor's algorithm makes this an easy task for a quantum computer. This means that, in principle, individual communications can be read and manipulated by a quantum computer.
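As a toy sketch of why factoring matters here (tiny illustrative numbers only, nothing like real key sizes): anyone who can factor the public modulus can reconstruct the private exponent and decrypt.

```python
# Toy illustration (not real cryptography): RSA's security rests on the
# difficulty of recovering p and q from their product N.
p, q = 61, 53          # secret primes; real RSA uses primes hundreds of digits long
N = p * q              # public modulus
e = 17                 # public exponent
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)    # private exponent -- computable only with p and q (Python 3.8+)

m = 42                 # message
c = pow(m, e, N)       # encrypt with the public key
assert pow(c, d, N) == m   # whoever factors N can decrypt
```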

+ +

Researchers are working on replacement public key cryptography systems that will be resilient to attack by a quantum computer (called post-quantum cryptography) but these are not yet in place.

+",1837,,1837,,4/21/2018 9:51,4/21/2018 9:51,,,,0,,,,CC BY-SA 3.0 +1853,2,,1851,4/21/2018 9:38,,4,,"

You probably want to check out the complexity zoo for known results. For example, the listing on QMA(2) states:

+
+

It was shown in ABD+08 that a conjecture they call the Strong Amplification Conjecture implies that QMA(2) is contained in PSPACE.

+

It was shown in HM13 that QMA(k) = QMA(2) for k >= 2. However we still do not know if QMA(2) = QMA. The best known upper bound is NEXP.

+
+",1837,,-1,,6/18/2020 8:31,4/21/2018 9:38,,,,1,,,,CC BY-SA 3.0 +1854,1,1984,,4/21/2018 10:26,,19,423,"

This question is related to Can the theory of quantum computation assist in the miniaturization of transistors? and Is Quantum Biocomputing ahead of us?

+ +

About 10 years ago, several papers discussed the environment-assisted quantum walks in photosynthetic energy transfer (Mohseni, Rebentrost, Lloyd, Aspuru-Guzik, The Journal of Chemical Physics, 2008) and the dephasing-assisted transport: quantum networks and biomolecules (Plenio, Huelga, New Journal of Physics 2008). One major idea there seems to be that the ""environment"" (quantum decoherence) assists or optimizes the transport of a signal that is also fundamentally quantum coherent in nature.

+ +

My question is: beyond the theoretical interpretation of processes happening in natural systems, have physical processes of this kind already been explored in artificial systems, either as quantum computation (perhaps as quantum-enhanced mixed classical-quantum transistors) or in a quantum simulator? Or, if this has not happened yet, could one in principle do it?

+ +

Edit after obtaining an answer: Note the lax sense of optimize above. In a biological rather than mathematical context, optimization need not be absolute but can refer to any significant improvement, just as enzymes have been optimized via evolution to increase the speed of reactions.

+",1847,,141,,08-07-2018 03:45,10/21/2021 12:04,Quantum simulation of environment-assisted quantum walks in photosynthetic energy transfer,,2,6,,,,CC BY-SA 4.0 +1855,1,1856,,4/21/2018 11:24,,15,6726,"

I am getting confused about the meaning of the term ""ancilla"" qubit. Its use seems to vary a lot in different situations. I have read (in numerous places) that an ancilla is a constant input - but in nearly all of the algorithms I know (Simon's, Grover's, Deutsch etc.) all the qubits are constant inputs and would therefore be considered ancillae. Given that this does not seem to be the case - what is the general meaning of an ""ancilla"" qubit in quantum computers?

+",2015,,26,,12/23/2018 12:26,12/23/2018 12:26,"What counts as an ""ancilla"" qubit?",,2,0,,,,CC BY-SA 3.0 +1856,2,,1855,4/21/2018 11:58,,8,,"

The general meaning of ancilla in ancilla qubit is auxiliary. In particular, when people write about ""constant input"" what they mean is that, for a given algorithm - which has a purpose, such as finding the prime factors of an input number, or effecting a simple arithmetic operation between two input numbers - the value of the ancilla qubits will be independent of the value of the input.

+ +

Probably your confusion arises because some algorithms study a function, employing a constant input, rather than study an input, using a constant function. Maybe in these cases the term ancilla qubit makes less sense, since, as you point out, all input qubits are constant and act as ancillae.

+",1847,,1847,,4/22/2018 6:17,4/22/2018 6:17,,,,1,,,,CC BY-SA 3.0 +1857,2,,1855,4/21/2018 11:59,,11,,"

When translating a classical circuit into a quantum circuit, you often need to introduce extra qubits simply because quantum computers only implement reversible logic. Such extra qubits are ancilla (or ancillary qubits).

+ +

One way to spot which qubits are ancilla is to look for those qubits that typically need to be ""uncomputed"" when using the quantum circuit as a quantum oracle in another quantum algorithm.
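As a minimal sketch of that ""uncompute"" step (a numpy matrix simulation, not tied to any particular framework): a Toffoli gate writes the AND of two qubits into an ancilla, and applying the same gate again returns the ancilla to $|0\rangle$.

```python
import numpy as np

# Toffoli (CCNOT): flips the third (ancilla) qubit iff the first two are 1.
T = np.eye(8)
T[[6, 7]] = T[[7, 6]]          # swap the |110> and |111> basis states

# Compute a AND b into an ancilla initialised to |0>, then uncompute.
state = np.zeros(8); state[0b110] = 1.0        # |a=1, b=1, ancilla=0>
computed = T @ state                           # ancilla now holds a AND b
uncomputed = T @ computed                      # second Toffoli restores it

assert np.argmax(computed) == 0b111            # ancilla flipped to 1
assert np.allclose(uncomputed, state)          # ancilla back to |0>
```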

+",,user1039,,,,4/21/2018 11:59,,,,1,,,,CC BY-SA 3.0 +1858,1,1866,,4/21/2018 14:08,,7,275,"

On p490 of Nielsen and Chuang (2010), the authors say that the preparation of the 'cat' state ($|000\ldots 0\rangle+|111\ldots 1\rangle$) is not fault tolerant. Below is my mock-up of the diagram they draw for the preparation ($H$ and $C$-not-not *) and one part of the verification (the next two C-nots):

+ +

+ +

They then explain that this is not fault tolerant because a $Z$ error in the 'extra qubit' (i.e. that at the bottom of the diagram) propagates into two Z-errors in the ancilla qubits (the top three).

+ +

They then go on to say that this will not affect the encoded data (I have not shown this in my diagram).

+ +

There are a couple of things that confuse me here. Firstly, I cannot see how we get two $Z$-errors on the ancillary qubits. Secondly, even if we did get two $Z$-errors, surely this is a good thing as it will take our cat state back to the cat state? More to the crux of the issue - I cannot see what criterion they are using for fault tolerance here (I know what it means in the general case - i.e. an unrecoverable error with probability no greater than $Cp^2$) and how their example violates it. Please can someone explain this to me?

+ +

*Not technical name - I couldn't find what it was actually called.

+",2015,,2015,,4/22/2018 9:20,4/22/2018 9:20,Why or how is 'cat' state preparation via a C-not-not operation not fault tolerant?,,2,1,,,,CC BY-SA 3.0 +1859,2,,1858,4/21/2018 16:19,,5,,"

First, a matter of terminology. I don't have my copy of Nielsen & Chuang to hand, but I would have thought that the bottom, extra qubit, is the one that is the ancilla. I am also not entirely convinced that the errors you're talking about are correct. You seem to be talking about $Z$ errors, but giving results that correspond to $X$ errors. (If a $Z$-gate happens on the ancilla, it makes no difference, up to a global phase, because that qubit is always in a basis state and, as you state, 2 $Z$-errors on your original state give back the original state, so no harm done.)

+ +

What the ancilla qubit is doing here is comparing qubits 2 and 3; if they are in the same state (as they should be), you get a 0 answer. If they are in a different state, you get a 1 answer. Thus, if you get a 1 answer, you know something has gone wrong on one of your 3 main qubits, and needs correction. Let's say you've already tested to see that qubits 1 and 2 are the same, and they are. So, having found that qubits 2 and 3 are different, it must be that qubit 3 has the error (assuming the error occurred on one of the main qubits). So, you apply a bit-flip gate to qubit 3.

+ +

However, let me show you what could go wrong. Here, $X$ represents a bit-flip error. [Circuit diagram omitted.] This is the full syndrome circuit, where $|\psi\rangle$ is the cat state you produced before (also called a GHZ state).

+ +

Here, qubit 3 has the error, but you detect it on qubit 1. So, you correct it on qubit 1, and thus your state has 2 errors in it.

+ +

What does this have to do with fault-tolerance? Proofs of fault-tolerance are usually based around the idea of tracking the errors in each error correcting code (of which there are many, one for each logical qubit that you want). You can prove that provided each logical gate that you apply (preceded and followed by a round of error correction) only causes single errors on each logical qubit, then there is a threshold error probability below which these errors can be corrected away through the use of concatenated error correcting codes. So, the point is that the structure we've just talked about doesn't obey this. There is a place where a single error actually causes 2 errors on the error correcting code. Thus, the usual argument for fault tolerance does not apply. (Technically, this does not show that there aren't other arguments that could be made, it's just that the standard route doesn't work.)

+",1837,,,,,4/21/2018 16:19,,,,3,,,,CC BY-SA 3.0 +1860,1,1865,,4/21/2018 20:07,,14,2237,"

I understand that a qudit is a quantum $d$-state system. If $d=4$, is this exactly the same as a two-qubit system, which also presents $4$ quantum states? The Hilbert space is the same, right? Are there any theoretical or practical differences?

+",1931,,26,,12/23/2018 12:26,12/23/2018 12:26,What is the difference between a qudit system with d=4 and a two-qubit system?,,5,1,,,,CC BY-SA 4.0 +1861,1,1862,,4/21/2018 20:29,,7,378,"

I have no background in quantum physics, and no understanding of most formulas used in this context. I'm not looking for an in-depth answer, I'd just like to vaguely understand the concept.

+ +

The way I heard it, a superposition is [real/a necessary assumption/a concept not even wrong/whatever] and will end as soon as the object in superposition is observed. I always took that ""observed"" part to mean ""interacted with in a way that allows one to make a decision on the state"" and I further supposed there was no cheating a la ""oh but I entangled A and B and now I just observed A, so B will be alright"" and suchlike.

+ +

So now we have quantum computation, which seems to rely on the superpositions of objects somehow covering a whole lot of bases at once and then [something something] which produces an answer - my question is about the I/O process: How can I input something so that a superposition is achieved that encompasses the information I input, without automatically destroying the relevant superposition? How can I be sure my input was put in without looking? How can I look without destroying the very thing I wanted?

+",2080,,55,,05-07-2018 10:57,05-07-2018 10:57,Is it true that observing a quantum state will end the superposition of states? How can I not observe?,,2,0,,,,CC BY-SA 4.0 +1862,2,,1861,4/21/2018 21:26,,5,,"

I'm going to go for an intuitive answer here, as requested. Let's go in steps:

+
    +
  • Your input is (often?) classical, so up to that point we're good.
  • +
  • Then you start doing quantum operations and achieve, for example, quantum superpositions between different states. Here you're right, you cannot look to check if you're doing OK, and that indeed is a problem, or rather the origin for different problems. For more on that you could give a try to the tags quantum entanglement (what you want, often), quantum decoherence (what happens in the way) and quantum error correction (ways to fix the situation without observing, or to observe just-enough but not-in-a-destructive-way).
  • +
  • The last point, quantum error correction, is indeed a trick that is allowed. Here the key is observing, not the values of the qubits, but rather the relations between these values. With enough redundancy and smart schemes, one can deduce what went wrong and where, and fix it without observing the values that are involved in the calculation. (Hopefully the 3-qubit bit flip code can be understood without understanding quantum mechanics, since C-not operations have basic truth tables?) In this way we sacrifice part of the quantum information while preserving the part that we care about.
  • +
  • In the end we also get a classical output. We lose a lot, but that's just how reality works. You can always repeat the calculation to extract information gradually, or aim for quantum state tomography.
  • +
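For the 3-qubit bit-flip code mentioned above, the ""observe relations, not values"" idea can be sketched at the classical truth-table level: the two parity checks locate a single flipped bit while revealing nothing about whether $000$ or $111$ was encoded.

```python
# Classical sketch of the 3-bit repetition idea: the two parity checks
# (relations between bits) locate a single flip without revealing the
# encoded value itself.
def locate_flip(b):
    s1 = b[0] ^ b[1]           # parity of bits 1 and 2
    s2 = b[1] ^ b[2]           # parity of bits 2 and 3
    return {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}[(s1, s2)]

# A flip on bit 1 of either codeword gives the SAME syndrome:
assert locate_flip([1, 0, 0]) == locate_flip([0, 1, 1]) == 0
assert locate_flip([0, 0, 0]) is None      # no error, nothing learned
```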
+

Finally, since others said it better than I could, consider these quotations (emphases mine).

+

First, by Scott Aaronson, "PHYS771 Lecture 14: Skepticism of Quantum Computing":

+
+

Q: OK, so you have the Threshold Theorem, but then you have to do some error correction, right? Your computation becomes longer, right?

+

Scott: Yeah, but by a factor of polylog(n). This isn't challenging the Church-Turing Thesis, but yeah, that's true.

+

Q: I'm not sure if you'd have to perform another error correction as you proceed.

+

Scott: The entire content of the Threshold Theorem is that you're correcting errors faster than they're created. That's the whole point, and the whole non-trivial thing that the theorem shows. That's the problem it solves.

+
+

And secondly, the abstract from arXiv:quant-ph/9712048 "Fault-tolerant quantum computation" by John Preskill:

+
+

The discovery of quantum error correction has greatly improved the long-term prospects for quantum computing technology. Encoded quantum information can be protected from errors that arise due to uncontrolled interactions with the environment, or due to imperfect implementations of quantum logical operations. Recovery from errors can work effectively even if occasional mistakes occur during the recovery procedure. Furthermore, encoded quantum information can be processed without serious propagation of errors. In principle, an arbitrarily long quantum computation can be performed reliably, provided that the average probability of error per gate is less than a certain critical value, the accuracy threshold. It may be possible to incorporate intrinsic fault tolerance into the design of quantum computing hardware, perhaps by invoking topological Aharonov-Bohm interactions to process quantum information.

+
+",1847,,-1,,6/18/2020 8:31,4/22/2018 5:23,,,,4,,,,CC BY-SA 3.0 +1863,2,,1850,4/22/2018 1:37,,5,,"

This answer assumes that you do not have a technical background in cryptography or quantum physics.

+ +

Most current implementations of the blockchain rely on two math concepts: (1) Public key encryption. (2) Hash keys.

+ +

Quantum computers can break the public key encryption part, through a famous method known as Shor's algorithm (For technical details: see page 8 of: https://arxiv.org/pdf/1710.10377.pdf). This is a powerful threat. But since the digital security of the modern world is built on public key encryption, this would be a broader problem (as opposed to a blockchain specific one).

+ +

Quantum computers can also break the hash key component, through a method known as Grover search. But this part is relatively resistant to the attack (For technical details: see page 4 of the above link).

+ +

There are other ways to build a blockchain to protect against the above attack:

+ +

Some would be based on math ideas; these are known as post-quantum blockchains. Since quantum computers have already been shown to break math-based cryptography systems, and researchers are working on new algorithms for this future computer (see: https://www.nature.com/news/first-quantum-computers-need-smart-software-1.22590), this casts doubt on the long-term durability of such a cryptographic system.

+ +

A research group based a blockchain on a cryptographic system that uses quantum physics (For technical details see: https://arxiv.org/abs/1705.09258); this uses the properties of quantum particles as opposed to math ideas. It's known as quantum cryptography and it is resistant to attacks from a quantum computer. The weakness in that blockchain system is that it makes technical assumptions that may not be viable in the real world.

+ +

Another research group made the blockchain itself into a quantum system (For technical details, see: https://arxiv.org/abs/1804.05979). This uses a property of quantum particles known as entanglement in time. The weakness is that the research only presents a conceptual design.

+ +

In summary, quantum computers pose a threat to the current implementations of the blockchain; future implementations may not suffer. Furthermore, it would be incorrect to single out just blockchains for such a threat; quantum computers pose a threat to other systems protected by current digital security methods.

+ +

Hope that helps.

+",2084,,,,,4/22/2018 1:37,,,,1,,,,CC BY-SA 3.0 +1864,2,,1860,4/22/2018 3:30,,7,,"

Yes, the Hilbert space is the same, but you have to choose the isomorphism $\phi : \; \; (\mathbb{C}^2)^{\otimes 2} \simeq \mathbb{C}^4$. But the different setup will mean some unitaries that are easy to implement in one setup will be hard in the other. For example, as a 2-qubit gate, something like $\sigma_z \otimes 1$ will be easy. But if you write that as a 4-by-4 unitary through that isomorphism $\phi$ instead, it might not be as easy to implement. You should specify both the Hilbert space and the easy operations that you wish to write your program in terms of.
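A quick numerical illustration of the point (a numpy sketch): under the isomorphism $\phi$, the ""easy"" two-qubit gate $\sigma_z \otimes 1$ becomes just one particular diagonal $4\times 4$ unitary, with no distinguished status from the qudit's point of view.

```python
import numpy as np

# The "easy" two-qubit gate sigma_z (x) 1, written as a 4x4 qudit unitary
# via the isomorphism C^2 (x) C^2 ~ C^4:
sz = np.diag([1, -1])
I2 = np.eye(2)
U = np.kron(sz, I2)

# U equals diag(1, 1, -1, -1): natural for two qubits, but on a d=4 qudit
# it is just one of many diagonal unitaries with no special status.
assert np.allclose(U, np.diag([1, 1, -1, -1]))
```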

+",434,,,,,4/22/2018 3:30,,,,1,,,,CC BY-SA 3.0 +1865,2,,1860,4/22/2018 7:46,,10,,"

For qubits, we usually base all of our operators on the Pauli matrices. Our basic gate set consists of the Pauli matrices themselves, Clifford gates like $H$ and $S$ that map between Pauli matrices, controlled operations like the CNOT that implement a Pauli on one qubit depending on the Pauli eigenstate of another, etc.

+ +

For any larger $d$-dimensional quantum system, we have to find the basic set of operators that will play the same role.

+ +

One approach is to generalize the Pauli matrices. We choose a group whose order is $d$, and define operators based in that group. This is my go-to text on how to do this, though it is actually focused more on generalizing stabilizer codes.
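As a sketch of what these generalized Paulis look like for $d=4$ (the standard shift-and-clock construction, with $X|j\rangle = |j+1 \bmod d\rangle$ and $Z|j\rangle = \omega^j|j\rangle$):

```python
import numpy as np

d = 4
omega = np.exp(2j * np.pi / d)
X = np.roll(np.eye(d), 1, axis=0)        # shift: X|j> = |j+1 mod d>
Z = np.diag(omega ** np.arange(d))       # clock: Z|j> = omega^j |j>

# The defining commutation relation of the generalized Pauli operators:
assert np.allclose(Z @ X, omega * (X @ Z))
# Both have order d, like the qubit Paulis have order 2:
assert np.allclose(np.linalg.matrix_power(X, d), np.eye(d))
```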

+ +

We could also look to the spin operators for inspiration. The Pauli matrices describe a spin-$1/2$ system. So for higher dimensional systems, we could look at the operators for higher spin. They don’t have the same sort of nice properties, though. So this doesn’t seem to be a popular approach.

+ +

Either way, the Hilbert space is the same and universal QC based on them is the same thing. The only difference is our basic gate set. So the numbers of gates required for a given task may have a difference in terms of constants and coefficients. And the maths might be nicer for one than the other. But the complexity will be the same.

+",409,,409,,4/22/2018 15:59,4/22/2018 15:59,,,,0,,,,CC BY-SA 3.0 +1866,2,,1858,4/22/2018 8:42,,5,,"

First of all, the two conditions for fault tolerant measurements are:

+ +
    +
  1. A single error gives no more than one error per block of qubits
  2. +
  3. The measurement result needs to be correct with probability $1-\mathcal O\left(p^2\right)$
  4. +
+ +
+ +

The preparation step creates the state $\frac{1}{\sqrt{2}}\left(\left|000\right>+\left|111\right>\right)$ (the three qubit 'cat state', also known as the three qubit GHZ state).

+ +

The 'ancilla verification' step is then applying two CNOTs on extra qubits (starting in the state $\left|0\right>$) to check the parities $Z_iZ_j$ to see if a bit flip has occurred.

+ +

So, assuming no errors, checking $Z_2Z_3$:

+ +
    +
  • The state after preparation, before verification is $\frac{1}{\sqrt{2}}\left(\left|000\right>+\left|111\right>\right)\left|0\right>$
  • +
  • After the first CNOT on the extra qubit: $\frac{1}{\sqrt{2}}\left(\left|000\right>\left|0\right>+\left|111\right>\left|1\right>\right)$
  • +
  • After the second CNOT: $\frac{1}{\sqrt{2}}\left(\left|000\right>+\left|111\right>\right)\left|0\right>$
  • +
  • The extra qubit is now measured and returns $0$, showing no bit flip has occurred. There is a possibility, with probability $\mathcal O\left(p^2\right)$ of two bit flips occurring, where the probability of a single error is $p$.
  • +
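The four steps above can be checked directly with a small state-vector simulation (a numpy sketch; qubit ordering is qubit 1 first, extra qubit last):

```python
import numpy as np

def cnot(state, ctrl, targ, n=4):
    """Apply CNOT(ctrl -> targ) to an n-qubit state vector."""
    out = np.zeros_like(state)
    for i, a in enumerate(state):
        bits = [(i >> (n - 1 - k)) & 1 for k in range(n)]
        if bits[ctrl]:
            bits[targ] ^= 1
        j = sum(b << (n - 1 - k) for k, b in enumerate(bits))
        out[j] += a
    return out

# Cat state (|000> + |111>)/sqrt(2), extra qubit in |0>.
state = np.zeros(16)
state[0b0000] = state[0b1110] = 1 / np.sqrt(2)
state = cnot(state, 1, 3)     # CNOT from qubit 2 onto the extra qubit
state = cnot(state, 2, 3)     # CNOT from qubit 3 onto the extra qubit

# All amplitude sits on extra-qubit = 0 states, so the measurement returns 0.
assert np.allclose(state[[0b0000, 0b1110]], 1 / np.sqrt(2))
```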
+ +

Instead of the above, let's have a $Z$ error after the first CNOT. The state after this error (and before the second CNOT) is $\frac{1}{\sqrt{2}}\left(\left|000\right>\left|0\right>-\left|111\right>\left|1\right>\right)$. Applying the second CNOT then gives $\frac{1}{\sqrt{2}}\left(\left|000\right>-\left|111\right>\right)\left|0\right>$, which is the same as before, only now with a $Z$ error on the second ancilla qubit.

+ +

A controlled-$M$ operation is then performed on the encoded data (in the state $\left|\psi\right> = \alpha\left|0_L\right> + \beta\left|1_L\right>$), putting the system in the state $\frac{1}{\sqrt{2}}I_1Z_2I_3\left(\left|000\right>\left|\psi\right>+\left|111\right>M\left|\psi\right>\right) = \frac{1}{\sqrt{2}}\left(\left|000\right>\left|\psi\right>-\left|111\right>M\left|\psi\right>\right)$.

+ +

The decoding operation (C-NOT-NOT, followed by $H_1$ on the ancilla qubits) is then performed. The decoding C-NOT-NOT gives the state $$\frac{1}{\sqrt{2}}\left(\left|000\right>\left|\psi\right>-\left|100\right>M\left|\psi\right>\right) = \frac{1}{\sqrt{2}}Z_1Z_2I_3\left(\left|000\right>\left|\psi\right>+\left|100\right>M\left|\psi\right>\right)$$ and the $Z$ error is now on the first qubit, directly affecting the measurement result.

+ +

While it isn't necessarily clear that this is the case, it can be shown that $Z$ errors propagate 'backwards' through CNOT gates by starting with the state $\left|++\right> = \frac{1}{2}\left(\left|0\right>+\left|1\right>\right)\left(\left|0\right>+\left|1\right>\right)$ and applying a CNOT, which returns the same state. However, with a $Z$ error on the second (target) qubit before the CNOT, the circuit instead returns the state $\left|--\right> = ZZ\left|++\right>$, showing the $Z$ error has propagated backwards.
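This back-propagation can be verified directly in a few lines (a numpy sketch):

```python
import numpy as np

Z = np.diag([1, -1])
I2 = np.eye(2)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.eye(4)[[0, 1, 3, 2]]           # control = first qubit

plus = H @ np.array([1.0, 0.0])
pp = np.kron(plus, plus)                  # |++>

# CNOT leaves |++> unchanged...
assert np.allclose(CNOT @ pp, pp)

# ...but a Z error on the target before the CNOT emerges as Z on BOTH qubits:
with_err = CNOT @ (np.kron(I2, Z) @ pp)
assert np.allclose(with_err, np.kron(Z, Z) @ pp)   # i.e. the state |-->
```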

+ +

In this sense, a single $Z$ error on an extra qubit eventually ends up causing $Z$ errors on multiple ancilla qubits (at a point where the $Z$ errors don't cancel) before the measurement. This part of the procedure is therefore not fault tolerant: an error on multiple qubits happens with probability $p$, and the probability of a successful measurement outcome is only $1-p$, so the whole process has to be repeated multiple times to achieve a better measurement outcome.

+",23,,23,,4/22/2018 9:09,4/22/2018 9:09,,,,3,,,,CC BY-SA 3.0 +1867,1,1927,,4/22/2018 9:14,,7,160,"

This question is inspired by ""What is the difference between a qudit system with d=4 and a two-qubit system?"", as an experimental follow-up.

+ +

Consider for illustration these two particular cases:

+ + + +

In general I'm referring to experimental cases where in practice there is an always-on-but-sometimes-weak coupling between two two-state systems, producing a ground quadruplet.

+ +

My question is: in experiments such as these, are 2·qubit and d=4 qudit (a) strictly distinguishable beasts, or (b) theoretical idealizations which are more or less adequate depending on practical considerations?

+",1847,,2224,,06-11-2019 01:44,06-11-2019 01:44,"In qubit/qudit terms, where is the experimental limit between S=3/2 and 2·S=1/2?",,1,3,,,,CC BY-SA 4.0 +1868,2,,1716,4/22/2018 15:03,,1,,"

Miniaturizing switching or transistor technology needs to go hand in hand with quantum (transport and computing) theory. I think that many challenging aspects (like, for instance, isotopically purifying silicon and donor implantation for single atom transistors) are classical material science problems. There is a recent review article on the strategies to develop beyond current CMOS transistor technology:

+ +

Beyond CMOS computing with spin and polarization

+ +

Moreover, there is a lot of research on building qubits transistor-like:

+ +

A CMOS spin qubit

+ +

And a comment on recent advances in silicon quantum technology which is important for quantum computing applications (CNOT gate, strong coupling) from this year 2018:

+ +

Toward a silicon-based quantum computer

+",2094,,,,,4/22/2018 15:03,,,,1,,,,CC BY-SA 3.0 +1869,1,,,4/22/2018 15:26,,4,232,"

$\newcommand{\qr}[1]{|#1\rangle}$Question. Can you check whether this is correct? Also, given the analysis below, what are the domain and co-domain of $f(\qr{x})$? I think it is $f : V^4 \to W^4$ because

+ +

$$\qr{00} = \qr{0}\otimes\qr{0} = \left[\begin{matrix}1\\0\\0\\0\end{matrix}\right].$$

+ +

Analysis. Let

+ +

$$\qr{x} = a_{00}\qr{00} + a_{01}\qr{01} + a_{10}\qr{10} + a_{11}\qr{11}$$

+ +

and let $U_f$ implement some constant or balanced function $f(x)$. Then I claim the following holds, given the linear nature of $U_f$:
\begin{align*}
 f(\qr{x}) &= U_f\qr{x}\\
 &= U_f(a_{00}\qr{00} + a_{01}\qr{01} + a_{10}\qr{10} + a_{11}\qr{11})\\
 &= U_f(a_{00}\qr{00}) + U_f(a_{01}\qr{01}) + U_f(a_{10}\qr{10}) + U_f(a_{11}\qr{11})\\
 &= a_{00}U_f(\qr{00}) + a_{01}U_f(\qr{01}) + a_{10}U_f(\qr{10}) + a_{11}U_f(\qr{11}),
\end{align*}

+ +

Now if $f$ is constant --- with $f(\qr{x}) = \qr{00}$ for all $\qr{x}$ ---, then we would have

+ +

\begin{align*}
 f(\qr{x}) &= a_{00}U_f(\qr{00}) + a_{01}U_f(\qr{01}) + a_{10}U_f(\qr{10}) + a_{11}U_f(\qr{11})\\
 &= a_{00}\qr{00} + a_{01}\qr{00} + a_{10}\qr{00} + a_{11}\qr{00}\\
 &= (a_{00} + a_{01} + a_{10} + a_{11})\qr{00}.
\end{align*}

+ +

If, on the other hand, $f$ is balanced then with probability $1/2$, we have
\begin{align*}
 f(\qr{x}) &= a_{00}U_f(\qr{00}) + a_{01}U_f(\qr{01}) + a_{10}U_f(\qr{10}) + a_{11}U_f(\qr{11})\\
 &= (a_{00} + a_{01} + a_{10} + a_{11})\qr{00}
\end{align*}
and with probability $1/2$ as well

+ +

\begin{align*}
 f(\qr{x}) &= a_{00}U_f(\qr{00}) + a_{01}U_f(\qr{01}) + a_{10}U_f(\qr{10}) + a_{11}U_f(\qr{11})\\
 &= (a_{00} + a_{01} + a_{10} + a_{11})\qr{11}.
\end{align*}

+",1589,,26,,12/23/2018 13:59,12/23/2018 13:59,What can I deduce about $f(x)$ if $f$ is balanced or constant?,,2,8,,,,CC BY-SA 3.0 +1870,1,,,4/22/2018 15:43,,16,6146,"

In the Wikipedia article about Bell states it is written:

+ +
+

Independent measurements made on two qubits that are entangled in Bell states positively correlate perfectly, if each qubit is measured in the relevant basis.

+
+ +

What does it even mean to perform a measurement in a certain basis?

+ +

You can answer by using the example of the Bell states of the Wikipedia article.

+",,user72,55,,2/14/2021 18:51,2/14/2021 18:51,"What does ""measurement in a certain basis"" mean?",,3,0,,,,CC BY-SA 3.0 +1871,2,,1870,4/22/2018 16:12,,7,,"

Qubits are essentially quantum objects from which you can extract a bit. But there are different ways that this can be done, and the answer you get depends on the measurement you choose.

+ +

If your qubit is an electron spin, the measurement basis corresponds to measuring spin in a particular direction. We use that picture more generally in the form of the Bloch sphere. Measurement in this case corresponds to taking a pair of opposing points on the sphere and making the qubit choose between them. Each possible pair of opposing points is referred to as a different measurement basis.

+ +

Often with qubits, practical reasons in implementation mean that we can only actually measure in a single basis, known as the $Z$ or computational basis. To simulate the others we can precede our measurement with a certain single qubit rotation. The rotation we choose determines the basis we end up measuring in.

+ +

For a given Bell state, and for a given measurement basis on one of the qubits, there exists a measurement basis for the second with which results will be perfectly correlated. This seems to be what the article is getting at.

+ +

For the Bell states, measuring both qubits in the $Z$ basis will either end up with perfect correlation or perfect anti-correlation, depending on which Bell state it is. If you get an anti-correlation, you can change basis on one qubit by applying an $X$ rotation before the measurement. This new basis will give perfect correlations with the $Z$ basis measurement results for the other qubit.
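As a concrete check, here is a minimal NumPy sketch (my own illustration, not part of the original explanation; the state vectors and sampling helper are illustrative choices) that samples $Z$-basis measurements on two Bell states:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two-qubit basis order: |00>, |01>, |10>, |11>
phi_plus = np.array([1, 0, 0, 1]) / np.sqrt(2)   # (|00> + |11>)/sqrt(2)
psi_plus = np.array([0, 1, 1, 0]) / np.sqrt(2)   # (|01> + |10>)/sqrt(2)

def sample_z_basis(state, shots, rng):
    """Sample joint Z-basis outcomes (b1, b2) from a two-qubit state."""
    probs = np.abs(state) ** 2
    outcomes = rng.choice(4, size=shots, p=probs)
    return outcomes // 2, outcomes % 2           # first and second qubit bits

b1, b2 = sample_z_basis(phi_plus, 1000, rng)
print(np.all(b1 == b2))      # perfect correlation -> True

b1, b2 = sample_z_basis(psi_plus, 1000, rng)
print(np.all(b1 != b2))      # perfect anti-correlation -> True

# Applying X to the second qubit maps one Bell state to the other,
# which is why the basis change above restores perfect correlation:
X2 = np.kron(np.eye(2), np.array([[0, 1], [1, 0]]))
print(np.allclose(X2 @ psi_plus, phi_plus))      # True
```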

+",409,,409,,4/22/2018 16:54,4/22/2018 16:54,,,,0,,,,CC BY-SA 3.0 +1872,2,,1870,4/22/2018 16:37,,6,,"

If you think of an electron spin $S=1/2$, imagine measuring it on the z-axis to obtain $S_z=+1/2$ (or $S_z=-1/2$). This (the z projection of the spin magnetic moment) is a possible basis for the measurement. Or you could measure the spin on the x-axis, and then you will obtain $S_x=+1/2$ (or $S_x=-1/2$). This is a different basis.

+ +

The measurements on Bell pairs will correlate with each other when measured in the same basis (if you measure one particle on z and the other on x, the results will be perfectly uncorrelated; if you measure both on z or both on x, the results will be perfectly correlated).

+ +

Other examples of measurement bases would be polarization with photons: vertical vs horizontal is the linear polarization basis, whereas clockwise vs anticlockwise is the circular polarization basis.

+",1847,,,,,4/22/2018 16:37,,,,1,,,,CC BY-SA 3.0 +1875,1,1876,,4/22/2018 17:24,,16,1034,"

I am currently reading ""Quantum Computation and Quantum Information"" by Nielsen and Chuang. In the section about Quantum Simulation, they give an illustrative example (section 4.7.3), which I don't quite understand:

+ +
+

Suppose we have the Hamiltonian + $$ H = Z_1 ⊗ Z_2 ⊗ \cdots ⊗ Z_n,\tag{4.113}$$ + which acts on an $n$ qubit system. Despite this being an interaction involving all of the system, indeed, it can be simulated efficiently. What we desire is a simple quantum circuit which implements $e^{-iH\Delta t}$, for arbitrary values of $\Delta t$. A circuit doing precisely this, for $n = 3$, is shown in Figure 4.19. The main insight is that although the Hamiltonian involves all the qubits in the system, it does so in a classical manner: the phase shift applied to the system is $e^{-i\Delta t}$ if the parity of the $n$ qubits in the computational basis is even; otherwise, the phase shift should be $e^{i\Delta t}$. Thus, simple simulation of $H$ is possible by first classically computing the parity (storing the result in an ancilla qubit), then applying the appropriate phase shift conditioned on the parity, then uncomputing the parity (to erase the ancilla).

+ +

+ Furthermore, extending the same procedure allows us to simulate more complicated extended Hamiltonians. Specifically, we can efficiently simulate any Hamiltonian of the form $$H = \bigotimes_{k=1}^n\sigma_{c\left(k\right)}^k,$$ where $\sigma_{c(k)}^k$ is a Pauli matrix (or the identity) acting on the $k$th qubit, with $c(k) \in \{0, 1, 2, 3\}$ specifying one of $\{I, X, Y, Z\}$. The qubits upon which the identity operation is performed can be disregarded, and $X$ or $Y$ terms can be transformed by single qubit gates to $Z$ operations. This leaves us with Hamiltonian of the form of (4.113), which is simulated as described above.

+
+ +

How can we obtain gate $e^{-i\Delta t Z}$ from elementary gates (for example from Toffoli gates)?

+",2098,,23,,07-04-2019 10:59,07-04-2019 10:59,Obtaining gate $e^{-i\Delta t Z}$ from elementary gates,,1,2,,,,CC BY-SA 4.0 +1876,2,,1875,4/22/2018 17:42,,9,,"

One way to perform Z rotations by arbitrary angles is to approximate them with a sequence of Hadamard and T gates. If you need the approximation to have maximum error $\epsilon$, there are known constructions that do this using roughly $3 \lg \frac{1}{\epsilon}$ T gates. See ""Optimal ancilla-free Clifford+T approximation of z-rotations"" by Ross et al.

+ +

The best published way to approximate arbitrary Z rotations, repeat-until-success circuits, takes a slightly more complicated approach but achieves an average of roughly $9 + 1.2 \lg \frac{1}{\epsilon}$ T gates.
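For reference, the target rotation itself is easy to write down exactly. A small NumPy sketch (my own illustration, not part of the cited constructions) checks that $e^{-i\Delta t Z} = \cos(\Delta t)I - i\sin(\Delta t)Z = \mathrm{diag}(e^{-i\Delta t}, e^{i\Delta t})$, and that the T gate is exactly this rotation at $\Delta t = \pi/8$ up to a global phase:

```python
import numpy as np

I2 = np.eye(2)
Z = np.diag([1.0, -1.0])

def rz(t):
    """e^{-i t Z} = cos(t) I - i sin(t) Z = diag(e^{-it}, e^{it})."""
    return np.cos(t) * I2 - 1j * np.sin(t) * Z

t = 0.37
assert np.allclose(rz(t), np.diag([np.exp(-1j * t), np.exp(1j * t)]))

# The T gate is exactly this rotation at t = pi/8, up to a global phase:
T = np.diag([1.0, np.exp(1j * np.pi / 4)])
assert np.allclose(T, np.exp(1j * np.pi / 8) * rz(np.pi / 8))
print("checks passed")
```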

+",119,,,,,4/22/2018 17:42,,,,1,,,,CC BY-SA 3.0 +1877,1,1879,,4/22/2018 18:06,,8,2750,"

I am currently reading ""Quantum Computation and Quantum Information"" by Nielsen and Chuang. In the section about Quantum Simulation, they give an illustrative example (section 4.7.3), which I don't quite understand:

+ +
+

Suppose we have the Hamiltonian + $$ H = Z_1 ⊗ Z_2 ⊗ \cdots ⊗ Z_n, \tag{4.113}$$ + which acts on an $n$ qubit system. Despite this being an interaction involving all of the system, indeed, it can be simulated efficiently. What we desire is a simple quantum circuit which implements $e^{-iH\Delta t}$, for arbitrary values of $\Delta t$. A circuit doing precisely this, for $n = 3$, is shown in Figure 4.19. The main insight is that although the Hamiltonian involves all the qubits in the system, it does so in a classical manner: the phase shift applied to the system is $e^{-i\Delta t}$ if the parity of the $n$ qubits in the computational basis is even; otherwise, the phase shift should be $e^{i\Delta t}$. Thus, simple simulation of $H$ is possible by first classically computing the parity (storing the result in an ancilla qubit), then applying the appropriate phase shift conditioned on the parity, then uncomputing the parity (to erase the ancilla). + Furthermore, extending the same procedure allows us to simulate more complicated extended Hamiltonians. Specifically, we can efficiently simulate any Hamiltonian of the form $$H = \bigotimes_{k=1}^n\sigma_{c\left(k\right)}^k,$$ where $\sigma_{c(k)}^k$ is a Pauli matrix (or the identity) acting on the $k$th qubit, with $c(k) \in \{0, 1, 2, 3\}$ specifying one of $\{I, X, Y, Z\}$. The qubits upon which the identity operation is performed can be disregarded, and $X$ or $Y$ terms can be transformed by single qubit gates to $Z$ operations. This leaves us with Hamiltonian of the form of (4.113), which is simulated as described above.

+
+ +

What do we mean by the parity of the qubits here? Is it the number of qubits in the state $\lvert 1 \rangle$, and can it be even or odd?

+",2098,,55,,4/23/2018 8:09,4/23/2018 8:09,What do we mean by parity of qubits?,,2,0,,,,CC BY-SA 3.0 +1878,2,,1877,4/22/2018 20:29,,0,,"

Yes. This is just checked on the computational basis of all $n$-bit strings, where you can see it does the right thing. The operator acts by $e^{\pm i \Delta t}$ according to the parity of each basis vector, so by linearity it is the circuit you wanted.
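A quick numerical check of this claim (my own sketch): for $n=3$, $e^{-i\Delta t\, Z\otimes Z\otimes Z}$ is diagonal, with phase $e^{-i\Delta t}$ on even-parity basis states and $e^{+i\Delta t}$ on odd-parity ones.

```python
import numpy as np

dt = 0.5
Z = np.diag([1.0, -1.0])
H = np.kron(np.kron(Z, Z), Z)          # Z (x) Z (x) Z, n = 3

# H is diagonal, so e^{-i H dt} is just a phase on each basis state.
U = np.diag(np.exp(-1j * dt * np.diag(H)))

for x in range(8):
    parity = bin(x).count("1") % 2
    expected = np.exp(-1j * dt) if parity == 0 else np.exp(1j * dt)
    assert np.isclose(U[x, x], expected)
print("even parity -> e^{-i dt}, odd parity -> e^{+i dt}")
```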

+",434,,,,,4/22/2018 20:29,,,,0,,,,CC BY-SA 3.0 +1879,2,,1877,4/22/2018 20:32,,5,,"

In the computational $\left(Z\right)$ basis, the parity of a (classical) bit string is $0$ if the number of $1$s in the string is even (i.e. 'even parity'), or $1$ if the number of $1$s in the string is odd (i.e. 'odd parity').

+ +

The parity can be measured by applying CNOT gates from each qubit that you want to measure (the control qubits) to an ancilla qubit (the target, initially in state $\left|0\right>$). Measuring the parity of a (classical) input state $\left|x_1x_2\ldots x_n\right>$, gives the output of the ancilla as $\left|\bigoplus_{k=1}^nx_k\right>$, which is $\left|0\right>$ for even parity and $\left|1\right>$ for odd parity, as above.

+ +

The same process can be applied to quantum input states. As an example, calculating the parity of $\frac {1}{\sqrt{2}}\left(\left|00\right>+\left|11\right>\right)$, applying the CNOT gates gives the state of the overall system (including ancilla) as $\frac {1}{\sqrt{2}}\left(\left|00\right>+\left|11\right>\right)\left|0\right>$, which returns $0$, showing the input qubits have even parity. The converse of this is taking the input state as $\frac {1}{\sqrt{2}}\left(\left|01\right>+\left|10\right>\right)$, which gives the total state, after CNOTs, as $\frac {1}{\sqrt{2}}\left(\left|01\right>+\left|10\right>\right)\left|1\right>$, showing the input state has odd parity.

+ +

This shows that the parity of a quantum state is analogous to the parity of a classical state.
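The above can be simulated directly in a few lines (my own sketch; the `cnot` helper and the qubit ordering are illustrative choices): two CNOTs from the data qubits into an ancilla write the parity into the ancilla, and leave an even-parity entangled state untouched.

```python
import numpy as np

def cnot(n, control, target):
    """CNOT on n qubits; qubit 0 is the leftmost bit of the basis label."""
    dim = 2 ** n
    U = np.zeros((dim, dim))
    for x in range(dim):
        bits = [(x >> (n - 1 - k)) & 1 for k in range(n)]
        if bits[control]:
            bits[target] ^= 1
        y = sum(b << (n - 1 - k) for k, b in enumerate(bits))
        U[y, x] = 1.0
    return U

# Two data qubits plus one ancilla (qubit 2), ancilla initially |0>.
parity_circuit = cnot(3, 1, 2) @ cnot(3, 0, 2)

for x1 in (0, 1):
    for x2 in (0, 1):
        state = np.zeros(8)
        state[(x1 << 2) | (x2 << 1)] = 1.0       # |x1 x2 0>
        ancilla = np.argmax(parity_circuit @ state) & 1
        assert ancilla == (x1 ^ x2)              # ancilla reads out the parity

# An even-parity entangled input is left untouched (ancilla stays |0>):
bell = np.zeros(8)
bell[0b000] = bell[0b110] = 1 / np.sqrt(2)       # (|00> + |11>) |0>
assert np.allclose(parity_circuit @ bell, bell)
print("parity extracted correctly")
```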

+",23,,,,,4/22/2018 20:32,,,,0,,,,CC BY-SA 3.0 +1880,1,,,4/23/2018 5:54,,5,111,"

Nitrogen-Vacancy centers (NVs) have astonishing quantum properties, which make them interesting as potential hardware both for quantum computing in particular and for quantum technologies in general. In part this results from the center being protected by the diamond structure, which is at the same time very rigid and practically free from nuclear spins.

+ +

However, their properties change in impractical ways with their proximity to the surface:

+ +
    +
  • The closer they are to the surface, the better they interact with whatever is just beyond the surface. This is very important for quantum metrology but also in general for input/output in a quantum computing context, see for example A molecular quantum spin network controlled by a single qubit (arXiv version).
  • +
  • However, the closer they are to the surface, the more they are affected by all kinds of noise also just beyond the surface. This results from the fact that while the bulk is perfect diamond, the surface is full of defects/impurities/rubbish.
  • +
+ +

My question is practical and is about cleaning and/or chemically modifying the diamond surface in order to passivate it: up to which point has this been experimentally demonstrated? What is the current state of the art, how much can quantum coherence on NV centers be improved by a detailed control of the diamond surface?

+",1847,,55,,09-04-2020 09:31,09-04-2020 09:31,Passive improving of nanodiamond surfaces for NV centers?,,1,0,,,,CC BY-SA 3.0 +1881,1,,,4/23/2018 7:40,,8,206,"

In Kaye, Laflamme and Mosca (2007) pg106 they write the following (in the context of Simon's algorithm):

+ +
+

...where $S=\{\mathbf{0},\mathbf{s}\}$ is a $2$-dimensional vector space spanned by $\mathbf{s}$.

+
+ +

this is not the only place I have seen this vector space referred to as ""2-dimensional"". But surely the fact that it is only spanned by one vector, $\mathbf{s}$, means (by definition) that it is only ""1-dimensional""?

+ +

Am I missing something here or is the use of the term ""dimension"" different in this area?

+ +

More Context

+ +

As mentioned above the context is Simon's Algorithm. I.e. there exists an oracle $f:\{0,1\}^n\rightarrow \{0,1\}^n$ such that $f(x)=f(y)$ if and only if $x=y\oplus \mathbf{s}$ where $\mathbf{s}\in \{0,1\}^n$ and $\oplus$ is addition in $\Bbb{Z}_2^n$ (i.e. bit-wise). The aim of the algorithm is to find $\mathbf{s}$.

+ +

After applying a relevant circuit the output is a uniform distribution of $\mathbf{z}\in \{0,1\}^n$ such that $\mathbf{z}\cdot\mathbf{s}=z_1s_1+z_2s_2\cdots+ z_ns_n=0$. The statement I have quoted above is referring to the fact that since $\mathbf{0}$ and $\mathbf{s}$ are both solutions to this problem, you only need $n-1$ linearly independent vectors $\mathbf{z}$ to find $\mathbf{s}$.
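For what it's worth, the classical post-processing this paragraph alludes to can be sketched as follows (my own illustration; the `null_space_gf2` helper and the toy samples are assumptions, not from the book): collect vectors $\mathbf{z}$ with $\mathbf{z}\cdot\mathbf{s}=0$ and solve the homogeneous system over $\Bbb{Z}_2$; the resulting null space is exactly the set $\{\mathbf{0},\mathbf{s}\}$.

```python
import numpy as np

def null_space_gf2(Z):
    """Basis for the null space of Z over GF(2), via Gaussian elimination."""
    Z = (np.array(Z) % 2).astype(int)
    m, n = Z.shape
    pivots, row = [], 0
    for col in range(n):
        sel = next((r for r in range(row, m) if Z[r, col]), None)
        if sel is None:
            continue
        Z[[row, sel]] = Z[[sel, row]]
        for r in range(m):
            if r != row and Z[r, col]:
                Z[r] ^= Z[row]          # XOR = addition mod 2
        pivots.append(col)
        row += 1
    basis = []
    for col in range(n):
        if col in pivots:
            continue
        v = np.zeros(n, dtype=int)
        v[col] = 1
        for r, p in enumerate(pivots):
            v[p] = Z[r, col]
        basis.append(v)
    return basis

# Toy run with hidden string s = 110: every sampled z satisfies z.s = 0 (mod 2).
samples = [[0, 0, 1], [1, 1, 0], [1, 1, 1]]
for v in null_space_gf2(samples):
    print(v)    # [1 1 0] -- the only nonzero solution is s itself
```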

+ +

Edit

+ +

The term is also used in the same context at the end of Pg 4 of this pdf (Wayback Machine version).

+",2015,,124,,09-07-2018 15:11,09-07-2018 15:11,"Use of the term ""dimension"" in this description of Simon's algorithm?",,2,4,,,,CC BY-SA 4.0 +1882,2,,1869,4/23/2018 8:17,,2,,"

$f$ being balanced does not mean that it gives one output with probability $0.5$ and the other output with probability $0.5$. Instead, it means that half of the inputs are sent to one output and the other half to a different output.

+ +

I don't know what you are referring to in the notes you link, but $f$ is there defined as $f:\{0,1\}^n\mapsto\{0,1\}$, and if it is balanced then it will act for example as $f(00)=f(01)=0$, $f(10)=f(11)=1$. This is totally different from what you wrote, as you will get something like
$$|x\rangle\mapsto|f(x)\rangle=(a_{00}+a_{01})|0\rangle+(a_{10}+a_{11})|1\rangle.$$
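To make the grouping explicit, here is a tiny sketch (my own; it just transcribes the linearity argument above and ignores that the map $|x\rangle\mapsto|f(x)\rangle$ is not itself unitary):

```python
import numpy as np

# The balanced example from above: f(00) = f(01) = 0, f(10) = f(11) = 1.
f = {(0, 0): 0, (0, 1): 0, (1, 0): 1, (1, 1): 1}

# Amplitudes a_x of the input state |x> (any choice works).
a = {(0, 0): 0.5, (0, 1): 0.5, (1, 0): 0.5, (1, 1): 0.5}

# By linearity, the image amplitudes simply group by the value of f(x):
out = np.zeros(2)
for x, amp in a.items():
    out[f[x]] += amp
print(out)   # [1. 1.] : amplitude a00 + a01 on |0>, a10 + a11 on |1>
```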

+",55,,,,,4/23/2018 8:17,,,,3,,,,CC BY-SA 3.0 +1883,2,,1861,4/23/2018 8:36,,2,,"

The whole point is that you do not want, nor need, to ""look"" at how the computation is going.

+ +

You can ensure that the input is what it should be by a variety of means. The simplest case being that you may simply trust that your apparatus, which you previously characterized very well, will produce what you ask it to produce.

+ +

After that, you will know whether the input will evolve into a superposition of different states simply because you generally know what the evolution of the system looks like. Again, because you know what your computer is going to do: you built it for that purpose. You will know that (for example) a specific sequence of entangling gates will be performed on the input, so that the state will evolve into a given superposition of states in the computational basis.

+ +

The crucial point here is that you do not, and cannot, look at the state during the computation. In other words, you have to think of the whole quantum algorithm/circuit as a black box: you built it so you (more or less) trust what it does, but after that you can just put an input and look at the resulting output.

+ +

Looking at the output does indeed destroy the coherence of the state (that is, roughly speaking, its being in a superposition of different states), but this is not a problem because quantum algorithms are designed in such a way that the measurements performed on the output give the answer to the problem.

+",55,,,,,4/23/2018 8:36,,,,0,,,,CC BY-SA 3.0 +1884,2,,1860,4/23/2018 8:44,,1,,"

The only difference between a ""pair of qubits"" and a single ""four-dimensional qudit"" is that when you say you have ""two qubits"" you are implicitly making some assumptions on the kind of operations you can perform on it.

+ +

In particular, it only makes sense to talk of two qubits if they can be treated as two different systems, or, in other words, if it is possible to act locally on them. Similarly, the kinds of operations that one can assume to be able to perform on two qubits are different than those on qudits.

+ +

From a practical point of view, the difference is that one tends to consider different operations as ""easily available"" when talking of sets of qubits rather than (sets of) qudits.

+",55,,55,,09-11-2018 10:02,09-11-2018 10:02,,,,0,,,,CC BY-SA 4.0 +1885,1,1888,,4/23/2018 10:17,,52,17194,"

Preskill recently introduced this term, see for example Quantum Computing in the NISQ era and beyond (arXiv). I think the term (and the concept behind it) is of sufficient importance that it deserves to be explained here in a pedagogical manner. Probably it actually merits more than one question, but the first one needs to be:

+ +

What are Noisy Intermediate-Scale Quantum (NISQ) technologies?

+",1847,,26,,01-01-2019 10:15,02-11-2020 14:47,"What is meant by ""Noisy Intermediate-Scale Quantum"" (NISQ) technology?",,1,0,,,,CC BY-SA 3.0 +1886,2,,1870,4/23/2018 11:20,,2,,"
+

What does it even mean to perform a measurement in a certain basis?

+
+ +

It is very close to a measurement of a certain observable. In quantum mechanics, when we talk about measuring an observable, we usually are primarily interested in an eigenvalue as an outcome of the measurement. In quantum information, we don't care about the eigenvalues; we are solely interested in a state after the measurement, and this state can be interpreted as an eigenvector of an observable being measured.

+ +

Mathematically, for any ""measurement in a certain basis"" there exist many observables that correspond to the same measurement (not all of them have physical meaning); all these observables have the same eigenvectors (which form the measurement basis) but may differ in eigenvalues. Eigenvalues don't matter provided they are different, so the measurement distinguishes between the eigenvectors (measurement basis states).

+",2105,,,,,4/23/2018 11:20,,,,0,,,,CC BY-SA 3.0 +1887,1,1936,,4/23/2018 13:05,,5,138,"

My question is a continuation from the previous question: Is Quantum Biocomputing ahead of us?. Considering that there exist many biological processes with a quantum nature present (photosynthesis, the electron transport chain, etc.), and since these biological quantum processes are quite optimal in terms of efficiency and yield:

+ +

I am wondering if these could be mimicked by engineers/scientists to improve the actual possibilities in quantum computation. Let me explain my question a bit more deeply: Nature is extremely efficient. Intracellular processes especially are well optimized in both time and space; this, together with their high yield, can be studied in order to check the properties/structural features that contribute to such efficiency. Please note that I am not suggesting or asking about the possibility of using biological molecules or processes directly, but about trying to extract what makes them ultra-efficient and using such knowledge in the design of particular architectures. My question is related to this: is that an actual field of research? Are we now dealing with 'natural quantum process imitation'?

+",1955,,26,,12/13/2018 19:48,12/13/2018 19:48,Can biological quantum processes be used to guide optimized quantum algorithms?,,1,0,,,,CC BY-SA 3.0 +1888,2,,1885,4/23/2018 13:05,,47,,"

When we talk about quantum computers, we usually mean fault-tolerant devices. These will be able to run Shor's algorithm for factoring, as well as all the other algorithms that have been developed over the years. But the power comes at a cost: to solve a factoring problem that is not feasible for a classical computer, we will require millions of qubits. This overhead is required for error correction, since most algorithms we know are extremely sensitive to noise.

+ +

Even so, programs run on devices beyond 50 qubits in size quickly become extremely difficult to simulate on classical computers. This opens the possibility that devices of this sort of size might be used to perform the first demonstration of a quantum computer doing something that is infeasible for a classical one. It will likely be a highly abstract task, and not useful for any practical purpose, but it will nevertheless be a proof-of-principle.

+ +

Once this is done, we'll be in a strange era. We'll know that devices can do things that classical computers can't, but they won't be big enough to provide fault-tolerant implementations of the algorithms we know about. Preskill coined the term 'Noisy Intermediate-Scale Quantum' to describe this era. Noisy because we don't have enough qubits to spare for error correction, and so we'll need to directly use the imperfect qubits at the physical layer. And 'Intermediate-Scale' because of their small (but not too small) qubit number.

+ +

So what applications might devices in NISQ era have? And how will we design the quantum software to implement them? These are questions that are far from being fully answered, and will likely require quite different techniques than those for fault-tolerant quantum computing.

+",409,,409,,8/16/2019 8:11,8/16/2019 8:11,,,,1,,,,CC BY-SA 4.0 +1889,1,,,4/23/2018 14:22,,7,345,"

The representation of bits in different technological areas:

+ +
    +
  1. Normal digital bits are mere abstractions of the underlying electric current through wires. Different standards, like CMOS or TTL, assign different thresholds to such signals: ""if the voltage goes above this level, then the bit is 1; if it goes below this level, then the bit is 0; discard in any other case"".

  2. +
  3. In genetics, we usually consider a signal as a 1 if it is ""enough"" to trigger the target response; 0 otherwise. In this scenario, the thresholding is qualitative.

  4. +
+ +

From the point of view of quantum information, qubits are also abstractions, but in practice measurements will need standards to be comparable.

+ +

Question: From the point of view of quantum engineering, is there any standard technique/method to identify their value, e.g. based on detection thresholds or fidelity verification like Bell inequality violations? Are there units for those hidden signals?

+ +

The best possible answer would probably contain specific details for different architectures (e.g. superconductors vs photons) or contexts (e.g. quantum computing vs quantum communications).

+",1894,,1847,,4/27/2018 16:51,7/13/2018 15:34,Are there measuring standards (and units) for the identification of qubits?,,1,0,,,,CC BY-SA 3.0 +1890,1,1891,,4/23/2018 14:33,,10,1557,"

In general, a qubit is mathematically represented as a quantum state of the form $\lvert \psi\rangle = \alpha \lvert 0\rangle + \beta \lvert 1\rangle$, using the basis $\{ \lvert 0\rangle, \lvert 1\rangle \}$. It seems to me that a qubit is just a term used in quantum computing and information to denote a quantum state (i.e. a vector) of a system.

+ +

Is there any fundamental difference between a qubit and a quantum state? What's more to a qubit than the quantum state it represents?

+",,user72,1847,,4/24/2018 7:34,4/24/2018 7:34,What is the difference between a qubit and a quantum state?,,1,0,,,,CC BY-SA 3.0 +1891,2,,1890,4/23/2018 15:07,,12,,"

There are a few things to distinguish here, which are often conflated by experts because we're using these terms quickly and informally to convey intuitions rather than in the way that would be most transparent to novices.

+ +
    +
  1. A ""qubit"" can refer to a small system, which has a quantum mechanical state.

    + +

    The states of a quantum mechanical system form a vector space. Most of these states can be distinguished from each other only imperfectly, in that there is a chance of mistaking one state for the other, no matter how cleverly you try to distinguish them. One may then ask the question, of a set of states, whether they are all perfectly distinguishable from one another.

    + +

    A ""qubit"" is an example of a quantum mechanical system, for which the largest number of perfectly distinguishable states is two. (There are many different sets of perfectly distinguishable states; but each such set contains only two elements.) These may be

    + +
      +
    • the polarisation of a photon ($\lvert \mathrm H \rangle$ versus $\lvert \mathrm V \rangle$, or $\lvert \circlearrowleft \rangle$ versus $\lvert \circlearrowright \rangle$);

    • +
  • or the spin of an electron ($\lvert \uparrow \rangle$ versus $\lvert \downarrow \rangle$, or $\lvert \rightarrow \rangle$ versus $\lvert \leftarrow \rangle$);

    • +
    • or two energy levels $\lvert E_1 \rangle$ and $\lvert E_2 \rangle$ of an electron in an ion, which may occupy many different energy levels but which is being controlled in such a way that the electron stays within the subspace defined by these energy levels when it isn't being acted on.

    • +
    + +

    Common to these systems is that one can describe their states in terms of two states, which we might label as $\lvert 0 \rangle$ and $\lvert 1 \rangle$, and consider the other states of the system (which are vectors in the vector space spanned by $\lvert 0 \rangle$ and $\lvert 1 \rangle$) using linear combinations taking the form $\alpha \lvert 0 \rangle + \beta \lvert 1 \rangle$, where $\lvert \alpha \rvert^2 + \lvert \beta \rvert^2 = 1$.

  2. +
  3. A ""qubit"" can also refer to the quantum mechanical state of a physical system of the sort we've described above. That is, we may call some state of the form $\alpha \lvert 0 \rangle + \beta \lvert 1 \rangle$ ""a qubit"". In this case we are not considering what physical system is storing that state; we are interested only in the form of the state.

  4. +
  5. ""A qubit"" can also refer to an amount of information which is equivalent to a state such as $\alpha \lvert 0 \rangle + \beta \lvert 1 \rangle$. For instance, if we know two states $\lvert \psi_0 \rangle$ and $\lvert \psi_1 \rangle$ of some complicated quantum system, and we have some physical system whose state $\lvert \Psi \rangle$ is in some superposition $\alpha \lvert \psi_0 \rangle + \beta \lvert \psi_1 \rangle$, then it doesn't matter how complicated the system is or whether either of the states $\lvert \psi_j \rangle$ have any entanglement: the amount of information expressed by the possible values of $\lvert \Psi \rangle$ is one qubit, because with a clever enough noiseless procedure, you could reversibly encode that complicated quantum state into the state of a (physical system) qubit. Similarly, you can have a very large quantum system which encodes $n$ qubits of information, if you could reversibly encode the state of that complicated system as the state of $n$ qubits.

  6. +
+ +

This may seem confusing, but it's no different from what we do all the time with classical computation.

+ +
    +
  • If in a C-like language I write int x = 5; you probably understand that x is an integer (an integer variable that is), which stores an integer 5 (an integer value).

  • +
  • If I then write x = 7; I don't mean that x is an integer which is equal to both 5 and 7, but rather that x is a container of sorts and that what we are doing is changing what it contains.

  • +
+ +

And so forth — these ways in which we use the term 'qubit' are just the same as how we use the term 'bit', only it so happens that we use the term for quantum states instead of for values, and for small physical systems rather than variables or registers. (Or rather: the quantum states are the values in quantum computation, and the small physical systems are the variables / registers.)
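The classical analogy can be mirrored directly in a toy sketch (my own illustration; the variable names are arbitrary): a NumPy vector plays the role of the state (the value), while the Python variable plays the role of the register (the physical system).

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

# 'psi' plays the role of the register (usage 1: the physical system);
# the vector it holds plays the role of the state (usage 2: the value).
psi = (ket0 + ket1) / np.sqrt(2)                     # holds the state |+>
alpha, beta = psi
print(np.isclose(abs(alpha)**2 + abs(beta)**2, 1))   # True: normalization

psi = ket1    # like `x = 7;` -- same register, different value
print(np.allclose(psi, [0, 1]))                      # True
```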

+",124,,,,,4/23/2018 15:07,,,,2,,,,CC BY-SA 3.0 +1892,2,,1860,4/23/2018 18:02,,2,,"

There is also a difference if you consider experiments or implementations. To make a physical qubit, I need to use a two-level quantum system. Qudits then require a more complicated quantum system, e.g., with four levels for a d=4 qudit. The engineering justification for using the more complicated system would be that you then require fewer of the four-level systems.

+",2139,,,,,4/23/2018 18:02,,,,0,,,,CC BY-SA 3.0 +1894,1,1897,,4/23/2018 20:07,,4,225,"

Let me start the question with two examples.

+ +

First, I am reading Nielsen & Chuang section ""8.3.3 Bit flip and phase flip channels"". There is a description of a quantum operation

+ +
+

$\rho \to \mathcal{E}(\rho) = P_0 \rho P_0 + P_1 \rho P_1$, where $P_0 = |0 \rangle \langle 0|$, $P_1 = |1 \rangle \langle 1|$, which corresponds to a measurement of the qubit in the $|0 \rangle$, $|1 \rangle$ basis, with the result of the measurement unknown. [Italics is mine - A.P.]

+
+ +

Second, in the edX course ""Quantum Information Science I, Part 3"" there is a question that looks like this:

+ +
+

After quantum measurement <...>, if the measurement result is known, <...>. [Italics is mine - A.P.]

+
+ +

So, I do not understand what it means for a result of a measurement to be known/unknown. Moreover, how could that knowledge or absence of knowledge further affect the quantum system once the measurement is performed? Would anything change in the examples if we replaced ""known"" with ""unknown"" and vice versa? Is there a mathematical formalism for the ""is known/unknown"" expression?

+ +

I believe, the source of my confusion comes from the Schrödinger's cat paradox solution. My understanding is that the cat is strictly alive or dead once the ""measurement"" by a detector happens, regardless of whether we know the fact (i.e., result of the ""measurement"") or not. That is a knowledge of an experimenter, and it has no relation to the ""measurement"".

+",528,,2175,,4/24/2018 7:44,4/24/2018 7:44,What does it mean for a result of a measurement to be known/unknown?,,2,0,,,,CC BY-SA 3.0 +1895,1,1902,,4/23/2018 20:55,,7,618,"

Suppose we have two states of a system where I tell you that there is a probability $p_1$ of being in state $1$, and probability $p_2$ of being in state $2$. The total state can be written as a vector in $L^1$ normed space:

+ +

$$p=\begin{pmatrix}p_1 \\ p_2 \end{pmatrix}, ||p||=p_1+p_2=1$$

+ +

If we define a transition matrix for a Markov process:

+ +

$$T=\begin{pmatrix}t_{11}&t_{12} \\ t_{21}&t_{22}\end{pmatrix}$$

+ +

Then the next state would be:

+ +

$$p'=Tp=\begin{pmatrix}t_{11}p_1+t_{12}p_2 \\ t_{21}p_1+t_{22}p_2\end{pmatrix}$$

+ +

Now my understanding of density matrices and quantum mechanics is that it should contain classical probability theory in addition to strictly quantum phenomena.

+ +

Classical probabilities in the density matrix formalism are mapped as:

+ +

$$p=\begin{pmatrix}p_1 \\ p_2 \end{pmatrix} \rightarrow \rho=\begin{pmatrix}p_1&0 \\ 0&p_2 \end{pmatrix}$$

+ +

And I want to obtain:

+ +

$$p'=\begin{pmatrix}t_{11}p_1+t_{12}p_2 \\ t_{21}p_1+t_{22}p_2\end{pmatrix} \rightarrow \rho'=\begin{pmatrix}t_{11}p_1+t_{12}p_2&0 \\ 0&t_{21}p_1+t_{22}p_2\end{pmatrix}$$

+ +

My attempt:

+ +

Define an operator $U$ such that:

+ +

$$\rho'=U\rho U^\dagger$$
$$\implies \begin{pmatrix}t_{11}p_1+t_{12}p_2&0 \\ 0&t_{21}p_1+t_{22}p_2\end{pmatrix}=\begin{pmatrix}u_{11}&u_{12} \\ u_{21}&u_{22}\end{pmatrix}\begin{pmatrix}p_1&0 \\ 0&p_2\end{pmatrix}\begin{pmatrix}u_{11}^*&u_{21}^* \\ u_{12}^*&u_{22}^*\end{pmatrix}$$

+ +

$$=\begin{pmatrix}|u_{11}|^2p_1+|u_{12}|^2p_2 & u_{11}u_{21}^*p_1+u_{12}u_{22}^*p_2 \\ u_{21}u_{11}^*p_1+u_{12}^*u_{22}p_2 & |u_{21}|^2p_1+|u_{22}|^2p_2 \end{pmatrix}$$

+ +

Evidently, $|u_{ij}|^2=t_{ij}$, but the off diagonal terms aren't easily made zero, (I've wrestled with the algebra and applied all the proper normalizations of probability theory).

+ +

What would be the correct way to apply a Markov process in the density matrix formalism? It seems really basic and something that this formalism should be able to naturally handle.

+ +

Edit: Repost of : repost

+",2160,,26,,12/23/2018 12:24,12/23/2018 12:24,Markov Chain expressed in Density Matrix formalism,,3,5,,,,CC BY-SA 3.0 +1896,2,,1895,4/23/2018 22:09,,5,,"

The most general quantum evolution is a completely positive (CP) map:
$$\rho\mapsto \mathcal E(\rho) = \sum_i M_i\rho M_i^\dagger \ .$$
Here,
$$M_1=\left(\begin{matrix}\sqrt{t_{11}}&0\\0&0\end{matrix}\right)\,,\quad M_2=\left(\begin{matrix}0&0\\\sqrt{t_{21}}&0\end{matrix}\right)\,,$$
$$M_3=\left(\begin{matrix}0&\sqrt{t_{12}}\\0&0\end{matrix}\right)\,,\quad M_4=\left(\begin{matrix}0&0\\0&\sqrt{t_{22}}\end{matrix}\right)\ .$$
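As a numerical sanity check (a numpy sketch; the particular column-stochastic $T$ and distribution $p$ below are arbitrary examples, not values from the question):

```python
import numpy as np

# Column-stochastic transition matrix (columns sum to 1) and a probability vector
T = np.array([[0.7, 0.2],
              [0.3, 0.8]])
p = np.array([0.6, 0.4])

rho = np.diag(p)  # classical distribution embedded as a density matrix

# The four Kraus operators M_1 ... M_4 from above
M = [np.array([[np.sqrt(T[0, 0]), 0], [0, 0]]),
     np.array([[0, 0], [np.sqrt(T[1, 0]), 0]]),
     np.array([[0, np.sqrt(T[0, 1])], [0, 0]]),
     np.array([[0, 0], [0, np.sqrt(T[1, 1])]])]

rho_out = sum(Mi @ rho @ Mi.conj().T for Mi in M)

# The map reproduces the Markov step p' = Tp on the diagonal
assert np.allclose(rho_out, np.diag(T @ p))
# ... and is trace preserving: sum_i M_i^dag M_i = identity
assert np.allclose(sum(Mi.conj().T @ Mi for Mi in M), np.eye(2))
```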

+",491,,491,,4/24/2018 11:48,4/24/2018 11:48,,,,4,,,,CC BY-SA 3.0 +1897,2,,1894,4/24/2018 1:33,,4,,"

Forget about quantum mechanics for a second and consider two people predicting a coin flip. Alice flips a coin, covers it with her hand, and asks Bob to predict the result. Alice knows the coin is heads, but Bob is unsure if it is heads or tails. They will describe the state of the coin using different probability distributions.

+ +

The same situation can apply to quantum systems. Alice may know the state of the system, while Bob is unsure. When this happens, they each will describe the situation using a different density matrix: the matrix formalism that is used to describe a quantum system in a mixed state, meaning a statistical ensemble of several quantum states.

+ +

(On the other hand, if they describe the same system using different superpositions, that's bad. At least one of them is objectively wrong. Similarly, if Alice says a die roll was definitely 100% five, and Bob says the die roll was definitely 100% six, at least one of them is dead wrong.)

+ +
+

how could that knowledge or an absense of knowledge further affect the quantum system once the measurement is performed?

+
+ +

It doesn't affect the system, it determines your ability to accurately describe the system.

+ +

For example, suppose I have a qubit in the state $\frac{3}{5} |0\rangle + \frac{4}{5}|1\rangle$. I measure the state. I tell you all this, but don't tell you what the measurement outcome was.

+ +

If you're asked to predict the state of the system, the best you can do is bet 36% odds on $|0\rangle$ and 64% odds on $|1\rangle$. And a succinct way to describe your knowledge about the system is the density matrix $0.36 |0\rangle \langle 0| + 0.64 |1\rangle \langle 1| = \begin{bmatrix} 0.36 & 0 \\ 0 & 0.64\end{bmatrix}$. By contrast, I know the measurement result. It happens to have been $|1\rangle$. So the density matrix describing my knowledge is $|1\rangle \langle 1| = \begin{bmatrix} 0 & 0 \\ 0 & 1\end{bmatrix}$.
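A minimal numpy sketch of the two descriptions, using the amplitudes from the example above:

```python
import numpy as np

ket0 = np.array([[1.0], [0.0]])
ket1 = np.array([[0.0], [1.0]])

# |psi> = 3/5 |0> + 4/5 |1>, measured in the computational basis
psi = 0.6 * ket0 + 0.8 * ket1
p0, p1 = abs(psi[0, 0])**2, abs(psi[1, 0])**2  # 0.36 and 0.64

# Bob (outcome unknown): probability-weighted mixture over the two outcomes
rho_bob = p0 * (ket0 @ ket0.T) + p1 * (ket1 @ ket1.T)
assert np.allclose(rho_bob, np.diag([0.36, 0.64]))

# Alice (saw the outcome |1>): a pure state
rho_alice = ket1 @ ket1.T
assert np.allclose(rho_alice, np.diag([0.0, 1.0]))
```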

+ +
+

Would anything change in the examples if we replace ""known"" with ""unknown"" and vice versa?

+
+ +

If you perform the same operations on multiple systems in the state $|\psi\rangle$, their outputs will be identically distributed regardless of which ones you knew started in the state $|\psi\rangle$.

+ +

However, your ability to do useful tasks with a quantum system often depends on knowing what state that system is in. In that sense it does matter if you know the state or not. For example, it's pretty hard to do magic state distillation if you keep forgetting whether or not you already did the distillation.

+ +
+

Is there a mathematical formalism for the ""is known/unknown"" expression?

+
+ +

Yes. Density matrices.

+",119,,119,,4/24/2018 4:04,4/24/2018 4:04,,,,0,,,,CC BY-SA 3.0 +1898,2,,1894,4/24/2018 4:39,,2,,"

Suppose you have a qubit in a state $|\psi\rangle=\alpha|0\rangle+\beta|1\rangle$. For simplicity let us assume $\alpha$ and $\beta$ are real.

+ +

Alternatively, the state can be described by the density matrix
$$\rho=|\psi\rangle\langle\psi|=\begin{pmatrix} \alpha^2 & \alpha\beta \\ \alpha\beta & \beta^2 \end{pmatrix}$$

+ +

If we measure the qubit in the standard basis but don't look at the measurement outcome, the qubit's state after the measurement is

+ +

$$\rho_{out}=\begin{pmatrix} \alpha^2 & 0 \\ 0 & \beta^2 \end{pmatrix}$$

+ +

(the measurement killed the off-diagonal terms, and the qubit is now in a mixed state)

+ +

If we looked at the outcome of the measurement and found that the outcome is $|0\rangle$ state, then the qubit's state after measurement is

+ +

$$\rho_{out,0}=\begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}$$

+ +

The relation between $\rho_{out}$ and $\rho_{out,i}$ is the same as the relation between unconditional and conditional probabilities studied in probability theory:
$$\rho_{out}=\alpha^2 \rho_{out,0}+\beta^2 \rho_{out,1}$$

+ +

as explained in Craig's answer.

+ +

Concerning the cat - yes, the cat is strictly dead or alive after the box is opened (the cat is measured), but we don't know which before we look into the box. This is a purely classical situation, well known in probability theory.
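For concreteness, a small numpy check of the dephasing described above, using the example real amplitudes $\alpha=3/5$, $\beta=4/5$ (any normalised pair would do):

```python
import numpy as np

alpha, beta = 0.6, 0.8  # real amplitudes with alpha^2 + beta^2 = 1
psi = np.array([[alpha], [beta]])
rho = psi @ psi.T  # [[a^2, a*b], [a*b, b^2]]

P0 = np.diag([1.0, 0.0])
P1 = np.diag([0.0, 1.0])

# Measurement with unknown outcome: the off-diagonal terms vanish
rho_out = P0 @ rho @ P0 + P1 @ rho @ P1
assert np.allclose(rho_out, np.diag([alpha**2, beta**2]))

# Unconditional state = mixture of the conditional post-measurement states
rho_out_0 = np.diag([1.0, 0.0])
rho_out_1 = np.diag([0.0, 1.0])
assert np.allclose(rho_out, alpha**2 * rho_out_0 + beta**2 * rho_out_1)
```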

+",2105,,2105,,4/24/2018 7:30,4/24/2018 7:30,,,,1,,,,CC BY-SA 3.0 +1899,1,,,4/24/2018 5:47,,9,520,"

Predicting the energy of molecules to high accuracy during the course of a chemical reaction (which in turn allows us to predict reaction rates, equilibrium geometries, and transition states, among other things) is a quantum chemistry problem.

+ +

Quantum Computing could help Quantum Chemistry by solving the Schrodinger equation for large systems. An example of a problem that is intractable but has applications to Quantum Chemistry is the Hartree-Fock method, a method to approximate the wave function and energy of a quantum many-body system (in stationary state). This problem is known to be NP-complete (see On the NP-completeness of the Hartree-Fock method for translationally invariant systems). Other examples of Quantum Computation to Quantum chemistry are 2-local-Hamiltonians (QMA-complete), Fermionic Local Hamiltonian (QMA-hard).

+ +

Quantum computing could give yes/no answers to specific problems, such as showing that certain molecules have a dipole moment. Also, NMR, trapped ions, and superconducting qubits could be used to simulate such chemical systems. With noise being a factor, approaches such as NISQ could play a part in simulating quantum chemical systems. What quantum computing approaches have been successful in solving quantum chemistry problems such as predicting reaction rates and transition rates (or even show promise)?

+",429,,429,,4/24/2018 22:11,05-02-2018 05:53,Quantum Chemistry and Quantum Computing,,2,0,,,,CC BY-SA 3.0 +1900,2,,1881,4/24/2018 6:29,,2,,"

In order to represent a '$\mathbf 0$' state as a vector in a Hilbert space, the '$\mathbf 0$' vector must in fact be non-zero. Thus, the label '$\mathbf 0$' is just a label for some designated vector (of norm 1) in our computational basis. This is obviously an abuse of notation, but it is a fairly common one. The more usual (and less confusing) notation would be $\left|0\right>$. This notation is even used on the wiki page about qubits.

+ +

Building this up from the ground up: we have $n$ 2-dimensional vector spaces $V_i$, and we designate basis elements $\left| 0_i \right>$ and $\left| 1_i \right>$ in these vector spaces. Both these elements have norm 1. We then form the $2^n$-dimensional vector space $V = \bigotimes_{i=1}^n V_i$. We can designate a computational basis $\left| b_1 b_2 \ldots b_n \right>$ with $b_1,\ldots,b_n \in \{0,1\}$ for $V$. Within $V$ there are two vectors of interest: $\mathbf 0 = \left|00\ldots0\right>$ and $\mathbf s = \left| s_1 s_2 \ldots s_n \right>$, with $s_1, \ldots, s_n$ the bits of $s$. The vector space $S = \operatorname{span} \{\mathbf 0, \mathbf s\} \subset V$ is trivially 2-dimensional (provided $s \neq 0$).
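A small numpy sketch of this construction (the string $s$ below is an arbitrary example):

```python
import numpy as np
from functools import reduce

ket = {0: np.array([1.0, 0.0]), 1: np.array([0.0, 1.0])}

def basis_state(bits):
    # |b1 b2 ... bn> as a vector in the 2^n-dimensional space V
    return reduce(np.kron, (ket[b] for b in bits))

n = 4
s = [1, 0, 1, 1]                  # bits of some string s
zero_vec = basis_state([0] * n)   # the designated '0' vector |00...0>
s_vec = basis_state(s)            # |s1 s2 ... sn>

# Both are unit vectors (in particular, '0' is not the zero vector),
# and they are orthogonal, so span{|0...0>, |s>} is 2-dimensional.
assert np.isclose(np.linalg.norm(zero_vec), 1.0)
assert np.isclose(np.linalg.norm(s_vec), 1.0)
assert np.isclose(zero_vec @ s_vec, 0.0)
```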

+",2182,,2182,,4/24/2018 6:35,4/24/2018 6:35,,,,0,,,,CC BY-SA 3.0 +1901,2,,1895,4/24/2018 7:58,,2,,"

As mentioned by Norbert Schuch, the most general quantum operation (i.e. preserving quantum mechanical interpretation) is completely positive and trace-preserving map (CPTP). You can find more on this subject in the field of open quantum systems (these maps carry information on the interaction with the environment) - see the book: H-P. Breuer, F. Petruccione ""The theory of Open quantum systems"".

+ +

One reason why it's not easy (possible only in special cases) to express your stochastic process as a unitary transformation is that the transition matrix need not be invertible (and even if it is, its inverse is not necessarily a stochastic matrix), while unitaries are always invertible and describe legitimate quantum transformations.

+",563,,1847,,4/24/2018 10:40,4/24/2018 10:40,,,,2,,,,CC BY-SA 3.0 +1902,2,,1895,4/24/2018 8:29,,4,,"

The easiest way to get rid of off diagonal elements is to measure. You could then apply some post-measurement unitaries which depend on the result, as well as some classical randomness. Clearly you need more than just a unitary to apply such a process. Instead you'll need a more general CP map, as mentioned by the other answers.

+ +

I am going to assume that $t_{11}+t_{21}=t_{12}+t_{22}=1$ in what follows (i.e., the columns of $T$ sum to 1, matching the convention $p'=Tp$ of the question), so that the Markov process is also trace preserving.

+ +

Let's say you measure in the $\{|1\rangle, |2\rangle\}$ basis, but don't look at the result. The state is then described by

+ +

$$P_1 \, \rho \, P_1 + P_2 \, \rho \, P_2.$$

+ +

Now let's consider a different process. Suppose you applied a random process that flips the state with probability $t_{xy}$ (the probability of ending in state $x$ when starting in state $y$), and does nothing with probability $t_{yy}$. The state is then

+ +

$$t_{yy} \, \rho + t_{xy} \, X \,\rho \, X$$ where here I use $X$ to denote the unitary that performs the flip.

+ +

What you want is the process that combines the two. First measure, and then apply the random flip with probabilities that depend on the results. The map you need is then

+ +

$$t_{11} \, P_1 \rho P_1 + t_{21} \, X P_1 \, \rho P_1 \, X + t_{22} \, P_2 \rho P_2 + t_{12} \, X \, P_2 \rho P_2 \, X \, .$$

+ +

This can be expressed as the CPTP map
$$\rho\mapsto \mathcal E(\rho) = \sum_i M_i\rho M_i^\dagger \ .$$

+ +

with $M_{11} = \sqrt{t_{11}} P_1$, $M_{21} = \sqrt{t_{21}} X P_1$, $M_{22} = \sqrt{t_{22}} P_2$ and $M_{12} = \sqrt{t_{12}} X P_2$.
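As a numerical sanity check of the measure-then-flip construction (a numpy sketch; the indices follow the question's convention $p' = Tp$, so the columns of $T$ sum to 1, and the particular numbers are arbitrary):

```python
import numpy as np

# Question's convention: p' = T p, with the columns of T summing to 1
T = np.array([[0.7, 0.2],
              [0.3, 0.8]])
p = np.array([0.6, 0.4])
rho = np.diag(p)

P1 = np.diag([1.0, 0.0])
P2 = np.diag([0.0, 1.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])  # the flip unitary

# Measure, then flip with a probability depending on the outcome
K = [np.sqrt(T[0, 0]) * P1,      # outcome 1, stay
     np.sqrt(T[1, 0]) * X @ P1,  # outcome 1, flip
     np.sqrt(T[1, 1]) * P2,      # outcome 2, stay
     np.sqrt(T[0, 1]) * X @ P2]  # outcome 2, flip

rho_out = sum(Ki @ rho @ Ki.conj().T for Ki in K)
assert np.allclose(rho_out, np.diag(T @ p))  # reproduces the Markov step
assert np.allclose(sum(Ki.conj().T @ Ki for Ki in K), np.eye(2))  # trace preserving
```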

+ +

Note that this expression is not unique. The interpretation of how to do it is not unique either. Measurements are not necessarily required, but it can never be as simple as just a unitary.

+ +

To see why, note that your Markov process generally changes the value of $\mathrm{tr}(\rho^2)$. This cannot be done by a single-qubit unitary, because

+ +

$$ (U \rho U^{\dagger})^2 = U \rho U^{\dagger} U \rho U^{\dagger} = U \rho^2 U^{\dagger},$$

+ +

and the trace is unitary invariant. A process that changes this value requires either measurement, or interaction with an external system.
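The purity argument can be illustrated numerically (a numpy sketch; the rotation angle and the maximally mixing $T$ below are arbitrary examples):

```python
import numpy as np

rho = np.diag([0.9, 0.1])
purity = lambda r: np.trace(r @ r).real

# Any unitary leaves tr(rho^2) unchanged
theta = 0.37
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
assert np.isclose(purity(U @ rho @ U.conj().T), purity(rho))

# ...but a Markov step generally changes it, so it cannot be a unitary
T = np.array([[0.5, 0.5],
              [0.5, 0.5]])  # sends everything to the uniform distribution
rho_out = np.diag(T @ np.diag(rho))
assert not np.isclose(purity(rho_out), purity(rho))  # 0.5 vs 0.82
```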

+",409,,409,,4/24/2018 14:30,4/24/2018 14:30,,,,4,,,,CC BY-SA 3.0 +1903,2,,1860,4/24/2018 9:15,,5,,"

A fundamental difference between the two kinds of systems is that a two-qubit system can actually be in an entangled state. On the other hand, a single d=4 dimensional system does not possess entanglement, since entanglement is always defined with respect to more than one party. Consequently, for the purposes of quantum protocols that exploit entanglement as a resource, a two-qubit system and a single 4-dimensional quantum system are very different.

+",2186,,2186,,4/25/2018 12:33,4/25/2018 12:33,,,,2,,,,CC BY-SA 3.0 +1904,2,,1889,4/24/2018 10:20,,3,,"

It seems to me that, from the point of view of quantum engineering, we are a few years away from being at the point of fixing standards.

+ +

Standards are a good way to ensure the reproducibility of the behaviour of a piece of information technology, and the interoperability of the functionality of multiple pieces of information technology. It is clear that at some point we will require such standards. The question is: how would one begin to formulate those standards?

+ +
    +
  • Before we ensure the reproducibility of a piece of quantum technology, we should build a piece of quantum technology whose behaviour we want to reproduce, rather than one we immediately set out to improve upon. With perhaps one exception, I expect that almost everyone in the quantum technologies game is more interested in bettering their own designs — possibly involving significant revisions to any design parameters which could play the role of standards — rather than setting down parameters which they expect that everyone will be happy to use.

    + +

    Conceivably D-WAVE is at this stage — obviously they would also like to improve upon their existing technology (as for instance do conventional chip manufacturers), but my understanding is that they are in the business of making $\,N>1\,$ computers of a given model whose behaviour is intended to achieve a certain, well, 'standard'. Whatever the computational power of their machines, it seems that they are in the business of engineering complex systems with multiple components, and doing so in a reproducible manner. So it seems very likely that they have standards for their qubits: but it is not clear that anyone apart from them will be interested in conforming to those standards (rather than building more versatile quantum computers for instance), or to what extent D-WAVE's standards are publicly available.

    + +

    Another incipient exception is in the field of optics, where the inclination is very strong to use the existing materials technology of fibre optic cables: while likely no formal standard exists, a practical standard of using wavelengths of light which have very low attenuation in modern-day optical fibre is one that could be comfortably predicted to continue for the foreseeable future.

  • +
  • Given this situation for individual approaches to quantum technologies, the question of interoperation is even more premature. No-one knows what their equipment is going to be interoperating with — again, with the probable exception of fibre-optic cable, and perhaps the mains frequency of your electrical grid if this is somehow important to take into account.

  • +
+ +

But ask the question again in five or ten years (more likely ten), and you may get a more interesting answer.

+",124,,,,,4/24/2018 10:20,,,,0,,,,CC BY-SA 3.0 +1905,1,1906,,4/24/2018 11:54,,8,150,"

I have heard about Quantum chemistry as one of the main applications of quantum computers. However, I have not found concrete related articles with circuit-implementations for these applications.

+ +

Does anyone have articles on simulating molecules (such as for instance hydrogen or helium) with a circuit implementation to run it?

+",2005,,2293,,05-02-2018 09:10,05-02-2018 09:10,Quantum chemistry: references,,1,0,,,,CC BY-SA 4.0 +1906,2,,1905,4/24/2018 12:30,,5,,"

Have you read Towards quantum chemistry on a quantum computer (Nature Chemistry 2010, or here in the arXiv version)? They present ""a photonic implementation for the smallest problem: obtaining the energies of H$_2$ (the hydrogen molecule) in a minimal basis"". In the figure S1 of the Supporting information there is an equivalence of the operations they implement in circuit notation.

+",1847,,,,,4/24/2018 12:30,,,,0,,,,CC BY-SA 3.0 +1907,2,,1899,4/24/2018 12:43,,1,,"

You may be referring to works like Simulation of Chemical Isomerization Reaction Dynamics on a NMR Quantum Simulator (arXiv version).

+ +

However, I'd say that in general the prediction of reaction rates or transition rates will be much more difficult compared with this 3-qubit job. Note that a large amount of chemistry happens either in solution or in the solid state. Only few-particle phenomena (maybe reactions among simple molecules in atmospheric chemistry or astrochemistry), which are also the cheapest to calculate with conventional means, can be simulated with few qubits. As soon as one aspires to embed the reaction in an environment, the complexity of a realistic simulation explodes.

+ +

I agree that finding particular cases of Noisy Intermediate-Scale Quantum systems in which, by chance or by design, the noise is a reasonable approximation to the real (thermal?) effect of the environment on the chemical reaction under study could indeed give us at least exciting results, maybe even useful ones.

+",1847,,,,,4/24/2018 12:43,,,,0,,,,CC BY-SA 3.0 +1908,2,,1542,4/24/2018 16:00,,5,,"

For a certain class of codes called pure, the presence of entanglement is a necessary and sufficient requirement for quantum error correction, i.e. to correct all errors affecting up to a certain number of subsystems.

+ +

Recall the Knill-Laflamme conditions for a quantum error correcting code to be able to detect a certain set of errors $\{E_\alpha\}$: pick any orthonormal basis $|i_\mathcal{Q}\rangle$ that spans the code-space. Then the error $E_\alpha$ can be detected if and only if

+ +

\begin{equation}
\langle i_\mathcal{Q} | E_\alpha | j_\mathcal{Q} \rangle = \delta_{ij} C(E_\alpha). \quad\quad\quad (1)
\end{equation}

+ +

Note that $C(E_\alpha)$ is a constant that only depends on the specific error $E_\alpha$, but not on $i$ and $j$. (This means that the error $E_\alpha$ affects all states in the code subspace in the same way.) In the case of $C(E_\alpha) \propto \operatorname{tr}(E_\alpha)$, the code is called pure. Many commonly considered stabilizer codes are of this form; Kitaev's toric code, however, is not.

+ +

Let us assume an error model where we are only interested in how many subsystems our errors act on. If our code can detect all errors $E_\alpha$ that act nontrivially on at most $(d-1)$ subsystems, the code is said to have distance $d$. As a consequence, any combination of errors that affect up to $\lfloor(d-1)/2\rfloor$ subsystems can be corrected.

+ +

What follows is that for pure codes of distance $d$, every vector lying in the code subspace must be maximally entangled across any bipartition whose smaller subsystem has at most size $(d-1)$. To see this, note that for every error $E_\alpha \neq \mathbb{1}$ and vector $|v_\mathcal{Q} \rangle$ in the subspace (our ONB was chosen arbitrarily), one has that

+ +

\begin{equation}
\langle E_\alpha \rangle = \operatorname{tr}(E_\alpha |v_\mathcal{Q} \rangle\langle v_\mathcal{Q}|) = \langle v_\mathcal{Q} |E_\alpha|v_\mathcal{Q} \rangle = C(E_\alpha) \propto \operatorname{tr}(E_\alpha) = 0.
\end{equation}

+ +

Thus all observables on at most $(d-1)$ parties are vanishing, and all reduced density matrices on $(d-1)$ parties must be maximally mixed. This implies that $|v_\mathcal{Q}\rangle$ is maximally entangled for any choice of $(d-1)$ parties vs. the rest.
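As a minimal illustration of this last step (not a full code construction): the Bell state is maximally entangled across its only 1-vs-1 bipartition, and correspondingly its one-party reduced density matrix is maximally mixed. A numpy sketch:

```python
import numpy as np

bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)  # (|00> + |11>)/sqrt(2)
rho = np.outer(bell, bell.conj())

# Partial trace over the second qubit: indices (a, b, a_dash, b_dash)
rho_A = np.trace(rho.reshape(2, 2, 2, 2), axis1=1, axis2=3)
assert np.allclose(rho_A, np.eye(2) / 2)  # maximally mixed

# Equivalently, traceless single-qubit observables have vanishing expectation
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])
for O in (X, Z):
    assert np.isclose(np.trace(rho_A @ O), 0.0)
```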

+ +

Addendum (for the sufficiency): as an equivalent definition to Eq. (1), all errors $E_\alpha$ acting on fewer than $d$ places can be detected if and only if, for all $|v\rangle, |w\rangle$ in the code subspace, the following condition holds,

+ +

\begin{equation}
\langle v| E_\alpha | v\rangle = \langle w | E_\alpha| w\rangle.
\end{equation}

+ +

In the case of pure codes, the above expression vanishes. It follows that if one has a subspace where every state is maximally entangled for all bipartitions of $(d-1)$ parties vs. the rest, then it is a pure code of distance $d$.

+ +

tl;dr: For a large distance $d$, a pure code consists of highly entangled states. It is a necessary and sufficient requirement for the existence of the code.

+ +

Addendum: we looked into this question further; details can be found in the paper Quantum Codes of Maximal Distance and Highly Entangled Subspaces. There is a trade-off: the more errors a quantum code can correct, the more entangled must every vector in the code space be. This makes sense, because if the information were not distributed amongst many particles, the environment - by reading a few qubits - could recover the message in the code space. This would then necessarily destroy the coded message, due to the no-cloning theorem. Thus a high distance needs high entanglement.

+",2192,,2192,,09-11-2019 18:35,09-11-2019 18:35,,,,0,,,,CC BY-SA 4.0 +1909,1,,,4/24/2018 16:35,,5,1017,"

The following $2\times 2$ matrix

+ +

$$P = \begin{bmatrix} e^{i\theta} & 0 \\ 0 & e^{i\phi} \end{bmatrix}$$

+ +

represents a quantum gate because it's a unitary matrix.

+ +

If we multiply $P$ by the quantum state $\lvert \psi\rangle = \alpha \lvert 0\rangle + \beta \lvert 1\rangle$, we obtain ${\lvert \psi\rangle}_P = \alpha e^{i\theta} \lvert 0 \rangle + \beta e^{i\phi} \lvert 1\rangle $, which can be derived as follows

+ +

\begin{align}
{\lvert \psi\rangle}_P
&= \begin{bmatrix} e^{i\theta} & 0 \\ 0 & e^{i\phi} \end{bmatrix} \left( \alpha \lvert 0\rangle + \beta \lvert 1\rangle \right) \\
&= \begin{bmatrix} e^{i\theta} & 0 \\ 0 & e^{i\phi} \end{bmatrix} \alpha \lvert 0\rangle + \begin{bmatrix} e^{i\theta} & 0 \\ 0 & e^{i\phi} \end{bmatrix} \beta \lvert 1\rangle \\
&= \alpha \begin{bmatrix} e^{i\theta} & 0 \\ 0 & e^{i\phi} \end{bmatrix} \begin{bmatrix} 1 \\ 0 \end{bmatrix} + \beta \begin{bmatrix} e^{i\theta} & 0 \\ 0 & e^{i\phi} \end{bmatrix} \begin{bmatrix} 0 \\ 1 \end{bmatrix} \\
&= \alpha \begin{bmatrix} e^{i\theta} \\ 0 \end{bmatrix} + \beta \begin{bmatrix} 0 \\ e^{i\phi} \end{bmatrix} \\
&= \alpha e^{i\theta} \begin{bmatrix} 1 \\ 0 \end{bmatrix} + \beta e^{i\phi} \begin{bmatrix} 0 \\ 1 \end{bmatrix} \\
&= \alpha e^{i\theta} \lvert 0 \rangle + \beta e^{i\phi} \lvert 1\rangle
\end{align}

+ +

If we tried to measure ${\lvert \psi\rangle}_P$, we would obtain the computational basis state $\lvert 0 \rangle$ with probability $|\alpha|^2$ and the computational basis state $\lvert 1 \rangle$ with probability $|\beta |^2$. So, there's no difference between measuring ${\lvert \psi\rangle}_P$ or $\lvert \psi\rangle$, in terms of probabilities of obtaining one rather than the other computational basis state.

+ +

The reason we obtain the same probabilities is that $e^{i\theta}$ and $e^{i\phi}$ are phase factors, which have modulus 1 and therefore do not affect the probabilities.
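This is easy to check numerically (a numpy sketch; the phases and amplitudes below are arbitrary examples):

```python
import numpy as np

theta, phi = 0.3, 1.1       # arbitrary example phases
alpha, beta = 0.6, 0.8j     # amplitudes with |alpha|^2 + |beta|^2 = 1

P = np.diag([np.exp(1j * theta), np.exp(1j * phi)])
psi = np.array([alpha, beta])
psi_P = P @ psi

# Computational-basis probabilities are unchanged by the phase gate
assert np.allclose(np.abs(psi_P)**2, np.abs(psi)**2)
```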

+ +

$e^{i\theta}$ and $e^{i\phi}$ represent complex numbers, as vectors, in the complex plane. This can be easily visualized from the following picture

+ +

[figure: $e^{i\phi}$ drawn as a unit vector at angle $\phi$ in the complex plane]

+ +

But what's the intuitive meaning of multiplying the ""vectors"" $e^{i\phi}$ by a computational basis state? In general, what is a phase and a phase vector in this context and how does it affect the mathematics and the basis vectors? What's the relation between $\lvert \psi\rangle$ and ${\lvert \psi\rangle}_P$?

+",,user72,26,,1/21/2019 15:49,1/21/2019 15:49,What exactly is a phase vector?,,1,1,,,,CC BY-SA 3.0 +1910,2,,1909,4/24/2018 17:12,,2,,"

There are a few different things that you may be confusing.

+ +
+

Why are objects of the form $e^{i\phi}$ actually called vectors in this context?

+
+ +

A complex number can always be expressed as a vector in $\mathbb R^2$, because $\mathbb C$ is nothing but $\mathbb R^2$ with a particular product defined between its elements. Note that this has nothing to do with quantum mechanics or physics, it is just how complex numbers are defined.

+ +
+

in general what is a phase (in the context of quantum mechanics)?

+
+ +

You can think of a phase as a number that characterises how different modes interfere with each other. While, as you noted, adding a phase doesn't change the output probabilities in a fixed basis, it does change the output probabilities as soon as you measure in a different basis.

+ +
+

What is the relation between $|\psi\rangle$ and $|\psi\rangle_P$?

+
+ +

They are just two different states. As noted above, while the probabilities of measuring $|0\rangle$ or $|1\rangle$ are the same for these states, as soon as you measure in a different basis you will see that they behave differently. For example, you can easily verify that $|\psi\rangle$ and $|\psi\rangle_P$ correspond to different probabilities of measuring the outcome $|+\rangle\equiv\frac{1}{\sqrt2}(|0\rangle+|1\rangle)$.
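A numpy sketch of this check, using the example phases $\theta = 0$, $\phi = \pi$ (so that $P$ acts as a $Z$ gate) and $|\psi\rangle = |+\rangle$:

```python
import numpy as np

theta, phi = 0.0, np.pi
psi = np.array([1.0, 1.0]) / np.sqrt(2)  # |+>
psi_P = np.diag([np.exp(1j * theta), np.exp(1j * phi)]) @ psi

plus = np.array([1.0, 1.0]) / np.sqrt(2)

p_plus = abs(plus.conj() @ psi)**2      # probability of outcome |+> for |psi>
p_plus_P = abs(plus.conj() @ psi_P)**2  # ... and for |psi>_P

assert np.isclose(p_plus, 1.0)
assert np.isclose(p_plus_P, 0.0)  # the relative phase is measurable
```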

+",55,,,,,4/24/2018 17:12,,,,0,,,,CC BY-SA 3.0 +1911,1,1912,,4/24/2018 18:07,,4,437,"

I know that a density operator is separable if it can be written in the form:

+ +

$$ \rho =\sum_k a_k \rho^A_k \otimes \rho^B_k\tag{1}$$
where

+ +

$$a_k \ge 0,\quad \sum_k a_k=1\tag{2}$$

+ +

My question is will any set of $\rho_k^A \otimes \rho_k^B$ work? I.e. I am asking if the following statement is true (if so how do we prove it and if not - why not):

+ +
+

A density matrix $\rho$ is separable if and only if when written as the sum of ""factorized"" states $\rho_K^A \otimes \rho_k^B$ (independent of which factorized states are used) the relations (1) and (2) hold.

+
+ +

This is clearly a stronger statement then saying it can be written as (1) and (2) - and if true leads to a quick way to test entanglement.

+",2015,,,,,4/24/2018 21:22,Density operators and separable states,,2,0,,,,CC BY-SA 3.0 +1912,2,,1911,4/24/2018 20:04,,5,,"

The correct definition is

+ +
+

$\rho$ is separable if and only if there exist $\rho_i^A\ge0$, $\rho_i^B\ge0$, and $p_i\ge0$ such that
  $$\rho = \sum_i p_i \rho_i^A\otimes \rho_i^B\ .$$

+
+ +

All other properties (that the $\rho_i^\bullet$ have trace 1 and the $p_i$ sum to $1$) follow automatically.

+",491,,,,,4/24/2018 20:04,,,,0,,,,CC BY-SA 3.0 +1913,2,,1911,4/24/2018 20:27,,2,,"

There are fixed sets that you can use, but the question is how large that set is.

+ +

If your possible set of states is infinitely large (every possible pure state of 1 qubit), you can always do it, of course. But, worse, it has to be this large. Imagine you have a finite set $\{\rho^A_i\}$ and you want to describe a pure single-qubit state $|\psi\rangle\langle\psi|$ that is not one of the $\rho^A_i$. Clearly, there is no choice of $\{p_i\}$ such that $\sum_ip_i\rho^A_i=|\psi\rangle\langle\psi|$: since no single $\rho^A_i$ equals $|\psi\rangle\langle\psi|$, any such mixture would have rank at least 2, while the pure state has rank 1.

+ +

A good way to visualise this is with the Bloch sphere. Imagine a sphere. Every possible pure state corresponds to a point on the surface. The set $\{\rho^A_i\}$ form a set of points on, or in, the sphere. The set of possible states that you can make via the term $\sum_ip_i\rho^A_i$ is given by the convex hull of the points. You can't reconstruct the surface of the sphere from a convex hull (the shape has flat sides!) unless you have infinitely many points.
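This rank argument is easy to verify for a concrete example (a numpy sketch; the two pure states below are arbitrary choices):

```python
import numpy as np

ket0 = np.array([1.0, 0.0])
ket_plus = np.array([1.0, 1.0]) / np.sqrt(2)

# An equal mixture of two distinct pure states
rho_mix = 0.5 * np.outer(ket0, ket0) + 0.5 * np.outer(ket_plus, ket_plus)

# The mixture has rank 2 and purity < 1, so it cannot equal any pure state
assert np.linalg.matrix_rank(rho_mix) == 2
assert np.trace(rho_mix @ rho_mix) < 1.0 - 1e-9  # purity is 0.75 here
```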

+",1837,,1837,,4/24/2018 21:22,4/24/2018 21:22,,,,0,,,,CC BY-SA 3.0 +1914,1,1919,,4/25/2018 3:53,,5,1551,"

Whenever I learn about quantum computing and qubits, the material always talks about the superposition principle and says that the qubits can be in both states 0 and 1 simultaneously, thus claiming that quantum computers have far greater processing capability than modern conventional computers.

+ +

But here is what I don't understand:

+ +
    +
  1. How can a superposition of 0 and 1 represent any discrete information at all?
  2. +
  3. What about the logic gates?
  4. +
  5. How can a discrete decision be taken by leveraging this superposition principle?
  6. +
  7. Combination of 0 and 1 is basically important for computing. How can this third state of qubits be leveraged to give a boost in computing?
  8. +
+",2201,,26,,12/23/2018 12:24,12/23/2018 12:24,How does using a superposition of 0 and 1 improve the processing capabilities of a quantum computer compared to classical computers?,,2,5,,4/25/2018 18:32,,CC BY-SA 4.0 +1915,2,,1914,4/25/2018 4:53,,1,,"
+

How can a superposition of 0 and 1 represent any discrete information at all?

+
+ +

Any algorithm, classical or quantum, inputs 0's and 1's and outputs 0's and 1's. For a quantum algorithm, 0 is $|0\rangle$ and 1 is $|1\rangle$. Quantum algorithms use superpositions $|\psi\rangle = \alpha|0\rangle+\beta|1\rangle$ only at intermediate computation steps.

+ +
+

What about the logic gates?

+
+ +

Quantum computers use a different set of gates because they are built on different physics. For example, all quantum gates are reversible, while, say, the classical AND gate is not.

+ +

Formally, a quantum gate is a unitary transformation applied to one or several qubits.

+ +
+

How can a discrete decision be taken by leveraging this superposition principle?

+
+ +

""Discrete decision"" (whatever it means) is based on the output of a quantum algorithm, which is nothing more than a bunch of zeros and ones, same as with a classical algorithm.

+ +
+

Combination of 0 and 1 is basically important for computing. How can this third state of qubits be leveraged to give a boost in computing?

+
+ +

Actually quantum algorithms use not only single-qubit superposition states $|\psi\rangle = \alpha|0\rangle+\beta|1\rangle$ but also multi-qubit entangled states. Anyway, quantum algorithms have additional ""degrees of freedom"" that are unavailable to classical algorithms and indeed can leverage these degrees of freedom ""to give a boost in computing"". I doubt it is possible to explain how quantum algorithms do it without learning quantum algorithms themselves.

+",2105,,26,,05-09-2018 04:37,05-09-2018 04:37,,,,1,,,,CC BY-SA 4.0 +1916,2,,1803,4/25/2018 6:13,,4,,"
+

Plain and simple. Does Moore's law apply to quantum computing, or is it similar but with the numbers adjusted (ex. triples every 2 years). Also, if Moore's law doesn't apply, why do qubits change it?

+
+ +

A great question, with great answers; still, I will try my hand at it.

+ +

No, most quantum computers do not have qubits created in silicon; even the few that do are not created by utilizing computational lithography. Quantum computing is in its earliest days, so it can't be compared directly to a mature technology of an entirely different kind.

+ +
+ +

Information to support that short answer:

+ +

This question was asked at physics.SE: ""Reasonable to expect Moore's law for quantum computing?"", receiving one answer; not particularly well received (400 views in 144 days, and 1 UpVote).

+ +

It is termed Rose's Law, by some; after the CTO of D-Wave Systems. See this article: ""Quantum computing Rose’s Law is Moore’s Law on steroids"" or the Flickr page of the Managing Director of the investment firm Draper Fisher Jurvetson, Steve Jurvetson: ""Rose’s Law for Quantum Computers"".

+ +

+ +

The chart runs a bit ahead of itself, and it applies to quantum annealing computers; it's not exactly comparable to universal quantum computing.

+ +

The reason Moore's Law is not exactly comparable is that it refers to transistors and an entirely different manufacturing process; you're comparing a manufacturing process that was well established at the time with one where the computer is in its earliest days and is essentially hand-built.

+ +

Wikipedia's webpage describes Moore's Law this way:

+ +
+

""Moore's law is the observation that the number of transistors in a dense integrated circuit doubles about every two years. The observation is named after Gordon Moore, the co-founder of Fairchild Semiconductor and Intel, whose 1965 paper described a doubling every year in the number of components per integrated circuit, and projected this rate of growth would continue for at least another decade. In 1975, looking forward to the next decade, he revised the forecast to doubling every two years. The period is often quoted as 18 months because of Intel executive David House, who predicted that chip performance would double every 18 months (being a combination of the effect of more transistors and the transistors being faster)."".

+
+ +

Gordon E. Moore's graphic from 1965 looked like this:

+ +

+ +

The article by Max Roser and Hannah Ritchie (2018) - ""Technological Progress"", published online at OurWorldInData.org, explains how exponential equations have been used to describe everything from Moore's Law, computational power (both operations per second and clock speed * cores * threads), the progress of human flight or even human genome DNA sequencing.

+ +

Moore's law is an observation and projection of an historical trend and not a physical or natural law. Although the rate held steady from 1975 until around 2012, the rate was faster during the first decade. A nostalgic look at the early days of personal computing is given in this Ars Technica feature: ""The creation of the modern laptop: An in-depth look at lithium-ion batteries, industrial design, Moore's law, and more"".

+ +

In this Communications of the ACM, Vol. 60 No. 1 article: ""Exponential Laws of Computing Growth"" the authors, Denning and Lewis, explain:

+ +

""The three kinds of exponential growth, as noted—doubling of components, speeds, and technology adoptions—have all been lumped under the heading of Moore's Law. Because the original Moore's Law applies only to components on chips, not to systems or families of technologies, other phenomena must be at work. We will use the term ""Moore's Law"" for the component-doubling rule Moore proposed and ""exponential growth"" for all the other performance measures that plot as straight lines on log paper. What drives the exponential growth effect? Can we continue to expect exponential growth in the computational power of our technologies?

+ +

Exponential growth depends on three levels of adoption in the computing ecosystem (see the table here). The chip level is the domain of Moore's Law, as noted. However, the faster chips cannot realize their potential unless the host computer system supports the faster speeds and unless application workloads provide sufficient parallel computational work to keep the chips busy. And the faster systems cannot reach their potential without rapid adoption by the user community. The improvement process at all three levels must be exponential; otherwise, the system or community level would be a bottleneck, and we would not observe the effects often described as Moore's Law.

+ +

With supporting mathematical models, we will show what enables exponential doubling at each level. Information technology may be unique in being able to sustain exponential growth at all three levels. We will conclude that Moore's Law and exponential doubling have scientific bases. Moreover, the exponential doubling process is likely to continue across multiple technologies for decades to come.

+ +

Self-Fulfillment

+ +

The continuing achievement signified by Moore's Law is critically important to the digital economy. Economist Richard G. Anderson said, ""Numerous studies have traced the cause of the productivity acceleration to technological innovations in the production of semiconductors that sharply reduced the prices of such components and of the products that contain them (as well as expanding the capabilities of such products).""1 Robert Colwell, Director of DARPA's Microsystems Technology Office, echoes the same conclusion, which is why DARPA has invested in overcoming technology bottlenecks in post-Moore's-Law technologies.5 If and when Moore's Law ends, that end's impact on the economy will be profound.

+ +

It is no wonder then that the standard explanation of the law is economic; it became a self-fulfilling prophesy of all chip companies to push the technology to meet the expected exponential growth and sustain their markets. A self-fulfilling prophecy is a prediction that causes itself to become true. For most of the past 50-plus years of computing, designers have emphasized performance. Faster is better. To achieve greater speed, chip architects increased component density by adding more registers, higher-level functions, cache memory, and multiple cores to the same chip area and the same power dissipation. Moore's Law became a design objective."".

+ +

Moore's Law had a lot of help: shaping the future and maintaining the growth was an objective of those who stood to profit, not something entirely constrained by technological limitations. If consumers wanted something, sometimes it was provided and other times a better idea was offered; what was popular (clock speed) sold at a premium, and what was, at one time, not well understood (more cores and threads) was promoted as the way forward.

+ +

Moore's Law was well received, evolving into many things, like Kurzweil's ""The Law of Accelerating Returns"". Here's an updated version of Moore’s Law (based on Kurzweil’s graph):

+ +

+ +

Another fact-based chart is Top500.Org's plot of the exponential growth of supercomputer power:

+ +

The Missouri University of Science and Technology's article: ""Forecasting Consumer Adoption of Technological Innovation: Choosing the Appropriate Diffusion Models for New Products and Services before Launch"" explains that the Bass Model (a modification of the logistic curve) is a sound method to predict future growth (based upon past statistics).

+ +

The logistic curve features a slow start, great mid-term progress, followed by an eventual slowdown; often replaced by something new.
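That shape can be sketched with a generic logistic function (a minimal illustration; the parameter names `L`, `k` and `t0` are mine, not taken from the cited article):

```python
import math

def logistic(t, L=1.0, k=1.0, t0=0.0):
    """Logistic curve: slow start, rapid mid-term growth, saturation at L."""
    return L / (1.0 + math.exp(-k * (t - t0)))

# Growth per unit time is largest near the midpoint t0 and small at both ends.
early = logistic(-4) - logistic(-5)
mid   = logistic(0.5) - logistic(-0.5)
late  = logistic(6) - logistic(5)
print(early < mid and late < mid)  # True
```

The slowdown at the tail is where a technology is ""often replaced by something new"".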

+ +

+ +

On forecasting models the authors had this to say:

+ +

""MODELS

+ +

The Box and Cox and Generalized Bass models were the best models when it came to curve-fitting while the Simple Logistic model did the poorest. However, the results of the research showed that a curve-fitting advantage did not translate into a forecasting advantage when creating a forecast for an innovation without a market history. The popularity of the Bass model derives from two unique factors. As this research has reinforced, the Bass model is very robust. In addition, the Bass model’s two coefficients have a theoretical foundation. The Bass model variants created for this research deliberately violated the assumption of a constant $m$. This resulted in a model (Bv) that outperformed any of the others in the radical low-priced innovation context. Unfortunately, there was just one innovation in this context – additional research is recommended to test the viability of this variation with more datasets in various contexts.

+ +

The Simple Logistic model is one of the oldest diffusion models known. It is a very basic model, but it clearly outperformed the other models in the context of really new low-priced innovations. The Gompertz model is not recommended for forecasting the diffusion of really new or radical innovations before the launch of an innovation. However, the Gompertz model may be very well suited for forecasts generated well after the launch of an innovation. While not the focus of this research, it was observed that the diffusion of the Projection Television innovation follows a perfect Gompertz curve.

+ +

The Flexible Logistic Box and Cox model has a problem where the $c$ variable tends to run to infinity in some scenarios. This was addressed by capping the upper limit of $c$ to 100,000. Despite (or because of) this fix, the authors must admit to being skeptical as to how well the Box and Cox model would do in comparison to the other models. As it turned out, the Box and Cox was second only to the Bass model in terms of robustness. The Box and Cox was also the best model in the context of radical high-priced innovations."".

+ +

Moore's position as a co-founder of Intel ensured that he could help his prediction come true and keep it on track. Quantum computing is too near its genesis to be pushed forward by simply pouring money on it; with so many paths to creating a successful quantum computing device, money needs to be apportioned wisely to make the most gains from the many branches that research has taken.

+ +

""The European Quantum Technologies Roadmap"" (11 Dec 2017) lists some of the challenges, following the introduction:

+ +

""Introduction

+ +

""A quantum computer based on the unitary evolution of a modest number of robust logical qubits (N>100) operating on a computational state space with $2^N$ basis states would outperform conventional computers for a number of well identified tasks. A viable implementation of a quantum computer has to meet a set of requirements known as the DiVincenzo criteria: That is, a quantum computer operates on

+ +

(1) an easily extendable set of well characterized qubits

+ +

(2) whose coherence times are long enough for allowing coherent operation

+ +

(3) and whose initial state can be set

+ +

(4). The qubits of the system can be operated on logically with a universal set of gates

+ +

(5) and the final state can be measured

+ +

(6). To allow for communication, stationary qubits can be converted into mobile ones

+ +

(7) and transmitted faithfully.

+ +

It is also understood that it is essential for the operation of any quantum computer to correct for errors that are inevitable and much more likely than in classical computers.

+ +

Today quantum processors are implemented using a range of physical systems. Quantum processors operating on registers of such qubits have so far been able to demonstrate many elementary instances of quantum algorithms and protocols. The development into a fully featured large quantum computer faces a scalability challenge which comprises of integrating a large number of qubits and correcting quantum errors. Different fault-tolerant architectures are proposed to address these challenges. The steadily growing efforts of academic labs, startups and large companies are a clear sign that large scale quantum computation is considered a challenging but potentially rewarding goal."".

+ +

...

+ +

There are too many paths to choose from, and too much uncertainty in determining the best way forward, to plot a model for growth (like Moore's Law); nor should so straight a line be expected.

+ +

With D-Wave's computer, each doubling of qubits represents a doubling of computational power for the subset of problems it is suited for; for universal quantum computers, each single additional qubit represents a doubling of power. Unfortunately, each logical qubit needs to be represented by multiple physical qubits, to permit error correction and maintain coherence. Some technologies used to implement qubits allow fewer or single physical qubits to be used, as they are not error prone and have longer coherence and greater fidelity. Speed of control is also an important consideration when choosing which technology to implement, and while it will affect the plot of the curve, it's outside the scope of the answer offered here.

+ +

Further reading: ""Coherent control of single electrons: a review of current progress"" (1 Feb 2018), ""Hyperfine-assisted fast electric control of dopant nuclear spins in semiconductors"" (30 Mar 2018), ""A >99.9%-fidelity quantum-dot spin qubit with coherence limited by charge noise"" (4 Aug 2017).

+",278,,278,,05-12-2018 23:26,05-12-2018 23:26,,,,1,,,,CC BY-SA 4.0 +1917,1,,,4/25/2018 7:57,,12,101,"

This question is related (and complementary) to ""Passive improving of nanodiamond surfaces for NV centers?"".

+ +

Nitrogen-Vacancy centers (NVs) have astonishing quantum properties, which make them interesting as potential hardware both for quantum computing in particular and for quantum technologies in general. In part this results from the center being protected by the diamond structure, which is at the same time very rigid and practically free from nuclear spins.

+ +

However, their surfaces tend to be far from controlled, neither in chemical terms (composition, structure) nor in terms of what the surface contributes to the physical properties of the bulk. For example, in experiments of diamond levitation using lasers, at high powers of irradiation the diamonds typically get noticeably lighter (and thus oscillate further in their potential wells) as they suddenly (and uncontrolledly) lose the external rubbish.

+ +

Coming closer to the question: in these same experiments, even though diamonds are essentially transparent to the lasers employed for the levitation, eventually at high laser power and low pressure diamonds overheat and essentially evaporate. Since these conditions are useful to fix the diamonds in place and reduce noise, this is a problem for the control of NV centers as qubits for quantum computing purposes. One reason for the poor thermal dissipation in diamonds (which, in the absence of a gas that can carry heat by convection, necessarily happens via black-body radiation) is the fact that the phonon spectrum of diamond is essentially empty: covalent bonds are too strong, everything is fixed in its place, and there is nothing available that can vibrate.

+ +

My question is: since heat release is often governed by surface properties, what is the current status of efforts to alter the diamond surface with the goal of obtaining spectrally selective thermal emittance, meaning emitting preferentially at energies below those at which evaporation of the diamond starts?

+",1847,,55,,09-04-2020 09:31,09-04-2020 09:31,Active improving of nanodiamond surfaces for NV centers?,,0,0,,,,CC BY-SA 3.0 +1918,1,1923,,4/25/2018 10:44,,17,995,"

An answer to +Is there any source which tabulates quantum computing algorithms for simulating physical systems? mentions the Quantum Algorithm Zoo, a list of quantum algorithms. Several answers to Programming quantum computers for non-physics majors include links to different kinds of development kits. Likewise, What programming languages are available for quantum computers? gathers a couple of good attempts at listing those.

+ +

The present question is related to the above, and yet it's not answered by the above resources.

+ +

Does a complete list of open quantum software projects exist?

+ +

Ideal answers would be: if it exists, the link to said list, and if it doesn't, a (well-formatted) as-exhaustive-as-possible compilation of open quantum software projects.

+ +

Related question: Are there any quantum software startups?

+",1847,,1847,,05-03-2018 04:47,04-12-2019 21:13,Does a complete list of open quantum software projects exist?,,3,0,,,,CC BY-SA 4.0 +1919,2,,1914,4/25/2018 10:49,,3,,"
+

How can a superposition of 0 and 1 represent any discrete information at all?

+
+ +

The inputs to a quantum computer are always the same as the inputs to a classical computer; a definite sequence of 0s and 1s. This superposition thing is something that happens in the middle of the computation, and what your quantum algorithm tries to do is make sure that your output is also a definite sequence of 0s and 1s. So, you're not using the superposition to represent any information in the sense of an input or an output.

+ +
+

What about the logic gates?

+
+ +

In many ways, logic gates work just the same as in the classical case. For example, there are many gates which, given an input string of 0s and 1s, output a different string of 0s and 1s. Examples include the not gate, controlled-not and Toffoli (controlled-controlled-not). For a quantum state, if you have it written out as a superposition of different strings of 0s and 1s, these gates act on each of these strings just as they would in the classical case. In this sense, you can do everything that a classical computer can (the Toffoli gate, in particular, is said to be 'universal' for classical reversible computation, i.e. arbitrary combinations have the full power of classical computation).

+ +

However, quantum computation has more logic gates. It has gates which can create and recombine superpositions, and gates which can change the complex argument on the coefficients of the superposition (the $\alpha$ and $\beta$ in a state $\alpha|0\rangle+\beta|1\rangle$).
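Both kinds of gates can be seen in a small state-vector sketch (plain numpy, not tied to any particular quantum framework):

```python
import numpy as np

ket0 = np.array([1, 0])
ket1 = np.array([0, 1])

# Classical-style gate: NOT (X) just permutes the basis strings.
X = np.array([[0, 1],
              [1, 0]])
print(np.allclose(X @ ket0, ket1))  # True: |0> -> |1>

# Quantum-only gate: Hadamard creates an equal superposition from |0>.
H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)
print(H @ ket0)  # [0.7071 0.7071], i.e. (|0> + |1>)/sqrt(2)

# Phase gate: Z flips the sign of the |1> coefficient -- no classical analogue.
Z = np.array([[1, 0],
              [0, -1]])
print(Z @ (H @ ket0))  # [0.7071 -0.7071]
```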

+ +
+

How can a discrete decision be taken by leveraging this superposition principle?

+
+ +

The trick, as mentioned above, is to make sure that there is a definite output, i.e. that there is no (or almost no) superposition when the information is read out at the end. Where the superposition becomes useful is that it gives these extra gates, over and above the classical stuff, some room to work. Without specifying exactly how they do it, you can easily imagine how access to an additional set of abilities can sometimes be combined in new ways to give faster computations.

+ +
+

Combination of 0 and 1 is basically important for computing. How can this third state of qubits be leveraged to give a boost in computing?

+
+ +

The insight about how it works is to emphasise first that the superposition of a qubit is not only one extra state. It is an infinite number of them, because instead of either being a 0 or a 1, the qubit can be in any state $\alpha|0\rangle+\beta|1\rangle$ that satisfies the constraint $|\alpha|^2+|\beta|^2=1$. What's more, when you combine $N$ qubits together, you can get superpositions across all $2^N$ different sequences of 0s and 1s.
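A quick way to see the size of that state space (a numpy sketch; applying a Hadamard to each of N qubits is just one convenient way to build such a superposition):

```python
import numpy as np

# Any (alpha, beta) with |alpha|^2 + |beta|^2 = 1 is a valid qubit state --
# a continuum of states, not a single extra "third state".
alpha, beta = 0.6, 0.8j
assert abs(abs(alpha)**2 + abs(beta)**2 - 1) < 1e-12

# Combining N qubits gives one amplitude per N-bit string: 2**N of them.
N = 4
state = np.array([1.0])
for _ in range(N):
    state = np.kron(state, np.array([1, 1]) / np.sqrt(2))  # H|0> on each qubit
print(len(state))  # 16 = 2**4 amplitudes, one per 4-bit string
```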

+ +

Now, how is this useful? So far, we only know that it's useful for a limited set of problems, and these problems often require knowledge of some global property of a function, i.e. I don't just want to evaluate some function $f(x)$ for a specific value of $x$, but I want to know some relation between many different values of $f(x)$. The superposition can simultaneously evaluate all the different values of $f(x)$, and then it just needs a bit of magic to work out if it's possible to recombine these superpositions in order to get out the answer we're interested in. The point is that this opens up a completely new possibility for the way an algorithm could work.

+",1837,,26,,05-09-2018 04:35,05-09-2018 04:35,,,,1,,,,CC BY-SA 4.0 +1920,1,,,4/25/2018 11:32,,10,1543,"

In superdense coding, two qubits are prepared by Eve in an entangled state; one of them is sent to Alice and the other is sent to Bob. Alice is the one who wants to send (to Bob) two classical bits of information. Depending on what pair of classical bits Alice wants to send (i.e. one of $00$, $01$, $10$ and $11$), Alice applies a certain quantum operation or gate to her qubit, and sends the result to Bob, which then applies other operations to retrieve the ""classical message"".

+ +

It doesn't seem to me that superdense coding provides any advantage over classical communication techniques. In total, two qubits are transmitted (the one Eve sends to Alice and the one Eve sends to Bob) in order to convey the two classical bits that Alice wants Bob to receive, so the overall amount of communication looks no better than simply sending two bits. Furthermore, I read that if someone has access to the qubit sent to Bob, then the communication is not secure anyway.

+ +

What are the real advantages of superdense coding compared to just sending two bits of information from Alice to Bob?

+",,user72,55,,2/22/2021 15:59,2/22/2021 15:59,What are the real advantages of superdense coding?,,2,0,,,,CC BY-SA 3.0 +1921,2,,1920,4/25/2018 11:40,,9,,"

TL;DR: While two qubits must be transmitted in total, in the instant where two bits are to be communicated, only one qubit has to be sent. The information being sent is masked, but it is not truly secure.

+ +
+ +

There are two distinct phases to a superdense coding protocol. In phase 1,

+ +
    +
  • Alice and Bob prepare a Bell state $(|00\rangle+|11\rangle)/\sqrt{2}$. This is a two-qubit state and Alice holds one qubit, and Bob the other.

  • +
  • Alice and Bob travel off to distant locations, each taking their qubit. We assume that there are no errors; the quantum state does not change over time.

  • +
+ +

This has all happened in advance, long before Alice knows what message she wants to send to Bob. The second phase occurs later, when Alice decides what two bit message she wants to send to Bob.

+ +
    +
  • Depending on the message she wishes to send (she has 4 possible options), Alice applies either $\mathbb{I},X, Z$ or $Y$ on her qubit.

  • +
  • Alice sends her qubit to Bob.

  • +
  • When Bob receives Alice's qubit, he brings the two qubits together and measures in the Bell basis. Each of the four different possible measurement outcomes correspond to one of the 4 messages Alice had to choose from.

  • +
+ +

So, overall, you are correct that two qubits have to be sent. However, one of these qubits can be provided to Bob in advance, long before the communication, and before the content of the message was decided. Thus, in the instant when you want to send two bits of information (phase 2), you only have to send one qubit (the one that Alice has). It's like when you know you have lots of deadlines all due on the same day. You don't leave doing each of those jobs until absolutely the last minute, even if there are some last minute adjustments that you have to make on each. You work on things in advance so that, when that last minute information is available, you have to do the minimum possible.
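The two phases can be checked with a small state-vector simulation (a numpy sketch; the assignment of Pauli operators to two-bit messages below is one common convention, not the only one):

```python
import numpy as np

I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])
Y = 1j * X @ Z  # equals Y up to the usual convention; global phase is unobservable

# Phase 1: shared Bell state (|00> + |11>)/sqrt(2); Alice holds the first qubit.
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)

# The four Bell states Bob distinguishes with his final measurement.
bell_basis = [np.array(v) / np.sqrt(2) for v in
              ([1, 0, 0, 1], [0, 1, 1, 0], [1, 0, 0, -1], [0, 1, -1, 0])]

# Phase 2: Alice encodes two bits by acting on her qubit only.
for msg, P in enumerate([I, X, Z, Y]):
    sent = np.kron(P, I) @ bell
    overlaps = [abs(b.conj() @ sent) for b in bell_basis]
    print(msg, int(np.argmax(overlaps)))  # Bob's outcome uniquely identifies msg
```

Each of the four local operations maps the shared state to a different, mutually orthogonal Bell state, which is why Bob's measurement recovers the message perfectly.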

+ +

This is the idea behind superdense coding, and it illustrates one of the principles of quantum information: you can provide some resource at an earlier time, independent of what is going to be done later and that resource can be consumed to achieve a more efficient result in the instant.

+ +

If you're interested in security, then for the protocol as described above, an eavesdropper can only get access to the qubit that Alice sends to Bob. In that case, the eavesdropper cannot tell what information Alice was sending to Bob (the density matrix of that qubit is $\mathbb{I}/2$ no matter what Alice did to it to encode the message). However, the eavesdropper can scramble the message by applying a Pauli operation on that qubit. The eavesdropper won't know what message Bob will receive (because it will be a combination of what Alice and the eavesdropper did), but Bob won't receive what Alice intended.

+ +

If it is the case that Alice prepares both qubits and sends them to Bob (just at different times), so that an eavesdropper could intercept the first qubit as well, then the protocol is completely insecure as the eavesdropper can just replace Bob. There is no authentication of the receiver.

+ +
+ +

What does it mean to ""prepare a quantum state""?

+ +

Every quantum system of a particular dimension $d$ has an associated quantum state. This quantum state can be described mathematically as a $d$-dimensional complex vector. As theorists, when we say ""prepare a quantum state"", we mean that we specify the vector that we want that quantum system to be in.

+ +

In practice, how is this done? You first measure your quantum system to find out what state it's in already, and perform a unitary operation to convert it from what it is to what you want it to be.

+",1837,,1837,,4/26/2018 11:03,4/26/2018 11:03,,,,0,,,,CC BY-SA 3.0 +1923,2,,1918,4/25/2018 15:21,,14,,"

There is a really long list of quantum software projects on Quantiki. It's mostly about quantum simulators, quantum compilers and QC programming environments.

+ +

But you inspired me to start a curated list of open-source quantum software projects on GitHub here. It should not be exclusive to the aforementioned categories but list ANY (reasonable) open-source quantum software project out there. I will be doing my best to add as many projects as I can, but it would be great if some of you would contribute!

+",1234,,1234,,4/25/2018 16:52,4/25/2018 16:52,,,,2,,,,CC BY-SA 3.0 +1924,1,,,4/25/2018 15:27,,5,146,"

My question is somehow related with a previous one: What is the most optimistic perspective of room-temperature solid-state QC?.

+ +
+ +

Regarding solid-state qubits,

+ +
    +
  • What is the highest temperature at which the simplest quantum logic operation has been performed? Let's say: initialization, arbitrary rotation and measuring, repeated to have enough statistics in order to verify a good fidelity. In which solid-state system has this happened?
  • +
+",1955,,26,,12/13/2018 19:48,12/13/2018 19:48,Which temperature has been the highest to achieve a quantum logic operation?,,1,0,,,,CC BY-SA 3.0 +1925,2,,1920,4/25/2018 16:18,,4,,"

Superdense coding can be used to smooth out network utilization by ""storing bandwidth"". During low utilization, top up the traffic with EPR halves. During high utilization, burn the EPR halves to double the available capacity.

+ +

Superdense coding can turn a two-way quantum channel with bandwidth B (in both directions) into a one-way classical channel with bandwidth 2B. Just use the reverse direction to send EPR halves, which you then use to fuel superdense coding in the forward direction.

+ +

Superdense coding can convert high-latency bandwidth into low-latency bandwidth. For example, if you have two quantum channels with bandwidth B but one of them has a latency of 1 second instead of 10 milliseconds, you can deliver EPR halves over the high latency channel and use them to fuel the actual data being superdense coded over the low latency channel. (Picture a truck showing up with a box of EPR halves, so that your internet goes faster.)

+ +

Caveat: all of these assume that a quantum channel is less than twice as expensive as a classical channel, which may not ever be true financially speaking.

+",119,,,,,4/25/2018 16:18,,,,2,,,,CC BY-SA 3.0 +1926,1,1972,,4/25/2018 16:31,,14,1044,"

I am from a computer science background and I find it difficult to decide on the resources I should focus on while learning quantum computing, since there is so much to read/watch. My ultimate goal is to make a programming language acting as an interface between quantum computers and the programmer, similar to what C did when it was made in 1972. As a realistic intermediate stage, I would like to get to the point of writing programs on IBM's QISKit.

+ +

For that, I would need a schematic study guide in order to acquire the necessary background in Physics and the related fields required to dive into the field of quantum computing. Does this already exist: an ordered list of indispensable concepts and abilities to master, which if possible also mentions adequate material to acquire each of them?

+ +

Assume high-school-level physics knowledge. Provide a study guide, i.e. a beginner-to-expert kind of guide. Try to list video/book resources that one should follow in chronological order so as to become an expert in the field of quantum computing, to the level at which I can write my own quantum computing language (assuming I already have the other CS skills needed to write the language).

+",2209,,26,,05-01-2018 17:37,5/13/2018 18:40,"Does a study guide exist that starts from a ""purely CS background"" and advances towards ""making a new quantum programming language""?",,2,6,,,,CC BY-SA 3.0 +1927,2,,1867,4/26/2018 1:24,,2,,"

For theoretical purposes, I would say that describing two qubits either as exactly that, two qubits ($\mathbb{C}^2\otimes\mathbb{C}^2$), or as a single $d=4$ spin ($\mathbb{C}^4$), is essentially equivalent, assuming you have universal control over the whole Hilbert space, because that means you can do whatever you want. The distinction is usually most applicable when you separate the two qubits over some distance, and cannot easily implement a key gate in the universal set (i.e. the two-qubit interaction). But here, you're explicitly stating that that interaction is present. So, the theory claims it makes no difference; you can always do anything you want.

+ +

I expect that in practice (although I'm not an experimentalist), the difference comes down to the error mechanisms, which will be different between what you might actually describe as 3 different settings: two qubits $\mathbb{C}^2\otimes\mathbb{C}^2$, a single spin $\mathbb{C}^4$, or the Hilbert space structure implied by the two-qubit interaction you mentioned, $\mathbb{C}\oplus\mathbb{C}^3$. The energy levels in each case are quite different, which will affect the relaxation properties, and presumably more general interactions with the environment as well.

+",1837,,,,,4/26/2018 1:24,,,,2,,,,CC BY-SA 3.0 +1929,1,,,4/26/2018 9:23,,4,217,"

In quantum computation, a common operation performed between two quantum states is the tensor product, which allows us to create a new and higher-dimensional state from two lower-dimensional states. The tensor product is usually denoted by the symbol $\otimes$. So, if $\lvert \psi\rangle = \alpha \lvert 0\rangle + \beta \lvert 1\rangle \in \mathbb{C}^n$ and $\lvert \phi\rangle = \gamma \lvert 0\rangle + \delta \lvert 1\rangle \in \mathbb{C}^n$ are the states of two qubits, then $\lvert \psi\rangle \otimes \lvert \phi\rangle \in \mathbb{C}^{n^2}$ is their tensor product.

+ +

I have just come across the notation $\lvert \mathbf{x}, 0\rangle$ while reading section $1$ of this paper. What does it mean? I understood $\mathbf{x}$ is a (classical) bit string of length $n$ (and I think this the reason it's in bold).

+ +

I am aware of the fact that the tensor product of two vectors $\lvert \psi\rangle$ and $\lvert \phi\rangle$ can be shortened as follows $\lvert \psi\rangle \lvert \phi\rangle$. So, I don't think $\lvert \mathbf{x}, 0\rangle$ is also a shorthand for $\lvert \mathbf{x} \rangle \otimes\lvert 0\rangle$.

+",,user72,26,,12/23/2018 12:23,12/23/2018 12:23,"What do we mean by the notation $\lvert \mathbf{x}, 0\rangle$?",,1,0,,,,CC BY-SA 3.0 +1930,2,,1929,4/26/2018 9:43,,6,,"

Yes, $|\mathbf{x},0\rangle$ is a shorthand for $|\mathbf x\rangle\otimes |0\rangle$.

+ +

Note that $|\mathbf x\rangle$ itself, with $\mathbf x = x_1x_2\dots x_N$ a bit string, is just a shorthand for +$$ +|\mathbf x\rangle \equiv |x_1\rangle \otimes +|x_2\rangle \otimes\cdots\otimes +|x_N\rangle\ . +$$
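As a concrete illustration of this shorthand (a numpy sketch; the helper name `ket_string` is mine):

```python
import numpy as np

ket = {'0': np.array([1, 0]), '1': np.array([0, 1])}

def ket_string(x):
    """Build |x> = |x1> (x) |x2> (x) ... (x) |xN> for a bit string x."""
    v = np.array([1])
    for bit in x:
        v = np.kron(v, ket[bit])
    return v

# |x, 0> is then just |x> (x) |0>:
x = '101'
v = np.kron(ket_string(x), ket['0'])
print(len(v), int(np.argmax(v)))  # 16 amplitudes; the single 1 sits at index 0b1010 = 10
```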

+",491,,491,,4/26/2018 11:20,4/26/2018 11:20,,,,0,,,,CC BY-SA 3.0 +1931,1,1932,,4/26/2018 12:10,,3,199,"

I recently read hand-written notes about the ""secret mask"" or ""secret string"" algorithm (which I can't share here) with the following notation $\lvert \underline{x} \rangle$, i.e. a letter with a line under it. What could it mean?

+",,user72,,user72,4/26/2018 15:34,4/26/2018 15:34,What does the notation $\lvert \underline{x} \rangle$ mean?,,1,2,,,,CC BY-SA 3.0 +1932,2,,1931,4/26/2018 12:59,,3,,"

It probably means that you take $x$ to be a binary string, $x\in\{0,1\}^n$ where you're talking about a system of $n$ qubits. The underline probably isn't necessary (it's hard to tell without more context), but just conveys that you could think of it as a vector, i.e. if $x=011010$, you could think of it as a vector $\underline{x}=(0,1,1,0,1,0)$. You don't always have to, it depends what you're going to do with it. Generally you only have to think about $x$ as a binary string. However, there are certain operations where it might help to think about it as a vector. For instance, the Hadamard transform can be written as +$$ +H^{\otimes n}=\frac{1}{\sqrt{2^n}}\sum_{x,z\in\{0,1\}^n}(-1)^{x\cdot z}|x\rangle\langle z|, +$$ +where you calculate $x\cdot z$ using the usual inner product for vectors. So, it can help to draw attention to the fact that you're using them like that by underlining them. Whether or not you do that largely depends on mood! For example, you can see that I haven't chosen to do it.
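This formula can be checked directly (a numpy sketch, using the convention that the first bit of the string is the most significant index bit):

```python
import numpy as np
from itertools import product

n = 3
H1 = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
Hn = H1
for _ in range(n - 1):
    Hn = np.kron(Hn, H1)  # H tensored with itself n times

# Rebuild the matrix from the formula sum_{x,z} (-1)^{x.z} |x><z| / sqrt(2^n).
formula = np.zeros((2**n, 2**n))
for x in product([0, 1], repeat=n):
    for z in product([0, 1], repeat=n):
        i = int(''.join(map(str, x)), 2)
        j = int(''.join(map(str, z)), 2)
        formula[i, j] = (-1) ** np.dot(x, z) / np.sqrt(2**n)

print(np.allclose(Hn, formula))  # True
```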

+",1837,,1837,,4/26/2018 13:10,4/26/2018 13:10,,,,1,,,,CC BY-SA 3.0 +1936,2,,1887,4/26/2018 17:00,,3,,"

To answer your question, if there is an actual field of research, Quantum Information Biology (QIB) seems to fit what you are looking for.

+ +

Once a textbook has been written on a subject I think it is fair to classify it as a valid research field. Also you may consider the controversial Penrose–Hameroff ‘Orch OR’ model to fall into this category, and maybe even the no less controversial research on Quantum Cognition.

+ +

But you seem to be most interested in learning from biology for QIS which is what the Oxford Martin Programme on Bio-Inspired Quantum Technologies is investigating. A word of caution though, if you look at their publications they appear to be mostly Quantum Biology. To my knowledge no actual Quantum Information processing has been experimentally demonstrated to take place in biological systems.

+ +

And while there is no doubt that engineering and technology can often be inspired, and improved upon, by mimicking biology, I don't think this hypothesis is necessarily a foregone conclusion when it comes to quantum computing or quantum technologies.

+ +

Take photosynthesis for example. It is an incredibly well working process, arguably the most important life sustaining one on earth, but as you can tell from the color of leaves it only uses a small part of the sun light's spectrum. That is why it is not very efficient. It only converts 3%-6% of the incoming light to energy that can be used by the plant. The best solar cells are pushing beyond 40% efficiency.

+ +

So while these research fields are wildly exciting, I don't see Quantum Biocomputing happening anytime soon.

+",1375,,1375,,4/26/2018 17:06,4/26/2018 17:06,,,,0,,,,CC BY-SA 3.0 +1937,1,1941,,4/27/2018 8:10,,24,1658,"

I've read in many sources and books on adiabatic quantum computation (AQC) that it is crucial for the initial Hamiltonian $\hat{H}_i$ to not commute with the final Hamiltonian $\hat{H}_f$, i.e. $\left[\hat{H}_i,\hat{H}_f\right]\neq 0$. But I've never seen an argument to why it's so important.

+ +

If we assume a linear time dependence the Hamiltonian of the AQC is +$$ +\hat{H}\left(t\right)~=~\left(1-\frac{t}{\tau}\right)\hat{H}_i+\frac{t}{\tau}\hat{H}_f, +\qquad \left(0\leq t\leq \tau \right) +$$ +where $\tau$ is the adiabatic time scale.

+ +

So my question is: Why is it crucial that the initial Hamiltonian does not commute with the final Hamiltonian?

+",2136,,23,,4/27/2018 10:59,07-02-2018 07:13,Why is it crucial that the initial Hamiltonian does not commute with the final Hamiltonian in adiabatic quantum computation?,,4,0,,,,CC BY-SA 3.0 +1938,2,,1937,4/27/2018 8:30,,3,,"

In the context of Ising optimizers, having an initial Hamiltonian that commutes with the problem Hamiltonian means it is essentially a product of $\sigma^Z$ operators, which means that its eigenstates are classical bitstrings. Hence the ground state at the beginning ($t=0$) will be classical as well, not a superposition of all possible bitstrings.

+ +

Moreover, even going beyond the strict boundaries of AQC (e.g. open-system quantum annealing, QAOA etc.) if the driving Hamiltonian commutes then it cannot induce transitions between the eigenstates of the problem Hamiltonian but only change the phase of the amplitudes in the wavefunction; and you want a driver which is able to induce spin-flips in order to explore the search space.

+",410,,,,,4/27/2018 8:30,,,,0,,,,CC BY-SA 3.0 +1939,1,1960,,4/27/2018 9:58,,8,2650,"

On page 157 of Kaye, Laflamme and Mosca they write that in Grover's algorithm we need to apply Grover's iterate a total of: +$$\Big\lfloor \frac{\pi}{4} \sqrt{N}\Big\rfloor$$ +(They actually wrote $\Big\lfloor \frac{\pi}{4} \frac{1}{\sqrt{N}}\Big\rfloor$ but I assume the above is what is intended.)

+ +

My question is why the floor? Would it not be better to go to the nearest integer - since if e.g. $ \frac{\pi}{4} \sqrt{N}=5.9999$ it would seem a bit silly to do $5$ rather then $6$ iterations.

+",2015,,55,,8/19/2019 10:32,8/19/2019 10:32,"In Grover's Algorithm, why does the optimal number of iterations involve a floor?",,2,0,,,,CC BY-SA 4.0 +1940,1,1964,,4/27/2018 10:01,,17,321,"

On the surface, quantum algorithms have little to do with classical computing and P vs NP in particular: Solving problems from NP with quantum computers tells us nothing about the relations of these classical complexity classes1.

+ +

On the other hand, the 'alternative description' of the classical complexity class PP as the class PostBQP presented in this paper is, as far as I'm aware, considered as an important result for 'classical complexity', by 'quantum complexity'.

+ +

In fact, Scott Aaronson, the author of the paper, writes at the end of the abstract:

+ +
+

This illustrates that quantum computing can yield new and simpler proofs of major results about classical computation.

+
+ +
+ +

Hence, my question is: are there results from the field of quantum complexity that 'simplify' the P vs NP problem, similar to the quantum description of PP? If there are no such results, is there a good reason to not expect these results, despite the 'success' for PP?

+ +

1: Take the answer to this question, for example: Would the P vs. NP problem become trivial as a result of the development of universal quantum computers?

+",253,,,,,5/23/2018 10:19,Are there results from quantum algorithms or complexity that lead to advances on the P vs NP problem?,,1,1,,,,CC BY-SA 3.0 +1941,2,,1937,4/27/2018 11:26,,14,,"

In adiabatic QC, you encode your problem in a Hamiltonian such that your result can be extracted from the ground state. Preparing that ground state is hard to do directly, so you instead prepare the ground state of an 'easy' Hamiltonian, and then slowly interpolate between the two. If you go slow enough, the state of your system will stay in the ground state. At the end of your process, you'll have the solution.

+ +

This works according to the Adiabatic theorem. For the theorem to hold, there must be an energy gap between the ground state and the first excited state. The smaller the gap becomes, the slower you need to interpolate to prevent mixing between the ground state and first excited states. If the gap closes, such mixing cannot be prevented, and you can't go slow enough. The procedure fails at that point.

+ +

If the initial and final Hamiltonian commute, it means they have the same energy eigenstates. So they agree on which states get assigned energy, and only disagree on the energies they get. Interpolating between the two Hamiltonians just changes the energies. The final ground state would therefore have been an excited state at the beginning, and the original ground state becomes excited at the end. At some point, when passing by each other, the energies of these states will be equal, and so the gap between them closes. This is sufficient to see that the energy gap must close at some point.

+ +

Having non-commuting Hamiltonians is therefore a necessary condition of keeping the gap open, and hence for AQC.
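The gap argument can be seen numerically in a minimal single-qubit sketch (my own addition, not from the original answer): take the final Hamiltonian $\hat H_f = Z$ with a commuting driver ($-Z$) versus a non-commuting one ($X$), and track the gap along the linear interpolation:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

def min_gap(Hi, Hf, steps=101):
    """Minimum gap between ground and first excited state along (1-s)Hi + sHf."""
    gaps = []
    for s in np.linspace(0, 1, steps):
        evals = np.linalg.eigvalsh((1 - s) * Hi + s * Hf)
        gaps.append(evals[1] - evals[0])
    return min(gaps)

print(min_gap(-Z, Z))  # commuting driver: gap closes (0.0 at s = 1/2)
print(min_gap(X, Z))   # non-commuting driver: gap stays open (sqrt(2) at s = 1/2)
```

With the commuting driver the two levels cross at the midpoint, exactly as argued above; the non-commuting driver produces an avoided crossing instead.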

+",409,,,,,4/27/2018 11:26,,,,1,,,,CC BY-SA 3.0 +1942,2,,1937,4/27/2018 11:27,,6,,"

If two matrices (in this case, Hamiltonians) commute, they have the same eigenvectors. So, if you prepare a ground state of the first Hamiltonian, then that will (roughly speaking) remain an eigenstate throughout the whole adiabatic evolution, and so you get out just what you put in. There's no value to it.

+ +

If you want to be a little more strict, then it could be that your initial Hamiltonian has a degeneracy which is lifted by the second Hamiltonian, and you might be hoping to cause the system to evolve into the unique ground state. Note, however, that the degeneracy is lifted the instant there's a non-zero amount of the second Hamiltonian. Whatever effect it can have is an instantaneous one. I believe that you don't get a proper adiabatic evolution. Instead, you have to write your initial state as a superposition of the new eigenstates, and these start to evolve over time, but you never increase the overlap of your state with the target state (the ground state).

+",1837,,,,,4/27/2018 11:27,,,,2,,,,CC BY-SA 3.0 +1944,2,,1939,4/27/2018 17:19,,2,,"

Using the floor is logical as a general recommendation for building a Grover's algorithm circuit, because it means that we need fewer gates than the ceiling would require.

+ +

Grover's algorithm is probabilistic; the probability of obtaining the correct result grows until we reach about $\pi/4\sqrt{N}$ iterations, and starts decreasing after that number. For large $N$ the probability of obtaining the correct result is very close to $1$ if the number of iterations is close to $\pi/4\sqrt{N}$; the exact number of iterations does not matter, provided it is close to $\pi/4\sqrt{N}$.
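This behaviour can be checked directly from the known success probability $\sin^2((2k+1)\theta)$ with $\theta=\arcsin(1/\sqrt N)$ (a quick sketch of my own, not part of the original answer):

```python
import numpy as np

N = 2**10
theta = np.arcsin(1 / np.sqrt(N))

def p(k):
    """Success probability after k Grover iterations (single marked item)."""
    return np.sin((2 * k + 1) * theta) ** 2

k_opt = int(np.floor(np.pi / 4 * np.sqrt(N)))  # 25 for N = 1024
assert p(k_opt) > 0.999        # near-certain success at the optimum
assert p(k_opt) > p(k_opt + 5) # overshooting lowers the probability again
```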

+",2105,,2105,,05-06-2018 15:51,05-06-2018 15:51,,,,0,,,,CC BY-SA 4.0 +1945,1,1954,,4/27/2018 17:53,,5,101,"

Suppose we have a quantum system $Q$ with an initial state $\rho^{(Q)}$. The measurement process will involve two additional quantum systems: an apparatus system $A$ and an environment system $E$. We suppose that the system $Q$ is initially prepared in the state $\rho_{k}^{(Q)}$ with a priori probability $p_k$. The state of the apparatus $A$ and environment $E$ is $\rho_{0}^{(AE)}$, independent of the preparation of $Q$. The initial state of the entire system given the $k$th preparation for $Q$ is $$\rho_{k}^{(AEQ)} = \rho_{0}^{(AE)} \otimes \rho_{k}^{(Q)}.$$ Averaging over the possible preparations, we obtain $$\rho^{(AEQ)} = \sum_{k} p_{k} \rho_{k}^{(AEQ)}. $$

+ +

In quantum information theory, the accessible information of a quantum system is given by $$\chi := S(\rho) - \sum_{j}P_{j}S(\rho_{j}),$$ where $S$ is the von Neumann entropy of the quantum state. How can we show that if $\rho_{0}^{(AE)}$ is independent of the preparation $k$, that $$\chi^{(AEQ)} = \chi^{(Q)}?$$

+ +

Thanks for any assistance.

+",2032,,10480,,4/16/2021 20:37,4/16/2021 20:37,"Accessible information of system vs system, apparatus and environment",,1,0,,,,CC BY-SA 3.0 +1952,1,1957,,4/28/2018 4:26,,9,1500,"

Quantum Annealing (related questions: Quantum Annealing, or hamiltonian related) is the process used in D-Wave's Quantum Annealer, in which the energy landscape is explored for different solutions and, by tuning a suitable Hamiltonian, one zeroes in on a possible optimal solution to a problem. The process of Quantum Annealing reduces ""transverse magnetic fields"" in the Hamiltonian, in addition to exploiting other quantum effects like quantum tunnelling, entanglement, and superposition, which in turn all play a part in zeroing in on a ""valley"" of a quantum mechanical wave function, where the ""most likely"" solution lies.

+ +

The process of Reverse Annealing, very briefly, is to use classical methods such as Simulated Annealing to find a solution, and then home in on a valley using Quantum Annealing. If the Hamiltonian used by the Quantum Annealer is already in a ""valley"", as it is being passed a solution in the first place, does the D-Wave machine reach another ""valley"" (a better solution?) using the Hamiltonian passed to it?

+",429,,26,,05-04-2018 13:23,6/13/2018 9:42,What precisely is Reverse Annealing?,,2,0,,,,CC BY-SA 3.0 +1953,2,,1701,4/28/2018 9:02,,1,,"

Let me go for a self-learner experience. After some reading, my short answer to my own question

+ +
+

Would the calculation of the loss of entanglement be necessarily related to delocalized vibrational modes that simultaneously involve the local environment of both triplets?

+
+ +

is: probably yes, but not necessarily/primarily.

+ +

A longer answer follows. With a previous familiarity with decoherence but non-familiarity with disentanglement, this paper was extremely helpful: Entanglement loss in molecular quantum-dot qubits due to interaction with the environment (Enrique P Blair et al, 2018, J. Phys.: Condens. Matter, 30, 195602). The physical scenario is not identical, but allows for a few key insights:

+ +
    +
  • Like coherence, entanglement is maintained by default, not by a process, which is to say: we only need to look for processes explicitly destroying it. This is why one gets better numbers for entangled photons compared with solid state qubits, see What is the maximum separation between two entangled qubits that has been achieved experimentally?
+
  • From the above point (and from the paper above), let us first consider the case where two qubits are far enough apart to avoid interaction with each other, and also to avoid interaction with a common environment. For such isolated qubits, just by accounting for decoherence we will fully account for disentanglement.
+
  • Entanglement is exclusivity: entanglement between two parties is gradually lost as these parties entangle more and more with other parties. So, with entanglement between two qubits (as with coherence of one qubit) the primary focus of our attention should be how the qubit interacts with its environment. In the case under consideration: with the spin bath and the phonon bath. The same processes that destroy coherence will destroy entanglement, essentially at the same rate. For details, calculate fidelities and/or entanglement witnesses.
+
  • If the two qubits are not perfectly isolated from each other, there is an interaction between them, which can be direct or via a common environment. In that case, the two qubits can experience a collective evolution which, beyond affecting their individual coherence, also alters their entanglement. This is what the question is asking, and here the answer would be a conditional yes. Collective vibrational modes affecting both qubits need to be considered, since they promote a collective evolution that can either create or destroy entanglement.
+",1847,,,,,4/28/2018 9:02,,,,0,,,,CC BY-SA 3.0 +1954,2,,1945,4/28/2018 9:25,,6,,"

For density matrices $\rho_A$ and $\rho_B$ having eigenvalues $\lambda^{\left(A\right)}$ and $\lambda^{\left(B\right)}$, \begin{align}S\left(\rho_A\otimes\rho_B\right) &= -\operatorname{Tr}\left[\rho_A\otimes\rho_B\ln\left(\rho_A\otimes\rho_B\right)\right]\\ +&= -\sum_{j, k}\lambda^{\left(A\right)}_j\lambda^{\left(B\right)}_k\ln\left(\lambda^{\left(A\right)}_j\lambda^{\left(B\right)}_k\right)\\ +&= -\sum_{j, k}\left[\lambda^{\left(A\right)}_j\lambda^{\left(B\right)}_k\ln\left(\lambda^{\left(A\right)}_j\right) + \lambda^{\left(A\right)}_j\lambda^{\left(B\right)}_k\ln\left(\lambda^{\left(B\right)}_k\right)\right]\\ +&= -\sum_j\lambda^{\left(A\right)}_j\ln\left(\lambda^{\left(A\right)}_j\right)\sum_k\lambda^{\left(B\right)}_k - \sum_j\lambda^{\left(A\right)}_j\sum_k\lambda^{\left(B\right)}_k\ln\left(\lambda^{\left(B\right)}_k\right)\\ +&= -\sum_j\lambda^{\left(A\right)}_j\ln\left(\lambda^{\left(A\right)}_j\right) - \sum_k\lambda^{\left(B\right)}_k\ln\left(\lambda^{\left(B\right)}_k\right)\\ +&= S\left(\rho_A\right) + S\left(\rho_B\right). +\end{align}
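This additivity of the von Neumann entropy can be verified numerically for randomly chosen density matrices (my own sketch, not part of the original answer):

```python
import numpy as np

def random_density(d, rng):
    """Random d x d density matrix: positive semidefinite with unit trace."""
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = A @ A.conj().T
    return rho / np.trace(rho)

def entropy(rho):
    """Von Neumann entropy -Tr(rho ln rho) via the eigenvalues."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]  # 0 ln 0 = 0 by convention
    return float(-np.sum(evals * np.log(evals)))

rng = np.random.default_rng(0)
rho_A, rho_B = random_density(2, rng), random_density(3, rng)
lhs = entropy(np.kron(rho_A, rho_B))
rhs = entropy(rho_A) + entropy(rho_B)
assert np.isclose(lhs, rhs)
```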

+ +

This gives

+ +

\begin{align} +\chi^{\left(AEQ\right)} &= S\left(\rho^{\left(AEQ\right)}\right) - \sum_jp_jS\left(\rho_j^{\left(AEQ\right)}\right)\\ +&=S\left(\rho^{\left(AE\right)}\right) + S\left(\rho^{\left(Q\right)}\right) - \sum_jp_j\left(S\left(\rho^{\left(AE\right)}\right) + S\left(\rho_j^{\left(Q\right)}\right)\right)\\ +&= S\left(\rho^{\left(AE\right)}\right) + S\left(\rho^{\left(Q\right)}\right) - \sum_jp_jS\left(\rho^{\left(AE\right)}\right) -\sum_jp_jS\left(\rho_j^{\left(Q\right)}\right). +\end{align}

+ +

As $\sum_jp_j = 1$, it follows that $$\chi^{\left(AEQ\right)} = S\left(\rho^{\left(Q\right)}\right)-\sum_jp_jS\left(\rho_j^{\left(Q\right)}\right) = \chi^{\left(Q\right)}$$

+",23,,,,,4/28/2018 9:25,,,,2,,,,CC BY-SA 3.0 +1955,1,1970,,4/28/2018 9:40,,10,385,"

I am trying to build a quantum computation library as my university project. I am still learning all the aspects of the Quantum Computing field. I know there are efficient libraries already for quantum emulation. I just want to make my own, which will help me to grasp some of the core concepts of Quantum Computing.

+ +

I know that $n$ qubits can be stored with a $2^n$ element complex array. Also, an $n$-qubit gate is a $2^n \times 2^n$ 2D array. So, the following are my doubts (mostly related to entanglement):

+ +
    +
  1. When do I need to find the tensor product of gates (like $I \otimes H \otimes I$, for a $3$ qubit system)? Is it always necessary to compute the tensor product of order $2^n \times 2^n$, even if the qubits are not entangled?

+
  2. With only a $2^n$ element array (in which I store the coefficients), can I actually somehow calculate which qubits are entangled? Or do I need to make another data structure to store the entanglement information of my $n$ qubits (about which qubits are entangled)?

+
  3. Is my 2nd question actually relevant? Do I need to keep track of the entanglement information at all? I mean, I don't know whether multiplying gates with coefficients is enough (even if the system is entangled). Maybe it is relevant at the time of measurement only.

+
+",2060,,253,,05-01-2018 12:54,05-03-2018 06:07,How to keep track of entanglements when emulating quantum computation?,,1,11,,,,CC BY-SA 3.0 +1956,2,,1,4/29/2018 16:17,,11,,"

To complete what others have said: as far as we know a (classical) Turing machine cannot truly simulate quantum correlations. This is explicitly claimed in the section Properties of the universal quantum computer of the seminal paper by David Deutsch, Quantum theory, the Church-Turing principle and the universal quantum computer (Proceedings of the Royal Society of London A 400, pp. 97-117 (1985)).

+ +

Details will depend on the implementation or on your exact definitions of Turing machine, of quantum computer, and especially of simulate (if you are generous enough with what simulate means, anything can simulate anything). Generally speaking, it is possible to design a quantum computer which, when repeatedly operated by starting from the exact same starting state (or input bits), in every operation generates random output bits which present certain quantum correlations with each other.

+ +

As far as I know, a Turing machine cannot do that.

+",1847,,,,,4/29/2018 16:17,,,,2,,,,CC BY-SA 3.0 +1957,2,,1952,4/29/2018 19:47,,5,,"

Until recently, D-Wave's quantum annealing devices always started from a uniform superposition over all $N$ qubits:

+ +

                                                $|\psi_{initial}\rangle = |+\rangle_1 \otimes |+\rangle_2 \otimes \cdots \otimes |+\rangle_N$

+ +

where $|+\rangle_i = \frac{1}{\sqrt{2}} (|0\rangle_i + |1\rangle_i)$.
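As a small numerical sketch (my own addition, not from the whitepaper): building this initial uniform superposition with Kronecker products shows that every one of the $2^N$ amplitudes equals $1/\sqrt{2^N}$:

```python
import numpy as np

N = 4
plus = np.array([1, 1]) / np.sqrt(2)  # |+> = (|0> + |1>) / sqrt(2)

state = np.array([1.0])
for _ in range(N):
    state = np.kron(state, plus)

assert state.shape == (2**N,)
assert np.allclose(state, 1 / np.sqrt(2**N))  # uniform over all bitstrings
```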

+ +

So let's suppose you already ran a few anneals with this setup and one of the low-energy results looks like a relatively good solution (a local optimum) to your optimization problem. Until the very recent introduction of the reverse annealing feature, it was impossible to use this solution as input for the next anneal in order to explore the local space around that solution for bitstrings with even lower energy. Hence, reverse annealing allows us to initialize the quantum annealer with a known (classical) solution and search the state space around this local optimum.

+ +

When exploring complicated (rugged) energy landscapes of optimization problems you need to balance the global exploration of the state space with the exploitation of local optima. In traditional (D-Wave) quantum annealing, we start with a high transverse field which then gets gradually decreased as you described in your question. D-Wave's quantum annealer was thereby performing a global search (due to a lot of quantum tunneling) at the beginning of the annealing schedule when the transverse field is strong. As the transverse field gets weaker, the search gets more and more local. In contrast, reverse annealing starts with a classical solution defined by the user, then gradually increases the transverse field (backward annealing), and then decreases the transverse field again (forward annealing).

+ +

This introduces the new parameter reversal distance which determines how far you want to anneal backward (how strong the transverse field should become). D-Wave published the following two plots in this D-Wave Whitepaper:

+ +

+ +

In the left plot you can see that reversal distance is a very important new hyperparameter since its value determines the probability of obtaining a new ground state (blue region). If the reversal distance is too low, you will get the same state you started with (red region) which would be useless. And of course if you reverse anneal for too long you essentially perform traditional quantum annealing and lose the information that you started with. Remember that too much transverse field means that we are performing global search again!

+ +

The right plot shows essentially the same thing by plotting Hamming distance against reversal distance and the probability of obtaining a new ground state. For your problem at hand, you want to find that sweet spot (the maximum of the red curve). For large reversal distances we again see that we get solution strings that are far from our initial state in terms of Hamming distance.

+ +

All in all, reverse annealing is pretty new stuff and to the best of my knowledge there are no published papers about its effectiveness. In their Whitepaper, D-Wave claims the generation of 'new global optima up to +150 times faster than forward quantum annealing'.

+",1234,,,,,4/29/2018 19:47,,,,0,,,,CC BY-SA 3.0 +1958,1,,,4/30/2018 4:17,,11,462,"

I have heard various talks at my institution from experimentalists (who all happened to be working on superconducting qubits) that the textbook idea of true ""Projective"" measurement is not what happens in real-life experiments. Each time I asked them to elaborate, and they say that ""weak"" measurements are what happen in reality.

+ +

I assume that by ""projective"" measurements they mean a measurement on a quantum state like the following:

+ +

$$P\vert\psi\rangle=P(a\vert\uparrow\rangle+ b\vert\downarrow\rangle)=\vert\uparrow\rangle \,\mathrm{or}\, \vert\downarrow\rangle$$

+ +

In other words, a measurement which fully collapses the qubit.

+ +

However, if I take the experimentalist's statement that real measurements are more like strong ""weak""-measurements, then I run into Busch's theorem, which says roughly that you only get as much information as how strongly you measure. In other words, I can't get around not doing a full projective measurement, I need to do so to get the state information

+ +

So, I have two main questions:

+ +
    +
  1. Why is it thought that projective measurements cannot be performed experimentally? What happens instead?

+
  2. What is the appropriate framework to think about experimental measurement in quantum computing systems that is actually realistic? Both a qualitative and quantitative picture would be appreciated.
+",2260,,2260,,4/30/2018 5:51,11-04-2018 14:21,Are true Projective Measurements possible experimentally?,,2,2,,,,CC BY-SA 3.0 +1959,2,,1924,4/30/2018 6:16,,1,,"

I think your reference has the answer: nitrogen vacancy centers in diamond, where you can do one qubit gates at room temperature. In fact, even higher temperatures are possible, but you will have to play a tradeoff between fidelity and temperature at some point.

+ +

That said, NV centers are not scalable, and I don't think more than 2 qubits will ever be really possible due to the physical problems with interacting immobile NV centers which are randomly distributed.

+",2260,,2260,,4/30/2018 6:33,4/30/2018 6:33,,,,3,,,,CC BY-SA 3.0 +1960,2,,1939,4/30/2018 9:36,,13,,"

Applying the Grover iterate a total number of $\lfloor \frac{\pi}{4}\sqrt{N}\rfloor$ times is the best choice if we want to maximize the success probability of Grover's algorithm. This is to some extent explained in Kaye, Laflamme and Mosca (KLM), but let me elaborate on the most important details here.

+ +

Let $n$ be a natural number, $N = 2^n$, and suppose that we have a function $f : \{0,1\}^n \to \{0,1\}$. Suppose that we can evaluate this function $f$ by a quantum phase oracle $U_f$ (this is the same operator $U_f$ as described by KLM in equation 8.1.5 on page 155), with the property that for all $i \in \{0,1\}^n$: +$$U_f : |i\rangle \mapsto (-1)^{f(i)}|i\rangle$$ +Now, we define $G = f^{-1}(\{1\})$ and $B = f^{-1}(\{0\})$. Hence, $G \subseteq \{0,1\}^n$ is the set of all $n$-bit strings that will evaluate to $1$ if we use them as input in $f$. Another way of saying this is $i \in G \Leftrightarrow f(i) = 1$. For the purpose of this question, let's assume that $|G| = 1$, hence there is exactly one $i \in \{0,1\}^n$ such that $f(i) = 1$.

+ +

Next, we proceed with defining the good and bad states, as follows (these are defined by KLM in equation 8.1.3 on page 155): +$$|\psi_{good}\rangle = \frac{1}{\sqrt{|G|}}\sum_{i \in G}|i\rangle \qquad \text{and} \qquad |\psi_{bad}\rangle = \frac{1}{\sqrt{|B|}}\sum_{i \in B}|i\rangle$$ +Moreover, we define: +$$|\psi_{uniform}\rangle = \frac{1}{\sqrt{N}}\sum_{i \in \{0,1\}^n}|i\rangle = \sin\theta |\psi_{good}\rangle + \cos\theta |\psi_{bad}\rangle$$ +where $\theta = \arcsin(\sqrt{|G|/N}) = \arcsin(1/\sqrt{N})$.

+ +

Let's now devise an intuitive visualization of Grover's algorithm. To that end, consider all the quantum states that can be written as $\alpha|\psi_{good}\rangle + \beta|\psi_{bad}\rangle$ with $|\alpha|^2 + |\beta|^2 = 1$, and let's display them in the following picture.

+ +

+ +

We can use the visualization above to understand what Grover's iterate actually does. Grover's iterate does nothing but rotate any state in the circle above over an angle $2\theta$ counterclockwise. Hence, by applying Grover's iterate multiple times, we can rotate states over angles that are multiples of $2\theta$. How this works can be found in KLM, specifically we see the rotation over an angle of $2\theta$ appear in the last equation on page 160.

+ +

Now, suppose that we start with the state $|\psi_{uniform}\rangle$. After applying $k$ iterations of Grover's iterate, we obtain the state $|\psi_k\rangle$. From the picture, using the intuitive interpretation of Grover's iterate, we can easily see that the angle between $|\psi_k\rangle$ and $|\psi_{bad}\rangle$ is $(2k+1)\theta$. Hence, we find: +$$|\psi_k\rangle = \sin((2k+1)\theta)|\psi_{good}\rangle + \cos((2k+1)\theta)|\psi_{bad}\rangle$$ +As we want to maximize the success probability, we want our state to be as close to $|\psi_{good}\rangle$ as possible. Hence, we want $(2k+1)\theta$ to be as close to $\frac{\pi}{2}$ as possible. Solving this equation, we obtain: +$$(2k+1)\theta = \frac{\pi}{2} \Leftrightarrow k = \frac{\pi}{4\theta} - \frac12$$ +Recall that $\theta = \arcsin(1/\sqrt{N})$. By making the assumption that $N$ is big, and using the approximation $\arcsin(x) \approx x$ when $|x| \ll 1$, we obtain that our ideal choice for $k$ would be: +$$k \approx \frac{\pi}{4}\sqrt{N} - \frac12$$ +But we can only apply Grover's iterate an integer number of times, say $k^*$, so we must find the integer $k^*$ that matches $k$ as closely as possible. Hence, we round the quantity on the RHS to obtain (using the notation explained in the box on page 163 of KLM): +$$k^* = \left[\frac{\pi}{4}\sqrt{N} - \frac12\right] = \left\lfloor \frac{\pi}{4}\sqrt{N}\right\rfloor$$ +as $[x-\frac12] = \lfloor x\rfloor$ for all real $x$. If we trace back where this $\frac12$ comes from, we can see that it originates from the fact that we already start at an elevated angle of $\theta$ in the circle, even before we apply any of the Grover iterates. The floor, hence, is a deep consequence of the fact that the total angle of rotation doesn't have to be all of $\frac{\pi}{2}$, but rather $\frac{\pi}{2} - \theta$.
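As a numerical sanity check of the final formula (my own addition, not part of the original answer): for a range of $N$, the floor prescription stays within one iteration of the exact optimum $\frac{\pi}{4\theta}-\frac12$ and keeps the success probability $\sin^2((2k^*+1)\theta)$ very close to $1$:

```python
import numpy as np

for n in range(4, 21):
    N = 2**n
    theta = np.arcsin(1 / np.sqrt(N))
    k_exact = np.pi / (4 * theta) - 0.5              # ideal real-valued iteration count
    k_star = int(np.floor(np.pi / 4 * np.sqrt(N)))   # the floor prescription
    assert abs(k_star - k_exact) <= 1                # floor stays within 1 of the optimum
    # success probability remains near-perfect for every N in the range
    assert np.sin((2 * k_star + 1) * theta) ** 2 > 1 - 4 / N
```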

+",24,,24,,4/30/2018 9:44,4/30/2018 9:44,,,,0,,,,CC BY-SA 3.0 +1961,1,1962,,4/30/2018 11:49,,7,293,"

According to (Macchiavello, Palma, Zeilinger, 2001; pg82) a lower bound of the encoding Hilbert space of a non degenerate code is given by the quantum version of the Hamming bound: +$$2^k \sum_{i=0}^t 3^i \begin{pmatrix} n \\ i\end{pmatrix}\le 2^n$$ +where we are looking at a $[n,k,2t+1]$ code. Does such a bound exist for a degenerate code? and why is it different (if it indeed is)?

+",2015,,26,,05-07-2018 13:17,05-07-2018 13:17,Lower bound for Degenerate Codes?,,1,0,,,,CC BY-SA 3.0 +1962,2,,1961,4/30/2018 15:40,,9,,"

This bound works by counting the number of orthogonal states that must be available. If you're encoding into $n$ qubits, you can't require more than $2^n$ orthogonal states, because that's all that's available. This is the right hand side of the bound.

+ +

If you wish to encode $k$ logical qubits in a distance $2t+1$ code, then each of the $2^k$ basis states of those logical qubits must encode to something different. Moreover, you need to be able to correct for up to $t$ errors of type $X$, $Y$ or $Z$. If we require each of these to map to a different orthogonal state, then there are $3n$ possible 1-qubit errors, $3^2\binom{n}{2}$ 2-qubit errors (choice of one of 3 Paulis for each error, and a pair of locations for them to happen at), and so on. So, this gives the stated bound, known as the Quantum Hamming bound. However, an essential feature of the derivation is the assumption that each error is mapped onto a different orthogonal state.
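As a concrete check of this counting (my own addition, not part of the original answer), the bound is easy to evaluate; the five-qubit code $[[5,1,3]]$ ($n=5$, $k=1$, $t=1$) saturates it exactly:

```python
from math import comb

def hamming_lhs(n, k, t):
    """Left-hand side of the quantum Hamming bound: 2^k * sum_i 3^i C(n, i)."""
    return 2**k * sum(3**i * comb(n, i) for i in range(t + 1))

# [[5,1,3]] code: 2 * (1 + 3*5) = 32 = 2^5, so the bound is tight
assert hamming_lhs(5, 1, 1) == 2**5
# n = 4, k = 1, t = 1 would violate the bound: no such non-degenerate code exists
assert hamming_lhs(4, 1, 1) > 2**4
```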

+ +

The very definition of a degenerate code is that multiple errors can be mapped onto the same state. As a trivial example, consider the effect of a single-qubit $Z$ error on any one of the qubits of the GHZ state +$$ +\frac{1}{\sqrt{2}}(|0\rangle^{\otimes n}+|1\rangle^{\otimes n}). +$$ +No matter where that error happens, the resultant state is the same, but that's fine: I don't need to be able to identify which of the $n$ qubits the error happened on to fix it. Once I know the error has happened, I can apply a $Z$ gate on any of the qubits that I choose in order to fix it. (I don't claim that this example enables you to detect that error.) So, the Quantum Hamming bound does not apply to degenerate codes. Indeed, there are known examples where the bound is beaten, e.g. D. P. DiVincenzo, P. W. Shor, and J. A. Smolin, Phys. Rev. A 57, 830 (1998) (free version), although there are surprisingly few.

+ +

The only replacement that I know of is the Quantum Singleton Bound, $n-k\geq 4t$. The Quantum Hamming bound, in practice, appears to give very good estimates of what can be achieved, but is not absolute when it comes to degenerate codes.

+",1837,,1837,,4/30/2018 18:09,4/30/2018 18:09,,,,0,,,,CC BY-SA 3.0 +1963,2,,1958,4/30/2018 16:56,,8,,"

Let's step back from QC for a moment and think about a textbook example: the projector onto a position eigenstate $|x\rangle$. This projective measurement is obviously unphysical, as the position eigenstates $|x\rangle$ are themselves unphysical due to the uncertainty principle. The real measurement of position, then, is one with some uncertainty. One can treat this either as a weak measurement of position, or as a projective measurement onto a non-orthonormal basis (a strong POVM), where the various basis elements have some support on multiple values of $x$: say pixels on a detector.

+ +

Going back into QC, most systems' measurements are pretty close to projective, or are 'strong' measurements at the least. In some systems, like ion traps, the readout can be thought of as a series of weak measurements that collectively form a strong one. A photon counter, on the other hand, is very close to a projective measurement with some odd projectors due to finite efficiency--no click corresponds to a projector onto $|0\rangle + (1-e)^n|n\rangle$, for instance.

+ +

On the other hand, that projector doesn't leave behind the state listed above, because the apparatus also absorbs the photon.

+ +

To sum up, thinking of things as POVMs (Positive operator-valued measures) is probably the most-right intuition, where you can think of the outcomes of the POVM mostly as non-orthonormal projectors. Non-projective POVMs also exist, but are less common in practice in systems I've thought about.

+",1807,,1807,,4/30/2018 20:37,4/30/2018 20:37,,,,8,,,,CC BY-SA 3.0 +1964,2,,1940,4/30/2018 17:27,,9,,"

I don't think there are clear reasons for a 'yes' or a 'no' answer. However, I can provide a reason why PP was much more likely to admit such a characterisation than NP was, and to give some intuitions for why NP might never have a simple characterisation in terms of modification of the quantum computational model.

+ +

Counting complexity

+ +

The classes NP and PP can both be characterised in terms of the number of accepting branches of a non-deterministic Turing machine, which we can describe in a more down-to-earth way in terms of the possible outcomes of a randomised computation which uses uniformly random bits. We can then describe these two classes as:

+ +
    +
  • L ∈ NP if there is a polynomial-time randomised algorithm which outputs a single bit α ∈ {0,1}, such that x ∈ L if and only if Pr[ α = 1 | x ] is non-zero (though this probability may be tiny), as opposed to zero.

  • +
  • L ∈ PP if there is a polynomial-time randomised algorithm which outputs a single bit α ∈ {0,1}, such that x ∈ L if and only if Pr[ α = 1 | x ] is greater than 0.5 (though possibly only by the tiniest amount), as opposed to being equal to or less than 0.5 (e.g. by a tiny amount).

  • +
+ +

One way of seeing why these classes can't be practically solved using this probabilistic description is that it may take exponentially many repeats to be confident of a probability estimate for Pr[ α = 1 | x ] because of the tininess of the differences in the probabilities involved.

+ +

Gap complexity and quantum complexity

+ +

Let us describe the outcomes '0' and '1' in the above computation as 'reject' and 'accept'; and let us call a randomised branch which gives a reject/accept result, a rejecting or accepting branch. Because every branch of the randomised computation which is not accepting is therefore rejecting, PP can also be defined in terms of the difference between the number of accepting and rejecting computational paths — a quantity which we may call the acceptance gap: specifically, whether the acceptance gap is positive, or less than or equal to zero. With a little more work, we can obtain an equivalent characterisation for PP, in terms of whether the acceptance gap is greater than some threshold, or less than some threshold, which may be zero or any other efficiently computable function of the input x.

+ +

This in turn can be used to characterise languages in PP in terms of quantum computation. From the description of PP in terms of randomised computations having acceptance probabilities (possibly slightly) greater than 0.5, or at most 0.5, all problems in PP admit a polynomial-time quantum algorithm which has the same distinction in acceptance probabilities; and by modelling quantum computations as a sum over computational paths, and simulating these paths using rejecting branches for paths of negative weight and accepting branches for paths of positive weight, we can also show that such a quantum algorithm making a (statistically weak) distinction describes a problem in PP.

+ +

It is not obvious that we can do the same thing for NP. There is no natural way to describe NP in terms of acceptance gaps, and the obvious guess for how you might try to fit it into the quantum computational model — by asking whether the probability of measuring an outcome '1' is zero, or non-zero — instead gives you a class called coC=P, which is not known to equal NP, and very roughly could be described as being about as powerful as PP rather than close to NP in power.

+ +

Of course, someday one might somehow find a characterisation of NP in terms of acceptance gaps, or one might find new ways of relating quantum computation to counting complexity, but I'm not sure anyone has any convincing ideas of how this might come about.

+ +

Summary

+ +

The prospects for getting insights into the P versus NP problem itself, via quantum computation, are not promising — though it isn't impossible.

+",124,,124,,5/23/2018 10:19,5/23/2018 10:19,,,,1,,,,CC BY-SA 4.0 +1965,1,1981,,4/30/2018 22:44,,7,345,"

I am trying to understand the HHL algorithm for solving linear systems of equations (Harrow, Hassidim, Lloyd; presented in arXiv:0811.3171 and explained on page 17 of arXiv:1804.03719). By reading some papers, I think I have a rough idea, but there are many things I still do not understand. Let me ask a few.

+ +

When applying Quantum Phase Estimation, in page 49 of the same article, it says ""Prepare $n$ register qubits in $|0\rangle^{\bigotimes n}$ state with the vector $|b\rangle$"", so that, by applying QPE to $|b\rangle |0\rangle^{\bigotimes n}$, we can get +$\sum_j \beta_j |u_j\rangle |\lambda_j\rangle$.

+ +

Here $\lambda_j$ is the $j^{th}$ eigenvalue of the matrix $A$, with $0 < \lambda_j < 1$, and $\left|u_j\right>$ is the corresponding eigenvector.

+ +

I also understand that $|\lambda_j\rangle$ is the binary-fraction representation of the $j^{th}$ eigenvalue of $A$ (e.g. $\left|01\right>$ for $\lambda=1/4$).

+ +

My questions are,

+ +

Q1: How to decide $n$, how many qubits to prepare? I assume it is related to the precision of expressing the eigenvalue, but not sure.

+ +

Q2: What to do if $\lambda_j$ of $A$ is $≤ 0$ or $≥ 1$?

+",2100,,23,,05-01-2018 08:24,05-02-2018 07:29,"HHL algorithm, how to decide n qubits to prepare for expressing eigenvalue of A?",,1,0,,,,CC BY-SA 3.0 +1966,1,,,05-01-2018 06:51,,6,135,"

At the beginning of a quantum computational process we generally want to start in a perfectly known initial state, and evolve from there. This cannot be done perfectly, for fundamental reasons, but I strongly suspect there has to be a practical limit below which you are in a garbage-in-garbage-out situation.

+ +

My full question is not on this input fidelity threshold per se (although feel free to provide that too), but rather on the factors to consider and minimum set(s) of requirements one needs to prepare a good-enough initial state (maybe DiVincenzo-list style, but preferably with some example numbers). Presumably the perfect answer has different sections, for example depending on whether one employs thermal initialization or algorithmic cooling.

+ +

For a little more context, this question is related with certain aspects of previous ones:

+ + +",1847,,,,,7/17/2021 19:27,Threshold and practical requirements for initial state preparation?,,1,0,,,,CC BY-SA 3.0 +1967,1,1982,,05-01-2018 12:34,,11,1149,"

On the Wikipedia page for Grover's algorithm, it is mentioned that:

+ +

""Grover's algorithm can also be used for estimating the mean and median of a set of numbers""

+ +

So far I only knew how it can be used to search a database. But not sure how to implement that technique to estimate the mean and median of a set of numbers. Moreover, there's no citation (as far as I noticed) on that page which explains the technique.

+",26,,26,,05-01-2018 20:21,10-05-2022 04:09,How is Grover's algorithm used to estimate the mean and median of a set of numbers?,,1,2,,,,CC BY-SA 3.0 +1970,2,,1955,05-01-2018 15:59,,5,,"

It is certainly sufficient to always calculate the full $2^n\times 2^n$ unitary matrix, and then apply it to the $2^n$-entry state vector. If that's what you choose to do, that's all you have to do as all the entanglement information is contained in that vector. A quick and easy way to see if a particular qubit is entangled is to take the partial trace of your (pure) state vector over all other qubits. If the resulting matrix is rank 1, that qubit is in a separable state, otherwise it's entangled.
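The rank-1 test described above can be sketched in a few lines of numpy (using a Bell state and a product state as assumed test inputs):

```python
import numpy as np

def reduced_qubit(state, qubit, n):
    """Reduced density matrix of one qubit of an n-qubit pure state vector,
    obtained by tracing out all the other qubits."""
    psi = np.moveaxis(state.reshape([2] * n), qubit, 0).reshape(2, -1)
    return psi @ psi.conj().T

n = 2
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)                       # entangled
prod = np.kron(np.array([1, 0]), np.array([1, 1]) / np.sqrt(2))  # separable
```

The Bell state gives a rank-2 (maximally mixed) reduced matrix, while the product state gives rank 1.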

+ +

I assume the point of your question is really ""How can this huge computational cost be avoided?"". In general, it can't - if you're making use of the full power of the quantum computer, you will always need the $2^n$-entry state vector. However, there are various tricks that reduce the computational cost, such as delaying the need for such a large state vector by keeping track of the entanglement.

+ +

Efficiency Improvements

+ +

The biggest saving that you can make compared to the naive implementation above is to avoid the $2^n\times 2^n$ unitary matrices. For example, if you are only using 1- or 2-qubit gates, simply using the sparsity of matrices means you only need $O(2^n)$ storage instead of $O(2^{2n})$.

+ +

Then there are other tactics that you can employ. For example, imagine you want to apply the two-qubit unitary gate $U$ on qubits $i$ and $j$. You can take sets of 4 elements from your state vector ($|x\rangle_{1,2,\ldots n\setminus i,j}|y\rangle_{i,j}$ for fixed $x\in\{0,1\}^{n-2}$ by taking all different $y\in\{0,1\}^2$) and just applying the $4\times 4$ unitary $U$ on that 4-element vector. Repeating this for every $x$ will return $U$ enacted on the original state vector.
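A numpy sketch of this trick — reshaping the state vector so that the $4\times 4$ unitary acts on every group of 4 amplitudes at once, instead of building a $2^n\times 2^n$ matrix (the gate and input state are chosen purely for illustration):

```python
import numpy as np

def apply_two_qubit(U, state, i, j, n):
    """Apply a 4x4 unitary U to qubits i and j of an n-qubit state vector,
    touching each group of 4 amplitudes once: O(2^n) work per gate."""
    psi = state.reshape([2] * n)
    psi = np.moveaxis(psi, (i, j), (0, 1))            # bring target qubits to front
    psi = (U @ psi.reshape(4, -1)).reshape([2] * n)   # act on all 4-amplitude groups
    return np.moveaxis(psi, (0, 1), (i, j)).reshape(-1)

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

n = 3
state = np.zeros(2**n)
state[0b110] = 1.0                             # |110>
out = apply_two_qubit(CNOT, state, 0, 1, n)    # control on qubit 0, target on qubit 1
```

Here $|110\rangle$ maps to $|100\rangle$, as expected from a CNOT with the first qubit as control.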

+ +

I imagine there are other strategies one could come up with. The one that suggested itself from the original question was entanglement tracking. This gives memory and speed improvements at the start of a computation, but ultimately ends up being equivalent because (presumably) everything in the quantum computer will end up entangled.

+ +

Entanglement Tracking

+ +

Let's assume that your computation consists of only unitary evolution and measurement on the set of $n$ qubits, i.e. there is no decoherence, probabilistic maps etc. Let's further assume that you start from a fully separable state such as $|0\rangle^{\otimes n}$. At this point, every qubit is separable, and it is sufficient to keep $n$ different registers, each conveying the state of a separable qubit. If your first gate is just a single-qubit operation $U$ on qubit $i$, then you just update the state stored on qubit $i$ as $|\psi_i\rangle\mapsto U|\psi_i\rangle$, and you don't have to touch anything else.

+ +

If you want to do a two-qubit gate $V$ between qubits $i$ and $j$, say, then you have to combine the states at both sites. So, you replace two registers each of dimension 2 with one register of dimension 4, containing state $V|\psi_i\rangle|\psi_j\rangle$. The problem is that you now can't split this state up again, so you have to keep those two qubits in a register forever after. Of course, if you ever have a 1-qubit gate $U$ to apply on qubit $i$, you'll now have to apply $|\psi_{i,j}\rangle\mapsto U\otimes\mathbb{I}|\psi_{i,j}\rangle$. Then, the next time you want a 2-qubit gate between, say, $j$ and $k$, you'll have to combine the spaces for $(i,j)$ and $k$. Those spaces will keep growing, but if a gate is localised on just one space, you only have to apply it there (using $\mathbb{I}$ operators to pad it on the rest of the qubits), and you don't have to do anything to the other spaces.

+ +

If you're doing things like this, you will not have (at least for the first few steps of your algorithm) a single $2^n$ element register. You'll have to have a bunch of different registers, and keep track of which qubits are described by which register in a separate array. Each time you combine the spaces of some qubits, you'll update that extra array.

+ +

Here's some very crude pseudo-code that may help to convey my meaning:

+ +
#initialise variables
+entangled_blocks={{1},{2},{3},...,{n}}
+quantum_states={{1,0},{1,0},...,{1,0}}
+
+#apply action of each gate
+for each gate
+   target_blocks={}
+   for each gate_target
+       add the entangled_blocks entry containing gate_target to target_blocks
+   next gate_target
+   if all target_blocks equal then
+      apply gate on target_block (pad with identity on other qubits)
+   else
+      new_entangled_block=union(target_blocks)
+      new_state_vec=tensor_product(quantum_states for each target block)
+      apply gate on new_state_vec (pad with identity on other qubits)
+      replace all target_blocks in entangled_blocks with new_entangled_block
+      replace all quantum_states(all target blocks) with new_state_vec
+   end if
+next gate
+
+ +

Other Options

+ +

(By no means exhaustive)

+ +

You may be interested to read about Matrix Product States which are a nice way of encapsulating the information about not-too entangled states, and that may provide an alternative route for you, depending on exactly what it is that you're trying to achieve.

+",1837,,1837,,05-03-2018 06:07,05-03-2018 06:07,,,,4,,,,CC BY-SA 4.0 +1972,2,,1926,05-01-2018 17:27,,11,,"

I don't think there is a single golden resource which can you provide you all the necessary knowledge. But I could suggest a pathway (or schematic study guide in your words):

+ +

If your aim is to create a new quantum programming language I'd rather say you should thoroughly learn an existing quantum programming language first along with the basic concepts of quantum computing, both from the physics side and the computer science side (maybe even the Mathematics side!).

+ +
    +
  • Microsoft has their quantum programming language named Q# (which is a part of their Quantum Development Kit). The complete documentation-cum-guide is on their website: https://docs.microsoft.com/en-us/quantum. If you are from the CS side, I hope you already have some knowledge of vectors, matrices and linear algebra in general. If so, you can directly start reading the guide article-by-article. Initially, they start with a brief revision of matrices, vectors, etc. followed by a brief introduction to qubits. That much is sufficient for at least getting started with writing a basic quantum program, with minimal understanding of the physics behind it. By the way, if your linear algebra concepts are weak, you could always try Khan Academy's lectures on the same.

  • +
  • Next, you'd want to learn at least some basics of quantum mechanics. I personally love Professor Vazirani's lectures, which are now on Youtube. In about 60 ten minute lectures he covers all the necessary basics of quantum mechanics and quantum computation algorithms. After this, you'd be in good shape to pick up new algorithms on your own.

  • +
  • As a third step, I'd suggest picking up ""Quantum Computation and Quantum Information by Isaac Chuang and Michael Nielsen"" and also ""Quantum Computing for Computer Scientists by Mirco A. Mannucci and Noson S. Yanofsky"" for covering the important topics which you missed out.

  • +
+ +

That should be enough for you to get a solid grounding to start writing your own quantum programming language. You may also look into tutorials for the other common quantum computing languages to get an idea of how to write quantum programs and design quantum programming languages.

+",26,,26,,05-01-2018 17:32,05-01-2018 17:32,,,,0,,,,CC BY-SA 3.0 +1973,1,1980,,05-01-2018 19:27,,11,737,"

I am aware of the quantum hardware startup Rigetti and I wonder if there are any quantum startups that build software on top of current quantum computer hardware for commercial applications?

+ +

Related question: Does a complete list of open quantum software projects exist?

+",2287,,26,,12/13/2018 19:48,03-03-2019 15:23,Are there any quantum software startups?,,3,0,,,,CC BY-SA 4.0 +1974,1,1989,,05-01-2018 21:06,,2,397,"

I just came across this (apparently) entropy-reversing video. It is in fact nothing more than a computer-generated animation: first rendering the physical simulation of a bean machine (often seen in statistical mechanics educational experiments) using color-neutral balls and then assigning color by their destination bin.

+ +

+ +

I find this amusing from a computational perspective, and hence my questions:

+ +
    +
  • If it were possible to achieve the portrayed effect physically, a spontaneous sorting based on pairwise interactions between the objects, what would be the computational implications of such an effect?
  • +
  • Are there any quantum algorithms described/implemented that achieve some kind of spontaneous sorting based on interference between the objects to be sorted?
  • +
+",1346,,1847,,05-03-2018 07:25,05-03-2018 09:19,What is the relation between this CGI device and a quantum sorting algorithm?,,1,11,,,,CC BY-SA 4.0 +1975,2,,1973,05-01-2018 21:26,,7,,"

Rigetti is not just a hardware company. It also builds quite a bit of software -- check out

+ +
    +
  • Forest, which gives access to both a simulator and a quantum computer via the cloud
  • +
  • PyQuil, a Python library for programming quantum computers
  • +
  • Grove, a Python library of quantum algorithmic primitives
  • +
  • Forest OpenFermion, a library to interface OpenFermion with Forest
  • +
  • Many more projects on Github
  • +
+ +

NOTE: I work at Rigetti

+",2129,,2129,,05-03-2018 03:30,05-03-2018 03:30,,,,0,,,,CC BY-SA 4.0 +1976,1,1977,,05-01-2018 21:55,,7,287,"

I read the basic introductory information about qubits on Wikipedia:

+ +
+

There are two possible outcomes for the measurement of a qubit—usually + 0 and 1, like a bit. The difference is that whereas the state of a bit + is either 0 or 1, the state of a qubit can also be a superposition of + both. [1]

+
+ +

and

+ +
+

The state of a three-qubit quantum computer is similarly described by + an eight-dimensional vector + $(a_{0},a_{1},a_{2},a_{3},a_{4},a_{5},a_{6},a_{7})$ (or a one + dimensional vector with each vector node holding the amplitude and the + state as the bit string of qubits). [2]

+
+ +

Hence, does this mean that qubits using superdense coding can achieve double the capacity, with $2^{2^n}$ possible combinations?

+",2290,,26,,12/23/2018 12:23,12/23/2018 12:23,Does superdense coding allow to double the information capacity of a set of qubits?,,1,8,,,,CC BY-SA 4.0 +1977,2,,1976,05-02-2018 04:42,,7,,"

The short answer is no: we don't double the capacity. It turns out it's not quite that simple. There is no general mathematical expression that gives you the storage (or processing power) of a number of qubits in terms of bits. Bits, qubits and ebits work in qualitatively different ways, which in some contexts allows one to draw an advantage.

+ +

The closest thing to an answer to your question are the so-called Bennett's laws, four inequalities comparing the practical information contents of classical bits, quantum bits (or qubits) and entanglement bits (or ebits), reproduced here from wikipedia. The ⩾ signs are to be taken as ""can do the job of"":

+ +
    +
  • 1 qubit ⩾ 1 bit (classical),
  • +
  • 1 qubit ⩾ 1 ebit (entanglement bit),
  • +
  • 1 ebit + 1 qubit ⩾ 2 bits (via superdense coding),
  • +
  • 1 ebit + 2 bits ⩾ 1 qubit (via quantum teleportation),
  • +
+ +

On the particular aspect of superdense coding, I refer you to the question ""What are the real advantages of superdense coding?"" and its answers.

+",1847,,1847,,05-02-2018 06:18,05-02-2018 06:18,,,,1,,,,CC BY-SA 4.0 +1979,2,,1899,05-02-2018 05:53,,1,,"

No quantum computing approach has ever succeeded in predicting a reaction rate or transition state that a classical computer could not already handle. There are many quantum algorithms for solving the FCI problem with a polynomial number of quantum-computer gates, so there are many algorithms that show promise for building the high-accuracy potential energy surfaces to study the reactions you describe.

+",2293,,,,,05-02-2018 05:53,,,,0,,,,CC BY-SA 4.0 +1980,2,,1973,05-02-2018 07:04,,8,,"

There are lots of startups, many of which have no hardware efforts. Here is a selection, distinguished only by the fact that I have heard of them at least once.

+ + + +

There are also QISKit and ProjectQ. Though not startups, they also deserve a mention as important quantum software projects.

+",409,,26,,03-03-2019 15:23,03-03-2019 15:23,,,,1,,,,CC BY-SA 4.0 +1981,2,,1965,05-02-2018 07:29,,2,,"

The number $n$ decides the size of the register to be used for phase estimation, which in turn determines the accuracy. If you knew your eigenvalues (for a unitary) were a subset of the ${2^n}^{th}$ roots of unity, $e^{2\pi i m/2^n}$, then using $n$ bits is guaranteed to give you the exact answer. Assuming you don't have exactly this guarantee, then you can think of the phase estimation as returning the best $n$-bit approximation of those values, i.e. $\phi/(2\pi)$, where the eigenvalue is $e^{i\phi}$, would be approximated to within $1/2^{n+1}$ (with caveats about the probability of this happening, which we can lower-bound by $4/\pi^2$, and can improve further by using a slightly larger $n$).
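The approximation guarantee quoted above can be checked directly. A small sketch (the phase is an assumed example value of $\phi/(2\pi)$): the best $n$-bit value $m/2^n$ is always within $1/2^{n+1}$ of the true phase, measured on the circle:

```python
def best_n_bit(phase, n):
    """Best n-bit approximation m/2^n of phase = phi/(2*pi) in [0, 1),
    with the error measured on the circle (phases 0 and 1 identified)."""
    m = round(phase * 2**n) % 2**n
    err = abs(m / 2**n - phase)
    return m, min(err, 1 - err)

phase = 0.3127  # assumed example value of phi/(2*pi)
errors = {n: best_n_bit(phase, n)[1] for n in (3, 5, 8)}
```

Each extra register qubit halves the guaranteed error bound.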

+ +

If memory serves, what you want to do to prepare $A$ is two things (I'm leaving out the connection between implementing a non-unitary $A$ and the unitaries required for phase estimation, since you didn't ask):

+ +
    +
  • Add enough $\mathbb{I}$ so that $A^{(1)}=A^{(0)}+B\mathbb{I}$ is non-negative (i.e. all $\lambda_i\geq 0$)

  • +
  • Rescale to $A=\epsilon A^{(1)}$ so that the maximum eigenvalue is less than 1.

  • +
+ +

These operations don't change the eigenvectors, and change the eigenvalues by known amounts that you can compensate for later, $\lambda_i=\epsilon(\lambda_i^{(0)}+B)$. You might worry that this rescaling requires you to know the very information that you're trying to calculate. However, it's easy to make at least some crude estimates of the limits of the spectrum via, for example, Gershgorin's Circle Theorem.
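The two preprocessing steps above can be sketched in numpy; the safety margins here are arbitrary assumed values, and the Gershgorin discs provide the cheap outer bounds on the spectrum:

```python
import numpy as np

def rescale_for_hhl(A0, margin=0.1):
    """Shift and scale a Hermitian A0 so all eigenvalues land in (0, 1),
    using Gershgorin-disc bounds instead of diagonalisation."""
    radii = np.sum(np.abs(A0), axis=1) - np.abs(np.diag(A0))
    lo = np.min(np.real(np.diag(A0)) - radii)
    hi = np.max(np.real(np.diag(A0)) + radii)
    B = -lo + margin               # shift: spectrum becomes strictly positive
    eps = 1.0 / (hi + B + margin)  # scale: spectrum drops below 1
    return eps * (A0 + B * np.eye(len(A0))), B, eps

A0 = np.array([[ 2.0, -1.0],
               [-1.0, -3.0]])
A, B, eps = rescale_for_hhl(A0)
evals = np.linalg.eigvalsh(A)
```

The original eigenvalues are recovered afterwards as $\lambda_i^{(0)} = \lambda_i/\epsilon - B$, exactly the compensation mentioned above.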

+ +

Actually, you could probably get away with just a rescaling (and no $\mathbb{I}$) if you ensure all eigenvalues are in the range $-1/2$ to $1/2$, due to the periodicity of the Quantum Fourier Transform, but making use of it gives you the maximum opportunity to spread all the eigenvalues out as much as possible, and hence to get as accurate an estimate on them as possible.

+",1837,,,,,05-02-2018 07:29,,,,0,,,,CC BY-SA 4.0 +1982,2,,1967,05-02-2018 09:43,,11,,"

The idea for estimating the mean is roughly as follows:

+ +
    +
  • For any $f(x)$ that gives outputs in the reals, define a rescaled $F(x)$ that gives outputs in the range 0 to 1. We aim to estimate the mean of $F(x)$.

  • +
  • Define a unitary $U_a$ whose operation is $$U_a:|0\rangle|0\rangle\mapsto\frac{1}{2^{n/2}}\sum_x|x\rangle(\sqrt{1-F(x)}|0\rangle+\sqrt{F(x)}|1\rangle).$$ It is important to note that this unitary is easily implemented. You start with a Hadamard transform on the first register, perform a computation of $f(x)$ on an ancilla register, use this to implement a controlled-rotation of the second register, and then uncompute the ancilla register.

  • +
  • Define the unitary $G=U_a (\mathbb{I}-2|0\rangle\langle 0|\otimes |0\rangle\langle 0|)U_a^\dagger \mathbb{I}\otimes Z$.

  • +
  • Starting from a state $U_a|0\rangle|0\rangle$, use $G$ much like you would use the Grover iterator to estimate the number of solutions to a search problem.

  • +
+ +

The main bulk of this algorithm is amplitude amplification, as described here. The main idea is that you can define two states +$$ +|\psi\rangle=\frac{1}{\sqrt{\sum_x F(x)}}\sum_x\sqrt{F(x)}|x\rangle|1\rangle \qquad |\psi^\perp\rangle=\frac{1}{\sqrt{\sum_x 1-F(x)}}\sum_x\sqrt{1-F(x)}|x\rangle|0\rangle, +$$ +and this defines a subspace for the evolution. The initial state is $U_a|0\rangle|0\rangle=(\sqrt{\sum_x F(x)}|\psi\rangle+\sqrt{\sum_x 1-F(x)}|\psi^\perp\rangle)2^{-n/2}$. The amplitude of the $|\psi\rangle$ term clearly contains the information about the mean of $F(x)$, if we could just estimate it. You could just repeatedly prepare this state and measure the probability of getting a $|1\rangle$ on the second register, but Grover's search gives you a quadratic improvement. If you compare to the way Grover's is usually set up, the amplitude of this $|\psi\rangle$ which you can 'mark' (in this case by applying $\mathbb{I}\otimes Z$) would be $\sqrt{\frac{m}{2^n}}$ where $m$ is the number of solutions.
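A classical sketch of just the two-dimensional rotation picture (with an assumed toy choice of $F$): in the $(|\psi\rangle, |\psi^\perp\rangle)$ plane the iterator $G$ acts as a rotation by $2\theta$, where $\sin^2\theta$ equals the mean being estimated:

```python
import numpy as np

# Assumed toy data: F(x) in [0, 1]; its mean is the quantity to be estimated
F = np.array([0.1, 0.2, 0.3, 0.2])
mean = F.mean()

# After U_a, the amplitude of the marked component |psi> is sin(theta)
theta = np.arcsin(np.sqrt(mean))

def p_one(k):
    """Probability of measuring '1' after k applications of the iterator G,
    which rotates the state by 2*theta in the (|psi>, |psi_perp>) plane."""
    return np.sin((2 * k + 1) * theta) ** 2

p0 = p_one(0)   # no amplification: exactly the mean of F
p1 = p_one(1)   # one iteration substantially boosts the probability
```

The quantum advantage comes from estimating $\theta$ via phase estimation on $G$, which this classical sketch does not capture; it only illustrates the rotation structure.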

+ +

Incidentally, this is interesting to compare to the ""power of one clean qubit"", also known as DQC1. There, if you apply $U_a$ to $\frac{\mathbb{I}}{2^n}\otimes|0\rangle\langle 0|$, the probability of getting the 1 answer is just the same as the non-accelerated version, and gives you an estimate of the mean.

+ +
+ +

For the median, it can apparently be defined as the value $z$ that minimises +$$ +\sum_x|f(x)-f(z)|. +$$ +There are two steps here. The first is to realise that the function we're trying to minimise over is basically just a mean. Then the second step is to use a minimisation algorithm which can also be accelerated by a Grover search. The idea here is to use a Grover's search, and mark all items for which the function evaluation gives a value less than some threshold $T$. You can estimate the number of inputs $x$ that give $f(x)\leq T$, then repeat for a different $T$ until you localise the minimum value sufficiently.
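A small numpy check of this characterisation on assumed toy data — minimising $\sum_x|f(x)-f(z)|$ over the values attained by $f$ recovers the median:

```python
import numpy as np

# Assumed toy values of f(x)
f = np.array([3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0])

# The median is the value f(z) minimising sum_x |f(x) - f(z)|
costs = [np.sum(np.abs(f - fz)) for fz in f]
z_star = f[int(np.argmin(costs))]
```

(An odd number of samples is used so that the median is itself one of the values of $f$.)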

+ +

Of course, I am skipping over some details of precise running times, error estimates etc.

+",1837,,1837,,05-02-2018 10:02,05-02-2018 10:02,,,,2,,,,CC BY-SA 4.0 +1984,2,,1854,05-02-2018 21:06,,6,,"
+

One major idea there seems to be that the ""environment"" (quantum + decoherence) assists or optimizes the transport of a signal

+
+ +

The idea that photosynthetic systems are doing a Grover search or implementing some quantum algorithm, turned out not to be accepted in the community, and while scientists remained too professional to ridicule the original papers of their fellow-scientists in published papers, the opposition to this idea was observed in many many talks at conferences. Eventually it was even published that the quantum coherence in the FMO has no relevance to its photosynthetic function: see ""Why quantum coherence is not important in the FMO complex"" by Dattani and Wilkins.

+ +

Furthermore, decoherence is not optimizing the transfer. Optimizing it would involve searching the space of all possible bath models, all possible couplings to those bath models, all possible bath spectral densities, and all possible models of static disorder, among other things. This is an uncountably infinite space, and finding some optimum transfer rate is a problem too hard even for a quantum computer to solve. To think that the living organism has found a way to transfer the energy optimally is also flawed. Maybe by removing one water molecule there would be less disturbance and the energy transfer would happen 10$^{-22}$ seconds faster, which means the previous configuration that included that water molecule was not optimal.

+ +

The environment/bath does assist the energy transfer, because without it you would have infinitely long Rabi oscillations:

+ +


+

+ +

By coupling to the bath, we get damped Rabi oscillations that get localized on the lowest-energy site (known as the ""sink"") which couples to the reaction center that the excitation needs to get to for photosynthesis:

+ +


+

+ +

Isn't that beautiful how none of the excitation was getting to the blue site without the bath, but simply by turning on the bath we get a major energy transfer from antenna to reaction center?

+ +

All calculations were done in Octave using Nike Dattani's FeynDyn (Feynman Dynamics) code:

+ +

1) sudo apt-get install octave
+2) git clone https://github.com/ndattani/FeynDyn.git
+3) open sampleInput_7x7_FMO_WilkinsDattani_2015_JCTC.m in octave
+4) Press F5 and the dynamics with bath pops up after 62 seconds
+5) On line 51 make J = 0, for no coupling to bath, press F5 again.

+ +

You can change the temperature, spectral density, and Hamiltonian parameters and get slower and faster energy transfer rates, and you will quickly see why the crystal structure parameters used in this simulation are not optimal, and why it is going to be impossible even for a quantum computer to find THE optimal transfer rate.

+ +
+

has this been explored in artificial systems either as quantum computation or in a quantum simulator?

+
+ +

I have explained that the photosynthetic complexes are not ""doing quantum computation"", even though that was claimed in the early papers of Fleming et al.

+ +

However it has been found that with quantum annealing, sometimes the ground state solution is found faster when the temperature is increased: http://convexoptimization.com/TOOLS/manufacturedspins.pdf. Having a noisy environment helps to escape local minima where the annealing process would get trapped if you were at 0 Kelvin. So this is an example of an artificial system that uses this phenomenon, and after thinking about your question all of yesterday and today, it is the only example I could come up with.

+",2293,,26,,11/25/2018 13:17,11/25/2018 13:17,,,,2,,,,CC BY-SA 4.0 +1985,2,,1687,05-03-2018 00:08,,4,,"

Answer: Fidelity of 0.9999 at 1.08 seconds in 2013: http://science.sciencemag.org/content/342/6160/830.full?ijkey=uhZaDNPnwgTdA

+ +

More details: The $T_2$ was 180 minutes, or 3 hours.

+ +

What about the 81% that Heather mentioned?: The fidelity of 81% that Heather quotes, was actually referring to something else. In the same paper they wanted to show that they could change the temperature of the sample while still maintaining the spins in a coherent superposition. The sample was increased in temperature from 4.2K to 300K gradually over 6 minutes, held there for 2 minutes, then reduced back to 4.2K gradually over 4 minutes. After doing all that, the spins had impressively maintained a fidelity of 81% with respect to the starting state.

+ +

But that 12 minute experiment where they wanted to show that they can maintain coherence even when majorly disturbing the thermal equilibrium of the sample, was far less than the 3 hours $T_2$ they measured in an experiment where the coherence survived for 300 minutes (5 hours) with temperature kept constant at 1.2K.

+ +

What about the 2014 paper with 0.9999 fidelity?: This comes from Figure S2c in the Supplement, which is only up to 0.0002 seconds. If you want to get the fidelity at 30 seconds, or at 180 minutes, look at the $T_2$ times in Fig S1 of the supplement, and you will see that all of these are orders of magnitude smaller than what it was in the 2013 paper.

+ +

The authors admit this 3 times:
+1) ""Despite the record coherence times discussed above, our results do not match those obtained in bulk ensembles[6–8]"" Reference 8 is the 2013 paper.
+2) ""This currently represents the record coherence for any single qubit in the solid state."" Note they say ""single"" qubit and ""solid state"".
+3) ""which reach here a new record for solid-state single qubits with $T_2$ > 30 s in the $^{31}$P$^+$spin"" Note the 30s is a T2!! This is much smaller than the $T_2$ = 180 minutes mentioned above.

+",2293,,2293,,05-03-2018 23:27,05-03-2018 23:27,,,,5,,,,CC BY-SA 4.0 +1988,1,1994,,05-03-2018 09:10,,20,1654,"

I recently noticed that Oxford's computer science department has started offering a grad course on categorical quantum mechanics. Apparently they say that it is relevant for the study of quantum foundations and quantum information, and that it uses paradigms from category theory.

+ +

Questions:

+ +
    +
  1. How exactly does it help in the study of quantum information?

  2. +
  3. Has this formulation actually produced any new results or predictions apart from what our general formulation of quantum mechanics has already done? If so, what are those?

  4. +
+",26,,55,,2/14/2021 18:51,2/14/2021 18:51,What is the use of categorical quantum mechanics?,,1,2,,,,CC BY-SA 4.0 +1989,2,,1974,05-03-2018 09:19,,2,,"

We could assign integers from $1$ to $k$ for each colour. This then becomes an integer sort of $n$ balls over a range of integers $r$.

+ +

I'm no expert on such sorting algorithms, but it seems that they can be done with a worst case time complexity of $O(n+r)$.
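For concreteness, counting sort achieves that $O(n+r)$ bound; a short sketch with assumed toy data (balls labelled by integer colours $0$ to $r-1$):

```python
def counting_sort(balls, r):
    """Sort n integer-coloured balls over colours 0..r-1 in O(n + r) time."""
    counts = [0] * r
    for b in balls:          # O(n): tally each colour
        counts[b] += 1
    out = []
    for colour, c in enumerate(counts):   # O(n + r): emit in colour order
        out.extend([colour] * c)
    return out

balls = [2, 0, 1, 2, 0, 1, 1]
sorted_balls = counting_sort(balls, 3)
```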

+ +

For the interacting bean machine to beat this, it would need to be faster to pass $n$ balls through a board of width $r$ (and I'll assume height $r$ too).

+ +

If we pass the balls through one-by-one, the falling process would presumably take $O(r)$ time in each case. So for the $n$ balls, that makes $O(nr)$ time, which is much too slow. They also wouldn't get any opportunity to interact, unless the falling was a quantum process with interference effects.

+ +

If we pass many balls through at once, the time taken for the process will depend on the dynamics of the balls, which depends on how they interact. If the interaction were to simulate a known sorting algorithm, the falling time would reflect that algorithm's time complexity. If the interaction does not simulate a known sorting algorithm, it could be used to define a new algorithm. The nature of the interaction (classical or quantum) would determine the kind of computer that we could run that algorithm on.

+ +

So in answer to your first question, we would need to know how the spontaneous sorting occurs to know how it compares to known algorithms.

+",409,,,,,05-03-2018 09:19,,,,1,,,,CC BY-SA 4.0 +1992,2,,1926,05-03-2018 11:18,,6,,"

I’d suggest you reflect upon whether the goal of ""making a new quantum programming language"" is suitable at this point in the development of quantum computation. It is not the most common approach, since mostly we are still at the stage of thinking in terms of what is essentially machine language. When we create algorithms, the level at which this is done is similar to expressing classical algorithms in terms of logic gates (such as this example for multiplication). The quantum SDKs, like QISKit, are essentially ways of creating jobs to be sent to quantum hardware or simulators. This includes tools for performing simulations, optimizing for run time or noise levels, etc. They aren't really languages in the high-level sense we are used to for classical computation.

+ +

For an introduction to what is going on at this level of the quantum stack, Q is for Quantum by Terry Rudolph might be helpful.

+ +

For your intermediate goal of writing programs with QISKit, I'd recommend the QISKit tutorial. It has many worked examples of implementing short quantum programs. There is also a QISKit publication on Medium in which some of the things in the tutorial are explained in more detail. There is also a gamified tutorial to QISKit, which might be useful as a warm-up for the full QISKit tutorial.

+ +

Full disclosure: I have contributed to all the things mentioned in the final paragraph.

+",409,,409,,5/13/2018 18:40,5/13/2018 18:40,,,,3,,,,CC BY-SA 4.0 +1993,2,,1973,05-03-2018 11:44,,4,,"

Apart from the ones James Wotton mentioned, recently IBM collaborated with a few top quantum computing software startups:

+ +
+
    +
  • Zapata Computing – Based in Cambridge, MA, Zapata Computing is a quantum software, applications and services company developing algorithms for chemistry, machine learning, security, and error correction.

  • +
  • Strangeworks (our site's sponsor!) – Based in Austin, TX and founded by William Hurley, Strangeworks is a quantum computing software company designing and delivering tools for software developers and systems management for IT Administrators and CIOs.

  • +
  • QC Ware – Based in Palo Alto, CA, QC Ware develops hardware-agnostic enterprise software solutions running on quantum computers. QC Ware’s investors include Airbus Ventures, DE Shaw Ventures and Alchemist, and it has relationships with NASA and other government agencies. QC Ware won an NSF grant, and its customers include Fortune 500 industrial and technology companies.

  • +
  • 1QBit – Headquartered in Vancouver, Canada, 1QBit builds quantum and quantum-inspired software designed to solve the world’s most demanding computational challenges. The company’s hardware-agnostic platforms and services are designed to enable the development of applications which scale alongside the advances in both classical and quantum computers. 1QBit is backed by Fujitsu Limited, CME Ventures, Accenture, Allianz and The Royal Bank of Scotland.

  • +
+ +

Apart from these, the list also includes Cambridge Quantum Computing, QxBranch and Quantum Benchmark. But James already mentioned them! :-)

+
+ +

As of now, it seems USA is leading in the number of quantum computing startups. I'd like to hear from others about startups in this area, in the other countries, too. And well, congratulations to Strangeworks! They have been doing great.

+",26,,,,,05-03-2018 11:44,,,,0,,,,CC BY-SA 4.0 +1994,2,,1988,05-03-2018 11:45,,17,,"

This answer is the opinion of someone who is essentially an outsider to ""CQM"" (= Categorical Quantum Mechanics), but a broadly sympathetic outsider. It should be interpreted as such.

+ +

The motivations of CQM

+ +

The motivations of Categorical quantum mechanics are not computation as such, but logic; and not quantum dynamics as such, but foundations of physics. The symptoms of this can be seen in what it describes as its achievements and points of reference, for instance:

+ +
    +
  • Its results about ""completeness"" should be interpreted in the same sense as it would in Gödel's Completeness Theorem [sic]: that a set of axioms can perfectly capture a model, which in this case is the model of transformations on a set of qubits expressed in terms of transformations of degrees of freedom expressed in terms of the Z and X eigenbases.

  • +
  • Occasional comparisons to things like ""Rel"" (that is: the category of relations, which from a computational point of view is more closely allied to non-deterministic Turing machines than quantum computers) illustrate the fact that they are aware of quantum information theory as being part of a larger landscape of computational theories, where the distinctions between these theories may lead to a robust top-down intuition about what distinguishes quantum theory from other possible dynamical theories of information.

  • +
+ +

Thus CQM is very much more in a tradition of foundations of physics and the Theory B branch of computer science. So if it does not seem to have developed a lot of ""applications"" as such, you should not be surprised, because the development of applications is not its primary motivation. (And of course, so far only a very small subset of people in the field are ever really exposed to it.)

+ +

Why CQM might seem a bit obscure

+ +

The difference in motivation of CQM to the rest of the field, also reveals itself in the approach which is taken to analysis, in which linear algebra over $\mathbb C$ takes very much a background role.

+ +

Linear algebra over $\mathbb C$ is certainly still present in the background, essentially as the target model for CQM. But the usual approach to quantum mechanics in terms of linear algebra over $\mathbb C$ is seen as potentially obscuring ""what is actually going on"". And to give the proponents of CQM their due, they have a good argument here: the usual presentation of quantum information theory, starting from vectors over $\mathbb C$ and unitary transformations, working through density operators and CPTP maps, requires a non-trivial amount of work to develop an intuition of what it is for and in what ways it differs (and in what ways it does not differ) from probability theory. It is certainly possible to get that intuition by the usual complex-linear-algebraic approach, but the proponents of CQM would claim that the usual approach is not likely to be the most effective approach.

+ +

CQM attempts to put the intuitive meaning front-and-centre, in a mathematically rigorous way. This obligates them to talk about such apparently obscure things as ""dagger commutative Frobenius algebras"". Of course, such terminology means little to nothing to almost anyone else in the field — but then this is not much different from how quantum information theorists come across to other computer scientists.

+ +

This is just the starting point of the potential confusion for an outsider — as those pursuing CQM are in effect mathematicians/logicians with top-down motivations, there is not one single thread of research in CQM, and there is not a sharp boundary between work on CQM and work in higher category theory. This is analogous to the lack of sharp boundary between computational complexity expressed in terms of quantum circuits, quantum communication complexity, query complexity, and the classical version of these topics, along with Fourier analysis and other relevant mathematical tools. Without a clear frame of reference, it can sometimes be a bit confusing as to where CQM begins and ends, but it has in principle as well-defined a notion of scope as any other topic in quantum information theory.

+ +

If you wonder why people might like to investigate CQM rather than a more mainstream question in quantum information theory, we should first acknowledge that there are other lines of research in quantum information theory which are not exactly directed towards meaningful impact on anyone else. If we are happy for people to do research into such things as approaches to quantum computation involving physical phenomena which no-one has yet exhibited in the lab [arXiv:1701.05052] or approaches to error correction on closed $d$-dimensional manifolds for $d>2$ [arXiv:1503.02065], we should be equally happy to admit other lines of investigation which are somewhat divorced from the mainstream. The justification in each case is the same: that while the arc of theory is long, it bends towards application, and things which are investigated for purely theoretical reasons have a way of yielding practical fruits.

+ +

The use of CQM

+ +

On that note: one view of the purpose of paying attention to foundations is to get the sort of insight necessary to solve problems more easily. Does CQM provide that insight?

+ +

I think that it is only very recently that the proponents of CQM have seriously considered the question of whether the insights it provides, allow one to obtain new results in subjects which are more in the mainstream of quantum information theory. This is again because the main motivation are the foundations, but recent work has started to develop on the theme of payoffs in the wider field.

+ +

There are at least two results which I can point to, which represent ways in which the CQM community has developed results which I would judge to be broadly relevant to the interests of the quantum information community, and in which the results are entirely new:

+ +
    +
  • Novel techniques for constructing unitary error bases and Hadamard matrices (e.g. [arXiv:1504.02715, arXiv:1609.07775]. These appeared to be of enough interest to the quantum information community that these results were presented as talks in QIP 2016 and 2017 respectively.
  • +
  • A well-thought out and clear definition of a quantum graph, which recovers the definition of a noncommutative graph from [arXiv:1002.2514] in such a way that makes the relationship to 'classical' graphs clear, allows them to connect to higher algebra, and obtain (Corollary 5.6) a result on the asymptotic density of pairs of graphs for which there is a quantum advantage in pseudo-telepathy games.
  • +
+ +

As one should expect of abstract mathematical techniques with foundational motivations, there are also payoffs for areas of computer science which are adjacent to quantum information theory:

+ +
    +
  • Some recent techniques for solving problems in counting complexity regarding the Holant, which are inspired by quantum computation [arXiv:1702.00767], are more specifically inspired by a particular line of investigation into CQM which involved the distinction between GHZ states and W states.
  • +
+ +

Finally, something which is not yet a result, but which seems a promising direction of research and which in principle does not require category theory to pursue:

+ +
    +
  • One of the main products of CQM is the ZX-calculus, which one might describe as a tensor-notation which is similar to circuit notation, but which also comes equipped with a formal system for transforming equivalent diagrams to one another. There is a line of investigation into using this as a practical tool for circuit simplification, and for realising unitary circuits in particular architectures. This is based in part on the fact that ZX diagrams are a notation which allows you to reason about tensors beyond just unitary circuits, and which is therefore more flexible in principle.
  • +
+ +

Should everyone start using CQM immediately?

+ +

Probably not.

+ +

As with many things which have been devised for heterodox academic reasons, it is not necessarily the best tool for every question which one might want to ask. If you want to run numerical simulations, chances are you use C or Python as your programming language rather than SML. However, on that same note, just as programming languages developed in earnest by major software firms may in time be informed by ideas which were first developed in such a heterodox academic context, so too might some of the ideas and priorities of CQM eventually filter out to the broader community, making it less an isolated line of investigation than it may seem today.

+ +

There are also subjects for which CQM does not (yet) seem to provide a useful way of approaching, such as distance measures between different states or operations. But every mathematical tool has its limits: I expect that I won't be using quantum channel theory any time soon to consider how to simplify unitary circuits.

+ +

There will be problems for which CQM sheds some insight, and may provide a convenient means for analysis. A few examples of such topics are provided above, and it is reasonable to suppose that more areas of application will become evident with time. For those topics where CQM is useful, one can choose whether to take the time to learn how to use the useful tool; apart from that, it's up to you whether or not you are curious enough. In this respect, it is like every other potential mathematical technique in quantum information theory.

+ +

Summary

+ +
    +
  • If there don't seem to be many novel applications of CQM yet, it's because there aren't — because this isn't the main motivation of CQM, nor have many people studied it.
  • +
  • Its main motivations are along the lines of foundations of computer science and of physics.
  • +
  • Applications of the tools of CQM to mainstream quantum information theory do exist, and you can expect to see more as time goes on.
  • +
+",124,,124,,05-04-2018 10:38,05-04-2018 10:38,,,,1,,,,CC BY-SA 4.0 +1999,1,,,05-03-2018 19:38,,21,1664,"

It is a well known result that the Discrete Fourier Transform (DFT) of $N=2^n$ numbers has complexity $\mathcal O(n2^n)$ with the best known algorithm, while performing the Fourier transform of the amplitudes of a quantum state, with the classical QFT algorithm, only requires $\mathcal O(n^2)$ elementary gates.

+ +

Is there any known reason why this is the case? By this I mean whether there are known characteristics of the DFT that make it possible to implement an efficient ""quantum version"" of it.

+ +

Indeed, a DFT over $N$-dimensional vectors can be thought of as the linear operation $$\vec y=\operatorname{DFT} \vec x, \qquad \text{DFT}_{jk}\equiv \frac{1}{\sqrt N}\exp\left(\frac{2\pi i}{N}jk\right).$$

+ +

The ""quantum version"" of this problem is the task of, given a quantum state $|\boldsymbol x\rangle\equiv\sum_{k=1}^N x_k|k\rangle$, obtaining the output state $|\boldsymbol y\rangle\equiv\sum_{k=1}^N y_k |k\rangle$ such that +$$|\boldsymbol y\rangle=\operatorname{DFT}|\boldsymbol x\rangle=\operatorname{QFT}|\boldsymbol x\rangle.$$

+ +
    +
  1. A first simplification seems to come from the fact that, due to the linearity of QM, we can focus on the basis states $|j\rangle, \,\,j=1,...,N$, with the evolution of general vectors $|\boldsymbol x\rangle$ then coming for free.
  2. +
  3. If $N=2^n$, one can express $|j\rangle$ in base two, having $|j\rangle=|j_1,...,j_n\rangle$.
  4. +
  5. In the standard QFT algorithm one then exploits the fact that the transformation can be written as $$|j_1,...,j_n\rangle\to2^{-n/2}\bigotimes_{l=1}^n\big[|0\rangle+\exp(2\pi i (0.j_{n-l+1}\cdots j_{n}))|1\rangle\big],$$ which can then be implemented as a quantum circuit of the form $$\operatorname{QFT}|j_1,...,j_n\rangle=\Big(\prod_{k=1}^n \mathcal U_k\Big)|j_1,...,j_n\rangle,$$ where $\mathcal U_k$ is implemented with $\mathcal O(n)$ elementary gates.
  6. +
+ +

Suppose we have now some unitary transformation $A$, and we want to find a circuit implementing efficiently the equivalent quantum transformation $$|\boldsymbol y\rangle=A|\boldsymbol x\rangle.$$ The first two tricks mentioned above can always be applied, but it is then nontrivial when and how the other point can be used to obtain efficiency results like we have for the QFT.

+ +

Are there known criteria for this to be true? Or in other words, is it possible to precisely pin down what are the characteristics of the DFT that make it possible to efficiently implement the associated quantum transformation?

+",55,,26,,05-03-2018 20:06,08-07-2018 21:32,Why can the Discrete Fourier Transform be implemented efficiently as a quantum circuit?,,4,1,,,,CC BY-SA 4.0 +2002,2,,1999,05-04-2018 10:17,,14,,"

Introduction to the Classical Discrete Fourier transform:

+ +

The DFT transforms a sequence of $N$ complex numbers $\{\mathbf{x}_n\}:=x_0,x_1,x_2,...,x_{N-1}$ into another sequence of complex numbers $\{\mathbf{X}_k\}:=X_0,X_1,X_2,...,X_{N-1}$ which is defined by $$X_k=\sum_{n=0}^{N-1}x_n\, e^{\pm\frac{2\pi i k n}{N}}$$ We might multiply by suitable normalization constants as necessary. Moreover, whether we take the plus or minus sign in the formula depends on the convention we choose.

+ +
+ +

Suppose, it's given that $N=4$ and $\mathbf{x}=\begin{pmatrix} 1 \\ 2-i \\ -i \\ -1+2i \end{pmatrix}$.

+ +

We need to find the column vector $\mathbf{X}$. The general method is already shown on the Wikipedia page. But we will develop a matrix notation for the same. $\mathbf{X}$ can be easily obtained by pre multiplying $\mathbf{x}$ by the matrix:

+ +

$$M=\frac{1}{\sqrt{N}}\begin{pmatrix} 1 & 1 & 1 & 1 \\ 1 & w & w^{ 2 } & w^{ 3 } \\ 1 & w^ 2 & w^4 & w^6 \\ 1 & w^3 & w^6 & w^9 \end{pmatrix}$$

+ +

where $w$ is $e^{\frac{-2\pi i}{N}}$. Each element of the matrix is basically $w^{ij}$. $\frac{1}{\sqrt{N}}$ is simply a normalization constant.

+ +

Finally, $\mathbf{X}$ turns out to be: $\frac{1}{2}\begin{pmatrix} 2 \\ -2-2i \\ -2i \\ 4+4i \end{pmatrix}$.

+ +
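As a quick sanity check of this worked example, here is a short numpy snippet (my own illustration, not part of the original post) that builds $M$ for $N=4$, applies it to $\mathbf{x}$, and confirms that the columns of $M$ are orthonormal (i.e. that $M$ is unitary):

```python
import numpy as np

N = 4
w = np.exp(-2j * np.pi / N)      # primitive N-th root of unity
M = np.array([[w**(j * k) for k in range(N)] for j in range(N)]) / np.sqrt(N)
x = np.array([1, 2 - 1j, -1j, -1 + 2j])

X = M @ x
print(np.round(X, 10))           # 0.5 * [2, -2-2j, -2j, 4+4j]

# the columns of M are orthonormal, so M is unitary:
assert np.allclose(M.conj().T @ M, np.eye(N))
```

+ +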

Now, sit back for a while and notice a few important properties:

+ +
    +
  • All the columns of the matrix $M$ are orthogonal to each other.
  • +
  • All the columns of $M$ have magnitude $1$.
  • +
  • If you post multiply $M$ with a column vector having lots of zeroes (large spread) you'll end up with a column vector with only a few zeroes (narrow spread). The converse also holds true. (Check!)
  • +
+ +

It is easy to see that the classical DFT has a time complexity $\mathcal O(N^2)$: obtaining each row of $\mathbf{X}$ takes $N$ operations, and there are $N$ rows in $\mathbf{X}$.

+ +
+ +

The Fast fourier transform:

+ +

Now, let us look at the fast Fourier transform (FFT). The fast Fourier transform uses the symmetry of the Fourier transform to reduce the computation time. Simply put, we rewrite the Fourier transform of size $N$ as two Fourier transforms of size $N/2$ - the odd and the even terms. We then repeat this over and over again to exponentially reduce the time. To see how this works in detail, we turn to the matrix of the Fourier transform. While we go through this, it might be helpful to have $\text{DFT}_8$ in front of you to take a look at. Note that the exponents have been written modulo $8$, since $w^8 = 1$.

+ +

+ +

Notice how row $j$ is very similar to row $j + 4$. Also, notice how column $j$ is very similar to column $j + 4$. Motivated by this, we are going to split the Fourier transform up into its even and odd columns.

+ +

+ +

In the first frame, we have represented the whole Fourier transform matrix by describing the $j$th row and $k$th column: $w^{jk}$. In the next frame, we separate the odd and even columns, and similarly separate the vector that is to be transformed. You should convince yourself that the first equality really is an equality. In the third frame, we add a little symmetry by noticing that $w^{j+N/2} = -w^j$ (since $w^{N/2} = -1$).

+ +

Notice that both the odd side and even side contain the term $w^{2jk}$. But if $w$ is the primitive $N$th root of unity, then $w^2$ is the primitive $(N/2)$th root of unity. Therefore, the matrices whose $j$, $k$th entry is $w^{2jk}$ are really just $\text{DFT}_{(N/2)}$! Now we can write $\text{DFT}_N$ in a new way: now suppose we are calculating the Fourier transform of the function $f(x)$. We can write the above manipulations as an equation that computes the $j$th term $\hat{f}(j)$.

+ +

+ +

Note: QFT in the image just stands for DFT in this context. Also, M refers to what we are calling N.

+ +

This turns our calculation of $\text{DFT}_N$ into two applications of $\text{DFT}_{(N/2)}$. We can turn this into four applications of $\text{DFT}_{(N/4)}$, and so forth. As long as $N = 2^n$ for some $n$, we can break down our calculation of $\text{DFT}_N$ into $N$ calculations of $\text{DFT}_1 = 1$. This greatly simplifies our calculation.

+ +
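The even/odd split above translates directly into a recursive implementation. Here is a sketch in Python (my own illustration, not from the original post; it follows numpy's unnormalised $e^{-2\pi i jk/N}$ convention rather than the $1/\sqrt{N}$-normalised matrix used earlier):

```python
import numpy as np

def fft(x):
    """Radix-2 FFT via the even/odd split; len(x) must be a power of two."""
    N = len(x)
    if N == 1:
        return x                                  # DFT_1 is the identity
    even = fft(x[0::2])                           # DFT_{N/2} of the even terms
    odd = fft(x[1::2])                            # DFT_{N/2} of the odd terms
    phases = np.exp(-2j * np.pi * np.arange(N // 2) / N)   # the w^j factors
    return np.concatenate([even + phases * odd,   # rows j
                           even - phases * odd])  # rows j + N/2, since w^{j+N/2} = -w^j

x = np.arange(8, dtype=float)
assert np.allclose(fft(x), np.fft.fft(x))         # agrees with numpy's FFT
```

+ +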

In the case of the fast Fourier transform, the time complexity reduces to $\mathcal{O}(N\log N)$ (try proving this yourself). This is a huge improvement over the classical DFT, and pretty much the state-of-the-art algorithm used in modern-day music systems like your iPod!

+ +
+ +

The Quantum Fourier transform with quantum gates:

+ +

The strength of the FFT is that we are able to use the symmetry of the discrete Fourier transform to our advantage. The circuit application of QFT uses the same principle, but because of the power of superposition QFT is even faster.

+ +

The QFT is motivated by the FFT, so we will follow the same steps, but because this is a quantum algorithm the implementation of the steps will be different. That is, we first take the Fourier transform of the odd and even parts, then multiply the odd terms by the phase $w^{j}$.

+ +

In a quantum algorithm, the first step is fairly simple. The odd and even terms are together in superposition: the odd terms are those whose least significant bit is $1$, and the even with $0$. Therefore, we can apply $\text{QFT}_{(N/2)}$ to both the odd and even terms together. We do this by simply applying $\text{QFT}_{(N/2)}$ to the $n-1$ most significant bits, and recombining the odd and even terms appropriately by applying the Hadamard to the least significant bit.

+ +

Now to carry out the phase multiplication, we need to multiply each odd term $j$ by the phase $w^{j}$. But remember, an odd number in binary ends with a $1$ while an even ends with a $0$. Thus we can use the controlled phase shift, where the least significant bit is the control, to multiply only the odd terms by the phase without doing anything to the even terms. Recall that the controlled phase shift is similar to the CNOT gate in that it only applies a phase to the target if the control bit is one.

+ +

+ +

Note: In the image M refers to what we are calling N.

+ +

The phase associated with each controlled phase shift should be equal to $w^{j}$ where $j$ is associated to the $k$-th bit by $j = 2k$. Thus, apply the controlled phase shift to each of the first $n - 1$ qubits, with the least significant bit as the control. With the controlled phase shift and the Hadamard transform, $\text{QFT}_N$ has been reduced to $\text{QFT}_{(N/2)}$.

+ +

+ +

Note: In the image, M refers to what we are calling N.

+ +

Example:

+ +

Let's construct $\text{QFT}_3$. Following the algorithm, we will turn $\text{QFT}_3$ into $\text{QFT}_2$ and a few quantum gates. Then continuing on this way we turn $\text{QFT}_2$ into $\text{QFT}_1$ (which is just a Hadamard gate) and another few gates. Controlled phase gates will be represented by $R_\phi$. Then run through another iteration to get rid of $\text{QFT}_2$. You should now be able to visualize the circuit for $\text{QFT}$ on more qubits easily. Furthermore, you can see that the number of gates necessary to carry out $\text{QFT}_N$ is exactly $$\sum_{i=1}^{\log(N)} i=\log(N)(\log(N)+1)/2 = \mathcal{O}(\log^2 N)$$

+ +
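To make this concrete, here is an illustrative numpy sketch (my own, not from the original post; it assumes qubit $0$ is the most significant bit, which is one possible convention) that builds $\text{QFT}_N$ from exactly the ingredients described: one Hadamard per qubit, controlled phase gates $R_\phi$ with $\phi=\pi/2^{m-j}$, and final swaps to reverse the qubit order. It checks the result against the DFT matrix and that the Hadamard and phase gate count matches $\log(N)(\log(N)+1)/2$:

```python
import numpy as np

def hadamard_on(n, q):
    """2^n x 2^n matrix for H on qubit q (qubit 0 = most significant bit)."""
    N = 2**n
    U = np.zeros((N, N), dtype=complex)
    for x in range(N):
        b = (x >> (n - 1 - q)) & 1
        x0 = x & ~(1 << (n - 1 - q))         # basis state with qubit q = 0
        x1 = x0 | (1 << (n - 1 - q))         # basis state with qubit q = 1
        U[x0, x] += 1 / np.sqrt(2)
        U[x1, x] += (-1)**b / np.sqrt(2)
    return U

def cphase_on(n, a, b, phi):
    """Diagonal gate applying phase exp(i*phi) when qubits a and b are both 1."""
    d = [np.exp(1j * phi) if (x >> (n - 1 - a)) & 1 and (x >> (n - 1 - b)) & 1
         else 1 for x in range(2**n)]
    return np.diag(d)

def swap_on(n, a, b):
    """Permutation matrix swapping qubits a and b."""
    P = np.zeros((2**n, 2**n))
    for x in range(2**n):
        ba, bb = (x >> (n - 1 - a)) & 1, (x >> (n - 1 - b)) & 1
        y = x & ~(1 << (n - 1 - a)) & ~(1 << (n - 1 - b))
        P[y | (ba << (n - 1 - b)) | (bb << (n - 1 - a)), x] = 1
    return P

def qft_circuit(n):
    U, gates = np.eye(2**n, dtype=complex), 0
    for j in range(n):                        # peel off one QFT_{N/2} at a time
        U = hadamard_on(n, j) @ U
        gates += 1
        for m in range(j + 1, n):             # controlled phases R_phi
            U = cphase_on(n, m, j, np.pi / 2**(m - j)) @ U
            gates += 1
    for q in range(n // 2):                   # reverse the qubit order
        U = swap_on(n, q, n - 1 - q) @ U
    return U, gates

n = 3
N = 2**n
F = np.array([[np.exp(2j * np.pi * r * c / N) for c in range(N)]
              for r in range(N)])
U, gates = qft_circuit(n)
assert np.allclose(U, F / np.sqrt(N))         # circuit reproduces the DFT matrix
assert gates == n * (n + 1) // 2              # = log N (log N + 1) / 2
```

+ +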
+ +

Sources:

+ +
    +
  1. https://en.wikipedia.org/wiki/Discrete_Fourier_transform

  2. +
  3. https://en.wikipedia.org/wiki/Quantum_Fourier_transform

  4. +
  5. Quantum Mechanics and Quantum Computation MOOC (UC BerkeleyX) - Lecture Notes : Chapter 5

  6. +
+ +

P.S: This answer is in its preliminary version. As @DaftWillie mentions in the comments, it doesn't go much into ""any insight that might give some guidance with regards to other possible algorithms"". I encourage alternate answers to the original question. I personally need to do a bit of reading and resource-digging so that I can answer that aspect of the question.

+",26,,26,,05-09-2018 11:52,05-09-2018 11:52,,,,3,,,,CC BY-SA 4.0 +2003,2,,1999,05-04-2018 10:40,,8,,"

This is deviating a little from the original question, but I hope gives a little more insight that could be relevant to other problems.

+ +

One might ask ""What is it about order finding that lends itself to efficient implementation on a quantum computer?"". Order Finding is the main component of factoring algorithms, and includes the Fourier transform as part of it.

+ +

The interesting thing is that you can put things like order finding, and Simon's problem, in a general context called the ""Hidden Subgroup Problem"".

+ +

Let us take a group $G$, with elements indexed by $g$, and a group operation '$\oplus$'. We are given an oracle that evaluates the function $f(g)$, and we are assured that there is a subgroup, $K$, of $G$ with elements $k$ such that for all $g\in G$ and $k\in K$, $f(g)=f(g\oplus k)$. It is our task to uncover the generators of the subgroup $K$. For example, in the case of Simon's problem, the group $G$ is all $n$-bit numbers, and the subgroup $K$ is a pair of elements $\{0,s\}$. The group operation is bitwise addition.

+ +

Efficient solutions (that scale as a polynomial of $\log|G|$) exist if the group $G$ is Abelian, i.e. if the operation $\oplus$ is commutative, making use of the Fourier Transform over the relevant group. There are well-established links between the group structure (e.g. $\{0,1\}^n,\oplus$) and the problem that can be solved efficiently (e.g. Simon's problem). For example, if we could solve the Hidden Subgroup Problem for the symmetric group, it would help with the solution of the graph isomorphism problem. In this particular case, how to perform the Fourier Transform is known, although this in itself is not sufficient for creating an algorithm, in part because there is some additional post-processing that is required. For example, in the case of Simon's Problem, we required multiple runs to find enough linearly independent vectors to determine $s$. In the case of factoring, we were required to run a continued fractions algorithm on the output. So, there's still some classical post-processing that has to be done efficiently, even once the appropriate Fourier transform can be implemented.

+ +
+ +

Some more details

+ +

In principle, the Hidden Subgroup Problem for Abelian groups is solved as follows. We start with two registers, $|0\rangle|0\rangle$, and prepare the first in a uniform superposition of all group elements, $$\frac{1}{\sqrt{|G|}}\sum_{g\in G}|g\rangle|0\rangle,$$ and perform the function evaluation $$\frac{1}{\sqrt{|G|}}\sum_g|g\rangle|f(g)\rangle=\frac{1}{\sqrt{|G|}}\sum_{g\in K^\perp}\sum_{k\in K}|g\oplus k\rangle|f(g)\rangle,$$ where $K^\perp$ is defined such that combining each of its elements with the members of $K$ yields the whole group $G$ (i.e. each member of $K^\perp$ creates a different coset, yielding distinct values of $f(g)$), and is known as the orthogonal subgroup. Tracing over the second register gives $$\frac{1}{|G|}\sum_{g\in K^\perp}\sum_{k,k'\in K}|g\oplus k\rangle\langle g\oplus k'|.$$ Now we perform the Fourier Transform over the group $G$, giving the output state $$\frac{|K|}{|G|}\sum_{g\in K^\perp}|g\rangle\langle g|.$$ Each of the vectors $|g\rangle$ with $g\in K^\perp$ has a probability of $|K|/|G|$ of being found, and all others have 0 probability. Once the generators of $K^\perp$ have been determined, we can figure out the generators of $K$ via some linear algebra.
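As a small worked illustration (my own, for the special case $G=\{0,1\}^n$ with bitwise addition, which is the setting of Simon's problem), the orthogonal subgroup consists of the strings orthogonal modulo 2 to every element of $K$, and finding it is exactly the 'some linear algebra' step:

```python
from itertools import product

def orthogonal_subgroup(gens, n):
    """All y in {0,1}^n with y.k = 0 (mod 2) for every generator k of K."""
    return [y for y in product([0, 1], repeat=n)
            if all(sum(yi * ki for yi, ki in zip(y, k)) % 2 == 0 for k in gens)]

# Simon's problem on 3 bits with hidden string s = 101:
# K = {000, 101}, so |K| = 2 and we expect |K_perp| = |G|/|K| = 4.
Kperp = orthogonal_subgroup([(1, 0, 1)], 3)
print(Kperp)   # [(0, 0, 0), (0, 1, 0), (1, 0, 1), (1, 1, 1)]
```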

+",1837,,,,,05-04-2018 10:40,,,,0,,,,CC BY-SA 4.0 +2004,1,2005,,05-04-2018 10:49,,17,3217,"

Grover's Search algorithm is usually talked about in terms of finding a marked entry in an unsorted database. This is a natural formalism that lets it be applied directly to searching for solutions to NP problems (where a good solution is easily recognised).

+ +

I was interested to learn about other applications of Grover's search to finding the minimum, mean and median of a set of numbers. That leaves me wondering if there are any other less-obvious applications of Grover's search (or applications of its generalisations such as amplitude amplification) which are already known? Any brief insight about how this is done would be appreciated.

+",1837,,1837,,09-10-2018 07:37,01-07-2021 10:11,What applications does Grover's Search Algorithm have?,,4,0,,,,CC BY-SA 4.0 +2005,2,,2004,05-04-2018 10:57,,8,,"

Apart from the ones you mentioned, another application of (a modified) Grover's algorithm which I'm aware of is solving the Collision problem in complexity theory, quantum computing and computational mathematics. It's also called the BHT algorithm.

+ +
+

Introduction:

+ +

The collision problem most often refers to the 2-to-1 version, which was described by Scott Aaronson in his PhD thesis. Given that $n$ is even and a function $f:\{1,...,n\}\to\{1,...,n\}$, we know beforehand that either $f$ is 1-to-1 or 2-to-1. We are only allowed to make queries about the value of $f(i)$ for any $i\in\{1,2,...,n\}$. The problem then asks how many queries we need to make to determine with certainty whether $f$ is 1-to-1 or 2-to-1.

+ +

Solving the 2-to-1 version deterministically requires $n/2+1$ queries, and in general distinguishing $r$-to-1 functions from 1-to-1 functions requires $n/r+1$ queries.

+ +

Deterministic classical solution:

+ +

This is a straightforward application of the pigeonhole principle: if a function is $r$-to-1, then after $n/r+1$ queries we are guaranteed to have found a collision. If a function is 1-to-1, then no collision exists. If we are unlucky then $n/r$ queries could return distinct answers. So $n/r+1$ queries are necessary.

+ +

Randomized classical solution:

+ +

If we allow randomness, the problem is easier. By the birthday paradox, if we choose (distinct) queries at random, then with high probability we find a collision in any fixed 2-to-1 function after $\Theta(\sqrt{n})$ queries.

+ +

Quantum BHT solution:

+ +

Intuitively, the algorithm combines the square-root speedup from the birthday paradox using (classical) randomness with the square-root speedup from Grover's (quantum) algorithm.

+ +

First, $n^{1/3}$ inputs to $f$ are selected at random and $f$ is queried at all of them. If there is a collision among these inputs, then we return the colliding pair of inputs. Otherwise, all these inputs map to distinct values by $f$. Then Grover's algorithm is used to find a new input to $f$ that collides. Since there are only $n^{2/3}$ such inputs to $f$, Grover's algorithm can find one (if it exists) by making only $\mathcal{O}(\sqrt{n^{2/3}})=\mathcal{O}(n^{1/3})$ queries to $f$.

+
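The classical $\Theta(\sqrt{n})$ birthday step is easy to simulate. Here is an illustrative Python sketch (mine, not from the quoted sources): it builds a random 2-to-1 function and queries it at distinct random points until a collision appears, which typically happens after $O(\sqrt{n})$ queries:

```python
import random

def make_two_to_one(n):
    """A random 2-to-1 function on {0, ..., n-1} (n must be even)."""
    inputs = list(range(n))
    random.shuffle(inputs)
    f = {}
    for v, (a, b) in enumerate(zip(inputs[0::2], inputs[1::2])):
        f[a] = f[b] = v                  # each output value has two preimages
    return f

def find_collision(f, n):
    """Query f at distinct random points until two queries collide."""
    seen, queries = {}, 0
    for i in random.sample(range(n), n):
        queries += 1
        v = f[i]
        if v in seen:
            return (seen[v], i), queries
        seen[v] = i

random.seed(0)
f = make_two_to_one(1000)
(a, b), q = find_collision(f, 1000)
assert f[a] == f[b] and a != b
print(q)    # typically on the order of sqrt(1000), i.e. a few dozen queries
```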
+ +

Sources:

+ +
    +
  1. https://en.wikipedia.org/wiki/Collision_problem

  2. +
  3. https://en.wikipedia.org/wiki/BHT_algorithm

  4. +
  5. Quantum Algorithm for the Collision Problem - Gilles Brassard, Peter Hoyer, Alain Tapp

  6. +
+",26,,26,,05-04-2018 14:34,05-04-2018 14:34,,,,5,,,,CC BY-SA 4.0 +2006,2,,1999,05-04-2018 16:09,,10,,"

One possible answer as to why we can realise the QFT efficiently is down to the structure of its coefficients. To be precise, we can represent it easily as a quadratic form expansion, which is a sum over paths whose phases are given by a quadratic function:
$$
  F_{2^n}
  =
  \frac{1}{\sqrt{2^n}}
  \sum_{k,x \in \{0,1\}^n}
  \exp\bigl(i Q(k,x)\bigr) \; \lvert k \rangle\!\langle x \rvert,
$$
where $Q(z) = \sum_{1 \leqslant j \leqslant k \leqslant 2n} \theta_{j,k} z_j z_k$ is a quadratic form defined on $2n$-bit strings $z = (k,x)$. The quantum Fourier transform in particular involves a quadratic form whose angles are given by
$$
  \theta_{j,k}
  =
  \begin{cases}
    \pi\big/2^{2n-j-k+1},
      & \text{if $1 \leqslant j \leqslant n < k \leqslant 2n-j+1$}
  \\
    0, & \text{otherwise}.
  \end{cases}
$$
The structure of these angles has an important feature, which allows the QFT to be easily realised as a unitary circuit:

+ +
    +
  1. There is a function $f: \{1,2,\ldots,n\} \to \{n{+}1,n{+}2,\ldots,2n\}$ such that $\theta_{j,\,f(j)} = \pi$ for each $1 \leqslant j \leqslant n$ (where $f(j) = 2n-j+1$ in the case of the QFT);
  2. For any $1 \leqslant h,j \leqslant n$ for which $\theta_{h,\,f(j)} \ne 0$, we have $\theta_{j,\,f(h)} = 0$.
+ +

We may think of the indices of $z = (k,x) \in \{0,1\}^{2n}$ as input and output wires of a quantum circuit, where our task is to describe the circuit in the middle, showing how the inputs connect to the outputs. The function $f$ above allows us to see the association of output wires to input wires: in each case there is a Hadamard gate which connects the two ends together, and apart from the Hadamards (and SWAP gates which account for the reversal of the order of the indices between $(1,2,\ldots,n)$ and $(f(1), f(2), \ldots, f(n))$), all of the other operations are two-qubit controlled-phase gates for relative phases of $\exp(i \theta_{j,k})$. The second condition on $f$ serves to ensure that these controlled-phase gates can be given a well-defined time ordering.
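As a numerical sanity check of the expansion (a sketch assuming an LSB-first bit convention, with the angles $\theta_{j,k} = \pi/2^{2n-j-k+1}$ on the index range stated above):

```python
import numpy as np
from itertools import product

n = 3
N = 2 ** n
# Direct DFT matrix: F[k, x] = exp(2*pi*i*k*x/N) / sqrt(N)
F = np.array([[np.exp(2j * np.pi * k * x / N) for x in range(N)]
              for k in range(N)]) / np.sqrt(N)

def theta(j, l):
    # Angles of the quadratic form on the 2n-bit string z = (k-bits, x-bits).
    if 1 <= j <= n < l <= 2 * n - j + 1:
        return np.pi / 2 ** (2 * n - j - l + 1)
    return 0.0

G = np.zeros((N, N), dtype=complex)
for kbits in product([0, 1], repeat=n):
    for xbits in product([0, 1], repeat=n):
        z = kbits + xbits
        Q = sum(theta(j, l) * z[j - 1] * z[l - 1]
                for j in range(1, 2 * n + 1) for l in range(j, 2 * n + 1))
        kk = sum(b << i for i, b in enumerate(kbits))  # LSB-first integers
        xx = sum(b << i for i, b in enumerate(xbits))
        G[kk, xx] = np.exp(1j * Q) / np.sqrt(N)

assert np.allclose(F, G)   # the quadratic form expansion reproduces the QFT
```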

+ +

There are more general conditions which one could describe for when a quadratic form expansion gives rise to a realisable circuit, along similar lines. The above describes one of the simplest cases, in which there are no indices in the sum except for those for the standard basis of the input and output states (in which case the coefficients of the associated unitary all have the same magnitude).

+",124,,124,,05-05-2018 10:32,05-05-2018 10:32,,,,2,,,,CC BY-SA 4.0 +2007,1,,,05-04-2018 17:45,,5,1361,"

There have been a couple of simulations already made and I recently saw this high performance, hardware accelerated quantum computer simulator QCGPU and started to wonder about how to simulate quantum computing in VR. I would like to look at demonstrating the theory behind it. As a teaching experience. Then afterwards simulating the algorithms that could run on it.

+ +

The motivation is that VR doesn't have the same limitations we have in the physical world. Essentially it will graphically illustrate the quantum physics behind this computer and show mesoscopic physics at work. The VR will illustrate experiments of quantum physics and the theory behind this computer, but it will also illustrate speed and algorithm execution. For example, this is like showing the mainframe generation how a 7th-gen CPU would function. From hardware to software, theoretically, it would simulate how algorithms would work on this machine.

+ +

Superposition and entanglement would be the main focus. For instance, to show entanglement the player would interact with a photon, electron, molecule, or laser (polarization). The distance between electrons can be scaled to show qubits that are separated by incredible distances interacting with each other instantaneously.

+ +

Is it possible to build such a research environment for enthusiasts and professionals to refer to with authenticity?

+",2215,,26,,12/13/2018 19:48,12/13/2018 19:48,"Is it possible to simulate a quantum computer in Virtual Reality? If yes, how?",,3,3,,,,CC BY-SA 4.0 +2008,2,,2007,05-04-2018 19:58,,7,,"

Virtual reality in a classical computer is just a fancy front-end on top of a classical simulation. A classical computer can simulate all of the quantum physics happening inside a quantum computer, including all the phenomena referred to in the question, but only for a limited number of qubits.

+ +

A 45-qubit circuit was simulated using 0.5PB of RAM in 2017.
A 49-qubit circuit was simulated using 3TB of RAM in 2017.
A 64-qubit circuit was simulated using 8TB of RAM in 2018.

+ +

These were universal random circuits. If you allow the circuit to be non-random, as in the case of most specific quantum algorithms, you can simulate more qubits.

+ +

For example, Bravyi and Gosset showed that circuits dominated by Clifford gates can be simulated classically in a time that scales polynomially with the number of qubits and gates, and exponentially only in the number of $T$ gates. Here is the arXiv link if you don't have access to PRL.

+ +

If you want to simulate a real physical system (for example for your VR demonstration for students) you also need to model decoherence, but a Lindblad master equation can approximate this with the same amount of RAM as the above ""exact"" simulations for closed systems. If you want to simulate the decoherence without a Markovian approximation, there are programs on GitHub such as FeynDyn (Feynman Dynamics). In my answer to a recent question here, I simulated 3 qubits in the presence of non-Markovian decoherence in 62 seconds, and the authors of FeynDyn claim they can simulate up to 16 qubits with non-Markovian decoherence.

+",2293,,2293,,05-06-2018 19:33,05-06-2018 19:33,,,,3,,,,CC BY-SA 4.0 +2009,1,2010,,05-05-2018 19:02,,5,281,"

My understanding of Shor's algorithm is that you have to carry out the following steps if you are trying to factor $N$:

+ +
    +
  1. Choose a random number less than $N$. Let's call it $a$.
  2. Calculate the period of $a^x \ \text{mod} \ N$. Let's call the period $r$.
  3. One of the factors is the GCD of $a^{r/2}+1$ and $N$. The other is the GCD of $a^{r/2}-1$ and $N$.
+ +

However this does not work in some cases, such as if $N=35$ and $a=10$. You should be getting $5$ and $7$ as the prime factors of $35$, but this is not the case. The period of $10^x \ \text{mod} \ 35$ is $6$. The GCD of $10^{6/2}+1 = 1001$ and $35$ is $7$, which is one of the factors. But the GCD of $10^{6/2}-1 = 999$ and $35$ is $1$, which is not what you should be getting. Why doesn't Shor's algorithm work in this case?
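The numbers above can be verified directly (a quick Python check):

```python
from math import gcd

N, a = 35, 10
seq = [pow(a, x, N) for x in range(1, 13)]
assert seq[:6] == seq[6:12]            # the sequence repeats with period 6
r = 6
assert gcd(a ** (r // 2) + 1, N) == 7  # gcd(1001, 35) = 7, a genuine factor
assert gcd(a ** (r // 2) - 1, N) == 1  # gcd(999, 35) = 1, only the trivial factor
assert gcd(a, N) == 5                  # a = 10 already shares a factor with N
```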

+",699,,699,,05-05-2018 21:08,05-05-2018 22:39,Confusion about random sampling of integers in Shor's algorithm,,1,2,0,,,CC BY-SA 4.0 +2010,2,,2009,05-05-2018 19:35,,6,,"

You skipped a step in the algorithm.

+ +
    +
  1. First check if $N$ is even. $35$ is not even.
  2. Next determine if $N=a^b$ for $a \geq 1$ and $b \geq 2$. It's not.
  3. Randomly choose $x$ in the range $1$ to $N-1$. If $\text{gcd}(x,N) > 1$ then return the factor $\text{gcd}(x,N)$. This is what you missed. $\text{gcd}(10,35) = 5$. There's no reason to perform order finding if you choose $x = 10$. $x$ should be co-prime to $N$ in order to continue.
+ +

For completeness:

+ +
    +
  1. Find the order $r$ of $x\bmod N$.
  2. If $r$ is even and $x^{r/2} \neq -1 \pmod N$ then compute $\text{gcd}(x^{r/2}-1,N)$ and $\text{gcd}(x^{r/2}+1,N)$ to see if one of these is a non-trivial factor. Otherwise, the algorithm fails.
+ +
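For illustration, the classical outer loop might look as follows (a sketch with a brute-force stand-in for the quantum order-finding subroutine, and with the perfect-power check of step 2 omitted for brevity):

```python
from math import gcd
from random import Random

def order(x, N):
    # Brute-force stand-in for the quantum order-finding subroutine;
    # assumes gcd(x, N) = 1 so that the order exists.
    r, y = 1, x % N
    while y != 1:
        y = (y * x) % N
        r += 1
    return r

def find_factor(N, rng):
    if N % 2 == 0:
        return 2
    while True:
        x = rng.randrange(2, N)
        g = gcd(x, N)
        if g > 1:
            return g               # lucky guess: no order finding needed
        r = order(x, N)
        if r % 2 == 0 and pow(x, r // 2, N) != N - 1:
            for f in (gcd(pow(x, r // 2, N) - 1, N),
                      gcd(pow(x, r // 2, N) + 1, N)):
                if 1 < f < N:
                    return f       # non-trivial factor found

assert find_factor(35, Random(0)) in (5, 7)
```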

The reason the algorithm could fail is because you don't have enough qubits to perform the order-finding part to enough precision.

+ +

These steps came from Section 5.3.2 of Nielsen & Chuang.

+",54,,23,,05-05-2018 22:39,05-05-2018 22:39,,,,8,,,,CC BY-SA 4.0 +2011,2,,2007,05-06-2018 08:36,,5,,"

Virtual reality is just a pretty front-end on top of a computer program. So, anything the computer program can do can be given a VR interface. As I see it, what is really being asked for is nice, yet 'authentic' ways of visualising quantum mechanics. One thing that immediately springs to mind is a book I read as an undergraduate, Mr. Tompkins in Paperback. Alice in Quantumland may also provide inspiration, I haven't read it. Then there's the minecraft mod (which I haven't played).

+ +

The standard trick for doing the maths would essentially be to solve the Schrodinger equation but make the value of $\hbar$ much larger. That way, you should be able to get diffraction effects and similar at a macroscopic scale. This is a trick which probably works quite well for continuous systems (e.g. where you measure position), but I don't think it helps so much where you're talking about discrete systems (e.g. a qubit). In fact, there are probably some more fundamental questions you have to answer first, like 'What does an electron look like?' Is it really a nice little ball that you can pick up and play with, or is it something more diffuse and wavy? How do you represent its spin? (I don't think an arrow pointing in some direction is very authentic.) Part of the problem you rapidly get into here is that by looking at a quantum object, you're measuring it, and therefore changing it, and often making it so that it can't do the cool quantum trick you want it to do. So, I think you generally have to make some compromises to authenticity somewhere in order to create a representation that hopefully facilitates some sort of understanding.

+",1837,,1837,,05-06-2018 13:54,05-06-2018 13:54,,,,0,,,,CC BY-SA 4.0 +2012,1,,,05-07-2018 05:30,,7,118,"

Within Quantum Error Correction and stabilizer codes, toric codes/surface codes are very tempting, mainly for their high error threshold. For more background please check up, in our Physics sister (aunt?) site: Quantum Error Correction: Surface code vs. color code.

+ +

However, these codes require fairly specific measurements in specific bases, which I find hard to translate in practice, especially into my language of interest, which is spin states in a solid-state few-spin collection. To see my motivation, here is a not-quite-successful attempt from a few years ago, using a more naïve QEC scheme: ""Quantum Error Correction with magnetic molecules"".

+ +

So, the problem:

+ + + +

Related: How does the size of a toric code torus affect its ability to protect qubits?

+",1847,,1847,,5/19/2018 5:19,5/19/2018 9:12,Translation of color/toric code to a small network of solid-state spins,,1,4,,,,CC BY-SA 4.0 +2013,1,,,05-07-2018 15:19,,13,307,"

Recently, I heard that there can be transfer of a fractional number of classical bits (for example 1.5 cbits) from one party to another via quantum teleportation. In the Standard Teleportation Protocol, 2 classical bits and 1 maximally entangled shared resource state are required for perfect teleportation of the unknown state. But I do not understand how $1.x$ bits can be sent over in the classical channel.

+ +
    +
  1. Is that possible? If yes, could you give a brief explanation?
  2. It'd be helpful if you could point me to some papers in which perfect teleportation is possible using fractional bits (and possibly extra quantum resources).
+ +

Some people might be wondering how this may be relevant to quantum computing. D. Gottesman and I.L. Chuang suggested that quantum teleportation will play an important role as a primitive subroutine in quantum computation. G. Brassard, S.L. Braunstein and R. Cleve showed that quantum teleportation can be understood as quantum computation.

+",506,,55,,2/20/2021 16:24,2/20/2021 16:24,Using a fractional number of classical bits within quantum teleportation,,2,7,,,,CC BY-SA 4.0 +2014,2,,2013,05-07-2018 16:21,,8,,"

I don't know for sure how you would achieve fewer than two bits of classical communication for a teleportation, but here's one way that you could have a non-integer number: if you teleport a qudit with dimension $d$ that is not a power of two. For each teleportation protocol, you'd have to send two dits of information, which you could represent in bits using $\lceil 2\log_2(d)\rceil$ bits. If you then repeat the protocol many times, you could combine the classical messages that you're sending and reduce it to $2\log_2(d)$ per teleportation protocol on average.

+ +

One possible route towards fewer than two bits of classical communication (if that's what you're after) is to use a combination of imperfect teleportation and non-universal teleportation (where we have some prior knowledge of what the state to be teleported could be). If your resource state is $\alpha|00\rangle+\sqrt{1-\alpha^2}|11\rangle$, then the probability of getting each measurement result in the teleportation protocol depends on the value of $\alpha$: teleporting a state $(\cos\frac{\theta}{2}|0\rangle+\sin\frac{\theta}{2}e^{i\phi}|1\rangle)$ gives the probabilities of the four different Bell measurement outcomes,
$$
|B_{xy}\rangle=\frac{1}{\sqrt{2}}\left(|0x\rangle+(-1)^y|1\bar x\rangle\right)
$$
as
$$
p_{xy}=\frac{1}{4}(1+(-1)^x(2\alpha^2-1)\cos\theta),
$$
where $x$ and $y$ are single bits. Using the input distribution for the unknown quantum state, we can calculate the average value of $\cos\theta$.
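These probabilities can be checked against a direct statevector calculation (a sketch in which qubit 1 carries the input state and qubits 2 and 3 the resource state):

```python
import numpy as np

alpha, theta, phi = 0.9, 1.2, 0.7
beta = np.sqrt(1 - alpha ** 2)

psi_in = np.array([np.cos(theta / 2), np.sin(theta / 2) * np.exp(1j * phi)])
resource = np.array([alpha, 0, 0, beta], dtype=complex)  # alpha|00> + beta|11>
state = np.kron(psi_in, resource)                        # qubit 1 (x) qubits 2,3

def bell(x, y):
    # |B_xy> = (|0 x> + (-1)^y |1 x-bar>)/sqrt(2) on qubits 1 and 2
    b = np.zeros(4, dtype=complex)
    b[x] = 1 / np.sqrt(2)
    b[2 + (1 - x)] = (-1) ** y / np.sqrt(2)
    return b

ps = []
for x in (0, 1):
    for y in (0, 1):
        # Project qubits (1,2) onto |B_xy>, tracing out qubit 3.
        proj = np.kron(np.outer(bell(x, y), bell(x, y).conj()), np.eye(2))
        p = np.vdot(state, proj @ state).real
        predicted = 0.25 * (1 + (-1) ** x * (2 * alpha ** 2 - 1) * np.cos(theta))
        assert abs(p - predicted) < 1e-12
        ps.append(p)

assert abs(sum(ps) - 1) < 1e-12   # the four outcomes exhaust all cases
```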

+ +

For universal teleportation (where the input state could be any state), one has $\int_0^{\pi}\cos\theta\sin\theta d\theta=0$. In this case, the probabilities are all equal, and the best we can do is just to send the measurement result as two bits, $xy$.

+ +

Now imagine the case where $(2\alpha^2-1)\langle\cos\theta\rangle=\frac12$. Then, the probabilities are $(\frac38,\frac38,\frac18,\frac18)$. One can compress this information using, for example, Huffman encoding: $\{00,01,10,11\}\mapsto\{0,10,110,111\}$. This has an expected message length $\frac{15}{8}$. Thus, if you repeat this protocol many times, on average you send 1.875 bits per teleportation. This, of course, is just an example. Any value $(2\alpha^2-1)\langle\cos\theta\rangle>\frac13$ gives compression.
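The arithmetic is easy to verify, and the code $\{0, 10, 110, 111\}$ is prefix-free, so it decodes unambiguously:

```python
probs = [3/8, 3/8, 1/8, 1/8]      # outcomes 00, 01, 10, 11
code = ["0", "10", "110", "111"]  # a Huffman code for these probabilities

# Prefix-free: no codeword is a prefix of another.
for i, a in enumerate(code):
    for j, b in enumerate(code):
        assert i == j or not b.startswith(a)

expected = sum(p * len(c) for p, c in zip(probs, code))
assert expected == 15 / 8         # 1.875 bits per teleportation on average
```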

+ +

The trade-off is that unless $|\alpha|^2=|\beta|^2=\frac12$ (where you don't get any compression), the teleportation is imperfect.

+",1837,,1837,,05-08-2018 06:38,05-08-2018 06:38,,,,1,,,,CC BY-SA 4.0 +2015,2,,1285,05-08-2018 20:46,,5,,"

You do not even need $Z$ and $X$.
$\rm{CNOT}$, $H$, and $T$ (the $\pi/8$ gate) are enough.

+
    +
  1. $H$ and $T$ are enough to make any possible unitary transformation on one qubit.
  2. Adding $\rm{CNOT}$, you can synthesize a general unitary transformation to within any error $\epsilon>0$ using only $\mathcal{O}(\log^2(1/\epsilon))$ gates.
+

If you wish for the error to be $\epsilon=0$ and you are only willing to add the phase gate $\pi/2$, it is still possible, if and only if the elements of the unitary you want to make are of the form: $\frac{a+ib}{2^n} + \frac{c+id}{2^{n+1/2}}$, where all variables are integers. Remarkably, at most 1 auxiliary qubit is required for this exact synthesis.
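As an illustration of this ring of matrix entries, here is a brute-force membership check over small coefficients (`in_ring` is a helper written just for this sketch, not a standard function, and it only searches a small coefficient range):

```python
import numpy as np
from itertools import product

def in_ring(z, n_max=3, c_max=3):
    # Brute-force check that z = (a+ib)/2**n + (c+id)/2**(n+1/2)
    # for integers a, b, c, d with |.| <= c_max and 0 <= n < n_max.
    for n in range(n_max):
        for a, b, c, d in product(range(-c_max, c_max + 1), repeat=4):
            w = (a + 1j * b) / 2 ** n + (c + 1j * d) / 2 ** (n + 0.5)
            if abs(w - z) < 1e-9:
                return True
    return False

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
T = np.diag([1, np.exp(1j * np.pi / 4)])
for M in (H, T, H @ T @ H):       # products of these gates stay in the ring
    assert all(in_ring(z) for z in M.flatten())
```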

+

Another universal gate set is $\{{\rm{CCNOT}},H\}$, and in fact there's a single gate that's universal: the 3-qubit Deutsch gate ${D(\theta)}$.

+",2293,,2293,,12/17/2021 6:16,12/17/2021 6:16,,,,5,,,,CC BY-SA 4.0 +2016,2,,2007,05-09-2018 15:52,,4,,"

As others have said, VR is just a way of visualizing an output from a computer. If the computer producing the output is classical, it will struggle to visualize a universal set of gates on a many qubit system.

+ +

Even so, there are ways to visualize systems of few qubits with a classical computer (though not with VR). The Bloch sphere is the well known example for a single qubit. There are also a couple of games that have visualizations for few qubits:

+ + + +

I’m sure there are others too. I’ll add them as I think of them, or as people point them out in the comments

+ +

Regarding VR specifically, the only project I know of is Quantum Breathing. So you might want to check that out.

+",409,,,,,05-09-2018 15:52,,,,0,,,,CC BY-SA 4.0 +2017,1,2019,,05-09-2018 16:56,,16,813,"

It seems to me that an extremely relevant question for the prospects of quantum computing would be how the engineering complexity of quantum systems scales with size. Meaning, it's easier to build $n$ $1$-qubit computers than one $n$-qubit computer. In my mind, this is roughly analogous to the fact that it's easier to analytically solve $n$ $1$-body problems than one $n$-body problem, since entanglement is the primary motivating factor behind quantum computing in the first place.

+ +

My question is the following: It seems that we should really care about how the 'difficulty' of building and controlling an $n$-body quantum system grows with $n$. Fix a gate architecture, or even an algorithm--is there a difficulty in principle arising from the fact that an $n$-qubit computer is a quantum many-body problem? And that mathematically speaking, our understanding of how quantum phenomena scale up into classical phenomena is quite poor? Here difficulty could be defined in any number of ways, and the question we would care about is, roughly: is controlling a $1000$-qubit machine (that is, preserving the coherence of its wavefunctions) 'merely' $100$x harder than controlling a $10$-qubit machine, or $100^2$, or $100!$ or $100^{100}$? Do we have any reasons for believing that it is more or less the former, and not the latter?

+",1254,,26,,12/13/2018 19:49,12/13/2018 19:49,Are there any estimates on how complexity of quantum engineering scales with size?,,3,3,,,,CC BY-SA 4.0 +2018,2,,2017,05-09-2018 18:33,,4,,"

Circuit Complexity

+ +

I think the first issue is to really understand what is meant by 'controlling' a quantum system. For this, it might help to start thinking about the classical case.

+ +

How many different $n$-bit input, 1-bit output classical computations are there? For each of the $2^n$ possible inputs, there are $2$ different possible outputs. Thus, there are $2^{2^n}$ different possible functions that you could be asked to build, if what you're talking about in terms of controllability is ""build any of the possible functions"". You might then go on to ask ""what fraction of these functions can I create by using no more than $2^n/n$ two-bit gates?"" (you could presumably generalise this to $k$-bit gates to get a relative complexity argument between two circuit sizes). There's a detailed calculation you can perform to get a good bound on this number, showing that it's small. This is something called Shannon's Theorem (but what isn't?), but there's at least an intuitive explanation: it requires a bit string of $2^n$ bits to specify which possible computation you're wanting to perform. This information must be incompressible, as there's no 'space' to be saved. But, if you could create all of these functions using shorter circuits, then describing that circuit would be a way of compressing the data.

+ +

The equivalent statement in quantum computing is ""build any $n$-qubit unitary to within some accuracy, $\epsilon$"". But the classical answer is already horrific, even before we have to take into account the precision issues of specifying an arbitrary unitary. The point is that with both classical and quantum computations, we focus very specifically on the algorithms that we can implement 'easily', for some definition of 'easily', which is usually that the algorithms that we want to implement scale as some polynomial of the input size (with the possible exception of things like Grover's algorithm). So really the answer to the question depends on the algorithms you wish to run on the computer. If the algorithm scales as $O(n^2)$, then appropriately controlling a 1000-qubit machine is kind of 10000 times harder than controlling a 10-qubit machine, in the sense that you need to protect it from decoherence for that much longer, implement that many more gates etc.

+ +

Decoherence

+ +

Following up on the comments,

+ +
+

Let's consider a specific algorithm or a specific kind of circuit. My question could be restated--is there any indication, theoretical or practical, of how the (engineering) problem of preventing decoherence scales as we scale the number of these circuits?

+
+ +

This divides into two regimes. For small scale quantum devices, before error correction, you might say we're in the NISQ regime. This answer is probably most relevant to that regime. However, as your device gets larger, there will be diminishing returns; it gets harder and harder to accomplish the engineering task just to add a few more qubits.

+ +

At that point, you have to transition to using error correction and, indeed, fault-tolerance (which is just a form of error correction which is capable of tolerating errors in the gates that implement the correction). Specifically, fault-tolerance says that there exists a threshold error probability $p$ such that, if you can perform every gate with an error probability $\leq p$, you can define some logical qubits (made up of multiple physical qubits) such that the result of any computation of arbitrary length can be accomplished with arbitrary precision. Whatever your physical hardware, by the time you've left the NISQ regime, you've done a lot of work eliminating decoherence as much as possible, and made sure you're as far below the $p$ threshold as possible. Current estimates place $p$ somewhere around the $1\%$ mark. The question becomes ""what are the overheads for these fault-tolerant processes"". The precise details are scheme dependent, and much work continues into how to minimise these costs. The scaling argument, however, says that for each logical qubit, you require $O(-\log\epsilon)$ physical qubits to achieve an overall accuracy of $\epsilon$. There is also a time cost; most of your time is spent performing error correction rather than the logical gates. Again, this is an $O(-\log\epsilon)$ scale factor. For specific numbers, you might be interested in the sorts of calculations that Andrew Steane has performed: see here (although the numbers could probably be improved a bit now).

+ +

What is really quite compelling is to see how the coefficients in these relations change as your gate error gets closer and closer to the error correcting threshold. I can't seem to lay my hands on a suitable calculation (I'm sure Andrew Steane did one at some point. Possibly it was a talk I went to.), but they blow up really badly, so you want to be operating with a decent margin below the threshold.

+ +

That said, there are a few assumptions that have to be made about your architecture before these considerations are relevant. For example, there has to be sufficient parallelism; you have to be able to act on different parts of the computer simultaneously. If you only do one thing at a time, errors will always build up too quickly. You also want to be able to scale up your manufacturing process without things getting any worse. It seems that, for example, superconducting qubits will be quite good for this. Their performance mainly depends on how accurately you can make different parts of the circuit. You get it right for one, and you can ""just"" repeat many times to make many qubits.

+",1837,,1837,,05-10-2018 08:16,05-10-2018 08:16,,,,2,,,,CC BY-SA 4.0 +2019,2,,2017,05-09-2018 20:53,,9,,"

This is a question that I have been thinking about for more than 10 years. In 2008 I was a student, and I told my quantum computing professor that I wanted to study the ""physical complexity"" of performing quantum algorithms for which the ""computational complexity"" was known to benefit from quantum computation.

+ +

For example Grover search requires $\mathcal{O}(\sqrt{n})$ quantum gates as opposed to $\mathcal{O}(n)$ classical gates, but what if the cost of controlling quantum gates scales as $n^4$ while for classical gates it's only $n$?

+ +

He instantly replied:

+ +
+

""Surely your idea of physical complexity will be implementation dependent""

+
+ +

That turned out to be true. The ""physical complexity"" of manipulating $n$ qubits with NMR is much worse than it is for superconducting qubits, but we do not have a formula for the physical difficulty with respect to $n$ for either case.

+ +

These are the steps you'd need to take:

+ +

1. Come up with an accurate decoherence model for your quantum computer. This will be different for a spin qubit in a GaAs quantum dot, vs a spin qubit in a diamond NV centre, for example.
+ 2. Accurately calculate the dynamics of the qubits in the presence of decoherence.
+ 3. Plot $F$ vs $n$, where $F$ is the fidelity of the $n$ decohered qubits compared to the outcome you'd get without decoherence.
+ 4. This can give you an indication of the error rate (but different algorithms will have different fidelity requirements).
+ 5. Choose an error correcting code. This will tell you how many physical qubits you need for each logical qubit, for an error rate $E$.
+ 6. Now you can plot cost (in terms of number of auxiliary qubits needed) of ""engineering"" the quantum computer.

+ +

Now you can see why you had to come here to ask the question and the answer wasn't in any textbook:

+ +

Step 1 depends on the type of implementation (NMR, Photonics, SQUIDS, etc.)
Step 2 is very hard. Decoherence-free dynamics has been simulated without physical approximations for 64 qubits, but non-Markovian, non-perturbative dynamics with decoherence is presently limited to 16 qubits.
Step 4 depends on the algorithm. So there is no ""universal scaling"" of physical complexity, even if working with a particular type of implementation (like NMR, Photonics, SQUIDs, etc.)
Step 5 depends on the choice of error correcting code

+ +

So, to answer your two questions specifically:

+ +
+

Is controlling a 1000-qubit machine (that is, preserving the coherence of its wavefunctions) 'merely' $100$x harder than controlling a $10$-qubit machine, or $100^2$, or $100!$ or $100^{100}$?

+
+ +

It depends on your choice in Step 1, and no one has been able to go all the way through Step 1 to Step 3 yet to get a precise formula for the physical complexity with respect to the number of qubits, even for a specific algorithm. So this is still an open question, limited by the difficulty of simulating open quantum system dynamics.

+ +
+

Do we have any reasons for believing that it is more or less the former, and not the latter?

+
+ +

The best reason is that this is our experience when we play with IBM's 5-qubit, 16-qubit and 50-qubit quantum computers. The error rates are not going up by $n!$ or $n^{100}$. How does the energy it takes to make the 5-qubit, 16-qubit and 50-qubit quantum computer, and how does that scale with $n$? This ""engineering complexity"" is even more implementation-dependent (think NMR vs SQUIDs) of an open question, albeit an interesting one.

+",2293,,26,,05-09-2018 22:14,05-09-2018 22:14,,,,3,,,,CC BY-SA 4.0 +2020,1,2021,,05-09-2018 22:24,,10,242,"

There is a number of emerging quantum technologies, among which we find the category of photon-based quantum technologies, including quantum key distribution or quantum random number generators.

+ +

The question is: what is the short-term viability of photon-based quantum computation and simulation, compared with other photon-based quantum technologies?

+",1847,,26,,12/13/2018 19:49,12/13/2018 19:49,What is the status of quantum computing compared with other (photonic) quantum technologies?,,1,3,,,,CC BY-SA 4.0 +2021,2,,2020,05-09-2018 22:24,,6,,"

According to this UK-oriented report by Gooch and Housego dated May 8, 2018, quantum computing is only one of several main key applications expected to have a market impact:

+ +
+
    +
  • Clock technology/timing (e.g. bridging between the optical frequencies typical of atomic clocks and electrical/microwave frequencies typical of timing signals within telecommunication networks and computer systems)
  • LIDAR
  • Magnetometry and gravimetry
  • Medical imaging
  • Microscopy, imaging and calibration
  • Navigation
  • Non-QKD communications
  • QKD/quantum cryptography/secure communications
  • QRNG – quantum random number generator
  • Quantum computing and simulation
+
+ +

To put the validity of this report into perspective, one needs to take into account that this study is an estimate of UK's demand (rather than global demand). Within this limitation, one can see that quantum computation and simulation is expected to be a relatively minor player within photon-based quantum technologies.

+ +

In bulk number of devices (demand volume), quantum random number generators are expected to dominate absolutely. The linked report explicitly mentions:

+ +
+

It will be incorporated into every device that requires encryption, which will drive the growth in sales volume.

+
+ +

In demand value, QKD/quantum cryptography/secure communications are expected to be a major player among quantum technologies.

+ +

Other applications, while minor, are considered solid, for example, gravimetry, about which it is stated that:

+ +
+

Commercialisation is forecast for the beginning of 2019 when gravimeters will be used for geo-surveying such as bedrock analysis, detection of underground features and site surveying.

+
+ +

Imaging, navigation or non-QKD communications are similarly appreciated as photon-based quantum technologies that are about to hit a small but realistic market.

+ +
+ +

In comparison, the report also asserts that:

+ +
+

Experts do not believe that a quantum computer will be developed within the next 5 years.

+
+ +

and

+ +
+

Companies in sectors such as finance and banking and telecoms are adopting a ‘watch and wait’ approach, monitoring developments in academia, investing in know-how and awareness and purchasing small numbers of systems so that they will be ready when the technology reaches commercialisation.

+
+",1847,,26,,05-11-2018 13:39,05-11-2018 13:39,,,,3,,,,CC BY-SA 4.0 +2022,2,,2017,05-09-2018 22:29,,3,,"

One way to think about quantum systems, whether the qubits carry error correction and an encoded algorithm built in, or we program an algorithm in the general case, is to consider the fidelity of a qubit. So, for a system of $m$ qubits, we could use proper error correction (for a given system of qubits, whether they are ion trap, superconducting, or any other scheme) and say that the ""fidelity"" of the system, given all these parameters, is $n$ qubits.

+ +

So in a sense, the ""fidelity"" could give an estimate of how error-prone the processor is. If you used the quantum computer to compute, say, chemical reaction dynamics, or any other problem that could use superposition to achieve quantum speedup (or even ""quantum supremacy"" eventually), you could be impacted by decoherence; even how quickly you achieve a superposition could play a part in error-free operation. ""Fidelity"" could give an error estimation, whether we use 1 qubit or say 200 qubits. You could even ""engineer"" a Hamiltonian to give high-fidelity qubits, in the adiabatic case where leakage errors take place.

+ +

Note that in practice, fidelities of 99.5%+ are highly desirable, to facilitate efficient error correction. Errors arise, for example, when reading out electron spin states of qubits. In such a case, fidelities of 99.5% or 99.8% would require less overhead (error correction) when scaling the system.

+",429,,429,,05-09-2018 23:21,05-09-2018 23:21,,,,0,,,,CC BY-SA 4.0 +2025,2,,53,05-09-2018 23:52,,9,,"

As one of the authors of the paper, and of the original theory papers on which that experimental realisation is based, perhaps I can attempt to answer. The BQC protocol used in that paper is based on a model of computation where measurements are made on a specially chosen entangled state (this is known as measurement-based quantum computation or MBQC, and was introduced in 2003 by Raussendorf and Briegel (PRA, arXiv)). In MBQC the resource state is called a graph state, because a circuit to construct the graph state can be associated with a graph: for every vertex prepare a qubit in $|+\rangle$, and then perform a CZ gate between every pair of qubits for which the corresponding vertices share an edge in the graph. It turns out that you can implement an arbitrary quantum computation by first preparing a suitable graph state, and then by measuring out each qubit in turn, with measurement bases determined based on the target computation and on previous measurement outcomes.
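The graph-state construction can be illustrated directly (a sketch for a 3-vertex path graph, checking the defining stabilizers $K_v = X_v \prod_{w \sim v} Z_w$, which should leave the state invariant):

```python
import numpy as np
from functools import reduce

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.diag([1.0, -1.0])
plus = np.ones(2) / np.sqrt(2)

def kron_all(ops):
    return reduce(np.kron, ops)

def cz(nq, a, b):
    # Diagonal CZ between qubits a and b (qubit 0 is the most significant).
    d = np.ones(2 ** nq)
    for idx in range(2 ** nq):
        if (idx >> (nq - 1 - a)) & 1 and (idx >> (nq - 1 - b)) & 1:
            d[idx] = -1
    return np.diag(d)

nq, edges = 3, [(0, 1), (1, 2)]
psi = kron_all([plus] * nq)           # every vertex starts in |+>
for a, b in edges:
    psi = cz(nq, a, b) @ psi          # one CZ gate per edge

stabilizers = [
    kron_all([X, Z, I2]),             # K_0 = X Z I
    kron_all([Z, X, Z]),              # K_1 = Z X Z
    kron_all([I2, Z, X]),             # K_2 = I Z X
]
for K in stabilizers:
    assert np.allclose(K @ psi, psi)  # the graph state is fixed by each K_v
```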

+ +
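As a concrete illustration of the graph-state construction described above (a numpy sketch of my own, not from the paper), here is the two-vertex graph with a single edge: prepare both qubits in $|+\rangle$ and apply one CZ gate.

```python
import numpy as np

# Two qubits, each prepared in |+> = (|0> + |1>)/sqrt(2)
plus = np.array([1, 1]) / np.sqrt(2)
state = np.kron(plus, plus)

# CZ gate: flips the sign of the |11> amplitude
CZ = np.diag([1, 1, 1, -1])
graph_state = CZ @ state

# Resulting graph state: (|00> + |01> + |10> - |11>)/2
expected = np.array([1, 1, 1, -1]) / 2
print(np.allclose(graph_state, expected))  # True
```

The same recipe (a $|+\rangle$ per vertex, a CZ per edge) extends to any graph.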

What the BQC protocol does is to effectively implement an MBQC in a way that hides measurement bases from Bob. The reason we mention a need for a generic structure is because the protocol does not hide the graph. Now, it turns out that you can actually choose a generic graph which can implement any quantum computation which can be expressed as a quantum circuit of a given depth and breadth if the measurement bases are chosen appropriately. Using such a graph ensures that only circuit depth and breadth are leaked, and not the details of the computation. Furthermore, the computation can always be randomly padded to ensure that only an upper bound on depth and breadth are leaked. This is the minimum possible leakage, since ultimately Bob knows how much memory his device has (~circuit breadth) and how long it ran (~circuit depth), and so it is impossible to avoid leaking such upper bounds.

+ +

For more information you may wish to take a look at the following review paper, and references contained therein: +Private quantum computation: an introduction to blind quantum computing and related protocols, J.F. Fitzsimons, npj Quantum Information 2017.

+",2353,,409,,05-10-2018 10:37,05-10-2018 10:37,,,,0,,,,CC BY-SA 4.0 +2027,2,,2013,05-11-2018 10:11,,5,,"

I recently found a paper by Subhash Kak that introduces teleportation protocols requiring a lower classical communication cost (at the expense of more quantum resources). I thought it'd be better to write a separate answer.

+ +

Kak discusses three protocols; two of them use 1 cbit and the last one requires 1.5 cbits. But the first two protocols are in a different setting, i.e the entangled particles are initially in Alice's lab (and a few local operations are performed), then one of the entangled particle is transferred to Bob's lab; this is unlike the Standard setting where the entangled particles are pre-shared between Alice and Bob before the protocol is even started. Interested people can go through those protocols that use only 1 cbit. I'll try to explain the last protocol that uses only 1.5 cbits (fractional cbits).

+ +

There are four particles, namely, $X, Y, Z$ and $U$. $X$ is the unknown particle (or state) that has to be teleported from Alice's lab to Bob's lab. $X, Y$ and $Z$ are with Alice, and $U$ is with Bob. Let $X$ be represented as $\alpha|0\rangle + \beta|1\rangle$, such that $|\alpha|^2+|\beta|^2=1$. The three particles $Y, Z$ and $U$ are in the pure entangled state $|000\rangle+|111\rangle$ (leaving out the normalization constants for now).

+ +

So, the initial state of the whole system is: +$$ +\alpha|0000\rangle + \beta|1000\rangle + \alpha|0111\rangle + \beta|1111\rangle +$$

+ +

Step 1: Apply chained XOR transformations on $X, Y$ and $Z$: (i) XOR the states of $X$ and $Y$; (ii) XOR the states of $Y$ and $Z$.

+ +

The $XOR$ unitary is given by: +$$ +XOR = +\left[{\begin{array}{cccc} +1 & 0 & 0 & 0\\ +0 & 1 & 0 & 0\\ +0 & 0 & 0 & 1\\ +0 & 0 & 1 & 0 +\end{array}}\right]. +$$

+ +

In other words, the state transformations are the following: +$$ +|00\rangle \rightarrow |00\rangle \\ +|01\rangle \rightarrow |01\rangle \\ +|10\rangle \rightarrow |11\rangle \\ +|11\rangle \rightarrow |10\rangle \\ +$$

+ +
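A quick numpy check (my own sketch, not part of Kak's paper) that this matrix implements exactly the listed basis-state transformations:

```python
import numpy as np

# The XOR (CNOT-like) unitary from the protocol
XOR = np.array([[1, 0, 0, 0],
                [0, 1, 0, 0],
                [0, 0, 0, 1],
                [0, 0, 1, 0]])

# Columns of the identity are the basis states |00>, |01>, |10>, |11>
basis = np.eye(4)
out = [XOR @ basis[:, i] for i in range(4)]

# Expected images: |00>, |01>, |11>, |10>  (indices 0, 1, 3, 2)
print([int(np.argmax(v)) for v in out])  # [0, 1, 3, 2]
```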

After Step 1, the state of the whole system is: +$$ +\alpha|0000\rangle + \beta|1110\rangle + \alpha|0101\rangle + \beta|1011\rangle +$$

+ +

Step 2: Apply a Hadamard transform on the state of $X$. +$$ +\alpha(|0000\rangle + |1000\rangle) + \beta(|0110\rangle - |1110\rangle) + \alpha(|0101\rangle + |1101\rangle) + \beta(|0011\rangle - |1011\rangle) +$$

+ +

Step 3: Alice measures the state of $X$ and $Y$.

+ +

On simplifying the above representation, we get +$$ +|00\rangle(\alpha|00\rangle + \beta|11\rangle) + |01\rangle(\alpha|01\rangle + \beta|10\rangle) + |10\rangle(\alpha|00\rangle - \beta|11\rangle) + |11\rangle(\alpha|01\rangle - \beta|10\rangle). +$$

+ +
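Steps 1–3 can be verified numerically. The following numpy sketch (my own, with qubit order $X, Y, Z, U$ and example amplitudes $\alpha=0.6$, $\beta=0.8$) applies the two chained XORs and the Hadamard, and checks that the result matches the grouped expression above:

```python
import numpy as np

a, b = 0.6, 0.8                      # example amplitudes, |a|^2 + |b|^2 = 1
I = np.eye(2)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def ket(bits):
    v = np.zeros(16)
    v[int(bits, 2)] = 1
    return v

# Initial state: (a|0000> + b|1000> + a|0111> + b|1111>)/sqrt(2)
psi = (a*ket('0000') + b*ket('1000') + a*ket('0111') + b*ket('1111')) / np.sqrt(2)

def cnot(control, target, n=4):
    """CNOT on n qubits; qubit 0 is the most significant bit."""
    U = np.zeros((2**n, 2**n))
    for i in range(2**n):
        bits = list(format(i, f'0{n}b'))
        if bits[control] == '1':
            bits[target] = '1' if bits[target] == '0' else '0'
        U[int(''.join(bits), 2), i] = 1
    return U

psi = cnot(0, 1) @ psi               # Step 1(i): XOR X into Y
psi = cnot(1, 2) @ psi               # Step 1(ii): XOR Y into Z
psi = np.kron(H, np.kron(I, np.kron(I, I))) @ psi   # Step 2: H on X

# Grouped form from the text, with overall factor 1/2
expected = (a*ket('0000') + b*ket('0011') + a*ket('0101') + b*ket('0110') +
            a*ket('1000') - b*ket('1011') + a*ket('1101') - b*ket('1110')) / 2
print(np.allclose(psi, expected))  # True
```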

Step 4: Depending on Alice's measurement outcome, appropriate unitaries are applied on $Z$ (by Alice) and $U$ (by Bob).

+ +

(a) If Alice gets $|00\rangle$, then both Alice and Bob do nothing.

+ +

(b) If Alice gets $|10\rangle$, then Alice applies $\left[{\begin{array}{cc}1 & 0 \\0 & -1 \end{array}}\right]$ and Bob does nothing.

+ +

(c) If Alice gets $|01\rangle$, then Alice does nothing and Bob applies $\left[{\begin{array}{cc}0 & 1 \\1 & 0 \end{array}}\right]$.

+ +

(d) If Alice gets $|11\rangle$, then Alice applies $\left[{\begin{array}{cc}1 & 0 \\0 & -1 \end{array}}\right]$ and Bob applies $\left[{\begin{array}{cc}0 & 1 \\1 & 0 \end{array}}\right]$.

+ +

Basically, $\left[{\begin{array}{cc}1 & 0 \\0 & 1 \end{array}}\right]$, $\left[{\begin{array}{cc}1 & 0 \\0 & -1 \end{array}}\right]$, $\left[{\begin{array}{cc}0 & 1 \\1 & 0 \end{array}}\right]$ and $\left[{\begin{array}{cc}0 & 1 \\-1 & 0 \end{array}}\right]$ can be appropriately used to alter the combined state of $Z$ and $U$ so that it becomes $\alpha|00\rangle + \beta|11\rangle$. Note that if Alice gets $|01\rangle$ or $|11\rangle$, then Bob has to apply some unitary so that the combined state of $Z$ and $U$ is $\alpha|00\rangle + \beta|11\rangle$.

+ +
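All four correction cases (a)–(d) can be checked in a few lines of numpy (a sketch of my own, with the outcome-conditioned $Z, U$ states taken from the grouped expression in Step 3):

```python
import numpy as np

a, b = 0.6, 0.8
I = np.eye(2)
Zg = np.diag([1, -1])              # Alice's correction on particle Z
Xg = np.array([[0, 1], [1, 0]])    # Bob's correction on particle U

def ket2(bits):
    v = np.zeros(4)
    v[int(bits, 2)] = 1
    return v

target = a*ket2('00') + b*ket2('11')

# outcome -> (correction on Z and U, post-measurement state of Z and U)
cases = {
    '00': (np.kron(I,  I),  a*ket2('00') + b*ket2('11')),
    '10': (np.kron(Zg, I),  a*ket2('00') - b*ket2('11')),
    '01': (np.kron(I,  Xg), a*ket2('01') + b*ket2('10')),
    '11': (np.kron(Zg, Xg), a*ket2('01') - b*ket2('10')),
}
for outcome, (U, state) in cases.items():
    assert np.allclose(U @ state, target)
print("all four corrections recover a|00> + b|11>")
```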

Step 5: Apply Hadamard transform on the state of $Z$.

+ +

After applying the unitaries, the combined state of $Z$ and $U$ is $\alpha|00\rangle + \beta|11\rangle$ (as mentioned above). So, after Step 5, the combined state of $Z$ and $U$ is,

+ +

$$ +\alpha|00\rangle + \alpha|10\rangle + \beta|01\rangle - \beta|11\rangle \\ += |0\rangle(\alpha|0\rangle + \beta|1\rangle) + |1\rangle(\alpha|0\rangle - \beta|1\rangle). +$$

+ +

Step 6: Alice measures the state of $Z$.

+ +

Based on her measurement outcome, she transmits one classical bit of information to Bob so that he can use an appropriate unitary to obtain the unknown state!

+ +

Discussion: So, how does the protocol require $1.5$ bits of classical communication? Clearly, Step 6 uses 1 cbit, and in Step 4, it is easy to notice that for two outcomes (namely, $|10\rangle$ or $|00\rangle$), Bob need not apply any unitary. Bob has to apply some unitary (specified prior to the protocol; say $\left[{\begin{array}{cc}0 & 1 \\1 & 0 \end{array}}\right]$) if Alice gets the other two outcomes, and in those scenarios, Alice sends one cbit indicating that the unitary is to be used by Bob. So, it is mentioned that this has a communication burden of 0.5 cbits (because 50% of the time, Bob need not apply any unitary). Hence, the whole protocol requires only 1.5 cbits.

+ +

But, Alice must send that 1 cbit whether or not she gets those outcomes, right? Alice and Bob cannot agree on a particular time (after the protocol) when Alice sends that 1 cbit, such that if Bob doesn't get that classical bit by that time, then he knows that he need not apply any unitary. These time-dependent protocols are, in general, not allowed due to relativistic consequences (otherwise, you could even make the Standard protocol use time to indicate information and reduce the classical communication cost to 1 cbit; for example, at $t_1$, send one cbit, or at $t_2$, send one cbit). So, Alice must send that cbit every time, right? In that case, the protocol requires 2 cbits (one in Step 4 and another in Step 6). I thought it'd be good if there was a discussion on this particular part.

+",506,,,,,05-11-2018 10:11,,,,2,,,,CC BY-SA 4.0 +2028,1,,,05-11-2018 14:47,,15,1390,"

When a qubit is measured, there is a ‘collapse of the wave-function’ as a result is randomly chosen.

+ +

If the qubit is entangled with others, this collapse will also affect them. And the way it affects them depends on the way we choose to measure our qubits.

+ +

From this it seems as though things we do on one qubit have instantaneous effects on another. Is this the case, or is the apparent effect more like a Bayesian update of our knowledge about the qubits?

+",409,,26,,5/17/2018 13:43,5/17/2018 13:43,Is it true to say that one qubit in an entangled state can instantaneously affect all others?,,2,0,,,,CC BY-SA 4.0 +2029,2,,2028,05-11-2018 14:55,,8,,"

It is certainly true that, within the mathematical description of qubits, operations on one qubit can require the whole description to be updated. This therefore affects the description of every qubit.

+ +

Those who take an 'epistemic' view of this mathematical description might say that we are just updating our knowledge about the other qubits, and that it doesn't affect the qubits themselves. Those who take an 'ontic' view, however, regard the wave function described by the mathematics of quantum mechanics as being a physical property of the qubits. So they would certainly conclude that the operation on one qubit instantly affected the others.

+ +

I think the ontic view is more prevalent these days, among those who have opinions on these things. Though most take the 'Shut up and calculate' option and don't think about them too much.

+ +

Another interesting issue is the fact that instantaneous effects cause problems for relativity. Different observers in different reference frames can disagree on the time ordering of events. So one observer might see one qubit being used to affect a second, whereas another observer might see the same events and conclude that the second qubit is affecting the first. Entanglement avoids direct confrontation with relativity by making sure that the effect cannot be used to send any information instantaneously. But nevertheless, the two don't play very well together. That's why we can be hesitant to state very strongly that entanglement allows instantaneous effects.

+ +

The process of teleportation is, I think, a good one to argue that entanglement does indeed allow qubits to instantly affect each other, as well as showing how it compromises with relativity. It is a process by which the state of a qubit is instantaneously sent from one qubit to another, using entanglement. But the state being sent also gets 'scrambled' during the process. This means that it is impossible for the receiving end to even confirm that the qubit has been sent, never mind see what its state is. However, the transmitter can send a message to the receiver with instructions on how to unscramble the qubit. Once this is done, the receiver can confirm that the teleportation did indeed send the state of the qubit. So there was indeed an instantaneous effect, but non-instantaneous effects (sending the unscrambling information) were also needed to reveal the effect and make it useful.

+",409,,,,,05-11-2018 14:55,,,,2,,,,CC BY-SA 4.0 +2030,1,2039,,05-11-2018 15:17,,23,4867,"

I have been trying to get a basic idea of what anyons are for the past couple of days. However, the online articles (including Wikipedia) seem unusually vague and impenetrable as far as explaining topological quantum computing and anyons goes.

+ +

The Wiki page on Topological quantum computer says:

+ +
+

A topological quantum computer is a theoretical quantum computer that + employs two-dimensional quasiparticles called anyons, whose world + lines pass around one another to form braids in a three-dimensional + spacetime (i.e., one temporal plus two spatial dimensions). These braids + form the logic gates that make up the computer. The advantage of a + quantum computer based on quantum braids over using trapped quantum + particles is that the former is much more stable. Small, cumulative + perturbations can cause quantum states to decohere and introduce + errors in the computation, but such small perturbations do not change + the braids' topological properties.

+
+ +

This sounded interesting. So, on seeing this definition I tried to look up what anyons are:

+ +
+

In physics, an anyon is a type of quasiparticle that occurs only in + two-dimensional systems, with properties much less restricted than + fermions and bosons. In general, the operation of exchanging two + identical particles may cause a global phase shift but cannot affect + observables.

+
+ +

Okay, I do have some idea about what quasiparticles are. For example, as an electron travels through a semiconductor, its motion is disturbed in a complex way by its interactions with all of the other electrons and nuclei; however, it approximately behaves like an electron with a different mass (effective mass) travelling unperturbed through free space. This ""electron"" with a different mass is called an ""electron quasiparticle"". So I tend to assume that a quasiparticle, in general, is an approximation for the complex particle or wave phenomenon that may occur in matter, which would be difficult to mathematically deal with otherwise.

+ +

However, I could not follow what they were saying after that. I do know that bosons are particles which follow the Bose-Einstein statistics and fermions follow the Fermi-Dirac statistics.

+ +

Questions:

+ +
    +
  • However, what do they mean by ""much less restricted than fermions and bosons""? Do ""anyons"" follow a different kind of statistical distribution than what bosons or fermions follow?

  • +
  • In the next line, they say that exchanging two identical particles may cause a global phase shift but cannot affect the observables. What is meant by global phase shift in this context? Moreover, which observables are they actually talking about here?

  • +
• How are these quasiparticles i.e. anyons actually relevant to quantum computing? I keep hearing vague things like ""The world-lines of anyons form braids/knots in 3-dimensions (2 spatial and 1 temporal). These knots help form stable forms of matter, which aren't easily susceptible to decoherence"". I think that this Ted-Ed video gives some idea, but it seems to deal with restricting electrons (rather than ""anyons"") to move on a certain closed path inside a material.

  • +
+ +

I would be glad if someone could help me to connect the dots and understand the meaning and significance of ""anyons"" at an intuitive level. I think a layman-level explanation would be more helpful for me, initially, rather than a full-blown mathematical explanation. However, I do know basic undergraduate level quantum mechanics, so you may use that in your explanation.

+",26,,26,,05-11-2018 19:06,5/15/2018 10:11,What exactly are anyons and how are they relevant to topological quantum computing?,,2,0,,,,CC BY-SA 4.0 +2033,2,,2028,05-11-2018 15:26,,9,,"

If Alice and Bob have an entangled pair of qubits and Alice locally measures her qubit, it does not affect the local state of Bob's qubit in any way. Mathematically, if Alice measures but does not look at the measurement outcome, the density matrix of Bob's qubit does not change. The sole fact of Alice's measurement does not affect Bob's qubit in any way. If Alice measures and knows the measurement outcome, then Alice has more information about Bob's qubit than Bob does, but this is a purely classical situation described by conditional probabilities.

+ +

So Alice's measurement can only instantly affect Alice's own information about Bob's qubit, and nothing more.

+ +

The above does not explain the ""spooky action at a distance""; we know that a satisfying explanation does not exist. Still, we can reason about entanglement and measurements while avoiding paradoxes and contradictions, and so the answer to the question in the title is:

+ +

No, it is not true.

+",2105,,2105,,05-11-2018 15:52,05-11-2018 15:52,,,,5,,,,CC BY-SA 4.0 +2034,2,,2030,05-12-2018 02:15,,11,,"

You are right, it does look like the Wikipedia page needs work, so I will have to update it. But for now I will answer all five questions:

+ +
+

1) What do they mean by ""much less restricted than fermions and bosons?

+
+ +

The exchange of two fermions or bosons is restricted by: +$|\psi_1\psi_2\rangle = \pm|\psi_2\psi_1\rangle$.
+The ""$+$"" corresponds to bosons and the ""$-$"" corresponds to fermions.

+ +

For anyons we have the much less restricted: $|\psi_1\psi_2\rangle = e^{i\theta}|\psi_2\psi_1\rangle$.
+Notice that when $\theta=0$ we have bosons, and when $\theta=\pi$ we have fermions (by Euler's formula).

+ +
+

2) Do ""anyons"" follow a different kind of statistical distribution than what bosons or fermions follow?

+
+ +

Anyons can obey statistics ranging continuously between Fermi-Dirac statistics and Bose-Einstein statistics, because $\theta$ can be $0$ (bosons), $\pi$ (fermions), or anything in between.

+ +
+

3) Exchanging two identical particles may cause a global phase shift but cannot affect the observables. What is meant by global phase shift in this context?

+
+ +

That line from Wikipedia needs to be improved. The ""global phase shift"" is the $e^{i\theta}$ in the above formula. So it is not specific to anyons, since there is a global phase change of $-1$ when we exchange fermions as well.

+ +

What the Wikipedia article should have said was that when you exchange two identical particles twice you still get a global phase shift, which is not true for bosons and fermions. Here the first and second arrows indicate the first and second times we exchange particles 1 and 2:

+ +

Bosons: $|\psi_1\psi_2\rangle \rightarrow |\psi_2\psi_1\rangle \rightarrow |\psi_1\psi_2\rangle$ (no global phase)
+Fermions: $|\psi_1\psi_2\rangle \rightarrow -|\psi_2\psi_1\rangle \rightarrow -(-|\psi_1\psi_2\rangle)=|\psi_1\psi_2\rangle$ (no global phase)
Anyons: $|\psi_1\psi_2\rangle \rightarrow e^{i\theta}|\psi_2\psi_1\rangle \rightarrow e^{i\theta}(e^{i\theta}|\psi_1\psi_2\rangle)=e^{i2\theta}|\psi_1\psi_2\rangle$ (global phase of $e^{i2\theta}$)

+ +
+

4) Moreover, which observables are they actually talking about here?

+
+ +

An observable is anything that can be observed in an experiment. For example, the position of the particle, $x$. When measuring the position of the particle, the expectation value of the measurement is given by $\langle \psi|\hat{x}|\psi\rangle $.

+ +

Notice that this is unaffected by a global phase, because we have:
+$| \psi\rangle=e^{i\theta}|\phi\rangle$
+$\langle \psi|=e^{-i\theta}\langle\phi|$
+$\langle\psi|\hat{x}|\psi\rangle = \langle\phi|\hat{x}|\phi\rangle $.

+ +

So two states $|\phi\rangle$ and $|\psi\rangle$, which differ by a phase of $e^{i\theta}$, as in this case, have the same observations in experiment.

+ +
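This invariance of expectation values under a global phase is easy to check numerically; here is a small numpy sketch of my own, using the Pauli-X operator as the example observable:

```python
import numpy as np

rng = np.random.default_rng(0)

# A random normalized state |phi> and its globally phase-shifted copy |psi>
phi = rng.normal(size=2) + 1j * rng.normal(size=2)
phi /= np.linalg.norm(phi)
psi = np.exp(1j * 0.7) * phi          # |psi> = e^{i theta} |phi>

# Any observable works; take the Pauli-X operator
Xop = np.array([[0, 1], [1, 0]])

# np.vdot conjugates its first argument, giving <state|X|state>
expect_phi = np.vdot(phi, Xop @ phi).real
expect_psi = np.vdot(psi, Xop @ psi).real
print(np.isclose(expect_phi, expect_psi))  # True
```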
+

5) How are these quasiparticles i.e. anyons actually relevant to quantum computing?

+
+ +

There are many proposals for building a quantum computer, for example:

+ +
    +
  • (i) NMR quantum computers make use of fermions (such as the spin of a proton).
  • +
  • (ii) Photonic quantum computers make use of bosons (photons are bosons)
  • +
  • (iii) Topological quantum computers are a proposed type of quantum computer which would make use of anyons.
  • +
+ +

An advantage of (iii) over (i) is that fidelities should be much greater. The advantage over (ii) is that it should be easier to get the qubits to interact. The disadvantage compared to both (i) and (ii) is that experiments involving anyons are comparatively new. NMR has been around since 1938 and lasers (photonics) have been around since 1960, but experiments with anyons began in the 1980s and are still far from the maturity of spin science or laser science, though that is not to say they will never reach it.

+ +
+

""I think a layman-level explanation would be more helpful for me, initially, rather than a full-blown mathematical explanation.""

+
+ +

A layman definition without mathematics is going to be very difficult because what distinguishes anyons from bosons and fermions is that the exchange of anyons introduces a factor of $e^{i\theta}$ to the wavefunction, which is a mathematical explanation. If I had to explain anyons to someone who knows what a wavefunction is but nothing else, I would say:

+ +

""When two particles are switched, the wavefunction of the overall system stays the same for bosons, picks up a negative sign for fermions, and can pick up any factor of the form $e^{i\theta}$ for anyons.""

+",2293,,2293,,05-12-2018 03:31,05-12-2018 03:31,,,,12,,,,CC BY-SA 4.0 +2035,1,,,05-12-2018 17:34,,4,71,"

CTs / ZEFOZs: Energy-level structures that include avoided crossings at accessible energies tend to be resilient to noise and therefore present high coherence times, at least in the case of spin qubits and magnetic noise: as the first-order effect of the magnetic field on the qubit energy vanishes, so, effectively, does most magnetic noise. Different people call these Atomic Clock Transitions (CTs or ACTs) or Zero First-Order-Zeeman (ZEFOZ) shift, but it's essentially the same phenomenon. This is experimentally expressed as long spin-spin $T_2$ relaxation times, even in the presence of relatively strong sources of noise.

+ +

However, in a quantum computing scenario, what we want is typically not a qubit surviving for long periods of inactivity (high $T_2$) but rather to obtain a high fidelity after a series of quantum gates, which can in general be rather complicated and involve entanglement with other qubits.

+ +

My question: Do CTs / ZEFOZs with their high relaxation times generally also translate into a high fidelity after a complicated series of quantum gates?

+ +

For context, the examples I have in mind are solid-state, mainly these two:

+ + + +

As well as this more refined variation, with electric field:

+ + +",1847,,2293,,05-12-2018 20:46,5/13/2018 5:23,Do avoided crossings / CTs /ZEFOZs optimize quantum fidelity in practice?,,1,2,,,,CC BY-SA 4.0 +3964,2,,3957,08-07-2018 10:49,,4,,"

Here is the best construction I've found. It uses 8 CNOTs.

+ +

+ +

I verified this circuit in Quirk using the channel-state duality and a known-good inverse.

+ +

The target is the middle qubit. None of the CNOTs go directly from top to bottom or bottom to top. You can switch which qubit is the target by simply switching which line the Hadamards are on.

+",119,,119,,08-07-2018 22:37,08-07-2018 22:37,,,,9,,,,CC BY-SA 4.0 +3965,1,,,08-07-2018 12:44,,10,2757,"

I found an algorithm that can compute the distance between two quantum states. It is based on a subroutine known as the swap test (a fidelity estimator, or inner product of two states; by the way, I don't understand what fidelity means).

+ +

My question is about the inner product. How can I calculate the inner product of two quantum registers which contain different numbers of qubits?

+ +

The description of the algorithm is found in this paper. Based on the 3rd step that appears in the image, I want to verify it by giving an example.

+ +

Let $|a| = 5$, $|b| = 5$, and $Z = 50$, with +$$|a\rangle = \frac{3}{5}|0\rangle + \frac{4}{5}|1\rangle$$ $$|b\rangle = \frac{4}{5}|0\rangle + \frac{3}{5}|1\rangle. +$$ +All we want is the fidelity of the two states $|\psi\rangle$ and $|\phi\rangle$ below; the distance between $|a\rangle$ and $|b\rangle$ is given as +$ {|a-b|}^2 = 2Z|\langle\phi|\psi\rangle|^2$, +so with +$$|\psi\rangle = \frac{3}{5\sqrt{2}}|00\rangle + \frac{4}{5\sqrt{2}}|01\rangle + \frac{4}{5\sqrt{2}}|10\rangle + \frac{3}{5\sqrt{2}}|11\rangle$$ +$$|\phi\rangle = \frac{5}{\sqrt{50}} (|0\rangle + |1\rangle), $$ +how do I compute +$$\langle\phi|\psi\rangle = \,??$$

+",4206,,55,,5/15/2019 8:35,5/15/2019 9:52,How can I calculate the inner product of two quantum registers of different sizes?,,2,1,,,,CC BY-SA 4.0 +3966,2,,3898,08-07-2018 14:01,,4,,"

For a review of the subject, see this paper: ""Non-locality and Communication Complexity"" +https://arxiv.org/abs/0907.3584

+",4294,,4294,,08-07-2018 14:55,08-07-2018 14:55,,,,0,,,,CC BY-SA 4.0 +3967,1,3970,,08-07-2018 14:09,,3,84,"

Has any work been done on quantum systems which use a combination of types of qunits (eg. using qubits & qutrits simultaneously)?

+ +

If work has been done, what kind of work has been done? (eg. in quantum information in general? in quantum computing? in physical implementations?)

+",2645,,2645,,08-07-2018 15:15,08-07-2018 15:38,Combining Different Qunits,,2,2,,,,CC BY-SA 4.0 +3968,2,,3967,08-07-2018 14:38,,3,,"

Yes. Just to give one example, the PPT criterion is necessary and sufficient to decide whether a state is separable for qubit-qubit and qubit-qutrit systems, but not beyond.

+",491,,,,,08-07-2018 14:38,,,,0,,,,CC BY-SA 4.0 +3969,2,,3965,08-07-2018 14:54,,7,,"

I guess you're looking at equations (130) and (131)? So, here, you have $|\psi\rangle=(|0\rangle|a\rangle+|1\rangle|b\rangle)/\sqrt{2}$ and $|\phi\rangle=|a| |0\rangle+|b| |1\rangle$. When it says to calculate $\langle\phi|\psi\rangle$, what it really means is +$$ +(\langle\phi|\otimes\mathbb{I})|\psi\rangle, +$$ +padding everything with identity matrices to make them all the same size. +Thus, the calculation becomes +$$ +\frac{1}{\sqrt{2Z}}\left(\begin{array}{cccc} |a| & 0 & |b| & 0 \\ 0 & |a| & 0 & |b| \end{array}\right)\cdot\left(\begin{array}{c} a_0 \\ a_1 \\ b_0 \\ b_1 \end{array}\right), +$$ +where $a_0$ and $a_1$ are the elements of your vector $|a\rangle$. If you work this through, you'll get +$$ +\frac{1}{\sqrt{2Z}}(|a| |a\rangle+|b| |b\rangle). +$$ +I have no idea where the negative sign has come from in equation (133).
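Plugging the question's concrete numbers into this padded calculation (a numpy sketch of my own, not from the paper) confirms the result $(|a|\,|a\rangle + |b|\,|b\rangle)/\sqrt{2Z}$:

```python
import numpy as np

# Unnormalized vectors from the question: |a| = |b| = 5, Z = 50
a = np.array([3.0, 4.0])
b = np.array([4.0, 3.0])
Z = np.dot(a, a) + np.dot(b, b)    # 50

# |psi> = (|0>|a> + |1>|b>)/sqrt(2), with |a>, |b> normalized
psi = np.concatenate([a / 5, b / 5]) / np.sqrt(2)

# (<phi| ⊗ I): pad <phi| = (|a|, |b|)/sqrt(Z) with a 2x2 identity
phi_padded = np.kron(np.array([5.0, 5.0]) / np.sqrt(Z), np.eye(2))

result = phi_padded @ psi          # = (|a| |a> + |b| |b>)/sqrt(2Z)
expected = (5 * (a / 5) + 5 * (b / 5)) / np.sqrt(2 * Z)
print(result, np.allclose(result, expected))  # [0.7 0.7] True
```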

+",1837,,1837,,8/13/2018 6:27,8/13/2018 6:27,,,,3,,,,CC BY-SA 4.0 +3970,2,,3967,08-07-2018 15:00,,4,,"

Quantum walks are a simple case of quantum dynamics that involves a qubit (named coin in this context) interacting with a high-dimensional qudit (named walker in this context).

+ +

Almost anything in quantum optics can be thought of as ""combining different qunits"" as well: a photon in a superposition of many spatial modes (high-dimensional qudit), together with its polarization degree of freedom (qubit) fits the bill. +You can replace ""spatial modes"" with something like ""orbital angular momentum modes"" or ""frequency modes"" or ""time modes"" and you have the same.

+ +

Many quantum algorithms also inherently deal with this scenario: Grover's algorithm and the phase estimation algorithm come to mind (of course, the high-dimensional qudit in those algorithms can be replaced by a number of qubits, but that depends on the context).

+ +

Countless other examples could be found, but it would not be very meaningful without specifying the context a bit more.

+",55,,55,,08-07-2018 15:38,08-07-2018 15:38,,,,2,,,,CC BY-SA 4.0 +3971,1,3979,,08-07-2018 20:39,,17,2500,"

Sligthly related to this question, but not the same.

+ +

Traditional computer science requires no physics knowledge for a computer scientist to be able to research and make progress in the field. Of course, you do need to know about the underlying physical layer when your research is related to that, but in many cases you can ignore it (e.g. when designing an algorithm). Even when architectural details are important (e.g. cache layout), oftentimes it is not necessary to know all the details about them, or how they're implemented at the physical level.

+ +

Has quantum computing reached this level of ""maturity""? Can you design a quantum algorithm, or do actual research in the field, as a computer scientist that doesn't know anything about quantum physics? In other words, can you ""learn"" quantum computing ignoring the physical side, and is it worth it (in terms of scientific career)?

+",4296,,26,,12/13/2018 20:02,11/20/2020 11:27,Is quantum computing mature enough for a computer scientist with no physics background?,,4,0,,,,CC BY-SA 4.0 +3972,2,,3971,08-07-2018 20:52,,4,,"

It has pretty much always been like that. You can study the book by Nielsen & Chuang without knowing about physics. There is the introduction by Mermin aimed at computer scientists. There are probably lots of other resources (I'm pretty sure e.g. that Aaronson's book -- based on a CS lecture -- is perfectly suited for people without a physics background.) Overall, the physics formalism needed to understand quantum information and computation is pretty low-key, as compared to other fields of (quantum) physics. (This doesn't mean though that studying the phenomena in quantum information and computation is low-key.)

+",491,,491,,08-07-2018 20:57,08-07-2018 20:57,,,,0,,,,CC BY-SA 4.0 +3973,2,,3971,08-07-2018 22:18,,4,,"

As far as I can tell from my own experience, I will say yes. One can indeed design algorithms without physics knowledge; for me it has so far mostly been maths concepts. I remember once watching a course about quantum computing by Scott Aaronson in which he said:

+ +
+

Quantum computing is really ""easy"" when you take the physics out of it.

+
+ +

However, if you are going to work on applications in physics or chemistry, it will always be useful to have a background in what you are going to work on.

+ +

The field is open to many backgrounds (Maths, Physics, Computer Science...). I think the main challenge is sometimes communicating between different backgrounds, but it is not impossible. Indeed, it is constructive and can be beneficial to collaborate. And one can always relate to one's preferred interpretation/concepts.

+ +

As for a career, it again depends on your point of view. I think there is much work to be done in this field, so do not worry about that. Do it if you feel you like it. Also, working in this field does not mean you have to restrict yourself: you will still work with classical algorithms, and you will need coding skills.

+ +

If you are interested in learning it from a computer scientist point of view, +there is this book that may be helpful: +https://www.amazon.com/Quantum-Computing-Computer-Scientists-Yanofsky/dp/0521879965

+ +

Good luck on your quantum trip !

+",4127,,,,,08-07-2018 22:18,,,,0,,,,CC BY-SA 4.0 +3974,1,3975,,08-07-2018 22:28,,10,305,"

Recently I've been wondering how high NISQ machines will be able to ""count"". What I mean by that is: given the most optimized increment circuit you can make, how many times can you physically apply that circuit to qubits in a secret initial state before there's a more than 50% chance that the output is the wrong value?

+ +

To that end, I need a good increment circuit that would actually run on a NISQ machine! E.g. this means respecting locality constraints, and costing the circuit based on how many 2-qubit operations are performed (since those are the noisiest). For simplicity, I will say that the gate set is ""any single qubit operation + local CNOTs on a grid"".

+ +

It seems clear to me that a NISQ machine should be able to apply a 3-qubit incrementer at least 8 times (so it wraps back to 0 and loses count), but I think wrapping a 4-qubit counter is much more challenging. Thus this question's focus on that size specifically.

+ +

A 4-qubit incrementer is a circuit which effects the state permutation $|k\rangle \rightarrow |k + 1\pmod{16}\rangle$. The value $k$ must be stored as a 2s complement binary integer in four qubits. If the value is under superposition, it must still be coherent after applying the incrementer (i.e. no entangling with other qubits except as temporary workspace). You may place the qubits wherever you want on the grid.
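For concreteness, the target operation is just a cyclic permutation on 16 basis states, which can be written down and sanity-checked directly (a numpy sketch of my own, not from the original post):

```python
import numpy as np

# Permutation matrix for |k> -> |k + 1 mod 16> on four qubits
N = 16
U = np.zeros((N, N))
for k in range(N):
    U[(k + 1) % N, k] = 1

# A valid quantum operation must be unitary
assert np.allclose(U.conj().T @ U, np.eye(N))

# Applying it 16 times wraps back to the identity (the counter "loses count")
U16 = np.linalg.matrix_power(U, 16)
print(np.allclose(U16, np.eye(N)))  # True
```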

+",119,,119,,08-09-2018 01:58,08-09-2018 01:58,Minimum number of CNOTs for a 4-qubit increment on a planar grid,,1,6,,,,CC BY-SA 4.0 +3975,2,,3974,08-07-2018 22:28,,4,,"

Here is the best circuit I've found. It uses 14 CNOTs.

+ +

Note that this circuit is not using a linear layout! It is placed on the grid like this:

+ +
0-A-1
+  |
+  3
+  |
+  2
+
+ +

Where 'A' is an ancilla initialized in the |0> state and '0','1','2','3' are the qubits making up the register (with '0' being the least significant bit).

+ +

+ +

I verified this circuit in Quirk using the channel-state duality and a known-good inverse.

+ +

If one had access to the sqrt-of-CNOT operation, the number of 2-qubit operations could be brought down to 13 by merging two CNOTs and three Ts in the bottom area into a controlled-S.

+ +

If CNOTs had an error rate of 0.5%, and all other sources of error were negligible, you could apply this circuit nearly ten times before reaching a 50% failure rate. Implying a plausible NISQ machine could ""almost count to ten"".

+",119,,119,,08-07-2018 22:48,08-07-2018 22:48,,,,0,,,,CC BY-SA 4.0 +3979,2,,3971,08-08-2018 01:26,,25,,"

Speaking as a computer scientist without any physics background making contributions to quantum computing: yes, computer scientists without any physics background can make contributions to quantum computing. Though I think it was always that way; it has nothing to do with the field being "mature".

+

If you understand the postulates of quantum mechanics (operations are unitary matrices, states are unit vectors, measurement is a projection), and know how to apply those in the context of a computation, you can create quantum algorithms. The fact that these concepts were originally derived from physics is historically interesting, but not really relevant when optimizing a quantum circuit. As a concrete example: quantum physics is very heavy on calculus, but quantum computation isn't.

+

Quantum physics does become relevant if you are trying to design algorithms for simulating quantum systems. And some of the concepts you would learn in a quantum physics course should also appear in a quantum computation course. But overall I agree with Scott Aaronson:

+
+

[My] way to teach quantum mechanics [...] starts directly from the conceptual core -- namely, a certain generalization of probability theory to allow minus signs. Once you know what the theory is actually about, you can then sprinkle in physics to taste [...]

+

[quantum mechanics is] not a physical theory in the same sense as electromagnetism or general relativity. [...] Basically, quantum mechanics is the operating system that other physical theories run on as application software [...]

+

[...] [quantum mechanics is] about information and probabilities and observables, and how they relate to each other.

+
+",119,,-1,,6/18/2020 8:31,08-08-2018 01:26,,,,0,,,,CC BY-SA 4.0 +3980,1,,,08-08-2018 01:55,,10,1165,"

I was looking into applications of Quantum Computing for machine learning and encountered the following pre-print from 2003. Quantum Convolution and Correlation Algorithms are Physically Impossible. The article doesn't appear to have been published in any journals, but it has been cited a few dozen times.

+ +

The article author makes the case that it is impossible to compute discrete convolution over quantum states. Intuitively this seems incorrect to me, since I know we can perform quantum matrix multiplication, and I know that discrete convolution can be framed simply as multiplication with a Toeplitz (or circulant) matrix.

+ +
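
As a concrete illustration of that framing (my own numpy sketch, not from the paper), here is a circulant matrix acting as circular convolution; it also hints at where the quantum difficulty lies, since the matrix is generally not unitary and so cannot be applied directly as a gate:

```python
import numpy as np

# Circulant matrix whose action is circular convolution with kernel k:
# C[i][j] = k[(i - j) mod n]
def circulant(k):
    n = len(k)
    return np.array([np.roll(k, i) for i in range(n)]).T

k = np.array([1.0, 2.0, 0.0, 0.0])
x = np.array([3.0, 1.0, 4.0, 1.0])
C = circulant(k)

# Direct circular convolution, for comparison
direct = np.array([sum(k[j] * x[(i - j) % 4] for j in range(4)) for i in range(4)])
assert np.allclose(C @ x, direct)

# The catch: C is generally not unitary, so it would have to be embedded
# in a larger unitary and post-selected rather than applied as a gate.
print(np.allclose(C.conj().T @ C, np.eye(4)))  # -> False for this kernel
```

+ +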

The crux of his argument seems to be that there is no realizable composition of unitary operators for the elementwise (Hadamard) product of two vectors.

+ +

Where is my disconnect? Is there any reason we cannot in general construct a Toeplitz matrix for discrete convolution in a quantum computer?

+ +

Or is the article simply incorrect? I have worked through the contradiction that the author presents in his proof of Lemma 14, and it seems to make sense to me.

+",4298,,,,,8/15/2018 0:45,Quantum Algorithms for Convolution,,2,2,,,,CC BY-SA 4.0 +3981,2,,3980,08-08-2018 07:09,,3,,"

I am highly suspicious of the result. If you look at Theorem 16, it claims that there is no operation that achieves the map +$$ +\sum_{ij}\alpha_i\beta_j|ij\rangle\mapsto \sum_i\alpha_i\beta_i|i\rangle +$$ +up to normalisation. However, consider the measurement operator +$$ +P=\sum_{i}|i\rangle\langle ii|. +$$ +This clearly implements the desired map (for that particular measurement outcome). Moreover, its implementation is quite straightforward. There is a unitary (effectively, a generalised controlled-not) that can map +$$ +|ii\rangle\mapsto|i0\rangle, +$$ +so that you then measure the second spin and post-select on getting the 0 result. This would seem to invalidate the proof of the paper.
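
+ +

To make the construction concrete, here is a numpy sketch (my own, specialised to single-qubit registers, i.e. $i\in\{0,1\}$) of the CNOT-then-post-select recipe:

```python
import numpy as np

# |psi> = sum_ij alpha_i beta_j |ij>, with arbitrary single-qubit registers
alpha = np.array([0.6, 0.8])
beta = np.array([1.0, 2.0]) / np.sqrt(5.0)
psi = np.kron(alpha, beta)

# CNOT with the first qubit as control maps |ii> -> |i0>
CNOT = np.array([[1.0, 0, 0, 0],
                 [0, 1.0, 0, 0],
                 [0, 0, 0, 1.0],
                 [0, 0, 1.0, 0]])
after = CNOT @ psi

# Post-select the second qubit on |0>: keep the |00> and |10> amplitudes
kept = np.array([after[0], after[2]])
assert np.allclose(kept, alpha * beta)  # = sum_i alpha_i beta_i |i>, unnormalised
```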

+",1837,,,,,08-08-2018 07:09,,,,7,,,,CC BY-SA 4.0 +3982,1,3983,,08-08-2018 08:06,,7,1467,"

In a discussion with Jay Gambetta on the QISKit Slack channel, Jay told me that ""T2 is the time that $\vert 0 \rangle + \vert 1 \rangle$ goes to $\vert 0 \rangle \langle 0 \vert + \vert 1 \rangle \langle 1 \vert$"".

+ +

My question is: what is the difference between those two states?

+",1386,,,,,08-09-2018 01:24,What is the difference between $\vert 0 \rangle + \vert 1 \rangle$ and $\vert 0 \rangle \langle 0 \vert + \vert 1 \rangle \langle 1 \vert$?,,3,2,,,,CC BY-SA 4.0 +3983,2,,3982,08-08-2018 08:18,,8,,"

In short: ""coherence""! It's the crucial difference between quantum and classical. $\rho=|0\rangle\langle 0|+|1\rangle\langle 1|$ is just a statistical mixture, and behaves like a classical coin - any measurement that you perform on it gives a 50:50 split between the two possible outcomes.

+ +

By contrast, $|+\rangle=(|0\rangle+|1\rangle)/\sqrt{2}$ is a very different beast. When you look at them both as density matrices, you can see the difference: +$$ +\rho=\frac{1}{2}\left(\begin{array}{cc} 1 & 0 \\ 0 & 1 \end{array}\right)\qquad |+\rangle\langle +|=\frac{1}{2}\left(\begin{array}{cc} 1 & 1 \\ 1 & 1 \end{array}\right) +$$ +Sure, if you measure $|+\rangle$ in the Z-basis, you get 50:50 outcomes, just like you did with $\rho$, but measurement in other bases gives different outcomes. Most importantly, if you measure in the $X$ basis, you get a definite measurement result, a 100:0 split.

+ +

Put another way, if I perform a Hadamard gate on the two states: +$$ +H\rho H=\rho\qquad H|+\rangle\langle +|H=|0\rangle\langle 0|. +$$

+ +
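
These two identities are easy to verify numerically; a minimal numpy check (my own, not part of the original answer):

```python
import numpy as np

H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
rho = np.eye(2) / 2                   # (|0><0| + |1><1|)/2, the mixture
plus = np.array([[1.0], [1.0]]) / np.sqrt(2)
plus_proj = plus @ plus.T             # |+><+|

assert np.allclose(H @ rho @ H, rho)                             # mixture unchanged
assert np.allclose(H @ plus_proj @ H, [[1.0, 0.0], [0.0, 0.0]])  # |+><+| -> |0><0|
```

+ +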

One of the crucial ways that we use the $|+\rangle$ state is in the first step of quantum algorithms: you prepare one register in $|+\rangle^{\otimes n}$. If, instead, you prepared it in $\rho^{\otimes n}$, this would be equivalent to classical sampling of the function evaluation in the next step. There would be no phase kick-back, and no quantum speed-up.

+ +
+ +

Exercise for the Reader:

+ +

Imagine a qubit starts in the state $\rho_0=|+\rangle\langle +|$ and experiences the noise map +$$ +\rho_{n+1}=\mathcal{E}(\rho_n)=(1-p)\rho_n+p Z\rho_n Z +$$ +where $0<p<1$ quantifies the probability of a phase error in a particular time step. Calculate $\rho_n$. Show that:

+ +
    +
  • $\rho_n-\rho_{n+1}\neq 0$
  • +
  • the fixed point of the map is $(|0\rangle\langle 0|+|1\rangle\langle 1|)/2$
  • +
  • as $n\rightarrow\infty$, $\rho_n$ tends to this fixed point.
  • +
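
+ +

(For readers who would like a numerical check before doing the algebra, here is a small numpy iteration of the map, with the arbitrary choice $p=0.1$; the off-diagonal terms decay as $(1-2p)^n$, which is the whole story:)

```python
import numpy as np

Z = np.diag([1.0, -1.0])
p = 0.1  # phase-error probability per step; any 0 < p < 1 behaves the same

rho = np.array([[0.5, 0.5], [0.5, 0.5]])  # rho_0 = |+><+|
for _ in range(200):
    rho = (1 - p) * rho + p * (Z @ rho @ Z)

# Off-diagonals decay as (1 - 2p)^n, so rho tends to the maximally mixed state
assert np.allclose(rho, np.eye(2) / 2)
print(rho)
```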
+",1837,,1837,,08-08-2018 08:44,08-08-2018 08:44,,,,0,,,,CC BY-SA 4.0 +3984,2,,3982,08-08-2018 09:10,,5,,"

There are multiple ways to mathematically express the state of a quantum system. One is to write it as a linear combination of basis states, as either a vector or a matrix, as you have here. This is useful in many cases, but it also seems to be confusing to newcomers, since things like $\vert 0 \rangle + \vert 1 \rangle$, $\vert 0 \rangle \langle 0 \vert + \vert 1 \rangle \langle 1 \vert$ and other states all just look like a $50/50$ mix of $\vert 0 \rangle$ and $\vert 1 \rangle$.

+ +

So let's look at another way of representing states: using expectation values for a set of observables. For a single qubit we can use the Pauli matrices:

+ +

$$\rho = \frac{1}{2} \left( \mathbb{1} + \sum_{\alpha \in \{x,y,z\}} \, \langle \sigma_\alpha \rangle \,\,\sigma_\alpha \right)$$

+ +

Here $\langle \sigma_z \rangle$ is a quantity that takes the value $+1$ if a $|0\rangle$/$|1\rangle$ measurement is certain to give the outcome $|0\rangle$, and takes the value $-1$ if the measurement would certainly give the outcome $|1\rangle$. Otherwise we'll find $1 > \langle \sigma_z \rangle > -1$, with the value depending on the probabilities. For example, the value is $\langle \sigma_z \rangle = 0$ for a $50/50$ mix.

+ +

The quantity $\langle \sigma_x \rangle$ is the same but for a $|+\rangle$/$|-\rangle$ measurement (this is equivalent to doing a Hadamard immediately before a $|0\rangle$/$|1\rangle$ measurement). Similarly for $\langle \sigma_y \rangle$, but no-one ever cares much about poor $\sigma_y$.

+ +

With this method, the state $\vert 0 \rangle + \vert 1 \rangle$ (once normalized) is

+ +

$$ \rho = \frac{1}{2} \left( \mathbb{1} + \sigma_x \right) $$

+ +

This has $\langle \sigma_z \rangle = \langle \sigma_y \rangle =0$. So for these two types of measurement, we'd get the outcomes with $50/50$ probabilities. But $\langle \sigma_x \rangle =1$, showing that the outcome for this type of measurement is certain.

+ +

For $\vert 0 \rangle \langle 0 \vert + \vert 1 \rangle \langle 1 \vert$ (once normalized) we get

+ +

$$ \rho = \frac{1}{2} \mathbb{1} $$

+ +

$\langle \sigma_\alpha \rangle = 0$ for $x$, $y$ and $z$. All three of these measurements give random results. In fact, so will any kind of measurement.

+ +
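
These expectation values can be computed directly as $\langle \sigma_\alpha \rangle = \mathrm{Tr}(\rho \, \sigma_\alpha)$; a small numpy sketch (my own) for the two states above:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# <sigma_alpha> = Tr(rho sigma_alpha)
def bloch(rho):
    return [np.trace(rho @ s).real for s in (X, Y, Z)]

plus = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)  # (|0>+|1>)/sqrt(2) as a density matrix
mixed = np.eye(2, dtype=complex) / 2                      # (|0><0| + |1><1|)/2

print(bloch(plus))   # [1.0, 0.0, 0.0]: the X measurement is certain
print(bloch(mixed))  # [0.0, 0.0, 0.0]: every measurement is random
```

+ +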

This way of representing states is explored in IBM's Hello Quantum project, which might be of some additional help (note: I also helped make this).

+",409,,,,,08-08-2018 09:10,,,,0,,,,CC BY-SA 4.0 +3985,1,4001,,08-08-2018 11:06,,13,975,"

A metric called the ""quantum volume"" has been proposed to somehow compare the utility of different quantum computing hardware. Roughly speaking, it measures their worth by the square of the maximum depth of quantum computations it permits, but limits its value to the square of the number of qubits involved. This limit is justified by wanting to forestall ""gaming"" of the system by optimizing towards few qubits. One reference is https://arxiv.org/abs/1710.01022.

+ +

I am concerned that this measure, as good as it may be for noisy near-term quantum computing devices, hides the actual quality advances for more advanced quantum computers (those with high quantum gate fidelity). The question is: Is this concern justified?

+ +

The argument behind my concern is the assumption that potential killer applications for quantum computers, for example quantum chemical calculations, will require computations with a gate depth much larger than the (potentially modest) number of qubits required. In this case, the ""quantum volume"" would be limited to the square of the number of qubits, regardless of whether one quantum computer (with particularly high fidelity) permits an essentially unlimited depth or whether it only allows the bare minimum gate depth to achieve the limitation of the ""quantum volume"" to the square of the number of qubits. One aspect of my question is: Is this argument correct?

+",,user1039,491,,08-08-2018 17:19,08-11-2018 06:52,"Is the ""Quantum Volume"" a fair metric for future, elaborate, high value quantum computations?",,2,1,,,,CC BY-SA 4.0 +3986,1,,,08-08-2018 14:10,,5,129,"

I have been reading about a family of quantum error correction codes called Quantum Turbo Codes, which are the quantum analog of the well-known classical Turbo codes. These codes were introduced in quantum serial turbo codes and the entanglement-assisted version of those was presented in entanglement-assisted quantum turbo codes.

+ +

In such papers, the decoding algorithm presented is a modification of the classical belief propagation algorithms that are used to obtain the most probable transmitted codeword based on the channel information and the syndromes read from the received information. The difference in the quantum paradigm is that the belief propagation algorithm here is used to detect the most probable error that happened in the channel, so that the recovery operation can be applied to the received quantum states. This modified algorithm is purely classical, and so it would be executed on a classical computer before feeding the decoded information back to the quantum computer.

+ +

The fact that it is a classical algorithm presents some problems, such as the fact that if the belief propagation takes more time than the decoherence time of the qubits, then the recovery would be unsuccessful, as more errors would have happened before the correction is applied.

+ +

That's why I wonder if there are quantum versions of the belief propagation algorithms that use the parallel nature of quantum computers to obtain substantial speedups in the decoding process. I am seeking references on this topic, or to know whether there are research groups working on this kind of problem.

+",2371,,,,,08-08-2018 14:10,Quantum Belief Propagation decoding,,0,2,,,,CC BY-SA 4.0 +3987,2,,3932,08-09-2018 00:04,,4,,"

(CW to avoid reps from self-answer)

+ +

There might be an interactive way for two parties to narrow in on the value of $t$, following up on @DaftWullie's answer and @Steven Sagona's comments. My formalism is poor, but I hope the idea gets through...

+ +

For example, call the two parties Alice and Bob. The parties have to cooperate, and behave honestly according to the protocol.

+ +

Alice knows how to prepare two states, $\vert A_0 \rangle$ and $\vert A_1 \rangle$. Here, $\vert A_0\rangle$ is the uniform superposition over all Rubik's cube combinations, and $\vert A_1\rangle$ is some other monkey state with the same number of qubits (such as the state corresponding to a solved Rubik's cube, or a uniform superposition over some large subgroup of $G$). Bob knows how to apply a matrix $M$ to a quantum state, where $M$ corresponds to single step of all of the Singmaster moves (with ancillas where appropriate.)

+ +

Alice and Bob want to show that the mixing time $t$ of the Rubik's cube group under Singmaster moves is at most $r$. Alice and Bob repeat the following $s$ times.

+ +
    +
  1. Alice flips a coin $i\in\{0,1\}$, and provides $\vert A_i \rangle$ to Bob
  2. +
  3. Bob applies $M$ to $\vert A_i \rangle$ $r$ times, measuring the projector after each application.
  4. +
  5. If the projector is $1$ for each of the $r$ iterations, then Bob says that $i=0$. If the projector is not $1$ for at least one of the $r$ iterations, then Bob says that Alice's $i=1$.
  6. +
+ +

If $i=0$, then each of Bob's $r$ iterations in step 2 does not change $\vert A_0\rangle$ - because by definition $\vert A_0 \rangle$ is an eigenstate of Bob's matrix, and Bob's matrix just permutes the states among themselves. If $i=1$, then the monkey state $\vert A_1 \rangle$ is not an eigenstate of Bob's projector, and the chance that a $1$ will not be measured grows quickly with $r$.

+ +

Thus, if Bob has accurately predicted $i$ for $s$ iterations, the probability of success grows exponentially with $s$, and Bob's $r$ is large enough to distinguish a valid Rubik's cube state from a monkey state.

+ +

I don't know how far apart $\vert A_1\rangle$ has to be from $\vert A_0\rangle$. I also don't know if interaction can be removed.

+",2927,,,,,08-09-2018 00:04,,,,0,,,08-09-2018 00:04,CC BY-SA 4.0 +3988,2,,3982,08-09-2018 01:24,,1,,"

This question is (in my opinion) the most important question to ask when trying to understand the mathematics of ""quantum superposition."" Quantum superposition is the essence of how quantum computations are made.

+ +

If I have a coin, and I flip it 50% of the times I'll get heads and 50% of the time I can get tails:

+ +

P(Heads) = 50%

+ +

P(Tails) = 50%

+ +

But if I make a quantum coin and write it in our fancy ket notation:

+ +

$|\psi\rangle = \frac{1}{\sqrt{2}}(|H\rangle + |T\rangle)$

+ +

I can see that this fancy notation gives me the same results as my coin flip!

+ +

P(H) = $| \langle H |\psi\rangle |^2 = \frac{1}{2}$

+ +

P(T) = $| \langle T |\psi\rangle |^2 = \frac{1}{2}$

+ +

So what's even the point? We say there's something special about quantum events, but the math is just the same as flipping coins?

+ +

What we need to do is investigate a little deeper to see how a coin is different from a quantum coin:

+ +

In the case where we simply check if our quantum coin is heads-or-tails, we don't see how it's different from a normal coin. Instead we're going to do a different procedure (with a silly analogy for intuition): Without checking if our coin is heads-or-tails, we insert our quantum coin through a special slot machine. This special slot machine (meant for cheaters) has a trick: if we insert the coin in one orientation (heads-side pointing to the left) it gives luckier odds than when it's inserted in the other orientation (heads-side pointing to the right).

+ +

This means that if we flip a coin and (without looking) insert it into the machine, our odds look like this:

+ +

$$ \text{P(win)} = \frac{1}{2}P(\text{win|lucky-odds}) + \frac{1}{2}P(\text{win|unlucky-odds}) $$

+ +

Half the time we get the lucky odds and half the time we get unlucky odds. (And everyone who plays this slot that doesn't know the trick will get this average between the two odds!)

+ +

But what about the quantum coin? The quantum coin will not give the result computed above. Let's work out the mathematical shapes of quantum mechanics, and define winning the slot machine as a quantum mechanical operator:

+ +

$P(\text{win|lucky-odds}) = |\langle W|H \rangle|^2$ and $P(\text{win|unlucky-odds}) = |\langle W|T \rangle|^2$

+ +

But now if I insert the heads-to-the-left orientation into the slot machine, I get the probability of winning with the lucky odds (same as before), and likewise the heads-to-the-right orientation gives the unlucky odds.

+ +

The difference now is that when I apply my fancy ket state from before, $| \psi \rangle = \frac{1}{\sqrt{2}}(|H\rangle + |T\rangle)$, I am working with a quantum state, so now to find the probabilities I have to square everything:

+ +

\begin{align} +P(Win) &= |\langle W|\psi\rangle|^2 \\ +&= \frac{1}{2}|\langle W | (|H\rangle+|T\rangle)|^2 \\ +&= \frac{1}{2}|\langle W|H\rangle + \langle W|T\rangle|^2 \\ +&= \frac{1}{2}|\langle W|H\rangle|^2 + \frac{1}{2}|\langle W|T\rangle|^2 \\ +& + \frac{1}{2}\left(\langle T|W\rangle \langle W|H\rangle + \langle H|W\rangle \langle W|T\rangle\right) +\end{align}

+ +

So now putting the ""normal coin"" together with our ""quantum coin"":

+ +

\begin{align} +P_{normal}(Win) &= \frac{1}{2}P(\text{win|lucky-odds}) + \frac{1}{2}P(\text{win|unlucky-odds}) \\ +P_{quantum}(Win) &= \frac{1}{2}P(\text{win|lucky-odds}) + \frac{1}{2}P(\text{win|unlucky-odds}) \\ +& + \frac{1}{2}\left(\langle T|W\rangle \langle W|H\rangle + \langle H|W\rangle \langle W|T\rangle\right) +\end{align}

+ +

We see that we have extra terms that are in the quantum case! These ""interference terms"" are the terms that are fundamental to what a quantum superposition is!

+ +

These ""interference terms"" change depending on the sign of the quantum superposition. So consider the case when $|\psi\rangle = |H\rangle - |T\rangle $ instead of $ |H\rangle + |T\rangle $ :

+ +

\begin{align} +P_{normal}(Win) &= \frac{1}{2}P(\text{win|lucky-odds}) + \frac{1}{2}P(\text{win|unlucky-odds}) \\ +P_{quantum}(Win) &= \frac{1}{2}P(\text{win|lucky-odds}) + \frac{1}{2}P(\text{win|unlucky-odds}) \\ +& - \frac{1}{2}\left(\langle T|W\rangle \langle W|H\rangle + \langle H|W\rangle \langle W|T\rangle\right) +\end{align}

+ +

The sign actually carries through, and this affects the probabilities to win our slot machine. These weird interference terms are the essence of quantum mechanics, and while the notation of bras and kets are convenient, it's often easy to get lost in the mathematical shapes and not realize the essence or intuition of what's going on!

+ +
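
Here is a small numpy sketch (my own, using an arbitrarily chosen ""win"" state $|W\rangle$) that reproduces both probabilities and shows the sign of the interference term flipping:

```python
import numpy as np

H_ = np.array([1.0, 0.0])                  # |H> (heads)
T_ = np.array([0.0, 1.0])                  # |T> (tails)
W = np.array([np.cos(0.3), np.sin(0.3)])   # hypothetical win state |W>

p_lucky = abs(W @ H_) ** 2
p_unlucky = abs(W @ T_) ** 2
p_classical = 0.5 * p_lucky + 0.5 * p_unlucky

psi_plus = (H_ + T_) / np.sqrt(2)
psi_minus = (H_ - T_) / np.sqrt(2)
p_plus = abs(W @ psi_plus) ** 2
p_minus = abs(W @ psi_minus) ** 2

# Real amplitudes, so the cross term is just <W|T><W|H>
interference = (W @ T_) * (W @ H_)
assert np.isclose(p_plus, p_classical + interference)
assert np.isclose(p_minus, p_classical - interference)
print(p_classical, p_plus, p_minus)
```

+ +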

So finally, to answer your question, what is the difference between $ |H\rangle + |T\rangle $ and $ |H\rangle \langle H | + |T\rangle \langle T| $? The difference is that $ |H\rangle + |T\rangle $ is a quantum coin that has these extra terms shown above. The state $ |H\rangle \langle H | + |T\rangle \langle T| $ is a normal coin without any properties of quantum superposition. It has the probabilities of $P_{normal}$.

+ +

In normal unitary quantum mechanics typically taught in undergraduate classes, it's actually not possible to construct a state that acts like a normal coin without quantum superposition! To get this ""normal coin"" you actually need to add extra rules to quantum mechanics (called working in the ""density matrix"" framework).

+",2660,,,,,08-09-2018 01:24,,,,0,,,,CC BY-SA 4.0 +3989,1,,,08-09-2018 09:34,,6,141,"

As I understand so far, in some algorithms such as Simon's algorithm, swap-test algorithm or quantum k-means algorithm, we repetitively perform a measurement yielding a classical result. Consequently, this pushes us to run the whole algorithm again and again (starting from initialization of the system).

+ +

My question is: +do we lose the complexity advantage as the number of repetitions of the algorithm increases?

+",4206,,55,,08-09-2018 12:09,08-09-2018 12:09,Does the need of many quantum algorithms to be repeated several times impair the efficiency gains?,,1,0,,,,CC BY-SA 4.0 +3990,2,,3989,08-09-2018 09:44,,6,,"

You probably want to look at old posts about Simon's algorithm, such as the rather complete explanation I gave here, or talking more specifically about the number of times the algorithm has to be repeated.

+ +

Yes, you have to repeat the algorithm several times to get different pieces of classical data, which you then process classically to get your final answer. But this is taken into account in the complexity. Indeed, that is often one of the most significant bits of the analysis: what's the average run-time to get the answer? It is this that you compare to the classical run-time.

+",1837,,,,,08-09-2018 09:44,,,,0,,,,CC BY-SA 4.0 +3992,2,,3985,08-09-2018 16:53,,7,,"

As a start, you might want to look at https://arxiv.org/abs/1605.03590, which lays out conservative (i.e., high) qubit and gate requirements for a meaningful quantum chemistry calculation under some pretty reasonable assumptions. The estimates there are on the order of $10^{15}$ total logical gates (not gate depth) over roughly 100 logical qubits, which means that the gate depth must be on the order of $10^{13}$ or higher (I'm looking at the nested counts).

+ +

So at the logical level, you're correct: you don't need $10^{13}$ qubits and $10^{13}$ gate depth to run nitrogenase. The two metrics, qubit count and gate depth, are not really equivalent: to run a real problem, I need on the order of a hundred qubits, but ten quadrillion gate depth.

+ +

That's not the whole picture, though. It's on the order of a hundred logical qubits, and logical gate depth of 10^13. Quantum error correction is essentially all about trading physical qubit count to get better logical gate depth. As you can see in table II in the paper, the logical-to-physical ratio ranges from 17,000/1 to 300/1 as the physical qubits get better (i.e., as the physical gate depth increases). Again from table II, a physical gate depth of $10^3$ leads to needing $10^9$ physical qubits, while a physical gate depth of $10^9$ only requires $10^6$ physical qubits.

+ +

The ""quantum volume"" measurement still doesn't seem quite right to me at this scale, though. I think a measurement more on the order of the product of the physical gate depth and the square of the physical qubit count is more accurate; for the three cases in table II, which represent equally powerful (in some sense) quantum computers, this value is roughly constant across the three columns. It also matches the rule of thumb that the number of qubits in a distance $d$ QEC code scales as $d^2$.

+ +
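
A quick check of that rule of thumb against the two end columns quoted above (my own arithmetic, in Python):

```python
# (physical gate depth, physical qubit count) for the two machines quoted above
cases = [(10**3, 10**9), (10**9, 10**6)]

for depth, qubits in cases:
    assert depth * qubits**2 == 10**21  # the product is the same for both
```

+ +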

The one thing this leaves out is that the computer with $10^9$ physical gate depth will run your chemistry simulation much faster than the computer with $10^3$ physical gate depth because the computational and wall-clock overhead of QEC will be much lower. You could come up with a more complicated formula to take this into account, if you like.

+",4265,,1978,,08-09-2018 17:24,08-09-2018 17:24,,,,2,,,,CC BY-SA 4.0 +3993,2,,3965,08-09-2018 21:49,,5,,"

Actually, there should be a minus. +There is a mistake in the paper. +Wittek uses a minus in his (expensive) book.

+ +

Indeed, say: +$$ |\psi\rangle = \frac{1}{\sqrt{2}} (|0,a\rangle + |1,b\rangle) $$ +$$ |\phi\rangle = \frac{1}{\sqrt{Z}} (|a||0\rangle - |b||1\rangle) $$

+ +

Then : +$$ \langle \phi |\psi\rangle = \frac{1}{\sqrt{2Z}} (|a|\langle 0| - |b|\langle 1|) (|0,a\rangle + |1,b\rangle) $$ +$$ = \frac{1}{\sqrt{2Z}}( |a|\langle 0|0\rangle|a\rangle - |b|\langle 1|0 \rangle|a\rangle + |a|\langle 0|1 \rangle|b\rangle - |b|\langle 1| 1 \rangle |b\rangle )$$

+ +

$$ = \frac{1}{\sqrt{2Z}} (|a| |a\rangle - 0 + 0 - |b| |b\rangle) = \frac{1}{\sqrt{2Z}} (|a| |a\rangle - |b| |b\rangle) $$

+ +
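
If $|a\rangle$ and $|b\rangle$ are the normalised versions of classical vectors $a$ and $b$ (as in the distance-estimation setting this comes from), this partial inner product is just $(a-b)/\sqrt{2Z}$, whose squared norm encodes the Euclidean distance. A numpy check (my own, with example vectors):

```python
import numpy as np

a = np.array([1.0, 2.0, 2.0])  # classical vector a, with |a| = 3
b = np.array([2.0, 2.0, 1.0])  # classical vector b, with |b| = 3
na, nb = np.linalg.norm(a), np.linalg.norm(b)
Z_ = na**2 + nb**2

# |psi> = (|0>|a> + |1>|b>)/sqrt(2), with |a>, |b> the normalised vectors
psi = (np.kron([1.0, 0.0], a / na) + np.kron([0.0, 1.0], b / nb)) / np.sqrt(2)

# <phi| = (|a|<0| - |b|<1|)/sqrt(Z), acting on the ancilla qubit only
phi_dag = np.array([na, -nb]) / np.sqrt(Z_)

# Partial inner product <phi|psi> leaves (a - b)/sqrt(2Z) on the data register
overlap = np.kron(phi_dag, np.eye(3)) @ psi
assert np.allclose(overlap, (a - b) / np.sqrt(2 * Z_))
print(np.linalg.norm(overlap) ** 2 * 2 * Z_)  # -> |a - b|^2 = 2.0
```

+ +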

Now for the part of the question where you ask how to swap quantum registers of different numbers of qubits: the answer is that you don't really do that. You actually swap the ancilla qubit of $|\psi\rangle $ with $ |\phi\rangle $. This is not stated in the reference, but it is in the original reference on which it is based.

+",4127,,26,,5/15/2019 9:52,5/15/2019 9:52,,,,2,,,,CC BY-SA 4.0 +3994,1,3998,,08-10-2018 02:24,,6,276,"

Unlike Google's Bristlecone or IBM's qubit-based computers, do simulators like Q# or Alibaba's really use quantum mechanics anywhere in their physical chips? Or are they just defining properties using a classical computer and trying to achieve quantum simulations?

+",4259,,26,,10/22/2018 8:25,10/22/2018 8:25,Are quantum simulators like Microsoft Q# actually using quantum mechanics in their chips?,,2,0,,,,CC BY-SA 4.0 +3996,2,,3955,08-10-2018 06:18,,6,,"

The difficulty with the question is the word intuitive. Intuition basically reflects our understanding of the world around us, which is described by classical physics. Quantum mechanics is exactly the regime where our intuition breaks down because it functions very differently from the world of our everyday experience. As Terry Pratchett said:

+ +
+

It’s very hard to talk quantum using a language originally designed to + tell other monkeys where the ripe fruit is.

+
+ +

It is exactly that difference that we're using to get the computational speed-up.

+ +

There is a sequence of standard algorithms that most quantum computing texts progress through: Deutsch's algorithm, Deutsch-Jozsa, Simon's/Bernstein-Vazirani. These are chosen because they are the easiest to understand. They all have broadly the same structure, but increasing complexity, with a corresponding gain in computational speed (with Simon's giving exponential speed-up). You will not understand them intuitively. You have to do the maths. I think the closest that you will come is through the following explanation of Deutsch's algorithm:

+ +

Imagine a one-bit function $f(x)$. Either $f(0)=f(1)$, or $f(0)\neq f(1)$. Your task is to determine which. Obviously, in the classical world, you have to evaluate $f(0)$ and $f(1)$; two function calls. In the quantum world, crudely speaking, you can look at both values simultaneously, and perform a one-qubit measurement (which will give you one bit of information), but you can choose that measurement so that the one bit is a global property of the function, in this case $f(0)\oplus f(1)$. The same is broadly true of the other algorithms I mentioned: there is information, due to the structure of the problem, hidden in the collective properties of the function evaluations, and it is those collective properties that you are trying to determine.
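
+ +

This one-query trick is small enough to simulate directly; a numpy sketch of Deutsch's algorithm in the phase-oracle picture (my own illustration):

```python
import numpy as np

H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)

# One oracle call decides whether f(0) == f(1): the phase oracle
# |x> -> (-1)^f(x) |x> is sandwiched between two Hadamards.
def deutsch(f):
    U = np.diag([(-1.0) ** f(0), (-1.0) ** f(1)])
    state = H @ (U @ (H @ np.array([1.0, 0.0])))
    # Amplitude of |0> is +-1 if f is constant, 0 if f is balanced
    return abs(state[0]) ** 2 > 0.5

assert deutsch(lambda x: 0) and deutsch(lambda x: 1)              # constant
assert not deutsch(lambda x: x) and not deutsch(lambda x: 1 - x)  # balanced
```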

+",1837,,1837,,05-04-2020 19:48,05-04-2020 19:48,,,,2,,,,CC BY-SA 4.0 +3997,2,,3994,08-10-2018 06:57,,7,,"

Quantum simulators don't rely on quantum-mechanical effects in the physical chips; instead they simulate certain aspects of a quantum state, and operations on it, using only classical compute.

+ +

Universal simulators simulate the full quantum state of the system, performing linear algebra transformations on it. They support a universal set of quantum operations, but the memory required grows exponentially with the number of qubits simulated, thus the system size is limited to 30-40 qubits (depending on the hardware and the simulator).

+ +

Specialized simulators can focus on simulating certain aspects of larger systems at the cost of supporting a smaller set of operations. For example, CHP simulator (described in this paper) evolves only the stabilizer information in a matrix tableau instead of the full quantum state, and the required memory growth is quadratic in the number of qubits. The set of gates supported is limited to CNOT, Hadamard and phase gates, but it is sufficient to study quantum error correction protocols.

+",2879,,2879,,08-10-2018 15:42,08-10-2018 15:42,,,,0,,,,CC BY-SA 4.0 +3998,2,,3994,08-10-2018 08:25,,7,,"

There is a distinction between what you use to write a program (the SDK), and what you use to run it (the backend).

+ +

The SDK can be either a graphical interface, like the IBM Q Experience or the CAS-Alibaba Quantum Computing Laboratory. It could also be a way of writing programs, like Q#, QISKit, Forest, Cirq, ProjectQ, etc.

+ +

The backend can either be a simulator that runs on a standard computer, or an actual quantum device.

+ +

Simulators use our knowledge of quantum theory to construct the simulation program, but no actual quantum computing happens. It is just the standard chips of your own computer, or of a supercomputer they let you use, running standard classical programs.

+ +

This approach is something we can do for small quantum programs, but the runtime will become unfeasibly long for large ones. So if you notice that your job takes longer and longer to run as you add more qubits, you know that it is being classically simulated rather than run on a real device.

+ +

The only actual quantum devices that can be used are those by IBM, Rigetti and Alibaba. To write programs for these you can use the Q Experience, QISKit or ProjectQ for the IBM devices, Rigetti's Forest for their devices, or the Alibaba graphical interface for their device.

+ +

Microsoft are making hardware, and they hope that it will one day be used as a backend in Q#. But they have not yet gotten a single qubit, so we might have to wait a while. Until then it will be only simulators that can be used (or other companies hardware).

+",409,,,,,08-10-2018 08:25,,,,2,,,,CC BY-SA 4.0 +3999,2,,3955,08-10-2018 12:44,,8,,"

I would like to suggest that period finding (a subroutine, if you like, of the famous Shor algorithm) demonstrates a very intuitive, exponential speed-up: It should be intuitively clear that something on the order of (the square root of) the uncertainty $\Delta p$ of the period $p$ in function evaluations is required classically to find an unknown period $p$ of a function that is guaranteed to be periodic in its integer input value. I've deliberately placed the parentheses such that their contents will be intuitive to people who have deeply internalized the birthday paradox; yet, for demonstrating a superpolynomial speedup, it is sufficient to intuitively understand that the cost is somewhere near $\Delta p$, the correct answer $\sqrt{\Delta p}$, or some polynomial thereof, and not something like the number of digits of $p$, $O(\log p)$.

+ +

The quantum algorithm for period finding, as employed by Shor's algorithm, simply takes the quantum Fourier transform of the periodic function applied to the equal superposition of all states. Naturally, only integer multiples of the inverse period can then have a non-zero probability amplitude, so doing this (typically) twice will allow you to quickly extract the period via a greatest common divisor computation. But a quantum Fourier transform is trivially implementable by $O(\log^2 p)$ controlled rotations (at most one per pair of input bits).

+ +
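
The ""only multiples survive the Fourier transform"" picture is easy to see numerically: take a uniform superposition over one residue class mod $p$ (which is what evaluating and measuring the function register leaves behind) and look at the support of its discrete Fourier transform. A numpy toy example (my own):

```python
import numpy as np

N, p = 64, 8                 # N a multiple of the period p, for simplicity
x = np.zeros(N)
x[np.arange(0, N, p)] = 1.0  # uniform superposition over one residue class

# Discrete Fourier transform of the (unnormalised) state
amps = np.fft.fft(x)
support = np.nonzero(np.abs(amps) > 1e-9)[0]
print(support)  # only multiples of N/p = 8 survive: 0, 8, 16, ..., 56
```

+ +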

The biggest intuitive speedup obviously occurs if you make the function evaluation very costly: The quantum algorithm only requires a constant number of evaluations! But even otherwise you get a gain, as you have an algorithm that runs, assuming function evaluations take constant time, in time polylogarithmic in $p$ rather than in $O(\sqrt{\Delta p})$, which, if you have no idea of the correct period $p$, is essentially $O(\sqrt{p})$.

+",,user1039,26,,10/28/2019 10:27,10/28/2019 10:27,,,,0,,,,CC BY-SA 4.0 +4000,2,,3959,08-11-2018 01:39,,6,,"

We can think of the Rubik's cube Cayley graph $\Gamma=(V,E)$ with each (colored) edge in $E$ being one of the Singmaster moves $\langle U,U^{2},U^{3}=U^{-1},D,D^{2},D^{3},\cdots\rangle$ and each vertex in $V$ being one of the $43252003274489856000\approx 4.3e{19}$ different configurations of the $3\times 3\times 3$ cube.

+ +

The diameter of a graph is the longest shortest path in the graph. Classical algorithms for determining the diameter are polynomial in $\vert V \vert$; see, e.g., this answer from a sister site.

+ +

As mentioned above, God's number is (related to) this diameter; to know the longest shortest path between two vertices for a Cayley graph on a group, it suffices to know how many steps away from the solved state one is. We know, thanks to Rokicki, Kociemba, Davidson, and Dethridge among others, that God's number is $20$. The algorithms they executed were polynomial in $\vert V\vert$, i.e. polynomial in $4.3e{19}$.

+ +

Heiligman's quantum algorithm for graph diameter, mentioned in the comments, achieves a Grover speedup over Dijkstra's algorithm, with ""a total quantum cost of $O(|V|^{9/4})$."" However, I believe Heiligman encodes the graph much as a classical algorithm would; e.g. with $O(|V|)$ qubits. Clearly if $|V|=4.3e{19}$ then this would not help.

+ +

Instead, another way to encode a Rubik's cube, as hinted in the other questions, is of course to prepare a uniform superposition over all $4.3e{19}$ states. This only takes $\log 4.3e{19}$ qubits.

+ +

Quantum algorithms are good at talking about ""eigenvalues"" and ""eigenvectors"" and ""eigenstates."" Applying all Singmaster moves to a uniform superposition of all $4.3e{19}$ states does not change the state; i.e. the uniform superposition is an eigenstate of the Markov chain on the Cayley graph.

+ +

There are relations between the diameter of a graph and the eigenvalues/eigenvectors of the corresponding adjacency/Laplacian matrix, especially the spectral gap, the distance between the two largest eigenvalues ($\lambda_1-\lambda_2$). A quick Google search of ""diameter eigenvalue"" produces this; I recommend exploring similar Google searches.

+ +

Spectral gaps are exactly what limits the adiabatic algorithm. Thus, perhaps by knowing how fast an adiabatic algorithm needs to run to evolve from the uniform superposition to the solved state for various subgroups/subspaces of the Rubik's cube group, one could estimate the spectral gap, and use this to bound God's number. But I'm quickly out of my league here and I doubt any sense of accuracy is achievable.

+",2927,,2927,,08-11-2018 02:02,08-11-2018 02:02,,,,4,,,,CC BY-SA 4.0 +4001,2,,3985,08-11-2018 06:52,,7,,"

Quantum volume is likely only useful as a metric for small noisy computers.

+ +

It’s impossible to invent any single-number metric that’s ideal for all tasks. Even with classical computers, metrics such as Dhrystone or Windows Performance Index are at best suggestive when predicting performance on real-world tasks. Conversely, giving more than one number can potentially be much more informative. Within the quantum volume framework, I suggest when characterizing a QPU to give quantum volume as the “executive summary” but also quote for a range of different qubit numbers $N$ the model circuit depths $d(N)$. Comparing the $d(N)$ to the needed depth and qubits will be predictive, at least to the extent that the killer apps resemble the model circuit sequences of parallel random $SU(4)$ on random pairs of qubits.

+ +

The quantum volume is about correctly implementing the model circuits, thus measuring it involves simulating those circuits to compare the output of the QPU against the ideal results. Simulation is practical only for relatively few qubits or low depth, so it is only possible to measure the quantum volume for small/noisy devices (without additional assumptions). Fortunately, when width/depth reaches the limit of simulation (very roughly around $N\simeq d\simeq 50$), this is when the noise must necessarily be low enough that we could begin to use such a device to implement logical qubits. Defining appropriate metrics for logical qubits is an open question. The emphasis switches from “Can this algorithm run at all?” to “How long will this algorithm take?” and metrics will surely be very different, involving the logical gate time.

+",4322,,,,,08-11-2018 06:52,,,,0,,,,CC BY-SA 4.0 +4002,1,4004,,08-11-2018 07:51,,8,207,"

In many sources (like on Page 30 here), I found that the complexity of the original Harrow Hassidim Lloyd is stated as $\mathcal{O}(\log (N) s^2 \kappa^2/\epsilon)$ where $s$ is said to be the ""matrix sparsity"". What is a precise definition of the term ""matrix sparsity""? I couldn't find it stated anywhere.

+",26,,16606,,06-02-2022 12:48,06-02-2022 12:48,"What exactly is ""matrix sparsity"" $s$?",,2,0,,,,CC BY-SA 4.0 +4003,2,,4002,08-11-2018 08:12,,3,,"

I believe the way they use it is as

+ +
+

The maximum number of non-zero elements in any row.

+
+ +

Although that’s different to the way Wikipedia defines sparsity, which is essentially the average:

+ +
+

the total number of non-zero elements divided by the number of elements.

+
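The difference between the two definitions is easy to see side by side (the example matrix below is arbitrary):

```python
import numpy as np

# Two notions of "sparsity" for the same matrix: the HHL-style bound
# (maximum number of non-zeros in any row) versus the Wikipedia-style
# average fraction of non-zero entries.
A = np.array([[1, 0, 0, 2],
              [0, 3, 0, 0],
              [0, 0, 0, 0],
              [4, 5, 6, 0]])

nonzeros_per_row = np.count_nonzero(A, axis=1)
s_max = nonzeros_per_row.max()             # HHL "sparsity" s
density = np.count_nonzero(A) / A.size     # average non-zero fraction

print(s_max)    # -> 3
print(density)  # -> 0.375
```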
+",1837,,,,,08-11-2018 08:12,,,,0,,,,CC BY-SA 4.0 +4004,2,,4002,08-11-2018 12:09,,8,,"

You have the definitions on page 12 of the paper you link. Simply said, a sparse matrix is a matrix with many 0s.
As an example, take N = 16 and let the polynomial be a simple function like 1.5*X (with X = log N); then your matrix has at most 1.5*log(16,2) = 6 non-zero entries per row.

+ +

+ +

If you prefer a visual, you have it here:

+ +

+",4127,,4127,,08-11-2018 12:23,08-11-2018 12:23,,,,0,,,,CC BY-SA 4.0 +4005,1,,,08-11-2018 15:03,,19,5772,"

Many people have suggested using ""Random Circuit Sampling"" to demonstrate quantum supremacy. But what is the precise definition of the ""Random Circuit Sampling"" problem? I've seen statements like ""the task +is to take a random (efficient) quantum circuit of a specific form and generate samples from its output distribution"". But it is not clear to me what the terms ""random (efficient) quantum circuit"" mean precisely. Also, do we know anything about the classical computational complexity of this problem?

+",4324,,2927,,6/26/2019 22:00,6/26/2019 22:00,"What exactly is ""Random Circuit Sampling""?",,1,1,,,,CC BY-SA 4.0 +4006,2,,4005,08-11-2018 22:10,,15,,"

There is a continuous set of possible states for $n$ qubits, each of which can be expressed as a superposition of the $2^n$ basis states.

+ +

Most of these states are highly entangled, and would require highly complex circuits to create (assuming the standard gate set of single qubit rotations and two or three qubit entangling gates).

+ +

These circuits would have to be implemented very cleanly to be able to reach these states. Noise causes decoherence, which essentially drives your qubits to an unentangled state (like all qubits $|0\rangle$ due to relaxation, or the maximally mixed state due to constantly rotated relaxation and dephasing).

+ +

The set of unentangled states is just a tiny corner of the total set of all possible states, but it is a corner that is hard to leave for long. So implementing circuits capable of fully exploring the $n$ qubit Hilbert space will be very hard. But taking advantage of the full Hilbert space is what quantum computing is all about. So we have to show that we can overcome this hurdle.

+ +

One way to see how well we do this is to focus on just randomly producing $n$ qubit states. These should be picked uniformly from all possible states, and not be biased towards the tiny set of states that it is easy for us to produce or write down. This can be done by running a random circuit of sufficient depth. The number of gates required for this is thought to be efficient (i.e. polynomial in $n$), though I'm not sure if this is proven or just a widely held conjecture.

+ +

The randomness of the process ensures that there are no nice properties that could be exploited by a classical simulation. So the task of simulating arbitrary random circuits will require a full simulation of the $n$ qubits, the required classical resources for which scale exponentially with $n$.

+ +

The details of how exactly to go about creating the random circuit, and what to look for in the results to declare success, depend on the proposal (such as Google's). It is also not yet clear how many qubits are needed before current supercomputers cannot reproduce the result.

+",409,,409,,08-11-2018 22:16,08-11-2018 22:16,,,,0,,,,CC BY-SA 4.0 +4007,1,,,08-12-2018 02:25,,5,201,"

Is there a method to calculate prime factorization in Q# of Visual Studio of Microsoft Quantum Development Kit?

+ +

In the Python language, it is the sympy.factorint method.

+ +

Or else is there any Q# sample program?

+ +

For example, something like Shor's algorithm implemented in Q# and called from C#.

+",4308,,26,,5/14/2019 5:50,11/15/2019 18:11,Q# factorization Method Program,,1,0,,,,CC BY-SA 4.0 +4008,2,,4007,08-12-2018 05:49,,5,,"

Integer factorization sample in the official Quantum Development Kit samples repository implements Shor's algorithm in Q# and shows how to call it from C#.

+",2879,,2879,,11/15/2019 18:11,11/15/2019 18:11,,,,0,,,,CC BY-SA 4.0 +4009,2,,3960,08-12-2018 07:10,,6,,"

Any Hermitian gate is ""self-canceling"". Proof: since any gate $U$ is unitary +$$ +UU^{\dagger}=U^{\dagger}U=I +$$ +If $U$ is also Hermitian, $U=U^{\dagger}$ and +$$UU=I$$

+ +

CNOT gate +$$ +\left(\begin{array}{cccc} +1 & 0 & 0 & 0 \\ +0 & 1 & 0 & 0 \\ +0 & 0 & 0 & 1 \\ +0 & 0 & 1 & 0 +\end{array}\right) +$$ +is Hermitian by inspection.
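Both facts are easy to check numerically for CNOT (a quick NumPy spot-check of the matrix written above):

```python
import numpy as np

# CNOT is Hermitian (equal to its conjugate transpose), and since it is
# also unitary, it squares to the identity: it is "self-canceling".
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

assert np.array_equal(CNOT, CNOT.conj().T)      # Hermitian
assert np.array_equal(CNOT @ CNOT, np.eye(4))   # self-canceling
```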

+",2105,,,,,08-12-2018 07:10,,,,0,,,,CC BY-SA 4.0 +4010,1,4011,,08-12-2018 20:34,,12,1052,"

Quoting from this blog post by Earl T. Campbell:

+ +
+

Magic states are a special ingredient, or resource, that allows quantum computers to run faster than traditional computers.

+
+ +

One interesting example that is mentioned in that blog post is that, in the case of a single qubit, any state apart from the eigenstates of the Pauli matrices is magic.

+ +

How are these magic states more generally defined? Is it really just any state that is not a stabilizer state, or is it something else?

+",55,,,,,8/13/2018 6:25,How are magic states defined in the context of quantum computation?,,1,0,,,,CC BY-SA 4.0 +4011,2,,4010,8/13/2018 5:16,,8,,"

It is any state that, if you have an unlimited supply of them, can be used to give you universal quantum computation when used in conjunction with perfect Clifford operations.

+ +

The standard example is that if you can produce the state $(|0\rangle+e^{i\pi/4}|1\rangle)/\sqrt{2}$, then you can combine this with Clifford operations in order to apply a $T$ gate (see Fig. 10.25 in Nielsen and Chuang), and we know that $T$+Clifford is universal.
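As a quick check of that standard example, the state $(|0\rangle+e^{i\pi/4}|1\rangle)/\sqrt{2}$ is exactly what the $T$ gate produces from $|+\rangle$, which is why consuming one such state per gate (via Clifford gadgetry) suffices:

```python
import numpy as np

# The magic state (|0> + e^{i pi/4}|1>)/sqrt(2) equals T|+>.
T = np.diag([1, np.exp(1j * np.pi / 4)])
plus = np.array([1, 1]) / np.sqrt(2)
magic = np.array([1, np.exp(1j * np.pi / 4)]) / np.sqrt(2)

assert np.allclose(T @ plus, magic)
```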

+ +

To be clear, in the one qubit case that is being discussed, I assume the accurate statement is that any pure state that is not an eigenstate of a Pauli operator is magic.

+ +

The real interest is in mixed states - how noisy can a particular magic state be before it isn’t magic any more. The theory being that Clifford operations are often comparatively easy in a fault-tolerant scenario (they can be applied transversally), and it is creating the one non-Clifford gate that’s hard. The more noise it can tolerate, the easier it will be to make.

+ +

I believe that I’ve seen results proving that there are some non-Clifford mixed states which are not magic, but I don’t remember the reference off the top of my head. Earl’s papers are the ones you want to read on this topic.

+",1837,,1837,,8/13/2018 6:25,8/13/2018 6:25,,,,0,,,,CC BY-SA 4.0 +4012,1,4013,,8/13/2018 14:27,,4,299,"

For example, suppose one has measured a state (such as $|0\rangle$) in the computational basis many times and obtained the approximate probabilities of getting 0 and 1 ($P(0)$ and $P(1)$). How does one then calculate the off-diagonal elements of the density matrix of the initial quantum state? +The system is open and noisy.

+",4178,,,,,8/13/2018 14:41,How to calculate the off-diagonal elements of a density matrix using the measurement result?,,1,1,,,,CC BY-SA 4.0 +4013,2,,4012,8/13/2018 14:41,,5,,"

With the given measurements, you cannot: there is no observable difference between many different states such as $|\pm\rangle=(|0\rangle\pm|1\rangle)/\sqrt{2}$.

+ +

In order to determine what the state is completely, you need more measurements. If you're using projective measurements, you need two more. These would typically be projections onto the bases +$$ +|\pm\rangle \qquad\text{and}\qquad (|0\rangle\pm i|1\rangle)/\sqrt{2} +$$ +However, if you are willing to use POVMs, you can define a single measurement, comprising four measurement operators that does the job. One way to visualise these is in the Bloch-sphere: you need a set of axes that provides a basis in the three dimensional space. Three vectors cannot do this because they satisfy a completeness relation which constrains them to be in a plane. The most effective way to do this is to inscribe a regular tetrahedron inside the Bloch sphere and take the 4 corners as measurement bases.
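A minimal sketch of that tetrahedral construction (the specific corner vectors below are one conventional choice): the four POVM elements $E_i = (I + \vec n_i\cdot\vec\sigma)/4$ are positive and sum to the identity.

```python
import numpy as np

# Tetrahedral (SIC-like) POVM: four unit Bloch vectors at the corners
# of a regular tetrahedron give elements E_i = (I + n_i . sigma)/4.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I = np.eye(2)

ns = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]]) / np.sqrt(3)
E = [(I + n[0] * X + n[1] * Y + n[2] * Z) / 4 for n in ns]

assert np.allclose(sum(E), I)                                    # completeness
assert all(np.all(np.linalg.eigvalsh(Ei) > -1e-12) for Ei in E)  # positivity
```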

+",1837,,,,,8/13/2018 14:41,,,,0,,,,CC BY-SA 4.0 +4014,1,4020,,8/14/2018 2:04,,8,234,"

Boson Sampling, sometimes stylized as BosonSampling, is an attractive candidate problem to establish quantum supremacy; the engineering problems appear more surmountable than those associated with a Turing-complete quantum computer.

+ +

However, Boson Sampling has a downside, in that the output of a photonic quantum computer capable of executing Boson Sampling with only a handful ($\le 100$ or so) of qubits may not even be able to be classically simulated. This is, of course, unlike $\mathsf{NP}$ problems such as factoring, the engineering aspects of which are significantly harder.

+ +

Thus, we may establish the results of Boson Sampling on $100$ or so photons, but in order to verify the results, we need to calculate the permanent of a $100\times100$ matrix, which is famously computationally hard.

+ +

Maybe a supercomputer powerful enough to calculate the permanent can do the trick. But then everyone would have to believe both the supercomputer's results and the Boson Sampling results.

+ +
+

Is there anything about Boson Sampling that can be easily verified?

+
+ +

I've had a flight of fancy to maybe put the resources of a cryptocurrency mining network to use to calculate such a permanent, and relying on some $\mathsf{\#P}$ / $\mathsf{IP}$ tricks for public verification, but I haven't gotten very far.

+ +

EDIT

+ +

I like @gIS's answer.

+ +

Compare Boson Sampling with Appel and Haken's computer-assisted proof of the Four Color Theorem. The original proof of the 4CT was controversial precisely because it was too long to be publicly verified by a human reader. We've come a long way since the '70s in our trust of computers, and I think most people now accept the proof of the 4CT without much controversy. But thinking about how to make things like a proof of the 4CT human-verifiable may lead to interesting ideas like the $\mathsf{PCP}$ theorem.

+",2927,,2927,,8/14/2018 16:19,9/25/2018 15:37,What about BosonSampling can be publicly verified?,,2,1,,,,CC BY-SA 4.0 +4017,1,5343,,8/14/2018 12:37,,10,1104,"

I was wondering whether there is some code available for Hamiltonian simulation of sparse matrices. And if such implementations exist, do they correspond to a divide-and-conquer approach or a quantum walk approach?

+",4127,,,,,9/25/2020 7:35,Is there a Hamiltonian simulation technique implemented somewhere?,,2,0,,,,CC BY-SA 4.0 +4018,2,,4017,8/14/2018 12:59,,3,,"

In this article the authors stated that they used this Group Leader's algorithm in order to obtain the circuit implementing the Hamiltonian simulation used as a subroutine in an instance of the HHL algorithm.

+ +

Unfortunately though, I did not understand quite well how they actually managed to find the circuit with that method.

+",2648,,,,,8/14/2018 12:59,,,,1,,,,CC BY-SA 4.0 +4019,2,,3980,8/14/2018 13:48,,3,,"

You can actually perform convolution on a quantum computer (and exponentially faster for that matter), if your input signals have a certain structure. But for general inputs, this seems challenging and maybe even physically impossible, which is what the paper seems to argue.

+ +

Consider how you would compute the convolution of two discrete signals $f$ and $g$ classically. You can take the Fourier transform of both signals, do a point-wise multiplication of the resulting vectors, and then do an inverse Fourier transform:

+ +

$$ +\mathscr{F}^{-1} (\mathscr{F}(f) . \mathscr{F}(g)) +$$
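For intuition, here is the classical version of that recipe in NumPy, checking circular convolution directly against FFT, point-wise multiply, inverse FFT (the signal length 8 is an arbitrary choice):

```python
import numpy as np

# Convolution theorem: circular convolution equals IFFT(FFT(f) * FFT(g)).
rng = np.random.default_rng(1)
f, g = rng.normal(size=8), rng.normal(size=8)

direct = np.array([sum(f[j] * g[(i - j) % 8] for j in range(8))
                   for i in range(8)])
via_fft = np.fft.ifft(np.fft.fft(f) * np.fft.fft(g)).real

assert np.allclose(direct, via_fft)
```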

+ +

Note that Fourier transform is a very cheap operation on a quantum computer. So this seems great. The problem is that the point-wise multiplication of two vectors is not so easy. Let's see what factors determine that.

+ +

Suppose we are lucky and the Fourier spectrum of $f$ turns out to be flat: +$$ +F = \mathscr{F}(f) = \frac{1}{\sqrt{N}}\sum_{i=0}^{N-1}{|i\rangle} = \sum_{i=0}^{N-1}F(i)\,|i\rangle +$$

+ +

In that case, your quantum computer can do a diagonal matrix operation that gives you the point-wise multiplication: +$$ +\mathscr{F}(f) . \mathscr{F}(g) = F.G = +\left(\begin{array}{cccc} +F(0) & & & \\ + & F(1) & & \\ + & & . & \\ + & & & F(N-1) +\end{array}\right) +\left(\begin{array}{c} +G(0) \\ +G(1) \\ +. \\ +G(N-1) +\end{array}\right) +$$

+ +

However, quantum algorithms that find the point-wise multiplication of two vectors may be physically impossible in the general case. This is because this operation is non-unitary in general. As a simple example, suppose that the Fourier transform of $f$ is a spiky function, with zeros in most places:

+ +

$$ +F = \mathscr{F}(f) = \frac{1}{2}(|0\rangle + |2\rangle + |5\rangle + |7\rangle) +$$ +The point-wise multiplication of this state with another state is non-reversible (because of the zeros), and thus not unitary.

+ +

There has been prior work to discover functions that result in a flat or near-flat Fourier spectrum, and are thus easy to convolve:

+ +

https://arxiv.org/abs/0811.3208

+ +

https://arxiv.org/abs/quant-ph/0211140

+",2503,,2503,,8/14/2018 16:43,8/14/2018 16:43,,,,0,,,,CC BY-SA 4.0 +4020,2,,4014,8/14/2018 14:53,,7,,"

About the need of boson sampling verification

+ +

First of all, let me point out that it is not a strict necessity to verify the output of a boson sampler. By this, I don't mean to say that it is not useful or interesting to try and do so, but rather that it is in some sense more of a practical than a fundamental necessity.

+ +

I think you yourself put up a good argument for this when you write

+ +
+

Maybe a supercomputer powerful enough to calculate the permanent can do the trick. But then everyone would have to believe both the supercomputer's results and the Boson Sampling results.

+
+ +

Indeed, there are many instances in which one solves a problem and trusts a solution which cannot really be fully verified. I mean, forget quantum mechanics, just use your computer to multiply two huge numbers. You probably have a high confidence that the result you get is correct, but how do you verify it without using another computer?

+ +

More generally, trust in a device's results comes from a variety of things, such as knowledge of the inner working of the device, and unit testing of the device itself (that is, testing that it works correctly for the special instances that you can verify with some other method).

+ +

The problem of boson sampling certification is no different. We know that, at some point, we will not be able to fully verify the output of a boson sampler, but that does not mean that we will not be able to trust it. If the device is built with due thoroughness, and its output is verified for a variety of small instances, and other tests that one is able to carry out are all successful, then at some point one builds up enough trust in the device to make a quantum supremacy claim (or whatever else one wants to use the boson sampler for) meaningful.

+ +

Is there anything about BosonSampling that can be easily verified?

+ +

Yes, there are properties that can be verified. Due to the sampling nature of the problem, what people typically do is to rule out alternative models that might have generated the observed samples. For example, Aaronson and Arkhipov (1309.7460) showed that the BosonSampling distribution is far from the uniform distribution in total variation distance (with high probability over the Haar-random matrices inducing the distribution), and gave a protocol to efficiently distinguish the two distributions. +A more recent work showing how statistical signatures can be used to certify the boson sampling distribution against alternative hypotheses is (Walschaers et al. 2014).

+ +

All other works that I am aware of focus on certifying specific aspects of a boson sampler, rather than directly tackling the problem of finding alternative distributions which are far from the BosonSampling one for random interferometers.

+ +

More specifically, one can isolate two major possible sources of error in a boson sampling apparatus: those arising from incorrectly implementing the interferometer, and those arising from the input photons not being what they should (that is, totally indistinguishable).

+ +

The first case is (relatively) easy to handle because one can efficiently characterise an interferometer using single-photons. +However, certifying the indistinguishability of input photons is trickier. One idea to do this is to change the interferometer to a non-random one, such as the QFT interferometer, and see whether something can be efficiently verified in this simpler case. I won't try to add all the relevant references here, but this direction started with (Tichy et al. 2010, 2013).

+ +

Regarding the public verification aspect, there isn't anything done in this direction that I've heard of. I am also not sure whether it is even a particularly meaningful direction to explore: why should we require such a ""high standard"" of verification for a boson sampler, when for virtually any other kind of experiment we are satisfied with trusting the people doing the experiment to be good at what they are doing?

+",55,,55,,8/24/2018 11:20,8/24/2018 11:20,,,,0,,,,CC BY-SA 4.0 +4021,2,,2151,8/14/2018 16:37,,11,,"

Here's a list of other resources to learn about quantum machine learning:

+ +

An introduction to quantum machine learning

+ +

The quest for a Quantum Neural Network

+ +

Quantum Machine Learning: What Quantum Computing Means to Data Mining

+ +

Quantum Machine Learning 1.0

+",4294,,,,,8/14/2018 16:37,,,,0,,,,CC BY-SA 4.0 +4023,1,4034,,8/14/2018 20:22,,5,106,"

$\newcommand{\ket}[1]{\lvert#1\rangle}$I am trying to show equality of two intermediate steps in the rearrangement of the Quantum Fourier transform definition, but I do not know how to rearrange the coefficients of a tensor product.

+ +

The text claims that $$ \frac{1}{2^{n/2}}\sum_{k_1=0}^{1}\sum_{k_2=0}^{1} \cdots \sum_{k_n=0}^{1}e^{2\pi ij \left( \sum_{l=1}^n{k_l 2^{-l}} \right)} \ket{k_1 \ldots k_n} = \frac{1}{2^{n/2}}\sum_{k_1=0}^{1}\sum_{k_2=0}^{1} \cdots \sum_{k_n=0}^{1}{ \bigotimes_{l=1}^n e^{2\pi ijk_l 2^{-l}} \ket{k_l}} $$

+ +

Isolating the parts that change leaves us with $$ e^{2\pi ij \left( \sum_{l=1}^n{k_l 2^{-l}} \right)} |k_1 \ldots k_n \rangle = \bigotimes_{l=1}^n e^{2\pi ijk_l 2^{-l}} |k_l \rangle. $$ If I were to look at a small case letting $ n = 3. $ I would get the following on the left hand side, $$ e^{2\pi i j \left( k_12^{-1} + k_22^{-2} + k_32^{-3} \right) } |k_1k_2k_3 \rangle, $$ and the following on the right hand side, $$ e^{2 \pi i jk_12^{-1}} |k_1\rangle \otimes e^{2 \pi i jk_22^{-2}}|k_2\rangle \otimes e^{2 \pi i jk_32^{-3}}|k_3\rangle. $$ Is there a rule that is similar to $$ a |k_1\rangle \otimes b |k_2\rangle = ab|k_1k_2\rangle$$ that will allow me to rewrite the RHS to be equal to the LHS as desired.

+ +

I would also like to ask for reference suggestions to strengthen my understanding of tensor product algebra.

+",4348,,55,,8/14/2018 20:33,8/15/2018 16:28,Simplifying Quantum Tensor products with coefficients,,2,1,,,,CC BY-SA 4.0 +4029,1,4038,,8/15/2018 1:28,,6,874,"

The Wikipedia entry on the subject is rather short. I am also curious about generalizations of quantum rotors in n-dimensions. An introductory explanation with at least one resource for further reading would be greatly appreciated.

+",2645,,26,,12/14/2018 6:00,12/14/2018 6:00,"What exactly are ""quantum rotors""?",,2,0,,,,CC BY-SA 4.0 +4032,2,,4023,8/15/2018 6:08,,1,,"

Isn't +$$ e^{2\pi i j \left( k_12^{-1} + k_22^{-2} + k_32^{-3} \right) } |k_1k_2k_3 \rangle $$ +the same as +$$ e^{2 \pi i jk_12^{-1}} |k_1\rangle \otimes e^{2 \pi i jk_22^{-2}}|k_2\rangle \otimes e^{2 \pi i jk_32^{-3}}|k_3\rangle\,? $$

+ +

The kets combine via the Kronecker product according to $ a |k_1\rangle \otimes b |k_2\rangle = ab|k_1k_2\rangle$ and the exponentials multiply according to $e^{x+y}=e^x\cdot e^y$. Just keep in mind that this identity for exponentials holds only if $x$ and $y$ commute. In your case the powers to which the exponentials are raised are just scalar numbers and thus always commute.

+",2817,,,,,8/15/2018 6:08,,,,0,,,,CC BY-SA 4.0 +4033,1,4040,,8/15/2018 7:40,,5,677,"

I'm new to Q# and I was curious about how one would find the number of Q#-simulatable qubits for a specific machine. I know Microsoft gives an approximation of 16GB ~ 30 qubits, but I wanted a better estimate for my own machines.

+ +

I wrote this quick program that runs a loop which allocates a register of increasing size. When I get a std::bad_alloc error I then have an estimate. I'm guessing there is a better way either through a tool or some pre-written code.

+",1287,,26,,03-12-2019 09:06,03-12-2019 09:06,Finding the maximum number of Q# simulatable qubits,,2,0,,,,CC BY-SA 4.0 +4034,2,,4023,8/15/2018 10:28,,3,,"

Expanding and generalising from Jitendra's answer: the key observation in this case is that you must use how scalar factors behave over tensor products. +Specifically, +$$ a (U \otimes V) = (aU) \otimes V = U \otimes (aV), $$ +or more generally +$$ a_1 a_2 \cdots a_k \,(U_1 \otimes U_2 \otimes \cdots \otimes U_k) += (a_1 U_1) \otimes (a_2 U_2) \otimes \cdots \otimes (a_k U_k). $$ +Let's consider the left-hand side of the expression which you isolated, +$$ + \exp\bigl(2\pi ij \sum_{\ell=1}^n{k_\ell 2^{-\ell}} \bigr) |k_1 \cdots k_n \rangle \;: $$ +we may use the fact that $\exp(a+b+\cdots+h) = \exp(a) \exp(b) \cdots \exp(h)$ to re-express this as +$$ + = \Bigl[\, \prod_{\ell=1}^n \exp\bigl(2\pi ij k_\ell \big/ 2^{\ell} \bigr) \Bigr] |k_1 \cdots k_n \rangle \;. $$ +We next use the fact that $\lvert k_1 k_2 \cdots k_n \rangle$ is short-hand for a tensor product of operators (state-vectors to be specific): +$$ = \Bigl[\, \prod_{\ell=1}^n \exp\bigl(2\pi ij k_\ell \big/ 2^{\ell} \bigr) \Bigr] \Bigl[\, \bigotimes_{\ell=1}^n |k_\ell \rangle \Bigr] \;. $$ +Now we use the way that scalars interact with tensor products: +$$ = \bigotimes_{\ell=1}^n \Bigl[ \exp\bigl(2\pi ij k_\ell \big/ 2^{\ell} \bigr) |k_\ell \rangle \Bigr] \;, $$ +which was what we wanted to show.
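The scalar-times-tensor-product rule used throughout can also be spot-checked numerically (random matrices and phases; the sizes are arbitrary):

```python
import numpy as np

# Check the identity (a U) kron (b V) == a b (U kron V).
rng = np.random.default_rng(2)
U, V = rng.normal(size=(2, 2)), rng.normal(size=(2, 2))
a, b = np.exp(1j * rng.normal(size=2))   # two random phases

assert np.allclose(np.kron(a * U, b * V), a * b * np.kron(U, V))
```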

+",124,,124,,8/15/2018 16:28,8/15/2018 16:28,,,,0,,,,CC BY-SA 4.0 +4035,1,,,8/15/2018 14:14,,15,2859,"

Say you have a PDE you want to solve.

+ +

What kind of quantum algorithms would you use to solve it? +How do we input our problem on a quantum computer? +What will be the output and in what form?

+ +

I know that quantum algorithms for solving linear systems (often named HHL, though this is a bad name since other versions are not from the HHL authors) were listed before, but maybe other methods are out there. +Also, since such an algorithm is considered a subroutine, the output is a quantum state; unless you only want statistics from it, or use it as the input of another quantum algorithm, this is limiting.

+",4127,,,,,01-10-2019 08:02,How would a quantum computer be used to solve Partial Differential Equations?,,2,2,,,,CC BY-SA 4.0 +4036,2,,4035,8/15/2018 15:31,,6,,"

I don't have an exact answer to your question (if it actually exists); but I can answer part of your question concerned with the I/O to a quantum processor.

+ +

As a general rule of thumb; Quantum Algorithms (currently) cannot provide direct answers to problem statements. At least for now, quantum processors exists as heterogeneous accelerators with a classical computing unit. The 'quantum accelerator' is concerned with only that part of the overall algorithm that is not trivial (or exponential in complexity) to solve on a classical computer. In the end, only a sub portion of the program is actually computed on the quantum processor. (Eg. Shor's Factoring Algorithm is actually a period finding algorithm. Period finding is a non-trivial task.)

+ +

Among several other reasons, one of the main problems is input and output with a quantum processor. The problem 'must' be expressible in a concise form (e.g. an equation). This equation is expressed as a quantum circuit in the 'oracle', which is primarily concerned with solving the equation, and measurement outcomes are recorded (tomography). The output, too, needs post-processing to actually make sense (which is again performed by the classical counterpart).

+ +

p.s. I would be very interested to know more about PDE solving quantum algorithms; if there is an efficient one.

+",2391,,,,,8/15/2018 15:31,,,,3,,,,CC BY-SA 4.0 +4037,2,,4029,8/15/2018 15:58,,1,,"

First of all, quantum rotors generally appear in the context of quantum rotor models, which are lattice models, analogous to quantum spin systems on a lattice - this is, identical quantum systems arranged on a lattice and interacting via some (natural) interaction. One wouldn't usually talk of an isolated quantum rotor.

+ +

So what are quantum rotors? They are quantum mechanical versions of classical rotors, just as quantum spins can be regarded as quantum versions of classical two-level systems.

+ +

So what is a classical rotor? It is basically a tiny magnet which can rotate freely in a certain number $d$ of dimensions, such as a compass needle ($d=2$, this would be a O(2) rotor model). It is thus characterized by a unit vector $\vec n$ in $d$ dimensions. The natural interactions of these models would be dipole-dipole interactions (i.e. $\vec n_i\cdot \vec n_j$), and the natural on-site energy would be a kinetic term $\vec{L}^2/2$ with $L$ the angular momentum.

+ +

These interactions can be turned quantum mechanical in the canonical way for conjugate variables, as explained on the Wikipedia page.

+",491,,,,,8/15/2018 15:58,,,,0,,,,CC BY-SA 4.0 +4038,2,,4029,8/15/2018 16:03,,5,,"

Quantum rotor models are quantum systems based on the quantization of systems with rotational configuration spaces. For example, a particle moving on a ring or a pendulum are rotors whose configuration spaces are circular $S^1$, while a rigid body a system whose configuration space is the three-dimensional sphere $S^3$ (or equivalently, the group manifold $SU(2)$).

+ +

The dynamics of these systems depend in addition on the angular momenta of the configuration space coordinates, whose values are not confined. Thus, for the case of the pendulum, the phase space (spanned by coordinates and angular momenta) is the two-dimensional cylindrical manifold $S^1 \times \mathbb{R}$ and in the rigid body case $S^3 \times \mathbb{R}^3$.

+ +

Once, an appropriate Hamiltonian function on the phase space is defined, there are quantization rules which allow writing the Schrödinger equation to treat the system quantum mechanically.

+ +

The cylindrical phase space quantum rotor is especially relevant to quantum computation as it enters in the description of the condensate dynamics of Josephson junction arrays which can be used to implement superconducting qubits; Please see the following review by: Devoret, Wallraff and Martinis. In this case a generic Hamiltonian (of a single isolated qubit) has the form:

+ +

$$H = \frac{E_C}{2} (n - n_0)^2 + \frac{E_J}{2} \cos\theta$$

+ +

Where $E_C$ is the charging energy and $E_J$ is the total junction energy and, $n$ is the number of Cooper pairs and $n_0$ is an offset proportional to the junction residual charge and: +$$\theta = 2 \pi \frac{\Phi}{\Phi_0} \mod 2 \pi$$ +is the phase across the junction ($\Phi$ is the magnetic flux, and $\Phi_0$ is the flux quantum), and most importantly is that the cooper pair number is conjugate to the phase: +$$n = \frac{1}{E_C}\frac{\partial \theta }{\partial t }$$ +This is a consequence of Faraday's law. (Thus this model is defined on a cylindrical phase space since the angular momentum is proportional to the angular velocity).

+ +

After quantization, the Cooper pair number operator is replaced by: +$$\hat{n} =i \hbar \frac{\partial }{\partial \theta }$$

+ +

The Schrödinger equation can be solved exactly in terms of Mathieu functions. However, the qubit dynamics can be obtained if we restrict to the two lowest eigenfunctions of the number operator, $|0\rangle$ and $|1\rangle$; in this case, the kinetic and potential terms of the Hamiltonian can be approximated for $n_0 = \frac{1}{2}$ as:

+ +

$$ E_C (n – n_0)^2 \approx E_C \sigma_z$$ +And:

+ +

$$E_J \cos\theta \approx E_J \sigma_x$$ +Thus, the quantum Hamiltonian restricted to the lowest two-dimensional number subspace is: +$$\hat{H} = \frac{E_J}{2}\left(\sigma_x + \frac{E_C}{E_J} \sigma_z\right)$$ +and we can use the ratio $\frac{E_C}{E_J}$ to control the quantum evolution.
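As a quick numerical illustration of this two-level Hamiltonian (a sketch; the values of $E_J$ and $E_C$ below are arbitrary, chosen just for demonstration), one can check its eigenvalues against the analytic splitting $\pm\frac{1}{2}\sqrt{E_J^2 + E_C^2}$:

```python
import numpy as np

# H = (E_J/2) * (sigma_x + (E_C/E_J) * sigma_z), illustrative values
E_J, E_C = 1.0, 0.3
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.array([[1.0, 0.0], [0.0, -1.0]])
H = 0.5 * E_J * sx + 0.5 * E_C * sz

evals = np.linalg.eigvalsh(H)  # sorted ascending
gap = np.hypot(E_J, E_C)       # analytic: sqrt(E_J^2 + E_C^2)
assert np.allclose(evals, [-gap / 2, gap / 2])
```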

+ +

Remarks:

+ +

While the authors of the main references given above don't use the name quantum rotors in relation to qubits, other authors do, for example: Girvin, Devoret, and Schoelkopf

+ +

A description of the cylindrical phase space and other phase spaces from the point of view of quantum computation can be found in: Albert, Pascazio and Devoret.

+ +

I don't know of an application of three-dimensional rotors to quantum computation; they appear mainly in quantum models of molecules.

+",4263,,4263,,8/15/2018 16:08,8/15/2018 16:08,,,,3,,,,CC BY-SA 4.0 +4039,2,,4033,8/15/2018 20:50,,-1,,"

A simple formula for the number of qubits you can simulate in almost all programs is given by this very well-received answer.

+ +

After re-arranging the formula given there into a form that's much more relevant to your specific question, and changing 48 to 32 since Q# is written in C++, not Python, we have:

+ +

$$ +N_{\rm{qubits}} = \log_2\left( \frac{\rm{RAM}}{32} \right) +$$

+ +

When $\rm{RAM} = 32\rm{GB} = 32\times 1024^3$, this formula gives 30, meaning you can simulate 30 qubits.

+ +

If you plug the amount of RAM you have into that formula, it will give a good estimate of how many qubits you can simulate without using a more sophisticated simulator.

+",2293,,2293,,8/15/2018 23:43,8/15/2018 23:43,,,,9,,,,CC BY-SA 4.0 +4040,2,,4033,8/15/2018 22:37,,2,,"

The simple rule is:

+ +
+

Doubling the memory gives you one additional qubit.

+
+ +

So if Microsoft says that

+ +
16GB -> 30 qubits
+
+ +

then

+ +
 8GB -> 29 qubits
+ 4GB -> 28 qubits
+ 2GB -> 27 qubits
+ ...
+32GB -> 31 qubits
+64GB -> 32 qubits
+
+ +

and so further.

+ +

This scaling, as well as the number quoted by Microsoft, can be understood from an argument like the one in the linked answer, using that

+ +
    +
  1. each complex number equals 2 real numbers with double precision (8 bytes each), so 16 bytes are needed per complex number,

  2. +
  3. to describe $N$ qubits, $2^N$ numbers are needed,

  4. +
  5. and 1GB=$1024^3$ bytes

  6. +
+ +

which together yields $16\cdot 2^N = x \, \mathrm{GB} = x\cdot 1024^3$ with $x$ the memory required in GB, which results in +$$ +N = \log_2(x\,\times 1024^3/16) = \log_2(x)+26\ . +$$ +For $x=16$ (i.e. 16GB of memory), $\log_2(16)=4$, and this yields exactly +$$ +N=30 \ \mathrm{qubits}\ , +$$ +which is the number quoted by Microsoft.
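For what it's worth, this counting can be sketched in a few lines (assuming 16 bytes per amplitude, as derived above):

```python
import math

def max_qubits(ram_gb):
    # 2^N complex amplitudes at 16 bytes each must fit in ram_gb * 1024^3 bytes
    return int(math.log2(ram_gb * 1024**3 / 16))

assert max_qubits(16) == 30  # the number quoted by Microsoft
assert max_qubits(64) == 32
```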

+",491,,,,,8/15/2018 22:37,,,,0,,,,CC BY-SA 4.0 +4042,1,4044,,8/16/2018 11:12,,12,610,"

In the literature on QECC, Clifford gates occupy an elevated status.

+ +

Consider the following examples which attest to this:

+ +
    +
  • When you study stabilizer codes, you separately study how to perform encoded Clifford gates (even if these aren't applicable transversally). All introductory material on QECC emphasizes performing encoded Clifford operations on quantum codes, and otherwise too emphasizes Clifford gates (i.e., even when not performing encoded Clifford gates in quantum codes).

  • +
  • The entire topic of magic state distillation* is based on the classification of certain operations (including the performance of Clifford gates) as low-cost operations, while, for instance, performing the Toffoli gate or the $\pi/8$ gate counts as a higher-cost operation.

  • +
+ +

Possible answers:

+ +
    +
  1. This has been justified in certain places in the literature, e.g., Gottesman's PhD dissertation and many papers by him, and also in https://arxiv.org/abs/quant-ph/0403025. The reason given in these places is that it is possible to perform some Clifford gates transversally (a prototypical fault-tolerant operation) on certain stabilizer codes. On the other hand, it is not easy to find a transversal application of non-Clifford gates on quantum codes. I haven't verified this myself, but am just going by statements which Gottesman makes in his PhD dissertation and some review articles.
  2. +
+ +

Not being able to perform an encoded gate transversally on a quantum code immediately increases the cost of performing said gate on the code. And hence performing Clifford gates goes into the low-cost category, while non-Clifford gates goes into the high-cost category.

+ +
    +
  1. From an engineering perspective, it is important to decide on a standardized list of basic units of quantum computation (state preparation, gates, measurement observables/bases, etc.). Performing Clifford gates makes for a convenient choice on that list for multiple reasons (most well-known sets of universal quantum gates include many Clifford gates in them, the Gottesman-Knill theorem**, etc.).
  2. +
+ +

These are the only two reasons I could think of for why the Clifford group has such an elevated status in the study of QECC (particularly when you're studying stabilizer codes). Both reasons stem from an engineering perspective.

+ +

So the question is can one identify other reasons, which don't stem from an engineering perspective? Is there some other major role that the Clifford gates play, which I've missed out?

+ +

Possible other reason: I know that the Clifford group is the normaliser of the Pauli group in the unitary group (on $n$-qubit systems). Also, that it has a semidirect product structure (actually a projective representation of a semidirect product group). Do these relations/properties by themselves give another reason why one ought to study the Clifford group in association with stabilizer codes?

+ +

*Feel free to correct this. +**Which states that, restricted to certain operations, you can't obtain the quantum advantage, and hence you need a little bit more than the set of operations you initially restricted yourself to.

+",4353,,4353,,8/16/2018 14:58,8/20/2018 6:50,Significance of Clifford operations from quantum error correction perspective,,1,0,,,,CC BY-SA 4.0 +4043,2,,1826,8/16/2018 18:01,,4,,"

TL;DR: The efficiency is 2/9, not 25%.

+ +

The Ekert 91 protocol involves many rounds. In each round, Alice and Bob share a Bell pair +$$ +(|00\rangle+|11\rangle)/\sqrt{2} +$$ +They both choose randomly which of 3 measurements to make. Alice chooses between the measurement bases $Z$, $(X+Z)/\sqrt{2}$ and $X$. Bob chooses between $(X+Z)/\sqrt{2}$, $X$ and $(X-Z)/\sqrt{2}$. They make their measurements, and get $\pm 1$ answers. They record both the measurement settings and the answers.

+ +

Later, they announce in public what measurement bases they used, but not the answers.

+ +

In the scenario of no eavesdropping, and no errors, Alice and Bob are guaranteed to get identical measurement results whenever they measure in the same basis, and each such outcome gives one shared secret bit. If Alice and Bob chose different measurement bases, they announce the outcomes that they got and use them in a CHSH test to detect eavesdropping.

+ +

How often do they get a secret bit out in this scenario? If we assume that all measurement bases are equally likely, then there are 9 possible combinations for Alice's and Bob's choices. Of these, two are matching pairs. Hence, the efficiency is 2/9.
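A brute-force count of the basis combinations reproduces this (a sketch, labelling each measurement basis by its angle in degrees):

```python
from fractions import Fraction

alice = [0, 45, 90]   # Z, (X+Z)/sqrt(2), X
bob = [45, 90, 135]   # (X+Z)/sqrt(2), X, (X-Z)/sqrt(2)

pairs = [(a, b) for a in alice for b in bob]
matching = [p for p in pairs if p[0] == p[1]]
assert Fraction(len(matching), len(pairs)) == Fraction(2, 9)
```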

+",1837,,1837,,8/17/2018 10:33,8/17/2018 10:33,,,,0,,,,CC BY-SA 4.0 +4044,2,,4042,8/16/2018 20:45,,5,,"

Clifford operations are often easy to do fault-tolerantly in stabilizer codes, either transversally or by code deformation. The reason is exactly as you thought: the special relationship between these gates and the Paulis, since the latter are used to define stabilizer codes.

+ +

It is possible to get non-Clifford gates in codes, but a price must be paid. Specifically, there is a relationship between the geometric locality of codes and the gates they can do transversally. So if you are allowed to do only nearest neighbour controlled gates on a 2D lattice (such as a surface or Color code) only Cliffords will be possible. See papers like this one for more on this.

+ +

The fact that we can expect fault-tolerant Cliffords from stabilizer codes has subsequently been put at the heart of techniques to synthesize universal gate sets. So if there's a way to create a non-stabilizer encoded state in a non-fault-tolerant way, we know how to clean it up using our logical Cliffords. To turn these states into rotations, we use our logical Cliffords. So if you have a code and want to apply all these off-the-shelf results, you'd better find your fault-tolerant Cliffords; or at least the Paulis, H and a CZ or CNOT if you can't manage them all.

+",409,,409,,8/20/2018 6:50,8/20/2018 6:50,,,,2,,,,CC BY-SA 4.0 +4046,1,4052,,8/17/2018 15:32,,6,653,"

Can we process infinite matrices with a quantum computer?

+ +

If then, how can we do that?

+",877,,877,,8/23/2018 23:42,8/23/2018 23:42,Can we process infinite matrices with a quantum computer?,,1,7,,,,CC BY-SA 4.0 +4047,1,,,8/17/2018 17:26,,9,423,"

I want to preface with a disclaimer that I am a physicist with minimal knowledge of computer hardware. I have a solid understanding of quantum information from a theoretical standpoint but zero knowledge of how it is implemented. Here goes...

+ +

When a company boasts that their newest chip has $X$ qubits, what exactly does that mean? Should I think of $X$ as being analogous to 32 or 64 bits on a conventional processor, meaning that the quantum computer can hold and process data types of size $X$? Or is $X$ something physical, like the number of Josephson junctions on the chip? Should I think of $X$ as being equivalent to the number of transistors on a conventional processor? The benchmark of a conventional microprocessor is the number of transistors, so it is tempting to make an equivalence between transistor and qubit, but I don't think that is correct because a qubit is a unit of information and a transistor is a piece of hardware. Furthermore, I would not understand how quantum supremacy could be achieved with only ~50 qubits when conventional processors have billions of transistors. It just seems strange to say that a chip has $X$ 'qubits', because from a theoretical standpoint a qubit is information and not hardware.

+ +

EDIT:

+ +

I am realizing that my confusion boils down to memory vs processing power. I get that in order to store $2^X$ states, I would need $X$ physical qubits (Josephson junctions, spin states, etc). But then where does the processing power come from? On a conventional chip, you have registers to store the information to be processed, but then a ton of transistors to perform the computation. My question is how this processing power is measured in a quantum computer? Do the number of quantum gates on the chip really not matter as much as the number of qubits they are capable of operating on?

+",4367,,26,,12/23/2018 13:12,12/23/2018 13:12,What does it mean for a quantum computer to have $X$ qubits?,,3,2,,,,CC BY-SA 4.0 +4048,2,,4047,8/17/2018 17:42,,2,,"

Take the tables attached in the Wikipedia article here:

+ +

https://en.wikipedia.org/wiki/List_of_quantum_processors

+ +

as a starting point. The ""Gate model QPUs"" are more likely to be Turing-complete than the ""Annealing QPUs.""

+ +

For companies that claim $X$ qubits, they mean they have the capability to operate, quantum mechanically, on $2^X$ states, and these are physically implemented qubits, e.g. $X$ Josephson junctions, etc., much like an individual bit of DRAM on a classical computer is physically implemented (with a capacitor and a transistor).

+ +

So in a sense, the number of qubits is roughly analogous to the amount of hardware. We can only read out $X$ qubits, but we can operate on up to $2^X$ states. Your analogy is actually not that bad of an analogy, it's merely the quantum-mechanical weirdness that's hard to get over.
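A toy illustration of the $2^X$ counting (a sketch in numpy: the state vector of $X$ qubits is the tensor product of $X$ single-qubit vectors, so it has $2^X$ amplitudes):

```python
import numpy as np

ket0 = np.array([1.0, 0.0])  # single-qubit basis state |0>
X = 3
state = ket0
for _ in range(X - 1):       # tensor together X qubits
    state = np.kron(state, ket0)
assert state.shape == (2**X,)  # 2^X amplitudes for X qubits
```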

+",2927,,,,,8/17/2018 17:42,,,,3,,,,CC BY-SA 4.0 +4049,2,,4047,8/17/2018 17:51,,0,,"

I would say they have individual physical systems (each represented mathematically by one qubit) that they interconnect in some fashion (which is why we talk about connectivity). Physical systems can be two different polarizations of a photon, or two states of an electron... But there are multiple ways to realize such systems.

+ +

Now quantum supremacy is a really misleading term. +The idea is that the biggest number of qubits simulated on classical computers is around 50 qubits, so having a physical realization of a larger size would be considered a ""hope"" for quantum computers to ""beat"" +classical ones.

+ +

A good explanation video about it can be found here.

+",4127,,,,,8/17/2018 17:51,,,,0,,,,CC BY-SA 4.0 +4052,2,,4046,8/18/2018 13:53,,5,,"

If instead of manipulating the quantum information in qubits, your quantum computer were to do operations on qu$d$its with $d$ being infinity, then you'd essentially be processing infinite matrices on a quantum computer.

+ +

However most quantum computing hardware we have today, and even most of the experiments being done in academic labs, do operations on qubits (such as spin-1/2 nuclei), rather than on qu$d$its with an infinite value for $d$ (such as a quantum harmonic oscillator).

+ +

It is theoretically possible to do quantum gates on qu$d$its with infinite $d$, which would be processing infinite matrices, but it is not done in practice. It is hard enough to make quantum computers with ~100 qubits (which would still be processing only finite-dimensional matrices).

+",2293,,,,,8/18/2018 13:53,,,,9,,,,CC BY-SA 4.0 +4053,1,4055,,8/19/2018 8:44,,8,615,"

I've just started to mess about with QISKit on Python and one thing is confusing me a fair bit.

+ +

Given that we are building quantum circuits, what is the need for a classical register?

+ +

Is it because the collapsed state must be classical ?

+",4373,,26,,12/13/2018 20:03,12/13/2018 20:03,Why do we need a Classical Register for carrying out Quantum Computations?,,2,0,,,,CC BY-SA 4.0 +4054,2,,4053,8/19/2018 14:21,,4,,"

Quantum computations depend on classical control: A classical computer, driven by a classical algorithm, suffices to apply the quantum gates in sequence. In some algorithms (such as quantum teleportation), the gate to be applied depends on an earlier measurement result. Hence a store for measurement results (and, possibly, calculations using it) is helpful and needed in general.

+",,user1039,,,,8/19/2018 14:21,,,,2,,,,CC BY-SA 4.0 +4055,2,,4053,8/19/2018 18:44,,3,,"

Once we measure a qubit, we get some classical information out. This is something we need to keep track of. We need it to look at the outputs of our computations, or use them as part of classical control within an algorithm. For that reason, it can be useful to have a specific object in a quantum SDK that keeps track of this classical information, and does so in a way that parallels how qubits are dealt with. This is the approach taken by QISKit.

+",409,,,,,8/19/2018 18:44,,,,0,,,,CC BY-SA 4.0 +4056,2,,4047,8/19/2018 19:02,,2,,"

When we talk about hardware specs for classical computers, we are getting some information about the kind of things we can do with the device. For a circuit-based quantum computer, the relevant number is how many fault-tolerant qubits we have. We can then compare this to the required qubit number for given instances of our favourite algorithm and see what that means in terms of factoring numbers, etc.

+ +

Currently, the number of fault-tolerant qubits is zero. We are instead in an era of noisy prototype devices. They are for testing, and what is possible depends strongly on how noisy the gates are and the connectivity (which pairs of qubits can we do a controlled gate with). If a company/lab does not give you information, there is no way to compare with what other companies/labs are doing (and all are currently a few orders of magnitude away from having enough noisy qubits to make a truly fault-tolerant qubit).

+",409,,,,,8/19/2018 19:02,,,,0,,,,CC BY-SA 4.0 +4057,1,4058,,8/20/2018 10:28,,10,2005,"

As a part of a discussion with my 'classical' friend, he insisted that making a state machine for calculating the outcome of a quantum computer is possible; so, simply calculate the outcomes of (known) algorithms on supercomputers and store their results in a look-up table (something like storing the truth table).

+So, why do people work on quantum simulators (say, capable up to 40 qubits); which calculate the result every time?! Simply (hypothetically) use the supercomputers of the world (say capable up to 60 qubits); calculate the result for $2^{60}$ input cases, store their result and use it as reference? How can I convince him it's not possible?

+Note: this is for known quantum algorithms and their known circuit implementations.

+",2391,,26,,12/13/2018 20:04,12/13/2018 20:04,Classical Memory enough to store states up to 40 qubits quantum system?,,2,2,,,,CC BY-SA 4.0 +4058,2,,4057,8/20/2018 11:33,,14,,"

Suppose that you have a quantum algorithm with $2^{60}$ possible inputs. Suppose also that it would take 1 nanosecond to run this on a supercomputer (which is unrealistically optimistic!). The total time required to run through all possible inputs would be 36.5 years.
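That 36.5-year figure is easy to verify (assuming one run per nanosecond and a 365.25-day year):

```python
seconds = 2**60 * 1e-9                 # 2^60 runs at 1 ns each
years = seconds / (365.25 * 24 * 3600)
assert 36 < years < 37                 # about 36.5 years
```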

+ +

Clearly it would be much better to just run the instance that you care about, and get the answer in an instant, rather than waiting half a lifetime to pick it from a list. This gets ever more true as we raise the runtime from the unrealistic 1 nanosecond.

+ +
+

why do people work on quantum simulators (say, capable up to 40 qubits); which calculate the result every time?!

+
+ +

Even if you wanted to create a lookup table, you'd still need a simulator like this to create it.

+",409,,,,,8/20/2018 11:33,,,,1,,,,CC BY-SA 4.0 +4059,1,4061,,8/21/2018 8:13,,8,341,"

Take two pure bi-partite states $\psi$ and $\phi$ that have the same amount of entanglement in them as quantified by concurrence (does the measure make a difference?). Can any such states be transformed into each other using local unitaries?

+",1860,,10480,,1/27/2021 16:08,1/27/2021 17:04,Can two states with the same entanglement be transformed into each other using local unitaries?,,1,0,,,,CC BY-SA 4.0 +4060,2,,1826,8/21/2018 9:31,,7,,"

I emailed Artur Ekert to seek help with this question, and he replied:

+ +
+

There are different variants of the E91 protocol that may give + you different efficiencies. In my original version the settings used for + the keys bits were indeed chosen with the probability 2/9, but others + optimised it in all kind of ways.

+
+ +

So at least 2/9 is the probability for the original E91 protocol, and for those who want to know the calculation for the original protocol, please refer to DaftWullie's answer, which I think is correct. But as I'm not a professional in this area, I'm not sure whether the calculation in Cabello's paper is a mistake or whether he just calculated some optimized version.

+",2047,,,,,8/21/2018 9:31,,,,0,,,,CC BY-SA 4.0 +4061,2,,4059,8/21/2018 10:53,,7,,"

Any two bipartite pure states $\psi$ and $\phi$ can be transformed into each other with local unitaries if and only if they have the same Schmidt coefficients. (To prove the 'only if' part, note that the reduced density matrix of either qubit has eigenvalues that are the squares of the Schmidt coefficients, and are unchanged by unitaries).
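A small numerical sanity check (a sketch using numpy; the Haar-random-unitary helper is just for illustration): writing a two-qubit pure state as a $2\times 2$ coefficient matrix $M$, local unitaries act as $M \mapsto U M V^T$, and the singular values of $M$ (the Schmidt coefficients) are unchanged:

```python
import numpy as np

rng = np.random.default_rng(1)

def random_unitary(n):
    # unitary from the QR decomposition of a complex Gaussian matrix
    z = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    q, _ = np.linalg.qr(z)
    return q

# random two-qubit pure state as a 2x2 coefficient matrix
psi = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
psi /= np.linalg.norm(psi)

u, v = random_unitary(2), random_unitary(2)
phi = u @ psi @ v.T  # action of the local unitary U (x) V

s_psi = np.linalg.svd(psi, compute_uv=False)
s_phi = np.linalg.svd(phi, compute_uv=False)
assert np.allclose(s_psi, s_phi)  # Schmidt coefficients agree
```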

+

The concurrence is typically specified for two-qubit states. Any reasonable entanglement monotone on two qubits, when calculated for pure states, must be a decreasing function of the largest Schmidt coefficient of the state, and thus two states having the same entanglement would have the same Schmidt coefficients, and hence be unitarily equivalent.

+

If you generalise beyond qubits, there will be entanglement measures that do not contain as much fine-grained information as the Schmidt coefficients, and so equality of that entanglement measure will be necessary but not sufficient for unitary equivalence.

+",1837,,10480,,1/27/2021 17:04,1/27/2021 17:04,,,,5,,,,CC BY-SA 4.0 +4062,2,,4057,8/21/2018 11:05,,3,,"

For a specific quantum algorithm that uses 40 qubits, your friend makes a good point. One can just calculate the truth table (one might find this hard, but assume that one can) and use it as reference. Of course this starts to get ridiculous as you increase the number of qubits, not just because of the number of inputs but because computing the outcome of a quantum algorithm could be exponentially harder classically for all we know.

+ +

However, being able to simulate a quantum computer (or having an actual quantum computer) is far more useful. By changing what quantum operations one does, one gets different algorithms. The number of functions that one can define on 40 bits of input is $2^{2^{40}}$. Having a single database that gives you instant access to the results of any quantum algorithm is just absurdly infeasible. We want to be able to switch algorithms easily too, and classically we'd want simulators for that.

+",4394,,,,,8/21/2018 11:05,,,,2,,,,CC BY-SA 4.0 +4063,1,,,8/21/2018 14:43,,5,175,"

I would like to understand what is a continuous quantum register. +I know the direct definition is a quantum register that stores a real number defined by an observable with a spectrum consisting of $\mathbb{R}$ but that seems really abstract to me.

+ +

Also how it relates to qubits? Is a set of qubits used for simulating discretely a continuous quantum register? If yes how?

+",4127,,26,,12/23/2018 12:41,12/23/2018 12:41,What is continuous quantum register and how it relates to qubits?,,1,0,,,,CC BY-SA 4.0 +4064,1,4065,,8/21/2018 15:48,,4,130,"

In their 2017 paper, Childs et al. gave the definition of QLSP beginning with: ""Let $A$ be an $N\times N$ Hermitian matrix with known condition number $\kappa$, $||A|| = 1$ and at most $d$ non-zero entries in any row or column...""

+ +

I initially thought that by $||A|| = 1$ they meant that QLSP requires spectral norm (largest eigenvalue) of $A$ to be $1$, which sounds reasonable to me as even the original HHL paper they needed the eigenvalues of $A$ to lie in between $1/\kappa$ and $1$.

+ +

But the paper Quantum linear systems algorithms: a primer seems to have interpreted it as ""the determinant of $A$ needs to be $1$"" on page 28, definition 6.

+ +

Which interpretation is correct and why? In case the latter is correct, I'm not sure why it is so. I don't see why the restriction that $\text{det}(A)$ needs to be $1$ even make sense. It (the unit determinant condition) doesn't even guarantee that the eigenvalues of $A$ will be less than or equal to $1$, which is necessary for HHL to work.

+",26,,,,,8/21/2018 16:04,What does $||A|| = 1$ mean in the definition of QLSP?,,1,0,,,,CC BY-SA 4.0 +4065,2,,4064,8/21/2018 16:04,,4,,"

Certainly it is meant as the largest eigenvalue. I have no idea why the linked review paper uses the determinant. I don't see anywhere that they use that property (from an admittedly brief skim).

+ +

I presume you could rewrite conditions in terms of the determinant (you would have to alter the time step $t_0$) but it's not clear to me why you would want to. It's also worth noting that definition 8 (page 39) in that paper defines matrix inversion, putting limits on the eigenvalues of the matrix: bounding between some minimum value and 1. So they're also implicitly acknowledging that structure, and certainly not setting the determinant (product of eigenvalues) to 1.

+",1837,,,,,8/21/2018 16:04,,,,3,,,,CC BY-SA 4.0 +4066,1,4073,,8/21/2018 16:17,,6,1586,"

What would be the best way to re-create the following image of the HHL quantum circuit without compromising on image quality (the image is from this review paper)?

+ +

+ +

Using qasm2circ I can create the basic circuit. But I don't know any good program/software which will help to produce the labels of the various components and also the nice transparent boxes emphasizing the 3 main steps.

+ +

FWIW, I had contacted the corresponding author last week asking about the software they used to generate the image (and also whether the image has any associated copyright license), but haven't received any response so far, which is why I am asking here.

+",26,,,,,09-07-2018 13:09,How to re-create the following circuit image?,,4,4,,,,CC BY-SA 4.0 +4067,2,,4066,8/21/2018 16:30,,4,,"

I know there is the Latex package +qcircuit

+ +

But I am not sure how they did the boxes for the 3 parts a, b, c. +Maybe they used some image tool to draw them.

+ +

Edit : Found this discussion where one admitted using Tikz for drawing similar images

+ +

Edit 2 : I found this little tool called qpic which can help you build fancy quantum circuits. You will be able to do it easily I suppose.

+",4127,,4127,,8/21/2018 20:44,8/21/2018 20:44,,,,3,,,,CC BY-SA 4.0 +4068,2,,4066,8/21/2018 18:18,,4,,"

According to the meta-information in the PDF of the figure, it has been created with PowerPoint:

+ +
$ pdfinfo HHL_circuit.pdf 
+Title:          HHL_circuit
+Creator:        PowerPoint
+Producer:       Mac OS X 10.13.2 Quartz PDFContext
+[...]
+
+ +

(You can download paper sources from the arxiv by choosing ""other formats"".)

+",491,,,,,8/21/2018 18:18,,,,0,,,,CC BY-SA 4.0 +4069,1,4071,,8/22/2018 2:19,,17,989,"

I started reading about Randomized Benchmarking (this paper, arxiv version) and came across "unitary 2 design."

+

After some googling, I found that the Clifford group being a unitary 2 design is a specific case of "Quantum t-design."

+

I read the wikipedia page and a few other references (this one for example, non pdf link to the website that links to the pdf).

+

I would like to have some intuitive understanding of the difference between different t designs and what makes the Clifford group a 2 design.

+",4399,,55,,12/19/2022 13:27,12/19/2022 13:27,What is the intuition behind quantum t-designs?,,1,2,,,,CC BY-SA 4.0 +4070,2,,4066,8/22/2018 5:03,,9,,"

I always used to use qcircuit but, as it happens, I've recently been developing a tikz library to do the job, which has some added flexibility. The package will be available at the following DOI: 10.17637/rh.7000520. +

+ +

First, I load my library in the document preamble

+ +
\usetikzlibrary{quantikz}
+
+ +

Then I typeset the circuit with

+ +
\begin{tikzcd}
+\lstick{Ancilla \ket{0}\\ register $S$} & \qw & \qw\gategroup[background,style={fill=gray!20,rounded corners},wires=3,steps=3]{Phase estimation} & \qw & \qw & \qw &  \gate{R}\gategroup[background,style={fill=gray!20,rounded corners},wires=3,steps=1]{$R(\tilde\lambda^{-1})$ rotation} & \qw & \qw\gategroup[background,style={fill=gray!20,rounded corners},wires=3,steps=3]{Uncompute}  & \qw & \qw & \meter{} & \rstick{\ket{1}} \qw \\
+\lstick{Clock $\ket{0}^{\otimes n}$\\ register $C$}& \qwbundle & \gate{H^{\otimes n}} & \ctrl{1} & \gate{FT^\dagger} & \hphantomgate{}\qw & \ctrl{-1} & \hphantomgate{}\qw & \gate{FT} & \ctrl{1} & \gate{H^{\otimes n}} & \qw & \rstick{$\ket{0}^{\otimes n}$}\qw \\[1em]
+\lstick{Input \ket{b} \\ register $I$}&\qwbundle& \qw&\gate{U}&\qw& \qw & \qw&\qw&\qw&\gate{U}&\qw&\qw &\rstick{\ket{x}}\qw
+\end{tikzcd}
+
+",1837,,1837,,8/23/2018 9:48,8/23/2018 9:48,,,,3,,,,CC BY-SA 4.0 +4071,2,,4069,8/22/2018 5:45,,9,,"

The $t$ in $t$-design is essentially a measure of how good a job the set of gates does at randomising a state (the larger $t$, the more random, with properly random requiring the infinite limit). Often, you want to compute the average of some function over all possible pure input states, which is equivalent to fixing the input state and averaging over all possible unitaries. However, averaging over all possible unitaries is a pain, and is unnecessary if the function you want to compute is simple enough. If the function you want is a polynomial of degree $t$ or less in terms of the coefficients of the input state, it is sufficient to average over a set of gates that comprise a $t$-design.

+

Another way of thinking about this is, instead of a degree t polynomial, you can talk about calculating a linear function of t copies of the input state. This is more like you would do in an actual experiment.

+

As for what makes the Clifford group a 2-design, I guess you just have to sit down and do the maths. There's a proof in section A.1 of this paper. For the special case of the Clifford group on a single qubit, let $S$ be the set of 1-qubit Clifford gates. Then you need to show that +$$ +\sum_{s\in S}(s\otimes s)|00\rangle\langle 00|(s^\dagger\otimes s^\dagger)\propto\mathbb{I}+\text{SWAP} +$$ +The critical thing here is that there are 2 copies of the state that we're averaging over.
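One can check this sum numerically (a sketch: the single-qubit Clifford group is generated, up to global phase, by $H$ and $S$, giving 24 elements; the phase-canonicalization helper below is ad hoc, just to keep the closure finite). Since the Haar average of $(U\otimes U)|00\rangle\langle00|(U^\dagger\otimes U^\dagger)$ is $(\mathbb{I}+\mathrm{SWAP})/6$, the sum over the 24 Cliffords should equal $4(\mathbb{I}+\mathrm{SWAP})$:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
S = np.array([[1, 0], [0, 1j]], dtype=complex)

def canon(u):
    # fix the global phase and round, so matrices can be used as dict keys
    idx = int(np.argmax(np.abs(u) > 1e-9))
    phase = u.flat[idx] / abs(u.flat[idx])
    return tuple(np.round(u / phase, 9).flatten())

# build the 24-element single-qubit Clifford group by closure under H, S
group = {canon(np.eye(2)): np.eye(2, dtype=complex)}
frontier = list(group.values())
while frontier:
    nxt = []
    for g in frontier:
        for m in (H, S):
            key = canon(m @ g)
            if key not in group:
                group[key] = m @ g
                nxt.append(m @ g)
    frontier = nxt
assert len(group) == 24

ket00 = np.zeros(4, dtype=complex)
ket00[0] = 1
total = np.zeros((4, 4), dtype=complex)
for g in group.values():
    v = np.kron(g, g) @ ket00
    total += np.outer(v, v.conj())

swap = np.array([[1, 0, 0, 0], [0, 0, 1, 0], [0, 1, 0, 0], [0, 0, 0, 1]])
assert np.allclose(total, 4 * (np.eye(4) + swap))
```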

+",1837,,1837,,12/19/2022 12:59,12/19/2022 12:59,,,,2,,,,CC BY-SA 4.0 +4072,1,4076,,8/22/2018 7:47,,4,179,"

This is the QCSE version of What should you do if you spotted a non-trivial error in a highly cited paper? (maybe replace ""highly cited"" with ""moderately cited"" and ""a non-trivial"" with ""several minor"").

+ +

While going through the pre-print v2 as well as the published version of the paper `Quantum Circuit Design for Solving Linear Systems of Equations' by Cao et al., I found several errors in the paper:

+ +
    +
  1. In the published version the connections of the $e^{iAt/2^i}$ gates are connected to clock register in the wrong order (Figure 4).

  2. +
  3. In the pre-print the gates $e^{-iAt/2^i}$ should actually have been $e^{iAt/2^i}$ (Figure 4).

  4. +
  5. In the pre-print the gate decomposition of $e^{iAt/2^i}$ is wrong. The last $Z$ gate must have been a controlled $Z$.

  6. +
  7. The $5$ coefficients of the gate decompositions are wrong in both the pre-print and published version. Only the coefficients given for $e^{iAt/16}$ are fine but the rest they have to be found by some method of multivariable optimization (this was implemented by @Nelimee in QISKit and I had verified it)

  8. +
  9. No SWAP gate is required in the circuit, as explained by @DaftWullie here.

  10. +
  11. They skipped most explanations of why they chose the specific form of matrix $A$ in the paper, and everything about the scaling required.

  12. +
+ +

Anyhow, this paper was essentially what I worked on, through the summer and I need to write a report on what I did, which might be put up on arXiv (and maybe for publishing, probably in QIP, later on, if I can think of sufficiently original material).

+ +

Now, I'm not sure how the quantum computing academic community looks at this type of ""correction paper"". So, basically, is it ethical to write up a correction paper like this (which doesn't correct a ""huge"" mistake in a ""highly cited"" paper but rather several small mistakes in a ""moderately cited"" paper), or is it highly frowned upon? In case the latter is true, I'll probably avoid putting it up on arXiv and wait till I can come up with sufficiently original additions to the paper (like extending it to higher dimensions and making the circuit more general).

+",26,,26,,8/22/2018 8:19,8/22/2018 18:04,Ethics: Publishing a corrected version of a moderately cited paper having several minor errors,,1,0,,,,CC BY-SA 4.0 +4073,2,,4066,8/22/2018 15:48,,11,,"

Edit —

+ +
    +
  1. I've revised this answer to make some small improvements, for instance tidying up the commands for drawing the wires, because it seemed worthwhile.

  2. +
  3. Flattering as it is to have this answer be the accepted one for the time being, I think I should point out that the quantikz package (see Daftwullie's answer below) and the qpic package (as pointed out in cnada's answer below) are both libraries with reasonably complete interfaces, and so better for people looking for a quick and simple solution. The code below is probably more suitable for people who are comfortable with TiKZ, and might like to tweak their circuit diagrams with TiKZ commands, but who wouldn't mind having some macros to streamline drawing their diagrams. — Nice as it might be, for the moment I have no ambition to write a LaTeX package to make these macros available with a nice interface for all purposes (but anyone else who would like to is welcome, provided they give me some of the credit).

  4. +
+ +

Snippet

+ +

See below for all of the code used to generate this example: the following commands are just the ones used to draw the circuit itself. (This snippet involves macros which I have defined for the purpose of this post, which I also define below.)

+ +
  % define initial positions of the quantum wires
+  \xdef\dy{1.25}
+  \defwire (A) at (0);
+  \defwire (B) at ({-\dy});
+  \defwire (C) at ({-2*\dy});
+  % draw wires
+  \xdef\dt{0.8}
+  \drawwires [\dt] (15);
+  \node  at ($(B-0)!0.5!(B-1)$) {$/$};
+  \node  at ($(C-0)!0.5!(C-1)$) {$/$};
+  % draw gates
+  \gate     (B-2)         [H^{\otimes n}];
+  \ctrlgate (B-3) (C-3)   [U];
+  \virtgate (A-3);
+  \gate     (B-4)         [\mathit{FT}^\dagger];
+  \ctrlgate (B-7) (A-7)   [R];
+  \virtgate (C-7);
+  \gate     (B-10)        [\mathit{FT}];
+  \ctrlgate (B-11) (C-11) [U^\dagger];
+  \gate     (B-12)        [H^{\otimes n}];
+  \virtgate (A-12);
+  \meas     (A-14)        [Z];
+  % draw input and output labels
+  \inputlabel  (A-0)  [\lvert 0 \rangle];
+  \inputlabel  (B-0)  [\lvert 0 \rangle^{\otimes n}];
+  \inputlabel  (C-0)  [\lvert b \rangle];
+  \outputlabel (A-15) [\lvert 1 \rangle];
+  \outputlabel (B-15) [\lvert 0 \rangle^{\otimes n}];
+  \outputlabel (C-15) [\lvert x \rangle];
+
+ +

Result

+ +

+ +

Preamble

+ +

You will need a pre-amble which contains at least the amsmath package, as well as the tikz package. You may not need all of the tikz libraries below, but they don't hurt. Be sure to include the commands involving layers.

+ +
\documentclass[a4paper,10pt]{article}
+
+\usepackage{amsmath}
+\usepackage{tikz}
+\usetikzlibrary{shapes,arrows,calc,positioning,fit}
+\pgfdeclarelayer{background}
+\pgfsetlayers{background,main}
+
+ +

For the purposes of this post, I've defined some ad-hoc macros to make reading the coded circuit easier for public consumption. (The macro format is not exactly good LaTeX practise, but I define them this way in order for the syntax to be more easily read and for it to stand out.) The parameters for dimensions in these gates were chosen to look good in your sample-circuit, and were found by trial-and-error: you can change them to change the appearance of your circuit.

+ +

The first is a simple macro to draw a gate.

+ +
\def\gate (#1) [#2]{%
+  \node [
+    draw=black,fill=white, inner sep=0pt,
+    minimum width=2.5em, minimum height=2em, outer sep=1ex
+  ] (#1) at (#1) {$#2$}%
+}
+
+ +

The second is a macro to draw an 'invisible' gate. This is not really a command which is important for the circuit itself, but helps for the placement of background frames.

+ +
\def\virtgate (#1){%
+  \node [
+    draw=none, fill=none,
+    minimum width=2.5em, minimum height=2em, outer sep=1ex
+  ] (#1) at (#1) {};
+}
+
+ +

The third is a macro to draw a controlled gate. This command works well enough for your example circuit, but doesn't allow you to draw a CNOT. (Exercise for the reader proficient in TiKZ: make a \CNOT command.)

+ +
\def\ctrlgate (#1) (#2) [#3]{%
+  \filldraw [black] (#1) circle (2pt) -- (#2);
+  \gate (#2) [#3]
+}
+
+ +

The fourth is a macro to draw a ""measurement"" box. I think it is perfectly reasonable to want to specify an explicit basis or observable for the measurement, so I allow an argument to specify that.

+ +
\def\meas (#1) [#2]{%
+  \node [
+    draw=black, fill=white, inner sep=2pt,
+    label distance=-5mm, minimum height=2em, minimum width=2em
+  ] (meas) at (#1) {};
+  \draw ($(meas.south) + (-.75em,1.5mm)$) arc (150:30:.85em);
+  \draw ($(meas.south) + (0,1mm)$) -- ++(.8em,1em);
+  \node [
+    anchor=north west, inner sep=1.5pt, font=\small
+  ] at (meas.north west) {#2};
+}
+
+ +

I define two short macros to produce the labels for the inputs and outputs of wires.

+ +
\def\inputlabel (#1) [#2]{%
+  \node at (#1) [anchor=east] {$#2$}
+}
+\def\outputlabel (#1) [#2]{%
+  \node at (#1) [anchor=west] {$#2$}
+}
+
+ +

The macros above are all looking for co-ordinates at which to place the gates. I also define macros to define ""wires"", which have regularly spaced co-ordinates where gates can be located. +The first is a macro which allows you to define a named wire (such as A, B, x3, etc.) and its vertical position in the circuit diagram (these diagrams are left-to-right by default, which you can change most easily using the rotate option of the tikzpicture environment.)

+ +
\def\defwire (#1) at (#2){%
+  \ifx\qmwires\empty
+    \edef\qmwires{#1}%
+  \else
+    \edef\qmwires{\qmwires,#1}%
+  \fi
+  \coordinate (#1-0) at ($(0,#2)$)%
+}
+
+ +

Having defined a collection of wires, the following command then draws all of them, starting from the same left-most starting point and ending at the same right-most ending point, with increments by a fixed amount (given in the square brackets) and for a given number of time slices. This defines a sequence of 'time-slice' co-ordinates for each wire: for a wire A, it defines the co-ordinates A-0, A-1, and so forth up until A-t (where t is the value of the second argument).

+ +
\def\drawwires [#1] (#2);{%
+  \xdef\u{0}
+  \foreach \t in {0,...,#2} {%
+    \foreach \l in \qmwires {%
+      \coordinate (\l-\t) at ($(\l-\u) + (#1,0)$);
+      \draw (\l-\u) -- (\l-\t);
+    }
+    \xdef\u{\t}
+  }
+}
+
+ +

The final macro is one to draw a background frame for different stages in your circuit. It takes an argument specifying which gates (including the invisible virtual 'gates') are meant to belong to the frame.

+ +
\def\bgframe [#1]{%
+  \node [%
+    draw=black, fill=yellow!40!gray!30!white, fit=#1
+  ] {}%
+}
+
+ +

The circuit diagram itself

+ +

Now to begin drawing your circuit.

+ +
\begin{document}
+\begin{tikzpicture}
+
+ +

We start by defining the relative positions of the wires. (For convenience, I do this using a macro to define the spacing between them, that I can quickly change to adjust the spacing.) Below, I define three wires: A, B, and C.

+ +
  \let\qmwires\empty
+  % define initial positions of the quantum wires
+  \xdef\dy{1.25}
+  \defwire (A) at (0);
+  \defwire (B) at ({-\dy});
+  \defwire (C) at ({-2*\dy});
+
+ +

We now draw the circuit, using the command to draw the wires and define the co-ordinates on the wire, and placing gates independently of one another according to those co-ordinates.

+ +
  % draw circuit
+  \xdef\dt{0.8}
+  \drawwires [\dt] (15);
+  \node  at ($(B-0)!0.5!(B-1)$) {$/$};
+  \node  at ($(C-0)!0.5!(C-1)$) {$/$};
+  \gate     (B-2)         [H^{\otimes n}];
+  \ctrlgate (B-3) (C-3)   [U];
+  \virtgate (A-3);
+  \gate     (B-4)         [\mathit{FT}^\dagger];
+  \ctrlgate (B-7) (A-7)   [R];
+  \virtgate (C-7);
+  \gate     (B-10)        [\mathit{FT}];
+  \ctrlgate (B-11) (C-11) [U^\dagger];
+  \gate     (B-12)        [H^{\otimes n}];
+  \virtgate (A-12);
+  \meas     (A-14)        [Z];
+  % draw input and output labels
+  \inputlabel  (A-0)  [\lvert 0 \rangle];
+  \inputlabel  (B-0)  [\lvert 0 \rangle^{\otimes n}];
+  \inputlabel  (C-0)  [\lvert b \rangle];
+  \outputlabel (A-15) [\lvert 1 \rangle];
+  \outputlabel (B-15) [\lvert 0 \rangle^{\otimes n}];
+  \outputlabel (C-15) [\lvert x \rangle];
+
+ +

Annotations for the circuit

+ +

The rest of the circuit diagram is literally commentary. We can do this using a combination of plain-old TiKZ nodes, and the \bgframe macro which I defined above. (Annotations are a little less predictable, so I don't have a good way of making them as systematic as the earlier parts of the circuit, so general TiKZ commands are a reasonable approach unless you know how to make your annotations uniform.)

+ +

First the annotations for the stages of the circuit:

+ +
  % draw annotations
+  \node [minimum height=4ex] (annotate-1) at ($(A-3) + (0,1)$)
+        {\textit{Phase estimation}};
+  \node [minimum height=4ex] (annotate-2) at ($(A-7) + (0,1)$)
+        {\textit{$\smash{R(\tilde\lambda^{-1}})$ rotation}};
+  \node [minimum height=4ex] (annotate-3) at ($(A-11) + (0,1)$)
+        {\textit{Uncompute}};
+  \node (annotate-a) at ($(C-3) + (0,-1.25)$)  {\textit{(a)}}; 
+  \node (annotate-b) at ($(C-7) + (0,-1.25)$)  {\textit{(b)}}; 
+  \node (annotate-c) at ($(C-11) + (0,-1.25)$) {\textit{(c)}};
+
+ +

Next, the annotations for the registers, at the input:

+ +
  \node (A-in-annotate) at ($(A-0) + (-3em,0)$) [anchor=east]
+    {\parbox{4.5em}{\centering  Ancilla register $S$ }};
+  \node (B-in-annotate) at ($(B-0) + (-3em,0)$) [anchor=east]
+    {\parbox{4.5em}{\centering Clock \\ register $C$ }};
+  \node (C-in-annotate) at ($(C-0) + (-3em,0)$) [anchor=east]
+    {\parbox{4.5em}{\centering Input \\ register $I$ }};
+
+ +

Finally, the frames for the stages of the circuit.

+ +
  % draw frames for stages of the circuit
+  \begin{pgfonlayer}{background}
+    \bgframe [(annotate-1)(B-2)(B-4)(C-3)];
+    \bgframe [(annotate-2)(B-7)(C-7)];
+    \bgframe [(annotate-3)(B-10)(B-12)(C-11)];
+  \end{pgfonlayer}
+
+ +

And that's the end of the circuit.

+ +
\end{tikzpicture}
+\end{document}
+
+",124,,124,,09-07-2018 13:09,09-07-2018 13:09,,,,9,,,,CC BY-SA 4.0 +4074,1,4077,,8/22/2018 16:05,,11,1284,"

The Variational Quantum Eigensolver is a popular algorithm in quantum computing, but the ansatz part is very tricky. I do not really understand whether ansätze are built on some intuition, tailored to the hardware, or something else; or whether it is just a trial-and-error approach.

+ +

What do you think about it?

+",4127,,26,,12/13/2018 20:04,12/13/2018 20:04,Is there an intuition built on ansatz in VQE algorithm or is it more a trial and error approach?,,1,0,,,,CC BY-SA 4.0 +4075,2,,4063,8/22/2018 17:31,,2,,"
+

direct definition is a quantum register that stores a real number defined by an observable with a spectrum consisting of ℝ

+
+ +

Yes, qubits can be used to discretize a continuous quantum system.

+ +
+

How?

+
+ +

Let's say $\Phi$ is an operator on the continuous system which we want to simulate using qubits. We need a discrete register ($D$) to store the qubit values. The discretization is done by measuring the shifts between the eigenstates of $\Phi$, described by an operator, say $M$. This is done by the DFT. +$$ +M = F^*\Phi F +$$

+ +

Where $F$ is the DFT operator.

+ +
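In a discrete toy version of this relation (my own illustration with plain NumPy, not from the thesis below), the operator of shifts between eigenstates is the cyclic shift $S$, and conjugating it with the DFT matrix $F$ yields a diagonal ("position-like") operator — the finite analogue of $M = F^*\Phi F$:

```python
import numpy as np

d = 8  # dimension of the discrete register

# Cyclic shift operator: S|k> = |k+1 mod d>
S = np.roll(np.eye(d), 1, axis=0)

# DFT matrix F[j, k] = exp(2*pi*i*j*k/d) / sqrt(d)
j, k = np.meshgrid(np.arange(d), np.arange(d), indexing="ij")
F = np.exp(2j * np.pi * j * k / d) / np.sqrt(d)

# Conjugating the shift by the DFT gives a diagonal phase operator
M = F.conj().T @ S @ F
off_diagonal = M - np.diag(np.diag(M))
print(np.max(np.abs(off_diagonal)))  # ~0 up to floating-point error
```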

Now all this assumes that you already have a register which stores the continuous quantum output of a continuous system. ""What is that register?"" is, I believe, your original question. For that, you can take the example of cavity QED.

+ +

A typical quantum system with a continuous degree of freedom is the quantum field and normally, quantum simulations of quantum field theories rely on discretisation of this field.

+ +

A cavity QED output, by contrast, is a continuous quantum field, and this acts as a continuous quantum register. This thesis has even proposed an algorithm to be implemented directly on this continuous quantum field, discretizing only after the implementation, thus exploiting the underlying properties of the continuous system.

+",419,,,,,8/22/2018 17:31,,,,3,,,,CC BY-SA 4.0 +4076,2,,4072,8/22/2018 17:48,,6,,"

When you believe there are errors in a paper, you have the opportunity to publish a "comment" on the paper, in the same journal in which the paper was originally published. The paper to which you refer was published in Molecular Physics, and here is an example of a "comment" published in that very same journal in 2002, about a paper that was originally published in 1968.

+

However, be careful when publishing a comment. You are publicly saying that they have made errors in a piece of work on which they have spent a lot of time and energy. Before contacting the journal, you should notify the authors privately. Instead of telling them that you have found errors in their paper, ask them if your suggestions are correct. For example:

+
+

Dear Prof. Kais,
+I found your paper "Quantum circuit design for solving linear systems of equations" very interesting; it was the subject of my summer project, for which I now have to write a report. While working through the paper, the following 6 points came up, which I would like to clarify with you: [here you can point out the 6 things you listed in your question, but I would recommend never to say they are "wrong" or they "skipped" something, but instead just say something like "I think the $Z$ should be a $cZ$, am I correct?"].

+

Colleagues have suggested that I publish a Comment on your article in Molecular Physics, but I would like to consult you on it first.

+

Most respectfully yours,
+Blue

+
+

I have written an email like this before, and the author's response indicated to me that he agreed with me (on a much more profound error than the above 6 points: he had claimed something is in QMA but there was an error in his proof of that so it is still an open question whether or not it is in QMA), but it was clear from his response that he would be very angry at me if I published a Comment. I decided to swallow my pride and not publish the Comment. One day he might be the referee for one of my papers, or be the examiner for one of my grant proposals, and I do not want him to have bitterness towards me for publicly exposing that he made this mistake.

+

This author suggested that the two of us publish an "Erratum" instead of a "Comment", so he could be a co-author on the paper saying that there was an error in the original, but in the end we didn't even do that. Comment and Erratum papers, like the one you may be considering writing in this case, are not so helpful for your CV anyway. They are not regarded at nearly the same level as an "original" paper. I have never published a Comment, and I only know a small number of people who have. The important thing is that you now know the problems with the original paper and the corrections to those problems, and it's up to the original authors whether or not they want to publish an Erratum.

+
+

There is also a journal where you can publish a re-analysis of other people's papers, which most people don't know about. It's the "Analysis" section in the journal Nature. They allow you to publish a re-analysis of other people's already published work, but they do this very rarely. The only example I know is this paper. Nature is not likely to publish the 6 minor things that you point out, because the standards in Nature are very high and you would have to find a much more profound re-analysis of the original paper rather than just 6 minor errors.

+",2293,,-1,,6/18/2020 8:31,8/22/2018 18:04,,,,0,,,,CC BY-SA 4.0 +4077,2,,4074,8/22/2018 18:20,,7,,"

VQE can be used for many things. The most popular application of VQE is for the quantum chemistry problem, as in this paper, where they are trying to find the ground state wavefunction of a molecular Hamiltonian (i.e. the VQE is trying to find the eigenvector with the smallest eigenvalue/energy). Here you can see that they suggest a unitary coupled cluster (UCC) ansatz. The reason they choose UCC is because it is well-known that coupled cluster already gives a very good approximation of the ground state wavefunction, in fact it is the basis for what chemists call the ""gold standard of quantum chemistry"".

+ +

Remember VQE is a heuristic. The better the ansatz that you start with, the more likely your VQE will perform well. As you correctly said in your question, you can use intuition, or trial-and-error, or just use any knowledge you have of the problem to come up with something that you believe will work well (as in the case of using a coupled cluster ansatz for the problem where coupled cluster is already considered ""the gold standard"" for people solving the problem on classical computers).

+ +

There is no general recipe for how to come up with the ansatz for VQE which will universally work well on every VQE problem, and that is why VQE is called a ""heuristic"".
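To make the "heuristic" point concrete, here is a deliberately tiny sketch of my own (the Hamiltonian and the one-parameter ansatz are made up purely for illustration, not taken from the paper): a good ansatz is one whose parameter landscape actually contains the ground state, and the classical outer loop just searches that landscape.

```python
import numpy as np

# Toy single-qubit Hamiltonian (illustrative choice): H = Z + 0.5 X
H = np.array([[1.0, 0.5],
              [0.5, -1.0]])

def ansatz(theta):
    """One-parameter ansatz R_y(theta)|0> = cos(theta/2)|0> + sin(theta/2)|1>."""
    return np.array([np.cos(theta / 2), np.sin(theta / 2)])

def energy(theta):
    psi = ansatz(theta)
    return psi @ H @ psi  # <psi|H|psi> (real state, real symmetric H)

# Classical outer loop: a simple grid search over the single parameter
thetas = np.linspace(0, 2 * np.pi, 2001)
best = min(energy(t) for t in thetas)

exact = np.linalg.eigvalsh(H)[0]  # exact ground-state energy
print(best, exact)  # here the ansatz family contains the ground state
```

For this real symmetric $H$ the $R_y$ family covers all real single-qubit states, so the minimum over $\theta$ reaches the exact ground-state energy; with a poorly chosen ansatz it would not.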

+",2293,,,,,8/22/2018 18:20,,,,0,,,,CC BY-SA 4.0 +4078,1,4090,,8/23/2018 22:34,,15,10581,"

For the implementation of a certain quantum algorithm, I need to construct a multi-qubit (in this case, a three-qubit) controlled-Z gate from a set of elementary gates, as shown in the figure below. + +.

+ +

The gates that I can use are

+ +
    +
  • the Pauli gates $\rm X, Y, Z$ and all their powers (i.e. all Pauli rotations up to a phase factor),
  • +
  • ${\rm exp}(i\theta|11\rangle\langle11|)$ (rotation about $|11\rangle\langle11|$ projector),
  • +
  • $\rm H$ (Hadamard),
  • +
  • $\rm C_X$ (single-qubit controlled-X or CNOT),
  • +
  • $\rm C_Z$ (single-qubit controlled-Z), and
  • +
  • $\rm S$ (SWAP).
  • +
+ +

How can I go about building this three-qubit controlled-Z from these gates? I have read several papers on circuit decompositions, but none of them could give me a clear and concise answer.

+",2687,,26,,12/23/2018 13:12,12/23/2018 13:12,How to construct a multi-qubit controlled-Z from elementary gates?,,6,2,,,,CC BY-SA 4.0 +4079,1,4080,,8/23/2018 22:53,,8,231,"

We know that the Wigner function of a Gaussian quantum state is (up to a constant) a Gaussian distribution. The first moment and the covariance of this distribution uniquely specify a quantum state. Therefore a Wigner function uniquely determines a Gaussian state.

+ +

Are there any similar statements applying to non-Gaussian states?

+",1581,,55,,7/16/2020 9:22,7/16/2020 9:22,Does a Wigner function uniquely determine a quantum state?,,1,0,,,,CC BY-SA 4.0 +4080,2,,4079,8/23/2018 23:18,,8,,"

For any quantum state, we have a unique density matrix $\rho$.
+For any $\rho$, we can do the Wigner transformation to get a unique Wigner function $P(x,p)$.
+For any Wigner function $P(x,p)$, we can do the Weyl transformation to get back the unique $\rho$.
+If the construction of the Wigner function from $\rho$ was not unique, then it would not be possible to define an inverse transformation (but we do have an inverse transformation, namely the Weyl transformation, so the Wigner transformation does generate a unique characterization of a quantum state).

+ +
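A finite-dimensional analogue of this invertibility (my own illustration, not from the references above) is the qubit Bloch representation: the three Pauli expectation values play the role of a phase-space description, and they determine $\rho$ uniquely via $\rho = (I + \vec{r}\cdot\vec{\sigma})/2$:

```python
import numpy as np

I = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0 + 0j, -1.0])

# A random density matrix (Hermitian, positive, trace 1)
rng = np.random.default_rng(7)
A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
rho = A @ A.conj().T
rho /= np.trace(rho)

# "Forward transform": the Bloch vector of Pauli expectation values
r = np.array([np.trace(rho @ P).real for P in (X, Y, Z)])

# "Inverse transform": reconstruct the state from those expectation values
rho_back = (I + r[0] * X + r[1] * Y + r[2] * Z) / 2

print(np.max(np.abs(rho - rho_back)))  # ~0: the representation is invertible
```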

It has also been pointed out on the Physics Stack Exchange that the Wigner function contains all information about a quantum state, just like the density matrix.

+",2293,,,,,8/23/2018 23:18,,,,1,,,,CC BY-SA 4.0 +4081,2,,4078,8/23/2018 23:33,,6,,"

You can implement an $n$-qubit controlled $U$ by the circuit given in this answer. Just replace $U$ by $Z$. However this requires CCNOT (Toffoli) gates, and you have some options for how to implement CCNOT using elementary gates.
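For the case asked about ($U=Z$), the compute–apply–uncompute structure of that circuit can be sanity-checked with a small state-vector simulation (my own sketch, not any particular library): two Toffolis AND the three controls into two work qubits, a CZ applies the phase, and the Toffolis are undone.

```python
import numpy as np
from itertools import product

n = 6  # qubit layout: c0, c1, c2 (controls), t (target), a0, a1 (work qubits)

def apply_ccnot(state, c1, c2, t):
    """Toffoli acting on a full state vector; qubits are bit positions."""
    new = np.zeros_like(state)
    for x in range(2 ** n):
        if (x >> c1) & 1 and (x >> c2) & 1:
            new[x ^ (1 << t)] += state[x]
        else:
            new[x] += state[x]
    return new

def apply_cz(state, c, t):
    new = state.copy()
    for x in range(2 ** n):
        if (x >> c) & 1 and (x >> t) & 1:
            new[x] = -new[x]
    return new

results = []
for bits in product([0, 1], repeat=4):
    c0, c1, c2, t = bits
    x = c0 | (c1 << 1) | (c2 << 2) | (t << 3)  # work qubits (bits 4, 5) in |0>
    state = np.zeros(2 ** n)
    state[x] = 1.0

    state = apply_ccnot(state, 0, 1, 4)  # a0 <- c0 AND c1
    state = apply_ccnot(state, 2, 4, 5)  # a1 <- c2 AND a0
    state = apply_cz(state, 5, 3)        # phase -1 iff all controls and target are 1
    state = apply_ccnot(state, 2, 4, 5)  # uncompute a1
    state = apply_ccnot(state, 0, 1, 4)  # uncompute a0

    results.append((bits, state[x]))

print(all(np.isclose(ph, -1.0 if all(b) else 1.0) for b, ph in results))  # True
```

Every computational basis state picks up a $-1$ phase exactly when all four qubits are $1$, and the work qubits return to $|0\rangle$, i.e. the ladder implements CCCZ.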

+",2293,,,,,8/23/2018 23:33,,,,4,,,,CC BY-SA 4.0 +4082,1,,,8/24/2018 1:48,,7,443,"

I was reading papers on Randomized Benchmarking, such as this and this. +(more specifically, equation 30 in the second paper)

+ +

It appears to be some kind of averaging but I would like to have a more intuitive and physical picture of what it actually represents in terms of measurements.

+ +

(I know very little math, especially in terms of group representation)

+",4399,,26,,8/25/2018 8:08,8/25/2018 8:08,Physical meaning of twirling in Randomized Benchmarking,,2,0,,,,CC BY-SA 4.0 +4083,2,,4082,8/24/2018 5:02,,1,,"

An average over conjugations is known as a “twirl”. The “twirling” operation originates from invariant theory (where it is sometimes called “transfer homomorphism”). Twirling a quantum channel over $P_1^{⊗n}$, $C_1^{⊗n}$ or $C_n$ takes it to one described by a polynomial number of parameters.

+ +
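As a concrete numerical illustration of this parameter reduction (my own sketch, not taken from the papers): twirling a single-qubit channel over the Pauli group turns it into a Pauli channel, so each Pauli operator is mapped to a multiple of itself.

```python
import numpy as np

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1, -1]).astype(complex)
paulis = [I, X, Y, Z]

# Example channel: amplitude damping with gamma = 0.3 (illustrative choice)
g = 0.3
K = [np.array([[1, 0], [0, np.sqrt(1 - g)]]),
     np.array([[0, np.sqrt(g)], [0, 0]])]

def channel(rho):
    return sum(k @ rho @ k.conj().T for k in K)

def twirled(rho):
    """Average P^dag E(P rho P^dag) P over the single-qubit Pauli group."""
    return sum(P.conj().T @ channel(P @ rho @ P.conj().T) @ P
               for P in paulis) / 4

# A Pauli channel maps each Pauli operator to a multiple of itself
for Q in paulis:
    out = twirled(Q)
    coeff = np.trace(Q.conj().T @ out) / 2  # component along Q
    assert np.allclose(out, coeff * Q, atol=1e-12)
print("twirled channel is a Pauli channel")
```

The twirled channel is therefore fully described by the four surviving coefficients (one per Pauli), rather than the full set of parameters of the original channel.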

The twirling operation will be useful if it preserves, at least partially, properties of the original channel. Specifically, one would hope that correctable codes of the twirled channel resemble those of the original channel.

+ +

See page 31 of ""Gaining Information About a Quantum Channel Via Twirling"" (Jul 31 2008) by Easwar Magesan.

+ +

More in depth references:

+ +

""Characterization of addressability by simultaneous randomized benchmarking"" (Jan 2 2013), by Jay M. Gambetta, A. D. Corcoles, S. T. Merkel, B. R. Johnson, John A. Smolin, Jerry M. Chow, Colm A. Ryan, Chad Rigetti, S. Poletto, Thomas A. Ohki, Mark B. Ketchen, and M. Steffen.

+ +

""Evenly distributed unitaries: on the structure of unitary designs"" (May 13 2007), by D. Gross, K. Audenaert, and J. Eisert.

+ +

""Exact and Approximate Unitary 2-Designs: Constructions and Applications"" (Aug 31 2012), by Christoph Dankert, Richard Cleve, Joseph Emerson, and Etera Livine.

+",278,,,,,8/24/2018 5:02,,,,1,,,,CC BY-SA 4.0 +4084,2,,4082,8/24/2018 7:59,,4,,"

I think the way that it is used in deriving bounds on quantum cloning is quite insightful, and hopefully gives you a flavour of the broader context. Where possible I'll skip some of the details in favour of description.

+ +

Imagine you want to clone an unknown one-qubit quantum state $|\psi\rangle$, making two copies. We know that this is impossible to do perfectly, but you might want to quantify how close you can get. So, let's theorise that there's some map $\rho=\mathcal{E}(|\psi\rangle\langle\psi|)$ that you're going to implement, producing a two-qubit output. The cloning fidelity in this case is +$$ +F=\text{Tr}\left((|\psi\rangle\langle\psi|\otimes\mathbb{I}+\mathbb{I}\otimes|\psi\rangle\langle\psi|)\rho\right)/2 +$$ +Skipping over the details, including the Choi-Jamiolkowski isomorphism, it turns out that you can relate the optimal choice of $\rho$ to the eigenvector with maximum eigenvalue of the operator +$$ +|\psi\rangle\langle\psi|^\star\otimes(|\psi\rangle\langle\psi|\otimes\mathbb{I}+\mathbb{I}\otimes|\psi\rangle\langle\psi|). +$$ +Actually, this is straightforward to calculate: the maximum eigenvector is $|\psi\rangle^\star|\psi\rangle|\psi\rangle$, and has eigenvalue 1. i.e. the operation can be done perfectly. However, this formulation presupposes that we know the state we're trying to clone (and obviously, if you know it, you can make arbitrarily many copies). We somehow have to quantify the fact that we don't know what the state is to be cloned.

+ +

For this, we need to take a Bayesian approach. If we don't know the state, we assign a probability to each of the possible states. In that case, the expected fidelity of the transformation is $\bar F$, the maximum eigenvalue of the operator +$$ +R=\sum_{\psi}p_{\psi}|\psi\rangle\langle\psi|^\star\otimes(|\psi\rangle\langle\psi|\otimes\mathbb{I}+\mathbb{I}\otimes|\psi\rangle\langle\psi|) +$$ +You might know something about the possible input state. For example, if you know it's either $|0\rangle$ or $|1\rangle$ with equal probability, you still find the maximum eigenvalue of 1 (which makes sense: you can measure the input state in the Z basis and make as many copies as you want). At the other extreme, if you know nothing about the state at all, you have to average over all single-qubit pure states with equal weights. What is the right way to do this? It's a bit messy as you can perhaps see from looking at the Bloch sphere: + +We need to take an average of all points on the surface of the sphere.

+ +

One way to achieve this is to say that every $|\psi\rangle=U|0\rangle$ for some $U$, and we average over all possible unitaries, where the correct averaging is determined by the Haar measure. So, we can change our operator $R$ into +$$ +R=\int U^\star\otimes U\otimes U\left(|0\rangle\langle0|^\star\otimes(|0\rangle\langle0|\otimes\mathbb{I}+\mathbb{I}\otimes|0\rangle\langle0|)\right)U^T\otimes U^\dagger\otimes U^\dagger dU +$$ +This is basically the twirling operation (except for the slight technicality that there's a complex conjugate on one of the qubits), and certainly captures the intuition of how/why it comes into things.

+",1837,,,,,8/24/2018 7:59,,,,0,,,,CC BY-SA 4.0 +4085,2,,3890,8/24/2018 13:09,,2,,"

Entanglement is neither an operation nor a state; it is a concept describing a particular family of states. Here is a short explanation of this concept.

+ +

Let's limit ourselves to 2 qubits stored in 2 registers, A and B:

+ +
    +
  • case 1: you independently prepare qubit A in some state (a1|0> + a2|1>), then you do the same with qubit B (b1|0> + b2|1>). In that case the state of your total system is described as the concatenation of the state of system A and the state of system B. This case is intuitive; there is no entanglement between the registers A and B -> we call it a separable state. We can write the state of the system as a1*b1|00> + a1*b2|01> + a2*b1|10> + a2*b2|11> (the tensor product of the states on A and B)
  • +
+ +

In quantum mechanics, there exist other states of the total system AB that cannot be described as the concatenation of the states of the subsystems A and B. This phenomenon has no equivalent in classical mechanics and is not very intuitive. This leads us to:

+ +
    +
  • case 2: you start with some state k1|0> + k2|1> on register A and |0> on register B, then you apply a quantum operator known as CNOT that will ""expand"" the contents of the two superposition terms of register A onto register B. The resulting state k1|00> + k2|11> on system AB cannot be written as a product (tensor product) of two separate states on A and B. We call it an entangled state.
  • +
+ +

One particularity of entangled states is how they behave with respect to measurement: in an entangled state, the outcome of a measurement on one of the subsystems can influence the outcome of a measurement on the other. For example, with the state in case 2, you have:

+ +
    +
  1. 50% chance of measuring 1 on register A
  2. 50% chance of measuring 1 on register B
  3. but if you have already measured register A and you have gotten 1, then you have a 100% chance of measuring 1 on register B
  4. +
+ +

This last item ""3"" would not be true if the state wasn't entangled, in which case the outcome of the measurement on register A couldn't have influenced the probabilities of the outcome on register B.
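The two cases above can be checked directly with a small state-vector calculation (a sketch of my own; the amplitudes k1 = k2 = 1/√2 are chosen to match the 50% figures):

```python
import numpy as np

# Register A in k1|0> + k2|1>, register B in |0>
k1 = k2 = 1 / np.sqrt(2)
psi_A = np.array([k1, k2])
psi_B = np.array([1.0, 0.0])
state = np.kron(psi_A, psi_B)          # separable state on AB, ordering |ab>

# CNOT with A as control, B as target ("expands" A's terms onto B)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])
state = CNOT @ state                   # k1|00> + k2|11>, an entangled state

p = np.abs(state) ** 2                 # probabilities of |00>, |01>, |10>, |11>
p_A1 = p[2] + p[3]                     # P(A = 1)
p_B1 = p[1] + p[3]                     # P(B = 1)
p_B1_given_A1 = p[3] / p_A1            # P(B = 1 | A = 1)
print(p_A1, p_B1, p_B1_given_A1)       # ~0.5, ~0.5, 1.0
```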

+",4420,,,,,8/24/2018 13:09,,,,0,,,,CC BY-SA 4.0 +4086,1,,,8/24/2018 18:05,,13,1262,"

A recent question here asked how to compile the 4-qubit gate CCCZ (controlled-controlled-controlled-Z) into simple 1-qubit and 2-qubit gates, and the only answer given so far requires 63 gates!

+ +

The first step was to use the C$^n$U construction given by Nielsen & Chuang:

+ +

+ +

With $n=3$ this means 4 CCNOT gates and 3 simple gates (1 CNOT and 2 Hadamards are enough to do the final CZ on the target qubit and the last work qubit).

+ +

Theorem 1 of this paper says that in general the CCNOT requires 9 one-qubit and 6 two-qubit gates (15 total):

+ +

+ +
+ +

This means:

+ +

(4 CCNOTs) x (15 gates per CCNOT) + (1 CNOT) + (2 Hadamards) = 63 total gates.

+ +

In a comment, it has been suggested that the 63 gates can then be further compiled using an ""automatic procedure"", for example from the theory of automatic groups.

+ +

How can this ""automatic compilation"" be done, and how much would it reduce the number of 1-qubit and 2-qubit gates in this case?

+",2293,,26,,12/13/2018 21:05,4/29/2019 9:03,Automatic compilation of quantum circuits,,2,7,,,,CC BY-SA 4.0 +4087,2,,4078,8/24/2018 18:39,,2,,"

While my other answer is the most obvious ""textbook"" way (using Nielsen & Chuang's CCCZ decomposition into CCNOTs, then another textbook decomposition to compile the CCNOTs), a more creative way might allow us to get the job done with fewer gates.

+ +

Step 1:

+ +

Replace all the CNOTs in Nielsen & Chuang's circuit with this gadget:

+ +

+ +
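The gadget is just the standard identity that conjugating the target of a CZ by Hadamards gives a CNOT; a quick numerical check of my own, using explicit matrices:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I = np.eye(2)
CZ = np.diag([1, 1, 1, -1])
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

# (I x H) CZ (I x H) = CNOT, with the first qubit as control
gadget = np.kron(I, H) @ CZ @ np.kron(I, H)
print(np.allclose(gadget, CNOT))  # True
```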

Step 2:

+ +

Now we have a bunch of CCZs instead of CCNOTs, and they can be decomposed like this (courtesy of this paper):

+ +

+ +

Step 3:

+ +

Note that $H^2 = I$, so some of these Hadamards cancel each other out and we get even more of a reduction :)

+",2293,,2293,,8/25/2018 19:00,8/25/2018 19:00,,,,1,,,,CC BY-SA 4.0 +4088,2,,4078,8/24/2018 19:35,,5,,"

I'm posting another decomposition of CCCZ here just in case it is useful for anyone else trying to compile CCCZ. It requires a smaller number of total gates, and only 1 auxiliary qubit instead of 2, but five more 2-qubit gates than the ""obvious"" answer, so it may actually be worse for implementation on hardware.

+ +

It was suggested by user @Rob in this comment: Automatic compilation of quantum circuits, and comes from this paper.

+ +

+ +

The GMS5$(\chi)$ gate is this:

+ +

+ +

with $n=5$ and all $\chi_{ij}=\chi$, which means it involves 10 two-qubit gates. These will then have to be compiled into the gate set given in the question, so this decomposition should only be used if you are trying to save on the number of auxiliary qubits, or if you don't mind having more 2-qubit gates in order to reduce the circuit depth by a bit.

+",2293,,,,,8/24/2018 19:35,,,,13,,,,CC BY-SA 4.0 +4089,2,,4086,8/24/2018 23:18,,4,,"

Using the procedure described in https://arxiv.org/abs/quant-ph/0303063$^1$, any diagonal gate -- and thus in particular the CCCZ gate -- can be decomposed in terms of e.g. CNOTs and one-qubit diagonal gates, where the CNOTs can be optimized on their own following a classical optimization procedure.

+ +

The reference provides a circuit using 16 CNOTs for arbitrary diagonal 4-qubit gates (Fig. 4).

+ +
+ +

Remark: While there might in principle be a simpler circuit (said circuit has been optimized with a more constrained circuit architecture in mind), it should be close to optimal -- the circuit needs to create all states of the form $\bigoplus_{i\in I}x_i$ for any non-trivial subset $I\subset\{1,2,3,4\}$, and there are 15 of those for 4 qubits.
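The counting in the remark is easy to verify (a trivial script of mine): the 15 states correspond to the 15 distinct parity functions on non-empty subsets of the 4 qubits.

```python
from itertools import combinations, product

qubits = range(4)
subsets = [s for r in range(1, 5) for s in combinations(qubits, r)]

def parity_table(subset):
    """Truth table of the parity of the bits in `subset` over all 4-bit inputs."""
    return tuple(sum(x[i] for i in subset) % 2
                 for x in product([0, 1], repeat=4))

tables = {parity_table(s) for s in subsets}
print(len(subsets), len(tables))  # 15 15: all parity functions are distinct
```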

+ +

Note also that this construction by no means needs to be optimal.

+ +
+ +

1 Note: I'm an author

+",491,,23,,11-01-2018 08:48,11-01-2018 08:48,,,,1,,,,CC BY-SA 4.0 +4090,2,,4078,8/24/2018 23:24,,8,,"

(EDIT: Improved to 14 CNOTs.)

+ +

It can be done with 14 CNOTs, plus 15 single-qubit Z rotations, and no auxiliary qubits.

+ +

The corresponding circuit is

+ +

+ +

where the $\fbox{$\pm$}$ gates are rotations +$$ +R_z(\pm\pi/16)\propto \left(\begin{matrix}1\\&e^{\pm i\pi/8} +\end{matrix}\right) +$$
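One way to see why 15 such rotations suffice (a sketch of mine of the standard phase-polynomial argument, not taken verbatim from the references): each $R_z(\pm\pi/16)$ acting on a parity bit imprints a relative phase $e^{\pm i\pi/8}$, and the AND of four bits expands over the 15 parities as $8\,x_1x_2x_3x_4 = \sum_{\emptyset\neq S} (-1)^{|S|+1} \bigoplus_{i\in S} x_i$, which can be brute-force checked:

```python
from itertools import combinations, product

n = 4
subsets = [s for r in range(1, n + 1) for s in combinations(range(n), r)]

for x in product([0, 1], repeat=n):
    lhs = 2 ** (n - 1) * x[0] * x[1] * x[2] * x[3]
    rhs = sum((-1) ** (len(S) + 1) * (sum(x[i] for i in S) % 2)
              for S in subsets)
    assert lhs == rhs
print("8*x1*x2*x3*x4 equals the signed sum of the 15 parities")
```

So accumulating a phase of $\pm\pi/8$ on each of the 15 parities reproduces the total phase $\pi\, x_1x_2x_3x_4$ of the CCCZ gate.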

+ +
+ +

Derivation:

+ +

Using the procedure described in https://arxiv.org/abs/quant-ph/0303063$^1$, any diagonal gate -- and thus in particular the CCCZ gate -- can be decomposed in terms of e.g. CNOTs and one-qubit diagonal gates, where the CNOTs can be optimized on their own following a classical optimization procedure.

+ +

The reference provides a circuit using 16 CNOTs for arbitrary diagonal 4-qubit gates (Fig. 4).

+ +

This can be improved to 14 CNOTs if arbitrary pairs of qubits can be coupled. For nearest neighbors with periodic (open) boundary conditions, this can be done with 16 (18) CNOTs. The corresponding circuits can be found in https://epub.uni-regensburg.de/1511/1, Fig. 5.2, 5.4, and 5.5, and can e.g. be obtained using methods to construct short Gray sequences.

+ +

The number of one-qubit gates is always 15.

+ +
+ +

Remark: While there might in principle be a simpler circuit (said circuit has been optimized with a more constrained circuit architecture in mind), it should be close to optimal -- the circuit needs to create all states of the form $\bigoplus_{i\in I}x_i$ for any non-trivial subset $I\subset\{1,2,3,4\}$, and there are 15 of those for 4 qubits.

+ +

Note also that this construction by no means needs to be optimal.

+ +
+ +

1 Note: I'm an author

+",491,,23,,11-01-2018 08:50,11-01-2018 08:50,,,,3,,,,CC BY-SA 4.0 +4091,2,,4078,8/25/2018 5:55,,6,,"

Here is a CCCZ construction that uses 29 gates:

+ +

+ +

If you're allowed to use measurement and classical feedforward, the gate count can be reduced to 25:

+ +

+ +

(The Hadamard gates can be replaced with square roots of Y if needed to meet the gate set constraint.)

+ +

And if you allow me to perform Controlled-S gates and Controlled-sqrt(X) gates and perform X basis measurements, then I can get it down to 10 gates total:

+ +

+",119,,119,,8/26/2018 17:38,8/26/2018 17:38,,,,18,,,,CC BY-SA 4.0 +4092,2,,4078,8/25/2018 6:27,,4,,"

There are some large savings that can be made based on the specified gate set. For example, in the typical CCNOT construction, if you replace the $T$ gate with $Z^{1/4}$, you don't need the phase correction that constitutes the last few gates between the two control qubits. This construction, which obeys the gate set specified in the question, consists of 21 gates, of which 10 are 2-qubit (you don’t need the last gate in the circuit below).

+ +

+ +

To be clear (in response to several comments): usually we look at Toffoli, and try to make it using the $T$ gate. If both controls are $|1\rangle$, then the gate sequence on the target qubit is $HXTXT^\dagger XTXT^\dagger H$. Now, since $XT^\dagger X=Te^{-i\pi/4}$, then the sequence simplifies to $-iHT^4H=-iX$, and one has to add a compensating controlled-S gate on the two control qubits. If, instead, we use $Z^{1/4}$, then $XZ^{-1/4}X=Z^{1/4}$, and none of those pesky phases come into it, and it saves you some two-qubit gates!

+ +
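Both identities can be verified numerically; in this sketch $Z^{1/4}$ is taken in the traceless convention $Z^{1/4}\propto R_z(\pi/4)$, i.e. $\mathrm{diag}(e^{-i\pi/8},e^{i\pi/8})$:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
T = np.diag([1, np.exp(1j * np.pi / 4)])
# traceless convention: Z^{1/4} = diag(e^{-i pi/8}, e^{i pi/8}), proportional to T
Zq = np.diag([np.exp(-1j * np.pi / 8), np.exp(1j * np.pi / 8)])

# X T^dagger X = e^{-i pi/4} T  -- the pesky phase
print(np.allclose(X @ T.conj().T @ X, np.exp(-1j * np.pi / 4) * T))  # True
# X (Z^{1/4})^dagger X = Z^{1/4}  -- no phase appears
print(np.allclose(X @ Zq.conj().T @ X, Zq))  # True
```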

Also, note that the two Toffoli gates are only Toffoli because they target the 0 state. Typically you would need an extra two-qubit gate.

+ +

I've not found as efficient a construction in existing literature, although this paper claims a construction using only 11 2-qubit gates, but I haven’t done a complete gate count once it’s converted into the question’s restricted gate set.

+",1837,,1837,,8/28/2018 7:25,8/28/2018 7:25,,,,6,,,,CC BY-SA 4.0 +4093,2,,4086,8/26/2018 19:06,,7,,"

Let $g_1 \cdots g_M$ be the basic gates that you are allowed to use. For these purposes, $\operatorname{CNOT}_{12}$, $\operatorname{CNOT}_{13}$, etc. are treated as separate generators. So $M$ depends polynomially on $n$, the number of qubits. The precise dependence involves details of the sorts of gates you use and how $k$-local they are. For example, if there are $x$ single-qubit gates and $y$ 2-qubit gates that don't depend on qubit order, like $CZ$, then $M = xn+\binom{n}{2}y$.

+ +
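That count is a one-liner (the numbers below are hypothetical, purely for illustration):

```python
from math import comb

def num_generators(n, x, y):
    # x single-qubit gate types, y order-symmetric two-qubit gate types (like CZ)
    return x * n + y * comb(n, 2)

print(num_generators(5, 4, 1))  # 4*5 + 1*10 = 30
```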

A circuit is then a product of those generators in some order. But there are multiple circuits that do nothing, like $\operatorname{CNOT}_{12} \operatorname{CNOT}_{12} = \mathrm{Id}$. Those give relations on the group. That is, it is a group presentation $\langle g_1 \cdots g_M \mid R_1 \cdots \rangle$ where there are many relations that we do not know.

+ +

The problem we wish to solve is: given a word in this group, what is the shortest word that represents the same element? For general group presentations, this is hopeless. The sorts of group presentations for which this problem is accessible are called automatic.

+ +

But we can consider a simpler problem. If we throw out some of the $g_i$, then the words from before become of the form $w_1 g_{i_1} w_2 g_{i_2} \cdots w_k$ where each of the $w_i$ are words only in the remaining letters. If we manage to make them shorter using the relations that don't involve the $g_i$, then we will have made the entire circuit shorter. This is akin to optimizing the CNOTs on their own, as in the other answer.

+ +

For example, if there are three generators and the word is $aababbacbbaba$, but we don't want to deal with $c$, we will instead shorten $w_1=aababba$ and $w_2=bbaba$ to $\hat{w}_1$ and $\hat{w}_2$. We then put them back together as $\hat{w}_1 c \hat{w}_2$ and that is a shortening of the original word.

+ +

So WLOG (without loss of generality), let's suppose we are in that problem already $\langle g_1 \cdots g_M \mid R_1 \cdots \rangle$ where we now use all the gates specified. Again this is probably not an automatic group. But what if we throw out some of the relations. Then we will have another group that has a quotient map down to the one we really want.

+ +

The group $\langle g_1 g_2 \mid - \rangle$ with no relations is a free group, but then if you put $g_1^2=\mathrm{id}$ as a relation, you get the free product $\mathbb{Z}_2 \star \mathbb{Z}$, and there is a quotient map from the former to the latter, reducing the number of $g_1$'s in each segment modulo $2$.

+ +

The relations we throw out will be such that the one upstairs (the source of the quotient map) will be automatic by design. If we only use the relations that remain and shorten the word, then it will still be a shorter word for the quotient group. It just won't be optimal for the quotient group (the target of the quotient map), but its length will be $\leq$ the length it started with.

+ +

That was the general idea, how can we turn this into a specific algorithm?

+ +

How do we choose the $g_i$ and relations to throw out in order to get an automatic group? This is where knowledge of the kinds of elementary gates we typically use comes in. There are a lot of involutions, so keep only those. Pay careful attention to the fact that these are only the elementary involutions: if your hardware has a hard time swapping qubits that are vastly separated on your chip, this means writing the word using only the swaps you can do easily, and reducing that word to be as short as possible.

+ +

For example, suppose you have the IBM configuration. Then $s_{01},s_{02},s_{12},s_{23},s_{24},s_{34}$ are the allowed gates. If you wish to do a general permutation, decompose it into $s_{i,i+1}$ factors. That is a word in the group $\langle s_{01},s_{02},s_{12},s_{23},s_{24},s_{34} \mid R_1 \cdots \rangle$ that we wish to shorten.

+ +

Note that these don't have to be the standard involutions. You can throw in $R(\theta) X R(\theta)^{-1}$ in addition to $X$ for example. Think of the Gottesman-Knill theorem, but in an abstract manner that means it will be easier to generalize. Such as using the property that under short exact sequences, if you have finite complete rewriting systems for the two sides, then you get one for the middle group. That comment is unnecessary for the rest of the answer, but shows how you can build up bigger more general examples from the ones in this answer.

+ +

The relations that are kept are only those of the form $(g_i g_j)^{m_{ij}} = 1$. This gives a Coxeter group, and it is automatic. In fact, we don't even have to start from scratch to code up the algorithm for this automatic structure. It is already implemented in Sage (Python-based) as a general-purpose routine. All you have to do is specify the $m_{ij}$, and the remaining implementation is already done. You might do some speedups on top of that.

+ +

$m_{ij}$ is really easy to compute because of the locality properties of the gates. If the gates are at most $k$-local, then the computation of $m_{ij}$ can be done on a $2^{2k-1}$ dimensional Hilbert space. This is because if the indices don't overlap, then you know that $m_{ij}=2$. $m_{ij}=2$ is for when $g_i$ and $g_j$ commute. You also only have to compute less than half of the entries. This is because the matrix $m_{ij}$ is symmetric, has $1$'s on the diagonal ($(g_i g_i)^1 = 1$). Also most of the entries are just renaming the involved qubits so if you know the order of $(\operatorname{CNOT}_{12} H_1)$, you know the order of $\operatorname{CNOT}_{37} H_3$ without doing the computation over again.

+ +
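As a sketch of how cheap each $m_{ij}$ is: brute-force the order of $g_i g_j$ on the small joint Hilbert space (here with the involutions $X$, $Z$, $H$ as examples; we ask for the exact identity, not identity up to phase):

```python
import numpy as np

def coxeter_order(gi, gj, max_m=24):
    """Smallest m with (gi gj)^m = identity (exactly, not just up to phase)."""
    p = gi @ gj
    acc = np.eye(p.shape[0], dtype=complex)
    for m in range(1, max_m + 1):
        acc = acc @ p
        if np.allclose(acc, np.eye(p.shape[0])):
            return m
    return None  # order exceeds max_m (or is infinite)

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

print(coxeter_order(X, Z))  # 4
print(coxeter_order(X, H))  # 8
# gates on disjoint qubits commute, so m_ij = 2
print(coxeter_order(np.kron(X, np.eye(2)), np.kron(np.eye(2), Z)))  # 2
```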

That took care of all relations that only involved at most two distinct gates (proof: exercise). The relations that involved $3$ or more were all thrown out. We now put them back in. Let's say we have that, then one can perform Dehn's greedy algorithm using new relations. If there was a change, we knock it back up to run through the Coxeter group again. This repeats until there are no changes.

+ +

Every time the word is either getting shorter or staying the same length and we are only using algorithms that have linear or quadratic behaviour. This is a rather cheap procedure so might as well do it and make sure you didn't do anything stupid.

+ +

If you want to test it out yourself, give the number of generators as $N$, the length $K$ of the random word you're trying out and the Coxeter matrix as $m$.

+ +
edge_list=[]
+for i1 in range(N):
+    for j1 in range(i1):
+        edge_list.append((j1+1,i1+1,m[i1,j1]))
+G3 = Graph(edge_list)
+W3 = CoxeterGroup(G3)
+s3 = W3.simple_reflections()
+word=[choice(list([1,..,N])) for k in range(K)]
+print(word)
+wTesting=s3[word[0]]
+for o in word[1:]:
+    wTesting=wTesting*s3[o]
+word=wTesting.coset_representative([]).reduced_word()
+print(word)
+
+ +

An example with N=28 and K=20: the first two lines are the input unreduced word, the next two are the reduced word. I hope I didn't make a typo when entering the $m$ matrix.

+ +
[26, 10, 13, 16, 15, 16, 20, 22, 21, 25, 11, 22, 25, 13, 8, 20, 19, 19, 14, 28]
+
+['CNOT_23', 'Y_1', 'Y_4', 'Z_2', 'Z_1', 'Z_2', 'H_1', 'H_3', 'H_2', 'CNOT_12', 'Y_2', 'H_3', 'CNOT_12', 'Y_4', 'X_4', 'H_1', 'Z_5', 'Z_5', 'Y_5', 'CNOT_45']
+
+[14, 8, 28, 26, 21, 10, 15, 20, 25, 11, 25, 20]
+
+['Y_5', 'X_4', 'CNOT_45', 'CNOT_23', 'H_2', 'Y_1', 'Z_1', 'H_1', 'CNOT_12', 'Y_2', 'CNOT_12', 'H_1']
+
+ +

When putting back generators like $T_i$, we only put back the relations like $T_i^n = 1$ and the fact that $T_i$ commutes with gates that do not involve qubit $i$. This allows us to make the $w_i$ in the decomposition $w_1 g_{i_1} w_2 g_{i_2} \cdots w_k$ from before as long as possible. We want to avoid situations like $X_1 T_2 X_1 T_2 X_1 T_2 X_1$. (In Clifford+T one often seeks to minimize the T-count.) For this part, the directed acyclic graph showing the dependency is crucial. This is a problem of finding a good topological sort of the DAG. That is done by changing precedence when one has a choice of which vertex to use next. (I wouldn't waste time optimizing this part too hard.)

+ +

If the word is already close to optimal length, there is not much to do and this procedure won't help. But the most basic example of what it finds: if you have multiple units and you forgot there was an $H_i$ at the end of one and an $H_i$ at the beginning of the next, it will get rid of that pair. This means you can black-box common routines with greater confidence that when you put them together, those obvious cancellations will all be taken care of. It also finds cancellations that aren't as obvious; those use the relations with $m_{ij} \neq 1,2$.

+",434,,26,,4/29/2019 9:03,4/29/2019 9:03,,,,2,,,,CC BY-SA 4.0 +4096,1,4098,,8/27/2018 13:54,,7,207,"

How can I synthesize a two-qubit quantum state with state vector $(a,b,b,b)$ using a basic quantum-gate circuit (arbitrary single-qubit rotations and the controlled-$Z$ gate)? Furthermore, how can I know whether a given circuit is the simplest?

+",4439,,26,,12/23/2018 7:39,12/23/2018 7:39,"What is the smallest quantum circuit to produce two-qubit state (a,b,b,b)?",,2,0,,,,CC BY-SA 4.0 +4097,2,,4096,8/27/2018 17:17,,4,,"

The simplest way to solve this problem is to work backwards from the output to the input. Suppose you have the state $a|00\rangle + b|01\rangle + b|10\rangle + b|11\rangle$. How can you reduce this to just the state $|00\rangle$ with unitary operations? Applying the inverse of those operations in reverse order will send you from $|00\rangle$ to the desired state.

+ +

So we start here:

+ +

$|\psi\rangle = a|00\rangle + b|01\rangle + b|10\rangle + b|11\rangle$

+ +

Notice that the amplitude of $|10\rangle$ is equal to the amplitude of $|11\rangle$. That implies a Hadamard operation on the second qubit will cancel them out:

+ +

$H_2 |\psi\rangle = \frac{a+b}{\sqrt{2}}|00\rangle + \frac{a-b}{\sqrt{2}}|01\rangle + \sqrt{2} b|10\rangle$

+ +
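This first reduction step is easy to sanity-check numerically; a minimal sketch with real $a,b$ satisfying the normalization $|a|^2+3|b|^2=1$:

```python
import numpy as np

a = 0.8
b = np.sqrt((1 - a**2) / 3)          # any a, b with |a|^2 + 3|b|^2 = 1
psi = np.array([a, b, b, b], dtype=complex)

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
H2 = np.kron(np.eye(2), H)           # Hadamard on the second qubit

out = H2 @ psi
expected = np.array([(a + b) / np.sqrt(2), (a - b) / np.sqrt(2), np.sqrt(2) * b, 0])
print(np.allclose(out, expected))    # True
```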

Now, within the subspace where the first qubit is off, we have the state $|v\rangle = \frac{a+b}{\sqrt{2}}|0\rangle + \frac{a-b}{\sqrt{2}}|1\rangle$ on the second qubit. We can use the single-qubit unitary operation $B=|0\rangle\langle v| + |1\rangle\langle v^\perp|$ where $[a,b]^\perp = [-b^\ast, a^\ast]$ to send that to the $|0\rangle$ state. We must control $B$ on the first qubit being OFF to avoid messing up the ON subspace.

+ +

$B_{\bar{1} \rightarrow 2} \cdot H_2 \cdot |\psi\rangle = (|a|^2 + |b|^2) |00\rangle + \sqrt{2} b|10\rangle$

+ +

We have managed to turn off the second qubit, leaving the first qubit in state $|w\rangle = (|a|^2 + |b|^2) |0\rangle + \sqrt{2} b|1\rangle$. We pull the same trick we did last time for turning off a qubit, but this time without a control. The operation we need is $A=|0\rangle\langle w| + |1\rangle\langle w^\perp|$:

+ +

$A_1 \cdot B_{\bar{1} \rightarrow 2} \cdot H_2 \cdot |\psi\rangle = |00\rangle$

+ +

Which implies:

+ +

$H_2 \cdot B_{\bar{1} \rightarrow 2}^\dagger \cdot A_1^\dagger \cdot |00\rangle = a|00\rangle + b|01\rangle + b|10\rangle + b|11\rangle$

+ +

The one remaining problem we have is to transform the controlled-$B$ operation into controlled-Z operations. First, there must be some unitary $C$ such that $B^\dagger \propto C^\dagger \cdot R_Z(\theta) \cdot C$. Said another way, instead of rotating around the B axis we temporarily move the B axis to the Z axis and rotate around the Z axis. The value of $\theta$ depends on the eigenvalues of $B$.

+ +

This is our current state:

+ +

$H_2 \cdot C_2^\dagger \cdot R_Z(\theta)_{\bar{1} \rightarrow 2} \cdot C_2 \cdot A_1^\dagger \cdot |00\rangle = a|00\rangle + b|01\rangle + b|10\rangle + b|11\rangle$

+ +

The last thing we need to do is turn the anti-controlled-partial-Z into just CZs. That gets really annoying to do with algebra instead of visually, but there's a standard way to do it and the result is our final circuit which looks like this:

+ +

+ +

Where $Z^t \propto R_Z(t \pi)$ and $t = \frac{\theta}{2 \pi}$. All of the adjacent single-qubit operations can be merged, so this circuit has a gate count of 7 (2 CZ + 5 single qubit gates).

+",119,,,,,8/27/2018 17:17,,,,5,,,,CC BY-SA 4.0 +4098,2,,4096,8/27/2018 17:56,,4,,"

A particularly efficient way is to look at the Schmidt coefficients of your target state. You know that your state can be written as +$$ +U_1\otimes U_2(\alpha|00\rangle+\beta|11\rangle), +$$ +and the Schmidt decomposition tells you what $\alpha,\beta,U_1,U_2$ are. So, obviously, the problem becomes producing +$$ +\alpha|00\rangle+\beta|11\rangle. +$$ +This is simple: produce $(\alpha|0\rangle+\beta|1\rangle)|0\rangle$ and apply controlled-not. Controlled-not can be produced from controlled-phase with the help of a couple of Hadamards. + +Thus, you only need one controlled-phase gate, and you know that must be optimal because the target state is entangled, and therefore requires at least one entangling operation to produce. Obviously we can combine the $H$ and $U_2$ into a single step if we wish.

+ +
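The Schmidt step can be sketched with numpy's SVD (real $a,b$ for simplicity; in that case $U_2$ comes out as the transpose of the SVD factor $V^\dagger$):

```python
import numpy as np

a, b = np.sqrt(0.7), np.sqrt(0.1)       # |a|^2 + 3|b|^2 = 1
M = np.array([[a, b], [b, b]])          # amplitudes of (a,b,b,b) as a 2x2 matrix
U1, s, V2h = np.linalg.svd(M)           # Schmidt coefficients = singular values

print(np.allclose(s[0]**2 + s[1]**2, 1))  # True: alpha^2 + beta^2 = 1
bell = np.array([s[0], 0, 0, s[1]])       # alpha|00> + beta|11>
psi = np.kron(U1, V2h.T) @ bell           # U1 (x) U2 applied to the Bell-like state
print(np.allclose(psi, M.reshape(-1)))    # True: recovers (a,b,b,b)
```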

If you want to talk about optimality in terms of total gate count, you know that a controlled-phase is useless as an entangling gate unless neither qubit is in the computational basis, so you have to have single-qubit unitaries on both qubits before the controlled phase. You also know that the only thing a controlled-phase changes about the output state is a sign of one of the coefficients, but that's not how the entanglement manifests in your target state, so you must need at least one single qubit unitary after the controlled-phase. So, you're certainly very close to optimal.

+",1837,,1837,,8/27/2018 18:07,8/27/2018 18:07,,,,2,,,,CC BY-SA 4.0 +4099,2,,3933,8/28/2018 1:57,,6,,"

I agree with most of what you've written in the first paragraph, though I would say that at roughly the same time (only 1 month apart!) as the Rebentrost et al. paper you mentioned, a very similar paper was posted to arXiv by Plenio and Huelga called ""Dephasing assisted transport: Quantum networks in biomolecules"" and it actually got published in the same journal as the Rebentrost et al. paper, but a few months earlier. There was also Mohseni et al.'s Environment-Assisted Quantum Walks in Photosynthetic Energy Transfer posted on arXiv one month earlier than Rebentrost et al., and published in a journal 8 days before the Plenio-Huelga paper.

+ +

But actually 13 years before all of that, Nancy Makri and Eunji Sim wrote papers simulating the full quantum coherence for electron transfer in bacteriochlorophylls (see this and this). Also 11 years before that, Nobel Laureate Rudy Marcus used Marcus theory to study energy transfer in the same system, and wrote this review on the subject with 331 papers listed in the bibliography.

+ +

So the use of quantum mechanics to study energy transfer in bacteriochlorophyll goes back to decades before that Rebentrost et al. paper, and it was the 2007 Engel paper that you mentioned, where they connected the energy transfer to quantum computing, which created a new wave of interest (including in the quantum computing community which previously was not interested in biological/chemical energy transfer, examples being the two 2008 papers mentioned in the first paragraph, which featured authors from quantum computing such as Martin Plenio and Seth Lloyd).

+ +

I was lucky to get the chance to see Bob Silbey's talk at the Royal Society meeting called ""Quantum coherent energy transfer: Implications for biology and new energy technologies"" fewer than 6 months before he died, and he traced quantum biology back to Chapter 4 of Schrödinger's book ""What is Life?"" which talks about mutations being caused by electron transfer (which we now learn in high school biology: UV radiation causes excitations that cause thymine dimers to form, leading to cancer).

+ +
+ +

Things get interesting in your second paragraph when you say:

+ +
+

Given that this mechanism allows for quantum effects to take place at room temperatures without the negative effects of decoherence, are there any applications for quantum computing?

+
+ +

In my answer to this I pointed out that if the excitations were in a vacuum with no vacuum modes (in QED, even a vacuum has modes that can interact with the excitations), then the energy would just transfer back and forth (Rabi oscillations) indefinitely due to the quantum version of the Poincaré recurrence theorem. You can see that when I turned on the decoherence, these Rabi oscillations didn't just get damped, but also the excitation was ""funneled"" towards the reaction center, hence allowing it to fuel the subsequent photosynthesis. This is why it's called ""decoherence-driven"" energy transfer, and why you say that quantum effects take place ""without the negative effects of decoherence"".

+ +

The implications for quantum computing are more subtle though.

+ +

Notice that the coherence was practically gone after 1ps (notice the Rabi oscillations are gone at 1ps). This means the decoherence is still bad, in fact much worse than in some quantum computer candidates such as phosphorous-doped silicon.

+ +

Said another way, the coherence is killed in the FMO within about 1ps, whereas in phosphorous-doped silicon it was made to last more than a trillion times longer than 1ps. You should not be surprised by this difference of 12 orders of magnitude, since the FMO was not meant to be a quantum computer (it is a wet, noisy, environment full of decoherence sources), while the phosphorous-doped silicon experiments were purposely done in conditions that would allow the authors to get the longest room-temperature coherence time possible.

+ +
+ +

So in summary:

+ +
    +
  • decoherence helps photosynthesis work,
  • +
  • decoherence happens rapidly in the FMO (roughly 1ps, vs seconds for some QC candidates)
  • +
  • circuit-based quantum computers require long coherence times
  • +
  • circuit-based quantum computers will not perform well if coherence is completely lost after 1ps, especially if the quantum gates take 100ns each (which is a realistic estimate for superconducting QCs).
  • +
  • Therefore I would not choose excitations in chromophores for the qudits in a circuit-based quantum computer. Such a quantum computer is less likely to be as capable as the machines currently being made by the real companies who are trying very hard to make good quantum computers (IBM, Google, D-Wave, Rigetti, Intel, Alibaba, etc. all use superconducting systems, not biological chromophores).
  • +
+ +

The bottom line is that it is very interesting that we are able to observe quantum coherence in the energy transfer of the FMO via coherent 2D spectroscopy, but this coherence does not last nearly as long as we need it to for fault-tolerant quantum computing, and QCs that have been engineered in the lab specifically to perform well at quantum computing, have much longer coherence times. Otherwise, IBM, Google, D-Wave, Rigetti, Intel, Alibaba, etc. would be using biological chromophores, not superconducting qubits. Those companies are well-aware of the quantum coherence in the FMO. In fact as stated in my first paragraph, Mohseni was the first to write about coherence in the FMO (in 2008) in this wave that started after Engel's 2007 paper. Guess where Mohseni works? Google. You said ENAQT was originally proposed by Patrick Rebentrost. Patrick works at Xanadu, a company trying to make photonic QCs, not chromophoric QCs. Patrick's PhD supervisor Alan Aspuru-Guzik who authored (at least) 4 of the mentioned papers, including the DNA one you posted, was also the PhD adviser of multiple other people in Google and Rigetti's quantum teams. These companies know about coherence in the FMO, employ many of the lead authors on those FMO papers, and if it was a good idea to build an FMO-inspired quantum computer, they would know it, but instead they all use superconducting qubits and sometimes ion-traps or photonics.

+",2293,,,,,8/28/2018 1:57,,,,4,,,,CC BY-SA 4.0 +4100,1,,,8/28/2018 2:46,,15,986,"

I am interested in the state of the art gate speeds and decoherence times for the qubit types I know are being pursued by companies presently:

+ +
    +
  • superconducting qubits,
  • +
  • ion trap qubits,
  • +
  • photonic qubits.
  • +
+ +

Where can I find these, and is there a place where these are updated regularly?

+ +

There have been various published tables depicting these times for various types of qubits over the years (including the famous Los Alamos National Lab QC Roadmap), but the numbers always change while the published papers don't.

+ +

I needed these numbers to answer this question because I wanted to compare the 1ps decoherence time in the FMO to state-of-the-art decoherence times and gate times in popular candidates for QCs, so I went searching for some reasonable values for roughly this time period, but I don't anymore know where to look.

+ +


+The longest coherence time ever measured was given in this answer, but no gate times were given: What is the longest time a qubit has survived with 0.9999 fidelity?

+ +

James Wootton talked about the advantages and disadvantages of the above three qubit types, but not the gate/decoherence times, in this answer: What is the leading edge technology for creating a quantum computer with the fewest errors?

+",2293,,,,,8/30/2018 8:51,State of the art gate speeds and decoherence times,,2,0,,,,CC BY-SA 4.0 +4101,2,,4100,8/28/2018 4:01,,5,,"

I guess your best shot would be to look for experimental comparisons like this one on arXiv.

+ +

But I am not aware of any such tracking. I do not think this field has a single fixed ""state of the art"": the goal is, of course, to keep making the devices better, with better connectivity for instance (a possible factor to take into account).

+",4127,,,,,8/28/2018 4:01,,,,3,,,,CC BY-SA 4.0 +4102,1,,,8/28/2018 6:47,,4,217,"

How is it possible to maintain classical data encoded into qubits, which often contains copies of information, given that the no cloning theorem prevents cloning information?

+",4446,,55,,8/20/2020 8:04,8/20/2020 8:04,Cloning classical data encoded into qubits,,1,0,,,,CC BY-SA 4.0 +4103,2,,4102,8/28/2018 12:59,,8,,"

The no cloning theorem only applies when quantum information is in an unknown superposition. If you know a basis in which the state of some qubits is not in a superposition, then you can make all the copies you want.

+ +

Classical information encoded directly into qubits is going to be in the computational basis state. Therefore you can clone it. You use CNOT operations to do it.
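A small numpy illustration of both halves of this statement: CNOT copies computational-basis states, but applied to a superposition it produces a Bell state rather than two independent copies:

```python
import numpy as np

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

ket0, ket1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])

# cloning classical bits works: |x>|0> -> |x>|x>
for x in (ket0, ket1):
    assert np.allclose(CNOT @ np.kron(x, ket0), np.kron(x, x))

# but on a superposition we get (|00>+|11>)/sqrt(2), not |+>|+>
plus = (ket0 + ket1) / np.sqrt(2)
out = CNOT @ np.kron(plus, ket0)
print(np.allclose(out, np.kron(plus, plus)))  # False
```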

+",119,,,,,8/28/2018 12:59,,,,0,,,,CC BY-SA 4.0 +4104,1,,,8/29/2018 7:26,,9,1736,"

I am trying to understand what the importance of tensor networks is (or will/could be) for quantum computing.

+ +

Does it make sense to study tensor networks deeply and develop them further to help pave the way towards quantum supremacy? If so, how could they help and what are the current most pressing research questions?

+",4449,,26,,12/14/2018 6:28,12/20/2019 10:45,What can tensor networks mean for quantum computing?,,2,8,,,,CC BY-SA 4.0 +4105,1,,,8/29/2018 9:27,,4,143,"

The magic square game is a two-player pseudo-telepathy game that was presented by Padmanabhan Aravind, who built on work by Mermin. In the magic square, each row must contain an even number of 1s and each column an odd number of 1s.

+ +

+ +

According to https://arxiv.org/abs/quant-ph/0407221v3

+ +

$$ + A2= + \frac12\left[ {\begin{array}{cccc} + i & 1 & 1 & i \\-i & 1 & -1 & i\\i & 1 & -1 & -i\\-i & 1 & 1 & -i \end{array} } \right] +$$

+ +

$$ + B3= + \frac1{\sqrt{2}}\left[ {\begin{array}{cccc} + 1 & 0 & 0 & 1 \\-1 & 0 & 0 & 1\\0 & 1 & 1 & 0\\0 & 1 & -1 & 0 \end{array} } \right] +$$

+ +

We have an input the entangled state shared by Alice and Bob

+ +

$ \mid \psi \rangle = \frac{1}{2}\mid0011 \rangle -\frac{1}{2}\mid0110 \rangle -\frac{1}{2}\mid1001 \rangle +\frac{1}{2}\mid1100 \rangle$

+ +

Consider for example inputs x =2 and y =3. After Alice and Bob apply A2 and B3 respectively, the state evolves to +$$ + A2 \otimes B3 \mid \psi\rangle = \frac{1}{2\sqrt{2}} \left[\mid0000\rangle -\mid0010\rangle -\mid0101\rangle +\mid 0111\rangle +\mid 1001\rangle +\mid 1011\rangle -\mid 1100\rangle -\mid 1110\rangle \right] +$$

+ +

Question is how to obtain this result. I did multiplication of the matrices

+ +

$ +(A2 \otimes B3) \mid 00 \rangle \otimes \mid 11 \rangle $

+ +

$ -(A2 \otimes B3) \mid 01 \rangle \otimes \mid 10 \rangle $

+ +

$ -(A2 \otimes B3) \mid 10 \rangle \otimes \mid 01 \rangle $

+ +

$ +(A2 \otimes B3) \mid 11 \rangle \otimes \mid 00 \rangle $

+ +

Calculate the tensor product

+ +

$ +(A2 \mid 00 \rangle) \otimes (B3\mid 11 \rangle) $ part 1

+ +

$ -(A2 \mid 01 \rangle) \otimes (B3\mid 10 \rangle) $ part 2

+ +

$ -(A2 \mid 10 \rangle) \otimes (B3\mid 01 \rangle) $ part 3

+ +

$ +(A2 \mid 11 \rangle) \otimes (B3\mid 00 \rangle) $ part 4

+ +

Let's calculate part 2 with step 1 and step 2

+ +

$$ + \text{step 1} = A2 | 01 \rangle = + \left[ {\begin{array}{cccc} + i & 1 & 1 & i \\-i & 1 & -1 & i\\i & 1 & -1 & -i\\-i & 1 & 1 & -i \end{array} } \right] \left[ {\begin{array}{c} 0 \\ 1 \\ 0 \\ 0\end{array}} \right] = \left[ {\begin{array}{c} 1 \\ 1 \\ 1 \\ 1\end{array}} \right] +$$

+ +

$$ +\text{step 2} = B3 | 10 \rangle = +\left[ {\begin{array}{cccc} + 1 & 0 & 0 & 1 \\-1 & 0 & 0 & 1\\0 & 1 & 1 & 0\\0 & 1 & -1 & 0 \end{array} } \right] \left[ {\begin{array}{c} 0 \\ 0 \\ 1 \\ 0\end{array}} \right] = \left[ {\begin{array}{c} 0 \\ 0 \\ 1 \\ -1\end{array}} \right] +$$

+ +

$ step1 \otimes step2 = \left[ {\begin{array}{c} 0 \\ 0 \\ 1 \\ -1 \\ 0 \\ 0 \\ 1 \\ -1 \\ 0 \\ 0 \\ 1 \\ -1 \\0 \\ 0\\ 1 \\ -1 \end{array}} \right] $

+ +

Is this the way to go?

+",1773,,1837,,8/29/2018 10:00,8/29/2018 10:00,How to calculate tensor product for the magic square,,1,2,,,,CC BY-SA 4.0 +4106,2,,4105,8/29/2018 9:59,,2,,"

That is certainly one direction to go, which should ultimately lead to the right answer. Frankly, I'd just throw it into something like Mathematica and get it to calculate KroneckerProduct[A2,B3], and complete the calculation that way.

+ +

However, if you want to continue by hand, there are probably a few tricks that you can use in this special case. For example, take the two components +$$ +|00\rangle|11\rangle+|11\rangle|00\rangle +$$ +Ignoring normalisation for now, you can write these as +$$ +(|00\rangle+|11\rangle)(|00\rangle+|11\rangle)-(|00\rangle-|11\rangle)(|00\rangle-|11\rangle). +$$ +(You can do something similar for the $|01\rangle|10\rangle$ terms). Why does this help? The components on Bob's space (the last 2 qubits) transform really nicely under $B3$, so the maths suddenly gets a whole lot easier (Alice's get better as well). Indeed, each of the 4 terms (corresponding to Bell states on Bob's qubits) will map to a different basis state on Bob's system, so you'll be able to check the correct outcome term by term, instead of having to wait for the whole calculation and seeing what things cancel.

+ +

At first glance, the claimed output doesn't look right. According to Mathematica, the output should be +$$ +\left\{\frac{i}{2 \sqrt{2}},0,-\frac{1}{2 \sqrt{2}},0,0,-\frac{i}{2 + \sqrt{2}},0,\frac{1}{2 \sqrt{2}},0,\frac{i}{2 \sqrt{2}},0,\frac{1}{2 + \sqrt{2}},-\frac{i}{2 \sqrt{2}},0,-\frac{1}{2 \sqrt{2}},0\right\}. +$$ +Note the complex phases.
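For anyone without Mathematica, the same multiplication is a few lines of numpy (a sketch; Alice's two qubits are taken as the most significant bits):

```python
import numpy as np

A2 = 0.5 * np.array([[ 1j, 1,  1,  1j],
                     [-1j, 1, -1,  1j],
                     [ 1j, 1, -1, -1j],
                     [-1j, 1,  1, -1j]])
B3 = np.array([[ 1, 0,  0, 1],
               [-1, 0,  0, 1],
               [ 0, 1,  1, 0],
               [ 0, 1, -1, 0]]) / np.sqrt(2)

psi = np.zeros(16, dtype=complex)
psi[0b0011], psi[0b0110], psi[0b1001], psi[0b1100] = 0.5, -0.5, -0.5, 0.5

out = np.kron(A2, B3) @ psi
c = 1 / (2 * np.sqrt(2))
expected = c * np.array([1j, 0, -1, 0, 0, -1j, 0, 1, 0, 1j, 0, 1, -1j, 0, -1, 0])
print(np.allclose(out, expected))  # True
```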

+",1837,,,,,8/29/2018 9:59,,,,1,,,,CC BY-SA 4.0 +4107,1,,,8/29/2018 12:16,,6,584,"

Hamiltonian simulation (= simulation of quantum mechanical systems) is claimed to be one of the most promising future applications of a quantum computer.

+ +
+

One of the earliest – and most important – applications of a quantum + computer is likely to be the simulation of quantum mechanical systems. + There are quantum systems for which no efficient classical simulation + is known, but which we can simulate on a universal quantum computer. + What does it mean to “simulate” a physical system? According to the + OED, simulation is “the technique of imitating the behaviour of some + situation or process (whether economic, military, mechanical, etc.) by + means of a suitably analogous situation or apparatus”. What we will + take simulation to mean here is approximating the dynamics of a + physical system. Rather than tailoring our simulator to simulate only + one type of physical system (which is sometimes called analogue + simulation), we seek a general simulation algorithm which can simulate + many different types of system (sometimes called digital simulation)

+
+ +

For the details, check chapter 7 of the lecture notes by Ashley Montanaro.

+ +

Question: +Assuming that tomorrow we have such a powerful universal quantum computer: which interesting problems, (1) based on simulating a quantum system and (2) for which a quantum algorithm is known, could we solve?

+ +

Note that it is important that a quantum algorithm is already known to solve this problem or at least that there is good evidence supporting that such quantum algorithm can be found.

+ +

By interesting I mean that it should have substantial impact beyond the fields of quantum computing and quantum chemistry.

+ +

Note that interesting problems definitely include finding molecules that can cure diseases and designing materials with specific characteristics.

+",2529,,26,,12/13/2018 20:04,5/13/2019 21:19,Example of Hamiltonian Simulation solving interesting problem?,,3,11,,,,CC BY-SA 4.0 +4108,1,,,8/29/2018 15:25,,6,897,"

In scientific literature, one typically describes Cooper pair boxes as a small superconducting island coupled to a superconducting reservoir (say, a large ground plane of superconducting metal, or a large piece in any case) via a Josephson junction. This is for example illustrated in panel c below, taken from here

+ +

+ +

The main innovation of the transmon over the Cooper pair box was (if I am not mistaken) to shunt the island with a large capacitance, so as to decrease the sensitivity to charge. One can do this while maintaining the geometry of having an island and a large reservoir; see for example figure 1a of Anharmonicity of a Gatemon Qubit with a Few-Mode Josephson Junction by Kringhoj et al. (2017), where their Josephson junction (in this case an SNS junction made from a nanowire) shunts a T-shaped island to the ground plane, the reservoir in this case. The large capacitance here is to ground. I apologise for using an exotic transmon here; I couldn't find a nice picture of a standard one. Studying Light-Harvesting Models with Superconducting Circuits by Potocnik et al. (2017) has grounded $\require{mhchem}\ce{AlO_x}$ transmons in figure 1b, but the level of zoom is a little low.

+ +

But something that I often see in modern transmon designs is that it is not a shunted island coupled to a reservoir; it is instead two coupled islands; take this figure from here. The capacitance here is between the two plates themselves.

+ +

+ +

Visually and intuitively, I would say that this is a rather different device; instead of having a small, isolated (up to Josephson coupling) island coupled to a large reservoir with a very large number of charges (which might even be grounded), one now has two such small, isolated islands with a small number of charges, galvanically isolated from the rest of the circuit. Moreover, the capacitance is now not to ground, but between the islands.

+ +

Now, I know that the Hamiltonian of these systems does not care about this difference; what matters is the number of Cooper pairs which have tunnelled across the junction, not the actual number of charges on the island(s). My question is how these devices differ in practice: why would one make a transmon one way rather than the other? What constraints/considerations/findings have led modern designs to favour the two-island version (if this is even true)? Do they have better lifetimes and/or coherence times, and if so, why? I've been trying to find literature that discusses the distinction between the two, but I have been unsuccessful.

+ +
+ +

Note: This has been cross posted on Physics SE.

+",271,,26,,09-02-2018 17:46,12/22/2018 12:01,Transmons and cooper pair box qubits: two islands or a single island and a reservoir,,1,1,,,,CC BY-SA 4.0 +4109,2,,4107,8/29/2018 15:42,,4,,"

It's not an area I personally know much about, but I know that many of my physicist friends are excited about being able to investigate the Hubbard Model on larger lattices than we can simulate today. There are known and published algorithms for finding the ground state energy, computing Green's functions, and other important characteristics of the model.

+ +

The hope is that understanding the Hubbard model on larger, more realistic lattices might help us understand superconductivity better, and in particular lead to materials that stay superconducting at higher and higher temperatures.

+",4265,,,,,8/29/2018 15:42,,,,0,,,,CC BY-SA 4.0 +4110,1,4132,,8/29/2018 20:20,,5,700,"

Context and Motivation:

+ +

As discussed here, in multilinear regression, we can express the linear system as $AX = b$. This leads to $A^TA \hat{X} = A^T b$. From here, the estimated value of $X$ is calculated as $(A^TA)^{-1}A^Tb$. The whole process basically involves three steps:

+ +
    +
  1. Matrix multiplication of $A$ and $A^T$: $\mathcal{O}(C^2N)$

  2. +
  3. Matrix multiplication of $A^T$ and column matrix $b$: $\mathcal{O}(CN)$

  4. +
  5. LU/Cholesky factorization of matrix $A^T A$ used to compute the product $(A^TA)^{-1}A^Tb$: $\mathcal{O}(C^3)$.

  6. +
+ +

Note: $N$ is the number of training samples. $C$ is the number of features/variables.

+ +
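For reference, the three classical steps above can be sketched directly in NumPy (illustrative code with arbitrary random data; NumPy's general solver stands in for the triangular solves of step 3):

```python
import numpy as np

# Illustrative data; shapes match the question: N samples, C features.
rng = np.random.default_rng(0)
N, C = 100, 5
A = rng.normal(size=(N, C))
b = rng.normal(size=(N, 1))

# Step 1: A^T A                                -- O(C^2 N)
AtA = A.T @ A
# Step 2: A^T b                                -- O(C N)
Atb = A.T @ b
# Step 3: Cholesky factorization and solve of (A^T A) x = A^T b  -- O(C^3)
L = np.linalg.cholesky(AtA)
y = np.linalg.solve(L, Atb)      # solve L y = A^T b
x_hat = np.linalg.solve(L.T, y)  # solve L^T x = y

# Sanity check against NumPy's built-in least-squares solver.
x_ref = np.linalg.lstsq(A, b, rcond=None)[0]
assert np.allclose(x_hat, x_ref)
```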
+ +

Questions:

+ +

I guess we could speed up step 3 by using HHL, although that would presumably be worth it only if $C$ is sufficiently large, i.e. $C\lesssim N$. But is there any quantum algorithm to speed up steps 1 and 2 (which involve matrix multiplication)? The fastest classical matrix multiplication algorithms as of today have time complexities around $\mathcal{O}(N^{2.37})$.

+ +
+

So:

+ +
    +
  1. Can we do better than that? What are state-of-the-art general purpose quantum algorithms as of today, as far as matrix + multiplication is concerned?
  2. +
+ +

(By ""general purpose"" I mean that the + algorithm should have no specific restrictions on the elements of the + matrices. A user mentioned in the comments that there are different quantum matrix multiplication algorithms depending on sparsity, condition number, etc., which sounds reasonable to me. So any answer which lists and summarizes the best quantum algorithms for different such conditions/restrictions is also welcome.)

+ +
    +
1. Would the state-of-the-art quantum matrix multiplication algorithm(s) coupled with HHL help to produce an overall reduction in the time complexity + (considering all three steps as a whole) of multilinear + regression? If yes, by how much?
  2. +
+ +

(I'm looking for an asymptotic analysis as in here which states that the overall time complexity of classical multilinear regression at best is $\mathcal{O}(C^2N)$).

+
+ +
+ +

Note:

+ +

Please summarize any algorithm you mention (along with the constraints involved). It is practically impossible for people to read each and every paper referenced in order to check whether it suits their criteria!

+",26,,26,,09-01-2018 07:25,02-01-2019 22:35,How to speed up the matrix multiplication steps in multi-linear regression?,,3,4,,,,CC BY-SA 4.0 +4111,1,4116,,8/29/2018 20:39,,3,334,"
+

A quantum gyroscope is a very sensitive device to measure angular rotation based on quantum mechanical principles. The first of these has been built by Richard Packard and his colleagues at the University of California, Berkeley. The extreme sensitivity means that theoretically, a larger version could detect effects like minute changes in the rotational rate of the Earth.

+
+ +

There is a section on Wikipedia titled Equation with no information in it.

+ +

Can any one help fill in the blank?

+",2645,,,,,8/30/2018 13:55,Equation for Quantum Gyroscopes,,1,2,,,,CC BY-SA 4.0 +4112,2,,4110,8/29/2018 20:48,,5,,"

You were correct to seek a new quantum algorithm for this rather than just using HHL to do step 3.

+ +

There are separate quantum algorithms to do regressions:

+ + + +

There is an interesting note about the $\mathcal{O}(N^{2.37})$ algorithm you mention for matrix multiplication. The constant hidden under the big O is larger than the number of particles in the visible universe. That is why almost 100% of the implementations (for example in MATLAB, BLAS, LAPACK, etc.) use Strassen's algorithm which has scaling $\mathcal{O}(N^{2.81})$.
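For illustration, Strassen's seven-multiplication recursion can be sketched as a toy implementation (power-of-two matrix sizes only; tuned library implementations are far more careful about memory and cutoffs):

```python
import numpy as np

def strassen(X, Y, leaf=32):
    """Toy Strassen multiplication for square matrices of power-of-two size.
    Uses seven recursive products instead of eight, giving O(N^2.81)."""
    n = X.shape[0]
    if n <= leaf:                      # fall back to ordinary multiplication
        return X @ Y
    h = n // 2
    A, B, C, D = X[:h, :h], X[:h, h:], X[h:, :h], X[h:, h:]
    E, F, G, H = Y[:h, :h], Y[:h, h:], Y[h:, :h], Y[h:, h:]
    p1 = strassen(A, F - H, leaf)
    p2 = strassen(A + B, H, leaf)
    p3 = strassen(C + D, E, leaf)
    p4 = strassen(D, G - E, leaf)
    p5 = strassen(A + D, E + H, leaf)
    p6 = strassen(B - D, G + H, leaf)
    p7 = strassen(A - C, E + F, leaf)
    top = np.hstack([p5 + p4 - p2 + p6, p1 + p2])
    bot = np.hstack([p3 + p4, p1 + p5 - p3 - p7])
    return np.vstack([top, bot])

rng = np.random.default_rng(1)
X = rng.normal(size=(64, 64))
Y = rng.normal(size=(64, 64))
assert np.allclose(strassen(X, Y), X @ Y)
```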

+",2293,,,,,8/29/2018 20:48,,,,10,,,,CC BY-SA 4.0 +4113,1,4124,,8/30/2018 4:28,,3,419,"

I have a bachelor's degree in Mechanical Engineering and am currently pursuing a Masters in Nanotechnology. I am interested in pursuing a career in the field of quantum computing. I have a basic understanding of electronics and quantum mechanics, but a poor understanding of computer science.

+ +

What qualities/prerequisites are required to pursue a career in the above-mentioned field? Will my weakness in computer science pose any hurdles in the future?

+",4460,,26,,12/23/2018 13:00,12/23/2018 13:00,Prerequisites for a career in spintronics based Quantum Computing,,1,8,,9/17/2018 8:28,,CC BY-SA 4.0 +4114,2,,4107,8/30/2018 8:02,,2,,"

I consider the leading candidate (given that you explicitly excluded quantum chemistry and hence all of biochemistry) to be the calculation of nuclear properties: with suitable quantum computers, this will one day allow one to compare experimental atomic physics with theoretical (quantum-computer-calculated) values from the Standard Model. It's not entirely science fiction either; see this viewpoint about such a calculation.

+",,user1039,,,,8/30/2018 8:02,,,,1,,,,CC BY-SA 4.0 +4115,2,,4100,8/30/2018 8:51,,3,,"

You could also look at the following webpage:

+ +

https://quantumcomputingreport.com/scorecards/qubit-quality/

+ +

where they provide recent (I'm not sure how often they update these scores) values for gate fidelities and decoherence times for IBM and Rigetti chips (unfortunately they don't give any results on ion traps and photonics, since these machines are not well described by commercial companies).

+",563,,,,,8/30/2018 8:51,,,,0,,,,CC BY-SA 4.0 +4116,2,,4111,8/30/2018 13:55,,2,,"

By a quantum gyroscope is usually meant a device or sensor capable of measuring the same quantities as a classical gyroscope, namely angular velocities or orientations, but with far higher precision, limited only by the Heisenberg uncertainty thanks to the exploitation of the quantum nature of the sensor.

+ +

The principle of enhancing measurement accuracy to the Heisenberg uncertainty limit by means of quantum operations such as entanglement and squeezing is general, and should be implementable in various technologies. Please see the following review by Giovannetti, Lloyd and Maccone.

+ +

This principle was used by Dowling to propose a quantum gyroscope based on an atomic Mach-Zehnder interferometer.

+ +

Angular velocity measurement by means of interferometry is based on the Sagnac effect: when a quantum particle moves around a closed loop in a rotating system, it acquires a geometric phase proportional to the angular velocity $\omega$ and the loop area $A$: + $$\phi = \frac{8 \pi}{\lambda c} \omega A$$ +Here $\lambda$ is the wavelength and $c$ is the speed of light.

+ +

(Remark: holonomic quantum computing gates are based on the non-Abelian generalization of this phase.)

+ +

The Dowling quantum gyroscope setup is depicted in the following diagram (extracted from Dowling's paper),

+ +

+ +

Here, two beams of particles or modes $A$ and $B$ are split at a first beam splitter, reflected by mirrors and recombined at a second beam splitter. Two detectors are used to count the particle numbers at the output. A rather straightforward analysis (given in Dowling's reference) shows that the geometric phase can be computed from the sum and difference of the detector measurements. +Dowling showed that when the input beams are in the separable state +$$|\psi \rangle_{I} = |N\rangle_A |0\rangle_B $$ +the phase measurement accuracy is at the classical shot-noise limit: +$$\Delta \phi \approx \frac{1}{\sqrt{N}}$$ +while when the input beams are in the entangled state:

+ +

$$|\psi \rangle_{II} = \frac{1}{\sqrt{2}} \left ( |\frac{N+1}{2}\rangle_A |\frac{N-1}{2}\rangle_B + |\frac{N-1}{2}\rangle_A |\frac{N+1}{2}\rangle_B \right)$$

+ +

The phase measurement accuracy reaches the Heisenberg limit: +$$\Delta \phi \approx \frac{1}{N}$$

+ +
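To give a feeling for the scales involved, here is a purely illustrative calculation; the parameter values (Earth's rotation rate, a 1 m² loop, 633 nm wavelength) are arbitrary choices, not taken from Dowling's paper:

```python
import math

# Sagnac phase phi = 8*pi*omega*A / (lambda*c) for illustrative parameters.
lam, c, A = 633e-9, 3.0e8, 1.0          # wavelength (m), speed of light (m/s), loop area (m^2)
omega_earth = 7.292e-5                  # Earth's rotation rate (rad/s)
phi = 8 * math.pi * omega_earth * A / (lam * c)
print(f"Sagnac phase: {phi:.2e} rad")   # ~1e-5 rad

# Accuracy scaling for N probe particles: shot noise 1/sqrt(N) vs Heisenberg 1/N.
N = 10**6
print(f"shot-noise limit: {1/math.sqrt(N):.1e}, Heisenberg limit: {1/N:.1e}")
```

The point of the comparison is that for $N=10^6$ particles the entangled input buys three extra orders of magnitude in phase sensitivity.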

In quantum radars, exactly the same principle (i.e., scattering entangled states) is used to enhance the accuracy to the Heisenberg limit.

+",4263,,,,,8/30/2018 13:55,,,,0,,,,CC BY-SA 4.0 +4117,2,,1472,8/30/2018 14:54,,3,,"

None.

+ +

The quantum race is led by those entities capable of building the most powerful quantum computers, and it is enterprises like IBM, Google, Intel, Microsoft and D-Wave that are currently building the most powerful quantum computers.

+ +

So it is enterprises that are leading this race, not countries.

+",2529,,,,,8/30/2018 14:54,,,,0,,,,CC BY-SA 4.0 +4118,1,4121,,8/30/2018 16:44,,20,2881,"

Suppose we have a single qubit with state $| \psi \rangle = \alpha | 0 \rangle + \beta | 1 \rangle$. We know that $|\alpha|^2 + |\beta|^2 = 1$, so we can write $| \alpha | = \cos(\theta)$, $| \beta | = \sin(\theta)$ for some real number $\theta$. Then, since only the relative phase between $\alpha$ and $\beta$ is physical, we can take $\alpha$ to be real. So we can now write

+ +

$$| \psi \rangle = \cos(\theta) | 0 \rangle + e^{i \phi} \sin(\theta)| 1 \rangle$$

+ +

My Question: Why are points on the Bloch sphere usually associated to vectors written as +$$| \psi \rangle = \cos(\theta/2) | 0 \rangle + e^{i \phi} \sin(\theta/2)| 1 \rangle$$ +instead of as I have written? Why use $\theta /2$ instead of just $\theta$?

+",4465,,55,,11-07-2019 19:13,05-02-2022 08:11,Why are half angles used in the Bloch sphere representation of qubits?,,4,1,,,,CC BY-SA 4.0 +4119,2,,4118,8/30/2018 17:32,,1,,"

It is just a convention, with $ 0 \le \theta \le \pi $. +You can write it your way (indeed you can ""absorb"" the constant into the variable), but in that case $ 0 \le \theta \le \pi / 2 $.

+ +

But we take this convention for unique coordinates. +If you refer to the Spherical coordinate system

+ +

you can see that if you want a unique set of spherical coordinates for each point of the sphere, you need to restrict their range.

+",4127,,4127,,8/30/2018 17:40,8/30/2018 17:40,,,,1,,,,CC BY-SA 4.0 +4120,2,,4118,8/30/2018 17:44,,7,,"

If we use the convention +$$| \psi \rangle = \cos(\theta) | 0 \rangle + e^{i \phi} \sin(\theta)| 1 \rangle$$ then the North ($\theta=0$) and the South ($\theta=\pi$) are (physically) the same state $|0\rangle$;

+ +

If we use the convention$$| \psi \rangle = \cos(\theta/2) | 0 \rangle + e^{i \phi} \sin(\theta/2)| 1 \rangle$$

+ +

then North is $|0\rangle$ and South is $|1\rangle$ which is better.

+",2105,,2105,,8/30/2018 17:49,8/30/2018 17:49,,,,0,,,,CC BY-SA 4.0 +4121,2,,4118,8/30/2018 17:55,,18,,"

It is a convention, chosen so that $\theta$ is the azimuthal angle of the point representing the state in the Bloch sphere.

+

To see where this convention comes from, +start from a state $|\psi\rangle=\alpha|0\rangle+\beta|1\rangle$. Remembering the normalisation constraint $|\alpha|^2+|\beta|^2=1$, and assuming without loss of generality $\alpha\in\mathbb R$, a natural way to parametrise the state is by defining an angle $\gamma$ such that $|\alpha|=\alpha=\cos\gamma$ and $|\beta|=\sin\gamma$. +A generic state $|\psi\rangle$ thus reads +$$|\psi\rangle=\cos\gamma|0\rangle + e^{i\varphi}\sin\gamma|1\rangle,$$ +for some phase $\varphi\in\mathbb R$. +Remember now that the Bloch sphere coordinates of a generic (pure) state $|\psi\rangle=\alpha|0\rangle+\beta|1\rangle$ have the explicit form +\begin{align}\newcommand{\on}[1]{\operatorname{#1}}\newcommand{\bs}[1]{\boldsymbol{#1}} +x\equiv\langle\psi|\sigma_x|\psi\rangle +&= 2\on{Re}(\bar\alpha\beta),\\ +y\equiv\langle\psi|\sigma_y|\psi\rangle +&= 2\on{Im}(\bar\alpha\beta),\\ +z\equiv\langle\psi|\sigma_z|\psi\rangle +&= |\alpha|^2 - |\beta|^2. +\end{align} +Relating these with our previous parametrisation with $\gamma$ we find +$$z=\cos^2\gamma-\sin^2\gamma=\cos(2\gamma).$$ +But the canonical way to define spherical coordinates uses $z=\cos\theta$, so if we wish to interpret the coefficients of the state as angles in the Bloch sphere, we have to set $\gamma=\theta/2$.

+
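The correspondence derived above can be checked numerically; this is a quick sanity check with arbitrary test angles:

```python
import numpy as np

# Check: for |psi> = cos(theta/2)|0> + e^{i phi} sin(theta/2)|1>, the Bloch
# coordinates <X>, <Y>, <Z> equal (sin t cos p, sin t sin p, cos t).
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

theta, phi = 1.1, 2.3   # arbitrary test angles
psi = np.array([np.cos(theta / 2), np.exp(1j * phi) * np.sin(theta / 2)])

bloch = [np.real(psi.conj() @ (P @ psi)) for P in (X, Y, Z)]
expected = [np.sin(theta) * np.cos(phi),
            np.sin(theta) * np.sin(phi),
            np.cos(theta)]
assert np.allclose(bloch, expected)
```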

See also the analogous question on physics.SE for more info.

+",55,,55,,05-02-2022 08:11,05-02-2022 08:11,,,,7,,,,CC BY-SA 4.0 +4122,2,,4118,8/30/2018 18:51,,5,,"

Let $\hat{n}=(\cos\phi\sin\theta,\sin\phi \sin\theta,\cos\theta)$ i.e. the Cartesian coordinate vector for a point on the unit sphere with polar angle $\theta$ and azimuthal angle $\phi$. By sending a spin-1/2 particle through a Stern-Gerlach device with orientation $\hat{n}$, we can measure the observable

+ +

\begin{align} +S_n:=\vec{S}\cdot \hat{n} +&=S_x \cos\phi\sin\theta +S_y \sin\phi \sin\theta+S_z \cos\theta\\ &= \frac{\hbar}{2}\begin{pmatrix} \cos\theta & \cos\phi\sin\theta-i \sin\phi \sin\theta\\ \cos\phi\sin\theta+i \sin\phi \sin\theta & -\cos\theta\end{pmatrix} \\&= \frac{\hbar}{2}\begin{pmatrix} \cos \theta & e^{-i\phi}\sin\theta \\ e^{i\phi}\sin\theta & -\cos\theta\end{pmatrix} +\end{align} +in the $S_z$ basis. The obvious step is now to determine eigenvalues and eigenvectors. But if we denote the spin-up and spin-down eigenstates of $S_z$ as $|0\rangle$ and $|1\rangle$ respectively, then

+ +

$$| \psi \rangle = \cos(\theta/2) |0 \rangle + e^{i \phi} \sin(\theta/2)| 1 \rangle=\begin{pmatrix} \cos(\theta/2)\\ e^{i\phi}\sin(\theta/2)\end{pmatrix}$$ and therefore +\begin{align} +S_n |\psi\rangle +&= \frac{\hbar}{2}\begin{pmatrix} \cos \theta & e^{-i\phi}\sin\theta \\ e^{i\phi}\sin\theta & -\cos\theta\end{pmatrix}\begin{pmatrix} \cos(\theta/2)\\ e^{i\phi}\sin(\theta/2)\end{pmatrix} \\ +&= \frac{\hbar}{2}\begin{pmatrix} \cos(\theta)\cos(\theta/2)+\sin(\theta)\sin(\theta/2)\\ e^{i\phi}[\sin(\theta)\cos(\theta/2)-\cos(\theta)\sin(\theta/2)]\end{pmatrix}\\ +&= \frac{\hbar}{2}\begin{pmatrix} \cos(\theta/2)\\ e^{i\phi}\sin(\theta/2)\end{pmatrix}=+\frac{\hbar}{2}|\psi\rangle +\end{align} +where in the second-to-last equality I've used the trigonometric product-to-sum formula. Hence $|\psi\rangle$ is the $S_n=+\hbar/2$ eigenstate. In other words: If a spin-1/2 particle passes through an SG device with orientation $\hat{n}$ and comes out deflected up, then $|\psi\rangle$ is the resulting spin state. (Correspondingly, one can show that $S_{-n}|\psi\rangle=-\hbar/2|\psi\rangle$ i.e. $|\psi\rangle$ will deflect down if the SG device is flipped.) The upshot is that $\theta,\phi$ are not angles in Hilbert space; rather, they're the angles in real space for the SG device for which $|\psi\rangle$ is the spin state of the upward-deflected beam.

+ +
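A quick numerical check of the eigenvalue equation above, with $\hbar$ set to 1 for convenience and arbitrary angles:

```python
import numpy as np

# S_n |psi> = +(1/2)|psi> with hbar = 1.
theta, phi = 0.7, 1.9   # arbitrary SG orientation angles
Sn = 0.5 * np.array([
    [np.cos(theta),                     np.exp(-1j * phi) * np.sin(theta)],
    [np.exp(1j * phi) * np.sin(theta), -np.cos(theta)],
])
psi = np.array([np.cos(theta / 2), np.exp(1j * phi) * np.sin(theta / 2)])
assert np.allclose(Sn @ psi, 0.5 * psi)
```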

Note that the above description is limited to points on the surface of the Bloch sphere i.e pure states. For points on the interior of the Bloch sphere, we need to go to the density matrix formalism as presented by gLs and I'll defer to that answer.

+",171,,,,,8/30/2018 18:51,,,,0,,,,CC BY-SA 4.0 +4123,2,,4107,8/30/2018 19:52,,4,,"

Several graph theory problems such as Graph Coloring (which is NP-complete) can be cleverly mapped to finding ground states of some classes of Hamiltonians.

+ +

Graph Partitioning using Quantum Annealing on +the D-Wave System

+ +

Quantum annealing of the graph coloring problem

+",4467,,26,,5/13/2019 21:19,5/13/2019 21:19,,,,0,,,,CC BY-SA 4.0 +4124,2,,4113,8/31/2018 1:46,,3,,"

If you are pursuing a Masters and are interested in pursuing a career in quantum computing research, the next step would be to do a PhD.

+ +

Most PhD students in quantum computing lack strength in some area of quantum information science. People that enter their PhD with an undergrad in CS might have a weak background in physics, people with a background in physics might have a weak background in cryptography, people with a background in pure mathematics might have a weak background in engineering.

+ +

Most PhD programs will give you plenty of time to take CS courses, and you can also start reading Theory of Computation by Michael Sipser, which is not very expensive (and will be in almost all university libraries) and is a beautifully written, simple, gentle introduction to many CS branches important to quantum computing such as complexity theory and cryptography, with no prerequisites at all except for a desire to learn CS.

+ +

It also depends on what you want your focus to be. I know plenty of experimentalists in spintronics who do not know any CS and know very little math or even quantum physics, but they know how to do a very good job of their experiments. If you do want to be a world-class quantum computing researcher you should at least be familiar with the introductory topics such as complexity theory, even though spintronics itself is far less related to CS than many other quantum computing sub-fields.

+ +

I do warn you that spintronics might not be the most promising sub-field of quantum computing right now though. NMR and ESR-based quantum computing was very popular in the 90s but has died down since there are scalability issues that have kept the maximum number of spin qubits down to only 12 qubits (even with NV Centers and Phosphorus-doped Silicon, which are two slightly newer spin-based technologies which have been proposed to be more promising than more traditional NMR/ESR proposals).

+ +

Superconducting qubits are the most popular among the major quantum hardware companies right now, and likely will remain that way for the next few years (which is presumably when you'd be doing your PhD). If you don't want to work in a ""crowded"" area, but still want to work on something more promising than spintronics, ion-traps and photonic quantum computers are the two next most popular QC sub-fields for QC implementation.

+",2293,,2293,,8/31/2018 3:33,8/31/2018 3:33,,,,2,,,,CC BY-SA 4.0 +4125,1,4127,,8/31/2018 2:37,,4,194,"

After getting help here with XNOR, RCA & XOR linked lists, I am now curious about quantum XOR ciphers (Google returns ""no results"").

+ +
+

In cryptography, the simple XOR cipher is a type of additive cipher, an encryption algorithm that operates according to the principles:

+ $A \oplus 0 = A$,
+ $A \oplus A = 0$,
+ $(A \oplus B) \oplus C = A \oplus (B \oplus C)$,
+ $(B \oplus A) \oplus A = B \oplus 0 = B$,

+ where $\oplus$ denotes the exclusive disjunction (XOR) operation.

+ -Wikipedia

+
+ +
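For reference, the classical identities quoted above, applied as a one-time pad over bytes (an illustrative sketch; the message is arbitrary):

```python
# Quick check of the quoted XOR-cipher identities on bytes: (B xor A) xor A = B.
import secrets

msg = b"attack at dawn"
key = secrets.token_bytes(len(msg))                  # one-time pad A

cipher = bytes(m ^ k for m, k in zip(msg, key))      # encrypt:  B xor A
plain = bytes(c ^ k for c, k in zip(cipher, key))    # decrypt: (B xor A) xor A
assert plain == msg
```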

How would a quantum XOR cipher be expressed?

+",2645,,2645,,8/31/2018 3:02,8/31/2018 17:18,Quantum XOR Cipher Construction,,1,5,,,,CC BY-SA 4.0 +4126,1,8607,,8/31/2018 10:02,,24,888,"

I'm interested in the conversion between different sets of universal gates. For example, it is known that each of the following sets is universal for quantum computation:

+ +
    +
  1. $\{T,H,\textrm{cNOT}\}$
  2. +
  3. $\{H,\textrm{c}S\}$, where $S=T^2$ and $S^2=Z$, and $\mathrm{c}S = \lvert 0 \rangle\!\langle 0 \rvert {\,\otimes\,} \mathbf 1 + \lvert 1 \rangle\!\langle 1 \rvert {\,\otimes\,} S$.
  4. +
  5. $\{H,\textrm{ccNOT}\}$, where $\textrm{ccNOT}$ is also known as the Toffoli gate. Note that this case requires the introduction of an extra ancilla bit that records whether each of the amplitudes is real or imaginary, so that the entire computation only uses real amplitudes.
  6. +
+ +
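The stated relations between the gates, $S = T^2$ and $S^2 = Z$, and the definition of $\mathrm{c}S$ can be checked directly:

```python
import numpy as np

# Single-qubit phase gates in the computational basis.
T = np.diag([1, np.exp(1j * np.pi / 4)])
S = np.diag([1, 1j])
Z = np.diag([1, -1])
assert np.allclose(S, T @ T)   # S = T^2
assert np.allclose(Z, S @ S)   # Z = S^2

# cS = |0><0| (x) I + |1><1| (x) S  -- a diagonal two-qubit gate.
cS = np.kron(np.diag([1, 0]), np.eye(2)) + np.kron(np.diag([0, 1]), S)
assert np.allclose(cS, np.diag([1, 1, 1, 1j]))
```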

Now, let's say I've proven that the first set is universal. How can I write this set in terms of gates from the other sets? (It may not be possible to do so exactly.) The problem is that the other two cases are proven using a denseness argument in a particular space (here and here, much as you would use between $H$ and $T$ to generate arbitrary single-qubit rotations for the first set), each using a different subspace, and not by converting from one set to another. Is there an exact, direct conversion?

+ +

The particular sticking points are:

+ +
    +
  • (2 to 1): creating $T$ from controlled-$S$ and $H$. I could also accept creating any single-qubit phase gate that is not $S$, $Z$ or $S^\dagger$.
  • +
  • (3 to 1): creating a controlled-Hadamard from $H$ and Toffoli. (Controlled-Hadamard is the equivalent of $T$ if the target is the ancilla qubit.) Alternatively, Aharonov gives us a way to convert 3 to 2, so the (2 to 1) step would be sufficient.
  • +
+ +

For reference, section 4 of this paper seems to make some steps related to achieving the (3 to 1) case, but in aiming for generality, pedagogy has perhaps fallen by the wayside slightly.

+ +

Update

+ +

I recently came across this paper which essentially gives a necessary and sufficient condition for a given single-qubit unitary to be expressible in terms of a finite sequence of $H$ and $T$ gates. Building similar results for the other gate sets (e.g. necessary and sufficient condition for creating a two-qubit gate from $H$ and $cS$) would be a rigorous way of resolving this question one way or another.

+",1837,,1837,,10-09-2018 09:12,10/29/2019 21:32,Explicit Conversion Between Universal Gate Sets,,1,10,,,,CC BY-SA 4.0 +4127,2,,4125,8/31/2018 12:17,,6,,"

There are two simple ways in which you can generalise an ""XOR cipher"" (i.e., a one-time pad or Vernam cipher) to the quantum regime.

+ +
    +
  • One way is to use the fact that the Pauli operators form a quantum 1-design, which is a fancy way of saying that applying a uniformly random Pauli operator to any single-qubit quantum state makes it indistinguishable from the maximally mixed state — provided of course that which Pauli operator you perform is kept secret from any interfering party.

    + +

    Using this fact, you can realise a simple ""quantum XOR cipher"" between Alice and Bob by sharing a one-time pad between them, and then doing the following for any input state $\rho$ of Alice's:

    + +
      +
    1. Alice takes two bits $(x,z)$ from the one-time pad, and performs the transformation $\rho \mapsto \sigma := P \rho P^\dagger$, where $P = X^x Z^z$.

    2. +
    3. Alice transmits $\sigma$ to Bob. By averaging over all values of $(x,z) \in \{00,01,10,11\}$ it is easy to see that, to someone who doesn't know the value of $(x,z)$, the state $\sigma$ is indistinguishable from $\tfrac{1}{2} \mathbf 1$ and contains no information about the state $\rho$.

    4. +
    5. Bob recovers $\rho = P^\dagger \!\sigma P$ using the same operator $P = X^x Z^z$ defined by his copy of the one-time pad.

    6. +
  • +
  • Building on this idea, another way of generalising the XOR cipher (as Craig Gidney indicates, and which seems to be part of the folklore, see for example a post of mine on Physics.SE), is teleportation.

    + +

    In this case, the EPR state is akin to a shared random bit which is independent of any data which is to be encoded, and again must be carefully distributed between Alice and Bob.

    + +
      +
    1. The Bell-basis measurement performed by Alice is akin to computing the parity of the (arbitrary) input quantum state with that shared random bit;

    2. +
    3. The pair of classical bits which indicate the outcome play the role of the result of the parity computation, which contain no information about the state which is being ""encoded"";

    4. +
5. The correction performed by Bob is akin to reversing the parity computation, reproducing the input state originally possessed by Alice.

    6. +
    + +

    An intriguing feature of this analogy is that, whereas a one-time pad should only be used once, in the quantum regime the EPR pair can only be used once, as it is consumed. Also intriguing is that what one would think of as the encoded quantum state is effectively stored by the system playing the role of the shared random bit — the outcome of the ""parity computation"" instead plays the role of the reversible encoding of that qubit (this part of the protocol is effectively the same as the encoding of a state using a random Pauli operator).

  • +
+ +
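The Pauli one-time pad from the first item can be sketched numerically; this is illustrative code, and both the input state and the key choice at the end are arbitrary:

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Arbitrary single-qubit state rho to encrypt.
psi = np.array([[0.6], [0.8j]])
rho = psi @ psi.conj().T

# The four Pauli keys P = X^x Z^z for (x, z) in {0,1}^2.
paulis = [np.linalg.matrix_power(X, x) @ np.linalg.matrix_power(Z, z)
          for x in (0, 1) for z in (0, 1)]

# To an eavesdropper, the average over the four keys is maximally mixed.
avg = sum(P @ rho @ P.conj().T for P in paulis) / 4
assert np.allclose(avg, I2 / 2)

# Bob, who knows the key (x, z), inverts exactly.
P = paulis[3]          # e.g. the key (1, 1), i.e. P = XZ (arbitrary choice)
sigma = P @ rho @ P.conj().T
recovered = P.conj().T @ sigma @ P
assert np.allclose(recovered, rho)
```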

In both cases above, two random bits are involved in what might be thought of as the encoding procedure. This might seem deep, but actually, I would suggest that it is just the fact that a uniformly random Pauli operator is involved in both cases (albeit slightly more subtly in teleportation than in using the Pauli operators as a 1-design).

+",124,,124,,8/31/2018 17:18,8/31/2018 17:18,,,,0,,,,CC BY-SA 4.0 +4129,1,4130,,8/31/2018 18:43,,4,658,"

I've just learned about the density operator, and it seems like a fantastic way to represent the branching nature of measurement as simple algebraic manipulation. Unfortunately, I can't quite figure out how to do that.

+

Consider a simple example: the state $|+\rangle$, which we will measure in the computational basis (so with measurement operator $I_2$). The density operator of this state is as follows:

+

$\rho = |+\rangle\langle+| = \begin{bmatrix} \frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} \end{bmatrix} ⊗ \begin{bmatrix} \frac{1}{\sqrt{2}}, \frac{1}{\sqrt{2}} \end{bmatrix} += \begin{bmatrix} \frac 1 2 & \frac 1 2 \\ \frac 1 2 & \frac 1 2 \end{bmatrix}$

+

Since measuring $|+\rangle$ in the computational basis collapses it to $|0\rangle$ or $|1\rangle$ with equal probability, I'm imagining there's some way of applying the measurement operator $I_2$ to $\rho$ such that we end up with the same density operator as when we don't know whether the state is $|0\rangle$ or $|1\rangle$:

+

$\rho = \frac 1 2 \begin{bmatrix} 1 \\ 0 \end{bmatrix} ⊗ \begin{bmatrix} 1, 0 \end{bmatrix} ++ \frac 1 2 \begin{bmatrix} 0 \\ 1 \end{bmatrix} ⊗ \begin{bmatrix} 0, 1 \end{bmatrix} += \begin{bmatrix} \frac 1 2 & 0 \\ 0 & 0 \end{bmatrix} ++ \begin{bmatrix} 0 & 0 \\ 0 & \frac 1 2 \end{bmatrix}$

+

From there, we can continue applying unitary transformations to the density operator so as to model a measurement occurring mid-computation. What is the formula for applying a measurement operator to the density operator? Looking in the Mike & Ike textbook section on the density operator, I only see the density operator measurement formula for a specific measurement outcome. I'd like to know the density operator measurement formula which captures all possible results of the measurement.

+

As a followup question, I'm also curious as to the formula when measuring some subset of multiple qubits.

+",4153,,20465,,5/23/2022 20:16,5/23/2022 20:16,How is measurement modelled when using the density operator?,,1,0,,,,CC BY-SA 4.0 +4130,2,,4129,8/31/2018 19:10,,4,,"

To define a measurement, you need a set of measurement operators $P_i$, one for each possible measurement outcome, that satisfy a completeness relation +$$ +\sum_iP_i=\mathbb{I} +$$ +Let me specialise to the case of projective measurements, where $P_i^2=P_i$. +To calculate the probability of a given density operator $\rho$ giving outcome $i$, you simply calculate +$$ +p_i=\text{Tr}(P_i\rho), +$$ +and the state that you get as output is +$$ +\rho_i=\frac{P_i\rho P_i}{p_i}. +$$ +So, in the case of the measurement of a single qubit in the computational basis, you have +$$ +P_0=|0\rangle\langle 0|\qquad P_1=|1\rangle\langle 1|, +$$ +and you'll get $p_0=p_1=\frac12$. At that point, you proceed with using the separate measurement outcomes separately, unless there is a case where some other user doesn't know what the outcome was. Then they use a standard Bayesian approach to describe their state of knowledge as +$$ +\tilde\rho=\sum_ip_i\rho_i. +$$ +In this case, that's the maximally mixed state $\mathbb{I}/2$. But for anybody who knows what the measurement result is, you just keep going with the separate results. If it's too painful having to do the full calculation twice, you might do a trick like writing a state +$$ +\frac12(P_0+P_1\pm(P_0-P_1)). +$$ +Then you just do the calculation once and, at the end, choose either the + sign to see what happens for the 0 measurement result, or - to see what happens for the 1 result. But it probably doesn't actually reduce your workload.

+ +
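Applied to the $|+\rangle$ example from the question, these formulas can be checked numerically (an illustrative sketch):

```python
import numpy as np

# Worked example: rho = |+><+|, measured in the computational basis.
plus = np.array([[1], [1]]) / np.sqrt(2)
rho = plus @ plus.conj().T

P0 = np.array([[1, 0], [0, 0]])
P1 = np.array([[0, 0], [0, 1]])

# Outcome probabilities p_i = Tr(P_i rho).
p0 = np.trace(P0 @ rho).real
p1 = np.trace(P1 @ rho).real
assert np.isclose(p0, 0.5) and np.isclose(p1, 0.5)

# Post-measurement states rho_i = P_i rho P_i / p_i.
rho0 = P0 @ rho @ P0 / p0
rho1 = P1 @ rho @ P1 / p1

# Observer who does not know the outcome: Bayesian mixture = I/2.
rho_mixed = p0 * rho0 + p1 * rho1
assert np.allclose(rho_mixed, np.eye(2) / 2)
```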

If you want to talk about $n$ qubits where you measure just the first one, then you use the measurement operators +$$ +P_0=|0\rangle\langle 0|\otimes\mathbb{I}^{\otimes(n-1)}\qquad P_1=|1\rangle\langle 1|\otimes\mathbb{I}^{\otimes(n-1)} +$$ +and apply all the same formalism as before.

+",1837,,1837,,8/31/2018 19:29,8/31/2018 19:29,,,,2,,,,CC BY-SA 4.0 +4132,2,,4110,8/31/2018 22:51,,1,,"

Maybe this one can be useful. Their algorithm is called the quantum hyperparallel algorithm for matrix multiplication, and they state that its time complexity is $O(N^2)$, which is apparently the lower bound for matrix multiplication.

+ +

I won't describe the whole procedure, just the idea behind it. +Matrix multiplication is essentially a computation of inner products. +There is a quantum algorithm called the swap test which enables you to compute the overlap (inner product) between quantum states.

+ +
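For reference, the swap test itself can be simulated directly with small matrices. This is an illustrative sketch (the helper name `swap_test_prob0` is made up): an ancilla gets a Hadamard, controls a SWAP of the two registers, gets another Hadamard, and is then measured; the probability of outcome 0 is $\tfrac12 + \tfrac12|\langle\psi|\varphi\rangle|^2$.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I4 = np.eye(4)
SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]])
P0, P1 = np.diag([1, 0]), np.diag([0, 1])

def swap_test_prob0(psi, phi):
    """P(ancilla = 0) for the swap test on single-qubit states psi, phi."""
    state = np.kron(np.array([1, 0]), np.kron(psi, phi))      # ancilla in |0>
    cswap = np.kron(P0, I4) + np.kron(P1, SWAP)               # controlled-SWAP
    out = np.kron(H, I4) @ cswap @ np.kron(H, I4) @ state     # H, cSWAP, H
    proj = np.kron(P0, I4) @ out                              # project ancilla on |0>
    return np.vdot(proj, proj).real

psi = np.array([1, 0], dtype=complex)
phi = np.array([np.cos(0.4), np.sin(0.4)], dtype=complex)
overlap_sq = abs(np.vdot(psi, phi))**2
assert np.isclose(swap_test_prob0(psi, phi), 0.5 + 0.5 * overlap_sq)
```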

They based their algorithm on it. There seem to be no restrictions on the matrices. However, like many quantum algorithms, it requires oracle access.

+",4127,,26,,02-01-2019 22:35,02-01-2019 22:35,,,,3,,,,CC BY-SA 4.0 +4133,2,,1206,09-01-2018 03:38,,11,,"

Less formally-stated than the other answers, but for beginners I like the intuitive method outlined by Prof. Vazirani in this video.

+ +

Suppose you have a general two-qbit state:

+ +

$|\psi\rangle = \begin{bmatrix} \alpha_{00} \\ \alpha_{01} \\ \alpha_{10} \\ \alpha_{11} \end{bmatrix} = \alpha_{00}|00\rangle + \alpha_{01}|01\rangle + \alpha_{10}|10\rangle + \alpha_{11}|11\rangle$

+ +

Now suppose you measure the most-significant (leftmost) qbit in the computational basis (as in, collapse it to either $|0\rangle$ or $|1\rangle$). There are two questions we might ask:

+ +
    +
  1. What is the probability that the measured qbit collapses to $|0\rangle$? What about $|1\rangle$?
  2. +
  3. What is the state of the 2-qbit system after measurement?
  4. +
+ +

For the first question, the intuitive answer is this: take the sum of squares of all amplitudes associated with the value for which you want to find the probability of collapse. So, if you want to know the probability of the measured qbit collapsing to $|0\rangle$, you'd look at the amplitudes associated with cases $|00\rangle$ and $|01\rangle$, because those are the cases where the measured qbit is $|0\rangle$. Thus:

+ +

$P[|0\rangle] = |\alpha_{00}|^2 + |\alpha_{01}|^2$

+ +

Similarly, for $|1\rangle$ you look at the amplitudes associated with cases $|10\rangle$ and $|11\rangle$, so:

+ +

$P[|1\rangle] = |\alpha_{10}|^2 + |\alpha_{11}|^2$

+ +

As for the state of the 2-qbit system after measurement, what you do is cross out all the components of the superposition which are inconsistent with the answer you got. So, if you measured $|0\rangle$, then the state after measurement is:

+ +

$\require{cancel} |\psi\rangle = \alpha_{00}|00\rangle + \alpha_{01}|01\rangle + \cancel{\alpha_{10}|10\rangle} + \cancel{\alpha_{11}|11\rangle} = \alpha_{00}|00\rangle + \alpha_{01}|01\rangle$

+ +

However, this state is not normalized - the sum of squares does not add up to 1, and so you have to normalize it:

+ +

$|\psi\rangle = \frac{\alpha_{00}|00\rangle + \alpha_{01}|01\rangle}{\sqrt{|\alpha_{00}|^2 + |\alpha_{01}|^2}}$

+ +

Similarly, if you measured $|1\rangle$ then you'd get:

+ +

$\require{cancel} |\psi\rangle = \cancel{\alpha_{00}|00\rangle} + \cancel{\alpha_{01}|01\rangle} + \alpha_{10}|10\rangle + \alpha_{11}|11\rangle = \alpha_{10}|10\rangle + \alpha_{11}|11\rangle$

+ +

Normalized:

+ +

$|\psi\rangle = \frac{\alpha_{10}|10\rangle + \alpha_{11}|11\rangle}{\sqrt{|\alpha_{10}|^2 + |\alpha_{11}|^2}}$

+ +

And that's how you calculate the action of measuring one qbit in a multi-qbit state, in the simplest case!
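The whole recipe can be checked numerically. Here's a small numpy sketch with arbitrarily chosen amplitudes:

```python
import numpy as np

# amplitudes [a00, a01, a10, a11] of an arbitrary normalized 2-qbit state
psi = np.array([0.5, 0.5, 0.5j, 0.5])
assert np.isclose(np.sum(np.abs(psi) ** 2), 1.0)

# probability that the measured (leftmost) qbit collapses to |0> or |1>:
# sum the squared amplitude magnitudes of the consistent components
p0 = np.abs(psi[0]) ** 2 + np.abs(psi[1]) ** 2
p1 = np.abs(psi[2]) ** 2 + np.abs(psi[3]) ** 2
assert np.isclose(p0 + p1, 1.0)

# post-measurement state if |0> was observed:
# cross out the |10> and |11> components, then renormalize
post0 = np.array([psi[0], psi[1], 0, 0]) / np.sqrt(p0)
assert np.isclose(np.sum(np.abs(post0) ** 2), 1.0)
```

For this particular state both outcomes are equally likely, and after observing $|0\rangle$ the surviving components $|00\rangle$ and $|01\rangle$ carry probability $1/2$ each.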

+",4153,,,,,09-01-2018 03:38,,,,0,,,,CC BY-SA 4.0 +4134,2,,4110,09-01-2018 21:16,,-1,,"

First of all the cost on a classical computer of the dominant step can be improved from the $\mathcal{O}(C^2 N)$ in your question.

+ +

I managed to bring the classical cost down to $\mathcal{O}(C^{1.37} N)$ + in my new answer to the question on the Mathematics Stack Exchange that you linked.

+ +

Now, you have two questions:

+ +
    +
  • (1) can a quantum computer do step 1 in time faster than $\mathcal{O}(C^{1.37} N)$, and
  • +
  • (2) would this lead to an overall reduction in complexity of the entire regression algorithm?
  • +
+ +

The answer to (2) is yes. If your quantum algorithm improves the classical $\mathcal{O}(C^{1.37} N)$, then it improves the whole algorithm because this is the step that dominates the complexity of the whole problem. This is because $N$ has to be bigger than $C$ for the matrix to be invertible, which was pointed out by Chris Taylor in this answer.

+ +

The answer to (1) is an open problem.

+",2293,,2293,,09-02-2018 08:52,09-02-2018 08:52,,,,0,,,,CC BY-SA 4.0 +4135,1,4381,,09-02-2018 07:03,,3,297,"

What do the terms ""hyperparallel algorithm"" and ""hyperentangled states"" mean? I found them mentioned here[1]. In the abstract they say: ""Hyperentangled states, entangled states with more than one degree of freedom, are considered as a promising resource in quantum computation"", but I'm not sure what they mean by ""degree of freedom"" in this context.

+ +

[1]: Quantum hyperparallel algorithm for matrix multiplication (Zhang et al., 2016)

+",26,,55,,6/30/2021 8:01,6/30/2021 8:01,"What do ""hyperparallel algorithm"" and ""hyperentangled state"" mean?",,2,0,,,,CC BY-SA 4.0 +4136,2,,4135,09-02-2018 09:14,,2,,"

""Hyperparallel algorithm"" is not a popular term at all, and all instances of this phrase seem to be coming from the same group of co-authors.

+ +

Hyper-entanglement is at least used more in the ""mainstream"", although it is still not a very common word to hear. In this presentation by NASA they define it as a system being entangled in more than one degree-of-freedom (DOF) at the same time. Here we have two photons that are entangled with respect to three different DOFs at the same time:

+ +

+ +

In this paper there are two photons that are entangled with respect to polarization, and simultaneously also entangled in terms of their spatial DOF:

+ +

+ +

Using this definition, the authors of the paper you mention defined a hyper-CNOT and talked about ""hyper-parallel quantum computation"" as operating on two DOFs at the same time:

+ +
+

""In this paper, we investigate the possibility of achieving scalable + hyper-parallel quantum computation based on two DOFs of photon systems + without using the auxiliary spatial modes or polarization modes""

+
+",2293,,2293,,10-09-2018 13:03,10-09-2018 13:03,,,,2,,,,CC BY-SA 4.0 +4137,1,,,09-02-2018 09:20,,8,390,"

That is to say,

+
    +
  • what are some common or popular misconceptions about what constitutes quantum computing?
  • +
+

and

+
    +
  • how are those things misconceptions?
  • +
+
+

It could help to frame your explanation as if you were speaking to a layperson, such as Sophie (your meek-and-kindly-but-fox-news-informed great-aunt twice-removed), who has asked you out of genuine curiosity after hearing it referenced in passing multiple times on TV, radio, etc.

+

Sophie isn't looking for a career in computing, never-mind quantum computing (She's a darned good seamstress herself), but does grok basic maths and logic despite her technologically simpler lifestyle.

+

Sophie would like to know some mildly political things; such as how and why we fund studies in quantum-computing, why quantum-computing is studied, what we're actually using it for, as well as some mildly technical things; such as why her regular computer isn't "quantum" since it computes "quantities", how quantum-computers are any faster than her Athlon XP with Windows 2000, why doing things the way we've done them in traditional computing isn't satisfactory by itself, and when she can get that quantum-pocket-calculator that answers her questions before she asks them.

+
+

Of note: I am not Sophie nor do I have any aunt Sophie (to the best of my knowledge anyways; quantum-aunts notwithstanding!).
+I asked this question because I've read and heard a lot of random snippets of information on the topic, from which I have my own basic comprehension, but not an understanding which is strongly communicable to other people.
+Being slightly more computer-informed than other people around, I am also asked to try and explain the topic of quantum-computing for laypeople.
+Obviously I'm hardly an ideal teacher on the subject, but truncating conversations on the topic to the likes of "I know its not what you just described but I can't tell you how it's not that" never sits well with me, hence the rather arbitrary framing I offered.

+",4494,,-1,,6/18/2020 8:31,12/13/2018 20:11,What is quantum computing vs. what is not quantum computing,,3,1,,,,CC BY-SA 4.0 +4138,2,,4137,09-02-2018 16:47,,7,,"

The difficulty with explaining quantum computing is that quantum objects and processes have no direct classical analogue; they're an entirely new ontological category. For example, you might have learned in high school physics that light ""is both a particle and a wave"" in an attempt to relate it to two classical objects you can intuitively understand. In truth, light is neither a particle nor a wave, but rather a quantum phenomenon - something else entirely which requires learning a whole different language to understand. That language is mathematics.

+ +

This is far beyond what your average curious layperson (or those in the political process holding the science grant money purse strings) is willing to expend learning about quantum computing, so it behooves us to come up with simpler explanations. We can group these explanations into two categories:

+ +
    +
  1. What do quantum computers enable us to accomplish?
  2. +
  3. How do quantum computers work?
  4. +
+ +

The first question has been covered at length in zillions of pop science articles. Breaking RSA, simulating molecular interactions, all that stuff. Your question is specifically aimed at the second: how does a quantum computer work?

+ +

Usually when someone asks me this, I start out by asking them how they think classical computers work. If they don't have a somewhat-correct understanding of that, I just say quantum computers make use of weird quantum phenomena like superposition and entanglement to solve certain problems faster.

+ +

If they sort of understand how classical computers work, I might explain that superposition enables us to calculate, in some restricted sense, with the values of both 0 and 1 at the same time. However, I make sure to emphasize that this does not mean quantum computers ""try every possible value simultaneously"" and in the end if you have $n$ qbits you can only get $n$ bits of classical information from your quantum computation. I explain that we use a phenomenon called quantum interference to make getting the wrong answers less likely, and the right answers more likely.

+ +

Depending on how curious the person is, I might walk them through an explanation of the 2-norm similar to Scott Aaronson's approach in Quantum Computing Since Democritus. It helps if you make this entertaining and engaging for the person, using something like ""imagine you're a god, and you're gonna make your own version of the universe; now, just for fun, you want to mess with the way probability works..."" etc. Then you can use the concept of ""negative probability"" to expand on the interference explanation.

+ +

Past this point, we're getting into where you'd want to start using linear algebra to show them how things actually work, so I usually just direct them to my talk or run through it live depending on how much I like them.

+",4153,,4153,,09-02-2018 18:57,09-02-2018 18:57,,,,1,,,,CC BY-SA 4.0 +4139,1,4145,,09-02-2018 17:07,,6,600,"

The density operator can be used to represent uncertainty of quantum state from some perspective, aka a subsystem of the full quantum system. For example, given a Bell state:

+ +

$|\psi\rangle = \frac{|00\rangle + |11\rangle}{\sqrt{2}}$

+ +

where Alice has one qbit and Bob has the other, Bob does not know whether Alice has already measured her qbit and thus collapsed his to $|0\rangle$ or $|1\rangle$ (or $|+\rangle$ or $|-\rangle$ or whatever other basis Alice used). Thus we can write a density operator for the subsystem of Bob's qbit, which I believe would just be the maximally-mixed state:

+ +

$\rho = \begin{bmatrix} 1/2 & 0 \\ 0 & 1/2 \end{bmatrix}$

+ +

Given a multi-qbit system, how do we derive the density operator of some subsystem of those qbits? I'm interested in the answer for both a subsystem consisting of a single qbit and one consisting of some arbitrary subset of the qbits.

+",4153,,,,,09-03-2018 06:57,How do we derive the density operator of a subsystem?,,2,1,,,,CC BY-SA 4.0 +4140,2,,4139,09-02-2018 17:15,,3,,"

The density operator is defined to be $\rho = \left|\psi\right> \left<\psi\right|$. To get the ""reduced"" density operator for a subsystem, you have to form the trace (i.e. sum) over all of the qubits not inside your subsystem.

+ +

For example, given the Bell state $\left|\psi\right> = \frac{\left|00\right> + \left|11\right>}{\sqrt{2}}$, the density matrix is $\rho = \frac{1}{2} \big( \left|00\right>\left<00\right| + \left|00\right>\left<11\right| + \left|11\right>\left<00\right| + \left|11\right>\left<11\right| \big) = +\frac{1}{2} +\begin{bmatrix} +1 & 0 & 0 & 1 \\ +0 & 0 & 0 & 0 \\ +0 & 0 & 0 & 0 \\ +1 & 0 & 0 & 1 \\ +\end{bmatrix}$. +For either qubit, performing the partial trace over the other qubit results in the reduced density matrix +$\rho_1 = +\frac{1}{2} +\begin{bmatrix} +1 & 0 \\ +0 & 1 \\ +\end{bmatrix}$.
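This calculation can be checked quickly with numpy (the reshape/index conventions below are my own):

```python
import numpy as np

psi = np.array([1, 0, 0, 1]) / np.sqrt(2)          # (|00> + |11>)/sqrt(2)
rho = np.outer(psi, psi.conj())                    # |psi><psi|

# partial trace over the second qubit: reshape rho[(i,j),(k,l)] into a
# 4-index tensor and sum over the matching second-qubit indices j = l
rho1 = np.einsum('ijkj->ik', rho.reshape(2, 2, 2, 2))

assert np.allclose(rho1, np.eye(2) / 2)            # maximally mixed state
```

Tracing over the first qubit instead (`'jijk->ik'` on the same tensor, i.e. summing the first-qubit indices) gives the same maximally mixed result, as expected by symmetry.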

+",,user1039,,user1039,09-02-2018 20:41,09-02-2018 20:41,,,,1,,,,CC BY-SA 4.0 +4141,1,4156,,09-02-2018 17:37,,7,554,"

I asked a question on Physics Stack Exchange but no one answered the question and I didn't get enough views on it. I am asking it on QCSE because the question is related to experimental quantum computation realized through NMR.

+ +

For an ensemble of identical atoms in a superposition state $|\phi\rangle=\alpha|S_z^+\rangle+\beta|S_z^-\rangle$ where $|\alpha|^2$ and $|\beta|^2$ are not equal (although $|\alpha|^2 + |\beta|^2 = 1$) and give the populations of $|S_z^+\rangle$ and $|S_z^-\rangle$ respectively, we can drive a transition between the two basis states; we call this phenomenon population transfer (please correct me if I am wrong). Mathematically, the field-induced transitions change the values of $|\alpha|^2$ & $|\beta|^2$.

+ +

However, there can be one more phenomenon going on here called + polarization transfer which is distinct from population transfer because polarization really counts the total spin magnetization of a state in the context of NMR and EPR and maybe Quantum Optics too.

+ +

Now the third phenomenon, i.e. coherence transfer (no + classical analogue), can't occur between two states but needs a three-level + system, say $|1\rangle$, $|2\rangle$ and $|3\rangle$, and the field-driven transitions between states $|1\rangle$ & $|2\rangle$ and states $|1\rangle$ & + $|3\rangle$ somehow create a transition between states $|2\rangle$ & + $|3\rangle$ even though there is no driving field at the frequency corresponding to the $|2\rangle$ & $|3\rangle$ transition.

+ +

The last two phenomena written in bold are what I do not understand. Any insight into them will be very helpful, + and the mathematics behind them will be highly appreciated. This link + also tries to explain the difference between population transfer, + polarization transfer and coherence transfer through density + matrices, but I cannot see much physical explanation of the + phenomena.

+",2817,,26,,09-02-2018 17:50,09-04-2018 11:44,"Difference between coherence transfer, polarization transfer and population transfer?",,1,0,,,,CC BY-SA 4.0 +4142,1,4144,,09-03-2018 01:20,,8,218,"

Is it possible to obtain amplitude information (in lexicographic ordering) for a particular qubit register in Q#?

+ +

For example, in the following code:

+ +
operation GetProbabilityAmplitude() : Double[]
+{
+    body
+    {
+        mutable result = new Double[4];
+        using (register = Qubit[2])
+        {
+            H(register[0]);
+            CNOT(register[0], register[1]);
+            // ...put the amplitude doubles in the result array
+        }
+        return result;
+    }
+}
+
+ +

The doubles in result should be {0.7071, 0.0, 0.0, 0.7071} (i.e. the amplitudes $1/\sqrt{2}$, $0$, $0$, $1/\sqrt{2}$). I looked quite a bit for it in the documentation but couldn't find anything about getting the amplitudes.

+",1287,,2879,,09-03-2018 05:15,09-03-2018 05:15,How does one obtain amplitude information in Q#?,,2,0,,,,CC BY-SA 4.0 +4143,2,,4142,09-03-2018 03:26,,3,,"

I never looked at Q# but it looks like by using Dump functions +you can output it. Hope this helps.

+",4127,,,,,09-03-2018 03:26,,,,0,,,,CC BY-SA 4.0 +4144,2,,4142,09-03-2018 05:06,,10,,"

Yes, it is possible to obtain this information, but only for troubleshooting purposes, not for using it in the code.

+ +
+ +

Dump functions dump the status of the target machine into a file or to the console output. If the program is executed on the full-state simulator, this status will include the wave function of the whole system (for DumpMachine) or of the register (for DumpRegister).

+ +

So you could do the following:

+ +
using (register = Qubit[2]) {
+    H(register[0]);
+    CNOT(register[0], register[1]);
+    DumpMachine("""");
+    // to avoid ReleasedQubitsAreNotInZeroState exception
+    ResetAll(register);
+}
+
+ +

and get the following amplitudes (each one is a complex number):

+ +
Ids:    [1;0;]
+Wavefunction:
+0:      0.707107        0
+1:      0       0
+2:      0       0
+3:      0.707107        0
+
+ +
+ +

Note that this does not allow you to implement the GetProbabilityAmplitude() function which you requested. If you are running a Q# program on a simulator, it will let you see the wave function but it will not let you make any decisions in the program based on that information. The intent is twofold: to facilitate debugging on a simulator while not allowing you to implement any logic which would be impossible to execute on a quantum computer. Since the execution on a quantum computer won't give the program direct access to the quantum state, it's better not to rely on this feature in the code.

+",2879,,,,,09-03-2018 05:06,,,,0,,,,CC BY-SA 4.0 +4145,2,,4139,09-03-2018 06:57,,3,,"

Let's say you have some density matrix, $\rho$, over a set of qubits $1,2,\ldots ,n$. (If you have a pure state $|\psi\rangle$, set $\rho=|\psi\rangle\langle\psi|$.) Let's say that the subsystem that we want the reduced density operator of is specified by the set of qubits $S$ (and all the others are $\bar S$). You have that +$$ +\rho_S=\text{Tr}_{\bar S}(\rho), +$$ +where $\text{Tr}_{\bar S}$ is the partial trace over the set $\bar S$. What does this mean in practice? Pick any orthonormal basis $\{|\phi_i\rangle\}_{i=1}^{2^{|\bar S|}}$ of the Hilbert space of $\bar S$, then you simply calculate +$$ +\text{Tr}_{\bar S}(\rho)=\sum_{i=1}^{2^{|\bar S|}}\left(\langle \phi_i|_{\bar S}\otimes\mathbb{I}_S\right)\rho\left(| \phi_i\rangle_{\bar S}\otimes\mathbb{I}_S\right). +$$ +Usually, you just pick the computational basis. So, in your example of +$$ +\rho=\frac12\left(|00\rangle\langle 00|+|00\rangle\langle 11|+|11\rangle\langle 00|+|11\rangle\langle 11|\right), +$$ +you pick the basis $\{|0\rangle,|1\rangle\}$, and you calculate +$$ +\rho_A=(\mathbb{I}\otimes\langle 0|)\rho(\mathbb{I}\otimes|0\rangle)+(\mathbb{I}\otimes\langle 1|)\rho(\mathbb{I}\otimes|1\rangle)=\frac12(|0\rangle\langle 0|+|1\rangle\langle 1|). +$$

+ +

To make this calculation a bit shorter, it's worth noting that the partial trace is permutation invariant over the subsystems $\bar S$ that you're tracing out. So, you can pick up a ket from the left-hand side, and match it with a bra on the right-hand side (but make sure you always match terms from the same subsystem, and that that subsystem is in $\bar S$): +$$ +\text{Tr}_B(\rho)=\frac12\left(|0\rangle\langle0|\langle0|0\rangle+|1\rangle\langle0|\langle0|1\rangle+|0\rangle\langle1|\langle1|0\rangle+|1\rangle\langle1|\langle1|1\rangle\right), +$$ +and you can now simply read off this final result. It's always a good check, to make sure you haven't messed up too badly, that the $\rho_S$ you output is (i) Hermitian, and (ii) has trace 1.
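The projector formula translates almost directly into numpy. The sketch below (the function name and qubit layout are my own choices) traces out the last qubit by summing $(\mathbb{I}\otimes\langle i|)\rho(\mathbb{I}\otimes|i\rangle)$ over the computational basis:

```python
import numpy as np

def trace_out_last_qubit(rho, n):
    """Tr over the last of n qubits, via sum_i (I (x) <i|) rho (I (x) |i>)."""
    dim = 2 ** (n - 1)
    out = np.zeros((dim, dim), dtype=complex)
    for i in range(2):
        bra = np.zeros((1, 2))
        bra[0, i] = 1
        P = np.kron(np.eye(dim), bra)     # (I (x) <i|), shape (dim, 2*dim)
        out += P @ rho @ P.conj().T
    return out

# check on rho = |psi><psi| with psi = (|00> + |11>)/sqrt(2)
psi = np.array([1, 0, 0, 1]) / np.sqrt(2)
rho = np.outer(psi, psi)
rhoA = trace_out_last_qubit(rho, 2)
assert np.allclose(rhoA, np.eye(2) / 2)
# the sanity checks from the answer: Hermitian, trace 1
assert np.allclose(rhoA, rhoA.conj().T)
assert np.isclose(np.trace(rhoA), 1)
```

Tracing out an arbitrary subset of qubits works the same way, just with bigger identity factors on either side of the traced-out bra/ket.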

+",1837,,,,,09-03-2018 06:57,,,,0,,,,CC BY-SA 4.0 +4146,2,,3954,09-03-2018 07:03,,2,,"

Following are some of the latest resources on Quantum Biology

+ +

Youtube video by the Royal Institution: Quantum Biology: An Introduction

+ +

Wikipedia page on quantum biology (and links therein)

+ +

University of Illinois at Urbana-Champaign, NIH Center for Macromolecular Modeling & Bioinformatics, quantum biology research page

+ +

TED Talk by Jim Al-Khalili: How quantum biology might explain life’s biggest questions

+",4501,,23,,09-03-2018 18:12,09-03-2018 18:12,,,,2,,,,CC BY-SA 4.0 +4147,2,,4137,09-03-2018 07:50,,3,,"

There are lots of separate questions in there: politics, physics, etc. and I won't pretend to answer all of it, but let me try to get towards what I think is the core of the matter.

+ +
+

How do I explain to the interested non-specialist what I do (the general field)?

+
+ +

My explanation actually varies a lot depending on who I'm talking to, and depends a lot on having a two-way conversation about it. However, one approach that I quite like, because it doesn't tell any lies or half-truths, it helps avoid some of the common misconceptions, and doesn't require any mind-blowing quantum stuff that I don't really have the right language to describe (apart from with maths), is the following:

+ +

I assume people are familiar with the concept of logic gates. Now, a computation is just a sequence of logic gates all wired up in a specific order for that specific computation. And you can think of a processor as something that can dynamically rewire different circuits. (Of course, the connectivity isn't dynamically changed, it's done with extra gates.) But the point is that every processor, from your basic pocket calculator up to the most powerful supercomputer, is constructed out of the same fundamental set of logic gates. (In fact, just one type of gate is sufficient. The NAND gate, for example, is 'universal' for classical computation).

+ +

Now, imagine I suddenly gave you a new type of logic gate that cannot be straightforwardly made out of your existing set of gates. That instantly gives you a lot of potential to rewrite your existing software to take advantage of this new gate. That does not mean that every algorithm will be faster. Some won't benefit at all, while some specific ones might show ridiculous speed improvements. Do we know what proportion falls into each camp? Not really, until you go and work on the algorithms extensively.

+ +

What we do know in the case of quantum computation is some specific examples that are radically faster than the existing classical algorithms. Things like Shor's algorithm for factoring numbers, and Grover's search.

+ +
+

If it's that simple, do we have quantum computers already?

+
+ +

Not really. That extra logic gate is hard to build, and you can't just interface it directly with the existing logic gates; you have to build those from scratch as well in a new technology (and, really, we're still scratching around trying to figure out what a good technology to use actually is). It's a work in progress that's receiving a lot of attention at the moment. There are small working prototypes which are potentially on the verge of performing computations that we can't do classically, but they're still a long way from useful implementations of algorithms such as Grover or Shor.

+ +
+

Why should you care?

+
+ +

That depends on what your interests are. For Sophie, unless she's ultra-paranoid about her tap-and-go seamstress payment system being hacked, she probably doesn't care so much about the practical side. It's not yet clear that it's going to have a significant bearing on things. She can probably trust her payment system provider to take care of things as much as they can, and forget about it.

+ +

Why do I care? Quantum Mechanics is mind-blowing. I'm really excited not to just be a part of trying to categorise its effect, and explain certain things that are going on, but to harness that quantum weirdness and bend it to my will (cue maniacal laughter!)

+ +

Why does it get funding? Well, there are a few reasons (and I'm going to make some very sweeping statements here that are too broad). For one, the few specific killer applications that we have are really useful to specific groups. Shor's algorithm will allow government agencies to spy on communications that use existing public key cryptography systems (practically everybody). Grover's search will help with all sorts of computer sciencey problems which people like Google would really like help with.

+ +

Another reason is just to drive science on. As science progresses, whether or not the final outcome is ever realised as originally envisaged, all sorts of stuff comes out along the way (the classic example is the space program). Everybody wants to invest to make sure they're at the forefront of those new developments, and it's a game of chicken. Once there's major investment from one party, nobody wants to be left behind. Compared to many options, quantum computing appears reasonably accessible (while still being a huge challenge, it's not string theory) and it's amazing how much quantum mechanics has already permeated existing technology. The processor in your computer might not enable quantum computing (because it doesn't have the extra gate), but it still relies on quantum effects for its manufacture and functioning. Lasers are ubiquitous now (CD, DVD players), and their fundamental operating principle is quantum mechanical.

+",1837,,,,,09-03-2018 07:50,,,,0,,,,CC BY-SA 4.0 +4148,2,,4137,09-03-2018 08:46,,2,,"

When explaining quantum computers, we must deal with the fact that most people don't know the fundamentals of classical computers. Since the difference lies at this fundamental level, that can present a problem.

+ +

Nevertheless, many people know that information technology comes in analogue form as well as digital. That already gives them an intuition that there can be different devices that do the same job, but do it differently and with different strengths and weaknesses. This sets up the place that quantum will take, as a third type of information technology distinct from the other two.

+ +

For those that are more well-versed in analogue and digital computing, you could invite them to think about a device that combines the strengths of both. That, I think, is a good way to think about what a quantum computer is: Wave-particle duality becomes analogue-digital duality when we think in terms of computing.

+ +

So I guess I would invite your wise aunt to think about issues like these, tailored to her current knowledge.

+",409,,,,,09-03-2018 08:46,,,,0,,,,CC BY-SA 4.0 +4149,1,,,09-03-2018 10:37,,3,358,"

I have been active in this community for a while and I repeatedly see questions and answers referring to the so-called Tensor Networks. That makes me wonder what those elements are and what the motivation behind their construction is. References about where to look for the motivation and construction of these are sought in this question.

+ +

Additionally, an insight into the importance of these elements in the quantum computing and/or quantum communication paradigms would be helpful too, as the concept is found on this site in several places.

+",2371,,2371,,09-04-2018 07:15,08-05-2019 13:52,What are Tensor Networks and which is the relationship they have with quantum computing?,,1,5,,,,CC BY-SA 4.0 +4150,1,,,09-03-2018 11:35,,5,276,"

One of the most promising applications of a quantum computing is the design of new drugs and materials:

+ +
+

Quantum computers promise to run calculations far beyond the reach of + any conventional supercomputer. They might revolutionize the discovery + of new materials by making it possible to simulate the behavior of + matter down to the atomic level.

+
+ +

source: MIT Technology Review - Will Knight - February 21, 2018

+ +

At this moment quantum computers are not powerful enough to do these calculations / simulations. So we do not yet have the proper hardware for this.

+ +

But assuming we had quantum computers today that are powerful enough, would we know how to use them to revolutionize the design of better drugs and materials in some domains?

+ +

So is the current obstacle for this revolution that we do not yet have the proper hardware (= a powerful quantum computer), or are there actually 2 obstacles:

+ +
    +
  • having powerful quantum computer (hardware)
  • +
  • knowing which quantum algorithms are needed (software)
  • +
+",2529,,2529,,09-04-2018 11:34,09-04-2018 11:34,Do we really know how a universal quantum computer can be used to revolutionize the design of new drugs or materials?,,1,0,,,,CC BY-SA 4.0 +4151,2,,4150,09-03-2018 13:59,,6,,"

There is not a complete story from ""run quantum computation"" to ""make a billion dollars via slightly better batteries"". The vague idea is that a new tool capable of giving new insights into the behavior of materials will lead to important discoveries.

+ +

It's unrealistic to expect a complete quantum-compute-to-engineering-improvement story since the most important discoveries are often the ones that aren't foreseen. I'm sure there are thousands, if not millions, of examples of this happening with classical computers. For example, the study of chaos theory can be traced back to computer simulations of weather behaving in an unexpected way.

+ +

One specific thing we know quantum computers will be able to do reasonably well is compute the ground state energy of small but classically hard systems. Error-corrected algorithms to do this have improved immensely over the past few years (from runtimes of months with very very optimistic assumptions about hardware [1] down to runtimes of hours with plausible 15-years-from-now hardware assumptions [2]). There has also been substantial improvement in NISQ algorithms, though it remains to be seen if they will improve enough to successfully run classically hard cases on non-error-corrected hardware.

+ +

Apparently knowing the ground state energy of a system allows predicting reaction rates and other important chemistry numbers. But I'm not a chemist so I can't give much more detail than that.

+",119,,,,,09-03-2018 13:59,,,,4,,,,CC BY-SA 4.0 +4152,1,,,09-03-2018 17:58,,6,77,"

Are there any good references from which I can learn about photon entanglement using spontaneous parametric down-conversion (SPDC)?

+",4505,,26,,09-03-2018 18:06,09-03-2018 18:06,Photon Entanglement using SPDC,,0,3,,,,CC BY-SA 4.0 +4153,1,4154,,09-04-2018 06:00,,6,3723,"

Can we use a classical XOR gate in a quantum circuit? Or are there any alternatives to the XOR gate?

+",4505,,26,,12/23/2018 13:08,12/23/2018 13:08,Classical XOR gate in Quantum Circuit,,1,1,,,,CC BY-SA 4.0 +4154,2,,4153,09-04-2018 06:39,,10,,"

You can't directly use a classical XOR gate inside a quantum circuit because the usual construction of such a gate is a classical construction - it won't preserve coherence. In other words, it will function just fine if you input a 0 or a 1 as each input, but it won't perform as you'd need it to if you supplied it with a superposition.

+ +

Instead, you can build XOR out of quantum circuit elements, so that it behaves in exactly the same way as it would for classical inputs, but it does preserve superposition. Indeed, XOR is simple as a reversible circuit: just use a controlled-not! The target qubit is the output you want. You can readily verify this just by constructing the truth table.
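Here's a quick numerical check of that truth table (a numpy sketch; the helper names are arbitrary) showing that the controlled-not maps $|a, b\rangle \mapsto |a, a \oplus b\rangle$ on computational basis states:

```python
import numpy as np

# controlled-NOT with the first qubit as control, second as target
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

def basis(a, b):
    """Computational basis state |a, b> as a length-4 vector."""
    v = np.zeros(4)
    v[2 * a + b] = 1
    return v

# CNOT reproduces XOR on the target qubit for every classical input
for a in (0, 1):
    for b in (0, 1):
        out = CNOT @ basis(a, b)
        assert np.array_equal(out, basis(a, a ^ b))
```

Because CNOT is a linear (unitary) map, the same behaviour automatically extends to superpositions of these basis states.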

+ +

Indeed, you can construct all classical gates in this way. For example, the AND gate: +

+",1837,,,,,09-04-2018 06:39,,,,2,,,,CC BY-SA 4.0 +4155,1,4157,,09-04-2018 10:46,,4,89,"

In reference to this recent nature article: https://www.nature.com/articles/s41567-018-0241-6

+ +

Specifically, does this warrant a new type of gate?

+",419,,26,,12/23/2018 13:08,12/23/2018 13:08,What is the significance of recent demonstration of a passive photon–atom qubit swap operation?,,1,0,,,,CC BY-SA 4.0 +4156,2,,4141,09-04-2018 11:24,,4,,"

Given a quantum system in a state defined by a density matrix $\rho$, it is accepted terminology to use the term population for the diagonal matrix elements (not necessarily in the computational basis). Since a normalized vector corresponds to a pure state, we can define the population of the pure state $\psi$ by: +$$P_{\psi} = \langle \psi | \rho | \psi \rangle$$ +According to this definition, the sum of the populations over an orthonormal basis of pure states is unity.

+ +

When the system evolves, $\rho = \rho(t)$, the populations also change in time. We say that a population transfer from a state $a$ to a state $b$ occurs when the system starts with a high population of $a$ and a low population of $b$ and ends with a low population of $a$ and a high population of $b$, with all other populations unchanged.

+ +

Usually, when the density matrix corresponds to photon polarization the same situation can be called a polarization transfer. When the density matrix corresponds to a spin system it can be called a spin transfer, and sometimes also a polarization transfer.

+ +

Sometimes (for example in spectroscopy) the system is a tensor product of two subsystems, for example two energy levels ($n$ and $n+1$) of a spinning electron. A suitable basis in this case can be chosen as $\{| n \downarrow \rangle, | n \uparrow \rangle, | n+1 , \downarrow \rangle, | n+1 , \uparrow \rangle \}$; then the term population transfer can be reserved for the energy level reduced density matrix: +$$\rho_E = \mathrm{tr}_{\downarrow, \uparrow} \rho$$ +while the term spin transfer can be used for the spin reduced density matrix +$$\rho_s = \mathrm{tr}_{n, n+1} \rho$$

+ +

Population transfers can also be defined for projected density matrices, for example a spin transfer of the density matrix projected on the level $n$.

+ +

Coherences, on the other hand, denote the off-diagonal elements of a density matrix, $\langle b | \rho | a\rangle$. The motivation for this definition is that a density matrix with solely diagonal terms describes a classical state and can be replaced by a probability distribution; the quantum correlations are carried by the off-diagonal terms.

+ +

Again, when the system evolves it can happen that the reduced density matrices of two subsystems switch from high coherence to low coherence and vice versa. This is the case of coherence transfer.

+ +

Clearly, when the system is a two-level one, there is only one independent off-diagonal element, $\rho_{12} = \rho_{21}^*$, and we cannot talk about a coherence transfer.

+ +

An example for spin coherence transfer in the above spectroscopy case is given by the following (tensor product notation is used, where the left matrix lives in the energy-level Hilbert space and the right one in the spin space): +$$\rho_{\mathrm{init}} = \frac{1}{2}\left(\begin{bmatrix}0 & 0 \\0 & 1 \end{bmatrix}\otimes\begin{bmatrix}\frac{1}{2} & \frac{1}{2} \\\frac{1}{2} & \frac{1}{2} \end{bmatrix} + \begin{bmatrix}1 & 0 \\0 & 0 \end{bmatrix}\otimes\begin{bmatrix}\frac{1}{2} & 0 \\0 & \frac{1}{2} \end{bmatrix} \right)$$

+ +

$$\rho_{\mathrm{fin}} = \frac{1}{2}\left(\begin{bmatrix}0 & 0 \\0 & 1 \end{bmatrix}\otimes\begin{bmatrix}\frac{1}{2} & 0 \\0 & \frac{1}{2} \end{bmatrix} + \begin{bmatrix}1 & 0 \\0 & 0 \end{bmatrix}\otimes\begin{bmatrix}\frac{1}{2} & \frac{1}{2} \\\frac{1}{2} & \frac{1}{2} \end{bmatrix} \right)$$

+ +

Here, one can see a coherence transfer from the energy level $n$ to the energy level $n+1$, because the projected density matrices on the energy levels switched coherences.

+ +
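
The two matrices above can be checked numerically (a numpy sketch, with the density matrices normalized to unit trace via the $1/2$ prefactor; basis ordering $\{| n \downarrow \rangle, | n \uparrow \rangle, | n+1 \downarrow \rangle, | n+1 \uparrow \rangle\}$):

```python
import numpy as np

P_n   = np.diag([1.0, 0.0])   # projector on energy level n
P_np1 = np.diag([0.0, 1.0])   # projector on energy level n+1
coh = np.full((2, 2), 0.5)    # coherent spin state (off-diagonals 1/2)
mix = np.eye(2) / 2           # incoherent, maximally mixed spin state

rho_init = 0.5 * (np.kron(P_np1, coh) + np.kron(P_n, mix))
rho_fin  = 0.5 * (np.kron(P_np1, mix) + np.kron(P_n, coh))

print(np.trace(rho_init), np.trace(rho_fin))  # both equal 1
print(rho_init[:2, :2])  # level-n block: no off-diagonal coherence initially
print(rho_fin[:2, :2])   # level-n block: coherent after the transfer
```


+ +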

Please see the following work by Aubourg and Viennot, where they define populations and coherences of a spin chain along the same lines (equations 5-7).

+",4263,,4263,,09-04-2018 11:44,09-04-2018 11:44,,,,2,,,,CC BY-SA 4.0 +4157,2,,4155,09-04-2018 11:51,,4,,"

This does not warrant a new type of gate. When we write down quantum circuits, each 'wire' corresponds to a single qubit. However, we do not (usually) specify what technology any of these qubits is made out of. You might typically assume that they're all the same technology (e.g. solid state, photonic,...) but there is no need to do so. There are very good reasons for wanting to interface different types of qubit, particularly static and flying qubits, to take advantage of the benefits of both.

+ +

So, on your quantum circuit, you specify that you want a swap gate. This is an abstraction, and does not tell you how you're physically going to realise it. The cited paper is showing one way of implementing it when the two qubits are two specific (different) types. But it's not a new gate.

+ +

As for the significance, that comes back to why you want to have both static and flying qubits in your experiment. Flying qubits are great if you need to connect two distant components because they can travel long distances with relatively little decoherence (usually more relevant to different quantum information protocols, rather than computation specifically, but some current quantum computer designs require distributed blocks). Static qubits are great if you actually want to manipulate the quantum state, interacting with other qubits etc. For example, we often talk in quantum information about processes such as ""Alice sends a qubit to Bob"" and then we talk about Bob holding the particular state he's received. So Alice probably sent it using a photon because they were far apart. But if Bob wants to hold onto the state, planning on doing something with it later, he needs to transfer it onto a static qubit. Perhaps, for example, Alice and Bob want to share a Bell pair, but their quantum communication channel is noisy, so Alice will send many Bell pair halves to Bob, and they will later perform a distillation protocol. The distillation process will be highly non-linear, and probably makes more sense on static qubits.

+",1837,,,,,09-04-2018 11:51,,,,1,,,,CC BY-SA 4.0 +4158,1,4171,,09-05-2018 01:07,,4,234,"

In the proof of Proposition 2.52 of John Watrous' QI book, there is the statement that $\text{im}(\eta(a))\subset\text{im}(\rho)$, where $\rho=\sum_{i=1}^{N}\eta(i)$ is a sum of positive operators and $\rho$ has trace one.

+ +

I don't see $\text{im}(\eta(a))\subset\text{im}(\rho)$, could someone please help explain. Thanks!

+",2375,,55,,07-01-2021 10:00,07-01-2021 10:00,Image of a sum of positive operators contains the images of each individual operator?,,2,2,,,,CC BY-SA 4.0 +4159,2,,4158,09-05-2018 02:22,,2,,"

$\eta(a) \subseteq \bigcup_{a \in \Sigma} \eta(a)$, if I may write it this way as a union. +By using the property that if a subset is included in another, the images follow this inclusion too (Wikipedia link for images):

+ +

$$\mathrm{im}(\eta(a)) \subseteq \mathrm{im}\Big(\bigcup_{a \in \Sigma} \eta(a)\Big) = \bigcup_{a \in \Sigma} \mathrm{im}(\eta(a))$$

+ +

By definition, $\rho$ is a sum over all elements of the alphabet $\Sigma$, so its image appears on the right-hand side of the inclusion.

+",4127,,26,,5/13/2019 16:07,5/13/2019 16:07,,,,0,,,,CC BY-SA 4.0 +4160,1,4170,,09-05-2018 04:00,,6,675,"

Why are conditional phase shift gates, such as CZ, symmetrical? Why do both the control and target qubit pick up a phase?

+ +

Furthermore, assuming that they are symmetrical, when using a CNOT gate as an H gate on the target qubit, a CZ gate, and another H gate on the target qubit, wouldn't the CZ gate cause the control qubit to pick up a phase?

+ +

For example, if the control qubit is an equal superposition of $|0\rangle$ and $|1\rangle$ i.e. $|+\rangle$ and then this version of the CNOT gate is implemented, wouldn't the control qubit end up with a phase?

+",3061,,26,,12/23/2018 13:07,12/23/2018 13:07,Symmetry in Conditional Phase Shift Gates and Realizing CNOT through HCZH,,2,0,,,,CC BY-SA 4.0 +4161,2,,4160,09-05-2018 04:11,,5,,"

It's not that both the qubits are independently picking up a phase, it's that the two qubit state itself is picking up that phase.

+ +

$$\mathbf{CZ}|1\rangle\otimes |1\rangle = |1\rangle\otimes(-1\times |1\rangle) = -|1\rangle\otimes |1\rangle$$

+ +

So although the second qubit is the one that has the Z applied and is actually picking up the phase, that entire $|11\rangle$ state is picking up the phase. If it helps you can look at the effect on a generic state in the computational basis:

+ +

$$\mathbf{CZ}(\alpha |00\rangle + \beta |01\rangle + \gamma |10\rangle + \delta |11\rangle) = \alpha |00\rangle + \beta |01\rangle + \gamma |10\rangle - \delta |11\rangle$$

+ +

so the key action is that CZ flips the sign of a $|11\rangle$ two qubit state.

+ +
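
For concreteness, here is a small numpy check of this sign flip (a sketch; basis ordering $|00\rangle, |01\rangle, |10\rangle, |11\rangle$):

```python
import numpy as np

# CZ in the computational basis |00>, |01>, |10>, |11>
CZ = np.diag([1, 1, 1, -1])

ket11 = np.array([0, 0, 0, 1])
print(CZ @ ket11)   # [0 0 0 -1]: the joint state picks up the phase

# On a generic superposition only the |11> amplitude changes sign.
state = np.array([0.5, 0.5, 0.5, 0.5])
print(CZ @ state)   # [0.5 0.5 0.5 -0.5]
```


+ +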

If you follow this through on your example, you should see that the sign flip does affect the joint state in the $|11\rangle$ case, but when passed through the H gates it combines in exactly the right way to give your desired behavior.

+",3056,,,,,09-05-2018 04:11,,,,0,,,,CC BY-SA 4.0 +4162,1,4165,,09-05-2018 10:23,,7,344,"

Why are quantum computers scalable?

+ +

With the subjects of spontaneous collapse models and decoherence in mind, it seems to me that the scalability of quantum computers is something which is not only physically difficult to achieve but also theoretically.

+ +

Measurement

+ +

When a measurement is made, a quantum state, whose probability amplitude is a Gaussian function, becomes an eigenstate, whose probability is definite. This change from quantum state to eigenstate on measurement happens also to qubits which change to classical bits in terms of information.

+ +

Measurement, decoherence & quantum Darwinism

+ +

From the work of Zurek one starts to see the environment as a valid observer, and indeed one that makes measurements. Within this view, how can we have scalable quantum computers if many-qubit systems create more and more decoherence and, as such, from my current understanding, inevitably reduce such a system of qubits to a system of eigenstates?

+ +

Is there an upper bound to quantum computing?

+ +

If the train of thought I am following is correct, then shouldn't there be an upper bound on the number of qubits we can have without them (all, the whole system) being reduced to eigenstates, i.e. classical states?

+ +

I get the feeling that what I am saying is incorrect and I am fundamentally misunderstanding something. I'd really appreciate it if you could tell me why my argument is wrong and point me in the direction of understanding why quantum computers are theoretically scalable.

+ +

An example with ion trap quantum computers

+ +

Within this set up, the information is encoded within the energy levels of ions with a shared trap. Given that the ions are all within the shared trap then interactions are bound to occur given scaling within such a shared trap, or at least that's what my intuition says. Do we overcome this by having multiple traps working say in tandem?

+ +

1 - Zurek's Decoherence, einselection, and the quantum origins of the classical

+",4373,,55,,10/19/2021 17:41,10/19/2021 17:41,Is there a classical limit to quantum computing?,,1,0,,,,CC BY-SA 4.0 +4163,1,4168,,09-05-2018 13:11,,6,108,"

From a high-level point of view, given a quantum program, typically the last few operations are measurements.

+ +

In most cases, in order to extract a useful answer, it is necessary to run the program multiple times to reach a point where it is possible to estimate the probability distribution of the output q-bits with some level of statistical significance.

+ +

Even when ignoring noise, if the number of q-bits of the output increases, the number of required runs will increase too. In particular, for some output probability distributions it is likely that the number of required samples can be very high to obtain some sensible statistical power.

+ +

While increasing the number of q-bits seems to indicate exponential growth, the number of measurements required (each run taking x nanoseconds) seems to limit how well quantum computation will scale.

+ +

Is this argument sensible, or is there some critical concept that I am missing?

+",180,,26,,12/13/2018 20:06,12/13/2018 20:06,How scalable are quantum computers when measurement operations are considered?,,1,0,,,,CC BY-SA 4.0 +4164,1,4172,,09-05-2018 14:59,,14,2207,"

I have been reading the paper Belief propagation decoding of quantum +channels by passing quantum messages by Joseph Renes on decoding classical-quantum channels, and I have come across the concept of Helstrom measurements.

+ +

I have some knowledge about quantum information theory and quantum error correction, but I had never read about such a measurement until I worked on that paper. In that article, the author states that the measurement is optimal for this decoding procedure, so I would like to know what such measurements are and how they can be performed.

+",2371,,55,,8/22/2020 8:05,8/22/2020 8:05,What is the Helstrom measurement?,,1,0,,,,CC BY-SA 4.0 +4165,2,,4162,09-05-2018 15:02,,9,,"

In simpler terms your question is: if noise/decoherence keeps entering the computation, how can a big computation possibly survive?

+ +

The key concept you're missing is quantum error correction, which can pump noise/decoherence back out of the system. Of particular practical interest is the surface code.

+",119,,,,,09-05-2018 15:02,,,,1,,,,CC BY-SA 4.0 +4166,1,4167,,09-05-2018 15:09,,8,173,"

If you have good references about the CV model that are understandable from a computer science background, that would be great. If they include numerical examples, that would even be better.

+",4127,,,,,09-05-2018 15:53,What good references would you recommend to understand the (continuous-variable) CV model of computation?,,1,0,,,,CC BY-SA 4.0 +4167,2,,4166,09-05-2018 15:53,,8,,"

One of the best places to learn about the continuous-variable (CV) model is the documentation of the Strawberry Fields software for photonic quantum computing. It also includes several numerical examples. You can also read the white paper here, which contains a dedicated section to explaining the CV model.

+ +

Additionally, this review paper by Braunstein and van Loock contains a short section on quantum computing with continuous variables. If you're also interested in physical implementations, this paper by Gu et al. has a nice description of the cluster-state model of photonic quantum computing.

+ +

To get you started, the fundamental difference between the CV model and the traditional qubit model is that in the CV model, we formally apply operations on infinite-dimensional instead of two-dimensional systems. Of course, in practice, each system can be effectively described by a large but finite-dimensional Hilbert space, but it is more mathematically convenient to describe operators and states on the full infinite-dimensional space. The result is simply that in the CV model, we end up with a different set of canonical states and gates, as summarized for example in the table below (taken from the Strawberry Fields white paper):

+ +

+",4294,,,,,09-05-2018 15:53,,,,0,,,,CC BY-SA 4.0 +4168,2,,4163,09-05-2018 16:23,,5,,"

Some near-term quantum algorithms rely on getting lucky with the measurements, and in fact these algorithms will not scale efficiently to large sizes. But most quantum algorithms don't have this problem; it is required that the amount of luck needed [i.e. the number of retries] scales only polynomially with the problem size.

+ +

For example, Shor's algorithm fails if the quantum part outputs 0. But the chance of 0 becomes smaller and smaller as N (the number to factor) becomes larger. Shor's algorithm also fails if the base B you pick randomly has a k such that B^k = -1 (mod N), but AFAIK the chance of that happening stays constant as N increases, so you don't expect to have to pick too many Bs.

+ +
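
As a purely classical sketch of this point (no quantum part is simulated; N = 15 is chosen for illustration and the order-finding is done by brute force), one can count which bases B are "good":

```python
from math import gcd

def order(b, n):
    """Multiplicative order of b modulo n (assumes gcd(b, n) == 1)."""
    r, x = 1, b % n
    while x != 1:
        x = (x * b) % n
        r += 1
    return r

N = 15
good = []
for b in range(2, N):
    if gcd(b, N) != 1:
        continue                # the gcd already reveals a factor classically
    r = order(b, N)
    if r % 2 == 0 and pow(b, r // 2, N) != N - 1:
        good.append(b)          # gcd(b^(r/2) +/- 1, N) then yields a factor
print(good)                     # only b = 14 among the coprime bases is "bad"
```


+ +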

In other words, yes you have identified a problem that quantum algorithms do in fact have to avoid. This is the difference between BQP and PostBQP.

+",119,,,,,09-05-2018 16:23,,,,1,,,,CC BY-SA 4.0 +4169,1,4174,,09-05-2018 17:29,,14,1433,"

As the title suggests, I'm searching for published examples of quantum algorithms being applied to problems in computational biology. Clearly the odds are high that practical examples don't exist (yet) – what I'm interested in is any proofs of concept. Some examples of computational biology problems in this context would be:

+
    +
  • Protein Structure Prediction (Secondary, Tertiary)
  • +
  • Drug-Ligand Binding
  • +
  • Multiple Sequence Alignment
  • +
  • De-novo Assembly
  • +
  • Machine Learning Applications
  • +
+

I've found only one such reference that I think is illustrative of what I'm looking for. In this research, a D-Wave was used for transcription factor binding; however, it would be interesting to have examples outside the realm of adiabatic quantum computing.

+ +

There are several examples in terms of quantum simulation. While they clearly aren't simulations at a scale often considered to be biologically relevant, one could imagine that this line of research is a precursor to modeling larger molecules of biological significance (among many other things).

+ +

So, aside from transcription factor binding and quantum simulation, are there any other proofs of concept that exist and are relevant to biology?

+

Update I: I’ve accepted the best answer so far but I’ll be checking in to see if any more examples come up. Here's another I found, somewhat old (2010), that aimed at demonstrating identification of low energy protein conformations in lattice protein models – also a D-Wave publication.

+

Update II: A table in this paper covers some existing applications, most using quantum annealing hardware.

+",1937,,1937,,12-03-2021 01:55,01-04-2022 00:23,Are there any examples of anyone applying quantum algorithms to problems in computational biology?,,4,4,,,,CC BY-SA 4.0 +4170,2,,4160,09-05-2018 17:56,,4,,"

A $Z$ gate's effect is ""If the target is ON then negate the phase"". A controlled gate's effect is ""If the control is ON then apply the gate to the target"". Therefore a controlled Z's effect is ""If the control is ON then if the target is ON then negate the phase"".

+ +

Said another way, the controlled Z's effect is ""If the control and target are both ON, negate the phase"". This is a symmetric effect because it conditions on both the control and the target in the same way. The fact that we distinguish between the control and the target is just an accident of history.

+ +

Thinking of the control and target as fundamentally different is particularly tricky when it comes to the CNOT gate, as you have noticed. Assuming that the control cannot be affected is a holdover from classical-style thinking that no longer applies in the quantum case.

+ +

Classically, you think of a NOT gate as toggling the target qubit. Quantumly, the NOT gate's effect can instead be thought of as negating the phase of the $|-\rangle$ state. The $X$ gate leaves $|+\rangle = \frac{1}{\sqrt{2}} (|0\rangle + |1\rangle)$ alone and negates $|-\rangle = \frac{1}{\sqrt{2}} (|0\rangle - |1\rangle)$, in the same way that the $Z$ gate leaves $|0\rangle$ alone and negates $|1\rangle$.

+ +

So, quantumly, a CNOT's effect is ""If the control is ON then if the target is $|-\rangle$ then negate the phase"". Equivalently, ""If the control is ON and the target is $|-\rangle$ then negate the phase"". I hope that makes it clear that ""control"" and ""target"" are not really the most appropriate names. It's more like ""the Z-axis control and the X-axis control"".
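
+ +

You can verify this directly with a small numpy computation (a sketch; matrices are written in the usual computational-basis convention, with the first qubit as the control):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I = np.eye(2)
CZ = np.diag([1, 1, 1, -1])
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

# H on the target, then CZ, then H on the target again
circuit = np.kron(I, H) @ CZ @ np.kron(I, H)
print(np.allclose(circuit, CNOT))  # True: no stray phase appears on the control
```

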

+",119,,119,,09-06-2018 19:36,09-06-2018 19:36,,,,0,,,,CC BY-SA 4.0 +4171,2,,4158,09-05-2018 20:34,,6,,"

It suffices to prove that if $P$ and $Q$ are positive semidefinite operators, then +$$ +\operatorname{im}(P) \subseteq \operatorname{im}(P+Q). +$$ +Once you have this, the statement follows by taking $P = \eta(a)$ and $Q = \rho - \eta(a)$.

+ +

Suppose that $u$ is a vector with $u \perp \operatorname{im}(P+Q)$. This implies that +$$ +0 = u^{\ast} (P + Q) u = u^{\ast} P u + u^{\ast} Q u. +$$ +As $u^{\ast} P u$ and $u^{\ast} Q u$ are both nonnegative and sum to zero, they must both be zero. Because $u^{\ast} P u = 0$, we have that $u \perp \operatorname{im}(P)$. We have just proved that +$$ +\operatorname{im}(P+Q)^{\perp} \subseteq \operatorname{im}(P)^{\perp}, +$$ +which is equivalent to the first containment above that we're aiming to prove, so we're done.
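
+ +

A quick numerical sanity check of the containment $\operatorname{im}(P) \subseteq \operatorname{im}(P+Q)$ (a numpy sketch; the rank-1 construction and the pseudoinverse projector are illustrative choices, not part of the proof):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_rank1_psd(d):
    x = rng.normal(size=d) + 1j * rng.normal(size=d)
    return np.outer(x, x.conj())   # x x* is rank-1 positive semidefinite

d = 4
P, Q = random_rank1_psd(d), random_rank1_psd(d)
S = P + Q

v = rng.normal(size=d)
a = P @ v                          # an arbitrary vector in im(P)

# For Hermitian S, S @ pinv(S) is the orthogonal projector onto im(S),
# so a lies in im(P + Q) iff projecting it there changes nothing.
proj = S @ np.linalg.pinv(S) @ a
print(np.allclose(proj, a))        # True
```

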

+",1764,,,,,09-05-2018 20:34,,,,0,,,,CC BY-SA 4.0 +4172,2,,4164,09-06-2018 08:31,,18,,"

The Helstrom measurement is the measurement that has the minimum error probability when trying to distinguish between two states.

+ +

For example, let's imagine you have two pure states $|\psi\rangle$ and $|\phi\rangle$, and you wish to know which it is that you have. If $\langle\psi|\phi\rangle=0$, then you can specify a measurement with three projectors +$$ +P_{\psi}=|\psi\rangle\langle\psi|\qquad P_{\phi}=|\phi\rangle\langle\phi|\qquad \bar P=\mathbb{I}-P_{\psi}-P_{\phi}. +$$ +(For a two-dimensional Hilbert space, $\bar P=0$.)

+ +

The question is what measurement should you perform in the case that $\langle\psi|\phi\rangle\neq0$? Specifically, let's assume that $\langle\psi|\phi\rangle=\cos(2\theta)$, and I'll concentrate just on projective measurements (IIRC, this is optimal). In that case, there is always a unitary $U$ such that +$$ +U|\psi\rangle=\cos\theta|0\rangle+\sin\theta|1\rangle\qquad U|\phi\rangle=\cos\theta|0\rangle-\sin\theta|1\rangle. +$$ +Now, those states are optimally distinguished by $|+\rangle\langle +|$ and $|-\rangle\langle -|$ (you get $|+\rangle$, and you assume you had $U|\psi\rangle$). Hence, the optimal measurement is +$$ +P_{\psi}=U^\dagger|+\rangle\langle+|U\qquad P_{\phi}=U^\dagger|-\rangle\langle-|U\qquad \bar P=\mathbb{I}-P_{\psi}-P_{\phi}. +$$ +The success probability is +$$ +\left(\frac{\cos\theta+\sin\theta}{\sqrt{2}}\right)^2=\frac{1+\sin(2\theta)}{2}. +$$

+ +

More generally, how do you distinguish between two density matrices $\rho_1$ and $\rho_2$? Start by calculating +$$ +\delta\rho=\rho_1-\rho_2, +$$ +and finding the eigenvalues $\{\lambda_i\}$ and corresponding eigenvectors $|\lambda_i\rangle$ of $\delta\rho$. You construct 3 measurement operators +$$ +P_1=\sum_{i:\lambda_i>0}|\lambda_i\rangle\langle\lambda_i|\qquad P_2=\sum_{i:\lambda_i<0}|\lambda_i\rangle\langle\lambda_i|\qquad P_0=\mathbb{I}-P_1-P_2. +$$ +If you get answer $P_1$, you assume you had $\rho_1$. If you get $P_2$, you had $\rho_2$, while if you get $P_0$ you simply guess which you had. You can verify that this reproduces the pure state strategy described above. What's the success probability of this strategy? +$$ +\frac12\text{Tr}((P_1+P_0/2)\rho_1)+\frac12\text{Tr}((P_2+P_0/2)\rho_2) +$$ +We can expand this as +$$ +\frac14\text{Tr}((P_1+P_2+P_0)(\rho_1+\rho_2))+\frac14\text{Tr}((P_1-P_2)(\rho_1-\rho_2)) +$$ +Since $P_1+P_2+P_0=\mathbb{I}$ and $\text{Tr}(\rho_1)=\text{Tr}(\rho_2)=1$, this is just +$$ +\frac12+\frac14\text{Tr}((P_1-P_2)(\rho_1-\rho_2))=\frac12+\frac14\text{Tr}|\rho_1-\rho_2|. +$$
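
+ +

This success probability is straightforward to evaluate numerically (a numpy sketch; the pure-state example reproduces the $(1+\sin(2\theta))/2$ formula derived above):

```python
import numpy as np

def helstrom_success(rho1, rho2):
    """Optimal success probability for equal-prior discrimination."""
    eigs = np.linalg.eigvalsh(rho1 - rho2)
    return 0.5 + 0.25 * np.sum(np.abs(eigs))   # 1/2 + (1/4) Tr|rho1 - rho2|

# Pure-state sanity check: <psi|phi> = cos(2 theta) should give (1 + sin(2 theta))/2.
theta = 0.3
psi = np.array([np.cos(theta), np.sin(theta)])
phi = np.array([np.cos(theta), -np.sin(theta)])
p = helstrom_success(np.outer(psi, psi), np.outer(phi, phi))
print(p, (1 + np.sin(2 * theta)) / 2)
```

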

+",1837,,1837,,09-06-2018 08:49,09-06-2018 08:49,,,,4,,,,CC BY-SA 4.0 +4173,2,,4169,09-06-2018 09:29,,3,,"

Quantum simulation can be used to test models that could describe certain biological process. For example, a 2018 paper by Potočnik et al. examined light harvesting models using superconducting quantum circuits (see figure below).

+ +

Currently, it's an open question whether quantum mechanics plays an important functional role in biological processes. Some candidate biological processes where quantum mechanics may have such a role include magnetoreception in birds, olfaction, and light harvesting.

+ +

+",1773,,1937,,09-07-2018 07:53,09-07-2018 07:53,,,,2,,,,CC BY-SA 4.0 +4174,2,,4169,09-06-2018 17:29,,4,,"

I was not able to find references specifically in quantum biology. +However, I found a review called Quantum Assisted Biomolecular Modelling.

+ +

You may find it interesting, but note this is from 2010. The field has evolved since, but I guess the ideas remain similar. The authors focus more on the idea of the ability of a quantum computer to try every classical path simultaneously.

+ +

I do not know much about the field and common practice. However, if computational biology is more focused on optimization, then applying quantum search algorithms or hybrid classical-quantum setups should be suitable (even if not that practical at the moment).

+ +

Now, about machine learning, things are a bit unclear in quantum computing, especially under the name Quantum Machine Learning, since different approaches/goals are taken. Some algorithms are designed to get a speedup over classical algorithms (based on a hypothetical device called qRAM), like k-means or SVM... Others use QC to help the learning process in classical algorithms like restricted Boltzmann machines. +Some focus on doing ML with quantum data, like compressing quantum data for instance.

+ +

Conclusion: we do not have a clear idea yet but this makes it exciting. In the process, we may just create new algorithms or improve current classical ones.

+ +

Edit: Recently a press release announced a partnership between Rigetti Computing and Entropica Labs to develop real world applications of quantum computing to bioinformatics and genomics.

+",4127,,1937,,09-10-2018 18:32,09-10-2018 18:32,,,,5,,,,CC BY-SA 4.0 +4175,1,,,09-06-2018 18:07,,12,365,"

I have a superconducting system with tens of qubits, each of which can be tuned using DC flux.

+ +

One of the main tasks for coherent manipulation of the qubits is to find good idling frequencies and operating points for entangling gates. This effort is confounded by two-level systems (TLS), which cause rapid energy relaxation and wreak general havoc on coherent manipulation.

+ +

I spent a long time finding a good set of idling frequencies and operating points, all the while considering the locations of the TLS's, and then one day I came into the lab and they had moved around! I had to start all over again.

+ +

I want to learn more about how and why TLS's move, and whether it's possible to maybe control the movement. As part of my research, I want to poll the community and see what other people's experience with this problem is like.

+",1867,,1867,,09-10-2018 18:31,12/22/2018 15:14,Superconducting qubit researchers: Do your TLS's move?,,1,1,,,,CC BY-SA 4.0 +4176,2,,1881,09-07-2018 14:44,,0,,"

The dimension of a vector space is the number of vectors that make up its basis.
+For a qubit, there are two basis vectors, [ 1 0 ] and [ 0 1 ], so the dimension of the vector space is 2.

+",,user2898,,,,09-07-2018 14:44,,,,1,,,,CC BY-SA 4.0 +4177,1,4189,,09-08-2018 09:12,,2,284,"

If the state of one qubit can be described by a ray in $\mathbb{C}^2$, then the combined state of an $n$-qubit system can be described by a ray in $(\mathbb{C}^2)^{\otimes n}=\mathbb{C}^{2^n}$.

+ +

However, if $G_1$ is the Pauli group of one qubit, with the 16 elements +$$G_1=\{i,-1,-i,1\}\times\{I,X,Y,Z\}\,,$$ +the Pauli group on $n$ qubits is defined by +$$G_n=\{i,-1,-i,1\}\times\{I,X,Y,Z\}^{\otimes n}$$ +which is not the tensor product of $n$ Pauli groups $G_1$ (because $G_n$ contains $4\cdot 4^n$ elements, which does not equal $16^n$). My question thus is: what kind of tensor product do we use on the space of operators on the Hilbert space $\mathbb{C}^2$ to define $G_n$ from $G_1$?

+ +
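
One can check the element count by brute force (a numpy sketch for $n=2$; matrices are deduplicated by their raw bytes, which works here because all the arithmetic is exact):

```python
import numpy as np
from itertools import product

I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1, -1])

n = 2
mats = set()
for phase in [1, 1j, -1, -1j]:
    for ops in product([I, X, Y, Z], repeat=n):
        M = phase * np.kron(ops[0], ops[1])
        mats.add(M.astype(complex).tobytes())
print(len(mats))  # 4 * 4**n = 64 distinct matrices, not 16**n = 256
```


+ +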

(I do understand intuitively that we should disregard global phase (and that therefore the number of operators in the $n$-qubit Pauli group is not $16^n$), and that this can be done by introducing the projective Hilbert space, but how do tensor products work on the space of operators on a projective Hilbert space?)

+",4534,,26,,12/23/2018 12:43,12/23/2018 12:43,Tensor product between operators,,2,1,,,,CC BY-SA 4.0 +4178,1,5524,,09-08-2018 12:21,,7,162,"

[I'm sorry, I've already posted the same question in the physics community, but I haven't received an answer yet.]

+ +
+ +

I'm approaching the study of Bell inequalities, and I understood the reasoning behind Bell's theorem (ON THE EINSTEIN PODOLSKY ROSEN PARADOX (PDF)) and how the postulate of locality was assumed at the start of the demonstration.

+ +

However, I find it problematic to arrive at the equivalence +$$ E(\vec{a},\vec{b}) = \int_{\Lambda}d\lambda \rho(\lambda)A(\vec{a},\lambda)B(\vec{b},\lambda),$$
+starting from the point of view expressed by the Clauser and Horne definition of locality.

+ +

CH claimed that a system is local if there is a parameter $\lambda$ and joint conditional probabilities that can be written as follows: +$$p(a,b|x,y,\lambda) = p(a|x,\lambda)p(b|y,\lambda),$$ +and $$p(a,b|x,y) = \int_\Lambda d\lambda \rho(\lambda) p(a|x,\lambda)p(b|y,\lambda)$$ +which makes sense, since it affirms that the probability of obtaining the value $a$ depends only on the measurement $x = \vec{\sigma}\cdot\vec{x}$ and the value of $\lambda$.

+ +

However, if I use this expression to write down the expectation value of the product of the two components $\vec{\sigma}\cdot\vec{a}$ and $\vec{\sigma}\cdot\vec{b}$, I obtain the following:

+ +

$$ +E (\vec{a},\vec{b}) = \sum_{i,j}a_ib_jp(a,b|x,y) = \\ += \sum_{ij}a_ib_j \int_\Lambda d\lambda \rho(\lambda) p(a|x,\lambda)p(b|y,\lambda) \\ += \int_\Lambda d\lambda \rho(\lambda) \left(\sum_{i}a_ip(a|x,\lambda)\right)\left(\sum_{j}b_jp(b|y,\lambda)\right) +$$ +where in the last equivalence I've used the fact that if the measurements are independent their covariance must be equal to $0$.

+ +

At this point, the terms in brackets on the RHS are equal to: +$$ \left(\sum_{i}a_ip(a|x,\lambda)\right) = E(a,\lambda) =? = A(\vec{a},\lambda)\quad \quad \left(\sum_{j}b_jp(b|y,\lambda)\right) = E(b,\lambda) =?= B(\vec{b},\lambda).$$

+ +

That is not the equivalence that I want to find.

+ +

In fact, in the RHS of the first equation $A(\vec{a},\lambda)$ is, according to Bell's original article, the result of measuring $\vec{\sigma}\cdot\vec{a}$, and fixing both $\vec{a}$ and $\lambda$ it can assume only the values $\pm1$. (The same applies to $B(\vec{b},\lambda)$.)

+ +

Does anyone know where I go wrong? How can I obtain the original equivalence (which is then shown to be violated in the case of an entangled system) starting from the CH definition of locality?

+ +

Edit #1:

+ +

I've noticed that I obtain the desired equivalence only if I assume that $p(ab|xy\lambda) = E(\vec{a},\vec{b})$, but is that possible? How can a conditional probability be linked to the mean value of the product of two components?

+ +

Edit #2:

+ +

Surfing the internet I found an article (https://arxiv.org/abs/1709.04260, page 2, right at the top) which reports the same CH locality condition (to be accurate, the article presents the discrete version) and then affirms:

+ +
+

""The central realization of Bell’s theorem is the fact that there are quantum correlations obtained by local measurements ($M_a^x$ and $M_b^y$) on distant parts of a joint entangled state $\varrho$, that according to quantum theory are described as: + $$p_{Q}(a,b|x,y) = \text{Tr}(\varrho(M_a^x\otimes M_b^y)) $$ + and cannot be decomposed in the LHV form (i.e. the CH condition for locality)""

+
+ +

So why is $p_Q(a,b|x,y)$ seen as a measure of quantum correlation (which by definition is the mean of the product of the possible outputs)? Isn't it a joint probability distribution (as stated when obtaining the LHV form)? +Is there a link between the classical correlation $E(\vec{a},\vec{b})$ and the joint probability distribution $p(a,b|x,y,\lambda)$?

+",4541,,55,,10/27/2021 17:07,10/27/2021 17:07,Is there a relation between the factorisation of the joint conditional probability distribution and Bell inequality?,,1,6,,,,CC BY-SA 4.0 +4179,1,4180,,09-08-2018 14:49,,6,400,"

It is known that in a classical computer we can't generate a purely random number by a deterministic process. I am taking a course in quantum computing, and recently I learnt that using quantum states we can set up a deterministic procedure that produces the random outcomes $|0\rangle$ and $|1 \rangle$. But I am not aware of the physical implementation and am very excited to know about it. I have searched online and found this paper, but it requires knowledge of the architecture inside a QC, I think.

+ +

Can anyone explain (from the basics, please) how we can develop a deterministic process to generate $|0\rangle$ and $|1 \rangle$ randomly? Please also explain how that process is a deterministic one.

+",3023,,26,,12/13/2018 20:07,11/15/2019 6:21,Randomness from deterministic machine,,2,0,,,,CC BY-SA 4.0 +4180,2,,4179,09-08-2018 17:09,,6,,"

We start with the following qubit state:

+ +

$|\psi\rangle = |1\rangle = \begin{bmatrix} 0 \\ 1 \end{bmatrix}$

+ +

Then, we apply the Hadamard gate to that qubit:

+ +

$H|\psi\rangle = +\begin{bmatrix} +\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \\ +\frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}} +\end{bmatrix}\begin{bmatrix} 0 \\ 1 \end{bmatrix} += \begin{bmatrix}\frac{1}{\sqrt{2}} \\ \frac{-1}{\sqrt{2}} \end{bmatrix}$

+ +

The resulting state is also known as the $|-\rangle$ state.

+ +

We can measure this state in the $|0\rangle$, $|1\rangle$ basis (also called the computational basis) and it will collapse to $|0\rangle$ or $|1\rangle$ with the following probabilities:

+ +

$P[|0\rangle] = |\frac{1}{\sqrt{2}}|^2 = \frac 1 2$

+ +

$P[|1\rangle] = |\frac{-1}{\sqrt{2}}|^2 = \frac 1 2$

+ +

Assuming that quantum mechanics is indeed fundamentally probabilistic, this gives you a random number generator whose preparation procedure is completely deterministic.
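
+ +

Here is a purely classical numpy simulation of this procedure (a sketch — on real hardware the randomness comes from the measurement itself, not from a pseudorandom generator as used here):

```python
import numpy as np

rng = np.random.default_rng()
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

psi = H @ np.array([0, 1])   # the |-> state
probs = np.abs(psi) ** 2     # Born rule probabilities: [1/2, 1/2]
bits = rng.choice([0, 1], size=10, p=probs)
print(probs, bits)           # the bits differ from run to run
```

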

+",4153,,,,,09-08-2018 17:09,,,,3,,,,CC BY-SA 4.0 +4183,1,4216,,09-09-2018 01:17,,9,345,"

In Breaking Down the Quantum Swap, it is stated:

+ +
+

Thanks to the CNOT, we can implement a xor-swap on a quantum computer. All we need to do is chain three CNOTs back and forth.

+
+ +

In a comment to a previous question, the author of the aforementioned post states:


The quantum equivalent of the one time pad (i.e. xoring a message with a secret key) is quantum teleportation.


Near the end of the post there is this circuit graphic


and it is also stated:


I would go into detail about how this works, but honestly it's just two quantum teleportations shoved together and this post is already way too long.


Can someone provide more detail on just two quantum teleportations?


Is there a generalization for $n$ quantum teleportations?

+",2645,,55,,10/19/2021 15:32,10/19/2021 15:32,Generalization for $n$ quantum teleportations,,3,1,,,,CC BY-SA 4.0 +4184,2,,4183,09-09-2018 09:26,,2,,"

I suggest first reading up on the famous 1935 EPR paper, which should lead to some interesting thoughts about the nature of entangled quantum states. To then see how these thoughts are applied to quantum teleportation, I suggest page 26 of the all-too-famous quantum computing bible by Mike & Ike.


Now, since you can effectively only measure once, what you must do is build a long chain of qubits and then entangle it so as to reach a ""generalization for $n$ quantum teleportations"". As you can imagine, this has its own problems: the larger $n$ is, the more decoherence and unwanted interaction you're going to get in the system, and each qubit will need a minimum of 5 other qubits to correct for such errors, so as you can begin to appreciate, this is no simple task.


You can read more about this from people who know more about $n$-qubit quantum teleportation in this paper.


Hope this will give you some good resources to further your inquiry. Apologies for not being best suited to give a whole self-contained answer myself.

+",4373,,,,,09-09-2018 09:26,,,,0,,,,CC BY-SA 4.0 +4185,1,,,09-09-2018 14:08,,13,446,"

I understand there are a lot of programming languages (e.g. Q#, Qiskit, etc.)


Which one is suitable for someone that just started learning programming and doesn't know anything about quantum mechanics?

+",2316,,2879,,08-03-2019 23:16,08-03-2019 23:16,Which programming language is suitable for a beginner?,,3,0,,,,CC BY-SA 4.0 +4186,1,4190,,09-09-2018 16:52,,11,1321,"

I was trying to self-study quantum computing by reading the Quantum Computing: A Gentle Introduction book; section 2.4 discusses the quantum key distribution protocol BB84. After (I thought) I understood it, I went to work on exercises 2.9 and 2.10.


Ex. 2.9 asks how many bits Alice and Bob need to compare to be 90% confident that there is no Eve present in BB84. So, if I understood correctly, BB84 works as follows:

  1. Alice randomly chooses a basis/polarization of photon from the two bases $\{ | 0 \rangle, | 1 \rangle \}$ and $\{ |+\rangle, |-\rangle \}$ to encode the bit information $0$ or $1$ (the encoding rule is known, e.g. $|0\rangle$ represents $0$). Then she sends a sequence of such photons to Bob.
  2. Bob receives the sequence of photons, randomly chooses a basis from the same two bases, and measures each one of the photons.
  3. They then compare the bases they chose and discard the ones where they chose different bases. Bob should now be able to figure out which bit Alice was trying to send (e.g. if the basis they used is $\{ |0\rangle, |1\rangle \}$ and Bob measured using $|1\rangle$ but got $0$ light intensity, then he knows that Alice's polarization was $|0\rangle$, so the bit information is $0$).
  4. To be more secure, they also compare a subset of bits; if there is no interference then these bits should all agree. They discard these bits, and what is left is the key.

Eve, on the other hand, tries to intercept the photons from Alice, measures each one in a basis chosen randomly from the same two bases, and then re-sends the measured state to Bob. After Alice & Bob publicly compare their bases, Eve knows $25\%$ of the key for sure, although she has inevitably changed the photons Bob would have received.


So to answer the first question, ex. 2.9, I listed out the different scenarios when Alice and Bob compare a subset of bits:


Suppose Alice sends a $|0\rangle$:

  1. There is $0.25$ probability Eve also measures with $|0\rangle$; then she would not get detected.

  2. $0.25$: Eve measures using $|1\rangle$; then she would get detected for sure, as Bob will get the opposite bit value to Alice.

  3. $0.25$ chance Eve measures using $|+\rangle$; Bob now will receive $|+\rangle$. Then if Bob uses $|0\rangle$ he obtains the same with $0.5$ chance, else if he uses $|1\rangle$ to measure he still ends up with the correct bit with $0.5$ chance. That is $0.25 \times (0.5 + 0.5) = 0.25$.

  4. Same as 3: $0.25$.

So, summing up the probability that Eve goes undetected: it's $0.25 + 0 + 0.25 + 0.25 = 3/4$, and we want the probability that Eve goes undetected over the whole compared sequence to be less than $10\%$, which yields $(\frac{3}{4})^n < 0.1$, i.e. approximately $n=8$.
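(As a quick sanity check of this bound in plain Python: note that $(3/4)^8 \approx 0.1001$ is still just above $0.1$, so the smallest integer strictly satisfying the inequality is $n=9$, hence approximately $n=8$.)

```python
from math import ceil, log

n = ceil(log(0.1) / log(0.75))   # smallest n with (3/4)^n < 0.1
assert n == 9
assert 0.75 ** 8 > 0.1 > 0.75 ** 9
```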


The second question, ex. 2.10c, modifies the conditions a little: instead of Eve choosing from the two known bases (the standard and the $+/-$ basis), she does not know which bases are used, so she chooses a basis at random. Then how many bits do A&B need to compare to have 90% confidence?


My approach is as follows. Suppose Alice still uses the standard basis $\{|0\rangle, |1\rangle \}$ and she sends a $|0\rangle$. Now Eve can measure it in her basis $\{ |e_1\rangle, |e_2\rangle \}$, where $|e_1\rangle = \cos\theta|0\rangle + \sin\theta|1\rangle$ and $|e_2\rangle = \sin\theta|0\rangle-\cos\theta|1\rangle$; then Eve sends the measured state off to Bob again. I'm again listing out the scenarios:

  1. If Eve measures with $|e_1\rangle$ (with 0.5 chance) then Bob receives $|e_1\rangle$; then if Bob measures with $|0\rangle$ he gets the correct bit with $|\cos\theta|^2$ probability, and if he measures in $|1\rangle$ then he gets the correct bit with $1 - |\sin\theta|^2=|\cos\theta|^2$. Similarly when Eve uses $|e_2\rangle$.

Summing up, I then got $0.5\times(2|\cos\theta|^2)+0.5\times(2|\sin\theta|^2)=1$, which for sure is not correct!


Then I tried searching online and found a solution here, which says the probability Bob gets the correct bit is instead $|\langle0|e_1\rangle\langle e_1|0\rangle|^2 + |\langle0|e_2\rangle\langle e_2|0\rangle|^2 =\cos^4\theta+\sin^4\theta$; integrating this over $[0, \frac{\pi}{2}]$ (normalized by $\pi/2$) gives $\frac{3}{4}$, which is again the same as in ex. 2.9.
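(Numerically, this average does indeed come out to $\frac34$; a quick NumPy check, sampling $\theta$ uniformly on $[0, \frac{\pi}{2}]$:)

```python
import numpy as np

theta = np.linspace(0, np.pi / 2, 200001)
p_undetected = np.cos(theta) ** 4 + np.sin(theta) ** 4
avg = p_undetected.mean()        # approximates the normalized integral
assert abs(avg - 0.75) < 1e-4    # matches the 3/4 from ex. 2.9
```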


Can someone explain why it's $\cos^4\theta+\sin^4\theta$, both in mathematical detail and with high-level intuition (e.g. why, even though Eve does not know which basis to use, it still requires an 8-bit comparison for A&B)?


Thanks a lot!

+",2069,,2927,,2/20/2021 22:16,2/20/2021 22:16,How many bits do Alice and Bob needs to compare to make sure the channel is secure in BB84?,,1,0,,,,CC BY-SA 4.0 +4187,2,,4185,09-09-2018 18:11,,8,,"

The languages themselves are all essentially the same for a new user. They all implement the same basic set of quantum operations, which are the ones that have been used by researchers for the last few decades.


If you’ve just started programming, the most relevant factor for you might be the language that the quantum SDK is written in. They are mostly in Python, but QISKit also has Swift and Java variants. Q# is integrated into Visual Studio.


Beyond this, there are differences in things like

  • tutorial materials
  • the simulators or real quantum devices your programs will run on
  • high-level applications that require no quantum knowledge.

I am very biased in what I recommend (and so I hope that others biased in different directions will also answer your question) but I’d say that taking a look at the QISKit blog might be a good start. Here’s a couple of my own articles

+",409,,409,,09-10-2018 07:16,09-10-2018 07:16,,,,0,,,,CC BY-SA 4.0

Tensor products always mod out $\mathbb C$ (or $\mathbb R$, or whatever field your vector space is defined over). Thus, there is only one phase in the tensor product. This is also true for the corresponding space of operators, and thus also for $G_1^{\otimes N}$. This is just the normal definition of the tensor product.

+",491,,,,,09-09-2018 21:21,,,,6,,,,CC BY-SA 4.0 +4189,2,,4177,09-10-2018 00:53,,2,,"

The group $(G_1)^{n}$ does act on $(\mathbb{C}^2)^{\otimes n}$ (you made a typo with $\mathbb{C}^{2n}$ vs $\mathbb{C}^{2^n}$) but the action factors through a quotient thereof. So you can write down this group initially as the set of all $(a_1 \sigma^{0,x,y,z}_1 \otimes \cdots a_n \sigma^{0,x,y,z}_n)$ where the $a_i \in \{ 1,i,-1,-i \}$ and $\sigma^0=Id_2$. We are not yet using the properties of $\otimes$ to shuffle the $a_i$ through. Each tensorand is a copy of $G_1$, and these factors commute amongst each other.


Then when we interpret the symbol $\otimes$, we see that $i \sigma^0_1 \otimes -i \sigma^0_2 \otimes 1 \sigma^0_3 \otimes \cdots \otimes 1 \sigma^0_n$ acts trivially. It is in the kernel of the map $(G_1)^{n} \to Aut((\mathbb{C}^2)^{\otimes n})$. By understanding this kernel, one sees that $(G_1)^{n}/(ker) \simeq G_n$. There are still some global phases from the $\{1,i,-1,-i\}$ factor.
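This trivial action is easy to check numerically. For example, on two qubits, $(i\,\sigma^0)\otimes(-i\,\sigma^0)$ is exactly the $4\times 4$ identity (a NumPy sketch):

```python
import numpy as np

I2 = np.eye(2)
A = np.kron(1j * I2, -1j * I2)    # (i Id) kron (-i Id)
assert np.allclose(A, np.eye(4))  # acts trivially on (C^2)^{(x)2}
```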


Suppose one wants to work projectively from the very beginning. Then on 1-qubit we have some subgroup of $PU(2)$. In this case $PG_1 \simeq \mathbb{Z}_2^2$. Then for $n$ qubits you even lose that first global phases factor above and are only left with $(PG_1)^n \simeq PG_n$.

+",434,,5296,,12-11-2018 15:33,12-11-2018 15:33,,,,0,,,,CC BY-SA 4.0 +4190,2,,4186,09-10-2018 09:04,,8,,"

Your analysis of Eve's cheating doesn't seem quite right (although the final answer is correct). What you need to say is: Assume Alice prepares a particular state in one of the bases. You could assume that's $|0\rangle$, but you can make the argument more generally.

  • With 50% probability, Eve measures in the same basis that Alice prepared in (the 0/1 basis in this case). Eve is guaranteed to get the same answer (0), and so Bob will still get the same answer (0) because we're working specifically in the set of cases where Bob measures in the same basis as Alice. Eve is not detected.

  • With 50% probability, Eve measures in the other basis. She'll get an answer. It doesn't actually matter what it is (in this case, either $|+\rangle$ or $|-\rangle$). Bob receives whatever that state is, and measures in the original basis, and gets the two different outcomes with 50:50 probability. Eve never learns anything about Alice's chosen bit, and she is detected half the time.

Overall, Eve learns the bit value half the time, and is detected in 1/4 of the cases. Now, strictly, you should average over all possible inputs of Alice. But there's sufficient symmetry in this simple case that all the outcomes are the same.


In the second question, you've missed one important feature: if Eve changes her measurement basis, the probability that she gets the different results varies (you've kept it fixed at 1/2).


High-level hand waving: If Eve chooses a basis which is very close to the 0/1 basis, she is almost guaranteed to get the same answer as the bit value Alice was sending (if she was sending in the 0/1 basis), and she is almost guaranteed to not be detected. As you get further away from that basis, Eve learns less, and is more likely to be detected. But, the trade-off is that if Alice had used the other basis, her chance of being detected decreases, and her knowledge of the bit improves. That said, it is not a perfect trade-off. I'll show you why very easily: Imagine that Alice is using the two standard bases. What if Eve measures in the basis $(|0\rangle\pm i|1\rangle)/\sqrt{2}$ every time? It is always the case (no matter which basis Alice chooses) that there is a 50% chance of Eve being detected.
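That last claim is easy to verify numerically: with Eve fixed in the basis $(|0\rangle\pm i|1\rangle)/\sqrt{2}$, the probability that she escapes detection is $\sum_k |\langle a|e_k\rangle|^4 = 1/2$ for every state $|a\rangle$ that Alice might send (a NumPy sketch):

```python
import numpy as np

e1 = np.array([1, 1j]) / np.sqrt(2)
e2 = np.array([1, -1j]) / np.sqrt(2)

def p_undetected(alice):
    # Eve measures in {e1, e2} and re-sends her outcome;
    # Bob then re-measures in Alice's original basis.
    return sum(abs(np.vdot(e, alice)) ** 4 for e in (e1, e2))

for alice in (np.array([1, 0]),                  # |0>
              np.array([0, 1]),                  # |1>
              np.array([1, 1]) / np.sqrt(2),     # |+>
              np.array([1, -1]) / np.sqrt(2)):   # |->
    assert abs(p_undetected(alice) - 0.5) < 1e-12
```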


Mathematically, what you were supposed to say was to imagine Alice sent $|0\rangle=\cos\theta|e_1\rangle+\sin\theta|e_2\rangle$. Thus, with probability $\cos^2\theta$, Eve gets answer $|e_1\rangle$, which gets passed on to Bob, who gets answer $|0\rangle$ with probability $\cos^2\theta$. Meanwhile, with probability $\sin^2\theta$, Eve gets answer $|e_2\rangle$, sends it on to Bob, and he gets answer $|0\rangle$ with probability $\sin^2\theta$. Thus, the overall probability of Bob not detecting anything is
$$
\cos^4\theta+\sin^4\theta=1-\frac12\sin^2(2\theta),
$$
given that Alice sent $|0\rangle$. The analysis will be identical if Alice sent $|1\rangle$. However, you do need to repeat the analysis for if Alice sent $|+\rangle$. (At this moment, it should become apparent that you needed a phase parameter in your definition of $|e_1\rangle$ and $|e_2\rangle$ if you truly want to average over all possible bases, but I'll keep going with your definition.) So, assume Alice sent $|+\rangle=((\cos\theta+\sin\theta)|e_1\rangle-(\cos\theta-\sin\theta)|e_2\rangle)/\sqrt{2}$. So, Eve gets answer $|e_1\rangle$ with probability $(\cos\theta+\sin\theta)^2/2$, and Bob gets answer $|+\rangle$ with probability $(\cos\theta+\sin\theta)^2/2$. Hence, overall, the probability of Eve not being detected is
$$
\left(\frac{\cos\theta+\sin\theta}{\sqrt{2}}\right)^4+\left(\frac{\cos\theta-\sin\theta}{\sqrt{2}}\right)^4=1-\frac12\cos^2(2\theta).
$$
Averaging over all possible inputs of Alice, we therefore get
$$
\frac12\left(1-\frac12\cos^2(2\theta)\right)+
\frac12\left(1-\frac12\sin^2(2\theta)\right)=\frac34.
$$
At this point, the $\theta$ has disappeared. We don't have to average over all possible $\theta$. However, note that if we had correctly introduced a phase $\phi$ in the definition of $|e_1\rangle$, it would be necessary to perform some averaging. Moreover, the solution that you cite does not do that averaging correctly. Remember that if you want to convert from an integral in $(x,y)$ coordinates to an integral in $(r,\theta)$ coordinates, you need a conversion. You're going to have to perform an integral that's something like
$$
\frac{1}{2\pi}\int_0^{2\pi}d\phi\int_{0}^{\pi/2}\sin(2\theta)d\theta f(\theta,\phi),
$$
where $f(\theta,\phi)$ is the probability of detecting Eve for a given $\theta,\phi$. (You probably want to check out this formula carefully, and verify factors of 2, as I've written this from memory. It gets a bit messy because, given you've used an angle $\theta$ in the definition of $|e_1\rangle$, that translates into an angle of $2\theta$ on the Bloch sphere.)
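Both conditional probabilities, and their $\theta$-independent average of $3/4$, are easy to confirm numerically (a NumPy sketch):

```python
import numpy as np

theta = 0.3   # any angle gives the same average
p_given_0 = np.cos(theta) ** 4 + np.sin(theta) ** 4
p_given_plus = ((np.cos(theta) + np.sin(theta)) / np.sqrt(2)) ** 4 \
             + ((np.cos(theta) - np.sin(theta)) / np.sqrt(2)) ** 4
avg = 0.5 * (p_given_0 + p_given_plus)
assert abs(avg - 0.75) < 1e-12
```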


The other thing we haven't calculated is how much Eve learns. If she corresponds $|e_1\rangle$ with bit value 0, and $|e_2\rangle$ with bit value 1, she is correct with probability
$$
\frac12\left(\cos^2\theta+\left(\frac{\cos\theta+\sin\theta}{\sqrt{2}}\right)^2\right).
$$
You could average this over $\theta$, but one of the interesting things to observe is that if Eve does know the two bases that are being used, she can optimise her value of $\theta$. The value $\theta=\frac{\pi}{8}$ gives her more knowledge (on average) than setting $\theta=0$ or $\pi/4$ (which are effectively the cases you analysed in the first question).
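A quick numerical check of that last observation, using the formula above for Eve's average guessing probability (a NumPy sketch):

```python
import numpy as np

def p_correct(theta):
    # Eve's success probability, averaged over Alice's two bases
    return 0.5 * (np.cos(theta) ** 2 + (np.cos(theta) + np.sin(theta)) ** 2 / 2)

grid = np.linspace(0, np.pi / 2, 10001)
best = grid[np.argmax(p_correct(grid))]
assert abs(best - np.pi / 8) < 1e-3          # optimum at theta = pi/8
assert p_correct(np.pi / 8) > p_correct(0)   # beats theta = 0 and pi/4
```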

+",1837,,,,,09-10-2018 09:04,,,,2,,,,CC BY-SA 4.0 +4191,1,4192,,09-10-2018 09:45,,3,145,"

I've been taking a course on Quantum Mechanics in which, right now, I'm learning about its application in computing to give Quantum Computing. Quantum Mechanics is a concept that is applied to microscopic particles, explaining their motion, behaviour, etc. in the microscopic (quantum) world. How is this correlated with computing? They seem like entirely different things.

+",4557,,1837,,09-10-2018 09:49,09-10-2018 10:04,How is Quantum Mechanics (a Physics Concept) used in computing?,,1,0,,,,CC BY-SA 4.0 +4192,2,,4191,09-10-2018 10:04,,3,,"

Here's a simple idea that relates the two: if you're going to build a computer, you have to build a computer out of something. What you can get that computer to do must be bound by what the stuff you've built it out of can do. So, physics is very important for telling you what a computer can do.


When I put it like this, there's a clear relation between the two. But it looks like physics is going to be a limiting factor, telling you that you can't do things. For example, there's a finite speed of communication between two physically separate entities (the bits in your computer), determined by the speed of light, so there's a maximum speed of operations. You can't put things (your bits) too close together, or they'll turn into a black hole, so there's a maximum storage density. OK, these numbers are crazy, and really not limits on existing hardware.


So, the real question here is how could physics possibly let you do something better? There's a cute example that I like. Think about two rooms. One room has 3 light switches, and the other has 3 light bulbs. Each switch switches exactly 1 bulb. Your job is to determine which switch matches each bulb, but you're only allowed to visit each room once.


Now, a computer scientist will look at this problem and tell you it's impossible. There are 6 possible ways of wiring up the switches to the bulbs. You need $\log_2(6)$ bits of information to describe that. But, by flicking some switches and observing the bulbs, you only get $\log_2(3)$ bits of information, which is insufficient.


A physicist has several tricks. For example, switch 2 bulbs on, and wait a long time. Then switch one off. Now go into the bulb room. One bulb is on, one is warm, and one is cold. So, you can perfectly map switches to bulbs.


The point here is that we've used physics to do better. The problem is that the computer scientist used an abstraction of the system that actually made several assumptions about the system, and the physicist exploited the difference between the assumption and the reality. It's kind of similar with classical versus quantum computers. In effect, the theory of classical computation assumes probability theory. But probability theory isn't always true; at the quantum level, you need probability amplitudes. These include simple probabilities (so quantum computation includes classical), but provides more options. Hence there's the possibility for doing some things better.

+",1837,,,,,09-10-2018 10:04,,,,1,,,,CC BY-SA 4.0 +4193,1,,,09-10-2018 11:36,,3,2436,"

The word 'state' makes sense in Quantum Mechanics. In classical computers, numbers are represented in binary as a series of 0s and 1s. It seems that the word 'state' extracted from QM and the 0s and 1s extracted from Computing are put together to make $\lvert 0\rangle$ and $\lvert 1\rangle$. This doesn't make sense. What's the physical significance behind that representation?

+",4557,,45,,09-10-2018 21:55,09-11-2018 22:44,What's the physical significance of |0⟩ and |1⟩?,,4,0,,,,CC BY-SA 4.0 +4194,2,,4193,09-10-2018 12:54,,1,,"

The $|0\rangle$, $|1\rangle$ notation is a mathematical representation which makes the correspondence with classical computing easier. But qubits can be realised by different physical systems.


Take for instance two different polarizations of a photon; the alignment of a nuclear spin in a uniform magnetic field; or two states of an electron orbiting a single atom...


Sometimes, instead of 0 and 1, you can hear spin up or spin down to refer to the computational basis states. So it is just a matter of convention.

+",4127,,,,,09-10-2018 12:54,,,,0,,,,CC BY-SA 4.0 +4195,2,,4193,09-10-2018 13:19,,8,,"

Consider: in classical computing, what do the 0 and the 1 refer to?


Sometimes (and as suggested by Turing when he introduced Turing Machines), they represent literal symbols which are written down: a '0' character, or a '1' character. But when we use electronic computers, they more often refer to a low voltage versus a high voltage, or a magnetic field pointing in one direction or another. In each of these cases, what we are looking for is some physical system, in which there are some 'states' (either ink on a piece of paper, voltages in a wire, or direction of a magnetic field) which we can easily distinguish from one another (by seeing the differences in the ink patterns under ambient light, or by a suitable piece of electronics).


Here, as in quantum mechanics, we are considering questions of the state of a physical system, and the reason for this is that information must always be encoded in terms of physically distinguishable properties of the ways that a system could be — properties of the state of the system.


When we consider quantum computation, we have the same situation, only we have to be much more precise in what we are distinguishing. We not only want to have easily distinguished properties — like the orientation of an electron spin, or the polarisation of a photon — but for those properties also not to be closely coupled to any other properties of the system.


If we can succeed in this, and if that relatively isolated degree of freedom can have only two distinct values (which we could give any labels that we like, such as value A and value B, or indeed '0' and '1'), then we identify this as a qubit. By virtue of it not being strongly coupled to any other properties of the system, we can consider any possible configuration that this isolated degree of freedom can have, so that we may consider superpositions of '0' and '1', whether we can cause this degree of freedom to interact in a controlled way with other measurable degrees of freedom of the system, and so forth.


Often, but not always, this isolated degree of freedom (our 'qubit') is a property of one particular part of our system, such as an electron, a nuclear spin, a photon, a current through a small superconducting element, etc.; and the fact that it gives us this qubit is due to the fact that it is not too strongly coupled to other electrons, photons, etc. in the physical set-up being considered. Of course, arranging that this should be the case is a question of delicate engineering, but we can consider how to do so. Even so, it is not necessary that a qubit be as easily isolated as pointing to one particular physical system: once we get going with quantum error correction, particularly if we use planar surface codes, the story of ""where is my qubit"" may become a bit more complicated — but the qubits will still be there, and will arise out of a well-controlled degree of freedom of the system.


In short: just as '0' and '1' are shorthand for physically distinguishable values of some degree of freedom in some classical physical system, |0⟩ and |1⟩ are shorthand for physically distinguishable values of some degree of freedom in a quantum mechanical system.

+",124,,,,,09-10-2018 13:19,,,,0,,,,CC BY-SA 4.0 +4196,2,,4185,09-10-2018 15:53,,5,,"

It depends on which languages you have more affinity with.


Qiskit, pyQuil, etc. are in Python, which is generally an easy programming language to understand, with a lot of helpful libraries. They provide documentation/tutorials to let any beginner start quantum computing. Writing code can be done in a few lines.


Q# sits in the C#/.NET ecosystem. I have not tried it, but if you started learning programming focusing on C or C++ (and like it), I guess you should be comfortable with this one.


For learning, I would recommend reading the book Quantum Computation and Quantum Information by Nielsen and Chuang, or Quantum Computing for Computer Scientists by Yanofsky and Mannucci, if you can get access to them, and in parallel looking at some code on the platform of your choice. But it is always a good idea to combine different sources and explanations to understand better.

+",4127,,4127,,09-10-2018 18:58,09-10-2018 18:58,,,,2,,,,CC BY-SA 4.0 +4197,2,,4193,09-10-2018 19:51,,1,,"

$\left|0\right>$ and $\left|1\right>$ are shorthand for vectors in a pre-defined state space. Their physical meaning depends on the underlying technology.


For example, you could have an optical polarization system, such that $\left|0\right>$ means that the photon is vertically polarized, and $\left|1\right>$ means that the photon is horizontally polarized.


You could have a superconducting flux loop system, such that $\left|0\right>$ means that the loop current is clockwise (+ magnetic flux), and $\left|1\right>$ means that the loop current is counter-clockwise (- magnetic flux).


You could have a differential optical system, such that $\left|0\right>$ means that the photon is in the left fiber, and $\left|1\right>$ means that the photon is in the right fiber.


You could have a neutral atom system, such that $\left|0\right>$ means that the unpaired electron spin is parallel to the nucleus spin, and $\left|1\right>$ means that the unpaired electron spin is opposed to the nucleus spin.


Given the large number of possible technologies which might be used, you can see that this list goes on for a while.


The main thing to know is that, in principle, you could build a Quantum Computer out of any of these technologies. The software description should be identical, as long as certain elementary operations, such as CNOT, are supported. A quantum algorithm which implements an Oracle should work the same way, no matter which implementation technology is used.


As other people point out, this is similar to the situation in classical logic. Ethernet, for example, could be implemented using copper wire, optical fiber, or radio. This is called ""Layer 1"". The definition of 0 and 1 depends on the underlying technology. What is important is that Ethernet (and TCP/IP running on top of Ethernet) does not care what the physical implementation of the bits is.

+",4562,,23,,09-10-2018 22:02,09-10-2018 22:02,,,,0,,,,CC BY-SA 4.0 +4198,1,,,09-10-2018 22:06,,8,303,"

Fix $n$, the number of physical qubits, and $k$, the number of encoded logical qubits. We can find a set of $(n-k)$ operators that all mutually commute and moreover form a group $S$. Let's assume that the group $S$ is a subgroup of the Pauli group. We can use these operators to fix a $2^{k}$-dimensional vector space.


Now consider all the stabilizer groups $S_i$ formed this way, encoding $k$ qubits in $n$, and consider the set $\mathcal{S}= \{S_i \, \vert\, i=1 \ldots N\}$, where $S_i$ is a specific stabilizer group that stabilizes some $2^{k}$-dimensional vector space. How can I explicitly parametrize this set? For example: for $n=3$ and $k=1$, we could have $S_1 = \langle Z_1Z_2, Z_2 Z_3\rangle $ and $S_2 = \langle X_1X_2, X_2 X_3\rangle $, and so forth for other distinct stabilizer groups.


One possible way to a solution is to consider the parity check matrix for a specific $S_i$, and then ask what group action we could define on the parity check matrix of $S_i$ to produce the parity check matrix for any other stabilizer group of the same cardinality. But I don't know how such a group would act on the stabilizer group. In my example for $(n,k) = (3,1)$ above, I can change $S_1$ to $S_2$ by conjugating by a Hadamard, and I think this corresponds to a right multiplication of some $2n \times 2n$ matrix on the parity check matrix.
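The Hadamard example can be verified directly with matrices, building the $3$-qubit operators with Kronecker products (a NumPy sketch):

```python
import numpy as np

I = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.diag([1., -1.])
H = np.array([[1., 1.], [1., -1.]]) / np.sqrt(2)   # H is real, H = H^T = H^{-1}

H3 = np.kron(np.kron(H, H), H)   # Hadamard on each of the 3 qubits
# Conjugation maps the generators of S_1 to those of S_2:
assert np.allclose(H3 @ np.kron(np.kron(Z, Z), I) @ H3.T, np.kron(np.kron(X, X), I))
assert np.allclose(H3 @ np.kron(np.kron(I, Z), Z) @ H3.T, np.kron(np.kron(I, X), X))
```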


Because of this example, I am tempted to think that what I require is the whole Clifford group (or a subgroup of it) acting by conjugation on the $S_i$, and that this will correspond to symplectic $(2n \times 2n)$ matrices acting on the parity check matrices. In that case, the set $\mathcal{S}$ is parametrized by fixing a specific stabilizer group $S_i$ and acting on it by conjugation by a unitary representation of the Clifford group or subgroup. Is this close?

+",4565,,2371,,09-12-2018 21:56,09-12-2018 21:56,Compact way of describing the set of all stabilizer groups for fixed number of physical qubits and encoded logical qubits,,1,2,,,,CC BY-SA 4.0 +4199,1,4207,,09-11-2018 10:07,,3,464,"

The power of a quantum computer is often described in terms of how many qubits it has. Does that mean that quantum computers can be run using only qubits, and nothing more? Wouldn't we require some kind of quantum RAM or quantum hard drive for a fully functional quantum computer?

+",4182,,26,,12/13/2018 20:08,12/13/2018 20:08,Are qubits the only elements required to build a quantum computer?,,3,0,,,,CC BY-SA 4.0 +4200,1,,,09-11-2018 10:31,,4,1132,"

I am working in Anaconda using the Spyder editor. Can anyone tell me how to install Qiskit in Spyder?

+",,user4573,26,,03-12-2019 09:34,03-12-2019 09:34,How to install qiskit in spyder editor?,,1,0,,,,CC BY-SA 4.0 +4201,2,,4198,09-11-2018 11:32,,5,,"

There's good news and bad news. The good news is that your intuitions are essentially right, and that there is such a group action via the Clifford group. The bad news is, depending on what you want out of that parameterisation, it may not be as useful as you are hoping.$\def\ket#1{\lvert #1 \rangle}$


The good news first — every Pauli stabiliser group on $n$ qubits, with $r = n-k$ independent generators, can be mapped to any other such group by conjugation by Clifford group operators. The simplest way to show this is by induction on $r$. If $r = 0$, then there is only one such stabiliser group: the trivial group $\{ \mathbf 1 \}$. For any $r > 0$, given an input stabilizer group $S$, you can reduce to the case of $r-1$ by the following steps:

  • Select any generator $P_r$ of the stabiliser group, and some qubit $x_r$ on which $P_r$ acts non-trivially.

  • Find a Clifford group operator $C_r$ such that $C_r P_r C_r^\dagger = Z_{n-r}$, the single-qubit Pauli $Z$ operator acting only on qubit $(n-r)$. The operator $C_r$ may involve SWAP operators in order to exchange the tensor factors for qubit $x_r$ and $(n-r)$.

  • Determine how the other generators of the stabilizer group transform under $C_r$. This produces a list of generators for the group $S' = \{ C_r P C_r^\dagger \,\vert\, P \in S\}$. Because $S'$ is abelian, the image of each generator either acts on qubit $(n-r)$ with $\mathbf 1$ or $Z$. In the latter case, produce a new generator by multiplying it by $Z_{n-r}$. As $Z_{n-r}$ is an element of $S'$, this yields an equivalent set of generators for the group.

Having done this, you have a stabiliser group for a subspace which is stabilized by $Z_{n-r}$. Any state in this subspace factors as a tensor product of $\ket{0}$ on qubit $(n-r)$, and some state on the remaining qubits. By considering the stabilizer code defined on all of the other qubits, you have reduced to the case of a stabiliser group on $n-1$ qubits and with $r-1$ generators.
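The second step of the procedure can be illustrated on a single qubit: the Hadamard is a Clifford operator satisfying $H X H^\dagger = Z$ and $H Z H^\dagger = X$, which is exactly the kind of conjugation used to bring a chosen generator to a single-qubit $Z$ (a NumPy sketch):

```python
import numpy as np

X = np.array([[0., 1.], [1., 0.]])
Z = np.diag([1., -1.])
H = np.array([[1., 1.], [1., -1.]]) / np.sqrt(2)   # H is its own inverse

assert np.allclose(H @ X @ H, Z)   # C P C^dag with C = H, P = X gives Z
assert np.allclose(H @ Z @ H, X)
```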


If we unpack this inductive proof, we obtain a recursive procedure to map any stabiliser code $S$ with $r$ generators to a Clifford circuit $\mathcal C$ which maps that stabiliser group to the specific group $$\mathcal Z_{n,r} := \langle Z_{n-r}, Z_{n-r+1}, \ldots, Z_n\rangle \,.$$ If you have two such codes $S_1$ and $S_2$, just compose their circuits $\mathcal C_2^\dagger \mathcal C_1$ to obtain a circuit which maps $S_1$ to $S_2$. There is some redundancy, in that different sets of generators of the stabilizer group of $S_j$ will yield different circuits $\mathcal C_j$: this corresponds to the fact that some Clifford circuits just evaluate automorphisms (i.e. logical unitaries) of the code. But never mind: what you have is a way of generating any stabilizer code on $n$ qubits with $r$ stabilizer generators from a single code.


The bad news is that, as this stands, all that we have done above is in effect to parameterize stabiliser codes by their encoding circuits. By ""encoding circuit"", I mean just the circuit which takes a $k = n-r$ qubit state $\ket{\psi}$, and then encodes $\ket{\psi}$ in an $n$-qubit system by preparing $r$ fresh qubits in the state $\ket{0}$ and acting on them by an appropriate unitary. By reducing an arbitrary stabilizer code with $r$ generators to a 'canonical' (and extremely dull) code whose stabilizer group is $\mathcal Z_{n,r}$, we have proven nothing more or less than that a stabilizer code is one with a Clifford encoding circuit. Describing stabiliser codes in terms of the orbit of $\mathcal Z_{n,r}$ under the $n$-qubit Clifford group is no more or less than describing codes in terms of their encoding circuits. This is a good fact to rely on, but more of a basic result than a deep result.

+ +

If you take some other code as the 'reference' code, then you are essentially doing the same thing, except prefacing that encoding circuit by some other Clifford circuit. This point of view may or may not be helpful to you — it's certainly a good elementary property to be aware of, when you're discussing stabiliser codes and stabiliser states with others who are less familiar with them — but without imposing additional constraints on what encoding circuits or code representations you're interested in (e.g. to limit the automorphisms of codes which you consider), my guess is that this parameterisation may be of limited usefulness. The crux, in the end, will be which properties of stabilizer codes you are concerned with.

+",124,,,,,09-11-2018 11:32,,,,3,,,,CC BY-SA 4.0 +4202,2,,4200,09-11-2018 12:31,,2,,"

The recommended way is to install it using pip which should be already within anaconda. If not, check this link.

+ +

pip install qiskit

+",4127,,,,,09-11-2018 12:31,,,,0,,,,CC BY-SA 4.0 +4203,2,,4199,09-11-2018 12:41,,1,,"

Qubits are indeed the main elements. But in the future, hopefully, we will have complementary devices like a qRAM, which will be helpful for many quantum algorithms (like HHL) that need to input classical data in a quantum form.

+ +

For a more visual presentation of the qRAM, you can check these slides. +The architecture proposed at that time was the bucket-brigade with qutrits. +What we need is basically the ability to load data in a fast and coherent way.

+",4127,,4127,,09-11-2018 13:06,09-11-2018 13:06,,,,2,,,,CC BY-SA 4.0 +4204,1,4222,,09-11-2018 19:56,,16,3914,"

Can anyone give me a list of different research journals that have quantum computing articles that I can use?

+",4576,,55,,03-07-2021 23:19,01-02-2023 17:22,What is a list of research journals publishing quantum computing articles?,,4,0,,,,CC BY-SA 4.0 +4205,2,,4199,09-11-2018 21:01,,2,,"

Currently, as far as I know, all gate-model quantum architectures start computation in a ground state, normally denoted as the zero state $|0\rangle$. In theory, any desired input state can be created from this by use of a correct unitary transformation $\rm U$, since by definition the Hilbert space of quantum states is invariant under unitary transformations (of equal dimensions). This suggests qubits are the only thing we need for a working quantum computer.

+ +

The problem is, however, that we don't always know how to construct $\rm U$. We know the first column vector of $\rm U$—as this is the vector that $|0\rangle$ is to be mapped to—but finding a complete matrix which can be nicely expressed in terms of quantum gates quickly becomes an intractable problem. For up to four qubits (or maybe six if you're really devoted) this might be doable, but it becomes impossible for an arbitrary large number of qubits, as may be required in, for example, the HHL algorithm to represent the input vector.

+ +
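
As a small matrix-level illustration (a NumPy sketch added for concreteness, not a standard recipe), one can always complete a normalized state vector to a full unitary via a QR decomposition; the genuinely hard part, as noted above, is decomposing the resulting $\rm U$ into elementary gates:

```python
import numpy as np

# Complete a normalized n-qubit state psi to a unitary U with U|00...0> = psi.
# Trick: QR-decompose a matrix whose first column is psi.
rng = np.random.default_rng(0)
n = 3
dim = 2 ** n
psi = rng.normal(size=dim) + 1j * rng.normal(size=dim)
psi = psi / np.linalg.norm(psi)

A = np.eye(dim, dtype=complex)
A[:, 0] = psi
Q, R = np.linalg.qr(A)       # Q is unitary and Q[:, 0] * R[0, 0] = psi
U = Q.copy()
U[:, 0] = Q[:, 0] * R[0, 0]  # |R[0, 0]| = ||psi|| = 1, so U stays unitary

assert np.allclose(U[:, 0], psi)
assert np.allclose(U.conj().T @ U, np.eye(dim))
```

This says nothing about expressing $\rm U$ in terms of quantum gates, which is exactly where the intractability lies.

+ +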

And don't even talk about compiling this possibly huge $\rm U$ to the hardware.

+ +

This is where QRAM is supposed to come in. QRAM, which as of now only exists in theory, is a device which is capable of storing a quantum state for an extended period of time, and which, given an index state $|i\rangle$, will load the quantum state located at position $i$ in the memory into an empty state. That is: +$${\rm QRAM}: |i\rangle|0\rangle\mapsto|i\rangle|{\rm load}(i)\rangle.$$

+ +
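
For a toy illustration that this map can indeed be unitary (a NumPy sketch under the simplifying assumption that the memory holds classical bit strings), write the action as $|i\rangle|d\rangle\mapsto|i\rangle|d\oplus{\rm load}(i)\rangle$, a permutation which reduces to the map above when the data register starts in $|0\rangle$:

```python
import numpy as np

# Toy QRAM over 2 index qubits and 2 data qubits; load(i) = data[i] (classical).
data = [0b00, 0b01, 0b10, 0b11]  # contents of the four memory cells
n_idx, n_dat = 2, 2
dim = 2 ** (n_idx + n_dat)

# Build U as a permutation: |i>|d> -> |i>|d XOR data[i]>.
# XOR-ing makes the map its own inverse, hence unitary.
U = np.zeros((dim, dim))
for i in range(2 ** n_idx):
    for d in range(2 ** n_dat):
        src = (i << n_dat) | d
        dst = (i << n_dat) | (d ^ data[i])
        U[dst, src] = 1.0

# Check unitarity and the defining action |i>|0> -> |i>|data[i]>.
assert np.allclose(U @ U.T, np.eye(dim))
for i in range(2 ** n_idx):
    out = U @ np.eye(dim)[:, (i << n_dat) | 0]
    assert out[(i << n_dat) | data[i]] == 1.0
```

Real QRAM proposals (e.g. the bucket-brigade architecture) are of course far more subtle than this permutation-matrix picture.

+ +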

In case you are wondering: yes, this can be realised as a unitary transformation. But again, even though some proposals have been made, nobody has been able to reliably implement such a map yet. This is in part caused by the short decoherence times of quantum states, which makes building QRAM a huge challenge.

+ +

Furthermore, there is a fundamental problem with the use of QRAM. Assuming we have access to such a QRAM is equivalent to assuming someone or something else will prepare the required quantum state $|{\rm load}(i)\rangle$ and write it into the QRAM for us, so we can simply read it out without effort and run our quantum algorithm with it. And this is precisely the finicky part about these famous quantum algorithms like HHL (and also Grover's, to some extent): they don't provide any description regarding how the input states are actually to be prepared. And a QRAM won't solve this, as in order to load a quantum state, it needs to be written there first.

+ +

The only upside is that there may be a clever procedure to write the desired state in a QRAM, allowing us to circumvent having to construct $\rm U$; but that's not something I can draw any conclusions on with certainty.

+ +

Apart from QRAM, there is one more aspect I want to mention regarding the power of a quantum computer: error rates. Quantum particles are fragile and very prone to error, which means we have to account for that in some way. After all, a quantum computer with thousands of qubits is completely useless if these qubits can't hold a state for long enough to do the computation. The currently most widely known method to deal with these errors is quantum error correction, which (roughly speaking) uses auxiliary qubits to reduce the probability of errors occurring in the work qubits. Suffice it to say that the effort put into correcting quantum errors directly contributes to the computational power—and thus usefulness—of the quantum computer.

+ +

It is for this reason that the notion of quantum volume was introduced to express the actual power of a quantum chip. This metric incorporates the number of qubits, the error rate as well as a few other things.

+ +

To answer your question: I would say yes in some sense, since (a) QRAM doesn't solve the problem of efficiently preparing input states and (b) error correction is done with extra qubits. However, if we want to build a big quantum computer which is comparable in power to a classical computer, we will need stuff like QRAM (including a clever way of writing states to it) to make it work. Just like a classical computer can't operate on bits alone.

+ +

Don't expect such a machine to be built within the next twenty years, though.

+",2687,,2687,,09-11-2018 21:37,09-11-2018 21:37,,,,0,,,,CC BY-SA 4.0 +4206,2,,4193,09-11-2018 22:16,,1,,"

Building upon the previous answers, I will give a concrete example in terms of spin-$\frac12$ particles.

+ +

Assume such a particle, say an electron, is at rest inside a magnetic field of strength $B$ pointing in the $z$ direction. The hamiltonian of this small system is

+ +

$${\rm H}=-\gamma B\,{\rm S}_z=-\gamma B\,\frac\hbar2\begin{bmatrix}1&0\\0&-1\end{bmatrix}$$

+ +

where $\gamma$ is some constant called the gyromagnetic ratio and ${\rm S}_z$ is the observable corresponding to spin in the $z$ direction. The eigenstates of this hamiltonian are the states that we prefer to name spin up and spin down, +$$|\!\uparrow\rangle=\begin{bmatrix}1\\0\end{bmatrix},\;\;\;\;|\!\downarrow\rangle=\begin{bmatrix}0\\1\end{bmatrix},$$ +with eigen-energies +$$E_\uparrow=-\gamma B\frac\hbar2,\;\;\;\;E_\downarrow=+\gamma B\frac\hbar2.$$

+ +
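
To tie the algebra together, here is a short NumPy check (with units chosen so that $\hbar=\gamma=B=1$) that spin up and spin down are indeed eigenstates of this hamiltonian with the stated energies:

```python
import numpy as np

hbar = gamma = B = 1.0  # units chosen so the numbers stay simple
H = -gamma * B * (hbar / 2) * np.array([[1, 0], [0, -1]])

up = np.array([1, 0])    # |0>, a.k.a. spin up
down = np.array([0, 1])  # |1>, a.k.a. spin down

# H|up> = E_up |up> with E_up = -gamma*B*hbar/2, and similarly for |down>.
assert np.allclose(H @ up, (-gamma * B * hbar / 2) * up)
assert np.allclose(H @ down, (+gamma * B * hbar / 2) * down)
```

+ +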

If we were to build a quantum computer using these spin-$\frac12$ particles as qubits, we could relabel the spin up state $|\!\uparrow\rangle$ as $|0\rangle$, and the spin down state $|\!\downarrow\rangle$ as $|1\rangle$. After all, it is much more convenient to reason about strings of qubits like $|001001010111\rangle$ than strings of ups and downs, i.e. $|\!\uparrow\uparrow\downarrow\uparrow\uparrow\downarrow\uparrow\downarrow\uparrow\downarrow\downarrow\downarrow\rangle$. Measuring a qubit would then mean measuring its spin, and finding either $E_\uparrow$ or $E_\downarrow$ to determine which state it collapsed to.

+ +

Of course, you could build a quantum computer with a completely different system under a different hamiltonian, but the same principle holds. Computer scientists talking about quantum computation generally toss out the entire idea of hamiltonians, energies and so forth because circuits carrying $|0\rangle$s and $|1\rangle$s are easier for them to understand, and because it abstracts away the underlying physical system.

+",2687,,2687,,09-11-2018 22:44,09-11-2018 22:44,,,,0,,,,CC BY-SA 4.0 +4207,2,,4199,09-12-2018 06:39,,2,,"

Before answering your question, let's go back to classical computing. The classical computer has a processor, which implements functions on bits, and it has varying levels of memory which trade speed of access against volatility and capacity. So, there are hard drives with large, non-volatile, but comparatively slow storage, and RAM, which is volatile, smaller, but fast. There's even some cache on the processor, which is smaller and faster again. But, all of this stores bits. Overall, a computer can simply be described as something that stores and processes bits, it's just that for different aspects of that process, it is more appropriate to use different technologies, leveraging their different advantages.

+ +

Exactly the same is true of quantum. Everything is qubits, both processing (application of unitaries, e.g. gate model, and measurement) and storage. We know of different technologies that have different advantages in terms of the trade between gate speed and volatility (measured as ""decoherence time""). One might also add into the mix an ability to transmit quantum information over moderate distances. So, one might imagine that the ultimate quantum computer separates out in a similar way to the classical computer. But it's still all qubits. Of course, we're not there yet. It's hard enough doing one part of the experiment with one type of hardware. Add into the mix an interface between two different types of hardware, and it's even trickier.

+",1837,,,,,09-12-2018 06:39,,,,0,,,,CC BY-SA 4.0 +4208,2,,4204,09-12-2018 06:48,,8,,"

There are loads of different quantum computing journals, so you might want to be more specific about what you're looking for. However, the vast majority of papers (certainly theory, perhaps a bit less so the experiments) appear as preprints on the arXiv, specifically the quant-ph section. The majority of papers, all in one place, collectively searchable, and free to read. What more could you ask for? If it's been published, usually it'll link to the published version as well (that depends on the author updating the record).

+ +

As with any paper you read, you'll have to make up your own mind about the validity of any paper you find on the arXiv, but these haven't necessarily been through peer review, or have been through peer review of varying degrees of rigour, depending on the ultimate publication journal.

+",1837,,,,,09-12-2018 06:48,,,,0,,,,CC BY-SA 4.0 +4209,1,,,09-12-2018 06:53,,7,810,"

How can I get access to IBM Q 20 Tokyo and IBM Q 20 Austin?

+ +

On the Q Experience site it is written that access is for IBM clients only, and in the profile there is a ""promotional code"" that you need to enter to gain access to those chips.

+ +

How do I become an IBM Q client and get this promotional code?

+",4524,,,,,11/13/2018 8:22,How can I get access to IBM Q 20 Tokyo and IBM Q 20 Austin?,,2,0,,,,CC BY-SA 4.0 +4210,1,4214,,09-12-2018 07:47,,4,231,"

I am using a Jupyter notebook to write and run my qiskit code (python 3.6), +and every time I encounter the message: LookupError: backend ""ibmqx4"" is not found. Right now the ibmqx4 computer is not in maintenance and it is running well.

+ +

I tried to regenerate the APItoken in the advanced options and update the Qconfig.py file accordingly, but the message still appears.

+ +

Look at this example of the code that I wrote:

+ +
import numpy as np
+import matplotlib.pyplot as plt
+import qiskit as qk
+from qiskit import QuantumCircuit, QuantumRegister, ClassicalRegister, execute, register
+
+
+import Qconfig
+
+
+
+batch = []
+n=50
+t=450
+# we define n similar quantum circuits
+for j in range(1,n+1):
+    q = QuantumRegister(1)
+    c = ClassicalRegister(1)
+    qc = QuantumCircuit(q, c)
+    qc.x(q)
+    qc.barrier()
+
+
+    # we set the pausing time of the system to t * (gate time)
+    for i in range(t):
+        qc.iden(q)
+    qc.measure(q,c)    
+    # we append every circuit we create to batch, so batch is like a list of n circuits
+    batch.append(qc)
+
+# we execute all the n circuits and for each we run 1024 shots
+shots = 1024
+job = execute(qc,'ibmqx4',shots=shots)
+
+ +

and I get the following output from the last line:

+ +
---------------------------------------------------------------------------
+LookupError                               Traceback (most recent call last)
+<ipython-input-38-a89f2171589f> in <module>()
+      1 # we executing all the n circuits and for each we run 1024 shots
+      2 shots = 1024
+----> 3 job = execute(qc,'ibmqx4',shots=shots)
+
+~\Anaconda3\envs\QISKit\lib\site-packages\qiskit\wrapper\_wrapper.py in execute(circuits, backend, config, basis_gates, coupling_map, initial_layout, shots, max_credits, seed, qobj_id, hpc, skip_transpiler)
+    270     # pylint: disable=missing-param-doc, missing-type-doc
+    271     if isinstance(backend, str):
+--> 272         backend = _DEFAULT_PROVIDER.get_backend(backend)
+    273     qobj = compile(circuits, backend,
+    274                    config, basis_gates, coupling_map, initial_layout,
+
+~\Anaconda3\envs\QISKit\lib\site-packages\qiskit\wrapper\defaultqiskitprovider.py in get_backend(self, name)
+     29 
+     30     def get_backend(self, name):
+---> 31         name = self.resolve_backend_name(name)
+     32         for provider in self.providers:
+     33             try:
+
+~\Anaconda3\envs\QISKit\lib\site-packages\qiskit\wrapper\defaultqiskitprovider.py in resolve_backend_name(self, name)
+    218 
+    219         if resolved_name not in available:
+--> 220             raise LookupError('backend ""{}"" not found.'.format(name))
+    221 
+    222         return resolved_name
+
+LookupError: backend ""ibmqx4"" not found.
+
+ +

Please help me solve this recurring error.

+",4524,,26,,03-12-2019 09:35,03-12-2019 09:35,"How to deal with -LookupError: backend ""ibmqx4"" is not found?",,1,0,,,,CC BY-SA 4.0 +4211,2,,4204,09-12-2018 07:47,,3,,"

I mostly work on quantum error correction and quantum information theory, so I can give you references about journals that cover such topics. Anyway, I am pretty sure that articles about quantum computing in general do also get published in such journals. I give you a list of them:

+ +
    +
  • IEEE transactions on Information Theory
  • +
  • Physical Review Letters
  • +
  • Physical Review A
  • +
  • Physical Review X
  • +
  • Springer Quantum Information Processing
  • +
  • Entropy Journal
  • +
+ +

Take into account that those journals are not in general dedicated only to quantum computing, so you must look within them for the articles related to the topic you are interested in. Also, arXiv is a really good place to look for papers on any topic, as most authors publish their preprints/papers on that site.

+",2371,,,,,09-12-2018 07:47,,,,0,,,,CC BY-SA 4.0 +4212,1,,,09-12-2018 08:44,,4,146,"

Given an arbitrary graph state $|G\rangle$ represented by the graph $G$, can one use the graphical structure to calculate the number of ebits (entanglement bits) present in the state?

+ +

If so, how?

+",391,,,,,09-12-2018 15:02,How to calculate the number of ebits in a graph state?,,1,4,,,,CC BY-SA 4.0 +4213,2,,4183,09-12-2018 09:50,,2,,"

You can think of teleportation as the process of sending the state of one qubit from one place to another without having to physically send the qubit itself. Instead, you start with an entangled pair shared between the two locations, and that entanglement is consumed in the process. If you're not already familiar with teleportation, I strongly recommend that you find out about it. It's one of the basic building blocks of many results in quantum information.

+ +

Of course, a swap gate is simply sending the state of one qubit from A to B, and another from B to A, which you can therefore do with two teleportations. So, how does this circuit work? Consider two locations, A and B. At each location, there are 3 qubits. So, at A, we have A1, A2 and A3.

+ +
    +
  • Before the start of the main protocol, we create two entangled pairs, one between A2 and B3, and another between B2 and A3. This is the stuff in the red box.

  • +
  • For the main protocol, there is a one-qubit state (unknown) on A1, and another on B1. The aim is to swap them.

  • +
  • The A1 qubit is teleported using the A2-B3 entangled pair, so the state arrives at B3.

  • +
  • The B1 qubit is teleported using the B2-A3 entangled pair, so the state arrives at A3.

  • +
  • A swap is performed from A3 to A1, and another from B3 to B1. Hence, the transmitted states arrive on the specific qubits they were supposed to be on.

  • +
+ +

You can't quite follow that sequence on the depicted circuit diagram as the final swaps are actually performed a little bit earlier. That just changes which qubits some of the operations are performed on.

+ +

Hopefully, you can see that this really is just 2 teleportations run independently to distribute the 2 different quantum states. As such, you can easily generalise it. If you have n parties, and everybody knows in advance where they will be sending their qubit, then, again, each party can manage with just 3 qubits, and they will share two entangled pairs, one with the user they'll be receiving a state from, and one with the user they'll be sending their state to.

+ +

Now, if you don't know in advance where each party will want to send their qubit, you could make it so that each location has $n$ qubits: $n-1$ with entangled pairs shared with every other party, and one with the qubit state to be sent. But that is highly inefficient in terms of resources, and it is an interesting question about how little entanglement is actually necessary. I assume this has been answered somewhere, but I don't know where off the top of my head.

+",1837,,,,,09-12-2018 09:50,,,,0,,,,CC BY-SA 4.0 +4214,2,,4210,09-12-2018 10:31,,2,,"

I got it: two code snippets need to be added.

+ +

The first one goes at the beginning, before starting programming:

+ +
import sys, time, getpass
+try:
+    sys.path.append(""../../"") # go to parent dir
+    import Qconfig
+    qx_config = {
+        ""APItoken"": Qconfig.APItoken,
+        ""url"": Qconfig.config['url']}
+    print('Qconfig loaded from %s.' % Qconfig.__file__)
+except:
+    APItoken = getpass.getpass('Please input your token and hit enter: ')
+    qx_config = {
+        ""APItoken"": APItoken,
+        ""url"":""https://quantumexperience.ng.bluemix.net/api""}
+    print('Qconfig.py not found in qiskit-tutorial directory; Qconfig loaded using user input.')
+
+ +

The second one goes right before execution:

+ +
register(qx_config['APItoken'], qx_config['url'])
+
+ +

Hope this helps anyone with a similar error!

+",4524,,,,,09-12-2018 10:31,,,,0,,,,CC BY-SA 4.0 +4215,2,,4209,09-12-2018 12:30,,1,,"

By going to this website. +But I do not know exactly how the process works. +It should involve your company/startup/institution, and then you make some kind of agreement to be part of the ""community"".

+",4127,,,,,09-12-2018 12:30,,,,0,,,,CC BY-SA 4.0 +4216,2,,4183,09-12-2018 12:52,,5,,"

The simplest way to generalize teleportation is to just repeat it. If you have one EPR pair divided between Alice and Bob, teleportation allows you to move one qubit from Alice to Bob (or vice versa) by consuming the EPR pair and using a classical communication channel. If you have two EPR pairs, you can move two qubits by performing two independent teleportations. If you have N EPR pairs, you can move N qubits by performing teleportation N times.

+ +

In the case of the diagram you posted, what's happening is that there is one teleportation from Alice to Bob (to move the A qubit from the top to the bottom) and one teleportation from Bob to Alice (to move the B qubit to the top). This is somewhat obscured by the fact that the teleportations' operations are being interleaved, and also by the two swap gates that ensure the output qubits end up on the same lines as the input qubits. But it really is just two independent copies of the teleportation circuit, one from Alice to Bob followed by one from Bob to Alice, but ""shoved together"" (i.e. with the operations slid around so they interleave more).

+ +

(The more complicated way to generalize teleportation is figuring out how to make it work on qutrits and qudits instead of only qubits. Basically, instead of using a ""basis"" made up of tensor products of X and Z matrices, you need to switch to a basis based on clock and shift matrices.)

+",119,,,,,09-12-2018 12:52,,,,2,,,,CC BY-SA 4.0 +4217,2,,4212,09-12-2018 14:57,,4,,"

The number of Bell pairs required to construct a given graph state can easily be given an upper bound: $|V|-1$, where $V$ is the set of vertices. You do this simply by preparing the entire state at one site, and teleporting all the other qubits to the relevant party.

+ +

I wonder if this is actually all there is to it? If we assume that the entire graph is connected, then every individual qubit is (maximally) entangled with the rest of the graph, and that entanglement must be distributed somehow.

+ +

Lower bounds in the multipartite entanglement setting are quite difficult to prove. In a bipartite setting, you'd do it by showing how many Bell states you can extract from asymptotically many copies of the state of interest, but in the regime of multipartite entanglement, that rapidly leads you to MREGS (minimal reversible entanglement generating set), about which very little is known.

+",1837,,1837,,09-12-2018 15:02,09-12-2018 15:02,,,,0,,,,CC BY-SA 4.0 +4218,2,,4185,09-12-2018 19:54,,4,,"

I agree with James Wootton's answer. The choice of the language becomes important once you work on a larger project in which you want to rely on libraries, resource estimates and other advanced features. When you're starting to learn the basics of quantum computing and quantum programming, your programs will be very small and really not that different across different languages.

+ +

I assume you'll be going through some book/course on the theory of quantum computing. In this case, there are two things you'll definitely want from the programming language:

+ +
    +
  • a nice set of introductory tutorials/programming exercises to help you internalize the theory you've learned.
  • +
  • a quantum state simulator that will allow you to see the state of the qubits as your program executes.
  • +
+ +
+ +

My recommendation (biased in a different direction, as James suggested :-) ) is to take a look at Q#:

+ +
    +
  • Quantum Katas are self-paced programming tutorials designed to accompany a course on quantum computing theory. Each tutorial consists of a set of exercises for you to solve and a behind-the-scenes testing harness which checks whether your code is correct, providing you immediate feedback. The existing tutorials cover a nice set of introductory topics, and we are working on creating more tutorials.
  • +
  • The full state simulator included in the Quantum Development Kit allows to dump system state as a list of amplitudes, so you can use it whenever you want to check that the state of the system matches your understanding/expectation or to figure out what went wrong.
  • +
+",2879,,,,,09-12-2018 19:54,,,,0,,,,CC BY-SA 4.0 +4219,1,4221,,09-12-2018 20:52,,15,2870,"

What is meant by ''a qubit can't be copied''?

+

In a note, it is saying that:

+
+

Copying a qubit means $$U|\psi\rangle_A|0\rangle_B=|\psi\rangle_A|\psi\rangle_B$$ +i.e., applying a unitary transformation to the qubit state. It is explained as follows: if the copy operation were possible, then there would be a single unitary matrix $U$ which works on all qubit states; it is then shown that no such $U$ can exist.

+
+

I do not understand how it can be written in such a way; I think the unitary matrix $U$ operates on $|\psi\rangle_A$ only, so how can it copy it to the second state $|0\rangle$?

+

Secondly, why are we making the assumption that "if such a unitary matrix exists then it will be a single unitary matrix which works on all qubit states"? Why can't we use a different unitary matrix to copy each different qubit state (where possible, since e.g. $|+\rangle$ can't be copied)?

+

E.g., we can copy $|0\rangle_A$ to another state $|0\rangle_B$: $$U|0\rangle_A=|0\rangle_B\\U|1\rangle_A=|1\rangle_B$$ +as a classical bit can be copied, it is possible to find such a $U$.

+",3023,,-1,,6/18/2020 8:31,02-07-2019 17:30,"What do they mean by ""qubit can't be copied""?",,3,0,,,,CC BY-SA 4.0 +4220,2,,4219,09-12-2018 21:09,,4,,"

To answer the first part of the question (whether unitary matrix $U$ operates on $|\psi_A \rangle$ only):

+ +

A unitary matrix can operate on an arbitrary number of qubits. Single-qubit gates, like Pauli X, Y and Z gates, operate on one qubit and are represented by 2x2 matrices; CNOT gate operates on two qubits and is represented by a 4x4 matrix, etc.

+ +

In this case $U$ denotes a unitary transformation operating on two qubits, represented by a 4x4 matrix.

+ +
+ +

To answer the second part of the question (why should there be only one unitary to copy all possible states):

+ +

It is possible to find unitaries which copy some qubit states. The easiest example is CNOT gate which copies the states $|0 \rangle$ and $|1 \rangle$:

+ +

$$CNOT|0\rangle_A |0 \rangle_B=|0\rangle_A|0\rangle_B\\ +CNOT|1\rangle_A |0 \rangle_B=|1\rangle_A|1\rangle_B$$

+ +

But this unitary will not work to copy an unknown superposition of the states $|0 \rangle$ and $|1 \rangle$:

+ +

$$CNOT(\alpha |0\rangle + \beta |1\rangle)_A |0 \rangle_B=\alpha |0\rangle_A|0\rangle_B + \beta |1\rangle_A |1\rangle_B \neq (\alpha |0\rangle + \beta |1\rangle)_A (\alpha |0\rangle + \beta |1\rangle)_B$$

+ +
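
These three equations can be checked directly with a few lines of NumPy (a sketch added for illustration): CNOT copies the basis states, but on a superposition it produces an entangled state rather than two copies:

```python
import numpy as np

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

zero = np.array([1, 0])
one = np.array([0, 1])

# CNOT copies the basis states |0> and |1>:
assert np.allclose(CNOT @ np.kron(zero, zero), np.kron(zero, zero))
assert np.allclose(CNOT @ np.kron(one, zero), np.kron(one, one))

# ...but not a superposition: the output is entangled, not a product of copies.
alpha, beta = 0.6, 0.8
psi = alpha * zero + beta * one
out = CNOT @ np.kron(psi, zero)   # alpha|00> + beta|11>
copies = np.kron(psi, psi)        # what cloning would require
assert not np.allclose(out, copies)
```

+ +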

You can follow the proof given in the Wikipedia article to see that any one unitary can copy at best two orthogonal states.

+ +

We need to find one unitary that would work for all states because the no-cloning theorem addresses only copying of an unknown quantum state. If we know exactly what state we need to create, we can just create it from scratch without using the prototype qubit at all.

+",2879,,2879,,09-12-2018 22:42,09-12-2018 22:42,,,,1,,,,CC BY-SA 4.0 +4221,2,,4219,09-12-2018 21:37,,13,,"

All operations on quantum states are unitary operations. We don't make the rules; this is just how nature seems to work. So if you want to define an operation that copies a qubit, it has to be a unitary operation. That unitary operation would look like this:

+ +

$U|\psi\rangle_A|0\rangle_B=|\psi\rangle_A|\psi\rangle_B$

+ +

So you have the qubit you want to copy, $|\psi\rangle_A$, and the qubit into which you want to copy it, $|0\rangle_B$. This is the most general way to write the copy operation, although any other way of writing it reaches the same conclusion: it cannot be done.

+ +

The reason for this is as follows. Consider your starting state:

+ +

$|\psi\rangle|0\rangle = \begin{bmatrix} \alpha \\ \beta \end{bmatrix} ⊗ \begin{bmatrix} 1 \\ 0 \end{bmatrix} = \begin{bmatrix} \alpha \\ 0 \\ \beta \\ 0 \end{bmatrix}$

+ +

And now consider your desired ending state:

+ +

$|\psi\rangle|\psi\rangle = \begin{bmatrix} \alpha \\ \beta \end{bmatrix} ⊗ \begin{bmatrix} \alpha \\ \beta \end{bmatrix} = \begin{bmatrix} \alpha^2 \\ \alpha\beta \\ \beta\alpha \\ \beta^2 \end{bmatrix}$

+ +

So you want to go from here to here:

+ +

$\begin{bmatrix} \alpha \\ 0 \\ \beta \\ 0 \end{bmatrix} \rightarrow \begin{bmatrix} \alpha^2 \\ \alpha\beta \\ \beta\alpha \\ \beta^2 \end{bmatrix}$

+ +

But see those exponents? They mean this is not a linear operation! And since we can only perform linear operations on quantum states, no operation exists which can take us from the first state to the second (other than an operation which itself uses the values of $\alpha$ and $\beta$). Thus, copying (cloning) is impossible when you don't know $\alpha$ or $\beta$.

+ +
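
One more way to see the nonlinearity (a NumPy sketch added for illustration): a linear map must satisfy $f(x+y)=f(x)+f(y)$, but the cloning map $|\psi\rangle|0\rangle\mapsto|\psi\rangle|\psi\rangle$ already fails this for the inputs $|0\rangle$ and $|1\rangle$:

```python
import numpy as np

zero = np.array([1.0, 0.0])
one = np.array([0.0, 1.0])

def clone(psi):
    # The hypothetical cloning map |psi>|0> -> |psi>|psi>, written as a
    # function of the input amplitudes (normalization is irrelevant here:
    # we are only testing additivity).
    return np.kron(psi, psi)

# Linearity would require clone(zero + one) == clone(zero) + clone(one).
lhs = clone(zero + one)          # [1, 1, 1, 1]
rhs = clone(zero) + clone(one)   # [1, 0, 0, 1]
assert not np.allclose(lhs, rhs)  # the cross terms alpha*beta spoil it
```

+ +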

As for why we don't just use a different unitary transformation for each copy, that would require us to know the exact quantum state we want to copy. If we know the exact quantum state, we can just take a blank qubit and reconstruct the same quantum state on that qubit. Which is fine, but pretty useless considering the reason we want to be able to copy a quantum state is so we can find the value of the quantum state in the first place.

+ +

Classical bits can always be copied, as you discovered. Of course, we copy classical bits all the time in the real world (you're reading copied classical bits right now!).

+",4153,,4153,,09-12-2018 23:03,09-12-2018 23:03,,,,5,,,,CC BY-SA 4.0 +4222,2,,4204,09-12-2018 21:54,,15,,"

Here is what I think is a comprehensive list of journals that publish papers about quantum information with a noteworthy frequency (anyone is free to edit/add more).

+

Publishing only about quantum information:

+
    +
  • Quantum Information & Computation
  • +
  • Springer Quantum Information Processing
  • +
  • Quantum Computing Frontiers
  • +
  • npj Quantum Information
  • +
+

Publishing papers, which includes papers on quantum information:

+
    +
  • Physical Review A
  • +
  • Physical Review B
  • +
  • Physical Review X
  • +
  • Physical Review E
  • +
  • Physical Review Letters
  • +
  • Physical Review X Quantum
  • +
  • Physical Review Research
  • +
  • Reviews of Modern Physics
  • +
  • New Journal of Physics
  • +
  • Journal of Chemical Physics
  • +
  • Quantum Physics Letters
  • +
  • Quantum - the open journal for quantum science
  • +
  • Journal of Mathematical Physics
  • +
  • Physics Letters A
  • +
  • IEEE Transactions on Applied Superconductivity
  • +
  • IEEE Transactions on Information Theory
  • +
  • IEEE Transactions on Automatic Control
  • +
  • Entropy Journal
  • +
  • AIP Advances
  • +
  • Applied Physics Letters
  • +
  • Annals of Physics
  • +
  • Annalen der Physik
  • +
  • Canadian Journal of Physics
  • +
  • Journal of Applied Physics
  • +
  • Journal of the Physical Society of Japan
  • +
  • Nature Physics
  • +
  • Nature Chemistry
  • +
  • Nature Materials
  • +
  • Nature
  • +
  • Science
  • +
  • Science Advances
  • +
  • Scientific Reports
  • +
  • Nature Communications
  • +
  • European Physical Journal D
  • +
  • European Physical Journal B
  • +
  • Molecular Physics
  • +
  • Laser Physics
  • +
  • Journal of Physics B
  • +
  • Review of Scientific Instruments
  • +
  • Applied Optics
  • +
  • Optics Express
  • +
  • Optics Letters
  • +
  • Nature Photonics
  • +
  • Computer Physics Communications
  • +
  • Journal of Physics: Condensed Matter
  • +
  • Physica Status Solidi
  • +
  • Chemical Physics Letters
  • +
  • Physical Chemistry Chemical Physics
  • +
  • Journal of Physical Chemistry A
  • +
  • Journal of Physical Chemistry Letters
  • +
  • Communications in Mathematical Physics
  • +
  • Electronic Journal of Theoretical Physics
  • +
  • SIAM Journal of Computing
  • +
  • SIAM Journal on Scientific and Statistical Computing (Shor's algorithm)
  • +
  • Quantum Science and Technology (IOP)
  • +
  • Advanced Quantum Technologies (Wiley)
  • +
  • Quantum Machine Intelligence (Springer)
  • +
+",2293,,2293,,01-02-2023 17:22,01-02-2023 17:22,,,,2,,,,CC BY-SA 4.0 +4223,1,4225,,09-12-2018 23:15,,3,1566,"

Q# has a measurement operator defined as follows according to the docs:

+ +

operation Measure (bases : Pauli[], qubits : Qubit[]) : Result

+ +

Where you give a Pauli gate $\{I, \sigma_x, \sigma_y, \sigma_z\}$ as a measurement operator and the qbit is projected onto its eigenvectors.

+ +

How can I measure in an arbitrary basis? I know conceptually what I have to do is first rotate my state vector by some amount then measure in one of the Pauli bases, but what are the actual primitive gates and parameters I would use to do that? For example, say I wanted to measure in the following basis:

+ +

$\begin{bmatrix} \frac{\sqrt{3}}{2} \\ \frac 1 2 \end{bmatrix}, \begin{bmatrix} \frac{-1}{2} \\ \frac{\sqrt{3}}{2} \end{bmatrix}$

+ +

So basically the computational basis but rotated $\pi/6$ radians counter-clockwise around the unit circle.

+",4153,,26,,03-12-2019 09:06,05-10-2019 07:17,How to measure in an arbitrary basis in Q#?,,1,0,,,,CC BY-SA 4.0 +4225,2,,4223,9/13/2018 0:31,,4,,"

Let's say you want to distinguish two states:

+ +

$$|A\rangle = \cos \alpha |0\rangle + \sin \alpha |1\rangle \\ + |B\rangle = -\sin \alpha |0\rangle + \cos \alpha |1\rangle$$

+ +

For your particular example $\cos \alpha = \frac {\sqrt{3}}{2}$ and $\sin \alpha = \frac{1}{2}$, so $\alpha = \frac{\pi}{6}$.

+ +

These states are orthogonal and can be obtained from $|0\rangle$ and $|1\rangle$ by rotating around Y axis, i.e. by applying Ry(2.0 * alpha, _). You can verify it using the definition of the Ry operation and matrix exponentiation.

+ +

Thus, one way to measure the states $|A\rangle$ and $|B\rangle$ is to apply adjoint operation Ry(-2.0 * alpha, _) to your qubit to get those states back to $|0\rangle$ and $|1\rangle$, and then to measure the qubit in the computational basis using operation M (or Measure([PauliZ], _)).
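
As a quick sanity check of the claim above (independent of Q# — this is just the underlying linear algebra, sketched in numpy), one can verify that applying $R_y(-2\alpha)$ sends $|A\rangle$ and $|B\rangle$ back to the computational basis:

```python
import numpy as np

def ry(theta):
    # Matrix of the Ry(theta) rotation, i.e. exp(-i * theta * Y / 2)
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

alpha = np.pi / 6
A = np.array([np.cos(alpha), np.sin(alpha)])    # |A> = [sqrt(3)/2, 1/2]
B = np.array([-np.sin(alpha), np.cos(alpha)])   # |B>

# Applying the adjoint rotation sends |A> -> |0> and |B> -> |1>
print(ry(-2 * alpha) @ A)  # ~ [1, 0]
print(ry(-2 * alpha) @ B)  # ~ [0, 1]
```

After this rotation, an ordinary computational-basis measurement distinguishes the two states perfectly.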

+ +

In more general case you'd use other rotation operations (Rx and Rz) to perform the exact rotation which converts your basis states to computational basis before measuring them.

+ +
+ +

This question is effectively task 1.4 from the Measurements quantum kata, if you want to practice it in Q# (as well as more advanced measurement scenarios).

+",2879,,2879,,05-10-2019 07:17,05-10-2019 07:17,,,,3,,,,CC BY-SA 4.0 +4226,1,4227,,9/14/2018 3:10,,4,104,"

Are there any research papers focused on implementing a game/puzzle or game theory in Quantum Computing?

+",4576,,26,,12/13/2018 19:30,12/13/2018 19:30,"Quantum Computing Research Papers, on puzzles or game theory",,1,1,,,,CC BY-SA 4.0 +4227,2,,4226,9/14/2018 4:16,,7,,"

To start, I would read ""The next stage: quantum game theory"" by E. W. Piotrowski and J. Sladkowski. While the paper is from 2003, the authors discuss how developments in quantum computation allow the extension of the scope of game theory. It includes some basic history as well as some basic ideas and recent developments in quantum game theory.

+ +

These same authors also wrote a paper entitled ""Quantum Bargaining Games"" in 2001 which I would think will also be useful in your research. It's part of a larger analysis they did of ""quantum-like"" descriptions of market economics with roots in the then recently developed quantum game theory.

+ +

I think these two papers would be great starting points.

+ +

I would also check out James Wootton's ""Using a simple puzzle game to benchmark quantum computers"" in which he created a puzzle game called ""Quantum Awesomeness"". I recommend this primarily because it's slightly related but completely awesome.

+",274,,,,,9/14/2018 4:16,,,,0,,,,CC BY-SA 4.0 +4228,2,,74,9/14/2018 10:55,,8,,"

Quantum annealing

+ +

Quantum annealing is a model of quantum computation which, roughly speaking, generalises the adiabatic model of computation. It has attracted popular — and commercial — attention as a result of D-WAVE's work on the subject.

+ +

Precisely what quantum annealing consists of is not as well-defined as other models of computation, essentially because it is of more interest to quantum technologists than computer scientists. Broadly speaking, we can say that it is usually considered by people with the motivations of engineers, rather than the motivations of mathematicians, so that the subject appears to have many intuitions and rules of thumb but few 'formal' results. In fact, in an answer to my question about quantum annealing, Andrew O goes so far as to say that ""quantum annealing can't be defined without considerations of algorithms and hardware"". Nevertheless, ""quantum annealing"" seems well-defined enough to be described as a way of approaching how to solve problems with quantum technologies with specific techniques — and so despite Andrew O's assessment, I think that it embodies some implicitly defined model of computation. I will attempt to describe that model here.

+ +

Intuition behind the model

+ +

Quantum annealing gets its name from a loose analogy to (classical) simulated annealing. + They are both presented as means of minimising the energy of a system, expressed in the form of a Hamiltonian: +$$ +\begin{aligned} +H_{\rm{classical}} &= \sum_{i,j} J_{ij} s_i s_j \\ +H_{\rm{quantum}} &= A(t) \sum_{i,j} J_{ij} \sigma_i^z \sigma_j^z - B(t) \sum_i \sigma_i^x +\end{aligned} +$$ +With simulated annealing, one essentially performs a random walk on the possible assignments to the 'local' variables $s_i \in \{-1,+1\}$ (matching the $\pm 1$ eigenvalues of $\sigma^z$), but where the probability of actually making a transition depends on

+ +
    +
  • The difference in 'energy' $\Delta E = E_1 - E_0$ between two 'configurations' (the initial and the final global assignment to the variables $\{s_i\}_{i=1}^n$) before and after each step of the walk;
  • +
  • A 'temperature' parameter which governs the probability with which the walk is allowed to perform a step in the random walk which has $\Delta E > 0$.
  • +
+ +

One starts with the system at 'infinite temperature', which is ultimately a fancy way of saying that you allow for all possible transitions, regardless of increases or decreases in energy. You then lower the temperature according to some schedule, so that as time goes on, changes in state which increase the energy become less and less likely (though still possible). The limit is zero temperature, in which any transition which decreases energy is allowed, but any transition which increases energy is simply forbidden. +For any temperature $T > 0$, there will be a stable distribution (a 'thermal state') of assignments, which is the uniform distribution at 'infinite' temperature, and which is more and more weighted on the global minimum energy states as the temperature decreases. If you take long enough to decrease the temperature from infinite to near zero, you should in principle be guaranteed to find a global optimum to the problem of minimising the energy. Thus simulated annealing is an approach to solving optimisation problems.
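
To make the procedure concrete, here is a minimal simulated-annealing sketch in Python for a toy antiferromagnetic Ising ring; the couplings, schedule, and step count are invented purely for the example:

```python
import math
import random

random.seed(0)

# Toy problem: 4 spins on a ring with couplings J_ij = +1, so that
# H = sum_ij J_ij s_i s_j is minimised by alternating spins (energy -4).
n = 4
edges = [(i, (i + 1) % n) for i in range(n)]

def energy(s):
    return sum(s[i] * s[j] for i, j in edges)

s = [random.choice([-1, +1]) for _ in range(n)]
best = list(s)
for step in range(20000):
    T = max(1e-3, 10.0 * 0.999 ** step)   # cooling schedule
    i = random.randrange(n)
    s2 = list(s)
    s2[i] = -s2[i]                        # propose a single spin flip
    dE = energy(s2) - energy(s)
    # Metropolis rule: always accept decreases, sometimes accept increases
    if dE <= 0 or random.random() < math.exp(-dE / T):
        s = s2
    if energy(s) < energy(best):
        best = list(s)

print(best, energy(best))  # alternating spins, energy -4
```

At high temperature nearly all proposals are accepted (a random walk over configurations); as $T \to 0$ only energy decreases survive, and the walk freezes into a minimum.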

+ +

Quantum annealing is motivated by generalising the work by Farhi et al. on adiabatic quantum computation [arXiv:quant-ph/0001106], with the idea of considering what evolution occurs when one does not necessarily evolve the Hamiltonian in the adiabatic regime. Similarly to classical annealing, one starts in a configuration in which ""classical assignments"" to some problem are in a uniform distribution, though this time in coherent superposition instead of a probability distribution: this is achieved for time $t = 0$, for instance, by setting +$$ A(t=0) = 0, \qquad B(t=0) = 1 $$ +in which case the uniform superposition $\def\ket#1{\lvert#1\rangle}\ket{\psi_0} \propto \ket{00\cdots00} + \ket{00\cdots01} + \cdots + \ket{11\cdots11}$ is a minimum-energy state of the quantum Hamiltonian. One steers this 'distribution' (i.e. the state of the quantum system) to one which is heavily weighted on a low-energy configuration by slowly evolving the system — by slowly changing the field strengths $A(t)$ and $B(t)$ to some final value +$$ A(t_f) = 1, \qquad B(t_f) = 0. $$ +Again, if you do this slowly enough, you will succeed with high probability in obtaining such a global minimum. +The adiabatic regime describes conditions which are sufficient for this to occur, by virtue of remaining in (a state which is very close to) the ground state of the Hamiltonian at all intermediate times. However, it is considered possible that one can evolve the system faster than this and still achieve a high probability of success.
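
The claim that the uniform superposition is the minimum-energy state at $t = 0$ is easy to check numerically for a small system: with $A = 0$ and $B = 1$ the Hamiltonian is just $-\sum_i \sigma^x_i$. A two-qubit numpy sketch:

```python
import numpy as np

I2 = np.eye(2)
sx = np.array([[0., 1.], [1., 0.]])

# H(t=0) = -(sigma_x (x) I + I (x) sigma_x), i.e. A(0) = 0 and B(0) = 1
H = -(np.kron(sx, I2) + np.kron(I2, sx))

vals, vecs = np.linalg.eigh(H)   # eigenvalues in ascending order
print(vals[0])                   # lowest eigenvalue: -2.0
print(np.abs(vecs[:, 0]))        # amplitudes ~ [0.5, 0.5, 0.5, 0.5]
```

The ground state has equal-magnitude amplitude on all four computational basis states, i.e. it is the uniform superposition $\ket{\psi_0}$ described above.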

+ +

Similarly to adiabatic quantum computing, the functions $A(t)$ and $B(t)$ are often presented as linear interpolations from $0$ to $1$ (increasing for $A(t)$, and decreasing for $B(t)$). However, also in common with adiabatic computation, $A(t)$ and $B(t)$ don't necessarily have to be linear or even monotonic. For instance, D-Wave has considered the advantages of pausing the annealing schedule and 'backwards anneals'.

+ +

'Proper' quantum annealing (so to speak) presupposes that evolution is probably not being done in the adiabatic regime, and allows for the possibility of diabatic transitions, but only asks for a high chance of achieving an optimum — or even more pragmatically still, of achieving a result which would be difficult to find using classical techniques. There are no formal results about how quickly you can change your Hamiltonian to achieve this: the subject appears mostly to consist of experimenting with a heuristic to see what works in practice.

+ +

The comparison with classical simulated annealing

+ +

Despite the terminology, it is not immediately clear that there is much which quantum annealing has in common with classical annealing. +The main differences between quantum annealing and classical simulated annealing appear to be that:

+ +
    +
  • In quantum annealing, the state is in some sense ideally a pure state, rather than a mixed state (corresponding to the probability distribution in classical annealing);

  • +
  • In quantum annealing, the evolution is driven by an explicit change in the Hamiltonian rather than an external parameter.

  • +
+ +

It is possible that a change in presentation could make the analogy between quantum annealing and classical annealing tighter. For instance, one could incorporate the temperature parameter into the spin Hamiltonian for classical annealing, by writing +$$\tilde H_{\rm{classical}} = A(t) \sum_{i,j} J_{ij} s_i s_j - B(t) \sum_{i,j} \textit{const.} $$ +where we might choose something like $A(t) = t\big/(t_F - t)$ and $B(t) = t_F - t$ for $t_F > 0$ the length of the anneal schedule. (This is chosen deliberately so that $A(0) = 0$ and $A(t) \to +\infty$ for $t \to t_F$.) +Then, just as a quantum annealing algorithm is governed in principle by the Schrödinger equation for all times, we may consider a classical annealing process which is governed by a diffusion process, uniform in time, proceeding by small changes in configuration, where the probability of accepting a randomly selected change of configuration is governed by +$$ p(x \to y) = \min\Bigl\{ 1,\; \exp\bigl(-\gamma \Delta E_{x\to y}\bigr) \Bigr\} $$ +for some constant $\gamma$, where $\Delta E_{x \to y}$ is the energy difference between the initial and final configurations. +The stable distribution of this diffusion for the Hamiltonian at $t=0$ is the uniform distribution, and the stable distribution for the Hamiltonian as $t \to t_F$ is any local minimum; and as $t$ increases, the probability with which a transition occurs which increases the energy becomes smaller, until as $t \to t_F$ the probability of any increase in energy vanishes (because any possible increase in energy becomes infinitely costly).

+ +

There are still disanalogies to quantum annealing in this — for instance, we achieve the strong suppression of increases in energy as $t \to t_F$ essentially by making the potential wells infinitely deep (which is not a very physical thing to do) — but this does illustrate something of a commonality between the two models, with the main distinction being not so much the evolution of the Hamiltonian as it is the difference between diffusion and Schrödinger dynamics. This suggests that there may be a sharper way to compare the two models theoretically: by describing the difference between classical and quantum annealing, as being analogous to the difference between random walks and quantum walks. A common idiom in describing quantum annealing is to speak of 'tunnelling' through energy barriers — this is certainly pertinent to how people consider quantum walks: consider for instance the work by Farhi et al. on continuous-time quantum speed-ups for evaluating NAND circuits, and more directly foundational work by Wong on quantum walks on the line tunnelling through potential barriers. Some work has been done by Chancellor [arXiv:1606.06800] on considering quantum annealing in terms of quantum walks, though it appears that there is room for a more formal and complete account.

+ +

On a purely operational level, it appears that quantum annealing gives a performance advantage over classical annealing (see for example these slides on the difference in performance between quantum vs. classical annealing, from Troyer's group at ETH, ca. 2014).

+ +

Quantum annealing as a phenomenon, as opposed to a computational model

+ +

Because quantum annealing is more studied by technologists, they focus on the concept of realising quantum annealing as an effect rather than defining the model in terms of general principles. (A rough analogy would be studying the unitary circuit model only inasmuch as it represents a means of achieving the 'effects' of eigenvalue estimation or amplitude amplification.)

+ +

Therefore, whether something counts as ""quantum annealing"" is described by at least some people as being hardware-dependent, and even input-dependent: depending, for instance, on the layout of the qubits and the noise levels of the machine. It seems that even trying to approach the adiabatic regime will prevent you from achieving quantum annealing, because the idea of what quantum annealing even consists of includes the idea that noise (such as decoherence) will prevent annealing from being realised: as a computational effect, as opposed to a computational model, quantum annealing essentially requires that the annealing schedule is shorter than the decoherence time of the quantum system.

+ +

Some people occasionally describe noise as being somehow essential to the process of quantum annealing. For instance, Boixo et al. [arXiv:1304.4595] write

+ +
+

Unlike adiabatic quantum computing[, quantum annealing] is a positive temperature method involving an open quantum system coupled to a thermal bath.

+
+ +

It might perhaps be accurate to describe it as being an inevitable feature of systems in which one will perform annealing (just because noise is an inevitable feature of a system in which you will do quantum information processing of any kind): as Andrew O writes ""in reality no baths really help quantum annealing"". It is possible that a dissipative process can help quantum annealing by helping the system build population on lower-energy states (as suggested by work by Amin et al., [arXiv:cond-mat/0609332]), but this seems essentially to be a classical effect, and would inherently require a quiet low-temperature environment rather than 'the presence of noise'.

+ +

The bottom line

+ +

It might be said — in particular by those who study it — that quantum annealing is an effect, rather than a model of computation. A ""quantum annealer"" would then be best understood as ""a machine which realises the effect of quantum annealing"", rather than a machine which attempts to embody a model of computation which is known as 'quantum annealing'. However, the same might be said of adiabatic quantum computation, which is — in my opinion correctly — described as a model of computation in its own right.

+ +

Perhaps it would be fair to describe quantum annealing as an approach to realising a very general heuristic, and that there is an implicit model of computation which could be characterised as the conditions under which we could expect this heuristic to be successful. If we consider quantum annealing this way, it would be a model which includes the adiabatic regime (with zero-noise) as a special case, but it may in principle be more general.

+",124,,124,,9/20/2018 13:11,9/20/2018 13:11,,,,0,,,,CC BY-SA 4.0 +4229,1,,,9/14/2018 20:41,,9,111,"

A stored programming computer model is one where a central memory is used to store both the instructions and the data that they operate on. Basically all classical computers of today that follow the von Neumann architecture use the stored programming model. During program execution the CPU reads instructions or data from the RAM and places them in various registers such as the Instruction Register (IR) and other general-purpose registers.

+ +

My question is whether such a stored programming model is applicable to a Quantum Computer or not, given that, because of the no-cloning theorem, it is not possible to clone an arbitrary quantum state.

+ +

It means that if we have some qubits in some state stored in a memory register then because of the no-cloning theorem the Quantum Computer processor will not be able to read or copy those qubits from the memory to some internal registers.

+",4594,,26,,12/23/2018 12:43,12/23/2018 12:43,Can a stored programming model be applied to a Quantum Computer?,,1,1,,,,CC BY-SA 4.0 +4230,1,,,9/15/2018 4:07,,8,1604,"

As you know, universal quantum computing is the ability to construct a circuit from a finite set of operations that can approximate to arbitrary accuracy any unitary operation.

+ +

There also exist some results proving that exact decompositions of particular unitary operations can be found. For instance, a method was provided here (see section V) to construct a quantum circuit for a general two-qubit gate, based on the method given by Kraus and Cirac to decompose any desired two-qubit gate.

+ +

I want to understand this method such that I learn how to find the +quantum circuit implementing any $4\times4$ unitary matrix. To help +with that, I devised the following challenge: You are given a unitary +matrix, $U$, over two qubits and asked to follow either the previously +mentioned methods or another method you know about to come up with +the quantum circuit implementing this unitary. I know what one circuit +representation is because I built it myself to obtain this matrix. +Our goal should be to find the decomposition with the smallest number +of gates; however, other decomposition methods will also be useful to learn. Here's the matrix

+ +

$$ +U\left(x\right)=\left(\begin{array}{cccc} +0 & \left(-\frac{1}{2}+\frac{i}{2}\right)e^{ix/2} & 0 & \left(\frac{1}{2}+\frac{i}{2}\right)e^{ix/2}\\ +0 & \frac{ie^{-ix/2}}{\sqrt{2}} & 0 & -\frac{e^{-ix/2}}{\sqrt{2}}\\ +\left(-\frac{1}{2}-\frac{i}{2}\right)e^{-ix/2} & 0 & \left(\frac{1}{2}-\frac{i}{2}\right)e^{-ix/2} & 0\\ +-\frac{e^{ix/2}}{\sqrt{2}} & 0 & \frac{ie^{ix/2}}{\sqrt{2}} & 0 +\end{array}\right) +$$

+ +

If you are aware of any numerical methods that can achieve this task +I also encourage you to post the solution you find and the steps to +find it. E.g., as answered at another question on this site, qubiter +has a module which apparently can decompose any n-qubit unitary into +CNOTs and single-qubit rotations using the cosine-sine decomposition, +and this is probably not the only numerical method looking into this +problem.

+ +

I believe an exhaustive exploration of these methods will be a helpful +reference to a wide audience, but let us start simple: can you give +a circuit decomposition for $U\left(x\right)$?

+ +

Universal gate set:

+ +

You may consider the following set of basic gates: Hadamard ($H$), phase ($S$), $\pi/8$ rotation ($T$), $cX$, $S^{\dagger}$, $T^{\dagger}$, and rotations

+ +

$$ +R_{x}\left(\theta\right)=\left(\begin{array}{cc} +\cos\left(\frac{\theta}{2}\right) & -i\sin\left(\frac{\theta}{2}\right)\\ +-i\sin\left(\frac{\theta}{2}\right) & \cos\left(\frac{\theta}{2}\right) +\end{array}\right),\quad R_{y}\left(\theta\right)=\left(\begin{array}{cc} +\cos\left(\frac{\theta}{2}\right) & -\sin\left(\frac{\theta}{2}\right)\\ +\sin\left(\frac{\theta}{2}\right) & \cos\left(\frac{\theta}{2}\right) +\end{array}\right),\quad R_{z}\left(\theta\right)=\left(\begin{array}{cc} +e^{-\frac{i\theta}{2}} & 0\\ +0 & e^{\frac{i\theta}{2}} +\end{array}\right) +$$

+ +

With an eye on a real implementation, you may also consider IBM QX universal gate set made of a $cX$ together with one-qubit rotational and phase gates

+ +

$$ +\begin{split}V_{1}(\lambda) & =\begin{pmatrix}1 & 0\\ +0 & e^{i\lambda} +\end{pmatrix}\\ +V_{2}(\phi,\lambda) & =\frac{1}{\sqrt{2}}\begin{pmatrix}1 & -e^{i\lambda}\\ +e^{i\phi} & e^{i(\lambda+\phi)} +\end{pmatrix}\\ +V_{3}(\theta,\phi,\lambda) & =\begin{pmatrix}\cos\left(\frac{\theta}{2}\right) & -e^{i\lambda}\sin\left(\frac{\theta}{2}\right)\\ +e^{i\phi}\sin\left(\frac{\theta}{2}\right) & e^{i(\lambda+\phi)}\cos\left(\frac{\theta}{2}\right) +\end{pmatrix} +\end{split} +$$

+ +

Actually, $V_{1}$ and $V_{2}$ are just special cases of $V_{3}$, hence IBM's universal set can be reduced to just two gates.

+ +

Example:

+ +

Suppose you wanted to find the circuit implementation for a unitary $U\left(x,y\right)$ given by the following matrix

+ +

$$ +\left(\begin{array}{cccc} +e^{-\frac{1}{2}i\pi y}\cos\frac{x}{2} & 0 & 0 & -e^{-\frac{1}{2}i\pi y}\sin\frac{x}{2}\\ +0 & -ie^{\frac{i\pi y}{2}}\sin\frac{x}{2} & -ie^{\frac{i\pi y}{2}}\cos\frac{x}{2} & 0\\ +\sqrt[4]{-1}e^{-\frac{1}{2}i\pi y}\sin\frac{x}{2} & 0 & 0 & \sqrt[4]{-1}e^{-\frac{1}{2}i\pi y}\cos\frac{x}{2}\\ +0 & -(-1)^{3/4}e^{\frac{i\pi y}{2}}\cos\frac{x}{2} & (-1)^{3/4}e^{\frac{i\pi y}{2}}\sin\frac{x}{2} & 0 +\end{array}\right) +$$

+ +

You can check $U\left(x,y\right)$ can be decomposed into the following +product of operations

+ +

$$ +U\left(x,y\right)=\left(T\otimes S^{\dagger}\right)\cdot CNOT_{01}\cdot\left(R_{y}\left(x\right)\otimes R_{z}\left(y\pi\right)\right)\cdot CNOT_{10} +$$
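
A decomposition like this can be checked numerically by multiplying out the factors. The numpy sketch below shows the general approach; note that qubit-ordering and CNOT-direction conventions vary between texts, so here we only verify that the product of the stated factors is a valid (unitary) two-qubit operation, rather than matching the matrix above entry by entry:

```python
import numpy as np

x, y = 0.3, 0.7

T   = np.diag([1, np.exp(1j * np.pi / 4)])   # pi/8 gate
Sdg = np.diag([1, -1j])                      # S-dagger
Ry  = np.array([[np.cos(x / 2), -np.sin(x / 2)],
                [np.sin(x / 2),  np.cos(x / 2)]])
Rz  = np.diag([np.exp(-1j * y * np.pi / 2), np.exp(1j * y * np.pi / 2)])

# One common convention: first tensor factor is the control of CNOT_01
CNOT01 = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                   [0, 0, 0, 1], [0, 0, 1, 0]], dtype=complex)
CNOT10 = np.array([[1, 0, 0, 0], [0, 0, 0, 1],
                   [0, 0, 1, 0], [0, 1, 0, 0]], dtype=complex)

U = np.kron(T, Sdg) @ CNOT01 @ np.kron(Ry, Rz) @ CNOT10
print(np.allclose(U.conj().T @ U, np.eye(4)))  # True: U is unitary
```

The same structure (build each factor, multiply, compare) is what you would use to test candidate decompositions of $U(x)$ against the target matrix.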

+ +

and the circuit representation is: [circuit diagram omitted]

+ +

+",1897,,1897,,9/16/2018 14:32,9/17/2018 13:26,Decomposition of arbitrary 2 qubit operator,,1,11,,,,CC BY-SA 4.0 +4231,1,,,9/15/2018 4:56,,2,314,"

I have programmed in C++, but I am interested in writing quantum programs. I have some experience with Microsoft's Q#, and I know about the canonical Shor's and Grover's algorithms.

+ +

Can anyone tell me how to write a quantum program to add two integers?

+",4596,,23,,9/15/2018 21:57,9/15/2018 21:57,How to add two integers in Q#?,,1,3,,,,CC BY-SA 4.0 +4232,2,,4219,9/15/2018 8:41,,9,,"

As already mentioned in the other answers, the crucial point is that copying means implicitly that the state of the original qubit is unknown, i.e. given a qubit in an unknown state, you want to prepare a second qubit to be in exactly the same state.

+ +

To make it more intelligible, there is a less mathematical argument that this should not be possible: By the uncertainty relation you cannot determine the values of two complementary observables (e.g. orthogonal spin directions) on the qubit with arbitrary precision at the same time. If you could copy the qubit, you could make copies and measure each of the observables on the copies with arbitrary precision, which contradicts the uncertainty relation.

+",4598,,4598,,02-07-2019 17:30,02-07-2019 17:30,,,,3,,,,CC BY-SA 4.0 +4233,2,,4231,9/15/2018 13:48,,4,,"

You will need quantum circuits called adders.

+ +

You have for example one from Cuccaro et al. and another from Himanshu et al.

+",4127,,26,,9/15/2018 18:14,9/15/2018 18:14,,,,3,,,,CC BY-SA 4.0 +4234,2,,4229,9/15/2018 15:44,,3,,"

Yes, you can encode a program into your qubits in exactly the same way you'd encode a program into bits and then run circuits that interpret the program. One might hope that you could encode the program in some fancy exponentially efficient way, but in Mike & Ike (Nielsen and Chuang's textbook) they prove that's not possible. Because there's no exponential advantage, and because the operations needed to read and decode the program are billions of times more expensive on a quantum computer, you want to store the program in the classical control computer in almost all cases.

+",119,,,,,9/15/2018 15:44,,,,8,,,,CC BY-SA 4.0 +4235,1,4236,,9/16/2018 15:47,,15,998,"

I have done some research & found a few different papers that discuss xor games (classical & quantum). I am curious if someone could give a concise introductory explanation as to what exactly xor games are & how they are, or could be, useful in quantum computing.

+",2645,,2645,,10-01-2018 01:03,03-11-2021 06:39,What exactly are Quantum XOR Games?,,1,0,,,,CC BY-SA 4.0 +4236,2,,4235,9/17/2018 8:12,,10,,"

Quantum xor games are a method of greatly simplifying the ideas behind Bell's theorem, which states that no physical theory of local hidden variables can ever reproduce all of the predictions of quantum mechanics.

+

Basically, when two qubits are entangled, measurements on them appear correlated even if they are vast distances apart. The question then is whether the qubits decided how they would collapse at time of entanglement (thus carrying "local hidden variables" with them) or decided how they would collapse at time of measurement (thus requiring some kind of instantaneous "spooky action at a distance"). Bell's theorem, and xor games, come down firmly on the side of the latter.

+

Xor games generally have the following format: two players (Alice and Bob) are given some random bits and, without communicating, must each output some other bits, with the goal of making a logical formula true.

+

For example with the original xor game, the CHSH game, Alice is given random bit $X$ and Bob random bit $Y$. Alice then outputs a chosen bit $a$ and Bob outputs a chosen bit $b$. They want to satisfy the equation $X \cdot Y = a \oplus b$. Of course, since they cannot communicate, they can only win some of the time; they want to choose a strategy to maximize the probability of winning. The best possible classical strategy is for Alice and Bob to both always output $0$, which will result in a win 75% of the time. However if Alice and Bob share an entangled qubit pair, they can come up with a strategy to win 85% of the time! The conclusion is this disproves the existence of local hidden variables, because if the qubits contained a local hidden variable (some string of bits) then Alice and Bob could have pre-shared that same string of bits to employ in their classical strategy to also get an 85% chance of winning; since no string of bits enables them to do this, that means the entangled qubits cannot be relying on a shared string of bits (local hidden variable) and something spookier is happening. You can see an implementation of the CHSH game in Microsoft's Q# samples (with expanded explanation) here.
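
These success probabilities are easy to reproduce: an exhaustive search over all deterministic classical strategies tops out at 75%, while the optimal quantum strategy wins each round with probability $\cos^2(\pi/8) \approx 85.4\%$ (Tsirelson's bound). A short Python check:

```python
import itertools
import math

# Classical CHSH: try every deterministic strategy a = f(X), b = g(Y)
best = 0.0
for f in itertools.product([0, 1], repeat=2):      # Alice's outputs for X = 0, 1
    for g in itertools.product([0, 1], repeat=2):  # Bob's outputs for Y = 0, 1
        wins = sum((X & Y) == (f[X] ^ g[Y])
                   for X in (0, 1) for Y in (0, 1))
        best = max(best, wins / 4)

print(best)                          # 0.75: best classical win rate
print(math.cos(math.pi / 8) ** 2)    # ~0.8536: quantum win rate
```

Shared classical randomness cannot improve on the best deterministic strategy (it is just a mixture of them), which is exactly why beating 75% rules out local hidden variables.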

+

The best explanation of the CHSH game is from Professor Vazirani in this video. He claims something interesting (possibly rhetorically), which is that if Einstein had had access to the simplified presentation of xor games, he'd have avoided wasting the last three decades of his life searching for a hidden variable-based theory of quantum mechanics!

+

I have also written a blog post detailing the CHSH game here.

+

One application of xor games is self-testing: when running algorithms on an untrusted quantum computer, you can use xor games to verify that the computer isn't corrupted by an adversary trying to steal your secrets! This is useful in device-independent quantum cryptography.

+",4153,,10480,,03-11-2021 06:39,03-11-2021 06:39,,,,1,,,,CC BY-SA 4.0 +4237,1,4238,,9/17/2018 10:39,,5,82,"

I am working with the quantum turbo codes presented in this paper by Wilde, Hsieh and Babar, and it is claimed that a package to simulate such codes is available at ea-turbo. However, the hyperlink to such package appears to be broken, and so I have not been able to reach that.

+ +

I would like to know if anyone knows another link to get this package or, if anyone by any chance has the package, whether they could share it so that other people can work with such codes more easily.

+ +

EDIT:

+ +

It looks like a package containing the version used by the authors for a previous paper on the topic, where a simpler decoder is used, can be downloaded from the link presented in the answer by user1271772. I discovered this by reading the README file that the archive contains. It would be very useful to know whether newer versions (it looks like the broken link that I was talking about in the question refers to a second version of the package) can be obtained too.

+",2371,,55,,12-04-2021 12:42,12-04-2021 12:42,EA-Turbo simulation package,,2,0,,,,CC BY-SA 4.0 +4238,2,,4237,9/17/2018 10:58,,1,,"

This hyperlink works for me: https://storage.googleapis.com/google-code-archive-downloads/v2/code.google.com/ea-turbo/ea-turbo.zip

+ +

That is a zip file which I was able to download.

+",2293,,,,,9/17/2018 10:58,,,,1,,,,CC BY-SA 4.0 +4239,2,,4230,9/17/2018 13:26,,3,,"

The circuit that I came up with is the following: [circuit diagram omitted] It is correct up to a global phase factor of $-e^{i x/2 + i \pi/4}$. The phase gate ""x"" that I used is essentially $R_z(x/2)=V_1(x)$ up to a global phase factor, so this is directly applicable to the first gate set. There's some possibility to move around some of the single-qubit gates, but I think this option minimises the depth of the circuit, which may be a more relevant number than the absolute number of gates, as it's more related to the time a circuit would take to implement (and hence how susceptible to noise it might be). Of course, I don't pretend that all gates would take the same length of time, and that's a completely different set of rules that one would be optimising over.

+ +

This same circuit can be rewritten using the other gate set: [circuit diagram omitted]

+ +

To come up with this decomposition, I didn't use any particular method beyond the standard ""I stared at it for a long time"" technique. I then wrote down a circuit which was always going to be sub-optimal (including a swap gate, and a two-qubit diagonal gate with arbitrary phases on the diagonal), but then rearranged the circuit to bring some elements together so that they cancelled.

+",1837,,,,,9/17/2018 13:26,,,,1,,,,CC BY-SA 4.0 +4240,1,4241,,9/17/2018 13:42,,1,152,"

We know the QFT gives us a new orthogonal basis from the original one; however, when I apply it to two qubits, the output vectors I get are not orthogonal.

+
+

$|out(k)\rangle = \Sigma^{N-1}_{j=0} e^{\frac{2\pi ij.k}{N}}|j\rangle$

+
+

Where $j.k$ is the bitwise 'AND' and then summed up.

+

Applying this on the basis:

+
+

$|00\rangle , |01\rangle , |10\rangle , |11\rangle$

+
+

I get the following vectors (ignore normalization factor):

+
+

$|00\rangle + |01\rangle + |10\rangle + |11\rangle$

+

$|00\rangle + i|01\rangle + |10\rangle + i|11\rangle$

+

$|00\rangle + |01\rangle - |10\rangle - |11\rangle$

+

$|00\rangle + i|01\rangle + |10\rangle -i|11\rangle$

+
+

However these are not orthogonal, as you can see, the dot product does not equal zero for the first and second vector.

+

Why is this the case?

+",2832,,-1,,6/18/2020 8:31,9/17/2018 13:58,Quantum Fourier Transform on two qubits: Non orthogonal outputs,,1,2,,,,CC BY-SA 4.0 +4241,2,,4240,9/17/2018 13:58,,3,,"

You are mis-quoting the definition of the QFT. You simply take the product of the decimal values of $j$ and $k$, and don't use their binary representations.
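
Concretely, using the ordinary integer product $jk$ in the exponent (with $N = 4$ and the usual $1/\sqrt{N}$ normalisation) makes the QFT matrix unitary, i.e. its output vectors are orthonormal. A quick numpy check:

```python
import numpy as np

N = 4
F = np.array([[np.exp(2j * np.pi * j * k / N) for j in range(N)]
              for k in range(N)]) / np.sqrt(N)

# The columns (and rows) are orthonormal iff F is unitary
print(np.allclose(F.conj().T @ F, np.eye(N)))  # True
```

Repeating the same check with a bitwise-AND in the exponent instead of the product $jk$ reproduces the non-orthogonal vectors from the question.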

+",1837,,,,,9/17/2018 13:58,,,,0,,,,CC BY-SA 4.0 +4242,1,,,9/17/2018 18:00,,6,1462,"

In quantum algorithms we need to initialize the qubits at the start of our algorithm in some quantum register. Suppose that we are working with a four-qubit quantum register: we can initialize the register into values such as $|0000\rangle$ or another value such as $|0101\rangle$, which means that the first and third qubits are in the basis state $|0\rangle$ and the second and fourth qubits are in the basis state $|1\rangle$. Once we have initialized these qubits as such we can then proceed to apply various quantum gates on them.

+ +

My question is: in a physical quantum computer (not a simulator) where, say, we represent qubits with electron spin, how do we manipulate such electrons so that we can initialize a quantum register with, say, the value $|0101\rangle$?

+",4594,,26,,12/23/2018 12:44,12/23/2018 12:44,How do we physically initialize qubits in a Quantum register?,,3,0,,,,CC BY-SA 4.0 +4243,2,,4242,9/17/2018 20:17,,2,,"

Well, the procedure depends on the physical system that you are using to realize the quantum register. An example I can give is the diamond NV-center. In the simplest case, the vacancy electron is modelled as a spin-1 particle here (there are reasons to model it like this) and the nitrogen-14 nucleus has spin 1. So the total system lies in a Hilbert space of 9 dimensions, or you can say there are 9 energy levels. Let's say you want to realise a 2-qubit register; then you would need 4 energy levels. Define the levels as $|00\rangle$, $|01\rangle$, $|10\rangle$ and $|11\rangle$. Let's say that in equilibrium the system is in the $|00\rangle$ state; if you want to initialize the system in the $|01\rangle$ state, then all you would need is an electromagnetic pulse of energy equal to the energy difference of the $|01\rangle$ and $|00\rangle$ levels. In NV-centers, normally microwave and radio-frequency pulses are used.

+",2817,,,,,9/17/2018 20:17,,,,0,,,,CC BY-SA 4.0 +4244,1,,,9/17/2018 22:17,,3,489,"

I am working with sympy.physics.quantum.qubit to help teach myself more about quantum computing.

+ +

I'm confused about how best to simplify two ket expressions that appear to me to be identical. Seems to me that B and C below should be the same:

+ +
B = Qubit('01')
+
+C = TensorProduct(Qubit('0'),Qubit('1'))
+
+ +

because their representation is identical, i.e.

+ +
B.represent() == C.represent()       #returns True
+
+ +

it feels like they are the same qubit state. So far, so good.

+ +

But now when I print out B and C I get:

+ +
print(""B is"", B)
+
+B is |01>
+
+ +

but

+ +
print(""C is"", C)
+
+C is |0>x|1>
+
+ +

My question:

+ +

Is there some way I could get the Python library to use the output of B and C so that the equivalence between the two would be obvious? Basically, how do I get C to simplify its output so that it exactly resembles B?

+ +

Am grateful for any help you could provide.

+",4605,,26,,11/20/2018 14:44,11/20/2018 14:44,Confusion over tensor products in sympy.physics.quantum.qubit (in Python),,1,0,,,,CC BY-SA 4.0 +4245,2,,4242,9/17/2018 22:44,,2,,"

One of the simplest way to think about physical realization of qubits is using Photons.

+ +

Let's say you have a photon that is vertically or horizontally polarized. There are physical devices to create and measure polarizations of these photons. So, you can mark the horizontally polarized photons as the state $\lvert 0 +\rangle$ and the vertically polarized one as the $\lvert 1 \rangle$ state.

+ +

Additionally, you could use a photon polarizer to change a particular photon's polarization, to turn a $\lvert 0 +\rangle$ into a $\lvert 1 \rangle$ or vice versa. This wiki article would help you a bit. Moreover, you could check out this cool video from Professor Allan Adams where real-life photon polarization is demonstrated. (link)

+",2403,,,,,9/17/2018 22:44,,,,0,,,,CC BY-SA 4.0 +4246,2,,4237,9/17/2018 22:56,,1,,"

Since the question has been edited, there is a slightly different answer to the question (though my previous answer still has a valid link, so I am keeping it there).

+ +

The broken link presumably refers to: +https://googledrive.com/host/0B77vaqbQKbrDMWEtTUxyZDgwanc/ea-turbo-v2.zip

+ +

Which is the link at the bottom of: +https://code.google.com/archive/p/ea-turbo/

+ +

The solution to the problem (broken link) is to contact the authors: Mark M. Wilde, Min-Hsiu Hsieh, and Zunaira Babar.

+ +

I agree with you that ideally questions like these could just be answered here on the QCSE, but I am almost 100% sure that the only three people that will be able to fix the broken link are Mark M. Wilde, Min-Hsiu Hsieh, and Zunaira Babar, and I'm 0% sure that they will see the question if it's asked here.

+",2293,,,,,9/17/2018 22:56,,,,1,,,,CC BY-SA 4.0 +4247,2,,4244,9/18/2018 3:16,,2,,"

After some more reading of the documentation, I now realize the C above is a TensorProduct object (and not a Qubit) but B is a Qubit. The way you can make the tensor product become a Qubit (and print it out in a way that's comparable with B) is to just call:

+ +
matrix_to_qubit(represent(C))
+
+ +

which returns a Qubit with the answer you'd expect.

+ +

Thanks for being patient with me as I worked that out.
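For completeness, here is a minimal self-contained sketch of that check (assuming a reasonably recent sympy; `matrix_to_qubit` lives in `sympy.physics.quantum.qubit`, and `represent`/`TensorProduct` in their own submodules):

```python
from sympy.physics.quantum.qubit import Qubit, matrix_to_qubit
from sympy.physics.quantum.represent import represent
from sympy.physics.quantum.tensorproduct import TensorProduct

B = Qubit('01')
C = TensorProduct(Qubit('0'), Qubit('1'))

# The two objects print differently (|01> vs |0>x|1>)...
print(B)
print(C)

# ...but converting C's matrix representation back to a Qubit
# recovers the same ket notation as B.
C_as_qubit = matrix_to_qubit(represent(C))
print(C_as_qubit)
```

The round trip through `represent` and `matrix_to_qubit` is what collapses the tensor-product structure into a single multi-qubit ket.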

+",4605,,26,,11/20/2018 14:41,11/20/2018 14:41,,,,0,,,,CC BY-SA 4.0 +4248,2,,4242,9/18/2018 5:08,,4,,"

It very much depends on your physical realisation. In some realisations, such as when using photons, the process of producing the photon typically produces it in a fixed polarisation (which is probably what you’re using to encode a qubit). Alternatively, you pass the photon through a polarisation filter. This is equivalent to the most typical strategy - measure the qubit in some basis. That way, you know the definite state that the qubit is in, and you can convert it into the right state. As a final option, many systems have a natural Hamiltonian that is applied to them, and the system is cooled in order to produce the ground state of that Hamiltonian. Again, you’ve got a fixed state (typically the all zero state) and you can convert that into whatever initial state you want.

+",1837,,,,,9/18/2018 5:08,,,,0,,,,CC BY-SA 4.0 +4249,1,4250,,9/18/2018 8:30,,7,1384,"

Any 1-qubit special gate can be decomposed into a sequence of rotation gates ($R_z$, $R_y$ and $R_z$). This allows us to have the general 1-qubit special gate in matrix form:

+ +

$$ +U\left(\theta,\phi,\lambda\right)=\left(\begin{array}{cc} +e^{-i(\phi+\lambda)}\cos\left(\frac{\theta}{2}\right) & -e^{-i(\phi-\lambda)}\sin\left(\frac{\theta}{2}\right)\\ +e^{i(\phi-\lambda)}\sin\left(\frac{\theta}{2}\right) & e^{i(\phi+\lambda)}\cos\left(\frac{\theta}{2}\right) +\end{array}\right) +$$

+ +

If given $U\left(\theta,\phi,\lambda\right)$, how do I decompose it into any arbitrary set of gates such as rotation gates, pauli gates, etc?

+ +

To make the question more concrete, here is my current situation: for my project, I am giving users the ability to apply $U\left(\theta,\phi,\lambda\right)$ for specific values of $\theta$, $\phi$ and $\lambda$ to qubits.
+But I am targeting real machines that only offer specific gates. For instance, the Rigetti Agave quantum processor only offers $R_z$, $R_x\left(k\pi/2\right)$ and $CZ$ as primitive gates.
+One can think of any $U\left(\theta,\phi,\lambda\right)$ as an intermediate form that needs to be transformed into a sequence of whatever is the native gateset of the target quantum processor.

+ +

Now, in that spirit, how do I transform any $U\left(\theta,\phi,\lambda\right)$ into say $R_z$ and $R_x\left(k\pi/2\right)$? Let us ignore $CZ$ since it is a 2-qubit gate.

+ +

Note: I'm writing a compiler so an algorithm and reference papers or book chapters that solve this exact problems are more than welcome!

+",2417,,2417,,9/18/2018 12:42,10/31/2018 12:54,Decomposition of an arbitrary 1-qubit gate into a specific gateset,,1,4,,,,CC BY-SA 4.0 +4250,2,,4249,9/18/2018 14:31,,6,,"

Exact decomposition for your particular gate set

+ +

Given the range of $R_x$ gates available to you together with arbitrary $R_z$ gates, you should be able to find an easy decomposition of arbitrary $R_y$ gates (i.e. as a product of three of your elementary gates). Then using simple techniques — similar to the exercises of Chapter 4 in Nielsen & Chuang — you can show that you can exactly realise any single-qubit unitary operator that you like, using at most five of your gates.
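For instance, one identity of this kind (the sign conventions here are my own choice, and worth double-checking against your definitions of the rotation gates) expresses $R_y(\theta)$ with one arbitrary $R_z$ sandwiched between two $R_x(\pm\pi/2)$ gates, and can be verified numerically:

```python
import numpy as np

# Standard rotation gates R_a(t) = exp(-i t a / 2) for a in {X, Y, Z}.
def rx(t):
    return np.array([[np.cos(t / 2), -1j * np.sin(t / 2)],
                     [-1j * np.sin(t / 2), np.cos(t / 2)]])

def ry(t):
    return np.array([[np.cos(t / 2), -np.sin(t / 2)],
                     [np.sin(t / 2), np.cos(t / 2)]], dtype=complex)

def rz(t):
    return np.array([[np.exp(-1j * t / 2), 0],
                     [0, np.exp(1j * t / 2)]])

theta = 0.7  # arbitrary test angle
lhs = ry(theta)
rhs = rx(np.pi / 2) @ rz(-theta) @ rx(-np.pi / 2)
assert np.allclose(lhs, rhs)  # R_y(t) = R_x(pi/2) R_z(-t) R_x(-pi/2)
```

With $R_y$ available as three elementary gates, the usual $R_z$–$R_y$–$R_z$ Euler decomposition then gives the five-gate bound mentioned above.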

+ +

The general problem of approximate decomposition for finite gate-sets

+ +

In general, it may not be possible to decompose a single-qubit gate exactly, as a product of some other single-qubit gates. This is true even of 'universal' gate sets such as Hadamard+T, consisting of the gates +$$ +H = \tfrac{1}{\sqrt 2}\begin{bmatrix} 1 \!&\! \phantom-1 \\ 1 \!&\! -1 \end{bmatrix}, +\qquad +T = \begin{bmatrix} 1 & 0 \\ 0 & \!\mathrm{e}^{\pi i \!\:/ 4}\! \end{bmatrix}, +$$ +which of course can only generate a countably infinite subgroup of the continuum of single-qubit unitaries. The sense in which they are 'universal' is that this subgroup is dense in the single-qubit unitaries, so that any single-qubit unitary can be approximated as closely as you like by some product of H and T gates, which is all that we really need to solve problems with bounded error on quantum computers. So it makes sense to me, practically speaking, to interpret your question as asking:

+ +
+

Question. For a given single-qubit unitary $U$, how can you generate some approximating unitary $V$ (such that $\lVert U - V \rVert < \varepsilon$, for some precision parameter $\varepsilon > 0$) from a set of unitary gates?

+
+ +
    +
  • The best-known work on these lines was by Solovay [unpublished] and Kitaev [Russ. Math. Surv. 52 (1191–1249), 1997], and is known as the Solovay–Kitaev Theorem. The excellent review article by Dawson and Nielsen [arXiv:quant-ph/0505030] would be a good place to read about this from an algorithms point of view: they give a detailed description of how you might realise such a decomposition, with improved run-time bounds on the original Theorem. This technique only works if the set of gates that you have is closed under inverses, however.

  • +
  • For specific gate sets such as Clifford+T (of which Hadamard+T is +essentially the single-qubit special case), it is possible to do much better than the Solovay–Kitaev Theorem proves for general (but closed under inverses) gate sets. As Alan Geller points out in the comments, for the specific sets of Clifford+T, Clifford+(Z1/6) or ""Clifford+$\pi$/12"" gates, and ""Fibonacci"" or ""V"" gates, the techniques developed by Kliuchnikov, Bocharov, Roetteler, and +Yard [arXiv:1510.03888] +allow you to obtain asymptotically optimal $O(\log(1/\varepsilon))$ decompositions, and in $O(\mathrm{polylog}(1/\varepsilon))$ time, though depending on a number-theoretic conjecture in this case.

  • +
+",124,,124,,10/31/2018 12:54,10/31/2018 12:54,,,,3,,,,CC BY-SA 4.0 +4251,1,,,9/18/2018 14:46,,10,977,"

I need some useful sources about the geometry of qutrit. Specifically related to the Gell-Mann matrix representation.

+",4620,,55,,06-04-2019 22:51,06-04-2019 22:51,Geometry of qutrit and Gell-Mann matrices,,2,2,,,,CC BY-SA 4.0 +4252,1,4255,,9/18/2018 19:15,,32,6619,"

In a three-qubit system, it's easy to derive the CNOT operator when the control & target qubits are adjacent in significance - you just tensor the 2-bit CNOT operator with the identity matrix in the untouched qubit's position of significance:

+

$$C_{10}|\phi_2\phi_1\phi_0\rangle = (\mathbb{I}_2 \otimes C_{10})|\phi_2\phi_1\phi_0\rangle.$$

+

However, it isn't obvious how to derive the CNOT operator when the control & target qubits are not adjacent in significance: +$C_{20}|\phi_2\phi_1\phi_0\rangle.$

+

How is this done?

+",4153,,55,,8/24/2020 9:15,4/14/2022 21:15,How to derive the CNOT matrix for a 3-qubit system where the control & target qubits are not adjacent?,,5,1,,,,CC BY-SA 4.0 +4253,2,,4252,9/18/2018 19:46,,1,,"

As a general idea, CNOT flips the target based on the control. I choose to flip the target if the control is $\uparrow (= [1\ 0]^T)$; you may choose $\downarrow (= [0\ 1]^T)$ too. So assume any general multiparticle state $|\phi\rangle=|\uparrow_1\downarrow_2\downarrow_3....\uparrow_{n-1}\downarrow_n\rangle$. Now you choose your control and target; let's say the $i'th$ is the control and the $k'th$ is the target. Applying CNOT to $|\phi\rangle$ gives just +\begin{equation} +CNOT|\phi\rangle=CNOT|\uparrow_1\downarrow_2...\uparrow_i...\uparrow_k...\uparrow_{n-1}\downarrow_n\rangle= |\uparrow_1\downarrow_2...\uparrow_i...\downarrow_k...\uparrow_{n-1}\downarrow_n\rangle +\end{equation}

+ +

To construct the matrix of such a CNOT gate, we apply $\sigma_x$ (the $x$-Pauli matrix) if the $i'th$ state is up, and we apply $I$ (the $2\times2$ identity) if the $i'th$ state is down. We apply these matrices at the $k'th$ position, which is our target. Mathematically, +\begin{equation} +CNOT = \Big[|\uparrow_1...\uparrow_i...\uparrow_{k-1}\rangle\langle\uparrow_1...\uparrow_i...\uparrow_{k-1}|\otimes\sigma_x\otimes|\uparrow_{k+1}...\uparrow_n\rangle\langle\uparrow_{k+1}...\uparrow_n| + all\ permutations\ of\ states\ other\ than\ i'th\Big] ++ \Big[|\uparrow_1...\downarrow_i...\uparrow_{k-1}\rangle\langle\uparrow_1...\downarrow_i...\uparrow_{k-1}|\otimes I\otimes|\uparrow_{k+1}...\uparrow_n\rangle\langle\uparrow_{k+1}...\uparrow_n| + all\ permutations\ of\ states\ other\ than\ i'th\Big] +\end{equation}

+ +

Note $k'th$ state(target) is excluded while creating the permutation matrix and at $k'th$ position the operator $\sigma_x$ or $I$ is written.

+ +

Take an example of five qubits in which $2^{nd}$ qubit is target and $4^{th}$ is control. Lets build the permutation matrix of $CNOT$. I take, if control is $\uparrow$ flip the target. You can take vice-versa too.

+ +

\begin{align} +CNOT & = |\uparrow_1\rangle\langle\uparrow_1|\otimes\sigma_x\otimes|\uparrow_3\uparrow_4\uparrow_5\rangle\langle\uparrow_3\uparrow_4\uparrow_5|\\ +& +|\uparrow_1\rangle\langle\uparrow_1|\otimes\sigma_x\otimes|\uparrow_3\uparrow_4\downarrow_5\rangle\langle\uparrow_3\uparrow_4\downarrow_5|\\ +& +|\uparrow_1\rangle\langle\uparrow_1|\otimes\sigma_x\otimes|\downarrow_3\uparrow_4\uparrow_5\rangle\langle\downarrow_3\uparrow_4\uparrow_5|\\ +& +|\uparrow_1\rangle\langle\uparrow_1|\otimes\sigma_x\otimes|\downarrow_3\uparrow_4\downarrow_5\rangle\langle\downarrow_3\uparrow_4\downarrow_5|\\ +& +|\downarrow_1\rangle\langle\downarrow_1|\otimes\sigma_x\otimes|\uparrow_3\uparrow_4\uparrow_5\rangle\langle\uparrow_3\uparrow_4\uparrow_5|\\ +& +|\downarrow_1\rangle\langle\downarrow_1|\otimes\sigma_x\otimes|\uparrow_3\uparrow_4\downarrow_5\rangle\langle\uparrow_3\uparrow_4\downarrow_5|\\ +& +|\downarrow_1\rangle\langle\downarrow_1|\otimes\sigma_x\otimes|\downarrow_3\uparrow_4\uparrow_5\rangle\langle\downarrow_3\uparrow_4\uparrow_5|\\ +& +|\downarrow_1\rangle\langle\downarrow_1|\otimes\sigma_x\otimes|\downarrow_3\uparrow_4\downarrow_5\rangle\langle\downarrow_3\uparrow_4\downarrow_5|\\ +& +|\uparrow_1\rangle\langle\uparrow_1|\otimes I\otimes|\uparrow_3\downarrow_4\uparrow_5\rangle\langle\uparrow_3\downarrow_4\uparrow_5|\\ +& +|\uparrow_1\rangle\langle\uparrow_1|\otimes I\otimes|\uparrow_3\downarrow_4\downarrow_5\rangle\langle\uparrow_3\downarrow_4\downarrow_5|\\ +& +|\uparrow_1\rangle\langle\uparrow_1|\otimes I\otimes|\downarrow_3\downarrow_4\uparrow_5\rangle\langle\downarrow_3\downarrow_4\uparrow_5|\\ +& +|\uparrow_1\rangle\langle\uparrow_1|\otimes I\otimes|\downarrow_3\downarrow_4\downarrow_5\rangle\langle\downarrow_3\downarrow_4\downarrow_5|\\ +& +|\downarrow_1\rangle\langle\downarrow_1|\otimes I\otimes|\uparrow_3\downarrow_4\uparrow_5\rangle\langle\uparrow_3\downarrow_4\uparrow_5|\\ +& +|\downarrow_1\rangle\langle\downarrow_1|\otimes
I\otimes|\uparrow_3\downarrow_4\downarrow_5\rangle\langle\uparrow_3\downarrow_4\downarrow_5|\\ +& +|\downarrow_1\rangle\langle\downarrow_1|\otimes I\otimes|\downarrow_3\downarrow_4\uparrow_5\rangle\langle\downarrow_3\downarrow_4\uparrow_5|\\ +& +|\downarrow_1\rangle\langle\downarrow_1|\otimes I\otimes|\downarrow_3\downarrow_4\downarrow_5\rangle\langle\downarrow_3\downarrow_4\downarrow_5| +\end{align}

+",2817,,,,,9/18/2018 19:46,,,,0,,,,CC BY-SA 4.0 +4254,2,,4252,9/18/2018 19:56,,12,,"

This is a good question; it's one that textbooks seem to sneak around. I reached this exact question when preparing a quantum computing lecture a couple days ago.

+ +

As far as I can tell, there's no way of getting the desired 8x8 matrix using the Kronecker product $\otimes$ notation for matrices. All you can really say is: Your operation of applying CNOT to three qubits, with the control being the first and the target being the third, has the following effects:

+ +

$\lvert 000\rangle \mapsto \lvert 000\rangle$

+ +

$\lvert 001\rangle \mapsto \lvert 001\rangle$

+ +

$\lvert 010\rangle \mapsto \lvert 010\rangle$

+ +

$\lvert 011\rangle \mapsto \lvert 011\rangle$

+ +

$\lvert 100\rangle \mapsto \lvert 101\rangle$

+ +

$\lvert 101\rangle \mapsto \lvert 100\rangle$

+ +

$\lvert 110 \rangle \mapsto \lvert 111 \rangle$

+ +

$\lvert 111 \rangle \mapsto \lvert 110 \rangle$

+ +

and therefore it is given by the following matrix:

+ +

$U = \begin{bmatrix} + 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ + 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ + 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ + 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ + 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ + 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ + 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ + 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 +\end{bmatrix}$

+ +

This matrix $U$ is indeed neither $I_2 \otimes \mathrm{CNOT}$ nor $\mathrm{CNOT} \otimes I_2$. There is no succinct Kronecker-product-based notation for it; it just is what it is.

+",1947,,,,,9/18/2018 19:56,,,,1,,,,CC BY-SA 4.0 +4255,2,,4252,9/18/2018 20:51,,28,,"

For a presentation from first principles, I like Ryan O'Donnell's answer. But for a slightly higher-level algebraic treatment, here's how I would do it.

+ +

The main feature of a controlled-$U$ operation, for any unitary $U$, is that it (coherently) performs an operation on some qubits depending on the value of some single qubit. The way that we can write this explicitly algebraically (with the control on the first qubit) is: +$$ \mathit{CU} \;=\; \def\ket#1{\lvert #1 \rangle}\def\bra#1{\langle #1 \rvert}\ket{0}\!\bra{0} \!\otimes\! \mathbf 1 \,+\, \ket{1}\!\bra{1} \!\otimes\! U$$ +where $\mathbf 1$ is an identity matrix of the same dimension as $U$. Here, $\ket{0}\!\bra{0}$ and $\ket{1}\!\bra{1}$ are projectors onto the states $\ket{0}$ and $\ket{1}$ of the control qubit — but we are not using them here as elements of a measurement, but to describe the effect on the other qubits depending on one or the other subspace of the state-space of the first qubit.

+ +

We can use this to derive the matrix for the gate $\mathit{CX}_{1,3}$ which performs an $X$ operation on qubit 3, coherently conditioned on the state of qubit 1, by thinking of this as a controlled-$(\mathbf 1_2 \!\otimes\! X)$ operation on qubits 2 and 3: +$$ +\begin{aligned} +\mathit{CX}_{1,3} \;&=\; +\ket{0}\!\bra{0} \otimes \mathbf 1_4 \,+\, \ket{1}\!\bra{1} \otimes (\mathbf 1_2 \otimes X) +\\[1ex]&=\; +\begin{bmatrix} + \mathbf 1_4 & \mathbf 0_4 \\ + \mathbf 0_4 & (\mathbf 1_2 \!\otimes\! X) +\end{bmatrix} +\;=\; +\begin{bmatrix} + \mathbf 1_2 & \mathbf 0_2 & \mathbf 0_2 & \mathbf 0_2 \\ + \mathbf 0_2 & \mathbf 1_2 & \mathbf 0_2 & \mathbf 0_2 \\ + \mathbf 0_2 & \mathbf 0_2 & X & \mathbf 0_2 \\ + \mathbf 0_2 & \mathbf 0_2 & \mathbf 0_2 & X +\end{bmatrix}, +\end{aligned} +$$ +where the latter two are block matrix representations to save on space (and sanity).

+ +

Better still: we can recognise that — on some mathematical level where we allow ourselves to realise that the order of the tensor factors doesn't have to be in some fixed order — the control and the target of the operation can be on any two tensor factors, and that we can fill in the description of the operator on all of the other qubits with $\mathbf 1_2$. This would allow us to jump straight to the representation +$$ +\begin{alignat}{2} +\mathit{CX}_{1,3} \;&=&\; +\underbrace{\ket{0}\!\bra{0}}_{\text{control}} \otimes \underbrace{\;\mathbf 1_2\;}_{\!\!\!\!\text{uninvolved}\!\!\!\!} \otimes \underbrace{\;\mathbf 1_2\;}_{\!\!\!\!\text{target}\!\!\!\!} +&+\, +\underbrace{\ket{1}\!\bra{1}}_{\text{control}} \otimes \underbrace{\;\mathbf 1_2\;}_{\!\!\!\!\text{uninvolved}\!\!\!\!} \otimes \underbrace{\; X\;}_{\!\!\!\!\text{target}\!\!\!\!} +\\[1ex]&=&\; +\begin{bmatrix} + \mathbf 1_2 & \mathbf 0_2 & \phantom{\mathbf 0_2} & \phantom{\mathbf 0_2} \\ + \mathbf 0_2 & \mathbf 1_2 & \phantom{\mathbf 0_2} & \phantom{\mathbf 0_2} \\ + \phantom{\mathbf 0_2} & \phantom{\mathbf 0_2} & \phantom{\mathbf 0_2} & \phantom{\mathbf 0_2} \\ + \phantom{\mathbf 0_2} & \phantom{\mathbf 0_2} & \phantom{\mathbf 0_2} & \phantom{\mathbf 0_2} +\end{bmatrix} +\,&+\, +\begin{bmatrix} + \phantom{\mathbf 0_2} & \phantom{\mathbf 0_2} & \phantom{\mathbf 0_2} & \phantom{\mathbf 0_2} \\ + \phantom{\mathbf 0_2} & \phantom{\mathbf 0_2} & \phantom{\mathbf 0_2} & \phantom{\mathbf 0_2} \\ + \phantom{\mathbf 0_2} & \phantom{\mathbf 0_2} & X & \mathbf 0_2 \\ + \phantom{\mathbf 0_2} & \phantom{\mathbf 0_2} & {\mathbf 0_2} & X +\end{bmatrix} +\end{alignat} +$$ +and also allows us to immediately see what to do if the roles of control and target are reversed: +$$ +\begin{alignat}{2} +\mathit{CX}_{3,1} \;&=&\; +\underbrace{\;\mathbf 1_2\;}_{\!\!\!\!\text{target}\!\!\!\!} \otimes \underbrace{\;\mathbf 1_2\;}_{\!\!\!\!\text{uninvolved}\!\!\!\!} \otimes \underbrace{\ket{0}\!\bra{0}}_{\text{control}} +\,&+\, 
+\underbrace{\;X\;}_{\!\!\!\!\text{target}\!\!\!\!} \otimes \underbrace{\;\mathbf 1_2\;}_{\!\!\!\!\text{uninvolved}\!\!\!\!} \otimes \underbrace{\ket{1}\!\bra{1}}_{\text{control}} +\\[1ex]&=&\; +{\scriptstyle\begin{bmatrix} + \!\ket{0}\!\bra{0}\!\! & & & \\ + & \!\!\ket{0}\!\bra{0}\!\! & & \\ +& & \!\!\ket{0}\!\bra{0}\!\! & \\ +& & & \!\!\ket{0}\!\bra{0} +\end{bmatrix}} +\,&+\, +{\scriptstyle\begin{bmatrix} + & & \!\!\ket{1}\!\bra{1}\!\! & \\ + & & & \!\!\ket{1}\!\bra{1} \\ +\!\ket{1}\!\bra{1}\!\! & & & \\ +& \!\!\ket{1}\!\bra{1} & & +\end{bmatrix}} +\\[1ex]&=&\; +\left[{\scriptstyle\begin{matrix} +1 & 0 & 0 & 0 \\ +0 & 0 & 0 & 0 \\ +0 & 0 & 1 & 0 \\ +0 & 0 & 0 & 0 \\ +0 & 0 & 0 & 0 \\ +0 & 1 & 0 & 0 \\ +0 & 0 & 0 & 0 \\ +0 & 0 & 0 & 1 +\end{matrix}}\right.\,\,&\,\,\left.{\scriptstyle\begin{matrix} +0 & 0 & 0 & 0 \\ +0 & 1 & 0 & 0 \\ +0 & 0 & 0 & 0 \\ +0 & 0 & 0 & 1 \\ +1 & 0 & 0 & 0 \\ +0 & 0 & 0 & 0 \\ +0 & 0 & 1 & 0 \\ +0 & 0 & 0 & 0 +\end{matrix}}\right]. +\end{alignat} +$$ +But best of all: if you can write down these operators algebraically, you can take the first steps towards dispensing with the giant matrices entirely, instead reasoning about these operators algebraically using expressions such as $\mathit{CX}_{1,3} = +\ket{0}\!\bra{0} \! \otimes\!\mathbf 1_2\! \otimes\! \mathbf 1_2 + + \ket{1}\!\bra{1} \! \otimes\! \mathbf 1_2 \! \otimes\! X$ +and +$\mathit{CX}_{3,1} = +\mathbf 1_2 \! \otimes\! \mathbf 1_2 \! \otimes \! \ket{0}\!\bra{0} + +X \! \otimes\! \mathbf 1_2 \! \otimes \! \ket{1}\!\bra{1}$. +There will be a limit to how much you can do with these, of course — a simple change in representation is unlikely to make a difficult quantum algorithm efficiently solvable, let alone tractable by manual calculation — but you can reason about simple circuits much more effectively using these expressions than with giant space-eating matrices.
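As a quick sanity check, the algebraic expression for $\mathit{CX}_{1,3}$ above is easy to evaluate numerically (a small numpy sketch; the basis ordering assumed is the usual one, with qubit 1 as the most significant bit):

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
P0 = np.diag([1, 0])  # |0><0|
P1 = np.diag([0, 1])  # |1><1|

# CX_{1,3} = |0><0| (x) 1 (x) 1  +  |1><1| (x) 1 (x) X
CX_13 = np.kron(P0, np.kron(I2, I2)) + np.kron(P1, np.kron(I2, X))

# This is exactly the permutation matrix that swaps |100> <-> |101>
# and |110> <-> |111>, i.e. rows/columns 4 <-> 5 and 6 <-> 7.
expected = np.eye(8)[[0, 1, 2, 3, 5, 4, 7, 6]]
assert np.array_equal(CX_13, expected)
```

The same two-line recipe works for any placement of control and target: put the projectors and the $X$ at the appropriate tensor factors and fill the rest with identities.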

+",124,,124,,9/19/2018 6:20,9/19/2018 6:20,,,,4,,,,CC BY-SA 4.0 +4256,2,,4204,9/19/2018 8:59,,3,,"

We should add that applied quantum computing is also covered as on-topic by the new cross-disciplinary journals:

+

(IOP) Quantum Science and Technology

+

(Wiley) Advanced Quantum Technologies

+

(Springer) Quantum Machine Intelligence

+",410,,2293,,03-07-2021 22:29,03-07-2021 22:29,,,,1,,,,CC BY-SA 4.0 +4257,2,,4251,9/19/2018 18:52,,3,,"
I need some useful sources about the geometry of qutrit.
+
+ +

The most useful resource I know on the geometries of qutrits is the paper Geometry of the generalized Bloch sphere for qutrits.

+ +
Specifically related to the Gell-Mann matrix representation.
+
+ +

The eight Gell-Mann matrices, which form one of the generalizations of Pauli matrices to 3-level systems, are involved in what is sometimes called the ""Bloch representation of a qutrit"". This is described on Page 4 of the above linked paper.

+ +

If you are interested in the mathematics of the geometry of qutrits, the above resource is probably the best available. If you are more interested in the visualization of qutrits, the paper Three-dimensional visualization of a qutrit is the best resource I know. Keep in mind that generalizations of the Bloch sphere for higher dimensional qudits will never be as simple and elegant as the Bloch sphere is for 2-level systems, just as 4D hyper-spheres are not as easy to visualize as 3D spheres.

+",2293,,,,,9/19/2018 18:52,,,,0,,,,CC BY-SA 4.0 +4258,1,,,9/20/2018 9:52,,4,48,"

The superactivation of quantum capacity is an effect that some quantum channels present such that is two of those channels with zero capacity are combined, a non-zero channel capacity can be achieved for the combined channel. This is obviously an effect that can only happen in the quantum world, as it is known that in classical communications two zero-capacity channels give a zero-capacity channel when combined.

+ +

This is one of the most surprising and weird effects known from quantum information theory, and so I am wondering if someone can give reference about an experimental realization of such effect.

+",2371,,55,,03-08-2021 11:13,03-08-2021 11:13,Experimental Realization of Superactivation of Quantum Capacity,,1,0,,,,CC BY-SA 4.0 +4259,2,,4251,9/20/2018 10:04,,8,,"

There are many ways to describe a qutrit or a general $N$ level system geometrically. There is also a large amount of references either explaining these geometries or applying them to various problems in quantum information. I'll try to explain here one quite general geometrical method, somewhat in detail.

+ +

This method is a generalization of the Bloch sphere of the qubit, however, the qubit case is degenerate because the Bloch sphere describes the parameter space of both pure and mixed qubits (but not the maximally mixed case), while in the general case, the geometry of the parameter space depends on the degeneracy structure of the eigenvalues of the density matrix.

+ +

The description is based on the diagonalization formula of the density matrix of a general $N$ level density matrix: +$$ \rho = U \Lambda U^{-1}$$ +Where $\Lambda$ is the eigenvalue matrix, which in the most general case has the form: +$$\Lambda = \mathrm{diag}(\underbrace{\lambda_1, \lambda_1, …}_{N_1 \mathrm{times}}, \underbrace{\lambda_2, \lambda_2, …}_{N_2 \mathrm{times}}, ...., \underbrace{\lambda_k, \lambda_k, …}_{N_k \mathrm{times}})$$ +The matrix $U$ is an $N$ dimensional unitary matrix, i.e., belonging to $N$-dimensional unitary group $U(N)$.

+ +

Of course, since we are diagonalizing a density matrix, we must have: +$$\sum_{i=1}^{k}N_i \lambda_i = 1 \space\mathrm{and} \space \lambda_i \ge 0 \space \mathrm{for \space all}\space i$$ +Inspecting the eigenvalue vector, we see that the action of a general $N_i$ unitary matrix belonging to a $U(N_i)$ subgroup keeps the eigenvalue matrix diagonal, therefore the space of the unitary transformations that do change the density matrix can be identified with the coset space: +$$\frac {U(N)}{U(N_1) \times U(N_2) … \times U(N_k)}$$ +The above spaces are called coadjoint orbits. They all admit explicit parametrizations in coordinates which allow actual work on specific cases. They are compact, homogeneous and Kähler i.e., they are compatibly complex and Riemannian. They are described in a rather elementary manner in the following work of Bernatska and Holod. +Please see the following work by Loi, Mossa and Zuddas for explicit parametrization formulas for general cases.

+ +

However, even from the general form of the coset space we can extract some information of the parameter space, namely its dimension, which is in the general case: +$$d = N^2-\sum_i N_i^2$$ +In the following paragraph, I'll describe in more detail the case of a pure qutrit case. Here: +$$\Lambda = \mathrm{diag}(1, 0, 0)$$ +And the space parametrizing the pure single qutrit case is: +$$\mathbb{C}P^2 = \frac {U(3)}{U(1) \times U(2)}$$ +This is the two-dimensional complex projective space, (having a real dimension of $4$).

+ +

It is quite easy to parametrize this space since we know that we can parametrize (almost) every pure qutrit state by means of the unit vector: +$$v = \frac{1}{\sqrt{1+|z_1|^2+|z_2|^2}}\begin{pmatrix}1 \\z_1 \\ z_2 \end{pmatrix}$$ +The coordinates $(z_1, z_2)$ are the complex coordinates of $\mathbb{C}P^2$

+ +

We get the parametrization of the pure qutrit density matrix (which is a projector): $$ \rho(z_1, z_2, \bar{z}_1, \bar{z}_2) = v v^{\dagger} = \frac{1}{1+|z_1|^2+|z_2|^2}\begin{pmatrix}1 &\bar{z}_1 & \bar{z}_2 \\ z_1 &z_1 \bar{z}_1 & z_1 \bar{z}_2 \\ z_2 &z_2 \bar{z}_1 & z_2 \bar{z}_2 \end{pmatrix}$$ +This space is symplectic, which is characteristic of closed quantum systems; its symplectic form, sometimes called KKS (after Kirillov-Kostant-Souriau), is given by: +$$\omega_{\alpha \bar{\beta}} = \mathrm{tr}\partial_{\alpha} \rho \bar{\partial}_{\beta} \rho$$ +Being Kählerian, the symplectic form can be computed from a Kähler potential: +$$\omega_{\alpha \bar{\beta}} = \partial_{\alpha} \bar{\partial}_{\beta} K$$ +with +$$K = \ln(1 + |z_1|^2 +|z_2|^2)$$ +Given a set of Gell-Mann matrices $\mathbf{G}_i, \space i=1,…,8$, their expectation values in a general pure qutrit state, given by +$$ G_i (z_1, z_2, \bar{z}_1, \bar{z}_2) = \mathrm{tr}(\rho(z_1, z_2, \bar{z}_1, \bar{z}_2) \mathbf{G}_i)$$

+ +

become classical Hamiltonians on $\mathbb{C}P^2$, and their algebra closes to the Lie algebra $\mathfrak{su}(3)$ under the Poisson brackets:

+ +

$$\{G_i, G_j\} = \omega^{\alpha \bar{\beta}} (\partial_{\alpha} G_i \bar{\partial}_{\alpha} G_j - +\partial_{\alpha} G_j\bar{\partial}_{\alpha} G_i )$$

+ +

Where $\omega$ with the upper indices is the inverse symplectic form.

+ +

This formulation of the qutrit allows many applications in quantum information theory; please see, for example, Hughston and Salamon, where they construct a SIC-POVM using this parametrization.

+ +

Another application is by Chaturvedi, Ercolessi, Marmo, Morandi, Mukunda and Simon. They do not spell out the above parametrization, but they show that the connection: +$$A_{\alpha} = (\partial_{\alpha} - \bar{\partial}_{\alpha })K$$ +is a Berry connection which gives rise to Berry phases that can be used in holonomic quantum computation to generate gates for the qutrit system. Please see, for example, Boya, Perelomov and Santander and Khanna, Mukhopadhyay, Simon and Mukunda.

+ +

One very important property of the pure state parameter spaces is that there is a geometric interpretation of the measurement probabilities as follows: The complex projective spaces parametrizing the pure states are equipped with a metric called the Fubini-Study metric. The measurement probabilities of any observable, (for example one of the Gell-Mann matrices) is proportional to the geodesic length from the point representing the state to the point representing the observable eigenstate projector in $\mathbb{C}P^N$. Please see the important work by Ashtekar and Schilling. As far as I know a generalization of this property to mixed state cases has not been found.

+ +

In the case of $\mathbb{C}P^2$, the Fubini-Study metric is given by: +$$g_{\alpha \bar{\beta}}= \frac{(1 + |z_1|^2 +|z_2|^2)\delta_{\alpha \beta}-z_{\alpha} \bar{z_{\beta}} }{(1 + |z_1|^2 +|z_2|^2)^2}$$
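As a small numerical sanity check of the $\rho = v v^{\dagger}$ parametrization above (just a sketch; the coordinates $z_1, z_2$ are picked at random):

```python
import numpy as np

rng = np.random.default_rng(42)
z1, z2 = rng.normal(size=2) + 1j * rng.normal(size=2)

# Unit vector v and the corresponding pure-state density matrix.
v = np.array([1, z1, z2]) / np.sqrt(1 + abs(z1) ** 2 + abs(z2) ** 2)
rho = np.outer(v, v.conj())

assert np.isclose(np.trace(rho), 1)    # unit trace
assert np.allclose(rho, rho.conj().T)  # Hermitian
assert np.allclose(rho @ rho, rho)     # projector, i.e. a pure state
```

Any choice of $(z_1, z_2)$ thus gives a valid pure qutrit density matrix, as the parametrization promises.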

+",4263,,4263,,9/20/2018 12:15,9/20/2018 12:15,,,,3,,,,CC BY-SA 4.0 +4260,1,4261,,9/20/2018 10:32,,5,1961,"

I am trying to make a quantum circuit with one qubit and 2 classical bits for each measurement in the system below: +

+ +

I want to make a condition on the first bit: if the first measurement collapses to zero, then an $X$ operator acts on the circuit; else (if it is one), nothing acts on the circuit.

+ +

I am using qiskit language.

+ +

but when I try to create my circuit, there is always an error:

+ +
#definitions
+q = QuantumRegister(1)
+c = ClassicalRegister(2)
+qc = QuantumCircuit(q,c)
+
+# building the circuit
+qc.h(q)
+qc.measure(q[0],c[0])
+qc.x(q[0]).c[0]_if(c[0], 0)
+qc.measure(q[0],c[1])
+circuit_drawer(qc)
+
+ +

and the error is:

+ +
  File ""<ipython-input-4-66c70285946b>"", line 3
+    qc.x(q[0]).c[0]_if(c[0], 0)
+                     ^
+SyntaxError: invalid syntax
+
+ +

how to write it correctly?

+ +

When I try to change

+ +
qc.x(q[0]).c[0]_if(c[0], 0)
+
+ +

with:

+ +
qc.x(q).c_if(c, 0)
+
+ +

I succeed in building my circuit but I get a circuit that I don't want to work with: +

+ +

I wish for help, thanks.

+",4524,,26,,12/23/2018 11:41,01-03-2020 12:22,How to create a condition on only one classical bit when we have a total of 2 classic bits in the system,,2,0,,,,CC BY-SA 4.0 +4261,2,,4260,9/20/2018 11:27,,2,,"

The controlled NOT gate does the opposite of what you want: it applies $X$ to the target qubit if the control qubit is 1, and does nothing if the control qubit is 0.

+ +

What you want is to apply $X$ when the control qubit is 0 and do nothing when it is 1. This can be accomplished by applying a NOT gate (i.e. an $X$ gate) before doing the CNOT.

+ +

In the IBM composer it would look like this:

+ +

+ +

The code for doing CNOT in quiskit is:

+ +
gate cx c,t {
+ CX c,t; 
+}
+
+ +

Since there is a specific gate for what you want to do, you do not need any ""if statements""!

+",2293,,26,,9/20/2018 14:43,9/20/2018 14:43,,,,7,,,,CC BY-SA 4.0 +4262,2,,4258,9/20/2018 12:01,,3,,"

The paper Superactivation of Multipartite Unlockable Bound Entanglement presented the first experimental realization of the following superactivation: Alice and Charlie have zero entanglement. Bob and Charlie have zero entanglement. But Alice, Bob, and Charlie have non-zero tripartite entanglement.

+ +

Six years earlier, an experimental demonstration of a slightly different type of superactivation was reported in Superactivation of Multipartite Unlockable Bound Entanglement.

+ +

One year further back, there was Generation and superactivation of bound entanglement, which was an experimental demonstration of superactivation involving only bound entanglement.

+",2293,,,,,9/20/2018 12:01,,,,0,,,,CC BY-SA 4.0 +4263,1,4264,,9/21/2018 4:47,,31,1542,"

(Sorry for a somewhat amateurish question)

+ +

I studied quantum computing from 2004 to 2007, but I've lost track of the field since then. At the time there was a lot of hype and talk of QC potentially solving all sorts of problems by outperforming classical computers, but in practice there were really only two theoretical breakthroughs:

+ +
    +
  • Shor's algorithm, which did show significant speed up, but which had limited applicability, and wasn't really useful outside of integer factorization.
  • +
  • Grover's algorithm, which was applicable to a wider category of problems (since it could be used to solve NP-Complete problems), but which only showed polynomial speed-up compared to classical computers.
  • +
+ +

Quantum annealing was also discussed, but it wasn't clear whether it was really better than classical simulated annealing or not. Measurement based QC and the graph state representation of QC were also hot topics, but nothing definitive had been proved on that front either.

+ +

Has any progress in the field of quantum algorithms been made since then? In particular:

+ +
    +
  • Have there been any truly ground breaking algorithms besides Grover's and Shor's?
  • +
  • Has there been any progress in defining BQP's relationship to P, BPP and NP?
  • +
  • Have we made any progress in understanding the nature of quantum speed up other than saying that ""it must be because of entanglement""?
  • +
+",4636,,1837,,9/21/2018 7:38,10/16/2019 15:25,Has there been any truly ground breaking advance in quantum algorithms since Grover and Shor?,,2,1,,,,CC BY-SA 4.0 +4264,2,,4263,9/21/2018 7:30,,22,,"
+

Have there been any truly ground breaking algorithms besides Grover's + and Shor's?

+
+ +

It depends on what you mean by ""truly ground breaking"". Grover's and Shor's are particularly unique because they were really the first instances that showed particularly valuable types of speed-up with a quantum computer (e.g. the presumed exponential improvement for Shor) and they had killer applications for specific communities.

+ +

There have been a few quantum algorithms that have been designed since, and I think three are particularly worthy of mention:

+ +
    +
  • The BQP-complete algorithm for evaluating the Jones polynomial at particular points. I mention this because, aside from more obvious things like Hamiltonian simulation, I believe it was the first BQP-complete algorithm, so it really shows the full power of a quantum computer.

  • +
  • The HHL algorithm for solving linear equations. This is a slightly funny one because it's more like a quantum subroutine, with quantum inputs and outputs. However, it is also BQP-complete and it's receiving a lot of attention at the moment, because of potential applications in machine learning and the like. I guess this is the best candidate for truly ground breaking, but that's a matter of opinion.

  • +
  • Quantum Chemistry. I know very little about these, but the algorithms have developed substantially since the time you mention, and it has always been cited as one of the useful applications of a quantum computer.

  • +
+ +
+

Has there been any progress in defining BQP's relationship to P, BPP + and NP?

+
+ +

Essentially, no. We know BQP contains BPP, and we don't know the relation between BQP and NP.

+ +
+

Have we made any progress in understanding the nature of quantum speed + up other than saying that ""it must be because of entanglement""?

+
+ +

Even back when you were studying it originally, I would say it was more precisely defined than that. There are (and were) good comparisons between universal gate sets (potentially capable of giving exponential speed-up) and classically simulable gate sets. For example, recall that the Clifford gates produce entanglement but are classically simulable. Not that it's straightforward to state precisely what is required in a more pedagogical manner.

+ +

Perhaps where some progress has been made is in terms of other models of computation. For example, the model DQC1 is better understood - this is a model that appears to have some speed-up over classical algorithms but is unlikely to be capable of BQP-complete calculations (but before you get drawn into the hype that you might find online, there is entanglement present during the computation).

+ +

On the other hand, the ""it's because of entanglement"" sort of statement still isn't entirely resolved. Yes, for pure state quantum computation, there must be some entanglement because otherwise the system is easy to simulate, but for mixed separable states, we don't know if they can be used for computations, or if they can be efficiently simulated.

+ +

Also, one might try to ask a more insightful question: Have we made any progress in understanding which problems will be amenable to a quantum speed-up? This is subtly different because if you think that a quantum computer gives you new logic gates that a classical computer doesn't have, then it's obvious that to get a speed-up, you must use those new gates. However, it is not clear that every problem is amenable to such benefits. Which ones are? There are classes of problem where one might hope for speed-up, but I think that still relies on individual intuition. That can probably still be said about classical algorithms. You've written an algorithm x. Is there a better classical version? Maybe not, or maybe you're just not spotting it. That's why we don't know if P=NP.

+",1837,,,,,9/21/2018 7:30,,,,4,,,,CC BY-SA 4.0 +4265,2,,1404,9/22/2018 15:25,,2,,"

Here is a recent development from Xanadu: a photonic quantum circuit which mimics a neural network. This is an example of a neural network running on a quantum computer.

+ +

This photonic circuit contains interferometers and squeezing gates which mimic the weighting functions of a NN, a displacement gate acting as a bias, and a non-linear transformation similar to the ReLU function of a NN.

+ +

They have also used this circuit to train the network to generate quantum states and to implement quantum gates.

+ +

Here are their publication and code used to train the circuit. Here is a medium article explaining their circuit.

+",419,,,,,9/22/2018 15:25,,,,0,,,,CC BY-SA 4.0 +4266,1,4284,,9/22/2018 18:40,,6,712,"

In the three-polarizing-filter experiment, two orthogonal polarizing filters block all light but then allow some amount when a third polarizing filter is placed oriented at a 45 degree angle between them.

+ +

Can we analyze this experiment through terms familiar to quantum computation? For example, representing photons as being in a superposition of horizontal & vertical polarization, being sent through unitary gates (the polarization filters) with some end measurement giving us a result with probability corresponding to the proportion of photons allowed through. Basically, reducing this experiment to a form which could be written as a quantum circuit!

+ +

I ask because it seems like quantum information processing is an ""easy"" path to reasoning about quantum phenomena, but I cannot see how exactly to apply it in this simple case.

+",4153,,,,,9/25/2018 13:09,Analyzing the three-polarizing-filter experiment as a quantum circuit,,2,0,,,,CC BY-SA 4.0 +4267,2,,4266,9/22/2018 19:04,,3,,"

Consider a sequence of 3 measurement devices, applied sequentially to the same qubit, which starts in the 0 state. The first and last devices measure in the Z basis. The second measures in the X basis. Now you ask what the probability is of getting the outcome 1 from the final measurement, depending on whether or not the second measurement device is present.
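To make this concrete, here is a small plain-Python calculation of the Born-rule probabilities for that sequence (a sketch of my own, not a library API). All amplitudes here are real, so no complex conjugation is needed:

```python
import math

# Single-qubit states as 2-component (real) vectors.
zero  = (1.0, 0.0)                              # Z-basis |0>
one   = (0.0, 1.0)                              # Z-basis |1>
plus  = (1 / math.sqrt(2),  1 / math.sqrt(2))   # X-basis |+>
minus = (1 / math.sqrt(2), -1 / math.sqrt(2))   # X-basis |->

def prob(state, outcome):
    """Born rule: probability that measuring `state` projects onto `outcome`."""
    amp = sum(o * s for o, s in zip(outcome, state))
    return amp * amp

# Without the middle X measurement, |0> never gives final Z-outcome 1.
p_without = prob(zero, one)

# With the middle X measurement, sum over its two possible outcomes.
p_with = sum(prob(zero, b) * prob(b, one) for b in (plus, minus))
```

Without the middle device the final measurement never reads 1; with it, the probability becomes 1/2, mirroring how the diagonal filter lets light through the crossed polarisers.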

+",1837,,,,,9/22/2018 19:04,,,,4,,,,CC BY-SA 4.0 +4268,1,4269,,9/22/2018 21:31,,7,1238,"

It seems like it should be simple, based on how Nielsen and Chuang talk about it, but I cannot seem to correctly implement the Inversion About the Mean operator ($2|\psi\rangle \langle\psi| - \mathcal{I}$) that is used in the Grover search algorithm, especially without using any ancilla bits.

+ +

I thought about performing a NOT operation on all the working qubits, then performing a controlled-NOT on a separate toggle qubit with the control being all the working qubits, then performing a controlled phase flip with control of the toggle bit, and finally flipping the phase of all the states. I'm not sure how I'd actually implement the controlled phase flipping, though, since, I believe, phase flipping one or all of the bits would not produce the desired effect.

+ +

Does anyone know how I can construct this? I am using Q#, by the way, if you'd like to answer in code.

+",4657,,26,,03-12-2019 09:07,07-09-2019 17:54,"How to construct the ""Inversion About the Mean"" operator?",,2,1,,,,CC BY-SA 4.0 +4269,2,,4268,9/22/2018 23:57,,7,,"

First, let's represent operation $2|\psi\rangle \langle\psi| - \mathcal{I}$ as $H^{\otimes n}(2|0\rangle \langle0| - \mathcal{I})H^{\otimes n}$, as Nielsen and Chuang do.

+ +

Doing $H^{\otimes n}$ is easy - it's just ApplyToEach(H, register).

+ +

$2|0\rangle \langle0| - \mathcal{I}$ flips the phase of all computational basis states except $|0...0\rangle$. Let's do instead $\mathcal{I} - 2|0\rangle \langle0|$, flipping the phase of only $|0...0\rangle$ (it introduces a global phase of -1 which in this case I think can be ignored).

+ +

To flip the phase of only $|0...0\rangle$:

+ +
    +
  • flip the state of all qubits using ApplyToEach(X, register). Now we need to flip the phase of only $|1...1\rangle$ state.
  • +
  • do a controlled-Z gate on one of the qubits (for example, the last one), using the rest as control. This can be done using Controlled functor: (Controlled Z)(Most(register), Tail(register)). Tail returns the last element of the array, and Most returns all elements except the last one.
  • +
  • flip the state of all qubits again to return them to the original state.
  • +
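As a sanity check of the whole recipe, one can verify $H^{\otimes n}(\mathcal{I} - 2|0\rangle \langle0|)H^{\otimes n} = \mathcal{I} - 2|\psi\rangle \langle\psi|$ for $n = 2$ with plain-Python matrix arithmetic, building the middle operator exactly as in the three steps above (X on all qubits, controlled-Z, X on all qubits). This is just an illustration; the helper functions are mine:

```python
import math

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def kron(A, B):
    # Kronecker product of two square matrices given as nested lists.
    return [[A[i][j] * B[k][l] for j in range(len(A)) for l in range(len(B))]
            for i in range(len(A)) for k in range(len(B))]

s = 1 / math.sqrt(2)
H = [[s, s], [s, -s]]
X = [[0, 1], [1, 0]]
CZ = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, -1]]

H2, X2 = kron(H, H), kron(X, X)

# Steps from above: flip all qubits, controlled-Z, flip all qubits back.
# The product equals I - 2|00><00| (a phase flip on |00> only).
middle = matmul(X2, matmul(CZ, X2))

circuit = matmul(H2, matmul(middle, H2))

# Expected: I - 2|psi><psi|, where |psi> is the uniform superposition
# (every amplitude 1/2 for n = 2), i.e. inversion about the mean up to
# the unobservable global phase of -1 mentioned above.
expected = [[(1 if i == j else 0) - 0.5 for j in range(4)] for i in range(4)]

ok = all(abs(circuit[i][j] - expected[i][j]) < 1e-9
         for i in range(4) for j in range(4))
```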
+",2879,,,,,9/22/2018 23:57,,,,3,,,,CC BY-SA 4.0 +4270,1,,,9/23/2018 2:43,,6,1960,"

I'm writing a simple multiplication algorithm that uses the Quantum Fourier Transform to repeatedly add a number (the multiplicand) to itself and decrement another number (the multiplier). The repeated addition process is to be stopped once the multiplier hits the fundamental state (all qubits are in the zero state). Registers a, b, c hold the product, multiplicand and multiplier respectively. Classical register cl is used to store the final result:

+ +
def multiply(first, second, n, m):  
+    a = QuantumRegister(m+n, ""a"")
+    b = QuantumRegister(m+n, ""b"")
+    c = QuantumRegister(m, ""c"")
+    d = QuantumRegister(m, ""d"")
+    cl = ClassicalRegister(m+n, ""cl"")
+    qc = QuantumCircuit(a, b, c, d, cl, name=""qc"")
+
+    for i in range(0, n):
+        if first[i] == ""1"":
+            qc.x(b[n-(i+1)])
+        if second[i] == ""1"":
+            qc.x(c[m-(i+1)])
+    qc.x(d[0])
+
+    for i in range(0, m+n):
+        createInputState(qc, a, m+n-(i+1))
+
+    for i in range(m):
+        createInputState(qc, c, m-(i+1))
+
+ +

At this point, however, I need to create a while loop of sorts that allows me to add the multiplicand to the accumulator (register a) until register c is in the fundamental state. Unfortunately the only method I could think of was using a for loop with range (0, (value of multiplier)), but I want to find out if there is a more 'quantum' alternative. The while loop would need to work as below:

+ +
while (register c is not in the fundamental state):
+        for i in range(0, m+n):
+            evolveQFTState(qc, a, b, m+n-(i+1)) 
+        for i in range(0, m):
+            decrement(qc, c, d, m-(i+1))
+        for i in range(0, m):
+            inverseQFT(qc, c, i)
+
+ +

And then we wrap things up:

+ +
    for i in range(0, m+n):
+        inverseQFT(qc, a, i)
+    for i in range(0, m+n):
+        qc.measure(a[i], cl[i])
+
+ +

So, in short, I am looking for a way to implement a set of statements that execute while a given condition holds true, i.e. a quantum register is not in the fundamental state. The problem I face is due to the fact that, to the best of my knowledge, we cannot use classical register bits in if statements, such as below:

+ +
if c[0] == 0:   -------> not possible for QISkit classical register bits
+    #Do something 
+
+ +

Another approach I tried was to perform the decrement operation in a different quantum circuit, but I got error messages.

+ +

Note: This is my first question here on QC SE, so please let me know if I have to rephrase it, change it or provide any additional information.

+",4412,,26,,03-12-2019 09:35,03-12-2019 09:35,How would one implement a quantum equivalent of a while loop in IBM QISkit?,,1,0,,,,CC BY-SA 4.0 +4271,2,,4270,9/23/2018 14:57,,6,,"

Qiskit makes and manipulates quantum circuits specified by the OpenQASM standard. This does indeed support statements that are conditional on a classical register.

+ +

The if statement conditionally executes a quantum operation based on the value of a classical register. So you can have statements like

+ +
if(c==3) U(theta, phi, lambda) q[0];
+
+ +

This will perform the rotation U(theta, phi, lambda) on q[0] if the classical register c holds the bit string that corresponds to the number 3.

+ +

As @sashwat-anagolum pointed out in a comment, at the Qiskit level this can be done with

+ +
quantumCircuit.U(theta, lambda, phi).c_if(classicalRegister, value)
+
+ +

Note that classical conditionals like these are not currently supported on quantum hardware.

+",409,,409,,9/25/2018 16:51,9/25/2018 16:51,,,,1,,,,CC BY-SA 4.0 +4272,2,,1404,9/23/2018 15:35,,3,,"

All of the answers here seem to be ignoring a fundamental practical limitation:

+ +

Deep Learning specifically works best with big data. MNIST is 60000 images, ImageNet is 14 Million images.

+ +

Meanwhile, the largest quantum computers right now have 50-72 qubits.

+ +

Even in the most optimistic scenarios, quantum computers that can handle the volumes of data that would require Deep Learning algorithms instead of more traditional modeling methods are not going to be around anytime soon.

+ +

So applying QC to Deep Learning might be a nice theoretical curiosity, but not something that's soon going to be practical.

+",4636,,-1,,11/29/2019 19:01,11/29/2019 19:01,,,,0,,,,CC BY-SA 4.0 +4273,1,,,9/23/2018 22:28,,3,124,"

I am trying to develop a more intuitive understanding of quantum computing -- I suppose Feynman would tell me that’s impossible! Let’s try: if we are trying to find the minimum of a surface or function, I can picture a grouping of qubits that would somehow consider the whole surface somewhat simultaneously, eventually finding the minimum. (Feel free to correct or improve upon that description, or tell me it’s completely useless.) My question is this: where is the function specified, in the arrangement or connection of qubits, or in the classical programming of the inputs (and outputs)?

+",4665,,,,,9/23/2018 23:32,Where is the problem stored in a quantum computer?,,2,0,0,,,CC BY-SA 4.0 +4274,2,,4273,9/23/2018 23:32,,1,,"

It depends on how you encode the problem with a quantum computer. There are different ways but I can explain one easily.

+ +

So we will call a set of qubits a register. +Let's say our problem can be represented on n qubits, with the output of the function on m qubits. +You usually start in the $ | 0 \rangle^n | 0 \rangle^m $ state. +You can create a superposition, which will represent all the candidates in the register of size n: +$$ \sum_i | i \rangle | 0 \rangle^m$$
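To make the superposition step concrete (an illustrative sketch of my own, not part of any library): applying a Hadamard to each of the n qubits of $ | 0 \rangle^n $ yields equal amplitudes $1/\sqrt{2^n}$ on all $2^n$ candidates:

```python
import math

def uniform_superposition(n):
    """Amplitudes of applying a Hadamard to each qubit of |0...0>:
    2^n equal entries of 1/sqrt(2^n).  (Helper name is mine.)"""
    amp = 1 / math.sqrt(2) ** n
    return [amp] * (2 ** n)

# 8 equal amplitudes over all 3-bit candidate strings |000> ... |111>.
amps3 = uniform_superposition(3)
```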

+ +

You can see the $ | i \rangle $ as binary strings, which can be considered candidate values/inputs of your problem. +Note that sometimes the $| i \rangle $ are considered as indexes or addresses. In that case, you can have values in another register that will be ""entangled"" or linked with the $| i \rangle $, that is, for each i you have an $x_i$ value, giving $| i \rangle | x_i \rangle $; but often in quantum algorithms this is left to an ""oracle"", which is a shorthand for any procedure that will do it for us, or any way of accessing those values (and computing the function most of the time).

+ +

Now, by using quantum gates forming a unitary transformation that represents our function (call it U), we apply it and get the output for each candidate in the register of size m: +$$ U \sum_i | i \rangle | 0 \rangle^m \rightarrow \sum_i | i \rangle | f(i) \rangle \text{ or } \sum_i | i \rangle | x_i \rangle | f(x_i) \rangle +$$

+ +

The unitary may just be a translation of bit operations into quantum gates, representing the classical computation of this function, but abstractly we denote it by some unitary transformation U. This can also be abbreviated as an ""oracle"". Note that with measurements, you get only one output at a time.

+ +

The second type of encoding I have seen is encoding values in the amplitudes of a quantum register. For example, given a vector (which will be considered normalized) of real components $x_i$, it will be encoded in an n-qubit register as: +$$ \sum_{i=1}^{2^n} x_i | i \rangle $$
+But that is non-trivial to carry out, and this way you are not representing every possibility in superposition; but again, computing the function is like applying a unitary operation.
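To illustrate the amplitude-encoding step classically (a sketch of my own, not a library routine): preparing such a state amounts to normalising the vector so that the squared amplitudes sum to 1:

```python
import math

def amplitude_encode(xs):
    """Normalised amplitudes encoding the vector `xs` as sum_i (x_i/||x||)|i>.
    (Name and helper are mine, just for illustration.)"""
    norm = math.sqrt(sum(x * x for x in xs))
    return [x / norm for x in xs]

# A 2-component vector fits in a single qubit (2^1 amplitudes).
amps = amplitude_encode([3.0, 4.0])
total = sum(a * a for a in amps)   # squared amplitudes sum to 1
```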

+ +

Conclusion: you specify the function as a unitary operation or an ""oracle"". Storing a problem on a quantum computer is not restricted to one method and is an exercise in its own right. Still, the way I tried to explain above is one of the more natural ones. And this is also the case classically.

+",4127,,,,,9/23/2018 23:32,,,,0,,,,CC BY-SA 4.0 +4275,1,13879,,9/24/2018 2:13,,10,722,"

In Improved Simulation of Stabilizer Circuits by Aaronson and Gottesman, it is explained how to compute a table describing which Pauli tensor products the X and Z observables of each qubit get mapped to as a Clifford circuit acts upon them.

+ +

Here is an example Clifford circuit:

+ +
0: -------@-----------X---
+          |           |
+1: ---@---|---@---@---@---
+      |   |   |   |
+2: ---|---|---@---|-------
+      |   |       |
+3: ---@---@-------Y-------
+
+ +

And the table describing how it acts on the X and Z observables of each qubit:

+ +
       +---------------------+-
+       | 0    1    2    3    |
++------+---------------------+-
+| 0    | XZ   X_   __   Z_   |
+| 1    | ZZ   YZ   Z_   ZZ   |
+| 2    | __   Z_   XZ   __   |
+| 3    | Z_   X_   __   XZ   |
++------+---------------------+-
+| sign |  ++   ++   ++   ++  |
++------+---------------------+-
+
+ +

Each column of the table describes how the circuit acts on the X observable (left half of column) and Z observable (right half of column) of each qubit. For example, the left side of column 3 is Z,Z,_,X, meaning an X3 operation (Pauli X on qubit 3) at the right hand side of the circuit is equivalent to a Z0 * Z1 * X3 operation at the left hand side of the circuit. The 'sign' row indicates the sign of the product, which is important if you're going to simulate a measurement (it tells you whether or not to invert the result).

+ +

You can also compute the table for the inverse of a circuit. In the example case I've given, the inverse table is this:

+ +
       +---------------------+-
+       | 0    1    2    3    |
++------+---------------------+-
+| 0    | XZ   Y_   __   Z_   |
+| 1    | _Z   YZ   Z_   _Z   |
+| 2    | __   Z_   XZ   __   |
+| 3    | Z_   Y_   __   XZ   |
++------+---------------------+-
+| sign |  ++   -+   ++   ++  |
++------+---------------------+-
+
+ +

The tables look almost the same if you transpose their rows and columns. But the entries aren't exactly identical. In addition to transposing, you have to encode the letters into bits (_=00, X=01, Z=10, Y=11) then swap the middle bits then decode. For example, ZZ encodes into 1010 which swaps into 1100 which decodes into Y_.
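For concreteness, the letter rule just described can be written as a few lines of Python (my own sketch of the stated encoding):

```python
# Encoding from the text: _=00, X=01, Z=10, Y=11.
ENC = {'_': (0, 0), 'X': (0, 1), 'Z': (1, 0), 'Y': (1, 1)}
DEC = {bits: letter for letter, bits in ENC.items()}

def invert_entry(entry):
    """Two-letter table entry -> corresponding entry of the inverse table
    (signs not included): encode both letters into four bits a,b,c,d and
    swap the middle two (abcd -> acbd), then decode."""
    (a, b), (c, d) = ENC[entry[0]], ENC[entry[1]]
    return DEC[(a, c)] + DEC[(b, d)]

# Worked example from the text: ZZ -> 1010 -> 1100 -> Y_
assert invert_entry('ZZ') == 'Y_'
# The middle-bit swap is an involution, so applying it twice is the identity.
assert all(invert_entry(invert_entry(p + q)) == p + q
           for p in '_XZY' for q in '_XZY')
```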

+ +

The question I have is: is there also a simple rule for computing the inverse table's signs?

+ +

Currently I'm inverting these tables by decomposing them into circuits, inverting the circuits, then multiplying them back together. It's extremely inefficient compared to transpose+replace, but if I'm going to use transpose+replace I need a sign rule.

+",119,,119,,9/24/2018 21:00,9/22/2020 19:29,Is there a simple rule for the inverse of a Clifford circuit's stabilizer table?,,3,4,,,,CC BY-SA 4.0 +4277,2,,4273,9/23/2018 22:24,,0,,"

Generally speaking if you're doing, say, Grover's algorithm (function inversion in $\sqrt{N}$ rather than $N$ time) you would want to program the function itself using the quantum gates that your qubits are passing through. Hypothetically that information could be stored in a ""program"" of classical bits stored alongside, if you have the ability to control which of a set of quantum gates is applied with a classical computer.

+ +

The hazard here is that quantum gates can only describe reversible logic whereas usual logic gates like AND and OR are not reversible. So your first step would typically involve implementing a reversible logic gate that is universal, like Toffoli (also called CCNOT): toffoli_3(x, y, z) = (x AND y) XOR z.
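A tiny Python check of the two claims above, that the Toffoli gate is reversible (it is its own inverse) and that fixing the third input to 1 yields NAND, a classically universal gate:

```python
def toffoli(x, y, z):
    """toffoli_3(x, y, z) = (x, y, (x AND y) XOR z)."""
    return x, y, (x & y) ^ z

bits = (0, 1)
triples = [(x, y, z) for x in bits for y in bits for z in bits]

# Reversible: the gate is its own inverse on every basis state.
assert all(toffoli(*toffoli(x, y, z)) == (x, y, z) for x, y, z in triples)

# Universality hint: with z fixed to 1, the third output is NAND(x, y).
nand = {(x, y): toffoli(x, y, 1)[2] for x in bits for y in bits}
assert nand == {(0, 0): 1, (0, 1): 1, (1, 0): 1, (1, 1): 0}
```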

+",,CR Drost,,,,9/23/2018 22:24,,,,1,,,,CC BY-SA 4.0 +4278,1,,,9/24/2018 9:10,,3,231,"

I have successfully installed qiskit. However, when I try to run a simulation I get the error ""No module named 'qiskit'

+ +
from qiskit import QuantumCircuit, QuantumRegister, ClassicalRegister, execute
+
+ +

How can I get or enable this module?

+",4668,,26,,03-12-2019 09:34,07-12-2021 17:41,How can I get the qiskit module,,3,0,,,,CC BY-SA 4.0 +4280,1,4282,,9/24/2018 11:24,,2,1462,"
    +
  1. Is it real now or not completed yet?

  2. +
  3. Can I buy one? / How many quantum computers have been manufactured in the world?

  4. +
5. Is quantum computing independent (e.g. Quantum+LCD+Keyboard), or is it an add-on for a PC (e.g. a GPU)? / Can I use a quantum computer directly, or must I connect it to a PC?

  6. +
7. Can a quantum computer process the same data as a CPU? (e.g. a CPU can't process GPU graphics)

  8. +
  9. Can quantum computers run high-end games?

  10. +
  11. Can I program a quantum computer using C++? If not, why so?

  12. +
+",4670,,26,,12/14/2018 5:45,12/14/2018 5:45,What's the current status of quantum computers?,,2,1,,11/25/2018 18:02,,CC BY-SA 4.0 +4281,2,,4280,9/24/2018 13:30,,2,,"

1 - Not completed, in the sense that computations suffer from noise. Achieving fault-tolerant quantum computation would be a major advancement.

+ +

2 - Unless you are wealthy, no. They are generally hosted by big groups. As for how many, no count is exhaustive, but you can refer to this link

+ +

3 - So far it is just a chip, with a big cooling system that takes up considerable space. The connection with a classical computer is just to send instructions to the chip and get the answers back to you as classical information.

+ +

4 - No, classical data cannot be processed unless we have a way to convert it to quantum form, which is non-trivial.

+ +

5 - So far it is not the purpose.

+ +

6 - There are some quantum simulators in C++. You can find them on Quantiki's list of QC simulators.

+",4127,,,,,9/24/2018 13:30,,,,3,,,,CC BY-SA 4.0 +4282,2,,4280,9/24/2018 14:42,,6,,"
+
    +
  1. Is it real now
  2. +
+
+ +

Quantum computers are real, and they have been made by several companies such as IBM, Intel, Google, Alibaba, and Rigetti. D-Wave also makes quantum annealers, which do not have provable speed-up over classical computers for most problems, but do have the same speed-up as circuit-based quantum computers for some problems (such as the square-root speed-up for the unstructured search problem that Grover's algorithm solves on circuit-based quantum computers).

+ +
+

or not completed yet?

+
+ +

The pursuit to build quantum computers is far from complete. We want to make machines with more qubits and less noise, for example.

+ +
+

2- Can I buy one?

+
+ +

D-Wave has released 4 models for public sale in the last 7 years. They sold their first annealer (the D-Wave One) for \$10 million and their second one (the D-Wave Two) for $15 million (see here). Their most recent machine, the 2000Q also sold for \$15 million.

+ +

None of the circuit-based quantum computers are publicly advertised for sale, but I'm sure if you offered Rigetti \$15 million for one, they would make it for you. Google and Intel might not be as interested in your money since they already have a lot of that, and since they seem a lot more reluctant to let people use their machines than IBM (who lets anybody use their machines directly), maybe they wouldn't sell you one for any price you offer, but this is just a guess.

+ +
+
    +
  1. How many quantum computers have been manufactured in the world?
  2. +
+
+ +

The first quantum computer had only 2 qubits in 1998 and was made at Oxford University by Jonathan Jones with theoretical input from Michele Mosca (see the original paper). Hundreds of quantum devices like this have been made in universities since then.

+ +

15 larger-scale quantum devices are listed here, but this list doesn't include Alibaba's quantum computer, and dozens of other prototypes made by these companies. For example this shows that D-Wave made at least 9 more machines that are not listed in that Wikipedia article. They have also made machines for their own testing such as the machine with the Pegasus architecture.

+ +
+
    +
1. Is quantum computing independent (e.g. Quantum+LCD+Keyboard), or is it an add-on for a PC (e.g. a GPU)? / Can I use a quantum computer directly, or must I connect it to a PC?
  2. +
+
+ +

Think of it more like a GPU accelerator. Use a classical keyboard and mouse to give it commands, and a classical LCD screen to see the output, but allow the calculations to happen on a device that involves qubits.

+ +

Here's a famous picture of someone using a classical keyboard and classical LCD when using a quantum annealer:

+ +

+ +
+
    +
1. Can a quantum computer process the same data as a CPU?
  2. +
+
+ +

QPUs can process all classical data. A classical CPU can't process all QPU data, but it can certainly process any classical data that a QPU uses.

+ +
+
    +
  1. Can quantum computers run high-end games?
  2. +
+
+ +

Quantum computers right now don't have enough qubits. If and when quantum computers have enough qubits, they will be able to do anything that a classical computer can do, and more.

+ +
+
    +
  1. Can I program a quantum computer using C++? If not, why so?
  2. +
+
+ +

There are some C/C++ simulators here, but there are also languages designed specifically for quantum computers, such as Q#. See this list.

+",2293,,26,,11/25/2018 18:09,11/25/2018 18:09,,,,1,,,,CC BY-SA 4.0 +4283,1,,,9/24/2018 16:31,,3,210,"

I am writing a small algorithm to multiply two numbers using IBM's QISkit. The code is below:

+ +

times_shell.py

+ +
import times
+
+first = input(""Enter a number with less than 7 digits."")
+l1 = len(first)
+second = input(""Enter another number with less than "" + str(8-l1) + "" 
+digits."")
+l2 = len(second)
+if l1 > l2:
+    n = l1
+    m = l2
+else:
+    first, second = second, first
+    n = l2
+    m = l1
+
+prod = (""0"")*(m+n)
+
+while int(second) is not 0:
+    second, prod = times.multiply(first, second, prod, n, m)
+
+ +

times.py

+ +
def multiply(first, second, product, n, m):
+
+    a = QuantumRegister(m+n, ""a"") #accumulator
+    b = QuantumRegister(m+n, ""b"") #holds multiplicand
+    c = QuantumRegister(m, ""c"") #hold multiplier
+    d = QuantumRegister(m, ""d"") #register with value 1
+    cl = ClassicalRegister(m+n, ""cl"") #used for final output
+    cl2 = ClassicalRegister(m, ""cl2"")
+    qc = QuantumCircuit(a, b, c, d, cl, cl2, name=""qc"")
+
+    for i in range(0, m+n):
+        if product[i] == ""1"":  
+            qc.x(a[m+n-(i+1)])
+
+    for i in range(0, n):
+        if first[i] == ""1"":
+            qc.x(b[n-(i+1)])
+
+    for i in range(0, m):
+        if second[i] == ""1"":
+            qc.x(c[m-(i+1)])
+
+    qc.x(d[0])
+
+    for i in range(0, m+n):
+        createInputState(qc, a, m+n-(i+1), pie)
+
+    for i in range(m):
+        createInputState(qc, c, m-(i+1), pie)
+
+    for i in range(0, m+n):
+        evolveQFTState(qc, a, b, m+n-(i+1), pie) 
+
+    for i in range(0, m):
+        decrement(qc, c, d, m-(i+1), pie)
+
+    for i in range(0, m):
+        inverseQFT(qc, c, i, pie)
+
+    for i in range(0, m+n):
+        inverseQFT(qc, a, i, pie)
+
+    for i in range(0, m+n):
+        qc.measure(a[i], cl[i])
+
+    for i in range(0, m):
+        qc.measure(c[i], cl2[i])
+
+    print(qc.qasm())
+
+    register(Qconfig['APItoken'], Qconfig['url'])
+    result = execute(qc, backend='ibmq_qasm_simulator', 
+                  shots=1024).result()
+    counts = result.get_counts(""qc"")
+    print(counts)
+    output = max(counts.items(), key=operator.itemgetter(1))[0]
+    multiplier, accumulator = str(output).split("" "")
+
+    print(multiplier)
+    print(accumulator)
+
+    return multiplier, accumulator
+
+ +

When I run it I get an error. The terminal output (the program output and the error) is as follows:

+ +
Traceback (most recent call last):
+    File ""times_shell.py"", line 18, in <module> second, prod = 
+    times.multiply(first, second, prod, n, m)
+    File ""D:\Projects\Quantum_Computing\IBM_Python\times.py"", line 122, in 
+    multiply
+    register(Qconfig['APItoken'], Qconfig['url'])
+    File ""C:\Users\ADMIN\AppData\Local\Programs\Python\Python37-32\lib\site- 
+    packages\qiskit\wrapper\_wrapper.py"", line 56, in register
+    _DEFAULT_PROVIDER.add_provider(provider)
+    File ""C:\Users\ADMIN\AppData\Local\Programs\Python\Python37-32\lib\site- 
+    packages\qiskit\wrapper\defaultqiskitprovider.py"", line 158, in 
+    add_provider
+    raise QISKitError(""The same provider has already been registered!"")
+    qiskit._qiskiterror.QISKitError: 'The same provider has already been 
+    registered!'
+
+ +

I'm not sure what the issue is here. Any help with this issue would be appreciated.

+",4412,,,,,9/25/2018 14:21,How can I resolve the error QISkit Error: The same provider has already been registered!,,1,2,,,,CC BY-SA 4.0 +4284,2,,4266,9/24/2018 18:34,,5,,"

We may simulate the three-polarising-filter experiment as a circuit, in the following way, using qutrits. +I will start by describing this as a sequence of transformations (a channel) on qutrits, and then give a circuit which simulates this using qubits. +$\def\ket#1{\left\lvert#1\right\rangle} \def\bra#1{\left\langle#1\right\rvert}$

+ +

The three-polarising-filter experiment as a channel on qutrits

+ +

We consider the path that the photon could take in the experiment, and describe the state of this path at each point in terms of a qutrit with state-space spanned by $\{ \ket\varnothing, \ket H, \ket V \} $. The state $\ket\varnothing$ is the vacuum state, meaning a state with no photons. The states $\ket H$ and $ \ket V$ correspond to having one photon which is horizontally or vertically polarised.

+ +

(A more general treatment would allow any finite number of photons, in which case one should instead consider an analysis in terms of creation and annihilation operators; for any finite number of qubits or qudits of fixed dimension, we must restrict ourselves to some limited number of photons — here we consider just one.)

+ +
    +
  1. As input, we can consider an arbitrary single-photon state. The state doesn't even have to be a pure state: we may consider any density matrix as input in the space spanned by $\{ \ket H, \ket V \} $.

  2. +
  3. The first filter does not quite 'measure' the photon, as it — well — filters it out. For a horizontally polarised filter, it strips away the vertical component, transforming it to the zero-photon state, and leaves the other 'basis' states unchanged. This realises the following transformation on density operators: +$$ F_H(\rho) \,=\, \ket\varnothing\!\!\bra\varnothing \,\rho\, \ket\varnothing\!\!\bra\varnothing \,+\, \ket H\!\!\bra H \,\rho\, \ket H \!\!\bra H \,+\, \ket\varnothing\!\!\bra V \,\rho\, \ket V\!\!\bra\varnothing$$ +— note the final term above.

    + +
      +
    • Note that this transformation is irreversible, and in particular, non-unitary. It is sometimes said of such transformations that they involve an 'implicit measurement'. Sure enough, this transformation could be in an abstract sense simulated by a non-demolition polarisation measurement, followed by a unitary on the state-space which depends on the measurement outcome, and finally by forgetting the measurement outcome so that no explicit trace is left of whether the filtering happened. That could be a way of simulating the filtering operation, e.g. with CNOTs and qubits representing operations on qutrits. But the actual physics is a bit closer to the irreversible map $F_H$ itself, that I've described.

    • +
  4. +
  5. The second filter acts much the way the first filter does, but in a different basis. Consider the basis $\{ \ket\varnothing, \ket D, \ket d \} $, where $ \ket\varnothing$ is as before and $\ket D = (\ket H + \ket V) \,/ \sqrt 2$, $\ket d = (\ket H - \ket V) \,/ \sqrt 2$. +Then we may analogously define +$$ F_D(\rho) \,= \,\ket\varnothing\!\!\bra\varnothing \,\rho\, \ket\varnothing\!\!\bra\varnothing + \ket D\!\!\bra D \,\rho\, \ket D \!\!\bra D \,+\, \ket\varnothing\!\!\bra d \,\rho\, \ket d\!\!\bra\varnothing$$ +to describe the effect of a polarising filter for $\ket D$.

  6. +
  7. The final filter is much as you might now expect: +$$ F_V(\rho) \,=\, \ket\varnothing\!\!\bra\varnothing \,\rho\, \ket\varnothing\!\!\bra\varnothing \,+\, \ket \varnothing\!\!\bra H \,\rho\, \ket H \!\!\bra \varnothing \,+\, \ket V\!\!\bra V \,\rho\, \ket V\!\!\bra V$$ +— note here the second term.

  8. +
+ +

The observation of the three-polarising-filter experiment is then that +$$ +\begin{align} +(F_V \circ F_H) (\rho) = F_V ( F_H(\rho)) &= \ket\varnothing \!\! \bra\varnothing, \quad\text{whereas} +\\ +(F_V \circ F_D \circ F_H) (\rho) = F_V \bigl(F_D\bigl( F_H(\rho)\bigr) \bigr) &= \tfrac 14 p\ket V \!\! \bra V + (1-\tfrac 14 p) \ket\varnothing \!\! \bra\varnothing, +\end{align}$$ +where $p = \bra H \rho \ket H$ may be non-zero.

+ +
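As a sanity check of the algebra above, here is a small NumPy sketch of the three channels (the variable names are my own); for a horizontally polarised input ($p = 1$) it reproduces the $\tfrac14 p$ transmission through the three-filter arrangement, and total absorption without the middle filter:

```python
import numpy as np

# Qutrit basis: |vac>, |H>, |V>.
vac, H, V = np.eye(3)

def ketbra(a, b):
    return np.outer(a, b.conj())

def polarising_filter(keep, block):
    """The channel F described above: pass `keep`, send `block` to the vacuum."""
    def F(rho):
        return (ketbra(vac, vac) @ rho @ ketbra(vac, vac)
                + ketbra(keep, keep) @ rho @ ketbra(keep, keep)
                + ketbra(vac, block) @ rho @ ketbra(block, vac))
    return F

D = (H + V) / np.sqrt(2)
d = (H - V) / np.sqrt(2)
F_H, F_D, F_V = (polarising_filter(H, V),
                 polarising_filter(D, d),
                 polarising_filter(V, H))

rho = ketbra(H, H)                 # one horizontally polarised photon, p = 1
without_middle = F_V(F_H(rho))     # all weight ends up on the vacuum
with_middle = F_V(F_D(F_H(rho)))   # weight 1/4 on |V>, 3/4 on the vacuum
print(np.round(with_middle.real, 3))
```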

Simulation with a circuit using qubits

+ +

There are a number of different ways in which we could simulate this using qubits. A good choice of representation of the 'photon qutrit' will give us a simpler circuit. I would suggest the following.

+ +
    +
  • Use a two-qubit representation with qubits labelled $N$ (for +'number') and $P$ (for 'polarisation'), where we consider $\ket{0}_P$ to correspond to horizontal polarisation (or the lack of polarisation when there are no photons) and $\ket{1}_P$ to correspond to vertical polarisation. +Then +$$ +\ket{\varnothing} \equiv \ket{0}_N\ket{0}_P\,, +\quad +\ket{H} \equiv \ket{1}_N\ket{0}_P\,, +\quad +\ket{V} \equiv \ket{1}_N\ket{1}_P\,. +$$ +The two-qubit state $\ket{01}$ is not in the state-space of the simulation; by asserting that it is not valid as an input, we may use a simpler circuit.

  • +
  • The action of each filter can be described in terms of forcing a particular polarisation at output, for instance by mapping the polarisation being selected for onto $\ket{0}_P$, conditioned on the photon number being $\ket{1}_N$; preparing a fresh qubit in the state $\ket{0}$ and substituting it for the polarisation qubit; forcing the photon number to zero for the part of the state where the photon polarisation is not what we were selecting for; and then by discarding the information about the input polarisation and restoring the frame of reference for the polarisation.

  • +
+ +

Using this representation, the following circuit then represents the set-up of having all three filters present:

+ +

+ +

(I've used a 'controlled-X' gate for the $F_V$ filter to emphasise the similarity to the $F_D$ filter: in general one can define a filter using any controlled unitary $U$, where $U$ determines the polarisation to filter.)

+ +

The symbols at the end of the 'auxiliary' wires for each filter is just a map of discarding that qubit (i.e. you don't care about anything at all that happens to it from that point on and it isn't accessible for measurement). This is equivalent to using a destructive measurement, summing up the non-normalised density operators for the two measurement outcomes, and then forgetting the measurement outcome, but from a physical perspective, you don't usually have to measure a qubit — or a photon, or your carry-on luggage — in order to forget about it and then lose access to it forever.

+",124,,124,,9/25/2018 13:09,9/25/2018 13:09,,,,1,,,,CC BY-SA 4.0 +4285,2,,4275,9/24/2018 20:06,,4,,"

There is a very closely related representation of the tableau representation of Aaronson (and Gottesman), which works not only for qubits but for qudits of arbitrary finite dimension, which works particularly well for purely Clifford circuits (i.e. at most one terminal measurement).

+ +

In this alternative representation, one has tableaus describing how the single-qubit X and Z operators transform, with phase information, as in the usual representation. The columns describe multi-qubit Weyl operators specifically, which are a special subset of the Pauli operators. The advantage of doing so is that the tableau is not just an array of coefficients, but an actual linear operator on the vectors which represent Weyl operators and phases.

+ +

There is a small catch. For qubits, these vectors have coefficients which are integers modulo 4 (corresponding to a double cover of the non-trivial single-qubit Pauli operators by Weyl operators), rather than modulo 2. I think this is a small price to pay — though I might be slightly biased, as it's my own result [arXiv:1102.3354]. However, it does seem to be a somewhat 'naturally occurring' representation: Appleby developed the single-qubit or qudit special case somewhat earlier [arXiv:quant-ph/0412001] (something which I would really have liked to know before spending two years needlessly re-creating essentially the same conventions).

+ +

Using such a representation, by virtue of the fact that the 'tableau' $M_C$ of a Clifford circuit $C$ is now an actual matrix (and an invertible one) which transforms vectors, the tableau for the inverse circuit $C^{\dagger}$ is then the inverse $M_C^{-1}$ of the tableau. So, for this closely related representation at least, the rule for computing the tableau for the inverse circuit is easy.

+",124,,,,,9/24/2018 20:06,,,,6,,,,CC BY-SA 4.0 +4286,2,,4275,9/25/2018 6:47,,2,,"

To draw out Aaronson and Gottesman's techniques a bit more explicitly: you can set up each stabilizer as a bit string of length $2N$ (for $N$ qubits). The first $N$ bits specify the locations of Z operators, and the second set of $N$ specify the locations of $X$ operators (so, $X_1Z_2$ for $N=2$ is 0110). For your circuit on four qubits, the transformation due to a Clifford circuit (up to some phases) would then be given by an $8\times 8$ matrix. We can think of this as a block matrix +$$ +M=\left(\begin{array}{cc} +A & B \\ C & D \end{array}\right), +$$ +where each of the blocks is $N\times N$. By the fact that the stabilizers commute, we know that +$$ +\left(\begin{array}{cc} +A & B \\ C & D \end{array}\right)\cdot\left(\begin{array}{cc} +0 & \mathbb{I} \\ \mathbb{I} & 0 \end{array}\right)\cdot \left(\begin{array}{cc} +A & B \\ C & D \end{array}\right)^T\equiv 0\text{ mod }2 +$$ +You want to find the inverse of $M$ modulo 2. Your claimed form of the inverse is then of the form (I think) +$$ +\left(\begin{array}{cc} +D^T & B^T \\ C^T & A^T \end{array}\right) +$$ +which is interestingly reminiscent of the inverse of a $2\times 2$ matrix (but that is not sufficient for block matrices. There is a block-wise inverse but that's not so helpful here, I think).

+ +
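As a quick numerical sanity check (my own construction, not from the paper), here is the tableau of a small Clifford circuit — a Hadamard on qubit 1 followed by the CNOT — written as a matrix on $(z_1, z_2 \,|\, x_1, x_2)$ bit vectors mod 2. The commutation condition $M \Omega M^T \equiv \Omega$ holds, and the claimed block form $[[D^T, B^T],[C^T, A^T]]$ (which equals $\Omega M^T \Omega$) is indeed the inverse:

```python
import numpy as np

N = 2
Z0 = np.zeros((N, N), dtype=int)
I = np.eye(N, dtype=int)
Omega = np.block([[Z0, I], [I, Z0]])

# CNOT (control 1, target 2): Z2 -> Z1Z2 on the Z part, X1 -> X1X2 on the X part.
M_cnot = np.block([[np.array([[1, 1], [0, 1]]), Z0],
                   [Z0, np.array([[1, 0], [1, 1]])]])
# Hadamard on qubit 1 just swaps the z1 and x1 coordinates:
M_h1 = np.eye(2 * N, dtype=int)[:, [2, 1, 0, 3]]

M = M_cnot @ M_h1 % 2                       # tableau of H-then-CNOT

# Commutation-preservation (symplectic) condition mod 2:
assert np.array_equal(M @ Omega @ M.T % 2, Omega)

# The claimed inverse [[D^T, B^T], [C^T, A^T]] equals Omega M^T Omega:
A, B = M[:N, :N], M[:N, N:]
C, D = M[N:, :N], M[N:, N:]
M_inv = np.block([[D.T, B.T], [C.T, A.T]])
assert np.array_equal(M @ M_inv % 2, np.eye(2 * N, dtype=int))
print("inverse formula verified")
```

(This only checks the linear part, of course — the phase bookkeeping is exactly the mess discussed below it in the literature.)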

The mess, of course, comes from keeping track of the phases. I guess the signs will be related to a change in the number of Y operators in each stabilizer, but I haven't succeeded in a unified treatment. Niel's answer probably does a better job of taking care of it automatically.

+",1837,,124,,9/25/2018 10:31,9/25/2018 10:31,,,,0,,,,CC BY-SA 4.0 +4289,1,4294,,9/25/2018 13:17,,7,253,"

I am reading the paper Polar codes for classical-quantum channels + by Wilde and Guha, and it states that collective measurements are necessary in order to achieve the Holevo symmetric information, as can be seen from the HSW theorem.

+ +

I am wondering what such collective measurements are and why they differ from normal quantum measurements. Additionally, I would like some insight into the physical realization of such measurements, as it seems that, if applied to an optical-fiber link, the use of such codes would be a way to obtain a transmission rate equal to the Holevo symmetric information, which would be the ultimate capacity of such a channel.

+",2371,,55,,10/27/2021 17:07,10/27/2021 17:07,Collective measurements: importance and realization,,1,0,,,,CC BY-SA 4.0 +4290,2,,4283,9/25/2018 14:21,,2,,"

It seems that the solution is rather simple - simply move the registration bit of the code, i.e. the bit below, to the times_shell.py script.

+ +
register(Qconfig['APItoken'], Qconfig['url'])
+
+ +

This way I only register once, and can run my job multiple times. The successful results of my algorithm can be seen below:

+ +

Input:

+ +
Enter a number with less than 7 digits.1111
+Enter another number with less than 4 digits.11
+
+ +

Output:

+ +
{'10 101111': 459, '10 001111': 565}
+10
+001111
+
+{'01 000010': 15, '01 101110': 130, '01 011110': 197, '01 010110': 38, '01 110110': 
+78, '01 000110': 58, '01 110010': 14, '01 100010': 50, '01 010010': 5, '01 100110': 
+71, '01 111010': 21, '01 101010': 30, '01 001010': 19, '01 011010': 14, '01 111110': 
+169, '01 001110': 115}
+01
+
+{'00 101101': 664, '00 110001': 1, '00 101001': 128, '00 001001': 7, '00 111001': 18, 
+'00 011101': 8, '00 001101': 163, '00 100001': 4, '00 111101': 31}
+00
+101101
+
+ +

I hope this helps anyone facing similar issues.

+",4412,,,,,9/25/2018 14:21,,,,1,,,,CC BY-SA 4.0 +4291,2,,4014,9/25/2018 15:37,,4,,"

Just a small complement to @gIS's excellent answer: I know of several people (including myself) interested in the public verification aspect. As far as I know, all attempts have failed, hence the lack of literature on the subject: as soon as one can prove the boson sampler acted correctly, one is in a regime where the boson sampler can be efficiently simulated classically.

+ +

Note that such a verification is not totally hopeless, as it exists in the related $IQP$ non-universal computation model introduced by Shepherd and Bremner in (arXiv:0809.0847 / Proc. R. Soc. A 465, 1413). +$IQP$ stands for Instantaneous Quantum computing in Polynomial time. In this model, one only applies commuting gates, but they do not commute with the computational basis. Like boson sampling, this leads to quantum superiority through a model widely thought to be less powerful than BQP. In this model, there are ways to embed computations with known results (the paper speaks about matroids, but I honestly do not understand this part) inside larger computations showing quantum advantage, and to use them to verify the computation. Attempts have been made to hide submatrices with known determinants inside larger matrices for boson sampling, but, as far as I know, every submatrix family tried for this role has some telltale sign (usually coefficients that are too large) allowing it to be detected in $P$, thus defeating its very purpose.

+",1782,,,,,9/25/2018 15:37,,,,1,,,,CC BY-SA 4.0 +4292,1,,,9/25/2018 23:11,,4,200,"

I am trying to develop a feel for quantum computing at a basic level. I would very much appreciate it if someone would take a look at the statement below and fix it, since I assume it needs fixing.

+ +

“Problems to be solved by a quantum computer can be programmed by creating a set of qubit input registers and connecting them to a set of output registers through an assemblage of quantum logic gates that define the problem to be solved or computation to be done. When the computation is started the qubits traverse the logic gates according to the laws of quantum mechanics, not giving the same result measurement every time, but after a sufficient number of cases have been run and measured (no trivial matter) the answer is contained in the measurements. That method, connecting the input bits to the output bits so as to provide the solution, is suitable only for trivial problems. In practice, a considerable amount of classical programming must be done to set the input qubits in such a way that the quantum machine that is configured as required for the problem can process the qubits using their quantum mechanical properties. A typical problem requires that considerable classical manipulation of the output is required also.”

+",4665,,,,,9/26/2018 9:28,Quantum computing concepts,,2,0,,,,CC BY-SA 4.0 +4293,2,,4292,9/26/2018 0:03,,4,,"
+

Problems to be solved by a quantum computer can be programmed by creating a set of qubit input registers and connecting them to a set of output registers through an assemblage of quantum logic gates that define the problem to be solved or computation to be done.

+
+ +

There are always exactly as many input registers (usually just called qubits) as output registers, because quantum computers are reversible computers.

+ +
+

When the computation is started the qubits traverse the logic gates according to the laws of quantum mechanics, not giving the same result measurement every time, but after a sufficient number of cases have been run and measured (no trivial matter) the answer is contained in the measurements.

+
+ +

The qubits traversing the logic gates ""according to the laws of quantum mechanics"" is sort of an odd thing to say; I mean pedantically, everything everywhere happens according to the laws of quantum mechanics, but here it might be better to just say something like ""the qubits are sent through the circuit and the logic gates are applied to them"".

+ +

It isn't necessarily the case that quantum algorithms must be run more than once. Deterministic quantum algorithms exist which give the correct answer every time, assuming no errors.

+ +
+

That method, connecting the input bits to the output bits so as to provide the solution, is suitable only for trivial problems. In practice, a considerable amount of classical programming must be done to set the input qubits in such a way that the quantum machine that is configured as required for the problem can process the qubits using their quantum mechanical properties. A typical problem requires that considerable classical manipulation of the output is required also.

+
+ +

I don't think this is true.

+ +

You might be interested in a quantum computing video I've made aimed at computer scientists & software engineers (link) which will answer many of your questions as to the nature of quantum computation.

+",4153,,,,,9/26/2018 0:03,,,,3,,,,CC BY-SA 4.0 +4294,2,,4289,9/26/2018 7:50,,4,,"

Collective measurements are normal measurements. You just need to be clear on the setting under which they are operating. I haven't delved deeply into the specific paper you mention (so it's always possible they make marginally different assumptions), but I expect it goes like this:

+ +
    +
  • You are looking at using many copies of the same channel.

  • +
  • Encoding will, generically, be an entangled state across the inputs for a quantum-quantum channel. In this case of a classical-quantum channel, the inputs are classical bits, so the inputs can be correlated but not entangled.

  • +
  • Decoding will, generically, involve measurement of all outputs of the channel simultaneously, in an entangled basis.

  • +
+ +

It is these measurements across multiple outputs in an entangled basis that are referred to as collective measurements, in contrast to single-system measurements on the outputs of individual channels. Compared with measurements on the individual channel outputs, collective measurements first involve an entangling unitary between all the outputs.

+ +

Now, I said ""generically"" in the sense that this is the most general case that you should consider. One might hope that the optimum measurement might be simpler than that, e.g. measurements performed on individual channel outputs. Presumably one of the points this paper is making is that this is not the case in their specific setting.

+",1837,,1837,,9/26/2018 8:13,9/26/2018 8:13,,,,4,,,,CC BY-SA 4.0 +4295,2,,4292,9/26/2018 9:28,,3,,"
+

Problems to be solved by a quantum computer can be programmed by + creating a set of qubit input registers and connecting them to a set + of output registers through an assemblage of quantum logic gates that + define the problem to be solved or computation to be done.

+
+ +

This is certainly one way to do it, based on the quantum circuit model. There are various alternatives.

+ +
+

When the computation is started the qubits traverse the logic gates + according to the laws of quantum mechanics, not giving the same result + measurement every time, but after a sufficient number of cases have + been run and measured (no trivial matter) the answer is contained in + the measurements

+
+ +

Depending on the physical implementation, the qubits themselves may not actually move. They might, or they might sit in the same place and have the quantum gates applied to them.

+ +

If you measured the qubits in the middle of a computation, they would give different (intermediate) answers every time. But, you must not measure qubits in the middle (unless the algorithm specifically tells you to), as that destroys the quantumness of the computation.

+ +
+

but after a sufficient number of cases have been run and measured (no trivial matter) the answer is contained in the measurements

+
+ +

This is not usually the case. The key to designing good quantum algorithms is that, at the end of the computation, you get the right answer with high probability. In examples such as Grover's algorithm, the probability of error is vanishingly small. So, in practice, you only have to run the algorithm a small number of times, often only once.

+ +
+

In practice, a considerable amount of classical programming must be + done to set the input qubits in such a way that the quantum machine + that is configured as required for the problem can process the qubits + using their quantum mechanical properties. A typical problem requires + that considerable classical manipulation of the output is required + also.

+
+ +

You could think about it like that, and that is certainly the way that Shor's algorithm is typically described, for example, using some classical post-processing such as the continued fractions algorithm. Perhaps a better way of thinking about it is that the entire problem could be solved on a quantum computer, specified by a single, unified, algorithm. However, because quantum computers are trickier to build and run than classical computers, if some parts of the algorithm can be performed on a classical computer, it's easier to do so, and we only run the bit that needs quantum processing on the quantum computer.

+",1837,,,,,9/26/2018 9:28,,,,3,,,,CC BY-SA 4.0 +4296,2,,4278,9/26/2018 14:07,,1,,"

Just install it with the pip package manager: +pip3 install qiskit +You can find pip3.exe under the Python installation directory.

+",4206,,,,,9/26/2018 14:07,,,,0,,,,CC BY-SA 4.0 +4298,1,,,9/26/2018 14:35,,5,396,"

I have an algorithm that uses QRAM. After accessing the given QRAM, which stores M d-dimensional classical vectors, the state of the index register and the data register become entangled. But for now I don't have the QRAM black box, so I just initialize the state of both systems at once by providing an array of amplitudes to the registers, using the initialize function from the Qiskit module. +My question is: I have assigned the value 0 to the basis states that do not appear. Is that OK?

+",4206,,1782,,9/27/2018 22:42,9/27/2018 22:42,Entanglement state preparation by using amplitude values,,1,0,,,,CC BY-SA 4.0 +4299,1,4300,,9/26/2018 19:09,,8,161,"

Say you have $2$ qubits, namely $q_1, q_2$. What's the right language for saying apply CNOT on $q_1$ and $q_2$ where $q_1$ is the control qubit and $q_2$ is the target?

+

For instance, can I say "apply CNOT on $q_2$ controlled by $q_1$"? What's the standard way of saying that?

+",1589,,2927,,1/23/2021 0:01,1/23/2021 0:01,"How to say ""apply CNOT on qubit 1 controlled by qubit 2""?",,1,1,,,,CC BY-SA 4.0 +4300,2,,4299,9/26/2018 22:18,,5,,"

The most common convention is to refer to qbits by the index of their significance, with the least-significant qbit having index $0$. This is cribbed from binary, where the significance index is the same as the exponent in the sum of powers of two:

+ +

$1011=1 \cdot 2^3 + 0 \cdot 2^2 + 1 \cdot 2^1 + 1 \cdot 2^0$

+ +

So for the system $|abc\rangle = a \otimes b \otimes c$, you'd say $a$ is qbit $2$, $b$ qbit $1$, and $c$ qbit $0$.

+ +

CNOT is usually denoted $C_{c,t}$ (or $CX_{c,t}$ or even $CNOT_{c,t}$) where $c$ is the index of the control qbit and $t$ is the index of the target qbit. So a CNOT gate with the most-signifigant qbit of a three-qbit system as control and least-significant qbit as target is denoted $C_{2,0}$, or just $C_{20}$ when there are ten or fewer qbits so the comma is unnecessary. Applying this operator to your system is written as $C_{20}|abc\rangle$.

+ +
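For what it's worth, here is a short NumPy helper (my own, not from the book) that builds $C_{c,t}$ under this significance-based indexing convention:

```python
import numpy as np

def cnot(n, control, target):
    """Matrix for CNOT on n qbits, with indices counted from the
    least-significant qbit (qbit 0), as in the convention above."""
    dim = 2 ** n
    M = np.zeros((dim, dim))
    for i in range(dim):
        # Flip the target bit iff the control bit of |i> is set.
        j = i ^ (1 << target) if (i >> control) & 1 else i
        M[j, i] = 1
    return M

C20 = cnot(3, 2, 0)
state = np.zeros(8)
state[0b100] = 1                 # |abc> = |100>, so a = qbit 2 is set
print(np.argmax(C20 @ state))    # 5, i.e. |101>
```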

This convention is outlined in detail in section 1.4, pages 10-11 of Quantum Computer Science: An Introduction by N. David Mermin.

+ +

For plain-language phrasing, something like ""apply CNOT with qbit $x$ as control, $y$ as target"" or even just ""apply CNOT-$x$-$y$"" (if spoken) works fine.

+",4153,,4153,,9/26/2018 22:34,9/26/2018 22:34,,,,0,,,,CC BY-SA 4.0 +4301,1,,,9/27/2018 8:13,,5,249,"

In a previous question I asked how stationary bits could be processed by logic gates. I casually mentioned that I could visualize qubits traversing stationary logic gates, and @DaftWullie said “It might help if you explain how you're visualising the flying qubit”, and I said to myself “Oops, I probably don’t have that right at all!”. I simply visualize Bloch spheres moving in some constrained way from gate to gate, changing their vector angle according to the logic of the gate, with communication between the gates as programmed (like IBM Q makes it appear). If you can correct that image, maybe it would help me understand how stationary qubits (with moving gates?) work as well.

+",4665,,,,,9/27/2018 11:04,Flying qubits compared with stationary qubits,,1,0,,,,CC BY-SA 4.0 +4302,2,,4301,9/27/2018 11:04,,4,,"

Bloch spheres are a reasonable visualisation of individual qubits, so long as you don't fall in to the trap of thinking that's what they actually, physically look like. It's just a convenient mathematical mapping from a parametrisation of the state into a pretty picture. And it doesn't work so well once entanglement between multiple qubits comes into play.

+ +

While you are thinking about Bloch spheres, one can then think about what a single qubit unitary is and does. The unitary basically defines an axis through the sphere, and an angle of rotation. + +For any given state, and a fixed axis, there's basically an orbit that the state could map to (because unitaries preserve inner product, so the angle between the state and the axis must remain fixed), and the angle is just how far around that orbit the state is moved.

+ +
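To make the 'axis plus angle' picture concrete, here is a rough NumPy sketch (my own; note that $(\hat n, \theta)$ is only defined up to the simultaneous flip $(-\hat n, 2\pi - \theta)$) that extracts them from a single-qubit unitary:

```python
import numpy as np

def axis_angle(U):
    """Extract the Bloch-sphere rotation axis and angle of a 2x2 unitary,
    up to global phase."""
    U = np.asarray(U, dtype=complex)
    U = U / np.sqrt(np.linalg.det(U))              # remove the global phase
    # Now U = cos(theta/2) I - i sin(theta/2) (n . sigma):
    theta = 2 * np.arccos(np.clip(U.trace().real / 2, -1, 1))
    s = np.sin(theta / 2)
    if np.isclose(s, 0):
        return np.array([0.0, 0.0, 1.0]), 0.0      # identity: axis arbitrary
    n = np.array([-U[0, 1].imag, -U[0, 1].real, -U[0, 0].imag]) / s
    return n, theta

# A quarter-turn about the x-axis:
c, s = np.cos(np.pi / 4), np.sin(np.pi / 4)
Rx = np.array([[c, -1j * s], [-1j * s, c]])
n, theta = axis_angle(Rx)
print(n, theta)   # axis (1, 0, 0), angle pi/2
```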

Now, how might you think about physically implementing this. You need some sort of apparatus that fixes what the direction of the axis is. This is usually some sort of electro-magnetic field. The angle of rotation is essentially determined by how long the qubit is in that field. For flying qubits, you can therefore think about setting up a region of a particular size in which this field is active. If you know how fast your qubit is travelling, you know how long it's in the field for, and that determines its angle of rotation (up to caveats about Heisenberg uncertainty principle). But equally, you could just have the qubit sitting in one place, and simply switch on the field for the right length of time. Indeed, these two solutions are practically the same. Just think about your flying qubit situation from the reference frame of the qubit. In that frame, the qubit is stationary, and it is the field that moves towards it.

+ +

In terms of something slightly more concrete and physical for the stationary qubits, the direction of the axis of the unitary can often be determined by a couple of lasers. So, you have a couple of lasers that can be shined at each qubit, and you just switch them on and off at the right moments in time, with the correct relative strengths.

+",1837,,,,,9/27/2018 11:04,,,,2,,,,CC BY-SA 4.0 +4303,2,,4298,9/27/2018 12:16,,2,,"

If you are talking about the qRAM for encoding real numbers in the amplitudes of a quantum system, yes. Basically, you assume without loss of generality that your problem/vector has a size that is a power of 2; if your problem is not exactly of this type, you just pad with 0s. When you apply a quantum algorithm, you just need to be sure you are doing the computation you want.
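As a tiny illustration of the padding (illustrative values only):

```python
import numpy as np

x = np.array([3.0, 1.0, 2.0])                  # length 3: not a power of two
dim = 1 << int(np.ceil(np.log2(len(x))))       # next power of two: 4
padded = np.zeros(dim)
padded[:len(x)] = x
amplitudes = padded / np.linalg.norm(padded)   # a valid state; last amplitude is 0
print(amplitudes)
```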

+",4127,,,,,9/27/2018 12:16,,,,7,,,,CC BY-SA 4.0 +4304,1,,,9/28/2018 1:13,,5,162,"

I have an HPC (High Performance Computing) cluster with NVIDIA GPU and Xeon CPU partitions.

+ +

Is this simulator list complete?

+ +

Which do you recommend using on an HPC cluster?

+ +

Will it increase performance (the cluster has 512 GB of memory)?

+",914,,914,,9/29/2018 14:34,9/29/2018 14:34,Which QC platform is better to use on HPC cluster?,,1,0,,,,CC BY-SA 4.0 +4305,2,,4304,9/28/2018 2:09,,2,,"

You will find most of them on this website (not the commercial ones, I think). +The choice is all yours, and you have to get to know the tools depending on what kind of simulation you want to do: for example, whether you are going to work on universal quantum computing, write algorithms from scratch, or use already available implementations.

+ +

Quantum++ seems to be a nice library that uses OpenMP, so it could be well suited to an HPC cluster. With GPUs, look for something integrating OpenCL, like Qrack or QCGPU. +For toolkits, Forest, Qiskit and the Microsoft Quantum Development Kit based on Q# would be good candidates to look at.

+",4127,,,,,9/28/2018 2:09,,,,2,,,,CC BY-SA 4.0 +4306,1,4310,,9/28/2018 14:02,,1,990,"

I've only recently started using density matrices in my work, and I am confused about whether the following code gives me the right matrix:

+ +
def Hamiltonian(alpha,h):
+
+    Sx = np.array([[0,1],[1,0]])
+    Sy = np.array([[0,-1j],[1j,0]])
+    Sz = np.array([[1,0],[0,-1]])
+    I  = np.array([[1,0],[0,1]])
+
+    H =  (alpha*np.kron(np.kron(Sx,Sx),I)) 
+    H =+ (alpha*np.kron(np.kron(Sy,Sy),I)) 
+    H =+ (alpha*np.kron(np.kron(I,Sx),Sx)) 
+    H =+ (alpha*np.kron(np.kron(I,Sy),Sy)) 
+    H =+ (h*np.kron(np.kron(I,Sz),I))
+
+    return H
+
+ +

So the above gives me my Hamiltonian Function, where alpha is a real number and h is a magnetization parameter acting on one of my qubits.

+ +

I have tried the following:

+ +
H = Hamiltonian(1,0.5)
+print(H)
+
+ +

$$\begin{bmatrix} +0.5&0&0&0&0&0&0&0 \\ +0&0.5&0&0&0&0&0&0 \\ +0&0&-0.5&0&0&0&0&0 \\ +0&0&0&-0.5&0&0&0&0 \\ +0&0&0&0&0.5&0&0&0 \\ +0&0&0&0&0&0.5&0&0 \\ +0&0&0&0&0&0&-0.5&0 \\ +0&0&0&0&0&0&0&-0.5 +\end{bmatrix}$$

+ +

Why is it diagonal?

+",4681,,26,,10-07-2018 10:32,10-07-2018 10:32,Why is this Hamiltonian matrix diagonal?,,1,7,,,,CC BY-SA 4.0 +4307,1,4309,,9/28/2018 16:39,,6,558,"

I'm working through Scott Aaronson's Quantum Information Science problem sets, and I'm having trouble with a specific problem in ps5 (PDF). Specifically the following problem:

+
+

A “qutrit” has the form $a|0\rangle+b|1\rangle+c|2\rangle$, where $|a|^2+|b|^2+|c|^2=1$. Suppose Alice and Bob share the entangled state $(|00\rangle+|11\rangle+|22\rangle)/\sqrt 3$. Then consider the following protocol for teleporting a qutrit $|\psi〉=a|0\rangle+b|1\rangle+c|2\rangle$ from Alice to Bob: first Alice applies a CSUM gate from $|\psi〉$ onto her half of the entangled pair, where

+

$$\operatorname{CSUM}(|x\rangle\otimes|y\rangle) =|x\rangle\otimes|y+x \bmod 3\rangle$$

+

Next, Alice applies the unitary matrix $F$ to the $|\psi\rangle$ register, where

+

$$F=\frac{1}{\sqrt{3}}\begin{bmatrix}1&1&1\\1&\omega&\omega^2\\1&\omega^2&\omega^3\end{bmatrix}$$

+

and $\omega=e^{2i\pi/3}$ so that $\omega^3= 1$. She then measures both of her qutrits in the $\{|0\rangle,|1\rangle,|2\rangle\}$ basis, and sends the results to Bob over a classical channel. Show that Bob can recover a local copy of $|\psi\rangle$ given these measurement results.

+

This quantum circuit summarizes the protocol:

+

+

Here the double-lines represent classical ‘trits’ being sent from Alice to Bob. Depending on the value of 0, 1 or 2 Bob can apply a ? gate 0, 1, or 2 times. Prove that $|\psi〉=|\psi_\text{out}\rangle$ for appropriately chosen ? gates for all possible measurement results. Hint: You could explicitly work out all 9 possible cases, but you could also save time by noticing a general pattern that lets you handle all the cases in a unified way.

+
+
+

Here's what I've done: after applying the CSUM gate to the joint state of the three qutrits, Alice and Bob share the state: +\begin{align*} +& \frac{a}{\sqrt{3}}(|000 \rangle + |011 \rangle + |022 \rangle) \\ ++ & \frac{b}{\sqrt{3}}(|110 \rangle + |121 \rangle + |102 \rangle) \\ ++ & \frac{c}{\sqrt{3}}(|220 \rangle + |201 \rangle + |212 \rangle) +\end{align*} +After Alice applies $F$ to the first qutrit in the shared state, they're left with: +\begin{align*} +& \frac{|00 \rangle}{3}(a|0 \rangle + c|1 \rangle + b|2 \rangle)\\ + + & \frac{|01 \rangle}{3}(b|0 \rangle + a|1 \rangle + c|2 \rangle)\\ + + & \frac{|02 \rangle}{3}(c|0 \rangle + b|1 \rangle + a|2 \rangle) \\ + + & \frac{|10 \rangle}{3}(a|0 \rangle + w^2c|1 \rangle + wb|2 \rangle)\\ + + & \frac{|11 \rangle}{3}(wb|0 \rangle + a|1 \rangle + w^2c|2 \rangle)\\ + + & \frac{|12 \rangle}{3}(w^2c|0 \rangle + wb|1 \rangle + a|2 \rangle)\\ + + & \frac{|20 \rangle}{3}(a|0 \rangle + wc|1 \rangle + w^2b|2 \rangle) \\ + + & \frac{|21 \rangle}{3}(w^2b|0 \rangle + a|1 \rangle + wc|2 \rangle) \\ + + & \frac{|22 \rangle}{3}(wc|0 \rangle + w^2b|1 \rangle + a|2 \rangle) \\ +\end{align*}

+

So after Alice measures her qutrits, whatever is inside the parentheses is the state that Bob holds. However, I don't see what operations can be used multiple times to "fix" the output.

+

It seems to me that Alice could communicate to Bob with one trit which two coefficients need to be transposed, and use one trit to tell Bob how to fix the remaining $w$'s. That doesn't seem to fit the desired protocol though, making me doubt the computations that I have performed above. If anyone could help me out (or point out a better approach), it would be appreciated. Thanks!
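(In case it helps anyone checking the algebra above, here is a small NumPy sketch I used to verify the branch states numerically; the F below has $\omega^4 = \omega$ in the bottom-right entry, i.e. $F_{m,x} = \omega^{mx}/\sqrt 3$, so that it is the unitary qutrit Fourier transform.)

```python
import numpy as np

w = np.exp(2j * np.pi / 3)
F = np.array([[1, 1,    1   ],
              [1, w,    w**2],
              [1, w**2, w**4]]) / np.sqrt(3)

# CSUM |x, y> -> |x, (y + x) mod 3>:
CSUM = np.zeros((9, 9))
for x in range(3):
    for y in range(3):
        CSUM[3 * x + (y + x) % 3, 3 * x + y] = 1

a, b, c = 0.1, 0.5, np.sqrt(0.74)   # arbitrary normalised amplitudes
psi = np.array([a, b, c])
epr = np.zeros(9)
epr[[0, 4, 8]] = 1 / np.sqrt(3)     # (|00> + |11> + |22>)/sqrt(3)

state = np.kron(psi, epr)                 # qutrit order: psi, Alice, Bob
state = np.kron(CSUM, np.eye(3)) @ state  # CSUM from psi onto Alice's half
state = np.kron(F, np.eye(9)) @ state     # F on the psi register

# Bob's unnormalised state when Alice's outcome is |1>|2>:
bob = state.reshape(3, 3, 3)[1, 2, :]
print(np.round(3 * bob, 3))               # proportional to (w**2*c, w*b, a)
```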

+",4707,,-1,,6/18/2020 8:31,11/13/2019 18:37,Problem about qutrit teleportation protocol,,1,11,,,,CC BY-SA 4.0 +4309,2,,4307,9/29/2018 16:50,,4,,"

I tested this out in Quirk by embedding each qutrit into two qubits, and I get a similar result to you, where in addition to the cyclic shift fixup and the phasing operation you need to transpose states 1 and 2. Presumably there's some simple change to the circuit that fixes this, such as picking a different F, but I didn't check too hard to see if it was possible.

+ +

Here's the circuit (and in the simulator itself):

+ +

+ +

I've put displays throughout the circuit to have a better idea that it's working. In the bottom left area the blue rectangle with the three circles along its diagonal is showing that, yes, we're in the 00+11+22 state.

+ +

In the top middle I'm preparing some state to teleport. It doesn't really matter what it is, so I used gates that keep changing what they're doing. The important bit is that the blue rectangle here looks the same as the one in the bottom right, indicating the state moved.

+ +

In the bottom right you can see the fixups being applied. The cyclic shift (-A mod 3), the mystery transpose (implemented by the swap), and then the phasing operations.

+ +

I didn't include the measurements, but if you add some before the controls in the top right and just before the Input A of the -A mod R operation the circuit will behave identically.

+",119,,,,,9/29/2018 16:50,,,,1,,,,CC BY-SA 4.0 +4310,2,,4306,9/30/2018 0:02,,2,,"

Your matrix is diagonal because the + signs are on the wrong sides of the equals signs. The code below will give you the correct matrix:

+ +
alpha=1;h=0.5;
+
+x=[0  1;  1 0 ];
+y=[0 -1i; 1i 0];
+z=[1  0;  0 -1];
+I=eye(2);
+
+H = alpha*kron(kron(x,x),I)+...
+    alpha*kron(kron(y,y),I)+...
+    alpha*kron(kron(I,x),x)+...
+    alpha*kron(kron(I,y),y)+...
+    h*kron(kron(I,z),I)
+
+ +

Here is the result:

+ +

+ +

The eigenvectors and eigenvalues are much more complicated than what you have.

+ +

If you don't have Octave, these commands will install it, then open it:

+ +
sudo apt-get install octave
+octave
+
+",2293,,2293,,9/30/2018 10:17,9/30/2018 10:17,,,,5,,,,CC BY-SA 4.0 +4311,1,4312,,9/30/2018 11:19,,1,2873,"

I have a Hamiltonian and I want to know the corresponding density matrix. The matrix I'm interested in is the one in this question.

+",4681,,26,,10-07-2018 10:31,10-07-2018 10:31,How do I construct a Density Matrix corresponding to a Hamiltonian?,,2,0,,,,CC BY-SA 4.0 +4312,2,,4311,9/30/2018 11:44,,4,,"

There are many different density matrices that can correspond to a given Hamiltonian.

+ +
+ +

For the 8x8 matrix in your question, there are 8 different ""eigenstate"" density matrices that can be obtained, one for each of the 8 eigenvectors. The density matrices are constructed by taking the outer product of the eigenvectors. For the $i^{\rm{th}}$ eigenstate of the Hamiltonian, the density matrix $\rho_i$ is:

+ +

$ +\rho_i = |\psi_i\rangle \langle \psi_i| +$.

+ +
+ +

A system can also be in a ""pure"" superposition of eigenstates, for example:

+ +

$|\psi \rangle = \frac{1}{\sqrt{2}}|\psi_1\rangle + \frac{1}{\sqrt{2}}|\psi_2\rangle $.

+ +

Then the density matrix is once again made by doing the outer product of the pure wave function $|\psi\rangle$ with itself.

+ +
+ +

A system can also be in a ""mixed"" state, which means it's a linear combination of ""pure"" states.

+ +

In this case you would construct the density matrix like this (for example):

+ +

$\rho = 0.5 \rho_1 + 0.5\rho_2$,

+ +

which describes a state that is a 50% mixture of $\rho_1$ and a 50% mixture of $\rho_2$.
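As a rough numerical sketch of these constructions (numpy; the 2x2 Hamiltonian below is an arbitrary stand-in, not the 8x8 one from the question):

```python
import numpy as np

# Hypothetical 2x2 Hamiltonian as a stand-in (not the 8x8 one from the question)
H = np.array([[1.0, 0.5], [0.5, -1.0]])
evals, evecs = np.linalg.eigh(H)
psi1, psi2 = evecs[:, 0], evecs[:, 1]

# Eigenstate density matrices: outer product of each eigenvector with itself
rho1 = np.outer(psi1, psi1.conj())
rho2 = np.outer(psi2, psi2.conj())

# A pure superposition of eigenstates, and its density matrix
psi = (psi1 + psi2) / np.sqrt(2)
rho_pure = np.outer(psi, psi.conj())

# A 50/50 mixed state of the two eigenstates
rho_mixed = 0.5 * rho1 + 0.5 * rho2

# Purity Tr(rho^2): 1 for pure states, below 1 for mixed states
purity_pure = np.trace(rho_pure @ rho_pure).real
purity_mixed = np.trace(rho_mixed @ rho_mixed).real
```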

+",2293,,2293,,9/30/2018 20:03,9/30/2018 20:03,,,,4,,,,CC BY-SA 4.0 +4313,1,4319,,9/30/2018 13:08,,5,211,"

I am working with the set $\{\mathrm{CNOT}, \mathrm{H}, \mathrm{P}(\theta)\}$

+ +

where $\mathrm{H}$ is the Hadamard gate, and $\mathrm{P}(\theta)$ is the phase gate with angle $\theta$.

+ +

I want to build other gates with these gates, like $R_z(\theta)$, Control-$R_z(\theta)$, or Control-$P(\theta)$

+ +

How can I do this?

+",4725,,2293,,9/30/2018 20:29,10-01-2018 07:08,"How can I decompose a gate into $\{\mathrm{CNOT}, \mathrm{H}, \mathrm{P}(\theta)\}$?",,1,2,,,,CC BY-SA 4.0 +4314,1,,,9/30/2018 23:33,,7,281,"

D-Wave has a new prototype annealer that uses a Hamiltonian which, if there were enough qubits and sufficient control, would be able to simulate any universal circuit-based quantum computer with at most polynomial overhead. It was presented at the AQC 2018 conference at NASA:

+ +

+ +

The Hamiltonian on the slide contains both a ZZ term and a YY term. +Using the YY gadget of Section VI of this paper, the Hamiltonian becomes the classic ZZ + XX Hamiltonian, which is well known to be universal.

+ +

One issue with this design though, is that the YY term is part of the driver, not part of the problem Hamiltonian. Therefore the coefficient of the YY term has to change simultaneously with the linear X term.

+ +

Since I am not an expert in superconducting circuits, I wonder why the YY term, which makes it possible for the machine to do universal quantum computation, cannot change with the problem Hamiltonian, instead of the driving Hamiltonian?

+",2293,,2293,,10-10-2018 21:48,10-10-2018 21:48,"In D-Wave's universal quantum computer, why does the YY term have to be driven along with the linear X term?",,0,0,,,,CC BY-SA 4.0 +4315,2,,1674,9/30/2018 23:42,,2,,"

In the accepted answer, it is said that XX couplers are ""necessary"".
+However, YY couplers would also do the job. This is because of the YY gadget explained in section VI of this paper.

+ +

Actually, even the original paper given in the accepted answer says that XZ would also be good enough (not just XX). For that reason, YZ should also be good enough, although no one has explicitly constructed the gadget yet.

+ +

Out of all four of these options (XX, YY, XZ, YZ) for added couplers that would make D-Wave's machines universal, one of them has already been implemented in hardware by D-Wave: the YY coupler.

+ +

It was presented at the AQC conference in 2018:

+ +

+ +

However there are some restrictions on the control of these YY terms, and the physical reason for this is the subject of my question here: In D-Wave's universal quantum computer, why does the YY term have to be driven along with the linear X term?

+",2293,,,,,9/30/2018 23:42,,,,0,,,,CC BY-SA 4.0 +4316,1,4317,,10-01-2018 01:00,,2,597,"
+

The standard way to implement a reversible XOR gate is by means of a controlled-NOT gate or CNOT; this is the ""standard quantum XOR operation"". Physics.Stackexchange

+
+ +

Is there a ""standard quantum XNOR operation""?

+ +
+

The XNOR gate (sometimes ENOR, EXNOR or NXOR and pronounced as Exclusive NOR) is a digital logic gate whose function is the logical complement of the exclusive OR (XOR) gate. Wikipedia

+
+ +

Alternatively, what is the logical complement of the CNOT gate?

+",2645,,2645,,10-01-2018 01:21,10-01-2018 01:21,What quantum gate is XNOR equivalent to?,,1,0,,,,CC BY-SA 4.0 +4317,2,,4316,10-01-2018 01:14,,4,,"

There is no ""standard"" method to implement XNOR, but it can be logically obtained by attaching a NOT gate (often called an X gate in quantum computing) to a logical XOR (which you know is implemented using CNOT). The X gate is applied to the target qubit of the CNOT.

+ +

To answer your question more directly, there is no standard ""quantum gate"" that is equivalent to XNOR. The best way to implement XNOR in a quantum circuit is with a CNOT and an X on the second qubit.

+ +

The reason why {CNOT,X} can give you a logical XNOR was explained in this answer to your own question 3.5 months ago.

+",2293,,2293,,10-01-2018 01:20,10-01-2018 01:20,,,,4,,,,CC BY-SA 4.0 +4318,2,,4311,10-01-2018 06:51,,9,,"

Your question remains very unclear as to what it actually is that you want to calculate.

+ +

There is no direct correspondence between a system Hamiltonian and the quantum state of the system. No matter what the Hamiltonian, any quantum state is a valid state of the system.

+ +

Where a Hamiltonian comes in useful is, if you know the state at some time (say, $t=0$), you can find out what the state is at any later time via the Schroedinger equation +$$ +i\frac{\partial |\psi\rangle}{\partial t}=H(t)|\psi\rangle. +$$ +If $H$ does not change in time, you get +$$ +|\psi(t)\rangle=e^{-iHt}|\psi(0)\rangle +$$ +or, if your initial state is a mixed state, +$$ +\rho(t)=e^{-iHt}\rho(0)e^{iHt}. +$$

+ +

Now, there are two reasonable things that might be relevant in terms of a state derived from a Hamiltonian - the thermal state and the ground state (which is the thermal state at 0 temperature). At temperature $T$, the thermal state is +$$ +\rho_{\text{thermal}}=\frac{e^{-H/(k_BT)}}{\text{Tr}(e^{-H/(k_BT)})}, +$$ +while the ground state is simply the eigenstate of $H$ with the smallest energy. You can (crudely) think of the thermal state as the best guess about what the state would be if you cooled it to a temperature $T$.
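As a small numerical sketch (numpy; the Hamiltonian and temperature are arbitrary stand-ins), the thermal state can be built from the eigendecomposition of $H$, and it tends to the ground-state projector as the temperature goes to zero:

```python
import numpy as np

# Arbitrary stand-in Hamiltonian (Hermitian); beta = 1/(k_B T)
H = np.array([[1.0, 0.3], [0.3, -1.0]])
evals, V = np.linalg.eigh(H)

def thermal_state(beta):
    # exp(-beta H) via the eigendecomposition, normalized by Tr(exp(-beta H))
    exp_mbH = V @ np.diag(np.exp(-beta * evals)) @ V.conj().T
    return exp_mbH / np.trace(exp_mbH)

rho_thermal = thermal_state(2.0)

# Ground state = eigenvector with the smallest energy
ground = V[:, np.argmin(evals)]
rho_ground = np.outer(ground, ground.conj())

# Quantities mentioned later in this answer: purity and expected internal energy
purity = np.trace(rho_thermal @ rho_thermal).real
energy = np.trace(rho_thermal @ H).real
```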

+ +

In one of the comments on another answer, you say

+ +
+

I need it to get the purity of my qubit states and the internal energy + of the system vs. the magnetisation factor h

+
+ +

Purity has nothing to do with the Hamiltonian. If you know the density matrix $\rho$ of your system, purity is just $\text{Tr}(\rho^2)$. The Hamiltonian will help you with the expected internal energy: $\text{Tr}(\rho H)$ but, again, the state has to be provided from elsewhere, not from the Hamiltonian.

+",1837,,1837,,10-01-2018 09:19,10-01-2018 09:19,,,,13,,,,CC BY-SA 4.0 +4319,2,,4313,10-01-2018 07:08,,4,,"

In your question, you don't define $P(\theta)$ or $R_z(\theta)$. I'm going to assume: +$$ +P(\theta)=\left(\begin{array}{cc} 1 & 0 \\ 0 & e^{i\theta} \end{array}\right)\qquad R_z(\theta)=\left(\begin{array}{cc} e^{-i\theta} & 0 \\ 0 & e^{i\theta} \end{array}\right). +$$ +In this case, you simply have that +$$ +R_z(\theta)=P(2\theta)e^{-i\theta}\equiv P(2\theta), +$$ +the point being that global phases are irrelevant. However, the difference is important when you look at the controlled-gates. Let's say we can create either controlled-$P$ or controlled-$R_z$. We can create the other via the identity

+ +

+ +

The extra $P(-\theta)$ is the gate that compensates for the extra phase.

+ +

To make controlled-$R_z$, the trick is to notice the identities $R_z(\theta_1)R_z(\theta_2)=R_z(\theta_1+\theta_2)$ and $XR_z(\theta)X=R_z(-\theta)$, where we will replace the Xs by controlled-not so that the change in rotation angle only happens if the control qubit is 1. Hence, we have
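Assuming the conventions above, these identities (and the global-phase relation between $R_z$ and $P$) are easy to verify numerically; a quick sanity check with numpy, not an implementation of the circuit:

```python
import numpy as np

def P(theta):
    # Phase gate diag(1, e^{i theta})
    return np.diag([1.0, np.exp(1j * theta)])

def Rz(theta):
    # Z rotation diag(e^{-i theta}, e^{i theta})
    return np.diag([np.exp(-1j * theta), np.exp(1j * theta)])

X = np.array([[0, 1], [1, 0]])

t1, t2 = 0.7, 1.3

# Rz(t) equals P(2t) up to the global phase e^{-i t}
check_phase = np.allclose(Rz(t1), np.exp(-1j * t1) * P(2 * t1))

# Rz(t1) Rz(t2) = Rz(t1 + t2)
check_add = np.allclose(Rz(t1) @ Rz(t2), Rz(t1 + t2))

# X Rz(t) X = Rz(-t)
check_conj = np.allclose(X @ Rz(t1) @ X, Rz(-t1))
```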

+ +

+",1837,,,,,10-01-2018 07:08,,,,0,,,,CC BY-SA 4.0 +4320,1,4321,,10-01-2018 11:50,,5,2226,"

$\newcommand{\q}[2]{\langle #1 | #2 \rangle}$ +I know from linear algebra that the inner product of two vectors is 0 if the vectors are orthogonal. I also know the inner product is positive if the vectors more or less point in the same direction and I know it's negative if the vectors more or less point in opposite directions.

+ +

This is the same inner product as in $\q{x}{y}$, right? Can you show an example of the inner product being useful in a quantum algorithm? The simplest the better. It doesn't have to be a useful algorithm.

+",1589,,55,,4/22/2021 23:07,4/22/2021 23:07,What's an use case of the inner product between two qubits in a quantum algorithm?,,3,0,,,,CC BY-SA 4.0 +4321,2,,4320,10-01-2018 12:30,,9,,"

The absolute value of the inner product between two (pure) states $\lvert\psi\rangle$ and $\lvert\phi\rangle$, $\lvert\langle\psi\rvert\phi\rangle\rvert$, can be used to quantify the distance between the two states, and is commonly referred to as fidelity (though the fidelity is often defined as the square of $\lvert\langle\psi\rvert\phi\rangle\rvert$).

+ +

If the current state is $\lvert\phi\rangle$, and $\lvert\psi\rangle$ is a possible outcome of a measurement, then $\lvert\langle\psi\rvert\phi\rangle\rvert^2$ is the probability of getting the outcome $\lvert\psi\rangle$. +More generally, $\lvert\langle\psi\rvert\phi\rangle\rvert^2$ can be thought of as a measure of indistinguishability of the two states. This quantity will thus be important every time one wants to figure out what is the current state of the system, as a high fidelity implies that a lot of measurements are necessary to tell the two states apart.

+ +

An example of a quantum algorithm computing this quantity is the C-SWAP test, which I think was introduced in (Buhrman et al. 2001). One usage of this algorithm that comes to mind is given in (Lloyd et al. 2013), where they use it as a subroutine for their supervised cluster assignment algorithm (see end of page 3).

+ +

More generally, the inner product between two states is their overlap, which is such a fundamental quantity in quantum mechanics that it is hard to say how it can not be ""useful"" for any kind of protocol.

+",55,,,,,10-01-2018 12:30,,,,0,,,,CC BY-SA 4.0 +4322,2,,4320,10-01-2018 13:37,,1,,"

If you want to estimate the inner product of two quantum states, use three quantum registers. The first stores a single ancilla qubit; the other two registers contain $n$ qubits each and store the two states separately. First apply a Hadamard gate to the ancilla. Then apply a controlled-SWAP gate, with the ancilla acting as the control and the two state registers as the targets. Finally apply a Hadamard to the ancilla again and measure it alone. If the probability of finding the ancilla in the ground state is 0.5, the two states are orthogonal; the closer this probability is to 1, the more similar the two states are.
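For reference, a minimal statevector simulation of this test (numpy; single-qubit registers for simplicity, and the ancilla should come out as $|0\rangle$ with probability $(1+|\langle a|b\rangle|^2)/2$):

```python
import numpy as np

def swap_test_p0(a, b):
    # Probability of measuring the ancilla in |0> in the C-SWAP test;
    # should equal (1 + |<a|b>|^2) / 2
    d = len(a)
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    I = np.eye(d * d)

    # SWAP on the two state registers: |i, j> -> |j, i>
    SWAP = np.zeros((d * d, d * d))
    for i in range(d):
        for j in range(d):
            SWAP[i * d + j, j * d + i] = 1
    # Controlled-SWAP, with the ancilla (most significant) as control
    CSWAP = np.block([[I, np.zeros_like(I)], [np.zeros_like(I), SWAP]])

    # |0>_anc |a> |b>, then H on ancilla, CSWAP, H on ancilla
    psi = np.kron(np.array([1.0, 0.0]), np.kron(a, b)).astype(complex)
    psi = np.kron(H, I) @ psi
    psi = CSWAP @ psi
    psi = np.kron(H, I) @ psi

    # Amplitudes with ancilla = 0 are the first d*d entries
    return float(np.sum(np.abs(psi[: d * d]) ** 2))

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)
ketp = np.array([1, 1], dtype=complex) / np.sqrt(2)
```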

+",4206,,4206,,10-01-2018 16:56,10-01-2018 16:56,,,,0,,,,CC BY-SA 4.0 +4323,2,,4320,10-01-2018 13:43,,1,,"

There is also an algorithm, denoted DistCalc in this article, which enables you to compute the Euclidean distance between two real vectors $a$ and $b$ based on the computation of the inner product between two quantum states created from $a$ and $b$.

+ +

Say your vectors $a$ and $b$ have dimension $N$, which can be assumed to be a power of 2 without loss of generality; then you can compute this distance with a complexity of $\mathcal{O}(\log N)$.
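Classically, the identity underlying this kind of distance estimation is $\lVert a-b\rVert^2 = \lVert a\rVert^2 + \lVert b\rVert^2 - 2\langle a,b\rangle$ for real vectors; a trivial sanity check with numpy and arbitrary example vectors:

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0, 4.0])
b = np.array([0.0, 1.0, -1.0, 2.0])

# Euclidean distance recovered from the norms and the inner product
dist_from_inner = np.sqrt(np.dot(a, a) + np.dot(b, b) - 2 * np.dot(a, b))
dist_direct = np.linalg.norm(a - b)
```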

+",4127,,45,,10-01-2018 16:54,10-01-2018 16:54,,,,2,,,,CC BY-SA 4.0 +4324,1,,,10-01-2018 20:56,,3,592,"

I am just getting started with Qiskit and IBM Quantum Experience, so please forgive my newbie question.

+ +

I have a IBM Quantum Experience account and I generated an API token.

+ +

I used the generated token in the following tiny Python3 program:

+ +
import qiskit
+QX_CONFIG = {
+  ""APItoken"": ""<my-token-here>"",
+  ""url"":""https://quantumexperience.ng.bluemix.net/api""}
+qiskit.register(QX_CONFIG['APItoken'], QX_CONFIG['url'])
+
+ +

When I run this program qiskit.register throws the following exception:

+ +
Exception has occurred: qiskit._qiskiterror.QISKitError
+""Couldn't instantiate provider! Error: Couldn't connect to IBMQuantumExperience server: error during login: HTTPSConnectionPool(host='quantumexperience.ng.bluemix.net', port=443): Max retries exceeded with url: /api/users/loginWithToken (Caused by SSLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:645)'),))""
+
+ +

The API endpoint is reachable:

+ +
$ curl https://quantumexperience.ng.bluemix.net/api
+
+{""error"":{""status"":400,""message"":""Generic error"",""code"":""GENERIC_ERROR""}}
+
+ +

Any suggestions on how to fix this issue?

+ +

PS:

+ +
api = IBMQuantumExperience(QX_API_TOKEN)
+
+ +

generates the same exception

+",4733,,26,,03-12-2019 09:21,03-12-2019 09:21,Qiskit Python program cannot connect to QE API,,0,2,,,,CC BY-SA 4.0 +4325,1,4327,,10-02-2018 04:49,,2,239,"

How can one show that measuring the second qubit of $\psi$ is the same as measuring the second qubit after applying $U \otimes I$ to $\psi$?

+ +

I know that $\psi = \Sigma_{ij} a_{ij}|i⟩|j⟩$, $(U \otimes I)|\psi⟩ = \Sigma_{ij}a_{ij}U|i⟩ \otimes |j⟩$, but I'm not sure what to do next

+",4728,,26,,12/23/2018 12:45,4/16/2020 13:03,"Let $\psi$ be a two-qubit state, $U \in \mathbb{C}^{2\times2}$ be a single-qubit unitary. Measure second qubit after applying U to the first qubit?",,1,1,,,,CC BY-SA 4.0 +4326,1,,,10-02-2018 06:15,,4,418,"

Let's assume we developed a hashcat-like program for a quantum computer. How many qubits would we need to find the correct hash (WPA, MD5, ...) for a 10-character password made from upper, lower & numeric characters (about 604,661,760,000,000,000 combinations)?

+",4182,,,,,10-02-2018 16:51,How many qubits does it take to break a 10 characters password?,,1,0,,,,CC BY-SA 4.0 +4327,2,,4325,10-02-2018 06:42,,4,,"

There are a couple of different technical tools that one could use. Perhaps the simplest is the following:

+ +

Let's describe the measurement that you want to perform on qubit 2 by a set of projectors +$$ +P_i=I\otimes \sigma_i, +$$ +where I've just written that to emphasise the structure that the measurement is on qubit 2. What is the probability, $p_i$, of getting result $i$? +$$ +p_i=\langle\psi|P_i|\psi\rangle. +$$ +If, instead, you'd applied $U\otimes I $ first, then you'd be calculating +\begin{align} +p_i'&=\langle\psi|U^\dagger\otimes I P_iU\otimes I |\psi\rangle \\ +&=\langle\psi|(U^\dagger U)\otimes\sigma_i|\psi\rangle \\ +&=\langle\psi| I \otimes\sigma_i|\psi\rangle \\ +&=\langle\psi|P_i|\psi\rangle \\ +&=p_i +\end{align} +Hence, the probabilities are unchanged.
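The same conclusion can be checked numerically with a random state and a random single-qubit unitary (a numpy sketch; the measurement here is in the computational basis of qubit 2):

```python
import numpy as np

rng = np.random.default_rng(0)

# A random two-qubit pure state |psi>, with index ordering |q1 q2>
psi = rng.normal(size=4) + 1j * rng.normal(size=4)
psi = psi / np.linalg.norm(psi)

# A random single-qubit unitary U (from the QR decomposition of a random matrix)
A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
U, _ = np.linalg.qr(A)

# Apply U to qubit 1 only
psi_after = np.kron(U, np.eye(2)) @ psi

def probs_qubit2(state):
    # Marginal probabilities of measuring qubit 2 as 0 or 1
    return np.sum(np.abs(state.reshape(2, 2)) ** 2, axis=0)

p_before = probs_qubit2(psi)
p_after = probs_qubit2(psi_after)
```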

+ +

Another way to do this is using the partial trace. Everything that can be determined just from the second qubit is described by +$$ +\rho_2=\text{Tr}_1|\psi\rangle\langle\psi|. +$$ +Basically, what you do is sum over any orthonormal basis on the first qubit, +$$ +\rho_2=\sum_i\left(\langle i|\otimes I \right)\rho\left(|i\rangle\otimes I \right) +$$ +where I've written $\rho$ instead of $|\psi\rangle\langle\psi|$. Now, if I introduce the unitary $U$, we'd be calculating +$$ +\rho_2'=\sum_i\left(\langle i|\otimes I \right)U\otimes I \rho U^\dagger\otimes I\left(|i\rangle\otimes I \right). +$$ +The trick here, is to change which orthonormal basis you're summing over. You can use $|\eta_i\rangle=U|i\rangle$ instead, so that +\begin{align} +\rho_2'&=\sum_i \left( \langle \eta_i| \otimes I \right) (U\otimes I)\rho (U^\dagger\otimes I) \left(|\eta_i\rangle\otimes I \right). \\ +&=\sum_i\left(\langle i|\otimes I \right)\rho \left(|i\rangle\otimes I\right). \\ +&=\rho_2. +\end{align} +This shows the stronger result that everything that can be learnt or done to just qubit 2, without knowledge of qubit 1, is independent of the unitary applied to qubit 1.

+",1837,,11709,,4/16/2020 13:03,4/16/2020 13:03,,,,0,,,,CC BY-SA 4.0 +4328,2,,4326,10-02-2018 14:15,,4,,"

$$ +\log_2 604,661,760,000,000,000 \approx 59.07 +$$

+ +

So use $60$ qubits for the data lines where you will put a uniform superposition. This gives a total of $61$ qubits to run Grover's.

+ +

$2^{59} = 5.764607523034e+17$, so if you can throw away about $2.8e+16$ possibilities first, you would be able to do it with $60$ qubits in total.
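For reference, the arithmetic above is easy to reproduce in a few lines of Python (the quoted count is exactly $60^{10}$):

```python
import math

# The count quoted in the question is exactly 60**10
combos = 60 ** 10

bits = math.log2(combos)            # about 59.07
n_data_qubits = math.ceil(bits)     # 60 qubits for the uniform superposition
```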

+ +

Edit: As cautioned this is for logical qubits.

+",434,,434,,10-02-2018 16:51,10-02-2018 16:51,,,,8,,,,CC BY-SA 4.0 +4330,1,4337,,10-02-2018 16:34,,7,1600,"

I know the inner product has a relationship to the angle between two vectors and I know it can be used to quantify the distance between two vectors. Similarly, what's an use case for the outer product? You can exemplify with the simplest case. It doesn't have to be a useful algorithm.

+ +

I do know the outer product is a matrix and I know how to compute it, but I don't know how and where I'd use it.

+",1589,,124,,10-03-2018 15:53,10-03-2018 15:53,"When would I consider using an outer product of quantum states, to describe aspects of a quantum algorithm?",,2,0,,,,CC BY-SA 4.0 +4331,1,4332,,10-02-2018 18:43,,5,661,"

$\newcommand{\q}[2]{\langle #1 | #2 \rangle} +\newcommand{\qr}[1]{|#1\rangle} +\newcommand{\ql}[1]{\langle #1|} +\renewcommand{\v}[2]{\langle #1,#2\rangle} +\newcommand{\norm}[1]{\left\lVert#1\right\rVert}$ +Here's an application of the operator $\qr{\psi}\ql{\phi}$ to the vector $\qr{x}$. One writes $$ +\begin{align} +(\qr{\psi}\ql{\phi})\qr{x} &= \qr{\psi}(\ql{\phi}\qr{x})\\ + &= \qr{\psi}(\q{\phi}{x})\\ + &= (\ql{\phi}\phi\qr{x})\qr{\psi} +\end{align}$$

+ +

The last equation totally loses me. First I don't understand what is the meaning of $\ql{\phi}\phi\qr{x}$ and I have no idea how it came about (moving $\qr{\psi}$ to the other end of the expression).

+ +

I know $\qr{\psi}\ql{\phi}$ is a matrix and I know how to get the matrix if I have two concrete vectors $\qr{\psi}$ and $\ql{\phi}$. I also know how to multiply a matrix by a vector, but I don't know how to apply the outer product as in those equations above. (They seem to be saying something important and I'm missing it.)

+ +

This is found in the very definition of outer product in the nice book by David McMahon: Quantum Computing Explained, ISBN 978-0-470-09699-4. The two relevant pages, including an example.

+ +

+",1589,,26,,12/23/2018 11:39,12/23/2018 11:39,How to apply the outer product operator?,,1,5,0,,,CC BY-SA 4.0 +4332,2,,4331,10-02-2018 21:53,,3,,"

It is normal that you are lost: that last equation is mathematically incorrect. +I guess the original idea was to say that the inner product $ \langle \phi | \chi \rangle $ corresponds just to a coefficient and can be written either to the right or left of $ | \psi \rangle $, giving you a proportionality.

+ +

If you look at the example 3.1, you see an i appearing from nowhere but it should be a 1. As advised in the comments, use a different book. This one has many bad errors.

+",4127,,,,,10-02-2018 21:53,,,,0,,,,CC BY-SA 4.0 +4333,1,4334,,10-03-2018 01:07,,5,926,"

Suppose there are 3 parties, of which 2 pairs share an EPR pair and can communicate classically. What is a protocol that results in the third pair sharing an EPR pair?

+ +

That is the problem I'm given; I'm lost on how to do it, let alone what it means to ""share an EPR pair"" in the end.

+",4728,,26,,12/23/2018 12:45,12/23/2018 12:45,Protocol for entaglement swapping,,1,0,,,,CC BY-SA 4.0 +4334,2,,4333,10-03-2018 02:14,,7,,"

For two parties to share an EPR pair means that each party has one qubit, and these two qubits together are in state $\frac{1}{\sqrt{2}}(|00\rangle + |11\rangle)$ (or one of the other Bell pairs).

+ +

The entanglement swapping protocol is described, for example, in Wikipedia. It is the same as traditional (I almost wrote ""classical"" but caught myself in time) quantum teleportation, but applied to a qubit in a mixed state instead of a pure state.

+ +

Let's say

+ +
    +
  • Alice has qubit Q1,
  • +
  • Bob has qubits Q2 (entangled with Q1) and Q3,
  • +
  • and Carol has qubit Q4 (entangled with Q3).
  • +
+ +

If Bob teleports the state of qubit Q2 to Carol (using up the entanglement of qubits Q3 and Q4 in the process), Carol's qubit Q4 will end up entangled with Alice's qubit Q1.
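A minimal statevector check of this (numpy; for brevity I project Bob's qubits Q2 and Q3 onto a single Bell outcome rather than simulating the full measurement and Pauli correction):

```python
import numpy as np

# Bell state (|00> + |11>)/sqrt(2), written as a 2x2 matrix of amplitudes B[a, b]
B = np.eye(2) / np.sqrt(2)

# Joint state of Q1..Q4: a Bell pair on (Q1, Q2) tensor a Bell pair on (Q3, Q4)
psi = np.einsum('ab,cd->abcd', B, B)

# Project Bob's qubits Q2, Q3 onto the Bell outcome (|00> + |11>)/sqrt(2)
out = np.einsum('bc,abcd->ad', B.conj(), psi)
out = out / np.linalg.norm(out)

# The remaining state on (Q1, Q4) is itself a Bell pair
bell_vec = B.reshape(4)
```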

+",2879,,1837,,10-03-2018 07:03,10-03-2018 07:03,,,,0,,,,CC BY-SA 4.0 +4335,2,,4330,10-03-2018 10:37,,3,,"

The outer product is just a mathematical tool to write matrices, and therefore quantum operations and states. It is ""useful"" in the same sense that matrices, or more generally linear algebra, are ""useful"" in quantum mechanics. It is the mathematical formalism in which QM is formulated. +Given the above, I don't think it is particularly meaningful to talk of its utility for quantum algorithms.

+ +

In other words, I feel like the question is akin to asking ""what is a use of linear algebra in a quantum algorithm?"", and the answer is: it is the language in which QM is formulated, so it can never be not useful.

+ +

Note that this is a bit different than asking about inner products. The inner product between two vectors is a well-defined operation between sets of numbers, which makes it meaningful to talk of a quantum algorithm performing this operation. +On the other hand, the outer product is (roughly speaking) only a way to denote matrix elements. There is nothing to compute here.

+",55,,,,,10-03-2018 10:37,,,,0,,,,CC BY-SA 4.0 +4336,1,,,10-03-2018 15:22,,6,124,"

I am reading the paper EXIT-Chart Aided Near-Capacity Quantum Turbo Code Design by Babar et al., and in the introduction it is stated that, due to the exponentially increasing complexity of MIMO ML decoding, one idea is to pass such a problem to the quantum domain in order to exploit its inherent parallelism and so reduce the complexity of the problem.

+ +

I was wondering then if there has been any research on the topic of using quantum computing to solve such problem related with MIMO channels, and so if there has been, to reference some papers where such type of quantum algorithms are presented.

+",2371,,26,,12/13/2018 19:29,12/13/2018 19:29,Quantum Algorithm for MIMO ML detection,,1,0,,,,CC BY-SA 4.0 +4337,2,,4330,10-03-2018 15:38,,6,,"

An outer product is a description of an operator, which is very often to be applied to a state. It can therefore be used to describe how the state transforms under the action of the operator.$\def\ket#1{\lvert#1\rangle}\def\bra#1{\!\langle#1\rvert}$

+ +

For example, we may describe the Hadamard gate by +$$ +H \;=\; \ket{+}\bra{0} \,+\, \ket{-}\bra{1}, +$$ +which describes the fact that $H$ transforms $\ket{0} \mapsto \ket{+}$ and $\ket{1} \mapsto \ket{-}$, and transforms all linear combinations of $\ket{0}$ and $\ket{1}$ linearly. +If you want to describe the effect of the Hadamard gate on the standard basis — describing its matrix, but symbolically, so that you can actually carry out symbolic analysis — then you might want to write something like +$$ +H \;=\; + \frac{1}{\sqrt 2}\sum_{x,y \in \{0,1\}} (-1)^{xy} \,\ket{y}\bra{x}. +$$ +Admittedly this is not often very important specifically for a single Hadamard matrix, though a representation like this may well prove useful if you want to reason about performing a Hadamard on many qubits at once, +$$ + H^{\otimes n} \;=\; + \frac{1}{\sqrt{2^n}} \sum_{k,z \in \{0,1\}^n} (-1)^{k\cdot z}\,\ket{k_1 k_2 \cdots k_n}\bra{z_1 z_2 \cdots z_n} ,$$ +or to describe the quantum Fourier transform with respect to the integers modulo $M$ for some integer $M$: +$$ + F_{M} \,=\, \frac{1}{\sqrt M} \sum_{x,y \in \mathbb Z_M} \mathrm e^{2\pi i x y\!\:/\!\:M} \,\ket{y}\bra{x}. +$$ +It is sometimes useful to describe projectors using outer products as well: for example, the operator which projects states onto the $\ket{+}$ state would be written simply as $\ket{+}\bra{+}$.
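For instance, one can build $H$ from the outer-product expression and compare it to the usual matrix (a quick numpy check):

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)
ketp = (ket0 + ket1) / np.sqrt(2)   # |+>
ketm = (ket0 - ket1) / np.sqrt(2)   # |->

# H = |+><0| + |-><1|
H_outer = np.outer(ketp, ket0.conj()) + np.outer(ketm, ket1.conj())

H_matrix = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
```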

+",124,,,,,10-03-2018 15:38,,,,0,,,,CC BY-SA 4.0 +4338,1,4345,,10-03-2018 22:19,,18,1187,"

I'm working through Mike and Ike (Nielsen and Chuang) for self-study, and I'm reading about stabilizer codes in Chapter 10. I'm an electrical engineer with somewhat of a background in classical information theory, but I'm by no means an expert in algebraic coding theory. My abstract algebra is essentially just a little more than what's in the appendix.

+ +

I think I totally understand the Calderbank-Shor-Steane construction, where two linear classical codes are used to construct a quantum code. The Steane code is constructed using $C_1$ (the code for qubit flips) as the [7,4,3] Hamming code, and $C_2^{\perp}$ (the code for phase flips) as the same code. The parity check matrix of the [7,4,3] code is: +$$\begin{bmatrix} + 0&0&0&1&1&1&1 \\ + 0&1&1&0&0&1&1 \\ +1&0&1&0&1&0&1 +\end{bmatrix}.$$

+ +

The stabilizer generators for the Steane code can be written as:

+ +

\begin{array} {|r|r|} +\hline +Name & Operator \\ +\hline + g_1 & IIIXXXX \\ +\hline + g_2 & IXXIIXX \\ +\hline + g_3 & XIXIXIX \\ +\hline +g_4 & IIIZZZZ \\ +\hline + g_5 & IZZIIZZ \\ +\hline + g_6 & ZIZIZIZ \\ +\hline +\end{array} where for the sake of my sanity $IIIXXXX = I \otimes I\otimes I\otimes X \otimes X \otimes X \otimes X$ and so on.

+ +

It's pointed out in the book that the $X$s and $Z$s are in the same positions as the $1$s in the original parity check code. Exercise 10.32 asks to verify that the codewords for the Steane code are stabilized by this set. I could obviously plug this in and check it by hand. However, it's stated that with the observation of the similarities between the parity check matrix and the generator the exercise is ""self-evident"".

+ +

I've seen this fact noted in other places (http://www-bcf.usc.edu/~tbrun/Course/lecture21.pdf), but I'm missing some kind of (probably obvious) intuition. I think I'm missing some further connection from the classical codewords to the quantum codes other than how they're used in the indexing of basis elements in the construction of the code (i.e. Section 10.4.2).

+",4746,,304,,10-10-2018 20:13,10-10-2018 20:13,Connection between stabilizer generators and parity check matrices in the Steane code,,3,0,,,,CC BY-SA 4.0 +4339,2,,4336,10-03-2018 22:34,,2,,"

Yeah. See this one for a more-or-less analogous problem: https://ieeexplore.ieee.org/document/6515077?arnumber=6515077 . Look for some papers by Lajos Hanzo on IEEE Explore. One thing that's notable is that MIMO detection is NP hard.

+ +

Specifically, to solve the MIMO (or MU) ML detection problem exactly (currently) requires searching an exponential space. Applying quantum unstructured search gives a square root factor improvement via Grover's Algorithm, so the resulting problem is still exponential.

+ +

In more practical MIMO detection, people generally apply heuristics or solve relaxations of the actual problem. Typically, we can do a solid job in low polynomial complexity.

+",4746,,,,,10-03-2018 22:34,,,,0,,,,CC BY-SA 4.0 +4340,1,,,10-04-2018 06:08,,5,268,"

As a fundamental component for a quantum computation, the measurement needs to be implemented in a fault tolerant way.

+ +

As indicated in Chuang and Nielsen Quantum Computation and Quantum Information, a quantum circuit that allows the implementation of a FT measurement is the one represented in Fig.1:

+ +

+Quantum Circuit representation of a fault tolerant measurement.

+ +

However, three measurements are needed in order to make this implementation actually fault tolerant, due to the possible presence of errors in various parts of the circuit (as noted in the book, between two CNOT gates in a verification step for the cat state).

+ +

But how is it actually possible to execute three subsequent sets of measurements? Isn't the data (the states on which the measurement is performed) modified by the application of the measurement operation? +If that is the case, are the measurement operations applied on different data?

+",2601,,,,,10-04-2018 08:22,"Fault tolerant quantum measurement: how is implemented the ""majority vote""",,2,1,,,,CC BY-SA 4.0 +4341,2,,4338,10-04-2018 06:37,,2,,"

One way that you could construct what the codeword is, is to project on the $+1$ eigenspace of the generators, +$$ +|C_1\rangle=\frac{1}{2^6}\left(\prod_{i=1}^6(\mathbb{I}+g_i)\right)|0000000\rangle. +$$ +Concentrate, to start with, on the first 3 generators +$$ +(\mathbb{I}+g_1)(\mathbb{I}+g_2)(\mathbb{I}+g_3). +$$ +If you expand this out, you'll see that it creates all the terms in the group ($\mathbb{I},g_1,g_2,g_3,g_1g_2,g_1g_3,g_2g_3,g_1g_2g_3$). Corresponding these to binary strings, the action of multiplying two terms (since $X^2=\mathbb{I}$) is just like addition modulo 2. So, contained within the code are all of the words generated by the parity check matrix (and this is a group, with group operation of addition modulo 2).

+ +

Now, if you multiply by one of the $X$ stabilizers, that's like doing the addition modulo 2 on the corresponding bit strings. But, because we've already generated the group, by definition every group element is mapped to another (unique) group element. In other words, if I do +$$ +g_1\times\{\mathbb{I},g_1,g_2,g_3,g_1g_2,g_1g_3,g_2g_3,g_1g_2g_3\}=\{g_1,\mathbb{I},g_1g_2,g_1g_3,g_2,g_3,g_1g_2g_3,g_2g_3\}, +$$ +I get back the set I started with (using $g_1^2=\mathbb{I}$, and the commutation of the stabilizers), and therefore I'm projecting onto the same state. Hence, the state is stabilized by $g_1$ to $g_3$.
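The group structure can be made concrete with bit strings (pure Python; XOR of the 7-bit support strings plays the role of multiplying the $X$-type stabilizers):

```python
from itertools import combinations

# Supports of g1, g2, g3 as 7-bit integers (1 where X acts)
g = [0b0001111, 0b0110011, 0b1010101]

# Generate the group: XOR together every subset of the generators
group = set()
for r in range(len(g) + 1):
    for subset in combinations(g, r):
        x = 0
        for s in subset:
            x ^= s
        group.add(x)

# Multiplying by g1 (i.e. XOR with its support) permutes the group onto itself
image = {g[0] ^ x for x in group}
```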

+ +

You can effectively make the same argument for $g_4$ to $g_6$. I prefer to think about first applying a Hadamard to every qubit, so the Xs are changed to Zs and vice versa. The set of stabilizers are unchanged, so the code is unchanged, but the Z stabilizer is mapped to an X stabilizer, about which we have already argued.

+",1837,,,,,10-04-2018 06:37,,,,0,,,,CC BY-SA 4.0 +4342,2,,4338,10-04-2018 06:37,,2,,"

What follows perhaps doesn't exactly answer your question, but instead aims to provide some background to help it become as 'self-evident' as your sources claim.

+ +

The $Z$ operator has eigenstates $|0\rangle$ (with eigenvalue $+1$) and $|1\rangle$ (with eigenvalue $-1$). The $ZZ$ operator on two qubits therefore has eigenstates +$$ |00\rangle, |01\rangle, |10\rangle, |11\rangle $$. +The eigenvalues for these depend on the parity of the bit strings. For example, with $|00\rangle$ we multiply the $+1$ eigenvalues of the individual $Z$ operators to get $+1$. For $|11\rangle$ we multiply the $-1$ eigenvalues together and also get $+1$ for $ZZ$. So both these even parity bit strings have eigenvalue $+1$, as does any superposition of them. For both odd parity states ($|01\rangle$ and $|10\rangle$) we must multiply a $+1$ with a $-1$, and get a $-1$ eigenvalue for $ZZ$.

+ +

Note also that superpositions of bit strings with fixed parity (such as some $\alpha |00\rangle + \beta |11\rangle$) are also eigenstates, and have the eigenvalue associated with their parity. So measuring $ZZ$ would not collapse such a superposition.

+ +

This analysis remains valid as we go to higher numbers of qubits. So if we want to know about the parity of qubits 1, 3, 5, and 7 (to pick a pertinent example), we could use the operator $ZIZIZIZ$. If we measure this and get the outcome $+1$, we know that this subset of qubits has a state represented by an even parity bit string, or a superposition thereof. If we get $-1$, we know that it is an odd parity state.

+ +
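Since $Z$-type operators are diagonal in the computational basis, this parity reading can be verified directly. A small NumPy sketch (the four sample bit strings are arbitrary choices):

```python
import numpy as np
from functools import reduce

I = np.eye(2)
Z = np.diag([1.0, -1.0])

# Z on qubits 1, 3, 5 and 7 (qubit 1 = leftmost tensor factor), identity elsewhere
ZIZIZIZ = reduce(np.kron, [Z, I, Z, I, Z, I, Z])

# For a computational basis state, the ZIZIZIZ eigenvalue should be
# (-1) raised to the parity of bits 1, 3, 5 and 7 of its bit string
checks = []
for index in [0b0000000, 0b1000000, 0b1010101, 0b0110011]:
    bits = [(index >> (6 - q)) & 1 for q in range(7)]   # bits[0] = qubit 1
    parity = (bits[0] + bits[2] + bits[4] + bits[6]) % 2
    checks.append(ZIZIZIZ[index, index] == (-1) ** parity)
print(all(checks))
```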

This allows us to write the [7,4,3] Hamming code using the notation of quantum stabilizer codes. Each parity check is turned into a stabilizer generator which has $I$ on every qubit not involved in the check, and $Z$ on every qubit that is. The resulting code will protect against errors that anticommute with $Z$ (and therefore have the effect of flipping bits).

+ +

Of course, qubits do not restrict us to only working in the $Z$ basis. We could encode our qubits for a classical Hamming code in the $|+\rangle$ and $|-\rangle$ states instead. These are the eigenstates of $X$, so you just need to replace $Z$ with $X$ in everything I've said to see how this kind of code works. It would protect against errors that anticommute with $X$ (and so have the effect of flipping phase).

+ +

The magic of the Steane code, of course, is that it does both at the same time and protects against everything.

+",409,,,,,10-04-2018 06:37,,,,0,,,,CC BY-SA 4.0 +4343,2,,4340,10-04-2018 06:51,,0,,"

The three measurements you refer to are applied on parts of the circuit where a qubit is first initialized, then acted upon by a controlled gate, and then measured. The fact that they are initialized at the beginning of each of these three processes means that any dependence on their previous history is removed. So we could use the same qubit three times (and use the result of the previous measurement to inform us how to initialize) or we could use three different qubits. We can just choose whatever is practical.

+ +

In general, there is no reason why we shouldn't measure a qubit multiple times, despite the fact that measurement modifies the state. For example, in stabilizer codes, we constantly move information about errors that have occurred to certain qubits, and then measure them to get that information out. In this case, the effect of the measurement in collapsing superpositions has the positive benefit of collapsing complex types of error into simpler ones. So if you ever see a multiply measured qubit in a reputable source, you can be sure that the disturbance caused by the measurement has been taken into account, and is probably being specifically made use of.

+",409,,,,,10-04-2018 06:51,,,,2,,,,CC BY-SA 4.0 +4344,2,,4340,10-04-2018 08:22,,1,,"

The three measurements you refer to are composed of two data qubits and an ancilla. They essentially ask the question ""is the state of the data qubits in the subspace spanned by $|00\rangle$ and $|11\rangle$ (in which case the measurement result on the ancilla will be $0$) or is the state in the subspace $|01\rangle$ and $|10\rangle$ (in which case the ancilla would output $1$)?""

+ +

This means that if you only ever prepare your state in some superposition of $|00\rangle$ and $|11\rangle$, then the measurement will have no effect. It will leave the superposition completely intact, because every part of the state corresponds to the same measurement result.

+ +

In the actual circuit here, things are a little more complicated. There are three data qubits in all, prepared in a superposition of $|000\rangle$ and $|111\rangle$. But still, for each pair of data qubits the state should be within the subspace spanned by $|00\rangle$ and $|11\rangle$. So the three measurements (each for a different pair of data qubits) should always return the output $0$. They would have no effect on the superposition in this case.

+ +

If they return a different result, it is a sign of error. The majority voting helps us determine what to do to mitigate for that error.

+",409,,,,,10-04-2018 08:22,,,,0,,,,CC BY-SA 4.0 +4345,2,,4338,10-04-2018 12:13,,9,,"

There are a few conventions and intuitions at work here, which perhaps it would help to have spelled out — $\def\ket#1{\lvert#1\rangle}\def\bra#1{\!\langle#1\rvert}$

+ +
    +
  • Sign bits versus {0,1} bits

    +The first step is to make what is sometimes called the 'great notational shift', and think of bits (even classical bits) as being encoded in signs. This is productive to do if what you're mostly interested in is the parities of bit strings, because bit-flips and sign-flips basically act the same way. We map $0 \mapsto +1$ and $1 \mapsto -1$, so that for instance the sequence of bits $(0,0,1,0,1)$ would be represented by the sequence of signs $(+1,+1,-1,+1,-1)$.

    + +

    Parities of sequences of bits then corresponds to products of sequences of signs. For instance, just as we would recognise $0 \oplus 0 \oplus 1 \oplus 0 \oplus 1 = 0$ as a parity computation, we may recognise $(+1)\cdot(+1)\cdot(-1)\cdot(+1)\cdot(-1) = +1$ as representing the same parity computation using the sign convention.

    +Exercise. Compute the 'parity' of $(-1,-1,+1,-1)$ and of $(+1,-1,+1,+1)$. Are these the same?

  • +
  • Parity checks using sign bits

    +In the {0,1}-bit convention, parity checks have a nice representation as a dot-product of two boolean vectors, so that we can realise complicated parity computations as linear transformations. +By shifting to sign-bits, we have inevitably lost the connection to linear algebra on a notational level, because we're taking products instead of sums. +On a computational level, because this is only a shift in notation, we don't really have to worry too much. +But on a pure mathematical level, we now have to think again a little about what we're doing with parity check matrices.

    + +

When we use sign bits, we may still represent a 'parity check matrix' as a matrix of 0s and 1s, instead of signs ±1. Why? One answer is that a row vector describing a parity check of bits is of a different type than the sequence of bits themselves: it describes a function on data, not the data itself. The array of 0s and 1s now just requires a different interpretation — instead of linear coefficients in a sum, they correspond to exponents in a product. If we have sign bits $(s_1, s_2, \ldots, s_n) \in \{-1,+1\}^n$, and we want to compute a parity check given by a row-vector $(b_1, b_2, \ldots, b_n) \in \{0,1\}^n$, the parity check is then computed by +$$ (s_1)^{b_1} \cdot (s_2)^{b_2} \cdot [\cdots] \cdot (s_n)^{b_n} \in \{-1,+1\},$$ where recall that $s^0 = 1$ for all $s$. +As with {0,1}-bits, you can think of the row $(b_1,b_2,\ldots,b_n)$ as just representing a 'mask' which determines which bits $s_j$ make a non-trivial contribution to the parity computation.

    + +

    Exercise. Compute the result of the parity check $(0,1,0,1,0,1,0)$ on $(+1,-1,-1,-1,-1,+1,-1)$.

  • +
  • Eigenvalues as parities.

    +The reason why we would want to encode bits in signs in quantum information theory is because of the way that information is stored in quantum states — or more to the point, the way that we can describe accessing that information. Specifically, we may talk a lot about the standard basis, but the reason why it is meaningful is because we can extract that information by measurement of an observable.

    + +

    This observable could just be the projector $\ket{1}\bra{1}$, where $\ket{0}$ has eigenvalue 0 and $\ket{1}$ has eigenvalue 1, but it is often helpful to prefer to describe things in terms of the Pauli matrices. +In this case, we would talk about the standard basis as the eigenbasis of the $Z$ operator, in which case we have $\ket{0}$ as the +1-eigenvector of Z and $\ket{1}$ as the −1-eigenvector of Z.

    + +

So: we have the emergence of sign-bits (in this case, eigenvalues) as representing the information stored in a qubit. And better still, we can do this in a way which is not specific to the standard basis: we can talk about information stored in the 'conjugate' basis, just by considering whether the state is an eigenstate of $X$, and what eigenvalue it has. But more than this, we can talk about the eigenvalues of a multi-qubit Pauli operator as encoding parities of multiple bits — the tensor product $Z \otimes Z$ represents a way of accessing the product of the sign-bits, that is to say the parity, of two qubits in the standard basis. +In this sense, the eigenvalue of a state with respect to a multi-qubit Pauli operator — if that eigenvalue is defined (i.e. in the case that the state is an eigenvector of the Pauli operator) — is in effect the outcome of a parity calculation of information stored in some choice of basis for each of the qubits.

    + +

    Exercise. What is the parity of the state $\ket{11}$ with respect to $Z \otimes Z$? Does this state have a well-defined parity with respect to $X \otimes X$?

    +Exercise. What is the parity of the state $\ket{+-}$ with respect to $X \otimes X$? Does this state have a well-defined parity with respect to $Z \otimes Z$?

    +Exercise. What is the parity of $\ket{\Phi^+} = \tfrac{1}{\sqrt 2}\bigl(\ket{00} + \ket{11}\bigr)$ with respect to $Z \otimes Z$ and $X \otimes X$?

  • +
  • Stabiliser generators as parity checks.

    +We are now in a position to appreciate the role of stabiliser generators as being analogous to a parity check matrix. +Consider the case of the 7-qubit CSS code, with generators

    + +

    \begin{array} {|r|ccccccc|} +\hline +\scriptstyle\text{Generator} & & & & \!\!\!\!\!\!\!\!\!\scriptstyle\text{Tensor factors}\!\!\!\!\!\!\!\!\! & & & \\[-0.5ex] + & \scriptstyle1 & \scriptstyle2 & \scriptstyle3 & \scriptstyle4 & \scriptstyle5 & \scriptstyle6 & \scriptstyle7 \\ +\hline +\hline + g_1 & & & & X & X & X & X \\ +\hline + g_2 & & X & X & & & X & X \\ +\hline +\hline + g_3 & X & & X & & X & & X \\ +\hline +g_4 & & & & Z & Z & Z & Z \\ +\hline + g_5 & & Z & Z & & & Z & Z \\ +\hline + g_6 & Z & & Z & & Z & & Z \\ +\hline +\end{array} +I've omitted the identity tensor factors above, as one might sometimes omit the 0s from a {0,1} matrix, and for the same reason: in a given stabiliser operator, the identity matrix corresponds to a tensor factor which is not included in the 'mask' of qubits for which we are computing the parity. For each generator, we are only interested in those tensor factors which are being acted on somehow, because those contribute to the parity outcome.

    + +

    Now, the 'codewords' (the encoded standard basis states) of the 7-qubit CSS code are given by +$$ +\begin{align} + \ket{0_L} \propto{}&{} + \ket{0000000} + \ket{0001111} + \ket{0110011} + \ket{0111100} \\&+ \ket{1010101} + \ket{1011010} + \ket{1100110} + \ket{1101001} + = + \sum_{y \in C} \ket{y}, +\\[1ex] + \ket{1_L} \propto{}&{} + \ket{1111111} + \ket{1110000} + \ket{1001100} + \ket{1000011} \\&+ \ket{0101010} + \ket{0100101} + \ket{0011001} + \ket{0010110} + = + \sum_{y \in C} \ket{y \oplus 1111111}, +\end{align} +$$ +where $C$ is the code generated by the bit-strings $0001111$, $0110011$, and $1010101$. Notably, these bit-strings correspond to the positions of the $X$ operators in the generators $g_1$, $g_2$, and $g_3$. While those are stabilisers of the code (and represent parity checks as I've suggested above), we can also consider their action as operators which permute the standard basis. In particular, they will permute the elements of the code $C$, so that the terms involved in $\ket{0_L}$ and $\ket{1_L}$ will just be shuffled around.

    + +

    The generators $g_4$, $g_5$, and $g_6$ above are all describing the parities of information encoded in standard basis states. +The encoded basis states you are given are superpositions of codewords drawn from a linear code, and those codewords all have even parity with respect to the parity-check matrix from that code. +As $g_4$ through $g_6$ just describe those same parity checks, it follows that the eigenvalue of the encoded basis states is $+1$ (corresponding to even parity).

    + +

    This is the way in which

    + +
    +

    'with the observation about the similarities between the parity check matrix and the generator the exercise is ""self evident""'

    +
    + +

    — because the stabilisers either manifestly permute the standard basis terms in the two 'codewords', or manifestly are testing parity properties which by construction the codewords will have.

  • +
  • Moving beyond codewords

    + +

The list of generators in the table you provide represents the first steps in a powerful technique, known as the stabiliser formalism, in which states are described using no more or less than the parity properties which are known to hold of them.

    + +

    Some states, such as standard basis states, conjugate basis states, and the perfectly entangled states $\ket{\Phi^+} \propto \ket{00} + \ket{11}$ and $\ket{\Psi^-} \propto \ket{01} - \ket{10}$ can be completely characterised by their parity properties. (The state $\ket{\Phi^+}$ is the only one which is a +1-eigenvector of $X \otimes X$ and $Z \otimes Z$; the state $\ket{\Psi^-}$ is the only one which is a −1-eigenvector of both these operators.) These are known as stabiliser states, and one can consider how they are affected by unitary transformations and measurements by tracking how the parity properties themselves transform. For instance, a state which is stabilised by $X \otimes X$ before applying a Hadamard on qubit 1, will be stabilised by $Z \otimes X$ afterwards, because $(H \otimes I)(X \otimes X)(H \otimes I) = Z \otimes X$. +Rather than transform the state, we transform the parity property which we know to hold of that state.

    + +
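The conjugation rule used above, $(H \otimes I)(X \otimes X)(H \otimes I) = Z \otimes X$, is easy to verify with explicit matrices. A minimal NumPy check (together with the companion identity for $Z \otimes Z$):

```python
import numpy as np

I = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)

HI = np.kron(H, I)   # Hadamard on qubit 1 only

# H X H = Z on the first factor, so X⊗X is conjugated to Z⊗X ...
check1 = np.allclose(HI @ np.kron(X, X) @ HI, np.kron(Z, X))
# ... and Z⊗Z is conjugated to X⊗Z
check2 = np.allclose(HI @ np.kron(Z, Z) @ HI, np.kron(X, Z))
print(check1 and check2)
```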

    You can use this also to characterise how subspaces characterised by these parity properties will transform. +For instance, given an unknown state in the 7-qubit CSS code, I don't know enough about the state to tell you what state you will get if you apply Hadamards on all of the qubits, but I can tell you that it is stabilised by the generators $g_j' = (H^{\otimes 7}) g_j (H^{\otimes 7})$, which consist of +\begin{array} {|r|ccccccc|} +\hline +\scriptstyle\text{Generator} & & & & \!\!\!\!\!\!\!\!\!\scriptstyle\text{Tensor factors}\!\!\!\!\!\!\!\!\! & & & \\[-0.5ex] + & \scriptstyle1 & \scriptstyle2 & \scriptstyle3 & \scriptstyle4 & \scriptstyle5 & \scriptstyle6 & \scriptstyle7 \\ +\hline +\hline + g'_1 & & & & Z & Z & Z & Z \\ +\hline + g'_2 & & Z & Z & & & Z & Z \\ +\hline +\hline + g'_3 & Z & & Z & & Z & & Z \\ +\hline +g'_4 & & & & X & X & X & X \\ +\hline + g'_5 & & X & X & & & X & X \\ +\hline + g'_6 & X & & X & & X & & X \\ +\hline +\end{array} +This is just a permutation of the generators of the 7-qubit CSS code, so I can conclude that the result is also a state in that same code.

    + +

    There is one thing about the stabiliser formalism which might seem mysterious at first: you aren't really dealing with information about the states that tells you anything about how they expand as superpositions of the standard basis. You're just dealing abstractly with the generators. And in fact, this is the point: you don't really want to spend your life writing out exponentially long superpositions all day, do you? What you really want are tools to allow you to reason about quantum states which require you to write things out as linear combinations as rarely as possible, because any time you write something as a linear combination, you are (a) making a lot of work for yourself, and (b) preferring some basis in a way which might prevent you from noticing some useful property which you can access using a different basis.

    + +

    Still: it is sometimes useful to reason about 'encoded states' in error correcting codes — for instance, in order to see what effect an operation such as $H^{\otimes 7}$ might have on the codespace of the 7-qubit code. What should one do instead of writing out superpositions?

    + +

The answer is to describe these states in terms of observables — in terms of parity properties — to fix those states. For instance, just as $\ket{0}$ is the +1-eigenstate of $Z$, we can characterise the logical state $\ket{0_L}$ of the 7-qubit CSS code as the +1-eigenstate of +$$ Z_L = Z \otimes Z \otimes Z \otimes Z \otimes Z \otimes Z \otimes Z$$ +and similarly, $\ket{1_L}$ as the −1-eigenstate of $Z_L$. +(It is important that $Z_L = Z^{\otimes 7}$ commutes with the generators $\{g_1,\ldots,g_6\}$, so that it is possible to be a +1-eigenstate of $Z_L$ at the same time as having the parity properties described by those generators.) +This also allows us to move swiftly beyond the standard basis: using the fact that $X^{\otimes 7}$ anticommutes with $Z^{\otimes 7}$ the same way that $X$ anticommutes with $Z$, and also as $X^{\otimes 7}$ commutes with the generators $g_i$, we can describe $\ket{+_L}$ as being the +1-eigenstate of +$$ X_L = X \otimes X \otimes X \otimes X \otimes X \otimes X \otimes X,$$ +and similarly, $\ket{-_L}$ as the −1-eigenstate of $X_L$. +We may say that the encoded standard basis is, in particular, encoded in the parities of all of the qubits with respect to $Z$ operators; and the encoded 'conjugate' basis is encoded in the parities of all of the qubits with respect to $X$ operators.

    + +

    By fixing a notion of encoded operators, and using this to indirectly represent encoded states, we may observe that +$$ (H^{\otimes 7}) \,X_L\, (H^{\otimes 7}) = Z_L, \quad (H^{\otimes 7}) \,Z_L\, (H^{\otimes 7}) = X_L, $$ +which is the same relation as obtains between $X$ and $Z$ with respect to conjugation by Hadamards; which allows us to conclude that for this encoding of information in the 7-qubit CSS code, $H^{\otimes 7}$ not only preserves the codespace but is an encoded Hadamard operation.

  • +
+ +

Thus we see that the idea of observables as a way of describing information about a quantum states in the form of sign bits — and in particular tensor products as a way of representing information about parities of bits — plays a central role in describing how the CSS code generators represent parity checks, and also in how we can describe properties of error correcting codes without reference to basis states.

+",124,,124,,10-04-2018 15:30,10-04-2018 15:30,,,,3,,,,CC BY-SA 4.0 +4347,1,4352,,10-04-2018 21:55,,4,568,"

Suppose Alice and Bob hold one qubit each of an arbitrary two-qubit state $|\psi \rangle$ that is possibly entangled. They can apply local operations and are allowed classical communication. Their goal is to apply the CNOT gate to their state $| \psi \rangle$. How can they achieve this using two ebits of communication?

+ +
+ +

I'm pretty lost on where to start this problem. I would assume that if Alice has the control bit for the CNOT, I would need to follow a protocol something like:

+ +
    +
  1. tensor $|\psi\rangle$ with the EPR pairs
  2. +
  3. Alice applies some local operations and measures the EPR pairs
  4. +
  5. Alice classically communicates something to Bob
  6. +
  7. Bob applies some local operations with the information sent classically by Alice
  8. +
  9. CNOT completed.
  10. +
+ +

However, it is possible that Alice would only use one ebit, and Bob would use one as well to communicate something back. I'm not sure.

+ +

Really I just feel like I have no clue where to start this problem. Initially, some guidance on how to approach this would be perfect, because right now I'm just poking around at a 6-qubit state.

+ +
+ +

A second idea I just had: might it be beneficial, instead of looking at the complete state, to look at Alice and Bob's local density matrices?

+",4707,,26,,12/23/2018 11:37,12/23/2018 11:37,Applying CNOT with local operations and two EPR pairs,,1,2,,,,CC BY-SA 4.0 +4348,1,,,10-05-2018 05:47,,7,343,"

Is there a straightforward generalization of the $\mathbb{C}^2$ Bell basis to $N$ dimensions? Is there a rotational invariant Bell state in higher dimensions? If yes, then what is the form of that state (how does it look like)? And, by rotational invariance, I mean that the state is invariant under applying the same unitary transformation $U$ to each qubit separately.

+ +

For example, $|\psi^-\rangle = \frac{|0\rangle|1\rangle - |1\rangle|0\rangle}{\sqrt{2}} = \frac{|\gamma\rangle|\gamma_\perp\rangle - |\gamma_\perp\rangle|\gamma\rangle}{\sqrt{2}}$, where $|\gamma\rangle$ is some quantum state in $\mathbb{C}^2$, and $|\gamma_\perp\rangle$ is orthogonal to $|\gamma\rangle$.

+ +

It would be helpful if I could see an example of the same, in say $\mathbb{C}^4$ space, perhaps in the computational basis {$|0\rangle, |1\rangle, |2\rangle, |3\rangle$} itself.

+",506,,55,,11/30/2021 22:26,11/30/2021 22:26,Rotationally invariant maximally entangled states in higher dimensions,,1,0,,,,CC BY-SA 4.0 +4349,2,,4348,10-05-2018 07:14,,4,,"

An orthonormal maximally entangled basis for two quNits is easily defined: +$$ +|\Psi_{xy}\rangle=\frac{1}{\sqrt{N}}\sum_{i=0}^{N-1}\omega^{iy}|i,i+x\rangle, +$$ +where $\omega$ is a primitive $N$-th root of unity, addition in the ket is modulo $N$, and $x,y=0,1,\ldots,N-1$.

+ +

I don't believe that there is a rotationally invariant maximally entangled state, except in the case $N=2$. You may want to look up 'twirling', which almost does the calculation you need (they find states invariant under $U\otimes U^\star$ instead of $U\otimes U$), however the way that I convinced myself is the following:

+ +
    +
  • Any state invariant under $U\otimes U$ must be invariant under particular examples.

  • +
Let's start with the $Z$-equivalent, +$$ +\tilde Z=\sum_{n=0}^{N-1}\omega^n|n\rangle\langle n| +$$ +The class of maximally entangled states that are invariant under this operation is +$$ +\frac{1}{\sqrt{N}}\sum_{n=0}^{N-1}|n\rangle|N-1-n\rangle e^{i\phi_n}, +$$ +where we have free choice over the phases $\phi_n$.

  • +
  • Next consider the action of the permutation operation +$$ +P=\sum_{n=0}^{N-2}|n+1\rangle\langle n|+|0\rangle\langle N-1| +$$ +For $N=2$, this maps back to the original state provided $e^{2i(\phi_2-\phi_1)}=1$. For all other $N$, we cannot map back to the original state. Terms like $|1\rangle|N-2\rangle$ become $|2\rangle|N-1\rangle$ which are not of the form $|m\rangle|N-1-m\rangle$, and are hence not in the original state.

  • +
+",1837,,,,,10-05-2018 07:14,,,,1,,,,CC BY-SA 4.0 +4350,1,4356,,10-05-2018 09:18,,14,3499,"

Two of the most well known entangled states are the GHZ-state $|\psi\rangle = 1/\sqrt{2}\left( |0\rangle^{\otimes n} + |1\rangle^{\otimes n}\right)$ and the $W_n$-state, with $W_3 = 1/\sqrt{3}\left(|100\rangle + |010\rangle + |001\rangle\right)$.

+ +

Constructing the GHZ-state is simple for arbitrary $n$. However, implementing the $W_n$-state is more difficult. For $n=2$ it is easy, and for $n=4$ we can use

+ +
H q[0,3]
+X q[0,3]
+Toffoli q[0],q[3],q[1]
+X q[0,3]
+Toffoli q[0],q[3],q[2]
+CNOT q[2],q[0]
+CNOT q[2],q[3]
+
+ +

Even for $n=3$ we have implementations, see this answer for instance. However, I have not found an algorithm that, given an $n$, outputs the circuit for constructing the $W_n$-state.

+ +

Does such an algorithm, defined by single- and two-qubit gates, exist? And if so, what is it?

+",2005,,,,,10/25/2022 0:14,General construction of $W_n$-state,,4,0,,,,CC BY-SA 4.0 +4351,2,,4350,10-05-2018 09:40,,7,,"

You can define the sequence recursively. Conceptually, what you want to do is:

+ +
    +
  • Create the initial state $|0\rangle^{\otimes N}$

  • +
  • On qubit 1, apply the gate +$$ +\frac{1}{\sqrt{N}}\left(\begin{array}{cc} +1 & \sqrt{N-1} \\ +\sqrt{N-1} & -1 +\end{array}\right) +$$

  • +
  • Controlled off qubit 1, apply ""make $|W_{N-1}\rangle$"" on qubits 2 to $N$ (i.e. only do this if qubit 1 is in the $|1\rangle$ state, otherwise do nothing)

  • +
  • Apply a bit-flip gate on qubit 1.

  • +
+ +

This algorithm, as expressed, is not made up of only one- and two-qubit gates, but it can certainly be broken down as such by standard universality constructions.

+ +

Also, this may not be the most efficient algorithm you could come up with. For example, if $N=2^n$, you could use just $n$ layers of square-root of swap gates to produce what you want -- start with a $|1\rangle$ on a single qubit. Root-swap with a second qubit, and you've got the $|W_2\rangle$ (up to phases that you'll need to take care of). Put an ancilla next to both of these, and do root swaps between W-ancilla pairs, and you've got $|W_4\rangle$, repeat and you've got $|W_8\rangle$, and so on. I believe this is basically what they do experimentally here. You should be able to incorporate this algorithm into the first one to make it more efficient ($O(\log N)$) for any arbitrary size, but I've never stopped to work out the details with any great care.

+",1837,,1837,,10-05-2018 13:15,10-05-2018 13:15,,,,0,,,,CC BY-SA 4.0 +4352,2,,4347,10-05-2018 10:13,,4,,"

Here is an idea of how this could be solved. It is based on teleportation.

+ +
    +
  • First, Alice teleports her qubit by means of one of the EPR pairs that she shares with Bob. In order to do that, she sends Bob the classical information she obtains by measuring her half of the EPR pair together with her qubit.
  • +
  • Bob uses the classical information received in order to reconstruct the qubit on his side. Now the teleportation is complete, and Bob possesses the whole two-qubit system.
  • +
  • As local operations are allowed, Bob now performs the CNOT gate on the two-qubit system.
  • +
  • Now that the operation has been done, Bob uses both the second EPR pair and classical information in order to teleport the first qubit back to Alice.
  • +
  • Alice reconstructs the qubit by applying the appropriate operations to her half of the EPR pair, depending on the classical information obtained from Bob. Teleportation is done again.
  • +
+ +

As a consequence, Alice recovers the qubit that she teleported, now with the desired CNOT applied. This way, the objective is successfully achieved while fulfilling the constraint of using just two ebits (in this case the two EPR pairs for teleportation), and the allowed classical communication.

+",2371,,,,,10-05-2018 10:13,,,,3,,,,CC BY-SA 4.0 +4353,1,4355,,10-05-2018 11:03,,9,482,"

I am interested in finding a circuit to implement the operation $f(x) > y$ for an arbitrary value of $y$. Below is the circuit I would like to build:

+ +

+ +

I use the first three qubits to encode $|x⟩$, use the other three qubits to encode $|f(x) = x⟩$, and finally I want to filter out all of the solutions for which $f(x) \leq y$.

+ +

So, if we set $y = 5$, the states would be:

+ +

$$\Psi_{0} = |0⟩\otimes|0⟩ $$

+ +

$$\Psi_{1} = \frac{1}{\sqrt{8}}\sum_{i=0}^{7} (|i⟩\otimes|0⟩) $$

+ +

$$\Psi_{2} = \frac{1}{\sqrt{8}}\sum_{i=0}^{7} (|i⟩\otimes|i⟩) $$

+ +

$$\Psi_{3} = \frac{1}{\sqrt{2}}(|6⟩\otimes|6⟩ + |7⟩\otimes|7⟩)$$

+ +

Is there a general method to come up with such filter, or is this non-trivial?

+",4755,,26,,12/23/2018 7:39,12/23/2018 7:39,Is there a general method to implement a 'greater than' quantum circuit?,,2,9,,,,CC BY-SA 4.0 +4354,1,,,10-05-2018 13:09,,8,422,"

From D-Wave flyer:

+ +
+

The D-Wave 2000Q system has up to 2048 qubits and 5600 couplers. To reach this scale, it uses 128,000 + Josephson junctions, which makes the D-Wave 2000Q QPU by far the most complex superconducting + integrated circuit ever built.

+
+ +

How do they define a qubit? Do these qubits build a universal quantum computer? Does D-Wave's system satisfy DiVincenzo's criteria?

+",914,,8141,,06-11-2020 15:02,06-11-2020 18:53,What's a Qubit on D-Wave 2000Q?,,2,1,,,,CC BY-SA 4.0 +4355,2,,4353,10-05-2018 13:12,,5,,"

What you are looking for, I think, is a quantum circuit for comparison. +These are built from adder circuits with a slight modification that turns them into comparators.

+ +

For adders, you have for example one from Cuccaro et al. (this one gives the modification to adapt it for comparison) and another from Himanshu et al.
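The classical logic behind an adder-based comparator is the borrow chain of a subtraction: compute $y - x$ bit by bit and read the final borrow. A plain-Python sketch of that logic (this is only the classical truth table a reversible comparator computes coherently, not a circuit):

```python
def is_greater(x, y, n):
    # Ripple the borrow of y - x from least to most significant bit;
    # a final borrow of 1 means y < x, i.e. x > y.
    borrow = 0
    for k in range(n):
        xb = (x >> k) & 1
        yb = (y >> k) & 1
        borrow = int(yb - xb - borrow < 0)
    return borrow == 1

# Exhaustive check against the ordinary comparison for 3-bit inputs
ok = all(is_greater(x, y, 3) == (x > y) for x in range(8) for y in range(8))
print(ok)
```

Since each borrow step is a small reversible function of three bits, this is the kind of logic the adder-based constructions implement with Toffoli/CNOT networks.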

+",4127,,,,,10-05-2018 13:12,,,,1,,,,CC BY-SA 4.0 +4356,2,,4350,10-05-2018 16:25,,11,,"

Yes, there are several implementations of this algorithm in the Superposition quantum kata (tasks 14 and 15):

+ +
    +
  • For $n = 2^k$, you can use a recursive algorithm: create a W state on the first $2^{k-1}$ qubits, allocate an ancilla qubit in $|+\rangle$ state, do some controlled SWAPs to set the state of the second $2^{k-1}$ qubits, and then some controlled NOTs to reset the ancilla back to $|0\rangle$ (WState_PowerOfTwo_Reference operation).
  • +
  • For an arbitrary $n$, you can use a recursive algorithm as described by DaftWullie (WState_Arbitrary_Reference operation).
  • +
  • There is also a neat trick you can use to create a $W_n$ state for an arbitrary $n$ using the first recursive algorithm. Allocate extra qubits to pad the $n$ given ones to $2^k$, create a state $W_{2^k}$ on them and measure the extra qubits; if all qubits measure to 0, the state of the original qubits is $W_n$, otherwise reset and repeat the process (WState_Arbitrary_Postselect operation).
  • +
+ +

This is my favorite task of that kata, because it allows so many different approaches.

+",2879,,2879,,05-10-2019 07:16,05-10-2019 07:16,,,,0,,,,CC BY-SA 4.0 +4357,2,,4353,10-05-2018 17:47,,1,,"

Let $U$ be the circuit you stated that takes $| 0 \rangle^{\otimes (2n)}$ to $\psi=\frac{1}{2^{n/2}}\sum_{i=0}^{2^n-1} | i \rangle \otimes | f(i) \rangle $.

+ +

$S_\psi = I - 2 | \psi \rangle \langle \psi | = U ( S_{| 0 \rangle^{\otimes (2n)}} ) U^\dagger$

+ +

$S_P$ needs to be the unitary that sends $| i \rangle \otimes | j \rangle$ to $(-1)^{j>y} (| i \rangle \otimes | j \rangle)$. For example if $y$ is $2^{n-1}-1$, then this would be done with a $Z$ gate on the most significant index. Generally for $y$ 1 less than a power of $2$ this should be easy by combining information from all more significant indices.

+ +
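For the stated example $y = 2^{n-1}-1$, the claim that $S_P$ reduces to a $Z$ gate on the most significant qubit of the second register can be checked directly. A NumPy sketch for $n=3$:

```python
import numpy as np

n = 3
y = 2 ** (n - 1) - 1     # y = 3: "j > y" is exactly "top bit of j is 1"

# S_P on the second register: |j> -> (-1)^[j > y] |j>
S_P = np.diag([(-1.0) ** (j > y) for j in range(2 ** n)])

Z = np.diag([1.0, -1.0])
I = np.eye(2)
# Z on the most significant qubit gives the same diagonal
same = np.allclose(S_P, np.kron(Z, np.kron(I, I)))
print(same)
```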

Those are the requisite ingredients for amplitude amplification for $f(x) \geq 2^k$ for some $k$. The number of times you need to apply $Q$ depends on the overlap, which in turn depends on how many solutions $x$ there are to $f(x) \geq 2^k$. If $f$ is reversible, so it is just a permutation of the $2^n$ basis states, then you know the overlap in terms of $n-k$ without bothering with any details of $f$.

+ +

You won't get exactly the result you asked for because that was projecting to a subspace/not unitary, but this will get you close to that desired state given your starting setup. There is also the trouble of setting $y$ arbitrarily, but at least you can amplify the condition $f(x) \geq 2^k$ as a step along the way to amplifying $f(x) \geq y$ for $y \geq 2^k$.

+ +

I may have misunderstood the question. You might have wanted something that works more generally, but given your small example this approach is okay. You don't have to worry about the fact that $\frac{\pi}{4\theta}$ will get really big as $n$ grows, if you don't let $n$ grow. Did you want to input $y$ as a quantum state as well?

+ +

I wasn't careful about checking $\geq$ vs $\gt$ so you should check that.

+",434,,,,,10-05-2018 17:47,,,,0,,,,CC BY-SA 4.0 +4358,2,,4350,10-05-2018 18:20,,8,,"

The conceptually simplest way to produce a W state is somewhat analogous to classical reservoir sampling, in that it involves a series of local operations that ultimately create a uniform effect.

+

Basically, you look at each qubit in turn and consider "how much amplitude do I have left in the all-0s state, and how much do I want to transfer into the just-this-qubit-is-ON state?". It turns out that the family of rotations you need is what I'll call the "odds gates" which have the following matrix:

+

$$M(p:q) = \sqrt{\frac{1}{p+q}} \begin{bmatrix} \sqrt{p} & \sqrt{q} \\ -\sqrt{q} & \sqrt{p} \end{bmatrix}$$

+

Using these gates, you can get a W state with a sequence of increasingly-controlled operations:

+

+

This circuit is somewhat inefficient. It has cost $O(N^2 + N \lg(1/\epsilon))$ where $N$ is the number of qubits and $\epsilon$ is the desired absolute precision (since, in an error corrected context, the odds gates are not native and must be approximated).

+
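The amplitude bookkeeping behind this cascade can be checked with a few lines of plain Python (a sketch of the probabilities only, not a circuit simulation; the function name is mine):

```python
def w_state_probs(n):
    # Track the probability left in the all-zeros branch as each odds gate
    # M(n-k-1 : 1) peels off its share for the k-th one-hot basis state.
    remaining = 1.0
    probs = []
    for k in range(n):
        transfer = remaining / (n - k)
        probs.append(transfer)
        remaining -= transfer
    return probs
```

Every one-hot basis state ends up with probability $1/n$, e.g. `w_state_probs(4)` gives four values of $0.25$.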

We can improve the efficiency by switching from a "transfer out of what was left behind" strategy to a "transfer out of what is traveling along" strategy. This adds a fixup sweep at the end, but only requires single controls on each operation. This reduces the cost to $O(N \lg(1/\epsilon))$:

+

+

It is still possible to do better, but it starts to get complicated. Basically, you can use a single partial Grover step to get $N$ amplitudes equal to $\sqrt{1/N}$ but they will be encoded into a binary register (we want a one-hot register with a single bit set). Fixing this requires a binary-to-unary conversion circuit.

+

The partial Grover step (from ""Encoding Electronic Spectra in Quantum Circuits with Linear T Complexity""):

+

+

The binary to unary conversion:

+

+

Using this more complicated approach reduces the cost from $O(N \lg(1/\epsilon))$ to $O(N + \lg(1/\epsilon))$.

+",119,,119,,7/27/2021 20:02,7/27/2021 20:02,,,,0,,,,CC BY-SA 4.0 +4359,1,4363,,10-05-2018 20:56,,4,224,"

I am seeking introductory resources which will enable me to answer these questions (textbooks, lecture series, etc.):

+ +
    +
  • Given a simple quantum system, how do I derive its Hamiltonian?
  • +
  • Given a Hamiltonian, what questions can I answer about the system it describes (and how?).
  • +
+ +

I am approaching this topic primarily from the computer science side. I am familiar with Newtonian classical mechanics as presented in first-year undergraduate physics courses, but never learned classical Hamiltonian mechanics.

+",4153,,2293,,10-06-2018 17:08,10/30/2018 2:48,Introductory resources for learning about quantum Hamiltonians,,1,14,,,,CC BY-SA 4.0 +4361,1,,,10-06-2018 15:57,,11,397,"

DiVincenzo's criteria for quantum computation are the following:

+ +
    +
  1. A scalable physical system with well characterized qubits.
  2. +
  3. The ability to initialize the state of the qubits to a simple fiducial +state.
  4. +
  5. Long relevant decoherence times.
  6. +
  7. A “universal” set of quantum gates.
  8. +
  9. A qubit-specific measurement capability.
  10. +
+ +

Are they satisfied by the D-Wave 2000Q?

+ +
+ +

This was originally part of this question but is better suited to be a separate question.

+",2293,,2293,,10-06-2018 16:46,12-04-2018 08:21,Does the D-Wave 2000Q satisfy DiVincenzo's criteria?,,1,0,,,,CC BY-SA 4.0 +4363,2,,4359,10-06-2018 17:36,,3,,"
+

Given a simple quantum system, how do I derive its Hamiltonian?

+
+ +

For quantum systems of continuous variables, the most common way to construct the Hamiltonian is to add the kinetic energy and potential energy, as described in this resource. The kinetic energy part is explained here, and various potential energy models are given here:

+ +

+
which unfortunately I could only get to from this page!

+ +

For quantum systems on discrete variables, you can construct any $2^n \times 2^n$ Hamiltonian using the Pauli matrices, and any Hamiltonian of any dimension using generalizations of the Gell-Mann matrices.

+ +
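For example, a two-qubit Hamiltonian assembled from Pauli matrices (a transverse-field Ising model; the couplings $J$ and $h$ here are arbitrary illustrative values):

```python
import numpy as np

I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

# Transverse-field Ising model on two qubits:
# H = -J (Z x Z) - h (X x I + I x X)
J, h = 1.0, 0.5
H = -J * np.kron(Z, Z) - h * (np.kron(X, I) + np.kron(I, X))

# Any valid Hamiltonian is Hermitian
assert np.allclose(H, H.conj().T)
```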

In terms of other resources: +There is an open source Hamiltonian Zoo on GitHub, but it is very incomplete. So far it tells you how to derive the Hamiltonian for two charges interacting with each other (Coulomb), for a spin system interacting with a magnetic field (Zeeman), and for a 2D p-wave Fermi superfluid, but not much else. However, since this is a resource request, I think the Hamiltonian Zoo is a good starting point, because it lists the names of almost every mainstream Hamiltonian imaginable, and the best resource for learning about each of those listed Hamiltonians is Wikipedia. For example:

+ + + +

In every case, the Hamiltonian is in the article, just look for the equation containing the big $H$!

+ +
+

Given a Hamiltonian, what questions can I answer about the system it describes (and how?).

+
+ +

There is no ""resource"" I know that teaches people what can be learned about a system based on looking at its Hamiltonian, and I'd be quite surprised if such a resource existed. What I can tell you is that there are things about the system that can be learned by looking at the Hamiltonian (such as number of particles or number of degrees of freedom, by looking at the number of terms in the kinetic and potential energy operators in the case of continuous variables, or the size of the matrix in discrete variable Hamiltonians). However you may want to ask this part as a separate question (and not a resource request) in case other people want to suggest other things that can be learned about a system by looking at the Hamiltonian apart from what I've already told you.

+",2293,,,,,10-06-2018 17:36,,,,1,,,,CC BY-SA 4.0 +4364,1,,,10-07-2018 00:18,,8,1084,"

Given a single qubit in the computational basis, $|\psi\rangle =\alpha |0\rangle + \beta|1\rangle$, the density matrix is $\rho=|\psi\rangle\langle\psi|=\begin{pmatrix} |\alpha|^2 & \alpha \beta^*\\ +\alpha^*\beta & |\beta|^2\end{pmatrix}$.

+ +

The depolarizing channel is defined to be the super-operator $\mathcal{N}:\rho \rightarrow (1-p)\rho +p\pi$, where $\pi=I/2$. Therefore, here, $\mathcal{N} (\rho) = (1-p)\begin{pmatrix} |\alpha|^2 & \alpha \beta^*\\ +\alpha^*\beta & |\beta|^2\end{pmatrix} ++ \frac{p}{2} \begin{pmatrix} 1 & 0\\0 & 1\end{pmatrix}$.

+ +

How can one implement this evolution on IBM Q?

+",2757,,26,,01-01-2019 09:15,01-01-2019 09:15,Depolarizing channel implementation on IBM Q,,1,1,,,,CC BY-SA 4.0 +4366,1,,,10-07-2018 14:25,,6,122,"

Am I correct in thinking that post-quantum cryptography schemes such as lattice-based solutions run on classical computers and are resistant to quantum attacks (as opposed to RSA), whereas quantum key distribution schemes such as BB84 are designed to run on quantum hardware to communicate between quantum devices?

+",4773,,26,,05-08-2019 10:22,05-08-2019 10:22,Lattice based cryptography vs BB84,,1,2,,,,CC BY-SA 4.0 +4367,1,4376,,10-07-2018 16:27,,2,505,"

I want to make an operator:

+ +

$\mathrm{U3}(\arccos(\sqrt p),0,0)$, where $p$ is a random value between $0$ and $1$

+ +

How do I write this ""random $\mathrm{U3}$"" operator in the QASM language?

+",4524,,26,,10-08-2018 07:09,10-09-2018 08:41,"How to make ""random U3"" in QASM?",,1,5,,,,CC BY-SA 4.0 +4368,1,5430,,10-07-2018 20:32,,7,242,"

I found a paper by Grover titled ""How significant are the known collision and element distinctness quantum algorithms?"", in which he expressed criticism of several famous algorithms, including Ambainis's algorithm for element distinctness. More precisely, he argues that all those algorithms use too much space, and that there is a trivial way to obtain the same speedup (using the same amount of memory) by separating the search space into several parts and searching them on independent processors simultaneously. It seems Ambainis conceded this point in another article.

+ +

My questions are:

+ +

1. Is Grover correct about those algorithms? That is, are they ""trivial"" in Grover's sense?

+ +

2. Have there been new algorithms obtaining the same speedup and using less space since then?

+",4776,,,,,02-08-2019 08:53,How significant are the variants of Grover's Algorithm?,,1,0,,,,CC BY-SA 4.0 +4369,2,,4364,10-07-2018 23:30,,7,,"

There are several ways that you could realise the depolarising map $ +\mathcal N_p(\rho) = (1\!-\!p)\:\!\rho + p \!\!\:\cdot\!\tfrac{1}{2}\mathbf 1$ on a quantum computer — including an idealised quantum computer, in which waiting around for the noise to do the work for you would not be an available method.$\def\ket#1{\lvert#1\rangle}$

+ +

We start from the fact that the completely depolarising map can be realised by using a uniformly random Pauli operator: +$$ \mathcal N_1(\rho) = \tfrac{1}{2}\mathbf 1 = \tfrac{1}{4}\rho + \tfrac{1}{4}X\rho X + \tfrac{1}{4} Y \rho Y + \tfrac{1}{4} Z \rho Z \mspace{48mu} (*) \mspace{-48mu}$$ +For the sake of brevity, let $\mathrm{id}$ be the identity operation on a single qubit density operator, and let $\mathcal X(\rho) = X \rho X$, $\mathcal Y(\rho) = Y \rho Y$, and $\mathcal Z(\rho) = Z \rho Z$. +Then we may abbreviate Eqn. $(*)$ by writing +$$ \mathcal N_1 = \tfrac{1}{4} \mathrm{id} + \tfrac{1}{4} \mathcal X + \tfrac{1}{4} \mathcal Y + \tfrac{1}{4} \mathcal Z ,$$ +which emphasises the fact we are considering a randomly applied operation. +Then, more generally, we may decompose: +$$ +\begin{align} +\mathcal N_p \,&=\, (1\!-\!p) \;\!\mathrm{id} \,+\, p \:\!\mathcal N_1 +\\&=\, (1\!-\!\tfrac{3p}{4}) \;\!\mathrm{id} + \tfrac{p}{4}\mathcal X + \tfrac{p}{4}\mathcal Y + \tfrac{p}{4}\mathcal Z. +\end{align} +$$ +Then, one approach we can take is to simulate a source of randomness which governs which of the four maps $\mathrm{id}$, $\mathcal X$, $\mathcal Y$, or $\mathcal Z$ are applied.

+ +
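Numerically, the decomposition above is easy to verify (a NumPy sketch; $\rho$ here is an arbitrary valid density matrix chosen for illustration):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def depolarize(rho, p):
    # N_p = (1 - 3p/4) id + (p/4) (X . X + Y . Y + Z . Z)
    return ((1 - 3 * p / 4) * rho
            + (p / 4) * (X @ rho @ X + Y @ rho @ Y + Z @ rho @ Z))

# an arbitrary valid single-qubit density matrix, for illustration
rho = np.array([[0.7, 0.2 - 0.1j], [0.2 + 0.1j, 0.3]])

# p = 1: the uniform Pauli twirl sends any state to the maximally mixed state
assert np.allclose(depolarize(rho, 1.0), np.eye(2) / 2)
# p = 0: the state is untouched
assert np.allclose(depolarize(rho, 0.0), rho)
```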

Which way you would like to do this depends in part in what operations you would like to use to do so. +There are two approaches which strike me as being more obvious ones:

+ +
    +
  • If you are happy to use up to three TOFFOLI gates, then you can describe the maps as being governed by a probability distribution on two bits in which $\mathrm{Pr}[x \!=\! 00] \,=\, 1 - 3p/4$, and $\mathrm{Pr}[x \!=\! ab] \,=\, p/4$ for any $ab \ne 00$. +We could do this by considering an appropriate superposition state to describe the distribution $x$, +$$ \ket{\gamma} = \tfrac{\sqrt{4-3p\,}}{2} \ket{00} + \tfrac{\sqrt p}{2} \ket{01} + \tfrac{\sqrt p}{2} \ket{10} + \tfrac{\sqrt p}{2} \ket{11} $$ +and then using TOFFOLI gates to control an $X$, $Y$, or $Z$ operation conditioned on $x = 01$, $x = 10$, and $x = 11$ respectively.

  • +
  • At the cost of only one more qubit, you can instead use three CNOT gates, using different independently prepared qubits to control whether an $\mathcal X$ operation is applied, a $\mathcal Y$ operation is applied, a $\mathcal Z$ operation is applied, or any combination of them or none of them. This seems to me to be substantially less resource intensive, so I will describe this in some detail.

    + +

    We are going to use three qubits to trigger independent events. These events are acting on the input with an $\mathcal X$ channel, acting on the input with a $\mathcal Y$ channel, and acting on the input with a $\mathcal Z$ channel, in sequence. +Because the events are independent, it may be that more than one of them occurs. This is something we have to take into account in order to simulate the distribution $(1\!-\!\tfrac{3p}{4},\,\tfrac{p}{4},\,\tfrac{p}{4},\,\tfrac{p}{4})$ of Pauli operations to be applied — because the event of [apply $\mathcal X$, don't apply $\mathcal Y$, then apply $\mathcal Z$] has the same net effect as [don't apply $\mathcal X$, apply $\mathcal Y$, and don't apply $\mathcal Z$], and we would like to have the correct probabilities of having each possible net effect. +By symmetry, we can infer that we would like to have the same probability $z$ of each of the effects to occur, so we would like to solve the equation +$$ \tfrac{1}{4}p = z(1-z)^2 + z^2(1-z) = z-z^2 $$ +For any $0 \leqslant p \leqslant 1$, there should be a unique $0 \leqslant z \leqslant \tfrac{1}{2}$ which solves the above equation. +Subject to that value of $z$, we define the state +$$ \ket{\delta_z} = \sqrt{1-z\;\!}\;\! \ket{0} + \sqrt{z}\;\! \ket{1}.$$ +We then prepare three copies of this state to use effectively as a source of randomness, albeit involved in coherently controlled operations which determine whether the input qubit is subject to an $X$, $Y$, or $Z$ transformation. +The circuit conveying the main idea is the following one on the left (with the circuit on the right describing how it would be realised in terms of the usual set of Clifford gates):

    + +

    + +

    The symbols at the end of the top three wires are 'trace out' operations, and in effect mean 'ignore anything that happens to this qubit from now on'. (You could measure the qubits and simply ignore the result if you like, though to get the result of the depolarising channel it is important that you actually do ignore the outcomes of those measurements, as the transformation that occurs conditioned on any particular outcome will be not a depolarising map but a particular Pauli transformation.) The three instances of $H$ followed by $S$ in the right-hand circuit realises the Clifford operation which permutes the Bloch sphere axes $X \to Y \to Z \to X$, allowing us to simulate the different controlled Pauli operators via cyclically changing the reference frame on the input qubit.

  • +
+",124,,124,,10-07-2018 23:35,10-07-2018 23:35,,,,0,,,,CC BY-SA 4.0 +4370,2,,4366,10-08-2018 05:42,,2,,"

Post-quantum crypto schemes run on classical computers and are hoped to be secure against quantum attacks. Quantum key distribution such as bb84 or e91 run on quantum hardware (although does not require the full power of a quantum computer) and is provably secure (subject to certain underlying assumptions about lab security etc) against quantum attack.

+",1837,,,,,10-08-2018 05:42,,,,1,,,,CC BY-SA 4.0 +4371,1,4372,,10-08-2018 22:03,,6,141,"

What is the relation between $\mathrm{QMA}$ and $\mathrm{P^{QMA}}$ and how do we prove it? Are these classes equal?

+",4650,,26,,10-09-2018 04:44,10-09-2018 08:18,Relation between $\mathrm{QMA}$ and $\mathrm{P^{QMA}}$,,1,0,,,,CC BY-SA 4.0 +4372,2,,4371,10-08-2018 22:48,,6,,"

Clearly $\mathrm{QMA \subseteq P^{QMA}}$, as we can construct a $\mathrm{P^{QMA}}$ algorithm to solve any problem in $\mathrm{QMA}$, by using an oracle call. The question is whether the reverse containment is known to hold. And the answer is that the reverse containment is not known to hold (and I think is not expected to hold).

+ +

Of course, computational complexity is full of classes where we don't know whether or not they're different — and where proving that classes are different (even when they're ""obviously"" different, as with $\mathrm{P}$ vs. $\mathrm{NP}$ or $\mathrm{BPP}$ vs. $\mathrm{BQP}$) seems to be much more difficult than proving that they're equal (even when it isn't obvious at all that they're equal, as with $\mathrm{PSPACE}$ vs. $\mathrm{IP}$). So if two classes are different but not ""enormously"" different, you might be waiting a long time to hear about a definitive proof.

+ +

Still, we can elaborate a little bit on the obstacles to showing that $\mathrm{QMA}$ and $\mathrm{P^{QMA}}$ are equal, if you think that maybe they are equal. One point is that one of them is known to be closed under complements, and the other isn't.

+ +
    +
  • $\mathrm{P^{QMA}}$ is clearly closed under complements. +The model of computation which is the basis for the definition of $\mathrm{P^{QMA}}$ is a deterministic Turing machine which has access to a $\mathrm{QMA}$ oracle. We can produce a similar oracle machine for the complementary problem by switching the final status ACCEPT / REJECT of the original machine. Therefore, if a language $L$ belongs to $\mathrm{P^{QMA}}$, then the complementary language $\overline{L}$ also belongs to $\mathrm{P^{QMA}}$. (The same goes for any promise problems, as opposed to languages.)

  • +
  • At present, it is not known whether $\mathrm{QMA}$ is closed under complement. Part of the problem is the same difficulty suffered by $\mathrm{NP}$ and $\mathrm{MA}$: there's no obvious way to construct an (efficient bounded error) verifier for certificates of NO instances of problems in $\mathrm{QMA}$, given that all you know about them is that their YES instances do admit efficient bounded error verifiers. In fact, based on the intuition drawn from computability theory which might lead one to think that $\mathrm{NP \ne coNP}$ (and that therefore $\mathrm{P \ne NP}$), one might well guess that $\mathrm{QMA}$ is not closed under complements.

  • +
+ +

Obviously, if $\mathrm{QMA = P^{QMA}}$, it would follow that $\mathrm{QMA}$ is closed under complement: but if $\mathrm{QMA}$ were closed under complement, you might also hope to be able to show that by other (possibly easier) means as well. For now, the fact that we're confused about one but not the other is a sort of suggestive, if not entirely reliable, higher-order evidence that these classes might be different.

+",124,,124,,10-09-2018 08:18,10-09-2018 08:18,,,,3,,,,CC BY-SA 4.0 +4373,1,4374,,10-08-2018 22:52,,9,374,"

Question inspired by this article from IEEE Spectrum on blocks containing different polarization filters for use in classrooms, and my previous question on representing the three-polarizing-filter experiment in quantum computing terms. Here I want to go the other way.

+ +

Do there exist any easily-purchasable educational quantum computing toys, such as those a physics teacher might use in a classroom? I am imagining here a set of polarizing filters or beam splitters with which you can (in conjunction with a laser) create very simple quantum circuits.

+ +

I am especially interested in ways to make a CNOT gate.

+",4153,,4153,,10-11-2018 01:21,12/19/2018 16:11,Do any educational quantum computing toys or devices exist?,,3,0,,,,CC BY-SA 4.0 +4374,2,,4373,10-09-2018 04:49,,5,,"

You can get the sort of optical bench that is typically used for classrooms.

+ +

For a couple examples:

+ +

3B Scientific

+ +

School Speciality

+ +

I think the one I have taught with before was from 3B, but I don't know about any of the others so research them yourself rather than taking a product recommendation from me. There are several options and this choice will be dependent on your quality/cost requirements.

+ +

Those will be for experiments about lenses and diffraction rather than polarization so you will have to get the polarizers separately. An example:

+ +

Edmund Optics

+ +

But you see how all the pieces are put on mounts along a track so you can easily slide them around. That is the sort of setup to look for so that lining everything up along the beamline is easier.

+ +

In teaching the three polarizer experiment, we would also have a detector to measure intensity, but you wouldn't need that if this is meant more as a toy rather than for teaching how to fit the data to something like $A \sin^2 (B \theta + C)$ with error analysis.

+",434,,,,,10-09-2018 04:49,,,,0,,,,CC BY-SA 4.0 +4375,2,,4373,10-09-2018 06:45,,5,,"

Quantum computers are, unfortunately, quite hard to build. Experiments with polarizing filters or beam splitters would be able to demonstrate quantum effects, but I know of no way to make simple quantum circuits for multiple qubits unless you have single photon sources and detectors.

+ +

Alternatively, you could use current cloud-based devices. The IBM Q Experience has a simple GUI interface that would be suitable for students (after some introduction), and will then run the circuit on real hardware. If your students are able to make circuits programmatically, they can use more IBM quantum hardware as well as hardware by Rigetti, with other companies also in the pipeline.

+ +

For a 'single qubit' experiment, you could perhaps just use polarizing filters. The $|0\rangle$ and $|1\rangle$ states of the qubit could be associated with horizontal and vertical polarization, and the $|+\rangle$ and $|-\rangle$ states could be associated with angles of $45^{\circ}$ and $135^{\circ}$. Then just by holding up a filter, you can turn sunlight into a stream of single qubits in a given state.

+ +

With a second filter, you can similarly measure in the $|0\rangle / |1\rangle$ basis (by holding it horizontally or vertically, and seeing if any light comes out) or the $|+\rangle / |-\rangle$ basis (by holding it diagonally). With multiple filters you could chain these measurements, and show how the measurement bases are complementary. You could even remake the game I made to run on quantum computers: Battleships with complementary measurements.

+ +
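The chained-filter behaviour can be worked out with Malus's law, assuming the light is already polarized along the first filter's axis (a plain-Python sketch; the function name is mine):

```python
import math

def transmitted_fraction(angles_deg):
    # Malus's law for a chain of ideal polarizers: each successive filter
    # passes cos^2 of the relative angle. Assumes the light is already
    # polarized along the first filter's axis.
    frac = 1.0
    for a, b in zip(angles_deg, angles_deg[1:]):
        frac *= math.cos(math.radians(b - a)) ** 2
    return frac
```

So `transmitted_fraction([0, 90])` is essentially zero, while inserting a diagonal filter, `transmitted_fraction([0, 45, 90])`, lets a quarter of the light through: the classic three-polarizer surprise, and the optical analogue of chaining complementary measurements.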

This would be a single qubit example, despite the fact you have many qubits, because they are always in the same state and they never interact. So you just have many samples of a single-qubit process that just happen to be shining down on you all at once.

+ +

Disclosure: I work for IBM, and Rigetti once gave me a t-shirt

+",409,,409,,10-09-2018 11:16,10-09-2018 11:16,,,,3,,,,CC BY-SA 4.0 +4376,2,,4367,10-09-2018 08:41,,3,,"

There are many forms of QASM, so I'll answer for OpenQASM 2.0, as is currently used by IBM.

+ +

Declaring a gate to be random means that it would be randomly generated at compile time. Since QASM is used as an expression of a compiled circuit, such randomness must be resolved by the time the QASM is created.

+ +

It is true that there are transpilation processes in the IBM stack, which convert a user-generated QASM into one optimized for the needs of a given device. This could allow for the functionality you desire to be built into future versions. But I doubt that will be the case. It is much easier just to use randomness when generating the QASM, such as with Python's random number generation when creating circuits with Qiskit.

+ +
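For example, the randomness can be resolved in Python at the moment the OpenQASM text is generated (the single-qubit register name q is my choice):

```python
import math
import random

p = random.random()  # the randomness is resolved here, at generation time
theta = math.acos(math.sqrt(p))

# emit a fixed OpenQASM 2.0 circuit with the sampled angle baked in
qasm = f"""OPENQASM 2.0;
include "qelib1.inc";
qreg q[1];
u3({theta},0,0) q[0];
"""
print(qasm)
```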

Disclaimer: I work for IBM.

+",409,,,,,10-09-2018 08:41,,,,0,,,,CC BY-SA 4.0 +4378,2,,4108,10-09-2018 15:33,,1,,"

One advantage of the transmon design is the additional loop you gain from what you called the two-island design. The yellow flux bias line changes the Josephson energy, and thus the resonance frequency of the qubit. You can imagine this as changing the (Josephson) inductance of the SQUID loop, which acts as a non-linear LC resonator. This helps, for example, in two-qubit gate implementations that rely on identical qubit frequencies. But it comes at the cost of higher sensitivity to flux noise. You can find more details here.

+",4750,,2293,,10-09-2018 16:56,10-09-2018 16:56,,,,1,,,,CC BY-SA 4.0 +4379,1,,,10-09-2018 21:13,,7,512,"

Suppose we have a classical-quantum state $\sum_x |x\rangle \langle x|\otimes \rho_x$; one can characterise the min-entropy $H_\min(A|B)_\rho$ via the best probability of guessing outcome $x$ given $\rho_x$ (the min-entropy being the negative logarithm of this guessing probability). How does this quantity relate to $H(A|B)_\rho$, the standard conditional entropy? And how does it relate to the mutual information $I(A:B)_\rho$?

+",4798,,55,,07-04-2022 15:41,07-04-2022 15:41,Relating min-entropy with conditional entropy,,1,0,,,,CC BY-SA 4.0 +4380,1,4382,,10-09-2018 23:57,,4,632,"

I'm going through the phase estimation algorithm, and wanted to sanity-check my calculations by making sure the state I'd calculated was still normalized. It is, assuming the square of the absolute value of the eigenvalue of the arbitrary unitary operator I'm analyzing equals 1. So, does it? Assuming that the eigenvector of the eigenvalue is normalized.

+",4153,,4153,,10-10-2018 00:02,10-10-2018 16:34,Are the squared absolute values of the eigenvalues of a unitary matrix always 1?,,2,1,,,,CC BY-SA 4.0 +4381,2,,4135,10-10-2018 00:25,,2,,"

Hyperentanglement refers to entanglement between multiple degrees of freedom of a given system. It is a concept commonly encountered in some fields of quantum information processing, typically in the context of photonics. +As an example, this can mean that you have entanglement between the polarisation and the position degrees of freedom of a single photon. +A review paper on the subject can be found in 1610.09896.

+ +

Note first of all that there isn't anything fundamental about hyperentanglement. At the end of the day, a hyperentangled system is a ""regularly entangled"" system in which you just happen to have nonseparability between degrees of freedom that somehow ""look different"". +The reason for having a specific name for it is more practical than fundamental.

+ +

The simplest example of a hyperentangled system that comes to mind is a quantum walk. More specifically, one can implement a photonic quantum walk in which the walker degree of freedom is implemented as the position (or orbital angular momentum, or frequency, or time, or something else) of a single photon, and the coin degree of freedom is the photon's polarisation. +Already after a single step (assuming a bunch of things about the kind of quantum walk under consideration) one has a state of the form +$$\lvert-1,\uparrow\rangle+\lvert1,\downarrow\rangle,$$ +where the first label characterises the walker (path, OAM, or anything else) dof, while the second one characterises the polarisation dof. This is probably the simplest example of a ""hyperentangled"" photonic state. Again, note that this differs from a standard Bell state only in the physical substrate associated with the degrees of freedom, so there is nothing fundamental about it (which does not mean it is not interesting of course). +An example of such an OAM+polarisation quantum walk is studied in 1407.5424 (even though they don't use the term ""hyperentanglement"" there). +More generally, any vector beam (class of beams characterised by a space-variant polarisation in the transverse plane, see e.g. ncomms8706) qualifies.

+ +

More interesting states can be obtained by ""hyperentangling"" different degrees of freedom of different photons. An example of this is found in 1602.03769, in which entanglement of path and polarisation of two photons is exploited to get a four-qubit entangled state. Another example is 1507.08887, in which the authors implement a state in which there is entanglement between different hyperentangled states.

+",55,,55,,10-10-2018 09:32,10-10-2018 09:32,,,,0,,,,CC BY-SA 4.0 +4382,2,,4380,10-10-2018 01:19,,6,,"

Good question. The answer turns out to be Yes.
+You don't even need the vector to be normalized. Watch:

+ +

Start with the definition of eigenvalues and eigenvectors:

+ +

$$ +\begin{align} +U|\psi\rangle &= \lambda |\psi\rangle\\ +\end{align} +$$

+ +

Conjugate and transpose both sides of the equation:

+ +

$$ +\begin{align} +\langle\psi|U^\dagger &= \langle \psi| \lambda^*. +\end{align} +$$

+ +

Left multiply each side of line 1 by the corresponding side of line 2.

+ +

$$ +\begin{align} +\langle \psi|U^\dagger\cdot U|\psi \rangle &= \langle \psi | \lambda^* \lambda |\psi\rangle \\ +\langle \psi |\psi \rangle &= |\lambda |^2 \langle \psi |\psi \rangle \\ +c &= |\lambda|^2 c\\ +1 &= |\lambda|^2 +\end{align} +$$

+ +

If $|\psi\rangle $ is normalized, it just means that $c=1$, which makes no difference in this proof because the $c$ was on both sides of the equation and can be divided out.
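As a numerical illustration of the same fact (a NumPy sketch; the Q factor of a complex QR decomposition gives a random-ish unitary):

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
u, _ = np.linalg.qr(a)  # u is a unitary matrix

assert np.allclose(u.conj().T @ u, np.eye(4))          # unitarity
assert np.allclose(np.abs(np.linalg.eigvals(u)), 1.0)  # |lambda| = 1
```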

+",2293,,2293,,10-10-2018 16:34,10-10-2018 16:34,,,,4,,,,CC BY-SA 4.0 +4383,2,,4380,10-10-2018 07:10,,5,,"

@user1271772's answer is excellent, and absolutely the right answer. I just wanted to add in some additional perspective, given recent questions regarding Hamiltonians. Many physicists start from the Hamiltonian being the underlying thing that determines evolution, and unitaries are derived as a consequence. They start from the Schrödinger equation, +$$ +i\frac{d|\psi\rangle}{dt}=H|\psi\rangle. +$$ +For a time-invariant Hamiltonian, the solution is +$$ +|\psi(t)\rangle=e^{-iHt}|\psi(0)\rangle, +$$ +where $e^{-iHt}$ is unitary because $e^{-iHt}e^{iHt}=\mathbb{I}$. Just stating this solution actually skips over the thing I really want to focus on.

+ +

We could expand a generic $|\psi\rangle=\sum_ia_i|i\rangle$, so the Schrödinger equation becomes a series of simultaneous differential equations for the $a_i$: +$$ +i\frac{da_i}{dt}=\sum_jH_{ij}a_j. +$$ +Now consider an eigenvector $|\lambda\rangle=\sum_ib_i|i\rangle$ of $H$. Taking the complex conjugate of the eigenvalue equation, and using that $H$ is Hermitian and $\lambda$ is real, gives +$$ +\sum_{i}b_i^\star H_{ij}=\lambda b_j^\star. +$$ +We can take linear combinations of the $a_i$: +$$ +i\frac{d\sum_ib_i^\star a_i}{dt}=\sum_{ij}b_i^\star H_{ij}a_j=\lambda\sum_jb_j^\star a_j. +$$ +Hence, we see that the component $x=\sum_jb_j^\star a_j$ simply satisfies +$$ +i\frac{dx}{dt}=\lambda x, +$$ +so $x(t)=e^{-i\lambda t}x(0)$. In other words, a state initially created as an eigenvector $|\lambda\rangle$ stays in that state and just acquires a phase over time $e^{-i\lambda t}|\lambda\rangle$. Hence, eigenvectors of $H$ are also eigenvectors of the unitary $U$, with eigenvalues $e^{-i\lambda t}$, and these have modulus 1.
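This can be checked numerically by building $e^{-iHt}$ from the spectral decomposition of a random Hermitian matrix (a NumPy sketch):

```python
import numpy as np

rng = np.random.default_rng(1)
a = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
h = (a + a.conj().T) / 2  # a random Hermitian "Hamiltonian"
evals, v = np.linalg.eigh(h)

t = 0.7
u_t = v @ np.diag(np.exp(-1j * evals * t)) @ v.conj().T  # e^{-iHt}

assert np.allclose(u_t @ u_t.conj().T, np.eye(3))        # unitary
assert np.allclose(np.abs(np.linalg.eigvals(u_t)), 1.0)  # eigenvalues e^{-i lambda t}
```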

+",1837,,,,,10-10-2018 07:10,,,,0,,,,CC BY-SA 4.0 +4384,1,,,10-10-2018 21:49,,5,322,"

Lovas and Andai (https://arxiv.org/abs/1610.01410) have recently established that the separability probability (ratio of separable volume to total volume) for the nine-dimensional convex set of two-re[al]bit states (representable by $4 \times 4$ “density matrices” with real entries) is $\frac{29}{64}$. The measure employed was the Hilbert-Schmidt one. Building upon this work of Lovas and Andai, strong evidence has been adduced that the corresponding result for the (standard) fifteen-dimensional convex set of two-qubit states (representable by density matrices with off-diagonal complex entries) is $\frac{8}{33}$ (https://arxiv.org/abs/1701.01973). (A density matrix is Hermitian, positive definite with unit trace.) Further, with respect to a different measure (one of the class of “monotone” ones), the two-qubit separability probability appears, quite strikingly, to be $1-\frac{256}{27 \pi^2}=1-\frac{2^8}{3^3 \pi^2}$. Further, exact values appear to have been found for higher-dimensional sets of states, endowed with Hilbert-Schmidt measure, such as $\frac{26}{323}$ for the 27-dimensional set of two-quater[nionic]bit states.

+ +

Now, perhaps the measure upon the quantum states of greatest interest is the Bures (minimal monotone) one (https://arxiv.org/abs/1410.6883). But exact computations pertaining to it appear to be more challenging. Lower-dimensional analyses (having set numbers of entries of the density matrices to zero) have yielded certain exact separability probabilities such as $\frac{1}{4}, \frac{1}{2}, \sqrt{2}-1$ (https://arxiv.org/abs/quant-ph/9911058). +Efforts to estimate/determine the (15-dimensional) two-qubit Bures separability probability have been reported in https://arxiv.org/abs/quant-ph/0308037.

+ +

Recently (https://arxiv.org/abs/1809.09040, secs. X.B, XI), we have undertaken large-scale numerical simulations—employing both random and quasi-random [low-discrepancy] sequences of points—in this matter. Based on 4,372,000,000 randomly-generated points, we have obtained an estimate of 0.0733181. Further, based on ongoing quasi-randomly-generated (sixty-four-dimensional) points, for which convergence should be stronger, we have obtained independent estimates of 0.0733181 and (for a larger sample) 0.0733117.

+ +

One approach to the suggestion of possible associated exact formulas, is to feed the estimates into http://www.wolframalpha.com and/or https://isc.carma.newcastle.edu.au (the Inverse Symbolic Calculator) and let it issue candidate expressions.

+ +

Certainly, $\frac{8}{11 \pi^2} \approx 0.073688$ would qualify as “elegant”, as well as $\frac{11}{150} \approx 0.073333$, but they do not seem to have the precision required. Also, since in the two cases mentioned above, we have the “entanglement probabilities” of
+$\frac{25}{33} =1 -\frac{8}{33}$ and $\frac{256}{27 \pi^2} =1-\left(1-\frac{256}{27 \pi^2}\right)$, it might be insightful to think in such terms.

+ +

Bengtsson and Zyczkowski (p. 415 of “Geometry of quantum states: an introduction to quantum entanglement” [2017]) have observed “that the Bures volume of the set of mixed states is equal to the volume of an $(N^2-1)$-dimensional hemisphere of radius $R_B=\frac{1}{2}$”. It is also noted there that $R_B$ times the area-volume ratio asymptotically increases with the dimensionality $D=N^2-1$, which is typical for hemispheres. The Bures volume of the $N$-dimensional qubit density matrices is given by $\frac{2^{1-N^2} \pi ^{\frac{N^2}{2}}}{\Gamma \left(\frac{N^2}{2}\right)}$, which for $N=4$, gives $\frac{\pi ^8}{165150720}=\frac{\pi^8}{2^{19} \cdot 3^2 \cdot 5 \cdot 7}$.

+ +

Additionally, we have similarly investigated the two-rebit Bures separability probability question, with estimates being obtained of 0.1570934 and (larger sample) 0.1570971. But our level of confidence that some exact simple elegant formula exists for this probability is certainly not as high, based upon the Lovas-Andai two-rebit result for the particular monotone metric they studied.

+ +

Strongly related, with a slightly different focus, to this issue is my question Estimate/determine Bures separability probabilities making use of corresponding Hilbert-Schmidt probabilities

+",3089,,55,,12-04-2021 12:42,12-04-2021 12:42,"Suggest, partly based upon limited numerical results, possible “elegant” exact formulas for Bures two-qubit separability probability",,1,3,,,,CC BY-SA 4.0 +4385,1,,,10-11-2018 13:55,,3,191,"

In many algorithms, an array that stores classical or quantum data is crucial. QRAM (quantum random access memory), which stores classical data in a circuit, has already been done. My question focuses on storing a given normalized d-dimensional vector using a quantum circuit. To do that I have used a generalized n-controlled phase-shift gate between two Hadamard gates, rotated by the specified angle. But when I try to measure the data register I cannot find what I want. What shall I do?

+",4206,,26,,12/23/2018 11:33,12/23/2018 11:33,Simulating Read only QRAM (constructing an oracle),,0,5,,10/24/2018 2:16,,CC BY-SA 4.0 +4386,1,4417,,10-11-2018 20:29,,5,506,"

I'm working through a problem set, and I've come across the following problem:

+ +
+

In this problem, you'll explore something that we said in class about the Many-Worlds Interpretation of quantum mechanics: namely, that ""two branches interfere with each other if and only if they produce an outcome that's identical in all respects."" Consider the n-qubit ""Schrodinger cat states"" + $$\frac{|0\cdots 0 \rangle + |1 \cdots 1 \rangle}{\sqrt2}$$ + a) What probability distribution over n-bit strings do we observe if we Hadamard the first $n-1$ qubits, then measure all n qubits in the $\{ |0 \rangle , |1 \rangle \}$ basis?

+ +

b) Is this the same distribution or a different one than if we had applied the same measurement to the state + $$\frac{|0 \cdots 0 \rangle \langle 0 \cdots 0 | + |1 \cdots 1 \rangle \langle 1 \cdots 1 |}{2}$$

+ +

c) What probability distribution over n-bit strings do we observe if we Hadamard all $n$ qubits, then measure all n qubits in the $\{ |0 \rangle , |1 \rangle \}$ basis?

+ +

d) Is this the same distribution or a different one than if we had applied the same measurement to the state + $$\frac{|0 \cdots 0 \rangle \langle 0 \cdots 0 | + |1 \cdots 1 \rangle \langle 1 \cdots 1 |}{2}$$

+
+ +

I have solved the problem as follows:

+ +

a) Equal probability of seeing any n-bit string of qubits.

+ +

b) Different: that mixed state has a 50/50 shot of seeing $|0 \cdots 0 \rangle$ or $|1 \cdots 1 \rangle$

+ +

c) Equal probability of seeing any n-bit string of qubits that have an even number of $|1\rangle$s

+ +

d) Different. Same state as b.
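(To double-check the claims above, here is a small statevector simulation in numpy for the case $n=3$; the code and variable names are my own.)

```python
import numpy as np

n = 3
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I2 = np.eye(2)

def kron_all(ops):
    # Tensor product of a list of single-qubit operators
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

# n-qubit cat state (|0...0> + |1...1>)/sqrt(2)
ghz = np.zeros(2 ** n)
ghz[0] = ghz[-1] = 1 / np.sqrt(2)

# (a) Hadamard the first n-1 qubits, then measure all n qubits
p_a = np.abs(kron_all([H] * (n - 1) + [I2]) @ ghz) ** 2

# (c) Hadamard all n qubits, then measure all n qubits
p_c = np.abs(kron_all([H] * n) @ ghz) ** 2

print(np.round(p_a, 3))  # uniform: 0.125 on every string
print(np.round(p_c, 3))  # 0.25 on the even-parity strings, 0 elsewhere
```

The first printout is uniform at $0.125$, and the second puts $0.25$ on exactly the even-parity strings, matching a) and c).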

+ +
+ +

What I don't understand is what this has to do with the Many-Worlds Interpretation! Could someone explain the significance of this exercise? Thanks!

+",4707,,55,,11-07-2019 11:10,11-07-2019 11:10,Many-Worlds Interpretation and GHZ States,,2,10,,,,CC BY-SA 4.0 +4387,2,,4386,10-11-2018 21:04,,1,,"

You are 100% correct that this question has nothing to do with the Everett interpretation (also known as "many-worlds interpretation") of quantum mechanics, and in fact I would even agree with your professor's description of the many-worlds interpretation that "two branches interfere with each other if and only if they produce an outcome that's identical in all respects."

+

The "many-world's interpretation" is an interpretation of wavefunction collapse in which if the wavefunction is in the state $\frac{1}{\sqrt{2}}\left(|0\rangle + |1\rangle\right)$ and you get $|0\rangle$ when you measure the state, there is a different parallel universe somewhere in which $|1\rangle$ was measured.

+

The picture right at the top of the many-worlds Wikipedia page depicts exactly this (the cat dies in one universe and survives in another):

+

+

What does it mean to be a "different" universe, if the universe is everything? Well Hawking talked about this in the first page of his book Brief History of Time. He defines our universe to be literally everything that is physically capable of having any influence on any measurement that anything in our universe ever makes. Therefore there is not much value in talking about other universes, because by definition we will not be able to make any measurements to test any theories we have about other universes, and if we could, then by definition they would be part of our universe (so not actually a different universe).

+

So certainly use the many-worlds interpretation to conceptualize wavefunction collapse if it helps you, or think of the Bohr interpretation or the Einstein interpretation (whatever helps you conceptualize wavefunction collapse). In all cases, the outcome of any measurements in our universe should not change, and that's why these are called interpretations and not theories (it's called "many-worlds interpretation" not "many-worlds theory").

+

So it was a trick question. Probabilities do not depend on the "interpretation" they use, they depend on the theory you use: classical mechanics, quantum mechanics (Schroedinger's equation), quantum electrodynamics, quantum chromodynamics, string theory, loop quantum gravity, or whatever you wish.

+

You are therefore correct, and may also want to point out to your professor that s/he is missing a square root symbol in the denominator, otherwise these states are not normalized and the probabilities given as the square moduli of the coefficients, will not add up to 1, which violates the definition of probability that all probabilities should add up to 1.

+

*The image is courtesy of Christian Schirm who made the image himself and put it on Wikipedia with the Creative Commons license.

+",2293,,-1,,6/18/2020 8:31,10-11-2018 21:04,,,,9,,,,CC BY-SA 4.0 +4388,1,4389,,10-11-2018 23:50,,7,135,"

Why is the $R_z$ gate sometimes written as:

+ +

$$ +R_{z}\left(\theta\right)=\begin{pmatrix}1 & 0\\ +0 & e^{i\theta} +\end{pmatrix}, +$$

+ +

while other times it is written as:

+ +

$$ +R_{z}\left(\theta\right)=\begin{pmatrix}e^{-i\theta/2} & 0\\ +0 & e^{i\theta/2} +\end{pmatrix}, +$$

+ +

and even as:

+ +

$$ +R_{z}\left(\theta\right)=\begin{pmatrix}e^{-i\theta} & 0\\ +0 & e^{i\theta} +\end{pmatrix}? +$$

+ +

And when we see an algorithm diagram using this gate, does it always work the same regardless of what the $R_z$ representation is ? Do the papers not always say what their standard representation is?

+",4819,,26,,12/23/2018 11:31,12/23/2018 11:31,$R_z$ gate representations,,3,4,,,,CC BY-SA 4.0 +4389,2,,4388,10-12-2018 00:10,,3,,"

Two representations of $R_z$ are equivalent if they are the same modulo only a global phase.

+ +

$$ +\begin{pmatrix}1 & 0\\ +0 & e^{i\theta} +\end{pmatrix} += +e^{+i\theta/2}\begin{pmatrix}e^{-i\theta/2} & 0\\ +0 & e^{i\theta/2} +\end{pmatrix} +$$

+ +

If you apply this gate to any state $|\psi\rangle$, the only difference in the outcome is a global (constant) phase of $e^{i\theta/2}$. This cannot be detected by any measurement.

+ +

For example measurements on the state:

+ +

$$ +|\psi\rangle = \frac{1}{\sqrt{2}}\left(|0\rangle + |1\rangle \right) +$$

+ +

will result in $|0\rangle$ with a probability of $\left|\frac{1}{\sqrt{2}}\right|^2 = \frac{1}{2}$, and
+will result in $|1\rangle$ with a probability of $\left|\frac{1}{\sqrt{2}}\right|^2 = \frac{1}{2}$.

+ +

Now consider the state: +$$ +|\psi\rangle = \frac{e^{i\theta/2}}{\sqrt{2}}\left(|0\rangle + |1\rangle \right) +$$

+ +

Measurements will:
+result in $|0\rangle$ with a probability of $\left|\frac{e^{i\theta/2}}{\sqrt{2}}\right|^2 = \frac{1}{2}$, and will
+result in $|1\rangle$ with a probability of $\left|\frac{e^{i\theta/2}}{\sqrt{2}}\right|^2 = \frac{1}{2}$.

+ +

Two gates that are equivalent up to a global phase (one that's the same for all components) are equivalent for the purpose of anything you will be able to detect by measurement.
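A quick numerical check of this (a numpy sketch with an arbitrary $\theta$ and an arbitrary state; the variable names are mine):

```python
import numpy as np

theta = 1.23
Rz1 = np.diag([1, np.exp(1j * theta)])                            # first convention
Rz2 = np.diag([np.exp(-1j * theta / 2), np.exp(1j * theta / 2)])  # second convention

# The two matrices differ only by the global phase e^{i theta/2}
assert np.allclose(Rz1, np.exp(1j * theta / 2) * Rz2)

# So measurement statistics on any state are identical
psi = np.array([0.6, 0.8j])
p1 = np.abs(Rz1 @ psi) ** 2
p2 = np.abs(Rz2 @ psi) ** 2
print(p1, p2)  # both [0.36, 0.64]
```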

+",2293,,2293,,10-12-2018 00:16,10-12-2018 00:16,,,,2,,,,CC BY-SA 4.0 +4390,2,,4388,10-12-2018 00:20,,3,,"

Note that the first two are proportional, with an $e^{i \theta /2}$ factor. Even as you tensor with other gates and continue multiplying, this just comes out in front as an unobservable global phase. The third can be distinguished by looking at the range of angles it allows ($0$ to $2\pi$, etc.). If the authors fail to state their conventions, you could find the conventions described in their previous writings — maybe they forgot in the current paper, but remembered before. Also, if they are working with some particular software elsewhere, it seems likely they are using the same conventions as that software does.

+",434,,,,,10-12-2018 00:20,,,,0,,,,CC BY-SA 4.0 +4391,2,,4388,10-12-2018 08:41,,2,,"

As already explained, the first two are equivalent.

+ +

If no convention is given, the first two are being meant - in that case, one needs to rotate by $2\pi$ to get the identity (up to a global phase), which is the natural convention. (Also, it is the evolution generated by the spin-1/2 operator $S_z=\tfrac12\sigma_z$ in time $t=\phi$.)

+ +

If $R_z(\phi)$ is defined, any definition is ok - though one deviating from the standard one might not be the wisest choice.

+",491,,491,,10-12-2018 08:50,10-12-2018 08:50,,,,0,,,,CC BY-SA 4.0 +4392,1,,,10-12-2018 14:08,,2,381,"

I have this as my minor project and plan to use Shor' Algorithm for factorization to figure out the $p$ and $q$ factors for cracking. Although I understand the theoretical part, I am having some problems in implementing it.

+",4823,,2293,,10/23/2018 21:09,10/23/2018 21:09,How can Shor's algorithm be used to crack 32 bit RSA Encryption?,,1,4,,10-12-2018 19:29,,CC BY-SA 4.0 +4393,2,,4392,10-12-2018 15:50,,1,,"

As Craig Gidney has said in the first comment, you will have to tell us which specific part of Shor's algorithm you need assistance with, since asking us to work through an entire 32-bit example would be considered ""doing your homework for you"".

+ +

However I see no harm in helping you with the non-quantum part of it, and I think this will be useful for other users who want to learn RSA without reading a rather large Wikipedia page or tutorial which is catered more towards pure math students.

+ +
+ +

In RSA you will have a public key which contains two numbers: $n$ and $e$. You need to factor $n$ into $p\times q$. Once you have $p$ and $q$ you can calculate Carmichael's totient function of $n$, which is called $\lambda$ and is equal to the lowest common multiple of $p-1$ and $q-1$.

+ +

The ""private key"" $d$ can be found by solving: $e\times d = 1\hspace{-3mm}\mod\hspace{-1mm} \lambda$.

+ +

You can now crack the code $c$ to get the original message $m$:

+ +

$m=c^d \bmod n$.
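Here is a minimal Python sketch of this classical post-processing (the function name crack_rsa and the toy key are my own; pow(e, -1, lam) needs Python 3.8+):

```python
from math import gcd

def crack_rsa(n, e, c, p):
    """Recover the message from ciphertext c, given a factor p of n.
    Finding p is the step Shor's algorithm performs."""
    q = n // p
    lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)  # Carmichael totient: lcm(p-1, q-1)
    d = pow(e, -1, lam)                           # private key: e*d = 1 (mod lam)
    return pow(c, d, n)

# Toy key: n = 61 * 53, e = 17; encrypt m = 65, then crack it
n, e = 3233, 17
c = pow(65, e, n)
print(crack_rsa(n, e, c, 61))  # -> 65
```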

+",2293,,2293,,10-12-2018 21:33,10-12-2018 21:33,,,,1,,,,CC BY-SA 4.0 +4394,2,,4384,10-12-2018 17:24,,2,,"

Use of the Inverse Symbolic Calculator (https://isc.carma.newcastle.edu.au) suggested the possible exact two-qubit separability probability of $\frac{629}{8580} =\frac{17 \cdot 37}{2^2 \cdot 3 \cdot 5 \cdot 11 \cdot 13} \approx 0.07331002$, quite close to the current estimate of ours, based on the largest sample, of 0.0733116 (employing 4,945,000,000 realizations of a quasirandom sequence of interest per the answer of Martin Roberts to https://math.stackexchange.com/questions/2231391/how-can-one-generate-an-open-ended-sequence-of-low-discrepancy-points-in-3d).

+ +

As the estimate further evolves, we may be led to modify/expand (if possible) upon this current ""suggestion"".

+ +

Here is a plot of estimates based upon two quasirandom sequences (one using $\alpha_0=0$ and one [the indicated longer one, using $\alpha_0 =\frac{1}{2}$] in the terminology of Roberts) along with the line $\frac{629}{8580}$.

+ +

+",3089,,3089,,10-12-2018 17:35,10-12-2018 17:35,,,,0,,,,CC BY-SA 4.0 +4395,1,4405,,10-12-2018 20:43,,10,1292,"

In a 2D surface code lattice, there are some data qubits and some measurement qubits. Suppose we want to do a 2-qubit computation: for example, say, an X gate on qubit 1 followed by a CNOT gate with qubit 1 as the control bit and qubit 2 as the target bit.

+ +

Q: How is this computation realized in a quantum computer with a 2D surface code arrangement of qubits? i.e. Which gates are applied and on which qubits?

+",4722,,2293,,10-12-2018 23:46,10/18/2018 10:22,How is computation done in a 2D surface code array?,,3,1,,,,CC BY-SA 4.0 +4396,2,,4395,10-12-2018 22:55,,5,,"

One way to store qubits in the surface code is as pairs of ""holes"". A hole is a chunk of the surface where, instead of performing the stabilizer measurements used to detect whether errors are occurring, you do nothing.

+ +

There are two different types of hole, depending on whether the boundary of the hole travels along would-be X measurement qubits or along would-be Z measurement qubits. A CNOT is performed by cycling a hole of one type around a hole of the other type.

+ +

Diagrammatically speaking, it looks like this:

+ +

+ +

In the (b) diagram, time is moving from left to right. Each bar corresponds to the location of a hole over time. Each qubit is stored between the corresponding pairs of white bars. The black bar represents the hole being used to perform the CNOT. It avoids the middle qubit (which is not involved), surrounds one of the bars of the bottom qubit (which is the target), and goes around a 'cross-bar' introduced into the top qubit (which is the control). That's what a surface code CNOT looks like.

+",119,,,,,10-12-2018 22:55,,,,2,,,,CC BY-SA 4.0 +4397,1,4400,,10/14/2018 22:18,,3,1132,"

Let $\vert s\rangle = \frac{1}{\sqrt{N}}\sum_{i=1}^N\vert x_i\rangle$ be an equal superposition over states from which we need to find one solution state $\vert w\rangle$.

+ +

The phase flip operator in Grover's search is $I - 2\vert w\rangle\langle w\vert$. Next, one inverts about the mean with the operator $2\vert s\rangle\langle s\vert - I$. We do this, say $\sqrt{N}$ times and then measure the state.

+ +

My question is why flip about the mean? What other options would one have that is still unitary and why are they less optimal than choosing the mean?

+",4831,,,,,06-12-2019 07:40,Why does Grover's search invert about the mean?,,1,0,,,,CC BY-SA 4.0 +4398,2,,1474,10/14/2018 22:53,,3,,"

I would include IBM's Composer. It doesn't feel like programming because you don't get all the bugs and errors and functionality, but it clearly converts your instructions on gates into QASM and runs it on a real IBM simulator, or a real IBM quantum computer (the choice is the user's).

+",4833,,2293,,10/15/2018 3:45,10/15/2018 3:45,,,,0,,,,CC BY-SA 4.0 +4399,1,,,10/14/2018 23:32,,2,127,"

I have been interested in the idea of computer clustering, which is about making multiple physical computer systems act as one whole logical computer system computing the same task at the same time. Has this idea been done or proposed before in the quantum computing world?

+",1313,,1313,,10/16/2018 0:07,6/17/2020 17:26,Can quantum computers be clustered together?,,2,0,,,,CC BY-SA 4.0 +4400,2,,4397,10/15/2018 4:10,,6,,"

It doesn't have to be an inversion about the mean.

+ +

Let $R$ be the ""reflect-a-vector operator"", meaning

+ +

$$R(v) = I - 2 |v\rangle \langle v|$$

+ +

Grover's algorithm works by starting in some state $|d\rangle$ and then alternating two reflection operations, $R(s)$ and $R(d)$, where $s$ is the solution vector and $d$ is a ""diffusion vector"". The choice of $d$ affects the speed of the algorithm. Basically, the more $d$ aligns with $s$ (the closer they are to parallel), the faster you will go. The problem is that you don't know what $s$ is, so you need to pick a $d$ that works okay for any possible $s$.

+ +

The simplest $d$ that works equally well for every possible $s$, and the $d$ that Grover happened to use, is the normalized sum of each possible $s$. That is to say, you set $d= \frac{1}{\sqrt{N}}\sum_{k=0}^{N-1} |k\rangle = |+\rangle^{\otimes \lg N}$. This $d$ is the average of all the solutions, so it inverts about the average.

+ +

Another perfectly acceptable choice of $d$ is $d = \frac{1}{\sqrt{N}}\sum_{k=0}^{N-1} (-1)^{\text{HammingWeight}(k)} |k\rangle = |-\rangle^{\otimes \lg N}$. For example, this is the state used in Quirk's example Grover circuit. Yet another perfectly acceptable choice of $d$ is the Fourier transform of any $|k\rangle$, e.g. $d = \text{QFT} \cdot |1\rangle = \frac{1}{\sqrt{N}}\sum_{k=0}^{N-1} e^{2 \pi i k / N} |k\rangle$.

+ +

More generally, any $d$ that can be written in the form $\frac{1}{\sqrt{N}}\sum_{k=0}^{N-1} e^{i \theta_k}|k\rangle$ will work. As long as $|\langle d|k \rangle|^2 = 1/N$ for all $k$, you're good to go ... except that not all choices have a nice compact circuit. For that reason, you should stick to values of $\theta_k$ that factor across the qubits, i.e. states that can be factorized into the form $\otimes_{q=0}^{\lg N - 1} Z^{\phi_q}|+\rangle$.
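As a sanity check, here is a small numpy sketch (the function and variable names are mine) showing that the $|+\rangle^{\otimes n}$ and $|-\rangle^{\otimes n}$ diffusion states give the same success probability:

```python
import numpy as np

n = 4                  # lg N
N = 2 ** n
target = 6
s = np.zeros(N)
s[target] = 1          # the (unknown) solution state |6>

def reflect(v):
    # R(v) = I - 2 |v><v|
    return np.eye(len(v)) - 2 * np.outer(v, np.conj(v))

def grover_success(d, iterations):
    # Start in |d>, alternate the two reflections, return P(target).
    # reflect(d) @ reflect(s) is the Grover iterate up to a global sign.
    psi = d.astype(complex)
    for _ in range(iterations):
        psi = reflect(d) @ (reflect(s) @ psi)
    return abs(psi[target]) ** 2

plus = np.ones(N) / np.sqrt(N)                                                # |+>^n
minus = np.array([(-1) ** bin(k).count("1") for k in range(N)]) / np.sqrt(N)  # |->^n

its = int(round(np.pi / 4 * np.sqrt(N)))
print(grover_success(plus, its), grover_success(minus, its))  # both ~0.96
```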

+",119,,119,,10/15/2018 4:34,10/15/2018 4:34,,,,4,,,,CC BY-SA 4.0 +4401,2,,4399,10/15/2018 4:47,,4,,"

Yes, this has been thought about. For example, the plan for scaling up ion trap computers involves having multiple ""modules"", each with a few dozen qubits. When qubits in separate modules need to interact, they are moved to the same module using quantum teleportation or some other quantum channel. Each ""module"" is like a little quantum computer in a cluster, with the cluster forming a larger quantum computer.

+ +

Example paper: Co-designing a scalable quantum computer with trapped atomic ions

+",119,,,,,10/15/2018 4:47,,,,1,,,,CC BY-SA 4.0 +4402,1,,,10/15/2018 11:05,,2,526,"

So I am getting this error when I fire the command on Ipython or Jupyter Notebook

+ +
<ipython-input-14-8c9b14139f9d> in <module>()
+           from qiskit import IBMQ
+
+ImportError: cannot import name 'IBMQ'
+
+ +

Does anyone know what kind of error this is?
+If anyone has experienced such errors while using qiskit, please help here.

+",4446,,26,,03-12-2019 09:20,03-12-2019 09:20,Qiskit SDK problem,,0,9,,10-06-2020 06:56,,CC BY-SA 4.0 +4403,1,4409,,10/15/2018 12:37,,8,1082,"

I'm working my way through the book ""Quantum computation and quantum information"" by Nielsen and Chuang. (EDIT: the 10th anniversary edition).

+ +

On chapter 3 (talking about reversibility of the computation) exercise 3.32, it is possible to see that the minimum number of Toffoli gates required to simulate Fredkin gate is 4. See Andrew Landahl's notes for more details.

+ +

By the end of Chapter 3, it is also stated:

+ +
+

From the point of view of quantum computation and quantum information, + reversible computation is enormously important. To harness the full power of quantum computation, any classical subroutines in a quantum computation must be performed reversibly and without the production of garbage bits depending on the classical input.

+
+ +

On chapter 4 exercise 4.25, we construct the Fredkin gate using only 3 Toffoli gates.

+ +

The question is: what's the difference between simulation and construction of Fredkin gate using Toffoli gates?

+",4504,,26,,12/23/2018 7:39,12/23/2018 7:39,Simulation vs Construction of Fredkin gate with Toffoli gates,,2,2,,,,CC BY-SA 4.0 +4404,2,,1584,10/15/2018 12:39,,4,,"

Similar to Blue's picture, I like this one from Quanta Magazine better, since it seems to visually summarize what we are talking about.

+",1408,,,,,10/15/2018 12:39,,,,1,,,,CC BY-SA 4.0 +4405,2,,4395,10/15/2018 12:51,,11,,"

I will illustrate how one can perform operations using logical operations on the qubits, and using lattice surgery for two-qubit operations.

+ +

In the diagrams below, all of the 'dots' are data qubits: measurement qubits are omitted in order to help demonstrate the basic principles more clearly. The measurement qubits are there for when you perform stabiliser measurements, and are only ever involved in stabiliser measurements, so the story is about what you do with the data qubits — including such things as the stabiliser measurements that you perform on the data qubits.

+ +

Surface codes and logical single-qubit Pauli operations

+ +

One can use fragments of the plane to store qubits. The image below shows four qubits which are encoded as part of a larger lattice: the light dots with black outlines are qubits which are not involved in the encoded qubits, and can in principle be in any state that you like, unentangled from the rest. +
$\qquad\qquad\qquad$

+In each of these fragments, the qubit is defined by the stabiliser relations which (ideally, in the absence of errors) hold among the qubits. For surface codes with the sorts of boundary conditions illustrated here, these are either 3-qubit X or Z stabilisers around the boundary, and either 4-qubit X or Z stabilisers in the 'bulk' or body of the code. The pattern of these stabilisers is illustrated below. Note that each X stabiliser that overlaps with a Z stabiliser, does so at two qubits, so that they commute with one another. (Apologies for the image size: I cannot manage to get it to display at a reasonable size.) +

$\qquad\qquad\qquad$

+Note that by using the regularity of these stabilisers, it isn't necessary for the surface code fragment to be square (or even in principle rectangular).* This will become important later.

+ +

There are a number of (tensor product) Pauli operations which commute with all of these stabilisers. These may be used to define logical Pauli operators, which describe ways in which you can both access and transform the logical qubits. For instance, a product of Z operators across any row from boundary to boundary will commute with all stabilisers, and can be taken to represent a logical Z operator; a product of X operators across any column from boundary to boundary can similarly be take to represent a logical X operator: +

$\qquad$

+It doesn't matter which row or which column you use: this follows from the fact that a product of any two rows of Z operators, or of any two columns of X operators, can be generated as a product of stabilisers and therefore realises an identity operation on the encoded qubit (as the stabiliser generators themselves are operators which perform the identity operation on an encoded qubit state, by definition). So: if you want to apply an X operation to an encoded qubit, one way to do so would be to apply such a logical X operation, by realising X operators on each qubit in a column reaching between two boundaries.**

+ +

Logical single-qubit Pauli measurements

+ +

One advantage of thinking of the encoded qubits in terms of logical operators is that it allows you to also determine how you can perform a 'logical measurement' — that is, a measurement not only of (some of) the qubits in the code, but of the data that they encode.
+Take the logical X operator above, for example: the operator X⊗X⊗...⊗X is not only unitary, but Hermitian, which means that it is an observable which you can measure. (The same idea is used all the time with the stabilisers of the code, of course, which we measure in order to try to detect errors.)
+This means that in order to realise a logical X measurement, it is enough to measure the logical X observable.
+(The same goes for the logical Z observable, if you want to realise a standard basis measurement on your encoded qubit; and everything I say below can also be applied to logical Z measurements with the appropriate modifications.)

+ +

Now — measuring the logical X observable is not exactly the same as measuring each of those single-qubit X operators one at a time. The operator X⊗X⊗...⊗X has only two eigenvalues, +1 and −1, so measuring that precise operator can only have two outcomes, whereas measuring each of n qubits will have 2^n outcomes.
+Also, measuring each of those single-qubit X operators will not keep you in the code-space: if you want to do computations on a projected post-measurement state, you would have to do a lot of clean-up work to restore the qubit to a properly encoded state.

+ +

However: if you don't mind doing that clean-up work, or if you don't care about working with the post-measurement state, you can simulate the logical X measurement by doing those single-qubit measurements, obtaining +1 and −1 outcomes, and then computing their products to obtain what the result of the measurement of X⊗X⊗...⊗X ""would have"" been.
+(More precisely: measuring all of those single-qubit X operators is something that does not disturb a state which would result from a measurement of the tensor product operator X⊗X⊗...⊗X, and the product of those single-qubit measurements would have to yield a consistent outcome with the tensor product operator X⊗X⊗...⊗X, so we can use this as a way to simulate that more complicated measurement if we don't mind all of the qubits being projected onto conjugate basis states as a side-effect.)
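As a toy illustration of this parity trick (not a surface code, just the two-qubit state $(|00\rangle+|11\rangle)/\sqrt 2$, which is a +1 eigenstate of X⊗X), a numpy sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

# (|00> + |11>)/sqrt(2) is a +1 eigenstate of the observable X(x)X
psi = np.array([1, 0, 0, 1]) / np.sqrt(2)

# Measuring each qubit in the X basis = Hadamard each qubit, then measure in Z
probs = np.abs(np.kron(H, H) @ psi) ** 2

for _ in range(1000):
    outcome = rng.choice(4, p=probs)
    bits = [(outcome >> 1) & 1, outcome & 1]
    # The product of the two +1/-1 outcomes always equals the X(x)X eigenvalue
    assert (-1) ** sum(bits) == +1
print("every product of single-qubit X outcomes was +1")
```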

+ +

Lattice surgery for logical two-qubit operations

+ +

To realise a two-qubit operation, you can use a technique known as lattice surgery, wherein you 'merge' and 'split' different patches of the 2D lattice to realise operations between those patches (see [arXiv:1111.4022], [arXiv:1612.07330], or [arXiv:1704.08670] for complete descriptions of these operations. Disclosure: I am an author on the third of these articles.) +This may be realised between two adjacent patches of the planar lattice (as illustrated above) by preparing those ""uninvolved"" rows and columns of qubits in a suitable state, and then measuring stabilisers which previously you were not measuring in order to extend the memory to a larger system. +(In the diagram below, the horizontal spacing between the code segments and the column of qubits in the |0⟩ states is exaggerated for effect.) +

$\qquad$

+ +

This will affect the logical operators of the system in a non-unitary way, and is most often used (see [arXiv:1612.07330] for example) to realise a coherent XX or ZZ measurement, which can be composed to realise a CNOT operation [arXiv:1612.07330, Fig. 1(b)]: +
$\qquad\qquad\qquad$
+In this way, you can realise a CNOT operation between a pair of encoded qubits.***

+ +

Footnotes

+ +

* You can also use slight modifications of the regular pattern of stabilisers, as Letinsky [arXiv:1808.02892] demonstrates, to achieve more versatile planar-surface representations of encoded qubits.

+ +

** In practice, rather than explicitly performing (imperfect single-qubit) operations, you would take advantage of the fact that the frame of reference for the encoded qubits is one which you are fixing by convention, and update (or 'transform') the reference frame rather than the state itself when you wish to realise a Pauli operation. This is the smart way to go about error correction as well: to treat errors not as 'mistakes' which must be 'fixed', but as an uncontrolled but observable drift in your reference frame as a result of interaction with the environment. You then hope that this drift is slow enough that you can track it accurately, and compensate for the change in reference frame when you do your computation. Particularly in the context of tracking errors, this reference frame is described as the Pauli frame, and its job is to describe the frame of reference in terms of the Pauli operations which would be required to put the system in the state usually described by an error-free error correcting code.

+ +

*** Many authors would describe this construction as the point of lattice surgery, and it is certainly the original concrete application of it described in the original article [arXiv:1111.4022]. It is possible in principle to do more elaborate operations using splits and merges, by treating the merges and splits as primitive operations in their own right rather than just the components of a CNOT, and using more versatile (but not especially circuit-like) transformations — this is essentially the point of my article with Dom Horsman [arXiv:1704.08670], which opens up the possibility of the ZX calculus (a somewhat heterodox representation of quantum computation) to be directly practically useful for surface-code memories.

+",124,,124,,10/18/2018 10:22,10/18/2018 10:22,,,,3,,,,CC BY-SA 4.0 +4406,2,,4395,10/15/2018 13:12,,3,,"

There are multiple ways to store information in surface codes. Depending on the method you use, there are then multiple ways to do gates. So there's actually a lot to say on this issue!

+ +

Despite the multiplicity of methods, in practical terms they all come to pretty much the same thing: if you want you gate to be kept fault-tolerant by the code, you can only do Clifford gates (such as X, Z, H, CNOT, S). For other gates you'll need to invoke additional mechanisms to become fault-tolerant, such as magic state distillation.

+ +

But you didn't ask for anything beyond Clifford in your example. You just wanted an X and a CNOT. So that makes things easy.

+ +

For a concrete example, let's take the 17 qubit surface code shown below (as depicted in this paper, of which I am an author).

+ +

+ +

This is made up of $n=9$ physical qubits, numbered from $0$ to $8$. There are also 8 ancilla qubits depicted here, but we'll ignore them.

+ +

The dark patches in this image denote stabilizers made of $\sigma_x$ (so we measure the observables $\sigma_x^0 \otimes \sigma_x^1 \otimes \sigma_x^3 \otimes \sigma_x^4$ and $\sigma_x^1 \otimes \sigma_x^2$, for example). The light patches are then the $\sigma_z$ stabilizers.

+ +

One example of an operation that commutes with all stabilizers is to do a $\sigma_x$ rotation on a line of qubits from top to bottom, such as $\sigma_x^0 \otimes \sigma_x^3 \otimes \sigma_x^6$. Another example is to do a line of $\sigma_z$ rotations from left to right, such as $\sigma_z^3 \otimes \sigma_z^4 \otimes \sigma_z^5$.

+ +

All other operations that commute with the stabilizer will be either products of stabilizers themselves (and so act trivially), or they will be equivalent to one of these two examples. So these operations act on our logical qubit. Since one is made of $\sigma_x$s, the other is made of $\sigma_z$s, and they anticommute, it would seem sensible to assign them as the Pauli operators $X$ and $Z$ of the logical qubit

+ +

$$X = \sigma_x^0 \otimes \sigma_x^3 \otimes \sigma_x^6, \quad Z = \sigma_z^3 \otimes \sigma_z^4 \otimes \sigma_z^5$$

+ +

So to do an $X$, you just perform the operation above.

+ +

For a CNOT, one of the many ways to do it is transversally. For this, suppose we have two logical qubits $A$ and $B$. Each is made of many physical qubits, which we'll number $0, 1, 2, \ldots$. So let's use $0_A$ to denote physical qubit $0$ of logical qubit $A$, for example.

+ +

To do ${\rm CNOT}(A,B)$, a CNOT with qubit $A$ as control and $B$ as target, we can then do

+ +

$$ {\rm CNOT}(0_A,0_B) \,\, {\rm CNOT}(1_A,1_B) \,\, {\rm CNOT}(2_A,2_B) \,\, \ldots$$

+ +

To see how this works, we have to look at the logical $|0\rangle$ and $|1\rangle$ states when expressed in the computational basis of the physical qubits.

+ +

Let's use $| \tilde 0 \rangle = |0\rangle^{\otimes n}$ to denote the state where all physical qubits are in state $|0\rangle$, and $| \tilde 1 \rangle = X | \tilde 0 \rangle$ a state with a line of $|1\rangle$s from top to bottom, on a background of $|0\rangle$s.

+ +

We can then simply express the logical $|0\rangle$ state as the superposition of $| \tilde 0 \rangle$ with all the states that you can get to from $| \tilde 0 \rangle$ by applying stabilizers. Logical $|1\rangle$ is similarly the superposition of $| \tilde 1 \rangle$ with all the states that you can get to from $| \tilde 1 \rangle$ by applying stabilizers.

+ +

By thinking of the action of the transversal CNOTs in terms of these states, you should hopefully be able to see how it acts as a CNOT on the logical qubits.
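To see this in a simulation without the overhead of the full surface code, here is a sketch using the simpler 3-qubit repetition code ($|0\rangle_L = |000\rangle$, $|1\rangle_L = |111\rangle$; this substitution is my own, made purely to keep the matrices small — the transversal argument is the same):

```python
import numpy as np

def cnot_perm(control, target, n):
    """Permutation matrix of CNOT(control, target) on n qubits (qubit 0 = MSB)."""
    dim = 2 ** n
    m = np.zeros((dim, dim))
    for i in range(dim):
        bits = [(i >> (n - 1 - q)) & 1 for q in range(n)]
        if bits[control]:
            bits[target] ^= 1
        j = sum(b << (n - 1 - q) for q, b in enumerate(bits))
        m[j, i] = 1
    return m

# Qubits 0-2 carry logical A, qubits 3-5 carry logical B.
transversal = np.eye(64)
for k in range(3):
    transversal = cnot_perm(k, k + 3, 6) @ transversal

def logical(b):
    """|b>_L of the repetition code: |000> or |111>."""
    v = np.zeros(8)
    v[0 if b == 0 else 7] = 1
    return v

for a in (0, 1):
    for b in (0, 1):
        out = transversal @ np.kron(logical(a), logical(b))
        assert np.allclose(out, np.kron(logical(a), logical(a ^ b)))
print("transversal CNOTs act as a logical CNOT")
```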

+",409,,409,,10/16/2018 4:06,10/16/2018 4:06,,,,3,,,,CC BY-SA 4.0 +4407,2,,4403,10/15/2018 14:30,,4,,"

The words ""construct"" and ""generate"" are in practice synonyms when it comes to transformations, but suggest different ways in which we consider what's going on.

+ +
    +
  • ""Construct"" suggests thinking of a FREDKIN gate as a subroutine, which you realise as a composition of more primitive operations.

  • +
  • ""Simulate"" suggests the idea that there is some model (eg. conservative reversible computation) for which FREDKIN is a primitive, and which you are realising by other operations in some other model (eg. quantum computation, or reversible but not necessarily conservative computation) in which it is not a primitive.

  • +
+ +

The second viewpoint is particularly useful when you consider operations which are being realised using a protocol which succeeds only under certain circumstances, or using a protocol which only realises an operation up to some probability of error or up to some precision.

+",124,,124,,10/15/2018 19:52,10/15/2018 19:52,,,,0,,,,CC BY-SA 4.0 +4409,2,,4403,10/15/2018 17:57,,6,,"

I don't think there is a difference between the meanings of ""construct"" and ""simulate"" in this case. Exercise 3.32 of Nielsen and Chuang doesn't actually tell you that you need 4 Toffoli gates to simulate a Fredkin gate, and you can in fact do it using just 3 gates, similar to the construction of the SWAP gate using 3 CNOT gates:

+ +
CCNOT(control1, control2, target)
+CCNOT(control1, target, control2)
+CCNOT(control1, control2, target)
+
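As a quick sanity check, one can build the Toffoli permutation matrices directly (a plain NumPy sketch, not tied to any particular framework) and verify that the three-gate sequence above equals the Fredkin gate:

```python
import numpy as np

def ccnot(control1, control2, target, n=3):
    """Permutation matrix of a Toffoli on n qubits (qubit 0 = most significant)."""
    dim = 2 ** n
    m = np.zeros((dim, dim))
    for i in range(dim):
        bits = [(i >> (n - 1 - q)) & 1 for q in range(n)]
        if bits[control1] and bits[control2]:
            bits[target] ^= 1
        j = sum(b << (n - 1 - q) for q, b in enumerate(bits))
        m[j, i] = 1
    return m

# Fredkin: swap qubits 1 and 2 when qubit 0 is set.
fredkin = np.zeros((8, 8))
for i in range(8):
    b = [(i >> 2) & 1, (i >> 1) & 1, i & 1]
    if b[0]:
        b[1], b[2] = b[2], b[1]
    fredkin[(b[0] << 2) | (b[1] << 1) | b[2], i] = 1

# The three-Toffoli sequence is a palindrome, so multiplying the matrices
# left-to-right or right-to-left gives the same product.
circuit = ccnot(0, 1, 2) @ ccnot(0, 2, 1) @ ccnot(0, 1, 2)
print(np.array_equal(circuit, fredkin))  # True
```

The intuition: when the control qubit is 0 none of the Toffolis fire, and when it is 1 the sequence reduces to the familiar three-CNOT SWAP.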
+ +

The circuit given in Andrew Landahl's notes with 4 gates doesn't seem to perform a Fredkin gate on the three given qubits. Based on the annotations on the circuit provided in the notes, the middle qubit (the z input) ends up in the $y \oplus z$ state, not in the $x(y \oplus z) \oplus z$ state as the Fredkin gate requires (the fifth qubit, which started in $|0\rangle$, ends up in this state instead).

+",2879,,,,,10/15/2018 17:57,,,,6,,,,CC BY-SA 4.0 +4410,2,,4379,10/15/2018 20:01,,6,,"

The conditional min-entropy $\text{H}_{\text{min}}(A | B)_{\rho}$ can be defined for an arbitrary state $\rho$ of a pair of registers $(A,B)$ as +$$ +- \inf_{\sigma} \,\text{D}_{\text{max}}(\rho \| \mathbb{1}\otimes \sigma), +$$ +where the infimum is over all states $\sigma$ of $B$ and $\text{D}_{\text{max}}$ is the quantum relative max-entropy: +$$ +\text{D}_{\text{max}}(P\|Q) = \inf\{\lambda\in\mathbb{R}: P\leq 2^{\lambda} Q\}. +$$ +In contrast, the ordinary conditional entropy $\text{H}(A | B)_{\rho}$ can be expressed as +$$ +- \inf_{\sigma}\, \text{D}(\rho \| \mathbb{1}\otimes \sigma), +$$ +where here $\text{D}$ refers to the ordinary quantum relative entropy. (This expression for the conditional entropy simplifies to something more familiar once you know that the infimum is always achieved by $\sigma = \operatorname{Tr}_{A}(\rho)$, which is not necessarily true for the formula for the conditional min-entropy.)

+ +

It happens to be the case that for a classical-quantum state +$$ +\rho = \sum_x p(x) \,|x\rangle \langle x | \otimes \rho_x +$$ +that the conditional min-entropy $\text{H}_{\text{min}}(A | B)_{\rho}$ is equal to the negative logarithm of the optimal guessing probability.

+ +

It is always the case that +$$ +\text{H}_{\text{min}}(A | B)_{\rho} \leq \text{H}(A | B)_{\rho}, +$$ +for every state $\rho$ and not just classical-quantum states. This follows from the fact that +$$ +\text{D}(\rho \| Q) \leq \text{D}_{\text{max}}(\rho \| Q) +$$ +for every density operator $\rho$ and every positive semidefinite operator $Q$. This inequality follows from the observation that $\rho \leq 2^{\lambda} Q$ implies +\begin{align} +\text{D}(\rho \| Q) & = \operatorname{Tr}(\rho \log(\rho)) - \operatorname{Tr}(\rho \log(Q))\\ +& \leq \operatorname{Tr}(\rho\log(\rho)) - \operatorname{Tr}(\rho\log(2^{-\lambda}\rho))\\ +& = \lambda, +\end{align} +where the inequality makes use of the operator monotonicity of the logarithm function: if $Q \geq 2^{-\lambda}\rho$, then $\log(Q) \geq \log(2^{-\lambda}\rho)$.
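A quick numerical illustration of the final inequality for random full-rank states (a sketch using the closed form $\text{D}_{\text{max}}(\rho\|\sigma) = \log_2 \lVert \sigma^{-1/2}\rho\,\sigma^{-1/2} \rVert$, which holds when $\sigma$ is invertible):

```python
import numpy as np

rng = np.random.default_rng(1)

def random_density(dim):
    a = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    rho = a @ a.conj().T
    return rho / np.trace(rho).real

def log2m(m):
    # Matrix log base 2 of a Hermitian matrix via its eigendecomposition
    w, v = np.linalg.eigh(m)
    return (v * np.log2(np.maximum(w, 1e-300))) @ v.conj().T

rho, sigma = random_density(4), random_density(4)

# D(rho||sigma) = Tr[rho (log rho - log sigma)]
d = np.trace(rho @ (log2m(rho) - log2m(sigma))).real

# D_max(rho||sigma) = log2 of the largest eigenvalue of sigma^{-1/2} rho sigma^{-1/2}
w, v = np.linalg.eigh(sigma)
inv_sqrt = (v / np.sqrt(w)) @ v.conj().T
d_max = np.log2(np.linalg.eigvalsh(inv_sqrt @ rho @ inv_sqrt).max())

print(d <= d_max + 1e-9)  # True
```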

+",1764,,,,,10/15/2018 20:01,,,,0,,,,CC BY-SA 4.0 +4411,1,,,10/15/2018 21:23,,3,586,"

I am trying to run an optimization problem on IBMQ. Running the same code on the QASM simulator works fine. However, changing only the backend name to IBMQX takes a long time. I am aware of the queues, although there seems to be no way to track the status of my job. I have followed the same code structure given in the qiskit-aqua partition example tutorial. Any help would be really appreciated. Thanks.

+",4838,,26,,03-12-2019 09:24,5/31/2019 17:52,How to run algorithms on IBMQ via Qiskit-Aqua?,,3,0,,,,CC BY-SA 4.0 +4412,1,4646,,10/15/2018 21:30,,6,156,"

I'm having trouble understanding the difference between weak fourier sampling and strong fourier sampling. From this paper:

+ +
+

...two important variants of the Fourier sampling paradigm have been + identified: the weak standard method, where only representation names + are measured, and the strong standard method, where full measurement + (i.e., the row and column of the representation, in a suitably chosen + basis, as well as its name) occurs.

+
+ +

Can someone explain like I'm 5 years old?

+",4728,,2293,,10/18/2018 16:05,5/13/2019 21:22,Weak Fourier Sampling vs Strong Fourier Sampling?,,1,2,0,,,CC BY-SA 4.0 +4413,1,4419,,10/15/2018 23:07,,5,365,"

Consider the following circuit, where $F_n$ swaps two n-qubit states.

+ +

+ +

If the inital state is $|0\rangle \otimes |\psi\rangle \otimes |\phi\rangle = |0\rangle|\psi\rangle|\phi\rangle$, the state before measurement is (unless I'm wrong):

+ +

$$\frac{1}{2}\left(|0\rangle \left(|\psi\rangle|\phi\rangle + |\phi\rangle|\psi\rangle\right) + |1\rangle \left(|\psi\rangle|\phi\rangle - |\phi\rangle|\psi\rangle\right)\right)$$

+ +

How to calculate the post measurement distribution for the first qubit, in terms of $|\psi\rangle$ and $|\phi\rangle$?

+",4728,,26,,12/23/2018 11:30,12/23/2018 11:30,Calculating measurement result of quantum swap circuit,,3,0,,,,CC BY-SA 4.0 +4415,1,4421,,10/16/2018 5:17,,8,1052,"

This question builds off of this question.

+ +

In the HHL algorithm, how do you efficiently do the $\tilde{\lambda}_k$-controlled rotations on the ancilla qubit? It seems to me that since you don't know the eigenvalues a priori, you would have to control on every single $\lambda$ within your eigenvalue bounds $[\lambda_{\text{min}},\lambda_{\text{max}}]$ (since every $\lambda$ requires a different rotation angle), requiring a potentially exponential number of controlled rotations.

+ +

I kind of get how you can avoid an exponential number of controls in Shor's algorithm, because we can split up the modular exponentiation $a^x\pmod N$ so that we can deal with each bit of $x$ separately, $a^{2^{k-1}x_{k-1}}a^{2^{k-2}x_{k-2}}...a^{2^0 x_0} \pmod N$, so you only need as many controls as the number of bits of $x$. But I'm not sure how you can do something similar in the case of HHL, because $\tilde{\lambda}_k$ is not only in the denominator, but nested inside an arcsin, e.g. +\begin{align} +\mathrm{Controlled\ Rotation}=\sum_{m \in [\lambda_{\text{min}},\lambda_{\text{max}}]}\underbrace{|m\rangle\langle m|}_{\text{control reg.}} \otimes \underbrace{R_y\left(\sin^{-1}(2C/m) \right)}_{\text{anc reg.}} +\end{align} +where the number of terms in the sum is exponential in the number of bits of precision in $\lambda$. Is there a way to do this more efficiently, and if not, wouldn't this severely eat into the exponential speedup of the algorithm?

+",4841,,26,,4/26/2019 19:38,4/26/2019 19:38,Efficiently performing controlled rotations in HHL,,1,0,,,,CC BY-SA 4.0 +4416,2,,4413,10/16/2018 6:42,,5,,"

Let's start from the state +$$ +|\Psi\rangle=\frac12\left(|0\rangle(|\psi\rangle|\phi\rangle+|\phi\rangle|\psi\rangle)+|1\rangle(|\psi\rangle|\phi\rangle-|\phi\rangle|\psi\rangle)\right). +$$ +There are a couple of ways to do the calculation. If you want to be formal, which typically leads to fewer mistakes, you identify the measurement operators on a single spin +$$ +P_0=|0\rangle\langle 0|\otimes\mathbb{I}^{2n}\qquad P_1=|1\rangle\langle 1|\otimes\mathbb{I}^{2n} +$$ +and you evaluate the probabilities of the two outcomes as +$$ +p_i=\langle\Psi|P_i|\Psi\rangle +$$

+ +

Slightly less formally, but equivalently, you can collect the terms for $|0\rangle$ and $|1\rangle$, much as you have, but make sure the state on the other qubits is normalised. +$$ +|\Psi\rangle=\frac12\left(\sqrt{2+2|\langle\psi|\phi\rangle|^2}|0\rangle\frac{|\psi\rangle|\phi\rangle+|\phi\rangle|\psi\rangle}{\sqrt{2+2|\langle\psi|\phi\rangle|^2}}+\sqrt{2-2|\langle\psi|\phi\rangle|^2}|1\rangle\frac{|\psi\rangle|\phi\rangle-|\phi\rangle|\psi\rangle}{\sqrt{2-2|\langle\psi|\phi\rangle|^2}}\right). +$$ +Then you can read off the probability amplitude for finding the state in $|0\rangle$ or $|1\rangle$, and take the mod-square to get the probability.
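A quick NumPy check of this second approach (random 2-qubit states, chosen just to keep the example small): the mod-squared amplitudes come out as $(1 \pm |\langle\psi|\phi\rangle|^2)/2$, exactly as the normalisation factors above predict.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_state(dim):
    v = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    return v / np.linalg.norm(v)

psi, phi = random_state(4), random_state(4)   # two arbitrary 2-qubit states
overlap = abs(np.vdot(psi, phi)) ** 2

# Unnormalised |0> and |1> branches of the pre-measurement state
branch0 = 0.5 * (np.kron(psi, phi) + np.kron(phi, psi))
branch1 = 0.5 * (np.kron(psi, phi) - np.kron(phi, psi))

p0 = np.vdot(branch0, branch0).real
p1 = np.vdot(branch1, branch1).real
print(np.isclose(p0, (1 + overlap) / 2))  # True
print(np.isclose(p1, (1 - overlap) / 2))  # True
```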

+",1837,,,,,10/16/2018 6:42,,,,0,,,,CC BY-SA 4.0 +4417,2,,4386,10/16/2018 7:22,,5,,"

I should probably start by describing my philosophical standpoint: I would never talk about ""many worlds"" or some such. However, I certainly believe that it is possible that everything, including measurement, is unitary. That apparently makes me a many-worldian. It's not necessary to buy wholesale into a picture of diverging worlds. And I think this question demonstrates my (possible) view point quite well.

+ +

So, I think of measurement as being like a controlled-not, but because the measurement device is a large device, it's like targeting many qubits. So, if you've got a qubit in $|0\rangle+|1\rangle$ that you might be measuring, the post-measurement state would be +$$ +\frac{1}{\sqrt{2}}\left(|000\ldots 0\rangle+|111\ldots 1\rangle\right) +$$ +(Aside: you might even consider yourself part of the measuring device. At that point, if you're in the 0 branch, you see results 0, and if you're in the 1 branch, you see results 1). +The issue is, if this is true, why do we end up describing something post-measurement using classical probabilities, i.e. +$$ +\frac12\left(|000\ldots 0\rangle\langle000\ldots 0|+|111\ldots 1\rangle\langle 111\ldots 1|\right)? +$$ +The answer is basically that, if you made sure that you did something with every quantum system that has become entangled as a result of the measurement, you would be able to see a difference between the quantum and classical descriptions (answer c$\neq$d). Miss just one qubit (which may be part of the measurement device, or somewhere else in the environment; there are so many that you'll always miss some), and all subsequent results are as if you had the classical probability distribution (answer a=b).

+ +

Hopefully you now see how that should correspond to the questions you're being asked. As other commenters have indicated, however, the measurement process that you're performing on the two different systems is the full combination of Hadamards+computational basis measurement.

+",1837,,,,,10/16/2018 7:22,,,,3,,,,CC BY-SA 4.0 +4418,2,,4411,10/16/2018 7:34,,1,,"

Depending on what is your definition of ""long time"" the answer might be different:

+ +
    +
  • If it is of the order of minutes, then you can't do anything and you just have to wait for your turn in the queue.
  • +
  • If it is several days, then there might be a problem (or a very very long queue).
  • +
+ +

Anyway, you can track the status of your job, even if this status does not include its position within the queue. Here is a (non-tested) example:

+ +
# Setup everything for Qiskit.
+import qiskit
+# Create your quantum program
+job = qiskit.execute(my_circuits, the_backend)
+# Ask for the status
+print(job.status()) # will return one of the values listed here:
+                    # https://github.com/Qiskit/qiskit-terra/blob/master/qiskit/backends/jobstatus.py
+
+",1386,,,,,10/16/2018 7:34,,,,0,,,,CC BY-SA 4.0 +4419,2,,4413,10/16/2018 9:46,,5,,"

While DaftWullie's answer gives you everything you need to calculate the answer in this particular case, I'd like to focus on a particular approach which is helpful in situations like yours, where you have an $n$ qubit state$\def\ket#1{\lvert#1\rangle}\def\bra#1{\!\langle#1\rvert}$ +$$ \ket{\Psi} = \ket{0}\ket{\alpha} + \ket{1}\ket{\beta}\,,$$ +where $\ket{\alpha}$ and $\ket{\beta}$ are not necessarily normalised vectors on $n-1$ qubits. (Notice that at least one of $\ket{\alpha}$ and $\ket{\beta}$ must be sub-normalised in this case if $\ket{\Psi}$ has norm 1.) We can then ask: given such a $\ket{\Psi}$, what distribution do we expect on $\ket{0}$ and $\ket{1}$?

+ +

'Normalising' your superpositions

+ +

If you had a very slightly different representation for $\ket{\Psi}$, of the form +$$ \ket{\Psi} = u_0 \ket{0}\ket{\alpha'} + u_1 \ket{1}\ket{\beta'}\,,$$ +where $\ket{\alpha'}$ and $\ket{\beta'}$ were indeed normalised, then you'd probably be comfortable with this: you'd just recognise that the probability of '0' is $\lvert u_0 \rvert^2$ and the probability of '1' is $\lvert u_1 \rvert^2$. But we can obtain this just by considering the norms of $\ket{\alpha}$ and $\ket{\beta}$, and computing +$$ u_0 = \sqrt{\langle \alpha \vert \alpha \rangle}\,,\qquad u_1 = \sqrt{\langle \beta \vert \beta \rangle} $$ +and (if both $u_0$ and $u_1$ are non-zero) defining the normalised versions $\ket{\alpha'} \propto \ket{\alpha}$ and $\ket{\beta'} \propto \ket{\beta}$ by +$$ \ket{\alpha'} = \tfrac{1}{u_0} \ket{\alpha}\,,\qquad\ket{\beta'} = \tfrac{1}{u_1} \ket{\beta}\,. $$

+ +

Short-cutting to the measurement probabilities

+ +

But actually, the states $\ket{\alpha'}$ and $\ket{\beta'}$ are beside the point: what you actually wanted are $u_0$ and $u_1$, or more precisely, +$$\Pr\!\big[\,0\,\big] = \lvert u_0 \rvert^2 = \langle \alpha \vert \alpha \rangle\,,\qquad \Pr\!\big[\,1\,\big] = \lvert u_1 \rvert^2 = \langle \beta \vert \beta \rangle\,.$$ +So you can just compute those inner products without even worrying about representing the state $\ket{\Psi}$ in one particular way or another, and in particular without giving any thought as to whether or which of $\ket{\alpha}$ or $\ket{\beta}$ is normalised.

+ +

Example.

+ +

In your particular case, you have: +$$ \ket{\alpha} = \frac{1}{2} \bigl( \ket{\psi}\ket{\phi} + \ket{\phi}\ket{\psi} \bigr) , \qquad \ket{\beta} = \frac{1}{2} \bigl( \ket{\psi} \ket{\phi} - \ket{\phi} \ket{\psi} \bigr); $$ +then computing the probability of obtaining either '0' or '1' is just a question of computing some inner products, and in particular will give interesting results when $\ket{\psi}$ and $\ket{\phi}$ are either orthogonal (in which case $\ket{\alpha}$ and $\ket{\beta}$ are both clearly maximally entangled, normalisation aside) or parallel (in which case $\ket{\beta}$ is clearly zero).
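For the two special cases just mentioned, a minimal sketch of the inner-product computation (single-qubit $|\psi\rangle$ and $|\phi\rangle$, chosen only for brevity):

```python
import numpy as np

def branch_probs(psi, phi):
    """Norms-squared of the unnormalised |0> and |1> branches."""
    alpha = 0.5 * (np.kron(psi, phi) + np.kron(phi, psi))
    beta  = 0.5 * (np.kron(psi, phi) - np.kron(phi, psi))
    return np.vdot(alpha, alpha).real, np.vdot(beta, beta).real

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

print(branch_probs(ket0, ket1))  # orthogonal states: (0.5, 0.5)
print(branch_probs(ket0, ket0))  # parallel states:   (1.0, 0.0)
```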

+",124,,,,,10/16/2018 9:46,,,,0,,,,CC BY-SA 4.0 +4420,1,,,10/16/2018 10:00,,5,290,"

I am currently working on a quantum computing subject for my coding school, and I had some questions for you. My objective would be to introduce students to quantum computing with an algorithmic project. I had two games ideas for it, one of them being harder to implement than the other one.

+ +

The first (the hardest), would be to provide each of the two players a quantum byte, randomly initialized: to do that, we would apply a Hadamard gate to each qubit in the byte, measure it, use the result of the measurement as the initial state for the byte, and then apply a Hadamard gate to each of the qubits again. This way, the player really has no way to know what lies within the byte. Once the bytes are initialized, each player is given a model of 7 bits they have to reproduce upon measurement. For instance, if you are given the string 01110110, you would have to obtain either 01110110 or 11110110 upon measurement, the first qubit being used as a register, allowing the players to apply multi-qubit gates to the byte without affecting the rest of their qubits. The first player who measures their byte and obtains what they were asked to obtain wins. This way students are introduced to quantum state preparation, and might even produce a strToQuBit type of function.

+ +

The second idea would be similar, but instead of a model of byte they would have to reproduce, the game would be played by two players, one would have to fill their byte with 1s, the other with 0s, in other words, the string they would have to obtain upon measurement would always be either (00000000 or 10000000), or (11111111 or 01111111). The byte would of course still be randomly initialized before players can work with it.

+ +

Which idea do you think is the best, and is such a project even doable?

+ +

EDIT: I have omitted an important precision: players cannot change the states of the qubits once they have measured them, and all qubits will be measured at once. Once the whole byte has been measured, either the result corresponds to the model byte and the player wins, or it does not, and then a new model byte is assigned to the player who failed! ;)

+ +

Does this make it more complex?

+",4844,,26,,10/14/2019 10:15,10/14/2019 10:15,Algorithm-based game project to introduce quantum computing,,3,4,,,,CC BY-SA 4.0 +4421,2,,4415,10/16/2018 10:03,,9,,"

The setting is that you've got some state $$\sum_{x\in\{0,1\}^n}\alpha_x|x\rangle$$ on a register, you introduce an ancilla in state $|0\rangle$, and you want to create some state +$$ +\sum_{x\in\{0,1\}^n}\alpha_x|x\rangle\otimes R_X(f(x))|0\rangle +$$ +where $f(x)$ is some angle that you can compute. So, certainly, if you had to build that gate out of the $2^n$ different gates ""if register=$x$, then apply $R_X(f(x))$ on the ancilla"" for each $x$, that would be a very bad way to go. So, here's another way to think about it.

+ +
    +
  • Firstly, let's not apply $R_X(f(x))$, but $HR_Z(f(x))H$. The two Hadamards don't have to be controlled off anything because every gate requires them.
  • +
  • Secondly, let's assume that we know an efficient classical computation of $f(x)$. That means we can build a reversible quantum computation that runs in the same time. This involves introducing a second register. So, what you'd have is +$$ +\sum_x\alpha_x|x\rangle|f(x)\rangle +$$ +where $|f(x)\rangle$ is some $k$-bit representation of the value $2^kf(x)/\pi$.
  • +
  • Now, recognise that if you've got some state $|z\rangle$ for $z\in\{0,1\}^k$, it's easy to turn it into $e^{i\pi z/2^{k-1}}|z\rangle$ - you apply $Z$ on the most significant bit, $\sqrt{Z}$ on the second-most, $Z^{1/4}$ on the third, $Z^{1/8}$ on the fourth, and so on (think about how you convert from a binary to a decimal representation). Thus, if you apply these gates but controlled off the ancilla, this achieves exactly what you need.
  • +
  • Finally, you have to uncompute the second register.
  • +
+ +

Thus, overall, the scaling is limited primarily by the time it takes to compute $f(x)$. The point is this happens simultaneously for all values of $x$ due to linearity.
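To make the binary-to-phase step concrete: the sequence $Z, \sqrt{Z}, Z^{1/4}, \ldots$ (one diagonal gate per bit, most significant first) multiplies to a single diagonal gate mapping $|z\rangle \mapsto e^{i\pi z/2^{k-1}}|z\rangle$. A small sketch:

```python
import numpy as np

k = 3  # bits in the register
phases = np.ones(2 ** k, dtype=complex)
for z in range(2 ** k):
    for j in range(k):                 # j = 0 is the most significant bit
        bit = (z >> (k - 1 - j)) & 1
        if bit:
            phases[z] *= np.exp(1j * np.pi / 2 ** j)  # Z^{1/2^j} on bit j

expected = np.exp(1j * np.pi * np.arange(2 ** k) / 2 ** (k - 1))
print(np.allclose(phases, expected))  # True
```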

+",1837,,,,,10/16/2018 10:03,,,,2,,,,CC BY-SA 4.0 +4422,2,,4420,10/16/2018 16:32,,8,,"

This is definitely doable, but the tasks seem quite simple and they only introduce single-qubit measurement and the X gate, while quantum state preparation usually involves some superposition and entanglement generation. One can get rid of the input superposition by measuring each qubit and then use a bunch of X gates to set each qubit to the right state, you don't even need the scratch qubit for two-qubit gates.

+ +

I am biased in my recommendation, but I would suggest taking a look at the Quantum Katas project. I created it to help people learn quantum computing; it has some nice tasks of varying complexity and (most importantly) test harnesses that verify that the task solutions are correct. The Superposition kata in particular introduces state preparation. We have had a lot of success using the katas to teach people unfamiliar with quantum computing.

+",2879,,4265,,10/16/2018 17:18,10/16/2018 17:18,,,,1,,,,CC BY-SA 4.0 +4423,2,,4420,10/17/2018 6:13,,3,,"

They certainly seem doable. I'd suggest the first one, as it is a little more complex. For added complexity you could also include constraints on the allowed gates, such as not allowing X or Y on certain qubits, but instead supplying CNOTs (so a $|1\rangle$ can be copied from other qubits) or partial rotations around the X and Y axes, such that the X or Y can be built up from multiple applications.

+",409,,,,,10/17/2018 6:13,,,,1,,,,CC BY-SA 4.0 +4424,1,,,10/17/2018 8:52,,6,366,"

I have been studying the Quantum Fourier Transform (QFT) by myself, and I am a little bit confused about how the QFT could be used. + For example, if the QFT of three quantum bits is

+ +
+

$a_1|000\rangle + a_2|001\rangle + a_3|010\rangle + a_4|011\rangle + a_5|100\rangle + a_6|101\rangle + a_7|110\rangle+ + a_8|111\rangle$

+
+ +

three questions arise in my mind:

+ +
    +
  1. Whether or not the useful information is represented by the probability amplitude coefficients ($a_1,..., a_8$) of the superposition state? Are these coefficients the same as their counterparts in DFT?
  2. +
  3. To my best knowledge, once a quantum system is measured, the superposition state will be destroyed, and the information represented by the probability coefficients ($a_1, ..., a_8$) will be lost. Then how could the coefficients be extracted from by measurements?
  4. +
  5. How is QFT related with Shor’s Algorithm?
  6. +
+ +

Thank you in advance for your answers.

+",4853,,55,,12-01-2021 09:47,01-02-2023 14:40,Do the probability amplitudes of the superposition state produced by the QFT transform convey useful information?,,2,0,,,,CC BY-SA 4.0 +4425,2,,4424,10/17/2018 9:12,,9,,"

You probably shouldn't be thinking of the Quantum Fourier Transform as being something where you want to extract the output probability amplitudes. As you say, when you start measuring, you destroy the superposition. The only way to extract the amplitudes is to make the same state many, many times, and keep repeating your measurements until you get enough statistics to determine the $|a_n|^2$ with reasonable accuracy.

+ +

Instead, think of it as a quantum subroutine. Something that takes a quantum superposition as input, and provides a superposition as output. So, for example, the QFT is used as a subroutine within Shor's algorithm. Before application of the QFT, Shor's algorithm has worked very hard to produce a superposition that somehow contains the correct computational answer, but it contains that answer encoded in the relative phases of the superposition. The QFT spits out that relative phase information as a bit string. The trick is that you don't want to know the probability amplitudes. You want to know the bit string that is produced with highest probability (i.e. the value of $n$ for which $|a_n|^2$ is largest). It occurs with a probability at least $4/\pi^2$, if memory serves. In fact, all you do is run the algorithm once. There's a fair chance you've got the right answer, and with a bit of classical post-processing, you can tell whether or not you got the right answer, and hence whether or not you need to run the whole algorithm again.
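As an illustration of the "answer encoded in phases" idea, here is a small sketch (an 8-dimensional toy state with period 4, nothing to do with a real Shor instance): applying the QFT matrix concentrates the measurement probability on multiples of dimension/period.

```python
import numpy as np

n = 3
dim = 2 ** n

# A state with period 4 hidden in its amplitudes: (|0> + |4>)/sqrt(2)
state = np.zeros(dim, dtype=complex)
state[[0, 4]] = 1 / np.sqrt(2)

qft = np.array([[np.exp(2j * np.pi * j * k / dim) for k in range(dim)]
                for j in range(dim)]) / np.sqrt(dim)

probs = np.abs(qft @ state) ** 2
print(np.round(probs, 3))  # peaks of 0.25 at multiples of dim/period = 2
```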

+",1837,,,,,10/17/2018 9:12,,,,0,,,,CC BY-SA 4.0 +4426,1,4429,,10/17/2018 17:25,,8,492,"

Context:

+ +

I have been trying to understand the genetic algorithm discussed in the paper Decomposition of unitary matrices for finding quantum circuits: Application to molecular Hamiltonians (Daskin & Kais, 2011) (PDF here) and Group Leaders Optimization Algorithm (Daskin & Kais, 2010). I'll try to summarize what I understood so far, and then state my queries.

+ +

Let's consider the example of the Toffoli gate in section III-A in the first paper. We know from other sources such as this, that around 5 two-qubit quantum gates are needed to simulate the Toffoli gate. So we arbitrarily choose a set of gates like $\{V, Z, S, V^{\dagger}\}$. We restrict ourselves to a maximum of $5$ gates and allow ourselves to only use the gates from the gate set $\{V, Z, S, V^{\dagger}\}$. Now we generate $25$ groups of $15$ random strings like:

+ +
+

1 3 2 0.0; 2 3 1 0.0; 3 2 1 0.0; 4 3 2 0.0; 2 1 3 + 0.0

+
+ +

In the above string of numbers, the first numbers in bold are the index numbers of the gates (i.e. $V = 1, Z = 2, S = 3, V^{\dagger} = 4$), the last numbers are the values of the angles in $[0,2\pi]$ and the middle integers are the target qubit and the control qubit respectively. There would be $374$ other such randomly generated strings.

+ +

+ +

Our groups now look like this (in the image above) with $n=25$ and $p=15$. The fitness of each string is proportional the trace fidelity $\mathcal{F} = \frac{1}{N}|\operatorname{Tr}(U_aU_t^{\dagger})|$ where $U_a$ is the unitary matrix representation corresponding to any string we generate and $U_t$ is the unitary matrix representation of the 3-qubit Toffoli gate. The group leader in any group is the one having the maximum value of $\mathcal{F}$.
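The trace fidelity is cheap to compute classically; a minimal sketch (the Toffoli matrix is built by hand here):

```python
import numpy as np

def trace_fidelity(u_a, u_t):
    """F = |Tr(Ua Ut^dagger)| / N for N x N unitaries."""
    n = u_t.shape[0]
    return abs(np.trace(u_a @ u_t.conj().T)) / n

# Toffoli: permutation matrix swapping |110> and |111>
toffoli = np.eye(8)
toffoli[6, 6] = toffoli[7, 7] = 0
toffoli[6, 7] = toffoli[7, 6] = 1

print(trace_fidelity(toffoli, toffoli))    # 1.0 (perfect candidate)
print(trace_fidelity(np.eye(8), toffoli))  # 0.75 (identity circuit)
```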

+ +

Once we have the groups we'll follow the algorithm:

+ +

+ +

The Eq. (4) mentioned in the image is basically:

+ +

$$\text{new string} [i] = r_1 \times \text{old string}[i] + r_2 \times \text{leader string}[i] + r_3 \times \text{random string}[i]$$ (where $1 \leq i \leq 20$) s.t. $r_1+r_2+r_3 = 1$. The $[i]$ represents the $i$ th number in the string, for example in 1 3 2 0.0; 2 3 1 0.0; 3 2 1 0.0; 4 3 2 0.0; 2 1 3 0.0, the $6$-th element is 3. In this context, we take $r_1 = 0.8$, with $r_2$ and $r_3$ together making up the remaining $0.2$. That is, in each iteration, all the $375$ strings get mutated following the rule: for each string in each group, the individual elements (numbers) in the string get modified following Eq. (4).
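One mutation step of Eq. (4) can be sketched as follows (the example numbers and the even split $r_2 = r_3 = 0.1$ are my own illustrative assumptions; in the actual algorithm the gate-index and qubit entries would additionally be mapped back to valid integers afterwards):

```python
import numpy as np

old    = np.array([1, 3, 2, 0.0])   # one gate operation of a candidate string
leader = np.array([2, 3, 1, 0.5])   # corresponding piece of the group leader
rand   = np.array([4, 1, 3, 6.0])   # corresponding piece of a random member

r1, r2, r3 = 0.8, 0.1, 0.1          # assumed split; r1 + r2 + r3 = 1
new = r1 * old + r2 * leader + r3 * rand
print(new)  # element-wise convex combination of the three strings
```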

+ +

Moreover,

+ +
+

In addition to the mutation, in each iteration for each group of the + population one-way-crossover (also called the parameter transfer) is + done between a chosen random member from the group and a random member + from a different random group. This operation is mainly replacing some + random part of a member with the equivalent part of a random member + from a different group. The amount of the transfer operation for each + group is defined by a parameter called transfer rate, here, which is + defined as $$\frac{4\times \text{max}_{\text{gates}}}{2} - 1$$ where + the numerator is the number of variables forming a numeric string in + the optimization.

+
+ +

Questions:

+ +
    +
  1. When we are applying this algorithm to find the decomposition of a random gate, how do we know the number and type of elementary gates we need to take in our gate set? In the example above they took $\{V,Z,S,V^{\dagger}\}$. But I suspect that that choice was not completely arbitrary (?) Or could we have chosen something random like $\{X,Z,R_x,R_{zz},V\}$ too? Also, the fact that they used only $5$ gates in total, isn't arbitrary either (?) So, could someone explain the logical reasoning we need to follow when choosing the gates for our gate set and choosing the number of gates to use in total? (It is mentioned in the papers that the maximum possible value of the number of gates is restricted to $20$ in this algorithm)

  2. +
  3. After the part (in ""Context"") discussing the selection of the gate set and number of gates,is my explanation/understanding (paragraph 3 onwards) of the algorithm correct?

  4. +
  5. I didn't quite understand the meaning of ""parameter transfer rate"". They say that $4\times \text{max}_{\text{gates}} - 2$ is the number of variables forming a numeric string in the optimization. What is $\text{max}_{\text{gates}}$ in this context: $5$ or $20$? Also, what exactly do they mean by the portion I italicized (number of variables forming a numeric string in the optimization) ?

  6. +
  7. How do we know when to terminate the program? Do we terminate it when any one of the group leaders cross a desired value of trace fidelity (say $0.99$)?

  8. +
+",26,,26,,04-02-2019 19:12,04-02-2019 19:12,Understanding the Group Leaders Optimization Algorithm,,1,3,,,,CC BY-SA 4.0 +4428,2,,4420,10/17/2018 19:57,,2,,"

Consider the task on one qubit. You are given a state that is either $H|0\rangle$ or $H|1\rangle$ and your task before the measurement is to make it either more probably $|0\rangle$ or $|1\rangle$ as instructed. Say you were instructed to get 0; then you are trying for a unitary that takes both $H \mid 0 \rangle$ and $H | 1 \rangle$ to states close to $|0\rangle$. Say you do $H$: then you are good on half the initial states but not on the other half. Same if you do $XH$. The probabilities are $\mid\langle 0 | U H | i \rangle \mid^2$ where $i$ indicates the starting state and $U$ is what you do with it. But

+ +

$$ +\mid\langle 0 | U H | 0 \rangle \mid^2 + \mid\langle 0 | U H | 1 \rangle \mid^2 =\\ +\mid\langle 0 | H U^\dagger | 0 \rangle \mid^2 + \mid\langle 1 | H U^\dagger | 0 \rangle \mid^2 = 1 +$$

+ +

so as you make the probability to get $0$ when you start in $H|0\rangle$ larger, you worsen the other case. Similarly if you were asked to get $1$.

+ +

Let's say $p$ is the probability that if it started in $H|0\rangle$ you got it into the desired result. There is $1-p$ that if it started in $H|1\rangle$ you got it into the desired result. Combine those $\frac{1}{2}p + \frac{1}{2}(1-p) = \frac{1}{2}$. No matter what you do, you can't escape the randomness of the initial state. If you knew something more about it then you could win, but right now you don't.
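The identity above is easy to confirm numerically for a random $U$ (a sketch; drawing a unitary from the QR decomposition of a Gaussian matrix is just one convenient way to do it):

```python
import numpy as np

rng = np.random.default_rng(3)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

# A random 2x2 unitary from the QR decomposition of a Gaussian matrix
a = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
U, _ = np.linalg.qr(a)

p_from_0 = abs((U @ H)[0, 0]) ** 2   # started in H|0>, measured 0
p_from_1 = abs((U @ H)[0, 1]) ** 2   # started in H|1>, measured 0
print(np.isclose(p_from_0 + p_from_1, 1.0))  # True: the two cases trade off
```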

+ +

If you were allowed to do processing after the measurement, then do what Mariia suggested. But as stated it's a coin flip.

+ +

If the state was entangled then you could use some information that way, but right now the problem is broken up into each qubit.

+",434,,,,,10/17/2018 19:57,,,,2,,,,CC BY-SA 4.0 +4429,2,,4426,10/17/2018 22:31,,2,,"

I suggest looking at how a genetic algorithm works in a context of discrete variables to understand it. They provide a methodology but you can apply other mutation/crossover techniques.

+ +
+ +

Briefly, in a simple optimization problem where the variables are discrete, we can solve heuristically with genetic algorithms (which belong to the class of evolutionary algorithms). We generate a population of candidates (randomly) and we change the candidates at each iteration to try to find a good solution minimizing/maximizing an objective function (called fitness). You can represent the candidates by a string of values (called chromosomes in general). If you input this string of values to the objective function, you are evaluating the candidate or assigning it a fitness. Crossover/mutation operations are meant to change candidates in the hope of achieving our objective, in a way analogous to what happens in genetics.

+ +

The GLOA is just another genetic algorithm, but with the difference of having several groups of population, each with its own local optimum (its leader, i.e. best candidate), and of course a slightly different strategy for mutation/crossover. Usually, we have one group of candidates with one best candidate at each iteration.

+ +

Now for your questions:

+ +

1. You can choose whatever set of gates you want (like your example of a set). This is also true for the maximum number of gate operations to which you want to restrict your decomposition. Those are just parameters for the algorithm. I would say this is completely arbitrary (not necessarily principled, just heuristic) but maybe what they chose was more adapted to their example or setup of work. In practice, you would have to try many parameters.

+ +

2. You are essentially restating the original explanations, especially the diagram, so I think you are summing it up well.

+ +

+ +

3. I suggest looking at Figure 2 which shows the pseudo-code of this part to understand it. This is similar to one-way crossover in genetic algorithms. +If you look at their original algorithm (GLOA, 2010), they choose a number $t$ between $1$ and half of the number of total parameters (variables) plus one. In this case, the number of parameters/variables is the length of a string which is $4 \times \text{max}_{\text{gates}}$. For the Toffoli gate, it was $5$. For other examples, it could have been more. But in general, they recommend $20$ as a good maximum (you can imagine how hard the optimization is with strings of more than $80$ variables on a simple computer).

+ +

For your visualization, look at the example string you give :

+ +
+

1 3 2 0.0; 2 3 1 0.0; 3 2 1 0.0; 4 3 2 0.0; 2 1 3 0.0

+
+ +

Here, a string represents a set of operations forming a circuit. We associate with it a fitness, which is related to the fidelity. It is like a candidate solution for our optimization problem. In that case, our $\text{max}_{\text{gates}}$ is $5$ because you see $5$ operations represented. Each operation is a gate of the set, the $2$ qubits it applies to and the angle if necessary, that is $4$ variables. In total, $5 \times 4 = 20$ variables for the problem.

+ +

1 3 2 0.0, in this case, means apply the $V$ gate on qubit $3$ controlled by qubit $2$ with an angle $0.0$ (note that for the $V$ gate there is no angle, but if you were using, say, rotation gates, this becomes relevant).
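
+ +

To make the encoding concrete, here is a small Python sketch of how such candidate strings could be generated and decoded. This is my own illustration, not code from the paper: the gate dictionary GATES and the helper names are hypothetical, and the paper's own index-to-gate mapping may differ.

```python
import random

# Hypothetical index-to-gate mapping; the paper's own numbering may differ.
GATES = {1: "V", 2: "Z", 3: "CNOT", 4: "Vdag"}

def random_candidate(max_gates=5, n_qubits=3):
    """One candidate = max_gates operations of (gate, target, control, angle)."""
    ops = []
    for _ in range(max_gates):
        target, control = random.sample(range(1, n_qubits + 1), 2)
        ops.append((random.choice(list(GATES)), target, control, 0.0))
    return ops

def decode(candidate):
    """Render a candidate string in the human-readable form used above."""
    return ["{} on qubit {} controlled by qubit {} (angle {})".format(
        GATES[g], t, c, a) for (g, t, c, a) in candidate]
```

Each candidate would then be evaluated by building the corresponding circuit and comparing it to the target unitary to obtain a fitness.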

+ +

4. This is also arbitrary depending on what you want. It can be a fixed number of iterations, or until you reach a threshold/convergence criteria.

+",4127,,26,,10/17/2018 23:05,10/17/2018 23:05,,,,1,,,,CC BY-SA 4.0 +4430,1,4431,,10/18/2018 2:01,,4,423,"

I have read about how Alice can send Bob a qubit $\alpha |0\rangle + \beta|1\rangle$ if they share an EPR pair. This gives an initial state:

+ +

$(\alpha |0\rangle + \beta|1\rangle) \otimes (\frac{1}{\sqrt{2}}|00\rangle + \frac{1}{\sqrt{2}}|11\rangle)$

+ +

The first two qubits belong to Alice, the third belongs to Bob.

+ +

The first step is for Alice to apply a controlled not from first qubit onto her half of the EPR pair. This gives the result:

+ +

$\frac{1}{\sqrt{2}} \big(\alpha (|000\rangle + |011\rangle) + \beta (|110\rangle + |101\rangle)\big)$

+ +

Next, let us say that Alice measures her second qubit. This has a 50/50 chance of resulting in a zero or a one. That leaves the system in one of two states:

+ +

$\alpha |000\rangle + \beta |101\rangle \quad\text{OR}\quad \alpha |011\rangle + \beta |110\rangle$

+ +

If Alice measures the second qubit as zero, she is in the first state. She can tell Bob: ""Your half of the EPR is now the qubit I wanted to send you.""

+ +

If Alice measures the second qubit as one, she is in the second state. She can tell Bob: please apply the matrix

+ +

$ +\begin{bmatrix} +0 & 1\\ +1 & 0 +\end{bmatrix} +$

+ +

to your qubit to flip the roles of zero and one.

+ +

Hasn't Alice teleported her qubit at this point?

+ +

The only problem I see is this: Alice must continue not to measure her original qubit. If her unmeasured qubit were to be measured, that would force Bob's qubit to collapse as well.

+ +

Is this therefore why Alice needs to apply a Hadamard matrix to her first qubit? Let us apply the Hadamard to the state

+ +

$\alpha |000\rangle + \beta |101\rangle$

+ +

(This is one of the two possibilities from above). We get:

+ +

$\frac{1}{\sqrt{2}} \big( (\alpha |000\rangle + \beta |001\rangle) + (\alpha |100\rangle - \beta |101\rangle) \big)$

+ +

Alice measures her first qubit now. If it is a zero, she can tell Bob: your qubit is fine. If it is one, she can tell Bob: you need to fix it from $\alpha |100\rangle - \beta |101\rangle$ (by an appropriate rotation).

+ +

Finally, my questions are:

+ +
    +
  1. If Alice is okay with sharing an entangled copy of the transferred qubit with Bob, can she send just the first classical bit?
  2. +
  3. Is the application of the Hadamard simply to separate Alice's first qubit from Bob's qubit?
  4. +
  5. It is the application of the Hadamard to Alice's first qubit, followed by the measurement, which may disturb Bob's qubit, possibly necessitating a ""fixup."" The second classical bit is transferred to communicate whether the fixup is needed. Am I correct?
  6. +
  7. The reason Alice wants Bob to have a qubit unentangled from her own is probably because it is burdensome for Alice to keep an entangled copy from being measured. Correct?
  8. +
+ +

Sorry for the very long and rambly question. I think I understand, but maybe this writeup will help someone else.

+",4862,,26,,12/13/2018 21:01,12/13/2018 21:01,Quantum teleportation: second classical bit for removing entanglement?,,2,0,,,,CC BY-SA 4.0 +4431,2,,4430,10/18/2018 7:52,,7,,"

Your initial calculations are correct. When Alice performs her first measurement and gets a 0 outcome then, as you say, Alice and Bob are left sharing a two-qubit state +$$ +|\Psi\rangle=\alpha|00\rangle+\beta|11\rangle +$$ +(you can safely ignore the measured qubit). The problem is the statement

+ +
+

She can tell Bob: ""Your half of the EPR is now the qubit I wanted to send you.""

+
+ +

because it doesn't make sense. It kind of looks like the amplitudes are doing the right sort of thing, but it's not quite enough. The two qubits are entangled, and you need the second measurement to remove that entanglement.

+ +

To see this more clearly, what it should mean if Bob has received the state is that if he performs a measurement that projects onto one of two states $|\psi\rangle=\alpha|0\rangle+\beta|1\rangle$ or $|\psi^\perp\rangle=\beta|0\rangle-\alpha|1\rangle$, then Bob must always get the answer corresponding to $|\psi\rangle$. So, let's try it. We get the required answer with probability +$$ +\langle\Psi|(\mathbb{I}\otimes|\psi\rangle\langle\psi|)|\Psi\rangle=|\alpha|^4+|\beta|^4 +$$ +Since we know that $|\alpha|^2+|\beta|^2=1$, this is not generally equal to 1. (It could be if $\alpha\beta=0$). So, you can see that the outcomes of experiments that Bob performs on his half of the state $|\Psi\rangle$ do not match up perfectly with what he should get if he were experimenting with the state $|\psi\rangle$ which he should have.
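
+ +

This can be checked numerically with a short NumPy sketch (the amplitudes $\alpha=\sqrt{0.3}$, $\beta=\sqrt{0.7}$ are an arbitrary example of mine):

```python
import numpy as np

alpha, beta = np.sqrt(0.3), np.sqrt(0.7)   # arbitrary example amplitudes
psi = np.array([alpha, beta])              # state Alice wants to send
Psi = np.array([alpha, 0, 0, beta])        # shared state alpha|00> + beta|11>

# projector I (x) |psi><psi|, i.e. Bob projects his qubit onto |psi>
proj = np.kron(np.eye(2), np.outer(psi, psi.conj()))
p_success = float(np.real(Psi.conj() @ proj @ Psi))

assert np.isclose(p_success, alpha**4 + beta**4)   # 0.58 here, not 1
```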

+ +

Another way of thinking about it is that if Bob holds the state $|\psi\rangle$ that he's supposed to after teleportation, nothing that Alice can do should be able to change it. But when they share $|\Psi\rangle$, she certainly can change it. That's why the measurement basis of her second measurement is so important - there must be no Z component to the measurement because otherwise she would learn something about the state being sent (in other words, she'd collapse the state $|\psi\rangle$).

+ +

In summary:

+ +
+
    +
  1. If Alice is okay with sharing an entangled copy of the transferred qubit with Bob, can she send just the first classical bit?
  2. +
+
+ +

No. Bob has not received the state at this point, as I hope I've conveyed.

+ +
+
    +
  1. The reason Alice wants Bob to have a qubit unentangled from her own is probably because it is burdensome for Alice to keep an + entangled copy from being measured. Correct?
  2. +
+
+ +

No, it's so that Bob actually receives the state he's supposed to.

+ +
+
    +
  1. Is the application of the Hadamard simply to separate Alice's first qubit from Bob's qubit?
  2. +
+
+ +

No. The separation doesn't happen at this point. It only happens in the act of measurement that follows. The Hadamard is important for changing the state so that Alice only removes the entanglement and does not collapse the state of $|\psi\rangle$

+ +
+
    +
  1. It is the application of the Hadamard to Alice's first qubit, followed by the measurement, which may disturb Bob's qubit, possibly + necessitating a ""fixup."" The second classical bit is transferred to + communicate whether the fixup is needed. Am I correct?
  2. +
+
+ +

Pretty much. The measurement is guaranteed to disturb the qubit, because it does change after the measurement. But, as you say, the bit of communication is required to convey how it changed, and therefore what fix is required.

+",1837,,,,,10/18/2018 7:52,,,,2,,,,CC BY-SA 4.0 +4432,2,,4411,10/18/2018 10:01,,2,,"

Here is tested code (also provided in one of the qiskit tutorials)

+ +
import time  # needed for time.sleep below
+
+lapse = 0
+interval = 60
+while not job.done:
+    print('Status @ {} seconds'.format(interval * lapse))
+    print(job.status)
+    time.sleep(interval)
+    lapse += 1
+print(job.status)
+
+ +

where interval is given in seconds (if your job requires a longer wait or execution time, I would suggest increasing it). This will provide you with information about your queue placement.

+",563,,,,,10/18/2018 10:01,,,,0,,,,CC BY-SA 4.0 +4433,2,,4424,10/18/2018 12:06,,3,,"
    +
  1. The probability amplitudes of a quantum state are what characterises the state itself. In this sense, they convey all of the information. The modulus squared of $a_i$ gives the probability of finding the system in the $i$-th state (for example, $\lvert a_2\rvert^2$ is the probability of finding your system in the state $\lvert 001\rangle$). Furthermore, the phases of $a_i$ give information about the way the state will turn out when subjected to some evolution (equivalently, they tell you the outcome probabilities with respect to other measurement choices).

    +

    These coefficients are not the same as their counterparts in the context of the DFT, in the sense that the DFT produces as output a list of numbers written in your computer, while the QFT is a physical operation that produces a quantum state whose amplitudes are related to the amplitudes of the initial state via the DFT. In particular, while one has in principle complete knowledge about the output of a DFT of some vector, you do not have direct access to the amplitudes of the quantum state that is obtained after applying the QFT to some state.

    +
  2. +
  3. Yes, measuring the system will only give you one of the states of which the system is a superposition. Measuring your state you might get the second output, then the fourth, and so forth. If you perform the measurement many times, however, you will recover the $\lvert a_i\rvert^2$ from the frequencies with which you observe the different outcomes. More generally, one can (but generally does not want to) perform quantum state tomography to completely reconstruct the amplitudes of a state.

    +

    However, it is important to notice that reconstructing a state via tomography is highly inefficient, and will most likely destroy all the advantages you would have gained by running a given quantum algorithm. Instead, the usefulness of the QFT (and similar operations) is that, if you set things up correctly, you can get the answer to whatever question you are asking more or less directly. For example, say the answer to your question is "3", then you might have an algorithm which generates a quantum state such that, after application of the QFT, evolves into the state $\lvert001\rangle$, that is, the third state using your notation. +Note that such a state does not have the problem abovementioned: if you measure the output and find it to be "3", then that is the answer to the problem (in an ideal scenario at least).

    +
  4. +
  5. They are related in that the QFT is a fundamental ingredient of Shor's factorisation algorithm. Very roughly speaking, the idea is to reduce the problem of factoring to the problem of period finding, and then period finding can be efficiently solved using the QFT. If you want more details about this process I would ask it in a separate question though.

    +
  6. +
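
+ +

To illustrate the second point, here is a NumPy sketch of the classical analogue: the DFT of a normalised amplitude vector is again a normalised amplitude vector, and repeated measurements would only let you estimate the squared moduli of its entries. The normalisation convention chosen here is just one common choice.

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.normal(size=8) + 1j * rng.normal(size=8)   # amplitudes of a 3-qubit state
a /= np.linalg.norm(a)

# The QFT acts on the amplitude vector like a unitarily normalised DFT.
b = np.fft.ifft(a) * np.sqrt(len(a))
probs = np.abs(b) ** 2        # what measurement frequencies would estimate

assert np.isclose(np.linalg.norm(b), 1.0)   # the QFT is unitary
assert np.isclose(probs.sum(), 1.0)
```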
+",55,,55,,01-02-2023 14:40,01-02-2023 14:40,,,,0,,,,CC BY-SA 4.0 +4434,1,,,10/18/2018 14:28,,2,91,"

When I look at QCL (the documentation of the language begins in chapter 3) I don't see a way to compute the tensor product between two qubits. Is the programming operator & doing the tensor product's job?

+",1589,,10480,,3/20/2021 23:16,12-11-2022 02:08,How to compute the tensor product in QCL?,,1,0,,,,CC BY-SA 4.0 +4435,1,,,10/18/2018 14:33,,1,1389,"

It seems $V(\pi/2, \qr{1}) = i \qr{1}.$ I didn't expect that. To me $\qr{1}$ points up because $\qr{0}$ points to the right. So rotating $\qr{1}$ by $\pi/2$ +should yield $-\qr{0}$. What am I missing here?

+ +

$\newcommand{\q}[2]{\langle #1 | #2 \rangle} +\newcommand{\qr}[1]{|#1\rangle} +\newcommand{\ql}[1]{\langle #1|}$

+",1589,,26,,12/23/2018 11:28,12/23/2018 11:28,Why does a rotation of $\pi/2$ on $|1\rangle$ yield $i|1\rangle$?,,2,0,,,,CC BY-SA 4.0 +4436,2,,4435,10/18/2018 15:01,,4,,"

When people talk about single qubit rotations, the rotations they are visualising are generally rotations on the Bloch Sphere. In this picture, orthogonal states are represented by vectors that point in opposite directions, so $|0\rangle$ points up while $|1\rangle$ points down. + +Since the Bloch sphere is a 3 dimensional shape, there is a lot of freedom to pick the axis that is being rotated about. Since you don't define $V$ or describe where it comes from, I can only guess, but my guess would be that you're talking about a rotation about the Z axis, which preserves the states $|0\rangle$ and $|1\rangle$, up to phases. You're probably thinking (more or less) about a rotation about the X axis.

+",1837,,,,,10/18/2018 15:01,,,,4,,,,CC BY-SA 4.0 +4437,1,5233,,10/18/2018 15:28,,4,307,"

The hidden subgroup problem is often cited as a generalisation of many problems for which efficient quantum algorithms are known, such as factoring/period finding, the discrete logarithm problem, and Simon's problem.

+ +

The problem is the following: given a function $f$ defined over a group $G$ such that, for some subgroup $H\le G$, $f$ is constant over the cosets of $H$ in $G$, find $H$ (through a generating set). +In this context, $f$ is given as an oracle, which means that we don't care about the cost of evaluating $f(x)$ for different $x$, but we only look at the number of times $f$ must be evaluated.

+ +

It is often stated that a quantum computer can solve the hidden subgroup problem efficiently when $G$ is abelian. The idea, as stated for example in the wiki page, is that one uses the oracle to get the state +$\lvert gH\rangle\equiv\sum_{h\in H} \lvert gh\rangle$ +for some $g\in G$, and then the QFT is used to efficiently recover from $\lvert gH\rangle$ a generating set for $H$.

+ +

Does this mean that sampling from $\operatorname{QFT}\lvert gH\rangle$ is somehow sufficient to efficiently reconstruct $H$, for a generic subgroup $H$? If yes, is there an easy way to see how/why, in the general case?
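
+ +

As a classical sanity check of the abelian case I have in mind, the following NumPy sketch (my own, with $G=\mathbb{Z}_8$ and $H$ the subgroup generated by $2$) shows that the Fourier transform of any coset state is supported only on the annihilator $H^\perp$, so Fourier sampling yields elements of $H^\perp$, from which $H$ can be reconstructed:

```python
import numpy as np

N = 8
H = [0, 2, 4, 6]                     # hidden subgroup of Z_8 (generated by 2)
g = 3                                # arbitrary coset representative

coset = np.zeros(N, dtype=complex)   # |gH> = sum_{h in H} |g + h>
coset[[(g + h) % N for h in H]] = 1 / np.sqrt(len(H))

amps = np.fft.fft(coset) / np.sqrt(N)        # QFT over Z_N (up to convention)
support = np.flatnonzero(np.abs(amps) > 1e-9)

# outcomes lie in the annihilator H-perp = {0, 4}, independent of g
assert list(support) == [0, 4]
```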

+",55,,55,,1/15/2019 10:51,1/20/2019 9:34,Why does Fourier sampling allow to efficiently recover hidden subgroups?,,1,1,,,,CC BY-SA 4.0 +4438,1,4460,,10/18/2018 17:24,,2,133,"

I was wondering where to look for open Ph.D. positions for specific sub-fields/topics of quantum computing like error correction, optimization, quantum chemistry, etc. It is preferable if the offers/positions can be sorted subfield-wise or topic-wise.

+",4127,,26,,10/20/2018 15:06,10/20/2018 15:06,Resources to keep a track on open Ph.D. positions in specific sub-fields/topics of quantum computing,,1,13,,,,CC BY-SA 4.0 +4439,1,,,10/18/2018 17:31,,2,108,"

My question is definitely regarding quantum-speedup but the quantum-speedup tag is confined to algorithms... and my question is definitely not on algorithms. So, this is just my best shot at tagging.

+ +

This article in Physics World discusses quantum speedup and makes some pretty provocative statements (examples below).

+ +

It also tells me that people in a forum like this one might feel as follows: +“Some feel that this debate about the “how” of quantum computation is a red herring. “Researchers attending most conferences in quantum computing never mention these issues, or only in discussions over beer”.

+ +

Here are some of those ""provocative statements"".

+ +

“None of the explanations for how quantum computers achieve speed-up is universally accepted.”

+ +

“If it’s not from the vastness of Hilbert space, not from entanglement and not interference, then what? “As far as I am aware, right now it’s pretty silent in the theatre where this question is played out – that’s because the main candidates are all dead...”

+ +

Here are two specific questions. +As a worker in this field, does the following statement from the 2014 Physics World article match your own perception here in 2018? If not, what are the favored candidates today? And again, this is not a question about speedups obtained via algorithmic refinement. Why exclude algorithmic refinement? See ""Footnote"".

+ +

""Deutsch’s notion of quantum parallelism has stuck – the standard explanation in popular descriptions of quantum-computing speed-up is still that massively parallel computation takes place...""

+ +

Footnote: Why exclude algorithmic refinements? +Again from that article: +“Designing good quantum algorithms is a very difficult task,” Van den Nest agrees. “I believe this task could be made lighter if we were to arrive at a systematic understanding of the possible ways to move from classical to quantum computing” – in other words, if we had a better grasp of which aspect of quantum physics the advantages ultimately stem from.""

+ +

I just noticed that link is going be unfamiliar (to me too). So I vetted it just a little. The home page says: ""Dr. Franco Nori is a RIKEN Chief Scientist, heading the “Theoretical Quantum Physics Laboratory” at the “Cluster for Pioneering Research” at Riken, Saitama, Japan. He is also at the University of Michigan, Ann Arbor, USA""

+",4865,,4865,,10/19/2018 21:49,10/19/2018 21:49,Physics World - Questioning quantum speed,,0,14,,10/19/2018 7:21,,CC BY-SA 4.0 +4440,1,4445,,10/18/2018 19:25,,11,640,"

I'd like to be able to program simple functions into simulators such as QCL. I read that any function $f$ can be implemented, but I don't know how to get say a unitary matrix that implements $f$.
+$\newcommand{\qr}[1]{|#1\rangle}$ +I think first I must figure out a function that mimics $f$ in a reversible way. I think that $$U_f\qr{x}\qr{0} = \qr{x}\qr{0 \oplus f(x)}$$ does it. However, how do I implement this as a circuit? Please give a simple example, if you would.

+",1589,,26,,12/23/2018 11:28,1/21/2019 15:44,What's an example of building a circuit $U_f$ that implements a simple function $f$?,,3,1,,,,CC BY-SA 4.0 +4441,2,,4435,10/18/2018 20:28,,3,,"

First off, you may have some misunderstanding regarding the placement of Bloch vectors on the sphere. The placement of a state on the sphere is dictated by the following parametrisation of this state:

+ +

$$|\psi\rangle=\cos\theta/2\,|0\rangle+e^{i\phi}\sin\theta/2\,|1\rangle.$$

+ +

The parameters ($\theta, \phi$) are then taken as the polar and azimuthal angles for the Bloch vector representation. As such, the orthogonal $|0\rangle$ and $|1\rangle$ vectors, parametrised by $(0, 0)$ and $(\pi, 0)$ respectively, point in opposite directions on the sphere – as confusing as it may seem – since $|0\rangle$ points up (by convention) and $|1\rangle$ points down.

+ +

Next, it seems that you are rotating about the Z axis, instead of about the X or Y axis. +The exponentiated operator $e^{-i\alpha/2\,{\rm Z}}=:{\rm R}_{\rm Z}(\alpha)$, where $\alpha\in(-2\pi, 2\pi]$, is a rotation about the Z axis, and has the following matrix form: +$${\rm R}_{\rm Z}(\alpha)= +\begin{bmatrix}e^{-i\alpha/2}&0\\ +0&e^{i\alpha/2}\end{bmatrix}.$$ +You can check that this indeed corresponds to a Z rotation of a Bloch vector by the same angle.

+ +

Assuming you took $V(\beta,|\psi\rangle)={\rm R}_{\rm Z}(2\beta)|\psi\rangle$, apply this operator to the $|1\rangle$ state with $\beta=\frac\pi2$ and you get ${\rm R}_{\rm Z}(\pi)|1\rangle=e^{i\pi/2}|1\rangle=i|1\rangle.$

+ +

The X and Y rotation operators, on the other hand, look like this:

+ +

$${\rm R}_{\rm X}(\alpha)=e^{-i\alpha/2\,{\rm X}}= +\begin{bmatrix}\cos\alpha/2&-i\sin\alpha/2\\ +-i\sin\alpha/2&\cos\alpha/2\end{bmatrix},$$

+ +

$${\rm R}_{\rm Y}(\alpha)=e^{-i\alpha/2\,{\rm Y}}= +\begin{bmatrix}\cos\alpha/2&-\sin\alpha/2\\ +\sin\alpha/2&\cos\alpha/2\end{bmatrix}.$$

+ +

Following the same reasoning, let us apply ${\rm R}_{\rm X}(\pi)$ and ${\rm R}_{\rm Y}(\pi)$ to the $|1\rangle$ state:

+ +

$${\rm R}_{\rm X}(\pi)|1\rangle=-i\sin\pi/2\,|0\rangle+\cos\pi/2\,|1\rangle=-i|0\rangle,$$

+ +

$${\rm R}_{\rm Y}(\pi)|1\rangle=-\!\sin\pi/2\,|0\rangle+\cos\pi/2\,|1\rangle=-|0\rangle.$$

+ +

Long story short, it depends on the axis about which you rotate (as is always the case with rotations). In this case you were expecting a Y rotation, but you did a Z rotation.
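
+ +

These three cases are easy to verify numerically; here is a small NumPy sketch of the matrices above:

```python
import numpy as np

def rz(a): return np.array([[np.exp(-1j*a/2), 0], [0, np.exp(1j*a/2)]])
def rx(a): return np.array([[np.cos(a/2), -1j*np.sin(a/2)],
                            [-1j*np.sin(a/2), np.cos(a/2)]])
def ry(a): return np.array([[np.cos(a/2), -np.sin(a/2)],
                            [np.sin(a/2),  np.cos(a/2)]])

one = np.array([0, 1])
assert np.allclose(rz(np.pi) @ one, [0, 1j])    # R_Z(pi)|1> =  i|1>
assert np.allclose(rx(np.pi) @ one, [-1j, 0])   # R_X(pi)|1> = -i|0>
assert np.allclose(ry(np.pi) @ one, [-1, 0])    # R_Y(pi)|1> = -|0>
```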

+ +

For more information, I highly recommend you check out this excellent lecture by Ian Glendinning on Pauli rotations.

+",2687,,,,,10/18/2018 20:28,,,,0,,,,CC BY-SA 4.0 +4442,1,4446,,10/18/2018 20:54,,4,365,"

$\newcommand{\qr}[1]{|#1\rangle}$ I gave myself the task of building an operator that implements the following function: $f(0) = 0$, $f(1) = 1$, $f(2) = 1$, $f(3) = 0$. I restricted myself to $x$ up to 2 bits. That is, $f$ tells the parity of its argument.

+ +

I read that in order to be reversible, a circuit $U_f$ could be defined to have the following effect: $U_f\qr{x}_2 \qr{0}_1 \to \qr{x}_2\qr{0 \oplus f(x)}_1$. After doing my calculations, I think the following matrix represents $U_f$

+ +

$$\begin{bmatrix} +1 & 0 & 0 & 0\\ +0 & 1 & 1 & 0\\ +0 & 0 & 0 & 0\\ +0 & 0 & 0 & 0 +\end{bmatrix} \otimes I_2$$

+ +

However, I'm really not sure. I once read that this definition isn't complete, that I should also define the behavior for the input $\qr{x}_2\qr{1}$. (I'm not too clear on that.)

+ +

So --- though confused ---, I went ahead and defined $$U_f\qr{x}_2 \qr{1}_1 \to \qr{x}_2\qr{1 \oplus f(x)}_1.$$ Doing my calculations again, I get the following matrix

+ +

$$\begin{bmatrix} +0 & 1 & 1 & 0\\ +1 & 0 & 0 & 0\\ +0 & 0 & 0 & 0\\ +0 & 0 & 0 & 0 +\end{bmatrix} \otimes I_2$$

+ +

Now I don't know which should be my matrix.

+",1589,,,,,10/24/2018 3:38,What is the matrix for the operator that implements a function to tell the parity of its argument?,,3,4,,,,CC BY-SA 4.0 +4443,2,,4440,10/18/2018 21:15,,4,,"

$\newcommand{\qr}[1]{|#1\rangle}$It would really depend on f. Say for instance you want to compute $f(x) = 3*x$ : +$$U_f\qr{x}\qr{3}\qr{0} = \qr{x}\qr{3}\qr{3*x}$$

+ +

You will need a multiplier circuit like this one.

+ +

However, you can mimic simple operations with NOT, CNOT, Toffoli... For example, create a function which verifies a set of clauses on your bits, as is done in SAT problems. It will output bit 1 if the assignment satisfies the set, 0 otherwise. The whole circuit will be represented by a unitary operation.

+",4127,,23,,1/21/2019 13:12,1/21/2019 13:12,,,,0,,,,CC BY-SA 4.0 +4444,2,,4440,10/18/2018 21:30,,4,,"

Firstly, I would like to mention that it is more common to use +$${\rm U}_f|x\rangle|0\rangle=|x\rangle|f(x)\rangle$$ +instead of what you wrote down. (By the way, $0\oplus f(x)$ is simply $f(x)$.)

+ +

Now, what you want completely depends on $f$ and the domain of $x$. If for example $x$ is allowed only to be a computational basis state, $f(x)=x$ could be implemented easily with a CNOT gate: +$${\rm C}_{\rm X}|0\rangle|0\rangle=|0\rangle|0\rangle,$$ +$${\rm C}_{\rm X}|1\rangle|0\rangle=|1\rangle|1\rangle;$$ +however if $|x\rangle$ is a general quantum state, $f(x)=x$ cannot be implemented in this form since that would violate the no-cloning theorem. In this case, you should simply read out the first qubit. This shows that sometimes (i.e. when $f$ can be represented as a unitary operator) you don't even need a second state.

+",2687,,,,,10/18/2018 21:30,,,,2,,,,CC BY-SA 4.0 +4445,2,,4440,10/18/2018 21:55,,13,,"

For me the problem with understanding quantum oracles was figuring out how they work if the input is in superposition. The answer is: build the oracle so that it is a unitary which does the right thing for inputs in each of the computational basis states, and it will do the right thing for the superposition due to linearity of the unitary transformation. I don't want to write out the formulas here, you can look up a more detailed explanation here.

+ +
+ +

$\newcommand{\qr}[1]{|#1\rangle}$ +Now, to the examples.

+ +

The simplest function to implement is the constant function $f(x) = 0$. Indeed, $U_0\qr{x}\qr{y} = \qr{x}\qr{y \oplus 0} = \qr{x}\qr{y}$ - that's just an identity matrix, which you can program as doing exactly nothing :-)

+ +
+ +

The constant function $f(x) = 1$ is the next easiest thing: $U_1\qr{x}\qr{y} = \qr{x}\qr{y \oplus 1}$, and all you need to do is to apply an X gate to the output qubit.

+ +
+ +

How about a function which actually depends on its input values, for example, $f(x) = x_k$ (if the k-th qubit of the input is in $\qr{1}$ state, flip the state of the output qubit)? You need to make sure that it works if $x_k = 0$ (doing nothing) and if $x_k = 1$ (flipping the state of y). This can be done by a CNOT with the k-th qubit of the input used as a control and the output qubit as a target.
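
+ +

As a concrete check of the linearity argument, here is a NumPy sketch of this single-control case ($f(x) = x$ for a one-bit input): a CNOT defined to be correct on the computational basis states is automatically correct on superpositions.

```python
import numpy as np

# CNOT: the input qubit is the control, the output qubit is the target
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

ket = lambda i: np.eye(4)[i]

# basis-state behaviour: |x>|0> -> |x>|f(x)>
assert np.allclose(CNOT @ ket(0b00), ket(0b00))
assert np.allclose(CNOT @ ket(0b10), ket(0b11))

# linearity: a superposed input gives the superposed outputs
plus0 = (ket(0b00) + ket(0b10)) / np.sqrt(2)
assert np.allclose(CNOT @ plus0, (ket(0b00) + ket(0b11)) / np.sqrt(2))
```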

+ +
+ +

You can find more complicated examples in the Quantum Katas and try to actually implement them. Writing the oracles are covered in Deutsch-Jozsa algorithm, Simon's algorithm and Grover's algorithm katas (each kata covers writing oracles used in the respective algorithm). My favorite one is probably the oracle which implements the majority function on 3 qubits.

+",2879,,26,,1/21/2019 15:44,1/21/2019 15:44,,,,0,,,,CC BY-SA 4.0 +4446,2,,4442,10/18/2018 22:08,,5,,"

All quantum gates must be unitary. Unitary means the conjugate-transpose of the operator is its inverse. In your case:

+ +

$UU^{\dagger} = \begin{bmatrix} +1 & 0 & 0 & 0\\ +0 & 1 & 1 & 0\\ +0 & 0 & 0 & 0\\ +0 & 0 & 0 & 0 +\end{bmatrix} +\begin{bmatrix} +1 & 0 & 0 & 0\\ +0 & 1 & 0 & 0\\ +0 & 1 & 0 & 0\\ +0 & 0 & 0 & 0 +\end{bmatrix} = +\begin{bmatrix} +1 & 0 & 0 & 0\\ +0 & 2 & 0 & 0\\ +0 & 0 & 0 & 0\\ +0 & 0 & 0 & 0 +\end{bmatrix}$

+ +

So it is most certainly not unitary because $UU^{\dagger} \neq \mathbb{I}_4$ (same as your second attempt).

+ +

There's a long way to construct functions like this, and a short way. The long way is to write out all inputs and outputs:

+ +

$U|000\rangle = |000\rangle$

+ +

$U|001\rangle = |001\rangle$

+ +

$U|010\rangle = |011\rangle$

+ +

$U|011\rangle = |010\rangle$

+ +

$U|100\rangle = |101\rangle$

+ +

$U|101\rangle = |100\rangle$

+ +

$U|110\rangle = |110\rangle$

+ +

$U|111\rangle = |111\rangle$

+ +

You can then pretty easily construct the operator from this.

+ +

An easier way is to use projection operators & matrix addition to implement ""if-then"" semantics with matrices:

+ +

$U = |00\rangle\langle00| \otimes \mathbb{I}_2 + |01\rangle\langle01|\otimes X_2 + |10\rangle\langle10| \otimes X_2 + |11\rangle\langle11|\otimes \mathbb{I}_2$

+ +

The way to read this is ""if the input is $|00\rangle$ or $|11\rangle$, do not flip the third bit. If the input is $|01\rangle$ or $|10\rangle$, flip the third bit."" $|\phi\rangle\langle \phi|$ is called the outer product, and is defined for example as follows:

+ +

$|0\rangle\langle0| = \begin{bmatrix} 1 \\ 0 \end{bmatrix} \begin{bmatrix} 1 & 0 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}$

+ +

which is called a projection operator. Use projection operators to only apply an operation on specific states - here, $|0\rangle$.

+ +

A huge benefit of this projectors & addition approach is that you never have to actually write out the full matrix, which can become enormous as the number of qbits increase - 3-qbit 8x8 matrices already have 64 elements! This is your first step into using symbolic rather than matrix reasoning. For example, we can use the rules of linear algebra to calculate the action of our $U$ on some input:

+ +

$U|101\rangle = (|00\rangle\langle00| \otimes \mathbb{I}_2 + |01\rangle\langle01|\otimes X_2 + |10\rangle\langle10| \otimes X_2 + |11\rangle\langle11|\otimes \mathbb{I}_2)|101\rangle$

+ +

Now, matrix multiplication distributes over addition. This means we have:

+ +

$|00\rangle\langle00| \otimes \mathbb{I}_2 |101\rangle + |01\rangle\langle01|\otimes X_2 |101\rangle + |10\rangle\langle10| \otimes X_2 |101\rangle + |11\rangle\langle11|\otimes \mathbb{I}_2 |101\rangle$

+ +

Let's apply further transformation rules. Note $|101\rangle = |10\rangle \otimes |1\rangle$, and $(U\otimes V)(|x\rangle \otimes |y\rangle) = U|x\rangle \otimes V|y\rangle$, where in our cases (for example) $U = |00\rangle\langle00|$, $V = \mathbb{I}_2$, $x=|10\rangle$ and $y=|1\rangle$:

+ +

$|00\rangle\langle00|10\rangle \otimes \mathbb{I}_2 |1\rangle + |01\rangle\langle01|10\rangle \otimes X_2 |1\rangle + |10\rangle\langle10|10\rangle \otimes X_2 |1\rangle + |11\rangle\langle11|10\rangle \otimes \mathbb{I}_2 |1\rangle$

+ +

Now, note we have the following four terms:

+ +

$\langle00|10\rangle, \langle01|10\rangle, \langle10|10\rangle, \langle11|10\rangle$

+ +

These are called inner products, or dot products and here all of them are zero except for $\langle10|10\rangle$ - the dot product of $|10\rangle$ with itself:

+ +

$\langle10|10\rangle = \begin{bmatrix} 0 & 0 & 1 & 0\end{bmatrix} +\begin{bmatrix}0 \\ 0 \\ 1 \\ 0\end{bmatrix} = 1$

+ +

Since the other terms are all zero, they all cancel out:

+ +

$\require{cancel} \cancel{|00\rangle\cdot 0 \otimes \mathbb{I}_2 |1\rangle} + \cancel{|01\rangle\cdot 0 \otimes X_2 |1\rangle} + |10\rangle\cdot 1 \otimes X_2 |1\rangle + \cancel{|11\rangle\cdot 0 \otimes \mathbb{I}_2 |1\rangle}$

+ +

So we are left with:

+ +

$|10\rangle \otimes X_2 |1\rangle$

+ +

Where of course $X_2|1\rangle = |0\rangle$, so:

+ +

$|10\rangle \otimes |0\rangle = |100\rangle$

+ +

And we calculated $U|101\rangle = |100\rangle$ as expected, without once having to write out a huge inconvenient matrix!
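
+ +

The projector construction is also easy to verify numerically; a short NumPy sketch (the helper names are mine):

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
ket = lambda i, n: np.eye(n)[i]
P = lambda i: np.outer(ket(i, 4), ket(i, 4))   # |i><i| on the two input qubits

# U = |00><00| (x) I + |01><01| (x) X + |10><10| (x) X + |11><11| (x) I
U = (np.kron(P(0b00), I2) + np.kron(P(0b01), X)
     + np.kron(P(0b10), X) + np.kron(P(0b11), I2))

assert np.allclose(U @ U.conj().T, np.eye(8))           # U is unitary
assert np.allclose(U @ ket(0b101, 8), ket(0b100, 8))    # U|101> = |100>
```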

+",4153,,4153,,10/18/2018 22:39,10/18/2018 22:39,,,,3,,,,CC BY-SA 4.0 +4447,2,,4442,10/18/2018 22:08,,5,,"

Since your desired operation is a non-injective function, you need a third qubit and a unitary acting on all three qubits. Using an operator on your two input qubits and tensoring this with ${\rm I}_2$ on the third qubit is not going to work as you might as well forget about the third qubit completely if that were the case.

+ +

By the way, the two matrices you present aren't unitary (${\rm U}{}^\dagger{\rm U}={\rm I}_4$ with $^\dagger$ the conjugate transpose does not hold), and tensoring a non-unitary matrix with ${\rm I}_2$ won't make it unitary either.

+ +

So, let's start over: take three qubits, define the first two as your input registers and the third as your output register. Then, what you want your operation to do, is this: +$${\rm U}_f|000\rangle=|000\rangle\\ +{\rm U}_f|010\rangle=|011\rangle\\ +{\rm U}_f|100\rangle=|101\rangle\\ +{\rm U}_f|110\rangle=|110\rangle$$ +Note that to fully define ${\rm U}_f$, we would have to determine a desired output for the four other possible (computational basis) input states, $|\!*\!*1\rangle$. How we do this is entirely up to us! (As long as ${\rm U}_f$ remains unitary, of course.) So we can just choose something convenient, such as this: +$${\rm U}_f|001\rangle=|001\rangle\\ +{\rm U}_f|011\rangle=|010\rangle\\ +{\rm U}_f|101\rangle=|100\rangle\\ +{\rm U}_f|111\rangle=|111\rangle$$ +This has the following matrix form: +$${\rm U}_f=\begin{bmatrix} +1&0&0&0&0&0&0&0\\ +0&1&0&0&0&0&0&0\\ +0&0&0&1&0&0&0&0\\ +0&0&1&0&0&0&0&0\\ +0&0&0&0&0&1&0&0\\ +0&0&0&0&1&0&0&0\\ +0&0&0&0&0&0&1&0\\ +0&0&0&0&0&0&0&1 +\end{bmatrix}$$ +Intuitively, this unitary is constructed with two CNOT gates, which swap the third qubit $|0\rangle\to|1\rangle$ or $|1\rangle\to|0\rangle$ if and only if one of the first two qubits is in a $|1\rangle$ state:

+ +

+ +

You can check that multiplying the matrices of the two CNOTs indeed gives the same ${\rm U}_f$.
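That multiplication is quick to do numerically. A minimal NumPy sketch (the projector construction of the CNOTs and the ordering $|abc\rangle$ are my assumptions):

```python
import numpy as np

I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
P0 = np.diag([1, 0])  # |0><0|
P1 = np.diag([0, 1])  # |1><1|

def kron(*ops):
    out = np.eye(1)
    for op in ops:
        out = np.kron(out, op)
    return out

# CNOT flipping qubit 3 controlled on qubit 1, resp. on qubit 2 (ordering |abc>)
cnot_13 = kron(P0, I, I) + kron(P1, I, X)
cnot_23 = kron(I, P0, I) + kron(I, P1, X)

Uf = cnot_23 @ cnot_13
print(Uf.astype(int))  # reproduces the 8x8 matrix above
```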

+",2687,,2687,,10/20/2018 22:32,10/20/2018 22:32,,,,3,,,,CC BY-SA 4.0 +4448,2,,4442,10/19/2018 1:41,,3,,"

Your bit strings $x$ in the case when you specify $2$ bits are $00$, $01$, $10$, $11$. Now you want to output the result in another bit/qubit.

+ +

Whether the output bit/qubit is initialized as $0$ or $1$ does not change your unitary operation. The only thing that changes is the initial quantum state of your system, to which you apply the unitary operation representing your function.

+ +

In this case, the implementation is straightforward: a first CNOT with the first qubit of register $x$ as control and the output qubit as target, followed by a second CNOT with the second qubit of $x$ as control. If you input $x=00$, nothing happens. If $x=01$ or $10$, one CNOT fires and adds $1$ to the target (definition of CNOT). If $x=11$, both CNOTs fire, so $1$ is added to the target twice, and $1 + 1 = 0$ when you only have one output bit.

+ +

To get the unitary, take the matrix of the CNOT and tensor it with the identity matrix (respecting the qubit ordering when you write it down). Do this for the first CNOT, giving a unitary $U_1$, then for the second, giving $U_2$, and finally multiply them as $U_2 U_1$ to get your unitary operator.

+ +

There is another way to get the unitary operator, which is more useful when dealing with controlled gates in a multi-qubit system. For example, a controlled-$U$ gate can also be written as: +$$ |0\rangle\langle 0|\otimes\mathbb{I}+|1\rangle\langle 1|\otimes U $$

+ +

It is like a sort of quantum if statement. Basically, if the control is 0, do nothing, otherwise apply U. Using this definition on your example, it follows that :

+ +

$$ |0\rangle\langle 0|\otimes\mathbb{I} \otimes\mathbb{I} +|1\rangle\langle 1| \otimes\mathbb{I}\otimes X $$ +where X is the NOT gate.
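As a sanity check, that projector formula can be evaluated directly. A minimal NumPy sketch (the variable names and the big-endian ordering $|abc\rangle$ are my assumptions, not part of the answer above):

```python
import numpy as np

I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
P0 = np.diag([1, 0])  # |0><0|
P1 = np.diag([0, 1])  # |1><1|

# "quantum if": flip the third qubit iff the first qubit is 1
U = np.kron(P0, np.kron(I, I)) + np.kron(P1, np.kron(I, X))

ket = lambda a, b, c: np.eye(8)[4*a + 2*b + c]   # computational basis vector |abc>
assert np.allclose(U @ ket(0, 1, 0), ket(0, 1, 0))  # control 0: do nothing
assert np.allclose(U @ ket(1, 1, 0), ket(1, 1, 1))  # control 1: apply X
```

The same pattern extends to any controlled gate: replace $X$ by the unitary you want to apply.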

+ +

You have an example on a 5-qubit system in this blog. Also, I asked a similar question in the past, but about the code for such an operation.

+",4127,,4127,,10/24/2018 3:38,10/24/2018 3:38,,,,3,,,,CC BY-SA 4.0 +4449,1,4451,,10/19/2018 11:41,,1,324,"

I'm having a bit of trouble understanding @DaftWullie's answer here.

+ +

I understood that the $4\times 4$ matrix $A$ +$$ \frac{1}{4} \left[\begin{matrix} +15 & 9 & 5 & -3 \\ +9 & 15 & 3 & -5 \\ +5 & 3 & 15 & -9 \\ +-3 & -5 & -9 & 15 +\end{matrix}\right]$$

+ +

can be decomposed into Pauli matrices as:

+ +

$$A=15\mathbb{I}\otimes\mathbb{I}+9Z\otimes X+5X\otimes Z-3Y\otimes Y$$

+ +

So far so good.

+ +

Then he says:

+ +
+

Now, it is interesting to note that every one of these terms commutes. + So, that means that $$ e^{iA\theta}=e^{15i\theta}e^{9i\theta Z\otimes + X}e^{5i\theta X\otimes Z}e^{-3i\theta Y\otimes Y}. $$ You could work + out how to simulate each of these steps individually, but let me make + one further observation first: these commuting terms are the + stabilizers of the 2-qubit cluster state. That may or may not mean + anything to you, but it tells me that a smart thing to do is apply a + controlled-phase gate. + $$ CP\cdot A\cdot CP=15\mathbb{I}\otimes\mathbb{I}+9\mathbb{I}\otimes + X+5X\otimes \mathbb{I}-3X\otimes X. $$

+
+ +

Now, I hadn't heard of clusters states before, but Wikipedia gave me some idea (I'll probably need to through it a few more times though).

+ +

Anyhow, as far as I know, the controlled phase gate $CP$ is basically a controlled-$R_{\phi}$ gate where $R_{\phi}$ is:

+ +

$$\left[\begin{matrix}1 & 0 \\ 0 & e^{i\phi}\end{matrix}\right]$$

+ +

So, controlled $R_{\phi}$ would be

+ +

$$\left[\begin{matrix} +1 & 0 & 0 & 0 \\ +0 & 1 & 0 & 0 \\ +0 & 0 & 1 & 0 \\ +0 & 0 & 0 & e^{i\phi} +\end{matrix}\right]$$

+ +

This is where I am confused. Shouldn't $(\text{controlled }R_{\phi})A(\text{controlled }R_{\phi})$ contain a $\phi$ term somewhere? I don't understand how its Pauli decomposition $15\mathbb{I}\otimes\mathbb{I}+9\mathbb{I}\otimes + X+5X\otimes \mathbb{I}-3X\otimes X$ contains no term containing the phase angle $\phi$. Wolfram Alpha also agrees that the matrix multiplication result of $\operatorname{CP\cdot A\cdot CP}$ must contain a phase term. So, I'm not quite sure how Pauli decomposition of $\operatorname{CP\cdot A\cdot CP}$ as stated by DaftWullie in his answer arises. Am I missing something?

+",26,,55,,8/14/2020 6:12,8/14/2020 6:12,How does the stated Pauli decomposition for $\operatorname{CP\cdot A\cdot CP}$ arise?,,2,0,,,,CC BY-SA 4.0 +4450,2,,4449,10/19/2018 11:57,,1,,"

Yes, if you work with the general phase-shift gate there would be a $\phi$ in the final answer. In fact, you could take $\phi=0$ and just get $A$ back. Try $\phi=\frac{\pi}{4}$. It looks like a notational mismatch over what is called a phase gate/phase-shift gate: whether it means the entire one-parameter family or just one specific value of $\phi$.

+ +

EDIT: Incorrect, see comment below.

+",434,,434,,10/19/2018 19:19,10/19/2018 19:19,,,,1,,,,CC BY-SA 4.0 +4451,2,,4449,10/19/2018 12:18,,2,,"

When I talked about a controlled-phase gate, I meant the standard gate that has unitary matrix +$$ +\left(\begin{array}{cccc} +1 & 0 & 0 & 0 \\ +0 & 1 & 0 & 0 \\ +0 & 0 & 1 & 0 \\ +0 & 0 & 0 & -1 +\end{array}\right) +$$

+ +

Note that this is related to the controlled-not via the action of a Hadamard on the target qubit. It also satisfies some standard propagation relations on Pauli operations: +$$ +Z_n\mapsto Z_n\qquad X_n\mapsto Z_{3-n}X_n +$$ +for $n\in\{1,2\}$ being the two qubits that the controlled-phase gate acts on. +
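These propagation relations are easy to verify numerically; here is a small check (my own sketch, just illustrating the relations stated above):

```python
import numpy as np

I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.diag([1, -1])
CZ = np.diag([1, 1, 1, -1])  # controlled-phase gate

# Z_n -> Z_n and X_n -> Z_{3-n} X_n under conjugation by CZ
assert np.allclose(CZ @ np.kron(Z, I) @ CZ, np.kron(Z, I))
assert np.allclose(CZ @ np.kron(I, Z) @ CZ, np.kron(I, Z))
assert np.allclose(CZ @ np.kron(X, I) @ CZ, np.kron(X, Z))
assert np.allclose(CZ @ np.kron(I, X) @ CZ, np.kron(Z, X))
```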

+",1837,,1837,,10/19/2018 12:30,10/19/2018 12:30,,,,8,,,,CC BY-SA 4.0 +4452,1,,,10/19/2018 13:53,,6,78,"

+ +

I am trying to simulate a system of $j$ qubits and, to visualize the dynamics, I am considering the Husimi distribution of the state. +To carry out the projection onto coherent states I have proceeded in the following manner:

+ +
    R=LA.expm(1j*theta*(Sx*np.sin(phi)-Sy*np.cos(phi)))   # LA = scipy.linalg, np = numpy
+    alpha=np.dot(R,psi0)
+
+ +

where alpha represents the coherent state centered at (phi, theta), R is the rotation matrix, psi0 is the spin state |j,j>, and Sx, Sy are the spin operators along the x and y directions respectively. +I am using scipy's linalg library to carry out the matrix exponentiation. +From all such alphas I am able to construct the coherent-state distribution.

+ +

I am able to produce a supposedly correct distribution in most cases, but in some cases I am getting negative values, which should never occur for a Husimi distribution. The magnitude of these values is very small, though, and might be related to numerical error.

+ +

I am doubtful about my implementation and would like to know whether the methodology I have followed is correct, or whether there is a better alternative.

+",4874,,26,,12/23/2018 13:01,12/23/2018 13:01,Discrepancy regarding Husimi Probability distribution,,0,4,,,,CC BY-SA 4.0 +4453,1,4454,,10/19/2018 16:51,,1,125,"

Context: I am particularly interested in quantum cognition & would like to use a tool like pyZX to perform the following types of optimizations.

+ +
+ +

In Preparing a (quantum) belief system they ""establish a theoretical result showing that in the absence of constraints on measurements, any state can be obtained as the result of a suitable sequence of measurements.""

+ +

It is also stated in the paper that ""achieving the desired belief state may require a sequence of measurements that is not practically feasible.""

+ +
+ +

How could the Group Leaders Optimization Algorithm be applied in the above context to generate practically feasible sequences of measurements?

+",2645,,26,,12/23/2018 11:26,12/23/2018 11:26,Applying Group Leaders Optimization to Quantum Belief Systems,,1,0,,,,CC BY-SA 4.0 +4454,2,,4453,10/19/2018 17:12,,1,,"

The GLOA is just an optimization algorithm (another genetic algorithm actually). So as long as your problem translates into an objective function you seek to minimize/maximize, this would be possible (even by another genetic algorithm).

+ +

I suggest first thinking about how to encode your problem for the optimization. For example, as a sequence of discrete and/or real variables. The objective function is also important to specify. I am guessing here you would look at the fidelity with the desired belief state.

+ +

The only thing you would have to consider, however, is the definition of being practically feasible, as I don't know what the authors refer to by:

+ +
+

a sequence of measurements that is not practically feasible.

+
+ +

However, let us take the example of the GLOA applied to find the decomposition of a unitary operator U into a sequence of gates (a circuit) from a predefined set, whose explanation you can find in this article. This was discussed in this question before. +The idea is to represent a circuit as a sequence of variables taking integer or real values. For instance, let's try to find the decomposition of the Toffoli gate using the set of gates $\{V,Z,S,V^{\dagger},R_x(\theta) \}$, allowing at most 5 gates. We start by randomly creating candidate circuits. Here is one example of a candidate circuit (showing you an encoding of the problem):

+ +
+

1 3 2 0.0; 2 3 1 0.0; 3 2 1 0.0; 4 3 2 0.0; 5 1 3 0.75

+
+ +

You see you have 5 sequences of 4 numbers (3 integers and one real). Each sequence represents the application of a gate. +1 3 2 0.0, in this case, means apply the $V$ gate (index 1 in the set) on qubit $3$, controlled by qubit $2$, with an angle of $0.0$ (the $V$ gate takes no angle, but if you were using, say, rotation gates, this becomes relevant). So it is a problem of 20 variables to tweak for the optimization scheme.

+ +

The optimization consists of finding a solution circuit, represented by a unitary operator $ U_a $, that is close to the unitary operator of interest $ U_t $, using the GLOA to maximize the trace fidelity (here serving as an objective function that tells us how close a candidate is to the solution): +$$ \mathcal{F} = \frac{1}{N}|\operatorname{Tr}(U_aU_t^{\dagger})| $$
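This fidelity is straightforward to compute. A minimal sketch (the function name is mine, and the example unitary is arbitrary):

```python
import numpy as np

def trace_fidelity(U_a, U_t):
    """F = |Tr(U_a U_t^dagger)| / N, which is 1 iff U_a = U_t up to global phase."""
    N = U_t.shape[0]
    return abs(np.trace(U_a @ U_t.conj().T)) / N

# a perfect candidate scores 1, even if it differs by a global phase
U_t = np.diag(np.exp(1j * np.array([0.1, 0.7, 1.3, 2.0])))
print(trace_fidelity(U_t, U_t))                      # ~ 1.0
print(trace_fidelity(np.exp(0.5j) * U_t, U_t))       # ~ 1.0 as well
```

Note the absolute value makes the measure insensitive to a global phase, which is exactly what you want when comparing circuits.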

+ +

If you were looking for quantum states instead, the objective function could, for instance, minimize a distance between two quantum states, one being the desired solution. In Nielsen and Chuang's book, chapter 9.2 is about quantifying how close two quantum states are.

+",4127,,4127,,10/29/2018 21:24,10/29/2018 21:24,,,,3,,,,CC BY-SA 4.0 +4455,1,4456,,10/19/2018 17:34,,1,540,"

The circuit below implements the following two-level unitary transformation:

+ +

+ +

+ +

$\tilde{U}$ is a unitary matrix: $\tilde{U} = \left[\begin{matrix} a & c \\ b & d \end{matrix}\right]$

+ +

where $a, b, c, d$ are any complex numbers.

+ +

As we can see, $U$ acts non-trivially only on the states $\lvert 000 \rangle, \lvert 111 \rangle$.

+ +

How would you solve the circuit for the input state $\lvert 000 \rangle$ or $\lvert 111 \rangle$? My problem is figuring out how to deal with the state $\tilde{U}A$ in the last two CNOT gates.

+ +

EDIT: to clarify what I want:

+ +
    +
  • I start with $\lvert \psi_{0} \rangle = \lvert 0, 0, 0 \rangle$
  • +
  • after the first CNOT I get $\lvert \psi_{1} \rangle = \lvert 0, 0, 1 \rangle$
  • +
  • after the second CNOT I get $\lvert \psi_{2} \rangle = \lvert 0, 1, 1 \rangle$
  • +
  • ...
  • +
+ +

How would you write $\lvert \psi_{3} \rangle$, $\lvert \psi_{4} \rangle$, $\lvert \psi_{5} \rangle$? For this specific circuit, is it even possible to write the full steps like that?

+",4504,,26,,12/23/2018 11:25,12/23/2018 11:25,Solving a circuit implementing a two-level unitary operation,,1,7,,,,CC BY-SA 4.0 +4456,2,,4455,10/19/2018 20:45,,3,,"

$$ +| \psi_3 \rangle = a | 0 1 1 \rangle + b | 1 1 1 \rangle\\ +$$

+ +

Because the criterion (a 1 on both B and C) for the controlled-$\tilde{U}$ is met.

+ +

$$ +| \psi_4 \rangle = a | 0 0 1 \rangle + b | 1 1 1 \rangle\\ +$$

+ +

Because only the first term meets the criterion for the controls, it is the only part affected: its B index flips.

+ +

$$ +| \psi_5 \rangle = a | 0 0 0 \rangle + b | 1 1 1 \rangle\\ +$$

+ +

Because only the first term meets the criterion for the controls, it is the only part affected: its C index flips.

+",434,,,,,10/19/2018 20:45,,,,0,,,,CC BY-SA 4.0 +4457,1,4458,,10/19/2018 23:03,,4,199,"

I was watching the following video https://www.youtube.com/watch?v=IrbJYsep45E
And around 3 minutes in, they do an example computation with two qubits. The qubits start off in the state $|00\rangle$, and then go through what I assume is a Hadamard gate, which puts them in a superposition. Why is there a $50\%$ chance of it being $|01\rangle$ and a $50\%$ chance of it being $|10\rangle$? Why is it not a quarter chance of being $|00\rangle$, $|10\rangle$, $|01\rangle$, and $|11\rangle$?

+",4879,,26,,12/23/2018 11:24,12/23/2018 11:24,What happens when I input two qubits starting at state $|00\rangle$ into a hadamard gate?,,2,1,,,,CC BY-SA 4.0 +4458,2,,4457,10/19/2018 23:22,,1,,"

It is not two Hadamard gates.
+You are 100% correct that two Hadamard gates would put you in an equal superposition of 00, 01, 10, 11, meaning a 25% chance of getting any of those.

+ +

Let's construct her gate based on the input $|00\rangle$ and the output $\frac{1}{\sqrt{2}}\left(|10\rangle +|01\rangle\right)$.

+ +

The final state is $|\Psi^+\rangle$ from this question.

+ +

So the state can be constructed with this circuit:

+ +

+ +

where the middle gate is a CNOT.
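For concreteness, one gate sequence that produces $|\Psi^+\rangle$ from $|00\rangle$ is an $X$ on the second qubit, a Hadamard on the first, then the CNOT. A quick NumPy check (this particular sequence is my reconstruction and may differ in detail from the pictured circuit):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]])
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])  # control = first qubit

ket00 = np.array([1, 0, 0, 0])
state = CNOT @ np.kron(H, I) @ np.kron(I, X) @ ket00
# (|01> + |10>)/sqrt(2): 50/50 between 01 and 10, never 00 or 11
assert np.allclose(state, np.array([0, 1, 1, 0]) / np.sqrt(2))
```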

+",2293,,2293,,10/19/2018 23:43,10/19/2018 23:43,,,,4,,,,CC BY-SA 4.0 +4459,1,4461,,10/20/2018 13:24,,4,84,"

$\newcommand{\qr}[1]{|#1\rangle}$Say I begin with $4$ qubits $\qr{+}\qr{+}\qr{+}\qr{+}$ forming a register $B$. Name these qubits as $b_3, b_2, b_1, b_0$. Also, let $C$ be another register $\qr0\qr0\qr0\qr0$ whose qubit names are $c_3, c_2, c_1, c_0$.

+ +

Now apply $\operatorname{CNOT}(b_i, c_i)$, that is, entangle $b_3$ and $c_3$, $b_2$ and $c_2$ and so on. Suppose I measure $B$ and I obtain $\qr{0101} = \qr5$. What will I get if I measure $C$ now? I say I'll get precisely $\qr{5}$.

+ +

Does this happen a $100\%$ of the time?

+",1589,,55,,10/20/2018 14:57,10/21/2018 14:32,Does entanglement correlate qubits a $100\%$ of the time?,,1,1,,,,CC BY-SA 4.0 +4460,2,,4438,10/20/2018 14:32,,1,,"

You may keep a tab on the Jobs page or in particular the Ph.D. positions page on Quantiki. You can view the extended descriptions for each position by clicking on the individual position names and see the project details like location, duration of the Ph.D., potential supervisors, etc. The positions are not restricted to any specific country as such and cannot be sorted according to sub-fields (as far as I know).

+",26,,,,,10/20/2018 14:32,,,,1,,,,CC BY-SA 4.0 +4461,2,,4459,10/20/2018 14:47,,3,,"

If you only operate on the $i$-th qubit, then the other qubits do not matter and can be ignored, so you effectively are considering the operation +$$\lvert+\rangle\lvert0\rangle\to\frac{1}{\sqrt2}[\lvert0,0\rangle+\lvert1,1\rangle]$$ +on the $i$-th qubit, with all other qubits not being affected. +Then, measuring the first qubit always collapses the second one, like in the example you showed, which means that outcomes are always correlated.

+ +

More generally, if you apply CNOT operations to a subset of the qubits (say those with indices $i\in A\subseteq\{1,...,n\}$), then you are doing the following +$$\left[\bigotimes_{k=1}^n \lvert+\rangle\right]\left[\bigotimes_{k=1}^n \lvert0\rangle\right] \rightarrow \left[\bigotimes_{i\notin A}\lvert+,0\rangle\right]\otimes 2^{-|A|/2}\left[\bigotimes_{i\in A} \big(\lvert0,0\rangle+\lvert 1,1\rangle\big)\right].$$ +Measuring the first register does not do anything interesting on the indices $i\notin A$, while it completely collapses the qubits in the second register with indices $i\in A$. If the outcome measured on the first register is $\lvert\boldsymbol x\rangle\equiv\lvert x_1,...,x_n\rangle$, then the state of the second register becomes: +$$\left[\bigotimes_{i\notin A}\lvert0\rangle\right]\otimes \left[\bigotimes_{i\in A}\lvert x_i\rangle\right].$$ +Clearly, the state of the second register is entirely determined by what has been measured in the first register, so that the measurement results will always be correlated.

+ +

More precisely stated, the measurement results are ""$100\%$"" correlated in the sense that the conditional entropy $H(C|B)$ between the two measurement outcomes (when measuring in the computational basis) is zero.
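The ""always correlated"" claim is easy to see in a quick simulation of a single CNOT-entangled pair (a sketch of mine, sampling computational-basis measurement outcomes of the joint state):

```python
import numpy as np

rng = np.random.default_rng(0)
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)   # (|00> + |11>)/sqrt(2)
probs = np.abs(bell) ** 2                    # outcome probabilities over |bc>
samples = rng.choice(4, size=1000, p=probs)
b, c = samples // 2, samples % 2             # measured bits of B and C
assert np.all(b == c)                        # outcomes match on every shot
```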

+",55,,55,,10/21/2018 14:32,10/21/2018 14:32,,,,1,,,,CC BY-SA 4.0 +4462,2,,4457,10/20/2018 15:13,,3,,"

There is a slight misconception in your question. You can't apply a single Hadamard gate to $|00\rangle$, because the basis state $|00\rangle$ is actually a tensor product of the $2\times 1$-dimensional basis state $|0\rangle$ with itself: $|00\rangle = |0\rangle \otimes |0\rangle = \begin{bmatrix}1 \\0\\0\\0\end{bmatrix}$. You must take the tensor product of the Hadamard gate (which is $2\times 2$-dimensional) as well, in order to apply it to this state. Like so: +$$H \otimes H (|0\rangle \otimes |0\rangle)$$ $$= H|0\rangle \otimes H|0\rangle$$ $$ =\frac{1}{\sqrt{2}}(|0\rangle + |1\rangle) \otimes \frac{1}{\sqrt{2}}(|0\rangle + |1\rangle)$$ +$$=\frac{1}{2}(|00\rangle + |01\rangle + |10\rangle + |11\rangle)$$. Now obviously $ 4 \times (\frac{1}{2})^2 = 1$, as you have correctly inferred.
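A one-liner NumPy check of the calculation above:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
state = np.kron(H, H) @ np.array([1, 0, 0, 0])      # (H ⊗ H)|00>
assert np.allclose(state, [0.5, 0.5, 0.5, 0.5])     # equal amplitudes
assert np.allclose(np.abs(state) ** 2, [0.25] * 4)  # 25% for each outcome
```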

+",2403,,26,,10/20/2018 17:49,10/20/2018 17:49,,,,1,,,,CC BY-SA 4.0 +4463,1,4466,,10/20/2018 15:23,,2,61,"

$\newcommand{\qr}[1]{|#1\rangle}$Say I begin with $10$ q-bits +$\qr{+}\qr{+}\qr{+}\qr{+}\qr{+}\qr{+}\qr{+}\qr{+}\qr{+}\qr{+}$ forming +a register $B$. Name these q-bits as $b_9, ..., b_2, b_1, b_0$.

+ +

Apply a gate $U_{\operatorname{div}}$ to $B$. For clarity, assume +$U_\text{div}$ divides the number represented by q-bits $b_7 b_6 b_5 b_4 b_3 +b_2 b_1 b_0$ by the number represented by $b_9 b_8$. The output of +$U_\text{div}$ is written as follows. It leaves $b_9 b_8$ alone, writes the +quotient of the division in $b_7 b_6 b_5 b_4 b_3 b_2$, and the remainder is +written in $b_1 b_0$. (This makes sense: if only $2$ q-bits are +allowed for the divisor, the greatest divisor possible is $3$, so the +greatest remainder possible is $2$, so $2$ q-bits for the remainder are enough.)

+ +

Say now I measure $b_1 b_0$ getting the classical bits $01$, meaning +the remainder of the division is $1$. Also I measure $b_7 b_6 b_5 b_4 b_3 +b_2$ getting the classical bits $001110$, meaning my quotient is $14$. +Finally, say I measure $b_9 b_8$ getting the classical bits $10$.

+ +

This means I effectively had $a = 14\times 2 + 1 = 29$, where $a$ +represents the dividend. But it seems to me impossible to talk about what $a$ had to be since $a$ would be written in q-bits $b_7 b_6 b_5 b_4 b_3 +b_2 b_1 b_0$ which was $\qr{+}\qr{+}\qr{+}\qr{+}\qr{+}\qr{+}\qr{+}\qr{+}$ at the start. It seems to me I could never infer the dividend in this experiment, though the only arithmetic that seems to make sense is $a = 14\times 2 + 1 = 29$.

+ +

What's going on here?

+",1589,,,,,10/20/2018 20:50,What's the meaning of applying $U_{\text{div}}$ to a register in superposition?,,1,0,,,,CC BY-SA 4.0 +4465,1,4473,,10/20/2018 19:17,,2,161,"

If you have two registers in the state $\frac{1}{2^{n/2}} \sum_{x = 0}^{2^{n/2} - 1} |x\rangle |0\rangle$, how could you construct a gate that produces a superposition of states $|x\rangle|1\rangle$ when some integer $r$ divides $x$, and $|x\rangle|0\rangle$ otherwise, for each input?

+ +

I.e. a unitary quantum gate that replicates the function $f(x) = \begin{cases} 1 \text{ if } r \text{ divides } x\\ +0 \text{ otherwise} +\end{cases}$

+",4657,,26,,12/23/2018 11:23,12/23/2018 11:23,"How to construct a quantum gate producing 1 if r divides x, 0 otherwise?",,1,6,,,,CC BY-SA 4.0 +4466,2,,4463,10/20/2018 20:50,,2,,"

You cannot replace those bits with the quotient without leaving pieces of information elsewhere. Wherever you put the quotient, it will be entangled with the bits that determine it. Remember that all quantum gates are reversible, so no information is lost until measurement. That being said, you will get a different answer for $a$ on different repetitions of the procedure, so you can only determine the answer for one particular (random) $a$ on each procedure.

+",4657,,,,,10/20/2018 20:50,,,,0,,,,CC BY-SA 4.0 +4468,1,4534,,10/21/2018 4:05,,5,238,"

This is along the same lines as the earlier question: When was the first use of the word Entanglement?

+ +

I was surprised to discover that when searching for ""chimera"" in both of Vicky Choi's minor-embedding papers:
+https://arxiv.org/abs/0804.4884,
+https://arxiv.org/abs/1001.3116,
+no results came up!

+ +

This means that finding the first ""chimera"" paper is harder than I imagined and I don't know where to look anymore. When and where can I find the first use of the term?

+",2293,,2293,,10/21/2018 21:34,11-01-2018 08:52,When and where was the first use of the term Chimera?,,1,0,,,,CC BY-SA 4.0 +4469,1,,,10/21/2018 12:08,,7,248,"

The Solvay Kitaev algorithm was discovered long before the Group Leaders Optimization algorithm and it has some nice theoretical properties. As far as I understand, both have exactly the same goals: given a finite dimensional unitary operator, they decompose the operator into basic quantum gates. I couldn't find any theoretical results about the time complexity or convergence time or error bounds for the GLOA as such. Does the latter (GLOA) have any practical advantage over the former at all, in terms of convergence time or anything?

+ +

P.S: For a detailed description of the GLOA, see: Understanding the Group Leaders Optimization Algorithm

+",26,,26,,10/22/2018 6:28,09-09-2020 15:09,Does the GLOA have any advantage over the Solovay-Kitaev algorithm?,,1,7,,,,CC BY-SA 4.0 +4470,1,4514,,10/21/2018 18:14,,6,242,"

Can we convert every algorithm in $\text{P}$ (polynomial time complexity for deterministic machines) into a quantum algorithm with polynomial time and $O(\log n)$ quantum bit?

+",4213,,26,,10/22/2018 6:22,10/23/2018 9:28,Can we use quantum machines to reduce space complexity of deterministic turing machines?,,1,5,,,,CC BY-SA 4.0 +4471,1,4474,,10/21/2018 18:26,,2,3413,"

I have some utility operations that I'd like to use across projects. How can I import its namespace using Q# in Visual Studio 2017 in other projects?

+",4657,,26,,03-12-2019 09:03,03-12-2019 09:03,Q# How to use a namespace in another project?,,1,0,,,,CC BY-SA 4.0 +4472,1,4484,,10/21/2018 18:54,,4,352,"

This transformation comes up a lot during symbolic manipulation of quantum operations on state vectors. It's the reason why, for instance, $(X\otimes \mathbb{I}_2)|00\rangle = |10\rangle$ - it lets us operate on a single qbit by tensoring a unitary operation $U$ with identity operators where $U$ is at the same position of significance as the qbit to which we want to apply $U$.

+ +

I've been trying to write out a proof of why this transformation works, but I lack good notation for representing and reasoning about tensored matrices and vectors - it becomes very clunky very quickly. Is there a simple way to prove this transformation holds, or a convenient notation for representing tensored matrices/vectors?

+ +

Assume $U$ is a square complex unitary matrix of size $n$, $V$ a square complex unitary matrix of size $m$, $|x\rangle$ an $n$-element complex column vector where $\langle x|x\rangle=1$, and $|y\rangle$ an $m$-element complex column vector where $\langle y|y\rangle=1$.

+",4153,,,,,10/23/2018 22:06,Simple proof that $(U \otimes V)(|x\rangle \otimes |y\rangle) = U|x\rangle \otimes V|y\rangle$?,,3,12,,,,CC BY-SA 4.0 +4473,2,,4465,10/21/2018 19:10,,3,,"

The simplest way would be to do a long division to compute the remainder, toggle the target bit if there is a non-zero remainder, then uncompute the long division.

+ +

Here is an example $r=3$, $N < 16$ circuit in Quirk:

+ +

+ +

Note the displays on the right hand side, which show that conditioning on the output qubit (the bottom one) leaves only values divisible by 3 in the input register (the top 4 qubits).

+ +

The basic idea is to keep track of the maximum value $m$ that could possibly be in the input register $i$, then iteratively pick the largest $k$ such that $r^k \leq m$ and subtract $r^k$ out of $i$ if $i \geq r^k$. This reduces $m$ by $r^k$. Repeat this until $m < r$, then toggle your output bit if $i=0$. Then uncompute all the conditional subtractions to restore $i$.
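A classical mirror of that subtraction schedule may make the idea clearer (this is my own sketch of the classical logic only, not the reversible circuit itself):

```python
def remainder_by_powers(i, r, m):
    """Reduce i by conditionally subtracting powers of r, largest first.

    Since every r**p with p >= 1 is a multiple of r, the value of i mod r
    is preserved at each step; stopping once only values below r remain
    leaves exactly that remainder, so the result is 0 iff r divides i.
    """
    k = 1
    while r ** (k + 1) <= m:
        k += 1
    for p in range(k, 0, -1):      # never subtract r**0 = 1
        while i >= r ** p:
            i -= r ** p
    return i

# r = 3, inputs below 16, matching the Quirk example above
assert [remainder_by_powers(i, 3, 15) for i in range(16)] == [i % 3 for i in range(16)]
```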

+ +

A proper construction would not require $r$ subtractions for each value of $k$ as this one does, and a proper construction would expand the comparison and addition circuits into their full form, but I think this construction gets the right idea across.

+",119,,119,,10/21/2018 19:16,10/21/2018 19:16,,,,0,,,,CC BY-SA 4.0 +4474,2,,4471,10/21/2018 19:45,,2,,"

First, you need to add the project which contains the utility operations as a reference to the project which will be using them. If you're using Visual Studio Code or command line, you can use dotnet add reference command to do that, and in Visual Studio you can use Reference Manager.

+ +

Second, you need to open the namespace which contains the utility operations in each Q# file which uses them. To do this, add an open statement after the declaration of the namespace and before definition of any functions or operations:

+ +
namespace ProjectNamespace {
+    open Utilities;
+
+    // you can use operations defined in Utilities namespace now
+}
+
+",2879,,,,,10/21/2018 19:45,,,,2,,,,CC BY-SA 4.0 +4475,2,,4472,10/21/2018 21:01,,3,,"

I will give you a few elements of the demonstration for real vectors, which you can extend to the complex case.

+ +

Let $\{e_i\}$ be the standard basis for the space on which $U$ ($n\times n$) acts. +Let $\{e_j\}$ be the standard basis for the space on which $V$ ($m\times m$) acts.

+ +

First, it is a standard property that $\{e_i \otimes e_j\}$ is a basis for the $nm$-dimensional tensor-product space.

+ +

$ U \otimes V $ is a linear mapping on that space, and we have: +$$ (U \otimes V) (e_i \otimes e_j) = (U e_i) \otimes (V e_j) \tag{1} $$

+ +

A remark: in linear algebra, when $W$ is linear and its action $W e_i$ on a basis is known, $W$ is uniquely determined. As $\{e_i \otimes e_j\}$ is a basis of the domain of $ U \otimes V $, definition (1) determines it uniquely.

+ +

In particular,
+$$ U \otimes V (x \otimes y) = (U x) \otimes(V y) $$

+ +

Indeed : +$$ U \otimes V (x \otimes y) = U \otimes V (\sum_i x_i e_i \otimes \sum_j y_j e_j) $$ +$$ = U \otimes V (\sum_{i,j} x_i y_j (e_i \otimes e_j)) $$ +$$ = \sum_{i,j} x_i y_j U \otimes V (e_i \otimes e_j) $$ +$$ = \sum_{i,j} x_i y_j (U e_i) \otimes(V e_j) $$ +$$ = \sum_{i,j} x_i y_j (U e_i) (V e_j)^T $$ +$$ = \sum_{i} x_i (U e_i) \sum_{j} y_j (V e_j)^T $$ +$$ = U (\sum_{i} x_i e_i) (V (\sum_{j} y_je_j))^T $$ +$$ = U x (V y)^T $$ +$$ = U x \otimes V y $$

+ +

You can look at that PDF if it makes it more clear.

+",4127,,4127,,10/22/2018 9:42,10/22/2018 9:42,,,,6,,,,CC BY-SA 4.0 +4484,2,,4472,10/22/2018 6:33,,3,,"

If we write +$$ +U=\sum_{i,j}U_{ij}|i\rangle\langle j|\quad V=\sum_{kl}V_{kl}|k\rangle\langle l|, +$$ +and +$$ +|x\rangle=\sum_jx_j|j\rangle\quad |y\rangle=\sum_ly_l|l\rangle, +$$ +then we can evaluate both sides of the equation +$$ +(U\otimes V)(|x\rangle\otimes|y\rangle)=(U|x\rangle)\otimes(V|y\rangle) +$$ +using the definition of the tensor product as +$$ +U\otimes V=\sum_{ijkl}U_{ij}V_{kl}|ik\rangle\langle jl|. +$$

+ +

So, the left-hand side is +\begin{align*} +(U\otimes V)(|x\rangle\otimes|y\rangle)&=\left(\sum_{ijkl}U_{ij}V_{kl}|ik\rangle\langle jl|\right)\left(\sum_{jl}x_jy_l|jl\rangle\right) \\ +&=\sum_{ijkl}U_{ij}x_jV_{kl}y_l|ik\rangle. +\end{align*} +Similarly, the right-hand side is +\begin{align*} +(U|x\rangle)\otimes(V|y\rangle)&=\left(\sum_{ij}U_{ij}x_j|i\rangle\right)\otimes\left(\sum_{kl}V_{kl}y_l|k\rangle\right) \\ +&=\sum_{ijkl}U_{ij}x_jV_{kl}y_l|ik\rangle +\end{align*} +The two are the same.

+ +

You may worry that there's a little bit of trickery going on with the kets, that contained within the ""definition"" of the tensor product is already hiding an implicit use of the tensor product because I'm going from $|i\rangle\otimes|k\rangle$ to $|ik\rangle$, and that makes the definition rather circular. However, remember that the text in a ket is just a label, so you can really think about what I'm doing as defining a new composite label in some different Hilbert space.
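For anyone who also wants a numerical confirmation alongside the algebra: the identity holds for arbitrary (not even unitary) matrices. A NumPy spot-check of my own:

```python
import numpy as np

rng = np.random.default_rng(0)
U = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
V = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
x = rng.standard_normal(2) + 1j * rng.standard_normal(2)
y = rng.standard_normal(3) + 1j * rng.standard_normal(3)

# mixed-product property: (U ⊗ V)(x ⊗ y) = (U x) ⊗ (V y)
assert np.allclose(np.kron(U, V) @ np.kron(x, y), np.kron(U @ x, V @ y))
```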

+",1837,,1837,,10/22/2018 6:40,10/22/2018 6:40,,,,0,,,,CC BY-SA 4.0 +4495,1,4496,,10/22/2018 9:24,,3,136,"

I would like to calculate the state after a transformation using the Hadamard gate on a complex state. I get stuck mid-calculation, most likely due to not being able to dealing with the global state. Anybody who can tell me what the part on the question mark should be (and/or which other errors I made)?

+ +

$H {|0\rangle + i|1\rangle\over\sqrt 2} \equiv +{1\over\sqrt 2}\begin{bmatrix}1 & 1 \\ 1 & -1\end{bmatrix} {1\over\sqrt 2} \begin{bmatrix} 1 \\ i \end{bmatrix} = +{1\over2} \begin{bmatrix}1+i\\1-i\end{bmatrix} = +? = +{1\over\sqrt 2} \begin{bmatrix}1\\-i\end{bmatrix} +\equiv +{|0\rangle - i|1\rangle\over\sqrt 2} +$

+ +
+ +

Update trying to use @DaftWullie his answer: +$H {|0\rangle + i|1\rangle\over\sqrt 2} \equiv +{1\over\sqrt 2}\begin{bmatrix}1 & 1 \\ 1 & -1\end{bmatrix} {1\over\sqrt 2} \begin{bmatrix} 1 \\ i \end{bmatrix} = +{1\over2} \begin{bmatrix}1+i\\1-i\end{bmatrix} \neq +{1\over\sqrt 2} \begin{bmatrix}1+i\\1-i\end{bmatrix} = +\begin{bmatrix}{1\over\sqrt 2}+{1\over\sqrt 2}i\\{1\over\sqrt 2}-{1\over\sqrt 2}i\end{bmatrix} = +\begin{bmatrix}\cos(\pi/4)+i \cdot \sin(\pi/4)\\ +(\cos(\pi/4)+i \cdot \sin(\pi/4))\cdot(\cos(\pi/2-i\cdot \sin(\pi/2))\end{bmatrix} = +\begin{bmatrix}e^{i\pi/4}\\e^{-i\pi/2}e^{i\pi/4}\end{bmatrix}=\\ +e^{i\pi/4}\begin{bmatrix}1\\e^{-i\pi/2}\end{bmatrix}= +e^{i}e^{\pi/4}\begin{bmatrix}1\\e^{-i\pi/2}\end{bmatrix}\equiv +e^{\pi/4}\begin{bmatrix}1\\e^{-i\pi/2}\end{bmatrix} = +{1\over\sqrt 2}\begin{bmatrix}1\\-i\end{bmatrix} +\equiv +{|0\rangle - i|1\rangle\over\sqrt 2} +$

+ +

Here I still get partly stuck, as I was expecting to calculate using ${1\over 2}$ instead of ${1\over\sqrt 2}$. I see that this too falls into the category of ""multiplies the whole vector has no observable consequence"", but I wonder if I can calculate this ""cleaner"" (or did I simply make a mistake?).

+ +

Also, how do I indicate removing the global phase in an equation? Do I use the equivalence symbol? An equal symbol with a footnote above it?

+",2794,,26,,12/23/2018 11:23,12/23/2018 11:23,Outcome of Hadamard transformation on a complex state,,1,3,,,,CC BY-SA 4.0 +4496,2,,4495,10/22/2018 9:30,,3,,"

Taking out a common factor, +$$ +?=\frac{e^{i\pi/4}}{\sqrt{2}}\left[\begin{array}{c}1 \\ e^{-i\pi/2}\end{array}\right] +$$ +Then the equality that comes after the ? is ""up to global phases"", meaning that any factor $e^{i\theta}$ that multiplies the whole vector has no observable consequence.

+ +

Following the update... You've made several things worse. Let me emphasise that the only things with no observable consequence are overall multiplicative factors of modulus 1. Lengths are very important and you can't arbitrarily play around with factors of $\sqrt{2}$. Let me suggest you think of your sequence thus: +$$ +\frac{1}{2}\left[\begin{array}{c} 1+i \\ 1-i \end{array}\right]=\left[\begin{array}{c} \frac{1+i}{2} \\ \frac{1-i}{2} \end{array}\right]=\left[\begin{array}{c} \frac{e^{i\pi/4}}{\sqrt{2}} \\ \frac{e^{-i\pi/4}}{\sqrt{2}}\end{array}\right]=\left[\begin{array}{c} \frac{e^{i\pi/4}}{\sqrt{2}} \\ \frac{e^{i\pi/4}e^{-i\pi/2}}{\sqrt{2}}\end{array}\right]=e^{i\pi/4}\left[\begin{array}{c} \frac{1}{\sqrt{2}} \\ \frac{e^{-i\pi/2}}{\sqrt{2}}\end{array}\right]=e^{i\pi/4}\left[\begin{array}{c} \frac{1}{\sqrt{2}} \\ \frac{-i}{\sqrt{2}}\end{array}\right] +$$

+ +

I'm not aware of a standard notation for ""up to a global phase"". You might choose to write $\equiv\left[\begin{array}{c} \frac{1}{\sqrt{2}} \\ \frac{-i}{\sqrt{2}}\end{array}\right]$, but most people tend to write ""up to a global phase"".
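Numerically, the two states indeed agree up to the factor $e^{i\pi/4}$ (a quick check of my own):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
out = H @ (np.array([1, 1j]) / np.sqrt(2))   # H (|0> + i|1>)/sqrt(2)
target = np.array([1, -1j]) / np.sqrt(2)     # (|0> - i|1>)/sqrt(2)

assert np.allclose(out, np.exp(1j * np.pi / 4) * target)   # equal up to global phase
assert np.allclose(np.abs(out) ** 2, np.abs(target) ** 2)  # same measurement statistics
```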

+",1837,,1837,,10/22/2018 15:04,10/22/2018 15:04,,,,1,,,,CC BY-SA 4.0 +4509,1,4515,,10/22/2018 13:12,,3,942,"

Is there a way to express the general $4 \times 4$ Hamiltonian in some block diagonal form of $2 \times 2$ matrices that I can solve, knowing the exact solution of $2\times 2$?

+ +

This is necessary for the treatment I am going to perform later, as I cannot go for general quartics solutions of this system. I need to see the action of the Hamiltonian to subspaces and the interaction between them.

+",4889,,26,,10/22/2018 17:24,10/23/2018 7:34,"Is there a way to express the general 4X4 Hamiltonian in some block diagonal form of 2X2 matrices that I can solve, knowing the exact solution of 2X2?",,2,2,,,,CC BY-SA 4.0 +4510,1,4512,,10/22/2018 18:30,,3,211,"

I'm trying to study quantum entanglement variation during quantum computation with 4 qubit systems comprising a variety of quantum gates. +How can I simulate this variation in MATLAB? Are there any other alternatives?

+",4460,,26,,10/22/2018 18:33,10/23/2018 1:10,How to simulate quantum entanglement variation in different quantum gates?,,1,0,,,,CC BY-SA 4.0 +4511,2,,4413,10/22/2018 18:48,,0,,"

A general way you can predict the measurement outcomes is to calculate the density matrix as:

+ +

$\rho = |\Psi \rangle \langle \Psi|$ and calculate the partial trace over qubits $B$ and $C$: $\rho_A = \rm{Tr}_{B,C} \left(\rho \right)$.

+ +

You then have a 2x2 density matrix where the two diagonal entries tell you the probabilities of getting $|0\rangle$ or $|1\rangle$ respectively.
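A minimal NumPy sketch of this recipe, using the GHZ-like state $(|000\rangle+|111\rangle)/\sqrt{2}$ as an illustrative choice (not necessarily the asker's state):

```python
import numpy as np

# Example: |Psi> = (|000> + |111>)/sqrt(2) on qubits A, B, C
psi = np.zeros(8)
psi[0] = psi[7] = 1 / np.sqrt(2)
rho = np.outer(psi, psi.conj())

# Partial trace over B and C (the last two qubits, a dimension-4 subsystem)
rho_A = np.einsum('ijkj->ik', rho.reshape(2, 4, 2, 4))

probs = np.real(np.diag(rho_A))
print(probs)  # [0.5 0.5]: P(|0>) and P(|1>) for qubit A
```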

+",2293,,,,,10/22/2018 18:48,,,,1,,,,CC BY-SA 4.0 +4512,2,,4510,10/23/2018 1:10,,4,,"

Step 1: Choose a measure for the entanglement. There are a lot of entanglement measures, some of them are easier to calculate in MATLAB than others.

+ +

Step 2: Initialize your 4-qubit system's density matrix. It will be a 16x16 matrix since $2^4 = 16$. This means all calculations will be very fast in MATLAB and you will have no problem working with the density matrix rather than wavefunction. You could also work with the wavefunction, but a lot of entanglement measures are defined in terms of the density matrix so it's better to work with the density matrix for this type of thing.

+ +

Step 3: Apply your various gates. They will be unitaries, so this will just be some matrix multiplications like: $\rho_{t} = U\rho_{t-1} U^\dagger$, where $\rho_{t-1}$ is the density matrix before the gate is applied, and $\rho_{t}$ is the density matrix after the gate is applied.

+ +

Step 4: Calculate your entanglement measure, which can be a function of the density matrix: $f(\rho_t)$.

+ +

Step 5: Now you have $f(\rho_1)$, $f(\rho_2)$, $f(\rho_3) \ldots $
+You now have the entanglement measure at each point in time, and you can plot this and see how your measure of entanglement varies over the course of applying all your gates.
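Here is a minimal Python/NumPy sketch of steps 2-5 for two gates (the same operations translate directly to MATLAB). As the entanglement measure it uses the linear entropy $1-\mathrm{Tr}(\rho_A^2)$ of a reduced state, which is one valid choice when the global state stays pure:

```python
import numpy as np

def partial_trace_last(rho, keep_dim, trace_dim):
    # Trace out the last subsystem of dimension trace_dim
    rho = rho.reshape(keep_dim, trace_dim, keep_dim, trace_dim)
    return np.einsum('ijkj->ik', rho)

# Step 2: initialize |0000><0000| for 4 qubits (16x16 density matrix)
psi = np.zeros(16)
psi[0] = 1.0
rho = np.outer(psi, psi.conj())

# Step 3: apply H on the first qubit, then a CNOT from the first to the second
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CX = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]])
U1 = np.kron(H, np.eye(8))
U2 = np.kron(CX, np.eye(4))
for U in (U1, U2):
    rho = U @ rho @ U.conj().T

# Step 4: for a pure global state, the linear entropy 1 - Tr(rho_A^2)
# of the reduced state quantifies entanglement across the 1|234 cut
rho_A = partial_trace_last(rho, 2, 8)
lin_entropy = 1 - np.trace(rho_A @ rho_A).real
print(round(lin_entropy, 3))  # 0.5 (maximally entangled across this cut)
```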

+",2293,,,,,10/23/2018 1:10,,,,11,,,,CC BY-SA 4.0 +4513,2,,4509,10/23/2018 1:14,,0,,"

If you want to diagonalize a general 4x4 Hamiltonian, you cannot just diagonalize the four 2x2 blocks and piece together what you get. You need to diagonalize the entire 4x4 matrix all at once.

+ +

If the 4x4 matrix is already block diagonal, then of course you can diagonalize each 2x2 block separately, but not in the general case where all 16 elements of the 4x4 matrix can be arbitrary.
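A quick numerical illustration of the block-diagonal case (a NumPy sketch with randomly chosen Hermitian blocks):

```python
import numpy as np

rng = np.random.default_rng(0)

def rand_herm(n):
    A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (A + A.conj().T) / 2

A, B = rand_herm(2), rand_herm(2)
Z = np.zeros((2, 2))
H = np.block([[A, Z], [Z, B]])  # a block-diagonal 4x4 Hamiltonian

# In the block-diagonal case the 4x4 spectrum is the union of the 2x2 spectra
eig_blocks = np.sort(np.concatenate([np.linalg.eigvalsh(A), np.linalg.eigvalsh(B)]))
eig_full = np.sort(np.linalg.eigvalsh(H))
print(np.allclose(eig_blocks, eig_full))  # True
```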

+",2293,,,,,10/23/2018 1:14,,,,1,,,,CC BY-SA 4.0 +4514,2,,4470,10/23/2018 6:52,,4,,"

It seems this problem is open.

+ +

Watrous [J. Comp. Sys. Sci. 59, (pp. 281-326), 1999] +proved that any space $s$ bounded quantum Turing Machine (for space constructible $s(n)>\Omega(\log n)$) can be simulated by deterministic Turing machine with $O(s^2)$ space. +With the assumption $\mathsf{P \neq SC}$ (where $\mathsf{SC \subseteq P}$ is defined as the class of problems solvable by a DTM simultaneously in polynomial time and poly-logarithmic space), quantum machines will not reduce space complexity exponentially.

+ +

N.B. We don't know whether $\mathsf{P=SC}$ or not, though it is considered unlikely that they would be equal.

+",4213,,124,,10/23/2018 9:28,10/23/2018 9:28,,,,2,,,,CC BY-SA 4.0 +4515,2,,4509,10/23/2018 7:34,,3,,"

I don't have a completely general method for doing what you ask. However, there are a few steps that I might take:

+ +

The $4\times 4$ matrix $H$ can always be written in the form +$$ +H=a\mathbb{I}\otimes\mathbb{I}+\underline{n}_1\cdot\underline{\sigma}\otimes\mathbb{I}+\mathbb{I}\otimes\underline{n}_2\cdot\underline{\sigma}+\underline{\sigma}\cdot M\cdot\underline{\sigma} +$$ +where $M$ is a real $3\times 3$ matrix. Remember that if you want to find the coefficient of a particular term, then you can calculate +$$ +\text{Tr}(H(\sigma_i\otimes\sigma_j))/4 +$$ +for $i,j\in\{0,1,2,3\}$.
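A small NumPy sketch of this coefficient extraction (the random Hermitian $H$ is just an illustrative choice):

```python
import numpy as np

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [I, X, Y, Z]

rng = np.random.default_rng(1)
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
H = (M + M.conj().T) / 2  # an arbitrary 4x4 Hermitian matrix

# Coefficient of sigma_i (x) sigma_j is Tr(H (sigma_i (x) sigma_j)) / 4
c = np.array([[np.trace(H @ np.kron(p, q)) / 4 for q in paulis] for p in paulis])

# Summing the terms back up reproduces H exactly
H_rebuilt = sum(c[i, j] * np.kron(paulis[i], paulis[j])
                for i in range(4) for j in range(4))
print(np.allclose(H, H_rebuilt))  # True
```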

+ +

Now, if you implement the unitary $U_i$ on qubit $i$, Pauli matrices change as +$$ +\sigma_j\mapsto \underline{R}_j^{(i)}\cdot\underline{\sigma}, +$$ +and, if you work it through, you find that $M$ updates to $R^{(1)}\cdot M\cdot {R^{(2)}}^T$. So, you can choose the $R$ matrices to be the matrices that yield the singular values of $M$. In that way, you only ever have to deal with a matrix of the form +$$ +H= a\mathbb{I}\otimes\mathbb{I}+\underline{m}_1\cdot\underline{\sigma}\otimes\mathbb{I}+\mathbb{I}\otimes\underline{m}_2\cdot\underline{\sigma}+n_1X\otimes X+n_2Y\otimes Y+n_3Z\otimes Z. +$$ +If you get lucky and the local fields are in the Z-direction only, this matrix divides into two $2\times 2$ matrices spanned by $\{|00\rangle,|11\rangle\}$ and $\{|01\rangle,|10\rangle\}$ respectively.

+ +

The other option is that if you get really lucky (e.g. in this question!), the non-zero terms in the decomposition all mutually commute. Then you can analytically diagonalise $H$ in a much easier manner.

+ +

However, all of that is far more work than just throwing the $4\times 4$ matrix into the computer and asking for the eigenvalues. After all, I've already had to require diagonalization of a $3\times 3$ matrix. (Where it is more useful is if you have a translation invariant Hamiltonian of many qubits.)

+",1837,,,,,10/23/2018 7:34,,,,8,,,,CC BY-SA 4.0 +4516,2,,4260,10/23/2018 12:26,,3,,"

See the answer to this question for more on how the classical control works. Basically, your operations are controlled on the integer stored (in binary) across a register rather than on the individual bits themselves.

+ +

I also don't quite know the 'best practice' way of controlling on single bits, but I can tell you my workaround. Instead of creating a register with two bits, I create a list of two single qubit registers.

+ +
c = [ ClassicalRegister(1) for _ in range(2) ]
+
+ +

These can be added using the add method of a quantum circuit.

+ +
q = QuantumRegister(1)
+qc = QuantumCircuit(q)
+for register in c:
+    qc.add_register( register )
+
+ +

Then your 'building the circuit' code can be done with

+ +
qc.h(q)
+qc.measure(q,c[0])
+qc.x(q[0]).c_if(c[0], 0)
+qc.measure(q,c[1])
+circuit_drawer(qc)
+
+ +

This works because c[0] now refers to a classical register, rather than a single bit from a classical register.

+",409,,409,,01-03-2020 12:22,01-03-2020 12:22,,,,1,,,,CC BY-SA 4.0 +4518,2,,4472,10/23/2018 22:06,,0,,"

What you have in the title of your question is known as the ""mixed-product property"":

+ +

$$(A \otimes B)(C \otimes D) = (AC) \otimes (BD)$$

+ +

and you can get this plus many other properties from the Kronecker product wikipedia page. +The other answers have shown why the mixed-product property holds true for the left Kronecker product.

+",,user2898,,,,10/23/2018 22:06,,,,4,,,,CC BY-SA 4.0 +4519,1,4526,,10/24/2018 0:05,,2,525,"

In Duality, matroids, qubits, twistors and surreal numbers (recently submitted!) they

+ +
+

show that via the Grassmann-Plucker relations, the various apparent unrelated concepts, such as duality, matroids, qubits, twistors and surreal numbers are, in fact, deeply connected.

+
+ +

The paper includes many interesting topics which I am not well versed in (Grassmannian, Plucker Embedding, Hopf Map, Matroids, etc) & am wondering if it might be possible that someone could explain Grassmann-Plucker relations & how they are used in a quantum context?

+",2645,,,,,10/25/2018 2:21,What are Grassmann-Plucker relations?,,2,3,,,,CC BY-SA 4.0 +4520,2,,4519,10/24/2018 1:08,,3,,"

Consider a single fermionic state given by a single Slater determinant. Let the Hilbert space for the single mode be $\mathbb{C}^n$. Let there be $k$ fermions. Then that single Slater determinant is the Plucker embedding with the same notation as on the wiki page. The task of the Plucker relations is then to identify this embedding from the full projectivized fermionic Hilbert space. What homogenous equations do you need to impose on the entries of the vector in order to know that it came from a single Slater determinant? That is the question that these relations answer.

+ +

Edit: I had left it as an unsaid implication, but @DavidBarMoshe made a good point about being explicit about it. Working within this embedding is the beginning premise of Hartree-Fock.

+ +

This is analogous to the similar question about the Segre embedding judging which states are entangled or not. That is the comment about this being a determinantal variety.

+ +

I am not familiar with the linked paper, so I have no comment about how they use these definitions.

+",434,,434,,10/25/2018 2:21,10/25/2018 2:21,,,,0,,,,CC BY-SA 4.0 +4521,1,,,10/24/2018 2:05,,4,1053,"

I am trying to simulate Deutsch's algorithm, and I need to apply the oracle function matrix to my circuit.

+",4907,,,,,4/29/2020 19:08,How do I create my own unitary matrices that I can apply to a circuit in Cirq?,,3,0,,,,CC BY-SA 4.0 +4522,2,,4521,10/24/2018 2:21,,2,,"

I searched the Cirq documentation for how to define a custom gate and here are the results:

+
+

Gate sets

+

The xmon simulator is designed to work with operations that +are either a GateOperation applying an XmonGate, a CompositeOperation +that decomposes (recursively) to XmonGates, or a 1-qubit or 2-qubit +operation with a KnownMatrix. By default the xmon simulator uses an +Extension defined in xgate_gate_extensions to try to resolve gates +that are not XmonGates to XmonGates.

+

So if you are using a custom gate, there are multiple options for +getting it to work with the simulator:

+

Define it directly as an XmonGate. Provide a CompositeGate made up of +XmonGates. Supply an Extension to the simulator which converts the +gate to an XmonGate or to a CompositeGate which itself can be +decomposed into XmonGates.

+
+",4127,,-1,,6/18/2020 8:31,10/24/2018 3:24,,,,0,,,,CC BY-SA 4.0 +4523,2,,4521,10/24/2018 8:07,,3,,"

[Note: this answer is outdated]

+ +

This is going to change somewhat radically in the next version of cirq, so I'll give an answer for both versions.

+ +

In v0.3, in order for a simulator to understand a custom gate, the gate must implement either cirq.CompositeGate or cirq.KnownMatrix. For your case, the simplest is to implement the matrix:

+ + + +
# assuming cirq v0.3
+import cirq
+import numpy as np
+class Oracle(cirq.Gate, cirq.KnownMatrix):
+    def __init__(self, secret_state, qubit_count):
+        self.secret_state = secret_state
+        self.qubit_count = qubit_count
+    def matrix(self):
+        m = np.eye(1 << self.qubit_count)
+        m[self.secret_state, self.secret_state] = -1
+        return m
+
+ +

You can then use this oracle to simulate e.g. a Grover circuit and see that the secret state ends up with quite a high probability:

+ +
qs = cirq.LineQubit.range(5)
+secret = 5
+oracle = Oracle(secret, len(qs)).on(*qs)
+
+diffusion = [
+    cirq.H.on_each(qs),
+    Oracle(0, len(qs)).on(*qs),
+    cirq.H.on_each(qs)
+]
+c = cirq.Circuit.from_ops(
+    cirq.H.on_each(qs),
+    [oracle, diffusion] * 4
+)
+output_vector = c.apply_unitary_effect_to_state()
+print(np.round(abs(output_vector)**2, 3))
+# [0  0  0  0  0  .999  0  0 ...]
+#                  ^ big probability at offset 5
+
+ +

In the coming v0.4 the classes such as KnownMatrix will be replaced by ""magic methods"" such as _unitary_. (This is generally how things are supposed to be done in python.) One of those magic methods is _apply_unitary_to_tensor_, which is used to enable faster simulation. With that method the oracle application can be simulated much much faster; in $O(q)$ time instead of $O(4^q)$ time assuming the oracle covers all of the qubits. We also happen to avoid the need to know the number of qubits ahead of time:

+ +
# assuming cirq v0.4
+import cirq
+class Oracle(cirq.Gate):
+    def __init__(self, secret_state):
+        self.secret_state = secret_state
+    def _apply_unitary_to_tensor_(self, target_tensor, available_buffer, axes):
+        s = cirq.slice_for_qubits_equal_to(axes, self.secret_state)
+        target_tensor[s] *= -1
+        return target_tensor
+
+",119,,119,,4/29/2020 19:08,4/29/2020 19:08,,,,0,,,,CC BY-SA 4.0 +4524,1,4525,,10/24/2018 8:59,,3,829,"

I am having a hard time figuring out how the CX (controlled-NOT) gate is represented in the matrix representation.

+ +

I understood that tensor product and the identity matrix are the keys, and I understood how the matrix representation works for single-qubit matrices. For example, if we have a circuit with a quantum register q composed of 3 qubits, the operation X q[1] has the matrix representation 1 +$$I_2 \otimes X \otimes I_2 = \begin{pmatrix} +0&0&1&0&0&0&0&0 \\ +0&0&0&1&0&0&0&0 \\ +1&0&0&0&0&0&0&0 \\ +0&1&0&0&0&0&0&0 \\ +0&0&0&0&0&0&1&0 \\ +0&0&0&0&0&0&0&1 \\ +0&0&0&0&1&0&0&0 \\ +0&0&0&0&0&1&0&0 \\ +\end{pmatrix}.$$

+ +

Obviously, I am aware of the matrix representation of the CX gate: +$$\begin{pmatrix} +1&0&0&0\\ +0&1&0&0\\ +0&0&0&1\\ +0&0&1&0\\ +\end{pmatrix}$$

+ +

Taking back our quantum register q, applying CX to the second (control) and third register (target) (CX q[1], q[2]) gives us the matrix representation 2 +$$I_2 \otimes CX = \begin{pmatrix} +1&0&0&0&0&0&0&0\\ +0&1&0&0&0&0&0&0\\ +0&0&0&1&0&0&0&0\\ +0&0&1&0&0&0&0&0\\ +0&0&0&0&1&0&0&0\\ +0&0&0&0&0&1&0&0\\ +0&0&0&0&0&0&0&1\\ +0&0&0&0&0&0&1&0\\ +\end{pmatrix}$$

+ +

The problem comes when we try to apply the CX gate to other pairs of qubits:

+ +
  1. I suspect the matrix representation of CX q[2], q[1] (applying a CX gate with the third qubit of the system as control and the second qubit of the system as target) to be $I_2 \otimes CX^r$ where $$CX^r = \begin{pmatrix} +0&1&0&0\\ +1&0&0&0\\ +0&0&1&0\\ +0&0&0&1\\ +\end{pmatrix}$$ +but I am not sure.

  2. I really don't know how CX q[0], q[2] (applying a CX gate with the first qubit of the system as control and the third qubit of the system as target) would be represented.
+ +

To summarise, my question is ""how is the CX (or more generally a multi-qubit gate) represented as a matrix when there are more than 2 qubits in the system?"".

+ +

1 Computed with Python and numpy:

+ +
import numpy as np
+X = np.array([[0, 1], [1, 0]])
+ID2 = np.identity(2)
print(np.kron(np.kron(ID2, X), ID2))
+
+ +

2 Computed with Python and numpy:

+ +
import numpy as np
+CX = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]])
+ID2 = np.identity(2)
+print(np.kron(ID2, CX))
+
+",1386,,26,,12/23/2018 11:22,12/23/2018 11:22,Matrix representation and CX gate,,1,6,,10/24/2018 9:39,,CC BY-SA 4.0 +4525,2,,4524,10/24/2018 9:20,,2,,"

An easy way is to get it via braket notation.

+ +

Consider the general case of $n$ qubits. The representation of a CNOT between the last two qubits (last qubit being the target) is easily seen to be: +$$\newcommand{\ketbra}[1]{\lvert#1\rangle\!\langle#1\rvert} +CX_{n-1\to n}= I_{2^{n-2}}\otimes\big(\ketbra0\otimes I_2+\ketbra1\otimes X\big),$$ +that is, in matrix notation, the result of tensoring the identity over the first $n-2$ qubits with the usual CNOT matrix.

+ +

What if the CNOT acts instead between the $i$-th and $j$-th qubits? +The braket representation does not change significantly, we just need to change the order of the terms so that it becomes a bit uglier$^1$: +\begin{align} +CX_{i\to j} &= +I_{2^{i-1}}\otimes\ketbra0\otimes I_{2^{j-i-1}}\otimes I\otimes I_{2^{n-j}} \\ +&+ I_{2^{i-1}}\otimes\ketbra1\otimes I_{2^{j-i-1}}\otimes X\otimes I_{2^{n-j}}. +\end{align} +To build up the typical matrix representation you then just compute the Kronecker product of these terms.

+ +

For example, if $n=3$, the control is the third qubit and the target is the second qubit, you have +$$CX_{3\to 2}=I\otimes I\otimes\ketbra0+I\otimes X\otimes \ketbra1.$$
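A quick NumPy check of this expression (the basis ordering $|q_1 q_2 q_3\rangle$ is assumed):

```python
import numpy as np

I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
P0 = np.array([[1, 0], [0, 0]])  # |0><0|
P1 = np.array([[0, 0], [0, 1]])  # |1><1|

# CX with qubit 3 as control and qubit 2 as target (3 qubits total)
CX32 = np.kron(np.kron(I, I), P0) + np.kron(np.kron(I, X), P1)

# Action on a basis state: |011> -> |001> (control q3 = 1, so q2 flips)
ket = np.zeros(8)
ket[0b011] = 1
out = CX32 @ ket
print(np.argmax(out))  # 1, i.e. the state |001>
```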

+ +
+ +

$^1$ Note that the way I've written it, this representation only works for $i<j$, but I'm sure you can easily see what it becomes when $i>j$.

+",55,,,,,10/24/2018 9:20,,,,0,,,,CC BY-SA 4.0 +4526,2,,4519,10/24/2018 11:46,,2,,"

(This answer is given from the point of view of the theory of quantization, in which quantum systems are described by means of a quantization map of a classical phase space into a quantum space. The Plucker embedding will be described as a special case of such a map. This case has many applications in quantum computation).

+ +

Classical dynamics takes place on manifolds. For example, the dynamics of a particle moving on a straight line is completely determined by its initial position $x$ and momentum $p$ (or equivalently velocity). The set of initial parameters needed to determine the dynamics, i.e., to solve the equations of motion, is known as the system's phase space. In the above case it is $\mathbb{R}^2$, i.e., a two-dimensional vector space of all possible values of the position and the momentum. The dynamics is generated by functions on the phase space called Hamiltonians $H(p, q)$ through Hamilton's equations of motion:

+ +

$$ \frac{dx}{dt} = \frac{\partial H}{\partial p}$$ +$$ \frac{dp}{dt} = -\frac{\partial H}{\partial x}$$

+ +

($t$ = time). The Hamiltonian corresponding to free motion is given by +$$ H(p, q) = \frac{1}{2m} p^2$$ +Any reasonable function on the phase space can serve as a classical Hamiltonian. For example, the function: +$$ H(p, q) = p^2 + x^2$$ +is the classical Hamiltonian of the Harmonic oscillator (which is the basis of continuous variable models.)

+ +

Phase spaces (i.e., sets of initial data) need not necessarily be vector spaces. This can happen if the particle's position is confined, for example a particle in a box, or a particle with rotational degrees of freedom, in which case the angular position is confined to lie on a sphere. In all the above cases, the particle momentum is not confined and can assume any value, thus the phase space has an infinite volume, as it has unbounded directions.

+ +

Quantum systems (including all quantum systems used in quantum computing, such as qubits, qudits, continuous variable models, toric codes, etc.) can be described by a classical system + a procedure of quantization, in which the phase space geometrical manifold is traded by a quantum system Hilbert space and the Hamiltonian functions are traded by operators on the Hilbert space. There is no unique procedure applicable to all kinds of systems, different quantization procedures often give slightly different results, but nevertheless, I'll describe one of these procedures which is certainly applicable at least when the quantum Hilbert spaces are finite dimensional (such as the case of qudits).

+ +

First, let me remark that a Hilbert space does not describe the quantum mechanical set of pure states because in quantum mechanics there is no relevance to the overall magnitude and phase of a state vector; the pure states are described by rays; thus, we are talking about a projective Hilbert space which is a Hilbert space with an equivalence relation:

+ +

$$|\Psi\rangle \sim c |\Psi\rangle, \quad c \in \mathbb{C}, \quad c \ne 0 $$

+ +

When the Hilbert space is finite dimensional, the projective Hilbert spaces are called projective vector spaces or simply projective spaces; for example, the projective vector space corresponding to an $n$ dimensional complex vector space is called a complex projective space and denoted by $P(\mathbb{C}^n) \cong \mathbb{C}P^{n-1}$ (its dimension is $n-1$, one dimension less due to the equivalence relation).

+ +

The quantization procedure in this case reduces to an embedding of a classical phase space $M$ into a quantum space $Q$ of states which is an appropriate projective vector space: +$$M \overset{i}{\rightarrow} Q = \mathbb{C}P^{n-1}$$ +In each quantization method, there is a recipe of how, given a classical Hamiltonian function, one can construct a corresponding quantum operator (at least for a certain class of functions).

+ +

When the Hilbert spaces are finite dimensional, such as in the qudit case, the corresponding phase spaces have finite volume.

+ +

One of the most amazing things in the above quantization procedure is that in the case of a qudit, the complex projective space is also the classical phase space. Please see Ashtekar and Schilling.

+ +

This does not mean that quantum mechanics is equivalent to classical mechanics. It only means that the space of classical pure states is the same as the space of quantum pure states. The difference lies in the process of measurement.

+ +

Let me remark that the qudit is a representative case where the dimension of the quantum Hilbert space is finite, in this case the volume of the classical phase space is also finite. This is a general principle.

+ +

The above complete correspondence breaks in cases other than a single qudit. For example, for a set of two $n$-dimensional qudits, the phase space is $M = \mathbb{C}P^{n-1} \times \mathbb{C}P^{n-1}$ while the quantum space is $Q = \mathbb{C}P^{n^2-1}$. The quantization map $M \overset{i}{\rightarrow} Q$ in this case is a special case of the Segre embedding mentioned in AHusain's answer.

+ +

Another case with a finite dimensional Hilbert space is that of fermions. A set of $k$ fermions living in an $n \ge k$ dimensional Hilbert space can assume only certain entangled state vectors which are fully antisymmetric (because fermions cannot be in the same state); for example, a set of two fermions ($k=2$) on an $n=4$ dimensional vector space can assume only the following state vectors

+ +

$$ |\Psi\rangle = c_{12} v_1\wedge v_2 + c_{13} v_1\wedge v_3 + c_{14} v_1\wedge v_4 + c_{23} v_2\wedge v_3 + c_{24} v_2\wedge v_4 + c_{34} v_3\wedge v_4 $$

+ +

(The wedge $\wedge$ is the antisymmetric tensor product: $v_i \wedge v_j = v_i \otimes v_j - v_j \otimes v_i$)

+ +

The complex dimension of this vector space is $6$ and of the corresponding projective vector space is $5$ (the real dimension is 10).

+ +

The classical phase space of the above set of fermions can be obtained as follows: Taking a fixed fermionic state, for example:

+ +

$$ |\Psi\rangle = v_1\wedge v_2 $$

+ +

The vectors $v_i$ are 4 dimensional; the phase space is the orbit of the action of the unitary group $U(4)$ on this fixed vector:

+ +

$$ g \cdot |\Psi\rangle = gv_1\wedge gv_2, \quad g \in U(4) $$

+ +

Now, if $g$ acts only within the two dimensional subspace spanned by $v_3$ and $v_4$, it clearly does not change the fermion state; also if $g$ acts only within the two dimensional subspace spanned by $v_1$ and $v_2$, it also does not change the fermion state because it only changes the basis, thus there is a subgroup $U(2) \times U(2)$ which does not change the initial state, so the phase space in this case is given by:

+ +

$$Gr(2, 4) = \frac{U(4)}{U(2) \times U(2)}$$

+ +

This manifold is called the complex Grassmann manifold. The dimension in our case is: $4^2-2^2-2^2 = 8$. The quantization map, i.e., the embedding:

+ +

$$ Gr(2, 4) \overset{i}{\rightarrow} \mathbb{C}P^{5}$$

+ +

is called the Plucker embedding (this term is applicable in the general case, for arbitrary $k$ and $n$). It is clear from comparing the dimensions ($8 < 10$) that not every state in the projective Hilbert space can be obtained from a point of the Grassmannian, i.e., from a unitary rotation of a fixed initial state. Thus, if we take a general element in the projective space $\mathbb{C}P^{5}$, there will be certain relations that it must satisfy to be a unitary rotation of a fixed element; these are called the ""Plucker relations""

+ +

In our example there is a single Plucker relation: +$$c_{12} c_{34} -c_{13} c_{24} + c_{14} c_{23} = 0$$ +(These relations are necessarily homogeneous because both manifolds are projective. Please see for example the following article by Smirnov, where the Plucker embedding is explained in some detail, the above equation appears in example 2.11).
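A quick numerical check of this relation for a randomly chosen decomposable element $v_1 \wedge v_2$ (a NumPy sketch; the relation holds identically for any such element):

```python
import numpy as np

rng = np.random.default_rng(2)
v1 = rng.standard_normal(4) + 1j * rng.standard_normal(4)
v2 = rng.standard_normal(4) + 1j * rng.standard_normal(4)

# Plucker coordinates c_ij of the decomposable element v1 ^ v2
c = {(i, j): v1[i] * v2[j] - v1[j] * v2[i]
     for i in range(4) for j in range(i + 1, 4)}

# The single Plucker relation (indices start at 1 in the text, at 0 here)
residual = c[(0, 1)] * c[(2, 3)] - c[(0, 2)] * c[(1, 3)] + c[(0, 3)] * c[(1, 2)]
print(abs(residual) < 1e-10)  # True
```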

+ +

One use of the Grassmann manifold is in the solution of the Schrödinger equation for fermions. Instead of looking for the ground state in the entire Hilbert space, we can formulate a variational problem running only on vectors belonging to the Grassmannian. This procedure, known as the Hartree-Fock method, results in an approximate solution. (This point was also mentioned in AHusain's answer.)

+ +

Please see the following article by Karle and Pachos analyzing the geometry of the Grassmannian $Gr(2,4)$ from the holonomic quantum computation point of view.

+ +

The Grassmann manifold appears also as the ground state manifold of stabilizer codes, please see for example the following article by Zheng and Brun.

+",4263,,4263,,10/24/2018 16:28,10/24/2018 16:28,,,,5,,,,CC BY-SA 4.0 +4527,1,,,10/24/2018 12:12,,11,709,"

What are the fields/business ideas that a new business can work on within quantum computing that can be profitable if this business has no access to onboard quantum setups but can access the cloud-based quantum computing platforms? What are the problems it can work on that can be valuable to industry?

+",4718,,4718,,10/27/2018 10:26,6/17/2019 5:58,Can quantum computing be profitable without quantum hardware?,,3,1,,,,CC BY-SA 4.0 +4528,1,,,10/24/2018 13:43,,12,677,"

It is known by the no-cloning theorem that constructing a machine that is able to clone an arbitrary quantum state is impossible. However, if the copying is assumed not to be perfect, then universal quantum cloning machines can be constructed, being able to create imperfect copies of arbitrary quantum states where the original state and the copy have a certain degree of fidelity that depends on the machine. I came across the paper Quantum copying: Beyond the no-cloning theorem by Buzek and Hillery where this kind of universal quantum cloning machine is presented. However, this paper is from 1996 and I am not aware whether any advances in this kind of machine have been made since.

+ +

Consequently, I would like to know whether any advances in such cloning machines have been made since then, that is, machines whose fidelity is better than the one presented in that paper, or whose methods are less complex ... Additionally, it would also be interesting to obtain references on any useful applications that such machines have, if there are any.

+",2371,,55,,11/19/2022 12:17,1/17/2023 3:28,Advances on imperfect quantum copying,,5,0,,,,CC BY-SA 4.0 +4529,2,,4528,10/24/2018 14:28,,11,,"

Regarding the optimality of the results of your linked article [1],$\def\ket#1{\lvert#1\rangle}\def\bra#1{\!\langle#1\rvert}$ we find in Section III A that on input $\ket{\phi}$, the states produced by this imperfect cloning operation are of the form +$$ +\qquad\qquad\qquad +\rho_{\text{out}} \,=\, \tfrac{5}{6}\ket{\phi}\bra{\phi} \,+\, \tfrac{1}{6}\ket{\phi^\perp}\bra{\phi^\perp}\,, +\qquad\qquad\qquad(\text{3.16 paraphrased}) +$$ +where $\ket{\phi^\perp}$ is the unique state orthogonal to $\ket{\phi}$. Put otherwise, we have +$$ +\qquad\qquad\qquad +\rho_{\text{out}} \,=\, \tfrac{2}{3}\ket{\phi}\bra{\phi} \,+\, \tfrac{1}{3}\rho_{noise}\,, +\qquad\qquad\qquad\qquad\qquad\qquad\qquad\; +$$ +where $\rho_{\text{noise}} = \tfrac{1}{2}\mathbf 1$ is the maximally mixed state. In this sense what you get is two copies of the state which you provide as input, albeit each being corrupted with white noise. +It turns out that this performance is optimal: in [2], it is shown that 5/6 is the optimal fidelity for 'universal cloners', which is what is shown to be achieved in Eqn. (3.16) of [1].
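A small NumPy sketch checking this fidelity for an arbitrary input state (the particular Bloch angles are just an illustrative choice):

```python
import numpy as np

theta, lam = 1.1, 0.4  # arbitrary Bloch-sphere angles for the input state
phi = np.array([np.cos(theta / 2), np.exp(1j * lam) * np.sin(theta / 2)])
phi_perp = np.array([-np.exp(-1j * lam) * np.sin(theta / 2), np.cos(theta / 2)])

rho_out = (5 / 6) * np.outer(phi, phi.conj()) \
        + (1 / 6) * np.outer(phi_perp, phi_perp.conj())

# Fidelity of each copy with the input: <phi| rho_out |phi> = 5/6,
# independently of the chosen input state
F = np.real(phi.conj() @ rho_out @ phi)
print(round(F, 4))  # 0.8333
```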

+ +

[1] Buzek and Hillery. +Quantum copying: Beyond the no-cloning theorem.
       +Phys. Rev. A 54 (1844), 1996. +[arXiv:quant-ph/9607018]

+ +

[2] Bruss et al. + Optimal Universal and State-Dependent Quantum Cloning.
      Phys. Rev. A 57 (2368), 1998. +[arXiv:quant-ph/9705038].

+",124,,,,,10/24/2018 14:28,,,,0,,,,CC BY-SA 4.0 +4530,1,,,10/24/2018 18:37,,4,101,"

The paper Quantum linear systems algorithms: a primer by Dervovic et al has this table on page 3:

+ +

+ +

I'm not sure why there's no $N$ in the time complexity of the algorithm by Childs et al. i.e. $\mathcal{O}(s\kappa \ \text{polylog}(s\kappa/\epsilon))$. It's a bit hard to believe that the time complexity doesn't depend on the dimensions of the matrix involved. I also checked the original paper by Childs et al but I couldn't find the time complexity written in this form there. Any ideas?

+",26,,,,,10/24/2018 18:37,Why is there no $N$ in the time complexity of the QLSP algorithm by Childs et al.?,,0,4,,,,CC BY-SA 4.0 +4531,1,4538,,10/24/2018 20:06,,12,737,"

In classical computing, we can run the key search (for example for AES) by running as many parallel computing nodes as possible.

+ +

It is clear that we can run many Grover's algorithms, too.

+ +

My question is: is it possible to have a speed-up using more than one Grover's algorithm, as in classical computing?

+",4866,,26,,10/25/2018 17:55,10/26/2018 7:31,Can we speed up the Grover's Algorithm by running parallel processes?,,2,0,,,,CC BY-SA 4.0 +4532,1,4537,,10/24/2018 20:41,,5,354,"

$\newcommand{\qr}[1]{|#1\rangle}$In this lecture, it is nicely explained how to define an operator that computes a function $f(x)$. I know how to implement such operators. (We just define $O\qr{x}\qr{y} = \qr{x}\qr{y \oplus f(x)}$.)

+ +

However, it is said in the lecture that this effectively proves $O = O^\dagger$ and I fail to see it so clearly. It says $O = O^\dagger$ by construction. How can I see that so clearly as it is implied?

+",1589,,491,,10/25/2018 12:12,10/25/2018 13:32,Why are oracles Hermitian by construction?,,2,2,,,,CC BY-SA 4.0 +4533,2,,4531,10/24/2018 20:50,,3,,"

In a sense, if we were doing it in parallel on different nodes, we would save running time. But if we talk about complexity (which is what we generally mean by speedup), we need a bit of analysis.

+ +

You agree that we need about $ \sqrt{N} $ operations for the non-parallel case. +Say we have two nodes, and we separate the list of N elements into two lists of size $ N_1,N_2 $. The search on the sub-lists takes about $ \sqrt{N_1},\sqrt{N_2} $.

+ +

However, we have that +$$ \sqrt{N} = \sqrt{N_1+N_2} \le \sqrt{N_1} + \sqrt{N_2} $$
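A small numerical illustration of this point (with $N = 10^6$ as an arbitrary example):

```python
import math

N = 10**6
single = math.sqrt(N)        # ~1000 oracle calls on one machine
per_node = math.sqrt(N / 2)  # ~707 calls on each of two nodes

print(round(single))         # 1000: queries without parallelism
print(round(per_node))       # 707:  wall-clock cost with two nodes
print(round(2 * per_node))   # 1414: total work, more than a single run
```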

+ +

And you would still need to verify which output among what is returned by the parallel processes is the one you seek. It adds a constant in the complexity so we generally hide it into the $O$ notation.

+ +

However, that would still be interesting, especially if we have to cluster hardware because we are limited in the number of qubits or by other limitations.

+",4127,,4127,,10/25/2018 5:55,10/25/2018 5:55,,,,7,,,,CC BY-SA 4.0 +4534,2,,4468,10/24/2018 20:54,,5,,"

The earliest non-internal reference I can find is in NIPS 2009 from a Google/D-Wave effort¹. You'll notice that the two Choi papers, in addition to not using the term ""Chimera"", do not describe a Chimera graph (and note that the name comes from D-Wave, not from graph theory).

+ +

For a good early reference on Chimera, I recommend Bunyk et al., 2014¹, which describes graph theoretical and other practical considerations related to the architecture.

+ +

1 Note: I work at D-Wave

+",4920,,23,,11-01-2018 08:52,11-01-2018 08:52,,,,0,,,,CC BY-SA 4.0 +4535,2,,4532,10/24/2018 21:23,,5,,"

When defining such oracles, you may visualize them as many controlled operations, especially $\text{CNOT}$s, which are an easy way to build oracles.

+ +

We know the effect of the $\text{CNOT}$: if the control is a 1, then we add 1 (mod 2) to the target (you can see the target as one bit of the function's output register). +If we enumerate the options in a simple 2-bit example with the first bit as control, we have: +$$\text{CNOT}(00) = 00; \text{CNOT}(01) = 01; \text{CNOT}(10) = 1(0+1)=11;\text{CNOT}(11) = 1(1+1)=10$$

+ +

We also know that we cancel the effect of the CNOT by applying it again. Take the action of a CNOT, now applied to the images of a first CNOT: +$$\text{CNOT}(00) = 00; \text{CNOT}(01) = 01; \text{CNOT}(11) = 1(1+1)=10;\text{CNOT}(10) = 1(0+1)=11$$

+ +

So you see how controlled operations reproduce the effect of the function on the bits representing its output.

+ +

The $ \oplus $ symbol illustrates that, if I may say so.
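As a quick sanity check, here is a small NumPy sketch (my own addition, not part of the original answer) that verifies the CNOT truth table above and the fact that applying the gate twice cancels its effect:

```python
import numpy as np

# CNOT in the computational basis |00>, |01>, |10>, |11>
# (first qubit is the control, second qubit is the target).
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

# Applying the gate twice undoes it: CNOT . CNOT = I.
assert np.array_equal(CNOT @ CNOT, np.eye(4, dtype=int))

# The action on the basis states matches the enumeration above:
# control c, target t  ->  control c, target (t + c) mod 2.
for c in range(2):
    for t in range(2):
        vec = np.zeros(4, dtype=int)
        vec[2 * c + t] = 1
        out = CNOT @ vec
        assert out[2 * c + (t ^ c)] == 1
```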

+",4127,,26,,10/25/2018 13:32,10/25/2018 13:32,,,,0,,,,CC BY-SA 4.0 +4536,2,,4430,10/25/2018 2:34,,1,,"

I wanted to expand on DaftWullie's accepted answer in case it is helpful to others. DaftWullie's answer is correct; mine is only a continuation of the same idea.

+ +

Consider the one-qubit state $\psi = \frac{1}{\sqrt{2}}(|0\rangle + |1\rangle)$. We know that applying the Hadamard matrix $H = \frac{1}{\sqrt{2}}\begin{bmatrix}1& 1 \\ 1 & -1\end{bmatrix}$ will give us $H\psi = (\frac{1}{2} + \frac{1}{2}) |0\rangle + (\frac{1}{2} - \frac{1}{2}) |1\rangle = |0\rangle$.

+ +

Of course the reason is constructive and destructive interference.

+ +

Now let's consider a two qubit system with state $\psi = \frac{1}{\sqrt{2}} (|00\rangle + |11\rangle)$. Let us say that we want to apply $H$ to the second qubit, but do nothing to the first (apply the identity $I$).

+ +

What matrix represents this transformation? The answer is $I \otimes H$. What is that? It is this matrix:

+ +

$$ +\begin{bmatrix} +H & 0 \\ +0 & H +\end{bmatrix} = \frac{1}{\sqrt{2}}\begin{bmatrix} +1 & 1 & 0 & 0 \\ +1 & -1 & 0 & 0 \\ +0 & 0 & 1 & 1 \\ +0 & 0 & 1 & -1 +\end{bmatrix} +$$

+ +

Can you see why this is? It is because the overall operation should move

+ +

$$ +\begin{align} +|00\rangle &\mapsto \frac{1}{\sqrt{2}}(|00\rangle + |01\rangle) \\ +|01\rangle &\mapsto \frac{1}{\sqrt{2}}(|00\rangle - |01\rangle) +\end{align} +$$

+ +

and likewise for $|10\rangle, |11\rangle$. This is because the Hadamard application to the second qubit is not changing the first qubit.

+ +

Now, you may verify that $I \otimes H$ applied to $(\alpha |0\rangle + \beta |1\rangle) \otimes \frac{1}{\sqrt{2}}(|0\rangle + |1\rangle)$ gives you $\alpha |00\rangle + \beta|10\rangle$. When the qubits are not entangled, then you can do the intuitive thing and apply $I$ and $H$ independently to each qubit.

+ +

However, the story is totally different when the two qubits are entangled! Consider the application to $\frac{1}{\sqrt{2}} ( |00\rangle + |11\rangle)$. In this case, you get the result:

+ +

$$ +\begin{bmatrix} +\frac{1}{2} \\ +\frac{1}{2} \\ +\frac{1}{2} \\ +-\frac{1}{2} \\ +\end{bmatrix} +$$

+ +

Can you see what happened here? Applying $I \otimes H$ no longer means that the $|00\rangle \mapsto \frac{1}{\sqrt{2}} (|00\rangle + |01\rangle)$ part destructively interferes with the $|11\rangle \mapsto \frac{1}{\sqrt{2}} (|10\rangle - |11\rangle)$ part to cancel out the possibility that the second qubit takes on the value $|1\rangle$. That's because $|01\rangle$ and $|11\rangle$ are orthogonal basis states.
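To double-check the arithmetic, here is a quick NumPy verification (my own addition, not from the original answer):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I = np.eye(2)

# Bell state (|00> + |11>)/sqrt(2) in the basis |00>, |01>, |10>, |11>
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)

result = np.kron(I, H) @ bell
# The second qubit is NOT sent back to |0>: all four basis states
# survive, with exactly the amplitudes (1/2, 1/2, 1/2, -1/2) above.
assert np.allclose(result, [0.5, 0.5, 0.5, -0.5])
```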

+ +

And now you can see my mistake. As DaftWullie explained, a qubit cannot be considered transmitted while it is still entangled: an operation on one of the entangled qubits will not give the result that would have been expected if the qubit were not entangled.

+ +

Hope this helps! Thanks to all!

+",4862,,,,,10/25/2018 2:34,,,,0,,,,CC BY-SA 4.0 +4537,2,,4532,10/25/2018 6:04,,8,,"

Showing that $O=O^\dagger$ is equivalent to showing that $O^2=\mathbb{I}$. In other words, +$$ +O^2|x\rangle|y\rangle=|x\rangle|y\rangle +$$ +for all $x$ and $y$.

+ +

To show this, we start from the definition of the oracle +$$ +O|x\rangle|y\rangle=|x\rangle|y\oplus f(x)\rangle +$$ +and apply $O$ again: +$$ +O^2|x\rangle|y\rangle=O|x\rangle|y\oplus f(x)\rangle=|x\rangle|y\oplus f(x)\oplus f(x)\rangle=|x\rangle|y\rangle +$$ +as required (since $a\oplus a=0$, and bitwise addition is associative).
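To make this concrete, here is a small NumPy sketch (my illustration; the helper name `bit_oracle` is mine) that builds the oracle matrix for an arbitrary $f$ with a one-bit output and checks that $O=O^\dagger$ and $O^2=\mathbb{I}$:

```python
import numpy as np

def bit_oracle(f, n):
    # Matrix of O|x>|y> = |x>|y XOR f(x)> for an n-bit input x
    # and a single output bit y; index of |x>|y> is 2*x + y.
    dim = 2 ** (n + 1)
    O = np.zeros((dim, dim), dtype=int)
    for x in range(2 ** n):
        for y in range(2):
            O[2 * x + (y ^ f(x)), 2 * x + y] = 1
    return O

# Example: f(x) = parity of x, on 2 input bits.
O = bit_oracle(lambda x: bin(x).count('1') % 2, 2)

assert np.array_equal(O, O.T)                        # Hermitian (real)
assert np.array_equal(O @ O, np.eye(8, dtype=int))   # O^2 = I
```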

+",1837,,,,,10/25/2018 6:04,,,,0,,,,CC BY-SA 4.0 +4538,2,,4531,10/25/2018 8:02,,7,,"

Certainly! Imagine you have $K=2^k$ copies of the search oracle $U_S$ that you can use. Normally, you'd search by iterating the action +$$ +H^{\otimes n}(\mathbb{I}_n-2|0\rangle\langle 0|^{\otimes n})H^{\otimes n}U_S, +$$ +starting from an initial state $(H|0\rangle)^{\otimes n}$. This takes time $\Theta(\sqrt{N})$. (I'm using $\mathbb{I}_n$ to denote the $2^n\times 2^n$ identity matrix.)

+ +

You could replace this with $2^k$ parallel copies, each indexed by an $x\in\{0,1\}^k$, using +$$ +\left(\mathbb{I}_k\otimes H^{\otimes (n-k)}\right)\mathbb{I}_k\otimes(\mathbb{I}_{n-k}-2|0\rangle\langle 0|^{\otimes (n-k)})\left(\mathbb{I}_k\otimes H^{\otimes (n-k)}\right)U_S +$$ +and starting from a state $|x\rangle(H|0\rangle)^{\otimes(n-k)}$. +The time required for running these would be reduced to $O(\sqrt{N/K})$, at the cost of requiring $K$ times more space.

+ +

In a scaling sense, one might consider this an irrelevant result. If you have a fixed number of oracles, $K$, then you get a fixed ($\sqrt{K}$) improvement (just like, if you have $K$ parallel classical cores, the best improvement you can get is a factor of $K$), and that does not change the scaling. But it does change the fundamental running time. We know that Grover's algorithm is exactly optimal. It takes the absolute minimum time possible with a single oracle. So, knowing that you get a $\sqrt{K}$ improvement in time is useful with regards to that benchmark of a specific running time at a specific value of $N$.
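To put rough numbers on this (an illustrative calculation of my own, not from the original answer):

```python
import math

N = 2 ** 20        # size of the search space
K = 16             # number of parallel copies of the oracle

serial = math.isqrt(N)          # ~ sqrt(N) oracle calls in series
parallel = math.isqrt(N // K)   # ~ sqrt(N/K) calls on each node

# 16 nodes give only a sqrt(16) = 4x reduction in running time,
# not the 16x reduction parallel classical search would give.
assert serial == 1024 and parallel == 256
assert serial // parallel == math.isqrt(K)
```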

+",1837,,1837,,10/26/2018 7:31,10/26/2018 7:31,,,,2,,,,CC BY-SA 4.0 +4540,1,4688,,10/25/2018 14:48,,9,504,"

The line of questioning is inspired by the pick one trick in Section 4 of the PDF version of the paper +Quantum Attacks on Classical Proof Systems - The Hardness of Quantum Rewinding (Ambainis et al., 2014). Slides available here. I don't fully follow the argument there so maybe I missed something important but here is my interpretation of their trick.

+ +

Consider a classical hash function $x \rightarrow H(x)$ that is collision resistant i.e. it is computationally hard to find $H(x) = H(x') \land x\neq x'$. We wish to encode a commitment of a message using this hash function. That is, I take some message $m$ and concatenate some randomness $u$ at the end such that I generate a commitment $c = H(m\Vert u)$. When asked to prove my commitment, I cannot find a different pair $(m',u')$ such that $c = H(m'\Vert u')$ because of the collision-free nature of hashes. My only choice is to open the commitment to $(m,u)$.

+ +

Now, we attack this protocol with a quantum circuit of the hash function.

+ +
    +
  1. Take a superposition over all possible inputs $x_i$ and query the hash function with this state to obtain the state $\vert\psi\rangle = \sum_{i}\vert x_i\rangle\vert H(x_i)\rangle$.

  2. +
  3. Measure the second register to obtain a random commitment. The measurement randomly picks $c = H(x_i)$ for some $i$. The first register then has $\vert\phi\rangle = \sum_j \vert x_j\rangle$ such that $\forall j, c = H(x_j)$.

  4. +
  5. I'd like to open the commitment to some $m'$ that is given to me by the opponent. Use Grover's search on the first register to find a $x_{\text{sol}}$ from the state $\vert\phi\rangle = \sum_j\vert x_j\rangle$ that satisfies some special property. Specifically, the special property is that the first $|m'|$ bits of $x_{\text{sol}}$ are $m'$. That is, I will search to find $x_{\text{sol}} = m'\Vert u'$.

  6. +
+ +

Using the slides posted earlier (Slide 8) and their terminology, it is efficient to find a value $x$ from the intersection of two sets $S$ and $P$. Here $S$ is the set of all $x$ such that $H(x) = c$ and $P$ is the set of all $x$ where the first $|m'|$ bits of $x$ are exactly $m'$.

+ +

My questions regarding this attack are the following:

+ +
    +
  1. Did I get the basic idea of the attack correct? If wrong, ignore the rest of the post!

  2. +
  3. How many elements are there in the superposition $\vert\phi\rangle$ after we commit to a certain $c$? In order that I can open the commitment to any message, it seems like I should have $O(N)$ (the size of the hash function's range) elements. But this is too large.

  4. +
  5. The speed of Grover search - this is related to the previous point - is the other thing. Wouldn't the computational complexity of searching over such a large superposition $\vert\phi\rangle$ be the same as trying to guess a pre-image for a given output of the hash function since one has to search over all the $u$? In this case, where is the advantage?

  6. +
+ +

I'm looking for the intuition more than mathematical proofs so any help is greatly appreciated!

+",4831,,4831,,11-09-2018 10:35,11-12-2018 14:51,Quantum attack on hash functions,,1,0,,,,CC BY-SA 4.0 +4541,1,4542,,10/25/2018 18:54,,5,539,"

In Devitt et al. 2013's introduction to quantum error correction, the authors mention (bottom of page 12) how the stabilizer group for $N$ qubits is abelian.

+ +

More specifically, here is the quote:

+ +
+

An $N$-qubit stabilizer state $\lvert\psi\rangle_N$ is then defined by the $N$ generators of an Abelian (all elements commute) subgroup $\mathcal G$ of the $N$-qubit Pauli group, + $$\mathcal G=\{K^i\,|\,K^i\lvert\psi\rangle=\lvert\psi\rangle,\,[K^i,K^j]=0,\forall (i,j)\}\subset \mathcal P_N.$$

+
+ +

I am confused by this. Is the stabilizer subgroup $\mathcal G$ defined as an abelian subgroup of elements of $\mathcal P_N$ that stabilizes $\lvert\psi\rangle$, or is it instead the case that the subgroup of elements of $\mathcal P_N$ that stabilize $\lvert\psi\rangle$ is abelian?

+ +

If the latter, doesn't this introduce ambiguity in the definition? There could be other elements that stabilize $\lvert\psi\rangle$ but not commute with $\mathcal G$.

+ +

If the former, how is this shown? I can see why the action of $K^i K^j$ and $K^j K^i$ is identical on $\lvert\psi\rangle$, but how do you show that $K^i=K^j$?

+",55,,,,,10/25/2018 20:25,Why is the $N$-qubit stabilizer group abelian?,,1,3,,,,CC BY-SA 4.0 +4542,2,,4541,10/25/2018 19:16,,7,,"

It is not necessary to define the group as commuting —$\def\ket#1{\lvert#1\rangle}$ by virtue of every element in the group stabilising the state $\ket{\psi}$, this property follows.

+ +

Because we are considering subgroups of the $N$-qubit Pauli group, any two elements either commute or anti-commute. +Let $P \in \mathcal P_N$ be an operator which stabilises some vector $\ket{\psi}$, that is such that $P \ket{\psi} = \ket{\psi}$. Suppose that $Q$ anticommutes with $P$. +It then follows that +$$ Q \ket{\psi} = Q P \ket{\psi} = - P Q \ket{\psi}. $$ +Now, if $Q \ket{\psi} = \lambda_Q \ket{\psi}$ for any scalar $\lambda_Q$ at all, we have +$$ \lambda_Q \ket{\psi} = - P Q\ket{\psi} = - \lambda_Q \ket{\psi}. $$ +But this implies that either $\lambda_Q = 0$ (which is impossible as $Q$ is unitary) or $\ket{\psi} = \mathbf 0$, the zero vector.

+ +

It follows that if $\ket{\psi}$ is actually a state (so that in particular it has norm 1), any operator in $\mathcal P_N$ which has $\ket{\psi}$ as a $\pm1$ eigenvector must commute with all operators which stabilise $\ket{\psi}$. +Thus the subgroup of $\mathcal P_N$ which stabilises $\ket{\psi}$ is abelian.
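As a concrete illustration (my own NumPy sketch, not part of the original argument), the Bell state is stabilized by $X\otimes X$ and $Z\otimes Z$, which indeed commute, while an operator anticommuting with them, such as $Z\otimes I$, cannot stabilize it:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

XX = np.kron(X, X)
ZZ = np.kron(Z, Z)

# Bell state (|00> + |11>)/sqrt(2) is stabilized by both XX and ZZ:
psi = np.array([1, 0, 0, 1]) / np.sqrt(2)
assert np.allclose(XX @ psi, psi)
assert np.allclose(ZZ @ psi, psi)

# Two Pauli operators stabilizing the same state commute:
assert np.allclose(XX @ ZZ, ZZ @ XX)

# Z (x) I anticommutes with XX, so it cannot have psi as a
# +1 eigenvector:
ZI = np.kron(Z, np.eye(2))
assert np.allclose(ZI @ XX, -XX @ ZI)
assert not np.allclose(ZI @ psi, psi)
```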

+",124,,124,,10/25/2018 20:25,10/25/2018 20:25,,,,0,,,,CC BY-SA 4.0 +4543,2,,4528,10/26/2018 11:11,,14,,"

Numerous papers on quantum cloning have been written since 1996, including both theoretical and experimentally focused papers. The following survey paper is a good place to start if you want to learn more:

+ +
+

Valerio Scarani, Sofyan Iblisdir, Nicolas Gisin, and Antonio Acin. Quantum cloning. Reviews of Modern Physics 77: 1225-1256, 2005. arXiv:quant-ph/0511088

+
+",1764,,,,,10/26/2018 11:11,,,,1,,,,CC BY-SA 4.0 +4544,2,,4528,10/26/2018 11:21,,5,,"

As John Watrous said, the Rev. Mod. Phys. article is an excellent starting point.

+ +

If you want to know the sort of thing that's been looked at since, then in a shameless bit of self-promotion, you might look at this paper. There have been a couple of follow-up papers as well (including one that closes a small step left open in one of the proofs). What it does is asymmetric cloning, in which the different copies of the state have different qualities. We can get optimal results even in these cases.

+ +

You might also look for the term ""broadcasting"", which is kind of related to cloning but on mixed states rather than pure states.

+",1837,,,,,10/26/2018 11:21,,,,0,,,,CC BY-SA 4.0 +4545,1,,,10/26/2018 15:14,,1,204,"

I was wondering if anybody could help me generate the following state. +It would be preferable if you use only Hadamard, CNOT and T gates, on $\lceil\log_2(M+1)\rceil$ qubits: +$$|\psi\rangle = \frac{1}{\sqrt{2}}\biggl(|0\rangle + \frac{1}{\sqrt{M}}\sum_{j=1}^M|j\rangle\biggr)$$ +Assume $M$ is a power of 2.

+",4206,,26,,03-12-2019 09:20,03-12-2019 09:20,How can the state $\lvert0\rangle+M^{-1/2}\sum_{j=1}^M\lvert j\rangle$ be generated?,,2,6,,,,CC BY-SA 4.0 +4546,2,,4545,10/26/2018 16:08,,1,,"

As you used the tag Qiskit I assume that you want a method to implement this state with Qiskit. Moreover, you did not mention any performance goal, so here is a general method that can be used for any quantum state:

+ +
# Import the Qiskit SDK
+import qiskit
+# Import the initializer
+import qiskit.extensions.quantum_initializer._initializer as initializer
+# Import numpy
+import numpy
+
+
+def generate_amplitudes(M: int) -> numpy.ndarray:
+    qubit_number = int(numpy.ceil(numpy.log2(M + 1)))
+    amplitudes = numpy.zeros((2**qubit_number, ), dtype=numpy.complex)
+    amplitudes[0] = 1 / numpy.sqrt(2)
+    amplitudes[1:M+1] = 1 / numpy.sqrt(2*M)
+    return amplitudes
+
+M = 10
+N = int(numpy.ceil(numpy.log2(M + 1)))
+# Create a Quantum Register with N qubits.
+q = qiskit.QuantumRegister(N)
+# Create a Quantum Circuit
+qc = qiskit.QuantumCircuit(q)
+
+# Initialise the state with the gate set of IBM (not H+T+CX)
+qc.initialize(params=generate_amplitudes(M), qubits=q)
+
+ +

Note that this method is not efficient in the sense that the generated circuit may not be optimal at all. As your state is quite simple, there is probably a clever algorithm to construct it more efficiently.
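Independently of Qiskit, one can at least check that the amplitude vector fed to `initialize` is properly normalised. This is a self-contained NumPy re-derivation of `generate_amplitudes` above:

```python
import numpy as np

def generate_amplitudes(M: int) -> np.ndarray:
    n = int(np.ceil(np.log2(M + 1)))
    amplitudes = np.zeros(2 ** n, dtype=complex)
    amplitudes[0] = 1 / np.sqrt(2)            # amplitude of |0>
    amplitudes[1:M + 1] = 1 / np.sqrt(2 * M)  # amplitudes of |1>..|M>
    return amplitudes

amps = generate_amplitudes(10)
# |1/sqrt(2)|^2 + M * |1/sqrt(2M)|^2 = 1/2 + 1/2 = 1
assert np.isclose(np.sum(np.abs(amps) ** 2), 1.0)
```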

+ +

If you are restricted to a gate set of H, T and CX then you will need to use an external1 tool to translate the non-{H,T} gates into sequences of H and T. This can be done efficiently with the Solovay-Kitaev algorithm or (more?) efficiently with the algorithm described in https://arxiv.org/abs/1212.6253.

+ +

1 I tried to play with Qiskit's compiler and unroller, but I could not make them work properly to perform the translation to the gate set H, T and CX. Maybe someone else have an idea on how to do this translation with Qiskit?

+",1386,,,,,10/26/2018 16:08,,,,0,,,,CC BY-SA 4.0 +4547,2,,4545,10/26/2018 16:09,,1,,"

Get a qubit $c$ into the $|+\rangle$ state, then do controlled uniform superposition preparation onto a register $i$ conditioned on $c$, then increment $i$ conditioned on $c$, then toggle $c$ if $i>0$. $i$ now holds the state you wanted.

+ +

The asymptotic T cost is $O(\lg M - \lg \epsilon)$ where $\epsilon$ is the absolute error tolerance. The circuit uses two more qubits than what you wanted in order to achieve that time cost (one for the control, one to hold the temporary comparison).

+ +

Here's the controlled uniform superposition preparation circuit (the figure is from https://arxiv.org/abs/1805.03662 . I don't know the original paper that discovered this technique):

+ +

+ +

There are standard constructions for exactly converting the comparisons, increments, multi-controlled-NOTs, and controlled-Hs into the H/CNOT/T gate set. You have to be careful to use single-dirty-ancilla constructions for the arithmetic ( https://arxiv.org/abs/1706.07884 ) because otherwise the qubit count will secretly double. And there are standard constructions for approximating the arbitrary-angle Z gates.

+",119,,119,,10/26/2018 16:17,10/26/2018 16:17,,,,0,,,,CC BY-SA 4.0 +4548,1,4549,,10/26/2018 20:16,,6,188,"

I'm currently reading the paper ""Surface codes: Towards practical large scale quantum computing"" and have a couple of very basic questions that, if answered, will help me contextualize and organize the information in this paper much better. I understand the requisite info for basic QC topics but have the sneaking feeling I'm missing or misunderstanding some implicit abstraction core to the topic of surface codes. And so, I would like to double-check my intuitions against someone's actual knowledge.

+ +
    +
  1. Is a surface code an architecture onto which logical qubits, logical operations, and their connections are physically implemented? That is, is the common depiction of a surface code (the pattern of the measure qubits and data qubits) to be taken literally as a real-space image of the actual hardware or does it correspond to some abstraction in software that I'm not quite grasping?

  2. +
  3. Are quantum computers currently designed to compile programs into a surface code implementation or are surface codes still a theoretical framework due to the massive amount of qubits that are needed?

  4. +
+ +

Thank you.

+",4943,,55,,02-07-2019 10:41,02-07-2019 10:41,Is the common depiction of a surface code to be taken literally as a real-space image of the actual hardware?,,1,2,,,,CC BY-SA 4.0 +4549,2,,4548,10/26/2018 23:26,,3,,"
+

is the common depiction of a surface code (the pattern of the measure qubits and data qubits) to be taken literally as a real-space image of the actual hardware?

+
+ +

Correct. The surface code is physically implemented by a planar grid of qubits.

+ +

[Figure: planar grids of qubits implementing the surface code]

Left image source

+ +
+

are surface codes still a theoretical framework due to the massive amount of qubits that are needed?

+
+ +

Correct. To do error corrected quantum computation you need on the order of a thousand physical qubits per logical qubit. And your physical qubits must have fewer than 1 error per thousand operations (especially the 2-qubit operations). No one has a hundred physical qubits yet, and the best demonstrated error rates are ~5 in a thousand.

+",119,,119,,10/26/2018 23:38,10/26/2018 23:38,,,,1,,,,CC BY-SA 4.0 +4550,2,,4527,10/27/2018 3:28,,10,,"

You have two different questions here:

+ +
+

1) Can quantum computing be profitable without quantum hardware?

+
+ +

In the comments people have said this is an opinion-based question, but the truth is that there are already people (and companies!) making profits off of quantum computing.

+ +

In 2016 Doug Finke made a website with his own money, which kept track of the number of qubits in all quantum computers and listed all companies involved in quantum computing. In 2018 he started making a profit from advertising. Recently he has also added a job postings section where Google, IBM, Microsoft, and other big companies have job openings listed. He now makes even more money from consulting. Although it might sound like it, I am in no way related to the person or website; he just attends some of the major quantum computing conferences and makes himself known.

+ +

There's more than just individual consultants making profits off of quantum computing. A famous quantum information theorist (Michele Mosca) and a famous physicist known for his work in quantum key distribution (Norbert Lutkenhaus) started a company called evolutionQ, which makes a profit by selling products (if you click the link a PDF will open) such as courses on quantum-safe security and quantum cryptography, as well as risk assessments and even software. Again, I have no relation to this company.

+ +

So far I've given an example of an individual person making profit from advertising on a website about quantum computers, and a company making a profit from consulting about quantum computing. If you are very picky you might say that these are not examples of quantum computing being profitable without hardware, because no quantum computing is actually directly involved. So I will give a third example:

+ +

The company 1Qbit is a company that does not have any quantum hardware, but thrives by making algorithms and software for companies that are interested in running things on present or future quantum hardware. Again, I have no relation to this company.

+ +

How long will the above three examples keep making profit? +
That is what is opinion based.

+ +

Some people (not necessarily me) are of the opinion that quantum computers that can actually outperform classical computers for a useful task will never exist, so eventually people will get less and less interested in spending money on advertising on a QC-related website, or on buying software for present or future quantum computers, or on consulting fees or courses. Others disagree and think that quantum computing will become an industry just like classical computing (i.e. profits can be made by plenty of companies that don't have hardware, such as software companies, consulting companies, magazine publishers, etc.).

+ +
+

2) What are the fields/business ideas that a new business can work on within quantum computing that can be profitable if this business has no access to onboard quantum setups but can access the cloud-based quantum computing platforms?

+
+ +

This is a different question.
+First, if you have access to cloud-based quantum hardware, you can run calculations on the hardware, just like if you had it yourself in front of you, so your potential to make profit can't be too much smaller than if you actually had the hardware in front of you. Maybe you just want to know if people can make a profit apart from selling hardware? Selling the hardware is probably the only thing that you can't do to make profit if you only have access to the cloud-based hardware.

+ +

If you want to know how you can make a profit from quantum computing in some way other than selling hardware, there are some ""fields"" (as you call them) in which this can be done. Some examples are consulting, software development, algorithm development, and teaching private courses.

+ +

In terms of actually using the cloud-based quantum hardware to make profit, your chances of making a profit are smaller. Everything useful that a cloud-based quantum computer can do at present can already be done for less cost on a classical computer (and though companies like IBM are known to offer their cloud-based QPU time for free, there is still not much you can do to make money from running calculations on their cloud-based chips, because they don't have enough qubits to do anything very useful). You would have to be extremely good at sales to make a profit right now, and whether or not you will be able to do so more easily in the future is an opinion-based matter. Present-day hardware (whether available on the cloud or not) is mainly only useful for testing the physics of the chips, or for playing around. If a competing company needs someone to learn how IBM's chip works, for example, maybe they could pay you to do some experiments, but they probably already have someone who can do it. Maybe you can organize games, tournaments, or other prized competitions for doing small calculations on present-day devices and get sponsors for that, but that is about it!

+",2293,,,,,10/27/2018 3:28,,,,0,,,,CC BY-SA 4.0 +4551,2,,2622,10/27/2018 5:56,,7,,"

The restriction on the eigenvalues is usually given in the form of a condition number. This is the $\kappa$ that you see in all the runtimes in your table. $\kappa = |\lambda_{\rm{max}}/\lambda_{\rm{min}}|$ where $\lambda_{\rm{max}}$ and $\lambda_{\rm{min}}$ are the maximum and minimum eigenvalues respectively.

+ +

In all runtimes listed in your table, it is assumed that the condition number is known. One does not usually think of ""calculating the condition number"" as part of the algorithm for solving $Ax=b$, for example. If the condition number is larger, the system is harder to solve, and if it is smaller the system is easier to solve (assuming all other parameters, including the maximum desired error $\epsilon$ are held fixed).

+ +

In terms of needing to know that $\lambda_{\rm{max}} < M$ and $\lambda_{\rm{min}}>L$, there are lots of examples where we can know the bounds on the eigenvalues without actually going through the effort of calculating the eigenvalues. In this way, HHL can be a great way to find the state you're looking for, without the cost of calculating the condition number or any eigenvalues.
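As a small illustration (my own sketch, not from the original answer): the condition number is just the ratio of the extreme eigenvalues, and cheap bounds such as Gershgorin discs already bound it without any diagonalization:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])

eigs = np.linalg.eigvalsh(A)
kappa = abs(eigs.max() / eigs.min())

# Gershgorin discs give |lambda| in [2, 5] here without computing
# any eigenvalue, which already bounds the condition number:
L, M = 2.0, 5.0
assert L <= abs(eigs).min() <= abs(eigs).max() <= M
assert kappa <= M / L
```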

+ +

Let me give just one real-world example. Let's say I want to find the molecular vibrational state $|\psi\rangle$ such that after $t=10$ps of evolving under its Hamiltonian $H$, the molecule ends up in state $|b\rangle $. This can be described by the equation:

+ +

$$ +e^{-\frac{i}{\hbar}Ht}|\psi\rangle = |b\rangle +$$

+ +

where the $|\psi\rangle$ satisfying this equation is what you want to know. You can find your desired $|\psi\rangle$ by using the HHL algorithm with $A = e^{-\frac{i}{\hbar}Ht}$ and $|\psi\rangle = |x\rangle$.

+ +

Obtaining the smallest and largest eigenvalues of a molecular Hamiltonian to arbitrary precision is extremely costly on a classical computer, but bounds $(L,M)$ on them can be determined at essentially no cost. For example, if the molecule is the nitrogen dimer, we know the lowest and highest vibrational states have energies (eigenvalues) between 0 and 10 eV, and since $e^{0}=1$ we have $L=1$ and $M = e^{-\frac{i}{\hbar} 10 \rm{eV} \cdot 10 \rm{ps}}$. You can convert eV to Hz, and ps to seconds, to evaluate $M$ numerically, and then you can obtain the lower and upper bounds that you need to use when scaling your matrix the way you described in your previous question. At no point did I need to calculate the eigenvalues of a 14-electron molecular Hamiltonian (which would be extremely hard and would defeat the purpose of using HHL, because if I could calculate the eigenvalues I could just calculate $A$ and invert it to get $|\psi\rangle$). I just used the dissociation energy of the molecule to come up with the bounds on its vibrational energies. I could have come up with even better bounds by using the semi-classical WKB approximation, also with much less cost than actually calculating the eigenvalues, but the first example is already enough.

+ +

So now let's address all your individual questions:

+ +
+

First group of questions: I read plenty of papers on HHL and none of + them even mentioned this restriction. Why? Is this restriction known + but considered weak (i.e. it's easy to have this kind of information)? + Or the restriction was not known? Is there any research paper that + mention this restriction?

+
+ +

Out of the 539 papers that have (at present) cited the original HHL paper, many of them will not know the finer details like the dependence of its performance on the condition number or eigenvalues. Some of the papers will certainly know that the performance of the algorithm will depend on the condition number or eigenvalues of the matrix, namely, the papers listed in your table on improvements to the HHL algorithm. Robin Kothari also mentioned it, for example, at the very beginning of his talk in 2016 on the CKS algorithm (which is mentioned in your table).

+ +
+

Second group of questions: Is there a better algorithm in term of + complexity? If not, then why is the HHL algorithm still presented as + an exponential improvement over classical algorithms?

+
+ +

The algorithm you mention, suggested by DaftWullie, to estimate the bounds on the eigenvalues, is not going to be improved beyond $\mathcal{O}(\sqrt{N})$ because the dominant cost in that algorithm is in searching through all $N$ rows for the maximum and minimum values. The cost of everything else is small because the matrix is assumed to have a sparsity of $s \lll N$. There is no way to do this search in faster than $\mathcal{O}(\sqrt{N})$ time (unless you have some other extra knowledge of the system) because Grover's algorithm has been proven to be optimal.

+ +

You are right, people should mention the caveats of algorithms more often in their papers. In terms of your specific question ""why is the HHL algorithm still presented as an exponential improvement over classical algorithms,"" I think the original authors HHL did do their due diligence in explaining the algorithm and its caveats, in that they said that there's an exponential scaling but the cost grows quadratically with the condition number and sparsity and inversely with the size of the error you are willing to tolerate. Why do most other people after HHL not mention all the caveats? Well many of them don't know the caveats, and those that do might have felt it wasn't necessary because calculating the condition number is not part of the algorithm. Knowing the condition number will tell you how well the algorithm will work, but it is assumed you already know this like in the molecular vibrations example I gave above!

+",2293,,2293,,10/27/2018 6:04,10/27/2018 6:04,,,,3,,,,CC BY-SA 4.0 +4552,1,4553,,10/27/2018 15:25,,8,611,"

I am trying to construct a quantum multiplier using the method described here: https://arxiv.org/abs/quant-ph/0403048. However, it seems that the control qubit would only disable the following gates for one iteration. Afterward, the $|y\rangle$ register would still be in the fundamental state, so it would flip $D$ again and enable the next iteration of gates. How do I prevent all future iterations (essentially break out of the loop) using a control qubit?

+",4657,,26,,12/23/2018 12:47,12/23/2018 12:47,How to prevent future loops using a control qubit?,,1,0,,,,CC BY-SA 4.0 +4553,2,,4552,10/27/2018 17:41,,8,,"

You're correct, there is a bug in the algorithm described by the paper. D should be unconditionally decremented in each iteration, and the control (which I would instead call the accumulator... except it looks like it is actually intended to control it?) should be toggled if D=0. The author has made the mistake of conditioning the decrement on the accumulator, which will prevent $D=0$ from becoming $D=2^N-1$ in the relevant iteration and result in the accumulator re-toggling in the next iteration.

+ +

+ +

In any case, this is an extremely inefficient multiplier. It has cost $O(N 2^N)$ instead of the $O(N^2)$ you get from naive schoolbook multiplication. Just do this instead:

+ +
for index, qubit in enumerate(input1):
+  if qubit:
+    output += input2 << index
+
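A runnable classical version of this shift-and-add loop (the function name and the explicit bit extraction are my additions):

```python
def schoolbook_multiply(a: int, b: int, n: int) -> int:
    # Multiply two n-bit integers with O(n) conditional additions,
    # mirroring the shift-and-add loop sketched above.
    output = 0
    for index in range(n):
        qubit = (a >> index) & 1   # index-th bit of the first input
        if qubit:
            output += b << index   # conditionally add the shifted input
    return output

assert schoolbook_multiply(13, 11, 4) == 143
```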
+ +

+",119,,119,,10/27/2018 18:01,10/27/2018 18:01,,,,0,,,,CC BY-SA 4.0 +4554,1,4555,,10/27/2018 18:35,,7,4447,"

Can anyone explain how the CNOT matrix below is a valid representation of its action on the four two-qubit basis states that follow?

+ +

+ +
|0 0> -> |0 0> 
+|0 1> -> |0 1>
+|1 0> -> |1 1>
+|1 1> -> |1 0>
+
+ +

Source: Wikipedia

+",4120,,55,,10/28/2018 13:12,10/28/2018 13:12,Why is the CNOT gate matrix a valid representation for two-qubit states?,,1,2,,,,CC BY-SA 4.0 +4555,2,,4554,10/27/2018 19:10,,9,,"

The one concept that I think would really help you is knowing how to turn those 4 states, $|00\rangle, |01\rangle, |10\rangle, |11\rangle$, into vectors, so that you can do the matrix multiplication.

+ +

Let me show you.

+ +

$$ +\begin{align} +|00\rangle = \begin{bmatrix} 1 \\ 0 \\ 0 \\ 0 \end{bmatrix},|01\rangle = \begin{bmatrix} 0 \\ 1 \\ 0 \\ 0 \end{bmatrix}, |10\rangle = \begin{bmatrix} 0 \\ 0 \\ 1 \\ 0 \end{bmatrix}, |11\rangle = \begin{bmatrix} 0 \\ 0 \\ 0 \\ 1 \end{bmatrix} +\end{align} +$$

+ +

Now if you do the matrix multiplication: $\rm{CNOT} \times |00\rangle$
+You will see that you will get exactly what you said, which is $|00\rangle$, and the same is true for the rest of them!

+ +

This is using the convention that $|0\rangle = \begin{bmatrix} 1 \\ 0 \end{bmatrix}$ and $|1\rangle = \begin{bmatrix} 0 \\ 1 \end{bmatrix}$, and $|ab\rangle = |a\rangle \otimes |b\rangle$ where $\otimes$ is the left Kronecker product.
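A quick way to check this numerically, using NumPy (the variable names are my own):

```python
import numpy as np

# Basis states |0> and |1>
ket0 = np.array([1, 0])
ket1 = np.array([0, 1])

# Two-qubit basis states via the Kronecker product, e.g. |10> = |1> ⊗ |0>
ket10 = np.kron(ket1, ket0)

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

# CNOT flips the target when the control (first qubit) is 1: |10> -> |11>
print(CNOT @ ket10)  # [0 0 0 1], i.e. |11>
```

Repeating this for the other three basis vectors reproduces the whole truth table above.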

+",2293,,2293,,10/27/2018 21:00,10/27/2018 21:00,,,,6,,,,CC BY-SA 4.0 +4556,1,4559,,10/27/2018 20:21,,5,163,"

If we have a $U$ (unitary with all real entries) such that:

+ +

$U|0\rangle =a|0\rangle +b|1\rangle$

+ +

What is $U|1\rangle=?$

+ +

I know the definition of what it means to be unitary, i.e. $U^\dagger U=UU^\dagger =I$.

+ +

I've worked out that $U|1\rangle=c|0\rangle+d|1\rangle$ must satisfy $ac+bd=0$ (by taking the dagger of $U|0\rangle$ and multiplying it with $U|1\rangle$, using that the entries are real).

+ +

Is this the only information we can derive? How can I write $U|1\rangle$?

+",4946,,55,,06-08-2021 23:41,6/16/2021 19:12,"If a unitary is such that $U|0\rangle=a|0\rangle+b|1\rangle$, what is $U|1\rangle$?",,3,8,,,,CC BY-SA 4.0 +4558,2,,4556,10/27/2018 21:40,,4,,"

The general form of a 2x2 unitary matrix is:

+ +

$$ +\begin{pmatrix} +\alpha & \beta \\ +-e^{i\phi}\beta^* & e^{i\phi}\alpha^* +\end{pmatrix}, +$$

+ +

with the constraint that $|\alpha|^2 + |\beta|^2 = 1$.

+ +

Since you say that $U|0\rangle = U\begin{pmatrix} 1 \\ 0 \end{pmatrix} = \begin{pmatrix} a \\ b \end{pmatrix}$, we have that $\alpha = a$ and $-e^{i\phi}\beta^* = b$.

+ +

Therefore, the most we can say about the bottom-right corner is that $d=e^{i\phi}a^*$, and the most we can say about the top-right corner is $c=\beta = -b^* e^{i\phi}$.

+ +

So you have: $U|1\rangle = e^{i\phi}b^*|0\rangle - e^{i\phi}a^*|1\rangle$. +We therefore do not have enough information to determine the phase $\phi$, but since you only ask how to write $U|1\rangle$ we don't need $\phi$ because it enters our expression for $U|1\rangle$ only as a global phase.

+ +

In conclusion: If all we know is $U|0\rangle = a|0\rangle + b|1\rangle$, then we can say that $U|1\rangle = b^*|0\rangle -a^*|1\rangle$, which is correct up to a global phase.
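As a sanity check, here is a short NumPy snippet (my own construction, not part of the question) that builds $U$ from a random normalized pair $(a,b)$ with the phase $\phi=0$ and verifies both unitarity and the claimed action on $|1\rangle$:

```python
import numpy as np

rng = np.random.default_rng(0)
a, b = rng.normal(size=2) + 1j * rng.normal(size=2)
norm = np.sqrt(abs(a) ** 2 + abs(b) ** 2)
a, b = a / norm, b / norm          # enforce |a|^2 + |b|^2 = 1

# Take the phase phi = 0 for concreteness (any phi gives a valid unitary)
U = np.array([[a, b.conjugate()],
              [b, -a.conjugate()]])

print(np.allclose(U.conj().T @ U, np.eye(2)))   # True: U is unitary
print(U @ np.array([0, 1]))                     # the column b*|0> - a*|1>
```

The second print shows exactly the vector $(b^*, -a^*)$ claimed above.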

+",2293,,2293,,10/28/2018 0:29,10/28/2018 0:29,,,,0,,,,CC BY-SA 4.0 +4559,2,,4556,10/27/2018 23:37,,4,,"

You have $U|1\rangle=e^{i\phi}(b^*|0\rangle-a^*|1\rangle)$ (and for real entries, $e^{i\phi}=\pm1$). This condition follows automatically from +$$ +\langle 0|U^\dagger U |1\rangle=0 +$$ +-- this is exactly the condition you describe -- together with the fact that $U|0\rangle$ and $U|1\rangle$ must have the same normalization, +$$ +\langle k|U^\dagger U |k\rangle=1 +$$ +for $k=0,1$. (This also means that $|a|^2+|b|^2=1$.)

+ +

This is the only condition, since now you have ensured that all matrix elements of $U^\dagger U$ are of the correct form.

+",491,,,,,10/27/2018 23:37,,,,0,,,,CC BY-SA 4.0 +4560,2,,4527,10/28/2018 1:44,,4,,"

In complement to the other answer from @user1271772:

+ +
+

1) Can quantum computing be profitable without quantum hardware?

+
+ +

I can add another two elements. First, there are companies that can sell/develop quantum-resistant (post-quantum) security protocols, because as you may know RSA is threatened by quantum computers (at least in theory, but that can be enough to motivate a transition to new protocols).

+ +

Also, we may see new algorithms inspired by quantum computing. Indeed, we may cite quantum evolutionary algorithms and, more recently, a new classical algorithm for recommendation systems that was inspired by previous work on a quantum recommendation-system algorithm. In a sense, quantum computing can contribute even to classical algorithmics.

+ +
+

2) What are the fields/business ideas that a new business can work on within quantum computing that can be profitable if this business has no access to onboard quantum setups but can access the cloud-based quantum computing platforms?

+
+ +

The list would be long. It is first of all a research playground, and businesses have to study the potential of the field for themselves (each business is different, or will consider it differently). If you want an example, you can look at this document, which is a study of the potential of quantum computing for finance. Cloud access can be enough, but one should think more about how one would approach a problem with quantum computing in the first place (after all, the modelization part is always crucial).

+",4127,,,,,10/28/2018 1:44,,,,0,,,,CC BY-SA 4.0 +4562,2,,136,10/28/2018 18:25,,1,,"

Quantum states can change in two ways: 1. quantumly, 2. classically.

+ +
    +
  1. All the state changes taking place quantumly, are unitary. All the quantum gates, quantum errors, etc., are quantum changes.

  2. +
  3. There is no obligation on classical changes to be unitary, e.g. measurement is a classical change.

  4. +
+ +

This is all the more reason why it is said that a quantum state is 'disturbed' once it is measured.

+",4954,,26,,5/15/2019 14:06,5/15/2019 14:06,,,,5,,,,CC BY-SA 4.0 +4563,1,4587,,10/29/2018 3:58,,6,269,"

The CHSH inequality was presented in the paper Proposed Experiment to Test Local Hidden-Variable Theories published in 1969 by J.F. Clauser, M.A. Horne, A. Shimony, and R.A. Holt. I'm interested in which paper first presented their proposed experimental apparatus in nonlocal game format, presumably also introducing & defining the concept of nonlocal games in general.

+",4153,,,,,10/31/2018 15:09,In which paper was the CHSH game first presented?,,1,0,,,,CC BY-SA 4.0 +4564,1,4566,,10/29/2018 14:17,,2,137,"

In my former post on Physics SE I deduced a contradiction between the classical simulation of the 2D graph state and the classical simulation of general measurement-based quantum computation.

+ +

In Norbert's answer, he mentioned that the serial measurements on the graph state cannot be classically simulated.

+ +

This might be the right answer if we accept that a general quantum computation cannot be efficiently simulated classically. It is especially plausible when the quantum computation is not measurement-based but implemented as a normal quantum circuit: even though the initial state can be simulated classically, the evolution may change the state and its entanglement pattern so that it no longer fulfills a given criterion for classical simulation.

+ +

But for measurement based QC, the measurement is carried out on each individual qubit and the measurement on one qubit will not change the state of another qubit. So the sequential measurements on each qubit can all be classically simulated. Then the contradiction is still there.

+ +

I am sure there must be something wrong with my deduction. Please help me to find it.

+",4959,,55,,12/21/2021 1:26,12/21/2021 1:26,The classical simulation of 2D graph state and the measurement based quantum computation,,1,0,,,,CC BY-SA 4.0 +4565,1,,,10/29/2018 15:02,,13,371,"

Both quantum entanglement and quantum state complexity are important in quantum information processing. They are usually highly correlated: roughly, a state with higher entanglement corresponds to higher quantum state complexity, and a complex state is usually highly entangled. But of course this correspondence is not exact. There are some highly entangled states that are not complex quantum states, for example quantum states represented by branching MERA.

+ +

On the other hand, if we use the geometric measure of entanglement (defined as the minimal distance to the nearest separable state w.r.t. a certain distance metric on Hilbert space) to quantify the entanglement, then it seems very similar to the definition of quantum state complexity (the minimal distance to a simple product state). If we only consider pure states and choose the same distance metric for both, for example the Fubini-Study distance or the Bures distance, then they are almost identical.

+ +

Of course, when we are talking about state complexity, it's better to use the more physically motivated 'quantum circuit complexity' to measure the distance. But still, this distance can also be used to define the geometric measure of entanglement (maybe it's not a perfect distance measure for entanglement).

+ +

Then what's the relationship between entanglement and quantum state complexity? Are they essentially two different distance measures on Hilbert space? What should be the optimal metrics for them?

+ +

Or, if entanglement and complexity are both distance measures on the Hilbert space, can we find a transformation between these two metrics?

+",4959,,55,,5/31/2021 15:03,7/16/2021 2:42,Relation between quantum entanglement and quantum state complexity,,0,12,,,,CC BY-SA 4.0 +4566,2,,4564,10/29/2018 15:53,,2,,"
+

the measurement is carried out on each individual qubit and the + measurement on one qubit will not change the state of another qubit

+
+ +

This is an incorrect statement. If the state that you are measuring is entangled (which it very much is for the 2D cluster state), measuring the state of one qubit absolutely changes the state of another qubit. The trivial example of this is teleportation.

+",1837,,,,,10/29/2018 15:53,,,,7,,,,CC BY-SA 4.0 +4567,1,,,10/29/2018 16:20,,7,245,"

I noticed that there was already a post discussing the fast scrambling property of black holes. But it seems no satisfactory answer was given.

+ +

As mentioned by L. Susskind et al., the fast scrambling property of BHs seems to say that BHs behave as infinite-dimensional systems in which every pair of qubits can directly interact 'locally', so that fast scrambling can be implemented by BHs. He also mentioned that this is due to the effect of gravity during the collapse procedure.

+ +

I am wondering how such a gravitational collapse can lead to such an 'infinite-dimensional' geometry. If the geometry is related to tensor networks, then what is the corresponding tensor network of a BH? It sounds very strange to me.

+ +

An alternative is that maybe we do not need such an 'infinite-dimensional' geometry: if the internal geometry of BHs is instead a manifold with vanishing geodesic distance, as discussed here, then the fast-scrambling assumption may also be valid. But still, how can such a geometry be built inside a BH? Also, it seems that such a vanishing-geodesic-distance manifold should be an infinite-dimensional manifold too.

+",4959,,55,,12-01-2021 09:47,12-01-2021 09:47,How can blackholes be fast information scramblers?,,0,6,,,,CC BY-SA 4.0 +4568,1,4573,,10/29/2018 16:21,,6,831,"

BB84 attack with entangled qubits example

+ +

Hi, I'm interested in an attack on the BB84 protocol with entangled qubits. +Let's say Alice sends a qubit $x$ in state $\left|1\right>_x$ to Bob, and Eve uses the CNOT gate to +entangle the states. For this, Eve uses a qubit $e$ in state $\left|0\right>_e$. Using the CNOT gate, the result of this operation is +$$\left|1\right>_x\left|0\right>_e\rightarrow \left|1\right>\left|1\right>.$$ +Let's say now Bob measures in basis B with 90° and 0° orientation (the $|0\rangle/|1\rangle$ basis). Alice and Bob communicate their choice of basis over the +classical channel. Eve now knows the orientation and therefore measures her entangled qubit in the right orientation. +That means Eve now knows the bit value of the key, say 1.

+ +

But what would be the case if Alice sends now a qubit in the state +$$\frac{1}{\sqrt{2}}(\left|0\right>_x-\left|1\right>_x)?$$ +Eve would create the entangled state +$$\frac{1}{\sqrt{2}}(\left|00\right>-\left|11\right>)$$ +There are two different cases depending on Bob's choice of basis:

+ +
    +
  • case 1: Alice sends the qubit to Bob and Bob measures in the wrong basis B (90° and 0° orientation); therefore nothing happens, because Alice and Bob have different bases.

  • +
  • case 2: But what if Bob measures in the diagonal basis, 45° and -45° ($|\pm\rangle=(|0\rangle\pm|1\rangle)/\sqrt{2}$)? Someone said that the BB84 protocol exposes this in 50% of cases. But why is it like that?

  • +
+ +

A measurement in 45° and -45° basis is equal to use the Hadamard transform and a measurement in base B (90°,0°). +So it results in something like this (Bob measures the first bit): +$$H(\left|x\right>)\left|e\right>=\frac{1}{\sqrt{2}}((H\left|0\right>)\left|0\right> - (H\left|1\right>)\left|1\right>)$$ +this comes to +$$\frac{1}{2}(\left|00\right>-\left|01\right>+\left|10\right>+\left|11\right>)$$

+ +

But why does this result not agree with Alice's bit? +Why does the BB84 protocol expose Eve in 50% of cases (in my example)?

+ +

I hope I made understandable what I wanted to ask. +I know that it is complicated. I would be very happy to receive an answer. Thank you!

+",,user4961,26,,12/23/2018 12:47,4/29/2021 16:22,BB84 attack with entangled qubits example,,2,11,,,,CC BY-SA 4.0 +4569,1,4570,,10/30/2018 0:46,,4,143,"

One of the benefits I'm reading about qubits is that they can be in an infinite number of states. I'm aware of Holevo's bound (even though I don't fully understand it). However, it made me think about why we haven't tried varying voltage on classical computers, with programmable gates to control what passes in terms of voltage. In that way, we could simulate quantum computing more closely.

+",4966,,55,,12-11-2021 10:42,12-11-2021 10:42,Could we use varying voltage with programmable gates?,,1,0,,,,CC BY-SA 4.0 +4570,2,,4569,10/30/2018 2:03,,3,,"

This reminds me of another question we had here: What's the difference between a set of qubits and a capacitor with a subdivided plate?

+ +

Let me try to answer your question separately though:

+ +
+

One of the benefits I'm reading about qubits is that they can be in an + infinite number of states.

+
+ +

Yes, qubits don't have to be in state 0 or state 1, but can be in an infinite number of states represented by something called the Bloch sphere. However, while it sounds impressive that you can be in an infinite number of states, this alone is not what gives quantum computers their full power!

+ +

Indeed, an analog classical computer can be in an infinite number of states too.

+ +

In order to really get the full power of a quantum computer, you need at least 2 qubits. The pair of qubits can exist in a state that no digital or analog classical computer can be in, which is a mixture of being in (0,0) and (1,1) at the same time. I explained this in my answer to that previous question: What's the difference between a set of qubits and a capacitor with a subdivided plate?

+ +
+

I'm aware of Holevo's bound (even though I don't fully understand it).

+
+ +

If you have a specific question on Holevo's theorem, or Ashwin Nayak's generalization of it, I'm sure you'll get an answer if you ask that as a separate question here :)

+ +
+

However, it made me think of why we haven't tried varying voltage on classical computers and have programmable gates to control what passes in terms of a voltage.

+
+ +

I suppose you are suggesting this because voltage can be in any state between, for example, 0V and 20V (i.e. can be 0, or 1, or anything in between, like maybe 0.5 would be 10V). Qubits can also be in an infinite number of states other than 0 or 1, but that is not what gives them their full power. The power comes from how two qubits can interact.

+ +

If you have a pair of voltage-based bits, can the pair be in a state where the two bits are 0 and the two bits are also 1, at the same time?

+ +

You can have the two bits being in the states:
+(0V, 0V), or
+(0V, 20V), or
+(20V, 0V), or
+(20V, 20V), or, since you want to allow an infinite number of voltages,
+(15V, 12.3V),

+ +

but you cannot have: +$\frac{1}{\sqrt{2}}\left[(0\rm{V},0\rm{V}) + (20\rm{V},20\rm{V})\right]$

+ +

which means you're both in the (0V,0V) state and the (20V,20V) state at the same time (like Schrodinger's cat is alive and dead at the same time).

+ +

In conclusion: Even the ability to be in an infinite number of different states (like in analog classical computing), is not enough to do what a quantum computer can do!

+",2293,,26,,05-08-2019 16:39,05-08-2019 16:39,,,,4,,,,CC BY-SA 4.0 +4571,1,,,10/30/2018 2:47,,9,788,"

As it is shown here, CNOT gates between different qubits have different error rates. I have the following questions:

+ +

1) While defining a circuit on QISkit, does q[0] always correspond to the same qubit on a device (e.g. the qubit labeled q0 on the device manual)? If so, how can I only use for example qubit 12 and 13 of ibmq_16_melbourne (just as an example)?

+ +

2) If one job is being executed on a device, say for instance using 3 qubits, is any other job running on that device at the same time?

+ +

3) How many CNOT gates one circuit can have so that its error stays reasonable? Basically, how deep can a circuit be on any of the devices to get a reasonable result?

+ +

Thank you.

+",2757,,26,,03-12-2019 09:36,3/19/2019 8:12,Qubits specification on IBMQ devices,,2,0,,,,CC BY-SA 4.0 +4573,2,,4568,10/30/2018 8:04,,3,,"

Firstly, it's not entirely clear that your described eavesdropping strategy is the best there could be. But it is useful for trying to work through what's going on. As you say, if Alice sends the state $|-\rangle$ to Bob, then by Eve performing the controlled-not, they are left with +$$ +|-\rangle_x|0\rangle_e\rightarrow\frac{1}{\sqrt{2}}(|0\rangle_x|0\rangle_e-|1\rangle_x|1\rangle_e). +$$ +We can rewrite this as +$$ +\frac{1}{\sqrt{2}}(|+\rangle_x|-\rangle_e+|-\rangle_x|+\rangle_e). +$$ +Remember that these are the qubits that Bob and Eve hold at this point. Alice is expecting Bob to get measurement result $|-\rangle$. But you can see from the way that I've written the state that Bob gets answer $|+\rangle$ 50% of the time, and $|-\rangle$ 50% of the time. So, half the time, his answer does not match Alice's expectation. However, Eve always knows exactly what answer Bob got by using the same measurement basis (once it's been announced), because her answers are always the opposite of what Bob gets.
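This 50/50 split is easy to confirm numerically. A minimal NumPy sketch (the ordering convention, Alice's qubit first, and the variable names are mine): start from $|-\rangle_x|0\rangle_e$, apply Eve's CNOT, and compute the probability that Bob sees $|+\rangle$ in the $\pm$ basis:

```python
import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
minus = (ket0 - ket1) / np.sqrt(2)
plus = (ket0 + ket1) / np.sqrt(2)

# Eve's attack: CNOT with Alice's qubit x as control and Eve's qubit e as target
CNOT = np.array([[1., 0., 0., 0.],
                 [0., 1., 0., 0.],
                 [0., 0., 0., 1.],
                 [0., 0., 1., 0.]])
state = CNOT @ np.kron(minus, ket0)      # = (|00> - |11>)/sqrt(2)

# Probability that Bob (first qubit) gets |+> when measuring in the +/- basis
proj_plus = np.kron(np.outer(plus, plus), np.eye(2))
p_plus = np.linalg.norm(proj_plus @ state) ** 2
print(p_plus)   # ≈ 0.5: Bob disagrees with Alice's |-> half the time
```

By symmetry the $|-\rangle$ outcome also has probability 1/2, matching the argument above.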

+",1837,,1837,,10/30/2018 8:15,10/30/2018 8:15,,,,10,,,,CC BY-SA 4.0 +4574,2,,4571,10/30/2018 8:13,,8,,"
1) While defining a circuit on QISkit, does q[0] always correspond to the same qubit on a device 
+(e.g. the qubit labeled q0 on the device manual)? If so, how can I only use for example qubit 12 and 
+13 of ibmq_16_melbourne (just as an example)?
+
+ +

Quick answer: not always.

+ +

The way Qiskit works with quantum circuit and backends is:

+ +
    +
  1. Generate the quantum circuit with the API. The quantum circuit is stored in a QuantumCircuit object.
  2. +
  3. Transform this QuantumCircuit object into a DAGCircuit object which represents the same quantum circuit but uses a DAG instead of a list of gates.
  4. +
  5. Give this DAGCircuit object to the compiler. The compiler takes care of multiple things: + +
      +
    1. Respecting the topology of the backend you are compiling for. This is the step that will bother you as the compiler will probably ""shuffle"" (not in a random way of course) your qubits. One exception I see is when the circuit already respects the backend topology. In this case the compiler may not change the qubits.
    2. +
    3. Respecting the basis gates used by the backend.
    4. +
    5. Optimising your circuit. This step might also be problematic. I don't know if such an optimisation is present in the Qiskit compiler, but if the compiler tries to optimise also with respect to the errors rates then you might end up with ""shuffled"" qubits.
    6. +
  6. +
+ +

You will need to check what I am saying experimentally.

+ +
2) If one job is being executed on a device, say for instance using 3 qubits, is any other job being 
+run on that device at the same time?
+
+ +

It seems unlikely to me, but let's wait for an answer from one of the developers of Qiskit.

+ +
3) How many CNOT gates one circuit can have so that its error stays reasonable? Basically, how 
+deep can a circuit be on any of the devices to get a reasonable result?
+
+ +

Very few.

+ +

If you limit yourself to Q12 and Q13 then the CX gate between the two has a probability of failure of 0.041. This means that applying only CX gates, you have a probability of success of $(1 - 0.041)^n$ with $n$ being the number of CX gates applied. For $10$ CX gates, the probability is $\approx 0.66$. For $20$ gates, the probability of success drops to $\approx 0.43$.
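Those numbers come from a straightforward computation, which you can reproduce (the 0.041 error rate is the Q12–Q13 value quoted above; it may differ between calibrations):

```python
# Probability that a sequence of n CX gates all succeed,
# assuming independent failures with per-gate error rate p.
def success_probability(p: float, n: int) -> float:
    return (1 - p) ** n

p_cx = 0.041   # reported Q12-Q13 CX error rate at the time of writing
for n in (10, 20):
    print(n, round(success_probability(p_cx, n), 2))
```

This independence assumption is optimistic (it ignores decoherence between gates and readout errors), so real results will typically be worse.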

+",1386,,1386,,3/19/2019 8:12,3/19/2019 8:12,,,,0,,,,CC BY-SA 4.0 +4576,1,,,10/30/2018 16:49,,24,4962,"

I am trying to simulate Deutsch's algorithm (elementary case of Deutsch-Jozsa algorithm), and I am not entirely sure how I would go about implementing the quantum oracle necessary for the algorithm to function, without defeating the purpose of the algorithm and "looking" at what the inputted function is, by evaluating the function.

+",4907,,2927,,4/27/2021 23:03,6/18/2022 14:41,How would I implement the quantum oracle in Deutsch's algorithm?,,5,7,,,,CC BY-SA 4.0 +4577,2,,4576,10/30/2018 17:50,,3,,"

You have two examples on the IBM Q Experience page about the algorithm. They show example functions, which I hope can inspire your simulations.

+",4127,,,,,10/30/2018 17:50,,,,0,,,,CC BY-SA 4.0 +4578,2,,4576,10/30/2018 18:01,,3,,"

I don't have an example for Deutsch's algorithm handy, but here and here are two tutorials which walk you through implementing the Deutsch-Jozsa algorithm and the oracles it uses in Q#.

+ +

The idea for these two algorithms is the same: you have to provide the oracle to the algorithm as an operation implemented elsewhere. This way the algorithm doesn't know which oracle it is given and doesn't have a way to ""look"" at the oracle other than by calling it. These tutorials also have a harness which counts how many times the oracle is called, so that if your solution calls it more than once, it fails the test.

+ +

Admittedly, this still has a problem which oracle algorithms frequently have: a human can look at the implementation of the test and of the oracle passed and figure out the answer by figuring out which oracle is implemented. This can be countered by randomizing the oracle choice, as DaftWullie suggested.

+",2879,,,,,10/30/2018 18:01,,,,0,,,,CC BY-SA 4.0 +4579,2,,4576,10/30/2018 18:27,,12,,"

There are two questions here. The first asks how you might actually implement this in code, and the second asks what's the point if you know which oracle you're passing in.

+

##Implementation +Probably the best way is to create a function IsBlackBoxConstant which takes the oracle as input, then runs the Deutsch Oracle program to determine whether it is constant. You can select the oracle at random, if you want. Here it is, implemented in Q#:

+
operation IsBlackBoxConstant(blackBox: ((Qubit, Qubit) => ())) : (Bool)
+{
+    body
+    {
+        mutable inputResult = Zero;
+        mutable outputResult = Zero;
+
+        // Allocate two qbits
+        using (qbits = Qubit[2])
+        {
+            // Label qbits as inputs and outputs
+            let input = qbits[0];
+            let output = qbits[1];
+
+            // Pre-processing
+            X(input);
+            X(output);
+            H(input);
+            H(output);
+
+            // Send qbits into black box
+            blackBox(input, output);
+
+            // Post-processing
+            H(input);
+            H(output);
+
+            // Measure both qbits
+            set inputResult = M(input);
+            set outputResult = M(output);
+
+            // Clear qbits before release
+            ResetAll(qbits);
+        }
+
+        // If input qbit is 1, then black box is constant; if 0, is variable
+        return One == inputResult;
+    }
+}
+
+

##What's the point? +###Query complexity +Computational complexity is a field concerned with classifying algorithms according to the quantity of resources they consume as a function of input size. These resources include time (measured in steps/instructions), memory, and also something called query complexity. Query complexity is concerned with the number of times an algorithm has to query a black-box oracle function.

+

The Deutsch oracle problem is interesting to complexity theorists because the quantum algorithm only has to query the black box once, but the classical algorithm has to query it twice. With the generalized Deutsch-Jozsa problem, where an $n$-bit oracle contains a function which is either constant or balanced, the quantum algorithm again only has to query it once, but the (deterministic) classical algorithm requires $2^{n-1}+1$ queries in the worst case.

+

It should be noted that a probabilistic classical algorithm exists which solves the Deutsch-Jozsa problem in much fewer than $2^{n-1}$ queries by randomly sampling oracle inputs: if the oracle continues to output the same value no matter the input, the probability that the oracle is constant grows very quickly. This means Deutsch-Jozsa is not a good candidate for a quantum supremacy/advantage problem, which leads into...
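To illustrate, that randomized classical test can be sketched as follows (the function name, the default choice of $k$, and the example oracles are my own):

```python
import random

def probably_constant(oracle, n: int, k: int = 20) -> bool:
    """Randomized classical test for the Deutsch-Jozsa promise problem.

    Queries the oracle on k independent uniformly random n-bit inputs.
    If any two outputs differ, the oracle is certainly balanced; if all
    agree, we declare it constant, which (under the promise) is wrong
    with probability at most 2**(1 - k).
    """
    first = oracle(random.getrandbits(n))
    return all(oracle(random.getrandbits(n)) == first for _ in range(k - 1))

# Hypothetical example oracles on n = 8 bits:
constant_one = lambda x: 1
balanced = lambda x: x & 1   # output = lowest bit, so balanced over all inputs
print(probably_constant(constant_one, 8))   # True
print(probably_constant(balanced, 8))       # almost certainly False
```

With only 20 queries the error probability is already below one in half a million, which is why the quantum speedup here is against *deterministic* classical algorithms.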

+

###Applications in the real world +If you aren't a complexity theorist, you might reasonably not care very much about query complexity and instead want to know why the Deutsch oracle problem is important in a "no rules" world where you're allowed to look inside the black box. Trying to analyze an oracle problem as a non-oracle problem is fraught with difficulty, and I don't believe anybody has solved the question of the best classical algorithm for the Deutsch oracle problem when you are allowed to analyze the oracle circuit. You might think - what is there to analyze? There are only four possible circuits! In fact, it is much more complicated.

+

If we look at the simplest representation of the one-bit Deutsch Oracle, the gate construction is as follows:

+

Identity: $C_{1,0}$

+

Negation: $X_0C_{1,0}$

+

Constant-0: $\mathbb{I}_4$

+

Constant-1: $X_0$

+

However, these are by no means the only way to implement the oracles. All of these can be rewritten using hundreds, thousands, even millions of logic gates! All that matters is the cumulative effect of these logic gates is equivalent to the above simple construction. Consider the following alternative implementation of Constant-1:

+

$H_0Z_0H_0$

+

It turns out that, for any input you could ever give:

+

$H_0Z_0H_0|\psi\rangle = X_0|\psi\rangle$

+

This is because of the associativity of matrix multiplication. If you write out the actual matrices for $H_0Z_0H_0$ and multiply them together, you get $X_0$:

+

$H_0Z_0H_0 = +\begin{bmatrix} +\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \\ +\frac{1}{\sqrt{2}} & \frac{-1}{\sqrt{2}} +\end{bmatrix} +\begin{bmatrix} +1 & 0 \\ +0 & -1 +\end{bmatrix} +\begin{bmatrix} +\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \\ +\frac{1}{\sqrt{2}} & \frac{-1}{\sqrt{2}} +\end{bmatrix} = +\begin{bmatrix} +0 & 1 \\ +1 & 0 +\end{bmatrix} = X_0$

+

So we have:

+

$(H_0(Z_0(H_0|\psi\rangle))) = (((H_0Z_0)H_0)|\psi\rangle) = X_0|\psi\rangle$

+

So you can pass in the circuit $H_0Z_0H_0$ (or something vastly more complicated) into your quantum Deutsch Oracle algorithm instead of $X_0$, and the algorithm still works! It will tell you whether the oracle is constant or variable, regardless of how complicated its internals are. So an algorithm which "cheats" and looks inside the black box doesn't have quite as simple a time as you might think. Consider the case of I, a stranger on the internet, giving you a very complicated circuit guaranteed to be constant or variable then asking you which it is. Not something so easily solved by just looking at it!

+

###Important for historical & pedagogical reasons

+

Primarily, the Deutsch Oracle problem is important for historical and pedagogical reasons. It's the first algorithm taught to students because it's the simplest, and seems to demonstrate quantum speedup as long as you don't ask too many questions. It also serves as a good launching point for learning Simon's Periodicity Problem and then Shor's Algorithm.

+",4153,,2927,,6/18/2022 14:41,6/18/2022 14:41,,,,9,,,,CC BY-SA 4.0 +4580,1,4582,,10/30/2018 18:44,,17,6786,"

What tools exist for creating quantum circuit diagrams and exporting them as images? Preferably one which runs in Windows, or even better one which runs in the web browser.

+",4153,,4153,,10/30/2018 20:03,4/24/2021 19:10,Tools for creating quantum circuit diagrams,,6,1,,,,CC BY-SA 4.0 +4581,2,,4580,10/30/2018 19:35,,8,,"

I'm new to the quantum world as well, but so far I've been able to draw my basic simple circuits with Qasm2Circ. It requires:

+ +
    +
  • latex2e with xypic (included in tetex)
  • +
  • python version 2.3 or greater
  • +
  • ghostscript (and epstopdf) (for creation of pdfs)
  • +
  • netpbm (for creation of png files)
  • +
+ +

Hopefully, somebody will be able to list other tools.

+",4504,,,,,10/30/2018 19:35,,,,3,,,,CC BY-SA 4.0 +4582,2,,4580,10/30/2018 23:26,,19,,"

Depending on how involved your circuit is you could use

+ +
    +
  • Quantikz (written by @DaftWullie I believe)
  • +
+ +

or

+ +
    +
  • Q-circuit by Bryan Eastin and Steve Flammia.
  • +
+ +

These are tools to make circuit diagrams in TeX for papers and the like, but you can always make your TeX file just the circuit you want and save it as a pdf. Making complex and incredibly long circuits might be a bit of a hassle and would be better done in an automated tool like the one posted by Davide_sd.

+",3056,,3056,,10/31/2018 17:20,10/31/2018 17:20,,,,3,,,,CC BY-SA 4.0 +4583,2,,4580,10/31/2018 8:25,,7,,"
    +
  1. Master of Science degree project developed by Joanna Patrzyk and Bartłomiej Patrzyk at AGH UST in Cracow.
  2. +
+ +

http://www.quide.eu/

+ +

It can simulate designed circuits by generating simulator code in C#. Runs under Windows. Available under GPLv3.

+ +
    +
  1. IBM Quantum Experience. Its interface runs in web browser.
  2. +
+ +

https://quantumexperience.ng.bluemix.net/qx/editor

+ +

It can run on ""real"" quantum computer. Available for research purposes only.
Here is the agreement: https://quantumexperience.ng.bluemix.net/qx/terms +
And user guide: https://quantumexperience.ng.bluemix.net/qx/tutorial?sectionId=full-user-guide&page=introduction

+",4973,,4973,,10/31/2018 8:31,10/31/2018 8:31,,,,0,,,,CC BY-SA 4.0 +4584,1,4586,,10/31/2018 9:30,,8,958,"

I am relatively new and interested in quantum computing. +Specifically, I am interested in transforming an equation that I found on Wikipedia. But I did not quite understand the transformation.

+ +

$ \frac{1}{\sqrt{2}}(\left|0\right>_x\left|0\right>_y-\left|1\right>_x\left|1\right>_y) = \frac{1}{\sqrt{2}}(|+\rangle_x|-\rangle_y+|-\rangle_x|+\rangle_y) $

+ +

My idea is so far to use Hadamard transform for the two qubits:

+ +

$ \frac{1}{\sqrt{2}}(H(\left|0\right>_x\left|0\right>_y)-H(\left|1\right>_x\left|1\right>_y)) $

+ +

I have used the Hadamard transformation and now come to this:

+ +

$ = \frac{1}{\sqrt{2}}(\frac{1}{2}[(\left|00\right>+\left|10\right>+\left|01\right>+\left|11\right>) -(\left|00\right>-\left|10\right>-\left|01\right>+\left|11\right>)]) $

+ +

If I simplify that a bit now then I have that as a result:

+ +

$ = \frac{1}{\sqrt{2}}(\left|1_x0_y\right>+\left|0_x1_y\right>) $

+ +

But the result looks different now than the equation I wrote down at the beginning:

+ +

$ = \frac{1}{\sqrt{2}}(\left|1_x0_y\right>+\left|0_x1_y\right>) = \frac{1}{\sqrt{2}}(|+\rangle_x|-\rangle_y+|-\rangle_x|+\rangle_y) $

+ +

I do not know if this kind of transformation is allowed. If somebody knows how the transformation of the equation works, so that I get what I wrote at the beginning, I would be very happy if somebody could explain it!

+ +

I hope that my question is understandable :)

+",4974,,26,,12/23/2018 11:17,12/23/2018 11:17,Transformation of a Bell state,,2,0,,,,CC BY-SA 4.0 +4585,2,,4584,10/31/2018 9:37,,6,,"

You've started wanting to talk about +$$ +(|00\rangle-|11\rangle)/\sqrt{2}, +$$ +but you've then gone ahead and calculated +$$ +(H\otimes H)\cdot(|00\rangle-|11\rangle)/\sqrt{2}. +$$ +You should not expect these to be equal.

+ +

On the other hand, the calculation that you've done is sensible if you understand what you're doing. What you actually want to see, in order to verify your original statement is that +$$ +(H\otimes H)\cdot(|00\rangle-|11\rangle)/\sqrt{2}=(H\otimes H)\cdot(|+-\rangle+|-+\rangle)/\sqrt{2}. +$$ +You've already calculated the left-hand side. The calculation on the right-hand side is, I suspect, the whole reason why you've chosen to apply the Hadamard transform - you know that the Hadamard converts $|+\rangle$ to $|0\rangle$, and $|-\rangle$ to $|1\rangle$. So, you can immediately read that +$$ +(H\otimes H)\cdot(|+-\rangle+|-+\rangle)/\sqrt{2}=(|01\rangle+|10\rangle)/\sqrt{2}, +$$ +so you can see that the left-hand side is equal to the right-hand side, as desired.
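
+ +

If it helps, the left-hand side can also be checked numerically. The following is a small plain-Python sketch (my own illustration, assuming the usual basis ordering $|00\rangle,|01\rangle,|10\rangle,|11\rangle$): it builds $H\otimes H$, applies it to $(|00\rangle-|11\rangle)/\sqrt{2}$, and recovers $(|01\rangle+|10\rangle)/\sqrt{2}$.

```python
from math import sqrt

H = [[1 / sqrt(2), 1 / sqrt(2)],
     [1 / sqrt(2), -1 / sqrt(2)]]

def kron(a, b):
    # Kronecker product of two square matrices given as lists of lists.
    n, m = len(a), len(b)
    return [[a[i // m][j // m] * b[i % m][j % m]
             for j in range(n * m)] for i in range(n * m)]

def apply(mat, vec):
    # Matrix-vector product.
    return [sum(mat[i][j] * vec[j] for j in range(len(vec)))
            for i in range(len(vec))]

bell = [1 / sqrt(2), 0, 0, -1 / sqrt(2)]   # (|00> - |11>)/sqrt(2)
result = apply(kron(H, H), bell)           # expect (|01> + |10>)/sqrt(2)
```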

+",1837,,,,,10/31/2018 9:37,,,,3,,,,CC BY-SA 4.0 +4586,2,,4584,10/31/2018 9:49,,9,,"

You are not proving the equality in a correct way. By multiplying by the Hadamard matrix, you are changing the state you are trying to calculate, not demonstrating the equality you want to prove.$\def\ket#1{\lvert#1\rangle}$

+ +

In order to prove what you state at the beginning of the question, I would use the facts that $$ + \ket+ =\frac{1}{\sqrt{2}}\bigl(\ket0 + \ket1\bigr), \qquad\ket-=\frac{1}{\sqrt{2}}\bigl(\ket0 - \ket1\bigr),$$ +and then develop the equality in the inverse order. Consequently: +$$ +\begin{alignat}{2} + \frac{1}{\sqrt{2}}&\bigl(\ket+_x \ket-_y + \ket-_x \ket+_y\bigr)\mspace{-128mu} +\\[1ex]=\;& + \frac{1}{\sqrt{2}}\Bigl(\frac{1}{2}\bigl(&&\ket0_x+\ket1_x\bigr)\bigl(\ket0_y-\ket1_y\bigr)+\bigl(\ket0_x-\ket1_x\bigr)\bigl(\ket0_y+\ket1_y\bigr)\Bigr) +\\[1ex]=\;& +\frac{1}{\sqrt{2}}\Bigl(\frac{1}{2}\bigl(&&\ket0_x\ket0_y+\ket1_x\ket0_y-\ket0_x\ket1_y-\ket1_x\ket1_y +\\[-1ex]&&&+\ket0_x\ket0_y+\ket0_x\ket1_y-\ket1_x\ket0_y-\ket1_x\ket1_y\bigr)\Bigr) +\\=\;& +\frac{1}{\sqrt{2}}\bigl(\ket0_x\ket0_y - \ket1_x\ket1_y\bigr).\mspace{-128mu} +\end{alignat} +$$

+ +

And so you prove the equality you were trying to solve at the beginning of the question.
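
+ +

The same algebra can also be sanity-checked numerically. Here is a minimal plain-Python sketch (my own illustration, with basis ordering $|00\rangle,|01\rangle,|10\rangle,|11\rangle$) confirming that $(|+\rangle|-\rangle+|-\rangle|+\rangle)/\sqrt{2}$ has the same amplitudes as $(|00\rangle-|11\rangle)/\sqrt{2}$:

```python
from math import sqrt

plus = [1 / sqrt(2), 1 / sqrt(2)]     # |+>
minus = [1 / sqrt(2), -1 / sqrt(2)]   # |->

def tensor(u, v):
    # Tensor product of two single-qubit state vectors.
    return [ui * vj for ui in u for vj in v]

pm = tensor(plus, minus)
mp = tensor(minus, plus)
lhs = [(pm[k] + mp[k]) / sqrt(2) for k in range(4)]
rhs = [1 / sqrt(2), 0, 0, -1 / sqrt(2)]   # (|00> - |11>)/sqrt(2)
```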

+",2371,,124,,10/31/2018 10:02,10/31/2018 10:02,,,,1,,,,CC BY-SA 4.0 +4587,2,,4563,10/31/2018 15:09,,6,,"

As far as I have been able to research on the internet about the CHSH game, the first experimental realization seems to be the one published by Aspect in Experimental Realization of Einstein-Podolsky-Rosen-Bohm Gedankenexperiment: +A New Violation of Bell's Inequalities. The experiment is based on photon polarization and is a proof of the universe's nonlocality at the subatomic particle level.

+ +

About the general description of non-local games, I have tracked down the paper Consequences and Limits of Nonlocal Strategies by Cleve et al. as the first formal description of those, and of the CHSH game. I am not completely sure that this is the first paper defining such concepts, but I think it is really one of them, and it is a really good article concerning such topics.

+",2371,,,,,10/31/2018 15:09,,,,4,,,,CC BY-SA 4.0 +4588,2,,4580,10/31/2018 17:08,,7,,"
+

[...] quantum circuit diagrams [...] even better one which runs in the web browser.

+
+ +

Quirk (algassert.com/quirk) runs in browsers and can be used to create simple circuit diagrams. I use it for this purpose all of the time, though its main purpose is to simulate circuits.

+ +

Just drag the gates you want into the circuit, perhaps use the simulation results to check that everything is behaving correctly, and then take a screenshot with e.g. Windows' Snipping Tool. Most browsers also have an option to turn a canvas into an image (e.g. in firefox if you right-click on the circuit you can select 'view image' that you can then download and crop). You can bookmark the circuit to come back to it later, in case you make a small mistake.

+ +

The main downside of this approach is a) it requires manual work every time, b) it produces bitmap images instead of vector images, and c) it is somewhat inflexible (e.g. you can't put operations with different controls into the same column).

+ +

Example:

+ +

+",119,,119,,10/31/2018 19:02,10/31/2018 19:02,,,,0,,,,CC BY-SA 4.0 +4589,2,,4576,10/31/2018 22:19,,5,,"

There is no way to build the oracle in a way which would not defeat the point of Deutsch's algorithm - that's why it is an oracle-based algorithm.

+ +

The only way would be if you came up with an incredibly hard to compute function (that is, an incredibly long circuit) which would take one input bit $x$ and give one output bit $f(x)$ (but on the way could use as many ancillas as you want), and where for some reason you would only be interested in whether $f(0)=f(1)$. Then, you could use Deutsch's algorithm to save half the time (since you have to run this circuit only once).

+ +

Now, this sounds pretty contrived, and it is. On the other hand, it sounds equally contrived that there might be a function $f(x)$ where $1 \leq x \leq N$ and $f(x)$ integer, and you would like to find some $y$ such that $f(x+y)=f(x)$, without the need to learn $f(x)$ itself - yet, it turns out this is exactly what is required for factoring.

+ +

So the point is that oracle-based algorithms prove that you can get a speed-up if you have a problem with that structure (i.e. where you only want to learn some specific property of a function), but it doesn't tell you if such a problem exists.

+ +

So if you want to implement Deutsch, any way of doing the oracle is fine - it is a ""proof-of-principle"" algorithm and does not yield an actual speed-up on a real problem (at least none we know of).

+",491,,26,,11-02-2018 14:23,11-02-2018 14:23,,,,0,,,,CC BY-SA 4.0 +4592,1,,,11-01-2018 09:07,,4,1025,"

Is there any real example for Grover's algorithm, but with a real database (generated from SQL or a file)? I downloaded the Q# development kit and its examples; there was one called DatabaseSearchExample which claims to use Grover's, but it technically doesn't have any kind of database. May I ask for code for this, if available?

+",2994,,26,,03-12-2019 09:03,03-12-2019 09:03,Practical example of Grover's algorithm (in Q#),,1,3,,,,CC BY-SA 4.0 +4593,2,,4592,11-01-2018 10:04,,5,,"

So far, it is better to say that the Grover search algorithm, while presented as an algorithm searching through a database, is not well suited for such a purpose. We prefer to say that we search through the inputs of a function (the famous oracle). Loading the database/list into a quantum form would be costly in terms of qubits, so for now it is not the best application. When presenting the algorithm, we say that the oracle has access to the elements of the list/database, but that is not practical at this time.

+ +

I remember seeing an example of using Grover for SAT problems in Qiskit. +There was a notebook showing an example on 3 qubits, and the oracle was built so that it would select the binary combinations satisfying a set of clauses. The files have changed a lot since then, so you may not find it in the Qiskit GitHub repository anymore. +You were searching through all possible combinations using superposition, and I think it was a good practical example not involving a list/database.

+ +

To help you visualize the example, say you have such 3 clauses to satisfy : +$$ f(x_1, x_2, x_3) = (x_1 \vee x_2 \vee \neg x_3) \wedge (\neg x_1 \vee \neg x_2 \vee \neg x_3) \wedge (\neg x_1 \vee x_2 \vee x_3) $$

+ +

with $x_1,x_2,x_3$ binary values. +We seek a combination ($x_1,x_2,x_3$) such that f equals 1. +We can get all the possible combinations of ($x_1,x_2,x_3$) using the Hadamard transform (000,001,010,...), so we don't need to input a list here. +If you provide the f function as a quantum operator, that is, a set of gates applying f to the combinations represented as basis states, then with one application you can compute the result of the function in parallel.

+ +

And in Grover's algorithm, using a qubit oracle in the $ |-\rangle $ state, you compute the output of f into it, which makes a phase kickback $ (-1)^{f(x_1,x_2,x_3)}|-\rangle $, marking with a minus sign the combinations for which f equals 1.

+ +

Using the Grover diffusion operator, you amplify the amplitudes of the good combinations, to increase their probability of being measured as output of the circuit by measurement.
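
+ +

To make the marking step tangible, here is a tiny statevector sketch in plain Python (my own illustration, not an actual quantum SDK circuit) using the three clauses above. It builds the uniform superposition and applies the oracle as a sign flip on every satisfying assignment; note that for this particular f, 5 of the 8 assignments satisfy the clauses.

```python
from itertools import product

def f(x1, x2, x3):
    # The three clauses from the formula above, with 0/1 as booleans.
    return int((x1 or x2 or not x3) and
               (not x1 or not x2 or not x3) and
               (not x1 or x2 or x3))

n = 3
N = 2 ** n
state = [1 / N ** 0.5] * N   # uniform superposition from the Hadamards

# Oracle via phase kickback: flip the sign of each satisfying assignment.
for idx, bits in enumerate(product([0, 1], repeat=n)):
    if f(*bits):
        state[idx] = -state[idx]

marked = [bits for bits in product([0, 1], repeat=n) if f(*bits)]
```

After this marking step you would apply the diffusion operator; amplitude amplification works best when the solutions are a small fraction of the search space.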

+",4127,,4127,,11-01-2018 10:38,11-01-2018 10:38,,,,8,,,,CC BY-SA 4.0 +4594,1,4595,,11-01-2018 16:33,,7,1002,"

For 1 qubit, the maximally mixed state is $\frac{\mathrm{I}}{2}$.

+ +

So, for two qubits, I assume the maximally mixed state is $\frac{\mathrm{I}}{4}$? +Which is:

+ +
+

$\frac{1}{4} (|00\rangle \langle 00| + |01\rangle \langle 01| + |10\rangle \langle 10| + |11\rangle \langle 11|)$

+
+ +

Why is this state more mixed than the following, for instance?

+ +
+

$\frac{1}{2} (|00\rangle \langle 00| + |11\rangle \langle 11|)$

+
+ +

Also, does this generalize to higher dimensions similarly?

+",2832,,26,,12/14/2018 6:20,12/14/2018 6:20,Maximally mixed states for more than 1 qubit,,4,0,,,,CC BY-SA 4.0 +4595,2,,4594,11-01-2018 17:12,,4,,"

The Von Neumann entropy of $1/2 (|00\rangle \langle 00| + |11\rangle \langle 11|)$ is one bit. For $I/4$ it's two bits of entropy instead.

+ +

The entropy of states that only have entries on the diagonal of a density matrix is very easy to compute in general, because you just treat the entries as probabilities and compute the Shannon entropy. The Shannon entropy is maximized when all the probabilities are equal, which is why 0.25 four times beats 0.5 twice with 0 twice.
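
+ +

For instance, here is a short plain-Python sketch (my own illustration) computing the Shannon entropy of the diagonal entries for the two states in the question:

```python
from math import log2

def shannon_entropy(probs):
    # Shannon entropy, in bits, of a probability distribution.
    return -sum(p * log2(p) for p in probs if p > 0)

h_flat = shannon_entropy([0.25] * 4)   # diagonal of I/4
h_half = shannon_entropy([0.5, 0.5])   # diagonal of the second state
```

The flat distribution $[1/4,1/4,1/4,1/4]$ gives 2 bits, while $[1/2,1/2]$ gives only 1 bit.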

+",119,,,,,11-01-2018 17:12,,,,0,,,,CC BY-SA 4.0 +4596,2,,4594,11-01-2018 17:13,,3,,"

For a general $d$-dimensional system, the maximally mixed state is the one described by the normalised identity matrix: $$\rho=I/d.$$ +In the specific case of a two-qubit system, this reduces to the first state you write.

+ +
+

Why is this state more mixed than the following

+
+ +

This is a bit hard to answer, as it depends on what your current understanding/intuition of ""more mixed"" is. One possible answer is that it is ""more mixed"" because it represents a state associated with a higher uncertainty, as quantifiable for example via the von Neumann entropy. +You can also have a look at the answers to a similar question on physics.SE for more details.

+",55,,,,,11-01-2018 17:13,,,,0,,,,CC BY-SA 4.0 +4597,2,,4594,11-01-2018 17:53,,9,,"

For two probability distributions, there is a clear notion of how to say which one is more mixed: $\vec p$ is more mixed than $\vec q$ if it can be obtained from $\vec q$ by a mixing process, that is, a stochastic process described by a doubly stochastic matrix (i.e. one which preserves the flat distribution).

+ +

Birkhoff's theorem relates this to a concept called majorization, which introduces a partial order on the space of probability distributions.

+ +

The same concept generalizes to mixed states, allowing us to say which mixed state is more mixed -- for instance, one can establish an order by using the majorization condition on the eigenvalues, and then use Birkhoff's theorem to prove that one can be converted into the other by a quantum ""mixing map"" (a unital channel).

+ +

This is explained in detail e.g. in http://michaelnielsen.org/papers/majorization_review.pdf, or also in the book of Nielsen and Chuang.

+ +

Specifically, this yields that the state with all eigenvalues equal (or equivalently the flat probability distribution) is most mixed.

+ +
+ +

To relate this to the quantification of mixedness through entropy mentioned in the other answers, the connection comes from the fact that if a state $\rho$ is more random than another state $\sigma$ in the above sense -- i.e., if $\sigma$ can be transformed into $\rho$ by mixing, or equivalently the eigenvalues of $\sigma$ majorize those of $\rho$ -- then the entropy of $\rho$ is larger than that of $\sigma$. This property (monotonicity under majorization) is known as Schur-concavity, a property shared e.g. by all Renyi entropies.
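
+ +

As a concrete illustration of the majorization condition (a plain-Python sketch of my own; the tolerance is arbitrary), here is a check on eigenvalue lists:

```python
def majorizes(p, q):
    # True if p majorizes q: the partial sums of p, sorted in
    # decreasing order, dominate those of q (equal totals assumed).
    sp = sq = 0.0
    for a, b in zip(sorted(p, reverse=True), sorted(q, reverse=True)):
        sp += a
        sq += b
        if sp < sq - 1e-12:
            return False
    return True

pure = [1.0, 0.0]   # eigenvalues of a pure qubit state
flat = [0.5, 0.5]   # eigenvalues of the maximally mixed qubit
```

Every distribution majorizes the flat one, which is why the maximally mixed state sits at the bottom of this partial order.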

+",491,,491,,11-01-2018 21:51,11-01-2018 21:51,,,,0,,,,CC BY-SA 4.0 +4600,1,,,11-02-2018 13:30,,3,103,"

So the question came up in a book I am working through. Given a circuit with $n$ qubits, construct a state with only $n$ possible measurement results, each of which has only $1$ of $n$ qubits as $1$, such as $|0001\rangle$, $|0010\rangle$, $|0100\rangle$, $|1000\rangle$, obviously normalized.

+ +

The only way I can think to do this is to take the all $|0\rangle$ input state, apply $\operatorname{H}$ to each qubit and then use multiple-controlled $\operatorname{CNOT}$ gates to effect the change on each qubit, but I feel like this won't lead to the desired end state.

+ +

To be clear, I am enquiring how a $W_n$ state can be arbitrarily prepared, given $n$ qubits.

+",4991,,26,,12/23/2018 8:07,12/23/2018 8:07,Circuit to construct a $n$-qubit state which is a superposition of states with only a single qubit being $\lvert1\rangle$,,0,13,,11-02-2018 14:28,,CC BY-SA 4.0 +4601,1,4603,,11-03-2018 01:44,,3,265,"

In this question on Quora: What is meant by ""the photon is its own antiparticle""?, an answer is given which states:

+ +
+

In quantum theory, we have a procedure for transforming the wave function of a particle into that of an antiparticle. It is called the ""CP"" operation.

+
+ +

Unfortunately, the author of the answer does not explain what the ""CP"" operation is.

+ +

What is the ""CP"" Operation in the context of anti-particles?

+",2645,,26,,12/14/2018 6:29,12/14/2018 6:29,"What is the ""CP"" operation in the context of anti-particles?",,2,0,,11-06-2018 14:46,,CC BY-SA 4.0 +4602,1,4604,,11-03-2018 01:56,,4,173,"

I was thinking of an error correcting code to correct 1-qubit errors. I came up with the following, which I guess has to have a mistake somewhere, but I am not able to find it.

+ +

The code is the same as the 9 qubit Shor code with one small difference. As in the Shor code, first we encode our qubit using the 3-qubit phase code. Then, instead of further encoding the 3 qubits against bit-flip errors, we only encode one of them, namely, the one that contains the state that we want to protect. The resulting code would be the following $$\small{|0\rangle \rightarrow |00000\rangle + |00001\rangle + |00010\rangle + |11100\rangle + |00011\rangle + |11101\rangle + |11110\rangle + |11111\rangle}$$

+ +

$$\small{|1\rangle \rightarrow |00000\rangle - |00001\rangle - |00010\rangle - |11100\rangle + |00011\rangle + |11101\rangle + |11110\rangle - |11111\rangle}$$

+ +

Thank you!

+",4994,,26,,11-03-2018 13:03,11-03-2018 13:03,Why does this error correcting code not work?,,1,4,,,,CC BY-SA 4.0 +4603,2,,4601,11-03-2018 05:03,,4,,"

C = charge conjugation
+P = parity

+ +

CP symmetry is often mentioned in the context of particle physics.

+",2293,,,,,11-03-2018 05:03,,,,2,,,,CC BY-SA 4.0 +4604,2,,4602,11-03-2018 07:04,,1,,"

It’s worth noting that it’s not impossible that your code could work - there is a 5 qubit code that is capable of correcting a single error (look up the perfect quantum error correcting code, just beware that I seem to remember there was a slight error in one of the circuit diagrams in one of the original papers).

+ +

However, to see that your particular code does not work, consider applying an X gate on the last qubit. Logical 0 stays as logical 0, while logical 1 is returned as logical 1, but with an overall negative sign. In other words, that single X implements logical Z. So, when this gate is applied, there is no error that can be detected, let alone corrected, but obviously the logical state is not preserved.
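
+ +

This claim can be verified directly from the (unnormalized) code states written in the question. The following plain-Python sketch (my own illustration) applies X to the last qubit by flipping the final bit of each basis label:

```python
def x_on_last(state):
    # Pauli X on the last qubit: flip the final bit of every basis label.
    return {s[:-1] + ('1' if s[-1] == '0' else '0'): c
            for s, c in state.items()}

labels = ['00000', '00001', '00010', '11100',
          '00011', '11101', '11110', '11111']
zero_L = {s: 1 for s in labels}   # all plus signs, normalization omitted
one_L = {'00000': 1, '00001': -1, '00010': -1, '11100': -1,
         '00011': 1, '11101': 1, '11110': 1, '11111': -1}

x_zero = x_on_last(zero_L)
x_one = x_on_last(one_L)
```

Indeed logical 0 is unchanged while logical 1 only picks up a global minus sign, i.e. X on the last qubit acts as logical Z.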

+",1837,,,,,11-03-2018 07:04,,,,1,,,,CC BY-SA 4.0 +4605,1,4610,,11-04-2018 08:14,,4,132,"

I want to test some ideas using the quantum internet. I know that it is not widely available now. Can it be simulated? Are there any simulation systems which allow -

+ +
    +
  1. Entanglement
  2. +
  3. Testing states
  4. +
  5. Integration states in third-party systems (eg C++ programs)
  6. +
+ +

This kind of tool would be useful to confirm certain properties of the communications and how they could then be used in a classical layer, e.g. for communications.

+",4489,,26,,12/13/2018 21:46,12/13/2018 21:46,"How to test with the ""Quantum Internet""?",,2,0,,,,CC BY-SA 4.0 +4606,1,4607,,11-04-2018 10:21,,6,160,"

On page 3 here it is mentioned that:

+ +
+

However, building on prior works [32, 36, 38] recently it has been + shown in [39] that to simulate $e^{−iHt}$ for an $s$-sparse Hamiltonian + requires only $\mathcal{O}(s^2||Ht||\text{poly}(\log N, + \log(1/\epsilon)))$, breaching the limitation of previous algorithms + on the scaling in terms of $1/\epsilon$.

+
+ +

Questions:

+ +
    +
  1. What is meant by ""simulate"" in this context? Does it mean it takes $$\mathcal{O}(s^2||Ht||\text{poly}(\log N, + \log(1/\epsilon)))$$ time to decompose $e^{-iHt}$ into elementary quantum gates given we know $H$. Or does it mean we can compute the matrix form of $e^{-iHt}$ in $$\mathcal{O}(s^2||Ht||\text{poly}(\log N,\log(1/\epsilon)))$$ time given we know the matrix form of $H$?

  2. +
  3. What does $||Ht||$ mean here? Determinant of $Ht$? Spectral norm of $Ht$? I checked the linked ppt and it seems to call $||H||$ the ""norm"" of $H$. Now I have no idea how they're defining ""norm"".

  4. +
+",26,,26,,11-04-2018 10:27,11-04-2018 13:40,"Clarification needed: ""Simulation"" of $e^{-iHt}$ and its time complexity",,2,0,,,,CC BY-SA 4.0 +4607,2,,4606,11-04-2018 11:18,,5,,"
    +
1. Yes. Computing this matrix is what we call Hamiltonian Simulation, although I think we do not use the verb ""simulate"" on its own for this.
  2. +
3. It is a norm. In general they use either the max norm, which is the largest entry of the Hamiltonian in absolute value, or the largest eigenvalue in absolute value (which is called the spectral radius, and would be less confusing to call it that). I think it is a bit unclear here, but as they mention quantum walks, the max norm is the associated one (here is a mention of this norm for a quantum walk approach). You can find a very good explanation of Hamiltonian Simulation in this talk by Robin Kothari.
  4. +
+",4127,,4127,,11-04-2018 13:40,11-04-2018 13:40,,,,8,,,,CC BY-SA 4.0 +4608,2,,4605,11-04-2018 12:55,,3,,"

Reutter and Vicary

+ +

Features of teleportation, dense coding and secure key distribution are mimicked even without having an honest quantum internet.

+ +

The main idea is the groudit. That is a special type of groupoid where certain bijections of sets are given as extra data. That is, for a given natural number $d$, think of $d$ finite groups, all of cardinality $d$, and put them all together.

+ +

I have some Haskell you can use to play around with this concept if you want.

+",434,,,,,11-04-2018 12:55,,,,3,,,,CC BY-SA 4.0 +4609,2,,4606,11-04-2018 13:01,,3,,"

""Hamiltonian Simulation"" means applying the time evolution given by $H$ to some initial state $|\psi\rangle$, i.e. to implement the unitary $U=e^{-iHt}$ on a quantum computer.

+ +

If not mentioned otherwise, for an operator $H$, $\|H\|$ generally denotes the operator norm, i.e., the largest eigenvalue (in absolute value) of $H$. Whether this is really the case here is unclear; see the answer by cnada.
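
+ +

As a toy illustration of what the unitary $e^{-iHt}$ is for a tiny example (a plain-Python sketch of my own, using a truncated Taylor series for $H=Z$ on a single qubit):

```python
import cmath

def mat_mul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def expm(m, terms=40):
    # Truncated Taylor series for the matrix exponential; fine for
    # tiny, well-scaled matrices like this one.
    n = len(m)
    result = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    term = [row[:] for row in result]
    for k in range(1, terms):
        term = [[x / k for x in row] for row in mat_mul(term, m)]
        result = [[result[i][j] + term[i][j] for j in range(n)]
                  for i in range(n)]
    return result

t = 0.7
Z = [[1, 0], [0, -1]]
U = expm([[-1j * t * x for x in row] for row in Z])
expected = cmath.exp(-1j * t)   # exact: U = diag(e^{-it}, e^{it})
```

Computing the matrix exponential classically like this scales exponentially in the number of qubits, which is exactly what the quantum Hamiltonian simulation algorithms discussed above avoid.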

+",491,,491,,11-04-2018 13:27,11-04-2018 13:27,,,,4,,,,CC BY-SA 4.0 +4610,2,,4605,11-04-2018 13:14,,3,,"

There is a set of two packages in Mathematica called ""Quantum Notation"" and ""Quantum Computing"" for the Wolfram Mathematica Environment, and here you can very well mimic all three considerations you are concerned with, and much more, in the usual Dirac Notation and Quantum Circuit Formalism. +The link to the packages is as follows: +http://homepage.cem.itesm.mx/jose.luis.gomez/quantum/

+ +

You can also use the QET package for MATLAB if you are familiar with that environment: +http://www.qetlab.com/Main_Page

+ +

The only difference is that this will be performed on a classical computer and not really on a Quantum Interface. +Since there is not really such a vast open source network for Quantum Computing, except for a few like IBM-Q, the best option in my experience is to use the packages in Mathematica. +Hope this helps.

+",4889,,,,,11-04-2018 13:14,,,,0,,,,CC BY-SA 4.0 +4611,2,,4580,11-04-2018 13:25,,3,,"

One of the best tools to use is the ""Quantum Computing"" package in Mathematica: +http://homepage.cem.itesm.mx/jose.luis.gomez/quantum/

+ +

Here you can just input all the useful gates directly and also completely manipulate the Circuits in a complete GUI interface. +

+ +

Moreover, you can also perform all the desired operations over qubits.

+",4889,,,,,11-04-2018 13:25,,,,2,,,,CC BY-SA 4.0 +4612,1,,,11-04-2018 13:58,,2,92,"

I am starting to step into the field of Topological Quantum Information and Computation and am in search of tools which I can use to directly simulate or realize these transformations in a textual or graphical manner.

+",4889,,26,,11-04-2018 14:01,11-05-2018 05:23,Is there any tool or simulator for Topological quantum gates and circuits?,,1,2,,11-05-2018 20:24,,CC BY-SA 4.0 +4613,1,,,11-04-2018 14:18,,4,216,"

I was reading this paper which introduces a mapping from a qubit Hamiltonian to an Ising model. Firstly, the first step of the mapping seems to assume that we know an eigenstate of the system (correct me here because it seems unlikely in practice). Below is the mention of the first step: +

+ +

Secondly, such a mapping seems extremely costly if we look at its complexity.

+ +

My question is: would this method be considered interesting for practice compared to classical methods and how can this be implemented on practical examples?

+",4127,,4127,,11-06-2018 02:05,11-06-2018 02:05,Electronic structure calculations and the Ising model: practical?,,1,3,,,,CC BY-SA 4.0 +4614,2,,1958,11-04-2018 14:21,,3,,"

A common assumption in general measurements is that the measuring device itself has no degrees of freedom and that it does not couple to the qudit through any form of interaction, which is not true.

+ +

1) A projective measurement is ideal and non-realistic because it is always assumed that there is no extension of this Projector to a bigger Hilbert space or more degrees of freedom than the Qudit degrees of freedom. But what actually happens experimentally is that, to measure a qubit, we always have to assign a classical operation called a ""Pointer"" that links your classical outcome to the quantum measurement. By doing this the system is always exposed to a non-unitary and open environment where the measurement becomes non-ideal and the information is leaked into outer degrees of freedom when the system couples with the measuring device. This in itself is an inherent property of nature that forbids an ideal Quantum Measurement.

+ +

2) To get around this, as you pointed out, the truly realistic method is a weak measurement, to minimize the coupling with the environment and stay close to a true quantum measurement.

+ +

However, there are certain cases which are special, certain states called ""Pointer states"" allow true ideal measurement w.r.t particular Measurement operators (Because they retain their quantum properties like Coherence, entanglement, etc) in the smaller Hilbert space and do not couple with higher degrees of freedom of the measuring device.

+ +

Some literature about this which I read in detail is from this article by W.H. Zurek: https://arxiv.org/abs/quant-ph/0105127

+",4889,,,,,11-04-2018 14:21,,,,0,,,,CC BY-SA 4.0 +4615,2,,4601,11-04-2018 15:51,,3,,"

This is in the context of Particle Physics and Field Theory, where C and P, as pointed out above very precisely, are the fundamental symmetries which conserve various quantities. This has no direct significance for Quantum Information/Computation.

+ +

In the context of the term ""CP"" in Quantum Information Theory, we have what are called ""Completely Positive"" maps or, more restrictively, ""CPTP - Completely Positive Trace-Preserving Maps"", which are transformations (called Quantum Channels) taking a qudit state from one Hilbert space to another. There are certain axioms that the channels satisfy.

+ +

Just for the literature of the subject, this is a very hot topic of research in QI, you may find some information in https://en.wikipedia.org/wiki/Quantum_channel +Or, Nielsen and Chuang, Chapter 8 titled ""Quantum noise and quantum operations.""

+",4889,,,,,11-04-2018 15:51,,,,3,,,,CC BY-SA 4.0 +4616,2,,4594,11-04-2018 16:07,,1,,"

In the sense of entropy, the maximum entropy of a ""$d$""-dimensional density matrix is just $\log_2 d$ if you measure it in bits (base 2). This is more than $\log_22=1 \ \text{bit}$, and hence more information. This is because we always compare the Quantum Information with the Classical Information over a ""bit"". However, if we measure the information of a $d$-dimensional system over an ""information basis"" of ""$d$"" itself, we will see that the maximum is $\log_d d = 1$, which is again the analogue of the $1$-bit maximum in the sense of comparing maximally mixed states and maximum information.

+ +

The intuitive answer as to why the higher dimensional system has more entropy as a bound is simply because it has more parameters to be identified and measured to describe it completely. This means, conversely; for two systems $M$ and $N$ with Hilbert Space dimesnions $m$ and $n$ ($m\geq n$), $M$ will have more information if both are initially completely mixed. Because more mixedness$\implies$ more uncertainty before measurement$\implies$ more surprise after a measurement. This is the intuition behind mixedness of a state.

+",4889,,26,,11-04-2018 16:14,11-04-2018 16:14,,,,0,,,,CC BY-SA 4.0 +4617,1,,,11-04-2018 16:29,,4,98,"

I understood how a Quantum Turing Machine works from this lecture.

+ +

It would be great if someone could give an example of how this machine could be used to solve a real problem though, for example, simulate the Deutsch algorithm on a Quantum Turing machine.

+",2832,,26,,11-04-2018 16:57,11-04-2018 16:57,Deutsch Algorithm on a Quantum Turing Machine,,0,2,,,,CC BY-SA 4.0 +4619,1,4621,,11-04-2018 23:26,,5,106,"

How does one map an ANF to a toffoli network? Is there a straight-forward procedure for doing this?

+ +

For example, given the ANF for the Sum function of an adder: +$$S = A \oplus B \oplus C \oplus ABC$$

+ +

I thought that mapping would be trivial following the process of

+ +

1) Perform $A \oplus B$ with two CNOT gates, storing the result in an ancilla bit, call it $Anc_1$

+ +

2) Perform $Anc_1 \oplus C$ with two CNOT gates, storing the result in $Anc_2$

+ +

3) Perform $Anc_2 \oplus ABC$ using a CNOT and a 3-control bit toffoli, storing the result in the output, $S$

+ +

This process results in this circuit

+ +

+ +

Why does this not work? What is the process to map an ANF to a toffoli network?

+ +

Thank you

+ +

PS I apologize for not being able to find an appropriate tag.

+",4943,,26,,12/23/2018 8:05,12/23/2018 8:05,Mapping Algebraic Normal Form of Exclusive Sum of Products to Toffoli Network,,1,2,,,,CC BY-SA 4.0 +4620,2,,3773,11-05-2018 00:20,,4,,"

Those figures were created manually with sketchup, which is a 3d modelling tool. There was no simulation involved, only careful application of known rules.

+",119,,,,,11-05-2018 00:20,,,,0,,,,CC BY-SA 4.0 +4621,2,,4619,11-05-2018 01:48,,3,,"

My mistake. +The ANF for the sum function of an adder is +$$S = A \oplus B \oplus C$$

+ +

The process for mapping ANF to toffoli network works, but only if the ANF is correct.
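
+ +

A quick truth-table check in plain Python (my own sketch) confirms the corrected ANF and shows where the extra $ABC$ term from the question goes wrong:

```python
from itertools import product

# Compare both ANF candidates against the full adder's sum bit.
mismatches = [(a, b, c) for a, b, c in product([0, 1], repeat=3)
              if (a ^ b ^ c) != (a + b + c) % 2]
cubic_mismatches = [(a, b, c) for a, b, c in product([0, 1], repeat=3)
                    if (a ^ b ^ c ^ (a & b & c)) != (a + b + c) % 2]
```

The cubic term only fires on input $(1,1,1)$, which is exactly where the original expression disagreed with the adder's sum bit.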

+",4943,,,,,11-05-2018 01:48,,,,2,,,,CC BY-SA 4.0 +4622,2,,4612,11-05-2018 02:11,,0,,"

I found the repo below on GitHub, which you can check: +https://github.com/jacobmarks/QTop

+ +

QTop allows for the simulation of topologies with arbitrary code depth, qudit dimension, and error models. Currently, QTop features Kitaev quantum double models, as well as color codes in 3-regular planar tilings.

+",4925,,4925,,11-05-2018 05:23,11-05-2018 05:23,,,,1,,,,CC BY-SA 4.0 +4623,1,4624,,11-05-2018 10:18,,6,920,"

I am new to quantum computation and I recently came across the statement that super-dense coding can be called the inverse of quantum teleportation. Why is that?

+",5007,,26,,12/13/2018 21:01,12/13/2018 21:01,Why is super-dense coding called the inverse of quantum teleportation?,,2,1,,,,CC BY-SA 4.0 +4624,2,,4623,11-05-2018 10:23,,8,,"

In quantum teleportation, one starts with an entangled state shared between two parties, and (after some messing at the sender's side), two classical bits are transmitted from one party to the other so that the net effect is a quantum state is sent from the first party to the second without sending any quantum data.

+ +

In superdense coding, the parties start with an entangled state shared between two parties, and (after some messing at the sender's side), a quantum state is sent from one party to the other so that the net effect is two classical bits are sent from the first party to the second.

+ +

Hopefully I've written that in such a way that it conveys the symmetry between the two settings. Where I say ""quantum state"", I specifically mean a single qubit in an unknown state.

+",1837,,,,,11-05-2018 10:23,,,,0,,,,CC BY-SA 4.0 +4625,1,4626,,11-05-2018 10:29,,31,9402,"

What exactly is an ""oracle""? Wikipedia says that an oracle is a ""blackbox"", but I'm not sure what that means.

+ +

For example, in the Deutsch–Jozsa algorithm,
$\hspace{85px}$,
is the oracle just the box labeled $`` U_f "" ,$ or is it everything between the measurement and the inputs (including the Hadamard gates)?

+ +

And to give the oracle, do I need to write $U_f$ in matrix form, or is the condensed form ($U_f$ gives $y \rightarrow y \oplus f(x)$ and $x \rightarrow x$) enough with respect to the definition of an oracle?

+",5008,,55,,09-10-2020 11:47,09-10-2020 11:47,What exactly is an oracle?,,1,1,,,,CC BY-SA 4.0 +4626,2,,4625,11-05-2018 10:39,,35,,"

An oracle (at least in this context) is simply an operation that has some property that you don't know, and are trying to find out. The term ""black box"" is used equivalently, to convey the idea that it's just a box that you can't see inside, and hence you don't know what it's doing. All you know is that you can supply inputs and receive outputs. In the circuit diagram you depict, it is just the $U_f$ box. Everything else is stuff that you are adding in order to help interrogate the oracle and discover its properties.

+ +

To give the oracle, you can write it in any valid form that defines a map from all possible inputs to outputs. This could be a matrix (presumably with an unknown parameter), or it could be the map $U:(x,y)\mapsto (x,y\oplus f(x))$ (strictly, $\forall x,y\in\{0,1\}$), because given either description, you can work out the other.
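
+ +

To illustrate that either description determines the other, here is a small plain-Python sketch (my own illustration, for $n=1$, assuming the basis ordering $|x,y\rangle \mapsto 2x+y$) that builds the matrix of $U_f$ directly from the map:

```python
from itertools import product

def oracle_matrix(f):
    # Build U_f with U_f|x, y> = |x, y XOR f(x)>, for 1-bit x and y,
    # using the basis ordering |x, y> -> index 2*x + y.
    U = [[0] * 4 for _ in range(4)]
    for x, y in product([0, 1], repeat=2):
        U[2 * x + (y ^ f(x))][2 * x + y] = 1
    return U

balanced = oracle_matrix(lambda x: x)   # f(x) = x: this is CNOT
constant = oracle_matrix(lambda x: 0)   # f(x) = 0: the identity
```

For $f(x)=x$ this reproduces the CNOT matrix, while the constant $f(x)=0$ gives the identity.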

+",1837,,,,,11-05-2018 10:39,,,,5,,,,CC BY-SA 4.0 +4627,1,4637,,11-05-2018 12:23,,4,841,"

We know that $H_A\otimes H_B\neq H_B\otimes H_A$ (in general). Theoretically, we know the formalism and what observables to construct from the two compositions possible, but we never talk about both possibilities. I wish to know how measurements or evolutions are experimentally done over such composite systems (let's just assume a bipartition as above). +How does the experimentalist know whether he is working in the $A\otimes B$ or $B\otimes A$ composite Hilbert Space?

+",4889,,,,,11-06-2018 11:06,What role does the non-commutativity of the tensor product play in experimental quantum computation?,,4,2,,,,CC BY-SA 4.0 +4628,1,4631,,11-05-2018 12:57,,7,1330,"

There is something I really misunderstand about the Deutsch-Jozsa algorithm.

+

To check if $f$ is balanced or constant, we use the following algorithm:

+

+

where $U_f$ gives $(x,y) \rightarrow (x, y \oplus f(x))$.

+

Let's take $n=1$ for simplicity (thus the function $f$ is defined on $\{0,1\}$). We have four possible $U_f$, associated to two constant possibilities ($f$ equal to $0$ or $1$) and two balanced possibilities.

+

So, in practice, if I want to implement this in a circuit, I have to know exactly the "matrix" of $U_f$. And to do it I have to compute $f$ two times. Thus, I know if $f$ is balanced or constant even before having applied the quantum algorithm. So for a practical aspect, I don't understand what is the point of this.

+

Said differently, if I am given $U_f$ I agree that in one step I will know if $f$ is balanced or constant. But if I know $U_f$ I already know the answer to this question.

+

I am a little confused...

+",5008,,14495,,9/23/2021 18:24,9/23/2021 18:24,How is the Deutsch-Jozsa algorithm faster than classical for practical implementation?,,2,0,,,,CC BY-SA 4.0 +4629,2,,4627,11-05-2018 13:14,,5,,"

When you say $\neq$ I presume you are talking about the implied basis in usual ordering like (00, 01, 02, 10 etc). Otherwise you would have the isomorphism of Hilbert spaces vs an equality statement. That is, AB implies a certain ordered basis and BA a different one.

+ +

The experiment has its observables on the combined system in a basis-independent way. If the experimentalist wants to write their results down, they can choose whatever basis they like.

+ +

The distinction goes into the question being asked. ""What is the second entry of vector $v$ in the Hilbert space that combines $A$ and $B$?"" is not a well-defined question. ""What is the second entry with respect to a given ordered basis?"" is. The experimentalist has to ask the second question in order to get an answer. You have to ask a sensible question if you want a sensible answer.

+",434,,,,,11-05-2018 13:14,,,,0,,,,CC BY-SA 4.0 +4630,2,,4627,11-05-2018 13:33,,3,,"

The order in the tensor product is a convention and has nothing to do with experiments.

+ +

As an example, if I have a cavity (with photons in it, $H_A$) and an atom (with internal states, $H_B$), it is clear which is the atom and which is the cavity, regardless of the order ones chooses for their Hilbert spaces in the tensor product when describing the setup theoretically.

+",491,,,,,11-05-2018 13:33,,,,10,,,,CC BY-SA 4.0 +4631,2,,4628,11-05-2018 13:33,,7,,"

If you see the operator only from the unitary matrix point of view and you enumerate all inputs/outputs, which makes you visualize the matrix, indeed you somehow already know the answer.

+ +

However, imagine now that $n$ is very large, say $n>50$ or $n>60$; it becomes rather difficult to store a $2^n \times 2^n$ unitary matrix. But if you can compute the function, that is, if you have a sequence of gates representing the operation, you can just apply it without having knowledge of the unitary.

+ +

Let us give an example with $n=3$: +$$ f(x) = x_0 \oplus x_1 x_2 $$

+ +

To apply $U_f$, we just need a $CNOT$ and a Toffoli gate representing the different operations of $f$, and we can apply it directly, without needing knowledge of the full unitary (just a decomposition into ""simple"" operations). This extends to examples where $n$ is very large.
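To make the point concrete, here is a hedged sketch (my own code; the bit convention $x = (x_2 x_1 x_0)$ is an assumption, since the answer does not fix one) that simulates applying $U_f$ by permuting amplitudes according to $f$, without ever materializing the $2^{n+1}\times 2^{n+1}$ matrix:

```python
import numpy as np

def f(x):
    # f(x) = x0 XOR (x1 AND x2), where x = (x2 x1 x0) in binary
    x0, x1, x2 = (x >> 0) & 1, (x >> 1) & 1, (x >> 2) & 1
    return x0 ^ (x1 & x2)

def apply_Uf(state, n):
    """Apply (x, y) -> (x, y XOR f(x)) to an (n+1)-qubit state vector
    by permuting amplitudes; no 2^(n+1) x 2^(n+1) matrix is built."""
    out = np.zeros_like(state)
    for x in range(2 ** n):
        for y in (0, 1):
            out[(x << 1) | (y ^ f(x))] = state[(x << 1) | y]
    return out

# Start in |x=011>|y=0>: x0 = 1, x1 = 1, x2 = 0, so f(x) = 1 XOR 0 = 1
state = np.zeros(16)
state[0b011 << 1] = 1.0
out = apply_Uf(state, 3)
print(np.argmax(out))  # 7, i.e. the basis state |011>|1>
```

(This classical simulation still loops over all $2^n$ inputs; on actual hardware the same operation is the fixed, short gate sequence described above.)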

+",4127,,,,,11-05-2018 13:33,,,,9,,,,CC BY-SA 4.0 +4632,2,,4628,11-05-2018 13:47,,8,,"

I think there are probably two points to make here:

+ +
    +
  1. The way that one implements quantum computation is not by simply looking at the unitary matrix and building something out of that, in just the same way that classical computation is not performed simply by first building the truth table and working off that (otherwise all classical computations would be exponential). Instead, as cnada says, the computation is itself built out of simple gates. The simplest is ""do nothing"" which is a perfectly good example of a constant algorithm. I don't need to see the unitary to build that!

  2. +
  3. The context of an oracle is that you don't build it yourself, so you don't know the unitary. Somebody else gives it to you (or perhaps you'll have built it as the result of another computation), with certain promised properties, and it is your job to determine the relevant parameters. Of course, if you want to practically test whether that works, you'll build it all yourself. But then, you don't care about the efficiency saving during the test, because of course you know what you've built. Indeed, you need to know what you've built because otherwise your test cannot work; you don't know what to check the outcome against.

  4. +
+ +

Incidentally, I think this question is closely related.

+",1837,,,,,11-05-2018 13:47,,,,0,,,,CC BY-SA 4.0 +4633,2,,4613,11-05-2018 18:22,,5,,"

In this paper, the authors use the Ising model to simulate the electronic structure Hamiltonian.
+The electronic structure Hamiltonian after the Jordan-Wigner or Bravyi-Kitaev transformation (which the authors of this paper did use) has quadratic (and sometimes higher-order) terms containing $\sigma_x, \sigma_y$, and $\sigma_z$, but the Ising model does not have any quadratic terms containing $\sigma_x$ in any way.

+ +

It is possible to efficiently simulate any Hamiltonian using a Hamiltonian that has quadratic terms containing $\sigma_x$, as proven by Biamonte & Love. However, since the Ising Hamiltonian does not have such terms, which are required to simulate the electronic structure Hamiltonian efficiently, the method in the paper you mentioned is not capable of efficiently finding the ground state of the electronic structure Hamiltonian.

+",,user5019,,,,11-05-2018 18:22,,,,12,,,,CC BY-SA 4.0 +4635,1,4643,,11-05-2018 21:23,,6,330,"

I've been looking at basic quantum algorithms such as the Deutsch-Jozsa algorithm that are able to characterize functions very well and I was wondering if similar approaches exist to characterize quantum states.

+ +

Consider the following example: Given a normalized state $\sum_{i=1}^{N} a_i\vert x_i\rangle$, is there some way to check if all the $a_i = 1/\sqrt{N}$ or if there exists some $a_i$ that are different from the others?

+ +

My initial lines of thinking were to apply a Hadamard on the state but for large $N$, it's not clear if that approach helps. Do you have any ideas on how to proceed, or is it the case that such problems are not things that quantum computers can help with?

+ +

EDIT: I should have emphasized some features of my question better

+ +

1) $N$ is large, yet the ""unbalancedness"" i.e. the variance among the $a_i$ can be small. Hence POVM measurements will succeed in giving me the answer with very low probability.

+ +

2) Methods that distinguish between two known non-orthogonal states seem slightly hard to apply since my question is subtly different. I ask if my state is of a specific kind (balanced) or not. Intuitively, my question seems easier to answer.

+ +

3) I should have stated it earlier but one can assume that one is allowed $c\ll N$ copies of the state if this helps.

+",4831,,55,,08-02-2020 07:54,08-02-2020 07:54,Balanced vs unbalanced superposition distinguisher,,2,0,,,,CC BY-SA 4.0 +4636,1,5821,,11-05-2018 21:52,,7,298,"

Rigetti reports the following parameters: (https://www.rigetti.com/qpu)

+ +
    +
  • T1, T2* times
  • +
  • 1-qubit gate fidelity (F1q)
  • +
  • 2-qubit gate fidelity (F2q) and,
  • +
  • read-out fidelity (Fro)
  • +
+ +

IBM QX reports the following: (https://quantumexperience.ng.bluemix.net/qx/devices)

+ +
    +
  • T1, T2 times
  • +
  • (single) qubit gate error
  • +
  • multi-qubit gate error and,
  • +
  • read-out error.
  • +
+ +

I understand that one can simulate the effect of noise on qubit state using operator-sum representation. According to Nielsen and Chuang, the operation elements are:

+ +

Amplitude damping +$E_0 = \begin{bmatrix} 1 & 0\\ 0 & \sqrt{1-\gamma}\end{bmatrix}$ $E_1 = \begin{bmatrix} 0 & \sqrt{\gamma}\\ 0 & 0\end{bmatrix}$

+ +

Phase damping +$E_0 = \begin{bmatrix} 1 & 0\\ 0 & \sqrt{1-\gamma}\end{bmatrix}$ $E_1 = \begin{bmatrix} 0 & 0\\ 0 & \sqrt{\gamma}\end{bmatrix}$

+ +

Phase flip +$E_0 = \sqrt{p}\begin{bmatrix} 1 & 0\\ 0 & 1\end{bmatrix}$ $E_1 = \sqrt{1-p} \begin{bmatrix} 1 & 0\\ 0 & -1\end{bmatrix}$

+ +

Bit flip +$E_0 = \sqrt{p}\begin{bmatrix} 1 & 0\\ 0 & 1\end{bmatrix}$ $E_1 = \sqrt{1-p} \begin{bmatrix} 0 & 1\\ 1 & 0\end{bmatrix}$

+ +

Bit-phase flip +$E_0 = \sqrt{p}\begin{bmatrix} 1 & 0\\ 0 & 1\end{bmatrix}$ $E_1 = \sqrt{1-p} \begin{bmatrix} 0 & -i\\ i & 0\end{bmatrix}$

+ +

Depolarizing channel +$E_0 = \sqrt{1-3p/4}\begin{bmatrix} 1 & 0\\ 0 & 1\end{bmatrix}$ $E_1 = \sqrt{p/4} \begin{bmatrix} 0 & 1\\ 1 & 0\end{bmatrix}$

+ +

$E_2 = \sqrt{p/4} \begin{bmatrix} 0 & -i\\ i & 0\end{bmatrix}$ $E_3 = \sqrt{p/4} \begin{bmatrix} 1 & 0\\ 0 & -1\end{bmatrix}$

+ +
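As a quick numerical sanity check (my own snippet; the parameter values $\gamma = p = 0.1$ are purely illustrative), each set of operation elements above should satisfy the completeness relation $\sum_k E_k^\dagger E_k = I$:

```python
import numpy as np

g, p = 0.1, 0.1   # illustrative damping / flip parameters

X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0])
I = np.eye(2)

channels = {
    "amplitude damping": [np.array([[1, 0], [0, np.sqrt(1 - g)]]),
                          np.array([[0, np.sqrt(g)], [0, 0]])],
    "phase damping":     [np.array([[1, 0], [0, np.sqrt(1 - g)]]),
                          np.array([[0, 0], [0, np.sqrt(g)]])],
    "phase flip":        [np.sqrt(p) * I, np.sqrt(1 - p) * Z],
    "bit flip":          [np.sqrt(p) * I, np.sqrt(1 - p) * X],
    "bit-phase flip":    [np.sqrt(p) * I, np.sqrt(1 - p) * Y],
    "depolarizing":      [np.sqrt(1 - 3 * p / 4) * I, np.sqrt(p / 4) * X,
                          np.sqrt(p / 4) * Y, np.sqrt(p / 4) * Z],
}

for name, Es in channels.items():
    total = sum(E.conj().T @ E for E in Es)
    assert np.allclose(total, I), name   # completeness relation
print("all six channels satisfy sum_k E_k^dagger E_k = I")
```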

How are the original device parameters related to the parameters in the operations elements i.e., $p$ and $\gamma$? (A first-order approximation of relation between these parameters are also welcomed.)

+ +

[P.S. are the operation elements described in N&C and Kraus operators the same thing?]

+",4722,,26,,01-01-2019 09:04,04-01-2019 04:09,How are Rigetti and IBM QX device parameters related to Kraus operators?,,1,1,,,,CC BY-SA 4.0 +4637,2,,4627,11-05-2018 21:54,,11,,"

For many questions that appear on this site, and about quantum information and computation in general, it is possible to ask a completely classical version of the question, and often the (sometimes obvious) answer that one finds in the more familiar classical setting translates directly to the quantum setting. In this case, a reasonable classical version of the question asks what role the non-commutativity of the Cartesian product plays in experimental classical computing (or, let's say, in practical implementations of classical computation).

+ +

Suppose we have system $A$ that can be in any classical state drawn from a set $\mathcal{A}$, and a system $B$ that can be in any classical state drawn from the set $\mathcal{B}$. If we put system $A$ and system $B$ next to each other on the table, then we can represent the classical state of the two systems together as an element of the Cartesian product $\mathcal{A}\times\mathcal{B}$. Note that there is an implicit assumption here, which is that the two systems are distinguishable, and we're deciding more or less arbitrarily that when we talk about a state $(a,b)\in\mathcal{A}\times\mathcal{B}$ that the state $a$ of system $A$ is listed first and the state $b$ of system $B$ is listed second. We could just as easily have decided to represent the classical state of the two systems together as an element of the Cartesian product $\mathcal{B}\times\mathcal{A}$, with the understanding that the state of system $B$ now gets listed first.

+ +

As an aside, if the two systems were indistinguishable, implying that $\mathcal{A} = \mathcal{B}$, and further we placed the two systems in a bag rather than on the table, then I guess there would really be no difference between $(a,b)$ and $(b,a)$. For this reason we would probably not use the Cartesian product to represent states of the bagged systems -- maybe we would use the set of all multisets of size 2 instead -- but let us forget about this situation and assume $A$ and $B$ are distinguishable for simplicity.

+ +

Now, what role does this play in experiments or practical applications of classical computing? How does an experimenter or programmer know he or she is working in the $\mathcal{A}\times\mathcal{B}$ or $\mathcal{B}\times\mathcal{A}$ state space? When you think about the question this way, I believe it may come into focus. My answer, which is consistent with the other answers that concern the quantum setting, is that it really doesn't play any role at all, and the experimenter/programmer knows because it was his or her decision which order to use. We know the difference between the systems $A$ and $B$, and the decision to represent states of the two systems together by elements of $\mathcal{A}\times\mathcal{B}$ or $\mathcal{B}\times\mathcal{A}$ is totally arbitrary -- but once the decision is made we stick with it to avoid confusion. The decision will not affect any calculations we do, so long as the calculations are consistent with the decision of which order to use.

+ +

To my eye, at a fundamental level there is no difference between the classical version of this question and the quantum version. We decide whether to represent states of the compound quantum system using the space $H_A\otimes H_B$ or $H_B\otimes H_A$, and that's all there is to it. You'll get exactly the same results of any calculations you perform, so long as your calculations are consistent with the choice to use $H_A\otimes H_B$ or $H_B\otimes H_A$.

+",1764,,,,,11-05-2018 21:54,,,,5,,,,CC BY-SA 4.0 +4638,2,,4635,11-06-2018 07:37,,4,,"

There are many different variants depending on what it is precisely that you want to achieve (note, this was written before recent edits, although I think there is still value/relevance in this more general answer).

+ +

The closest analogy to the Deutsch-Jozsa algorithm is probably to say that you're given one of two states, and you want to know which you've been given (with maximum probability). If those two states are orthogonal, you can perfectly distinguish them just by using projective measurements, two of which project onto those particular states. Beyond the orthogonality assumption, there are ways of selecting the measurement to maximise the probability of distinguishing them. For more details, see this answer and this one.

+ +

On the other hand, if I take your question literally (which one certainly should given the edits to the question), then probably the best strategy is to define the projectors +$$ +E_1=|\psi\rangle\langle\psi|^{\otimes c}\qquad E_2=\mathbb{I}-E_1, +$$ +where $|\psi\rangle$ is the uniform superposition state ($a_i=1/\sqrt{N}$), and $c$ copies are allowed. If you get the measurement result $E_1$, you learn essentially nothing. However, if you get the answer $E_2$ (which may well happen with low probability if the $a_i$s are close to uniform), you know that the state definitely was not $|\psi\rangle$, so some of the amplitudes were different.

+",1837,,1837,,11-06-2018 12:56,11-06-2018 12:56,,,,3,,,,CC BY-SA 4.0 +4640,2,,4627,11-06-2018 11:06,,1,,"

The two spaces $A$ and $B$ are just labels, with arbitrary ordering. For distinguishable qubits (or more general), the experimentalist can just say ""this one's $A$, and this other one's $B$"". If you swap the labels, you need to swap the labels everywhere - in both the Hamiltonian and the state (including eigenvectors, density matrix etc).

+ +

In other words, if I define a swap operator $S$ such that +$$ +S(H_A\otimes H_B)S=H_B\otimes H_A, +$$ +then evolution of states can be calculated either using +$$ +e^{-i H_A\otimes H_B t}|\psi_{AB}\rangle \quad\text{or}\quad e^{-i H_B\otimes H_A t}|\psi_{BA}\rangle +$$ +where $|\psi_{BA}\rangle=S|\psi_{AB}\rangle$. Or, if you're working with a density matrix, you have $\rho_{BA}=S\rho_{AB}S$.
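The identity $S(M\otimes N)S = N\otimes M$ actually holds for arbitrary matrices $M$, $N$, which can be checked numerically (my own sketch for two qubits; the random Hermitian ""Hamiltonians"" are illustrative):

```python
import numpy as np

# SWAP on two qubits: S |a>|b> = |b>|a>
S = np.array([[1, 0, 0, 0],
              [0, 0, 1, 0],
              [0, 1, 0, 0],
              [0, 0, 0, 1]])

rng = np.random.default_rng(0)
HA = rng.standard_normal((2, 2)); HA = HA + HA.T   # random Hermitian matrix
HB = rng.standard_normal((2, 2)); HB = HB + HB.T

print(np.allclose(S @ np.kron(HA, HB) @ S, np.kron(HB, HA)))  # True
```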

+",1837,,,,,11-06-2018 11:06,,,,0,,,,CC BY-SA 4.0 +4641,2,,4623,11-06-2018 11:08,,3,,"

Basically, quantum teleportation is in fact the dual of superdense coding. In superdense coding we fit two classical bits of information into a single qubit. On the other hand, quantum teleportation uses two classical bits of information to send a single qubit that is in an unknown quantum state. I suggest you check the IBM Q documentation for more details. +Also, the “No Cloning Theorem” states that you cannot perfectly clone a qubit in an unknown quantum state. Let's delve a little deeper to see why this is true.

+ +

Let |𝜓⟩, |𝜙⟩, and |𝜔⟩ be vectors.

+ +

The sum |𝜓⟩ + |𝜙⟩ is a vector, and the scalar product 𝛼|𝜓⟩ is a vector, where 𝛼 is a complex number, 𝛼 ∈ ℂ.

+ +
    +
  • Commutativity: |𝜓⟩ + |𝜙⟩ = |𝜙⟩ + |𝜓⟩
  • +
  • Associativity: (|𝜓⟩ + |𝜙⟩) + |𝜔⟩ = |𝜓⟩ + (|𝜙⟩ + |𝜔⟩)
  • +
  • Distributive property for scalars and vectors: + +
      +
    • (𝛼 + 𝛽)|𝜓⟩ = 𝛼|𝜓⟩ + 𝛽|𝜓⟩ where 𝛼,𝛽 ∈ ℂ
    • +
    • 𝛼(|𝜓⟩ + |𝜙⟩) = 𝛼|𝜓⟩ + 𝛼|𝜙⟩ where 𝛼 ∈ ℂ
    • +
  • +
  • Associative property: + +
      +
    • 𝛼(𝛽|𝜓⟩) = (𝛼𝛽)|𝜓⟩ where 𝛼,𝛽 ∈ ℂ
    • +
  • +
+ +

Then, suppose that we could build a unitary operator called U that could clone a qubit. This operator would take two qubits as input, one in an unknown state |𝜓⟩ and the other in a fixed state such as |0⟩ that will serve as our target for duplication. The cloning operator would then output a copy of our qubit along with the original source qubit, and both qubits would be in the same state |𝜓⟩.

+ +
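The linearity argument can be completed as follows (this is the standard textbook derivation, not a quote from the paper below). If U(|𝜓⟩ ⊗ |0⟩) = |𝜓⟩ ⊗ |𝜓⟩ held for every state, then applying U to |𝜓⟩ = 𝛼|0⟩ + 𝛽|1⟩ and using linearity would give

$$U\bigl((\alpha|0\rangle+\beta|1\rangle)\otimes|0\rangle\bigr)=\alpha\,U(|0\rangle\otimes|0\rangle)+\beta\,U(|1\rangle\otimes|0\rangle)=\alpha|00\rangle+\beta|11\rangle,$$

while a true clone would instead be

$$(\alpha|0\rangle+\beta|1\rangle)\otimes(\alpha|0\rangle+\beta|1\rangle)=\alpha^2|00\rangle+\alpha\beta|01\rangle+\alpha\beta|10\rangle+\beta^2|11\rangle.$$

These agree only when 𝛼𝛽 = 0, i.e. only for the basis states themselves, so no single unitary can clone an arbitrary unknown state.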

For more detail, please check this paper: https://www.cs.mcgill.ca/~yli252/files/quantum.pdf

+",5028,,,,,11-06-2018 11:08,,,,0,,,,CC BY-SA 4.0 +4642,1,,,11-06-2018 14:45,,12,908,"

First of all : I am a beginner in quantum computing.

+ +

I would like to have a resource (or an answer if it is not complicated) explaining where we put the error correction codes in a quantum circuit.

+ +

Indeed, I know that different possible errors can occur (bit flip, phase flip, etc.), and we have algorithms to correct them. But what I would like to know is whether there are strategies for where we put the error correction algorithm. Is it after each gate of the main algorithm? Is there a smarter strategy used to do a single correction for a set of gates?

+ +

If the answer is ""complicated"" I would like to have a resource to learn all this (I find a lot of things for error correction code, but I haven't found anything about where we must do the correction).

+",5008,,2371,,11-06-2018 14:53,11-07-2018 07:33,Where do we put error correction code in quantum circuit?,,2,3,,,,CC BY-SA 4.0 +4643,2,,4635,11-06-2018 14:55,,7,,"

It is not possible at an information-theoretic level to do what you want to do.

+ +

Let us suppose we have two pure states: $|\phi\rangle$ and $|\psi\rangle$, where +$$ +|\phi\rangle = \frac{1}{\sqrt{N}}\sum_{i=1}^N |x_i\rangle +$$ +and $|\psi\rangle$ is similar to $|\phi\rangle$ but with a few of the coefficients tweaked in some way. Notice that we're fixing just two states, and even if you're promised that you're given one of these two states it will not be possible to determine which one you're given with high probability, under the assumption that $|\phi\rangle$ and $|\psi\rangle$ are close together. (As DaftWullie has suggested, a variant of this problem where $|\phi\rangle$ is fixed and $|\psi\rangle$ is not known ahead of time is certainly no easier than the case in which $|\psi\rangle$ is known ahead of time.)

+ +

To keep things simple, let us suppose that we're given $|\phi\rangle$ with probability $1/2$ and $|\psi\rangle$ with probability $1/2$, and we're aiming to maximize the probability of correctly determining which of the two states we were given. A theorem sometimes called the Holevo-Helstrom theorem tells us exactly what the optimal probability of a correct guess is: +$$ +\frac{1}{2} + \frac{1}{4} \bigl\| |\phi\rangle\langle \phi| - |\psi\rangle\langle \psi| \bigr\|_1, +$$ +where the norm is the trace norm. Because we're working with pure states, this expression can be simplified to +$$ +\frac{1}{2} + \frac{1}{2}\sqrt{1 - |\langle \psi | \phi \rangle|^2}. +$$ +This is for the optimal measurement; you cannot do any better than this, no matter what you try to do, assuming you start with one of the two states selected at random and by honest means do your best to determine which state you were given.

+ +

Now, the question suggests that the ""unbalancedness,"" or variance among the coefficients of the states, is small. With that in mind we could define +$$ +\varepsilon = 1 - |\langle \psi | \phi \rangle|^2 +$$ +and regard $\varepsilon$ as a small (but nonnegative) real number. This means that our probability of a correct guess is +$$ +\frac{1}{2} + \frac{\sqrt{\varepsilon}}{2}, +$$ +which is not much better than just randomly guessing (which yields a correct answer with probability 1/2).

+ +
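A small numerical illustration of this bound (my own sketch; the perturbation size is arbitrary) shows how close to random guessing the optimal measurement is when the ""unbalancedness"" is small:

```python
import numpy as np

N = 1024
phi = np.full(N, 1 / np.sqrt(N))   # the uniform superposition
psi = phi.copy()
psi[0] += 0.01                     # slightly "unbalanced" amplitudes
psi /= np.linalg.norm(psi)

eps = 1 - abs(np.vdot(psi, phi)) ** 2
p_opt = 0.5 + 0.5 * np.sqrt(eps)   # optimal single-shot success probability
print(eps, p_opt)                  # eps ~ 1e-4, so p_opt is barely above 1/2
```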

If you assume that you are given $k$ copies of $|\phi\rangle$ or $k$ copies of $|\psi\rangle$, the calculation is exactly the same, except replacing the states with $|\phi\rangle^{\otimes k}$ and $|\psi\rangle^{\otimes k}$, also as DaftWullie has suggested. We can see how this will affect the optimal correctness probability: it now becomes +$$ +\frac{1}{2} + \frac{1}{2}\sqrt{1 - |\langle \psi | \phi \rangle|^{2k}}. +$$ +Notice that we're placing no constraints on the measurements for distinguishing the two cases -- they could be arbitrarily correlated across the $k$ copies of the states.

+ +

We can compare this optimal correctness probability with the one-shot case by using the inequality +$$ +1 - |\langle \psi | \phi \rangle|^{2k} +\leq k \bigl(1 - |\langle \psi | \phi \rangle|^2\bigr). +$$ +To prove this inequality, notice that it is trivial if $|\langle \psi | \phi \rangle| = 1$, and otherwise we can use the crude estimate +$$ +\frac{1 - |\langle \psi | \phi \rangle|^{2k}}{1 - |\langle \psi | \phi \rangle|^2} = 1 + |\langle \psi | \phi \rangle|^2 + \cdots + |\langle \psi | \phi \rangle|^{2(k-1)} < k +$$ +for the case $|\langle \psi | \phi \rangle| < 1$. We find that the optimal correctness probability is upper-bounded by +$$ +\frac{1}{2} + \frac{\sqrt{k\varepsilon}}{2}. +$$ +So, if $k$ is considered to be much smaller than $N$, and $\varepsilon$ is on the order of $1/N$, then we're still not doing much better than randomly guessing.

+",1764,,,,,11-06-2018 14:55,,,,0,,,,CC BY-SA 4.0 +4644,2,,4642,11-06-2018 15:03,,10,,"

Based on your question, I think that you were not looking for the correct term. Error correction codes are methods to detect and correct possible errors that arise in qubits due to the effect of decoherence.

+ +

The term fault-tolerant quantum computing refers to the paradigm of quantum devices that work effectively even when their elementary components are imperfect, and the error correction codes you have been looking at are the base used to construct such computations. I encourage you to look for information related to fault tolerance by yourself, as it is quite a big area in quantum computing. However, I strongly recommend the text Fault-tolerant quantum computation by Preskill. In that paper, the author does indeed start by discussing error correction codes, but afterwards goes deep into the fault-tolerance concept, and I think it will resolve many of your doubts about the topic.

+",2371,,,,,11-06-2018 15:03,,,,0,,,,CC BY-SA 4.0 +4645,1,4883,,11-06-2018 17:37,,4,129,"

In an answer to a previous question, What exactly are Quantum XOR Games?, ahelwer states:

+ +
+

One application of xor games is self-testing: when running algorithms on an untrusted quantum computer, you can use xor games to verify that the computer isn't corrupted by an adversary trying to steal your secrets!

+
+ +

In an answer to a different previous question, How to benchmark a quantum computer? (which includes a link to Using a simple puzzle game to benchmark quantum computers by James Wootton), DaftWullie suggests blind quantum computation as a general strategy.

+ +

How can XOR games be used to perform blind quantum computations to benchmark quantum computers?

+",2645,,2645,,12/18/2018 19:41,12/18/2018 19:41,Using XOR games to benchmark quantum computers,,1,0,,,,CC BY-SA 4.0 +4646,2,,4412,11-07-2018 00:45,,3,,"

Within the paper itself that you linked to, on page 4, section 1.2 ""Nonabelian Fourier Transforms"", and page 5, section 1.3 ""Weak vs Strong Sampling and the Choice of Basis"", they define what they mean by representation (page 4, third indent from the top). It specifies that they represent the finite group G via a homomorphism (structure-preserving map) ρ : G → U(d), meaning G is mapped to unitary matrices U of dimension d, with d equal to the number of rows and columns in U.

+ +

As far as the differences between the weak standard method and the strong standard method, while both perform Fourier analysis on non-abelian groups, the weak method uses a random basis, while the strong method specifies the rows and columns in a suitably chosen basis in order to perform a full measurement. They state on page 5: ""We show...that we lose information-theoretic reconstructibility if we measure using a random basis. Specifically, we need an exponential number of measurements to distinguish conjugates of small subgroups of Ap. This establishes for the first time that the strong standard method is indeed stronger than measuring in a random basis: some bases provide much more information about the hidden subgroup than others.""

+ +

Hope this helps. They link to some useful resources in the paper, I found these two pretty helpful:

+ +
    +
  1. https://arxiv.org/abs/quant-ph/0211124
  2. +
  3. http://www.math.tau.ac.il/~borovoi/courses/ReprFG/Hatzagot.pdf
  4. +
+",5029,,26,,5/13/2019 21:22,5/13/2019 21:22,,,,0,,,,CC BY-SA 4.0 +4647,2,,4642,11-07-2018 07:33,,7,,"

In fault-tolerant quantum computing, we make a distinction between physical qubits and logical qubits.

+ +

The logical qubits are the ones we use in our algorithm. So if our input is a number stored in binary across $n$ qubits (as in Shor's algorithm), then these $n$ qubits are logical qubits. When we ask for a quantum Fourier transform on a collection of qubits, then these will also be logical qubits. We expect logical qubits and the operations we do on them to be completely error free, just as we do with the bits and operations in normal computers.

+ +

The physical qubits are the ones that actually exist, and they are noisy. These are what we use to make logical qubits, but it typically takes many physical qubits to make one logical qubit. This is because of the large redundancy needed to be able to detect and correct errors.

+ +

The design of the actual code run on physical qubits will happen in layers. A quantum error correction software engineer will design the logical qubits by writing the program needed to implement the quantum error correcting code. For each operation that someone might need in an algorithm, they will design an error correction compatible version, which performs the operation on the logical qubits in a way that allows its imperfections to be detected and corrected.

+ +

Then the programmer will come along and write their program. They won't need to think about physical qubits or error correction at all.

+ +

Finally, the compiler will combine everything to create the fault-tolerant version of the program to run on the physical qubits. This will look nothing like what was written by the programmer. It won't look like a constant alternation of things that the programmer wrote, followed by error correction things to clean it up. It will almost completely deal with just detecting and correcting the errors that constantly occur, with minor perturbations to implement the algorithm.

+ +

As a reference, I guess it is best to recommend something that explains how operations on logical qubits are implemented on physical qubits via an error correcting code. One of my own papers does this job, by explaining this for a variety of ways to get logical operations in the surface code. It also has references to many works by others in the same area.

+",409,,,,,11-07-2018 07:33,,,,0,,,,CC BY-SA 4.0 +4648,1,,,11-07-2018 08:00,,9,913,"

There are multiple ways of building a qubit: Superconducting (transmons), NV-centers/spin-qubits, topological qubits, etc.

+ +

The superconducting qubits are the most well-known qubits and are also the easiest to build. The machines by IBM and Google, for instance, use superconducting qubits.

+ +

Spin qubits have sizes in the order of a few nanometers and thus have great scaling capabilities. The problem with superconducting qubits, on the other hand, is the size. Apparently, it is hard to shrink the size of a superconducting qubit (typically ~0.1mm).

+ +

What is the limiting factor in the size of superconducting qubits and why can this limiting factor not be scaled down?

+",2005,,26,,11-07-2018 09:55,11-09-2018 17:49,What limits scaling down the size of superconducting qubits?,,2,0,,,,CC BY-SA 4.0 +4649,1,,,11-07-2018 13:06,,7,83,"

I have a script that takes a while to simulate. I can modify it in such a way that I use fewer qubits at a time, but it will require more iterations of manipulation. I believe this will cut down on simulation time, but is it worse when running on an actual quantum computer? It will still run in polynomial time, regardless.

+ +

I am thinking I should go with the fewer qubits and more gates method, since I would free up qubits for others and cut my simulation time, but I would like to know which is technically the better way, computationally.

+",4657,,26,,12/23/2018 12:48,12/23/2018 12:48,Is it better to use fewer gates or fewer working qubits?,,0,4,,11/13/2018 21:57,,CC BY-SA 4.0 +4650,1,4653,,11-07-2018 17:48,,1,202,"

In What is the Computational Basis? gIS states:

+ +
+

One also often speaks of ""computational basis"" for higher-dimensional states (qudits), in which case the same applies: a basis is called ""computational"" when it's the most ""natural"" in a given context.

+
+ +

The Wikipedia page for Hilbert space includes this snippet:

+ +
+

When that set of axes is countably infinite, the Hilbert space can also be usefully thought of in terms of the space of infinite sequences that are square-summable. The latter space is often in the older literature referred to as the Hilbert space.

+
+ +

If the Hilbert space has countably infinite axes, is the computational basis for it transfinite?

+ +

Additionally, would it be accurate to state that computations with finite bases correlate to classical computations, while computations with transfinite bases correlate quantum computations?

+",2645,,26,,12/14/2018 6:26,12/14/2018 6:26,Is the computational basis for Hilbert space transfinite?,,1,4,,,,CC BY-SA 4.0 +4652,1,,,11-07-2018 18:38,,8,579,"

I am very new to quantum computing and just try to understand things from a computer scientist's perspective.

+ +

In terms of computational power, what I have understood,

+ +
+

100 ideal qubits ... can equate to [$2^n$ pieces of information]

+
+ +

Now Rigetti Computing has announced a 128 qubit computer.

+ +

Let's imagine they indeed release it next year.

+ +

This leads me to the following thoughts, please correct me if I am wrong:

+ +
    +
  • let's say hypothetically due to the noise about 28 qubits can't be taken into consideration (as used for the fault tolerance for example)
  • +
  • that is, we could work with 100 qubits as in the example above.
  • +
+ +

Could we say then, we have an analog of von Neumann architecture i.e. say 64 qubits go for memory and 32 qubits for the instruction set (say remaining 4 are reserved).

+ +

Does this mean then (oversimplified!):

+ +
    +
we get 2^32 bits equivalent ~ 537 MB worth of ""register bits"" for CPU instructions, altogether with caches (no idea who might need that, but it could probably become a many-core ""die"", see for example Quantum 4004), compared to, say, a 512KB = 2^22 bits cache for one level on a classical computer

  • +
  • and for ""RAM"", we get remaining 2^64 bits equivalent = 2.3 exabytes worth memory? (way more than current supercomputers have; Google had though reportedly total disk storage of 10 exabytes in 2013)

  • +
+",5044,,5044,,11-07-2018 20:52,01-10-2019 21:13,Understanding (theoretical) computing power of quantum computers,,2,1,,,,CC BY-SA 4.0 +4653,2,,4650,11-07-2018 20:36,,3,,"

You have a finite basis for $(\mathbb{C}^2)^{\otimes n}$ if you have $n$ qubits. A finite basis (with cardinality $2^n$) is still quantum.

+ +

$\mathcal{H}$ could be countably infinite dimensional. Like $L^2 (\mathbb{R}^3)$ for an orbital. Or an anti-symmetric combination of $N$ of those denoted $\wedge^N (L^2 (\mathbb{R}^3))$ for $N$ electrons in some molecule. Both countable dimensional.

+ +

The set of states is uncountable, but that is just like saying $\mathbb{C}$ or the set of phases $e^{i \theta}$ is uncountable as a set. True but not really relevant because you are using other structure besides just being a set.

+ +

Uncountable dimensional Hilbert spaces are hairier. They would not be separable so a bunch of theorems you need would disappear.

+ +

Edit: Was referring to orthogonal basis not algebraic basis. If you demanded everything written as a finite linear combinations of basis vectors then the sequence $e_i$ of vectors that have $1$ in position $i$ and 0's elsewhere would not be a basis for $\ell^2 (\mathbb{N})$. You would need infinitely many of them for something like $(1,\frac{1}{2},\frac{1}{4},\cdots)$. So to make an algebraic basis you would need more basis vectors, in fact uncountably more. Another way to see this is the set of $(1,t,t^2,\cdots)$ for all $0<t<1$ is uncountable and algebraically linearly independent so algebraically $\ell^2 (\mathbb{N})$ is at least uncountably infinite dimensional in the algebraic sense. In a Hilbert space usually using Hilbert space dimension not vector space dimension.

+",434,,434,,11-08-2018 10:30,11-08-2018 10:30,,,,4,,,,CC BY-SA 4.0 +4654,2,,4652,11-08-2018 00:12,,6,,"

It isn't really like that, for a few reasons.

+ +

For the link saying ""100 ideal qbits can equate to $2^{100}$ pieces of information"", that's talking about logical qbits. Logical qbits are composed of many (perhaps hundreds or even thousands) of physical qbits entangled in a quantum error correction scheme, functioning together as a single qbit. So with 128 physical qbits maybe we can make a single sort-of-okay logical qbit.

+ +

Now, even if we had 100 perfect logical qbits, those $2^{100}$ pieces of information are not equivalent to $2^{100}$ classical bits. It's much easier to understand with math; here's a single qbit:

+ +

$|\psi\rangle = \begin{bmatrix} \alpha \\ \beta \end{bmatrix}$, where $\alpha, \beta \in \mathbb{C}$ and $|\alpha|^2 + |\beta|^2 = 1$

+ +

So with this single qbit, we do have $2^1=2$ ""pieces of information"" ($\alpha$ and $\beta$, actually called amplitudes). However, this doesn't mean we can store two classical bits of information in it. Rather, the amplitudes just encode the probabilities that the qbit will collapse to the classical bit 0 or 1 upon measurement; the probability it collapses to 0 is given by $|\alpha|^2$, and the probability it collapses to 1 is given by $|\beta|^2$. It's a bit more complicated than that because there are other ways we can measure it where it won't collapse to 0 or 1, but we won't worry about those here. The upshot is that at the end of the day, you can only get a single bit of classical information out of the qbit.

+ +

Let's look at two qbits. We represent multiple qbits with something called the tensor product, which is where the exponential growth comes in:

+ +

$|\phi_0\rangle \otimes |\phi_1\rangle = \begin{bmatrix} \alpha_0 \\ \beta_0 \end{bmatrix} \otimes \begin{bmatrix} \alpha_1 \\ \beta_1 \end{bmatrix} = \begin{bmatrix} \alpha_0\alpha_1 \\ \alpha_0\beta_1 \\ \beta_0\alpha_1 \\ \beta_0\beta_1 \end{bmatrix}$

+ +

When we measure these two qbits, $|\alpha_0\alpha_1|^2$ is the probability the system collapses to 00, $|\alpha_0\beta_1|^2$ to 01, $|\beta_0\alpha_1|^2$ to 10, and $|\beta_0\beta_1|^2$ to 11. So even though the number of amplitudes we have is growing exponentially (with three qbits there would be eight amplitudes), again at the end of the day you can only get out as many classical bits as you have qbits. This principle is also known as Holevo's bound.
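To make the bookkeeping above concrete, here is a small NumPy sketch (my own illustration, not part of the original answer; the amplitude values are arbitrary normalized choices) that builds a two-qbit state with the tensor (Kronecker) product and checks that the four squared amplitudes form a probability distribution:

```python
import numpy as np

# Two single-qbit states |phi_0> and |phi_1> (arbitrary normalized examples)
phi0 = np.array([3/5, 4/5])                    # alpha_0, beta_0
phi1 = np.array([1/np.sqrt(2), 1/np.sqrt(2)])  # alpha_1, beta_1

# The joint state is the tensor product: [a0*a1, a0*b1, b0*a1, b0*b1]
joint = np.kron(phi0, phi1)

# |amplitude|^2 gives the probability of collapsing to 00, 01, 10, 11
probs = np.abs(joint) ** 2
print(probs)        # [0.18 0.18 0.32 0.32]
print(probs.sum())  # 1.0 - only n classical bits ever come out of n qbits
```

Four amplitudes are being tracked, but a measurement still yields just one of the four outcomes, i.e. two classical bits.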

+ +

If quantum computers don't get their speedup from storing & operating upon exponential numbers of amplitudes, where do they get it from? It's tricky to boil it down to a single thing. You should know this exponential growth property we're talking about (called the exponential size of the Hilbert space) is an important component, but isn't the full story. It's true that when we don't measure the qbits and continue to operate on them within the quantum computer, we indeed manipulate an exponential number of amplitudes at once. However, you have to be very clever when you do this, because at the end of the day you can only extract a very small number of classical bits from the system - you can never know the actual values of the amplitudes themselves - and so only very specific problems are amenable to speedup on a quantum computer.

+ +

If you're interested in learning more, I gave a video lecture on quantum computing aimed at computer scientists that you might enjoy.

+",4153,,4153,,01-10-2019 21:13,01-10-2019 21:13,,,,2,,,,CC BY-SA 4.0 +4655,2,,4652,11-08-2018 07:37,,4,,"
+

100 ideal qubits ... can equate to [$2^n$ pieces of information]

+
+ +

This is really not the case. Taking an equivalent line from the same article:

+ +
+

To put this all into perspective, 100 normal bits just equals 100 pieces of information, while 100 ideal qubits (qubits we get in a computer simulation: they are perfect and are not influenced by external factors that influence a physical qubit) can equate to 1,267,650,600,228,229,401,496,703,205,376 pieces of information.

+
+ +

Simply, No. It looks a bit like this, but it's very misleading. This is no more true for a quantum computer than it is for a probabilistic classical computer where you describe the state at any moment in time by a set of $2^n$ probabilities. In both cases, if you measure the state, the maximum information you can retrieve (on average) is $n$ bits.

+ +
+

let's say hypothetically due to the noise about 28 qubits can't be taken into consideration (as used for the fault tolerance for example)

+
+ +

This isn't how error corrected computing (classical or quantum) works. For one level of error correction, you'll have to encode every logical qubit in 7 physical qubits (let's say). You'll get some improvement in the error rate, but you're already down to only 18 useful qubits. But if that's not enough, you need a second level of error correction, so effectively you've got 1 logical qubit for every 49 physical qubits. So, you're down to 2 logical qubits. (If you have $k$ levels of error correction, then roughly speaking you need $7^k$ physical qubits per logical qubit, but the per-qubit error rate is doubly exponentially reduced.)
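The back-of-envelope bookkeeping in the paragraph above can be written out explicitly. This little Python sketch (illustrative only, assuming exactly 7 physical qubits per logical qubit per level of concatenation, as in the answer) shows how quickly 128 physical qubits get eaten up:

```python
physical = 128

for levels in range(1, 4):
    per_logical = 7 ** levels          # physical qubits per logical qubit
    logical = physical // per_logical  # whole logical qubits available
    print(f"{levels} level(s): {per_logical} physical per logical -> {logical} logical")
# 1 level(s): 7 physical per logical -> 18 logical
# 2 level(s): 49 physical per logical -> 2 logical
# 3 level(s): 343 physical per logical -> 0 logical
```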

+ +

Your discussion of the structure of the (processor,RAM, etc.) is probably also not appropriate to the Rigetti device. Why do classical computers have hard drives, RAM, cache etc? We have a bunch of different technologies which all have different trade-offs in terms of speed of access, volatility of storage, capacity etc. A computer is a careful combination of all of these things to get the most cost effective, fast computation possible. But a device such as Rigetti's will have all the qubits essentially the same (there'll be some local variation depending on where they are on the chip, but this is a very small effect). There is no reason to split up the architecture in terms of RAM, cache, hard drive etc. These concepts only become relevant when you are interfacing several different hardware types.

+ +

Even if you were using the device in such a way then, to repeat a previous statement, you don't get to store $2^{128}$ bits of information, only 128 bits.

+ +

If you're looking for the source of a quantum speed-up, then think about classical computation in terms of universal sets of logic gates. For example, everything can be built out of NAND gates. If I suddenly gave you a new gate that cannot be built out of NAND gates, then suddenly you have a new tool that might, on a case by case basis, give you the potential to improve algorithms. This is basically what a quantum computer does.
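As a toy illustration of the "everything out of NAND gates" point (a classical aside of mine, not from the original answer), here is XOR built from four NAND gates:

```python
def nand(a, b):
    """NAND of two bits: 0 only when both inputs are 1."""
    return 1 - (a & b)

def xor(a, b):
    # Standard 4-NAND construction of XOR
    c = nand(a, b)
    return nand(nand(a, c), nand(b, c))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, xor(a, b))
# 0 0 0
# 0 1 1
# 1 0 1
# 1 1 0
```

A quantum gate set is universal in the same spirit, but it contains operations that no circuit of classical gates can reproduce.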

+",1837,,,,,11-08-2018 07:37,,,,2,,,,CC BY-SA 4.0 +4656,1,,,11-08-2018 07:47,,6,879,"

By my first impression, there are many-qubits computers out there and more to come, as to follow the press.

+ +

Now a closer look reveals that it's all about designing and building physical qubits.

+ +

Then, as it seems from further reading, you actually need quite many physical qubits (dozens or hundreds) to come close to a practically usable logical qubit.

+ +

So does it mean after all, nobody has yet built any single logical qubit?

+ +

Note. This question is meant to understand the state of the art as applied to computing, not to blame it!

+",5044,,26,,12/23/2018 12:48,12/23/2018 12:48,Is there any single-logical-qubit physical device out there as of end 2018?,,1,0,,,,CC BY-SA 4.0 +4657,2,,4656,11-08-2018 08:53,,7,,"

A logical qubit is a very fluid concept. You could use physical qubits as logical qubits. Or, you can encode multiple physical qubits as a single logical qubit. The more physical qubits you use, the better the resistance to noise. So, I would suggest that you question isn't exactly the right one to ask, and a better question is whether something useful can be done with existing quantum technology (in the direction of computation).

+ +

The long-term goal is to build quantum computers, which require logical operations to be performed with a suitable level of reliability (below the ""fault-tolerant threshold""), that can be maintained for a long time. It's true that we're probably not quite there yet, even with a single logical qubit. I don't actually know how close current hardware is to achieving it. That doesn't mean that existing devices are entirely pointless.

+ +

There is a lot of research going on at the moment into ""quantum supremacy"", in other words, given the sort of noisy quantum devices of 50-100 qubits that are starting to appear, is there anything that we could do with them that is unequivocally better than anything we could do with a classical computer? The expectation is that we're somewhere around that threshold at the moment, but I'm not aware of anything that is definitive.

+",1837,,,,,11-08-2018 08:53,,,,6,,,,CC BY-SA 4.0 +4658,2,,4648,11-08-2018 18:28,,8,,"

Getting enough capacitance and maintaining coherence essentially set the size limit. A superconducting qubit, for the purposes of answering this question, can be imagined as an oscillator consisting of an inductor and a capacitor. The frequency of the oscillator can't be too high otherwise controlling the qubit becomes difficult. At Google, we typically work with the frequency range 4-8 GHz. A wide range of microwave generation, manipulation, and analysis tools are available off-the-shelf for this range.

+ +

The capacitor is built in a simple manner to reduce noise. Essentially a plus-shaped cut in a piece of metal. The kinds of techniques used to achieve large capacitors in small sizes such as a pair of meshed combs or some kind of multi-layer metal-dielectric sandwich increase field strengths and therefore the strength of interaction with imperfections in the chip, increasing noise. To get a large capacitance with this simple design requires significant space. Indeed, our qubits are closer to 1 mm center to center.

+ +

That's the answer to your question, but there's a premise in the set up of your question that big is bad. In my opinion, small is bad, and big is far more scalable.

+ +

We drive our qubits with microwaves, these are typically delivered with coaxial cables of diameter currently of order 1/32nd of an inch. If you imagine a million qubit computer, at our scale this is about a square meter, and getting a few million lines in sounds very achievable. I'm not sure why you would want a quantum supercomputer to be smaller than this.

+",5056,,,,,11-08-2018 18:28,,,,1,,,,CC BY-SA 4.0 +4659,1,,,11-08-2018 18:51,,6,157,"

Given some set of basis states $\{\vert 0\rangle, \vert 1\rangle, \vert 2\rangle...\vert N\rangle\}$ and an unknown superposition of the form $\frac{1}{\sqrt{2}}(\vert i \rangle + \vert j \rangle)$, what exactly forbids us from computing $i$ and $j$? I know about the no cloning theorem but why can I not see it from a single copy of the state?

+",5057,,,,,11-09-2018 07:34,Can one ever find the elements of a superposition state?,,3,0,,,,CC BY-SA 4.0 +4660,2,,4659,11-08-2018 19:31,,5,,"

Given only one copy of such a state, it is not possible to determine it with any good probability. The reason is that there is no way, in principle, to extract information from the system without making a measurement on it. And when we make a measurement, the state is projected onto a single basis element: with probability $1/2$ each we obtain $i$ or $j$, so a single shot can at best reveal one of the two labels, never both. To answer a more general question: if we don't even know what the set $\{1,2,...,N\}$ is; suppose someone prepares a quantum state and gives it to you without telling you what kind of experimental apparatus defines the basis, then there is no way in principle to figure out anything (if someone gives $n$ isolated levels of a harmonic oscillator and you are looking for two states of polarization in it, for instance). Hence, in all literature, we assume that the type of the experimental apparatus is always known, which means that the basis $\{1,2,...,N\}$ is known but a general state $|\psi\rangle=\sum_i c_i |i\rangle$ is unknown in this basis, where we can evaluate the coefficients $c_i$ by repeated measurements on many copies. One such technique is state tomography. Otherwise, the most ordinary and default way is to use the projectors $|i\rangle \langle i|$ formed from the basis and see what we get as the final state after the measurement.
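The projector recipe at the end can be sketched numerically (a NumPy illustration of mine, with an arbitrary choice $N=4$ and hidden labels $i=1$, $j=3$):

```python
import numpy as np

N = 4                      # basis {|0>, ..., |N>}  ->  dimension N + 1
i, j = 1, 3                # the (unknown to the measurer) labels
psi = np.zeros(N + 1)
psi[i] = psi[j] = 1 / np.sqrt(2)

# Born-rule probabilities from the projectors |k><k|
probs = np.array([abs(np.eye(N + 1)[k] @ psi) ** 2 for k in range(N + 1)])
print(probs)   # 1/2 on k = i and k = j, zero elsewhere
```

A single measurement lands on $i$ or $j$ with probability $1/2$ each, so one copy reveals at most one of the two labels.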

+",4889,,,,,11-08-2018 19:31,,,,0,,,,CC BY-SA 4.0 +4661,2,,4659,11-08-2018 19:52,,4,,"

Denote $| \psi_{i,j} \rangle = \frac{1}{\sqrt{2}} ( | i \rangle + | j \rangle) $ for fixed $i$ and $j$. Alice gives a randomly drawn one of this form.

+ +

That is for each $0 \leq i<j \leq N$ she assigns a probability $p_{ij}$ and sends the corresponding state $| \psi_{ij} \rangle$ with that probability. When you say unknown of that form, I'll assume $p_{ij}=\frac{2}{N(N+1)}$ uniformly.

+ +

Then Bob receives a $\rho=\sum_{0 \leq i < j \leq N} p_{ij} | \psi_{ij} \rangle \langle \psi_{ij} |$. Bob can do any transformation followed by measurement he wants to get a variable $Y$.

+ +

How much information about $(i,j)$ is contained therein? By Holevo's theorem, at most $\chi = S(\rho) - \sum_{i<j} p_{ij}\, S(|\psi_{ij}\rangle\langle\psi_{ij}|) = S(\rho)$; the second term is $0$ in this case because each $|\psi_{ij}\rangle$ is a pure state. I haven't calculated this entropy out though.

+ +

Holevo's Theorem
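For a concrete feel, here is a numerical sketch of mine (not from the original answer) computing $S(\rho)$ for the smallest non-trivial case $N=2$, i.e. basis $\{|0\rangle,|1\rangle,|2\rangle\}$ with the three pairs drawn uniformly:

```python
import numpy as np

d = 3                                   # basis {|0>, |1>, |2>}
pairs = [(0, 1), (0, 2), (1, 2)]
rho = np.zeros((d, d))
for (i, j) in pairs:
    psi = np.zeros(d)
    psi[i] = psi[j] = 1 / np.sqrt(2)
    rho += np.outer(psi, psi) / len(pairs)   # uniform p_ij = 1/3

evals = np.linalg.eigvalsh(rho)              # eigenvalues 1/6, 1/6, 2/3
S = -sum(p * np.log2(p) for p in evals if p > 1e-12)   # von Neumann entropy, bits
print(S)   # ~1.25 bits: the Holevo bound on what Bob can learn about (i, j)
```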

+",434,,,,,11-08-2018 19:52,,,,0,,,,CC BY-SA 4.0 +4662,1,,,11-09-2018 06:13,,2,188,"

I read this research paper.

+ +

I have octave and the package running.

+ +

This is an example of what I did so far -

+ +
+octave:3> s1 = state(normalize(ket([1,0])+ket([0,1])))  
+s1 =
+
+   0.00000   0.00000   0.00000   0.00000
+   0.00000   0.50000   0.50000   0.00000
+   0.00000   0.50000   0.50000   0.00000
+   0.00000   0.00000   0.00000   0.00000
+
+
+ +

How would I see whether or not a state is entangled? Are there any examples using octave demonstrating entanglement?

+",4489,,4489,,11-12-2018 10:41,11/13/2018 8:50,Are there any test examples of Octave and Quantum Entanglement?,,1,6,,,,CC BY-SA 4.0 +4663,2,,4659,11-09-2018 06:24,,2,,"

To determine the state, you’ll have to measure it. As you've said, the no-cloning theorem forbids us from making copies, so we just have to work with the single copy we're given. If we restrict to projective measurements, there are no more than $N+1$ outcomes to the measurement. However, you want to distinguish $\binom{N+1}{2}$ different outcomes. Hence, it’s not possible with just one copy of the state.

+ +

Of course, one must mention the exception of $N=1$ but that is a triviality because you already know what state it is! This still leaves the case of $N=2$ not covered by my argument, which one should look at explicitly.

+ +

An alternative approach is as follows: imagine you could determine from a single copy what the values of $i$ and $j$ are. In that case, you could use that knowledge to produce as many copies as you like. In other words, you can perform quantum cloning. As you state in the question, this is impossible (except when all the states are orthogonal to each other, i.e. $N=1$).

+ +

For a fully rigorous answer, one strategy is to ask about how well one can distinguish $(|0\rangle+|1\rangle)/\sqrt{2}$ from $(|0\rangle+|2\rangle)/\sqrt{2}$. If one is to complete the specified task, one must certainly be able to do this sub-task. However, the optimal measurement to distinguish these two states is known as the Helstrom measurement. See my previous answer here for further details. Since the two states are not orthogonal, you derive that it is impossible to distinguish them perfectly.
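For these two specific states the Helstrom bound is easy to evaluate numerically. In this sketch of mine (not from the original answer) I use the standard formula that the optimal success probability for distinguishing two equiprobable pure states with overlap $c$ is $\frac{1}{2}\left(1 + \sqrt{1 - |c|^2}\right)$:

```python
import numpy as np

# |a> = (|0>+|1>)/sqrt(2) and |b> = (|0>+|2>)/sqrt(2) in the basis {|0>,|1>,|2>}
a = np.array([1, 1, 0]) / np.sqrt(2)
b = np.array([1, 0, 1]) / np.sqrt(2)

overlap = a @ b                                     # <a|b> = 1/2, not orthogonal
p_success = 0.5 * (1 + np.sqrt(1 - overlap ** 2))   # Helstrom bound, equal priors
print(overlap, p_success)                           # 0.5  ~0.933
```

Since the best achievable success probability is about $93\%$ rather than $100\%$, the two states cannot be distinguished perfectly from one copy.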

+",1837,,1837,,11-09-2018 07:34,11-09-2018 07:34,,,,0,,,,CC BY-SA 4.0 +4664,1,4692,,11-09-2018 10:55,,5,662,"

I have read the Wikipedia article which relates the polar decomposition to a complex number being split into its modulus and phase but this analogy isn't very intuitive to me.

+ +

In Nielsen and Chuang, the polar decomposition is especially important in proving results around fidelity. If I have states $\rho \in H_{R}$ and $\sigma \in H_{Q}$, Uhlmann's theorem shows that the best purification that preserves fidelity $F(\rho, \sigma)$ is some state in $R\otimes Q$ and gives us a method to find it. Set state $ m = \sum_i\vert i_R\rangle\vert i_Q\rangle$. Now

+ +

$\vert\psi\rangle = 1\otimes \sqrt{\rho}\vert m\rangle$

+ +

$\vert\phi\rangle = 1\otimes \sqrt{\sigma}V^{\dagger}\vert m\rangle$

+ +

gives the purification that preserves fidelity. Here, the polar decomposition of $\sqrt{\rho}\sqrt{\sigma}$ is $|\sqrt{\rho}\sqrt{\sigma}|V$.

+ +

While I can follow the proof in Nielsen and Chuang, I have no intuition about what the polar decomposition is doing here and why this particular purification preserves fidelity.

+ +

I'm looking for intuition only, so proofs are unimportant. Could someone help with simple example states and how they transform under different purifications and why the polar decomposition-based purification works best?

+",4831,,4831,,11-09-2018 11:01,11-08-2019 12:47,Intuitive role of the polar decomposition in proof of Uhlmann's theorem for fidelity,,3,8,,,,CC BY-SA 4.0 +4665,1,,,11-09-2018 17:35,,3,453,"

This is a question that must be asked a lot but is it possible? And if in theory possible, what are the challenges to do it, and what are the drawbacks?

+",4127,,,,,11-10-2018 21:46,Can we do adiabatic quantum computing with a quantum circuit model and how?,,3,0,,,,CC BY-SA 4.0 +4666,2,,4648,11-09-2018 17:49,,3,,"

I wanted to add a comment on Austin Flowers but it says I need 50 point reputations.

+ +

So essentially you need a low enough frequency in your superconducting circuit (4-8 GHz is Google's choice, to make use of established microwave spectroscopy tools). To get a low frequency, you need a high capacitance. To get a high capacitance, you need either:

+ +
    +
  1. a large-size capacitor (1 mm from center to center at Google), or
  2. +
  3. exotic technology such as a pair of meshed combs, but this will amplify decoherence.
  4. +
+ +

So making smaller qubits is limited by (in some hierarchical way):

+ +
    +
  1. the lack of cheap tools for working in a higher frequency region (above 8 GHz)
  2. +
  3. the inability to get to low enough frequencies without using larger capacitance (could this be mitigated by adjusting the properties of the inductor? I don't know)
  4. +
  5. the inability to get large capacitance without making the capacitor large [or] the inability to get large capacitance in small capacitors without increasing the noise.
  6. +
+ +

A simple way to put it is that they are limited by decoherence/noise, but there's other ways to improve the design which might make it possible to make qubits smaller without increasing the noise too much.

+ +

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

+ +

I have one final comment on the answer of Austin Fowler, which questions the validity of the original question by saying that a few million qubits can fit in a few square metres, so why do you want any smaller than that? That is an interesting point. In classical computing we keep thinking about wanting to make them smaller so that more gigs of RAM and more gigs of storage space can fit in our pocket or take up less space on the table, but quantum computers at present would only be ""supercomputers"" as Austin Fowler correctly pointed out. A few square meters isn't bad for a supercomputer.

+ +

However, it's not clear whether or not a few million qubits will be enough to do any useful, valuable, real-world computation, as Austin's series of Shor algorithm papers suggest (with error correction, which will definitely be needed to do anything useful, you will need billions of qubits). It is true that 100 qubits can't easily be simulated, in general, on a classical computer (people once said 25 qubits, then 30 qubits, then Haner & Steiger did 45 qubits with 500TB of RAM, then Sergio Boixo said 47 qubits in a 7x7 array, then IBM and Chinese groups simulated 60, then 70, on classical supercomputers, so let's just say 100 qubits for now). Simulating a fully controllable 100-qubit system will be interesting to study the physics of the system itself, and may provide insight into how to better engineer better quantum computers in the future, but ""simulating quantum computers"" is a very very tiny portion of what the world uses supercomputers for (and in this case one can argue that building quantum computers doesn't even help, since we're talking about simulating a quantum computer with a classical computer in the first place).

+ +

Most real-world HPC problems: weather modeling, stock market prediction, image processing for satellite data, astrophysics, etc. are not going to be solved with a few million qubits physical qubits error correcting a thousand logical qubits. If we need a billion qubits to outperform a classical computer on a real-world problem (I think we might need even more), then your square meter becomes 1000 square meters which is 0.1 hectares. 10 billion qubits would take up all the grass within a 400m running track, and this is then going to be too much effort to control with microwaves, to maintain in decent condition, and to power. ORNL's Titan is 400 square meters. If the quantum computer is allowed to be 1000 square meters (for 1 billion qubits), then let's allow the classical computer to be that big. Then if 1 billion was big enough to beat Titan, maybe you'll need 2 billion to beat Titan's bigger brother.

+ +

Hopefully there will be a cross-over point at some point, but I agree both with Austin (that we reached the point where there's many more important things to think about than just the size of the qubits) and with Nippons, who asked this question, because it does seem that we can use some size reduction for the qubits.

+",,user5062,,,,11-09-2018 17:49,,,,0,,,,CC BY-SA 4.0 +4667,2,,4665,11-09-2018 17:57,,5,,"

Adiabatic Quantum Computation is Equivalent to Standard Quantum Computation.

+ +

This paper proved the other direction, that adiabatic can simulate circuit model. The direction you ask, which is whether or not circuit model can simulate adiabatic, was proven much earlier. In the paper I linked at the top there is the sentence ""It is known that adiabatic computation can be efficiently simulated by standard quantum computers [9, 13].""

+ +

One of those papers [13], is basically where Adiabatic Quantum Computation was first introduced:

+ +

E. Farhi, J. Goldstone, S. Gutmann, J. Lapan, A. Lundgren, and D. Preda, A quantum adiabatic evolution algorithm applied to random instances of an NP-complete problem, Science, 292(5516):472–476, 2001.

+ +

So what that says is, that right from the beginning of Adiabatic model, people knew that circuit model could simulate it.

+",,user5062,491,,11-10-2018 21:46,11-10-2018 21:46,,,,6,,,,CC BY-SA 4.0 +4668,2,,4665,11-10-2018 06:09,,5,,"

Adiabatic Quantum Computation is simply the time-evolution of a Hamiltonian where the system is prepared in a particular initial state (the ground state) and the Hamiltonian varies slowly in time.

+ +

Simulating a Hamiltonian on a quantum computer is a standard problem. Making that Hamiltonian time varying doesn’t really make it any worse since you break it down into lots of little steps to simulate it anyway, and the slow variation means that it’s not a bad approximation to assume that the Hamiltonian is fixed in each of those little steps. Preparing the initial state is also not a problem. By design, it’s something that is easily prepared, such as the all zero state.
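As a minimal numerical sketch of this "many little fixed-Hamiltonian steps" idea (my own single-qubit illustration, not from the original answer): sweep $H(s) = -(1-s)X - sZ$ from $s=0$ to $s=1$, exponentiating the frozen Hamiltonian in each small step. Starting in the ground state $|+\rangle$ of $-X$, a slow sweep ends in the ground state $|0\rangle$ of $-Z$ with high fidelity.

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def step(H, psi, dt):
    # Apply exp(-i H dt) to |psi> via eigendecomposition (H is Hermitian)
    evals, V = np.linalg.eigh(H)
    return V @ (np.exp(-1j * evals * dt) * (V.conj().T @ psi))

T, n = 100.0, 2000                                   # slow total time, many steps
dt = T / n
psi = np.array([1, 1], dtype=complex) / np.sqrt(2)   # |+>, ground state of -X
for k in range(n):
    s = (k + 0.5) / n                                # Hamiltonian frozen per step
    psi = step(-(1 - s) * X - s * Z, psi, dt)

fidelity = abs(psi[0]) ** 2                          # overlap with |0>
print(fidelity)                                      # close to 1 for slow sweeps
```

Making the sweep faster (smaller $T$) degrades the fidelity, which is the adiabatic theorem at work.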

+ +

The challenge, as with any simulation, is that there’s a polynomial overhead compared to the original scheme. While that doesn’t change computational complexity, it can have a massive impact on the practicality of implementation. What you gain, on the other hand, is the ability to use fault-tolerant techniques.

+",1837,,,,,11-10-2018 06:09,,,,0,,,,CC BY-SA 4.0 +4669,1,4671,,11-10-2018 06:50,,2,214,"

NB - notation from Octave.

+ +

I understand that

+ +
+ket([0]) 
+
+ +

is the vector $(1,0)$ in an $(x,y)$ plane.

+ +

But when I try a ket with three numbers I get a column vector of dimension $(8,1)$. I assume that comes from $2^3=8$ ie there are $2^n$ states for $n$ variables in the ket.

+ +

But how is that understood in terms of a vector space? I have seen the algebra online but that has not really explained how the vector space can be envisioned. I am mainly trying to relate a column position to a physical characteristic or at least explain why that size would exist.

+ +
+octave:27> ket([0,1])
+ans =
+
+   0
+   1
+   0
+   0
+
+octave:28> ket([0,1,1])
+ans =
+
+   0
+   0
+   0
+   1
+   0
+   0
+   0
+   0
+
+
+",4489,,26,,11-10-2018 10:19,11-10-2018 10:19,"Why do $n$ inputs to a ket give a vector of dimension $(2^n,1)$?",,2,2,,,,CC BY-SA 4.0 +4670,2,,4669,11-10-2018 09:31,,1,,"

A classical bit has a state: either 0 or 1. A qubit also has a state: it could be in state $\lvert 0 \rangle$ or in state $\lvert 1 \rangle$ but it can also be in a linear combination of states, called superposition:

+ +

$$\lvert \psi \rangle = \alpha \lvert 0 \rangle + \beta \lvert 1 \rangle$$

+ +

Where $\alpha, \beta$ are complex numbers. This means that the state of the qubit is a vector in a 2-dimensional complex vector space! The special states $\lvert 0 \rangle$ and $\lvert 1 \rangle$ are called computational basis state, and form an orthonormal basis for this vector space. Just to clarify:

+ +
    +
  • orthonormal: two vectors are said to be orthonormal when they both are unit vectors, and they are orthogonal (therefore, taking the dot product of these two vectors you obtain 0). This is precisely the case with $\lvert 0 \rangle = \left[\begin{matrix} 1 & 0 \end{matrix}\right]^{T}$ and $\lvert 1 \rangle = \left[\begin{matrix} 0 & 1 \end{matrix}\right]^{T}$.

  • +
  • basis: a set of vectors in a vector space is called a basis if every other vector of that particular vector space can be written as linear combination of the basis set.

  • +
+ +

A two qubit system has $2^{2}=4$ computational basis states: $\lvert 00 \rangle, \lvert 01 \rangle, \lvert 10 \rangle, \lvert 11 \rangle$. A pair of qubits can exist in superpositions (linear combination) of these four states:

+ +

$$\lvert \psi \rangle = \alpha_{00} \lvert 00 \rangle + \alpha_{01} \lvert 01 \rangle + \alpha_{10} \lvert 10 \rangle + \alpha_{11} \lvert 11 \rangle$$

+ +

Note that $\lvert 00 \rangle = \left[\begin{matrix} 1 & 0 & 0 & 0 \end{matrix}\right]^{T}$, $\lvert 01 \rangle = \left[\begin{matrix} 0 & 1 & 0 & 0 \end{matrix}\right]^{T}$, $\lvert 10 \rangle = \left[\begin{matrix} 0 & 0 & 1 & 0 \end{matrix}\right]^{T}$, $\lvert 11 \rangle = \left[\begin{matrix} 0 & 0 & 0 & 1 \end{matrix}\right]^{T}$. You can easily verify they are orthonormal! The notation is $\lvert 0 0 \rangle = \lvert 0 \rangle \otimes \lvert 0 \rangle$, where the symbol $\otimes$ indicates the tensor product. From wikipedia:

+ +
+

The tensor product of (finite dimensional) vector spaces has dimension equal to the product of the dimensions of the two factors

+
+ +

In our two qubit system, $\dim(\lvert 00 \rangle) = \dim(\lvert 0 \rangle) \cdot \dim(\lvert 0 \rangle) = 2 \cdot 2 = 4$, this means $\lvert 00 \rangle$ is represented by a vector having 4 elements.
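A quick numerical check of both claims (a NumPy sketch of mine, not part of the original answer): building $\lvert 00 \rangle, \lvert 01 \rangle, \lvert 10 \rangle, \lvert 11 \rangle$ via the Kronecker product shows each has 4 elements, and their Gram matrix is the identity, i.e. they are orthonormal.

```python
import numpy as np

zero = np.array([1, 0])   # |0>
one = np.array([0, 1])    # |1>

# |00>, |01>, |10>, |11> via the tensor (Kronecker) product: each has 2*2 = 4 entries
basis = np.array([np.kron(a, b) for a in (zero, one) for b in (zero, one)])
print(basis.shape)        # (4, 4): four basis vectors, each of dimension 4
print(basis @ basis.T)    # Gram matrix = identity -> orthonormal set
```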

+ +

A three qubit system has $2^{3}=8$ computational basis states. Same arguments can be made about $\lvert 0 0 0 \rangle = \lvert 0 \rangle \otimes \lvert 0 \rangle \otimes \lvert 0 \rangle$, where you can verify that $\lvert 0 0 0 \rangle$ is represented by a vector having 8 elements.

+",4504,,4504,,11-10-2018 09:44,11-10-2018 09:44,,,,0,,,,CC BY-SA 4.0 +4671,2,,4669,11-10-2018 09:44,,3,,"

First, read my previous answer on what the bra-ket notation means. Now proceed:

+ +

In your post, ket([0]) stands for $|0\rangle$, ket([0,1]) stands for $|0\rangle\otimes |1\rangle = |01\rangle$ and ket([0,1,1]) stands for $|0\rangle\otimes |1\rangle \otimes |1\rangle = |011\rangle$.

+ +

The computational basis states of a single qubit are $|0\rangle$ and $|1\rangle$.

+ +

The computational basis states of a $2$-qubit system are $|00\rangle, |01\rangle, |10\rangle$ and $|11\rangle$. Basically, take all $2$-digit permutations of $0$ and $1$. That is the first qubit can be in state $|0\rangle$ and second can be in $|0\rangle$ too; the first can be in $|0\rangle$ while the second is in $|1\rangle$ and so on. So as you see, a $2$-qubit system resides in a $4$-dimensional complex vector space $\Bbb C^4$ (as it has four basis states).

+ +

Now you might be confused because you think a $2$-qubit system can also exist in some state like $(a|0\rangle+b|1\rangle)\otimes (c|0\rangle + d|1\rangle)$ where $a,b,c,d \in \Bbb C$. Sure, but even then you can express such a state as a linear combination of the elements of $\{|00\rangle, |01\rangle, |10\rangle, |11\rangle\}$ i.e. the computational basis set. The size of the basis set is what defines the dimension of the vector space, that is ""there are $2^n$ states for $n$ variables in the ket"".

+ +

The column vectors you speak of are just alternative representations of $|00\rangle, |01\rangle, |10\rangle$ and $|11\rangle$. Say you could consider $|00\rangle$ to be $\left[\begin{matrix} 1 & 0 & 0 & 0 \end{matrix}\right]^{T}$, $|01\rangle$ to be $\left[\begin{matrix} 0 & 1 & 0 & 0 \end{matrix}\right]^{T}$, $|10\rangle$ to be $\left[\begin{matrix} 0 & 0 & 1 & 0 \end{matrix}\right]^{T}$ and $|11\rangle$ to be $\left[\begin{matrix} 0 & 0 & 0 & 1 \end{matrix}\right]^{T}$. You're basically mapping the computational basis states of a $2$-qubit system to the standard basis of $\Bbb R^4$.

+ +

Now can you do a similar analysis for a $3$-qubit system?
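To close the loop with the Octave output in the question, here is a NumPy sketch (the helper `ket` below is my own stand-in for illustration, not Octave's function) showing why ket([0,1,1]) is the $8$-dimensional vector with a single $1$: the bit string $011$ is read as the binary index $3$ of a standard basis vector of $\Bbb C^{2^3}$.

```python
import numpy as np

def ket(bits):
    """Tensor product of single-qubit basis states (illustrative helper)."""
    v = np.array([1])
    for b in bits:
        v = np.kron(v, np.eye(2)[b])   # |0> = [1, 0], |1> = [0, 1]
    return v

v = ket([0, 1, 1])
print(v)                 # [0. 0. 0. 1. 0. 0. 0. 0.] - length 2**3 = 8
print(int(v.argmax()))   # 3, i.e. binary 011
```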

+",26,,26,,11-10-2018 09:53,11-10-2018 09:53,,,,1,,,,CC BY-SA 4.0 +4672,2,,4665,11-10-2018 09:47,,1,,"

The first difficulty is to answer whether we can convert all conventional circuit gates into adiabatic (or simply geometric) ones; this has been done theoretically and recently experimentally as well (ref: https://arxiv.org/abs/1304.5186). So to answer the first question: yes, it is possible. Since this correspondence holds, we need to tackle the problems arising in the evolution of such gates. The challenges and drawbacks could be:

+ +

1) Experimentally constructing such systems which can be easily controlled classically (because the driving parameter space in an adiabatic scenario is mostly a classical parameter such as an external magnetic field) and adiabatically. These are now quite possible and easily exploitable and most such systems are optical cavities which interact with spin systems inside.

+ +

2) The time complexity and speed. This is the biggest issue with adiabatic evolution, it is very slowly evolving. This is the biggest price to pay w.r.t other models.

+ +

Advantage: very fault tolerant. The speed of the evolution and the path don't really matter once a particular operation is targeted; evolving slower or faster doesn't matter in the adiabatic domain. For operations, even paths in the driving parameter space don't matter, up to a phase.

+",4889,,,,,11-10-2018 09:47,,,,0,,,,CC BY-SA 4.0 +4673,1,4682,,11-10-2018 10:53,,6,351,"

In superdense coding, you can use one qubit to control the Hilbert space of two qubits and steer it into 4 mutually orthogonal states, so that measurement of both qubits together will not have a probabilistic outcome.

+ +

I think the idea is cool. I want to understand a more general question that arises from this result.

+ +

What if I have 100 qubits (or N qubits, if you like), entangled by a neutral party, and 99 are sent to my friend and 1 is sent to me. How much control do I have over the composite quantum state? How many mutually orthogonal states can one steer the global state into?

+ +

Thanks.

+ +

EDIT:

+ +

Thank you for your interest in this question. I can't seem to figure out who is right, so let's consider a slightly different question.

+ +

There are a total of 5 qubits. I have 2, and my friend has 3. Clearly the upper bound of classical bits I can send to my friend is 5. But is it possible to find 5 unitary operations I can perform on my two qubits to steer the state into 5 orthogonal states?

+ +

If you can answer that, then how does this generalize to N and M qubits?

+ +

Hopefully this comes closer to the real question I want to ask.

+",1867,,55,,10/19/2021 15:32,10/19/2021 15:32,Controlling high-dimensional Hilbert spaces with a single qubit,,3,8,,,,CC BY-SA 4.0 +4674,2,,4673,11-10-2018 12:54,,0,,"

It seems to me that the question tries to carry the analogy of bipartite entanglement distribution over to an $N$-partite distribution, which are entirely different cases. When explaining the effect of one entity in a system entangled with more than one other constituent, various factors have to be taken into account, such as entanglement witnesses, which might (in their own sense) allow a measure of entanglement to be distributed over many parties, and, mainly, the monogamous nature of quantum entanglement (see the CKW inequality).

+ +

To answer the question precisely, one first has to define which class the composite entangled state belongs to. To date, multipartite entanglement can be addressed only for a few specific states, most broadly the GHZ states or W-states; a general framework to answer this question is not yet known. These two classes of states saturate the CKW inequality at its two extremes and hence can be formulated in a quantifiable manner in protocols.

+ +

Especially for GHZ states, this question can be answered, and it is definitely possible, as is done in various protocols of multiparty secret sharing, where GHZ states are distributed and the members later have to distinguish the resulting orthogonal states by appropriate projectors.

+ +

No mechanism, in general, is known for your query, but a particular class of states can make this possible.

+",4889,,,,,11-10-2018 12:54,,,,0,,,,CC BY-SA 4.0 +4675,2,,4673,11-10-2018 14:20,,6,,"

Let me call you A and your friend B.

+ +

You initially share a state $|\psi\rangle$. W.l.o.g., we can write this in its Schmidt decomposition: +$$ +|\psi\rangle = \sum_{i=1}^d \lambda_i |i\rangle_A|i\rangle_B. +$$ +You are now asking how many (orthogonal) states A can implement by acting with a unitary on their side of the state.

+ +

This number is upper bounded by the total number of basis states involved on $B$'s side, which is $d$, times the dimension of the space of Alice, $d_A$. So one can prepare at most $d_Ad$ states.

+ +

In your case, $d_A=d=2$, so you can prepare at most $4$ orthogonal states. This is exactly achieved if your state is maximally entangled ($\lambda_1=\lambda_2=1/\sqrt2$); this is dense coding.

+ +

(You can easily see that the same idea as in dense coding also applies if $d_A$ is larger. Then, you can accordingly encode more states, but A also has to send a larger $d_A$-dimensional system -- the amount of saving in the information you send is always given by $d$.)

+ +

So the bottom line is that the number of states you can steer the system to is always given by (i) the dimension of your system and (ii) the amount of entanglement, but it is entirely independent of the system size of B.
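For concreteness, here is a small numpy sketch of the role of the Schmidt coefficients (the reshape assumes the state vector is ordered with A's index first): the number of nonzero singular values is the $d$ above.

```python
import numpy as np

def schmidt_coefficients(psi, dA, dB):
    # Schmidt coefficients of a bipartite pure state are the singular
    # values of its coefficient matrix (rows: A basis, columns: B basis).
    return np.linalg.svd(psi.reshape(dA, dB), compute_uv=False)

# maximally entangled 2-qubit state (|00> + |11>)/sqrt(2)
psi = np.array([1, 0, 0, 1]) / np.sqrt(2)
lam = schmidt_coefficients(psi, 2, 2)
print(lam)  # both equal 1/sqrt(2): d = 2, so at most d_A * d = 4 orthogonal states
```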

+",491,,55,,11/18/2019 10:19,11/18/2019 10:19,,,,5,,,,CC BY-SA 4.0 +4676,1,4678,,11-10-2018 19:35,,1,630,"

How do I calculate the matrix representation of this part of a teleportation circuit? It should be an $8 \times 8$ matrix.

+

+",5065,,55,,5/22/2021 16:22,5/22/2021 16:22,What's the matrix representation of this 3-qubit CZ circuit?,,1,1,,,,CC BY-SA 4.0 +4677,1,,,11-10-2018 20:54,,4,339,"

I am trying to formulate the calculation of conditional min-entropy as a semidefinite program. However, so far I have not been able to do so. Different sources formulate it differently. For example, in this highly influential paper, it has been formulated as:

+ +

$$H_{\text{min}}(A|B)_\rho = - \underset{\sigma_B}{\text{inf}} \ D_{\infty}(\rho_{AB} \,\|\, \mathrm{id}_A \otimes \sigma_B),$$
where $\rho_{AB}$ is a state on $\mathcal{H}_A \otimes \mathcal{H}_B$, $\sigma_B$ is a state on $\mathcal{H}_B$, and
$$D_{\infty}(\tau \| \tau') = \text{inf} \{\lambda \in \mathbb{R}: \tau \leq 2^{\lambda} \tau' \}.$$

+ +

How do I formulate it into a semidefinite program? It is possible as is mentioned in this lecture.

+ +

A possible SDP program is given in Watrous's lecture:

+ +

$$\text{maximize:}\quad \langle \rho, X \rangle$$
$$\text{subject to:}$$
$$\operatorname{Tr}_{\mathcal{X}}(X) = \mathbb{1}_{\mathcal{Y}}$$
$$X \in \text{Pos}(\mathcal{X} \otimes \mathcal{Y})$$

+ +

How do I write it in CVX or any other optimization system?

+",2403,,55,,8/15/2022 12:27,8/15/2022 12:27,How to calculate the conditional min-entropy via a semidefinite program?,,1,2,,,,CC BY-SA 4.0 +4678,2,,4676,11-10-2018 20:59,,5,,"

One way to do it is to build a sort of quantum IF statement. In quantum computing you have projector operators telling you whether a qubit is $0$ or $1$:
$$ P_0 = \begin{pmatrix}
1 & 0 \\
0 & 0
\end{pmatrix} $$

+ +

$$ P_1 = \begin{pmatrix} +0 & 0 \\ +0 & 1 +\end{pmatrix} $$

+ +

Then we have the $Z$ gate:

+ +

$$ Z= \begin{pmatrix} +1 & 0 \\ +0 & -1 +\end{pmatrix} $$

+ +

To build the unitary for your controlled operation, we take
$$ CZ = P_0 \otimes I \otimes I + P_1 \otimes I \otimes Z, $$

+ +

with $\otimes$ the tensor product. That means: if your first qubit is in the $|0\rangle$ state, we apply the identity; otherwise we apply the $Z$ operator to the third qubit.
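As a quick sanity check, the construction can be carried out numerically with numpy (a sketch; basis states are indexed so that $|q_1 q_2 q_3\rangle$ corresponds to the integer $q_1 q_2 q_3$ read in binary):

```python
import numpy as np

I = np.eye(2)
P0 = np.diag([1.0, 0.0])
P1 = np.diag([0.0, 1.0])
Z = np.diag([1.0, -1.0])

# control on qubit 1, Z applied to qubit 3, identity on qubit 2
CZ = np.kron(P0, np.kron(I, I)) + np.kron(P1, np.kron(I, Z))

assert np.allclose(CZ @ CZ.conj().T, np.eye(8))  # the result is unitary
# |101> (index 5) acquires a -1 phase, |100> (index 4) is untouched
print(CZ[5, 5], CZ[4, 4])  # -1.0 1.0
```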

+",4127,,,,,11-10-2018 20:59,,,,0,,,,CC BY-SA 4.0 +4679,1,,,11-11-2018 09:14,,5,135,"

I know that if you have a circuit $U$ that transforms $A → B$, it's possible to construct an inverse, ${U\dagger}(B) → A$. Is it also possible to transform the states with $T_{i,o}$ so that I can use the original circuit to do the reverse computation? Like so:

+ +

$$ +U(T_o(B)) → T_i(A) +$$

+ +

For example, I know the Toffoli gate is its own inverse. So $T_{i,o}$ can be the identity function:

+ +

$$U_{\operatorname{Tof}}(A) → B$$ +$$U_{\operatorname{Tof}}(I(B)) → I(A)$$

+ +

I would like to know if some reasonable $T$ functions exists when $U$ is a universal circuit.

+ +

I'm a quantum computing newbie and coming at this question more from a physics standpoint, so I'm not quite sure how to ask this most clearly. Suggestions are appreciated. Links to related research would also be great.

+ +

EDIT: $T$ may differ for input and output states.

+",5071,,5071,,11/13/2018 4:44,11/13/2018 4:44,Reversible computation without inverting the circuit,,2,4,,,,CC BY-SA 4.0 +4682,2,,4673,11-11-2018 11:09,,4,,"
+

There are a total of 5 qubits. I have 2, and my friend has 3. Clearly + the upper bound of classical bits I can send to my friend is 5. But is + it possible to find 5 unitary operations I can perform on my two + qubits to steer the state into 5 orthogonal states.

+
+ +

You can simply set up two standard superdense coding runs. So, this starts with each qubit of yours being maximally entangled with a qubit of your friend's. You apply the usual superdense coding encoding to each qubit, send them over, and your friend can extract 2 bits from each, giving a total of 4 bits.

+ +

You know that you cannot do better than this because of the idea of Schmidt coefficients. Whatever (pure) quantum state you and your friend hold, there is a unitary that your friend can apply that changes it into a pure state on just two of his/her qubits.

+ +
+

If you can answer that, then how does this generalize to N and M + qubits?

+
+ +

In general, if you have $n$ qubits, and your friend has $m>n$ qubits, you can communicate $2n$ classical bits by sending the $n$ qubits.

+ +

If you want to go up to different dimensions of spin, and on your side, you have a spin of dimension $d$, and your friend has a spin of dimension $d'>d$, I believe you can always send $2\log_2(d)$ classical bits. Again, you set up a $d$-dimensional maximally entangled state between the two spins, and you perform the correct unitary operations. If memory serves, these are comprised of the powers $0$ to $d-1$ of the two unitaries $\tilde X$ and $\tilde Z$. These are both $d^{th}$ roots of the identity matrix, but in different bases. We have
$$
\tilde Z=\sum_{i=0}^{d-1}\omega^i|i\rangle\langle i|\qquad \tilde X=\sum_{i=0}^{d-2}|i\rangle\langle i+1|+|d-1\rangle\langle 0|.
$$
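These two operators are easy to check numerically (a sketch assuming $\omega=e^{2\pi i/d}$, a primitive $d$-th root of unity, which the formula above leaves implicit):

```python
import numpy as np

d = 3
omega = np.exp(2j * np.pi / d)  # assumed: omega is a primitive d-th root of unity
Zt = np.diag(omega ** np.arange(d))
Xt = np.zeros((d, d), dtype=complex)
for i in range(d - 1):
    Xt[i, i + 1] = 1
Xt[d - 1, 0] = 1

# both operators are d-th roots of the identity ...
assert np.allclose(np.linalg.matrix_power(Zt, d), np.eye(d))
assert np.allclose(np.linalg.matrix_power(Xt, d), np.eye(d))
# ... and obey the Weyl commutation relation Xt Zt = omega * Zt Xt
assert np.allclose(Xt @ Zt, omega * Zt @ Xt)
```

For $d=2$ these reduce to the usual Pauli $X$ and $Z$.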

+",1837,,,,,11-11-2018 11:09,,,,0,,,,CC BY-SA 4.0 +4683,2,,4679,11-11-2018 14:32,,2,,"

You might like to look up about quantum cellular automata. These are systems where you can repeatedly apply the same global unitary operation to generate the circuit that you want. The circuit is specified by the initial (product) state that is operated on. In that sense, you achieve the inverse using the same sequence of unitaries, just by changing the initial state in order to specify the inverse gate sequence. Thus, T will just be a sequence of bit flips.

+ +

This paper might be helpful: https://arxiv.org/abs/quant-ph/0502143

+ +

I guess that, overall, this only gives +$$ +U(T(B))=A, +$$ +rather than $U(T(B))=T(A)$, as desired.

+ +

There is a special class of unitaries (aside from the obvious $U=U^\dagger$) that can be inverted in the right sort of way. Imagine that $U$ is created by Hamiltonian evolution where the interactions are of the form $(XX+YY)$ between pairs of qubits. Let us further restrict to the case where the interactions are bipartite in nature, meaning that there is a consistent two-colouring of qubits such that every pair of qubits that are interacting under such a Hamiltonian term have different colours. (A one-dimensional chain with nearest-neighbour interactions, for example.) In this case, $T$ can be a $Z$ gate on every qubit of one particular colour, because $Z_1e^{-i(XX+YY)t}Z_1=e^{i(XX+YY)t}$. I don't know if any Hamiltonians which are universal for quantum computation satisfy a property such as this (certainly, they can be nearest-neighbour in a 1D chain).
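The sign-flip identity at the heart of this construction is easy to verify numerically for a single $(XX+YY)$ term (a numpy sketch; the matrix exponential is computed through the eigendecomposition of the Hermitian generator):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0 + 0j, -1.0])
I = np.eye(2)

H = np.kron(X, X) + np.kron(Y, Y)   # one (XX + YY) interaction term
Z1 = np.kron(Z, I)                  # Z on the first qubit only

def evolve(H, t):
    # exp(-i H t) for Hermitian H via its eigendecomposition
    w, v = np.linalg.eigh(H)
    return v @ np.diag(np.exp(-1j * w * t)) @ v.conj().T

t = 0.37  # arbitrary evolution time
# Z1 H Z1 = -H, hence Z1 e^{-iHt} Z1 = e^{+iHt}
assert np.allclose(Z1 @ evolve(H, t) @ Z1, evolve(H, -t))
```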

+",1837,,1837,,11-12-2018 13:21,11-12-2018 13:21,,,,5,,,,CC BY-SA 4.0 +4684,1,4690,,11-11-2018 17:33,,12,459,"

We know how a quantum correlation setup can help us with a better probability of winning games like the CHSH. But what is the upper bound that physics can allow? Is it the quantum correlation setup? Or can we exceed them in general sense to get much stronger correlations?

+",4889,,55,,11-11-2022 07:07,12-05-2022 05:20,Are correlations stronger than those allowed by quantum mechanics possible?,,3,4,,,,CC BY-SA 4.0 +4685,2,,4677,11-11-2018 17:51,,2,,"

I think I have an answer. The following should be the CVX code for one of the formulations found in this link.

+
cvx_begin sdp
variable X(2, 2) hermitian

minimize(trace(id' * X)) % id is eye(2)
subject to

kron(id, X) >= rho_ab % rho_ab is the (bipartite) density matrix of systems A and B
X >= 0

cvx_end
+

The optimal value found in this program is $$\text{optval} = 2^{-H_{\text{min}}(A|B)},$$ consistent with the base-$2$ definition of $D_\infty$ above. So a simple calculation solves for ${H_{\text{min}}(A|B)}$. It turns out to be pretty simple at the end, given that the theoretical foundation leading up to this solution is not quite straightforward.

+",2403,,9854,,07-02-2020 15:00,07-02-2020 15:00,,,,1,,,,CC BY-SA 4.0 +4687,2,,4664,11-12-2018 09:50,,3,,"

Purifications play an important role in the theory of density matrices (or, more generally, quantum states) because they provide a geometric tool in the explanation and description of algebraic relations.

+ +

(I'll be following here Bengtsson and Życzkowski's reasoning in the derivation of the fidelity formula (section 9.4).)

+ +

A positive $N \times N$ matrix $\rho$ can be (nonuniquely) written as
$$\rho = AA^{\dagger},$$
where $A$ is an $N \times M$ matrix. The matrix $A$ can be viewed as a vector in $\mathbb{C}^N \otimes \mathbb{C}^M$, i.e. a purification. Of course, the purification is not unique, and any matrix of the form
$$A' = A U,$$
where $U$ is an $M\times M$ unitary matrix, is an equally good purification.

+ +

Now, in physics, distance functions between points in various spaces can be frequently obtained as the solutions of variational principles, or optimization problems. This is the method adopted by Bengtsson and Życzkowski to define the fidelity function:

+ +

Since multiplication by a unitary matrix, $A' = A U$, doesn't change the matrix norm, the various purifications of a density matrix can be viewed as points on a sphere. Given purifications of two density matrices, $\rho_1 = A_1A_1^{\dagger}$ and $\rho_2 = A_2A_2^{\dagger}$, a reasonable distance function can be taken as the spherical geodesic distance between their purification vectors:
$$\cos d = \frac{1}{2} \operatorname{Tr}(A_1A_2^{\dagger} + A_2A_1^{\dagger}),$$
where $d$ is the angle between the two directions $A_1$, $A_2$. Now, since the above formula depends on the purification, it is reasonable to define the fidelity as the maximal value of $\cos d$ (i.e., the minimal distance) as we run over all the purifications:
$$F(\rho_1, \rho_2) = \max_{U, V} \frac{1}{2} \operatorname{Tr}(A_1 U V^{\dagger} A_2^{\dagger} + A_2 V U^{\dagger} A_1^{\dagger})$$

+ +

The solution of the above optimization problem, which depends only on the density matrices and not on the specific purifications (as explicitly shown by Bengtsson and Życzkowski), is exactly the fidelity function.

+ +

Uhlmann uses purifications extensively in his work; one of the most important objects he defined using them is Uhlmann's phase, which is the generalization of Berry's geometric phase to mixed states (and which has numerous applications in quantum computing). Please see the following review of his.

+",4263,,55,,11-12-2018 15:13,11-12-2018 15:13,,,,4,,,,CC BY-SA 4.0 +4688,2,,4540,11-12-2018 14:51,,2,,"
+

Did I get the basic idea of the attack correct? If wrong, ignore the rest of the post!

+
+ +

Mostly. The way you describe it, you will indeed get the state $\lvert\phi\rangle$, but you will not be able to perform the unitary transformation $I-\lvert\phi\rangle\langle\phi\rvert$. But in step 3, when you run Grover on state $\lvert\phi\rangle$, you actually need to apply $I-\lvert\phi\rangle\langle\phi\rvert$ as part of Grover's algorithm. The reason is the following: Usually, Grover is presented as an algorithm that searches a value $x\in\{0,1\}^n$ that satisfies a predicate $P$. In this case, Grover's algorithm first initializes the state to $\lvert\phi\rangle:=\sum_{x\in\{0,1\}^n}2^{-n/2}\lvert x\rangle$. And, during its main loop, it applies the flip operator $I-\lvert\phi\rangle\langle\phi\rvert$. That operator is quite easy to construct if $\lvert\phi\rangle=\sum_{x\in\{0,1\}^n}2^{-n/2}\lvert x\rangle$, so it is normally not mentioned as a requirement of the algorithm. However, instead of searching $x\in\{0,1\}^n$, you could search for $x\in X$ for some set $X$. (After all, there is nothing special about the bitstrings of length $n$.) Then you need to change the description of Grover: The initial state will be $\lvert\phi\rangle=\sum_{x \in X}\lvert X\rvert^{-1/2}\lvert x\rangle$, and we need to apply $I-\lvert\phi\rangle\langle\phi\rvert$ during the main loop. For many sets $X$ (e.g., numbers modulo $N$) it will be fairly easy to construct $\lvert\phi\rangle$ and $I-\lvert\phi\rangle\langle\phi\rvert$. However, in the general case, it might be that it is difficult to construct either of them. In your description of the algorithm, we have a similar situation. Namely, $X=\{x:H(x)=c\}$ for some fixed $c$. But for that set $X$, there is no way to construct $\lvert\phi\rangle$ or $I-\lvert\phi\rangle\langle\phi\rvert$. That's why your algorithm doesn't work (but it still gives the right idea).
Instead, in the paper you cite, both $\lvert\phi\rangle$ and $I-\lvert\phi\rangle\langle\phi\rvert$ are provided by special oracles constructed just for this purpose. Using those oracles, you can then run Grover's algorithm on the state $\lvert\phi\rangle$.

+ +
+

How many elements are there in the superposition $\lvert\phi\rangle$ + after we commit to a certain $c$?

+
+ +

This will depend on the parameters you choose. We expect roughly $M/N$ if $M$ is the size of the domain and $N$ the size of the range of $H$. For things to be interesting, you should choose $M\gg N$ so that there will be exponentially many elements in the superposition (and at least $2^{\lvert m\rvert}$ so that every message is possible). However, I assume the reason you are wondering about this is that you think the number of elements should be small for Grover to work, which is not the case; see below:

+ +
+

The speed of Grover search - this is the main problem and I'm not sure how their trick actually works. Wouldn't the computational complexity be the same as trying to guess a pre-image for a given output of the hash function since one has to search over all the u? In this case, where is the advantage?

+
+ +

Here seems to lie a misconception. Remember my explanations about Grover's algorithm in the beginning of this answer. Grover searches some $x\in X$, satisfying some predicate $P$. In our case $X$ can be huge ($M/N$ elements) because it is the set of all preimages of $c$. But that does not matter. Recall that the original Grover works on $\{0,1\}^n$ which is also huge, but works fast as long as we are searching for some $x\in\{0,1\}^n$ that has some common property $P$. For example, if we search using Grover for $x\in\{0,1\}^n$ that satisfies $33\mid x$ (the predicate $P$), then there are many $x$'s that satisfy $P$: every 33rd $x$ satisfies it! And the runtime will be something like $\sqrt{33}$. So, as a general rule, if a predicate $P$ is satisfied by every $i$-th element of $X$, then Grover's algorithm that searches for $x\in X$ satisfying $P$ takes about $\sqrt i$ steps. Here it does not matter what set $X$ is, or how big it is! (As long as we have a way to construct $\lvert\phi\rangle$ and $I-\lvert\phi\rangle\langle\phi\rvert$ for this set $X$.) Now, in the setting you describe, $X=\{x:H(x)=c\}$. And $P$ is the predicate saying that $x$ starts with $m'$. To analyze the runtime of Grover's algorithm in this case, we don't care about $X$, but we do need to know how often an element satisfies $P$. That is easy: Every $2^{\lvert m'\rvert}$-th element does. So the runtime of Grover will be $O(\sqrt{2^{\lvert m'\rvert}})$. This is a problem if $m'$ is long, but if $m'$ is, e.g., just a bit, then this works fine. For example, if we use the hash function to commit to one-bit messages, we can open the commitment to any value of $m'$.
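This scaling is easy to check with the standard closed-form success probability of Grover's algorithm, $\sin^2((2k+1)\theta)$ with $\theta=\arcsin\sqrt p$, where $p$ is the fraction of elements satisfying $P$ (a sketch; the choice $\lvert m'\rvert=10$ bits is arbitrary):

```python
import numpy as np

def grover_success(p, k):
    # success probability after k Grover iterations when a fraction p
    # of the search space is marked
    theta = np.arcsin(np.sqrt(p))
    return np.sin((2 * k + 1) * theta) ** 2

m_len = 10                  # |m'| = 10 bits to force, so p = 2**-10
p = 2.0 ** -m_len
k_opt = int(np.pi / (4 * np.arcsin(np.sqrt(p))))  # ~ (pi/4) sqrt(2**|m'|)
print(k_opt, grover_success(p, k_opt))  # about 25 iterations, success probability ~ 1
```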

+ +

If you want an example where $m'$ is longer, then you need to vary the construction. Basically, you commit to every bit individually and concatenate the commitments. Then each commitment can be broken using the method described above, and you need to run the algorithm $\lvert m'\rvert$ times.

+",5058,,,,,11-12-2018 14:51,,,,4,,,,CC BY-SA 4.0 +4689,2,,4664,11-12-2018 15:10,,3,,"

Short version

+ +

Consider the two following observations:

+ +
    +
  1. Given a state $\rho$, the problem of finding purifications of $\rho$ is equivalent to that of finding matrices $A$ such that $\rho=AA^\dagger$. The purifications of $\rho$ are then the vectorisations of these $A$ (i.e. a vector $\Psi$ is a purification of $\rho$ iff its matrix of coefficients $\Psi_{ij}$ satisfies $\Psi\Psi^\dagger=\rho$). In this notation, the overlap between two purifications $A,B$ is written as $\operatorname{Tr}(A^\dagger B)$.

  2. We want to find the value of $\operatorname{Tr}|\sqrt\rho\sqrt\sigma|$. As shown in this other answer, this is equivalent to finding the unitary $U$ maximising $\lvert\operatorname{Tr}(U\sqrt\rho\sqrt\sigma)\rvert$, and the unitary achieving this maximum turns out to equal $V^\dagger$ where $V$ is the unitary in the polar decomposition of $\sqrt\rho\sqrt\sigma$.
+ +

Observation (2) above tells us why finding the fidelity reduces to a maximisation problem, and how the polar decomposition (or more generally, the singular values) enters into it, while observation (1) tells us why we can understand the terms in this maximisation as overlaps of purifications of the states.

+ +

Consider then the $U$ such that $F(\rho,\sigma)=\operatorname{Tr}(U\sqrt\rho\sqrt\sigma)$, and define $A\equiv U\sqrt\rho$ and $B=\sqrt\sigma$. Then $F(\rho,\sigma)=\operatorname{Tr}(AB^\dagger)$, and thus $F(\rho,\sigma)$ equals the overlap between the purification of $\rho$ corresponding to $A$ and the purification of $\sigma$ corresponding to $B$.
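Both observations can be checked numerically; the sketch below draws two random states, computes $\operatorname{Tr}|\sqrt\rho\sqrt\sigma|$ as the sum of singular values, and verifies that it is attained by $\operatorname{Tr}(U\sqrt\rho\sqrt\sigma)$ with $U$ taken from the polar decomposition:

```python
import numpy as np

rng = np.random.default_rng(0)

def sqrtm_psd(rho):
    # square root of a positive semidefinite matrix via its eigendecomposition
    w, v = np.linalg.eigh(rho)
    return v @ np.diag(np.sqrt(np.clip(w, 0, None))) @ v.conj().T

def random_density(d):
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = A @ A.conj().T
    return rho / np.trace(rho)

rho, sigma = random_density(3), random_density(3)
M = sqrtm_psd(rho) @ sqrtm_psd(sigma)

u, s, vh = np.linalg.svd(M)
F = s.sum()                  # Tr|M| = sum of the singular values of M
U = (u @ vh).conj().T        # adjoint of the unitary in the polar decomposition
assert np.isclose(F, np.trace(U @ M).real)
print(F)
```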

+ +

Detailed version

+ +

The starting observation is that there are two ""natural"" ways to write down $\sqrt\rho\sqrt\sigma$ (or any other product of two normal operators): using their spectral decompositions, and using the singular value decomposition of their product: +$$\sqrt\rho\sqrt\sigma=\sum_{jk} \sqrt{\lambda_j\mu_k}\lvert\lambda_j\rangle\!\langle\lambda_j\rvert \mu_k\rangle\!\langle \mu_k\rvert += \sum_m s_m \lvert s_m^L\rangle\!\langle s_m^R\rvert,$$ +where $\rho=\sum_j\lambda_j\lvert\lambda_j\rangle\!\langle\lambda_j\rvert$ and $\sigma=\sum_k\mu_k\lvert\mu_k\rangle\!\langle\mu_k\rvert$ are the spectral decompositions of $\rho$ and $\sigma$, and I denoted with $s_m$ the singular values of $\sqrt\rho\sqrt\sigma$, and with $\lvert s_m^{L(R)}\rangle$ the left (right) singular vectors of $\sqrt\rho\sqrt\sigma$. +Note that using these definitions, we have +$\lvert\sqrt\rho\sqrt\sigma\rvert=\sum_m s_m \lvert s_m^R\rangle\!\langle s_m^R\rvert$ (if using the definition $\lvert A\rvert\equiv\sqrt{A^\dagger A}$, otherwise replace $R$ with $L$ if you want to define $\lvert A\rvert\equiv\sqrt{AA^\dagger}$), +and thus $\operatorname{tr}\lvert\sqrt\rho\sqrt\sigma\rvert=\sum_m s_m$.

+ +

Let us now denote with $\lvert\psi_\rho\rangle$ and $\lvert\psi_\sigma\rangle$ a pair of purifications of $\rho$ and $\sigma$. These can be written in general as +$$\lvert\psi_\rho\rangle=\sum_k \sqrt{\lambda_k}\lvert\lambda_k\rangle\otimes\lvert u_k\rangle, \\ +\lvert\psi_\sigma\rangle=\sum_k \sqrt{\mu_k}\lvert\mu_k\rangle\otimes\lvert v_k\rangle, +$$ +for arbitrary orthonormal bases $\{u_k\}_k, \{v_k\}_k$. We can then write their overlap as +$$\langle\psi_\rho\rvert\psi_\sigma\rangle = \sum_{jk}\sqrt{\lambda_j\mu_k} +\langle\lambda_j\rvert\mu_k\rangle\langle u_k\rvert v_k\rangle = +\sum_{jk}\langle\lambda_j\rvert\sqrt\rho\sqrt\sigma\lvert\mu_k\rangle \langle u_j\rvert v_k\rangle, +$$ +where I exploited the fact that $\sqrt\rho\lvert\lambda_j\rangle=\sqrt{\lambda_j}\lvert\lambda_j\rangle$ and +$\sqrt\sigma\lvert\mu_k\rangle=\sqrt{\mu_k}\lvert\mu_k\rangle$. +Using the singular value decomposition of $\sqrt\rho\sqrt\sigma$ we thus get

+ +

\begin{align}\langle\psi_\rho\rvert\psi_\sigma\rangle &= \sum_{jkm} +s_m \langle\lambda_j\rvert s_m^L\rangle\!\langle s_m^R\rvert\mu_k\rangle \langle u_j\rvert v_k\rangle \\ +&= \sum_m s_m \langle s_m^R\rvert + \Bigg(\underbrace{\sum_k \lvert \mu_k\rangle\!\langle \bar{v}_k\rvert }_{U_1}\Bigg) + \Bigg(\underbrace{\sum_j \lvert\bar{u}_j\rangle\!\langle \lambda_j\rvert}_{U_2}\Bigg) + \lvert s_m^L\rangle, +\end{align} +where I denoted with $\lvert \bar{u}_j\rangle$ the complex conjugate vector of $\lvert u_j\rangle$, so that $\langle u_j\rvert v_k\rangle=\langle \bar{v}_k\rvert \bar{u}_j\rangle$.

+ +

Uhlmann's theorem is almost straightforward from this. +The triangle inequality gives +$\lvert\langle\psi_\rho\rvert\psi_\sigma\rangle\rvert\le \sum_m s_m$ because matrix elements of unitary matrices are always less than $1$ in modulus, and the inequality is saturated when +$U_1 U_2=\sum_m \lvert s_m^R\rangle\!\langle s_m^L\rvert\equiv \mathcal U_{PD}^\dagger$. +In terms of the purification vectors $\lvert u_j\rangle,\lvert v_k\rangle$, this happens when +$$\langle u_j\rvert v_k\rangle = \langle \mu_k \rvert \Bigg(\sum_m \lvert s_m^R\rangle \!\langle s_m^L\rvert \Bigg) \lvert \lambda_j\rangle += \langle \mu_k\rvert \mathcal U_{PD}^\dagger\rvert \lambda_j\rangle.$$ +Note that here $\mathcal U_{PD}$ is the unitary matrix that you get out of the polar decomposition, that is, the $V$ in your post. +We can thus conclude that the purifications that saturate the inequality are those of the form +\begin{align} +\lvert u_j\rangle = V\lvert\bar{\lambda}_j\rangle, \qquad +\lvert v_k\rangle = V\mathcal U_{PD}^*\lvert\bar{\mu}_k\rangle, +\end{align} +or, equivalently, +\begin{align} +\lvert u_j\rangle = V\lvert j\rangle, \qquad +\lvert v_k\rangle = V\lvert j\rangle \langle \mu_k\rvert\mathcal U_{PD}^\dagger\lvert\lambda_j\rangle, +\end{align} +for any unitary $V$.

+ +

So, in conclusion, what does this tell us? That the vectors $\lvert u_j\rangle,\lvert v_k\rangle$ that make the purifications the most aligned are determined by the overlap between the eigenvectors of $\sigma$, and the eigenvectors of $\rho$ rotated through unitary that maps the right singular vectors of $\sqrt\rho\sqrt\sigma$ into the left ones.

+ +

Why is this the case? I have no idea, mostly because I don't know of any easy way to relate the SVD of a product of two operators with their eigenvectors.

+",55,,55,,11-08-2019 12:47,11-08-2019 12:47,,,,0,,,,CC BY-SA 4.0 +4690,2,,4684,11-12-2018 16:51,,10,,"

Yes, it is possible to conceive theories with ""stronger correlations"" than those given by quantum mechanics.

+ +

One way to make this statement precise is to consider some kind of ""measurement apparatus"" (you can think of it as simply a black box with some buttons that you can push and different LEDs that correspond to different possible outputs), and analyse the set of correlations between inputs and outputs that different physical theories allow for.

+ +

For example, if you have two possible inputs and two possible outputs, then the set of possible theories, often referred to as behaviours in this context, is the set of probabilities $\{p(a|x)\}_{x,a\in\{0,1\}}$. This is the set of vectors $\boldsymbol p$ in $[0,1]^{2^2}\subset\mathbb R^{2^2}$ normalised to one. In other words, it's a section of the $2^2$-dimensional hyperplane. More generally, in this context, it is common to consider a Bell-like scenario in which two parties are involved and each has its own box. Then, if each party can choose between $m$ possible inputs and can get $\Delta$ possible outputs, the set of possible behaviours is a hyperplane $\mathcal P\subset\mathbb R^{\Delta^2 m^2}$, which therefore has dimension $(\Delta^2-1)m^2$.

+ +

The behaviours allowed by quantum mechanics are a strict subset of $\mathcal P$.

+ +

One can consider different restrictions imposed on a physical theory and study the corresponding set of possible behaviours. A first natural assumption is to require a theory to be no-signalling, which means that it doesn't allow for faster-than-light communication. The set $\mathcal{NS}$ of no-signalling behaviours is strictly larger than the set $\mathcal Q$ of quantum behaviours. +In the context of CHSH inequalities, the boundary between $\mathcal{NS}$ and $\mathcal Q$ would be observable via Tsirelson's bound, which tells us that quantum mechanics cannot produce correlations such that $S>2\sqrt 2$ (where $S$ is the usual operator defined for CHSH inequalities). Similarly, the boundary between $\mathcal Q$ and the set $\mathcal L$ of local (classical) behaviours can be witnessed via the standard Bell's inequalities. +See the following picture representing the relations between these different sets (taken from Brunner et al., pag. 7):

+ +

+ +

Have a look at Brunner et al.'s review (1303.2849) for more details.

+ +

If you don't want to assume anything about a theory, then no, there are no restrictions at all. Any correlation between present and future is in principle possible.
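For concreteness, the boundary between $\mathcal Q$ and $\mathcal L$ can be reproduced numerically: with one conventional choice of optimal observables on a maximally entangled pair (the specific operators below are an assumption of this sketch, not the only optimal choice), the CHSH value reaches exactly $2\sqrt2$:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

phi = np.array([1, 0, 0, 1]) / np.sqrt(2)   # maximally entangled state |phi+>

A0, A1 = Z, X                                        # Alice's observables
B0, B1 = (Z + X) / np.sqrt(2), (Z - X) / np.sqrt(2)  # Bob's observables

def E(A, B):
    # correlator <phi| A (x) B |phi>
    return float(phi @ np.kron(A, B) @ phi)

S = E(A0, B0) + E(A0, B1) + E(A1, B0) - E(A1, B1)
print(S)  # 2*sqrt(2) ~ 2.828: Tsirelson's bound, above the classical bound 2
```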

+",55,,55,,11/13/2018 9:54,11/13/2018 9:54,,,,0,,,,CC BY-SA 4.0 +4691,2,,4679,11-12-2018 19:18,,3,,"

This paper gives a fairly complete answer to the question ""given oracle access to U, implement the inverse of U"".

+ +

https://arxiv.org/abs/1810.06944

+ +

They give a protocol which implements U inverse with a number of queries that's linear in the dimension of U and show that this is essentially optimal.

+ +

This seems to be fairly closely related to your question. I think that you could transform a good protocol for your problem into a good protocol for their problem, which means that the lower bound in their paper would carry over to your setting. I'm not sure, though.

+",483,,,,,11-12-2018 19:18,,,,1,,,,CC BY-SA 4.0 +4692,2,,4664,11-12-2018 20:10,,4,,"

(I will give the argument with formulas for now, hopefully I find time for some pictures later.)

+ +

Let $|m\rangle$ be the (unnormalized) maximally entangled state. Then, a purification of $\rho$ is given by +$$ +|\rho\rangle_{AB}=(\sqrt{\rho}_A\otimes1\!\!1_B)|m\rangle\ , +$$ +and correspondingly for $\sigma$ -- this can be seen most easily by first tracing the $B$ system from $|m\rangle$, which gives $1\!\!1$, which implies that the partial trace of $|\rho\rangle_{AB}$ is $\sqrt{\rho}1\!\!1\sqrt{\rho}=\rho$.

+ +

Now there is a unitary degree of freedom in the purification -- $B$ can apply any unitary and it is the same purification (as $B$ is being traced anyway). Let's use it for the purification of $\sigma$: +$$ +|\sigma\rangle = (\sqrt{\sigma}\otimes U)|m\rangle\ . +$$

+ +

Now what is the overlap of the two states? It is
$$
\langle\rho|\sigma\rangle = \langle m| \sqrt{\rho}\sqrt{\sigma}\otimes U|m\rangle\ .$$
It is now straightforward to work out that this equals
$$
\langle\rho|\sigma\rangle = \mathrm{tr}[\sqrt{\rho}\sqrt{\sigma}U^T]\ .$$
Now we want this quantity to be maximal. When is it maximal? It is maximal if $\sqrt{\rho}\sqrt{\sigma}U^T = |\sqrt{\rho}\sqrt{\sigma}|$, i.e., $\bar U$ is given by the polar decomposition of $\sqrt{\rho}\sqrt{\sigma}$. (Here -- and in the question -- $|X|$ denotes the positive part of the polar decomposition of $X$, which is $\sqrt{X^\dagger X}$, different from the absolute value function defined on the eigenvalues.)

+ +

(The last statement holds since $|\mathrm{tr}[XU]|\le \|X\|_1\|U\|_\infty = \|X\|_1$ -- this is Hölder's inequality -- where $\|X\|_1=\mathrm{tr}|X|$ is the sum of the singular values. Alternatively (following @glS) one can use the SVD of $X$, $X=\sum \sigma_i|s_i\rangle\langle q_i|$: +$$ +|\mathrm{tr}[XU]|=|\sum \sigma_i\langle q_i|U|s_i\rangle| \le +\sum \sigma_i|\langle q_i|U|s_i\rangle| \le \sum \sigma_i\ , +$$ +where we have used that +$|\langle q_i|U|s_i\rangle|\le1$, +since $||q_i\rangle|=1$ and $|U|s_i\rangle|=||s_i\rangle|=1$.)
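The identity used in the middle step, $\langle m|(A\otimes B)|m\rangle=\mathrm{tr}[AB^T]$, can be checked directly (a sketch; $|m\rangle=\sum_i|i\rangle|i\rangle$ unnormalized, with the usual Kronecker ordering $|i\rangle|j\rangle\mapsto$ index $id+j$):

```python
import numpy as np

d = 3
rng = np.random.default_rng(1)
A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
B = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))

m = np.eye(d).reshape(d * d)   # unnormalized |m> = sum_i |i>|i>
lhs = m.conj() @ np.kron(A, B) @ m
rhs = np.trace(A @ B.T)
assert np.isclose(lhs, rhs)    # <m| A (x) B |m> = tr[A B^T]
print(lhs)
```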

+",491,,491,,11/13/2018 9:07,11/13/2018 9:07,,,,14,,,,CC BY-SA 4.0 +4693,1,4695,,11/13/2018 3:26,,5,2085,"

It is my understanding that light, and its polarization, is used to transfer information in quantum computers, but how can the information encoded in say, an electron also be stored in light? I understand that the spin of an electron or other particle is often used to create the qubit, where its ""value"" can be any point on the sphere below between zero and one.

+ +

However, how is the polarization of a photon able to have that kind of variety of possible values? Is there something I'm missing? Is there a quantized version/explanation of polarizations that I simply haven't seen that allows for the same range of information storage and extraction?

+",5032,,55,,12-06-2021 22:14,12-06-2021 22:14,How is the polarization of a photon able to hold quantum information?,,2,0,,,,CC BY-SA 4.0 +4694,2,,4693,11/13/2018 7:34,,5,,"

In quantum theory, the pure states are associated with the unit vectors of the Hilbert space. A pure state of a quantum bit can be represented as +$$| \psi \rangle = \alpha | 0 \rangle + \beta | 1 \rangle$$ +where $|\alpha|^2 + |\beta|^2 = 1$. The basis $| 0 \rangle$ and $| 1 \rangle$ can be viewed as two orthogonal polarization directions.

+ +

The Bloch sphere is a geometric representation of qubits which makes the mathematics of a qubit more intuitive. To see this, we may rewrite the equation as +$$| \psi \rangle = e^{i \gamma} ( +\cos \frac \theta 2 | 0 \rangle + e^{i\phi} \sin \frac \theta 2 | 1 \rangle )$$

+ +

Since global phase has no observable effects, we can ignore the factor $e^{i\gamma}$. It follows that a pure qubit state +$$| \psi \rangle = \cos \frac \theta 2 | 0 \rangle + e^{i\phi } \sin \frac \theta 2 | 1 \rangle$$ +1-to-1 corresponds to a point on the Bloch sphere whose spherical coordinates are exactly $(\theta, \phi)$.
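
+ +

To make the correspondence concrete, here is a small numpy sketch (an illustration; the function name is my own):

```python
import numpy as np

def bloch_state(theta, phi):
    # Pure qubit state at spherical coordinates (theta, phi) on the Bloch sphere.
    return np.array([np.cos(theta / 2), np.exp(1j * phi) * np.sin(theta / 2)])

# |0> sits at the north pole, |1> at the south pole, and the equal
# superposition (|0> + |1>)/sqrt(2) lies on the equator.
assert np.allclose(bloch_state(0, 0), [1, 0])
assert np.allclose(bloch_state(np.pi, 0), [0, 1])
assert np.allclose(bloch_state(np.pi / 2, 0), np.ones(2) / np.sqrt(2))
```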

+ +

More generally, a state can be probability mixture of pure states, which leads to the notion of density operators. A density operator $\rho$ is positive and $\operatorname{Tr}(\rho) = 1$. In a qubit system, a density operator can be decomposed as +$$\rho = \frac 1 2 (I+r_1\sigma_1+r_2\sigma_2+r_3\sigma_3) $$ +where $r_1,r_2,r_3 \in \mathbb R$, $r_1^2 + r_2^2 + r_3^2 \leq 1$ and $\sigma_i$ are Pauli matrices +$$ +\sigma_0 = I = \begin{bmatrix} + 1 & 0 \\ + 0 & 1 + \end{bmatrix} +$$ +$$ +\sigma_1 = \sigma_x = \begin{bmatrix} + 0 & 1 \\ + 1 & 0 + \end{bmatrix} +$$ +$$ +\sigma_2 = \sigma_y = \begin{bmatrix} + 0 & -i \\ + i & 0 + \end{bmatrix} +$$ +$$ +\sigma_3 = \sigma_z = \begin{bmatrix} + 1 & 0 \\ + 0 & -1 + \end{bmatrix} +$$

+ +

This indicates that the whole Bloch ball can be associated with states. In particular, the states on the sphere are pure and those inside the sphere are mixed.
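
+ +

A short numpy sketch (my illustration) checks this via the purity $\operatorname{Tr}(\rho^2)$, which equals $1$ exactly when the Bloch vector has unit length:

```python
import numpy as np

I = np.eye(2)
sx = np.array([[0, 1], [1, 0]])
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]])

def rho(r):
    # Density matrix with Bloch vector r = (r1, r2, r3), |r| <= 1.
    return 0.5 * (I + r[0] * sx + r[1] * sy + r[2] * sz)

surface = rho([0, 0, 1])     # |r| = 1: on the sphere, pure
interior = rho([0, 0, 0.5])  # |r| < 1: inside the ball, mixed

assert np.isclose(np.trace(surface), 1) and np.isclose(np.trace(interior), 1)
assert np.isclose(np.trace(surface @ surface).real, 1.0)  # purity 1: pure
assert np.trace(interior @ interior).real < 1.0           # purity < 1: mixed
```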

+",4891,,4891,,11/13/2018 8:37,11/13/2018 8:37,,,,0,,,,CC BY-SA 4.0 +4695,2,,4693,11/13/2018 7:50,,5,,"

Firstly, that sphere that you've pictured is convenient for thinking about what's going on, but remember that it is not what is actually happening. So the fact that you don't visualise light as having a little arrow pointing somewhere doesn't matter.

+ +

The fact of the matter is that for an electron spin, having the two possible states ""up"" and ""down"", we associate two distinguishable states, which we label as perhaps $|up\rangle,|down\rangle$ or $|0\rangle,|1\rangle$. Since quantum mechanics is a linear theory, any linear superposition of the two is also allowed: $\alpha|0\rangle+\beta|1\rangle$.

+ +

Similarly for a photon, there are two distinguishable states of polarisations, horizontal and vertical. You can label these as $|H\rangle,|V\rangle$ or $|0\rangle,|1\rangle$. The labels really are arbitrary. But again, because quantum mechanics is linear, you can have a linear superposition of these, $\alpha|0\rangle+\beta|1\rangle$, and you can capture the same information as you can with the state of an electron.

+ +

On the Bloch Sphere, particular points are often highlighted, such as $|0\rangle$, $|+\rangle=(|0\rangle+|1\rangle)/\sqrt{2}$ and $(|0\rangle+i|1\rangle)/\sqrt{2}$. If we've decided that $|0\rangle$ is the same as a horizontally polarised photon, then $|+\rangle$ is the same as a photon polarised at $45^{\circ}$, and $(|0\rangle+i|1\rangle)/\sqrt{2}$ is the same as a circularly polarised photon.
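
+ +

The $45^{\circ}$ correspondence is easy to check numerically (a sketch I'm adding; a physical rotation of the polarisation axis acts as a rotation matrix on the $(|H\rangle,|V\rangle)$ amplitudes):

```python
import numpy as np

H, V = np.array([1.0, 0.0]), np.array([0.0, 1.0])  # horizontal, vertical

def rotate(state, angle):
    # Rotate a linear-polarisation state by `angle` in the H-V plane.
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[c, -s], [s, c]]) @ state

# A photon polarised at 45 degrees is the equal superposition (|H> + |V>)/sqrt(2).
assert np.allclose(rotate(H, np.pi / 4), (H + V) / np.sqrt(2))
```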

+",1837,,,,,11/13/2018 7:50,,,,0,,,,CC BY-SA 4.0 +4696,2,,4209,11/13/2018 8:22,,3,,"

The kinds of people that use these devices are affiliated with companies, quantum startups, or the IBMQ Hubs (in Oxford, Keio, Melbourne, ... ). The process is more involved than a simple web sign-up.

+ +

If you are a company and want to get the process started, you can use this web form, or try to ping someone important on the Qiskit Slack.

+ +

If you are neither a company nor an academic at one of the hubs, then I'm afraid there is no scope for access to these devices. But there are, of course, the many publicly available devices that are just as awesome.

+ +

Disclaimer: I work for IBM

+",409,,,,,11/13/2018 8:22,,,,0,,,,CC BY-SA 4.0 +4697,2,,4662,11/13/2018 8:50,,1,,"

There are a few tests of whether a two qubit state is entangled. One is to perform the so-called 'partial transpose', which can be done with the partialtranspose function in quantum-octave. If the resulting matrix has any negative eigenvalues, you know that your original state was entangled.
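
+ +

For readers without quantum-octave, the same partial-transpose test is easy to sketch in plain numpy (an illustration I'm adding; the function name is my own):

```python
import numpy as np

def partial_transpose(rho):
    # Transpose the second qubit of a 4x4 two-qubit density matrix.
    r = rho.reshape(2, 2, 2, 2)  # axes: (a, b, a', b')
    return r.transpose(0, 3, 2, 1).reshape(4, 4)

# Bell state: the partial transpose has a negative eigenvalue -> entangled.
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
rho_bell = np.outer(bell, bell.conj())
assert np.linalg.eigvalsh(partial_transpose(rho_bell)).min() < 0

# Product state |00><00|: no negative eigenvalues -> not entangled.
prod = np.zeros(4)
prod[0] = 1
rho_prod = np.outer(prod, prod)
assert np.linalg.eigvalsh(partial_transpose(rho_prod)).min() >= -1e-12
```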

+ +

Another test works only when your two qubit state is pure (note: a state can be said to be either pure or mixed, depending on whether it has only a single non-zero eigenvalue, or multiple non-zero eigenvalues). To test this you can look at the eigenvalues. Or you can calculate the Von Neumann entropy with vnentropy. This returns a value of zero only for pure states.

+ +

Once you have a pure state of two qubits and want to test for entanglement, your next step is to perform a partial trace using ptrace. This will give a description of the state of just one of your qubits. If this state is also pure, your overall two qubit state is not entangled. But if it is mixed, it is entangled. To determine which is true, you can again use vnentropy. The result of this can be regarded as a measure of how entangled the state is.
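
+ +

The partial-trace-plus-entropy procedure can be sketched in numpy as follows (my illustration; vn_entropy mirrors what quantum-octave's vnentropy computes):

```python
import numpy as np

def reduced_state(psi):
    # Trace out qubit B from a two-qubit pure state |psi> (length-4 vector).
    m = psi.reshape(2, 2)  # m[a, b] = amplitude of |a>|b>
    return m @ m.conj().T

def vn_entropy(rho):
    # Von Neumann entropy in bits.
    lam = np.linalg.eigvalsh(rho)
    lam = lam[lam > 1e-12]
    return float(-(lam * np.log2(lam)).sum())

product = np.kron([1, 0], np.ones(2) / np.sqrt(2))  # |0>|+>, not entangled
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)          # maximally entangled

assert np.isclose(vn_entropy(reduced_state(product)), 0.0)  # reduced state pure
assert np.isclose(vn_entropy(reduced_state(bell)), 1.0)     # reduced state mixed
```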

+ +

Another measure of entanglement, and one that works even when your overall state is mixed, is the concurrence. This can be calculated in quantum-octave with concurrence.

+ +

There's also a bunch of other ways. As you learn about entanglement you will learn many concepts, and then you'll find that many of those have an implementation in quantum-octave.

+",409,,,,,11/13/2018 8:50,,,,0,,,,CC BY-SA 4.0 +4698,1,,,11/13/2018 9:49,,5,163,"

To date, what is the longest quantum computation ever performed? Length is measured in number of operations.

+ +

EDIT --- +I'm looking for a quantum computation with a clear ending and a clear output. Something like a quantum computation that solves a set of linear equations would count. But I consider a benchmarking experiment that measures the lifetime of the quantum information 'long' because it can be arbitrarily long and give the same output as a similar experiment which is much shorter.

+",1867,,1867,,11/15/2018 12:29,11/15/2018 12:29,What is the longest quantum circuit?,,1,3,,,,CC BY-SA 4.0 +4699,1,,,11/13/2018 10:05,,25,5770,"

The Bloch sphere is a nice visualization of single qubit states. Mathematically, it can be generalized to any number of qubits by means of a high-dimensional hypersphere. But such things are not easy to visualize.

+ +

What attempts have been made to extend visualizations based on the Bloch sphere to two qubits?

+",409,,55,,06-04-2019 22:54,06-04-2019 22:54,Can the Bloch sphere be generalized to two qubits?,,5,1,,,,CC BY-SA 4.0 +4700,2,,4699,11/13/2018 10:44,,2,,"

A paper has been published on the subject, called ""Bloch sphere model for two-qubit pure states""

+ +

https://arxiv.org/abs/1403.8069

+",4160,,26,,11/13/2018 13:06,11/13/2018 13:06,,,,1,,,,CC BY-SA 4.0 +4701,2,,4699,11/13/2018 10:51,,6,,"

For more than 1-qubit visualization, we will need more complex visualizations than a Bloch sphere. The below answer from Physics Stack Exchange explains this concept quite authoritatively:

+ +

Bloch sphere for 2 and more qubits

+ +

In another article, the two qubit representation is described as a seven-dimensional sphere, $S^7$, which also allows for a Hopf fibration, with $S^3$ fibres and an $S^4$ base. The most striking result is that suitably oriented $S^7$ Hopf fibrations are entanglement sensitive.

+ +

Geometry of entangled states, Bloch spheres and Hopf fibrations

+ +

Having said that, a Bloch sphere based approach is quite useful even to model the behavior of qubits in a noisy environment. There has been analysis of the two-qubit system by use of the generalized Bloch vector to generate tractable analytic equations for the dynamics of the four-level Bloch vectors. This is based on the application of geometrical concepts from the well-known two-level Bloch sphere.

+ +

We can find that in the presence of correlated or anti-correlated noise, the rate of decoherence is very sensitive to the initial two-qubit state, as well as to the symmetry of the Hamiltonian. In the absence of symmetry in the Hamiltonian, correlations only weakly impact the decoherence rate:

+ +

Bloch-sphere approach to correlated noise in coupled qubits

+ +

There is another interesting research article on the representation of the two-qubit pure state parameterized by three unit 2-spheres and a phase factor. For separable states, two of the three unit spheres are the Bloch spheres of each qubit, with coordinates $(\theta_A, \phi_A)$ and $(\theta_B, \phi_B)$. The third sphere parameterises the degree and phase of concurrence, an entanglement measure.

+ +

This sphere may be considered a ‘variable’ complex imaginary unit t where the stereographic projection maps the qubit-A Bloch sphere to a complex plane with this variable imaginary unit. This Bloch sphere model gives a consistent description of the two-qubit pure states for both separable and entangled states.

+ +

As per this hypothesis, the third sphere (entanglement sphere) parameterizes the nonlocal properties, entanglement and a nonlocal relative phase, while the local relative phases are parameterized by the azimuthal angles, $\phi_A$ and $\phi_B$, of the two quasi-Bloch spheres.

+ +

Bloch sphere model for two-qubit pure states

+",4501,,734,,11/14/2018 7:44,11/14/2018 7:44,,,,2,,,,CC BY-SA 4.0 +4702,2,,4699,11/13/2018 12:42,,8,,"

Since a spin $j$ irreducible representation of $SU(2)$ has dimension $2j+1$ ($j$ is a half integer), any finite dimensional Hilbert space can be obtained as a representation space of $SU(2)$. Moreover, since all irreducible representations of $SU(2)$ are symmetric tensor products of the fundamental spinor representation, every finite dimensional Hilbert space can be thought of as a symmetric tensor product of fundamental $SU(2)$ representation spaces.

+ +

This is the basis of the Majorana stellar representation construction. A state of a qudit living in a Hilbert space of dimension $2j+1$ can be represented by $2j$ points on the Bloch sphere. The state vector can be reconstructed from the $2j$ (2-dimensional) spin vectors of the $2j$ points by a symmetrized tensor product.

+ +

Given a state vector in a $2j+1$ dimensional Hilbert space (Please see Liu, Fu and Wang, section 2.1) +$$|\psi\rangle = \sum_{m=-j}^{j} C_m |j, m\rangle, $$ +The locations of the corresponding points (the Majorana stars) on the Bloch sphere are given by the roots of the equation: +$$\sum_{k=0}^{2j} \frac{(-1)^k C_{j-k}}{(2j-k)! k!} z^{2j-k}=0.$$

+ +

(The parametrization is by means of the stereographic projection coordinate $ z = \tan \frac{\theta}{2}\, e^{i\phi}$ ($\theta$, $\phi$ are the spherical coordinates))
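
+ +

As a small numerical illustration (my addition), the stars can be found with numpy's polynomial root finder for a spin-1 example:

```python
import numpy as np
from math import factorial

def majorana_roots(C):
    # Roots z_k for a spin-j state with coefficients C = [C_j, ..., C_{-j}],
    # i.e. C[k] = C_{j-k}, using the polynomial from the text.
    twoj = len(C) - 1
    coeffs = [(-1) ** k * C[k] / (factorial(twoj - k) * factorial(k))
              for k in range(twoj + 1)]  # highest power of z first
    return np.roots(coeffs)

# Spin-1 example: (|1,1> + |1,-1>)/sqrt(2) has its two Majorana stars
# at z = +i and z = -i.
roots = majorana_roots([1 / np.sqrt(2), 0, 1 / np.sqrt(2)])
assert np.allclose(sorted(roots, key=lambda z: z.imag), [-1j, 1j])
```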

+ +

One application of this representation to quantum computation is in the visualization of the trajectories giving rise to geometric phases, which serve as the gates in holonomic quantum computation. These trajectories are reflected as trajectories of the Majorana stars on the Bloch spheres, and the geometric phases can be computed from the solid angles enclosed by these trajectories. Please see Liu and Fu's work on Abelian geometric phases. A treatment of some non-Abelian cases is given by Liu, Roy and Stone.

+ +

Finally, let me remark that there are many geometric representations relevant to quantum computation, but they are multidimensional and may not be useful in general as visualization tools. Please see for example Bernatska and Holod treating coadjoint orbits, which can serve as phase spaces of the finite dimensional Hilbert spaces used in quantum computation. The Grassmannian, which parametrizes the ground state manifold of adiabatic quantum Hamiltonians, is a particular example of these spaces.

+",4263,,4263,,11/14/2018 9:44,11/14/2018 9:44,,,,3,,,,CC BY-SA 4.0 +4703,2,,4698,11/13/2018 12:55,,6,,"

The longest circuit ever performed is probably one used for benchmarking. So the point would be to run many gates, look at the associated decay in fidelity, and use that to find a measure of how good the gates are.

+ +

Here's a recent paper that I found on a fairly brief search. In Fig. 2 we see depths of 1500 single qubit gates and 130 for gates on two qubits. Given that these circuits are designed to drive states to a point of complete randomness, there's not much point in going further.

+ +

However, note that circuit depth isn't uniquely defined, especially since one circuit can be recompiled to an equivalent one with non-equal depth. This is especially true of single qubit operations. For example, should the gate sequence $H Z H$ be regarded as depth 3, or as depth 1 because it is equivalent to $X$? So a more concrete measure is perhaps the depth of cnots.
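
+ +

The $HZH$ equivalence is easy to verify numerically (a sketch I'm adding):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
Z = np.diag([1, -1])
X = np.array([[0, 1], [1, 0]])

# Three gates, yet equivalent to a single X -- so the depth of a circuit
# depends on how it is compiled.
assert np.allclose(H @ Z @ H, X)
```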

+ +

For this, the longest I know of is my own benchmarking program that I ran for 10 'rounds' in which each round has a cnot depth of 4 (except for the last, which has two). This makes a total cnot depth of 38. But, of course, I only know of this because it is my own project. There might be runs with more cnots, though current noise levels mean that there is not much reason to go too far beyond this.

+",409,,409,,11/13/2018 13:02,11/13/2018 13:02,,,,1,,,,CC BY-SA 4.0 +4704,2,,4699,11/13/2018 22:53,,5,,"

We have some multiqubit visualizations within Q-CTRL's Black Opal package.

+ +

These are all fully interactive and are designed to help build intuition about correlations in interacting two-qubit systems.

+ +

The two Bloch spheres represent the relevant separable states of two qubits. The tetrahedra in the middle visually capture correlations between certain projections of the two qubits. When there is no entanglement, the Bloch vectors live entirely on the surfaces of the respective spheres. However, a fully entangled state lives exclusively in the space of correlations in this representation. The extrema of these spaces will always be maximally entangled states like Bell states, but maximally entangled states can also reside within multiple tetrahedra simultaneously.

+ +

+",5092,,26,,11/14/2018 3:54,11/14/2018 3:54,,,,3,,,,CC BY-SA 4.0 +4705,2,,4699,11/13/2018 23:42,,23,,"

For pure states, there is a reasonably simple way to make a ""2 qubit bloch sphere"". You basically use the Schmidt decomposition to divide your state into two cases: not entangled and fully entangled. For the not-entangled part, you just use two bloch spheres. And then the entangled part is isomorphic to the set of possible rotations in 3d space (the rotation is how you translate measurements on one qubit into predictions on the other qubit). This gives you a representation with eight real parameters:

+ +

1) A real value w between 0 and 1 indicating the weight of not-entangled vs fully-entangled.

+ +

2+3) The not-entangled unit bloch vector for qubit 1.

+ +

4+5) The not-entangled unit bloch vector for qubit 2.

+ +

6+7+8) The fully-entangled rotation.
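
+ +

The split into not-entangled and fully-entangled parts comes from the Schmidt decomposition; as a sketch (my addition), the Schmidt coefficients of a two-qubit pure state are just the singular values of its amplitude matrix:

```python
import numpy as np

def schmidt_coefficients(psi):
    # Schmidt coefficients of a two-qubit pure state (length-4 vector).
    return np.linalg.svd(psi.reshape(2, 2), compute_uv=False)

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)  # fully entangled
prod = np.kron([1, 0], [1, 0])              # not entangled

# Equal coefficients: fully entangled; a single coefficient 1: product state.
assert np.allclose(schmidt_coefficients(bell), np.ones(2) / np.sqrt(2))
assert np.allclose(schmidt_coefficients(prod), [1, 0])
```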

+ +

Here's what it looks like if you show the rotation part as ""where X Y and Z axes get mapped"", and additionally scale the axes by w so that it gets bigger the more entangled you are:

+ +

+ +

(The bouncing in the middle one is due to a numerical degeneracy in my code.)

+ +

For mixed states, I've had a bit of success showing the envelope of bloch vectors predicted for qubit 2 given every possible measurement of qubit 1. That looks like this:

+ +

+ +

But note that a) this 'envelope' representation is not symmetric (one of the qubits is the control and the other is the target) and b) although it looks pretty it's not algebraically compact.

+ +

This display is available in the alternate dev-entanglement-display branch of Quirk. If you're able to follow the build instructions, then you can play with it directly.

+",119,,,,,11/13/2018 23:42,,,,0,,,,CC BY-SA 4.0 +4706,1,,,11/14/2018 6:44,,6,428,"

I've been working through the paper Bell nonlocality by Brunner et al. after seeing it in user glS' answer here. Early on in the paper, the standard Bell experimental setup is defined:

+ +

+ +

Where $x, y \in \{0,1\}$, $a, b \in \{-1, 1\}$, and the two people (Alice & Bob) measure a shared quantum system generated by $S$ according to their independent inputs $x$ and $y$, outputting the results as $a$ and $b$.

+ +

The paper then has the following equation:

+ +

$P(ab|xy) \ne P(a|x)P(b|y)$

+ +

And claims the fact this is an inequality means the two sides are not statistically independent. It's been a long time since I took probability & statistics in university, so I'm interested in this equation, what it means, and why it is a test for statistical independence. Why is this equation used, and what is the intuitive meaning of each side? I have basic knowledge of conditional probability and Bayes' theorem.

+",4153,,55,,12-01-2020 11:36,12-01-2020 11:36,"In Bell nonlocality, why does $P(ab|xy)\neq P(a|x)P(b|y)$ mean the variables are not statistically independent?",,2,1,,,,CC BY-SA 4.0 +4707,1,,,11/14/2018 8:07,,9,1178,"

I'm working through a problem set, and am stuck on the following problem:

+
+

a) What can go wrong in Shor’s algorithm if Q (the dimension of the Quantum Fourier Transform) is not taken to be sufficiently large? Illustrate with an example.

+

b) What can go wrong if the function, f, satisfies $f(p) = f(q)$ if the period $s$ divides $p − q$, but it’s not an “if and only if” (i.e., we could have $f(p) = f(q)$ even when $s$ doesn’t divide $p − q$)? Note that this does not actually happen for the function in Shor’s algorithm, but it could happen when attempting period finding on an arbitrary function. Illustrate with an example.

+

c) What can go wrong in Shor’s algorithm if the integer $N$ to be factored is even (that is, one of the prime factors, $p$ and $q$, is equal to 2)? Illustrate with an example.

+

d) Prove that there can be at most one rational $\frac{a}{b}$, with $a$ and $b$ positive integers, that’s at most $\epsilon$ away from a real number $x$ and that satisfies $b < \frac{1}{\sqrt{2\epsilon}}$. Explain how this relates to the choice, in Shor’s algorithm, to choose Q to be quadratically larger than the integer $N$ that we’re trying to factor.

+
+

I've been wrestling with it for a while, and my attempt so far is:

+
+

a) When $s$ (the period) does not divide $Q$, a sufficiently large $Q$ ensures that the rational approximation to $\frac{k Q}{s}$ where $k$ is an integer is sufficiently close to determine a unique $s$.

+

b) There might be more than one period $s$ associated with the function (something like an acoustic beat), so it would be much more difficult to solve for one period individually.

+

c) Completely lost....

+

d) I supposed that there existed two different rationals such that $\mid{\frac{a_1}{b_1} - \frac{a_2}{b_2}}\mid> 0$ and tried to force a contradiction using the constraints $\mid\frac{a_\mu}{b_\mu}-x\mid \leq \epsilon$ and $b_\mu < \frac{1}{\sqrt{2\epsilon}}$, but couldn't get it to come out. Either I am making a stupid mistake and/or missing something simple (?).

+
+

I am really struggling to gain intuition for Shor's algorithm and its specific steps, so I'm very unconfident when trying to address parts (a) - (c). I'm stumped by (c) especially; isn't Shor's algorithm robust in the sense that it does not matter if $N$ is even or odd? If anyone could point me in the right direction, it would be appreciated. Thanks!
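
+

To build intuition for parts (a) and (d), I played with a toy sketch of the continued-fraction recovery step (the numbers here are made up, purely illustrative):

```python
from fractions import Fraction

# Toy period-finding readout: true period s = 12, measured peak near k*Q/s for k = 5.
s, k = 12, 5

# Large Q: the best fraction with denominator at most s recovers k/s exactly.
Q = 2 ** 11
guess = Fraction(round(k * Q / s), Q).limit_denominator(s)
assert guess == Fraction(k, s)

# Too-small Q: rounding error swamps the signal and a wrong fraction wins.
Q = 2 ** 4
guess = Fraction(round(k * Q / s), Q).limit_denominator(s)
assert guess != Fraction(k, s)
```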

+",5097,,-1,,6/18/2020 8:31,3/16/2020 22:14,Shor's algorithm weaknesses & uniqueness of close rational,,2,2,,,,CC BY-SA 4.0 +4708,2,,4706,11/14/2018 8:11,,3,,"

The equation

+ +

$P(ab|xy) = P(a|x)P(b|y)$

+ +

would imply that any dependence that the output $ab$ has on the inputs $xy$ (expressed by the lhs) is solely due to $a$ depending on $x$ alone, and $b$ depending on $y$ alone. This is expressed by the rhs by treating the value of $a$ and its dependence on $x$ as an independent event from the value of $b$ and its dependence on $y$, and hence the probability of a particular $ab$ is the product of these independent probabilities.

+ +

In Bell inequality experiments, we see that this is not the case. Correlations between $a$ and $b$ depend explicitly on which pair of inputs $xy$ is chosen. Specifically, for the CHSH inequality, whether or not $a$ agrees with $b$ depends on both $x$ and $y$, and hence $a$ cannot be said to be independent of $y$ (nor $b$ of $x$). This behaviour means that there will be at least some values of $a$, $b$, $x$ and $y$ where the above equality does not hold.

+",409,,,,,11/14/2018 8:11,,,,0,,,,CC BY-SA 4.0 +4709,2,,4706,11/14/2018 8:30,,4,,"

It perhaps helps to express $P(ab|xy)$ in words:

+ +
+

the probability that Alice gets answer A and Bob gets answer B given that choices x and y were made

+
+ +

Now independence in classical probability holds if and only if +$$ +P(e_1\text{ and }e_2)=P(e_1)P(e_2) +$$ +where $e_1$ and $e_2$ are events, and practically, you can see what it means through Bayes' theorem +$$ +P(e_1|e_2)=\frac{P(e_1\text{ and }e_2)}{P(e_2)}=P(e_1) +$$ +i.e. for independent events, the conditional probability is independent of the conditioning.

+ +

Now, we could rewrite +$$ +P(ab|xy)=P(a|bxy)P(b|xy) +$$ +you lose a bit of symmetry by doing it like this, but only briefly. Now, the idea of independence in the current context is that Alice's result should not depend on anything that happens on Bob's side, so $P(a|bxy)=P(a|x)$ and Bob's result should be independent of anything that happens on Alice's side, so $P(b|xy)=P(b|y)$. Hence, independence between Alice and Bob would imply +$$ +P(ab|xy)=P(a|x)P(b|y). +$$ +So, if this condition holds for all $a,b,x,y$, then the probability distribution is independent, otherwise, the results are not statistically independent.
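
+ +

As a concrete numerical example (my addition), the singlet-state statistics used in CHSH tests fail this factorisation even though the marginals are completely uniform:

```python
import numpy as np

# Singlet-state statistics: P(ab|xy) = (1 - a*b*cos(alpha_x - beta_y)) / 4,
# with the standard CHSH choice of measurement angles.
alpha = {0: 0.0, 1: np.pi / 2}
beta = {0: np.pi / 4, 1: -np.pi / 4}

def P(a, b, x, y):
    return (1 - a * b * np.cos(alpha[x] - beta[y])) / 4

# Marginals are uniform: P(a|x) = P(b|y) = 1/2, so independence would force
# every joint probability to equal 1/4 -- which fails here.
for x in (0, 1):
    for y in (0, 1):
        assert np.isclose(P(1, 1, x, y) + P(1, -1, x, y), 0.5)  # P(a=1|x)
assert not np.isclose(P(1, 1, 0, 0), 0.25)
```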

+",1837,,,,,11/14/2018 8:30,,,,0,,,,CC BY-SA 4.0 +4711,1,,,11/14/2018 10:07,,5,1310,"

I recently stumbled upon a press release from Xanadu.ai stating that

+ +
+

Under the hood, PennyLane's core feature is that it implements a + version of the backpropagation algorithm - the workhorse for training + deep learning models - that is naturally compatible with quantum + devices.

+
+ +

As far as I know, not many algorithms are known to profit from the features of a quantum computer. So is quantum backpropagation one of these algorithms that is theoretically faster than classical backpropagation? Or what advantages should one get from running backpropagation on a quantum computer?

+",5100,,,,,03-01-2022 12:49,Is quantum backpropagation faster than classical backpropagation?,,2,4,,,,CC BY-SA 4.0 +4712,1,,,11/14/2018 11:43,,2,152,"

I have a network problem that I believe would be represented as an adjacency-list on a quantum circuit. Typically, the list is up for traversing from various perspectives (e.g., shortest-path between two points).

+ +

Adjacency-list example:

+ +
x-> a, b, c
+y-> a, d, e
+x-> b, c, d
+
+ +

Any insight will be much appreciated!

+",4120,,26,,12/23/2018 8:02,12/23/2018 8:02,How to you encode a network as an adjacency-list in the quantum-gate model?,,1,5,,,,CC BY-SA 4.0 +4713,2,,4711,11/14/2018 11:50,,4,,"

In general, the efficiency of quantum machine learning techniques will be calibrated and measured more in terms of energy efficiency, the ability to handle complex computational and NP-hard problems, and the ability to ensemble algorithms from different domains than in terms of raw speed or learning rate. However, there could be exceptionally faster quantum algorithms for a specific set of computational problems.

+ +

Quantum backpropagation can be more energy efficient and faster if the right combination of quantum algorithms is used. It depends on how efficiently the output neuron states converge to the quantum correlations employed in the feedforward network and what kind of control system algorithms are used for backpropagation. An efficient controlled-NOT gate with less decoherence and an optimized topology will be useful for this purpose.

+ +

One interesting approach to quantum backpropagation is to implement +a form of quantum adaptive error correction, in the sense that, for a feedforward network, the input layer is conditionally transformed so that it exhibits the firing patterns that solve a given computational problem.

+ +

In this approach, quantum backpropagation dynamics is integrated in a two-stage neural cognition scheme: there is a feedforward learning stage in which the output neurons’ +states, initially separable from the input neurons’ states, converge during a +neural processing time to states correlated with the input layer, and then +there is a backpropagation stage, where the output neurons act as a control +system that triggers different quantum circuits that are implemented on the +input neurons, conditionally transforming their state in such a way that a +given computational problem is solved.

+ +

The following research paper has a deep analysis of quantum backpropagation dynamics through the application of a Hamiltonian framework. It introduces a Hamiltonian framework for quantum neural machine learning with basic feedforward neural networks, integrating quantum measurement theory and dividing the quantum neural dynamics into a learning stage and a backpropagation stage, and then applies the framework to two example problems:

+ +
    +
  1. The firing pattern selection problem, where the neural network places the input layer in a specific well-defined firing configuration, from an initially arbitrary superposition of neural firing patterns

  2. +
  3. The n-to-m Boolean functions’ representation problem, where the goal for the network is to correct the input layer so that it represents an arbitrary n-to-m Boolean function.

  4. +
+ +

There is another experimental implementation of Quantum Back Propagation algorithm for a Multi Layer Perceptron based Artificial Neural Network as outlined here.

+",4501,,26,,11/14/2018 15:13,11/14/2018 15:13,,,,1,,,,CC BY-SA 4.0 +4714,2,,4712,11/14/2018 17:02,,2,,"

Say you have $m$ vertices and the longest list you can build is of size $n$.

+ +

The simplest way I could think of is an encoding in basis states like: +$$ | \text{origin} \rangle | v_0 (\text{origin})\rangle |v_1 (\text{origin})\rangle... |v_{n-1}(\text{origin})\rangle $$

+ +

We basically would encode the origin vertex and its corresponding n-sized adjacency-list whose elements are noted by $ v_0...v_{n-1} $. Potentially, the list may be of size less than $n$ but you can encode an empty element as a bitstring of your choice ($000$ for instance). Each vertex would be represented by a bit string of approximately $ \log(m+1) $ (qu)bits if you take into account the empty element.

+ +

In your example, the elements would be represented as $$(\text{empty},000),(a,001), (b,010), (c,011), (x,111), \cdots$$

+ +

For $x \to a,b,c$ you would have : +$$ | x \rangle | a\rangle |b\rangle|c\rangle = | 111 \rangle | 001\rangle |010\rangle|011\rangle = |111 001 010 011 \rangle$$

+ +

That also means you can have all pairs of $(\text{origin, list})$ in superposition, where at the end you retrieve one by measurement according to the corresponding amplitudes.
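
+ +

Here is a small sketch of that classical encoding step in Python (illustrative only; the 3-bit labels for $d$, $e$ and $y$ are my own arbitrary choices, extending the assignment above):

```python
# 3-bit labels for the vertices; 000 is reserved for the empty element.
labels = {'empty': '000', 'a': '001', 'b': '010', 'c': '011',
          'd': '100', 'e': '101', 'y': '110', 'x': '111'}

def encode(origin, neighbours, n=3):
    # Bit string |origin>|v_0>...|v_{n-1}>, padding short lists with 'empty'.
    padded = neighbours + ['empty'] * (n - len(neighbours))
    return labels[origin] + ''.join(labels[v] for v in padded)

# Matches the example |111 001 010 011> for x -> a, b, c.
assert encode('x', ['a', 'b', 'c']) == '111001010011'
```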

+",4127,,26,,11/14/2018 17:07,11/14/2018 17:07,,,,5,,,,CC BY-SA 4.0 +4715,1,4716,,11/15/2018 7:05,,6,209,"

Continuing from my previous question on Brunner et al.'s paper; so given a standard Bell experimental setup:

+ +

+ +

where independent inputs $x,y \in \{0, 1\}$ decide the measurement performed by Alice & Bob on quantum state $S$ with outcomes $a,b \in \{-1, 1\}$, $a$ and $b$ are correlated (not independent) if:

+ +

(1) $P(ab|xy) \ne P(a|xy)P(b|xy) \ne P(a|x)P(b|y)$

+ +

Of course, there are perfectly innocent non-quantum reasons why $a$ and $b$ could be correlated; call these reasons confounding variables, some artifact of when Alice & Bob's systems interacted in the past. The set of all confounding variables we call $\lambda$. If we take into account all variables in $\lambda$, a local theory claims that $a$ and $b$ will become independent and thus $P(ab|xy)$ will factorize:

+ +

(2) $P(ab|xy,\lambda) = P(a|x,\lambda)P(b|y,\lambda)$

+ +

This equation expresses outcomes depending only on their local measurement and past variables $\lambda$, and explicitly not the remote measurement.

+ +

Question one: what is the mathematical meaning of the comma in equation (2)?

+ +

Question two: what is an example of a variable in $\lambda$?

+ +

The paper then says the following:

+ +
+

The variable $\lambda$ will not necessarily be constant for all runs of the experiment, even if the procedure which prepares the particles to be measured is held fixed, because $\lambda$ may involve physical quantities that are not + fully controllable. The different values of $\lambda$ across the runs should thus be characterized by a probability distribution $q(\lambda)$.

+
+ +

Question three: why was it a set of variables before but is now only a single variable?

+ +

Question four: what is an example of a probability distribution for $q(\lambda)$ here?

+ +

We then have the fundamental definition of locality for Bell experiments:

+ +

(3) $P(ab|xy) = \int_{Λ} q(\lambda)P(a|x, \lambda)P(b|y, \lambda) d\lambda$ where $q(\lambda|x,y) = q(\lambda)$

+ +

Question five: What does the Λ character mean under the integral sign?

+ +

General question: so we have a continuous probability distribution $q(\lambda)$ over which we're integrating. Why are we multiplying the RHS of equation (2) by $q(\lambda)$ in the integrand? That would seem to make equation (3) different than equation (2). What's an example of this integral with concrete values?

+",4153,,55,,12-01-2021 10:06,12-01-2021 10:06,Definition of locality in Bell experiments,,1,0,,,,CC BY-SA 4.0 +4716,2,,4715,11/15/2018 14:48,,5,,"

I think that I can explain the definition through the following simple example:

+ +

Suppose that you perform two experiments in the same house in two separate rooms. In the first you measure the observable $A$ and instantaneously in the second you measure the observable $B$. The measurements are afflicted with noise, so you do not get a definite answer every time you measure but a distribution $P(A)$ and $P(B)$.

+ +

Now, as is very well known, the noise is temperature dependent (we suppose that the temperature is uniform throughout the house), so when you repeat your experiment at a different temperature you observe that you obtain a different distribution. Thus, our distributions are temperature dependent and we write them as: $P(A, T)$, and $P(B, T)$.

+ +

Now, since the two rooms are remote, we know that there is no effect of each measurement on the other, thus our joint distribution must have the form: +$$P(AB,T) = P(A,T) P(B,T) $$

+ +

The comma here means that the temperature is a parameter on which the distribution depends and not an observable.

+ +

Now, suppose that we don't know the season in which we performed the experiment, but only the probability density function $f(T)$ of the temperature over the seasons. The best we can do in order to make predictions is to perform a weighted average over all the temperature values, according to: +$$ P(AB) = \int_{\mathcal{T}} f(T) P(A,T) P(B,T) dT$$ +Here $ \mathcal{T}$ (corresponding to the character $\Lambda$ in the question) is the space of all temperature values. This space can be one dimensional, as in our case where our experiments depend on one parameter, or it can be a multidimensional manifold parametrizing the possible parameters which can influence our experiment.

+ +

Now, the measurements are local, they were performed in distant rooms, with no correlation, except through the fact that the temperature was the same every time the observables were measured instantaneously.

+ +

The only difference between the above and your question is that the observables in the question are conditional, $a|x$ and $b|y$, instead of $A$ and $B$. This just means that you did not perform a single experiment measuring $A$, but several experiments: for example, if $x$ and $y$ are binary, it means that you performed one experiment measuring the observable $A = a|0$ where you held $x$ at $0$, then another experiment measuring $A = a|1$, etc.

+ +

A simple example of the confounding variable $\lambda$, exemplified by the temperature in the above description, would be a binary variable; in this case its probability density is: +$$q(\lambda) = \frac{1}{2}(\delta(\lambda) + \delta(\lambda-1))$$ +($\delta$ is the Dirac delta function), and in this case its range will be the whole real axis $\Lambda = \mathbb{R}$ equipped with the Lebesgue measure.
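The weighted-average construction above is easy to check numerically. Here is a minimal numpy sketch (the conditional distributions below are made up purely for illustration): averaging two factorized distributions over a binary $\lambda$ yields a normalized joint distribution in which $A$ and $B$ are correlated, even though each outcome is determined locally given $\lambda$.

```python
import numpy as np

# Illustrative conditional distributions P(A | lam) and P(B | lam)
# for a binary confounder lam (the temperature); values are made up.
P_A = {0: np.array([0.9, 0.1]), 1: np.array([0.2, 0.8])}
P_B = {0: np.array([0.7, 0.3]), 1: np.array([0.4, 0.6])}

# q(lam) = 1/2 delta(lam) + 1/2 delta(lam - 1): the integral over
# Lambda collapses to a sum of two factorized terms.
P_AB = 0.5 * np.outer(P_A[0], P_B[0]) + 0.5 * np.outer(P_A[1], P_B[1])

# The joint is normalized, yet A and B are correlated through lam alone:
marg_A = P_AB.sum(axis=1)
marg_B = P_AB.sum(axis=0)
correlated = not np.allclose(P_AB, np.outer(marg_A, marg_B))
```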

+",4263,,4263,,11/15/2018 14:54,11/15/2018 14:54,,,,0,,,,CC BY-SA 4.0 +4720,1,4722,,11/15/2018 18:02,,4,1727,"

What Clifford gate circuit operating on states $|\psi_1\rangle$ and $|\psi_2\rangle$ prepares the state $|\Psi\rangle=|\psi_1\rangle \otimes |\psi_2\rangle$ ?

+",4943,,26,,12/23/2018 8:00,2/24/2019 7:49,What circuit or operation corresponds to the tensor product?,,5,0,,,,CC BY-SA 4.0 +4721,1,4725,,11/15/2018 18:17,,2,138,"

After posting this question to Physics, it became pretty clear I should have posted here. So:

+ +

How might a (e.g.) 72-qubit crypto-relevant quantum computer attack RSA-2048?

+ +

Bonus: how might that be characterized? (e.g., nn-qubit requires xxx passes, run time ~yyy)

+ +

Shor's algorithm appears to allow for parallel execution or iterative runs with a combination step. Assumption is that smaller-qubit QC might be able to perform those pieces.

+ +

However, it is suggested that a 4000-qubit/100m-gate quantum computer would be necessary. As the quantum piece of Shor's is a large transform, I assume that sets the constraint for qubit size.

+ +

Side note: there also appear to be possible speedups that may reduce the run time, such as qubit recycling? or the 4-8 passes vs. the 20-30 passes (by David McAnally)

+",5113,,26,,5/13/2019 21:52,5/13/2019 21:52,"Lesser qubit computer doing the parts of Shor's against e.g., RSA-2048 sized prime",,1,0,,,,CC BY-SA 4.0 +4722,2,,4720,11/15/2018 18:46,,7,,"

I think that you misunderstood the concept of the tensor product here. There is no need for a Clifford gate in order to have a multi-qubit system. The fact that a multi-qubit system is described by the tensor product of the state vectors of its parts comes from the fourth postulate of quantum mechanics; refer to the $94^{th}$ page of Nielsen and Chuang for the formulation of this postulate.

+ +

Consequently, what I mean here is that if you have two qubits represented by state vectors $|\psi_1\rangle$ and $|\psi_2\rangle$, the tensor product is the mathematical operation that describes the composite state $|\Psi\rangle$ of the 2-qubit system. This is something known from the postulates, and so there is no need to use any Clifford gate to obtain it.

+ +

Clifford gates could be used, for example, if you have the state $|\Psi\rangle=|\psi_1\rangle\otimes |\psi_2\rangle$ and you want to obtain another state $|\Psi'\rangle$; the transformation would then be given by some Clifford-group unitary so that $|\Psi'\rangle=U|\Psi\rangle$. However, in order to calculate such a unitary, both states would need to be known, and so no more can be said here.

+",2371,,,,,11/15/2018 18:46,,,,0,,,,CC BY-SA 4.0 +4723,2,,4720,11/15/2018 18:50,,5,,"

There is no circuit operation or Clifford gate! If you have $|\psi_1\rangle$ and $|\psi_2\rangle$ entering a circuit, then mathematically we say that: +$$ + |\psi_1\rangle \otimes |\psi_2\rangle +$$

+ +

is entering the circuit.

+ +

This means if you represent the whole circuit as a unitary matrix $U$, it will have a dimension corresponding to the size of $|\psi_1\rangle \otimes |\psi_2\rangle$, not of $|\psi_1\rangle$ or $|\psi_2\rangle$. Your output is literally the matrix-vector product:

+ +

$$ +U\left(\left|\psi_1\right\rangle \otimes |\psi_2\rangle\right), +$$

+ +

even without using any extra Clifford gates!

+",,user5115,23,,11/16/2018 0:06,11/16/2018 0:06,,,,0,,,,CC BY-SA 4.0 +4724,2,,4720,11/15/2018 18:54,,4,,"

The tensor product is not a gate, but rather a way for us as humans to model the behavior of a quantum system. Whenever we're using multiple qbits, we can look at them in two ways: in their product state (a complex vector of size $2^n$ for $n$ qbits) or their individual state ($|\psi_0\rangle \otimes \ldots \otimes |\psi_{n-1}\rangle$). We can usually switch back and forth between the two representations at will (again, they are two equivalent ways of writing the same quantum state) except for when the qbits become entangled, in which case we cannot factor them back into the individual state (this is roughly the definition of entanglement).

+ +

When you're just learning quantum computing and are writing out matrices & vectors explicitly, you usually use the product state to calculate the action of an operator on the quantum state, then factor the result back into the individual state to see the action of that operator on the individual qbits. So for example, if we want to calculate the action of CNOT on $|10\rangle$ with leftmost qbit as control:

+ +

$C|10\rangle = C \begin{bmatrix} 0 \\ 1 \end{bmatrix} \otimes \begin{bmatrix} 1 \\ 0 \end{bmatrix} = +\begin{bmatrix} +1 & 0 & 0 & 0 \\ +0 & 1 & 0 & 0 \\ +0 & 0 & 0 & 1 \\ +0 & 0 & 1 & 0 \\ +\end{bmatrix} +\begin{bmatrix} 0 \\ 0 \\ 1 \\ 0 \end{bmatrix} = +\begin{bmatrix} 0 \\ 0 \\ 0 \\ 1 \end{bmatrix} = +\begin{bmatrix} 0 \\ 1 \end{bmatrix} \otimes \begin{bmatrix} 0 \\ 1 \end{bmatrix} += |11\rangle$

+ +

So we see that CNOT on $|10\rangle$ flipped the rightmost qbit to $|11\rangle$ as expected, because the control (leftmost) qbit was $1$.

+ +

Here's an entangled product state which cannot be factored into its individual state:

+ +

$C_{1,0}H_1|00\rangle = \begin{bmatrix} \frac{1}{\sqrt{2}} \\ 0 \\ 0 \\ \frac{1}{\sqrt{2}} \end{bmatrix}$

+ +

If you try to factor that out into the tensor product of two qbits, you will see you cannot.
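Both calculations are easy to check concretely; here is a hedged numpy sketch (numpy is used purely for illustration). It verifies that CNOT maps $|10\rangle$ to $|11\rangle$, and that the entangled state above cannot be factored: for two qbits, a state is a product state exactly when its $2\times2$ reshape has rank 1.

```python
import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I2 = np.eye(2)

# CNOT on |10> (leftmost qbit is control) flips the target, giving |11>
out = CNOT @ np.kron(ket1, ket0)

# Bell state C(H tensor I)|00>; a 2-qbit state factors iff its
# 2x2 reshape (matrix of amplitudes) has rank 1
bell = CNOT @ np.kron(H, I2) @ np.kron(ket0, ket0)
bell_rank = np.linalg.matrix_rank(bell.reshape(2, 2))  # 2 => entangled
```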

+",4153,,,,,11/15/2018 18:54,,,,1,,,,CC BY-SA 4.0 +4725,2,,4721,11/15/2018 18:55,,3,,"

Even with qubit recycling, 72 qubits will not be enough to do RSA-2048. +Table 1 of the paper:

+ +

https://www.nature.com/articles/nature12290

+ +

Tells you that 1154 qubits are needed to do RSA-768 (which is much smaller than RSA-2048). This is without error correction.

+ +

Sure you can use your 72-qubit quantum computer to do a little sub-routine of Shor's algorithm, but this will not help if you have to do the rest of the algorithm on a classical computer. For any benefit, the quantum computer has to be doing the ""rate-limiting step"".

+",,user5115,26,,5/13/2019 21:23,5/13/2019 21:23,,,,0,,,,CC BY-SA 4.0 +4726,1,,,11/15/2018 19:06,,2,93,"

I was looking up how to program for a D-Wave machine and I came across this image which says it's the ""optimal hardware graph"" for a D-Wave machine:

+ +

+ +

Unfortunately the image seems to have come from this website: +http://people.cs.vt.edu/~vchoi/MinorEmbed.pdf (I got a preview somewhere else and they provided a link to this website), and when I click on that link it says ""Forbidden You don't have permission to access /~vchoi on this server."" The same happens if I remove the PDF from the URL and only look at the faculty member's webpage. The webpage does work if I remove the faculty member's name though, but then it's just the CS department website.

+ +

So where can I find the original work where this comes from? +I searched for alternative sources on Google, but it seems that ""triad"" means very many different things!

+",,user5115,,,,11/15/2018 19:21,"What is the ""TRIAD"" graph and where can I find more information about it?",,1,0,,,,CC BY-SA 4.0 +4727,2,,4726,11/15/2018 19:21,,2,,"

Here are a few resources on TRIAD, which is a minor-embedding technique introduced by Vicky Choi:

+ +

Optimizing Adiabatic Quantum Program Compilation using a Graph-Theoretic Framework

+ +

Minor-Embedding in Adiabatic Quantum Computation: I. The Parameter Setting Problem

+ +

Minor-Embedding in Adiabatic Quantum Computation: II. Minor-universal graph design

+ +

The third one contains your image.

+",4127,,,,,11/15/2018 19:21,,,,2,,,,CC BY-SA 4.0 +4729,1,4732,,11/15/2018 20:31,,4,363,"

Let a three-qubit state shared between Alice, Bob and Charlie stationed at distant laboratories be +$$\psi_{ABC}=\frac{\sqrt{2}}{\sqrt{3}}|000\rangle+\frac{1}{\sqrt{3}}|111\rangle.$$

+ +

How to evaluate the maximum probability of transforming the state to a three-qubit maximally entangled state by local operations and classical communication only?

+",5007,,55,,12-09-2021 14:42,12-09-2021 14:42,How to transfer non maximally entangled state to maximally entangled?,,1,5,,,,CC BY-SA 4.0 +4730,1,6447,,11/15/2018 23:21,,6,431,"

I cannot seem to get an estimate for the number of solutions using the quantum counting algorithm described in Nielsen and Chuang, i.e. phase estimation with the Grover iteration acting as $U$.

+ +

I try doing the following with control and target as allocated qubit registers:

+ +
let controlBE = BigEndian(control);
+let ancilla = target[0];
+
+X(ancilla);
+ApplyToEachCA(H, control + target);
+for (i in 0..Length(control) - 1) {
+    Controlled GroverPow([control[Length(control) - 1 - i]], (2 ^ i, target));
+}
+Adjoint QFT(controlBE);
+
+let fiBE = MeasureInteger(controlBE);
+let numSolutionsD = PowD(Sin(ToDouble(fiBE) / 2.0), 2.0) * ToDouble(2 ^ Length(inputQubits));
+
+Message(""numSolutions: "" + Round(numSolutionsD));
+
+ +

My GroverPow is a discrete oracle that is supposed to perform the Grover iteration to the power defined by the given integer.

+ +
operation GroverPow(power: Int, qubits: Qubit[]): Unit {
+    let ancilla = qubits[0];
+    let inputQubits = qubits[1..Length(qubits) - 1];
+    let aug = Tail(inputQubits);
+    let ans = Most(inputQubits);
+
+    for (i in 1..power) {
+        Oracle(ans, database, ancilla, aug);  // Grover iteration
+        ApplyToEachCA(H, inputQubits);
+        ApplyToEachCA(X, inputQubits);
+        Controlled Z(Most(inputQubits), Tail(inputQubits));
+        ApplyToEachCA(X, inputQubits);
+        ApplyToEachCA(H, inputQubits);
+    }
+}
+
+ +

This just doesn't give the correct answer, even when I have the oracle do absolutely nothing. Is there an obvious bug that I'm missing? I've tried using various combinations of my home-grown functions as well as the built-in AmpAmpByOracle and QuantumPhaseEstimation functions and various initial/target states but to no avail. I've tried absolutely everything I can think of, and am almost starting to get suspicious of the validity of this algorithm...obviously it's sound but that's where I'm at! Just doesn't seem to work.

+",4657,,4657,,11/16/2018 16:06,6/14/2019 0:31,Quantum counting in Q#,,2,4,,,,CC BY-SA 4.0 +4732,2,,4729,11/16/2018 7:42,,5,,"

The first question that we have to deal with is what is meant by ""maximally entangled"" in this context. There's no single straightforward notion. In particular, for three qubits, there are two inequivalent classes of entangled state that cannot be interconverted by SLOCC (stochastic local operations and classical communication). Each has a maximally entangled representative: +$$ +|W\rangle=\frac{1}{\sqrt{3}}(|001\rangle+|010\rangle+|100\rangle)\qquad |GHZ\rangle=\frac{1}{\sqrt{2}}(|000\rangle+|111\rangle). +$$ +In this case, we're obviously talking about conversion to GHZ; conversion to W is impossible.

+ +

A straightforward protocol to achieve the conversion is to introduce the POVM +$$ +M_1=\frac{1}{\sqrt{2}}|0\rangle\langle 0|+|1\rangle\langle 1|\qquad M_2=\frac{1}{\sqrt{2}}|0\rangle\langle 0| +$$ +such that $M_1^\dagger M_1+M_2^\dagger M_2=\mathbb{I}$. If Alice performs this measurement and gets answer 1, then she has created the desired state, +$$ +(M_1\otimes\mathbb{I}\otimes\mathbb{I})|\psi\rangle=\sqrt{\frac{2}{3}}|GHZ\rangle, +$$ +which succeeds with probability 2/3. The other outcome gives a separable state, so there's no hope of getting anything useful out of it.

+ +

How do we know that this is the best we can do? Imagine a second protocol of two players, Alice and Bob. They initially share $(\sqrt{\frac23}|00\rangle+\frac{1}{\sqrt{3}}|11\rangle)$ and wish to create $(|00\rangle+|11\rangle)/\sqrt{2}$. We know that their greatest probability of success is 2/3 thanks to work by Vidal. Now, one specific strategy that Alice and Bob could follow is for Bob to introduce an extra qubit in the $|0\rangle$ state and apply a controlled-NOT targeting this new qubit, controlled from his qubit of the entangled state. This leaves them with $|\psi\rangle$. Then they apply the optimal tripartite protocol for converting to $|GHZ\rangle$ before Bob repeats the same controlled-NOT again. The overall success probability cannot be higher than 2/3, and the only probabilistic step in there is the protocol we're interested in, so that protocol cannot succeed with probability greater than 2/3. Hence our original solution must be optimal.
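The claims in the protocol can be verified numerically. A small numpy sketch (illustrative only) checks the completeness of the POVM, the 2/3 success probability, and that the post-measurement state is exactly $|GHZ\rangle$.

```python
import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

# POVM elements from the answer
M1 = np.outer(ket0, ket0) / np.sqrt(2) + np.outer(ket1, ket1)
M2 = np.outer(ket0, ket0) / np.sqrt(2)
completeness = M1.conj().T @ M1 + M2.conj().T @ M2  # should be I

# |psi> = sqrt(2/3)|000> + sqrt(1/3)|111>
k000 = np.kron(np.kron(ket0, ket0), ket0)
k111 = np.kron(np.kron(ket1, ket1), ket1)
psi = np.sqrt(2 / 3) * k000 + np.sqrt(1 / 3) * k111

# Alice applies M1; outcome probability and post-measurement state
post = np.kron(np.kron(M1, np.eye(2)), np.eye(2)) @ psi
p_success = np.vdot(post, post).real  # should be 2/3
ghz = (k000 + k111) / np.sqrt(2)
```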

+",1837,,,,,11/16/2018 7:42,,,,1,,,,CC BY-SA 4.0 +4733,2,,4720,11/16/2018 7:48,,2,,"

To put another way, the circuit is just

+ +

+ +

As everyone else said, the tensor product is just the way of constructing a composite quantum system from smaller subsystems, and doesn't require gates as such. But ordering is important, $|\psi_1\rangle\otimes|\psi_2\rangle\neq |\psi_2\rangle\otimes|\psi_1\rangle$, and there is a standard correspondence in quantum circuits between the first item in the tensor product being on the top wire.
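The ordering convention is easy to see numerically; a quick numpy sketch (purely illustrative):

```python
import numpy as np

psi1 = np.array([1.0, 0.0])               # |0>
psi2 = np.array([1.0, 1.0]) / np.sqrt(2)  # |+>

ab = np.kron(psi1, psi2)  # |0> on the top wire
ba = np.kron(psi2, psi1)  # |+> on the top wire
ordered_differently = not np.allclose(ab, ba)
```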

+",1837,,,,,11/16/2018 7:48,,,,0,,,,CC BY-SA 4.0 +4734,1,4735,,11/16/2018 9:07,,3,268,"

The state of a spin $\frac{1}{2}$ particle is $|0\rangle$, which is an eigenstate of $\sigma_z$. What is the most general way to show that the result of a spin measurement along any direction in the x-y plane is completely random?

+",5007,,26,,12/23/2018 12:58,12/23/2018 12:58,Quantum spin measurement,,1,2,,,,CC BY-SA 4.0 +4735,2,,4734,11/16/2018 10:58,,5,,"

Any set of commuting observables in any quantum state can be characterized by a joint classical distribution function describing the probabilities of its measurement outcomes in that quantum state. Since you need only a single observable, and it of course commutes with itself, the above is valid in your case.

+ +

The observable in your case is the spin component along a direction $\phi$ in the x-y plane (in units where $\hbar = 1$):

+ +

$$\sigma = \frac{1}{2}\left(\cos \phi \,\sigma_x + \sin \phi \,\sigma_y\right)$$

+ +

In a state having a density matrix $\rho$, the characteristic function of the probability density of this observable is given by (this is the most important thing to remember here):

+ +

$$g(t) = \operatorname{tr}(\rho\, e^{it\sigma})$$

+ +

In our case $\rho = |0\rangle\langle 0|$ is the projector onto the spin-up state. Since $(2\sigma)^2 = I$, the exponential is:

+ +

$$ e^{it\sigma} = \cos (t/2)\, I + 2 i \sin( t/2 )\,\sigma$$

+ +

The expectation values of $\sigma_x$ and $\sigma_y$ vanish in the state $|0\rangle$, thus:

+ +

$$g(t) = \cos (t/2)$$

+ +

The probability density function is the Fourier transform of the characteristic function:

+ +

$$f(s) = \frac{1}{2\pi}\int_{-\infty}^{\infty} g(t) e^{ist} dt$$

+ +

Where $s$ is the measurement outcome of $\sigma$. We get:

+ +

$$f(s) = 0.5 \delta(s-0.5) + 0.5 \delta(s+0.5)$$

+ +

This is the probability distribution of a Bernoulli random variable, taking the values $\pm 0.5$ with equal probability.
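As a numerical sanity check (a numpy sketch, using the convention $\hbar = 1$ so the outcomes are $\pm 1/2$): diagonalizing the spin observable along any direction $\phi$ in the x-y plane and applying the Born rule to the state $|0\rangle$ gives both outcomes with probability 1/2.

```python
import numpy as np

phi = 0.7  # any direction in the x-y plane
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
S = 0.5 * (np.cos(phi) * sx + np.sin(phi) * sy)  # spin observable

vals, vecs = np.linalg.eigh(S)                    # outcomes +/- 1/2
ket0 = np.array([1, 0], dtype=complex)            # eigenstate of sigma_z
probs = np.abs(vecs.conj().T @ ket0) ** 2         # Born-rule probabilities
```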

+",4263,,4263,,11/17/2018 11:05,11/17/2018 11:05,,,,0,,,,CC BY-SA 4.0 +4737,1,4739,,11/16/2018 17:04,,7,2083,"

I'm trying to deduce the Kraus representation of the dephasing channel using the Choi operator (I know the Kraus operators can be guessed in this case, I want to understand the general case).

+ +

The dephasing channel maps a density operator $\rho$ as +$$\rho\rightarrow D(\rho)=(1-p)\rho+ p\textrm{diag}(\rho_{00},\rho_{11}) $$

+ +

The Choi operator acts on a channel as

+ +

$$C(D)=(I \otimes D)\sum_{k,j=0}^1 \vert k\rangle \langle j \vert \otimes \vert k\rangle \langle j \vert=\sum_{k,j=0}^1\vert k\rangle \langle j \vert \otimes D(\vert k\rangle \langle j \vert)=\\=|0\rangle\langle 0|\otimes|0\rangle\langle 0|+p|0\rangle\langle 1|\otimes|0\rangle\langle 1|+p|1\rangle\langle 0|\otimes|1\rangle\langle 0|+|1\rangle\langle 1|\otimes|1\rangle\langle 1|=\\=|00\rangle\langle00|+p|01\rangle\langle01|+p|10\rangle\langle10|+|11\rangle\langle11|= \sum_{j=0}^3 |\psi_j\rangle\langle\psi_j|$$

+ +

Now, to find the Kraus operators, I should just find some $K_j$ such that $|\psi_j\rangle =(I\otimes K_j) \sum_{k=0}^1 \vert k\rangle \otimes \vert k\rangle$. These operators are simply

+ +

$$ K_0=|0\rangle\langle 0|\quad K_1=\sqrt{p}|1\rangle\langle 0| \quad K_2=\sqrt{p}|0\rangle\langle 1|\quad K_3=|1\rangle\langle 1|$$

+ +

And I should have $$D(\rho)=\sum_{j=0}^3 K_j\rho K_j^\dagger$$

+ +

But +$$ \sum_{j=0}^3 K_j\rho K_j^\dagger=(\rho_{00}+p\rho_{11})|0\rangle\langle0| + (\rho_{11}+p\rho_{00})|1\rangle\langle1|$$

+ +

Which is most certainly not what I should get. I'm sure I'm either making a massive calculation error, or I have massively misunderstood everything. +Moreover, doing this I should only be able to find 4 Kraus operators, while I know that the representation is not unique and in particular this channel can be represented by only two Kraus operators. Any help is appreciated.

+",5125,,55,,03-09-2021 09:36,11/14/2021 14:01,Deduce the Kraus operators of the dephasing channel using the Choi,,2,1,,,,CC BY-SA 4.0 +4738,1,4740,,11/16/2018 18:34,,2,313,"

What are the necessary & sufficient conditions for a matrix to be an observable, and what is the proof that any such matrix has eigenvalues -1 and 1 (if indeed that is the case)? I ask because in the standard Bell experimental setup the measurement outputs are always -1 or 1.

+ +

Possibly related: in a previous question I asked whether the squared absolute values of the eigenvalues of a unitary matrix are always 1 (they are).

+",4153,,55,,12/21/2021 1:24,12/21/2021 1:24,Are the eigenvalues of an observable always -1 and 1?,,1,0,,,,CC BY-SA 4.0 +4739,2,,4737,11/16/2018 19:15,,4,,"

Acting with the dephasing channel on the possible states of a single qubit:

+ +

\begin{align}D\left(\left|0\rangle\langle0\right|\right) &= \left|0\rangle\langle0\right| \\ D\left(\left|0\rangle\langle1\right|\right) &= \left(1-p\right)\left|0\rangle\langle1\right|\\ +D\left(\left|1\rangle\langle0\right|\right) &= \left(1-p\right)\left|1\rangle\langle0\right|\\ +D\left(\left|1\rangle\langle1\right|\right) &= \left|1\rangle\langle1\right|.\end{align}

+ +

This gives that \begin{align}C\left(D\right) &= \sum_{k,j=0}^1\vert k\rangle \langle j \vert \otimes D(\vert k\rangle \langle j \vert) \\ +&= |0\rangle\langle 0|\otimes|0\rangle\langle 0|+\left(1-p\right)|0\rangle\langle 1|\otimes|0\rangle\langle 1|+\left(1-p\right)|1\rangle\langle 0|\otimes|1\rangle\langle 0|+|1\rangle\langle 1|\otimes|1\rangle\langle 1|\\ +&= |00\rangle\langle00|+|00\rangle\langle11|+|11\rangle\langle00|+|11\rangle\langle11|- p|00\rangle\langle 11|-p|11\rangle\langle 00|\\ +&=\sum_{k, j=0}^1\left(1-p\right)\vert k\rangle \langle j \vert \otimes \vert k\rangle \langle j \vert + p\left(|0\rangle\langle 0|\otimes|0\rangle\langle 0|+|1\rangle\langle 1|\otimes|1\rangle\langle 1|\right) \\ +&= \sum_{j=0}^N |\psi_j\rangle\langle\psi_j|.\end{align}

+ +

Now using $|\psi_j\rangle =(I\otimes K_j) \sum_{k=0}^1 \vert k\rangle \otimes \vert k\rangle$ to get $$\sum_{j=0}^N |\psi_j\rangle\langle\psi_j| = \sum_{j=0}^N\sum_{k,l=0}^1\vert k\rangle\langle l\vert \otimes K_j\vert k\rangle \langle l\vert K_j^{\dagger},$$ which equals $C\left(D\right)$ when the Kraus operators $K_0 = \sqrt{1-p}I,\, K_1 = \sqrt p |0\rangle\langle 0|$ and $K_2 = \sqrt p |1\rangle\langle 1|$.

+ +

Taking an arbitrary (single qubit) density matrix $$\rho = \rho_{00}|0\rangle\langle 0| + \rho_{01}|0\rangle\langle 1| + \rho_{10}|1\rangle\langle 0| + \rho_{11}|1\rangle\langle 1|$$ and acting on this using the above Kraus operators gives +\begin{align}D(\rho)&=\sum_{j=0}^2 K_j\rho K_j^\dagger \\ +&=\left(1-p\right)\rho + p\rho_{00}|0\rangle\langle 0| + p\rho_{11}|1\rangle\langle 1|,\end{align} as expected for the dephasing channel.
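These Kraus operators are easy to verify numerically; a short numpy sketch (with an arbitrary illustrative $\rho$ and $p$) checks both the completeness relation and the action of the channel.

```python
import numpy as np

p = 0.3
rho = np.array([[0.6, 0.2 - 0.1j],
                [0.2 + 0.1j, 0.4]])  # arbitrary illustrative density matrix

K0 = np.sqrt(1 - p) * np.eye(2)
K1 = np.sqrt(p) * np.diag([1.0, 0.0])  # sqrt(p) |0><0|
K2 = np.sqrt(p) * np.diag([0.0, 1.0])  # sqrt(p) |1><1|

D_rho = sum(K @ rho @ K.conj().T for K in (K0, K1, K2))
expected = (1 - p) * rho + p * np.diag(np.diag(rho))
```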

+",23,,,,,11/16/2018 19:15,,,,0,,,,CC BY-SA 4.0 +4740,2,,4738,11/16/2018 19:33,,8,,"

An observable only needs to be Hermitian, and can have any real eigenvalues. They don't even need to be distinct eigenvalues: if there are repeated eigenvalues, we say that the eigenspace for that eigenvalue is degenerate.

+ +

(In the case of observables on a qubit, having a repeated eigenvalue makes the observable rather uninteresting, because absolutely all pure states are eigenstates in that case; I'd be tempted to call such an observable 'degenerate' in an informal sense as well in that case — though it is on occasion useful to include $\mathbf 1$ in an analysis of things to do with single-qubit observables.)

+ +

In the analysis of Bell's Theorem, the reason why the observables are taken to be ones with eigenvalues $\pm1$ is convention. It makes them analogous to Pauli spin operators in particular, and it makes a perfectly mixed state have expectation value $0$. Having eigenvalues of $\pm1$ also allows the expectation values of the operators to describe a bias towards one of two outcomes, and allows expectation values of tensor products to be straightforwardly interpreted as correlation coefficients of outcomes. You could prove versions of Bell's Theorem for observables with other eigenvalues, but those versions could be derived from Bell's Theorem as it is usually stated.
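As a quick numerical illustration of the first paragraph (a numpy sketch): any matrix of the form $A + A^\dagger$ is Hermitian, and its eigenvalues are always real but need not be $\pm1$.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = A + A.conj().T           # Hermitian by construction

vals = np.linalg.eigvals(H)  # real, but otherwise unconstrained
```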

+",124,,124,,11/16/2018 21:15,11/16/2018 21:15,,,,8,,,,CC BY-SA 4.0 +4742,2,,4730,11/16/2018 22:57,,3,,"

Comparing your code to the reference implementation for the Grover search quantum kata, I think the problem might be in the way you're using your oracle in GroverPow. It's a little hard to tell, but if your Oracle is flipping the state of the ancilla based on whether or not the state is a ""hit"", you're then not including the ancilla in the rest of the iteration. In the kata, there's a step that transforms a ""marking"" oracle into a ""phase flip"" oracle; might you need to do that as well?

+ +

Sorry I can't be more certain! Sharing the code for your oracle might help.
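In the meantime, here is a language-agnostic numpy sketch (not Q#) of the marking-vs-phase distinction: a marking oracle that flips an ancilla becomes a phase-flip oracle when the ancilla is prepared in $|-\rangle$, which is the usual phase-kickback trick.

```python
import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

# Marking oracle on (input, ancilla): flips the ancilla iff input is |1>
U_mark = np.array([[1, 0, 0, 0],
                   [0, 1, 0, 0],
                   [0, 0, 0, 1],
                   [0, 0, 1, 0]])

minus = H @ ket1                  # |-> = H X |0>
state = np.kron(ket1, minus)      # marked input, ancilla in |->
kicked = U_mark @ state           # picks up a -1 phase on the input branch

state0 = np.kron(ket0, minus)     # unmarked input is left alone
unkicked = U_mark @ state0
```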

+",4265,,,,,11/16/2018 22:57,,,,2,,,,CC BY-SA 4.0 +4743,1,,,11/17/2018 0:32,,1,330,"

Is it possible to create controlled gates with an exponent in Cirq? For example, a controlled $\sqrt Z$ gate.

+",4907,,26,,11/17/2018 12:36,11/18/2018 13:09,Is it possible to create controlled gates with an exponent in Cirq?,,2,0,,,,CC BY-SA 4.0 +4744,2,,4743,11/17/2018 1:09,,3,,"

Looking at the documentation and the GitHub repository, there is something called ControlledGate. This class is said to augment an existing gate with a control qubit. +You can look at the test file. +I can see line 72:

+ +
cxa = cirq.ControlledGate(cirq.X**cirq.Symbol('a'))
+
+ +

Could you try:

+ +

gate = cirq.ControlledGate(cirq.X**0.5) ?

+",4127,,26,,11/18/2018 13:09,11/18/2018 13:09,,,,0,,,,CC BY-SA 4.0 +4745,2,,4743,11/17/2018 1:34,,6,,"

Yes, it is possible to create controlled gates with an exponent in Cirq.

+ +

For the specific case of the Z gate, Cirq includes a dedicated CZ gate that can be raised to a power:

+ +
cs = cirq.CZ**0.5
+
+ +

More generally, cirq.ControlledGate works on any gate. It's a bit clunkier than the dedicated gates, but it does support being raised to a power (as long as the gate it is being applied to can be raised to a power). The following two lines are equivalent:

+ +
controlled_sqrt_y = cirq.ControlledGate(cirq.Y**0.5)
+controlled_sqrt_y = cirq.ControlledGate(cirq.Y)**0.5
+
+ +

You can also raise to a power after applying to qubits, which is syntactically convenient:

+ +
cs_on_ab = cirq.CZ(a, b)**0.5
+
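Independently of the Cirq API, you can sketch the matrix these expressions should produce (a numpy illustration; Cirq's CZ**0.5 corresponds to this principal square root): a controlled gate is block-diagonal, acting as the identity when the control is $|0\rangle$ and as the gate when it is $|1\rangle$.

```python
import numpy as np

sqrt_Z = np.diag([1, 1j])  # principal square root of Z
controlled_sqrt_Z = np.block([
    [np.eye(2), np.zeros((2, 2))],
    [np.zeros((2, 2)), sqrt_Z],
])  # identity on the |0> control branch, sqrt(Z) on the |1> branch

# squaring the controlled square root recovers the ordinary CZ gate
CZ = np.diag([1, 1, 1, -1])
```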
+",119,,,,,11/17/2018 1:34,,,,0,,,,CC BY-SA 4.0 +4746,1,4759,,11/17/2018 7:32,,1,99,"

Let's say, that we are in the possession of a quantum gate, that is implementing the action of such an operator

+ +

$$ \hat{U}|u \rangle = e^{2 \pi i \phi}|u\rangle $$

+ +

Moreover, let's say, that this operator has at least two eigenvectors $|u\rangle$ and $|v\rangle$, with the following eigenvalues:

+ +

$$ \hat{U}|u \rangle = e^{2 \pi i \phi_0}|u\rangle $$

+ +

$$ \hat{U}|v \rangle = e^{2 \pi i \phi_1}|v\rangle $$

+ +

If we would like to act with such a quatntum gate on the eigenvector $|u\rangle$, we could write this in the matrix form:

+ +

$$ \hat{U}|u\rangle \equiv \begin{bmatrix} + e^{2 \pi i \phi_0} & 0 \\ + 0 & e^{2 \pi i \phi_0} \\ +\end{bmatrix} |u\rangle \equiv e^{2 \pi i \phi_0} |u\rangle $$

+ +

What I want to do, is to act with the $\hat{U}$ gate on the superposition of $|u\rangle$ and $|v\rangle$, that is:

+ +

$$ \hat{U} [c_0|u\rangle + c_1|v\rangle] = c_0e^{2 \pi i \phi_0}|u\rangle + c_1e^{2 \pi i \phi_1}|v\rangle $$

+ +

We could use the following notation to write down eigenvectors $|u\rangle$ and $|v\rangle$ in the matrix form:

+ +

$$ |u\rangle = \begin{bmatrix} a_0 \\ a_1 \end{bmatrix}, |v\rangle = \begin{bmatrix} a_2 \\ a_3 \end{bmatrix} $$ +Then, we could rewrite the action $\hat{U} [c_0|u\rangle + c_1|v\rangle]$ as

+ +

$$ \hat{U} [c_0|u\rangle + c_1|v\rangle] = c_0e^{2 \pi i \phi_0}|u\rangle + c_1e^{2 \pi i \phi_1}|v\rangle $$

+ +

$$ = c_0e^{2 \pi i \phi_0}\begin{bmatrix} a_0 \\ a_1 \end{bmatrix} + c_1e^{2 \pi i \phi_1} \begin{bmatrix} a_2 \\ a_3 \end{bmatrix} = \begin{bmatrix} c_0 e^{2 \pi i \phi_0} a_0 + c_1 e^{2 \pi i \phi_1} a_2 \\ c_0 e^{2 \pi i \phi_0} a_1 + c_1 e^{2 \pi i \phi_1} a_3 \end{bmatrix} $$

+ +

Is there any way of writing the $\hat{U}$ gate in the matrix form for the above case? The only thing that comes to my mind, is that it should ""at the same time"" have values $e^{2 \pi i \phi_0}$ and $e^{2 \pi i \phi_1}$ on its diagonal, but I know that this reasoning is wrong and I was wondering, if there is some official way to write this down.

+",2098,,26,,11/18/2018 13:11,11/19/2018 8:30,How to properly write the action of a quantum gate implementing an operator $U$ on the superposition of its eigenvectors?,,2,1,,,,CC BY-SA 4.0 +4747,1,4748,,11/17/2018 14:46,,5,278,"

I am doing a thesis on ""Metaheuristics and Quantum Computing"", and was wondering if anyone could recommend some papers/pages +to read talking about hybrid quantum/classical computing.

+ +

(My idea is to get a quantum population, evaluate it through a classical function, then, based on the classical evaluation, change the original state of the qubits to get a new quantum sample, and so on and so forth.)

+",5130,,124,,11/17/2018 21:48,11/27/2018 15:36,Resources on hybrid quantum-classical algorithms applied to combinatorial optimization problems,,2,1,,,,CC BY-SA 4.0 +4748,2,,4747,11/17/2018 16:10,,4,,"

So for hybrid quantum-classical algorithms, I suggest looking at:

+ + + +

The list is non-exhaustive, of course.

+",4127,,,,,11/17/2018 16:10,,,,1,,,,CC BY-SA 4.0 +4749,1,5042,,11/17/2018 16:34,,6,1104,"

I am trying to implement VQE in pyQuil and am dumbfounded by how to measure the expectation value of a general Hamiltonian on $\mathbb{C}^{2^n}$ i.e. determine $\langle\psi , H \psi\rangle$ on a Quantum computer. As far as I understand on a real Quantum Computer (not any quantum virtual machine) I can only measure in the computational basis, which is the basis of the Hamiltonian $H = X = \sum x \left|x\right>\left<x\right|$, but not for any Hamiltonian whose eigenvectors are not the computational basis. But how do I measure with any Hamiltonian that is not diagonal in the computational basis?

+ +

Sure, I can measure e.g. some of the qubits in the $X$-basis instead of the $Z$-basis by applying a Hadamard gate to them, but this surely doesn't help me if I want to measure something non-local, i.e. if the ground state of my Hamiltonian is an entangled state.

+ +

On a maybe related note: can I write any Hamiltonian (Hermitian matrix) as a Pauli decomposition? I know I can for a single qubit, but does this hold for multiple qubits as well?

+",4850,,26,,1/31/2019 19:23,1/31/2019 19:25,Measuring the Hamiltonian in the VQE,,2,0,,,,CC BY-SA 4.0 +4750,2,,4749,11/17/2018 20:25,,1,,"

No. The computational basis is not necessarily the basis that diagonalizes the Hamiltonian. It also looks like you are confusing the X basis with the basis of the Hamiltonian.

+ +

Advice: You should write it as $H = \sum \lambda_i | \lambda_i \rangle \langle \lambda_i |$, not with $x$, so you don't confuse it with the $X$ operator. The right notation will help you avoid confusing yourself.

+",434,,,,,11/17/2018 20:25,,,,1,,,,CC BY-SA 4.0 +4751,2,,4746,11/17/2018 21:02,,1,,"

It would be better to ask this on Mathematics Stack Exchange.

+ +

Let $V$ be the unitary matrix that takes

+ +

$$ +\begin{pmatrix} +1\\ +0 +\end{pmatrix} +\to \begin{pmatrix} +a_0\\ +a_1 +\end{pmatrix} +$$

+ +

and

+ +

$$ +\begin{pmatrix} +0\\ +1 +\end{pmatrix} +\to \begin{pmatrix} +a_2\\ +a_3 +\end{pmatrix} +$$

+ +

Then

+ +

$$ +V \begin{pmatrix} +e^{2\pi i \phi_0} & 0\\ +0 & e^{2 \pi i \phi_1} +\end{pmatrix} V^\dagger +$$

+ +

will take $(a_0,a_1)$ to $(1,0)$ and then $e^{2 \pi i \phi_0} (1,0)$ and then from there $e^{2 \pi i \phi_0} (a_0,a_1)$.

+ +

Similarly for $(a_2,a_3)$

+ +

Simple change of basis problem.
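A quick numpy sketch of this construction (with illustrative eigenvectors and phases) confirms that $V D V^\dagger$ acts on each eigenvector with its own phase.

```python
import numpy as np

# Illustrative orthonormal eigenvectors (a0, a1) and (a2, a3), and phases
u = np.array([1.0, 1.0]) / np.sqrt(2)
v = np.array([1.0, -1.0]) / np.sqrt(2)
phi0, phi1 = 0.25, 0.7

V = np.column_stack([u, v])  # maps (1,0) -> u and (0,1) -> v
D = np.diag([np.exp(2j * np.pi * phi0), np.exp(2j * np.pi * phi1)])
U = V @ D @ V.conj().T       # the gate in the computational basis

applied_to_u = U @ u         # = e^{2 pi i phi0} u
applied_to_v = U @ v         # = e^{2 pi i phi1} v
```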

+",434,,26,,11/18/2018 13:11,11/18/2018 13:11,,,,0,,,,CC BY-SA 4.0 +4752,2,,2366,11/17/2018 22:41,,3,,"

The paper proposes that making the entire blockchain quantum could address the looming threat to blockchain encryption. While quantum cryptography has been suggested as a workaround for this problem before, the design proposed by Rajan and Visser is novel. They argue that the solution lies in developing a blockchain that rests on quantum particles entangled in time, rather than in space. With this structure, any attempt to hack or tamper with the blockchain would destroy the link, since the scheme depends critically on entanglement. In their paper, Rajan and Visser explain that by encoding transactions on a quantum particle (or photon), it would be possible to entangle the past information, allowing the chronologically older blocks to vanish once they have been absorbed into the more recent additions.

+ +

There might be a real chance of a quantum-networked time-machine-like effect in the future, based on the current research on quantum entanglement in blockchains.

+",5028,,,,,11/17/2018 22:41,,,,0,,,,CC BY-SA 4.0 +4753,1,,,11/18/2018 2:33,,6,85,"

A common task to perform during quantum computation on the surface code is moving qubits from one place to another. There are standard ways to do this within the surface code, but I was wondering what the actual fundamental limits are. If we forget about the fact that we're using the surface code, and just focus on the fact that we have a planar grid of noisy qubits with nearest-neighbor connections, and a fast classical computer noting measurements and generally helping out, how fast can we move quantum information across that patch?

+ +

Given an operation failure rate $\epsilon$, a patch of length L and height H, and the ability to perform operations in parallel with some duration T, how long does it take to move N qubits from the left side of the patch to the right side of the patch?

+",119,,55,,12-06-2021 22:12,11/18/2022 11:06,"What is the quantum bandwidth of a planar array of noisy qubits, assuming free classical communication?",,1,7,,,,CC BY-SA 4.0 +4754,1,,,11/18/2018 12:13,,8,2378,"

How did we derive that the state we get by $n$ qubits is their tensor product? You can use $n=2$ in the explanation for simplicity.

+",2559,,26,,12/23/2018 12:48,12/23/2018 12:48,Why is the state of multiple qubits given by their tensor product?,,3,2,,,,CC BY-SA 4.0 +4755,2,,4754,11/18/2018 15:45,,4,,"

This assertion is an axiom of quantum mechanics. It appears, for example, as Postulate 4 on page 94 of Nielsen and Chuang.

+ +

It is considered to be one of the Dirac-von Neuman axioms of quantum mechanics.

+ +

However, when the quantum system's Hilbert space is defined as the space of $l^2$ functions on some set, then when the set is a Cartesian product, the corresponding Hilbert space becomes a tensor product. For example, when you put spins on a lattice $\Lambda$ as in Kitaev's surface code, then the system's Hilbert space is: +$$l^2(\underline{2}^{\Lambda}) = \bigotimes_{\Lambda} \mathbb{C}^2$$ +Or, when you quantize a system of two particles moving on the line $\mathbb{R}$, the individual particle Hilbert space is $ L^2(\mathbb{R})$ and the composite system Hilbert space is $L^2(\mathbb{R} \times \mathbb{R}) $. It is known that +$$L^2(\mathbb{R} \times \mathbb{R}) = L^2(\mathbb{R}) \bigotimes L^2(\mathbb{R}).$$ +Even though a specific basis of the tensor product $L^2(\mathbb{R}) \bigotimes L^2(\mathbb{R})$ does not include the entangled states in $L^2(\mathbb{R} \times \mathbb{R})$, these states can be constructed from combinations of this basis.

+ +

However, the above explanation doesn't prove the assertions; it only transfers the problem to the classical spaces before quantization. Then why should they be taken as Cartesian products?

+ +

Many authors have analyzed this problem from the quantum logic point of view; see for example +Aerts and Daubechies. These attempts intend to find a logical equivalent to the above axiom, not to prove it.

+",4263,,4263,,11/18/2018 18:36,11/18/2018 18:36,,,,0,,,,CC BY-SA 4.0 +4756,1,,,11/18/2018 21:02,,13,417,"

According to Wikipedia (which quotes this paper by Preskill: https://arxiv.org/abs/1203.5813), the definition of quantum supremacy is

+ +
+

Quantum supremacy or quantum advantage is the potential ability of + quantum computing devices to solve problems that classical computers + practically cannot.

+
+ +

In that same paper, Preskill says that a more feasible approach would be to find quantum systems that a quantum computer can simulate in polynomial time while a classical computer can't.

+ +

My question is: would that situation be enough to prove quantum supremacy? How do we know that no better classical algorithm exists? Maybe there is an efficient way of simulating that system but we don't know it yet. If this is the case, then proving quantum supremacy is more about rigorously proving that a problem is classically hard than about finding that it is quantumly easy, right?

+",5140,,,,,11/20/2018 9:50,Quantum Supremacy: How do we know that a better classical algorithm doesn't exist?,,2,0,,,,CC BY-SA 4.0 +4757,2,,4756,11/18/2018 22:26,,9,,"

For all we know, it is extraordinarily hard to prove that a problem which can be solved by a quantum computer is classically hard.

+ +

The reason is that this would solve an important and long-standing open problem in complexity theory, namely whether PSPACE is larger than P.

+ +

Specifically, any problem which can be solved by a quantum computer in polynomial time can also be solved in polynomial space by a classical computer (the class PSPACE). However, it is not known whether PSPACE is strictly larger than P (the class of problems efficiently solvable on a classical computer). This is a long-standing open question in complexity theory, and thus any hardness result as the ones you talk about would also resolve that question, making it an extremely hard problem.

+ +

(In fact, there are tighter upper bounds on BQP, most importantly the class PP, but separating PP from P is even harder.)

+ +

This might be a reason for Preskill's careful formulation

+ +
+

Quantum supremacy or quantum advantage is the potential ability of quantum computing devices to solve problems that classical computers practically cannot.

+
+ +
+ +

It should however be said that this does not mean that we cannot make any statement about the hardness of certain problems: What we can do is to relate their hardness to the hardness of other problems, for which -- while equally not proven -- there is a lot of accumulated evidence: For instance, it might be known that if problem X were easy, then a range of other problems which have long resisted solution would also be easy.

+ +

Thus, while we cannot unconditionally prove hardness of a problem, it is possible to relate it to the hardness of other problems whose hardness, while also unproven, is far better backed up by evidence.

+",491,,491,,11/20/2018 9:50,11/20/2018 9:50,,,,8,,,,CC BY-SA 4.0 +4758,2,,4754,11/19/2018 8:12,,5,,"

Perhaps it helps to take a step back and start with something simpler:

+ +
+

why do we tabulate probability amplitudes for state vectors and unitaries?

+
+ +

For a single quantum system with $d$ distinct states, labelled 0 to $d-1$, we associate a complex number $a_i$ with the probability amplitude for being in state $i$. For a unitary, we associate a probability amplitude $U_{ij}$ with transforming an input $j$ into an output $i$. We choose to tabulate these as +$$ +U=\left(\begin{array}{cccc} +U_{00} & U_{01} & \ldots & U_{0,d-1} \\ +U_{10} & U_{11} & \ldots \\ +\vdots & \vdots & \ddots \\ +U_{d-1,0} & U_{d-1,1} & \ldots & U_{d-1,d-1} +\end{array}\right)\qquad |\psi\rangle=\left(\begin{array}{c} a_0 \\ a_1 \\ \vdots \\ a_{d-1} \end{array}\right) +$$ +for the simple reason that calculating $U|\psi\rangle$ takes care of the two axioms of quantum theory that tell us how to calculate the output probability amplitudes:

+ +
    +
  • for independent events, the probability amplitude for both events happening is the product of the individual probability amplitudes.
  • +
  • for mutually exclusive events, the probability amplitude for either of the two events to happen is the sum of their probability amplitudes.
  • +
+ +

This puts us in a position to answer

+ +
+

Why is the state of multiple qubits given by their tensor product?

+
+ +

If two quantum systems have distinguishable states $i$ and $j$, then we can choose to label the states of the composite system by $i,j$, meaning that the first system is in state $i$ and the second system is in state $j$. Then, if the two systems start in states +$$ +|\psi\rangle=\left(\begin{array}{c} a_0 \\ a_1 \\ \vdots \\ a_{d-1} \end{array}\right) \qquad |\phi\rangle=\left(\begin{array}{c} b_0 \\ b_1 \\ \vdots \\ b_{d-1} \end{array}\right), +$$ +then what's the probability amplitude for the overall system to be in $1,0$, for example? By the independence axiom, it must be $a_1b_0$. Hence, if we tabulate the probability amplitudes using the ordering $00,01,02,03,\ldots (0,d-1),10,11,\ldots,(1,d-1),20,21,\ldots,(d-1,d-1)$, the column vector turns out to be exactly $|\psi\rangle\otimes|\phi\rangle$, basically by definition of what the tensor product is. If you do similar things with unitaries $U$ and $V$, you find the composite action is $U\otimes V$, and everything is internally consistent, meaning that the outcome of the combined unitary evolution would be +$$ +(U\otimes V)(|\psi\rangle\otimes|\phi\rangle)=(U|\psi\rangle)\otimes(V|\phi\rangle), +$$ +as was explicitly checked here.

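As a quick numerical sanity check of this composition rule, here is a small numpy sketch (an illustration with randomly generated states and unitaries, not part of the derivation above):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_state(d):
    # random normalized state vector of dimension d
    v = rng.normal(size=d) + 1j * rng.normal(size=d)
    return v / np.linalg.norm(v)

def random_unitary(d):
    # random unitary via QR decomposition of a complex Gaussian matrix
    m = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    q, _ = np.linalg.qr(m)
    return q

psi, phi = random_state(2), random_state(2)
U, V = random_unitary(2), random_unitary(2)

# (U tensor V)(|psi> tensor |phi>) equals (U|psi>) tensor (V|phi>)
lhs = np.kron(U, V) @ np.kron(psi, phi)
rhs = np.kron(U @ psi, V @ phi)
assert np.allclose(lhs, rhs)
```

Note that `np.kron` uses exactly the ordering of basis states $00,01,\ldots$ described above.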
+",1837,,,,,11/19/2018 8:12,,,,0,,,,CC BY-SA 4.0 +4759,2,,4746,11/19/2018 8:30,,2,,"

The most straightforward way to construct the matrix representation of $U$ is to write +$$ +U=e^{2\pi i\phi_0}|u\rangle\langle u|+e^{2\pi i\phi_1}|v\rangle\langle v| +$$ +which will work just fine under the assumption that $\langle u|v\rangle=0$ (which must be the case if $U$ is going to be unitary).

+ +

However, you should also be able to work it out from the maths you were writing down, so long as you impose some of the important properties of the eigenvectors. Firstly, you need $\langle u|u\rangle=1$, so $|a_0|^2+|a_1|^2=1$. Secondly, you need $\langle u|v\rangle=0$, as already mentioned. This means that +$$ +|v\rangle=e^{i\gamma}\left(\begin{array}{c} +-a_1^\star \\ a_0^\star +\end{array}\right) +$$

+ +

Now, when trying to work out $U$, it helps to pick two different sets of $(c_0,c_1)$. In the first, we want $c_0|u\rangle+c_1|v\rangle=\left(\begin{array}{c} 1 \\ 0 \end{array}\right)$ because $U\left(\begin{array}{c} 1 \\ 0 \end{array}\right)$ is the first column of $U$. This is achieved with $c_0=a_0^\star$ and $c_1=-a_1e^{-i\gamma}$. Similarly, if we can calculate $U\left(\begin{array}{c} 0 \\ 1 \end{array}\right)$, using $c_0=a_1^\star$ and $c_1=e^{-i\gamma}a_0$, we find the second column of $U$.

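Following the recipe above, here is a small numpy sketch (an illustration with arbitrarily chosen phases) that builds $U$ from its eigendecomposition and checks that it is unitary with the stated eigenvalues:

```python
import numpy as np

rng = np.random.default_rng(1)

# pick a random normalized eigenvector |u> = (a0, a1)
u = rng.normal(size=2) + 1j * rng.normal(size=2)
u = u / np.linalg.norm(u)
a0, a1 = u

# orthogonal partner |v>, unique up to the free phase e^{i*gamma}
gamma = 0.7
v = np.exp(1j * gamma) * np.array([-np.conj(a1), np.conj(a0)])
assert abs(np.vdot(u, v)) < 1e-12   # <u|v> = 0

# U = e^{2 pi i phi0}|u><u| + e^{2 pi i phi1}|v><v|
phi0, phi1 = 0.1, 0.3
U = (np.exp(2j * np.pi * phi0) * np.outer(u, u.conj())
     + np.exp(2j * np.pi * phi1) * np.outer(v, v.conj()))

assert np.allclose(U @ U.conj().T, np.eye(2))             # unitary
assert np.allclose(U @ u, np.exp(2j * np.pi * phi0) * u)  # eigenvector checks
assert np.allclose(U @ v, np.exp(2j * np.pi * phi1) * v)
```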
+",1837,,,,,11/19/2018 8:30,,,,0,,,,CC BY-SA 4.0 +4760,1,4763,,11/19/2018 11:52,,5,401,"

In the discussions about quantum correlations, particularly beyond entanglement (discord, dissonance, etc.), one can often meet two definitions of mutual information of a quantum system $\rho^{AB}$: + $$ + I(\rho^{AB}) = S(\rho^A) + S(\rho^B) - S(\rho^{AB}) + $$ + and + $$ + J(\rho^{AB}) = S(\rho^A)-S_{\{\Pi^B_j\}}(\rho^{A|B}), + $$ +where $S$ is the von Neumann entropy, $\rho^A$ and $\rho^B$ are the reduced states of the individual subsystems of $\rho^{AB}$ and the second term in $J$ is the quantum analogue of the conditional entropy +$$ +S_{\{\Pi^B_j\}}(\rho^{A|B}) = \sum_j p_j S(\rho^{A|\Pi^B_j}). +$$ +In the expression for the conditional entropy $\rho^{A|\Pi^B_j} = \text{Tr}_B[\rho^{AB} (\mathbb{I}^A \otimes \Pi^B_j )]/p_j $ are the states of the subsystem $A$ after getting a particular projector $\Pi^B_j$ in $B$, which happens with a probability $p_j = \text{Tr}[\rho^{AB} (\mathbb{I}^A \otimes \Pi^B_j ) ]$. While $I$ characterizes the total correlations between $A$ and $B$ the second expression involves a measurement process, in which non-classical features of $\rho^{AB}$ are lost, and therefore $J$ characterizes classical correlations in $\rho^{AB}$.

+ +

While measuring $J$ is relatively straightforward (for 2 qubits one can just measure 4 probabilities $p(\Pi^A_i \Pi^B_j), \, i,j = 1,2$ and calculate the mutual information of the resulting probability distribution), I can't think of an easy way of estimating $I$. So my question is: is it possible to measure $I$ without performing a full tomography of $\rho^{AB}$?

+",5145,,55,,04-04-2022 19:27,04-04-2022 19:27,Does computing the quantum mutual information $I(\rho^{AB})$ require full tomographic information of $\rho^{AB}$?,,1,2,,,,CC BY-SA 4.0 +4761,1,,,11/19/2018 13:40,,5,381,"

I have some very basic questions about stabilizers.

+ +

What I understood:

+ +

To describe a state $|\psi \rangle$ that lives in an $n$-qubit Hilbert space, we can either give the wavefunction (i.e. the expression of $|\psi\rangle$), or give a set of commuting observables of which $|\psi\rangle$ is an eigenvector with $+1$ eigenvalue.

+ +

We define a stabilizer $M$ of $|\psi \rangle$ as a tensor product of $n$ Pauli matrices (including the identity) that verifies $M |\psi \rangle = |\psi\rangle$.

+ +

And (apparently) we need $n$ stabilizers to fully define a state.

+ +
+ +

The things I don't understand:

+ +
    +
  1. How can a stabilizer necessarily be a product of Pauli matrices?
  2. +
+ +

With $n=1$, I take $|\psi \rangle = \alpha | 0 \rangle + \beta |1 \rangle$; except for specific values of $\alpha$ and $\beta$, this state is an eigenvector only of $I$ (not of the other Pauli matrices). But saying $I$ is the stabilizer doesn't tell me which state I am working with.

+ +
    +
  1. How can we need only $n$ stabilizers to fully define a state?
  2. +
+ +

With $n$ qubits we have a $2^n$-dimensional Hilbert space. I thus expect to need $2^n$ stabilizers, not $n$, to fully describe a state.

+ +
+ +

I am looking for a simple answer. Preferably an answer based on the same materials as my question, if possible. I am really a beginner in quantum error correction.

+ +

I learned these things within a 1-hour tutorial, so I don't have references for which book I learned this from. It is what I understood (maybe badly) from the professor talking.

+",5008,,26,,11/19/2018 13:57,11/20/2018 8:32,Stabilizer for quantum error correction code,,1,0,,,,CC BY-SA 4.0 +4762,2,,4761,11/19/2018 14:48,,2,,"

Here are a couple of observations which will hopefully clarify things.

+ +
    +
  1. Only some states have Pauli stabilisers.

    + +

    You have correctly identified that not all states have Pauli stabilisers. An example of a state that does not have any Pauli stabilisers (apart from the identity operator of course, which we typically ignore) is $\lvert A \rangle = \tfrac{1}{\sqrt 2}\bigl( \lvert 0 \rangle + \mathrm{e}^{i \pi /4} \lvert 1 \rangle \bigr)$.

    + +

    However, a quantum error correction code is not a single state: it is a subspace of some larger state-space. Some subspaces can be described as the set of states which are stabilised by a set of commuting operators from the Pauli group, on some number $n$ of qubits. +That set of stabilising operators forms an abelian subgroup of the Pauli group, and is referred to as a stabiliser group. +If the stabiliser group has $r$ generators, then the dimension of the subspace which they jointly stabilise is $2^{n-r}$. +In the case that $r = n$, the space which you stabilise has dimension $2^0 = 1$, which means that you have identified a single state (representing in this case a somewhat degenerate example of an error correcting code, because it is 'encoding' zero qubits of information).

  2. +
  3. Only $n$ stabiliser generators are needed to characterise a state, because for each independent stabiliser (each operator for which you impose the restriction to the $+1$ eigenspace, which cannot be formed as a product of the other operators), you are essentially removing one qubit's worth of freedom from the range of states that you can have.

    + +

The above is of course the intuitive description. More formally — given that we are talking about commuting operators selected from the Pauli group — one can see this as follows:

    + +
      +
    • For any commuting set of $r$ independent Pauli operators $\{P_1,P_2,\ldots,P_r\}$ on $n$ qubits, there is a Clifford group operation $C$ such that $C P_j C^\dagger = Z_j$ for each $1 \leqslant j \leqslant r$, where $Z_j$ is the single-qubit $Z$ operator acting only on qubit $j$.

    • +
    • The set of Pauli operators $\{ Z_1, Z_2, \ldots, Z_r \}$, as a set of stabilisers, describes a set of states in which qubit $1$ is fixed to the state $\lvert 0 \rangle$, qubit $2$ is also fixed to the state $\lvert 0 \rangle$, and so forth up to qubit $r$; the more of these operators there are, the more qubits there are with fixed states, and the fewer degrees of freedom there are in the states which satisfy those constraints. In particular, the subspace of states satisfying this property has dimension $2^{n-r}$.

    • +
    • The error correcting code is related to this second set of states with single-qubit constraints by a unitary transformation, which preserves the dimension of subspaces by virtue of being invertible. Thus, the error correcting code also has dimension $2^{n-r}$.

    • +
    + +

    Each added stabiliser generator can be said to remove one qubit of freedom from the state-space, in the sense of the formal argument above. In particular, any set of $n$ independent commuting Pauli stabilisers on $n$ qubits, will characterise some single state (though, as observed above, not all states do have such a representation).

  4. +
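To make the dimension counting concrete, here is a small numpy sketch (an illustration for $n=2$, using the projector onto the joint $+1$ eigenspace) checking that $r$ independent commuting stabilisers cut the stabilised subspace down to dimension $2^{n-r}$:

```python
import numpy as np

I2 = np.eye(2)
Z = np.array([[1, 0], [0, -1]])

def plus_one_projector(P):
    # projector onto the +1 eigenspace of an operator P with P^2 = I
    return (np.eye(P.shape[0]) + P) / 2

# stabiliser generators Z1 = Z tensor I and Z2 = I tensor Z on n = 2 qubits
Z1 = np.kron(Z, I2)
Z2 = np.kron(I2, Z)

# dimension of the jointly stabilised subspace = trace of the joint projector
joint = plus_one_projector(Z1) @ plus_one_projector(Z2)
dim = int(round(np.trace(joint).real))
assert dim == 1   # 2^(n-r) with n = r = 2: a single state, namely |00>

# one generator alone leaves 2^(2-1) = 2 dimensions
assert int(round(np.trace(plus_one_projector(Z1)).real)) == 2
```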
+",124,,124,,11/20/2018 8:32,11/20/2018 8:32,,,,3,,,,CC BY-SA 4.0 +4763,2,,4760,11/19/2018 16:27,,5,,"

The mutual information can be written in terms of the relative entropy; see +Nielsen and Chuang (the entropy Venn diagram, figure 11.2). I am writing the equation in the question's notation: +$$I(\rho^{AB}) = S(\rho^{AB}|\rho^{A} \otimes \rho^{B})$$ +The relative entropy can be estimated without full tomography. The procedure is described in Bengtsson and Życzkowski (equations 12.55-12.59), based on Lindblad's work:

+ +

The procedure for estimating $S(\rho|\sigma)$ is performed as follows:

+ +
    +
  1. Preparation of a composite system: +$$\rho^N = \otimes^{N} \rho$$
  2. +
  3. Measurement of a set of POVMs $\{E\}$: +$$p_i = \operatorname{Tr}(\rho^N E_i)$$ +$$q_i = \operatorname{Tr}(\sigma^N E_i)$$
  4. +
  5. Computation of the ""Classical"" relative entropy: +$$S_N(\rho|\sigma) = \frac{1}{N} \sum_i p_i \log{\frac{p_i}{q_i}}$$
  6. +
+ +

The relative entropy is estimated by optimization over a large set of POVMs and for a large number of copies $N$ due to the result: + $$ S(\rho|\sigma) = \lim_{N\rightarrow \infty}\operatorname{Sup}_E S_N(\rho|\sigma) $$

+ +

Of course, as in any statistical estimation, there are estimation errors due to finite samples, however, I don't know how to obtain these error bounds.

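For reference, the exact value for a Bell state can be computed directly from the entropy form $I = S(\rho^A)+S(\rho^B)-S(\rho^{AB})$ used in the question. A numpy sketch (an exact calculation, not the estimation scheme above):

```python
import numpy as np

def von_neumann_entropy(rho):
    # S(rho) in bits; terms with zero eigenvalue contribute 0
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log2(evals)))

# Bell state (|00> + |11>)/sqrt(2)
psi = np.zeros(4)
psi[0] = psi[3] = 1 / np.sqrt(2)
rho_ab = np.outer(psi, psi)

# reduced states via partial trace on the reshaped (a, b, a', b') tensor
rho4 = rho_ab.reshape(2, 2, 2, 2)
rho_a = np.trace(rho4, axis1=1, axis2=3)
rho_b = np.trace(rho4, axis1=0, axis2=2)

I_mut = (von_neumann_entropy(rho_a) + von_neumann_entropy(rho_b)
         - von_neumann_entropy(rho_ab))
assert abs(I_mut - 2.0) < 1e-9   # maximal: 2 bits for a Bell pair
```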
+",4263,,,,,11/19/2018 16:27,,,,4,,,,CC BY-SA 4.0 +4764,2,,4754,11/19/2018 16:47,,1,,"

I'll attempt here to provide a physical justification for why tensor products are the natural way to describe systems comprised of a number of different subsystems (e.g. a number of qubits). +The takeaway message is that tensor products, while not strictly necessary, are a natural choice when dealing with local operations.

+ +

First of all, it is important to notice that it is not strictly necessary to use tensor products at all to describe any kind of quantum mechanical system. +It just so happens that not using them makes things much more complicated and inelegant than they need to be.

+ +

By this, I mean that we can describe, say, the state of an $N$-qubit system, as a $2^N$-dimensional complex vector (a $2^N$-dimensional qudit if you prefer) with no tensor product structure. If you choose to describe things this way, unitary operations, as well as any other operator acting on states, become $2^N\times 2^N$ matrices.

+ +

However, if we are talking of ""$N$ qubits"", instead of simply a single system with many degrees of freedom, we are implicitly assuming that we will deal with local operations. Local operations are operations that act only on some subset of the degrees of freedom of the system, leaving the rest untouched.

+ +

More generally, if you have a system which is composed of two subsystems, each of which can be in one of a number of different states, it is only natural to describe the whole system in terms of the states of the single subsystems. Therefore, if the first system can be in one of the states $i=1,2,...,N$ and the second system in one of $j=1,2,...,M$, then a natural choice for the basis states of the whole system is the set of $NM$ pairs $(i,j)$.

+ +

Consider as a toy example a system with $N=6$ degrees of freedom (a $6$-dimensional qudit), which is obtained by combining a $3$-dimensional system with a $2$-dimensional one. Without tensor products, it is awkward to describe a local operation acting only on the first subsystem: states would be described as single-index vectors $\psi_i\in\mathbb C$, $i=1,...,6$ and operations as matrices $\mathcal U_{ij}$ with $i,j=1,...,6$, and a local operation would be some $\mathcal U$ such that $\mathcal U_{ij}$ is always zero when $i$ and $j$ are indices corresponding to two different states of the second subsystem, assuming some conventional way to arrange the indices. +On the other hand, if you describe the state as an object $\psi_{ij}\in\mathbb C$ with $i=1,2,3$ describing the states of the first subsystem and $j=1,2$ describing the state of the second subsystem, an operation $\mathcal U$ acting locally on the first subsystem is naturally written as one satisfying $\mathcal U_{ij,kl}=U_{ij}\delta_{kl}$ for some unitary $U$.

+ +

One can then proceed with building the mathematical infrastructure to reason formally about tensor product spaces etc., but from a physical perspective, this is all there is to it. It is a convenient way to describe systems which are comprised of subsystems in such a way that it is natural to describe the state of the whole system via the tuples of states of the subsystems.

+",55,,,,,11/19/2018 16:47,,,,0,,,,CC BY-SA 4.0 +4765,1,4767,,11/19/2018 21:18,,2,123,"

I've written an API which takes in a circuit scaffold, according to some specifications, and outputs the results of simulating the circuit. The constraint on circuits is that measurements, if present, are performed only at the end. I want to test the API against known quantum circuit results.

+ +

I was wondering if there was an online resource which tabulates a list of known circuits and their results/outputs? An online quantum simulator would also work.

+",1287,,26,,11/19/2018 21:55,11/20/2018 14:48,List of known circuits and their expected output,,1,0,,,,CC BY-SA 4.0 +4766,2,,4756,11/19/2018 22:53,,7,,"
+

How do we know that no better classical algorithm exists?

+
+ +

We can know thanks to computational complexity theory, which studies the complexity of solving different problems with different computational models. +It is in principle possible to prove that no classical algorithm can solve a given problem efficiently. +A common way to do it is using reductions, that is, proving that solving a given problem is equivalent to solving another problem. One might thus prove that if it were possible to solve efficiently some kind of problem with a classical computer, it would also be possible to solve efficiently another problem which might be known to be hard.

+ +

However, as pointed out in the other answer, this is much easier said than done. A thorough discussion would be out of place here, but the main issue is that there are some big open problems at the core of computational complexity theory which no one has been able to solve yet (most notably the P vs NP problem). +While unproven, many of these problems correspond to assumptions which are widely believed to hold (e.g. that P$\neq$NP). Many other results have then been proven conditionally on these assumptions holding. +Strictly speaking, these results do not constitute complete proofs either, but they can still be regarded as strong evidence for something to be true.

+ +

This is the current situation with quantum supremacy. There are problems which have been ""proven"" to be impossible to solve/simulate efficiently with a classical device, but for which efficient solutions are possible with a quantum computer, where ""proven"" here means that it has been shown that if these problems were efficiently solvable classically, then there would be unexpected consequences for some complexity classes. Notable examples are boson sampling and simulating commuting circuits. This post on cstheory.SE gives some other examples of such problems. A recent review on the topic is 1809.07442.

+",55,,,,,11/19/2018 22:53,,,,1,,,,CC BY-SA 4.0 +4767,2,,4765,11/20/2018 1:19,,2,,"

Maybe Quirk would suit your needs? It's an in-browser, graphical quantum-circuit simulator. You can build your circuit with drag-and-drop and it will show the probabilities of measurements. However, there is a limit to the number of qubits you can simulate. I think it is 16 or so.

+",4657,,4657,,11/20/2018 14:48,11/20/2018 14:48,,,,2,,,,CC BY-SA 4.0 +4769,1,4771,,11/20/2018 9:25,,6,1113,"

Firstly, I'd like to specify my goal: to know why the QFT runs exponentially faster than the classical FFT. +I have read many tutorials about the QFT (Quantum Fourier Transform), and this tutorial explains something important about the relationship between the classical and quantum Fourier transforms.

+ +

Inside 1, it describes that:

+ +
+

The quantum Fourier transform is based on essentially the same idea with the only difference that the vectors x and y are state vectors (see formula 5.2, 5.3),

+
+ +

+ +

However, I couldn't follow the statement that ""x and y are state vectors""; I get stuck at formulas 5.2 and 5.3. +I want to know how to convert the classical input vector x into the right-hand side of formula 5.2. +Once this confusion is resolved, it will be easier for me to understand the time complexity of the QFT.

+",5152,,26,,11/20/2018 14:01,11/20/2018 14:01,"How to describe, or encode, the input vector x of Quantum Fourier Transform?",,2,1,,,,CC BY-SA 4.0 +4770,2,,4769,11/20/2018 9:53,,5,,"

You don't convert a classical input to the r.h.s. of Eq. (5.2). The r.h.s. of Eq. (5.2) is something you get as the output of a preceding quantum computation as a quantum state, such as in Shor's algorithm. This is the only way to get an exponential speedup -- if you had to start from an exponentially big classical vector, there would be no way to solve this in polynomial time.

+",491,,,,,11/20/2018 9:53,,,,2,,,,CC BY-SA 4.0 +4771,2,,4769,11/20/2018 9:55,,6,,"

Formula 5.2 refers to an encoding we call amplitude encoding. Imagine you have a vector $x$ with components $x_i$; the components are then encoded as the amplitudes of a quantum state.

+ +

This encoding is very important: a vector of dimension $N$ will be encoded in quantum form using about $\log(N)$ qubits. This is the main reason why, in many quantum algorithms using this encoding, we can achieve exponential speedup in the size of the problem.

+ +

However, generally in quantum computing, you have to assume that this encoding is done using a device called a quantum random access memory for loading a vector in this form, or that you are given a circuit that does the job for you.

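To illustrate amplitude encoding numerically, here is a numpy sketch (assuming the common QFT convention $F_{kj}=\omega^{jk}/\sqrt N$ with $\omega = e^{2\pi i/N}$; the data vector is arbitrary). A classical vector of dimension $N=2^n$ is normalized into the amplitudes of an $n$-qubit state, and the QFT acts as a unitary matrix on those amplitudes:

```python
import numpy as np

x = np.array([3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0])  # classical data, N = 8
N = len(x)
n_qubits = int(np.log2(N))   # only 3 qubits for an 8-dimensional vector

# amplitude encoding: |x> = sum_i (x_i / ||x||) |i>
amps = x / np.linalg.norm(x)
assert abs(np.vdot(amps, amps) - 1) < 1e-12   # normalized state

# QFT matrix F_{kj} = omega^{jk} / sqrt(N)
j, k = np.meshgrid(np.arange(N), np.arange(N))
F = np.exp(2j * np.pi * j * k / N) / np.sqrt(N)
assert np.allclose(F @ F.conj().T, np.eye(N))  # the QFT is unitary

y = F @ amps   # state after the QFT; still a normalized quantum state
assert abs(np.vdot(y, y) - 1) < 1e-12
```

The point of the surrounding discussion is that the hard part is the loading step producing `amps`, not the transform itself.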
+",4127,,4127,,11/20/2018 11:10,11/20/2018 11:10,,,,2,,,,CC BY-SA 4.0 +4772,1,,,11/20/2018 10:01,,3,85,"

Recently I have started to read about network quantum information theory, where network problems are studied under the classical-quantum channel. For example, capacities of the cq-MAC, cq-broadcast or cq-interference channels are studied to characterize the maximum achievable rates possible in such communication scenarios. I found Ivan Savov's PhD thesis to be a really interesting document regarding these issues, and very recommendable for anyone interested in starting to study network quantum information theory.

+ +

However, in chapter 4.3.1 the author states a conjecture called the three sender quantum simultaneous decoder conjecture, where the existence of a POVM decoder for a cq-channel with three senders is conjectured. Such a result is very important for proving most of the results of the thesis, and it is an important result in general for network quantum information theory. However, at the time it was unproved, and so it remained a conjecture. I have been researching to see if such a conjecture has been proved, but I have been unable to find a general proof for it (in Classical communication over a quantum interference channel it is proved for a special case, but not in a general way).

+ +

Consequently, I was wondering if such a conjecture has already been proved; if it has, I would like to go through the proof. Note that both references I gave are from 2012, so I assume that progress on the issue has been made since then.

+",2371,,55,,12/16/2021 12:05,12/16/2021 12:05,Three sender quantum simultaneous decoder conjecture,,0,0,,,,CC BY-SA 4.0 +4773,1,,,11/20/2018 10:28,,6,114,"

I have an interesting idea for a proof approach that someone might find useful. Here it is.

+ +

Suppose we are given a quantum qubit channel $N$ (for example the amplitude damping channel) whose Holevo information we are trying to prove is additive for two uses, i.e., we are trying to show that $\chi_{N \otimes N} = 2 \chi_N$.

+ +

Given the channel $N$ and another channel $M$ that we know has strongly additive Holevo information, suppose we construct a channel $N^\prime$ that simulates the channel $N$ with probability $(\chi_{N \otimes N} - 2 \chi_N)$, and simulates the channel $M$ with probability $1 - (\chi_{N \otimes N} - 2 \chi_N)$, i.e.,

+ +

$$N^\prime = (\chi_{N \otimes N} - 2 \chi_N) N + (1 - (\chi_{N \otimes N} - 2 \chi_N))M.$$

+ +

Note that for qubit channels $\chi_{N \otimes N} \leq 2$ and $\chi_N \leq 1$, so the assumption that $(\chi_{N \otimes N} - 2 \chi_N) \leq 1$ -- and so can be used as a probability -- is reasonable. +From the construction of the channel, we can see that if $N$ is additive, then $(\chi_{N \otimes N} - 2 \chi_N) = 0$, and $N^\prime$ is strongly additive.

+ +

That was the idea: to use $(\chi_{N \otimes N} - 2 \chi_N)$ in the construction of a new channel. Maybe this idea, or a variant of it, can be used to prove something.

+ +

I would be interested to know about similar ideas that have been used in proofs before.

+",1860,,55,,12-06-2021 11:32,12-06-2021 11:32,Quantum channel Holevo information additivity: proof approach,,0,0,,,,CC BY-SA 4.0 +4774,1,,,11/20/2018 11:15,,7,2186,"

Let's say you have a system with which you can perform arbitrary rotations around the X and Z axis. How would you then be able to use these rotations to obtain an arbitrary rotation around the Y axis?

+ +

I have seen somewhere that rotation around an arbitrary axis can be achieved by doing three rotations around two fixed axes, that is, $$\hat{R}_\vec{n}(\theta)=R_Z(\gamma)R_X(\beta)R_Z(\alpha)$$ for some angles $\gamma, \alpha, \beta$. But how do you actually use this? What if I want to rotate around the Y axis with an angle of $\theta$, i.e. $\hat{R}_Y(\theta)$? Then how do I figure out what $\gamma,\alpha,\beta$ to use?

+ +

Edit: I've found a nice answer on Physics SE.

+",5153,,26,,11/20/2018 13:49,12/19/2018 11:10,How to obtain Y rotation with only X and Z rotations gates?,,2,1,,,,CC BY-SA 4.0 +4775,2,,4774,11/20/2018 12:07,,2,,"

Try selecting $\gamma$ and $\alpha$ so that you get the rotation +$$ +\sqrt{Z}R_X(\beta)\sqrt{Z}^\dagger. +$$ +There are two little tricks here that make this work. Firstly, the $R_X(\beta)$ has an $\mathbb{I}$ component and an $X$ component (I always get factors of $1/2$ wrong here, so I won't write out the cos and sin functions explicitly unless you define your $R_x$ function). Now, +$$ +\sqrt{Z}\mathbb{I}\sqrt{Z}^\dagger=\mathbb{I} +$$ +while there's some funky anti-commutation that goes on with $X$: +$$ +\sqrt{Z}X\sqrt{Z}^\dagger=Y, +$$ +so you've managed to effectively change the $X$ into a $Y$, so you're getting $Y$ rotations.

+ +

Of course, this doesn't answer your more general question about how you get a more general rotation....

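For the specific $Y$-rotation case, a quick numerical check (a numpy sketch assuming the convention $R_P(\theta)=e^{-i\theta P/2}$) that conjugating $R_X(\theta)$ by $\sqrt Z$ — which is $R_Z(\pi/2)$ up to a global phase — yields exactly $R_Y(\theta)$:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I = np.eye(2)

def rot(P, theta):
    # R_P(theta) = exp(-i*theta*P/2) = cos(theta/2) I - i sin(theta/2) P
    return np.cos(theta / 2) * I - 1j * np.sin(theta / 2) * P

S = np.diag([1, 1j])   # sqrt(Z), equal to R_Z(pi/2) up to a global phase
theta = 0.42

assert np.allclose(S @ X @ S.conj().T, Y)   # conjugation maps X to Y
assert np.allclose(S @ rot(X, theta) @ S.conj().T, rot(Y, theta))
# purely in terms of rotations (the global phases cancel in the conjugation):
assert np.allclose(rot(Z, np.pi / 2) @ rot(X, theta) @ rot(Z, -np.pi / 2),
                   rot(Y, theta))
```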
+",1837,,,,,11/20/2018 12:07,,,,3,,,,CC BY-SA 4.0 +4776,1,4781,,11/20/2018 16:39,,1,195,"

I have recently read a lot about the BB84 protocol, I have used three primary sources, the original work, a QK book, and a diploma thesis.

+ +

My questions refer to the photons sent by Alice, the base of Bob and a possible attack on the protocol (MITM attack).

+ +

My first question is about sending the photons from Alice to Bob. The original paper states that Alice sends a single photon in one out of four polarization directions. My question: how does Alice know the direction of the photon? I imagine looking at a single unpolarized photon that is sent through a random filter. But there are cases in which a photon does not pass through a filter (e.g. the photon is polarized at 90° and the filter is at 0°). So how does Alice know whether a photon was ever sent? How can she even produce a single, say, vertically polarized photon?

+ +

My second question concerns Bob. In the original paper, Bob measures in one of two possible bases, (0°, 90°) or (45°, -45°). In my second source (the book), however, it is stated that Bob simply uses a filter from the respective basis. I'll explain it a bit more carefully: the statement of the book is that Bob (always) measures at 90° or 45°. But the diploma thesis, which I use as a third source, says that Bob uses 0°, 90°, 45° or -45° as a filter for the detection, chosen at random. I understand both possibilities, because, assuming a photon comes in at 90° and I measure at 0°, I can indeed conclude from the non-detection that the photon was polarized in the 90° direction. So I suspect the statements in the book and in the thesis are equivalent. Is that correct? And what does measuring in a basis mean?

+ +

My third question relates to a possible attack. I have read a paper in which a MITM (man-in-the-middle) attack is carried out. My book source also lists this attack. But in what way is that a viable attack scenario if I only have to authenticate the connection? Doesn't authentication render the actual attack pointless?

+ +

I hope my questions are understandable. It is important to me to understand this, and I am looking forward to your answers. If I should explain one point or another in more detail, I will of course revise my question and be more specific. Thanks so far!

+",,user4961,55,,12/21/2021 1:25,12/21/2021 1:25,Comprehension questions on quantum cryptography especially BB84,,1,2,,,,CC BY-SA 4.0 +4778,1,4780,,11/20/2018 21:13,,2,863,"

I recently came across the concept of operators. However, with my current knowledge, I am unable to solve the following problem. Given an operator $$\vec{A}=\frac{1}{2}(I+\vec{n}\cdot\vec{\sigma})$$ where $\vec{n}=n_x\hat{x}+n_y\hat{y}+n_z\hat{z}$ is a real vector and $\vec{\sigma}=\sigma_x\hat{x}+\sigma_y\hat{y}+\sigma_z\hat{z}$ in the usual Pauli matrix notation, under what condition on $\vec{n}$ is $\vec{A}$ a positive operator, and under what condition is it a projection operator?

+",5007,,26,,11/21/2018 6:38,11/21/2018 7:49,Projection operators and positive operators,,1,2,,,,CC BY-SA 4.0 +4779,1,,,11/21/2018 5:19,,2,79,"

Recently, I was reading a paper (arXiv:1804.03719 [cs.ET]), which had the following quote (the most relevant part has been bolded),

+ +
+

Quantum algorithms are often grouped into number-theory-based, Oracle-based, and quantum simulation algorithms, such as for instance on the excellent Quantum Zoo site [57], which is largely based on the main quantum algorithmic paradigm that these algorithms use. These paradigms are the Quantum Fourier Transform (QFT), the Grover Operator (GO), the Harrow/Hassidim/Lloyd (HHL) method for linear systems, variational quantum eigenvalue solver (VQE), and direct Hamiltonian simulation (SIM). The fact that most known quantum algorithms are based on these few paradigms in combination is remarkable and perhaps surprising. The discovery of additional quantum algorithm paradigms, which should be the subject of intense research, could make quantum algorithms applicable across a much wider range of applications.

+
+ +

I am very interested in exploring the topic of quantum algorithm paradigms. However, my usual approach of following the reference trail failed to unearth any relevant papers.

+ +

If anyone has any suggestions regarding where to look, or know any relevant papers, I would appreciate your input.

+ +

Thanks!

+",5157,,26,,11/21/2018 6:37,11/21/2018 6:37,Pointer to related research (paper),,0,4,,,,CC BY-SA 4.0 +4780,2,,4778,11/21/2018 7:49,,4,,"

This is really a question about eigenvalues:

+ +
    +
  • A projector has eigenvalues 1 and 0. So, for a qubit, that could be eigenvalues $\{1,1\}$ or eigenvalues $\{1,0\}$.

  • +
  • A positive operator is one for which all eigenvalues $\lambda$ satisfy $\lambda>0$.

  • +
+ +

One could calculate the eigenvalues by brute force, but there are a couple of tricks that will help you. Firstly, the trace is equal to the sum of the eigenvalues +$$ +\text{Tr}(A)=\sum_{i=1}^2\lambda_i. +$$

+ +

Reader exercise: justify that $\text{Tr}(A)=1$.

+ +

Hence, it is impossible that the projector has eigenvalues $\{1,1\}$ (this should be obvious, as that can only give you $\mathbb{I}$). We're looking for the projector to have eigenvalues $\{1,0\}$. Note also that this must be the edge case for positivity.

+ +

Now, to progress with the calculation of the eigenvalues, evaluate +$$ +\text{Tr}(A\cdot A)=\sum_{i=1}^2\lambda_i^2=\lambda_1^2+\left(1-\lambda_1\right)^2. +$$ +To answer the specific question, we don't need a full calculation of the eigenvalues. Instead, we just need to know when $\text{Tr}(A\cdot A)=1$. +\begin{align} +\text{Tr}(A\cdot A)&=\frac{1}{4}\text{Tr}\left((\mathbb{I}+\vec{n}\cdot\vec{\sigma})(\mathbb{I}+\vec{n}\cdot\vec{\sigma})\right) \\ +&=\frac{1}{4}\text{Tr}\left(\mathbb{I}+2\vec{n}\cdot\vec{\sigma}+(\vec{n}\cdot\vec{\sigma})(\vec{n}\cdot\vec{\sigma})\right) \\ +&=\frac12+\frac14\text{Tr}\left((\vec{n}\cdot\vec{\sigma})(\vec{n}\cdot\vec{\sigma})\right) +\end{align} +Reader Exercise: Justify that $\text{Tr}\left((\vec{n}\cdot\vec{\sigma})(\vec{n}\cdot\vec{\sigma})\right)=2\vec{n}\cdot\vec{n}$.

+ +

Hence, the condition for being a projector is $|\vec{n}|=1$, and the condition for being positive is $|\vec{n}|<1$, although I would suggest that of more relevance for density matrices is the positive semi-definite condition: $|\vec{n}|\leq 1$.
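If it's useful, here is a quick NumPy sanity check of these conditions (the example vectors below are arbitrary choices of mine, not from the question):

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def A(n):
    """A = (I + n . sigma) / 2 for a real 3-vector n."""
    return 0.5 * (I2 + n[0] * sx + n[1] * sy + n[2] * sz)

# |n| = 1: eigenvalues {0, 1}, i.e. a projector (and A^2 = A)
P = A([0.0, 0.0, 1.0])
print(np.linalg.eigvalsh(P))          # [0. 1.]
print(np.allclose(P @ P, P))          # True

# |n| = 0.5 < 1: eigenvalues (1 +/- |n|)/2, strictly positive
print(np.linalg.eigvalsh(A([0.3, 0.4, 0.0])))   # [0.25 0.75]
```

The eigenvalues come out as $(1\pm|\vec{n}|)/2$, which reproduces the conditions above: a projector exactly at $|\vec{n}|=1$, a positive operator for $|\vec{n}|<1$.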

+",1837,,,,,11/21/2018 7:49,,,,0,,,,CC BY-SA 4.0 +4781,2,,4776,11/21/2018 8:17,,1,,"
+

How does Alice know the direction of the photon?

+
+ +

You're talking about the photons as if we've just magically plucked them out of the air with no idea what polarisation they're in. In practice, we're producing the photons. A source of individual photons that can be produced 'on demand' is still an experimental challenge (not that I'm completely up to date with the latest experimental literature). But the point is that we have systems that produce photons for us, and they can produce polarised photons. For example, some designs of laser produce polarised outputs.

+ +

Another way that could help you think of this (although I don't think people really do this), is imagine you have a photon going along a path. If you put a polarising beamsplitter in the way, then you direct one output to a detector. If that detector doesn't click then (if it's a perfect detector), the photon is travelling along the other path in the opposite polarisation.

+ +
+

I suspect the statements are equivalent

+
+ +

Yes, they are. The overall protocol is to make a random choice of which basis you want (0/90 or $\pm$45) and then perform that basis measurement. This is basically the same as the original statement. Those individual measurements could be made using a polarising beamsplitter with detectors on both outputs of the beamsplitter, or you could just put the detector on one output and assume that if you don't get a hit, it was the other outcome. At that point, you can use a filter instead of the beamsplitter because, either way, you just lose the other polarisation. So, the random basis choice is making a random choice between whether you're using a 0 or a 45 degree filter. This is the same as the book's statement.

+ +

Equally, using the combination of 0,90,$\pm$45 will achieve just the same, it's just more complicated than necessary (because 90 will always give the opposite result to 0, so why not just use 0?).
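To make the basis-matching logic concrete, here is a toy classical simulation of the sifting step (all names are illustrative; it ignores eavesdropping, losses and noise, and simply models the fact that a mismatched basis gives a uniformly random outcome):

```python
import random

random.seed(0)
n = 2000

# Alice: a random bit, encoded in a random basis (0 = 0/90 deg, 1 = +/-45 deg)
alice_bits = [random.randint(0, 1) for _ in range(n)]
alice_bases = [random.randint(0, 1) for _ in range(n)]

# Bob: random basis choice; a matching basis reproduces Alice's bit,
# a mismatched basis gives a uniformly random result
bob_bases = [random.randint(0, 1) for _ in range(n)]
bob_bits = [a if ba == bb else random.randint(0, 1)
            for a, ba, bb in zip(alice_bits, alice_bases, bob_bases)]

# Sifting: keep only the rounds where the bases agree
key_a = [a for a, ba, bb in zip(alice_bits, alice_bases, bob_bases) if ba == bb]
key_b = [b for b, ba, bb in zip(bob_bits, alice_bases, bob_bases) if ba == bb]

print(key_a == key_b)      # True: the sifted keys are identical
print(len(key_a) / n)      # roughly 0.5 of the rounds survive sifting
```

Whether Bob uses two filters or four changes nothing in this picture; what matters is only whether his basis choice matched Alice's.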

+ +
+

what does measuring in a base mean?

+
+ +

That's a much bigger question which deserves a much fuller answer than I can reasonably give as a sub-answer, and would preferably require some mathematical formalism. Still, to give you a sense: in quantum, when you want to ask a system what its state is, you can only ask limited questions. You cannot ask ""what is the polarisation angle of the photon?"" but, instead, you can ask if it's in one of two perpendicular directions, such as 0 and 90, and get the answer with varying probabilities.

+ +
+

But in what way is that an attack scenario, if only I have to authenticate the connection? Then the actual attack is still witless?

+
+ +

I don't really understand what you're asking here. Of course, the point of the protocol is to make sure that a man in the middle attack cannot work. But you have to prove that, and this helps give insight as to why the quantum protocol is a good one. The sort of thing that might go on is that Eve, sat in the middle between Alice and Bob, can take an (approximate) copy of the photons as they zoom past, and later use her copies to get as much information about the key shared between Alice and Bob as possible. The point is that Eve trying to do this has a knock-on effect on what Bob measures, and so Eve's meddling is detectable. In principle, they can even estimate how much she might know, and perform a protocol called privacy amplification to reduce that knowledge arbitrarily.

+",1837,,,,,11/21/2018 8:17,,,,5,,,,CC BY-SA 4.0 +4782,1,4784,,11/21/2018 8:54,,1,877,"

I saw some related topics, but none of them gives step-by-step instructions on measuring in the standard basis (or some other basis). Could you please give such instructions? An example would be good, too.

+",2559,,26,,11/21/2018 9:00,11/21/2018 13:53,How to measure in the standard basis?,,1,4,,,,CC BY-SA 4.0 +4783,1,,,11/21/2018 9:21,,1,442,"

I wonder what the steps for encoding a qubit in a certain basis are (you can give your answer in terms of the standard basis for simplicity). An example and a little explanation of the steps would be great.

+",2559,,26,,11/21/2018 10:29,1/19/2020 15:00,How to encode a qubit in standard basis?,,1,3,,,,CC BY-SA 4.0 +4784,2,,4782,11/21/2018 12:46,,4,,"

Measurements have corresponding measurement operators, $P_i$, satisfying +$$ +\sum_iP_i=\mathbb{I}. +$$ +Often, we talk about projective measurements, meaning $P_i^2=P_i$.

+ +

For example, measurement in the standard basis means setting +$$ +P_0=|0\rangle\langle 0|\equiv \left(\begin{array}{cc} 1 & 0 \\ 0 & 0 \end{array}\right)\qquad P_1=|1\rangle\langle 1|\equiv \left(\begin{array}{cc} 0 & 0 \\ 0 & 1 \end{array}\right). +$$

+ +

Now, if we measure a state $|\psi\rangle$, it gives outcome $i$ with probability +$$ +p_i=\langle\psi|P_i|\psi\rangle, +$$ +and the state of the system (for a non-destructive measurement) is +$$ +P_i|\psi\rangle/\sqrt{p_i}. +$$

+ +

So, let's take $|\psi\rangle=\alpha|0\rangle+\beta|1\rangle$. We get outcome 0 with probability +$$ +p_0=(\alpha^\star\langle 0|+\beta^\star\langle 1|)|0\rangle\langle 0|(\alpha|0\rangle+\beta|1\rangle)=|\alpha|^2, +$$ +and the remaining state is +$$ +|0\rangle\langle 0|(\alpha|0\rangle+\beta|1\rangle)/|\alpha|=e^{i\text{Arg}(\alpha)}|0\rangle. +$$ +Up to an irrelevant global phase, the outcome is $|0\rangle$.

+ +

Of course, you don't need the full formalism of projectors to get this. If you look at a state $\alpha|0\rangle+\beta|1\rangle$, that literally says that the probability amplitude for finding the system in state $|0\rangle$ is $\alpha$, and hence the probability is $|\alpha|^2$.
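If it helps to see the rule in action, here is a minimal NumPy sketch of a standard-basis measurement (the sampling loop and the particular state are just illustrative):

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def measure_standard(psi):
    """Measure a single-qubit state psi = (alpha, beta) in the {|0>, |1>} basis.

    Returns (outcome, post-measurement state)."""
    p0 = abs(psi[0]) ** 2                      # p_0 = <psi|P_0|psi> = |alpha|^2
    if rng.random() < p0:
        return 0, np.array([1, 0], dtype=complex)
    return 1, np.array([0, 1], dtype=complex)

# |psi> = sqrt(0.3)|0> + sqrt(0.7)|1>: outcome 1 with probability 0.7
psi = np.array([np.sqrt(0.3), np.sqrt(0.7)], dtype=complex)
outcomes = [measure_standard(psi)[0] for _ in range(10000)]
print(np.mean(outcomes))   # close to 0.7
```

The empirical frequency of outcome 1 converges to $|\beta|^2=0.7$, exactly as the Born rule says.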

+",1837,,1837,,11/21/2018 13:53,11/21/2018 13:53,,,,2,,,,CC BY-SA 4.0 +4785,1,4791,,11/21/2018 19:16,,5,345,"

I'm currently trying to understand the T magic state distillation algorithm described in ""Universal Quantum Computation with Ideal Clifford Gates and Noisy Ancillas"" [1] (Section V starting on Page 6). I need to understand the basics of magic state distillation in order to understand the motivations for optimization procedures used in another paper [2], which I need to give a talk on for a class. I know the distillation procedure used in [2] (Bravyi-Haah) is different from the one described in [1], but [1] seems like a more natural starting point.

+ +

As this is background for the class presentation I am to give, my goals with this post are to refine my understanding as I continue to try and understand this topic and (most importantly) to make sure I do not spread any misinformation. That is, I don't expect to achieve 100% understanding from the responses to this post.

+ +

I would like to verify which points of the following plain-language laymen's explanation of the magic state distillation procedure described in [1] are incorrect or correct.

+ +

Imagining the production of T-states for use in a surface code,

+ +

1) Magic state distillation is performed within the surface code

+ +

2) The initial step of producing many copies of raw noisy T-states is done through the direct use of a non-fault tolerant T-gate

+ +

3) Distillation of these raw states is performed through discarding states which cause nontrivial surface code stabilizer measurements (eigenvalue = -1)

+ +

4) The raw states with trivial surface code stabilizer measurements (eigenvalue = 1) are transformed into a single qubit magic state.

+",4943,,,,,11/22/2018 18:24,Magic State Distillation Understanding Check,,1,0,,,,CC BY-SA 4.0 +4786,1,,,11/21/2018 19:35,,5,128,"

Continuing from my previous (1, 2) questions on Brunner et al.'s paper on Bell nonlocality.

+ +

Again, we have the following standard Bell experiment setup:

+ +

+ +

where independent inputs $x,y \in \{0, 1\}$ decide the measurement performed by Alice & Bob on quantum state $S$ with outcomes $a,b \in \{-1, 1\}$. We say $a$ and $b$ are correlated (not independent) if:

+ +

$P(ab|xy) \ne P(a|x)P(b|y)$

+ +

which is a lazy physicist's way of writing:

+ +

$P[A = a \cap B = b | X = x \cap Y = y] \ne P[A = a | X = x] \cdot P[B = b | Y = y]$

+ +

Where $A, B, X, Y$ are discrete random variables and $a,b,x,y$ some specific elements from the sets defined above.

+ +

I wanted to check this basic (in)equality with some simple example values, so I considered the following:

+ +
    +
  • $S = |++\rangle$, a non-entangled quantum state
  • +
  • If $X = 0$, Alice measures with $\sigma_z$; if $X = 1$, she measures with $\sigma_x$
  • +
  • If $Y = 0$, Bob measures with $\sigma_x$; if $Y = 1$, he measures with $\sigma_z$
  • +
+ +

Since $S$ is not an entangled state, we can write out the following probability tables:

+ +

$\begin{array}{|c|c|c|} +\hline +x & a & P(a|x) \\ \hline +0 & 1 & 0.5 \\ \hline +0 & -1 & 0.5 \\ \hline +1 & 1 & 1 \\ \hline +1 & -1 & 0 \\ \hline +\end{array}$ +$\begin{array}{|c|c|c|} +\hline +y & b & P(b|y) \\ \hline +0 & 1 & 1 \\ \hline +0 & -1 & 0 \\ \hline +1 & 1 & 0.5 \\ \hline +1 & -1 & 0.5 \\ \hline +\end{array}$

+ +

We then expect $P(ab|xy) = P(a|x)P(b|y)$ for all the values of $a,b,x,y$. The problem is I don't know how to calculate the LHS of that equation! I can make the following table:

+ +

$\begin{array}{|c|c|c|c|c|c|} +\hline +x & y & a & b & P(a|x)P(b|y) & P(ab|xy) \\ \hline +0 & 0 & 1 & 1 & 0.5 \cdot 1 = 0.5 & ? \\ \hline +0 & 0 & 1 & -1 & 0.5 \cdot 0 = 0 & ? \\ \hline +0 & 0 & -1 & 1 & 0.5 \cdot 1 = 0.5 & ? \\ \hline +0 & 0 & -1 & -1 & 0.5 \cdot 0 = 0 & ? \\ \hline +0 & 1 & 1 & 1 & 0.5 \cdot 0.5 = 0.25 & ? \\ \hline +0 & 1 & 1 & -1 & 0.5 \cdot 0.5 = 0.25 & ? \\ \hline +0 & 1 & -1 & 1 & 0.5 \cdot 0.5 = 0.25 & ? \\ \hline +0 & 1 & -1 & -1 & 0.5 \cdot 0.5 = 0.25 & ? \\ \hline +1 & 0 & 1 & 1 & 1 \cdot 1 = 1 & ? \\ \hline +1 & 0 & 1 & -1 & 1 \cdot 0 = 0 & ? \\ \hline +1 & 0 & -1 & 1 & 0 \cdot 1 = 0 & ? \\ \hline +1 & 0 & -1 & -1 & 0 \cdot 0 = 0 & ? \\ \hline +1 & 1 & 1 & 1 & 1 \cdot 0.5 = 0.5 & ? \\ \hline +1 & 1 & 1 & -1 & 1 \cdot 0.5 = 0.5 & ? \\ \hline +1 & 1 & -1 & 1 & 0 \cdot 0.5 = 0 & ? \\ \hline +1 & 1 & -1 & -1 & 0 \cdot 0.5 = 0 & ? \\ \hline +\end{array}$

+ +

But I cannot figure out how to fill in the values of $P(ab|xy)$. How do I do that (without using the values of $P(a|x)P(b|y)$)?

+ +

I would then like to perform the same exercise with the CHSH setup:

+ +
    +
  • $S = |\Psi^+\rangle = \frac{1}{\sqrt{2}}(|00\rangle + |11\rangle)$
  • +
  • If $X = 0$, Alice measures with $\sigma_z$; if $X = 1$, she measures with $\sigma_x$
  • +
  • If $Y = 0$, Bob measures with $\sigma_z$ rotated $\frac{\pi}{8}$ radians counter-clockwise around the y-axis; if $Y = 1$, he measures with $\sigma_z$ rotated $\frac{\pi}{8}$ radians clockwise around the y-axis
  • +
+ +

How would we then write out the above three probability tables? I guess we probably wouldn't be able to easily write out the first two, but we can with the third?

+",4153,,55,,12-01-2021 09:48,12-01-2021 09:48,Determining whether $P(ab|xy)$ factorizes in Bell experiments,,1,0,,,,CC BY-SA 4.0 +4787,1,4789,,11/21/2018 20:09,,5,174,"

If we're measuring in common bases like $|0\rangle$, $|1\rangle$ or $|+\rangle$, $|-\rangle$ we express this by saying we're measuring with $\sigma_z$ or $\sigma_x$, or measuring in the computational or sign bases. What's the conventional or most-concise way to say we're measuring in a non-standard basis, like the in CHSH experiment where Bob measures in the computational basis rotated $\pm\frac{\pi}{8}$ radians around the y-axis? Do we derive the observable from its eigenvectors and use that?

+",4153,,55,,12-11-2021 10:43,12-11-2021 10:43,Convention for expressing measurement in non-standard basis,,1,0,,,,CC BY-SA 4.0 +4788,1,,,11/21/2018 21:50,,1,660,"

This question is regarding the simulation of qubits using FPGAs. My question is: how does using FPGAs to simulate qubits help us understand, or give us insight into, how quantum computers could be constructed? I know many quantum computation scientists use software to simulate qubits, for example MATLAB and even Python, but I just don't understand why one would use FPGAs. +I know for a fact that FPGAs are really useful for speeding up processing and executing processes in parallel.

+ +

Is this the sole purpose of using FPGAs or are there other reasons why they are being used in quantum computation simulations?

+",3043,,26,,12/13/2018 19:28,08-02-2022 12:46,FPGA qubit simulation,,1,3,,,,CC BY-SA 4.0 +4789,2,,4787,11/22/2018 8:38,,3,,"

If you express it as an operator of the form $\vec{n}\cdot\vec{\sigma}$, it will certainly be understood. In this context, you're probably talking about $(Z+X)/\sqrt{2}$.

+ +

You could derive this from the eigenvectors, or you can derive it from the Bloch sphere picture, where a measurement corresponds to any point on the surface of the Bloch sphere (and that is specified by the vector $\vec{n}$). You just have to remember that there's a doubling of angles between the way we write them on states, and the angles on the Bloch sphere, so your $\pi/8$ angle becomes $\pi/4$.
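A small NumPy check of the angle-doubling claim, using the observable from the question, $(Z+X)/\sqrt{2}$: its $+1$ eigenvector is the state rotated $\pi/8$ from $|0\rangle$, even though the measurement direction sits at $\pi/4$ on the Bloch sphere.

```python
import numpy as np

X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])
obs = (Z + X) / np.sqrt(2)         # n = (1, 0, 1)/sqrt(2)

vals, vecs = np.linalg.eigh(obs)   # eigenvalues in ascending order: [-1, 1]
v = vecs[:, np.argmax(vals)]       # the +1 eigenvector
v = np.sign(v[0]) * v              # fix the overall sign convention

print(v)                                       # ~ [0.924, 0.383]
print(np.cos(np.pi / 8), np.sin(np.pi / 8))    # the same numbers
```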

+",1837,,,,,11/22/2018 8:38,,,,1,,,,CC BY-SA 4.0 +4790,2,,4786,11/22/2018 8:47,,4,,"

I think you're doing things a little bit backwards. You probably shouldn't be calculating $P(a|x)$ or $P(b|y)$ in advance, because you're simply trying to ask:

+ +
+

Given a set of $\{P(ab|xy)\}$, do there exist assignments to $P(a|x)$ and $P(b|y)$ that satisfy $P(ab|xy)=P(a|x)P(b|y)$ for all $a,b,x,y$?

+
+ +

So, how do you evaluate the probability of getting answers $a$ and $b$ when you make measurements $x$ and $y$? Let's assume your two observables for measurement choices $x$ and $y$ are $\vec{n}\cdot\vec{\sigma}$ and $\vec{m}\cdot{\sigma}$ respectively. Then you have projectors for each of the 4 possible outcomes described by $ab$ with +$$ +P_{ab}=\frac{1}{4}(\mathbb{I}+(-1)^a\vec{n}\cdot\vec{\sigma})\otimes(\mathbb{I}+(-1)^b\vec{m}\cdot\vec{\sigma}). +$$

+ +

So, what's $P(ab|xy)$? +$$ +P(ab|xy)=\langle\psi|P_{ab}|\psi\rangle +$$

+ +

For example, with $|\psi\rangle=|++\rangle$, and $x=1$, $y=0$ (meaning $X$ measurements for both parties, as specified in the question), then +$$ +P(00|10)=\langle++|\frac{1}{4}(\mathbb{I}+X)\otimes(\mathbb{I}+X)|++\rangle=1. +$$
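Put differently, you can just compute the whole table numerically. A minimal NumPy sketch of $P(ab|xy)=\langle\psi|P_{ab}|\psi\rangle$, using the measurement assignments from the question:

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])

plus = np.array([1.0, 1.0]) / np.sqrt(2)
psi = np.kron(plus, plus)                  # |++>

# From the question: x=0 -> sigma_z, x=1 -> sigma_x for Alice;
#                    y=0 -> sigma_x, y=1 -> sigma_z for Bob.
alice = {0: Z, 1: X}
bob = {0: X, 1: Z}

def P(a, b, x, y):
    """P(ab|xy), with a, b in {0, 1} labelling the +1/-1 outcomes."""
    Pa = (I2 + (-1) ** a * alice[x]) / 2   # projector onto Alice's outcome a
    Pb = (I2 + (-1) ** b * bob[y]) / 2     # projector onto Bob's outcome b
    return psi @ np.kron(Pa, Pb) @ psi

print(P(0, 0, 1, 0))   # 1.0 : both X measurements give +1 on |++>
print(P(0, 0, 0, 0))   # 0.5 : P(+1|Z) * P(+1|X) = 0.5 * 1
```

For a product state like $|++\rangle$, looping over all $a,b,x,y$ confirms that every entry factorises as $P(a|x)P(b|y)$; repeating the same computation with an entangled $|\psi\rangle$ is where the factorisation fails.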

+",1837,,,,,11/22/2018 8:47,,,,0,,,,CC BY-SA 4.0 +4791,2,,4785,11/22/2018 18:19,,5,,"
+

1) Magic state distillation is performed within the surface code

+
+ +

If you mean the distillation circuit is implemented with encoded logical qubits instead of raw physical qubits, then yes.

+ +
+

2) The initial step of producing many copies of raw noisy T-states is done through the direct use of a non-fault tolerant T-gate

+
+ +

Yes, the initial T states fed into the process are made with physical gates. Later rounds use T states distilled in the previous round.

+ +
+

3) Distillation of these raw states is performed through discarding states which cause nontrivial surface code stabilizer measurements (eigenvalue = -1)

+
+ +

Yes, you discard states that fail any of various parity checks. In principle you could try to correct errors instead of just detecting them, but this would be significantly less efficient because you get less error suppression (e.g. the 15-to-1 distillation process can correct any single T-gate error, but then you'd get p -> O(p^2) suppression instead of the p -> O(p^3) you get by detecting any pair of errors).

+ +
+

4) The raw states with trivial surface code stabilizer measurements (eigenvalue = 1) are transformed into a single qubit magic state.

+
+ +

Yes, though it could be multiple magic states. In the case of the 15-to-1 factory, it's just one.

+",119,,119,,11/22/2018 18:24,11/22/2018 18:24,,,,0,,,,CC BY-SA 4.0 +4792,1,4796,,11/22/2018 20:49,,4,214,"

I came across a quantum circuit very similar to the phase estimation circuit, which is shown below:

+ +

+ +

In the phase estimation algorithm we assume, that we can efficiently implement an operator $U$, which performs the following operation:

+ +

$$ U|u\rangle \equiv e^{2 \pi i \phi} |u\rangle, $$

+ +

where $|u\rangle$ is the eignevector of $U$ and $e^{2 \pi i \phi}$ is the corresponding eigenvalue. Such an operator can be denoted in the matrix form as

+ +

$$ U \equiv \begin{bmatrix} e^{2 \pi i \phi} & 0 \\ 0 & e^{2 \pi i \phi}\end{bmatrix} $$

+ +

and its controlled version can be written as

+ +

$$ CU \equiv \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & e^{2 \pi i \phi} & 0 \\ 0 & 0 & 0 & e^{2 \pi i \phi}\end{bmatrix} $$

+ +

(the $CU^j$ gate will simply have $e^{2 \pi i \phi j}$ in place of $e^{2 \pi i \phi}$). For such a case I was still able to express the resulting vector (after going through the $CU^j$ gate) as the tensor product of two vectors. Thanks to this, I was able to see what the effect of applying the inverse quantum Fourier transform to one of these vectors is.

+ +

Now let's say that $U$ is not diagonal and each of its entries is different from 0. In such a case, I think strong entanglement appears between the qubits in the first and the second register. Because of this (from the very definition of entanglement) I wasn't able to express the resulting vector as a tensor product of component vectors, and I don't know what the result of applying the inverse quantum Fourier transform to the first register will be.

+ +

All of the above is just one example showing my real problem - how to analyze quantum circuits, where qubits (or registers) are highly entangled?

+",2098,,55,,11/25/2018 11:00,11/25/2018 11:00,How to analyze highly entangled quantum circuits?,,1,0,,,,CC BY-SA 4.0 +4793,1,,,11/22/2018 22:16,,2,369,"

How can I show that a multi-qudit graph state $|G\rangle$ is a maximally entangled state? What kind of entanglement measure can be used to quantify the amount of entanglement in a given graph state?

+",4288,,91,,11/23/2018 3:39,11/23/2018 9:39,Graph state and maximally entangled state,,1,3,,,,CC BY-SA 4.0 +4794,1,,,11/23/2018 2:21,,3,337,"

This is really a question out of curiosity. I am aware that geometric algebra and geometric calculus provide simplifications in many aspects of physics. I'm wondering if this framework's usefulness extends to the realm of quantum computing.

+",5172,,26,,11/23/2018 4:18,12-02-2022 22:51,Is geometric algebra/calculus used in quantum computing?,,1,1,,,,CC BY-SA 4.0 +4795,2,,4794,11/23/2018 3:15,,2,,"

The algebra generated over $\mathbb{C}$ by $\sigma_{x,y,z}$ gives $\text{Cliff}(\mathbb{R}^{3,0})$. But this doesn't really use more of the general features of Clifford algebra for general $\mathbb{R}^{p,q}$. You can phrase it in terms of Clifford algebra if you want, but not necessary for small examples.

+",434,,26,,11/23/2018 4:05,11/23/2018 4:05,,,,0,,,,CC BY-SA 4.0 +4796,2,,4792,11/23/2018 7:43,,3,,"

The $U$ used in phase estimation is not restricted to being a diagonal matrix with identical diagonal elements. Instead, it is an arbitrary unitary matrix.

+ +

The way that you analyse it, instead, is that the input $|u\rangle$ is specifically chosen to be an eigenvector of $U$. That means $U|u\rangle=e^{i\phi}|u\rangle$. But there are different eigenvectors with different eigenvalues. For example, perhaps $|v\rangle$ with $U|v\rangle=e^{i\theta}|v\rangle$.

+ +

Now, the way that you've analysed it tells you that $|0\rangle|u\rangle\mapsto|\phi\rangle|u\rangle$ (assuming $\phi$ is of the form $2\pi k/2^t$ for integer $k$). But it tells you that you get the same effect for every eigenvector, so $|0\rangle|v\rangle\mapsto|\theta\rangle|v\rangle$.

+ +

So, what happens for an arbitrary input that is not an eigenvector of $U$? The eigenvectors form a basis, so we can write any input state as a superposition of the different eigenvectors. And by linearity we know how each of those components evolve. For example, +$$ +|0\rangle(\alpha|u\rangle+\beta|v\rangle)\rightarrow \alpha|\phi\rangle|u\rangle+\beta|\theta\rangle|v\rangle. +$$ +This will, typically, be highly entangled, but the entanglement itself does not necessarily cause a problem in the analysis.
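Here is a small NumPy sketch of that eigenbasis strategy (the particular unitary and input state are arbitrary examples of mine): expand the input in the eigenbasis of $U$, evolve each component by its eigenvalue, and check that this reproduces $U|\psi\rangle$.

```python
import numpy as np

# A non-diagonal unitary: U = exp(i * pi/4 * X)
X = np.array([[0, 1], [1, 0]], dtype=complex)
U = np.cos(np.pi / 4) * np.eye(2) + 1j * np.sin(np.pi / 4) * X

evals, V = np.linalg.eig(U)        # columns of V are eigenvectors of U

# An arbitrary (non-eigenvector) input state
psi = np.array([0.6, 0.8], dtype=complex)
coeffs = V.conj().T @ psi          # psi = sum_k coeffs[k] * V[:, k]

# Linearity: evolving each eigencomponent separately reproduces U @ psi
reconstructed = V @ (evals * coeffs)
print(np.allclose(reconstructed, U @ psi))   # True
```

Phase estimation does exactly this implicitly: each eigencomponent of the input picks up its own phase register, which is why the output is generically entangled but still perfectly analysable.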

+ +

More generally, what's the answer? Well, ultimately, you might not be able to analyse these circuits. Part of the point of quantum circuits is that you cannot easily simulate them on a classical computer. But there are plenty of strategies such as the one above (picking a basis of states which are easy to analyse), or the Gottesman-Knill theorem, that let you deal with specific situations when there's lots of entanglement present.

+",1837,,,,,11/23/2018 7:43,,,,0,,,,CC BY-SA 4.0 +4797,2,,4793,11/23/2018 9:39,,2,,"

I'm not familiar with how graph states extend to qudits, so let me just answer for the specific case of qubits.

+ +

Consider a graph $G$, and we create the corresponding graph state $|G\rangle$ by placing a qubit on every vertex in the $|+\rangle$ state, and applying a controlled-phase gate along every edge.

+ +

Now, take a bipartition of $G$. On either side of the bipartition, we can apply unitaries (we're just not allowed to do anything across the partition). Hence, we can apply controlled-phases along any edges that remain within a bipartition. We are reduced to a graph state that is the same as the original, but only has edges across the bipartition, and it has the same amount of entanglement with respect to that bipartition.

+ +

Next, throw away all vertices that don't have an edge, as they're irrelevant. Let's have the sizes of the two bipartitions being $n\leq m$. Then we can write the graph state as +$$ +\frac{1}{\sqrt{2^n}}\sum_{x\in\{0,1\}^n}|x\rangle\otimes(Z_{N(x)}|+\rangle^{\otimes m}), +$$ +where $N(x)$ describes the neighbourhood of the spins of the first bipartition specified by $x$. More explicitly, a $Z$ is applied to any qubit in the second bipartition if it is the neighbour to an odd number of vertices $i$ in the first bipartition for which $x_i=1$.

+ +

Under the assumption that $N(x)$ is distinct for all $x\in\{0,1\}^n$, this is a Schmidt decomposition, meaning that there are $2^n$ Schmidt coefficients, each of value $1/2^n$, from which you can calculate anything such as the entanglement entropy, $n$. However, if any qubits were discarded in the discard step, this entanglement entropy will not be maximal.

+ +

Moreover, the assumption of the distinctness of $N(x)$ is not necessarily true. Think, for example, of the square graph (of 4 vertices), and a bipartition grouping qubits on the diagonals. Here, $N(00)=N(11)$. However, this can be avoided by using the local equivalence of graph states to minimise the number of edges across the bipartition. I don't have a rigorous proof for this off the top of my head, but probably a good way to approach it is to show that if $N(x)=N(y)$, then $N(x\oplus z)=N(y\oplus z)$. This means that you would break down the state into the form +$$ +\frac{1}{\sqrt{2^n}}\sum_{z\in\{0,1\}^n}(|z\rangle+|z\oplus x\oplus y\rangle)\otimes\left(Z_{N(z)}|+\rangle^{\otimes m}\right), +$$ +where we restrict the sum over $z$ to avoid double counting. This leads to a halving of the number of Schmidt coefficients, each of which has doubled in value. The entanglement entropy is $n-1$. Of course, it could be reduced by any integer amount depending on the structure. The most extreme case, of course, is the fully connected graph (which is locally equivalent to a GHZ state). Any bipartition only has 1 ebit of entanglement across the bipartition, even though it initially looks like there are many edges crossing it.

+",1837,,,,,11/23/2018 9:39,,,,0,,,,CC BY-SA 4.0 +4798,1,4802,,11/23/2018 12:53,,26,3177,"

I read about 9-qubit, 7-qubit and 5-qubit error correcting codes lately. But why can there not be a quantum error correcting code with fewer than 5 qubits?

+",5007,,1837,,11/23/2018 14:04,12-01-2018 16:14,Why can't there be an error correcting code with fewer than 5 qubits?,,4,0,,,,CC BY-SA 4.0 +4799,2,,4798,11/23/2018 13:17,,15,,"

What we can easily prove is that there's no smaller non-degenerate code.

+ +

In a non-degenerate code, you have to have the 2 logical states of the qubit, and you have to have a distinct state for each possible error to map each logical state into. So, let's say you had a 5 qubit code, with the two logical states $|0_L\rangle$ and $|1_L\rangle$. The set of possible single-qubit errors are $X_1,X_2,\ldots X_5,Y_1,Y_2,\ldots,Y_5,Z_1,Z_2,\ldots,Z_5$, and it means that all the states +$$ +|0_L\rangle,|1_L\rangle,X_1|0_L\rangle,X_1|1_L\rangle,X_2|0_L\rangle,\ldots +$$ +must map to orthogonal states.

+ +

If we apply this argument in general, it shows us that we need +$$ +2+2\times(3n) +$$ +distinct states. But, for $n$ qubits, the maximum number of distinct states is $2^n$. So, for a non-degenerate error correcting code of distance 3 (i.e. correcting for at least one error) or greater, we need +$$ +2^n\geq 2(3n+1). +$$ +This is called the Quantum Hamming Bound. You can easily check that this is true for all $n\geq 5$, but not if $n<5$. Indeed, for $n=5$, the inequality is an equality, and we call the corresponding 5-qubit code the perfect code as a result.
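If you want to check the inequality quickly, a tiny script (purely the arithmetic above) does it:

```python
# Quantum Hamming bound for distance-3 codes on n qubits: 2^n >= 2(3n + 1)
for n in range(1, 8):
    lhs, rhs = 2 ** n, 2 * (3 * n + 1)
    print(n, lhs, rhs, lhs >= rhs)
# Fails for n = 1..4; holds with equality at n = 5 (the "perfect" 5-qubit code)
```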

+",1837,,1837,,11/26/2018 7:49,11/26/2018 7:49,,,,4,,,,CC BY-SA 4.0 +4800,2,,4798,11/23/2018 14:45,,8,,"

As a complement to the other answer, I am going to add the general quantum Hamming bound for non-degenerate quantum error correction codes. The mathematical formulation of the bound is +\begin{equation} +2^{n-k}\geq\sum_{j=0}^t\pmatrix{n\\j}3^j, +\end{equation} +where $n$ refers to the number of qubits that form the codewords, $k$ is the number of information qubits that are encoded (so that they are protected from decoherence), and $t$ is the number of qubit errors corrected by the code. As $t$ is related to the distance by $t = \lfloor\frac{d-1}{2}\rfloor$, such a non-degenerate quantum code will be an $[[n,k,d]]$ quantum error correction code. This bound is obtained using a sphere-packing-like argument: the $2^n$-dimensional Hilbert space is partitioned into $2^{n-k}$ subspaces, each distinguished by the measured syndrome; one error is assigned to each syndrome, and the recovery operation is done by inverting the error associated with the measured syndrome. That is why the total number of errors corrected by a non-degenerate quantum code must be less than or equal to the number of partitions given by the syndrome measurement.

+ +

However, degeneracy is a property of quantum error correction codes implying that there are equivalence classes among the errors that can affect the transmitted codewords. This means that there are errors whose effect on the codewords is the same while sharing the same syndrome. Those classes of degenerate errors are therefore corrected via the same recovery operation, and so more errors than expected can be corrected. That is why it is not known whether the quantum Hamming bound holds for such degenerate error correction codes, as more errors than the number of partitions can be corrected this way. Please refer to this question for some information about the violation of the quantum Hamming bound.
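For completeness, here is a small Python sketch of the non-degenerate bound as a function of the $[[n,k,d]]$ parameters (the function name is mine):

```python
from math import comb

def quantum_hamming_ok(n, k, d):
    """Non-degenerate quantum Hamming bound: 2^(n-k) >= sum_{j<=t} C(n,j) 3^j,
    with t = floor((d - 1) / 2)."""
    t = (d - 1) // 2
    return 2 ** (n - k) >= sum(comb(n, j) * 3 ** j for j in range(t + 1))

print(quantum_hamming_ok(5, 1, 3))   # True: the [[5,1,3]] code saturates the bound
print(quantum_hamming_ok(4, 1, 3))   # False: no non-degenerate [[4,1,3]] code
print(quantum_hamming_ok(9, 1, 3))   # True: [[9,1,3]] satisfies it comfortably
```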

+",2371,,,,,11/23/2018 14:45,,,,0,,,,CC BY-SA 4.0 +4801,1,4803,,11/23/2018 16:52,,2,259,"

Suppose I have two registers x and y, of length m and n bits respectively. I want to initialize my system to contain an equal superposition of all $2^{n+m}$ states, then apply an oracle function (in superposition). How do I notate this correctly?

+

For example:

+

Consider the system $|\psi\rangle=|x_{m-1}\rangle...|x_0\rangle|y_{n-1}\rangle...|y_0\rangle = |x\rangle|y\rangle$

+

and quantum oracle $F(x,y)\rightarrow \{0,1\}$

+
    +
  1. Initialize the system to $|\psi_0\rangle=|0\rangle|0\rangle$
  2. Apply the Hadamard gate to obtain a uniform superposition over all states: $|s\rangle = H|\psi_0\rangle = \frac{1}{\sqrt{2^{n+m}}}\sum_{x=0}^{2^m-1}\sum_{y=0}^{2^n-1}|x\rangle|y\rangle$
  3. Compute $|\phi\rangle = F(|s\rangle) = \alpha|0\rangle + \beta|1\rangle$, for $\alpha,\beta \in \mathbb{C}$
+

I hope the algorithmic steps I'm describing are relatively clear but is this the correct way to notate it?

+",5174,,-1,,6/18/2020 8:31,1/17/2019 10:46,Notation for two entangled registers,,1,2,,,,CC BY-SA 4.0 +4802,2,,4798,11/23/2018 22:04,,17,,"

A proof that you need at least 5 qubits (or qudits)

+ +

Here is a proof that any single-error correcting (i.e., distance 3) quantum error correcting code has at least 5 qubits. In fact, this generalises to qudits of any dimension $d$, and any quantum error correcting code protecting one or more qudits of dimension $d$.

+ +

(As Felix Huber notes, the original proof that you require at least 5 qubits is due to the Knill--Laflamme article [arXiv:quant-ph/9604034] which set out the Knill--Laflamme conditions: the following is the proof technique which is more commonly used nowadays.)

+ +

Any quantum error correcting code which can correct $t$ unknown errors, can also correct up to $2t$ erasure errors (where we simply lose some qubit, or it becomes completely depolarised, or similar) if the locations of the erased qubits are known. [1, Sec. III A]*. +Slightly more generally, a quantum error correcting code of distance $d$ can tolerate $d-1$ erasure errors. For example, while the $[\![4,2,2]\!]$ code can't correct any errors at all, in essence because it can tell an error has happened (and even which type of error) but not which qubit it has happened to, that same code can protect against a single erasure error (because by hypothesis we know precisely where the error occurs in this case).

+ +

It follows that any quantum error correcting code which can tolerate one Pauli error, can recover from the loss of two qubits. +Now: suppose you have a quantum error correcting code on $n \geqslant 2$ qubits, encoding one qubit against single-qubit errors. Suppose that you give $n-2$ qubits to Alice, and $2$ qubits to Bob: then Alice should be able to recover the original encoded state. If $n<5$, then $2 \geqslant n-2$, so that Bob should also be able to recover the original encoded state — thereby obtaining a clone of Alice's state. As this is ruled out by the No Cloning Theorem, it follows that we must have $n \geqslant 5$ instead.

+ +

On correcting erasure errors

+ +

* The earliest reference I found for this is

+ +

[1] +Grassl, Beth, and Pellizzari. +
      +Codes for the Quantum Erasure Channel. +
      +Phys. Rev. A 56 (pp. 33–38), 1997. +
      +[arXiv:quant-ph/9610042]

+ +

— which is not long after the Knill–Laflamme conditions were described in [arXiv:quant-ph/9604034], and so is plausibly the original proof of the connection between code distance and erasure errors. The outline is as follows, and applies to error correcting codes of distance $d$ (and applies equally well to qudits of any dimension in place of qubits, using generalised Pauli operators).

+ +
    +
  • The loss of $d-1$ qubits can be modelled by those qubits being subject to the completely depolarising channel, which in turn can be modelled by those qubits being subject to uniformly random Pauli errors.

  • If the locations of those $d-1$ qubits were unknown, this would be fatal. +However, as their locations are known, any pair of Pauli errors on $d-1$ qubits can be distinguished from one another, by appeal to the +Knill-Laflamme conditions.

  • Therefore, by substituting the erased qubits with qubits in the maximally mixed state and testing for Pauli errors on those $d-1$ qubits specifically (requiring a different correction procedure than you would use for correcting arbitrary Pauli errors, mind you), you can recover the original state.

+

Your notation is OK up to and including step 2, except for the range of summation. You need +$$ +|s\rangle=\frac{1}{\sqrt{2^{n+m}}}\sum_{x=0}^{2^m-1}\sum_{y=0}^{2^n-1}|x\rangle|y\rangle +$$ +Now the problem is how to write down the effect of the oracle, and you cannot just write down the output qubit. I think you probably know this from the title of your question: entanglement will appear that this notation does not describe. So, you have an oracle that acts as +$$ +|x\rangle|y\rangle|0\rangle\xrightarrow{\text{oracle}}|x\rangle|y\rangle|F(x,y)\rangle. +$$ +Hence, if the input is some superposition state such as $|s\rangle$, we have +$$ +|s\rangle|0\rangle\xrightarrow{\text{oracle}}|\Psi\rangle=\frac{1}{\sqrt{2^{n+m}}}\sum_{x=0}^{2^m-1}\sum_{y=0}^{2^n-1}|x\rangle|y\rangle|F(x,y)\rangle. +$$ +You absolutely cannot describe (except in very special cases of $F$) the last qubit in the form $\alpha|0\rangle+\beta|1\rangle$ because it is entangled with the other registers.

+ +
+ +

The question seems to be evolving into

+ +
+

If I'm not measuring $x$ or $y$, why can't the state of the extra qubit be written in the form $\alpha|0\rangle+\beta|1\rangle$?

+
+ +

There are several ways that this might be answered. Normally, I'd take the partial trace and calculate the reduced density matrix, but I infer from comments that the OP doesn't know this technique. Thus, let us try another route.

+ +

Let us assume that the extra qubit can be written in the form $|\psi\rangle=\alpha|0\rangle+\beta|1\rangle$. This means that we could define a measurement +$$ +P_\psi=|\psi\rangle\langle\psi|\qquad P_{\perp}=|\psi^\perp\rangle\langle\psi^\perp| +$$ +where $|\psi^\perp\rangle=\beta^{\star}|0\rangle-\alpha^\star|1\rangle$ is orthogonal to $|\psi\rangle$. If we can guarantee that the extra qubit is in that state, then we are guaranteed to get the measurement result $P_{\psi}$. In other words, +$$ +\langle\Psi|\mathbf{I}\otimes\mathbf{I}\otimes P_{\psi}|\Psi\rangle=1. +$$ +(The identity operations are how we say that we're not measuring the $x$ and $y$ systems.) I claim that there are no satisfying $\alpha,\beta$ where $|\alpha|^2+|\beta|^2=1$, unless $F(x,y)$ is a constant function.

+ +

So, we start to evaluate +\begin{align*} +\langle\Psi|\mathbf{I}\otimes\mathbf{I}\otimes P_{\psi}|\Psi\rangle&=\frac{1}{2^{n+m}}\sum_x\sum_y\langle F(x,y)|P_{\psi}|F(x,y)\rangle \\ +&=\frac{1}{2^{n+m}}\left(\sum_{x,y:F(x,y)=0}|\alpha|^2+\sum_{x,y:F(x,y)=1}|\beta|^2\right) \\ +&= \frac{1}{2^{n+m}}(M|\alpha|^2+(2^{n+m}-M)(1-|\alpha|^2)) +\end{align*} +Where $M$ is the number of values such that $F(x,y)=0$. Setting this equal to 1, we can rearrange for $|\alpha|^2$: +$$ +|\alpha|^2=\frac{M}{2M-2^{n+m}} +$$ +For $|\alpha|^2$ to be a valid value, it must be $0\leq|\alpha|^2\leq 1$. One has to be careful in the analysis here. If we assume that $2M>2^{n+m}$, then the denominator is positive, and $|\alpha|^2\leq 1$ implies +$$ +M\geq 2^{n+m}. +$$ +This only happens if $M=2^{n+m}$, in other words, $F(x,y)=0$ for all $x$ and $y$. On the other hand, if $2M<2^{n+m}$, the denominator is negative, and so $|\alpha|^2\geq 0$ implies $M\leq 0$. This can only happen if $M=0$, i.e. all answers $F(x,y)$ give answer 1.

+ +

We conclude that unless $F(x,y)$ is constant, there is no valid $\alpha,\beta$ so that the measurement gives probability 1, which means there is no pure state description of that qubit.
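For readers comfortable with the partial trace, the same conclusion can be cross-checked numerically. Here is a pure-Python sketch (the function names and the AND example are my own choices, not part of the argument above) computing the reduced density matrix of the extra qubit for $m=n=1$ and showing that its purity is below 1 (hence there is no pure-state description) unless $F$ is constant:

```python
from itertools import product

def ancilla_density_matrix(F, m, n):
    # amplitudes of |Psi> = 2^{-(m+n)/2} sum_{x,y} |x>|y>|F(x,y)>,
    # then the reduced density matrix of the last (output) qubit
    norm = 2 ** ((m + n) / 2)
    amp = {(x, y, F(x, y)): 1 / norm
           for x, y in product(range(2 ** m), range(2 ** n))}
    return [[sum(amp.get((x, y, a), 0) * amp.get((x, y, b), 0)
                 for x, y in product(range(2 ** m), range(2 ** n)))
             for b in (0, 1)] for a in (0, 1)]

def purity(rho):
    # Tr(rho^2); equals 1 exactly when the state is pure
    return sum(rho[a][b] * rho[b][a] for a in (0, 1) for b in (0, 1))

AND = lambda x, y: x & y
print(purity(ancilla_density_matrix(AND, 1, 1)))             # 0.625 -> mixed
print(purity(ancilla_density_matrix(lambda x, y: 0, 1, 1)))  # 1.0   -> pure
```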

+",1837,,5174,,12-10-2018 19:20,12-10-2018 19:20,,,,14,,,,CC BY-SA 4.0 +4805,1,4808,,11/25/2018 4:24,,5,214,"

In version 2 of the paper Quantum Circuit Design for Solving Linear Systems of Equations by Cao et al., they have given a circuit decomposition for $e^{iA\frac{2\pi}{16}}$, given a particular $A_{4\times 4}$ matrix, in Fig. 4. I am trying to find an equivalent decomposition for a $2\times 2$ matrix like $A'=\begin{pmatrix} 1.5&0.5\\0.5&1.5 \end{pmatrix}$. Can anyone explain and summarize the standard method for this?

+",4644,,26,,11/25/2018 7:19,11/26/2018 8:14,How to find parameters for circuit decomposition of Hamiltonian simulation of any matrix $A$?,,1,3,,,,CC BY-SA 4.0 +4806,1,,,11/26/2018 3:36,,2,129,"

My goal in writing this algorithm in Q# was that func would either output (1,2) or (10,20), since the output result can be either One or Zero. However, I sometimes have (1,20) or (10,2) as output. Does anyone know why this happens?

+ +
    operation func () : (Int,Int)
+{
+    mutable res000 = Zero;
+    mutable int1 = 0;
+    mutable int2 = 0;
+
+    using (goop = Qubit[1])
+    {
+
+       H(goop[0]);
+
+       set res000 = M(goop[0]);
+
+
+       if(res000 == Zero)
+       {
+           set int1 = 1;
+           set int2 = 2;
+       }
+       else
+       {
+           set int1 = 10;
+           set int2 = 20;
+       }
+
+       ResetAll(goop);
+    }
+
+    return (int1,int2);
+}
+
+ +

Edit:

+ +

Here's another bit of information. I also have two projection functions, and I want to apply them to the output of func:

+ +
    operation Pr0 (m:Int,n:Int) : Int
+{
+    return m;
+}
+
+operation Pr1 (m:Int,n:Int) : Int
+{
+    return n;
+}
+
+operation func () : (Int,Int)
+{
+    mutable res000 = Zero;
+    mutable int1 = 0;
+    mutable int2 = 0;
+
+    using (goop = Qubit[1])
+    {
+
+       H(goop[0]);
+
+       set res000 = M(goop[0]);
+
+
+       if(res000 == Zero)
+       {
+           set int1 = 1;
+           set int2 = 2;
+       }
+       else
+       {
+           set int1 = 10;
+           set int2 = 20;
+       }
+
+       ResetAll(goop);
+    }
+
+    return (int1,int2);
+}
+
+operation testPr1 () : (Int,Int)
+{
+    return (Pr0(func()),Pr1(func()));
+}
+
+ +

Here is the C# code:

+ +
    class Driver
+{
+    static void Main(string[] args)
+    {
+        using (var sim = new QuantumSimulator())
+        {
+            var res3 = testPr1.Run(sim).Result;
+            Console.WriteLine(res3);
+
+        }
+
+        Console.WriteLine(""Press any key to continue..."");
+        Console.ReadKey();
+    }
+}
+
+",5189,,26,,11/28/2018 4:13,11/28/2018 4:13,An algorithm with the Hadamard operator,,1,4,,,,CC BY-SA 4.0 +4807,1,4817,,11/26/2018 7:15,,4,657,"

Often unitary gates are defined as a product of exponentials, with some parameter in the exponent. However, it is often not clear (at least to me) how to construct unitary gates from such an expression.

+ +

In here we see two such situations. The first is +$$\Pi_{j=1}^n e^{-i\beta\sigma_j^x},$$ +with parameter $\beta$ and $\sigma_j^x$ a single-qubit Pauli-X on qubit $j$. As I understand it, this is the same as an $R_X(2\beta)$ gate applied to every qubit. Is that correct?

+ +

The second one, that is a bit harder I believe and is related to clauses, is +$$\Pi_{\alpha} e^{-i\gamma C_{\alpha}},$$ +where $C_{\alpha}$ is 1 whenever clause $\alpha$ is true and 0 otherwise. How can I implement this as elementary unitary gates?

+",2005,,2005,,11/30/2018 11:52,4/30/2020 14:11,Unitary gate(s) from product of exponent,,1,0,,,,CC BY-SA 4.0 +4808,2,,4805,11/26/2018 7:53,,3,,"

The simplest method to implement $e^{iA\theta}$ for a small, Hermitian matrix $A$ is to:

+ +
    +
  1. Find the eigenvectors $|\lambda\rangle$ and eigenvalues $\lambda$ of $A$.
  2. Construct the unitary $U=\sum_i|i\rangle\langle\lambda_i|$.
  3. Implement the gate sequence: + +
      +
    • $U$
    • $e^{i\theta\sum_i\lambda_i|i\rangle\langle i|}$
    • $U^\dagger$
+

Now, for one qubit, the middle term is equivalent to $e^{i\theta(\lambda_0-\lambda_1) Z/2}$, up to an irrelevant global phase.

+ +

Technically, this answers your question. However, this is a silly way of doing it for solving a system of linear equations. If you can find the eigenvectors of $A$, you might as well directly invert the linear system.

+ +

So, instead, you need to proceed as if you cannot directly calculate the eigenvalues/vectors, because you're going to use your implementation of $A$ within a phase estimation protocol to find these. There are various methods for Hamiltonian simulation. A very basic summary of one method (there are much more efficient methods available) is here. But your given example is kind of trivial: I decompose +$$ +A'=\frac32\mathbb{I}+\frac12X, +$$ +and since you don't care about a global phase, one might as well implement $e^{i\theta X}$, for whatever the relevant $\theta$ is. Now it depends on what gates you're allowing in your quantum circuit as to how you decompose it. You might be able to implement it directly. Or, if you've only got a finite gate set such as $H$ and $T$, you might need to apply the Solovay-Kitaev algorithm to get a good decomposition.
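The eigenvector recipe above can be illustrated numerically on the $2\times2$ matrix $A'$ from the question (a pure-Python sketch; all helper names are mine, and the eigen-decomposition is written out by hand):

```python
import cmath, math

theta = 2 * math.pi / 16
s = 1 / math.sqrt(2)

# A' = [[1.5, 0.5], [0.5, 1.5]] has eigenvalues 2 and 1 with
# eigenvectors (1,1)/sqrt(2) and (1,-1)/sqrt(2)
V = [[s, s], [s, -s]]                                        # columns are eigenvectors
D = [[cmath.exp(2j * theta), 0], [0, cmath.exp(1j * theta)]]

def matmul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def dagger(P):
    return [[P[j][i].conjugate() for j in range(2)] for i in range(2)]

# basis change, diagonal phases, basis change back: V D V^dagger = e^{i A' theta}
expA = matmul(matmul(V, D), dagger(V))

# independent check via A' = (3/2) I + (1/2) X:
# e^{i theta A'} = e^{1.5 i theta} (cos(theta/2) I + i sin(theta/2) X)
g = cmath.exp(1.5j * theta)
ref = [[g * math.cos(theta / 2), 1j * g * math.sin(theta / 2)],
       [1j * g * math.sin(theta / 2), g * math.cos(theta / 2)]]

assert all(abs(expA[i][j] - ref[i][j]) < 1e-12 for i in range(2) for j in range(2))
print("eigenbasis construction matches the I/X decomposition")
```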

+",1837,,1837,,11/26/2018 8:14,11/26/2018 8:14,,,,0,,,,CC BY-SA 4.0 +4809,1,4816,,11/26/2018 11:40,,6,1489,"

I am reading about the phase damping channel, and I have seen that different references discussing this channel give different definitions of the Kraus operators that define its action.

+ +

For example, Nielsen and Chuang define in page 384 the phase damping channel with Kraus operators +\begin{equation} +E_0=\begin{pmatrix}1 & 0 \\ 0 & \sqrt{1-\lambda}\end{pmatrix}, \qquad E_1=\begin{pmatrix}0 & 0\\0 & \sqrt{\lambda} \end{pmatrix}, +\end{equation} +where $\lambda$ is the phase damping parameter. However, in the $28^{th}$ page of Preskill's notes on quantum error correction, such channel is defined by Kraus operators: +\begin{equation} +E_0=\sqrt{1-\lambda}I, \qquad E_1=\begin{pmatrix}\sqrt{\lambda} & 0 \\0 & 0 \end{pmatrix}, \qquad E_2=\begin{pmatrix} 0 & 0 \\ 0 & \sqrt{\lambda}\end{pmatrix}. +\end{equation}

+ +

Seeing the notable difference between the two descriptions, which also have a different number of Kraus operators, I am wondering which is the correct one, or, if they are equivalent, why that is the case. A unitary description of the phase damping channel would also be helpful for me.

+",2371,,55,,2/14/2021 18:57,2/14/2021 18:57,Confusion on the definition of the phase-damping channel,,1,0,,,,CC BY-SA 4.0 +4810,1,4812,,11/26/2018 12:57,,3,365,"

I am trying to implement the HHL algorithm (for solving $Ax=b$). I am assuming $A$ to be unitary and Hermitian so that I can find the Hamiltonian simulation for it easily. +For any $A$ to be Hermitian and unitary, it has to be of form, +$$ A = \begin{pmatrix} x & \pm\sqrt{1-x^2}\\ \pm\sqrt{1-x^2} & x\end{pmatrix} $$ +I reduced $e^{i\alpha A}$ to following (by using formula $e^{i\theta A} = (\cos\theta)I + i(\sin\theta)A$ where $A^2=I$), but I don't know how to implement it on Qiskit. +$$ e^{i\alpha A} = \begin{pmatrix} \cos\alpha+i\sin\alpha\cos\frac{\theta}{2} & i\sin\alpha \sin\frac{\theta}{2} \\ i\sin\alpha \sin\frac{\theta}{2} & \cos\alpha+i\sin\alpha\cos\frac{\theta}{2} \end{pmatrix} .$$ +where $\theta = 2\cos^{-1}{x}$. How to construct this gate?

+",4644,,26,,12/23/2018 7:59,12/23/2018 7:59,Implementing gate with two parameters using Qiskit in Python,,2,0,,,,CC BY-SA 4.0 +4811,2,,4810,11/26/2018 14:13,,2,,"

There are plenty of methods that can be used to implement a given unitary matrix.

+ +

Only for 1-qubit gates ($2\times 2$ matrix):

+ +
    +
  1. The algorithm described in https://arxiv.org/abs/1212.6253 seems to be the most efficient at the moment. It is restricted to 1-qubit quantum gates ($2\times 2$ unitary matrices) and only decomposes into the Clifford + T basis.
  2. See this answer from @Niel de Beaudrap for other links.
+

For n-qubit gates ($2^n \times 2^n$ unitary matrices):

+ +
    +
  1. The Solovay-Kitaev algorithm.
  2. All the Hamiltonian simulation algorithms. See my answer on another question for a list of links about Hamiltonian simulation.
+
+ +

As you specifically mentioned Qiskit, you will probably be interested in this answer that shows how to use Qiskit to initialise the $|0^{\otimes n}\rangle$ state to an arbitrary state.

+ +

If you give to this procedure the desired output state +$$ +e^{i\alpha A}|0\rangle = \begin{pmatrix} \cos\alpha+i\sin\alpha\cos\frac{\theta}{2} & i\sin\alpha \sin\frac{\theta}{2} \\ i\sin\alpha \sin\frac{\theta}{2} & \cos\alpha+i\sin\alpha\cos\frac{\theta}{2} \end{pmatrix} \begin{pmatrix}1 \\ 0\end{pmatrix} = \begin{pmatrix} \cos\alpha+i\sin\alpha\cos\frac{\theta}{2} \\ i\sin\alpha \sin\frac{\theta}{2}\end{pmatrix} +$$ +then the quantum circuit generated by the procedure will implement a unitary that acts as $e^{i\alpha A}$ on the $|0\rangle$ input.

+ +

The issue with this method is that you will need to recompute the unitary for every different value of $\alpha$ you encounter.

+",1386,,,,,11/26/2018 14:13,,,,0,,,,CC BY-SA 4.0 +4812,2,,4810,11/26/2018 14:19,,2,,"

(Remark: I have corrected a typo in the matrix exponentiation).

+ +

Please notice that the matrix has the form:

+ +

$$ e^{i\alpha A} = \begin{pmatrix} a & b \\ b & a \end{pmatrix} $$ +with +$$|a|^2 + |b|^2 = 1.$$ +Moreover, unitarity of this matrix forces $\operatorname{Re}(a\bar{b})=0$, i.e. $b=\pm i|b|e^{i\arg(a)}$; taking the $+$ sign, the matrix can be expanded as: +$$\begin{pmatrix} a & b \\ b & a \end{pmatrix} = \begin{pmatrix} e^{i \arg(a)} & 0 \\ 0 & e^{i \arg(a)} \end{pmatrix} \begin{pmatrix} |a| & i|b| \\ i|b| & |a| \end{pmatrix} $$ +Defining +$$ |a| = \cos \phi$$ +$$ |b| = \sin \phi$$ +we obtain: +$$ e^{i\alpha A} = e^{i \arg(a)}\, e^{i\phi \sigma_x} $$ +In short, the result consists of an overall phase multiplication and an $R_x$ gate.

+ +

Expressed in the question's original variables: +$$\arg(a) = \arctan (\tan \alpha \cos \frac{\theta}{2})$$ +and +$$\phi = \arcsin (\sin \alpha \sin \frac{\theta}{2})$$

+",4263,,,,,11/26/2018 14:19,,,,0,,,,CC BY-SA 4.0 +4814,1,4815,,11/26/2018 23:07,,1,94,"

Assume that Alice and Bob are allowed to share entanglement and are spatially separated. Alice is given an unknown state and asked to measure this in the computational basis to obtain $\vert 0\rangle$ or $\vert 1\rangle$. Is there some way for Bob to also have a copy of same state as Alice instantaneously?

+ +

Note that it does not violate no-signalling since the outcome of the measurement for Alice is random - so she cannot use it to communicate. Another perspective is that this is sort of like cloning but since the only outcomes that Alice gets are $\vert 0\rangle$ or $\vert 1\rangle$ and they are orthogonal, it isn't forbidden by the no-cloning theorem.

+ +

If this can be done, how should she and Bob design a quantum circuit that achieves this? Otherwise, what forbids this possibility?

+",4831,,55,,12/19/2021 13:57,12/19/2021 13:57,Shared entanglement to copy orthogonal states,,1,0,,,,CC BY-SA 4.0 +4815,2,,4814,11/26/2018 23:56,,5,,"

Assume this works. Then, nothing prevents Alice from applying the same protocol to a quantum state that is known to her, such as $|0\rangle$ or $|1\rangle$. This way, she could send information to Bob instantaneously. This would allow faster-than-light communication, and is thus impossible.

+",491,,26,,11/27/2018 1:54,11/27/2018 1:54,,,,0,,,,CC BY-SA 4.0 +4816,2,,4809,11/27/2018 0:58,,8,,"

Let $\mathcal{N}$ denote the channel, with a subscript indicating which convention is used.

+ +

$$ +\mathcal{N}_{N.C.} (\rho) = \begin{pmatrix} +\rho_{00} & \rho_{01} \sqrt{1-\lambda}\\ +\rho_{10} \sqrt{1-\lambda} & \rho_{11} +\end{pmatrix} +$$

+ +

As compared to

+ +

$$ +\mathcal{N}_{P} (\rho) = \begin{pmatrix} +\rho_{00} & \rho_{01} (1-\lambda)\\ +\rho_{10} (1-\lambda) & \rho_{11} +\end{pmatrix} +$$

+ +

So you can see that $\mathcal{N}_P (\bullet) = \mathcal{N}_{N.C.} (\mathcal{N}_{N.C.} (\bullet ))$

+ +

Easy to see that these are both representing the same sort of process, just with different timescales: Preskill's channel is Nielsen and Chuang's applied twice.
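This relationship between the two parametrisations can also be checked numerically. Below is a pure-Python sketch (the helper apply_channel and the test density matrix are my own choices, not from either reference):

```python
def apply_channel(kraus, rho):
    # rho' = sum_k E_k rho E_k^dagger, for 2x2 matrices given as nested lists
    def mul(P, Q):
        return [[sum(P[i][k] * Q[k][j] for k in range(2)) for j in range(2)]
                for i in range(2)]
    def dag(P):
        return [[P[j][i].conjugate() for j in range(2)] for i in range(2)]
    out = [[0, 0], [0, 0]]
    for E in kraus:
        t = mul(mul(E, rho), dag(E))
        out = [[out[i][j] + t[i][j] for j in range(2)] for i in range(2)]
    return out

lam = 0.3
r, q = lam ** 0.5, (1 - lam) ** 0.5
nielsen_chuang = [[[1, 0], [0, q]], [[0, 0], [0, r]]]
preskill = [[[q, 0], [0, q]], [[r, 0], [0, 0]], [[0, 0], [0, r]]]

rho = [[0.6, 0.2 + 0.1j], [0.2 - 0.1j, 0.4]]
once = apply_channel(preskill, rho)
twice = apply_channel(nielsen_chuang, apply_channel(nielsen_chuang, rho))
assert all(abs(once[i][j] - twice[i][j]) < 1e-12 for i in range(2) for j in range(2))
print("Preskill's channel equals Nielsen & Chuang's applied twice (same lambda)")
```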

+",434,,,,,11/27/2018 0:58,,,,4,,,,CC BY-SA 4.0 +4817,2,,4807,11/27/2018 1:30,,2,,"

It may be easier to write with $e^{-i \beta \sigma^z}$. You can see that this is the matrix

+ +

$$ +e^{-i \beta \sigma^z} = \begin{pmatrix} +e^{-i \beta} & 0\\ +0 & e^{i \beta} +\end{pmatrix}\\ += e^{-i \beta} \begin{pmatrix} +1 & 0\\ +0 & e^{i 2 \beta} +\end{pmatrix}\\ += e^{-i \beta} R_Z (2 \beta) +$$

+ +
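A quick numerical check of the identity above (a pure-Python sketch; the value of $\beta$ is arbitrary):

```python
import cmath

beta = 0.42
lhs = [[cmath.exp(-1j * beta), 0], [0, cmath.exp(1j * beta)]]  # e^{-i beta Z}
rz = [[1, 0], [0, cmath.exp(2j * beta)]]                       # R_Z(2 beta)
phase = cmath.exp(-1j * beta)                                  # global phase
assert all(abs(lhs[i][j] - phase * rz[i][j]) < 1e-12
           for i in range(2) for j in range(2))
print("e^{-i beta Z} = e^{-i beta} R_Z(2 beta)")
```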

This was already diagonal in the computational basis, so it is easier, but the same logic holds in the eigenbasis of $\sigma^x$.

+ +

For the product, yes, you are applying the same gate to every qubit. You can see that all the individual terms in the product commute, so it doesn't matter in which order you apply these $R_X (2 \beta)$ gates.

+ +

The second question is a bit misstated with the indexing. You use $\alpha$ as the labels for your clauses so it should be

+ +

$$ +\prod_{\alpha=1}^k e^{-i \gamma C_\alpha} +$$

+ +

where $C_\alpha$ is the diagonal $2^n$ by $2^n$ matrix whose entries are either $1$ or $0$ depending on whether the clause is true or not. For example if the clause was AND and there were 2 qubits then $C_\alpha$ would have diagonal entries $0,0,0,1$ for the basis vectors being $00$, $01$, $10$ and $11$ in that chosen order. The number of clauses, the number of $\alpha$'s does not have to be the same as $n$ the number of qubits.

+ +

The second question is also a bunch of commuting terms. So if you know how to decompose into gates for a single term, you can just put them one after another in order $\alpha=1$ through $k$. So let's do just a single term $e^{-i \gamma C_\alpha}$ for some arbitrary Boolean expression $f$ on $n$ inputs.

+ +

All the terms are diagonal in the computational basis of size $2^n$ with entries either $e^{-i \gamma}$ or $1$.

+ +

It will be useful to have an auxiliary qubit. Then you can think about the reversible circuit that takes you from $x_1 , \cdots , x_n , y \to x_1 \cdots x_n , y \bigoplus f(x_1 \cdots x_n)$. This is a reversible operation, so I'll refer to other resources for how to break this down in terms of TOF gates.

+ +

So if you have an auxiliary with $y=0$ you can do this and then get the auxiliary to be in the state $1$ if and only if $f(x_1 \cdots x_n)=1$. Now do an $R_Z (-\gamma)$ on that auxiliary, then uncompute. That means bringing the auxiliary back to $0$, which you can do with the reverse of the reversible circuit you already found.

+ +

This will take care of $e^{-i \gamma C_\alpha}$ for a single clause. Then just stick all the circuits for all the clauses back to back.

+ +
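The compute / phase / uncompute pattern described above can be demonstrated on a toy statevector. Below is a pure-Python sketch for a single AND clause on two qubits plus one auxiliary qubit (all function names are mine; the phase gate $\mathrm{diag}(1,e^{-i\gamma})$ used here equals the $R_Z$ rotation up to a global phase in the usual convention):

```python
import cmath

gamma = 0.7
# statevector over (x1, x2, ancilla); uniform superposition on x1, x2, ancilla in |0>
psi = {(x1, x2, 0): 0.5 for x1 in (0, 1) for x2 in (0, 1)}

def toffoli(state):
    # reversibly computes f(x1, x2) = x1 AND x2 into the ancilla
    return {(x1, x2, a ^ (x1 & x2)): amp for (x1, x2, a), amp in state.items()}

def ancilla_phase(state, g):
    # diag(1, e^{-i g}) on the ancilla
    return {k: amp * (cmath.exp(-1j * g) if k[2] else 1) for k, amp in state.items()}

psi = toffoli(psi)               # compute
psi = ancilla_phase(psi, gamma)  # phase kick on the ancilla
psi = toffoli(psi)               # uncompute

assert all(a == 0 for (_, _, a) in psi)  # ancilla is back in |0>
print(psi[(1, 1, 0)] / 0.5)   # exp(-i*gamma): only the satisfying assignment
print(psi[(0, 0, 0)] / 0.5)   # 1: every other basis state is untouched
```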

Of course this will be longer than necessary. But now that you have a circuit in many quantum gates that does the job poorly, getting one that does the job well is a much easier task.

+",434,,434,,4/30/2020 14:11,4/30/2020 14:11,,,,0,,,,CC BY-SA 4.0 +4818,2,,4747,11/27/2018 15:36,,1,,"

Pedro! I assume you are familiar with Grover's algorithm. Therefore, I suggest reading carefully these two papers below:

+ +

1) Tight bounds on quantum searching (BBHT): it's a very broad Grover's algorithm analysis;

+ +

2) A quantum algorithm for finding the minimum (DH): this is the first application of Grover's algorithm to optimization problems, and we call it DH (after the authors' names);

+ +

My first steps in quantum computing were in optimization problems. I think the papers below are a very good start:

+ +

3) Grover’s quantum algorithm applied to global optimization (BBW): an adaptation of Grover's to optimization and it uses DH at the framework;

+ +

4) Using modifications to grover’s search algorithm for quantum global optimization: a modification of BBW;

+ +

5) A hybrid method for quantum global optimization

+ +

6) A new hybrid classical-quantum algorithm for continuous global optimization problems: this last paper was produced at my research group at LNCC (Brazil); you search several minima with classical routines and the algorithm escapes from local minima using Grover's search.

+",5209,,,,,11/27/2018 15:36,,,,0,,,,CC BY-SA 4.0 +4819,2,,2261,11/27/2018 15:54,,2,,"

I suggest a good introductory book, but using lots of linear algebra, and it covers several nice topics. I really, really, love this book:

+ +

1) Quantum Computing Explained

+ +

I recommend my advisor's book because it has a very nice view on Grover's algorithm and a good explanation on it.

+ +

2) Quantum Walks and Search Algorithms

+",5209,,,,,11/27/2018 15:54,,,,0,,,,CC BY-SA 4.0 +4820,1,,,11/27/2018 17:04,,3,86,"

How can a CHSH game be realized in a photonic circuit?

+",2645,,2645,,12/18/2018 20:13,12/18/2018 20:13,Photonic CHSH Games,,0,4,,,,CC BY-SA 4.0 +4821,2,,2261,11/27/2018 17:53,,2,,"

The best book I could suggest is Quantum Computing for Computer Scientists by Yanofsky and Mannucci. You will find very detailed and numerical examples rather than abstract theorems, which I believe is better at the beginner level.

+",2403,,,,,11/27/2018 17:53,,,,0,,,,CC BY-SA 4.0 +4822,1,4826,,11/27/2018 19:53,,2,147,"

I'm trying to work through a self-made exercise, which may be ill formed as a question. Any general advice in dealing with these types of problems is also much appreciated!

+ +

I'm looking at a quantum gate $U_f$ for a function $f$, that has the effect $$\sum_x \alpha_x \vert x\rangle\vert 0\rangle \mapsto \sum_x \alpha_x\vert x\rangle\vert f(x)\rangle. $$ +This will in most cases be an entangled state: for instance, if $f(x) = x$, then I get what looks like a Bell state.

+ +

I want to consider a case where the first register is already maximally entangled with a third party, Eve.

+ +
    +
  • One way to proceed is to write the first register as a mixed state which I obtain after tracing out Eve's part. The trouble now is, when we consider the action of the gate, that the gate entangles the two registers. I have no idea how to sort out the entanglement between Eve and the first register and the new entanglement between the first and second registers.

  • Alternatively, if I don't trace out Eve's register and instead implement the gate $\mathbb 1\otimes U_f$, then I'm still not sure what the outcome is. Before the gate, I have $$\sum_x \vert x \rangle_E\vert x\rangle\vert 0\rangle. $$ (I have marked Eve's register for clarity.) After the gate, I could naively write $$\sum_x \vert x\rangle_E\vert x\rangle\vert f(x)\rangle, $$ but this looks dubious to me. Particularly, this looks like Eve is now entangled with the second register but that seems wrong.

+

I'm not sure how entanglement monogamy fits in but I suspect my guess for the state isn't compatible with it. Can anyone clarify what's going on for me?

+",4831,,26,,12/23/2018 8:15,12/23/2018 8:15,A quantum circuit with entanglement with Eve,,1,4,,,,CC BY-SA 4.0 +4823,2,,2261,11/27/2018 21:20,,1,,"

I created this simple video lecture aimed at computer scientists, which explains how basic quantum circuits work mathematically and how you can use them to solve the Deutsch Oracle problem: the simplest problem where a quantum computer outperforms a classical computer in some sense. I tried to make the lecture I would have wanted to watch when originally struggling through introductory textbooks on quantum computing (I used Quantum Computing for Computer Scientists by Mannucci & Yanofsky and Quantum Computer Science: An Introduction by Mermin) and am happy with the result. The slides are available here.

+",4153,,,,,11/27/2018 21:20,,,,0,,,,CC BY-SA 4.0 +4825,2,,4806,11/28/2018 1:36,,3,,"

Thanks for posting the full Q# code! The problem is in testPr1: you're calling func() twice, and returning the first element of the tuple returned from the first call, and the second element of the tuple returned from the second call. Each call operates on a different qubit and performs a separate random measurement, so all 4 possible combinations should show up.

+ +

To get the results you're looking for, try replacing the body of testPr1 with something like:

+ +
let res = func();
+return (Pr0(res), Pr1(res));
+
+",4265,,,,,11/28/2018 1:36,,,,0,,,,CC BY-SA 4.0 +4826,2,,4822,11/28/2018 8:10,,2,,"

In fact, what you suggest is absolutely right. If you start in the state +$$ +\sum_x|x\rangle_E|x\rangle|0\rangle, +$$ +then you will get the state +$$ +\sum_x|x\rangle_E|x\rangle|f(x)\rangle. +$$

+ +

Monogamy of entanglement is a slightly different issue that's not really a concern here. What it says (loosely) is that if Alice and Eve share a maximally entangled state, then Alice and Bob cannot share any entanglement at all. But then, there's a tradeoff. As you reduce the amount of entanglement between Alice and Eve, you can correspondingly increase the amount of entanglement between Alice and Bob. The state that you've produced is entangled (a particular type of multipartite entanglement called GHZ), and there's entanglement between all 3 parties. But it sits within the bounds of what's allowed.

+ +

To be explicit, there are several different types of entanglement measure to which you can apply monogamy arguments. Probably the most common is the tangle, but I'm going to compute a different one (because it's something I did in one of my papers, which means I remember how to do it). Let's assume we have the state +$$ +|\Psi\rangle=\frac{1}{\sqrt{n}}\sum_{x=0}^{n-1}|x\rangle_E|x\rangle_A|x\rangle_B. +$$ +We're going to calculate something called the singlet fraction between Alice and Eve, $p_{AE}$. By symmetry (exchange the labels of E and B), Alice and Bob will share the same singlet fraction $p_{AB}=p_{AE}$.

+ +

The monogamy relation that they're supposed to satisfy (according to the paper) is +$$ +p_{AE}+p_{AB}\leq\frac{n-1}{n}+\frac{1}{n+1}\left(\sqrt{p_{AE}}+\sqrt{p_{AB}}\right)^2, +$$ +so for $p_{AE}=p_{AB}$, we're expecting to find that +$$ +p\leq\frac{n+1}{2n}. +$$ +So long as it does that, there's no problem.

+ +

What is the single monogamy? +$$ +p_{AE}=\max_{U,V}\text{Tr}(|B\rangle\langle B|_{AE}\otimes \mathbb{I}_BV_E\otimes U_A\otimes \mathbb{I}_B|\Psi\rangle\langle\Psi|V_E^\dagger\otimes U_A^\dagger\otimes\mathbb{I}_B). +$$ +where $|B\rangle=\frac{1}{\sqrt{n}}\sum_x|x\rangle|x\rangle$ is the Bell state. +You can certainly verify that if $U=V=\mathbb{I}$, then $p_{AE}=1/n$, and you can prove that this is optimal with just a little more work. Since +$$ +\frac{1}{n}\leq\frac{n+1}{2n}, +$$ +we're happy. Indeed, this particular state is nowhere near saturating this monogamy relation for $n\geq 2$.

+",1837,,,,,11/28/2018 8:10,,,,0,,,,CC BY-SA 4.0 +4827,1,5324,,11/28/2018 15:12,,2,215,"

The question is already explained in the title, but I'm curious about the possibility of adopting the BB84 protocol over LiFi (in theory). Would it be possible, or are there some limitations, such as the need for single-photon sources?

+",5214,,26,,11/28/2018 15:15,1/31/2019 12:30,BB84 protocol over LiFi,,1,1,,,,CC BY-SA 4.0 +4828,2,,2261,11/28/2018 17:29,,3,,"

If you prefer to learn while coding, check out the Quantum Katas. There are katas for teleportation, Deutsch-Jozsa, Grover's, and Simon's algorithms, among others.

+",4265,,,,,11/28/2018 17:29,,,,0,,,,CC BY-SA 4.0 +4829,1,4830,,11/29/2018 4:03,,4,335,"
+

Zeno machines (abbreviated ZM, and also called accelerated Turing machine, ATM) are a hypothetical computational model related to Turing machines that allows a countably infinite number of algorithmic steps to be performed in finite time. -Wikipedia

+
+ +
+ +
+

The quantum Zeno effect (also known as the Turing paradox) is a feature of quantum-mechanical systems allowing a particle's time evolution to be arrested by measuring it frequently enough with respect to some chosen measurement setting. -Wikipedia

+
+ +
+ +

This seems fitting for a continuous-variable environment (eg. photonics). Has any research been done on quantum Zeno machines? (Google: No results found for ""quantum zeno machine"".)

+",2645,,55,,08-09-2020 04:59,08-09-2020 04:59,Has any research been done on quantum Zeno machines?,,2,4,,,,CC BY-SA 4.0 +4830,2,,4829,11/29/2018 15:43,,5,,"

There are two Zeno-related notions in quantum computation. The first, which is controversial, is usually called hypercomputation; it deals with the possibility of surpassing the limitations of the Church-Turing thesis by means of quantum computation. It is related to the Zeno effect through the fact that, if it could be realized, it might solve the halting problem. Contogo refers to this option as a ""Quantum Zeno machine"". In this context, please see also Nielsen exploring this possibility.

+ +

The second topic, which goes by the name quantum Zeno effect (as referred to in the Wikipedia page in the question), is well established and experimentally verified (please see Kwiat, White, Mitchell, Nairz, Weihs, Weinfurter, and Zeilinger).

+ +

Kwiat et al. were motivated by one of the most striking examples of this effect: the Elitzur-Vaidman bomb testing problem (please see the following review by Vaidman).

+ +

This problem deals with bombs which can only interact with the outside world by means of their trigger. Classically, testing the bomb would cause every good bomb to explode; quantum mechanically, one can reach, using the Zeno effect, almost 100% detection probability without exploding the bomb. This is an example of a non-demolition measurement which is not accompanied by state reduction.

+ +

Translated into quantum computation terminology, this effect is what Jozsa calls counterfactual quantum computation, whereby a quantum computer, programmed to solve a problem, can give the result even without running.

+ +

A detailed account of the Elitzur-Vaidman bomb testing problem is given by Penrose in his popular book: Shadows of the mind.

+ +

One of the most important applications of the quantum Zeno effect is its exploitation to keep a system inside a decoherence-free subspace (by performing repeated measurements), as proposed by Beige, Braun, Tregenna, and Knight. Very recently, this proposal was adapted to holonomic quantum computation by Mousolou and Sjöqvist.

+",4263,,2645,,11/29/2018 15:45,11/29/2018 15:45,,,,8,,,,CC BY-SA 4.0 +4831,1,4891,,11/30/2018 1:20,,2,89,"

I'm simulating a circuit like this:

+ +
(0, 0): ───────────────X───────X─────────@───────────────M─────────────                 
+                       │       │         │               │                             
+(1, 0): ───H───@───H───@───X───@^0.333───@^0.5───X───H───M─────────────                 
+               │                                         │                                                                 
+(2, 0): ───────X─────────────────────────────────────────M───────────── 
+
+ +

but when I try to debug by simulating with moment steps (as instructed here), I end up getting 36 steps, and I assume this is because some of the gates in the circuit are decomposed into XmonGates. Is there a way to see this decomposed circuit?

+ +

Alternatively, is there a way to step through the simulation where each step matches a moment in the original circuit?

+",5222,,26,,12/23/2018 7:58,12/23/2018 7:58,Is it possible to see how CompositeGates are decomposed when simulated using XmonSimulator?,,1,0,,,,CC BY-SA 4.0 +4832,2,,4798,11/30/2018 10:26,,6,,"

I wanted to add a short comment to the earliest reference. I believe this was shown already a bit earlier in Section 5.2 of

+ +
A Theory of Quantum Error-Correcting Codes
+Emanuel Knill, Raymond Laflamme 
+https://arxiv.org/abs/quant-ph/9604034
+
+ +

where the specific result is:

+ +
+

Theorem 5.1. A $(2^r,k)$ $e$-error-correcting quantum code must satisfy $r \geqslant 4e + \lceil \log k \rceil$.

+
+ +

Here, an $(N,K)$ code is an embedding of a $K$-dimensional subspace into an $N$-dimensional system; it is an $e$-error-correcting code if the system decomposes as a tensor product of qubits, and the code is capable of correcting errors of weight $e$. +In particular, a $(2^n, 2^k)$ $e$-error-correcting code is what we would now describe as an $[\![n,k,2e\:\!{+}1]\!]$ code. Theorem 5.1 then allows us to prove that for $k \geqslant 1$ and an odd integer $d \geqslant 3$, an $[\![n,k,d]\!]$ code must satisfy +$$ +\begin{aligned} + n +\;&\geqslant\; + 4\bigl\lceil\tfrac{d-1}{2}\bigr\rceil + \lceil \log 2^k \rceil +\\[1ex]&\geqslant\; + \bigl\lceil 4 \cdot \tfrac{d-1}{2} \bigr\rceil + \lceil k \rceil +\\[1ex]&=\; + 2d - 2 + k +\;\geqslant\; + 6 - 2 + 1 +\;=\; + 5. +\end{aligned} +$$

+ +
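For what it's worth, this arithmetic is easy to check mechanically. A small sketch of my own (the function name is mine, not from the papers):

```python
import math

def lower_bound_n(k, d):
    """Lower bound on n for an [[n, k, d]] code implied by Theorem 5.1,
    with e = (d - 1) / 2 for odd distance d."""
    e = (d - 1) // 2
    return 4 * e + math.ceil(math.log2(2 ** k))

# k = 1, d = 3 reproduces n >= 5, matching the five-qubit code,
bound_5 = lower_bound_n(1, 3)
# and agrees with the closed form 2d - 2 + k for odd d.
closed_form = 2 * 3 - 2 + 1
```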

(N.B.
+There is a peculiarity with the dates here: the arXiv submission of the above paper is April 1996, a couple of months earlier than the Grassl, Beth, and Pellizzari paper submitted in Oct 1996. However, the date below the title in the pdf states a year earlier, April 1995.)

+ +

As an alternative proof, I could imagine (but haven't tested yet) that simply solving for a weight distribution that satisfies the MacWilliams identities should also suffice. Such a strategy is indeed used in

+ +
Quantum MacWilliams Identities
+Peter Shor, Raymond Laflamme
+https://arxiv.org/abs/quant-ph/9610040
+
+ +

to show that no degenerate code on five qubits exists that can correct arbitrary single errors.

+",2192,,124,,12-01-2018 16:14,12-01-2018 16:14,,,,5,,,,CC BY-SA 4.0 +4834,1,4836,,11/30/2018 17:55,,12,3768,"

Let's say Alice wants to send Bob a $|0\rangle$ with probability .5 and $|1\rangle$ also with probability .5. So after the qubit Alice prepares leaves her lab, the system could be represented by the following density matrix: $$\rho = .5 |0\rangle \langle 0| + .5 |1\rangle \langle 1|= \begin{bmatrix} .5 & 0 \\ 0 & .5 \end{bmatrix}$$ Am I right?

+ +

Then, Bob would perform measurement in the standard basis. This is where I get confused. For example, he could get $|0\rangle$ with probability .5 and $|1\rangle$ with .5. So what is the density matrix representing the state after Bob performs his measurement in the standard basis accounting for both of his possible outcomes?

+",2403,,55,,2/20/2021 16:25,2/20/2021 16:25,Density matrix after measurement on density matrix,,3,1,,,,CC BY-SA 4.0 +4835,2,,4834,11/30/2018 18:25,,6,,"

So Alice sends Bob a qubit with the density matrix

+ +

$$\rho = \frac{1}{2}|0\rangle\langle 0| + \frac{1}{2}|1\rangle\langle 1| = \begin{bmatrix} .5 & 0 \\ 0 & .5 \end{bmatrix}$$

+ +

as you said. (I've fixed the notation to make it a density matrix; what you wrote had the structure of a state, but with non-normalized coefficients. It is important to note the distinction between pure and mixed states. What you are discussing is a mixed state and cannot be described as a single state $|\psi\rangle$; mixed states can only be described in the density-matrix picture.)

+ +

After measurement the state Bob has becomes a pure state, as you described. He gets either

+ +

$$|0\rangle \rightarrow \rho = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}$$ +or $$|1\rangle \rightarrow \rho = \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix}$$ +with equal probability. As you can see once the qubit is measured you lose all indication that it was once in a mixed state or anything else about its history.

+",3056,,,,,11/30/2018 18:25,,,,0,,,,CC BY-SA 4.0 +4836,2,,4834,11/30/2018 19:40,,12,,"

So, Bob is given the following state (also called the maximally-mixed state):

+ +

$\rho = \frac{1}{2}|0\rangle\langle 0| + \frac{1}{2}|1\rangle\langle 1| = \begin{bmatrix} \frac{1}{2} & 0 \\ 0 & \frac{1}{2} \end{bmatrix}$

+ +

As you noticed, one nice feature of density matrices is they enable us to capture the uncertainty of an outcome of a measurement and account for the different possible outcomes in a single equation. Projective measurement is defined by a set of measurement operators $P_i$, one for each possible measurement outcome. For example, when measuring in the computational basis (collapsing to $|0\rangle$ or $|1\rangle$) we have the following measurement operators:

+ +

$P_0 = |0\rangle\langle0| = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}$, $P_1 = |1\rangle\langle1| = \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix}$

+ +

where $P_0$ is associated with outcome $|0\rangle$ and $P_1$ is associated with outcome $|1\rangle$. These matrices are also called projectors.

+ +

Now, given a single-qubit density operator $\rho$, we can calculate the probability of it collapsing to some value with the following formula:

+ +

$p_i = Tr(P_i \rho)$

+ +

where $Tr(M)$ is the trace, which is the sum of the elements along the main diagonal of matrix $M$. So, we calculate the probability of your example collapsing to $|0\rangle$ as follows:

+ +

$p_0 = Tr(P_0 \rho) += Tr \left( \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} \frac{1}{2} & 0 \\ 0 & \frac{1}{2} \end{bmatrix} \right) += Tr \left( \begin{bmatrix} \frac 1 2 & 0 \\ 0 & 0 \end{bmatrix} \right) = \frac 1 2$

+ +

And the formula for the post-measurement density operator is:

+ +

$\rho_i = \frac{P_i \rho P_i}{p_i}$

+ +

which in your example is:

+ +

$\rho_0 = \frac{\begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} \frac{1}{2} & 0 \\ 0 & \frac{1}{2} \end{bmatrix} \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}}{\frac 1 2} = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}$

+ +

which is indeed the density matrix for the pure state $|0\rangle$.

+ +

We don't just want the density operator for a certain measurement outcome, though - what we want is a density operator which captures the branching nature of measurement, representing it as an ensemble of possible collapsed states! For this, we use the following formula:

+ +

$\rho' = \sum_i p_i \rho_i = \sum_i P_i \rho P_i$

+ +

in our example:

+ +

$\rho' = \left( \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} \frac{1}{2} & 0 \\ 0 & \frac{1}{2} \end{bmatrix} \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} \right) + \left( \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} \frac{1}{2} & 0 \\ 0 & \frac{1}{2} \end{bmatrix} \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix} \right)$

+ +

$\rho' = \begin{bmatrix} \frac 1 2 & 0 \\ 0 & 0 \end{bmatrix} ++ \begin{bmatrix} 0 & 0 \\ 0 & \frac 1 2 \end{bmatrix} = \begin{bmatrix} \frac{1}{2} & 0 \\ 0 & \frac{1}{2} \end{bmatrix} = \frac{1}{2}|0\rangle\langle 0| + \frac{1}{2}|1\rangle\langle 1|$

+ +

Your final density matrix is unchanged! This should actually be unsurprising, because we started out with the maximally-mixed state and performed a further randomizing operation on it.

+ +
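These calculations are easy to reproduce numerically. A small NumPy sketch (variable names are mine):

```python
import numpy as np

rho = np.array([[0.5, 0.0], [0.0, 0.5]])  # the maximally-mixed state
P0 = np.array([[1.0, 0.0], [0.0, 0.0]])   # projector onto |0>
P1 = np.array([[0.0, 0.0], [0.0, 1.0]])   # projector onto |1>

# Outcome probabilities p_i = Tr(P_i rho)
p0 = np.trace(P0 @ rho)
p1 = np.trace(P1 @ rho)

# Post-measurement ensemble rho' = sum_i P_i rho P_i
rho_prime = P0 @ rho @ P0 + P1 @ rho @ P1
```

For this input `rho_prime` comes out equal to `rho`, as argued above.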

Much of this answer copied from my previous detailed answer here, which in turn is based off of another answer here.

+",4153,,4153,,11/30/2018 19:45,11/30/2018 19:45,,,,1,,,,CC BY-SA 4.0 +4837,2,,4834,12-01-2018 11:58,,4,,"

It is important to emphasise that a density matrix may not be absolute; it represents the state of somebody's knowledge of the system.

+ +

To see this, consider 3 parties: Alice, Bob and Charlie. Alice prepares a qubit in either $|0\rangle$ or $|1\rangle$. Now, Alice knows which state she prepared (let's assume it's $|0\rangle$), so Alice's description of the qubit is $|0\rangle\langle 0|$. However, Bob and Charlie only know the preparation procedure, not which choice Alice made. So, they both have to describe the state as 50:50 being 0 or 1. i.e. +$$ +\frac{1}{2}(|0\rangle\langle 0|+|1\rangle\langle 1|). +$$

+ +

Now, Bob receives the state and measures it in the Z basis. So long as Alice knows what measurement is being done, she knows that the state won't change, so her description of the state remains the same, $|0\rangle\langle 0|$. Bob gets the measurement result, so now he knows that the state is in $|0\rangle$, so if he's using a density matrix, he describes it as $|0\rangle\langle 0|$. However, Charlie, who does not know what measurement result Bob got, still does not know any better than to describe the state by +$$ +\frac{1}{2}(|0\rangle\langle 0|+|1\rangle\langle 1|). +$$

+",1837,,,,,12-01-2018 11:58,,,,0,,,,CC BY-SA 4.0 +4838,1,,,12-01-2018 23:51,,3,467,"

How to set up Qconfig.py and where is the file? I mean under which folder? Like /anaconda/lib/python3.6/site-packages/qiskit/.

+",5232,,26,,12-02-2018 06:13,12-07-2018 10:53,How to set up Qconfig.py and where is the file?,,1,0,,,,CC BY-SA 4.0 +4839,1,4841,,12-02-2018 11:28,,9,1225,"

In the Polar Decomposition section in Nielsen and Chuang (page 78 in the 2002 edition), +there is a claim that any matrix $A$ will have a decomposition $UJ$ where $J$ is positive and is equal to $\sqrt{A^\dagger A}$.

+ +

Firstly, how can we be sure that every matrix $A^\dagger A$ will have a square root, and secondly, that the square root will be a positive operator?

+",2832,,640,,1/28/2019 21:45,10/22/2022 9:12,"How can we be sure that for every $A$, $A^\dagger A$ has a positive square root?",,1,0,,,,CC BY-SA 4.0 +4841,2,,4839,12-02-2018 12:44,,9,,"

A matrix is positive (semidefinite) if and only if it is Hermitian (and thus unitarily diagonalizable) and all its eigenvalues are non-negative (that they are real follows automatically from it being Hermitian). +If this is not the way you define a positive semidefinite (PSD) operator, then you need to specify how you do, so that we can prove the equivalence.

+

In other words, $A$ is positive, $A\ge0$, iff it can be written as +$$A=\sum_k \lambda_k v_k v_k^*\equiv \sum_k \lambda_k \lvert v_k\rangle\!\langle v_k\rvert, \quad \lambda_k\ge0,$$ +with $\langle v_k\rvert v_j\rangle\equiv v_k^* v_j=\delta_{jk}$. +I used here both dyadic notation and bra-ket notation just to point out that these are simply two equivalent ways to write the same thing.

+

Why is $A^\dagger A\ge0$?

+

One way to show this is to start from the fact that unitarily diagonalizable is equivalent to normal. +This means that a matrix $B$ can be written as $B=UDU^\dagger$ for some unitary $U$ and diagonal matrix $D$ if and only if $BB^\dagger=B^\dagger B$.

+

As you can readily verify, it is the case that $(A^\dagger A)^\dagger (A^\dagger A)=(A^\dagger A)(A^\dagger A)^\dagger$. It follows that we can write +$$A^\dagger A=\sum_k s_k\lvert v_k\rangle\!\langle v_k\rvert$$ +for some (generally complex as far as we know now) $s_k$, satisfying $A^\dagger A\lvert v_k\rangle=s_k\lvert v_k\rangle$.

+

That $s_k\in\mathbb R$, and more precisely $s_k\ge0$, now follows from +$$A^\dagger A\lvert v_k\rangle=s_k\lvert v_k\rangle +\Longrightarrow s_k=\langle v_k\rvert A^\dagger A\lvert v_k\rangle=\|Av_k\|^2\ge0.$$

+

Why $\sqrt{A^\dagger A}\ge0$?

+

The square root of a PSD operator is (can be) defined through the square root of its eigenvalues. In other words, if $A=\sum_k s_k\lvert v_k\rangle\!\langle v_k\rvert$ with $s_k\ge0$, then we define +$$\sqrt A=\sum_k \sqrt{s_k}\lvert v_k\rangle\!\langle v_k\rvert.$$ +Clearly, $s_k\ge0\Longrightarrow \sqrt{s_k}\ge0$, and thus $A\ge0\Longrightarrow \sqrt A\ge0$.

+

Actually, more accurately, the square root of a matrix is not uniquely defined, and does not need to be positive, in part for the same reason that the square root of a real or complex number is only defined up to a sign. In the case of matrices you have even more freedom whenever there are degenerate eigenvalues: if a matrix has degenerate eigenvalues, then you can choose arbitrary bases within each degenerate eigenspace, and then arbitrarily choose the signs to assign when taking the square roots of the corresponding eigenvalues. +As trivial examples, consider $A^\dagger A=I$, and observe that both $(-I)^2=I$ and $X^2=I$, even though clearly $-I$ and $X$ are not PSD. +Nonetheless, when saying that the square root of a PSD operator is PSD, you are choosing a positive square root, similarly to what you do when taking the algebraic square root of real numbers, hence the statement.

+
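A quick numeric illustration of these trivial examples (my own sketch): both $-I$ and the Pauli $X$ square to the identity, yet neither is PSD.

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])  # Pauli X

# Both square to the identity...
sq_minus_I = (-I2) @ (-I2)
sq_X = X @ X

# ...but each has -1 as an eigenvalue, so neither is PSD.
min_eig_minus_I = np.linalg.eigvalsh(-I2).min()
min_eig_X = np.linalg.eigvalsh(X).min()
```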

Polar decomposition from SVD

+

This is not directly related to the question asked, but as we are talking about this in connection with the polar decomposition, let me show another way to get to the polar decomposition.

+

Start from the previously shown fact that $A^\dagger A\ge0$. +This is equivalent to it being writable as $A^\dagger A=\sum_k s_k\lvert v_k\rangle\!\langle v_k\rvert$ for $s_k\ge0$. +But then again, note that +$$A^\dagger A v_k=s_k v_k +\Longleftrightarrow \langle A v_k,Av_j\rangle=\delta_{jk}s_k +\Longleftrightarrow Av_j=\sqrt{s_j}w_j$$ +for some orthonormal set of vectors $w_j$. +This is nothing but the singular value decomposition of $A$.

+

Now, to get to the polar decomposition, we just rewrite this as +$$A=\sum_k \sqrt{s_k}\lvert w_k\rangle\!\langle v_k\rvert= +\Bigg(\underbrace{\sum_k \lvert w_k\rangle\!\langle v_k\rvert}_{U}\Bigg)\Bigg(\underbrace{\sum_k \sqrt{s_k} \lvert v_k\rangle\!\langle v_k\rvert}_{\sqrt{A^\dagger A}}\Bigg),$$ +which is nothing but the polar decomposition of $A$.

+
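This construction can also be checked numerically. A NumPy sketch of my own, building the polar decomposition from the SVD:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))

# SVD: A = W diag(singular values) V^dagger
W, s, Vh = np.linalg.svd(A)

U = W @ Vh                         # unitary part, sum_k |w_k><v_k|
J = Vh.conj().T @ np.diag(s) @ Vh  # sqrt(A^dagger A), the PSD part
```

Here `U @ J` reproduces `A`, `U` is unitary, and `J` is Hermitian with non-negative eigenvalues.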

See also Intuitive role of the polar decomposition in proof of Uhlmann's theorem for fidelity.

+",55,,55,,10/22/2022 9:12,10/22/2022 9:12,,,,0,,,,CC BY-SA 4.0 +4842,1,4843,,12-02-2018 20:26,,4,154,"

Given an operator $L = \sum_{ij}L_{ij}\vert i\rangle\langle j\vert$, in some basis, the definition of vectorization is $vec(L) = \sum_{ij}L_{ij}\vert i\rangle\vert j\rangle$. The operation is invertible and heavily used in quantum information.

+ +

This definition always leads to a pure bipartite state but my notes, Wikipedia, etc. have no indication about whether one can also map operators (of some sort) to a mixed bipartite state. Can anyone shed some light on if this makes sense and there exist extensions of the definition of vectorization?

+",4831,,55,,8/20/2020 8:03,8/20/2020 8:03,Can vectorization lead to mixed states?,,1,3,,,,CC BY-SA 4.0 +4843,2,,4842,12-02-2018 22:17,,4,,"

You have given $L$ as a general linear operator between Hilbert spaces $H_1$ and $H_2$ (the ones indexed by $j$ and $i$ respectively). That is, all possible linear operators, the entirety of $Mor(H_1,H_2)$. Mor here is short for morphism, a word for whatever notion of functions/operators/etc. is allowed in the context at hand. In the present context of vector spaces, this just means the linear operators $H_1 \to H_2$.

+ +

So we have all of the linear operators, and for each of them $\mathbf{vec}$ produces a pure state in $H_2 \otimes H_1$. There is nothing missing. There are no other linear operators, and there is no way to get states $\rho$ on $H_2 \otimes H_1$ that aren't pure here: $\mathbf{vec}$ yields only pure states, never mixed ones.

+ +

You already noticed this is invertible. That is, we have $Mor(H_1,H_2) \simeq H_2 \otimes H_1$. The LHS is where $L$ is an element and the RHS is where $\mathbf{vec}(L)$ is an element.

+ +
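In coordinates, this isomorphism is just a reshape. A minimal NumPy sketch (notation mine):

```python
import numpy as np

# L = sum_ij L_ij |i><j| as a matrix
L = np.array([[1, 2], [3, 4]], dtype=complex)

# vec(L) = sum_ij L_ij |i>|j>: row-major flattening of the matrix
vec_L = L.reshape(-1)

# The inverse map: reshape the vector back into the matrix
L_back = vec_L.reshape(L.shape)
```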

There are extensions of this concept. For example, suppose we have 3 Hilbert spaces $H_1$, $H_2$ and $H_3$. Giving a linear map from $H_1$ to the space of linear maps from $H_2$ to $H_3$, which would be denoted $Mor(H_1, Mor(H_2,H_3))$, is the same as giving a linear operator $H_1 \otimes H_2 \to H_3$. To see how this implies the above, take $H_3=\mathbb{C}$. Then a way of thinking about a bra for $H_2$ is as a linear map $H_2 \to H_3=\mathbb{C}$. I'll leave you to ponder this fact some more to fully flesh out how it implies the above.

+",434,,26,,12-03-2018 07:49,12-03-2018 07:49,,,,0,,,,CC BY-SA 4.0 +4844,1,,,11/26/2018 22:23,,4,163,"

Show that an isometric extension of the erasure channel is $$U^N_{A\to BE} =\sqrt{1−\epsilon}\left(|0\rangle_B \langle 0|_A +|1\rangle_B \langle 1|_A \right)\otimes|e\rangle_E+ \sqrt{\epsilon}|e\rangle_B \langle0|_A \otimes |0\rangle_E + \sqrt{\epsilon}|e\rangle_B \langle 1|_A \otimes |1\rangle_E$$
+$$=\sqrt{1−\epsilon} \text{ I}_{A \to B}\otimes|e\rangle_E+\sqrt{\epsilon}\text{ I}_{A \to E}\otimes|e\rangle_B$$
+where the erasure channel implements the following: $\rho \to (1 − \epsilon) \rho + \epsilon|e\rangle\langle e|$.

+ +

I know that the Kraus operators for the quantum erasure channel are the following: $\left\{\sqrt{1−\epsilon}\left(|0\rangle_B \langle 0|_A +|1\rangle_B \langle 1|_A \right),\sqrt{\epsilon}|e\rangle_B \langle0|_A,\sqrt{\epsilon}|e\rangle_B \langle1|_A\right\}$. I also know that for a quantum channel $N_{A\to B}$ with the following Kraus representation:
+$$N_{A\to B}(\rho_A) = \sum_jN_j\rho_A N_j^{\dagger},$$
+an isometric extension of the channel $N_{A\to B}$ is the following linear map: $$U^N_{A \to BE} =\sum_j N_j \otimes|j\rangle.$$

+ +

Using this, the $N_1$, $N_2$ and $N_3$ in our case are $\sqrt{1−\epsilon}\left(|0\rangle_B \langle 0|_A +|1\rangle_B \langle 1|_A \right)$, $\sqrt{\epsilon}|e\rangle_B \langle0|_A$ and $\sqrt{\epsilon}|e\rangle_B \langle1|_A$, respectively. I am just confused about which $|j\rangle$ to choose. How do I know the appropriate orthonormal vectors in this case?

+ +
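(As a sanity check of my own, treating $|e\rangle$ as a third basis vector of the output space, these Kraus operators do satisfy the completeness relation $\sum_j N_j^{\dagger} N_j = I_A$:)

```python
import numpy as np

eps = 0.3
# Output basis ordered |0>, |1>, |e>; the input A is a qubit.
N1 = np.sqrt(1 - eps) * np.array([[1, 0], [0, 1], [0, 0]])  # |0><0| + |1><1|
N2 = np.sqrt(eps) * np.array([[0, 0], [0, 0], [1, 0]])      # |e><0|
N3 = np.sqrt(eps) * np.array([[0, 0], [0, 0], [0, 1]])      # |e><1|

completeness = sum(N.conj().T @ N for N in (N1, N2, N3))
```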

Thanks for the help!

+",,tattwamasi amrutam,55,,12-11-2021 18:58,01-10-2022 19:08,Derive the form of an isometric extension of the erasure channel,,1,2,,,,CC BY-SA 4.0 +4845,1,,,12-03-2018 05:06,,2,112,"

I am trying to implement the order finding algorithm in Cirq, finding the minimal positive $r$ for coprime $x$ and $N$ satisfying the equation $x^r \equiv 1 \pmod N$. In my case, I have set $x = 2$ and $N = 3$, so the algorithm should output $r = 2$. In order to implement the unitary, I simply observed that if we initialize the input that is being acted upon by the controlled-unitary matrices as a collection of $|1\rangle$ states, the unitary operation for this algorithm, $U|y\rangle = |2^j y \bmod 3\rangle$, acts trivially in all circumstances, since $|y\rangle = |11\rangle = |3\rangle$. I feel as though I am missing a very important point, or maybe am not understanding the algorithm correctly, because when I try to implement the algorithm with no unitary gate (since it is supposedly trivial), the algorithm does not work.

+",4907,,26,,12-03-2018 07:31,12-03-2018 07:31,Would this quantum algorithm implementation work?,,0,10,,,,CC BY-SA 4.0 +4846,1,4847,,12-03-2018 06:58,,3,316,"

Are projective measurement bases always orthonormal?

+",4153,,26,,12-03-2018 07:21,10-05-2021 12:26,Are projective measurement bases always orthonormal?,,2,1,,,,CC BY-SA 4.0 +4847,2,,4846,12-03-2018 07:42,,8,,"

Yes.

+ +

Remember that you require several properties of a projective measurement including $P_i^2=P_i$ for each projector, and +$$ +\sum_iP_i=\mathbb{I}. +$$ +The first of these show you that the $P_i$ have eigenvalues 0 and 1. Now take a $|\phi\rangle$ that is an eigenvector of eigenvalue 1 of a particular projector $P_i$. Use this in the identity relation: +$$ +\left(\sum_jP_j\right)|\phi\rangle=\mathbb{I}|\phi\rangle +$$ +Clearly, this simplifies to +$$ +|\phi\rangle+\sum_{j\neq i}P_j|\phi\rangle=|\phi\rangle. +$$ +Hence, +$$ +\sum_{j\neq i}P_j|\phi\rangle=0. +$$ +The $P_j$ are all non-negative, so the only way that this can be 0 is if $P_j|\phi\rangle=0$ for all $j\neq i$. (To expand upon this, assume there's a $P_k$ such that $P_k|\phi\rangle=|\psi\rangle\neq 0$. This means that +$$ +\sum_{j\neq i,k}\langle\psi|P_j|\phi\rangle=-\langle\psi|P_k|\phi\rangle, +$$ +so some terms must be negative, which is impossible if the eigenvalues are all 0 and 1.)
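A small numeric illustration of this argument (my own example): take the projectors onto an orthonormal basis of a qutrit and check that completeness forces $P_j|\phi\rangle=0$ for all $j\neq i$.

```python
import numpy as np

# Projectors onto the three computational basis states of a qutrit
basis = np.eye(3)
P = [np.outer(basis[:, i], basis[:, i]) for i in range(3)]

# Completeness: sum_i P_i = I
identity_check = sum(P)

# |phi> is an eigenvector of P_0 with eigenvalue 1
phi = basis[:, 0]

# The remaining projectors must annihilate |phi>
residual = sum(P[j] @ phi for j in range(1, 3))
```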

+",1837,,1837,,12-03-2018 16:20,12-03-2018 16:20,,,,0,,,,CC BY-SA 4.0 +4848,2,,64,12-03-2018 11:26,,3,,"

http://quantumcomputer.ac.cn

+ +

They really need to get their act together and make sure it's on the first page of Google. I had to go way into uncharted territory to find this. (Like, not the first page of Google.)

+",1867,,,,,12-03-2018 11:26,,,,0,,,,CC BY-SA 4.0 +4849,1,,,12-03-2018 12:26,,0,163,"

$\newcommand{\q}[2]{\langle #1 | #2 \rangle} +\newcommand{\qr}[1]{|#1\rangle} +\newcommand{\ql}[1]{\langle #1|} +\newcommand{\floor}[1]{\left\lfloor #1 \right\rfloor} +\newcommand{\round}[1]{\left\lfloor #1 \right\rceil} +\DeclareMathOperator{\div}{div} +\DeclareMathOperator{\modulo}{mod} +$I present all the detailed reasoning in my strategy and show it has a problem. My question is how to overcome this flaw. An example here will be best. In what follows, ""bit"" means ""q-bit"".

+ +

Let $N = 77$ and let $n$ be the number of bits of $N$. How many bits do I need to superpose all odd integers from 1 to $\sqrt{77}$? I believe that's approximately $n/2$. (It would be $n/2$ exactly if $n$ were even. Since it is not, I need $\floor{n/2} + 1$.) For $N = 77$, $7$ bits is enough.

+ +

Let $B$ be a register big enough to hold the superposed states of all odd integers from 1 to $\sqrt{77}$. Let $A$ be a register big enough to hold $77$, but also big enough to hold the division of $77$ by the superposed state held in $B$. For clarity, assume my division operator is given by

+ +

$$U_{\div} \qr{b}_x \qr{a}_y = \qr{b}_x (\qr{a \div b} \qr{a \modulo b})_y$$

+ +

and assume that $y = n + (n/2)$ and $x = n/2$. So, in our example, since $N=77$, it follows $n = 8$ and then the size of $B$ is $4$ bits, while the size of $A$ is $8 + 4 = 12$.

+ +

But since I want in $B$ only the odd integers, I take $B$'s lowest bit and force it to be $1$. So my preparation of $B$ is to start with it completely zeroed out, flip its lowest bit and finally use the Hadamard gate on all of B's bits except the lowest. I get

+ +

$$H^{\otimes 3} \qr{000}\otimes\qr1 = \qr{+}\qr{+}\qr{+} \otimes \qr{1}.$$

+ +

Now I get the states $\qr{1}, \qr3, \qr5, \qr7, \qr9, \qr{11}, \qr{13}, \qr{15}$. I wish I had stopped at $\qr{7}$.

+ +
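To make the counting concrete, here is a tiny classical enumeration (my own sketch) of the basis states this preparation produces for a given register size $m$:

```python
# Flipping the lowest of m qubits to |1> and applying H to the other m - 1
# superposes exactly the odd integers 1, 3, ..., 2**m - 1.
def odd_terms(m):
    return [2 * k + 1 for k in range(2 ** (m - 1))]

terms_4 = odd_terms(4)  # overshoots sqrt(77): 1, 3, ..., 15
terms_3 = odd_terms(3)  # exactly the odd integers up to sqrt(77): 1, 3, 5, 7
```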

This means I need less than $n/2$ bits in register $B$. By inspection, I see in this example that the size of $B$ should be $3$ bits, not $4$ because this way I end up with the superposition terms $\qr1, \qr3, \qr5, \qr7$, but all I'm sure of here is just this example.

+ +

So the question is what size in general should $B$ have so that it is able to hold all superposition terms of only odd integers from $1$ to $\sqrt{N}$?

+",1589,,1589,,12-03-2018 23:30,12-04-2018 01:07,How to prepare a superposed states of odd integers from $1$ to $\sqrt{N}$?,,1,4,,,,CC BY-SA 4.0 +4850,2,,4849,12-03-2018 17:51,,1,,"

You would only need about $\log_2(N)$ bits (more precisely, $\lceil\log_2(N+1)\rceil$) to represent a number $N$, and likewise all the numbers from $0$ to $N$. Similarly, you would need $$\log_2(\sqrt{N}) = \log_2(N^{\frac{1}{2}}) = \frac{1}{2} \times \log_2(N)$$ bits to represent numbers from $0$ to $\sqrt{N}$. So I would say you would need half of $\log_2(N)$ qubits in your B register. The power of quantum computing comes from the fact that on a classical computer you could only represent one specific number in the range $1$ to $\sqrt{N}$ with $\log_2(\sqrt{N})$ bits, whereas here you get the benefit of superposition, which holds all of them at once.
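A minimal numeric sketch of this counting for $N = 77$ (my own illustration):

```python
import math

N = 77

# Bits needed to represent numbers up to N
bits_full_range = math.ceil(math.log2(N))

# Roughly half as many bits suffice for numbers up to sqrt(N)
bits_sqrt_range = math.ceil(bits_full_range / 2)
```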

+",2403,,26,,12-03-2018 22:35,12-03-2018 22:35,,,,1,,,,CC BY-SA 4.0 +4851,2,,4846,12-03-2018 19:04,,3,,"

Here is another way to see this.

+

A projection $P$ is an operator such that $P^2=P$.

+

This directly implies that we can attach to each projector $P$ a set of orthonormal states that represent it, by choosing any orthonormal base for its range. More precisely, if $P_i$ has trace $\operatorname{tr}(P_i)=n$, then we can represent $P_i$ as a set of orthonormal states $\{\lvert\psi_{i,j}\rangle\}_{j=1}^n$. +Note in particular that if $\operatorname{tr}(P_i)=1$ then this choice is unique, meaning that there is always a bijection between trace-1 projections and states.

+

The projector $P_i$ and the corresponding states are connected through +$$P_i=\sum_{j=1}^n \lvert\psi_{ij}\rangle\!\langle \psi_{ij}\rvert.$$ +In the simpler case of $\operatorname{tr}(P_i)=1$ this reads $P_i=\lvert\psi_i\rangle\!\langle\psi_i\rvert$.

+

Now, if you are asking for a projective measurement basis, then you require a set of operators which describes every possible outcome of your state. +This condition is expressed mathematically by requiring $$\sum_i P_i=I,$$ +which in terms of the associated ket states reads +$$\sum_{ij}\lvert\psi_{ij}\rangle\!\langle\psi_{ij}\rvert=I,$$ +which is the completeness relation for the vectors $\{\lvert\psi_{ij}\rangle\}_{ij}$. +This immediately implies that this is also an orthonormal set (to see it, take for example the sandwich of this expression with any $\lvert\psi_{ij}\rangle$).

+

Orthogonality of $P_i$ is equivalent to orthogonality of the corresponding $\lvert\psi_{ij}\rangle$, thus the conclusion.

+

See also Orthogonal projections with $\sum P_i =I$, proving that $i\ne j \Rightarrow P_{j}P_{i}=0$ on math.SE, and links therein.

+",55,,55,,10-05-2021 12:26,10-05-2021 12:26,,,,0,,,,CC BY-SA 4.0 +4852,1,,,12-03-2018 23:51,,6,782,"

I asked a question about this earlier, but I am still coming across problems in my algorithm implementation.

+ +

I am trying to implement the order finding algorithm in Cirq, finding the minimal positive $r$ for coprime $x$ and $N$ satisfying the equation $x^r \equiv 1 \pmod N$. In my case, I have set $x = 2$ and $N = 3$, so the algorithm should output $r = 2$.

+ +

Since the unitary is defined by $U|y\rangle = |2^j y \bmod 3\rangle$, and $y = |1\rangle$, the value of $U|1\rangle$ should simply switch back and forth between $|1\rangle$ and $|2\rangle$. Since we are defining:

+ +

$2^j \bmod 3 \ = \ \big(2^{j_1 2^0} \bmod 3\big)\big(2^{j_2 2^1} \bmod 3\big) \ \cdots \ \big(2^{j_t 2^{t-1}} \bmod 3\big)$

+ +

Each of the terms of the form $\big(2^{j_k 2^k} \bmod 3\big)$ becomes $\big(1 \bmod 3\big)$, except for when $k = 0$, in which case we get $\big(2 \bmod 3\big)$. Because of this, I implemented two $CNOT$ gates controlled by the $j_1$ qubit, acting on each of the two qubits representing $|y\rangle$ ($|1\rangle$ is mapped to $|2\rangle$ if the control qubit is $|1\rangle$, and $|1\rangle$ is mapped to $|1\rangle$ if the control qubit is $|0\rangle$). It doesn't seem necessary to implement more gates on all the other $j$ qubits because, mathematically, they turn out to act trivially in this context.

+ +

After this, I then pass all of the $j$ qubits through the inverse quantum Fourier transform. The outputs I'm getting seem kind of strange:

+ +

+ +

(This is all $7$ of the $j$ qubits measured for $20$ iterations of the circuit).

+ +

I was just wondering if anyone had any insight, since I feel as though I made some kind of mistake in creating the unitary.

+ +

I don't think this is necessarily a coding problem but just in case it is helpful, this is the code I have for the circuit so far:

+ +
# Quantum Order Finding Algorithm
+
+import cirq
+import numpy as np
+import random
+import tensorflow as tf
+import time
+import timeit
+from cirq.google import ExpWGate, Exp11Gate, XmonMeasurementGate
+from cirq.google import XmonSimulator
+from matplotlib import pyplot as plt
+from itertools import combinations
+from cirq.circuits import InsertStrategy
+
+# Gate schematic --> Hadamard (0) --> Controlled Phase gate (1, 0) --> Hadamard (1) --> Swap (0, 1) --> Measure (0, 1)
+
+n = 5
+a = 2
+
+qubits = []
+for i in range(0, n):
+    qubits.append(cirq.GridQubit(0, i))
+
+other_qubits = []
+for k in range(n, n+a):
+    other_qubits.append(cirq.GridQubit(0, k))
+
+circuit = cirq.Circuit()
+
+#Preparing the qubits
+
+for j in qubits:
+    had_gate = cirq.H.on(j)
+    circuit.append([had_gate], strategy=InsertStrategy.EARLIEST)
+
+'''
+for l in other_qubits:
+    x_gate = cirq.X.on(l)
+    circuit.append([x_gate], strategy=InsertStrategy.EARLIEST)
+'''
+
+circuit.append(cirq.X.on(other_qubits[1]))
+
+
+
+
+#Applying the unitary
+
+circuit.append([cirq.CNOT.on(qubits[0], other_qubits[0]), cirq.CNOT.on(qubits[0], other_qubits[1])])
+#circuit.append([cirq.CNOT.on(qubits[0], other_qubits[1]), cirq.CNOT.on(qubits[0], other_qubits[2]), cirq.CNOT.on(qubits[1], other_qubits[2]), cirq.CNOT.on(qubits[1], other_qubits[0]), cirq.CCX.on(qubits[0], qubits[1], other_qubits[0])])
+
+#Applying the Inverse QFT
+
+
+circuit.append(cirq.SWAP.on(qubits[0], qubits[4]))
+circuit.append(cirq.SWAP.on(qubits[1], qubits[3]))
+
+
+
+for b in range(0, n):
+    place = n-b-1
+    for h in range(n-1, place, -1):
+        holder = h
+        gate = cirq.CZ**(1/(2**(h-place)))
+        circuit.append(gate.on(qubits[holder], qubits[place]))
+    circuit.append(cirq.H.on(qubits[place]))
+
+def circuit_init_again(meas=True):
+    if meas:
+        yield XmonMeasurementGate(key='qubit0')(qubits[0])
+        yield XmonMeasurementGate(key='qubit1')(qubits[1])
+        yield XmonMeasurementGate(key='qubit2')(qubits[2])
+        yield XmonMeasurementGate(key='qubit3')(qubits[3])
+        yield XmonMeasurementGate(key='qubit4')(qubits[4])
+
+#circuit.append()
+circuit.append(circuit_init_again())
+
+print("" "")
+print("" "")
+print(circuit)
+print("" "")
+print("" "")
+
+simulator = XmonSimulator()
+result = simulator.run(circuit, repetitions=50)
+print(result)
+
+",4907,,4907,,12-04-2018 22:41,7/22/2019 20:57,Why is this implementation of the order finding algorithm not working?,,1,0,,,,CC BY-SA 4.0 +4855,1,,,12-04-2018 01:22,,5,90,"

I've been looking into $\mathsf{QPIP}_\tau$ as a complexity class. The following will be a summary of definition 3.12 in Classical Verification of Quantum Computations by Urmila Mahadev.

+ +
+

A language $L$ is said to have a Quantum Prover Interactive Proof ($\mathsf{QPIP}_\tau$ ) with completeness $c$ + and soundness $s$ (where $c − s$ is at least a constant) if there exists a pair of algorithms $(P, V)$, where $P$ is the + prover and $V$ is the verifier, with the following properties:

+ +
  1. The prover $P$ is a $\mathsf{BQP}$ machine, which also has access to a quantum channel which can transmit $\tau$ qubits.

  2. The verifier $V$ is a hybrid quantum-classical machine. Its classical part is a $\mathsf{BPP}$ machine. The quantum part is a register of $\tau$ qubits, on which the verifier can perform arbitrary quantum operations and which has access to a quantum channel which can transmit $\tau$ qubits. At any given time, the verifier is not allowed to possess more than $\tau$ qubits. The interaction between the quantum and classical parts of the verifier is the usual one: the classical part controls which operations are to be performed on the quantum register, and outcomes of measurements of the quantum register can be used as input to the classical machine.

  3. There is also a classical communication channel between the prover and the verifier, which can transmit polynomially many bits at any step.

  4. At any given step, either the verifier or the prover perform computations on their registers and send bits and qubits through the relevant channels to the other party.
+ +

There are some more details regarding defining $c$ and $s$, but these are unimportant for my question. Additionally, it should suffice to take $\tau = 0$, and let $V$ be a $\mathsf{BPP}$ machine that has entirely classical communication with $P$.

+ +
+ +

I'm curious about how specifically the classical part of $V$ is supposed to manipulate the input. The setting for this paper is taking arbitrary $L\in\mathsf{BQP}$ and reducing it to 2-Local Hamiltonian (where the non-identity parts are Pauli $X$ and $Z$ gates), which is $\mathsf{QMA}$-complete. This means that the initial input $L$ is specified as some polynomial-size quantum circuit, which must be reduced (in quantum polynomial time) to a local Hamiltonian instance.

+ +

I have issues understanding how $V$ (in the case when $\tau = 0$, so $V$ is just a $\mathsf{BPP}$ machine) can perform this reduction, or even more generally hold the quantum circuit. Each of the (polynomially many) gates in the circuit is a quantum gate, which for an $n$-qubit system would be of size $2^n\times 2^n$. I feel like $V$ can't have an input of this size, because then the input size is exponential in $n$ (which would cause all sorts of issues with restricting $V$ to be poly-time in the input).

+ +

I could see each of (polynomial many) gates $U_i$ being written as a product of (polynomial many) universal gates from some fixed, finite set of gates (or even some gate set of size $\mathsf{poly}(n)$ --- it shouldn't matter). This would mean that the input to $V$ is small enough such that ""polynomial time in the input"" is a reasonable restriction.

+ +
  1. Under such a restriction, can a $\mathsf{BPP}$ verifier $V$ reduce the circuit $L$ to $2$-Local Hamiltonian?

  2. Additionally, while such a restriction is natural to me, is this how it is typically done? Specifically, how are quantum circuits generally input to classical machines?
+",5248,,26,,12/23/2018 7:57,12/23/2018 7:57,What's the notion of input size for Quantum Verification?,,1,0,,,,CC BY-SA 4.0 +4856,2,,4855,12-04-2018 05:02,,2,,"

The answer is yes to both questions. See page 2 of Bookatz's QMA-Complete Problems, which states:

+ +
+

When a problem is given a unitary or quantum circuit, $U_x$, it is assumed that the problem is actually given a classical description $x$ of the corresponding quantum circuit, which consists of $\mathsf{poly}(|x|)$ elementary gates. Likewise, quantum channels are specified by efficient classical description.

+
+ +

So the input to $V$ is entirely classical, and polynomial-sized in $|x|$, which is usually taken to be the input to the circuit (and therefore the number of qubits).

+ +

As for a verifier being able to reduce things to 2 local Hamiltonian, this is also yes. This picture is from 3-Local Hamiltonian is QMA Complete, and constructs a 3-Local Hamiltonian entirely from projections and $U_i$'s. As the various $U_i$'s are efficiently representable classically (and I assume projections onto the computational basis are as well), it appears a $\mathsf{P}$ machine should be able to construct the local Hamiltonian efficiently (given the classical description of $U_x$, likely with respect to a fixed universal gate set).

+ +

Note that the below construction is for 3 Local Hamiltonian, and while the construction for 2 Local Hamiltonian is more technical, I see no difficulties in extending the above argument.

+ +

+",5248,,,,,12-04-2018 05:02,,,,0,,,,CC BY-SA 4.0 +4857,2,,4361,12-04-2018 08:21,,2,,"

No, as point 4 is not satisfied.

+ +

The D-Wave machines are quantum annealers and thus not universal.

+ +

See this question on how to make from the D-Wave machine a universal quantum computer.

+",2005,,,,,12-04-2018 08:21,,,,0,,,,CC BY-SA 4.0 +4858,1,4860,,12-04-2018 15:26,,5,469,"

In the ""Quantum Computation and Quantum Information 10th Anniversary textbook by Nielsen & Chuang"", they claim that Eqn(4.75) is a rotation about the axis along the direction +( ${cos(\pi/8)}$, ${sin +(\pi/8)}$, ${cos(\pi/8)}$ ). They then defined an angle ${\theta}$ such that:

+ +

$\cos(\theta/2) = \cos^2(\pi/8)$

+ +

and it's also claimed to be an irrational multiple of $2\pi$. We know that a rotation matrix about an arbitrary axis takes the form

+ +

$\cos(\theta/2)\, I - i\,(n_x X + n_y Y + n_z Z)\sin(\theta/2)$,

+ +

but Eqn(4.75) gives:

+ +

$\cos^2(\pi/8)\, I - i\left[\cos(\pi/8)(X+Z)+\sin(\pi/8)Y\right]\sin(\pi/8)$

+ +

My question is: how can this $\theta$ simultaneously account for the $\sin(\pi/8)$ factor? Why is $\theta$ defined from $\cos^2(\pi/8)$ instead of $\sin(\pi/8)$?

+",5253,,26,,12/23/2018 7:56,12/23/2018 7:56,Construction of ${R_n(\theta)}$ using only the Hadamard and ${\pi/8}$ gates,,1,0,,,,CC BY-SA 4.0 +4859,1,,,12-04-2018 15:32,,5,386,"

I'm taking a quantum information course and one of my exercises says to find $p,p'$ for which there is a channel $\tilde\Lambda(\Lambda(\rho))=\Lambda'(\rho)$, where $\Lambda$ and $\Lambda'$ are dephasing channels with $\Lambda(\rho)=(1-p)\rho+p\sigma_z\rho\sigma_z, \Lambda'(\rho)=(1-p')\rho+p'\sigma_z\rho\sigma_z$. I'm rather confused how to do that, would appreciate any tips and hints.

+",5254,,10480,,12-12-2021 03:56,12-12-2021 03:56,"How to show there is a channel $\tilde\Lambda$ such that $\tilde\Lambda\circ\Lambda=\Lambda'$ with $\Lambda,\Lambda'$ dephasing channels?",,1,2,,,,CC BY-SA 4.0 +4860,2,,4858,12-04-2018 15:49,,4,,"

In order to compare to the Pauli vector exponentiation formula, we need to write it in terms of a normalized unit vector:

+ +

$$R = \cos^2(\pi/8)\, I_2 - \frac{i}{\sqrt{1+\cos^2(\pi/8)}}\left[\cos(\pi/8)(X+Z)+\sin(\pi/8)Y\right]\sqrt{1+\cos^2(\pi/8)}\,\sin(\pi/8)$$

+ +

Now, the result can be seen by inspection from the comparison to the general formula:

+ +

$${\displaystyle e^{ia({\hat {n}}\cdot {\vec {\sigma }})}=I\cos {a}+i({\hat {n}}\cdot {\vec {\sigma }})\sin {a}}$$
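As a numerical sanity check (my own sketch, not part of the original answer): writing the rotation as the product $e^{-i\pi Z/8}e^{-i\pi X/8}$, whose expansion is the expression quoted in the question, and comparing it against the normalized-axis form:

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

a = np.pi / 8
# Product of the two pi/8 rotations (this expands to the quoted expression)
R = (np.cos(a) * I2 - 1j * np.sin(a) * Z) @ (np.cos(a) * I2 - 1j * np.sin(a) * X)

# Normalized rotation axis and the angle defined by cos(theta/2) = cos^2(pi/8)
n = np.array([np.cos(a), np.sin(a), np.cos(a)])
n = n / np.linalg.norm(n)                 # norm is sqrt(1 + cos^2(pi/8))
half_theta = np.arccos(np.cos(a) ** 2)
n_sigma = n[0] * X + n[1] * Y + n[2] * Z
R2 = np.cos(half_theta) * I2 - 1j * np.sin(half_theta) * n_sigma

print(np.allclose(R, R2))  # True
```

The cancellation between the $1/\sqrt{1+\cos^2(\pi/8)}$ in the axis and the $\sqrt{1+\cos^2(\pi/8)}$ absorbed into $\sin(\theta/2)$ is exactly what the rewritten formula above displays.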

+",4263,,,,,12-04-2018 15:49,,,,2,,,,CC BY-SA 4.0 +4861,1,,,12-04-2018 20:35,,2,312,"

The algorithm is being implemented on Cirq, with the goal of finding the smallest $r$ for coprime numbers $x$ and $N$ satisfying the equation $x^r = 1 \pmod N$. I have set $x = 2$ and $N = 3$, so the algorithm should output $r = 2$.

+ +

This is the circuit that Cirq outputs for my code. Are there any mistakes that I'm making in the algorithm implementation, as I am getting some strange results:

+ +

+",4907,,,,,12-04-2018 22:32,Is this the correct quantum circuit for the order-finding algorithm?,,1,0,,,,CC BY-SA 4.0 +4862,2,,4861,12-04-2018 22:32,,3,,"

The Fourier transform part (everything from the swaps onward) looks correct. The initialization (column of Hadamards) looks correct. But the part where you do controlled modular multiplications doesn't, because there are no operations controlled on the 2nd through 5th qubits that you are QFT-ing.

+ +

You also seem to expect the output to be the period, when actually the output is a number of the form $k \cdot 2^n / p$, where $p$ is the period, $k$ is an integer, and $n$ is the number of qubits in your QFT. This blog post may be helpful.

+ +

Here is my recommendation on how to proceed.

+ +
  1. Start from Quirk's example period finding circuit.

  2. In Quirk, adjust the circuit to apply to your case, where $B=2$ and $R=3$.

  3. Replace the $\times B^A \pmod{R}$ operation with a series of controlled $\times A \pmod{R}$ operations, with a different $A=\text{constant}$ for each one.

  4. Replace each $\times A \pmod{R}$ operation, one by one, with simple gates such as CNOTs and Toffolis.

  5. Replace the inverse QFT with simple gates such as Hadamards and controlled phases.

  6. Compare with the circuit you're trying to make in Cirq.

As you are making the circuit transformations, keep an eye on the output display. If the spikes have moved around or changed, you made a mistake when doing a decomposition.
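After measuring, classical post-processing turns the measured value into a period candidate. A minimal Python sketch of that continued-fraction step (my own illustration, not part of Quirk; `fractions.Fraction.limit_denominator` performs the continued-fraction truncation):

```python
from fractions import Fraction

def candidate_period(measured, n_qubits, N):
    """Guess the period from a measurement m ~ k * 2^n / p (0 gives no info)."""
    if measured == 0:
        return None
    frac = Fraction(measured, 2 ** n_qubits).limit_denominator(N)
    return frac.denominator

# With B=2, R=3 (period 2) and a 7-qubit QFT register, peaks sit at 0 and 64:
print(candidate_period(64, 7, 3))  # 2
# With B=2, R=5 (period 4), peaks sit at multiples of 2^7 / 4 = 32:
print(candidate_period(32, 7, 5))  # 4
print(candidate_period(96, 7, 5))  # 4 (96/128 = 3/4)
print(candidate_period(64, 7, 5))  # 2, a divisor of the period; retry in practice
```

When $k$ shares a factor with $p$ the recovered denominator is only a divisor of the period, which is why the algorithm is repeated a few times.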

+",119,,,,,12-04-2018 22:32,,,,5,,,,CC BY-SA 4.0 +4863,1,,,12-05-2018 02:50,,1,76,"

I'm sorry for posting so many questions about this specific problem, but I just want to make sure that I am implementing an algorithm correctly. I am simulating the order finding algorithm (finding the minimal $r$ with $x^r = 1 \pmod N$). Right now, I am trying to implement the case of $x = 2$ and $N = 5$. I am getting outputs that look somewhat like this:

+ +

+ +

When I pass these results through the continued fraction algorithm, I'm getting the expected results of the algorithm. However, these results seem weird, as they are basically just every combination of $1$ and $0$ for the first two qubits. I just want to make sure that I am actually implementing the algorithm correctly, and not just getting lucky with my results. Again, I'm very sorry for all the questions about this specific problem.

+",4907,,,,,12-05-2018 09:38,Do these outputs seem normal for the order finding algorithm?,,1,0,,,,CC BY-SA 4.0 +4864,1,,,12-05-2018 04:16,,3,285,"

Do they both just use similar methods of calculation, or are they completely interchangeable?

+",4907,,,,,12-05-2018 10:22,Are the order finding and period finding algorithms the same thing?,,1,0,,,,CC BY-SA 4.0 +4865,1,,,12-05-2018 06:45,,1,155,"

I am reading an article on the representation of digital images using qubits, but I am not able to understand the notation of the article. Can somebody help me? $|0\rangle$ means the first basis vector in the respective vector space, but what does $|0\rangle^{3q+2n}$ mean?

+ +

+",5263,,26,,12-05-2018 14:17,12-06-2018 07:54,A question about notation for quantum states,,2,0,,,,CC BY-SA 4.0 +4866,2,,4865,12-05-2018 07:47,,4,,"

A single qubit in the $0$ state is often written as $|0\rangle$.

+ +

If we have two independent qubits in the zero state we can write them as $|0\rangle\otimes|0\rangle$. This is often also written as $|0\rangle|0\rangle = |00\rangle =|0\rangle^{\otimes2}=|0\rangle^2$. This extends naturally to more qubits.

+ +

Coming back to your question: $|0\rangle^{\otimes 3q+2}$ (or $|0\rangle^{3q+2}$), it are just $3q+2$ qubits, all in the zero state $|0\rangle$.
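A quick numerical illustration of this (my own sketch, using numpy's `kron` for the tensor product):

```python
import numpy as np

zero = np.array([1, 0])                 # |0> in the standard basis
two_zeros = np.kron(zero, zero)         # |0> tensor |0>
print(two_zeros)                        # [1 0 0 0]

# |0>^(3q+2) for, say, q = 1 is just five qubits all in |0>:
state = zero
for _ in range(4):
    state = np.kron(state, zero)
print(state.shape)                      # (32,) -- first entry 1, rest 0
```

The composite state lives in a $2^n$-dimensional space, but it is still just the first standard basis vector there.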

+",2005,,,,,12-05-2018 07:47,,,,0,,,,CC BY-SA 4.0 +4867,2,,4863,12-05-2018 09:38,,1,,"

This looks right, although I would emphasise that it is not really best practice to have to ask this question at this stage. The whole point of doing a particularly simple example is so that you can confirm that it's doing what you've already calculated analytically. It's quite important to do the analytic bit first to avoid confirmation bias.

+ +

Anyway, let's see what we should get. I'm going to assume a first register (the one that we do the Fourier transform etc. on) contains $t$ qubits. I'm guessing that $t=7$ from your output. We start with the system in $|0\rangle^{\otimes t}|1\rangle$, and apply the Hadamard transform to the first register:
$$\frac{1}{\sqrt{2^t}}\sum_{j=0}^{2^t-1}|j\rangle|1\rangle.$$
Then we apply the modular exponentiation:
$$\frac{1}{\sqrt{2^t}}\sum_{j=0}^{2^t-1}|j\rangle|2^j\text{ mod }5\rangle.$$
We can simplify this as
$$\frac{1}{\sqrt{2^t}}\sum_{k=0}^3\left(\sum_{j=0}^{2^{t-2}-1}|k+4j\rangle\right)|2^k\text{ mod }5\rangle.$$

+ +

Now, instead of simply applying the inverse Quantum Fourier Transform, I prefer to think about its action on states. The QFT (not inverse) transforms basis states
$$|x\rangle\rightarrow|\psi_x\rangle=\frac{1}{\sqrt{2^t}}\sum_{y=0}^{2^t-1}\omega^{xy}|y\rangle$$
where $\omega=e^{2\pi i/2^t}$, so we want to describe states such as $|0\rangle+|4\rangle+|8\rangle+|12\rangle+\ldots$ in terms of the $|\psi_x\rangle$ to know what values of $x$ we might get as output from the inverse QFT. But we can easily observe that (once normalized)
$$\frac{1}{\sqrt{2^{t-2}}}\left(|0\rangle+|4\rangle+|8\rangle+|12\rangle+\ldots\right)=\frac{1}{2}\sum_{x=0}^3|\psi_{2^{t-2}x}\rangle.$$
Hence, for the $k=0$ case, we get each of the 4 answers $0,2^{t-2},2^{t-1},3\times 2^{t-2}$ with equal probability. These are the 4 bit strings that you're seeing on output. Similarly, the other 3 values of $k$ can also be decomposed in terms of $\{|\psi_{2^{t-2}x}\rangle\}_{x=0}^3$, just with different multiplicative factors (fourth roots of unity), the point being that these 4 vectors all have period 4 and they form a basis for the states $|0\rangle$ to $|3\rangle$.
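The structure above can be double-checked numerically: a state supported on $\{k, k+4, k+8, \dots\}$ Fourier-transforms onto exactly the four frequencies $2^{t-2}x$. A small numpy sketch of mine (`np.fft.fft` matches the QFT only up to sign/phase conventions, which do not affect the probabilities):

```python
import numpy as np

t = 7                       # first-register size
dim = 2 ** t
k = 1                       # any fixed offset 0..3
state = np.zeros(dim)
state[k::4] = 1             # |k> + |k+4> + |k+8> + ...
state /= np.linalg.norm(state)

amps = np.fft.fft(state) / np.sqrt(dim)   # unitary normalization
probs = np.abs(amps) ** 2
peaks = np.nonzero(probs > 1e-9)[0]
print(peaks)                # [ 0 32 64 96], i.e. multiples of 2^(t-2)
```

Each of the four peaks carries probability $1/4$, independent of the offset $k$, which is exactly the "every combination of the first two qubits" pattern in the question's output.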

+",1837,,,,,12-05-2018 09:38,,,,2,,,,CC BY-SA 4.0 +4868,2,,4864,12-05-2018 09:43,,5,,"

If you look at Nielsen and Chuang, figure 5.5, page 241 (at least in my 2002 version), order finding and period finding are separated. (Note that this is talking about the problem definition, not the algorithm for solving them). Essentially, period finding aims to find the least positive $r$ such that +$$ +f(x+r)=f(x) +$$ +for all $x$, while order finding is doing a special case of this where the function is of a specific form: $f(x)=a^x\text{ mod }N$.

+ +

In terms of an algorithm for solving them, it turns out that a quantum computer gives us a good algorithm for period finding, and hence it can also be applied to the special case of order finding. Strictly, this doesn't rule out the possibility that there could be something better for order finding than there is for period finding in general, I guess.
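For the tiny instances discussed in this thread, the order in the special case can also be found by classical brute force, which is handy for checking a quantum implementation. A minimal sketch (my own illustration):

```python
def order(a, N):
    """Smallest positive r with a^r = 1 (mod N); assumes gcd(a, N) = 1."""
    r, acc = 1, a % N
    while acc != 1:
        acc = (acc * a) % N
        r += 1
    return r

print(order(2, 3))   # 2
print(order(2, 5))   # 4
```

This is exponential in the bit length of $N$ in general, which is precisely why the quantum period-finding routine is interesting.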

+",1837,,1837,,12-05-2018 10:22,12-05-2018 10:22,,,,0,,,,CC BY-SA 4.0 +4869,2,,4865,12-05-2018 16:16,,1,,"

Also, think about the dimensions these basis vectors are creating. $|0\rangle$ is a $2\times 1$ vector, which is in the standard basis looks like: $$\begin{bmatrix} 1\\0 \end{bmatrix}$$Then, continuing $$|0\rangle^{\otimes2} = |00\rangle=|0\rangle|0\rangle = |0\rangle \otimes |0\rangle = \begin{bmatrix} 1\\0 \end{bmatrix} \otimes \begin{bmatrix} 1\\0 \end{bmatrix} = \begin{bmatrix} 1\\0\\0\\0 \end{bmatrix}$$

+ +

They all mean the same thing.

+",2403,,26,,12-06-2018 07:54,12-06-2018 07:54,,,,0,,,,CC BY-SA 4.0 +4870,1,,,12-05-2018 16:57,,8,967,"

I have a computer science and mathematics degree and am trying to wrap my head around quantum computing and it just doesn't seem to make sense from the very beginning. I think the problem is the definitions out there are generally watered down for lay-people to digest but don't really make sense.

+ +

For example, we often see a bit compared to a qubit and are told that an ""old fashion digital bit"" can only be in one of two states, 0 or 1. But qubits on the other hand can be in a state of 0, 1, or ""superposition"", presumably a third state that is both 0 AND 1. But how does being in a state of ""both 0 and 1"" deliver any value in computing as it is simply the same as ""I don't know""? So for example, if a qubit represents the last binary digit of your checking account balance, how does a superimposed 0 and 1 deliver any useful information for that?

+ +

Worse you see these same articles say things like ""and two qubits can be put together for a total of 4 states, three can make a total of 8 states"" -- ok that's just $2^n$, the same capability that ""old-fashioned bits"" have.

+ +

Obviously there is great promise in quantum computing so I think its more a problem of messaging. Any suggestions for intro/primer literature that doesn't present the quantum computing basics in this oxymoronic way?

+",5266,,26,,12-06-2018 08:22,09-09-2021 00:32,Non-layperson explanation of why a qubit is more useful than a bit?,,4,3,,,,CC BY-SA 4.0 +4871,2,,4870,12-05-2018 17:45,,2,,"

So there are two major advantages qubits have over classical bits: superposition and entanglement.

+

Superposition is the one more often discussed, and is the "0 or 1" phrase you always hear. Essentially the idea is that instead of just being in 0 or 1, we can be in some mix of the two. However quantum operators are linear, so if they are fed a superposition, they apply themselves to both parts separately:

+

$$A(\alpha |0\rangle + \beta |1\rangle ) = \alpha A |0\rangle + \beta A |1\rangle$$

+

This is essentially a system with built in parallelization, which is nice, as a blackbox can be queried multiple times at once. A common starting point for quantum algorithms is to apply the Hadamard operator (Which takes $|0\rangle \rightarrow \frac{1}{\sqrt{2}}(|0\rangle + |1\rangle)$) to each of a string of qubits in the $|0\rangle$ state. This leads to:

+

$$H^{\otimes N} |0\rangle^{\otimes N} = \frac{1}{\sqrt{2^N}}\sum_{k=0}^{2^N - 1}|k\rangle$$

+

where $|k\rangle$ is the state with its qubits in the binary representation of $k$. Plugging this state into a quantum operator essentially gets every single possible input into said operator with one query, and if you can tease out the information you need with that one operator then you only need one attempt instead of $2^N$ queries. See the Deutsch-Jozsa algorithm for more details.
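This uniform-superposition identity is easy to verify numerically; here is a small numpy sketch (my own illustration, not part of the original answer):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
N = 3

# Build H^{tensor N} and |0>^{tensor N} by repeated Kronecker products
HN = H
zero = np.array([1.0, 0.0])
state = zero
for _ in range(N - 1):
    HN = np.kron(HN, H)
    state = np.kron(state, zero)

result = HN @ state
print(result)   # every amplitude equals 1/sqrt(2^N)
```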

+

Entanglement is the second idea, and I would argue is much more powerful. The idea is that you can combine qubits in ways which are inseparable, meaning that they stop being objects with independent values. For example you can put two qubits into a state:

+

$$\frac{1}{\sqrt{2}}(|00\rangle + |11\rangle)$$

+

where the values are linked together. That means that when qubits are combined, the possible states they can occupy are far more varied. If I measure one of the qubits I affect the data stored in the other qubit because they were a linked object. Through this you can get cool interference effects by changing bases. I would also look at the quantum teleportation circuit to get an idea of how this can be useful for things that classical bits cannot do.
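That entangled state can be prepared from $|00\rangle$ with a Hadamard followed by a CNOT (the standard construction); a numerical sketch of mine:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I2 = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

bell = CNOT @ np.kron(H, I2) @ np.array([1, 0, 0, 0])
print(bell)   # (|00> + |11>)/sqrt(2): [0.707 0 0 0.707]
```

No product state $(\alpha|0\rangle+\beta|1\rangle)\otimes(\gamma|0\rangle+\delta|1\rangle)$ can reproduce this vector, which is what makes it entangled.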

+",3056,,2927,,09-08-2021 23:36,09-08-2021 23:36,,,,5,,,,CC BY-SA 4.0 +4872,1,,,12-05-2018 20:35,,1,118,"

This circuit was created on the Quirk platform. I'm trying to implement a basic case of phase estimation. For some reason, I'm getting this strange result.

+ +

+ +

When the Inverse QFT is broken down, it seems to yield the expected answer:

+ +

+ +

I have no idea why this is happening. I tried playing around with the endian-ness of the qubits, but it didn't seem to work.

+",4907,,26,,12-06-2018 08:14,12-06-2018 08:14,Why are these circuits not producing the expected output?,,1,0,,,,CC BY-SA 4.0 +4873,2,,4872,12-05-2018 20:59,,2,,"

The endian-ness of the qubits is the answer. Both QFT and phase estimation rely on a certain endianness of the register, and the representation used in the controlled-unitary part has to match the endianness used in the QFT part (and in the answer). This circuit produces the expected outcome with the inverse QFT block:

+ +

+",2879,,2879,,12-05-2018 21:13,12-05-2018 21:13,,,,5,,,,CC BY-SA 4.0 +4874,1,4884,,12-05-2018 21:12,,3,147,"

As I understand it, device-independent quantum cryptography enables you to safely perform cryptographic operations without necessarily trusting the quantum device on which they are performed. Nonlocal games are said to have applications in this domain; see for example A Monogamy-of-Entanglement Game With Applications to Device-Independent Quantum Cryptography by Tomamichel et al. Is there a simple explanation of how nonlocal games can help in this domain? I understand the mechanics of the CHSH game quite well.

+",4153,,,,,12-06-2018 14:29,How are nonlocal games used in device-independent quantum cryptography?,,1,1,,,,CC BY-SA 4.0 +4875,1,4877,,12-05-2018 21:57,,6,1956,"

When implementing the inverse quantum Fourier transform, in addition to reversing the circuit, does one need to take the conjugate transpose of the phase shift gates in the circuit as well?

+",4907,,26,,12-06-2018 08:13,12-06-2018 08:13,Implementation of inverse QFT?,,1,0,,,,CC BY-SA 4.0 +4876,1,,,12-05-2018 22:41,,9,291,"

What is known about quantum algorithms for problems outside NP (e.g. NEXP-complete problems), both theoretically, like upper and lower speedup bounds and various (im)possibility results, as well as concrete algorithms for specific problems?

+ +

The reason I am asking is that we currently have processors with low tens of qubits. NP problems over low tens of classical bits can generally be solved on classical computers. With non-NP problems we could have problems which are not classically tractable even in that range. This could be an opportunity to demonstrate practical quantum advantage on current hardware. This does not necessarily require the quantum algorithm to be generally tractable, only that it can solve smallish problems in acceptable time where classical algorithms cannot.

+ +

The idea is to find problems that take considerable time on classical computers for instance sizes that are representable on current quantum processors. Finding quantum algorithms that are faster on those instances would be a form of quantum advantage even if the quantum algorithms were not necessarily superior asymptotically.

+",1370,,1370,,12-07-2018 22:20,1/31/2019 13:07,Quantum algorithms for problems outside NP,,1,7,,,,CC BY-SA 4.0 +4877,2,,4875,12-05-2018 23:13,,9,,"

Yes.

+ +

You have been given a factorization $QFT=U_1 \cdots U_n$ where each $U_i$ is an individual gate.

+ +

$$QFT^{-1} = U_n^{-1} \cdots U_1^{-1} = U_n^{\dagger} \cdots U_1^{\dagger}$$

+ +

A lot of the individual gates will have the property that $U_i = U_i^\dagger = U_i^{-1}$. These are the involutions like NOT, CNOT, etc. In those cases you are lucky and don't have to worry about conjugating.

+ +

In other gates, you do have to change parameters in order to achieve the conjugate transpose, like $R_Z (\beta)$ vs $R_Z (-\beta)$. For these sorts of basic gates, it is straightforward to see how parameters like Euler angles change under the conjugate transpose.
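Both points can be checked numerically: the dagger of $R_Z(\beta)$ is $R_Z(-\beta)$, and reversing plus conjugating a gate sequence inverts it. A small numpy sketch (my own check, not part of the original answer):

```python
import numpy as np

def Rz(beta):
    return np.array([[np.exp(-1j * beta / 2), 0],
                     [0, np.exp(1j * beta / 2)]])

beta = 0.3
# Dagger of a phase rotation is the rotation with the negated angle
print(np.allclose(Rz(beta).conj().T, Rz(-beta)))        # True

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)            # H is its own inverse
U = H @ Rz(beta) @ H @ Rz(2 * beta)                     # some gate sequence
U_inv = Rz(-2 * beta) @ H @ Rz(-beta) @ H               # reversed + conjugated
print(np.allclose(U_inv @ U, np.eye(2)))                # True
```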

+",434,,,,,12-05-2018 23:13,,,,2,,,,CC BY-SA 4.0 +4878,1,,,12-06-2018 00:21,,1,95,"

+ +

+ +

I am simulating the phase shift algorithm on the Quirk platform. Even when the endian-ness of the built-in inverse QFT gate is corrected for, the circuits still output different results. Shouldn't the output be identical?

+ +

The bottom circuit seems to be producing the output I would expect from the phase estimation algorithm, I'm not sure what is going on with the top circuit.

+ +

I am using Quirk to ensure I am implementing the order finding algorithm correctly, so this result is kind of worrying, since I trust the built in inverse QFT over the one that I made, however, mathematically, I am fairly certain that my circuit is producing the right values.

+",4907,,26,,12-06-2018 08:12,12-06-2018 09:10,Why are these circuits not producing the same output?,,1,0,,,,CC BY-SA 4.0 +4880,2,,4870,12-06-2018 04:17,,2,,"

Adding to @Dripto's answer.

+ +
+

For example, we often see a bit compared to a qubit and are told that an ""old fashion digital bit"" can only be in one of two states, 0 or 1. But qubits on the other hand can be in a state of 0, 1, or ""superposition"", presumably a third state that is both 0 AND 1. But how does being in a state of ""both 0 and 1"" deliver any value in computing as it is simply the same as ""I don't know""? So for example, if a qubit represents the last binary digit of your checking account balance, how does a superimposed 0 and 1 deliver any useful information for that?

+
+ +

In this context, 0 and 1 refer to orthogonal basis states of a Hilbert space (a complex vector space) representation of the states of physical objects like electrons (say spin states of an electron - ""up"" and ""down""). It is more appropriate to denote them as $|0\rangle$ and $|1\rangle$, according to the Dirac notation. I've written about this previously, here. Just saying ""superposition of state 0 and state 1"" doesn't convey any useful information, yes. However specifying the superposition state like $\alpha|0\rangle+\beta|1\rangle$, where $\alpha,\beta\in \Bbb C$ and $|\alpha|^2+|\beta|^2=1$, makes complete sense mathematically and conveys useful information. By the way, $|\alpha|^2$ is the probability of the qubit collapsing to state $|0\rangle$ upon measurement and $|\beta|^2$ is the probability of it collapsing to state $|1\rangle$, upon measurement. You might say ""superposition of state 0 and state 1"" doesn't make physical or intuitive sense. Sure, quantum mechanics is simply a mathematical model that happens to give correct predictions about real world phenomena. It doesn't need to make physical or intuitive sense. It just needs to work.

+ +

Also, we would never use a qubit to represent the last binary digit of your account balance, in the first place. That would be silly. And even if we do, the qubit should be restricted to the computational basis states $|0\rangle$ or $|1\rangle$, and not their superposition states.

+ +
+

Worse you see these same articles say things like ""and two qubits can be put together for a total of 4 states, three can make a total of 8 states"" -- ok that's just $2^n$, the same capability that ""old-fashioned bits"" have.

+
+ +

Yes, that is nonsense. A single qubit can exist in any state one out of uncountably infinite number of states like $\alpha|0\rangle+\beta|1\rangle$ where $\alpha,\beta \in \Bbb C$ (simply because there are uncountably many complex numbers and $\alpha,\beta$ can take up any of those uncountably many values). And, by extension, any $n$-qubit system can exist in an uncountably infinite number of states. I guess by $2^n$ they just meant the number of basis of the Hilbert space in which the composite qubit system belongs, for instance, $|00\rangle, |01\rangle, |10\rangle$ and $|11\rangle$ form the computational basis set for a 2 qubit system. In general, a two qubit system can take up any state like $(\alpha|0\rangle+\beta|1\rangle)\otimes (\gamma |0\rangle + \delta |1\rangle)$ (a tensor product of the state of the two individual qubits which constitute the $2$-qubit system), where $\alpha,\beta, \gamma, \delta \in \Bbb C$ and $|\alpha|^2+|\beta|^2=1$ and $|\gamma|^2+|\delta|^2=1$. $$(\alpha|0\rangle+\beta|1\rangle)\otimes (\gamma |0\rangle + \delta |1\rangle)$$ can be expressed as a linear combination of the computational basis elements $|00\rangle, |01\rangle, |10\rangle$ and $|11\rangle$ i.e. $$\alpha\gamma |0\rangle \otimes |0\rangle + \alpha\delta |0\rangle \otimes |1\rangle + \beta\gamma |1\rangle \otimes |0\rangle + \beta\delta|1\rangle \otimes |1\rangle,$$ alternatively denoted as:

+ +

$$\alpha\gamma |00\rangle + \alpha\delta |01\rangle + \beta\gamma |10\rangle + \beta\delta|11\rangle.$$

+ +

From here you can say that a $n$-qubit system can store $2^n$ values(coefficients) in parallel, although there's always the restriction that the squared sum of the moduli of the coefficients of the computational basis states must add up to $1$.

+ +
+

Obviously there is great promise in quantum computing so I think its more a problem of messaging. Any suggestions for intro/primer literature that doesn't present the quantum computing basics in this oxymoronic way?

+
+ +

See my previous answers: this and this, for resource recommendations. As mentioned there, I'd recommend starting with Vazirani's lectures and then moving on to Nielsen and Chuang. I recently found the 2006 lecture notes by John Watrous which are also pretty great for beginners. It helps a lot to have a thorough grounding in linear algebra, while learning quantum computing, but I suspect you already have that, being a mathematics and computer science graduate.

+ +

As for how computation using qubits can be faster, in some cases, than using classical bits, I recommend carefully thumbing through the standard quantum algorithms like Deutsch-Jozsa, Shor's, Grover's among others. Here is a simple explanation of the Deutsch-Jozsa algorithm. It would be a bit difficult to summarize that in one answer. Please keep in mind that quantum computing cannot speed up all type of computations. It's applicable to only very specific problems.

+",26,,26,,12-06-2018 07:33,12-06-2018 07:33,,,,0,,,,CC BY-SA 4.0 +4881,2,,4878,12-06-2018 09:10,,1,,"

You have to put an extra SWAP-gate after the QFT, see this circuit.

+ +

Furthermore, the two controlled-Z gates on the same qubit are not necessary. This can reduce the circuit further to this.

+",2005,,,,,12-06-2018 09:10,,,,0,,,,CC BY-SA 4.0 +4882,1,,,12-06-2018 10:19,,2,181,"

Recently I have been reading about surface codes a little, and one thing I came across was a specific order in which gates should be applied before the ancilla is measured. See for instance figure 2 in this article.

+ +

The specific order of the two-qubit gates with the ancilla is important due to some commutation relations which will not be valid otherwise.

+ +

Can someone explain in more detail why this order is important? (heavy math is allowed here)
Furthermore, what goes wrong if we use different orders? And why is this order different for $X$- and $Z$-type plaquettes?

+",2005,,26,,12/13/2018 20:39,12/13/2018 20:39,Measuring order ancilla qubits in surface code,,1,0,,,,CC BY-SA 4.0 +4883,2,,4645,12-06-2018 13:20,,2,,"

If you look at the literature for blind quantum computation, there is the concept of a ""trap state"". Basically, something that isn't part of the main computation that is supposed to give specific results so that you can easily verify that the computer is behaving as expected. I believe some of these trap states are Bell pairs, and the measurements performed on them are implementing CHSH tests and the like, to verify that there really is (effectively, perhaps hidden by a layer of encoding) a maximally entangled qubit pair present. See, for example, Fig. 2 here.

+",1837,,,,,12-06-2018 13:20,,,,0,,,,CC BY-SA 4.0 +4884,2,,4874,12-06-2018 13:29,,3,,"

Imagine you're playing a CHSH game with someone, although you don't know what quantum system it is that you're playing with, or even what measurements it is that you're doing on the system. You just know that you're getting the average value +$$ +\langle A_1(B_1+B_2)+A_2(B_1-B_2)\rangle=2\sqrt{2} +$$ +(where measurement results of $\pm 1$ are recorded in $A_1$ and $A_2$ by the first player for their two separate measurements, and in $B_1$ and $B_2$ for the second player's two separate measurements). The simple fact that you got this value of $2\sqrt2$ tells you that, in effect, you have a maximally entangled qubit pair, and that your measurements are acting as qubit measurements with the correct relative angles to generate the CHSH result. That's the essence of how device-independent crypto works, as you could now use this ""thing that's proven to be equivalent to a Bell pair+measurements"" in a standard crypto scheme such as the key distribution protocol of E91.

+ +

If, instead, you get some expectation value $2<S\leq2\sqrt2$, then you know that at least some of your answers are being generated in a truly random way (and it gives you a quantitative statement about how much somebody else could know about those randomly generated answers) because if they're not, you'd have to be getting $S\leq 2$.

+",1837,,1837,,12-06-2018 14:29,12-06-2018 14:29,,,,2,,,,CC BY-SA 4.0 +4885,2,,4882,12-06-2018 13:30,,2,,"

To make a simple example, let's imagine we are not doing measurement but instead just applying the stabilizer operators. So we want to do $XXXX$ around the $X$ plaquettes, and $ZZZZ$ around the $Z$ plaquettes.

+ +

The method in the article you mention applies operations in four steps: first to the 'north east' qubit of each plaquette, then in the order NW $\to$ SE $\to$ SW for the $X$ plaquettes and SE $\to$ NW $\to$ SW for the $Z$ plaquettes.

+ +

This ensures two properties. One is that no qubit is involved in more than one operation during each step. To see what the other important property is, let's focus on two neighbouring plaquettes: one $X$ and one $Z$.

+ +
0---1---2
+| X | Z |
+3---4---5
+
+ +

Step-by-step, the operation we apply is

+ +

$$ (Z_{4} X_{3})~~(Z_{1} X_{4})~~(Z_{5} X_{0})~~(Z_{2} X_{1})$$

+ +

Now let's do some commutations. We know that $X$ and $Z$ anticommute when applied to the same qubit, but the operations all commute otherwise. So note that each $Z$ commutes with everything to its left. That means we can rewrite this as

+ +

$$ ( Z_{4} Z_{1} Z_{5} Z_{2} )~~( X_{3} X_{4} X_{0} X_{1}).$$

+ +

So though we didn't do a complete $X$ stabilizer operation followed by a complete $Z$ stabilizer operation, the effect is the same as if we had. The same holds true for the stabilizer measurements made using controlled operations. Due to this exact same commutation behaviour, the effect is equivalent to doing them independently, one after the other.
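
This reordering can be checked directly with a small numpy computation (my own sketch; the six qubits are labelled 0, 1, 2 along the top row and 3, 4, 5 along the bottom, so the $X$ plaquette touches qubits 0, 1, 3, 4 and the $Z$ plaquette touches 1, 2, 4, 5):

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

def op(gate, qubit, n=6):
    """Embed a single-qubit gate acting on `qubit` into an n-qubit operator."""
    mats = [gate if q == qubit else I2 for q in range(n)]
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

# The interleaved four-step sequence (rightmost factor acts first)
interleaved = op(Z, 4) @ op(X, 3) @ op(Z, 1) @ op(X, 4) \
            @ op(Z, 5) @ op(X, 0) @ op(Z, 2) @ op(X, 1)
# The collected version: full Z plaquette, then full X plaquette
collected = (op(Z, 4) @ op(Z, 1) @ op(Z, 5) @ op(Z, 2)) \
          @ (op(X, 3) @ op(X, 4) @ op(X, 0) @ op(X, 1))

print(np.allclose(interleaved, collected))  # True: no anticommutation sign appears
```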

+ +

For other sequences of operators, we might instead come out with

+ +

$$ - ~~ ( Z_{4} Z_{1} Z_{5} Z_{2} )~~( X_{3} X_{4} X_{0} X_{1}),$$

+ +

due to an anticommutation that we needed to account for.

+ +

When applying stabilizer operators, this just results in a global phase and is not too important. When applying controlled operations for stabilizer measurements, however, the effects are much more drastic. It is no longer possible to separate out the two independent measurements of the two types of stabilizer. Instead, you'll get some strange combined measurements of both plaquettes. This won't tell you about errors, and won't preserve your code space. So it is exactly what you don't want to do in your surface code.

+ +

Any method that prevents all neighbouring pairs of plaquette measurements from becoming interleaved in this way is valid. So you could swap the methods for $X$ and $Z$ in the paper and it would still work.

+",409,,26,,12-06-2018 15:37,12-06-2018 15:37,,,,2,,,,CC BY-SA 4.0 +4886,1,12135,,12-06-2018 16:19,,4,606,"

I'm stuck with a very specific problem that I'm not sure how to implement using quantum gates. Suppose I have an n qubit circuit and that I want as output a random n qubit string containing exactly k qubits equal to one. E.g. if n=7 and k=3, possible outputs can be $|0010110\rangle$ or $|0101001\rangle$.
In other words, it basically should give one of the possible combinations of the binomial coefficient $\binom{n}{k}$.
Is there any efficient way to implement this?

+",4848,,26,,12-06-2018 17:21,08-12-2020 00:07,How to get all combinations of given input?,,3,7,,,,CC BY-SA 4.0 +4887,1,,,12-06-2018 18:35,,6,995,"

If we have a quantum channel mapping from a $d$-dimensional state to a $d$-dimensional state, it can be described by at most $d^2$ Kraus operators. Suppose our channel maps instead from a $d_1$-dimensional state to a $d_2$-dimensional state, with $d_1>d_2$, e.g. with the quantum operation of taking the partial trace over a mode. What is the maximum required number of Kraus operators to characterise the channel? Is it $d_1d_2$, analogous to the case where $d_1=d_2$?

+",5276,,55,,5/16/2021 16:18,5/16/2021 16:18,How many Kraus operators are required to characterise a channel with different start and end dimensions?,,2,0,,,,CC BY-SA 4.0 +4888,2,,4887,12-06-2018 20:34,,2,,"

Yes. Choi's Theorem a priori uses different Hilbert spaces of potentially different dimensions $d_1$ and $d_2$. Then $d_1=d_2$ is a corollary. The proof is included there.
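
For a concrete feel of the bound (my own numpy sketch, not part of the cited proof): the partial trace from $d_1=4$ down to $d_2=2$ needs only $d_1/d_2 = 2$ Kraus operators, and the rank of its Choi matrix (a $d_1 d_2 \times d_1 d_2$ matrix, so rank at most $d_1 d_2$) reproduces that count.

```python
import numpy as np

d1, d2 = 4, 2                         # e.g. trace out one qubit of a two-qubit system
k = d1 // d2
# Kraus operators of the partial trace: K_i = I_{d2} (x) <i|
kraus = [np.kron(np.eye(d2), e.reshape(1, -1)) for e in np.eye(k)]

# completeness: sum_i K_i^dagger K_i = I_{d1}
complete = np.allclose(sum(K.conj().T @ K for K in kraus), np.eye(d1))

# The Choi matrix lives on a d1*d2 dimensional space; its rank is the
# minimal number of Kraus operators, so the count can never exceed d1*d2.
omega = np.eye(d1).reshape(-1)        # unnormalised maximally entangled vector
vecs = [np.kron(K, np.eye(d1)) @ omega for K in kraus]
choi = sum(np.outer(v, v.conj()) for v in vecs)
print(complete, choi.shape, np.linalg.matrix_rank(choi))  # True (8, 8) 2
```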

+",434,,5870,,02-05-2020 11:41,02-05-2020 11:41,,,,0,,,,CC BY-SA 4.0 +4889,2,,4886,12-07-2018 03:10,,1,,"

Go inductively, like the strategy described in @DaftWullie's answer to General construction of W_n state.

+ +

Say you have the circuits $U_{n-1,k-1}$ that take you from $| 0 \rangle^{\otimes (n-1)}$ to the state that is an equal superposition of the $\binom{n-1}{k-1}$ possibilities with $k-1$ ones, that is, each with coefficient $\frac{1}{\sqrt{\binom{n-1}{k-1}}}$. Similarly, $U_{n-1,k}$ does the same but for $k$ ones.

+ +

Starting from $| 0 \rangle ^{\otimes n}$, apply a unitary on the first qubit that gets you to $\frac{\sqrt{\binom{n-1}{k}}}{\sqrt{\binom{n}{k}}} | 0 \rangle + \frac{\sqrt{\binom{n-1}{k-1}}}{\sqrt{\binom{n}{k}}} | 1 \rangle$. This is just a 2 by 2 unitary, so you can fix where $| 1 \rangle$ goes by swapping the $0$ and $1$ coefficients and adding a $-$ sign. From there, figure out the Euler angles to put it into standard form.

+ +

Now do two controlled unitaries. A controlled version of $U_{n-1,k-1}$ when first qubit is 1. Also a controlled version of $U_{n-1,k}$ when first qubit is 0. The numerators cancel and you get the desired resulting state. The whole circuit for this would be $U_{n,k}$ then.
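
A classical numpy check of the amplitudes this recursion produces (not a circuit; the function name dicke is my own label):

```python
import numpy as np
from math import comb

def dicke(n, k):
    """Statevector with equal amplitudes on all n-bit strings of weight k,
    built by the recursion in the text: a first-qubit rotation followed by
    the two controlled sub-circuits (qubit 0 is the most significant bit)."""
    if n == 0:
        return np.array([1.0])
    a = comb(n - 1, k) / comb(n, k)        # probability of the |0> branch
    v0 = np.sqrt(a) * dicke(n - 1, k) if comb(n - 1, k) else np.zeros(2**(n - 1))
    v1 = np.sqrt(1 - a) * dicke(n - 1, k - 1) if k else np.zeros(2**(n - 1))
    return np.concatenate([v0, v1])

v = dicke(7, 3)
amps = v[np.abs(v) > 1e-12]
print(len(amps))                                    # 35 = C(7, 3) basis states...
print(np.allclose(amps, 1 / np.sqrt(comb(7, 3))))   # ...all with equal amplitude
```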

+ +

For small values of $n$ and $k$ you need the base case of the induction. That is either going to be $n=1$ and $k=1$, in which case you do a $NOT$ gate, or $n=1$ and $k=0$, in which case you do the identity.

+ +

Again same caveats about one and two qubit gates and not the most efficient. You can check how $k=1$ special case works.

+",434,,,,,12-07-2018 03:10,,,,5,,,,CC BY-SA 4.0 +4890,2,,4838,12-07-2018 10:53,,1,,"

The Qconfig file was used to hold your API key, so that you could access the cloud based quantum devices and simulators. For the most recent version of Qiskit, it is no longer required. Instead you can use the command

+ +
from qiskit import IBMQ
+IBMQ.save_account('MY_API_TOKEN')
+
+ +

This is a python command. You can do it in a Jupyter notebook, on the python command line, or wherever else you prefer that accepts python3 commands. You will need to replace MY_API_TOKEN with the token you get from the IBM Q Experience website.

+ +

This step of saving credentials only needs to be done once. Afterwards, you can simply load the saved information using

+ +
from qiskit import IBMQ
+IBMQ.load_accounts()
+
+ +

Full instructions can be found here.

+ +

If you still want to use a Qconfig file, you can. You can create one yourself using the template in the link above. Then you place it wherever you want. To import it, use the sys package to append the path to where you have placed the file.

+",409,,,,,12-07-2018 10:53,,,,0,,,,CC BY-SA 4.0 +4891,2,,4831,12-07-2018 19:20,,0,,"

In v0.4 (which is out now), your problem is fixed by using cirq.Simulator instead of cirq.google.XmonSimulator.

+ +

We originally added automatic decomposition to the xmon simulator as a convenience feature, because it was the only simulator and it seemed onerous to require users to figure out how to decompose their circuits into native xmon gates before they could do a simulation. But it has become clear that it's very confusing to do this because a) it can easily quintuple the size of the circuit because the decomposition is done naively and b) it makes the results of the moment stepping simulation completely impossible to predict (as you have noticed). This is why we introduced the more flexible cirq.Simulator and why, in the future, cirq.google.XmonSimulator is going to require predecomposed inputs so these accidental decompositions don't happen anymore and so the moment structure of the circuit is preserved.

+",119,,,,,12-07-2018 19:20,,,,0,,,,CC BY-SA 4.0 +4892,2,,4886,12-07-2018 19:46,,2,,"

An alternative way to prepare this state is by using amplitude amplification and a circuit to compute the Hamming weight (number of set bits) of a register.

+
Computing the Hamming Weight
+

You can use a 3-bit adder to fold three qubits of weight $W$ into one qubit of weight $W$ and one qubit of weight $2W$ (plus some temporary junk). Assume you have this circuit.

+

To compute the Hamming weight, consider all of the register's qubits to have weight $1$. Keep applying the 3-to-2 adder to triplets of remaining weight $1$ qubits until you have exactly $1$ qubit of weight $1$, and $N/2$ qubits of weight $2$. Repeat on the qubits of weight $2$, until you have exactly $1$ qubit of weight $2$, and $N/4$ qubits of weight $4$.

+

Keep repeating until you have exactly one qubit of each weight class up to weight $2^{\lceil(\log_2(N))\rceil}$. These qubits form your binary Hamming weight register. If you measured that register and it returned $\ldots 000110$, then you would be projected into an equal superposition containing all the cases where exactly 6 bits are set.
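
The folding bookkeeping can be mirrored classically (my own sketch, not from the answer; hamming_weight is a hypothetical helper name, and popping only two bits stands in for a half adder):

```python
def hamming_weight(bits):
    """Classically mirror the qubit folding: repeatedly fold triplets of
    weight-w bits into one weight-w bit plus one weight-2w carry, and read
    off the single leftover bit of each weight class."""
    layer, out, w = list(bits), [], 1
    while layer:
        nxt = []
        while len(layer) > 1:
            a, b = layer.pop(), layer.pop()
            c = layer.pop() if layer else 0  # half adder when only two remain
            s = a + b + c                    # 3-bit adder: s = low + 2*high
            layer.append(s & 1)              # stays at weight w
            nxt.append(s >> 1)               # carry has weight 2w
        out.append(layer[0])                 # the lone remaining weight-w bit
        layer, w = nxt, 2 * w
    return sum(b * (1 << i) for i, b in enumerate(out))

print(hamming_weight([1, 0, 1, 1, 0, 1, 1]))  # 5
```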

+

Once you have the weight, you can easily create a qubit that's set or not set depending on if the weight equals the desired weight. Applying a $Z$ gate to this qubit is the crucial step needed for amplitude amplification.

+
Amplitude amplification
+

You want to amplify cases where the Hamming weight is equal to the desired weight. We have just defined how to perform an operation that negates the phases of correct states (compute the Hamming weight, compute the 'is it the right weight?' indicator qubit, hit that qubit with a $Z$, uncompute the indicator and the Hamming weight). If you alternate this with Grover diffusion operations, you will rotate towards the desired state.
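
As a rough classical estimate of the cost (my own helper, using the standard Grover rotation count; not from the answer), the number of full iterations before the final partial step is about:

```python
from math import asin, comb, floor, pi, sqrt

def grover_iterations(n, k):
    """Full Grover steps before the final partial step, when amplifying
    the comb(n, k) weight-k strings out of all 2**n strings."""
    theta = asin(sqrt(comb(n, k) / 2**n))  # initial angle toward the target
    return max(0, floor((pi / 2 - theta) / (2 * theta)))

print(grover_iterations(7, 3))   # 0: weight 3 of 7 is common, one partial step suffices
print(grover_iterations(10, 1))  # 7: a rarer target needs several full steps
```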

+

Depending on what your target Hamming weight is, this will take different amounts of time. There is one tricky bit during the very last iteration where you want to use partial phasing operations to exactly nail the desired states, but otherwise it's straightforward.

+

For example, here is a circuit that creates a superposition of all states less than a given value. It only needs to do one partial step, no full steps, because it limits itself to the case where at least half of the values are included:

+

+
Overview
+

I am unsure if this approach will be more or less efficient than the inductive approach used in the other answer. It may depend on what your target weight is.

+

If amplitude amplification is daunting to you, and you're in a very small case, you can avoid it by initializing the register into $|+\rangle$ states, measuring the Hamming weight, and keep retrying until it returns the right answer.

+",119,,2927,,08-12-2020 00:07,08-12-2020 00:07,,,,1,,,,CC BY-SA 4.0 +4893,1,5239,,12-08-2018 01:07,,3,399,"
+

Suppose we define an operator CPT that carries out the CPT transformation:
$$\text{CPT}|\Psi\rangle = A|\Psi\rangle$$
where A is just a constant. Or put another way, the states of our theory are eigenfunctions of the CPT operator. -Source

+
+ +

What do the eigenvalues of the CPT operator mean?

+",2645,,26,,1/18/2019 10:42,1/21/2019 14:35,Eigenvalues of CPT operator,,1,11,,,,CC BY-SA 4.0 +4894,1,4903,,12-08-2018 22:04,,4,208,"
+

Time-bin encoding is a technique used in Quantum information science to encode a qubit of information on a photon. Wikipedia

+
+ +

Is there a generalization for $n$-th level qudits?

+",2645,,2645,,12-09-2018 19:27,12-11-2018 10:26,Time-bin encoding qudits,,2,4,,,,CC BY-SA 4.0 +4895,1,4896,,12-09-2018 17:38,,6,1693,"

If a control qubit is in superposition, how will it affect the target qubit if it has collapsed or is in superposition? Is it true that CNOT works only if the control bit has collapsed to 1? Also, is it possible to collapse or Hadamard the control qubit “on the go” in a real-life quantum computer and still have a functional CNOT gate?

+",5292,,55,,5/30/2021 8:27,5/30/2021 8:27,How does the CNOT gate operate when the control qubit is a superposition?,,1,3,,,,CC BY-SA 4.0 +4896,2,,4895,12-09-2018 18:07,,4,,"

Consider an example with the control qubit in superposition and the target in the $ |0\rangle $ state:

+ +

$$ \frac{|0\rangle + |1\rangle}{\sqrt{2}} |0\rangle = \frac{|0\rangle|0\rangle + |1\rangle |0\rangle}{\sqrt{2}}$$

+ +

Applying a CNOT will have the following result:
$$ \frac{ CNOT(|0\rangle|0\rangle + |1\rangle |0\rangle)}{\sqrt{2}} = \frac{ CNOT(|0\rangle|0\rangle) + CNOT(|1\rangle |0\rangle)}{\sqrt{2}} = \frac{ |0\rangle|0\rangle + |1\rangle |1\rangle}{\sqrt{2}}$$

+ +

That is the CNOT acts linearly with a control qubit in superposition, but will change the target only on the part involving a $ |1\rangle$ in the control qubit.
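
This calculation is easy to reproduce with plain numpy (a sketch of the same example):

```python
import numpy as np

ket0 = np.array([1, 0])
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

# control in (|0> + |1>)/sqrt(2), target in |0>
state = np.kron(H @ ket0, ket0)
out = CNOT @ state
print(out)  # [0.707, 0, 0, 0.707]: the Bell state (|00> + |11>)/sqrt(2)
```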

+",4127,,,,,12-09-2018 18:07,,,,4,,,,CC BY-SA 4.0 +4897,1,4898,,12-09-2018 21:06,,3,872,"

Suppose I have created a circuit composed of some registers with the usual

+ +
qc = QuantumCircuit(qr, cr)
+
+ +

where qr and cr are a quantum register and a classical register respectively.

+ +

Now, suppose that at this point I want to invoke a subroutine. This subroutine, however, uses some ancillas. Is there any function to append this new set of qubits to the original circuit? Something like

+ +
ancillas = QuantumRegister(n, 'ancillas')
+#qc.append(ancillas)    
+
+ +

An equivalent problem (maybe) is the following one. Suppose I have a quantum circuit qcn composed of n qubits and a subroutine which returns another quantum circuit qck operating on k qubits, with k > n. Is it possible to compose the two circuits in such a way that the first n qubits on which the subroutine operates are the same as those of the original circuit?

+ +

At the moment, the only solution seems to be to declare in advance the total number of qubits required (k in the previous case) and then pass them around to the various functions.

+",4848,,26,,12-10-2018 04:18,12-10-2018 04:18,Is it possible to expand/merge different circuits?,,1,0,,,,CC BY-SA 4.0 +4898,2,,4897,12-09-2018 21:58,,5,,"

For the first question, you can use

+ +
qc.add(ancillas)
+
+ +

Note that this will change to add_registers in Qiskit Terra 0.7.0.

+ +

For some more guidance on how to combine and extend circuits, you can see this guide. Note that this is for the upcoming 0.7.0 release, but you can already get the functionality with

+ +
pip install git+https://github.com/Qiskit/qiskit-terra.git
+
+",409,,,,,12-09-2018 21:58,,,,2,,,,CC BY-SA 4.0 +4899,1,,,12-09-2018 22:29,,3,53,"

In his 2009 paper, Grover vs McEliece, Bernstein proposed using Grover's algorithm to obtain a quadratic speedup of the Prange ISD.
However, it is not quite clear to me in which part of the algorithm a Grover search can be used, and the other papers and references citing Bernstein's paper don't seem to clarify the point.
Does anyone have any idea about this point?

+",4848,,26,,12-10-2018 03:36,12-10-2018 03:36,Speedup Prange ISD using Grover,,0,0,,,,CC BY-SA 4.0 +4900,2,,2200,12-09-2018 22:43,,0,,"

I implemented the same problem for multiple qubits using qiskit here.

+ +

For the 3 qubit state you presented, you can use an oracle like the one here (I'm using Quirk just to show the amplitudes in real time). Note that the first three Hadamards (the ones before the ...) are there only to simulate a random input to the oracle and are not part of the oracle itself. In every case, as you can see from the amplitudes at the end of the circuit, only the $|111\rangle$ state gets its phase flipped, while all the other states remain unchanged.

+ +

In general, the idea is to simulate a CCZ gate using a Hadamard on the target bit, followed by a CCX gate, and then another Hadamard on the target bit.
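
A quick numpy check of this identity (my own sketch; the target is taken as the last qubit):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CCX = np.eye(8)
CCX[6:, 6:] = [[0, 1], [1, 0]]              # flips the target when both controls are 1
CCZ = np.diag([1, 1, 1, 1, 1, 1, 1, -1])

# (I (x) I (x) H) CCX (I (x) I (x) H): H X H = Z on the target
conj = np.kron(np.eye(4), H) @ CCX @ np.kron(np.eye(4), H)
print(np.allclose(conj, CCZ))  # True
```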

+",4848,,,,,12-09-2018 22:43,,,,0,,,,CC BY-SA 4.0 +4901,1,,,12-10-2018 00:56,,6,83,"

I am reading about how adiabatic evolution can be approximated by a quantum circuit of size poly(nT), and I am trying to follow the derivation in the paper

+ +
+

W. van Dam, M. Mosca, and U. Vazirani, “How Powerful is Adiabatic + Quantum Computation?,” Proceedings 2001 IEEE International Conference + on Cluster Computing, pp. 279–287, 2001.

+
+ +

In section 4, page 4, it states that:

+ +

""The Campbell-Baker-Hausdorff theorem tells us how well we can approximate ‘parallel Hamiltonians’ by consecutive ones: $|||\exp(A+B) − \exp(A)\exp(B)||| \in O(|||AB|||)$.""

+ +

The norm I believe is just the operator induced norm. I am familiar with the BCH formula but could not see the above relation directly coming out from the formula. So how is this relation derived?

+ +

I tried looking into the reference, which is ""Matrix Analysis"" by Rajendra Bhatia, but didn't have any success.

+",5005,,26,,12-10-2018 03:34,12-10-2018 09:47,Proof on approximating adiabatic evolution by quantum circuit,,1,0,,,,CC BY-SA 4.0 +4902,2,,4901,12-10-2018 08:17,,3,,"

The Baker-Campbell-Hausdorff formula says that you can expand
$$
\log(e^Ae^B)=A+B+[A,B]/2+\ldots=M
$$
where higher order terms have 3 or more uses of $A$ and $B$. Now, let's say that $A$ and $B$ are anti-Hermitian so that $e^A$, and similar terms, are unitary. We have
$$
\|\exp(A+B)-\exp(A)\exp(B)\|=\|e^{A+B}\left(\mathbb{I}-e^Me^{-(A+B)}\right)\|.
$$
The matrix norm is invariant under the action of unitaries, so this is the same as
$$
\|\mathbb{I}-e^Me^{-(A+B)}\|.
$$
Now, you might apply the BCH formula again to get
$$
\|\mathbb{I}-e^{M-A-B-[M,A+B]/2+\ldots}\|,
$$
the point being that the leading order $A+B$ stuff cancels from $M-A-B$ and the commutator, leaving terms like $[A,B]$. If both $A$ and $B$ are small ($O(\epsilon)$), then higher order terms have vanishing relevance, so we have
$$
\|\mathbb{I}-e^{[A,B]/2+O(\epsilon^3)}\|,
$$
and if we do an expansion on that, we get
$$
\|\mathbb{I}-(\mathbb{I}+[A,B]/2+O(\epsilon^3))\|=\|[A,B]\|/2+O(\epsilon^3)=O(\|AB\|).
$$
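
A numerical sanity check of the leading-order statement (my own numpy sketch; expm_ah is a hypothetical helper exploiting that $-iA$ is Hermitian when $A$ is anti-Hermitian):

```python
import numpy as np

rng = np.random.default_rng(0)

def expm_ah(A):
    """exp(A) for anti-Hermitian A, via the eigendecomposition of -iA."""
    w, V = np.linalg.eigh(-1j * A)
    return (V * np.exp(1j * w)) @ V.conj().T

def random_ah(d, eps):
    """Random anti-Hermitian matrix with spectral norm eps."""
    M = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    A = (M - M.conj().T) / 2
    return eps * A / np.linalg.norm(A, 2)

for eps in (1e-1, 1e-2, 1e-3):
    A, B = random_ah(4, eps), random_ah(4, eps)
    lhs = np.linalg.norm(expm_ah(A + B) - expm_ah(A) @ expm_ah(B), 2)
    rhs = np.linalg.norm(A @ B - B @ A, 2) / 2
    print(f"eps={eps:.0e}:  ||e^(A+B) - e^A e^B|| = {lhs:.3e},  ||[A,B]||/2 = {rhs:.3e}")
```

As eps shrinks, the two columns agree ever more closely, matching the $O(\epsilon^3)$ correction in the derivation.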

+",1837,,1837,,12-10-2018 09:47,12-10-2018 09:47,,,,4,,,,CC BY-SA 4.0 +4903,2,,4894,12-10-2018 23:37,,2,,"

Yes! The first application of time bin photonic qudits that comes to mind is for quantum key distribution. Here's an example: https://arxiv.org/abs/1611.01139. I am sure there are more references out there though!

+",4222,,,,,12-10-2018 23:37,,,,1,,,,CC BY-SA 4.0 +4904,1,,,12-11-2018 08:48,,1,122,"

Suppose that in my circuit I have to generate multiple, say n, random coin flips. +For example, this coin flips could be used to activate n CNOTs half of the time.

+ +

The trivial solution could be to use n different qubits and Hadamard them. However, this gets really huge when n is large.

+ +

Is there any better way? By better I mean using a small (fixed??) number of qubits and only a few simple quantum gates.

+",4848,,4848,,12-11-2018 08:54,12-11-2018 09:33,Multiple random coin flips,,1,0,,,,CC BY-SA 4.0 +4905,2,,4904,12-11-2018 09:33,,1,,"

This depends on exactly what you want to do with the outcome. If you want to use the $n$ outcomes simultaneously, then you need $n$ separate coins. Alternatively, if you are happy to implement them all in sequence (one after the other), then what you could do is:

+ +
• start with qubit in the state $|0\rangle$
• apply Hadamard to it
• measure it in the 0/1 basis
• drive the controlled-not off it
• apply Hadamard to it
• measure it in the 0/1 basis
• drive the controlled-not off it
• apply Hadamard to it
• ...
+ +

which only requires the one qubit.
This is assuming that when you talk about coin flips, you really mean the classical version (which is why I have the measurements in there). If the coherence were important to you, it might be a different matter.
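
A classical simulation of this single-qubit reuse (my own numpy sketch, with coin_flips as a hypothetical name; note that feeding the post-measurement state back into the next Hadamard gives 50/50 either way, so no reset is needed for the statistics):

```python
import numpy as np

rng = np.random.default_rng(42)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def coin_flips(n):
    """Simulate one qubit reused n times: Hadamard, measure, repeat."""
    outcomes = []
    state = np.array([1.0, 0.0])              # start in |0>
    for _ in range(n):
        state = H @ state                     # Hadamard
        p1 = abs(state[1]) ** 2
        bit = int(rng.random() < p1)          # measure in the 0/1 basis
        outcomes.append(bit)                  # ...drive the controlled-not off it
        state = np.eye(2)[bit]                # post-measurement state |bit>
    return outcomes

print(coin_flips(8))
```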

+",1837,,,,,12-11-2018 09:33,,,,4,,,,CC BY-SA 4.0 +4906,2,,4894,12-11-2018 10:26,,2,,"

There is a number of groups using time-bin encoding to realise computation/communication protocols.

+ +

One example is Furusawa's group in Japan, which among other things works on measurement-based QC with time-bin encoding (e.g. 1706.06312). +Another example that comes to mind is Silberhorn's group in Paderborn. They use time-bin encoding for various things, a random example of which is 1710.06103.

+",55,,,,,12-11-2018 10:26,,,,0,,,,CC BY-SA 4.0 +4907,1,4971,,12-11-2018 10:46,,2,1100,"

I have a circuit composed of n qubits, plus a single one which is an ancilla. I'm making multiple measurements on the ancilla at different stages of the circuit, while working on the n qubits. These measurements are not really needed at all: they are just a way to collapse the state of the qubit at some point during the computation and then reuse the same qubit in a different way.

+ +

At the end of the circuit, when I'm measuring the outcome of the n qubits, I don't want the result of this ancilla to be shown in the output of the get_counts() function; what I want is only the output of the n qubits. Is there any way to achieve this result?

+",4848,,4848,,12/15/2018 18:12,12/15/2018 18:22,qiskit - Is there any way to discard the results of a measurement?,,1,0,,,,CC BY-SA 4.0 +4908,1,4909,,12-11-2018 12:58,,6,557,"

In this research paper, the authors introduce a new algorithm to perform Hamiltonian simulation.

+ +

The beginning of their abstract is

+ +
+

Given a Hermitian operator $\hat{H} = \langle G\vert \hat{U} \vert G\rangle$ that is the projection of an oracle $\hat{U}$ by state $\vert G\rangle$ created with oracle $\hat{G}$, the problem of Hamiltonian simulation is approximating the time evolution operator $e^{-i\hat{H}t}$ at time $t$ with error $\epsilon$.

+
+ +

In the article:

+ +
• $\hat{G}$ and $\hat{U}$ are called ""oracles"".
• $\hat{H}$ is a Hermitian operator in $\mathbb{C}^{2^n} \times \mathbb{C}^{2^n}$.
• $\vert G \rangle \in \mathbb{C}^d$ (legend of Table 1).
+ +

My question is the following: what means $\hat{H} = \langle G\vert \hat{U} \vert G\rangle$? More precisely, I do not understand what $\langle G\vert \hat{U} \vert G\rangle$ represents when $\hat{U}$ is an oracle and $\vert G \rangle$ a quantum state.

+",1386,,1386,,12-11-2018 13:33,12-11-2018 13:33,"Problem with the mathematical formulation of ""qubitization""",,1,0,,,,CC BY-SA 4.0 +4909,2,,4908,12-11-2018 13:23,,5,,"

You want to start by being careful with the sizes of the operators. $\hat U$ acts on $q$ qubits, and $\hat H$ acts on $n<q$ qubits. I believe that $|G\rangle$ is a state of $q-n$ qubits. So, what we really need to talk about is two distinct sets of qubits. Let me call them sets $A$ and $B$. $A$ contains $n$ qubits, and $B$ contains $q-n$ qubits. I'll use subscripts to denote which qubits the different operators and states act upon:

+ +

$$
\hat H_A=(\langle G|_B\otimes\mathbb{I}_A)\hat U_{AB}(|G\rangle_B\otimes\mathbb{I}_A)
$$
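
In code, this projection is just extracting a block of the bigger unitary. A toy numpy sketch (my own; note that a generic random $\hat U$ will not make $\hat H_A$ Hermitian, and the oracles in the paper are constructed so that it is):

```python
import numpy as np

rng = np.random.default_rng(0)

# q = 3 qubits in total: B holds q - n = 2 of them, A holds n = 1
M = rng.normal(size=(8, 8)) + 1j * rng.normal(size=(8, 8))
U, _ = np.linalg.qr(M)                      # a random 3-qubit unitary standing in for the oracle

G = np.zeros(4)
G[0] = 1.0                                  # |G> on register B, here just |00>
emb = np.kron(G.reshape(-1, 1), np.eye(2))  # |G>_B (x) I_A, an 8 x 2 isometry
H_A = emb.conj().T @ U @ emb                # the 2 x 2 block <G| U |G>
print(H_A.shape)                            # (2, 2): an operator on the n-qubit register
```

Because emb is an isometry, $\hat H_A$ is a contraction: its spectral norm is at most 1.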

+",1837,,,,,12-11-2018 13:23,,,,4,,,,CC BY-SA 4.0 +4910,1,4911,,12-11-2018 13:46,,8,373,"

How do I find an explicit isomorphism between the elements of the Clifford group and some 24 quaternions?

+ +

The easy part:
The multiplication of matrices should correspond to multiplication of quaternions.

+ +

The identity matrix $I$ should be mapped to the quaternion $1$.

+ +

The hard part:
To what should the other elements of the Clifford group be mapped? Since the following two elements generate the entire group, mapping these will be sufficient:

+ +

$$H=\frac{1}{\sqrt{2}}\begin{bmatrix}1&1\\1&-1\end{bmatrix}\text{ and }P=\begin{bmatrix}1&0\\0&i\end{bmatrix}$$

+ +

Can anybody help?

+",5305,,,,,12-12-2018 10:10,Isomorphism between the Clifford group and the quaternions,,1,0,,,,CC BY-SA 4.0 +4911,2,,4910,12-11-2018 16:34,,5,,"

The quaternions are represented faithfully in two dimensions by the unit matrix and the Pauli matrices multiplied by the imaginary unit: $i = \sqrt{-1} X$, $j = \sqrt{-1} Y$ and $k = \sqrt{-1} Z$ respectively, thus you only need to write $H$ and $P$ as the linear combinations: $H = -\frac{\sqrt{-1} }{\sqrt{2}}(i+k)$ and $P = \frac{1+\sqrt{-1}}{2}(1-k)$.

+ +

However, this is not an isomorphism between the Clifford group and the quaternions, because here we use the quaternions as an algebra, not a group. What can be said is that the Clifford group is isomorphic to a subgroup of the invertible elements of the quaternion algebra.

+ +

The term quaternion group is reserved for another subgroup of the invertible elements of the quaternion algebra, consisting of $\pm 1$, $\pm (-iX = R_x(\pi))$, $\pm (-iY = R_y(\pi))$ and $\pm (-iZ = R_z(\pi))$. This group can be generated by $\pi$ rotations around two major axes. However, the order-8 quaternion group is not isomorphic to the order-24 Clifford group, which can be generated by $\frac{\pi}{2}$ rotations around two major axes. The Clifford group is in a certain sense the square root of the quaternion group.

+ +

Clarifications

+ +

@Knot Log, Sorry I have misled you on two points:

+ +

1) The quaternions imaginary units should be represented as $i = \sqrt{-1} X$, $j = \sqrt{-1} Y$, $k = \sqrt{-1} Z$, as they have to square to $-1$ (I have corrected that in the main text) .

+ +

2) I forgot to mention that we need to work with the quaternion algebra over the complex field, i.e., we need to distinguish between the complex imaginary unit $\sqrt{-1}$ and the quaternion $i$ (I hope you took care of that in your analysis – in any case I have added the explicit expressions of $H$ and $P$ to the main text). In addition, it is correct that global phases are not important when you use the elements as quantum gates, and you correctly took equivalence classes.

+ +

However, please see the article by Michel Planat, where in section 2.2 he mentions that the group generated by $H$ and $P$ should be of order 192, such that only when you remove a $\mathbb{Z}_8$ center do you reach the 24 element Clifford group (I haven't done the work myself).

+ +

Moreover, it is possible to generate the $24$ element group directly without additional phases (please see the lecture notes by Michel Devoret) if you start with generators of unit determinant (for example, $R_x(\frac{\pi}{2})$ and $R_z(\frac{\pi}{2})$), because the Clifford group is geometric: it is isomorphic to the octahedral group, the group of symmetries of the cube or the octahedron, and all of its elements are rotations, i.e., have unit determinant.

+",4263,,4534,,12-12-2018 10:10,12-12-2018 10:10,,,,5,,,,CC BY-SA 4.0 +4912,1,23749,,12/13/2018 0:42,,9,832,"

Shor's algorithm to factor a number $N$ goes as follows:

+ +
1. Pick a random value $b \in (0, N)$.
2. Use a specific quantum computation to sample a value $v$ that should be close to $2^{m} k/p$ where $m$ is a precision parameter of the quantum computation, $p$ is the period of $f(x) = b^x \pmod{N}$, and $k$ is an unknown integer resulting from the sampling process.
3. Convert $v$ into a potential period $p$ using an algorithm based on continued fractions.
4. If $b^p \neq 1 \pmod{N}$, goto 1. The period finding process failed (e.g. maybe you got $v=0$).
5. If $p$ is odd, goto 1. You got a useless period. Try a different $b$.
6. If $b^{p/2} \equiv -1 \pmod{N}$, goto 1. You got a useless period. Try a different $b$.
7. Output $\gcd(b^{p/2} - 1, N)$
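
The classical post-processing (steps 3 to 7) can be sketched in a few lines of Python (my own toy version with hypothetical helper names; the quantum sampling of step 2 is replaced by an ideal sample):

```python
from fractions import Fraction
from math import gcd

def recover_period(v, m, N):
    """Step 3: turn a sample v ~ 2**m * k / p into a candidate period p
    via the continued-fraction expansion (limit_denominator)."""
    if v == 0:
        return None
    return Fraction(v, 2**m).limit_denominator(N - 1).denominator

def classical_checks(b, p, N):
    """Steps 4-6: reject useless periods; step 7: emit a factor."""
    if p is None or pow(b, p, N) != 1 or p % 2:
        return None
    r = pow(b, p // 2, N)
    if r == N - 1:                   # b**(p/2) = -1 (mod N)
        return None
    return gcd(r - 1, N)

# toy run: N = 15, b = 7, true period 4, ideal sample v = 2**8 * 1 / 4
N, b, m = 15, 7, 8
p = recover_period(2**m // 4, m, N)
print(p, classical_checks(b, p, N))  # 4 3: gcd(7**2 - 1, 15) = 3 is a factor
```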
+ +

My question is: how often do steps 4, 5, or 6 fail? Assuming an ideal quantum computer with no error, how often do you just get unlucky and pick a bad $b$ or sample a bad $v$? How many times do you expect to repeat step 2 before the factoring succeeds?

+ +

References giving numerical upper bounds on the 4-5-6 failure chance would be especially appreciated.

+",119,,26,,12/13/2018 16:09,1/21/2022 5:53,Expected repetitions of the quantum part of Shor's algorithm,,2,0,,,,CC BY-SA 4.0 +4913,2,,4912,12/13/2018 1:18,,7,,"

This self-answer gives a not-very-good worst case analysis. I'd really rather have a proper distribution of repetition counts.

+ +

Probability of a period resulting in factoring

+ +

In Shor's original paper, you can find the following statement:

+ +
+

The multiplicative group (mod $p^α$) for any odd prime power $p^α$ is cyclic [Knuth 1981], so for any odd prime power $p_i^{a_i}$, the probability is at most 1/2 of choosing an $x_i$ having any particular power of two as the largest divisor of its order $r_i$. Thus each of these powers of 2 has at most a 50% probability of agreeing with the previous ones, so all k of them agree with probability at most $1/2^{k−1}$, and there is at least a $1 - 1/2^{k-1}$ chance that the $x$ we choose is good. This scheme will thus work as long as $n$ is odd and not a prime power; finding factors of prime powers can be done efficiently with classical methods

+
+ +

(Oddly, I also found an older version of the paper with a bound of $1-2^{-k}$ instead of $1-2^{1-k}$. Might be an error that was corrected?)

+ +

This implies a 50% chance of success of getting a good period (steps 4+5), since all numbers have at least two distinct factors or else can be factored classically.

+ +

Probability of a sample determining a period

+ +

For getting a good sample (step 3), Shor gives a more complicated bound. If you sample a value with a $k$ that's not relatively prime to the period, the period you recover will be wrong. But integers have a limited number of divisors, and Shor cites a bound:

+ +
+

There are also $r$ possible values for $x^k$, since $r$ is the order of $x$. Thus, there are $r \phi(r)$ states which would enable us to obtain $r$. Since each of these states occurs with probability at least $\frac{1}{3r^2}$, we obtain $r$ with probability at least $\frac{\phi(r)}{3r}$. Using the theorem that $\phi(r)/r > k/\log \log r$ for some fixed $k$ [17, Theorem 328], this shows that we find $r$ at least a $k/ \log \log r$ fraction of the time.

+ +

[17]: G. H. Hardy and E. M. Wright, An Introduction to the Theory of Numbers, Fifth Edition, Oxford University Press, New York (1979).

+
+ +

We know $r < N$, so this proves step 3 introduces at most $O(\lg \lg N)$ repetitions. It seems likely to me that this can be improved (e.g. if you keep replacing $b$ by $b^p$ when you get a bad $p$ that would presumably keep reducing $r$, or maybe that's bad?), and I'm sure someone has explained how to do this in great detail, but I don't know the reference.

+ +

""On Shor's Quantum Factor Finding Algorithm: Increasing the Probability of Success and Tradeoffs Involving the Fourier Transform Modulus"" seems promising. It discusses using the lcm of two samples in order to get a constant probability of successfully recovering a period (~60%).

+ +
+ +

I also did some simulation of how many repetitions are needed when the two factors are similarly sized primes, using some basic post-processing, and it appears to converge to 1.5. The following plot shows a blue dot for each factoring attempt, with a random ±0.5 added to each dot's X and Y position so you get a sense of the density in each area. The thick black line is the average.

+ +

+",119,,119,,05-09-2019 08:16,05-09-2019 08:16,,,,0,,,,CC BY-SA 4.0 +4914,1,4916,,12/13/2018 15:02,,2,2448,"

I was reading a proof of the no-cloning theorem; there are a couple of steps that are not clear to me, but the book does not give an explanation for them. So here it is:
Theorem: It is impossible to create an identical copy of an arbitrary unknown quantum state.
Proof (by contradiction): Suppose that there exists a unitary $C$ (that copies an arbitrary unknown quantum state). Then: .

+ +

This is the first thing I have trouble with: why do we take only one state (the $\left| 0 \right>$ state) to prove that it is impossible to copy any arbitrary state into $any$ qubit? The proof goes on, stating:

+ +

Here I don't understand how (116) is the same as (117) (no need to explain the identity matrix in the middle). I would appreciate it if you could make these 2 steps clearer to me.

+",2559,,26,,12/13/2018 20:40,12/13/2018 20:40,Proof of no-cloning,,1,4,,,,CC BY-SA 4.0 +4915,2,,4852,12/13/2018 15:48,,3,,"

You can test a modular multiplication circuit stand-alone. In this case $\text{base} = 2$ and $N = 3$; however, the smallest useful composite is $N = 15 = 3 \times 5$.

+ +

Let's take a well-known multiplication by 7 modulo 15 circuit

+ +

+ +

We start with input $$|1\rangle \text{ gives } |7\rangle$$ +$$|7\rangle \text{ gives } |4\rangle$$ +$$|4\rangle \text{ gives } |13\rangle$$ +$$|13\rangle \text{ gives } |1\rangle$$ +This repeats after $4$ steps, so the period is $4$.

+ +

In the not-so-useful case of $N = 3$ and $a = 2$

+ +

$$1 \times 2 \bmod 3 = 2$$

+ +

$$2 \times 2 \bmod 3 = 1$$

+ +

It repeats after $2$ steps, so the period is correctly $2$, and this can be tested in a quantum simulator.
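These classical periods can be double-checked with a short loop (a plain-Python sketch, useful as a sanity check before building the quantum circuit):

```python
def multiplicative_order(a, N):
    """Smallest r > 0 with a**r = 1 (mod N); assumes gcd(a, N) == 1."""
    x, r = a % N, 1
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

print(multiplicative_order(7, 15))  # 4: the period of multiplication by 7 mod 15
print(multiplicative_order(2, 3))   # 2: the period for base 2 and N = 3
```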

+",1773,,1773,,7/22/2019 20:57,7/22/2019 20:57,,,,0,,,,CC BY-SA 4.0 +4916,2,,4914,12/13/2018 16:36,,6,,"

For the step from (116) to (117), the equivalence is proved by

+ +

\begin{equation} +(\langle\psi_1|\otimes\langle0|)C^\dagger C(|\psi_2\rangle\otimes|0\rangle) = (\langle\psi_1|\otimes\langle0|)(|\psi_2\rangle\otimes|0\rangle) = \langle\psi_1|\psi_2\rangle\otimes\langle0|0\rangle=\langle\psi_1|\psi_2\rangle\langle0|0\rangle, +\end{equation} +where in the second step I used the property of tensor products that $(A\otimes B)(C\otimes D)=AC\otimes BD$, where obviously the dimensions of the multiplying matrices should match; and for the third step I used the fact that $\langle\psi_1|\psi_2\rangle$ and $\langle0|0\rangle$ are scalars, implying that their tensor product is just a multiplication between them.

+ +

For your first question, state $|0\rangle$ is used because it is the typical ancillary system used for quantum algorithms, that is, the zero state is the one used as an auxiliary variable for doing the corresponding operations. However, note that the proof is not state dependent, meaning that you can take an arbitrary ancillary qubit, name it $|\rho\rangle$, and prove the theorem. Here is such a proof: +\begin{equation} +C(|\psi_1\rangle\otimes|\rho\rangle) = |\psi_1\rangle\otimes|\psi_1\rangle\\ +C(|\psi_2\rangle\otimes|\rho\rangle) = |\psi_2\rangle\otimes|\psi_2\rangle +\end{equation} +And so $C$ would be the unitary we are looking for, so now the proof: +\begin{equation} +\langle\psi_1|\psi_2\rangle=\langle\psi_1|\psi_2\rangle\langle\rho|\rho\rangle =(\langle\psi_1|\otimes\langle\rho|)C^\dagger C(|\psi_2\rangle\otimes|\rho\rangle) = (\langle\psi_1|\otimes\langle\psi_1|)(|\psi_2\rangle\otimes|\psi_2\rangle)=(\langle\psi_1|\psi_2\rangle)^2. +\end{equation} +And so the same contradiction as before is obtained, and the theorem is proved.
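As a concrete numerical illustration of the theorem (not part of the proof above), the CNOT gate copies the computational basis states but fails to clone the superposition $|+\rangle$; a small numpy check:

```python
import numpy as np

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])
zero = np.array([1, 0])
one = np.array([0, 1])
plus = (zero + one) / np.sqrt(2)

# CNOT does copy basis states: |1>|0> -> |1>|1>
assert np.allclose(CNOT @ np.kron(one, zero), np.kron(one, one))

# ... but CNOT(|+>|0>) is the entangled Bell state, not the clone |+>|+>
out = CNOT @ np.kron(plus, zero)
print(np.allclose(out, np.kron(plus, plus)))  # False
```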

+",2371,,,,,12/13/2018 16:36,,,,0,,,,CC BY-SA 4.0 +4917,1,4918,,12/13/2018 19:15,,2,765,"

I'm trying to build the matrix that corresponds to this quantum teleportation circuit, but it never works when I test it in the Quirk simulator. I tried finding the matrix corresponding to every part of the circuit and then multiplying them, but it never works. Does anyone know what I might be doing wrong? +When calculating the matrices, I didn't consider the measurement gates. +

+",5065,,26,,12/23/2018 7:55,11/15/2019 16:31,Building a matrix corresponding to the teleportation circuit,,1,5,,,,CC BY-SA 4.0 +4918,2,,4917,12/13/2018 19:35,,4,,"

Since the quantum teleportation circuit has three qbits, the matrix at each step is 8x8 and thus has 64 elements; this is pretty clunky to type out in its entirety, so I'll just walk you through step by step and you can derive the full matrix for a specific step if you want. Given a qbit we want to teleport:

+ +

$|\psi\rangle = \begin{bmatrix} \alpha \\ \beta \end{bmatrix}$

+ +

the operations are as follows:

+ +

$H_2C_{2,1}C_{1,0}H_1 +\left ( +\begin{bmatrix} \alpha \\ \beta \end{bmatrix} \otimes \begin{bmatrix} 1 \\ 0\end{bmatrix} \otimes \begin{bmatrix} 1 \\ 0\end{bmatrix} +\right ) = H_2C_{2,1}C_{1,0} +\left ( +\begin{bmatrix} \alpha \\ \beta \end{bmatrix} \otimes \begin{bmatrix} \frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} \end{bmatrix} \otimes \begin{bmatrix} 1 \\ 0\end{bmatrix} +\right ) = H_2C_{2,1} \left (\begin{bmatrix} \alpha \\ \beta \end{bmatrix} \otimes \begin{bmatrix} \frac{1}{\sqrt{2}} \\ 0 \\ 0 \\ \frac{1}{\sqrt{2}} \end{bmatrix} \right ) = H_2 \left ( \frac{1}{\sqrt{2}} \begin{bmatrix} \alpha \\ 0 \\ 0 \\ \alpha \\ 0 \\ \beta \\ \beta \\ 0 \end{bmatrix} \right ) = \frac{1}{2} \begin{bmatrix} \alpha \\ \beta \\ \beta \\ \alpha \\ \alpha \\ -\beta \\ -\beta \\ \alpha \end{bmatrix}$

+ +

This is the vector directly before the first two qbits are measured. Note we can write it as follows:

+ +

$\frac{1}{2} \begin{bmatrix} \alpha \\ \beta \\ \beta \\ \alpha \\ \alpha \\ -\beta \\ -\beta \\ \alpha \end{bmatrix} += \frac{1}{2} \left ( |00\rangle \otimes \begin{bmatrix} \alpha \\ \beta \end{bmatrix} ++ |01\rangle \otimes \begin{bmatrix} \beta \\ \alpha \end{bmatrix} ++ |10\rangle \otimes \begin{bmatrix} \alpha \\ -\beta \end{bmatrix} ++ |11\rangle \otimes \begin{bmatrix} -\beta \\ \alpha \end{bmatrix} \right )$
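This pre-measurement vector can be verified numerically (a numpy sketch; the teleported qbit is the leftmost tensor factor, matching the ordering used above, and $\alpha=0.6$, $\beta=0.8$ are arbitrary example amplitudes):

```python
import numpy as np

a, b = 0.6, 0.8                       # example amplitudes, |a|^2 + |b|^2 = 1
psi = np.array([a, b])
zero = np.array([1, 0])
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])
I2, I4 = np.eye(2), np.eye(4)

state = np.kron(psi, np.kron(zero, zero))    # |psi> ⊗ |0> ⊗ |0>
state = np.kron(np.kron(I2, H), I2) @ state  # H_1
state = np.kron(I2, CNOT) @ state            # C_{1,0}
state = np.kron(CNOT, I2) @ state            # C_{2,1}
state = np.kron(H, I4) @ state               # H_2

expected = 0.5 * np.array([a, b, b, a, a, -b, -b, a])
print(np.allclose(state, expected))  # True
```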

+ +

We can then apply the intuitive ""cancel and normalize"" approach to measurement for each of the four possible measurement outcomes, which I outline in this answer. It should then become clear how applying the final $X$ and $Z$ gates (depending on measurement outcomes) will lead to the rightmost qbit taking on the value of $|\psi\rangle$.

+ +

If you'd like a more advanced account of how measurement works in quantum teleportation, you can also see an approach using the density operator which I go over here.

+",4153,,4153,,11/15/2019 16:31,11/15/2019 16:31,,,,6,,,,CC BY-SA 4.0 +4955,1,4956,,12/14/2018 7:13,,2,124,"

I have a tripartite composite system of the form $H_{\text{tot}}=H_{ab}\otimes H_c$, where the system $C$ behaves as the dissipator or the environment (I can model it as a thermal bath). It is coupled only to system $B$, not to $A$, while $A$ is coupled with $B$ and entangles with it under time evolution. At $t=0$ it can take a completely separable composite state $H_{\text{tot}}(0)=H_a\otimes H_b \otimes H_c$. +My objective is to solve the master equation (more precisely, the Lindbladian form) for $\rho_{ab}.$

+ +

But when I do that (with the partition as $H_{ab}|H_c$), the sub-system $A$ trivially disappears from the equations of motion, because it does not couple with $C$ directly but only acts via $B$ indirectly. +What is the right way to model this kind of interaction?

+",4889,,55,,12/19/2021 13:57,12/19/2021 13:57,How to formulate the master equation for three systems?,,1,0,,,,CC BY-SA 4.0 +4956,2,,4955,12/14/2018 8:11,,4,,"

To be clear:

+ +
    +
  • You have a Hilbert space $\mathcal{H}_A\otimes\mathcal{H}_B\otimes\mathcal{H}_C$.
  • +
  • The initial state is $\rho_\text{tot}(0)=\rho_A\otimes\rho_B\otimes\rho_C$.
  • +
  • There is a Hamiltonian acting on the system of the form $H_{\text{tot}}=H_{AB}\otimes\mathbb{I}_C+\mathbb{I}_A\otimes H_{BC}$
  • +
  • Instead of directly calculating the effect of the Hamiltonian evolution $\rho_{\text{tot}}(t)=e^{-iH_{\text{tot}}}\rho_{\text{tot}}(0)e^{iH_{\text{tot}}}$, you want to solve a Master equation +$$ +\frac{d\rho_{AB}}{dt}=-i[H_{AB},\rho_{AB}]+\sum_nL_n\rho_{AB}L_n^\dagger-\frac12L_n^\dagger L_n\rho_{AB}-\frac12\rho_{AB}L_n^\dagger L_n +$$
  • +
+ +

Is this a correct summary of what you're wanting to do?

+ +

If so, then note that the presence of the Hamiltonian $H_{AB}$ means that the systems A and B should couple together.
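To see numerically that $A$ does not drop out, here is a minimal Euler-integration sketch of the Master equation above; the choices $H_{AB}=\sigma_x\otimes\sigma_x$ and a single jump operator acting only on $B$ are illustrative assumptions, not taken from the question:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sm = np.array([[0, 0], [1, 0]], dtype=complex)   # example jump operator on one qubit
I2 = np.eye(2, dtype=complex)

H = np.kron(sx, sx)   # illustrative H_AB: couples A and B
L = np.kron(I2, sm)   # the dissipator acts on B only

rho = np.zeros((4, 4), dtype=complex)
rho[0, 0] = 1.0       # start in |0>_A |0>_B

dt = 0.001
for _ in range(2000):  # crude Euler integration up to t = 2
    drho = -1j * (H @ rho - rho @ H)
    drho += L @ rho @ L.conj().T \
        - 0.5 * (L.conj().T @ L @ rho + rho @ L.conj().T @ L)
    rho = rho + dt * drho

# reduced state of A (trace out B): it has clearly evolved away from |0><0|
rho_A = np.array([[rho[0, 0] + rho[1, 1], rho[0, 2] + rho[1, 3]],
                  [rho[2, 0] + rho[3, 1], rho[2, 2] + rho[3, 3]]])
print(np.round(rho_A.real, 3))
```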

+",1837,,,,,12/14/2018 8:11,,,,0,,,,CC BY-SA 4.0 +4957,1,4959,,12/14/2018 10:46,,1,720,"

Once a state is measured but we don't look at the result, is the state now written as a density matrix? That is, the probability that it could land on a measurement operator, multiplied by that operator applied to the state, summed over every measurement operator contained in the measurement that it could land on?

+",2832,,55,,5/28/2020 10:46,5/28/2020 10:46,"How to write a post-measurement state, if we don't know the measurement result?",,2,1,,,,CC BY-SA 4.0 +4958,2,,4957,12/14/2018 10:54,,1,,"

In the Copenhagen interpretation, there are only two kinds of things that one can do, one is evolution and other is the measurement. Measuring but not looking is equivalent to measuring the system and hence projecting it to one of the possible eigenstates. (Or maybe you can clarify more what you meant by not looking?)

+ +

And after the system is probed in the measurement, it is no longer in a superposition and no longer in the statistical mixture anymore. It just becomes a decohered density matrix with a single element in the measurement (projector) basis. Density matrix representation then becomes trivial.

+ +

(I think in the latter part of your question you are pointing to the completeness of probability in the measurement, which, summed over all the measurement projectors, will be unity. But this has to do with the act of measurement; once that is performed, there is only one deterministic state.)

+",4889,,,,,12/14/2018 10:54,,,,0,,,,CC BY-SA 4.0 +4959,2,,4957,12/14/2018 11:58,,4,,"

Suppose you have a state $\rho$, and a random process that changes this to a state $\rho_j$ with probability $p_j$. If you know what the value of $j$ is, your knowledge of the resulting state will be described by the corresponding $\rho_j$. If you have no information regarding $j$, your knowledge will be described by

+ +

$$\sum_j ~ p_j ~ \rho_j$$

+ +

This is a general statement that holds for any random process. For the case you describe, which is measurement, the possible outcomes can often be described by a set of projectors $\{P_j\}$. For these

+ +

$$ p_j = {\rm tr}~(~P_j~\rho~), ~~~ \rho_j = \frac{P_j \rho P_j}{p_j}.$$
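For example, measuring $\rho=|+\rangle\langle+|$ in the computational basis and discarding the outcome (a small numpy illustration of the formulas above):

```python
import numpy as np

plus = np.array([1, 1]) / np.sqrt(2)
rho = np.outer(plus, plus)            # |+><+| : has off-diagonal coherences

P0 = np.diag([1.0, 0.0])
P1 = np.diag([0.0, 1.0])

post = np.zeros((2, 2))
for P in (P0, P1):
    p = np.trace(P @ rho)             # p_j = tr(P_j rho)
    rho_j = P @ rho @ P / p           # post-measurement state for outcome j
    post += p * rho_j                 # weight by the outcome probability

print(post)  # [[0.5, 0], [0, 0.5]] -- the coherences have been erased
```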

+ +

Probabilities for more general measurements can be calculated by more general operators, but figuring out the post-measurement states for these is not always as easy.

+",409,,26,,12/14/2018 12:02,12/14/2018 12:02,,,,0,,,,CC BY-SA 4.0 +4960,1,4961,,12/14/2018 12:21,,4,342,"

I've been struggling to understand the modular exponent bit of Shor's algorithm. My understanding is that it takes +a register in the state $\frac{1}{\sqrt{Q}}\sum_{k=0}^{Q-1} |k\rangle |0\rangle$ to the state $\frac{1}{\sqrt{Q}}\sum_{k=0}^{Q-1} |k\rangle |f(k)\rangle$ where $f(k) = x^{k}$ mod $N$. +(Here, $x$ is the random integer found at the start of Shor's algorithm.)

+ +

My question is: Why is this operation unitary?

+",5328,,26,,12/14/2018 13:04,12/19/2018 19:28,Understanding why the modular function part of Shor's algorithm is unitary,,1,0,,,,CC BY-SA 4.0 +4961,2,,4960,12/14/2018 13:05,,3,,"

The critical thing, in this case, about a unitary operator is that it maps orthogonal states to orthogonal states (if $\langle i|j\rangle=0$, then $\langle i|U^\dagger U|j\rangle=0$, so the transformed vectors $U|i\rangle$ and $U|j\rangle$ are orthogonal). Now, you've defined what it must do for a set of states: +$$ +|k\rangle|0\rangle\mapsto |k\rangle|f(k)\rangle. +$$ +Now, it should be clear that all of the outputs are orthogonal to all other ones, simply as a result of the different values of $k$. So, surely, it will be possible to define a unitary over all possible basis states.

+ +

Indeed, one of the usual starting points is to define the action of the unitary as +$$ +|k\rangle|y\rangle\mapsto \left\{\begin{array}{cc} +|k\rangle|y+x^k\text{ mod }N\rangle & y<N \\ +|k\rangle|y\rangle & y\geq N +\end{array}\right. +$$ +(a second is to use $|k\rangle|yx^k\text{ mod }N\rangle$ on the top line). +If we look at all possible $y$ and $k$, then we're looking at the whole basis. Again, different $k$s clearly give orthogonal vectors, independent of the value of $y$. Primarily, we have to check that +$$ +\langle y+x^k\text{ mod }N|y'+x^k\text{ mod }N\rangle=0. +$$ +This must be true provided $y-y'\text{ mod }N\neq0$, which is true for the cases $y,y'<N$ and $y\neq y'$.
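For a fixed $k$, the map on the second register is a permutation of the basis states (assuming, as in Shor's algorithm, that $x$ is coprime to $N$), hence unitary; a quick numpy check for $x=7$, $N=15$ on a 4-qubit register:

```python
import numpy as np

N, x = 15, 7
dim = 16                               # 4-qubit register holding y = 0 .. 15
U = np.zeros((dim, dim))
for y in range(dim):
    out = (y * x) % N if y < N else y  # |y> -> |y * x mod N>, identity for y >= N
    U[out, y] = 1

# each column has exactly one 1, and so does each row: a permutation matrix
print(np.allclose(U @ U.T, np.eye(dim)))  # True: U is unitary
```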

+",1837,,2879,,12/19/2018 19:28,12/19/2018 19:28,,,,0,,,,CC BY-SA 4.0 +4962,1,4969,,12/14/2018 20:14,,3,874,"

I'm using this simple code to test my Qiskit install and learn how to use it, but it keeps giving these problems. How do I fix it? (Using VS Code and Anaconda Python 3.7.)

+ +
import numpy as np
+from qiskit import QuantumCircuit, ClassicalRegister, QuantumRegister
+from qiskit import execute
+# Create a Quantum Register with 3 qubits.Basicaly creating the number of qubits your system will use
+q = QuantumRegister(3, 'q')
+# Create a Quantum Circuit acting on the q register. Declaring the Circuit, this circuit shall create a GHZ state
+circ = QuantumCircuit(q,)
+# Add a H gate on qubit 0, putting this qubit in superposition.
+circ.h(q[0])
+# Add a CX (CNOT) gate on control qubit 0 and target qubit 1, putting
+# the qubits in a Bell state.
+circ.cx(q[0], q[1])
+# Add a CX (CNOT) gate on control qubit 0 and target qubit 2, putting
+# the qubits in a GHZ state.
+circ.cx(q[0], q[2])
+#Draws the circuit
+from qiskit.tools.visualization import circuit_drawer
+circuit_drawer(circ)
+
+ +

+",5065,,26,,12/15/2018 4:58,12/15/2018 10:59,How to fix? 'E1101:Instance of 'QuantumCircuit' has no 'h' member' and 'E1101:Instance of 'QuantumCircuit' has no 'cx' member',,2,0,,,,CC BY-SA 4.0 +4963,1,4970,,12/14/2018 21:24,,5,613,"

In the general form of Grover's algorithm, we start with the uniform superposition of n qubits. Now, suppose instead that we start with a generic state, for example the W state, and the oracle only inverts the phase of one of the basis.

+ +

To make it more concrete, let's say we have in input a $W_3$ state $$|W\rangle = \frac{1}{\sqrt{3}} (|001\rangle + |010\rangle + |100\rangle)$$ +and that the oracle inverts only state $$|001\rangle$$

+ +

At this point, how can we implement the diffusion operator? In other words, how can we amplify only this state in order to obtain the right output with a high probability?

+",4848,,55,,12-11-2022 10:31,12-11-2022 10:31,How to implement Grover's diffusion operator when starting with a W state?,,2,4,,,,CC BY-SA 4.0 +4965,2,,4963,12/15/2018 2:16,,3,,"

Say $ | \psi \rangle $ represents the uniform superposition.

+ +

Then, the Grover diffusion operator is written : +$$ 2 | \psi \rangle \langle \psi | - I$$

+ +

Now to act on a subset of a superposition, you would need to implement : +$$ 2 | W\rangle \langle W | - I$$

+ +

I am not sure if anyone has really looked into how to decompose this operation as a circuit. I guess it would be too dependent on the initial state.

+ +

This paper, however, shows a trick to still get the state you want with high probability; check section 3.3.1. +The idea is to start with $ | W \rangle $ and use a first Grover iteration to invert the amplitudes about the average by marking only the target state. Then you would mark the computational basis states that are present initially in $ | \psi \rangle $ and use inversion about the average again. Finally, you would continue with the usual Grover iterations. They provide an example to help you visualize the effect of the steps. The target should have a high probability of being measured, even if other states not present in $ | W \rangle $ can be measured.

+",4127,,,,,12/15/2018 2:16,,,,1,,,,CC BY-SA 4.0 +4966,2,,4962,12/15/2018 2:30,,0,,"

I guess you are following the code from the notebook getting_started_with_qiskit_terra. +However it should be

+ +
circ = QuantumCircuit(q) 
+
+ +

instead of

+ +
circ = QuantumCircuit(q,)
+
+ +

according to this notebook. +Can you try removing the comma and running again?

+",4127,,26,,12/15/2018 4:55,12/15/2018 4:55,,,,2,,,,CC BY-SA 4.0 +4967,1,,,12/15/2018 4:04,,2,286,"

I would like to see the compiled circuit as executed on a hardware. +Nominally, this sequence of commands should return the QASM circuit in the variable ran_qc, but it is a Null pointer. Does anyone know how to make it work?

+ +

I was notified via this ticket that it should be fixed, but I do not see any improvement on my end.

+ +

I'm using qiskit version '0.6.1' +and am connected to IBMQX4.

+ +
+jobRes=job.result()
+print('ran_qc get_names()=',jobRes.get_names())
+assert len( jobRes.get_names()) ==1
+circuit_name = jobRes.get_names()[0]
+print('ran_qc name=',circuit_name)
+ran_qc=jobRes.get_ran_qasm(circuit_name)
+print(ran_qc)
+
+",5334,,26,,12/15/2018 4:56,12/16/2018 5:43,Print circuit compiled on hardware?,,1,0,,,,CC BY-SA 4.0 +4969,2,,4962,12/15/2018 10:59,,4,,"

There are no errors in your program.

+ +

Pylint is a static code analysis tool and sometimes it gets confused and emits false positives. In this case, gate methods (h, cx and others) are added to QuantumCircuit dynamically at a later point, so Pylint cannot detect them. See qiskit.extensions.standard if you'd like to learn how this works.

+ +

The simplest “fix” for this “error” is making Pylint ignore those classes in your pylintrc (if your installation of Pylint comes pre-built into your IDE, you'll have to see if the IDE has some particular way of configuring Pylint).

+ +
[TYPECHECK]
+
+# List of class names for which member attributes should not be checked (useful
+# for classes with dynamically set attributes). This supports the use of
+# qualified names.
+ignored-classes=QuantumCircuit,CompositeGate
+
+",580,,,,,12/15/2018 10:59,,,,0,,,,CC BY-SA 4.0 +4970,2,,4963,12/15/2018 14:04,,3,,"

Okay I think I have a potential solution, but I don't know if it's plausible or theoretically correct.

+

Standard Grover's algorithm

+

Here is an example of the original Grover's algorithm with just three qubits; the oracle negates the phase of $$|011\rangle$$ I'll post the image also:

+

+At the various steps you can see the probabilities and the amplitudes. In particular, after the oracle, you can see that there is precisely one state whose phase is negated, the 011 one. Then the diffusion operator reflects all the amplitudes about their average, amplifying the marked state. +After one repetition of Grover's algorithm we have a 78% chance of reading the right state, which grows to 94.5% after two repetitions.

+

Grover with W3 and standard diffusion

+

The trick should be to rotate around one of the basis states, say the first one i.e. $$|001\rangle$$ Here is the circuit representing it:

+

+

Because we only have 3 overall possible states, a single iteration should suffice. The above circuit correctly selects the state $$|001\rangle$$

+

Grover with W5 state and standard diffusion

+

Here is another example with a W5 state selecting $$|00010\rangle$$ while rotating around $$|00001\rangle$$. +

+

As we can see, in both cases the standard diffusion amplifies the right state, though not with a very high probability

+

Grover with W4 and W4 conjugate transpose

+

A much better algorithm is the one described by this circuit +

+

The idea is pretty similar to the original Grover algorithm. Basically, you apply the conjugate transpose of the W4 state (and in the general case, the conjugate transpose of whatever you have applied to obtain the initial state of the Grover's algorithm). In this way, you have a non-zero probability of obtaining the all-zero state, which in this case is $$|0000\rangle$$. Then, you invert about this state and reapply the W4 state. +In this case, because we start with 4 possible answers, Grover's algorithm works 100% of the time.
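The 100% success figure can be checked at the amplitude level (a numpy sketch; the choice of $|0010\rangle$ as the marked state is just for illustration):

```python
import numpy as np

n = 4
s = np.zeros(2 ** n)
for i in range(n):
    s[1 << i] = 0.5         # |W4> = (|0001> + |0010> + |0100> + |1000>) / 2

target = 0b0010             # the marked basis state
v = s.copy()
v[target] *= -1             # oracle: flip the sign of the marked amplitude

v = 2 * s * (s @ v) - v     # diffusion 2|W4><W4| - I

print(abs(v[target]) ** 2)  # 1.0 -- one iteration succeeds with certainty
```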

+",4848,,-1,,6/18/2020 8:31,12/15/2018 16:07,,,,5,,,,CC BY-SA 4.0 +4971,2,,4907,12/15/2018 17:32,,2,,"

If you keep measuring into the same bit, the value gets overwritten every time, so you won't receive the intermediate values.

+ +

For example, the following will output a single 1 from the second measurement, with no trace of the first.

+ +
from qiskit import ClassicalRegister, QuantumRegister, QuantumCircuit
+from qiskit import execute
+from qiskit import BasicAer
+
+q = QuantumRegister(1)
+c = ClassicalRegister(1)
+qc = QuantumCircuit(q,c)
+
+qc.measure(q,c)
+qc.x(q)
+qc.measure(q,c)
+
+job = execute(qc,backend=BasicAer.get_backend('qasm_simulator'))
+job.result().get_counts()
+
+ +

I guess what you want is something more like

+ +
q = QuantumRegister(n) # n qubits
+a = QuantumRegister(1) # one ancilla qubit
+c = ClassicalRegister(n) # n classical bits for output
+
+qc = QuantumCircuit(q,a,c)
+
+qc.x(a[0]) 
+qc.measure(a[0],c[0]) # measure the ancilla to one of the classical bits
+
qc.measure(q,c) # measure the n qubits to the n bits (overwriting the output from the previous measurement)
+
+",409,,409,,12/15/2018 18:22,12/15/2018 18:22,,,,5,,,,CC BY-SA 4.0 +4972,2,,4967,12/15/2018 17:41,,2,,"

You need to extract the compiled qasm from a qobj object. You can create this by compiling

+ +
from qiskit import compile
+qobj = compile(qc,backend,shots=shots)
+
+ +

If you want to create a batch job, where you send many circuits in at once, you can replace the single circuit qc with a list of circuits.

+ +

Information about the circuits, the backend on which they'll run, and how they've been compiled, can then be found by querying the qobj.

+ +

Perhaps the best way is to use qobj.as_dict(), which returns a dictionary containing the information. In Qiskit 0.7.0 (which will be the stable version as of end of Dec 2018), you can get the information you want using

+ +
qobj.as_dict()['experiments'][index]['header']['compiled_circuit_qasm']
+
+ +

Actually running the job defined by the qobj can be done with

+ +
job = backend.run(qobj)
+
+",409,,,,,12/15/2018 17:41,,,,2,,,,CC BY-SA 4.0 +4974,1,4986,,12/16/2018 16:09,,3,423,"

I'm studying teleportation circuits with this tutorial and just out of curiosity, why can't a teleportation circuit be run on an IBM Q device?

+",5065,,26,,03-12-2019 09:16,03-12-2019 09:16,Why can't you run a teleportation circuit on an IBM Q device?,,1,2,,,,CC BY-SA 4.0 +4975,1,5012,,12/16/2018 17:13,,28,10261,"

I'm creating a gate for a project and need to test whether it gives the same results as the original circuit in a simulator. How do I build this gate in Qiskit? It's a 3-qubit gate, an 8x8 matrix:

+ +

$$ +\frac{1}{2} +\begin{bmatrix} +1 & 0 & 1 & 0 & 0 & 1 & 0 & -1 \\ +0 & 1 & 0 & 1 & 1 & 0 & -1 & 0 \\ +0 & 1 & 0 & -1 & 1 & 0 & 1 & 0 \\ +1 & 0 & -1 & 0 & 0 & 1 & 0 & 1 \\ +1 & 0 & 1 & 0 & 0 & -1 & 0 & 1 \\ +0 & 1 & 0 & 1 & -1 & 0 & 1 & 0 \\ +0 & 1 & 0 & -1 & -1 & 0 & -1 & 0 \\ +1 & 0 & -1 & 0 & 0 & -1 & 0 & -1 +\end{bmatrix} +$$
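A quick numpy check confirms the matrix is unitary (so it can, in principle, be implemented as a gate):

```python
import numpy as np

U = 0.5 * np.array([
    [ 1,  0,  1,  0,  0,  1,  0, -1],
    [ 0,  1,  0,  1,  1,  0, -1,  0],
    [ 0,  1,  0, -1,  1,  0,  1,  0],
    [ 1,  0, -1,  0,  0,  1,  0,  1],
    [ 1,  0,  1,  0,  0, -1,  0,  1],
    [ 0,  1,  0,  1, -1,  0,  1,  0],
    [ 0,  1,  0, -1, -1,  0, -1,  0],
    [ 1,  0, -1,  0,  0, -1,  0, -1],
])

print(np.allclose(U @ U.conj().T, np.eye(8)))  # True: the matrix is unitary
```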

+",5065,,26,,03-12-2019 09:16,03-03-2021 04:03,How do I build a gate from a matrix on Qiskit?,,6,4,,,,CC BY-SA 4.0 +4976,1,4983,,12/16/2018 18:55,,5,261,"

For a one qubit system, take a basis. +Call this the mixture basis. +Consider only basis states and classical mixtures of these basis states.

+ +

Definition of Shannon Entropy used here: Defined with respect to the measurement basis, on the probabilities of various outcomes. +For eg: $\frac{1}{2}|0\rangle \langle0| + \frac{1}{2}|+\rangle \langle+|$, when measured in the $|0\rangle, |1\rangle$ basis has Shannon Entropy $-\frac{3}{4}\log(\frac{3}{4}) - \frac{1}{4}\log(\frac{1}{4})$ because there is $\frac{3}{4}$ chance of measuring $|0\rangle$ and $\frac{1}{4}$ chance of measuring $|1\rangle$.

+ +

I'm trying to prove that the least value of Shannon Entropy will occur when the measurement basis is equal to the mixture basis.

+ +

(This is for me to get an intuition of Von Neumann entropy. If I prove the above, then I can think of Von Neumann entropy as the least Shannon entropy I could get after measuring across any basis.)

+ +

Let the mixture basis be $\frac{1}{2}(I + n.\sigma)$ and $\frac{1}{2}(I - n.\sigma)$

+ +

Let the measurement basis be $\frac{1}{2}(I + m.\sigma)$ and $\frac{1}{2}(I - m.\sigma)$

+ +

Let the qubit be $$p\frac{1}{2}(I + n.\sigma) + (1-p)\frac{1}{2}(I - n.\sigma)$$

+ +

Probability of the qubit showing up as $\frac{1}{2}(I + m.\sigma)$ when measured is: +$$p(\frac{1}{2}(1 + m.n)) + (1-p)(\frac{1}{2}(1-m.n))$$

+ +

Let the above value be $p^{'}$

+ +

Then the Shannon entropy will be $-p^{'}\log(p^{'}) - (1-p^{'})\log(1-p^{'})$

+ +

And to minimize the entropy, I need to minimise or maximise $p^{'}$

+ +

I'm not sure how to do that, though, or whether what I'm trying to do so far makes sense. I'll be grateful for any help continuing the proof, or insight into the intuition I'm trying to build.

+",2832,,10480,,3/20/2021 23:26,3/20/2021 23:26,Shannon entropy is least when Measurement basis = Mixture basis,,2,0,,,,CC BY-SA 4.0 +4977,1,5102,,12/16/2018 19:00,,3,456,"

Inspired by the question Are there emulators for quantum computers?, I'm curious to know if it's possible to emulate a quantum network on a classic computer. Additionally, is it possible to emulate a quantum network over a classic network?

+ +

Current resources:

+ + +",2645,,2645,,12/16/2018 22:29,11/14/2020 13:51,Are there emulators for quantum networks?,,3,0,,,,CC BY-SA 4.0 +4978,1,4980,,12/16/2018 20:35,,7,2451,"

In Nielsen and Chuang (page:379), it is shown that the operator sum representation of a depolarizing channel $\mathcal{E}(\rho) = \frac{pI}{2} + (1-p)\rho$ is easily seen by substituting the identity matrix with

+

$$\frac{\mathbb{I}}{2} = \frac{\rho + X\rho X + Y\rho Y +Z\rho Z}{4}.$$

+

What is the more systematic way to see this result? Particularly, for the higher dimensional analogue, I cannot see how to proceed.

+",4831,,55,,7/16/2020 0:53,11-01-2021 01:29,How to find the operator sum representation of the depolarizing channel?,,2,0,,,,CC BY-SA 4.0 +4979,2,,4977,12/16/2018 21:07,,2,,"

I joined the Quantum Internet Hackathon using SimulaQron. We did simulations of the quantum leader election algorithms.

+ +

SimulaQron is more of an abstract simulator running on a classical computer or a classical network. Its most important aspect is entanglement between two nodes. This can be done with a single command called EPR, which creates an entangled pair of qubits on different nodes; a nice and easy setup for your quantum network. Keep in mind, though, that it is not a real entangled pair but only a classical local copy.

+",1773,,,,,12/16/2018 21:07,,,,2,,,,CC BY-SA 4.0 +4980,2,,4978,12/16/2018 21:45,,8,,"

This really depends where you want to start from. For instance, you can construct the Choi state of $\mathcal E$, i.e., +$$ +\sigma = (\mathcal E \otimes \mathbb I)(|\Omega\rangle\langle\Omega|)\ , +$$ +with $\Omega = \tfrac{1}{\sqrt{D}}\sum_{i=1}^D |i,i\rangle$, and then extract the Kraus operators of $\mathcal E(\rho)=\sum M_i\rho M_i^\dagger$ by taking any decomposition +$$ +\sigma = \sum |\psi_i\rangle\langle\psi_i|\ ,\tag{*} +$$ +and writing $|\psi_i\rangle = (M_i\otimes\mathbb I)|\Omega\rangle$ (which is always possible).

+ +

Note that the decomposition $(*)$ is highly non-unique (any $|\phi_j\rangle = \sum V_{ij} |\psi_i\rangle$, with $V$ an isometry, is also a valid decomposition), which relates to the fact that the Kraus decomposition is equally non-unique. Obviously, the eigenvalue decomposition is a simple choice (which, moreover, minimizes the number of Kraus operators).

+ +
+ +

Let's look at your example in a bit more detail. Here, $D=2$. You have that +$$ +\mathcal E(X)=p\mathrm{tr}(X)\,\frac{\mathbb I}{2}+(1-p)X +$$ +for any $X$ (due to linearity) -- the $\mathrm{tr}(X)$ is required to make this trace-preserving for general $X$.

+ +

We now have that +\begin{align} +\sigma &= (\mathcal E \otimes \mathbb I)(|\Omega\rangle\langle \Omega|) +\\ +& = \tfrac1D \sum_{ij} \mathcal E(|i\rangle\langle j|)\otimes |i\rangle\langle j|\ +\end{align} +inserting the definition of $|\Omega\rangle$ and using linearity.

+ +

This yields +$$ +\sigma = \frac{p}{2D}\mathbb I\otimes \sum_{i}|i\rangle\langle i| + +(1-p)\frac1D \sum_{ij}|i\rangle\langle j|\otimes |i\rangle\langle j|\ . +$$ +The second term is just $(1-p)|\Omega\rangle\langle\Omega|$, and the first term is +$\frac{p}{2D}\mathbb I\otimes\mathbb I$.

+ +

You can now see that one possible eigenvalue decomposition of $\sigma$ is given by the four Bell states (I leave it to you to work out the weights), and it is well known and easy to check that the four Bell states can be written as

+ +

Thus, you get that the $M_i$ in the Kraus representation are the Paulis and the identity, with the weight given by the eigenvalue decomposition of $\sigma$.
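This whole recipe can be verified numerically (a numpy sketch: build $\sigma$, eigendecompose, reshape the weighted eigenvectors into Kraus operators, and check they reproduce $\mathcal E$):

```python
import numpy as np

D, p = 2, 0.3

def E(X):
    """Qubit depolarizing channel E(X) = p tr(X) I/2 + (1-p) X."""
    return p * np.trace(X) * np.eye(2) / 2 + (1 - p) * X

# Choi state sigma = (E ⊗ id)(|Omega><Omega|)
sigma = np.zeros((4, 4), dtype=complex)
for i in range(D):
    for j in range(D):
        ket_bra = np.outer(np.eye(D)[i], np.eye(D)[j])   # |i><j|
        sigma += np.kron(E(ket_bra), ket_bra) / D

vals, vecs = np.linalg.eigh(sigma)

# |psi_k> = (M_k ⊗ I)|Omega>  <=>  M_k = sqrt(D * lambda_k) * (eigenvector reshaped)
kraus = [np.sqrt(D * lam) * vecs[:, k].reshape(D, D)
         for k, lam in enumerate(vals) if lam > 1e-12]

# the extracted Kraus operators reproduce the channel on a test state
rho = np.array([[0.7, 0.2], [0.2, 0.3]], dtype=complex)
out = sum(M @ rho @ M.conj().T for M in kraus)
print(np.allclose(out, E(rho)))  # True
```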

+",491,,491,,12/17/2018 0:16,12/17/2018 0:16,,,,4,,,,CC BY-SA 4.0 +4981,1,4982,,12/17/2018 6:28,,6,553,"

In the Quantum Operations section in Nielsen and Chuang, (page 358 in the 2002 edition), they have the following equation: +$$\mathcal E(\rho) = \mathrm{Tr}_{env} [U(\rho \otimes \rho_{env})U^\dagger]$$

+

They show an example with +$\rho_{env} = |0\rangle \langle0|$ +and $U = \mathrm{CNOT}$, and claim that the final solution is: +$$P_0\rho P_0 + P_1\rho P_1,$$ +where $P_0=|0\rangle \langle0|$ and $P_1=|1\rangle \langle 1|$.

+

These are my steps so far to get this, but I don't know how to trace out the environment after this:

+

Let $\rho$ be $|\psi \rangle \langle \psi |$, +so that $\rho \otimes \rho_{env} = |\psi, 0\rangle \langle \psi, 0|$.

+

Applying the unitary $U$, we have

+

$$ |00 \rangle \langle 00| \psi, 0 \rangle \langle \psi, 0 | 00 \rangle \langle 00 | + + |00 \rangle \langle 00| \psi 0 \rangle \langle \psi 0 | 10 \rangle \langle 11 | \\ + + |11 \rangle \langle 10| \psi 0 \rangle \langle \psi 0 | 00 \rangle \langle 00 | + + |11 \rangle \langle 10| \psi 0 \rangle \langle \psi 0 | 10 \rangle \langle 11 |. +$$

+

I don't know how to trace out the environment in the above state.

+

Also, I realize that I have considered only a pure state, if anyone can show it for a general state that would be great.

+",2832,,55,,2/19/2021 18:38,2/19/2021 19:02,How does $\mathcal E(\rho)=\mathrm{Tr}_{env}[U(\rho\otimes\rho_{env})U^\dagger]$ turn into $P_0\rho P_0+P_1\rho P_1$?,,2,0,,,,CC BY-SA 4.0 +4982,2,,4981,12/17/2018 8:03,,4,,"

Let's start with a general state +$$ +\rho\otimes\rho_0=\sum_{x,y\in\{0,1\}}\langle x|\rho|y\rangle|x\rangle\langle y|\otimes |0\rangle\langle 0|. +$$ +If we apply the controlled-not, we have +$$ +\rightarrow\rho_{\text{final}}=\sum_{x,y\in\{0,1\}}\langle x|\rho|y\rangle|x\rangle\langle y|\otimes |x\rangle\langle y|. +$$

+ +

Now we want to take the partial trace over the second subsystem. This means calculating +$$ +\sum_k(\mathbb{I}\otimes\langle k|)\rho_{\text{final}}(\mathbb{I}\otimes|k\rangle)=\sum_k\sum_{x,y\in\{0,1\}}\langle x|\rho|y\rangle|x\rangle\langle y|\times \langle k|x\rangle\langle y|k\rangle. +$$ +If we perform the sums over $x$ and $y$, we find that $x=y=k$, so +$$ +=\sum_k\langle k|\rho|k\rangle|k\rangle\langle k|, +$$ +which is entirely equivalent to removing all the off-diagonal elements of $\rho$.
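The same calculation in numpy (a small sketch, using an arbitrary example $\rho$):

```python
import numpy as np

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

rho = np.array([[0.6, 0.2], [0.2, 0.4]])   # arbitrary example qubit state
rho_env = np.diag([1.0, 0.0])              # |0><0|

total = CNOT @ np.kron(rho, rho_env) @ CNOT.T

# partial trace over the environment (the second tensor factor)
rho_out = np.array([[total[0, 0] + total[1, 1], total[0, 2] + total[1, 3]],
                    [total[2, 0] + total[3, 1], total[2, 2] + total[3, 3]]])

print(rho_out)  # diag(0.6, 0.4): the off-diagonal elements have been removed
```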

+",1837,,,,,12/17/2018 8:03,,,,0,,,,CC BY-SA 4.0 +4983,2,,4976,12/17/2018 8:13,,2,,"

Let's write +$$ +p'=\frac12+m\cdot n\frac{2p-1}{2}, +$$ +and assume without loss of generality that $p>\frac12$, which also means that $p'>\frac12$. Note the binary entropy function $h(p')$ is symmetric about $p'=\frac12$, and is monotonically decreasing for $p'>\frac12$, meaning that we want to make $p'$ as large as possible in order to minimise $h(p')$. Since $p$ is fixed, the only way to increase $p'$ is to make $m\cdot n$ as large as possible. This happens with $m=n$ so that $m\cdot n=1$ and $p'=p$.
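A quick numeric scan over $m\cdot n\in[-1,1]$ confirms this (a numpy sketch with an example mixture probability $p=0.9$):

```python
import numpy as np

def h(q):
    """Binary entropy in bits."""
    q = np.clip(q, 1e-12, 1 - 1e-12)
    return -q * np.log2(q) - (1 - q) * np.log2(1 - q)

p = 0.9                          # example mixture probability
mn = np.linspace(-1, 1, 201)     # possible values of m . n
pp = 0.5 + mn * (2 * p - 1) / 2  # p' as a function of m . n
entropies = h(pp)

best = mn[np.argmin(entropies)]
print(best, entropies.min())     # minimum at m . n = +/- 1, with value h(p)
```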

+",1837,,,,,12/17/2018 8:13,,,,0,,,,CC BY-SA 4.0 +4985,1,,,12/17/2018 10:28,,1,123,"

The question is similar to this one. As suggested in the answer, I can easily do this with just one qubit: I repeatedly Hadamard it and measure in order to have a fair coin flip at every point. The problem with it is that, because of the measurements, the computation is not reversible.

+ +

So, again, suppose that in my circuit I have to generate multiple, say n, random coin flips. For example, these coin flips could be used to activate n CNOTs half of the time.

+ +

The trivial solution could be to use n different qubits and Hadamard them. However, this gets really huge when n is large.

+ +

Is there any better way? By better I mean using a smaller number of qubits and only a few simple quantum gates.

+",4848,,26,,12/23/2018 7:54,12/23/2018 7:54,Multiple random coin flips without measurements,,1,2,,,,CC BY-SA 4.0 +4986,2,,4974,12/17/2018 12:05,,3,,"

I asked someone from IBM and got this answer: +Teleportation can not be run on the IBM Q devices at the moment as no operations can be performed after a measurement.

+",5065,,,,,12/17/2018 12:05,,,,0,,,,CC BY-SA 4.0 +4987,2,,4976,12/17/2018 16:50,,4,,"

My favourite way of proving that the Shannon entropy is minimized for a measurement in the qubit basis is through the notion of majorization (see Nielsen and Chuang or the book on Matrix Analysis by Bhatia for a formal definition). Specifically, $p$ and $(1-p)$ are related to $p'$ and $(1-p')$ by the following relation

+ +

\begin{equation} +\left(\begin{array}{c} +p'\\ +1-p' +\end{array}\right)=D +\left(\begin{array}{c} +p\\ +1-p +\end{array}\right) +\end{equation}

+ +

where +\begin{equation} +D= +\left( +\begin{array}{cc} +\frac{1}{2}(1+m\cdot n)&\frac{1}{2}(1-m\cdot n)\\ +\frac{1}{2}(1-m\cdot n)&\frac{1}{2}(1+m\cdot n) +\end{array} +\right) +\end{equation} +is a bistochastic matrix, meaning its elements are probabilities and each of its rows and columns sums to one. We then say that the probability distribution defined by the left-hand side of the first equation (call it $\vec{p}'$) is majorized by that on the right-hand side (call it $\vec{p}$), and we write symbolically that $\vec{p}'\prec\vec{p}$. Intuitively this means that $\vec{p}'$ is more ""random"" than $\vec{p}$.

+ +

Now for any Schur concave function $f$

+ +

$$f(\vec{p}')\geq f(\vec{p})\quad \mbox{when}\quad \vec{p}'\prec\vec{p}$$

+ +

The Shannon entropy is such a Schur concave function and the proof is now complete.
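A small numerical illustration of this argument (the distribution $p$ and the values of $m\cdot n$ below are arbitrary choices):

```python
import numpy as np

# Numerical illustration of the majorization argument: for any value of m.n,
# D is bistochastic, p' = D p, and the Shannon entropy of p' is never smaller
# than that of p.
def shannon(v):
    v = v[v > 0]
    return -np.sum(v * np.log2(v))

p = np.array([0.8, 0.2])
for mn in [-1.0, -0.3, 0.0, 0.5, 1.0]:
    a, b = (1 + mn) / 2, (1 - mn) / 2
    D = np.array([[a, b], [b, a]])
    # bistochastic: rows and columns each sum to one
    assert np.allclose(D.sum(axis=0), 1) and np.allclose(D.sum(axis=1), 1)
    p_prime = D @ p
    assert shannon(p_prime) >= shannon(p) - 1e-12
print('majorization check passed')
```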

+",5238,,,,,12/17/2018 16:50,,,,0,,,,CC BY-SA 4.0 +4988,2,,4985,12/17/2018 21:21,,2,,"

You say it's a problem that when using measurements your circuit is not reversible, but generating a truly random number is an inherently non-reversible operation. Consider that for an operation to be reversible, you must be able to uniquely determine the input given the output. If the output of your operation is truly random, then it is by necessity unrelated to the input and thus not reversible.

+ +

You could possibly implement a classical deterministic pseudo-random number generator in a reversible way on a quantum computer and use that instead, but that sounds like a tremendous inconvenience unless you have a very good reason for requiring reversibility. At that point why not just generate the random numbers classically and give them as input to the quantum circuit?

+",4153,,,,,12/17/2018 21:21,,,,4,,,,CC BY-SA 4.0 +4989,1,4993,,12/17/2018 22:40,,10,801,"

In an answer to a previous question, Generalization for n quantum teleportations, Craig Gidney states:

+ +
+

The more complicated way to generalize teleportation is figuring out how to make it work on qutrits and qudits instead of only qubits. Basically, instead of using a ""basis"" made up of tensor products of X and Z matrices, you need to switch to a basis based on clock and shift matrices.

+
+ +

How can quantum teleportation be generalized for qudits?

+",2645,,55,,10/20/2021 9:19,4/16/2022 18:57,How is quantum teleportation generalized to qudits?,,4,0,,,,CC BY-SA 4.0 +4990,1,,,12/17/2018 23:44,,3,262,"

In the process of research leading up to my previous question, I found out about matrix, vector & logical clocks.

+ +

The citation in the aforementioned question mentions clock and shift matrices. Wikipedia states:

+ +
+

These two matrices are also the cornerstone of quantum mechanical dynamics in finite-dimensional vector spaces as formulated by Hermann Weyl, and find routine applications in numerous areas of mathematical physics. The clock matrix amounts to the exponential of position in a ""clock"" of d hours, and the shift matrix is just the translation operator in that cyclic vector space, so the exponential of the momentum. They are (finite-dimensional) representations of the corresponding elements of the Weyl-Heisenberg on a d-dimensional Hilbert space.

+
+ +

I am curious to find out

+ +
    +
  • if there is any usage of matrix, vector or logical clocks in quantum computing
  • +
  • how clock matrices and matrix clocks compare & contrast
  • +
+",2645,,2645,,12/26/2018 16:25,12/26/2018 16:25,Clock matrix vs matrix clock,,0,2,,,,CC BY-SA 4.0 +4991,1,,,12/18/2018 1:42,,4,151,"

I am trying to wrap my head around the correlation between repeating teleportation & XOR linked lists.

+ +

I understand that:

+ +
    +
  1. ""the quantum equivalent of the one time pad (i.e. XORing a message with a secret key) is quantum teleportation"" (source)
  2. +
  3. ""the simplest way to generalize teleportation is to just repeat it"" (source)
  4. +
+ +

What I don't understand is how repeating teleportation could lead to ""a data structure which is used to maintain a list of data items"" (see this answer for more use cases for quantum XOR linked lists).

+ +

Hopefully someone can help me to understand how these concepts connect, because I do not understand teleportation well enough to see how it could be used to accomplish any of the use cases described in the answer to my XOR linked list question.

+ +

See also: Equivalent Quantum Circuits sections VII. A & B

+ +
+

As example applications we study quantum teleportation and dense coding protocols in terms of a simple XOR swapping circuit and give an intuitive picture of a basic gate teleportation circuit.

+
+",2645,,2645,,12/24/2018 17:38,12/24/2018 17:38,What does teleportation have to do w/ XOR linked lists?,,0,6,,3/27/2019 16:44,,CC BY-SA 4.0 +4992,1,5033,,12/18/2018 5:14,,4,85,"

Suppose I have a quantum algorithm that produces solutions where more than one linear combination of qubit values with raised probability amplitude is a correct result. Each linear combination is correct in the context of this hypothetical algorithm, but the task of the algorithm is not to iteratively elicit the solutions, merely to produce a superposition in which all possible solutions are represented. The problem isn't with the implementation of the supposed algorithm, but with measuring a superposition that hasn't been honed to a single maximal linear combination: over the ensemble of measurements you might obtain each of the different linear combinations, making the results unclear even though the algorithm implementation is correct. The objective is therefore to have an abstract procedure to read off these possible answers in a clean manner, so that they aren't lost, but for which the superposition has been narrowed to a single value before each measurement.

+ +

For example, suppose I have linear combinations $|10\rangle$ and $|11\rangle$, each of which cannot be decomposed into simpler expressions with independent amplitudes, and both of which have raised probability because, for my quantum algorithm, they are correct. The others, $|00\rangle$ and $|01\rangle$, have amplitudes near zero. For simplicity, say the current superposition is described by $\frac{1}{\sqrt{2}}(|10\rangle + |11\rangle)$. Before reading each value off, I want to cancel the probability of all terms aside from the $i$th term of the superposition. Therefore, first cancel out $|11\rangle$, so that I don't have a high chance of reading $|11\rangle$ when the first measurement is done; the superposition becomes simply $|10\rangle$. In every iteration of this procedure, I want the superposition narrowed to a single term, so that there is no confusion upon measurement. As with the example, after the first measurement, repeat, but now with the probability cancelled from $|10\rangle$ rather than $|11\rangle$.

+ +

But I want a mechanism to do this for me for $N$ possible admissible answers, so that it works in the more general case.

+",5350,,26,,12/22/2018 11:50,12/22/2018 14:19,Solution enumeration algorithm?,,1,4,,,,CC BY-SA 4.0 +4993,2,,4989,12/18/2018 7:56,,9,,"

Let's define the shift and clock matrices (the generalisation of the Pauli X and Z matrices) as +$$ +X=\sum_{i=0}^{d-1}|i+1\text{ mod }d\rangle\langle i|\qquad Z=\sum_{i=0}^{d-1}\omega^i|i\rangle\langle i| +$$ +where $\omega=e^{2\pi \sqrt{-1}/d}$. Now we can define a maximally entangled orthonormal basis (the equivalent of the Bell basis): +$$ +|\Psi_{ij}\rangle=(X^iZ^j\otimes\mathbb{I})\frac{1}{\sqrt{d}}\sum_{k=0}^{d-1}|k\rangle|k\rangle. +$$ +(Reader exercise: verify that $\langle\Psi_{ij}|\Psi_{kl}\rangle=\delta_{ik}\delta_{jl}$.)
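For concreteness, here is a small numerical sketch (for $d=3$) of these definitions, verifying the reader exercise:

```python
import numpy as np

# Build the shift and clock matrices for d = 3 and verify the reader
# exercise above, i.e. that the d^2 states |Psi_ij> are orthonormal.
d = 3
omega = np.exp(2j * np.pi / d)
X = np.roll(np.eye(d), 1, axis=0)                # shift: |i+1 mod d><i|
Z = np.diag([omega ** i for i in range(d)])      # clock: omega^i |i><i|

# |Psi_00> = (1/sqrt(d)) sum_k |k>|k>
phi = sum(np.kron(np.eye(d)[k], np.eye(d)[k]) for k in range(d)) / np.sqrt(d)

basis = [np.kron(np.linalg.matrix_power(X, i) @ np.linalg.matrix_power(Z, j),
                 np.eye(d)) @ phi
         for i in range(d) for j in range(d)]

gram = np.array([[np.vdot(u, v) for v in basis] for u in basis])
print(np.allclose(gram, np.eye(d * d)))          # -> True
```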

+ +

The teleportation setup is basically the same as for qubits. Alice holds an unknown qudit state $|\psi\rangle\in\mathbb{C}^d$, and Alice and Bob share the two-qudit state $|\Psi_{00}\rangle$.

+ +

Alice performs a measurement between her two qudits using the basis $|\Psi_{ij}\rangle$, and gets an answer $(ij)$. Let's assume that the answer is $(00)$. In this case, Bob receives the state +$$ +d^2\text{Tr}_A\left(|\Psi_{00}\rangle\langle\Psi_{00}|_A\otimes\mathbb{I}_B\cdot|\psi\rangle\langle\psi|\otimes|\Psi_{00}\rangle\langle\Psi_{00}|\right)=|\psi\rangle\langle\psi|, +$$ +i.e. the state is perfectly teleported. What happens for the other measurement results? We need to calculate +$$ +d^2\text{Tr}_A\left(\left(X^iZ^j\otimes\mathbb{I}|\Psi_{00}\rangle\langle\Psi_{00}|Z^{-j}X^{-i}\otimes\mathbb{I}\right)_A\otimes\mathbb{I}_B\cdot|\psi\rangle\langle\psi|\otimes|\Psi_{00}\rangle\langle\Psi_{00}|\right) +$$ +But this is the same as +$$ +d^2\text{Tr}_A\left(|\Psi_{00}\rangle\langle\Psi_{00}|_A\otimes\mathbb{I}_B\cdot\left(Z^{-j}X^{-i}|\psi\rangle\langle\psi|X^iZ^j\right)\otimes|\Psi_{00}\rangle\langle\Psi_{00}|\right) +$$ +so it's as if we're teleporting the state $Z^{-j}X^{-i}|\psi\rangle$ and getting measurement result $(00)$. So, we know that Bob receives $Z^{-j}X^{-i}|\psi\rangle$, so when Alice sends Bob the 2-dit message of her measurement result $(ij)$, he can apply the correction $X^iZ^j$, and he's perfectly received $|\psi\rangle$, no matter what Alice's measurement result was.

+",1837,,1837,,06-03-2019 18:42,06-03-2019 18:42,,,,5,,,,CC BY-SA 4.0 +4994,1,,,12/18/2018 9:46,,4,1098,"

After installing qiskit-terra via pip (pip install qiskit), all python programs involving the line from qiskit import BasicAer fail to run.

+ +

Example:

+ +
 from qiskit import *
+
+ q = QuantumRegister(2)
+
+ c = ClassicalRegister(2)
+
+ qc = QuantumCircuit(q,c)
+
+ qc.h(q[0])
+
+<qiskit.extensions.standard.h.HGate object at 0x7f6a146ee7f0>
+
+ qc.cx(q[0],q[1])
+
+<qiskit.extensions.standard.cx.CnotGate object at 0x7f6a146ee940>
+
+qc.measure(q,c)
+
+<qiskit.circuit.instructionset.InstructionSet object at 0x7f6a146eea58>
+
+ backend_sim = Aer.get_backend('qasm_smulator')
+
+Traceback (most recent call last):
+  File ""<stdin>"", line 1, in <module>
+NameError: name 'Aer' is not defined
+
+",5353,,5353,,12/18/2018 9:59,12/18/2018 12:54,Module BasicAer not found,,1,1,,,,CC BY-SA 4.0 +4995,2,,1472,12/18/2018 9:56,,4,,"

Actually, after having researched the question over the last few months, the two answers (one above and one below) are correct, but we can build upon them to get something more up to date. +The first answer, however, relies on figures and data which are slightly obsolete, and its source is uncertain (it is impossible to know if the source is McKinsey or The Netherlands). We have updated these figures for a number of countries and also updated the way to understand the race and its funding, as static figures do not at all reflect what is happening in the world. +So, for example, in terms of government funding, according to available non-classified figures we would have China leading, followed by Germany and then the US. +

+ +

The picture above is just a screenshot (sorry for the definition). You can watch the dynamic mapping we came up with showing this state of play of the evolving race in the article - we recorded it as a short video to show evolution Mapping The Race for Quantum Computing - Quantum, AI and Geopolitics (3)

+ +

Yet, things are actually not that simple. +As underlined by JanVdA, indeed, as soon as one adds one of the giant IT companies, it becomes clear that they lead the race. We did the test with IBM and the amount of funding just completely dwarfs state funding in a quite impressive way (this is the fifth mapping in the article mentioned above). +We also foresee that all this is likely to be completely upset by the entry of venture capital, notably through mega tech funds such as Vision Fund (belonging to SoftBank, which is also the major shareholder of Alibaba). The overall amount of the fund is 100 billion USD, for digital and new tech, not only quantum... yet this is an awfully huge amount of money available. Interestingly, Saudi Arabia has invested 45 billion USD in this fund. Thus if we just play a rapid ""what if"" scenario and imagine that one third of Vision Fund is invested in Quantum Computing - i.e. 33 billion USD - this means that Saudi Arabia will have invested $14.85 bn in quantum tech... which puts it at the forefront of the race in terms of funding. True enough, there is a difference between yearly funding and venture capital investment for which we should also account. Nonetheless, when one knows that SoftBank plans for a second Vision Fund, then investors within these funds become really serious players in the Race to Quantum Computing.

+ +

Finally, it is also truly important to understand that the race has quite numerous specificities that need to be considered if one wants to truly assess who is at the top or not. Our work is a work in progress considering the amount of research needed. So, stay tuned. Furthermore, it is not because a lab publishes a lot of scientific papers, or because an actor spends a huge amount of money, that the still-necessary breakthroughs to create and engineer a universal quantum computer will take place in that country and/or in that lab. Thus, we should consider that the race is still young, and that it is conducive to surprises.

+",5352,,,,,12/18/2018 9:56,,,,0,,,,CC BY-SA 4.0 +4996,2,,4994,12/18/2018 10:49,,6,,"

What was Aer in 0.6 is being renamed BasicAer in 0.7. The Aer name will then be used for a larger and fancier simulation package.

+ +

Since the current stable version is 0.6.1, your pip install will have given you that version of Terra. So you can simply replace each instance of BasicAer with Aer for now.

+ +

If you want to get 0.7 already, you can install with

+ +
pip install git+https://github.com/Qiskit/qiskit-terra.git
+
+ +

but it should also be moved to stable very soon.

+",409,,409,,12/18/2018 12:54,12/18/2018 12:54,,,,2,,,,CC BY-SA 4.0 +4998,1,,,12/18/2018 14:06,,3,3347,"

The following message I got by running the qft.py code

+ +

Traceback (most recent call last):

+ +

File ""qft.py"", line 18, in

+ +
from qiskit.providers.ibmq import least_busy
+
+ +

ModuleNotFoundError: No module named qiskit.providers

+ +
+ +

This the code snippet (from the QISKit repository):

+ +

(I suppose that something was wrong with the installation process?)

+ +
# -*- coding: utf-8 -*-
+
+# Copyright 2017, IBM.
+#
+# This source code is licensed under the Apache License, Version 2.0 found in
+# the LICENSE.txt file in the root directory of this source tree.
+
+""""""
+Quantum Fourier Transform examples.
+Note: if you have only cloned the Qiskit repository but not
+used `pip install`, the examples only work from the root directory.
+""""""
+
+import math
+from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit
+from qiskit import execute, BasicAer, IBMQ
+from qiskit.providers.ibmq import least_busy
+
+
+###############################################################
+# make the qft
+###############################################################
+def input_state(circ, q, n):
+    """"""n-qubit input state for QFT that produces output 1.""""""
+    for j in range(n):
+        circ.h(q[j])
+        circ.u1(math.pi/float(2**(j)), q[j]).inverse()
+
+
+def qft(circ, q, n):
+    """"""n-qubit QFT on q in circ.""""""
+    for j in range(n):
+        for k in range(j):
+            circ.cu1(math.pi/float(2**(j-k)), q[j], q[k])
+        circ.h(q[j])
+
+
+q = QuantumRegister(5, ""q"")
+c = ClassicalRegister(5, ""c"")
+qft3 = QuantumCircuit(q, c, name=""qft3"")
+qft4 = QuantumCircuit(q, c, name=""qft4"")
+qft5 = QuantumCircuit(q, c, name=""qft5"")
+
+input_state(qft3, q, 3)
+qft3.barrier()
+qft(qft3, q, 3)
+qft3.barrier()
+for j in range(3):
+    qft3.measure(q[j], c[j])
+
+input_state(qft4, q, 4)
+qft4.barrier()
+qft(qft4, q, 4)
+qft4.barrier()
+for j in range(4):
+    qft4.measure(q[j], c[j])
+
+input_state(qft5, q, 5)
+qft5.barrier()
+qft(qft5, q, 5)
+qft5.barrier()
+for j in range(5):
+    qft5.measure(q[j], c[j])
+
+print(qft3)
+print(qft4)
+print(qft5)
+
+###############################################################
+# Set up the API and execute the program.
+###############################################################
+try:
+    IBMQ.load_accounts()
+except:
+    print(""""""WARNING: There's no connection with the API for remote backends.
+             Have you initialized a file with your personal token?
+             For now, there's only access to local simulator backends..."""""")
+
+print('Qasm simulator')
+sim_backend = BasicAer.get_backend('qasm_simulator')
+job = execute([qft3, qft4, qft5], sim_backend, shots=1024)
+result = job.result()
+print(result.get_counts(qft3))
+print(result.get_counts(qft4))
+print(result.get_counts(qft5))
+
+# Second version: real device
+least_busy_device = least_busy(IBMQ.backends(simulator=False,
+                                             filters=lambda x: x.configuration().n_qubits > 4))
+print(""Running on current least busy device: "", least_busy_device)
+job = execute([qft3, qft4, qft5], least_busy_device, shots=1024)
+result = job.result()
+print(result.get_counts(qft3))
+print(result.get_counts(qft4))
+print(result.get_counts(qft5))
+
+",5353,,26,,12/18/2018 14:31,1/18/2019 11:39,ModuleNotFoundError: No module named qiskit.providers,,1,0,,,,CC BY-SA 4.0 +5000,1,5004,,12/18/2018 14:46,,6,1319,"

How can you measure qubits in QuTiP?

+ +

As far as I have seen you can define a Hamiltonian and let it evolve in time. It is also possible to define a quantum circuit, however, measuring and running it is not possible.

+ +

Does anyone know how to do this?

+ +

A simple circuit could be

+ +
H q[0]
+CNOT q[0],q[1]
+Measure q[0], q[1]
+
+",2005,,26,,12/18/2018 14:49,04-02-2020 14:03,Measuring qubits in QuTiP,,4,0,,,,CC BY-SA 4.0 +5001,2,,4975,12/18/2018 21:14,,4,,"

Qubiter uses a CSD compiler for a Unitary matrix to a sequence of elementary operations tranformation

+ +

One setback is that qubiter needs extra packages so installing could be troublesome.

+",1773,,,,,12/18/2018 21:14,,,,0,,,,CC BY-SA 4.0 +5002,2,,4975,12/18/2018 22:44,,9,,"

Here's the circuit for your specific case:

+ +

+ +

I made it manually, by entering the matrix into Quirk, diagonalizing the matrix by adding operations, then simplifying the operations. It's not too hard to do by hand when all the operations are Clifford as in this case.

+",119,,,,,12/18/2018 22:44,,,,5,,,,CC BY-SA 4.0 +5003,2,,4975,12/19/2018 1:40,,3,,"

You can't directly build a gate from arbitrary matrices because custom gates need to be implemented using the built-in gates.

+ +

You have to decompose your matrix to known gates.

+ +

For a random two-qubit gate, there is two_qubit_kak:

+ +
+

two_qubit_kak (unitary_matrix, verify_gate_sequence=False)

+ +

Decompose a two-qubit gate over CNOT + SU(2) using the KAK decomposition.

+ +

Based on MATLAB implementation by David Gosset.

+ +

Computes a sequence of 10 single and two qubit gates, including 3 CNOTs, which multiply to U, including global phase. Uses Vatan and Williams optimal two-qubit circuit (quant-ph/0308006v3). The decomposition algorithm which achieves this is explained well in Drury and Love, 0806.4015.

+
+",2214,,26,,05-01-2019 10:01,05-01-2019 10:01,,,,1,,,,CC BY-SA 4.0 +5004,2,,5000,12/19/2018 4:23,,2,,"

QuTiP is not really meant for this I think. As said on the home page :

+ +
+

QuTiP is open-source software for simulating the dynamics of open quantum systems.

+
+ +

Simulating dynamics of open quantum systems by definition means you are interested in the quantum state as a result of your algorithm.

+ +

I tried looking at the Notebook examples provided in this Github but could not find measurement examples anywhere. +You do have the possibility to get expectation values though (see this notebook).

+",4127,,4127,,12/19/2018 4:33,12/19/2018 4:33,,,,0,,,,CC BY-SA 4.0 +5005,1,5008,,12/19/2018 5:24,,8,1714,"

I was recently watching a talk by Urmila Mahadev on ""Classical Verification of Quantum Computations"" (see this). I am not new to quantum computation; I just have a familiarity with the qubit and some parts of quantum mechanics. I did not even get the meaning of the simulation. I am guessing it means that, given an encoding of a quantum device together with its input, the classical device will take this encoding and produce the results. The only thing which I have understood is that there may be many states which the quantum device can go through on a particular input.

+ +

Question: Why is it hard to simulate a quantum device with a classical device?

+ +

Please note that I am not sure what is meant by ""hard"". Is it time-wise or space-wise? The meaning of ""simulation"" is also not clear to me.

+",5361,,,,,6/15/2021 10:34,Why it is hard to simulate a quantum device by a classical devices?,,2,1,,,,CC BY-SA 4.0 +5006,1,5024,,12/19/2018 6:29,,7,652,"

I have read somewhere / heard that the set of all states that have non-negative conditional Von Neumann entropy forms a convex set. Is this true? Is there a proof for it?

+ +

Can anything be said about the reverse - set of all states that have negative conditional Von Neumann entropy?

+",2832,,10480,,04-05-2021 17:43,04-05-2021 17:43,Is the set of all states with negative conditional Von Neumann entropy convex?,,2,0,,,,CC BY-SA 4.0 +5007,1,6710,,12/19/2018 7:08,,9,413,"

This is how I think about classical relative entropy: There is a variable that has distribution P, that is, outcome $i$ has probability $p_i$ of occurring, but someone mistakes it to be of distribution Q instead, so when outcome $i$ occurs, instead of being $-log(p_i)$ surprised, they are $-log(q_i)$ surprised (or gain said amount of information).

+ +

Now someone who knows both the distributions is calculating the relative Shannon entropy. The expectation value of their own surprise is $-\Sigma p_i log(p_i)$, and they know that the mistaken person's probability of being $-log(q_i)$ surprised is $p_i$, so the expectation value of that person's surprise is $-\Sigma p_i log({q_i})$. The difference is $\Sigma p_i log(p_i) - \Sigma p_i log(q_i)$, which is the classical relative entropy.

+ +

For a given state, the Von Neumann entropy is the Shannon entropy minimised over all possible measurement bases. Since in the eigenbasis the eigenvalues are the probabilities, and both eigenvalues and trace are basis invariant, we can write this as $-\Sigma \lambda _i log(\lambda_i)$, which is also equal to $-Tr(\rho log( \rho ))$.

+ +

Relative Von Neumann entropy is defined as follows: +$$ Tr(\rho log(\rho)) - Tr(\rho log (\sigma))$$

+ +

The first term is understandable, but by analogy to the classical relative entropy, assuming that person Q is measuring in the sigma eigenbasis, let's call it $\{| \sigma_i \rangle\}$, the second term should reduce to $p^{'}_1 log (q_1) + p^{'}_2 log(q_2) + \ldots$, where $p^{'}_i$ is the actual probability of the state $\rho$ landing on $| \sigma_i \rangle$. The log part is taken care of, but I'm not sure how multiplying and tracing out will give this result.
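For what it's worth, the reduction in question can be checked numerically: writing $\sigma = \Sigma_i q_i |\sigma_i\rangle\langle\sigma_i|$ gives $log(\sigma) = \Sigma_i log(q_i)|\sigma_i\rangle\langle\sigma_i|$, so $Tr(\rho\, log(\sigma)) = \Sigma_i \langle\sigma_i|\rho|\sigma_i\rangle log(q_i) = \Sigma_i p^{'}_i log(q_i)$. (The two density matrices below are arbitrary examples.)

```python
import numpy as np

# sigma = sum_i q_i |s_i><s_i| gives log(sigma) = sum_i log(q_i) |s_i><s_i|,
# hence Tr(rho log sigma) = sum_i <s_i|rho|s_i> log(q_i) = sum_i p'_i log(q_i).
rho = np.array([[0.6, 0.2], [0.2, 0.4]])
sigma = np.array([[0.7, 0.1j], [-0.1j, 0.3]])

q, vecs = np.linalg.eigh(sigma)                   # eigenvalues q_i, eigenvectors |s_i>
log_sigma = vecs @ np.diag(np.log(q)) @ vecs.conj().T

lhs = np.trace(rho @ log_sigma).real              # Tr(rho log sigma)
p_prime = np.array([np.vdot(v, rho @ v).real for v in vecs.T])
rhs = np.sum(p_prime * np.log(q))                 # sum_i p'_i log(q_i)

print(np.isclose(lhs, rhs))   # -> True
```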

+ +

If there's a better way to understand relative Von Neumann entropy, that's welcome too.

+",2832,,10480,,3/20/2021 23:25,3/20/2021 23:25,Building Intuition for Relative Von Neumann Entropy,,1,0,,,,CC BY-SA 4.0 +5008,2,,5005,12/19/2018 8:54,,5,,"

The first thing to understand is how quantum operations (i.e. quantum gates) and quantum states are mathematically represented:

+
    +
  • Quantum operations on $n$ qubits are unitary matrices of size $2^n \times 2^n$.
  • +
  • Quantum states on $n$ qubits are complex vectors of size $2^n$.
  • +
+

If you are not 100% sure of theses numbers, you can read more about it in:

+
    +
  • (Almost?) every book on quantum computing. For example, Nielsen & Chuang wrote about that in the very beginning of their book.
  • +
  • These exponents in base $2$ are due to the way states are composed with tensor product. You can read a little bit more about it here.
  • +
+

Once you are convinced that the numbers I wrote above are valid, you have your answer:

+
+

Simulating a quantum device by a classical device is limited by the available RAM memory on the classical computer.

+
+

To elaborate a little more, think about how a classical computer would simulate a quantum one. One thing that the classical computer will definitely have to store is the current quantum state of the quantum machine it is simulating. As I wrote in the beginning of my answer, a quantum state is a vector of $2^n$ complex numbers. Now let's compute (in the following, byte == octet):

+
    +
  1. The size of a floating-point number is 4 or 8 octets (depending on the precision, i.e. float or double, and assuming a non-exotic classical computer).
  2. +
  3. A complex number is represented by 2 floating-point numbers: one for the real-part and the second for the imaginary-part. So it needs 8 or 16 octets.
  4. +
  5. The quantum state needs $2^n$ complex numbers, i.e. $f_{\text{single}}(n) = 2^{3+n}$ octets if you use single precision or $f_{\text{double}}(n) =2^{4+n}$ octets if you use double precision.
  6. +
+

Say you want to simulate an n-qubit quantum computer with your classical computer:

+
    +
  • For $n = 10$ you will need at least $f_{\text{single}}(10) = 2^{13} = 8192\, \text{o} = 8\, \text{kio}$. Every classical computer should be able to do this.
  • 
  • For $n = 20$ you will need at least $f_{\text{single}}(20) = 2^{23} = 8388608\, \text{o} = 8\, \text{Mio}$. Every classical computer should be able to do this.
  • 
  • For $n = 30$ you will need at least $f_{\text{single}}(30) = 2^{33} = 8589934592\, \text{o} = 8\, \text{Gio}$. A publicly-accessible laptop is capable of doing it, but older computers may not have a sufficient amount of RAM.
  • 
  • For $n = 40$ you will need at least $f_{\text{single}}(40) = 2^{43} = 8796093022208\, \text{o} = 8\, \text{Tio}$. This is definitely out of reach for publicly-accessible machines; you will need access to a computing server.
  • 
  • For $n = 50$ you will need at least $f_{\text{single}}(50) = 2^{53} = 9007199254740992\, \text{o} = 8\, \text{Pio}$. Even Summit, the TOP 1 computer (in terms of FLOPS), cannot simulate this as it ""only"" has $2.8\, \text{Pio}$ of RAM.
  • +
+

Of course some clever simulation algorithms are capable of using the specific structure of some quantum programs in order to reduce the needed amount of memory. But for a generic quantum program, this is the quantity of RAM you will need.
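The arithmetic above can be condensed into a small helper (the function name is my own choice):

```python
# RAM (in octets) needed to store the state vector of an n-qubit register.
# The default of 8 octets per amplitude corresponds to single-precision
# complex numbers; pass 16 for double precision.
def state_vector_bytes(n, octets_per_amplitude=8):
    return octets_per_amplitude * 2 ** n

for n in (10, 20, 30, 40, 50):
    print(n, state_vector_bytes(n))   # 8 kio, 8 Mio, 8 Gio, 8 Tio, 8 Pio
```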

+
+

Note that I did not speak about computing power. The cost in terms of floating-point operations is generally not a limitation because most quantum circuits are a succession of sparse quantum operations (i.e. they are represented by a sparse matrix) and matrix-vector multiplication with a sparse matrix is quite cheap (depending on the sparseness of the matrix).
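As a side note on why a single gate is cheap to apply in simulation, here is a sketch (the helper name is my own choice) that applies a one-qubit gate to an n-qubit state vector by reshaping, without ever forming the full $2^n \times 2^n$ matrix — this is the standard trick simulators use:

```python
import numpy as np

# Apply a single-qubit gate to qubit k of an n-qubit state vector by
# reshaping the vector into an n-index tensor and contracting one axis.
def apply_1q_gate(state, gate, k, n):
    psi = state.reshape([2] * n)
    psi = np.tensordot(gate, psi, axes=([1], [k]))   # contract gate with axis k
    psi = np.moveaxis(psi, 0, k)                     # restore the axis ordering
    return psi.reshape(-1)

n = 10
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
state = np.zeros(2 ** n)
state[0] = 1.0                                       # |00...0>
state = apply_1q_gate(state, H, 3, n)
print(np.isclose(np.linalg.norm(state), 1.0))        # -> True (unitarity preserved)
```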

+

Nevertheless, note that you may have a $1$-qubit quantum program that contains $10^{30}$ quantum gates. In this case, the simulation algorithm will be time-wise limited, not memory-wise.

+",1386,,1386,,6/15/2021 10:34,6/15/2021 10:34,,,,6,,,,CC BY-SA 4.0 +5009,2,,5005,12/19/2018 10:21,,3,,"

The simulation of a quantum computation (some people choose to use the term 'emulation' in this context to disambiguate from a different type of simulation) is when one tries to recreate the calculation that you want a quantum computer to perform, but on a classical computer.

+ +

When you're simulating a particular algorithm, there are many different problem instances with problem sizes $n$ (this is usually the number of bits required to specify the problem instance). We say a problem is hard if the time that it takes to run grows quicker than any polynomial in $n$.

+ +

Now, it must be emphasised that we don't know that simulating a quantum computer is hard. It's just that we don't know how to do it. And we've tried quite hard. For example, we believe that quantum computers can perform some classical computations that are hard at least as strongly as we believe that there's no efficient classical algorithm for factoring large composite integers (because there's a quantum algorithm that achieves that in polynomial time).

+ +

If we could prove that classical computers can simulate quantum computers, then there wouldn't be nearly so much interest in building a quantum computer. That said, simulation is a polynomial overhead equivalence. Quantum computers could still be much faster, which might be desirable in some contexts.

+ +

So, your question effectively boils down to ""where do quantum computers get their power from""? Variations on this theme have been asked a number of times already on this site. The way that I like to think about it is to recall that classical computers, no matter how complex, are built out of the same fundamental set of gates (indeed, one gate such as NAND is sufficient). If somebody suddenly comes along with an extra gate that cannot be built out of the existing gates, it suddenly gives you the potential to use this gate to improve existing algorithms. Sometimes it'll help, sometimes it won't.

+ +

I would just like to point out that one aspect which is not the source of power is the exponential state space. Probabilistic classical computations also have an exponentially large state space, and yet we can still perform them. (Of course, the difference is about how we deal with probabilities. Quantum probabilities can interfere, which means that we have to keep all the paths ""alive"" as we simulate, rather than just sampling individual paths. But this is a much more subtle issue.)

+",1837,,,,,12/19/2018 10:21,,,,0,,,,CC BY-SA 4.0 +5010,2,,4998,12/19/2018 10:22,,3,,"

This issue was due to you taking example code using the master branch, and running it using the stable version (which is installed when you pip install qiskit).

+ +

Since this question was asked, Qiskit has been updated. The program given in the question will now run fine with the stable version of Qiskit.

+ +

If you ever want to run code from the master branch, you can pip install the relevant version of Qiskit using

+ +
pip install git+https://github.com/Qiskit/qiskit-terra.git
+
+",409,,409,,1/18/2019 11:39,1/18/2019 11:39,,,,0,,,,CC BY-SA 4.0 +5011,2,,4774,12/19/2018 11:10,,6,,"

Single-qubit unitaries are just 3D rotations, multiplied by a phase. So in order to find the actual angles, you can resort to the theory of rotation matrices, in particular to Euler's rotation theorem, which states that any rotation is a composition of 3 rotations (the theorem proof is constructive, so you get the actual angles).

+",2558,,,,,12/19/2018 11:10,,,,0,,,,CC BY-SA 4.0 +5012,2,,4975,12/19/2018 12:43,,5,,"

I don't think Qiskit has this simulation feature. You have to decompose it indeed.

+ +

However, there is another way to solve your problem. +To check if a quantum circuit (that you can submit in Qiskit) corresponds to a unitary matrix, you can use the unitary_simulator backend.

+ +
# Run the quantum circuit on a unitary simulator backend
+backend = Aer.get_backend('unitary_simulator')
+job = execute(circ, backend)
+result = job.result()
+print(np.around(result.get_unitary(circ), 3))
+
+ +

This will print the unitary matrix that your circuit represents. And you can compare to yours.

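If you want that comparison to ignore an irrelevant global phase, a small helper like the following can be used (the helper name is mine, not part of Qiskit):

```python
import numpy as np

def equal_up_to_global_phase(u, v, atol=1e-8):
    # True iff u == exp(i*phi) * v for some real phi (elementwise, within atol)
    u, v = np.asarray(u, dtype=complex), np.asarray(v, dtype=complex)
    # Fix the candidate phase using the largest-magnitude entry of u
    idx = np.unravel_index(np.argmax(np.abs(u)), u.shape)
    if abs(v[idx]) < atol:
        return False
    phase = u[idx] / v[idx]
    return bool(np.isclose(abs(phase), 1, atol=atol)
                and np.allclose(u, phase * v, atol=atol))
```

For example, `equal_up_to_global_phase(result.get_unitary(circ), my_matrix)` compares the simulated unitary against yours.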
+",4127,,,,,12/19/2018 12:43,,,,0,,,,CC BY-SA 4.0 +5013,1,,,12/19/2018 14:13,,3,956,"

Can I decompose a 4-qubit Toffoli gate into two-qubit CNOT gates without an ancillary qubit?

+",4131,,26,,12/23/2018 7:53,12/23/2018 7:53,How to decompose 4 qubits Toffoli-gate into two-qubits CNOT gate?,,1,0,,,,CC BY-SA 4.0 +5014,2,,5006,12/19/2018 14:16,,9,,"

Geometric characterization (as any other characterization) of subsets of the quantum state space in relation with their locality and entanglement properties becomes very complicated as the number of qubits rises. The geometry of the space of negative conditional entropy two qubit states, which are also locally maximally mixed (Weyl states) is known; it is reviewed by Friis, Bulusu and Bertlmann. All the figures and the data given in this answer are taken from this review.

+ +

Their result is described in the following figure of the Weyl tetrahedron, which can be summarized as follows:

+ +

+ +
    +
  1. The corners of the tetrahedron are the four Bell states.
  2. +
  3. The center of the tetrahedron is the maximally mixed two-qubit state.
  4. +
  5. The maximally mixed state is surrounded by what is called the Kuś-Życzkowski ball of maximally separable states whose unitary translates are also separable.
  6. +
  7. The surface of the ball touches the double pyramid of separable states in the central points of its faces.
  8. +
  9. The dark-yellow surface outside the double pyramid encloses the local states which cannot violate the CHSH inequality.
  10. +
  11. The outer red surface encloses all the states with positive conditional entropy.
  12. +
  13. The solid red line parametrizes a family of Werner states which penetrates through the whole Weyl tetrahedron. The cross section of this line in the tetrahedron is given by:
  14. +
+ +

+",4263,,,,,12/19/2018 14:16,,,,8,,,,CC BY-SA 4.0 +5015,2,,5013,12/19/2018 14:44,,3,,"

Yes, it is possible.

+ +

A circuit is given e.g. in this answer to a closely related question.

+",491,,,,,12/19/2018 14:44,,,,0,,,,CC BY-SA 4.0 +5016,2,,4373,12/19/2018 16:11,,4,,"

Here are a couple of contributions related to your question:

+ +

1- Very recently, Chris Ferrie created an open-source card game based on a toy version of quantum mechanics, called $<B|racket|S>$.

+ +

2- The company Phase Space Computing markets electronic kits that simulate quantum gates and simple quantum algorithms.

+",2558,,,,,12/19/2018 16:11,,,,0,,,,CC BY-SA 4.0 +5017,1,5018,,12/19/2018 18:52,,3,544,"

This may be a silly question, but at the start of Shor's algorithm to factorise a number $N$ we need to find a number $n$ such that +$N^{2} \leq 2^{n} \leq 2N^{2}$. +Why does such a number $n$ exist for any $N$?

+",5328,,55,,09-03-2021 11:35,09-03-2021 11:35,"In Shor's factorization algorithm for $N$, why can we always find $n$ such that $N^2\le 2^n\le 2N^2$?",,2,0,,,,CC BY-SA 4.0 +5018,2,,5017,12/19/2018 19:04,,7,,"

Let's represent $N^2$ as $2^a+b$, where $2^a$ is the greatest power of 2 that does not exceed $N^2$, and $b \ge 0$ (which is always possible to do: $a$ is one less than the number of bits in the binary representation of $N^2$). Then $n = a+1$:

+ +
    +
  • $N^2 \le 2^{a+1}$, because otherwise $2^a$ would not be the greatest power of 2 not exceeding $N^2$.
  • +
  • $2^{a+1} \le 2N^2 = 2(2^a+b) = 2^{a+1} + 2b$, because $b \ge 0$.
  • +
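As a quick sanity check of this argument, in Python the choice $n=a+1$ is just the bit length of $N^2-1$, i.e. $n=\lceil \log_2 N^2 \rceil$ (the function name below is my own):

```python
def shor_n(N):
    # Smallest n with 2**n >= N**2, i.e. n = ceil(log2(N^2));
    # for integers, (N*N - 1).bit_length() computes exactly that.
    return (N * N - 1).bit_length()

n = shor_n(15)   # N = 15: N^2 = 225, so n = 8 and 2**8 = 256 lies in [225, 450]
```

Sweeping over many values of $N$ confirms $N^2 \le 2^n \le 2N^2$ always holds.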
+",2879,,,,,12/19/2018 19:04,,,,0,,,,CC BY-SA 4.0 +5019,2,,5017,12/19/2018 19:56,,3,,"

If $2^k$ is less than $x$, you can increase $k$ by 1 without exceeding $2x$. Because if $2^k < x$ then $2 \cdot 2^k < 2 \cdot x$ and so $2^{k+1} < 2x$. If you start at $k=0$ and keep incrementing, $2^k$ will eventually exceed $x$, and at that exact moment you stop; knowing that you didn't also exceed $2x$ and so have met both criteria.

+ +

Or you can just use a closed-form definition:

+ +

$k = \lceil \log_2 x \rceil$

+ +

Or, in the original variables:

+ +

$n = \lceil \log_2 N^2 \rceil$

+",119,,,,,12/19/2018 19:56,,,,0,,,,CC BY-SA 4.0 +5020,1,5021,,12/19/2018 20:20,,3,1689,"

Let me take Grover's algorithm as an example. In most cases, Grover's algorithm is able to yield with a high probability the desired term of the superposition. When the superposition has more than 4 terms, there's a small chance we will not obtain the desired term of the superposition, in which case we can repeat the procedure and measure it again, until we really get the desired result.

+ +

Although the probability of not getting the desired result decreases exponentially, it is technically not guaranteed that one will ever get the desired measurement. Therefore, we cannot prove that Grover's algorithm is an algorithm because we cannot prove it terminates with the correct answer in a finite number of steps. + (Otherwise, what part of the definition of ""algorithm"" am I missing here?)

+ +

We can however define Grover's algorithm as a Las Vegas algorithm because if we do not measure the desired result, we could produce a ""failure"" result, satisfying therefore the definition of ""Las Vegas algorithm"".

+ +

Surely a quantum computer is able to calculate everything a classical computer can, so quantum computers can execute algorithms in the formal sense of the word. But is there an algorithm (not a Las Vegas algorithm) that uses true quantum features like superposition and entanglement, always producing the right answer in a finite number of steps? That's what I'm after. I appreciate any light in this direction.

+",1589,,1589,,12/19/2018 20:26,12/19/2018 21:02,Is there any (really) quantum procedure that's an algorithm and not a Las Vegas algorithm?,,3,3,,,,CC BY-SA 4.0 +5021,2,,5020,12/19/2018 20:45,,9,,"

It sounds like you're looking for algorithms that succeed deterministically with probability 1, instead of probabilistic algorithms that succeed with probability bounded away from 1/2 by a finite amount, say 2/3.

+ +

Exact is the keyword for deterministic quantum algorithms, such as in this paper Exact quantum algorithms have advantage for almost all Boolean functions by Andris Ambainis, Jozef Gruska, Shenggen Zheng that answers your question in the affirmative.

+",5370,,,,,12/19/2018 20:45,,,,2,,,,CC BY-SA 4.0 +5022,2,,5020,12/19/2018 20:52,,8,,"
+

Although the probability of not getting the desired result decreases exponentially, it is technically not guaranteed that one will ever get the desired measurement. Therefore, we cannot prove that Grover's algorithm is an algorithm because we cannot prove it terminates with the correct answer in a finite number of steps. (Otherwise, what part of the definition of ""algorithm"" I'm missing here?)

+
+ +

It is important that an 'algorithm' terminates, but it is possible to consider slightly more flexible final conditions than you are considering when it does terminate. This is true of classical algorithms as well as quantum algorithms.

+ +

A Las Vegas algorithm is one which can 'succeed' or 'fail', and which produces a correct result whenever it 'succeeds' (so that it is meaningful to say that it has succeeded). We look for the probability of failure to be low, ideally — and this can be achieved in principle by trying again if you don't succeed (though in some cases you may end up trying many times).

+ +

This idea is taken seriously enough that there is a Las Vegas version of the complexity class P, known as ZPP. (It does not do to take complexity class names too seriously, but the acronym stands for ""zero error probabilistic polynomial-time"", as it is the class of decision problems solvable in polynomial time, using randomness, and without error — allowing however for a bounded probability of failing to produce a YES/NO answer at all.) This is a class between P and BPP.

+ +

There is an analogous class, ZQP, of problems solvable with an idealised quantum computer in polynomial time with zero error, which happens to contain integer factorisation (due to a combination of Shor's algorithm, the fact that verifying integer multiplication is in P, and the fact that primality testing is in ZPP and indeed also in P). These of course are analogous to 'Las Vegas' algorithms in the sense that they are zero error (though 'Las Vegas' would usually only be used for classical randomised algorithms) — and in particular, they are indeed algorithms.

+",124,,,,,12/19/2018 20:52,,,,1,,,,CC BY-SA 4.0 +5023,2,,5020,12/19/2018 21:02,,5,,"

Firstly, I would say that when you embrace quantum computing, you are considering probabilistic rather than deterministic programming, if I may say so (so your definition of algorithm depends on the kind of programming/computing you are doing). It will still be an algorithm though, because you create a circuit (which is what we call an algorithm in quantum computing with the circuit model). A quantum algorithm, through its quantum operations, changes the quantum state of a circuit (and this is the definition of quantum computing). Briefly said, this is just a different kind of computing.

+ +

If you are looking for deterministic output though from a quantum algorithm, we can take quantum circuits for arithmetic operations like adder circuits. You have also the common quantum phase estimation in the case you input one eigenvector of the unitary operation of interest. In that case, it outputs the phase (approximately) associated with the eigenvector provided exactly.

+",4127,,,,,12/19/2018 21:02,,,,0,,,,CC BY-SA 4.0 +5024,2,,5006,12/20/2018 13:05,,10,,"

The conditional von Neumann entropy is a concave function: if $\rho$ and $\sigma$ are states of a pair of registers $(\mathsf{X},\mathsf{Y})$ and $\lambda\in[0,1]$ is a real number, then +$$ +\mathrm{H}(\mathsf{X}|\mathsf{Y})_{\lambda\rho + (1-\lambda)\sigma} \geq \lambda\, \mathrm{H}(\mathsf{X}|\mathsf{Y})_{\rho} + (1-\lambda)\,\mathrm{H}(\mathsf{X}|\mathsf{Y})_{\sigma}. +$$ +It follows that the set of all states having nonnegative conditional von Neumann entropy is convex. This is true for $\mathsf{X}$ and $\mathsf{Y}$ being registers with arbitrary dimension.

+ +

Some things can be said about the set of all states having negative conditional von Neumann entropy. Every such state is entangled, for instance, but it is certainly not a convex set.

+ +

One way to prove that the conditional von Neumann entropy is concave is as follows. Consider the state +$$ +\lambda\, \rho \otimes |0\rangle\langle 0| + (1-\lambda)\, \sigma \otimes |1\rangle\langle 1| +$$ +of three registers $(\mathsf{X},\mathsf{Y},\mathsf{Z})$, where $\mathsf{Z}$ is a new, single qubit register that is being introduced for the sake of the proof. By the strong subadditivity of von Neumann entropy we have +$$ +\mathrm{H}(\mathsf{X},\mathsf{Y},\mathsf{Z}) + \mathrm{H}(\mathsf{Y}) \leq \mathrm{H}(\mathsf{X},\mathsf{Y}) + \mathrm{H}(\mathsf{Y},\mathsf{Z}). +$$ +If you evaluate each of the entropies in this inequality for the state above, and then rearrange using $\mathrm{H}(\mathsf{X}|\mathsf{Y}) = \mathrm{H}(\mathsf{X},\mathsf{Y})-\mathrm{H}(\mathsf{Y})$, you should get the concavity of conditional von Neumann entropy.

+ +

Of course, the strong subadditivity of von Neumann entropy is far from trivial to prove, but there are multiple known proofs, and if you search you will easily find one.

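The concavity inequality itself is easy to sanity-check numerically for two qubits (a sketch; here $\mathrm{H}(\mathsf{X}|\mathsf{Y})=\mathrm{H}(\mathsf{X},\mathsf{Y})-\mathrm{H}(\mathsf{Y})$ is computed from eigenvalues, with $\mathsf{Y}$ the second qubit):

```python
import numpy as np

rng = np.random.default_rng(7)

def rand_dm(d):
    # Random full-rank density matrix via A A^dagger / Tr
    a = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    m = a @ a.conj().T
    return m / np.trace(m)

def S(rho):
    # von Neumann entropy in bits, from the eigenvalues
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-(w * np.log2(w)).sum())

def cond_S(rho):
    # H(X|Y) = S(XY) - S(Y) for a two-qubit state, Y = second qubit
    rho_y = rho.reshape(2, 2, 2, 2).trace(axis1=0, axis2=2)
    return S(rho) - S(rho_y)

rho, sigma = rand_dm(4), rand_dm(4)
lam = 0.3
gap = cond_S(lam * rho + (1 - lam) * sigma) \
      - (lam * cond_S(rho) + (1 - lam) * cond_S(sigma))   # >= 0 by concavity
```

A maximally entangled state gives $\mathrm{H}(\mathsf{X}|\mathsf{Y})=-1$, while the maximally mixed two-qubit state gives $+1$, illustrating the negative-conditional-entropy states mentioned above.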
+",1764,,26,,5/13/2019 19:45,5/13/2019 19:45,,,,1,,,,CC BY-SA 4.0 +5025,1,5035,,12/20/2018 14:35,,4,273,"

On a classical computer, I want to simulate a learning-based quantum state tomography of a qubit. We can formulate it as finding a parametrized unitary evolution that takes the unknown pure state to a known state. We measure in the basis of the known state (see 1). If we get the known state, we proceed; otherwise we change the parameters of the unitary evolution slightly, until we successfully obtain a large number of projections onto the known state. There are a lot of simulators listed on the Quantiki website and also in some previous posts here. Does anyone know a simple simulator to do that?

+",5377,,26,,12/31/2018 16:03,12/31/2018 16:03,Numerical quantum state tomography simulator,,1,2,,,,CC BY-SA 4.0 +5026,1,,,12/20/2018 19:20,,4,698,"

Suppose one of my functions creates some ancillary qubits and resets them. +Then, another function wants to create a larger number of ancillary qubits. So, I'd like to reuse the first set of qubits and add the difference to them. Is there any efficient way to achieve this?

+ +
def function1(qc):
+    r = 3
+    ancillas = QuantumRegister(r, 'ancillas')
+    qc.add(sum_q)
+    # use the ancillas
+    return ancillas
+
+def function2(qc, partial_ancillas):
+    r = 5
+    diff = r - len(partial_ancillas)
+    if diff > 0:
+        # something like ancillas = partial_ancillas.add(diff)
+        # or maybe ancillas = partial_ancillas + QuantumRegister(diff)
+    else:
+        ancillas = partial_ancillas
+    return ancillas
+
+qc = QuantumCircuit()
+anc = function1(qc)
+anc = function2(qc, anc)
+
+",4848,,26,,12/21/2018 7:15,12/21/2018 15:01,Qiskit - expand and/or merge registers,,1,1,,,,CC BY-SA 4.0 +5027,1,,,12/21/2018 12:46,,5,115,"

I want to find out what values $|u\rangle$ and $|v\rangle$ can take if I want to write $$\frac{1}{\sqrt 2} (|00\rangle + |11\rangle)$$ as $$\frac{1}{\sqrt 2} (|uu\rangle + |vv\rangle).$$

+ +

Say +$$|u\rangle = a|0\rangle + b|1\rangle$$

+ +

$$|v\rangle = c|0\rangle + d|1\rangle.$$

+ +

Now, +$$\frac{1}{\sqrt 2} (|uu\rangle + |vv\rangle)$$

+ +

= $$(a^2 + b^2)|00\rangle + (ab + cd)(|01\rangle + |10\rangle) + (c^2 + d^2)|11\rangle.$$

+ +

We have

+ +

$$(a^2 + b^2)e^{i\theta} = 1$$

+ +

$$(c^2 + d^2)e^{i\theta} = 1$$

+ +

(for the same $\theta$)

+ +

$$ab + cd = 0$$

+ +

We also know that:

+ +

$$|a|^2 + |b|^2 = 1 \implies a^*a + b^*b = 1$$

+ +

$$|c|^2 + |d|^2 = 1 \implies c^*c + d^*d = 1$$

+ +

How do I find the relation between $a, b, c, d$ as rigorously as possible?

+",2832,,26,,12/21/2018 13:00,12/21/2018 14:37,Ways in which $\frac{1}{\sqrt 2} (|00\rangle + |11\rangle)$ can be expressed as $\frac{1}{\sqrt 2} (|uu\rangle + |vv\rangle)$,,2,0,,,,CC BY-SA 4.0 +5028,1,,,12/21/2018 12:52,,1,292,"

When I use Qiskit to do quantum computing, I run into a problem. +When running the code, I get:

+ +
   {""error"":{""name"":""Error"",""status"":401,""message"":""Authorization Required"",
+   ""statusCode"":401,""code"":""AUTHORIZATION_REQUIRED""}}
+
+ +

It seems that I failed to authorize my account. I don't know how to get authorization. Can anyone help me?

+",4763,,26,,12/21/2018 12:55,11-05-2019 03:33,Qiskit - Authorization Required error,,1,2,,,,CC BY-SA 4.0 +5029,2,,5027,12/21/2018 13:39,,2,,"

I think the clearest way to specify this is $|u\rangle,|v\rangle\in\mathbb{R}^2$ such that $\langle u|v\rangle=0$ (up to the same global phase shared by both states).

+ +

To see how this corresponds to what you wrote: +We start with $b^2=e^{-i\theta}-a^2$ and take the mod-square: +$$ +|b|^4=1+|a|^4-a^2e^{i\theta}-{a^*}^2e^{-i\theta} +$$ +and we can compare this to the square of the normalisation condition: +$$ +|b|^4=1+|a|^4-2|a|^2 +$$ +Hence, we require +$$ +a^2e^{i\theta}+{a^*}^2e^{-i\theta}=2|a|^2, +$$ +which must mean that $a=|a|e^{-i\theta/2}$. Putting this back in $b^2=e^{-i\theta}-a^2$ gives that $b=e^{-i\theta/2}|b|$. Hence, we might as well take $a,b\in\mathbb{R}$, with $\theta/2$ just being a global phase. We can apply an identical argument to $c,d$.

+ +

Now that we know $a,b,c,d$ are real, we see that +$\langle u|v\rangle=ab+cd=0$, +i.e. $|u\rangle$ and $|v\rangle$ are orthogonal.

+ +

Actually, I should probably point out that you're presupposing that $\langle u|v\rangle=0$ because otherwise your state $(|uu\rangle+|vv\rangle)/\sqrt{2}$ wouldn't be normalised.

+ +

I've also just noticed that you've switched somewhere in the middle between b being $\langle 1|u\rangle$ and $\langle 0|v\rangle$. I don't think that affects the calculation, but it would be better to have it consistent!

+",1837,,1837,,12/21/2018 14:37,12/21/2018 14:37,,,,0,,,,CC BY-SA 4.0 +5030,2,,4528,12/21/2018 13:59,,3,,"

You may also want to check for:

+ +
    +
  1. state dependent deterministic cloners which clone with a better fidelity when input state comes from a known ensemble.
    +Ref: Bruss et al., PRA 57, 2368 (1997)
  2. +
  3. probabilistic cloners which clone with unit fidelity but with less than unity success probability
  4. +
  5. asymmetric cloners where the outputs have cloned with different fidelities
  6. +
  7. coherent state cloning machines in infinite dimensional Hilbert space picture which have better optimal fidelity than those for discrete variables in finite dimensions.
  8. +
+",5392,,11,,12/26/2018 17:28,12/26/2018 17:28,,,,0,,,,CC BY-SA 4.0 +5031,2,,5027,12/21/2018 14:29,,2,,"

The state $|\omega\rangle=\tfrac{1}{\sqrt{2}}(|00\rangle+|11\rangle)$ is invariant under transformations of the form $U\otimes \bar{U}$, with $U$ unitary (or more generally, $X\otimes {\bar X^{-1}}$): +$$ +|\omega\rangle=(U\otimes \bar U)|\omega\rangle\ . +$$ +Thus, +\begin{align} +|\omega\rangle &= \tfrac{1}{\sqrt{2}}(U|0\rangle\otimes \bar U|0\rangle+U|1\rangle\otimes \bar U|1\rangle) +\\ +&= \tfrac{1}{\sqrt{2}}(|u\rangle\otimes |\bar u\rangle+|v\rangle\otimes |\bar v\rangle)\ , +\end{align} +with $|u\rangle=U|0\rangle$, $\bar u=\bar U|0\rangle$, etc. Thus, if you want $|\bar u\rangle=|u\rangle$, $|\bar v\rangle=|v\rangle$, you need to choose $U$ real. In that case, +\begin{align} +|u\rangle = \cos\phi|0\rangle + \sin\phi|1\rangle\ , +|v\rangle = -\sin\phi|0\rangle + \cos\phi|1\rangle\ . +\end{align}

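This family is easy to verify numerically for any real angle $\phi$ (a quick numpy check):

```python
import numpy as np

phi = 0.7   # any real angle works
u = np.array([np.cos(phi), np.sin(phi)])
v = np.array([-np.sin(phi), np.cos(phi)])
# |uu> + |vv> is the flattening of u u^T + v v^T, which equals the identity
# for any real orthonormal pair, so the Bell state is recovered exactly.
state = (np.kron(u, u) + np.kron(v, v)) / np.sqrt(2)
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
```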
+",491,,,,,12/21/2018 14:29,,,,0,,,,CC BY-SA 4.0 +5032,2,,5026,12/21/2018 15:01,,1,,"

I don't think you can add qubits to a QuantumRegister that has already been created. +If your goal, however, is to create another ancilla register without exceeding a limit on the number of qubits, you can do this:

+ +
def create_ancillas(r=3):
+
+    ancillas = QuantumRegister(r, 'ancillas')
+    return ancillas
+
+def create_ancillas_limited(partial_ancillas,r=5):
+
+    diff = r - partial_ancillas.size
+    if diff > 0:
+        return QuantumRegister(diff, 'ancillas_2')
+    else:
+        return None
+
+qc = QuantumCircuit()
+anc1 = create_ancillas()
+qc.add(anc1)
+
+anc2 = create_ancillas_limited(anc1)
+qc.add(anc2)
+
+ +

Maybe better if you want to limit the number of qubits in your circuits when adding ancillas is :

+ +
def add_ancillas_limited(qc,n_ancillas_to_add=3,limit=5):
+    if qc.width() - n_ancillas_to_add > limit:
+        print(""Cannot exceed limit!"")
+    else:
+        qc.add(QuantumRegister(n_ancillas_to_add))
+
+",4127,,,,,12/21/2018 15:01,,,,4,,,,CC BY-SA 4.0 +5033,2,,4992,12/22/2018 14:19,,2,,"

There are a couple of possible strategies that I can think of. I have no idea what their scaling is like. I guess nothing will perform better than random sampling, but that’s a complete guess.

+ +

The method I’d start from is just do the usual search, get a random answer, and then adapt my search the next time round, explicitly unmarking that known marked item.

+ +

Another option is to use the fact that Grover’s search can be applied to learning the minimum of a set, see What applications does Grover's Search Algorithm have?. So, you might adapt it to find the smallest, then the second smallest, and so on.

+",1837,,,,,12/22/2018 14:19,,,,0,,,,CC BY-SA 4.0 +5034,2,,4175,12/22/2018 15:14,,5,,"

The resonance frequencies of TLS fluctuate due to their interaction with neighboring TLS, which occurs through electric dipole interaction or the local mechanical strain in the material. If a TLS at low energy (below $k_B T$) is involved, this one may change its state randomly due to thermal activation. The resulting change in local electric field or strain can also detune TLS at higher energy which are within the qubit tuning range. Here's a theory paper on that: https://arxiv.org/abs/1503.01637

+ +

This process is called 'spectral diffusion' and can probably only be avoided by improving circuit materials to reduce the TLS density (and thus interactions). But even without thermal processes, a TLS may be trapped in a long-living metastable potential well from which escape by tunneling could take hours, days, or even years.

+ +

TLS can be detuned in frequency by controlling the mechanical strain in the sample, see this paper which also discusses TLS interactions: +https://www.nature.com/articles/ncomms7182

+ +

Here's a review article which summarizes decoherence effects from TLS on superconducting qubits and resonators: https://arxiv.org/abs/1705.01108

+",5398,,,,,12/22/2018 15:14,,,,0,,,,CC BY-SA 4.0 +5035,2,,5025,12/23/2018 2:50,,0,,"

To me the best choice is clearly qutip, a quantum computing simulator based in Python. It's free, and it has good documentation.

+ +

Let us know if you have any other queries beyond asking which simulator could do the job best.

+",1867,,,,,12/23/2018 2:50,,,,0,,,,CC BY-SA 4.0 +5036,2,,5000,12/23/2018 2:59,,2,,"

Here. Scroll down to the stochastic solver, and you'll find an attribute for storing measurements.

+ +

It's certainly not the emphasis of the package, as cnada pointed out, but it's there.

+",1867,,1867,,12/25/2018 0:14,12/25/2018 0:14,,,,0,,,,CC BY-SA 4.0 +5037,1,,,12/23/2018 6:49,,3,329,"

While computing the carry bit [C = 0 XOR (AB)] I am unable to compute the product AB in Qiskit. I don't know whether the Toffoli gate is available in Qiskit. So does anyone know how to compute AB, which is basically the AND gate, without using the Toffoli gate if it isn't available?

+",4446,,26,,12/31/2018 22:22,12/31/2018 22:22,Programming quantum half adder,,1,0,,,,CC BY-SA 4.0 +5038,2,,5037,12/23/2018 10:05,,1,,"

The Toffoli gate is indeed available in qiskit. It goes by the name ccx. Here's an example circuit where the Toffoli is used.

+ +
from qiskit import QuantumRegister, ClassicalRegister
+from qiskit import QuantumCircuit, BasicAer, execute
+
+q = QuantumRegister(3)
+c = ClassicalRegister(3)
+qc = QuantumCircuit(q, c)
+
+qc.h(q[0])
+qc.h(q[1])
+qc.ccx(q[0], q[1], q[2])
+qc.measure(q, c)
+
+backend = BasicAer.get_backend('qasm_simulator')
+job = execute(qc, backend)
+result = job.result()
+
+print(result.get_counts(qc))
+
+ +

The third qubit in the argument (q[2] here) acts as the target. The other two are the controls.

+ +

If you are still interested in how to make this gate from two qubit gates, you can check out the blog post here, which explains how the following gate sequence will give you the effect you need (though it isn't quite the same as a Toffoli).

+ +
qc.ch(q[0], q[3])
+qc.cz(q[1], q[3])
+qc.ch(q[0], q[3])
+
+",409,,,,,12/23/2018 10:05,,,,0,,,,CC BY-SA 4.0 +5039,1,,,12/23/2018 16:59,,4,151,"

I am reading about how to approximate adiabatic evolution with a quantum circuit, and I had some trouble following the arguments in the early papers which prove this result. I am mainly following

+ +
+

W. van Dam, M. Mosca, and U. Vazirani, “How Powerful is Adiabatic Quantum Computation?,” Proceedings 2001 IEEE International Conference on Cluster Computing, pp. 279–287, 2001.

+
+ +

In it the authors assumed a problem Hamiltonian

+ +

$$H_p=\sum_z f(z)|z\rangle\langle z|$$

+ +

so $H_p$ is diagonal in the computational basis. Towards the end of the proof, it is kind of assumed that $H_p$ and its associated evolution operator can be efficiently computed and implemented in quantum circuits. I struggled to see why this is so. Doesn't one need to fully specify $H_p$ to implement the unitary associated with it? In this case does it become equivalent to assuming we can already solve the optimization problem?

+ +

Also, if we don't assume $H_p$ to be diagonal in the computational basis, would it be possible that we need exponential number of gates to implement the adiabatic evolution?

+ +

Maybe a relevant and more general question is: in general, are there Hamiltonians that are hard to simulate on a quantum circuit?

+",5005,,5005,,12/25/2018 16:15,12/27/2018 7:56,Problem with approximating adiabatic evolution with quantum circuit,,1,0,,,,CC BY-SA 4.0 +5040,2,,5000,12/23/2018 20:43,,2,,"

The main purpose of QuTiP is to explore the dynamics of quantum systems, and therefore density matrices are the tool to use. According to this answer on Quantum Computing, we can model a measurement operator $P_i$ acting on a density matrix. +In the case of the measurement of a single qubit in the computational basis, you have +$$P_0=|0\rangle\langle 0|\qquad P_1=|1\rangle\langle 1|$$

+ +

If you want to talk about n qubits where you measure just the first one, then you use the measurement operators

+ +

$$P_0=|0\rangle\langle 0|\otimes\mathbb{I}^{\otimes(n-1)}\qquad P_1=|1\rangle\langle 1|\otimes\mathbb{I}^{\otimes(n-1)}$$

+ +

Implementation with the QuTiP dag method. +First we set up a two-level quantum system with the basis method, +using a vector v0 for the zero state and v1 for the one state.

+ +

v0 = qp.basis(2, 0)

+ +

Calculate the outer product with the dag method; this gives a density operator:

+ +

P0 = v0 * v0.dag()

+ +

Expand it to act on the multi-qubit register:

+ +

M0 = qp.gate_expand_1toN(P0, self.activeQubits, qubitNum)

+ +

Also

+ +

v1 = qp.basis(2, 1)

+ +

You can find a basic qubit quantum simulator running on Qutip in the SimulaQron software.

+ +

SimulaQron crudeSimulator

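Putting the steps above together, the same projector arithmetic can be written in plain numpy (a sketch: here np.kron plays the role that gate_expand_1toN plays in the QuTiP snippet):

```python
import numpy as np

P0 = np.array([[1, 0], [0, 0]])   # |0><0| on a single qubit
I2 = np.eye(2)

n = 3
M0 = P0                            # expand P0 to act on qubit 0 of n qubits
for _ in range(n - 1):
    M0 = np.kron(M0, I2)

# State (|0> + |1>)/sqrt(2) on qubit 0, |0> on the remaining qubits
plus = np.array([1, 1]) / np.sqrt(2)
zero = np.array([1, 0])
psi = np.kron(plus, np.kron(zero, zero))
rho = np.outer(psi, psi.conj())

p0 = np.trace(M0 @ rho).real               # probability of outcome 0
post = (M0 @ rho @ M0.conj().T) / p0       # post-measurement state
```

Here `p0` comes out to 0.5 and `post` is the projector onto |000>.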
+",1773,,,,,12/23/2018 20:43,,,,0,,,,CC BY-SA 4.0 +5041,1,,,12/24/2018 17:40,,6,1311,"

Simon's problem is that you are given a function $f : \{0,1\}^n \to \{0,1\}^n$ such that $f(x)=f(y)$ if and only if $x \oplus y$ is either $0^n$ or some unknown $s$. The problem is to find $s$. If $s=0^n$, then $f$ is 1 to 1, otherwise 2 to 1.

+ +

What is the classical complexity for Simon's problem?

+ +

Wikipedia says $\sqrt {2^n}$ but without any proof. Is there any site or book where I can find the proof for this?

+",5410,,2879,,12/26/2018 16:56,12/26/2018 16:56,Classical complexity for Simon's problem,,3,0,,,,CC BY-SA 4.0 +5042,2,,4749,12/24/2018 17:56,,4,,"

Yes, you can decompose any Hamiltonian. For VQE purposes, any finite-dimensional Hamiltonian can be presented as a sum of terms which consist of tensor products of Pauli matrices (https://arxiv.org/abs/1304.3061):

+ +

$$ +H = \sum_{\alpha, i} h^{\alpha}_{i} \sigma^{\alpha}_{i} + +\sum_{\alpha, \beta, i, j} h^{\alpha \beta}_{ij} \sigma^{\alpha}_{i} \sigma^{\beta}_{j} + \dots +$$

+ +

As such, the expected value $ \langle H \rangle$ can be estimated by measuring the expected values of such combinations of Pauli matrices. There may be trouble if the quantity of these terms grows exponentially in the size of the system, but many interesting Hamiltonians decompose into a polynomial number of operators.

+ +

The coefficients of the decomposition can be obtained by making a scalar product of the Hamiltonian with the basis term: $(H, A) = \frac1d \mathrm{Tr}(HA)$.

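That scalar product can be sketched in a few lines of numpy (the function name is mine): enumerate the tensor products of Pauli matrices and project the Hamiltonian onto each one.

```python
import numpy as np
from itertools import product

PAULIS = {
    'I': np.eye(2),
    'X': np.array([[0, 1], [1, 0]], dtype=complex),
    'Y': np.array([[0, -1j], [1j, 0]]),
    'Z': np.array([[1, 0], [0, -1]], dtype=complex),
}

def pauli_decompose(H):
    # Coefficients h_P = Tr(H P)/d such that H = sum_P h_P P over Pauli strings P
    n = int(np.log2(H.shape[0]))
    d = 2 ** n
    coeffs = {}
    for labels in product('IXYZ', repeat=n):
        P = np.eye(1)
        for l in labels:
            P = np.kron(P, PAULIS[l])
        h = np.trace(H @ P).real / d   # real, since H and P are Hermitian
        if abs(h) > 1e-12:
            coeffs[''.join(labels)] = h
    return coeffs
```

For instance, decomposing $H = Z\otimes Z + 0.5\, X\otimes I$ returns just the two expected terms, and summing them back reproduces $H$.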
+",5103,,26,,1/31/2019 19:25,1/31/2019 19:25,,,,0,,,,CC BY-SA 4.0 +5043,1,5046,,12/24/2018 18:13,,1,275,"

The Qiskit documentation on VQE describes two of the ansatz as ""rotations with entanglements"". The rotation gates are more or less clear, but the documentation doesn't mention what gate is used for entanglement. I suspect they use something like $\exp(-i \alpha Z_1 Z_2)$, but what exactly?

+",5103,,26,,03-12-2019 09:16,03-12-2019 09:16,Entanglement in VQE ansatz in Qiskit,,1,0,,,,CC BY-SA 4.0 +5044,2,,5041,12/25/2018 5:09,,1,,"

Sources on quantum computing tend to give a classical complexity of $\sqrt{2^n}$ but not the proof. I believe the sources on classical cryptography call this algorithm birthday attack and use it to find collisions of hash functions (which is effectively what the Simon's algorithm does). You should be able to find the math details looking for it in crypto context - Crypto StackExchange even has a dedicated tag for birthday attack.

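The attack itself is easy to simulate classically (a sketch; the oracle construction and function names below are my own): query a random 2-to-1 function until two inputs collide, which both reveals $s$ and takes $O(\sqrt{2^n})$ queries on average.

```python
import random

def simon_oracle(n, s, seed=0):
    # Random 2-to-1 function f on n-bit strings with f(x) = f(x ^ s), s != 0
    rng = random.Random(seed)
    values = list(range(2 ** n))
    rng.shuffle(values)
    it = iter(values)
    f = {}
    for x in range(2 ** n):
        if x not in f:
            f[x] = f[x ^ s] = next(it)
    return f

def classical_simon(n, s, seed=0):
    # Birthday attack: sample random inputs until a collision reveals s
    rng = random.Random(seed)
    f = simon_oracle(n, s, seed)
    seen = {}
    queries = 0
    while True:
        x = rng.randrange(2 ** n)
        queries += 1
        y = f[x]
        if y in seen and seen[y] != x:
            return seen[y] ^ x, queries   # f(x) = f(x') implies x ^ x' = s
        seen[y] = x
```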
+",2879,,,,,12/25/2018 5:09,,,,0,,,,CC BY-SA 4.0 +5045,1,5066,,12/25/2018 8:15,,6,1880,"

$$Tr(\rho^{AB} (\sigma^A \otimes I/d)) = Tr(\rho^A \sigma^A)$$

+ +

I came across the above, but I'm not sure how it's true. I figured they first partial traced out the B subsystem, and then trace A, but I don't see how you are allowed to partial trace out B from both the factors in the arguments. A proof or any intuition on this would be appreciated.

+ +

Edit 1:

+ +

The notation

+ +

$\rho^{AB}$ is a state in Hilbert space $H_A \otimes H_B$

+ +

$\sigma^A$ is a state in Hilbert space $H_A$

+ +

$\rho^A$ is $\rho^{AB}$ with $B$ subsystem traced out.

+ +

$I/d$ is the maximally mixed state in Hilbert space $B$.

+ +

I saw this being used in Nielsen and Chuang, section 11.3.4, in the proof of subadditivity of entropy.

+ +

Edit 2:

+ +

So, I tried to write an answer based on DaftWullie's comment and Алексей Уваров's answer, but I am stuck again.

+ +

So, $$\rho^{AB} = \sum_{mnop} \rho_{mnop} |mo\rangle \langle np|$$

+ +

Then $$\rho^{A} = \sum_{mno} \rho_{mnoo} |m\rangle \langle n|$$

+ +

Let $$\sigma^A = \sum_{ij} \sigma_{ij} |i\rangle \langle j|$$

+ +

And $$I/d = \sum_{xy} [I/d]_{xy} |x\rangle \langle y|$$

+ +

RHS

+ +

$$Tr(\rho^A \sigma^A)\\ += Tr(\sum_{mno} \rho_{mnoo} |m\rangle \langle n|\sum_{ij} \sigma_{ij} |i\rangle \langle j|)\\ += Tr(\sum_{mnoj} \rho_{mnoo} \sigma_{nj} | m \rangle \langle j|)\\ += \sum_{mno} \rho_{mnoo} \sigma_{nm}$$

+ +

LHS

+ +

$$Tr(\rho^{AB} (\sigma^A \otimes I/d))\\ += Tr(\sum_{mnop} \rho_{mnop} |mo\rangle \langle np| \sum_{ijxy} \sigma_{ij} [I/d]_{xy} |ix\rangle \langle jy|)\\ += Tr(\sum_{mnoxjy}\rho_{mnox} \sigma_{nj} [I/d]_{xy} | mo \rangle \langle jy |)\\ += \sum_{mnox} \rho_{mnox}\sigma_{nm} [I/d]_{xo}\\ += (1/d)\sum_{mno} \rho_{mnoo} \sigma_{nm}$$

+ +

Which is the same as the RHS, but there's an extra $1/d$ factor?

+ +

Also, am I thinking about this the wrong way? Is there a simpler way to look at this?

+",2832,,10480,,4/16/2021 20:38,4/16/2021 20:38,Partial trace over a product of matrices - one factor is in tensor product form,,2,6,,,,CC BY-SA 4.0 +5046,2,,5043,12/25/2018 12:29,,1,,"

Looks like the entangler gates are controlled-PHASE gates, at least that is mentioned in the Qiskit tutorial:

+ +

$$ +c\text{PHASE} = +\begin{pmatrix} +1 & 0 & 0 & 0 \\ +0 & 1 & 0 & 0 \\ +0 & 0 & 1 & 0 \\ +0 & 0 & 0 & i \\ +\end{pmatrix} +$$

+ +

However, in hardware, one also uses the drift Hamiltonian $U_{\text{ENT}}=\exp(-iH_0 \tau)$, which naturally entangles all qubits.

+",5103,,26,,12/25/2018 14:33,12/25/2018 14:33,,,,0,,,,CC BY-SA 4.0 +5047,2,,5045,12/25/2018 12:42,,7,,"

Here the important fact is that the maximally mixed state is the identity matrix up to normalization.

+ +

Let me rewrite the expression on the left in index notation (the summation sign is omitted according to the Einstein convention):

+ +

$$ +Tr(\rho^{AB} (\sigma^A \otimes I/d)) = [\rho^{AB}]_{ijkl} [\sigma^A]_{ji} [I/d]_{lk} +$$

+ +

But $[I/d]_{lk} = \frac1d \delta_{lk}$, therefore $[\rho^{AB}]_{ijkl} [\sigma^A]_{ji} [I/d]_{lk} = \frac1d [\rho^{AB}]_{ijkk} [\sigma^A]_{ji}$, which is exactly what happens if you first trace out the subsystem $B$ (UPD: up to the prefactor of $1/d$ apparently).

+ +

The physical intuition would be as follows. This expression is basically an expected value of a Hermitian operator $\frac1d \sigma^A \otimes I$ over a state $\rho$. This operator only acts nontrivially on the first subsystem, thus we can safely trace out the rest.

+ +

EDIT: Also, this contraction problem can be understood better if you use tensor network notation. Learning it requires some time, but if you do, I suggest starting here and here.

+",5103,,5103,,12/28/2018 9:06,12/28/2018 9:06,,,,7,,,,CC BY-SA 4.0 +5048,1,5056,,12/25/2018 21:32,,10,4013,"

First, I know there is a difference between logical qubits and physical qubits: quantum error correction requires several physical qubits per logical qubit.

+ +

Wikipedia states that Shor's algorithm takes quantum gates of order $\mathcal{O}((\log N)^2(\log \log N)(\log \log \log N))$ using fast multiplication. That comes out to $1,510,745$ gates for $2^{1024}$. Further down the article, it says that it usually takes $n^3$ gates for $n$ qubits. This would mean it would take ~$115$ qubits.

+ +

However, I've run Shor's Algorithm as implemented in Q# samples using Quantum Phase Estimation and it comes out to $1025$ qubits.

+",4693,,1828,,1/21/2019 4:33,1/21/2019 4:33,How many logical qubits are needed to run Shor's algorithm efficiently on large integers ($n > 2^{1024}$)?,,1,1,,,,CC BY-SA 4.0 +5049,1,5051,,12/26/2018 2:25,,5,164,"

Within photonic quantum computing, one of the ways to represent information is the dual-rail representation of single-photon states ($c_0|01\rangle \ + \ c_1|10\rangle$). Is it possible to utilize multi-photon states (for example, two optical cavities with total energy $2\hbar\omega$, using a state like $c_0|02\rangle \ + \ c_1|20\rangle \ + \ c_2|11\rangle$, where $|02\rangle$ represents two photons in one cavity, $|20\rangle$ represents two photons in the other cavity, and $|11\rangle$ represents one photon in each cavity)?

+",4907,,26,,12/31/2018 15:49,12/31/2018 15:49,Multi-photon states in photonic quantum computing?,,2,0,,,,CC BY-SA 4.0 +5050,2,,5041,12/26/2018 11:21,,1,,"

Here's a partial explanation.

+ +

Start by thinking about the 2 bit case. We first evaluate $f(00)$, and we learn nothing. Then, we evaluate (say) $f(01)$. Now we can compare the values, and see if $s=01$, or not. Next we calculate a new value, say $f(10)$. By comparing to $f(00)$, we can determine if $s=10$ but, also, by comparing to $f(01)$, we simultaneously check if $s=11$.

+ +

Now, there are $2^n-1$ possible values of $s$ (I'm ignoring the $s=000...0$ case). If we were cunning about selecting the right $k$ values of $x\in\{0,1\}^n$, then so long as $\binom{k}{2}\geq 2^n-1$, this could be possible, as you could have checked $\binom{k}{2}$ different values of $s$. This is achieved by $k\sim 2^{n/2}$. Now, there still remains a careful argument about what those $k$ values are that you need to select, which I'm not presenting here. It's probably worthwhile trying a few small cases (as I did for $n=2$ above) to gain some understanding.

+",1837,,2645,,12/26/2018 15:30,12/26/2018 15:30,,,,0,,,,CC BY-SA 4.0 +5051,2,,5049,12/26/2018 11:27,,3,,"

Yes. The kets themselves can have arbitrary labels, and it's just for you to establish the connection between them and the physical scenario. There's no reason why you can't have the physical scenario you've specified and, indeed, people frequently do.

+",1837,,,,,12/26/2018 11:27,,,,0,,,,CC BY-SA 4.0 +5052,2,,5039,12/26/2018 11:43,,2,,"

I'm not sure if you're wanting to implement $H_p=f(z)|z\rangle\langle z|$ for a specific $z$, or +$$ +H_p=\sum_zf(z)|z\rangle\langle z|. +$$ +I'm going to assume the latter because you can easily redefine the function to convert it into the former. Now, assume that $g(z)$ is a function that we can compute efficiently on a classical computer that gives the best $k$-bit approximation for $f(z)$. Now, construct a unitary that can compute +$$ +U|z\rangle|y\rangle=|z\rangle|y\oplus g(z)\rangle +$$ +for $y\in\{0,1\}^k$. You can implement $U$ because it can be computed classically, and $U^\dagger=U$. So, if you have a first register on which you want to implement the Hamiltonian (state $\sum_z\alpha_z|z\rangle$), a second register of $k$ qubits, initialised in $|0\rangle^{\otimes k}$, and you implement $U$, then you've got state +$$ +\sum_z\alpha_z|z\rangle|g(z)\rangle. +$$ +Now think about the binary representation of $g(z)$. If you implement a phase gate $e^{i\phi Z}$ on the least significant bit, a phase $e^{i2\phi Z}$ on the second bit, and $e^{i2^{j-1}\phi Z}$ of the $j^{th}$ least significant bit, then the net effect is a phase $e^{ig(z)\phi}$. +$$ +\sum_z\alpha_ze^{ig(z)\phi}|z\rangle|g(z)\rangle +$$ +Now we run $U$ again, leaving +$$ +\sum_z\alpha_ze^{ig(z)\phi}|z\rangle|0\rangle^{\otimes k}. +$$ +This is as if the Hamiltonian +$$ +\sum_zg(z)|z\rangle\langle z| +$$ +has been applied for a time $\phi$, and this, for sufficiently large $k$, will be a good approximation to $H_p$.
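As a purely classical sanity check of the phase bookkeeping, here is a minimal Python sketch (my own illustrative function, not part of the construction above). One simplification: it uses phase gates $\mathrm{diag}(1, e^{i\theta})$ on each bit of $|g(z)\rangle$ instead of $e^{i\theta Z}$; per bit the two differ only by a global phase, so the accumulated phase $e^{ig(z)\phi}$ is unchanged up to an overall phase:

```python
import cmath

def accumulated_phase(g, phi, k):
    # Apply diag(1, e^{i * 2^j * phi}) to the j-th bit of the k-bit register
    # holding g(z); the product over set bits is e^{i g(z) phi}.
    amp = 1 + 0j
    for j in range(k):
        if (g >> j) & 1:                   # bit j of g(z) is set
            amp *= cmath.exp(1j * (2 ** j) * phi)
    return amp

phi = 0.3
for g in [0, 1, 5, 13]:                    # sample values of g(z)
    assert abs(accumulated_phase(g, phi, 4) - cmath.exp(1j * g * phi)) < 1e-12
```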

+ +
+ +

In terms of different bases, if there's a unitary $V$ that you can efficiently implement that changes basis, then you're certainly fine - you just add that before the evolution I've described, and add a $V^\dagger$ after.

+ +

More generally, is there a Hamiltonian evolution that you can't efficiently implement on a quantum computer? Assuming certain basic properties such as bounded energy, the answer is no: Hamiltonian simulation is BQP-complete, meaning that every Hamiltonian can be efficiently simulated on a quantum computer (up to some accuracy), and there are some for which it is the hardest thing you could ask of a quantum computer and it still be efficient. The caveat here is the way in which the Hamiltonian is specified. For instance, the way you specify the Hamiltonian could make it hard to implement, while specifying it a different way would be OK.

+ +
+ +

However, that's not necessarily the question that you want to be asking. In terms of ""implementing the adiabatic evolution"", it's one thing to be able to implement the Hamiltonian. The other issue is how long do you have to simulate the adiabatic evolution for. There are certain Hamiltonians for which we know that finding the ground state is QMA-complete (see here, for example), which means it's strongly believed to be an exponential time problem (the difference between BQP and QMA is effectively the quantum version of the difference between P and NP).

+",1837,,1837,,12/27/2018 7:56,12/27/2018 7:56,,,,4,,,,CC BY-SA 4.0 +5053,1,,,12/26/2018 16:25,,3,294,"

I am interested in combinatorial game theory & was doing some research on quantum combinatorial games. This lead me to wondering how a quantum computer might be able to perform nimber arithmetic (perhaps as a part of a game playing AI).

+ +

I am aware of the fact that nim sums are equivalent to bitwise XOR & that XOR correlates to CNOT, which is why it seems resonable to me that a quantum computer would be able to perform calculations with nimbers.

+ +

How could a quantum computer perform nimber arithmetic?
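To make the connection concrete, here is the purely classical behaviour I would expect a circuit of bitwise CNOTs to reproduce on computational basis states (a Python sketch with my own helper names; it simulates only basis states, not superpositions):

```python
from functools import reduce
from operator import xor

def nim_sum(*heaps):
    # The nim-sum of Grundy values is their bitwise XOR.
    return reduce(xor, heaps, 0)

def apply_bitwise_cnots(a, b, n):
    # CNOT from bit i of register |a> onto bit i of register |b>,
    # acting on a computational basis state (a, b).
    for i in range(n):
        if (a >> i) & 1:
            b ^= 1 << i
    return a, b

# |a>|b>  ->  |a>|a XOR b>, i.e. the second register ends up holding the nim-sum.
a, b = 0b101, 0b110
assert apply_bitwise_cnots(a, b, 3) == (a, a ^ b)

# A Nim position with heaps *3, *5, *6 has nim-sum 0: a P-position.
assert nim_sum(3, 5, 6) == 0
```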

+",2645,,2645,,5/16/2020 2:47,5/17/2020 7:49,How could a quantum computer perform nimber arithmetic?,,2,5,,,,CC BY-SA 4.0 +5054,2,,5041,12/26/2018 16:48,,7,,"

To determine the classical complexity of a problem you need two things, of course: an upper bound (generally an algorithm) and a lower bound.

+ +

There is an easy randomized algorithm that works with high probability given $O(2^{n/2})$ queries to the function $f$: for a suitable constant $c>0$, generate $k = c 2^{n/2}$ strings $x_1,\ldots,x_k\in\{0,1\}^n$ uniformly at random, compute $f(x_j)$ for each $j\in\{1,\ldots,k\}$, and check to see if there is a collision. If you find distinct strings $x$ and $y$ with $f(x) = f(y)$, then answer $s = x \oplus y$ (which is guaranteed to be correct). Otherwise answer $s = 0^n$ (which might be wrong if you were unlucky). The probability that this succeeds depends on the choice of $c$, but for any desired constant probability of error $\varepsilon$ there is a constant $c$ that yields success probability $1-\varepsilon$. The analysis is essentially that of the generalized birthday problem, and it can be found in numerous books and lecture notes.
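For illustration, here is a small Python simulation of this collision-finding approach (the oracle construction and function names are my own; the sampler simply stops at the first collision, which for a random $f$ takes on the order of $2^{n/2}$ queries):

```python
import random

def simon_oracle(n, s, rng):
    # A function f on n-bit strings with f(x) = f(x XOR s),
    # exactly 2-to-1 when s != 0 (one distinct label per coset {x, x^s}).
    labels, f = {}, {}
    for x in range(2 ** n):
        key = min(x, x ^ s)
        if key not in labels:
            labels[key] = len(labels)
        f[x] = labels[key]
    return f

def find_s_by_collision(f, n, rng):
    seen = {}                       # f-value -> input that produced it
    queries = 0
    while True:
        x = rng.randrange(2 ** n)
        queries += 1
        y = seen.get(f[x])
        if y is not None and y != x:
            return x ^ y, queries   # collision: x XOR y == s
        seen[f[x]] = x

rng = random.Random(7)
n, s = 8, 0b10110101
f = simon_oracle(n, s, rng)
recovered, queries = find_s_by_collision(f, n, rng)
assert recovered == s               # any collision pair differs exactly by s
```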

+ +

There is, in fact, a classical deterministic algorithm that succeeds with certainty and requires $O(2^{n/2})$ queries to $f$. The idea is to choose the strings $x_1,\ldots,x_k$ in advance so that +$$ +\{x_i\oplus x_j \,:\, i,j\in\{1,\ldots,k\}\} = \{0,1\}^n, +$$ +so a collision will be guaranteed if there is one. This paper describes one way to do this:

+ +
+

Guangya Cai and Daowen Qiu. Optimal separation in exact query complexities for Simon's problem. Journal of Computer and System Sciences 97: 83-93, 2018.

+
+ +

The lower bound is more difficult, as lower bounds generally are, if you want a formal analysis. Simon's original paper proved that any probabilistic algorithm making $2^{n/4}$ queries to $f$ can determine whether or not $s=0^n$ with probability at most $1/2 + 2^{-n/2}$, assuming that $f$ is chosen randomly from a certain distribution that gives $s=0^n$ with probability $1/2$. In other words, a probabilistic algorithm making $2^{n/4}$ queries gives only an exponentially small advantage over random guessing. You can find a proof of the stronger claim that $\Omega(2^{n/2})$ queries are required for a classical algorithm to solve Simon's problem with probability at least 3/4 in these lecture notes:

+ +
+

Richard Cleve. Classical lower bound for Simon's problem. Lecture Notes, 2011.

+
+",1764,,,,,12/26/2018 16:48,,,,0,,,,CC BY-SA 4.0 +5055,2,,5049,12/26/2018 18:39,,1,,"

This is very much possible, and it is a very general way in which product systems in composite states are coupled; here the states are of the form $|n_1\rangle |n_2\rangle$. This kind of general ket is a solution of Hamiltonian interaction/coupling terms like $V\sim (a_1^\dagger a_2 + h.c.)$, which describe the exchange of one quantum (between the two optical cavities here). Here $a_i$ ($a_i^\dagger$) are the annihilation (creation) operators for the $i^{th}$ cavity in second quantisation. The same holds for more systems coupled in the form of product states. +A very general and frequently used example is a spin (with excited and ground states $|e\rangle,|g\rangle$) in an optical cavity, with solutions like $|\psi\rangle=c_1|e,n\rangle+c_2|g,n+1\rangle$, where $n$ is the number of photons in the cavity, and so on for more complicated systems.

+",4889,,,,,12/26/2018 18:39,,,,0,,,,CC BY-SA 4.0 +5056,2,,5048,12/26/2018 19:42,,19,,"

The question is about how many logical qubits it takes to implement Shor's algorithm for factoring an integer $N$ of bit-size $n$, i.e., a non-negative integer $N$ such that $1 \leq N \leq 2^n{-}1$. The question is a poignant one and not easy to answer as there are various tradeoffs possible (e.g., between number of qubits and circuit size).

+ +
+ +

Executive Summary Answer: $2n{+}2$ qubits, which leads to a quantum circuit implementation with fewer than $448 n^3 \log_2(n)$ $T$-gates. For a bit-size of $n=1,024$, this works out to $2050$ logical qubits and $4.81 \cdot 10^{12}$ $T$-gates.

+ +
+ +

As mentioned in the question, one can apply fast methods such as Schoenhage-Strassen's algorithm for fast multiplication to implement the modular arithmetic asymptotically in $O(n^2 \log(n) \log \log(n))$ primitive operations (say, over the Clifford$+T$ gate set). This has been discussed for instance in Zalka's paper. However, it should be pointed out that this is indeed (i) only a statement about asymptotic cost and (ii) only a statement about the number of operations required and does not imply the number of qubits.

+ +

Regarding (i), the constant that is hidden in the ""O-notation"" can be prohibitively large. To the best of my knowledge, it has not been attempted to construct a quantum circuit to implement Shor's algorithm based on Schoenhage-Strassen, so we do not even know upper bounds on what that constant is. The other catch, (ii), is that it is not straightforward to relate the number of qubits and the gate cost as seems to be suggested in the question. Besides the fact that we do not know the constant, there is another issue, namely that a straightforward implementation of Schoenhage-Strassen via Bennett's method would lead to a very large number of logical qubits required. Therefore, even as there are faster methods available for integer multiplication than the simple method of n additions, these are much more non-trivial to code in quantum programming languages such as LIQUi|> and Q#.

+ +

In terms of concrete resource estimates for Shor's algorithm, the paper by Haener et al might be a good entry point; it implemented the arithmetic in terms of so-called Toffoli gates, which have the advantage of being testable at scale on classical input vectors. It is shown in that paper that $2n{+}2$ logical qubits are sufficient to implement Shor's algorithm for factoring integers using a circuit with $64 n^3 \log_2(n)$ Toffoli gates, which yields $448 n^3 \log_2(n)$ primitive gates (this latter number refers to the number of $T$-gates and ignores the number of Clifford gates, as these are significantly easier to implement fault-tolerantly).
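The headline numbers quoted above can be reproduced with a few lines of arithmetic (illustrative only; the formulas are simply the counts quoted from that paper):

```python
import math

n = 1024                                   # bit-size of the integer to factor
logical_qubits = 2 * n + 2
toffoli_gates = 64 * n ** 3 * math.log2(n)
t_gates = 448 * n ** 3 * math.log2(n)      # 7 T-gates per Toffoli

assert logical_qubits == 2050
assert round(t_gates / 1e12, 2) == 4.81    # the 4.81e12 figure above
```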

+ +

The currently available Q# implementation of Shor's algorithm (see the IntegerFactorization sample at https://github.com/microsoft/quantum) is based on another way of implementing the arithmetic, namely based on Draper's method to implement additions using the Fourier basis, see also here. This implementation follows Beauregard's paper and requires $2n{+}3$ logical qubits in total. A recent improvement has been obtained by Gidney who reduced the total number of qubits to $2n{+}1$ (of which only $n{+}2$ have to be ""clean"" qubits, i.e., initialized in a known state. The rest can be ""dirty"" qubits that can be used and returned in their (unknown) state). Finally, there is an interesting claim by Zalka that the number of qubits can be reduced to $1.5n{+}2$ (and perhaps even further), however, his proposed solution comes at the cost of a dramatic increase in circuit size as it involves inversions and, to my knowledge, has not been verified or implemented in a programmatic way.

+",1828,,1828,,1/20/2019 1:18,1/20/2019 1:18,,,,7,,,,CC BY-SA 4.0 +5057,1,,,12/27/2018 4:04,,3,65,"

In Mermin's Quantum Computer Science, section 1.10 (Measurement gates and state preparation), Mermin writes that:

+ +
+

This role of measurement gates in state preparation follows from the Born rule if the Qbits that are to be prepared already have a state of their own, even though that state might not be known to the user of the quantum computer. It also follows from the generalized Born rule if the Qbits already share an entangled state – again, not necessarily known to the user – with additional (unmeasured) Qbits. But one cannot deduce from the Born rules that measurement gates serve to prepare states for Qbits “off the shelf,” whose past history nobody knows anything about. In such cases the use of measurement gates to assign a state to the Qbits is a reasonable and plausible extension of the Born rules. It is consistent with them, but goes beyond them.

+
+ +

Why are ""off the shelf"" qubits any different from qubits whose state is unknown to the user? And why does whatever property these qubits have mean that the use of measurement gates for state preperation doesn't follow directly from the Born rule?

+",5418,,,,,12/27/2018 4:04,Why does state preparation of 'off the shelf' qubits not follow from the Born rule (Mermin)?,,0,0,,,,CC BY-SA 4.0 +5058,1,5061,,12/27/2018 9:30,,13,1639,"

I currently have 2 unitary matrices that I want to approximate to a good precision with the fewer quantum gates possible.

+ +

In my case the two matrices are:

+ +
    +
  • The square root of NOT gate (up to a global phase) +$$G = \frac{-1}{\sqrt{2}}\begin{pmatrix} i & 1 \\ 1 & i \end{pmatrix} = e^{-\frac{3}{4}\pi i} \sqrt{X}$$
  • +
  • $$W = +\begin{pmatrix} +1&0&0&0\\ +0&\frac{1}{\sqrt{2}}&\frac{1}{\sqrt{2}}&0\\ +0&\frac{1}{\sqrt{2}}&\frac{-1}{\sqrt{2}}&0\\ +0&0&0&1 \\ +\end{pmatrix}$$
  • +
+ +

My question is the following:

+ +
+

How can I approximate these specific matrices with the fewer quantum gates possible and a good precision?

+
+ +

What I want and what I can afford:

+ +
    +
  1. I can afford to use several days/weeks of CPU time and a lot of RAM.
  2. +
  3. I can afford to spend 1 or 2 human days searching for mathematical tricks (in last resort, that is why I ask here first). This time does not include the time I would need to implement the hypothetical algorithms used for the first point.
  4. +
  5. I want the decomposition to be nearly exact. I don't have a target precision at the moment, but the 2 gates above are used extensively by my circuit and I don't want errors to accumulate too much.
  6. +
  7. I want the decomposition to use the fewest quantum gates possible. This point is secondary for the moment.
  8. +
  9. A good method would let me choose the trade-off I want between the number of quantum gates and the precision of the approximation. If this is not possible, an accuracy of at least $10^{-6}$ (in terms of trace norm) is probably (as said before, I do not have estimates so I am not sure of this threshold) required.
  10. +
  11. The gate set is: +$$ +\left\{ H, X, Y, Z, R_\phi, S, T, R_x, R_y, R_z, \text{CX}, \text{SWAP}, \text{iSWAP}, \sqrt{\text{SWAP}} \right\} +$$ +with $R_\phi, \text{SWAP}, \sqrt{\text{SWAP}}$ as described in Wikipedia, $R_A$ the rotation with respect to the axis $A$ ($A$ is either $X$, $Y$ or $Z$) and +$$\text{iSWAP} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ + 0 & 0 & i & 0 \\ + 0 & i & 0 & 0 \\ + 0 & 0 & 0 & 1 \\ \end{pmatrix}$$.
  12. +
+ +

The methods I know about:

+ +
    +
  1. The Solovay-Kitaev algorithm. I have an implementation of this algorithm and already tested it on several unitary matrices. The algorithm generates sequences that are quite long, and the trade-off [number of quantum gates] VS [precision of the approximation] is not sufficiently adjustable. Nevertheless, I will run the algorithm on these gates and edit this question with the results I obtain.
  2. +
  3. Two papers on 1-qubit gate approximation and n-qubit gate approximation. I also need to test these algorithms.
  4. +
+ +

EDIT: edited the question to make ""square root of not"" more apparent.

+",1386,,1386,,9/20/2019 13:09,9/20/2019 13:09,Approximating unitary matrices,,3,5,,,,CC BY-SA 4.0 +5059,2,,5058,12/27/2018 13:50,,4,,"

Neither of these gates require approximate sequences. You can implement them exactly with your specified gate sets with no great effort.

+ +

Up to a global phase (which should be irrelevant), G is simply $HSH$.

+ +

The second, $W$, is a little more complicated. The way that I constructed this was to think of it as a controlled-Hadamard where I then required a basis change which is created by a controlled-not. One therefore has the circuit

+ +

+ +

where $U=\cos\frac{\pi}{8}\mathbb{I}-i\sin\frac{\pi}{8}Y$. I always get factors of 2 wrong in these angles, but this is of the form $R_Y(\theta)$. I've taken no particular effort to optimise this gate sequence, but this is probably fairly good.
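Both claims are easy to verify numerically. The following plain-Python sketch (helper functions are my own) checks that $HSH$ equals $G$ up to a global phase, and that conjugating $X$ by the stated $U$ indeed gives a Hadamard, which also confirms the angle:

```python
import cmath
import math

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def dagger(A):
    return [[complex(A[j][i]).conjugate() for j in range(2)] for i in range(2)]

def close(A, B, tol=1e-12):
    return all(abs(A[i][j] - B[i][j]) < tol for i in range(2) for j in range(2))

r = 2 ** -0.5
H = [[r, r], [r, -r]]
S = [[1, 0], [0, 1j]]
X = [[0, 1], [1, 0]]
G = [[-1j * r, -r], [-r, -1j * r]]             # -(1/sqrt(2)) [[i, 1], [1, i]]

# Claim 1: HSH equals G up to the global phase e^{-3*pi*i/4}.
HSH = matmul(matmul(H, S), H)
phase = cmath.exp(-3j * cmath.pi / 4)
assert close([[phase * v for v in row] for row in HSH], G)

# Claim 2: conjugating X by U = cos(pi/8) I - i sin(pi/8) Y (a real rotation
# matrix) gives the Hadamard: U^dagger X U == H.
c8, s8 = math.cos(math.pi / 8), math.sin(math.pi / 8)
U = [[c8, -s8], [s8, c8]]
assert close(matmul(matmul(dagger(U), X), U), H)
```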

+",1837,,,,,12/27/2018 13:50,,,,0,,,,CC BY-SA 4.0 +5060,2,,5058,12/27/2018 15:07,,7,,"

When a two-qubit gate $W$ can be expressed (up to a global phase) in the computational basis by a matrix with entirely real entries, i.e., $W \in O(4)$, there is a general construction for implementing the gate with $CNOT$s and single-qubit gates; please see Vatan and Williams.

+ +

The construction is optimal in the sense that it requires two CNOT gates and at most 12 single qubit gates (for the most general case of a real two qubit gate). +The construction is based on the homomorphism:

+ +

$$SO(4) \approx SU(2) \times SU(2),$$ +which asserts that any two-qubit gate $W$ with real entries can be expressed as: +$$W = M U M^{\dagger}$$ +with $U\in SU(2) \otimes SU(2)$ , i.e., can be implemented by two single qubit gates.

+ +

A matrix $M$ inducing this homomorphism is named by Makhlin the magic matrix, one possible solution is given on the bottom of page 2 in Vatan and Williams work, another possibility is given in equation (10) by Fujii. +Vatan and Williams gave a construction with one CNOT for $M$

+ +

+ +

Using this construction, the full gate implementation given by Vatan and Williams is:

+ +

+ +

with $S_1 = S_z(\frac{\pi}{2})$ and $R_1 = S_y(\frac{\pi}{2})$

+ +

Using this construction, it should be a quite an easy exercise to compute the one-qubit gates $A$ and $B$, for your case.

+",4263,,,,,12/27/2018 15:07,,,,0,,,,CC BY-SA 4.0 +5061,2,,5058,12/27/2018 15:58,,10,,"

You have picked two particularly simple matrices to implement.

+ +

The first operation (G) is just the square root of X gate (up to global phase):

+ +

+ +

In your gate set, this is $R_X(\pi/2)$.

+ +

The second operation (W) is a Hadamard matrix in the middle 2x2 block of an otherwise-identity matrix. Anytime you see this 2x2-in-the-middle pattern you should think ""controlled operation conjugated by CNOTs"". And that's exactly what works here (note: you may need to swap the lines; depends on your endianness convention):

+ +

+ +

So the only real trouble is how to implement a controlled Hadamard operation. A Hadamard is a 180 degree rotation around the X+Z axis. You can use a 45 degree rotation around the Y axis to move the X+Z axis to the X axis, then do a CNOT in place of the CH, then move the axis back:

+ +

+ +

Where $Y^{1/4} \equiv R_Y(\pi/4)$.
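For what it's worth, both steps can be checked numerically. The plain-Python sketch below uses my own helper functions and one explicit choice of qubit ordering (exactly the convention detail mentioned above): it verifies that $R_X(\pi/2)$ equals $G$ up to a global phase of $-i$, and that conjugating a controlled-Hadamard (control on the second qubit) by CNOTs (control on the first) reproduces $W$:

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def close(A, B, tol=1e-12):
    n = len(A)
    return all(abs(A[i][j] - B[i][j]) < tol for i in range(n) for j in range(n))

r = 2 ** -0.5

# R_X(pi/2) = cos(pi/4) I - i sin(pi/4) X equals G up to the global phase -i.
RX = [[r, -1j * r], [-1j * r, r]]
G = [[-1j * r, -r], [-r, -1j * r]]
assert close([[-1j * v for v in row] for row in RX], G)

# W = CNOT . CH . CNOT in basis order |q0 q1> = |00>, |01>, |10>, |11>,
# with the CNOT controlled on q0 and the Hadamard on q0 controlled on q1.
CNOT = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]]
CH = [[1, 0, 0, 0], [0, r, 0, r], [0, 0, 1, 0], [0, r, 0, -r]]
W = [[1, 0, 0, 0], [0, r, r, 0], [0, r, -r, 0], [0, 0, 0, 1]]
assert close(matmul(matmul(CNOT, CH), CNOT), W)
```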

+",119,,,,,12/27/2018 15:58,,,,0,,,,CC BY-SA 4.0 +5063,1,,,12/27/2018 22:13,,6,465,"

I read about the classical-quantum states in the textbook by Mark Wilde and there is an exercise that asks to show the set of classical-quantum states is not a convex set. But I have an argument to show it is a convex set. I wonder whether I made a mistake in my proof.

+ +

Here is the definition of classical-quantum states (definition 4.3.5):

+ +
+

The density operator corresponding to a classical-quantum ensemble + $\{p_X(x)$, $|x\rangle\langle x|_X \otimes \rho_A^x\}_{x\in \mathcal{X}}$ is called a classical-quantum state and takes the following form: + $$\rho_{XA} = \sum_{x \in \mathcal{X}} p_X(x) |x\rangle\langle x|_X \otimes \rho_A^x.$$

+
+ +

My argument about the set of classical-quantum states is convex is as follows. Let $\rho_{XA}$ and $\sigma_{XA}$ to be two arbitrary classical-quantum states. Specifically, we can write +$$\rho_{XA} = \sum_{x \in \mathcal{I}_1} p_X(x) |x\rangle\langle x|_X \otimes \rho_A^x,$$ +$$\sigma_{XA} = \sum_{x \in \mathcal{I}_2} q_X(x) |x\rangle\langle x|_X \otimes \sigma_A^x,$$ +where $\mathcal{I}_1 = \{x: p_X(x) \neq 0\}$ and $\mathcal{I}_2 = \{x: q_X(x) \neq 0\}$.

+ +

Then we take the union $\mathcal{I}=\mathcal{I}_1 \cup \mathcal{I}_2$. We define $\rho_A^x$ to be an arbitrary density operator for $x \notin \mathcal{I}_1$ and similarly $\sigma_A^x$ to be an arbitrary density operator for $x \notin \mathcal{I}_2$.

+ +

We can then rewrite $\rho_{XA}$ and $\sigma_{XA}$ as +$$\rho_{XA} = \sum_{x \in \mathcal{I}} p_X(x) |x\rangle\langle x|_X \otimes \rho_A^x,$$ +$$\sigma_{XA} = \sum_{x \in \mathcal{I}} q_X(x) |x\rangle\langle x|_X \otimes \sigma_A^x.$$

+ +

Since we are adding zero operators, $\rho_{XA}$ and $\sigma_{XA}$ are not changed.

+ +

Then for any $\lambda \in (0,1)$, we want to show $\lambda \rho_{XA} + (1-\lambda) \sigma_{XA}$ is a classical-quantum state. +(Note that the trivial case where $\lambda =1$ or $\lambda = 0$ just gives back $\rho_{XA}$ or $\sigma_{XA}$, respectively.)

+ +

We now define $\xi_{XA} :=\lambda \rho_{XA} + (1-\lambda) \sigma_{XA}.$ +$$ +\xi_{XA} =\lambda \sum_{x \in \mathcal{I}} p_X(x) |x\rangle\langle x|_X \otimes \rho_A^x + (1-\lambda) \sum_{x \in \mathcal{I}} q_X(x) |x\rangle\langle x|_X \otimes \sigma_A^x\\ +=\sum_{x \in \mathcal{I}} |x\rangle\langle x|_X \otimes (\lambda p_X(x) \rho_A^x + (1-\lambda) q_X(x)\sigma_A^x) \\ +=\sum_{x \in \mathcal{I}} w_X(x)|x\rangle\langle x|_X \otimes \xi_A^x, +$$ +where $w_X(x) = \lambda p_X(x) + (1-\lambda) q_X(x)$ and $\xi_A^x = \frac{\lambda p_X(x) \rho_A^x + (1-\lambda) q_X(x)\sigma_A^x}{\lambda p_X(x) + (1-\lambda) q_X(x)}$.

+ +

Since $x \in \mathcal{I}$, we cannot have both $p_X (x)=0$ and $q_X(x)=0$. For $\lambda \in (0,1)$, we have $w_X(x) \neq 0$ for $x \in \mathcal{I}.$

+ +

Also, $\sum_{x \in \mathcal{I}} w_X(x) = \sum_{x \in \mathcal{I}} [\lambda p_X(x) + (1-\lambda) q_X(x)] = 1$.

+ +

Therefore, the state $\xi_{XA}$ is a classical-quantum state. +So, I conclude the set of classical-quantum states is convex.

+ +

Can anyone point out where I made a mistake?

+ +

Or is there a typo in the textbook?

+",5427,,55,,11/30/2021 22:26,11/30/2021 22:26,Is the set of classical-quantum states convex?,,1,0,,,,CC BY-SA 4.0 +5064,2,,5063,12/27/2018 22:34,,5,,"

Your mistake is that you assume that $\rho$ and $\sigma$ are classical-quantum in the same classical basis on $X$. However, there is no need to do so -- all which is necessary is that there exists such a basis, which can however depend on the state. As soon as you choose a different classical basis for the two states, your argument breaks down.

+",491,,,,,12/27/2018 22:34,,,,3,,,,CC BY-SA 4.0 +5065,1,5078,,12/28/2018 5:11,,2,133,"

I refer to this paper but reproduce a simplified version of their argument. Apologies if I have misrepresented the argument of the paper!

+ +

Alice has a classical description of a quantum state $\rho$. Alice and Bob both agree on a two outcome observable $M$. Now, the goal for Bob is to come up with a classical description of a state $\sigma$ that gives the right measurement statistics i.e. $Tr(M\rho) \approx Tr(M\sigma)$.

+ +

The way this is done is that Bob has a guess state, say the maximally mixed state, $I$. Alice then tells him the value of $Tr(M\rho)$. Bob then measures the maximally mixed state repeatedly (or he runs a classical simulation of this with many copies of the maximally mixed state) and ""postselects"" the outcomes where he obtains $Tr(M\rho) \approx Tr(MI)$. In this way, he obtains a state that reproduces the measurement statistics of the true state.

+ +

What is the meaning of postselection in this context? How does Bob go from $I$ to $\sigma$ in this procedure?

+",4831,,26,,12/31/2018 16:03,12/31/2018 16:03,How is postselection used in quantum tomography?,,1,0,,,,CC BY-SA 4.0 +5066,2,,5045,12/28/2018 13:36,,8,,"

The equation at the top of the question is not correct: there is a missing factor of $1/d$ on the right-hand side. Let's eliminate this factor from the left-hand side to make it simpler, so that the equation we want is this: +$$ +\text{Tr}\bigl(\rho^{AB} \bigl(\sigma^A \otimes I\bigr)\bigr) = \text{Tr}\bigl(\rho^A \sigma^A\bigr). +$$

+ +

To see why this is true, it helps to start with an easy special case, which is that $\rho^{AB}$ is a product state: +$$ +\rho^{AB} = \rho^A \otimes \rho^B. +$$ +In this case we have +$$ +\text{Tr}\bigl(\bigl(\rho^A \otimes\rho^B\bigr) \bigl(\sigma^A \otimes I\bigr)\bigr) = \text{Tr}\bigl(\rho^A \sigma^A\bigr)\text{Tr}\bigl(\rho^B\bigr) = \text{Tr}\bigl(\rho^A \sigma^A\bigr), +$$ +using just elementary properties of tensor products and their traces.

+ +

Now, given that the equation is true in the special case, it has to be true in general because the expressions +$$ +\text{Tr}\bigl(\rho^{AB} \bigl(\sigma^A \otimes I\bigr)\bigr)\;\;\text{and}\;\;\text{Tr}\bigl(\rho^A \sigma^A\bigr) +$$ +depend linearly on $\rho^{AB}$, and the set of all product states $\rho^A\otimes\rho^B$ spans the vector space of all operators acting on $H_A\otimes H_B$.

+ +

Alternatively, we have +$$ +\text{Tr}((X\otimes Y)(Z\otimes I)) = \text{Tr}(XZ)\text{Tr}(Y) = \text{Tr}\bigl(\text{Tr}_B(X\otimes Y)\, Z\bigr) +$$ +for all operators $X$ and $Z$ acting on $H_A$ and all $Y$ acting on $H_B$, irrespective of their traces, and therefore +$$ +\text{Tr}(W(Z\otimes I)) = \text{Tr}\bigl(\text{Tr}_B(W) Z\bigr) +$$ +for all operators $W$ acting on $H_A\otimes H_B$ by linearity.

+",1764,,,,,12/28/2018 13:36,,,,4,,,,CC BY-SA 4.0 +5067,2,,5053,12/28/2018 15:38,,5,,"

Nonlocal games such as the CHSH game are not impartial games in the sense of Sprague-Grundy. Alice and Bob are thought to be cooperating rather than competing, and randomness is central to the study of nonlocal games. Impartial games are competitive, deterministic, and perfect information.

+ +

A good sanity check when asking about quantizing some classical object is to first see whether it can be meaningfully randomized.

+",483,,,,,12/28/2018 15:38,,,,2,,,,CC BY-SA 4.0 +5068,1,,,12/28/2018 23:34,,5,136,"

If I implement an adder operation in Q#, I'd like to see a quantum circuit diagram of what that adder is doing in order to check that it looks right. Is there a built-in way to do this?

+",119,,26,,01-01-2019 07:32,01-01-2019 07:32,How do I produce circuit diagrams from a Q# program?,,2,1,,,,CC BY-SA 4.0 +5069,1,,,12/29/2018 3:41,,2,79,"

I'm trying to write unit tests for some small Q# operations. It would be ideal if I could access the wavefunction. Is there a way to get it?

+ +

I found Microsoft.Quantum.Diagnostics.DumpRegister, but it writes its output to the console or a file in a format intended for humans. I don't want to parse a non-trivial file format as part of writing a unit test.

+",119,,2879,,11/15/2019 18:07,11/15/2019 18:08,Programmatic access to wavefunction in Q# for tests,,1,1,,,,CC BY-SA 4.0 +5070,1,5072,,12/29/2018 4:06,,4,209,"

In Q#'s type documentation, it is mentioned that you can create signatures like this:

+ +
function ConjugateInvertibleWith : (inner: ((Qubit[] => Unit) : Adjoint),
+                                    outer : ((Qubit[] => Unit) : Adjoint))
+                                 : ((Qubit[] => Unit) : Adjoint)
+
+ +

My question is: how is this function actually implemented?

+ +

Presumably a function with this name will return an operation that, when invoked, calls outer, then inner, then adjoint outer. However, I have no idea how to actually write a function like this. In particular, it's not clear how to write the equivalent of a lambda with a closure. For example, if I try to declare an operation inside a function (similar to how you can def inside a def in python), I get a syntax error.

+ +

Does this have to be done in a non-Q# library, like in C#, then imported into Q#? If so, how?

+",119,,26,,01-01-2019 07:32,11/15/2019 18:06,How do I write functions that modify operations in Q#?,,1,0,,,,CC BY-SA 4.0 +5071,2,,5069,12/29/2018 5:21,,2,,"

For unit testing, you can use Assert* operations which allow you to verify that certain properties of the wavefunction match your expectations, for example, AssertProbInt operation or Microsoft.Quantum.Diagnostics namespace. The documentation mentions some of them here; you can also do ""Filter by title"" for library reference using ""Assert"" query and check which ones fit your specific goal best.

+",2879,,2879,,11/15/2019 18:08,11/15/2019 18:08,,,,5,,,,CC BY-SA 4.0 +5072,2,,5070,12/29/2018 5:31,,5,,"

For this example, one obtains a function with that signature by partial application of an operation that is defined outside the body, instead of as a lambda in the function. As a concrete example, consider this non-generic version of the WithA operation, modified from Q# canon.

+ +
operation WithA(
+    outer : (Qubit[] => Unit : Adjoint), 
+    inner : (Qubit[] => Unit : Adjoint), 
+    target : Qubit[]) 
+    : Unit
+{
+    body (...)
+    {
+        outer(target);
+        inner(target);
+        Adjoint outer(target);
+    }
+    adjoint invert;
+}
+
+ +

This applies the sequence $|\textrm{target}\rangle\rightarrow\textrm{outer}^\dagger\cdot\textrm{inner}\cdot\textrm{outer}|\textrm{target}\rangle$.

+ +

We can then partially apply the target, by using the underscore character in place of an argument, to create the desired signature as follows.

+ +
function WithAFunction(
+    outer : (Qubit[] => Unit : Adjoint), 
+    inner : (Qubit[] => Unit : Adjoint)) 
+    : ((Qubit[] => Unit) : Adjoint)
+{
+    return WithA(outer, inner, _);
+}
+
+",5370,,2879,,11/15/2019 18:06,11/15/2019 18:06,,,,0,,,,CC BY-SA 4.0 +5073,1,,,12/29/2018 9:16,,5,430,"

Is there any way to get information from intermediate points in the execution of a quantum circuit. This is not possible on real quantum hardware but would be very useful on the Aer simulators for both learning quantum programming and for debugging.

+",1370,,26,,03-12-2019 09:15,11/19/2019 22:11,Introspecting quantum circuit execution on Qiskit Aer simulators,,1,2,,,,CC BY-SA 4.0 +5074,1,,,12/29/2018 16:33,,4,54,"

I am working in Adiabatic Quantum Computing and I have a $6\times6$ Hamiltonian. I have only a symbolic expression for its eigenstates, which involve complicated expressions in the roots of the degree-$6$ characteristic polynomial of the Hamiltonian, and even ordinary computation with these eigenstates is beyond the power of Mathematica on my computer.
+I need to evaluate geometric-phase terms like $\langle\psi_n|\partial _{\phi}|\psi_n\rangle$, where $\phi$ is a driving adiabatic parameter and $|\psi_n\rangle$ are the eigenstates of the Hamiltonian.

+ +

Now, I can get numerics for $|\psi_n\rangle$, but once I have them, how can I evaluate the differential $\partial_{\phi}|\psi_n\rangle$? +Or at least approximate it in some form? It is impossible to get a manageable symbolic form for $|\psi_n\rangle$ with such messy calculations; all that can be done is to write the Hamiltonian numerically and find its numerical eigenstates.

+ +

I have tried using all kinds of numerical functions available in Mathematica, but numerical differentiation in one parameter does not seem feasible at this level of complexity (it takes a very long time).
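To make the finite-difference route concrete, here is a minimal Python/NumPy sketch (the $2\times 2$ Hamiltonian and the angle $\theta = 0.7$ are illustrative stand-ins, not your $6\times 6$ problem). The essential step is fixing the arbitrary global phase that the eigensolver returns before differencing; otherwise $\langle\psi_n|\partial_\phi|\psi_n\rangle$ is gauge-dependent garbage:

```python
import numpy as np

def hamiltonian(phi, theta=0.7):
    # Illustrative 2x2 stand-in: a spin-1/2 in a field at polar angle theta,
    # azimuthal angle phi. Replace with your 6x6 numerical Hamiltonian.
    return np.array([[np.cos(theta), np.sin(theta) * np.exp(-1j * phi)],
                     [np.sin(theta) * np.exp(1j * phi), -np.cos(theta)]])

def ground_state(phi):
    w, v = np.linalg.eigh(hamiltonian(phi))
    psi = v[:, 0]                         # eigh sorts eigenvalues ascending
    # Gauge fixing: rotate away the eigensolver's arbitrary global phase
    # by making the first component real and positive.
    return psi * np.exp(-1j * np.angle(psi[0]))

def berry_connection(phi, h=1e-6):
    # Central finite difference for |d psi / d phi>, then the overlap.
    dpsi = (ground_state(phi + h) - ground_state(phi - h)) / (2 * h)
    return np.vdot(ground_state(phi), dpsi)   # <psi | d_phi psi>
```

For this toy Hamiltonian the result can be checked against the analytic value $i\cos^2(\theta/2)$; for your $6\times 6$ case the same gauge-fix-then-difference recipe applies, with the caveat that the gauge-fixing component must stay away from zero along the path.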

+",4889,,,,,12/29/2018 16:33,Numerical approximation to eigenstates and their differentials,,0,1,,,,CC BY-SA 4.0 +5075,1,5077,,12/29/2018 16:38,,4,153,"

For example, if I've defined the operation PlusEqual, I'd like to say

+ +
operation MinusEqual = Adjoint PlusEqual;
+
+ +

but this produces a syntax error.

+ +

Is the only way to do this by exhaustively re-listing all arguments and functors?

+ +
operation MinusEqual (all_args_and_types_again) : ReturnType {
+    body (...) {
+        return Adjoint AddEqual(all_args);
+    }
+    adjoint auto;
+    controlled auto;
+    controlled adjoint auto;
+}
+
+",119,,26,,01-01-2019 08:09,01-01-2019 08:09,How do I name the adjoint of an operation in Q#?,,1,0,,,,CC BY-SA 4.0 +5076,1,5087,,12/29/2018 18:40,,5,300,"

Quantum parallelism is usually introduced from the CNOT scheme: calling $|c\rangle$ the control and $|t\rangle$ the target, it gives +$$ |0\rangle |t\rangle \implies |0\rangle |t\rangle $$

+ +

$$ |1\rangle |t\rangle \implies |1\rangle X|t\rangle $$ +Generalizing that for a generic controlled-$U$ gate, it gives +$$ +|1\rangle |t\rangle \implies |1\rangle U|t\rangle +$$ +Note that this separation of the output is possible only if the control $|c\rangle$ is a computational basis state (i.e. $|0\rangle$ or $|1\rangle$).

+ +

As a further step it is said that, if $|t\rangle = |0\rangle$, the controlled-$U$ behaves as follows +$$ +|c\rangle |0\rangle \implies |c\rangle |f(c)\rangle +$$ +with $f(x)$ some function, not necessarily equal to $U$. More generally +$$ +|c\rangle |t\rangle \implies |c\rangle |t \oplus f(c)\rangle +$$ +Now my question: where are the above two expressions derived from? And how can we build $f(x)$ from $U$? Working through examples with $C_X$, $C_Z$, $C_Y$, the unitaries needed to implement $f(x)$ are always different, and I couldn't find any clear logic behind it.

+ +

Any clarification, or a working examples, would be great!

+",4927,,26,,12/31/2018 23:17,12/31/2018 23:17,How does quantum function parallelism work?,,2,0,,,,CC BY-SA 4.0 +5077,2,,5075,12/29/2018 18:57,,4,,"

You can define an immutable symbol for MinusEqual inside the body of an operation which will use it (you can't define it globally):

+ +
operation UseMinusEqual () : Unit {
+    ...
+    let MinusEqual = Adjoint PlusEqual;
+    MinusEqual(...);
+}
+
+ +

If you need MinusEqual to be a globally visible operation, there is no shorthand syntax for this right now, so the only way to do it is a full operation definition like you say.

+",2879,,2879,,12/29/2018 19:08,12/29/2018 19:08,,,,0,,,,CC BY-SA 4.0 +5078,2,,5065,12/29/2018 19:08,,1,,"

In principle, Bob here just has to guess the $2\times 2$ matrix $\sigma$. +If he starts with any parametric state $\sigma(\alpha,\beta)$ with $\alpha,\beta\in\mathbb{C}$ and measures the outcome Tr$(M\sigma)$, with post-measurement state $\sigma '=M\sigma M^\dagger/\text{Tr}(M\sigma M^\dagger)$, he receives a number and has to tune $\alpha,\beta$ to come close to the value Tr$(M\rho)$. This tuning is done by postselection, meaning that he selects the state $\sigma$ close to $\rho$ under the constraint of minimising the relative entropy +\begin{equation} +\text{Tr}(\rho\ln\rho-\rho\ln\sigma) +\end{equation} +so as to move closer to the state $\rho$, expressed in terms of $\alpha,\beta$. In such problems one usually starts with a parametric state in one variable and optimises over it.

+",4889,,,,,12/29/2018 19:08,,,,0,,,,CC BY-SA 4.0 +5079,2,,4684,12/29/2018 20:18,,1,,"

This question is precisely answered within the following work: +https://arxiv.org/abs/1511.08144

+ +

They show that the quantum bounds are the upper bounds for classical probabilistic correlations, and that this is not unique to quantum mechanics: the upper bounds can be attained by other frameworks which are deterministic. They present a simple experiment based on a tree network via which this can be realized.

+",,user5438,26,,01-02-2019 12:21,01-02-2019 12:21,,,,1,,,,CC BY-SA 4.0 +5080,2,,5076,12/29/2018 20:39,,2,,"

For $cX$,$cY$ and $cZ$ first.

+ +

Let $U_Y$ be the 1 qubit unitary such that $U_Y^\dagger X U_Y = Y$ and similarly for $Z$. This can be done because they all diagonalize to the same thing $Z$, so you can find that $U$ as an exercise.

+ +

Suppose you did $U_Y$ on the target qubit, then a CNOT and then a $U_Y^\dagger$ on the second qubit.

+ +

If the initial state is of the form $|0\rangle | t \rangle$ then you get $| 0 \rangle ( U_Y | t \rangle )$ after the first step. The second step doesn't change anything and then the third step takes you to $| 0 \rangle ( U_Y^\dagger U_Y | t \rangle )$ which is the same as the starting state. So that agrees with what $cY$ should have done on this type of state.

+ +

If the initial state is of the form $|1\rangle | t \rangle$ then you get $| 1 \rangle ( U_Y | t \rangle )$ after the first step. The second step does change things this time. You get $| 1 \rangle ( X U_Y | t \rangle )$. The third step then takes you to $| 1 \rangle ( U_Y^\dagger X U_Y | t \rangle )$. But by definition of $U_Y$ this is $| 1 \rangle ( Y | t \rangle )$.So that agrees with what $cY$ should have done on this type of state.

+ +

The same applies mutatis mutandis with $Z$ replacing $Y$ to make $cZ$.
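A quick numerical check of the construction above (Python/NumPy sketch; $U_Y = \mathrm{diag}(1,-i)$ is one valid choice satisfying $U_Y^\dagger X U_Y = Y$):

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
U_Y = np.diag([1, -1j])            # one choice with U_Y^dagger X U_Y = Y
CNOT = np.eye(4)[[0, 1, 3, 2]]     # control = first qubit

# U_Y on the target, then CNOT, then U_Y^dagger on the target:
cY_built = np.kron(I2, U_Y.conj().T) @ CNOT @ np.kron(I2, U_Y)

# Reference controlled-Y: identity on the |0> control block, Y on the |1> block.
cY = np.block([[I2, np.zeros((2, 2))], [np.zeros((2, 2)), Y]])
```

Block-diagonally, $(\mathbb{1}\otimes U_Y^\dagger)\,\mathrm{CNOT}\,(\mathbb{1}\otimes U_Y) = \mathrm{diag}(U_Y^\dagger U_Y,\, U_Y^\dagger X U_Y) = \mathrm{diag}(\mathbb{1}, Y)$, which is the two cases worked through above.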

+ +

Secondarily:

+ +

Say you have some complicated expression for $U$ but it is written as a product of $U_1 \cdots U_n$ but you know how to make controlled versions of each of the $U_i$. Then you can make $cU$ by $cU_1 \cdots cU_n$. Like $U=XZY$ turning into $cX cZ cY$.

+ +

General 2-qubit $cU$:

+ +

You can use theorem 5 of this paper to decompose other controlled unitaries $cU$ besides those where there exists a $V$ such that $V^\dagger X V=U$. In those cases you can do the first procedure, but if not, you must do something more.

+ +

For 2 qubits with $f(c)$:

+ +

Suppose $f(0)=0$; then either $f(1)=1$, in which case you are describing a controlled-NOT, or $f(1)=0$, in which case you are describing the identity. +If $f(0)=1$, then either $f(1)=1$, in which case you are describing an $X$ on the target qubit regardless of the control, or $f(1)=0$, in which case you are doing a controlled-NOT with the sense of the control reversed, so that $0$ means perform the operation and $1$ means do not. This would be implemented by conjugating by an $X$ on the control qubit.

+ +

You should rather be thinking of this as building the unitary from the function, not the other way around. There are functions $\{0,1\}^c \to \{0,1\}^t$, where $c$ is the number of control qubits and $t$ is the number of target qubits, and XOR is replaced by bitwise XOR in your formula. These give unitary matrices, as you describe, on $(\mathbb{C}^2)^{\otimes(c+t)}$. But there are far more unitaries than functions, so you can't expect to go from unitary to function, only the other way around.

+",434,,26,,12/31/2018 16:12,12/31/2018 16:12,,,,4,,,,CC BY-SA 4.0 +5081,1,5082,,12/30/2018 3:18,,2,191,"

If I write this:

+ +
function f(n: Int) : Double {
+    return 1.5*n;
+}
+
+ +

I get an error:

+ +
The arguments to the binary operator do not have a common base type.
+
+ +

Apparently I need a function to turn n from an Int into a Double. There are lots of functions going from double to int, like Microsoft.Quantum.Extensions.Math.Floor, but I wasn't able to find anything from int to double.

+",119,,26,,01-01-2019 07:32,5/13/2019 19:29,How do I multiply an integer and a double in Q#?,,1,5,,,,CC BY-SA 4.0 +5082,2,,5081,12/30/2018 4:23,,5,,"

You want the Microsoft.Quantum.Extensions.Convert.ToDouble function (deprecated in favor of Microsoft.Quantum.Convert.IntAsDouble in 0.6 release).

+ +
open Microsoft.Quantum.Extensions.Convert;
+
+function f(n: Int) : Double {
+    return 1.5*ToDouble(n);
+}
+
+ +

The reason it works this way is because in Q# (Num a) => a -> a -> a is not the same as (Num a,Num b) => a -> b -> a as Haskell would denote it.

+",434,,2879,,5/13/2019 19:29,5/13/2019 19:29,,,,0,,,,CC BY-SA 4.0 +5083,1,,,12/30/2018 5:15,,-1,90,"
+

It is conventional to assume that the state input to the circuit is a + computational basis state, usually the state consisting of all + $|0\rangle$s. This rule is broken frequently in the literature on + quantum computation and quantum information, but it is considered + polite to inform the reader when this is the case.

+
+ +

What does this mean?

+ +

Source: M. Nielsen and I. Chuang. Quantum Computation and Quantum Information. Cambridge UniversityPress, 2000. (1.3.4 Quantum Circuits)

+",5439,,26,,01-01-2019 10:08,01-01-2019 10:08,Default input states of qubits to quantum circuits,,1,2,,04-03-2019 13:21,,CC BY-SA 4.0 +5084,2,,5083,12/30/2018 5:43,,1,,"

Some quantum algorithms assume that you have prepared a special input state, rather than the all-$|0\rangle$ computational basis state.

+ +

For example, you can just say you start in the $| + \rangle$ state for all your qubits. +Another one is starting in a state where you have encoded your input in the amplitudes of a quantum state (this type of encoding is called amplitude encoding). This is used in quantum linear system solvers (originally HHL) to input a vector in a quantum form to solve a linear system.
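As a minimal numeric sketch of amplitude encoding (the 4-entry vector is an arbitrary illustrative choice): normalise a classical vector so that its entries become the amplitudes of a 2-qubit state.

```python
from math import sqrt

v = [3.0, 1.0, 2.0, 1.0]                 # classical input vector (illustrative)
norm = sqrt(sum(x * x for x in v))
state = [x / norm for x in v]            # amplitudes of the 2-qubit state |v>
# The probability of measuring basis state |00> is |state[0]|^2.
```

This is the input format assumed by quantum linear-system solvers: the vector lives in the amplitudes, so $n$ qubits hold $2^n$ entries.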

+",4127,,,,,12/30/2018 5:43,,,,0,,,,CC BY-SA 4.0 +5085,1,5088,,12/30/2018 15:36,,5,242,"

In Shor's algorithm we require the period to be even. If the period is not even or $x^{r/2}+1 \equiv 0 \bmod N$ then we have to restart the process and pick a new random $x$. Why do we know that the process will work for some $x$ and still be quicker than the current method of just checking if factors work?

+ +

(Here $r$ is the period, $x$ is the random number given at the start of Shor's, $N$ is the number to be factorised.)

+",5328,,26,,12/31/2018 16:21,12/31/2018 16:21,Shor's algorithm effectiveness,,1,0,,,,CC BY-SA 4.0 +5086,1,,,12/30/2018 18:56,,7,81,"

For example, given the $R$ & $F$ gates and toric codes for a given problem, how does one convert this code into the conventional circuit model, and vice versa? +From the literature, it seems that the two models tackle fairly different kinds of problems in their realizations.

+ +

How would one write, say, Grover's algorithm for a topological experiment? +What is the correspondence for converting between the two models back and forth?

+",,user5438,26,,12/31/2018 16:18,12/31/2018 16:18,Correspondence between the Topological model and Quantum Circuit model,,0,0,,,,CC BY-SA 4.0 +5087,2,,5076,12/31/2018 8:18,,2,,"

Generically, given any controlled-$U$, you cannot go backwards and work out the function $f(x)$, because it may not exist. For example, controlled-Hadamard takes a basis state and returns a superposition of basis states rather than a single basis state output. So there's no single $f(x)$ value to identify.

+ +

If it happens to be the case that for all inputs of the form $|x\rangle|0\rangle$ you get an output of the form $|x\rangle|f(x)\rangle$, then you could literally just write down a table of the values $x$ and the corresponding $f(x)$. That truth table would define the function. Of course, that does not mean that $U$ acting on $|x\rangle|y\rangle$ necessarily produces $|x\rangle|y\oplus f(x)\rangle$.

+ +

For example, for controlled-not, one has

+ +
+---+------+
+| x | f(x) |
++---+------+
+| 0 |    0 |
+| 1 |    1 |
++---+------+
+
+ +

i.e. f(x) is the identity function.

+ +

What you can do is work in the other direction. If you're given $f(x)$, you can figure out the circuit that gives you a unitary $U$ that acts as +$$|x\rangle|y\rangle\mapsto |x\rangle|y\oplus f(x)\rangle.$$ +This is not a quantum problem, but a problem of reversible classical computation (how do you convert a function evaluation into a reversible function, and find the circuit).

+ +

For instance, imagine I have a two-bit input $x$, and I want to evaluate $f(x)$ as the AND function. You need the truth table

+ +
+----+---+------------+
+| x  | y | y XOR f(x) |
++----+---+------------+
+| 00 | 0 |          0 |
+| 01 | 0 |          0 |
+| 10 | 0 |          0 |
+| 11 | 0 |          1 |
+| 00 | 1 |          1 |
+| 01 | 1 |          1 |
+| 10 | 1 |          1 |
+| 11 | 1 |          0 |
++----+---+------------+
+
+ +

Hopefully, you can convince yourself that this is the same as the Toffoli gate (controlled-controlled-not).
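The table-to-unitary construction can be done mechanically. Here is a Python/NumPy sketch (the bit-ordering convention, input bits as the high bits of the index, is an assumption of this example): build the permutation matrix for $|x\rangle|y\rangle\mapsto|x\rangle|y\oplus f(x)\rangle$ and check that for $f = \mathrm{AND}$ it reproduces the Toffoli gate.

```python
import numpy as np

def oracle_unitary(f, n_in, n_out=1):
    # Permutation matrix for |x>|y> -> |x>|y XOR f(x)>,
    # with x in the high bits of the basis-state index and y in the low bits.
    dim = 2 ** (n_in + n_out)
    U = np.zeros((dim, dim))
    for x in range(2 ** n_in):
        for y in range(2 ** n_out):
            U[(x << n_out) | (y ^ f(x)), (x << n_out) | y] = 1
    return U

# f = AND of the two input bits, as in the truth table above.
U_and = oracle_unitary(lambda x: int(x == 0b11), n_in=2)
```

Because $y\mapsto y\oplus f(x)$ is its own inverse, $U$ is always a self-inverse permutation matrix, which is the reversibility that makes the construction work for any $f$.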

+",1837,,,,,12/31/2018 8:18,,,,1,,,,CC BY-SA 4.0 +5088,2,,5085,12/31/2018 8:23,,6,,"

Check out Theorem 5.3 in Nielsen and Chuang. It shows that we are almost guaranteed to get a good value of $x$ (the probability is stated to be at least $1-\frac{1}{2^m}$, where $m$ is the number of distinct prime factors of $N$). The expected number of repetitions of the algorithm (due to this effect) is only just more than 1; at worst, it's 2. There's no absolute guarantee that in a particular instance you wouldn't need many, many repetitions, it's just ridiculously unlikely.
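For small $N$ one can check this fraction classically by brute force. A Python sketch ($N=15$ is an illustrative choice; the order-finding here is done classically, standing in for the quantum subroutine):

```python
from math import gcd

def order(x, N):
    # Multiplicative order of x mod N, by brute force; this is the
    # quantity Shor's quantum subroutine estimates.
    r, y = 1, x % N
    while y != 1:
        y = (y * x) % N
        r += 1
    return r

def good_x(x, N):
    # x yields a factor iff gcd(x, N) > 1 (lucky guess), or its order r
    # is even with x^(r/2) != -1 (mod N).
    if gcd(x, N) != 1:
        return True
    r = order(x, N)
    return r % 2 == 0 and pow(x, r // 2, N) != N - 1

N = 15
good = [x for x in range(2, N) if good_x(x, N)]
```

For $N=15$ only $x=14$ fails, so a uniformly random $x$ succeeds with probability well above the $1 - 1/2^m = 3/4$ bound for $m=2$ distinct prime factors.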

+",1837,,,,,12/31/2018 8:23,,,,2,,,,CC BY-SA 4.0 +5089,2,,5068,12/31/2018 9:06,,8,,"

Unfortunately, there is indeed currently no way to generate circuit diagrams from a Q# program. Since this is a feature request, consider making it here: https://quantum.uservoice.com/forums/906940-debugging-and-simulation.

+ +

To give a little bit of context, Q# makes a conscious effort to encourage reasoning about quantum algorithms in terms of control flow and transformations rather than circuits, allowing e.g. qubit aliasing. Correspondingly, the kit focuses on tools that are commonly used in software engineering like unit testing instead of circuit diagrams. I can understand that a diagram may come in handy for publications, though it is certainly not unheard of to give pseudo code instead.

+",5448,,,,,12/31/2018 9:06,,,,2,,,,CC BY-SA 4.0