GigaOm CEO Byron Reese recently sat down with Scott Aaronson to discuss quantum computing. Aaronson is the David J. Bruton Jr. Centennial Professor of Computer Science at the University of Texas at Austin, where he also directs the UT Quantum Information Center. Prior to UT, he taught Electrical Engineering and Computer Science at MIT. Aaronson’s research focuses on the capabilities and limits of quantum computers and computational complexity.
Byron Reese: Welcome Scott.
Scott Aaronson: Thanks, great to join you.
So it seems like you’re on a one-man crusade to dispel all the popular notions of quantum computing. Why don’t we start with that?
Okay, well, I write a blog, and basically what happens is that every time there’s some really outrageous claim about quantum computing in the press, which is often, people start emailing me and asking me to respond to it. So just by circumstance, and because no one else set out to do this, I became the one who did a lot of the responding.
And so the thing that you often have to respond to is this idea that a quantum computer has all these qubits that are zero and one simultaneously, and therefore it can solve really complex problems by looking at every possible combination of those all at once, and that’s not true. So explain why that isn’t true, or what is true, however you want to do it.
Well, it takes some time to explain…
Take all the time you want.
All right, so a qubit, which is the quantum version of a bit, can be in what we call a superposition of the zero and one states. So it’s neither definitely zero nor definitely one. And the main problem is that people always want to round this down to something that they already know. They’ll say, “Well, you just mean that the bit is either zero or it’s one, and you just don’t know which,” right? And then you look at it and you see which one. Well, if that’s all there was to it, it wouldn’t be so interesting. And this is what the popular articles often do: they switch to saying, “Well, it must be both zero and one simultaneously,” and then, well, if I had 1,000 qubits, they could explore, let’s say, two to the 1,000th power possibilities simultaneously, and that must be what produces this enormous speedup that quantum computation promises.
That is gesturing towards something in the vicinity of the truth, but the problem is that when you measure a qubit, you only see one result. You see a zero or a one; you don’t see both. And what quantum mechanics is, at its core, is a way of calculating the probability that you’re going to see one outcome or another when you make an observation. Now, the key point is that quantum states don’t obey the normal rules of probability that we know. A probability is a number from zero to one, so you could have a 30% chance of rain tomorrow, but you never have a -30% chance, right? That would be nonsense. But quantum mechanics is based on numbers called amplitudes, which can be positive or negative; in fact, they can even be complex numbers. When you make a measurement, these amplitudes turn into probabilities, and a larger amplitude becomes a larger probability of seeing that outcome. But while a system is isolated, the amplitudes can evolve by rules that are very unfamiliar to everyday experience. That is what pretty much everything you’ve ever heard about the weirdness of the quantum world boils down to.
So what a qubit really is, is a bit that has some amplitude for being zero and some amplitude for being one; it has one of these complex numbers attached to each of the possibilities. If I had 1,000 qubits, likewise, there would be an amplitude that I would have to assign to every possible setting of all 1,000 bits. So quantum mechanics has been telling us since the 1920s that just to keep track of the state of, let’s say, 1,000 measly particles, there is an immense object beneath the surface: nature is somehow keeping track of a list of two to the 1,000th power complex numbers, if you like, which is more numbers than could be written down in the entire observable universe. But again, the problem is that when you make a measurement, you don’t actually see these numbers; you just see a single probabilistic outcome. So you could create what we call an equal superposition over all possible answers to your hard problem; that’s actually a very easy thing to do with a quantum computer.
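To make the amplitude-to-probability rule concrete: when you measure, each outcome’s probability is the squared magnitude of its amplitude (the Born rule). Here is a minimal sketch in Python; the specific amplitude values are illustrative, not from the interview.

```python
import numpy as np

# One qubit: a complex amplitude for |0> and one for |1>.
# These particular values are illustrative; any unit-length vector is a valid state.
state = np.array([1 / np.sqrt(2), -1j / np.sqrt(2)])  # amplitudes may be negative or complex

# Born rule: the probability of each outcome is the squared magnitude of its amplitude.
probs = np.abs(state) ** 2
print(probs)        # [0.5 0.5] -- a 50/50 chance of seeing 0 or 1
print(probs.sum())  # 1.0 -- probabilities sum to one

# For n qubits the state vector holds 2**n amplitudes, which is why 1,000 qubits
# correspond to a list of 2**1000 complex numbers.
n = 10
uniform = np.full(2 ** n, 1 / np.sqrt(2 ** n))  # equal superposition over all 2**n outcomes
print((np.abs(uniform) ** 2)[:4])  # every outcome equally likely (1/1024 each):
                                   # measuring this state just gives a random answer
```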
The problem is that if you just did that, then when you measure, quantum mechanics tells you that all you’re going to see is a random answer, and if you just wanted a random answer, well, you could have picked one yourself with a lot less trouble, right? So the entire hope for getting a speed advantage from a quantum computer is to exploit the way that these amplitudes work differently from probabilities.
The main thing that amplitudes can do that probabilities don’t is interfere with each other. This is most famously illustrated in the double-slit experiment, which we were just discussing before the show. This is the thing where you shoot photons one at a time at a screen with two small slits in it, and you look at where they end up on a second screen behind it, and what you find is that there are certain spots where the photon almost never appears, and yet, if you close off one of the slits, then the photon can appear in those spots.
To say that again: by decreasing the number of paths that the photon could take to reach a certain spot, you can increase the chance that it gets to that spot. This is the thing that violates any conventional understanding of probability. If the photon were just going through one slit or the other, this would be nonsense. Whereas if you observe which slit the photon is going through, or more generally, if the information about which slit it’s going through leaks out into the external world in any way, then again the photon can appear in those spots; and then you stop measuring which slit it went through, and it doesn’t appear there anymore. So the quantum mechanical explanation is that the photon has some amplitude of reaching that spot through the first slit and some amplitude of reaching it through the second slit, and to find the final amplitude that it gets to a certain place, you have to add up the amplitudes from all the ways it could have reached that spot. Now, if one of those amplitudes is positive and the other one is negative, then those amplitudes can, as we say, interfere destructively and cancel each other out, with the result that the final amplitude is zero, and so that event doesn’t happen at all; whereas if you close off one of the slits, then the remaining amplitude is positive or negative, and so the photon can appear there.
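The arithmetic behind that cancellation is simple enough to show directly. A toy calculation, with made-up path amplitudes, of how adding amplitudes before squaring differs from adding probabilities:

```python
import numpy as np

# Two paths to the same spot on the screen, with made-up amplitudes
# of equal magnitude but opposite sign.
amp_slit1 = +1 / np.sqrt(2)
amp_slit2 = -1 / np.sqrt(2)

# Both slits open: amplitudes add BEFORE squaring, so they cancel.
both_open = abs(amp_slit1 + amp_slit2) ** 2
print(both_open)  # 0.0 -- the photon never lands here

# One slit closed: only one amplitude remains, and the spot lights up.
one_open = abs(amp_slit1) ** 2
print(one_open)   # 0.5

# Ordinary probabilities would instead add AFTER squaring:
classical = abs(amp_slit1) ** 2 + abs(amp_slit2) ** 2
print(classical)  # 1.0 -- closing a slit could never increase the chance
```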
Okay, so that’s quantum interference, which, as I said, is behind pretty much every goofy quantum effect you’ve ever heard about. Now, the idea with the quantum computer is to do something like the double-slit experiment, but on a much more massive scale, where instead of just having one particle, we might have thousands or millions of particles, which could all be correlated with each other. The quantum version of correlation is called ‘entanglement,’ and so the state of 1,000 qubits, as we said, involves two to the 1,000th power amplitudes, and so forth.
Now, what you’re trying to do with every quantum algorithm is to choreograph things in such a way that for each wrong answer to your computational problem, some of the paths leading to that answer have positive amplitude and others have negative amplitude, so on the whole they cancel each other out; whereas the paths leading to the right answer should all be ‘in phase’ with each other. They should all have amplitudes of the same sign, say all positive or all negative. If you can mostly arrange for that to happen, then when you measure the state of your quantum computer, you will see the right answer with a large probability. If you don’t see the right answer, you can simply keep repeating the computation until you do.
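A minimal worked example of this choreography, using a single qubit and the standard Hadamard gate (the particular two-gate circuit is an illustration of the principle, not an algorithm discussed here):

```python
import numpy as np

# The Hadamard gate: sends |0> to an equal superposition of |0> and |1>.
H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)

zero = np.array([1.0, 0.0])  # start in |0>

# After one Hadamard: measuring here would give a useless 50/50 coin flip.
superposed = H @ zero
print(np.abs(superposed) ** 2)  # [0.5 0.5]

# After a second Hadamard: the two paths to |1> carry amplitudes +1/2 and -1/2
# and cancel, while the two paths to |0> are both +1/2 and reinforce.
final = H @ (H @ zero)
print(np.abs(final) ** 2)  # [1. 0.] -- the "wrong answer" |1> has been interfered away
```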
So nature really gives you a very bizarre ‘hammer’ that, I think, can be explained in five or ten minutes as I did, but it doesn’t really compress well to a one-sentence summary. People always want to round it down to “the quantum computer just tries all of the possible answers at once,” but the truth is that if you want to see an advantage, you have to exploit this interference phenomenon. It was really only in the 1990s, more than a decade after physicists like Richard Feynman first proposed the idea of quantum computation, that people finally started figuring out: what are some nails that this hammer can hit? What is this interference ability good for? That’s what really launched quantum computation as a field.
And where are we with it now? Quantum machines exist right?
Oh yeah absolutely, so…
And how many are there?
Yeah, well, people have been doing experiments in quantum computation since the 1990s, but just within the last three or four years it has started to see really serious investment, where it is no longer just the academic pursuit it was when I entered this field around 2000. Google, IBM, Microsoft, Intel, and a bunch of startup companies are now all investing on a scale of hundreds of millions of dollars.
Why are they so expensive?
Well, you’re trying to build a technology that’s never been built before. There are many different approaches to quantum computing, but if you’re doing superconducting qubits, which is maybe the most popular approach today, then at a bare minimum you need to cool everything down to about 10 millikelvin, so that your chip superconducts and you see the quantum behavior, and that means you need an enormous cooling system. Then you need all these custom electronics to send signals to the qubits. And now people are trying to integrate a large number of qubits, so it’s a serious operation.
And who’s the current record holder right now? How many qubits?
It’s a mistake to just look at the raw number of qubits, because there’s a tradeoff. What we really care about is the quality of the qubits. A qubit that maintains its quantum state for a long time, without leaking it into the environment, is a very good qubit. A qubit that just sort of leaks its state really fast is a bad qubit.
Like a discount qubit.
Yeah, exactly. A bad qubit you can’t use for very many steps of computation, right? You can maybe do a few steps of quantum computation, but then the qubit will just die; it will just revert to being a classical bit. So you really need to look at quality. If you just cared about the sheer number, there’s a startup company called D-Wave that notoriously has been selling devices with 2,000 qubits. A few people have bought them, but analysis over the past five years has found that the qubits do not seem to be of good enough quality to see a clear speedup over a classical computer, when you do a fair comparison.
What players like Google and IBM are trying to do right now is something similar to what D-Wave did, namely a chip with a large number of integrated superconducting qubits, but now with qubits of much, much higher quality. Roughly speaking, the D-Wave qubits could maintain their coherence for some nanoseconds, while the new generation of superconducting qubits can maintain their quantum coherence on a scale of tens of microseconds, which doesn’t sound like a long time, but it’s tens of thousands of times longer, and it gives you a lot more room to do interesting quantum operations.
So to answer your question: with this new generation of qubits, I think that Google and IBM and Rigetti have all built chips on the order of 20 qubits, which work reasonably well, though not as well as you would like, and they’re right now in a race to scale this up to about 50-70 qubits. The biggest question (they will surely be able to do that) is how well these qubits will perform when they’re all integrated in one chip. Will they still maintain their quantum coherence long enough for us to actually execute interesting algorithms and see an interesting speedup?
So when you say they become just a plain old ordinary bit, is that like a light bulb burning out and then they’re not good for anything, or is that like a reset button?
There’s a reset button. You can always re-initialize the qubit, but then again you have a very short time to do quantum operations before the qubit’s state leaks into the environment.
So, how big are these machines? If you went into Google, and said, “show me.” Because we tell all these stories about how the first computers filled a room.
Yeah, well, I was just out there a few months ago at their lab in Santa Barbara. Each device is the size of a small room, but almost all of that is just the cooling system and the control electronics; the actual chip where the qubits live is the size of an ordinary computer chip.
Would room-temperature quantum computing ever be a thing?
Conceivably. There are proposals, including optical, photonic quantum computing, that, if one could get them to work at all, could work at room temperature. The superconducting approach, which is maybe the furthest along right now, does currently require very low temperatures, and trapped ions, which is maybe the second approach after superconducting, also requires very low temperatures right now.
Is the development of these machines progressing along a Moore’s Law kind of arc? Are they doubling by some measure in some capability every X months?
I think it’s too early to identify any Moore’s Law pattern. I mean, for god’s sake, we don’t even know which technology is going to be the right one. The community has not converged on whether it’s going to be superconducting qubits or trapped ions or something else. You can make plots of the number of qubits and the coherence times of those qubits, and you do see strong improvement. But the number of qubits has gone up from, let’s say, one or two to 20; it’s kind of hard to see an exponential in those numbers.
Actually, if you plot the coherence time over the past 20 years, I think the error rate has been going down more or less exponentially, so there is a sort of Moore’s Law there. Basically, the rates of decoherence, that is, unwanted interaction between the qubits and their environment, are still too high. They’re still higher than they need to be for us to scale this technology up to, say, millions of qubits, but on the other hand, they are orders of magnitude better than they were 20 years ago when people first started doing these experiments.
That sounds exponential.
Well, yeah, I think there is something there, but the error rates, we know, first of all, can’t be pushed down forever; but secondly, they don’t have to be. This is actually a very important point: there were physicists in the 1990s, when quantum computing was brand new, who said, “This could never work, not even in principle, because you will never perfectly isolate a qubit from its environment, and if it’s not perfectly isolated, then you can only do so many steps until everything is dead. You’re never going to be able to scale this up.”
Now, what changed most people’s views was a fundamental discovery in the mid-90s called quantum error correction, and quantum fault tolerance, and what that said is that you don’t actually need to get the rate of decoherence down to zero; you merely need to make it very small. Initially the estimates were that, let’s say, a one-in-a-billion chance of error per qubit per logical operation would suffice; now I think the estimates are more on the order of one in a thousand. As long as the decoherence rate is low enough, what was found is that you can encode the logical qubits you care about across the collective state of many physical qubits, using a clever error-correcting code, in such a way that even if, let’s say, 1% of your physical qubits go out like that light bulb, you can detect that, fix it, recover the information you care about from the remaining 99%, and then just keep going.
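The flavor of that idea can be sketched with the classical cousin of the quantum bit-flip code: a majority-vote repetition code. (Real quantum codes must also protect against phase errors and cannot copy qubits outright, so this is only an analogy for the detect-and-recover logic.)

```python
import random

def encode(logical_bit, copies=3):
    """Spread one logical bit across several physical bits."""
    return [logical_bit] * copies

def noisy_channel(bits, error_rate=0.01):
    """Each physical bit independently flips with probability error_rate."""
    return [b ^ (random.random() < error_rate) for b in bits]

def decode(bits):
    """Majority vote: recover the logical bit from the surviving copies."""
    return int(sum(bits) > len(bits) / 2)

random.seed(0)
trials = 100_000
failures = sum(decode(noisy_channel(encode(1))) != 1 for _ in range(trials))
# One unprotected bit fails ~1% of the time; the encoded bit fails only when
# 2 of 3 copies flip, roughly 3 * 0.01**2 = 0.03% of the time.
print(f"logical error rate: {failures / trials:.5f}")
```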
The main engineering goal in this field, since the discovery of fault tolerance, has been to get the physical decoherence rate low enough that you can start using these error-correcting codes, and then push the effective decoherence rate down toward zero. We’re not quite there yet. If you just look at one or two qubits in isolation, then the Google group and others, I think four or five years ago, got the decoherence rates good enough that those isolated qubits are already past the threshold for fault tolerance. That itself is fairly recent.
But now the problem is that when you try to integrate a large number of qubits into a single chip, let’s say 50 or 100 of them, then you need much more control electronics, there are many more interactions, and that pushes the rate of decoherence back up. So now the challenge is to maintain a decoherence rate at which you could apply these error-correcting codes, while integrating a huge number of qubits into a single system.
Where is the United States compared to other countries in terms of investment and accomplishments? Is the majority of the activity in the field here in this country, or are we just a small part of it?
I would say that the U.S. is the leader. The efforts of Google and IBM and Rigetti, and the group in Maryland, which is the leader in trapped ions, are all US-based, as are many of the leading theoretical groups, like those at MIT, Caltech, and Berkeley. Canada also happens to be a huge player in quantum computing, particularly the University of Waterloo, which may be the world’s biggest center for this field.
Besides that, Europe is a big player. In fact, the EU recently launched a $1 billion quantum information funding initiative: the Netherlands especially, also the UK (which will no longer be part of the EU), and a bunch of other countries in Europe. China has, for whatever reason, focused much more on quantum communication, which is a different area from quantum computing, and I would say China is now the world leader in quantum communication. They had a breakthrough last summer where, for the first time, they could send a quantum state up to a satellite and back down to Earth, from one end of China to the other, and it maintained its quantum state, maintained its coherence. That could be useful for various applications, but of course it’s a different thing from quantum computing.
That’s an entanglement thing. You flip it one way in Shanghai and it instantly flips the same way in Beijing and in theory in communications, it can’t be hacked?
Okay, well, wait, there’s a bunch of things to disentangle there, so to speak. The first thing is that the Chinese did actually demonstrate distributing quantum entanglement across thousands of miles, which was a distance record for entanglement. But you have to be careful, because entanglement cannot be used to send a signal faster than light. If I have two entangled particles, I can measure one of them, and if I see some outcome like zero, then instantly I know that the other one is also zero. But it’s not like I got to choose that the outcome should be zero. It was going to be zero or one randomly.
But it is faster than the speed of light?
Well, it’s instantaneous, but it is not a channel of communication. What it is, is a form of correlation, which you can use to produce correlations between faraway particles that you could never have produced classically. That was the famous discovery made by John Bell in the 1960s: there are certain experiments you can do on entangled particles whose results could never be explained by any sort of theory where the particles just agree in advance, ‘Listen, if anyone asks, I’ll be a zero and you’ll be a one,’ right? There’s no theory of that kind that explains the results of these experiments. So entanglement is a real phenomenon in the universe, but it’s not useful for actually sending instantaneous signals, right?
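Bell’s discovery can be put in numbers. For a maximally entangled (singlet) pair, quantum mechanics predicts a correlation of -cos(a - b) between measurements at angles a and b, and the standard CHSH combination of four such correlations reaches 2√2, while any agree-in-advance (local hidden variable) theory is capped at 2. A quick check of that arithmetic (the angle choices below are the textbook optimal ones, not values from the interview):

```python
import numpy as np

def E(a, b):
    """Quantum prediction for the correlation between measurements
    at angles a and b on a maximally entangled (singlet) pair."""
    return -np.cos(a - b)

# Standard angle choices that maximize the CHSH quantity.
a, a2 = 0.0, np.pi / 2
b, b2 = np.pi / 4, 3 * np.pi / 4

S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(abs(S))  # 2.828... = 2*sqrt(2), the quantum value
print(2.0)     # the maximum any "agreed in advance" (local hidden
               # variable) theory can achieve, per Bell/CHSH
```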
Einstein’s speed limit, that you can’t send a signal faster than light, is still upheld. The other thing you alluded to was quantum cryptography, quantum key distribution, which is a different idea in quantum information that involves theoretically unbreakable cryptography, which you get by sending qubits across a channel. That actually doesn’t require entanglement, and it can be done with current technology.
There are even companies that sell quantum cryptography devices today. So far there’s been only a very, very small market for them, because, first of all, it doesn’t work over the standard internet: you need special communication infrastructure to do it. And the current devices, the ones that use a fiber-optic cable, work over distances of about 10 miles; after about 10 miles, the photons lose their quantum coherence. So it’s good enough for the financial district of a city, but not for really connecting the whole world. That’s why people were excited when China managed to do this to and from a satellite over thousands of miles. Unfortunately, though, the bit rate I think is still extremely low.
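The textbook example of such a protocol is BB84, which sends single qubits and needs no entanglement (the protocol name and the simple channel model below are added for illustration, not taken from the interview). A toy sketch of the bookkeeping, with a noiseless channel and no eavesdropper:

```python
import random

random.seed(1)
n = 20  # number of qubits Alice sends

# Alice picks a random bit and a random basis ('+' or 'x') for each qubit.
alice_bits  = [random.randint(0, 1) for _ in range(n)]
alice_bases = [random.choice("+x") for _ in range(n)]

# Bob measures each incoming qubit in his own random basis.
bob_bases = [random.choice("+x") for _ in range(n)]

# Toy channel model: if Bob's basis matches Alice's, he gets her bit;
# otherwise quantum mechanics gives him a 50/50 random result.
bob_bits = [a if ab == bb else random.randint(0, 1)
            for a, ab, bb in zip(alice_bits, alice_bases, bob_bases)]

# They publicly compare bases (not bits!) and keep the matching rounds.
key = [a for a, ab, bb in zip(alice_bits, alice_bases, bob_bases) if ab == bb]
print(key)  # on average n/2 shared secret bits; an eavesdropper who measured
            # in the wrong basis would disturb the qubits detectably
```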
And it’s been argued by some that human consciousness is itself a quantum phenomenon. Does that mean anything to you?
It’s an interesting hypothesis, but I don’t think there’s good evidence at this time for consciousness involving quantum computation, and there are several difficulties that any theory of that kind would have to overcome. The first one is that the brain seems like an incredibly hot and wet and noisy environment. It seems like no place for a qubit.
But the answer to that has been that they do believe now that quantum phenomena are occurring, like in birds’ navigation systems and things like that.
Oh yeah, that’s right, there is no question that there are quantum effects that are important for biology; that is not in dispute for bird navigation. Also, green plant photosynthesis involves quantum effects. But maybe this is not so surprising, because all of chemistry is based on quantum mechanics, right? So of course, when you go to a small enough scale, you’re going to see quantum phenomena. What’s cool is that evolution was sometimes able to exploit these phenomena. But now the problem is, if you want to explain human thought, the brain is a very large system. This is not at the molecular scale, and once things are resolved to the level of a given neuron either firing a signal across an axon or not firing one, then that seems very much like a classical event. It’s an event that leaves records in its environment; that’s what I mean by that.
But to jump in on that one for just a second: within the neuron itself, I mean, we don’t know how a neuron does what it does. It could be operating at the Planck level, right?
The problem is that a neuron does not have anything of nearly high enough energy to probe physics at the Planck scale. Not even the Large Hadron Collider is able to get anywhere near the Planck scale; it is many orders of magnitude away. So no, it is not ruled out: there could be all sorts of weird quantum phenomena taking place in a neuron, but then one would have the burden of showing that any of those phenomena were important for consciousness, as opposed to just being, effectively, another source of thermal noise. So that’s where that discussion is. If you want a quantum account of consciousness, I think there are further difficulties. The first is that you have to have a reason why it’s needed: what does it help you explain that was previously unexplained?
The answer to that might be along the lines of, there seem to be sorts of problems that the human brain can solve that don’t seem to be solvable by a Turing machine.
I’m not sure that’s true, actually. I would say we don’t know the answer to that. I mean, there’s the famous ‘halting problem’ that provably no Turing machine is able to solve, but I can’t solve the halting problem either. If I could, then I could immediately win the Fields Medal in math by resolving thousands of famous unsolved math problems. I try to solve math problems, but it’s very much hit or miss. So what we know from the work of Gödel and Turing and those people is that you could never build a computer that’s a perfect oracle for mathematics, and some people have seized on that point.
Some people, like Roger Penrose, have seized on the observation by Gödel and Turing that no machine can be a perfect oracle for mathematics in order to say that the brain, or at least the mathematician’s brain, must be doing something that a Turing machine can’t. But the obvious problem with that argument is that humans are not perfect oracles for mathematics either, to put it very mildly. To achieve the dreams of AI, a computer would not need to be a perfect mathematician; it would merely have to be as smart as, or smarter than, us.
So, we haven’t really talked about what a quantum computer would be useful for, but that feeds into this debate about quantum mechanics and consciousness as well, because an issue is that the types of problems that we know quantum computers would be good at do not seem like a good fit for what human creativity is good at. To say it very briefly, the main applications that quantum computers are known to have are simulating quantum physics and chemistry, breaking public-key encryption systems, and getting probably some modest speedups for optimization and machine-learning-type problems. But the most dramatic speedups are for breaking public-key cryptography and for simulating quantum mechanics, which, I hope you agree, are not exactly the things that humans evolved to do to help us survive on the savannah.
Right, well, clearly birds don’t use quantum effects to navigate, because quantum computing is only good for breaking public-key encryption, not for navigating north to south. You’re just saying that what we’re building machines to do in the Model-T era of quantum computers doesn’t seem to be what the brain does; ergo, the brain is not a quantum device.
Well, I’m just saying that these are the burdens that this hypothesis has to meet to be taken seriously. You need to show where the quantum effects could actually be used in a computational way in the brain, and then you need to explain what they’re doing that the brain could not be doing just as easily classically, and what you gain by postulating them.
Fair enough. I think the answer to that, if you really boil it down, is that people say, “Well, we have consciousness, we experience the world,” and how that comes about does not seem to be a question we know how to ask scientifically, nor do we even know what the answer would look like scientifically, so it seems like this big asterisk in the logbook of life. Then all of a sudden you get this theory that has all this other weird stuff going on, and you say, that’s weird too; maybe the two weirds are paired together. So I think it’s an intuitive thing more than anything.
Right, well, that does seem to be what the argument boils down to: consciousness is weird, quantum mechanics is weird, ergo maybe they’re related. The problem, as I was saying, is that with just a bare quantum computer, it doesn’t seem a lot easier to understand how that could be conscious than to understand how an ordinary computer could be conscious. It seems like there’s a mystery in the one case just as in the other.
Setting aside consciousness and quantum phenomena: you talked briefly about some of the things we plan to use quantum machines for, but surely Google and IBM aren’t investing all of that money because they want to break public-key encryption, right?
Right, that’s absolutely right. I think, to be perfectly honest, Google and IBM and the other players are not completely sure themselves what the applications are going to be. They’re very excited about the applications to machine learning and optimization, but to be honest, it’s sort of a question mark right now. Even if you had a perfectly functioning quantum computer with perfect coherence and millions of qubits, we’re not really sure yet exactly how much speedup it could give you for optimization and machine learning problems.
There are algorithms that might give a huge speedup, but we don’t really know how to analyze them; we may just have to try them out on a quantum computer and see how they perform. And then there are algorithms that can be analyzed, that do give huge speedups, but only in very, very special situations, where we don’t really know yet whether they will be relevant in practice or not. What you typically can get for optimization and machine learning is a square-root speedup: you can typically solve those sorts of problems in something like the square root of the number of steps that a classical computer would need, and that is using one of the most famous quantum algorithms, Grover’s algorithm, discovered in 1996.
A square-root speedup is very useful; it sort of doubles the size of the problem instance you can handle if you’re doing computational optimization. What used to take 2^n steps now takes only 2^(n/2). Okay, but that’s not an exponential speedup. The exponential speedups that we know about seem to be much more special. They do include breaking essentially all of the public-key encryption that we happen to use today to secure the internet, so that’s probably the most famous application of quantum computers. That’s Shor’s algorithm, discovered in 1994. But even there, there’s a lot of research today on building quantum-proof public-key encryption systems, and NIST (the National Institute of Standards and Technology) is running a competition over the next few years to establish standards for quantum-resistant encryption, and it looks like we may actually migrate to that over the next decade or two. So that is a solvable problem, even though the current encryption we use is vulnerable.
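To see where the square root comes from, here is a small statevector simulation of Grover’s algorithm on an 8-item search space, written from the textbook description (the marked index is an arbitrary choice):

```python
import numpy as np

n = 3                      # qubits
N = 2 ** n                 # 8 items in the search space
marked = 5                 # index of the one "right answer" (arbitrary choice)

# Start in the equal superposition: every answer has amplitude 1/sqrt(N).
state = np.full(N, 1 / np.sqrt(N))

iterations = int(np.floor(np.pi / 4 * np.sqrt(N)))  # ~sqrt(N) steps, not N
for _ in range(iterations):
    state[marked] *= -1               # oracle: flip the sign of the answer's amplitude
    state = 2 * state.mean() - state  # "inversion about the mean" amplifies it

print(np.abs(state) ** 2)
# The marked item now carries about 95% of the probability after only 2
# iterations, versus an average of N/2 = 4 guesses for a classical search.
```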
Now, what I think is probably the most important application of quantum computing, at least that we know about today, is actually the first one that Richard Feynman and others thought of when they proposed this idea in the 1980s, and that’s simply to use a quantum computer to simulate quantum mechanics itself. It sounds almost too obvious to mention; it’s what a quantum computer does in its sleep. And yet that has an enormous range of applications.
If you want to design high-temperature superconductors (we talked before about how current superconductors only work close to absolute zero), well, that is a quantum mechanics problem. If you want to design higher-efficiency solar panels, or better ways of making fertilizer that work at lower temperatures, these are all many-body quantum materials and quantum chemistry problems, where even with the best supercomputers available today, there’s only a limited amount we can learn, because of this exponentiality of amplitudes.
So a quantum computer can give you an enormous new window into simulating physics and chemistry, and that is something that can have a lot of industrial applications. It’s not something that directly affects the end user, in the sense that you’re not going to use it to check your email or play Angry Birds, but improving any of these sorts of material processes could be worth billions of dollars.
Why is it that we don’t know more about what we would do with quantum machines? Because it would seem, to my limited mind, that we know what we’re trying to build; we just don’t have the physics down for actually building it. We know in theory how it would behave. So is it that we don’t know how the machine will work, or that we don’t have the imagination at this point? Is it just too soon to have thought it all through?
Well, one hypothesis would be that the quantum computers only do give you a speedup for certain specialized applications, and we have discovered many of those applications. That might be the truth of the matter. A second possibility would be that there are many more applications of quantum computers that haven’t been discovered yet, and we just haven’t had the imagination to invent the algorithms. I would guess that the truth is somewhere in between those two.
People have been thinking seriously about quantum algorithms for about 25 years now, which is not as long as people have been thinking about classical algorithms, but it’s still a significant chunk of time, and there is an immense body of theoretical understanding about quantum algorithms: what they can do, and also what they can’t do in various settings. We understand some things about what sorts of tasks seem to be hard even for quantum computers. But some people are disappointed that maybe the most striking quantum algorithms have been in place since the 1990s.
Shor’s algorithm, Grover’s algorithm, and quantum simulation have all been enormously generalized and applied to all sorts of other problems, but there haven’t been that many entirely new families of quantum algorithms discovered. There was maybe one around 2008, something called the HHL algorithm, for solving linear systems, and that led to a lot of other developments. The truth is that we’re not even very close to understanding the ultimate capabilities of classical algorithms, let alone quantum algorithms.
So you’ve probably heard of the P versus NP question? We can’t even rule out that there’s a super-fast classical algorithm that would solve the traveling salesman problem and all these other NP-complete problems, although most of us believe no such algorithm exists. It’s a measure of how far we are from really understanding algorithms that we can’t rule it out. Far less do we understand the ultimate capabilities and limits of quantum algorithms. But there’s a lot that we do know, and check back in another few years; I hope we’ll know more.
Alrighty, well, that’s a good place to leave it. Tell the readers how they can keep up with you and your writing. You mentioned your blog; can you throw out some?
So I’m pretty easy to find. My homepage is www.scottaaronson.com, and I write a blog about quantum computing and all sorts of other things at www.scottaaronson.com/blog. If you go to my blog, I’ve got links to a bunch of popular articles and lecture notes about quantum computing, and then there’s my book, “Quantum Computing Since Democritus,” which came out in 2013.
A reference to ‘there’s nothing but atoms and the void’?
Yeah, that’s right.
Alrighty well thanks a bunch, Scott.
from Gigaom https://gigaom.com/2019/01/17/quantum-computing-capabilities-and-limits-an-interview-with-scott-aaronson/