Scott Aaronson - Quantum Supremacy | AI Podcast Clips

Verifying the Experiment: The Largest Supercomputer on Earth

The classical side of the experiment ran up against Summit at Oak Ridge National Lab, the largest supercomputer on earth at the time, which is just barely able to perform the calculation needed to verify the quantum device's output. Counterintuitively, more qubits would not have made for a better experiment: with a hundred qubits, even a perfectly working device could not have its results verified, because the classical check would be hopelessly out of reach. The researchers therefore used 53 qubits, enough to make the biggest classical computers on earth sweat, but few enough that they can still, with far more time, keep up and confirm the result.
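To get a feel for the numbers, here is a back-of-envelope sketch (my own illustration, not a calculation from the paper) of what a brute-force classical simulation of 53 qubits would have to store: one complex amplitude per basis state, or 2^53 of them.

```python
# Back-of-envelope: storage cost of a brute-force 53-qubit state vector.
# Illustrative only; real verification methods avoid storing the full state.
n_qubits = 53
n_amplitudes = 2 ** n_qubits            # 9,007,199,254,740,992 basis states (~9 quadrillion)
bytes_per_amplitude = 16                # complex128: 8 bytes real + 8 bytes imaginary
memory_petabytes = n_amplitudes * bytes_per_amplitude / 1e15

print(f"{n_amplitudes:,} amplitudes")
print(f"~{memory_petabytes:.0f} PB for the full state vector")   # roughly 144 PB
```

That is why the classical verification is described as just barely possible: practical checks rely on smarter techniques than storing the whole vector, but the underlying object still has about nine quadrillion amplitudes.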

The significance of this choice is that it keeps verification possible. The ideal output probabilities of a 53-qubit circuit can still be computed classically, at enormous cost, so researchers can check that the device really is sampling from the intended distribution rather than producing plausible-looking noise. Once devices reach larger numbers of qubits, sampling experiments of this kind stop being informative, because no classical computer can verify the results, and other types of computation will have to take their place.

Google's Experiment: A Breakthrough in Quantum Supremacy

Google applied a statistical test called the linear cross-entropy benchmark. The idea is to take the samples the quantum device actually produced, which are a tiny subset of the 2^53 possible output strings, and compute on a classical computer the probabilities those strings should have under the ideal circuit. If the average of those probabilities is significantly larger than the mean over all strings, the device is biased toward the "heavy" outputs it is supposed to favor, which is the signature of sampling from the right distribution rather than guessing at random.
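Here is a minimal sketch of that statistic, assuming access to the ideal output probabilities from a classical simulation (the expensive part in the 53-qubit setting). The toy distribution and variable names are illustrative, not Google's actual code.

```python
import numpy as np

def linear_xeb_fidelity(samples, ideal_probs, n_qubits):
    """Linear cross-entropy benchmark score.

    samples     : sampled bitstrings as integers in [0, 2**n_qubits)
    ideal_probs : array of length 2**n_qubits with the circuit's ideal output
                  probabilities (obtained from a classical simulation)
    Returns ~1.0 for samples drawn from the ideal distribution,
    ~0.0 for uniformly random (spoofed-by-guessing) samples.
    """
    d = 2 ** n_qubits
    mean_prob = float(np.mean(ideal_probs[np.asarray(samples)]))
    return d * mean_prob - 1.0

# Toy demo on 10 qubits with a synthetic "speckled" distribution.
rng = np.random.default_rng(0)
n = 10
d = 2 ** n
probs = rng.exponential(size=d)
probs /= probs.sum()

device_samples = rng.choice(d, size=5000, p=probs)   # stands in for the quantum device
uniform_samples = rng.integers(0, d, size=5000)      # stands in for naive classical guessing

print(linear_xeb_fidelity(device_samples, probs, n))   # close to 1
print(linear_xeb_fidelity(uniform_samples, probs, n))  # close to 0
```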

The Implications of Quantum Supremacy

A successful demonstration means a quantum computer has performed some well-defined task much faster than any known classical algorithm, which is enough to refute skeptics who claimed a quantum computer would never outperform a classical computer at anything. The task itself need not be useful: this result does not by itself break cryptography or speed up optimization, but it establishes the principle that quantum hardware can outpace classical hardware on a well-defined problem. It also raises a crucial question: how do we know that a classical computer couldn't have produced the same result quickly, in effect spoofing the benchmark with a fast algorithm?

The Hardness Problem: Complexity Theory

Answering that question is a problem for theoretical computer science and complexity theory: showing that spoofing the experiment is hard classically, meaning that even the best classical algorithms would need an impractically long time. No one knows how to rule this out definitively; doing so would run up against questions on the order of P versus NP. What researchers can do is give evidence against the possibility, and that evidence comes in two forms, theoretical and empirical, described below.

Reduction Evidence: A Step Towards Solving the Hardness Problem

The theoretical side uses reductions, the basic strategy behind NP-completeness and behind cryptography since the 1970s. A reduction does not prove a problem hard outright; instead it passes the blame: it shows that if this problem were easy, then other problems we are far more confident are hard would be easy too. Researchers have given reduction evidence of exactly this kind for the hardness of classically simulating, or spoofing, these sampling experiments.
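To make "reduction evidence" concrete, here is the textbook reduction from 3-SAT to Independent Set, a standard classroom example and not anything specific to quantum sampling. It shows the logical shape of the argument: an efficient algorithm for Independent Set would yield an efficient algorithm for 3-SAT, so Independent Set is at least as hard.

```python
from itertools import combinations

def sat_to_independent_set(clauses):
    """Classic reduction from 3-SAT to Independent Set.

    clauses: list of clauses, each a tuple of literals; a literal is
             (variable_name, is_positive), e.g. ("x1", True) means x1
             and ("x1", False) means NOT x1.

    Returns (vertices, edges, k): the formula is satisfiable if and only if
    the graph has an independent set of size k = number of clauses.
    """
    # One vertex per literal occurrence, tagged with its clause index.
    vertices = [(i, lit) for i, clause in enumerate(clauses) for lit in clause]

    edges = set()
    # 1. Connect literals within the same clause (pick at most one per clause).
    for i, clause in enumerate(clauses):
        for a, b in combinations(clause, 2):
            edges.add(((i, a), (i, b)))
    # 2. Connect complementary literals across clauses (no contradictory picks).
    for u, v in combinations(vertices, 2):
        (_, (var_u, sign_u)), (_, (var_v, sign_v)) = u, v
        if var_u == var_v and sign_u != sign_v:
            edges.add((u, v))

    return vertices, edges, len(clauses)
```

The reductions used in the sampling-based supremacy arguments are more technical, but they have the same shape: a fast classical spoofer would imply fast algorithms for problems widely believed to be hard.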

The Future of Quantum Computing: Empirical Evidence and Theoretical Results

While the theoretical results frame the argument, there is a genuinely empirical component to the evidence as well. Researchers take the classical simulation methods that are known for quantum circuits, try them all out, and confirm that they scale exponentially, not just asymptotically on paper but for the actual range of parameters arising in Google's experiment. Continued work on better classical simulation algorithms is part of this process: each new technique either spoofs the benchmark or, by failing to, sharpens our confidence about where the limits of classical simulation lie.
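As a concrete illustration of why brute-force classical simulation scales exponentially, here is a minimal state-vector simulator sketch; this is my own toy example, not one of the optimized methods used for the actual verification, and it handles only single-qubit gates, but the scaling point is the same: memory and time both grow as 2^n.

```python
import numpy as np

def apply_single_qubit_gate(state, gate, target, n_qubits):
    """Apply a 2x2 unitary `gate` to qubit `target` of an n-qubit state vector.

    The state vector holds 2**n_qubits complex amplitudes; this is the
    exponential object a brute-force classical simulation must store and update.
    """
    psi = state.reshape([2] * n_qubits)                   # expose one axis per qubit
    psi = np.tensordot(gate, psi, axes=([1], [target]))   # contract gate with the target axis
    psi = np.moveaxis(psi, 0, target)                     # put the target axis back in place
    return psi.reshape(-1)

# Example: put 3 qubits into a uniform superposition with Hadamard gates.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
n = 3
state = np.zeros(2 ** n, dtype=complex)
state[0] = 1.0                                            # start in |000>
for q in range(n):
    state = apply_single_qubit_gate(state, H, q, n)
print(np.round(np.abs(state) ** 2, 3))                    # eight equal probabilities of 0.125
```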

The Open Problem: Making Reduction Evidence Better

One of the biggest open problems in this area is to make the reduction evidence more satisfactory. Reducing a problem to another known-hard problem provides some confidence, but for these sampling experiments the reductions are weaker than one would like, and strengthening them so that spoofing the benchmark is provably as hard as a standard, well-studied problem remains open.

What Reduction Evidence Establishes

In practical terms, the reduction evidence, combined with the empirical checks, supports a specific claim: if there is a fast classical algorithm that spoofs these experiments, it would have to be very unlike any of the algorithms we know. That is the same style of reasoning behind the widespread belief that P is not equal to NP; it is not a proof, but it is the strongest kind of evidence complexity theory currently knows how to give.

Theoretical Computer Science: A Field of Inquiry

Theoretical computer science seeks to understand the power and the limitations of computation. It studies the complexity of algorithms and data structures, as well as the properties of computational models, and its results underpin cryptography, optimization, and coding theory, among other fields.

NP-Completeness: A Key Concept

NP-completeness is a fundamental concept in theoretical computer science. A problem is NP-complete if it is in NP, meaning a proposed solution can be verified in polynomial time, and NP-hard, meaning every problem in NP reduces to it in polynomial time. NP-complete problems can certainly be solved classically, for example by exhaustive search; the point is that no polynomial-time algorithm for any of them is known, and finding one for any single NP-complete problem would give one for all of them.
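A short illustration (my own example) of what "in NP" means operationally: given a proposed solution, a verifier can check it in polynomial time, even though nobody knows how to find such a solution quickly in general.

```python
def verify_sat_certificate(clauses, assignment):
    """Polynomial-time verifier for CNF satisfiability.

    clauses:    list of clauses, each a list of literals (var, is_positive)
    assignment: dict mapping each variable to True/False (the certificate)

    SAT is in NP because this check runs in time linear in the formula size,
    even though finding a satisfying assignment may, for all we know,
    require exponential time.
    """
    return all(
        any(assignment[var] == is_positive for var, is_positive in clause)
        for clause in clauses
    )

# (x1 OR NOT x2) AND (x2 OR x3)
clauses = [[("x1", True), ("x2", False)], [("x2", True), ("x3", True)]]
print(verify_sat_certificate(clauses, {"x1": True, "x2": True, "x3": False}))   # True
print(verify_sat_certificate(clauses, {"x1": False, "x2": True, "x3": False}))  # False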

The P versus NP Problem: An Open Question

The P versus NP problem is one of the most famous open questions in theoretical computer science. It asks whether every problem whose solutions can be verified in polynomial time (NP) can also be solved in polynomial time (P). The question has been open since the early 1970s, roughly fifty years, and it remains a central area of research.
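For reference, the standard definitions can be stated compactly; this is textbook notation, not anything specific to the quantum supremacy argument.

```latex
% P: decision problems solvable in polynomial time.
% NP: decision problems whose "yes" instances have certificates
%     checkable in polynomial time by a verifier V.
\[
  \mathrm{P} = \{\, L \;:\; L \text{ is decided by some algorithm in time } O(n^{k}) \text{ for a constant } k \,\}
\]
\[
  \mathrm{NP} = \{\, L \;:\; x \in L \iff \exists\, w,\ |w| \le \mathrm{poly}(|x|),\ V(x, w) \text{ accepts, for a polynomial-time verifier } V \,\}
\]
\[
  \mathrm{P} \subseteq \mathrm{NP}, \qquad \text{and the open question is whether } \mathrm{P} = \mathrm{NP}.
\]
```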

In conclusion, Google's quantum supremacy experiment was a groundbreaking achievement, and its results were verified against the Summit supercomputer rather than produced by it. The quantum device generated samples over 53-bit strings, a classical computer calculated the ideal probabilities of those samples, and the linear cross-entropy benchmark confirmed that the device was biased toward the outputs it should favor. Empirical checks show that the known classical simulation methods cannot keep up at these parameters, while reduction evidence from complexity theory argues that any fast classical spoofing algorithm would have to be very unlike anything currently known. Empirical evidence demonstrates quantum supremacy in practice; theoretical results explain why we should believe it reflects a genuine gap.

"WEBVTTKind: captionsLanguage: enGoogle mmm announced with their work in the paper in nature with quantum supremacy yes can you describe again back to the basic what is perhaps not so basic what is quantum supremacy absolutely so quantum supremacy is a term that was coined by again by John Prescott in 2012 not not everyone likes the name you know but uh you know it's sort of stuck you know we don't we sort of haven't found a better alternative quantum computational compare mister that's right that's right and but but the basic idea is actually one that goes all the way back to the beginnings of quantum computing when richard fineman and david deutsch people like that we're talking about it in the early 80s and-and-and and quantum supremacy just refers to sort of the point in history when you can first use a quantum computer to do some well-defined task much faster than any known algorithm running on any of the classical computers that are available okay so you know notice that I did not say a useful task okay you know it could be something completely artificial but it's important that the task be well-defined so in other words you know there is it is something that has right and wrong answers you know and that are knowable independently of this device right and we can then you know run the device see if it gets the right answer or not can you clarify a small point you said much faster than a classical implementation what about sort of what about the space with where the class there's no there's not it doesn't even exist a classical algorithm so so so so maybe I should clarify everything that a quantum computer can do a classical computer can also eventually do okay and the reason why we know that is that a a classical computer could always you know if it had no limits of time and memory it could always just store the entire quantum state you know of your you know of the quantum great store in a list of all the amplitudes you know in the state of the quantum pewter and then just you know do some linear algebra to just update that state right and so so anything that quantum computers can do can also be done by classical computers albeit exponentially slower okay so I on computers don't go to some magical place outside of Alan Turing's a definition of computation precisely they do not solve the halting problem they cannot solve anything that is uncomputable in how an Turing sense what they what we think they do change is what is efficiently computable okay and you know since the 1960s you know the word efficiently you know as well as been a central word in computer science but it's sort of a code word for something technical which is basically with polynomial scaling you know that as you get to larger and larger inputs you would like an algorithm that uses an amount of time that scales only like the size of the input raised to some power and not exponentially with the size of the input right yeah so I do hope we get to talk again because one of the many topics that there's probably several hours with a competent conversation on is complexity which you probably won't even get a chance to touch today but you briefly mentioned it but let's let's maybe try to continue so you said the definition of quantum supremacy is basically design is achieving a place where much faster on a formal quantum computer is much faster on a formal well-defined problem yes it's not that is or isn't useful yeah yeah yeah right right and and I would say that we really want three things right we want first of all the 
quantum computer to be much faster just in the literal sense of like number of seconds you know it's a solving this you know well-defined you know problem secondly we want it to be sort of a you know for a problem where we really believe that a quantum computer has better scaling behavior right so it's not just an incidental you know matter of hardware but it's that you know as you went to larger and larger inputs you know the classical scaling would be exponential and the scaling for the quantum algorithm would only be polynomial and then thirdly we want the first thing the actual observed speed-up to only be explainable in terms of the scaling behavior right so you know I want I want you know a real world you know a real problem to get solved let's say by a quantum computer with 50 cubits or so and for no one to be able to explain that in any way other than well you know the to the this this computer involved a quantum state with two to the fiftieth power amplitudes and you know a classical simulation at least any that we know today would require keeping track of 2 to the 50th numbers and this is the reason why it was faster so the intuition is that then if you demonstrate on 50 cubits then once you get to 100 cubits then it'll be even much more faster precisely precisely yeah and and you know and and and quantum supremacy does not require error correction right we don't you know we don't have you could say true scalability yet or true you know uh error correction yet but you could say quantum supremacy is already enough by itself to refute the skeptics who said a quantum computer will never outperform a classical computer for anything but one Hadji demonstrate quantum yeah supremacy and two what's up with these new news articles I'm reading that Google did so yeah well what they actually do great great questions because now you get into actually you know a lot of the work that I've you know I and my students have been doing for the last decade which was precisely about how do you demonstrate quantum supremacy using technologies that you know we thought would be available in the near future and so one of the main things that we realized around 2011 and this was me and my student alex arkhipov at MIT at the time and independently of some others including a Bremner josa and shepard ok and the realization that we came to was that if you just want to prove that a quantum computer is faster you know and not do something useful with it then there are huge advantages to sort of switching your attention from problems like factoring numbers that have a single right answer to what we call sampling problems so these are problems where the goal is just to output a sample from some probability distribution let's say over strings of 50 bits right so there are you know many many many possible valid outputs you know your computer will probably never even produce the same output twice you know if it's running as even you know assuming it's running perfectly okay but but the key is that some outputs are supposed to be likelier than other ones so sorry to clarify is there a set of outputs that are valid and said they're not or is it more that the distribution of a particular kind of output is more is the specific distribution yeah there's there's there's a specific distribution that you're trying to hit right or you know that you're trying to sample from now there are a lot of questions about this you know how do you do that right now now how you how you do it you know it turns out that with a quantum 
computer even with the noisy quantum computers that we have now that we have today what you can do is basically just apply a randomly chosen sequence of operations all right so we you know we in sometimes you know we you know that part is almost trivial right we just sort of get the qubits to interact in some random way although a sort of precisely specified random way so we can repeat the exact same random sequence of interactions again and get another sample from that same distribution and what this does is it basically well it creates a lot of garbage but you know very specific garbage right so you know of all of the so if we're gonna talk about google's device there were 53 qubits there okay and so there are two to the 53 power possible outputs now for some of those outputs you know there are there was a little bit more destructive interference in their amplitude okay so their amplitudes were a little bit small and for others there was a little more constructive interference you know the amplitudes were a little bit more aligned with each other you know that and so those those that were a little bit likelier okay all of the outputs are exponentially unlikely but some are let's say two times or three times you know unlikely er than others okay and so so you can define you know the sequence of operations that gives rise to this probability distribution okay now the next question would be well how do you you know even if you're sampling from and how do you verify that right how do you exam how do you know and so my students and I and also the people at Google we're doing the experiment came up with statistical tests that you can apply to the outputs in order to try to verify you know what is you know that that at least that some hard problem is being solved the the test that Google ended up using was something that they called the linear cross entropy benchmark okay and it's basically you know so that the drawback of this test is that it requires like it requires you to do a two to the 53 time calculation with your classical computer okay so it's very expensive to do the test on a classical computer the good news I think of a numbers - it's about nine quadrillion okay it doesn't help what well you want any like scientific notation I don't know what I mean it yeah it is it is it is adjustable to run honest yes that we will come back to that it is just barely possible to run we think on the largest supercomputer that currently exists on earth which is called summit at Oak Ridge National Lab okay so I ironically for this type of experiment we don't want a hundred qubits okay because with a hundred qubits even if it works we don't know how to verify the results okay so we want you know a number of qubits that is enough that you know click the biggest classical computers on earth will have to sweat you know and we'll just barely you know be able to keep up with the quantum computer you know using much more time but they will still be able to do it in order that we can verify that was just where the 53 comes from for the cube a well I mean I mean I mean I mean I mean that's also that sort of you know the mote I mean that's that's that's sort of where they are now in terms of scaling you know and then you know soon you know that point will be passed and and then when you get to larger numbers of qubits then you know these these types of sampling experiments will no longer be so interesting because we won't even be able to verify the results and we'll have to switch to other types of computation 
so with it with the sampling thing you know so so the test that Google applied with this linear cross-entropy benchmark would basically just take the samples that were generated which are you know a very small subset of all the possible samples that there are but for those you calculate with your classical computer the probabilities that they should have been output and you see are those probabilities like larger than the mean you know so is the quantum computer bias toward outputting the strings that it's you know that you want it to be biased toward okay and then finally we come to a very crucial question which is supposing that it does that well how do we know that a classical computer could not have quickly done the same thing right how do we know that you know this couldn't have been spoofed by a classical computer right and so well the first answer is we don't know for sure because you know this takes us into questions of complexity theory you know you know the I mean questions on the of the magnitude of the P versus NP question and that right we you know we don't know how to rule out definitively that there could be fast classical algorithms for you know even simulating quantum mechanics and for you know simulating experiments like these but we can give some evidence against that possibility and that was sort of the you know the main thrust of a lot of the work that my colleagues and I did you know over the last decade which is then sort of in around 2015 or so what led to Google deciding to do this experiment so is the kind of evidence you first of all the hard P equals NP problem the and the kind of evidence the year were looking at is that something you come to on a sheet of paper or is this something are these empirical experiments it's it's math for the most part I mean it you know it's also trot you know you know we have a bunch of methods that are known for simulating quantum circuits or you know quantum computations with classical computers and so we have to try them all out and make sure that you know they don't work you know make sure that they have exponential scaling on on you know these problems and and not just theoretically but with the actual range of parameters that are actually you know arising in Google's experiment okay so so there is an empirical component to it right but now on on the theoretical side you know what basically what we know how to do in theoretical computer science and computational complexity is you know we don't know how to prove that most of the problems we care about are hard but we know how to pass the blame to someone else yeah we know how to say well look you know I can't prove that this problem is hard but if it is easy then all these other things that you know you know first you you probably we're much more confident or we're hard that then those would be easy as well okay so so we can give what are called reductions this has been the basic strategy in you know an NP completeness right in in all of theoretical computer science and cryptography since the 1970s really and so we were able to give some reduction evidence for the hardness of simulating these sampling experiments the sampling based quantum supremacy experiments so reduction evidence is not as satisfactory as it should be one of the biggest open problems in this area is to make it better but you know we can do something you know certainly we can say that you know if there is a fast classical algorithm to spoof these experiments then it has to be very very unlike any of the 
algorithms that we know which kind of in the same kind of space of reasoning that people say P equal not equals NP yeah it is it's in the same spirit youGoogle mmm announced with their work in the paper in nature with quantum supremacy yes can you describe again back to the basic what is perhaps not so basic what is quantum supremacy absolutely so quantum supremacy is a term that was coined by again by John Prescott in 2012 not not everyone likes the name you know but uh you know it's sort of stuck you know we don't we sort of haven't found a better alternative quantum computational compare mister that's right that's right and but but the basic idea is actually one that goes all the way back to the beginnings of quantum computing when richard fineman and david deutsch people like that we're talking about it in the early 80s and-and-and and quantum supremacy just refers to sort of the point in history when you can first use a quantum computer to do some well-defined task much faster than any known algorithm running on any of the classical computers that are available okay so you know notice that I did not say a useful task okay you know it could be something completely artificial but it's important that the task be well-defined so in other words you know there is it is something that has right and wrong answers you know and that are knowable independently of this device right and we can then you know run the device see if it gets the right answer or not can you clarify a small point you said much faster than a classical implementation what about sort of what about the space with where the class there's no there's not it doesn't even exist a classical algorithm so so so so maybe I should clarify everything that a quantum computer can do a classical computer can also eventually do okay and the reason why we know that is that a a classical computer could always you know if it had no limits of time and memory it could always just store the entire quantum state you know of your you know of the quantum great store in a list of all the amplitudes you know in the state of the quantum pewter and then just you know do some linear algebra to just update that state right and so so anything that quantum computers can do can also be done by classical computers albeit exponentially slower okay so I on computers don't go to some magical place outside of Alan Turing's a definition of computation precisely they do not solve the halting problem they cannot solve anything that is uncomputable in how an Turing sense what they what we think they do change is what is efficiently computable okay and you know since the 1960s you know the word efficiently you know as well as been a central word in computer science but it's sort of a code word for something technical which is basically with polynomial scaling you know that as you get to larger and larger inputs you would like an algorithm that uses an amount of time that scales only like the size of the input raised to some power and not exponentially with the size of the input right yeah so I do hope we get to talk again because one of the many topics that there's probably several hours with a competent conversation on is complexity which you probably won't even get a chance to touch today but you briefly mentioned it but let's let's maybe try to continue so you said the definition of quantum supremacy is basically design is achieving a place where much faster on a formal quantum computer is much faster on a formal well-defined problem yes it's not that is or isn't 
useful yeah yeah yeah right right and and I would say that we really want three things right we want first of all the quantum computer to be much faster just in the literal sense of like number of seconds you know it's a solving this you know well-defined you know problem secondly we want it to be sort of a you know for a problem where we really believe that a quantum computer has better scaling behavior right so it's not just an incidental you know matter of hardware but it's that you know as you went to larger and larger inputs you know the classical scaling would be exponential and the scaling for the quantum algorithm would only be polynomial and then thirdly we want the first thing the actual observed speed-up to only be explainable in terms of the scaling behavior right so you know I want I want you know a real world you know a real problem to get solved let's say by a quantum computer with 50 cubits or so and for no one to be able to explain that in any way other than well you know the to the this this computer involved a quantum state with two to the fiftieth power amplitudes and you know a classical simulation at least any that we know today would require keeping track of 2 to the 50th numbers and this is the reason why it was faster so the intuition is that then if you demonstrate on 50 cubits then once you get to 100 cubits then it'll be even much more faster precisely precisely yeah and and you know and and and quantum supremacy does not require error correction right we don't you know we don't have you could say true scalability yet or true you know uh error correction yet but you could say quantum supremacy is already enough by itself to refute the skeptics who said a quantum computer will never outperform a classical computer for anything but one Hadji demonstrate quantum yeah supremacy and two what's up with these new news articles I'm reading that Google did so yeah well what they actually do great great questions because now you get into actually you know a lot of the work that I've you know I and my students have been doing for the last decade which was precisely about how do you demonstrate quantum supremacy using technologies that you know we thought would be available in the near future and so one of the main things that we realized around 2011 and this was me and my student alex arkhipov at MIT at the time and independently of some others including a Bremner josa and shepard ok and the realization that we came to was that if you just want to prove that a quantum computer is faster you know and not do something useful with it then there are huge advantages to sort of switching your attention from problems like factoring numbers that have a single right answer to what we call sampling problems so these are problems where the goal is just to output a sample from some probability distribution let's say over strings of 50 bits right so there are you know many many many possible valid outputs you know your computer will probably never even produce the same output twice you know if it's running as even you know assuming it's running perfectly okay but but the key is that some outputs are supposed to be likelier than other ones so sorry to clarify is there a set of outputs that are valid and said they're not or is it more that the distribution of a particular kind of output is more is the specific distribution yeah there's there's there's a specific distribution that you're trying to hit right or you know that you're trying to sample from now there are a lot of questions 
about this you know how do you do that right now now how you how you do it you know it turns out that with a quantum computer even with the noisy quantum computers that we have now that we have today what you can do is basically just apply a randomly chosen sequence of operations all right so we you know we in sometimes you know we you know that part is almost trivial right we just sort of get the qubits to interact in some random way although a sort of precisely specified random way so we can repeat the exact same random sequence of interactions again and get another sample from that same distribution and what this does is it basically well it creates a lot of garbage but you know very specific garbage right so you know of all of the so if we're gonna talk about google's device there were 53 qubits there okay and so there are two to the 53 power possible outputs now for some of those outputs you know there are there was a little bit more destructive interference in their amplitude okay so their amplitudes were a little bit small and for others there was a little more constructive interference you know the amplitudes were a little bit more aligned with each other you know that and so those those that were a little bit likelier okay all of the outputs are exponentially unlikely but some are let's say two times or three times you know unlikely er than others okay and so so you can define you know the sequence of operations that gives rise to this probability distribution okay now the next question would be well how do you you know even if you're sampling from and how do you verify that right how do you exam how do you know and so my students and I and also the people at Google we're doing the experiment came up with statistical tests that you can apply to the outputs in order to try to verify you know what is you know that that at least that some hard problem is being solved the the test that Google ended up using was something that they called the linear cross entropy benchmark okay and it's basically you know so that the drawback of this test is that it requires like it requires you to do a two to the 53 time calculation with your classical computer okay so it's very expensive to do the test on a classical computer the good news I think of a numbers - it's about nine quadrillion okay it doesn't help what well you want any like scientific notation I don't know what I mean it yeah it is it is it is adjustable to run honest yes that we will come back to that it is just barely possible to run we think on the largest supercomputer that currently exists on earth which is called summit at Oak Ridge National Lab okay so I ironically for this type of experiment we don't want a hundred qubits okay because with a hundred qubits even if it works we don't know how to verify the results okay so we want you know a number of qubits that is enough that you know click the biggest classical computers on earth will have to sweat you know and we'll just barely you know be able to keep up with the quantum computer you know using much more time but they will still be able to do it in order that we can verify that was just where the 53 comes from for the cube a well I mean I mean I mean I mean I mean that's also that sort of you know the mote I mean that's that's that's sort of where they are now in terms of scaling you know and then you know soon you know that point will be passed and and then when you get to larger numbers of qubits then you know these these types of sampling experiments will no longer be so 
interesting because we won't even be able to verify the results and we'll have to switch to other types of computation so with it with the sampling thing you know so so the test that Google applied with this linear cross-entropy benchmark would basically just take the samples that were generated which are you know a very small subset of all the possible samples that there are but for those you calculate with your classical computer the probabilities that they should have been output and you see are those probabilities like larger than the mean you know so is the quantum computer bias toward outputting the strings that it's you know that you want it to be biased toward okay and then finally we come to a very crucial question which is supposing that it does that well how do we know that a classical computer could not have quickly done the same thing right how do we know that you know this couldn't have been spoofed by a classical computer right and so well the first answer is we don't know for sure because you know this takes us into questions of complexity theory you know you know the I mean questions on the of the magnitude of the P versus NP question and that right we you know we don't know how to rule out definitively that there could be fast classical algorithms for you know even simulating quantum mechanics and for you know simulating experiments like these but we can give some evidence against that possibility and that was sort of the you know the main thrust of a lot of the work that my colleagues and I did you know over the last decade which is then sort of in around 2015 or so what led to Google deciding to do this experiment so is the kind of evidence you first of all the hard P equals NP problem the and the kind of evidence the year were looking at is that something you come to on a sheet of paper or is this something are these empirical experiments it's it's math for the most part I mean it you know it's also trot you know you know we have a bunch of methods that are known for simulating quantum circuits or you know quantum computations with classical computers and so we have to try them all out and make sure that you know they don't work you know make sure that they have exponential scaling on on you know these problems and and not just theoretically but with the actual range of parameters that are actually you know arising in Google's experiment okay so so there is an empirical component to it right but now on on the theoretical side you know what basically what we know how to do in theoretical computer science and computational complexity is you know we don't know how to prove that most of the problems we care about are hard but we know how to pass the blame to someone else yeah we know how to say well look you know I can't prove that this problem is hard but if it is easy then all these other things that you know you know first you you probably we're much more confident or we're hard that then those would be easy as well okay so so we can give what are called reductions this has been the basic strategy in you know an NP completeness right in in all of theoretical computer science and cryptography since the 1970s really and so we were able to give some reduction evidence for the hardness of simulating these sampling experiments the sampling based quantum supremacy experiments so reduction evidence is not as satisfactory as it should be one of the biggest open problems in this area is to make it better but you know we can do something you know certainly we can say that you 
know if there is a fast classical algorithm to spoof these experiments then it has to be very very unlike any of the algorithms that we know which kind of in the same kind of space of reasoning that people say P equal not equals NP yeah it is it's in the same spirit you\n"