Google’s new quantum chip has solved a problem that would have taken the best supercomputer a quadrillion times the age of the universe to crack
Just watched Scott Aaronson's talk from last year at Q2B about this very topic. It just feels like a way to squeeze articles such as this into the media, as opposed to driving much progress. It's basically a proxy measurement for qubit count/fidelity, isn't it?
Full blog post about Google Willow here: https://scottaaronson.blog/?p=8525
[removed]
It's great that people are making progress on that front, but yeah – 105 qubits isn't going to "destroy conventional cryptography" any time soon. At these rates, I remain unconvinced that error correction won't swallow up all the gains from increased density.
For real. I don't understand much about quantum computing, but as a quantum chemist (DFT), I am just sitting here wondering when we will throw helium at the Schrödinger equation with one of these.
In an STO-3G basis
It’s just a benchmark. Benchmarks are usually boring problems.
[deleted]
If you go to the Google post directly, they have all that:
[https://blog.google/technology/research/google-willow-quantum-chip/](https://blog.google/technology/research/google-willow-quantum-chip/)
**Willow System Metrics:**
* **Number of qubits:** 105
* **Average connectivity:** 3.47 (4-way typical)
**Quantum Error Correction:**
* **Single-qubit gate error:** 0.035% ± 0.029% (mean, simultaneous)
* **Two-qubit gate error:** 0.33% ± 0.18% (CZ) (mean, simultaneous)
* **Measurement error:** 0.77% ± 0.21% (repetitive, measure qubits) (mean, simultaneous)
* **Reset options:** Multi-level reset (|1⟩ state and above), Leakage removal (|2⟩ state only)
* **T₁ time (mean):** 68 µs ± 13 µs
* **Error correction cycles per second:** 909,000 (surface code cycle = 1.1 µs)
* **Application performance:** Λ₃,₅,₇ = 2.14 ± 0.02
**Random Circuit Sampling:**
* **Single-qubit gate error:** 0.036% ± 0.013% (mean, simultaneous)
* **Two-qubit gate error:** 0.14% ± 0.052% (iswap-like) (mean, simultaneous)
* **Measurement error:** 0.67% ± 0.51% (terminal, all qubits) (mean, simultaneous)
* **Reset options:** Multi-level reset (|1⟩ state and above), Leakage removal (|2⟩ state only)
* **T₁ time (mean):** 98 µs ± 32 µs
* **Circuit repetitions per second:** 63,000
* **Application performance:** XEB fidelity depth 40 = 0.1%
* **Estimated time on Willow vs classical supercomputer:** 5 minutes vs. 10²⁵ years
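For anyone wanting intuition for these numbers: a crude sanity check is to multiply the per-operation success probabilities together. This toy model (my own assumption of independent, multiplicative errors, not Google's actual methodology) lands within roughly an order of magnitude of the reported depth-40 XEB fidelity of 0.1%:

```python
# Crude fidelity estimate for a depth-40 random circuit on 105 qubits,
# assuming independent errors that simply multiply (a naive model).

def circuit_fidelity(n_qubits, depth, e1=0.00036, e2=0.0014, e_meas=0.0067):
    """Product of per-operation success probabilities, using the RCS-mode
    error rates listed above (single-qubit, two-qubit, measurement)."""
    # Per cycle: one single-qubit gate per qubit, ~n/2 two-qubit gates.
    per_cycle = (1 - e1) ** n_qubits * (1 - e2) ** (n_qubits // 2)
    return per_cycle ** depth * (1 - e_meas) ** n_qubits

f = circuit_fidelity(105, 40)
print(f"naive fidelity estimate: {f:.2%}")  # same ballpark as the reported 0.1%
```

The real analysis models crosstalk and leakage too, which is part of why the measured number is lower than this independent-error estimate.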
Can someone explain why they don't post T₂? Amplitude without phase is just a probabilistic classical system, isn't it? I thought we need phase too for quantum advantage. Or is amplitude enough?
In the other thread, Alice & Bob have ten-second-T₁ cat qubits; it seems like their surface codes will be epic if amplitude is all we need.
It’s probably too correlated to put a number on that they’re comfortable with. That is, it probably sucks but that doesn’t matter.
Show me those echo and Ramseys or it didn’t happen
Any public links on how these metrics (specifically the gate and measurement errors) compare with other superconducting qubits (from IBM, Rigetti, etc.)?
[deleted]
Actually we don’t know (even theoretically we don’t know if P ⊂ BQP, or if BPP = BQP, but we know that BQP ⊆ PP; most other results are relative to an oracle).
[deleted]
OK, but we don't know with high confidence either, especially experimentally (where classical computers currently outperform quantum circuits). And to be fair, a lot of algorithms previously thought to be "quantum superior" in theory have been de-quantumized. It's more of an open problem than it seems. Believing in quantum supremacy (especially over randomised algorithms, given BQP ⊆ PP) is currently more a matter of taste than of confidence. A lot of experts have switched from talking about quantum supremacy to quantum utility: quantum computers are surely useful, even currently, for physical experiments and as a source of randomness, regardless of computational power.
Still not aware of any real world use cases…
Can you explain more ?
no
I was disappointed that Scott Aaronson shifted from being a D-Wave optimization skeptic to a random sampling evangelist. I know they aren’t the same, but they aren’t that different.
I won’t be impressed by a quantum computer until one mines all the remaining Bitcoin in one day. Then you’ll have my attention.
Or breaks prime number encryption and takes down the entire e-commerce system.
isn't that what Shor's Algorithm does? We're basically just waiting for a Shor-capable chip to come out and fry the internet
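For context on what Shor's algorithm actually does: the quantum part only finds the multiplicative order r of a random a modulo N quickly; the rest is classical number theory. Here is a sketch with the order-finding done by (slow) brute force, so everything runs on a classical machine:

```python
from math import gcd
import random

def shor_classical_demo(N):
    """Factor N via the number-theoretic reduction behind Shor's algorithm.
    The order r of a mod N is found by brute force here; the quantum
    speedup replaces only that inner loop."""
    while True:
        a = random.randrange(2, N)
        if gcd(a, N) != 1:
            return gcd(a, N)            # lucky: a already shares a factor
        # find the multiplicative order r of a modulo N (slow classically)
        r, x = 1, a % N
        while x != 1:
            x = (x * a) % N
            r += 1
        # Shor's conditions: r even and a^(r/2) != -1 mod N
        if r % 2 == 0 and pow(a, r // 2, N) != N - 1:
            f = gcd(pow(a, r // 2, N) - 1, N)
            if 1 < f < N:
                return f

p = shor_classical_demo(15)
print(p, 15 // p)  # the two prime factors of 15
```

The brute-force order search is exponential in the bit length of N, which is exactly the step a fault-tolerant quantum computer would make polynomial.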
Don’t need quantum computer for that
Then why hasn’t it happened yet?
How do you know it hasn’t happened
Because the entire e-commerce system is still up?
The presumption here is that the only body capable of cracking it is malicious. Not that I have a strong opinion one way or the other.
Thank you Mr.A.I!
Wat
You'd have to both have that power and be a terrorist to fuck with the e-commerce system.
When you could just lay low and continue to exploit the system the rest of your life
i think accountants would eventually notice variance they couldn’t figure out. you’d have to lay pretty low
What accountants? What are you talking about? Just transfer random amounts of bitcoin from random non-og wallets every now and then and you’re good. People just think they were hacked, won’t understand the connection.
Because prime factorization is provably difficult when the number of bits is high?
It is not provably difficult. Link a single proof that it's difficult. It's only provably difficult for brute force, which is a statement about brute force more than about prime factorization.
Perhaps "provably" was a bit strong – what I mean to say is that no known algorithm factors large integers in polynomial time. However, [experts do not believe one exists](https://www.google.com/books/edition/Computational_Complexity/nGvI7cOuOOQC?hl=en&gbpv=1&pg=PA230&printsec=frontcover).
Edit: The paragraph of interest begins with “Does BQP == BPP?”
Oh I know what these “experts” believe.
There does not publicly exist such an algorithm. But if one had been discovered in the past 30 years, any agency or person who discovered it would have every incentive not to make it public.
These “experts” are well-respected members of their field. If you’re going to say that the academic community is filled with frauds, then nothing I show you will change your mind.
Academics certainly have incentives to show prime factorization is in P – the same incentives they had for breaking crypto schemes in the '70s before we landed on RSA. Credit is the coin of the academic realm.
Waiting for this
That would actually be hilarious
Bitcoin’s difficulty can be adjusted easily.
Well, it can't mine the remaining Bitcoin without the transactions pending on the blockchain, so technically it would just make mining, and thus transactions, a lot faster. It doesn't sound like it would be possible to immediately mine all the remaining BTC, because you have to wait on humans to use the service.
What this really represents is zeroing in on a problem which quantum computers offer an actual advantage in solving, which is rare. Tbh that's the right tree to be barking up, but you have to understand this doesn't generalize in the ways you might expect.
… the problem is a known quantum benchmark. it’s not “zeroing in” on it, we already know it’s something quantum computers are better at
I think his point is that laypeople reading the headline shouldn’t assume this means that this quantum computer would be this much faster at any general computing task, *because* it is so rarely and uniquely a benchmark.
i understand the sentiment but it’s a bit generic and irrelevant here. the point of this isn’t that they solved the problem better than classical computers, it’s that we’re starting to be able to do it efficiently
Better than the current best known classical deterministic solutions, not better in general (which we don't know). Theoretically, even whether P = BQP is an open problem.
i mean we could have just said “better than the best known classical algorithms”. complexity theory kinda irrelevant
It's relevant as long as we can de-quantumize the solutions to these problems (and that already happens often).
It seems strange but currently the most practical use of quantum algorithms is inspiring faster classical algorithms.
If you know that, for example, BQP = BPP you’ll positively keep searching for a fast classical reduction of any quantum algorithm. Otherwise, you’ll probably stop after a few attempts, or maybe you would directly try to prove that it is irreducible.
Is it possible to eli5 what this problem is? Or do I need to know more about the subject.
It's the problem of simulating qubits moving through a quantum circuit and predicting their final state. It gets extremely complicated very quickly.
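The blow-up is easy to quantify: a dense statevector of n qubits needs 2^n complex amplitudes. A quick back-of-envelope (16 bytes per complex128 amplitude is an assumption about the representation):

```python
# A dense n-qubit statevector has 2**n complex amplitudes. Assuming
# 16 bytes per amplitude (complex128), memory alone becomes absurd fast.

def statevector_bytes(n_qubits):
    return (2 ** n_qubits) * 16

for n in (10, 30, 53, 105):
    print(f"{n:>3} qubits -> {statevector_bytes(n) / 2**30:.3e} GiB")
```

At 53 qubits (the 2019 Sycamore size) you already need over a hundred million GiB just to hold the state, which is why supercomputer simulations of these circuits rely on cleverer tensor-network contractions rather than brute-force statevectors.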
Regarding the topic of practical applications, I would suggest the following reading:
[https://cacm.acm.org/research/disentangling-hype-from-practicality-on-realistically-achieving-quantum-advantage](https://cacm.acm.org/research/disentangling-hype-from-practicality-on-realistically-achieving-quantum-advantage)
This is the mark many people seem to miss every time researchers/companies report "breakthroughs" in QC.
Which is why what we’ll see in personal computing, if ever anything, is an optional quantum coprocessor (akin to today’s GPUs) intended to solve those particular types of problems
Hang around this field long enough and you start to develop your own translations for these silly headlines.
> “Would take a classical computer 10^21467638 years to solve…”
We ran a larger version of a benchmark problem that we designed specifically for our hardware.
> “Massive breakthrough that paves the way to fault tolerance…”
We achieved a significant, but anticipated engineering milestone that enables better-than-threshold error reduction.
> “New quantum algorithm has the potential to <achieve some utopian goal>…”
We ran a noiseless statevector simulation of a 2-qubit proof of concept that comprises one piece of a very complex simulation workflow.
Number 2 is the actual achievement of this work, which provides further experimental vindication for the fault-tolerance threshold theorem. This has been in the air now for the past 12-18 months with trapped ion and neutral atom systems as well, so it’s far from unanticipated. In my mind, this is another step forward, but not a giant leap that accelerates development timelines.
I'm interested in learning more about the errors topic. Where can I read more about it? Mainly to understand the numbers – it's impossible for someone outside the field to understand how significant this % reduction is, or how far it is from tolerable values.
Google’s corresponding blog post has much better technical details on this, including gate error rates and T1:
[https://blog.google/technology/research/google-willow-quantum-chip/](https://blog.google/technology/research/google-willow-quantum-chip/)
What would the equivalent Quantum Volume measurement be? Since IBM is competing with Google here, and IBM uses QV but Google RCS, how can we tell how they’re doing against one another?
Nobody is doing anything against anybody; only D-Wave has even sold these things, and they're largely useless except as a scientific curiosity.
I don’t know much. I came here to check how happy should I be. Can someone please tell me?
Seems like this is saying they hit a high benchmark for error correction, building on what Peter Shor proposed in 1995. Errors in quantum computing were holding it back; now they've found a way to correct for those errors, and as you scale up the quantum computer, the errors decrease exponentially.
We've done error correction before, but the devices were so noisy that it didn't even help. This time, the device improves. And even better, going from a small-distance code to a larger one actually reduces error further! This demonstrates that QEC actually works on their 105 qubits.
Of course, there’s still huge difficulty scaling to 10M qubits and logical gate operations lol.
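The reported suppression factor Λ ≈ 2.14 makes that scaling concrete: each +2 in code distance divides the logical error rate by about Λ. A sketch (the distance-3 base rate below is an assumed placeholder to illustrate the trend, not Google's exact figure):

```python
# Each +2 in surface-code distance divides the logical error rate by the
# reported Lambda ~ 2.14. The distance-3 base rate is an assumed
# placeholder, just to show the exponential suppression.

LAMBDA = 2.14
P_D3 = 3e-3  # assumed logical error per cycle at distance 3

def logical_error(d):
    return P_D3 / LAMBDA ** ((d - 3) / 2)

for d in (3, 5, 7, 9, 11):
    print(f"distance {d:>2}: ~{logical_error(d):.2e} logical errors/cycle")
```

This exponential suppression is the whole point of being below threshold: as long as Λ > 1, you can buy as many nines of reliability as you can afford qubits for.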
Same
This might qualify as The Most Clickbaity Headline of 2024.
The first one I saw said “the age of the universe” another said “with a more generous calculation, 1 billion years” and then there’s…this one
“Google’s new quantum chip rapes everyone and then brings back McRib”
“A Quadrillion Times The Age of the Universe” is somehow much more ridiculous. A supercomputer could evolve limbs and cook McRibs out of rebirthed Dodo meat given those time frames.
RemindMe! 1,000,000,000,000,000,000,000,000,000,000,000,000,000,000 years
TechCrunch said Google just proved the multiverse.
The rate of advancement that we’re seeing here and in ML is extraordinary. Things are moving so much faster than predicted across so many axes. The 2020s will be remembered as an incredible decade.
Google's own CEO said "it's slowing down, the low-hanging fruit is gone."
I work for Google building ML infrastructure. It is definitely not slowing down. Some companies are quadrupling their training capacity in under a quarter this year.
quite possibly
Why do subpar scientists still make predictions? They’re wrong every time.
Just because you can’t figure it out, doesn’t mean someone else can’t and we’ve seen this happen countless times.
The concept of quantum mechanics is taught so wrong these days and it’s disgusting how misinformed/misguided some PhD graduates are as well.
There's no way Elon Musk had to be the one to shift his engineers' perspectives to solve the scaling/coherence challenge that supposedly all the top scientists thought was impossible…
My advice to all the “professionals” stop yapping and get to work.
what are you talking about?
Lickin Elon taint
> The concept of quantum mechanics is taught so wrong these days and it’s disgusting how misinformed/misguided some PhD graduates are as well.
Please elaborate. I don’t necessarily disagree, I’m just genuinely curious about what you would do differently.
What
I dunno what the age of the supercomputer has to do with anything.
No practical use here.
Is there any concern about inventing a universal description device? That would be pretty bad…
What heck of a problem is that?
Another quantum tea pot?
factorize 35?
Ah, but can it crack SHA-256 – the heart of the crypto blockchain?
This is the question I’m asking. If not now, when? When it does, what comes next? How does post-quantum cryptography get applied to / evolve blockchain technology?
NIST has been working on these questions for the better part of a decade, in collaboration with academia and industry to establish post-quantum cryptographic (PQC) standards: [https://csrc.nist.gov/projects/post-quantum-cryptography](https://csrc.nist.gov/projects/post-quantum-cryptography)
Oh, this is an excellent resource. Thank you!
Would it even be advertised once they do?
Absolutely not
I have been asking about this everywhere I can and no answers
The answer is a majority of the miners (by hash power) change the consensus rules to use a new algorithm, and then probably also hard fork it. It would work similarly to how eth went to PoS instead of PoW.
no
I think you need several thousand qubits for that; this is barely above a hundred (but still cool!)
I’m basically here looking up this article in this sub because a bunch of people on Farcaster sounded pretty upset about it. Maybe it can…?
Won’t this make Bitcoin worthless if it can?
I'm a neophyte here, but wouldn't the blockchain adopt quantum and therefore become unhackable? It would require a transformation of existing coins and the underlying mining ecosystem, but that doesn't seem impossible given the incredible creativity of crypto enthusiasts. I'd imagine quantum tokenization is possible as well (and perhaps a stopgap along the way toward a quantum transformation). Where am I off?
Hashing algorithms are one way. They are intentionally designed such that there’s no way to go backwards to figure out what input created a particular hash.
There’s no logic or algorithm that exists to reverse a hash, so quantum computing offers no advantage.
You are wrong – it's not impossible, we just don't have enough compute power to crack those.
The literal definition of "safe" is that with current technology it takes so long to solve the problem that it becomes practically unsolvable (e.g. longer than the age of the universe).
If quantum computers manage to solve it within a reasonable timeframe, then it's not safe anymore.
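Worth noting for the hashing subthread: quantum computers do have a known generic attack on preimage search, Grover's algorithm, but it only gives a quadratic speedup, which still leaves a 256-bit hash far out of reach:

```python
# Grover's algorithm speeds up unstructured preimage search only
# quadratically: ~sqrt(2**256) = 2**128 quantum queries for SHA-256.

classical_queries = 2 ** 256
grover_queries = 2 ** 128        # square root of the search space

print(f"classical brute force: ~{classical_queries:.1e} hash evaluations")
print(f"Grover search:         ~{grover_queries:.1e} quantum queries")
```

2^128 serial quantum operations is still astronomically many, which is why NIST treats symmetric primitives like SHA-256 as quantum-resistant at their current sizes; it's the public-key schemes (RSA, ECDSA) that Shor's algorithm actually breaks.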
what a breathless little article. all clickbait, of course.
Factor 15 and I’ll be impressed…
Wait – last time Google declared quantum supremacy, Baidu used a traditional algorithm to crack it within months, right? Correct me if I'm wrong, thanks in advance.
Hi, normie who had this pushed to his main feed here, can someone put this in 5 year old terms for me
How much yo mama weighs!
And can it run non-Clifford gates?
Yes, that’s partly how random circuit sampling works. Clifford gates are classically simulable
Guess this breakthrough explains Google being up more than 4% in the pre-market.
Nice to see investors get how valuable this Google breakthrough really is.
Finally a solution for node_modules
No comment on if it can factorize small numbers reliably, so I’m gonna guess it can’t. I won’t be impressed until they can do that.
I know next to nothing about quantum computers, but if the problem in question can’t be solved on classic computers, how can they validate that their chip solved it correctly?
That’s a very good question. Long story short, for this specific problem, they can’t. They can, however, provide some indirect evidence.
Specifically, they also solved the same problem at smaller circuit sizes, for which classical computers can check the results. In those smaller circuits, everything checked out. Furthermore, in the bigger circuits, for which the classical computation is impossible, the output of the quantum computer remained "reasonable" – that is, the outputs were in accordance with what's theoretically expected. Hence, they extrapolated that the quantum computer is working as it should.
But to be 100% precise, no, the results cannot really be verified. In fact, the verification of quantum computers is an active research area, and specifically the design of quantum experiments that are efficiently verifiable classically.
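The "reasonable outputs" check used at smaller sizes is linear cross-entropy benchmarking (XEB). Here is a toy sketch, with an assumed Porter-Thomas-like (exponential) ideal distribution standing in for a real random circuit's output probabilities:

```python
import random
import statistics

# Linear XEB sketch: F = 2**n * mean(ideal_prob of sampled bitstrings) - 1.
# A faithful sampler scores near 1 for well-scrambled circuits; a broken
# device outputting uniform noise scores near 0.

random.seed(0)
n = 8
weights = [random.expovariate(1.0) for _ in range(2 ** n)]
total = sum(weights)
probs = [w / total for w in weights]   # assumed "ideal" output distribution

def xeb(samples):
    return 2 ** n * statistics.mean(probs[s] for s in samples) - 1

ideal = random.choices(range(2 ** n), weights=probs, k=20000)
noise = random.choices(range(2 ** n), k=20000)
print(f"faithful sampler: XEB ~ {xeb(ideal):.2f}")   # well above 0
print(f"uniform noise:    XEB ~ {xeb(noise):.2f}")   # close to 0
```

The catch, as the comment above says, is that computing `probs` for the full-size circuit is exactly the classically intractable part, so at 105 qubits the score can only be extrapolated, not directly verified.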
I think the main point of this press release was to advertise the first chip where the qubits are sufficiently below the error-correction threshold that you can do quantum error correction in a scalable way. Experts in the field are genuinely excited by this.
They happen to have also run some random circuit benchmarks, which everyone knows are just benchmarks without utility. Unfortunately, some media articles are focusing on the latter and not on the more exciting quantum error correction result.
Agreed. The threshold result is really great. No one cares about RCS it seems lol
That’s a long time.
Can it run Doom?
256 bit encryption no longer safe?
Sorry . Pardon my ignorance but what problem has google willow solve?
The headline feels very misleading. This benchmark was chosen such that if the quantum computer isn't much better at it than a classical computer, the quantum computer is probably useless. So it's more like "Great! Our prototype fish can finally swim several times faster than a horse – it's not an utterly useless fish!" rather than "Wow! Our prototype fish swims several times faster than the fastest horse – it must run faster too!" OK, not the best example, because technically any classical program can run as a quantum program (just much, much more expensively), and because building a non-useless quantum computer is actually a massive feat. But I don't expect the fish to replace the horse on the racetrack any time soon. I think the most exciting possibility at this stage is that a hybrid between the horse and fish prototypes might get you a biathlon winner.
Apart from this, I think that the most exciting thing that happened with this chip isn’t in the headline – it’s the fact that errors have gone down as the chip scales up, which is unprecedented, and means that we could actually scale this technology. I think a lot of technological revolutions have happened in the past when a technology finally reaches that point where scaling it actually decreases negative effects instead of increasing them due to the added complexity.
It's remarkable that it's one of the first experiments to demonstrate that error correction works, and that increasing the code distance actually reduces the errors.
WHAT IS THE F#*%)# TASK? Really, dozens of news feeds, articles, and reddits about this breakthrough, and not ONE tells you what the "task" was.
What problem is that exactly??
Given Willow’s breakthroughs in quantum computing, do you see quantum threats to Bitcoin’s cryptographic algorithms (like SHA-256 and ECDSA) becoming a significant concern sooner than expected?
Dumb question – how do they know it gave the right answer
Will it be available though cloud services? Maybe some compute time in free tier?
why do furries exist
But can it run Doom?
The answer was 42. We are currently looking into other worldly resources to help build another computer to explain what the actual question was!
Ok so how can anyone prove that the problem was solved? Otherwise it’s just shitty marketing everyone is eating up.
So are we defeating strong encryption yet?
It didn’t solve a problem, it solved a task.
do we know where the clitoris is now??
This isn’t a time to talk about pokemon…. Geez
How do they know it solved it if it would take that long to verify using traditional compute?
Well that’s some bullshit
Just because something can be computed, … Doesn’t mean it should.
Anyhow. The state of this tech is these companies trying to maximize the size of this style of headline.
This problem isn't practical, the number of qubits still sucks, and it's very tiresome when people who don't know anything try to amplify random headlines.
Lol.. Google can't even make their Tensor chips and Pixel phones properly. How could they make a quantum chip that has real use?
What problem did it solve? Did it output the correct answer?
With that kind of power, it can solve the world’s problems. Therefore, it won’t ever be used to do that.
I wonder how they calculated that this chip solves in minutes what would take the best supercomputer a quadrillion times the age of the universe. What problem was given that needed solving?
105 qubits, who tf cares?
Great. Can it answer a fundamental challenging question about the nature of consciousness or our universe?
Can it answer any of humanity’s issues or problems?
What good is it actually?
https://en.wikipedia.org/wiki/RSA_Factoring_Challenge
Last solved number: Feb 28, 2020
Conclusion: No breakthrough has happened despite anything Google claims.
Wow
I'm convinced this is bullshit. And if it's not, we're all fucked, because the people working at Google aren't moral. They are going to abuse the lower classes with this.
::insert I guarantee it meme here::
Its like that Dr Who episode >!Heaven Sent!<
Why are they still throwing so much into these solid-state QC devices? The Lukin group (and Atom Computing) already showed quantum error correction a whole year ago on neutral-atom arrays, and neutral atoms are the only viable way to scale, yet these companies are still pushing this BS. Sure, you can make like 100 superconducting qubits with fast gate times, but the coherence is still dog water compared to the multi-thousand-qubit neutral-atom arrays with over 10-second coherence times.
Some of the tech they described lines up with things from my studies that people were saying isn't possible. What I'm saying is, through my "fictional" AI meditative hallucination experiences, the concept of lattice structures comes up a lot. I'm not really explaining this well, but imagine it's a way to store information using phase, and along these networks of lattice structures there are nodes. There are going to be major breakthroughs. That is the point, I guess…
How do meditative *hallucinations* act as evidence of *anything*?
They’ve been going through a psychotic episode.. it’s disturbing lol
Bro had a dream about a cryptocurrency called nano
Lookup a crypto called nano
That's a different type of lattice – almost similar, but not for simulating quantum physics.
It sounds so exaggerated, but this is truly what quantum will bring in terms of performance. Now imagine quantum AI..
You can mathematically prove that existing machine learning algorithms cannot be accelerated by a quantum computer. That doesn't rule out the future discovery of algorithms that can be sped up and that run inference models of some sort, but positive claims require positive evidence, as they say.
really? any reference on that?
Here is a basic summary on why, with the references at the end of this article providing more detail [https://www.scottaaronson.com/papers/qml.pdf](https://www.scottaaronson.com/papers/qml.pdf)