December 9, 2024 Any chip industry or CS guys want to comment on Google’s Willow chip announced today: https://blog.google/technology/research/google-willow-quantum-chip/ I’m trying to make sense of the ‘breakthrough’ as lots of small guys are fawning over the announcement and a few are taking exception. Any opinions or analysis?
December 9, 2024 I hope Brad and Will talk about this on the Tech Pod this week. Edited December 9, 2024 by SimonBolivar
December 10, 2024 I'm not a quantum computing person, but I've read a few white papers and understand what they stand to solve. The promise of quantum computing is to probabilistically solve certain very hard problems dramatically faster than a classical computer can. The classic examples are the kind of problem (think NP) where it's very easy to check a solution's correctness, but extremely hard to find a correct solution in the first place. This concept is what underlies cryptography and our current paradigm of encryption (and yes, I know there are "quantum-safe" protocols now): if you have the right key, you can verify that very easily, and if you don't have the right key, you're almost certainly NOT going to guess the right one. That's sort of what the announcement gestures at, in that it solved a problem in minutes that would otherwise take 10,000,000,000,000,000,000,000,000 years. But this is all just research in a lab with idealized problems, not generalized solutions to large-scale problems. As far as we know at least, lol. Definitely interesting in the world of computer science, though.
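To make the easy-to-check / hard-to-find asymmetry concrete, here's a minimal Python sketch (toy numbers, nowhere near real key sizes): verifying a proposed factorization is one multiplication, while finding the factors by brute force takes millions of trial divisions even for a small toy number, and is hopeless at real cryptographic sizes.

```python
# Toy illustration of "easy to verify, hard to find" (not real crypto).
import math

def verify_factorization(n: int, p: int, q: int) -> bool:
    """Checking a proposed answer is a single multiplication."""
    return p > 1 and q > 1 and p * q == n

def find_factors_brute_force(n: int) -> tuple[int, int]:
    """Finding the answer by trial division: work grows with sqrt(n).
    For the ~2048-bit numbers used in RSA this is astronomically out of reach."""
    for p in range(2, math.isqrt(n) + 1):
        if n % p == 0:
            return p, n // p
    raise ValueError("n is prime")

n = 1_299_709 * 15_485_863                    # ~2e13 toy number; RSA moduli are ~2048 bits
print(verify_factorization(n, 1_299_709, 15_485_863))  # instant: True
print(find_factors_brute_force(n))            # already over a million trial divisions
```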
December 10, 2024 Grade A Bullshit.

The tweet says "a breakthrough that can reduce errors exponentially as we scale up using more qubits, cracking a 30-year challenge in the field"

Breaking that down: The "30-year challenge" is the fundamental problem in quantum computing that "qubits", the rough quantum computing equivalent of a bit, are unreliable and prone to failure. https://www.livescience.com/technology/computing/google-willow-quantum-computing-chip-solved-a-problem-the-best-supercomputer-taken-a-quadrillion-times-age-of-the-universe-to-crack

Quote
Quantum computers are inherently "noisy," meaning that, without error-correction technologies, every one in 1,000 qubits — the fundamental building blocks of a quantum computer — fails. It also means coherence times (how long the qubits can remain in a superposition so they can process calculations in parallel) remain short. By contrast, every one in 1 billion billion bits fails in conventional computers.

The breakthrough, and maybe it is a breakthrough, is that they use some type of array to error correct, and the error correction gets better as the array gets bigger. Sounds great, right? Until you read deeper and realize the scale of what they tested. This image is from the Google blog: https://research.google/blog/making-quantum-error-correction-work/

Yep, you're reading that right: the breakthrough research is with 17 qubits, 49 qubits, and 97 qubits. The entire Willow chip has a grand total of 105 qubits. Not exactly ready to take on any conventional computers.

As for the claims of solving a problem that would take a conventional computer a lifetime: almost certainly bullshit. I don't care enough to dig into what they are talking about here, but just about every other time a similar claim has been made for quantum computing, someone looks at it and realizes they have the conventional computer doing some unoptimized brute-force type algorithm. A quick rewrite to a good algorithm and it crushes the quantum computer. Or, another one I've seen: the "beat the conventional computer" result is based on a simulation of what could happen if you built the quantum computer scaled up by a million times.
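For context on where those 17 / 49 / 97 numbers come from: my understanding (treat the formula as an assumption on my part, it's not spelled out in the blog post) is that a distance-d surface code uses d² data qubits plus d²−1 measurement qubits, i.e. 2d²−1 total, which lines up exactly with the three sizes they tested. A quick Python sanity check:

```python
# Surface-code qubit count: d^2 data qubits + (d^2 - 1) measure qubits.
# (My reading of the standard surface-code layout -- an assumption, not a quote.)
def surface_code_qubits(d: int) -> int:
    return 2 * d * d - 1

for d in (3, 5, 7):
    print(f"distance-{d}: {surface_code_qubits(d)} qubits")
# distance-3: 17 qubits
# distance-5: 49 qubits
# distance-7: 97 qubits  -- nearly the whole 105-qubit Willow chip for ONE logical qubit
```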
December 10, 2024 The error reduction breakthrough is a big, big deal. What was holding quantum computing back was that you couldn't scale it because of the increasing errors. Google is basically saying they've figured that out and can now begin building large-scale quantum computers. They are now saying the larger the system (more qubits), the more accurate the results actually become, where previously more qubits meant less accurate results. Edited December 10, 2024 by MonkeyDoughnut
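To put rough numbers on "more qubits means fewer errors": the claim is that each step up in code distance cuts the logical error rate by a roughly constant factor (Google reports a suppression factor a bit over 2), so below threshold the error falls off exponentially as the code grows. A back-of-the-envelope Python sketch; the factor and starting error rate below are illustrative assumptions, not figures from the paper:

```python
# Back-of-the-envelope: if each increase in code distance (d -> d+2) divides
# the logical error rate by a factor LAMBDA, errors shrink exponentially as
# the code gets bigger.  Values below are illustrative assumptions only.
LAMBDA = 2.0          # assumed suppression factor per distance step
P_LOGICAL_D3 = 3e-3   # assumed logical error rate per cycle at distance 3

def logical_error_rate(d: int) -> float:
    steps = (d - 3) // 2          # number of d -> d+2 increases from distance 3
    return P_LOGICAL_D3 / (LAMBDA ** steps)

for d in (3, 5, 7, 11, 15, 27):
    print(f"distance {d:2d}: ~{logical_error_rate(d):.1e} errors per cycle")
# The point: below the error-correction threshold, adding qubits makes the
# logical qubit *better*, where pre-Willow hardware got worse as it grew.
```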
December 12, 2024 The Bullshit spigot has fully opened.

The Headline: Google says its new quantum chip indicates that multiple universes exist
The Link: https://www.yahoo.com/finance/news/google-says-quantum-chip-indicates-192059739.html

The Money Quotes:

Quote
Google Quantum AI founder Hartmut Neven wrote in his blog post that this chip was so mind-boggling fast that it must have borrowed computational power from other universes.

Quote
Willow’s performance on this benchmark is astonishing: It performed a computation in under five minutes that would take one of today’s fastest supercomputers 10^25 or 10 septillion years. If you want to write it out, it’s 10,000,000,000,000,000,000,000,000 years. This mind-boggling number exceeds known timescales in physics and vastly exceeds the age of the universe. It lends credence to the notion that quantum computation occurs in many parallel universes, in line with the idea that we live in a multiverse, a prediction first made by David Deutsch.

The benchmark that they claim proves a multiverse is the same crap they've been pushing for years. It's called "Random Circuit Sampling" (RCS) and is a made-up task of "sampling from a random quantum circuit". Yes, that's right: the benchmark requires a quantum circuit. Classical computers don't have quantum circuits to sample from, so they have to simulate the quantum circuit and do stupid shit like generate all possible outcomes. It's as stupid as if you wanted to benchmark an Apple computer reading from an Apple-branded disk against a PC. But since PCs don't have Apple-branded disks, you make the PC simulate an entire Apple computer, including MacOS and the Apple disk drive. Everyone would laugh at that, so they wrap the RCS test in layers upon layers of PhD-level Math and Physics.

I was struggling to describe RCS, so I asked ChatGPT about it:

Quote
Why It's Not True Benchmarking
True benchmarking involves comparing systems based on their ability to perform the same task under the same constraints, often with practical applications. RCS differs because:
- Specialized Task: The task (sampling from a random quantum circuit) is designed to favor quantum systems and has no immediate practical application.
- Asymmetric Requirements: The quantum computer generates samples quickly but doesn’t output the full probability distribution. The classical computer is tasked with verifying these samples or simulating the distribution, which is exponentially harder.
- No Practical Use Case: Unlike traditional benchmarks, which assess performance on tasks like sorting or matrix multiplication, RCS is not solving a real-world problem.
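For what it's worth, the "exponentially harder" part of that quote is easy to see on the classical side: an exact state-vector simulation of n qubits means holding 2^n complex amplitudes in memory. That's also exactly why the "unoptimized brute force" criticism matters — smarter classical methods avoid storing the whole thing. A rough memory estimate, assuming the naive state-vector approach:

```python
# Memory needed to hold a full state vector of n qubits:
# 2**n amplitudes, each a complex double (16 bytes).
# This is only the naive simulation method -- the whole "unoptimized brute
# force" criticism above is that cleverer classical methods often exist.
def state_vector_bytes(n_qubits: int) -> int:
    return (2 ** n_qubits) * 16

for n in (20, 30, 40, 53, 105):
    print(f"{n:3d} qubits: {state_vector_bytes(n):.3e} bytes")
#  20 qubits: ~16 MiB   -- a laptop shrugs
#  30 qubits: ~16 GiB
#  40 qubits: ~16 TiB
#  53 qubits: ~128 PiB  -- already beyond any single machine
# 105 qubits: ~6.5e32 bytes -- no amount of RAM will ever hold this
```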
December 12, 2024 Author While I do not possess PhD-level math and physics knowledge, my bullshit detector is of a fairly high order. I think this is Sundar trying to goose his stock price.
December 13, 2024 14 minutes ago, MalibuSheriff said: While I do not possess PhD-level math and physics knowledge, my bullshit detector is of a fairly high order. I think this is Sundar trying to goose his stock price. If that’s the case it’s been working.
December 13, 2024 Author Yep. Reminds me of one of my favorite quotes: “I think I've been in the top 5% of my age cohort all my life in understanding the power of incentives, and all my life I've underestimated it. And never a year passes but I get some surprise that pushes my limit a little farther.” - Charlie Munger
December 27, 2024 If you're looking to have a better feel for the sorts of ideas quantum computing works on and with, this video is pretty solid. I like the fundamental descriptor: a classical computing input is a zero OR a one; a quantum computing input is a zero AND a one. Boolean logic and fundamental proofs still hold consistently, but inputs are more complex than simple binary. It's the difference between hearing a single note, and a chord of notes creating overtones (constructive interference) that wouldn't exist but for everything being just so.

The computational magic comes in building a quantum system/algorithm that collapses onto the correct set of inputs for a classically hard computing problem. And wouldn't you know it, highly dimensional inputs like what we use for crypto and LLMs and ML are logically similar. The work being done now is to set up a "hello world" for a real computational problem. The disruption will be when they build a quantum system running an algorithm that can factor the large numbers underpinning today's encryption (Shor's algorithm), which is already theoretically possible. Willow isn't the goal line, but it certainly is a fresh set of downs. Edited December 27, 2024 by Captainant
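If you want the "zero AND a one" idea in code, here's a minimal numpy sketch: a qubit's state is a 2-vector of complex amplitudes, a Hadamard gate puts |0⟩ into an equal superposition, and measurement collapses it to 0 or 1 with probabilities given by the squared amplitudes. This is just the textbook math, nothing specific to Willow's hardware:

```python
import numpy as np

# A qubit's state is a length-2 complex vector of amplitudes.
ket0 = np.array([1.0, 0.0], dtype=complex)                   # |0>
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)  # Hadamard gate

psi = H @ ket0              # equal superposition: (|0> + |1>) / sqrt(2)
probs = np.abs(psi) ** 2    # Born rule: measurement probabilities
print(probs)                # [0.5 0.5] -- "a zero AND a one" until measured

# Measurement collapses the state; repeating it gives ~50/50 outcomes.
rng = np.random.default_rng(0)
samples = rng.choice([0, 1], size=1000, p=probs)
print(samples.mean())       # ~0.5
```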