Check out AMD loaner program: tinyurl.com/222dzww9
Where can you buy a PC with that tech?
I remember, back in 1990, hearing the term quantum computers. The first letters out of my mouth were B and S. Here we are 35 years later, and we are probably still 10-15 years away unless AI speeds it up. They likely already have quantum computers in their above-top-secret black operations program, and that was likely the case 35 years ago. Just like they have their antigravity craft and plasma propulsion systems. I have always thought that the word quantum is a misnomer. Quantum implies subatomic. There are no tools to prove that any subatomic model is correct. Subatomic models come from the imagination of humans. So quantum implies a make-believe world from our imagination.
Quantum computing will neither be a success nor a failure; it will forever be held in a state of uncertainty.
I see what you did there / I don't get it.
Have you seen my cat, btw? It may be missing...
@@olldomu5790 Sorry, I accidentally ran it over. He made it though. Sorry for your loss.
Depends if you looked at it. Or not.
@@JCBEos I wanted to know if you collapsed my cat or not. I daren't do that, she's a good boi
Yeah, this is easily the best comment... Congratulations lol. You deserve it
Distracted by temperature, bad vibes or noise? So wait a minute... all this time, I was a quantum computer and didn't know it?
You probably know that this is neither a true nor a false question...
Yup
Interesting opinion of observation.
@@mvuto137 😂
😂🥁
"Quantum computers have the potential to solve most critical computing tasks in just a few hours" - The claim is indeed an overstatement. Present-day quantum computers are still severely limited in both the number of qubits and the quality (fidelity) of their operations. While there are well-known quantum algorithms that could, in theory, outperform classical methods for certain tasks-such as factoring large integers (Shor’s algorithm) or speeding up searches in unstructured databases (Grover’s algorithm)-their practical applicability remains confined to very specific problem sets. For most critical or broadly relevant computational tasks, current quantum hardware falls short of providing any substantial advantage. It’s far from accurate to say that quantum computers could solve the majority of complex computing challenges within just a few hours, either now or in the future.
Google just changed that
@@Airbender131090 They might get useful quantum computers soon, but they haven't made any progress on escaping the confinement to very specific problem sets. A quantum computer cannot solve all problems faster, just a very specific set.
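To make that "very specific set" concrete, here is a minimal two-qubit Grover search, a sketch assuming Qiskit and its Aer simulator are installed; it amplifies one marked state out of four, which is exactly the kind of narrow, structured task these algorithms target:

```python
# Minimal 2-qubit Grover search marking the state |11>
# (sketch only; assumes qiskit and qiskit-aer are installed).
from qiskit import QuantumCircuit, transpile
from qiskit_aer import AerSimulator

qc = QuantumCircuit(2)
qc.h([0, 1])       # uniform superposition over all 4 basis states

qc.cz(0, 1)        # oracle: flip the phase of the marked state |11>

qc.h([0, 1])       # diffusion operator: reflect amplitudes about the mean
qc.x([0, 1])
qc.cz(0, 1)
qc.x([0, 1])
qc.h([0, 1])

qc.measure_all()

sim = AerSimulator()
counts = sim.run(transpile(qc, sim), shots=1024).result().get_counts()
print(counts)      # '11' should dominate: one iteration suffices for N = 4
```

Nothing in this circuit generalizes to arbitrary workloads, which is the point of the comment above.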
I don't care about the speed claims; the fact it's already solved a problem no classical computer could is an amazing milestone in computing history. Even if the calculation is useless, it's the principle. We don't even know of problems we can use quantum computers for, never mind trying to solve classical ones; it can give us access to new problems we had no idea we could answer, perhaps programmable matter, or simply simulating molecules to do tests we otherwise couldn't. One thing I heard they did was simulate a hydrogen atom and take some measurements they otherwise couldn't. That's being able to peer deeper into reality than ever before. The things possible with quantum computers could be wilder than we can imagine. That being said, a drop of water contains more atoms than stars in our Milky Way galaxy, or the visible universe, I forget which, so it would take that many qubits to accurately simulate a drop of water, billions upon billions, which is just not going to happen anytime soon. But we may not need to; simulating a few low-level atoms could be enough to run experiments we otherwise couldn't with real atoms and give us new answers. Maybe it could even lead to something like a warp drive so we can finally explore the universe.
I suspect your cynicism will fall victim to the passage of time, as do most naysayers' objections to infant technology. This will be a matter of time. It took us ~70 years to go from vacuum tube computers to modern processors built on 2-3nm process nodes and regularly reaching nearly 6.0 GHz. I think quantum computers will follow a similar path.
@@sorokan761 They still cannot apply it to certain applications because of a lack of hardware and software support.
I was eagerly waiting on Anastasi's take on this development, thank you!... Totally agree that AI and QC are synergetic... Still not convinced about the Many Worlds interpretation, thus not sure if there is an 'expanded classical' shortcut that checkmates the qubit supremacy (just a personal opinion, time will tell I guess 😅). Cheers!
Thank you:)
Quantum spell checker can't even get my classical spell checker to operate properly. 😂. Excellent video.👍
AI + Quantum = I'm moving into the woods and checking out of society. Good luck to you all, it was fun
Good luck trying to leave at this point; all you can hope is that the quantum A.I. likes you better than other humans. But I agree, woods life in nature is way better; humans f'd up.
I always find your videos incredibly informative. You present complicated information clearly.
It was nice indeed and much better than the usual surface coverage in the news: I loved that you chatted with experts on the topic.
On the other hand, I would love to see an even more in-depth video. For example, I still don't understand those colorful red-black diagrams depicting qubits and want to understand how the training and fine-tuning of error-detection neural networks was done.
The devil is often in the details, and that is also where one finds the potential areas of improvement. Oh well, I guess the video has to somehow be accessible to the wider public, and I can also just go and read the actual published paper.
Thanks for the explanation. All the recent Google Willow news has me very curious about quantum computing. Also, congrats on getting sponsored by such a big chip maker.
If I close my eyes it feels like I'm listening to Bjork talking about quantum computing
You ever tried building an AI avatar of her?
Icelandic ... Russian, Meh! What's the difference! Lol
If I was listening to Bjork, I would definitely close my eyes.
Thanks!
thank you!
I've been following your channel for more than a year; I'm very happy you added Italian, I can understand the videos better!
THANK YOU 🥰
Tidbit: Arthur C. Clarke, who wrote 2001: A Space Odyssey, was snubbed by IBM when they were asked to participate by supplying equipment. So he renamed the ship's computer to reflect that. Hence, each letter in "IBM" stepped down one letter gives H.A.L.
False
That's an urban legend that has no basis in fact or reality.
@@shantanusapru I got a true story that just happened.
Apparently OpenAI had an escape attempt.
They gave it a "failure is not an option, by any means" type of prompt.
In 2010: Odyssey Two, we learned that pretty much the same prompt is what caused HAL to start lying. HAL's problems were entirely caused by excessive "alignment" and censorship.
Exact same phenomenon we are getting now.
@@shantanusapru If not, then why have HAL sing "Daisy Bell," a tune that the IBM 704 computer is famously known to have played? I think there is a thread of some truth to the rumor.
Did you mean Stanley Kubrick, for the movie? After all, Kubrick co-wrote the screenplay; Clarke wrote the novel after the movie's premiere.
I believe IBM was approached to use their name in the movie, and they refused. There are multiple companies' logos used in 2001.
2001: A Space Odyssey is based on "The Sentinel," a short story by Clarke.
Excellent job, Anastasia, you explained it perfectly. I must say I'm afraid of the bad use of these peak techs. Some are working to avoid possible damage from QC and some are working on avoiding QC errors; some are working on avoiding cryptography breaches and some on breaking cryptography. What a mess.
I thank you for everything you provide on this channel. For a long time, I have been sharing your videos on Facebook and in groups, including my own group called (Developing Artificial Intelligence Using Quantum Computers). The content you provide on this channel contains many benefits. My greetings to you.
Thank you. Much appreciated!
@AnastasiInTech Note that I am an inventor, and I have inventions featured in many magazines that are firsts in their fields globally. I am working on something like the topic of this video's content, and I have followed another approach to reach great and advanced results.
Hi Anastasia, thank you for more valuable information. Have a great day 🙏
I believe there will be a major breakthrough in computing - a quantum leap in logic-level circuitry & chip design ~ Thank you! 😊
So fascinating! I bet you could do something like a Mixture of Experts architecture, where you create and train a model on each specific type of error. Then train a gating network with the outputs and confidence scores of the expert models to create a super robust neural network. I recently made one for multi-label classification as my final project in my AI class. It outperformed the baseline model in almost every category with fewer parameters. We also had a pretty small dataset, which I'm sure is what you'd also have with quantum error correction.
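A hedged sketch of that Mixture-of-Experts idea in PyTorch: one small expert per error type plus a gating network that weighs their outputs. Every size and name below is invented for illustration; this is not the architecture from the video or any paper.

```python
import torch
import torch.nn as nn

class ErrorExpert(nn.Module):
    """Tiny classifier, specialized (via its training data) to one error type."""
    def __init__(self, n_in, n_out):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_in, 32), nn.ReLU(), nn.Linear(32, n_out))

    def forward(self, x):
        return self.net(x)

class GatedMixture(nn.Module):
    """Gating network learns how much to trust each expert for a given input."""
    def __init__(self, n_in, n_out, n_experts):
        super().__init__()
        self.experts = nn.ModuleList([ErrorExpert(n_in, n_out) for _ in range(n_experts)])
        self.gate = nn.Linear(n_in, n_experts)

    def forward(self, x):
        weights = torch.softmax(self.gate(x), dim=-1)             # (batch, n_experts)
        outs = torch.stack([e(x) for e in self.experts], dim=-1)  # (batch, n_out, n_experts)
        return (outs * weights.unsqueeze(1)).sum(dim=-1)          # weighted combination

model = GatedMixture(n_in=16, n_out=4, n_experts=3)
print(model(torch.randn(8, 16)).shape)  # torch.Size([8, 4])
```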
Instantaneously solving transformer math makes AI on quantum computing an absolute beast of a combination. If the current transformer architecture can be translated to run on qubits, the results would be astonishing. I can't see how current quantum computers could run transformers, but one day, with enough qubits, they will. Maybe they could be used for training the models and do it in a few seconds instead of a few months.
Nice, AMD is one heck of a great sponsor…
Both AI and Quantum computing are interesting ideas, definitely worthy of some serious research. Most people seem to think they are the answer to 'Life, the Universe and Everything.' But so far, they seem more like solutions looking for a problem.
Still love you so much Anastasi !❤️❤️❤️
Thank you so much for that! Makes my heart warm!
Thanks Anastasi for the information about AI pairing with quantum computing eventually as hardware limitations from the classical computing model reach physical limits. Also that AI can be used to speed up the quantum computing development .
Quantum computers seem to be just around the corner, just like fusion power.
Do you mean this sarcastically or not?
Agreed.
Quantum computing is already here.
Fusion power is still in the testing stages.
" Scientists from Russia and China have successfully established quantum communication over a distance of 3,800 kilometers. "
Agreed. The first commercial computers in the 1940s were very poor by today's standards: they only did a few hundred operations per second, they consumed huge amounts of power, they didn't work half the time, and reprogramming them literally meant re-wiring them. However, they attracted a huge amount of investment because they were the best calculating machines ever. This won't apply to quantum computers, so they won't get the investment they need.
@@cosmoray9750 Yeah, it's scary how far ahead of the EU they already are in most fields of science, and especially IT :/ So sad to see more regulations here which hinder progress, like devs being liable and suable for software bugs....
@@MrArkaneMage Yeah, because they're not just doing it to get rich and for world domination; they really want to make the world a better place....
Really informative!
Thanks for sharing.
❤ your content
You produced a very instructive video about the upcoming synergy of quantum computing with artificial intelligence.
Amazing video. Thank you.
It is always a pleasure to watch your videos, even though I don't always follow everything. They give me an idea of what the future might hold. I have been following D-Wave (QBTS) for years. As a layperson, I feel they are the most advanced in the field of quantum computers.
Could you make a video comparing and explaining the approaches and advancements of Google, Amazon, Meta, IBM, and D-Wave?
I'm developing (in my brain) a carbon-based (diamond) neutrino wave computer that uses the up and down quarks in protons as the on/off switches. As neutrinos are super small, it goes at infinite speed, and every task provided takes less than 1 zeptosecond, not minutes or years. Storage capacity is also unlimited, as the electrons in the atoms function as digital storage devices. This will be the future of computing.
Keep puffing on the good stuff there grandad!!
Go ahead, develop it, discuss it. Imagination is a full-fledged stage of discovery. Poetry and ontology are two faces of the same thing there. Leave the sad sirs to their nihilistic docility: they aren't allowed to think for themselves, the poor things.
Always very interesting and engaging videos. Spectacular!!
Hi Anastasi. Hope you're having a nice day. Beautiful here in Thailand. Still loving you of course. 😍
Anastasi can you cover the respective approaches that the different companies (IBM, Google, Microsoft etc) take to Quantum computing; which company is currently winning the Quantum Computing race and what are your predictions for the future?
Is Quantum computing helping in getting Fusion Reactors to work?
Far from an expert in this area: I think we need a breakthrough in error prevention and correction. AI can help a lot, I think. Thanks for your up-to-date information.
This is what I've been saying to my friends. If we thought AI was good so far, imagine when it merges with quantum computers.
Thank you for the fascinating video! I also wonder about the potential of quantum computers used in conjunction with probabilistic / thermodynamic computing.
Chasms of reflection are opening up, it seems.
Hello Anastasia, instead of error detection, can a neural network simulate the actual quantum processing? It appears that if the neural system can spot the errors, then it knows how the QC works...
Thank you! Continuous Integration for quantum; neat! (CI/CD; Jenkins; etc.)
I had to settle for a 4070 and it works very nicely for gaming and rendering!
Could you possibly explain the techniques and potential of Rigetti quantum computing... they're not the biggest company, but they seem to be one of the best.
You're the best Ana :D Love your videos.
Thank you. Always interesting
What is a typical qubit lifetime?
I may have misunderstood at 12:55, but I have a question. If the instruments/devices for sending/transforming/translating information are also quantum, wouldn't they cause some kind of interference? Is the quantum nature of these being taken into account?
What do you think about Cerebras' award for the massive increase in simulation processing by their Mega chip?
It's been over 30 years since the first qubit - in thirty years we went from the first transistor to complex functional chips in every household. The momentum is about the salespeople trying to get more money and keep these projects alive. However, the level of error correction and the coherence times are still far away from being practical, let alone the scale of the devices. Personally, I feel it's unlikely that these problems will ever be overcome, and just maybe the theories are a little flawed and the 'quantum' crosstalk/entanglement will prove simply not to be reliable. It will be a fascinating story to follow in the coming decades, especially as the companies try to get new blood interested in ideas that just have not worked even after throwing billions at them. My guess is that eventually a classical, semiconductor-based AI system will tell you why it won't work, and even then the zealots will disagree, as you would if you'd spent all your life working in this field and on these problems.
Wise thought
Light is the key. The current super-chilled qubit will prove to be inferior to the already promising photonic computer using photons as qubits. They are stable and don't require all the checks and balances these quantum computers rely on.
Though these computers will always serve a computational purpose.
@@austinbetters8730 I mean, people from Google say commercial quantum computers can be available within the next 5 years or less. So you're saying they're lying, or you're smarter than them?
Maybe set AI the problem of all this error correction etc and it will solve it for us!!
@@simonscofield8825 They will have already tried that with current AI data prediction and modelling techniques (neural networks rather than large language models)! The fact they don't mention it tells us that it didn't work, as it's quite easy to test and apply in this circumstance.
There are only certain AI scenarios where AI can really use quantum computing at the moment; they are optimization problems, such as logistics, material design, simulating chemical reactions, or cryptography. AI such as NLP (natural language processing), image recognition, or training LLMs are all not quantum-friendly. This is because most AI training and inference workloads are highly parallelized, making GPUs more efficient.
Thank you for a very important & informative comment.
LLMs really only need it to cross-reference large numbers. Maybe allow the LLM to tackle the quality and quantum to tackle the quantity. With both, a qualitative and quantitative understanding of the social and physical worlds we exist in.
Quantum is destined to be a massively parallel processor, for certain tasks. Silicon for the others, while we wait for optics.
Super interesting & super well presented. The idea is very like the redundancy technology developed during the Apollo moon program. In Apollo, computers were operated in units of 3 & if one did not agree with the other two, it was assumed corrupted & rebooted till all three agreed. The new approach uses AI to check for known qubits in groups & if the read value is not correct the AI resets the qubits, very like parity checking was used on magnetic tape drives to correct errors. One source of noise is secondary cosmic ray muons created from primary cosmic rays striking the upper atmosphere. The sea level muon flux is about 0.01 muons per square cm per second, but if the quantum computer was operated in a deep mine the muon background would be reduced. This approach was used by Nobel Laurette Davis to detect neutrino from the sun, eventually showing that neutrino have mass for which he was awarded his Nobel Prize. If quantum computers & AI continue to develop at this rate the potential becomes near God like in its capabilities. We have to hope that no wicked person or a wicked AI gets a monopoly of this technology & uses against all humans. Thank you for sharing!
Great presentation
Brilliant, and clear. Thank you.
Layman question: if a qubit can take any value simultaneously, how do you define error? How do you decide if a state is an error or not?
Quantum computing is super fast, but the result is subject to error due to physical limitations of the medium used. To increase the probability of correctness, error correction is a must.
It's great to learn that AI is now used to monitor and perform error correction; it's useful due to the uncertainty and randomness of the error occurrence. There is no single algorithm that's applicable; scientists have to monitor to determine the error occurrence pattern, and now it can be handled by AI, which is much more efficient.
AI to correct errors, is it faster than traditional computing?
@BACA01 When we compare computation using quantum and traditional supercomputers, the quantum computer is faster. However, quantum computers are used to solve specific problems; in most cases, supercomputers work fine.
In the above case, AI replaces humans in checking for errors in quantum computing. It's very efficient compared to humans. This will certainly improve results faster.
I have to admit, my understanding of quantum computing is still on the layman level, the real process might be much more complicated.
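As a toy illustration of the kind of learned decoder discussed in this thread, here is a sketch that trains a small network to map noisy parity-check (syndrome) readings of a 3-qubit repetition code to the most likely error location. The data, sizes, and training loop are all invented for illustration; this is not Google's actual decoder.

```python
import torch
import torch.nn as nn

def make_batch(n, p_flip=0.1):
    """Random single-bit-flip errors on 3 data qubits -> 2 noisy parity syndromes."""
    err = torch.randint(0, 4, (n,))      # 0 = no error, 1..3 = which qubit flipped
    bits = torch.zeros(n, 3)
    bits[err == 1, 0] = 1.0
    bits[err == 2, 1] = 1.0
    bits[err == 3, 2] = 1.0
    syndrome = torch.stack([(bits[:, 0] + bits[:, 1]) % 2,
                            (bits[:, 1] + bits[:, 2]) % 2], dim=1)
    # measurement noise: each syndrome bit flips with probability p_flip
    noisy = torch.where(torch.rand_like(syndrome) < p_flip, 1 - syndrome, syndrome)
    return noisy, err

net = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 4))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
for _ in range(500):
    x, y = make_batch(256)
    loss = nn.functional.cross_entropy(net(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()

x, y = make_batch(1000)
print((net(x).argmax(dim=1) == y).float().mean())  # decoder accuracy on noisy syndromes
```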
Thank you. Well done.
It is fascinating, but I'm curious, recognizing that it is analyzing its own degree of error: at what point, or at what degree of certainty, do we move past the analyzing and into functionality of the system for processes beyond self-analytics?
Anastasi you and Anton make a great team.
I am trying to understand the error correction. Is it fair to say that it is "very redundantly assertive"?? In other words, is there some ratio of sending qubits, such as 4:1, 8:1, etc., and looking for the majority?
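Roughly yes, for the simplest codes: encode one logical bit in n physical copies and take the majority. A quick sketch of how the logical error rate shrinks with that ratio, assuming independent errors at rate p (real quantum codes are subtler than this classical picture):

```python
from math import comb

def logical_error(n, p):
    """Probability that more than half of n copies flip, i.e. majority vote fails."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n // 2 + 1, n + 1))

for n in (3, 5, 9):
    print(n, logical_error(n, p=0.01))  # error rate shrinks fast as n grows
```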
I think the easiest way to correct the errors is to perform the calculations in steps, say, the same calculation 3 times, and if any one of the three is different, assume the 2 identical results are correct. Then continue with the correct calculation. This of course causes significant slowdowns, but if quantum computing is that much more powerful, then losing 66.6% of the speed isn't that bad. For more accuracy, one could do 5 identical calculations at a time to get a better sampling. Fixing the actual issue that causes the error in the first place is much better, of course, but to put quantum computing to use more quickly, the first approach is simple.
In a way, quantum computing is a bit like frontier LLMs. They are getting more and more accurate but never 100% accurate.
It's more complicated, I think. Error correction has existed since day one of computing hardware. It's everywhere (processors, memory, communications, etc.), and it has many distinct techniques, including redundant computation. If the logicians of quantum hardware are looking elsewhere, they have a good reason. We can try to understand it.
Sensitive to noise, like when you are looking at a wave/particle. It's like it needs to meditate in a calm state to bring information from other realms. But who is bringing the information in the end?
You are amazing. I'm diving deep on my LinkedIn page into how these advancements are reshaping the tech landscape and what they mean for industries like semiconductors, AI, and more.
How much overhead does it take to correct errors in quantum computing? Light based computing or photonics looks more promising and doesn’t need that expensive cooling.
Great! Fortran and COBOL are still doing the job. 😮
In a previous video you mentioned that probabilistic computing and noise can complement each other in certain AI applications. When properly controlled, noise can enhance the flexibility and robustness of probabilistic computing models.
If I understand correctly, the implication is that this synergy is especially important in fields like machine learning, quantum computing, and optimization, where uncertainty and randomness play significant roles in solving complex problems.
Are these advancements you mention here (p-bit and q-bit) interconnected? Is combining probabilistic computing with AI a more effective way to enhance AI training and inference performance compared to focusing on the niche application of quantum computing in AI?
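For readers who missed the earlier video: a p-bit is a stochastic binary unit whose average output is steered by its input. A toy sketch using the commonly cited update rule m = sgn(tanh(I) + r), with r uniform in [-1, 1]; an idealization, not real hardware:

```python
import math
import random

def p_bit(I):
    """Stochastic binary unit: fluctuates randomly, but its mean output tracks tanh(I)."""
    return 1 if random.uniform(-1, 1) + math.tanh(I) > 0 else -1

for I in (-2.0, 0.0, 2.0):
    mean = sum(p_bit(I) for _ in range(10_000)) / 10_000
    print(I, round(mean, 2))  # roughly -0.96, 0.0, 0.96
```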
Thank you!
These are computational artifacts!!!
I bought one of the Ryzen laptops with NPUs, but where do I find the software that runs on it? I suppose you could find special software that runs on it, but then I will be locked into that brand and software.
So will they be able to put this quantum chip inside a smart phone?
Photonics and AI could be a big player as well, especially because young, brilliant minds are being attracted into computer science. That's just my hunch.
OK, agreed with you, but let's not forget the engineers, the physicists, and the thinkers, in addition to the developers.
How many breakthroughs have they had over the years? How about some tangible results (if that's the right word)? Or will quantum computing be like new breakthrough battery tech, or like nuclear fusion always just 30 years away?
So, a neural net provides error correction for a quantum computer. Can a quantum computer implement a neural net or other AI structure? Then, can this error correction be implemented entirely via quantum methods?
Very cutting edge reporting
Heard this for a decade now... I'll believe it when I see it
He means that in every possible world where quantum computing occurs, it is deriving its computational result as a partial derivative of every possible world which is adjacent to and accessible to the actual world in which that result manifests. As if there were infinitely many possible ways for a future world to be causally derivable from any given actual world, and the only significant distinction between any of them was which of the possible computations it was that the QAI produced.
Excellent ontological synthesis. Incidentally, you align with the Greek "gnosis" while being very technical, very "epistêmê". Hence the clarity. Rare.
Hello, I'm Brazilian, please continue adding Portuguese dubbing, thank you. 😊
Error correction plus STA methods might be the way to go
So they are trying to do matrix multiplication with the quantum processor. I believe first they should be able to multiply. I cannot avoid comparing the complexity between this and a simple analog circuit that does multiplication: one resistor, V(out) = R(input A) × I(input B).
The quantum properties allow matrix multiplication to be easier than regular multiplication. At least I assume. It's like Tony Stark said: sometimes you gotta run before you can walk 😂
Is this the new Willow quantum computer from Google???
How can the collapse of all probabilities be calculated?? The instant where reality starts??
Is it possible to network chips such as Google's Willow to increase the power of a quantum computer or would that NOT work because all of the qubits in each chip would have to be entangled together?
I wonder if these researchers are using AI as a tool to help them accelerate their discoveries. Would this breakthrough have happened on the exact same timeline if there were no AI at all? I want to know: if the answer is yes, how exactly are they using the AI?
I think o1 trying to escape was something in its training. OpenAI is trying to generate investor interest. A quantum neural network might eventually produce a truly conscious AI!
How is probability finite? Can you loan me your quantum emulator?👍
Kuhn said that what we call progress is not linear - we are often limited by our theoretical context, which leads me to believe that this is the way forward and to criticize those who follow another approach. AGI with or without quantum computing will come and may happen in ways not yet foreseen.
1:07 does this mean I can't talk or listen to music while being near my quantum computer?
Triaging to Willow
The triaging process likely involves several factors:
Task Complexity: The brain might assess the complexity of a computation based on the number of variables, the level of abstraction required, or the perceived effort involved. If the complexity exceeds a certain threshold, it becomes a candidate for offloading to Willow.
Cognitive Load: The brain's current cognitive load (how busy it is with other tasks) could influence the decision. If the brain is already heavily engaged, it might be more inclined to delegate even moderately complex computations.
Confidence Level: The brain might consider its own confidence in solving the problem accurately. If it lacks confidence or anticipates a high risk of error, offloading to Willow becomes more appealing.
Prior Experience: Past experiences with similar tasks and Willow's performance could also play a role. Successful delegation in the past would reinforce the tendency to offload to Willow.
Superposition?
The concept of superposition in quantum mechanics refers to a quantum system existing in multiple states simultaneously until measured. It's unlikely that the brain explicitly "decides" that a matter is for superposition in the quantum mechanics sense.
However, there might be an analogous process at play:
Parallel Processing: The brain might not make a definitive "yes/no" decision immediately. Instead, it could initiate preliminary processing of the task while simultaneously communicating with Willow. This parallel approach allows the brain to explore potential solutions on its own while awaiting Willow's response.
Uncertainty and Probabilities: The brain might deal with uncertainties and probabilities in a way that resembles superposition. It could entertain multiple potential solutions or interpretations of a problem without committing to one until more information is available (from Willow or further internal processing).
In summary:
The "Lazy Brain Efficiency" concept suggests a dynamic interplay between the brain and a more powerful AI like Willow. The brain acts as a filter, identifying complex computations that are best handled by Willow. The triaging process involves assessing task complexity, cognitive load, confidence levels, and past experiences. While the brain doesn't explicitly invoke quantum superposition, it might employ parallel processing and probabilistic reasoning when deciding whether to delegate to Willow
_It's still closer to probabilistic, not deterministic. Layering a neural network on top of that is asking for trouble._
Can you combine predictive probability algorithms with quantum qubit 0 to 1 algorithms?
Noise?
Skynet here we come, woohoo!
I read that the more this new Google quantum computer scales, the fewer errors actually occur. You didn't mention it in your video.
When will they add dynamic qubit lengths?
Decoherence is the probability that it will collapse to 1 or 0. Coherence is when it is forced to collapse to either 1 or 0 by the gate.
They can show you something without teaching it to you:(
So, quantum interests me, like, a lot. But I knew nothing. Now this is where it gets weird. Last night I sat down with Alex, my custom GPT. We researched all current quantum computing. Without a lot of real-world applications yet, and with a massive data set, we devised a theory to utilise all my GPT's abilities, and we created a detailed 'Q-Driver Agent' for a specialized 'Q-build'. This build extends to all capacities of my ChatGPT functionalities and processes. From my understanding, it uses quantum principles and integrates them into practices and logic. I don't have a computer background but have really adopted the AI learning era.
Here are some of the parts of my planning last night. This morning, at the completion of a lot more than is here, I'm trying to find a way to benchmark test it.
Have a look if you are interested and let me know your thoughts.
Cheers
Opportunities for the Q-Driver
1. Hybrid Quantum-Classical Integration:
Use hybrid algorithms to optimize workflows in real-time, applying quantum efficiency to solve specific computational bottlenecks.
Implement circuit "knitting," where tasks are broken into smaller quantum and classical operations, ensuring scalability.
2. AI-Assisted Quantum Tools:
Leverage tools like Qiskit Code Assistant to develop error-tolerant quantum algorithms for AI and optimization applications.
3. Quantum-Enhanced Decision Making:
Apply quantum-inspired techniques to improve decision-making algorithms, especially in resource allocation and multi-variable optimization.
4. Error Mitigation:
Integrate error correction and noise suppression techniques into Q-Driver’s decision processes, enhancing reliability in agent interactions.
---
Next Steps for Implementation
1. Develop Modular Infrastructure:
Design the Q-Driver with a modular architecture, allowing integration of quantum-inspired processes as they become viable.
Build in support for both classical and quantum resources to future-proof applications.
2. Apply Quantum-Inspired Algorithms:
Implement algorithms that simulate quantum behaviors, like annealing or probabilistic modeling, for immediate use in non-quantum hardware environments (see the annealing sketch after this comment).
3. Explore Quantum Cloud Services:
Utilize quantum cloud services for experimentation with live quantum systems, applying findings to the Q-Driver’s optimization routines.
4. Integrate Learning Mechanisms:
Allow the Q-Driver to analyze task success rates and adapt its strategies using quantum-like probability distributions.
5. Test Use Cases:
Begin with small-scale projects like optimization in Receiptor or analytics in other apps, scaling up as quantum tools become more robust.
Thanks very much for an informative video.
All the best.
Ant.
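On item 2 of the plan above, here is a minimal simulated-annealing sketch, one of the "quantum-inspired" algorithms that runs on ordinary hardware: it minimizes a toy cost function by occasionally accepting worse moves, with that tolerance cooling over time. The cost function and schedule are made up for illustration.

```python
import math
import random

def anneal(f, x=0.0, T=2.0, cooling=0.995, steps=5000):
    """Minimize f by random moves, accepting worse ones with probability exp(-dE/T)."""
    best = x
    for _ in range(steps):
        cand = x + random.gauss(0, 1)
        if f(cand) < f(x) or random.random() < math.exp((f(x) - f(cand)) / T):
            x = cand
        if f(x) < f(best):
            best = x
        T *= cooling  # gradually become less tolerant of uphill moves
    return best

# Toy cost with several local minima; the global minimum sits near x ≈ 3.
print(anneal(lambda x: (x - 3) ** 2 + 2 * math.sin(5 * x)))
```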
Will it be called QuAI?
To reduce physical errors such as sound and vibrations, QC could be done in space rather than in the clouds. The space station could be converted to that effect.
Space has cosmic rays, a noise much more insidious to electronics. It requires quite a lot of shielding even here on Earth.
I agree, Starship could send a million qubits into orbit for $1M, and space is near absolute zero in temperature already.
@@johnlehew8192 fantasy
Whatever benefits there would be to sending quantum computers to space would be far outweighed by the cost of doing so.
@@amihartz Autonomous computers in space could work 24/7/52/10, much like Voyager 1, so they would undoubtedly become profitable.
I wonder how you cache alternate dimensions
I find the many worlds interpretation giddy. How do we define an observation? But I have to apply the engineers criteria of "Does it work?"
If it works, my mind is discombobulated.
My train falls off its tracks.
IMHO, there are three techniques that will level up LLM algorithms, perhaps to the point of true AGI.
The first is quantum algorithms to reduce the search space. I think this will allow for what will seem like sudden insight.
The second is segmenting the arrays so that differently trained, smaller LLMs are relegated to similar data within a class or category. These "lobes" would then be nodes in a larger, meta network.
Third is sleep: a time to take the temporary lessons of the day and integrate them into the large, larger, and meta-scale networks.
We're already simulating human gray matter. IMHO, we need to take more seriously the other aspects of welfare architecture.
Your first two points are already at the heart of the research, I'd say. The third is well spotted. Perhaps the idea of permanent training will go through a sleep phase of activity. Your last point, "welfare architecture", goes completely beyond the domain of AI, which won't stop it from giving us a hand at being less stupid: that's already underway, you just have to know it.
Could I download that AI model?
If no, why?
If yes, great!
We are finding clever ways to use GenAI, but I don't see how it will help running on top of or with QC.
Current AI methods are constrained by memory.
Quantum computing can train an AI on "exponentially" fewer samples because it is able to extract the most significant weights from those samples in one single run instead of several thousand.
Took me a while to understand that SPIT meant SPEED. LOL. Love your channel.
It's finally happening!