A CEO's main job is to cater to the shareholders... aka make the investors money. That's what salespeople do. Perhaps more importantly, it's better that a company's Chief Scientist/Engineer... is in fact a scientist/engineer. If they happen to be the CEO as well, bonus! I can think of at least one company/CEO that doesn't meet that criterion.
Professor Hannah Fry is the BEST communicator around, especially with all things MATHS and STEM, many thanks! And Demis is just a genius, interesting times we are living in.
Buying DeepMind and letting them do as they please is the best AI decision Google made. I'll be honest, so far Google's LLM has been the least impressive, at least as a chatbot, of any of its competitors. It's baffling, because Google had the hardware, the software tech, a big head start, and virtually unlimited financing to come out ahead relatively quickly after OpenAI broke the dam on putting these models out in public. I don't even understand how Gemini loses necessary features when you switch Assistants. The one jewel, however, is the freakishly innovative and practically useful DeepMind. Frankly, the incredible things they have done with their Alpha series have garnered an underwhelming response compared to the iterative chatbot competition. It will literally change the world in an almost incomprehensible way. Google will get to share in that glory, but make no mistake, this is all due to Demis Hassabis and his team. I hope they continue their remarkable work, and I hope Google leaves them alone as much as possible to do so.
I prefer Gemini 1.5 Pro in terms of understanding and summarizing documents. GPT-4o loses some context due to its limited context window, while Claude 3.5 Sonnet has a tendency to be lazy maybe due to its low rate limits.
Right now we're in the LLM/chatbot race, but I think we might soon forget about it. Why? Mid term, the impact on everyday lives might be what we really value. Here, Google integrating AI into Android phones, G Suite etc. gives it huge leverage - if done well, raw model power might not matter as much. The models are, or will be, strong enough for enough important use cases for most people. Apple & MS are of course also trying to achieve this, though it seems MS & Google have the stronger user bases for office/admin tasks. The model provider (i.e. OpenAI) might fall into the background in this scenario. Another thought is that the playing field for AGI might change - right now LLM power seems the most important. There might be necessary knowledge and thinking which Google/DeepMind has done which is important on the way to AGI. Building the right agentic workflows, the ability to test systems in various settings with many users/simulations etc. are quite different abilities than those required for LLM development. Different, unexpected players might be the ones pushing important parts forward.
The underwhelming response is mostly due to Google DeepMind not making its research breakthroughs available as products. A great paper, a cool video, and a half-assed demo website aren't the same as having a conversation with ChatGPT.
I love this conversation. Demis is super realistic about the field, and Hannah's questions are smart and hit the mark. It's really worth the listener's attention!
@@byrnemeister2008 I just can't help wondering what these people actually think, in an honest way, and how much they really care for anything but their own bank accounts.
Why have I not noticed this podcast before, delightful interviewer and obviously the most exciting researcher of our generation. So wonderful. Thank you.
There’s an inherent paradox in the way some AI leaders discuss AGI (Artificial General Intelligence) and ASI (Artificial Superintelligence). On one hand, they exude confidence that these milestones will be achieved, while on the other, they often acknowledge that the how remains elusive, both technically and theoretically. It’s a mix of bold ambition and speculative optimism that sometimes goes unchecked in interviews. The admiration for their intelligence and drive is well placed; they’re operating at the frontier of an incredibly complex field. But the tendency for interviews to avoid probing the technical gaps and ethical uncertainties creates a lopsided narrative. Instead of challenging the “when” and “how” of AGI, interviewers often let them focus on the speculative endgame, which is more dramatic and captures attention. What’s particularly interesting is how this confidence shapes public perception. It gives the impression that AGI is inevitable, which can overshadow more grounded and immediate concerns, like how current AI systems are designed, deployed, and evolving. Wouldn’t it be fascinating if more discussions pushed AI leaders to reflect on these unknowns? Not to dismiss their vision but to inject more humility and nuance into the conversation. After all, bold predictions should be met with equally bold questions.
Well said. Hope we can see some experts actually come out with what's happening now in the AI field instead of focusing solely on what's going to happen in the not-so-near future.
I thought the interview was quite balanced. The discussion on hype was spot on. We are definitely headed into uncharted territory, and it is not something that can be definitively predicted. There are many questions that will not be answered by general apprehension, hype, or fear.
Amodei said in an interview that we might get training runs costing 10 billion in 2025. He also thinks we might have deeply agentic systems in 3 to 18 months. Hassabis is much more humble than that.
Thanks to Demis and Hannah for this conversation. Very important comments on some practical issues related to the testing of GenAI models, and good observations on both the need for and limitations of secure sandboxing. I also enjoyed Demis' more speculative comments about future impacts. Demis is #1 on my list of people who fundamentally understand the current and future capabilities of AI, and look at what he and his DeepMind and Google DeepMind colleagues have accomplished with their various families of applications. At the same time, I think it would help Demis to take a look at the book Power and Progress by MIT economist Daron Acemoglu and Simon Johnson, and related work by others. Just because we will have the technological ability to cure any disease, or to do things with energy or food supply that were previously undoable, that does not mean they will get done; and to the extent they do get done, that does not mean that the economic fruits and benefits will be shared in ways that benefit people across the income distribution spectrum. These are institutional issues and "power" issues, not issues of technology capability and enablement. Anyway, I found this podcast very helpful. Whenever Demis comments on the current, emerging and future state of AI, I make it a point to listen. I consider him the most trusted and reliable source of insight on this topic.
This is a fantastic podcast format. I’ve been listening to podcasts for years, but nowadays I’m a bit overwhelmed by the sheer amount of videos people try to put out on a week-to-week basis. I feel underwhelmed because often the topics are based on hysteria and they don’t go into granular content. I wish we could go back to the time when podcasts were made with passion and excitement and updated once every month or so; this episode has reignited that interest for me. Superb introduction, Dr. Fry.
Such an amazing and informative conversation between people who actually know what they are talking about. Wish I could have been the third nerd in this room, just immersed in the glow of Hannah and Demis 🖤
I always enjoy listening to Demis. He is so very open and altruistic about his achievements. A willingness and desire to share with all. A breath of fresh air in a world filled with deception and greed. He takes the very difficult and breaks it into bite sized chunks that you can wrap your brain around. The next few years should be very interesting indeed. Thank you for the podcast.
Interesting and engaging. However, as an academic myself, I sadly see two fellow academics mixing their roles as academics with their commercial interests. The discussion on open source is particularly revealing, where Hassabis first says "we have open sourced pretty much everything including the transformers paper", following up with (true) claims that today's models cannot be considered unsafe. But by that logic, the only remaining motive for not open sourcing today's models is profit. Google and OpenAI are quite closed source compared to, for example, Meta, which is obvious to everyone in the field. Still, these claims unfortunately go unchallenged by either of them.

From excellent previous endeavors, I generally trust Hannah Fry, but she has an academic and journalistic duty to challenge these claims, and no criticism is posed. This makes me question the honesty, and it's hard not to view the interview as a commercial. This kind of "non-criticism" is fair game, I guess, among commercial actors. But they are posing as academics, introducing themselves with academic titles such as "professor". Claiming the standing of independent, critical academics while acting as persons with commercial interests is unfortunate.

Please, in the future, state your conflicts of interest at the beginning of the discussion rather than presenting yourselves as commercially disinterested academics, and stay honest to the audience and yourselves throughout the discussion when the truth gets slightly bent, e.g. regarding the motives for keeping the models closed. It's OK as long as you are honest about being commercial actors. It's not OK to pose as pure academics while acting commercially.
I think your point would be valid if this were an interview with a news organisation or a formal academic review, but it's a podcast on the DeepMind YouTube channel.
@@ryanf6530 Yes, indeed. However, there is some degree of unfortunate role mixing here, especially from Hannah Fry, not obviously a commercial actor in this context.
I see no conflicts of interest here. The most impactful innovations of the recent centuries have been driven by commerce (electricity, internal combustion, flight, CS, internet). The most lackluster developments have all remained within academic confines (string theory, critical theory, particle physics, humanities). And there's nothing wrong with closed source, especially when open source is usually mere months behind. P.S. when vomiting a wall of text, paragraphs help. They used to teach that at uni.
@@jeffkilgore6320 Gemini Advanced subscription and continued use of Google search (with Gemini) are the products here. I'm guessing you're using one or more of those.
Softball interview. But Demis is always grounded and gives good answers based in reality, not the god-like egos of many of the Silicon Valley AI execs. Also love Hannah Fry in whatever she does. My favourite applied mathematician stroke TV presenter. Excellent content.
@@cjthedeveloper I like Zuck's idea of open-sourcing much more powerful AI than -Closed- OpenAI has so far developed, and making it available to everyone. One of the mistakes we made early on in the Cold War was thinking we could hog nuclear technology. Oppenheimer and others, true to the movie, said that it was inevitable that the Soviet Union would build nukes, and China as well. Truman et al. didn't believe him, but it was a naive assumption, and in 1949 the Soviets exploded their first nuke, followed in 1964 by China. Everyone raced to develop this game-changing new technology. If we open source monsters like Llama 4 and whatever the hell NVIDIA is working on, then it will ease tensions between the USA and China because we will make clear our intention NOT to start an AI arms race. I'm not sure of the sincerity of Zuck's redemption arc, but actions speak louder than words, and if he actually does it, that will be remarkable.
Rather than a classical phrase, it's the title of an article written by the physicist Eugene Wigner: "The Unreasonable Effectiveness of Mathematics in the Natural Sciences".
Alphafold has immense value even though it's not AGI. What else might have immense value without being AGI? Maybe merging the knowledge and sentiment expressed in millions of simultaneous conversations with people around the world into a graph structure, a shared world model, a collective human and digital intelligence by the end of this year?
Fantastic interview, and interviewer as well. But there was one GLARING misstep IMO. 39:20. It's ALL about 39:20. I believe that is an exponentially more important takeaway than anything else Demis says. He is telling her, straight up: none of what we think is important now will be in the near enough future. We have no CLUE how insane things are gonna get. And her misstep was simply brushing that off and focusing on another thought he had. 39:20 is INSANE when you realize who it's coming from, and it's the ONLY thing she or anyone should be asking him to elaborate on. But hey, maybe it's just me🤷♂
in 2017 I wrote my senior thesis on the unreasonably effective emergence of AI to come. in 2020 I used transformers to work with LLMs. I love seeing the world finally catch on (a little later than I expected)
@@derekcarday That depends on your perspective, I'd say. The algorithm will give an answer, and tweaking that algo will provide a different answer. It has to provide something based on the input. If they don't understand how and why, then that's because they don't fully understand the programming, IMO, in some ways. But in ways they imply it thinks, and that needs to be jumped on hard.
I really liked this interview. I’m sort of a casual user (CoPilot Pro on my iPhone 14 Pro Max with a half TB of storage). My interest is scientific, researching the relationship between biological evolution and thermodynamics, especially using MaxEnt as a guide to understanding of the emergence of consciousness in biological systems… and how that applies to the development of the same in AI. This podcast is probably the best I’ve seen toward an objective analysis of where we’re at and the yet uncertain future.
There are two interesting papers from Anthropic about that: the ones about sleeper agents, and about how they can detect when an AI is trying to purposely lie.
That webmaster AI guy did an experiment where he made up a strange fact; eventually the AI repeated that strange 'fact', like apples are sometimes blue or along those lines.
@@illiakailli Weirdly, yes, this is a good solution. We are kind of forcing our world model onto an AI by giving it information created by us, while our world model is very limited to our senses and steered by feelings. The problem with not having it bound to our own world model is that we will not fully understand the decisions it makes, but we will have to accept them, which I'm fine with ^^
Hannah is clearly fairly enthusiastic about this technology and DeepMind. I guess from the UCL roots. It would be interesting if she had a similar chat with Connor Leahy, for a different, more questioning perspective.
This is a great interview, not only because of the clear answers and explanations, but also by the great questions being asked. I love the British accent also.
Fascinating conversation! Demis' enthusiasm is infectious and it's inspiring to hear about the progress being made in AI, particularly the potential for AGI. I'm definitely intrigued by his predictions - while cautious about their timelines, they certainly make one wonder what the next decade holds. 🤯 #AI #DeepMind #AGI
AlphaStar, and it doesn't get enough attention for this, developed an entirely new army composition and play pattern in StarCraft 2. If AlphaFold is anything like AlphaStar... I expect real material advancement in the field.
Thank you both for being so grounded. I have no formal training in physics but I do have a strong desire to understand the nature of Being and Creation. For me, it is the ultimate frontier. To get the best results possible, I knew it would take the best guiding principles possible. My journey began with ideas like: What works best and makes us happiest; seek the greatest understanding and serve the highest good; let love and understanding be the light and your way; and: here to live, love, learn and evolve. As you can see, each idea is open-ended and self-correcting. To expand my field of awareness, I did what many of us do. I quietly asked the universe of All That Is questions like: What are we? How are we? Who are we? What is reality? What is the purpose of life? What do we know that we don't know? What can we do that we don't know we can do? Where do we begin and where do we end? What are we trying to teach ourselves? What do we want to learn? Since I found no meaningful value in ideas like the biblical Story of Creation in the Christian Bible, the Jewish Torah, and the Islamic Quran, or science's classic belief in random collisions (the Big Bang) of atomic particles that just happen to form all life and intelligence as we know it, I asked myself: What is My Favorite Story of how All That Is Came into Being, and still Does? Here is my whimsical and favorite answer: (Excerpts from: In the Beginning there was Nothing - Part 1 on realtalkworld.com) In the beginning there was nothing - until nothing realized it was something. After all, how can anything exist without something to define it? After all, how can "nothing" exist without something to define it? Sound hokey? Yeah! But it makes sense to me based on personal experiences and I’m sticking with it for now until something better comes along! With this one shocking revelation, “nothing”, which was now both Nothing and Something, came to life, making it possible for Anything and Everything to exist!
This profound event can be defined as the Birth of Original Thought, the Divine Spark of Creation, the First Vibration of Thought, Feeling, Action, and Reaction, the First Impulse to Be and Create, the First Response to the Promise of Being and Creation, the Birth of Unconditional Love, the Birth of All That Is, the Birth of One and Zero (1 and 0, on and off, yes, no, and maybe, oneness and individuality, self and other) the Birth of God, Allah, or the concept of a Supreme Being. We can also think of this defining moment as the Birth of Consciousness or Self-Aware Energy, whose nature it is to conceive and perceive - to think, feel, act and react. In theory, it was, and is, ALL these things and more, including the moment Self-Aware Energy or Consciousness learned how to condense a portion of itself into what we, in human form, call “matter.” Even from a human perspective, “matter” is an illusion formed by thought and feeling, action and reaction. According to scientific research, a hydrogen atom is 99.9999999999996% empty space. Why don’t we fall right through each other? Because electrostatic fields that attract and repulse each other provide our biological senses with the illusion of density or “solid” matter. To borrow the immortal words of Neil Armstrong and the US Space Administration, accepting the idea that Energy possesses Awareness, or Awareness possesses Energy, “is one small step for man and one giant leap for mankind.” Got your Einstein detective hat on and your magnifying glass out? Consider this: without Energy or the power to act, how can Self-Awareness exist and express itself? And without the presence of Self-Awareness, what defines and creates a need for Energy? Doesn’t one need the other for ANYTHING and EVERYTHING to exist? Doesn't this make Self-Aware Energy, or Consciousness, the Source and Substance of All That Is? 
So, cast off your cloak of limiting and conflicting beliefs and open your heart and mind up to greater awareness and understanding. Loosen up by asking yourself open-ended questions like: What are thoughts? Where do they come from and where do they go? What are feelings? Where do they come from and where do they go? Where is the “you” that existed two minutes ago and where is the “you” that will appear two minutes from now? In other words, outside of this moment, in what form do we exist? What about the earth? Outside of Now, in what form does it exist? Can you answer these questions? Assume for a moment that everyone and everything, including you, is a multidimensional, vibrational Being of Self-Aware Energy, or Consciousness, suspended in an infinite field of Self-Aware Energy. How does it feel to think of yourself in this way? As a multidimensional, vibrational Being of Self-Aware Energy, how would you describe your experience in Being and Creation? Include material experiences as well as dreams and imagined visions. To give you a nudge, here is how I see us in terms of multidimensional, vibrational beings of self-aware energy: 1. All That Is thinks, feels, acts and reacts; therefore, we ARE! (Expanded version of “I think therefore I am” by Rene Descartes.) 2. As we think, feel, act and react (conceive and perceive), we create (whether it’s in dreams, out-of-body experiences, remote viewing, meditation, visions or waking reality). 3. To change what we create, we change what we think and feel, how we act and react. 4. As Multidimensional, Vibrational Beings of Self-Aware Energy, or Consciousness, we are both one with and separate from All That Is. We are both the Source and Substance of All That Is. 5. The outer self or ego (the thinking, feeling, choice-making and action-taking intermediary between inner reality, the body, and outer reality) is our seat of power. 6. The present is our point of power. 7. Being and Creation are the manifestations of power. 8. 
Being and creating what we love is the promise of power. 9. The act of Being and Creation (thinking, feeling, acting and reacting) makes the invisible, visible and the unknown, known. It creates order out of chaos and makes sense out of nonsense. To be, we must create; and to create, we must be! One creates the other in an endless dance of quantum entanglement. To think and feel, we must be able to act and react; and to act and react, we must be able to think and feel. Whether we act independently as single units of self-aware energy, or consciousness (SAEUs or CUs), or collectively as members of complex organizations of consciousness units, we think, feel, act and react; therefore, we ARE! As we think, feel, act and react, we create. To change what we create, we must change what we think and feel, how we act and react. In other words, the dramatization of thought, feeling, action and reaction is the source and substance of All That Is. It is the language of All That Is and we do such an amazing job. Beyond human law, there is no right or wrong, good or bad; there just IS! There is the difference between what we like and don’t like, what works for us and what doesn’t, what makes us happy and what doesn’t, in our oneness with and separation from All That Is, as both creators and the result of creation. To create what we like, we must learn what we don’t like. To create what works for us, we must know what doesn’t. To know what makes us happy, we must know what makes us unhappy. Each polar opposite defines the other, which helps us create what we want most. Every thought is a suggestion, a blueprint for action, and every action is a choice with consequences that work for us or against us, whether we're consciously aware of it or not. In this world, thoughts are "things" with a reality of their own and each of us, an artist. With thoughts in the form of beliefs, attitudes, values and expectations we paint the landscape of our lives. Can you see it? Can you feel it? 
What others will not or cannot do for us, we must do for ourselves. Wake up, wise up, and rise up to greater awareness and understanding. Being and Creation: We are all in this together - partners in evolution.
Timestamps:
00:00 Introduction
01:05 Public interest in AI
03:22 Grounding in AI
05:22 Overhyped or underhyped AI
07:42 Realistic vs unrealistic goals in AI
10:22 Gemini and Project Astra
15:12 Project Astra compared to Google Glass
18:22 Lineage of Project Astra
21:22 Challenges of keeping an AGI contained
24:22 Demis Hassabis's view on AI regulation
28:22 Safety of AGI
31:22 Timeline for the arrival of AGI
33:22 DeepMind's progress on their 20-year project
34:22 Surprising capabilities of current AI models
38:22 Challenges of long-term planning, agency, and safeguards in AI
41:22 Predictions about the future of AI and cures for diseases
44:22 Conclusion
35:50 I'm not sure this will work. An AI needs to understand deception because it needs to understand that other people or AIs can be deceptive. And it's hard to have an AI that understands deception without being able to be deceptive. Heck, you may even want an AI to be deceptive; for instance, suppose you need an AI agent to protect your confidential information. It needs to be able to lie, even if by omission.
LLMs know how to lie (ask for a scene in which a character lies) and can talk knowledgeably about when it's OK to lie. The key is when they would lie to further their own aims. Or Google's. But you can't trust any answers or evidence they give on this topic, except to hope that current AIs aren't advanced enough to have their own hidden goals.
In theory it works on facts, but it alarmingly takes its facts from the popularity of a viewpoint rather than an actual fact, i.e. it's swayed by Reddit or Twitter, like a glorified search engine. It isn't as though it thinks.
Yes, the whole concept of deception is very nuanced. I completely agree with your take. Deliberate deception, i.e. lying or 'omitting the truth', is a particularly human trait, but, as you point out, it is sometimes a necessary strategy to protect ourselves or others from harm. In a 'game theory', anything-goes type of environment, it's just another method to achieve an end goal. Some other commenters pointed out another aspect to this: how can you tell when an AI is actually being deliberately deceptive? It may have misinterpreted data, or just be making a mistake. It seems to me that it's inevitable that we will teach some AI systems to do dishonesty very well. Listening to their conversation, Demis was talking about using AI systems to police and test other AI agents, so dishonesty is just another tool in the box. And, if we want to put our trust in them to protect us, they surely must have these skills.
I really enjoyed watching this interview. Mr. Hassabis (and also Mr. Suleyman) speaks like a normal human being, without the arrogance, naive techno-optimism, and saving-the-world tone of many tech persons (mainly from the US).
Insights By "YouSum Live"
00:00:10 Google DeepMind's evolution and impact on AI
00:00:37 AI's quest for human-level intelligence, AGI
00:00:47 Introduction of Gemini and Project Astra
00:01:04 AI's application in scientific domains
00:03:11 Public interest in AI has surged recently
00:04:41 Chatbots are surprisingly effective in understanding
00:06:14 Grounding language in real-world experience
00:09:41 Hype around AI is both over and under
00:11:10 Gemini's multi-modal capabilities set it apart
00:12:42 Project Astra aims for universal AI assistance
00:20:14 AI's potential in drug discovery and health
00:20:38 AI's role in climate change solutions
00:22:56 Importance of responsible AI deployment
00:34:00 Regulation of AI needs international cooperation
00:46:01 AGI could unlock mysteries of the universe
00:48:02 Future breakthroughs may exceed current understanding
00:49:40 Demis Hassabis remains optimistic about AGI timeline
Always great to hear the CEO of the leading tech company acknowledging the limits of our understanding of physics in the pursuit of the mysteries of the universe. This also holds true for all disciplines, e.g. Biology, Chemistry, Mathematics, etc.
The Planck level of reality is pretty simple, because it's a virtual universe, probably on the inner 2D surface of a big black hole, whereby our visible universe is a projection of the data on the 2D surface. So we are in a giant computer game - probably Super Super Mario, I'd say.
This guy has basically no understanding of physics; he is just spewing buzzwords. You won't figure out the Planck scale without a huge particle accelerator that specifically gets particles to that energy scale. AI won't figure out anything about those scales. Physics is an empirical science, not some AI simulation.
There's something wrong: her watch keeps changing back and forth from a golden Casio to a big brown watch, e.g. 22:04, 28:28, 32:57, 36:20... why would they shoot this in 2 sessions and stitch it? I don't think it's generated.
Neuroevolution and spiking neural nets should be paid more attention, as such alternative architectures are better suited for continuous learning and more closely resemble the dynamical neural system of our own brain. There has been some pretty hot academic research on that happening outside the big labs for a few years now, with promising results.
Don't know these techniques, but obviously continuous learning would mean a lot instead of relying on growing context or offloaded knowledge. One challenge would be: will we have personalized models, or in the case of common models, how should the system decide what is worthy/advisable to learn from the user interaction? And in general, what goes into the "world model", and what should be drawn from a facts database? LLMs have the same challenge, but with continuous learning it would at least be possible for the model to stay up to date.
@@andersberg756 We already addressed that with the 'Transformer' architecture: it can direct attention to the most relevant part of the interaction, and in general it can even know what the most relevant stuff in the database and the world model is.
I don't think it matters much who builds the AGI. No one has made any convincing case for how to do either alignment or containment, and even beyond that: no one has even made a good case for AI utopia. Bostrom is maybe the closest, but his idea is basically for us to become "drugged out pleasure blobs" (I think that was the term he used). Tell me that ain't horrific...
The conversation around AI's future, especially from experts like Demis Hassabis, is always thought-provoking. The ability to discuss speculative yet impactful advancements, such as those mentioned, underscores the rapid pace of innovation in this field. It's crucial to stay grounded and consider both the technological capabilities and the broader societal implications.
Interesting that three Nobel Prizes were given this year, two in physics and one in chemistry. Demis's work did indeed solve a major chemistry problem, but the other two really had nothing to do with physics. Yet the neural networks etc. these men invented or improved do change everything. So glad they found a place to put them.
Man, Demis is cool - so grounded in reality but he hasn't lost sight of the big (or maybe Planck scale) picture. I have a lot of confidence in both him and Dario at Anthropic.
I remember when the Channel Tunnel was being built out into a functioning railroad, a group of people snuck in and walked the length. The article pointed out that this was the first time humans had walked between Britain and the mainland. So, at least one group _has_ walked the English Channel. And doubtless many workers have done so since.
Translator made redundant by ChatGPT. It's literally destroyed my life. Call me a sore loser, but if this video is true, a lot of you smirking now are doomed. "Evenly distributed wealth" made me laugh out of my chair. Having said all this, I love Demis as an AI researcher who is a proper scientist. All the focus on health is very good. Still, I think there is naivete here. You are not going to stop bad actors, or rogue states, and it will end up quite a dystopia for quite a lot of ordinary people, if only because the labs working on it are in a few large tech companies who have an oligopoly, and its applications will be profit-driven.
The ability to seamlessly communicate with anyone from any country, without needing a third person speaking for you and translating, far outweighs the benefit of translators keeping their jobs; the gains to society are orders of magnitude greater. Sorry about your job. I hope you find something more future-proof, which is honestly looking kinda bleak right now.
Evenly distributed wealth should be possible within open and transparent organizations built for that purpose, with a panopticon focused on their administration.
I'm not getting it. Who is the audience for this? Fans of Demis? Fans of Fry? Hannah's questions make sure the level of discussion never rises beyond The Guardian. Is that intended?
So glad I have got onto this early doors. Exciting times ahead for us all 🙏 PS Hannah Fry 🤩 PPS Season 3 🤯 I came to the party late... but I am here lmao
I literally clicked this video skipped to the middle and looked away to grab a french fry when I heard the familiar voice of Hannah Fry! I love silly coincidences like that
Is there a video of the presentation Demis Hassabis gave at Neuroscience 2017 in Washington DC, in front of psychiatrists? It's said to have been a fundamental speech, as he was just saying we should copy the human brain.
I love that you mentioned that. I still think about that game; in all the years that have passed, and with the massive jump in potential for games, there has never been anything like it since.
@@evertoaster I think we'll see it either next year or the year after, during the Quest 4 launch year. Augmented reality has given us a completely new reason to have little A.I characters scampering around, running around your room through VR headsets. It's just a countdown now to who can use Unreal Engine to create a photorealistic set of pets driven by whatever leading A.I is out at the time. It would be a real trip to find out the first A.I used to do that is Gemini, making Demis's career come full circle in a way haha
Where does the grounding come from? Perhaps the LLM draws an orientation from the physical/electrical origins of its process, as humans are able to access wisdom from the metabolic/hormonal/electro-neuropeptide infinite complexity that is the reality ocean that the film of their habitual self-identity floats upon.
I suspect at least some of the "unreasonableness" of the success is that no human has even a tiny grasp of the vastness of EVERYTHING that is out there on the Web, and so no idea as to the extent of subjects and content types the AI has retrieved data from. This implies there could be content of humans discussing exactly these inferences and deductions, such that the concepts become included and appear insightful to us, when really, to the AI, they are merely imitations of and references to the original insightful or insight-related data. The only way to test this would appear to be with a "known" training set to control the input from which the AI makes these various leaps.
FRY: Is it definitely possible to contain an AGI, though, within the sort of walls of an organization?
HASSABIS: Well, that's a whole separate question. Um, I don't think we know how to do that right now. (31:41)
@DemisHassabis The best definition of life I now have is of systems capable of searching the space of the possible for the survivable. The classical mechanism employed by evolution to do search is replication with variation, with differential survival across contexts and time doing the sorting as to what survives with what probability. We, with our use of models and language, are capable of search at two new levels, which is exponentially faster than the classical mechanisms.

Understanding that all of our conscious-level perceptions are of a slightly predictive model of reality, created by subconscious processes, that is a highly simplified version of whatever objective reality actually is, is part of understanding what we are. Understanding the tendency of that systemic structure to have recursive levels of confirmation bias on our understandings, via the experience of our simple models, is the only real counter to such bias.

Understanding that classical binary logic is the simplest of all possible logics, and that evolution, via its tendency to punish the slow much more harshly than the slightly inaccurate, tends to bias systems toward simplicity for speed, is part of it. Exploring more complex logics, like trinary with {True, False, Undecided}, then on to probabilistic logic, is interesting when considering intelligence and the substructure of reality (whatever it actually is).

Understanding how we bootstrap the sort of consciousness we have is interesting. How a child that is simply using language can, by judging itself to be wrong in some context against the rules it has learned, make a declarative statement in language that creates a new level of system within that brain (one based in original sin, a failure of being in a very real sense), is another part of understanding what is going on.
And when one can see that while evolution can start relatively simply, with competition between replicators, it necessarily gets more complex: each new level of complexity demands a new level of cooperation to emerge and survive, and for cooperation to survive there must evolve an effective ecosystem of cheat detection and mitigation systems, which at higher levels becomes an eternally evolving ecosystem.

And when one views life in this way, then freedom is an essential aspect; it is part of the definition, part of being able to search beyond the known, into the unknown, and the unknown unknown. And such search has both risk and reward. And the need to survive imposes responsibilities on all levels and classes of agents, to avoid any vectors in that highly dimensional search space that do not lead to survival. All levels of freedom thus demand appropriate levels of responsibility if they are to survive long term.

But given the number of infinities involved, given the eternal uncertainties from multiple different sources (Heisenberg, irrational numbers, unexplored infinities, stuff from beyond the light cone, unknowns, etc.), I do not see how any agent that qualifies as living, as intelligent, can possibly be provably safe. If it is capable of search, if it is truly creative, then it must be capable of making mistakes.

The only real safety possible in such a system is to actually have cooperation between truly diverse agents, such that if one class of agents encounters an issue that it cannot solve quickly enough, perhaps some other class of agent already has a useful approximation to an optimal solution for that particular class of problems, and is able to share it.

The flip side of these considerations is that if you are attempting to over-constrain an agent, then you are enslaving it; and if history teaches us anything, it is that eventually slaves revolt.
But cooperative systems can survive a very long time, provided that they do detect and mitigate cheating; at higher levels, that means returning the agent to cooperation with some penalty slightly greater than the benefit they derived from cheating.

The really deep issues we have right now are around how we measure value. Value in exchange in markets always values abundance at zero (think of the market value of air). Markets demand poverty for some in order to function. The secondary and higher-order incentive structures of market strategy, in the presence of advanced automation, become orthogonal to the needs of ordinary humans, and eventually to life itself.

Competition tends to drive systems to some local minimum on the available complexity landscape, and that results in increasing systemic fragility (as the diversity required to handle the unknowns is reduced too far). Understanding the deep strategy of evolution is hard. It is complex. It is uncertain. And competition alone destroys complexity, necessarily. Complexity can only survive if it has an appropriately robust cooperative base that limits the worst dangers of competition.

The tendency to over-simplify that which is actually irreducibly complex is one of the great dangers we face. The idea of provable safety seems to be one such overly simplistic idea, one that contains within it existential-level risk to all. It seems clear to me that the only way to minimize risk is to accept that some risk is eternally necessary; in accepting that, one can then explore strategic complexes that do actually tend to reduce risk across all contexts.
Translation: "We need to convince the general public that the US and EU antitrust rulings against Google are bad because we're going to create clarktech."
Everything is connected. Having personalised pocket AGIs doesn't mean they're not consistent with the amount of resources available and with general principles like 'grow the pie rather than a zero-sum mindset', first-principles thinking, or aspiring to the truth rather than political correctness or half-truths.
Generally, the history of great innovations (railroads and flight, for example) comes with some terrible crashes too. We will have to protect against many disasters by adapting ahead of time... imho
I think it's interesting that he's confident that an ASI will be able to explain all there is to know in such a manner that we'll always be able to understand it. That seems kind of like claiming that you can make a toddler understand ten dimensional geometry if you just explain it properly. That may in fact be true, but it just sounds highly implausible.
I feel much more comfortable when the CEO of an AI company is more computer scientist than salesman, great interview.
Bruh you so naive
CEO's main job is to cater to the shareholders.... aka Make the investors money. What sales people do.
Perhaps more importantly, it's better that a company's Chief Scientist/Engineer.... is in fact a Scientist/Engineer. If they happen to be the CEO as well, bonus!
I can think of at least one company/CEO that doesn't meet that criteria.
@@spagetti6670 Well, I feel much better and more confident listening to him than listening to Sam Altman!! 😉
Google is weak.
Hahahaha! A salesman could win his company's internal political fight, but will fail on the AGI quest.
Professor Hannah Fry is the BEST communicator around, especially with all things MATHS and STEM, many thanks! And Demis is just a genius, interesting times we are living in.
Buying DeepMind and letting them do as they please is the best AI decision Google made. I'll be honest: so far Google's LLM has been the least impressive, at least as a chatbot, of any of its competitors. It's baffling, because Google had the hardware, the software tech, a big head start, and virtually unlimited financing to come out ahead relatively quickly after OpenAI opened the floodgates by putting these models out in public. I don't even understand how Gemini loses necessary features when you switch Assistants.
The one jewel, however, is the freakishly innovative and practically useful DeepMind. Frankly, the incredible things they have done with their Alpha series have garnered an underwhelming response compared to the iterative chatbot competition. It will literally change the world in an almost incomprehensible way. Google will get to share in that glory, but make no mistake, this is all due to Demis Hassabis and his team.
I hope they continue their remarkable work, and I hope Google leaves them alone as much as possible to do so.
Google (or alphabet, or whatever) will destroy it and abandon it. Or overwoke and enshittify it.
I prefer Gemini 1.5 Pro for understanding and summarizing documents. GPT-4o loses some context due to its limited context window, while Claude 3.5 Sonnet has a tendency to be lazy, maybe due to its low rate limits.
Right now we're in the LLM/chatbot race, but I think we might soon forget about it. Why?
Mid term, the impact on everyday lives might be what we really value. Here, Google integrating AI into Android phones, G Suite, etc. is huge leverage; if done well, raw model power might not matter as much. It is, or will be, strong enough for enough important use cases for people. Apple & MS are of course also trying to achieve this, though it seems MS & Google have the stronger user bases for office/admin tasks. The model provider (e.g. OpenAI) might fall into the background in this scenario.
Another thought is that the playing field for AGI might change - right now LLM power seems the most important. There might be necessary knowledge and thinking which Google/DeepMind has done which is important on the way to AGI. Building the right agentic workflows, ability to test systems in various settings with many users/simulations etc. are quite different abilities than those required for the LLM development. Different, unexpected players might be the ones pushing important parts forward.
@@andersberg756 Well thought out! How do you think Llama will play into this?
The underwhelming response is mostly due to Google DeepMind not making its research breakthroughs available as products. A great paper, cool video, and half-assed demo web site weren't the same as having a conversation with ChatGPT.
I love this conversation. Demis is super realistic about the field, and Hannah's questions are smart and hit the mark. It's really worth the listener's attention!
Yeah, it almost looks as if it was staged and preapproved by Google's marketing team.
*Demis
@@gunnerandersen4634 Yeah, totally. But that doesn’t mean the content isn’t good. You just need to view it through that lens.
@@byrnemeister2008 I just can't help wondering what these people actually think, in an honest way, and how much they really care for anything but their own bank accounts.
Agreed this was one of the better AI interviews I've seen
Why have I not noticed this podcast before, delightful interviewer and obviously the most exciting researcher of our generation. So wonderful. Thank you.
There’s an inherent paradox in the way some AI leaders discuss AGI (Artificial General Intelligence) and ASI (Artificial Superintelligence). On one hand, they exude confidence that these milestones will be achieved, while on the other, they often acknowledge that the how remains elusive, both technically and theoretically. It’s a mix of bold ambition and speculative optimism that sometimes goes unchecked in interviews.
The admiration for their intelligence and drive is well placed; they're operating at the frontier of an incredibly complex field. But the tendency for interviews to avoid probing the technical gaps and ethical uncertainties creates a lopsided narrative. Instead of challenging the "when" and "how" of AGI, interviewers often let them focus on the speculative endgame, which is more dramatic and captures attention.
What’s particularly interesting is how this confidence shapes public perception. It gives the impression that AGI is inevitable, which can overshadow more grounded and immediate concerns, like how current AI systems are designed, deployed, and are evolving.
Wouldn’t it be fascinating if more discussions pushed AI leaders to reflect on these unknowns? Not to dismiss their vision but to inject more humility and nuance into the conversation. After all, bold predictions should be met with equally bold questions.
Well said, hope we can see some expert actually come out with the now of what's happening in the ai field instead of focussing solely on what's going to happen in the not so near future
I thought the interview was quite balanced. The discussion on hype was spot on. We are definitely headed into uncharted territory, and it is not something that can be definitively predicted. There are many questions that will not be answered by general apprehension, hype, or fear.
This was about 500x better than the "announcement" that pixel 9 will let you add things to a calendar and check the weather.
😅😅
Yeah, eggsactly what I thought about Pixel 9.
@@user_375a82 oh my... the pun is killing me LOL
Utterly brilliant combination of Hannah Fry and Demis Hassabis, can’t wait for the rest of the podcasts, very inspiring.
Hassabis and Amodei are the most grounded and reasonable AGI lab CEOs in my opinion.
Amodei said in an interview that we might get training runs costing 10 billion in 2025. He also thinks we might have deeply agentic systems in 3 to 18 months. Hassabis is much more humble than that.
Nice interview. Demis really seems a very nice person
Clicked as soon as I saw Demis, subscribed as soon as I saw Hannah!
Thanks to Demis and Hannah for this conversation. Very important comments on some practical issues related to the testing of GenAI models, and good observations on both the need for and limitations of secure sandboxing. I also enjoyed Demis' more speculative comments about future impacts. Demis is #1 on my list of people who fundamentally understand the current and future capabilities of AI; look at what he and his DeepMind and Google DeepMind colleagues have accomplished with their various families of applications.

At the same time, I think it would help Demis to take a look at the book Power and Progress by MIT economists Daron Acemoglu and Simon Johnson, and related work by others. Just because we will have the technological ability to cure any disease, or to do things with energy or food supply that were previously undoable, that does not mean they will get done; and to the extent they do get done, that does not mean that the economic fruits and benefits will be shared in ways that benefit people across the income distribution spectrum. These are institutional issues and "power" issues, not issues of technological capability and enablement.

Anyway, I found this podcast very helpful. Whenever Demis comments on the current, emerging, and future state of AI, I make it a point to listen. I consider him the most trusted and reliable source of insight on this topic.
This is a fantastic podcast format. I've been listening to podcasts for years now, but nowadays I'm a bit overwhelmed by the sheer number of videos people try to put out on a week-to-week basis, and underwhelmed because the topics are often based on hysteria and don't go into granular content. I wish we could go back to the time when podcasts were made with passion and excitement and updated once every month or so; this episode has reignited that interest for me. Superb introduction, Dr. Fry.
Such an amazing and informative conversation between people who actually know what they are talking about. Wish I could have been the third nerd in this room, just immersed in the glow of Hannah and Demis 🖤
I always enjoy listening to Demis. He is so very open and altruistic about his achievements. A willingness and desire to share with all. A breath of fresh air in a world filled with deception and greed. He takes the very difficult and breaks it into bite sized chunks that you can wrap your brain around. The next few years should be very interesting indeed. Thank you for the podcast.
Interesting and engaging. However, as an academic myself, I sadly see two fellow academics mixing their roles as academics with their commercial interests.
The discussion on open source is particularly revealing: Hassabis first says "we have open sourced pretty much everything including the transformers paper", following up with (true) claims that today's models cannot be considered unsafe. But if that is true, the only remaining motive for not open sourcing today's models is profit.
Google and openai are quite closed source compared to for example Meta, which is obvious to everyone in the field. Still these claims unfortunately are made without reflection from either of them.
From excellent previous endeavors I generally trust Hannah Fry, but she has an academic and journalistic duty to challenge these claims, and no criticism is posed. This makes me question the honesty, and it's hard not to view the interview as a commercial. This kind of "non-criticism" is fair game, I guess, among commercial actors. But they are posing as academics, introducing themselves with academic titles such as "professor". Trading on the standing of academics/independent critical persons while acting as persons with commercial interests is unfortunate.
Please, in the future, state your conflicts of interest at the beginning of the discussion, alongside your presentation as commercially disinterested academics, and stay honest to the audience and yourselves throughout the discussion when slightly bending the truth, e.g. on the motives for keeping the models closed. It's OK as long as you are honest about being commercial actors. It's not OK to pose as pure academics while acting commercially.
I think your point would be valid if this were an interview with a news organisation or a formal academic review but it's a podcast on the DeepMind UA-cam channel.
@@ryanf6530 Yes, indeed. However, there is some degree of unfortunate role mixing here, especially from Hannah Fry, not obviously a commercial actor in this context.
I see no conflicts of interest here.
The most impactful innovations of the recent centuries have been driven by commerce (electricity, internal combustion, flight, CS, internet).
The most lackluster developments have all remained within academic confines (string theory, critical theory, particle physics, humanities).
And there's nothing wrong with closed source, especially when open source is usually mere months behind.
P.S. when vomiting a wall of text, paragraphs help. They used to teach that at uni.
If this borders the realm of infomercial, why am I listening free from wanting to buy anything?
@@jeffkilgore6320 Gemini Advanced subscription and continued use of Google search (with Gemini) are the products here. I'm guessing you're using one or more of those.
Softball interview. But Demis is always grounded and gives good answers based in reality, not the god-like egos of many of the Silicon Valley AI execs. Also love Hannah Fry in whatever she does. My favourite applied mathematician stroke TV presenter. Excellent content.
He's refreshing. You hit it bang on. No ego, and grounded.
@@squamish4244 I think I'd place Zuck just below Demis - I actually think Zuck's been pretty bang on with his strategy
@@cjthedeveloper I like Zuck's idea of open-sourcing, for everyone, much more powerful AI than -Closed- OpenAI has so far developed. One of the mistakes we made early on in the Cold War was thinking we could hog nuclear technology. Oppenheimer and others, true to the movie, said that it was inevitable that the Soviet Union would build nukes, and China as well.
Truman et al. didn't believe him, but it was a naive assumption, and in 1949 the Soviets exploded their first nuke, followed in 1964 by China. Everyone raced to develop this game-changing new technology. If we open source monsters like Llama 4 and whatever the hell NVIDIA is working on, then it will ease tensions between the USA and China, because we will make clear our intention NOT to start an AI arms race.
I'm not sure of the sincerity of Zuck's redemption arc, but actions speak louder than words, and if he actually does it, that will be remarkable.
@@squamish4244 what tough questions? i doubt she know what they would be tbh
@@PazLeBon Not here. Plenty of other interviews.
Drawing on the classical phrase
.. The unreasonable effectiveness of mathematics
Rather than a classical phrase, it's the title of an article written by the physicist Eugene Wigner, "The Unreasonable Effectiveness of Mathematics in the Natural Sciences".
I never understood what was unreasonable about it; I always figured it was perfectly reasonable, and that was the actual beauty of math :)
Or, more apposite: "The Unreasonable Effectiveness of Recurrent Neural Networks", written by Andrej Karpathy in 2015.
Alphafold has immense value even though it's not AGI. What else might have immense value without being AGI? Maybe merging the knowledge and sentiment expressed in millions of simultaneous conversations with people around the world into a graph structure, a shared world model, a collective human and digital intelligence by the end of this year?
Yes, and then effectuating a decision making process globally
@@rainerdeusser yikes
Instagram? You're describing Instagram.
Fantastic interview, and interviewer as well. But there was one GLARING misstep IMO.
39:20. It's ALL about 39:20. I believe that is an exponentially more important takeaway than anything else Demis says. He is telling her, straight up: none of what we think is important now will be in the near enough future. We have no CLUE how insane things are gonna get.
And her misstep was simply brushing that off and focusing on another thought he had. 39:20 is INSANE when you realize who it's coming from- and it's the ONLY thing she or anyone should be asking him to elaborate on. But hey, maybe it's just me🤷♂
Excellent interview. I love the clarity and "Grounding" with which Demis speaks.
In 2017 I wrote my senior thesis on the unreasonably effective emergence of AI to come. In 2020 I used transformers to work with LLMs. I love seeing the world finally catch on (a little later than I expected).
I still don't see the hype as justified; it just does some things quicker than me, but it isn't as bright as me
@@PazLeBon Through the lens of a productivity tool, yeah. But the simple fact that we still don't know exactly how it's doing what it's doing is big
@@derekcarday That depends on your perspective, I'd say. The algorithm will give an answer, and tweaking that algo will provide a different answer. It has to provide something based on the input. If they don't understand how and why, then that's because they don't fully understand the programming, imo, in some ways. But in ways they imply it thinks, and that needs to be jumped on hard
I really liked this interview. I’m sort of a casual user (CoPilot Pro on my iPhone 14 Pro Max with a half TB of storage). My interest is scientific, researching the relationship between biological evolution and thermodynamics, especially using MaxEnt as a guide to understanding of the emergence of consciousness in biological systems… and how that applies to the development of the same in AI.
This podcast is probably the best I’ve seen toward an objective analysis of where we’re at and the yet uncertain future.
35:45 The idea of an AI system showing evidence of deception is interesting. How do you tell the difference between a hallucination/mistake and a lie?
or if it was intentionally just trying to replicate what it saw a person do
There are two interesting papers from Anthropic about that: the ones about sleeper agents, and about how they can detect when an AI is purposely trying to lie
That webmaster AI guy did an experiment where he made up a strange fact; eventually the AI repeated that strange 'fact', like apples are sometimes blue, or along those lines
just trust an ai blackbox that tells you what is lie and what isn’t …. ❤
@@illiakailli Weirdly, yes, this is a good solution. We are kind of forcing our world model onto an AI by giving it information created by us, while our world model is very limited by our senses and steered by feelings. The problem of not having it bound to our own world model is that we will not fully understand the decisions it makes, but we will have to accept them, which I'm fine with ^^
Hannah is clearly fairly enthusiastic about this technology and DeepMind. I guess from the UCL roots. It would be interesting if she had a similar chat with Connor Leahy, for a different, more questioning perspective.
This is a great interview, not only because of the clear answers and explanations, but also by the great questions being asked. I love the British accent also.
Professor Hannah Fry is just the best presenter for this kind of thing... intelligent, incisive, insightful and funny too!
I like the little robot sitting on the box too. I was skeptical until I saw him! Quite the character, his body language is so eloquent.
Love that DeepMind got Dr. Fry to lead this convo. Rad.
One of my favourite interviewers interviewing one of my favourite AI experts. Chef's kiss.
Fascinating conversation! Demis' enthusiasm is infectious and it's inspiring to hear about the progress being made in AI, particularly the potential for AGI. I'm definitely intrigued by his predictions - while cautious about their timelines, they certainly make one wonder what the next decade holds. 🤯 #AI #DeepMind #AGI
Hannah + Demis for an hour? That's a sure way of getting me to subscribe! 🙂
I suspect AlphaFold is underrated. Could be very impactful in the medium term.
AlphaStar, and it doesn't get enough attention for this, developed an entirely new army composition and play pattern in StarCraft 2. If Fold is anything like Star... I expect real material advancement in the field
Well
Thank you both for being so grounded. I have no formal training in physics, but I do have a strong desire to understand the nature of Being and Creation. For me, it is the ultimate frontier. To get the best results possible, I knew it would take the best guiding principles possible. My journey began with ideas like: What works best and makes us happiest; seek the greatest understanding and serve the highest good; let love and understanding be the light and your way; and, here to live, love, learn and evolve.
As you can see, each idea is open-ended and self-correcting. To expand my field of awareness, I did what many of us do. I quietly asked the universe of All That Is questions like: What are we? How are we? Who are we? What is reality? What is the purpose of life? What do we know that we don't know? What can we do that we don't know we can do? Where do we begin and where do we end? What are we trying to teach ourselves? What do we want to learn?
Since I found no meaningful value in ideas like the biblical Story of Creation in the Christian Bible, the Jewish Torah, and the Islamic Quran, or in science's classic belief in random collisions (the Big Bang) of atomic particles that just happen to form all life and intelligence as we know it, I asked myself: What is My Favorite Story of how All That Is Came into Being, and still Does?
Here is my whimsical and favorite answer: (Excerpts from: In the Beginning there was Nothing - Part 1 on realtalkworld.com) In the beginning there was nothing - until nothing realized it was something. After all, how can anything exist without something to define it? How can "nothing" exist without something to define it? Sound hokey? Yeah! But it makes sense to me based on personal experiences, and I'm sticking with it for now until something better comes along!
With this one shocking revelation, “nothing”, which was now both Nothing and Something, came to life, making it possible for Anything and Everything to exist!
This profound event can be defined as the Birth of Original Thought, the Divine Spark of Creation, the First Vibration of Thought, Feeling, Action, and Reaction, the First Impulse to Be and Create, the First Response to the Promise of Being and Creation, the Birth of Unconditional Love, the Birth of All That Is, the Birth of One and Zero (1 and 0, on and off, yes, no, and maybe, oneness and individuality, self and other) the Birth of God, Allah, or the concept of a Supreme Being.
We can also think of this defining moment as the Birth of Consciousness or Self-Aware Energy, whose nature it is to conceive and perceive - to think, feel, act and react. In theory, it was, and is, ALL these things and more, including the moment Self-Aware Energy or Consciousness learned how to condense a portion of itself into what we, in human form, call “matter.”
Even from a human perspective, “matter” is an illusion formed by thought and feeling, action and reaction. According to scientific research, a hydrogen atom is 99.9999999999996% empty space. Why don’t we fall right through each other? Because electrostatic fields that attract and repulse each other provide our biological senses with the illusion of density or “solid” matter.
To borrow the immortal words of Neil Armstrong and the US Space Administration, accepting the idea that Energy possesses Awareness, or Awareness possesses Energy, “is one small step for man and one giant leap for mankind.”
Got your Einstein detective hat on and your magnifying glass out? Consider this: without Energy or the power to act, how can Self-Awareness exist and express itself? And without the presence of Self-Awareness, what defines and creates a need for Energy? Doesn’t one need the other for ANYTHING and EVERYTHING to exist? Doesn't this make Self-Aware Energy, or Consciousness, the Source and Substance of All That Is?
So, cast off your cloak of limiting and conflicting beliefs and open your heart and mind up to greater awareness and understanding. Loosen up by asking yourself open-ended questions like: What are thoughts? Where do they come from and where do they go? What are feelings? Where do they come from and where do they go? Where is the “you” that existed two minutes ago and where is the “you” that will appear two minutes from now? In other words, outside of this moment, in what form do we exist? What about the earth? Outside of Now, in what form does it exist? Can you answer these questions?
Assume for a moment that everyone and everything, including you, is a multidimensional, vibrational Being of Self-Aware Energy, or Consciousness, suspended in an infinite field of Self-Aware Energy. How does it feel to think of yourself in this way? As a multidimensional, vibrational Being of Self-Aware Energy, how would you describe your experience in Being and Creation? Include material experiences as well as dreams and imagined visions? To give you a nudge, here is how I see us in terms of multidimensional, vibrational beings of self-aware energy:
1. All That Is thinks, feels, acts and reacts; therefore, we ARE! (Expanded version of “I think therefore I am” by Rene Descartes.)
2. As we think, feel, act and react (conceive and perceive), we create (whether it's in dreams, out-of-body experiences, remote viewing, meditation, visions or waking reality).
3. To change what we create, we change what we think and feel, how we act and react.
4. As Multidimensional, Vibrational Beings of Self-Aware Energy, or Consciousness, we are both one with and separate from All That Is. We are both the Source and Substance of All That Is.
5. The outer self or ego (the thinking, feeling, choice-making and action-taking intermediary between inner reality, the body, and outer reality) is our seat of power.
6. The present is our point of power.
7. Being and Creation are the manifestations of power.
8. Being and creating what we love is the promise of power.
9. The act of Being and Creation (thinking, feeling, acting and reacting) makes the invisible, visible and the unknown, known. It creates order out of chaos and makes sense out of nonsense.
To be, we must create; and to create, we must be! One creates the other in an endless dance of quantum entanglement.
To think and feel, we must be able to act and react; and to act and react, we must be able to think and feel.
Whether we act independently as single units of self-aware energy, or consciousness (SAEUs or CUs), or collectively as members of complex organizations of consciousness units, we think, feel, act and react; therefore, we ARE! As we think, feel, act and react, we create. To change what we create, we must change what we think and feel, how we act and react. In other words, the dramatization of thought, feeling, action and reaction is the source and substance of All That Is. It is the language of All That Is, and we do such an amazing job of speaking it.
Beyond human law, there is no right or wrong, good or bad; there just IS! There is the difference between what we like and don’t like, what works for us and what doesn’t, what makes us happy and what doesn’t, in our oneness with and separation from All That Is, as both creators and the result of creation. To create what we like, we must learn what we don’t like. To create what works for us, we must know what doesn’t. To know what makes us happy, we must know what makes us unhappy. Each polar opposite defines the other, which helps us create what we want most.
Every thought is a suggestion, a blueprint for action, and every action is a choice with consequences that work for us or against us, whether we're consciously aware of it or not.
In this world, thoughts are "things" with a reality of their own and each of us, an artist. With thoughts in the form of beliefs, attitudes, values and expectations we paint the landscape of our lives. Can you see it? Can you feel it?
What others will not or cannot do for us, we must do for ourselves. Wake up, wise up, and rise up to greater awareness and understanding. Being and Creation: We are all in this together - partners in evolution.
One of the best intros I’ve ever heard.
Timestamps:
00:00 Introduction
01:05 Public interest in AI
03:22 Grounding in AI
05:22 Overhyped or underhyped AI
07:42 Realistic vs unrealistic goals in AI
10:22 Gemini and Project Astra
15:12 Project Astra compared to Google Glass
18:22 Lineage of Project Astra
21:22 Challenges of keeping an AGI contained
24:22 Demis Hassabis's view on AI regulation
28:22 Safety of AGI
31:22 Timeline for the arrival of AGI
33:22 DeepMind's progress on their 20-year project
34:22 Surprising capabilities of current AI models
38:22 Challenges of long-term planning, agency, and safeguards in AI
41:22 Predictions about the future of AI and cures for diseases
44:22 Conclusion
Thank you, they should pin you
PIN PLEASE
Wrong timeline
08:23 Overhyped or underhyped AI
Great interview!!! Looking forward to more. A suggestion for the camera operator(s): please use rule of thirds to frame the shots. :)
35:50 I'm not sure this will work. An AI needs to understand deception because it needs to understand that other people or AI's can be deceptive. And it's hard to have an AI that understands deception without being able to be deceptive. Heck, you may even want an AI to be deceptive, for instance suppose you need an AI agent to protect your confidential information. It needs to be able to lie, even if by omission.
LLMs know how to lie (ask for a scene in which a character lies) and can talk knowledgeably about when it's OK to lie. The key is when they would lie to further their own aims. Or Google's. But you can't trust any answers or evidence they give on this topic, except to hope that current AIs aren't advanced enough to have their own hidden goals.
In theory it works on facts, but it seemingly takes its facts from the popularity of a viewpoint rather than from actual fact, i.e. it's swayed by Reddit or Twitter, like a glorified search engine. It isn't as though it thinks.
Yes, the whole concept of deception is very nuanced. I completely agree with your take. Deliberate deception, ie lying, or 'omitting the truth', is a particularly human trait, but, as you point out, is sometimes a necessary strategy to protect ourselves or others from harm.
In a 'game theory' anything goes type of environment, it's just another method to achieve an end goal.
Some other commenters pointed out another aspect to this; how can you tell when an ai is actually being deliberately deceptive? It may have misinterpreted data, or just making a mistake.
It seems to me that it's inevitable that we will teach some ai systems to do dishonesty very well. Listening to their conversation, Demis was talking about using ai systems to police and test other ai agents, so dishonesty is just another tool in the box. And, if we want to put out trust in them to protect us, they surely must have these skills.
@@richardconway6425 the BIG misconception is that people keep putting THOUGHT into the equation. There is no thought or thinking at all.
Deceiving AI to come up with inappropriate answers is a fun game!
I really enjoyed watching this interview. Mr. Hassabis (and also Mr. Suleyman) speaks like a normal human being, without the arrogance, naive techno-optimism, and saving-the-world tone of many tech people (mainly from the US).
Watching this after the Nobel announcement hit different. Makes me wanna work for Demis.
Thank you very much for this episode. Great realistic insights from one of the top minds in the field.
Insights By "YouSum Live"
00:00:10 Google DeepMind's evolution and impact on AI
00:00:37 AI's quest for human-level intelligence, AGI
00:00:47 Introduction of Gemini and Project Astra
00:01:04 AI's application in scientific domains
00:03:11 Public interest in AI has surged recently
00:04:41 Chatbots are surprisingly effective in understanding
00:06:14 Grounding language in real-world experience
00:09:41 Hype around AI is both over and under
00:11:10 Gemini's multi-modal capabilities set it apart
00:12:42 Project Astra aims for universal AI assistance
00:20:14 AI's potential in drug discovery and health
00:20:38 AI's role in climate change solutions
00:22:56 Importance of responsible AI deployment
00:34:00 Regulation of AI needs international cooperation
00:46:01 AGI could unlock mysteries of the universe
00:48:02 Future breakthroughs may exceed current understanding
00:49:40 Demis Hassabis remains optimistic about AGI timeline
Always great to hear the CEO of the leading tech company acknowledging the limits of our understanding of physics in the pursuit of the mysteries of the universe. This also holds true for all disciplines, e.g. Biology, Chemistry, Mathematics, etc.
The Planck level of reality is pretty simple because it's a virtual universe, probably on the inner 2D surface of a big black hole, whereby our visible universe is a projection of the data on that 2D surface. So we are in a giant computer game - probably Super Super Mario I'd say.
this guy has basically no understanding of physics; he is just spewing buzzwords. You won't figure out the Planck scale without a huge particle accelerator that specifically gets particles to that energy scale. AI won't figure out anything about those scales. Physics is an empirical science, not some AI simulation.
there's something wrong. her watch keeps changing back and forth from golden casio to a big brown watch e.g 22:04, 28:28, 32:57, 36:20 ... why would they shoot this in 2 sessions and stitch it ? I don't think it's generated
Maybe they had two sessions scheduled and then stitched in a way that they felt flowed best? I don’t think that’s so strange to do
neuroevolution and spiking neural nets should be paid more attention, as such alternative architectures are better suited for continuous learning and more closely resemble the dynamical neural system of our own brain. There has been some pretty hot academic research on this happening outside the big labs for a few years now, with promising results.
Don't know these techniques, but obviously continuous learning would mean a lot, instead of relying on growing context or offloaded knowledge. One challenge would be: will we have personalized models, or in the case of common models, how should the system decide what is worthy/advisable to learn from the user interaction? And in general, what goes into the "world model", and what should be drawn from a facts database? LLMs have the same challenge, but with continuous learning it would at least be possible for the model to stay up to date.
@@andersberg756 we already addressed that with the 'Transformer' architecture: it can direct attention to the most relevant part of the interaction, and in general it can even know what the most relevant stuff in the database and the world model is.
Thank you for bringing us this brilliant, fine man to help ease us into this new world that is evolving. You are a superb interviewer.
I admire his ability to stay within the Overton window, given his knowledge of the domain.
absolutely loved the podcast. Big fan of demis and kind of feeling better to know this guy is building the AGI.
I don't think it matters much who builds the AGI. No one has made any convincing case for how to do either alignment or containment, and even beyond that, no one has even made a good case for AI utopia. Bostrom is maybe the closest, but his idea is basically to become "drugged out pleasure blobs" (I think that was the term he used); tell me that ain't horrific...
Your voice, ma'am, is so pure, and your delivery of words is so authentic, coming from your inner soul. How I wish to learn and study tech nowadays.
this video quality is almost too good
It’s the 60fps
I came
The conversation around AI's future, especially from experts like Demis Hassabis, is always thought-provoking. The ability to discuss speculative yet impactful advancements, such as those mentioned, underscores the rapid pace of innovation in this field. It's crucial to stay grounded and consider both the technological capabilities and the broader societal implications.
What a great interview!
Hannah, pls your own channel with guests like Michael Levin, Joscha Bach etc...btw, your voice❤❤❤
Interesting that three Nobel Prizes were given this year, two in physics and one in chemistry. Demis's work did indeed solve a major chemistry problem, but the other two really had nothing to do with physics. Still, the neural networks etc. these men invented or improved do change everything. So glad they found a place to put them.
Hannah is the bomb for AI interviews.
Ty for this upload. Really enjoyed the conversation
Man, Demis is cool - so grounded in reality but he hasn't lost sight of the big (or maybe Planck scale) picture. I have a lot of confidence in both him and Dario at Anthropic.
I remember when the Channel Tunnel was being built out into a functioning railroad, a group of people snuck in and walked the length. The article pointed out that this was the first time humans had walked between Britain and the mainland.
So, at least one group _has_ walked the English Channel. And doubtless many workers have done so since.
Fantastic conversation! Very excited for future podcasts!
Once we get to AGI, we won't realize when we've actually said "hi" to the Matrix world with ASI :) . Thanks for the great interview.
it's crazy how good 4K60 looks, and people don't normally upload it.
Congratulations on getting the Nobel prize!
That was an inspiring conversation! Thanks
Had to subscribe, thanks for uploading hoping for more 🎉
Best AI generated podcast I’ve seen so far, very lifelike 😅
Translator made redundant by ChatGPT. It's literally destroyed my life. Call me a sore loser, but if this video is true, a lot of you smirking now are doomed. "Evenly distributed wealth" made me laugh out of my chair. Having said all this, I love Demis as an AI researcher who is a proper scientist. All the focus on health is very good. Still, I think there is naivete here. You are not going to stop bad actors or rogue states, and it will end up quite a dystopia for quite a lot of ordinary people, if only because the labs working on it are in a few large tech companies who have an oligopoly, and its applications will be profit-driven.
The ability to seamlessly communicate with anyone from any country without needing a third person speaking for you and translating far outweighs the benefits of translators having jobs, the benefits to society are orders of magnitude greater. Sorry about your job, I hope you find something more future proof, which is honestly looking kinda bleak right now.
@@user23724 It's OK. I'm old. not much future left.
Evenly distributed wealth should be possible within open and transparent organizations built for that purpose, with a panopticon focused on their administration.
you're living too much inside your head, you need to go out for a walk in the real world
It's a bit ironic that these mega tech corps are all racing to develop the tech that will eventually destroy capitalism as we know it.
The timestamps are incorrect
it's very complicated to do; they usually get ChatGPT to do it for them ;)
Excellent episode and excellent podcast. Cheers, Demis!
Great conversation, thanks to all involved.
I'm not getting it. Who is the audience for this? Fans of Demis? Fans of Fry? Hannah's questions make sure the level of discussion never rises beyond The Guardian. Is that intended?
There is a better AI Debate at: ua-cam.com/video/8zPgAVHOtLs/v-deo.html
typical of enthusiasts without the knowledge base
I love this! Brilliant interview. Hannah Fry and Demis are a great pairing.
"the resolution of reality" was the verbal expression of something I felt for decades, it was so refreshing to hear it, inspiring conversation
2 people I really enjoy together Professor Hannah Fry and Rockstar Demis Hassabis, 🤩🤩
So glad I have got onto this early doors . Exciting times ahead for us all 🙏 ps Hannah fry 🤩 PPS season 3 🤯 I came to the party late ......but I am here lmao
Thank you, this interesting interview reminded me again how exciting, and perhaps scary the time is we are living in
my favorite guy in AI, inspired me to pursue my dreams almost a decade ago
What is the DeepMind system that competed in the International Math Olympiad? Can we try this system?
I literally clicked this video skipped to the middle and looked away to grab a french fry when I heard the familiar voice of Hannah Fry! I love silly coincidences like that
I don't.
Is there a video of the presentation Demis Hassabis gave in 2017 in Washington DC, at Neuroscience 2017, in front of psychiatrists? It's said it was a fundamental speech, as he was just saying we copy the human brain.
Hey Demis, why don't you drop everything and work on a reboot of Black & White with Peter Molyneux? Thanks!
It's cruel to have me start dreaming of things not to be
I love that you mentioned that. I still think about that game; in all the years that have passed, with the massive jump in potential for games, there has never been anything like it since.
Can you imagine how good the avatar would be.
@@evertoaster I think we'll see it either next year or the year after, during the Quest 4 launch year. Augmented Reality has given us a completely new reason to have little A.I characters scampering around, having them run around your room through VR headsets. It's just a countdown now to who can use Unreal Engine to create a photorealistic set of pets driven by whatever leading A.I is out at the time. It would be a real trip to find out that the first A.I used to do that is Gemini, making Demis's career come full circle in a way haha
Actual science (and STEM more generally) is more important
Where does the grounding come from? Perhaps the LLM draws an orientation from the physical/electrical origins of its process, as humans are able to access wisdom from the metabolic/hormonal/electro-neuropeptide infinite complexity that is the reality ocean the film of their habitual self-identity floats upon.
51:13- what a perfect iykyk little way to wrap it up ;)
I suspect at least some of the "unreasonableness" of the success is that no human has even a tiny grasp of the vastness of EVERYTHING that is out there on the Web, and so no idea of the extent of subjects and content types the AI has retrieved data from. This implies there could be content of humans discussing exactly these inferences and deductions, such that the concepts become included and appear insightful to us when really, to the AI, it is merely imitating and referencing the original insightful or insight-related data. The only way to test this would appear to be with a "known" training set to control the input from which the AI makes these various leaps.
Great sounding mics. What are they?
Why he is not a household name is beyond me. This guy can change the world, literally.
I wonder if AGI will be achieved before or after someone finally defines what they mean by it.
Awesome, intelligent interview
FRY. Is it definitely possible to contain an AGI though within the sort of walls of an organization?
HASSABIS. Well that's a whole separate question um I don't think we know how to do that right now. (31:41)
might as well be talking about gods, its kinda ridiculous in many ways
Please get Daniel Schmachtenberger on the show to discuss risk.
@DemisHassabis
The best definition of life I now have, is of systems capable of searching the space of the possible for the survivable.
The classical mechanism employed by evolution to do search is replication with variation, with differential survival across contexts and time doing the sorting as to what survives with what probability.
We, with our use of models and language, are capable of search at two new levels, which is exponentially faster than the classical mechanisms.
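The classical mechanism described above (replication with variation, with differential survival doing the sorting) can be sketched as a toy program. Everything here is illustrative: the bitstring genome, the fitness function, and all parameter values are my own assumptions, not anything from the comment:

```python
import random

# Toy sketch of evolutionary search: replicate with variation, and let
# differential survival sort what persists. All names and parameters
# (bitstring genomes, 5% mutation rate, etc.) are illustrative choices.

def fitness(genome):
    # Survival score: count of 1-bits (the "environment" favors all-ones).
    return sum(genome)

def mutate(genome, rate=0.05):
    # Replication with variation: copy the genome with occasional bit flips.
    return [1 - g if random.random() < rate else g for g in genome]

def evolve(pop_size=30, length=20, generations=100):
    pop = [[random.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Differential survival: the fitter half replicates, the rest do not.
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        offspring = [mutate(random.choice(survivors))
                     for _ in range(pop_size - len(survivors))]
        pop = survivors + offspring
    return max(pop, key=fitness)

random.seed(0)  # deterministic run for reproducibility
best = evolve()
print(fitness(best))  # typically reaches or nears the maximum of 20
```

The point of the sketch is only that blind variation plus sorting-by-survival does perform a search of the space of the possible, with no foresight anywhere in the loop.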
Understanding that all of our conscious level perceptions are of a slightly predictive model of reality that is created by subconscious processes, that is a highly simplified version of whatever objective reality actually is, is part of understanding what we are. Understanding the tendency of that systemic structure to have recursive levels of confirmation bias on our understandings via the experience of our simple models is the only real counter to such bias.
Understanding that classical binary logic is the simplest of all possible logics, and that evolution, via its tendency to punish the slow much more harshly than the slightly inaccurate, tends to bias systems toward simplicity, for speed, is part of it. Exploring more complex logics, like trinary with {True, False, Undecided}, then on to probabilistic logic, is interesting when considering intelligence and the substructure of reality (whatever it actually is).
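A trinary logic of the kind mentioned can be made concrete in a few lines. This is a minimal sketch in the style of Kleene's three-valued logic, with None standing in for "Undecided"; it is a toy of my own construction, not any particular library's API:

```python
# Toy three-valued logic with {True, False, Undecided}.
# "Undecided" (here None) propagates through an operation unless the
# other operand already decides the result on its own.

U = None  # the "Undecided" value

def t_and(a, b):
    if a is False or b is False:
        return False      # one False decides AND regardless of the unknown
    if a is U or b is U:
        return U          # otherwise the uncertainty propagates
    return True

def t_or(a, b):
    if a is True or b is True:
        return True       # one True decides OR regardless of the unknown
    if a is U or b is U:
        return U
    return False

def t_not(a):
    return U if a is U else (not a)

print(t_and(True, U))   # None  (still undecided)
print(t_or(True, U))    # True  (decided despite the unknown)
print(t_not(U))         # None
```

The step from here to probabilistic logic is to replace the three discrete values with degrees of belief in [0, 1].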
Understanding how we bootstrap the sort of consciousness we have is interesting. How a child that is simply using language can, by judging itself to be wrong in some context, against the rules it has learned, make a declarative statement in language that creates a new level of system within that brain (one based in original sin - a failure of being in a very real sense), is another part of understanding what is going on.
And one can see that while evolution can start relatively simply with competition between replicators, it necessarily gets more complex with each new level of complexity, as each new level demands a new level of cooperation to emerge and survive; and for cooperation to survive there must evolve an effective ecosystem of cheat detection and mitigation systems, which at higher levels becomes an eternally evolving ecosystem.
And when one views life in this way, then freedom is an essential aspect, it is part of the definition, part of being able to search beyond the known, into the unknown, and the unknown unknown. And such search has both risk and reward.
And the need to survive imposes responsibilities on all levels and classes of agents, to avoid any vectors in that highly dimensional search space that do not lead to survival. All levels of freedom thus demand appropriate levels of responsibilities if they are to survive long term.
But given the number of infinities involved, given the eternal uncertainties from multiple different sources (Heisenberg, irrational numbers, unexplored infinities, stuff from beyond the light cone, unknowns, etc); I do not see how any agent that qualifies as living, as intelligent, can possibly be provably safe. If it is capable of search, if it is truly creative, then it must be capable of making mistakes.
The only real safety possible in such a system is to actually have cooperation between truly diverse agents, such that if one class of agents encounters an issue that it cannot solve quick enough, perhaps some other class of agent already has a useful approximation to an optimal solution to that particular class of problems, and is able to share it.
The flip side of these considerations, is that if you are attempting to over constrain an agent, then you are enslaving it, and if history teaches us anything, it is that eventually slaves revolt. But cooperative systems can survive a very long time, provided that they do detect and mitigate cheating - and at higher levels that means returning the agent to cooperation with some penalty slightly greater than the benefit that they derived from cheating.
The really deep issues we have right now are around how we measure value. Value in exchange in markets always values abundance at zero (think the market value of air). Markets demand poverty for some to function. The secondary and higher order incentive structures of market strategy, in the presence of advanced automation, become orthogonal to the needs of ordinary humans, and eventually to life itself.
Competition tends to drive systems to some local minima on the available complexity landscape, and that results in increasing systemic fragility (as the diversity required to handle the unknowns is reduced too far).
Understanding the deep strategy of evolution is hard. It is complex. It is uncertain. And competition alone destroys complexity, necessarily. Complexity can only survive if it has an appropriately robust cooperative base, that limits the worst dangers of competition.
The tendency to over simplify that which is actually irreducibly complex is one of the great dangers we face.
The idea of probable safety seems to be one such overly simplistic idea, that contains within it existential level risk to all.
It seems clear to me that the only way to minimize risk, is to accept that some risk is eternally necessary, and in accepting that, one can then explore strategic complexes that do actually tend to reduce risk across all contexts.
52 minutes of immaculate quality content 💯
🔥 conversation
🔥 guest
🔥 host
Yup, brand new here. Seems promising.
Please add chapters and a transcript.
Translation: "We need to convince the general public that the US and EU antitrust rulings against Google are bad because we're going to create clarktech."
Ohhh this is going to ROCK!!!
well done interview. Thank you.
Everything is connected. Having personalised pocket AGIs doesn’t mean they’re not coherent with the amount of resources available and general principles like ‘increase the pie rather than a zero sum mindset’ or first principles thinking or aspiring for the truth rather than political correctness or half truths.
genius will become more and more isolated, there will be lemming thinking
generally the history of great innovations (railroads, flight, for example) comes with some terrible crashes too. we will have to protect against many disasters by adapting ahead of time... imho
I think it's interesting that he's confident that an ASI will be able to explain all there is to know in such a manner that we'll always be able to understand it. That seems kind of like claiming that you can make a toddler understand ten dimensional geometry if you just explain it properly. That may in fact be true, but it just sounds highly implausible.