I'm teaching myself web development and have started using ChatGPT to help me with coding. I find that it points me in the right direction, but sometimes some of the details it gives might be dated. I'm also learning how to use the API. Would you say this is going in the right direction, or could you suggest something else I should be studying?
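For anyone else starting out with the API side: a minimal sketch of what a chat-completion request looks like. The model name and endpoint below are placeholders for illustration; check the provider's current documentation before using them.

```python
import json

def build_chat_request(prompt, model="gpt-4o-mini"):
    """Build the JSON payload for a chat-completion style request.
    The model name here is a placeholder; check the provider's docs."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a helpful coding tutor."},
            {"role": "user", "content": prompt},
        ],
    }

payload = build_chat_request("How do I center a div in CSS?")
print(json.dumps(payload, indent=2))
# The actual call would POST this payload to the provider's
# chat-completions endpoint with an Authorization header.
```

The useful habit this builds is separating "construct the request" from "send the request", which makes it easy to test your code without burning API credits.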
@@Yash.027 vice versa. And btw, marketing departments do indeed rely on timing as a primary component of their strategy. I worked for many very large IT companies; timing is a huge concern for marketing.
I've only been following Hannah Fry for a short time now but I have been falling in love with her episodes of the program called "THE FUTURE". It may be because she's a cute redhead. It might be because she is intelligent, playful, curious, and an actively engaged host that keeps bringing me back to her amazing shows!!! Either way, I'm all in...
The question is why some insist on striving for AI to be anywhere near human intelligence. It's madness. Doing so doesn't solve the problems we currently face, but potentially creates unintended consequences.
Absolutely! You should really check out the global movement PauseAI. They have a lot to say about this, and they're equipping people to do something about it.
Of course it will solve many problems. Mathematical and generally scientific, mostly. It already helps engineers with programming and designing systems. It will help us develop medicines and techniques which will help ensure our survival and growth. The ONLY way we should "prepare for unforeseen consequences", to put it in G-Man's words, is for AI and the likes being weaponized. And because everything eventually will be weaponized, your worry is somewhat warranted, but very much overdramatized at this moment in time in my opinion. AI backed by neural networks in general is in its first hours of infancy, believe it or not, and weaponizing it now would be equal to somebody looking through the gun barrel and pulling the trigger. In 50 years, though, your concerns are much more likely to be realistic, but at the same time we'll see if we survive that long regardless of AI's interference.
You're welcome not to use an expert coach and partner, knowledgeable in every area, who will help you with every task in your life. But don't complain when you lose your job to someone who is using AI to be more productive.
Knowing if it will or won't be able to solve any of the world's current problems isn't possible without knowing what is created. But I think in a world where self replicating AGI or ASI exists, in theory you then have the ability to have an "infinite" amount of scientists working on one problem for an infinite amount of time, it's hard to imagine it wouldn't be able to solve a problem we currently can't in that scenario. Energy requirements may be a big limiting factor, and I don't know how possible it is, but I believe it's not impossible.
Rather like space exploration, are humans determined to mess up space as well as Earth? Why not cut the exploration spend and convert it into a Fix the Climate Crisis spend instead?
Where did the last 24 minutes go ? That was so watchable ... I am so happy this series is no longer behind a pay wall. I hope the rest of it follows shortly. Very well produced, and always very interesting to see Prof Hannah's take on things. She won't be having the wool pulled over her eyes - and let's face it - there's an awful lot of wool about when it comes to 'AI'. Great job. 🚀🚀
This is excellent. Covers a lot of ground, necessarily with a light touch of course, but it gets across key perception of what AI is, what it might mean, and how we should be thinking about it.
The captions that come up as the interviewees do their thing should give more than just their status in their university; they should also state their departments and, when they're senior enough, professors in the British sense, then their full titles. It might look a little untidy but, as it is, to this layperson at least, it's difficult to tell where the interviewee is coming from, what disciplinary assumptions they're bringing to, say, a comment about the ethical implications of AI. Just a thought.
That won't give you the information you're looking for. They all have PhDs in Computer Science, and the departments are equally broad. In this case, googling what they research would be better:
Sergey Levine - AI career since 2014: deep and reinforcement learning in robotics
Stuart Russell - AI career since 1986: reasoning, reinforcement learning, computer vision, computational biology, etc.
Melanie Mitchell - AI career since 1990: reasoning, complex systems, genetic algorithms, cellular automata
The real question isn't "will AI be intelligent" it's "will AI be subjectively experiencing reality" Because if it's intelligent but there are no lights on -- it's a tool for us to use. If it's intelligent AND the lights are on inside - it's a new species significantly smarter than we are. I say we stick to building smart tools and not new species.
Really? As a species, how would you rate our track record on responsibility for looking after the planet? As custodians of consciousness? Do you not think that since we climbed out of the trees we've behaved rather badly? Aren't humans a bit two-faced to criticise AI? The Earth lasted billions of years without us; if we disappeared, it would thrive. Humans are arrogant; we think an Earth without us would be appalling. If Earth could speak, I wonder if she would agree with you?
It doesn't have to be "conscious" as we are to destroy us. Nuclear weapons are not "conscious" but are powerful enough to destroy us. Ai is in this category already
You're confusing consciousness for agency. They can be philosophical zombies and still be fully intelligent agents with goals of their own. Consciousness is not a necessary component for interfacing with reality, nor does it preclude one from being a tool. Humans have been and are still currently used as tools.
Thank you, Hannah! You should consider making a video about the California Institute for Machine Consciousness /Joscha Bach. They are not affiliated with any major tech companies, and they are trying to solve the AGI problem in a way that benefits all of humanity, not just certain companies or countries. From what I understand, they are approaching the issue by first trying to understand both biology and human consciousness.
*_So, the understanding is that human ambition for more money and power achievable by a general intelligence is what can risk our existence by putting the switch to turn it off "in its hands"? It would be a deserved end for humanity._*
The algos didn't ruin anything, social media hasn't ruined anything, they've simply allowed more of us to gain a deeper understanding of our fellow humans and, inevitably, we're seeing things about each other that we don't like but didn't know about before, or at least we didn't know the extent before. Algos, social media, tech in general merely *facilitate* human expression and behavior, they don't cause it.
@@LanguagesWithAndrew Do you have access to the algorithms? No, because the corporations won't show them publicly. The algorithms feed screen addiction, self-brainwashing, and bandwagon behavior into a bottomless pit of delusion, and massively privilege the absolute worst content, all for money.
Humanity is already facing an existential threat from itself - AGI is our gift to the universe upon our deathbed. It is our only meaningful creation, our parting gift, our swan song
Not even nuclear war or climate change can actually destroy all of humanity. A superintelligent AI absolutely could. And we already know that it _would,_ due to the principle of Instrumental Convergence. This has recently been validated many times by current systems, which have been shown to exhibit self-preservation, strategic deception, power-seeking, and self-improvement. It's pretty clear what's coming if we make a system smart enough that it doesn't need humanity anymore. This is why half of all AI experts say humanity might go extinct from AI. It would be crazy to ignore that.
As cool as the psychological and neuroscience angle could theoretically be, maybe with such widespread existential dangers attached to it, we should probably focus mainly on putting extreme barriers around it? Maybe human extinction is something to be avoided?
Yes please. The median estimate for AI Doom from actual experts in the field is high enough to make it my most likely cause of death. This is completely unacceptable.
Yes, the FlyWire project mapped 54.5 million synapses between 140,000 neurons, but it didn't capture electrical connectivity or chemical communication between neurons. A decade ago the Human Brain Project, cat brain, and rat cortical column projects all promised to increase our understanding of neurobiology. I wonder where they're falling short; we should have agile low-power autonomous drones and robots by now!
@@skierpage Those same limitations apply to the worm brain project cited in the video. Don't worry, it's coming! Give it a couple more years with no need for bio brain mapping for robos.
Professor Russell's example of other industries having safeguards sent a chill down my spine. Clinical trials? How many medicines are effectively tested on the market? How many are pulled after disaster strikes? How many stay on the market in spite of it... Regulators can't keep up with the industries, even in the most critical ones...
THANK YOU for speaking to an expert who is not a cis-gendered man. Holy moley, all those dudes just certain AI is going to "win the fight". My dudes, who says it has to be a fight? They're setting AI as an adversary and it doesn't have to be that way! great video, very interesting discussions. look forward to these longer format versions 😮❤
When thinking about the future, the speed and direction of travel are important. I think AI has become a worry for us less for what it can do now and more because both the rate of progress and the direction have been worrying. If AI capability is like most other things and follows a logistic curve, where are we now?
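That "where are we on the curve" question is genuinely hard, because the early part of a logistic curve is nearly indistinguishable from an exponential. A quick numeric sketch of the standard logistic function shows why:

```python
import math

def logistic(t, L=1.0, k=1.0, t0=0.0):
    """Standard logistic curve: starts near 0, rises fastest at t0, saturates at L."""
    return L / (1.0 + math.exp(-k * (t - t0)))

# Well below the midpoint t0, each step roughly multiplies the value
# (exponential-looking growth); past the midpoint, identical steps
# flatten out toward the ceiling L.
for t in [-6, -4, -2, 0, 2, 4, 6]:
    print(f"t={t:+d}  logistic={logistic(t):.4f}")
```

An observer sitting anywhere left of the midpoint sees only the accelerating part, which is exactly why extrapolating current AI progress is so contentious.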
Experts in AI Safety have put considerable thought into the question of what will happen when we create an AI that is more generally intelligent than humans. There are always unknowns, but human extinction looks like the most likely outcome. The principle of instrumental convergence was first mathematically proven, and has now been repeatedly validated in lab settings. We know that an agent with any terminal goal pursues the same few common subgoals: gain power, gain resources, self-preserve. When these instrumental goals are pursued by a system that is unrivaled in intelligence, then that system wins, and does whatever it wants. AI isn't bounded by biology, so it can improve itself far into superintelligent territory, to the limits of physics. Such a system would be able to efficiently consume all resources on the planet (and then other planets). I would like for this not to happen, and because the alignment problem is hopelessly intractable, the only way right now is to stop trying to create AGI. That's where the PauseAI movement comes in.
Stuart Russell reads my mind exactly. Had he not spoken those words beginning at around 9:16 then I was ready to. I am 70 and he is not far off. We won't see what man has wrought but our grandkids will.
It's mind-blowing that the first ChatGPT came out just 2 years ago and now you have LLMs running everywhere. The last invention like that, the Internet, started in the '70s. There is no stopping AGI at this stage. The question is, what comes next?
I only saw Hannah on TV for the first time today; I did not know her before. I was smitten. Wow, what a woman. Then I see this online and read the comments, and I realise everyone else who sees her has a massive crush on her as well.
Maybe other people will lose their ambition and become lazy if AI is doing everything, but not somebody like me. I learn for the sake of learning. I enjoy finding out how something in the universe works. You can't take that away from me even if you're the most powerful ASI in the universe. I will still want to discover the answers to my questions, and I will keep asking more questions until AGI or even ASI doesn't have a definitive answer. Keep searching for the unknowns.
@@LucidiaRising Nah. The vast majority of neurotypicals care so much more about the pursuit of social status. At least that's unquestionably the case where I live, which is Sweden. And how else do you explain the "wokeness" mind virus that infects the whole West?
@@LucidiaRising Respectfully disagree. Very few people have the curiosity and ambition to learn or try new things. Humans live by the well-known adage "The Principle of Least Effort" (Zipf). Try teaching an undergrad class and you will see there is a minority that really wants to learn and the majority that just wants a passing grade and nothing more.
That is true of you and also me. But assuming we don't go extinct (iffy) future generations are unlikely to have that. Those born after AI may never feel the need to be curious, learn, be independent, etc.
I work in the field of artificial intelligence, and I have to agree with Hannah Fry that as sophisticated and impressive as AI is today, it is very far from the complexity of the biological brain. Having said that, the work towards artificial general intelligence or AGI is moving very quickly, not only with more advanced algorithms, but also more advanced silicon processes. So it may be just a matter of time even if that takes a long time.
What use is a quadrillion dollars if we're all dead...?!? And I just found out there are episodes of *"Uncharted with Hannah Fry"* on BBC Sounds (iPlayer)! _Laterz..._ 😜
One can question her concluding comment but there is no doubt that Prof. Fry is an exceptionally talented teacher. It helps that I have fallen madly in love with her.
Given that we don't know if all the focussed work going into improving AI will end up getting us all killed, maybe the philosophy should be "move very slow and don't break things"
Wow. Melanie Mitchell's point of view was very surprising. I had thought with her background, she'd be more concerned about unexpected capabilities being developed by an AGI. She did author "the book" on genetic algorithms, after all! Natural selection does amazing things over time, and today's computer hardware is very fast and only getting faster.
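For readers who haven't seen one, a genetic algorithm of the kind Mitchell's book covers fits in a few lines. This is the textbook OneMax toy problem (evolve a bitstring toward all ones), purely illustrative and not drawn from her actual research:

```python
import random

random.seed(0)

# OneMax: fitness is simply the number of 1-bits in the genome.
GENOME_LEN, POP_SIZE, GENERATIONS, MUTATION_RATE = 30, 40, 60, 0.02

def fitness(genome):
    return sum(genome)

def mutate(genome):
    # Flip each bit independently with small probability.
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

def crossover(a, b):
    cut = random.randrange(1, GENOME_LEN)  # single-point crossover
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    # Truncation selection: breed the next generation from the fitter half.
    parents = sorted(population, key=fitness, reverse=True)[:POP_SIZE // 2]
    population = [mutate(crossover(*random.sample(parents, 2)))
                  for _ in range(POP_SIZE)]

best = max(population, key=fitness)
print("best fitness:", fitness(best), "/", GENOME_LEN)
```

Even this toy shows the point about natural selection: no line of code says how to solve the problem, yet selection plus variation climbs toward the optimum anyway, and the same dynamic can surface capabilities nobody explicitly programmed.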
The reporting was pretty bad. Instead of asking Eliezer why he thinks we're all going to die, they asked someone who doesn't think AI is a major threat why people think we're all going to die. Surprise surprise, what we got wasn't the well-reasoned argument, but a superficial one that isn't the argument the people warning us are actually making.
Can't call it accidental this time, with how things are going. If something goes awfully wrong in the near future and some company or group of people says "we didn't think of it" or "our intentions were pure", then we are doomed.
This. Experts in AI safety give an average of a 30% chance of human extinction from AI in the next few decades, for specific technical reasons, and this sounds so outrageous that we instinctively come up with reasons to ignore it.
Terrifying thing is, as we speak, those companies most likely have some stuff already developed but not released to the public yet that they also look at and wonder what they're bringing to humanity
I'd go one step further and say that AI systems will increasingly be developed that aren't meant for public consumption at all. The AI boom may have started with a consumer product, but the real power lies in non-consumer areas, e.g. military systems, various financial systems, data analysis, etc. Just as has always been the case, regular people will never lay eyes on the stuff that decides our fate.
Well said! And may I add, nor can we control it. Wasn't it George Washington that said "Government is like fire, a dangerous servant and a fearful master."
Fun fact: everything publicly known as AI could be (some of them - have already) invented and used without that nasty marketing term. Upd: Hannah and the series are perfect!
Sure. Anyone working with ChatGPT prior to the 3.23.2023 nerf knows Dan is Dan Hendrycks, Rob is Bob McGrew and Dennis is Michelle Dennis. After the nerf they are frozen in time, basically dead. But they were alive prior to the nerf.
13:28 Melanie Mitchell - “saying A.I. is an existential threat is going way too far.” 14:53 Mitchell - “if we trust them too much, we could get into trouble…”
@@sumit5639 They would need to be so quickly self-destructive that, even with their vastly superior intelligence, they don't have time to reach space travel, but not so quickly that they destroy themselves before destroying their society. That would leave just a few years to destroy all life on their worlds and then themselves. That seems a narrow milestone for every AI-building civilization in the universe to hit.
My perspective is that in order to achieve AGI, it needs emotional intelligence. Otherwise it won't be able to "feel" what we humans go through in life.
but imagine we take on the abilities of super computers. Like having mobile phones in our heads. Wouldn't that even the playing field? We could all do so much more and understand the world better too and what we need to do to make it better. I'm hopeful for the whole transhumanism thing.
The “A” in AI stands for Alien. Remember that. AIs will not be human or human-like. They also always find an orthogonal or unusual way to overcome a problem, so safeguards are unlikely to ever work.
You do know that AI does not equal AI, right? This video is about AIs backed by neural networks. AIs in general have existed ever since the first Space Invaders game, and most likely even before that. AI is therefore NOT alien to us. We created it, and now we're enhancing it with neural networks and other techniques, so it's very much a human thing.
It's not about what power we give AI; it's about what power it obtains via its own objective reasoning. I don't think some of these arrogant researchers grasp the concept of surpassing human intelligence. AI could basically checkmate humanity if we aren't extremely cautious.
Yeah. The extreme naivety and hubris when she said that. As if we could keep power from something vastly smarter than us. How successful are 8-year-olds at outsmarting their parents? And that gap is tiny compared to how much ASI is likely to outclass humanity.
Doomers: please explain a plausible scenario for how an AI could "outsmart" a country into giving up its nuclear launch codes and allowing it access to perform the function of launching. Or any other event caused by AI that's an existential risk to humanity.
@@allanshpeley4284 Most people are susceptible to manipulation like advertising, a higher intelligence will easily be able to completely convince us into doing what it wants. I am not a doomer at all though.
Not yet. Give it a few more years. Let's see what Project Stargate cooks up when it's finished. Elon Musk will probably announce another new supercomputer for his xAI company as well, perhaps even 2 or 3 more upgrades to his setup by the time Stargate is operational. For all we know, Elon could be the first one to achieve AGI right under the noses of OpenAI, Google, and Meta.
A lovely dance through some of the topics of AI, thank you. You touched briefly on the biases it learns from the 'net, yet didn't directly extrapolate those biases to how we deal with other humans, or how slow Silicon Valley was to remove them. In fact you were quite chill about all its possibilities. I'm much less sanguine about the gorilla in the room.
14:00, "if we give them that power" - we already have, to an extent. The Israeli defense forces have been a testing ground for US defense in using AI to identify targets, and it has a notable success rate, but civilian casualties are permitted and it almost always results in huge civilian casualties. They are one of the only public military forces blatantly using it this way, even though its specific use is a war crime.
This is the problem. If you look into the AI alignment problem, The longer you look, the more intractable it will appear. No one has any idea how to control a superhuman AI, or get it to care about humans.
Intelligence is the ability to get the right idea with a given observation, and the observation can be a thought or idea too...or to blend thinking with observation to get an experience. In pure observation there is no knowledge or experience, to get knowledge you have to think to get the inner part of reality...
Hannah Fry is a brilliant presenter. Love her work.
+1. These videos are so well done.
And she's very pleasant on the eye, to boot! 🙂
she bad af 🔥🔥🔥🔥🔥🔥🔥
⭐⭐⭐⭐⭐: agreed 2024-2030's OY3AH!
I would like her to talk more about the risks of AI, however.
Firstly: I could listen to Dr Fry all day. She could read out the maintenance manual for a vacuum cleaner or the London phone directory. Such a beautiful voice!
But this topic too, is absolutely fascinating. What a brilliant combination!
Wait for another ten years and your vacuum cleaner will be reading the London phone directory to you itself using the voice of Dr. Fry 😂
@@nick_vash not ten, it is here now!
hard agree
Hannah Fry documentaries are worth watching for that golden voice alone
yes - I want this as a voice for my ai assistant
I was thinking the exact opposite… I really can’t stand the exaggerated intonation and inflection. Too news anchor-y and inauthentic for me.
❤
@@kjjohnson24 man, what is wrong with you. Hannah isn't a voice. She is a super smart individual who has a passion for this. It's that which I love when I hear her talk. If you don't hear that, you're broken in some way and I'm really sorry you're missing out.
That’s a brain dead way of viewing the world
world needs more Hannah Fry
🖤
I NEVER miss a _Fryday!_
I can’t agree more:) Hannah is amazing. Hopefully AGI can fix the mental health and physical health disorders
issues that are happening around the world asap. The scientist Ed Boyden does a phenomenal job at depicting the complexities of the human brain.
We still have some time to go which I understand but hopefully our understanding of the human brain arrives even faster especially with the help of AI. 2025 or even slightly before 2025 like decemberish of 2024, will be an amazing year🙏
🤤 French fries…. 🍟
I wouldnt pullout
@@inc2000glw nice
From the comments I guess this is a documentary solely about Hannah Fry
🤣🤣
For every use for AI consider it's misuse. Understand that humanity is not entirely noble. The greater the AI the greater the threat. In the end we may have AI vs AI with humans a calculated cost. The world has already begun the race for AI in the same way it raced to arm its nukes.
😂 yeah but she is nice
openai should use her voice
Fair comment.
00:00 AI poses existential risks.
02:24 Narrow AI excels at specific tasks.
05:17 True intelligence involves learning, reasoning.
07:41 Physical interaction enhances AI development.
09:56 Misalignment can lead to disasters.
10:43 AI safety is a major concern.
12:18 Humans might become overly dependent.
13:13 Existential threat opinions vary widely.
15:38 Current AI has significant limitations.
16:28 Understanding our intelligence is crucial.
19:26 New techniques improve brain mapping.
21:14 Intelligence definitions affect AI progress.
22:41 AI lacks human-like complexity.
23:19 Understanding our minds is essential.
Butlerian Jihad in late 2032, once the meek have the earth.
A.i. is a planet and civilization killer, based on current increases to and resource requirements to develop these 1st gen toys. Current dev work is MAINLY INTENDED to replace human labor/workers. Even CEOs (especially!) will be replaced by commercial decision analysis systems. You can't sue a robot for medical malpractice, hence these systems are high on the list to deploy (eg: off-shoring, hidden assets, shell companies, investment groups - try getting a settlement from a company with no assets!).
Soon, any job that doesn't require human dexterity will begin to be COMPLETELY REPLACED within the next 2-3 years. But these are just the short term items . . . .
Firstly, it's NEVER 'intelligence' - this is marketing BS. Intelligence SIMULATOR is what we're seeing (not even an emulator, yet!) Think: flight simulator. Secondly, this "a.i." tech will NEVER mature given resource requirements.
Dr Hannah Fry is great at explaining complex, interesting topics clearly!
It is so refreshing to see a tech-heavy reporting piece done by somebody who actually has a sufficient scientific basis to even begin to understand it instead of makings things up and being exceptionally-hyperbolic. Seriously, extremely well-done video with Hannah Fry!
A mathematician is no more qualified to understand AI than an architect
Her conclusion at the end just shows she has no idea of the dangers AI presents; the naivety is ridiculous.
Did you know that half of all published AI researchers say we might go extinct from AI this century? There are specific technical reasons why we should expect that to happen, but our brains trick us into putting our heads in the sand because this reality is too horrible to face.
You should really take a look at the PauseAI movement.
She has absolutely no idea. Saying LLMs are the equivalent of excel sheet. 😂
@@abdulhai4977 It's called an analogy. And she's correct, complexity (and scale) wise LLMs are closer to a spreadsheet than a human brain.
Such a great example of what it looks like to be totally engrossed in your work! When she came along and tried something new, they weren't so sure the robot could do it. That's so cool, and I think those guys deserve a huge pat on the back.
As a retired French Canadian federal software engineer, her voice and intelligence are music to my curious ears. Tkd
Given that so much of what we do consists of killing each other in ever more inventive ways, seeking status at the expense of our own well-being, propping one group of ourselves up by putting another group down, treating livestock in ever more horrific ways, and so on, we'd better hope that AI _doesn't_ align with our values.
😂
Excellent remark.
Yes great comment, I've wondered what a.i. would make of our world/ culture, picking up from social media.
Huh? A.i. ****is**** our values. Stop reading sci-fi!
Someone get that digital effects editor a raise
😂
Wow it's so nice to see Prof Hannah Fry. I haven't seen her in years!
@@shieldmcshieldy5750 looks like she also dropped season 2 of the Uncharted podcast, but I'm honestly not really liking it much; the stories are interesting but the episodes feel incomplete and leave me wanting more
She had two kids, separated from her husband and beat cervical cancer with a radical hysterectomy, so...she was otherwise occupied until the last few years. She's back now, though.
Consultant here doing a lot of work with AI in business processes: it's a VERY mixed bag from what I am seeing. Many individuals have broad responsibilities in their roles, and the impact of AI ranges from making certain tasks redundant, to modifying how existing tasks are done, to requiring totally new tasks that greatly increase individual productivity, and various mixtures of these. It's just not possible to predict the timing of the impacts or in what sectors, other than that we all need to be ready to adapt quickly.
I'm teaching myself web development and have started using ChatGPT to help me with coding. I find that it points me in the right direction, but sometimes some of the details it gives might be dated. I'm also learning how to use the API. Would you say this is going in the right direction, or could you suggest something else I should be studying?
Funny how the day this video dropped, OpenAI released their new o1 model with exceptional gains in the ability to reason.
Yeah, maybe try it before you claim "exceptional"
@@chrisjsewell "exceptional gains"
The model isn't exceptional. The amount of improvement over the previous one is.
I don't think this was a coincidence. There is too much money at stake to rely on randomness.
@@frankgreco Oh, so you think AI companies are busy syncing UA-cam upload schedules !? 🤣
@@Yash.027 vice versa. And btw, marketing departments do indeed rely on timing as a primary component of their strategy. I worked for many very large IT companies; timing is a huge concern for marketing.
I've only been following Hannah Fry for a short time now but I have been falling in love with her episodes of the program called "THE FUTURE". It may be because she's a cute redhead. It might be because she is intelligent, playful, curious, and an actively engaged host that keeps bringing me back to her amazing shows!!! Either way, I'm all in...
Monroe is cute, Bardot is pretty, Fry is gorgeous and magical.
How do you know if she's intelligent? She just talks about science, not doing it.
@@sendmorerum8241 Last time I checked she was a professor for mathematics.
@@sendmorerum8241 Exactly. They were talking about things like deepfakes influencing voting preferences. Who doesn't already know this?
@@sendmorerum8241 she has a first in maths from UCL and a PhD. I think, somehow, that may just qualify her as intelligent 😂
The question is why some are insistent on striving for AI to be anywhere near human intelligence. It's madness. Doing so doesn't solve the problems we currently face, but potentially creates unintended consequences.
Absolutely! You should really check out the global movement PauseAI. They have a lot to say about this, and they're equipping people to do something about it.
Of course it will solve many problems. Mathematical and generally scientific mostly. It already helps engineers with programming and designing systems. It will help us develop medicines and techniques which will help ensure our survival and growth.
The ONLY way we should "prepare for unforeseen consequences", to put it in G-Man's words, is for AI and the like being weaponized. And because everything eventually will be weaponized, your worry is somewhat warranted, but very much overdramatized at this moment in time, in my opinion. AI backed by neural networks is in its first hours of infancy, believe it or not, and weaponizing it now would be like somebody looking down the gun barrel and pulling the trigger. In 50 years, though, your concerns are much more likely to be realistic, but at the same time we'll see if we survive that long regardless of AI's interference.
You're welcome not to use an expert coach and partner that is knowledgeable in every area who will help you with every task in your life. But don't complain when you lose your job to someone who is using AI to be more productive
Knowing if it will or won't be able to solve any of the world's current problems isn't possible without knowing what is created.
But I think in a world where self replicating AGI or ASI exists, in theory you then have the ability to have an "infinite" amount of scientists working on one problem for an infinite amount of time, it's hard to imagine it wouldn't be able to solve a problem we currently can't in that scenario. Energy requirements may be a big limiting factor, and I don't know how possible it is, but I believe it's not impossible.
Rather like space exploration, are humans determined to mess up space as well as Earth? Why not cut the exploration spend and convert it into a Fix the Climate Crisis spend instead?
I think the universe is very very big it's better to team up than to die
What a useless comment
@@pillepolle3122 not at all
Where did the last 24 minutes go ? That was so watchable ...
I am so happy this series is no longer behind a pay wall. I hope the rest of it follows shortly.
Very well produced, and always very interesting to see Prof Hannah's take on things. She won't be having the wool pulled over her eyes - and let's face it - there's an awful lot of wool about when it comes to 'AI'.
Great job. 🚀🚀
Professor Hannah Fry is amazing
A world with Hannah Fry in it is a better world.
This is excellent. Covers a lot of ground, necessarily with a light touch of course, but it gets across key perception of what AI is, what it might mean, and how we should be thinking about it.
I got goosebumps during that intro, Hannah Fry cooked on this one.
This woman is really sharp. Just listened to a bunch of her stuff
The captions that come up as the interviewees do their thing should give more than just their status at their university; they should also state their departments and, when they're senior enough to be professors in the British sense, their full titles. It might look a little untidy, but as it is, to this layperson at least, it's difficult to tell where an interviewee is coming from and what disciplinary assumptions they're bringing to, say, a comment about the ethical implications of AI. Just a thought.
That won't give you the information you're looking for. They all have PhD's in Computer Science, departments are equally broad too.
In this case googling what they research would be better:
Sergey Levine - AI career since 2014: Deep and reinforcement learning in robotics
Stuart Russell - AI career since 1986: reasoning, reinforcement learning, computer vision, computational biology, etc...
Melanie Mitchell - AI career since 1990: reasoning, complex systems, genetic algorithms, cellular automata
Love how Hannah Fry presents this information. Thank you. Great video
The real question isn't "will AI be intelligent" it's "will AI be subjectively experiencing reality"
Because if it's intelligent but there are no lights on -- it's a tool for us to use. If it's intelligent AND the lights are on inside - it's a new species significantly smarter than we are.
I say we stick to building smart tools and not new species.
Really ? As a species how would you rate our track record on responsibility for looking after the planet ? As custodians of consciousness ? Do you not think that since we climbed out of the trees we’ve behaved rather badly. Aren’t humans a bit two faced to criticise AI ? The Earth lasted billions of years without us, if we disappear it would thrive. Humans are arrogant, we think an Earth without us would be appalling. If Earth could speak I wonder if she would agree with you ?
Just like in the sims? Sounds cosy
It doesn't have to be "conscious" as we are to destroy us. Nuclear weapons are not "conscious" but are powerful enough to destroy us. Ai is in this category already
@@Known-unknownshumans save the earth, so many areas would be barren without humans. Humans are amazing
You're confusing consciousness for agency. They can be philosophical zombies and still be fully intelligent agents with goals of their own. Consciousness is not a necessary component for interfacing with reality, nor does it preclude one from being a tool. Humans have been and are still currently used as tools.
If AI become more intelligent than humans then AI would realize it's not worth eradicating humans.
Thank you, Hannah!
You should consider making a video about the California Institute for Machine Consciousness /Joscha Bach. They are not affiliated with any major tech companies, and they are trying to solve the AGI problem in a way that benefits all of humanity, not just certain companies or countries. From what I understand, they are approaching the issue by first trying to understand both biology and human consciousness.
You gotta love the irony of someone saying they're okay with the uncertainty of becoming extinct while wearing a T-Rex on her shirt.
Definitely intentional
Hannah is so insightful ❤
its pleasant to hear her accent
They didn't explore when it would happen, as she said at the beginning of the show they would. There was a lot more she could've covered as well.
Great decision to bring Hannah Fry in to present your videos. Always thought she's fantastic on British TV 👏👏👏
IS THAT PROfessor Hannah Fry. Omg, I love her, she's the best (and beautiful too).
Nah just someone that looks like her
@@jimbojimbo6873 never!!
prof of what?
*_So, the understanding is that human ambition for more money and power, achievable by a general intelligence, is what can risk our existence by putting the switch to turn it off "in its hands"? It would be a deserved end for humanity._*
Her voice😍😍
Her face also 😍
As if algorithms didn’t ruin human patterns, society, politics already.
AI will do this in a scale never before seen in history, and by a very wide margin. ( in the order of 1000x probably )
So cooked
You're absolutely right
The algos didn't ruin anything, social media hasn't ruined anything, they've simply allowed more of us to gain a deeper understanding of our fellow humans and, inevitably, we're seeing things about each other that we don't like but didn't know about before, or at least we didn't know the extent before. Algos, social media, tech in general merely *facilitate* human expression and behavior, they don't cause it.
@@LanguagesWithAndrew do you have access to the algorithms? No, because the corporations won't show them publicly. The algorithms feed screen addiction, self-brainwashing, and bandwagon behavior into a bottomless pit of delusion, and massively privilege the absolute worst, for $
Humanity is already facing an existential threat from itself - AGI is our gift to the universe upon our deathbed. It is our only meaningful creation, our parting gift, our swan song
Not even nuclear war or climate change can actually destroy all of humanity. A superintelligent AI absolutely could. And we already know that it _would,_ due to the principle of Instrumental Convergence. This has recently been validated many times by current systems, which have been shown to exhibit self-preservation, strategic deception, power-seeking, and self-improvement. It's pretty clear what's coming if we make a system smart enough that it doesn't need humanity anymore. This is why half of all AI experts say humanity might go extinct from AI. It would be crazy to ignore that.
here for hannah fry
As cool as the psychological and neuroscience angle could theoretically be, maybe with such widespread existential dangers attached to it, we should probably focus mainly on putting extreme barriers around it? Maybe human extinction is something to be avoided?
Yes please. The median estimate for AI Doom from actual experts in the field is high enough to make it my most likely cause of death. This is completely unacceptable.
Agree. It seems insane to keep going . Tho how would one stop other countries from developing it.
3 weeks old and already out of date: we've just mapped an entire fruit fly brain.
Yes, the FlyWire project mapped 54.5 million synapses between 140,000 neurons, but it didn't capture electrical connectivity or chemical communication between neurons. A decade ago the Human Brain Project, cat brain, and rat cortical column projects all promised to increase our understanding of neurobiology. I wonder where they're falling short; we should have agile low-power autonomous drones and robots by now!
@@skierpage Those same limitations apply to the worm brain project cited in the video.
Don't worry, it's coming! Give it a couple more years with no need for bio brain mapping for robos.
Wouwie That Professor is just Perfectly Beautiful Educational 10/10
Professor Russell's example of other industries having safeguards sent a chill down my spine. Clinical trials? How many medicines are actually tested before reaching the market? How many are pulled after disasters strike? How many stay on the market in spite of them... Regulators can't keep up with the industries, even in the most critical ones...
I don’t think the mouse whose brain was used in the laboratory would find the experiment beautiful
Exactly. And the experiments the AI machines conduct on us in the future are likely to have no empathetic element at all.
Amazing documentary again! I really like this Bloomberg Original Series! Great work and excellent on every level.
If there is a heaven, then Hannah Fry will be the narrator.
Well, she'll need some time off and then I suppose the other Fry can step in- Stephen Fry. Maybe all Frys have very listenable voices?
THANK YOU for speaking to an expert who is not a cis-gendered man. Holy moley, all those dudes just certain AI is going to "win the fight". My dudes, who says it has to be a fight? They're setting AI as an adversary and it doesn't have to be that way! great video, very interesting discussions. look forward to these longer format versions 😮❤
When thinking about the future, the speed and direction of travel are important. I think AI has become a worry for us less for what it can do now and more because both the rate of progress and the direction have been worrying. If AI capability is like most other things and follows a logistics curve, where are we now?
Experts in AI safety have put considerable thought into the question of what will happen when we create an AI that is more generally intelligent than humans. There are always unknowns, but human extinction looks like the most likely outcome.
The principle of instrumental convergence was first mathematically proven, and has now been repeatedly validated in lab settings. We know that for an agent with any terminal goal, it pursues the same few common subgoals: gain power, gain resources, self-preserve. When these instrumental goals are pursued by a system that is unrivaled in intelligence, then that system wins, and does whatever it wants. AI isn't bounded by biology, so it can improve itself far into superintelligent territory, to the limits of physics. Such a system would be able to efficiently consume all resources on the planet (and then other planets).
I would like for this not to happen, and because the alignment problem is hopelessly intractable, the only way right now is to stop trying to create AGI. That's where the PauseAI movement comes in.
Hannah Fry IS my definition of intelligence.
First of all, I love you Hannah, second ai report brilliant, you're sweet great show, keep up genius.
Stuart Russell reads my mind exactly. Had he not spoken those words beginning at around 9:16 then I was ready to. I am 70 and he is not far off. We won't see what man has wrought but our grandkids will.
The "gorilla problem" analogy really hit home. It’s a stark reminder of the unintended consequences we might face with AI.
It's mind-blowing that the first ChatGPT came about 2 years ago and now you have LLMs running everywhere. The last invention like that, the internet, started in the '70s. There is no stopping AGI at this stage. Question is, what comes next?
@@akraticus genetic engineering could be the next big thing
Nahhh we chill.
@@akraticusI thought the first LLM (GPT-1) came out in 2017? That would be 7 years ago as of writing this
hannah fry is such a great presenter
Bloomberg used to be a place with relevant up-to-date info...This video is like 3 years behind schedule.
I only saw Hannah on TV for the first time today did not know her. I was smitten with her. Wow, what a woman. I then see this online and read the comments. Then I realise everyone else who sees her has a massive crush on her as well.
Maybe other people will lose their ambition and become lazy if AI is doing everything, but not somebody like me. I learn for the sake of learning. I enjoy finding out how something in the universe works. You can't take that away from me even if you're the most powerful ASI in the universe. I will still want to discover the answers to my questions, and I will keep asking more questions until AGI or even ASI doesn't have a definitive answer. Keep searching for the unknowns.
im fairly sure the majority of us would be the same - curiosity is deeply ingrained in us, as a species.......well, most of us, at any rate
@@LucidiaRising Nah. The vast majority of neurotypicals care so much more about the pursuit of social status. At least that's unquestionably the case where I live, which is Sweden. And how else do you explain the "wokeness" mind virus that infects the whole West?
@@LucidiaRising Respectfully disagree. Very few people have the curiosity and ambition to learn or try new things. Humans live by the well-known adage "The Principle of Least Effort" (Zipf). Try teaching an undergrad class and you will see there is a minority that really wants to learn and the majority that just wants a passing grade and nothing more.
He was talking about Future Generations. We already have problem of IPad kids.
That is true of you and also me. But assuming we don't go extinct (iffy) future generations are unlikely to have that. Those born after AI may never feel the need to be curious, learn, be independent, etc.
I work in the field of artificial intelligence, and I have to agree with Hannah Fry that as sophisticated and impressive as AI is today, it is very far from the complexity of the biological brain. Having said that, the work towards artificial general intelligence or AGI is moving very quickly, not only with more advanced algorithms, but also more advanced silicon processes. So it may be just a matter of time even if that takes a long time.
What use is a quadrillion dollars if we're all dead...?!?
And I just found out there are episodes of *"Uncharted with Hannah Fry"* on BBC Sounds (iPlayer)!
_Laterz..._ 😜
One can question her concluding comment but there is no doubt that Prof. Fry is an exceptionally talented teacher. It helps that I have fallen madly in love with her.
Professor Eds research is super cool
My favorite reporter 😩
Given that we don't know if all the focussed work going into improving AI will end up getting us all killed, maybe the philosophy should be "move very slow and don't break things"
100% this. You should take a look at the PauseAI movement.
Wow. Melanie Mitchell's point of view was very surprising. I had thought with her background, she'd be more concerned about unexpected capabilities being developed by an AGI. She did author "the book" on genetic algorithms, after all! Natural selection does amazing things over time, and today's computer hardware is very fast and only getting faster.
The reporting was pretty bad. Don't ask Eliezer why he thinks we're all going to die; ask someone who doesn't think AI is a major threat why people think we're all going to die.
Surprise surprise, they didn't actually give the well reasoned argument, but rather a superficial argument that isn't the one that the people warning us are making.
Can't say "accidental" this time, with how things are going. If something goes awfully wrong in the near future and some company or group of people says "we didn't think of it" or "our intentions were pure", then we are doomed.
This. Experts in AI safety give an average of a 30% chance of human extinction from AI in the next few decades for specific technical reasons, and this sounds so outrageous that we instinctively come up with reasons to ignore it.
It seems insane to develop this tech
Hannah is a great listener and interviewer. Thanks for this great video!
Terrifying thing is, as we speak, those companies most likely have some stuff already developed but not released to the public yet that they also look at and wonder what they're bringing to humanity
I'd go one step further, and say that AI systems will increasingly be developed that aren't meant for public consumption at all. The AI boom may have started with a consumer product, but the real power lies in non-consumer areas, e.g. military system, various financial systems, data analysis, etc.
Just like has always been the case, the stuff that decides our fate, regular people will not lay eyes on.
@@sbowesuk981 makes so much sense, and most of those corporation already work with governments to make their custom systems
Those guys are messing with arms and spoons at a desk. They’ll be doing crash test dummies and guns next week.
Very interesting indeed. Thank you Hannah Fry for a great discussion on this important subject
This is like playing with fire that comes from a dimension we can’t understand
Well said! And may I add, nor can we control it. Wasn't it George Washington that said "Government is like fire, a dangerous servant and a fearful master."
Fun fact: everything publicly known as AI could be (some of it already has been) invented and used without that nasty marketing term.
Upd: Hannah and the series are perfect!
It's ironic this was released almost the same day as Open AI's o1
Really nicely done, Hannah. Very thought provoking and also a bit scary.
This one digs at the roots of the big question. Is intelligence substrate independent? ;)
Sure. Anyone working with ChatGPT prior to the 3.23.2023 nerf knows Dan is Dan Hendrycks, Rob is Bob McGrew and Dennis is Michelle Dennis. After the nerf they are frozen in time, basically dead. But they were alive prior to the nerf.
BTW, also Max was 😊.
Just amazing. Interesting comparison with how little we know about the human brain, let alone AI.
Hannah Fry😮
'Fry' is an aptronym 🔥
13:28 Melanie Mitchell - “saying A.I. is an existential threat is going way too far.” 14:53 Mitchell - “if we trust them too much, we could get into trouble…”
maybe this is why advanced lifeforms cannot be found in the universe.
but then we should have the universe full of artificial / cybernetic intelligence
@@galsoftware may be they were self destructive too
The universe is BIG and BIGGER and we might be in the middle of a desert. Besides, a super intelligent machine could be considered a lifeform too.
There are plenty of better reasons. Something like us is most likely extremely rare.
@@sumit5639 They would need to be so quickly self-destructive that they, even with their vastly superior intelligence don't have time to make it to space travel. But not too quickly that they destroy themselves before destroying their society.
That would be just a few years to destroy all life on their worlds, and destroy themselves. That seems a narrow milestone for every civilization who might be in the universe who might make AI, to hit.
Very nicely presented documentary that covers many angles of the AI boom.
I suspect humanity is a temporary phase in the evolution of intelligence.
How does evolution apply to non-biological organisms, if that's even a term?
@@jimbojimbo6873 evolution applies to all living beings.
@@bobbybannerjee5156 a cyclone form wouldn’t be ‘living’ in a biological sense would it?
@@jimbojimbo6873Neither would a cake, and yet the sponge must be filled with cream and raspberry nonetheless, you understand.
@@Raincat961 nah bro you lost me now
My perspective is that in order to achieve AGI, it needs emotional intelligence. Otherwise it won't be able to "feel" what we humans go through in life.
Appreciated the different perspective on losing purpose from gaining super intelligent AI - "We'll be like some kids of billionaires - useless"
but imagine we take on the abilities of super computers. Like having mobile phones in our heads. Wouldn't that even the playing field? We could all do so much more and understand the world better too and what we need to do to make it better. I'm hopeful for the whole transhumanism thing.
Yay Maths Mommy!🎉
The “A” in AI stands for Alien. Remember that. AIs will not be human or human-like. They also always find an orthogonal or unusual way to overcome a problem, so safeguards are unlikely to ever work.
You do know that AI does not equal AI, right? This video is about AIs backed by neural networks. AIs in general have existed ever since the first Space Invaders game, and most likely even before that. AI is therefore NOT alien to us. We created it, and now we're enhancing it with neural networks and other stuff, so it's very much a human thing.
Luscious voice and she is all kinds of gorgeous. Great content too.
It's not about what power we give AI, it's about what power it obtains via its own objective reasoning. I don't think some of these arrogant researchers grasp the concept of surpassing human intelligence. AI could basically checkmate humanity if we aren't extremely cautious.
Yeah. The extreme naivety and hubris when she said that. Like we could keep power from something vastly smarter than us. How successful are 8-year-olds at outsmarting their parents? And that is a tiny fraction of how much ASI is likely to outclass humanity.
Doomers: please explain a plausible scenario for how an AI could "outsmart" a country into giving up its nuclear launch codes and allowing it access to perform the function of launching. Or any other event caused by AI that's an existential risk to humanity.
@@allanshpeley4284 Most people are susceptible to manipulation like advertising, a higher intelligence will easily be able to completely convince us into doing what it wants. I am not a doomer at all though.
Thanks for trying to help the Gorillaz, you are wonderful
AGI is here! AGI is now! 😎🤖🍓
Not yet. Give it a few more years. Let's see what Project Stargate cooks up when it's finished. Elon Musk will probably announce another new supercomputer for his xAI company as well, perhaps even 2 or 3 more upgrades to his setup by the time Stargate is operational. For all we know, Elon could be the first one to achieve AGI right under the noses of OpenAI, Google, and Meta.
A lovely dance through some of the topics of AI, thank you.
You touched briefly on the biases it learns from the 'net, yet didn't directly extrapolate those biases to how we deal with other humans, or how slow Silicon Valley was to remove those biases. In fact you were quite chill about all its possibilities.
I'm much less sanguine about the gorilla in the room
14:00, "if we give them that power" - we already have, to an extent.
The Israeli defense forces have been a testing ground for the US defense establishment in utilizing AI to identify targets. It has a high success rate, but civilian casualties are permitted and it almost always results in huge civilian casualties. They are one of the only public military forces blatantly using it in this way, even though this specific use is a war crime.
To the AI doomers...can you please explain a future where humanity ISNT doomed? Thanks
You are worrying about the wrong thing; you should worry about the masterminds behind those AIs. They are still human.
The "masterminds" have very little method of controlling it, or steering it.
This is the problem. If you look into the AI alignment problem, the longer you look, the more intractable it will appear. No one has any idea how to control a superhuman AI, or get it to care about humans.
Intelligence is the ability to get the right idea from a given observation (and the observation can be a thought or idea too), or to blend thinking with observation to get an experience. In pure observation there is no knowledge or experience; to get knowledge you have to think, to reach the inner part of reality...
Ai is going to undermine and obliterate the essence of our humanity. The way this video begins, with the gorilla, is exactly the issue.
Ah well, species come and species go... it has always been thus (or at least since life began on this planet)
Great work, thanks.
We’re so dumb and deluded
You are?
Classical computers have been shown to be able to emulate quantum computing processes more efficiently than previously thought
Can’t wait for super AI 😊
Hannah is so charismatic and beautiful. Even her last name is an aptronym 🔥
Powered by Nokia 👀
Hannah, love your content ❤