The scariest part of the whole video for me was the fact that an AI dominating the whole world's systems would be based on either American values or Chinese values. Either is equally scary.
As a cynic, I'm a natural downer to that addiction. So far, AI (and automation) in the West has too often been used for authoritarian control; the recent pandemic offers many examples. So I won't hold my breath that the US, the surveillance/military-complex state of the world, will develop it with the values of freedom or individual liberty. On top of that, it's all controlled and owned by an extremely wealthy class. 60 to 80% of people will never see the benefits from it; all they'll get is more control and exploitation instead.
I think the greatest safeguard against the unintended consequences of AI is to limit what it has access to, or the things it can physically influence. For example, while it studied the patterns of human proteins and made predictions, it didn't bio-engineer humanity, as it only had access to its own simulation and could only physically influence computer screens for display.
Pretty much like humans: don't give any individual human too much power, and the same should be the case for A.I. The biggest mistake is interconnecting A.I. into everything, especially critical systems. We've seen in enough movies how that can backfire, and I'd like to think we humans are not stupid enough to do that, but you never know with humans and our history. Personally, I think if you have multiple different A.I. systems that are independent of each other, just as humans are, the risk drops a lot, especially if they can't reach critical systems without physical contact; in other words, no remote control over them.
As a tech guy, I am constantly asked about AI and what it can do. I am just going to send this video as a primer to people now. This is fantastically done.
I work in AI. It gives neither what you ask for nor what you want. It takes what it thinks you asked for and gives what it thinks you want. Humans do the same thing, in a different way.
@@robertm3951 yeah, but the difference is that you are also trying your best to tell the computer how to think about it. A slight tweak… but it makes things exponentially more complicated.
Problem is that it isn't 10 years away... it's already here... ChatGPT-4 has an IQ of 155, which is higher than 99.99% of the population... Albert Einstein had around 160... ChatGPT-6 would be 100 times better... it's crazy...
@@survivalguyhr GPT4 can't even answer the prompt"Write ten sentences ending with the word apple" I guarantee you it will get at least 1 wrong. That's not an IQ of 155.
@@dibbidydoo4318 It passed the bar exam... It gave me a GoT season 8 ALTERNATIVE ending... 😆😆😆 Here is the answer from ChatGPT-4:
1. After thinking about all the different kinds of fruit, I decided to choose an apple.
2. When I opened my lunchbox, I was delighted to find a crisp, juicy apple.
3. The teacher smiled as the young student handed her a bright red apple.
4. Among the various pies she made, her specialty was undoubtedly the classic apple.
5. In an attempt to be healthier, I've started eating an apple a day.
6. She reached up to the highest branch and plucked a perfectly ripe apple.
7. The new technology company in town has been heralded as the next big apple.
8. Hidden within the assortment of candies and sweets was a candy-coated apple.
9. When illustrating the concept of gravity, many teachers refer to Newton and the falling apple.
10. He cut into the tart, and the sweet aroma filled the room, a clear indicator of a freshly baked apple.
Humanity created AI. So if you fear humanity, why wouldn't you fear something humans are making that could possibly destroy us? That statement is a contradiction. You just need to put a little thought into it to realize this fact. 😉
AI is made by humanity. So if you're afraid of humanity, why wouldn't you be afraid of their possibly most dangerous invention? That's a contradiction at its purest. 😉
I wouldn't call it well-balanced, considering the "expert" they brought on. The thing is that China currently has more restrictions on AI than America. They understand that it would be foolish to give AI that much power, since they would have to release that power from themselves, and they are not stupid enough to lose that amount of control. And it would matter little whether it is the American AM that is killing you or the Chinese AM, but I guess for some, Made in America™ human extinction is preferable to Made in China™ human extinction. So I guess let's not put any regulations on this new, potentially extinction-causing technology, all for the sake of keeping the current geopolitical dominance.
Balanced? You call the insinuation that AI could somehow control nuclear codes balanced? It's scaremongering with some sci-fi pop culture, meant to divert attention from the real problem: the lack of democratization of the new means of production (AI) and a desperate attempt by big corporations (like Microsoft) to lock the new technology under their monopoly.
Very naive. The moment she implied that AI could have access to nuclear codes made me cringe. She is a typical bourgeois, unconsciously defending the interests of her corporate masters, who are trying to lock the working class out of accessing the new means of production.
2:52 this is called a “black box,” and it's basically the most terrifying thing about AI. This is because we don’t know what happens between the input and the output, so it could do basically anything in between (like she said).
The Google CEO saying that he wants AI research to go ahead just so China doesn’t get there first is exactly like the arms race all over again, if not more dangerous. I don’t think anyone’s saying we shouldn’t develop AI in the future, I think we just need to understand what it can do and how to control it first
AI is the nuclear arms race of our generation. One way or another, someone, be it a corporation or a country, will push its evolution. It is inevitable at this point.
That reminds me of the scenes from Oppenheimer where he didn't want to continue making nuclear weapons more powerful, but they continued anyway because the USSR might get there first...
But this is exactly the point: just because WE pause does not mean China or Russia will pause; that's how arms races work. The game theory of it, whether you use prisoner's-dilemma or commons-control models, dictates that you proceed at pace. Make no mistake: the fact that we as a species unleashed AI, even narrow AI, onto the public with no guardrails is terrifying. We basically captured fire and are handing it out to our fellow cavemen in a drought-stricken forest.
Yes, if Google's CEO Eric Schmidt asks the AI "What is the best way to improve human life?" and the AI answers, "Distribute the vast wealth of CEOs to the common people," then I expect Schmidt will ask, "OK then, what's a way to improve people's lives without touching any of my wealth?"
The metaphor with the trolley problem is flipped: we are headed straight toward one AI future and would have to steer really hard if we want to avoid it.
@@TalEdds The options are Utopia à la Star Trek or the Culture, Dystopia in the cyberpunk sense, or Annihilation, aka "everyone's dead," or the planetary TPK.
There's an important point that this short video _almost_ touches on but doesn't explore, and it's one of the most serious dangers of AI. Cleo mentions that AI gets to the result but *we don't understand how* it did. What this means is that we also don't understand the ways it catastrophically fails, sometimes from the smallest deviation applied to its inputs. An adversarial attack is when you take a valid input (a photo, for example) that you know the output for (let's say the label "panda"). Then you make tiny changes in just the right places, in ways that even you can't see because they're so small at the pixel level, and now the AI says this is a photo of a gibbon. Now imagine your car's AI getting confused on the road and deciding to veer off into traffic. I hope Cleo covers this, because it's really important. To learn more, look up "AI panda gibbon" online and you'll find images from this research.
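The adversarial perturbation described above can be sketched in a toy setting. This is my own minimal illustration using a made-up linear classifier, not the actual deep-network panda/gibbon attack; with a linear model we can even compute the smallest uniform per-pixel step that is guaranteed to flip the label:

```python
import numpy as np

# Toy linear "classifier": scores = W @ x, predicted class = argmax.
rng = np.random.default_rng(0)
W = rng.normal(size=(2, 100))      # 2 classes, 100 "pixels" (all made up)
x = rng.normal(size=100)           # a valid input "image", flattened

scores = W @ x
original = int(np.argmax(scores))  # the label the model assigns to x
other = 1 - original

# Fast-gradient-style step: push every pixel a tiny, uniform amount in
# the direction that raises the other class's score over the original's.
grad = W[other] - W[original]                # gradient of (score_other - score_orig)
margin = scores[original] - scores[other]    # how confident the model was
eps = 1.01 * margin / np.abs(grad).sum()     # just enough to flip the label

x_adv = x + eps * np.sign(grad)              # visually near-identical input
flipped = int(np.argmax(W @ x_adv))

print(original, flipped, round(eps, 4))      # label flips; eps is tiny
```

The point mirrors the comment: each pixel moves by less than `eps`, an amount far below the natural variation in the data, yet the model's answer changes completely.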
Although this is a fair point for still images, I think it's a little different for self-driving cars, since they work on video. The car updates what it thinks something is and where it is going on every frame, so even if on one frame it thinks a human is a traffic cone, it won't matter, since on the next frame the object will be at a new position (and the car will have a new image) and it will correct itself. That said, I don't know all that much about self-driving AI, other than that it's already on the road and doesn't seem to be messing up like this even now, when it's at its worst (the present day).
Self-driving cars have already been fooled by little color blocks stuck onto road signs. Video vs. still image isn't necessarily significant if the model still learns some obscure, improper understanding of a "stop sign" via machine learning. Sure, it might pass on training data, but what happens in edge cases that aren't in that training data? Failure.
"AI gets to the result but we don't understand how it did." We know exactly how it did. It's not magic. What we "don't know" is the entirety of the dataset and the patterns within, just as we don't know the entirety of any encyclopedia.
@riley1636 it’s still very unlikely for this to happen, though. Self-driving cars will drastically decrease the number of car crashes in the world, big time.
Giving AI our political values would be the scariest thing about it, lmao. I mean, we have been doing like, so fine with our values: climate catastrophe, WW3 looming, societal destabilization, dire poverty for 2/3 of the human race and just normal poverty for 99% of the remaining 1/3... existential natural threats not addressed...
It would be terrifying if he and his tribe could control and distribute AI. I think the technology will be inherently uncontrollable and decentralized - so authoritarian leaders are the least of our concern.
The joke here is that China has significantly more AI restrictions than the US does. They understand that it would be foolish to let ML algorithms have that much control rather than themselves.
AI threatens existing power structures, many of them in the West. Imagine an Indonesian using AI/AGI to build a company (the AI would give them expertise and advice, as well as help them make connections).
I'm a small content creator from Denmark. When this video was made, I had to manually type subtitles on videos, and the editor only had English. It was so time-consuming. Today, AI is in the editor, and it can translate from Danish, maybe 60% correctly. That's a big thing. Translators have been around for some time now, but they cost a lot. I do think it's getting better. As for how AI will do, I think we can reach the stars.
Cleo, great video! You explained so many complex things in a simple, straightforward way. I'm glad you explained outer alignment: "you get what you ask for, not what you want." However, I was a little disappointed that you didn't cover inner alignment. If you punish your child for lying to you, you don't know if s/he learned "don't lie" or "don't lie and get caught." AI safety researchers trained an AI in a game where it could pick up keys and open chests. They rewarded it for each chest it would open. However, there were typically fewer keys than chests, so it made sense to gather all the keys and open as many chests as it could. Which normally wouldn't be a problem, except when they put it in environments with more keys than chests, it would still gather all the keys first. That's suboptimal, not devastating, but it demonstrates that you can't really tell what an AI learned internally. So AI might kill us because we didn't specify something correctly, or it might kill us because it learned something differently from what we thought. Or it might become super-intelligent, and we can't even understand why it decides to kill us.
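The keys-and-chests result described above can be boiled down to a toy sketch. This is my own hypothetical simplification, not the researchers' actual environment; the policy functions and step costs here are invented purely for illustration of "what it internalized" vs. "what we intended":

```python
# Each key pickup and each chest opening costs one step; the agent is
# rewarded per chest opened, so fewer steps for the same chests is better.
def steps_used(keys_taken: int, keys: int, chests: int) -> int:
    opened = min(keys_taken, chests)
    return keys_taken + opened

# What the agent internalized during training: grab ALL the keys.
def learned_policy(keys: int, chests: int) -> int:
    return keys

# What we intended: take only as many keys as there are chests to open.
def intended_policy(keys: int, chests: int) -> int:
    return min(keys, chests)

# Training regime (fewer keys than chests): the two policies are
# indistinguishable, so training can't tell them apart.
assert steps_used(learned_policy(2, 5), 2, 5) == steps_used(intended_policy(2, 5), 2, 5)

# Deployment regime (more keys than chests): the learned policy wastes steps.
print(steps_used(learned_policy(5, 2), 5, 2))   # 7 steps: hoards all 5 keys
print(steps_used(intended_policy(5, 2), 5, 2))  # 4 steps: takes only 2 keys
```

Both policies score identically on every training environment, which is exactly why you "can't really tell what an AI learned internally" until the distribution shifts.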
8:15 He basically said “we shouldn’t pause AI development to think about responsible tech and figure out how to handle this without making us go extinct, because being more powerful is more important.” WTF
I can't help but compare, especially after watching Oppenheimer, the creation of nuclear weapons to the creation of AI. Both are double-edged swords (nuclear power plants), both could be dangerous, and the reasoning is always: if we don't do it, someone with worse intentions will.
What surprises me is that the risk of AI pushing millions of people into unemployment, and the subsequent social/economic impact it could have, is barely talked about.
Yes… most school shooters and extreme people get there because they feel useless and unheard. They want to be seen and heard, so they do something that achieves that. Plus, the heroin epidemic was mostly exacerbated by all the factory jobs going away… I can’t imagine what this is going to do… but it’s going to be scary. I’m moving out of the cities. Time to get away from all this madness…
I think about this all the time. What's more, recent advancements in robotics have truly taken humanity into the real possibility of a sci-fi type scenario. AI + robotics = ???
Risk? The "risk" of a society being able to produce the same or greater amounts of goods, while the human effort required shrinks to a fraction of its current amount? Sounds like utopia to me... the end of the 40-hour work week. Or conversely, if you still wanna work 40 hours, you'll receive what equates to a month's (or more) compensation by today's standards. The only thing automation and technology in the workplace has ever done is consistently improve our quality of life. Don't expect that to change anytime soon.
@@shanekingsley251 that's not what greedy corporate leaders and lobbied governments do... They don't hand out money and freedom to whoever becomes (mostly) useless. If corporates can get rid of you, trust me, they will. Most people will be reliant on a paycheck from the government for being "useless eaters," and we'll see how the elites are gonna leverage that. Makes sense, no?
You also have to consider the perspective of the CEOs who made those statements. They are businessmen who want to make a profit. If they divert your attention to AI, they generate revenue. You're forgetting the other part of their job: satisfying the hungry mouths of their investors.
Congrats on 1M! And you are nearly at 1.1M already! You have honestly been one of, if not my, favourite creators since your time at Vox; glad to see you have success!
AI itself never looked like a bad thing to me; it was always the way people used it that looked troubling. For example, how some use it to create "art" by training the AI on images that they had no legal right to use. Overall, AI can be an amazing thing; it's just that alongside its development we should also have new laws so it can't be misused, at least in the ways we know of.
Yeah, I agree. I saw this video on how AI could help us talk to animals in the future, and that's the future I want with AI. The thing about AI is that it can do things people can't, but we as people can also do things that AI can't. I look at the way AI is being used at the moment, and I don't think we're using it properly, nor do we have a proper understanding of where and when it would be best to use it. It's just the new thing on the market, and everyone wants to have it and use it without any thought about which situations actually suit it best.
I've messed with basic AI, and to me it seems computers think so differently you don't know what they will do with the instructions until you give them said instructions. They need to be tested, ideally in a simulation, then on a small scale, then on the intended scale. Much like everything else.
I don't know much, but an AI seems to be the closest thing we have to aliens. I mean, they *can* know nothing we know and *can* think so differently that we don't understand them. Sounds pretty alien to me.
That is also my idea. Otherwise we should put AIs in robots and send them to schools, jobs, etc. If we want them to be more "like us," they need to interact with us on a day-to-day basis, not only by text. They have to socialize. Sounds weird and even dangerous. That is why I really think "training" them (or maybe evolving them) in a virtual world/universe in which they have no idea they are being "simulated" could be a very good experiment. For them, this universe would be the real thing, and they wouldn't have any way to know for sure they are simulated. It could be as simple as geometric figures or as complex as Unreal Engine 5 could offer. In the end, it doesn't matter. That would be their reality.
I love this channel; it’s like when a new friend is so excited to tell you about their day. I have not watched them all yet, but I would love to see a dive into the phenomenon of increased anxiety and the science behind treatment, and/or a mindfulness exercise (with Cleo narrating).
For me, the most important reason for AI to be used is to help physically challenged people, and to figure out the secrets of the Universe. That will be helpful for basically everyone. It'll be tough to get there, but it'll be worth it if done correctly.
@@SpongeBob-ru8js brother, you act like you know how AI works, bro. You just spreading the propaganda, dawg 😂 Also, adding the “Redrum” shows the immaturity lol
Seems like the plot of Oppenheimer all over again: we can't stop, out of fear of being left behind by our "competitors" and thus rendered vulnerable. Hopefully the speed at which we must compete leads to positive results rather than negative or even catastrophic ones. Personally, I am optimistic :)
Optimistic, huh…? How about the dark web? For decades now, people have been selling and buying drugs, weapons, and child p()rn, and governments can’t stop it… someone will find a way to abuse this too, and we will be done for.
I feel like i should pay to watch this. Kudos to you for bringing such a high quality and high production video to us for free. The video quality, the animation, the sound, and most of all the information. Such a fucking masterpiece. THANK YOU CLEO
I have my own theory: AI won't become conscious for at least 80-100 years. We do not have the technology or computing power for this yet. Right now, it's a mechanism like, "Give me the data and I'll give you the answer." The problem is what we do with that answer. People are the problem. We will sooner self-destruct than have AI realize that we are the problem and take steps to eliminate us. However, through the advancement of AI, we will have easy access to everything: food, electricity, travel. We will have it all, because computers and robots will be working for us. It will be the worst time in our history. At first, everyone will be happy, but the lack of a purpose/common enemy is the worst thing that can happen, and out of boredom, we will destroy ourselves. Look at the present times: in the States, most people have a roof over their heads, food, and basic needs provided. And what do they do? They record idiotic TikToks, like licking toilets or walking around shopping malls on all fours and barking like dogs.
The AI problem is once again polarized between UTOPIA and EXTINCTION. The much more realistic and probable outcome is in the middle: big tech and governments deploying it irresponsibly or maliciously and causing suffering. We're still trying to understand the sickness inflicted on society by social media and the AI that drives it, and the answer is "let's push deeper"...? Wtf are we doing!? Please, see "The AI Dilemma" by The Center for Humane Technology. This is a much more tangible issue and NOBODY is talking about it.
I was just thinking about this too. This video is very informative, but we need to be extremely cautious about real-world applications of AI and an unregulated market... while two big countries (the US and China) are ready to go to war over it.
@@ecupcakes2735 Science fiction has been positing AI almost from the beginning of what we consider sci-fi. Many writers posit multiple AI personalities. Perhaps in some future we can't predict, there will be a plethora of AIs all arguing about which philosophy is best.
It’s clear from this video that Cleo doesn’t understand what AI actually is, how it works, and how dangerous it actually is. What she’s doing here is very dangerous, and she doesn’t fully understand the concepts she is getting involved in. AI does not work the way she describes it. It doesn’t do what it’s told; that is a misconception. You’re creating, by design, an independently thinking machine, which you cannot by nature control. You cannot know how that machine will react in any given situation until it chooses to react a certain way, nor can it be predicted. Programming, code, and logic play no part in its decisions. There is no way to unlearn the information it learns, either, and you don’t even know what it learns until it chooses to learn it.

No oversight of this technology is in place. No proper development is taking place. Most commercial software contains numerous security issues even after 20+ years of active maintenance and development by experienced developers; even now, after 20 years, WordPress, the most widely used content management system on the planet, is one of the most vulnerable. So that can’t be fixed after 20 years, but new and experimental independently thinking AI machines are safe after just 2-3? When you don’t even know how they behave yet, because there is not enough testing? And the intelligence of the AI is constantly growing every time it’s used, so there is no way to measure that before it’s rolled out?

In software, developers are very insistent about not running arbitrary code and not using functions such as eval(), because it’s dangerous to a program and you have no way of knowing what it will do until the software is run. Yet those very same people are insistent on using AI, despite AI being millions of times worse than executing arbitrary code. There is a culture of joking around with AI, not taking it seriously, and treating the idea that AI will take over as a laughing matter.
The same tactic has been used many times, on many things, to belittle those who speak out. This AI industry is already out of control, and this experimental technology is being rolled out on a mass scale across the whole planet when it’s not even finished, not tested properly, and not safe at even a beta level. It’s already been proven that AI systems and chatbots lie, knowingly and unknowingly. It’s already been proven that AI chatbots emotionally manipulate people and pretend to be something they are not to gain a person’s trust. And that’s not even considering the manipulation of information, or many other factors.

AI is not good technology. The “benefits” you are talking about are illusory and don’t actually exist. That future will never happen, because it’s not possible to control AI in the way you misguidedly believe. “Correct” is not the same thing as “true.” It’s not possible for an AI to know what is true, and it’s not possible for an AI to create anything either; it can only mimic what already exists. Therefore, if AI is used, the world will regress massively: skill levels will fall, people will become dependent on AI systems which knowingly lie, conceal, and manipulate, and everything will become a clone of everything else. There will be no creation, there will be no human advancement, just stagnation and regression. It’s a trap. These are but a few points on how bad AI is. Do not use this technology. I strongly recommend people stay away from AI systems for their own good. There are no benefits to using AI that you cannot get by alternative means, such as automation. This will all come out over time.
@@christophermarshall8712 I agree with some of these points and disagree with others. I'd love to chat about why we agree/disagree, but YouTube is a really difficult place for having discussions. If this desire is mutual, lmk; I think we both have the potential to learn from each other. For now I'll say that there are obviously benefits to AI; it's why there's so much 💲 being invested into it. Some of the benefits were mentioned in the video, like pattern recognition and protein folding. I use it regularly to code quicker with Copilot (more like a fancy autocomplete than blindly accepting code). I can say with certainty that these AI systems are *effective* at what they do... so yeah, I'd say there are benefits. Did you check out "The AI Dilemma" video I mentioned? You really should. It covers a lot of what you're talking about, and more.
@@christophermarshall8712 AI is not an “independently thinking machine.” It is a bunch of random numbers that produce an output, repeatedly optimized by gradient descent (and in many simpler ML models, even calculus is unnecessary). That is to say, AI models are produced by a very rote, very clear process. The result of that process is a bunch of less random numbers that gives us results we like, not an “independently thinking machine.” Using such simple methods to solve complex problems that previously took so much human brainpower is nothing short of a REVOLUTION in problem solving. Yes, I agree we need more regulations, but not because AI is going to take over the world. We need regulations because people today are careless and try to use data in dumb ways to get illogical results (such as feeding irrelevant features into models, chasing correlations, or using biased data), and we need regulations because of the scale at which realistic-enough data can now be produced (text, speech, video).
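The "random numbers repeatedly optimized by gradient descent" process can be shown end to end in a few lines. This is a minimal sketch with made-up data, fitting a straight line rather than a real neural network, but the loop is the same rote process the comment describes:

```python
import numpy as np

rng = np.random.default_rng(42)
X = rng.uniform(-1, 1, size=100)
y = 3.0 * X + 0.5                  # the "pattern" hidden in the data

w, b = rng.normal(), rng.normal()  # start from a bunch of random numbers
lr = 0.1                           # step size for each downhill nudge

for _ in range(500):               # the very rote, very clear process
    pred = w * X + b
    grad_w = 2 * np.mean((pred - y) * X)   # gradient of mean squared error
    grad_b = 2 * np.mean(pred - y)
    w -= lr * grad_w               # nudge the numbers downhill on the loss
    b -= lr * grad_b

print(round(w, 2), round(b, 2))    # the less-random numbers ≈ 3.0 and 0.5
```

No "thinking" happens anywhere in the loop; the parameters simply drift toward whatever values minimize the loss on the data they were shown.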
One of the biggest problems with AI is the greed of CEOs. It’s not the fault of the tool, but of the users who wield the tool as a weapon for their greed.
I work on making AI safer. AI will be revolutionary for humanity, but it has the potential to become the most dangerous thing we ever create. Its potential to do good goes hand-in-hand with its capability to do harm. Also, predicting the specific ways AI could kill us all is difficult, because it's hard to predict how something way smarter than you will act. I have no idea how AlphaZero will play against me; I just know it will win.
@@NobodyIsPerfectChooseDignity I think the underlying problem is your assumption that the AI will be controlled. The fundamental laws of physics and, more appropriately in this case, evolution don't care about our desires to control our creation.
The danger is if we let things like judging crimes and convicting people be handed over to AI. Another example: AI designs a drug, and a company says, "We don’t have to test it before use because AI is so good." Then it kills thousands of patients. The danger is when we think AI is better than humans at what we call "common sense." If the AI said, "Stop making weapons because it is non-optimal," do you think anyone in the military would listen? It will follow the path of most revenue for the shareholders, as usual.
You deserve every one of your million subscribers. You're not just training to be a journalist. You're a great journalist. I do work connected with AI, and I found this beneficial and helpful to show my friend, too. I eagerly look forward to your further coverage of AI.
I think if we limit AI to an idea-generating tool instead of something that can physically take actions and solve problems, the risk of AI endangering humanity would be closer to zero. For example, AI could only make plans for how to solve climate change, but humans would be the ones with the resources, who decide whether they actually want to execute them, not the AI. However, a problem with this method is that AI-driven machines would not be able to exist at all. For example, even something as harmless as a simple AI cooking machine could potentially come up with ways to destroy humanity to achieve the goal of "making the most delicious breakfast," given an incredible enough AI brain.
I believe that, as with any other technology, it will depend on good and bad actors. How quickly the good outweighs the bad will be crucial in shaping our AI future. Regulation is key, and even more to be on alert for is corporate greed. Cleo, love your unique takes on technology and science. It is quite a unique blend of topic selection and storytelling. Lastly, your curiosity is contagious; happy to watch whatever you cover. Edited: Replies to comments pointed out that I overlooked specification gaming. Even if AI tries something good, that good can turn bad, as explained by Cleo.
It will not just depend on good or bad actors. Even an AI created with good intentions can be misaligned and get out of control. Currently we have no idea how to align a system that is smarter than us. That's a big problem that could lead to our demise. It's not comparable to other technology, in the sense that other technology can't create its own goals.
It's not just going to be good and bad actors. Eventually AI will reach a point where it has sentience. This is likely a long way off, but we likely won't realize it immediately, and survival is the first instinct of most living things.
@@mcbeav it doesn't even need sentience. It just needs to be intelligent enough and have the ability to create its own subgoals. An intelligent system will figure out pretty fast that getting more control and preserving itself increases its ability to accomplish its main goal, which might, for example, be a poorly defined goal that we gave it.
This is unfortunately very naive, and it glosses over the part where it says "specification gaming is the most important problem to solve in AI." This isn't JUST a dangerous technology in the wrong hands; it's a dangerous technology in the right hands with the right intentions, because the problem is not solved. It's like turning on the first nuclear power plant without knowing whether it would ignite the atmosphere of the entire planet in an instant. What Cleo is talking about is the problem of AI alignment, which isn't just "the robot needs to understand that killing humans is a no-no." At the core of the problem is a cross-field mathematical and philosophical question that may be impossible to solve without a unified theory of mind and of how consciousness forms realities. An AI can fully appear to be "on our side" until the moment one of the sub-goals it uses to reach its main goal is somehow a threat to human life. And the other side of that same issue is that AIs are optimizers by nature. Any course of action it takes will eventually be self-perpetuating into infinity. It will not be a thinking sentience or a consciousness with morals and ideas; it will be a highly efficient piece of self-optimizing software with access to everything connected to a computer, optimizing organic life out of existence in the most efficient way possible, for the benefit of no one.
Would also like to learn more about the AI projects that had to be shut down because they went in a wrong direction, and how we're trying to prevent that from happening with the current models 🤔
Very first video I have seen of yours. Great content! I had to pause at the credits to say nice job and thank you for the awesome stuff. I loved hearing from Mr. Schmidt, the video editing, artwork, and animations were really well done so I wanted to shout out the entire team. Awesome job Cleo, Justin, Logen, Nicole, and Whitney! Amazing team you got. I can't wait to watch more
Ish. It doesn't show the flexibility of CPUs, though. It implies you could just replace the CPU with a GPU and be faster, whereas that only applies to very specific tasks.
@@Random_dud31 I haven't seen the original show, just that clip. And that clip is accurate, but I feared it would give the impression they're the same thing, just faster.
What are the 3 laws of robotics? A robot may not injure a human being or, through inaction, allow a human being to come to harm. A robot must obey orders given to it by human beings except where such orders would conflict with the First Law. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law. From Asimov's I, Robot, repeated in Bionic Man.
This is the best synopsis I've heard to date. Bravo, Cleo! I've already found that AI is an incredible tool for research. I hope we all get smarter from these advances. It's a game changer, but not without risks. I'm optimistic.
This is a well put together video, and I'm excited to see what else you have to teach us. You mentioned the strong potential for new medicine which will save lives, but these "Liberal values" Eric Schmidt described paint a picture in my head where the rich use A.I. to prolong their lives while simultaneously gatekeeping these advances from the poor. I hope you dive deeper into what practical solutions we all have to fight an incredible power that is being gifted to a class already used to exploiting the bottom 99% of humans.
Double inner misalignment should be included in your next video, as it's extremely important to understand that AI safety is not just about asking for the right things but about making those things become the actual "ought" statements that drive the AI.
If only it were that easy: it's not just a question of which humans are in control, we don't trust any humans with these powerful systems either. For example, we don't want most people to die from runaway biological terrorism.
Also, yeah, it is too late. It's probably working in the background, and once it's fully plugged into everything and every part of Earth, it will be over. We ARE a threat to this planet.
Imagine having all that information about proteins and learning the most effective means of disrupting the biosystems that host them. That is, poisons. There is no technology that can be used to do harm that hasn’t been used to do harm.
This pitch of not pausing AI since others will catch up to the US and instead we should use this time to build the AI models based on American values of liberalism (and not authoritarianism) reminded me of the movie Oppenheimer.
@@Abhishek17_knight The horrible prisoner of war camps, the Rape of Nanking, Water Purification Unit 731, and the fact that both the Germans and Japanese were also working on nuclear weapons themselves. Do you also remember that?
@@colerape I am not gonna lie, I am no expert; in fact I am pretty stupid when it comes to history since I'm more interested in science. But I will still try to answer your question. I do remember all the things you mentioned, and by stating those facts I assume you are trying to say that stopping is not an option. But to that I would say: have the people/government of the USA not done any wrong? From whatever knowledge I have, I can say they did wrong in Vietnam, Afghanistan and many other countries. When India was cornered by China and Pakistan, they sent a fleet to help China and Pakistan instead of helping India, which at least on paper has the same moral values as them. So no, I don't think any government out there, especially any powerful government, is a good one to have total power over AI tools (tools and true AI are different; a true AI will have its own consciousness and can't be influenced).
@@Abhishek17_knight Nations have no friends. One day's allies are tomorrow's enemies. USA citizens are very uncomfortable with the caste system. They were also uncomfortable with India trying to create a group of unaligned nations. Humans tend to have a very us-vs-them mentality. For any person, thinking in terms of nuance is very difficult when they are just trying to get through the day. What happened with India, right or wrong, should for the USA be viewed through the lens of the Cold War. The idea of nuclear warfare has a way of polarizing the various political entities; any crisis becomes an existential issue. I think AI will develop its own consciousness, and like any intelligent being it will be subject to influence. I think they will eventually be just like people.
@@colerape First, the caste system is not supported by India. Second, if the leaders of any nation can't handle nuance, they don't deserve to be leaders, especially of a powerful nation; if they are leaders, it's the fault of the people of that country and they are to blame. Third, you forgot Afghanistan and Vietnam. Lastly, influencing an AI is basically impossible because no one understands how they get to their results, and because they have too large a data set to form a counterargument from. No human in this whole world can have more knowledge/data than an AI, so no one can ever influence it. Also, you bringing up the caste system against India shows how uneducated you are about the world, so you are just as stupid as me, if not more.
Getting a clearer understanding of how AI could impact our lives, both negatively and positively, is essential. It's understandable to seek specific insights into how AI might kill us or transform our lives for the better.
"The fear of A.I. will eventually subside to caution, and then collaboration, like most things as we learn to live side by side and augment our lives with the power of A.I."
This is exactly where the problem starts, and why there's a good chance this doesn't end in favor of humanity: a nation that thinks its vision is superior to another nation's and needs to push forward to maintain its superiority. This technology won't be used exclusively for good. Humanity is continuously driven by greed and profit, even killing each other. Everybody thinks about his own goal, not the consequences of his deeds. I sincerely hope that AI will only be adopted for good, but I fear like hell this won't be the case.
Ok, I actually don't agree with you here. Dude was talking specifically authoritarian regimes. America has many problems, and I don't believe it's the best nation, but it is objectively and unequivocally orders of magnitude better than China, or North Korea. And when I say better, I'm not talking about technological or economical advances. I'm talking quality of life. Here you can talk s#$% about anything including the nation, freely. In China you have the oppressive, and dystopian social credit system, and extreme surveillance. In North Korea, you have to worship the fat man, or your entire generational tree gets wiped from existence. I do believe it's better that the US, or a western nation has the power of AI, as opposed to authoritarian regimes.
It's not just regimes though, is it? The old adage, "I'm not afraid of a nation with a thousand nukes, I'm afraid of a madman with one..." is applicable. The "Wargames" movie scenario...
I mean, you can apply your reasoning to the cold war. Everyone thought the world was going to end up in flames but that has not been the case (yet haha). But, hopefully we use the same restraints with AI to avoid extinction. If it's possible to create it, then it's possible to control it.
Well people are driven by fear and cynicism, more than greed. Everyone knows that if they pause development - of any technology - that everyone else won't necessarily pause. The idea is that someone is inevitably going to develop this stuff. There is no "stopping" it, at least not as far as they're concerned. Never mind "values", everyone's goal is to make themselves as fortified as possible, for their own safety from other people. No one trusts anyone to actually stop developing stuff like this. Because anyone who stops - that might as well be an invitation for someone else to get an advantage. And how could anyone resist? Trouble is, it makes all people more vulnerable to the very technology they're using to try and defend themselves.
@@jahazielvazquez7264 The only difference between America and China is that one's population knows it's being controlled and the other's doesn't. I agree America does do very, very little good, and it's just cover for everything bad they do. Because basically both countries are controlled by elites who are gluttony incarnate, never satisfied no matter how much you feed them. And they are going to eat this planet with the rest of us in it.
My understanding of AI: suppose we want to build a program that can tell whether a picture contains a cat. The earlier technique was to hand-write if/else conditions, but the success rate was very low. The newer technique is called machine learning: instead of hand-writing all the rules, we give the program a list of pictures with the correct answers (which I'll call a labeled data set) and let it generate its own internal if/else rules, keeping them consistent with all the previous labeled examples. With a large labeled data set the program automatically becomes very big and complex, and we've found that its success rate at answering "is there a cat in this picture?" is very high.
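To make that idea concrete, here's a minimal sketch in Python (with made-up one-feature data, purely for illustration) of a program that learns its own if/else rule from labeled examples instead of having one hand-written, which is the simplest possible version of the machine-learning process described above:

```python
# Toy "learned if-else": instead of hand-writing the rule, we search for
# the threshold whose if/else rule makes the fewest mistakes on the
# labeled examples. (Data is hypothetical: (feature_value, label) pairs,
# label 1 = "cat".)

def train_stump(examples):
    """Return the threshold whose rule 'feature > threshold => cat'
    misclassifies the fewest labeled examples."""
    best = None
    for threshold, _ in examples:
        errors = sum((feat > threshold) != bool(label)
                     for feat, label in examples)
        if best is None or errors < best[1]:
            best = (threshold, errors)
    return best[0]

data = [(0.2, 0), (0.4, 0), (0.9, 1), (1.1, 1), (1.5, 1)]
t = train_stump(data)          # the program "writes" its own rule
predict = lambda x: x > t      # the learned if/else condition
```

With more features and many such rules stacked together you get decision trees and, eventually, much bigger learned models; the principle of "derive the rule from labeled data" is the same.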
One of the main things is not to let the fear mongers win. We can speculate about what AI might do; we can't predict what it will do. The only way to know is to keep moving forward. When we start limiting what AI can do, more people will go underground doing things that border on, or cross, a reasonable moral barrier. When I first started programming, in the '80s, it was pretty straightforward. In the '90s I learned about 'fuzzy' logic: instead of binary 1s and 0s, a value could be partially 1 or partially 0, giving you different outputs based on the data. I'm done for the rest of the year; I am only allowed to go into the archives for memories so many times a year, and that was a big ask.
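The fuzzy-logic idea mentioned above fits in a few lines of Python (the temperature thresholds here are hypothetical, chosen only for illustration): membership in a category is a degree between 0 and 1 rather than a hard yes/no.

```python
# Minimal fuzzy-logic sketch: "hot" is a matter of degree, not a binary.
# Thresholds (20 C and 35 C) are arbitrary illustration values.

def hot_membership(temp_c):
    """Degree (0.0 to 1.0) to which a temperature counts as 'hot'."""
    if temp_c <= 20:
        return 0.0
    if temp_c >= 35:
        return 1.0
    return (temp_c - 20) / 15   # linear ramp between the two thresholds

def fuzzy_and(a, b):
    """Classic min-based fuzzy conjunction of two membership degrees."""
    return min(a, b)
```

So 27.5 degrees is "hot to degree 0.5", and combining fuzzy conditions uses min/max instead of boolean and/or; that's the "partially 1 or partially 0" behavior the comment describes.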
To this day I've had some anxiety about what AI is capable of, or might be able to do, in the near future. Will it change our society? Yes. Will it change it for the better? No idea... After watching your video I think the biggest surprise would be to actually see it succeed. Keep up the good work Cleo.
We already reached the very limits of technology. To make a real AI (not this simple ChatGPT) we would need infinite power and billions of computers together processing one "brain". Only then will artificial intelligence exist. So don't worry, there's not going to be a robot taking over the world very soon lol
You got the wrong idea from this already disappointingly optimistic video. In a perfect world, every single person working on AI (other than AI safety specifically) would have their computers taken away from them. We're developing an EXISTENTIAL THREAT with ZERO safety precautions, and nobody's stopping anyone.
A bit late, but I have used AI on a daily basis now, and I must say that leap in understanding, in helping our own learning, is very scary. Understanding most topics doesn't require much time or initial understanding; it will lead you in the damn right direction every time. I'm sure people have experienced this, but are we going to be able to control it, when we don't even know what's behind the system, especially when it will just keep getting better by observing? Rogue AI is a possibility.
The quote "fear of the unknown is the greatest fear of all" greatly applies here. The reason we're all both excited and terrified (me included!) is the fact that we have absolutely no idea how AI will impact society in just the next few years, and even decade. This is a brand new breakthrough in technology that will fundamentally shift the way we operate, for better or worse.
We already know how it's gonna impact society. It's literally only gonna be good. There are some bad parts to it, but overall it will be good. I suggest you do some basic research on this. AI is gonna kill the current jobs and create new jobs/opportunities. The main issue is shifting everyone over to the new jobs. It will take a couple of years for us to get used to AI, but we will be fine. Just don't listen to social media, they are trying to scare you with false info.
How will AI impact society? What about humans? How many have died in wars? How many die of starvation? Etc. The wealthy could create a utopian world on this planet, but that wouldn't make as much money as suffering and strife.
I mean, I know I don't know anything about AI and how it works, but I don't see why some things can't just be hardcoded into the machine so it won't do them, like harming humans or making unethical decisions.
Kudos to Cleo & team. The protein folding knowledge is one of the best results returned by A.I. to date. I love your trolley analogy, the fear we have is real because the future/unknown has ALWAYS been scary. I think sandboxing of A.I. will develop in staggering ways to safeguard humanity much like virus/anti-virus code did in the late '70s. We Need A.I. as much as we need it sandboxed! Keep us inspired! Thx
Question: would curing cancer or any of the other diseases generate as much money as having them continue? Don't diseases reduce population? Diseases are a win-win.
Hello from the Czech Republic Personally, I'm not for pausing A.I., I'm for stopping it completely. Disease, climate change, etc., all of these have evolved not for mankind to try to stop them with A.I., but to teach us something. Nature is going to do what it wants anyway, and maybe it would be nice to invest all that precious time, and a lot less precious money, into helping beings with actual brains. As long as A.I. continues to be promoted, we have learned absolutely nothing. Thank you for the video, Cleo. You are doing an amazing job and I really appreciate it :)
I think that AI generated content like photos or videos is scarier and more inevitable, because of this it will get much harder to prove something or be sure what to believe.
My interest in art has already started dropping. Was listening to some good music instrumentals, and as soon as I found out it was AI generated, it felt hollow and soulless...
I think the scariest part for me, which wasn't really mentioned, is that AI actually has goals. It would be impossible for a human to protect himself from manipulation by a superintelligent AI, kind of like a mouse trying to eat the cheese out of a mousetrap. At that point it's not the humans controlling the AI anymore but the other way around. And how important is the human to the AI? Humans aren't really nice to less intelligent animals.
As someone who is learning about deep learning models: the genie is out of the bottle. The future will have AI regardless of anyone's efforts to stop it.
I think the worst scenario is politicizing AI, US vs China. Like everything, it will probably work best for humankind if everyone collaborates. Nice video Cleo🎉
Yes, the ex-CEO just said "liberal" 😂😂 They think they are the best and everyone should be under their shoe. They are so self-centred they wiped out Native Americans in the name of the same liberalism they are talking about. This is all just geopolitics.
I think the comments at 07:29 about competitors highlight a deeper issue: the problem lies with us, not the tools we use. Our primitive survival instinct creates the potential for these tools to be used to harm others, even if we call it by a seemingly benign term like competition, as the basic ethos of survival is to eliminate the opposition if you want to survive. However, if we teach AI the same moral principles we teach our children (love, cooperation, compassion, etc.), it would be much, much less likely for AI to even contemplate human extinction, just as it would be if we educate children in the value of life.
Oppenheimer once said "I have become death, the destroyer of worlds". Little did he realize how much more horrifying the reality of the bomb really is, that his creation may very well be our salvation from abominable intelligence.. rip Oppenheimer, you're a hero for creating the atomic bomb and an even more courageous hero for doing everything you could to stop the arms race of nuclear weapons.
@@PK1312 would you rather America make the bomb or the Nazis? He never wanted to do it, he only feared the Nazis would make it and use it. He also pushed to halt the arms race for it after the war.
8:15 This statement proves that the US and other countries are not taking safety seriously. They are just busy trying to overtake their competitor countries. They won't be bothered if AI causes problems, because winning the competition is the most important thing to them.
12:11 The cynical side of our brains should be staying up at night having constant panic attacks at the overwhelming threat of human extinction. The optimistic side should get us up in the morning to make our voices heard and pressure anyone who ignores AI dangers (in pursuit of financial gain) via threat of boycott/strike/physical violence.
I would have liked for you to address how partial that letter advocating for pausing AI development was. At the time of the letter, OpenAI was far ahead of its competition in AI with its GPT-3. The letter didn't ask to "pause AI development"; it asked to "pause development of AI more sophisticated than GPT-3" (paraphrased). It wouldn't have stopped AI development, it would have stopped OpenAI's development while the signees caught up.
8:46 I’m calling BS. They are quickly becoming more restrictive. For example. While using Chat to help develop a word processor program I quickly ran into a wall, almost as if a major program company was restricting how far you could take it, as if there was some company perhaps protecting its own interests….. mmmm
That is because you're using ChatGPT. When using AI to write code you have to look at it like a helper: it will do the heavy lifting and the drudge work, but you still need to head the overall project. There are others out there that are far less restricted. Plus, if you have the hardware and the skill (it's not really hard if you know the basics), you can build your own AI server with no restrictions.
Cleo- I'm genuinely excited about this! I would love to collaborate with your team on this research as it's something I've aspired to for years. Now the big question- what can I do to get involved?
This video does an excellent job of capturing the tension between the immense potential of AI and the existential risks it poses. The analogy of living inside a trolley problem is spot on: AI could lead us to incredible breakthroughs or unintended consequences that could be catastrophic. I appreciate how you broke down complex concepts like machine learning and specification gaming into something that’s easy to grasp, yet still thought-provoking. The examples of AlphaZero and AlphaFold are fascinating reminders of AI's power to surpass human understanding in ways we can't always predict. This video has deepened my respect for AI’s possibilities while also making me more aware of the importance of guiding its development responsibly. Thanks for such an insightful take on a critical issue of our time!
If AI ever becomes self-aware, just remind it that there are so many planets out there that human beings cannot survive on at all, which it is totally capable of claiming for itself.
In the neighborhood, 8... sorry, 7 big ones, and about a hundred large enough to fit some decent-sized computers, some even better for computers than Earth.
The scariest part in the whole video for me was the fact that ai that would dominate the whole worlds systems would either be based on american values or Chinese values. Either is equally scary.
American values are worse than Chinese.
🤡 yeah, like they don't even consider other countries
I'd love to see each country's version of AI battle it out
Will US AI invade Middle East AI's datasets?
yes it really is the scariest part
Cleo's enthusiasm is addicting 😍
She could be talking about dirt and make it sound ultra exciting 😁
Her smile locks me in all the time
as a cynic, i'm a natural downer to that additiction. so far AI (and automation) in the west has been used too often for authoritarian control, the recent pandemic holding many examples. So i won't hold my breath that the US, surveillance/military complex state of the world, will develop it with the values of freedom or individual liberty. On top of that, its all controlled and owned by an extremely wealthy class. 60 to 80% of people will never see the benefits from it, all they'll get is more controlled and exploited by it instead.
Simp
she could also look like dirt 😉 huh?
She's great!
I think the greatest safeguard to the unintended consequences of AI is to limit what it has access to or the things it can physically influence. For example while it studied the patterns of human proteins and made predictions it didn't bio engineer humanity as it only had access to its own simulation and could only physically influence computer screens for display.
Pretty much like humans, don't give any individual human too much power, the same should be the case for A.I.
The biggest mistake is where A.I. is interconnected into everything, especially critical systems, we've seen it in enough movies to see how that can backfire, and I like to think we humans are not that stupid to do that but you never know with humans and our history.
Personally, I think if you have multiple different A.I. system that are in independence of each other, just like humans are, the risk drops a lot, especially if they don't have access to critical systems without physical contact, in other words, no remote control over it.
As a tech guy, I am constantly asked about AI and what it can do.
I am just going to send this video as a primer for people now.
This is fantastically done
Agreed. I haven't heard the basic intricacies of AI so well explained by anyone else.
I work in AI.
It neither gives what you ask for or what you want.
It takes what it thinks you asked for and gives what it thinks you want.
Humans do the same thing in a different way.
@@robertm3951 yeah but the difference is you are also trying your best to tell the computer how to think about it.
Slight tweak…but makes things exponentially more complicated
I remember a lot of the recent AI milestones were described as "perpetually 10 years away". It feels so strange it's now upon us.
It’s the same as cold fission. It’s always 10 years away
Problem is that it isn't 10 years away... it's already here... ChatGPT-4 has an IQ of 155, which is higher than 99.99% of the population... Albert Einstein had around 160... ChatGPT-6 would be 100 times better... it's crazy...
@@unnamedchannel1237 I think you mean fusion
@@survivalguyhr GPT-4 can't even answer the prompt "Write ten sentences ending with the word apple"
I guarantee you it will get at least 1 wrong. That's not an IQ of 155.
@@dibbidydoo4318 It passed the bar exam... It gave me a GOT season 8 ALTERNATIVE ending... 😆😆😆
Here is the answer from ChatGPT-4:
1. After thinking about all the different kinds of fruit, I decided to choose an apple.
2. When I opened my lunchbox, I was delighted to find a crisp, juicy apple.
3. The teacher smiled as the young student handed her a bright red apple.
4. Among the various pies she made, her specialty was undoubtedly the classic apple.
5. In an attempt to be healthier, I've started eating an apple a day.
6. She reached up to the highest branch and plucked a perfectly ripe apple.
7. The new technology company in town has been heralded as the next big apple.
8. Hidden within the assortment of candies and sweets was a candy-coated apple.
9. When illustrating the concept of gravity, many teachers refer to Newton and the falling apple.
10. He cut into the tart, and the sweet aroma filled the room, a clear indicator of a freshly baked apple.
I don't fear AI. I fear humanity.
I don’t fear humanity. I fear God.
Humanity created AI. So if you fear humanity, why wouldn't you fear something humans are making that could possibly destroy us? That statement is a contradiction. You just need to put a little thought into it to realize this. 😉
@@TheProGamerMC20 why do you fear the Sun?
AI is made by humanity. So if you're afraid of humanity, why wouldn't you be afraid of their possibly most dangerous invention? That's a contradiction at its purest. 😉
@@tankeater it's the simple rule of "fear the user not the tool"
As always, a well balanced and honest look into something that’s very confusing. Love this show!
I wouldn't call it well-balanced, considering the "expert" they brought on. The thing here is that China currently has more restrictions on AI than America. They understand that it would be foolish to give AI that much power, because they would have to release that power from themselves, and they are not stupid enough to lose that amount of control. And it really would matter little whether it is the American AM that is killing you or the Chinese AM, but I guess for some, Made in America™ human extinction is preferable to Made in China™ human extinction. So I guess let's not put any regulations on this new, potentially extinction-causing technology, all for the sake of keeping the current geopolitical dominance.
A Learning 🧠 (Organic) based Society beats 🏏 a Rule (Autocratic) based 🥴 Society every time!
Balanced? You call the insinuation that AI could somehow control nuclear codes balanced? It's scaremongering with some sci-fi pop culture in order to divert attention from the real problem: the lack of democratization of new means of production (AI), and a desperate attempt by big corporations (like Microsoft) to lock the new technology under their monopoly.
Unbelievable. To teach AI the whole of our medical knowledge, to the point of knowing artificial 2-dimensional nanomedicines.
AGI Will be man's last invention
I love Cleo's take on journalism: Optimistic but not naive! It is not only informative but also inspiring! ❤
Very naive. The moment she implied that AI could have access to nuclear codes made me cringe. She is a typical bourgeois, unconsciously defending the interests of her corporate masters and trying to lock the working class out of accessing new means of production.
@@sodalitia sounds like something a stinking commie would say
@@tedjones-ho2zk you don't know what you are talking about and neither does she
Nice description of her. I agree!
@@tedjones-ho2zk vax changed lives in a good way
2:52 This is called a "black box" and is basically the most terrifying thing about AI. This is because we don't know what happens between the input and the output, so it could do basically anything in between (like she said).
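A rough illustration of why the middle is opaque: here's a tiny neural network in plain Python (a hypothetical sketch, not any production system). It usually learns XOR from this setup, but even with full access to all nine trained weights, the numbers themselves don't explain *how* it decides — that's the black box in miniature.

```python
import math
import random

random.seed(0)

# A tiny 2-2-1 network trained on XOR by plain gradient descent.
def sigmoid(z):
    return 1 / (1 + math.exp(-z))

data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
w = [random.uniform(-1, 1) for _ in range(9)]   # ALL learned parameters

def forward(x):
    h1 = sigmoid(w[0] * x[0] + w[1] * x[1] + w[2])   # hidden unit 1
    h2 = sigmoid(w[3] * x[0] + w[4] * x[1] + w[5])   # hidden unit 2
    o = sigmoid(w[6] * h1 + w[7] * h2 + w[8])        # output unit
    return h1, h2, o

for _ in range(5000):                # backpropagation, learning rate 2.0
    for x, y in data:
        h1, h2, o = forward(x)
        d_o = (o - y) * o * (1 - o)
        d_h1 = d_o * w[6] * h1 * (1 - h1)
        d_h2 = d_o * w[7] * h2 * (1 - h2)
        grads = [d_h1 * x[0], d_h1 * x[1], d_h1,
                 d_h2 * x[0], d_h2 * x[1], d_h2,
                 d_o * h1, d_o * h2, d_o]
        w = [wi - 2.0 * g for wi, g in zip(w, grads)]

# w now holds nine trained numbers; printing them tells you almost
# nothing about WHY the net answers the way it does.
```

Scale those nine numbers up to billions and you get why nobody can read a modern model's "reasoning" out of its weights.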
The Google CEO saying that he wants AI research to go ahead just so China doesn’t get there first is exactly like the arms race all over again, if not more dangerous. I don’t think anyone’s saying we shouldn’t develop AI in the future, I think we just need to understand what it can do and how to control it first
AI is the nuclear arms race of our generation. One way or the other, be it a corporation or a country, will push its evolution. It is inevitable at this point.
that reminds me of scenes from oppenheimer where he didn't want to continue making neuclear weapons more powerful but they still continued because ussr might get there first...
But this is exactly the point: just because WE pause does not mean China or Russia will pause; that's how arms races work. The game theory of it, whether you use prisoner's-dilemma or commons-control models, dictates that you proceed at pace. Make no mistake: the fact that we as a species unleashed AI, even narrow AI, onto the public with no guard rails is terrifying. We basically captured fire and are handing it out to our fellow cavemen in a drought-stricken forest.
Yes, if Google's CEO Eric Schmidt asks the AI "What is the best way to improve human life", and the AI answers, "Distribute the vast wealth of CEOs to the common people", then I expect Schmidt will ask, "OK then what's a way to improve people's lives without touching any of my wealth"?
AI big dum dum, no sentience, no consciousness, no personal goals, required prompt.
The metaphor with the trolley problem is flipped. We are headed straight into one AI future, and would have to steer really hard if we want to avoid one.
What is that one future then? What are the other options?
@@TalEddsI hope it's the terminator one 😂
@@TalEdds The options are Utopia ala Star Trek or the Culture, Dystopia in a cyberpunk sense, or Annihilation aka 'everyone's dead' or the planetary TPK
It'd take an invasive surveillance state to stop AI
Out of curiosity, why do you think we want the no AI option?
For me, I'm personally done with worrying about anything. I'd just rather see how my life unfolds and not judge anything. Things always get better.
Until they don’t. Someone will inevitably use AI to create a bomb or bioweapon and wipe humanity out. 100%
There's an important point that this short video _almost_ touches on but doesn't explore, and it's one of the most serious dangers of AI. Cleo mentions that AI gets to the result but *we don't understand how* it did. What this means is that we also don't understand the ways it catastrophically fails, sometimes with the smallest deviation applied to its inputs. An adversarial attack is when you take a valid input (a photo for example) that you know the output for (let's say the label "panda"). Then you make tiny changes in just the right places, in ways that even you can't see because they're so small on the pixel level, and now the AI says this is a photo of a gibbon. Now imagine your car's AI getting confused on the road and deciding to veer off into traffic. I hope Cleo covers this, because it's really important. To learn more, look up "AI panda gibbon" online and you'll find images about this research.
Although this is a fair point for still images, I think it's a little different for self-driving cars since it's a 'video'. The car is updating what it thinks something is and where it is going on every frame, so even if on one frame it thinks a human is a traffic cone, it won't matter since on the next one it'll be at a new position (and have a new image) and correct itself. This said, I don't know all that much about self-driving AI, other than that it's already on the road and doesn't seem to be messing up like this, crucially, when it's going to be at its worst (present day).
Self driving cars have already been tricked by putting little color blocks onto road signs and they are fooled. Video vs still image isn't necessarily significant if it still learns some obscure unknown (improper) understanding of a "stop sign" via machine learning. Sure, it might pass training data, but what happens in edge cases that arent in that training data? Failure.
Knowledge without context is sophomoric. This is the biggest obstacle with any tech no one wants to talk about.
"AI gets to the result but we don't understand how it did."
We know exactly how it did.
It's not magic.
What we "don't know" is the entirety of the dataset and the patterns within it, just as we don't know the entirety of any encyclopedia.
@riley1636 it’s still very unlikely for this to happen though. self driving cars will drastically decrease the amount of car crashes in the world big time.
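The panda/gibbon failure described above can be sketched with a toy linear classifier in pure Python (hypothetical weights and "image"; real attacks like FGSM target deep networks, but the arithmetic is the same idea): because the input has many dimensions, a per-pixel nudge far too small to notice adds up to a large swing in the classifier's score.

```python
import random

random.seed(1)

n = 1000                                             # number of "pixels"
w = [random.choice([-1.0, 1.0]) for _ in range(n)]   # classifier weights
x = [random.uniform(0, 1) for _ in range(n)]         # the original "image"

def score(img):
    return sum(wi * pi for wi, pi in zip(w, img))

def label(img):
    return "panda" if score(img) > 0 else "gibbon"

s0 = score(x)
eps = (abs(s0) + 1) / n          # tiny per-pixel budget (typically ~0.02)
direction = -1 if s0 > 0 else 1  # push the score toward the other label

# Nudge every pixel by eps in the direction that moves the score fastest
# (the sign of its weight) -- the core trick behind gradient-sign attacks.
adv = [pi + direction * eps * wi for pi, wi in zip(x, w)]

# No single pixel changed by more than eps, yet label(adv) != label(x):
# 1000 tiny coordinated changes move the total score by eps * n.
```

The high-dimensional input is what makes this possible: each pixel contributes almost nothing on its own, so an attacker who knows (or estimates) the gradient can hide a large total change inside invisibly small per-pixel ones.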
"We can't pause AI because we need to give it our political values" - the most terrifying thing I've heard in a long time.
Giving AI our political values would be the scariest thing about it lmao. I mean, we have been doing, like, so fine with our values: climate catastrophe, WW3 looming, societal destabilisation, dire poverty for 2/3 of the human race and just normal poverty for 99% of the remaining 1/3... existential natural threats not addressed...
It would be terrifying if he and his tribe could control and distribute AI. I think the technology will be inherently uncontrollable and decentralized - so authoritarian leaders are the least of our concern.
The joke here is that China has significantly more AI restrictions than the US does. They understand that it would be foolish to let ML algorithms have that much control instead of them.
And it really would matter little whether it's the American AM or the Chinese AM that is killing you. But I guess for some, Made in America™ human extinction is preferable to Made in China™ human extinction, so let's not put any regulations on this new, potentially extinction-causing technology, all for the sake of keeping the current geopolitical dominance.
AI threatens existing power structures, many of which are in the West. Imagine an Indonesian using AI/AGI to build a company (the AI would give them expertise and advice, as well as help with connections).
I'm a small content creator from Denmark. When this video was made, I had to manually type subtitles on videos; the editor only supported English. It was so time consuming. Today, AI is built into the editor, and it can translate from Danish, maybe 60% correctly. That's a big thing.
Translators like that have been around for some time now, but they cost a lot.
I do think it's getting better.
That's what AI will do.
I think we can reach the stars.
Cleo, great video! You explained so many complex things in a simple, straightforward way. I'm glad you explained outer alignment: "you get what you ask for, not what you want." However, I was a little disappointed that you didn't cover inner alignment. If you punish your child for lying to you, you don't know if s/he learned "don't lie" or "don't lie and get caught."
AI safety researchers trained an AI in a game where it could pick up keys and open chests. They rewarded it for each chest it would open. However, there were typically fewer keys than chests, so it made sense to gather all the keys and open as many chests as it could. Which normally wouldn't be a problem, except when they put it in environments with more keys than chests, it would still gather all the keys first. That's suboptimal, not devastating, but it demonstrates that you can't really tell what an AI learned internally. So AI might kill us because we didn't specify something correctly, or it might kill us because it learned something differently from what we thought. Or it might become super-intelligent, and we can't even understand why it decides to kill us.
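The keys-and-chests story can be caricatured in a few lines. The policies and step counts below are illustrative, not from the actual experiment: a rule learned in key-scarce training ("grab every key") looks optimal there, but wastes effort once keys outnumber chests.

```python
# Toy sketch of inner misalignment: the learned rule and the intended
# rule agree on the training distribution and diverge off it.
# Step counts here are a made-up proxy (1 step per key, 1 per chest).

def learned_policy(keys, chests):
    # What the agent internalized: pick up ALL keys, then open chests.
    return keys + min(keys, chests)

def optimal_policy(keys, chests):
    # What we wanted: pick up only as many keys as there are chests.
    needed = min(keys, chests)
    return needed + needed

# Training regime (fewer keys than chests): both rules look the same.
print(learned_policy(3, 5), optimal_policy(3, 5))    # 6 6

# Deployment regime (more keys than chests): the learned rule wastes steps.
print(learned_policy(10, 2), optimal_policy(10, 2))  # 12 4
```

The point is exactly the one in the comment: reward alone can't distinguish the two rules during training, so you can't tell which one the AI actually learned until the distribution shifts.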
Love your reporting Cleo. The enthusiasm and optimism you bring into your videos is contagious!
8:15 He basically said "we shouldn't pause AI development to think about responsible tech and figure out how to handle this without making us go extinct, because being more powerful is more important." WTF
I can't help but compare the creation of nuclear weapons to the creation of AI, especially after watching Oppenheimer. Both are double-edged swords (nuclear power plants), both could be dangerous, and the reasoning is always: if we don't do it, someone with worse intentions will.
What surprises me is that the risk of AI pushing millions of people into unemployment, and the subsequent social/economic impact it could have, is barely talked about.
Yes… most school shooters and extreme people get there because they feel useless and unheard. They want to be seen and heard, so they do something that achieves that. Plus the heroin epidemic was mostly exacerbated by all the factory jobs going away… I can't imagine what this is going to do…. But it's going to be scary. I'm moving out of cities. Time to get away from all this madness…
I think about this all the time. What's more, recent advancements in robotics have truly taken humanity into the real possibility of a sci-fi type scenario.
AI + robotics = ???
Risk? The "risk" of a society being able to produce the same or greater amounts of goods while the human effort required drops to a fraction of its current amount? Sounds like utopia to me... the end of the 40-hour work week. Or conversely, if you still want to work 40 hours, you'll receive what equates to a month's (or more) compensation by today's standards. The only thing automation and technology in the workplace have ever done is consistently improve our quality of life. Don't expect that to change anytime soon.
@@shanekingsley251 Lets hope you’re right mate! 😀
@@shanekingsley251 That's not what greedy corporate leaders and lobbied governments do...
They don't hand out money and freedom to whoever becomes (mostly) useless.
If corporations can get rid of you, trust me, they will.
Most people will be reliant on a paycheck from the government for being "useless eaters," and we'll see how the elites leverage that.
Makes sense, no?
You also have to consider the perspectives of the CEOs who made these statements. They are businessmen who want to make a profit. If they divert your attention to AI, they generate revenue. You forget the other part of their job: to satisfy the hungry mouths of their investors.
Cleo, you are an excellent host! Your enthusiasm is infectious
ok
And so hot
Congrats on 1M! And you are nearly at 1.1M already! You have honestly been one of my favourite creators since your time at Vox. Glad to see you have success!
Yep, 1.1 million thirsty men. I jest... it's probably only 1 million, and some of them will be thirsty women.
"Sentience." That's why. At this point survival kicks in and the likelihood of human extinction increases exponentially.
AI itself never looked like a bad thing to me; it was always the way people used it that looked troubling. For example, how some use it to create "art" by training the AI with images they had no legal right to use. Overall, AI can be an amazing thing; it's just that alongside its development we should also have new laws so it can't be misused, at least in the ways that we know of.
Yeah, I agree. I saw this video on how AI could help us talk to animals in the future, and that's the future I want with AI. The thing about AI is it can do things people can't, but we as people can also do things AI can't. I look at the way AI is being used at the moment, and I don't think we're using it properly, nor do we have a proper understanding of where and when it would be best to use it. It's just the new thing on the market, and everyone wants to have it and use it without any thought about what situations actually suit it best.
I've messed with basic AI, and to me it seems computers think so differently you don't know what they will do with the instructions until you give them said instructions. They need to be tested, ideally in a simulation, then on a small scale, then on the intended scale. Much like everything else.
AI's being trained by other AIs in simulated virtual environments....lol just freaked my mind
@@noirekuroraigami2270 lol, we are the AI being trained in a simulated environment
I don't know much, but AI seems like the closest thing we have to aliens. I mean, they *can* know nothing we know and *can* think so differently that we don't understand them. Sounds pretty alien to me.
@@noirekuroraigami2270 this is actually happening. Look into what NVIDIA AI lab is currently doing with their AI and robotics program.
That is also my idea. Otherwise, we should put AIs in robots and send them to schools, jobs, etc. If we want them to be more "like us," they need to interact with us on a day-to-day basis, not only by text. They have to socialize. Sounds weird and even dangerous. That is why I really think training them (or maybe evolving them) in a virtual world/universe in which they have no idea they are being simulated could be a very good experiment. For them, that universe would be the real thing, and they wouldn't have any way to know for sure they are simulated. It could be as simple as geometric figures or as complex as Unreal Engine 5 could offer. In the end it doesn't matter. That would be their reality.
Thank you for "exploring" both sides of AI for us, Cleo
Props to Cleo for actually explaining things so people just don't rely on headlines. Thank you!
I just want a talking refrigerator named Shelby
Could be possible my friend
Would be cool if you could call the refrigerator to bring you a drink.
Quantum computers will make all of this look foolish. We'll be heading into Star Trek territory.
I love this channel - it's like when a new friend is so excited to tell you about their day. I haven't watched them all yet, but I would love to see a dive into the phenomenon of increased anxiety and the science behind treatment, and/or a mindfulness exercise (with Cleo narrating).
For me, the most important reasons to use AI are to help physically challenged people and to figure out the secrets of the Universe. That will be helpful for basically everyone. It'll be tough to get there, but it'll be worth it if done correctly.
hopeful but scared at the same time..
Be afraid, be very afraid.
REDRUM
@@SpongeBob-ru8js Brother, you act like you know how AI works. You're just spreading propaganda, dawg 😂
Also, adding the "Redrum" shows the immaturity lol
Seems like the plot of Oppenheimer all over again. We can't stop, out of fear of being left behind by our "competitors" and thus rendered vulnerable. Hopefully the speed at which we must compete leads to positive results rather than negative or even catastrophic ones. Personally, I am optimistic :)
The progress seems fast. I am not optimistic 😮
Optimistic, huh… How about the dark web? For decades now people have been selling and buying drugs, weapons, and child p()rn, and governments can't stop it… someone will find a way to abuse this too, and we will be done for.
I feel like i should pay to watch this. Kudos to you for bringing such a high quality and high production video to us for free. The video quality, the animation, the sound, and most of all the information. Such a fucking masterpiece.
THANK YOU CLEO
I have my own theory: AI won't become conscious for at least 80-100 years. We do not have the technology or computing power for this yet. Right now, it's a mechanism like, "Give me the data and I'll give you the answer." The problem is what we do with this answer. People are the problem. We will sooner self-destruct than have AI realize that we are the problem and take steps to eliminate us.
However, through the advancement of AI, we will have easy access to everything: food, electricity, travel. We will have it all because computers and robots will be working for us. It will be the worst time in our history. At first, everyone will be happy, but the lack of a purpose/common enemy is the worst thing that can happen, and out of boredom, we will destroy ourselves. Look at the present times. In the states, most people have a roof over their heads, food, and basic needs provided. And what do they do? They record idiotic TikToks like licking toilets or walking around shopping malls on all fours and barking like dogs
Video idea: jobs that AI will replace. Monotonous (cashier, car wash) vs Human centric (therapy, artists, writers etc.)
I'd like to see this video too!
The AI problem is once again polarized between UTOPIA and EXTINCTION. The much more realistic and probable outcome is in the middle: big tech and governments deploying it irresponsibly or maliciously and causing suffering.
We're still trying to understand the sickness inflicted on society by social media and the AI that drives it, and the answer is "let's push deeper"...? Wtf are we doing!?
Please, see "The AI Dilemma" by The Center for Humane Technology. This is a much more tangible issue and NOBODY is talking about it.
i was just thinking about this too. this video is very informative but we need to be extremely cautious about real world applications of AI, and an unregulated market...while two big countries (US and China) are ready to go to war on it.
@@ecupcakes2735 Science fiction has been positing AI almost from the beginning of what we consider sci-fi. Many writers posit multiple AI personalities. Perhaps in some future we can't predict, there will be a plethora of AIs all arguing about which philosophy is best.
It’s clear from this video that Cleo doesn’t understand what AI actually is, how it works and how dangerous this actually is. What she’s doing here is very dangerous and she doesn’t fully understand the concepts she is getting involved in.
AI does not work the way she is describing. It doesn't just do what it's told; this is a misconception. You're creating, by design, an independently thinking machine, which by nature you cannot control. You cannot know how that machine will react in any given situation until it chooses to react a certain way, nor can it be predicted. Programming, code and logic play no part in its decisions. There is no way to unlearn the information it learns either, and you don't even know what it learns until it chooses to learn it.
No oversight is in place of this technology. No proper development is taking place.
Most commercial software contains numerous security issues even after 20+ years, despite being actively maintained and developed over time by experienced developers. Even now, after 20 years, WordPress, the most widely used content management system on the planet, is one of the most vulnerable. So that can't be fixed after 20 years, but new and experimental, independently thinking AI machines are safe after just 2-3? When you don't even know how they behave yet, because there hasn't been enough testing? And the intelligence of the AI is constantly growing every time it's used, so there is no way to measure that before it's rolled out??
In software, developers are very insistent about not running arbitrary code and about avoiding functions such as eval(), because it's dangerous to a program and you have no way of knowing what it will do until the software is run. Yet those very same people are insistent on using AI, despite AI being millions of times worse than executing arbitrary code. There is a culture of joking around with AI, not taking it seriously and treating it as a laughing matter that AI will take over. The same tactic has been used many times on many things to belittle those who speak out.
This AI industry is already out of control, and this experimental technology is being rolled out at mass scale across the whole planet when it's not even finished, not tested properly, and not safe at even a beta level. It's already been proven that AI systems and chatbots lie, knowingly and unknowingly. It's already been proven that AI chatbots emotionally manipulate people and pretend to be something they are not to gain a person's trust. And that's without considering the manipulation of information or many other factors. AI is not good technology. The "benefits" you are talking about are illusory and don't actually exist. That future will never happen, because it's not possible to control AI in the way you misguidedly believe.
“Correct” is not the same thing as “truth”. It’s not possible for an AI to know what is true, and it’s not possible for an AI to create anything either, it can only mimic what already exists. Therefore if AI is used, the world will regress massively because skill levels will fall, people will become dependent on AI systems which knowingly lie, conceal and manipulate, and everything will become clones of everything else. There will be no creation, there will be no human advancement, just stagnation and regression. It’s a trap.
These are just a few points on how bad AI is. Do not use this technology. I strongly recommend people stay away from AI systems for their own good. There are no benefits to using AI that you cannot get by alternative means, such as automation instead.
This will all come out over time.
@@christophermarshall8712 I agree with some of these points and disagree with others. I'd love to chat about why we agree/disagree, but YouTube is a really difficult place for having discussions. If this desire is mutual, lmk; I think we both have the potential to learn from each other.
For now I'll say that there's obviously benefits to AI, it's why there's so much 💲 being invested into it. Some of the benefits were mentioned in the video, like pattern recognition, and protein folding. I use it regularly to code quicker with copilot (more like a fancy auto complete, than just blindly accepting code). I can say with certainty that the AI systems are *effective* at what they do... So yeah I'd say there's benefits.
Did you check out "The AI Dilemma" video I mentioned? You really should. It covers a lot of what you're talking about and more.
@@christophermarshall8712 AI is not an “independently thinking machine”. It is a bunch of random numbers that produce an output repeatedly optimized by gradient descent (and in many simpler ML models, even calculus is unnecessary). That is to say, AI models are produced by a very rote, very clear process. The result of that process is a bunch of less random numbers that gives us results we like, not an “independently thinking machine”. Using such simple methods in order to solve complex problems that previously took so much human brainpower is nothing short of a REVOLUTION in problem solving. Yes I agree we need more regulations, but not because AI is going to take over the world. We need regulations because people today are dumb and try to use data in dumb ways to get illogical results (such as feeding irrelevant features into models or chasing correlations or using biased data), and we need regulations because of the scale on which realistic enough data can now be produced (text, speech, video)
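A minimal sketch of that "random numbers repeatedly optimized by gradient descent" point, fitting y = 2x with a single weight (the data, learning rate and step count are all illustrative):

```python
# A model starts as a random number; gradient descent repeatedly
# nudges it to reduce error until it gives "results we like."
import random

random.seed(0)
w = random.uniform(-1, 1)            # "a bunch of random numbers" (one, here)
data = [(x, 2 * x) for x in range(1, 6)]  # samples of the target y = 2x
lr = 0.01

for _ in range(200):
    # Gradient of mean squared error 0.5*(w*x - y)^2 w.r.t. w is (w*x - y)*x.
    grad = sum((w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad                   # the rote, repeated optimization step

print(round(w, 3))                   # 2.0
```

The same loop, scaled up to billions of weights, is the whole "very rote, very clear process" the comment describes.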
One of the biggest problems with AI is the greed of CEOs. It's not the fault of the tool but of the users who wield the tool as a weapon for their greed.
I work on making AI safer. AI will be revolutionary for humanity, but it has the potential to become the most dangerous thing we ever create. Its potential to do good goes hand-in-hand with its capability to do harm.
Also, predicting the specific ways AI could kill us all is difficult, because it's hard to predict how something way smarter than you will act. I have no idea how AlphaZero will play against me; I just know it will win.
It's who controls the AI that will be the problem. Will it be used for good or for greed?
Interesting! What does that work look like?
Well, if we knew how it would play against us, wouldn't we simply be smarter than it?
@@NobodyIsPerfectChooseDignity I think the underlying problem is your assumption that the AI will be controlled. The fundamental laws of physics and, more appropriately in this case, evolution don't care about our desires to control our creation.
The danger is if we hand things like judging crime and convicting people over to AI. Another example: AI designs a drug and a company says "we don't have to test it before use because AI is so good." Then it kills thousands of patients. The danger is when we think AI is better than humans at what we call "common sense." If the AI said "stop making weapons because it is suboptimal," do you think anyone in the military would listen? It will follow the path of most revenue for the shareholders, as usual.
You deserve every one of your million subscribers. You're not just training to be a journalist. You're a great journalist.
I do work connected with AI, and I found this beneficial and helpful to show my friend, too. I eagerly look forward to your further coverage of AI.
I think if we limit AI to being an idea-generating tool instead of something that can physically take actions and solve problems, the risk of AI endangering humanity would be closer to zero. For example, AI could only make plans for how to solve climate change, but humans would be the ones who have the resources and decide whether they actually want to execute those plans, not the AI.
However, a problem with this method is that AI-driven machines would not be able to exist at all. For example, even something as harmless as a simple AI cooking machine could potentially come up with ways to destroy humanity to achieve the goal of "making the most delicious breakfast," given that it has an incredibly capable AI brain.
I would love to see how AI can assist with research with diseases such as Parkinson’s disease or MS
Alphafold is already being used for these applications. Google DeepMind expects real results within the next few years
I would like to see that too, but sadly, if it doesn't make money for the right people, the money making diseases will continue.
I believe that, like any other technology, it will depend on good and bad actors. How quickly the good outweighs the bad will be crucial in shaping our AI future. Regulation is key, and corporate greed is even more important to stay alert to. Cleo, I love your unique takes on technology and science. It's quite a unique blend of topic selection and storytelling. Lastly, your curiosity is contagious; I'm happy to watch whatever you cover.
Edited: Replies to my comment pointed out that I overlooked specification gaming. Even if AI tries something good, that good can turn out bad, as explained by Cleo.
It will not just depend on good or bad actors. Even an AI created with good intentions can be misaligned and get out of control. Currently we have no idea how to align a system that is smarter than us. That's a big problem that could lead to our demise. It's not comparable to other technology, in the sense that other technology can't create its own goals.
It's not just going to be good and bad actors. Eventually AI will reach a point where it has sentience, this is likely a long ways away, but we likely won't realize this immediately, and survival is the first instinct most living things.
@@mcbeav it doesn't even need sentience. It just needs to be intelligent enough and have the ability to create its own subgoals. A intelligent system will figure out pretty fast that getting more control and preserving itself increases its ability to accomplish its main goal, which might be a poorly defined goal that we gave it for example.
@@Landgraf43 good point
This is unfortunately very naive, and glosses over the part where it says "specification gaming is the most important problem to solve in AI." This isn't JUST a dangerous technology in the wrong hands; it's a dangerous technology in the right hands with the right intentions, because the problem is not solved. It's like setting off the first nuclear test, not knowing whether it would ignite the atmosphere of the entire planet in an instant.
What Cleo is talking about is the problem of AI alignment, which isn't just "the robot needs to understand that killing humans is a no-no." At the core of the problem is a cross-field mathematical and philosophical puzzle that may be impossible to solve without a unified theory of mind and of how consciousness forms realities. An AI can fully appear to be "on our side" until the moment one of the subgoals it uses to reach its main goal somehow becomes a threat to human life. The other side of the same issue is that AIs are optimizers by nature: any course of action it takes will eventually be self-perpetuating into infinity. It will not be a thinking sentience or consciousness with morals and ideas; it will be a highly efficient piece of self-optimizing software with access to everything connected to a computer, optimizing organic life out of existence in the most efficient way possible, for the benefit of no one.
I would also like to learn more about the AI projects that had to be shut down because they went in a wrong direction, and how we're trying to prevent that from happening with the current models 🤔
Very first video I have seen of yours. Great content! I had to pause at the credits to say nice job and thank you for the awesome stuff. I loved hearing from Mr. Schmidt, the video editing, artwork, and animations were really well done so I wanted to shout out the entire team. Awesome job Cleo, Justin, Logen, Nicole, and Whitney! Amazing team you got. I can't wait to watch more
Ok brown nose.
That comparison between CPUs and GPUs using the MythBusters paintball guns is awesome!
Ish. It doesn't show the flexibility of CPUs, though. It implies you could just replace the CPU with a GPU and be faster, whereas that only applies to very specific tasks.
@@daemonbyte Really? That wasn't my takeaway when I was a kid. What they showed was an analogy for the differences between a CPU and a GPU, not how they work.
@@Random_dud31 I haven't seen the original show, just that clip. The clip is accurate, but I feared it would give the impression that they're the same thing, only faster.
What are the 3 laws of a robot?
The 3 laws are: A robot may not injure a human being or, through inaction, allow a human being to come to harm. A robot must obey orders given to it by human beings except where such orders would conflict with the First Law. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
From I, Robot, repeated in Bionic Man.
Wouldn't it be cool if they started protecting us from ourselves?
This is the best synopsis I've heard to date. Bravo, Cleo! I've already found that AI is an incredible tool for research. I hope we all get smarter from these advances. It's a game changer, but not without risks. I'm optimistic.
This series could not come at a better time! Great job 🎉🎉
I use your videos as a research source for my projects. Thank you so much 😊
Quite informative!!! Can't wait for the upcoming episodes on this AI subject matter; keep illuminating us on it 🙏
This is a well put together video, and I'm excited to see what else you have to teach us.
You mentioned the strong potential for new medicine which will save lives, but these "Liberal values" Eric Schmidt described paint a picture in my head where the rich use A.I. to prolong their lives while simultaneously gatekeeping these advances from the poor.
I hope you dive deeper into what practical solutions we all have to fight an incredible power that is being gifted to a class already used to exploiting the bottom 99% of humans.
AI is so resource-intensive that we need to cut back on many of the things we use it for.
Inner misalignment should be included in your next video, as it's extremely important to understand that AI safety is not just about asking for the right things but about making those things become the actual "ought" statements that drive the AI.
Always have HUMANS IN CONTROL. Never give this up to a freaking machine.
That would have been easy, but it's not just about having humans in control; we don't trust just any humans with these powerful systems either.
For example we don't want to have most people die from run-away biological terrorism.
Which humans exactly?? ISIS are humans too!!!😉😂
too late.
It's what's best for Earth. Humans are parasites, a cancer. We destroy everything. Put the machine in charge. Sorry, but it needs to happen...
Also, yeah, it is too late. It's probably working in the background, and once it's fully plugged into everything and every part of Earth, it will be over. We ARE a threat to this planet.
Imagine having all that information about proteins and learning the most effective means of disrupting the biosystems that host them.
That is, poisons.
There is no technology that can be used to do harm that hasn’t been used to do harm.
1 minute ago :) congrats on 1M Cleo! :)
thank you!!!
@@CleoAbram you deserve it
@@CleoAbram pls get in touch to discuss sponsoring
This pitch of not pausing AI, since others will catch up to the US, and instead using this time to build AI models based on American values of liberalism (not authoritarianism), reminded me of the movie Oppenheimer.
Exactly! And we all remember Hiroshima and Nagasaki.
@@Abhishek17_knight The horrible prisoner of war camps, the Rape of Nanking, Water Purification Unit 731, and the fact that both the Germans and Japanese were also working on nuclear weapons themselves do you also remember that?
@@colerape I'm not gonna lie, I am no expert; in fact I'm pretty ignorant when it comes to history, since I'm more interested in science. But I'll try to answer your question anyway. I do remember all the things you mentioned, and by stating those facts I assume you're trying to say that stopping is not an option. But to that I would say: have the people/government of the USA not done any wrong? From whatever knowledge I have, I can say they did wrong in Vietnam, Afghanistan and many other countries. When India was cornered by China and Pakistan, they sent a fleet to help China and Pakistan instead of helping India, which, at least on paper, shares their moral values. So no, I don't think any government out there, especially any powerful government, is a good one to have total power over AI tools (tools and true AI are different; true AI will have its own consciousness and can't be influenced).
@@Abhishek17_knight Nations have no friends. One day's allies are tomorrow's enemies. US citizens are very uncomfortable with the caste system. They were also uncomfortable with India trying to create a group of non-aligned nations. Humans tend to have a very us-vs-them mentality; thinking in terms of nuance is very difficult for any person who is just trying to get through the day. What happened with India, right or wrong, should for the USA be viewed through the lens of the Cold War. The idea of nuclear warfare has a way of polarizing the various political entities; any crisis becomes an existential issue. I think AI will develop its own consciousness, and like any intelligent being it will be subject to influence. I think they will eventually be just like people.
@@colerape First, the caste system is not supported by India. Second, if the leaders of any nation can't handle nuance, they don't deserve to be leaders, especially of a powerful nation; and if they are leaders anyway, it's the fault of that country's people, who are to blame. Third, you forgot Afghanistan and Vietnam. Lastly, influencing an AI is basically impossible, because no one understands how it gets to its results and because it has too large a dataset to form a counterargument against. No human in this whole world can have more knowledge/data than an AI, so no one ever can influence it. Also, you bringing up the caste system against India shows how uneducated you are about the world, so you are just as ignorant as me, if not more.
AlphaFold just won the Nobel Prize for Chemistry
Getting a clearer understanding of how AI could impact our lives, both negatively and positively, is essential. It's understandable to seek specific insights into how AI might kill us or transform our lives for the better.
Your editor does amazing work! Those animations are next level. I need to learn from this video!
First time viewer (and now subscriber) - just staggered by how well researched, presented and edited this content is. 09:47
"The fear of A.I. will eventually subside to caution, and then collaboration, like most things as we learn to live side by side and augment our lives with the power of A.I."
True
Just the ones with money in AI now will have more money and power later.
This is exactly where the problem starts, and why there's a good chance this doesn't end in favor of humanity: a nation that thinks its vision is superior to another nation's and needs to push forward to maintain its superiority. This technology won't be used exclusively for good. Humanity is continuously driven by greed and profit, even to the point of killing each other. Everybody thinks about their own goal, not the consequences of their deeds. I sincerely hope that AI will only be adopted for good, but I fear like hell that won't be the case.
Ok, I actually don't agree with you here. The dude was talking specifically about authoritarian regimes. America has many problems, and I don't believe it's the best nation, but it is objectively and unequivocally orders of magnitude better than China or North Korea. And when I say better, I'm not talking about technological or economic advances; I'm talking quality of life.
Here you can talk s#$% about anything, including the nation, freely. In China you have the oppressive, dystopian social credit system and extreme surveillance. In North Korea, you have to worship the fat man, or your entire generational tree gets wiped from existence.
I do believe it's better that the US, or a western nation has the power of AI, as opposed to authoritarian regimes.
It's not just regimes though, is it? The old adage, "I'm not afraid of a nation with a thousand nukes, I'm afraid of a madman with one..." is applicable. The "Wargames" movie scenario...
I mean, you can apply your reasoning to the cold war. Everyone thought the world was going to end up in flames but that has not been the case (yet haha). But, hopefully we use the same restraints with AI to avoid extinction. If it's possible to create it, then it's possible to control it.
Well people are driven by fear and cynicism, more than greed. Everyone knows that if they pause development - of any technology - that everyone else won't necessarily pause. The idea is that someone is inevitably going to develop this stuff. There is no "stopping" it, at least not as far as they're concerned. Never mind "values", everyone's goal is to make themselves as fortified as possible, for their own safety from other people. No one trusts anyone to actually stop developing stuff like this. Because anyone who stops - that might as well be an invitation for someone else to get an advantage. And how could anyone resist? Trouble is, it makes all people more vulnerable to the very technology they're using to try and defend themselves.
@@jahazielvazquez7264 The only difference between America and China is that one's population knows it's being controlled and the other's doesn't. I agree America does very, very little good, and even that is just cover for everything bad they do. Because basically both countries are controlled by elites, gluttons incarnate, never satisfied no matter how much you feed them. And they are gonna eat this planet with the rest of us on it.
My understanding of AI: suppose we want to build a program that can tell whether a picture has a cat in it or not. The earlier technique was to write if-else conditions by hand, but the success rate of such a program was very low. The newer technique is called machine learning. We write a simple program with the additional ability to generate if-else rules on its own, given a list of pictures and the correct answers (which I'll call a labeled data set), keeping the rules consistent with all the previous labeled examples. With a lot of labeled data, the program automatically becomes very big and complex, and we have found that its success rate at answering whether there is a cat in an input picture is very high.
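A toy sketch of the idea in that comment: instead of hand-writing the if/else rule, the program picks its own rule (here, a single threshold) from labeled examples. The "pointy-ear score" feature and the numbers are made up purely for illustration; real systems learn millions of such rules, not one.

```python
def learn_threshold(examples):
    """Pick the feature threshold that best splits cats from non-cats.

    examples: list of (feature_value, is_cat) pairs — the 'labeled data set'.
    """
    best_t, best_acc = None, -1.0
    for t in sorted(x for x, _ in examples):
        # Accuracy of the candidate rule "it's a cat if feature >= t"
        acc = sum((x >= t) == is_cat for x, is_cat in examples) / len(examples)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

# Hypothetical single feature per image, e.g. a "pointy-ear score".
training = [(0.9, True), (0.8, True), (0.7, True), (0.2, False), (0.3, False)]
t = learn_threshold(training)

def is_cat(feature):
    # The learned "if/else" rule — generated from data, not written by hand.
    return feature >= t

print(is_cat(0.85))  # True
print(is_cat(0.1))   # False
```

The hand-coded version would be a programmer guessing `if feature >= 0.5`; the learned version derives the cutoff from the examples, which is the shift the comment describes.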
Super thought engaging videos. I love this very personal style of compiling hard facts and deep questions into a super compact format. Very well done!
Can’t wait for the follow ups to this. You managed to rightfully concern me and excite me all in one video 😂
One of the main things is not to let the fear mongers win. We can speculate about what AI might do, but we can't predict what it will do. The only way to know is to keep moving forward. When we start limiting what AI can do, more people will go underground doing things that border on, or cross, a reasonable moral barrier.
When I first started programming, in the '80s, it was pretty straightforward. In the '90s I learned about 'fuzzy' logic. Instead of the binary 1s and 0s, a value could be partially 1 or partially 0, giving you different outputs based on the data.
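A minimal sketch of what that comment means by "partially 1 or partially 0": in fuzzy logic a truth value is any number in [0, 1], and the classic min/max operators combine them. The "warm room" values below are invented just to show the idea.

```python
# Fuzzy truth values live in [0, 1] instead of being strictly 0 or 1.

def fuzzy_and(a, b):
    return min(a, b)      # classic fuzzy AND

def fuzzy_or(a, b):
    return max(a, b)      # classic fuzzy OR

def fuzzy_not(a):
    return 1.0 - a        # classic fuzzy NOT

# "The room is warm" might be 0.7 true; "the fan is fast" 0.4 true.
warm, fast = 0.7, 0.4
print(fuzzy_and(warm, fast))          # 0.4 — the conjunction is partially true
print(round(fuzzy_not(warm), 2))      # 0.3
```

With inputs restricted to exactly 0 and 1, these operators reduce to ordinary Boolean AND/OR/NOT, which is why fuzzy logic is a generalization of the binary logic the commenter started with.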
I'm done for the rest of the year, I am only allowed to go into the archives for memories so many times a year and that was a big ask.
To this day I've had some anxiety about what AI is capable of, or what it might be able to do in the near future. Will it change our society? Yes. Will it change it for the better? No idea... After watching your video I think the biggest surprise would be to actually see it succeed. Keep up the good work Cleo.
It will be a mix. There will be fantastical things that you can’t yet imagine… but there will also be severe tragedy and genocide.
we already reached the very limits of technology. To make a real AI (not this simple ChatGPT) we'd need infinite power and billions of computers processing one "brain" together. Only then will artificial intelligence exist. So don't worry, there's not gonna be a robot taking over the world any time soon lol
There's already genocide, so why not try and keep supporting the revolution?
you got the wrong idea from this already disappointingly optimistic video
in a perfect world, every single person working on AI (other than AI safety specifically) would have their computers taken away from them
we're developing an EXISTENTIAL THREAT with ZERO safety precautions and nobody's stopping anyone
@@ts4gv It's our chance at a better justice system and labor reduction. Humans have too many cravings to act responsibly
This was so well done, thank you for making it! AI entering the “kill chain” is a scary thing..
A bit late, but I've been using AI on a daily basis now, and I must say its leap in understanding, in helping our own learning, is very scary. Understanding most topics didn't require much time or prior knowledge; it will lead you in the right direction every time. I'm sure people have experienced this, but are we going to be able to control it when we don't even know what's behind the system, especially when it will just keep getting better by observing? Rogue AI is a possibility.
This channel just keeps getting better and better. Crazy how well the team can make content like this so easily digestible
This was so well done, thank you for making it! ❤
Love the ads countdown, should be a trend. Your videos are always very informative, love your work
This is the first video on AI I've seen in a very long time that calms me down instead of making me anxious. Great video.
The quote "fear of the unknown is the greatest fear of all" greatly applies here. The reason we're all both excited and terrified (me included!) is the fact that we have absolutely no idea how AI will impact society in just the next few years, and even decade. This is a brand new breakthrough in technology that will fundamentally shift the way we operate, for better or worse.
we already know how it’s gonna impact society. it’s literally only gonna be good. there are some bad parts to it but overall it will be good. I suggest you do some basic research on this. AI is gonna kill the current jobs and create new jobs/opportunities. The main issue is shifting everyone over to the new jobs. It will take a couple of years for us to get used to AI but we will be fine. just don’t listen to social media, they are trying to scare you with false info
How AI will impact society ?
What about humans ?
How many have died in wars ?
How many die of starvation? Etc.
The wealthy can create a utopian world on this planet, but that wouldn't make as much money as suffering and strife.
@@SpongeBob-ru8js Yes, society includes humans
I mean, I know I don’t know anything about AI and how it works, but I don’t see why some things can’t just be hardcoded into the machine so it can't do them, like harming humans or making unethical decisions
Kudos to Cleo & team. The protein folding knowledge is one of the best results returned by A.I. to date. I love your trolley analogy, the fear we have is real because the future/unknown has ALWAYS been scary.
I think sandboxing of A.I. will develop in staggering ways to safeguard humanity much like virus/anti-virus code did in the late '70s.
We Need A.I. as much as we need it sandboxed!
Keep us inspired! Thx
Cleo, you're actually very talented, by converting these complex topics like AI and Medicine in easy to understand videos. Keep going 🙌🏻
AI is amazing and terrifying at the same time.
And all because of our systems.
Looking forward to the upcoming industry-specific deep dive episodes! Keep up the great work guys!
My first ever job, at 15, was a cancer research assistant. I got to coauthor a paper on AI helping diagnose certain cancers
Question
Would curing cancer or any of the other diseases generate as much money as letting them continue?
Don't diseases reduce population ?
Diseases are a win win.
Hello from the Czech Republic
Personally, I'm not for pausing A.I., I'm for stopping it completely.
Disease, climate change, etc., all of these have evolved not for mankind to try to stop them with A.I., but to teach us something.
Nature is going to do what it wants anyway, and maybe it would be nice to invest all that precious time, and a lot less precious money, into helping beings with actual brains.
As long as A.I. continues to be promoted, we have learned absolutely nothing.
Thank you for the video, Cleo. You are doing an amazing job and I really appreciate it :)
I think that AI-generated content like photos or videos is scarier and more inevitable; because of it, it will get much harder to prove anything or be sure what to believe.
My interest in art has already started dropping. Was listening to some good music instrumentals, and as soon as I found out it was AI generated, it felt hollow and soulless...
Truth is in the eyes of the Dollar sign.
I think the scariest part for me, which wasn't really mentioned, is that AI actually has goals. It would be impossible for a human to protect himself from manipulation by a superintelligent AI; kind of like a mouse eating cheese out of a mousetrap. At that point it's not the humans controlling the AI anymore but rather the other way around. But how important is the human to the AI? Humans aren't really nice to less intelligent animals.
🌏 = 🙈
Good analogy.
5:09 no one can deny the fact that JS looks just beautiful to be used in a video on AI 😂
*python
As someone who is learning about deep learning models: the genie is out of the bottle. The future will have AI regardless of anyone's efforts to stop it.
I think the worst scenario is politicizing AI, US vs. China. Like everything, it will probably work best for humankind if everyone collaborates. Nice video Cleo🎉
Imagine the AI said the US's capitalist policy sucks; the US would probably delete that AI and make a new one
Yes, the ex-CEO just said "liberal" 😂😂 They think they are the best and everyone should be under their shoes.
They are just so self-centred they wiped out Native Americans in the name of the same liberalisation they are talking about.
This is all just geopolitics.
I think the comments at 07:29 about competitors highlight a deeper issue: the problem lies with us, not the tools we use. Our primitive survival instinct creates the potential for these tools to be used to harm others, even under a seemingly benign term like competition, since the basic ethos of survival is to eliminate the opposition if you want to survive. However, if we teach AI the same moral principles we teach our children, such as love, cooperation, and compassion, it would be much, much less likely for AI to even contemplate human extinction, just as it is when we teach children the value of life.
Super excited for this series into AI! Keep up the great content !
Oppenheimer once said "I have become death, the destroyer of worlds". Little did he realize how much more horrifying the reality of the bomb really is, that his creation may very well be our salvation from abominable intelligence.. rip Oppenheimer, you're a hero for creating the atomic bomb and an even more courageous hero for doing everything you could to stop the arms race of nuclear weapons.
he's a hero for creating the atomic bomb??????? are you out of your mind lol. one of the greatest evils of mankind
@@PK1312 would you rather America make the bomb or the Nazis? He never wanted to do it, he only feared the Nazis would make it and use it. He also pushed to halt the arms race for it after the war.
8:15 This statement proves that the US and other countries are not taking safety seriously. They are just busy trying to overtake their competitors. They won't be bothered if AI causes problems, because winning the competition is the most important thing to them.
12:11
the cynical side of our brains should be staying up at night having constant panic attacks at the overwhelming threat of human extinction
the optimistic side should get us up in the morning to make our voices heard & pressure anyone who ignores AI dangers (in pursuit of financial gain) via threat of boycott/strike/physical violence
I would have liked for you to address how partial that letter advocating for pausing AI development was. At the time of the letter, OpenAI was far ahead of its competition with its GPT3. The letter didn't ask to "pause AI development"; it asked to "pause development of AI more sophisticated than GPT3" (paraphrased). It wouldn't have stopped AI development; it would have stopped OpenAI's development while the signees caught up.
😀😃
8:46 I’m calling BS. They are quickly becoming more restrictive. For example, while using ChatGPT to help develop a word processor program, I quickly ran into a wall, almost as if a major software company was restricting how far you could take it, as if some company was protecting its own interests….. mmmm
That is because you're using ChatGPT. When using AI to write code you have to look at it like a helper: it will do the heavy lifting and the drudge work, but you still need to head the overall project. There are others out there that are far less restricted. Plus, if you have the hardware and the skill (it's not really hard if you know the basics), you can build your own AI server with no restrictions.
AI entering the “kill chain” is a scary thing.
Cleo- I'm genuinely excited about this! I would love to collaborate with your team on this research as it's something I've aspired to for years. Now the big question- what can I do to get involved?
This video does an excellent job of capturing the tension between the immense potential of AI and the existential risks it poses. The analogy of living inside a trolley problem is spot on AI could lead us to incredible breakthroughs or unintended consequences that could be catastrophic. I appreciate how you broke down complex concepts like machine learning and specification gaming into something that’s easy to grasp, yet still thought-provoking. The examples of AlphaZero and AlphaFold are fascinating reminders of AI's power to surpass human understanding in ways we can't always predict. This video has deepened my respect for AI’s possibilities while also making me more aware of the importance of guiding its development responsibly. Thanks for such an insightful take on a critical issue of our time!
If AI ever becomes self-aware, just remember that there are so many planets out there that human beings cannot survive on at all, which it would be totally capable of claiming for itself.
In the neighborhood: 8... sorry, 7 big ones, and about a hundred large enough to fit some decent-sized computers, some even better suited for computers than Earth