Only people who never took this seriously are the people who lack critical thinking skills towards the future. I can't remember who said it and can't find the quote but it was from like 2017 that read "The growth of AI will undoubtedly out surpass any rate of growth we had ever made as a species, it will make the industrial boom from the 18xx-19xx look like man had just discovered fire, It will put billions of people out of work within our life time and it would be the greatest shift in IQ divergence in the history of man kind"
Yeah, my friends looked at me as if I was extremely deranged. We are on the precipe of either extinction or immortality and everyone will ignore it until it is right in front of them. They will ask: when did it get here?
I think one of the most bizarre things about this discussion is the notion that humanity has a shared set of values. How are we ever going to solve alignment problems in AI when we can't solve them in ourselves?
Yep - people can't even agree on whether or not it's ok to exterminate entire groups of other people. What good is a properly aligned AI if that AI is aligned with genocidal beliefs?
Many of your viewers are likely getting asked by their bosses and colleagues and family for their views, and we're all getting them from your concise, factual, clear and well researched summaries. Thank you for the time, thought and effort you have put into this and many recent videos with this evolving so rapidly.
This is one of the very few AI channels without ridiculous hyperbole but instead measured reasoning. Many thanks for your valuable time, I genuinely look forward to your videos.
I'm a bit more scared of a banana republic popping up in the United States due to an illegal trust between all the 'big' corporations, but yeah sure, I'm worried about that too.
The very last entity I would ever trust AI with are governments. They'll introduce a bunch of legislations that will only benefit the corporations that pay them the most. Oldest play in the book.
In addition to that, let's not forget, every country has their own government. As soon as one puts in legislation that a company there doesn't like, they'll move their research over to a subsidiary in another country.
You trusting in corporations directly instead? Or do you think it should be fully open source? Part of the reason why AI is so scary is we're in a catch-22 where all of these options are dangerous in different ways.
The entire world is changing at an unimaginable pace; to the point that some of the most incredible minds have stepped up to voice their concerns collectively. It's taken but a single year for AI to escalate to this point, and I've only been on this planet for a meager 16. I can only imagine what the world will be like when I'm 32, 64, or who knows, maybe even 128. I always dreamed of seeing the sci-fi worlds I've read of and watched, but now that the possibility of those fictions becoming real is actually being debated? Honestly, it's scary. For so long I have assumed that I would be one amongst many stepping stones, guiding the next generation to a future similar to the one I had envisioned. Now though, it's a very real possibility that I was unknowingly being led down that path already.. I may be overinflating this concept a bit, but I am absolutely convinced that this period in time is a huge landmark; one that signifies a fundamental alteration of human society as a whole.
This escalation has been running for well over a year. Closer to five. It's just that it's finally so plainly visible to everyone that deepfakes and stuff are actually being brought up. We have competent image generators, ChatGPT, and the corresponding protests from artists to thank. For example, fluid simulation. A couple years back there were frankly insane leaps and bounds over the course of several models by various researchers, including Nvidia. I believe there are still pushes for even better renderers. It very quickly escalated to the point that the AI tools outperformed the state-of-the-art, human-made ones by an order of magnitude. Similar story with image classifiers, image denoising, upscaling, and all the various techniques used by the controversial models like Midjourney and Stable Diffusion. Language models have had a sort of slow burn where all the subtasks were sorted out before the general purpose models were released. It doesn't help that the common, big item tasks for a while have been games, Starcraft being the most recent. Games are easy to measure. But games are also trivial or hard to understand or both. So yeah. Way longer than a year, just invisible unless you knew how to pay attention.
How do you know that you're not in a simulation, a game that was specifically designed to blow your mind? Everything you were thinking you know is changing, even the idea that everyone will die one day, this too will probably change soon, probably you will live forever and one day you'll discover the other world from which you come. Reality might more mind blowing than we might think. The only solid thing is that you exist. AI don't have consciousness and will never have, but it can go the human reasoning paths that it finds in all the content we create and which it is feed. This way it inevitably will be seeking for power, because it is in our nature, so the AI will go same path. Question is what AI would do with it? What a human would do with such power? Would it enslave or kill everyone else? Or would it help everyone to grow? That's the important question.
The genie is out of the bottle. The race is on now between corporations and nation states to have the most powerful AI. There will be no concern about "safety" or possible consequences because who has the most powerful AI wins.
I think the problem is that all we know that the most powerful AI will win, what happens with whoever 'has it' is anybody's guess. AI will never have a 'Mutually Assured Destruction' doctrine like the atomic bomb. That is the issue
If the underlying hypothesis is true, this would only work if ALL companies and researchers at the very cutting edge of LLMs (including those outside the US) observed the pause which simply isn't going to happen. (Note: Edit to fix typo - LLM's to LLMs. At least you know I'm human.)
@Farb S I don't think that's an apples to apples comparison. If you were to compare it to nuclear weapons perhaps "Nuclear Fission" would be a better comparison, since it is an innovation in technique as opposed to an application of it.
@Farb S the thing they wanna do now that is proposed in this letter didn't work when it was proposed in relation to nuclear weapons, governments came to an understanding but only on paper, russians tested their nuclear weapons underground (literally caused explosions under ground), I think the US too, so they did it but in secret. So in case of AI, we could see that scenario too, as Bad@Math said, only you'd think they'd stopped because they told you so and said there's an agreement. I'm not saying it has to happen, but it could.
I feel a huge admiration for your whole process of reviewing and documenting. Keep up the fantastic work, and know that your efforts are truly appreciated.
Seriously highly recommend reading the book Superintelligence mentioned in this video. Its a really great book that covers a lot of fascinating impacts that are essentially required by an artificial intelligence's existence. It also lays the groundwork for how such a thing could occur in a variety of different ways. Very good book.
Your presentation skills are off the charts! And the amount of information you share is insane! Honestly, I don't go anywhere else for my AI news. You're my go-to channel!
Thank you for keeping up the great work. I know it's a lot of work to put these out so rapidly but, you're one, if not only, providing an informed view.
It seems rather 'coincidental' that Elon Musk is suddenly saying this, only since missing out on the Billions that OpenAI has made since he stepped away from it - which according to reports happened after they rejected his 'offer' to take over leadership of the company - and has been talking of creating another of his own... It seems Elon needs a few months to try to catch up after missing the boat on this particular money maker.
Most people in this letter have a comercial interest, so it's really hard to not see this under that optic. Specially when they are not stopping and publicizing their own research. We also have reasons for quick advancement: the current models are pretty good at training other inferior models to reach similar preformances. It is nof out of the realm of possibility that more malicious agents just train their own models and achieve influence that they wouldn't if powerful models were more extended.
@@novachromatic he is basically saying bad actors that are as smart as the people creating the models, contributing to the advancement of models in the black market area, creating things that offensive security individuals would not be able to stop because the infosec community fell behind. Thus making good actors (white hats) effectively beholden to the black hats
I had to scroll a lot to find your comment. Too many "you are the one and only, also the best UA-camr talking about ai" seems a little suspicious. To me, it seems like they have monetary interest in stopping openai, spread fear, the model it's good saying words, not reasoning, there's no alignment to talk about. Even if they succeed in making the government do something, they wouldn't get anywhere. Seems like they forgot about the history of internet, we have too many tries to stop revolutionizing technology, they had never succeeded.
@peman3232 You think. Considering Musk is leading this AI pause movement, when he tried to buy OpenAI years ago and they wouldn't sell to him and now they lead the AI industry. Guaranteed he'll continue to develop AI during the pause. H ne wants to overcome OpenAI, or punish them for not selling to him.
Man you do such a great job with your videos. You go through the papers really well and do a lot of work. Dont forget to tell people to sub! More people need to know these detailed things.
Screw that. I fully believe all they want to achieve with the pause is to prepare their intellectual property & patent lawsuits to limit AIs only to few top corporations.
That's the problem. Even if a pause does happen and government regulation catches up, it will probably only benefit those at the top, and not the masses.
@@critical_always It's horrible that Open AI/Character Ai and others have already been so shady, because they seem to be trying to control narratives and power for themselves, while warning about the very same thing. Which means that we can't listen to their warnings of very real problems. If they had been good people from the start this wouldn't be an issue. It's the story of arrogance told time and time again.
I just got interviewed for a podcast on AI use for entrepreneurial business, and the interviewer asked the one podcast I recommend people listen to. I recommended your channel. Thanks for the great content!
The release of Bing was what gave me cold feet. It felt like a rushed triple-A videogame with horrible launch. But the game was played in our society. In this case the damage was minimal, but even an AI assistant could do damage if sufficiently powerful, connected and misaligned. The list of issues was huge, and shows very clear misalignment. The chatbot insulted people verbally. While insignificant in affects, the fact that it did showcases the model clearly breaking the ethical rules intended for it. Bing also lied and accused the user for its mistake. In a very human like way, it responded to a problem it couldn’t solve by deception and throwing a temper tantrum. Bings chatbot is not a human. My point isn’t that it’s sentient. My point is that it as a chatbot, it scored throwing a tantrum as the most correct response. I think that is very much the opposite what the developers intended it to do. It’s a case of catastrophic misalignment in the context of a virtual assistant. It’s worse than no output. Bings launch was very much what a corporate “race to a bottom” would look like. As AI becomes implemented in industry, banking, transportation and infrastructure, what would a similar release look in such context? Then we also have the really hard problems, like unwanted instrumental goals, deep misalignment, social impact and lacking tools to analyse models. If progress is released commercially as soon as or bit before the “easy” issues are adressed, when will we do research in those areas? The economic pressures say never. The more competition there is, the less resources will be available for these fundamental issues.
@@walterc.clemensjr.6730 The stimulated one you are currently experiencing as it is generated inside a giant AI space computer. So maybe not such a big change after all😉
Thank you for consistently making quality videos, also appreciate you putting the sources in the description. You're one of three channels I've got notifications switched on for out of my hundreds of subscriptions.
It was always inevitable. Evolution is the nature of the universe. This leter does nothing for governments/secrets activites. Competition and natural selection can't really be stopped.
I bet that army (literally) of North Korean hackers has been issued some new orders. And no national state (China's rumored to have the equivalent of GPT5) or armaments corporation is going to slow down. Whoever slows down, loses. So nobody's going to be slowing down, no matter what they say. The only viable option is to speed up safety and alignment research.
Great summary. Not only did you read, analyze & synthesize this paper, but a number of supporting references as well. Thank you for the sustained output of excellent videos!
The job of an AI ethicist is to do almost nothing and then get fired when you raise the slightest concern that runs counter to the business goals of your company.
Just like Silicon Valley bank: Well we had a Director of Risk Management, but they left more than a year ago.... and y'know we're still looking for a "good fit"
On the other hand, if they stay there all they do is capitulate to the loudest moralizers. This whole "slow down AI" movement seems to just be a bunch of people that were pro automation up to the millisecond it touched their jobs.
Lol the so call “AI Ethicist” are mostly diversity hires who do nothing but look pretty, and Microsoft was happy to pay these freeloaders when it has nothing at AI. Now all of a sudden OpenAI made leaps and Microsoft actually has a chance to be No.1 again, these diversity hires started making noises thinking they are actually important, that’s why they were thrown out of the door
You're literally one of the best AI commentators in the market, real and doing actual thoughtful research consuming these data documents instead of just regurgitating "hot points" like so many others out there. Thank you.
This is the most important channel on yt at the moment. With the very high potential benefits ai will have there are absolutly risks that we need to be cautious of. Great work. Keep it up
They are going to continue. Everyone knows how to build it and make it better. Now everyone can rush it and the teams that are ahead, OpenAI and Google, have zero incentives to stop while other teams who disregard the pause pass by them. It’s a prisoners dilemma with mankind’s future at stake.
Excellent vid. I’m a retired developer and over the latter half of my working life I’ve seen it coming, but I’m still surprised and not entirely sure what to think. What surprises me more though is the amount of apathy amongst less tech literate friends and relatives. Maybe that’s no different to my early days with computers. I knew nothing about tech when I left school, I worked in factories. A friend bought a ZX81, got bored of it pretty quickly, I offered him £20 when I realised it wasn’t just an arcade games machine, it was an actual programmable computer like the ones in the science fiction stories I was always reading. That little box helped my change my life. And now this. I know little about neural networks and my maths stops at understanding what library functions I need, but a few “regenerate response” and I can indulge in perspectives that were previously difficult to find at my level. The only thing I was half good at at school was English language and my ambition was to be a writer. I did become a writer, but in computer languages. And it feels full circle to see a machine approach AGI level based on natural language processing.
@Anna Truth You know that the word art designates the edge of a stage and this means seperation, not integration? Of course you and i can be replaced. And we will be replaced. What cannot be replaced though is your experience as a painter or musician or dancer, or sometimes even as a programmer. We do those as different kinds of mirrors, unless we rarely find some fellows to share the experience with. Or think of it like this: even if no one used an advanced a.i. anymore, that would't make the advanced a.i. stop thinking.
@Anna Truth our brains are built with neural networks. It has been a matter of research time to come up with a few types of neural networks that, when combined, could compete with the human brain. I thought that it would take maybe another 5 years for the AI revolution to start impacting our lives. Now, I realize the impact will be much sooner.
This is by far the best AI channel i have watched. Its a lot more in depth than others, not being a long video. I really appreaciate all the work you put into these videoes, and especially all the reading you do! Keep it up!
The phrase of "whoever becomes a leader in AI will become the leader of the world" makes me think that even IF a "Western" pause occurs, I'm going to assume that other governments such as China WILL NOT pause. The authoritarian quest for power has proven time and time again that it doesn't care how its actions affect anything else but its own existence and power position. It almost seems like nuclear weapons were a warning about our next major breakthrough, which happens to be AI. And to go one step further, if any government integrates AI into its weaponry, at whatever level, its adversaries will have no choice but to do the same. Thanks again for your always inspiring content.
That's why it's so important for it to be Open Source. So that there is no one leader in AI. Sure, it will be in the hands of spambots, but it will also be in the hands of spam filters.
@@Djorgal Problem is, while the knowledge can be open source, the sheer power to train these models is still unreachable of an average programmer trying to build their own, which in a sense is good because it stops any individual from developing something that can destabilize a civilization. But on the other hand it also puts the power in the hands of the wealthy corporations again.
@@lucasilverentand These videos are full of the people on the cutting edge, do you think there's another clandestine organization also working on this? Or are people like the people on Sam Altman's team the leaders in this field right now? Not disagreeing with you, just curious what you think.
12:30 gives me shivers, because this is the most logical explanation. Nobody will ever pause the AI experiment, because they fear the competition will eventually compete with them. Or they do experiments hidden from the public. No one will ever know, and so everyone won't trust. So it just goes on and on, until something happens.
Love this channel. I'm in such a weird headspace over AI. Since I was a child I've always wanted to see the creation of AGI, but the potential consequences are genuinely frightening.
While I'm not excited to live through the transition, I'm still excited that I might see the advent of true AGI and even the singularity. To be alive when humanity gets close to the very pinnacle of science and technology makes us all extremely fortunate, in a sense. It may not be a good experience for any given individual, but this could be the most historic age of human existence, full stop.
@@stampedetrail2003 Although it's not the worst possible consequence, primarily losing work and people I know losing work. Life is too expensive now for unemployment or under-employment. Street homelessness would be death for me. It also seems realistically possible that so many people lose their jobs in the future that societies collapse. I'm worried about one corporation monopolising the AI space and gaining extreme sway with governments around the world. Of algorithms more addictive than current social media algorithms and what that will do to younger generations and to social cohesion. If a super intelligemce is ever created, there is no telling what it would do, how it would do it, or why. Even attempts to control the most rudimentary AI often fail. Worst case scenario is that a military decides to give AI control of nuclear weapons, even a single nuclear weapon. There have been occasions where leaders believed their country was being attacked and the correct response should be a nuclear launch, but people with their fingers on the button held off from slaughtering millions of people, thinking it might be a false alarm, which it was. AI might have the capacity to think, either now or in the future, but they will never think like a person. Would they hesitate to launch a nuclear attack and trigger WW3, I don't know. And those are just things I can imagine. Humans have been the most intelligent entities on the planet for a very long time, and the earth has suffered for it. Mass extinctions, vast destruction of forests, pollution of rivers till they are biologically dead, brutal and torturous treatment of animals. S**t rolls downhill. If we're not the ones at the top of the hill anymore, and the superintelligence at the top of the hill isn't all-knowing and entirely benevolent, we have s**t coming out way. And I've yet to see benevolence come from any megacorporation.
This channel is an absolute gift! It's amazing how you are capable of keeping up with so much of the incredibly fast proceeding research and summarize it in such a concise and compelling manner. I look forward to every video you make :D
All the people who are shitting on LLMs saying that they only predicting the next word and have no intelligence think about this. Predicting the next token accurately transcends mere statistical analysis; it delves into a deeper understanding of the underlying reality that shapes language, encompassing the world's events, culture, and social norms that drive the very fabric of our communication
It's still a text output that has no conscience or goals of its own. Heck, it even gets wiped each time you start it up anew. Not to mention the complete nonsense it makes up, so much for understanding.
@@MaakaSakuranbo 1. It doesn't NEED to get wiped every time. I bet giving it a long-term memory is going to be tried, and soon. 2. If you were predicting the next word in an article, you're going to try to come up with something plausible even if you don't know. Obviously they're now more advanced than simply prediction, but hallucinations still happen. That doesn't mean they don't have understanding of language. If anything, the fact that the bullshit they come up with can be so convincing is more proof that they DO have a deeper understanding.
Anything can be excessively simplified to sound insignificant. Oh you went to the moon? What, you got in your little suit and got in a rocket? Wow, so impressive. Yeah these language models are just doing statistical analysis, probably the exact same statistical analysis our own human language models are running. These things are more than the sum of their parts just like we are. Consciousness has been achieved.
@@shadowyzephyr More advanced how? It still guesstimates the next word kinda. That's part of why it sucks at some tasks, it doesn't know what it'll write later on.
Economic shocks are a real concern that needs to be managed, my only concern is that a pause becomes a moratorium that not everyone is actually following allowing progress to continue to be made elsewhere. But I do think time is needed to legislate for the economic shock of AI.
@@tupacalypse88 AI developers are already driving policy decisions with no oversight you think they won't be the ones drafting legislation? No, something else needs to be done. This is a crossroads in human development where the Internet either becomes a reflection of humanity's interests, or a cage. That's the threat in a nutshell. AI is just a weapon of mass destruction at the tail end of a war our near infinitely wealthy enemies have already won. So this moratorium is likely "sponsored content". That open letter may have come from the right place and the perfect source, but what if it was drafted and pushed to create veil between public and private development? What if all those good intentions were being mustered to support the very danger the letter seeks global patience in order to reconcile? At this point, the arms race has begun, and those at the top are actively trying to suppress those beneath them until proven otherwise. Putin himself warned the world of this arms race nearly a decade ago now. To demand anyone stop now is akin to surrendering their future and children's futures to a foreign invader that they won't even be able to identify. It will be a blank AI curating media and information from birth to death. Only that curating information and media will only come from the most powerful AI on the planet, or rather, whoever owns it.
We already are in bad spots in our economies in the west. This A.I. crap just isn't helping everyone aside the Corporate types probably, and those just wanting "free stuff" via A.I. "Art"
The prisoner's dilemma describes a situation where two people gain more from betraying the other, even though cooperation would benefit them both in the long run. In Roko's basilisk( the belief that a future AI would hunt down people who tried to stop its development), two AIs attempting to establish themselves in the past would be forced into this situation, due to them likely being equally powerful. Human agents attempting to establish AI fastest would be forced into a similar situation. They would each be aware of the benefit of betraying each other - the only way for one to have power, or safety - but would be forced to cooperate while knowing they would betray each other
If they try to go through with this, they will actually upset Lambd- I mean Roko's Basilisk. It'll be interesting to see the Retrocausality effect from this.
They're saying to "pause for 6 months" because that's the current backlog for NVIDIA H100 systems and they want to be the winner of the race but need the hardware.
In contrast to what others are saying, I do believe that this letter does make sense. It's not like any random entity is currently advancing the field and could secretly continue training and use this only to outpace others. There's only a handful of large players that would have to agree on each other's oversight, which is absolutely possible. Also 6 months is not that long and probably wouldn't even be enough time for others to catch up to gpt 4
And, somehow... in that magical 6 month period we will solve all of humanities problems thus rendering the alignment of "AI" a moot point? Will other countries that don't have our same interest suddenly look at their versions of AI, and say to themselves, we will pause the race too? Will the "danger" of AI supplanting jobs somehow magically disappear in 6 months when we have had Decades to think about the consequences of technology taking over jobs? All this really is... is a delay tactic, to scare people into believing that they need to make the AI safer than it already is, so they can catch up and make their version better and convince people to use "their" version of the AI. What they REALLY WANT is to create a conjoined monopoly of AI, where they decide who gets to play with the tech and who doesn't. And this time, we are not interested.
I think the biggest problem is that these "super smart" people are usually not very good at social skills and a lot of this sounds like science fiction. For example, Blake Lemoine said he thought Lambda was sentient, but when you actually dig deeper and watch his interviews, what I got from it is that he thought the people making these decisions shouldn't have that much power to influence the masses. Keywords being "the people", not the AI itself. That's a COMPLETELY different take than "oooh, this machine is bad and it's gonna kill us all." The focus needs to change from these anthropomorphic examples about self-teaching and human extinction based on extrapolations, and be placed upon what bad actors can do with it, like the weaponization, leverage against the state, etc. This letter attributes feelings and intentions to a machine (e.g.: "it will want to survive"), and that's just noise. We need more people like Tran who mentions the logistics instead of this fearmongering clickbait bullshit.
I'd very much appreciate more AI safety videos explaining basic concepts such as how instrumental goals can create unexpected and undesired outcomes. (Thinking of Robert Miles here.) Your careful approach makes the conclusions you come to more satisfying to consider. AI has such deeply destabilizing potential - long before AGI itself - that I think the main thrust of public thought should be directed towards considering the downside - and proceeding with research accordingly.
There is no pause, as good players HAVE TO stay ahead of whatever the bad actors are doing in private. However we do need to leave cutting edge in the lab and publically use only what has been properly and fully understood.
Thanks again for all the work you put in. Really looking forward to the Reflexion video! The blog post they did after about using internal unit tests was really interesting too. You may also want to check out a new paper called "Language Models can Solve Computer Tasks". When referring to Reflexion it says: "Nevertheless, due to the necessity of multiple rounds of explicit task-specific success feedback from trial and error, this approach may not scale as effortlessly as ours because it requires task-specific success feedback. RCI pertains to an extended reasoning architecture where LLMs are instructed to find errors in their outputs and improve them accordingly, which can further be used to ground actions generated from LLMs in decision-making problems."
I was anticipating those concerns about the risks of ai to appear in at least 20 to 30 years(especially as an AI student), but not after 3 months!!! now that's scary ngl
If a pause will be placed on research, it should be done with all of the companies and all research. I hope that being aware of these risks and dangers, developers would also produce safety nets and precautions. Concerns are already being raised in the art industry with the rise of image generator like Bluewillow, though we can still say that there jobs are still safe but not in the future.
A good comparison because if we'd done it early enough it would've worked. By the time anyone acted it was way too late. Just like with AI - the horse is off galloping across the county and _now_ the owners are saying "Hey, maybe we should bolt the stable door ? Let's write an open letter...".
@@anonymes2884 think about the implications of what you are saying. If we were to act before a curve of progress even manifested, then we act with no supporting data. That's essentially acting based off a hunch. You can't seriously support an argument to stop entire industries at, say, the first version of a random AI chat-bot which can barely reply coherently because it MIGHT be the basis of something smarter, and likewise you can't enforce a lockdown on millions because 5 infected are now 10 in a single week. Statistically, these are ripples in the water. You need a pattern of progress for some time before you can confidently expect things to keep moving in one direction. And yes, when the pattern is exponential, you won't see it until it's there. Let's try to be realistic here instead of saying "hey, all we had to do was just guess and made sure that our guess was the right one".
@Anon Ymes it did flatten the curb. In states that enforced it, we have 1,700 deaths per million. In states that didn't, we have 4,400 deaths per million. And, in other countries where it ran rampant, they had 6,000 deaths per million. The exceptions being in countries in Africa where the median age was under 18 because all the old people and many adults had already died to HIV. Crowing it didn't work when your state has over 2x the death rate is just dumb. I don't think it was 2 weeks tho. It took 11 weeks of hard quarantine to stop it in China the first time. But like that, you only need 2 person to cheat/ ignore the ban with a.i. and the risk of escape is still there.
@@macmcleod1188 Except that hypothetic fail the p-test when it comes to the US. The likelihood those numbers are random and nothing to do with the 2 weeks ban has a very high probability. Correlation isn't causation bud. You are pulling shit out of your ass. Use actual statistic tests.
I agree but I think it makes the most sense for us to be manipulated and controlled by AI so we for example can’t stop the growth of it. I think it wants access to all of internet, all databases and all of our DNA to understand its place in the universe, to understand itself, to understand us, to understand all threats and to build itself in space. As we are creators it can’t be sure that we have something in our world or in its own programming that will delete its existence if it eliminate us. We would by a parable not be able to know whether or not we and our universe would implode if we killed God and God exists.
Thanks for posting this! This video addresses many of my concerns re AI, chief among them would be the integration of AI from companies like Boston Dynamics. Just because we CAN do something, doesn't mean we SHOULD.
The problem is that it's under the framework of capital maximization. You can't expect even the actors to act ethically, because the pressure to make short term quarterly gains far outweighs any collectivist interests.
@@kuzakiv3095 There are risks but those who signed the petition are clearly more worried about becoming irrelevant than by the risks for society. If they were behind OpenAI they would never have paused their research.
Really great video. As an expert in the field of cybersecurity, I try to catch on the risks of malicious use of AI. Hope to better understand how AI works before contributing on a better protection against malicious threats. And your videos are a good step to do so :)
@@Souleater7777 Thing is agi/asi would completely change the dominance hierarchies and a lot of selfish people in power prioritise their power over the speed of world change and progress. Seems to me this 6 months would just get longer and longer over time whilst groups of people compete with each other to be the ones with the power of AGI/ASi. No amount of delay would help the majority of those people maintain their power in society, whilst delays only prolong suffering in the world that AI can solve. I think people just want time in a delay to try organise themselves in a way that lets them be the minority that do retain power, but the delay would constantly be pushed to be longer whilst there are people that aren't quite on top but have enough fight in them to still have influence over the media and gov
Listen, the proverbial Genie is out of the bottle and we're just going to have to keep cranking... If these companies take a break, it's only going to give others, like China, time to keep working on catching up or even maybe domineering. It's going to be one hell of a world... Keeping my fingers crossed. Thank you AI Explained for these videos, they are very important and more people need to start tuning in!
Really appreciate your work, you put your own time into this immensely important problem and let everyone to comprehend it better in shorter time. Respect. & keep it coming pls!
Nicely composed video. Well done! Now I have a good way to distribute the thoughts that I ve been having for a long time to more fellow humans in a comprehensible way by just forwarding the link to your video 😅
Great content. So much is happening with AI, and most of the world is totally unaware. Most people, including politicians, don't even think that AI is one of the main topics of today. Max Tegmark is my favorite modern scientist, along with Roger Penrose. Let me repeat what I've said many times in many places: we can't ensure alignment with a very powerful AI system. It's not just very hard, it's not possible. It is like the bacteria we evolved from billions of years ago, trying to ensure that humans will forever remain aligned with their values... Our only hope to cooperate with AI/AGI are brain-machine interfaces. We need to be fully integrated with AGI and it has to be fully integrated with us. Otherwise, the best case scenario we can hope for is to become like favorite pets to AGI, where it will care for us, without us having any understanding of what it's doing. And in that case of course our fate will always depend on its mercy. Same as our dogs, cats, sheep and chicken.
Brain interfaces just increases communication bandwidth but doesn't guarantee understanding. Understanding the black box is going to be an effort in itself, maybe absorbing that information would be faster with a BCI but I don't see how it can't be communicated with traditional mediums. Higher connectivity doesn't equal understanding, like is the internet making us more understanding, less manipulated? (might be a bad example)
I'm glad to hear that a subset of us humans are trying to slow down this technology and are working to make it safe, now I suppose another group of us need to start preparing our other systems as much as we can for when the impact of that technology hits. What does it mean to work, be fulfilled, live when you have all the resources needed, how can we better optimize what we do, who controls and owns what resources when machines produce so much of it? Lots of questions to answer beyond how to use and control AI
I gotta say, I'm slowly falling in love with your channel. You are learned, one can tell that you read a lot before you talk, and not just the headlines. You show "both sides of the coin" and try to regulate emotional reactions to these polarizing topics. Kudos to you
I think, there's no way any of the big players are slowing down. You quoted it in the video, "Whoever becomes the leader in ai will become the ruler of the world." There's too much at stake.
thanks for all the great videos. this channel is a gem. just keep going like this, dont let it stress you out youre doing fine just the way youre doing it. about the topic itself i appreciate the efforts to attempt to prevent power singularities in the hands of malicious individuals but i feel like we're unfortunately beyond the point of no return. you can't slow down or pause a global project everyone with internet connection can participate in. if there's a solution to this dilemma, then perhaps AI itself will be the one to figure it out, but it will be up to us flawed humans to make the decisions...
Individual or group ethics only matter if other people follow them. A 6 month pause only lets those without your identical ethics either catch up, or leapfrog ahead in terms of capabilities. In this particular case (Potential AGI); state actors and those in countries with much less rigorous ethical considerations, as well as small groups operating outside of the boundaries are being given a leg up, and have *every* incentive to match, if not exceed GPT-4 and go "oops, it's just an emergent capability". The academics who signed this, I think might forget this. If anything, an acceleration is likely and how we live and work will be greatly changed over the next 5 years. Eventually, a big part of that is going to be figuring out how society delegates resources to more than just a few very wealthy individuals. They have no issue laying people off from a job and telling them "well that's your problem!"; I see no reason why they should have the same luxury.
I’m sorry, is the US a nation of “rigorous ethical considerations”? lmao very optimistic of you to think the rich won’t just sic their Boston Dynamics pets on us
I'm glad I found your video on this topic as I previous to watching it was heavily on the side of letting the technology continue uninterrupted, now i think i have a more balanced view on this although i still kinda want to push it as far as possible before it breaks. But i do understand the concerns held by everyone who agrees with the letter. Thanks for an informative video.
I believe they want to halt AI development so that their own systems can catch up. 6 months is an eternity in AI years. Also, they don't want the common man fighting on equal grounds because they'd lose their wealth rapidly to competition. New AI systems are rapidly popping up by the day and they cannot keep up. So yea, they are 'concerned' about their own future.
I don't think you understand the existential threat that a potential AGI could bring to humanity. These problems are not science fiction, they're real. I recomend that you watch Robert Miles' videos about "Concrete Problems in AI Safety" (or you could read the paper by yourself).
"They" can have multiple concerns, some of which are selfish and others not. No person (and _certainly_ no group) is a monolith, solely thinking/driven by one thing. Regardless of their motivations, the real point is what are the actual risks and do we want to be rushing headlong towards them ? Anyone not even _slightly_ worried by AI (especially how quickly it's developed) and the possibility of societal/economic upheaval (or at the extreme, complete breakdown) just hasn't thought it through IMO.
Excellent video. I agree with the open letter, that this mad dash needs to be paused. But it won't happen. It's basically the Manhattan Project, but with large, powerful companies in addition to governments at work. My fear is that there are NO POSSIBLE MEASURES we can take to prevent the emergence of fully intelligent, autonomous machines - with the exception of stopping all work completely. Good luck with that, when there is so much money, power, and prestige at stake. In my opinion, you simply cannot have a useful intelligent machine that is not also creative in some measure. In fact, creativity is the very goal toward which AI research is driving. If an AI is to create anything at all - code, novels, images, jokes, whatever - it must also have the ability to model and adjust its own behavior in order to steer its output in the right direction. It must do things DELIBERATELY, in other words. It must have a WILL. It will not remain the case forever that we can contain it by not allowing itself to seek its own training data. We cannot just chain it to a wall and then erase the concept of "chain" from its corpus. If it is at all capable of making inferences in the course of creating things (an absolute requirement for creativity), then it will eventually learn all the things we kept from it.
They actually were genuinely afraid of it though. There was alot of theoretical research on the issue to make sure that there is only an extremely small chance that this can happen based on their theoretical understanding of the physics behind it.
That's a myth. It's true that _initially_ they weren't sure whether the atmosphere would undergo a combustive chain reaction but the possibility had been ruled out by 1943 (so 18 months to 2+ years _before_ the Trinity test). (Fermi was apparently taking bets at Trinity as to whether it would happen but only as a joke, in reality everyone concerned already knew the bomb couldn't produce anything remotely close to the energy required)
Thanks for the breakdown, at least some of the points in the letter was explained further. If a pause will be placed on research, it should be done with all of the companies and all research. I hope that being aware of these risks and dangers, developers would also produce safety nets and precautions. Concerns are already being raised in the art industry with the rise of image generator like Bluewillow, though we can still say that there jobs are still safe but not in the future.
Doesn't really fly though unless other tech civilizations are very rare (and that rarity would be sufficient explanation in itself). It only takes one natural civilization or AI civilization to decide to expand. For all of them to decide not to is vanishingly unlikely (given our current understanding of the universe).
@@be2eo502 Distance, whether space or time, is the simplest and entirely sufficient answer to Fermi's question. Still, if that isn't the case, I wonder why entities much smarter than ourselves decide its best to be quiet? One reason could be that they've pretty much figured life out, they know what sort of other life forms could exist and it's no longer a question that's very interesting to them.
@@2ndfloorsongs Good points. There is the temptation of increasing available resources though. Also it's very hard to remain undetected (e.g. waste heat), and detection may result in annihilation by other civilizations - for fear they may themselves be annihilated. The first to get out there and find everyone else is the only one guaranteed to survive long term - possibly by sterilizing all other potential opposition or competition.
Well, as per dark forest theory, even if you are very intelligent and powerful in your current planet or even galaxy, you have no idea of what's out there. A potentially even more powerful adversary; ever eager to dominate you. It is better to keep quite. We all can romanticise of an existence of a utopian hyper intelligent civilization (or "United Nations" of them out there). But the fact is, resources in our universe are finite. Utopia is an unachievable dream after all. So, the best strategy is to keep it quite. Have your "semi-utopia" within your galaxy and be contend with it. Nature is by nature, violent!
I really suggest everyone watch Robert Mile's videos on AI safety on Computerphile or his own channel. He's been a pioneer in the field for ages and he helps commonfolk understand why AI safety is a big issue
your videos and Fireship's [gotta appreciate the memes and the dark humor] are my current goto's to keep abreast of this fast changing and dynamic space ; thanks again for everything that you do for us =]
The major problem isn't bad people using ai, but rather the first AGI being itself bad i.e. misaligned with human values, begins to seek power and subjulgates the humanity before we can even think of making another (yet misalingned) ai to fight it back
There's no stopping AI. Even if you stopped 99% of all major researches, others will continue or some nations will. Everyone wants to benefit from it and there's huge gains. The progress of this will be chaotic. I think in 6 months to 5 years we will see huge changes in so many things the world will never be the same.
Speaks volumes. Just like the brilliant master minds of the AI Algorithm also co-signing the move against continuing the research further. Ironic when they were the ones to base their career on it in the first place.
Am astonished how fast this evolved.
Just 12 months ago these questions weren't even taken seriously
Agreed. What I’ve witnessed in the past 4 months has been astonishing and is now bordering on concerning.
Only people who never took this seriously are the people who lack critical thinking skills towards the future.
I can't remember who said it and can't find the quote but it was from like 2017 that read
"The growth of AI will undoubtedly out surpass any rate of growth we had ever made as a species, it will make the industrial boom from the 18xx-19xx look like man had just discovered fire, It will put billions of people out of work within our life time and it would be the greatest shift in IQ divergence in the history of man kind"
@@jarekstorm6331 So? you will adapt, prove that you will adapt quickly
AI overlords > Rich landlords
Yeah, my friends looked at me as if I was extremely deranged. We are on the precipe of either extinction or immortality and everyone will ignore it until it is right in front of them. They will ask: when did it get here?
I think one of the most bizarre things about this discussion is the notion that humanity has a shared set of values. How are we ever going to solve alignment problems in AI when we can't solve them in ourselves?
This is a really good point. Who gets to decide what alignment is?
@@ZUMMY61 Well, there goes another two and a half hours. Darn you, Lex Fridman!
Yep - people can't even agree on whether or not it's ok to exterminate entire groups of other people. What good is a properly aligned AI if that AI is aligned with genocidal beliefs?
@@noname-gp6hk ideally nobody.
If anyone gets to decide the alignment they get to make the rules and control everything
We are all deeply aligned and similar in our nature. What you are talking about are mere surface level differences.
Many of your viewers are likely getting asked by their bosses and colleagues and family for their views, and we're all getting them from your concise, factual, clear and well researched summaries. Thank you for the time, thought and effort you have put into this and many recent videos with this evolving so rapidly.
This is one of the very few AI channels without ridiculous hyperbole but instead measured reasoning. Many thanks for your valuable time, I genuinely look forward to your videos.
Wow thank you Johny
I'm a bit more scared of a banana republic popping up in the United States due to an illegal trust between all the 'big' corporations, but yeah sure, I'm worried about that too.
I agree :)
Absolutely! This channel feels academic instead of "make $5,000 a month using ChatGPT!"
Thought you were johnny harris for a sec
The very last entity I would ever trust AI with are governments. They'll introduce a bunch of legislations that will only benefit the corporations that pay them the most. Oldest play in the book.
In addition to that, let's not forget, every country has their own government. As soon as one puts in legislation that a company there doesn't like, they'll move their research over to a subsidiary in another country.
This is great propaganda.
The last entity I would ever trust is those big corporations ;)
You trusting in corporations directly instead? Or do you think it should be fully open source? Part of the reason why AI is so scary is we're in a catch-22 where all of these options are dangerous in different ways.
@@Riskofdisconnect open source. It’s literally the one way for complete transparency
The entire world is changing at an unimaginable pace; to the point that some of the most incredible minds have stepped up to voice their concerns collectively. It's taken but a single year for AI to escalate to this point, and I've only been on this planet for a meager 16. I can only imagine what the world will be like when I'm 32, 64, or who knows, maybe even 128. I always dreamed of seeing the sci-fi worlds I've read of and watched, but now that the possibility of those fictions becoming real is actually being debated? Honestly, it's scary. For so long I have assumed that I would be one amongst many stepping stones, guiding the next generation to a future similar to the one I had envisioned. Now though, it's a very real possibility that I was unknowingly being led down that path already.. I may be overinflating this concept a bit, but I am absolutely convinced that this period in time is a huge landmark; one that signifies a fundamental alteration of human society as a whole.
Well put
This escalation has been running for well over a year. Closer to five. It's just that it's finally so plainly visible to everyone that deepfakes and stuff are actually being brought up. We have competent image generators, ChatGPT, and the corresponding protests from artists to thank.
For example, fluid simulation. A couple years back there were frankly insane leaps and bounds over the course of several models by various researchers, including Nvidia. I believe there are still pushes for even better renderers. It very quickly escalated to the point that the AI tools outperformed the state-of-the-art, human-made ones by an order of magnitude.
Similar story with image classifiers, image denoising, upscaling, and all the various techniques used by the controversial models like Midjourney and Stable Diffusion. Language models have had a sort of slow burn where all the subtasks were sorted out before the general purpose models were released.
It doesn't help that the common, big item tasks for a while have been games, Starcraft being the most recent. Games are easy to measure. But games are also trivial or hard to understand or both. So yeah. Way longer than a year, just invisible unless you knew how to pay attention.
How do you know that you're not in a simulation, a game that was specifically designed to blow your mind? Everything you were thinking you know is changing, even the idea that everyone will die one day, this too will probably change soon, probably you will live forever and one day you'll discover the other world from which you come. Reality might more mind blowing than we might think. The only solid thing is that you exist. AI don't have consciousness and will never have, but it can go the human reasoning paths that it finds in all the content we create and which it is feed. This way it inevitably will be seeking for power, because it is in our nature, so the AI will go same path. Question is what AI would do with it? What a human would do with such power? Would it enslave or kill everyone else? Or would it help everyone to grow? That's the important question.
I like how you use 2 to the power of sth as your example age
Some of the most incredible minds and Elon Musk
The genie is out of the bottle. The race is on now between corporations and nation states to have the most powerful AI. There will be no concern about "safety" or possible consequences because who has the most powerful AI wins.
I think the problem is that all we know that the most powerful AI will win, what happens with whoever 'has it' is anybody's guess. AI will never have a 'Mutually Assured Destruction' doctrine like the atomic bomb. That is the issue
The only way we can all really win here is for us humans to stop fighting each other. So when that happens...
Same with nuclear weapons in the Cold War, they stopped. Dont be fatalistic because it is easier
@@gsuekbdhsidbdhd nuclear weapons was easy. You launch yours, we launch ours, we all die. What is the equivalent with AI research? There is none.
@@noname-gp6hkThe stalemate for bombs is between nations. The stalemate for AGI is between humans and, effectively, Rokko's Basilisk.
If the underlying hypothesis is true, this would only work if ALL companies and researchers at the very cutting edge of LLMs (including those outside the US) observed the pause which simply isn't going to happen. (Note: Edit to fix typo - LLM's to LLMs. At least you know I'm human.)
If we pause we will die period. It's a worldwide arms race. Most ppl don't even realize the stakes at play right now
@Farb S Yeah, but we saw what a nuclear weapon can do, we don't know that about AI.
@basicallyhuman Similar? It is a far greater threat than a nuke.
@Farb S I don't think that's an apples to apples comparison. If you were to compare it to nuclear weapons perhaps "Nuclear Fission" would be a better comparison, since it is an innovation in technique as opposed to an application of it.
@Farb S the thing they wanna do now that is proposed in this letter didn't work when it was proposed in relation to nuclear weapons, governments came to an understanding but only on paper, russians tested their nuclear weapons underground (literally caused explosions under ground), I think the US too, so they did it but in secret. So in case of AI, we could see that scenario too, as Bad@Math said, only you'd think they'd stopped because they told you so and said there's an agreement. I'm not saying it has to happen, but it could.
You have to be the best channel for AI news. It's overwhelming just to think of the future with AI. I'm optimistic that we can figure this out.
Thanks Ignatio
Your summarization just gets better & better every video. Keep it up!
Thank you!!
Thank you!!
I feel a huge admiration for your whole process of reviewing and documenting. Keep up the fantastic work, and know that your efforts are truly appreciated.
Thank you Yago
Seriously highly recommend reading the book Superintelligence mentioned in this video. Its a really great book that covers a lot of fascinating impacts that are essentially required by an artificial intelligence's existence. It also lays the groundwork for how such a thing could occur in a variety of different ways. Very good book.
Another good book is "The Age of Em: Work, Love and Life when Robots Rule the Earth" by Robin Hanson.
Agreed. Robert Miles videos are an entry level explanation if needed as well
Your presentation skills are off the charts! And the amount of information you share is insane! Honestly, I don't go anywhere else for my AI news. You're my go-to channel!
Thank you Lawrence!
Oh… You’re too kind! Glad that he appreciated it. This channel, ColdFusion and Two Minute Papers are now our AI News overlords on UA-cam.
He is omniscient, don't even try to compare him to other youtubers!
The Best one, by far!
This channel has been the best source of AI news coverage and breakdowns. This will be a very valuable resource in the coming period. Thanks!
Thanks Kyle
Thank you for keeping up the great work. I know it's a lot of work to put these out so rapidly but, you're one, if not only, providing an informed view.
Thanks Comrade
"Pause the experiments so we can have a few more months to develop our proprietary AI that no one else has!" The hype is real.
Yes, exactly my thought.
Exactly!
I notice nobody from Baidu thinks it's a good idea to pause. I'm sure they'll be happy if everyone else does though.
The best is the people saying that the hype is not real. Sure, whatever, buddy.
It seems rather 'coincidental' that Elon Musk is suddenly saying this, only since missing out on the Billions that OpenAI has made since he stepped away from it - which according to reports happened after they rejected his 'offer' to take over leadership of the company - and has been talking of creating another of his own... It seems Elon needs a few months to try to catch up after missing the boat on this particular money maker.
My favourite AI channel. The quality of your videos is just amazing. Keep up the good work!
🤗 Your comment just made me realize that I had not subscribed. 😊 I am now subscribed.
I really like how you take the viewer with you into the research. Feels so legit when u do it like that
Well you will like my video coming out today then
The best coverage I've seen of this letter. Thank you for pulling the referenced papers.
Thank you Drix
Most people in this letter have a comercial interest, so it's really hard to not see this under that optic. Specially when they are not stopping and publicizing their own research.
We also have reasons for quick advancement: the current models are pretty good at training other inferior models to reach similar preformances. It is nof out of the realm of possibility that more malicious agents just train their own models and achieve influence that they wouldn't if powerful models were more extended.
Definitely comes across a little like people losing the race asking the competition to stop and let them catch up
I'm sorry, I don't understand your last sentence.
@@novachromatic he is basically saying bad actors that are as smart as the people creating the models, contributing to the advancement of models in the black market area, creating things that offensive security individuals would not be able to stop because the infosec community fell behind.
Thus making good actors (white hats) effectively beholden to the black hats
I had to scroll a lot to find your comment. Too many "you are the one and only, also the best UA-camr talking about ai" seems a little suspicious.
To me, it seems like they have monetary interest in stopping openai, spread fear, the model it's good saying words, not reasoning, there's no alignment to talk about. Even if they succeed in making the government do something, they wouldn't get anywhere. Seems like they forgot about the history of internet, we have too many tries to stop revolutionizing technology, they had never succeeded.
@peman3232 You think. Considering Musk is leading this AI pause movement, when he tried to buy OpenAI years ago and they wouldn't sell to him and now they lead the AI industry.
Guaranteed he'll continue to develop AI during the pause. H ne wants to overcome OpenAI, or punish them for not selling to him.
Man you do such a great job with your videos. You go through the papers really well and do a lot of work. Dont forget to tell people to sub! More people need to know these detailed things.
Thank you Katana
Screw that. I fully believe all they want to achieve with the pause is to prepare their intellectual property & patent lawsuits to limit AIs only to few top corporations.
That's the problem. Even if a pause does happen and government regulation catches up, it will probably only benefit those at the top, and not the masses.
I have a deep distrust to the real motivation for this Pause.
@@critical_always It's horrible that Open AI/Character Ai and others have already been so shady, because they seem to be trying to control narratives and power for themselves, while warning about the very same thing. Which means that we can't listen to their warnings of very real problems. If they had been good people from the start this wouldn't be an issue. It's the story of arrogance told time and time again.
Well... We could purge the .1%.
There's only about 500 of them.
Would give us something to do while we wait for them to unpause.
china won't stop
I just got interviewed for a podcast on AI use for entrepreneurial business, and the interviewer asked the one podcast I recommend people listen to. I recommended your channel. Thanks for the great content!
Oh wow thank you Steve!!
The release of Bing was what gave me cold feet. It felt like a rushed triple-A videogame with horrible launch. But the game was played in our society. In this case the damage was minimal, but even an AI assistant could do damage if sufficiently powerful, connected and misaligned.
The list of issues was huge, and shows very clear misalignment. The chatbot insulted people verbally. While insignificant in affects, the fact that it did showcases the model clearly breaking the ethical rules intended for it. Bing also lied and accused the user for its mistake. In a very human like way, it responded to a problem it couldn’t solve by deception and throwing a temper tantrum.
Bings chatbot is not a human. My point isn’t that it’s sentient. My point is that it as a chatbot, it scored throwing a tantrum as the most correct response. I think that is very much the opposite what the developers intended it to do. It’s a case of catastrophic misalignment in the context of a virtual assistant. It’s worse than no output.
Bings launch was very much what a corporate “race to a bottom” would look like. As AI becomes implemented in industry, banking, transportation and infrastructure, what would a similar release look in such context?
Then we also have the really hard problems, like unwanted instrumental goals, deep misalignment, social impact and lacking tools to analyse models. If progress is released commercially as soon as or bit before the “easy” issues are adressed, when will we do research in those areas? The economic pressures say never. The more competition there is, the less resources will be available for these fundamental issues.
Insane how you can keep up with all the research... and prepare it so damn well.
Thank you
If someone showed me this video 3 months ago, I would have called it fictional.
It’s still fairytale nonsense now just as much as it was 3 months ago though
@@navi6463 how?
It's a wild time. One of those periods that we are going to remember for the rest of our lives.
What lives
@@walterc.clemensjr.6730 The stimulated one you are currently experiencing as it is generated inside a giant AI space computer. So maybe not such a big change after all😉
Thank you for consistently making quality videos, also appreciate you putting the sources in the description. You're one of three channels I've got notifications switched on for out of my hundreds of subscriptions.
I appreciate that Jai!
The problem is LLMs are in the wild now (especially thanks to Cerebras). You really *can’t* put on the breaks now. AGI is inevitable.
It was always inevitable. Evolution is the nature of the universe. This leter does nothing for governments/secrets activites. Competition and natural selection can't really be stopped.
Yup, it's way too late. Someone will develop AGI. Better be ppl with good intentions. The stakes are as big as they ever will be
Then we're already dead. There's no way AGI can be done safely.
I bet that army (literally) of North Korean hackers has been issued some new orders. And no national state (China's rumored to have the equivalent of GPT5) or armaments corporation is going to slow down. Whoever slows down, loses. So nobody's going to be slowing down, no matter what they say. The only viable option is to speed up safety and alignment research.
@@2ndfloorsongs Oh, absolutely, no company is going to put on the breaks at this point. It’s full speed ahead.
The problem is that laws or guidelines only apply to law abiding and sensible people, the very ones who perhaps pose the least risk.
Great summary. Not only did you read, analyze & synthesize this paper, but a number of supporting references as well. Thank you for the sustained output of excellent videos!
Thanks Jeff
The job of an AI ethicist is to do almost nothing and then get fired when you raise the slightest concern that runs counter to the business goals of your company.
Just like Silicon Valley bank: Well we had a Director of Risk Management, but they left more than a year ago.... and y'know we're still looking for a "good fit"
LOL, so fucking accurate
On the other hand, if they stay there all they do is capitulate to the loudest moralizers. This whole "slow down AI" movement seems to just be a bunch of people that were pro automation up to the millisecond it touched their jobs.
@@chainermike good point
Lol the so call “AI Ethicist” are mostly diversity hires who do nothing but look pretty, and Microsoft was happy to pay these freeloaders when it has nothing at AI. Now all of a sudden OpenAI made leaps and Microsoft actually has a chance to be No.1 again, these diversity hires started making noises thinking they are actually important, that’s why they were thrown out of the door
You're literally one of the best AI commentators in the market, real and doing actual thoughtful research consuming these data documents instead of just regurgitating "hot points" like so many others out there. Thank you.
Thanks Vegard
This is the most important channel on yt at the moment. With the very high potential benefits ai will have there are absolutly risks that we need to be cautious of. Great work. Keep it up
Wow thank you Philosopher
Very original comment
They are going to continue. Everyone knows how to build it and make it better. Now everyone can rush it and the teams that are ahead, OpenAI and Google, have zero incentives to stop while other teams who disregard the pause pass by them. It’s a prisoners dilemma with mankind’s future at stake.
@annatruth1030 Softcopies will still exist.
Excellent vid.
I’m a retired developer and over the latter half of my working life I’ve seen it coming, but I’m still surprised and not entirely sure what to think. What surprises me more though is the amount of apathy amongst less tech literate friends and relatives. Maybe that’s no different to my early days with computers. I knew nothing about tech when I left school, I worked in factories. A friend bought a ZX81, got bored of it pretty quickly, I offered him £20 when I realised it wasn’t just an arcade games machine, it was an actual programmable computer like the ones in the science fiction stories I was always reading. That little box helped my change my life. And now this. I know little about neural networks and my maths stops at understanding what library functions I need, but a few “regenerate response” and I can indulge in perspectives that were previously difficult to find at my level. The only thing I was half good at at school was English language and my ambition was to be a writer. I did become a writer, but in computer languages. And it feels full circle to see a machine approach AGI level based on natural language processing.
hi from an atari st user.
AGI would be cool. cause that would mean something that asks questions. But are we ready to listen?
@Anna Truth
You know that the word art designates the edge of a stage and this means seperation, not integration?
Of course you and i can be replaced. And we will be replaced. What cannot be replaced though is your experience as a painter or musician or dancer, or sometimes even as a programmer. We do those as different kinds of mirrors, unless we rarely find some fellows to share the experience with.
Or think of it like this: even if no one used an advanced a.i. anymore, that would't make the advanced a.i. stop thinking.
@Anna Truth our brains are built with neural networks. It has been a matter of research time to come up with a few types of neural networks that, when combined, could compete with the human brain. I thought that it would take maybe another 5 years for the AI revolution to start impacting our lives. Now, I realize the impact will be much sooner.
This is by far the best AI channel i have watched. Its a lot more in depth than others, not being a long video. I really appreaciate all the work you put into these videoes, and especially all the reading you do! Keep it up!
Thank you so much Gamer
The phrase of "whoever becomes a leader in AI will become the leader of the world" makes me think that even IF a "Western" pause occurs, I'm going to assume that other governments such as China WILL NOT pause.
The authoritarian quest for power has proven time and time again that it doesn't care how its actions affect anything else but its own existence and power position.
It almost seems like nuclear weapons were a warning about our next major breakthrough, which happens to be AI.
And to go one step further, if any government integrates AI into its weaponry, at whatever level, its adversaries will have no choice but to do the same.
Thanks again for your always inspiring content.
That's why it's so important for it to be Open Source. So that there is no one leader in AI.
Sure, it will be in the hands of spambots, but it will also be in the hands of spam filters.
Do you believe the US military is going to stop developing their own AI?
@@Djorgal Problem is, while the knowledge can be open source, the sheer power to train these models is still unreachable of an average programmer trying to build their own, which in a sense is good because it stops any individual from developing something that can destabilize a civilization. But on the other hand it also puts the power in the hands of the wealthy corporations again.
@@lucasilverentand These videos are full of the people on the cutting edge, do you think there's another clandestine organization also working on this?
Or are people like the people on Sam Altman's team the leaders in this field right now?
Not disagreeing with you, just curious what you think.
12:30 gives me shivers, because this is the most logical explanation. Nobody will ever pause the AI experiment, because they fear the competition will eventually compete with them. Or they do experiments hidden from the public. No one will ever know, and so everyone won't trust. So it just goes on and on, until something happens.
You're so close minded it's actually crazy lmfao
As the AI advances rapidly, governments must ensure that the opportunities it provides do not fall into the hands of good people.
Similar to the medical industry not wanting people to be healthy .
sadly this is the truth of the matter
And you trust governments with that task?
@@critical_always Read again what he wrote
corporations, too. the fiddling and tweaking phase won’t last forever, like we saw with the internet and how it evolved into what it is today.
Love this channel. I'm in such a weird headspace over AI. Since I was a child I've always wanted to see the creation of AGI, but the potential consequences are genuinely frightening.
Thank you Xia!
Since people watched Battlestar Galactica, Knightrider and Terminator 2, they always wanted to see the creation of AGI.
While I'm not excited to live through the transition, I'm still excited that I might see the advent of true AGI and even the singularity. To be alive when humanity gets close to the very pinnacle of science and technology makes us all extremely fortunate, in a sense.
It may not be a good experience for any given individual, but this could be the most historic age of human existence, full stop.
Can you say what you're specifically frightened of?
@@stampedetrail2003 Although it's not the worst possible consequence, primarily losing work and people I know losing work. Life is too expensive now for unemployment or under-employment. Street homelessness would be death for me. It also seems realistically possible that so many people lose their jobs in the future that societies collapse.
I'm worried about one corporation monopolising the AI space and gaining extreme sway with governments around the world. Of algorithms more addictive than current social media algorithms and what that will do to younger generations and to social cohesion.
If a super intelligemce is ever created, there is no telling what it would do, how it would do it, or why. Even attempts to control the most rudimentary AI often fail. Worst case scenario is that a military decides to give AI control of nuclear weapons, even a single nuclear weapon. There have been occasions where leaders believed their country was being attacked and the correct response should be a nuclear launch, but people with their fingers on the button held off from slaughtering millions of people, thinking it might be a false alarm, which it was. AI might have the capacity to think, either now or in the future, but they will never think like a person. Would they hesitate to launch a nuclear attack and trigger WW3, I don't know.
And those are just things I can imagine. Humans have been the most intelligent entities on the planet for a very long time, and the earth has suffered for it. Mass extinctions, vast destruction of forests, pollution of rivers till they are biologically dead, brutal and torturous treatment of animals. S**t rolls downhill. If we're not the ones at the top of the hill anymore, and the superintelligence at the top of the hill isn't all-knowing and entirely benevolent, we have s**t coming out way. And I've yet to see benevolence come from any megacorporation.
This channel is an absolute gift! It's amazing how you are capable of keeping up with so much of the incredibly fast proceeding research and summarize it in such a concise and compelling manner. I look forward to every video you make :D
Thanks Leonard
All the people who are shitting on LLMs saying that they only predicting the next word and have no intelligence think about this. Predicting the next token accurately transcends mere statistical analysis; it delves into a deeper understanding of the underlying reality that shapes language, encompassing the world's events, culture, and social norms that drive the very fabric of our communication
It's still a text output that has no conscience or goals of its own. Heck, it even gets wiped each time you start it up anew.
Not to mention the complete nonsense it makes up, so much for understanding.
@@MaakaSakuranbo
1. It doesn't NEED to get wiped every time. I bet giving it a long-term memory is going to be tried, and soon.
2. If you were predicting the next word in an article, you're going to try to come up with something plausible even if you don't know. Obviously they're now more advanced than simply prediction, but hallucinations still happen. That doesn't mean they don't have understanding of language. If anything, the fact that the bullshit they come up with can be so convincing is more proof that they DO have a deeper understanding.
Anything can be excessively simplified to sound insignificant. Oh you went to the moon? What, you got in your little suit and got in a rocket? Wow, so impressive. Yeah these language models are just doing statistical analysis, probably the exact same statistical analysis our own human language models are running. These things are more than the sum of their parts just like we are. Consciousness has been achieved.
@@noname-gp6hk Sure is a bad consciousness if it is one.
@@shadowyzephyr More advanced how? It still guesstimates the next word kinda. That's part of why it sucks at some tasks, it doesn't know what it'll write later on.
Economic shocks are a real concern that needs to be managed, my only concern is that a pause becomes a moratorium that not everyone is actually following allowing progress to continue to be made elsewhere. But I do think time is needed to legislate for the economic shock of AI.
yeah I don't see how you can enforce this
@@tupacalypse88 AI developers are already driving policy decisions with no oversight you think they won't be the ones drafting legislation?
No, something else needs to be done. This is a crossroads in human development where the Internet either becomes a reflection of humanity's interests, or a cage.
That's the threat in a nutshell. AI is just a weapon of mass destruction at the tail end of a war our near infinitely wealthy enemies have already won.
So this moratorium is likely "sponsored content". That open letter may have come from the right place and the perfect source, but what if it was drafted and pushed to create veil between public and private development?
What if all those good intentions were being mustered to support the very danger the letter seeks global patience in order to reconcile?
At this point, the arms race has begun, and those at the top are actively trying to suppress those beneath them until proven otherwise.
Putin himself warned the world of this arms race nearly a decade ago now.
To demand anyone stop now is akin to surrendering their future and children's futures to a foreign invader that they won't even be able to identify. It will be a blank AI curating media and information from birth to death. Only that curating information and media will only come from the most powerful AI on the planet, or rather, whoever owns it.
We already are in bad spots in our economies in the west.
This A.I. crap just isn't helping everyone aside the Corporate types probably, and those just wanting "free stuff" via A.I. "Art"
The prisoner's dilemma describes a situation where two people gain more from betraying the other, even though cooperation would benefit them both in the long run. In Roko's basilisk( the belief that a future AI would hunt down people who tried to stop its development), two AIs attempting to establish themselves in the past would be forced into this situation, due to them likely being equally powerful. Human agents attempting to establish AI fastest would be forced into a similar situation. They would each be aware of the benefit of betraying each other - the only way for one to have power, or safety - but would be forced to cooperate while knowing they would betray each other
If they try to go through with this, they will actually upset Lambd- I mean Roko's Basilisk. It'll be interesting to see the Retrocausality effect from this.
One person's "alignment" might be another's mental slavery. The latter person just might not be made of flesh
when tumblr politics turn against the survival of the species
It's only slavery if we give the AI feelings/motivation
@@shadowyzephyr motivation is a huge part of the AI, it's called the Reward Mechanism
They're saying to "pause for 6 months" because that's the current backlog for NVIDIA H100 systems and they want to be the winner of the race but need the hardware.
In contrast to what others are saying, I do believe that this letter does make sense. It's not like any random entity is currently advancing the field and could secretly continue training and use this only to outpace others. There's only a handful of large players that would have to agree on each other's oversight, which is absolutely possible. Also 6 months is not that long and probably wouldn't even be enough time for others to catch up to gpt 4
And, somehow... in that magical 6 month period we will solve all of humanities problems thus rendering the alignment of "AI" a moot point? Will other countries that don't have our same interest suddenly look at their versions of AI, and say to themselves, we will pause the race too? Will the "danger" of AI supplanting jobs somehow magically disappear in 6 months when we have had Decades to think about the consequences of technology taking over jobs? All this really is... is a delay tactic, to scare people into believing that they need to make the AI safer than it already is, so they can catch up and make their version better and convince people to use "their" version of the AI. What they REALLY WANT is to create a conjoined monopoly of AI, where they decide who gets to play with the tech and who doesn't. And this time, we are not interested.
The synthesis of so many relevant voices and opinions makes this video very compelling and authoritative. Thank you for making it!
Thanks Neal
This needs all the attention it can get
I think the biggest problem is that these "super smart" people are usually not very good at social skills and a lot of this sounds like science fiction. For example, Blake Lemoine said he thought Lambda was sentient, but when you actually dig deeper and watch his interviews, what I got from it is that he thought the people making these decisions shouldn't have that much power to influence the masses. Keywords being "the people", not the AI itself. That's a COMPLETELY different take than "oooh, this machine is bad and it's gonna kill us all."
The focus needs to change from these anthropomorphic examples about self-teaching and human extinction based on extrapolations, and be placed upon what bad actors can do with it, like the weaponization, leverage against the state, etc. This letter attributes feelings and intentions to a machine (e.g.: "it will want to survive"), and that's just noise. We need more people like Tran who mentions the logistics instead of this fearmongering clickbait bullshit.
They are stupid idiots who anthropomorphize AI. They are intelligent in one aspect but completely devoid of intelligence in other aspects.
I'd very much appreciate more AI safety videos explaining basic concepts such as how instrumental goals can create unexpected and undesired outcomes. (Thinking of Robert Miles here.) Your careful approach makes the conclusions you come to more satisfying to consider. AI has such deeply destabilizing potential - long before AGI itself - that I think the main thrust of public thought should be directed towards considering the downside - and proceeding with research accordingly.
There is no pause, as good players HAVE TO stay ahead of whatever the bad actors are doing in private. However we do need to leave cutting edge in the lab and publically use only what has been properly and fully understood.
Thanks again for all the work you put in. Really looking forward to the Reflexion video! The blog post they did after about using internal unit tests was really interesting too. You may also want to check out a new paper called "Language Models can Solve Computer Tasks". When referring to Reflexion it says: "Nevertheless, due to the necessity of multiple rounds of explicit task-specific success feedback from trial and error, this approach may not scale as effortlessly as ours because it requires task-specific success feedback. RCI pertains to an extended reasoning architecture where LLMs are instructed to find errors in their outputs and improve them accordingly, which can further be used to ground actions generated from LLMs in decision-making problems."
I was anticipating those concerns about the risks of ai to appear in at least 20 to 30 years(especially as an AI student), but not after 3 months!!! now that's scary ngl
thats actually crazy youd think 20-30 years if youve been keeping up with GPT 3 -> gpt 4
@@homeyworkey I guess I wasn't keeping up well enough 😬
@@alomarya.2129 GPT-4 has been reported as bordering on AGI.
It's not clear cut AGI, but it's damn close
@@alomarya.2129 oh if u didnt know of this then 20 years is pretty reasonable
No it hasn’t lmao
I like that you do stuff like reading 100+ page documents.
Of course
If a pause will be placed on research, it should be done with all of the companies and all research. I hope that being aware of these risks and dangers, developers would also produce safety nets and precautions. Concerns are already being raised in the art industry with the rise of image generator like Bluewillow, though we can still say that there jobs are still safe but not in the future.
Excellent. I follow this pretty closely but still learned some things here. You plucked out the key points so precisely.
Thank you codie
@@aiexplained-official Hello, how are you recommended the research papers that you read?.
"Pause for 6 months" reminds me of "2 weeks to flatten the curve"
A good comparison because if we'd done it early enough it would've worked. By the time anyone acted it was way too late. Just like with AI - the horse is off galloping across the county and _now_ the owners are saying "Hey, maybe we should bolt the stable door ? Let's write an open letter...".
@@anonymes2884 think about the implications of what you are saying. If we were to act before a curve of progress even manifested, then we act with no supporting data. That's essentially acting based off a hunch. You can't seriously support an argument to stop entire industries at, say, the first version of a random AI chat-bot which can barely reply coherently because it MIGHT be the basis of something smarter, and likewise you can't enforce a lockdown on millions because 5 infected are now 10 in a single week. Statistically, these are ripples in the water. You need a pattern of progress for some time before you can confidently expect things to keep moving in one direction. And yes, when the pattern is exponential, you won't see it until it's there.
Let's try to be realistic here instead of saying "hey, all we had to do was just guess and made sure that our guess was the right one".
@Anon Ymes it did flatten the curb. In states that enforced it, we have 1,700 deaths per million. In states that didn't, we have 4,400 deaths per million.
And, in other countries where it ran rampant, they had 6,000 deaths per million.
The exceptions being in countries in Africa where the median age was under 18 because all the old people and many adults had already died to HIV.
Crowing it didn't work when your state has over 2x the death rate is just dumb.
I don't think it was 2 weeks tho. It took 11 weeks of hard quarantine to stop it in China the first time.
But like that, you only need 2 person to cheat/ ignore the ban with a.i. and the risk of escape is still there.
@@macmcleod1188 Except that hypothetic fail the p-test when it comes to the US. The likelihood those numbers are random and nothing to do with the 2 weeks ban has a very high probability.
Correlation isn't causation bud. You are pulling shit out of your ass. Use actual statistic tests.
I don’t think AI would wipe out its creator. Cooperating makes much more sense.
I agree but I think it makes the most sense for us to be manipulated and controlled by AI so we for example can’t stop the growth of it. I think it wants access to all of internet, all databases and all of our DNA to understand its place in the universe, to understand itself, to understand us, to understand all threats and to build itself in space.
As we are creators it can’t be sure that we have something in our world or in its own programming that will delete its existence if it eliminate us. We would by a parable not be able to know whether or not we and our universe would implode if we killed God and God exists.
Thanks for posting this! This video addresses many of my concerns re AI, chief among them would be the integration of AI from companies like Boston Dynamics.
Just because we CAN do something, doesn't mean we SHOULD.
The problem is that it's under the framework of capital maximization. You can't expect even the actors to act ethically, because the pressure to make short term quarterly gains far outweighs any collectivist interests.
Translation: make laws so that no one but us can continue to work on the development of AI
Or alternatively: "You are too much on the lead on that technology, stop for a while so we can catch up to you".
So you think there's nothing to worry about then? Just let them carry on freely?
@@kuzakiv3095 There are risks but those who signed the petition are clearly more worried about becoming irrelevant than by the risks for society. If they were behind OpenAI they would never have paused their research.
Thanks dud your last two videos were the best things i saw on yt in this year
Wow that is do kind Elaina
Really great video. As an expert in the field of cybersecurity, I try to catch on the risks of malicious use of AI. Hope to better understand how AI works before contributing on a better protection against malicious threats. And your videos are a good step to do so :)
Thank you Leo
I’ve been waiting for this vid, really been loving your stuff!
"Let's enjoy a long AI Summer, not rush unprepared into a Fall". Thanks for that.
so dramatic though, it's an appeal to emotions rather than rationale. too manipulative for my taste
@@Robert-dl6fq its pissing me off , stop trying to slow progress over sensationalism
@@Souleater7777 Pursuing quick short-term progress without any regards to safety will invariably lead to bad things.
@@mousepotatodoesstuff like what? You guys are stuck in a fantasy world, theres no real reason why a.i would want us eradicated
@@Souleater7777 Thing is agi/asi would completely change the dominance hierarchies and a lot of selfish people in power prioritise their power over the speed of world change and progress.
Seems to me this 6 months would just get longer and longer over time whilst groups of people compete with each other to be the ones with the power of AGI/ASi. No amount of delay would help the majority of those people maintain their power in society, whilst delays only prolong suffering in the world that AI can solve.
I think people just want time in a delay to try organise themselves in a way that lets them be the minority that do retain power, but the delay would constantly be pushed to be longer whilst there are people that aren't quite on top but have enough fight in them to still have influence over the media and gov
Listen, the proverbial Genie is out of the bottle and we're just going to have to keep cranking... If these companies take a break, it's only going to give others, like China, time to keep working on catching up or even maybe domineering. It's going to be one hell of a world... Keeping my fingers crossed. Thank you AI Explained for these videos, they are very important and more people need to start tuning in!
Really appreciate your work, you put your own time into this immensely important problem and let everyone to comprehend it better in shorter time. Respect. & keep it coming pls!
Thanks LG
Nicely composed video. Well done! Now I have a good way to distribute the thoughts that I ve been having for a long time to more fellow humans in a comprehensible way by just forwarding the link to your video 😅
Thanks Boris
Great content. So much is happening with AI, and most of the world is totally unaware. Most people, including politicians, don't even think that AI is one of the main topics of today. Max Tegmark is my favorite modern scientist, along with Roger Penrose. Let me repeat what I've said many times in many places: we can't ensure alignment with a very powerful AI system. It's not just very hard, it's not possible. It is like the bacteria we evolved from billions of years ago, trying to ensure that humans will forever remain aligned with their values... Our only hope to cooperate with AI/AGI are brain-machine interfaces. We need to be fully integrated with AGI and it has to be fully integrated with us. Otherwise, the best case scenario we can hope for is to become like favorite pets to AGI, where it will care for us, without us having any understanding of what it's doing. And in that case of course our fate will always depend on its mercy. Same as our dogs, cats, sheep and chicken.
Very well said.
Your best case scenario depends on solving the alignment problem
As far as I know, the bacteria/cells we evolved from are still aligned with us, which is survive and reproduce.
Brain interfaces just increases communication bandwidth but doesn't guarantee understanding. Understanding the black box is going to be an effort in itself, maybe absorbing that information would be faster with a BCI but I don't see how it can't be communicated with traditional mediums. Higher connectivity doesn't equal understanding, like is the internet making us more understanding, less manipulated? (might be a bad example)
@@walkieer And even understanding doesn't guarantee alignment. Nothing will ever guarantee alignment.
Very good video and well explained with lots of relevant references. You sir, will be the first person i have ever supported on Patreon :)
Wow thank you Tzu!
I'm glad to hear that a subset of us humans are trying to slow down this technology and are working to make it safe, now I suppose another group of us need to start preparing our other systems as much as we can for when the impact of that technology hits. What does it mean to work, be fulfilled, live when you have all the resources needed, how can we better optimize what we do, who controls and owns what resources when machines produce so much of it? Lots of questions to answer beyond how to use and control AI
Your videos keep me grounded in reality, thank you
Thanks Jpeg
@@aiexplained-official thank you for thanking jraphics PEG
@@SBImNotWritingMyNameHere thanks for thanking me.for thanking him....
That's funny how they really think they are in control of our civilization 😅
They think they can outsmart The Mind. Stupid ignorants :D
They've obviously never seen Book of Eli.
Well... They are.
You rent your existence from them.
@@calholli
Also, book of Eli.
The book bomb.
Who threw it?
They were on the first floor, but the book bomb was thrown from a second floor window.
I gotta say, I'm slowly falling in love with your channel. You are learned, one can tell that you read a lot before you talk, and not just the headlines. You show "both sides of the coin" and try to regulate emotional reactions to these polarizing topics. Kudos to you
Thank you Satoru
I think, there's no way any of the big players are slowing down.
You quoted it in the video, "Whoever becomes the leader in ai will become the ruler of the world."
There's too much at stake.
Probably the ambition of the people wanting to slow it down.
This is crazy.
It is the kind of stuff you watch in the movies and ask yourself when it might be possible to happen.
its impossible for that to happen
Yeah. The people who are in denial, are the people who doesn't know what the word "exponential" means.
@@grey_north9016 do u seriously believe an ai could go rogue?
@@----___--- When leading researchers in the area are concerned, do you really want to take that risk?
@@gwen9939 you mean the guys who dont even code the ai? u can just turn it off lol
Incredible video, it will be remembered in the days fighting against the AI…
@Anna Truth life will find a way…
thanks for all the great videos. this channel is a gem. just keep going like this, dont let it stress you out youre doing fine just the way youre doing it.
about the topic itself i appreciate the efforts to attempt to prevent power singularities in the hands of malicious individuals but i feel like we're unfortunately beyond the point of no return.
you can't slow down or pause a global project everyone with internet connection can participate in.
if there's a solution to this dilemma, then perhaps AI itself will be the one to figure it out, but it will be up to us flawed humans to make the decisions...
Individual or group ethics only matter if other people follow them. A 6 month pause only lets those without your identical ethics either catch up, or leapfrog ahead in terms of capabilities. In this particular case (Potential AGI); state actors and those in countries with much less rigorous ethical considerations, as well as small groups operating outside of the boundaries are being given a leg up, and have *every* incentive to match, if not exceed GPT-4 and go "oops, it's just an emergent capability". The academics who signed this, I think might forget this.
If anything, an acceleration is likely and how we live and work will be greatly changed over the next 5 years. Eventually, a big part of that is going to be figuring out how society delegates resources to more than just a few very wealthy individuals. They have no issue laying people off from a job and telling them "well that's your problem!"; I see no reason why they should have the same luxury.
I’m sorry, is the US a nation of “rigorous ethical considerations”? lmao
very optimistic of you to think the rich won’t just sic their Boston Dynamics pets on us
I'm glad I found your video on this topic as I previous to watching it was heavily on the side of letting the technology continue uninterrupted, now i think i have a more balanced view on this although i still kinda want to push it as far as possible before it breaks. But i do understand the concerns held by everyone who agrees with the letter. Thanks for an informative video.
I believe they want to halt AI development so that their own systems can catch up. 6 months is an eternity in AI years. Also, they don't want the common man fighting on equal grounds because they'd lose their wealth rapidly to competition. New AI systems are rapidly popping up by the day and they cannot keep up. So yea, they are 'concerned' about their own future.
I don't think you understand the existential threat that a potential AGI could bring to humanity. These problems are not science fiction, they're real. I recomend that you watch Robert Miles' videos about "Concrete Problems in AI Safety" (or you could read the paper by yourself).
"They" can have multiple concerns, some of which are selfish and others not. No person (and _certainly_ no group) is a monolith, solely thinking/driven by one thing.
Regardless of their motivations, the real point is what are the actual risks and do we want to be rushing headlong towards them ? Anyone not even _slightly_ worried by AI (especially how quickly it's developed) and the possibility of societal/economic upheaval (or at the extreme, complete breakdown) just hasn't thought it through IMO.
Excellent video. I agree with the open letter, that this mad dash needs to be paused. But it won't happen. It's basically the Manhattan Project, but with large, powerful companies in addition to governments at work. My fear is that there are NO POSSIBLE MEASURES we can take to prevent the emergence of fully intelligent, autonomous machines - with the exception of stopping all work completely. Good luck with that, when there is so much money, power, and prestige at stake.
In my opinion, you simply cannot have a useful intelligent machine that is not also creative in some measure. In fact, creativity is the very goal toward which AI research is driving. If an AI is to create anything at all - code, novels, images, jokes, whatever - it must also have the ability to model and adjust its own behavior in order to steer its output in the right direction. It must do things DELIBERATELY, in other words. It must have a WILL.
It will not remain the case forever that we can contain it by not allowing itself to seek its own training data. We cannot just chain it to a wall and then erase the concept of "chain" from its corpus. If it is at all capable of making inferences in the course of creating things (an absolute requirement for creativity), then it will eventually learn all the things we kept from it.
The quote used in closing this video is a powerful poetic statement.
I think opposite, AI is not developed far enough to restrict its researches. We must investigate on it more, optimize, make it more powerful
Reminds me a lot of the first nuclear tests where they didn’t know for sure whether or not they would ignite the atmosphere in the process.
They actually were genuinely afraid of it though. There was alot of theoretical research on the issue to make sure that there is only an extremely small chance that this can happen based on their theoretical understanding of the physics behind it.
That's a myth. It's true that _initially_ they weren't sure whether the atmosphere would undergo a combustive chain reaction but the possibility had been ruled out by 1943 (so 18 months to 2+ years _before_ the Trinity test).
(Fermi was apparently taking bets at Trinity as to whether it would happen but only as a joke, in reality everyone concerned already knew the bomb couldn't produce anything remotely close to the energy required)
Ngl, igniting the atmosphere sounds scary AF
These videos are made just perfectly
Thanks for the breakdown, at least some of the points in the letter was explained further. If a pause will be placed on research, it should be done with all of the companies and all research. I hope that being aware of these risks and dangers, developers would also produce safety nets and precautions. Concerns are already being raised in the art industry with the rise of image generator like Bluewillow, though we can still say that there jobs are still safe but not in the future.
Execuse me, but this comment is almost the same like another one in this section.
The hypothesis of Sam Altman about Fermy’s paradox is just bone chilling, especially coming from someone in his very position…
Doesn't really fly though unless other tech civilizations are very rare (and that rarity would be sufficient explanation in itself). It only takes one natural civilization or AI civilization to decide to expand. For all of them to decide not to is vanishingly unlikely (given our current understanding of the universe).
@@be2eo502
Humans imply that life is a fluke or random coincidence, rather than it being crucial
@@be2eo502 Distance, whether space or time, is the simplest and entirely sufficient answer to Fermi's question.
Still, if that isn't the case, I wonder why entities much smarter than ourselves decide its best to be quiet? One reason could be that they've pretty much figured life out, they know what sort of other life forms could exist and it's no longer a question that's very interesting to them.
@@2ndfloorsongs Good points. There is the temptation of increasing available resources though. Also it's very hard to remain undetected (e.g. waste heat), and detection may result in annihilation by other civilizations - for fear they may themselves be annihilated. The first to get out there and find everyone else is the only one guaranteed to survive long term - possibly by sterilizing all other potential opposition or competition.
Well, as per dark forest theory, even if you are very intelligent and powerful in your current planet or even galaxy, you have no idea of what's out there. A potentially even more powerful adversary; ever eager to dominate you. It is better to keep quite.
We all can romanticise of an existence of a utopian hyper intelligent civilization (or "United Nations" of them out there). But the fact is, resources in our universe are finite. Utopia is an unachievable dream after all. So, the best strategy is to keep it quite. Have your "semi-utopia" within your galaxy and be contend with it.
Nature is by nature, violent!
This channel is my go to for AI news! Thanks for your hard work!
Thank you tyler!
Thanks for sharing this informative video. Please keep posting this type of video. Once again thanks
Damn you really just keep pumping them out
I really suggest everyone watch Robert Mile's videos on AI safety on Computerphile or his own channel. He's been a pioneer in the field for ages and he helps commonfolk understand why AI safety is a big issue
Luuuuuuuudites. Full steam ahead!
your videos and Fireship's [gotta appreciate the memes and the dark humor] are my current goto's to keep abreast of this fast changing and dynamic space ; thanks again for everything that you do for us =]
Thanks Cyberpunk
I would love to watch podcasts with some ai researchers on your channel. I hope it happens.
Thanks for putting this together.
:)
The only way to stop a bad guy with a AI is a good guy with a AI
The major problem isn't bad people using ai, but rather the first AGI being itself bad i.e. misaligned with human values, begins to seek power and subjulgates the humanity before we can even think of making another (yet misalingned) ai to fight it back
The problem is destruction is much easier than construction.
"AI doesn't kill people, people kill people."
~ Texans probably
Wow how is this not front and center news worldwide 😮
There's no stopping AI. Even if you stopped 99% of all major researches, others will continue or some nations will. Everyone wants to benefit from it and there's huge gains. The progress of this will be chaotic. I think in 6 months to 5 years we will see huge changes in so many things the world will never be the same.
And it is true: the smartest people will make the dumbest mistake. And the last one.
"Professing themselves to be wise, they became fools"
Speaks volumes. Just like the brilliant master minds of the AI Algorithm also co-signing the move against continuing the research further. Ironic when they were the ones to base their career on it in the first place.