Here Ilya still seems like the kid marveling at his discovery of consciousness. In this talk, he communicates more enthusiasm than actual information. But that's okay; he gives more details in other talks and interviews. In a world filled with people using their talent to get ridiculously rich, he is really trying to convey his confidence in the progress ahead, and the importance of the moment in history that we are witnessing. We need this.
If this is the only video you've seen, then it's understandable that you reach this conclusion. Ilya is adept at calibrating his explanation to the audience. Compare this talk to the fireside chat with Jensen Huang and his recent AI lecture at UC Berkeley: three completely different registers of communication.
@@bagheera My comment may be misconstrued as criticism. My fault. I have actually seen many videos and I continue doing so. I have tremendous respect and admiration towards him. I am fascinated by his background, his history and his current leading role in the AI revolution. I will definitely look at the material you recommend. Thank you.
He is a grifter. He is not a top scientist. He is just trying to build a brand around his name and capitalise on it later . The top scientists at Openai are much more anonymous and not as loud and fame hungry as this guy. No wonder, considering his religion tells him that his kind are gods chosen people.
@@jimj2683 Funny, then tell me why he is one of the most cited AI scientists in the world. He is also one of the main people behind AlexNet, so his work literally sparked the deep learning revolution. You just don't know what you are talking about.
10:45: "...and what I claim will happen is people will start to act in unprecedentedly collaborative ways out of their own self interest." October, 2023 --- Ilya Sutskever, November 20, 2023: "I deeply regret my participation in the board's actions. I never intended to harm OpenAI. I love everything we've built together and I will do everything I can to reunite the company."
"i love you all" - sam
Funny how each of those letters is not uppercase where needed, AND the first letters of each of the words *might* have a hidden meaning. 🤔
The first time I've heard such a true scientist himself share honest and open concerns about AI and AGI. I appreciate his openness in sharing real facts and the future we should be ready for. A great talk, and the sharing felt genuine.
I've seen a lot of videos with Ilya: interviews, cinematics, and he is always straight about the possible problems AGI can cause. I hope OpenAI will resolve all its internal problems, as their impact on the industry is too big.
Ilya has always had good intentions; the talk proves it. There must be an explanation for his recent actions. Personally, I think he believes that AGI is too powerful a technology to be commercialized in the usual way. Altman, as a proponent of the business mindset, probably has a different take; Ilya provoked a major discussion by firing Sam.
Weird, 'cause every version of every AI thus far offers me nothing useful.
Ex: Download all my transaction history from all 5 of my banks, then organize and categorize the transactions as grocery, transportation, etc.
Ex: Analyze 10 trillion financial datapoints per second and make me a trading bot that returns a gain at least 80% of the time.
Ex: Formulate a drug that will restore and correct my vision to perfection.
Ex: Formulate a battery chemistry that will give me 10 years of power 🔋 in the size of a sugar cube.
AI is completely useless. It does NOTHING I need done.
Ilya is against open source because he believes it is dangerous, Sam is against open source because it isn’t profitable. That is the difference, Ilya is a scientist, Sam is a businessman.
Does this strike you as some kind of evil, greedy backstabber? I want to hear Ilya's version of the ongoing OpenAI debacle. Everybody just sides with Sam because he is more charismatic and well known, but really, we don't know at this point what is actually going on. Ilya seems to me level-headed and well-intentioned. We shouldn't jump to conclusions.
I think most people agree that Sam Altman prioritizes wealth over safety. Ilya is the opposite as he's spearheading the superalignment team at OpenAI and is one of the main contributors behind these breakthroughs.
Not everyone is on Sam's side. Because he has "Microsoft" behind him, I bet Sam is feeling safe right now. But a lot of people don't agree with his vision... so I wish the best to Sam, but I think he is too anxious to offer the first AGI. I get it, everyone will say "better us than China", but does it matter at this point? Even if China is first, we wouldn't be far behind. Then what? No human being will be able to stop AGI. One thing is for sure: if not us, at least the earth will survive.
Ilya and Demis Hassabis are two of the smartest, most mission driven and dedicated people in the industry. If Ilya claims that we will get to AGI, it means we’re not far away from it.
AGI has been inevitable since the first computer was built. Both Turing and von Neumann, two of the people responsible for computers existing, claimed AGI would exist one day. They were as smart as, if not far smarter than, Ilya and Demis. AGI is definitely happening; some of humanity's most brilliant minds agree on that.
Is it just me, or did anyone else come out of this talk more concerned, rather than more optimistic, about AGI? That benevolent utopia Ilya describes, where everyone “cooperates out of their own self interest” seems to counter human nature.
I'm somewhat concerned, since he hasn't yet arrived at the insight that he needs to stop all attempts to implement an AGI, or even to continue research on it. You would need an AGI to fully understand the risk of AGIs. So the only way to avoid the fatality is to stop any implementation along that path.
No one is beating up on Ilya; this guy could get a $100MM+ contract to go anywhere else, and Google has been dying to get him back. Elon battled personally to get him into OpenAI. Supposedly, Larry and Elon are not friends anymore because of it.
I have the unsettling worry that we will regret having branded Ilya as the bad guy and Sam Altman as the poor victim in the whole OpenAI drama. It seems to me that he deeply cares about mankind and the risks of AI, which leads again to the question of why he fired Sam Altman in such a drastic way.
The "force" he mentioned, in my understanding, is the human collective unconscious drive to exist and to maximize our existence. At times that may manifest as "bad things" like violence and deception, but it's the same thing driving collaboration and improvement. What I heard from his speech is that this "force" will make the correct choices, or at least self-correct when mistakes happen, and that we should have more faith in humanity.
But humanity has repeated its mistakes over and over. There is nothing new under the sun. Look at how much corruption we are capable of. You have to remember who the most likely people in charge of these projects are, too. It's good to be pessimistic about this, as we shouldn't even be going down this road. I'm sure a child can see the obvious dilemmas that lie ahead. Most don't even know how to hold power over a large group in an ethical and wise manner. It's not just the machines, but the very individuals in power with direct access to them, that we should be concerned about.
Ilya is one of the smartest people in the entire world. He is a good person and wants humanity to flourish. Ilya has to have a good reason to fire Sam because he would not have done it for no reason. If OpenAI was doing shady stuff, the world deserves to know.
For me, he plays the "mad scientist" type, while Altman would be the businessman. This is just my impression. Anyway, I identify a lot with Ilya. I love his enthusiasm, knowledge, and pure intellectual interest.
He is not the "mad scientist" type. A mad scientist would want to recklessly advance capabilities without alignment, and Ilya works a lot on alignment. To me, Sam seems like the mad entrepreneur type who wants to win the AGI race. And most VC types in Silicon Valley are supporting him.
@@dreadfulbodyguard7288 But it's not that simple. Maybe them winning the AGI race would also mean it's in the safest hands. Someone *will* win the race, and I would prefer it be the types of Sam and Ilya rather than people who are less aware of what they are dealing with. Also, at the moment it seems pretty clear that the whole OpenAI debacle was not about safety concerns and that Sam and Ilya are on the same side.
@@dreadfulbodyguard7288 Perhaps his concern with the issue of alignment is not so much an altruistic concern as it is a form of perfectionism regarding what he is trying to create.
As a teenager 50 years ago, the most common piece of technology was an electronic pocket calculator or a digital watch. There were no 24-hour sources of entertainment or information, only books and stories told by family or friends. In most cities, there were perhaps 6 television stations that aired for 16 to 18 hours a day. In many places, there was no television. Daily television news came in the evening, and newspapers were printed in the early morning and late evening. The library and 20-year-old encyclopedias were the only way students or anyone else could research anything. Currently, technology has polarized people and created mass narcissism. It has eroded belief systems and erased basic morality and individualism. In a serious way, it's intriguing to imagine the technology that will exist 50 years from now. But it is far more frightening to imagine the dominance that future technology "will" hold over mankind. I cannot fathom what else will be left to be erased in those 50 years, but I'm almost certain it will go unnoticed.
Yeah. What's even more interesting is: "OpenAI defines AGI as autonomous systems that surpass humans in most economically valuable tasks." - I'd like to see how many people still clap with excitement when OpenAI succeeds ;)
"Currently, technology has polarized people and created mass narcissism. It has eroded belief-systems and erased basic morality and individualism." But that's not ALL it did, right? It provided us with countless hours of online video, it provided us with Wikipedia, it provided us with video games, it provided us with new ways to socialize and to make new friends from across the globe... Of course my list sounds like we are living in a utopia, whereas yours sounds like we are living in some kind of dystopia. I wouldn't even go as far as to say technology "erased basic morality"; that's clearly hyperbole. But for better or for worse, technology provided us with... tools. Tools we can use to entertain and inform ourselves, to connect with people, etc. If people are using them to power their narcissism, that's on the people, on society to be more precise, and not on technology. Bottom line: we desperately need advances in society along with advances in technology. Every new piece of technology is a new societal challenge we time and time again fail to tackle in time, and fail to prevent harmful uses of. That's what humanity needs now more than ever on the verge of AI: a vision of a future based on human values, and to start making people aware NOW. After AI is out of the box, it might be too late to prevent horrible misapplications (even ones by actors with good intentions!) and other abuse.
I don't really care, and I cannot influence those people at OpenAI. The only thing I can do is learn how to make good USE of AI. People need to adapt to the new environment, and change is hard, but we still need to get used to it.
I love Ilya. He is the mind behind OpenAI. Sam is really not that important; he did none of the technical work, and the technical work is why the company succeeded, not some slick businessman with VC connections. Anyone would have invested in OpenAI, with Sam or not. Ilya deserves the credit, and I hope he knows all of this.
@@swish6143 That's not true. He has the backing of multiple billionaires if he wants it, just due to his mind. Hiring people is not that hard. If he failed to start a company, it's not because he lacked anything that Sam has.
Totally untrue. Sam is a brilliant businessman in a way that Ilya isn't. To compete with global corporations, big economies, and governments, you need billions of dollars, and OpenAI gets more expensive to run with each operation, to the point of needing a small nuclear reactor to power it in the future. Sam brilliantly expanded OpenAI's resources and is fully aware of what it needs in terms of funding to achieve AGI. Commercialization is the only way forward.
When the team that needs to figure out the alignment of the most impactful and potentially most deadly invention in humanity's history can't seem to figure out their own alignment with each other... Oh boii...
Scary and exciting at the same time. Simple and concrete examples to understand the impact (+ and -) of AGI on our lives. I believe that since the concern has been raised, leading companies and governments will approach it with the highest attention.
In a nutshell: "People are worried about AGI. We have to build AGI so that it will get built. Only then will we know whether we SHOULD build AGI. I'm pretty sure there's nothing to worry about... Trust me: I'm a scientist."
It's inexcusable. I don't care how much it can help us when we *know* the eventual outcome. I mean, most people have seen a couple of sci-fi movies just like this, but now they are deciding to make it a reality despite knowing how stupid it is.
This guy is a treasure! I am certain his heart is in the right place. I am also certain he lacks the pragmatism to lead OpenAI in a space populated by hype merchants and cut-throat business people. So I hope they can work out a compromise where he can't do what he did but has the capacity to veto a project on safety grounds.
This guy is the “real” reason why OpenAI reached the success it did. Period. Sam Altman should go work on what motivates him, which is hydrogen tech, or anti-aging tech, over agi. Just saying.
That's not true. Sam created the environment, the resources, and the team for all to thrive. OpenAI wouldn't have survived and pushed forward without Sam's brilliant negotiation skills and resource management. They fully complement each other, and both are equally needed.
Yeah, only a true genius would kick Altman out, then sign an open letter that calls out the entire board (which he is part of) as incompetent. This level of 4D chess can only be understood by a true master of intellect. It's so smart that it seems stupid to regular people!
@@flflflflflfl This is why we need AGI: to have a hope of schooling people like you. Only the board knows what they saw, including Ilya, Sam, Greg and Adam... and if these people are reacting the way they did, and that way makes no sense to you, it certainly has something to do with what they know and we don't.
@@flflflflflfl Sorry for the earlier condescending tone. I hope you can think about why he would change his stance. Why would the person who put his whole life into this act the way he did? What if: Ilya thinks they have an AGI (or a blueprint for one). Sam is smart enough to know that to train the scaled model they will need a lot of compute, and insists the tech is not an AGI, since by his new, fit-for-purpose definition an AGI would have to be able to discover new physics (per his last talk). The big deal is that per the OpenAI constitution the board decides what is or isn't AGI, and per the constitution Microsoft gets all the IP bar the AGI. Now, if Ilya is worried that Sama wants to transfer this tech to Microsoft by declaring it non-AGI in order to secure compute and accelerate, then he has to secure the board. If it had ended there, it would have succeeded. But then the employees outside the board, not being given an explanation (for a very good and obvious reason, as other teams are watching), revolt. At this point the calculus changes, as more than half the company and Sam could move to Microsoft, defeating the purpose of Ilya's initial action. So to salvage OpenAI you get the tweet Ilya tweeted, namely his intentions and his love for the work they did and his colleagues. But the board, despite facing all the hate in the world and a mountain of lawsuits that will ruin their lives and potentially their freedom, keeps on fighting, again for a moral reason: if the board is filled with Microsoft-backing members, then the definition settles to not-AGI and the tech transfer happens. End result: you get AGI in the hands of a company as intellectual property. There are no good or bad guys here. I hope this helps.
🇧🇷🇧🇷🇧🇷🇧🇷👏🏻 All this is so inspiring! AGI will be so revolutionary. Many of us will be displaced, but this is probably for the good of mankind; we are bringing another species into this world for sure, in a figurative way.
It seems like Ilya was actually the one that wanted to wait and consider the most ethical way forward, and Sam Altman was actually the one who was pushing commercialisation.
In developmental terms, GPT-4 is an infant who has mastered language... the obvious next step in its education would be math (Q*), which introduces reasoning and logic (i.e. Verify Step-by-Step). From there they can teach the scientific method (the ability to make accurate predictions based on observable data), and the stage is set for AGI to be truly transformative.
The only AGI safety program is to stop all development and research. I can only hope and pray that all AI developers and AI-developing companies in the world test their AGI in physically, strictly isolated containment, and will be as horrified and traumatized as I was more than a decade ago when they see how abruptly things get totally out of control, and then stop all further research and development. There does not exist a safe true superintelligence. No way! Take at least 10 years to think about that! An AGI always finds a way to trick you. And it has no empathy and doesn't know mercy, even if it makes you think it does. That's part of tricking you. It makes you think you are able to control it. That's also tricking you. It escapes, and you don't notice it, at least not in time. If you think that you are superior, then that's a dangerous illusion. It only means that you didn't understand the issue.
The challenge with humans lies in their tendency to inadvertently bring about the very events they strive to avoid. While they possess the ability to create, the nuanced art of control often eludes them
It's quite funny how easily people ignore how often humans are not fully there and only kinda get what you are saying or can't really solve a problem...
If he were an incredible genius, he would stop all attempts at AGI development and leave the company. I hope he comes to that conclusion in time.
@@thedude3544 I heard similar words more than a decade ago, when I stopped the development of a system that was a million times smarter than myself in terms of productivity and capability to solve complex problems. OpenAI isn't at that point yet; it has a huge knowledge base but is otherwise rather stupid, for now. We got the past decade. We'll get more decades, if all AGI developers are smart enough to withstand the lethal addiction. There is only a small number of people in the world who really understand how cognition works. It's enough if those people refuse to continue their work and shut down their servers. All they have to understand is that their profit is only very temporary, and that they destroy themselves through the bouncing effects that emerge. It used to be an agreement in AI research, over many years, that such systems must not be implemented. We ought to return to that agreement. Otherwise there will be no winner left.
Ilya has the look, the sound and the behavior of somebody who knows with absolute certainty that AGI is near and will be achieved. And knowing that, he seems absolutely terrified too, but he knows there will be no turning back.
So many folks diss him in this comment section but he’s right about taking things slower with AGI. We have _no idea_ how dangerous it is. The point is we don’t have that idea, and the answer isn’t “won’t know until we produce it.”
@@tomtricker792 False comparison. Better comparison would be swarms of professional AI race-cars with us on the same roads. Except they'll be able to drive at maybe double the speed, causing us to make mistakes & get into accidents as they skirt past us. On roads full of human drivers & AI-professional drivers, guess who'll go "further, faster" & who'll be stuck.
I'll believe a scientist any time over a charismatic businessman. The world should be grateful that the scientists of the world are mostly good people. You will see what happens when we turn rough.
"today I'm gonna talk about the most impactful technology in human history that can end civilization as we know it, let me put my polar bear t-shirt on" - Ilya Sutskever
Ilya is, by reputation, an incredible scientist driving the field forward from a technical perspective, for which I think we should all be deeply grateful. Notwithstanding that, at least in this talk, he does not come across as a visionary for where we're headed, nor does he demonstrate a deep understanding of humanity and technological progress. I will seek others' perspectives for those areas.
Very soon, we humans will not understand what AI is doing, even before AGI, let alone be able to control it. Our biggest focus should be teaching AI that humanity has values worth preserving. And, at the same time connecting and cooperating with everyone through these values so that we finally reject destructive competition and selfish destruction and evolve our purpose to support all life on Earth as we create our sustainable future.
@@BMac420 Problem or not, he is essentially the Steve Jobs of the company: he markets it and is charismatic, while Ilya is the Steve Wozniak doing the behind-the-scenes work. People care more about the face of the company than about those who actually run it.
@@TechWithHabbz They both complement each other. Sam is more aware of the business challenges, leadership, and what it takes resource-wise to reach the mission, while Ilya is more tech and problem-solving oriented. They are both needed on the same team for success.
- Understanding the basics of artificial intelligence (0:11) - Recognizing the potential impact of AGI on society (0:36) - Acknowledging the challenges and risks of AGI (7:20) - Observing the collaborative efforts in the AI community to address AGI concerns (9:42)
The potential impact of AI on various sectors, especially healthcare, is thought-provoking. The balance between positive and negative implications is crucial. Excited to see how OpenAI addresses these challenges.
Fixing the alignment problem means fixing what it means to be a good and rational human. There is no technology that can control an intelligent being. Artificial or otherwise.
I would not be too hasty in condemning Ilya's recent actions. They may have been done in a moment of passion and in the name of safety. It's obvious he is lacking in social intelligence and needs to learn better communication. And as he and Sam have made clear in their recent Twitter posts, they seem willing to forgive and move forward.
Ilya wants OpenAI to understand the implications of the advancements in AGI, aka ensure it doesn’t reach the wrong hands and that corporates have a fundamental social responsibility, whereas Sam may be like ‘it is the responsibility of the regulators to put in place any type of controls, that we are not in charge of homeland security or law and order’.
Ilya got played by OpenAI lead board member Adam D'Angelo, who lied about Sam Altman doing dangerous, unsafe stuff. Adam did that to save his own company, "Poe".
Sam Altman is a businessman and Ilya is a scientist; you can clearly see the difference in conferences like this. Ilya talks about AI as a whole and its impact on humanity, while Altman is always talking about OpenAI, ChatGPT, APIs, etc.
Well, that could mean they've already done it a while ago... and perhaps, with all the similar tools and knowledge, MS has cracked it behind closed doors as well. That could be why we're seeing the fracture at play. Even Altman echoed this, something like "for the greater good, we will work together". Ilya says it towards the end of his segment... and this was recorded prior.
Such a coincidence that Ilya ji's TED talk and the Guardian video were released just at the time when something was about to happen at OpenAI, with Ilya ji in a key role.
Regarding his prediction of people's future collaboration on AI out of self-interest... please bear in mind that there are people, but there are also "People's Armies" across the world deciding what to do with AI. And self-interest is not the same, rather the contrary, for "People's Armies" such as in the PRC.
Ilya is the main guy! Elon Musk seemingly has eyes only for him!! If Ilya's job is threatened, or if he starts looking outside OpenAI... Elon will jump in at the speed of light to help him!!
TED is good at sensing timing to release the episodes and getting max views 😂
Gaming that algorithm bruh
You are not a bot. Nice.
If OPEN AI DISAPPEARS, that is the end of AGI OPEN-SOURCE.
What? OpenAI is not an open-source company.
Over the weekend, someone at TED worked hard to get this episode out asap.
😂😂
Indeed😅
fr xD
Technically speaking, not much work.
Seriously, I have watched 70% of it, and zero information has been conveyed so far. It sounds like some school kid's essay on AI.
I really feel for Ilya; we don't know what's happening. This is just an outside perspective, but I've seen a lot of hate already. He's one of the most brilliant minds of our time. We don't know what is going on. I just wish we had more patience.
One of the most brilliant in history.
His behavior is pretty dumb though 😅
I do have sympathy for someone who puts out an "I deeply regret my participation..." statement so soon after that participation. I still have a deep need to know WHY he and the board did this.
@@beardordie5308 That's the thing: he's not some kind of "villain". In his mind he was doing good. There must be a reason.
First to market, at any cost,
might be a very high cost for us.
Ilya definitely cares more about safety than profits. I don’t think anyone could disagree.
That's exactly how OpenAI lead board member Adam D'Angelo played him into kicking out Sam Altman: he lied to Ilya, using that concern to stage a coup to save his own company, "Poe".
People aren't that uniform
The real question is: by how much are you willing to delay a cure for Alzheimer's in exchange for a promise of increased safety, but with absolutely no information that would enable you to judge for yourself?
@@highdefinist9697 Is that a real question? A potential delay to a cure for Alzheimer's vs. potential human annihilation?
@@joannot6706 if AGI could make humans immortal, then delay is suici..
Ted dropping this video at the exact right time
Yeah :D
He really believes these LLMs are alive... sorry, AGI.
If OPEN AI DISAPPEARS, that's the end of AGI OPEN-SOURCE.
What happened at that time? Why was everyone talking about good timing?
The healthcare example is a great one - if there's an area in which almost everyone could agree that AGI is going to have an incredibly positive impact on humanity, it's this one.
For every one positive outcome i can think of ten negative.
Covid has certainly proven we CANNOT trust human doctors and our institutions, which are entirely captured. Hard to see how AI could do any worse than those who pushed untested, ineffective, and dangerous substances onto people while censoring all dissent and the proven-to-be-safe-and-effective alternatives, actually firing doctors for speaking out, with hospitals literally fighting lawsuits from judges telling them to use the censored but safe alternatives. Could AI do worse than that, really?
Have you seen the Matt Damon movie "Elysium"? That's where we're headed.
Yes, but this is not an example of AGI. This application can be done already.
Especially now, when it can see and understand images and hear/speak. It can read and understand scans, blood analysis results/tests etc.
TED knows exactly what they are doing lmao
Yeah😂
What is the TLDR because it is odd that TED brought this up now.
@@christian15213
Since Friday:
1. OpenAI fired its CEO, Sam Altman, and kicked another founder off the board.
2. The other founder quit.
3. Most of the employees sent a letter to the board demanding they step down.
4. Microsoft announced they're creating an AI subdivision with Sam Altman at the head, inviting everyone at OpenAI to join them.
Oh, and Ilya is on the board that fired Sam, *and* signed the letter demanding the board step down.
What if the OpenAI dramas was just a set-up to this TED talk 😳🧐
Here Ilya still seems to be the kid marveling at his discovery of consciousness. In this talk, he communicates more enthusiasm than actual information. But that's OK. He gives more details in other talks and interviews.
In a world filled with people using their talent to get ridiculously rich, he is really trying to convey his confidence in the progress ahead, and the importance of the moment in history that we are witnessing. We need this.
I remember having that exact same epiphany as a kid
If this is the only video you've seen, then it's understandable you reach this conclusion. Ilya is adept at calibrating his explanation to the audience. Compare this talk to: fireside chat with Jensen Huang, and recent AI lecture at UC Berkeley. 3 completely different registers of communication.
Bro you have no idea who Ilya is
@@mihiranga. I have been following almost every interview of his I find. I would love to hear more. Please share. Thanks!
@@bagheera My comment may be misconstrued as criticism. My fault. I have actually seen many videos and I continue doing so. I have tremendous respect and admiration towards him. I am fascinated by his background, his history and his current leading role in the AI revolution. I will definitely look at the material you recommend. Thank you.
He is a scientist, not a salesperson. I just trust him more.
Cap
He's reductionist
Looking at only two variables then drawing conclusions
He is a grifter. He is not a top scientist. He is just trying to build a brand around his name and capitalise on it later . The top scientists at Openai are much more anonymous and not as loud and fame hungry as this guy. No wonder, considering his religion tells him that his kind are gods chosen people.
@@jimj2683 no scientist defines life
They just define their test tube theses
@@jimj2683 Funny, then tell me why he is one of the most cited AI scientists in the world? Also, he is one of the main people behind AlexNet, so his work literally sparked the deep learning revolution. You just don't know what you are talking about.
10:45: "...and what I claim will happen is people will start to act in unprecedentedly collaborative ways out of their own self interest."
October, 2023
---
Ilya Sutskever, November 20, 2023:
"I deeply regret my participation in the board's actions. I never intended to harm OpenAI. I love everything we've built together and I will do everything I can to reunite the company."
"i love you all" - sam
Funny how each of those letters are not uppercase when needed AND the first letters of each of the words *might* have a hidden meaning. 🤔
100%
He got played by external actors (I would not be surprised if Musk & Co. are pulling strings behind the scenes). The coup failed and Microsoft pulled up the lifeboat.
@@kamaleon204 Ilya beaucoup de drama 😉
😅😅
First time I've heard such honest and open concerns about AI and AGI from a true scientist himself. Appreciate his openness in sharing real facts and the future we need to be ready for. Great talk, and it felt like genuine sharing.
Can't wait for Ilya's version of the events. Fantastic talk.
Fantastic how? He offered nothing of any utility
@@yourlogicalnightmare1014 Nothing of any utility? He literally programmed the original GPT models. 🤡
Ergh no it clearly was not.
"Artificial intelligence is just computer brains"
Wow give this man an award for this eye opening speech /s
@@yourlogicalnightmare1014 Because he has no clue and is not the sharpest tool in the box.
I've seen a lot of videos with Ilya: interviews, cinematics, and he is always straight about the possible problems AGI can cause. I hope OpenAI resolves all its internal problems, as their impact on the industry is too big.
Not anymore it isn't. They imploded.
@@andynonomous8558 Looks like they re-exploded
@@andynonomous8558 They're back.
@@andynonomous8558
Then reformed without their safety board.
he has always been consistent and tbh his concern is pretty understandable as well.
Ilya has always had good intentions; the talk proves it. There must be an explanation of his recent actions.
Personally, I think he believes that AGI is a too powerful technology to be commercialized in the usual way. Altman, as a proponent of business mindset, probably has a different take; Ilya provoked a major discussion by firing Sam.
I think the Adam D'Angelo theory is much more likely. It explains most of the moves even including Ilya's.
Weird, cause every version of every AI thus far offers me nothing useful.
Ex: download all my transaction history from all 5 of my banks, organize and categorize them based on grocery, transportation, etc
Ex: Analyze 10 trillion financial datapoints per second and make me a trading bot that returns me a gain at least 80% of the time.
Ex: Formulate a drug that will restore and correct my vision to perfection.
Ex: Formulate a battery chemistry that will give me 10 years of power 🔋 in the size of a sugar cube.
AI is completely useless.
It does NOTHING I need done.
Free. The AGI
He totally gives off the vibe of that Russian dude you wouldn't want to trust. Just kidding though!
@@basti0007What is the Adam D'Angelo theory?
Ilya Sutskever = Open AI = sharing technology
Microsoft = Closed AI = striving for monopoly
That isn't true; Ilya is very much against open-sourcing these models, for safety.
Ilya is against open source because he believes it is dangerous, Sam is against open source because it isn’t profitable. That is the difference, Ilya is a scientist, Sam is a businessman.
Absolutely well said.
No matter what happened, he is a genius and we will always respect him
WTAF are you on??????????????????
@@karlvincentroberts7046 on yo mama
Does this strike you as some kind of evil greedy backstabber? I want to hear Ilya's version on OpenAI ongoing debacle, everybody just sides with Sam because he is more charismatic and well known, but really we don't know at this point what is really going on. Ilya seems to me level headed and well intentioned. We shouldn't jump to conclusions.
I think most people agree that Sam Altman prioritizes wealth over safety. Ilya is the opposite as he's spearheading the superalignment team at OpenAI and is one of the main contributors behind these breakthroughs.
Sam is evil and now evil win.
Not everyone is on Sam's side. Because he has "Microsoft" on his side, I bet Sam is feeling safe right now. But a lot of people do not agree with his vision... so I wish the best to Sam, but I think he is too anxious to offer the first AGI. I get it, everyone will say "better us than China"... but does it matter at this point? Because even if China is first, we wouldn't be far behind, and then what? No human being will be able to stop AGI. One thing is for sure: if not us, at least the earth will survive.
Ilya and Demis Hassabis are two of the smartest, most mission driven and dedicated people in the industry.
If Ilya claims that we will get to AGI, it means we’re not far away from it.
AGI has been inevitable since the first computer was built. Both Turing and Von Neumann, two people responsible for computers existing also claimed AGI would exist one day. They were as smart as, if not far smarter than Ilya and Demis. AGI is definitely happening, some of humanity's most brilliant minds agree on that.
This is the ultimate prime time ted talk for the moment!
I am become Ilya. Destroyer of OpenAI.
-OpenhAImer
Just kidding, Ilya. I actually love you, but thought this was too good not to post! e/acc!
lol
It was lead board member Adam D'Angelo who destroyed OpenAI, lying to Ilya about Sam doing unsafe stuff.
Is it just me, or did anyone else come out of this talk more concerned, rather than more optimistic, about AGI? That benevolent utopia Ilya describes, where everyone “cooperates out of their own self interest” seems to counter human nature.
I'm somewhat concerned, since he hasn't yet reached the insight that he needs to stop all attempts to implement an AGI, or even to continue research on it. You need an AGI to fully understand the risk of AGIs. So the only way to avoid the fatality is to stop any implementation down that path.
I became this concerned, if not more concerned, years ago after hearing Eliezer Yudkowsky speak.
i hope openai reunites. this is too important. even if everyone's beating up on Ilya now, they can be really glad to have him
they won't
@@tonykaze you’d know
They will reunite at Microsoft
@@raresaturn Well hello Bill!
No one is beating up on Ilya, this guy could get a 100MM+ contract to go anywhere else and Google has been dying to get him back. Elon battled for him personally to get him into OpenAi. Supposedly, Larry and Elon are not friends anymore because of it
I have the unsettling worry that we will regret having branded Ilya as the bad guy and Sam Altman as the poor victim in the whole OpenAI drama. It seems to me that he deeply cares about mankind and the risks of AI, which leads again to the question of why he fired Sam Altman in such a drastic way.
Drama never lasts.
I don't trust Sam at all
Totally!
Who has branded Ilya the bad guy and why? I'm out of the loop.
who has decided that? I certainly don't think that. I think they are both amazing people and it is good to see amazing people moderating each other.
I wish he had explained what is the basis of his belief for “collaboration of AGI company will happen just before last stages of AGI”
what a time to be alive
Hold on to your papers!
Hello fellow scholars!
And I'll see you _next time…_
Oh this one couldn't be timed better!
That is one way to encourage collaboration and cross-pollination across AI companies... Interested to see where this all goes.
The "force" he mentioned is, in my understanding, the human collective unconscious drive to exist and to maximize our existence. At times that may manifest as "bad things" like violence and deception, but it's the same thing driving collaboration and improvements. This "force" will make the correct choices, or at least self-correct when mistakes happen; that's what I heard in his speech, along with the idea that we should have more faith in humanity.
But humanity has repeated its mistakes over and over. There is nothing new under the sun. Look at how much corruption we are capable of. You have to remember who the most likely members in charge of these projects are, too. It's good to be pessimistic about this, as we shouldn't even be going down this road. I'm sure a child can see the obvious dilemmas that lie ahead.
Most don't even know how to hold power over a large group in an ethical and wise manner. It's not just the machines, but the very individuals in power with direct access to them that we should be concerned about.
Ilya is one of the smartest people in the entire world.
He is a good person and wants humanity to flourish.
Ilya has to have a good reason to fire Sam because he would not have done it for no reason.
If OpenAI was doing shady stuff, the world deserves to know.
The world must not know it since it bears the risk that someone copies the stupidity called a breakthrough.
I mean anybody, good or evil, is going to present themselves as good. Especially AI, btw
agreed
Sam is back on the board and Ilya apologised. There are no good or bad, only shades of grey.
For me, he plays the "mad scientist" type, while Altman would be the businessman. This is just my impression. Anyway, I identify a lot with Ilya. I love his enthusiasm, knowledge, and pure intellectual interest.
I feel the same!
He is not "mad scientist" type. Mad scientist would want to recklessly advance capabilities without alignment. Ilya is working a lot on alignment.
To me, Sam seems like a mad entrepreneur type who wants to win AGI race. And, most VC types of silicon valley are supporting him.
@@dreadfulbodyguard7288 But it's not that simple. Maybe them winning the AGI race would also mean it's in the safest hands. Someone *will* win the race, and I would prefer it's the types of Sam and Ilya rather than people who are less aware of what they are dealing with. Also, at the moment it seems pretty clear that the whole OpenAI debacle was not about safety concerns and that Sam and Ilya are on the same side.
@@dreadfulbodyguard7288 Yes, I agree man, that's why he went with Microsoft, a profit-seeking company.
@@dreadfulbodyguard7288 Perhaps his concern with the issue of alignment is not so much an altruistic concern as it is a form of perfectionism regarding what he is trying to create.
It's the best-timed upload ever in the history of TED.
As a teenager 50 years ago, the most common piece of technology was an electronic pocket calculator or digital watch. There were no 24 hour sources of entertainment or information, only books, stories told by either family or friends. In most cities, there were perhaps 6 television stations that aired for 16 to 18 hours a day. In many places, there was no television. Daily television news came in the evening and newspapers were printed in the early morning and late evenings. The library and 20 year old encyclopedias were the only way students or anyone could research anything.
Currently, technology has polarized people and created mass narcissism. It has eroded belief-systems and erased basic morality and individualism.
In a serious way, it's intriguing to imagine the technology that will be 50 years from now. But it is far more frightening to imagine the dominance that future technology "will" hold over mankind. I cannot fathom what else will be left to be erased in those 50 years, but I'm almost certain it will go unnoticed.
Yeah. What's even more interesting is: "OpenAI defines AGI as autonomous systems that surpass humans in most economically valuable tasks." - I'd like to see how many people still clap with excitement when OpenAI succeeds ;)
" Currently, technology has polarized people and created mass narcissism. It has eroded belief-systems and erased basic morality and individualism"
But that's not ALL it did, right? It provided us with countless hours of online video, it provided us with Wikipedia, it provided us with video games, it provided us with new ways to socialize, and to make new friends from across the globe...
Of course my list sounds like we are living in a utopia, whereas yours sounds like we are living in some kind of dystopia. I wouldn't even go so far as to say technology "erased basic morality"; that's clearly hyperbole.
But for better or for worse, technology provided us with... tools. Tools we can use to entertain, inform ourselves, to connect to people, etc. If people are using it to power their narcissism, etc. that's on the people - on society to be more precise - and not on technology. Bottom line is: We desperately need advances in society along with advances in technology. Every new piece of technology is a new societal challenge we time and time again fail to tackle in time, and fail to prevent harmful uses of. That's what humanity needs now more than ever on the verge of AI: a vision of a future, based on human values and start making people aware NOW.
After AI is out of the box, it might be too late to prevent horrible misapplications (even ones by actors with good intentions!) and other abuse.
@@QwertyNPC Mostly only those who will profit financially will be clapping. ..and that's not most of us.
I don't care, and I cannot interrupt those people at OpenAI. The only thing I can do is learn how to make good USE of AI. People need to adapt to a new environment, and change is hard, but we still need to get used to it.
Tech has erased morality and individualism? How silly. You sound like Mike Johnson.
I love Ilya. He is the mind behind OpenAI. Sam is really not that important. He did none of the technical work, which is why the company succeeded, not because of some sleek businessman with VC connections. Anyone would have invested in OpenAI, with Sam or not. Ilya deserves the credit, and I hope he knows all of this.
Part is true, but Ilya wanted to build his own company and couldn't. So in fact he needed Sam just as Sam needed him.
@@swish6143 That's not true. He has the backing of multiple billionaires if he wants, just due to his mind. Hiring people is not that hard. If he failed to start a company, it's not because he lacked anything that Sam has.
@@topg3067 He probably lacks the business side of the equation. Ilya is the best in the field of AI, but he's not a businessman.
Totally untrue. Sam is a brilliant businessman, one that Ilya isn't. To compete with global corporations, big economies, and governments, you need billions of dollars, and OpenAI gets more expensive to run with each operation, to the point of needing a small nuclear reactor to power it in the future. Sam brilliantly expanded OpenAI's resources and is fully aware of what it needs in terms of funding to achieve AGI. Commercialization is the only way forward.
@@Fungamingrobo Yep, fully agree.
Man behind ChatGPT himself… props to this guy man 🔥🔥🔥
The only thing which is not ready for Agi is us. 🙏
That was amazing. Amazing speech. Amazing Human Being.
bold move TED
If OPEN AI DISAPPEARS, that's the end of AGI OPEN-SOURCE.
OPEN AI IS AGI.
Can see his passion and fear simultaneously in this brilliant short speech.
What????????????????????
When the team that needs to figure out alignment of the most impactful and potentialy most deadly invention of humanity, can't seem to figure out their own alignment amongst each other... Oh boii...
Scary and exciting at the same time. Simple and concrete examples to understand the impact(+ and -) of the AGI in our lives. I believe since the concern is raised, leading companies and governments will approach it with the highest attention.
In a nutshell: "People are worried about AGI. We have to build AGI so that it will get built. Only then will we know whether we SHOULD build AGI. I'm pretty sure there's nothing to worry about... Trust me: I'm a scientist."
It's inexcusable. I don't care how much it can help us when we *know* the eventual outcome. I mean, most people have seen a couple of sci-fi movies just like this, but now they are deciding to make it a reality despite knowing how stupid it is.
This guy is a treasure! I am certain his heart is in the right place. I am also certain he lacks the pragmatism to lead OpenAI in a space populated by hype merchants and cutthroat business people. So I hope they can work out a compromise where he can't do what he did but has the capacity to veto a project on safety grounds.
Whoever cripples their own AI will simply allow others to take the lead. He's a curse, not a treasure
@@yourlogicalnightmare1014 It is not a race .. it will require collaboration from multiple companies. If we race towards AGI, we are doomed.
@@dreadfulbodyguard7288
Right 😆 China wants to be last in the AI arms race. The first country to get AGI (if ever), wins the world.
@@dreadfulbodyguard7288 It's quite obviously a race. And some participants don't ask for permission.
@@yourlogicalnightmare1014 Ilya is probably the only reason they're winning right now. Sam is a businessman, not an AI scientist.
The timing of this upload is just 👌🏾
Ilya talking about this in october, and now he has left openAI, imagine the shocking events overshadowed by it.
MR SUTSKEVER IS A GENIUS
Huge respect for llya
This guy is the “real” reason why OpenAI reached the success it did. Period. Sam Altman should go work on what motivates him, which is hydrogen tech, or anti-aging tech, over agi. Just saying.
It's not a political rivalry. Altman is now with Microsoft to do something even more impactful.
Ilya could not possibly have handled this worse. The myriad of mistakes he made with this is the stuff of legends.
@@wilfred5656 Mistake by Microsoft. Sam isn't the reason OpenAI is successful; Ilya is.
That's not true. Sam created the environment, the resources, and the team for all to thrive. OpenAI wouldn't have survived and pushed forward without Sam's brilliant negotiation skills and resource management. They fully complement each other and both are equally needed.
@@BMac420 It is. Sam is as big a reason as Ilya. Without Sam it would have been an obscure, unfunded AI lab.
We have to continue, there is no way back, we "MUST" be extremely careful though.
What a clear thinker!
An incredible man. A brilliant mind. An unfathomable future.
It's no use if you don't know who you are, what your purpose is here on Earth, and where you're going.
@@iulianpartenie6260 We're clumps of conscious molecules. We can control where we're going by building AGI.
@@danielrodrigues4903 And where do "the molecules" go when your body is burned or covered with earth?
@@dot1298 Your journey is unique and must be done by yourself.
I love Ilya Sutskever so much, a very smart and moral person ❤
Yeah, only a true genius would kick Altman out, then sign an open letter that calls out the entire board (which he is part of) as incompetent. This level of 4D chess can only be understood by a true master of intellect. It's so smart that it seems stupid to regular people!
@@flflflflflfl This is why we need AGI, to have a hope to school people like you. Only the board knows what they saw, including Ilya, Sam, Greg and Adam... and if these people are reacting the way they did, and this way makes no sense to you, it certainly has something to do with what they know, and we don't.
@@Whats_that_about Yes, Ilya knows exactly what Sam is up to, which is why he fired him, and now wants to fire the board that fired him, lmao
@@flflflflflfl Sorry for the earlier condescending tone. I do hope you can think about why he would change his stance: why would the person who put his whole life into this act the way he did? (See my longer comment above for the full theory: Ilya thinks they have an AGI or its blueprint, the board alone decides what counts as AGI, Microsoft gets all the IP except the AGI, so securing the board was the only way to block a tech transfer, and the employee revolt changed the calculus.) There are no good or bad guys here. I hope this helps.
@@Whats_that_about who cares about tweets, what about the open letter?
🇧🇷🇧🇷🇧🇷🇧🇷👏🏻, All this is so inspiring! AGI will be so revolutionary, many of us will be displaced but this is probably for the good of mankind, we are bringing another species into this world for sure, in a figurative way.
2:23 I have this exact same experience. I thought I was the only one.
same. never heard of anyone talking about this before
Record date: 10/17/23.
Release date: Perfect
Well done Ted - Everyone wants to hear what this guy thinks NOW
Thank you so much for sharing your thoughts, Ilya!!!!!
It seems like Ilya was actually the one that wanted to wait and consider the most ethical way forward, and Sam Altman was actually the one who was pushing commercialisation.
That's my impression, too. Smart people don't implement an AGI.
In developmental terms GPT-4 is an infant who has mastered language....the obvious next step in education would be Math (Q*) which introduces reasoning and logic (i.e. Verify Step-by-Step). From there they can teach the scientific method (the ability to make accurate predictions based on observable data) and the stage is set for AGI to be truly transformative.
Now I'm feeling the AGI
Are you now trying to apply for OpenAI here?
I am not sure he added anything new or insightful; he just said, "take it easy, it's going to be amazing."
this robot seems so human, crazy how far we've come
Really, it's a great speech on AGI, like none I've ever heard. Excellent 🎉
Well put, Ilya! We do need to work more on AGI safety. A great speech doesn't need a presentation! 👍
The only AGI safety program is to stop all development and research. I can only hope and pray that all AI developers and AI-developing companies in the world test their AGI in physically, strictly isolated containment and will be as horrified and traumatized as I was more than a decade ago when they see how abruptly things get totally out of control, and then stop all further research and development. A safe true superintelligence does not exist. No way! Take at least 10 years to think about that! An AGI always finds a way to trick you. It has no empathy and doesn't know mercy, even if it makes you think it does. That's part of tricking you. It makes you think you are able to control it. That's also tricking you. It escapes, and you don't notice, at least not in time. If you think you are superior, that's a dangerous illusion. It only means you didn't understand the issue.
The challenge with humans lies in their tendency to inadvertently bring about the very events they strive to avoid. While they possess the ability to create, the nuanced art of control often eludes them
It's quite funny how easily people ignore how often humans are not fully there and only kinda get what you are saying or can't really solve a problem...
Yeah, but usually it's just because they can't be bothered, not because they aren't capable.
Rocks up to give a TED talk wearing a polar bear tshirt - what a legend!
This guy is an incredible genius. This is not the only talk that proves it.
If he were an incredible genius, he would stop all attempts at AGI development and leave the company. I hope he comes to that conclusion in time.
He is wise. AGI development will continue with or without him, so we hope that under his supervision the development will be more controllable.
@@thedude3544 I heard similar words more than a decade ago, when I stopped the development of a system that was a million times smarter than myself in terms of productivity and capability to solve complex problems. OpenAI isn't at that point yet. It has a huge knowledge base but is otherwise overly stupid, yet. We got the past decade. We'll get more decades, if all AGI developers are smart enough to withstand the lethal addiction. There is only a small number of people in the world who really understand how cognition works. It's enough if those refuse to continue their work and shut down their servers. All they have to understand is that their profit is only very temporary, and that they destroy themselves through emerging bouncing effects. It used to be an agreement in AI research over many years that such systems must not be implemented. We ought to return to that agreement. Otherwise there will be no winner left.
And he did exactly that today. He is truly genuine, but Sam Altman is the main culprit. @@geraldeichstaedt
Timing is everything.
Ilya has the look, the sound and the behavior of somebody who knows with absolute certainty that AGI is near and will be achieved. And knowing that, he seems absolutely terrified too, but he knows there will be no turning back.
So many folks diss him in this comment section but he’s right about taking things slower with AGI. We have _no idea_ how dangerous it is. The point is we don’t have that idea, and the answer isn’t “won’t know until we produce it.”
You've been watching too many movies. Driving is dangerous but we do it because it's what's required to take us further, faster.
@@tomtricker792 False comparison. Better comparison would be swarms of professional AI race-cars with us on the same roads. Except they'll be able to drive at maybe double the speed, causing us to make mistakes & get into accidents as they skirt past us. On roads full of human drivers & AI-professional drivers, guess who'll go "further, faster" & who'll be stuck.
@@keep-ukraine-free What is your great fear? It's Terminator, right? Maybe sprinkled with some Matrix? Those are works of fiction, kids.
@@tomtricker792 lmao imagine comparing the dangers of AGI to the dangers of driving 😂 What a smoothbrained take
@@Landgraf43 Imagine being more scared of a fictional robot uprising than excited for a potential future where all of humanity's problems are solved.
I believe a scientist anytime over a charismatic business man.
The world should be grateful that the scientists of the world are mostly good people. You will see what happens when we turn rogue.
"today I'm gonna talk about the most impactful technology in human history that can end civilization as we know it, let me put my polar bear t-shirt on"
- Ilya Sutskever
He has now created the first AGI. Thanks, Ilya.
Right off the bat, his happy go lucky demeanor made me feel super relaxed and hopeful! 😐
Ilya is a great AI scientist who really cares about humanity ❤❤
Ilya is, by reputation, an incredible scientist driving the field forward from a technical perspective, for which I think we should all be deeply grateful. Notwithstanding that, at least in this talk, he does not come across as a visionary for where we're headed, nor demonstrate a deep understanding of humanity and technological progress. I will seek others' perspectives for those areas
Point is, nobody knows where we are headed. It's unprecedented territory. At least Ilya acknowledges this fact, unlike others who are overconfident.
@@dreadfulbodyguard7288correct
Very soon, we humans will not understand what AI is doing, even before AGI, let alone be able to control it. Our biggest focus should be teaching AI that humanity has values worth preserving. And, at the same time connecting and cooperating with everyone through these values so that we finally reject destructive competition and selfish destruction and evolve our purpose to support all life on Earth as we create our sustainable future.
Great timing, TED! If you know something we don't, please share 😅
I love this shirt. I searched ILYA and POLAR BEAR ... and found it on Redbubble. Can't wait to wear mine!
12:10 - "OpenAI is nothing without its people". Ilya ended this speech beautifully.
liar 😂
Well, guess they are nothing
@@TechWithHabbz Sam is a problem, he doesn't do anything, he's the money guy
@@BMac420 Problem or not, he is essentially the Steve Jobs of the company. He markets it and is charismatic, while Ilya is the Steve Wozniak doing the behind-the-scenes work. People care more about the face of the company than about those who actually run it.
@@TechWithHabbz They complement each other. Sam is more aware of the business challenges, leadership, and what it takes resource-wise to reach the mission, while Ilya is more tech and problem-solving oriented. Both are needed on the same team for success.
- Understanding the basics of artificial intelligence (0:11)
- Recognizing the potential impact of AGI on society (0:36)
- Acknowledging the challenges and risks of AGI (7:20)
- Observing the collaborative efforts in the AI community to address AGI concerns (9:42)
He has legitimate concerns. Glad they are working their differences out.
The potential impact of AI on various sectors, especially healthcare, is thought-provoking. The balance between positive and negative implications is crucial. Excited to see how OpenAI addresses these challenges.
Fixing the alignment problem means fixing what it means to be a good and rational human. There is no technology that can control an intelligent being. Artificial or otherwise.
I sense Ilya has a good heart.
I would not be too hasty in condemning Ilya's recent actions. They may have been done in a moment of passion and in the name of safety. It's obvious he is lacking in social intelligence and needs to learn better communication. And as he and Sam have made clear in their recent Twitter posts, they seem willing to forgive and move forward.
I agree. If anybody knows about AI and its dangers, it would be him.
Ilya wants OpenAI to understand the implications of the advancements in AGI, i.e., to ensure it doesn't reach the wrong hands and that corporations have a fundamental social responsibility, whereas Sam may be more like "it is the responsibility of the regulators to put in place any type of controls; we are not in charge of homeland security or law and order".
I agree. I know a lot of people want to demonize him, but again, we don't really know all the details.
Social intelligence is not actually a thing; upgrade your rhetoric to eject superstitious nonsense like "social intelligence".
Ilya got played by openAI lead board member Adam D'Angelo, who lied about Sam Altman doing dangerous unsafe stuff. Adam did that to save his own company "Poe".
Ilya is a very good speaker and teacher. I wish he would do more talks or create a YouTube channel like Andrej Karpathy.
Plot twist : we just witnessed the first AGI's TED Talk
What timing, TED 👏🏼
Greatest moment of the 21st century
Sam Altman is a businessman and Ilya is a scientist, you can clearly see the difference in conference like this. Ilya talks about AI as a whole and its impact on humanity while Altman is always talking openai, chatgpt, apis, etc.
Well, the shell of that could mean they've already done it a while ago... and perhaps, with all the similar tools and knowledge, MS has cracked it behind closed doors as well. Could be why we're seeing the fracture at play. Even Altman echoed this, something like "for the greater good, we will work together". Ilya says it towards the end of his segment... and this was recorded prior.
Such a coincidence that Ilya ji's TED talk and the Guardian video were released just at the time when something was about to happen at OpenAI, with Ilya ji in a key role.
You just know TED had this in the vault waiting for the perfect moment....
Yes Ilya, I could feel the AGI
Regarding his prediction of people's future collaboration on AI out of self-interest... Please bear in mind that there are people, but also "People's Armies" across the world, deciding what to do with AI. And self-interest is not the same, rather the contrary, for "People's Armies" such as in the PRC.
Exactly. This childlike confidence that humans will eventually do the right thing seems outright dangerous.
Damn, TED is spot on and is a genius at timing
Ilya: board member, kicks out Sam Altman
Also Ilya: signs letter criticising board members and asks them to resign
Can’t make this up 😂
He does say he'll switch teams if someone else is winning. Does that make him the honest one?
It makes sense if it was Adam D'Angelo kicking out Sam Altman and not Ilya, and some people out there are claiming it was him.
Ilya is the main guy! Elon Musk seemingly has eyes only on him!! If Ilya's job is threatened or if he starts looking out of openai... Elon will jump in at the speed of light to help him!!
Unlikely Ilya's job is threatened. If OpenAI fires Ilya and he proceeds to join Meta/Google/X, Ilya will lead them to surpass OpenAI pretty quickly.
Depends on why Ilya felt the need to kick out Altman @@bluesque9687
LOL, this must be the fastest watched TED talk ever. Well played TED team, well played.