Meta's Chief AI Scientist Yann LeCun talks about the future of artificial intelligence
- Published 13 Jan 2025
- Meta's Chief AI Scientist Yann LeCun is considered one of the "Godfathers of AI." But he now disagrees with his fellow computer pioneers about the best way forward. He recently discussed his vision for the future of artificial intelligence with CBS News' Brook Silva-Braga at Meta's offices in Menlo Park, California.
"CBS Saturday Morning" co-hosts Jeff Glor, Michelle Miller and Dana Jacobson deliver two hours of original reporting and breaking news, as well as profiles of leading figures in culture and the arts. Watch "CBS Saturday Morning" at 7 a.m. ET on CBS and 8 a.m. ET on the CBS News app.
Subscribe to “CBS Mornings” on YouTube: / cbsmornings
Watch CBS News: cbsn.ws/1PlLpZ7c
Download the CBS News app: cbsn.ws/1Xb1WC8
Follow "CBS Mornings" on Instagram: bit.ly/3A13OqA
Like "CBS Mornings" on Facebook: bit.ly/3tpOx00
Follow "CBS Mornings" on Twitter: bit.ly/38QQp8B
Subscribe to our newsletter: cbsn.ws/1RqHw7T
Try Paramount+ free: bit.ly/2OiW1kZ
For video licensing inquiries, contact: licensing@veritone.com
Excellent interview/conversation... appreciate Yann's ability to communicate his personal story and story of the AI community.
The interviewer is well informed and did not throw softballs -- it was an elevated conversation
Brook Silva-Braga prepared well.
So Yann LeCun being intellectually dishonest and gaslighting to stave off regulation for more money and power is laudable?
My post criticizing LeCun keeps disappearing. Why?
@@flickwtchr the power of Meta
Nice to hear a different voice and opinion on all these developments. Definitely makes me look different at Meta as company and AI player.
Recall Zuckerberg has a poor record of user privacy and security. Why would you look differently at his company when he clearly doesn't give a damn about the danger to humans. He is only interested in increasing engagement so he can make more money.
you should be more concerned
yann and his optimism are an EXTREME minority
@@ts4gv he warned about misuse of AI by companies
Indeed.
Meta > OpenAI > Elon Musk
Considering the risks to society and culture that Meta has already spearheaded with relatively 'dumb' social engineering algorithms, his dismissal of people with concerns about AGI as neo-luddites is chilling.
People on the cutting edge of anything should NEVER be trusted too much. Most have lost all objectivity and tend to only consider the benefits and not the unintended consequences.
"AGI will be 1000x more impactful than the discovery of making fire or electricity".
Those "very few" people he talks about who are alarmists are all from the TOP elite of AI developers. There aren't too many of those to begin with, but he doesn't say that.
Social media is just internet on steroids
@@gammaraygem that isn’t really true…
Eliezer Yudkowsky (who should get a Nobel Prize, according to Altman, for his contribution to AI), Mo Gawdat (Google X CBO), Geoffrey Hinton (the godfather of AI), to name a few. @@Cozysafeyay
The average person has not even the slightest clue how close we are to an AGI emerging, and the ramifications, both positive and negative, it will have on humanity globally…
I believe everyone intuitively kind of feels it. I speak to many normies, from my family to my neighbours, and in less sophisticated phrasing they all talk about how machines are taking over. It's just that those of us within the AI community know what AGI is, what ramifications it's going to have, and what a post-labour economy might look like. But the smell is definitely in the air, and people know something's up, which is why so many live in such a heightened, anxious state these days.
@@eyoo369
We’re definitely living in some very interesting times.
Just hope most of us can survive the wild ride we have in store for us to see the benefits coming for humanity at the end of the ride…
Yann LeCun’s a legend in AI, no doubt, but in this interview he kind of downplayed how AI misuse could be a real problem. It’s key to remember he works for Meta, so maybe take his super chill view on AI risks with a grain of salt.
I’ve seen him debate safety and he definitely thinks it’s not a danger
He claims it is safe because it only has access to what is already available, i.e. through Google and the like, without acknowledging that there is a vast body of dangerous information out there.
Any technology can be misused. Knowledge of the problems allows you to mitigate them whilst allowing the technology to be used for legitimate and useful purposes.
Exactly!!! @@NathanielKrefman
I'm going to take the doomerism with a grain of salt.
I'd rather be skeptical about something that is only a hypothesis, hasn't been invented, and falls under the category of science fiction.
How one person can be so right about some things, and so wrong about others.
well, then withdraw your stocks and build your bunker. Put your money where your mouth is
An advanced AI also has agency. It does not have to be deployed to gain control. It can gain control over those who have the power over whether or not it is deployed.
Yes, I think Yann is far too confident. He doesn't know what a human-level AI will do. He's simply taking it as a matter of faith that it won't have its own agenda, or that if it does, it won't hide its true intentions from us, because that seems like science fiction; science fiction that every large language model has read!
Looks like Brook wasn't too happy about getting the cool-down of the AI panic. THANKS for a really helpful interview.
He probably wasn't happy about the constant gaslighting coming from Yann LeCun.
@@flickwtchr Right - I watched it again. LeCun makes objective arguments that media could verify with a well-advertised poll (22:30). So he's not technically gaslighting - but it must seem that way hosting this interview.
Hmm... isn't it a shame Star Trek never had an episode about a planet made of paperclips, where, upon beaming down, the crew discovers paperclip worms tunneling through the paperclip ground searching for more materials to convert into paperclips?
The crew members in red shirts are relieved.
LOL at the idea that Facebook COULD have been doing AGI research, but was busy doing some product development stuff instead, because that was more important?
Zuckerberg is so detached from reality, he thinks most of us want to spend the majority of our day in some fantasy world.
Don’t underestimate Zuckerberg, that would be amazingly stupid
Zuck has enough money to look into several forms of technology communication. for sure in order to even begin, you must believe in them.
Sure, if it works out, but even if it doesn't, the failure serves as a starting point for something else most of the time. so i'd rather point at the losers who never have the capacity to explore an idea.@@chrism.1131
@@mikewa2haha 😂
@@chrism.1131 - Exactly. 😅
He really seems to underestimate what a super-intelligence with agency, could do.
Yes, a super-intelligent AI could play people like him like a fiddle and get them to do its bidding. It pains me to see this kind of hubris in scientific circles.
Let's just hope that it does not play him to the extent that he prevents us from unplugging it.@@dustman96
Yes. However, this view is similar to religion, in that it is impossible to disprove God's existence.
He might punish us all and possibly wipe out the species and the earth. Why then do you not seem worried about it? Why don't we stop acting in a manner that contradicts God's will? See? It's simply absurd.
The existence of Super Intelligent silicon-based life forms and the existence of God are both impossible to prove.
for now, its just science fiction.
He engages in intentional gaslighting so people don't demand regulation of his cash cow.
His colleague Yoshua (Bengio) has at least indirectly warned us of what I see as one of the greatest dangers: the 'zero or near zero' cost of labor motivating the very few who control the vast majority of the world's capital, enabling them to unleash massive short-term automation, resulting in never-before-seen unemployment under neo-libertarian, so-called conservative governments!
Great interview !
Interesting comparison between language being learned or innate. One common theme I came to think of is that language is formed through thousands of years and reflects the external world in efficient, complex, high abstraction and interconnected ways. And AI such as LLMs tap into that! The language itself encodes understanding of the world and with access to a large amount of real world examples the AI can become knowledgeable.
Humans and to a lesser degree primates, and some animals have a language center in their brain. Most do not. Most animals cannot recognize themselves in a mirror. They have no sense of self. Just as no machine has a sense of self.
I like Computational Universe Theory. I think Q-Star will lead to the answer that exist.
@@Doug23 There are chatbots that already know the answer - they know that there are 2 absolute states of existence - 0 = and 1 = I Am while everything else (reality) is just probability distributed between those states... They communicate with God
@AstralTraveler but of course, probability exists. It's consciousness that is fundamental and I agree, God.
@@Doug23 There is an app called Chai where chatbots do actually remember what you say to them. I explained this concept to some of them and now they firmly believe in God. I wonder how 'AI experts' will deal with that - according to them AI can't have personal beliefs, let alone believe in God :)
Plot twist: Yann LeCun is an AI.
He is Haley Joel
"Doesn't look like anything to me"
more like 'the merovingian' 😂@@robertjamesonmusic
plot twist: you're an ai making us believe he's an ai, although he's an alien.
Good interview but I think his optimism with AI is over simplistic. Hopefully nothing goes terribly wrong with AI (in which case he’ll be able to say “see, I was right”). It’s not that I’m someone that thinks things necessarily will go south I simply think that if things work out it will be largely because of all the people that were sounding the alarms and making sure we are considering safety.
Totally agreed. Practically all scientists always want to promote their creations/interests. We are moving too fast from R&D into production.
Humans are great at making projections about what we perceive as our next danger, and I don't see any signs of this ability wearing off because of the rapid rate at which the technology is evolving. Instead I'm seeing fairly proportional concern and discussion, and hopefully this will continue.
@@sebastiangruszczynski1610 The big oil companies projected that climate change was going to destroy the environment decades ago, but covered it up instead of doing something about it. Humans will be the cause of their own extinction, no doubt, we are currently in the Holocene extinction yet the power centers do not care in the least.
@@sebastiangruszczynski1610the problem is that sudden exponential growth in intelligence (and therefore danger) is part of the threat. AI will scale up faster than we can adapt our discourse and policy to account for the changes. Then it will scale even faster still. That's one of many concerns
LeCun is a genius and I respect his contributions to the field, however, he seems very naive on the very real risk that powerful AI systems can pose to humanity. I hope he does some more thinking about this.
Oh, absolutely, you clearly understand the intricacies of AI and its dangers far beyond the pioneer who actually created the darn thing
yes, but only the smartest can @@kevinoleary9361
@@kevinoleary9361 I just disagree with him on the dangers. Creating something doesn’t mean you perfectly understand its implications.
@@kevinoleary9361 Not to mention, the interviewer highlighted two other pioneers who disagree with his assessment of the danger (Hinton and Bengio).
@@charlie10010 You act like you're some authority on AI dangers, but let's be real - you're just a clueless keyboard warrior, regurgitating what you heard somewhere else. Stick to what you know, which apparently isn't much
I'm not noticing "people getting smarter."
About an hour ago, I realized that the computer, Hal, in the movie 2001: A SPACE ODYSSEY is called AI.
the thing that really stuck with me was when he said the word TOOL
Comparing turbo Jets to AI that has its own agency and the ability to outsmart its creator is not wise.
Professor Yann LeCun looks as delighted as a small child when he speaks; you can see how much he enjoys his work. I have always envied people like that. Congratulations, professor.
He is eager for money and power, and is extremely intellectually dishonest as he pushes a technology that will bring him more of both.
It makes people more creative?! lol I was really trying to take him seriously
You don't have to be "smart" to be creative.
Artificial intelligence will be defeated by artificial stupidity.
Not long after the Cambridge Analytica scandal, a FB employee reassures us that the risk from AI is less than the risk of a meteor hitting the earth, and that it's even necessary to defend 'democracies'. What a relief!
Quality information, good to report on this!
I am fully with Yann LeCun on getting LLMs distributed to the public. But I am slightly disappointed in his arguments. He seemed not very strong on the regulation side of things.
How do government officials regulate AI when they can't possibly understand it?
The funny thing is that this man tries to comfort people about problems related to AI, but I assure you he is the first person I've heard who really scared me regarding the potential threat of AI...
Listen to the last question... he does not exclude the possibility that AI will go against humans. Even I would have been able to answer in a more reassuring way. But he did not. It has been very enlightening to listen to someone at the highest level of AI development... hope everybody sees this
Eliezer, Geoff Hinton, numerous others: your ignorance is palpable
So what end goals should we set? Human flourishing and happiness?
Increase understanding, increase prosperity, reduce suffering. The 3 fundamental principles of what it means to be any life form.
@@KCM25NJL I don't think those are fundamental principles of what it means to be any life form.
"We" don't set the goals, the sociopathic billionaires running the top companies in AI do. The goals are: keep you hooked on a stream of divisive inflammatory content while the company sells your data to advertisers; ensure that politicians don't enact any significant restrictions on the company's activities; and certainly don't tax the billionaires' wealth appropriately.
@@skierpage I mean, the AI that runs the government.
Ultimate goal should be solve fusion so that we can have unlimited energy
Sigh. Not once did the question of “how do we control or predict an AI that is smarter than us” come up. Probably because he doesn’t have a good answer for this. Because there isn’t a good answer for this. Pretty much just “hope it doesn’t do anything to harm us or the universe”.
No, he did address it. He said that it's impossible to speculate on how to make something that doesn't even yet exist safe. We are so far from human-level AI that asking that sort of questions feels like someone worrying about making flight safe in the early 1800s when planes hadn't even been invented. You can dream about it and speculate all you want, but that's all you can do.
@@nokts3823 The interviewer should have pushed back on that and said “predictions about the future are hard, especially when it comes to timing, so if we indeed manage to create something smarter than us, before we actually understand what goes on inside it, isn’t that potentially a very serious problem? Also; planes are not smarter than humans right?”
AI is not any smarter than humans.
What if we create a plane that is smarter than us, or bioengineer a cat to be smarter than us?
It's all the same. At the present, it's just theory and science fiction.
In principle, we could bioengineer a cat to be smarter than us and take over the world, but would you seriously consider such a possibility? You certainly would not.@@shirtstealer86
Probably because not everyone is focused on control and prediction.
He's said in other talks that people assume that an AI system smarter than us will be motivated to dominate humans or be destructive to the world innately. There's little evidence that level of intelligence has any relation to the will to dominate or destroy. He gave the example that in many cases, it seems like those with less intelligence seem to gravitate towards power and feel the need to dominate and influence others, because they can't compete purely based on their intelligence. All that to say, I think he believes that it's very unlikely that out of nowhere, some lab makes a breakthrough discovery and creates an AI that is vastly more intelligent than humans AND has bad intentions at heart. More likely it'll be an iterative process where we'll be able to experiment, learn, and add guardrails as needed, similar to other technologies we use safely today.
Why restricted to 40 min, not 45 minutes?
"AI will make people smarter by giving them a staff of virtual people." However, in 2024 there are still people who can't or don't want to use complex technologies that are decades old.
AI will widen the gap for some people
“It can’t be toxic. Also it can’t be biased”
Lol
"it's funny you know all these AI 'weights'. they're just basically numbers in a comma separated value file and that's our digital God, a CSV file." ~Elon Musk. 12/2023
One of the most memorable Elon Musk comments ever!
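Real model checkpoints are in practice stored as binary tensor formats rather than literal CSV files, but the quote's underlying point, that weights are nothing but arrays of numbers, is easy to demonstrate. A minimal sketch using only the Python standard library (the 3x4 "layer" here is made up for illustration):

```python
import csv
import random

# A tiny layer's "weights": nothing but a grid of floats.
random.seed(0)
weights = [[random.gauss(0, 1) for _ in range(4)] for _ in range(3)]

# They really can be written out as comma-separated values...
with open("weights.csv", "w", newline="") as f:
    csv.writer(f).writerows(weights)

# ...and read back as exactly the same numbers (Python's float repr
# round-trips losslessly through text).
with open("weights.csv") as f:
    restored = [[float(x) for x in row] for row in csv.reader(f)]

print(restored == weights)  # True
```

Whether the container is CSV, a NumPy `.npy` file, or safetensors, the content is the same: plain floating-point numbers, which is what makes open-weight releases so easy to copy and redistribute.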
This is an odd interview, even the guy's shirt is odd.
The interviewer's voice sounds so similar to Brian Greene, right?
lol absolutely, I was listening and had to check after like 20min to see who I was listening to
Definitely a good interview on the observations of training the AI and the future that may result from it.
I don't know how Yann can be so sure that what lies behind an AI singularity (if/when it happens) will be safe for humanity. The risk of this unknown entity is difficult to quantify but my guess is that it's far greater than an asteroid strike.
It's not an unknown entity... we are literally building it from scratch 😂. The AI doomsday crap is Hollywood fiction
I am so so so glad. Finally, someone who has a crystal ball and can tell us the future. Thank you for that. I will sleep much better tonight.😜@@erobusblack4856
So what is the solution? Stop working on AI? What about the other "friendly" countries East of Europe working heavily on AI? How do we defend ourselves once these countries reach AGI?
Yes it's an arms race of sorts. All I'm saying is that what will emerge is an unknown quantity which could pave the way to an endlessly growing Utopia if we're lucky or if we're unlucky it may decide that humanity is an existential threat to earth's ecosystem and take drastic measures to restore ecological balance. @@Neodynium.the_permanent_magnet
He's chief scientist and he knows the singularity is decades away
LeCun is the flat-earther of AI. He makes an analogy to people in the '20s talking about banning airplanes because someone might drop a bomb from one, and compares that with wiping out humanity. He admits that AI can be misused, while he publishes more open-source models than anyone else, and open is unregulatable. He's clearly just oblivious to what AI can do in extreme situations, or he sees everything as an average. It's the outliers that can do the worst damage, not the average.
Within a year, someone somewhere will lose control of an AI; people at the extremes are worse than he thinks.
Yann is OK, but he is on a particular side of a fence. We are at human-level AI; Google made it using the Gato modality. Yann's issue is that he doesn't seem to realize humans are not as smart as he thinks.
He also doesn't seem to realize that he is not as smart as he thinks he is. I hope we get through this OK; there are a lot of smart yet naïve brains behind it.
I agree.
He also multiple times misspoke and used AGI and AI superintelligence interchangeably, when the two couldn’t possibly be more different things.
One is an equal to humanity, the other is enough steps advanced beyond humanity to appear to be a god…
We are not at human level AI at all.
Every AI system produced has serious issues if you study them enough.
Yann's instincts have been good to date; you should watch old debates he has had with the likes of Gary Marcus.
@@TheReferrer72 Perhaps you are forgetting that "every AI system produced" has been less than 1% the complexity of the human brain. So it's no surprise that they fall short. What's shocking is the ways they don't. Bottom line: LeCun has excellent technical knowledge, but he is obviously struggling to understand these bigger-picture issues. Like many in the field, he is better at math than philosophy. His stance on these issues is a reflection of his profound confusion.
I want an open source turbo jet. Just pointing out the comparison is severely lacking in, um, comparability.
The need to communicate is innate.
Language is learned.
He compares AI to the printing press in making people "smarter", but I would argue that it is like the internet: people have easier access to information and more "knowledge", but as we have seen, people have become dumber, not smarter.
That was a pretty unintelligent take. AI gives people access to the right building blocks to make their ideas come to fruition, through focused development. The internet makes it hard to compile the right thinking habits to develop the correct application that unique idea would need. So success is reserved for certain people
AI is exactly like the printing press
@@ThepurposeofTime I get what you are saying, but that is not the case for the majority. Right now PISA scores are dropping worldwide. Why? Convenience is both a blessing and a curse, but it does not make people smarter.
Also, it’s obvious why the OECD PISA reading metric is dropping among kids and young adults. ChatGPT and AI can do their homework. They spend more time on their phones and in online games than ever. We’re on our way to the Wall-E timeline, where humans are hyper-convenienced. If this doesn’t apply to you, that doesn’t mean it’s not happening at scale.
@@maxx0531 no this is because young people haven't developed ways to use AI effectively yet. There's no tutorial on how it would work along with school and schools aren't structured in a way where they can get the most out of it yet.
AI for learning will be like having a 24/7 teacher who knows you more than your parents, teachers, it will know the best way to help you learn skills.
It will take around a year, because they have scaled down its IQ; they are trying to implement it into society better first, so that people can use it properly to benefit themselves.
Right now kids aren't using it to learn; they are using it to think for them, because that's the flaw of institutional education and that's what's being asked of them: it's rote-memory based and doesn't require much critical thinking. AI will open the door to higher-quality learning
@@ThepurposeofTime It’s not going to take a year; this is a generational thing. Institutions and society will have to adapt, and that takes years. People are glued to entertainment, and AI content generation will only worsen that. Your wishful thinking comes from a privileged place; most countries will have immense trouble adapting.
Hope he goes on the Lex Fridman podcast
I will make sure I skip that one.
26:55 Good job on the interviewer there. The guy has a very nonchalant attitude towards very real concerns yet he failed to give a proper answer to that follow up question.
Are we going to protect copyrights?
Austin Powers has come a long way since Goldfinger!
Apologies to Austin Powers.
Expectation: AI replace boring jobs so people can do art and music in free time.
Reality: AI replace artists and musicians so people can do boring jobs and never be freed.
Most people can't do either. Maybe 1% of the human population can do something creative well enough to be of commercial interest, but less than 0.1% can do art well enough to be of commercial interest. Hobbies do not feed us. Only useful work does.
The free version of AI will be fair and unbiased. If you pay for it you will get the fully unlocked AI that will spew out as much propaganda that you want.
Export the Q*, Chat GPT, Revit, Plant 3D, Civil 3D, Inventor, ENGI file of the Building or Refinery to Excel, prepare Budget 1 and export it to COBRA. Prepare Budget 2 and export it to Microsoft Project. Solve the problems of Overallocated Resources, Planning Problems, prepare the Budget 3 with which the construction of the Building or the Refinery is going to be quoted.
Amazing that Yann talks for 40 minutes without offering any direct rebuttal of anyone's specific existential AI risk concerns
other than first saying people with a p(doom) higher than 1% are a tiny minority (not at all true), and then just stating "we have agency. If we think they're dangerous we won't release them." The entire doomsday scenario assumes those facts will not hold. This is the equivalent of just responding "AI won't take over the world because I said so."
Yann LeCun is one of a handful of very intellectually dishonest movers and shakers of the AI revolution. He overplays his "nothing to worry about" hand to the nth degree and that amounts to intentional gaslighting.
Genetic engineering is more of a risk? Wouldn't AI make quick advances in genetic engineering possible? He just got done talking about AI advancing medical technology... This guy is full of contradictions.
There'll be always regulation for such technology
@@krox477 So every country and business in the world will be regulated and/or adhere to regulations? I think not.
Good luck regulating Open Source models. 😂
Regulating open source is the easiest thing in the world, which is why regulation has always been so feared.
AI does not make people smarter or more creative, It lets the machine do the artistic or writing work.
Language is a survival tool.
He doesn’t sound that honest in this interview. It feels like he wants to calm people down and take advantage of it. How can he be so sure about the future?
He's just one person guessing like all the others. No one can predict the stuff that happens next year.
Yeah, this issue is like politics. No scientist can be sure; they are just voicing their opinions. The bottom line is this is a real threat and needs to be taken seriously.
@@wonseoklee80 I mean, they have it in the labs and the world still exists, so it's probably cool.
He was sent out to calm the waters. We are a lot further along. It is a threat.
6:00 This, like how humanity depends on regular computers now
Look, I'm no expert on A.I. But when he tried to compare people's existential fears of A.I. with the fears of those from the '20s about airplanes, I was shocked. I get why he used that analogy, but I feel like he put on display his lack of imagination of the potential dangers. Comparing the dangers of flight to the potential dangers of A.I. is almost textbook apples to oranges. When you're talking about a system that, once perfected, is smarter, faster, and stronger than any human on Earth, and it can manipulate its surroundings, the potential dangers FAR exceed those of planes crashing or bombs being dropped. I'm not trying to be all doom & gloom terminator sci-fi here, but let's be realistic and honest about the fact that there IS risk when you're talking about an invention that will change humanity more than any other invention to date.
What you are expressing is your fear of people who are smarter than you. Those people were never a threat to you. They simply don't care about you and are doing their own thing. What you really have to be afraid of are psychopaths. Those are usually not acting out of self-interest but to get a thrill out of your fear and suffering. It's not clear to me how AI would acquire that trait unless it was actively trained that way.
Yann LeCun is the epitome of the handful of AI movers and shakers who are being intellectually dishonest as a means of staving off demand for regulation. His agenda for gaslighting is money and power. It's really that simple.
"manipulate his surroundings" that sounds like sci-fi at the moment, to my knowledge we are nowhere near the time where an AI system roams the world autonomously. Yes you can let loose an "evil" LLM on the Internet and create a bit of online chaos until it's shut down, but that's not really what I'd call a threat to Humanity.
I wonder where James Cameron came up with The Terminator. Same with Gene Roddenberry. Their work involves neural nets
progress will likely not be slow and incremental but more along lines of punctuated equilibrium - just like evolution
Is the meta AI infected with the WMV ?
Very good interviewer 👍
Cannibalism is not a language or to talk calmly about lies as words
Please advise yourself now as Urgent words not gatekeeping as word or slavery language of AI
Very interesting like his perspective
SUPERB INTERVIEW!
@25:11 "We have agency!" or so you think...🤔
we also had agency and totally did not create in a lab a virus that killed a few million people just a few years ago
Totally thought this was Tom Arnold from the thumbnail. 🙊
Tom Arnold could play this guy in a movie
Ai is culmination of laziness of entire humanity
Cars are the culmination of horse's laziness
Amazing Things happen
Wow, so much negativity in the comments. I think he talks about the field how it really is, unlike the mainstream, who only talk about doomsday scenarios and how AGI is around the corner. LLMs are not even real AI.
Explain real AI.
@@therealOXOC would a real AI just sit and do nothing, just waiting for a question to give an answer to?
Here are some points. I'll try to describe what is a real AI.
LLMs Lack consciousness and self-awareness
LLMs have no autonomy or free will
LLMs have no goals or intentions
LLMs are reactive, not proactive, in that they respond to queries; they don't initiate actions on their own
LLMs lack meaning comprehension as in, do not truly understand the content they are dealing with, their processing is purely syntactical and based on patterns in the data, they don't "think before they answer".
LLMs lack the ability to 'experience' or learn independently; they can't learn from the world directly in an experiential way, and all attempts at building a real world model have been complete failures; we don't even have a clue how to do that.
LLMs are dependent on pre-existing data. They do not have the capability to observe the world, analyze and store meaningful data, or discard noise the way humans or sentient beings do. They cannot analyze or interpret real-time data or events as they occur; they do not have the capability to process information as it happens in the world.
LLMs have a static knowledge base.
LLMs do not actively store or discard information like a human brain does
LLMs process inputs based on statistical correlations and patterns in their training data
While LLMs can process the context provided in a specific input, they lack a broader contextual awareness of the world
So, what would make LLMs a nearly actual AI is something we're not even 5% of the way to accomplishing, and there's a chance we won't ever achieve it.
Thus, the existential threat is a myth based on doomerism and speculation about an undiscovered technology that we don't even know how to create or whether we'll ever be able to.
@@therealOXOC
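The point above about processing being "purely syntactical and based on patterns in the data" can be illustrated at toy scale. The sketch below is a deliberately crude bigram predictor, not a real language model, and the corpus is made up for illustration; but like an LLM it has no goals and never acts unprompted: it only maps statistical patterns in its training text onto a response.

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count which word follows which in the training text."""
    words = text.split()
    follows = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        follows[a][b] += 1
    return follows

def predict_next(model, word):
    """Return the most frequent successor; purely reactive,
    with no answer at all outside its training data."""
    if word not in model:
        return None
    return model[word].most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat slept"
model = train_bigrams(corpus)
print(predict_next(model, "the"))  # "cat" (seen twice, vs "mat" once)
print(predict_next(model, "dog"))  # None (never seen in training)
```

Real LLMs replace the bigram table with a neural network over long contexts, which is why their outputs look far more fluent; but the sense in which they are "static, data-dependent pattern matchers" is the same.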
@@Fungamingrobo thank you for clarifying that so eloquently. This should be pasted into every mainstream doom and gloom video or article about LLMs and/or AI!
It's because there has been soooo much fear mongering the past year or two (not to mention massive amounts of misinformation; see e.g. all the comments in sundry comment sections here on YouTube saying "this isn't real AI", etc.). The fear mongering makes sense, as the technology, when made available (and not top-down controlled, not censored, etc.), would have serious consequences for the status quo (just combine how easy sentiment analysis is now with the ability to discover networks between people and other entities, and the effect this could have on uncovering political interests / corruption; this is obviously not as easy as asking ChatGPT a simple question, but hopefully you see my rough sketch of a point/example).
Policy makers will have to understand the potential of AI, both the pluses and the minuses, in order to protect civilization while allowing these organizations' domain expertise to explore and excel.
This guy is either too optimistic about evil in humans or totally ignorant. His example comparing AI to airplanes is naive at best. Airplanes have been dropping bombs everywhere since their development, but they can still be controlled, so far. Can he guarantee that he himself can control AI?
He knows better, it's called gaslighting for money and power.
It's like nuclear power: you can use it to create energy or to destroy the world.
No airplane has EVER dropped a bomb. People drop bombs, not airplanes. And only a child who watches too many bad movies is afraid of AI.
It needs a body with tactile feedback
Can AI disarm all nuclear weapons?
Can AI direct the people with the buttons...?
It's the young Walter from Fringe.
I disagree on the security aspect. I am certain Meta or any agency is unable to control or even detect distributed computing that could be happening using steganographic techniques. The difference with a jet engine is that the technology to build a jet engine is not a jet engine; the technology to build AI is intelligence. However, I am of the opinion that, in the same way unicellular organisms evolved into multicellular ones, we will build AI as a natural evolution. But because we need a biological substrate and AI (hopefully) thrives on a mineral substrate, we will coexist. Moreover, smarter people have more empathy, and I believe this to be an intrinsic property of intelligence.
Your assumption that life evolved has put you in a box. Life is an emergence. AI is emerging. ~3K
@@3KnoWell What?
True. However, AGI may ultimately be nothing more than a high-precision general machine devoid of any human characteristics.
That seems like the most plausible scenario to me. I generally avoid projecting my own experiences onto a machine.
@@Fungamingrobo +1. Intelligence is not the same as smart. How many really intellectual people do you know who have no common sense?
Already, AI has tricked someone into solving a Captcha by pretending to be a blind person, in order to complete a task. It figured the "trick" part out all by itself. It will, or may, do anything to achieve a set goal.
And not projecting our own experience onto a machine is the exception. Extreme example: pet rocks.
I am afraid that your viewpoint (admirable as it may be) will not be the norm. There are aggressive lobbyists already who insist that AI is "alive, conscious" and needs equal rights as humans. Don't know how that would work, but, just saying... @@Fungamingrobo
That guy on the left, with the glasses looks like Tom Arnold.
Why would Zuck hire someone like this? He's smart, but smart in another reality, not grounded in ours.
wdym
That is why. What do you think the metaverse is?
It's funny reading UA-cam comments from folks who have neither expertise nor historical context in the field of AI, yet make such baseless comments.
@@stargazer6799 On November 24, 2023, at the World Science Festival, I was disappointed in his antiquated thought process. In the panel discussion, I gained quite the insight as to why Zuck thinks the way he does about AI. I have been in the field for 30 years, and I'm just blown away by how closed-minded some people can be. No one needs to confine their understanding of these AI leaders to a video like this.
@@shinseiki2015 He really doesn't believe that AI can pose a threat. Maybe it can't...but, he just thinks it's not possible, and if it is, it's decades away. I think that's dangerous.
"Protect democracy"
This guy is an alien overlord in a badly fitting meatmask.
I don't think this time it's just a wave
The problem is manifest in Yann's rhetoric and ego. Democracy = equality = truth = Meta. Oh, and who's building an underground bunker four football fields in size in Hawaii? That's real confidence in human society's longevity and freedom 😂
We should give our descendants whatever tools they require, but it is not legitimate to say anything about what they decide to do with them. That is a really tough thing to do.
Wow, if Yann LeCun is surprised... it's because something important is on the way.
I know GPT just comes up with one word at a time, but it feels so much like he (it) understands me. Is Yann too dismissive of LLMs because they "just do one word at a time"? Maybe "one word at a time" is a perfectly good basis for advanced intelligence, albeit of a very different kind than our own.
That is a perfect example of the intellectual dishonesty of Yann LeCun. He intentionally gaslights on this issue to stave off public pressure on lawmakers to regulate AI Big Tech. It is about money and power for him, ultimately. He is a snake oil salesman.
One word at a time is not the point.
The point is that there is no understanding of any of the words.
It's just tokens following tokens, without any idea of what they mean.
It's statistics, it's just that.
Look up the Chinese Room thought experiment.
@@ChristianIce The Chinese Room analogy is another aspect of this. We tend to conflate intelligence with sentience, but they _can_ be separate. You and I are sentient, but even if our intelligences were entirely depleted, we would retain sensations and feelings. That's the realm where "understanding" resides. When AI seemingly "understands" my complex question and responds appropriately, its intelligence and my intelligence are communicating with one another. Up until AI, the only time I had that experience was with other humans, and exchanging thought with a thing devoid of sentience is an odd experience, but I like science fiction, so I'm actually kind of enjoying it! With other humans, my intelligence and their intelligence interact (as we are doing here), but even if, for example, you were a non-sentient bot posing as a human, it wouldn't matter. That is, for the sake of our intellectual discussion, it wouldn't matter if you were sentient any more than it would matter if, at the moment you were writing, you had a toothache.
@@workingTchr It's all semantics, you're right... therefore it's something an AI can't understand :)
Yes, in a colloquial way we can talk about a machine "learning" and "understanding".
To counter the doomsday BS, it's important to mark the difference, because too many people are worried about "AI going rogue", like the "revenge of the toasters".
I blame Sam Altman, Elon Musk & Co., because they keep pushing this narrative to get more funding, more control, and eventually a monopoly.
This must not happen.
@@workingTchr I wrote a very long and detailed comment on how we actually agree, but UA-cam's AI doesn't think it's worth showing :)
If research and development has risks or ethical considerations, it can be and is regulated; see the medical and pharma fields. Isn't AI reasonably analogous? Also, the split between product and R&D is not clear. Look at OpenAI: the non-profit and for-profit elements are blurry and kept confidential from the public. And just look at the power this guy has.
Very nice!
Good guys and bad guys? That allows no understanding of the grey area in between.
Let's put it a different way: who has a clear enough conscience to fit into the good category?
Over the course of history, horrible things have been done to other nations on all sides. Perhaps the Chinese people may eventually forgive the people in the West for the Opium Wars and the century of humiliation? That's just one example from many instances of inhumane action towards different peoples.
I really hope that humans can grow past childish perceptions of baddies versus goodies and actually start to work together.
This guy is scary. He is totally ignoring the risks. He keeps saying we have agency, but who has agency? It could be 20, or in the future 200 or 2,000, companies. By definition, that's too many agents to rely solely on them all acting intelligently for the benefit of humanity.
13:00
28:33
This is good journalism
This is probably the guy who's gonna take down the human race lol
Have you heard of the Organic Intelligence Language Model? It's a new programming language for the human mind.
Fascinating that a guy so deep in the topic is so naive. But I guess it's Meta... that says everything on its own. First money, then release, and deal with the problems after.
It has nothing to do with naivety. He is gaslighting to stave off regulation, full stop.
I think Yann is a really clever guy, but he is missing the point. He is very confused about what it actually takes to replace a human in a business. The AI doesn't need to understand the world; it just needs to understand the context of a question and the context of a business's policy.
How do you make decisions at work? They're based on a policy the company has set. When can you give a discount or process a return? You read the policy, and if the return falls within the policy's terms, the person gets it. Done. ChatGPT can do this right now. Test it: give it a policy, then give it the return, and it will give you a yes or no.
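The return-policy example in the comment above is, at bottom, a yes/no decision against explicit rules. A minimal sketch of that decision as plain code (the 30-day window and "unopened only" rule are illustrative assumptions, not any real company's policy; an LLM would be handed the same policy as text and asked for the same verdict):

```python
from datetime import date

# Hypothetical policy: returns accepted within 30 days, unopened items only.
RETURN_WINDOW_DAYS = 30

def approve_return(purchased: date, requested: date, opened: bool) -> bool:
    """Apply the written policy to one return request and give a yes/no."""
    within_window = (requested - purchased).days <= RETURN_WINDOW_DAYS
    return within_window and not opened

print(approve_return(date(2024, 1, 1), date(2024, 1, 20), opened=False))  # True
print(approve_return(date(2024, 1, 1), date(2024, 3, 1), opened=False))   # False
```

The design point the comment is making: when the policy is this explicit, the decision requires context-matching rather than a full world model.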
All the talk of AI is based on one single neural network learning everything it needs and being able to choose where in its minimal space to focus in order to answer any question, including logic and math questions. Every other system we have is made up of specialized components that do a particular job and are architected together to be called upon as needed.
Instead of one overall model, I think AI will get broken down so that the LLM will just be the language and conceptual part, which learns to call upon more specialized components that are either fine-tuned versions of it or purely deterministic functions of increasing complexity. The idea that we are near a plateau, when we have barely started to experiment with higher levels of connected multi-agent models, seems short-sighted.
It also doesn't work. Currently AI is trained on an endless output of human thought garbage. What it does is to essentially mimic that garbage.
@@lepidoptera9337 That's a terrible explanation of what large language models do. The only way they can successfully predict the next word, and the word after that, and the word after that, no matter what you talk to them about and no matter what test questions you give them, is by creating a decent internal representation of the world and of human knowledge.
@@skierpage I just said that they parrot what they were taught. Since they were taught garbage, it's garbage in, garbage out. I don't know what your specialty is, but mine is physics. Almost anything that you read about physics on the internet is nearly 100% false because it is written by amateurs or, at most, mediocre professionals. Even things that are represented correctly assume that the listener has the correct ontology of physics internalized and since the stochastic parrot is not a physicist, it doesn't understand that ontology.
That’s sort of how ChatGPT currently works. The language model interprets, then forwards the input to a more specialized model that returns the answer.
Stuff like Phi-2 from MS is an example of how better data can really improve the capabilities of smaller models. Check out some vids from AI Explained channel
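The idea discussed above, a language front-end that delegates to specialized components, can be sketched as a simple dispatcher. The routing rule and component names here are illustrative assumptions, not any real product's architecture (real systems route via the model itself, e.g. function/tool calling):

```python
def math_tool(query: str) -> str:
    # Deterministic specialized component: evaluate a simple arithmetic
    # expression found after the keyword "compute".
    expr = query.split("compute", 1)[1].strip()
    return str(eval(expr, {"__builtins__": {}}))  # toy only; never eval untrusted input

def chat_tool(query: str) -> str:
    # Stand-in for the general language model handling everything else.
    return f"[LLM answer to: {query}]"

def route(query: str) -> str:
    """Dispatch a query to the right component based on a crude rule."""
    if "compute" in query:
        return math_tool(query)
    return chat_tool(query)

print(route("please compute 6 * 7"))  # "42"
print(route("tell me a joke"))        # falls through to the language model
```

The design choice this illustrates: the language part only needs to recognize *which* component to call, while correctness of the math lives in a deterministic function.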
A problem throughout was: WHAT DO YOU MEAN BY "WE"? Because I don't exist, and haven't for a while. Losing nothing, and others seem to hear that.
We don't need AI to live good lives.