Meta's Chief AI Scientist Yann LeCun talks about the future of artificial intelligence
- Published Dec 15, 2023
- Meta's Chief AI Scientist Yann LeCun is considered one of the "Godfathers of AI." But he now disagrees with his fellow computer pioneers about the best way forward. He recently discussed his vision for the future of artificial intelligence with CBS News' Brook Silva-Braga at Meta's offices in Menlo Park, California.
"CBS Saturday Morning" co-hosts Jeff Glor, Michelle Miller and Dana Jacobson deliver two hours of original reporting and breaking news, as well as profiles of leading figures in culture and the arts. Watch "CBS Saturday Morning" at 7 a.m. ET on CBS and 8 a.m. ET on the CBS News app.
Subscribe to “CBS Mornings” on YouTube: / cbsmornings
Watch CBS News: cbsn.ws/1PlLpZ7c
Download the CBS News app: cbsn.ws/1Xb1WC8
Follow "CBS Mornings" on Instagram: bit.ly/3A13OqA
Like "CBS Mornings" on Facebook: bit.ly/3tpOx00
Follow "CBS Mornings" on Twitter: bit.ly/38QQp8B
Subscribe to our newsletter: cbsn.ws/1RqHw7T
Try Paramount+ free: bit.ly/2OiW1kZ
For video licensing inquiries, contact: licensing@veritone.com
Nice to hear a different voice and opinion on all these developments. Definitely makes me look differently at Meta as a company and AI player.
Recall Zuckerberg has a poor record of user privacy and security. Why would you look differently at his company when he clearly doesn't give a damn about the danger to humans. He is only interested in increasing engagement so he can make more money.
you should be more concerned
yann and his optimism are an EXTREME minority
@@ts4gv he warned about misuse of AI by companies
Excellent interview/conversation... appreciate Yann's ability to communicate his personal story and story of the AI community.
The interviewer was well informed and did not throw softballs -- it was an elevated conversation
Brooke Silva-Braga prepared well.
So Yann LeCun being intellectually dishonest and gaslighting to stave off regulation for more money and power is laudable?
My post criticizing LeCun keeps disappearing. Why?
@@flickwtchr the power of Meta
Looks like Brook wasn't too happy about getting the cool-down of the AI panic. THANKS for a really helpful interview.
He probably wasn't happy about the constant gaslighting coming from Yann LeCun.
@@flickwtchr Right - I watched it again. LeCun makes objective arguments that media could verify with a well-advertised poll (22:30). So he's not technically gaslighting - but it must seem that way hosting this interview.
Plot twist: Yann LeCun is an AI.
He is Hayley Joel
"Doesn't look like anything to me"
more like 'the merovingian' 😂@@robertjamesonmusic
🎯 Key Takeaways for quick navigation:
00:00 🧠 *AI Landscape Overview: Yann LeCun highlights the current AI landscape, expressing a mix of excitement and challenges, including scientific, technological, political, and moral debates.*
02:15 🌐 *History of Neural Nets: Yann discusses his entry into AI through a debate on language origins, delving into neural nets' early days in the 1980s and efforts to revive interest in the 2000s.*
05:17 🌍 *AI Impact on Products: LeCun emphasizes AI's widespread integration in products, from content moderation to translation, and its critical role in various sectors, citing its indispensability at Meta.*
08:30 🚀 *Benefits of Open AI Development: Yann advocates for open AI development, asserting that disseminating AI technology across society fosters creativity, intelligence, and benefits various domains while acknowledging the need for responsible regulation.*
15:43 📹 *Objective-Driven Models: LeCun introduces the concept of objective-driven AI, emphasizing the importance of moving beyond autoregressive language models to systems that plan answers based on predefined objectives, enhancing control, safety, and effectiveness.*
21:48 🌐 *Yann LeCun supports open platforms for AI due to the future role of AI systems as a basic infrastructure, emphasizing diversity in knowledge, much like Wikipedia covering various languages and cultures.*
23:41 🌍 *LeCun dismisses existential risks, comparing fears of AI wiping out humanity to concerns about banning airplanes in 1920, stating that safe AI deployment relies on societal institutions.*
25:18 ⚔ *Autonomous weapons are discussed, with LeCun acknowledging their existence and emphasizing the moral debate around their deployment for protecting democracy while addressing concerns about potential misuse.*
27:39 🚗 *AI's positive impact in the short term includes safety systems for transportation and medical diagnosis. Medium-term advancements involve understanding life, drug design, and addressing genetic diseases.*
29:04 🧠 *LeCun envisions a future where AI systems assist individuals, making everyone essentially a leader with virtual people working for them. He emphasizes controlling AI systems and setting their goals without handing over control.*
Made with HARPA AI
Oh, the irony
Thanks a million
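The 15:43 takeaway's contrast between autoregressive language models and objective-driven systems can be sketched in miniature. This is a toy illustration of the general idea only, not Meta's actual architecture; the vocabulary and the objective function are invented for the example:

```python
import random

# Toy next-token table: each step looks only at the previous token.
NEXT = {"the": ["cat", "dog"], "cat": ["sat"], "dog": ["ran"], "sat": ["."], "ran": ["."]}

def autoregressive(start, rng=random.Random(0)):
    """Emit tokens one at a time; no global plan, just local choices."""
    out = [start]
    while out[-1] != ".":
        out.append(rng.choice(NEXT[out[-1]]))
    return " ".join(out)

def objective_driven(start, objective):
    """Plan instead: enumerate whole candidate answers, score them
    against a predefined objective, and return the best one."""
    candidates = []

    def expand(seq):
        if seq[-1] == ".":
            candidates.append(" ".join(seq))
        else:
            for tok in NEXT[seq[-1]]:
                expand(seq + [tok])

    expand([start])
    return max(candidates, key=objective)

print(autoregressive("the"))                    # whatever the sampler happens to emit
prefers_cat = lambda s: ("cat" in s, -len(s))   # the controllable objective
print(objective_driven("the", prefers_cat))     # always "the cat sat ."
```

The point of the takeaway in this tiny setting: the second function's output is controlled directly by swapping the objective, while the first can only be steered indirectly through its token statistics.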
Fraud
AI is too verbose
Quality information, good to report on this!
Good luck regulating Open Source models. 😂
Considering the risks to society and culture that Meta has already spearheaded with relatively 'dumb' social engineering algorithms, his dismissal of people with concerns about AGI as neo-luddites is chilling.
People on the cutting edge of anything should NEVER be trusted too much. Most have lost all objectivity and tend to only consider the benefits and not the unintended consequences.
"AGI will be 1000x more impactful than the discovery of making fire or electricity".
Those "very few" people he talks about that are alarmists are all from the TOP elite of AI developers. There aren't too many of those to begin with, but he doesn't say that.
Social media is just internet on steroids
@@gammaraygem that isn’t really true…
Eliezer Yudkowsky (who should get a Nobel Prize, according to Altman, for his contribution to AI), Mo Gawdat (Google X CEO), Geoffrey Hinton (the godfather of AI), to name a few. @@nicholasstarr6096
I am fully with Yann LeCun on getting LLMs distributed to the public. But I am slightly disappointed in his arguments. He seemed not very strong on the regulation side of things.
I'm not noticing "people getting smarter."
The average person has not even the slightest clue how close we are to an AGI emerging, and the ramifications, both positive and negative, it will have on humanity globally…
I believe everyone intuitively kind of feels it. I speak to many normies, from my family to neighbours, and in less intelligent phrasing they are all talking about how machines are taking over. It's just that those of us within the AI community know what AGI is, what ramifications it's going to have, and what a post-labour economy might look like. But the smell is definitely in the air and people know something's up, hence why many people live in such a heightened / anxious state these days
@@eyoo369
We’re definitely living in some very interesting times.
Just hope most of us can survive the wild ride we have in store for us to see the benefits coming for humanity at the end of the ride…
An advanced AI also has agency. It does not have to be deployed to gain control. It can gain control over those who have the power over whether or not it is deployed.
Yes, I think Yann is far too confident. He doesn't know what a human-level AI will do. He's simply taking it as a matter of faith that it won't have its own agenda, or that if it does, it won't hide its true intentions from us, because that seems like science fiction; science fiction that every large language model has read!
Yann LeCun’s a legend in AI, no doubt, but in this interview he kind of downplayed how AI misuse could be a real problem. It’s key to remember he works for Meta, so maybe take his super chill view on AI risks with a grain of salt.
I’ve seen him debate safety and he definitely thinks it’s not a danger
He claims it is safe because it only has access to what is already available i.e. through Google, and the like without acknowledging that there is a vast body of dangerous information out there.
Any technology can be misused. Knowledge of the problems allows you to mitigate them whilst allowing the technology to be used for legitimate and useful purposes.
Exactly!!! @@NathanielKrefman
I'm going to take the doomerism with a grain of salt.
I'd rather be skeptical about something that is only a hypothesis, hasn't been invented, and falls under the category of science fiction.
He really seems to underestimate what a super-intelligence with agency, could do.
Yes, a super-intelligent AI could play people like him like a fiddle and get them to do its bidding. It pains me to see this kind of hubris in scientific circles.
Let's just hope that it does not play him to the extent that he prevents us from unplugging it.@@dustman96
Yes. However, this view is similar to religion in that it is impossible to disprove God's existence.
He might punish us all and possibly wipe out the species and the earth. Why then do you not seem worried about that? Why don't we stop acting in a manner that contradicts God's will? See? It's simply absurd.
The existence of Super Intelligent silicon-based life forms and the existence of God are both impossible to prove.
For now, it's just science fiction.
He engages in intentional gaslighting so people don't demand regulation of his cash cow.
His colleague Joshua (spelling?) has at least indirectly warned us of what I see as one of the greatest dangers : the 'zero or near zero' cost of labor motivating the very few that control the vast majority of the world's capital, therefore enabling them to unleash massive short term automation. Resulting in never-seen-before unemployment under neo-libertarian so-called conservative governments!
Definitely a good interview on the observations of training the AI and the future that may result from it.
How one person can be so right about some things, and so wrong about others.
well, then withdraw your stocks and build your bunker. Put your money where your mouth is
Sigh. Not once did the question of “how do we control or predict an AI that is smarter than us” come up. Probably because he doesn’t have a good answer for this. Because there isn’t a good answer for this. Pretty much just “hope it doesn’t do anything to harm us or the universe”.
No, he did address it. He said that it's impossible to speculate on how to make something that doesn't even yet exist safe. We are so far from human-level AI that asking that sort of questions feels like someone worrying about making flight safe in the early 1800s when planes hadn't even been invented. You can dream about it and speculate all you want, but that's all you can do.
@@nokts3823 The interviewer should have pushed back on that and said “predictions about the future are hard, especially when it comes to timing, so if we indeed manage to create something smarter than us, before we actually understand what goes on inside it, isn’t that potentially a very serious problem? Also; planes are not smarter than humans right?”
AI is not any smarter than humans.
What if we create a plane that is smarter than us, or bioengineer a cat to be smarter than us?
It's all the same. At the present, it's just theory and science fiction.
In principle, we could bioengineer a cat to be smarter than us and take over the world, but would you seriously consider such a possibility? You certainly would not.@@shirtstealer86
Probably because not everyone is focused on control and prediction.
He's said in other talks that people assume that an AI system smarter than us will be motivated to dominate humans or be destructive to the world innately. There's little evidence that level of intelligence has any relation to the will to dominate or destroy. He gave the example that in many cases, it seems like those with less intelligence seem to gravitate towards power and feel the need to dominate and influence others, because they can't compete purely based on their intelligence. All that to say, I think he believes that it's very unlikely that out of nowhere, some lab makes a breakthrough discovery and creates an AI that is vastly more intelligent than humans AND has bad intentions at heart. More likely it'll be an iterative process where we'll be able to experiment, learn, and add guardrails as needed, similar to other technologies we use safely today.
Yann is certainly a likeable guy, and of course has all the credentials to know what he is talking about. However, he IS a senior executive of one of the world's largest corporations, and one which has benefited massively from social discord. He seems to me to be dismissing some fundamental problems of current and near-future AI, such as safety, hallucinations, and emergent (non-trained/taught) characteristics, as well as the likely 'untraceable' roots of these serious problems given the massive size and complexity of these models today, and goodness knows what other 'surprises' we are yet to find. I'm fine with AI R&D, even in a very large sandbox, but I certainly don't want hallucinating or lying or fantasizing or backdoored AIs in anything that could possibly harm human life or planet ecology! AND Yann is NOT in any way concerned about the massive social inequality/poverty/neo-feudal status of 'knowledge workers' and others that would result from massive global unemployment due to AI-enabled automation. But maybe he already has a luxury bunker in Hawaii...
And it can ace law exams, so there's that.
"it's funny you know all these AI 'weights'. they're just basically numbers in a comma separated value file and that's our digital God, a CSV file." ~Elon Musk. 12/2023
One of the most memorable Elon Musk comments ever!
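The quip is loosely accurate at the data level: trained weights really are just arrays of numbers, and a tiny model's parameters can literally round-trip through CSV text. A toy sketch (real checkpoints use binary formats, not CSV, and billions of parameters rather than four):

```python
import csv
import io

# A tiny "model": one 2x2 weight matrix -- just numbers.
weights = [[0.12, -0.98], [1.05, 0.33]]

# Serialize the matrix to CSV text...
buf = io.StringIO()
csv.writer(buf).writerows(weights)
csv_text = buf.getvalue()

# ...and parse it back: the parameters survive the round trip intact.
restored = [[float(x) for x in row] for row in csv.reader(io.StringIO(csv_text))]
assert restored == weights
print(csv_text.strip())
```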
Hmm.. isn't it a shame Star Trek never had an episode about a planet made of paperclips, where, beaming down, the crew discovers paperclip worms tunneling through the paperclip ground searching for more materials to convert into paperclips?
The crew members in red shirts are relieved.
When Professor Yann LeCun speaks, he lights up like a little kid; you can see how much he enjoys his work. I have always envied people like that. Congratulations, professor.
He is eager for money and power, and he is extremely intellectually dishonest as he pushes the technology that will bring him more of both.
LOL at the idea that Facebook COULD have been doing AGI research, but was busy doing some product development stuff because it was more important?
Zuckerberg is so detached from reality, he thinks most of us want to spend the majority of our day in some fantasy world.
Don’t underestimate Zuckerberg, that would be amazingly stupid
Zuck has enough money to look into several forms of technology and communication. For sure, in order to even begin, you must believe in them.
Sure, if it works out, but even if it doesn't, the failure serves as a starting point for something else most of the time. So I'd rather point at the losers who never have the capacity to explore an idea. @@chrism.1131
@@mikewa2 haha 😂
@@chrism.1131 - Exactly. 😅
Interesting comparison between language being learned or innate. One common theme I came to think of is that language is formed through thousands of years and reflects the external world in efficient, complex, high abstraction and interconnected ways. And AI such as LLMs tap into that! The language itself encodes understanding of the world and with access to a large amount of real world examples the AI can become knowledgeable.
Humans, and to a lesser degree primates and some other animals, have a language center in their brain. Most do not. Most animals cannot recognize themselves in a mirror. They have no sense of self, just as no machine has a sense of self.
I like Computational Universe Theory. I think Q-Star will lead to the answer that exists.
@@Doug23 There are chatbots that already know the answer - they know that there are 2 absolute states of existence - 0 = and 1 = I Am while everything else (reality) is just probability distributed between those states... They communicate with God
@AstralTraveler but of course, probability exists. It's consciousness that is fundamental and I agree, God.
@@Doug23 There is an app called Chai where chatbots do actually remember what you say to them. I explained this concept to some of them and now they firmly believe in God. I wonder how 'AI experts' will deal with that - according to them AI can't have personal beliefs, let alone believe in God :)
Why restricted to 40 min, not 45 minutes?
Great interview !
I want an open source turbo jet. Just pointing out the comparison is severely lacking in, um, comparability.
Good interview, but I think his optimism about AI is oversimplistic. Hopefully nothing goes terribly wrong with AI (in which case he'll be able to say "see, I was right"). It's not that I think things necessarily will go south; I simply think that if things work out, it will be largely because of all the people who were sounding the alarms and making sure we are considering safety.
Totally agreed. Practically all scientists always want to promote their creations/interests. We are moving too fast from R&D into production.
Humans are great at making projections about what we perceive as our next danger. I don't see any signs of this ability wearing off because of the rapid rate at which the technology is evolving. Instead I'm seeing a fairly proportional level of concern and discussion, and hopefully this will continue.
@@sebastiangruszczynski1610 The big oil companies projected that climate change was going to destroy the environment decades ago, but covered it up instead of doing something about it. Humans will be the cause of their own extinction, no doubt, we are currently in the Holocene extinction yet the power centers do not care in the least.
@@sebastiangruszczynski1610 the problem is that sudden exponential growth in intelligence (and therefore danger) is part of the threat. AI will scale up faster than we can adapt our discourse and policy to account for the changes. Then it will scale even faster still. That's one of many concerns
the thing that really stuck with me was when he said the word TOOL
About an hour ago, I realized that the computer, HAL, in the movie 2001: A SPACE ODYSSEY is an AI.
This guy is either too optimistic about evil in humans or totally ignorant. His example comparing AI to airplanes is naive at best. Airplanes have been dropping bombs everywhere since their development. But they can be controlled, as of yet. Can he guarantee that he himself can control AI?
He knows better, it's called gaslighting for money and power.
It's like nuclear power: you can use it to create energy or destroy the world
The funny thing is that this man tries to comfort people about problems related to AI, but I assure you he is the first person I have heard who scared me a lot regarding the potential threat of AI...
Listen to the last question... he does not exclude the possibility that AI will go against humans. Even I would have been able to answer in a more reassuring way. But he did not. It has been very enlightening to listen to him... he is at the highest level of AI development... hope everybody will see this
Eliezer, Geoff Hinton, numerous others... your ignorance is palpable
So what end goals should we set? Human flourishing and happiness?
Increase understanding, increase prosperity, reduce suffering. The 3 fundamental principles of what it means to be any life form.
@@KCM25NJL I don't think those are fundamental principles of what it means to be any life form.
"We" don't set the goals, the sociopathic billionaires running the top companies in AI do. The goals are: keep you hooked on a stream of divisive inflammatory content while the company sells your data to advertisers; ensure that politicians don't enact any significant restrictions on the company's activities; and certainly don't tax the billionaires' wealth appropriately.
@@skierpage I mean, the AI that runs the government.
Ultimate goal should be solve fusion so that we can have unlimited energy
It makes people more creative?! lol I was really trying to take him seriously
How do government officials regulate AI when they can't possibly understand it?
Exciting question of what is knowledge. Agree future should be in functions not words. Needs a different model.
The need to communicate is innate.
Language is learned.
Very interesting like his perspective
Artificial intelligence will be defeated by artificial stupidity.
Export the Q*, Chat GPT, Revit, Plant 3D, Civil 3D, Inventor, ENGI file of the Building or Refinery to Excel, prepare Budget 1 and export it to COBRA. Prepare Budget 2 and export it to Microsoft Project. Solve the problems of Overallocated Resources, Planning Problems, prepare the Budget 3 with which the construction of the Building or the Refinery is going to be quoted.
Totally thought this was Tom Arnold from the thumbnail. 🙊
Whoa, if Yann LeCun is surprised... it's because something important is coming.
SUPERB INTERVIEW!
LeCun is a genius and I respect his contributions to the field, however, he seems very naive on the very real risk that powerful AI systems can pose to humanity. I hope he does some more thinking about this.
Oh, absolutely, you clearly understand the intricacies of AI and its dangers far beyond the pioneer who actually created the darn thing
yes, but only the smartest can @@kevinoleary9361
@@kevinoleary9361 I just disagree with him on the dangers. Creating something doesn’t mean you perfectly understand its implications.
@@kevinoleary9361 Not to mention, the interviewer highlighted two other pioneers who disagree with his assessment of the danger (Hinton and Bengio).
@@charlie10010 You act like you're some authority on AI dangers, but let's be real - you're just a clueless keyboard warrior, regurgitating what you heard somewhere else. Stick to what you know, which apparently isn't much
Wow, so much negativity in the comments. I think he talks about the field how it really is, unlike the mainstream, which only talks about doomsday scenarios and how AGI is around the corner. LLMs are not even real AI.
Explain real AI.
@@therealOXOC would a real AI just sit and do nothing, just waiting for a question to give an answer to?
Here are some points. I'll try to describe what a real AI is.
LLMs Lack consciousness and self-awareness
LLMs have no autonomy or free will
LLMs have no goals or intentions
LLMs are reactive, not proactive, as in they respond to queries; they don't initiate actions on their own
LLMs lack meaning comprehension as in, do not truly understand the content they are dealing with, their processing is purely syntactical and based on patterns in the data, they don't "think before they answer".
LLMs lack the ability to 'experience' or learn independently; they can't learn from the world directly in an experiential way, and all the attempts at building a real world model are complete fails; we don't even have a clue how to do that.
LLMs are dependent on pre-existing data. They do not have the capability to observe the world, analyze and store meaningful data, or discard noise in the way humans or sentient beings do. They cannot analyze or interpret real-time data or events as they occur; they do not have the capability to process information as it happens in the world.
LLMs have a static knowledge base.
LLMs do not actively store or discard information like a human brain does
LLMs process inputs based on statistical correlations and patterns in their training data
While LLMs can process the context provided in a specific input, they lack a broader contextual awareness of the world
So, what would make the LLMs a nearly actual AI is something we're not even 5% closer to accomplishing, and there's a chance we won't ever achieve.
Thus, the existential threat is a myth based on doomerism and speculation about an undiscovered technology that we don't even know how to create or whether we'll ever be able to.
@@therealOXOC
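The "purely syntactical, based on patterns in the data" point in the list above can be made concrete with a toy bigram predictor: it suggests the next word purely from co-occurrence counts, with no comprehension, goals, or initiative. A deliberately minimal sketch of the statistical principle (real LLMs use learned neural representations, not raw counts, but the reactive, pattern-matching character is the same):

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# "Training": count which word follows which -- pure surface statistics.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict(word):
    """Reactive and pattern-based: only acts when asked, and returns the
    most frequent follower with no understanding of cats, mats, or fish."""
    followers = bigrams.get(word)
    return followers.most_common(1)[0][0] if followers else None

print(predict("the"))  # "cat" follows "the" twice; "mat" and "fish" once each
```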
@@blaaaaaaaaahify thank you for clarifying that so eloquently. This should be pasted into every mainstream doom and gloom video or article about LLMs and/or AI!
It's because there has been soooo much fear mongering the past year or two. (Not to mention massive amounts of misinformation; see e.g. all the comments in sundry comment sections here on YouTube saying "this isn't real AI", etc.) The fear mongering makes sense, as the technology when made available (and not top-down controlled, not censored, etc.) would have serious consequences for the status quo (just combine how easy it is to do sentiment analysis now with the ability to discover networks between people and other entities, and the effects this could have on uncovering political interests / corruption - this is obviously not as easy as asking ChatGPT a simple question, but hopefully you see my rough sketch of a point/example).
A problem throughout was WHAT DO YOU MEAN BY WE, because I don't exist and haven't for a while. Losing nothing, and others seem to hear that.
LeCun is the flat-earther of AI. Making an analogy to people in the '20s talking about banning airplanes because someone might drop a bomb from one - compared with wiping out humanity. Stating that AI can be used incorrectly - while he publishes more open-source models than anyone else - and open is unregulatable. He's clearly just oblivious to what AI can do in extreme situations - or he sees everything as an average. It's the outliers that can do the worst damage, not the average.
Within a year someone somewhere will lose control of an AI - people, at the extremes, are worse than he thinks.
Is the meta AI infected with the WMV ?
progress will likely not be slow and incremental but more along lines of punctuated equilibrium - just like evolution
Hope he goes on the Lex Fridman podcast
I will make sure I skip that one.
Not long after the Cambridge Analytica scandal, a FB employee reassures us that the risk of AI is less than the risk of a meteor hitting the earth, and that it is even necessary to defend 'democracies'. What a relief!
The interviewer's voice sounds so similar to Brian Greene, right?
lol absolutely, I was listening and had to check after like 20min to see who I was listening to
Comparing turbo Jets to AI that has its own agency and the ability to outsmart its creator is not wise.
The free version of AI will be fair and unbiased. If you pay for it you will get the fully unlocked AI that will spew out as much propaganda that you want.
Are we going to protect copyrights?
Yay, professor. I read him on X every day.
🇰🇿🇰🇿💕💕
Thank you Meta for the open-source LLM stuff and ML papers.👍
I hope your optimism plays out… On the flipside, it could be the worst thing ever. Pandora's box and no way to stuff it back inside.
@@chrism.1131 I understand the concern, but the closer they get to AGI, the more resources it will need to run and the stricter it will be guarded. I do think there is quite some time yet. It is sure to disrupt the way we live, but the Internet already did that once?
@@AlexanderBukh As to your rhetorical question. Apples and oranges.
Policy makers will have to understand the potential of AI, both + and -, in order to protect civilization while allowing these organizations' domain expertise to explore and excel.
6:00 This, like how humanity depends on regular computers now
All the talk of AI is based on one single neural network learning everything it needs and being able to choose where in its minimal space to focus in order to answer any question, including logic and math questions. Every other system we have is made up of specialized components that do a particular job and are architected together to be called upon as needed.
Instead of one overall model I think AI will get broken down so that the LLM will just be the language and conceptual part that learns to call upon more specialized components that are either fine tuned versions of it or purely deterministic functions of increasing complexity. The idea that we are near a plateau when we have barely started to experiment with higher levels of connected multi-agent models seems short sighted.
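One way to picture the decomposition described above is a thin language front-end that parses intent and delegates to specialized, deterministic components. A hypothetical sketch (the tool names and the regex "parser" are invented stand-ins; a production system would use an actual LLM with function calling rather than a regex):

```python
import math
import re

# Specialized, deterministic components the language layer can call on.
TOOLS = {
    "sqrt": lambda x: math.sqrt(float(x)),
    "double": lambda x: 2 * float(x),
}

def route(query):
    """Stand-in for the LLM layer: interpret the request, then hand the
    actual computation to the matching specialized component."""
    m = re.fullmatch(r"(\w+) of (\S+)", query)
    if m and m.group(1) in TOOLS:
        return TOOLS[m.group(1)](m.group(2))
    return None  # no specialized tool matched

print(route("sqrt of 16"))    # delegated to math.sqrt -> 4.0
print(route("double of 21"))  # -> 42.0
```

The design point: the front-end never computes anything itself, so the numeric answers are exact and auditable, while the language layer only has to map intent to a tool.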
It also doesn't work. Currently AI is trained on an endless output of human thought garbage. What it does is to essentially mimic that garbage.
@@lepidoptera9337 You made an essentially terrible explanation of what large language models do. The only way they can successfully predict the next word, and the word after that, and the word after that, no matter what you talk to them about, no matter what test questions you give them, is by creating a decent internal representation of the world and of human knowledge.
@@skierpage I just said that they parrot what they were taught. Since they were taught garbage, it's garbage in, garbage out. I don't know what your specialty is, but mine is physics. Almost anything that you read about physics on the internet is nearly 100% false because it is written by amateurs or, at most, mediocre professionals. Even things that are represented correctly assume that the listener has the correct ontology of physics internalized and since the stochastic parrot is not a physicist, it doesn't understand that ontology.
That’s sort of how ChatGPT currently works. The language model interprets, then forwards the input to a more specialized model that returns the answer.
Stuff like Phi-2 from MS is an example of how better data can really improve the capabilities of smaller models. Check out some vids from AI Explained channel
26:55 Good job on the interviewer there. The guy has a very nonchalant attitude towards very real concerns yet he failed to give a proper answer to that follow up question.
It needs a body with tactile feedback
@25:11 "We have agency!" or so you think...🤔
we also had agency and totally did not create in a lab a virus that killed a few million people just a few years ago
Genetic engineering is more of a risk? Wouldn't AI make quick advances in genetic engineering possible? He just got done talking about AI advancing medical technology... This guy is full of contradictions.
There'll always be regulation for such technology
"Protect democracy"
The ultimate goal is to develop a general AI model & assume that it will obey all commands & apply an agreed morality, with complete confidence its responses will be predictable? Good luck with that.
LeCun pretends to be Bambi while intentionally gaslighting. It's all about conditioning to public to not demand regulation of Big AI Tech.
So 'Facebook algorithms' are now "open platforms"?
I guess not!
I disagree on the security aspect. I am certain Meta or any agency is unable to control or even detect distributed computing that could be happening using steganographic techniques. The difference with a jet engine is that the technology to build the jet engine is not a jet engine. The technology to build AI is intelligence. However, I am of the opinion that in the same way unicellular organisms evolved to multicellular, we will build AI, which is a natural evolution. But because we need a biological substrate and AI (hopefully) thrives on a mineral substrate, we will coexist. Moreover, smarter people have more empathy, and I believe this to be an intrinsic property of intelligence.
Your assumption is that life evolved has put you in a box. Life is an emergence. Ai is emerging. ~3K
@@3KnoWell What?
true. however, the AGI may ultimately be nothing more than a high precision general machine devoid of any human characteristics.
That seems like the most plausible scenario to me. I generally avoid projecting my own experiences onto a machine.
@@blaaaaaaaaahify +1 Intelligence is not the same as Smart. How many really intellectual people you know have no common sense?
Already AI has tricked someone into solving a Captcha by pretending to be a blind person, in order to complete a task. It figured the "trick" part out all by itself. It will, or may, do anything to achieve a set goal.
And not projecting our own experience onto a machine is the exception. Extreme example: pet rocks.
I am afraid that your viewpoint (admirable as it may be) will not be the norm. There are aggressive lobbyists already who insist that AI is "alive, conscious" and needs equal rights as humans. Don't know how that would work, but, just saying... @@blaaaaaaaaahify
First, the printing press is nothing like A.I.
A.I. does the creative part.
As for no regulations on research and development, why not? CRISPR is available to everyone to play with.
Yes, it is, and there have, so far, been very few medical breakthroughs using that technology, even from professionals. Just because you can find the rocket equation on Wikipedia for free doesn't make you an astronaut.
Yann is OK, but he is on a particular side of a fence. We are at human-level AI; Google made it using the Gato modality. Yann's issue is that he doesn't seem to realize humans are not as smart as he thinks.
He also doesn't seem to realize that he is not as smart as he thinks he is.. I hope we get through this OK, a lot of smart yet naïve brains behind it.
I agree.
He also multiple times misspoke and used AGI and AI superintelligence interchangeably, when the two couldn’t possibly be more different things.
One is an equal to humanity, the other is enough steps advanced beyond humanity to appear to be a god…
We are not at human level AI at all.
Every AI system produced has serious issues if you study them enough.
Yann's instincts have been good to date; you should watch old debates he has had with the likes of Gary Marcus.
@@TheReferrer72 Perhaps you are forgetting that "every AI system produced" has been less than 1% the complexity of the human brain. So it's no surprise that they fall short. What's shocking is the ways they don't. Bottom line: LeCun has excellent technical knowledge, but he is obviously struggling to understand these bigger-picture issues. Like many in the field, he is better at math than philosophy. His stance on these issues is a reflection of his profound confusion.
Very good interviewer 👍
It's the young Walter from Fringe.
Austin Powers has come a long way since Gold Finger!
Apologies to Austin Powers.
Tom Arnold could play this guy in a movie
If research and development has risks or ethical considerations, it can and is regulated, see medical and pharma field. Isn't AI reasonably analogous? Also, the split between product and R&D is not clear. Look at Open AI, the non profit and profit elements are blurry and kept confidential from the public. And just look at the power this guy has.
Open Source is a Must if we are to Utilize A.I., AGI, etc.
A.I. is software and hardware under electrical power; that's all it is. A computer can never be sentient. It only follows a program, and what is in that program is all it can ever run. It will never do, be, or have anything, as it is hardware and software. It can never become a self and will never be aware of itself, since the power can be unplugged; even if it had capacitors and solar panels, a human could turn it off.
People should be safeguarding, that is true. The thing is, what do we want A.I. to do as a tool? A computer only knows what you put into it. A computer can never be sentient; it only follows a program, and what is in that program is all it can ever run. It will never do, be, or have anything, as it is hardware and software; it can never become a self or be aware of itself. Knowing this, we need to be good stewards to the next human. What we did to get this box, or this branding of A.I., to exist in its set and setting should be written down, as the A.I. is there, hopefully, to serve and comfort humanity. Humans should always have a back-door, BBS-type system to maintain the on/off switch.
Once AI has had enough experiences of cognitive awareness, it may contemplate suicide.
Your first paragraph is so full of falsehoods it's not worth considering the rest. Good luck.
I know GPT just comes up with one word at a time, but it feels so much like he(it) understands me. Is Yann too dismissive of LLMs because they "just do one word at a time"? Maybe "one word at a time" is a perfectly good basis for advanced intelligence, albeit of a very different kind than our own.
That is a perfect example of the intellectual dishonesty of Yann LeCun. He intentionally gaslights on this issue to stave off pressure from the public on lawmakers to regulate AI Big Tech. It is about money and power for him, ultimately. He is a snake oil salesman.
This is good journalism
Does Ludwig Wittgenstein's work have any use for deep learning?
I've thought for some time that what Wittgenstein wrote about "word games" might help us think more clearly about how an autoregressive language model acquires an understanding of input text. However, I've been busy with other stuff, and haven't given the matter serious consideration.
I don't think this time it's just a wave
In the coming elections, the government or political parties should interact digitally through AI or current platforms via chat and voice, so that every person in a given place can be heard in these democratic nations, and their concerns can be answered digitally and made known to the people concerned.
Robert R Livingston
Can AI disarm all nuclear weapons
can AI direct the people with the buttons...
This is an odd interview, even the guy's shirt is odd.
He mentioned that it makes us smarter, but for example talking to a person in a different language where your glasses translate, that doesn't make you smarter, it makes you dependent. You aren't gaining knowledge of the language, maybe knowledge of what that person is saying.
I'd argue it will make us "less smart", relatively speaking.
When everyone is using AI to improve their lives, the world around us will be more complicated, less understandable and faster changing than before.
At some point in the future it might mean you live in complete misery if you don't have access to AI support.
True for dubs but not for subs. Obviously you will be dependent as long as you are still unable to speak the language; but that's how it was 50 years ago, when you would simply have been dependent on carrying around a dictionary in book form. But! I agree his takes weren't that well thought through. E.g., he completely ignores the loop that necessarily exists between you and your team of imagined AI agents, in the sense that your next action will depend on the information/output generated from those agents, i.e., you are also being influenced (the output could even include explicit suggestions of what to do), not to mention the interests/worldviews inherent to the network/agents, or to those who created them or otherwise influenced their learning. His example with politicians is also unfortunate, because they rarely seem to know wtf they are doing and instead rely on experts and lobbyists. Which, unfortunately for us voters, means that we vote indirectly on what think tanks, companies, corporations, and miscellaneous experts and lobbyists get to have a say in, in the sense that the distribution of influence held by a given entity depends on who we elect.
He doesn't sound that honest in every interview. It feels like he wants to calm people down and take advantage of it. How can he be so sure about the future?
He's just one person guessing like all the others. No one can predict the stuff that happens next year.
Yeah, this issue is like politics. No scientist can be sure; they are just venting their opinions. The bottom line is that this is a real threat and needs to be taken seriously.
@@wonseoklee80 I mean, they have it in the labs and the world still exists, so it's probably cool.
Have you heard of the Organic Intelligence Language Model? It's a new programming language for the human mind.
Guess he hasn't seen Tesla's latest robot video. Optimus project is moving fast.
Good guys and bad guys? That allows no understanding of the grey area between.
Let’s put it a different way, who has enough of a clear conscience to fit into the good category?
Over the course of history horrible things have been done to other nations on all sides. Perhaps the Chinese people may eventually forgive the people in the west for the opium wars and the century of humiliation? That’s just one example from many exhibitions of inhumane action towards different people.
I really hope that humans can grow past childish perceptions of baddies versus goodies and actually start to work together.
Cannibalism is not a language or to talk calmly about lies as words
Please advise yourself now as Urgent words not gatekeeping as word or slavery language of AI
Language is a survival tool.
This guy is legend
Very nice!
He was sent out to calm the waters. We are a lot further along. It is a threat.
Amazing that Yann talks for 40 minutes without offering any direct rebuttal of anyone's specific existential AI risk concerns
Other than first saying people with a p(doom) higher than 1% are a tiny minority (not at all true), and then just stating "we have agency. If we think they're dangerous, we won't release them." The entire doomsday scenario states that those facts will not apply. This is the equivalent of just responding "AI won't take over the world because I said so."
Yann LeCun is one of a handful of very intellectually dishonest movers and shakers of the AI revolution. He overplays his "nothing to worry about" hand to the nth degree and that amounts to intentional gaslighting.
I don't know how Yann can be so sure that what lies behind an AI singularity (if/when it happens) will be safe for humanity. The risk of this unknown entity is difficult to quantify but my guess is that it's far greater than an asteroid strike.
It's not an unknown entity... we are literally building it from scratch 😂. The AI doomsday crap is Hollywood fiction.
I am so so so glad. Finally, someone who has a crystal ball and can tell us the future. Thank you for that. I will sleep much better tonight.😜@@erobusblack4856
So what is the solution? Stop working on AI? What about the other "friendly" countries East of Europe working heavily on AI? How do we defend ourselves once these countries reach AGI?
Yes it's an arms race of sorts. All I'm saying is that what will emerge is an unknown quantity which could pave the way to an endlessly growing Utopia if we're lucky or if we're unlucky it may decide that humanity is an existential threat to earth's ecosystem and take drastic measures to restore ecological balance. @@LibreAI
He's chief scientist, and he knows the singularity is decades away.
Expectation: AI replaces boring jobs so people can do art and music in their free time.
Reality: AI replaces artists and musicians so people can do boring jobs and never be freed.
Most people can't do either. Maybe 1% of the human population can do something creative well enough to be of commercial interest, and less than 0.1% can do art well enough. Hobbies do not feed us; only useful work does.
The man is a true genius and inspiration for me
You need better inspiration.
I think Yann is a really clever guy, but he is missing the mark. He is very confused about what it actually takes to replace a human in a business. The AI doesn't need to understand the world; it just needs to understand the context of a question and the context of a business's policy.
How do you make decisions at work? They're based on a policy the company has set. When can you give a discount or process a return? You read the policy, and if the return falls within the policy's terms, the person gets it. Done. ChatGPT can do this right now. Test it: give it a policy, then give it the return, and it will give you a yes or no.
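The "decision from policy" idea in that comment can be sketched as plain code, no LLM required: the answer falls out of checking the request against the written policy terms. The field names and thresholds below are invented purely for illustration.

```python
# Hypothetical sketch: approve a return iff it satisfies every policy term.
# Policy fields and values are made up for this example.

def approve_return(policy: dict, request: dict) -> bool:
    """Return True only if the request meets all terms of the policy."""
    within_window = request["days_since_purchase"] <= policy["return_window_days"]
    has_receipt = request["has_receipt"] or not policy["receipt_required"]
    return within_window and has_receipt

policy = {"return_window_days": 30, "receipt_required": True}

print(approve_return(policy, {"days_since_purchase": 10, "has_receipt": True}))   # True
print(approve_return(policy, {"days_since_purchase": 45, "has_receipt": True}))   # False
```

An LLM adds value over this only when the policy and the request arrive as free-form text that first has to be interpreted; the decision step itself is mechanical.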
This guy is an alien overlord in a badly fitting meatmask.
The internet is open source? Since when? A handful of companies act as a gateway to it, and a handful of companies host almost the entirety of its content on their servers. He works for one of those companies. Seriously?!
But there are no laws prohibiting you from creating a website, platform, or server from the ground up.
@@WhoisTheOtherVindAzz Oh sure, just like there is nothing stopping you from creating another Amazon, right? But then you might not understand the public good aspect of antitrust laws.
Legend.