Neural networks should not be black boxes | Max Tegmark and Lex Fridman
- Published 10 Oct 2024
- Lex Fridman Podcast full episode: • Max Tegmark: AI and Ph...
Please support this podcast by checking out our sponsors:
The Jordan Harbinger Show: www.jordanharb...
Four Sigmatic: foursigmatic.c... and use code LexPod to get up to 60% off
BetterHelp: betterhelp.com... to get 10% off
ExpressVPN: expressvpn.com... and use code LexPod to get 3 months free
GUEST BIO:
Max Tegmark is a physicist and AI researcher at MIT.
PODCAST INFO:
Podcast website: lexfridman.com...
Apple Podcasts: apple.co/2lwqZIr
Spotify: spoti.fi/2nEwCF8
RSS: lexfridman.com...
Full episodes playlist: • Lex Fridman Podcast
Clips playlist: • Lex Fridman Podcast Clips
CONNECT:
Subscribe to this YouTube channel
Twitter: / lexfridman
LinkedIn: / lexfridman
Facebook: / lexfridmanpage
Instagram: / lexfridman
Medium: / lexfridman
Support on Patreon: / lexfridman
Brilliant! I understood every single word. Of course I had to watch other AI/LLM videos to understand what a neural network was and to grasp the enormity of these computed functions!! We live in magnificent times.
Lex pleaseeeee get Prof Donald Hoffman on!!
I wish I had even a remote clue of what this guy is talking about
Neither does he. Neural networks aren't programming, or something as simple as tweaking knobs and machine learning.
It's just living with ppl on the fly...for others as ones self
@@Mojo_Dojo333 it basically is. If you have built neural networks from scratch, you start with a function and some randomly initialized parameters, run a forward pass, then use a loss function to see how your model performs. Backprop gives you the gradient of the loss with respect to each parameter, which tells you how to adjust your parameters (tweaking knobs) toward the ideal weights for your features. You repeat this process (gradient descent) until you minimize the loss function and get a model that produces accurate predictions.
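The loop this comment describes can be sketched in a few lines of NumPy. This is a minimal illustrative example, not anything shown in the video: a tiny one-hidden-layer network fit to y = 2x, with hand-written backprop and gradient descent. All sizes, the learning rate, and the target function are arbitrary choices for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

X = rng.uniform(-1, 1, size=(64, 1))   # training inputs
y = 2.0 * X                            # target function to learn

# Randomly initialized parameters (the "knobs")
W1 = rng.normal(0, 0.5, size=(1, 8))
b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, size=(8, 1))
b2 = np.zeros(1)

lr = 0.1
losses = []
for step in range(500):
    # Forward pass
    h = np.tanh(X @ W1 + b1)
    pred = h @ W2 + b2
    loss = np.mean((pred - y) ** 2)    # loss function: mean squared error
    losses.append(loss)

    # Backprop: gradient of the loss w.r.t. each parameter
    n = X.shape[0]
    d_pred = 2.0 * (pred - y) / n
    dW2 = h.T @ d_pred
    db2 = d_pred.sum(axis=0)
    d_h = d_pred @ W2.T
    d_pre = d_h * (1 - h ** 2)         # derivative of tanh
    dW1 = X.T @ d_pre
    db1 = d_pre.sum(axis=0)

    # Gradient descent: nudge each knob against its gradient
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(f"loss: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

Running it, the loss shrinks steadily, which is the whole point of the comment: training is nothing more mysterious than repeatedly turning the knobs in the direction the gradient indicates.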
@@Mojo_Dojo333 actually it’s just that. Rhic limbic override
@@Mojo_Dojo333 but it is as simple as tuning knobs. He never said anything about programming; he only used a Python syntax example to explain why the whole process is not just inscrutable random noise.
Really love this guy, but the fact that he didn't realize the dancing robots were CGI is off-putting
This guy gives a really complicated answer for something that's supposed to be super simple
Not complicated at all
I came here to say
QUANTUM
Douglas Hofstader next!
Tegmark has now become a conspiracy theorist.
Huh?
@@Fungamingrobo Tegmark shares the same perspective as Yoshua Bengio and Geoffrey Hinton, asserting that AI poses an existential risk, a viewpoint which seems unreasonable. In contrast, Yann LeCun offers a more grounded perspective, clearly articulating what AI is and its potential capabilities. His insights can be found in various news stories, such as the one titled "Researchers Made an IQ Test for AI, Found They're All Pretty Stupid," which provides a revealing look into the current state of AI.
Max Tegmark, are you an expert or not? We know exactly how and why neural networks work; it suffices to take the time to understand how the backpropagation algorithm works, which is accessible to a lot of people, and you are in business.
He's talking about understanding a trained net you fool.
@@ChurchOfThought Let me guess, civilization has not reached you ... yet! 😢
@@Age_of_Apocalypse Ouch! The tickling slings and arrows of a man confused and unknowledgeable within the topics he chooses to smear others in! 🤡