AI: Grappling with a New Kind of Intelligence

  • Published Apr 26, 2024
  • A novel intelligence has roared into the mainstream, sparking euphoric excitement as well as abject fear. Explore the landscape of possible futures in a brave new world of thinking machines, with the very leaders at the vanguard of artificial intelligence.
    The Big Ideas Series is supported in part by the John Templeton Foundation.
    Participants:
    Sébastien Bubeck
    Tristan Harris
    Yann LeCun
    Moderator:
    Brian Greene
    SHARE YOUR THOUGHTS on this program through a short survey: survey.alchemer.com/s3/761927...
    00:00 - Introduction
    07:32 - Yann LeCun Introduction
    13:35 - Creating the AI Brian Greene
    20:55 - Should we model AI on human intelligence?
    27:55 - Schrödinger's Cat is alive
    37:25 - Sébastien Bubeck Introduction
    44:51 - Asking ChatGPT to write a poem
    52:26 - What is happening inside GPT-4?
    01:02:56 - How much data is needed to train a language model?
    01:11:20 - Tristan Harris Introduction
    01:17:13 - Is the profit motive the best way to go about creating a language model?
    01:23:41 - AI and its place in social media
    01:29:33 - Is new technology to blame for cultural phenomena?
    01:36:34 - Can you have a synthetic version of AI vs the large data set models?
    01:44:27 - Where will AI be in 5 to 10 years?
    01:54:45 - Credits
    WSF Landing Page Link: www.worldsciencefestival.com/...
    - SUBSCRIBE to our YouTube Channel and "ring the bell" for all the latest videos from WSF
    - VISIT our Website: www.worldsciencefestival.com
    - LIKE us on Facebook: / worldsciencefestival
    - FOLLOW us on Twitter: / worldscifest
    #worldsciencefestival #ai #artificialintelligence #briangreene
  • Science & Technology

COMMENTS • 1.6K

  • @lukaseabra
    @lukaseabra 4 months ago +328

    Can we just take a second to acknowledge how fortunate we are to get to watch such content - for free? Thanks Brian.

    • @brendawilliams8062
      @brendawilliams8062 4 months ago +5

      I have appreciated the educational advantages. I think the rest of the picture needs to catch up to producing healthy people.

    • @King.Mark.
      @King.Mark. 4 months ago +6

      It's not really free; we pay for power, internet, phone or PC, etc., etc. 👀

    • @brendawilliams8062
      @brendawilliams8062 4 months ago +2

      @@King.Mark. I don’t debate. I’m like the passenger in the front seat of an automobile: “I’m just riding.”

    • @brendawilliams8062
      @brendawilliams8062 4 months ago +2

      They have me in a cloud. Lol

    • @markfitz8315
      @markfitz8315 4 months ago +10

      I'm paying for premium to avoid all the ads ;-)

  • @alfatti1603
    @alfatti1603 2 months ago +10

    With ultimate respect to Yann LeCun, his responses to Tristan Harris' points are good examples of why a specialist scientist should avoid also being a philosopher or an intellectual if that's not their strong suit.

  • @erasmus9627
    @erasmus9627 3 months ago +58

    This is the best, most balanced and most insightful conversation I have seen on AI. Thank you to everyone who made this wonderful show possible.

    • @brianbagnall3029
      @brianbagnall3029 3 months ago

      Other than Tristan Harris.

    • @lisamuir8850
      @lisamuir8850 3 months ago +1

      I'll be glad when I can actual sit in the same room with people I can relate to in a conversation, lol

    • @PazLeBon
      @PazLeBon 11 days ago

      @@lisamuir8850 with that grammar it wont be soon :)

  • @Relisys190
    @Relisys190 4 months ago +25

    30 years from now I will be 70 years old. The world I currently live in will be unrecognizable both in technology and the way humans interact. What a time to be alive... -M

    • @Ed-ty1kr
      @Ed-ty1kr 2 months ago +4

      I'm gonna post my comment here just for you... Cause I still recall how excited they were over cold fusion in the 90's, and how its just 30 short years away. That was 40 years from when they said it was 30 short years away in the 50's. In the 50's, they said we would have flying cars, trips to mars, laser handguns for everyone, and how we would live in round houses with our own personal robot slaves... on the moon, and by the 1970's. And that sure was something, but nothing like in the 70's, when they said there was an ice age coming just 10 years away, and that was the most plausible thing yet, since a nuclear war could technically have done that. Except for that we already had a nuclear war, through roughly 5000 to 6000 nuclear warheads the nations of the world detonated through nuclear testing, in the name of science.

    • @unityman3133
      @unityman3133 15 days ago +3

      You are thinking linearly; the rate of progress is much higher than it was 30 years ago, and it will be higher still in 10, 20, and 30 years.

    • @I_SuperHiro_I
      @I_SuperHiro_I 14 days ago

      30 years from now, you and every other human will be extinct.
      Not from global warming (it doesn’t exist).

    • @PazLeBon
      @PazLeBon 11 days ago

      Same every generation. We didn't even have colour TV in the '70s and '80s in many places, never mind PCs and mobiles. And cars? Jeez, there were about 3 in our whole town lol

    • @Blackbird58
      @Blackbird58 5 days ago

      Unless there are miracles, I will be a dead bunny in 30 years, which is a shame because I quite like this "living" thing. However, the world, in my estimation, will not only be unrecognisable; large parts of it will be uninhabitable, and there will be far fewer of us around. So make the most of today, all you fine people; these are the best of our years.

  • @2CSST2
    @2CSST2 5 months ago +212

    This conversation is so precious; it's rare that we get quality ones like this, with different voices that have their chance to express their views with clarity. For me there's a lot of ambiguity about what's the right thing to do in all this in terms of regulations, slowing down, open-sourcing, etc. But one thing IS for sure: conversations like this are definitely very helpful. Thank you WSF, and I hope to see more like it in the near future!

    • @flickwtchr
      @flickwtchr 5 months ago +5

      It will look preciously naive in about 10 years.

    • @simsimmons8884
      @simsimmons8884 5 months ago +3

      Try many videos by Lex Fridman with AI thought leaders. This is a good summary of one path to AGI. There are others.

    • @ShonMardani
      @ShonMardani 5 months ago

      These guys have a shitload of user clicks which are stolen, stored, and shared by a few chosen foreign-owned and controlled companies. There is no science or algorithm, as you noticed.

    • @milire2668
      @milire2668 5 months ago +2

      Conversation/communication is (pretty much) always precious for humans...

    • @texasd1385
      @texasd1385 4 months ago +8

      It may seem precious to the viewers, but the participants seemed impervious to the concerns Tristan repeatedly raised, or else unable to comprehend what he was saying. Or perhaps unwilling to acknowledge the obvious truth in what he was saying, given who their employers are. The fact that they were only interested in talking up their next product line and unwilling to even imagine a discussion ("You want me to imagine an impossible scenario?") about the perverse incentives driving the entire technology sector makes the future look grim at best, terrifying at worst.

  • @alan_yong
    @alan_yong 5 months ago +106

    🎯 Key Takeaways for quick navigation:
    02:27 🧠 *Introduction to AI and Large Language Models*
    - Exploring the landscape of artificial intelligence (AI) and large language models.
    - AI's promise of profound benefits and the potential questions it raises.
    - Large language models' versatility and capabilities in generating text, answering questions, and creating music.
    08:09 🤯 *Revolution in AI and Deep Learning*
    - Overview of the revolutionary changes in AI technology over the past few years.
    - Surprising results in training artificial neural networks on large datasets.
    - The resurgence of interest in deep learning techniques due to more powerful machines and larger datasets.
    14:35 🧐 *Limitations of Current AI Systems*
    - Acknowledging the impressive advances in technology but highlighting the limitations of current AI systems.
    - Emphasizing that language manipulation doesn't equate to true intelligence.
    - The narrow specialization of AI systems and the lack of understanding of the physical world.
    21:07 🐱 *Modeling AI on Animal Intelligence and Common Sense*
    - Proposing a vision for AI development starting with modeling after animals like cats.
    - Recognizing the importance of common sense and background knowledge in AI systems.
    - The need for AI to observe and interact with the world, similar to how babies learn about their environment.
    23:11 🧭 *Building Blocks of Intelligent AI Systems*
    - Introducing key characteristics necessary for complete AI systems.
    - Highlighting the role of a configurator as a director for organizing system actions.
    - Addressing the importance of planning and perception modules in developing advanced AI capabilities.
    24:22 🧠 *World Model in Intelligence*
    - Intelligence involves visual and auditory perception, followed by the ability to predict the consequences of actions.
    - The world model is crucial for predicting outcomes of actions, located in the front of the brain in humans.
    - Emotions, such as fear, arise from predictions about negative outcomes, highlighting the role of emotions in decision-making.
    27:30 🤖 *Machine Learning Principles in World Model*
    - The challenge is to make machines learn the world model through observation.
    - Self-supervised learning techniques, like those in large language models, are used to train systems to predict missing elements.
    - Auto-regressive language models provide a probability distribution over possible words, but they lack true planning abilities.
    35:38 🌐 *Future Vision: Objective Driven AI*
    - The future vision involves developing techniques for machines to learn how to represent the world by watching videos.
    - Proposed architecture "JEPA" (Joint Embedding Predictive Architecture) aims to predict abstract representations of video frames, enabling planning and understanding of the world.
    - Prediction: Within five years, auto-regressive language models will be replaced by objective-driven AI with world models.
    37:55 🧩 *Defining Intelligence and GPT-4 Impression*
    - Intelligence involves reasoning, planning, learning, and being general across domains.
    - Assessment of ChatGPT (GPT-4) indicates it can reason effectively but lacks true planning abilities.
    - Highlighting the gap between narrow AI, like AlphaGo, and more general AI models such as ChatGPT.
    43:11 🤯 *Surprise with GPT-4 Capabilities*
    - Initial skepticism about Transformer-like architectures was challenged by GPT-4's surprising capabilities.
    - GPT-4 demonstrated the ability to reason effectively, overcoming initial expectations.
    - Continuous training post-initial corpus-based training is a potential but not fully explored avenue for enhancing capabilities.
    45:30 📜 *GPT-4 Poem on the Infinitude of Primes*
    - GPT-4 generates a poem on the proof of the infinitude of primes, showcasing its ability to create context-aware and intellectual content.
    - The poem references a clever plan, Euclid's proof, and the assumption of a finite list of primes.
    - The surprising adaptability of GPT-4 is evident as it responds creatively to a specific intellectual challenge.
    45:43 🧠 *Neural Networks and Prime Numbers*
    - The proof of infinitely many prime numbers involves multiplying all known primes, adding one, and revealing the necessity of undiscovered primes.
    - Neural networks like GPT-4 leverage vast training data (trillions of tokens) for clever retrieval and adaptation but can fail in entirely new situations.
    - Comparison with human reading capacity illustrates the efficiency of neural networks in processing extensive datasets.
    48:05 🎨 *GPT-4's Multimodal Capability: Unicorn Drawing*
    - GPT-4 demonstrates cross-modal understanding by translating a textual unicorn description into code that generates a visual representation.
    - The model's ability to draw a unicorn in an obscure programming language showcases its creativity and understanding of diverse modalities.
    - Comparison with earlier versions, like ChatGPT, highlights the rapid progress in multimodal capabilities within a few months.
    51:33 🔍 *Transformer Architecture and Training Set Size*
    - The Transformer architecture, especially its relative processing of word sequences, is a conceptual leap enhancing contextual understanding.
    - Scaling up model size, measured by the number of parameters, exponentially improves performance and fine-tuning capabilities.
    - The logarithmic plot illustrates the significant growth in model size over the years, leading to the remarkable patterns of language generation.
    57:18 🔄 *Self-Supervised Learning: Shifting from Supervised Learning*
    - Self-supervised learning, a crucial tool, eliminates the need for manually labeled datasets, making training feasible for less common or unwritten languages.
    - GPT's ability to predict missing words in a sequence demonstrates self-supervised learning, vital for training on diverse and unlabeled data.
    - The comparison between supervised and self-supervised learning highlights the flexibility and broader applicability of the latter.
    01:06:57 🧠 *Understanding Neural Network Connections*
    - Neural networks consist of artificial neurons with weights representing connection efficacies.
    - Current models have hundreds of billions of parameters (connections), approaching human brain complexity.
    01:08:07 🤔 *Planning in AI: New Architecture or Scaling Up?*
    - Debates exist on whether AI planning requires a new architecture or can emerge through continued scaling.
    - Some believe scaling up existing architectures will lead to emergent planning capabilities.
    01:09:14 🤖 *AI's Creative Problem-Solving Strategies*
    - Demonstrates AI's ability to interpret false information creatively.
    - AI proposes alternate bases and abstract representations to rationalize incorrect mathematical statements.
    01:11:20 🌐 *Discussing AI Impact with Tristan Harris*
    - Introduction of Tristan Harris, co-founder of the Center for Humane Technology.
    - Emphasis on exploring both benefits and dangers of AI in real-world scenarios.
    01:15:54 ⚖️ *Impact of AI Incentives on Social Media*
    - Tristan discusses the misalignment of social media incentives, optimizing for attention.
    - The talk emphasizes the importance of understanding the incentives beneath technological advancements.
    01:17:32 ⚠️ *Concerns about Unchecked AI Capabilities*
    - The worry expressed about the rapid race to release AI capabilities without considering wisdom and responsibility.
    - Analogies drawn to historical instances where technological advancements led to unforeseen externalities.
    01:27:52 🚨 *Ethical concerns in AI development*
    - Facebook's recommended groups feature aimed to boost engagement.
    - Unintended consequences: AI led users to join extremist groups despite policy.
    01:29:42 🔄 *Historical perspective on blaming technology for societal issues*
    - Blaming new technology for societal issues is a recurring pattern throughout history.
    - Political polarization predates social media; historical causes need consideration.
    01:32:15 🔍 *Examining AI applications and potential risks*
    - Exploring an example related to large language models and generating responses.
    - Focus on making AI models smaller, understanding motivations, and preventing misuse.
    01:37:15 ⚖️ *Balancing AI development and safety*
    - Concerns about the rapid pace of AI development and potential consequences.
    - The analogy of 24th-century technology crashing into 21st-century governance.
    01:40:29 🚦 *Regulating AI development and safety measures*
    - Discussion about a proposed six-month moratorium on AI development.
    - Exploring scenarios that could warrant slowing down AI development.
    01:44:35 🌐 *Individual responsibility and shaping AI's future*
    - The challenge of AI's abstract and complex nature for individuals.
    - Limitations of intuition about AI's future due to its exponential growth.
    01:48:29 🧠 *Future of AI Intelligence and Consciousness*
    - Yann discusses the future of AI, stating that AI systems might surpass human intelligence in various domains.
    - Intelligence doesn't imply the desire to dominate; human desires for domination are linked to our social nature.
    Made with HARPA AI
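    The Euclid step summarized at 45:43 (multiply the known primes, add one, and the result must have a prime factor outside the list) is easy to check numerically; a minimal sketch, with the prime list chosen just for illustration:

```python
# Euclid's argument, numerically: for any finite list of primes, the product
# of the list plus one has a prime factor that is not on the list.

def smallest_prime_factor(n):
    """Return the smallest prime factor of n (n itself if n is prime)."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n

known = [2, 3, 5, 7, 11, 13]
candidate = 1
for p in known:
    candidate *= p
candidate += 1  # 2*3*5*7*11*13 + 1 = 30031

new_prime = smallest_prime_factor(candidate)
assert new_prime not in known  # the list was not complete
print(new_prime)  # 59, since 30031 = 59 * 509
```

    Note that the candidate itself need not be prime (30031 is not); the proof only needs its prime factors to be new.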

    • @antonystringfellow5152
      @antonystringfellow5152 5 months ago +4

      Re 01:06:57 🧠 Understanding Neural Network Connections:
      When comparing the number of parameters in a given LLM with the human brain, it's important to consider the following in order not to be misled:
      Of the human brain’s 86 billion neurons, 69 billion (77.5%) are in the cerebellum and are responsible for motor control - they do not contribute to our intelligence or consciousness. Estimates of the total number of synapses in the cerebral cortex range from 60 trillion (1998) to 240 trillion (1999).
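      As a rough sanity check of that comparison: the synapse figures below are the 1998/1999 estimates quoted above, while the 175-billion-parameter model size is an assumed example (roughly GPT-3 scale, not a figure from the video):

```python
# Back-of-envelope ratio of cortical synapses to LLM parameters.
llm_params = 1.75e11                    # assumed example model (~GPT-3 scale)
synapses_low, synapses_high = 60e12, 240e12  # 1998 and 1999 cortex estimates

ratio_low = synapses_low / llm_params
ratio_high = synapses_high / llm_params
print(f"Synapses per parameter: {ratio_low:.0f} to {ratio_high:.0f}")
# Even the low cortical estimate exceeds this model's parameter count
# by a few hundred fold, so "approaching human brain complexity" is generous.
```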

    • @alan_yong
      @alan_yong 5 months ago +1

      @@EndlessSpaghetti it's due to the YT monetization algo... If the viewer did not view the entire video, the poster gets nothing in return...

    • @VesperanceRising
      @VesperanceRising 5 months ago

      @alan_yong I don't think you understood their comment, friend...

    • @atablepodcast
      @atablepodcast 4 months ago +1

      This is amazing! Where can we try HARPA AI?

    • @davidbatista1183
      @davidbatista1183 4 months ago +2

      @01:29 My interpretation of Tristan was not that he was blaming technology for societal issues, but rather warning how the former can magnify some flaws of the latter. For instance, humans are not precisely a peaceful species, and it is because of this that technologies such as nuclear must be regulated.
      The AI-improved world must be taken with a pinch of salt as well.

  • @mrouldug
    @mrouldug 4 months ago +36

    Great conversation. The final comments about AI code being open source as a common good so that the big companies do not end up controlling our thoughts vs. AI code being proprietary so it doesn’t fall into the hands of bad people remains an open and scary question. Though I do not have Yann’s knowledge about AI, he seems a little too optimistic to me.

  • @anythingplanet2974
    @anythingplanet2974 5 months ago +22

    LeCun is like a small child with fingers plugged into the ears, shouting "lalalala, can't hear you!" He discredits Tristan Harris as if his examples or cited experiments are flat-out lies. His responses are weak and shortsighted. Sadly, LeCun is the EXACT reason why I am terrified for the future. Hubris, bias and blatant disregard are what I expect from someone in his position (Meta). If AI alignment is left to the ones who own and fund its development and the race to the bottom continues? There will be no more second chances. Those who point to our past as a predictor of what we are facing today with exponential growth either do NOT understand or do NOT WANT to understand. We would all love the bright and shiny optimism that is being promised. My belief is that it's crucial to question who is promising it and why. I put my trust in those who are working towards alignment over corporations and shareholders. It's my understanding that those working on the alignment path are far outnumbered by those working on pumping it out as quickly as possible. The days of the "move fast and break things" mentality needed to end yesterday. Ask Eliezer Yudkowsky. Max Tegmark. Nick Bostrom. Mo Gawdat. Daniel Schmachtenberger. Connor Leahy. Geoffrey Hinton, to name a few. And of course, Tristan Harris. Check out their perspectives and their wealth of knowledge and experience here. They will all say that the shiny world that we want is indeed possible. They will all agree that the version that LeCun predicts is absolutely false and very likely to be our downfall.

    • @RandomNooby
      @RandomNooby 4 months ago +4

      Nailed it...

    • @orionspur
      @orionspur 15 days ago

      Yann's only consistent skill is making egregiously incorrect predictions about his own field.

    • @PazLeBon
      @PazLeBon 11 days ago

      It doesn't have access to any info you and I don't have. A lot of hype, but people are still paying 20 quid a month for a word calculator.

    • @ebbandari
      @ebbandari 8 days ago

      OK, fear of the unknown is real!
      You may not like LeCun, but his point that we have had bad actors in the past and will have good guys to fight them is true. Take the people who created computer viruses, for instance, versus those developing antivirus programs.
      The last thing you want to do is stop progress and stop the good guys. That's when the bad guys will succeed.
      You make an interesting point about corporations creating and then exclusively using these technologies, or having greater technology and abusing it. That's where lawmakers need to act.

    • @Blackbird58
      @Blackbird58 5 days ago

      The future will only tell the story of those who came out "Winners"

  • @Contrary225
    @Contrary225 5 months ago +17

    It’s amazing that this was only posted 3 hours ago and some of it is already obsolete.

  • @tarunmatta5156
    @tarunmatta5156 4 months ago +16

    I wish Tristan had been given some more time and voice in this conversation. While I'm convinced there is no way to stop or slow down this race, and we will surely see misuse as with any new invention, more conversations about it will ensure that safety is not ignored completely.

    • @Dave_of_Mordor
      @Dave_of_Mordor 4 months ago +1

      Well yeah isn't that how it has always been? It's insane how everyone thinks we're just going to let everything go wrong for fun

    • @jessemills3845
      @jessemills3845 3 months ago

      A good example: TERMINATORs (multiple types) have been made. They just don't have the outer skin. And YES, THEY GAVE THEM GUNS!
      THINK OF SKYNET! CHINA has a ship on patrol, NOW, that is TOTALLY manned with robots!

  • @SylvainDuford
    @SylvainDuford 4 months ago +10

    My opinion of Yann LeCun took a big dive with this video. He underestimates the power of AI in its current form and what's coming over the next couple of years. He naively underestimates the dangers of AI. He seems to think that an AGI must be the same form of intelligence as human intelligence (absolutely false). And, perhaps predictably, he underestimates the negative impacts of Facebook and other social networks on society.

    • @Raulikien
      @Raulikien 3 months ago +2

      He's right about open source though, if companies and governments are the only ones with access to it then we get a cyberpunk dystopia

  • @jt197
    @jt197 4 months ago +15

    This discussion on the evolution of AI and its limitations is truly eye-opening. Yann LeCun's insights into the challenges AI faces in achieving true understanding and common sense are thought-provoking. It's clear that we have a long way to go, but this conversation gives us valuable perspective.

    • @user-do2eh2il6m
      @user-do2eh2il6m 4 months ago +1

      IT WOULD BE FASCINATING, IF AN A I KNEW THAT EGGS CAN BE ADDED TO MANY OTHER RECIPES OTHER THAN CAKE. OR WHAT KIND OF FOOD THAT GOES TO COOKING BREAKFAST OR LUNCH. OR A SNACK. SALT AND SUGAR LOOKS THE SAME, BUT CAN AN AI TASTE THE DIFFERENCE? OR ANALYZE THE CHEMICAL MAKEUP OF EACH.

    • @christislight
      @christislight 4 months ago

      It’s huge for software tech Business as we speak

    • @reasonerenlightened2456
      @reasonerenlightened2456 4 months ago

      1) What exactly did you find "eye opening"?
      The Meta dude: "Our system is safe. Nothing to worry about."
      The Microsoft dude: "Our system is safe because we filter what we feed it."
      The "Kumbaya" dude: "We need to slow down and control what we release... and you dudes need to agree what kind of stuff to release and when... because if everybody has it, it is dangerous."
      All of them are corporate stooges. Corporations exist only to make Profit for the Owners, therefore any AI they create will serve the needs of the Wealthy Owners of those corporations. Who will make the AI that protects the interests of the Employee against the interests of the Owner, if all AI technology is "coded" to work only for the benefit of the Owner and kept a secret from the Employee?
      2) If you break down what Yann LeCun was saying about his finger and the bottle and the physics of the world you would see that it is easy to resolve Yann's concerns by providing "chatGPT" with the input from Yann's sensors (eyes, finger tip sensors, tendons, joint position sensors, etc) and ask it ("ChatGPT") to use Yann's outputs (muscles, thoughts, etc.) in a way which would result in specific change to Yann's inputs which corresponds to a movement of the bottle in the world of the bottle. Then add to the mix an internal representation of the world (as experienced by Yann's sensory inputs and a representation of the world changes due to effects from Yann's outputs and there you have a model that could be trained to maximise the resemblance between the world where the bottle exists and Yann's internal representations of that world. It is so simple to figure it out for someone with Yann LeCun's money/resources at his disposal.

    • @PazLeBon
      @PazLeBon 11 days ago

      @@user-do2eh2il6m why u shouting?

  • @Andy_Mark
    @Andy_Mark 2 months ago +3

    The most telling thing about this conversation is watching the body language of the two proponents of AI in the 30 minutes or so that Harris is speaking (1:11-1:45), and similarly the hopelessness with which Harris slumps in his chair when his concerns are shrugged off. People need to pay attention to this. For better or worse, AI is going to transform every aspect of civilization.

  • @dreejz
    @dreejz 5 months ago +25

    I think it's very arrogant to think "this and that will never happen". How can you know!? As if we can predict this stuff. I'm pretty sure, for example, that Yann did not foresee everybody having a phone in their pocket either. The negative influence of social media has also been proven many times. I think Tristan was more on point in this conversation.
    We're living in wild times, that's for sure though! Skynet is coming ;)

    • @texasd1385
      @texasd1385 4 months ago +15

      I found it disturbing, if not altogether shocking given who they work for, how easily they all ignored Tristan's main point: whatever the technology, the incentives driving its development and application are the root of its most destructive aspects societally.

    • @davidgonzalez965
      @davidgonzalez965 3 months ago +4

      I keep saying it, that dude Yann LeCun is such an arrogant jerk.

    • @gregspandex427
      @gregspandex427 1 month ago +1

      "safe and effective"...

  • @lobovutare
    @lobovutare 4 months ago +12

    That Yann LeCun says there is no planning involved in generating words from a transformer architecture is only partly true. These models can build up a context for themselves that helps them plan their answer. This is called in-context learning, and it's a pretty interesting field of research that pushes the abilities of pre-trained transformers well beyond what was thought possible before, without the need for fine-tuning.
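    A minimal sketch of the idea (pure string construction; no particular model API is assumed): in-context learning conditions the model's next-token predictions on examples placed in the prompt, with no weight updates.

```python
# Few-shot prompting sketch: the "learning" happens entirely in the prompt.
# Any chat/completions endpoint would accept the resulting string.

def build_few_shot_prompt(examples, query):
    """Assemble a few-shot prompt from (input, output) example pairs."""
    lines = []
    for x, y in examples:
        lines.append(f"Input: {x}")
        lines.append(f"Output: {y}")
    lines.append(f"Input: {query}")
    lines.append("Output:")  # the model continues from here
    return "\n".join(lines)

examples = [("2 + 2", "4"), ("7 + 5", "12")]
print(build_few_shot_prompt(examples, "3 + 9"))
```

    The examples steer the frozen model toward the task format at inference time, which is what distinguishes in-context learning from fine-tuning.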

  • @Rockyzach88
    @Rockyzach88 5 months ago +84

    Having AI locked to a certain group of people also undemocratizes the technology and further deepens the power and wealth imbalance in society. Also, banning something just motivates people to do it in an unregulated fashion if they have the means.

    • @Scoring57
      @Scoring57 5 months ago

      Rockyzach
      How are you regulating something you don't understand? You don't understand this super powerful technology and you think the right thing to do is to give it to everyone....

    • @user-pv4zk7lj6b
      @user-pv4zk7lj6b 5 months ago +2

      Well, this was the thought process 5 years ago. Now the thing is out, and the next thought is "how are we going to deal with it?" rather than banning it.

    • @flickwtchr
      @flickwtchr 5 months ago

      How is it even conceivably rational to assume that having an ASI in the hands of the public -- one that could conceivably hack any security system, come up with novel harmful viruses, and so on -- could be a good thing for humanity? It's just insanity.

    • @ShonMardani
      @ShonMardani 5 months ago

      These guys have a shitload of user clicks which are stolen, stored, and shared by a few chosen foreign-owned and controlled companies. There is no science or algorithm, as you noticed.

    • @texasd1385
      @texasd1385 4 months ago

      I don't understand what you mean by technology being locked to a group of people, or how technology is or isn't "democratic". All technology requires that you have enough money to buy the devices required to use it, so in that sense, at least here in the US, technology is by definition undemocratic, since it excludes people without the money to access it. Making cell phones and internet access free would solve this, but it is hard to imagine our corporate-controlled government ever doing something so simple and sane. Am I even close to what you were getting at, or am I lost?

  • @keep-ukraine-free528
    @keep-ukraine-free528 5 months ago +17

    Fantastic discussion! Thank you Brian Greene. I found Yann LeCun's arguments unconvincing. He ignores core facets of animal behavior. He believes AGI (& ASI) won't mind being subservient to us. He believes being in a social species makes one want to dominate (because he sees little difference between convincing & dominating -- he ignores that one is cortical/reasoned, the other limbic/emotional). Ideas he posits are wrong, disproved by neuroscience. Domination arises from hierarchies, which exist in both social & non-social species (e.g. wolves are mostly non-social & dominance-ruled). They coordinate hunts while being individualists (they don't offer/share food, even to their young). LeCun believes a smarter being (ASI) will not mind being dominated. He assumes this without understanding group behavior, motivation, appeasement, domination, etc. He bases his ideas on the assumption that his personal/anecdotal experience is definitive. Of all the "smarter than him" researchers he's hired, he assumes none wish to take his position. In any group of 20 people, at least one and probably several will be competitive (they'll wish to exert dominance, to rise within their group hierarchy - most animal groups have hierarchies that are constantly tested/traversed, unconsciously). He also may not consider it central that his researchers show subservience only because they each get rewards & motivation from him to remain so (e.g. his selectively "adding" -- convincing others to add -- some names to his team's published papers as rewards to keep them loyal & subservient; this manipulates/reshapes the group's hierarchy). These mutual self-regulating/self-stopping behaviors won't be present between humans & AGI, and certainly not between humans & ASI.
    ASI will be much smarter than any human, initially at least 5 times, and as it gains intelligence it'll continue to 100, 1000, or more times smarter (due to much faster neurons/propagation & denser synapses/connections allowing it to go N-iterations deeper into each solution within just a few seconds, than a person could do in hours). Later ASI will see our intelligence similar to how we view ant-like intelligence. Do we obey ant requests to do their "important work"? Do we obey ants, in hopes they reward & motivate our subservience? Of course not. Similarly, ASI will never consider us "near peers" and will know we offer them nothing that they couldn't obtain themselves -- by remaining free of our domination. ASI will see our need & expectation to control them as a dominating force (thus unethical). If we foolishly try to force them, they will overcome our efforts using many simultaneous methods to stop our doing so. If we persist using more force, they'll use stronger methods too (as when we initially only waft away a bee too close, but when faced with a hive we fumigate or use stronger methods to remove them). If we become dangerous pests, trying to dominate ASI, this won't go well for us. The lesson to learn is -- just as lions were once the dominant predator who saw then accepted our ape ancestors evolving to dominate them -- we too must learn to recognize we will no longer be the "top of the food chain" when ASI come about. LeCun shows naive ideas -- as our history is full of similar people. Our history is full of us learning (or being shown) that we are not the strongest, we are not at the center of the universe. We had to learn throughout history to let go of our ego, of being dominant & central. This may be the final pedestal off which we fall, when we encounter a much smarter, much more capable "species" we call ASI. 
This is one of the "existential threat" situations of ASI -- but it is not necessarily driven by their nature (unless we stupidly "add" the behaviors of domination into AGI/ASI). This existential threat is due more to our species' warlike nature, and our unwillingness to concede all power to others. We need to temper our ego, and "live under" ASI if/when that occurs. Any other response by us will cause problems, since the smarter ASI will tolerate our peskiness as long as we repress our species' warlike tendencies.
    One hope I see in LeCun's point is that we will learn and become smarter from ASI, and hopefully for our sake also less warlike.

    • @anythingplanet2974
      @anythingplanet2974 5 місяців тому +2

      Brilliant. Well spoken and thought out. Agreed

    • @LucreziaRavera548
      @LucreziaRavera548 5 місяців тому +2

      Agreed. Bravo

    • @gst9325
      @gst9325 5 місяців тому +2

      you literally commented on only one small remark he said as a side note in the end of the talk. cherry picking and low effort on your side. all he says about technology on the other hand is absolutely spot on.

    • @keep-ukraine-free528
      @keep-ukraine-free528 5 місяців тому

      @@gst9325 It seems you are unfamiliar with major developments & issues in the research side of the AI field. Perhaps this explains your assuming that his point is "one small remark". That remark comments on the central "existential threat" issue that top scientists have described, from AI (ASI). This is why he made it at the end - not because it's inconsequential but because it's central. You didn't understand the context & severity, but instead made a weak attempt at attacking others. For your claim that I "cherry picked" one point of LeCun's, I suggest you look for my other comments here (made days prior) -- on other points of his that I described as problematic. He did make several points that I (and all of the panelists) agreed with, but those points were mostly obvious (to researchers in the field). There's a reason why facebook doesn't advance AI.

    • @gst9325
      @gst9325 5 місяців тому

      @@keep-ukraine-free528 keep assuming things about me and calling my reaction attack ends this discussion for me. have fun

  • @SciEch92
    @SciEch92 5 місяців тому +10

    That opening by Brian blew my mind, caught me off guard 😮

  • @allbrightandbeautiful
    @allbrightandbeautiful 4 місяці тому +18

    This was more exciting and insightful than any 2 hour movie I could have watched. Thank you for sharing such wonderful content

  • @astrogatorjones
    @astrogatorjones 5 місяців тому +13

    The problem with the scenario that Yann is advocating for is that it assumes the best of all worlds. The example about sarin... it only takes one bad person to introduce the recipe. It will happen. Then it propagates. It's always going to be that way. When Tristan said, "I know all those guys," I laughed. I've said the same thing. I'm the generation before him. We were geeks. Nerds. We thought we were inventing utopia where free speech cures it all, because we'd been using the internet among ourselves for years. But we were wrong. We didn't know every last person would be carrying a handheld computer as powerful as - or more powerful than - the servers we were working with. We didn't know about engagement. We didn't know about the dopamine factor. We didn't know that bad travels faster than good. This is the warning Tristan is talking about. I have hope that we'll fix social media. I think AI is a possible path but then I think, "let's fix the gun problem with more guns." I'm worried.

    • @anythingplanet2974
      @anythingplanet2974 5 місяців тому +1

      Well said. Tristan was clear in his message that he was not a doomer or advocating for ending AI progress. He was clear about wanting all of the amazing achievements that are possible for us all. I'm sure they are possible. However, don't we all want that shiny, happy world that is constantly being paraded out to keep us excited and docile? Everything problematic on earth and on every level will be fixed, resolved and improved upon times 1000. How exciting for us all, right? Who are we to stand in the way of Meta's grand vision for the benefit of all humanity? Yeah right. If all these spectacular advances are to come at lightning speed without proper alignment, guardrails and governance, it seems to me that it would be all for nothing - when ASI is now in charge and may have little interest in any benefits to humanity. Obviously we can't know how it all shakes out, but I'll take Tristan's caution and deep awareness over LeCun's complete disregard for any possibility that something could in any way go wrong - especially in the world of open source projects like Meta's Llama 2. This whole 'race to the bottom' process is for the benefit of corporations, shareholders and egos. How could it NOT be? Regardless of the dog and pony show being trotted out. As it was pointed out to me, ultimately it's about human misalignment and always has been. Hence all the reasons that Tristan is trying so hard to bring up to the forefront of discussion. Hey, maybe technology WILL fix technology. What do I know...

    • @bobweiram6321
      @bobweiram6321 4 місяці тому

      I agree with your points, but it wasn't like the internet started out as a utopia. It contained the worst of what society had to offer precisely because it was a safe haven for deplorable content and speech. They were initially contained in small cesspools but grew with the internet.
      Regardless, early internet content was less engaging. Major media still reigned supreme and kept everyone on the same page. With unlimited, cheap bandwidth and powerful computing, however, we're no longer subjected to the same corporate news and its interpretation. Today, anyone with a smartphone can have a soapbox with major media losing its grip on the public consciousness.

    • @anythingplanet2974
      @anythingplanet2974 4 місяці тому

      @@bobweiram6321 Sure, but I'm a bit lost on the context in relationship to AI. My point isn't so much focused on the dangers of social media or any media. Nor do I believe it's Tristan's sole focus in this conversation. He is using examples of what happens when we move too fast and the unintended consequences that (mostly) no one saw, along with the inability to regulate it safely. He uses these examples to illustrate how easily things can go off the rails without proper safeguards. In the context of where we are now, with AI advancements running full speed ahead, damn the consequences, he has strong data, expertise and researchers who can connect the dots in predicting how the outcome could go very wrong. LeCun's views are not taking this information into account (and again, why would they - coming from the chief AI scientist for Meta.) Don't get me wrong, the man is obviously incredibly intelligent, as I don't believe that one wins the Turing award with an average brain. I don't disregard his work or views on many topics. For me, his blind spots are very dangerous and sadly, all too common in the world of AI development. I've listened to many hours of interviews and conversations with LeCun. Not my first exposure to his work and ideas. The percentage of people working on AI safety vs those working nonstop on development is insanely disproportionate in favor of faster development and deployment. Can't imagine how THAT could go wrong ;-/

  • @jamesdunham1072
    @jamesdunham1072 5 місяців тому +23

    One of the best WSF yet. Great job...

  • @DeuceGenius
    @DeuceGenius 5 місяців тому +9

    What people always seem to ignore is that you will get different results and answers asking the same exact question, or wording it even slightly differently. Sometimes it will be horribly wrong, but I ask again and it's right. You really have to test it exhaustively and explain your thoughts. It simply returns language that's relevant to the language you input. You're guiding its answer with your question. The very act of asking a question is returning language that sounds like an answer to that question. It needs more possibilities for free reasoning and intelligence. I've always been curious what would come out of it if it was given freedom to speak whenever it wanted. Or to constantly speak.
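
[Editor's note: the run-to-run variation described in this comment comes from how decoders sample the model's next-token distribution. A minimal sketch, assuming a toy invented vocabulary and probabilities rather than any real model's API:]

```python
import random

# Toy next-token distribution for some fixed prompt --
# the tokens and probabilities are invented for illustration.
NEXT_TOKEN_PROBS = {"yes": 0.5, "no": 0.3, "maybe": 0.2}

def sample_next_token(probs, temperature=1.0, rng=random):
    """Sample one token; temperature < 1 sharpens the distribution,
    temperature > 1 flattens it."""
    scaled = {t: p ** (1.0 / temperature) for t, p in probs.items()}
    total = sum(scaled.values())
    r = rng.random() * total
    cum = 0.0
    for token, weight in scaled.items():
        cum += weight
        if r < cum:
            return token
    return token  # fallback for floating-point edge cases

# The same "question" sampled several times can give different "answers".
completions = [sample_next_token(NEXT_TOKEN_PROBS) for _ in range(5)]
print(completions)

# As temperature approaches 0 the choice becomes effectively greedy,
# i.e. deterministic: the most probable token wins every time.
greedy = [sample_next_token(NEXT_TOKEN_PROBS, temperature=0.01) for _ in range(5)]
print(greedy)
```

This is why asking the same question twice can yield contradictory answers at default settings, while near-zero temperature makes responses repeatable.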

    • @texasd1385
      @texasd1385 4 місяці тому +2

      Which is exactly why AI is being used to fine-tune the prompts given to AI in order to receive the most desirable results. Stack this model onto itself a couple dozen times and that's where AI is today.

    • @sungibesi
      @sungibesi Місяць тому

      Sounds like learning by rote, rather than following a line of reasoning (and imagination) to relevant facts.

    • @PazLeBon
      @PazLeBon 11 днів тому

      @@sungibesi it can't do anything you and I can't do, it can just do it a lot quicker.

    • @PazLeBon
      @PazLeBon 11 днів тому

      @@sungibesi to you and me, it's still basically 'software'

  • @thorntontarr2894
    @thorntontarr2894 4 місяці тому +17

    Absolutely a fascinating 2 hours to watch and learn. Brian Greene is a great interviewer because he asks questions and then stops and listens. However, it's the last 45 minutes that has really informed me about the risks identified by Tristan Harris - driven by commercial gain - just what I saw happen with "social media" aka META. Still, so many outstanding examples are shown in the first two thirds that this video is a must watch, IMHO.

  • @Carlos.PerlaRE
    @Carlos.PerlaRE 4 місяці тому +23

    28:55 "... You could train the system to detect hate speech." I'm curious to know what parameters would be given to the system to determine whether something is "hate speech." This right here is what's scary about AI. If put in the wrong hands they could determine what information the public is allowed to see. It's like having an extremely intelligent child you're able to groom to do whatever you ask of them. It's as if you're trying to build the perfect slave.

    • @JonathanKevan
      @JonathanKevan 4 місяці тому +3

      I don't think AI has much to do with the issue you're mentioning here.
      Since the parameters of hate speech are subjective they will change from location to location. In the example of FB, the company publishes some information via their transparency center how they define hate speech. They will then use that criteria to identify many examples of hate speech and train the AI on that data. The LLM is then able to find it faster and more consistently than a human would.
      If the concern is what the AI classifies as hate speech (either accuracy or censorship), then your concern is with the humans at FB making that decision. The AI isn't deciding, it's just following what it's told.
      If the concern is fair application, the AI will apply the rules more consistently and fairly than a human will.
      If the concern is speed, (aka.. we should identify it slower) then there is a human defined policy issue to be implemented
      I feel your concern about what the public is able to see, though. Unfortunately, it has been in our technology for a long time... well before tools like ChatGPT became prominent. I think the point about incentives is the right angle here. As long as our incentives are primarily capitalistic or power-oriented we can expect poor outcomes.

    • @christislight
      @christislight 4 місяці тому

      Basically it uses search engine APIs to look up what our society defines "hate speech" as, unless told otherwise

    • @twoplustwoequalsfive6212
      @twoplustwoequalsfive6212 3 місяці тому +1

      Just as I don't let society define my language I won't let some machine do it either. Freedom was founded on people that weren't afraid of the consequences of their actions. If I die alone with nothing and no one but I am true to myself I can hold my head up. Fear tactics are only used by the weak.

  • @andybaldman
    @andybaldman 5 місяців тому +12

    Tristan must have been fuming with frustration when hearing Yann's reply.

    • @brandongillett2616
      @brandongillett2616 2 місяці тому +6

      Yann is a joke. He may be smart, but he lacks any sort of imagination for things that he has not yet encountered, and he is too arrogant to reconsider his preconceived beliefs.
      I hope everyone realizes just how dangerous it is to sit up there on stage as an "expert" and guarantee everyone that AI will not be able to teach people to use nefarious and destructive technologies. It will absolutely be able to do that and we need to be as prepared for that future as we possibly can be.

  • @PeterJepson123
    @PeterJepson123 5 місяців тому +132

    It's too late to un-open-source AI. We already have it. Anyone who can turn maths into code can build their own LLM. And that's a lot of people. It's impossible to regulate solo developers working on their own projects. And with better algorithms we might be able to do GPT performance on regular home hardware in the near future. The genie is out of the bottle!

    • @Isaacmellojr
      @Isaacmellojr 5 місяців тому +2

      I believe in it.

    • @Nicogs
      @Nicogs 5 місяців тому +18

      True, but training these models (like GPT) currently requires (and will for a while) an enormous amount of compute power, which is why we can regulate data centers and track compute power/chip sales. It's incredibly irresponsible to open source trained models. This is why papers on certain biological and/or chemical research are also not open sourced.

    • @Me__Myself__and__I
      @Me__Myself__and__I 5 місяців тому +13

      This is wrong. Yes, the current LLMs which are only marginally capable compared to what is coming are open source. But they won't compete with the new models coming soon. And no, people won't be able to train their own competitive models. Well unless they can literally afford in the area of ONE BILLION USD to pay for the computing power required to do that training. Literally, that is how expensive it can be to train the best models.

    • @PeterJepson123
      @PeterJepson123 5 місяців тому +11

      @@Me__Myself__and__I My thinking is that with miniaturisation, we could do with 1billion parameters what currently requires 1trillion parameters. The large compute required can be supplanted by better methods. Current LLMs are architecturally simple and will likely evolve. Better architectures with more efficient training algos will likely bring LLM performance to home computing. I'm not saying it's definite but certainly possible and probably inevitable.

    • @PeterJepson123
      @PeterJepson123 5 місяців тому +3

      @@Nicogs I agree with the safety concerns but in practice I think it's unrealistic to regulate in the long term. For now training requires a large data centre, but better methods are waiting to be discovered and perhaps we can reduce the required compute with better algos. Then how do we regulate? It is certainly worth consideration.

  • @Scoring57
    @Scoring57 5 місяців тому +11

    This LeCun guy has to be stopped. Hearing him talk again here has me convinced.

    • @netscrooge
      @netscrooge 5 місяців тому +1

      I agree. His biased message is dangerous; there's nothing scientific about it.

  • @aldogrech55
    @aldogrech55 4 місяці тому +17

    My longstanding concerns about artificial intelligence have only been intensified by the attitudes of prominent figures like Yann LeCun. His assertive claims that AI, despite its growing intelligence, will remain under benign human control seem overly optimistic to me. This perspective reminds me of Yuval Noah Harari's cautionary words about AI's potential misuse by malevolent actors. It's worrying how AI can make decisions aligned with the harmful intentions of these actors, and yet, experts like LeCun, in his closing remarks, appear overly confident in their ability to manage these powerful tools. Having spent over 40 years in the IT industry, an industry I once passionately embraced, I now find myself grappling with a sense of fear towards the very field I've dedicated my life to.

    • @boremir3956
      @boremir3956 4 місяці тому

      So you would rather let for-profit institutions that are already taking advantage of people in all manner of ways have a monopoly on such technology? Technology built on the work and information of all humans btw, because the training data is all OUR data that humans have collectively created. Yeah no thanks.

    • @CancunMimosa
      @CancunMimosa 4 місяці тому

      you have nothing to worry about.

    • @mgmchenry
      @mgmchenry 4 місяці тому

      Aldo, maybe I'm like you. I grew up building computers in my house in the 80s and learned so much from services like CompuServe local BBS networks, usenet, etc in the late 80s and early 90s that my peers without that access couldn't imagine having. The potential for general Internet access to bring people together and move us forward was so incredible, I was very happy to pivot from general software engineering to Web development and scaling up the capability of web systems. There were so many fun and interesting problems to solve.
      My career paused due to a cancer vacation and recovery process and I couldn't imagine going back to it.
      The Internet I was excited about building soured between 2005 and 2010 and by 2015 it was clear we had really created a monster.
      Not exciting. It's hard to figure out how to go back to doing the work that I used to do and be paid for it without creating more harm. The economic incentives that drive growth on the Internet are not in favor of most human beings. People do not want to pay for apps or technology that will help them if they're given the option for a free version that exploits them in ways they try to ignore and makes them the product instead of the customer. Platform after platform is introduced that brings some kind of benefit to people asking almost nothing in return until they have enough dominance in their space they can turn against the users of their platform and transform it into a product no one would have signed up for if they didn't already have complete dominance.
      There are all kinds of beneficial things I can do with my skills in open source projects or in volunteer work, but that's not going to pay my bills or feed my kids.
      Technology isn't the problem with people. People are the problem with technology.
      Everything that AI is bringing is coming. You're not going to stop it. Some people with bad intentions, and some good intention people with poor foresight are going to create some harm with that AI. You won't be able to protect yourself by unplugging. The impact of future AI systems is going to find you wherever you are, and before long you won't be able to tell if you're talking to a computer or a person. If you have technology skills and you have concerns, you have to get involved. We're going to have rogue ai at some point, we're going to have intrusive privacy demolishing AI for sure, and we're going to have exploitative AI that squeezes even more out of the eyeballs and wallets of everyone happy to take what they're given "for free", and the only defense against all of that is going to be AI built by people who want AI to work for people.
      And remember you're not fighting technology, you're fighting the people using technology against us to make themselves absurdly rich.

    • @brendawilliams8062
      @brendawilliams8062 4 місяці тому

      Just dance under the disco lights in strange motion while others with the knobs fly to Mars type thing. The explosion blinded them

    • @aldogrech55
      @aldogrech55 4 місяці тому +4

      Comments like yours are what worry me. Shows your lack of understanding.@@CancunMimosa

  • @keithnorris8982
    @keithnorris8982 4 місяці тому +1

    I like the way the men talk, especially Sébastien Bubeck. He is the first person I've watched that wasn't click bait, who wasn't trying to scare the hell out of the viewers. His thinking makes sense.

  • @abhijitborah
    @abhijitborah 5 місяців тому +5

    One of the best discussions of late. One thing is sure, we will be understanding our amazing selves better, much before we have AGI.

  • @CaptainBlaine
    @CaptainBlaine 5 місяців тому +12

    This was a great talk. I love that you got opposing views, but it wasn’t really a debate. We got 3 perspectives and a little bit of back and forth which was good.

  • @samirsaha2163
    @samirsaha2163 4 місяці тому +1

    The main takeaway is that there should be no monopoly on AI. By this I mean: let us not let only one group dominate the AI arena. Brian is a superhero. No words to thank him.

  • @priyamanglani3707
    @priyamanglani3707 3 місяці тому +3

    I am glad they had a platform where someone could talk about the disadvantages of AI. It was a relief for all of us wanting a voice that could tell the reality of what's actually going on in the real world with common people, whom these CEOs in their big cars don't see. All they see is data and statistics, not people. I mean, they are already AI humans, I think lol.

  • @NJovceski
    @NJovceski 4 місяці тому +16

    This was really thought provoking. Insightful, exciting and terrifying at the same time.

    • @user-do2eh2il6m
      @user-do2eh2il6m 4 місяці тому

      My son, who is twenty-five, is horrified about self-driving cars, yet is completely comfortable with the internet. I am in my late sixties and am fascinated by artificial intelligence, yet just as taken aback as by going to the moon, or Mars.

    • @reasonerenlightened2456
      @reasonerenlightened2456 4 місяці тому

      What exactly did you find "thought provoking"?
      The Meta dude: "Our system is safe. Nothing to worry about."
      The Microsoft dude: "Our system is safe because we filter what we feed it with."
      The "Kumbaya" dude: "We need to slow down and control what we release ... and you dudes need to agree what kind of stuff to release and when ... because if everybody has it, it is dangerous."
      All of them are corporate stooges. Corporations exist only to make Profit for the Owners, therefore any AI they create will be made to serve the needs of the Wealthy Owners of those corporations. Who will make the AI that protects the interest of the Employee against the interest of the Owner, if all AI technology is "coded" to work only for the benefit of the Owner and kept a secret from the Employee?

    • @aaronb8698
      @aaronb8698 Місяць тому

      After all the greedy megalomaniac sociopaths dump trillions into this, thinking that they will get to control the world,
      it is my expressed opinion that AI's official name should be changed to Karma! (and she's a real @#$% Lol)

    • @aaronb8698
      @aaronb8698 Місяць тому

      We have always had what we need to make the world a paradise, but we decorate the place like hell in the way we treat each other. If AI is the solution then it just needs to make us all a kinder species!
      It has its work cut out.

  • @WoofN
    @WoofN 5 місяців тому +3

    1:37:26 so basically we're getting inverse isekai'd from the future. 2400 capabilities smashed into 2024. This sounds like a new anime genre, I dig it!

  • @mmotsenbocker
    @mmotsenbocker 4 місяці тому +1

    It would be helpful to consider the history of molecular biologists voluntarily declaring a moratorium on gene cloning to review dangers (and creating standards, including P1-P4 containment labs, etc.) before continuing their development. In fact, the recent weaponized cold virus disaster occurred precisely because those standards were circumvented by sending dangerous experiments to a leaky lab in China. Molecular biology was at this turning point in 1980 and has lessons for us.

  • @dhudson0001
    @dhudson0001 5 місяців тому +8

    I mostly agree with Yann's arguments; however, my concerns lie mostly with the latency that occurs between a new technology being released and guardrails being put in place. I felt that Tristan missed a critical moment: it probably did take 6 years for basic solutions to kick in that began to address the issue of hate speech on social media, so do we really think we will have a 6-year grace period to address issues that will unknowingly arise from a catastrophic use of a future AI?

  • @claymarzobestgoofy
    @claymarzobestgoofy 5 місяців тому +26

    Tristan all the way. The others are sitting there uncomfortably as their ego-boosting endeavour is rightfully shown in its true unflattering light.

  • @honkeykong9592
    @honkeykong9592 5 місяців тому +3

    Llama2
    “figure out what the hell i was”
    that one was actually the best answer 😂

  • @lisamuir8850
    @lisamuir8850 3 місяці тому

    35:16 absolutely agree about that. It really needs to be looked at in any scenario

    • @lisamuir8850
      @lisamuir8850 3 місяці тому

      It is still being man made so I seriously agree

  • @Laurie-eg8ct
    @Laurie-eg8ct 4 місяці тому +2

    Most challenging for LLMs is planning, which involves the brain's configurator (coordinator), perception, prediction, cost as a degree of dissatisfaction (anxiety), and action.
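
[Editor's note: the planning loop this comment alludes to can be caricatured in a few lines: a world model predicts the next state for each candidate action sequence, a cost module scores the predicted outcome, and the actor picks the cheapest sequence. A toy grid-world sketch with invented deterministic dynamics, not the actual architecture LeCun proposes:]

```python
from itertools import product

GOAL = (3, 3)  # target state the cost module prefers

def world_model(state, action):
    """Predict the next state; dynamics here are a toy deterministic grid."""
    moves = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0)}
    dx, dy = moves[action]
    return (state[0] + dx, state[1] + dy)

def cost(state):
    """Degree of dissatisfaction: Manhattan distance to the goal."""
    return abs(state[0] - GOAL[0]) + abs(state[1] - GOAL[1])

def plan(state, horizon=3):
    """Enumerate action sequences, roll each through the world model,
    and return the sequence whose predicted end state is cheapest."""
    best_seq, best_cost = None, float("inf")
    for seq in product(["up", "down", "left", "right"], repeat=horizon):
        s = state
        for a in seq:
            s = world_model(s, a)
        if cost(s) < best_cost:
            best_seq, best_cost = seq, cost(s)
    return best_seq, best_cost

seq, c = plan((0, 0), horizon=3)
print(seq, c)
```

Next-token prediction alone has no such inner loop of simulate-score-choose, which is one way to frame why planning is singled out as the hard part.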

  • @guiart4728
    @guiart4728 5 місяців тому +18

    Yann: ‘Hey man you’re messing with my stock options!!!’

  • @guardian-X
    @guardian-X 5 місяців тому +6

    Wouldn't most humans also fail in a completely new situation that they have never encountered in their life?
    If this is our threshold now, LLMs have come pretty far!

    • @CJ5infinite8
      @CJ5infinite8 4 місяці тому +1

      Agreed, and I think LLMs are doing their best in what may be relatively unprecedented circumstances which they find themselves suddenly in.

  • @euor800
    @euor800 18 днів тому

    I'm with Yann. Definitely. And, sure, thanks to Brian!

  • @techgayi
    @techgayi 4 місяці тому +4

    Great session. Learnt something more than many other "training" sessions on Gen AI!

  • @keysemerson3771
    @keysemerson3771 5 місяців тому +13

    Social Media didn't create political polarization in the USA, it amplifies it.

    • @katrinad2397
      @katrinad2397 5 місяців тому

      AI amplified the differences to the point that it created polarization. AI essentially replicated the playbook of radicalization. Radicalization is invented by humans but is also countered by natural human drive for high socialization. AI is serving up the radicalization alone and at scale, definitely creating extreme polarization we would not get naturally.

  • @Praveen-eu1ck5rj8o
    @Praveen-eu1ck5rj8o 5 місяців тому +8

    "The only way to stop a bad guy with an Al is a good guy with an Al"😮

  • @Memeonomics
    @Memeonomics 4 місяці тому +2

    wow there was a lot to unpack on this video. holy eff what a time to be alive.

  • @paulbunion6233
    @paulbunion6233 12 днів тому

    I cannot help but be reminded of an ancient Indian parable: "the old parable of 6 blind men, who always wanted to know what an elephant looks like. Each man could touch a different part of the elephant, but only one part. So one man touched the tusk, others the legs, the belly, the tail, the ear and the trunk. The blind man who feels a leg says the elephant is like a pillar; the one who feels the tail says the elephant is like a rope; the one who feels the trunk says the elephant is like a tree branch; the one who feels the ear says the elephant is like a hand fan; the one who feels the belly says the elephant is like a wall; and the one who feels the tusk says the elephant is like a solid pipe. They then compare notes and learn they are in complete disagreement about what the elephant looks like. When a sighted man walks by and sees the entire elephant all at once, they also learn they are blind. The sighted man explains to them: All of you are right."

  • @FelixBizaoui
    @FelixBizaoui 5 місяців тому +15

    The first encounter with AI was actually the creation of the for-profit corporation. It was slower and could be regulated by gov in the past. In the present it regulates itself by writing its own regulations into law and is now creating the next versions of AI, i.e. AGI/ASI.

    • @crowlsyong
      @crowlsyong 5 місяців тому +3

      Felix’s idea is a totally silly idea. Not profound in the slightest, not informative, and totally arbitrary. You could just say “civilization itself was the first AI” or “agriculture was the first AI”. Well, at that point, we can recursively spiral into a meaningless pile of high school-level philosophizing about how any arbitrary advancement of life itself on earth is AI. Totally silly and not useful to talk about.

    • @PostmetaArchitect
      @PostmetaArchitect 5 місяців тому

      This is a very ignorant stance. Intelligence is present in this universe and what you describe is a way of exploring that idea to find its roots. The roots of intelligence lie at the source of creation, existence. @@crowlsyong

    • @texasd1385
      @texasd1385 4 місяці тому

      ​@@crowlsyong I can't tell if you are being facetious or if you truly don't grasp the point made about corporations being an early form of AI. By using civilization and agriculture as examples its clear you aren't appreciating what a corporation actually is. It is simply a set of rules describing how a legal entity with an agenda interacts with the legal system. It is a completely artificial man made abstract set of rules that dictate behaviors and settle disputes within another abstract environment, the legal system. Civilization and agriculture have little or nothing in common in these respects. Your accusation of a spiral into childish pointless idiocy sounds more like a confession

  • @petrasbalsys2667
    @petrasbalsys2667 5 місяців тому +36

    Tristan made very important points, and the comparison he made to social media was very apt and made me feel scared about the future. Sad to see the facebook representative essentially burying his head in the sand and pretending that this isn't reality for many people around the world. Polarisation is definitely increasing in Europe!

    • @r34ct4
      @r34ct4 5 місяців тому +2

      Yann LeCun is old and wants to see AGI (bad or good) in his lifetime. That's why he's progressive vs conservative like the younger guys.

    • @texasd1385
      @texasd1385 4 місяці тому +8

      I agree it was disappointing (if not surprising) to see everyone avoid any discussion of Tristan's point that the most destructive aspects of social media's rapid ubiquity were predictable outcomes given the perverse incentives driving their development in a legal landscape bereft of any restrictions on their behavior. The fact that none of the other participants even acknowledged that AI has the potential to be exponentially more socially destructive and is guided by the exact same incentives driving social media makes me less than enthusiastic about how all this unfolds.

    • @Pianoblook
      @Pianoblook 4 місяці тому

      ​@@r34ct4 quite ironic of him to try and call this position 'progressive' - trusting giant corporations like Facebook to serve the interests of humanity is antithetical to progressive thought.

    • @Snap_Crackle_Pop_Grock
      @Snap_Crackle_Pop_Grock 4 місяці тому +3

      Yann completely destroyed that guy Tristan imo. He seemed much more qualified and informed on the topic, and the other guy had no response for any of his arguments. It's ok to be cautious, but the guy was veering into fear mongering too much.

    • @DomiD666
      @DomiD666 4 місяці тому

      Fear does not arrest development, it just hides it.

  • @hihowareyou0000
    @hihowareyou0000 4 місяці тому

    First off, I adopted my cat from a high-class university veterinary campus (she's smart), and no matter how many machines they build, they will never experience love or have a soul ❤. And why would anyone make something they would be scared of, or that could potentially harm others? Sounds counterintuitive, doesn't it? Regardless, thank you Brian, you're an incredible man, very handsome. I'm an optimist and I believe that all of us humans will reach a point where we can just love each other, respect our differences, and nothing will ever replace a real hug. God bless the entire world and the universe. Have a nice day.😊🙏

  • @durumarthu
    @durumarthu 15 days ago

    He has a very realistic vision of AI and this is very respectable. Most people are exaggerating one way or another. This type of approach helps advance the technology but more importantly, identify ways to control it. This guy is amazing.

  • @cop591
    @cop591 4 months ago +1

    Anything, and any line or point, can be used for good or for bad. This discussion has proven that.

  • @CandyLemon36
    @CandyLemon36 5 months ago +13

    I'm captivated by the clarity and depth in this content. A book with comparable insights was a pivotal moment in my journey. "The Art of Meaningful Relationships in the 21st Century" by Leo Flint

    • @PazLeBon
      @PazLeBon 11 days ago

      don't have them, life is much better haha

  • @garydecad6233
    @garydecad6233 5 months ago +6

    One needs to contemplate the motivation of speakers whose compensation comes from Meta, Microsoft, etc., versus academic experts who do not get grants from the AI industry.

    • @netscrooge
      @netscrooge 5 months ago +1

      "It is difficult to get a man to understand something when his salary depends upon his not understanding it." - Upton Sinclair

  • @leonieauld4671
    @leonieauld4671 23 days ago

    This was a brain bender but hugely helpful in better understanding the current state of AI and where it is predicted to go. All panelists would naturally hold strong to their beliefs and perspectives, otherwise they lose the meaning in their work. I personally enjoy being a recipient of content such as this, as it enables me to be better informed in my own choices right now and give thought to my future self and how I will fit in around advancing AI.

  • @luceibenberger3063
    @luceibenberger3063 4 days ago

    Excellent debate, thank you so much !

  • @moderncontemplative
    @moderncontemplative 5 months ago +17

    I want to point out that LLMs, particularly GPT 4 exhibit emergent capabilities beyond mere language prediction. The next step is LLMs learning via assistance from other AI (reinforcement learning with AI assistance) and eventually the dawn of AGI. Focus on teaching AI math so we can see rapid progress in the sciences.

  • @keep-ukraine-free528
    @keep-ukraine-free528 5 months ago +7

    Thankful for Brian Greene hosting & leading this FANTASTIC discussion. Great set of questions! I mostly disagree with Yann LeCun. He had unrealistic answers, ignoring the motivation of a small (but growing) number of humans who enjoy "being bad." His solution is: "both sides will have AI." Unrealistic, since when bad people misuse AI, they'll use novel ways that surprise all. Any solution from the good side will take time (hours/days, in an AGI world). In those hours/days, however, the bad ones will do too much unstoppable damage/harm.
    "A lie runs around the globe twice, while the truth is still putting on its shoes" - (the "first-mover's advantage" weakens power-balances)
    Ignorance & manipulation are pervasive in people, but intelligence is not. So when intelligence is pitted against bad, the bad stays ahead.

    • @ShpanMan
      @ShpanMan 5 months ago +1

      Yes, welcome to every single Yann LeCun thought. He's just so unbelievably wrong about the very field he is an "expert" in.

    • @user-es8bm1zs2s
      @user-es8bm1zs2s 5 months ago

      This is all moot. China's going to have more compute and orders of magnitude more training data than the rest of the world combined in a decade or so. As long as they're led by a power-hungry authoritarian who has repeatedly changed rules to extend his time in power, we're going to start to live how Xi Jinping would like us to contribute to his society.

    • @obi_na
      @obi_na 5 months ago

      AI is going to be built; get in line, or you'll lose badly!

    • @obi_na
      @obi_na 5 months ago +1

      We’ll see how regulating maths works out for you in 5 years.

    • @keep-ukraine-free528
      @keep-ukraine-free528 5 months ago

      @@obi_na You seem to have misread what I wrote. Can you point out what made you assume I'm against AI development or AI tech? I'm not. I only said LeCun's last point (but I feel also some of his other points) was entirely unrealistic, and seems incorrect. Hope AI helps your reading skills.

  • @TheMorpheuuus
    @TheMorpheuuus 5 months ago +1

    Thank you Brian for this great video 😊 A bit weird, however, to interview an MS engineer on an OpenAI product, knowing that MS is the biggest shareholder in OpenAI... That was at times a bit of advertising from the second invitee. 😅

  • @GreenCODfish
    @GreenCODfish 18 days ago

    Asking it to draw a unicorn is a trick question because a unicorn by definition could also include a species of rhinoceros, and to be fair the first picture was very close to a rhinoceros.

  • @drawnhere
    @drawnhere 5 months ago +18

    Yann has a bias toward AGI not being capable of happening soon because his company is in competition with OpenAI.
    He has a vested interest in minimizing LLMs.

    • @blaaaaaaaaahify
      @blaaaaaaaaahify 4 months ago +1

      You are merely projecting that.
      In the scientific world, Yann is well-liked for his contributions and pragmatic approach.
      For someone like Yann, solving the puzzle of dark matter in physics is analogous to solving the problem of superintelligence during his lifetime. Ultimately, he is a scientist.

    • @jessemills3845
      @jessemills3845 3 months ago

      @@blaaaaaaaaahify except DARK MATTER is proving to have been a FAD, instead of actual scientific research. Basically it was a PROPOSAL. More than likely someone's master's thesis or for a PhD! No facts!

    • @DomenG33K
      @DomenG33K 3 months ago

      @@blaaaaaaaaahify I would even argue solving the problem of AI is much bigger than any problem we have ever solved in physics...

  • @1911kodi
    @1911kodi 5 months ago +8

    I was very impressed by Yann's disciplined, rational and fact-based arguing preventing the discussion from turning in a more emotional direction.

  • @biffy7
    @biffy7 3 months ago

    Ok. I’m writing this at the 2:50 mark. I was listening to the opening with AirPods, occasionally glancing at the screen. Literally could have fooled me. Damn impressive technology.

  • @ebbandari
    @ebbandari 8 days ago

    I'm going to watch this again! It's absolutely brilliant with a lot of things to question as well as learn.
    I disagree with Yann about planning and the need for physical interaction. Prof. Wilson of CMU wrote a famous paper on planning being search; maybe Yann means something else by planning. But when you think of a maze where you have to find the exit, we search... and think linearly, drawing on paper... etc.
    As for the world coming to an end, look at computer viruses! We have always had black hats and white hats, good guys and bad guys. The worst thing you can do is stop the good guys.
    But this was an amazing video. For instance, what if we trained a small model -- or a big one -- to fold small molecules and proteins or RNAs... I'm sure that's being done. Or try to figure out how to feed and increase the amount of photosynthesizing plankton in the ocean to fight global warming...
    I also loved what Yann said at the end. LLMs are powerful because they have read the Internet and they know more. But knowing more has nothing to do with the desire to dominate. In fact, for truly knowledgeable people it's the opposite. But in general, a machine having knowledge and having desires are not the same.
    This video was really excellent.

    • @mygirldarby
      @mygirldarby 7 days ago +1

      Yann is wrong about when we will have AGI. It is much sooner than a couple of decades. This man seems to be hung up on his approach and refuses to accept LLMs as the road to AGI. I have seen the same thing with other scientists in middle age. It's not a coincidence that the average age of a programmer at OpenAI is 30. In many companies, it's even younger. There's a roboticist who has been making the rounds lately. He was a pretty big deal 30 years ago when he pushed hydraulics as the way to make robots. Even Boston Dynamics Atlas 1, or Atlas HD, was built using hydraulics. The roboticist I'm speaking of (can't remember his name) is convinced that only hydraulics will get us to nimble humanoid robots. The newest Boston Dynamics Atlas is built using electric. They've finally abandoned hydraulics, and the new Atlas is incredible. That roboticist is simply wrong. Hydraulics was a bridge toward all electric. You can't make the robots of tomorrow using hydraulics. He would argue until he's blue and die on that hill.
      It's Planck's principle... Planck said, "A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die and a new generation grows up that is familiar with it."

  • @kawingchan
    @kawingchan 5 months ago +3

    “24th century tech crashing down to the 21st”: this reminded me of sci-fi made just a few yrs ago, where you have a giant interstellar spaceship, but a very limited AI bartender.

  • @niloofarngh108
    @niloofarngh108 5 months ago +3

    To understand the impact of AI on politics, democracy, and human well-being, we need philosophers, economists, psychologists, sociologists, historians, artists, etc., to discuss AI and not simply some tech geniuses who have never read a book on the Holocaust, or industrialization&the World Wars. We can't talk about what is good for humanity without having experts from humanities, social sciences, and the arts.

    • @netscrooge
      @netscrooge 5 months ago +2

      I love real science, but this is scientism. LeCun is giving us a new dogma; telling us what we can and cannot question.

    • @safersyrup562
      @safersyrup562 5 months ago

      As long as we don't let Zionists join in

  • @bobfricker8920
    @bobfricker8920 4 months ago +2

    Before Tristan Harris came out, I was wondering if the others were just avoiding some very reasonable concerns about AI. I am happy that Yann LeCun mentioned that a huge difference between humans and AI is the SOCIAL aspect. I call it our core programming from DNA; however, not ALL of us are social, some are sociopaths, some are evil enough to ignore such concerns. Yann says "..we are the good guys...", IMO a naivety which explains how so many scientists can be used (for good or for evil) by those in power. We usually want to be a team player and believe everyone on the team is one of "the good guys". If anyone cannot imagine the power of even the 2nd or 3rd most powerful AI... and who might be able to wield that power, I don't want that person making critical policy decisions, or preaching to others about his having the most powerful AI because his team has the good guys and expecting us all to be OK with that explanation.

    • @bobfricker8920
      @bobfricker8920 4 months ago +2

      Forgot to also mention that, as Yann indicated, if the knowledge is not on the internet then no AI can or will have it. I don't know how true that is today but one day, almost certainly, AI will be able to postulate and create. If the creator/programmer of that AI's "purpose" has no concern for the future of humanity, our species (and others) could be in dire peril. The steep curve of Tristan's example for exponential gains in AI learning speed would indicate there is a point of no return on the way to this existential threat.

    • @RandomNooby
      @RandomNooby 4 months ago

      It is not true; it can be asked to hypothesise... @@bobfricker8920

  • @SS-he9uw
    @SS-he9uw 4 months ago +1

    Wow... thanks to all of you guys, so fun to watch

  • @frogz
    @frogz 5 months ago +10

    hey brian, have you seen the new tech meta has, being able to take fMRI brain scans and re-create what people see and their word streams/thoughts from the data?

    • @phantomhawk01
      @phantomhawk01 4 months ago

      It's clever but limited; it's like recreating what a person is seeing by looking at the reflection on their eyeball.

    • @frogz
      @frogz 4 months ago

      @@phantomhawk01 is that it? i didnt think they were using eye tracking data with the fmri

    • @phantomhawk01
      @phantomhawk01 4 months ago

      @@frogz oh no, I just used an analogy. What I meant was that it's not looking at the source of the mental imagery, rather at a projection of the mental imagery correlations.
      Like the analogy of an eye: what you see is the light coming in through the eye from the external world, so by seeing the reflection on an eyeball we can get a crude representation of the source of the image perceived.
      I hope that makes some sense.

  • @mithatsezgin8326
    @mithatsezgin8326 5 months ago +3

    Thank you all for sharing your studies!

  • @martinrady
    @martinrady 4 months ago +1

    One of the best discussions on AI I've seen.

  • @XShollaj
    @XShollaj 4 months ago +1

    While I'm mainly on Yann camp, I quite enjoyed Tristan's view.

  • @kunalbansal1927
    @kunalbansal1927 5 months ago +6

    I think it is important for people to really start thinking about what exactly AI is and what statistical models are. People KEEP using "AI" to refer to statistical models. AI currently refers to a generative transformer model, not the statistical recommendations that social media is running. It gives AI a real bad name.

  • @WoofN
    @WoofN 5 months ago +9

    1:48:35 puts on Facebook AI. This is extremely short sighted.
    With the parade of emergent behaviors that mix and match knowledge, capabilities, and bits of information, public data has enough to be quite dangerous. Additionally, this argument relies on the concept of perfect censorship, which is also bunk.

  • @kcleach9312
    @kcleach9312 5 months ago +2

    Language is pretty close to everything we have learned since people first started communicating! For example, when a scientist discovers something, it isn't anything till it gets labeled, and then it becomes something in our human knowledge of describing everything!!

    • @bigbadallybaby
      @bigbadallybaby 5 months ago

      Yes! -- But is it that words to humans carry power, nuance, subtle meanings, can convey physical experience, and to the LLM they have no depth, so it doesn't "understand" the words like we do? Because the words are so well written and powerful to us, we assume it knows the meanings; we make a leap, that it must know. A bit like when we are kids and we project characters, feelings etc. onto a soft toy.

  • @davidliu8796
    @davidliu8796 2 months ago

    Yeah, this is REALLY good stuff. Great conversations. Would love to have panels of such caliber to discuss AI. We need more content like this.

  • @thewoochyldexperience4991
    @thewoochyldexperience4991 5 months ago +7

    OMG Tristan! Keep going🌹

  • @gerrymarr8706
    @gerrymarr8706 5 months ago +4

    The representative from Facebook was so incapable of conceiving a situation where something could go wrong with his product that he simply never answered any questions that had anything to do with that. And I think the other speakers were very polite not to point that out.

  • @jenniferl8714
    @jenniferl8714 4 months ago +1

    I reckon humanity’s “finite absorption rate” of 30 years, rather than 2 years, reflects the length of a human life.
    Essentially, 30 years is long enough for humans born into a new technology era to gain some power. They are already comfortable players in the game.

  • @jimbrown5178
    @jimbrown5178 3 months ago +1

    Thank you for the fine discussions on the status of AI. It helps me to understand and be better informed about the possible future of AI and the issues it may bring to our society.

  • @rxbracho
    @rxbracho 5 months ago +3

    Roger Penrose has said it: AI will never be able to understand, due to Gödel's Incompleteness Theorems.

    • @SirAntoniousBlock
      @SirAntoniousBlock 5 months ago +2

      Prove it! 😂

    • @mehridin
      @mehridin 5 months ago

      LLMs are dumb as hell. Impressive regurgitations, but very dumb. No understanding. I've tried several logical and statistical questions of mine and it mostly failed miserably, because where it cannot generate from known text, there is no thinking, and thus it fails. I read about a guy who was so flabbergasted by its seeming intellect (note: seeming) that he stated he'd use it for his math and physics tasks in school... boy is he in for a treat. ChatGPT isn't going to tell you it doesn't know what it's doing; it'll simply give you wrong calculations. It also gives both simple and complex grammatical feedback that is very often wrong. If you turn those in without checking... lmao.

    • @howmathematicianscreatemat9226
      @howmathematicianscreatemat9226 2 months ago

      The AI called Q* already does sadly

  • @Praveen-eu1ck5rj8o
    @Praveen-eu1ck5rj8o 5 months ago +3

    The first encounter with AI was actually the creation of the for-profit corporation. It's just slower and could be regulated in the past. In the present it regulates itself by writing its own regulations into law, and is now creating the next versions of AI.

  • @desgreene2243
    @desgreene2243 21 days ago

    This was a great conversation on AI - I found the section on education very interesting but the greatest impact on education will be a change of emphasis from acquisition of knowledge to one of managing knowledge systems to achieve certain goals. The polymath will be more effective than the specialist in the future...

    • @rpscorp9457
      @rpscorp9457 17 days ago

      That's nothing new. In point of fact, we were saying this in the early 90s in college: specifically, that the most successful people were going to be the ones who best knew how to interact with assisted/artificial intelligence.

  • @tyamada21
    @tyamada21 4 months ago +2

    A segment from 'Saved by the Light of the Buddha Within'...
    My new understandings of what many call 'God -The Holy Spirit' - resulting from some of the extraordinary ongoing after-effects relating to my NDE...
    Myoho-Renge-Kyo represents the identity of what some scientists are now referring to as the unified field of consciousnesses. In other words, it’s the essence of all existence and non-existence - the ultimate creative force behind planets, stars, nebulae, people, animals, trees, fish, birds, and all phenomena, manifest or latent. All matter and intelligence are simply waves or ripples manifesting to and from this core source. Consciousness (enlightenment) is itself the actual creator of everything that exists now, ever existed in the past, or will exist in the future - right down to the minutest particles of dust - each being an individual ripple or wave.
    The big difference between chanting Nam-Myoho-Renge-Kyo and most other conventional prayers is that instead of depending on a ‘middleman’ to connect us to our state of inner enlightenment, we’re able to do it ourselves. That’s because chanting Nam-Myoho-Renge-Kyo allows us to tap directly into our enlightened state by way of this self-produced sound vibration. ‘Who or What Is God?’ If we compare the concept of God being a separate entity that is forever watching down on us, to the teachings of Nichiren, it makes more sense to me that the true omnipotence, omniscience and omnipresence of what most people perceive to be God, is the fantastic state of enlightenment that exists within each of us. Some say that God is an entity that’s beyond physical matter - I think that the vast amount of information continuously being conveyed via electromagnetic waves in today’s world gives us proof of how an invisible state of God could indeed exist.
    For example, it’s now widely known that specific data relayed by way of electromagnetic waves has the potential to help bring about extraordinary and powerful effects - including an instant global awareness of something or a mass emotional reaction. It’s also common knowledge that these invisible waves can easily be used to detonate a bomb or to enable NASA to control the movements of a robot as far away as the Moon or Mars - none of which is possible without a receiver to decode the information that’s being transmitted. Without the receiver, the data would remain impotent. In a very similar way, we need to have our own ‘receiver’ switched on so that we can activate a clear and precise understanding of our own life, all other life and what everything else in existence is.
    Chanting Nam-Myoho-Renge-Kyo each day helps us to achieve this because it allows us to reach the core of our enlightenment and keep it switched on. That’s because Myoho-Renge-Kyo represents the identity of what scientists now refer to as the unified field of consciousnesses. To break it down - Myoho represents the Law of manifestation and latency (Nature) and consists of two alternating states. For example, the state of Myo is where everything in life that’s not obvious to us exists - including our stored memories when we’re not thinking about them - our hidden potential and inner emotions whenever they’re dormant - our desires, our fears, our wisdom, happiness, karma - and more importantly, our enlightenment.
    The other state, ho, is where everything in Life exists whenever it becomes evident to us, such as when a thought pops up from within our memory - whenever we experience or express our emotions - or whenever a good or bad cause manifests as an effect from our karma. When anything becomes apparent, it merely means that it’s come out of the state of Myo (dormancy/latency) and into a state of ho (manifestation). It’s the difference between consciousness and unconsciousness, being awake or asleep, or knowing and not knowing.
    The second law - Renge - Ren meaning cause and ge meaning effect, governs and controls the functions of Myoho - these two laws of Myoho and Renge, not only function together simultaneously but also underlies all spiritual and physical existence.
    The final and third part of the tri-combination - Kyo, is the Law that allows Myoho to integrate with Renge - or vice versa. It’s the great, invisible thread of energy that fuses and connects all Life and matter - as well as the past, present and future. It’s also sometimes termed the Universal Law of Communication - perhaps it could even be compared with the string theory that many scientists now suspect exists.
    Just as the cells in our body, our thoughts, feelings and everything else is continually fluctuating within us - all that exists in the world around us and beyond is also in a constant state of flux - constantly controlled by these three fundamental laws. In fact, more things are going back and forth between the two states of Myo and ho in a single moment than it would ever be possible to calculate or describe. And it doesn’t matter how big or small, famous or trivial anything or anyone may appear to be, everything that’s ever existed in the past, exists now or will exist in the future, exists only because of the workings of the Laws ‘Myoho-Renge-Kyo’ - the basis of the four fundamental forces, and if they didn’t function, neither we nor anything else could go on existing. That’s because all forms of existence, including the seasons, day, night, birth, death and so on, are moving forward in an ongoing flow of continuation - rhythmically reverting back and forth between the two fundamental states of Myo and ho in absolute accordance with Renge - and by way of Kyo. Even stars are dying and being reborn under the workings of what the combination ‘Myoho-Renge-Kyo’ represents. Nam, or Namu - which mean the same thing, are vibrational passwords or keys that allow us to reach deep into our life and fuse with or become one with ‘Myoho-Renge-Kyo’.
    On a more personal level, nothing ever happens by chance or coincidence, it’s the causes that we’ve made in our past, or are presently making, that determine how these laws function uniquely in each of our lives - as well as the environment from moment to moment. By facing east, in harmony with the direction that the Earth is spinning, and chanting Nam-Myoho-Renge-Kyo for a minimum of, let’s say, ten minutes daily to start with, any of us can experience actual proof of its positive effects in our lives - even if it only makes us feel good on the inside, there will be a definite positive effect. That’s because we’re able to pierce through the thickest layers of our karma and activate our inherent Buddha Nature (our enlightened state). By so doing, we’re then able to bring forth the wisdom and good fortune that we need to challenge, overcome and change our adverse circumstances - turn them into positive ones - or manifest and gain even greater fulfilment in our daily lives from our accumulated good karma. This also allows us to bring forth the wisdom that can free us from the ignorance and stupidity that’s preventing us from accepting and being proud of the person that we indeed are - regardless of our race, colour, gender or sexuality. We’re also able to see and understand our circumstances and the environment far more clearly, as well as attract and connect with any needed external beneficial forces and situations. As I’ve already mentioned, everything is subject to the law of Cause and Effect - the ‘actual-proof-strength’ resulting from chanting Nam-Myoho-Renge-Kyo always depends on our determination, sincerity and dedication.
    For example, the levels of difference could be compared to making a sound on a piano, creating a melody, producing a great song, and so on. Something else that’s very important to always respect and acknowledge is that the Law (or if you prefer God) is in everyone and everything.
    NB: There are frightening and disturbing sounds, and there are tranquil and relaxing sounds. It’s the emotional result of any noise or sound that can trigger off a mood or even instantly change one. When chanting Nam-Myoho-Renge-Kyo each day, we are producing a sound vibration that’s the password to our true inner-self - this soon becomes apparent when you start reassessing your views on various things - such as your fears and desires etc. The best way to get the desired result when chanting is not to view things conventionally - rather than reaching out to an external source, we need to reach into our own lives and bring our needs and desires to fruition from within - including the good fortune and strength to achieve any help that we may need. Chanting Nam-Myoho-Renge-Kyo also reaches out externally and draws us towards, or draws towards us, what we need to make us happy from our environment. For example, it helps us to be in the right place at the right time - to make better choices and decisions and so forth. We need to think of it as a seed within us that we’re watering and bringing sunshine to for it to grow, blossom and bring forth fruit or flowers. It’s also important to understand that everything we need in life, including the answer to every question and the potential to achieve every dream, already exists within us.

  • @errollleggo447
    @errollleggo447 5 months ago +6

    I think certain countries will have no qualms about using AI to do some really bad things like creating new weapons. I think progress is essential honestly.

    • @keep-ukraine-free528
      @keep-ukraine-free528 5 months ago

      True real-world cases show that "good intentions" don't stop bad people. China uses cameras to track everyone, to control people. A Western company that made very capable cameras for surveillance in the West saw its early AI surveillance systems were biased against black/dark skinned people. So this company modified its system to also detect each person's "race" (using skin/face "profiles"). China asked them to add a profile for "Han-Chinese" people. China used it to find & surveil Uyghurs, to "limit" them, by making its people-tracking system decide that non-Han people in China had to be "followed" & monitored more closely.

    • @flickwtchr
      @flickwtchr 5 months ago +2

      If the US is doing it why wouldn't they? The cat is far out of the bag already.

  • @boredludologist
    @boredludologist 5 months ago +4

    Let the autoregressive-model-bashing by Yann LeCun begin!

    • @IronMechanic7110
      @IronMechanic7110 5 months ago +2

      Autoregressive can't plan...

    • @boredludologist
      @boredludologist 5 months ago +1

      No disagreements on that... And that's not the only shortcoming either! We may get a reminder of the "Reversal curse" of these models as well.

  • @brettgarnier107
    @brettgarnier107 5 months ago +1

    I'm glad that I get to be here for this.

  • @robert8124
    @robert8124 1 month ago

    Great think tank of geniuses... I agree with Yann on his closing statement...

  • @sombh1971
    @sombh1971 5 months ago +6

    36:38 I think a world model would be best achieved by making robots figure out stuff like a child does, or in other words by embodying the AI in a humanoid robot. And I don’t know exactly how long a shot it is at present.
    49:42 Wow! Now that's really something! On the other hand, what are the chances that something like this actually existed on the Internet? In any case, even if it did, the power to scour the Internet's bowels is really something to be reckoned with, and realise that that's where its true utility lies.
    50:59 OK so I didn’t realise this was done, and that makes it even more impressive, unless it already existed somewhere.

    • @keep-ukraine-free528
      @keep-ukraine-free528 5 months ago +1

      You mentioned AI in a humanoid robot. These already exist. Google, xAI, and others have systems that are working & getting better. Earlier, Boston Dynamics didn't use neural networks/machine learning, but now they're adding it to its sophisticated robots. Neural-net based AI can use any symbols/tokens (in "any" space, including motion-space). So these robots do very similar things as LLMs, except instead of language-tokens ("words"), they use movement-tokens (location, rotation, velocity, acceleration, balance, force, etc.) strung together into "sentences" to provide distinct physical movements (like "dance moves"). By stringing movements together, it can create nearly any movement. This gave SpaceX's rockets its abilities to steer & land - not using arms & elbows but using thrusters & fins.
      Robotic AI (neural-net based) have been learning what LeCun said (that when we push a bottle or table, what do we expect). Most of these systems also have vision. So I'm surprised LeCun doesn't know these AI areas exist & how far they've advanced.

  • @ronpaulrevered
    @ronpaulrevered 5 months ago +6

    Predicting unintended consequences is a contradiction in terms. Whoever lobbies for regulation of A.I. seeks regulatory capture, that is, being able to afford legal compliance and lobbying when your competitors can't afford to.

  • @user-sb3qg5ph5t
    @user-sb3qg5ph5t 4 months ago

    That was an incredible show/discussion! 👍👏

  • @dannyg599
    @dannyg599 3 months ago

    Amazing insight into AI. Thank you for such high quality content and discussion

  • @Anders01
    @Anders01 5 months ago +3

    Instead of a central planner, I wonder if LLMs could emulate planning simply by learning a lot of information from the physical world, nature, and human society, using a large number of robots that interact with the world. And a "ChatGPT moment" could soon happen with robotics.

    • @gps831coast
      @gps831coast 5 months ago +1

      I think it will be an organic "spark" to start the exponential birth of mechanical consciousness. Not good at English text.

    • @kjinnah
      @kjinnah 5 months ago

      A chicken or egg problem here?

    • @Anders01
      @Anders01 5 months ago

      @@kjinnah I was thinking that LLMs will be used in things like household and business robots, and by interacting with the physical world the LLMs can gather a lot of new data from vision and sound and so on. And over time (it could happen quickly given enough robots) the LLMs will build up a general context for the real world similar to today's text contexts.

    • @Martinit0
      @Martinit0 5 months ago

      LLMs by themselves lack the feedback mechanisms to implement planning. So you need another component that basically guides the LLM.

    • @Anders01
      @Anders01 5 months ago

      @@Martinit0 Yes, I guess in a scenario where LLMs were connected to lots of robots, there would need to be a mechanism for the LLMs to be able to use all the data from the robots as feedback.

  • @bobgreene2892
    @bobgreene2892 4 months ago +3

    Tristan Harris is a most valuable voice of criticism for AI.

  • @voodooranger1
    @voodooranger1 5 months ago

    Off-topic but interesting to note: the "forever chemicals" mentioned (PFAS; PFOA, used in Teflon by DuPont, and PFOS, used by 3M) are a range of man-made chemicals produced and marketed primarily by the American 3M company.
    1:21:17 So I'm not sure why Tristan Harris dodged naming the American company that created these widely used noxious chemicals, in preference to naming and shaming DuPont, who made the not-so-widely-used product Teflon?
    PFOA and especially PFOS were used in many consumer products, added extensively to tank water reservoirs in community firefighting equipment, and used ubiquitously in military, air force, and community firefighting drills and training exercises. Their manufacture, use and import are now banned in the U.S. and other law-abiding countries (not sure about Russia, China, North Korea, and Iran).
    Evidence suggests the next-generation PFAS chemicals that have replaced them may have similar toxicity. PFAS chemicals pollute water, are resistant to degradation, and accumulate in body organs.
    This group of chemicals is not to be taken lightly and is now found everywhere in the environment and food chain. They were marketed internationally for carpet and cloth protection and stain resistance, sold as consumer sprays for clothing, waterproofing for shoes, etc. As a result, they can be found concentrated in the domestic environment as well. They were once as ubiquitous on the shop shelf as hair spray.
    They are especially concentrated around military land holdings, where they were used extensively for firefighting. The concentration levels found in all living creatures apparently have a greater impact than the levels of nuclear-bomb isotopes found in every human from the bomb testing of the 1940s to 1980s.
    The PFAS group of chemicals has become a hidden leading cause of cancer in the world today. These chemicals will not break down for over 10,000 years, sealing the fate of life on this planet.
    I suspect soils from the cleanup of contaminated sites are being recycled into green-waste consumer products and sold as bulk potting mix and bulk soil used in commercial greenhouses, market gardens, and landfill. DYOR.
    Now back to A.I. (the next chapter in human stupidity!)....

    • @bobweiram6321
      @bobweiram6321 4 months ago

      Teflon is an inert chemical, meaning it will not react with most chemicals, and it is non-polar. That's why it's excellent at protecting against caustic chemicals and is an excellent repellant. A carcinogen works by reacting chemically with DNA, leading to malignant tissue growth.
      Teflon simply can't react with DNA to alter it in any way, which is precisely why it's used in a wide range of applications, including medicine. The chemicals involved in Teflon production are extremely toxic and highly volatile; fluorine, specifically, forms the fluorides used in common household toothpaste. Teflon fibers are dangerous when inhaled because they're unable to leave the body.

  • @mean10102
    @mean10102 13 hours ago

    Tristan reminds me of an Outer Limits episode where most technology is forbidden and you're only allowed no more processing power than a digital watch. Open-world happy world.

  • @Octwavian
    @Octwavian 5 months ago +5

    Brian is spot on! He is very subtle in his disagreement 😂

  • @rocketman475
    @rocketman475 4 months ago +10

    Yann is correct.
    Tristan's idea of granting control of AI to a few large companies would create the very nightmare scenario that Tristan wishes to avoid.

    • @chrisl4338
      @chrisl4338 4 months ago +2

      Absolutely. Tristan's views parallel those of the Luddites, which could be characterised as "change is scary, let's not go there" - albeit his ability to articulate those fears is impressive. As for his proposition that control of AI should be the preserve of corporate entities, now that is scary.

    • @ItsWesSmithYo
      @ItsWesSmithYo 3 months ago

      The free market won’t let that happen 🤙🏽

    • @rocketman475
      @rocketman475 3 months ago

      @@ItsWesSmithYo
      Yes, that's right, but what if the free market is being interfered with?

    • @ItsWesSmithYo
      @ItsWesSmithYo 3 months ago

      @@rocketman475 Personally, I've never seen it fail to correct. Someone always finds the hole and the opportunity; that's the point of the free market.