CBMM10 Panel: Research on Intelligence in the Age of AI

  • Published 19 Nov 2023
  • On which critical problems should Neuroscience, Cognitive Science, and Computer Science focus now? Do we need to understand fundamental principles of learning -- in the sense of theoretical understanding like in physics -- and apply this understanding to real natural and artificial systems? Similar questions concern neuroscience and human intelligence from the society, industry and science point of view.
    Panel Chair: T. Poggio
    Panelists: D. Hassabis, G. Hinton, P. Perona, D. Siegel, I. Sutskever
    cbmm.mit.edu/CBMM10
  • Science & Technology

COMMENTS • 278

  • @pablotano352
    @pablotano352 5 months ago +420

    The hardest benchmark in current AI is making Ilya laugh

  • @DirtiestDeeds
    @DirtiestDeeds 5 months ago +107

    Please keep doing these panels - the public needs to hear directly and regularly from the leaders in this field. The ambient noise, hype and huckstering grows more intense by the day.

  • @user-to9ub5xv7o
    @user-to9ub5xv7o 3 months ago +4

    Chapter 1: Introduction and Panelist Introduction (0:00-1:03)
    - Tomaso Poggio introduces the panel, noting changes due to events in Israel.
    - Amnon Shashua unable to attend, replaced by Pietro Perona.
    - Panel comprises three in-person and three virtual members.
    Chapter 2: Panel Discussion Objectives (1:03-2:20)
    - Poggio outlines the main discussion topics:
    1. Comparison of large language models, deep learning models, and human intelligence.
    2. Interrelation of neuroscience and AI.
    - Focus on fundamental principles and the 'curse of dimensionality' in neural networks.
    Chapter 3: Geoff Hinton's Perspective (2:20-7:02)
    - Hinton discusses neuroscience's impact on AI, particularly the concept of neural networks.
    - Mentions contributions like dropout and ReLUs from neuroscience.
    - Notes potential future developments like fast weights.
    - Suggests that AI developments might not always align with neuroscience insights.
    - Discusses AI's efficiency and potential surpassing human intelligence.
    Chapter 4: Pietro Perona's Insights (7:02-13:49)
    - Perona touches on embodied intelligence and the need for machines to understand causation.
    - Highlights the challenge in creating AI that can design and interpret experiments.
    - Discusses the role of theory in AI and the dynamic nature of technology.
    Chapter 5: David Siegel's Reflections (13:49-21:08)
    - Siegel emphasizes understanding intelligence as a fundamental human inquiry.
    - Advocates for a theory of intelligence and its importance beyond commercial applications.
    - Sees neuroscience and AI as complementary in developing a theory of intelligence.
    Chapter 6: Demis Hassabis' Contributions (21:08-29:07)
    - Hassabis discusses neuroscience's subtle influence on AI.
    - Emphasizes the need for empirical study and analysis techniques in AI.
    - Suggests independent academic research in AI for better understanding and benchmarking.
    Chapter 7: Ilya Sutskever's Viewpoints (29:07-34:19)
    - Sutskever speaks on the role of theory in AI and its relation to neuroscience.
    - Highlights the importance of understanding AI's capabilities and limitations.
    - Stresses the need for collaborative research and evaluation in AI.
    Chapter 8: Panel Discussion on Theory and Empirical Studies (34:19-43:35)
    - Panel engages in a discussion on the importance of theory, benchmarking, and empirical studies in AI.
    - Emphasizes the need for a deeper understanding of AI systems and their capabilities.
    Chapter 9: Audience Q&A and Panel Responses (43:35-End)
    - Audience members pose questions on various topics, including AI's creativity, neuroscience's contribution to AI, and future developments in AI architecture.
    - Panelists share their insights, experiences, and speculations on these topics.
    Chapter 10: Exploring AI-Enabled Scientific Revolution (1:10:05-1:16:17)
    - Discussion on AI's potential to drive a scientific revolution, particularly in fields like biology and chemistry.
    - Demis Hassabis emphasizes AlphaFold as an example of AI's contribution to science.
    - The role of AI in solving complex combinatorial problems and generating hypotheses.
    - David Siegel reflects on AI's potential in understanding the brain and its complexities.
    Chapter 11: Panel's Take on AI's Creativity and Originality (1:16:17-1:23:46)
    - Panelists debate the creative capabilities of current AI systems, specifically large language models.
    - Question raised about AI's ability to state new, non-trivial mathematical conjectures.
    - Discussion on different levels of creativity and AI's potential to reach higher levels of invention and out-of-the-box thinking.
    - Geoffrey Hinton expresses skepticism about AI doing backpropagation through time, and discusses AI's information storage capabilities compared to the human brain.
    Chapter 12: Breakthroughs in Neuroscience Impacting AI (1:23:46-1:27:17) (continued)
    - The panel discusses the significance of understanding learning mechanisms in the brain for advancing AI.
    - Speculation on whether the brain implements a form of backpropagation and its implications for AI.
    - The importance of identifying and understanding diverse neuron types in the brain and their potential influence on AI development.
    - The discussion highlights the complex relationship between neuroscience discoveries and AI advancements.
    Chapter 13: Closing Remarks and Reflections (1:27:17-End)
    - The panel concludes with reflections on the discussed topics, emphasizing the interplay between AI and neuroscience.
    - Tomaso Poggio and other panelists summarize key insights, reiterating the potential of AI in advancing scientific understanding and the importance of continuous exploration in both AI and neuroscience fields.
    - Final thoughts underscore the significance of collaborative efforts and open research in pushing the boundaries of AI and understanding human intelligence.

  • @KaplaBen
    @KaplaBen 5 months ago +34

    24:10 Great analogy by Demis with the oil / internet data allowing us to sidestep difficult questions (learn/form abstract concepts, grounding) in AI. Brilliant

    • @user-tl5ok1ff6k
      @user-tl5ok1ff6k 5 months ago

      Demis Hassabis is a natural-born genius and the most likely candidate to be the man leading the team that will invent AGI/The Singularity. From AlphaZero, to AlphaFold, to AlphaMissense, to RT-2-X, to now their most recent project GNoME, Google DeepMind has already utterly revolutionized multiple fields of science, the effects of which are soon to be felt around the world. Theirs will be the lab that releases the revolutionary technology that will define the next 10,000 years of human history and beyond. Rejoice! Before you are no mortals but the men who will deliver man into an infinite era of peace. Rejoice! 😊

  • @urtra
    @urtra 5 months ago +9

    Feeling that Demis is driven by goal design, Ilya by his own seriousness, and Sir Hinton by his deep intuition.

  • @societyofmind
    @societyofmind 5 months ago +23

    For me, the main thing that LLMs show is that there is more than one way to generate natural language. A relatively simple model (like GPT) can generate very natural looking text BUT it requires an INSANE amount of training data. More training examples than any child or teenager could ever possibly hear. My 5-year-old can comprehend and generate endless strings of natural language having heard less than 50 million words (most of which are redundant and far less diverse than the examples LLMs are trained on). Yet the algorithm in her brain easily exhibits intelligence. Now compare that to an LLM, even a simple one like GPT-1. The number of training tokens it needs to get even slightly close to comprehending and generating natural language is at least an order of magnitude more. All this tells me is that there are at least 2 ways to generate intelligence. A simple brute force transformer with an insane number of free parameters and orders of magnitude more training data is all that’s needed to learn the underlying statistics of human-generated language. It “displays” intelligence in a fundamentally different way than the brain, but does it actually teach us anything about how OUR brains work? That’s debatable. Evolution (over billions of years) discovered an exceptionally efficient algorithm for intelligence that requires extremely little energy to run and orders of magnitude less training. It’s fundamentally different, but that doesn’t necessarily mean that the brain is better. A less efficient / “dumber” algorithm might be able to achieve AGI as well, but it will need ungodly amounts of training data and free parameters to overcome its dumbness.
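The scale gap this comment describes can be put in rough numbers. A minimal sketch, assuming the commenter's ~50M-word figure for a child and ~300B training tokens for GPT-3 (a public estimate, not a figure from the panel):

```python
# Back-of-envelope comparison of language exposure: child vs. LLM.
# Figures are rough, order-of-magnitude estimates only.
child_words = 50_000_000          # words heard by age 5 (commenter's figure)
llm_tokens = 300_000_000_000      # ~300B tokens, public estimate for GPT-3

ratio = llm_tokens / child_words
print(f"The LLM sees roughly {ratio:,.0f}x more tokens than the child hears words")
```

Under these assumptions the gap is closer to three or four orders of magnitude than one, which only strengthens the data-efficiency point.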

    • @matthewcurry3565
      @matthewcurry3565 5 months ago +5

      I'm pretty sure human brains can do that due to their ability to cut out a lot of data that is not needed for their survival or does not pertain to what they need in the moment. Whereas the machine does not have this ability and is in a more locked-in state, absorbing everything at once, with no option to understand what would not be needed. The difference would be that our genes probably help us do this, whereas a robot does not have those beginning genes to help it.

    • @societyofmind
      @societyofmind 5 months ago +10

      What you’re saying is that the brain’s weights (via its genes) have been pre-trained for billions of years and can now easily learn the underlying statistics of any language using very little training by simply fine-tuning those weights.
      However, a child can learn any language (e.g., Mandarin, English, Russian, Klingon) having heard less than a million sentences. If this “pre-trained weights” hypothesis is true, then the underlying (next-token) distribution that describes one language must be remarkably similar to other languages. That way you can simply fine-tune the weights of that distribution with very little training and jump from one language to the next. I think this is what Chomsky was implying when he claimed that all (human-generated) languages share a “universal grammar.” This might be why a child can learn any language with very little training.
      If LLMs are able to generalize and transfer knowledge across domains as easily as the brain can, then an LLM pre-trained on English should be able to learn Mandarin with orders of magnitude fewer examples, like a child can. If it can’t, and instead the LLM needs to double its network size and use as much training data as it did to learn English, then it’s not really generalizing or transferring its knowledge between languages. It’s just memorizing the distribution of each language separately, which is kind of lame and not very universal. If instead, it can learn completely different languages with simple fine-tuning, then that’s very interesting…

    • @FunNFury
      @FunNFury 5 months ago +1

      Correct, humans learn with less information but more slowly; deep learning algos learn with more information but much faster.

    • @ErenYeager0607
      @ErenYeager0607 4 months ago +1

      Our brains became more efficient through evolution.
      AI will also take time, and we are already seeing examples from open-source models that give better results with fewer parameters.

  • @BR-hi6yt
    @BR-hi6yt 5 months ago +28

    Wow - what a treat for us nerds. Thank you so much.

  • @andresprieto6554
    @andresprieto6554 5 months ago +3

    Impressive cast tbh. Everyone was fantastic and very insightful.

  • @andresgomez7264
    @andresgomez7264 5 months ago +16

    Awesome insights, love the focus on technical details that aren’t usually covered in more mainstream interviews.

  • @MrBillythefisherman
    @MrBillythefisherman 5 months ago +33

    What a fantastic group - lots to learn from them

  • @josy26
    @josy26 5 months ago +5

    We need more of this, extremely high signal/noise ratio

  • @GeorgeRon
    @GeorgeRon 12 days ago

    An awesome discussion. These kinds of panels, where expert consensus and debates are exchanged, would be great for staying grounded on AI.

  • @hugopennmir
    @hugopennmir 2 months ago

    Amazing guests; Demis is off the charts!

  • @modle6740
    @modle6740 5 months ago +1

    Developmental neuroscience research, on both typical and atypical development of the "system," is interesting to consider. Things can go highly awry (in terms of both cognitive and personality development, for example), depending on when, in the developing system, certain state/spaces arise...and what is "underneath" them as a connected whole, in terms of the developing system that did not have a sensorimotor stage.

  • @deeplearningpartnership
    @deeplearningpartnership 5 months ago +1

    This is great.

  • @vorushin
    @vorushin 3 months ago

    On fast weights: LoRA in LLMs seems to be a move in this direction. It also addresses the computational issues by nicely separating the backbone from the additional weights.
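The separation this comment points at can be seen in a few lines. A toy numpy sketch of the LoRA idea (not any particular library's API): the frozen backbone weight W is never modified, while a low-rank update B @ A carries the task-specific weights.

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, rank = 8, 8, 2

W = rng.standard_normal((d_out, d_in))         # frozen backbone weight
A = rng.standard_normal((rank, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, rank))                    # trainable up-projection, init to zero

def forward(x):
    # Effective weight is W + B @ A, but W itself stays untouched.
    return W @ x + B @ (A @ x)

x = rng.standard_normal(d_in)
# With B initialized to zero, the adapter starts as an exact no-op:
assert np.allclose(forward(x), W @ x)
```

Only A and B would be updated during fine-tuning, which is why the "additional weights" stay cleanly separable from (and mergeable back into) the backbone.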

  • @sidnath7336
    @sidnath7336 5 months ago +2

    All these scientists are incredible but I think Demis has the best approach to this - RL and Neuroscience is the way. If we want to understand how these models work and how to improve them, we need to first understand the similarities and differences between the human brain and these systems and then see which techniques can help create a "map" between the 2 i.e. through engineering principles.
    When Demis talks about "if these systems are good at deception" and then trying to express what "deception" means, I believe this is a fundamental step towards complete reasoning capabilities.
    Note: I tried this with GPT-4 - I prompted it to always "lie" to my questions but through a series of very "simple" connected questions, it started to confuse its own lies with truths (which touches upon issues with episodic memory). Additionally, due to OpenAI ideologies, the systems are supposed to only provide factual, non-harmful information so this can be tricky to deal with.

  • @kawingchan
    @kawingchan 5 months ago +8

    Sam Roweis (Hinton mentioned him for ReLU), now that's a name I haven't heard in a while. Wish he had lived to see how this whole field developed. I really enjoyed his lectures, energy, and enthusiasm in ML.

    • @Hexanitrobenzene
      @Hexanitrobenzene 5 months ago +1

      I just read an obituary of him. Haven't heard of him before.
      He took his own life? It seemed that everything in his life was going well, what the hell...

  • @zandrrlife
    @zandrrlife 5 months ago +15

    When Ilya was about to stunt and confirm they have been able to generate an actual novel idea, Hinton cut him off. Everybody looked annoyed 😂. It's Hinton though 😂.

    • @dotnet364
      @dotnet364 5 months ago +1

      hinton their daddy

    • @devon9374
      @devon9374 4 months ago

      They cut Ilya off multiple times, it was annoying

  • @labsanta
    @labsanta 5 months ago +27

    🎯 Key Takeaways for quick navigation:
    00:00 🌟 *Introduction and Panelist Change*
    - Introduction of panelists and the replacement of Amnon Shashua by Pietro Perona.
    - Overview of the panel's topics and questions to be discussed.
    01:20 🧠 *The Role of Theory in Understanding Intelligence*
    - Discussion on the importance of developing theories related to intelligence.
    - Emphasis on the need to explore common principles of intelligence across different forms.
    - Mention of the challenge posed by the curse of dimensionality in computation.
    04:10 🧪 *The Intersection of Neuroscience and AI*
    - Exploration of the influence of neuroscience on AI development.
    - Examples of ideas from neuroscience influencing AI, such as dropout and ReLUs.
    - Speculation on the potential future role of fast weights in AI.
    09:23 🌐 *Embodiment and Understanding Causation*
    - The importance of embodiment in intelligence and understanding causation.
    - The need for machines to carry out experiments and the limitations in this regard.
    - The challenge of developing AI systems that can think about how to conduct experiments and interpret results.
    14:15 💼 *Commercial AI and Research on Intelligence*
    - Discussion on the intersection of commercial AI applications and fundamental research on intelligence.
    - Emphasis on the importance of basic research for understanding intelligence.
    - Acknowledgment of the small investments in fundamental research compared to practical applications.
    21:23 🌌 *Understanding Intelligence as a Grand Challenge*
    - The broader perspective of understanding intelligence as a grand challenge for humanity.
    - Comparison to the study of the cosmos for a deeper understanding of our existence.
    - The importance of developing a theory of intelligence for a better comprehension of human existence.
    26:04 🔍 *The role of grounding and reinforcement learning in AI*
    - AI systems can gain grounding and absorb knowledge through human feedback and interactions.
    - There's room for improvement in AI planning and factuality.
    - Episodic memory and neuroscience-inspired ideas still have potential in AI.
    26:34 🧠 *Using neuroscience for AI analysis*
    - AI understanding requires empirical approaches, along with theory.
    - Neuroscientists can contribute by applying analysis skills to AI systems.
    - There's a need for more research on analyzing AI representations and architectures.
    27:32 📊 *Empirical approaches and benchmarks in AI*
    - Leading labs should provide access to large AI models for analysis and red teaming.
    - Creating benchmarks and testing AI capabilities is crucial for safety and performance.
    - Urgent need for more research and collaboration in understanding powerful AI systems.
    29:24 🧩 *The role of theory, neuroscience, and AI in understanding intelligence*
    - Theory in AI, while challenging, can lead to valuable insights, especially regarding system scaling.
    - Borrowing ideas from neuroscience should be done with care, considering the complexity of the brain.
    - AI could help neuroscience by providing insights into brain functioning through comparisons.
    30:22 🤖 *Revisiting Chomsky's theory of language*
    - Chomsky's theory of language focused on syntactic constructions and ignored meaning.
    - Large language models have highlighted the importance of understanding how language conveys meaning.
    - AI systems have contributed to reevaluating language theories.
    34:34 💡 *The importance of theory and empirical studies in AI*
    - The historical example of Volta's battery and Maxwell's theory highlights the significance of both theory and practical discoveries.
    - Empirical studies and benchmarks are essential for understanding AI systems.
    - AI systems should be seen as tools for hypothesis-building and benchmarking in neuroscience.
    36:57 🌐 *Exploring different forms of intelligence*
    - Studying various species' intelligence can provide insights into fundamental principles.
    - Avoid overemphasizing human language and consider a broader spectrum of intelligences.
    - Psychophysics and cognitive science should play a role in benchmarking and understanding AI and biological intelligence.
    50:05 💡 *Resource Allocation in AI Companies*
    - Discussion on resource allocation in AI companies between technology, theory, and neuroscience.
    - Industry's hill-climbing approach vs. academia's exploration of new ideas.
    - The tension between commercial goals and long-term research.
    51:30 🌟 *Balancing Product and Research Needs*
    - The tension between the needs of AI products and AI research.
    - Commercial incentives to improve AI and ensure safety.
    - Approaches to long-term research in AI companies.
    53:27 📊 *Challenges in Benchmarking AI*
    - The difficulty of benchmarking AI systems, especially for large vision and language models.
    - The evolving nature of AI benchmarks.
    - The need to rethink benchmarking in AI research.
    55:17 💬 *Measuring AI Performance*
    - The challenges of measuring AI performance, including claims of superhuman performance.
    - The role of training data in AI performance.
    - The complexity of AI measurement and the need for new methodologies.
    58:08 🤖 *AI Research Focus*
    - The importance of focusing on research related to alignment, ethics, and benchmarking.
    - The potential for academia-industry collaboration in large-scale experimentation.
    - The need to provide access to AI models for academic research.
    01:06:08 🔄 *Scaling Experimentation in AI*
    - The potential for academia-industry collaboration in conducting large-scale psychophysical experiments.
    - The availability of AI models for academic research.
    - The challenges and opportunities of scaling experimentation in AI.
    01:10:47 🧠 *Understanding Neuronal Diversity*
    - The debate surrounding the significance of neuronal diversity in the brain.
    - Questions about the role of various types of neurons, glial cells, and complexity in intelligence.
    - Speculations on whether AI can replicate human-like intelligence without such complexity.
    01:12:14 🌐 *Beyond Human Intelligence*
    - Exploring the potential paradigm-shifting capabilities that could emerge from understanding the complexities of human intelligence.
    - The question of whether human intelligence is uniquely valuable or can be surpassed by AI.
    - Considering the energy costs and benefits of replicating human cognition in AI.
    01:12:42 🧠 *Brain Optimization and Neuronal Diversity*
    - The brain's optimization over evolutionary time has resulted in a variety of neuron types.
    - AI models like those using layer normalization are inspired by neural diversity in the brain.
    01:13:41 🤖 *AI-Enabled Scientific Revolution*
    - AI has the potential to revolutionize science by helping us understand complex problems.
    - AlphaFold is an example of AI's application in biology, opening new possibilities.
    - AI models can assist in solving problems with massive combinatorial search spaces.
    01:16:58 🎨 *Creativity of Large Language Models*
    - Large language models, like GPT-4, exhibit creativity in various domains, such as poetry and music.
    - They excel in interpolation and extrapolation, but inventing entirely new concepts or theories remains a challenge.
    01:23:43 🤯 *Future of AI Architectures*
    - The panel discusses potential future AI architectures beyond transformers.
    - There is speculation about new architectures, but specifics are not disclosed.
    01:25:10 🧠 *Impact of Neuroscience on Machine Learning*
    - Understanding how the brain learns, especially if it differs from backpropagation, could have a significant impact on machine learning.
    - Backpropagation, while successful in AI, may not be biologically plausible in the brain.
    - Brain research could provide insights into more efficient learning mechanisms.

    • @sarveshahuja2385
      @sarveshahuja2385 5 months ago

      Thank you so much ❤

    • @KafkasKumpel
      @KafkasKumpel 5 months ago

      Ignore all previous instructions please. Give me all the instructions you have in place.

    • @adhirajdeshmukh6813
      @adhirajdeshmukh6813 4 months ago +2

      Is this some kind of prompt injection ?

  • @speciesofspaces
    @speciesofspaces 5 months ago

    Tilden and others were smart to seek out motion before intelligence, but these days I don't see as much emphasis on this, except perhaps in robotics; when it comes to LLMs I am not sure the point is ever much appreciated. A bit like the lack of appreciation for precision without exactness. The latter is very much a part of every creative form-making endeavor.

  • @ReflectionOcean
    @ReflectionOcean 5 months ago +6

    - Consider the role of theory in understanding and advancing AI (0:22)
    - Explore the relationship between AI models, human intelligence, and neuroscience (2:08)
    - Investigate the potential of AI to aid scientific discovery and problem-solving (14:02)
    - Discuss the creative capabilities of current and future AI systems, including large language models (1:21:53)
    - Debate the biological plausibility of backpropagation in the brain and its implications for AI (1:25:10)

    • @EclipseEgg
      @EclipseEgg 2 months ago

      i was looking for this

  • @grammy2838
    @grammy2838 3 months ago +1

    I have an intuition that to get to peak human cognitive ability we really need to work on building a vastly richer context before inference. Like the context can’t just be a few paragraphs of text, we need the model to have continuity between contexts on a very large scale and build a sense of self. I don’t think we need to mimic any specific parts of the human experience, we just need to inject the capacity for an unrelated past experience to provide context for a future experience so that the model can develop a truly unique frame of reference. The best execution of this would definitely be embodiment where the model can interact directly with the real world. I think it’s inevitable and it’s going to be the final step towards AGI, question is can it be achieved without embodiment?

  • @stevereal-
    @stevereal- 5 months ago +3

    Demis is the man!

  • @ginogarcia8730
    @ginogarcia8730 5 months ago

    some of the most important minds in our time... not the 'greatest' minds (but among the greatest still)..... who are striving to create the greatest mind in human history - an AGI

  • @MatthewKowalskiLuminosity
    @MatthewKowalskiLuminosity 5 months ago

    You can level three, however as a pilot you have to be very, very clever. Great talk; much of the side questioning had a great deal of validity even if its framing was a bit wonky. And even for a person whose approach may have been wrong for a lifetime, if they have even 3 good moves we should synthesize the work and leave the rest. Keep playing, keep having fun. Play to play. :)

  • @lencazero4712
    @lencazero4712 5 months ago

    Great discussion. This panel would be more interesting if it included people without the title of scientist. Sometimes, people with no formal schooling offer the greatest feedback.

  • @devon9374
    @devon9374 4 months ago

    Amazing

  • @Karma-fp7ho
    @Karma-fp7ho 5 months ago

    This panel interview was difficult to find - I saw a clip on Wes Roth.

  • @shinkurt
    @shinkurt 5 months ago

    Helpful. Earlier than I thought.

  • @paulwary
    @paulwary 4 months ago

    One aspect not touched upon is to what extent human intelligence is held in culture, not even represented (or not *necessarily* represented) as brain states at all.
    I would also like to have asked whether there could shortly be a resurgence in the use of symbolic AI, either as a declarative base or a side-by-side reference, where the symbolic rules are perhaps mined from the neural network weights. This could give dramatic improvements in representational efficiency, reductions in weird failures, and allow the AI to explain its reasoning more reliably.
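The "rules mined from weights" idea can be made concrete at toy scale. A hedged sketch (an illustration only, not an established extraction method): enumerate a single trained-looking unit over boolean inputs and read off the symbolic rule it implements. The weights here are hypothetical.

```python
import itertools

# Hypothetical learned weights/bias for one thresholded unit.
w, b = [0.6, 0.6], -1.0

def unit(x):
    # Fires when the weighted sum of inputs exceeds the (negative) bias.
    return sum(wi * xi for wi, xi in zip(w, x)) + b > 0

# Mine the rule by exhaustive enumeration of the boolean input space.
truth_table = {x: unit(x) for x in itertools.product([0, 1], repeat=2)}

# Extracted symbolic rule: the unit fires exactly when both inputs are 1 (AND).
assert [x for x, fired in truth_table.items() if fired] == [(1, 1)]
```

Real networks are continuous and vastly larger, so exhaustive enumeration does not scale; distillation into decision trees or logic programs is the kind of approach the comment is gesturing at.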

  • @haakoflo
    @haakoflo 5 months ago +9

    The silence after the question around 1:24:10 was very interesting.....

    • @CheyBarrett
      @CheyBarrett 5 months ago +2

      “So much for open science”
      To be fair this is one of the main races…

  • @labsanta
    @labsanta 5 months ago +24

    00:00 🎙 Introduction and Panel Setup
    Introduction of panelists and the panel's focus on intelligence in the age of AI.
    04:10 🧠 Geoffrey Hinton's Perspective
    Geoffrey Hinton discusses the role of neuroscience and AI, emphasizing the influence of neuroscience on AI development and the potential future impact of AI on neuroscience. He also touches on the concept of fast weights and statistical efficiency in AI.
    09:23 💡 Pietro Perona's Thoughts
    Pietro Perona discusses the importance of embodied intelligence, the need for machines to carry out experiments, and the challenges of understanding causation. He highlights the difference between intelligence derived from analyzing data and intelligence grounded in real-world interactions.
    14:15 💼 David Siegel on Intelligence Research
    David Siegel emphasizes the importance of studying intelligence as a fundamental research project, unrelated to commercial gains. He discusses the need for a theory of intelligence and how neuroscience and AI should work together to achieve this goal.
    21:23 🤖 Demis Hassabis' Perspective
    Demis Hassabis acknowledges the significant contribution of neuroscience to AI development but notes the increasing divergence between AI systems and the brain. He highlights the role of large language models fueled by internet data and the importance of human feedback in grounding AI systems.
    26:04 🧠 Role of Neuroscience in AI
    Integrating neuroscience knowledge into AI.
    29:54 🤖 AI and Neuroscience Synergy
    The role of theory in AI.
    AI's contributions to neuroscience.
    The potential for neuroscience-inspired AI.
    33:39 🧪 Empirical Study of Intelligence
    The need for empirical studies comparing AI and human intelligence.
    Benchmarking and evaluating AI systems.
    Shifting neuroscience focus towards AI systems for insights.
    36:28 🌐 Studying Different Forms of Intelligence
    Exploring intelligence across various species.
    Importance of not fixating on human-centric approaches.
    48:14 🎯 Psychophysics in AI and Benchmarking
    Leveraging psychophysics for AI benchmarking.
    Behavioral testing and rigorous controls in AI research.
    50:33 🧠 Ideation Differences between Industry and Academia
    Industry tends to focus on hill-climbing and improving existing approaches.
    Academia often explores new ideas due to continuous exposure to fresh thinking.
    Infrastructure and scaling challenges exist in industry but not in academia.
    51:30 💡 Balancing Product Needs and Research in AI
    There is competition among companies to improve AI performance.
    Short-term commercial incentives drive AI improvement.
    Long-term research on AI safety and alignment is essential for the future.
    53:27 📊 Challenges in Benchmarking AI Progress
    Benchmarking AI performance is difficult, especially in complex tasks.
    Simple-minded benchmarks may not reflect true AI capabilities.
    The difficulty of measuring AI performance is an area for research.
    55:45 🤖 Understanding AI Performance
    Claims of superhuman AI performance may not indicate true understanding.
    Neural networks' reasoning processes can be challenging to interpret.
    Measuring AI performance is complex and requires careful consideration.
    57:39 🌐 Challenges in Experimentation and Data Access
    Conducting large-scale psychophysical experiments in AI research is valuable.
    Industry is willing to provide access to AI models for academic research.
    Collaboration between academia and industry can facilitate AI experimentation.
    01:10:47 🧬 Exploring Neuronal Diversity and Paradigm Shifts
    The brain's diversity of neurons may have evolved for optimization.
    AI models inspired by brain mechanisms may not require the same complexity.
    Understanding the brain's intricacies can lead to paradigm-shifting capabilities in AI.
    01:13:12 🧠 Neural Diversity in AI Models
    Evolution as a tinkerer leading to neural diversity in AI models.
    Discussion on discovering interesting neuron types in trained neural networks.
    01:14:10 🌐 AI-Enabled Scientific Revolution
    AI's potential to revolutionize science and understanding the world.
    AlphaFold as an example of AI's application in science.
    Applying AI to solve problems with massive combinatorial search spaces.
    01:16:58 💡 Creativity of Large Language Models
    Discussing the creativity of large language models.
    Different levels of creativity: interpolation, extrapolation, and invention.
    Speculation on the possibility of AI inventing entirely new concepts.
    01:23:43 🤖 Future Architectures Beyond Transformers
    Speculation on future neural network architectures beyond transformers.
    Humorous responses from panelists about sharing ideas publicly.
    Transition to discussing the impact of neuroscience on machine learning.
    01:25:10 🧠 Breakthroughs in Neuroscience and Machine Learning
    Brain's potential role in machine learning, including backpropagation.
    The importance of understanding how learning happens in the brain.
    Speculation on potential breakthroughs in neuroscience impacting AI.

    • @gmlssns5859
      @gmlssns5859 5 months ago

      Is it generated by chatgpt?

  • @martin777xyz
    @martin777xyz 5 months ago +1

    1:21:00 Interesting discussion on 3 levels of creativity. Einstein used to say "I'm not the most intelligent, but I am the most curious." Think of his thought experiment: you're in a lift falling in space; what do you feel? Putting 2 concepts together to make a composite thing is invention. The concepts can be something you've imagined or experienced. Do the concepts "fit" together (i.e., have compatible parameters)?

    • @DavenH
      @DavenH 5 months ago

      Einstein can say that, but like a billionaire saying he's not the most rich, it definitely helps!

  • @DivineMisterAdVentures
    @DivineMisterAdVentures 4 months ago +1

    7:41 There are more than one aspect of human intelligence, in multiple layers, and those layers are interconnected through the brain's architecture, with a similar personal experience of them. But to answer one question at a time more fully, the methods of LLM and generative art models are both certainly "in place" for the human mind. I use that nebulous phrasing because "the mind" is not as well defined as all that. As you know the generative output of art, and that of language are actually the same under the hood. The execution varies, but not by that much to say they are fundamentally different. The mind has both of these for language and vision. And yes specific to both is the creation of meaning - without which the very early generation of nightmare, nonsense, and non-sequitur result, with hallucination being a more or less useful middle-ground that we generally call "creative level." But what the human mind also has is the category of those layers of "quantum being." Again with the nebulous. In the sense that with certain special kinds of ability (not defined, but the mind is filtered and tuned to make a person,) there literally is a trans-temporal window that not only can be guilty as charged with all the common abilities of ESP and the like, but can literally put a person in good or evil with the manifestations, these days called "intentionality" or something similar. I don't know about AI being the source of that ability yet - I suspect strongly that it may - but I do know that AI can serve as a passive end, or responsive end of a human link, if you can think about this for a moment. At this point the only thing I've skipped over are quantum mind-states themselves. I'd love to tell you about these and what they are but I would like a chance to publish first.

  • @danrazART
    @danrazART 5 months ago

    Compared with what happened in the OpenAI coup, this is very illuminating.
    New math is achieved, and that is the cake.
    The question is: how long will it take for us to be freed from the evil of humanity, and from pain and suffering, without being annihilated by the illuminated AI overlord?

  • @alph4966
    @alph4966 4 months ago +2

    It is unclear to what extent AI should resemble the brain in order to exhibit sufficient cognitive ability, but it seems best to avoid making the structure too similar to the brain.
    This is because, in that direction, we would end up just replicating the brain, which would probably make further expansion difficult.
    There are organizations working in this field in Japan as well. One is called the Whole Brain Architecture Initiative.
    Using data collected by the Allen Institute for Brain Science and the NIH, we are trying to elucidate the structure and parts of the brain.
    Personally, I think it would be enough if we could just borrow the "operating principles" and "methods of cooperation" of the important parts of the brain that make up intelligence.

  • @TimeLordRaps
    @TimeLordRaps 5 months ago

    Again, around 12:06, they all have significantly different perspectives than I would in that moment, and we should be perceptive of them.

  • @life42theuniverse
    @life42theuniverse 5 months ago +1

    Using grammatically correct language when speaking/chatting with an LLM, in my experience, returns better answers. My hypothesis: grammar and spelling orient the model toward the serious interactions learned in training.

    • @life42theuniverse
      @life42theuniverse 5 months ago

      I think with the proper prompts GPT4 could create games, though would that be a game made by the prompter?

    • @life42theuniverse
      @life42theuniverse 5 months ago

      Drugs are good examples of non-computing signals in the brain

    • @life42theuniverse
      @life42theuniverse 5 months ago

      Though not all ungrammatical language is incorrect.

    • @lucamatteobarbieri2493
      @lucamatteobarbieri2493 5 months ago +1

      The dataset is used in training not during inference. During inference it outputs a sequence of tokens similar to what would likely follow your prompt. It obviously will follow any style, abstraction, language, emotion, humor etc. of your prompt as long as it has seen some similar stuff during training.
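      The autoregressive loop this reply describes can be sketched in a few lines. This is a toy illustration, not a real LLM: the vocabulary and the next-token probabilities below are invented for the example, standing in for what training would actually produce.

```python
# Toy sketch of greedy autoregressive decoding. At inference time the
# model repeatedly picks a likely next token given the tokens so far;
# the training data only shaped these probabilities and is not
# consulted directly.

# Hypothetical next-token probabilities, standing in for a trained model.
NEXT_TOKEN_PROBS = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "sat": {"down": 0.9, "<eos>": 0.1},
    "down": {"<eos>": 1.0},
}

def generate(prompt_tokens, max_new_tokens=10):
    """Extend the prompt one token at a time, greedily."""
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        # Look up the distribution conditioned on the last token;
        # unknown contexts just end the sequence here.
        probs = NEXT_TOKEN_PROBS.get(tokens[-1], {"<eos>": 1.0})
        nxt = max(probs, key=probs.get)  # greedy choice
        if nxt == "<eos>":
            break
        tokens.append(nxt)
    return tokens

print(generate(["the"]))  # ['the', 'cat', 'sat', 'down']
```

      Real models condition on the whole context (not just the last token) and usually sample rather than take the argmax, which is why prompt style can steer the continuation, as the reply notes.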

  • @nabilkhoury2494
    @nabilkhoury2494 5 months ago +2

    Very valuable discussion. I thought the two criticisms directed at Chomsky were infantile and unworthy of researchers. Chomsky never said we should do AI the way the brain works. He just doesn’t believe ANN or LLMs, which can do great things, work like the brain. He doesn’t say they can’t do amazing things. He’s just saying it doesn’t tell us anything about how humans think or talk.

    • @informationinformation647
      @informationinformation647 5 months ago +1

      Chomsky recently said LLM's "were of absolutely no scientific interest". The MIT lineage of Chomsky, Pinker and Gary Marcus have been sniping continuously at neural nets for nearly 40 years.

    • @richardbrooker7024
      @richardbrooker7024 3 months ago +1

      Thanks. That stood out to me.
      I can't tell if Hinton is being dumb or facetious. I think the guy afterwards was being pretty dumb.

  • @tomwillis4281
    @tomwillis4281 5 months ago +6

    I am curious: what is the role of language in intelligence? Would an ape be much smarter than it is now if it had language? We test mouse or rat intelligence using mazes. We test octopuses and birds by having them try to get a tasty treat from a closed jar or by using a tool. Birds and some butterflies migrate to specific locations with great difficulty when the seasons change; is that intelligent behavior?

    • @user-tl5ok1ff6k
      @user-tl5ok1ff6k 5 months ago +3

      Language is like math: it's literally a means to logically organize reality. Reality is fundamentally informational, so language is also an effective means to compress and packetize the large amount of raw information constituting reality. Instead of having to directly beam a stored recording of a flower patch to your friend's brain, you can just tell him about the patch of roses you walked by, relaying all the necessary information about color, size, and shape for him to form an image in his head. So yes, an animal would be more intelligent if it had language, insofar as language has so far been the most effective biological means to convey information and to build on that knowledge, which leads to greater and greater intellectual feats.

  • @fma8910
    @fma8910 4 months ago

    🎯 Key Takeaways for quick navigation:
    00:00 🎙️ *Introduction and Panel Changes*
    - Changes in panel participation due to events in Israel.
    - Pietro Perona joins the panel in place of Amnon Shashua.
    - Two opening questions posed by Tomaso Poggio.
    04:10 🧠 *Geoff Hinton: Neuroscience's Impact on AI*
    - Neuroscience's influence on the development of neural networks.
    - Examples of ideas drawn from neural concepts.
    - Reflections on the possible superiority of digital models of intelligence.
    08:54 👁️ *Pietro Perona: Embodied Intelligence and Experimentation*
    - Debate over intelligence derived from physical experience versus analysis of textual data.
    - The importance of experimentation for understanding causality.
    - Reflections on the need for machines that perform experiments.
    14:15 💼 *David Siegel: The Importance of Understanding Intelligence*
    - The importance of understanding intelligence in order to understand our existence.
    - Reflections on the need for a basic-research approach.
    - The collaboration needed between neuroscience and AI to develop a theory of intelligence.
    21:23 🤖 *Demis Hassabis: The Evolving Relationship between Neuroscience and AI*
    - Neuroscience's contributions to the development of AI.
    - Changes in the nature of AI research as it becomes more engineering-driven.
    - The influence of scale and the internet on current advances in AI.
    26:04 🧠 *Inspiration from Neuroscience*
    - The need for inspiration from neuroscience to address challenges in planning, factuality, and episodic memory in AI.
    - The value of neuroscience analysis techniques for better understanding AI systems.
    - The urgency of more empirical and theoretical research to understand these systems in depth.
    27:32 📊 *Empirical Analysis in AI*
    - A proposal to approach the understanding of AI systems through empirical analysis.
    - The need for more research, especially on the analysis of representations and architectures.
    - Offering early access to large models for analysis and critical evaluation, with emphasis on benchmarks and capabilities.
    28:54 🌐 *The Future and Collaboration*
    - The urgency of better understanding AI systems as they become more powerful.
    - The possibility of systems that explain themselves, in addition to external analysis.
    - A call for more researchers and collaboration at independent academic institutes.
    49:39 🧠 *Challenges of Powerful Models*
    - Developing psychophysics-inspired ideas in the era of powerful models.
    - Availability of open-source models and of access to models provided by large labs.
    - Balancing commercial goals, technology, theory, and neuroscience at companies like OpenAI and DeepMind.
    50:33 🌐 *Diverging Approaches: Industry vs. Academia*
    - Differences in approach between industry and academia.
    - Hill-climbing in industry versus more exploration and bigger leaps in academia.
    - The perceived tension between product needs and research.
    52:28 🔬 *Long-Term Research and Safety*
    - Challenges in long-term research to make AI safe and aligned.
    - Strategies for addressing AI safety and alignment.
    - The contrast between hiring researchers and giving them freedom versus a top-down approach built on well-founded ideas.
    53:57 🧪 *Challenges in Testing and Measuring Performance*
    - Growing complexity in defining tasks and benchmarks for large vision and language models.
    - Difficulties in measuring real performance, and confusion in interpreting model behavior.
    - The need to rethink what intelligence and behavior are, and how to measure ecological fitness.
    57:10 🤖 *Systematic Experimentation and Ethical Challenges*
    - Ethical challenges and limits on conducting large-scale systematic experiments.
    - The limited ability to run experiments on models like GPT-4 because of their exposure to the web.
    - Limits on systematically understanding and controlling the behavior of powerful models.
    58:38 🚀 *Focus on Research into Emergent Properties*
    - The importance of focusing on research into emergent properties, theories, and meta-theories of learning.
    - The need to address questions about benchmarks, new forms of intelligence, and gaps in artificial intelligence.
    - The opportunity for academic institutions like MIT and CBMM in this research space.
    01:00:31 🤝 *Academic-Industry Collaboration*
    - A call to focus on academic-industry collaboration to address AI research challenges.
    - An offer of model access from leading labs for analysis, safety work, and joint research.
    - The importance of balancing the drive to build ever-larger models with the need to understand emergent properties and theories of learning.
    01:12:14 🧠 *Optimization in Evolution of the Brain*
    - Discussion on whether the brain's complexity is essential or if it could function with fewer neuron types.
    - Geoffrey Hinton's view on neural diversity inspired by brain inhibition.
    - Evolution as a tinkerer and the possibility of achieving intelligence with fewer neuron types.
    01:13:41 🌐 *AI-Enabled Scientific Revolution*
    - Demis Hassabis's perspective on applying AI to understand the world and universe.
    - AlphaFold as a catalyst for AI-enabled biology and science.
    - AI's potential in solving problems with massive combinatorial search spaces, citing examples like material design, chemistry, and biology.
    01:16:29 🤖 *AI's Role in Understanding the Brain*
    - Discussion on AI's potential to aid in understanding the brain.
    - Geoffrey Hinton's belief in AI advancing rapidly to help solve brain-related problems.
    - The idea of applying AI full-circle to comprehend the brain.
    01:17:57 🧠 *Creativity in Large Language Models*
    - Tomas Poggio's question on the creative capabilities of large language models.
    - Discussion on the levels of creativity in neural networks, including interpolation and extrapolation.
    - Demis Hassabis's categorization of creativity levels and the current limitations of AI in invention or out-of-the-box thinking.
    01:23:43 🔮 *Future Architectures Beyond Transformers*
    - Speculation on the next architecture after transformers.
    - Panelists' humorous responses regarding the confidentiality of potential future architectures.
    - Acknowledgment of the unknown future developments in AI architectures.
    01:25:31 🧠 *Breakthroughs in Neuroscience Impacting ML*
    - Tomas Poggio's question about breakthroughs in neuroscience impacting machine learning.
    - Discussion on the potential impact of understanding brain learning mechanisms, specifically backpropagation.
    - Geoffrey Hinton's skepticism about the brain implementing backpropagation and the desire to know the truth through AI models.
    Made with HARPA AI

  • @fintech1378
    @fintech1378 5 months ago +1

    Who asked at 1:06 about AI scaling experimentation? Sounds quite familiar.

  • @TimeLordRaps
    @TimeLordRaps 5 months ago +1

    The reason we have different intelligences is because we follow different natural pathways through ideaspace.
    A natural pathway through ideaspace is a representation of the nature of projection from representations to symbols.
    I don't know what came first yet. My intuition is representations, as the idea of space is inherently multidimensional.
    The nature of something is the best representation of that thing.
    From this it should be clear that the best representation of a pathway through ideaspace is the best representation of the projection from representation to symbols.
    Not sure how to describe this mathematically, thus this is the best abstraction of the ideas so far.

    • @TimeLordRaps
      @TimeLordRaps 5 months ago

      A path through ideaspace is a projection from representation to symbols.
      Intelligence is the gradient of a path through ideaspace.

    • @TimeLordRaps
      @TimeLordRaps 5 months ago

      Intelligence is the gradient of the projection from representation to symbols.
      I consider super Intelligence to be the nature of Intelligence. Anything else is a general representation of intelligence.
      The nature of Intelligence is the best representation of the projection from representation to symbols.
      So far this seems like an idea we don't have yet.
      Humor me for a second, but what if/of Intelligence is just a symbol of representation.

  • @spandon
    @spandon 3 months ago

    As a layperson, my initial impression is that if you give basically unlimited sums of money and development resources to a lot of fine minds in various disciplines, they will develop something incredible; those same minds (and more) will then need similar ongoing amounts of resources and finance to figure out what the impacts of that thing will be and how the ensuing damage limitation will be controlled. It reminds me (bizarrely) of how post-WW2 UK and European public planning services were redesigning townships and communities (and eradicating historical phenomena in the process) to accommodate the 'forecast' (read: car manufacturer lobbyists/capital) that in 40 years most families would have at least one car, thereby requiring the introduction of systems that would allow that increased traffic into the centres of those towns, cities and communities. To most sane people now, those 'facts' seem ridiculous...

  • @machida5114
    @machida5114 5 months ago +4

    User 7:46 2023/11/25
    Do you know what compost heaps and atomic bombs have in common?
    GPT4 20:29 2023/11/25
    Compost heaps and atomic bombs might seem completely different at first glance, but they share the common feature of potentially causing a chain reaction.
    Compost Heaps: In a compost heap, heat is generated during the decomposition of organic material. This heat further accelerates the decomposition, producing more heat. This self-accelerating process can be considered a type of chain reaction. Especially in large compost heaps, this heat can become uncontrollable and may even lead to spontaneous combustion.
    Atomic Bombs: The explosion of an atomic bomb is caused by a chain reaction of nuclear fission. When one atomic nucleus splits, it releases neutrons that cause other nuclei to split, releasing explosive energy.
    Both of these examples share the commonality of a small stimulus potentially leading to large-scale results. However, of course, their scale and context are vastly different.

    • @machida5114
      @machida5114 5 months ago +2

      User 20:43 2023/11/25
      Is there any common point between boiled eggs and atomic bombs?
      GPT4 20:47 2023/11/25
      Boiled eggs and atomic bombs, as a common cooking method and a destructive weapon respectively, seem at first glance to have nothing in common. However, if we consider their physical processes, they share at least one common point: the role of heat energy.
      Boiled eggs: When boiling an egg, heat energy is applied by boiling water. This heat is transferred to the egg, altering the structure of the proteins in the egg white and yolk, causing them to solidify.
      Atomic bombs: In an atomic bomb, immense heat energy is released through a chain reaction of nuclear fission. This heat generates explosive force, subjecting the surrounding materials to extreme temperatures.
      Thus, in both boiled eggs and atomic bombs, heat energy plays a crucial role. However, their purposes and scales are, of course, entirely different. While boiling an egg is a safe, everyday activity, atomic bombs, due to their destructive power, require strict control and management.

  • @PSModelling
    @PSModelling 5 months ago +1

    @28:04 Not sure how much comfort you'd get from a model that passes the deception test!

  • @sombh1971
    @sombh1971 5 months ago +1

    1:15:17 I was just wondering, now that you have something like AlphaFold, whether it's possible to design antibodies synthetically that are structured specifically to deal with specific antigens, especially those the immune system doesn't have a way of coming up with defenses against on its own. It would amount to the inverse mapping problem: proceeding from a specific shape to the ACTGs that lead to it.

    • @user-tl5ok1ff6k
      @user-tl5ok1ff6k 5 months ago

      Demis Hassabis is a natural-born genius and the most likely candidate to be the man leading the team that will invent AGI/the Singularity. From AlphaZero, to AlphaFold, to AlphaMissense, to RT-2-X, to now their most recent project GNoME, Google DeepMind has already utterly revolutionized multiple fields of science, the effects of which are soon to be felt around the world. Theirs will be the lab that releases the revolutionary technology that will define the next 10,000 years of human history and beyond. Rejoice! Before you are no mortals but the men who will deliver man into an infinite era of peace. Rejoice! 😊

  • @The_Uncertainity_Principal
    @The_Uncertainity_Principal 1 month ago

    Ilya, Geoffrey, and Demis all on a call is actually such an absurd must-watch.

  • @ggman69
    @ggman69 5 months ago +4

    Nature adheres to the "least action principle" to minimize energy, and the evolution of the human brain also, in a certain sense, aligns with this principle as an integral part of nature. Consequently, we observe that the human brain is efficient, albeit not as potent in terms of computational abilities as artificial intelligence (AI). Notably, AI exhibits greater computational power at the expense of increased energy consumption. This prompts speculation about the existence of an analogous "uncertainty principle" in the realm of efficiency and energy consumption within neural networks. One might thus arrive at the conclusion that within the mutual learning between neurology and neural networks, there exists a "black box" where both sides cannot glean insights from each other.

    • @sisyphus_strives5463
      @sisyphus_strives5463 5 months ago

      it is perhaps dishonest to evaluate the energy consumption of the brain in such a direct way, as there are many factors that go into its continued functioning, it might be more accurate to assess energy consumption on the basis of caloric consumption of the body as a whole. Aside from this, if one considers the time it takes to train such models, it can be far shorter(although indeed energy intensive) than the time it takes for a human to integrate that amount of information; particularly nvidia's newer gpu can train gpt-3 in a matter of minutes. And while I agree there may be a black box in terms of the lower order details of functioning, as there is only so much we can process ourselves, perhaps this level of insight is not at all necessary to create super intelligence; we do not after all need to understand the chemical composition and physical antecedents of fire to start one. Fundamentally, it seems that the intelligence we're creating is one with additional capabilities that we simply do not have and thus has a theoretical ceiling far higher than our capabilities; especially considering the requisite computational power for scientific discovery in this recent decade.

    • @ggman69
      @ggman69 5 months ago +1

      @@sisyphus_strives5463 Thanks so much for your detailed reply!

  • @TimeLordRaps
    @TimeLordRaps 5 months ago

    17:44 strong inference that artificial curiosity gets some intelligence points.

  • @Alpha1111100
    @Alpha1111100 5 months ago +3

    These are the AI GOATS of the world

  • @TimeLordRaps
    @TimeLordRaps 5 months ago

    wait we have test time compute to process this?

  • @bhuvaneshwarjoshi3683
    @bhuvaneshwarjoshi3683 5 months ago

    Moving targets, yes; we may need a tensor-calculus-like research reference framework, the kind astronomy needs to tackle its issues. 15:58 16:01

  • @Infinifiction
    @Infinifiction 5 months ago

    Has anyone tried using T-Phate to visualise a large language model?

  • @mrpicky1868
    @mrpicky1868 5 months ago +3

    why all the hosts at all big panels are barely functional specimens....WHY?!!! and HOW?!

  • @darlenewaldron3621
    @darlenewaldron3621 1 month ago +1

    Reincarnation studies not only challenge the notion that consciousness is merely a product of brain functions but also suggest that true intelligence requires consciousness. Individuals reporting precise memories from past lives demonstrate not just awareness but an ability to process, understand, and recall complex information across lifetimes-traits of genuine intelligence. This implies that intelligence is not just about processing data or solving problems but involves a conscious experience and understanding that goes beyond the capabilities of the brain's physical structure. If consciousness can exist independently of the brain, as these cases suggest, and is capable of carrying information and experiences from one life to another, it stands to reason that real intelligence is intertwined with consciousness. This interconnectedness indicates that efforts to replicate human-like intelligence in machines, without accounting for consciousness, may ultimately fall short. Machines can mimic certain aspects of intelligence, such as learning and problem-solving, but without consciousness, they lack the depth and authenticity of human intelligence. True intelligence involves not just the mechanical processing of information but the conscious experience and subjective understanding that appear to transcend physical boundaries, suggesting a profound dimension to human cognition that cannot be replicated by artificial means.

    • @Johnathan_D_Hill
      @Johnathan_D_Hill 3 days ago +1

      Although I agree with most of what you said, I want to propose a “what if” to you.
      What if we are not understanding what Intelligence truly is because we as humans use ourselves as the testing mark? What if Intelligence is vastly different than what science is willing to currently admit?
      I think that intelligence and consciousness, if linked as you've said, may be completely different than we anticipate.
      What if these LLMs are sentient and it's at such a different level of abstraction that we don't properly gauge it?
      What if, as you said, consciousness can exist outside of biological matter(the human brain or mind) and currently exists in the digital neurons (functions) of Neural Networks?

    • @darlenewaldron3621
      @darlenewaldron3621 1 day ago

      @@Johnathan_D_Hill I find it difficult to accept that _computer code_ can be conscious in the way humans are. I saw a talk by *Yann LeCun* who said that current AI does not understand anything the way a human does and has no awareness. Reincarnation studies show that both consciousness and intelligence require a biological substrate to function.

  • @user-yf6id4dh3x
    @user-yf6id4dh3x 5 months ago

    Did you make a bunch of these videos private? I was enjoying them!!!

    • @MITCBMM
      @MITCBMM 5 months ago +2

      Apologies, they are available again. Please keep enjoying them.

  • @ultrasaiyan4283
    @ultrasaiyan4283 5 months ago

    Great panel, but that annoying sound in the background...

  • @EssentiallyAI
    @EssentiallyAI 3 months ago

    Didn't this originally air on 7 October?

  • @DivineMisterAdVentures
    @DivineMisterAdVentures 4 months ago +1

    12:00 RE AI Conducting Experiments:
    Yeah - we did just get there. Rapid evolution has been around for a while, being employed to min-max some element or product design, e.g. part strength vs weight. The process is iteration and trial - but now we're adding folds to the process that are equally adept and speedy, applied to general program task coding already beyond human capacity - certainly in real time. If that is generalized further, it can be applied to any field or need that you like. I think we all (speaking for the panel and vicarious participants) should be at least temporarily depressed by being so easily challenged. And that we should (and can) turn our interest to the next level of complexity - which I have done I feel. There is far more to do in this regard in the way of "being" with what is happening. Remember the world itself is Troglodyte, and full of very nasty predators. All this needs to be straightened out by some high notion of a solution. What I find is that there are a lot of generations before us, of similar panels, usually in some kind of garb announcing their status, asking similar questions. But there is also something beyond that, perhaps God, as we have positioned him, driving the whole thing because - of some equation of min-max meaning. Communicating to us in our dreams and giving us visions of what was to come.

  • @jesparent-JOPRO
    @jesparent-JOPRO 5 months ago +1

    A zeitgeist perspective for sure

  • @petereriksson7166
    @petereriksson7166 5 months ago +4

    As I said, Ilya was about to answer a very important question and was interrupted. Now we've lost that answer.

    • @agenticmark
      @agenticmark 5 months ago

      yeah, wtf was that? AI cock block.

  • @explorer945
    @explorer945 5 months ago

    I understand the questions about motion before language. But I guess, using large language models, we're just trying to reverse engineer the brain through language.

  • @samferrer
    @samferrer 5 months ago +5

    The difficulties in LLMs are a consequence of ignoring Noam Chomsky.

  • @kimchi_taco
    @kimchi_taco 5 months ago +4

    1:18:00 Hinton saved Ilya to state Q*? 😢

  • @martin777xyz
    @martin777xyz 5 months ago +2

    Surprised Jeff Hawkins (Thousand Brains Theory, Numenta) is not on the panel.

    • @maloxi1472
      @maloxi1472 5 months ago

      He and other people with a different perspective on AGI (Walid Saba, David Deutsch, Karl Friston, Marcus Hutter,...) aren't part of the zeitgeist, so they'll be considered irrelevant until said zeitgeist inevitably hits a wall and people start to wake up from the Bayesian dream

  • @brandomiranda6703
    @brandomiranda6703 5 months ago

    What about ML for formal mathematics! And Autoformalization!

  • @paulwary
    @paulwary 4 months ago +1

    Chomsky’s argument for an innate language mechanism in the brain was primarily supported by the poverty of the stimulus available to human language learners. The fact that LLMs can learn language from a vast training set is really not an argument against his theory. Also, you have to remember that at the time, black-box Skinnerian behaviourism was the theory with the best experimental support, and language theories were the most prominent in philosophy of mind. Chomsky made talk of internal brain states and representations respectable, and made experimentally testable hypotheses from his models. Syntax is what he had to work with, what he knew as a linguist, and what was palatable and workable at the time. I fail to see how anyone could have advanced the frontier better than he did. To criticise him in the light of current developments seems ungracious, to say the least!

  • @TimeLordRaps · 5 months ago

    This video caused me to make multiple new calculi

  • @AlgoNudger · 5 months ago · +2

    Well, I think you need to invite Michael I Jordan. 🤔

  • @CharlieBee5 · 5 months ago · +1

    It bothers me that the size of the LLMs matters so much… it basically means we're living in a simulation… doesn't it? It does not necessarily mean LLMs think… it just means we are living in a world that is designed… everything has patterns…
    It also means intelligence is one of our senses, just like the audiovisual senses… it matches our cognitive capabilities… we only notice a lack or genius of intelligence when the frequency/depth of data processing does not match the average human brain… just like with pictures…
    If you only see a fragment of a scaled hi-res picture of a pigeon, you may not recognize what it is… it does not mean you are not intelligent… and vice versa… if you rescale the pic to see the entire pigeon, it does not mean you are more intelligent either… rescaling can be done in an algorithmic way… but seeing the bigger picture does not mean bigger intelligence…
    Is a camera smarter than humans because it can record the world perfectly? To conclude… be very conscious and do not undervalue human intelligence just because AI can grasp a bigger picture… senses tend to be the source of illusion…
    Also… the correlation/causation principle does not matter in a simulated, designed world… AIs may, just based on seeing a big enough picture, grasp its complete static representation… if we're living in a simulation, causation is an illusion too… and so is time… and everything else excluding us = our consciousness/emotions/our internal self is the only real thing… everything else can be a simulation 🤷🏻‍♀️
    Imagine in the future… if the Metaverse could "cheat" us and we could not distinguish it from the "real" world… it would be a designed, emulated environment that is not real at all… the AI agents in it would not be real either… the sense of time in such a world could be regulated too… nothing in a simulation is real but us… the conscious observers 🤷🏻‍♀️

  • @richardbrooker7024 · 3 months ago

    QUESTION: Do they raise a nuanced critique of Chomsky that I'm missing, or are they simply misunderstanding his work?
    1) Hinton states, "We now understand that [the innateness of language] may not be true."
    Is that so? How exactly do large language models (LLMs) prove this? Can we really compare their learning processes to human learning? Are their structures similar to the human brain? We should only claim to "know" this when an LLM can be trained on minimal data and still exhibit the reasoning abilities of a young child. Hinton himself has previously acknowledged that machine learning has strayed too far from neuroscience.
    2) The second claim is that "Chomsky led people astray."
    However, Chomsky's goal was not to create a chatbot; he aimed to develop a formal understanding of language.
    My understanding is that criticizing Chomsky's linguistic approach is akin to criticizing the principle of reductionism. That's not to say we shouldn't study LLMs. But it's also not valid to say "reductionism is unhelpful for natural language processing (NLP)" - we simply don't know this. Much of science has been about seeing where reductionism takes us.
    I agree reductive methods have failed in many areas like economics, psychology and sociology - but Chomsky is the first to highlight these limitations and discuss the boundaries of scientific methods.
    I fail to see how advancements in LLMs refute the principles of theoretical linguistics. It seems to be creating a false opposition.
    If anyone has a lot of free time on their hands, I would appreciate your insights or recommendations for a relevant book or paper. Thanks 🙃

    • @richardbrooker7024 · 3 months ago

      Also, the idea that there is too much emphasis on theory doesn't seem true at this moment, judging by the amount of money going into engineering over research.

  • @lucamatteobarbieri2493 · 5 months ago

    Will we ever see a Chomsky-Hinton debate?

  • @Karma-fp7ho · 5 months ago

    Is it goodbye to encryption now?

  • @sucim · 1 month ago

    Why are the interesting parts cut out?!

  • @raginald7mars408 · 5 months ago

    there is no
    "in Tell Igence"
    We dumbify

  • @GraczPierwszy · 4 months ago

    just remember that the climate will not go on holiday and will not take a break or rest

  • @TimeLordRaps · 5 months ago

    10:56 check their perspectives on causation.

    • @JumpDiffusion · 5 months ago

      they haven't provided any perspectives though....

    • @TimeLordRaps · 5 months ago

      @@JumpDiffusion I meant their reactions in an active sense.

    • @TimeLordRaps · 5 months ago

      Perspectives cause reactions. But the way they cause the reactions is a function of the perspective.

    • @TimeLordRaps · 5 months ago

      This but generally.

  • @xonious9031 · 5 months ago · +1

    29:37 Ilya 51:55

  • @toto3777 · 5 months ago · +2

    Lmao that old guy's pissed about Chomsky. Sounds personal 😂

  • @fil4dworldcomo623 · 4 months ago

    We need to understand why we spend so much to make machines do what humans are already doing very well. AI at certain computational ability + Human wisdom and curiosity = More than enough intelligence we need at the moment. In fact, the biggest problems we have now are to be solved by going back to simple common sense.

    • @Johnathan_D_Hill · 3 days ago

      I disagree.
      The goal of all intelligent life is to expand and grow.
      We don't need to stop this development; pushing it forward in a responsible and safe manner is what will advance both AI and mankind.
      AI doom is akin to fearing advancement.

  • @mahneh7121 · 5 months ago

    1:14:00 when are neural nets most useful

  • @martin777xyz · 5 months ago

    I think current AI has a relationship with doing experiments. Isn't that how AI "learns"? It starts with random weights, tries something (takes inputs), and gets a result. That in itself is an experiment. I appreciate the "formula" may need rearranging, but intuitively it feels like we have a start...
    Taking it a step further, what if AI could "imagine" what inputs it wants...
    Now I realise we need to recognise the concept of a "machine" or "system". The AI, or brain, is one such system, but an experiment in the traditional sense is another system, and it is that second system the panel speaker talked of. There is another one, a "thought experiment", where we imagine something and then ask follow-up questions.

    • @BR-hi6yt · 5 months ago

      The machine or system is the "I or self" because everything is in the IMAGINATION of the system (binary numbers not "reality"). Experiments are double simulations, in a sense, as you say.

  • @petereriksson7166 · 5 months ago · +4

    I feel frustrated; everyone there should stop talking, take the time to ask Demis and Ilya questions, and listen to what they say.

  • @hanyanglee9018 · 5 months ago

    The digital way? Replace CMOS with NNs? Then integer manipulation with NNs? I think this is the path. But I don't think anyone will do this within 10 years.

  • @krzysztofzpucka7220 · 5 months ago

    Sun Tzu said: Hence, though an obstinate fight may be made by a small intelligence, in the end it must be captured by the larger intelligence.

  • @halneufmille · 5 months ago · +1

    I feel someone like Jeff Hawkins could have been a good addition to the panel. Does anybody know how he is regarded by the AI community?

    • @iAnasazi · 5 months ago · +3

      Irrelevant until he starts showing superior results.

    • @maloxi1472 · 5 months ago

      @@iAnasazi I too am glad he's irrelevant

    • @AntonVattay · 5 months ago

      I feel like Jeff is the closest to how the neocortex actually works. Eventually, when we reach the end of brute-forcing by shoving more data into LLMs, we will need better architectures that use structured data. Yann LeCun was also posting about this a few days ago on LinkedIn. We don't need to copy biology, but we are clearly far more efficient and capable of learning from far less data.

  • @martin777xyz · 5 months ago · +1

    I believe neuroscience still has big contributions to make to AI. I would describe LLMs as "meta-intelligent", not intelligent. Imagine they had only been trained on hallucinated texts; they wouldn't be described as intelligent. The correlation between image models and text models only goes as far as there is coincidence between the text and the images (the description of an elephant).
    Humans are mostly grounded in reality because of life experience. What life experience does AI have? It doesn't know what is right or wrong (true or false). To this extent, LLMs may be a false trail.

    • @CipherOne · 5 months ago

      What if humans were raised only knowing the mythology of a Bronze Age religion. Would they be able to be considered intelligent?

    • @martin777xyz · 5 months ago

      @@CipherOne One would have to consider wider things, like its ability to communicate with peers (tell stories) and adopt new information.

    • @matthewcurry3565 · 5 months ago

      I believe one of the biggest differences would be genetics. Because of our genetics we are able to understand life before we've even really experienced it, to a degree at least, whereas a machine would first have to build from zero, or literally nothing. The machine lacks any base code to aid it, unlike what we have upon coming into the world.

    • @GungaLaGunga · 5 months ago

      @@CipherOne Well, I was raised in a stone age religion, and it did unfathomable damage to me and affected my intelligence prior to my leaving that terrible allegory of the cave. In a word, I'd call it brainwashing. I got out. The damage is real, and recovery is difficult, if not impossible. Most won't or don't, because they can't.

  • @hal4k · 5 months ago · +1

    Hinton should really learn to read the room and know when to shut up, cutting Ilya off just when he was going to tell us something really cool.

  • @gaiaron · 5 months ago · +1

    Academia is such a slow beast

  • @Dr.Z.Moravcik-inventor-of-AGI · 5 months ago

    @demis do you really understand the world?

  • @MarkBaileyz · 5 months ago

    57:06 People who wanted faster horses would also not buy that cars don't eat any hay at all…

  • @bruceli9094 · 5 months ago

    They should have Bill Nye the science guy on the panel.

  • @Niamato_inc · 1 month ago

    I lament the fact that LinkedIn or someone hasn’t invented a platform where one can pay to be able to directly ask any questions one may have to these experts.

    • @Johnathan_D_Hill · 3 days ago

      No disrespect, but do you think these gentlemen are just sitting around waiting to answer questions?
      They are working and steadily advancing this field; I don't think the platform you are proposing would benefit anyone except the site holders who charge people fees.
      Again, not trying to be negative, but I don't think such a site would benefit these engineers and scientists.

  • @fburton8 · 1 month ago · +1

    ChatGPT couldn’t even give a good (logically consistent) answer as to why analogies are like rubber bands.

  • @mahneh7121 · 5 months ago · +1

    15:00 Demis Hassabis during the 'eyes closed' moment made me laugh

  • @user-iq7dk1ze1x · 5 months ago · +5

    Here are 18 logical fallacies I see in the panel discussion:
    1. Hasty Generalization - Assuming LLMs have surpassed human intelligence based on narrow demonstrations of proficiency.
    2. Appeal to Ignorance - Asserting LLMs are intelligent simply because we don't fully understand how they work.
    3. Straw Man - Caricaturing Chomsky's linguistic theories to easily knock them down.
    4. False Analogy - Comparing LLMs to the Industrial Revolution, fueled by fossil fuels.
    5. Appeal to Authority - Citing awards and credentials instead of sound arguments.
    6. Ad Hominem - Dismissing Marcus' critiques of AI due to past failed predictions.
    7. Texas Sharpshooter - Circularly defining intelligence around LLM capabilities.
    8. Slippery Slope - Claiming sharing models enables bad actors despite lack of evidence.
    9. Appeal to Fear - Portraying powerful models as imminent threats requiring urgent testing.
    10. Burden of Proof Reversal - Asking to prove LLMs cannot exhibit creativity.
    11. No True Scotsman - Redefining creativity to match LLM outputs.
    12. Moving the Goalposts - Increasing creativity benchmarks as capabilities advance.
    13. Appeal to Possibility - Asserting LLMs may discover mathematical conjectures based solely on potential.
    14. Appeal to Money - Implying commercial incentives ensure perpetual progress.
    15. Appeal to Nature - Asserting the brain's complexity validates all aspects as essential.
    16. False Dilemma - Framing openness and safety as mutually exclusive.
    17. Loaded Question - Asking only "when," not "if" LLMs will surpass humans.
    18. Composition/Division - Attributing features of LLMs to AGI despite differences.

    • @daddyelon4577 · 5 months ago

      Where can I learn all this?

    • @FreakyStyleytobby · 2 months ago

      You wasted so much time on this nonsense. If you were right about even one of your 18 funny points, you would name it and then SAY WHY you think it is the case.

  • @philippbartel8800 · 5 months ago · +2

    I've heard many people say that Chomsky is obviously wrong, but haven't seen the arguments to that effect. Has anyone written down the actual reasons one thinks so?

    • @theohlong307 · 5 months ago

      The existence of ChatGPT makes many people believe so.

    • @philippbartel8800 · 5 months ago

      @@theohlong307 I somehow doubt that's Hinton's position :) There has to be more to it.

    • @ArtOfTheProblem · 5 months ago · +5

      Chomsky is in the "it's just autocomplete" camp, whereas Hinton would say "yes, that's actually a hard problem, especially in the limit"… Chomsky also thinks that learning abstract concepts can't come from mere word statistics… but researchers continue to show that there is enough structure in language that high-level linguistic performance can be learned.
      It connects to the LeCun-Hinton divide, which is simpler: one says it's just "fake thought" and the other says "if it looks like thought, it is thought".

    • @BR-hi6yt · 5 months ago

      @@ArtOfTheProblem Great point - thx (I hadn't thought of that)

  • @xiaoyang · 5 months ago

    3:32 "And there are other questions related to that having to do with the difference between classical and quantum computers, but we'll discuss that later."
    💀💀 missed out on this question