#104

  • Published 29 Apr 2024
  • Support us! / mlst
    MLST Discord: / discord
    Christopher Summerfield is a Professor of Cognitive Neuroscience in the Department of Experimental Psychology at the University of Oxford and a Research Scientist at DeepMind UK. His work focuses on the neural and computational mechanisms by which humans make decisions.
    Chris has just released an incredible new book on AI called "Natural General Intelligence". It's my favourite book on AI that I have read so far.
    The book explores the algorithms and architectures that are driving progress in AI research, and discusses intelligence in the language of psychology and biology, using examples and analogies to be comprehensible to a wide audience. It also tackles longstanding theoretical questions about the nature of thought and knowledge.
    With Chris' permission, I read out a summarised version of Chapter 2 of his book, which is on intelligence, during the 30-minute MLST introduction.
    Buy his book here:
    global.oup.com/academic/produ...
    Interviewer: Dr. Tim Scarfe
    Pod version: anchor.fm/machinelearningstre...
    TOC:
    [00:00:00] Walk and talk with Chris on Knowledge and Abstractions
    [00:04:08] Intro to Chris and his book
    [00:05:55] (Intro) Tim reads Chapter 2: Intelligence
    [00:09:28] Intro continued: Goodhart's law
    [00:15:37] Intro continued: The "swiss cheese" situation
    [00:20:23] Intro continued: On Human Knowledge
    [00:23:37] Intro continued: Neats and Scruffies
    [00:30:22] Interview kick off
    [00:31:59] What does it mean to understand?
    [00:36:18] Aligning our language models
    [00:40:17] Creativity
    [00:41:40] "Meta" AI and basins of attraction
    [00:51:23] What can Neuroscience impart to AI
    [00:54:43] Sutton, neats and scruffies and human alignment
    [01:02:05] Reward is enough
    [01:19:46] John von Neumann and Intelligence
    [01:23:56] Compositionality
    References:
    The Language Game (Morten H. Christiansen, Nick Chater)
    www.penguin.co.uk/books/44168...
    Theory of general factor (Spearman)
    www.proquest.com/openview/7c2...
    Intelligence Reframed (Howard Gardner)
    books.google.co.uk/books?hl=e...
    The master algorithm (Pedro Domingos)
    www.amazon.co.uk/Master-Algor...
    A Thousand Brains: A New Theory of Intelligence (Jeff Hawkins)
    www.amazon.co.uk/Thousand-Bra...
    The bitter lesson (Rich Sutton)
    www.incompleteideas.net/IncIde...

COMMENTS • 54

  • @alivecoding4995
    @alivecoding4995 1 year ago +25

    Tim, I think it should be mentioned that your interviewing skills are really key to these highlights. You know a whole lot as an expert yourself, and still you do not interrupt, you leave every space for your interview partner, and, most of all, you show patience and real interest. On this meta level, you teach us a lot. Thanks.

  • @dr.mikeybee
    @dr.mikeybee 1 year ago +14

    Thank you, Tim, for producing these wonderful videos. Your content is always incredibly thought-provoking. You have the most interesting guests and the most interesting subject matter. I'm thoroughly convinced that synthetic intelligence is the most important topic of our age. We can do nothing better for humanity than to make strides in this field.

  • @BoRisMc
    @BoRisMc 1 year ago +2

    This channel is pure unadulterated Gold. The quality of the episodes is something else. Kudos!!

  • @francescopuddu8619
    @francescopuddu8619 1 year ago +6

    This content is unbelievably great, as usual. Please guys, don't forget that you're bringing so much value out there, and this will inspire so many people in their research and comprehension of the AI world.

  • @alivecoding4995
    @alivecoding4995 1 year ago +2

    Tim, as always, I am very grateful for your efforts to bring such high-quality content. Looking forward to a deep dive into this interesting topic.

  • @usonian11
    @usonian11 1 year ago +3

    Thanks!

  • @SPECTRUMSTRATEGIAK
    @SPECTRUMSTRATEGIAK 1 year ago

    As far as I’m concerned, the quality of the production and the level of the content say a lot about general intelligence. Thank you for the amazing work!

  • @dr.mikeybee
    @dr.mikeybee 1 year ago +1

    Rich Sutton has been right. Large models were the correct first step, and we can still do better with larger models. We haven't yet reached the point of diminishing returns. We will, however, reach a point where we have entered all of our accumulated world knowledge. At that point, and I would say in advance of that point, we need to get agents to reason with our accumulated knowledge. Right now the kinds of questions that can be answered are answered because of statistical probability, the pyramidal structure of networks, and the semantic taxonomical grouping of information in abstract dimensions. Questions that require multistep processes need smart agents. It's obvious to me that Bing Chat is an early example of an LLM run by a smart agent. Once we have a good enough understanding of what makes very smart agents, Rich Sutton will be right again that agents will be designed and optimized by machines.

  • @smkh2890
    @smkh2890 1 year ago +2

    I recognised Wadham from the first shots! My college 1972, Major Scholar in English.
    I never had such an interesting conversation while there, though!

  • @daniellayeghi2160
    @daniellayeghi2160 1 year ago +1

    I think a conversation with Emo Todorov would be fascinating. He has a very interesting CoRL keynote that addresses MDPs, external and internal reward, behaviourism, and other criticisms of RL.

  • @MMMM-sv1lk
    @MMMM-sv1lk 1 year ago

    Wow amazing intro again, the song choice, your content, the delivery ❤️🙏👌👏👏👏👏👏

    • @MMMM-sv1lk
      @MMMM-sv1lk 1 year ago

      I am guessing the first tune is a slowed-down version of a song... It could be Rick Astley's "Never Gonna Give You Up" for all I know... 🤔😂 It worked well here still...

  • @marcfruchtman9473
    @marcfruchtman9473 1 year ago +4

    Thank you for a really interesting Interview! The book does sound very interesting and I know that your recommendation really means that it is a very good book.
    Goodhart's Law ("When a measure becomes a target, it ceases to be a good measure") -- while well-intentioned, it is "flawed logic". It isn't meant to be used as a "law" but rather as a warning to system designers and people designing policy that if you design a test/policy for a very specific outcome, then those taking the test (following the policy) will likely find a flaw that lets them maximize their performance while at the same time defeating the original purpose of the test (or policy). Be careful with policy decisions, because while they may seem good at the outset, they may have unintended consequences. Anyway, the flaw is that we use tests to measure... and if all measures invalidate the usefulness of the test, then all testing must therefore be invalid, unless you don't use the results of that test to mean anything... in which case it was useless to begin with. So, yes, Goodhart's law should be called "Goodhart's Warning", NOT a law. And really, just be careful about how you design your tests and policies.
    There is no such thing as the perfect test for general intelligence. Any test (even tests with millions of questions, millions of puzzles etc) has a finite limit. And every test can only quantitatively test what we as humans think we know. That means, occasionally an AI will get a question wrong, when it was actually right, and we humans are actually wrong. (hehe)
    Yes, we can devise more and more novel tests, but at the end of the day, the definition of general is exactly that... a wide variety of subjects... we can't test EVERY subject, but if we did, it certainly wouldn't "invalidate" the measure of those subjects. It simply means that we need to be careful to understand that an AI "knowing" something doesn't mean that it can use that knowledge in a unique fashion. We humans know things... we don't always have the capability of being creative in using what we know and being able to apply it to something else. The reason for this is manifold; for one thing, a human can bounce a ball... translated to other actions, we might think now we can bounce "anything"... until we discover that stuff breaks. Many of these things are "experiential", meaning that we had to experience a cup breaking before we learned how to avoid that. But there is not a solution that solves this problem... We often "do not know" the result of an action until we perform the action. We can speculate... but we don't always know. The world is not a linear formula. You can't just "solve it".
    With respect to the issue of the "full scope" of human intelligence, we simply don't have time for that. The reason why we do an intelligence test is the "driving factor" for the subject matter of the test. Said differently, taking an intelligence test that focuses on survival at high altitudes has almost nothing to do with an intelligence test on algebra. Yes, there is "some" overlap somewhere in the subject matter, but generally, taking an intelligence test should say something about what you are looking for. So, back to a general test: you need a wide variety of subjects to test on.
    With regard to computation vs an "other" approach... brute-force computation is only useful for problems with a very limited scope of variables. Once the number of target variables increases beyond a certain point, the number of calculations required to solve the problem takes more time than the universe has already existed. (Frankly, we really want the compute time to be in fractions of a second.) So it is very obvious to me that any general AI cannot possibly use the "computational" method without pruning (or something similar, such as emphasizing/de-emphasizing predictions), which involves using some other method.
    BTW I reported the troll comments below, and (youtube) blocks the entire thread after that, so I may not be able to see any comments to me on that troll thread).

    • @marcfruchtman9473
      @marcfruchtman9473 1 year ago +1

      Re: Systems generating their own goals. This is fine provided that the system is safeguarded in such a way that the goal, the method(s) to achieve that goal, and potentially the associated secondary consequences are all "acceptable" by human standards... but the issue is really when those goals fall outside that realm of "safe". Many futurists have already predicted the potential for disaster when you start down this road. So, the process really needs to be taken very seriously.
      Re: Intelligence tests are designed to make "a certain group" (West) look smarter than another group. Is that why certain countries NOT in the west routinely score higher? LoL... It is clearly highly related to how much emphasis we put on educating the children and scoring on the tests. This also goes back to Goodhart's Warning... if you put too much emphasis on scoring well on a certain type of test, you will create problems.
      Re: Vocabulary being cultural... yes... yes it is. Amazingly enough, understanding the meaning of words is a measure of understanding... which is a measure of intelligence. It makes no sense to be "jaded" that we have a vocabulary test when testing comprehension....for that matter are we now supposed to make all tests without words because different cultures exist? Obviously, a person who speaks English will take an English based IQ test, and will fail one written in ancient Babylonian. Why is this presented as an issue when the solution is very clear? Use the language of the culture. However, the "Culture" must have the equivalent words to express the same concepts. Doing that ...we create tests that can be comparative and fair for all cultures that understand those concepts. But, some cultures are not able to do that, and may need additional education at the early levels to allow those cultures to be included in the tests.

  • @dr.mikeybee
    @dr.mikeybee 1 year ago

    LLMs can generate goals if prompted to do so. As part of a cognitive architecture, agents can use an LLM to generate their goals.

  • @dr.mikeybee
    @dr.mikeybee 1 year ago +1

    There's a good chance that I might have generated the same statement about grape juice. Then I would have rejected it. My mind synthesizes all sorts of crap in "rapid" succession. I'm constantly "writing jokes" in my head, making puns, concatenating thoughts, and trying out ideas. That's what a generative model should do. Then what's generated needs to be tested. This can be done simply by submitting a synthetic prompt: "Is [the generated statement] true?" Or change the temperature and get a second generated response, then submit a new synthetic prompt: "Is [response A] or [response B] a better response?" Couple this with active-context search and a good question/answer head, and we're getting somewhere. Which idea is likely to be more financially beneficial, etc.? Which is likely to be more socially acceptable, etc.? Use several of these kinds of tricks -- or hundreds -- and vote. Store important questions and answers. Store actions and results. Use history for greedy search and context building. Eventually, we will build systems that will learn to optimize pipelines of models that search, plan, evaluate, replan, act, store, retrieve, etc., with new model creation, semantic search expansion, real-world robotic testing, chemical analysis, etc. Or just ask a human to perform a test. I think as researchers we have a tendency to miss the forest for the trees.

  • @rrathore01
    @rrathore01 1 year ago

    Thanks

  • @ThiagoSTeixeir4
    @ThiagoSTeixeir4 1 year ago +4

    Nice

  • @TheShadyStudios
    @TheShadyStudios 1 year ago +1

    Y’all should interview Tim Behrens of Oxford :D low-level computational neuro would be cool

  • @johntanchongmin
    @johntanchongmin 1 year ago

    1:17:28 Indeed, optimization is not enough for learning. It tends to be slow to converge, and more goal-directed approaches without the need for optimization would probably be better for surviving in the real world.

  • @dr.mikeybee
    @dr.mikeybee 1 year ago

    Consider a prompt that acts as an automaton that generates and processes an infinite series of new synthetic prompts.

  • @richardbrucebaxter
    @richardbrucebaxter 1 year ago

    1:05:50 - The "survival and reproduction constraint" (direct fitness) is appropriate for most organisms; however, organisms capable of kin detection and group coordination (language) will optimise for propagation of their genetic information in the environment (inclusive fitness).

  • @rick-kv1gl
    @rick-kv1gl 1 year ago

    Loved the vid. Also, what's up with that phone, a Nokia or something? Looks dope.

  • @dougg1075
    @dougg1075 1 year ago

    VN also solved the explosive lens problems with the first bomb.

  • @dr.mikeybee
    @dr.mikeybee 1 year ago

    Here's a question for you, Tim, or for anyone else: If building context for an LLM is the key to natural language understanding, but if too much context produces confabulation, what is the proper way to filter context? Can an LLM with the correct prompt pull out the correct context? Can you ask an LLM to summarize a large batch of context to answer a prompt, and then submit the summary with the prompt? Are there summarization heads that have been trained to take a goal?

  • @debyton
    @debyton 1 year ago

    Missing from all of these discussions is the nature of individuality. Individuality is not a byproduct that can be dismissed as a pointless concern. Information processing does not rise to the level of thinking or intelligence in the absence of individuality. Individuality in all living entities, from the single living cell to the human being, is the kernel of all emergent behavior of a living individual. Individuality does not exist in the absence of a position of view (POV). In the absence of the POV, thinking becomes data processing, and intelligence exists only by some alternate description and metric different from that imposed upon a living individual. Academic communities either misrepresent individuality or have convinced themselves that individuality cannot be empirically understood. Not true. The question becomes: what defines individuality in living entities? {LIVE Science; Forums, History, and Culture; Culture History & Science; What is a living individual and is it naturally universally mobile?}

  • @Need_better_handle
    @Need_better_handle 1 year ago

    Great video. Just found your channel and I find it to be one of the best on YouTube. But I would say that while IQ tests, and standardized tests to a lesser degree, are not perfect at comparing intelligence between people, they are much better than presented in the Chapter 2 reading in the video.
    If you had to restart society with 10,000 people with an average IQ of 140 vs 10,000 people with an average IQ of 70, the results would be vastly different. Even if there was some regression to the mean, there would still be a vast difference between the two groups.
    When comparing rural IQ to upper-class IQ in a developed country such as the US, the IQ results between small towns and elite communities have diverged over the last 60 years. Charles Murray gives a great breakdown of this in his book Coming Apart. This stratification is because of the introduction of the SAT around the same time women started going to university at higher rates. This has led to the cognitive sorting of people into IQ-stratified groups. People meet people like themselves at universities and move to upper-class communities vs going back to their home communities and marrying a local woman. When comparing rural and urban people in developing countries, they have either not started the process of sorting people yet, or they might have been doing so just in the last 30 years, like China.
    But there is definitely a brain drain and stratification in advanced countries that is partially responsible for this weird political fight between elite Wokism and rural and working-class populism that led to Trumpism and Brexit. So anytime I hear someone from an elite university state that IQ tests are not that good at measuring intelligence, my wokism radar starts going off.
    I went to high school in a rural county that had a decent-sized high school where kids from surrounding counties would drive to go there as well. We had 400 graduates in our graduating class. There were about 40 to 60 of us that were collage tracked, and we all took the same Advanced Placement courses together. Most of these people went to university and now live in better areas. One of my smarter friends from that group commutes over an hour to get to work at his father's law firm in our home town so that he can still live on the outskirts of the nice areas in Nashville. Conversely, there are upper-class high schools and private schools in Nashville where all the students would be part of that AP caliber of student from my home town. People that go to high schools like that have a distorted view of what average intelligence is. The least intelligent kids in their classes were likely still above average nationally, or at least average nationally.

    • @saladtnogs
      @saladtnogs 1 year ago

      Those must've been pretty advanced courses, collages can be incredibly complex.

  • @jondor654
    @jondor654 11 months ago

    In a deep-time, punctuated causal thread like our lineage, with the assist of mutational fitting, is there any room for the emergence of a reverse-causality relationship?

  • @willd1mindmind639
    @willd1mindmind639 1 year ago

    Fundamentally the issue becomes the creation of tasks and the ability to measure performance on completing those tasks as some measure of mental capability. And a collection of these tasks theoretically becomes a measure of "intelligence" in terms of certain kinds of cognitive functions. However, within a modern educational system, in order to best be able to perform those tasks you need appropriate access to training and materials to "reinforce" your ability to perform those tasks on demand. And without being able to give every individual the same exact learning experience, you cannot truly 100% guarantee perfect measurement of cognitive skills. Of course most societies never had such a system and cognitive function was naturally measured by natural adaptation and the ability to survive, not by passing formalized tests.
    And this applies to AI in the sense that you have to also define the tasks and the ability to reliably measure performance on said tasks in order to qualify the functioning of the AI application or model. However, the issue becomes how to reproduce the results as stated in the papers or documentation of said AI model or application, because again, you need the exact same training data, in the same order, with the same curation by real humans, to be able to get the same results. Then beyond that you have the issue of defining what tasks and what data constitute a "standardized test" of general intelligence for AI. But beyond that, the bigger problem is that natural intelligence in biology is self-organizing, which makes it suitable for the definition of natural intelligence. But software AI is not self-organizing, because every distinct "state" that you want the AI to learn has to be described by humans. The AI by itself is not able to form symbols or meanings from symbols on its own, or even organize symbols into a structure without human input.

  • @winniesinisinis
    @winniesinisinis 1 year ago

    It seems that adopting a relativist approach may not necessarily lead to more robust models of intelligence. This viewpoint appears to be at odds with the goal of understanding the source of intelligence. The observable differences in group intelligence should not lead to the assumption that we misunderstand a group's intelligence when these expressions are absent. In fact, some groups may simply not be as intelligent by various practical measures.
    Although there are examples of specialized uses of intelligence that may appear impressive, they may not actually be so. Such instances could merely reflect task-specific training or knowledge within a narrow domain, without demonstrating higher efficiency compared to other groups.

  • @charleswang7225
    @charleswang7225 1 year ago

    I have to say the opposite of the three assumptions is much closer to general intelligence - be it natural or engineered

  • @TheJogug
    @TheJogug 1 year ago

    This talk gave me some interesting perspectives. But in many sections it got so hypothetical and so far from common vocabulary that you ended up miscommunicating.

  • @peterclarke3020
    @peterclarke3020 11 months ago

    Well, we know that humans ‘categorise’ things. A well-known cluster is:
    What, where, when, why, who, and how.
    ‘Where’ itself is often a label tied to a location, in 4 space-time dimensions.
    That’s not applicable to all objects; mostly to humans and other ‘actors’.
    Objects can often have some different property sets.
    As humans we link all of these into a complex multi-dimensional knowledge graph - significantly with some of these as conceptual dimensions.
    Humans construct these multidimensional objects as part of their thinking process, matching up parts with similar related properties.
    We recognise something like a dog, very differently to how present picture matching AI does it. Our process is far more robust, and much less easily fooled.
    But it requires pattern matching among multiple object dimensions simultaneously. To us many objects have sub-parts, which we may not bother with unless our attention is drawn to them.
    A car with 6 wheels, would automatically draw our attention to the wheels, since cars normally only have 4 wheels.
    But an ‘unusual’ characteristic is not enough to break our categorisation of the object, though we would mentally tag it as unusual.
    AI neural nets are far more primitive by comparison, although things like ChatGPT-4 are clearly good at some things. It’s also clear that they lack true understanding, because they lack these object graphs. It’s like they only have 2 or 2½ dimensions (maybe 3) of things, whereas humans commonly think in 7 or more graph dimensions.

  • @thesystem5980
    @thesystem5980 1 year ago

    Acknowledged.

  • @_ARCATEC_
    @_ARCATEC_ 1 year ago

    💓

  • @alivecoding4995
    @alivecoding4995 1 year ago

    I remember you quite often saying „the best book on AI that I ever read“. 😅

    • @MachineLearningStreetTalk
      @MachineLearningStreetTalk  1 year ago +1

      Chris has dethroned Melanie :) ua-cam.com/video/A8m1Oqz2HKc/v-deo.html

    • @alivecoding4995
      @alivecoding4995 1 year ago

      @@MachineLearningStreetTalk 😉 But isn’t this a whole different approach to the topic? Melanie‘s book appeared a little more basic when I started reading it. After the explanation of MLPs at the beginning, I stopped. This one here sounds way more subtle and intricate. Maybe you like both books for what they are :)

    • @MachineLearningStreetTalk
      @MachineLearningStreetTalk  1 year ago

      ​@@alivecoding4995 Yes, I was being a little bit tongue-in-cheek. Chris's book is way more academic and in-depth. I still love Melanie's book and would strongly recommend it for someone wanting to learn about the field of AI. All books have bias, too; Melanie and Chris are from very different schools of thought. Melanie: cognitive psychology, complexity, abstraction, GOFAI. Chris: neuroscience, RL, the philosophy of DeepMind(tm), Wittgenstein, pragmatics, etc.

  • @robertlambert5636
    @robertlambert5636 1 year ago

    How about having it as an audiobook?

  • @Sciencehub-oq5go
    @Sciencehub-oq5go 1 year ago

    I wonder whether the book might already be outdated with the latest developments around the 'Sparks of Artificial General Intelligence'? I guess Chris still expected this to be years away? As did we all.

    • @MachineLearningStreetTalk
      @MachineLearningStreetTalk  1 year ago +1

      Not remotely out of date

    • @Sciencehub-oq5go
      @Sciencehub-oq5go 1 year ago

      @@MachineLearningStreetTalk This is great to hear. The pace is so dramatic that I find it very hard to catch up. And especially my own perception of our natural human intelligence has become quite damaged. I feel very marginalized, both as a human being and as an ML engineer. And the book might underestimate this. But then, I will have a look into it.

  • @billtanno8960
    @billtanno8960 1 year ago

    37:33 ..oh the horrors. All the worst possible things right? Gayyyy

  • @TheMightyWalk
    @TheMightyWalk 1 year ago

    Chris is a wokey