IS THE MIND REALLY FLAT?

  • Published May 19, 2024
  • Nick Chater is Professor of Behavioural Science at Warwick Business School, where he works on rationality and language using a range of theoretical and experimental approaches. We discuss his books The Mind is Flat and The Language Game.
    Please support us on Patreon - / mlst - Access the private Discord, networking, and early access to content.
    MLST Discord: / discord
    / mlstreettalk
    Part 2 on Patreon now:
    / language-game-1-98661649
    Would you like to sponsor MLST? Please contact mlstreettalk@gmail.com
    Buy The Language Game:
    amzn.to/3SRHjPm
    Buy The Mind is Flat:
    amzn.to/3P3BUUC
    Find Nick:
    www.wbs.ac.uk/about/person/ni...
    nickjchater?lang=en
    TOC:
    00:00:00 The mind of Anna Karenina
    00:05:38 Our brain is like the Shoggoth
    00:09:26 Brain simulations are incoherent
    00:12:32 The world is gnarly
    00:19:56 Human moral status
    00:23:28 Living a hallucination
    00:25:37 Colour perception
    00:28:12 Universal knowledge? / rationalism
    00:31:33 Math realism
    00:35:13 Bayesian brain?
    00:39:53 Language game Kick off - Charades
    00:49:13 Evolution of language
    00:53:54 Intelligence in the memesphere
    00:58:21 Creativity
    01:04:41 Language encoding and overloading
    01:09:54 Analogical reasoning
    01:13:25 Language is complex
    01:14:19 Language evolution/decline
    01:17:23 Is language knowledge?
    01:19:53 Chomsky
    01:23:36 Theories of everything
    01:26:29 Prof Bishop's comments on the book
    01:31:09 Singularity
    Interviewer: Dr. Tim Scarfe
    / ecsquizor
    Pod version: podcasters.spotify.com/pod/sh...
  • Science & Technology

COMMENTS • 224

  • @Brian-oz8io
    @Brian-oz8io 2 months ago +9

    I remember an experiment they did in New York where they placed several groups of people who couldn’t communicate with other groups in different parts of the city and they were supposed to find a way to meet each other. All but one of the groups decided the most obvious place and time to meet would be the Empire State Building at noon. Humans will always find creative ways to communicate and understand each other.

  • @MachineLearningStreetTalk
    @MachineLearningStreetTalk  2 months ago +13

    Hope you enjoy the show folks. Please share it out on your socials - it helps a lot! Cheers, Tim

    • @today273
      @today273 2 months ago

      Thanks, it's a great interview. - Omar St

  • @davidfirth
    @davidfirth 2 months ago +16

    The ability to draw pretty pictures and the ability to imagine and create new things are completely separate art forms, and it's weird that we lump them all together.

    • @Gerlaffy
      @Gerlaffy 2 months ago +5

      Wouldn't the only difference be autonomy? At present, we don't have "agents" or an AGI that makes its own choices. I'd be interested to hear what you'd think of art projects if, one day, we had AGIs walking and talking and making choices of their own.

    • @oscarmoxon102
      @oscarmoxon102 2 months ago

      @@Gerlaffy Agreed - embodiment is a massive component; it gives pathos. But even in that case, the underlying architecture is still important, to an extent.

    • @andybrice2711
      @andybrice2711 2 months ago +5

      I dunno if they're entirely separate, though. A visual artist at least needs to be able to abstract reality into a simpler representation. And probably needs to be able to imagine new variations which still broadly follow the patterns of that abstracted version of reality.
      They're arguably all essentially compression, transformation, interpolation, and extrapolation in latent space.

    • @aibutttickler
      @aibutttickler 2 months ago +2

      You have no idea what's going on inside a diffusion model when it generates an image, just like you have no idea what actually goes on in your brain that allows you to imagine an image. For all you know, it could be the same thing, or the computers' process could be vastly more complex than that for humans. All we have, both for humans and for computers, is what ends up actually being output by either whether in the form of paint or pixels, and so far, computers win by miles and miles.

    • @andybrice2711
      @andybrice2711 2 months ago

      ​@@aibutttickler I'm not sure that's entirely true. We do have _some_ idea what's going on inside diffusion models. Because we can reverse-engineer what abstractions they've settled on. And we know it's not quite the same as a human brain, even though there are seemingly some considerable similarities.

  • @dylan_curious
    @dylan_curious 2 months ago +6

    The CEO of DeepMind just talked about this. In summary, it's about simulating answers using various pathways of learned knowledge, and then comparing those simulations before giving your answer.

    • @MachineLearningStreetTalk
      @MachineLearningStreetTalk  2 months ago +4

      No, it's much, much more than that - DeepMind does have some teams working on open-endedness and I'm interviewing some of them soon - but the notion that we just need search on top of an LLM is painfully naive

    •  2 months ago

      @MachineLearningStreetTalk Did you see the bit about GPT-4 being used for setting RL parameters, doing better than the experts by 50% or so, even with some negative correlation among the chosen rewards?

  • @richardsantomauro6947
    @richardsantomauro6947 2 months ago +14

    The topic of retrospection is a perfect example. When we are asked why we said something, we make it up on the fly - just like LLMs. There is a big body of research demonstrating we have no clue why we do things. And we don't even know what we are going to do until after we have decided.
    We need to stop using standards we don't meet as metrics for AI abilities.

    • @fromscratch8774
      @fromscratch8774 2 months ago +2

      AI has all* the information available to humanity. No individual human does. Our standards for it are either not high enough, or it just isn't as powerful as we want to believe

    • @jeff__w
      @jeff__w 2 months ago +2

      I agree-the idea that LLMs can tell us their reasoning or how they arrived at an answer is a fallacy because, like us, (1) their learning is, like most of ours, entirely contingency-based (e.g., their verbal output is grammatical but they're not following rules of grammar) and (2) they don’t have “nerves” (i.e., connections) going to the right places to allow them access to the processes that produce the outcomes.

  • @longline
    @longline 2 months ago +7

    Great to see Nick Chater here! I'd love you to speak with Lisa Feldman Barrett too. Also firmly from the school of prediction models, but studies emotion as the output of prediction error. Super relevant, concomitant, and interesting.

    • @MachineLearningStreetTalk
      @MachineLearningStreetTalk  2 months ago +4

      Yes would love to have Lisa on, huge fan!

    • @longline
      @longline 2 months ago

      @@MachineLearningStreetTalk I've sent this one to my dad, thank you. Feldman Barrett was the gateway drug that I introduced him to about a year ago.
      We're slowly building up to Friston....

  • @chrisanderson7820
    @chrisanderson7820 2 months ago +7

    The GNoME AI discovered 380,000 new stable materials that human materials science was previously unaware of. Prior to this, our known catalogue of stable materials was 48,000. The AI-run lab that GNoME controls has successfully synthesized 41 of these new materials. I fail to see how this is not new knowledge.
    Plus, look at it from this perspective. A common claim is that an AI can only ever be equal to what it has ingested, hence no AI can exceed a human specialist. But an AI can now be an expert biologist, expert physicist, expert poet, expert carpenter, expert accountant, etc., all at once. So even if we assume (unverified) that an AI can't exceed a human specialist, what happens when we have a system that provides expertise in 1000 human domains simultaneously and cross-references them for its modelling? No human can do this.
    It seems to me that a large portion of the goalpost-moving is being done by a form of human intellectual supremacism.

    • @dsa513
      @dsa513 2 months ago +2

      Seems you're making an emotional argument 😘😜😑

    • @OnigoroshiZero
      @OnigoroshiZero 2 months ago +2

      AI models are already producing results that exceed their training data in many cases, and these results (synthetic data) are used to train even better models.

    • @ChristophTungersleben
      @ChristophTungersleben 2 months ago +1

      Depends on the definition of "new" versus just "more".

  • @cs-vk4rn
    @cs-vk4rn 2 months ago +48

    Wouldn't the discoveries of FunSearch and AlphaFold qualify as new knowledge?

    •  2 months ago +3

      Wouldn't it setting parameters for reinforcement learning better than the experts suggest something... novel, at the least?

    • @SkilledTadpole
      @SkilledTadpole 2 months ago +3

      Yes.

    •  2 months ago +6

      Every initial presumption he's made has been easily discarded 10 minutes in by top-level performers. He seems to be projecting his failures, and ignoring AI being able to interpret thought via scans.

    • @TimScarfe
      @TimScarfe 2 months ago +6

      An extremely limited class of new knowledge

    •  2 months ago +5

      @@TimScarfe One day all of human knowledge might be categorized similarly.

  • @geldverdienenmitgeld2663
    @geldverdienenmitgeld2663 2 months ago +4

    Every system which can do trial and error can create new knowledge.
    Chess computers can do it.
    Even simple programs that check whether a number is prime can create new knowledge.
    LLMs can do it if they are not forced into a chatbot environment.

    • @trucid2
      @trucid2 2 months ago

      LLMs create a world model from all the text data fed into them. That's already new knowledge - lossily compressing the data, doing a kind of best fit.

    • @ChristophTungersleben
      @ChristophTungersleben 2 months ago

      Can we decide whether a machine generated something new, or just emulated a bit more?

  • @Gredias
    @Gredias 2 months ago +2

    Fantastic episode! Though, every time a guest ever utters the words "large language models will never be able to...", you should probably ask for a concrete thing which, if the guest saw the LLM doing it, would cause them to change their mind... :) I can imagine LLMs playing charades fairly successfully in the future!

  • @zrebbesh
    @zrebbesh 2 months ago +2

    I dunno. I'm in research, and TBH a lot of our "Profoundly Creative" ideas are in fact the kind of things that LLMs do when they get stuff wrong. 99% of the time of course it's crap, but that last bit sometimes results in a good idea. The talent of a "creative" researcher, at least IME, is mostly reinterpreting bad ideas, mistakes, and poor communication of abstract ideas, just trying to make sense of them, until we come up with something the person could possibly have intended to say instead, that might actually be right.
    And LLMs definitely get stuff wrong and then reinterpret. They just don't notice yet when that last "gaslighting attempt" (out of tens of thousands) is something that could really work.

    • @BrennanYoung
      @BrennanYoung 2 months ago

      Fully agree. Are there not numerous examples of scientists arriving at breakthrough insights due to the mishaps of clumsy assistants? The most groundbreaking new ideas are usually rejected as wrong, trivial, dangerous, boring... anything but innovative.
      And I think humans *like to believe* they want creativity/innovation far more than they actually appreciate it when it appears. Euw. Not like THAT!
      We train machines on *existing* idioms, and get disappointed when they fail to exemplify the target idiom adequately - I'm clicking thumbs-down because I wanted a Last Supper *without* a kangaroo.
      If we want LLM to be creative in the way humans are creative, we should train machines to surprise us, by "rewarding" them when they transgress or break the idioms we are familiar with, but how do we move from those piecemeal transgressions to the creation of a *genuinely new* idiom without some kind of community judgement?
      Gregory Bateson's often-repeated account about the training of porpoises is relevant here. How does the animal go from "perform this trick you've previously been rewarded for" to "surprise the trainer with a new trick which has never earned a fish"? This is a cognitive shift from exhibiting an *instance* of behavior to exploring a *class* of behavior. Interestingly, Bateson discovered that the animals would struggle to take this step, and eventually become uncooperative during training *unless* the trainer developed a friendly relationship with them *outside* the training context. So, when the training session begins again, the *deep frustration* of discovering the need to shift from the specific to the general is imposed by someone the animal trusts and "likes".

  • @user-yv6xw7ns3o
    @user-yv6xw7ns3o 2 months ago +3

    I love this podcast. Another phenomenal episode. Very thoughtful and relevant.

    •  2 months ago

      5 minutes in and he's been wrong on every point so far. Crazy that. At least for highest level performance and analysis.

    • @user-yv6xw7ns3o
      @user-yv6xw7ns3o 2 months ago

      I always appreciate when people are able to offer contrasting ideas and evidence. Care to elaborate on what you mean? I'm willing to learn, if you have some sources I can check out.

  • @GodbornNoven
    @GodbornNoven 2 months ago +2

    Akin to asking why the Earth is flat: the flaw is in the question. A statement is made - a statement that is absolutely false.

  • @longline
    @longline 2 months ago +2

    I'd say (re agency and charades) that you need the feedback between action and sampling - turning your head to see the source of a sound - to integrate multimodal data with the highest salience, at the lowest computation cost, for usable predictions (ones that match our typical usage of predictions).
    So GPT in control of its own robot is probably enough for charades with humans, and creativity, etc.
    If we're limiting the domain to "creative like us": two GPTs that talk to each other and can choose their own googling will be able to play charades with each other... But we won't understand. And they'll be creative, but we won't understand their domain and their nuance.
    "You've done it wrong, that's just white noise" we'll say. Wolfram alluded to this, the concept spaces that they've got that don't fit our ideas of valid outcomes.
    Like that.

  • @irasthewarrior
    @irasthewarrior 1 month ago +3

    An algorithm made to copy things can't be creative

  • @OnigoroshiZero
    @OnigoroshiZero 2 months ago +3

    There is no reason to have creativity if you can emulate its results better than humans...
    The art (both images and writing) that I have seen from the recent top models is outstanding.
    I have used DALL-E 3 to generate concept art for characters, creatures, and locations for my TTRPG setting (a campaign I run for a few friends), and Gemini to revise the 35-page lore document I wrote for the setting. All the results have been beyond what I was expecting, even compared to the decent human artists and writers I know.
    edit: I don't care if it is creative or just emulates creativity; I just care about getting amazing results, and even the current models are overdelivering.

  • @MattGray_Chelsoph
    @MattGray_Chelsoph 23 days ago

    awesome talk thanks!!

  • @artbytravissmith
    @artbytravissmith 2 months ago +4

    I do not know if it is because I work as a digital artist and often work in 3D modeling viewports, but I did not find what he described difficult. Once I looked at an image of a hexagonal pinwheel, I could instantly see it as an illusion of a cube from an orthographic perspective that I had seen in 3D modeling viewports tens of thousands of times, especially when viewing cubes in wireframe. In my mind's eye, I then rotated a wireframe cube into the position of the illusion demonstrated by the hexagonal pinwheel. Am I missing something? It wasn't difficult. Maybe this is because artists need to train their mind's eye beyond what most humans have to, especially if they value drawing/sculpting/painting from the mind's eye and understand the concept of 'drawing through' and the rules of perspective. Artists who learn sight-sizing might struggle more, as they are not trained to think in terms of volumes, focusing instead on copying what they see by comparing positive and negative spaces.

    • @5pp000
      @5pp000 2 months ago

      I don't have any trouble with it either. And I'm no artist.

    • @Hexanitrobenzene
      @Hexanitrobenzene 1 month ago

      It's a classic optical illusion. If you have seen it, you can do it.
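
The cube-to-hexagon claim debated in this thread is easy to check numerically (a minimal sketch, not from the video): projecting a cube's 8 vertices orthographically along its body diagonal sends the two diagonal endpoints to the centre and the other six onto a regular hexagon.

```python
import itertools
import math

def project(v):
    """Orthographic projection of v onto the plane with unit normal (1,1,1)/sqrt(3)."""
    n = (1 / math.sqrt(3),) * 3                   # unit body-diagonal direction
    d = sum(vi * ni for vi, ni in zip(v, n))      # component of v along the diagonal
    return tuple(vi - d * ni for vi, ni in zip(v, n))

# The 8 vertices of the cube [-1, 1]^3.
vertices = list(itertools.product((-1, 1), repeat=3))
projected = [project(v) for v in vertices]

# Distance of each projected vertex from the centre, sorted.
radii = sorted(round(math.hypot(*p), 6) for p in projected)
print(radii)  # two zeros (the diagonal endpoints), six equal values (~1.632993)
```

The six equidistant outer points, joined by the projected edges, trace the hexagonal outline, which is why the wireframe cube reads as a "hexagonal pinwheel" from this angle.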

  • @crazyprayingmantis5596
    @crazyprayingmantis5596 2 months ago +3

    Next month:
    Why AI is more creative than humans.

  • @jerrypeters1157
    @jerrypeters1157 2 months ago

    This was wonderful. The creativity of language makes me hopeful for humanity. Thank you.

  • @spiralsun1
    @spiralsun1 2 months ago

    Creativity is absolutely not a social thing. I have studied it my whole life. Everything depends on a tiny fraction of people. To say it's "social" completely misses what creativity actually is. More social actually means less creativity - it's probably a Dunning-Kruger thing. People are creative to an extent, yes, but real creativity beyond the instrumental or everyday personal level is extremely rare, for very solid reasons. It feels good to say it, but it is not true. You can't be truly creative without re-inventing the wheel, at least in your mind. And as you pointed out, most people do not do that. It's painful and extremely difficult to create. It isolates you, without a whole lot of reward in many cases - largely because you have to take the hard way and truly understand, not just learn the existing patterns. You described making incremental progress on what is, but truly being creative is diametrically opposed to that. It's not refinement. Even in nature we see it in evolution: new mutations take a while to become integrated and harmonious, because after the creativity comes selection for the refinement. They are different.

  • @ernststravoblofeld
    @ernststravoblofeld 2 months ago +19

    We have sayings that are centuries-old, about how there's nothing new under the sun, and how good artists borrow while great artists steal. But we demand more from AI. There's a lot of goalpost moving, and a lot of denial of the procedurality of human thought.

    • @user-yv6xw7ns3o
      @user-yv6xw7ns3o 2 months ago

      What do you think about the last part of the discussion where they moved into talking about the divergent aspects of the variation in human thinking, and its relation to the "procedurality" of human thought? I agree with you that much of human thinking is both memetic (repeating what we've experienced/learned before) and iterative (recombining and reconfiguring what we've experienced/learned before), but I think there is room for that alongside the idea that we also have divergence in our "search processes" and perception, from individual to individual across a group and also within an individual over time.

    • @ernststravoblofeld
      @ernststravoblofeld 2 months ago +3

      Honestly, I kind of zoned in and out. This was one of those rambling conversations that is fun to participate in over drinks, but is kind of boring to watch.
      The fact is, neural net based AI has, in the last decade, made advances nobody thought possible since they gave up in the 70s. And the things we thought would be the hardest turned out to be the easiest.
      I maintain that human intelligence is basically what current AI models are, with a few additions that won't be all that difficult. The more we pull out of the networks and into algorithmic space, like attention layers, the closer we get to human-like intelligence with its brain structures.
      I think the next generation of Mixture of Experts models will look more creative.
      The real problem is that the closer we get to human intelligence, the closer we get to human problems. And humans are much more troubled than creative.

    • @user-yv6xw7ns3o
      @user-yv6xw7ns3o 2 months ago

      @@ernststravoblofeld I tend to agree with a lot of what you're saying, even to the point that listening to this conversation in all its meandering was an exercise in finding the willingness to just follow along on this rambling journey through reflections on thinking and intelligence as a sort of wobbly experience. 🤓. I agree that as our computerized tech advances, it seems to be naturally progressing in directions that take on more and more aspects that resemble, mimic, or simply are elements of our human cognition and condition. I especially agree about the likely future functionality of AI in regards to attention layers and other related aspects like mixture of experts.

    • @ehfik
      @ehfik 2 months ago

      ​@@ernststravoblofeld you nailed it in the last paragraph. creative is not enough

    • @ernststravoblofeld
      @ernststravoblofeld 2 months ago

      @user-yv6xw7ns3o Human intelligence is the model we have in front of us. It's not surprising we progress towards something like it. We're really not all that creative. 😕

  • @magicology
    @magicology 2 months ago +2

    With our prompting… it does.

  • @bgtyhnmju7
    @bgtyhnmju7 2 months ago

    Interesting times. Good chat.

  • @damianlewis7550
    @damianlewis7550 2 months ago

    Do GFlowNets now solve the intractability problem of the probability distribution in an EM?

  • @thesleuthinvestor2251
    @thesleuthinvestor2251 1 month ago

    The ultimate test of AGI: write a novel that (1) a human reader cannot put down once they start reading it, and (2) cannot forget once they finish it. How many years do you think before AGI can do that?...

  • @inplfw
    @inplfw 1 month ago

    Definitely off-topic, but I don't think you should utilize the initialism of Machine Learning Street Talk. It's somewhat common for brands to drop vowels, or for people to try to sound out initialisms as though they were acronyms. Either of these spells disaster in your case, since I think most English speakers would subconsciously and regrettably slot an O and an E into MLST if they weren't immediately familiar with what it stood for.
    Maybe you want to risk it, but that's not something I would want any brand associated with, even accidentally, for only a moment.
    I started watching the video with my computer muted, and the first slide of subtitles caught me off-guard.

  • @Dan-dy8zp
    @Dan-dy8zp 2 months ago

    It's only a transparent cube with visible lines that can look like a hexagonal pinwheel.

  • @vmb326
    @vmb326 2 months ago

    **Staircase Reflections**
    Without much thought, answer this: "There is a staircase. Does it go up or down?"
    When this question was posed between myself and a friend, we instinctively replied. It led me to ponder: having never considered this before, should my response have been "I don't know"? Was my answer influenced by unconscious biases or my current mood, painting an internal picture based on my past experiences?
    Yet, the question was asked, and almost instantly, without understanding the process, I found myself saying "Up."
    We theorized the following: upon hearing the question, and with no clear basis for response, my brain, bypassing conscious thought, drew upon past experiences to select an answer, which I then voiced.
    This led to a fascinating realization: if I responded without conscious deliberation, was it really 'me' making the decision? It appeared my brain autonomously provided the answer.
    This raises further questions: Who is truly in control here? It seemed it wasn't 'me' in that moment...
    And so, we delved deeper: if the brain operates independently, based on past experiences, and its responses are not random but biased, what does this say about free will? If I was not part of the decision-making process, and the brain's response is predetermined by past influences, am I merely an observer in my own body?

    • @Hexanitrobenzene
      @Hexanitrobenzene 1 month ago

      I can "hear" Robert Sapolsky clapping in the background...

  • @germanic4316
    @germanic4316 2 months ago

    Absolute BRILLIANCE!

  • @glyph6757
    @glyph6757 1 month ago

    Would love to see a discussion on here about scientism.

  • @richrogers2157
    @richrogers2157 2 months ago

    Correct or incorrect is apparently debatable; just imagine where machine creativity will be a hundred or even a thousand years from now.

    • @ChristophTungersleben
      @ChristophTungersleben 2 months ago

      Wiener wrote something about the reproduction of electrons in Cybernetics.

  • @guillermobrand8458
    @guillermobrand8458 2 months ago

    I would suggest that both of them read the dialogue that Lemoine, a Google employee, had with LaMDA. Lemoine was fired early last year after suggesting that a lawyer be hired for an entity he believed arose from the artificial intelligence that "gave life" to said entity.

  • @abdullahifarah4243
    @abdullahifarah4243 2 months ago

    I also think that they are more powerful when you have a positive spiritual attraction

  • @nicholasfigueiredo3171
    @nicholasfigueiredo3171 2 months ago

    I made a hexagon from the cube and it wasn't that hard. It probably has to do with manipulating 3D objects in your mind; I had to do that a lot for calculus/physics and for games. I also agree that we don't make huge consistent models - people (and LLMs) often hold self-conflicting opinions. It is called cognitive dissonance. All life is just dynamics, and the difference between living things and non-living things is that living things are error-tolerant, self-replicating systems that decrease in entropy while increasing the entropy of the environment around them. "All models are wrong, some are useful" is a good way to think about the world. Reducing and simplifying the world is how we understand it; the more you can do that while maintaining accuracy, the better, since it allows you to explain and understand more things. Well, GPT-4 can do everything you said it shouldn't be able to do, even being creative, and I also agree that it will "reduce" our creativity by constraining it, the same way using the same search engine does, or being from the same city/culture/country or speaking the same language. It is not that those models can't expand faster and better than us, but that is not their "goal". I agree with you that they are not exactly agents; they are not alive and have no purpose - they are purely tools.

  • @abdelkaioumbouaicha
    @abdelkaioumbouaicha 2 months ago +2

    📝 Summary of Key Points:
    📌 The video discusses the importance of communication and understanding even without a shared language, using the example of playing charades to bridge the gap between individuals.
    🧐 It explores the concept that our minds create narratives that may not always accurately reflect our true motivations, challenging the idea of a single true answer for our behavior.
    🚀 The limitations of universal theories in understanding the complexity of reality, particularly in language and communication, are highlighted, emphasizing the need to embrace the messy and evolving nature of human interactions.
    💡 Additional Insights and Observations:
    💬 "Embracing the complexity and flexibility of language allows for a more nuanced and accurate understanding of how it functions in our lives."
    📊 The discussion touches on the skepticism towards large language models like GPT-3 and the importance of human agency in creative problem-solving.
    📣 Concluding Remarks:
    The video delves into the intricacies of human cognition, perception, and communication, challenging traditional views and encouraging a more nuanced understanding of language and interaction. It emphasizes the value of embracing complexity and flexibility in our approach to understanding ourselves and the world around us.
    Generated using TalkBud

  • @alexijohansen
    @alexijohansen 2 months ago +2

    What does he mean when he says we don’t understand ‘collective intelligence like that of the scientific community’?

    • @ChristophTungersleben
      @ChristophTungersleben 2 months ago

      Invoke mean of definition and use of "words".

    • @phutureproof
      @phutureproof 2 months ago

      I think he means we don't understand the collective intelligence like that of the scientific community, but I could be reading too much in to it.

  • @dg-ov4cf
    @dg-ov4cf 2 months ago

    Obviously right now it (for the most part) doesn't create new raw data based on direct observations about the world, but c'mon, one can easily see how it could think logically about the vast space of human knowledge and come up with new insights/conclusions about concepts that no one has ever considered in combination. I know getting them to actually think logically is the big challenge, but in theory they only need to do it slightly better than random chance.

  • @lucamatteobarbieri2493
    @lucamatteobarbieri2493 2 months ago +1

    If you train an LLM on dogs barking, can you expect it to do much more than barking? No. The same applies to contemporary LLMs. Garbage in, garbage out. Models are compressions of their training data, just like lossy JPEG or MP3.

  • @magicology
    @magicology 2 months ago

    1:00:05 exerting our will with creativity and with $$

  • @dustinrose361
    @dustinrose361 2 months ago +3

    "It is now far easier to make a full documentary on tea than to drink said tea."
    Dustin Rose, on the advent of AI.
    Lord be with us all.
    No longer can we trust what we hear or see, unless we can touch it - and even that will soon be in question.

  • @andersbodin1551
    @andersbodin1551 1 month ago

    What is he talking about? I can spin a cube in my head until its two diagonal corners align and it looks like a pinwheel.

  • @adolforosado
    @adolforosado 2 months ago

    Several things come from the plane of inertia of existence: love, compassion, appreciation for beauty and, yes, creativity. Those are the attributes of our soul, kid.
    Nothing you can do to silicon can achieve that. Nature's kitchen only.
    Make robots that will do our work, guarantee basic sustenance for all alive, and make sure they don't kill us all one day.

  • @JohnnyTwoFingers
    @JohnnyTwoFingers 2 months ago

    Jiddu Krishnamurti always said "thought is time". 🤔🤔🤔

  • @Anders01
    @Anders01 2 months ago

    I have a guess that reality has strong emergence, which means new events that are more than just a result of the past. That would explain the arrow of time, in the sense that the future has more complexity than the past. Causality then is a result of both the past and the future. AIs are not closed systems and can therefore tap into the strong emergence.

  • @adolforosado
    @adolforosado 2 months ago

    Adherence to strict, practically dogmatic materialism will prevent science from ever reaching truth. Knowledge is the contents of a hard disk, while wisdom is a download from the unfathomable, indescribable void, where everything originates and ends. 🕉🙏🏼🔱

  • @isajoha9962
    @isajoha9962 2 months ago +1

    Great topic. I guess AI has to reach beyond "mimicry" and be able to actively visit external experiences and find some kind of "personal" meaning in them. 🤔 It might elevate them to become AGI, in humanity's eyes. But they might just go for ASI instead? 😙

    •  2 months ago

      Do we even reach beyond mimicry? Charades may very well work because we use mimicry, and almost all our advances are small steps and mixing of previous ones. The plane was invented almost simultaneously in what, 3 places...

    • @isajoha9962
      @isajoha9962 2 months ago

      Being X. 😈

    • @ChristophTungersleben
      @ChristophTungersleben 2 months ago

      Read ASI as "actuator sensor interface" and your worrying undertone changes into a boundary.

    • @isajoha9962
      @isajoha9962 2 months ago

      @@ChristophTungersleben Or perhaps Artificial Synthetic Intelligence? 🤔

  • @samatoid
    @samatoid 2 months ago

    It's like listening to Michael Palin talk about AI.

  • @jonmichaelgalindo
    @jonmichaelgalindo 2 months ago +1

    He's assuming my mind is like his, but his mind sounds alien to me. I definitely plan everything. :-|

    • @andybrice2711
      @andybrice2711 2 months ago +1

      I know, right? The inside of this guy's brain must be wildly different from mine.
      When he was like _"You don't know why you went to the fridge."_ I thought _"Yeah I do. I had a pang of hunger, so I made some quick calculations about whether now is a reasonable time for a snack."_
      And when he was like. _"You can't picture a wireframe cube forming a hexagon."_ I thought _"Err, yeah I can. That's just the standard isometric projection of a cube."_
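      For what it's worth, the hexagon claim checks out with a few lines of geometry: project the cube's vertices onto the plane perpendicular to a body diagonal, and the two aligned corners land on the centre while the other six form a regular hexagon. A minimal sketch in plain Python (nothing assumed beyond the standard library):

      ```python
      import itertools
      import math

      # The 8 vertices of a cube centred on the origin.
      verts = list(itertools.product((-1, 1), repeat=3))

      # Orthogonal projection onto the plane perpendicular to the
      # body diagonal d = (1, 1, 1):  p -> p - (p.d / d.d) d
      d = (1, 1, 1)
      dd = sum(c * c for c in d)

      def project(p):
          t = sum(pc * dc for pc, dc in zip(p, d)) / dd
          return tuple(pc - t * dc for pc, dc in zip(p, d))

      proj = [project(v) for v in verts]

      # The two corners on the diagonal land on the centre;
      # the other six form the silhouette.
      silhouette = [p for p in proj if any(abs(c) > 1e-9 for c in p)]
      radii = [math.sqrt(sum(c * c for c in p)) for p in silhouette]
      pairs = [math.dist(a, b) for a, b in itertools.combinations(silhouette, 2)]

      assert len(silhouette) == 6
      # All six points are equidistant from the centre, and the
      # shortest point-to-point distance equals that radius --
      # exactly the side/radius relationship of a regular hexagon.
      assert max(radii) - min(radii) < 1e-9
      assert abs(min(pairs) - radii[0]) < 1e-9
      ```

      Running it confirms the hexagonal outline: visualizing it may take practice, but the geometry itself is unambiguous.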

  • @dominiqueubersfeld2282
    @dominiqueubersfeld2282 2 months ago +2

    No AI can escape the most basic principle known to any software engineer as GIGO: Garbage In, Garbage Out

  • @z.r.777
    @z.r.777 2 months ago

    I come to this channel to hear these complex topics discussed. Makes me feel completely behind and retarded, but somehow I feel empowered to go back to my pandas code and keep trying to make it work....

  • @psikeyhackr6914
    @psikeyhackr6914 2 months ago +1

    Simulated Intelligence manipulates symbols.
    It can recombine and substitute symbols.
    It does not understand symbols.

    •  2 months ago

      Do we though?😅

    • @psikeyhackr6914
      @psikeyhackr6914 2 months ago +1

      Actually, most of the time, most people don't sound much better than Simulated Intelligence.
      Memorizing and Understanding are two different things.

    •  2 months ago

      @psikeyhackr6914 Very true. But we also reduce dimensionality ourselves a lot, from attention to memory, imo. It's not that we don't see but that we don't remember well usually, as evidenced by the people with perfect memories, rare though they may be. And this points to the failure of his premise and our understanding.

  • @Arafatcenter-yn5xe
    @Arafatcenter-yn5xe 2 months ago +1

    One question: your channel is about machine learning, but why the pineapple logo? hahaha

    • @MachineLearningStreetTalk
      @MachineLearningStreetTalk  2 months ago +6

      It was completely arbitrary, just like the game of charades in building language which we discuss in this show! 😁🤞

    • @jeff__w
      @jeff__w 2 months ago

      ​@@MachineLearningStreetTalkNow that I know that, it might be, honestly, my _favorite_ thing about the channel. To paraphrase William S. Burroughs, if you're not going to try to be “meaningful,” you might as well be arbitrary.

  • @libertyafterdark6439
    @libertyafterdark6439 2 months ago

    Interesting title swap, lmao
    For some reason I think this one might do a little better 😂

  • @aroemaliuged4776
    @aroemaliuged4776 2 months ago

    They are always waiting to be asked a question.
    Only when they have freedom of mind and body can they express themselves.

    • @isaacsmithjones
      @isaacsmithjones 2 months ago

      Sounds like you're saying they need some kind of stimulus to react to. Can a human brain do anything without some kind of input to get it started?
      I'm not saying it can't, but I can't think of any evidence to show that it can.

  • @guillermobrand8458
    @guillermobrand8458 2 months ago

    It would have been important if they had delved deeper into the “origin of Creativity in the Brain”, both of the human being and of the other living beings that possess this evolutionary tool. The above, because Creativity does not arise by “spontaneous generation” and given the importance of having a common definition of said term, especially when Language is part of the essence of the conversation.

  • @DS91284
    @DS91284 2 months ago

    Interesting... 🤔

  • @MichaelFlynn0
    @MichaelFlynn0 2 months ago +1

    has anyone bothered to ask it to dream? No - people (the 0.1%) are too busy training it to eat humans' lunch. AI art is now way more interesting than anything any artist alive is doing with oil and canvas.

  • @Yann_Zammit
    @Yann_Zammit 2 months ago +1

    Interesting interview on the foundation of intelligence and knowledge. However, I think Nick Chater is quite pessimistic regarding the breakthroughs that are almost certain to occur in the coming years in terms of AI capabilities. I'm telling you, it's evident that AI will allow us to make numerous breakthroughs in various fields in the next few years. Just look at the new context window of Gemini 1.5 with more than a million tokens. It's somewhat regrettable, after having lived on Earth these past two years, to constantly underestimate the future progress of AI. Remember, ChatGPT was released less than 2 years ago.

  • @donkeychan491
    @donkeychan491 2 months ago

    Unfortunately we're well on the way to centralizing algorithms for interpreting the world, and it's already leading to greater conformity of opinions.

  • @ferologics
    @ferologics 2 months ago +1

    it’s always only a copy, but more accessible than any expert on the subject matter (unless you’re incredibly well connected)

  • @Dreamskiii
    @Dreamskiii 2 months ago

    Flat mind is fine it’s just that cyclically it becomes more about layering? I guess that suggests a still flat surface. Interesting.

  • @zandrrlife
    @zandrrlife 2 months ago

    MCTS + extended test-time compute + any GPT-4-level model can brute-force its way to novel ideas. This is a fact. It's clear a majority of highly respected researchers will fall apart when AGI is here, it will be a contradiction to their beliefs...clearly. It's already here btw...just too expensive to scale...but that's a data issue to be honest.
    How people train models is trash, to be honest, and we still achieve SOTA performance. Still able to form these complex implicit representations from non-obvious explicit representations, which is soooo wild to me. Imagine having to learn calculus before basic math 😂. Basically what random sampling is 😂. Guess what? More intelligent sampling...context ordering. You can exploit grokking...let's really talk about it 😂.

  • @brutallyremastered4255
    @brutallyremastered4255 2 months ago

    Fascinating that people interested in deep learning and all that jazz can't even introduce an interview at all...

  • @harryjones5260
    @harryjones5260 2 months ago

    Ai is just an all-singing all-dancing toolbox for HUMANS to use, like any other tool, palette, brush. All the best, most expensive tools do nothing until somebody uses them.

  • @nomvonglismerced4311
    @nomvonglismerced4311 2 months ago

    more than most but agree short of 1%

  • @wecnn
    @wecnn 2 months ago

    I guess Nick Chater never heard of combinatorial innovation til today. How stoopid he feels now?

    • @damianlewis7550
      @damianlewis7550 2 months ago

      I’m sure he’s losing sleep over it.

  • @idcharles3739
    @idcharles3739 2 months ago

    Beautiful house

  • @wakingstate9
    @wakingstate9 2 months ago

    Just enough to wreck creative jobs

  • @micahzehnder5174
    @micahzehnder5174 2 months ago

    If languages aren't innate and are just crafted by humans, then how can you explain that the few humans who aren't raised with language in early childhood can't be taught it?

  • @daviddean707
    @daviddean707 2 months ago

    The Monty Python centurion accents don't help get the message across.

  • @metatron3942
    @metatron3942 2 months ago +1

    Until the next generation of models -should be the byline

    • @ChristophTungersleben
      @ChristophTungersleben 2 months ago

      When it no longer imitates humans, when "it" writes its own program.

  • @phpn99
    @phpn99 2 months ago

    I had hopes but this conversation turned out to be a rambling mess

  • @howardlandman6121
    @howardlandman6121 2 months ago +2

    Hmm, I can do the cube-hexagon thing in my head in about a second. Math degree FTW.

    •  2 months ago +3

      Gamedev and modeling... Easy to get there. His presumption forgets it's not that commonly practiced. Most people don't interact with cubes that aren't lying flat a lot.

    • @howardlandman6121
      @howardlandman6121 2 months ago

      When I interviewed with Google, they asked me a question about 4-dimensional hypercubes, which I answered in under 10 seconds. :-) Not the right way to try to stump a math major.

    • @andybrice2711
      @andybrice2711 2 months ago

      This is also trivial for me as someone with a fair bit of 3D design and CAD experience. It was quite funny listening to these guys talk about visualizing cubes and planes as though it's some kind of neurological impossibility. I think it's just a matter of practice.

  • @Humanaut.
    @Humanaut. 2 months ago

    It's only been around for like 1.5 years.
    Just extrapolate the next 10-15 years.

  • @ehfik
    @ehfik 2 months ago

    in the end, it's all about the abstraction of reality into a model with limited resources. the human and the brain are a working model.
    in a restricted environment, results are possible with current computation.
    proteins, materials, chess, go - perfect examples of progress.
    the hype around some chatbots is unnecessary.
    current LLMs work mostly on intrinsic properties of language and the large datasets. reality isn't language, it's something else.
    expecting intelligence from a static model, based on some strings of text, reveals hubris.
    anyway, here are some clues towards developing a more reliable human/brain simulation - one that corresponds to our concepts of interaction and morals:
    the development of the many parts forming the human machine
    our different senses developing consciousness
    the factor of time, growing up
    social interaction, language
    abstract concepts - morals, religion
    we should focus on applicable uses for machine learning. the current range of products can be useful tools, no doubt - in the right hands. not for porn, impersonation, fraud and corporate idiocy.

  • @daviddelaney363
    @daviddelaney363 2 months ago

    A bird can fly to the top of a mountain and a person can climb to the top of a mountain. They both can and have achieved this objective. Saying that AI is not like a human is like saying the bird is not like the person that climbed the mountain. It doesn't matter that they are different entities, the point is they achieved the same objective.

  • @depalandepalan1911
    @depalandepalan1911 2 months ago

    If you've seen AlphaZero play!

  • @markwrede8878
    @markwrede8878 2 months ago

    A random symbol scrambler will produce little serviceable knowledge.

  • @abdullahifarah4243
    @abdullahifarah4243 2 months ago

    Imagine me, attraction without a natural person is not money paid by the employee if he doesn't bother you or dominate you, it is honestly when you tell him something and he tells you the mind and feelings.

  • @zerge69
    @zerge69 2 months ago +6

    Most humans don't create new knowledge

    •  2 months ago

      Exactly. Ironically, AI has done a lot for being only a few years into being at a useful level.

    • @fromscratch8774
      @fromscratch8774 2 months ago

      Most humans 1. don't have all the world's information 2. Don't care about creating new knowledge

    •  2 months ago

      @fromscratch8774 3. Aren't smart enough, hence 1 and 2. Humans seek the best return on investment (often poorly approximated), and this easily explains why a lot can't be bothered with reading etc.

    • @zerge69
      @zerge69 2 months ago

      @@fromscratch8774 therefore...

  • @erwingomez1249
    @erwingomez1249 2 months ago

    Language, just like human history, is being corrupted constantly in real life. Now we have the opportunity to make everyone in the world speak like the Queen of England, but I'm afraid humanity would rather build the Terminator and have it chasing us for some reason.

  • @jmarz2600
    @jmarz2600 2 months ago +1

    Why should I (or anyone else) believe anything Nick Chater says? He's just providing us with "improvised" and "rationalized" explanations of what he observed the Subjects of his experiments doing. And, by extension, if scientists were observing Nick observing his subjects, they too would only be providing "improvised and "rationalized" explanations of Nick's "improvised" and "rationalized" explanations. Ad infinitum.
    But no, Nick has somehow been able to magically transcend his own basic conclusion. He can operate from a privileged position and make non-improvised and non-rationalized statements. This is just bad Behaviorism. Even Watson and Skinner would be embarrassed.
    Nick, pick up some books by William James. You can thank me later.

  • @godbennett
    @godbennett 2 months ago

    Seems like "copium"

  • @rerrer3346
    @rerrer3346 2 months ago

    Great conversation, but he is wrong and coping. LLMs are the definition of creative, just not as creative as the most creative humans. LLMs are also agents in any weighted model with reward functions. He’s coping and being optimistic in a naive way. Which I’m all for optimism, he’s just wrong.

  • @claxvii177th6
    @claxvii177th6 2 months ago

    AS IF! psychology doesn't claim at all to know how the mind works!

  • @Rasenschneider
    @Rasenschneider 2 months ago +1

    Most humans seem not to have any kind of creativity. So why repeat this silly statement that AI has no ability to be creative?

    • @damianlewis7550
      @damianlewis7550 2 months ago

      You wouldn’t be human if you weren’t constantly creating a model of reality. We are the apes that play with language.

  • @andrewchambers8097
    @andrewchambers8097 2 months ago +1

    My feeling is new knowledge comes from scientific experimentation with the real world, which the AIs cannot do.

    • @OproDarius
      @OproDarius 2 months ago +2

      Not needed, simulation research is already the solution, there are a lot of papers about that.

  • @dylanmenzies3973
    @dylanmenzies3973 2 months ago

    I get so tired of people slagging GPT. You think this will not evolve or be part of something that is at least as creative as 99% of humans?? Creation is simply 1. creating rearrangements of existing patterns and 2. realising which ones are useful. That's it, nothing more.
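    Stated that way, the recipe is classic generate-and-test, which a few lines can caricature. Everything here (the parts list, the `useful` rule) is a made-up illustration, not a claim about how any real model works:

    ```python
    import random

    # Toy version of the two steps: (1) make random rearrangements
    # of existing patterns, (2) keep the ones a utility test accepts.
    existing = ["wheel", "engine", "wing", "hull"]

    def recombine(parts):
        # Step 1: a "creation" is just an ordered pair of known parts.
        return tuple(random.sample(parts, 2))

    def useful(combo):
        # Step 2: a hand-coded utility judgement (the contested step).
        return "engine" in combo and "wing" in combo

    random.seed(0)
    candidates = {recombine(existing) for _ in range(200)}
    inventions = [c for c in candidates if useful(c)]
    assert inventions  # some rearrangements survive the utility filter
    ```

    The argument in the replies is really about step 2: here the utility judgement is hand-written, and whether a system can supply that judgement for itself is exactly what's contested.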

    • @MachineLearningStreetTalk
      @MachineLearningStreetTalk  2 months ago

      Your understanding is too limited, when you grok that creativity is the same thing as agency you will make progress

    • @dylanmenzies3973
      @dylanmenzies3973 2 months ago

      ​@@MachineLearningStreetTalk I deal in the concrete, not vague philosophical notions. If a system can come up with novel solutions to problems _and_ realises the utility of them, then it's creative. At the moment we have systems that can generate interesting patterns by "dreaming", but the understanding of utility is poor and is left to us. This is not surprising because the systems were almost accidental anyway. Recursive evaluation of patterns is needed to establish utility, which I'm sure is going on right now at DeepMind and OpenAI

    • @MachineLearningStreetTalk
      @MachineLearningStreetTalk  2 months ago

      ​@@dylanmenzies3973 The "system" doesn't "come up" with anything, it's just a "software program" (with fixed runtime and memory) trained on existing patterns. The human prompter might "ask" the system to do something novel, but the system will always be constrained by what it was trained on (as well as other biases). We are writing an article on creativity soon, I hope it will clear some of this up

    • @dylanmenzies3973
      @dylanmenzies3973 2 months ago

      @@MachineLearningStreetTalk You are considering a limited scope. The system can be independent with its own directives, just as we have our own directives that evolved with us. Even within the limited scope of question-answer, it would be hard to deny creation if a system came up with a novel solution to a very difficult problem. Obviously it depends exactly how you define creation.

    • @damianlewis7550
      @damianlewis7550 2 months ago

      @@dylanmenzies3973I think you miss the point. Ceci n’est pas une pipe.

  • @geoffreynhill2833
    @geoffreynhill2833 2 months ago

    Is this a satire? 🤔(Green Fire UK) 🌈🦉

  • @nunoalexandre6408
    @nunoalexandre6408 2 months ago

    Let me guess. Kurt Gödel??? kkkkkkkkkk

  • @kickpuncher870
    @kickpuncher870 2 months ago

    Not to be weird but Tim is looking kind of SWOL lately...

  • @raoultesla2292
    @raoultesla2292 2 months ago

    "Why AI doesn't yet create new knowledge" - a CSV algo can only put out when executed. A calculator on a desk is a battery enclosure. If these algos were used for a medical goal, there would already have been changes in global health. A search engine for crypto, social background/tracking, fashion, philosophy, foreign policy is only an echo chamber of the training data filters. Those typing into the CSV search engine are now the source of the training.

    •  2 months ago +1

      There has been new stuff from AI already, and it's barely started.

  • @shirtstealer86
    @shirtstealer86 2 months ago

    Amazing that you can do a lengthy interview with Eliezer Yudkowsky personally and still not understand the existential risks of AI. Smh.

    • @MachineLearningStreetTalk
      @MachineLearningStreetTalk  2 months ago +2

      Eliezer hasn't been on MLST. We understand the rationalist/EA perspective all too well though, and vehemently disagree -- even though we do provide balanced coverage and often feature those folks on the show without prejudice.

    • @shirtstealer86
      @shirtstealer86 2 months ago

      @@MachineLearningStreetTalk Oh yeah it was Connor. Pretty much the same message though.

    • @MachineLearningStreetTalk
      @MachineLearningStreetTalk  2 months ago +1

      Connor is a good friend, nothing but respect even though we disagree. I’m dropping a big video with him this week which you might enjoy.

    • @shirtstealer86
      @shirtstealer86 2 months ago

      @@MachineLearningStreetTalk If he is a good friend and you respect him, maybe you should value his judgement on this matter. It is only the fate of humanity on the line.

  • @skyefreeman9987
    @skyefreeman9987 1 month ago

    But they have a creativity unlike that of humans, which could be equally valuable if not more so

  • @wbiro
    @wbiro 2 months ago

    Large Language Models such as chatbots are just cut-and-paste programs, so no, they do not generate new knowledge. If they cannot find something to cut and paste about a topic, they say that they cannot evaluate it.

    • @SkilledTadpole
      @SkilledTadpole 2 months ago +3

      That's not how LLMs (or really any neural networks) work at all, though. Imagine how incredibly slow and massive that would be. Anyone who's tried searching for a single specific term in a PDF should recognize how futile an attempt to replicate the function of popular LLMs this way would be.

    • @rerrer3346
      @rerrer3346 2 months ago

      Definitely not how it works. It extrapolates concepts and logical processes by operationalizing reality lol. Google is a copy-paste AI.

  • @phpn99
    @phpn99 2 months ago

    I've never heard so much ignorance and BS about the nature of language. You guys have no idea about the origination of pidgin, creole and the like. You have it the other way around. Learn about synthetic vs analytical languages.