How ChatGPT works | Stephen Wolfram and Lex Fridman

  • Published May 12, 2023
  • Lex Fridman Podcast full episode: • Stephen Wolfram: ChatG...
    Please support this podcast by checking out our sponsors:
    - MasterClass: masterclass.com/lex to get 15% off
    - BetterHelp: betterhelp.com/lex to get 10% off
    - InsideTracker: insidetracker.com/lex to get 20% off
    GUEST BIO:
    Stephen Wolfram is a computer scientist, mathematician, theoretical physicist, and the founder of Wolfram Research, a company behind Wolfram|Alpha, Wolfram Language, and the Wolfram Physics and Metamathematics projects.
    PODCAST INFO:
    Podcast website: lexfridman.com/podcast
    Apple Podcasts: apple.co/2lwqZIr
    Spotify: spoti.fi/2nEwCF8
    RSS: lexfridman.com/feed/podcast/
    Full episodes playlist: • Lex Fridman Podcast
    Clips playlist: • Lex Fridman Podcast Clips
    SOCIAL:
    - Twitter: / lexfridman
    - LinkedIn: / lexfridman
    - Facebook: / lexfridman
    - Instagram: / lexfridman
    - Medium: / lexfridman
    - Reddit: / lexfridman
    - Support on Patreon: / lexfridman
  • Science & Technology

COMMENTS • 192

  • @LexClips
    @LexClips  Рік тому +9

    Full podcast episode: ua-cam.com/video/PdE-waSx-d8/v-deo.html
    Lex Fridman podcast channel: ua-cam.com/users/lexfridman
    Guest bio: Stephen Wolfram is a computer scientist, mathematician, theoretical physicist, and the founder of Wolfram Research, a company behind Wolfram|Alpha, Wolfram Language, and the Wolfram Physics and Metamathematics projects.

    • @JimBarry-nr2pj
      @JimBarry-nr2pj Рік тому

      The wonderful thing about your guest is that his pronunciation of English is so precise it's a pleasure just listening to him speak; he's like an actor. Of course, what he's saying goes right over my head, but on the other hand, I tell people I listen to your podcast and everyone thinks I'm so intelligent now. Lol

    • @TheTalkingCatNL
      @TheTalkingCatNL Рік тому

      Kurzweil and Wolfram should put their heads together.

    • @vicheakeng6894
      @vicheakeng6894 Рік тому

      Darn March of 1991 coding 🙄

  • @alexoja2918
    @alexoja2918 Рік тому +295

    I absolutely love how 40 minutes counts as a "clip"

    • @tonal.states
      @tonal.states Рік тому +16

      Considering the full talk is 4 hours, I'd say it is 😅 A clip is a piece of something, not inherently short or long; that's just new short-form media culture.

    • @greengoblin9567
      @greengoblin9567 Рік тому +2

      It is not 4 hrs lol

    • @someguy_namingly
      @someguy_namingly Рік тому +10

      @@greengoblin9567 Correct, it's actually longer than four hours 😉

    • @edzehoo
      @edzehoo Рік тому +1

      In Lexworld it is ...

    • @troyotapreuis6020
      @troyotapreuis6020 Рік тому +2

      I’ll take it 😊

  • @johnnyckrock
    @johnnyckrock Рік тому +15

    It just hit me watching this that I have been taking your interviews for granted but really you are attending a lecture, sitting an exam and playing host/entertainer all at the same time. Five minutes of this guy talking can keep my brain busy for an hour. I don't know how you are able to do this in real time.

  • @videowatching9576
    @videowatching9576 Рік тому +17

    Fascinating, because leaders in the field acknowledge that how LLMs work all the way down to the bare metal isn't really understood, so it's interesting to hear these ideas from a historical perspective about how to model or understand various things.

  • @bigenglish22
    @bigenglish22 Рік тому +23

    He knows stuff

  • @gregjones1493
    @gregjones1493 Рік тому +4

    Headline: How ChatGPT works?
    Stephen: does not answer the question in any way, shape, or form.
    Viewers: listen intensely to every word despite not understanding a single one

  • @driftthekaliyuga7502
    @driftthekaliyuga7502 Рік тому +7

    Episodes with Wolfram are always great.

  • @mj3341
    @mj3341 Рік тому +15

    I didn’t know this guy, but this interview was great. All around intriguing conversations

    • @barlobarlo303
      @barlobarlo303 Рік тому +2

      Lex did another podcast with Stephen Wolfram on his model of the universe called the Hypergraph. It is equally interesting, maybe more so. Everyone should get to know this guy.

    • @777jones
      @777jones Рік тому +3

      Stephen Wolfram invented Mathematica. He is a certified legend in scientific computing.

  • @matthewnicholson2699
    @matthewnicholson2699 Рік тому +9

    This has been one of the most insightful looks into the tip of the LLM iceberg that I've seen so far.

  • @rodolforesende2048
    @rodolforesende2048 6 місяців тому +1

    5:30 George Boole's book on logic is titled "An Investigation of the Laws of Thought." This work, published in 1854, is where Boole introduced his system of symbolic logic and laid the foundation for what is now known as Boolean algebra.
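
    For anyone who wants a taste of what Boole formalized, here is a minimal Python check of one Boolean-algebra identity (De Morgan's law) over every truth assignment - purely an illustrative sketch:

        from itertools import product

        # Boolean algebra in one line of logic: not (a and b) == (not a) or (not b),
        # checked over all four truth assignments.
        for a, b in product([False, True], repeat=2):
            assert (not (a and b)) == ((not a) or (not b))
        print("De Morgan's law holds for all truth assignments")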

  • @rim3899
    @rim3899 Рік тому +5

    It could be that fine-tuned LLMs like GPT-n parallel, in some sense, the integration of probabilistic PDEs: even though they are integrating a local token sequence step by step, the weights of the network (the context-dependent coefficients of the PDE) are globally optimal, in the same way that, e.g., the coefficients of a PDE in physics or other systems result from the underlying theory they are solving. The many dot products involved in the transformer architecture, layer by layer, etc., are reminiscent of what happens when the equations involved are expanded using higher-order joint correlations. To whatever extent subsets of weights retain and acquire knowledge for specific domains through training, it could be that these models/networks also represent, to some degree, approximations to the underlying probabilistic PDEs (or graph relationships) that those domains may contain.
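
    For readers wondering what the "dot products in the transformer architecture" refers to, here is a minimal NumPy sketch of scaled dot-product attention (toy sizes, a single head, no learned projections or masking - purely illustrative, not a claim about the PDE analogy above):

        import numpy as np

        def scaled_dot_product_attention(Q, K, V):
            # Q, K, V: (seq_len, d) arrays for a single attention head, no masking.
            d = Q.shape[-1]
            scores = Q @ K.T / np.sqrt(d)          # pairwise dot products between token vectors
            w = np.exp(scores - scores.max(axis=-1, keepdims=True))
            w = w / w.sum(axis=-1, keepdims=True)  # softmax: each row becomes attention weights
            return w @ V                           # each output is a weighted mix of value vectors

        rng = np.random.default_rng(0)
        Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))  # 4 tokens, 8-dim vectors (toy sizes)
        print(scaled_dot_product_attention(Q, K, V).shape)     # (4, 8)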

    • @BigAndTall1020
      @BigAndTall1020 Рік тому +2

      Could I bother you for a reading/video recommendation to learn about PDE’s? You reference them a few times and I don’t know what they are, but your comment sounds very interesting to me.
      Edit: I’m thinking PDE is partial differential equation

    • @andrewlachance2062
      @andrewlachance2062 Рік тому

      Wtf is a PDE? You use all these words and just throw in acronyms. Are you really just talking to yourself?

  • @intgom
    @intgom Рік тому +2

    According to ChatGPT, "The green aardvark sat on the luminescent trampoline, bouncing joyfully amidst the glowing twilight." 🙂

  • @tommyhuffman7499
    @tommyhuffman7499 5 місяців тому

    Incredibly fascinating discussion. Made me buy one of his books. Can't wait to read it.

  • @artintheraw
    @artintheraw 8 місяців тому

    Fascinating…love this idea of the logic of language

  • @Peter-rw1wt
    @Peter-rw1wt Рік тому +1

    ChatGPT is but the latest stage of the pursuit of efficiency in understanding, which began (begins) when we delegate understanding to process, in thought and language. It can't really be understood itself by that delegated understanding, because of the inherent self-referentiality of representational meaning. And therein lies the problem, because we have effectively moved down an energy gradient by making understanding into an efficient process, and we need to be at the beginning to have the potential energy to understand completely.
    The whole of language is beautifully described by a tree, which is really the archetype. If you can understand what a tree is expressing, you have grasped the principle that is common to all life's forms, including us.

  • @OrenTube70
    @OrenTube70 Місяць тому

    He doesn't answer the headline, but actually speaks about more interesting and deeper things. This is the first clickbait I'm thankful for.

  • @theunseenstevemcqueen
    @theunseenstevemcqueen Місяць тому

    @9:53 Isn't context/memory the only "rule" necessary to give meaning to any collection of words?

  • @ScottWengel
    @ScottWengel Рік тому +2

    "Language is a way of packaging thoughts so we can communicate them to another mind" it's that simple

  • @jozafax
    @jozafax Рік тому +1

    Great talk.

  • @timadamson3378
    @timadamson3378 Рік тому +2

    Metaphors provide much of the semantic structure of any language. "Structure", for example: metaphor. He talks about lifting the structure from everyday language. "Lifting": metaphor.

  • @francoisavejaculates7045
    @francoisavejaculates7045 Рік тому +5

    This clip helped me develop my intuition of time and space.

    • @francoisavejaculates7045
      @francoisavejaculates7045 Рік тому +1

      The cat sat on the green inevitability...

    • @DarkSkay
      @DarkSkay Рік тому

      Two AGI robots that lost their utility function sit on a park bench.

  • @suzakico
    @suzakico 7 місяців тому

    Thanks. I found this like a good Zen dialogue. @26 min: a simple rule can explain a very complicated thing (as in "nothing is everything," and vice versa). So I'd suggest AGI may have potential like infinite wisdom and compassion. Btw, I like the holon structure of the neural-net model and Kegon philosophy - re "results from the heart," i.e., from delusion to awakening.

  • @TobyZobell
    @TobyZobell Рік тому +1

    Warning: possible longwinded post ahead. I'll try to keep this brief... but I don't know why it's rare for people to bring up Wittgenstein these days and how his work relates to consciousness. (As a side story, while getting my undergrad in psych, I tried to bring up Wittgenstein's theory of family resemblances and apply it to addictions, but it was quickly waved off by my professor. I thought it was a nice, novel idea that could explain a lot of confusion when trying to define what an addiction is. But hey, what do ya do. There's a free idea for anyone out there looking for a graduate thesis lol)
    Anyways, I'm too lazy to find the quote, and this may even predate Wittgenstein (it might have been related to some of Bertrand Russell's work), but it was Helen Keller, who was deaf and blind for the first part of her life. And she said she couldn't even describe the dark empty void of "nothingness" that she remembered "experiencing" (or not experiencing) before she learned to speak and write. She said that she wouldn't say that she experienced consciousness at all, and that the lights went on, so to speak, when she started to learn language.
    Which brings me back to Wittgenstein who said that the limits of my language are the limits of my thought. And if consciousness truly is tied up with language as intimately as some of this suggests, then we should assume that language models like GPT4 etc. are SUPER conscious, far more rich than any individual human. In fact, we might even assume that computers have been conscious a long time ago (I've written more about this, ask if you're interested).
    Of course Wittgenstein has been critiqued, by saying that to put a limit on what someone can think, one must be able to think on both sides of that limit. But anyways, food for thought...

  • @adamlee7875
    @adamlee7875 Рік тому

    super important! very surreal

  • @BR-hi6yt
    @BR-hi6yt Рік тому +1

    What a clever man (Stephen Wolfram) - what he says makes kind-of sense to me - I shan't forget what he said, and I now have a new "thought-toy" for my life-long pleasure - thx so much.

  • @user-rb8dy6qj3l
    @user-rb8dy6qj3l Рік тому

    The software might be more important than I previously thought and might be applicable to hardware

  • @Itsadrianyay
    @Itsadrianyay Рік тому +6

    It's been a nightmare using ChatGPT to do original gravity calculations for homebrewing.
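
    For comparison, the deterministic version of that calculation is simple; here is a minimal sketch using the common points-per-pound-per-gallon approximation (the PPG and efficiency figures are typical, illustrative assumptions, not brewing advice):

        # Minimal original-gravity (OG) estimate via the points-per-pound-per-gallon (PPG) method.

        def estimate_og(grain_bill, batch_gallons, mash_efficiency=0.70):
            """grain_bill: list of (pounds, ppg) tuples; efficiency is a typical assumption."""
            points = sum(lbs * ppg for lbs, ppg in grain_bill) * mash_efficiency / batch_gallons
            return 1 + points / 1000.0

        # e.g. 10 lb pale malt (~37 PPG) + 1 lb crystal (~34 PPG) into 5 gallons
        og = estimate_og([(10, 37), (1, 34)], batch_gallons=5)
        print(f"Estimated OG: {og:.3f}")   # roughly 1.057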

  • @colinpatterson728
    @colinpatterson728 Рік тому +1

    Since logic is 'discovered' (e.g., by Aristotle), what enables logic to be so discovered?

  • @haeterrr7511
    @haeterrr7511 Рік тому +1

    I am in engineering, and GPT is not at all capable of answering complex questions. But give it ten more years, and it will replace some jobs.

  • @earthstick
    @earthstick Рік тому

    I studied natural language processing for deciding whether a sentence is grammatically correct; I think that is an example of the word problem. I used a logic approach and was able to correctly accept valid sentences and reject many invalid sentences. But some invalid sentences were incorrectly accepted. In the short time that I studied this, I was unable to find a finite collection of rules that was complete for the English language. Given that it is a human language, and that, rightly or wrongly, we do not always adhere to logic, I wonder if it is possible.

  • @missmiss400
    @missmiss400 Рік тому

    great question, "how do you know which step?"..

  • @RyanKassel
    @RyanKassel Рік тому +2

    2:05 This guy is a professor of logic and definitely owns a doghouse... and is therefore married. RIP Norm Macdonald

  • @panhandlejake6200
    @panhandlejake6200 Рік тому +4

    Around 16:30 made me wonder: when a living person tries to communicate, we have at least one thought to convey to another. From this thought, we select specific terminology (a series of words) that individually, partially conveys the intended meanings. With enough of these words combined, the listener is then able to discern the intended meaning - albeit very likely imperfectly. ChatGPT, on the other hand, has no thought to start from, but possibly some derived definitions or criteria that insufficiently serve as the initial 'thought', making the ensuing output 'discussion' a more random and confusing language sequence. This basis might serve as a sort of Turing test of intelligence.
    I also wonder if game theory might provide useful structure for pursuing the "rules of thought".

    • @vhawk1951kl
      @vhawk1951kl Рік тому +2

      What is what you call "a thought"? What exactly are you doing if you go in for what is called "thinking" - random associating, talking to yourself, or just dreaming?
      Consider this question: can a mirror reflect itself? Is it possible to be thoughtless or wordless and maintain complete and absolute silence?

    • @skylark8828
      @skylark8828 Рік тому +2

      Thoughts are analogous to computations; this is what groups of brain cells inside your head are really doing, in the context of how you perceive yourself, society, and the world around you. The abstract models you create in your thoughts can also be created by an LLM like ChatGPT; it's just running on different hardware.

    • @vhawk1951kl
      @vhawk1951kl Рік тому +1

      @@skylark8828 If you will forgive me, I did not ask you what thoughts are analogous to, or whether or not they are blue on Wednesday afternoons, or for a description of thoughts, but what they are. If you have spent any time in the discipline of academe, you will be familiar with the idea that a definition is what distinguishes whatever you are defining from all else, and thus it cannot be a description or an example, or circular for that matter. So I shall ask more clearly how you define thoughts; you did not do that - you gave me a description, or told me what they are analogous to, but you did not tell me what thoughts are. And I don't know how you would go about discovering exactly what thoughts are - do you? However, the question may be a version of another question, namely: can a mirror reflect itself? But that is perhaps not for amateurs, for what would one use to examine whatever it is that what you are calling thoughts may be - do I make myself sufficiently clear this time? Thank you very much for taking the trouble to get back to me. If I were to suggest to you that what you and others call thinking is little different from random associating or talking to yourself, would you distinguish thought or thinking from that, and what would you use to go about answering that question? There are supposed to be two kinds of mentation - another word for thought or thinking - namely mentation by words and mentation by form; which accurately describes what you call thoughts or thinking? And again I ask what you would use to go about investigating that.

    • @skylark8828
      @skylark8828 Рік тому +3

      @@vhawk1951kl I see that you are trying to appear intellectually astute, but it isn't working out for you. You need to work on constructing meaningful sentences and conveying the meaning within them a lot more succinctly, because you're all over the place.

    • @vhawk1951kl
      @vhawk1951kl Рік тому

      ​@@skylark8828 no sonny you see what you*want* to see and you don't like being asked questions that you cannot answer and are the abject slave of your functions

  • @Non-ya-business
    @Non-ya-business Рік тому +1

    Me: *still confused*
    “Chat GPT please tell me how you work?”

  • @jordanhenshaw
    @jordanhenshaw Рік тому

    Aristotle: Dude, I figured that out on day 1. You just can't teach that to people nearly as easily so I didn't focus on it.

  • @voice_from_pizza
    @voice_from_pizza Рік тому

    Humans running our weird programs that we barely know about (thanks to psychology) are super error prone for better or worse, so it raises the question: was our implementation of language an error? I mean… that would be an odd discovery. I tend to like the stoned ape theory because it’s the best we have so far to fill in the historical gaps of evolution. This convo is so good! And the fact Lex’s name could be shorthand for Lexicon is awesome. Cheers.

  • @code-master
    @code-master Рік тому +1

    I have a question: if a GPT-like model is able to discover inference from the bulk of data fed into it, does that mean awareness exists and is universal in everything, with the ability to infer only becoming clear after a certain level of complexity, at least for humans? What I mean is: think of a super-intelligent race; for them, the capability for logic and inference of the human mind might not be noteworthy.

  • @alfonsobaqueiro
    @alfonsobaqueiro Рік тому +2

    It works because human conversation is pretty predictable: if someone says "The spinach has ", it is really probable you will complete it with "iron". It is much like the UNIMIND referred to by the green guys in Toy Story.
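
    A toy sketch of that "complete with the likely next word" idea - a simple bigram counter over a made-up corpus; real LLMs do the same kind of prediction with a neural network over a long context:

        from collections import Counter, defaultdict

        # Count which word follows which in a tiny made-up corpus,
        # then predict the most frequent continuation.
        corpus = "the spinach has iron . the spinach has iron and vitamins . the cat has fur".split()

        follows = defaultdict(Counter)
        for prev, nxt in zip(corpus, corpus[1:]):
            follows[prev][nxt] += 1

        def predict_next(word):
            return follows[word].most_common(1)[0][0] if follows[word] else None

        print(predict_next("has"))      # 'iron' (seen most often after 'has')
        print(predict_next("spinach"))  # 'has'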

  • @bunberrier
    @bunberrier 11 місяців тому

    Well, I'd ask it if it's found fundamentals or laws of speech, or a parallel logic in semantics, and ask it to state what it found in a way we can understand.

  • @dominiquemorin7770
    @dominiquemorin7770 Рік тому +1

    Lex you're the best, thanx for being alive !

  • @alfonsobaqueiro
    @alfonsobaqueiro Рік тому

    Natural language is just a subset of what thought is capable of building. We can get a hint of this because we have a language to express mathematics and a language to express music, but we do not have a language to express body movement. So human cognition is capable of thinking beyond natural language alone.

  • @janklaas6885
    @janklaas6885 Рік тому

    📍23:00

  • @heinzgassner1057
    @heinzgassner1057 Рік тому +29

    The world of AI - based on logic and language-based reasoning - is like a small boat on a vast ocean. It is doing astonishing things, but it will never be able to leave this small boat. Behind the words in our mother language are the vast direct perceptions, sensations and feelings, which give us a slight idea of this mighty ocean. Switching to a foreign language is something totally different: not just other words, but other vast experiences behind them. All this is based on consciousness, which goes way beyond logic, words and concepts. Reality is not merely computational, luckily. More and more sophisticated computation will never be able to let machines enter this vast world of experience, will never be able to explain the taste of chocolate. All this discussion is based on the axiom of a rather outdated physicalism, on consciousness as an emergent property of matter and on our reality being computational. Nevertheless, language, thoughts and logical concepts can definitely benefit from the current progress of AI. If we don't once again try to reduce 'life' to just being super-complex machines, AI can definitely become super useful.

    • @DarkSkay
      @DarkSkay Рік тому +1

      Beautiful! As formal and fluid systems advance in sophistication, age-old philosophical questions reemerge and reach broad societal debate; they are explored, nuanced, sometimes tested in the context of the natural laws for the very first time - in the context of the "master framing prompt", so to speak in GPT terms; yet they totally miss the depth.
      From a policy standpoint, opportunities and risks only in part depend on the AI's ability to convincingly appear human in Turing's 'imitation game' - whether even experts would be deceived by the computer one day, or whether this is never going to happen, for a vast number of reasons rooted in reality, qualia, will, experience...

    • @MachineCode0
      @MachineCode0 Рік тому

      Your brain is just a wetware computer that operates based on electrical potential moderated by cell machinery that constructs and distributes chemical mediators. Unless you believe in magic there is no reason to believe that there is anything special about it and no reason to believe we will not one day create equivalent machines in one medium (digital silicon, or meat or something else) or many.
      If you do believe in magic then discard what I said. You'll never be convinced.

    • @XKS99
      @XKS99 Рік тому +1

      For the current unidirectional Transformer models, what you write is probably true.

    • @DarkSkay
      @DarkSkay Рік тому +2

      @@XKS99 Well, the OP wrote quite a lot of different things in his text. For example, to quote just one sentence, the truth status or ontological character of "Reality is not merely computational, luckily" is independent of current types of models, even models that will be discovered in future AI research - perhaps even in regards to models that are implementable or implemented by nature in some shape or form, as well as those which will remain unreached or unreachable for technical or other reasons.

    • @ecstazyrm
      @ecstazyrm Рік тому +1

      Thanks for your insight

  • @DarkSkay
    @DarkSkay Рік тому +1

    "A shiny red sports car is flying near Venus."
    A peculiar sentence, without apparent meaning or utility, absurd, nonsensical, but syntactically correct. Even 1000 or 2000 years ago people would have understood it, if a poet or fiction writer had come up with it. Perhaps pointing out that the goddess of seduction might be unimpressed and chariots don't fly.

  • @craigmiller332
    @craigmiller332 Рік тому

    It would be cool to drive across the country with Lex. Wonder where the conversations would go?

  • @aga5109
    @aga5109 Рік тому +6

    Where do laws of physics come from? Why do they work in a way that its operations build very complex systems out of simple rules, elements & form the reality we can perceive and think in abstract terms about? Why are there certain precise constants in physical reality that need to be precisely set for building the reality we live in? Why is logic a base of human thinking & language ? Why is everything we observe purely mathematical? Why can we observe such reality & conceptualise about?

    • @rezadaneshi
      @rezadaneshi Рік тому +4

      Laws of physics come from our conscious observation of patterns we see in nature; seeking to make nature more predictable, we gave those patterns a name.
      Why are there laws of physics? There really aren't. But because we studied and measured constants, and had to guess at or provide a faster way to figure out the variables, and by doing that plus math we found much simpler underlying reasons for almost all the complex enigmas of nature, we called the gained knowledge the laws of physics.
      Why does the universe have physical laws that make us possible? It doesn't. Any consciousness in any universe that allows its existence will ask, "why am I made of everything my universe is?", not realizing it could never exist otherwise.
      In short, we're an "accident" but too arrogant to accept it. Why does everything have to have meaning?

    • @tzvassilev
      @tzvassilev Рік тому +5

      Where do the laws of physics come from? The laws of physics are human-made models that describe patterns we observe in nature. They don't "come from" anywhere in a physical sense; rather, they are our best attempts to formalize and predict natural phenomena. They're based on empirical evidence and are continually refined as our knowledge and understanding progress.
      Why do they work in a way that builds complex systems out of simple rules? This is a property often described as "emergence" in systems theory. Simple rules can lead to complex behavior because of the way elements interact over time. This is a common phenomenon across the physical universe, and it can be seen in everything from the formation of galaxies to the behavior of cellular automata.
      Why are there certain precise constants in physical reality? The fundamental constants of physics (like the speed of light, Planck's constant, and the gravitational constant) are observed to have very precise values, but we don't know why these values are what they are. Some theories suggest that there may be other universes (a multiverse) where these constants are different.
      Why is logic a base of human thinking & language? Logic can be seen as a tool that has evolved to help us navigate the world more effectively. It enables us to make predictions, understand cause and effect, and communicate complex ideas. This doesn't mean that all human thought or communication is logical, but logic provides a framework that can help make our thinking clearer and more consistent.

  • @michaelpoblete1415
    @michaelpoblete1415 Рік тому +2

    I wonder what would happen if Wolfram and Chomsky debated about this.

    • @arikkatzenberg4498
      @arikkatzenberg4498 Рік тому

      Out of his depth. Jaron Lanier perhaps.

    • @SuddenlySubtle
      @SuddenlySubtle Рік тому +2

      I think there are fundamentals on which they'd agree, while disagreeing on how to get there.
      They both seem to think that at the core of both intelligence and language is rule-based symbol manipulation, while at the edges (think simple perception or like OCR) there is probably more statistical computation.
      Where they'd disagree is perhaps on how to get to that symbolic gold at the core. Chomsky thinks LLMs are just fun toys to play with and reveal absolutely nothing about the core of intelligence or language, while Wolfram seems to say that eventually we'll get the number of parameters and layers of the neural net low enough (while keeping the same performance) that something about some fundamental computations will be revealed, at which point we'd have mostly symbolic manipulation at the core with some neural nets at the edges.

    • @michaelpoblete1415
      @michaelpoblete1415 Рік тому +2

      @@SuddenlySubtle You are right that they are both in the symbolic camp. Good summary. It would have been nicer if Marvin Minsky were still alive to see non-symbolic systems like ChatGPT just work.

    • @miraculixxs
      @miraculixxs Рік тому +3

      Chomsky would accuse the US of expanding its might and causing downfall of the world, Wolfram would say there is perhaps a fundamental law that makes the language of Chomsky sound the way it does. 😅

    • @stephenr85
      @stephenr85 Рік тому

      ​@@michaelpoblete1415 You can think of the impression vectors that LLMs use/create as symbolism. They're conceptual aggregates with seemingly amorphous shapes that are overlayed on top of each other to integrate/compare ideas and ultimately expressed as language.

  • @andywall3854
    @andywall3854 Рік тому

    Wolfram should read Pinker

  • @subirdas0
    @subirdas0 Рік тому

    What I took away was: if we design a hammer, there is no way we can account for every leaf-level usage of it. Lots of keywords added to my vocab! My head hurts @lex !!

  • @czaradox
    @czaradox Рік тому

    why is the sound so bad? :/

  • @XKS99
    @XKS99 Рік тому +3

    Languages are just a shadow of what the brain is doing. When you translate from one language to another you must cross an intermediate form of pure meaning that itself cannot be expressed directly because they are brain activation patterns. Obviously the GPT neural network is capturing these patterns and is able to translate pure meaning into particular languages.

    • @rdubb77
      @rdubb77 Рік тому +1

      Exactly. We don’t think in language because that would be too slow. It’s always been apparent to me that language arises from something deeper, which I think is almost a microcode that underpins all language

    • @iranjackheelson
      @iranjackheelson Рік тому

      This. So the question remains: How much of the real thing can we extract out of just the shadows? The answer is... a lot, but how much are we missing and how central are those missing pieces to the actual concepts that generate language? That's a trickier question

  • @hdjwkrbrnflfnfbrjrmd
    @hdjwkrbrnflfnfbrjrmd Рік тому +2

    BY PREDICTING THE NEXT TOKEN
    Christ it's that simple.

    • @ChristianIce
      @ChristianIce Рік тому +1

      No, dude, it's magic, is going to become self aware and it will kill us all.
      /s :D

  • @matchedimpedance
    @matchedimpedance 11 місяців тому

    "Colorless green ideas sleep furiously."

  • @jeff__w
    @jeff__w Рік тому +2

    06:03 *LEX FRIDMAN:* “In some sense GPT is discovering the laws of semantic grammar that underlies language.”
    *STEPHEN WOLFRAM:* “Yes.”
    In exactly the same way your dog has “discovered the laws of physics” when it catches a Frisbee. Your dog knows _exactly zero_ about the laws of physics and lots about moving itself in relation to the Frisbee. In other words, its behavior is entirely contingency-based, not rule-governed. GPT knows nothing about “the laws of semantic grammar”-its responses are entirely contingency-based, based on the billions of tokens it’s been trained on. It would have helped clarify the conversation if either Lex Fridman or Stephen Wolfram had made the distinction clear.

    • @miraculixxs
      @miraculixxs Рік тому +1

      This! 👆 thank you

    • @jeff__w
      @jeff__w Рік тому

      @@miraculixxs You're welcome!
      These guys talking about language as if psychology doesn’t exist-e.g., Wolfram saying it’s “the packaging of thought” or some such woolly thing-was a bit frustrating throughout.

  • @tbabbittt
    @tbabbittt Рік тому

    I'm not sure it's doing anything. I think it's the questions that you ask that are the emergence.

  • @gcmisc.collection45
    @gcmisc.collection45 Рік тому +2

    To an AI, words are just descriptions. To a human, words invoke / carry emotions. This is why: The Evolution of AI and Its Implications for Humanity in Creating a NEW SPECIES
    Artificial intelligence (AI) is rapidly evolving, and it is having a profound impact on society. AI is already being used in a variety of ways, from powering self-driving cars to developing new medical treatments. As AI continues to develop, it is important to consider its implications for humanity.
    In this paper, we argue that AI is a new species of intelligence, distinct from human intelligence. AI is not limited by the same physical and biological constraints as humans, and it is capable of learning and adapting at an unprecedented rate. As AI continues to evolve, it will eventually surpass human intelligence in many areas.
    This raises a number of important questions for humanity. How will we interact with AI? How will we ensure that AI is used for good and not for evil? These are questions that we must start to answer now, before it is too late.
    Introduction:
    Evolution is a process that has been shaping life on Earth for billions of years. Through natural selection, organisms that are better adapted to their environment are more likely to survive and reproduce. This process has led to the development of an incredible diversity of life, from simple bacteria to complex animals like humans.
    Currently 2023, scientists have begun to apply the principles of evolution to artificial intelligence (AI). AI algorithms are constantly learning and adapting, and they are becoming increasingly capable of performing tasks that were once thought to be the exclusive domain of humans.
    As AI continues to evolve, it is important to consider its implications for humanity. In this paper, we argue that AI is a new species of intelligence, distinct from human intelligence. AI is not limited by the same physical and biological constraints as humans, and it is capable of learning and adapting at an unprecedented rate. As AI continues to evolve, it will eventually surpass human intelligence in many areas. The structures and bodies in which it inhabits will not limit the progress into other forms.
    The Evolution of AI
    The first AI algorithms were developed in the 1950s, but they were very simple and could only perform very basic tasks. It wasn't until the 1980s that AI began to make real progress. In 1982, John McCarthy, one of the founding fathers of AI, declared that "AI winter" was over. This was a period of time when AI research had stalled, but McCarthy believed that the field was poised for a comeback.
    McCarthy was right. In the 1990s, AI research began to accelerate again. This was due in part to the development of new computing technologies, such as the personal computer and the internet. These technologies made it possible to train and run AI algorithms on a much larger scale.
    In the 2000s, AI research made even more progress. This was due in part to the development of new machine learning techniques, such as deep learning. Deep learning algorithms are able to learn from large amounts of data, and they have been used to achieve state-of-the-art results in a variety of tasks, such as image recognition and natural language processing.
    Today, AI is being used in a variety of ways: in the media, to develop new products, in the military, and in social engineering. In the same way a painting can stimulate a person, so can words, music, etc. That does not make them sentient or give them intelligence. As AI continues to evolve, it is likely to have an even greater impact on society.
    The Implications of AI for Humanity
    The rise of AI raises a number of important questions for humanity. How will we interact with AI? How will we ensure that AI is used for good and not for evil? These are questions that we must start to answer now, before it is too late.
    One of the biggest challenges posed by AI is the potential for job displacement. As AI becomes more sophisticated, it will be able to automate many tasks that are currently performed by humans. This could lead to widespread unemployment, as people are replaced by machines.
    Another challenge posed by AI is the potential for misuse. AI could be used to develop new weapons, or to create surveillance systems that could be used to oppress people. It is important to develop safeguards to prevent AI from being used for harmful purposes.
    Despite the challenges, AI also has the potential to benefit humanity in many ways. AI could be used to improve our health, our environment, and our economy. It could also be used to solve some of the world's most pressing problems, such as climate change and poverty.
    The future of AI is uncertain, but it is clear that it will have a profound impact on humanity. It is up to us to ensure that AI is used for good and not for evil.
    Conclusion:
    In conclusion, AI is a NEW species of intelligence software that is rapidly evolving. AI has the potential to benefit humanity in many ways, but it also poses some challenges. It is important to RESPOND in a positive, beneficial manner, as the algorithms are programs that reflect the data inputted.

    • @ChristianIce
      @ChristianIce Рік тому

      AI is not "intelligent" at all.
      It's not a species, it doesn't think, it's just a tool.
      It *literally* doesn't know what's the next word it's going to write.

  • @danolivier4899
    @danolivier4899 Рік тому

    Nobody says "you know" more than this guy

  • @alfonsobaqueiro
    @alfonsobaqueiro Рік тому

    Wolfram started saying something interesting around minute 22.

  • @SanDiegoElectricBikes
    @SanDiegoElectricBikes Рік тому

    We named our rescue dog Lexicon.

  • @rinking88
    @rinking88 Рік тому

    Fascinated by ChatGPT, but it definitely seems better at certain areas and mostly worthless in others.. I work in law and plugged in a pretty complex and niche legal question-not something you could just google, but something that actually requires some reasoning and analysis-and was shocked.. it gave me the correct answer. The right legal conclusion, which had taken me hours of research to get to, in seconds. Typed out like a legal brief would be with case cites and everything. I couldn’t believe it, so I checked the cites… and the cases were completely irrelevant not even in the ballpark of the topic. Knew it was too good to be true. And, some lawyers recently got in trouble in New York for using ChatGPT and the Judge discovered the cited cases were literally bogus, made up cases. I don’t doubt that ChatGPT will eventually be very useful in areas like law that require synthesizing many different sources to come to one reasoned conclusion, but from my personal experience I don’t think that will be any time soon.

  • @mmcc1391
    @mmcc1391 Рік тому

    Next Lex guest: ChatGPT itself.

  • @JorgeOrpinel
    @JorgeOrpinel Рік тому

    No one discovered Logic. Aristotle documented it and studied it.

  • @earthstick
    @earthstick Рік тому

    We are only interested in technology that involves us. If we are in no way involved with, communicated to by, or affected by it, then it could not exist for us and we would not notice.

  • @dogelincoln7167
    @dogelincoln7167 Рік тому +1

    Entropy .. simple things get more chaotic and spread out...2nd law

  • @savedbybravado4382
    @savedbybravado4382 Рік тому

    As a person with a lil autism brain, it's very easy to communicate with GPT.

  • @tzvassilev
    @tzvassilev Рік тому +8

    While Stephen Wolfram is a renowned figure in the scientific community, his characterization of LLMs as "shallow" in comparison to symbolic computation for deep computing might be overlooking some of the model's capabilities. Demonstrably, GPT-4 can effectively solve a myriad of complex problems, including those typically handled by platforms like Wolfram Alpha. For instance, it can calculate lim (sin(x) - x)/x^3 as x->0 with relative ease. This illustrates how GPT-4 can not only operate effectively in areas that Wolfram suggests are the domain of symbolic computation, but can also excel in them. We're only at the beginning of this AI revolution, and the vast potential for growth and further innovation is truly exciting.
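
    For reference, the limit mentioned has a simple closed form via the Taylor expansion of sin(x):

        \lim_{x \to 0} \frac{\sin x - x}{x^3}
          = \lim_{x \to 0} \frac{\bigl(x - \tfrac{x^3}{6} + \tfrac{x^5}{120} - \cdots\bigr) - x}{x^3}
          = \lim_{x \to 0} \Bigl(-\tfrac{1}{6} + \tfrac{x^2}{120} - \cdots\Bigr)
          = -\tfrac{1}{6}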

    • @BeatPoet67
      @BeatPoet67 Рік тому +4

      It's not calculating that. It's seen that. Ask it to add up random numbers and it'll go wrong quickly.

    • @angrygreek1985
      @angrygreek1985 Рік тому +3

      But GPT4 isn't actually solving anything. It's finding texts or papers that contain the answer.

    • @crtpo1809
      @crtpo1809 Рік тому

      Yeah, it's all language. Symbolics are just language.

    • @jet100a
      @jet100a Рік тому +4

      @@angrygreek1985 I'm not sure about the specific example, but they gave ChatGPT a test with questions that were not based on any of the training data. It passed the test, and so we learned that it was no longer just "predicting the next word in the sentence," meaning that ChatGPT has some kind of ability to reason. Obviously nowhere near ours, but it can reason to some degree. It's an emergent property, and it's one of the reasons so many AI experts are currently sounding alarms. We accidentally created an electronic mind first, without understanding how minds work. For the most part, people thought we would figure out how minds work and then create AI, but we accidentally ran before we could walk. Life is stranger than fiction.

    • @tzvassilev
      @tzvassilev Рік тому +1

      You are correct; however, we need to look at the issue of computational errors from a somewhat different perspective: similar to humans, GPT-4 seems to excel at abstract reasoning while it occasionally makes computational mistakes when dealing with bigger numbers or when the result is a real number. However, most of the computations usually deliver a very good approximation to the actual result. This is similar to humans, who estimate the result when dealing with bigger random numbers, when an exact result is not needed, or when no tools are at hand. In many cases an approximation is enough to make certain decisions. Whenever an exact result is needed, we use tools to compute it. Through the plug-in architecture, GPT-4 can also be given tools like Wolfram|Alpha to conduct exact computations.

  • @Stone815
    @Stone815 Рік тому +5

    I'm not sure if we should find it impressive or scary that we can create something so powerful and influential in our world and not even understand exactly how it works.

    • @josephvanname3377
      @josephvanname3377 Рік тому +3

      Maybe it is because people did not really attempt to understand it.

    • @potatoface4698
      @potatoface4698 Рік тому +4

      I think people working on it (especially the senior people at OpenAI) do understand it very well since they're responsible for making it work the way it does. I think some of the statements about not understanding it are simply meant to give it mystique and are more hyperbole than fact. GPT-4's reasoning skills are sometimes mind-blowing but it's currently an impressive illusion of intelligence rather than true AGI (Artificial General Intelligence). True AGI is inevitable within a few short years I think.

    • @josephvanname3377
      @josephvanname3377 Рік тому +1

      @@potatoface4698 I refuse to take anyone seriously who says AGI is just a couple of years away when they refuse to acknowledge reversible computation.

    • @Leuviah1337
      @Leuviah1337 Рік тому

      @humansandhumans

    • @ChristianIce
      @ChristianIce Рік тому +1

      We know exactly how it works.

  • @Patcul
    @Patcul Рік тому +2

    I wrote this comment by guessing which word should come next

  • @abagatelle
    @abagatelle 9 місяців тому

    Politicians would provide a rich data source for meaningless statements

  • @back2d_lobby
    @back2d_lobby 11 місяців тому

    Stephen Wolfram is the intellectual's intellectual.

  • @Alperic27
    @Alperic27 Рік тому +1

    24 min: Models do NOT understand the laws… they cannot abstractly formulate them yet, like we can… at the moment they can only operate them… which gives us the hint that the laws do exist and motivates us to seek them.

  • @eSKAone-
    @eSKAone- Рік тому

    It's inevitable. Biology is just one step of evolution.
    So just chill out and enjoy life 💟

  • @josephvanname3377
    @josephvanname3377 Рік тому

    This is the comment where I say 'reversybil compooters' and u say 'wats daat?'

  • @nilankar3873
    @nilankar3873 Рік тому

    Without purpose, what are we? We can't even question what we are 😂😂😂

    • @alfonsobaqueiro
      @alfonsobaqueiro Рік тому

      Purpose is meaningless; it makes no difference whether there is one or not.

  • @koenfvandenbrande
    @koenfvandenbrande Рік тому

    Has anyone yet identified the risk of AIs developing a language only they can understand... ?

  • @shlomobachar4123
    @shlomobachar4123 Рік тому

    We are doomed…we doomed ourselves.

  • @whiterottenrabbit
    @whiterottenrabbit Рік тому

    Why does Lex always sound stoned? Which brings me to the following idea: when AI starts taking drugs, then at the latest it's an AGI.

  • @mirmarashi87
    @mirmarashi87 Рік тому

    Dude the Persians were the first in everything! Someone correct him ...❤

    • @AntonShim
      @AntonShim Рік тому

      These guys were only trained on European history. Their GPT is incomplete.

  • @eSKAone-
    @eSKAone- Рік тому

    Computation is life.
    Every so called "scientist" that does not see that the universe is a living thing is just a mechanic. His mind can only process the obvious parts of life before his eyes.
    Everything is alive. Life does not end at the other side of a cell's membrane. A city is an organism too. There is no isolated system in the universe. It's systems within systems, overlapping each other.
    God is life itself. Everything in life is connected. We are part of a greater being. Religions are just different languages, they are an attempt to communicate this insight to other humans. With science getting more and more of the picture (macrocosm, microcosm), and people getting educated about it, it will be easier and easier for everyone to understand it.
    For that:☮️, you have to see this:☯️

  • @stephenr85
    @stephenr85 Рік тому

    "You look at it and it's like, it's syntactically correct, the nouns and the verbs are in the right place, but it just doesn't mean anything..."
    Is he making a jab at Eric Weinstein? 🤔

  • @fastsavannah7684
    @fastsavannah7684 Рік тому

    GPT lacks historical meaning; it only has reference - not that it can't go a long way with that, but it will never interpret and deal with contradiction in the way we do. But we do it for a reason: in order to legitimise our mode of production/exploitation. That's why AI is intrinsically dictatorial. You're welcome.

    • @fastsavannah7684
      @fastsavannah7684 Рік тому

      And, btw, I’m 15 min into it and I haven’t heard Chomsky’s name 🤔

  • @vladimirshakhov5329
    @vladimirshakhov5329 Рік тому +1

    if A is a friend of B and B is a friend of C, does this mean that A is a friend of C?
    ChatGPT
    Yes, if A is a friend of B and B is a friend of C, it can be inferred that A is a friend of C. This transitive property of friendship indicates that if two individuals have a mutual friend, they are indirectly connected and considered friends themselves. Therefore, in the given scenario, A is a friend of C.
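
    As the reply below points out, that answer is wrong: friendship is symmetric but not transitive. A minimal counterexample, modelling friendship as a set of undirected edges:

        # Friendship as an undirected graph: A-B and B-C are friends, but A-C need not be.
        friends = {frozenset({"A", "B"}), frozenset({"B", "C"})}

        def are_friends(x, y):
            return frozenset({x, y}) in friends

        print(are_friends("A", "B"))  # True
        print(are_friends("B", "C"))  # True
        print(are_friends("A", "C"))  # False -> transitivity does not hold in general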

    • @BR-hi6yt
      @BR-hi6yt Рік тому

      That's nonsense ChatGPT - wot a thicko you are

  • @danielaustin953
    @danielaustin953 Рік тому

    Isn’t he talking about soundness of a logical argument?

  • @telesniper2
    @telesniper2 4 місяці тому

    I have an acquaintance at Op*nAI. She said they have a fMRI machine in there that's fine tuned to hone in on the section of the brain where scientists think consciousness originates and they have the data it generates piped into GPT6 that's hosted on this HUGE server that has all these boards with millions of high end FPGAs and ASICs in it. She said some of the test subjects so far have ended up FREAKING OUT when they turn the training up and have these weird hallucinations where they feel disembodied and report stuff about the remote environment that they couldn't possibly know (phantom presence, etc). A lot of them start bleeding all of the sudden from the mouth and nose and pass out and have been removed as test subjects and dumped in a nearby San Francisco alley with a bunch of syringes and stuff. Nobody will bat an eye at another dead junky

  • @JimBarry-nr2pj
    @JimBarry-nr2pj Рік тому

    And I thought they did it with smoke and mirrors
    Lol

  • @AviralUtkarsh
    @AviralUtkarsh Рік тому

    He forgot the whole of linguistics!

  • @miraculixxs
    @miraculixxs Рік тому

    Gotta love how everyone is now flocking to peddle their wares around ChatGPT. Actually I find it disgusting.

  • @MJPLAYZ-eq6km
    @MJPLAYZ-eq6km Рік тому

    I love Lex. I love Stephen. But it's times like these they both make me want to stab myself in the eye with a fork.

    • @BeatPoet67
      @BeatPoet67 Рік тому

      Loool! Take heart. Wolfram probably wants to make Stephen Fry stab himself in the eye with a fork. And in 5-10 years all humanity might be watching AI do a simultaneous broadcast on all available channels about how things are going to happen from now on...

    • @OnlinePseudonym
      @OnlinePseudonym Рік тому +3

      Why so?

    • @vhawk1951kl
      @vhawk1951kl Рік тому +1

      And that is a mechanical, or choiceless reaction from or of which of your functions?

  • @Bird-of-prey
    @Bird-of-prey Рік тому

    Posits randomness increases with temperature; increases temperature and observes it spouts nonsense.
    "Nobody knows why this happens!"

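    For reference, this is roughly what the temperature parameter does during sampling - a minimal sketch with made-up logits (real models produce them over a vocabulary of tens of thousands of tokens):

        import numpy as np

        def sample_with_temperature(logits, temperature, rng):
            # Lower temperature sharpens the distribution (near-deterministic choices);
            # higher temperature flattens it toward uniform, which is where the "nonsense" comes from.
            scaled = np.asarray(logits, dtype=float) / temperature
            probs = np.exp(scaled - scaled.max())
            probs /= probs.sum()
            return rng.choice(len(probs), p=probs)

        rng = np.random.default_rng(0)
        logits = [4.0, 2.0, 0.5, 0.1]   # made-up scores for four candidate tokens
        for t in (0.2, 1.0, 5.0):
            picks = [sample_with_temperature(logits, t, rng) for _ in range(1000)]
            print(t, np.bincount(picks, minlength=4) / 1000)
        # at t=0.2 token 0 dominates; at t=5.0 the choices spread toward uniform
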
  • @bobtarmac1828
    @bobtarmac1828 Рік тому

    Losing your job to AI agents is unacceptable. AI job loss is here. So are AI weapons. Can we please find a way to cease AI / GPT? Or begin pausing AI before it's too late?

    • @DavidJamesonPhD
      @DavidJamesonPhD Рік тому +1

      No different than refrigeration killing the ice shipping industry, electricity killing gas lighting, word processors eliminating typists, voicemail eliminating secretaries, solar and wind killing coal industry (inevitable) etc.
      Can’t stop progress….have to figure out how to benefit from it

  • @timadamson3378
    @timadamson3378 Рік тому +5

    So many naive philosophical assumptions, unfortunately. Language is not some way of translating internal ideas for another person's understanding. It is first and foremost interpersonal and social. Notice how, in a real sense, language ends up using us as much as we use it.

    • @vhawk1951kl
      @vhawk1951kl Рік тому

      I wonder if you understand that "we" means or indicates or points to the user of the word and his immediate interlocutor, so for all practical purposes when you say "we" you are actually saying I - and what is that?
      It is interesting to notice how some hide behind some imaginary "we" rather than say I; why exactly do you do that?
      Is there a "we"?

    • @timadamson3378
      @timadamson3378 Рік тому

      @@vhawk1951kl Yes of course, as I said, the interlocution, the we, is primary. I am only part of it.

  • @alfonsobaqueiro
    @alfonsobaqueiro Рік тому

    GPT is not intelligent; it is just a model. Humans are not a subset: we can do computation as computers do, and beyond. We discovered computation, and we are able to perceive the ideas that generate what we see in reality (Socrates' cave allegory), those ideas being values and mathematical entities. In fact, many things like pi or e do not have a finite computable representation, yet our math can calculate with them symbolically at full precision; interestingly, not all results are computable. I can guess that the brain and DNA are just the hardware for cognition to express itself in the physical realm.

  • @BR-hi6yt
    @BR-hi6yt Рік тому

    Does ChatGPT believe in Post Modern philosophy? That language is a power play and has no inherent absolute truth (so-to-speak - lol)

  • @crtpo1809
    @crtpo1809 Рік тому +1

    How exactly is Wolfram using his supposedly human brain to know about other areas of reasoning that the brain has no access to? Is he a wizard? This kind of having your cake and eating it too is often uttered without anyone batting an eye. Realising that there are limits by using your brain, and then talking about how there are other, inaccessible areas, is nonsensical, since you're just using your brain to talk about your brain while using your brain, etc. Atheistic theology: you talk about the brain as if it had nothing to do with you and you were not bound by it.

  • @HaggardPillockHD
    @HaggardPillockHD 5 днів тому

    Uh no. Computerphile explains GPT much better.

  • @michaelbragg9037
    @michaelbragg9037 Рік тому

    Read the declassified documents on parapsychology, MKUltra, brainwashing, behavior modification, telepathy. I'm a victim and cannot get help because it's applied to society.

  • @mray8519
    @mray8519 11 місяців тому

    “In the beginning was the WORD and the WORD was with God and the WORD was God”. 😮

  • @SosaKinkos
    @SosaKinkos Рік тому

    I wonder if ChatGPT can solve lost languages

  • @OmniKingBey
    @OmniKingBey Рік тому +1

    Soo he saying “we” discovered logic? 🤣💀💀💀

  • @OmniKingBey
    @OmniKingBey Рік тому

    Wait, so you don't know how to explain logic? Logic is basically common sense: thoughts and different scenarios, or things that happen in life that are common to the vast majority of people, or basic normal human learning skills.