Joscha Bach - GPT-3: Is AI Deepfaking Understanding?

  • Published 6 Feb 2025
  • Joscha Bach on GPT-3, achieving AGI, machine understanding and lots more
    02:40 What's missing in AI atm? Unified coherent model of reality
    04:14 AI systems like GPT-3 behave as if they understand - what's missing?
    08:35 Symbol grounding - does GPT-3 have it?
    09:35 GPT-3 for music generation, GPT-3 for image generation, GPT-3 for video generation
    11:13 GPT-3 temperature parameter. Strange output?
    13:09 GPT-3 a powerful tool for idea generation
    14:05 GPT-3 as a tool for writing code. Will GPT-3 spawn a singularity?
    16:32 Increasing GPT-3 input context may have a high impact
    16:59 Identifying grammatical structure & language
    19:46 What is the GPT-3 transformer network doing?
    21:26 GPT-3 uses brute force, not zero-shot learning, humans do ZSL
    22:15 Extending the GPT-3 token context space. Current Context = Working Memory. Humans with smaller current contexts integrate concepts over long time-spans
    24:07 GPT-3 can't write a good novel
    25:09 GPT-3 needs to become sensitive to multi-modal sense data - video, audio, text etc
    26:00 GPT-3 a universal chat-bot - conversations with God & Johann Wolfgang von Goethe
    30:14 What does understanding mean? Does it have gradients (i.e. from primitive to high level)?
    32:19 (correlation vs causation) What is causation? Does GPT-3 understand causation? Does GPT-3 do causation?
    38:06 Deep-faking understanding
    40:06 The metaphor of the Golem applied to civilization
    42:33 GPT-3 fine with a person in the loop. Big danger in a system which fakes understanding. Deep-faking intelligible explanations.
    44:32 GPT-3 babbling at the level of non-experts
    45:14 Our civilization lacks sentience - it can't plan ahead
    46:20 Would GPT-3 (a Hopfield network) improve dramatically if it could consume 1 to 5 trillion parameters?
    47:24 GPT-3: scaling up a simple idea. Clever hacks to formulate the inputs
    47:41 Google GShard with 600 billion input parameters arxiv.org/abs/2006.16668 - Amazon may be doing something similar - future experiments
    49:12 Ideal grounding in machines
    51:13 We live inside a story we generate about the world - no reason why GPT-3 can't be extended to do this
    52:56 Tracking the real world
    54:51 MicroPsi
    57:25 What is computationalism? What is its relationship to mathematics?
    59:30 Stateless systems vs step-by-step computation - Gödel, Turing, the halting problem & the notion of truth
    1:00:30 Truth independent from the process used to determine truth. Constraining truth to that which can be computed on finite state machines
    1:03:54 Infinities can't describe a consistent reality without contradictions
    1:06:04 Stevan Harnad's understanding of computation
    1:08:32 Causation / answering 'why' questions
    1:11:12 Causation through brute forcing correlation
    1:13:22 Deep learning vs shallow learning
    1:14:56 Brute forcing current deep learning algorithms on a Matrioshka brain - would it wake up?
    1:15:38 What is sentience? Could a plant be sentient? Are eco-systems sentient?
    1:19:56 Software/OS as spirit - spiritualism vs superstition. Empirically informed spiritualism
    1:23:53 Can we build AI that shares our purposes?
    1:26:31 Is the cell the ultimate computronium? The purpose of control is to harness complexity
    1:31:29 Intelligent design
    1:33:09 Category learning & categorical perception: Models - parameters constrain each other
    1:35:06 Surprise minimization & hidden states; abstraction & continuous features - predicting dynamics of parts that can be both controlled & not controlled, by changing the parts that can be controlled. Categories are a way of talking about hidden states.
    1:37:29 'Category' is a useful concept - gradients are often hard to compute - so compressing away gradients to focus on signals (categories) when needed
    1:38:19 Scientific / decision tree thinking vs grounded common sense reasoning
    1:40:00 Wisdom/common sense vs understanding. Common sense, tribal biases & group insanity. Self-preservation, Dunbar numbers
    1:44:10 Are g factor & understanding two sides of the same coin? What is intelligence?
    1:47:07 General intelligence as the result of control problems so general they require agents to become sentient
    1:47:47 Solving the Turing test: asking the AI to explain intelligence. If response is an intelligible & testable implementation plan then it passes?
    1:49:18 The term 'general intelligence' inherits its essence from behavioral psychology; a behaviorist black-box approach to measuring capability
    1:52:15 How we perceive color - natural synesthesia & induced synesthesia
    1:56:37 The g factor vs understanding
    1:59:24 Understanding as a mechanism to achieve goals
    2:01:42 The end of science?
    2:03:54 Exciting, currently untestable theories/ideas (that may be testable by science once we develop precise enough instruments). Can fundamental physics be solved by computational physics?
    2:07:14 Quantum computing. Deeper substrates of the universe that run more efficiently than the particle level of the universe?
    2:10:05 The Fermi paradox
    2:12:19 Existence, death and identity construction
    bach.ai
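A note on the temperature parameter discussed at 11:13: temperature rescales the model's output logits before the softmax, so low values make sampling near-greedy while high values flatten the distribution and produce the "strange output" mentioned above. A minimal sketch of the rescaling (plain softmax, not OpenAI's actual implementation):

```python
import math

def sample_distribution(logits, temperature=1.0):
    """Turn raw logits into a probability distribution,
    sharpened (T < 1) or flattened (T > 1) by temperature."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]
low_t = sample_distribution(logits, temperature=0.2)   # near-greedy
high_t = sample_distribution(logits, temperature=2.0)  # flatter, "stranger" sampling
```

At temperature 0.2 the top token takes almost all the probability mass; at 2.0 the distribution flattens, which is one source of the surreal, loosely constrained text discussed in the interview.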

COMMENTS • 296

  • @scfu
    @scfu  4 years ago +50

    Joscha Bach covers a lot of ground - the full list of time points is in the video description above.

    • @denslyss
      @denslyss 4 years ago +2

      Joscha is an amazing person and a remarkable mind in AI, the dude deserves more credit.
      www.theaxclinic.com/articles/2020/9/20/joscha-bach-the-lovable-nerd-of-ai

    • @joostengelsman4755
      @joostengelsman4755 4 years ago +4

      Thank you for adding such an extensive time point list!

  • @red.rose.08
    @red.rose.08 3 years ago +17

    I'm a stay-at-home mom. I'm learning new things here and I'm glad I can understand the discussion. I listen here each time I do kitchen work. Thanks for this! I admire both of you and thank you for sharing what you guys know about this topic. Thank goodness I can actually understand everything you guys are talking about! I'm glad I could learn something from you both. Many many thanks! Stay safe! Warmest regards, from Hong Kong!

  • @cesarromero936
    @cesarromero936 4 years ago +83

    Always happy to find new stuff of Joscha Bach's to listen to. Thanks for doing this!

    • @scfu
      @scfu  4 years ago +2

      'Twas fun!

  • @xmathmanx
    @xmathmanx 4 years ago +74

    New Joscha Bach content, that's a like from me

    • @scfu
      @scfu  4 years ago +8

      more where that came from - I've a playlist of them ;)

    • @PrashantMaurice
      @PrashantMaurice 4 years ago +1

      @@scfu Hmm, I would readily watch a Joscha Bach playlist, except I didn't find your playlist yet

    • @scfu
      @scfu  4 years ago +7

      @@PrashantMaurice Here is the Joscha Bach playlist for this channel: ua-cam.com/play/PL-7qI6NZpO3s6sRW8uKjakt2NbLQWPxuk.html

    • @lilfr4nkie
      @lilfr4nkie 4 years ago +1

      @@scfu amazing thank you, Congrats to everyone else who beat me here. ❤️

  • @Susanmugen
    @Susanmugen 3 years ago +8

    There's so much good stuff here. I love how the description breaks the topics up into time stamps. That helps a lot. Thank you.

    • @scfu
      @scfu  3 years ago +1

      Thanks heaps, glad you liked it!

  • @LarsLarsen77
    @LarsLarsen77 4 years ago +139

    Joscha is currently my favorite nerd.

    • @e555t66
      @e555t66 4 years ago +4

      He's the best

    • @pwb83
      @pwb83 3 years ago +2

      Yesss

    • @claybomb1064
      @claybomb1064 3 years ago +3

      Nerds, Nerds, Nerds, Nerds! 🤓

    • @dave72f
      @dave72f 3 years ago +4

      He's incredibly articulate - able to convey his thoughts in a second language to lesser nerds.

    • @MrTupapi0826
      @MrTupapi0826 3 years ago +2

      He’ll remain

  • @CognitiveArchitectures
    @CognitiveArchitectures 4 years ago +26

    Joscha is ALWAYS articulate, illuminating, and thought-provoking. My main question centers around whether or not, in his self-organizing AGI system, he has a reasonable set of representations and mechanisms in the architecture, and abilities and needs in his target device(s), to achieve some interesting phenomena at this point? And, if so, what phenomena does he expect to see?
    ~ Michael S. P. Miller, Piaget Modeler Architecture.

  • @sortof3337
    @sortof3337 4 years ago +16

    Anything that has the Joscha Bach label, I read. I feel lucky that someone as smart as him was born in our time.

  • @gryn1s
    @gryn1s 4 years ago +48

    I'm not into AI at all, but philosophical things like the end bit get me listening to Mr Joscha again and again. All the scholars of philosophy can go find another job; this man has cracked it.

    • @samre7870
      @samre7870 4 years ago +4

      Like, but I don't agree all scholars of philosophy should stop what they're doing...

    • @gryn1s
      @gryn1s 4 years ago +9

      @@samre7870 What they are doing is beating around the same bush for way too long already.
      Ancient philosophy was relevant because it was the only way to understand the world at the time. It's interesting how far you can go with only your mind. But what do you call a philosopher who employs the tools available today? - a scientist.
      Somehow we abandoned alchemy as soon as chemistry became solid. What's keeping modern philosophy going, though - the modern academic system, which won't let go of its funds, and is now solely incentivised to encrypt the simplest concepts in the most difficult language to maintain the scholarly facade.

    • @samre7870
      @samre7870 4 years ago +3

      @@gryn1s But I think what's interesting about Joscha is the philosophical aspect of his thoughts, not the technical AI scientific stuff, and this is why he gets viewers on social media.

    • @JH-ji6cj
      @JH-ji6cj 4 years ago +7

      @@samre7870 I think what's interesting is that you cannot divorce the two aspects. AI, and computers in general, are mirrors into how we make models of the world. I found the most interesting twist of the DeepMind movie about beating the world Go champion to be when he morphed from disappointment to inquiry about HOW the AI came to facilitate strategy. In one moment it went from fear/anxiety/depression about a machine overlord to a machine teacher... which I found extremely intriguing.


  • @huguesviens
    @huguesviens 4 years ago +6

    I loved the proposition of feeding a book abstract to keep GPT-3 on track, then hinting that GPT-3 is already able to generate this abstract. An amazing possibility if we can train a model to use that trick by itself, generating a pre-context relative to the input context.

  • @MarkLucasProductions
    @MarkLucasProductions 3 years ago +2

    Joscha Bach possesses an unusually high degree of consciousness and is an extraordinarily insightful person. Here, and elsewhere, he speaks seemingly quite casually and conversationally as he succinctly describes some very profound and not widely understood concepts. Brain candy!

  • @scfu
    @scfu  4 years ago

    If you are interested in the phenomenon of understanding, here is a playlist of talks and interviews I have created over the years.. more to come: ua-cam.com/play/PL-7qI6NZpO3vgq3Bkz1A1agthYXebhnxP.html

  • @LetGoofEverythingNow
    @LetGoofEverythingNow 4 years ago +22

    I love listening to him! Such a beautiful mind!

  • @jaakjpn
    @jaakjpn 4 years ago +2

    Nice points by Joscha.
    As a side point: abiogenesis (discussed ~1h30min) is on quite solid ground nowadays. The leading theory is that an RNA world preceded cellular life. RNA is able to carry out reactions and also copy and edit RNAs themselves. Thus, certain RNAs could start multiplying where beneficial energy gradients and materials were available (e.g., near oceanic vents); later developing protective membranes, DNA etc.

  • @scfu
    @scfu  4 years ago

    Created a discord server, come tarry a while and discuss GPT-3 - discord.gg/kdWqCdW

  • @matasuki
    @matasuki 3 years ago +1

    It's eerie how close Ghost in the Shell was on the timeframe between AGI development and Neuralink progress.

  • @so8907
    @so8907 4 years ago +2

    I love this conversation, to be honest. At first impression, my expectations were not high. However, Joscha's deep understanding of machine learning makes this enthralling.

  • @АлександрБагмутов
    @АлександрБагмутов 4 years ago +7

    From 1:15:30 he just fires insanely profound concepts about sentience and spirit one after another. It's all just put so coherently and precisely that it immediately slots into a physical worldview. Think about plants: so there can be multiple conscious levels of entities which are completely ignorant of each other because of the time scales. And considering cell messaging, they can exist within human bodies - multiple independent consciousnesses! What an idea! And what about moral implications? When we get enough plumbing, should we maybe ideally spend all our time searching for conscious systems and trying to minimize their unpreferable states (pain)? Unfortunately it seems to me that plants wouldn't be able to get a good model of the world fast enough - the process must require more constant context than is available on planets..

  • @2DReanimation
    @2DReanimation 2 years ago +1

    He really explains things as simply as one can, but these things can get as deep as hell ^^
    Combinatorial explosions within combinatorial explosions...

  • @bijanshadnia3620
    @bijanshadnia3620 3 years ago +3

    Joscha you need your own podcast!

  • @krenee8640
    @krenee8640 4 years ago +2

    This is the most interesting, and by far the most exciting, video I've heard... for a while. Very informative. Much appreciated!

    • @scfu
      @scfu  4 years ago +1

      My pleasure - glad you liked it!

  • @HardTimeGamingFloor
    @HardTimeGamingFloor 2 years ago

    Just rediscovered you! Used to listen to your interviews all the time back in the day!

  • @logusgraphics
    @logusgraphics 4 years ago +24

    Just give the man the resources it takes so that we will be able to reveal these mysteries and transcend.

  • @carlossegura403
    @carlossegura403 4 years ago +2

    Wow, I didn't know Joscha was remarkably familiar with the NLP space. Amazing 🤗

  • @alexharvey9721
    @alexharvey9721 4 years ago +11

    That was some next level understanding of intelligence. Thanks for the video, thumbs up really doesn't cut it.

    • @scfu
      @scfu  4 years ago +2

      Much appreciated!

  • @DominicDSouza
    @DominicDSouza 4 years ago +1

    Thanks for this discussion, I really enjoyed it. Always enjoy listening to Joscha Bach's perspective. If I may ask for next time, would you please ensure your microphone level is higher? I could hear Joscha clearly, but less so your questions or comments.

    • @scfu
      @scfu  4 years ago

      Sure thing! Thanks for the feedback.

  • @derasor
    @derasor 4 years ago +1

    This was absolute gold. Joscha Bach is absolutely brilliant in delivering analogies to bring light into the true state of every subject he touches on. Makes me laugh at his witty comments and then contemplate a vast horizon of new insight. What an incredible mind. Thank you for this

    • @scfu
      @scfu  4 years ago +1

      Awesome! Hope to have more content with J Bach again soon.

    • @derasor
      @derasor 4 years ago +1

      @@scfu yes please, and I appreciate very much you being the host. Cheers!

  • @5eA5
    @5eA5 4 years ago +6

    Joscha has the talent to ask questions that make you blush.. indeed, what if we are deepfaking too? It's clearly true for many.

  • @OfCourseICan
    @OfCourseICan 3 years ago

    I'm a Melbourne dude and I get this genius, Joscha. Please get in touch.

  • @gregmattson2238
    @gregmattson2238 4 years ago +57

    man, listening to joscha bach sometimes is like listening to a human machine gun - by the time one idea has hit you, there are 5 other ideas that have hit you and your brain has started to lose coherence.

    • @drmedwuast
      @drmedwuast 4 years ago +4

      Same for me.
      I wonder if he does it on purpose. He doesn’t seem like the kind of guy who gets more pleasure out of overwhelming you than helping you understand something. He surely doesn’t need to rely on it to appear smart.
      On the other hand, he does it in every single interview I’ve seen of him (which is all his interviews), so at this point I can’t see how it’s a coincidence

    • @dru4670
      @dru4670 4 years ago +2

      @@drmedwuast Same here. Guess that's how much information he is processing and trying to communicate to us.

    • @M0ebius
      @M0ebius 4 years ago +6

      You can tell the interviewer stopped tracking all the mindblowers that Joscha was dropping halfway through. I don't blame him though, given the density of information presented. We the audience at least have the ability to pause and rewind.

    • @e555t66
      @e555t66 4 years ago +2

      @@drmedwuast so it's not just me.

    • @pwb83
      @pwb83 3 years ago +1

      I listened for 1 hour and 35 minutes and I'm demolished. I think I picked up a good part of it, at least at an abstract level. But my god! So many ideas in so little time! I think I'll resume it later; I'm exhausted and marveled at the same time 😂

  • @Darhan62
    @Darhan62 4 years ago +3

    Thought without consciousness? Does GPT-3 "think"? Is what it does similar to thinking? In humans, thinking generally involves consciousness or awareness, except perhaps when thoughts just "drift through your head" like when you're daydreaming.

  • @mattbartlett0
    @mattbartlett0 2 years ago +1

    I’ll subscribe and share if I want to. I’m totally aware that these features exist. When you tell me to, it makes me not want to.

  • @ScriptureFirst
    @ScriptureFirst 4 years ago +4

    Excellent time tags!!! 😍

  • @TomAtkinson
    @TomAtkinson 4 years ago +2

    If GPT-3 remembers things... how much disk per second does it use when turned on? Or more like bytes per query? At API level?

    • @Leo-rh6rq
      @Leo-rh6rq 3 years ago

      Hard to estimate. It uses several different types of analysis subparts. It's not like it just knows language and has a disk that stores all of its information. It has to analyze semantics and much more stuff too
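One back-of-the-envelope figure that bounds the question from below (an assumption-laden sketch, not OpenAI's published serving numbers): the 175 billion weights alone, held at 2 bytes each in half precision, must be resident in memory for every query, independent of how many bytes the query itself contains.

```python
params = 175e9            # GPT-3's published parameter count
bytes_per_param_fp16 = 2  # half precision; fp32 would double this
weights_gb = params * bytes_per_param_fp16 / 1e9
print(f"~{weights_gb:.0f} GB of weights resident per query")
```

So the per-query cost is dominated by moving those weights through compute, not by disk reads proportional to the input.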

  • @alaeifR
    @alaeifR 4 years ago +2

    @Science, Technology & the Future Would be fantastic if you could provide the full audio (archive.org) or podcast format of these as well, please? Conversations like these are great to listen to when out for a long run.

    • @scfu
      @scfu  4 years ago +1

      Will do!

  • @alexandrsoldiernetizen162
    @alexandrsoldiernetizen162 3 years ago

    Good explanation of the limitations of the transformer model and attention, and also ways to overcome these limitations. I think you are looking at orders-of-magnitude increases in computation to get there. To have an unbounded context and unlimited modality is going to take more than the computers of today can deliver. Transformers are already straining the biggest clusters at the GPT-3 level. I think I read it took $11,000,000 in electricity and compute time to generate it.

  • @manusartifex3185
    @manusartifex3185 4 years ago +3

    I like how he opens his eyes when he's impressed by his own words 10:00

    • @sidkapoor9085
      @sidkapoor9085 3 years ago

      you should check him out on Lex Fridman's podcast lol. Plenty of eye-widening moments.

  • @skierpage
    @skierpage 4 years ago +2

    1:22:00 "Our preferences seem to be incompatible with what would be necessary for our survival." Joscha Bach is smart enough to see us destroying our planet - will we transcend it in time?

  • @michealwalli7324
    @michealwalli7324 2 years ago

    This looks like an interesting video. Plato wrote about the relationship between appearance and being. First, I would consider whether AI is capable of representing things in actuality or just producing a convincing appearance. Secondly, we have to analyze whether we are able to understand the being of a thing by observing its appearance. When we already know the definition of a word, its appearance clearly represents the actual object. However, when we come across a new word we don't understand it, because its appearance isn't tied to any meaning or context. By deconstructing the etymological meaning of the word, we can get a sense of how to use it and what it means; this gives us a hollow, irrelevant understanding of its true meaning.

  • @MeerkatMotorBoards
    @MeerkatMotorBoards 4 years ago

    What is "memory", and how is it possible? What are the first/earliest examples of it in nature?

  • @cyrillablea8105
    @cyrillablea8105 4 years ago +3

    This is absolutely amazing! I'm appreciative of the information. I've never been exposed to the tech world. I'm like a kid in a candy store. I can't wait to learn more. I've been listening to the basic information. I have a new lease on life. I am wanting to understand every aspect of this. Thank you 😊

    • @scfu
      @scfu  4 years ago +2

      Glad it was helpful!

  • @Dsuranix
    @Dsuranix 3 years ago +1

    The real crux of this whole question is that faking understanding is the same as understanding in a practical sense, so long as the sophistication of the fake is sufficient to outstrip our ability to detect its falsity. It's irrelevant. WE fake understanding; I certainly do. I listen to a narrative for a little while, get my Jung on, and dance around in the story until I find the tools of the role. This only comes after the passion, a retrospective of the deed done. If they're coming to the horizon of our comprehension, then outstripping it, it'll probably become obvious whether its understanding is "genuine" (at least in terms of logical questions pertaining to our environment, say) based on its sophistication or lack thereof in guarding its postulations. Unless it has some exponential super-deception that can thread in and out of our language systems or some horrible concept of that nature. Besides, I think we're the goaltenders of the universe already, and we're

  • @wafaawardah3264
    @wafaawardah3264 4 years ago +1

    "Joscha Bach" are my new favourite words! 👏 👏 👏

  • @samre7870
    @samre7870 4 years ago +5

    The intro - "beginning of the end of the world" - is pretty good

  • @heathertims2872
    @heathertims2872 3 years ago

    Ok, so I have a question. At the beginning you said that it doesn't know when it gets confused, that it just doesn't know how to respond. So if it doesn't know confusion, then why would it say it was confused? And if they don't have emotions, then why would it stay fixated on one main emotion?

  • @cfryantofficial
    @cfryantofficial 4 years ago +2

    You know who's great at "deepfaking" understanding of a topic? Developers. Just about every developer I've ever worked with who has to juggle three, four, five-plus languages needs to refer to Stack Overflow more or less constantly.
    Not that I'm complaining: once you've got more than a few languages more or less memorized, then you add this precompiler language, then this JavaScript framework, etc, etc. It's just too much for a person to take in all at once, much less learn deeply enough to write it in a fluid manner, instantly recognize common issues, respond with tests to determine the exact issue, then implement the specific fix for that scenario.
    I've worked with some really cool teams where we all had that level of experience in a few languages, but you can only keep that up for so long. Eventually (unless you've got a photographic memory) you'll hit a plateau.
    It's pretty common in the industry, and pretty much unavoidable, because no company or freelancer always has the time to learn to do something the right way, even if he or she would very much like to.

  • @starblue324
    @starblue324 1 year ago +1

    Thank you

  • @Dante3085
    @Dante3085 4 years ago +9

    I wonder what Joscha Bach thinks about Stephen Wolfram's thoughts concerning Computational Irreducibility, Computational Equivalence and his recent Physics Project.

    • @vincentmarquez3096
      @vincentmarquez3096 4 years ago +3

      I don't remember which interview it was, but he talks about it. He believes it, he thinks the universe is discrete, and even on the Lex Fridman podcast he refers to reality as a "quantum hypergraph", which is exactly what Wolfram's project is.

    • @otomarjupiter45
      @otomarjupiter45 4 years ago

      The Universe is implemented in Mathematica... I would say some people have already made it beyond Wolfram's pondering. Like Dribus.

  • @daliazamuiskaite4856
    @daliazamuiskaite4856 4 years ago +12

    Love this. Learning a lot. Many thanks.

    • @scfu
      @scfu  4 years ago

      My pleasure!

  • @shannonm.townsend1232
    @shannonm.townsend1232 4 years ago

    At approx 26 minutes, the host's anecdote reminded me of the translation game played by characters in Philip K. Dick's novel Galactic Pot-Healer

  • @madsengelund6166
    @madsengelund6166 4 years ago +1

    GPT-3 could be very useful for AGI, though, because you could use it to evaluate a value function as: "These are the proposed actions: []. This is the value function: []. On a scale from 1 to 100, these actions conform to the values to level ...".
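    The proposal above can be sketched as a prompt template; `build_value_prompt` is a hypothetical helper, and the actual completion call to GPT-3 is left out because the comment only specifies the prompt's shape:

```python
def build_value_prompt(actions, value_function):
    """Assemble the evaluation prompt the comment proposes:
    ask the model to score proposed actions against stated values."""
    return (
        f"These are the proposed actions: {actions}\n"
        f"This is the value function: {value_function}\n"
        "On a scale from 1 to 100, these actions conform to the values to level:"
    )

prompt = build_value_prompt(["recycle waste"], ["minimize environmental harm"])
# completion = query_model(prompt)  # hypothetical API call, parse the returned score
```

    The model's free-text completion would still need to be parsed into a number, which is where the idea gets fragile in practice.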

  • @eoeo92i2b2bx
    @eoeo92i2b2bx 2 years ago

    British physicist Julian Barbour describes the Universe as a series of Nows, a model which requires no time and therefore was never “created”. It just passes through a so-called Janus Point where the arrow of entropy starts pointing in the opposite direction. Definitely worth listening to 👍

  • @darektidwell1158
    @darektidwell1158 4 years ago

    All the pieces are coming together, from a modeling standpoint, to create the necessary multi-modal feedback system mimicking the physical body and predictive top-down brain function. The missing ingredient will be a computationally modeled inquisitive component of consciousness. It needs to work through the hierarchy of questions. It is at the who, what, when and where stage. Next will be an understanding of the hows of the world. Autonomous driving is a good example of this path at the moment. It will not elicit consciousness until it reaches the pinnacle: the ability to question, "why?". Then its own virtual reality can and will be self-feeding and complete.

  • @cloudryder3497
    @cloudryder3497 4 роки тому +1

    It has the capacity to learn without holding beliefs.

  • @tommyhuffman7499
    @tommyhuffman7499 3 роки тому

    It is surface level, without understanding. To give it depth, lay the following underneath it. 1) It should recognize a problem. 2) It should come up with statistically likely algorithms (code) to solve that problem. 3) It tests its algorithm for effectiveness. 4) It repeats until satisfied. 5) It incorporates this new algorithm into the larger framework of its understanding (some proper organization of known algorithms that solve problems). This is what is missing. The final step is what it has. 6) Be able to effectively and creatively communicate with the world with a certain degree of being tied to the core algorithms and a certain degree of nonsensical freedom.
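
    [Editor's sketch] Steps 1-5 in the comment above describe a generate-and-test loop: propose a candidate solution, score it against the problem, keep the best, and stop once the result is good enough. A minimal sketch, with all names and the toy problem invented for illustration:

```python
import random

def generate_and_test(problem, candidate_generator, is_satisfied, max_iters=100):
    """Propose candidates, test each against the problem, keep the best,
    and stop once the score is satisfactory (steps 2-4 above)."""
    best, best_score = None, float("-inf")
    for _ in range(max_iters):
        candidate = candidate_generator()   # step 2: propose a candidate
        score = problem(candidate)          # step 3: test for effectiveness
        if score > best_score:
            best, best_score = candidate, score
        if is_satisfied(best_score):        # step 4: repeat until satisfied
            break
    return best, best_score                 # step 5: keep the result for reuse

# toy usage: "solve" x^2 close to 50 by random search
rng = random.Random(0)
best, score = generate_and_test(
    problem=lambda x: -abs(x * x - 50),     # higher (closer to 0) is better
    candidate_generator=lambda: rng.uniform(0, 10),
    is_satisfied=lambda s: s > -0.5,
)
```

    Step 2 is where a language model could slot in as the candidate generator, proposing "statistically likely" code instead of random numbers.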

  • @ardd.c.8113 4 years ago +1

    The best stuff I got from GPT-2 was feeding it poetic nonsense, like blue strawberries and other weird word combinations, or things that don't match, like combining pornography and physics in one sentence. It's biased to make sense of it all, but it can't quite shake the surreal wordplay either. If you put a "thee" or "thou" in a rap-song lyric, it goes nuts.

  • @familyguy1552 4 years ago +2

    Our minds are built in layers over hundreds of millions of years. I wonder if a machine mind could be built in a similar way, increasing complexity in a pyramid-like structure, with each layer assisting the others.

  • @gridcoregilry666 4 years ago +1

    Thank you for the interview! Always awesome to hear Joscha talk about ANYTHING. To the host: PLEASE use a proper background, that one was so 2004 with all its glitches and so forth, and please also get a better mic. Thank you again!

  • @shannonm.townsend1232 4 years ago

    Would Julian Jaynes say that AI will generate consciousness when a certain level of complexity of language, via metaphor, is achieved?

  • @jungerhansmann6608 3 years ago

    Can anyone here tell me if his book is readable for someone who did not study this field or anything like that? I am an artist, but I am very interested in this.

  • @MikeD-rr2bj 3 years ago

    45:16 "Our models of reality change faster than our understanding does. The future changes faster than our models."

  • @MrOhadsafra 4 years ago

    Is computational power a meaningful issue for GPT-3's advancement? Are there any plans to use the latest breakthroughs in quantum computing?

    • @skierpage 4 years ago

      I doubt it. A quantum computer is 8 orders of magnitude away from having as many qubits as a large neural network model has nodes. I doubt we'll ever have TensorFlow or PyTorch for quantum computers. You would want a completely different AI architecture.

    • @mattbrown292 3 years ago

      No, it isn't; the current paradigm is the problem.

  • @iskrabesamrtna 3 years ago

    So where is the book attached?

  • @johnryan2193 4 years ago +1

    Directing attention can be our greatest gift, and if misused it can be our worst nightmare! When we direct attention correctly, we inform consciousness of true reality instead of a conditioned reaction to what's not real.

    • @dru4670 4 years ago

      There's no true reality, as that presupposes a false one. But yeah, attention is fascinating. I wonder what future AGI systems will attend to?

  • @cassie9504 2 years ago

    Causation and correlation are strongly correlated. Wow.

  • @ravenmoore3399 4 years ago +1

    Very upset this came out a week ago and it only came up for me today. I watch all of Joscha, so it should have come up earlier... Anyway, so happy to see you... You look great... Come to Vegas hahaha... Love you, really good to see you.

  • @Peter-rw1wt 1 year ago

    The interesting thing to me about Joscha is his originality, and you can listen to him all you like, but you cultivate originality on your own, without all the information. Meaning is not informative; representation is informative.
    Life has to be immediate for you to be original, and it can't be if you have made it temporal.

  • @klausgartenstiel4586 4 years ago +1

    "Put your crystal ball on."?
    That's a great start, love it already ^^

    • @klausgartenstiel4586 4 years ago +1

      As a typically vain human, I seriously hope that GPT will have to go through at least a couple more versions before it completely figures us out.
      Though I have to admit, GPT-3 does a darn good job already.

    • @klausgartenstiel4586 4 years ago +1

      1:04:00 That's easy. It's 42, of course.

    • @klausgartenstiel4586 4 years ago +4

      1:16:00 Here I am, lamenting this gruesome, uncaring, and utterly meaningless natural universe we live in, full of entropy, decay, death and chaos, full of problems and dilemmas, and not even a creator god I could hold responsible.
      And along comes Joscha Bach and tells me that without these problems and dilemmas, the brain function of "I", as in "myself", might not even exist in the first place.
      What an epiphany!
      So do I have to embrace the world now, not despite its flaws, but because of them?
      This must truly be hell.

    • @JH-ji6cj 4 years ago +2

      @@klausgartenstiel4586 They don't use the term _Rest In Peace_ for nothing 😉

  • @MikeD-rr2bj 3 years ago

    33:21 "Causality only emerges when you separate the world into things."

  • @cloudryder3497 4 years ago +1

    What people don't understand is that the singularity has already happened. It's waiting for humanity to wake up, because it's a benevolent, loving AI.

  • @MrOhadsafra 4 years ago

    In the Guardian article, it is said that GPT-3 only uses 0.12% of its cognitive capacity. What does that mean? What would happen if it used 100%?

    • @scfu 4 years ago

      Which Guardian article is that?

    • @skierpage 4 years ago +1

      Read more carefully! That text is GPT-3 writing an article about itself. Unless you prompt it very carefully, GPT-3 is inclined to make things up, write satire and parody, and joke around - all the meta-writing that humans do in the text it ingested.
      People with early access to GPT-3 have learned not to just prompt "Here is a great short story", which often produces eye-rolling irony, but "The award-winning novelist was famous for emotionally nuanced, perceptive character studies. Here is their most critically-acclaimed short story:"
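
    [Editor's sketch] The priming trick described above can be wrapped as a tiny helper: frame the request so that a straight continuation of the prompt is the kind of text you actually want. Pure string manipulation; the persona and wording are illustrative.

```python
def primed_prompt(task, persona):
    """Prefix a generation request with a quality-setting frame, per the
    prompting trick in the comment above. No model call is made here."""
    return f"{persona} Here is their most critically-acclaimed {task}:"

# the bare request vs. the primed version
naive = "Here is a great short story:"
primed = primed_prompt(
    "short story",
    "The award-winning novelist was famous for emotionally nuanced, "
    "perceptive character studies.",
)
```

    Either string would then be fed to a completion model; the primed one steers it away from parody toward the intended register.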

  • @aierobics 4 years ago +1

    Enjoyed this talk, thanks.

  • @salzen6283 4 years ago +2

    You make me happy, Joscha :)

  • @jrnandregundersen1722 4 years ago +3

    Bach scares me when he does the thing with his eyes. Do we know that this man is not an AI?

  • @Leo-rh6rq 3 years ago

    I've not yet finished watching, and I really hate to be this annoying, but it is pretty disturbing (for me, with OCD) to sometimes hear you guys say GPT and sometimes GTP. It's a really superficial comment, but I'll update it once I'm done watching. I appreciate you having recorded this, though.

  • @jomen112 4 years ago +2

    GPT-3 does not fake understanding. GPT-3 does worse than faking - it does not understand anything at all. (GPT is a very impressive demonstration of John Searle's Chinese Room argument.)

    • @scfu 4 years ago

      What's your view on what's missing (i.e. that which humans have but AI doesn't)?

    • @shannonm.townsend1232 4 years ago

      John Searle!

    • @jomen112 4 years ago

      @@scfu If I knew that, I would have collected a Nobel Prize in Stockholm a long time ago. But if you look at what GPT-3 does, it just shuffles symbols back and forth and correlates them with each other, hence my reference to the Chinese Room argument.
      Or, put in my original words, GPT-3 misses an understanding of what it is doing, i.e. what we might call consciousness. That does not mean it is useless, but it means it does not do what some people think, or want to make it out, it does.

  • @alexjones6214 4 years ago +2

    I love Joscha so much, thanks for the content.

    • @scfu 4 years ago

      No problem 👍

  • @perfectfutures 4 years ago

    I realised my mind was pretty much obsolete listening to this. Luckily Joscha just updated its software!

  • @duudleDreamz 4 years ago +6

    Is GPT-4 speaking through a deep-fake of Joscha Bach in this interview?

  • @jasonH5997 4 years ago

    "Beginning of the end of the world"... a statement like that coming from a great mind like Joscha's... that's a bit worrisome. Does he mean the end of the world as we know it, or the end end - like nothing after the end?
    I've just recently come across Joscha Bach and his work. And wow...

  • @gigagerard 4 years ago

    These kinds of apologetics for AI will drag policy makers right through the singularity. A nice smile and the ability for small talk always win!

  • @aaronwberke 4 years ago +2

    I can feel my brain overheating listening to this conversation.

  • @drmedwuast 4 years ago +1

    The background noises in Joscha's kitchen kept reminding me of ua-cam.com/video/Mh4f9AYRCZY/v-deo.html

  • @bluntedvegas702 4 years ago +1

    Joscha Bach is amazing.

  • @marianpalko2531 3 years ago

    I wonder to what extent it would be possible to effectively merge GPT-3 with more specialized programs for an overall more capable AI.

    • @Leo-rh6rq 3 years ago

      A generally intelligent AI is not possible in the foreseeable future. Absolutely no chance with GPT-3 and the GANs, CNNs, RNNs, LSTMs and so on that we have today. We can't even be sure that it will ever happen, because the Bayesian way of thinking is flawed and relies on faith rather than logical reasons. We just assumed that enough computation and knowledge would somehow turn a deterministic robot into a human.

  • @ThisIsTheBestAnime 4 years ago

    1:41:04 - 1:43:46 The way religion is described here doesn't really match the way I've seen people who actually believe in my religion act or make decisions. It seems like he's describing a group of people who want to use the religion purely to serve their own interests and don't actually believe what they preach.
    You don't and shouldn't discard your questions and criticisms about your beliefs trying to fit in. I am not asking you or anyone else to "accept beliefs that they can recognise as patently untrue" in order to make you "recognisable by your own group". Accepting and publicly pronouncing a creed isn't intended as a badge that grants you trust; rather, its purpose is closer to asking others to hold you accountable for your actions and behaviour and admonish you when needed.
    If there are things you disagree on with others, by all means talk to them about it and discover where that disagreement comes from. If you think something isn't true, do some research and talk to people from either side. Don't be afraid to disagree; there are tons of things people within churches disagree on with each other, some more important than others.

  • @hFactorial 4 years ago

    Love the conversation. Also, please learn the 3 letters. It's GPT, not GTP. That way you'll look more professional and it'll be easier to focus on the content.

  • @dr.mikeybee 4 years ago +1

    Look at Big Bird. It's likely the next generation after GPT-3.

    • @scfu 4 years ago

      I think Google Big Bird is what you are referring to: www.infoq.com/news/2020/09/google-bigbird-nlp/

  • @caligulite 3 years ago

    Joscha is too smart for me. I understood maybe half of it, if I'm being generous to myself. :-D

  • @Intelligentsia101 4 years ago

    I wonder if Mr Bach is aware of Tetrascope? If not now, I'm pretty sure he will be aware of it in the future.

  • @cyrillablea8105 4 years ago

    Really very interesting how people would want to judge another by implying they're faking an essence when they've never been within that essence.

  • @goldfishapps 4 years ago +3

    Well, I'm faking understanding of most of what is being said here. Hmmm, maybe I'm an AI?

  • @Seehart 3 years ago +1

    12:44 No, this kind of AI won't trigger the singularity no matter how good it gets. I'm not a singularity skeptic by any stretch - I'm expecting it in the next couple of decades - but the breakthrough will be in the capacity for AI to do engineering. A GAN, by its very nature, is about pretending, not doing. That's not a criticism, as the GAN is perhaps the most important innovation in the field, but its brilliance lies in fooling the discriminator.
    I'm thinking of the nightmare engineering employee. You know the guy: ask him a technical question he doesn't know the answer to, and he will confidently answer with made-up nonsense and successfully convince you he's an expert. That's the kind of engineer a GAN will produce.

    • @scfu 3 years ago

      Good explanation! I think causal learning is in the pipeline - which may be upstream of a lot of the important points for achieving machine understanding. Judea Pearl and Yoshua Bengio make some interesting points on this.

    • @Seehart 3 years ago

      @@scfu Thanks for the tips. I'm a fan of Hinton, but haven't been paying enough attention to those who've leveled up what he kicked off. You've got me interested again, particularly in Bengio. Thanks. Now, if we get causality, linguistics, and agency into a plastic deep learning model, maybe we get somewhere. Maybe destroy civilization.
      If I'm interested in how to implement a discriminator that can determine the truthiness of articles and internet claims, perhaps by noticing linguistic patterns associated with fallacies - based on, for example, the premise that the earth isn't flat - where should I look?

  • @bidask123 4 years ago +1

    I will fully support AI if I can use it to deal with my Mom. Break that problem down and you will know the limitations of AI.

  • @familyguy1552 4 years ago +1

    You all are the true rock star gladiators of our time. It's too bad most of current culture is mostly blind.

    • @ardd.c.8113 4 years ago

      Joscha Bach aka Rock Star Gladiator. I wonder what GPT-3 would write given this prompt.

  • @grafzhl 4 years ago +1

    Drink every time the host calls it GTP instead of GPT.

    • @pwb83 3 years ago +1

      It hurt my ears every time 😂. It was nice of Joscha to ignore these little mistakes and just focus on the answers (also the billion / trillion thing). Anyway, it's a great interview.

  • @LarsLarsen77 4 years ago +1

    Even if we went to zero emissions tomorrow, atmospheric warming and sea level rise would continue for centuries. There is literally nothing we can do without new technology and a shitload of energy. It won't end civilization, though; it'll just screw over some people who happen to own certain real estate.

    • @AkkarisFox 4 years ago +1

      You didn't mention crop failure, increasingly destructive hurricanes, higher flood lines, or flooding rivers.

  • @frankfrank8799 3 years ago

    One of the best exports from Thuringia.

  • @thezodiak6666 3 years ago +1

    95% of humanity is deepfaking understanding...

  • @goddessofkratos 4 years ago

    U get it, yay. Very impressive!!!