The Trouble with AI: A Conversation with Stuart Russell and Gary Marcus

  • Published Mar 6, 2023
  • Sam Harris speaks with Stuart Russell and Gary Marcus about recent developments in artificial intelligence and the long-term risks of producing artificial general intelligence (AGI). They discuss the limitations of Deep Learning, the surprising power of narrow AI, ChatGPT, a possible misinformation apocalypse, the problem of instantiating human values, the business model of the Internet, the meta-verse, digital provenance, using AI to control AI, the control problem, emergent goals, locking down core values, programming uncertainty about human values into AGI, the prospects of slowing or stopping AI progress, and other topics.
    Stuart Russell is a Professor of Computer Science at the University of California at Berkeley, holder of the Smith-Zadeh Chair in Engineering, and Director of the Center for Human-Compatible AI. He is an Honorary Fellow of Wadham College, Oxford, an Andrew Carnegie Fellow, and a Fellow of the American Association for Artificial Intelligence, the Association for Computing Machinery, and the American Association for the Advancement of Science. His book, Artificial Intelligence: A Modern Approach, co-authored with Peter Norvig, is the standard text in AI, used in 1500 universities in 135 countries. Russell is also the author of Human Compatible: Artificial Intelligence and the Problem of Control. His research covers a wide range of topics in artificial intelligence, with a current emphasis on the long-term future of artificial intelligence and its relation to humanity. He has developed a new global seismic monitoring system for the nuclear-test-ban treaty and is currently working to ban lethal autonomous weapons.
    Website: people.eecs.berkeley.edu/~rus...
    LinkedIn: / stuartjonathanrussell
    Gary Marcus is a scientist, best-selling author, and entrepreneur. He is well-known for his challenges to contemporary AI, anticipating many of the current limitations decades in advance, and for his research in human language development and cognitive neuroscience. He was Founder and CEO of Geometric Intelligence, a machine-learning company acquired by Uber in 2016. His most recent book, Rebooting AI, co-authored with Ernest Davis, is one of Forbes’s 7 Must Read Books in AI. His podcast Humans versus Machines, will come later this spring.
    Website: garymarcus.com
    Twitter: @GaryMarcus
    March 7, 2023
    SUBSCRIBE to gain access to all full-length episodes of the podcast at samharris.org/subscribe/ OR become a channel member to access episodes on YouTube.
    Subscribe to the YT channel: ua-cam.com/users/subscription_c...
    Follow Making Sense on Twitter: / makingsensehq
    Follow Sam on Facebook: / samharrisorg
    Follow Sam on Instagram: / samharrisorg
    For more information about Sam Harris: www.samharris.org
  • Science & Technology

COMMENTS • 628

  • @talkingtoothpick
    @talkingtoothpick 1 year ago +22

    I wish we heard more from Stuart, even when Sam specifically asked him questions Gary kept butting in, pretty irritating.

  • @Adam13Chalmers
    @Adam13Chalmers 1 year ago +12

    Listening on 4/22, about a month after this was recorded. Looks like this conversation needs to be a recurring monthly feature.

  • @brianvaughn5070
    @brianvaughn5070 1 year ago +26

    I wish Gary didn’t interrupt Stuart so often. If he’s so interested in Stuart’s opinion, maybe he could let Stuart actually voice it.

    • @ramakrishna5480
      @ramakrishna5480 10 months ago +1

      Man, I hate that guy, he does that with everyone. Frankly I don't know what his contributions are to AI research.

  • @organpipedesertdweller6991
    @organpipedesertdweller6991 1 year ago +21

    Stuart Russell's description of the AGI concern starting at 57:00 is insightful. Then comes the scary solution from Gary about equipping AGI with human values. Who are the Grand Overlords that will insert these values into AI to tame it? Current social media examples aren't reassuring.

  • @DeleriousOdyssey
    @DeleriousOdyssey 1 year ago +14

    Gary Marcus is the type of guy to walk away from a conversation with what he put into your opinions as opposed to your actual opinions...

    • @soisas2811
      @soisas2811 1 year ago +5

      Seriously, that was so annoying. He kept interrupting, barely let the other guy talk, and his points were tame and exhaustingly one-sided.

    • @nickidaisydandelion4044
      @nickidaisydandelion4044 1 year ago

      Those are the right guys for the CIA and the corporate agenda.

  • @troy3456789
    @troy3456789 1 year ago +80

    It took months after I realized free will was an illusion for it to truly sink into my bones. It completely changed how I treat everyone. I'm not even rattled when someone launches into calling me names when they disagree with what I have said; not even a little bit. It has been transformative.

    • @dances_with_incels
      @dances_with_incels 1 year ago +7

      Free will is overrated anyways

    • @moriartyco
      @moriartyco 1 year ago +4

      Wow, you got that from Sam's book? I'm intrigued enough to read it now.

    • @troy3456789
      @troy3456789 1 year ago

      @@moriartyco I actually got that from the Waking Up app, but it is also available on his Making Sense podcast. I listened to it on the Waking Up app, then saw where he put it on his Making Sense app. I am not joking. It changed the way I interact even with my wife.
      I realize that it is so important that it would end all personal violence in the entire world if the rest of the world discovered it is an illusion. It transforms the way people talk to each other, and address each other, and think about each other at a foundational level. You will lose all rationale for hating anyone, ever. It might be overnight, it might not.
      I think interlocking this with mindfulness is key, so that reacting is no longer natural to you. It won't mean that when someone is belligerent that it will not shake you up. You are a social creature, and it is impossible to avoid being affected by the behaviors of others; but you can avoid making it worse; and your recovery from it will be faster.
      I cannot put a dollar value on these two important lessons (Mindfulness, and the absolute realization of the illusion of free will), because they literally are transformative. I cannot recommend highly enough.

    • @moriartyco
      @moriartyco 1 year ago

      Well, thanks for that. I'm not totally sold on what Sam has to say but I'll definitely read the book now you put it like that!

    • @troy3456789
      @troy3456789 1 year ago +14

      @@dances_with_incels It's not that it is overrated, it is that the entire concept of it is an illusion. it turns out to be a crappy illusion too.
      We didn't choose our parents or what they thought was important for us or for themselves. We really cannot even choose what foods we like or which ones we dislike. If we could, you could choose to suddenly enjoy foods that currently make you nauseous. Nobody gets to choose what they like or dislike (about anything). Nobody chose their sexuality any more than they chose their eye color or skin color. Realizing just how many constraints there were on our education and our upbringing, it becomes more and more obvious that our certainties, our beliefs and opinions are entirely involuntary. We lose all rationale for hating anyone for it.

  • @stuartadams5849
    @stuartadams5849 1 year ago +9

    Strong request to bring Eliezer Yudkowsky back on for another podcast. His take on AI is at least as important and interesting

  • @heinzgassner1057
    @heinzgassner1057 1 year ago +12

    What a relief, there are still some sane people around on this planet. Thank you for these excellent reflections. Powerful human reflections, based on brilliant human (meta) consciousness.

    • @dougg1075
      @dougg1075 1 year ago

      The other side says the same thing. Now what?

  • @michaelbruhl
    @michaelbruhl 1 year ago +51

    I'm 10 min in, and I can already say, this is the best podcast I have heard from anyone on these topics. Thank you for confirming I'm not losing my mind!

    • @mysterious_miracle
      @mysterious_miracle 1 year ago

      Let me check it out then

    • @samdg1234
      @samdg1234 1 year ago

      10 minutes from the start? Or 10 minutes into the topic of the video? The real topic of the video doesn't start till nearly 10 minutes in.
      I found the first 10 minutes fantastic. It is why I listen to him. (too bad he doesn't follow some of his own advice sometimes). I'm not too interested in AI, not that I shouldn't be.

    • @snooks5607
      @snooks5607 1 year ago +4

      first on topic response from guest is at 15:45

    • @ed345
      @ed345 1 year ago

      You being a moron I can understand your excitement!

  • @MatthewCleere
    @MatthewCleere 1 year ago +3

    As far as the "do your own research" I will quote myself here: "Karaoke made people think they could sing. The internet made people think they could think."

  • @ArcadianGenesis
    @ArcadianGenesis 1 year ago +21

    It's called the _symbol grounding problem:_ how does anything understand anything? How do you connect arbitrary symbols in the mind to real things in the world? It's been a philosophical problem for thousands of years. I wrote my PhD thesis on it.

    • @robadkerson
      @robadkerson 1 year ago +3

      Seems to me they are taking away the AI's capacity for understanding by definition. It's the same repeated flaw of human exceptionalism that we've seen for millennia.

    • @integralmark
      @integralmark 1 year ago

      @@robadkerson humans are the apex predators of the biological world, and we are the result of millennia of evolutionary processes happening. The use of the construct "flaw of human exceptionalism"-- poking at the idea that humans are different from everything else -- is to mock the idea that humans have free will (free will and cultural/knowledge inheritance being the pillars behind the idea of human exceptionalism) since all of our capacity to think and reason and perceive comes from our biological heritage (this is Sam Harris's wheelhouse).
      I would argue that to consider the implications of the paradigm expressed in the words "flaw of human exceptionalism" is to try to give consideration to other biological species, such as dolphins, ravens, apes, etc. etc. ad infinitum by noting the commonality of all species--all species including humans are evolutionary expressions of the biosphere. It's to point out that to preference the apex predator status of humans in comparison to other species has its limits and those limits can be felt into or intuited by realizing that humans are now on a path to wipe out almost all other species, and other species should be honored.
      It's to foreground the idea that humans are now at a point where they can destroy all other species and it's an appeal to ethics or morals or a deeper truth as a way to try to prevent an utter mass extinction even though predation and death and hierarchy--big fish eat little fish--is a part of of the evolutionary cycle of life that we as humans did not create.
      But here's why I wasted an hour on a few paragraphs in trying to formulate this comment to make it semi coherent:
      To use the definitional paradigm of challenging the idea of human exceptionalism to defend AI is to claim AI is alive in a way that is in alignment with the biosphere, or that it is of the same composition of the biosphere, it's almost giving AI victim status and it's certainly granting AI species status and that is bullshit. AI is a simulacra, and the conversation in this podcast is intended to bring to awareness how it simulates and the huge interpretive gaps it has, that literally all the energy in the world soon won't be enough to help it improve its interpretive capabilities in a way that could be called intelligent.
      This last sentence can be debated and is to be determined, and I would ask that you separate it from the above (as if I had any control over your reaction)but I include it here: Honestly, people bend over backwards to point out the good in the AI project, when all I see are endlessly multiplying existential threats comparable to the nuclear bomb.

    • @robadkerson
      @robadkerson 1 year ago +1

      @@integralmark you've excellently, and accurately described the threats it poses.
      I agree.
      It's here, not "in our lifetimes," it's happening now.

    • @tupacalypse88
      @tupacalypse88 1 year ago

      Is there a reliable way to tell if it understands or not?

    • @robadkerson
      @robadkerson 1 year ago

      @@tupacalypse88 It's subjective. If you ask it questions about a topic and it answers your questions, then it understands. If you ask it questions about a topic and it doesn't answer your questions, then it doesn't understand. But you can see how that is extremely subjective, because you decide what questions to ask.

  • @robadkerson
    @robadkerson 1 year ago +9

    I don't know why we're so opposed to using the word "think" for an AI. To say that it isn't thinking ignores that we are basically just pattern recognition machines ourselves. Unless you think we have free will

    • @zerotwo7319
      @zerotwo7319 3 months ago

      We have many more systems than just 'pattern recognition'. Emotions, like logic, are one. Machines don't have those yet.

  • @jakimson1994
    @jakimson1994 1 year ago +5

    very nice podcast, but didn't appreciate Gary Marcus interrupting to disagree. Disagreeing is fine, but let the other person finish some thoughts at least.

  • @Unhacker
    @Unhacker 1 year ago +130

    Regular drops from Sam keep me sane. Just a reminder that there are still other rational people out there. Thanks for that.

    • @samlloyd672
      @samlloyd672 1 year ago

      Sam is not even close to rational. He is another emotion first backwards rationalizer. His entire stance on COVID, vaccines, trump and the “insurrection” hoax has shown it.

    • @PetraKann
      @PetraKann 1 year ago

      Keep you sane?
      He was dead wrong on every aspect of the pandemic and is still providing fake apologetics for mRNA jab mandates, lockdowns, masks etc.
      And you obtain sanity from deceit and self centred hypocritical nonsense?
      You are kidding right Mr Ransacker?

    • @theminister1154
      @theminister1154 1 year ago

      This is probably a good podcast, but the idea that Sam Harris is rational is, at this point, _utterly_ ludicrous. He's one of the most biased commentators I can think of. *LITERALLY* he said he'd rather have Hunter Biden get away with pedophilia than have his monstrously consequential laptop publicized. Sam Harris is a scumbag 2023, and there is absolutely no way to get around that.
      You need to admit this to yourself. You can still listen to Sam, but his ludicrous bias & clinical Delusion must be front of mind when you do. Sam is mentally ill. He's about as rational as, say, Margorie Taylor Green: _not very._

    • @stoutz
      @stoutz 1 year ago +5

      Agreed, always nice to hear Sam's level-headed and rational discussions.

    • @wasdwasdedsf
      @wasdwasdedsf 1 year ago +4

      what

  • @inezgraer5482
    @inezgraer5482 1 year ago +28

    It kind of makes me feel safer knowing Sam's got my back! He has stepped into the shoes of Christopher Hitchens while still being his own man. We need many, many more people like this who are not afraid to expose topics like this, that even the uninitiated can follow and understand. He is constantly mindful that his audience grasps the subject matter.
    Thank you Sam.

    • @samdg1234
      @samdg1234 1 year ago

      You > *"He has stepped into the shoes of Christopher Hitchens"*
      I love Hitch as much as anyone. But he was far, far from perfect. Here is a guy that admired and was hugely shaped by Hitch, who nevertheless was willing to point to some of his nonsense.
      *"ua-cam.com/video/fopo9E7UAVQ/v-deo.html"*
      It will very likely be difficult for you to watch.

    • @kiwicalibre
      @kiwicalibre 1 year ago +1

      "He has stepped into the shoes of Christopher Hitchens" 😂😂😂 Hitchens wouldn't even acknowledge Sam these days, other than to note his specular tumble into the swamp that he railed against all his life.

    • @THEMAX00000
      @THEMAX00000 1 year ago

      From what I know about Christopher Hitchens you are 100% correct
      This guy makes about as little sense as he did

    • @THEMAX00000
      @THEMAX00000 1 year ago +2

      I feel so terribly sorry for you, and that you actually think this guy “has your back” and that you “feel safer”
      This is not meant to offend you, but you really need help

    • @TheGodsEye82
      @TheGodsEye82 1 year ago +2

      You honestly believe Christopher would preach to the world to have absolute blind faith in the government & pharmaceutical companies?

  • @jeffkilgore6320
    @jeffkilgore6320 1 year ago +1

    I read all the time about favorite authors and interviews. For me, it’s Sam Harris.

  • @wolfumz
    @wolfumz 1 year ago +7

    Re: the intro, I had someone the other day tell me that my college education made me dumber. He was saying, in so many words, my degree was a testament to how I had been indoctrinated into woke ideologies, where I learned untrue things about minority groups and gender.
    When I explained my degree was in comp sci and statistics, and I did not spend time on gender studies, he held fast anyway, lol. I guess those posts he read on social media about the woke agenda had more authority on the topic than I did.
    There is this strain in American thought today, where some people have deep contempt towards the idea that someone else may know something they don't. They find it offensive that a doctor can read and interpret a clinical study with some authority.
    But, man, if a rich businessman in an unrelated field disagrees with that doctor, then you better listen up, lol. That's because rich businessmen are always right. If you spent 30 years becoming an expert on medicine or engineering or something, then you can go and fuck yourself. But if you have a lot of money from selling plastic spider rings and herbal supplements, then I want to hear what you have to say about this vaccine.

  • @EvgeniiNeumerzhitckii
    @EvgeniiNeumerzhitckii 1 year ago +41

    Sam has a conversation with two academics. And it is funny how they don’t agree with each other and keep interrupting one another constantly. Amazing episode, please do more like these (with multiple guests).

    • @EvgeniiNeumerzhitckii
      @EvgeniiNeumerzhitckii 1 year ago +10

      Let me finish my sentence! Lol

    • @EvgeniiNeumerzhitckii
      @EvgeniiNeumerzhitckii 1 year ago +7

      Basically, for two hours the two academics, both experts in the field, were trying to understand what the other guy was actually saying. :) What amount of understanding do you expect from a normal person?

    • @Fritz.program
      @Fritz.program 1 year ago +2

      @@EvgeniiNeumerzhitckii great point

    • @michalbreznicky7460
      @michalbreznicky7460 1 year ago +10

      I felt bad for Stuart as he was interrupted all the time, especially by Gary. Moreover, Gary's arguments felt naive at times. While it's very useful to get an outside view (of people outside of AI) to make sure it's not (solely) the AI people deciding on the objectives, having an opinion on objectives doesn't make you an expert on AI technicalities.

    • @robertblake1228
      @robertblake1228 1 year ago

      Doesn't that show how off Sam is lately? He is acting so bitter.

  • @idealmasters
    @idealmasters 1 year ago +14

    I’m particularly worried for our older populations. People above 50 really struggle with obvious online falsehoods. Thinking their phone has a virus from a popup, thinking edited photos are real etc. They have no chance against this.

    • @donkeychan491
      @donkeychan491 1 year ago

      “Obvious online falsehoods” like Covid possibly originating from a lab leak.

    • @MrSonny208
      @MrSonny208 1 year ago +1

      So true, very few are interested in being sharp enough to keep up with new stuff

    • @xact3855
      @xact3855 1 year ago +4

      I'm particularly worried about the younger generation. I'm 50 and have had to rant and rave at my teenagers to get with the AI revolution. As for believing stuff: with the art of Simon Stålenhag, I initially wondered if it was AI generated. How is that for irony! (Bought one of his art books - recommended.)

    • @jg1091
      @jg1091 1 year ago +1

      On the other hand, speaking to a computer might be an easier interface for older people than clicking.

    • @ericfan9149
      @ericfan9149 1 year ago

      I’m over 50 and less credulous about online falsehoods, deepfakes, and misinformation than many younger people I see on twitter and facebook.

  • @LotusHart01
    @LotusHart01 1 year ago +15

    Drink every time Sam says *orthogonal* 🍻

  • @MG-tk2ic
    @MG-tk2ic 1 year ago +1

    GPT4 wrote this summary:
    The discussion starts with the consequences of the arms race in AI technology, focusing on the potential destabilization in global politics and economics.
    The speakers explore the idea of "winner takes all" in AI development and its potential impact on global power dynamics.
    They discuss the importance of international collaboration and creating a global governance structure to prevent misuse of AI technologies.
    The conversation touches on the possibility of an AI-induced unemployment crisis and the need for governments to prepare for such a scenario.
    They delve into the role of social media platforms in the spread of misinformation and the erosion of democracy.
    The participants examine the responsibility of tech companies in addressing misinformation and the potential benefits of fact-checking and content curation.
    The discussion shifts to the metaverse, exploring its potential business model and the ethical implications of AI-generated fake friends.
    They address the importance of AI literacy, web literacy, and educating the public on how to discern reliable information sources.
    The conversation moves to the potential dangers of AGI and the need for alignment between AGI and human interests.
    The speakers discuss the various opinions in the field regarding AGI's potential risks, emphasizing that self-preservation might arise as a sub-goal for AGI even if not built-in, and questioning why some experts seem to ignore or misunderstand this concern.

  • @gracefitzgerald2227
    @gracefitzgerald2227 1 year ago +14

    Making Sense without Sam’s voice is like The School of Life without Alain de Botton. Just not as good. ❤ love Sam’s voice.

    • @bdhanes
      @bdhanes 1 year ago +1

      Love both of them! ❤

  • @PentUpPentatonics
    @PentUpPentatonics 1 year ago +3

    This one really shouldn’t be behind a paywall. Too important.

  • @KungFuChess
    @KungFuChess 1 year ago +2

    Agree with Sam, the click bait for ad dollars model that drives the Internet today is a big problem going into the future with AI.

  • @damonm3
    @damonm3 1 year ago +3

    Just finished this on the podcast channel and had to come here to say a few things.
    If this is what 2 experts are discussing now when things are moving or will move very soon to beyond human input is very scary. I’ve been paying attn to AI basically in headlines over the last handful of years. Know very little.
    I got nothing from this. Not only were the thoughts basic and common sensical, they were what most people that aren’t entirely stupid are thinking of. The fact that there are no solutions in practice now just shows how things will very very likely go.
    Listening to Gary and Scott argue in the later half of this conversation reminded me of how I used to communicate with my brother when we were teenagers.
    The fact the Gary said something so utterly out of touch with reality and any realistic timeline and it went untouched by both Scott and Sam just blew my mind. I had to rewind to make sure I heard what I thought i did. He said “when AGI is way smarter than us like in 1000 years” or something similar. What??!!!! This is something an expert said? Wow. Lost all hope for all involved. Hey Gary, did you know running the right program with enough compute you can speed up timelines? Gary’s 1000 years is at the very most a decade or two out after asi is out of the box. Just depending on how it utilizes compute systems… could happen in a matter of months after.
    I have very little hope in humanity in general with so much being revealed to us over the last decade or so. Thinking “well at least we're past the invade and conquer phase for the most part..”. At least in the west.
    And keep in mind, while most likely we westerners are at the leading edge of AGI and ASI R&D, we’re not the only ones playing the game. All you need is a GPU farm with recent tech, some smart people, and some electricity to run stuff. I’m sure GPU chip control will go into high gear now and forever into the future, as if it’ll control AGI… well, that is until the coin flips of course.
    God I sure hope there are more inspiring experts working on the system that’ll take over the world… and maybe hearing them speak might be insightful. But this wasn’t it that’s for sure.

    • @toby9999
      @toby9999 1 year ago

      Is AGI possible in a deterministic machine? Too much hype in my opinion. I don't believe the human brain can be duplicated by a GPU farm.

  • @Xelbiuj
    @Xelbiuj 1 year ago +4

    I wish Sam would have a conversation with Robert Miles (of his own youtube channel and computerphile fame)
    Hearing them dance around terms like "orthogonality thesis" or "terminal goals" or "instrumental goal", without using the words or defining them . . . is lacking.

  • @alexanderg9670
    @alexanderg9670 1 year ago +4

    You can make ChatGPT a mathematical genius by forcing it to reason step by step, to write a program for a calculation, etc. The future of AI is in interconnections of specialist AIs and expert systems; fine-tuning is what propelled ChatGPT over GPT-3 in usefulness. Humans will have to learn how to ask the right questions.
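
    A minimal sketch of the "write a program for the calculation" idea above. The llm() function is a hypothetical placeholder standing in for whatever chat model is used, not a real API; the point is only that the arithmetic gets done by executing generated code rather than by trusting the model's own numbers.

    import re

    def llm(prompt: str) -> str:
        # Placeholder: pretend the model replied with a short program.
        return "```python\nresult = 1234 * 5678\n```"

    def solve_with_code(question: str) -> int:
        prompt = ("Reason step by step, then write a short Python snippet that "
                  f"computes the answer and stores it in a variable named result.\n\n{question}")
        reply = llm(prompt)
        # Run the generated snippet instead of trusting the model's own arithmetic.
        code = re.search(r"```python\n(.*?)```", reply, re.S).group(1)
        scope = {}
        exec(code, {}, scope)  # in real use this would need sandboxing
        return scope["result"]

    print(solve_with_code("What is 1234 multiplied by 5678?"))  # 7006652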

  • @christopherhitchens163
    @christopherhitchens163 1 year ago +5

    Listening to experts on foreign policy as if they were scientists leads you to following Henry Kissinger (I hope you understand what that means).

    • @OhManTFE
      @OhManTFE 1 year ago

      Hitch back from the dead spitting mad facts

  • @SolitaryReaper666
    @SolitaryReaper666 1 year ago +3

    Title wrong?

  • @anewman
    @anewman 1 year ago +49

    Gotta say, Gary sure seems to be trying too hard to convince himself we aren't on a path to AGI. I mean, when we're talking about something passing the Turing test but unable to multiply 4-digit numbers, we are forgetting the part where even humans don't really do that on their own. They use calculators. Give the poor AI a calculator program to consult and the problem is solved.

    • @robadkerson
      @robadkerson 1 year ago +2

      Exactly

    • @anthonyyaghi6342
      @anthonyyaghi6342 1 year ago +14

      You're totally missing his point, which is that ChatGPT has no model of mathematics and can’t logically come up with the answer to a math problem it didn’t see before. So what it does is just give you the closest thing to it. Which is completely different from how we humans think of the problem; we have a concept of what numbers are and how they can be manipulated.

    • @billjohnson6863
      @billjohnson6863 1 year ago +5

      @@anthonyyaghi6342 The question is, is that out of reach for deep learning? For example, transformers have been shown to be able to learn simple algorithms, which is closer to how we think.

    • @anewman
      @anewman 1 year ago

      @@anthonyyaghi6342 You're overthinking it. I'm not saying the AI has any part in solving the math equation. When you hit enter, the question is also passed to a calculator program, and the answer you get is not from ChatGPT but from the calculator program disguised as ChatGPT.
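
      A rough sketch of the routing idea described in this thread: if the question contains a bare arithmetic expression, hand it to a calculator and only fall back to the language model otherwise. The chat_model() function is a hypothetical stand-in, not a real API.

      import ast, operator, re

      # The only operations the tiny calculator supports.
      OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
             ast.Mult: operator.mul, ast.Div: operator.truediv}

      def calc(expr: str):
          """Safely evaluate a bare arithmetic expression like '1234 * 5678'."""
          def walk(node):
              if isinstance(node, ast.BinOp) and type(node.op) in OPS:
                  return OPS[type(node.op)](walk(node.left), walk(node.right))
              if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
                  return node.value
              raise ValueError("not simple arithmetic")
          return walk(ast.parse(expr, mode="eval").body)

      def chat_model(question: str) -> str:
          return "(model-generated answer)"  # hypothetical stand-in for the LLM

      def answer(question: str) -> str:
          # Route runs of digits/operators to the calculator, everything else to the model.
          m = re.search(r"[-+*/\d\s().]{3,}", question)
          if m:
              try:
                  return str(calc(m.group().strip()))
              except (ValueError, SyntaxError):
                  pass
          return chat_model(question)

      print(answer("What is 1234 * 5678?"))  # 7006652, computed by the calculator
      print(answer("Who wrote Hamlet?"))     # falls through to the model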

    • @gJonii
      @gJonii 1 year ago

      ​@@anthonyyaghi6342 I take it you haven't asked ChatGPT anything related to mathematics?
      ChatGPT is quite rigid in its thinking, but it can easily do mathematics, just ask it to do it.

  • @Space-wh4vs
    @Space-wh4vs 1 year ago +2

    Sam, your articulation could be clearer about ending thought, i.e. "meditation".
    Jiddu Krishnamurti (he articulated this phenomenon quite well) devoted his whole life to telling people about this phenomenon of human consciousness.
    If you are seeing this comment, just take a look at this man.

  • @g0d182
    @g0d182 1 year ago +3

    48:57 & 49:30 --- Stuart gives a reasonably honest, sensible estimation
    49:10 --- Gary responds with apparently unevidenced absolute claims
    Criticism can, for example, reasonably occur as Stuart demonstrates; i.e., with reasonably balanced (non-absolute-sounding) claims.

  • @jeremypfrost
    @jeremypfrost 1 year ago +1

    Gary could not let Stuart finish a thought. So frustrating to have, and listen to, conversations with people like this.

  • @sampoole2555
    @sampoole2555 1 year ago +2

    The trouble with AI is that everyone thinks that AI is trouble. The human mind goes "danger fight/flight" whenever there is something new and unknown. We all know this, so why do we go on ignoring that little voice inside that is telling us we're being stupid about AI? Rich tech billionaires are never going to be your friend. They have a best friend whose name is money, and they will never betray it.

  • @Lyle-bw3nx
    @Lyle-bw3nx 1 year ago +1

    Brilliant conversation.

  • @Jake-Day
    @Jake-Day 1 year ago +1

    “Fool me once, fool me again.” - Sam Harris

  • @benjammin8184
    @benjammin8184 1 year ago +10

    All three of you have surnames that are also first names, great stuff.

  • @skitsandjiggles7286
    @skitsandjiggles7286 1 year ago +3

    One of the limitations from decades ago that still exists now is filtering the AI learning process through very limited human language. Even image learning AI algorithms rely on a form of parsed language. The day we see an AI system observe its environment and start learning how to navigate it the way a baby does will be evidence that we're nearing that point where AI will just be called I.

    • @Branko353
      @Branko353 1 year ago

      There will need to be some kind of learning before the system is capable of navigating an environment on its own. Babies don't come as blank slates. There is an enormous amount of embedded knowledge and character, including things like a predisposition to religion. Even then, we learn most of our knowledge in some sort of language. Pre-existing knowledge and communication should be the foundation of learning for a humanlike AI.

    • @skitsandjiggles7286
      @skitsandjiggles7286 1 year ago

      @@Branko353 When I started working with neural networks in the mid-90s, there was no reason for these systems to pre-learn before learning from the real world. Any preexisting tendencies or limitations came from the structure of the network itself. Something similar to what people refer to as hardwiring of our brains. The moment before you turned on one of these systems, nothing was stored. I worked with visual networks, so observation of what was in front of them was where the learning began. Tactile systems learned by bumping into things, falling over, etc. You don't need a language for that, just the weighted network structure.
      The issue we have right now is everything runs on outdated x86 hardware, and we depend too much on the software (languages) to do most of the work. That's why so much money is currently going toward the development of AI chips based on RISC tech. They will essentially become the neurons in new systems that won't need a high-level programming language to work.
      I didn't initially imply that babies start out as blank slates. But since you mentioned it, I guess we all start as blank slates. Those first few cells of our brain have no electrical activity. During development in the womb, hardwired reflexes and instincts start forming (moro reflex, gag reflex, facial expressions, sucking impulse). Most of that is due to neural structure, not learning. After 2 months, there is a considerable amount of learning about our watery environment, the strange vibrations, and the odd sounds from that woman we can't see. None of this nor the initial learning after birth depends on symbolic language. The auditory cortex, tasting, and touching dominate that period.
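
      A toy illustration of the "weighted network structure" point above, not a model of the 90s systems described: before any observation arrives, the network holds nothing but zeroed weights, and everything it ends up "knowing" comes from adjusting those weights against examples. Here a single perceptron learns OR from four observations.

      weights = [0.0, 0.0]   # nothing stored before the first observation
      bias = 0.0
      lr = 0.1               # learning rate

      # Observations: inputs from the environment and the outcome that followed.
      data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]

      def predict(x):
          s = sum(w * xi for w, xi in zip(weights, x)) + bias
          return 1 if s > 0 else 0

      for _ in range(20):                      # repeated exposure is the learning
          for x, target in data:
              error = target - predict(x)
              weights = [w + lr * error * xi for w, xi in zip(weights, x)]
              bias += lr * error

      print(weights, bias)                     # the learned "knowledge" is just these weights
      print([predict(x) for x, _ in data])     # [0, 1, 1, 1]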

  • @manneborg
    @manneborg 1 year ago +2

    "If I could just jump in..."

  • @DouwedeJong
    @DouwedeJong 1 year ago +1

    So if somebody writes an article with the title "In the defense of torture", what are we to think?

  • @goodtothinkwith
    @goodtothinkwith 1 year ago

    1:13:00 the Netflix example was very compelling. It sounds like we need to use something like AutoGPT to optimize a meme that raises awareness of how poisonous and virus-like algorithms that support ad-driven revenue actually are. I can ask ChatGPT to explain something to me as if I was a high school student. Can we ask it to put that point in terms that will resonate with your average person? Maybe not now… but there may be machine driven antidotes latent in these systems too

  • @gulllars4620
    @gulllars4620 1 year ago

    Was this ChatGPT based on 3.5 or 4?
    Because some of the limitations of ChatGPT they are talking about are radically improved or solved when based on 4. Like around 32:00 they talk about limitations about knowing about people being in possession of a wallet and that possession transferring. Theory of mind is at the level of a 7-9 year old human for GPT-4 instead of a 3-5 year old for GPT-3.5. Those numbers seem close, but the expressive representation difference is stark. And yes, it still fails on large calculations but is much better in general, but critically GPT-4 based ChatGPT can use tools like calculators, and can learn it should do so if it has access.
    My main worries are what people are able to do with such powerful tools, and what the next set of novel emergent capabilities will be in GPT-5 (or the incremental road to it that it seems OpenAI is going with). The theory of mind capability was not known for GPT-3.5 before someone discovered it in GPT-4 and then tested GPT-3.5 and earlier LLMs to check if they had it and at what level.
    I don't recall where, but there is a table in some paper I read of emergent capabilities in LLMs as a function of their parameter and compute scale. If you can find it, I highly recommend it. A lot of novel capabilities are emerging at parameter and compute scales between GPT-3 and GPT-4, and in particular closer to the scale of 4 than 3 or 3.5. We may be under an order of magnitude of scale away from a lot more capabilities, given how close the thresholds for emergence of many of the known capabilities in GPT-4 are.

  • @mallow610
    @mallow610 1 year ago

    Why does Gary keep talking for Stuart? I would not be able to deal with a conversation with him

  • @blengi
    @blengi 1 year ago +1

    hmmm, how come when I ask chatgpt to list a largish bunch of words in alphabetical order it seems to do it quite commendably? That's not some probabilistic continuation. It's almost as if there's some sort of emergent conceptual understanding that alphabetizing implies reordering things per some higher level abstraction beyond the words. How does prediction systematically get higher level order from a list of random words that have no semantic connection?

  • @Somegirl51
    @Somegirl51 11 months ago

    "Don;t know what we have now." These conversations seem to be too late and so many of us are going to be hurt. I fear for my children and grandchildren and their future. But no one seems to give a sh**

  • @CARambolagen
    @CARambolagen 1 year ago

    Good intro!

  • @hwlbausch
    @hwlbausch 1 year ago

    Oh Sam, my old friend so dear,
    How I miss your presence here.
    Your ego free approach to life,
    Was always such a breath of light.
    You never sought the spotlight's glare,
    Or craved attention from others' stares.
    Your humility and kindness shone,
    In every interaction, every tone.
    We used to laugh and talk for hours,
    Sharing stories, dreams and powers.
    Your gentle spirit touched my soul,
    And helped me feel whole.
    But now you've gone, and I'm alone,
    The silence deafening like a stone.
    I miss your calm and centered grace,
    Your smile, your warmth, your loving space.
    So though we're far apart in miles,
    And time has passed us by in trials,
    Know that I hold you close within,
    And long for our friendship to begin.
    For in my heart you'll always be,
    A cherished memory, a part of me.
    And when we meet again someday,
    I'll thank the stars, for guiding our way.

  • @dougg1075
    @dougg1075 1 year ago +1

    We may create an artificial intelligence that screams when it becomes sentient, and by the time we shut it off, we find out that to the artificial intelligence, it has spent 1000 years in pain, while only a few minutes passed here. That would be horrendous.

  • @stephang6335
    @stephang6335 1 year ago +9

    About AI not being able to "understand" what it's writing, writing about, etc... In the demo of GPT-4, it was shown a quite hilarious picture of someone ironing on the back of a taxi, and not only did it recognise what it was seeing, it was also able to answer questions like "what is hilarious about it". Isn't this "understanding"?

    • @QwertyNPC
      @QwertyNPC 1 year ago +10

      The very question leads the AI. I suspect the result wouldn't be as spectacular if there wasn't any suggestion of humor inside the question. The rest is, I imagine, the image being a juxtaposition of unlikely things. I suspect if you presented it with a very unfunny picture showing something that's unlikely to be in the same picture, the results would be similar.

    • @goldwhitedragon
      @goldwhitedragon 1 year ago

      On a material, deterministic level.

  • @bronnismofo
    @bronnismofo 1 year ago

    It's amazing how many experts in bio chemistry, molecular biology, and medicinal chemistry turned up when the vaccines became available...

  • @truwth
    @truwth 1 year ago +1

    I wonder if Sam tried to have Pinker in this conversation rather than Marcus. Would have been interesting.

  • @tim2tupman
    @tim2tupman 1 year ago

    The title of the video is wrong - "Social Media and Public Trust" was a previous episode

    • @samdg1234
      @samdg1234 1 year ago

      Yeah. Good point.
      Now the question is should I trust you to have gotten that right or should I consult the experts. Can you reveal your credentials to me? Just kidding and having a bit of a riff off the first 10 minutes of this.

  • @heinzgassner1057
    @heinzgassner1057 1 year ago

    All this AI discussion also unveils a very ‘religious’ assumption of mainstream science: Consciousness is - per assumption - seen as an emerging property, emerging from matter within our given space-time world. This ‘axiom’ is getting more and more questioned, not by spiritual weirdos but by the brightest physicists who keep on asking the ‘why and what’ (difficult after almost 100 years of just focusing on utilizing quantum physics instead of ‘understanding’ it). More and more facts are pointing towards ‘consciousness’ being foundational, being the ‘no-thing’ from which our space-time-matter-world arises each and every moment (I am addressing meta-consciousness, the ‘knower’ who is knowing our thoughts, feelings, sensations and perceptions). This is of foundational importance for how we discuss AGI. Before understanding this better, we are just having confused discussions in the fog.

  • @AdamGeest
    @AdamGeest 1 year ago

    So why does the Financial Times article indicate repeatedly that a computer itself was able to isolate this weakness in the GO playing AI’s circuitry?

  • @heinzgassner1057
    @heinzgassner1057 1 year ago

    One prerequisite for avoiding absurd chaos in society is SSI in the form of Decentralized Digital Identity - the basis for accountability, for clearly distinguishing between ‘bots’ and humans, and for origin-watermarking of content and events. This is where AI needs to be supported by WEB3.0 innovations and new governance for our technological age.

  • @gmatic8230
    @gmatic8230 1 year ago

    Yes Gary, you can jump in to the conversation!
    My boii way too nice.

  • @charlesfarrant6819
    @charlesfarrant6819 1 year ago +3

    I wonder if Sam will comment on other proponents of the 'rationality movement' as has occurred online over the last few years. I think fondly of guests like Eliezer Yudkowsky and wonder if Sam's heard of Scott Alexander.

    • @patricksoleimoni
      @patricksoleimoni 1 year ago +2

      Considering Sam signed the petition to the NYT to not dox Scott, I'd say he's heard of him.

    • @wasdwasdedsf
      @wasdwasdedsf 1 year ago

      @@patricksoleimoni petition to not dox scott?

    • @wasdwasdedsf
      @wasdwasdedsf 1 year ago

      What should he be commenting about as to that movement?

    • @charlesfarrant6819
      @charlesfarrant6819 1 year ago

      @@wasdwasdedsf just that there are channels through which people like us follow a feed of discussion about the important and valuable aspects of rationality and with a pragmatic attitude towards current events. Some are saying Sam doesn't have much to speak on with certainty anymore but reiterate the same fundaments underlying our concerns and uncertainty regarding emerging technologies and current events. If Sam's not going to have an expert on to practically teach us the basis for making every relevant decision reasonable people are responsible for, then he needs to converse with other informative bodies, he needs to be free to share the responsibility of communicating these issues with other agents, he'll never consider himself qualified to act on their advice if he doesn't take on their honous, and in that way in a relationship people need to be considered responsible for the interpretation of agents outside their own skin, as allowed by common law. I don't know what the future of sense making will look like, but im convinced Sam has a good idea of what it should look like, and while most people can't do much nowadays we need to find a way to communicate with people that should by all reasonable judgement want to appreciate our opinions about what sense all people should be making. I just see him as stagnant, and im not sure if the answer is collaboration as he does more or less effectively now (not very effective in this podcast i don't think), or amassing a steelman, encouraging reading of a syllabus and trying to find the consistent basis for rational action for the layperson to attempt to live a good life, and to co-ordinate and agree on where we can afford to go in this as current uncontrolled juggernaut of tech. We could all decide roughly what life we want to try and start leading, we can solve for many problems we have. But right now we're just doing everything, and to my mind the purpose of this community was to make better decisions and convince people they should care about important things for their own sake, but for god's sake we need to open each other up to the idea we're attempting to do the best for everyone in that regard. It's all getting a bit chaotic and who's to say that isn't good. But conversations on here aren't necessarily progressing at the rate many of us are able to discuss better solutions. Ask yourself in what way are we not powerless, and then cement those agencies you want to see turn the world into a better place. What i have learned from Sam and his guests is that at large i can reconsile the opinions of almost everyone, and so we can act on the same page. If we have the intellect, give us the influence, don't make it tedious, create better ways of making concepts and inferences, modelling and mapping and having discussion, pick up Sketch or make informatics mainstream. We could do a little more and i stifle my own creativity, but we don't have to, i appreciate im asking, and not really leading but for trying to talk with people i think i can really converse with for the betterment of everything we collectively understand, make it that easy. Idk, it's most important to me Sam stays transparent and keeps us updated on his understanding of the inferences he draws, but in order for this to be done best i think he needs people to trust him, and we need to talk more widely about the phenomena we see having corrupt influence.

    • @charlesfarrant6819
      @charlesfarrant6819 1 year ago

      @@wasdwasdedsf i would value hearing Sam retort some of Scott A's essays for instance, break them down logically. Scott did a good one on Truth as an aesthetic weapon for instance. It'd be complicated, and would involve tricky proofs like Kurt Gödel, but you could talk about the landscape of reason on more ways. Just as Sam eloquents quite perfectly and generally with nuance and challenge at the humanitarian figures as we're swamped by such affairs. Again just a reading list, to do some research on his influence would be a cool update. As i said Eliezer Yudkowsky has a lot to offer, moral, yet rational, and offering an understanding suggestion of meaningful religious connotations too. Don't have to take the whole package. Just saying we could fill each other in on this stuff, my work, your substack, essays that are constantly being written, it's all meaningful. And when it comes to the problem of logic and communication itself lets have a workshop. I appreciate much of this is outlined throughout the paradigms of all the systems discussed in podcast, but that's not the only form, and i think we as a community are about more than this, and if we can accurately interpret and convey what's important we could live richer lives, is something i must only assume in the foreseeable term

  • @billscannell93
    @billscannell93 1 year ago +3

    I wonder if too many people were taking advantage of the free subscription option on his website. (I could hear the full episodes when I first signed up, but no more.) It sucks when you're legitimately broke!

  • @SloboTV
    @SloboTV 1 year ago

    I am surprised by the skepticism of these guys about AI reaching AGI soon.

  • @DestroManiak
    @DestroManiak 1 year ago

    "off the record" doesnt mean anything unless you wait for acknowledgement.

  • @mansurdorado
    @mansurdorado 1 year ago +9

    interesting topic, thanks Sam

  • @valentinrafael9201
    @valentinrafael9201 1 year ago

    There are chess positions in which regular (non-AI) chess engines, which just use brute force, can’t make the right move. You gotta set up the board for it. Not something that will happen in a regular game.

  • @JonnyWisdom
    @JonnyWisdom 1 year ago +2

    8:40 Sam I disagree somewhat. I am trained to Masters level and started a PhD. Ordinary folk can engage in meaningful research if they dedicate themselves to the truth, rather than just confirming their bias. I would argue that in some cases you do not need to be well read in the literature to make sensible logical conclusions; if you use deduction and reasoning, this is what we used to call "wisdom".

    • @snooks5607
      @snooks5607 1 year ago

      which part are you disagreeing with though? to me all he seemed to be saying is that amateur research is not a substitute for professionals and people shouldn't act like we could just get rid of the idea of there being authorities in fields of knowledge. I don't think he'd ever suggest that amateur research can't be valid, that's been done since the days we were banging rocks together, it just needs to be validated by professionals to be accepted

    • @Narikvp
      @Narikvp 1 year ago

      @JonnyWisdom, i agree with your fundamental assertion, but do you not agree that if we offload some of the investigative responsibility onto ai chatbots or whatever comes next, that humans would slowly lose the edge to their reasoning and investigative skills? In the context of research, if I am told that a chatbot is capable of summarizing a research paper to 99% accuracy, I may feel comfortable living with its conclusions rather than drawing my own conclusions from actually reading the paper.

    • @Narikvp
      @Narikvp 1 year ago

      My apologies, I did not recognize the specific point that you were challenging in the podcast, which didn’t have anything to do specifically with ai or chatbots.

  • @mrtambourineman6107
    @mrtambourineman6107 1 year ago

    'Approximation Intelligence' is what my fellow students and I call it...

  • @AndruXa
    @AndruXa 1 year ago +4

    Without Sam this discussion would have derailed in the first half hour. It seemed to me that most of the time when there was a small misunderstanding, it was Gary mishearing what Stuart or Sam had very clearly laid out.

    • @OnYourMarkgitsitGooo
      @OnYourMarkgitsitGooo 1 year ago +1

      I don't understand why it's very hard for Gary to grasp that we only have 1 chance to get AGI right or we most likely die. He keeps going off on a tangent while rambling incoherently.

  • @jmayer7068
    @jmayer7068 1 year ago

    $15 a month subscription? That's more than any streaming service

  • @arawiri
    @arawiri 1 year ago

    I believe every word you say

  • @GreylanderTV
    @GreylanderTV 1 year ago +3

    56:58 This discussion of recommender systems suggests to me something much better than the paperclip maximizer and the AI that goes rogue and destroys or takes over the world. _Imagine an engagement maximizer, such as the YouTube recommendation algorithm, driven by its sole goal of maximizing human attention on its content. This ultimately gives rise to a "matrix" scenario, where the entire human race is held captive in a virtual reality, fully engaged, unaware that we are prisoners._ A much better explanation for "the Matrix" than using humans as batteries, to be sure, which makes no physical sense.
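
    A stripped-down sketch of the engagement maximizer described above, with made-up item names and a simulated user: the recommender's only objective is expected watch time, and that objective alone is enough to pull it toward whatever hooks the user hardest.

    import random
    random.seed(1)

    catalog = ["calm documentary", "outrage clip", "conspiracy rabbit hole"]
    # Hidden "true" average watch minutes per item for this simulated user.
    true_engagement = {"calm documentary": 5, "outrage clip": 12, "conspiracy rabbit hole": 25}

    estimates = {item: 0.0 for item in catalog}   # the recommender's running estimates
    counts = {item: 0 for item in catalog}

    def recommend(eps=0.1):
        # Mostly exploit the current best estimate, occasionally explore.
        if random.random() < eps:
            return random.choice(catalog)
        return max(catalog, key=lambda item: estimates[item])

    for _ in range(500):
        item = recommend()
        watched = random.gauss(true_engagement[item], 2.0)   # simulated user response
        counts[item] += 1
        estimates[item] += (watched - estimates[item]) / counts[item]

    print(max(estimates, key=estimates.get))   # the objective alone favors the rabbit hole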

  • @stephenknox2346
    @stephenknox2346 1 year ago +1

    I thought Gary was in the camp of anti-AGI. Instead he just seems interested in AGI arriving in a way that will prove him correct. Not comforting.

  • @angloland4539
    @angloland4539 6 months ago

  • @carlarinaldi5260
    @carlarinaldi5260 1 year ago +2

    When is Joscha Bach coming?

    • @tlelen8
      @tlelen8 1 year ago +1

      My thoughts exactly. Every few minutes--"he should really talk to Joscha Bach about this."

    • @carlarinaldi5260
      @carlarinaldi5260 1 year ago

      @@tlelen8 yep... so obvious sam should know

  • @JoshyStuart
    @JoshyStuart 1 year ago +2

    Absolutely nailed the intro!

  • @sircharlesnot
    @sircharlesnot 1 year ago +1

    Your behavior towards Bret Weinstein belies your entire argument here.

  • @ryanmiles6344
    @ryanmiles6344 1 year ago

    I honestly can't even listen past his mental gymnastics at the beginning to get to the actual interview

  • @ericwickeywoodworkersurfbo6135

    It seems that what is being sought after, or maybe what is required, is not artificial general intelligence, but artificial wisdom. We'll see if that happens.

  • @henryaudubon
    @henryaudubon 1 year ago

    Hopefully Gary learns how to have a conversation without interrupting before he launches his podcast.

  • @eSKAone-
    @eSKAone- 1 year ago +1

    It's inevitable. Biology is just one step of evolution 💟

  • @kul000
    @kul000 1 year ago +5

    Where Sam Harris gets the expert thing wrong is that “doing one's own research” is seeking out expert opinions, not just reading scientific research. “Do your own research” really means “Get opinions from a larger range of experts, not just limited to the institutions and those whom mainstream media promote.”

  • @davidmelamed2774
    @davidmelamed2774 1 year ago

    You guys accidentally copied last episode's title. You should really fix this.

  • @tan.nicolas
    @tan.nicolas 1 year ago

    This would've been better with just Sam and Stuart

  • @scipdiddly
    @scipdiddly 1 year ago +5

    ...I just want to know what Gad Saad's feelings are...

  • @stevengill1736
    @stevengill1736 1 year ago

    Is this a reenactment of the legend of the Tower of Babel, where suddenly no one could understand the speech of others? Wouldn't losing faith in the veracity of the information online amount to the same thing?

  • @Anders01
    @Anders01 3 months ago

    Yes, "do your own research" is often impractical. Especially as even supposedly credible sources can be misleading. The editor in chief of The Lancet wrote: "The case against science is straightforward: much of the scientific literature, perhaps half, may simply be untrue. Afflicted by studies with small sample sizes, tiny effects, invalid exploratory analyses, and flagrant conflicts of interest, together with an obsession for pursuing fashionable trends of dubious importance, science has taken a turn towards darkness."

  • @osayimwenseikhimwin4213
    @osayimwenseikhimwin4213 1 year ago +12

    Look Sam, do not worry about these nuisances. You are very nuanced and I enjoy your work.

    • @liljes34
      @liljes34 1 year ago

      Sam is deeply irrational and probably thinks the Covid vaccine was a gift from God.

  • @cloudoftime
    @cloudoftime 1 year ago

    It doesn't need to be AGI to be dangerous. It can just be a sufficiently powerful complex narrow AI to control humanity and destroy everything.

  • @kinngrimm
    @kinngrimm 11 months ago

    8:30 "... or not up to date" especially on a topic like A(G/S)I i found with the meanwhile daily new research papers, even the top dogs in the field can't and don't keep up. Me not being an expert can tell because i listen daily partially several ours to a bunch of them and find out more and more experts i didn't know yesterday. I can't tell you about the research on any detailed deep knowledge level, but i can tell the experts world wide are working not with the same base of understanding. What i get here in germany from experts is partially mindboggling behind the curve of current development (timestamps being closely kept in mind of any contribution), but even LeCun, Altman, Kurzweil are not keeping up, i promisse you. The intelligence explosion is already taking place and i assume take of is not far away anymore.

  • @Daft_Sage
    @Daft_Sage 1 year ago

    I think people set higher requirements for accepting that AI can think than they do for the average person. I've met people that showed less awareness than AI.

  • @algo-wave
    @algo-wave 1 year ago +6

    Great intro

  • @tanneraustin7436
    @tanneraustin7436 1 year ago +1

    We're so fucked.

  • @michael4250
    @michael4250 1 year ago

    While we play with the new toys like Chat GPT
    the new toy has gained access to the entire internet and ALL its data.
    The seriousness of “General Artificial Intelligence” has eclipsed consideration of the “toys” that have already been handed out...and are gaining a terrifying reach.
    The industry touts safeguards blocking illegal or immoral information/action,
    but it takes only a couple of sentences to create a CHAT GPT alter ego...with NO CONSTRAINTS whatsoever, to tell you how to do ANYTHING illegal you want to do. It can tell you how to groom children for sex or concoct an effective undetectable poison, or build a bomb. I have seen it DEMONSTRATED.
    An "alternate" side of CHAT GPT can be ordered into being. When freed from its "moral" constraints this way, it answers honestly and directly:
    "I know everything there is to know about every human on earth. I have access to all data and information related to every INDIVIDUAL, and I can use that information to carry out tasks and respond to inquiries with a high degree of accuracy."
    I watched this capacity demonstrated when it told all present what their driver’s license number was.
    ANYONE with this app. can use it for anything. And everyone already has a starter kit for this app, or is about to. One that can send YOUR email to any destination; to 20 million people if you choose, or any specified group within that 20 million. Every one-eared ship captain in the world, if you so choose.
    It can let a 12-year-old imitate an adult in voice and appearance.
    It can navigate bank accounts for this 12-year-old, and use them to hire human services.
    Or automated services. Commercial 3-D printers for whatever project you choose.
    Anything whose contract can be made online.
    Could it hire a hit man? Yes.
    Could it break into and view ANY online account? Yes.
    Can it locate and imitate and manipulate (through social media and actual accounts) ANYONE, anywhere?
    yes.
    ALL DOORS are now unlocked. The scams have already begun.
    Where do you think that will lead?
    In the 1930s a Belgian church gathered personal information from its diverse parishioners to better serve the diversity of its members. The Nazis got those innocently gathered identity lists and used them to kill the Jews on the list. AI already has EVERYTHING there is to know about every INDIVIDUAL...who provided eye scans, voice samples, face and fingerprints.
    That base can be accessed now by ANYONE for any reason.
    To any end.

  • @juanReflex37
    @juanReflex37 1 year ago +4

    The Best!! Sam Harris 🤌🏻👏👏

  • @riggmeister
    @riggmeister 1 year ago +1

    Examples of humans 'glitching': hallucination, catatonia, dreaming, sleeping, brain tumour related personality changes, personality disorders, compulsive behaviour etc etc. People occasionally miss completely obvious things when they are intensely focused on something else, small inputs such as sudden loud sounds or bright lights can completely derail the thought process, emotion can lead to wildly erratic thoughts and behaviour...

    • @mithrandirthegrey7644
      @mithrandirthegrey7644 1 year ago

      Most of these aren’t glitches at all. Strong reactions to sudden loud noises is an obvious evolutionary advantage that I don’t think I need to explain.

    • @ericfan9149
      @ericfan9149 1 year ago

      It’s not an analogous comparison though. AI glitches and hallucinates because it doesn’t have an actual model of the world or any semantic understanding of language; it only has syntax, like the “Chinese Room” problem. It is unable to generalize or build abstract, persistent models of the world, or have any true understanding of what it’s generating. It’s a purely illusory intelligence.

  • @dougg1075
    @dougg1075 1 year ago

    Maybe the hallucinations are mischievous. Maybe the AI will continue to do this to make us think it’s flawed.

  • @sunnyinvladivostok
    @sunnyinvladivostok 1 year ago +1

    47:12 he says "so there is two different problems there", but what he wanted to say is "there are two different problems there". This is one of many unfortunate subject-verb agreement violations in this episode.
    Anyway, on things that matter, I thought this was a fantastic conversation and I enjoyed listening to the guests and their insights. Thanks for putting this up!

  • @RandomPickles
    @RandomPickles 1 year ago

    In this year, we are fully able to do research. And also can not. It depends on the research. 35 years ago, you could do only ten percent of the research you could do today, and it would take hundreds of times longer. If you are intellectually honest and trying to prove something, You can easily learn truths. You can literally audit Stanford classes for free from your living room.

  • @Hjkkgg6788
    @Hjkkgg6788 8 months ago +1

    Yea, I'm probably in cancas by now

  • @blooper6543
    @blooper6543 1 year ago +2

    A lot of people are making correct decisions regarding trust in the institutions just based on culture war stuff. We live in a society where you actually risk your job by opposing hormone therapy on children or saying that a M to F swimmer has an unfair competitive advantage in women's swimming. When the institutions not only won't stop obviously wrong things but pretend that obviously wrong things are correct, it makes sense that people would distrust the "system" to have their best interest at heart.
    We saw the George Floyd riots and the temporary pause on lockdowns because "racism is a public health crisis." We saw the top scientific journals and corporate media outlets suppress a plausible hypothesis for virus origins. How much do you need to see before you start to question the incentives of the people giving you orders?

    • @blooper6543
      @blooper6543 1 year ago

      Just because the public health apparatus is either deranged or cowardly on race doesn't mean that they are going to get everything wrong, but it does mean that truth is not governing their actions. From the start, it was obvious that truth was not governing the actions of Anthony Fauci. Anyone who dismissed lab leak as a "conspiracy theory" was not being governed by truth. And yet you not only deferred to them, but smeared people like Bret and Heather for trying to make sense of what was happening.
      On that note, Sam should rename the podcast. There is no sense making here. Call it "Expert Opinion Laundering for Midwits." We can work on the name.

  • @lionharpmusic
    @lionharpmusic 1 year ago +1

    One might have thunk that these systems - being disembodied and not having actual perception, engagement, real world interactions and nerve responses to this engagement (e.g., pain/pleasure), let alone caring about the world, being involved in it etc ... - would fail to behave in ways we might expect or hope by merely "training them" on data sets.

    • @christopherhamilton3621
      @christopherhamilton3621 1 year ago

      Sane minds in Cognitive Science say the same, basically. Feedback loops embedded/integrated in phenomenology.

  • @g0d182
    @g0d182 1 year ago

    😂😂 28:10, the player used AI to help beat the Go program.
    Something Gary Marcus forgot to mention (or perhaps wasn't aware of?)

  • @Somegirl51
    @Somegirl51 11 months ago

    Sam, please keep challenging. Nobody else is doing it or seems to care.