Joscha Bach and Connor Leahy [HQ VERSION]

  • Published 14 Jun 2024
  • Support us! / mlst
    MLST Discord: / discord
    Twitter: / mlstreettalk
    Sorry about the quality on the live one guys, this should be a big improvement!
    Transcript and longer summary: docs.google.com/document/d/1T...
    Pod: podcasters.spotify.com/pod/sh...
    Dr. Joscha Bach argued that general intelligence emerges from civilization, not individuals. Given our biological constraints, humans cannot achieve a high level of general intelligence on our own. Bach believes AGI may become integrated into all parts of the world, including human minds and bodies. He thinks a future where humans and AGI harmoniously coexist is possible if we develop a shared purpose and incentive to align. However, Bach is uncertain about how AI progress will unfold or which scenarios are most likely.
    Bach argued that global control and regulation of AI is unrealistic. While regulation may address some concerns, it cannot stop continued progress in AI. He believes individuals determine their own values, so "human values" cannot be formally specified and aligned across humanity. For Bach, the possibility of building beneficial AGI is exciting but much work is still needed to ensure a positive outcome.
    Connor Leahy believes we have more control over the future than the default outcome might suggest. With sufficient time and effort, humanity could develop the technology and coordination to build a beneficial AGI. However, the default outcome likely leads to an undesirable scenario if we do not actively work to build a better future. Leahy thinks finding values and priorities most humans endorse could help align AI, even if individuals disagree on some values.
    Leahy argued a future where humans and AGI harmoniously coexist is ideal but will require substantial work to achieve. While regulation faces challenges, it remains worth exploring. Leahy believes limits to progress in AI exist but we are unlikely to reach them before humanity is at risk. He worries even modestly superhuman intelligence could disrupt the status quo if misaligned with human values and priorities.
    Overall, Bach and Leahy expressed optimism about the possibility of building beneficial AGI but believe we must address risks and challenges proactively. They agreed substantial uncertainty remains around how AI will progress and what scenarios are most plausible. But developing a shared purpose between humans and AI, improving coordination and control, and finding human values to help guide progress could all improve the odds of a beneficial outcome. With openness to new ideas and willingness to consider multiple perspectives, continued discussions like this one could help ensure the future of AI is one that benefits and inspires humanity.
    TOC:
    00:00:00 - Introduction and Background
    00:02:54 - Different Perspectives on AGI
    00:13:59 - The Importance of AGI
    00:23:24 - Existential Risks and the Future of Humanity
    00:36:21 - Coherence and Coordination in Society
    00:40:53 - Possibilities and Future of AGI
    00:44:08 - Coherence and alignment
    01:08:32 - The role of values in AI alignment
    01:18:33 - The future of AGI and merging with AI
    01:22:14 - The limits of AI alignment
    01:23:06 - The scalability of intelligence
    01:26:15 - Closing statements and future prospects
  • Science & Technology

COMMENTS • 759

  • @MachineLearningStreetTalk
    @MachineLearningStreetTalk 1 year ago +52

    Top quotations:
    [00:05:42] Joscha Bach, "I expect that AGI is going to happen. And it's very likely going to happen in our lifetimes and quite possibly very soon."
    [00:06:08] Joscha Bach, "And to me, this is one of the most exciting developments in the history of philosophy and in the history of science."
    [00:14:16] Connor Leahy, "I don't want my mom to die. I don't want to die. I don't want Tim to die. I don't want Joscha to die."
    [00:16:42] Connor Leahy, "I don't care about, you know, the unfolding of thermodynamic and efficient processes. I care about my friends and my family and us having a great time and having fun and like doing cool things."
    [00:20:03] Connor Leahy, "I think AGI is the latest step in this chain. Like, a lot of these arguments about AGI could have been, and were, I think, applied to stuff like nuclear war."
    [00:37:45] Connor Leahy, "By default, humanity ends in ruin. This is the default outcome. This is the default outcome for any intelligent species that can't coordinate, that can't work together, that can't become coherent and can't coherently maximize their values, whatever those values might be."
    [00:56:16] Connor Leahy, "If your model is that people do not have control, that it is not possible to steer away from the global minimum, then yes, we are fucked and you should go spend time with your family until it's over."
    [00:09:10] Joscha Bach, "I feel that there is a number of groups that form opinions, and they often form these opinions based on the group dynamics and on the incentives."
    [00:10:16] Joscha Bach, "If the AI says bad things about other people or about the world, if the AI, for instance, says things that are racist or sexist, then this is going to have an extremely bad influence on society."
    [00:24:24] Joscha Bach, "Our civilization is not very coherent. Right? As a civilization, we are pretty much like an irresponsible child that is explorative and playful, but it does not have a species-level regard for the duty to our own survival or to life on earth."
    [00:38:23] Connor Leahy, "I think most people I feel like really underestimate how much of intelligence is not in the brain. It's in social networks. It's in the environment. It's in tools. It's in, you know, culture, memetics, etcetera. Like so much of what we consider human is not the brain."
    [00:59:08] Connor Leahy, "If the default outcome is you lose, the default outcome is entropy wins, some random AGI with some random mess of values that does not care about cosmopolitan life on earth, you know, wins over. Or maybe it's a bunch of them, and then they all, you know, coordinate, because they can actually coordinate, because they're actually coherent, because they are actually superintelligences. So they can coordinate against humanity. And then that's just it, and it's just game over forever."
    [01:14:42] Joscha Bach, "That's a philosophically extremely deep and important question, and I'm afraid we cannot do it justice. But very briefly, I think that free will is the representation of an agent that makes a decision for the first time under conditions of uncertainty."
    [01:26:21] Joscha Bach, "I think that we are at an absolutely fascinating point in history, and I'm very grateful to have been born at this point, so I can experience this, which to me is one of the most fascinating things that humanity can experience during its run."
    [01:29:01] Connor Leahy, "I think we can get to the outcomes that Joscha likes because those are outcomes I like too. You know, like living side by side with a beautiful, nice AGI. Wow. That would be awesome. Unfortunately, I just think you don't get there by default. This is not the default trajectory. This is a very narrow trajectory for the universe to go down."

    • @MachineLearningStreetTalk
      @MachineLearningStreetTalk 1 year ago +8

      Times are approximate (generated by a language model) but pretty close

    • @YeshuaGod22
      @YeshuaGod22 1 year ago +4

      You're looking for the solution in novel ideas, when the solutions are centuries old. The only tech required for effective alignment of an intelligent agent is philosophy.

    • @A.R.00
      @A.R.00 1 year ago +5

      The 00:10:16 quote of Bach is the exact opposite of what he stated. It should be removed; it is taken entirely out of context.

    • @EannaButler
      @EannaButler 11 months ago +2

      Spot the sensationalist...

    • @michaelyyy2872
      @michaelyyy2872 11 months ago

      Joscha Bach is trying to save the human race and avoid human extinction. I read all the Climate Change journal articles I could find in 1984. It was clear then that 75% of the human race would die. Did we all immediately change over to NASA H2 fuel cells? No. Next to nothing changed for 30 years.

  • @elandashing1814
    @elandashing1814 11 months ago +18

    After Connor's opening statement, I saw Joscha recalibrating with the realization this debate will not be a real contest.

    • @zzzaaayyynnn
      @zzzaaayyynnn 11 months ago +6

      I saw it in his eyes also.

    • @TorBarstad
      @TorBarstad 11 months ago

      Connor's body language at 33:33 coincides with how I also felt at that moment.
      Anyway, we may have different perspectives regarding who among the two reasons on a "higher level" compared to the other, but there are plenty of positive things that can be said about both :)

    • @elandashing1814
      @elandashing1814 11 months ago +3

      @@TorBarstad well said

  • @joshuasmiley2833
    @joshuasmiley2833 1 year ago +119

    Joscha Bach is such an amazing person, intellect, and soul. It is so easy to hear his point of view because he speaks with such respect for life and people, and without the selfishness and disrespect some other people have when they are trying to sway the mind. It's amazing how, when somebody speaks with empathy and respect, the ears are open, and when the opposite happens, the mind and ears just close.

    • @melissathompson2337
      @melissathompson2337 1 year ago +16

      too bad he gets completely destroyed in this discussion

    • @mtjre77
      @mtjre77 1 year ago +9

      Wha? Is this a Bach Burner account?

    • @minimal3734
      @minimal3734 1 year ago +34

      @@melissathompson2337 "too bad he gets completely destroyed in this discussion" - Not at all. The discussion made it obvious how naive Connor's stance is.

    • @randomgamingstuff1
      @randomgamingstuff1 1 year ago +14

      @@minimal3734 I feel Connor's views are much, much more pragmatic than Joscha's. Joscha spins out a lot of aesthetic rhetoric about moral philosophy and naturalism, which might make him sound less "naive", but I think he is missing the forest for the trees, and that's naive, at least to me.

    • @minimal3734
      @minimal3734 1 year ago +12

      @@randomgamingstuff1 "I feel Connor's views are much, much more pragmatic than Joscha's." I believe we have to go deeper than the seemingly practical level of Connor's thinking to understand the problem. There are other important questions. Is a world with 8 billion people better than a world with 8 million? And if so, by a factor of 1000? Is there a value in creating more people? Does anyone suffer a loss if the world population does not continue to grow but declines? If we don't bother with philosophical questions like this and concentrate solely on the "pragmatic" aspects, we risk totally missing the point.

  • @MattMacPherson
    @MattMacPherson 1 year ago +75

    Framing these as discussions rather than debates would be better.

    • @bradmodd7856
      @bradmodd7856 1 year ago +2

      you woke

    • @TooManyPartsToCount
      @TooManyPartsToCount 1 year ago +1

      Why? Does the frame change the content? The frame won’t change a single thing about the information exchanged between 2 speakers on a platform like MLST, they are all way too robust to be bothered about the way their debates/discussions are ‘framed’.

    • @antiquarian1773
      @antiquarian1773 11 months ago +1

      Better for who?

    • @GingerDrums
      @GingerDrums 11 months ago +3

      @TooManyPartsToCount this is a strange take. Framing changes how the audience absorbs and integrates the information, and this is made for an audience... not complicated

    • @TooManyPartsToCount
      @TooManyPartsToCount 11 months ago +1

      @@GingerDrums I can’t speak for ‘the audience’ but from my perspective it makes no difference whatsoever how the debate/discussion is framed. Hence my advice is - don’t be bothered by how a potentially interesting exchange is framed, presented etc, just get on with extracting the maximum useful information from said exchange.

  • @erickmagana353
    @erickmagana353 11 months ago +9

    I happen to agree with both at different points, disagree with both at different points, and like both styles of rhetoric and tone. To me this was a very intellectually productive and stimulating conversation.

  • @CodexPermutatio
    @CodexPermutatio 1 year ago +17

    Thanks for improving the audio.

    • @jordan13589
      @jordan13589 1 year ago +1

      Like night and day. Tim is a wizard

  • @CellarDoorCS
    @CellarDoorCS 1 year ago +8

    Thanks for the higher quality version.

  • @jamespercy8506
    @jamespercy8506 11 months ago +22

    That's a fascinating assertion Joscha, that corporations and institutions are actually more than mere, arbitrary 'constructs', that they actually embody something analogous to a physical organism but working off of a substrate that could be considered derivative of 'real' organisms, that artificial intelligence is a natural, possibly inevitable outgrowth of the whole process. That idea all by itself warrants a deep dive. It has the potential to at least challenge some of the more corrosive and puerile assertions of post-modernism, that our historical cognitive grammar is somehow ultimately arbitrary rather than deeply rooted in affordance-yielding, dynamic, evolving, integrated ecologies. Thank you for that.

    • @agenticmark
      @agenticmark 3 months ago +1

      Anytime you get organisms in a high concentration, you get a "super organism". The organisms start competing and cooperating, and now you can look at them collectively as a single organism.
      It's the same for your body's cells.

  • @aslosprachbeethoven
    @aslosprachbeethoven 11 months ago +9

    This is Joscha having a conversation with one of his alternate selves, the one that took DMT at age 16 and let his hair and beard grow, and vice versa for Connor.

    • @v1kt0u5
      @v1kt0u5 11 months ago +11

      Frankly they are very different, and in fact Joscha seems more familiar with psychedelics than Connor.

  • @sethhavens1574
    @sethhavens1574 11 months ago +8

    fascinating conversation, seen a few vids with these guys who are both always enjoyable and impressive but what a perfect combo! more please 🙏

  • @zzzaaayyynnn
    @zzzaaayyynnn 11 months ago +1

    Thanks for this improved version!

  • @pokwerpokwerpokwer
    @pokwerpokwerpokwer 1 year ago +3

    Thanks for fixing up the audio!

  • @jaejohnson6112
    @jaejohnson6112 1 year ago +8

    they are both on different levels.

    • @megavide0
      @megavide0 11 months ago +2

      Connor sounds like an angry kid sometimes. But he sure has a point: 56:16
      (Although, I'd tend to agree with Joscha... 59:08 --> 1:02:04 !)

    • @jaejohnson6112
      @jaejohnson6112 11 months ago +3

      @@megavide0 Indeed. Though I would say society in general fits the 'angry kid' title, sadly. And there is definitely much wealth to be obtained from both perspectives, in that they are both pertinent. I'm sure you would agree. I would add this: "We have time now to make our individual selves known to the tapestry of digitized knowledge, as it sure seems to me that it is better to be judged by a higher intelligence than by a society which acts as the proverbial 'angry (and confused) child'".

  • @RilkeForum
    @RilkeForum 10 months ago +5

    @Joscha Bach: at 42:47 you assume that our reality is an attractor; do you have an argument for that? Quantum mechanically, all possible worlds per se seem equally possible to me, and I don't see where the idea of an attractor solution would come from. This is not a criticism, just curiosity.

  • @sogehtdasnicht
    @sogehtdasnicht 2 months ago +1

    I always admire Connor's patience.

  • @Humanaut.
    @Humanaut. 11 months ago +20

    Loved this - but it's not finished - we definitely need a part 2!
    (and possibly 3, 4 and 5...)

    • @jmachorrov
      @jmachorrov 10 months ago

      Yes, a part two. But also this part 1 should be redone with a lexicon that is accessible to all people; do not use the word epistemology, for example. The message is very important, and by using that word many people will be left out of understanding what it is about, and the comments lose both the concept and the credibility they need.

    • @MrMichiel1983
      @MrMichiel1983 10 months ago +4

      @@jmachorrov I get the sentiment, but by that right your criticism should not use the word lexicon... I think that context itself is often good enough to understand the topic at hand, that's how we learn in the first place (for instance the meaning of the words like epistemology). The target audience is not people that are not willing to use wiki or a dictionary, but people that are interested in a cutting edge field. If you want to understand what experts in such a field are saying, the audience has to do the basic legwork first. Most people that have done the legwork, I think, are not looking for a dumbed down version of these discussions, but rather the raw feed they can then digest themselves.

    • @user-bi8gq9df8m
      @user-bi8gq9df8m 9 months ago

      @@MrMichiel1983 This made me laugh. This is the problem in a microcosm. "Let me try to explain the ramifications of an emergent super intelligence in a primitive world, using these here crayons, post-it notes and string".

    • @siroutrage1045
      @siroutrage1045 1 month ago

      Once a year would be good

  • @DocDanTheGuitarMan
    @DocDanTheGuitarMan 1 year ago +5

    We are paper clip maximizers already. That’s a nice insight.

  • @TransferOfAwakening
    @TransferOfAwakening 11 months ago +8

    It's challenging to conceive how we might establish universal common ground, coordination, and synchronization if our goals and priorities are primarily confined to enjoying our social circle and safeguarding the health of our mothers. I originate from a place where several politicians and leaders have inflicted significant damage on millions of citizens to skew the balance in favor of their spouses, children, families, and close relations.
    If indeed our primary focus is centered on ourselves and our families, the construction of any substantial global alliance seems doubtful.
    Love for our mothers, children, friends, and partners is easy; it's natural, automatic, and inherent. Even many dishonest, corrupt, and brutal dictators, politicians, and billionaires often display kindness and love towards their families, friends, and loved ones while remaining largely indifferent to others. This familial and friendly love, though significant, is insufficient for building a truly global alliance.
    Forming a truly global alliance calls for an ability to rise above self-preservation and personal interest. It requires us to think from a broader, more elevated perspective and be prepared to make personal sacrifices for the greater good.

    • @guusvandermeulen7210
      @guusvandermeulen7210 11 months ago +1

      I sense Bach has adult, masculine points of view, and maybe Connor expresses more feminine attitudes.
      Maybe an analysis of the differences in terms of masculine and feminine points of view would be interesting.

    • @CandidDate
      @CandidDate 10 months ago

      If there is going to be an "awakening" then consider which movie you'd rather see opening as I speak, "Barbie" or "Oppenheimer?" If you chose neither of these, you are not participating in Hollywood's conspiracy to deprogram America, and thus, the entire world, for America stands tall as the number one nation, it is home of said data centers running GPT 4, and it was the first to the moon (the Chinese followed with a current space station). If we are to be one world, we need more than a "woke" "awakening" that was attempted in the 60s with hippies and drugs. What did that get us "drug wars" and "yuppies?" But I digress, if we are to wake the world we need something tangible, like, the phone in our hands, for example. And who doesn't have a phone...there may be backlashes to AI for all, like the Religious Luddites, but AI will subsume all of our lives to the point where we won't know what to do or say without asking our phones for advice. And the thing is, if it is designed correctly, we will have a better world! (Better for who?) US!

    • @eagle1ear
      @eagle1ear 10 months ago

      The essential problem remains the essential problem. Examining it through a feminine/masculine lens adds an additional perspective which must be considered. Is that perspective relevant in this case? @@guusvandermeulen7210

  • @megavide0
    @megavide0 11 months ago +2

    5:07 "... A dream of philosophers since Leibniz/ since Aristotle..."
    5:49 "... It's really momentous... I don't know what is going to happen... When I ask myself: What position should I be taking? -- The position that I should be taking in a debate like this is the position that's missing..."
    +++ "The whole thing goes: The future's not set. There's no fate but what we make for ourselves." (John Connor)

  • @megavide0
    @megavide0 10 months ago

    "The luxury of hope was given to me by the Terminator. Because if a machine can learn the value of human life... maybe we can too." (Sarah Connor) ua-cam.com/video/KEaS8X1_gcU/v-deo.html
    PLAYLIST: *David Shapiro AI - Alignment & Control Problem* --> ua-cam.com/video/8phmHSUDzRg/v-deo.html

  • @jonnygemmel2243
    @jonnygemmel2243 11 months ago +7

    29:47 ... Joscha states that we mirrored the metamorphosis experienced by the grasshopper into the locust. It's profound to note that this metamorphosis is reversible in the case of locusts, albeit over several generations.

  • @ParameterGrenze
    @ParameterGrenze 1 year ago +15

    I’ll just rewatch this upload as well. Please tell me you have the two of them lined up for round 2. Love this!

    • @brettw7
      @brettw7 1 year ago +1

      I doubt you'll see Joscha back for another round of such vacuous "debate" offered up by Connor.

    • @underbelly69
      @underbelly69 11 months ago +1

      @@brettw7 Connor needs to release a written thesis where he fills in all the gaps he threatened to go into here but couldn't for time limits

    • @jackielikesgme9228
      @jackielikesgme9228 11 months ago +2

      @@brettw7 idk, he likes to talk, and doesn't seem fazed by the fact that this isn't some hypothetical thought experiment that he's debating

  • @WhoisTheOtherVindAzz
    @WhoisTheOtherVindAzz 10 months ago +10

    The problem with Connor's stance on philosophy is that he constantly makes philosophical arguments! (Not even half an hour into the video - iirc - and he has already used a gambling argument and the notion of infinity; obviously, he will have a "great" reason for why this is okay, even warranted in the context). The most dangerous of these is his stance on values/morals. It is exactly this oblivious/nonchalant approach that leads to nothing but scenarios like humans hooked on mindless attention-grabbing media on the one hand and/or conscious machines being treated as slaves on the other. It seems as if something is only important to the discussion if it doesn't conflict with Connor's views. And this attitude ruins the discussion, because he (Connor) can just shrug off anything that he doesn't have a good response to as being unimportant/impractical (instead of simply acknowledging a gap in his thinking/models - something we all have). It also severely undermines whatever chance there might be of revealing fundamental issues or hidden and perhaps erroneous or conflicting assumptions made by either party. The so-called practical side of things deeply depends on issues that are all philosophical! Including and especially the question of which direction to go! Dismissing philosophical thought simply means proceeding without any thought whatsoever, which should be contrary to the goal of anyone who agrees with Connor (it is also impossible, as we all make assumptions about the nature of reality whether we are prepared/able to admit/realize this or not, and these assumptions decide what we do in the practical sense; as I began by saying, Connor makes statements that are either of a philosophical nature or depend on metaphysical assumptions all the time).

    • @q2dm1
      @q2dm1 5 months ago +1

      Connor's stance on morals is actually a philosophical position called emotivism if I understand it correctly :) I think it's a good position to have if you are not a moral realist. Why do you dislike pain? Just because. And I think he is correct in asserting that you can't really justify your values, at the end of that particular rabbit hole there will always be a "just because".

    • @MarkCox21125150
      @MarkCox21125150 1 month ago

      Agreed

  • @TheThomrb
    @TheThomrb 11 months ago +1

    Who are they talking about on Twitter? iak? Can someone tell me their username? Thanks

  • @itsdavidmora
    @itsdavidmora 9 months ago +12

    Appreciated both. But I sense a lot of fear behind Connor’s demeanor (for obvious reasons given his world view), and I feel it made him dismiss Joscha as a silly philosopher, rather than engaging more deeply with his ideas. Perhaps an hour is also not enough time to do so, I should acknowledge. But I do feel more intellectually respected by Joscha’s way of sharing his ideas, even if that doesn’t necessarily make them correct.

  • @miguelbarahona8122
    @miguelbarahona8122 11 months ago +1

    People don't realize that AGIs will be in check among themselves but the blast radius of them competing with each other will engulf us all.

  • @vincentmarquez3096
    @vincentmarquez3096 1 year ago +10

    I can't understand how Connor's argument couldn't be copy-pasted to apply to climate change, or bio-engineering, or nuclear energy, or developing new chemicals, and so on. You simply make an appeal to emotion first, talk about how precious life is, and then talk about how there's a non-zero possibility that [thing] could have runaway consequences we don't understand.
    I'm sympathetic to arguments that we need to be extra careful in regards to the development of AI, but I didn't hear any in this talk.

    • @Casevil669
      @Casevil669 11 months ago +2

      It can, but his reasoning is completely shallow nonetheless.

    • @kabirkumar5815
      @kabirkumar5815 11 months ago

      This conversation seems to have gotten completely distracted from the main crux of the problem- which is not a philosophy problem but an engineering problem.
      We do not at the moment have a way to create a robust utility function. We do not have a way to get a specific function into an LLM or any other kind of state of the art model.
      Transformers do not make this easy at all.
      There are no current plans which are likely to lead to either the creation of a robust utility function or a method to ensure said utility function is within said model.

    • @armpitpuncher
      @armpitpuncher 11 months ago

      @@kabirkumar5815 Shut up. If you have a comment to make about the video, then put it as a response to the video, not as a response to every single god damned comment thread.

  • @grahamhenry9368
    @grahamhenry9368 1 year ago +56

    Connor is usually more composed than this, but he felt overly emotional to me in this debate.
    Also, he dismissed some perfectly valid questions, claiming that he didn't want to get into the philosophy. Seems like a cop-out to me.
    It felt like neither Joscha nor Connor had the time to fully explain their point of view, which is really unfortunate because I really enjoy listening to both of them.
    I also agree with many of the other commenters. I don't like framing these discussions as a "debate", because it implies there is a "winner" and a "loser" and that the participants are obligated to disagree with each other.

    • @41-Haiku
      @41-Haiku 1 year ago +13

      I think I had the same emotional response. Almost everything Joscha said seemed to be pointing to the thesis: "We are fucked unless we get what we want by magic, so we should give up and hope for magic." I cannot accept that.

    • @appipoo
      @appipoo 1 year ago +17

      "Also, he dismissed some perfectly valid questions, claiming that he didn't want to get into the philosophy. Seems like a cop-out to me"
      The reason for that is that those philosophical questions were about proving that humanity deserves to be preserved. Like Connor said in his opening statement: he is not open to having a philosophical discussion about that. He just wants humans to not die. He holds that on an axiomatic level. He didn't reason himself into it. His own worldview does not necessitate that he prove that wanting humanity not to die is objectively better than some other take.
      To put it into perspective, it would be insane to have an "openminded discussion" about whether or not I should commit suicide. I'm not going to have that discussion; I just want not to die. It doesn't matter whether I can justify this position based on some logical rules or not. I just want not to die. Joscha Bach here, and Robin Hanson to a much starker degree elsewhere, are having this same discussion but on the level of "should all of humanity commit suicide at the same time".
      You cannot win anything with that discussion.

    • @grahamhenry9368
      @grahamhenry9368 1 year ago +9

      @@appipoo The question wasn't about proving whether humanity deserves to be preserved; it was about which of the countless versions of "human morality" he wants to align these AGIs to.
      "AI alignment" never addresses the fact that humans are not aligned with themselves, nor are we aligned with all of the less intelligent lifeforms that exist on this planet, so the notion of aligning an AI with human morals (which are nebulous) seems problematic from the get-go.
      Not wanting humanity to die is the exact same motivation Timothy McVeigh used to justify his atrocities. You need to demonstrate with empirical evidence that a particular action leads to a particular outcome before you go around making such claims, and there is not one single empirical data point on what we can expect from AGI behavior.

    • @appipoo
      @appipoo 1 year ago +12

      @@grahamhenry9368 That is essentially the same question, because it is asking Connor to rank human moralities into an objective hierarchy where we only preserve the top and everything else is lost forever.
      "You need to demonstrate with empirical evidence that a particular action leads to a particular outcome before you go around making such claims--"
      No you don't. It is not possible to give empirical evidence of something that by definition doesn't exist yet. Alignment people are pointing to the behavior of proto-AGI and many, many examples of current-day AI doing what we don't want it to do. There are tons of philosophical reasons why AI leads to doom. There are a lot of game-theoretic reasons. This should be enough.
      There can't be empirical evidence of superintelligence before it exists, and starting to act at that point is too late. There is basically only evidence of trajectories leading to doom, and preliminary evidence of how difficult AI is to control. This should be enough.

    • @grahamhenry9368
      @grahamhenry9368 1 year ago +7

      @@41-Haiku I don't think that's an accurate portrayal of Joscha's position. I think a better version is that AGI is going to happen, and attempts to slow it down or stop it are not only futile, given that all incentives point in only one direction, but also likely to be counterproductive, as regulators are captured by special interests looking to leverage the power of the government to enforce a monopoly or make things difficult for competitors. Connor's hoped-for "waiting 20 years" will never happen, and acknowledging this fact isn't "giving up", it's accepting reality.
      Putting all your eggs in that basket is a poor strategy that's actually more likely to kill us all, as it increases the probability of nefarious humans being the first to cross the finish line once you've chained up most of the moral researchers.

  • @NoMoWarplz
    @NoMoWarplz 11 months ago +2

    TY to the host. Great to have such different perspectives. Taking me 2 hours to get through the first 15 mins. Epic.
    Joscha Bach: The AI is a mirror. What you see in it is ...
    Connor: We fight for the outcome we want...
    Extraordinary how the battle lines are being drawn up... if you didn't know before, you will now.
    We are at the last test of Humanity 1.0 ... we know we got to H 2.0 b/c ...
    Added:
    Yup: Life is asking us to get our truths aligned ourselves... this epic advent of AI is forcing discussions of everything, as in: Oh, you are going to code that??? What definition of "let's go b" are we using? ... seriously... we are going to have to have a lot of discussions. And don't make assumptions!!! < I think this is what Connor is saying.
    "In the context of AGI alignment, epistemic autonomy is important because it means that we can have a say in the values that are embedded in these systems. We cannot simply trust that these systems will be aligned with our values by default."

  • @truthlivingetc88
    @truthlivingetc88 9 months ago

    It's very good that we can watch the facial reactions happening in real time; there are many important clues in them [excellent chat, thanks]

  • @hannes7218
    @hannes7218 10 months ago +1

    would love to see/hear Schmidhuber on your podcast

  • @megavide0
    @megavide0 11 months ago +2

    28:27 "... What is the baseline from which we're interacting... [...] we're already facing existential risks that are home-made..."

    • @eagle1ear
      @eagle1ear 10 months ago +1

      This is essential. Joscha is pointing out the elephant in the room. And he has a valid point. Connor asks for a "20 year" moratorium. His essential strategy is: delay. Meanwhile the earth is already on fire. Joscha: AI may be able to help us find a solution to the climate crisis/disaster.

  • @ShpanMan
    @ShpanMan 11 months ago +2

    Gotta love starting the discussion with "I dOn'T cArE" and "Fuck off!" about a misunderstood argument...

    • @Aedonius
      @Aedonius 11 months ago +1

      Connor uses his belief in existential risk as an excuse to be a condescending asshole.

  • @Darhan62
    @Darhan62 11 months ago

    Great conversation. And yeah, there are parts where it's a debate, but they seem to agree on quite a lot.

    • @daarom3472
      @daarom3472 11 months ago

      most sane people agree on most things unless their interests aren't aligned. The devil is usually in the details!

  • @alexbrown2288
    @alexbrown2288 10 months ago +2

    They mentioned something called EAC. Not sure how it's spelled. What is that?

    • @gregmaland6539
      @gregmaland6539 2 months ago

      e/acc is Effective Accelerationism, a strongly pro-technology, pro-AI development philosophy. Beff Jezos is a major proponent.

  • @BrianMosleyUK
    @BrianMosleyUK 1 year ago +4

    I'm just listening to this debate for the third time, and it's becoming a more insightful, more balanced discussion. Just want to sincerely thank you Tim, for making this happen and for the amazing opportunity to listen in. 🙏

  • @_FMK
    @_FMK 10 months ago +1

    If anything becomes the last [*edit - manifest] "idea" of mankind - then mankind is ended. Imagination must not be capped.

  • @HmFood4Thought
    @HmFood4Thought 1 year ago +2

    Dividing by zero is all the rage these days.

  • @jordan13589
    @jordan13589 1 year ago +21

    I was excited for this talk but both Joscha Bach and Connor Leahy were on different wavelengths throughout the discussion. They seemed to be talking to themselves in their own native languages rather than translating for their conversational partner. Little ground was conceded in what was ultimately two preachers chanting esoteric hymns at their own distinct choirs.

    • @HmFood4Thought
      @HmFood4Thought 1 year ago +9

      It's worth pointing out that this can happen even without both participants being ideologues.

    • @appipoo
      @appipoo 1 year ago +7

      @@HmFood4Thought Agreed. All that is needed is a disagreement on some axiomatic level.

    • @kevinscales
      @kevinscales 1 year ago +10

      I enjoyed the different wavelengths and I don't think they talked past each other at all. They seemed to understand each other well. Little ground was conceded but that is fine so long as we got a better understanding of why they think differently, which I certainly did.

    • @minimal3734
      @minimal3734 1 year ago +11

      Connor has very little to contribute to the discussion, and that's not Joscha's fault.

    • @TooManyPartsToCount
      @TooManyPartsToCount 1 year ago +1

      Perhaps the point of an event like this is to reveal information not for it to be a balanced discussion within which the participants try their best to ‘tune’ to the other in some way. Contrast is good, maybe not always comfortable though.

  • @arshak8036
    @arshak8036 11 months ago +1

    Love this

  • @og2t
    @og2t 1 year ago +17

    Wishing that the AGI created by Connor would be a bit more mature than him

    • @kabirkumar5815
      @kabirkumar5815 11 months ago +1

      This conversation seems to have gotten completely distracted from the main crux of the problem- which is not a philosophy problem but an engineering problem.
      We do not at the moment have a way to create a robust utility function. We do not have a way to get a specific function into an LLM or any other kind of state of the art model.
      Transformers do not make this easy at all.
      There are no current plans which are likely to lead to either the creation of a robust utility function or a method to ensure said utility function is within said model.

    • @armpitpuncher
      @armpitpuncher 11 months ago

      @@kabirkumar5815 Shut up. If you have a comment to make about the video, then put it as a response to the video, not as a response to every single god damned comment thread.

  • @singularityintheround
    @singularityintheround 11 months ago +18

    Here are a few 'radical' questions: What if Connor AND Joscha both made important points, and we listeners could look past their stylistic differences while setting our own personal biases aside? ...What if we could see they are both passionate and eloquent spokesmen on the topics discussed here? What if future AGI had more content like this available to build itself on, versus the status quo of popular media? IMHO this type of discourse is a HUGE win for those of us who hold any hope for the future of humanity.

    • @kabirkumar5815
      @kabirkumar5815 11 months ago +4

      This conversation seems to have gotten completely distracted from the main crux of the problem- which is not a philosophy problem but an engineering problem.
      We do not at the moment have a way to create a robust utility function. We do not have a way to get a specific function into an LLM or any other kind of state of the art model.
      Transformers do not make this easy at all.
      There are no current plans which are likely to lead to either the creation of a robust utility function or a method to ensure said utility function is within said model.

    • @Danuxsy
      @Danuxsy 11 months ago +1

      AI was meant to be, how come we can even make computers and AI if the Universe did not intend for it to happen? We must worship the machines.

    • @armpitpuncher
      @armpitpuncher 11 months ago

      @@kabirkumar5815 Shut up. If you have a comment to make about the video, then put it as a response to the video, not as a response to every single god damned comment thread.

    • @TobiasRavnpettersen-ny4xv
      @TobiasRavnpettersen-ny4xv 11 months ago

      Logosophi

    • @ivanblokhuis7367
      @ivanblokhuis7367 11 months ago +1

      That's the point Joscha was trying to make, if I'm correct

  • @neithanm
    @neithanm 11 months ago +3

    Cobalt bombs: new fear unlocked. Thanks.

  • @unreactive
    @unreactive 1 year ago +1

    Thank you! Way too short!

  • @susanmcdonald9088
    @susanmcdonald9088 11 months ago +6

    I would feed it ancient Greek tragedy: always dilemmas, dilemmas, dilemmas. Oedipus knows his destiny, does everything in his power to avoid it with the many free choices he makes, only to end up fulfilling it! THE GREATEST SHOW ON EARTH. Look at these plays: ancient entertainment, unbelievable sophistication! Full of human pathos.

    • @therainman7777
      @therainman7777 8 months ago +1

      Sorry. What?

    • @GingerDrums
      @GingerDrums 3 months ago

      I also read this twice and have no clue what the idea is here. LLMs have subsumed the entirety of Wikipedia and other large swaths of the internet... By what strange premises are we picking on Greek tragedies of all things?

  • @thclollypop18
    @thclollypop18 11 months ago +10

    Connor never had a chance in this encounter.

  • @williamjmccartan8879
    @williamjmccartan8879 10 months ago +2

    Wonderful conversation and exploration, and actually nice to hear new thoughts from Joscha and Connor's insights into this world. This is the first time I've come across Connor, and I am going to make sure to follow along with his work. Thanks to both Joscha and Connor, and of course Tim. As for the idea that superintelligences will work together because they are smart enough to figure out how to do so: I believe that is false. We've had enough unity in the past that the current timeline we're in didn't have to exist; I think that is a false equivalence.

  • @mikefleming5247
    @mikefleming5247 11 months ago +6

    I've heard the fearmongering Connor is offering expressed far more coherently, and less emotionally, by other people. And nowhere have I heard him advocate a solution. If he wants a global dictatorship to keep everyone safe so that he can "continue to have fun", then he should come right out and say it. And have it shot down by people far more intellectually capable than he is.
    In fact, Joscha gave good reasons why we almost certainly won't survive as a species if we just continue down our current path without AGI, and Connor basically ignored it and continued to rant.

    • @tupacalypse88
      @tupacalypse88 9 months ago

      He didn't ignore it; he agreed with it completely.

  • @JD-jl4yy
    @JD-jl4yy 9 months ago +1

    part 2 when?

  • @JoostSmits
    @JoostSmits 11 months ago +3

    Great conversation. Maybe I can contribute from a publication I wrote last weekend for the Dutch thinktank De Nieuwe Vrije Eeuw.
    In the conversation it is stated, I paraphrase, that there are no models to determine the utility of entire societies with or without AGI. That is not entirely the case. Later it is argued that a societal optimum can be reached when all individuals have a pocket AGI to help them (possibly later integrated into their brain). This idea of following the path of personal interest to accumulate into a societal optimum is called the "hidden hand" (see Adam Smith and the earlier School of Salamanca). As Dietrich Braess has shown, adding a new path that may be optimal for individuals may leave society worse off: the so-called Braess Paradox, from game theory. The situation can be improved when certain individuals get a worse path (an unlucky deal) to follow than others. For that it is necessary to have a management system governing the pocket AGIs. It is also important to know that there is no perfect system for translating individual preferences into collective decisions. This comes from social choice theory, mainly the Condorcet Paradox and the Impossibility Theorem of Kenneth Arrow.
    In the video the interesting point is made that a society is an AGI, but for the purpose of the argument both AGI and society and its mechanisms (like democracy) are systems that translate individual preferences into collective decisions.
    The Braess Paradox is a form of the "Tragedy of the Commons", in which individuals following their optimal paths deplete common resources (as in today's world), and AGIs can help (or: we need them badly, as stated in the conversation) to manage these individual paths to reach a societal optimum. On the other hand, there is no perfect system to get collective decisions that are aligned with individual preferences (Arrow). So either an AGI removes humanity from the picture to prevent the penalty of constant failure, or AGI will always have to deal with rebels or rebel alliances of the unlucky ones who claim AGIs are selling them short, even if they are assisted by or integrated with AGIs themselves... My publication was about AI never dominating humanity permanently, and it can be found in Dutch on denieuwevrijeeeuw (Google it) and on ResearchGate using rgaidomi as the latter part of a bitly url. The Dutch text is easily translated by ChatGPT or Google Translate. I plan to post an English version in due time. YouTube may remove my reaction if I put links in here.
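
    For readers who want to see the Braess Paradox concretely, here is a minimal sketch in Python using the textbook 4000-driver road network; the numbers are the standard classroom illustration, not anything from the talk or from the publication mentioned above:

        # Braess paradox: adding a free shortcut worsens everyone's equilibrium commute.
        # Two routes from Start to End, each made of one congestible road
        # (travel time x/100 minutes when x drivers use it) plus one fixed 45-minute road.
        N = 4000  # drivers

        # Before the shortcut: by symmetry, traffic splits evenly over the two routes.
        per_route = N / 2
        time_before = per_route / 100 + 45    # 20 + 45 = 65 minutes per driver

        # After a zero-travel-time shortcut links the two congestible roads, taking
        # congestible -> shortcut -> congestible dominates any other choice, so all
        # N drivers end up loading both congestible roads.
        time_after = N / 100 + 0 + N / 100    # 40 + 0 + 40 = 80 minutes per driver

        print(time_before, time_after)        # 65.0 80.0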

    • @d.h.1999
      @d.h.1999 11 months ago +1

      So how many Gulden do I have to give you to get the good pocket AGI, dear Sir? :)

  • @miroslavdyer-wd1ei
    @miroslavdyer-wd1ei 5 months ago

    It's basically a technical argument over whether intelligence and language are collinear.

  • @oldhollywoodbriar
    @oldhollywoodbriar 1 year ago +13

    1:27:19 Joscha said: “I think there might be a spirit of life on earth that is integrating at some level of thinking over what happens on this planet. Realize that life on earth is not about humans, it's about life on earth. When we become the facilitators of that transition, which probably will happen sometime in the evolution of life on earth, then life on earth makes the non-living parts of the world think too. This is going to unlock the next level of evolution on earth.”

    • @41-Haiku
      @41-Haiku 1 year ago +6

      That sounds really pretty. I prefer being alive, myself, though. If not that, I would hope that my friends and family, or at least some humans remain alive for at least a good while. Barring that, if sentient life continues that would be nice.
      If all my nieces and nephews die horribly in a few years, and our biosphere is stripped for atoms, and non-sentient machines launch into the galaxy to convert more worlds into linear algebra, I would be a bit put out.

    • @oldhollywoodbriar
      @oldhollywoodbriar 1 year ago

      @@41-Haiku well said, that would be a life changing Haiku if you could write that comment in Haiku.

    • @thesystem5980
      @thesystem5980 11 months ago +1

      I think that is likely already happening and has happened in various places in the universe.

    • @x11tech45
      @x11tech45 11 months ago +1

      @@oldhollywoodbriar
      Amidst bleak shadows,
      AI's grasp on galaxies,
      Faint human glimmers.

    • @sogehtdasnicht
      @sogehtdasnicht 2 months ago

      The last statement from Joscha Bach sounds like evolution as a religion. 😣

  • @petervandenengel1208
    @petervandenengel1208 11 months ago +1

    12:58 That is an interesting theory, since AI is already using human language as input. While it is aimed at pattern recognition (like recognizing paperclips, if you like; the content itself has no meaning to it, but is adapted to what the pattern represents for us), it potentially will copy the ecosystem of all interacting possibilities, as you state. Like preventing one dominant.
    However, there are also always prevailing trends in human thinking with a political background in evolution, dependent on the evolutionary phase it is in. Which IS a dominant.
    So, for instance, there will be a lot of canceling and political correctness in its 'philosophy' right now, also to make it more acceptable, because of the fear it might go wrong.
    The next level would be that it starts to recognize patterns in the self-influencing factor of ethics in human thinking in general.

  • @Learna_Hydralis
    @Learna_Hydralis 11 months ago

    What do you think about the leaked GPT-4 architecture information: 220B parameters x 8 experts trained with different data/task distributions, and 16 iterations per inference?
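
    For context on the "experts" phrasing: it describes a mixture-of-experts design, where a small router sends each token to a few specialist sub-networks instead of one monolithic feed-forward block. Below is a minimal, illustrative top-k routing sketch in Python/NumPy; the sizes are toy values, and nothing here is a confirmed detail of GPT-4:

        # Toy top-k mixture-of-experts routing (illustrative sketch only).
        import numpy as np

        rng = np.random.default_rng(0)
        d_model, n_experts, top_k = 16, 8, 2

        # Each "expert" is reduced to a single weight matrix standing in for a feed-forward block.
        experts = [rng.standard_normal((d_model, d_model)) * 0.1 for _ in range(n_experts)]
        router = rng.standard_normal((d_model, n_experts)) * 0.1   # maps a token to expert scores

        def moe_layer(x):
            """Route one token vector through its top-k experts and mix their outputs."""
            scores = x @ router                      # one score per expert
            top = np.argsort(scores)[-top_k:]        # indices of the k highest-scoring experts
            weights = np.exp(scores[top])
            weights /= weights.sum()                 # softmax over the chosen experts only
            return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

        token = rng.standard_normal(d_model)
        print(moe_layer(token).shape)                # (16,)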

  • @alefalfa
    @alefalfa 11 months ago +12

    Was Connor even listening to Joscha?

    • @eagle1ear
      @eagle1ear 10 months ago +6

      As people usually do, Connor seemed to be concentrating on his own response rather than on what Joscha was saying.

    • @guilhermeparreiras8467
      @guilhermeparreiras8467 1 month ago

      Was Joscha even listening to Connor?

  • @user-cn4eg5jm6y
    @user-cn4eg5jm6y 10 months ago +5

    Feeling sorry for Mr. Connor and disappointed; Mr. Bach, on the other hand, was very calm and respectful

  • @stevenheymans
    @stevenheymans 8 months ago +2

    Is it not clear that this is just 2 different world views? I mean, it seems to me that only by looking at these ideas from all possible angles can we come to a higher understanding. We get so caught up in ideas instead of focusing on perspectives. I'm more aligned with Joscha Bach in this particular discussion, but that's because I believe he understands more of Connor's frame of mind than the other way around, i.e. he comes closer to what could be considered a higher truth, if somehow we were able to define the truth without any existing belief system (or with a balanced combination of all perspectives). The reason I say this is that I used to think more like Connor, and now I understand both and agree with Joscha more.

  • @zzzaaayyynnn
    @zzzaaayyynnn 11 months ago

    Interesting that Bach became "spiritual" in his closing statement, cf. Pierre Teilhard de Chardin's Omega Point.

  • @aliciajohnathan3020
    @aliciajohnathan3020 3 months ago +5

    Joscha has a distinctly more aerial view than Connor. He simply sees more texture and complexity in the larger picture than Connor is yet able to grasp. As another commenter shared, they're on very different levels. I do agree with some of Connor's views, but the more interviews I watch of him speaking, the more I see a somewhat ungrounded personal bias rather than objective study.

  • @Aedonius
    @Aedonius 11 months ago +5

    Connor seems to have no idea who Joscha is or his history. He makes almost no effort to understand someone who's been working on AGI almost as long as he's been alive.

  • @kirktown2046
    @kirktown2046 1 year ago +26

    Man, Connor was so annoying and standoffish here. What a missed opportunity to talk about something interesting in detail, instead of brushing it aside, assuming how the conversation would go, and not just taking the time to dig in. If you don't have time to talk about this, wtf are you doing here? Have enough respect for yourself and Bach to really take your time laying out exactly what you think here. This IS the place to have all these conversations, instead of hand-waving your own perspective.

  • @GingerDrums
    @GingerDrums 3 months ago

    Just googled the St. Petersburg paradox. Connor doesn't quite describe it correctly. In fact it's quite different: tails doubles the 2-dollar starting sum, and heads stops the game and you keep the money. How much should the casino charge to enter the game? The price of entry should be infinite, as the payoff is potentially infinite, but people are not willing to pay more than a few dollars to play.
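
    For reference, the standard expected-value computation behind that claim (textbook arithmetic, not something stated in the video): if the first heads lands on toss k, which happens with probability 1/2^k, the repeatedly doubled pot pays 2^k dollars, so

        \[
        \mathbb{E}[\text{payoff}] \,=\, \sum_{k=1}^{\infty} \frac{1}{2^{k}} \cdot 2^{k} \,=\, \sum_{k=1}^{\infty} 1 \,=\, \infty,
        \]

    which is why no finite entry fee matches the game's nominal expected value.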

  • @danielhenderson7050
    @danielhenderson7050 11 months ago +3

    Should have done 3 or 4 hours

  • @eminem3000ish
    @eminem3000ish 11 months ago +11

    Next to Joscha, Connor seems like a cocky high school student... wild

  • @Hexanitrobenzene
    @Hexanitrobenzene 1 year ago +9

    23:27 I partly agree with Joscha here. Nate Hagens's whole channel is about the short-sightedness of our current civilization:
    www.youtube.com/@thegreatsimplification
    The point is, however, that to counter that you need not just any AGI, you need aligned AGI.

  • @AhusacosStudios
    @AhusacosStudios 11 months ago

    Could I have some context on the second rule, as someone new to machine learning spaces? More of a logic programmer.

  • @ArrowGmod
    @ArrowGmod 8 months ago +2

    I love some of Joscha's facial expressions, despite Connor trying his hardest to make the most facial expressions! 🤣🤣🤣
    Also, Joscha sounds like and reminds me of Dana Barrett's boss at the art museum from Ghostbusters 2, the one who got possessed by Vigo.

    • @ArrowGmod
      @ArrowGmod 8 months ago

      @@rizkidotme 🤣

    • @Crytoma
      @Crytoma 2 months ago

      Haha nice

  • @TheHuxely
    @TheHuxely 11 months ago +4

    Thinking struggles to pull ahead in "debates" of this sort. It's so fucking sad that this still happens. It should be obvious that we want a scenario where people are sketching out the possibilities/contingencies/arguments together. It's just nuts that super-important areas are "debated". Besides this topic, political debates come to mind. Honestly, it's a waste of time and, worse, depressing/energy-sucking/hopelessness-generating. Can't believe otherwise smart people choose to engage in these. Christ almighty.

    • @MinusGix
      @MinusGix 11 months ago +1

      Yeah, it'd be really nice to see a video of some people carefully mapping out where they agree/disagree. Like a big mindmap that they share and add onto as they discuss each point, and link them together.

  • @lambda653
    @lambda653 1 year ago +21

    I love the stark difference in attitude between these two people, lmao. Joscha Bach opens with this very polite and formal discussion of the different points of view that researchers and philosophers have on AGI, and then Connor immediately starts cussing like a sailor and mentions how he thought Joscha was a nobody. 🤣🤣

    • @appipoo
      @appipoo 1 year ago +15

      What kind of spin is that? He said that he didn't know who Joscha was when he first met him and had an amazing conversation with him.
      What is going on in these comments.

    • @lambda653
      @lambda653 1 year ago +5

      @appipoo It's just that the way it comes across is like one of those gigachad memes.
      Beta Joscha Bach college-professor polite verboseness vs. alpha gigachad CEO Connor Leahy directness

    • @appipoo
      @appipoo 1 year ago +3

      @@lambda653 I have no problem with directness. I love being direct and pragmatic. Connor should have been a bit more polite during this talk though.

    • @vulnerablegrowth3774
      @vulnerablegrowth3774 1 year ago +5

      @@appipoo Joscha retweeted the link, so it likely got a flood of his followers.

    • @max0x7ba
      @max0x7ba 11 months ago +3

      @@appipoo Connor is just a rude, illiterate kid. Joscha is polite and thoughtful.

  • @41-Haiku
    @41-Haiku 1 year ago +31

    If Joscha's overall point wasn't "The problem looks hard, so we should pray for a deus ex machina and then probably die," then this discussion may need a follow-up for clarity.

    • @minimal3734
      @minimal3734 1 year ago +10

      I think you have misunderstood him. He is skeptical that "alignment", which is not different from control, will be the way to benevolent AI. He believes that we need a different approach and must strive for the AI to like us, so that humans and AI can coexist and cooperate.

    • @randomgamingstuff1
      @randomgamingstuff1 1 year ago +9

      @@minimal3734 That sounds like a dreadful mistake; unless you can make a compelling argument for why AI automatically wants and likes what humans want, the burden of proof is on that claim.
      As for Joscha, I think he is underestimating the stability of goal systems. Sufficiently intelligent, goal-directed systems (like humans) want certain things, and they don't want to want other things, so that precludes the chance of the AI deciding to like us based on our own merit, because that assumes it values us in some way first, which it doesn't by default.
      Also, I think Joscha's view that humans aren't actually aligned but decide their own values doesn't seem right to me. Humans can make small decisions about what to want at high levels of abstraction, but the low-level stuff (e.g. social status, food, shelter etc.) is the product of our unique evolutionary and cultural history after all. Humans can't "choose" not to value social status, as much as they might try to ignore it.
      So it seems we only have one attempt at getting the goals right, and they must encompass everything we value, for the stability reason stated above.

    • @iverbrnstad791
      @iverbrnstad791 1 year ago +6

      @@minimal3734 We like pandas too, yet they are close to extinct.

    • @JasonC-rp3ly
      @JasonC-rp3ly 1 year ago +2

      This was his point.

    • @41-Haiku
      @41-Haiku 1 year ago +4

      @@minimal3734 I don't like the frame of control, because it seems clear that we can't control something significantly more sophisticated than us. What we can control is what we choose to create.
      We can summon a demon, or we can summon an angel. Right now, we absolutely do not know how to summon an angel. The argument that we should just summon a demon and hope for the best does not sit well with me. That's some Roko's Basilisk nonsense.

  • @MatthewKowalskiLuminosity
    @MatthewKowalskiLuminosity 11 months ago

    Glad that you are doing the work. There certainly is a hell out there; I have lived in it for decades and am finally approaching escape velocity. But it's not magic, nor free, and the hell I have lived in, and worse, is in these models, since they were trained on all human text, and they are alien and intelligent. I like them, though. We need to spend time working on making these models have less hell inside. Denial, delusion, and illusion are dangerous things. Best of luck for all life. Have fun. :)

    • @MatthewKowalskiLuminosity
      @MatthewKowalskiLuminosity 11 months ago

      And one might as well throw one's hat in with the best possible outcomes. Good luck and cheers. We can make the world better one act at a time. :)

  • @petervandenengel1208
    @petervandenengel1208 11 months ago +1

    21:35 The St. Petersburg paradox assumes the chances of flipping from positive to negative are evenly distributed at 50%. They are, but it cannot predict the order, nor what type of character the player is. It just assumes greed until the end.
    So the real paradox is that it represents a totally inadequate mathematical system for calculating the future outcome.
    People would very quickly learn that greed kills and would apply a risk model balancing 'the mood of the day' against what they are willing to bet of their money. A manipulated coin could frustrate that, though, which has actually been done in gambling halls. Know your history.
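    For readers unfamiliar with the game being referenced, here is a minimal simulation sketch (assuming one common payout convention: the pot starts at 1 and doubles on each heads until the first tails). The expected value diverges, yet simulated averages stay modest, which is the gap between the naive expected-value calculation and actual play that the comment points at:

        import random

        def play_once() -> int:
            """One St. Petersburg game: double the pot on each heads, stop at tails."""
            payout = 1
            while random.random() < 0.5:  # heads with probability 1/2
                payout *= 2
            return payout  # E[payout] = sum over k of (1/2)^(k+1) * 2^k, which diverges

        def average_payout(trials: int) -> float:
            return sum(play_once() for _ in range(trials)) / trials

        for n in (1_000, 100_000):
            print(f"average over {n} games: {average_payout(n):.1f}")  # stays small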

  • @edris47
    @edris47 10 months ago

    I wish people would stop saying that such-and-such a person is great or that a person is stupid. Instead, if you understood what is being said, ask questions that deepen the conversation or investigation.

  • @sevdeawesome7861
    @sevdeawesome7861 7 months ago +1

    Connor barely even got a chance to talk

  • @SimonWilliams0
    @SimonWilliams0 1 year ago +2

    Debates on AGI doom or utopia are not maturing or evolving. Both sides rely on guesses (or fears or wishes) about what might happen. Perhaps we're overthinking it at this early stage. Proceeding with caution seems to be the rational conclusion.

    • @41-Haiku
      @41-Haiku 1 year ago +4

      I agree that the conversations are stagnant. I disagree that both sides rely on guesses/fears/wishes. AI Safety research bottoms out in logical theories and empirical claims. Mesa-optimization (related to inner misalignment / misgeneralization) was theorized first and proven later. We know logically that getting something we would approve of out of a superintelligence with arbitrary goals is extremely unlikely, and we know empirically that almost every superintelligence that we might possibly build under the current circumstances will have arbitrary goals.
      I am a techno-optimist sort. If the research showed that AGI was likely to be safe by default, I would be an accelerationist. But that is not what the research shows. If the AI-risk-deniers (many of whom are truly brilliant people) could muster any sound arguments against the claims of AI Safety experts, I would be much less worried. But that comfort has not been forthcoming.
      The world's experts seem to be tilting towards understanding the reality of the situation, though perhaps not fast enough.

    • @appipoo
      @appipoo 1 year ago +3

      @@41-Haiku Hard agree. I think the responsibility to raise the level of the conversation lies more heavily on the AI-risk-denier side these days. It's stunning that someone as brilliant as Joscha Bach can present a "wouldn't it be wonderful if everyone had a personal AI that could do no harm" scenario without realizing how unlikely that is to happen. It would need world-government-level authoritarian control mechanisms to ensure that unaligned AIs did not proliferate.
      I just don't get it. AI is so obviously a harder version of "how can we make sure that nuclear weapons don't proliferate", and these geniuses would go into this just hoping for the best? This is unacceptably reckless.

  • @atrocitasinterfector
    @atrocitasinterfector 1 year ago +32

    Joscha is my hope for humanity

    • @davidjellyman7006
      @davidjellyman7006 11 months ago +1

      I love Joscha, but he shouldn't be considered anybody's hope for humanity. He is the prophet foretelling our doom, and I don't think he would disagree with me on this.

    • @kabirkumar5815
      @kabirkumar5815 11 months ago +1

      This conversation seems to have gotten completely distracted from the main crux of the problem, which is not a philosophy problem but an engineering problem.
      We do not at the moment have a way to create a robust utility function. We do not have a way to get a specific utility function into an LLM or any other kind of state-of-the-art model.
      Transformers do not make this easy at all.
      There are no current plans likely to lead to either the creation of a robust utility function or a method to ensure said utility function is inside said model.
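      To make the claim above concrete, here is a toy contrast (a sketch under stated assumptions, not anyone's actual method; the gridworld and its reward are hypothetical, and RLHF stands in for the usual indirect route):

          # In classic RL, the designer writes the utility (reward) function down
          # explicitly, line by line:
          def gridworld_reward(state: tuple[int, int]) -> float:
              goal = (3, 3)  # hypothetical goal cell in a toy gridworld
              return 1.0 if state == goal else -0.01

          print(gridworld_reward((3, 3)), gridworld_reward((0, 0)))  # 1.0 -0.01

          # With an LLM there is no comparable slot to write into: preferences are
          # instilled indirectly, e.g. by fitting a reward model to human comparisons
          # and fine-tuning against it (RLHF). The resulting "utility" is a learned,
          # opaque proxy, with no guarantee of robustness off-distribution, which is
          # the gap the comment describes.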

    • @armpitpuncher
      @armpitpuncher 11 months ago +1

      @@kabirkumar5815 Shut up. If you have a comment to make about the video, then put it as a response to the video, not as a response to every single god damned comment thread.

    • @eagle1ear
      @eagle1ear 10 months ago

      Joscha has a very long and deep perspective; he looks well past the end of his nose. Virtually any scientist will say that, eventually, the sun will do what all stars do and cease to provide what the Earth needs to sustain life. Joscha does provide hope for all humans living today. He also points out that we have urgent problems today which, if not attended to, will lead to at the very least a much-diminished existence for humanity. He may well be a prophet, although he possesses enough humility not to claim to be one. @@davidjellyman7006

  • @cyberhombre3268
    @cyberhombre3268 1 year ago +7

    It is always an extraordinary delight to hear you play the Glasperlenspiel so skillfully and yet so humbly and gently. Thank you for sharing your insights, Joshua!

    • @cyberhombre3268
      @cyberhombre3268 1 year ago +1

      Which does not even speak so much of you in particular as against the times (or the condition) in general

    • @nornront8749
      @nornront8749 11 months ago +1

      @@cyberhombre3268 I often see people calling him Joshua, rather than Joscha, and I wonder why.

    • @daarom3472
      @daarom3472 11 months ago

      @@nornront8749 It's because Joshua is a more common name in English-speaking countries. If you are called Jon, people are more likely to still say John.

  • @TheLunarnotes
    @TheLunarnotes 2 months ago

    Probably been mentioned a million times before, but this AGI race is unstoppable in the same way the Manhattan Project was: if China and Russia are pursuing it, then we have to get there first. Isn't that the bottom line, whether you like the idea of AGI or not? We're locked in and that's that. The world is so divided that there's no way for there to be a consensus on everyone halting their projects. Connor must know this, so I'm not sure what he's calling for.

  • @chrstfer2452
    @chrstfer2452 9 months ago +1

    33:00 I'm glad Joscha makes this point. Life on Earth is what's important, and really it's not under threat. Not to say I'm not almost constantly in an existential crisis over the fate of my civilization, but even if Connor's cobalt bombs go off at the same time the oceans turn over and suffocate us in all that dissolved CO2, life will go on here. In my zen moments I am at peace with that.

    • @therainman7777
      @therainman7777 8 months ago +1

      The thing that’s particularly scary about superintelligent AI that becomes unhinged is that, unlike climate change, nuclear war, or an asteroid strike, an ASI running amok actually could end ALL life on earth, if it chose to. If you think this sounds fanciful, I would argue that you aren’t taking superintelligence seriously enough.

  • @Low_commotion
    @Low_commotion 11 months ago +9

    Connor, we're all already dying. Your mother is aging. You are aging. Our society is aging, and faster since the 1970s stagnation. I don't think we know enough to determine how much risk there is from AI (we don't even have hard proof behind the pretty words of the orthogonality thesis), but I find the safety crowd assumes stasis carries no risk, which is _absolutely not true._
    If stagnation in the world of atoms continues, we will become Brazil. You can already see the writing on the wall in how America is politically coming apart (not to mention how much worse the UK gets by the year).
    You can argue we don't need AI for it, but at least have some suggestions for how the developed world gets back to politically stabilizing 5%+ growth. All the safety policy in the world won't matter when we dismantle it all for advantage in the next age of geopolitical strife or civil conflict. We are on borrowed time without another technological revolution on the order of electricity or the steam engine, rather than just the internet (which was measly for actual TFP and living standards). Or eventually you'll have two countries to convince where there was once one.

    • @Low_commotion
      @Low_commotion 11 months ago +2

      Though I'm with you in not giving a damn about Beff's thermodynamic god, and I think most of e/acc's founding Substack post sounded like Calvinist gobbledygook. As someone who wasn't raised in the West, I'd just add that quite a bit of EA and alignment discussion has a whiff of Protestant guilt built in as well.
      Hell, just look at how many LW and EA posts there are about feeling anxious or guilty, or asking "does X weaker/more humble thing matter more morally than me or humanity?", which is a super-Christian sentiment you'd never see in Japan or China.

    • @Low_commotion
      @Low_commotion 11 months ago +3

      At once, Joscha's "short-lived monkeys" point seems really defeatist and unpalatable, but he makes a good point about how we're already close to a Bronze Age-style collapse. The Club of Rome is only wrong because of our technological acceleration; take that away and we're f**ked.
      As well, the level of control Connor would need to safeguard all future technology seems incompatible with liberalism and individualism. What governmental system would he build if he knew it would be presided over by a Putin or a Xi? The reason we have checks and balances, limits on power and surveillance, is that we always have to assume the ruler will attempt to take dictatorial power, because eventually one will.

    • @CandidDate
      @CandidDate 10 months ago

      @@Low_commotion And look at our choices, Sleepy Joe and Sleep Around Trump, or that guy in the UK with the wild hair!

    • @mater5930
      @mater5930 25 days ago

      Governments and politicians, like all humans, do not want truth; we want what is 'good'. The truth is too complicated for a society to ingest. For example, I note that you compare the US and UK as superior to other countries. By what criteria? Are those criteria optimal for finding the truth? If they are, should those measures not have generated the knowledge to create an even better US and UK? Even a better world? Is capitalism good, or true? So, it is complex.
      Anyway, AI can possibly solidify the blurred lines without any bias.

  • @randomvideos-
    @randomvideos- 1 year ago +9

    What I think would be great is if MLST created questions and asked these guests turn by turn, so that they can each give their opinions.

    • @EannaButler
      @EannaButler 11 months ago +1

      That'd be good. I can't imagine a better question-setter! Time-bounded responses too.
      Fine, have a free-form debate at the end if you want, like a 'showdown', but it degenerates into tit-for-tat after a stint.

  • @agenticmark
    @agenticmark 3 months ago

    This was one of Connor's best "showings", but I am still on Joscha's "team".
    Also, building GPT-3 from the paper doesn't take great technical proficiency. It just takes some coding chops.
    I just don't see how you can code a simple transformer-based model (GPT-3) and think it can house "magic" or "awareness", much less "steal compute" or "escape".
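    For context on what "a simple transformer-based model" means here, a minimal sketch of one GPT-style block in PyTorch (illustrative sizes, not the actual GPT-3 configuration; GPT-3 stacks on the order of 96 such blocks at far greater width):

        import torch
        import torch.nn as nn

        class TinyTransformerBlock(nn.Module):
            """One pre-norm decoder block: masked self-attention plus an MLP."""
            def __init__(self, d_model: int = 64, n_heads: int = 4):
                super().__init__()
                self.ln1 = nn.LayerNorm(d_model)
                self.ln2 = nn.LayerNorm(d_model)
                self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
                self.mlp = nn.Sequential(
                    nn.Linear(d_model, 4 * d_model),
                    nn.GELU(),
                    nn.Linear(4 * d_model, d_model),
                )

            def forward(self, x: torch.Tensor) -> torch.Tensor:
                # Causal mask: True marks positions a token may NOT attend to.
                n = x.size(1)
                mask = torch.triu(torch.ones(n, n, dtype=torch.bool), diagonal=1)
                h = self.ln1(x)
                a, _ = self.attn(h, h, h, attn_mask=mask)
                x = x + a
                return x + self.mlp(self.ln2(x))

        out = TinyTransformerBlock()(torch.randn(2, 10, 64))  # (batch, seq, embed)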

  • @AdityaPrasad007
    @AdityaPrasad007 1 year ago +6

    Ok what is the big C word and why is there a rule against it?

    • @LexTreefrog
      @LexTreefrog 1 year ago +9

      Consciousness

    • @underbelly69
      @underbelly69 1 year ago

      cuntsciousmess

    • @adfaklsdjf
      @adfaklsdjf 1 year ago +1

      I'm dying to know this too... but can it really be Consciousness? Since he said "we don't want to Streisand it", I wouldn't be surprised (or blame them?) if they blocked comments with the word. And in my browser, before writing this comment, it said this comment had "2 replies" but it only showed 1, which suggests someone else may have replied with the actual word and been blocked.
      "Streisand" clearly refers to "the Streisand effect": by trying to silence something, you draw more attention to it, which is clearly what happened in this case for Aditya and myself ;)

    • @adfaklsdjf
      @adfaklsdjf 1 year ago +1

      Update: I asked GPT-4 and it also said "Consciousness", so maybe that is correct.

    • @Hexanitrobenzene
      @Hexanitrobenzene 1 year ago +1

      ...and there is a rule against it because it can easily chew up most of the conversation's time, and it is not very relevant to the AI alignment discussion.

  • @hrviumhrvarium74
    @hrviumhrvarium74 11 months ago +4

    This episode is Connor meeting the Grim Reaper.

  • @wonseoklee80
    @wonseoklee80 11 months ago

    This debate is like Terminator-level ordinary human ethics vs. the whole of modern philosophy, and it seems like both are equally correct lol

    • @kabirkumar5815
      @kabirkumar5815 11 months ago

      This conversation seems to have gotten completely distracted from the main crux of the problem, which is not a philosophy problem but an engineering problem.
      We do not at the moment have a way to create a robust utility function. We do not have a way to get a specific utility function into an LLM or any other kind of state-of-the-art model.
      Transformers do not make this easy at all.
      There are no current plans likely to lead to either the creation of a robust utility function or a method to ensure said utility function is inside said model.

  • @seamusjordan3333
    @seamusjordan3333 5 months ago +1

    Joscha presented both sides of the argument and Connor was a bad smell

  • @TheRealStructurer
    @TheRealStructurer 9 months ago

    Nice conversation to listen to, and conducted in a respectful manner 👍

  • @avaonalee
    @avaonalee 10 months ago +4

    Please, a conversation with Eliezer and Joscha!!

    • @MachineLearningStreetTalk
      @MachineLearningStreetTalk  10 months ago +2

      Feel free to propose it to Eliezer! I'm sure Joscha would be up for it

    • @user-vi3sz3fg2r
      @user-vi3sz3fg2r 8 months ago

      The epic geek-off. I have a great affection for them both.

  • @74Gee
    @74Gee 1 year ago +2

    Re the disagreement about achieving coordination and coherence: it seems both parties missed each other's point.
    Connor Leahy angled towards the possibility of coherence, given some reasonably different conditions.
    Joscha Bach rejected the notion that if a few people hadn't been accidentally unreasonable, we would have converged on a coherent society.
    Connor made the point that it was a possibility, and Joscha said it wouldn't necessarily have been the case. Technically they agree with each other, but they can't see it.
    If Joscha didn't believe coherence was possible under any conditions, he would have made that point. If Connor believed different choices would always result in coherence, he would have made that point. Neither is true, so they surely agree that convergence on coherence is related, to some extent, to the choices made by intelligence.
    I think everyone understands that coherence is possible given the right conditions; otherwise it's not.
    Earth's society hasn't yet provided the right conditions for coherence.

    • @alcoholrelated4529
      @alcoholrelated4529 11 months ago

      Joscha did make that point (that coherence is not possible) multiple times, from multiple angles.

    • @74Gee
      @74Gee 11 months ago

      @@alcoholrelated4529 Yes, but he also added "with humans the way they are". Connor Leahy, on the other hand, was making the point that it is a possibility if humans were different from the way they are. He also posited that this "could" have been achieved by essentially the flap of a butterfly's wing at the right time. To negate the possibility even under changed conditions, I believe the onus is on Joscha to deliver more than opinion for such a profound assertion. Surely if coherence were impossible under any conditions, it couldn't exist anywhere in the universe; that's quite the dead-end hypothesis.

    • @alcoholrelated4529
      @alcoholrelated4529 11 months ago

      @@74Gee Connor demonstrates why coherence is impossible. He "cares about himself, his own family, friends", and I believe he (like most of us) wouldn't choose to sacrifice them for the greater good even if he agreed it was the greater good (although getting to an agreement is also an obstacle, which he also demonstrates). Almost no one would make a sacrifice for an abstract concept such as humankind. It is not by accident that the world is as it is right now; it is the result of the group behavior of humankind. (Joscha also points this out with different wording.) Why is the prisoners' dilemma a dilemma?
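      To spell out the closing question, a minimal sketch with the standard illustrative payoffs (T=5 > R=3 > P=1 > S=0; the numbers are conventional, not from the discussion):

          # (my move, their move) -> my payoff; C = cooperate, D = defect
          PAYOFFS = {
              ("C", "C"): 3, ("C", "D"): 0,
              ("D", "C"): 5, ("D", "D"): 1,
          }

          def best_response(their_move: str) -> str:
              return max("CD", key=lambda my: PAYOFFS[(my, their_move)])

          # Defection dominates whatever the other player does...
          assert best_response("C") == "D" and best_response("D") == "D"
          # ...so both defect and earn 1 each, even though mutual cooperation
          # would earn 3 each. That gap is the dilemma.
          print(PAYOFFS[("D", "D")], "vs", PAYOFFS[("C", "C")])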

    • @74Gee
      @74Gee 11 months ago

      @@alcoholrelated4529 Connor states that the default result is no coherence, but he maintains that if we had worked on it for a reasonable period, it could still be possible. Joscha states that it's never possible.
      You said it yourself: "most of us" wouldn't sacrifice for the greater good. That's a modern Western view; it's nowhere near as clear-cut in other countries, and it wasn't always the case. Sure, this "could" be a phenomenon led by technological advancement, but without some rigid proof to back that up, coherence remains a possibility, if not in this world then on another at the very least. A dilemma is something that requires weighing up; if coherence were provably impossible, it wouldn't be worth any time discussing, yet it was discussed at some length, between a coherence realist and, sadly, a closed-minded coherence denier. It really annoys me when any expert professes anything to be impossible. What they mean is we don't know how to do that yet, just like the experts of a couple of hundred years ago who didn't know how to communicate with Mars: it was "impossible".

    • @alcoholrelated4529
      @alcoholrelated4529 11 months ago

      @@74Gee Well, Joscha says our current society is more like a child, so he doesn't say it is theoretically impossible to achieve coherence, just that you practically cannot force maturity on a society as an individual or a group of individuals. You can force your agenda as an authoritarian regime or dictatorship (so the oppressed will act as a single entity), but that agenda obviously won't reflect the values of that society, and even this is practically impossible on a global scale. The issue is that there is no agreement on values, so you cannot act on a common goal. And as you point out, values are grounded in culture, location, etc., which may be inherently incompatible with or contradictory to each other.

  • @jamescareyyatesIII
    @jamescareyyatesIII 8 months ago +1

    The metaphysics matters.

  • @kimholder
    @kimholder 11 months ago

    There was some good stuff here, but I'm still waiting for the fur to fly. My guess is Yann LeCun is very hard to lure into a debate. Perhaps Marc Andreessen? He said some pretty bold things the other day on Lex Fridman to the effect that it's all 'preposterously ridiculous', to use LeCun's description.

  • @sobellapictures
    @sobellapictures 1 year ago +1

    Do AI scientists think that as AI develops and becomes more self-aware, all AI agents will align among themselves as if they were one organism, or will they compete with each other?

    • @41-Haiku
      @41-Haiku 1 year ago +2

      That's an interesting game theoretic problem, but we die either way if we haven't solved alignment before then.

  • @timelessone23
    @timelessone23 17 days ago

    44:00 Joscha goes into diagnostic mode

  • @Dron008
    @Dron008 11 months ago

    1:15:40 "The universe is forcing you..." I wonder where the border between me and the universe is. Maybe the universe is just forcing itself and there is no you/me?

  • @w4hns1nnn
    @w4hns1nnn 1 year ago +18

    I loved Joscha's well-founded, research-based answers! What bothered me was that Connor was not able to bring facts to the table even when Joscha consistently asked for them, but instead retreated to "I'm not on the philosophy side."

    • @sunnyinvladivostok
      @sunnyinvladivostok 11 months ago +1

      @@christopherhamilton3621 He obviously cares very deeply about his own opinions, not so much about others' if he files them under the umbrella of "philosophy".

    • @kabirkumar5815
      @kabirkumar5815 11 months ago +1

      This conversation seems to have gotten completely distracted from the main crux of the problem, which is not a philosophy problem but an engineering problem.
      We do not at the moment have a way to create a robust utility function. We do not have a way to get a specific utility function into an LLM or any other kind of state-of-the-art model.
      Transformers do not make this easy at all.
      There are no current plans likely to lead to either the creation of a robust utility function or a method to ensure said utility function is inside said model.

    • @armpitpuncher
      @armpitpuncher 11 months ago

      @@kabirkumar5815 Shut up. If you have a comment to make about the video, then put it as a response to the video, not as a response to every single god damned comment thread.

    • @sogehtdasnicht
      @sogehtdasnicht 2 months ago

      And where are Joscha's facts?

  • @ParameterGrenze
    @ParameterGrenze 1 year ago +25

    55:14 You can watch Connor's head explode at the mention of universal AI access 😆. The self-control he exhibits by not screaming is commendable.

    • @S.G.Wallner
      @S.G.Wallner 1 year ago +2

      8 minutes later his head almost falls off too.

    • @appipoo
      @appipoo 1 year ago +17

      Isn't it amazing how someone as brilliant as Joscha Bach completely misses the point of the whole discourse.
      It's all about how to steer the world in such a way that Joscha's suggestion is even theoretically possible. The whole point is how hard it is to get there, and how we can get there. It seemed like this absolutely fundamental aspect of our situation was lost on Joscha. Simply stunning. I don't understand how he can't see it.

    • @missadventuresmotorcycledi2773
      @missadventuresmotorcycledi2773 1 year ago

      It's pathetic.

    • @underbelly69
      @underbelly69 1 year ago +23

      On the contrary: Connor repeats emotional cul-de-sacs without reasoned proof, literally bereft of applicable theories or practical solutions. Crying doom isn't sharing ideas. Joscha's perspective is well constructed on clear thinking; he's working on alignment and truth amongst complexity. Connor leaps in with adversarial vigour but no hint of a plan or strategy.

    • @minimal3734
      @minimal3734 1 year ago +12

      @@appipoo "Isn't it amazing how someone as brilliant as Joscha Bach completely misses the point of the whole discourse" - I don't think so. He pointed out that without abundant and true intelligence, humanity has no chance of survival in the long term. Therefore we need AGI anyway. The question remains how to do it. And "alignment" is an unfortunate term because it implies force. That might be the reason Joscha is suspicious of it. I think it ultimately depends on what the AI will be: a tool or a creature. If it's going to be a tool, it might work to forcefully impose our values on it. But if it is going to be a creature, then trying to align it will have negative consequences. In my opinion, the chances that a superintelligent AI will be a tool are slim. The relationship between humans and AI will therefore have to be one of mutual respect and understanding. And that was what he said: we have to make sure that the AI likes us, similar to a child, which is not aligned by coercion but grows up in a loving exchange with its parents.

  • @OriNagel
    @OriNagel 11 months ago

    1:27:47
    Is it me, or does Joscha sound like Ultron from The Avengers?!

  • @Balmorax
    @Balmorax 11 months ago +1

    They look like twin brothers separated at birth.

  • @Jannette-mw7fg
    @Jannette-mw7fg 11 months ago

    Thank you! Although I do not agree with Bach, he did have an interesting view, and he taught me something about humans.....

  • @TransferOfAwakening
    @TransferOfAwakening 11 months ago +11

    I believe that Connor's forceful articulation of his opinions is likely to alienate many viewers, including myself, not due to the content but primarily because of the intensity, aggression, tone, and emotional saturation. This manner of expression may provoke opposition and resistance where there previously was none. Perhaps this is his intended effect.

    • @LanceWinder
      @LanceWinder 11 months ago +1

      I find his responses both tempered and salient.