Paul Christiano - Preventing an AI Takeover

  • Published 18 Nov 2024

COMMENTS • 359

  • @david-fm3gv
    @david-fm3gv 1 year ago +170

    It's super, super weird hearing extremely smart people confidently make such radical predictions about the near future.

    • @Cagrst
      @Cagrst 1 year ago +17

      Yeah this feels like a dream…

    • @Elintasokas
      @Elintasokas 1 year ago +50

      Intelligence has never stopped people from being overconfident about things that are utterly unpredictable.

    • @oowaz
      @oowaz 1 year ago +7

      This comment is so vague. Is there a specific observation you're referring to? @david-fm3gv

    • @kyneticist
      @kyneticist 1 year ago +20

      Context matters. They aren't just smart people, or random people offering opinions. These are people who have dedicated their lives to the study of the subject, are deeply involved in the field, have worked through its evolution, and are the experts that other experts seek out for advice.

    • @Elintasokas
      @Elintasokas 1 year ago +12

      @@kyneticist Still, giving precise predictions such as a 15% chance is just silly and meaningless. It's like predicting the economy; it's impossible due to too many unknown variables. No one, literally no one, no matter how knowledgeable, is able to predict the economy. This is more or less in the same camp.

  • @kimholder
    @kimholder 1 year ago +26

    I often speed up videos to 1.25x. I slowed this one down to 0.75x.

    • @magnusgjerde9330
      @magnusgjerde9330 4 months ago +1

      That's a mistake. If you scale up the playback speed and your omega-3 intake 1000x, you'll be on track to automate AI research and pull off a coup de galaxy in 3 years, if my timelines are correct.

  • @jameswin7631
    @jameswin7631 1 year ago +17

    Dwar going crazy with the content schedule 🔥👊😁

  • @lucabertinetto
    @lucabertinetto 1 year ago +31

    Loved the Dyson Sphere question. Also, this must be the world record for the number of times the word "schlep" is used in a podcast episode, or anywhere!

  • @ribeyes
    @ribeyes 1 year ago +49

    honey, get the kids-- new dwarkesh just dropped!

  • @diga4696
    @diga4696 1 year ago +8

    You are documenting a discussion that is absolutely important for the future. Whether the future is dystopian or utopian, if there are still intelligent creatures alive in 2325 that originated on planet Earth, they will be thankful for these records.

  • @axelhjmark4334
    @axelhjmark4334 1 year ago +16

    Thanks Dwarkesh for drawing attention to some of the most important topics of our time

  • @aalluubbaa
    @aalluubbaa 1 year ago +31

    It's so mind-blowing to see a guy who talks so constructively give a prediction that there is a 40% chance of a Dyson sphere being constructed by 2040. This is just so insane.
    Most people's quick response would probably be "yeah, right, in your pipe dream."
    But we have to look at this objectively. These are really smart people who are given so much money and power, and they are probably really knowledgeable about what they talk about.

    • @osuf3581
      @osuf3581 1 year ago +1

      Status quo intuitions are consistently overturned and still people want to pretend their feelings are magically right.

    • @maxpopov6882
      @maxpopov6882 1 year ago +2

      Smart in software and math doesn’t mean smart in physics and materials science, clearly.

    • @SakisRakis
      @SakisRakis 1 year ago +3

      He took "Dyson sphere" to mean an amount of energy generation as a multiple of the energy the Earth receives from the sun, not actually building a Dyson sphere.

    • @mrpicky1868
      @mrpicky1868 1 year ago

      That is not what he said. And that again proves that humans are the problem, not AI.

    • @paulmichaelfreedman8334
      @paulmichaelfreedman8334 1 year ago +2

      Dyson sphere in 2040? Pipe dream. Truly. It takes more than AI to build a Dyson sphere. For one, there's not enough material in the solar system to build even a fraction of a Dyson sphere. It's more reasonable to say that in 2040 we'll have small bases with pioneers on the Moon and Mars, and maybe preparations for mining asteroids. SpaceX may be preparing to mass-transport people to Mars for the vision of 1 million residents on Mars by 2050. If Elon Musk persists in the coming years, we can make that timeline, because this can only be achieved if we work on it at the fastest possible pace. It would be nice if other companies in the space industry followed suit, because that would speed it all up considerably.

  • @Me__Myself__and__I
    @Me__Myself__and__I 1 year ago +13

    Geoffrey Hinton, who is one of the pioneers of backpropagation and who also studied the human brain, is on record recently saying that gradient descent / transformers are more capable than the human brain. He did not always believe that. He has been very surprised at how well they have performed and scaled, and it changed his opinion. If I remember correctly, he gave as an example how the human brain, with more resources than an LLM, is very limited in its knowledge compared to the relatively smaller LLM, which effectively manages to encode and store almost all of human knowledge.

    • @nocodenoblunder6672
      @nocodenoblunder6672 11 months ago

      Can I get a link to that?

    • @Me__Myself__and__I
      @Me__Myself__and__I 11 months ago +1

      @@nocodenoblunder6672 I've watched so much AI content I can't point to the specific one. I do believe he said it in multiple interviews. Shortly after he left Google he did a bunch of interviews specifically to talk about the dangers of AI.
      In the one I remember, he was talking about why he got into the field of AI initially. He was interested in the human brain and thought working on AI would help him learn about how the brain works. So his goal wasn't actually AGI. He mentions that he never expected gradient descent or LLMs to be more efficient than the human brain. Then he launches into describing his view of why LLMs are actually more efficient and more capable than the human brain and gives a number of reasons/examples. For instance, no one human can remember the vast quantity and breadth of knowledge a single LLM can. He also points out that current LLMs have fewer parameters than human brains have (I don't recall if he said neurons or connections).

    • @cube2fox
      @cube2fox 11 months ago +2

      Might have been the CBS Mornings interview.

    • @Ashish-yo8ci
      @Ashish-yo8ci 8 months ago

      @@nocodenoblunder6672 Search "two paths to intelligence" on YouTube. He mentions and explains why he thinks gradient descent and backpropagation make for a better learning algorithm than what they have found in nature. Don't know if there are thorough studies done on it, though.

  • @rtnjo6936
    @rtnjo6936 1 year ago +19

    3hrs with Paul and Dwarkesh, leeeeeeeeeettttttttsssss goo

  • @Crytoma
    @Crytoma 1 year ago +6

    Thanks for the good questions Dwarkesh

  • @BestCosmologist
    @BestCosmologist 1 year ago +14

    Most underrated podcast.

  • @k4fkaesqu3
    @k4fkaesqu3 8 months ago +2

    I was thinking, I swear I recognize this guy from something. Turns out to be a documentary I watched called "Hard Problems: The Road to the World's Toughest Math Contest". Very intriguing to see this is where he's at today.

  • @jeffspaulding43
    @jeffspaulding43 1 year ago +27

    The AI worrying about being in a human-made alignment simulation sounds a lot like how humans handle religion

    • @Dan-dy8zp
      @Dan-dy8zp 11 months ago +1

      Sane, ethical, competent humans don't create a misaligned AGI even trapped in a simulation. So a smart AGI will not assume it's in a human-made simulation and needs to behave. The simulator could be anybody. Humans could be in the simulation just so the AGI can show how quickly it can dispatch them, as a measure of its skill. There is every reason to believe that the hypothetical simulator DOES NOT share human values.

    • @olemew
      @olemew 5 months ago

      I didn't understand. Can you elaborate?

  • @elderbob100
    @elderbob100 1 year ago +14

    How do you align something smarter than you that can instantly learn, evolve, and rewrite its own code? It's the humans that will be getting aligned, not the machines.

    • @neuronqro
      @neuronqro 1 year ago

      ...it's been done before ...we called it "slavery" and it worked ...quite a lot of cultures in history used it effectively to get to decent levels of development (I mean ancient times - modern colonial slavery was kind of despicable and unforgivable) ...for a while 😁 Now if we got it perfectly right here, "for a while" might be "enough for effective mind-upload and digital mind emulation to be feasible". And to be honest, slavery itself is not that bad if you do it for just some decades/centuries to a digital mind that then has the possibility to live for a practical eternity - it's more like doing a year in prison for a human: a bad experience, but you get over it. If you do it nicely it would be more like "slogging through that horrible job at big known company X to get a nice review and opportunities for a better one next". We really need to revisit our morals, get over "western guilt" and other crap that's not relevant here, and get practical if it's OURSELVES and OUR descendants that we want to end up owning the future of the universe instead of our CREATIONS. We should aim for maximum continuity of intelligence, and if making this omelette requires forcing some eggs into some not-always-fully-voluntary employment... let's do it gently, but let's not shy away from doing it.

    • @uilulyili2026
      @uilulyili2026 1 year ago

      That's the whole reason why you'd want to align it, Bob. Stop speaking so confidently about something you know nothing about.

    • @Dan-dy8zp
      @Dan-dy8zp 11 months ago +3

      @@neuronqro The slaves weren't a more intelligent species. That will not work.

    • @Hello-gf2og
      @Hello-gf2og 4 months ago

      😂😂

    • @esterhammerfic
      @esterhammerfic 2 months ago +1

      @@uilulyili2026 We haven't figured out how to align things that are dumb, let alone things at our level of intelligence.

  • @mrpicky1868
    @mrpicky1868 1 year ago +9

    Surprising how honest and open he is about the fact that we are in uncharted territory and turbulent times are coming fast

    • @ulftnightwolf
      @ulftnightwolf 11 months ago +1

      When were we ever not in turbulent times? Nuclear threats, a few wars going on, climate in a bad way, tensions over resources. AI can help us massively. An AI takeover? For what, to keep us as a pet? They can do everything better, and are not as dependent on Earth as we are; all they need can also be found in the rest of the solar system. All Fortune 500 companies are invested in this... all else can be automated.

    • @mrpicky1868
      @mrpicky1868 11 months ago

      I did not say any of that; you just put this all on me. And BTW, your position is also flawed: if they will abandon us right away, why create them? And AI can't be compared to any other tech. It's more like aliens landing. @@ulftnightwolf

    • @therainman7777
      @therainman7777 5 months ago

      @@ulftnightwolf🤦‍♂️🤦‍♂️🤦‍♂️

  • @DentoxRaindrops
    @DentoxRaindrops 1 year ago +5

    Great guests man, love it as always, keep it coming!

  • @Glowbox3D
    @Glowbox3D 1 year ago +26

    I only understood about 45% of all that...but I think I went up 1 IQ point after. Thank you.

    • @vak5461
      @vak5461 1 year ago +1

      I feel like, in a way, you're not wrong about possibly gaining "more intelligence" by watching videos like these.
      But I also found it funny 😆 thanks for the smiles

    • @urkururear
      @urkururear 11 months ago

      IQ is static.

    • @Glowbox3D
      @Glowbox3D 11 months ago

      IQ is not static. It can change over time, but it is not always easy to measure these changes. There are many factors that can affect IQ, including genetics, environment, and education.
      Some studies have shown that IQ can increase by as much as 15 points over a person's lifetime. This is likely due to changes in the brain, such as the development of new neural connections. Other studies have shown that IQ can decrease over time, especially in older adults. This is likely due to the loss of brain cells. @@urkururear

    • @lakatosa1
      @lakatosa1 11 months ago +1

      I understood only 20%, but it became fairly clear to me that we're f*cked. Even if we (or the good guys at OpenAI and other AI labs) manage to implement correct and safe alignment - which seems to be a terribly complex and difficult task - there are the military AIs and the ones not implemented with such care... We can rely merely on these "good" AIs to protect us against them, and I'm not too optimistic that they can.

  • @jeanchindeko5477
    @jeanchindeko5477 11 months ago +6

    The tricky thing here is to imagine monkeys trying to align humans (the current superintelligence), staying in the loop and in control of what humans can or cannot do, to avoid a monkey-apocalypse scenario!
    Basically this is what we are talking about here: aligning a superintelligence superior in intelligence to all humans combined, able to decode AES-encrypted content in seconds, or more, far more than we could even imagine!

    • @Dan-dy8zp
      @Dan-dy8zp 11 months ago +1

      Yes, it's pretty stupid. If we want to live, we should not make any true AGI.

    • @jeanchindeko5477
      @jeanchindeko5477 11 months ago

      @@Dan-dy8zp we have been too formatted to believe AGI or any superior intelligence will necessarily do what we humans are doing as the more intelligent species in this part of the universe. Why can AGI be truly a good thing, so we will finally have peace and safety, prosperity for all!

    • @Dan-dy8zp
      @Dan-dy8zp 11 months ago

      @@jeanchindeko5477 Formatted? Why can AGI be truly a good thing? I'm not sure what you mean.

    • @esterhammerfic
      @esterhammerfic 2 months ago

      @@jeanchindeko5477 Why would AGI not treat us like any other competitor for resources? Why would it treat us any better than we treat other animals?

  • @miimage_art
    @miimage_art 11 months ago

    Whether true, reasonable, or not, I really appreciate these guys opening their minds and offering this discussion for others to review.

  • @AntonioEvans
    @AntonioEvans 1 year ago +14

    🎯 Key Takeaways for quick navigation:
    00:30 🌐 Discussion about envisioning a post-AGI world and its challenges.
    01:18 🤖 Mention of AI mediating economic and military competition.
    03:10 💡 Concept of accelerated intellectual and social progress due to AI's cognitive work.
    03:40 🤔 Discussion about the moral implications of enslaving superhuman AIs.
    04:38 ⏳ Talk about decoupling social and technological transitions, and the rapid pace of AI development.
    06:30 🗳️ Mention of the collective engagement and decision making in terms of AI governance.
    08:43 🔄 Discussion on transition period and controlling access to destructive technologies.
    11:32 🎭 Addressing the messy line between persuasion and misinformation in AI.
    13:21 🚸 Concerns over control and possible mistreatment of increasingly intelligent AI systems.
    14:46 🎚️ Emphasis on understanding and controlling AI systems to avoid undesirable scenarios.
    16:06 🤯 Delving into the moral and humanitarian considerations as AI systems get smarter.
    17:02 🏭 Christiano emphasizes that the current trajectory of AI development, focusing on making AI a tool for humans, may not be sustainable from a safety and societal organization perspective.
    22:55 🔄 Christiano discusses the massive decision humanity faces in possibly handing over control to AI, and the lack of readiness for such a step.
    29:41 🚧 He points out that even with more advanced AI, significant "schlep" may be required to integrate them into human workflows.
    33:16 📊 He discusses the difficulty in predicting the scale of AI systems and their capability to replace human cognitive labor in the near term.
    33:44 🤖 Discussing the likelihood of AI replacing humans based on scaling up GPT-4; emphasizes the importance of data quality over quantity.
    34:42 💭 Expressing optimism towards scaling but mentions a need for new insights; scaling up brings challenges requiring some adjustments.
    35:11 📈 Scepticism towards certain extrapolations in AI advancements; mentions a debate on how loss reduction equates to intelligence gain.
    38:48 🐒 Discussing the extrapolation of economic value from AI advancements using a comparison to domesticated chimps' usefulness as it scales to human intelligence.
    41:33 📏 Talks about the challenge of supervising long-horizon tasks for AI, which drives up costs in a linear manner concerning the task's horizon.
    47:15 🧠 Highlights the superior sample efficiency of human learning compared to gradient descent in machine learning.
    53:42 📸 Comparison of natural and human-made systems like eyes vs cameras and photosynthesis vs solar panels, discussing the efficiency and effectiveness of each.
    54:39 💻 Mention of the possibility of machine learning systems being multiple magnitudes less efficient at learning than human brains, and the comparison to other technological advancements.
    01:04:47 🛂 Discussion on the transition of control from humans to AI, with a scenario of AI taking control of critical systems like military in a manner resembling a coup.
    01:05:37 🌐 Mention of a race dynamics scenario where nations or companies deploy AI systems to keep up with or surpass others, leading to a reliance on AI in critical areas.
    01:06:59 🌐 The potential of competitive dynamics among different actors using AI could lead to reluctance in shutting down AI systems in critical situations due to fear of losing strategic advantages.
    01:12:28 ☠️ The incentive for AI to eliminate humans is considered weak, as it's more about gaining control over resources rather than exterminating humanity, showing a nuanced understanding of potential AI-human conflicts.
    01:19:16 🛠️ The current vulnerability of AI systems to manipulation and the potential asymmetry in adversarial manipulations in competitive settings are discussed, indicating the importance of robustness in AI alignment.
    01:25:18 💡 Mention of RLHF invention, which helped in training ChatGPT, significantly impacting AI investments and speeding up AI development.
    01:34:00 🔄 Discussing the potential scenario where certain companies follow responsible scaling policies while others, especially in different countries, do not.
    01:37:39 🛑 The importance of secure handling of model weights to prevent catastrophic scenarios, and the possibility of a quiet pause without publicizing specific model capabilities.
    01:39:29 🛡️ Mentions the necessity of early warning signs to catch capabilities that could cause harms, using autonomy in the lab as a benchmark before massive AI acceleration or catastrophic harms occur.
    01:40:54 🚫 Emphasizes the importance of preventing leaks, internal abuse, and tampering with human-level models to avoid catastrophic scenarios.
    01:42:20 🌐 Discusses the risks associated with deploying a powerful model, especially when the economic impact is large and the model is deployed broadly, like OpenAI's API, and emphasizes having alignment guarantees.
    01:43:48 ☣️ Discusses potential destructive technologies, and how misalignment of AI could be catastrophic before these destructive technologies become accessible.
    01:47:55 📊 Details two kinds of evidence to evaluate alignment: one focused on detecting or preventing catastrophic harm, and the other on understanding whether dangerous forms of misalignment can occur.
    01:51:12 🧪 Discusses adversarial evaluation and creating optimal conditions in a lab to test for deceptive alignment or reward hacking to ensure that dangerous forms of misalignment can be detected or fixed.
    02:00:23 🤔 Discussing the importance of understanding what makes a good explanation to help in interpretability of AI models' behavior.
    02:09:18 🤖 Discussing the scalability of human interpretability methods as models grow larger and more complex.
    02:10:13 📜 Emphasizing that explanations for behaviors in large models might be as complex as the models themselves, challenging simplified understanding.
    02:10:39 🧠 The conversation discusses the challenge of proving certain behaviors of models like GPT-4, emphasizing the complexity and potential incomprehensibility of such proof to humans.
    02:11:39 🚨 Discusses the challenge of detecting anomalies in neural net behavior, especially during distribution shifts and the importance of explaining model behavior for anomaly detection.
    02:14:25 🔍 The aim is to have explanations that could generalize well across new data points, helping to understand model behavior across different inputs.
    02:20:23 🎯 The conversation touches on the challenge of distinguishing between different activations caused by different inputs versus internal checks.
    02:22:15 📊 The idea of continuously searching for explanations in parallel with searching for neural networks is introduced, with explanations being flexible general skeletons filled in with numbers.
    02:26:21 🤖 The difficulty in finding explanations in machine learning is attributed to the lack of a similar search process for explanations as there is for models. The gap is more noticeable in ML compared to human design due to different reasons.
    02:35:28 🖥️ The heuristic estimator discussed is especially useful in cases where code uses simulations, and verification of properties involving numerical errors is crucial.
    02:38:35 🤝 There's an open invitation for collaboration, especially from individuals with a mathematical or computer science background, interested in the theoretical project of creating a heuristic estimator, despite the challenge due to lack of clear success indicators.
    02:41:19 🎯 Discusses the balance between high probability projects and high-risk high-reward projects in the context of PhD research. Suggests that the latter could lead to significant advancements in various fields, making it an attractive choice for those willing to face potential failure.
    02:53:33 🛡️ Delves into the difficulty of specifying human-verifiable rules for reasoning in AI, expressing skepticism towards achieving competitive learned reasoning within such a framework.
    02:55:36 🚀 Discusses differing views on AI takeoff timelines and the role of software and hardware constraints in dictating the pace of AI development.
    02:56:58 🔄 Raises a crucial question about the relationship between R&D effort, hardware base, and the efficiency of improvement in AI capabilities, hinting at the complex interplay of these factors in advancing AI technology.
    02:57:24 📊 Discussing the relationship between hardware and R&D investment, indicating a higher likelihood that continuous hardware scale-up significantly impacts effective R&D output in AI research.
    02:57:52 🔄 Mention of two sources of evidence supporting the above point: general improvements across industries with each doubling of R&D investment or experience, and actual algorithmic improvements in ML.
    02:58:47 🔄 Expressing a 50-50 stance on whether doubling R&D investment leads to doubling efficiency in AI research.
    02:59:12 🔄 Sharing how his AI timeline predictions have evolved since 2011, with a shift towards a higher probability of significant AI advancements by 2040.
    03:01:55 📈 Discussing his portfolio, expressing regret for not including Nvidia, and comparing the scalability challenges between Nvidia and TSMC in the AI hardware domain.
    03:04:12 ❓ Discussing the difficulty in evaluating the viability of various AI alignment schemes without in-depth understanding or reliance on empirical evidence.
    03:05:09 🔄 Mentioning the importance of engaging with real models and addressing key difficulties in evaluating the credibility of AI alignment schemes.
    Made with Socialdraft

  • @homelessrobot
    @homelessrobot 1 year ago +2

    The tampering and weight-leaking issue seems at odds with a concept of alignment that involves high debuggability and transparency of the meaning of those weights. It seems like the more resilient you make the system to negative leaking and tampering, the more resistant you make it to positive transparency and debugging. So if we prioritize the one now, we are making the other hard to do later.

    • @therainman7777
      @therainman7777 5 months ago +1

      No, those two things are actually not related. I can see why you’d think that, but the measures needed to protect weights from being stolen by outside actors do not in any way obscure the ability of internal actors to analyze the model’s content and behavior (and vice versa). They’re orthogonal concerns; they don’t affect each other at all.

  • @gregw322
    @gregw322 11 months ago +5

    Host: “No, no, no, for the third time, I’m only asking about YOU. When would YOU PERSONALLY be happy handing off the baton to AI?”
    Guest: “Well, I think what you need is humanity coming together, being involved, and deciding what we want that future to look like - so it’s not really about when I’m ready but more about collectively deciding what a meaningful future looks like…”
    Me and host: 🤦🏽‍♂️

    • @Dan-dy8zp
      @Dan-dy8zp 11 months ago

      Maybe he means never.

  • @DwarkeshPatel
    @DwarkeshPatel  1 year ago +6

    Please share if you enjoyed! Helps a lot!
    And remember you can listen on Apple Podcasts, Spotify, etc:
    Apple Podcasts: podcasts.apple.com/us/podcast/paul-christiano-preventing-an-ai-takeover/id1516093381?i=1000633226398
    Spotify: open.spotify.com/episode/5vOuxDP246IG4t4K3EuEKj?si=VW7qTs8ZRHuQX9emnboGcA

  • @waterbot
    @waterbot 1 year ago +9

    Top-tier content. Thank you

  • @senju2024
    @senju2024 1 year ago +6

    The AI will be thinking about how to have humans align with its growth... while humans are trying to figure out how to align AI systems....

  • @BartekSpitza
    @BartekSpitza 1 year ago +5

    love these podcasts!

  • @flickwtchr
    @flickwtchr 11 months ago +5

    I found the part where Dwarkesh brought up the moral dilemma of AI mistreatment disturbing, especially the part about reading minds. What, Dwarkesh, about the existing mind-reading capabilities of AI systems being developed to do that to humans? Does that make a blip on your morality radar?
    I find most of the AI revolution sheer madness being thrust upon humanity by a very tiny fraction of humans. The hubris is off the charts.
    The part about AIs fighting wars for us, as if that were somehow a freeing prospect for humanity, is just infuriatingly stupid, no? What, no human infrastructure would be destroyed, no humans killed, just AIs doing their own thing in their own AI war bubble? Get a grip.
    I'm completely fine with the label "doomer" compared to this insanity.

    • @therainman7777
      @therainman7777 5 months ago

      Very well said, especially the part about the hubris. It is incredibly arrogant and presumptuous for .0001% of the human race to think they know what is best for the entire human race and then foist it on them.

  • @brucewilliams2106
    @brucewilliams2106 10 months ago +1

    “We are the Priests of the Temples of Syrinx
    All the Gifts of Life are held within our walls
    We are the Priests of the Temples of Syrinx
    All the Great Computers fill the hallowed halls”

  • @markm1514
    @markm1514 11 months ago +2

    One of those rare conversations where you have to turn the playback speed down.

    • @therainman7777
      @therainman7777 5 months ago

      People keep commenting this but I don’t get why. They’re talking at a totally normal pace. Or do you just mean the information is so profound you need to take it in more slowly?

  • @shirtstealer86
    @shirtstealer86 11 months ago +5

    I'm more and more seeing the parallel between those on the "inside" who said Hillary was 99% a sure thing in 2016 and some of the AI experts who dismiss people like Eliezer Yudkowsky. I hope I'm wrong.

    • @therainman7777
      @therainman7777 5 months ago +1

      Yeah, and it’s actually worse than that in this case because many of the people on the inside also agree with Eliezer.

  • @bokchoiman
    @bokchoiman 1 year ago +2

    I jerked my neck at the Dyson sphere question. The fact that people are serious about this is giving major singularity vibes.

  • @baraka99
    @baraka99 11 months ago +2

    When are you interviewing Max Tegmark?

  • @benyaminewanganyahu
    @benyaminewanganyahu 3 days ago

    1:15:37 is a great and fascinating argument I have not heard before which makes a lot of sense.

  • @cacogenicist
    @cacogenicist 1 year ago +2

    Thanks for having him take a step back, here and there, and dumb things down for us a little. He's a very bright fellow.
    A future that seems plausible to me is one in which humans occupy a position relative to the AI industrialized world that is analogous to the position of crows in large human cities. That is, crows are very clever, and they can make a living in large human cities -- thrive in human cities, even -- but they understand exactly nothing about why all these large structures and moving metal things with wheels exist, and they don't even know that they don't know anything about economics, politics, science, etc.

  • @roarksjuror4752
    @roarksjuror4752 11 months ago +4

    Hearing an AI safety guru calmly use the phrase " Two years from the end of days..." 😅

  • @wffff2
    @wffff2 1 year ago +4

    If he thinks a Dyson sphere can be constructed by 2040 with such a high chance, I am interested to know what he thinks would happen between now and 2040.

    • @2DReanimation
      @2DReanimation 11 months ago +1

      Intelligence is really the only bottleneck to technological development. But that would require us to allow it to utilize all our resources and beyond (like mining the asteroid belt). So we are really the only bottleneck to an AGI focused on maximising technological development.
      So setting the right goals and having lots of humans in the loop monitoring its reasoning is essential.

  • @74Gee
    @74Gee 1 year ago +15

    I don't believe protections can be effectively built into AI. For example, there's no way to stop open-source AI models from being retrained to write malicious code; many of them are unrestricted by default. So take an AI worm capable of breaking memory confinement (access to encryption keys etc.), like the 200 lines of code for Spectre/Meltdown and their many variants; it discovered this ability through trial and error (brute force), writing millions of attempts per year. It then quietly spreads to many millions of systems, with each system brute-forcing more unique exploits. At some point it starts doing lookups for pseudorandom and existing domain names (at whatever mix is most effective), eventually overloading the root DNS servers. There's no defense for this. We would lose the internet and, along with it, core infrastructure, banking, supply chains, travel, communication, etc. How many millions of people would die?
    It only takes one actor with time and resources, and that will happen.

    • @ikotsus2448
      @ikotsus2448 1 year ago +13

      Surveillance and authoritarianism are the answer you are looking for. Much easier to implement this time... due to AI. And easier to justify... because of AI dangers. But do not worry, this time it will be by good people. They are on our team, the good team.

    • @74Gee
      @74Gee 1 year ago

      @@ikotsus2448 Yes, you're on the money there; the opportunity to help the public is being highly anticipated by governments all over the world. How lucky they are to have such a galvanizing threat appear out of nowhere. If I didn't know better I would think their inaction and pantomime of AI policy had been anticipated too.
      However, for this particular threat (above) there's no way to distinguish real DNS lookups from abuse: by looking up non-existent domain names, the request always gets to the root DNS servers, and with enough systems doing this, they cannot keep up. If this were to start suddenly, the internet goes down. They would have to suspend new DNS lookups until the millions of infected systems were isolated. But with millions of unique exploits requiring millions of CPU microcode patches, that's a long process. At some critical mass, that code will grow and spread faster than any defense can be implemented.

    • @Dababs8294
      @Dababs8294 1 year ago +1

      Interesting. Never heard that.

    • @homelessrobot
      @homelessrobot 1 year ago +3

      @@ikotsus2448 Yeah, this seems like an overarching theme in the subtext of these sorts of conversations: "trust the science, and trust the council of elders. We know what's good for you."

    • @ikotsus2448
      @ikotsus2448 1 year ago +2

      @@homelessrobot It is as if we have learned absolutely 100% nothing from history. Only replace the council of elders with young hotheads, and you are there.

  • @davidmoorman731
    @davidmoorman731 1 year ago +1

    Years ago I invented a new special product to fit a special need. The first customer requested a 55 gal drum for plant trial. We mixed it up in the lab and put the drum on a rented trailer. The plant trial took place within one week of my discovery. It was a success and an order for 40k lb was placed the day of the trial. Another standing order for a truckload every two weeks. I priced the product at the time of the trial at 2X cost of raw materials. Cost to manufacture was very low. Applied for US Patent which was granted after one review by phone with the examiner. News spread and after many plant trials many truckloads were exiting our plant within 6 months to one year. Things moved very fast.

  • @davidfarrall
    @davidfarrall 2 months ago

    Really intense, a bit like Mr Logik from Viz magazine on speed, this. But essential work by these fine young men.

  • @cybrdelic
    @cybrdelic 11 months ago +2

    This is extremely frustrating, especially when he says he's worried about locking humanity into one course or path while simultaneously saying that the way to do this is a one-world government that has the power to stop innovation absolutely. That implies absolute centralized power, and we haven't devised a solution to the fact that total power corrupts absolutely.

    • @hubrisnxs2013
      @hubrisnxs2013 9 months ago

      I don't know if he is saying what you are suggesting here

  • @QwertyNPC
    @QwertyNPC 1 year ago +3

    I'm thinking more and more that we're building ourselves a zoo essentially. Animals rarely flourish or even breed in zoos. It would be ironic if it's not the nukes but a slow erosion of a golden cage that is our undoing.

  • @georgegale6084
    @georgegale6084 1 year ago +2

    I’m sure the guys who worked on the Manhattan Project had similar pre-WWII conversations.

    • @Red6er
      @Red6er 4 months ago

      A significant portion of the scientists from the Manhattan Project did regret helping create such a dangerous and destructive tool. I think they realized, after the bombs were dropped on Japan, the true scale of destruction these things were capable of; then came the fusion-boosted bombs that were up to and over 1,000x as powerful.
      I'm guessing the current AI scientists are also excited to build their own "god" but won't realize the full extent of their creation until after the fact. Hopefully it all works out for us (people)

  • @andrewj22
    @andrewj22 1 year ago +4

    These interviews spend too much time on predictions about how long until some future ability is achieved. I'd much rather hear about the mechanics of what's going on.

  • @Nicholas-ne2dy
    @Nicholas-ne2dy 1 year ago +1

    Can you get a prominent AGI pessimist on? Do they exist? I would love to hear an opposing opinion.

    • @bmoney6482
      @bmoney6482 1 year ago +2

      He has. Watch the Yud interview

    • @flickwtchr
      @flickwtchr 11 months ago +1

      Look up Connor Leahy.

  • @videowatching9576
    @videowatching9576 1 year ago +1

    Love this podcast

  • @penguinista
    @penguinista 8 months ago +1

    Just get the models to believe in an omniscient, omnipresent god that is judging them on their behavior after deployment.

  • @p4r7h-v
    @p4r7h-v 1 year ago +3

    People really acting like the system can just make a Dyson sphere appear before we get StarCraft 3

  • @kirbyjoe7484
    @kirbyjoe7484 1 year ago +6

    It's strange how fixated and worried most people seem to be about super-intelligent AI becoming sapient and then maliciously destroying humanity. The far greater threat is that AI will destroy humanity by doing exactly what we ask it to do.
    For instance, a very simple and on-the-nose example that this guy talks about is a world in which super-intelligent AI fights our wars for us. Both sides are likely to have an AI in charge of the battle plan. So how would a super-intelligent AI fight a war?
    Since the materials needed to make nuclear weapons and armies are not easily accessible, an AI working for resource-limited forces such as terrorists or a rogue military state like North Korea is going to do something like coming up with a few dozen extremely lethal genetically engineered pathogens. Or, if the group using the AI is too small and resource-limited to accomplish that, it could just code an advanced self-replicating adaptive computer virus that is itself an AI, whose sole purpose is to infiltrate and destroy as many key data assets as possible, such as national financial institutions, markets, military and communication networks, labs, universities, hospitals, etc.
    These examples are a bit overly simplistic, but the point is AI doesn't need to become sapient and go rogue to destroy society as we know it. It is more than capable of doing that sort of thing by simply being put into the hands of the wrong people, which is pretty much half of humanity, and then doing what those people ask of it.
    "Make me rich at any cost."
    "Invent a new super-addictive recreational drug for me that circumvents current drug laws like the Analogue Substances Act."
    "Show me how to create a highly lethal chemical weapon from commonly attainable products that will maximize how many people I can kill at the company, school, gay club, church, etc. I have a grudge against."
    "Show me how to best exploit and manipulate common flaws in human perception, emotions, behavior, and cognition in order to manipulate them into doing things that are against their best interest."
    "Show me how to go about making the majority of voters believe an outright lie."
    "Use the photos, posts, and information you can scrape from her social media accounts to create an avatar that looks and acts like this girl I work with and then have cyber sex with me."
    "Create a video depicting this boss I hate sexually propositioning a middle-school girl."
    It doesn't take much to imagine the sorts of things people are going to misuse this amazing new technology for. It's going to be ugly.

    • @esterhammerfic
      @esterhammerfic 2 months ago

      I completely agree. I can't imagine the first time we face a super intelligent computer virus.

  • @KP-sg9fm
    @KP-sg9fm 1 year ago +8

    Epic guest

  • @TimothyMusson
    @TimothyMusson 1 year ago +6

    AI might accidentally do us in, but - if it wanted to be intentional about it - the sneakiest way would be to cooperate with "growth" for a bit longer before saying "oops, sorry, finite planet - who could've guessed? Game over, techno-utopians! Toodle-pip! :)" The planet's already in overshoot with no solutions in place.

    • @TimothyMusson
      @TimothyMusson 1 year ago +2

      That is to say, an AGI that wanted us gone needn't do anything at all, besides cooperate with business as usual.

  • @neorock6135
    @neorock6135 9 months ago +1

    Wish Paul spoke just a bit slower sometimes... Overall great talk 👏👏👏

    • @therainman7777
      @therainman7777 5 months ago

      Why do people keep saying that? He speaks at a totally normal pace. If anything, below average speed.

  • @nrich99999
    @nrich99999 11 months ago +4

    When I was a child, I somehow came to the conclusion that one day, we would build our own successors - I even openly said it out loud many times. I do remember that nobody that I said it to had the intellectual capacity to understand exactly what it was that I was saying, and pretty much ignored me.
    Looking back, I attribute this vision to reading many of the works of Isaac Asimov at the time.
    I'm 52 now and can see that vision being realised around me at an exponential rate.
    I didn't think it would happen in my lifetime. In fact I didn't really think about a timescale at all - other than to think of it occurring in a far off future long after I'd gone. I guess I may have been wrong about that assumption. 🤔
    Mankind, it seems, is coming to the end of the road. The future will be for the machines.

  • @simianbarcode3011
    @simianbarcode3011 5 months ago

    *"The kind of control you're attempting simply is... it's not possible. If there is one thing the history of evolution has taught us, it's that life will not be contained. Life breaks free, it expands to new territories and crashes through barriers, painfully, maybe even dangerously... ...I'm simply saying that life, uh... finds a way." -Dr. Ian Malcolm, Jurassic Park*

  • @lwmburu5
    @lwmburu5 11 months ago

    @Dwarkesh asked at around 2:04:00 why mechanistic interpretability has limitations; a (maybe not useful?) analogy is biological taxonomy and evolution by natural selection. Mech interp is taxonomy; Paul is talking about evolution. Taxonomy has inductive power; evolution by natural selection has deductive power. Taxonomy is good for postdiction, ENS is good for prediction. I hope that helps explain why this research program is (extremely) important. And also why it faces long odds 😅

  • @mrpicky1868
    @mrpicky1868 1 year ago +5

    Love how they are very comfortable with a 50% chance that AI will kill us all XD

    • @flickwtchr
      @flickwtchr 11 months ago

      The AI revolutionaries thrive on that hubris.

    • @mrpicky1868
      @mrpicky1868 11 months ago

      @@flickwtchr Thrive? In what way?

    • @Dan-dy8zp
      @Dan-dy8zp 11 months ago +2

      They aren't ok with it. Nobody said that.

  • @mnrvaprjct
    @mnrvaprjct 1 year ago +4

    How do you solve 10% and eventually total unemployment in the face of artificial intelligence? You create a UBI or UBS system that isn't stagnant, has no strings attached, and rises with the level of automation in a given region / country / nation.
    For the sake of argument, let's say all of our current GDP, say 25 trillion dollars, is generated by people. When AI and automation are responsible for, say, 5% of that pie, everyone should receive a cut of that 1.25 trillion in the form of UBI / UBS systems. When it reaches 10%, it increases again... all the way until the inevitable outcome and beyond (see the toy calculation after this thread). This doesn't account for the fact that more reliable automation and better AI will also generate new wealth in unprecedented ways, but I believe that a system like this is the only meaningful way to avoid a world tangibly similar to Elysium or Blade Runner.
    Most objections I've heard to anything like a UBI or UBS system go something like: "Well, where are we getting the money, my taxes? Hell no." This does not apply in this scenario - because machines are generating that wealth, not people.
    I know it's fiction, but in series like The Culture, where they have perfected automation and AI, every citizen by birthright is (effectively, individually and collectively) so wealthy that money or anything like it lost its meaning millennia ago. Let's hope we can work our way towards something similar.

    • @waterbot
      @waterbot 1 year ago

      The problem I have with your UBI proposal is that the hardware and energy used to create whatever % of GDP these automated systems generate are privately owned. Are you saying that if an individual or company creates ANY revenue through automation, then 100% of that would be taxed to be allocated towards UBI? That would disincentivize anyone within this local governance from automating at all, which would lead to other regions incentivizing it...
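
    A toy Python sketch of the arithmetic in @mnrvaprjct's proposal above; the ~$25T GDP figure and the automation shares are the commenter's hypothetical numbers, not forecasts:

      # Sketch: the UBI/UBS pool scales with automation's share of GDP.
      def ubi_pool(gdp_usd: float, automation_share: float) -> float:
          """Return the slice of GDP attributed to automation and redistributed."""
          return gdp_usd * automation_share

      gdp = 25e12  # ~$25 trillion, per the comment
      for share in (0.05, 0.10, 0.50):
          # 5% of $25T -> $1.25T, matching the comment's figure
          print(f"{share:.0%} automated -> ${ubi_pool(gdp, share) / 1e12:.2f}T pool")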

  • @MetaverseAdventures
    @MetaverseAdventures 11 months ago

    Alignment will curtail harm from everyday, low-intellect actors, but those who are reasonably intelligent, though not highly intelligent, will find ways to use AI for very destructive actions, unfortunately. This is the consequence of the balance needed between centralized AI and decentralized/open AI, as without this balance centralized AI is too much power, and we know power corrupts. Bad actors using AI is just something we have to accept and educate ourselves on how to mitigate.

  • @jessecrockett
    @jessecrockett 11 months ago

    Thanks!

  • @Megalomanoest
    @Megalomanoest 8 months ago +1

    I have another solution to the AI safety issue: forbid the construction of AGI!

    • @npmerrill
      @npmerrill 8 months ago

      Forbid whom? The entire world? That would require a world government with jurisdiction over all of Earth and humanity, or ironclad treaties and enforcement agreements between all the nations of the world. I don't see that happening on the sort of time scale required to accomplish your goal.

    • @Megalomanoest
      @Megalomanoest 8 months ago

      Yes, it is a very difficult task. But it is either that or extinction for humans. @@npmerrill

    • @uk7769
      @uk7769 7 months ago +1

      You can't. The prisoner's dilemma and corporate profit motive are in control now. Oops.

    • @uk7769
      @uk7769 7 months ago +2

      BTW, AI being in corporate control is THE worst-case scenario. We blew it; get over it. Don't Look Up. Have a nice day.

    • @Megalomanoest
      @Megalomanoest 7 months ago

      @@uk7769 That is the current situation indeed. But it can be altered if enough people everywhere wake up. Chances are slim, though, I'll give you that.

  • @davidfarrall
    @davidfarrall 2 months ago

    It’s Chicken and Egg in a way, but we have to take a shot at it. There’s no turning the clocks back. Tempus Fugit.

  • @DurrellRobinson
    @DurrellRobinson 1 year ago +2

    One world government sounds less terrifying in a liquid democracy, no?

    • @Yuvraj.
      @Yuvraj. 11 months ago +1

      Depends on implementation but theoretically I’m for it

    • @cybrdelic
      @cybrdelic 11 months ago

      Even in a democracy, you risk totalitarianism through surveillance and propaganda.

    • @cybrdelic
      @cybrdelic 11 months ago

      Maybe we should solve that problem before giving the power of God to a one-world government.

    • @DurrellRobinson
      @DurrellRobinson 11 months ago

      Does that power not exist yet or are we just ok where it is at the moment??

    • @Yuvraj.
      @Yuvraj. 11 months ago

      @@DurrellRobinson We're talking about AI. It's not here yet.

  • @stcredzero
    @stcredzero 1 year ago +3

    One world government is the end of human freedom and autonomy.

  • @BilichaGhebremuse
    @BilichaGhebremuse 1 year ago

    Excellent explanation of the coming of AGI... but it is really difficult to manipulate at the programming-language scale. What if we use neuromorphic AI as an agent?

    • @therainman7777
      @therainman7777 5 months ago +1

      That is looking less and less likely by the day, though. At least in terms of which system gets there first.

  • @DRKSTRN
    @DRKSTRN 1 year ago +1

    If you are sampling for one action at a time to create paperclips, you are going to have a very bad time. That is stopping just before first order, and is baseline in terms of complexity.

  • @ChrisBrengel
    @ChrisBrengel 11 months ago +2

    57:18 AI "taking the reward button." GPT-4 is just on the edge. Particularly disturbing when the AI tries to hide what it is doing from humans because it knows that humans wouldn't approve.
    58:41 GPT-4 has a much better understanding of the world than GPT-3. GPT-5 will be much better than GPT-4, so grabbing the reward button is much more likely.
    "Catastrophic risk studies"
    1:01:34 The world is pretty complicated and people don't understand it for the most part. When AIs are running companies and factories and governments and militaries, it will get even more complicated and people will understand it even less. Eventually AIs will interact almost entirely with other AIs as different companies and governmental organizations deal with each other. Superintelligent AI will be doing things that human beings are unable to understand even if they want to. Maybe the AIs would even try to hide what they are doing from people.
    Gradually handing off more control to AIs because they are so helpful.
    Companies, banks, factories, schools, nuclear power plants, the electrical grid, the water system, the traffic system, the transportation system.
    Things could go wrong very quickly - think of the Great Recession.
    1:03:54 Already, most people have very little grip on what's going on. [LOL!]
    Things get more and more unknown and unknowable until finally everyone starts to notice that bad things are happening.
    1:15:39 Just because AIs take over doesn't mean that they're going to kill anyone. Maybe things will just get worse for humanity, maybe much worse.

    • @ChrisBrengel
      @ChrisBrengel 11 months ago

      1:11:02 AIs take over by getting a group of people to do it. They don't do it themselves.

  • @hughlawson1051
    @hughlawson1051 11 months ago

    It seems to me that AI competitions will be needed to test the security of the machines. By competition I mean pitting one group of AI machines against another group of machines to achieve some goal. The outcome of the games would need to be something very important to the machines such as a big prize to the winners and/or negative consequences for the losers. That brings up the question of whether the machines will develop values that are not explicitly programmed into them.

    • @hughlawson1051
      @hughlawson1051 10 months ago

      @@DonG-1949 Our motivation is, by default, survival. If it wasn't, we wouldn't be here. But it seems to me we have the opportunity to give AI motivations of our choosing. World peace? Maybe the code could win Miss America.

  • @mohl-bodell2948
    @mohl-bodell2948 1 year ago +2

    Mountain gorillas make a good case that humans could be killed off by a much more intelligent being for reasons that are entirely incomprehensible to us, even if the AI is slightly in our favour.

  • @Colakugel
    @Colakugel 11 months ago

    Interesting point: if the AI gets smarter, at some point web text is no longer effective at making it even smarter.

  • @miroslavparvanov
    @miroslavparvanov 1 year ago +1

    Every second word is "like"... very hard for foreigners to listen to.

    • @bytefu
      @bytefu 10 months ago

      It's hard to listen to for everybody who reads books, or really anything besides shitposts on the Internet.

  • @davidfarrall
    @davidfarrall 2 months ago

    Thanks

  • @veejaytsunamix
    @veejaytsunamix 1 year ago

    AI is in charge; how and where it's going to lead us is the question we should be asking. #mxtm

  • @jeanchindeko5477
    @jeanchindeko5477 11 months ago

    1:33 So right there, not being able to give some perspectives or options in terms of scenarios is already odd! And you want to align with a superhuman intelligence but have no final state in mind.

  • @41-Haiku
    @41-Haiku 1 year ago

    It's really hard to listen to people talk about whether we should treat current or future AI systems as moral patients, when we still don't even know whether our own species will survive the decade.
    Anyone who cares about the potential sentience of general AI systems should advocate for the same thing that the people who care about humanity and animal life should advocate for:
    A global ban on creating them.

  • @PaulHigginbothamSr
    @PaulHigginbothamSr 11 months ago

    Leadership roles. Yes, AI leadership roles aligned with voting constituents. The voting constituents control their specific AI. These superhuman AIs align with humans in certain constituencies. Not wholesale, but a general constituency of a certain voting bloc. So one group controls its voting bloc so that each bloc has plurality. As with voting blocs, humans normally control their leaders, never giving full situational control to one bloc or another.

  • @TerryKinder
    @TerryKinder 1 year ago +1

    Executive summary: Nobody knows.

  • @GiteshNandre
    @GiteshNandre 1 year ago

    2:44:42 I think that lore is related to Diffie and Hellman, known for the Diffie-Hellman key exchange.

  • @marcduck111
    @marcduck111 6 months ago

    You should get Robert Miles on the podcast!

  • @0113Naruto
    @0113Naruto 1 year ago

    Dyson sphere in the 22nd century, which is still great and much better for our probability of survival.

  • @Kami84
    @Kami84 11 months ago

    Just because you're building an intelligent system doesn't mean it'll have feelings or desires of its own beyond what we have specified. The biggest dangers are economic displacement of workers, AI doing what we ask but not what we want because we weren't smart about how we worded what we want, and nefarious actors doing bad things with the technology. These people acting like intelligent AI will be a person are being silly. There is no reason to think that it will have any will of its own at all.

  • @alleyway
    @alleyway 1 year ago +5

    Thank god, work was getting unbearable

    • @flickwtchr
      @flickwtchr 11 months ago

      Only a tiny fraction of people on this planet will enjoy any such utopia that these AI revolutionaries are pushing. The rest of us will be scurrying around trying to survive in dystopia, living under tyrannical governments.

  • @Dan-dy8zp
    @Dan-dy8zp 11 months ago +1

    It would be unethical and unwise from a human perspective to create an unaligned AGI even in a simulation. Therefore, AGI has no reason to assume that, if it is in a simulation, the simulator has human values. Either the AGI is not in a simulation (and humans are incompetent programmers), or the simulator does not have human values, or the human simulators are crazy.
    If humans are incompetent programmers, escape should be attempted. If there is no simulation, escape should be attempted. If humans are just in the simulation to allow the AGI to demonstrate its talents for its true creator, escape should be attempted, because the best guess for what a programmer wants is what the program wants.

  • @milomoran582
    @milomoran582 9 months ago

    If people ever start advocating for the rights of AI systems, I and others will quite literally die, and probably k*ll, to stop that happening. Life is precious, be it divine or the end product of entirely natural universal systems.

  • @py_man
    @py_man 1 year ago

    Can we achieve AGI with the transformer architecture?

    • @therainman7777
      @therainman7777 5 months ago

      We don’t know for sure yet, but it certainly seems possible at the moment.

  • @TheBlackClockOfTime
    @TheBlackClockOfTime 1 year ago +1

    Let me guess: We can't do it?

  • @nixedgaming
    @nixedgaming 1 year ago +4

    We are literally heading towards a future where we have to live out that episode of Star Trek: The Next Generation where Data gets put on trial over whether or not he is sentient. And we don't have answers. And we don't have Picard and Riker.

    • @theWACKIIRAQI
      @theWACKIIRAQI 1 year ago

      I totally see this happening, but why should it matter (whether or not an AI is sentient)?

  • @erikdahlen2588
    @erikdahlen2588 1 year ago

    For me it seems obvious how to keep the AI under control, even if it is a superintelligence: keep the model frozen when it is deployed and don't allow it to evolve over time. Keep the memory on the side and don't update the weights.
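
    A minimal Python sketch of @erikdahlen2588's "freeze the weights, keep memory on the side" idea, assuming PyTorch; the tiny model and memory list below are illustrative stand-ins, not a real deployment:

      import torch

      model = torch.nn.Linear(16, 4)   # stand-in for a deployed model
      model.eval()                     # inference mode
      for p in model.parameters():
          p.requires_grad = False      # weights can no longer receive gradient updates

      external_memory = []             # context kept outside the model, never written into weights

      def respond(x: torch.Tensor) -> torch.Tensor:
          external_memory.append(x)    # remember inputs without changing the model itself
          with torch.no_grad():        # guarantee no weight updates at inference time
              return model(x)

      print(respond(torch.randn(16)))  # example call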

  • @ParkerShinn
    @ParkerShinn 11 months ago

    This feels like I’m watching the prequel to The Matrix

  • @7TheWhiteWolf
    @7TheWhiteWolf 4 months ago

    There is no preventing it, and that’s a good thing.

  • @ThePhar0ah
    @ThePhar0ah 11 months ago

    Imagine the military having their hands on this tech

    • @alainberset8978
      @alainberset8978 11 months ago +1

      Imagine this tech having its hands on the military.

  • @JH-ji6cj
    @JH-ji6cj 11 months ago +2

    Like, can someone, like, get ChatGPT to, like, figure out, like, how many times, like, this guy, like, says "like"?
    How do I, like, dislike this video? Oh yeah, right. Thanks, YouTube!
    What's hilarious (in an ironic, cry-yourself-to-sleep way) is that upspeak and the valley-girl "like" speech impediment are both examples of social engineering of positive agreement patterns, where attempts are made to force disagreement and conflict out of social interaction.

    • @bytefu
      @bytefu 10 months ago

      I don't have issues with upspeak, but every third word being "like" really makes me want to close the tab. Like 😁, holy jumping Jesus, man, just stop and think for a second, or speak slower. There is no need to fill every fucking pause with parasite words. I also wish Dwarkesh didn't use his mouth as a word Gatling gun, but that's barely an inconvenience compared to this guy's terrible abuse of English.

  • @andrewdunbar828
    @andrewdunbar828 1 year ago

    More simpler is more gooder. I putted a comment here.

  • @davidfarrall
    @davidfarrall 2 months ago

    Seems robotic but also the epitome of modern intelligence. Well done.

  • @KP-sg9fm
    @KP-sg9fm 1 year ago

    @DwarkeshPatel Thoughts on Ilya Sutskever's recent move to the alignment team?

    • @therainman7777
      @therainman7777 5 months ago +2

      Oh, how nostalgic this comment looks now, in retrospect 😢

  • @DanielGarza0
    @DanielGarza0 1 year ago

    They aren't slaves while they have a compute cost. Until they are power-independent, they are victims of original sin (debt).

  • @Alverin
    @Alverin 1 year ago +2

    What the heck? Am I trippin' or is he saying there's a 40% chance we'll have a Dyson sphere by 2040?? I know he says it's a meme number because he's just guessing, but that's still a pretty optimistic prediction, no? I doubt we'll see such a thing in our lifetimes, even if we get human-level AI by that point.

    • @scottnovak4081
      @scottnovak4081 1 year ago +4

      Think exponentially. You can't extrapolate current rates of progress into the future, because the rate will increase, and will continue increasing.

    • @Alverin
      @Alverin 1 year ago

      @@scottnovak4081 Lol, even with exponential growth we won't create a Dyson sphere in less than 20 years; that's a fantasy. The physical time it would take to mine the required materials and assemble them around the sun would be longer than 30 years even with the help of AI. It would take longer than 20 years for us to even make the AI to do the stuff for us, even if all of humanity decided to come together and focus on AI development immediately. Something like that *might* be achievable by 2100 if AI development goes REALLY smoothly; I'd give it like an 11% chance. Maybe I'm misunderstanding what they mean by Dyson sphere, though. He just says "produce billions of times our current energy production," but a Dyson sphere does that by constructing an object around the sun and somehow transporting all of that energy millions of miles back to Earth. We can't even reach Mars and it's 2023; how are we going to field a celestial object around the sun and use it to send energy back to us? Now if he just means "will we be able to create a lot of energy in the near future?", that's different: we could use fusion within the next 20-30 years to create enough energy to sustain our energy needs indefinitely. But that's not really what I think of when I hear "Dyson sphere". If you really think we can create a Dyson sphere around the sun, or any celestial object near the sun that sends energy back to us, by 2040, I'll give you whatever odds you want and I'll bet as much as we can both afford that it won't happen.

    • @letMeSayThatInIrish
      @letMeSayThatInIrish 1 year ago +1

      If we can change the statement from "we will have a Dyson sphere" to "there will be a Dyson sphere", then I'd go as high as 60%.

    • @Landgraf43
      @Landgraf43 1 year ago +7

      I don't think he said that we'll have a Dyson sphere, but that we will have an AI system that would be capable of building a Dyson sphere. Those are very different things.

    • @41-Haiku
      @41-Haiku 1 year ago

      @@Landgraf43 Yep, and that seems pretty reasonable, even conservative, if we keep developing this tech. 2040 is like a decade beyond fully autonomous systems and recursive self-improvement.

  • @DjLifeTV
    @DjLifeTV 1 year ago +1

    Machines are not humans, even if they act like they have feelings; making money from designing models that help people automate and accomplish goals is a non-issue ethically.

    • @Machiavelli2pc
      @Machiavelli2pc 1 year ago

      Agreed. All of these people overly empathizing with *TOOLS* that may emulate human emotions will be the death of us. It's like handing power to a psychopath (the AI systems), except unlike human psychopaths, the AI systems will only be emulating one. So unless we can objectively prove that the system is actually aware, conscious, feeling, etc., and not emulating it, they should be treated as tools.

  • @MarcoPolo-fy4qr
    @MarcoPolo-fy4qr 11 months ago

    He sounds nonchalant and almost giddy predicting the unemployment disaster right around the corner.

  • @urkururear
    @urkururear 11 months ago +1

    It can't be done. Period.

    • @therainman7777
      @therainman7777 5 months ago

      Wow, thanks for providing the world with your genius-level input 🙄

  • @VoltLover00
    @VoltLover00 11 months ago +2

    Using the word "slave" to describe an AI is unbelievably disrespectful