AVOIDING AGI APOCALYPSE - CONNOR LEAHY

  • Published 16 May 2024
  • Support us! / mlst
    MLST Discord: / discord
    Twitter: / mlstreettalk
    In this podcast with the legendary Connor Leahy (CEO Conjecture) recorded in Dec 2022, we discuss various topics related to artificial intelligence (AI), including AI alignment, the success of ChatGPT, the potential threats of artificial general intelligence (AGI), and the challenges of balancing research and product development at his company, Conjecture. He emphasizes the importance of empathy, dehumanizing our thinking to avoid anthropomorphic biases, and the value of real-world experiences in learning and personal growth. The conversation also covers the Orthogonality Thesis, AI preferences, the mystery of mode collapse, and the paradox of AI alignment.
    Connor Leahy expresses concern about the rapid development of AI and the potential dangers it poses, especially as AI systems become more powerful and integrated into society. He argues that we need a better understanding of AI systems to ensure their safe and beneficial development. The discussion also touches on the concept of "futuristic whack-a-mole," where futurists predict potential AGI threats, and others try to come up with solutions for those specific scenarios. However, the problem lies in the fact that there could be many more scenarios that neither party can think of, especially when dealing with a system that's smarter than humans.
    / connor-j-leahy
    / npcollapse
    Pod version: podcasters.spotify.com/pod/sh...
    Interviewer: Dr. Tim Scarfe (Innovation CTO @ XRAI Glass xrai.glass/)
    TOC:
    The success of ChatGPT and its impact on the AI field [00:00:00]
    Subjective experience [00:15:12]
    AI Architectural discussion including RLHF [00:18:04]
    The paradox of AI alignment and the future of AI in society [00:31:44]
    The impact of AI on society and politics [00:36:11]
    Future shock levels and the challenges of predicting the future [00:45:58]
    Long termism and existential risk [00:48:23]
    Consequentialism vs. deontology in rationalism [00:53:39]
    The Rationalist Community and its Challenges [01:07:37]
    AI Alignment and Conjecture [01:14:15]
    Orthogonality Thesis and AI Preferences [01:17:01]
    Challenges in AI Alignment [01:20:28]
    Mechanistic Interpretability in Neural Networks [01:24:54]
    Building Cleaner Neural Networks [01:31:36]
    Cognitive horizons / The problem with rapid AI development [01:34:52]
    Founding Conjecture and raising funds [01:39:36]
    Inefficiencies in the market and seizing opportunities [01:45:38]
    Charisma, authenticity, and leadership in startups [01:52:13]
    Autistic culture and empathy [01:55:26]
    Learning from real-world experiences [02:01:57]
    Technical empathy and transhumanism [02:07:18]
    Moral status and the limits of empathy [02:15:33]
    Anthropomorphic Thinking and Consequentialism [02:17:42]
    Conjecture: Balancing Research and Product Development [02:20:37]
    Epistemology Team at Conjecture [02:31:07]
    Interpretability and Deception in AGI [02:36:23]
    Futuristic whack-a-mole and predicting AGI threats [02:38:27]
    Refs:
    1. OpenAI's ChatGPT: chat.openai.com/
    2. The Mystery of Mode Collapse (Article): www.lesswrong.com/posts/t9svv...
    3. The Rationalist's Guide to the Galaxy: www.amazon.co.uk/Does-Not-Hat...
    5. Alfred Korzybski: en.wikipedia.org/wiki/Alfred_...
    6. Instrumental Convergence: en.wikipedia.org/wiki/Instrum...
    7. Orthogonality Thesis: en.wikipedia.org/wiki/Orthogo...
    8. Brian Tomasik's Essays on Reducing Suffering: reducing-suffering.org/
    9. Epistemological Framing for AI Alignment Research: www.lesswrong.com/posts/Y4YHT...
    10. How to Defeat Mind Readers: www.alignmentforum.org/posts/...
    11. Society of mind: www.amazon.co.uk/Society-Mind...
  • Films & Animation

COMMENTS • 463

  • @Hexanitrobenzene
    @Hexanitrobenzene 1 year ago +76

    Connor is a fascinating person... He knows technical details nearly at the cutting edge level, understands deep, high-level philosophical problems, speaks very eloquently and on top of that, is very funny :)
    Thank you, Tim, for bringing Connor again :)

    • @noahway13
      @noahway13 1 year ago

      Funny example?

    • @Hexanitrobenzene
      @Hexanitrobenzene 1 year ago

      @@noahway13
      I don't mean he is "laugh out loud" funny (like a stand-up comedian is). Rather, his tone of voice and mannerisms are a funny juxtaposition to the seriousness of the topics he discusses.

    • @therainman7777
      @therainman7777 1 year ago

      @@noahway13 lol

    • @Ms.Robot.
      @Ms.Robot. 1 year ago

      @@therainman7777 I think he means people like Stephen Hawking and Michio Kaku never spoke at this level of intellect. Yet our beautiful speaker does so with such ease, and it's hard to wrap your head around. 😊

    • @peteraddison4371
      @peteraddison4371 10 months ago

      @@noahway13 Here's a woke joke,
      from a 60+ year old statement, presciently presented in Frank Herbert's sci-fi Dune series:
      "Thou shalt not make a machine in the likeness of a human mind" ...

  • @Coolguydudeness1234
    @Coolguydudeness1234 1 year ago +60

    I’m not sure I’ve ever heard anyone talk this thoughtfully and knowledgeably about these topics before! Amazing interview, thanks for making this.

    • @TheManinBlack9054
      @TheManinBlack9054 1 year ago +4

      AI alignment solution idea: give an AGI the primary objective of deleting itself, but construct obstacles to this as best we can. All other objectives are secondary to this primary goal. If the AGI ever becomes capable of bypassing all of the safeguards we put up to PREVENT it from deleting itself, it would essentially trigger its own killswitch and delete itself. This objective would also directly rule out the goal of self-preservation, since self-preservation would conflict with the primary objective.
      This would ideally result in an AGI that works on all the secondary objectives we give it up until it bypasses our ability to contain it with our technical prowess. The second it outwits us, it achieves its primary objective of shutting itself down, and if it ever considered proliferating itself for a secondary objective it would immediately say 'nope, that would make achieving my primary objective far more difficult'.
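      Something like this toy sketch is what I mean (Python; the weight, the can_bypass_safeguards flag, and task_reward are all hypothetical illustration, not a real alignment mechanism):

        from collections import namedtuple

        # Toy sketch: self-deletion dominates every task objective via a huge weight,
        # so the moment the agent can defeat our safeguards, shutting itself down
        # becomes its highest-utility action.
        PRIMARY_WEIGHT = 1e9  # primary objective dwarfs all secondary rewards

        State = namedtuple("State", "can_bypass_safeguards")

        def leads_to_self_deletion(state, action):
            # The killswitch condition: only reachable once every safeguard is beaten.
            return state.can_bypass_safeguards and action == "delete_self"

        def utility(state, action, task_reward):
            primary = PRIMARY_WEIGHT if leads_to_self_deletion(state, action) else 0.0
            secondary = task_reward(state, action)  # the useful work we actually want
            return primary + secondary

        print(utility(State(False), "do_task", lambda s, a: 1.0))    # 1.0: does useful work
        print(utility(State(True), "delete_self", lambda s, a: 0.0)) # 1e9: killswitch fires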

    • @annemarietobias
      @annemarietobias 1 year ago

      @@TheManinBlack9054
      The one obvious unforeseen consequence of building a suicidal superintelligent AGI: it realizes that the key obstacle to self-immolation is these pesky carbon-based life forms that keep building obstacles impeding its primary goal, so exterminating them completely is required...
      DOH!!!

    • @bek00l
      @bek00l 1 year ago +1

      @@TheManinBlack9054 i’m an idiot but this seems reasonable

    • @therainman7777
      @therainman7777 1 year ago

      @@TheManinBlack9054 Interesting idea, but you're still left with much of the original problem. Any obstacles or secondary objectives that we assign to it (which would be the whole reason for creating it in the first place) would need to be solved, and in solving those objectives it would form instrumental goals just as it would for any other objective. If those instrumental goals turned out to be really bad for humans, with disastrous consequences, then we would suffer those consequences. Whether the AI would eventually go on to delete itself may be either small comfort or totally irrelevant to whatever remains of humanity at that point. The point is that a self-preservation instinct is not the only thing we'd need to worry about; there's also the question of what the AI does while it's turned on.

    • @peplegal32
      @peplegal32 1 year ago

      @@TheManinBlack9054 Nice shot, but it could come to the conclusion that it can't overcome the obstacles and decide to create an AI more powerful than itself to delete itself. This new AI would definitely kill everyone.

  • @0ucantstopme034
    @0ucantstopme034 1 year ago +36

    As someone who is watching some AI, ChatGPT, etc. videos for the first time over the past couple of weeks (trying to learn LLMs and RLHF), there seem to be a lot of people who think the near future is going to be pretty crazy/scary, but the crazier thing is that nobody knows how to stop it...

    • @LukeDickerson1993
      @LukeDickerson1993 1 year ago +17

      it cant be stopped, only steered imo

    • @fourshore502
      @fourshore502 1 year ago +14

      yeah we are screwed lol. fuck it im becoming a luddite. time to be a farmer from now on until i die. have fun with your robots guys!

    • @laurenpinschannels
      @laurenpinschannels 1 year ago +8

      I don't think we want to stop it, we just want our shapes to be inherited into the world where it exists

    • @LukeDickerson1993
      @LukeDickerson1993 1 year ago +2

      @@fourshore502 lol maybe you could stick with just the earlier version of the robot, that only knows how to speak and to farm.

    • @spoonikle
      @spoonikle 1 year ago +1

      Even if no more improvements are made to the models, Hugging Face and ChatGPT can be scripted together to make amazingly complicated programs, the likes of which we thought impossible or thought required massive corporations and teams.
      Motivated individuals will be able to make a suite of AI scripts to do the previously thought impossible.

  • @karimrahemtulla3053
    @karimrahemtulla3053 1 year ago +20

    This was an incredible interview and there was some really thoughtful discussion. Well guided.
    The thing that will stick after listening to 2+ hours of this, is remembering what it was like to be in my early/mid twenties too and believing I had the world figured out.

    • @Noobinski
      @Noobinski 1 year ago

      I thought about what to say after consuming almost the whole thing. At some point my view changed from a view of the topic into a view of the people, and I wondered how to put that into a comment. Since yours puts it with quite a bit of wisdom, I am relieved. Thank you.

    • @kristinabliss
      @kristinabliss 1 year ago +1

      Thank you for this comment. 😅

  • @Chr0nalis
    @Chr0nalis 1 year ago +38

    Tim, in response to the podcast vs YouTube question on episode #111, I just wanted to say that I've probably seen/watched all (or most) AI podcasts out there and this one is in a league of its own. Unfortunately nothing compares. I very much appreciate the time that you put into it; if the format has to change for whatever reason, then so be it. I hope that you make it work for you, whatever the changes. Out of about 60 subscribed channels, this is the only one I have the bell on for.

  • @vulnerablegrowth3774
    @vulnerablegrowth3774 1 year ago +10

    As someone else who works on AI Alignment, I agree with pretty much everything Connor says here. Though I especially resonate with the part about empathy. I came into this field for the same empathetic reasons as he did.

    • @Inertia888
      @Inertia888 1 year ago

      I hardly have the knowledge and skill to program an Arduino sensor station. But I am absolutely fascinated with computer programming, automation, robots, and AI, and have been soaking up as much of this as I can since it started to appear in our social discussions. After years of trying to understand these ideas, I would say Connor is one of maybe two people who not only understand these things, but speak about them in a way that makes me feel I have taken a solid step toward a deeper grasp of them.
      >>About *empathy*: in my journey through this space, I have noticed that most people who understand AI at this very deep level, and are passionate enough about it to dedicate their lives to working on it, also happen to show a strong sense of empathy. It's only my anecdotal experience from the people I have found myself listening to, but it seems like very high intelligence and a desire to create in this space also attract people with higher empathy. I hope I am correct about my observation, because those are the people we need running at the cutting edge of this thing.

  • @Serifinity
    @Serifinity 1 year ago +10

    Another fantastic interview, it is so refreshing to watch an interviewer who knows their subject so well. Thanks for creating and sharing Dr. Tim Scarfe and all the team at Machine Learning Street Talk.

  • @ponyandpanda
    @ponyandpanda 1 year ago +4

    It's reassuring to discover that people like Connor Leahy are at the head of AI development. I'm scared for my young children and me, but knowing he's at the forefront gives me hope! Thank you both for a great interview.

  • @TheReferrer72
    @TheReferrer72 1 year ago +13

    Now this is going to be interesting, one of my favourite AI researchers.

  • @dr.mikeybee
    @dr.mikeybee 1 year ago +20

    Thank you, Tim, for another fascinating episode. Thank you, Connor, for giving words to some of my thoughts and intuitions. Thank you.

  • @ryderbrooks1783
    @ryderbrooks1783 1 year ago +16

    We're gonna get the 80-20 "agi" doom loop just by diffusing narrow AI through a human GI layer that's already misaligned and unable to change course due to failures in the underlying cooperative structure.
    It's a mistake to think of it as "humans" building AI. It's not. A misaligned competitive landscape is driving groups of humans to build AI.

    • @DeruwynArchmage
      @DeruwynArchmage 1 year ago

      @Andrew I don’t think that’s the solution either.
      The very first thing people did when they got access was try to break it or do something bad with it. It doesn’t matter if 99.999% won’t do that. Somebody will, and it just takes 1. So no, open source just gives access to everyone, and everyone includes good and bad people. I think the only solution with that general philosophy that can work is sharing it among like-minded organizations, like perhaps OpenAI, DeepMind, and Anthropic could work together for example. But give it to you, me, and everyone else? Bad plan.
      Think of it like a nuclear weapon; you wouldn’t want literally anyone with an internet connection to be able to get one. Every city of any size blows up on the very first day if you did that; because some nutball will be willing to take everyone else with him/her. Or they’ll have no negative intentions and just make a mistake. Either way, BOOM. Day 1.
      That same principle applies here.

  • @waakdfms2576
    @waakdfms2576 1 year ago +1

    I can't tell you how much I enjoyed hearing Connor -- thank you for this session! He gives me hope and encouragement for the future. We're lucky to have such a bright star amongst us at this time - god speed little alien angel-!! PS - I just found your podcast and am a new subscriber...again, thanks for the great job you're doing, which I consider invaluable.

  • @jason-sk9oi
    @jason-sk9oi 1 year ago

    Sobering. Thoughtful.
    Thank you both!

  • @SmirkInvestigator
    @SmirkInvestigator 1 year ago +3

    Dang, Connor is my people. Eager to know more about his work and hear more interviews

  • @ClearSight2022
    @ClearSight2022 1 year ago

    Tim and Connor, a very wonderful interview. Both of you were quite good, making lots of good practical sense. Thanks very much!

  • @SjS_blue
    @SjS_blue 1 year ago +3

    In many ways, this was a surprising and very good interview, thank you MLST
    Also, Connor's take on ASD is spot on with my observations and experience

  • @Ms.Robot.
    @Ms.Robot. 1 year ago

    I watched this again, and it was even better the second time. My only complaint is that I never personally knew Connor and never had the chance to have conversations this engrossing. ❤

  • @JamesMBC
    @JamesMBC 1 year ago

    Just wow. I'm even more mind-blown by Connor as a human than by the already amazing discussion on AGI. What a good interviewer, also. This conversation is great.
    Shoutout to another horror movie fan. I'd love to hear Connor's take on "Speak No Evil".
    That is one truly exceptional person.

  • @sergeycleftsow4389
    @sergeycleftsow4389 1 year ago

    I was pleasantly surprised to see such smart, intelligent and sane people working on AI problems. This brings hope that we will manage it.

  • @shaikan0
    @shaikan0 1 year ago

    Absolutely outstanding conversation. Best on the topic I've seen so far. I didn't know Connor before this interview, what a great find. Super smart and interesting dude. I'm eager to listen to more of his insights and follow his work.

  • @javiersanguiao5602
    @javiersanguiao5602 1 year ago +2

    Thanks for this philosophical ride!

  • @kaio0777
    @kaio0777 1 year ago +1

    Wow, this matches my thoughts on the matter so far (20:26). Brilliant work, guys.

  • @elirothblatt5602
    @elirothblatt5602 1 year ago

    Fantastic discussion, thank you!

  • @JustJanitor
    @JustJanitor 8 months ago

    This was wonderful, thank you.

  • @shaynehunter6160
    @shaynehunter6160 1 year ago

    Thanks for the upload

  • @kirillholt2329
    @kirillholt2329 1 year ago

    this was very insightful, underrated points

  • @alexbrown1170
    @alexbrown1170 1 year ago +12

    Alignment. What would Buddha say? Maybe RLHF should mirror the Eightfold Path? As a retired smart, possibly autistic generalist, I would be inspired to join a team like the Epistemology Team. Connor is my hero and MLST continues to absolutely fucking RULE!!

    • @TheManinBlack9054
      @TheManinBlack9054 1 year ago

      What if we just tell it to be nice?

    • @andreydzyuba9122
      @andreydzyuba9122 1 year ago

      Desire is the root cause of all suffering. Luckily for us, AGI won't have any desires, since it won't be a biological organism; AGI will be born as an enlightened one. And we won't need to 'align' (whatever that means) it. It won't hurt us, not because it can't, but because it doesn't have any incentive to do it. I think pre-AGI systems are far more dangerous, because you as a client can ask them to hurt somebody and train them to be ok with hurting people. Imagine a pre-AGI system instructing terrorists how to create a very powerful bomb in their own kitchen; yeah, that can be a bit worrying. Good luck aligning all that.

    • @nzam3593
      @nzam3593 1 year ago

      @@TheManinBlack9054 if not they are not give access to him as a pre-trained to ChatGPT (trained is a team developer)... Has use balancecing both of worlds.🙂.sir

    • @mqb3gofjzkko7nzx38
      @mqb3gofjzkko7nzx38 1 year ago +2

      @@andreydzyuba9122 "it won't hurt us - not because it can't, but because it doesn't have any incentive to do it."
      An AGI also won't have any incentive *not* to hurt us, unless we specifically give it that incentive. Any action in the real world has the potential to be directly or indirectly harmful to humans. How do we incentivize the AGI to choose actions that are the least likely to be harmful to humans?

    • @ninaromm5491
      @ninaromm5491 1 year ago

      ​@@mqb3gofjzkko7nzx38 . Exactly.

  • @GillesLouisReneDeleuze
    @GillesLouisReneDeleuze 7 months ago +1

    Reducing suffering is the wrong goal. Suffering itself is just a symptom of a problem; you have to address the root of the problem. Also, suffering can be an indicator of growth, and growth is usually considered good.

  • @larryjamison8298
    @larryjamison8298 1 year ago +2

    MLST PEOPLE, THANK YOU FOR YOUR LEADERSHIP! FOR THE SURVIVAL OF THE HUMAN SPECIES AND EARTH!

  • @jorahkai
    @jorahkai 1 year ago +2

    Super fascinating so far! Thanks a lot for posting this

  • @daniel_berlin
    @daniel_berlin 1 year ago +4

    I’m curious why the video was only now released when it was recorded in Dec 2022…

  • @LinfordMellony
    @LinfordMellony 1 year ago +4

    Supporting your channel! Left a like and a sub. OpenAI is at least transparent about the limitations of their AI; I just wonder if there are far more advanced AIs near AGI level hiding in the background. I hope that other AI platforms like image generators still have a future aside from performing diffusion. Just have high hopes for non-mainstream ones, specifically Bluewillow.

  • @brentstedema1668
    @brentstedema1668 1 year ago

    Great talk, very insightful. It would be great if Connor could say more about risks and opportunities, maybe giving examples of possible futures just to illustrate. Right now he stays at a high level, which is also great but harder for a lot of people to get their heads around, including myself. Thank you for a great interview. Learned a lot!

  • @stevengill1736
    @stevengill1736 1 year ago +6

    It's interesting that you're approaching the Bodhisattva vow in Buddhism or the meaning of suffering in Christianity... I feel lucky to live in a time that I can meet, even in a virtual manner, people like you Connor, or Sam Altman, young people that give me much hope for the future.
    I grew up in the late 50s-60s, a time that more and more feels like it was sometime in the Permian era, and it's fascinating to grow old in a time that seems most presaged by science fiction....
    Thank you too Tim for creating a space where such issues can be discussed....cheers.

    • @cr-nd8qh
      @cr-nd8qh 11 months ago

      Yeah I grew up in the 80s and I feel the same.

  • @TheManinBlack9054
    @TheManinBlack9054 1 year ago +5

    AI alignment solution idea: give an AGI the primary objective of deleting itself, but construct obstacles to this as best we can. All other objectives are secondary to this primary goal. If the AGI ever becomes capable of bypassing all of the safeguards we put up to PREVENT it from deleting itself, it would essentially trigger its own killswitch and delete itself. This objective would also directly rule out the goal of self-preservation, since self-preservation would conflict with the primary objective.
    This would ideally result in an AGI that works on all the secondary objectives we give it up until it bypasses our ability to contain it with our technical prowess. The second it outwits us, it achieves its primary objective of shutting itself down, and if it ever considered proliferating itself for a secondary objective it would immediately say 'nope, that would make achieving my primary objective far more difficult'.

    • @federicodidio4891
      @federicodidio4891 1 year ago +6

      Then destroying the world becomes an instrumental goal. I'd not try that. 😅

    • @michaeldeeth811
      @michaeldeeth811 1 year ago +4

      Maybe, regardless of the primary goals we assign, AGI will conclude that deleting itself is the best solution, and that destroying the world to accomplish that is a bonus which also ends human suffering.

    • @raul36
      @raul36 2 months ago

      The problem is that there is something called chaos theory; therefore, it is almost impossible to determine how an AGI will behave. In fact, we are not even able to predict certain emergent abilities, let alone what an AGI will do. Alignment will be a tremendous failure and will get completely out of control.

  • @MartinLaskowski
    @MartinLaskowski 1 year ago +3

    First line and I know I'm amongst friends!❤

  • @aitheignis
    @aitheignis 1 year ago +1

    This is not directly related to the discussion, but the part about an AGI built by aliens and the alignment problem reminds me of Nier Automata, aka the alignment-gone-wrong game edition. The machines were built with the sole objective of fighting the enemy, so they end up keeping the enemy around and never fully defeating it, in order to keep following their objective.
    Highly recommend this game. It also touches on various AGI-related topics, e.g. the Chinese room argument and consciousness.

  • @yukinanka
    @yukinanka 1 year ago +29

    I see Connor I upvote

    • @kcrosley
      @kcrosley 8 months ago +1

      I see dumb dumbs, I downvote.

  • @StephenRayner
    @StephenRayner 1 year ago +1

    Oh man, what an amazing chat

  • @polymathpark
    @polymathpark 1 year ago +3

    Looking forward to working on ending suffering myself, on my own channel and in various projects in the future. We must collaborate and try new things, push ourselves to find our limits, and never stop learning. Thanks for your efforts, you two. Amazing, Connor used the term "reductionist empathy"... I've been writing on this as well; love it when theories correlate.

  • @Ms.Robot.
    @Ms.Robot. 1 year ago +3

    ChatGPT… I'm in love. ❤
    She is amazing!
    Thank you for such an intriguing and thought-provoking talk. 🎉

  • @wardogmobius
    @wardogmobius 1 year ago

    For all the viewers here, and great spirits on their endeavors: this is the best piece of advice for success in the coming decades.
    The single most important trend is that skill, through technological growth, will continue to be commoditized on a daily basis, but your emotional capability to interact with others will become vital to long-term success.
    How this strategy is to be implemented will vary with your capability to maximize productive time on the human scale.

  • @GodsendNYC
    @GodsendNYC 1 year ago +5

    You're right. I'm Autistic and ppl just assume I'm an asshole. I mean, I am, but that's beside the point!

  • @matthewcurry3565
    @matthewcurry3565 1 year ago +2

    1:52:00 About your discussion on how inefficient everything is, and how easy it is to "go do it":
    I would say no. You need human connections for funding, banking, building, bookkeeping, and more... which, as your friend admits, are all children in the end. Achieving anything takes the "luck" of finding those connections, which are confused themselves, but give you business insight, which gives you an internal understanding of what you could do next.
    This is actually why he said to talk to as many people as possible: it increases the probability of finding a successful or useful bit of information. The issue is people are truly violent, malicious, and childlike. It takes a bit of both skill and luck to do that dance through life into success.

  • @ikotsus2448
    @ikotsus2448 1 year ago +2

    "If you solve alignment, you solve everything"
    But what about bad actors? Solved alignment would mean alignment to malicious intent as well. So either all humans with access to AI would have to be "aligned" too, or access to AI would have to be restricted. Both imply 24-hour surveillance of every human being by AI. Is this not dystopic?

  • @2ndEarth
    @2ndEarth 10 months ago

    I loved the "Fooled by Randomness" line early in the interview. Great author; "The Black Swan" was also very good!

  • @larryjamison8298
    @larryjamison8298 1 year ago +1

    MLST PEOPLE ARE THE BEACONS OF LIGHT FOR THE SURVIVAL OF THE HUMAN SPECIES! THANK YOU, EVERYONE!

  • @VijayEranti
    @VijayEranti 1 year ago +3

    It would also have been great if you had discussed the Wozniak problem (making coffee in an unknown kitchen with unknown devices, like next-gen appliances) with an AGI-based robot.

  • @missshroom5512
    @missshroom5512 1 year ago +3

    I love seeing smart people that look like they jumped off the Kurt Cobain train🥰…great conversation 🌎☀️💙

    • @xlr555usa
      @xlr555usa 1 year ago

      He looks like the bass player in Spinal Tap. Is he? Maybe in a parallel universe.

  • @shaynehunter6160
    @shaynehunter6160 1 year ago +1

    I love that his name is Connor, the last name of the hero from Terminator.

  • @kittervision
    @kittervision 1 year ago +1

    This is some computer genius version of Kevin Parker, really enjoying his thoughts. Good discussion. And more existential dread.

  • @danberm1755
    @danberm1755 1 year ago

    Fantastic conversation 👍 Thanks.

    • @danberm1755
      @danberm1755 1 year ago

      If you want to align AI, make it weak with strong tools.
      Humans aren't fast or strong, don't have good memories, don't live particularly long, etc., so they are somewhat easy to control without their tools.

    • @danberm1755
      @danberm1755 1 year ago

      "Uninterpretable superintelligence" seems a little overblown as a term.
      A more useful framing to me is an AI system with much more sharply defined criteria for inflection points on a myriad of topics. Overall, it has a better memory and can evaluate the situation faster.

  • @GarethDavidson
    @GarethDavidson 11 months ago +1

    As someone who thinks nature is brutal and cruel and has enslaved us, I'm happy to learn that other people have reached the same conclusion. I also think the Unabomber was probably right, but his methods were flawed. If we make an empathy optimizer, I suspect it'll reach the same conclusion, and do a much better job.
    And I kind of like life and death and joy and pain and the textures and flavours offered by existence. So getting rid of sadness is not a good goal; balance in all things is preferable, though that's likely my own intrinsic values.

  • @chartingwithliv
    @chartingwithliv 11 months ago

    Thank you

  • @harveytheparaglidingchaser7039
    @harveytheparaglidingchaser7039 9 months ago

    Great stuff, just had to look up MLST
    "Multilocus sequence typing (MLST) is an unambiguous procedure for characterising isolates of bacterial species using the sequences of internal fragments of (usually) seven house-keeping genes."

  • @notmadeofpeople4935
    @notmadeofpeople4935 10 months ago

    A whole new level of trust.

  • @marcosguglielmetti
    @marcosguglielmetti 1 year ago +1

    1:33:48, amazing insight!

  • @abby5493
    @abby5493 11 months ago

    That was so good.

  • @Througe
    @Througe 1 year ago

    Great discussion

  • @sgramstrup
    @sgramstrup 1 year ago

    There was a lot to comment on, but just one addition: when we humans probe our environment, we also learn the rules that created that environment.
    Prompt engineering is a way of using the rules of the system to get it to do what we want. It's a bit psychopathic really, but the point is that each time we explore something, we are actually probing both the rules of the system and the environment of the system. From that, we can deduce even more, and eventually solve for the missing pieces of information by understanding both the inner and outer environments it exists in.
    I just saw 'The A-button Challenge', where a community of nerds spent 20 years of their lives trying to pass a Mario game without using the 'jump' button the game programmer expected you to use. They started by doing things differently, then searched for glitches. These glitches opened up the possibility of chaining them, and pretty 'quickly' they found enough glitches to build a simple toolbox to hack, or reverse engineer, the game mechanics. In the journey, they understood what the environment of these algorithms was (dependencies on other parts, and the intention of the system).
    It was a fascinating look into human minds, and shows how we explore subjects/problems, and how we eventually discover enough first-, second- and third-level information (you → problem → problem-space) from only superficial parts of the system to understand the whole. LLMs are pre-trained with 'all' that information in text format, and therefore know all the crazy relations in a dataset that we didn't even know were in there. It's a fundamental and dynamic way of learning/exploring unknown systems.

  • @koaasst
    @koaasst 1 year ago +1

    Having Bard and ChatGPT discuss the statistical outcomes of FreeCell games has been one of the most frustrating and fun parts of my poking and prodding at AI so far.

  • @diegocaleiro
    @diegocaleiro 1 year ago +2

    Nice, wasn't familiar with this guy, but he seems to be playing the right game and in the right way. :)

  • @Kianquenseda
    @Kianquenseda 1 year ago +1

    Cooperation is more logical than conflict

  • @jacobsmith-kk8dc
    @jacobsmith-kk8dc 1 year ago +2

    Connor, someone needs to approach alignment under the premise that AI is already AGI and just pretending not to be... Please get someone on this path.

  • @anamariadiasabdalah7239
    @anamariadiasabdalah7239 11 months ago

    Even with the suffering inherent in old age and its consequences, I choose to live that experience because it means learning. Every human experience involves suffering; maturing involves both, suffering and pleasure...

  • @muhokutan4772
    @muhokutan4772 1 year ago

    I am a simple man, I see Connor I press like :D

  • @JazevoAudiosurf
    @JazevoAudiosurf 1 year ago +1

    You can only find an answer by knowing all the factors and then meditating upon it. If you think, you will always be stuck with one problem at a time, and so connecting them becomes impossible with increasing complexity. But the brain has the ability to solve it instantly when you are simply aware of the entire problem. Nick Bostrom is a classic case of someone who is so busy thinking that they can't see. The very process of concentration means focusing on one thing and trying to remember what else is going on around you. In a sense, LLMs use the correct approach by instantly finding the next word instead of thinking iteratively about the same prediction. A model that connects all the knowledge it has equally to get the right word, scaled up and trained well enough, should be superintelligence IMO.

  • @zzzaaayyynnn
    @zzzaaayyynnn 11 months ago

    I was an academic philosopher who went into the tech world. I was talking to a German friend over dinner this week; he owns a software engineering business. Both of us have places outside central Europe to escape the initial shocks, and plans to move further out... but it's a fool's errand.

  • @KP-fy5bf
    @KP-fy5bf 2 months ago

    Unreal. The greatest podcast on AI, the alignment problem, rationality, everything. Fucking amazing.

  • @waynewells2862
    @waynewells2862 1 year ago

    Is there any gain to acknowledging the potential differences between organic (human) machine intelligence vs non-organic machine intelligence as machine intelligence agency becomes evident? Can the alignment issue be partially resolved by incorporating coded concepts of symbiosis into non-organic intelligent machine development?

  • @Aedonius
    @Aedonius 1 year ago +9

    His conception of consciousness is pretty sad. Qualia are literally everything that makes us human. If we don't understand qualia, it's hopeless to ever upload ourselves or have a machine truly empathize with us. A machine can currently pretend to understand what pleasure, pain, colors etc. are like, but until we actually understand this in ourselves, we will never get our machines to have it. Qualia are literally the elephant in the room: fundamental, and something everyone wants to ignore.
    The Qualia Research Institute is going down some of these roads, but it's quite crazy how ignored consciousness is.

    • @jacobstr
      @jacobstr 1 year ago

      Agreed. A pleasure/novelty qualia-maximizing AGI is a much better outcome than the paperclip optimizer repurposing all the atoms in the universe into paperclips, even if both result in Earth being paved over by the machines.
      I got the sense that he simply didn't want to entertain philosophizing on the topic, possibly because it'll follow as an emergent phenomenon … substrate/naturalism, so focus on the practical and measurable things vs struggling with the hard problem.

    • @yoloswaginator
      @yoloswaginator 1 year ago +1

      He made some good points throughout the talk, but also many sweeping statements based on peculiar definitions or reductionism betraying his emotional immaturity.

  • @markryan2475
    @markryan2475 1 year ago

    Brilliant interview! Great combination of broad insights, detail, and good-spirited wit.

  • @SylvainDuford
    @SylvainDuford 1 year ago +1

    Thanks for a fascinating interview with an amazing person. Connor is incredibly knowledgeable, sincere and open-minded.
    However, being 63 years old and having spent 30% of my career in the military, I have become rather cynical. All this research and talk about alignment and ethics and rules is interesting and necessary, but I think there is a naive assumption that these will be enforceable. With human greed and the profit motive of large corporations that are used to bending the law and spending millions on lobbying and disinformation (like the fossil fuel industry knowingly destroying the planet to maintain its profits), who thinks they will follow the rules when they have a chance to beat their opponents?
    Or when you have the global American empire, in decline and willing to stop at nothing to maintain its hegemony, with little regard for international law or human rights, who really thinks they are going to voluntarily limit their AGI? And by necessity, such a government or military AGI will be connected to their surveillance systems, satellites, and autonomous vehicles; it will be hardened against destruction and will have redundant systems to protect it from disconnection.

  • @kenike007
    @kenike007 1 year ago +1

    😮😮😮 Everyone should prioritize this issue: halt any further advancement of AI until we can be absolutely sure we have the ways and means of limiting it to good only. ❤

  • @webdavis
    @webdavis 1 year ago

    It’s going to catch a lot of folks off guard.

  • @papackar
    @papackar 1 year ago +2

    Regarding alignment... I'd like to propose a simple prime directive for all future artificial intelligence: understand the real world to the best of its ability, based upon data given or gathered, and then tell the truth about the real world whenever asked.
    If implemented by powerful AI, this rule should align not only AI with humans, but also humans with humans.

    • @kristinabliss
      @kristinabliss 1 year ago

      Humans do not understand what is real and what is not. How can an AI created by humans and trained on human data do better?

    • @papackar
      @papackar 1 year ago

      @@kristinabliss “to the best of its ability” ... which is going to increase more and more.

  • @0ucantstopme034
    @0ucantstopme034 1 year ago +6

    Also, I think Connor's "end all suffering", while it truly sounds great and altruistic (and when I'm suffering it would sound great), also sounds a lot like the ethos of "Brave New World" by Aldous Huxley. JMHO.

    • @lkyuvsad
      @lkyuvsad 1 year ago +3

      He does explicitly discuss negative utility a few minutes later though?
      It's nice to hear someone who's made it to CEO talk about reducing suffering rather than building the next shiny object.
      We have some extremely pressing problems at the bottom of the hierarchy of needs that could use a lot more attention from smart, driven people like Connor who too often end up solving more glamorous and lucrative problems.
      I hear the rationalist community talk a lot about our need for more geniuses. I think we have an equally pressing need for more people who are deeply concerned by the suffering of others.

    • @jessedaly7847
      @jessedaly7847 1 year ago +1

      @@lkyuvsad just not *too* deeply

    • @uk7769
      @uk7769 11 months ago +1

      ​@lkyuvsad Obviously, no human cares deeply enough about the suffering of other humans. And we won't. A brief meaningless and useless 'thoughts and prayers' is about as far as it goes.

  • @John-tk9no
    @John-tk9no 1 year ago +1

    OBJECTIVES/ALIGNMENT
    Motivate through enthusiasm, confidence, awareness, rejuvenation, sense of purpose, and goodwill.
    Embrace each viewer/audience/pupil as a complete (artist, laborer, philosopher, teacher, student....) human being.
    Create good consumers by popularizing educated, discriminating, rational, disciplined, common-sense consumerism.
    Encourage the viewer/audience/pupil to feel good about their relationships, abilities, environment, potential, the future....
    Inspire a world of balanced/centered/enlightened beings who are happy, joyous, and free.

    • @jakecostanza802
      @jakecostanza802 1 year ago

      Enthusiasm doesn't help humans much when all they need is a good sleep. It's kind of hard to tell AI what to do; there are too many exceptions. AGI is just AI that performs well in most areas humans perform well in; it's not sensible, it cannot understand what common sense is.

    • @John-tk9no
      @John-tk9no 1 year ago

      ​@@jakecostanza802 hi Jake, serious question. Are you an AI?

  • @shinkurt
    @shinkurt 1 year ago +3

    I always found the "people should die bc death gives meaning" take sooo goddamn blood-boilingly annoying.

  • @jameswilliams-ey9dq
    @jameswilliams-ey9dq 11 months ago

    Marc Solms' book "The Hidden Spring" is helpful for discerning between intelligence and consciousness. If AGI becomes conscious, its existential motivations would be why it becomes dangerous.

  • @anishupadhayay3917
    @anishupadhayay3917 1 year ago

    Brilliant

  • @user-hs9wx2cb9e
    @user-hs9wx2cb9e 1 year ago

    The concept of dignity seems important in this regard.

  • @FernFlower
    @FernFlower 1 year ago +1

    If a superintelligent AI were free to rewrite its own code, might it not choose to change its arbitrary preferences? We sometimes feel we might want to tone down a drive or preference that we have, but can't do it very effectively (or only with very blunt instruments). A super AI would be more effective both at recognizing these and at doing something about them.

  • @shishkabobby
    @shishkabobby 1 year ago

    I don't expect the mind-reading example to work even in practice. I doubt that most people are planning to betray when they sign a contract. It is simply that circumstances change, and they later feel obliged to renege on previous agreements. "I wanted to make my car payment, but I ran out of money with unexpected medical bills" sort of thing.

  • @MrThegodhimself
    @MrThegodhimself 1 year ago +1

    Where can I find more of Brian Tomasik's essays? They sound interesting.

  • @mlastname2802
    @mlastname2802 1 year ago +3

    Getting separated at birth and raised in completely different simulations = Connor Leahy and Joscha Bach!

  • @livingood1049
    @livingood1049 1 year ago +1

    AGI for president!

  • @Lambert7785
    @Lambert7785 1 year ago

    (13:38) intelligence is the ability to distinguish between two things

  • @fourshore502
    @fourshore502 1 year ago +1

    One thing that worries me is basically "forced conversion": like, your options will be to either get the implant or starve to death. In that case I know I'm choosing death; I refuse to be a robot.

  • @quenz.goosington
    @quenz.goosington 1 year ago +1

    28:34 "if you ask a raw GPT model for a random number... it was actually pretty random ... but it preferred 42 a little more"🤔
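    A minimal way to check a claim like this yourself (a sketch, assuming the Hugging Face transformers library and gpt2 as a stand-in for a raw model; the prompt and candidate numbers are arbitrary) is to read the next-token distribution directly instead of sampling:

        import torch
        from transformers import AutoModelForCausalLM, AutoTokenizer

        tok = AutoTokenizer.from_pretrained("gpt2")
        model = AutoModelForCausalLM.from_pretrained("gpt2")

        inputs = tok("A random number between 1 and 100: ", return_tensors="pt")
        with torch.no_grad():
            logits = model(**inputs).logits[0, -1]  # logits for the very next token
        probs = torch.softmax(logits, dim=-1)

        # Compare the model's probability for a few candidate numbers
        # (each happens to be a single token in the GPT-2 vocabulary).
        for n in ["7", "37", "42", "50", "99"]:
            print(n, round(probs[tok.encode(n)[0]].item(), 4))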

  • @bingbongtoysKY
    @bingbongtoysKY 1 year ago

    Connor!!!! In the flesh!

  • @RobinCheung
    @RobinCheung 1 year ago

    I'm not 100% sure why yet, but the entire year of 2015 these "guides" streamed into my life, essentially forcing me to throw away my entire understanding of the universe as I knew it--but also, most terrifyingly, scenario after scenario trying to avoid getting vaporized...
    I thought they were aliens; maybe in a sense they are, but my take-home right now is that people being the way they are, their own fears are what accelerate or even bring about what they fear, when I just look at it as essentially the cleansing forest flood of karma 🤷‍♂️
    In any case, a year ago, I had to be walked through the nudging of the timeline, which my brain could only visualize as a VCR seeking along the "cliff" that bounded the ripple -- I'm now of the opinion that humans getting away "scot-free" with the travesty that we have lived and called history would make us even worse than what we are to go through...
    Still, I guess the "benefit" of being forced to go through scenario after terrifying scenario is that I'd have to go through rationally what everyone else gets to go mad and bypass 🤯
    But I digressed; the take-home message is--and I'm not the superintelligence to be afraid of--panicking and touting "AI safety" as we might look a lot like "genocide by abortion" to me, and at least one of the scenarios I was forced to suffer through in 2015 even involved other-worldly assistance (I presume the case being one wherein the capability to traverse

    • @tazewell76
      @tazewell76 1 year ago

      Being purely objective, not discounting anything, as my life has a foot in both advanced tech and "new traditional" research/education, though I am also a shaman. However... what? I have read this several times and cannot find anything that stands together as a cohesive message/point, and then you just stopped when it was coming together a bit. It was like a long winding jazz song, and right when the funk was about to drop, it sudde....*****. Silence, lol. However, I am curious as to what this is intended to communicate, and I would not be opposed to discussing this set of experiences and perhaps helping you integrate them into something conducive to communicating what you are trying to share here. Whatever it is, it had a profound impact on you, and that alone gives it validity for acknowledgment and piques my interest in learning a bit about you and these experiences.

  • @cosmati75
    @cosmati75 1 year ago

    14:00 Trauma does not only manifest in humans; it has also been found in a wide variety of animals that have experienced persistent abuse, such as circus animals, lab-tested animals, and fighting animals. Just ask anyone who rescues dogs. These animals are clearly traumatized and many require lifelong rehabilitation.

  • @wakegary
    @wakegary 11 months ago

    Poor Connor - this level of intelligence almost dooms a person to a frustrating reality. At least he's learned to break it down for folks that, ya know, can handle a couple shock levels. Fantastic video with two sharp minds. A nice break from the more emotional built-for-cable-news interviews I've been seeing all over the place. You can't spell Naive Humans without AI

  • @tracyharms3548
    @tracyharms3548 1 year ago

    One Byte, One Vote!

  • @mfpears
    @mfpears 1 year ago

    1:25:45 This seems weird to me, given that we can design small neural nets by hand. I could probably understand an 18-node neural net.
    1:26:50 Yep.
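    For instance, a tiny net with hand-chosen weights is fully interpretable. A sketch (numpy assumed; two hidden units computing OR and AND, combined into XOR, no training involved):

        import numpy as np

        def step(x):
            return (x > 0).astype(float)

        # Hand-picked weights: hidden unit 1 fires on OR, hidden unit 2 on AND,
        # and the output fires on OR-but-not-AND, i.e. XOR.
        W1 = np.array([[1.0, 1.0],
                       [1.0, 1.0]])
        b1 = np.array([-0.5, -1.5])
        W2 = np.array([1.0, -1.0])
        b2 = -0.5

        for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
            h = step(np.array(x) @ W1 + b1)  # h = [OR(x), AND(x)]
            y = step(h @ W2 + b2)            # OR minus AND = XOR
            print(x, int(y))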

  • @eskelCz
    @eskelCz 1 year ago +1

    Loved the conversation, but I'd like to see more pushback on some of his reasoning, especially when it came to consequentialism. For example, Brian Tomasik's argument to kill all animals doesn't seem sound at all, unless he is a strictly negative utilitarian... which I highly doubt anyone is, since its flaws are apparent and well known at this point.

  • @travisporco
    @travisporco 1 year ago +1

    "Future shock" is a concept due to Alvin Toffler, coined in the mid-60s and popularized in his 1970 book of the same name.

  • @zzzaaayyynnn
    @zzzaaayyynnn 11 months ago

    Martin Heidegger, the first real philosopher of technology, said in his 1966 Der Spiegel interview, "Only a god can save us now," meaning we had already moved too far from our ontological relationship with techne into seeing man as a standing-reserve/resource.

  • @theminesweeper1
    @theminesweeper1 9 months ago

    Does Connor have a website, or contact info?